WorldWideScience

Sample records for two-point analysis showed

  1. Evaluation of the H-point standard additions method (HPSAM) and the generalized H-point standard additions method (GHPSAM) for the UV-analysis of two-component mixtures.

    Science.gov (United States)

    Hund, E; Massart, D L; Smeyers-Verbeke, J

    1999-10-01

    The H-point standard additions method (HPSAM) and two versions of the generalized H-point standard additions method (GHPSAM) are evaluated for the UV-analysis of two-component mixtures. Synthetic mixtures of anhydrous caffeine and phenazone as well as of atovaquone and proguanil hydrochloride were used. Furthermore, the method was applied to pharmaceutical formulations that contain these compounds as active drug substances. This paper shows both the difficulties that are related to the methods and the conditions by which acceptable results can be obtained.
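
    The record above does not spell out the calculation, so here is a minimal sketch of the usual H-point construction: the mixture is spiked with known amounts of the analyte standard, the absorbance is read at two wavelengths chosen so that the interferent contributes equally at both, and the two straight calibration lines are intersected. The abscissa of the intersection (−C_H) estimates the analyte concentration and the ordinate isolates the interferent signal. All concentrations and absorbances below are hypothetical.

```python
import numpy as np

# Hypothetical HPSAM data: absorbances of a two-component mixture measured at
# two wavelengths (chosen so the interferent absorbs equally at both) as a
# function of added standard-analyte concentration (arbitrary units).
c_added = np.array([0.0, 2.0, 4.0, 6.0, 8.0])
a_lambda1 = np.array([0.620, 0.733, 0.845, 0.957, 1.070])  # absorbance at wavelength 1
a_lambda2 = np.array([0.480, 0.522, 0.565, 0.607, 0.650])  # absorbance at wavelength 2

# Fit one straight calibration line per wavelength: A = b + m * c_added.
m1, b1 = np.polyfit(c_added, a_lambda1, 1)
m2, b2 = np.polyfit(c_added, a_lambda2, 1)

# The H-point is the intersection of the two calibration lines.
c_h = (b2 - b1) / (m1 - m2)   # abscissa; -c_h estimates the analyte concentration
a_h = b1 + m1 * c_h           # ordinate; analyte-independent interferent signal

print(f"H-point: c_H = {c_h:.3f}, A_H = {a_h:.3f}")
print(f"Estimated analyte concentration in the sample: {-c_h:.3f}")
```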

  2. Dark Energy Survey Year 1 Results: Methodology and Projections for Joint Analysis of Galaxy Clustering, Galaxy Lensing, and CMB Lensing Two-point Functions

    Energy Technology Data Exchange (ETDEWEB)

    Giannantonio, T.; et al.

    2018-02-14

    Optical imaging surveys measure both the galaxy density and the gravitational lensing-induced shear fields across the sky. Recently, the Dark Energy Survey (DES) collaboration used a joint fit to two-point correlations between these observables to place tight constraints on cosmology (DES Collaboration et al. 2017). In this work, we develop the methodology to extend the DES Collaboration et al. (2017) analysis to include cross-correlations of the optical survey observables with gravitational lensing of the cosmic microwave background (CMB) as measured by the South Pole Telescope (SPT) and Planck. Using simulated analyses, we show how the resulting set of five two-point functions increases the robustness of the cosmological constraints to systematic errors in galaxy lensing shear calibration. Additionally, we show that contamination of the SPT+Planck CMB lensing map by the thermal Sunyaev-Zel'dovich effect is a potentially large source of systematic error for two-point function analyses, but show that it can be reduced to acceptable levels in our analysis by masking clusters of galaxies and imposing angular scale cuts on the two-point functions. The methodology developed here will be applied to the analysis of data from the DES, the SPT, and Planck in a companion work.

  3. The massless two-loop two-point function

    International Nuclear Information System (INIS)

    Bierenbaum, I.; Weinzierl, S.

    2003-01-01

    We consider the massless two-loop two-point function with arbitrary powers of the propagators and derive a representation from which we can obtain the Laurent expansion to any desired order in the dimensional regularization parameter ε. As a side product, we show that in the Laurent expansion of the two-loop integral only rational numbers and multiple zeta values occur. Our method of calculation obtains the two-loop integral as a convolution product of two primitive one-loop integrals. We comment on the generalization of this product structure to higher loop integrals. (orig.)

  4. A New Numerical Algorithm for Two-Point Boundary Value Problems

    OpenAIRE

    Guo, Lihua; Wu, Boying; Zhang, Dazhi

    2014-01-01

    We present a new numerical algorithm for two-point boundary value problems. We first present the exact solution in the form of series and then prove that the n-term numerical solution converges uniformly to the exact solution. Furthermore, we establish the numerical stability and error analysis. The numerical results show the effectiveness of the proposed algorithm.
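
    The abstract does not state the algorithm itself, so the snippet below is not the authors' series method; it is only a generic illustration of the problem class, solving a simple linear two-point boundary value problem with SciPy's collocation solver and checking it against the known exact solution.

```python
import numpy as np
from scipy.integrate import solve_bvp

# Generic linear two-point BVP:  y'' + y = 0,  y(0) = 0,  y(1) = 1,
# whose exact solution is y(x) = sin(x) / sin(1).
def rhs(x, y):
    # y[0] = y, y[1] = y'
    return np.vstack([y[1], -y[0]])

def bc(ya, yb):
    # Residuals of the two boundary conditions.
    return np.array([ya[0] - 0.0, yb[0] - 1.0])

x = np.linspace(0.0, 1.0, 11)
y_init = np.zeros((2, x.size))            # trivial initial guess
sol = solve_bvp(rhs, bc, x, y_init)

x_fine = np.linspace(0.0, 1.0, 201)
err = np.max(np.abs(sol.sol(x_fine)[0] - np.sin(x_fine) / np.sin(1.0)))
print(f"converged: {sol.success}, max error vs exact solution: {err:.2e}")
```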

  5. Fast randomized point location without preprocessing in two- and three-dimensional Delaunay triangulations

    Energy Technology Data Exchange (ETDEWEB)

    Muecke, E.P.; Saias, I.; Zhu, B.

    1996-05-01

    This paper studies the point location problem in Delaunay triangulations without preprocessing and additional storage. The proposed procedure finds the query point simply by walking through the triangulation, after selecting a good starting point by random sampling. The analysis generalizes and extends a recent result for d = 2 dimensions by proving this procedure to take expected time close to O(n^{1/(d+1)}) for point location in Delaunay triangulations of n random points in d = 3 dimensions. Empirical results in both two and three dimensions show that this procedure is efficient in practice.
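
    A rough 2D sketch of the idea described above, assuming SciPy's Delaunay triangulation: "jump" to a simplex incident to the closest of a few randomly sampled sites, then "walk" across neighboring triangles guided by barycentric coordinates until the query point is enclosed. The paper's analysis concerns d = 3 and a specific sampling strategy; this is only illustrative.

```python
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(0)
pts = rng.random((2000, 2))
tri = Delaunay(pts)

def barycentric(tri, simplex, p):
    # Barycentric coordinates of p with respect to the given simplex.
    T = tri.transform[simplex]
    b = T[:2].dot(p - T[2])
    return np.append(b, 1.0 - b.sum())

def jump_and_walk(tri, p, n_samples=15, rng=rng):
    # "Jump": start from a simplex incident to the closest of a few random sites.
    cand = rng.integers(0, len(tri.points), n_samples)
    nearest = cand[np.argmin(np.linalg.norm(tri.points[cand] - p, axis=1))]
    simplex = tri.vertex_to_simplex[nearest]
    # "Walk": step across the facet opposite the most negative barycentric
    # coordinate until p lies inside the current simplex (with a safety cap).
    for _ in range(4 * len(tri.simplices)):
        b = barycentric(tri, simplex, p)
        if (b >= -1e-12).all():
            return simplex                      # containing triangle found
        simplex = tri.neighbors[simplex][np.argmin(b)]
        if simplex == -1:
            break                               # p lies outside the convex hull
    return -1

q = np.array([0.4, 0.7])
s = jump_and_walk(tri, q)
print(s, s == tri.find_simplex(q))              # cross-check with SciPy's locator
```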

  6. How two word-trained dogs integrate pointing and naming

    NARCIS (Netherlands)

    Grassmann, Susanne; Kaminski, Juliane; Tomasello, Michael

    Two word-trained dogs were presented with acts of reference in which a human pointed, named objects, or simultaneously did both. The question was whether these dogs would assume co-reference of pointing and naming and thus pick the pointed-to object. Results show that the dogs did indeed assume

  7. Two-dimensional spin-orbit Dirac point in monolayer HfGeTe

    Science.gov (United States)

    Guan, Shan; Liu, Ying; Yu, Zhi-Ming; Wang, Shan-Shan; Yao, Yugui; Yang, Shengyuan A.

    2017-10-01

    Dirac points in two-dimensional (2D) materials have been a fascinating subject of research, with graphene as the most prominent example. However, the Dirac points in existing 2D materials, including graphene, are vulnerable to spin-orbit coupling (SOC). Here, based on first-principles calculations and theoretical analysis, we propose a new family of stable 2D materials, the HfGeTe-family monolayers, which host so-called spin-orbit Dirac points (SDPs) close to the Fermi level. These Dirac points are special in that they are formed only under significant SOC, hence they are intrinsically robust against SOC. We show that the existence of a pair of SDPs is dictated by the nonsymmorphic space group symmetry of the system and is very robust under various types of lattice strain. The energy, the dispersion, and the valley occupation around the Dirac points can be effectively tuned by strain. We construct a low-energy effective model to characterize the Dirac fermions around the SDPs. Furthermore, we find that the material is simultaneously a 2D Z2 topological metal, which possesses a nontrivial Z2 invariant in the bulk and spin-helical edge states on the boundary. From the calculated exfoliation energies and mechanical properties, we show that these materials can be readily obtained in experiment from the existing bulk materials. Our result reveals HfGeTe-family monolayers as a promising platform for exploring spin-orbit Dirac fermions and topological phases in two dimensions.

  8. Second feature of the matter two-point function

    Science.gov (United States)

    Tansella, Vittorio

    2018-05-01

    We point out the existence of a second feature in the matter two-point function, besides the acoustic peak, due to the baryon-baryon correlation in the early Universe and positioned at twice the distance of the peak. We discuss how the existence of this feature is implied by the well-known heuristic argument that explains the baryon bump in the correlation function. A standard χ2 analysis to estimate the detection significance of the second feature is mimicked. We conclude that, for realistic values of the baryon density, a SKA-like galaxy survey will not be able to detect this feature with standard correlation function analysis.

  9. Detecting outliers and/or leverage points: a robust two-stage procedure with bootstrap cut-off points

    Directory of Open Access Journals (Sweden)

    Ettore Marubini

    2014-01-01

    This paper presents a robust two-stage procedure for identification of outlying observations in regression analysis. The exploratory stage identifies leverage points and vertical outliers through a robust distance estimator based on the Minimum Covariance Determinant (MCD). After deletion of these points, the confirmatory stage carries out an Ordinary Least Squares (OLS) analysis on the remaining subset of data and investigates the effect of adding back in the previously deleted observations. Cut-off points pertinent to different diagnostics are generated by bootstrapping and the cases are definitely labelled as good-leverage, bad-leverage, vertical outliers and typical cases. The procedure is applied to four examples.
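
    A loose sketch of the two-stage logic on synthetic data, assuming scikit-learn's MinCovDet for the robust distances: the exploratory stage flags leverage points and vertical outliers with MCD-based distances (a χ² quantile replaces the paper's bootstrap cut-offs here, for brevity), and the confirmatory stage refits OLS on the retained cases and re-examines the flagged ones.

```python
import numpy as np
from scipy.stats import chi2
from sklearn.covariance import MinCovDet

rng = np.random.default_rng(1)

# Synthetic regression data with a few bad-leverage points.
n = 100
x = rng.normal(size=(n, 1))
y = 2.0 + 3.0 * x[:, 0] + rng.normal(scale=0.5, size=n)
x[:5] += 6.0      # leverage points...
y[:5] -= 10.0     # ...that are also vertical outliers

# Exploratory stage: robust (MCD-based) distances in (x, y) space flag
# candidate outlying cases.  The paper derives cut-offs by bootstrap; a
# chi-square quantile is used here instead.
X = np.column_stack([x[:, 0], y])
mcd = MinCovDet(random_state=0).fit(X)
rd2 = mcd.mahalanobis(X)                         # squared robust distances
flagged = rd2 > chi2.ppf(0.975, df=X.shape[1])

# Confirmatory stage: OLS on the clean subset, then check each flagged point
# against that fit (here via its standardized residual).
A = np.column_stack([np.ones(n), x[:, 0]])
beta, *_ = np.linalg.lstsq(A[~flagged], y[~flagged], rcond=None)
resid = y - A @ beta
sigma = resid[~flagged].std(ddof=2)
print("flagged cases:", np.flatnonzero(flagged))
print("confirmed outliers:", np.flatnonzero(flagged & (np.abs(resid) > 3 * sigma)))
```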

  10. Two-sorted Point-Interval Temporal Logics

    DEFF Research Database (Denmark)

    Balbiani, Philippe; Goranko, Valentin; Sciavicco, Guido

    2011-01-01

    There are two natural and well-studied approaches to temporal ontology and reasoning: point-based and interval-based. Usually, interval-based temporal reasoning deals with points as particular, duration-less intervals. Here we develop explicitly two-sorted point-interval temporal logical framework...... whereby time instants (points) and time periods (intervals) are considered on a par, and the perspective can shift between them within the formal discourse. We focus on fragments involving only modal operators that correspond to the inter-sort relations between points and intervals. We analyze...

  11. Prospective comparison of liver stiffness measurements between two point shear wave elastography methods: Virtual touch quantification and elastography point quantification

    Energy Technology Data Exchange (ETDEWEB)

    Yoo, Hyun Suk; Lee, Jeong Min; Yoon, Jeong Hee; Lee, Dong Ho; Chang, Won; Han, Joon Koo [Seoul National University Hospital, Seoul (Korea, Republic of)

    2016-09-15

    To prospectively compare technical success rate and reliable measurements of virtual touch quantification (VTQ) elastography and elastography point quantification (ElastPQ), and to correlate liver stiffness (LS) measurements obtained by the two elastography techniques. Our study included 85 patients, 80 of whom were previously diagnosed with chronic liver disease. The technical success rate and reliable measurements of the two kinds of point shear wave elastography (pSWE) techniques were compared by χ² analysis. LS values measured using the two techniques were compared and correlated via Wilcoxon signed-rank test, Spearman correlation coefficient, and 95% Bland-Altman limit of agreement. The intraobserver reproducibility of ElastPQ was determined by 95% Bland-Altman limit of agreement and intraclass correlation coefficient (ICC). The two pSWE techniques showed similar technical success rate (98.8% for VTQ vs. 95.3% for ElastPQ, p = 0.823) and reliable LS measurements (95.3% for VTQ vs. 90.6% for ElastPQ, p = 0.509). The mean LS measurements obtained by VTQ (1.71 ± 0.47 m/s) and ElastPQ (1.66 ± 0.41 m/s) were not significantly different (p = 0.209). The LS measurements obtained by the two techniques showed strong correlation (r = 0.820); in addition, the 95% limit of agreement of the two methods was 27.5% of the mean. Finally, the ICC of repeat ElastPQ measurements was 0.991. Virtual touch quantification and ElastPQ showed similar technical success rate and reliable measurements, with strongly correlated LS measurements. However, the two methods are not interchangeable due to the large limit of agreement.
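
    A brief sketch of the comparison statistics named in the abstract (Wilcoxon signed-rank test, Spearman correlation, Bland-Altman limits of agreement), applied to hypothetical paired liver-stiffness values rather than the study's data:

```python
import numpy as np
from scipy.stats import wilcoxon, spearmanr

rng = np.random.default_rng(7)

# Hypothetical paired liver-stiffness measurements (m/s) from two pSWE methods.
vtq = rng.normal(1.7, 0.45, size=85).clip(0.8, 4.0)
elastpq = (vtq + rng.normal(0.0, 0.2, size=85)).clip(0.8, 4.0)

# Paired comparison and correlation of the two methods.
stat, p = wilcoxon(vtq, elastpq)
rho, _ = spearmanr(vtq, elastpq)

# Bland-Altman 95% limits of agreement, expressed relative to the mean value.
diff = vtq - elastpq
mean = (vtq + elastpq) / 2.0
loa = 1.96 * diff.std(ddof=1)
print(f"Wilcoxon p = {p:.3f}, Spearman rho = {rho:.3f}")
print(f"Bland-Altman: bias = {diff.mean():.3f} m/s, "
      f"95% LoA = +/-{loa:.3f} m/s ({100 * loa / mean.mean():.1f}% of the mean)")
```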

  12. Two-point correlation functions in inhomogeneous and anisotropic cosmologies

    International Nuclear Information System (INIS)

    Marcori, Oton H.; Pereira, Thiago S.

    2017-01-01

    Two-point correlation functions are ubiquitous tools of modern cosmology, appearing in disparate topics ranging from cosmological inflation to late-time astrophysics. When the background spacetime is maximally symmetric, invariance arguments can be used to fix the functional dependence of this function as the invariant distance between any two points. In this paper we introduce a novel formalism which fixes this functional dependence directly from the isometries of the background metric, thus allowing one to quickly assess the overall features of Gaussian correlators without resorting to the full machinery of perturbation theory. As an application we construct the CMB temperature correlation function in one inhomogeneous (namely, an off-center LTB model) and two spatially flat and anisotropic (Bianchi) universes, and derive their covariance matrices in the limit of almost Friedmannian symmetry. We show how the method can be extended to arbitrary N-point correlation functions and illustrate its use by constructing three-point correlation functions in some simple geometries.

  13. Two-point correlation functions in inhomogeneous and anisotropic cosmologies

    Energy Technology Data Exchange (ETDEWEB)

    Marcori, Oton H.; Pereira, Thiago S., E-mail: otonhm@hotmail.com, E-mail: tspereira@uel.br [Departamento de Física, Universidade Estadual de Londrina, 86057-970, Londrina PR (Brazil)

    2017-02-01

    Two-point correlation functions are ubiquitous tools of modern cosmology, appearing in disparate topics ranging from cosmological inflation to late-time astrophysics. When the background spacetime is maximally symmetric, invariance arguments can be used to fix the functional dependence of this function as the invariant distance between any two points. In this paper we introduce a novel formalism which fixes this functional dependence directly from the isometries of the background metric, thus allowing one to quickly assess the overall features of Gaussian correlators without resorting to the full machinery of perturbation theory. As an application we construct the CMB temperature correlation function in one inhomogeneous (namely, an off-center LTB model) and two spatially flat and anisotropic (Bianchi) universes, and derive their covariance matrices in the limit of almost Friedmannian symmetry. We show how the method can be extended to arbitrary N-point correlation functions and illustrate its use by constructing three-point correlation functions in some simple geometries.

  14. Periodic Points in Genus Two: Holomorphic Sections over Hilbert Modular Varieties, Teichmuller Dynamics, and Billiards

    OpenAIRE

    Apisa, Paul

    2017-01-01

    We show that all GL(2, R)-equivariant point markings over orbit closures of primitive genus two translation surfaces arise from marking pairs of points exchanged by the hyperelliptic involution, Weierstrass points, or the golden points in the golden eigenform locus. As corollaries, we classify the holomorphically varying families of points over orbifold covers of genus two Hilbert modular surfaces, solve the finite blocking problem on genus two translation surfaces, and show that there is at ...

  15. Two-craft Coulomb formation study about circular orbits and libration points

    Science.gov (United States)

    Inampudi, Ravi Kishore

    This dissertation investigates the dynamics and control of a two-craft Coulomb formation in circular orbits and at libration points; it addresses relative equilibria, stability and optimal reconfigurations of such formations. The relative equilibria of a two-craft tether formation connected by line-of-sight elastic forces moving in circular orbits and at libration points are investigated. In circular Earth orbits and Earth-Moon libration points, the radial, along-track, and orbit normal great circle equilibria conditions are found. An example of modeling the tether force using Coulomb force is discussed. Furthermore, the non-great-circle equilibria conditions for a two-spacecraft tether structure in circular Earth orbit and at collinear libration points are developed. Then the linearized dynamics and stability analysis of a 2-craft Coulomb formation at Earth-Moon libration points are studied. For orbit-radial equilibrium, Coulomb forces control the relative distance between the two satellites. The gravity gradient torques on the formation due to the two planets help stabilize the formation. Similar analysis is performed for along-track and orbit-normal relative equilibrium configurations. Where necessary, the craft use a hybrid thrusting-electrostatic actuation system. The two-craft dynamics at the libration points provide a general framework with circular Earth orbit dynamics forming a special case. In the presence of differential solar drag perturbations, a Lyapunov feedback controller is designed to stabilize a radial equilibrium, two-craft Coulomb formation at collinear libration points. The second part of the thesis investigates optimal reconfigurations of two-craft Coulomb formations in circular Earth orbits by applying nonlinear optimal control techniques. The objective of these reconfigurations is to maneuver the two-craft formation between two charged equilibria configurations. The reconfiguration of spacecraft is posed as an optimization problem using the

  16. Parametric statistical change point analysis

    CERN Document Server

    Chen, Jie

    2000-01-01

    This work is an in-depth study of the change point problem from a general point of view and a further examination of change point analysis of the most commonly used statistical models. Change point problems are encountered in such disciplines as economics, finance, medicine, psychology, signal processing, and geology, to mention only several. The exposition is clear and systematic, with a great deal of introductory material included. Different models are presented in each chapter, including gamma and exponential models, rarely examined thus far in the literature. Other models covered in detail are the multivariate normal, univariate normal, regression, and discrete models. Extensive examples throughout the text emphasize key concepts and different methodologies are used, namely the likelihood ratio criterion, and the Bayesian and information criterion approaches. A comprehensive bibliography and two indices complete the study.

  17. Mistakes and Pitfalls Associated with Two-Point Compression Ultrasound for Deep Vein Thrombosis

    Directory of Open Access Journals (Sweden)

    Tony Zitek, MD

    2016-03-01

    Introduction: Two-point compression ultrasound is purportedly a simple and accurate means to diagnose proximal lower extremity deep vein thrombosis (DVT), but the pitfalls of this technique have not been fully elucidated. The objective of this study is to determine the accuracy of emergency medicine resident-performed two-point compression ultrasound, and to determine what technical errors are commonly made by novice ultrasonographers using this technique. Methods: This was a prospective diagnostic test assessment of a convenience sample of adult emergency department (ED) patients suspected of having a lower extremity DVT. After brief training on the technique, residents performed two-point compression ultrasounds on enrolled patients. Subsequently a radiology department ultrasound was performed and used as the gold standard. Residents were instructed to save videos of their ultrasounds for technical analysis. Results: Overall, 288 two-point compression ultrasound studies were performed. There were 28 cases that were deemed to be positive for DVT by radiology ultrasound. Among these 28, 16 were identified by the residents with two-point compression. Among the 260 cases deemed to be negative for DVT by radiology ultrasound, 10 were thought to be positive by the residents using two-point compression. This led to a sensitivity of 57.1% (95% CI [38.8-75.5]) and a specificity of 96.1% (95% CI [93.8-98.5]) for resident-performed two-point compression ultrasound. This corresponds to a positive predictive value of 61.5% (95% CI [42.8-80.2]) and a negative predictive value of 95.4% (95% CI [92.9-98.0]). The positive likelihood ratio is 14.9 (95% CI [7.5-29.5]) and the negative likelihood ratio is 0.45 (95% CI [0.29-0.68]). Video analysis revealed that in four cases the resident did not identify a DVT because the thrombus was isolated to the superior femoral vein (SFV), which is not evaluated by two-point compression. Moreover, the video analysis revealed that the
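
    The reported accuracy statistics follow directly from the 2x2 counts given in the abstract; the short check below recomputes them (16 true positives, 12 false negatives, 10 false positives, 250 true negatives).

```python
# Recomputing the diagnostic-accuracy statistics from the 2x2 counts reported
# in the abstract: 28 radiology-positive cases, 16 of them detected; 260
# radiology-negative cases, 10 of them called positive by residents.
tp, fn = 16, 28 - 16
fp, tn = 10, 260 - 10

sens = tp / (tp + fn)
spec = tn / (tn + fp)
ppv = tp / (tp + fp)
npv = tn / (tn + fn)
lr_pos = sens / (1 - spec)
lr_neg = (1 - sens) / spec

print(f"sensitivity = {sens:.1%}, specificity = {spec:.1%}")
print(f"PPV = {ppv:.1%}, NPV = {npv:.1%}")
print(f"LR+ = {lr_pos:.1f}, LR- = {lr_neg:.2f}")
```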

  18. Two-point entanglement near a quantum phase transition

    International Nuclear Information System (INIS)

    Chen, Han-Dong

    2007-01-01

    In this work, we study the two-point entanglement S(i, j), which measures the entanglement between two separated degrees of freedom (i, j) and the rest of the system, near a quantum phase transition. Away from the critical point, S(i, j) saturates with a characteristic length scale ξ_E as the distance |i - j| increases. The entanglement length ξ_E agrees with the correlation length. The universality and finite size scaling of entanglement are demonstrated in a class of exactly solvable one-dimensional spin models. By connecting the two-point entanglement to correlation functions in the long range limit, we argue that the prediction power of a two-point entanglement is universal as long as the two involved points are separated far enough

  19. Interior point algorithms theory and analysis

    CERN Document Server

    Ye, Yinyu

    2011-01-01

    The first comprehensive review of the theory and practice of one of today's most powerful optimization techniques. The explosive growth of research into and development of interior point algorithms over the past two decades has significantly improved the complexity of linear programming and yielded some of today's most sophisticated computing techniques. This book offers a comprehensive and thorough treatment of the theory, analysis, and implementation of this powerful computational tool. Interior Point Algorithms provides detailed coverage of all basic and advanced aspects of the subject.

  20. IMAGE-PLANE ANALYSIS OF n-POINT-MASS LENS CRITICAL CURVES AND CAUSTICS

    Energy Technology Data Exchange (ETDEWEB)

    Danek, Kamil; Heyrovský, David, E-mail: kamil.danek@utf.mff.cuni.cz, E-mail: heyrovsky@utf.mff.cuni.cz [Institute of Theoretical Physics, Faculty of Mathematics and Physics, Charles University in Prague (Czech Republic)

    2015-06-10

    The interpretation of gravitational microlensing events caused by planetary systems or multiple stars is based on the n-point-mass lens model. The first planets detected by microlensing were well described by the two-point-mass model of a star with one planet. By the end of 2014, four events involving three-point-mass lenses had been announced. Two of the lenses were stars with two planetary companions each; two were binary stars with a planet orbiting one component. While the two-point-mass model is well understood, the same cannot be said for lenses with three or more components. Even the range of possible critical-curve topologies and caustic geometries of the three-point-mass lens remains unknown. In this paper we provide new tools for mapping the critical-curve topology and caustic cusp number in the parameter space of n-point-mass lenses. We perform our analysis in the image plane of the lens. We show that all contours of the Jacobian are critical curves of re-scaled versions of the lens configuration. Utilizing this property further, we introduce the cusp curve to identify cusp-image positions on all contours simultaneously. In order to track cusp-number changes in caustic metamorphoses, we define the morph curve, which pinpoints the positions of metamorphosis-point images along the cusp curve. We demonstrate the usage of both curves on simple two- and three-point-mass lens examples. For the three simplest caustic metamorphoses we illustrate the local structure of the image and source planes.
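
    As a minimal illustration of working in the image plane, the sketch below evaluates the Jacobian determinant of a two-point-mass lens mapping on a grid and draws its zero contour, which is the critical curve; the cusp and morph curves introduced in the paper are not reproduced here. The mass fractions and separation are arbitrary choices.

```python
import numpy as np
import matplotlib.pyplot as plt

# Equal-mass binary lens in complex notation: positions z_i, mass fractions m_i.
z_lens = np.array([-0.5, 0.5])        # lens positions on the real axis
m = np.array([0.5, 0.5])              # mass fractions (sum to 1)

# Image-plane grid.
x, y = np.meshgrid(np.linspace(-2, 2, 800), np.linspace(-2, 2, 800))
z = x + 1j * y

# For point-mass lenses the Jacobian determinant of the lens mapping is
# det J = 1 - |sum_i m_i / (z - z_i)^2|^2; critical curves are its zero contours.
shear = sum(mi / (z - zi) ** 2 for mi, zi in zip(m, z_lens))
detJ = 1.0 - np.abs(shear) ** 2

plt.contour(x, y, detJ, levels=[0.0], colors="k")   # critical curves
plt.plot(z_lens, np.zeros_like(z_lens), "r+")       # lens positions
plt.gca().set_aspect("equal")
plt.title("Critical curves of a two-point-mass lens (sketch)")
plt.show()
```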

  1. Percolation analysis for cosmic web with discrete points

    Science.gov (United States)

    Zhang, Jiajun; Cheng, Dalong; Chu, Ming-Chung

    2018-01-01

    Percolation analysis has long been used to quantify the connectivity of the cosmic web. Most of the previous work is based on density fields on grids. By smoothing into fields, we lose information about galaxy properties like shape or luminosity. The lack of mathematical modeling also limits our understanding for the percolation analysis. To overcome these difficulties, we have studied percolation analysis based on discrete points. Using a friends-of-friends (FoF) algorithm, we generate the S-b_b relation, between the fractional mass of the largest connected group (S) and the FoF linking length (b_b). We propose a new model, the probability cloud cluster expansion theory, to relate the S-b_b relation with correlation functions. We show that the S-b_b relation reflects a combination of all orders of correlation functions. Using N-body simulation, we find that the S-b_b relation is robust against redshift distortion and incompleteness in observation. From the Bolshoi simulation, with halo abundance matching (HAM), we have generated a mock galaxy catalog. Good matching of the projected two-point correlation function with observation is confirmed. However, comparing the mock catalog with the latest galaxy catalog from Sloan Digital Sky Survey (SDSS) Data Release (DR) 12, we have found significant differences in their S-b_b relations. This indicates that the mock galaxy catalog cannot accurately retain higher-order correlation functions than the two-point correlation function, which reveals the limit of the HAM method. As a new measurement, the S-b_b relation is applicable to a wide range of data types, fast to compute, and robust against redshift distortion and incompleteness and contains information of all orders of correlation functions.
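
    A minimal sketch of how an S-b_b curve can be measured from a discrete point set, assuming a KD-tree friends-of-friends linking and equal point masses; the toy point set below is uniform random, so it only illustrates the measurement, not cosmic-web clustering.

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import connected_components

rng = np.random.default_rng(3)
pts = rng.random((5000, 3))                      # toy point set in a unit box
mean_sep = len(pts) ** (-1.0 / 3.0)              # mean inter-point separation

def largest_group_fraction(points, b):
    """Fraction of points in the largest friends-of-friends group for linking length b."""
    tree = cKDTree(points)
    pairs = tree.query_pairs(b, output_type="ndarray")
    n = len(points)
    adj = coo_matrix((np.ones(len(pairs)), (pairs[:, 0], pairs[:, 1])), shape=(n, n))
    _, labels = connected_components(adj, directed=False)
    return np.bincount(labels).max() / n

# S-b_b relation: S rises steeply across the percolation transition.
for bb in [0.2, 0.4, 0.6, 0.8, 1.0]:
    S = largest_group_fraction(pts, bb * mean_sep)
    print(f"b_b = {bb:.1f} mean separations -> S = {S:.3f}")
```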

  2. Numerical analysis on pump turbine runaway points

    International Nuclear Information System (INIS)

    Guo, L; Liu, J T; Wang, L Q; Jiao, L; Li, Z F

    2012-01-01

    To investigate the character of pump turbine runaway points at different guide vane openings, a hydraulic model was established based on a pumped storage power station. The RNG k-ε turbulence model and the SIMPLEC algorithm were used to simulate the internal flow fields. The simulation results were compared with test data, and good agreement was obtained between the experimental data and the CFD results. Based on this model, an internal flow analysis was carried out. The results show that when the pump turbine ran at the runaway speed, many vortices appeared in the flow passage of the runner. These vortices could be observed regardless of the guide vane opening and are an important source of energy loss in the runaway condition. The pressures on the two sides of the runner blades were almost the same, so the runner power was very low. The high speed induced a large centrifugal force, and the small guide vane opening gave the water velocity a large tangential component, so an obvious water ring could be observed between the runner blades and the guide vanes at small openings. This ring disappeared when the opening was larger than 20°. These conclusions provide a theoretical basis for the analysis and simulation of pump turbine runaway points.

  3. Change Analysis in Structural Laser Scanning Point Clouds: The Baseline Method.

    Science.gov (United States)

    Shen, Yueqian; Lindenbergh, Roderik; Wang, Jinhu

    2016-12-24

    A method is introduced for detecting changes from point clouds that avoids registration. For many applications, changes are detected between two scans of the same scene obtained at different times. Traditionally, these scans are aligned to a common coordinate system having the disadvantage that this registration step introduces additional errors. In addition, registration requires stable targets or features. To avoid these issues, we propose a change detection method based on so-called baselines. Baselines connect feature points within one scan. To analyze changes, baselines connecting corresponding points in two scans are compared. As feature points either targets or virtual points corresponding to some reconstructable feature in the scene are used. The new method is implemented on two scans sampling a masonry laboratory building before and after seismic testing, that resulted in damages in the order of several centimeters. The centres of the bricks of the laboratory building are automatically extracted to serve as virtual points. Baselines connecting virtual points and/or target points are extracted and compared with respect to a suitable structural coordinate system. Changes detected from the baseline analysis are compared to a traditional cloud to cloud change analysis demonstrating the potential of the new method for structural analysis.
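
    A small sketch of the baseline comparison on synthetic feature points: baselines are formed within each epoch, so the two scans never need to be registered, and per-baseline length changes localize the deformation. The point coordinates and displacements below are invented.

```python
import numpy as np
from scipy.spatial.distance import pdist
from itertools import combinations

rng = np.random.default_rng(5)

# Synthetic corresponding feature points (e.g., brick centres) in two epochs;
# epoch 1 contains a few centimetre-level displacements.
epoch0 = rng.random((30, 3)) * 5.0
epoch1 = epoch0.copy()
epoch1[:4] += rng.normal(scale=0.03, size=(4, 3))     # ~3 cm of movement

# Baselines are distances between feature points *within* one epoch, so no
# registration between the two scans is needed; only their lengths are compared.
d0, d1 = pdist(epoch0), pdist(epoch1)
change = d1 - d0

pairs = list(combinations(range(len(epoch0)), 2))     # same ordering as pdist
worst = np.argsort(np.abs(change))[-5:][::-1]
for k in worst:
    i, j = pairs[k]
    print(f"baseline {i:2d}-{j:2d}: length change {1000 * change[k]:+.1f} mm")
```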

  4. Change Analysis in Structural Laser Scanning Point Clouds: The Baseline Method

    Directory of Open Access Journals (Sweden)

    Yueqian Shen

    2016-12-01

    A method is introduced for detecting changes from point clouds that avoids registration. For many applications, changes are detected between two scans of the same scene obtained at different times. Traditionally, these scans are aligned to a common coordinate system having the disadvantage that this registration step introduces additional errors. In addition, registration requires stable targets or features. To avoid these issues, we propose a change detection method based on so-called baselines. Baselines connect feature points within one scan. To analyze changes, baselines connecting corresponding points in two scans are compared. As feature points either targets or virtual points corresponding to some reconstructable feature in the scene are used. The new method is implemented on two scans sampling a masonry laboratory building before and after seismic testing, that resulted in damages in the order of several centimeters. The centres of the bricks of the laboratory building are automatically extracted to serve as virtual points. Baselines connecting virtual points and/or target points are extracted and compared with respect to a suitable structural coordinate system. Changes detected from the baseline analysis are compared to a traditional cloud to cloud change analysis demonstrating the potential of the new method for structural analysis.

  5. Growth Curve Analysis and Change-Points Detection in Extremes

    KAUST Repository

    Meng, Rui

    2016-05-15

    The thesis consists of two coherent projects. The first project presents the results of evaluating salinity tolerance in barley using growth curve analysis where different growth trajectories are observed within barley families. The study of salinity tolerance in plants is crucial to understanding plant growth and productivity. Because fully-automated smarthouses with conveyor systems allow non-destructive and high-throughput phenotyping of a large number of plants, it is now possible to apply advanced statistical tools to analyze daily measurements and to study salinity tolerance. To compare different growth patterns of barley variates, we use functional data analysis techniques to analyze the daily projected shoot areas. In particular, we apply the curve registration method to align all the curves from the same barley family in order to summarize the family-wise features. We also illustrate how to use statistical modeling to account for spatial variation in microclimate in smarthouses and for temporal variation across runs, which is crucial for identifying traits of the barley variates. In our analysis, we show that the concentrations of sodium and potassium in leaves are negatively correlated, and their interactions are associated with the degree of salinity tolerance. The second project studies change-points detection methods in extremes when multiple time series data are available. Motivated by the scientific question of whether the chances to experience extreme weather are different in different seasons of a year, we develop a change-points detection model to study changes in extremes or in the tail of a distribution. Most existing models identify seasons from multiple yearly time series assuming a season or a change-point location remains exactly the same across years. In this work, we propose a random effect model that allows the change-point to vary from year to year, following a given distribution. Both parametric and nonparametric methods are developed

  6. Human Body 3D Posture Estimation Using Significant Points and Two Cameras

    Science.gov (United States)

    Juang, Chia-Feng; Chen, Teng-Chang; Du, Wei-Chin

    2014-01-01

    This paper proposes a three-dimensional (3D) human posture estimation system that locates 3D significant body points based on 2D body contours extracted from two cameras without using any depth sensors. The 3D significant body points that are located by this system include the head, the center of the body, the tips of the feet, the tips of the hands, the elbows, and the knees. First, a linear support vector machine- (SVM-) based segmentation method is proposed to distinguish the human body from the background in red, green, and blue (RGB) color space. The SVM-based segmentation method uses not only normalized color differences but also included angle between pixels in the current frame and the background in order to reduce shadow influence. After segmentation, 2D significant points in each of the two extracted images are located. A significant point volume matching (SPVM) method is then proposed to reconstruct the 3D significant body point locations by using 2D posture estimation results. Experimental results show that the proposed SVM-based segmentation method shows better performance than other gray level- and RGB-based segmentation approaches. This paper also shows the effectiveness of the 3D posture estimation results in different postures. PMID:24883422

  7. Two-Point Incremental Forming with Partial Die: Theory and Experimentation

    Science.gov (United States)

    Silva, M. B.; Martins, P. A. F.

    2013-04-01

    This paper proposes a new level of understanding of two-point incremental forming (TPIF) with partial die by means of a combined theoretical and experimental investigation. The theoretical developments include an innovative extension of the analytical model for rotational symmetric single point incremental forming (SPIF), originally developed by the authors, to address the influence of the major operating parameters of TPIF and to successfully explain the differences in formability between SPIF and TPIF. The experimental work comprised the mechanical characterization of the material and the determination of its formability limits at necking and fracture by means of circle grid analysis and benchmark incremental sheet forming tests. Results show the adequacy of the proposed analytical model to handle the deformation mechanics of SPIF and TPIF with partial die and demonstrate that neck formation is suppressed in TPIF, so that traditional forming limit curves are inapplicable to describe failure and must be replaced by fracture forming limits derived from ductile damage mechanics. The overall geometric accuracy of sheet metal parts produced by TPIF with partial die is found to be better than that of parts fabricated by SPIF due to smaller elastic recovery upon unloading.

  8. Theoretical analysis of two ACO approaches for the traveling salesman problem

    DEFF Research Database (Denmark)

    Kötzing, Timo; Neumann, Frank; Röglin, Heiko

    2012-01-01

    Bioinspired algorithms, such as evolutionary algorithms and ant colony optimization, are widely used for different combinatorial optimization problems. These algorithms rely heavily on the use of randomness and are hard to understand from a theoretical point of view. This paper contributes...... to the theoretical analysis of ant colony optimization and studies this type of algorithm on one of the most prominent combinatorial optimization problems, namely the traveling salesperson problem (TSP). We present a new construction graph and show that it has a stronger local property than one commonly used...... for constructing solutions of the TSP. The rigorous runtime analysis for two ant colony optimization algorithms, based on these two construction procedures, shows that they lead to good approximation in expected polynomial time on random instances. Furthermore, we point out in which situations our algorithms get...

  9. A two-point diagnostic for the H II galaxy Hubble diagram

    Science.gov (United States)

    Leaf, Kyle; Melia, Fulvio

    2018-03-01

    A previous analysis of starburst-dominated H II galaxies and H II regions has demonstrated a statistically significant preference for the Friedmann-Robertson-Walker cosmology with zero active mass, known as the R_h = ct universe, over Λ cold dark matter (ΛCDM) and its related dark-matter parametrizations. In this paper, we employ a two-point diagnostic with these data to present a complementary statistical comparison of R_h = ct with Planck ΛCDM. Our two-point diagnostic compares, in a pairwise fashion, the difference between the distance modulus measured at two redshifts with that predicted by each cosmology. Our results support the conclusion drawn by a previous comparative analysis demonstrating that R_h = ct is statistically preferred over Planck ΛCDM. But we also find that the reported errors in the H II measurements may not be purely Gaussian, perhaps due to a partial contamination by non-Gaussian systematic effects. The use of H II galaxies and H II regions as standard candles may be improved even further with a better handling of the systematics in these sources.
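
    A rough sketch of the two-point diagnostic for a single redshift pair, assuming the standard flat-ΛCDM luminosity distance and d_L = (1+z)(c/H0) ln(1+z) for the R_h = ct universe; the "measured" distance moduli below are placeholders, not H II-galaxy data.

```python
import numpy as np
from scipy.integrate import quad

C_KM_S, H0 = 299792.458, 70.0          # speed of light [km/s], H0 [km/s/Mpc]

def dl_lcdm(z, om=0.3):
    # Luminosity distance in flat LCDM [Mpc].
    integral, _ = quad(lambda zp: 1.0 / np.sqrt(om * (1 + zp) ** 3 + 1 - om), 0.0, z)
    return (1 + z) * (C_KM_S / H0) * integral

def dl_rhct(z):
    # Luminosity distance in the R_h = ct universe [Mpc].
    return (1 + z) * (C_KM_S / H0) * np.log(1 + z)

def mu(dl_mpc):
    return 5.0 * np.log10(dl_mpc) + 25.0    # distance modulus for d_L in Mpc

# Two-point diagnostic for one hypothetical pair of redshifts: the measured
# difference in distance modulus is compared with each model's prediction.
z1, z2 = 0.5, 2.0
mu_obs_1, mu_obs_2 = 42.3, 45.9             # placeholder measured moduli
delta_obs = mu_obs_2 - mu_obs_1
for name, dl in [("LCDM", dl_lcdm), ("Rh=ct", dl_rhct)]:
    delta_model = mu(dl(z2)) - mu(dl(z1))
    print(f"{name:6s}: predicted Delta-mu = {delta_model:.3f}, "
          f"residual = {delta_obs - delta_model:+.3f}")
```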

  10. Do acupuncture points exist?

    International Nuclear Information System (INIS)

    Yan Xiaohui; Zhang Xinyi; Liu Chenglin; Dang, Ruishan; Huang Yuying; He Wei; Ding Guanghong

    2009-01-01

    We used synchrotron x-ray fluorescence analysis to probe the distribution of four chemical elements in and around acupuncture points, two located in the forearm and two in the lower leg. Three of the four acupuncture points showed significantly elevated concentrations of elements Ca, Fe, Cu and Zn in relation to levels in the surrounding tissue, with similar elevation ratios for Cu and Fe. The mapped distribution of these elements implies that each acupuncture point seems to be elliptical with the long axis along the meridian. (note)

  11. Do acupuncture points exist?

    Energy Technology Data Exchange (ETDEWEB)

    Yan Xiaohui; Zhang Xinyi [Department of Physics, Surface Physics Laboratory (State Key Laboratory), and Synchrotron Radiation Research Center of Fudan University, Shanghai 200433 (China); Liu Chenglin [Physics Department of Yancheng Teachers' College, Yancheng 224002 (China); Dang, Ruishan [Second Military Medical University, Shanghai 200433 (China); Huang Yuying; He Wei [Beijing Synchrotron Radiation Facility, Institute of High Energy Physics, Chinese Academy of Sciences, Beijing 100039 (China); Ding Guanghong [Shanghai Research Center of Acupuncture and Meridian, Pudong, Shanghai 201203 (China)

    2009-05-07

    We used synchrotron x-ray fluorescence analysis to probe the distribution of four chemical elements in and around acupuncture points, two located in the forearm and two in the lower leg. Three of the four acupuncture points showed significantly elevated concentrations of elements Ca, Fe, Cu and Zn in relation to levels in the surrounding tissue, with similar elevation ratios for Cu and Fe. The mapped distribution of these elements implies that each acupuncture point seems to be elliptical with the long axis along the meridian. (note)

  12. Asymptotic Performance Analysis of Two-Way Relaying FSO Networks with Nonzero Boresight Pointing Errors Over Double-Generalized Gamma Fading Channels

    KAUST Repository

    Yang, Liang; Alouini, Mohamed-Slim; Ansari, Imran Shafique

    2018-01-01

    In this correspondence, an asymptotic performance analysis for two-way relaying free-space optical (FSO) communication systems with nonzero boresight pointing errors over double-generalized gamma fading channels is presented. Assuming amplify-and-forward (AF) relaying, two nodes having the FSO ability can communicate with each other through the optical links. With this setup, an approximate cumulative distribution function (CDF) expression for the overall signal-to-noise ratio (SNR) is presented. With this statistical distribution, we derive the asymptotic analytical results for the outage probability and average bit error rate. Furthermore, we provide the asymptotic average capacity analysis for high SNR by using the moments-based method.

  13. Asymptotic Performance Analysis of Two-Way Relaying FSO Networks with Nonzero Boresight Pointing Errors Over Double-Generalized Gamma Fading Channels

    KAUST Repository

    Yang, Liang

    2018-05-07

    In this correspondence, an asymptotic performance analysis for two-way relaying free-space optical (FSO) communication systems with nonzero boresight pointing errors over double-generalized gamma fading channels is presented. Assuming amplify-and-forward (AF) relaying, two nodes having the FSO ability can communicate with each other through the optical links. With this setup, an approximate cumulative distribution function (CDF) expression for the overall signal-to-noise ratio (SNR) is presented. With this statistical distribution, we derive the asymptotic analytical results for the outage probability and average bit error rate. Furthermore, we provide the asymptotic average capacity analysis for high SNR by using the moments-based method.

  14. Gene expression analysis of two extensively drug-resistant tuberculosis isolates show that two-component response systems enhance drug resistance.

    Science.gov (United States)

    Yu, Guohua; Cui, Zhenling; Sun, Xian; Peng, Jinfu; Jiang, Jun; Wu, Wei; Huang, Wenhua; Chu, Kaili; Zhang, Lu; Ge, Baoxue; Li, Yao

    2015-05-01

    Global analysis of expression profiles using DNA microarrays was performed between a reference strain H37Rv and two clinical extensively drug-resistant isolates in response to three anti-tuberculosis drug exposures (isoniazid, capreomycin, and rifampicin). A deep analysis was then conducted using a combination of genome sequences of the resistant isolates, resistance information, and related public microarray data. Certain known resistance-associated gene sets were significantly overrepresented in upregulated genes in the resistant isolates relative to that observed in H37Rv, which suggested a link between resistance and expression levels of particular genes. In addition, isoniazid and capreomycin response genes, but not rifampicin, either obtained from published works or our data, were highly consistent with the differentially expressed genes of resistant isolates compared to those of H37Rv, indicating a strong association between drug resistance of the isolates and genes differentially regulated by isoniazid and capreomycin exposures. Based on these results, 92 genes of the studied isolates were identified as candidate resistance genes, 10 of which are known resistance-related genes. Regulatory network analysis of candidate resistance genes using published networks and literature mining showed that three two-component regulatory systems and regulator CRP play significant roles in the resistance of the isolates by mediating the production of essential envelope components. Finally, drug sensitivity testing indicated strong correlations between expression levels of these regulatory genes and sensitivity to multiple anti-tuberculosis drugs in Mycobacterium tuberculosis. These findings may provide novel insights into the mechanism underlying the emergence and development of drug resistance in resistant tuberculosis isolates and useful clues for further studies on this issue. Copyright © 2015 Elsevier Ltd. All rights reserved.

  15. Gauge-fixing parameter dependence of two-point gauge-variant correlation functions

    International Nuclear Information System (INIS)

    Zhai, C.

    1996-01-01

    The gauge-fixing parameter ξ dependence of two-point gauge-variant correlation functions is studied for QED and QCD. We show that, in three Euclidean dimensions, or for four-dimensional thermal gauge theories, the usual procedure of getting a general covariant gauge-fixing term by averaging over a class of covariant gauge-fixing conditions leads to a nontrivial gauge-fixing parameter dependence in gauge-variant two-point correlation functions (e.g., fermion propagators). This nontrivial gauge-fixing parameter dependence modifies the large-distance behavior of the two-point correlation functions by introducing additional exponentially decaying factors. These factors are the origin of the gauge dependence encountered in some perturbative evaluations of the damping rates and the static chromoelectric screening length in a general covariant gauge. To avoid this modification of the long-distance behavior introduced by performing the average over a class of covariant gauge-fixing conditions, one can either choose a vanishing gauge-fixing parameter or apply an unphysical infrared cutoff. copyright 1996 The American Physical Society

  16. NOTE: Do acupuncture points exist?

    Science.gov (United States)

    Yan, Xiaohui; Zhang, Xinyi; Liu, Chenglin; Dang, Ruishan; Huang, Yuying; He, Wei; Ding, Guanghong

    2009-05-01

    We used synchrotron x-ray fluorescence analysis to probe the distribution of four chemical elements in and around acupuncture points, two located in the forearm and two in the lower leg. Three of the four acupuncture points showed significantly elevated concentrations of elements Ca, Fe, Cu and Zn in relation to levels in the surrounding tissue, with similar elevation ratios for Cu and Fe. The mapped distribution of these elements implies that each acupuncture point seems to be elliptical with the long axis along the meridian.

  17. Dynamics of Two Point Vortices in an External Compressible Shear Flow

    Science.gov (United States)

    Vetchanin, Evgeny V.; Mamaev, Ivan S.

    2017-12-01

    This paper is concerned with a system of equations that describes the motion of two point vortices in a flow possessing constant uniform vorticity and perturbed by an acoustic wave. The system is shown to have both regular and chaotic regimes of motion. In addition, simple and chaotic attractors are found in the system. Attention is given to bifurcations of fixed points of a Poincaré map which lead to the appearance of these regimes. It is shown that, in the case where the total vortex strength changes, the "reversible pitch-fork" bifurcation is a typical scenario of emergence of asymptotically stable fixed and periodic points. As a result of this bifurcation, a saddle point, a stable and an unstable point of the same period emerge from an elliptic point of some period. By constructing and analyzing charts of dynamical regimes and bifurcation diagrams we show that a cascade of period-doubling bifurcations is a typical scenario of transition to chaos in the system under consideration.

  18. Two-point resistance of a resistor network embedded on a globe.

    Science.gov (United States)

    Tan, Zhi-Zhong; Essam, J W; Wu, F Y

    2014-07-01

    We consider the problem of two-point resistance in an (m-1) × n resistor network embedded on a globe, a geometry topologically equivalent to an m × n cobweb with its boundary collapsed into one single point. We deduce a concise formula for the resistance between any two nodes on the globe using a method of direct summation pioneered by one of us [Z.-Z. Tan, L. Zhou, and J. H. Yang, J. Phys. A: Math. Theor. 46, 195202 (2013)]. This method is contrasted with the Laplacian matrix approach formulated also by one of us [F. Y. Wu, J. Phys. A: Math. Gen. 37, 6653 (2004)], which is difficult to apply to the geometry of a globe. Our analysis gives the result in the form of a single summation.
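
    For contrast with the direct-summation result discussed above, here is a minimal sketch of the general Laplacian-matrix route for an arbitrary resistor network, using the pseudoinverse identity R_ab = L⁺_aa + L⁺_bb − 2 L⁺_ab; it is checked on a square of four unit resistors, not on the globe geometry of the paper.

```python
import numpy as np

def two_point_resistance(edges, n_nodes, a, b):
    """Effective resistance between nodes a and b of a resistor network.

    edges: list of (i, j, resistance) tuples.  Uses the Laplacian pseudoinverse:
    R_ab = L+_aa + L+_bb - 2 L+_ab.
    """
    L = np.zeros((n_nodes, n_nodes))
    for i, j, r in edges:
        g = 1.0 / r                          # conductance of this edge
        L[i, i] += g
        L[j, j] += g
        L[i, j] -= g
        L[j, i] -= g
    Lp = np.linalg.pinv(L)
    return Lp[a, a] + Lp[b, b] - 2.0 * Lp[a, b]

# Sanity check on a square of four unit resistors: opposite corners give 1 ohm
# (two 2-ohm paths in parallel).
square = [(0, 1, 1.0), (1, 2, 1.0), (2, 3, 1.0), (3, 0, 1.0)]
print(two_point_resistance(square, 4, 0, 2))   # -> 1.0
```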

  19. Point Cluster Analysis Using a 3D Voronoi Diagram with Applications in Point Cloud Segmentation

    Directory of Open Access Journals (Sweden)

    Shen Ying

    2015-08-01

    Three-dimensional (3D) point analysis and visualization is one of the most effective methods of point cluster detection and segmentation in geospatial datasets. However, serious scattering and clotting characteristics interfere with the visual detection of 3D point clusters. To overcome this problem, this study proposes the use of 3D Voronoi diagrams to analyze and visualize 3D points instead of the original data item. The proposed algorithm computes the cluster of 3D points by applying a set of 3D Voronoi cells to describe and quantify 3D points. The decompositions of point cloud of 3D models are guided by the 3D Voronoi cell parameters. The parameter values are mapped from the Voronoi cells to 3D points to show the spatial pattern and relationships; thus, a 3D point cluster pattern can be highlighted and easily recognized. To capture different cluster patterns, continuous progressive clusters and segmentations are tested. The 3D spatial relationship is shown to facilitate cluster detection. Furthermore, the generated segmentations of real 3D data cases are exploited to demonstrate the feasibility of our approach in detecting different spatial clusters for continuous point cloud segmentation.
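
    A minimal sketch of the basic quantity such an approach builds on: each point's 3D Voronoi cell, whose volume (computed here as the convex hull of the cell's vertices) is mapped back to the point as a local density descriptor; unbounded boundary cells are skipped. The clustering and segmentation steps of the paper are not reproduced.

```python
import numpy as np
from scipy.spatial import Voronoi, ConvexHull

rng = np.random.default_rng(2)
pts = rng.random((400, 3))                  # toy 3D point cloud in a unit cube
vor = Voronoi(pts)

# Volume of each (bounded) Voronoi cell, used as a per-point descriptor:
# small cells indicate locally dense regions, i.e. candidate clusters.
volumes = np.full(len(pts), np.nan)
for p, region_idx in enumerate(vor.point_region):
    region = vor.regions[region_idx]
    if -1 in region or len(region) == 0:    # unbounded cell on the hull: skip
        continue
    volumes[p] = ConvexHull(vor.vertices[region]).volume

dense = np.nanargmin(volumes)
print(f"bounded cells: {np.isfinite(volumes).sum()} of {len(pts)}")
print(f"densest point: index {dense}, cell volume {volumes[dense]:.2e}")
```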

  20. Quantum motion on two planes connected at one point

    International Nuclear Information System (INIS)

    Exner, P.; Seba, P.

    1986-01-01

    Free motion of a particle on the manifold which consists of two planes connected at one point is studied. The four-parameter family of admissible Hamiltonians is constructed by self-adjoint extensions of the free Hamiltonian with the singular point removed. The probability of penetration between the two parts of the configuration manifold is calculated. The results can be used as a model for quantum point-contact spectroscopy

  1. Fermion-induced quantum critical points.

    Science.gov (United States)

    Li, Zi-Xiang; Jiang, Yi-Fan; Jian, Shao-Kai; Yao, Hong

    2017-08-22

    A unified theory of quantum critical points beyond the conventional Landau-Ginzburg-Wilson paradigm remains unknown. According to the Landau cubic criterion, phase transitions should be first-order when cubic terms of order parameters are allowed by symmetry in the Landau-Ginzburg free energy. Here, from renormalization group analysis, we show that second-order quantum phase transitions can occur at such putatively first-order transitions in interacting two-dimensional Dirac semimetals. As such Landau-forbidden quantum critical points are induced by gapless fermions, we call them fermion-induced quantum critical points. We further introduce a microscopic model of SU(N) fermions on the honeycomb lattice featuring a transition between Dirac semimetals and Kekule valence bond solids. Remarkably, our large-scale sign-problem-free Majorana quantum Monte Carlo simulations show convincing evidence of fermion-induced quantum critical points for N = 2, 3, 4, 5 and 6, consistent with the renormalization group analysis. We finally discuss possible experimental realizations of the fermion-induced quantum critical points in graphene and graphene-like materials. Quantum phase transitions are governed by Landau-Ginzburg theory and the exceptions are rare. Here, Li et al. propose a type of Landau-forbidden quantum critical point induced by gapless fermions in two-dimensional Dirac semimetals.

  2. Dual-time-point FDG-PET/CT Imaging of Temporal Bone Chondroblastoma: A Report of Two Cases

    Directory of Open Access Journals (Sweden)

    Akira Toriihara

    2015-07-01

    Temporal bone chondroblastoma is an extremely rare benign bone tumor. We encountered two cases showing similar imaging findings on computed tomography (CT), magnetic resonance imaging (MRI), and dual-time-point 18F-fluorodeoxyglucose (18F-FDG) positron emission tomography (PET/CT). In both cases, CT images revealed temporal bone defects and sclerotic changes around the tumor. Most parts of the tumor showed low signal intensity on T2-weighted MRI images and non-uniform enhancement on gadolinium contrast-enhanced T1-weighted images. No increase in signal intensity was noted in diffusion-weighted images. Dual-time-point PET/CT showed markedly elevated 18F-FDG uptake, which increased from the early to delayed phase. Nevertheless, immunohistochemical analysis of the resected tumor tissue revealed weak expression of glucose transporter-1 and hexokinase II in both tumors. Temporal bone tumors, showing markedly elevated 18F-FDG uptake, which increases from the early to delayed phase on PET/CT images, may be diagnosed as malignant bone tumors. Therefore, the differential diagnosis should include chondroblastoma in combination with its characteristic findings on CT and MRI.

  3. Meta-analysis of thirty-two case-control and two ecological radon studies of lung cancer.

    Science.gov (United States)

    Dobrzynski, Ludwik; Fornalski, Krzysztof W; Reszczynska, Joanna

    2018-03-01

    A re-analysis has been carried out of thirty-two case-control and two ecological studies concerning the influence of radon, a radioactive gas, on the risk of lung cancer. Three mathematically simplest dose-response relationships (models) were tested: constant (zero health effect), linear, and parabolic (linear-quadratic). Health effect end-points reported in the analysed studies are odds ratios or relative risk ratios, related either to morbidity or mortality. In our preliminary analysis, we show that the results of dose-response fitting are qualitatively (within uncertainties, given as error bars) the same, whichever of these health effect end-points are applied. Therefore, we deemed it reasonable to aggregate all response data into the so-called Relative Health Factor and jointly analysed such mixed data, to obtain better statistical power. In the second part of our analysis, robust Bayesian and classical methods of analysis were applied to this combined dataset. In this part of our analysis, we selected different subranges of radon concentrations. In view of substantial differences between the methodology used by the authors of case-control and ecological studies, the mathematical relationships (models) were applied mainly to the thirty-two case-control studies. The degree to which the two ecological studies, analysed separately, affect the overall results when combined with the thirty-two case-control studies, has also been evaluated. In all, as a result of our meta-analysis of the combined cohort, we conclude that the analysed data concerning radon concentrations below ~1000 Bq/m3 (~20 mSv/year of effective dose to the whole body) do not support the thesis that radon may be a cause of any statistically significant increase in lung cancer incidence.
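
    A small sketch of the model comparison described above: the constant, linear, and linear-quadratic dose-response shapes are fitted by weighted least squares and compared (here via AIC). The (concentration, relative health factor) values below are placeholders, not the pooled study data.

```python
import numpy as np

# Placeholder (concentration [Bq/m^3], relative health factor, std. error) values;
# the real analysis pools odds/risk ratios from the published studies.
conc = np.array([25, 50, 100, 150, 250, 400, 600, 800], dtype=float)
rhf = np.array([1.02, 0.97, 1.01, 1.05, 0.99, 1.04, 1.08, 1.10])
err = np.full_like(rhf, 0.06)

def fit_and_aic(degree):
    # Weighted least-squares polynomial fit of the given degree and its AIC.
    coeffs = np.polyfit(conc, rhf, degree, w=1.0 / err)
    resid = (rhf - np.polyval(coeffs, conc)) / err
    chi2 = np.sum(resid ** 2)
    k = degree + 1                      # number of fitted parameters
    return chi2 + 2 * k, coeffs

for name, deg in [("constant", 0), ("linear", 1), ("linear-quadratic", 2)]:
    aic, coeffs = fit_and_aic(deg)
    print(f"{name:17s}: AIC = {aic:6.2f}, coefficients = {np.round(coeffs, 6)}")
```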

  4. STRUCTURE LINE DETECTION FROM LIDAR POINT CLOUDS USING TOPOLOGICAL ELEVATION ANALYSIS

    Directory of Open Access Journals (Sweden)

    C. Y. Lo

    2012-07-01

    Airborne LIDAR point clouds, which contain a considerable number of points on object surfaces, are essential to building modeling. In the last two decades, studies have developed different approaches to identify structure lines using two main approaches, data-driven and model-driven. These studies have shown that automatic modeling processes depend on certain considerations, such as the thresholds used, initial values, designed formulas, and predefined cues. Following the development of laser scanning systems, scanning rates have increased and can provide point clouds with higher point density. Therefore, this study proposes using topological elevation analysis (TEA) to detect structure lines instead of threshold-dependent concepts and predefined constraints. This analysis contains two parts: data pre-processing and structure line detection. To preserve the original elevation information, a pseudo-grid for generating digital surface models is produced during the first part. The highest point in each grid is set as the elevation value, and its original three-dimensional position is preserved. In the second part, using TEA, the structure lines are identified based on the topology of local elevation changes in two directions. Because structure lines can contain certain geometric properties, their locations have small reliefs in the radial direction and steep elevation changes in the circular direction. Following the proposed approach, TEA can be used to determine 3D line information without selecting thresholds. For validation, the TEA results are compared with those of the region growing approach. The results indicate that the proposed method can produce structure lines using dense point clouds.

  5. Non-equilibrium scalar two point functions in AdS/CFT

    Energy Technology Data Exchange (ETDEWEB)

    Keränen, Ville [Rudolf Peierls Centre for Theoretical Physics, University of Oxford,1 Keble Road, Oxford OX1 3NP (United Kingdom); Kleinert, Philipp [Rudolf Peierls Centre for Theoretical Physics, University of Oxford,1 Keble Road, Oxford OX1 3NP (United Kingdom); Merton College, University of Oxford,Merton Street, Oxford OX1 4JD (United Kingdom)

    2015-04-22

    In the first part of the paper, we discuss different versions of the AdS/CFT dictionary out of equilibrium. We show that the Skenderis-van Rees prescription and the “extrapolate” dictionary are equivalent at the level of “in-in” two point functions of free scalar fields in arbitrary asymptotically AdS spacetimes. In the second part of the paper, we calculate two point correlation functions in dynamical spacetimes using the “extrapolate” dictionary. These calculations are performed for conformally coupled scalar fields in examples of spacetimes undergoing gravitational collapse, the AdS{sub 2}-Vaidya spacetime and the AdS{sub 3}-Vaidya spacetime, which allow us to address the problem of thermalization following a quench in the boundary field theory. The computation of the correlators is formulated as an initial value problem in the bulk spacetime. Finally, we compare our results for AdS{sub 3}-Vaidya to results in the previous literature obtained using the geodesic approximation and we find qualitative agreement.

  6. Non-equilibrium scalar two point functions in AdS/CFT

    International Nuclear Information System (INIS)

    Keränen, Ville; Kleinert, Philipp

    2015-01-01

    In the first part of the paper, we discuss different versions of the AdS/CFT dictionary out of equilibrium. We show that the Skenderis-van Rees prescription and the “extrapolate” dictionary are equivalent at the level of “in-in” two point functions of free scalar fields in arbitrary asymptotically AdS spacetimes. In the second part of the paper, we calculate two point correlation functions in dynamical spacetimes using the “extrapolate” dictionary. These calculations are performed for conformally coupled scalar fields in examples of spacetimes undergoing gravitational collapse, the AdS 2 -Vaidya spacetime and the AdS 3 -Vaidya spacetime, which allow us to address the problem of thermalization following a quench in the boundary field theory. The computation of the correlators is formulated as an initial value problem in the bulk spacetime. Finally, we compare our results for AdS 3 -Vaidya to results in the previous literature obtained using the geodesic approximation and we find qualitative agreement.

  7. Comparison of Optimization and Two-point Methods in Estimation of Soil Water Retention Curve

    Science.gov (United States)

    Ghanbarian-Alavijeh, B.; Liaghat, A. M.; Huang, G.

    2009-04-01

    The soil water retention curve (SWRC) is one of the soil hydraulic properties whose direct measurement is time consuming and expensive. Since its measurement is unavoidable in environmental studies, e.g. the investigation of unsaturated hydraulic conductivity and solute transport, this study attempts to predict the soil water retention curve from two measured points. Using the Cresswell and Paydar (1996) method (two-point method) and an optimization method developed in this study on the basis of two points of the SWRC, the parameters of the Tyler and Wheatcraft (1990) model (fractal dimension and air-entry value) were estimated; water contents at different matric potentials were then estimated and compared with their measured values (n=180). For each method, we used both 3 and 1500 kPa (case 1) and 33 and 1500 kPa (case 2) as the two points of the SWRC. The calculated RMSE values showed that for the Cresswell and Paydar (1996) method there is no significant difference between case 1 and case 2, although the RMSE in case 2 (2.35) was slightly lower than in case 1 (2.37). The results also showed that the optimization method developed in this study had significantly lower RMSE values for cases 1 (1.63) and 2 (1.33) than the Cresswell and Paydar (1996) method.
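
    A minimal sketch of the two-point idea, under the assumption that the Tyler and Wheatcraft (1990) retention model takes the form theta(h) = theta_s * (h_a/h)^(3-D) for h >= h_a with the saturated water content theta_s known; closed-form estimates of D and h_a then follow from the two measured points (the numbers below are illustrative, not the study's data):

      # Minimal sketch (an assumption about the model form, not the authors' code):
      # closed-form "two-point" estimation of the Tyler-Wheatcraft fractal parameters.
      import numpy as np

      def two_point_tyler_wheatcraft(h1, theta1, h2, theta2, theta_s):
          """Estimate fractal dimension D and air-entry value h_a from two (h, theta) points."""
          exponent = np.log(theta1 / theta2) / np.log(h2 / h1)   # equals 3 - D
          D = 3.0 - exponent
          h_a = h1 * (theta1 / theta_s) ** (1.0 / exponent)
          return D, h_a

      # Illustrative numbers only: water contents at 33 kPa and 1500 kPa (case 2 above).
      D, h_a = two_point_tyler_wheatcraft(33.0, 0.30, 1500.0, 0.12, theta_s=0.45)
      print(f"D = {D:.3f}, air-entry value = {h_a:.2f} kPa")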

  8. Duality of two-point functions for confined non-relativistic quark-antiquark systems

    International Nuclear Information System (INIS)

    Fishbane, P.M.; Gasiorowicz, S.G.; Kaus, P.

    1985-01-01

    An analog to the scattering matrix describes the spectrum and high-energy behavior of confined systems. We show that for non-relativistic systems this S-matrix is identical to a two-point function which transparently describes the bound states for all angular momenta. Confined systems can thus be described in a dual fashion. This result makes it possible to study the modification of linear trajectories (originating in a long-range confining potential) due to short range forces which are unknown except for the way in which they modify the asymptotic behavior of the two point function. A type of effective range expansion is one way to calculate the energy shifts. 9 refs

  9. Free-ranging dogs show age related plasticity in their ability to follow human pointing.

    Science.gov (United States)

    Bhattacharjee, Debottam; N, Nikhil Dev; Gupta, Shreya; Sau, Shubhra; Sarkar, Rohan; Biswas, Arpita; Banerjee, Arunita; Babu, Daisy; Mehta, Diksha; Bhadra, Anindita

    2017-01-01

    Differences in pet dogs' and captive wolves' ability to follow human communicative intents have led to the proposition of several hypotheses regarding the possession and development of social cognitive skills in dogs. It is possible that the social cognitive abilities of pet dogs are induced by indirect conditioning through living with humans, and studying free-ranging dogs can provide deeper insights into differentiating between innate abilities and conditioning in dogs. Free-ranging dogs are mostly scavengers, indirectly depending on humans for their sustenance. Humans can act both as food providers and as threats to these dogs, and thus understanding human gestures can be a survival need for the free-ranging dogs. We tested the responsiveness of such dogs in urban areas toward simple human pointing cues using dynamic proximal points. Our experiment showed that pups readily follow proximal pointing and exhibit weaker avoidance to humans, but stop doing so at the later stages of development. While juveniles showed frequent and prolonged gaze alternations, only adults adjusted their behaviour based on the reliability of the human experimenter after being rewarded. Thus free-ranging dogs show a tendency to respond to human pointing gestures, with a certain level of behavioural plasticity that allows learning from ontogenic experience.

  10. Effective diffusion constant in a two-dimensional medium of charged point scatterers

    International Nuclear Information System (INIS)

    Dean, D S; Drummond, I T; Horgan, R R

    2004-01-01

    We obtain exact results for the effective diffusion constant of a two-dimensional Langevin tracer particle in the force field generated by charged point scatterers with quenched positions. We show that if the point scatterers have a screened Coulomb (Yukawa) potential and are uniformly and independently distributed then the effective diffusion constant obeys the Vogel-Fulcher-Tammann law where it vanishes. Exact results are also obtained for pure Coulomb scatterers frozen in an equilibrium configuration of the same temperature as that of the tracer

  11. Two-Point Codes for the Generalised GK curve

    DEFF Research Database (Denmark)

    Barelli, Élise; Beelen, Peter; Datta, Mrinmoy

    2017-01-01

    We improve previously known lower bounds for the minimum distance of certain two-point AG codes constructed using a Generalized Giulietti–Korchmaros curve (GGK). Castellanos and Tizziotti recently described such bounds for two-point codes coming from the Giulietti–Korchmaros curve (GK). Our results completely cover and in many cases improve on their results, using different techniques, while also supporting any GGK curve. Our method builds on the order bound for AG codes: to enable this, we study certain Weierstrass semigroups. This allows an efficient algorithm for computing our improved bounds.

  12. Neutronic analysis of two-fluid thorium molten salt reactor

    International Nuclear Information System (INIS)

    Frybort, Jan; Vocka, Radim

    2009-01-01

    The aim of this paper is to evaluate features of the two-fluid MSBR through a parametric study and to compare its properties to one-fluid MSBR concepts. The starting point of the analysis is the original ORNL 1000 MWe reactor design, although simplified to some extent. We studied the influence of the dimensions of distinct reactor parts (fuel and fertile channel radius, plenum height, design, etc.) on fundamental reactor properties: breeding ratio and doubling time, reactor inventory, graphite lifetime, and temperature feedback coefficients. The calculations were carried out using the MCNP5 code. Based on the obtained results, we propose an improved reactor design. Our results show clear advantages of the concept with two separate fluoride salts over the one-fluid concept in breeding, doubling time, and temperature feedback coefficients. Limitations of the two-fluid concept, particularly the graphite lifetime, are also pointed out. The reactor design can be the subject of further optimizations, namely from the viewpoint of reactor safety. (author)

  13. Stability Analysis of Periodic Systems by Truncated Point Mappings

    Science.gov (United States)

    Guttalu, R. S.; Flashner, H.

    1996-01-01

    An approach is presented for deriving analytical stability and bifurcation conditions for systems with periodically varying coefficients. The method is based on a point mapping (period-to-period mapping) representation of the system's dynamics. An algorithm is employed to obtain an analytical expression for the point mapping and its dependence on the system's parameters. The algorithm is devised to derive the coefficients of a multinomial expansion of the point mapping up to an arbitrary order in terms of the state variables and of the parameters. Analytical stability and bifurcation conditions are then formulated and expressed as functional relations between the parameters. To demonstrate the application of the method, the parametric stability of Mathieu's equation and of a two-degree-of-freedom system is investigated. The results obtained by the proposed approach are compared to those obtained by perturbation analysis and by direct integration, which we consider the "exact solution". It is shown that, unlike perturbation analysis, the proposed method provides very accurate solutions even for large values of the parameters. If an expansion of the point mapping in terms of a small parameter is performed, the method is equivalent to perturbation analysis. Moreover, it is demonstrated that the method can be easily applied to multiple-degree-of-freedom systems using the same framework. This feature is an important advantage since most of the existing analysis methods apply mainly to single-degree-of-freedom systems and their extension to higher dimensions is difficult and computationally cumbersome.
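
    The stability check itself can be illustrated with a short sketch that builds the period-to-period (point) mapping of Mathieu's equation by direct numerical integration (the "exact solution" reference in the abstract) rather than by the analytical truncated expansion of the paper; parameter values are illustrative:

      # Minimal sketch: stability of Mathieu's equation x'' + (delta + eps*cos(t)) x = 0
      # from the linearized period-to-period mapping, obtained here by direct integration.
      import numpy as np
      from scipy.integrate import solve_ivp

      def monodromy(delta, eps, period=2 * np.pi):
          """Linearized point mapping over one period, built column by column from unit initial conditions."""
          def rhs(t, y):
              x, v = y
              return [v, -(delta + eps * np.cos(t)) * x]
          cols = []
          for y0 in ([1.0, 0.0], [0.0, 1.0]):
              sol = solve_ivp(rhs, (0.0, period), y0, rtol=1e-10, atol=1e-12)
              cols.append(sol.y[:, -1])
          return np.array(cols).T

      M = monodromy(delta=0.3, eps=0.2)
      multipliers = np.linalg.eigvals(M)               # Floquet multipliers of the point mapping
      print("Floquet multipliers:", multipliers)
      print("stable" if np.all(np.abs(multipliers) <= 1 + 1e-8) else "unstable")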

  14. An Evaluation of Recent Pick-up Point Experiments in European Cities: the Rise of two Competing Models?

    OpenAIRE

    AUGEREAU, V; DABLANC, L

    2007-01-01

    In this paper, we present an analysis of recent collection point/lockerbank experiments in Europe, including the history of some of the most notable experiments. Two 'models' are currently quite successful (Kiala relay points in France and Packstation locker banks in Germany), although they are quite different. As a first interpretation of these results, we propose that these two models be considered as complementary to one another.

  15. The potential of cloud point system as a novel two-phase partitioning system for biotransformation.

    Science.gov (United States)

    Wang, Zhilong

    2007-05-01

    Although extractive biotransformation in two-phase partitioning systems has been studied extensively, in systems such as the water-organic solvent two-phase system, the aqueous two-phase system, the reverse micelle system, and room-temperature ionic liquids, this has not yet resulted in widespread industrial application. Based on a discussion of the main obstacles, the exploitation of the cloud point system, which has already been applied in the separation field as cloud point extraction, as a novel two-phase partitioning system for biotransformation is reviewed through the analysis of some topical examples. At the end of the review, process control and downstream processing in the application of this novel two-phase partitioning system for biotransformation are also briefly discussed.

  16. Change point analysis and assessment

    DEFF Research Database (Denmark)

    Müller, Sabine; Neergaard, Helle; Ulhøi, John Parm

    2011-01-01

    The aim of this article is to develop an analytical framework for studying processes such as continuous innovation and business development in high-tech SME clusters that transcends the traditional qualitative-quantitative divide. It integrates four existing and well-recognized approaches...... to studying events, processes and change, namely change-point analysis, event-history analysis, critical-incident technique and sequence analysis....

  17. [Research on infrared radiation characteristics of skin covering two acupuncture points in the hand and forearm, NeiGuan and LaoGong points].

    Science.gov (United States)

    Wang, Zhen; Yu, Wenlong; Cui, Han; Shi, Huafeng; Jin, Lei

    2013-06-01

    In order to investigate the infrared radiation characteristics of the skin covering two traditional Chinese acupuncture points, NeiGuan in the forearm and LaoGong in the center of the palm, we continuously recorded the infrared radiation spectra of the human body surface using Fourier transform infrared spectroscopy. The experimental results showed that, firstly, the differences in the infrared radiation spectra of the body surface between individuals were obvious. Secondly, the infrared radiation intensity of the body surface changed with time, and the intensity in two particular wavelength ranges (6.79-6.85 μm and 13.6-14.0 μm) changed much more than in other ranges. Thirdly, the proportional changes of the infrared radiation spectra calculated for the two different acupuncture points were the same in these two wavelength ranges, but their magnitudes differed. These results suggest that the infrared radiation of acupuncture points has the same biological basis, and that the mechanism of the infrared radiation in these two wavelength ranges differs from other tissue heat radiation.

  18. Detection of Point Sources on Two-Dimensional Images Based on Peaks

    Directory of Open Access Journals (Sweden)

    R. B. Barreiro

    2005-09-01

    This paper considers the detection of point sources in two-dimensional astronomical images. The detection scheme we propose is based on peak statistics. We discuss the example of the detection of far galaxies in cosmic microwave background experiments throughout the paper, although the method we present is totally general and can be used in many other fields of data analysis. We consider sources with a Gaussian profile—that is, a fair approximation of the profile of a point source convolved with the detector beam in microwave experiments—on a background modeled by a homogeneous and isotropic Gaussian random field characterized by a scale-free power spectrum. Point sources are enhanced with respect to the background by means of linear filters. After filtering, we identify local maxima and apply our detection scheme, a Neyman-Pearson detector that defines our region of acceptance based on the a priori pdf of the sources and the ratio of number densities. We study the different performances of some linear filters that have been used in this context in the literature: the Mexican hat wavelet, the matched filter, and the scale-adaptive filter. We consider as well an extension to two dimensions of the biparametric scale-adaptive filter (BSAF). The BSAF depends on two parameters which are determined by maximizing the number density of real detections while fixing the number density of spurious detections. For our detection criterion the BSAF outperforms the other filters in the interesting case of white noise.
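
    A toy version of this detection pipeline (not the authors' code) can be sketched as follows: inject Gaussian-profile sources into white noise, apply a Gaussian matched filter, and keep local maxima above a threshold. The map size, beam width, amplitudes and threshold are illustrative assumptions, and the scale-free background of the paper is replaced by white noise for brevity.

      # Minimal sketch: matched filtering plus peak detection on a synthetic 2D map.
      import numpy as np
      from scipy.ndimage import gaussian_filter, maximum_filter

      rng = np.random.default_rng(1)
      n, beam_sigma = 256, 2.0
      sources = np.zeros((n, n))
      sources[64, 64] = 100.0                            # two injected point sources
      sources[200, 30] = 100.0
      sky = gaussian_filter(sources, beam_sigma) + rng.normal(0.0, 1.0, (n, n))
      filtered = gaussian_filter(sky, beam_sigma)        # matched filter for a Gaussian profile in white noise
      peaks = filtered == maximum_filter(filtered, size=9)
      threshold = 5.0 * filtered.std()                   # crude detection threshold on the filtered map
      detections = np.argwhere(peaks & (filtered > threshold))
      print("detected peak positions:\n", detections)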

  19. The Purification Method of Matching Points Based on Principal Component Analysis

    Directory of Open Access Journals (Sweden)

    DONG Yang

    2017-02-01

    The traditional purification method of matching points usually uses a small number of points as initial input. Though it can meet most point-constraint requirements, the iterative purification solution easily falls into local extrema, which results in the loss of correct matching points. To solve this problem, we introduce the principal component analysis method and use the whole point set as initial input. Through stepwise elimination of mismatching points and robust solving, a more accurate global optimal solution can be obtained, which reduces the omission rate of correct matching points and thus achieves a better purification effect. Experimental results show that this method can obtain the global optimal solution under a certain original false matching rate, and can decrease or avoid the omission of correct matching points.

  20. Comparison of pressure perception of static and dynamic two point ...

    African Journals Online (AJOL)

    ... the right and left index finger (p<0.05). Conclusion: Age and gender did not affect the perception of static and dynamic two point discrimination while the limb side (left or right) affected the perception of static and dynamic two point discrimination. The index finger is also more sensitive to moving rather static sensations.

  1. Two-point vs multipoint sample collection for the analysis of energy expenditure by use of the doubly labeled water method

    International Nuclear Information System (INIS)

    Welle, S.

    1990-01-01

    Energy expenditure over a 2-wk period was determined by the doubly labeled water (2H2(18)O) method in nine adults. When daily samples were analyzed, energy expenditure was 2859 +/- 453 kcal/d (means +/- SD); when only the first and last time points were considered, the mean calculated energy expenditure was not significantly different (2947 +/- 430 kcal/d). An analysis of theoretical cases in which isotope flux is not constant indicates that the multipoint method can cause errors in the calculation of average isotope fluxes, but these are generally small. Simulations of the effect of analytical error indicate that increasing the number of replicates on two points reduces the impact of technical errors more effectively than does performing single analyses on multiple samples. It appears that generally there is no advantage to collecting frequent samples when the 2H2(18)O method is used to estimate energy expenditure in adult humans
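
    A minimal Monte Carlo sketch of the comparison (not the original study's calculation): simulate log-linear isotope disappearance with purely analytical noise, then estimate the elimination rate either from replicate analyses of the first and last samples or from a regression through single daily analyses; the rate, noise level and replicate count are illustrative assumptions.

      # Minimal sketch: two-point (with replicates) vs multipoint estimation of an isotope
      # elimination rate under analytical measurement noise only.
      import numpy as np

      rng = np.random.default_rng(42)
      days = np.arange(0, 15)             # 2-wk period, daily samples
      k_true, noise_sd, n_sim, n_rep = 0.12, 0.01, 5000, 3
      log_enrichment = -k_true * days     # ideal log enrichment relative to day 0

      two_point, multi_point = [], []
      for _ in range(n_sim):
          obs = log_enrichment + rng.normal(0.0, noise_sd, size=days.size)
          # (a) two-point: average n_rep replicate analyses at day 0 and day 14
          e0 = np.mean(log_enrichment[0] + rng.normal(0.0, noise_sd, size=n_rep))
          e14 = np.mean(log_enrichment[-1] + rng.normal(0.0, noise_sd, size=n_rep))
          two_point.append(-(e14 - e0) / (days[-1] - days[0]))
          # (b) multipoint: least-squares slope through single daily analyses
          multi_point.append(-np.polyfit(days, obs, 1)[0])

      print(f"two-point  : mean k = {np.mean(two_point):.4f}, SD = {np.std(two_point):.5f}")
      print(f"multipoint : mean k = {np.mean(multi_point):.4f}, SD = {np.std(multi_point):.5f}")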

  2. Music analysis and point-set compression

    DEFF Research Database (Denmark)

    Meredith, David

    A musical analysis represents a particular way of understanding certain aspects of the structure of a piece of music. The quality of an analysis can be evaluated to some extent by the degree to which knowledge of it improves performance on tasks such as mistake spotting, memorising a piece...... as the minimum description length principle and relates closely to certain ideas in the theory of Kolmogorov complexity. Inspired by this general principle, the hypothesis explored in this paper is that the best ways of understanding (or explanations for) a piece of music are those that are represented...... by the shortest possible descriptions of the piece. With this in mind, two compression algorithms are presented, COSIATEC and SIATECCompress. Each of these algorithms takes as input an in extenso description of a piece of music as a set of points in pitch-time space representing notes. Each algorithm...

  3. Environmental protection standards - from the point of view of systems analysis

    Energy Technology Data Exchange (ETDEWEB)

    Becker, K

    1978-11-01

    A project of the International Institute of Applied Systems Analysis (IIASA) in Laxenburg castle near Vienna is reviewed where standards for environmental protection are interpreted from the point of view of systems analysis. Some examples are given to show how results are influenced not only by technical and economic factors but also by psychological and political factors.

  4. Environmental protection standards - from the point of view of systems analysis

    International Nuclear Information System (INIS)

    Becker, K.

    1978-01-01

    A project of the International Institute of Applied Systems Analysis (IIASA) in Laxenburg castle near Vienna is reviewed where standards for environmental protection are interpreted from the point of view of systems analysis. Some examples are given to show how results are influenced not only by technical and economic factors but also by psychological and political factors. (orig.) [de

  5. Two-point model for divertor transport

    International Nuclear Information System (INIS)

    Galambos, J.D.; Peng, Y.K.M.

    1984-04-01

    Plasma transport along divertor field lines was investigated using a two-point model. This treatment requires considerably less effort to find solutions to the transport equations than previously used one-dimensional (1-D) models and is useful for studying general trends. It also can be a valuable tool for benchmarking more sophisticated models. The model was used to investigate the possibility of operating in the so-called high density, low temperature regime

  6. Implementation of Steiner point of fuzzy set.

    Science.gov (United States)

    Liang, Jiuzhen; Wang, Dejiang

    2014-01-01

    This paper deals with the implementation of the Steiner point of a fuzzy set. Some definitions and properties of the Steiner point are investigated and extended to fuzzy sets. The paper focuses on establishing efficient methods to compute the Steiner point of a fuzzy set, and two strategies are proposed. One is a linear combination of the Steiner points computed from a series of crisp α-cut sets of the fuzzy set. The other is an approximate method, which tries to find the optimal α-cut set approaching the fuzzy set. The stability of the Steiner point of a fuzzy set is also studied. Some experiments on image processing are given, in which the two methods are applied to compute the Steiner point of a fuzzy image, and both strategies show their own advantages in computing the Steiner point of a fuzzy set.

  7. Two-point anchoring of a lanthanide-binding peptide to a target protein enhances the paramagnetic anisotropic effect

    International Nuclear Information System (INIS)

    Saio, Tomohide; Ogura, Kenji; Yokochi, Masashi; Kobashigawa, Yoshihiro; Inagaki, Fuyuhiko

    2009-01-01

    Paramagnetic lanthanide ions fixed in a protein frame induce several paramagnetic effects such as pseudo-contact shifts and residual dipolar couplings. These effects provide long-range distance and angular information for proteins and, therefore, are valuable in protein structural analysis. However, until recently this approach had been restricted to metal-binding proteins, but now it has become applicable to non-metalloproteins through the use of a lanthanide-binding tag. Here we report a lanthanide-binding peptide tag anchored via two points to the target proteins. Compared to conventional single-point attached tags, the two-point linked tag provides two to threefold stronger anisotropic effects. Though there is slight residual mobility of the lanthanide-binding tag, the present tag provides a higher anisotropic paramagnetic effect

  8. Homotopy analysis solutions of point kinetics equations with one delayed precursor group

    International Nuclear Information System (INIS)

    Zhu Qian; Luo Lei; Chen Zhiyun; Li Haofeng

    2010-01-01

    The homotopy analysis method is proposed to obtain series solutions of nonlinear differential equations. The method was applied to the point kinetics equations with one delayed precursor group. Analytic solutions were obtained using the homotopy analysis method, and the algorithm was analysed. The results show that the computation time and precision of the algorithm meet engineering requirements. (authors)
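
    For reference, the underlying point kinetics equations with one delayed precursor group can be solved directly with a stiff ODE integrator; the sketch below does exactly that (it does not implement the homotopy analysis method itself), with illustrative parameter values:

      # Minimal sketch: point kinetics with one delayed group, solved numerically with SciPy
      # as a reference solution; parameter values (beta, lambda, Lambda, rho) are illustrative.
      import numpy as np
      from scipy.integrate import solve_ivp

      beta, lam, Lambda, rho = 0.0065, 0.08, 1.0e-4, 0.003   # delayed fraction, decay const, generation time, step reactivity

      def kinetics(t, y):
          n, c = y
          dn = (rho - beta) / Lambda * n + lam * c
          dc = beta / Lambda * n - lam * c
          return [dn, dc]

      y0 = [1.0, beta / (lam * Lambda)]                      # steady state before the reactivity step
      sol = solve_ivp(kinetics, (0.0, 1.0), y0, method="LSODA",
                      rtol=1e-8, atol=1e-10, dense_output=True)
      for t in (0.0, 0.1, 0.5, 1.0):
          print(f"t = {t:4.1f} s   n/n0 = {sol.sol(t)[0]:.4f}")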

  9. Effect of the number of two-wheeled containers at a gathering point on energetic workload and work efficiency

    NARCIS (Netherlands)

    Kuijer, P. Paul F M; Frings-Dresen, Monique H W; Van Der Beek, Allard J.; Van Dieën, Jaap H.; Visser, Bart

    2000-01-01

    The effect of the number of two-wheeled containers at a gathering point on the energetic workload and the work efficiency in refuse collecting was studied. The results showed that the size of the gathering point had no effect on the energetic workload. However, the size of the gathering point had an

  10. Critical point analysis of phase envelope diagram

    International Nuclear Information System (INIS)

    Soetikno, Darmadi; Siagian, Ucok W. R.; Kusdiantara, Rudy; Puspita, Dila; Sidarto, Kuntjoro A.; Soewono, Edy; Gunawan, Agus Y.

    2014-01-01

    A phase diagram or phase envelope is a relation between temperature and pressure that shows the conditions of equilibrium between the different phases of chemical compounds, mixtures of compounds, and solutions. The phase diagram is an important issue in chemical thermodynamics and hydrocarbon reservoir engineering. It is very useful for process simulation, hydrocarbon reactor design, and petroleum engineering studies. It is constructed from the bubble line, the dew line, and the critical point. The bubble line and dew line are composed of bubble points and dew points, respectively. The bubble point is the first point at which gas is formed when a liquid is heated, whereas the dew point is the first point at which liquid is formed when a gas is cooled. The critical point is the point where all of the properties of the gas and liquid phases become equal, such as temperature, pressure, amount of substance, and others. The critical point is very useful in fuel processing and the dissolution of certain chemicals. In this paper, we determine the critical point analytically and compare it with numerical calculations of the Peng-Robinson equation using the Newton-Raphson method. As case studies, several hydrocarbon mixtures are simulated using Matlab.

  11. Critical point analysis of phase envelope diagram

    Energy Technology Data Exchange (ETDEWEB)

    Soetikno, Darmadi; Siagian, Ucok W. R. [Department of Petroleum Engineering, Institut Teknologi Bandung, Jl. Ganesha 10, Bandung 40132 (Indonesia); Kusdiantara, Rudy, E-mail: rkusdiantara@s.itb.ac.id; Puspita, Dila, E-mail: rkusdiantara@s.itb.ac.id; Sidarto, Kuntjoro A., E-mail: rkusdiantara@s.itb.ac.id; Soewono, Edy; Gunawan, Agus Y. [Department of Mathematics, Institut Teknologi Bandung, Jl. Ganesha 10, Bandung 40132 (Indonesia)

    2014-03-24

    A phase diagram or phase envelope is a relation between temperature and pressure that shows the conditions of equilibrium between the different phases of chemical compounds, mixtures of compounds, and solutions. The phase diagram is an important issue in chemical thermodynamics and hydrocarbon reservoir engineering. It is very useful for process simulation, hydrocarbon reactor design, and petroleum engineering studies. It is constructed from the bubble line, the dew line, and the critical point. The bubble line and dew line are composed of bubble points and dew points, respectively. The bubble point is the first point at which gas is formed when a liquid is heated, whereas the dew point is the first point at which liquid is formed when a gas is cooled. The critical point is the point where all of the properties of the gas and liquid phases become equal, such as temperature, pressure, amount of substance, and others. The critical point is very useful in fuel processing and the dissolution of certain chemicals. In this paper, we determine the critical point analytically and compare it with numerical calculations of the Peng-Robinson equation using the Newton-Raphson method. As case studies, several hydrocarbon mixtures are simulated using Matlab.
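
    A minimal sketch of the numerical side for a pure component (a consistency check rather than the paper's mixture calculation): the critical point of the Peng-Robinson equation is found from the conditions dP/dV = 0 and d2P/dV2 = 0. The paper solves these with Newton-Raphson, whereas this sketch hands the same two equations to SciPy's root finder, using illustrative methane parameters; the iteration should recover the Tc and Pc used to parameterize the equation.

      # Minimal sketch: Peng-Robinson critical point from the two volume-derivative conditions.
      import numpy as np
      from scipy.optimize import root

      R = 8.314462                                   # J/(mol K)
      Tc_in, Pc_in, omega = 190.56, 4.599e6, 0.011   # methane, illustrative
      a0 = 0.45724 * R**2 * Tc_in**2 / Pc_in
      b = 0.07780 * R * Tc_in / Pc_in
      kappa = 0.37464 + 1.54226 * omega - 0.26992 * omega**2

      def crit_conditions(x):
          """Dimensionless dP/dV and d2P/dV2 of the PR equation; both vanish at the critical point."""
          T, V = x[0] * Tc_in, x[1] * b              # x holds reduced variables T/Tc and V/b
          alpha = (1.0 + kappa * (1.0 - np.sqrt(T / Tc_in))) ** 2
          A, D, Dp = a0 * alpha, V**2 + 2*b*V - b**2, 2*V + 2*b
          dP = -R*T/(V - b)**2 + A*Dp/D**2
          d2P = 2*R*T/(V - b)**3 + A*(2.0/D**2 - 2.0*Dp**2/D**3)
          return [dP * b**2 / (R * Tc_in), d2P * b**3 / (R * Tc_in)]

      sol = root(crit_conditions, x0=[0.95, 4.0])
      T_c, V_c = sol.x[0] * Tc_in, sol.x[1] * b
      alpha_c = (1.0 + kappa * (1.0 - np.sqrt(T_c / Tc_in))) ** 2
      P_c = R*T_c/(V_c - b) - a0*alpha_c/(V_c**2 + 2*b*V_c - b**2)
      print(f"converged: {sol.success},  T_c = {T_c:.2f} K,  P_c = {P_c/1e6:.3f} MPa")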

  12. Zero-point energies in the two-center shell model. II

    International Nuclear Information System (INIS)

    Reinhard, P.-G.

    1978-01-01

    The zero-point energy (ZPE) contained in the potential-energy surface of a two-center shell model (TCSM) is evaluated. In extension of previous work, the author uses here the full TCSM with l.s force, smoothing and asymmetry. The results show a critical dependence on the height of the potential barrier between the centers. The ZPE turns out to be non-negligible along the fission path for 236 U, and even more so for lighter systems. It is negligible for surface quadrupole motion and it is just on the fringe of being negligible for motion along the asymmetry coordinate. (Auth.)

  13. Alien calculus and a Schwinger-Dyson equation: two-point function with a nonperturbative mass scale

    Science.gov (United States)

    Bellon, Marc P.; Clavier, Pierre J.

    2018-02-01

    Starting from the Schwinger-Dyson equation and the renormalization group equation for the massless Wess-Zumino model, we compute the dominant nonperturbative contributions to the anomalous dimension of the theory, which are related by alien calculus to singularities of the Borel transform at integer points. The sum of these dominant contributions has an analytic expression. When applied to the two-point function, this analysis gives a tame evolution in the deep Euclidean domain at this approximation level, casting doubt on arguments for the triviality of the quantum field theory with positive β-function. On the other hand, we find a singularity of the propagator for timelike momenta of the order of the renormalization group invariant scale of the theory, which has a nonperturbative relationship with the renormalization point of the theory. None of these results seems to have an interpretation in terms of a semiclassical analysis of a Feynman path integral.

  14. Mode analysis of heuristic behavior of searching for multimodal optimum point

    Energy Technology Data Exchange (ETDEWEB)

    Kamei, K; Araki, Y; Inoue, K

    1982-01-01

    This paper describes an experimental study of the heuristic behavior of searching for the global optimum (maximum) point of a two-dimensional, multimodal, nonlinear and unknown function. First, the authors define three modes dealing with the trial purposes, called the purpose modes, and show the heuristic search behaviors, expressed in terms of these purpose modes, which the human subjects select in the search experiments. Second, the authors classify the heuristic search behaviors into three types according to the mode transitions and extract eight states of search which cause the mode transitions. Third, a model of the heuristic search behavior is composed from the eight mode transitions. The analysis of heuristic search behaviors by means of the purpose modes plays an important role in heuristic search techniques. 6 references.

  15. Two-Gyro Pointing Stability of HST measured with ACS

    Science.gov (United States)

    Koekemoer, Anton M.; Kozhurina-Platais, Vera; Riess, Adam; Sirianni, Marco; Biretta, John; Pavlovsky

    2005-06-01

    We present the results of the pointing stability tests for HST, as measured with the ACS/HRC during the Two-Gyro test program conducted in February 2005. We measure the shifts of 185 exposures of the globular clusters NGC6341 and Omega Centauri, obtained over a total of 13 orbits, and compare the measured pointings to those that were commanded in the observing program. We find in all cases that the measured shifts and rotations have the same level of accuracy as those that were commanded in three-gyro mode. Specifically, the pointing offsets during an orbit relative to the first exposure can be characterized with distributions having a dispersion of 2.3 milliarcseconds for shifts and 0.00097 degrees for rotations, thus less than 0.1 HRC pixels, and agree extremely well with similar values measured for comparable exposures obtained in three-gyro mode. In addition, we successfully processed these two-gyro test data through the MultiDrizzle software which is used in the HST pipeline to perform automated registration, cosmic ray rejection and image combination for multiple exposure sequences, and we find excellent agreement with similar exposures obtained in three-gyro mode. In summary, we find no significant difference between the quality of HST pointing as measured from these two-gyro test data, relative to the nominal behavior of HST in regular three-gyro operations.

  16. Active control on high-order coherence and statistic characterization on random phase fluctuation of two classical point sources.

    Science.gov (United States)

    Hong, Peilong; Li, Liming; Liu, Jianji; Zhang, Guoquan

    2016-03-29

    Young's double-slit or two-beam interference is of fundamental importance for understanding various interference effects, in which the stationary phase difference between the two beams plays the key role in the first-order coherence. Different from the case of first-order coherence, in high-order optical coherence the statistical behavior of the optical phase plays the key role. In this article, by employing a fundamental interfering configuration with two classical point sources, we show that the high-order optical coherence between two classical point sources can be actively designed by controlling the statistical behavior of the relative phase difference between the two point sources. Synchronous-position Nth-order subwavelength interference with an effective wavelength of λ/M was demonstrated, in which λ is the wavelength of the point sources and M is an integer not larger than N. Interestingly, we found that the synchronous-position Nth-order interference fringe fingerprints the statistical trace of the random phase fluctuation of the two classical point sources; therefore, it provides an effective way to characterize the statistical properties of phase fluctuation for incoherent light sources.

  17. Definition of distance for nonlinear time series analysis of marked point process data

    Energy Technology Data Exchange (ETDEWEB)

    Iwayama, Koji, E-mail: koji@sat.t.u-tokyo.ac.jp [Research Institute for Food and Agriculture, Ryukoku Univeristy, 1-5 Yokotani, Seta Oe-cho, Otsu-Shi, Shiga 520-2194 (Japan); Hirata, Yoshito; Aihara, Kazuyuki [Institute of Industrial Science, The University of Tokyo, 4-6-1 Komaba, Meguro-ku, Tokyo 153-8505 (Japan)

    2017-01-30

    Marked point process data are time series of discrete events accompanied by some values, such as economic trades, earthquakes, and lightning. A distance for marked point process data allows us to apply nonlinear time series analysis to such data. We propose a distance for marked point process data which can be calculated much faster than the existing distance when the number of marks is small. Furthermore, under some assumptions, the Kullback–Leibler divergences between posterior distributions for neighbors defined by this distance are small. We performed numerical simulations showing that analysis based on the proposed distance is effective. - Highlights: • A new distance for marked point process data is proposed. • The distance can be computed fast enough for a small number of marks. • The method to optimize parameter values of the distance is also proposed. • Numerical simulations indicate that the analysis based on the distance is effective.

  18. A two-point kinetic model for the PROTEUS reactor

    International Nuclear Information System (INIS)

    Dam, H. van.

    1995-03-01

    A two-point reactor kinetic model for the PROTEUS reactor is developed and the results are described in terms of frequency-dependent reactivity transfer functions for the core and the reflector. It is shown that at higher frequencies space-dependent effects occur which imply failure of the one-point kinetic model. In the modulus of the transfer functions these effects become apparent above a radian frequency of about 100 s^-1, whereas for the phase behaviour the deviation from a point model already starts at a radian frequency of 10 s^-1. (orig.)

  19. Holographic two-point functions for 4d log-gravity

    NARCIS (Netherlands)

    Johansson, Niklas; Naseh, Ali; Zojer, Thomas

    We compute holographic one- and two-point functions of critical higher-curvature gravity in four dimensions. The two most important operators are the stress tensor and its logarithmic partner, sourced by ordinary massless and by logarithmic non-normalisable gravitons, respectively. In addition, the

  20. A similarity hypothesis for the two-point correlation tensor in a temporally evolving plane wake

    Science.gov (United States)

    Ewing, D. W.; George, W. K.; Moser, R. D.; Rogers, M. M.

    1995-01-01

    The analysis demonstrated that the governing equations for the two-point velocity correlation tensor in the temporally evolving wake admit similarity solutions, which include the similarity solutions for the single-point moments as a special case. The resulting equations for the similarity solutions include two constants, beta and Re(sub sigma), that are ratios of three characteristic time scales of processes in the flow: a viscous time scale, a time scale characteristic of the spread rate of the flow, and a characteristic time scale of the mean strain rate. The values of these ratios depend on the initial conditions of the flow and are most likely measures of the coherent structures in the initial conditions. The occurrence of these constants in the governing equations for the similarity solutions indicates that the solutions will, in general, only be the same for two flows if these two constants are equal (and hence the coherent structures in the flows are related). The comparisons between the predictions of the similarity hypothesis and the data presented here and elsewhere indicate that the similarity solutions for the two-point correlation tensor provide a good approximation of the measures of those motions that are not significantly affected by the boundary conditions caused by the finite extent of real flows. Thus, the two-point similarity hypothesis provides a useful tool for both numerical and physical experimentalists that can be used to examine how the finite extent of real flows affects the evolution of the different scales of motion in the flow.

  1. Analysis on signal properties due to concurrent leaks at two points in water supply pipelines

    International Nuclear Information System (INIS)

    Lee, Young Sup

    2015-01-01

    Intelligent leak detection is an essential component of an underground water supply pipeline network such as a smart water grid system. In such a network, numerous leak detection sensors are needed to cover all of the pipelines in a specific area, installed at regular distances. It is also necessary to determine the existence of any leak and estimate its location within a short time after it occurs. In this study, the leak signal properties and the feasibility of leak location detection were investigated when concurrent leaks occurred at two points in a pipeline. The straight distance between the two leak sensors in the 100A-sized cast-iron pipeline was 315.6 m, and their signals were measured with one leak and with two concurrent leaks. Each leak location was described after analyzing the frequency properties and cross-correlation of the measured signals.

  2. Analysis on signal properties due to concurrent leaks at two points in water supply pipelines

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Young Sup [Dept. of Embedded Systems Engineering, Incheon National University, Incheon (Korea, Republic of)

    2015-02-15

    Intelligent leak detection is an essential component of an underground water supply pipeline network such as a smart water grid system. In such a network, numerous leak detection sensors are needed to cover all of the pipelines in a specific area, installed at regular distances. It is also necessary to determine the existence of any leak and estimate its location within a short time after it occurs. In this study, the leak signal properties and the feasibility of leak location detection were investigated when concurrent leaks occurred at two points in a pipeline. The straight distance between the two leak sensors in the 100A-sized cast-iron pipeline was 315.6 m, and their signals were measured with one leak and with two concurrent leaks. Each leak location was described after analyzing the frequency properties and cross-correlation of the measured signals.
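
    The cross-correlation step can be illustrated with a toy two-sensor example (not the authors' code): the leak signal reaches the two sensors with a delay difference tau estimated from the cross-correlation peak, and the leak position follows from d = (L - c*tau)/2. The wave speed, sampling rate and synthetic signal below are assumptions.

      # Minimal sketch: time-delay-based leak localization with two sensors.
      import numpy as np

      fs, c, L, d_true = 10_000.0, 1200.0, 315.6, 100.0      # Hz, m/s, sensor spacing (m), leak position (m)
      tau_true = (L - 2.0 * d_true) / c                       # arrival-time difference between the two sensors

      rng = np.random.default_rng(0)
      n = 2 ** 14
      source = rng.normal(size=n)
      shift = int(round(tau_true * fs))
      s1 = source + 0.5 * rng.normal(size=n)                  # sensor nearer the leak
      s2 = np.roll(source, shift) + 0.5 * rng.normal(size=n)  # farther sensor: same signal, delayed

      xcorr = np.correlate(s2 - s2.mean(), s1 - s1.mean(), mode="full")
      lag = np.argmax(xcorr) - (n - 1)                        # lag (in samples) of s2 relative to s1
      tau_hat = lag / fs
      d_hat = (L - c * tau_hat) / 2.0
      print(f"estimated delay {tau_hat*1e3:.1f} ms -> leak at {d_hat:.1f} m from sensor 1 (true {d_true} m)")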

  3. Lactate point-of-care testing for acidosis: Cross-comparison of two devices with routine laboratory results

    Directory of Open Access Journals (Sweden)

    Remco van Horssen

    2016-04-01

    Objectives: Lactate is a major parameter in medical decision making. During labor, it is an indicator for fetal acidosis and immediate intervention. In the Emergency Department (ED), rapid analysis of lactate/blood gas is crucial for optimal patient care. Our objectives were to cross-compare, for the first time, two point-of-care testing (POCT) lactate devices with routine laboratory results using novel tight precision targets and to evaluate different lactate cut-off concentrations to predict metabolic acidosis. Design and methods: Blood samples from the delivery room (n=66) and from the ED (n=85) were analyzed on two POCT devices, the StatStrip-Lactate (Nova Biomedical) and the iSTAT-1 (CG4+ cassettes, Abbott), and compared to the routine laboratory analyzer (ABL-735, Radiometer). Lactate concentrations were cross-compared between these analyzers. Results: The StatStrip correlated well with the ABL-735 (R=0.9737) and with the iSTAT-1 (R=0.9774) for lactate in umbilical cord blood. Lactate concentrations in ED samples measured on the iSTAT-1 and ABL-735 showed a correlation coefficient of R=0.9953. Analytical imprecision was excellent for lactate and pH, while for pO2 and pCO2 the coefficient of variation was relatively high using the iSTAT-1. Conclusion: Both POCT devices showed adequate analytical performance to measure lactate. The StatStrip can indicate metabolic acidosis in 1 μl blood and will be implemented at the delivery room. Keywords: Lactate, Point-of-care testing, Blood gas, Fetal acidosis

  4. Protein truncation test: analysis of two novel point mutations at the carboxy-terminus of the human dystrophin gene associated with mental retardation.

    Science.gov (United States)

    Tuffery, S; Lenk, U; Roberts, R G; Coubes, C; Demaille, J; Claustres, M

    1995-01-01

    Approximately one-third of the mutations responsible for Duchenne muscular dystrophy (DMD) do not involve gross rearrangements of the dystrophin gene. Methods for intensive mutation screening have recently been applied to this immense gene, resulting in the identification of a number of point mutations in DMD patients, mostly translation-terminating mutations. Several observations raised the possibility that the C-terminal region of dystrophin might be involved in some cases of mental retardation associated with DMD. Using single-strand conformation analysis of products amplified by polymerase chain reaction (PCR-SSCA) to screen the terminal domains of the dystrophin gene (exons 60-79) of 20 unrelated patients with DMD or BMD, we detected two novel point mutations in two mentally retarded DMD patients: a 1-bp deletion in exon 70 (10334delC) and a 5' splice donor site alteration in intron 69 (10294 + 1G-->T). Both mutations should result in premature translation termination of dystrophin. The possible effects on the reading frame were analyzed by studying reverse transcripts amplified from peripheral blood lymphocyte mRNA and by the protein truncation test.

  5. Comparative Analysis on Two Accounting Systems of Rural Economic Originations

    Institute of Scientific and Technical Information of China (English)

    2011-01-01

    In order to standardize the financial accounting of two kinds of rural economic organizations, a comparative analysis is conducted of the Accounting System of Village Collective Economic Organizations and the Accounting System of Farmers' Cooperatives (Trial), both issued by the Ministry of Finance. The comparison points out that the scope of application and the accounting principles of the two accounting systems are different. The differences and similarities of the five accounting elements (property, liabilities, owners' rights, costs, and profits and losses) are analyzed, as well as the reasons for these differences and similarities. The results show that both accounting systems reflect the principles of simplification and clarification. The village collective accounting system is used by rural village committees, which perform administrative duties, so it reflects the features of collective benefits; the accounting system of farmers' cooperatives is based on the village collective accounting system and incorporates the norms of the enterprise accounting system, so it represents the demands of collaboration and profit-making.

  6. Bayesian change-point analysis reveals developmental change in a classic theory of mind task.

    Science.gov (United States)

    Baker, Sara T; Leslie, Alan M; Gallistel, C R; Hood, Bruce M

    2016-12-01

    Although learning and development reflect changes situated in an individual brain, most discussions of behavioral change are based on the evidence of group averages. Our reliance on group-averaged data creates a dilemma. On the one hand, we need to use traditional inferential statistics. On the other hand, group averages are highly ambiguous when we need to understand change in the individual; the average pattern of change may characterize all, some, or none of the individuals in the group. Here we present a new method for statistically characterizing developmental change in each individual child we study. Using false-belief tasks, fifty-two children in two cohorts were repeatedly tested for varying lengths of time between 3 and 5 years of age. Using a novel Bayesian change point analysis, we determined both the presence and, just as importantly, the absence of change in individual longitudinal cumulative records. Whenever the analysis supports a change conclusion, it identifies in that child's record the most likely point at which change occurred. Results show striking variability in patterns of change and stability across individual children. We then group the individuals by their various patterns of change or no change. The resulting patterns provide scarce support for sudden changes in competence and shed new light on the concepts of "passing" and "failing" in developmental studies. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
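
    A minimal sketch of Bayesian change-point inference on a single child's pass/fail record (not the authors' analysis code): each candidate change point gets a Beta-Bernoulli marginal likelihood for the trials before and after it, and the best change-point model is compared against a no-change model; the record and priors below are hypothetical.

      # Minimal sketch: exact Bayesian single change-point inference for a 0/1 trial record.
      import numpy as np
      from scipy.special import betaln

      def log_marginal(successes, trials, a=1.0, b=1.0):
          """Log marginal likelihood of a Bernoulli sequence under a Beta(a, b) prior on the rate."""
          return betaln(a + successes, b + trials - successes) - betaln(a, b)

      def change_point_posterior(x):
          x = np.asarray(x, dtype=float)
          n = len(x)
          logp = np.full(n, -np.inf)
          for c in range(1, n):                      # change between trial c-1 and trial c
              logp[c] = log_marginal(x[:c].sum(), c) + log_marginal(x[c:].sum(), n - c)
          log_no_change = log_marginal(x.sum(), n)
          post = np.exp(logp - np.max(logp[1:]))
          post[0] = 0.0
          post /= post.sum()
          return post, log_no_change, logp

      record = [0]*12 + [1]*10 + [0, 1, 1, 1]        # hypothetical false-belief record for one child
      post, log_no_change, logp = change_point_posterior(record)
      best = int(np.argmax(post))
      bayes_factor = np.exp(np.max(logp[1:]) - log_no_change)
      print(f"most likely change point: trial {best}, Bayes factor vs no-change: {bayes_factor:.1f}")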

  7. A model for the two-point velocity correlation function in turbulent channel flow

    International Nuclear Information System (INIS)

    Sahay, A.; Sreenivasan, K.R.

    1996-01-01

    A relatively simple analytical expression is presented to approximate the equal-time, two-point, double-velocity correlation function in turbulent channel flow. To assess the accuracy of the model, we perform the spectral decomposition of the integral operator having the model correlation function as its kernel. Comparisons of the empirical eigenvalues and eigenfunctions with those constructed from direct numerical simulations data show good agreement. copyright 1996 American Institute of Physics

  8. Two- and three-point functions in Liouville theory

    International Nuclear Information System (INIS)

    Dorn, H.; Otto, H.J.

    1994-04-01

    Based on our generalization of the Goulian-Li continuation in the power of the 2D cosmological term we construct the two and three-point correlation functions for Liouville exponentials with generic real coefficients. As a strong argument in favour of the procedure we prove the Liouville equation of motion on the level of three-point functions. The analytical structure of the correlation functions as well as some of its consequences for string theory are discussed. This includes a conjecture on the mass shell condition for excitations of noncritical strings. We also make a comment concerning the correlation functions of the Liouville field itself. (orig.)

  9. Two critical tests for the Critical Point earthquake

    Science.gov (United States)

    Tzanis, A.; Vallianatos, F.

    2003-04-01

    It has been credibly argued that the earthquake generation process is a critical phenomenon culminating with a large event that corresponds to some critical point. In this view, a great earthquake represents the end of a cycle on its associated fault network and the beginning of a new one. The dynamic organization of the fault network evolves as the cycle progresses and a great earthquake becomes more probable, thereby rendering possible the prediction of the cycle's end by monitoring the approach of the fault network toward a critical state. This process may be described by a power-law time-to-failure scaling of the cumulative seismic release rate. Observational evidence has confirmed the power-law scaling in many cases and has empirically determined that the critical exponent in the power law is typically of the order n=0.3. There are also two theoretical predictions for the value of the critical exponent. Ben-Zion and Lyakhovsky (Pure appl. geophys., 159, 2385-2412, 2002) give n=1/3. Rundle et al. (Pure appl. geophys., 157, 2165-2182, 2000) show that the power-law activation associated with a spinodal instability is essentially identical to the power-law acceleration of Benioff strain observed prior to earthquakes; in this case n=0.25. More recently, the CP model has gained support from the development of more dependable models of regional seismicity with realistic fault geometry that show accelerating seismicity before large events. Essentially, these models involve stress transfer to the fault network during the cycle such that the region of accelerating seismicity will scale with the size of the culminating event, as for instance in Bowman and King (Geophys. Res. Let., 38, 4039-4042, 2001). It is thus possible to understand the observed characteristics of distributed accelerating seismicity in terms of a simple process of increasing tectonic stress in a region already subjected to stress inhomogeneities at all scale lengths. Then, the region of
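
    The power-law time-to-failure relation mentioned above can be illustrated with a short fitting sketch on synthetic data, taking the cumulative release as s(t) = A - B*(t_c - t)^m with the exponent m playing the role of n; all numbers are illustrative, not observations.

      # Minimal sketch: fit a power-law time-to-failure curve to synthetic cumulative release data.
      import numpy as np
      from scipy.optimize import curve_fit

      def ttf(t, A, B, t_c, m):
          """Power-law time-to-failure: cumulative release accelerating toward the failure time t_c."""
          return A - B * np.power(np.clip(t_c - t, 1e-9, None), m)

      rng = np.random.default_rng(3)
      t = np.linspace(0.0, 9.5, 80)                     # observation times before a main shock near t = 10
      s_obs = ttf(t, 100.0, 30.0, 10.0, 0.3) + rng.normal(0.0, 1.0, t.size)

      popt, _ = curve_fit(ttf, t, s_obs, p0=[90.0, 20.0, 10.5, 0.5],
                          bounds=([0.0, 0.0, 9.6, 0.05], [200.0, 100.0, 20.0, 1.0]))
      print("fitted A, B, t_c, m:", np.round(popt, 3))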

  10. Measure of departure from marginal point-symmetry for two-way contingency tables

    Directory of Open Access Journals (Sweden)

    Kouji Yamamoto

    2013-05-01

    For two-way contingency tables, Tomizawa (1985) considered the point-symmetry and marginal point-symmetry models, and Tomizawa, Yamamoto and Tahata (2007) proposed a measure to represent the degree of departure from point-symmetry. The present paper proposes a measure to represent the degree of departure from marginal point-symmetry for two-way tables. The proposed measure is expressed using the Cressie-Read power-divergence or the Patil-Taillie diversity index. This measure would be useful for comparing the degrees of departure from marginal point-symmetry in several tables. The relationship between the degree of departure from marginal point-symmetry and the measure is shown when it is reasonable to assume an underlying bivariate normal distribution. Examples are shown.

  11. Two point function for a simple general relativistic quantum model

    OpenAIRE

    Colosi, Daniele

    2007-01-01

    We study the quantum theory of a simple general relativistic quantum model of two coupled harmonic oscillators and compute the two-point function following a proposal first introduced in the context of loop quantum gravity.

  12. Uncertainty analysis of point by point sampling complex surfaces using touch probe CMMs

    DEFF Research Database (Denmark)

    Barini, Emanuele; Tosello, Guido; De Chiffre, Leonardo

    2007-01-01

    The paper describes a study concerning point-by-point scanning of complex surfaces using tactile CMMs. A four-factor, two-level full factorial experiment was carried out, involving measurements on a complex surface configuration item comprising a sphere, a cylinder and a cone, combined in a single...

  13. Analysis of genetic distance between Peruvian Alpaca (Vicugna pacos) showing two distinct fleece phenotypes, Suri and Huacaya, by means of microsatellite markers

    Directory of Open Access Journals (Sweden)

    Carlo Renieri

    2011-10-01

    Two coat phenotypes exist in alpaca, Huacaya and Suri. The two coats show different fleece structure, textile characteristics and prices on the market. Although present scientific knowledge suggests a simple genetic model of inheritance, there is a tendency to manage and consider the two phenotypes as two different breeds. A panel of 13 microsatellites was used in this study to assess the genetic distance between Suri and Huacaya alpacas in a sample of non-related animals from two phenotypically pure flocks at the Illpa-Puno experimental station in Quimsachata, Peru. The animals are part of a germplasm established approximately 20 years ago and have been bred separately according to their coat type since then. Genetic variability parameters were also calculated. The data were statistically analyzed using the software Genalex 6.3, Phylip 3.69 and Fstat 2.9.3.2. The sample was tested for Hardy-Weinberg equilibrium (HWE); after strict Bonferroni correction only one locus (LCA37) showed deviation from equilibrium, and no loci associations showed significant disequilibrium. Observed heterozygosity (Ho=0.766; SE=0.044), expected heterozygosity (He=0.769; SE=0.033), number of alleles (Na=9.667; SE=0.772) and fixation index (F=0.004; SE=0.036) are comparable to data from previous studies. Measures of genetic distance were 0.06 for Nei's and 0.03 for Cavalli-Sforza's. The analysis of molecular variance reported no existing variance between populations. Considering the origin of the animals, their post-domestication evolution and the reproductive practices in place, the results do not show genetic differentiation between the two populations for the studied loci.

  14. Two-point functions in (loop) quantum cosmology

    Energy Technology Data Exchange (ETDEWEB)

    Calcagni, Gianluca; Oriti, Daniele [Max-Planck-Institute for Gravitational Physics (Albert Einstein Institute), Am Muehlenberg 1, D-14476 Golm (Germany); Gielen, Steffen [Max-Planck-Institute for Gravitational Physics (Albert Einstein Institute), Am Muehlenberg 1, D-14476 Golm (Germany); DAMTP, Centre for Mathematical Sciences, Wilberforce Road, Cambridge CB3 0WA (United Kingdom)

    2011-07-01

    We discuss the path-integral formulation of quantum cosmology with a massless scalar field as a sum-over-histories of volume transitions, with particular but non-exclusive reference to loop quantum cosmology (LQC). Exploiting the analogy with the relativistic particle, we give a complete overview of the possible two-point functions, pointing out the choices involved in their definitions, deriving their vertex expansions and the composition laws they satisfy. We clarify the origin and relations of different quantities previously defined in the literature, in particular the tie between definitions using a group averaging procedure and those in a deparametrized framework. Finally, we draw some conclusions about the physics of a single quantum universe (where there exist superselection rules on positive- and negative-frequency sectors and different choices of inner product are physically equivalent) and multiverse field theories where the role of these sectors and the inner product are reinterpreted.

  15. Two-point functions in (loop) quantum cosmology

    Energy Technology Data Exchange (ETDEWEB)

    Calcagni, Gianluca; Gielen, Steffen; Oriti, Daniele, E-mail: calcagni@aei.mpg.de, E-mail: gielen@aei.mpg.de, E-mail: doriti@aei.mpg.de [Max Planck Institute for Gravitational Physics (Albert Einstein Institute), Am Muehlenberg 1, D-14476 Golm (Germany)

    2011-06-21

    The path-integral formulation of quantum cosmology with a massless scalar field as a sum-over-histories of volume transitions is discussed, with particular but non-exclusive reference to loop quantum cosmology. Exploiting the analogy with the relativistic particle, we give a complete overview of the possible two-point functions, pointing out the choices involved in their definitions, deriving their vertex expansions and the composition laws they satisfy. We clarify the origin and relations of different quantities previously defined in the literature, in particular the tie between definitions using a group averaging procedure and those in a deparametrized framework. Finally, we draw some conclusions about the physics of a single quantum universe (where there exist superselection rules on positive- and negative-frequency sectors and different choices of inner product are physically equivalent) and multiverse field theories where the role of these sectors and the inner product are reinterpreted.

  16. Two-point functions in (loop) quantum cosmology

    International Nuclear Information System (INIS)

    Calcagni, Gianluca; Gielen, Steffen; Oriti, Daniele

    2011-01-01

    The path-integral formulation of quantum cosmology with a massless scalar field as a sum-over-histories of volume transitions is discussed, with particular but non-exclusive reference to loop quantum cosmology. Exploiting the analogy with the relativistic particle, we give a complete overview of the possible two-point functions, pointing out the choices involved in their definitions, deriving their vertex expansions and the composition laws they satisfy. We clarify the origin and relations of different quantities previously defined in the literature, in particular the tie between definitions using a group averaging procedure and those in a deparametrized framework. Finally, we draw some conclusions about the physics of a single quantum universe (where there exist superselection rules on positive- and negative-frequency sectors and different choices of inner product are physically equivalent) and multiverse field theories where the role of these sectors and the inner product are reinterpreted.

  17. Asymptotic behaviour of two-point functions in multi-species models

    Directory of Open Access Journals (Sweden)

    Karol K. Kozlowski

    2016-05-01

    Full Text Available We extract the long-distance asymptotic behaviour of two-point correlation functions in massless quantum integrable models containing multi-species excitations. For such a purpose, we extend to these models the method of a large-distance regime re-summation of the form factor expansion of correlation functions. The key feature of our analysis is a technical hypothesis on the large-volume behaviour of the form factors of local operators in such models. We check the validity of this hypothesis on the example of the SU(3)-invariant XXX magnet by means of the determinant representations for the form factors of local operators in this model. Our approach confirms the structure of the critical exponents obtained previously for numerous models solvable by the nested Bethe Ansatz.

  18. Comparison of clinical outcomes of multi-point umbrella suturing and single purse suturing with two-point traction after procedure for prolapse and hemorrhoids (PPH) surgery.

    Science.gov (United States)

    Jiang, Huiyong; Hao, Xiuyan; Xin, Ying; Pan, Youzhen

    2017-11-01

    To compare the clinical outcomes of multipoint umbrella suture and single-purse suture with two-point traction after procedure for prolapse and hemorrhoids surgery (PPH) for the treatment of mixed hemorrhoids. Ninety patients were randomly divided into a PPH plus single-purse suture group (Group A) and a PPH plus multipoint umbrella suture group (Group B). All operations were performed by an experienced surgeon. Operation time, width of the specimen, hemorrhoid retraction extent, postoperative pain, postoperative bleeding, and length of hospitalization were recorded and compared. Statistical analysis was conducted by t-test and χ² test. There were no significant differences in sex, age, course of disease, and degree of prolapse of hemorrhoids between the two groups. The operative time in Group A was significantly shorter than that in Group B, whereas postoperative pain and hemorrhoid core retraction were significantly lower in Group B; for the remaining outcomes no significant difference (P > 0.05 for all comparisons) was observed. The multipoint umbrella suture showed better clinical outcomes because of its targeted suture according to the extent of hemorrhoid prolapse. Copyright © 2017. Published by Elsevier Ltd.

  19. Measurement analysis of two radials with a common-origin point and its application.

    Science.gov (United States)

    Liu, Zhenyao; Yang, Jidong; Zhu, Weiwei; Zhou, Shang; Tan, Xuanping

    2017-08-01

    In spectral analysis, a chemical component is usually identified by its characteristic spectra, especially the peaks. If two components have overlapping spectral peaks, they are generally considered to be indiscriminate in current analytical chemistry textbooks and related literature. However, if the intensities of the overlapping major spectral peaks are additive, and have different rates of change with respect to variations in the concentration of the individual components, a simple method, named the 'common-origin ray', for the simultaneous determination of two components can be established. Several case studies highlighting its applications are presented. Copyright © 2017 John Wiley & Sons, Ltd.

  20. Point Information Gain and Multidimensional Data Analysis

    Directory of Open Access Journals (Sweden)

    Renata Rychtáriková

    2016-10-01

    Full Text Available We generalize the point information gain (PIG) and derived quantities, i.e., point information gain entropy (PIE) and point information gain entropy density (PIED), for the case of the Rényi entropy and simulate the behavior of PIG for typical distributions. We also use these methods for the analysis of multidimensional datasets. We demonstrate the main properties of PIE/PIED spectra for real data with the examples of several images and discuss further possible utilizations in other fields of data processing.
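
    The record above does not spell out the computation, so the following Python sketch illustrates one common convention (an assumption here, not taken from the paper): the PIG of a point is the change in Rényi entropy of the value histogram when that point's occurrence is removed, PIE is the sum of PIG over all points, and PIED the sum over distinct values.

        import numpy as np

        def renyi_entropy(counts, alpha):
            # Rényi entropy (bits) of the discrete distribution given by histogram counts
            p = counts[counts > 0] / counts.sum()
            if np.isclose(alpha, 1.0):                      # Shannon limit
                return float(-np.sum(p * np.log2(p)))
            return float(np.log2(np.sum(p ** alpha)) / (1.0 - alpha))

        def point_information_gain(values, alpha=2.0):
            # PIG of each occurring value: entropy change when one occurrence of that
            # value is removed from the histogram (illustrative convention, assumed here)
            vals, counts = np.unique(values, return_counts=True)
            h_full = renyi_entropy(counts, alpha)
            pig = {}
            for i, v in enumerate(vals):
                reduced = counts.copy()
                reduced[i] -= 1
                pig[v] = renyi_entropy(reduced, alpha) - h_full
            return pig

        # toy example: PIG spectrum of a small synthetic 8-bit "image"
        rng = np.random.default_rng(0)
        img = rng.integers(0, 256, size=(64, 64))
        pig = point_information_gain(img.ravel(), alpha=2.0)
        pie = sum(pig[v] for v in img.ravel())              # PIE: sum of PIG over all points
        pied = sum(pig.values())                            # PIED: sum over distinct values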

  1. An approach for analyzing the ensemble mean from a dynamic point of view

    OpenAIRE

    Pengfei, Wang

    2014-01-01

    The simultaneous ensemble mean equations (LEMEs) for the Lorenz model are obtained, enabling us to analyze the properties of the ensemble mean from a dynamical point of view. The qualitative analysis of the two-sample and n-sample LEMEs shows that the locations and number of stable points differ from those of the Lorenz equations (LEs), and the results are validated by numerical experiments. The analysis of the eigenmatrix of the stable points of the LEMEs indicates that the stability of these stable point...

  2. Bifurcations of heterodimensional cycles with two saddle points

    Energy Technology Data Exchange (ETDEWEB)

    Geng Fengjie [School of Information Technology, China University of Geosciences (Beijing), Beijing 100083 (China)], E-mail: gengfengjie_hbu@163.com; Zhu Deming [Department of Mathematics, East China Normal University, Shanghai 200062 (China)], E-mail: dmzhu@math.ecnu.edu.cn; Xu Yancong [Department of Mathematics, East China Normal University, Shanghai 200062 (China)], E-mail: yancongx@163.com

    2009-03-15

    The bifurcations of 2-point heterodimensional cycles are investigated in this paper. Under some generic conditions, we establish the existence of one homoclinic loop, one periodic orbit, two periodic orbits, one 2-fold periodic orbit, and the coexistence of one periodic orbit and heteroclinic loop. Some bifurcation patterns different to the case of non-heterodimensional heteroclinic cycles are revealed.

  3. Bifurcations of heterodimensional cycles with two saddle points

    International Nuclear Information System (INIS)

    Geng Fengjie; Zhu Deming; Xu Yancong

    2009-01-01

    The bifurcations of 2-point heterodimensional cycles are investigated in this paper. Under some generic conditions, we establish the existence of one homoclinic loop, one periodic orbit, two periodic orbits, one 2-fold periodic orbit, and the coexistence of one periodic orbit and heteroclinic loop. Some bifurcation patterns different to the case of non-heterodimensional heteroclinic cycles are revealed.

  4. Two- and three-point functions in the D=1 matrix model

    International Nuclear Information System (INIS)

    Ben-Menahem, S.

    1991-01-01

    The critical behavior of the genus-zero two-point function in the D=1 matrix model is carefully analyzed for arbitrary embedding-space momentum. Kostov's result is recovered for momenta below a certain value P_0 (which is 1/√α' in the continuum language), with a non-universal form factor which is expressed simply in terms of the critical fermion trajectory. For momenta above P_0, the Kostov scaling term is found to be subdominant. We then extend the large-N WKB treatment to calculate the genus-zero three-point function, and elucidate its critical behavior when all momenta are below P_0. The resulting universal scaling behavior, as well as the non-universal form factor for the three-point function, are related to the two-point functions of the individual external momenta, through the factorization familiar from continuum conformal field theories. (orig.)

  5. Solving fuzzy two-point boundary value problem using fuzzy Laplace transform

    OpenAIRE

    Ahmad, Latif; Farooq, Muhammad; Ullah, Saif; Abdullah, Saleem

    2014-01-01

    A natural way to model dynamic systems under uncertainty is to use fuzzy boundary value problems (FBVPs) and related uncertain systems. In this paper we use the fuzzy Laplace transform to find the solution of two-point boundary value problems under generalized Hukuhara differentiability. We illustrate the method for the solution of the well-known two-point boundary value problem for the Schrödinger equation and a homogeneous boundary value problem. Consequently, we investigate the solutions of FBVPs under as a ne...

  6. Pasteurised milk and implementation of HACCP (Hazard Analysis Critical Control Point)

    Directory of Open Access Journals (Sweden)

    T.B Murdiati

    2004-10-01

    Full Text Available The purpose of pasteurisation is to destroy pathogenic bacteria without affecting the taste, flavor, and nutritional value. A study on the implementation of HACCP (Hazard Analysis Critical Control Point) in producing pasteurised milk was carried out in four pasteurised milk processing units, one in Jakarta, two in Bandung and one in Bogor. The critical control points in the production line were identified. Milk samples were collected from the critical points and were analysed for the total number of microbes. Antibiotic residue detection was performed on raw milk. The study indicated that one unit in Bandung and one unit in Jakarta produced pasteurised milk with a lower number of microbes than the other units, due to better management and control applied along the chain of production. Penicillin residues were detected in the raw milk used by the unit in Bogor. Six critical points, the hazards that might arise at those points, and how to prevent those hazards were identified. A quality assurance system such as HACCP would be able to ensure high-quality and safe pasteurised milk, and should be implemented gradually.

  7. Marmosets, Raree shows and Pulcinelle: an Analysis and Edition of a Hitherto-Unpublished Carnival Play by Antonio de Zamora

    Directory of Open Access Journals (Sweden)

    Fernando Plata

    2013-05-01

    Full Text Available This paper is an analysis, annotation and edition of the Mojiganga del mundinovo (‘The raree show’, a carnival play) by Antonio de Zamora. The play was performed in Madrid in 1698 by the troupe of Carlos Vallejo, along with the sacramental one-act play El templo vivo de Dios (‘The living temple of God’). The hitherto unpublished text is based on the only two extant manuscripts, located in archives in Madrid. Despite the play’s title, my analysis argues that the novelty in this play is not so much the raree show, a contraption popularized four decades earlier in Golden Age theater, as the marmosets. The death of two marmosets and the ensuing desolation of their owner, Ms. Estupenda, both trigger the play and provide it with a plot. The marmosets also point to a changing mentality in late 17th-century society regarding ladies’ keeping of marmosets and other monkeys as pets.

  8. Comparison of apparent diffusion coefficients (ADCs) between two-point and multi-point analyses using high-B-value diffusion MR imaging

    International Nuclear Information System (INIS)

    Kubo, Hitoshi; Maeda, Masayuki; Araki, Akinobu

    2001-01-01

    We evaluated the accuracy of calculating apparent diffusion coefficients (ADCs) using high-B-value diffusion images. Echo planar diffusion-weighted MR images were obtained at 1.5 tesla in five standard locations in six subjects using gradient strengths corresponding to B values from 0 to 3000 s/mm². Estimation of ADCs was made using two methods: a nonlinear regression model using measurements from a full set of B values (multi-point method) and linear estimation using B values of 0 and max only (two-point method). A high correlation between the two methods was noted (r=0.99), and the mean percentage differences were -0.53% and 0.53% in phantom and human brain, respectively. These results suggest there is little error in estimating ADCs calculated by the two-point technique using high-B-value diffusion MR images. (author)
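
    As a rough illustration of the two estimators compared in this record, the following Python sketch fits a synthetic mono-exponential diffusion decay both ways: the two-point formula ADC = ln(S(0)/S(b_max))/b_max and a nonlinear regression over the full set of B values. The signal values and noise level are invented for the example, and scipy's curve_fit stands in for whatever fitting routine the authors used.

        import numpy as np
        from scipy.optimize import curve_fit

        b = np.array([0, 500, 1000, 1500, 2000, 2500, 3000], dtype=float)  # s/mm^2
        true_adc = 0.8e-3                                                   # mm^2/s (illustrative)
        s0 = 1000.0
        rng = np.random.default_rng(1)
        signal = s0 * np.exp(-b * true_adc) + rng.normal(0, 5, b.size)      # noisy decay

        # two-point estimate: b = 0 and b = b_max only
        adc_two_point = np.log(signal[0] / signal[-1]) / b[-1]

        # multi-point estimate: nonlinear regression over all b-values
        mono_exp = lambda b, s0, adc: s0 * np.exp(-b * adc)
        popt, _ = curve_fit(mono_exp, b, signal, p0=(signal[0], 1e-3))
        adc_multi_point = popt[1]

        diff_percent = 100 * (adc_two_point - adc_multi_point) / adc_multi_point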

  9. Design of glass-ceramic complex microstructure with using onset point of crystallization in differential thermal analysis

    International Nuclear Information System (INIS)

    Hwang, Seongjin; Kim, Jinho; Shin, Hyo-Soon; Kim, Jong-Hee; Kim, Hyungsun

    2008-01-01

    Two types of frits with different compositions were used to develop a high-strength substrate for electronic packaging using a low temperature co-fired ceramic process. In order to reveal the crystallization stages during heating to approximately 900 °C, a glass-ceramic consisting of the two types of frits, which crystallized to diopside and anorthite after firing, was tested at different mixing ratios of the frits. The exothermic peaks, deconvolved with Gaussian functions in the differential thermal analysis curves, were used to determine the onset point of crystallization of diopside or anorthite. The onset points of crystallization were affected by the mixing ratio of the frits, and the microstructure of the glass-ceramic depended on the onset point of crystallization. It was found that when multicrystalline phases appear in the microstructure, the resulting complex microstructure can be predicted from the onset point of crystallization obtained by differential thermal analysis.

  10. Parametric study of two-body floating-point wave absorber

    Science.gov (United States)

    Amiri, Atena; Panahi, Roozbeh; Radfar, Soheil

    2016-03-01

    In this paper, we present a comprehensive numerical simulation of a point wave absorber in deep water. Analyses are performed in both the frequency and time domains. The converter is a two-body floating-point absorber (FPA) with one degree of freedom in the heave direction. Its two parts are connected by a linear mass-spring-damper system. The commercial ANSYS-AQWA software used in this study performed well in the validation cases considered. The velocity potential is obtained by assuming incompressible and irrotational flow. We investigated the effects of wave characteristics on energy conversion and device efficiency, including wave height and wave period, as well as the device diameter, draft, geometry, and damping coefficient. To validate the model, we compared our numerical results with those from similar experiments. Our results can help to maximize the converter's efficiency under specific conditions.

  11. Analysis of relationship between registration performance of point cloud statistical model and generation method of corresponding points

    International Nuclear Information System (INIS)

    Yamaoka, Naoto; Watanabe, Wataru; Hontani, Hidekata

    2010-01-01

    When constructing a statistical point cloud model, corresponding points usually need to be calculated, and the resulting statistical model differs depending on the method used to calculate them. This article examines how the method used to calculate corresponding points affects a statistical model of a human organ. We validated the performance of the statistical models by registering an organ surface in a 3D medical image. Two methods for calculating corresponding points are compared. The first, Generalized Multi-Dimensional Scaling (GMDS), determines the corresponding points from the shapes of two curved surfaces; the second, an entropy-based particle system, chooses corresponding points by treating a number of curved surfaces statistically. With each method we constructed a statistical model and used it for registration with the medical image. For the estimation we use non-parametric belief propagation, which estimates not only the position of the organ but also the probability density of the organ position. We evaluate how the two methods of calculating corresponding points affect the statistical model through the change in the probability density at each point. (author)

  12. A Study of a Two Stage Maximum Power Point Tracking Control of a Photovoltaic System under Partially Shaded Insolation Conditions

    Science.gov (United States)

    Kobayashi, Kenji; Takano, Ichiro; Sawada, Yoshio

    A photovoltaic array shows relatively low output power density and has a greatly drooping current-voltage (I-V) characteristic. Therefore, maximum power point tracking (MPPT) control is used to maximize the output power of the array. Many papers have been published on MPPT. However, the current-power (I-P) curve sometimes shows multiple local maximum points under non-uniform insolation conditions, and the operating point of the PV system tends to converge to a local maximum output point which is not the real maximal output point on the I-P curve. Some control schemes have been reported that try to avoid this difficulty, but most of them become rather complicated. A two-stage MPPT control method is therefore proposed in this paper to realize a relatively simple control system which can track the real maximum power point even under non-uniform insolation conditions. The feasibility of this control concept is confirmed for steady insolation as well as for rapidly changing insolation by simulation study using the software PSIM and LabVIEW. In addition, a simulated experiment confirms fundamental operation of the two-stage MPPT control.
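
    The record describes the idea of the two-stage control but not its internals, so the sketch below only captures the general concept under stated assumptions: a coarse sweep of the operating voltage first locates the neighbourhood of the global maximum of a synthetic two-peak P-V curve, and a conventional perturb-and-observe loop then refines the operating point. The curve shape and step sizes are illustrative, not the authors' PSIM/LabVIEW model.

        import numpy as np

        def pv_power(v):
            # synthetic P-V curve with two local maxima, mimicking partial shading
            return np.maximum(0.0, 60*np.exp(-((v - 14)/6)**2) + 95*np.exp(-((v - 29)/4)**2))

        # stage 1: coarse global sweep to find the region of the true maximum
        v_scan = np.linspace(0, 40, 41)
        v_op = v_scan[np.argmax(pv_power(v_scan))]

        # stage 2: perturb-and-observe refinement around the selected region
        step, p_prev = 0.2, pv_power(v_op)
        for _ in range(200):
            v_try = v_op + step
            p_try = pv_power(v_try)
            if p_try < p_prev:          # power dropped: reverse the perturbation direction
                step = -step
            v_op, p_prev = v_try, p_try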

  13. Two-dimensional DFA scaling analysis applied to encrypted images

    Science.gov (United States)

    Vargas-Olmos, C.; Murguía, J. S.; Ramírez-Torres, M. T.; Mejía Carlos, M.; Rosu, H. C.; González-Aguilar, H.

    2015-01-01

    The technique of detrended fluctuation analysis (DFA) has been widely used to unveil scaling properties of many different signals. In this paper, we determine scaling properties in the encrypted images by means of a two-dimensional DFA approach. To carry out the image encryption, we use an enhanced cryptosystem based on a rule-90 cellular automaton and we compare the results obtained with its unmodified version and the encryption system AES. The numerical results show that the encrypted images present a persistent behavior which is close to that of the 1/f-noise. These results point to the possibility that the DFA scaling exponent can be used to measure the quality of the encrypted image content.
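
    A minimal sketch of a two-dimensional DFA scaling exponent of the kind used in such studies (the exact detrending order and scale range of the paper are not reproduced): the image is integrated into a 2-D profile, divided into s-by-s blocks, a least-squares plane is removed from each block, and the slope of log F(s) versus log s gives the exponent.

        import numpy as np

        def dfa2d(z, scales):
            # two-dimensional DFA: fluctuation F(s) for each window size s
            y = np.cumsum(np.cumsum(z - z.mean(), axis=0), axis=1)   # 2-D profile
            fs = []
            for s in scales:
                nx, ny = y.shape[0] // s, y.shape[1] // s
                var = []
                for i in range(nx):
                    for j in range(ny):
                        w = y[i*s:(i+1)*s, j*s:(j+1)*s]
                        u, v = np.meshgrid(np.arange(s), np.arange(s), indexing="ij")
                        # least-squares plane a*u + b*v + c as the local trend
                        A = np.column_stack([u.ravel(), v.ravel(), np.ones(s*s)])
                        coef, *_ = np.linalg.lstsq(A, w.ravel(), rcond=None)
                        var.append(np.mean((w.ravel() - A @ coef)**2))
                fs.append(np.sqrt(np.mean(var)))
            return np.array(fs)

        img = np.random.default_rng(2).random((256, 256))     # stand-in for an encrypted image
        scales = np.array([8, 16, 32, 64])
        F = dfa2d(img, scales)
        alpha = np.polyfit(np.log(scales), np.log(F), 1)[0]   # 2-D DFA scaling exponent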

  14. Breed differences in dogs' sensitivity to human points: a meta-analysis.

    Science.gov (United States)

    Dorey, Nicole R; Udell, Monique A R; Wynne, Clive D L

    2009-07-01

    The last decade has seen a substantial increase in research on the behavioral and cognitive abilities of pet dogs, Canis familiaris. The most commonly used experimental paradigm is the object-choice task in which a dog is given a choice of two containers and guided to the reinforced object by human pointing gestures. We review here studies of this type and attempt a meta-analysis of the available data. In the meta-analysis breeds of dogs were grouped into the eight categories of the American Kennel Club, and into four clusters identified by Parker and Ostrander [Parker, H.G., Ostrander, E.A., 2005. Canine genomics and genetics: running with the pack. PLoS Genet. 1, 507-513] on the basis of a genetic analysis. No differences in performance between breeds categorized in either fashion were identified. Rather, all dog breeds appear to be similarly and highly successful in following human points to locate desired food. We suggest this result could be due to the paucity of data available in published studies, and the restricted range of breeds tested.

  15. Quantum electrodynamics and light rays. [Two-point correlation functions

    Energy Technology Data Exchange (ETDEWEB)

    Sudarshan, E.C.G.

    1978-11-01

    Light is a quantum electrodynamic entity and hence bundles of rays must be describable in this framework. The duality in the description of elementary optical phenomena is demonstrated in terms of two-point correlation functions and in terms of collections of light rays. The generalizations necessary to deal with two-slit interference and diffraction by a rectangular slit are worked out and the usefulness of the notion of rays of darkness illustrated. 10 references.

  16. Hygienic-sanitary working practices and implementation of a Hazard Analysis and Critical Control Point (HACCP) plan in lobster processing industries

    Directory of Open Access Journals (Sweden)

    Cristina Farias da Fonseca

    2013-03-01

    Full Text Available This study aimed to verify the hygienic-sanitary working practices and to create and implement a Hazard Analysis Critical Control Point (HACCP) plan in two lobster processing industries in Pernambuco State, Brazil. The industries studied process frozen whole lobsters, frozen whole cooked lobsters, and frozen lobster tails for exportation. The application of the hygienic-sanitary checklist in the industries analyzed achieved conformity rates above 96% for the aspects evaluated. The use of the Hazard Analysis Critical Control Point (HACCP) plan resulted in the detection of two critical control points (CCPs), including the receiving and classification steps in the processing of frozen lobster and frozen lobster tails, and an additional critical control point (CCP) was detected during the cooking step of processing of the whole frozen cooked lobster. The proper implementation of the Hazard Analysis Critical Control Point (HACCP) plan in the lobster processing industries studied proved to be the safest and most cost-effective method to monitor the hazards at each critical control point (CCP).

  17. Fermion-induced quantum critical points

    OpenAIRE

    Li, Zi-Xiang; Jiang, Yi-Fan; Jian, Shao-Kai; Yao, Hong

    2017-01-01

    A unified theory of quantum critical points beyond the conventional Landau–Ginzburg–Wilson paradigm remains unknown. According to the Landau cubic criterion, phase transitions should be first-order when cubic terms of order parameters are allowed by symmetry in the Landau–Ginzburg free energy. Here, from renormalization group analysis, we show that second-order quantum phase transitions can occur at such putatively first-order transitions in interacting two-dimensional Dirac semimetals. As such t...

  18. Point-point and point-line moving-window correlation spectroscopy and its applications

    Science.gov (United States)

    Zhou, Qun; Sun, Suqin; Zhan, Daqi; Yu, Zhiwu

    2008-07-01

    In this paper, we present a new extension of generalized two-dimensional (2D) correlation spectroscopy. Two new algorithms, namely point-point (P-P) correlation and point-line (P-L) correlation, have been introduced to perform moving-window 2D correlation (MW2D) analysis. The new method has been applied to a spectral model consisting of two different processes. The results indicate that P-P correlation spectroscopy can unveil the details and reconstitute the entire process, whilst P-L correlation provides the general features of the processes concerned. The phase transition behavior of dimyristoylphosphatidylethanolamine (DMPE) has been studied using MW2D correlation spectroscopy. The newly proposed method verifies that the phase transition temperature is 56 °C, the same as the result obtained from a differential scanning calorimeter. To illustrate the new method further, a lysine and lactose mixture has been studied under thermal perturbation. Using the P-P MW2D, the Maillard reaction of the mixture was clearly monitored, which has been very difficult using a conventional display of FTIR spectra.
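
    The following sketch illustrates a point-point moving-window synchronous correlation in the spirit of the method described, computed here for two chosen spectral channels of a synthetic two-band data set; the exact normalization and window conventions of the original algorithm are assumptions.

        import numpy as np

        def mw2d_point_point(spectra, idx1, idx2, window=5):
            # point-point moving-window synchronous correlation between two spectral
            # channels along the perturbation axis; spectra: (n_perturbation, n_channels)
            n = spectra.shape[0]
            half = window // 2
            corr = np.full(n, np.nan)
            for k in range(half, n - half):
                win = spectra[k-half:k+half+1]
                dyn = win - win.mean(axis=0)                    # dynamic spectra in the window
                corr[k] = np.mean(dyn[:, idx1] * dyn[:, idx2])  # synchronous intensity
            return corr

        # toy model: one band growing and one decaying with the perturbation variable
        t = np.linspace(0, 1, 50)
        wn = np.linspace(1000, 1800, 200)
        band = lambda c, w: np.exp(-((wn - c) / w)**2)
        spectra = np.outer(t, band(1650, 20)) + np.outer(1 - t, band(1250, 20))
        cc = mw2d_point_point(spectra,
                              idx1=np.argmin(abs(wn - 1650)),
                              idx2=np.argmin(abs(wn - 1250)))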

  19. Conclusion of LOD-score analysis for family data generated under two-locus models.

    Science.gov (United States)

    Dizier, M H; Babron, M C; Clerget-Darpoux, F

    1996-06-01

    The power to detect linkage by the LOD-score method is investigated here for diseases that depend on the effects of two genes. The classical strategy is, first, to detect a major-gene (MG) effect by segregation analysis and, second, to seek for linkage with genetic markers by the LOD-score method using the MG parameters. We already showed that segregation analysis can lead to evidence for a MG effect for many two-locus models, with the estimates of the MG parameters being very different from those of the two genes involved in the disease. We show here that use of these MG parameter estimates in the LOD-score analysis may lead to a failure to detect linkage for some two-locus models. For these models, use of the sib-pair method gives a non-negligible increase of power to detect linkage. The linkage-homogeneity test among subsamples differing for the familial disease distribution provides evidence of parameter misspecification, when the MG parameters are used. Moreover, for most of the models, use of the MG parameters in LOD-score analysis leads to a large bias in estimation of the recombination fraction and sometimes also to a rejection of linkage for the true recombination fraction. A final important point is that a strong evidence of an MG effect, obtained by segregation analysis, does not necessarily imply that linkage will be detected for at least one of the two genes, even with the true parameters and with a close informative marker.

  20. Conclusions of LOD-score analysis for family data generated under two-locus models

    Energy Technology Data Exchange (ETDEWEB)

    Dizier, M.H.; Babron, M.C.; Clerget-Darpoux, F. [Unite de Recherches d'Epidemiologie Genetique, Paris (France)]

    1996-06-01

    The power to detect linkage by the LOD-score method is investigated here for diseases that depend on the effects of two genes. The classical strategy is, first, to detect a major-gene (MG) effect by segregation analysis and, second, to seek for linkage with genetic markers by the LOD-score method using the MG parameters. We already showed that segregation analysis can lead to evidence for a MG effect for many two-locus models, with the estimates of the MG parameters being very different from those of the two genes involved in the disease. We show here that use of these MG parameter estimates in the LOD-score analysis may lead to a failure to detect linkage for some two-locus models. For these models, use of the sib-pair method gives a non-negligible increase of power to detect linkage. The linkage-homogeneity test among subsamples differing for the familial disease distribution provides evidence of parameter misspecification, when the MG parameters are used. Moreover, for most of the models, use of the MG parameters in LOD-score analysis leads to a large bias in estimation of the recombination fraction and sometimes also to a rejection of linkage for the true recombination fraction. A final important point is that a strong evidence of an MG effect, obtained by segregation analysis, does not necessarily imply that linkage will be detected for at least one of the two genes, even with the true parameters and with a close informative marker. 17 refs., 3 tabs.

  1. Confidence intervals for the first crossing point of two hazard functions.

    Science.gov (United States)

    Cheng, Ming-Yen; Qiu, Peihua; Tan, Xianming; Tu, Dongsheng

    2009-12-01

    The phenomenon of crossing hazard rates is common in clinical trials with time-to-event endpoints. Many methods have been proposed for testing equality of hazard functions against a crossing-hazards alternative. However, relatively few approaches are available in the literature for point or interval estimation of the crossing time point. The problem of constructing confidence intervals for the first crossing time point of two hazard functions is considered in this paper. After reviewing a recent procedure based on Cox proportional hazard modeling with a Box-Cox transformation of the time to event, a nonparametric procedure using a kernel smoothing estimate of the hazard ratio is proposed. Both procedures are evaluated by Monte Carlo simulations and applied to two clinical trial datasets.
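
    A hedged sketch of the nonparametric idea: kernel-smoothed Nelson-Aalen-type hazard estimates are formed for two samples and the first sign change of their difference is taken as the estimated crossing time. The confidence-interval construction discussed in the record is not reproduced, and the Weibull samples and bandwidth are illustrative.

        import numpy as np

        def kernel_hazard(event_times, events, grid, bw):
            # kernel-smoothed Nelson-Aalen hazard estimate on a time grid
            # event_times: observed times; events: 1 = event, 0 = censored
            order = np.argsort(event_times)
            t, d = event_times[order], events[order]
            n = len(t)
            at_risk = n - np.arange(n)                       # number at risk just before each time
            increments = d / at_risk                         # Nelson-Aalen jump sizes
            K = lambda u: np.exp(-0.5*u**2) / np.sqrt(2*np.pi)   # Gaussian kernel
            return np.array([np.sum(K((g - t) / bw) * increments) / bw for g in grid])

        def first_crossing(grid, h1, h2):
            # first grid time at which the sign of h1 - h2 changes
            s = np.sign(h1 - h2)
            idx = np.where(np.diff(s) != 0)[0]
            return grid[idx[0] + 1] if idx.size else None

        rng = np.random.default_rng(3)
        t1 = rng.weibull(0.8, 200) * 5      # decreasing hazard
        t2 = rng.weibull(1.5, 200) * 5      # increasing hazard, so the hazards cross
        grid = np.linspace(0.2, 8, 200)
        h1 = kernel_hazard(t1, np.ones_like(t1), grid, bw=0.8)
        h2 = kernel_hazard(t2, np.ones_like(t2), grid, bw=0.8)
        t_cross = first_crossing(grid, h1, h2)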

  2. Tipping point analysis of ocean acoustic noise

    Science.gov (United States)

    Livina, Valerie N.; Brouwer, Albert; Harris, Peter; Wang, Lian; Sotirakopoulos, Kostas; Robinson, Stephen

    2018-02-01

    We apply tipping point analysis to a large record of ocean acoustic data to identify the main components of the acoustic dynamical system and study possible bifurcations and transitions of the system. The analysis is based on a statistical physics framework with stochastic modelling, where we represent the observed data as a composition of deterministic and stochastic components estimated from the data using time-series techniques. We analyse long-term and seasonal trends, system states and acoustic fluctuations to reconstruct a one-dimensional stochastic equation to approximate the acoustic dynamical system. We apply potential analysis to acoustic fluctuations and detect several changes in the system states in the past 14 years. These are most likely caused by climatic phenomena. We analyse trends in sound pressure level within different frequency bands and hypothesize a possible anthropogenic impact on the acoustic environment. The tipping point analysis framework provides insight into the structure of the acoustic data and helps identify its dynamic phenomena, correctly reproducing the probability distribution and scaling properties (power-law correlations) of the time series.
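
    As a simplified illustration of the potential-analysis step (not the full framework of the study), the sketch below estimates an effective potential U ~ -log p(x) from a data window and counts its wells, so that a change from one to two wells across sliding windows flags a possible transition of the system state.

        import numpy as np
        from scipy.ndimage import gaussian_filter1d

        def potential_wells(x, bins=60, smooth=2.0):
            # effective potential U ~ -log p(x) from a data window and its local minima (wells)
            hist, edges = np.histogram(x, bins=bins, density=True)
            p = gaussian_filter1d(hist, smooth) + 1e-12
            U = -np.log(p)
            centers = 0.5 * (edges[:-1] + edges[1:])
            interior = np.arange(1, len(U) - 1)
            minima = interior[(U[interior] < U[interior-1]) & (U[interior] < U[interior+1])]
            return len(minima), centers, U

        # sliding-window well count over a series that switches from one state to two
        rng = np.random.default_rng(4)
        x = np.concatenate([rng.normal(0, 1, 5000),
                            np.where(rng.random(5000) < 0.5, -2, 2) + rng.normal(0, 0.5, 5000)])
        window = 2000
        wells = [potential_wells(x[i:i+window])[0] for i in range(0, len(x) - window, 500)]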

  3. Tipping point analysis of ocean acoustic noise

    Directory of Open Access Journals (Sweden)

    V. N. Livina

    2018-02-01

    Full Text Available We apply tipping point analysis to a large record of ocean acoustic data to identify the main components of the acoustic dynamical system and study possible bifurcations and transitions of the system. The analysis is based on a statistical physics framework with stochastic modelling, where we represent the observed data as a composition of deterministic and stochastic components estimated from the data using time-series techniques. We analyse long-term and seasonal trends, system states and acoustic fluctuations to reconstruct a one-dimensional stochastic equation to approximate the acoustic dynamical system. We apply potential analysis to acoustic fluctuations and detect several changes in the system states in the past 14 years. These are most likely caused by climatic phenomena. We analyse trends in sound pressure level within different frequency bands and hypothesize a possible anthropogenic impact on the acoustic environment. The tipping point analysis framework provides insight into the structure of the acoustic data and helps identify its dynamic phenomena, correctly reproducing the probability distribution and scaling properties (power-law correlations) of the time series.

  4. Analytic continuation of massless two-loop four-point functions

    International Nuclear Information System (INIS)

    Gehrmann, T.; Remiddi, E.

    2002-01-01

    We describe the analytic continuation of two-loop four-point functions with one off-shell external leg and internal massless propagators from the Euclidean region of space-like 1→3 decay to Minkowskian regions relevant to all 1→3 and 2→2 reactions with one space-like or time-like off-shell external leg. Our results can be used to derive two-loop master integrals and unrenormalized matrix elements for hadronic vector-boson-plus-jet production and deep inelastic two-plus-one-jet production, from results previously obtained for three-jet production in electron-positron annihilation. (author)

  5. Young adult females' views regarding online privacy protection at two time points.

    Science.gov (United States)

    Moreno, Megan A; Kelleher, Erin; Ameenuddin, Nusheen; Rastogi, Sarah

    2014-09-01

    Risks associated with adolescent Internet use include exposure to inappropriate information and privacy violations. Privacy expectations and policies have changed over time. Recent Facebook security setting changes heighten these risks. The purpose of this study was to investigate views and experiences with Internet safety and privacy protection among older adolescent females at two time points, in 2009 and 2012. Two waves of focus groups were conducted, one in 2009 and the other in 2012. During these focus groups, female university students discussed Internet safety risks and strategies and privacy protection. All focus groups were audio recorded and manually transcribed. Qualitative analysis was conducted at the end of each wave and then reviewed and combined in a separate analysis using the constant comparative method. A total of 48 females participated across the two waves. The themes included (1) abundant urban myths, such as the ability for companies to access private information; (2) the importance of filtering one's displayed information; and (3) maintaining age limits on social media access to avoid younger teens' presence on Facebook. The findings present a complex picture of how adolescents view privacy protection and online safety. Older adolescents may be valuable partners in promoting safe and age-appropriate Internet use for younger teens in the changing landscape of privacy. Copyright © 2014. Published by Elsevier Inc.

  6. Machinery Fault Diagnosis Using Two-Channel Analysis Method Based on Fictitious System Frequency Response Function

    Directory of Open Access Journals (Sweden)

    Kihong Shin

    2015-01-01

    Full Text Available Most existing techniques for machinery health monitoring that utilize measured vibration signals usually require measurement points to be as close as possible to the expected fault components of interest. This is particularly important for implementing condition-based maintenance since the incipient fault signal power may be too small to be detected if a sensor is located further away from the fault source. However, a measurement sensor is often not attached to the ideal point due to geometric or environmental restrictions. In such a case, many of the conventional diagnostic techniques may not be successfully applicable. In this paper, a two-channel analysis method is proposed to overcome such difficulty. It uses two vibration signals simultaneously measured at arbitrary points in a machine. The proposed method is described theoretically by introducing a fictitious system frequency response function. It is then verified experimentally for bearing fault detection. The results show that the suggested method may be a good alternative when ideal points for measurement sensors are not readily available.
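
    The fictitious-system FRF of the paper is a theoretical construct; the sketch below only shows the standard cross-spectral (H1-type) estimate between two simultaneously measured signals, which is the kind of two-channel quantity such an analysis builds on. The signal model, sampling rate and delay are invented for the example.

        import numpy as np
        from scipy.signal import csd, welch

        fs = 2048
        t = np.arange(0, 10, 1/fs)
        rng = np.random.default_rng(5)
        source = np.sin(2*np.pi*37*t) + 0.3*rng.standard_normal(t.size)   # hypothetical fault signal
        x = source + 0.2*rng.standard_normal(t.size)                      # sensor at point 1
        y = 0.6*np.roll(source, 15) + 0.2*rng.standard_normal(t.size)     # sensor at point 2 (delayed path)

        f, Sxy = csd(x, y, fs=fs, nperseg=2048)
        _, Sxx = welch(x, fs=fs, nperseg=2048)
        H1 = Sxy / Sxx        # H1-type estimate of the frequency response between the two points
        coherence = np.abs(Sxy)**2 / (Sxx * welch(y, fs=fs, nperseg=2048)[1])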

  7. Visualizing nD Point Clouds as Topological Landscape Profiles to Guide Local Data Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Oesterling, Patrick [Univ. of Leipzig (Germany). Computer Science Dept.; Heine, Christian [Univ. of Leipzig (Germany). Computer Science Dept.; Federal Inst. of Technology (ETH), Zurich (Switzerland). Dept. of Computer Science; Weber, Gunther H. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Computational Research Division; Scheuermann, Gerik [Univ. of Leipzig (Germany). Computer Science Dept.

    2012-05-04

    Analyzing high-dimensional point clouds is a classical challenge in visual analytics. Traditional techniques, such as projections or axis-based techniques, suffer from projection artifacts, occlusion, and visual complexity. We propose to split data analysis into two parts to address these shortcomings. First, a structural overview phase abstracts data by its density distribution. This phase performs topological analysis to support accurate and non-overlapping presentation of the high-dimensional cluster structure as a topological landscape profile. Utilizing a landscape metaphor, it presents clusters and their nesting as hills whose height, width, and shape reflect cluster coherence, size, and stability, respectively. A second local analysis phase utilizes this global structural knowledge to select individual clusters or point sets for further, localized data analysis. Focusing on structural entities significantly reduces visual clutter in established geometric visualizations and permits a clearer, more thorough data analysis. In conclusion, this analysis complements the global topological perspective and enables the user to study subspaces or geometric properties, such as shape.

  8. Meta-Analysis of Effect Sizes Reported at Multiple Time Points Using General Linear Mixed Model

    Science.gov (United States)

    Musekiwa, Alfred; Manda, Samuel O. M.; Mwambi, Henry G.; Chen, Ding-Geng

    2016-01-01

    Meta-analysis of longitudinal studies combines effect sizes measured at pre-determined time points. The most common approach involves performing separate univariate meta-analyses at individual time points. This simplistic approach ignores dependence between longitudinal effect sizes, which might result in less precise parameter estimates. In this paper, we show how to conduct a meta-analysis of longitudinal effect sizes where we contrast different covariance structures for dependence between effect sizes, both within and between studies. We propose new combinations of covariance structures for the dependence between effect sizes and utilize a practical example involving meta-analysis of 17 trials comparing postoperative treatments for a type of cancer, where survival is measured at 6, 12, 18 and 24 months post randomization. Although the results from this particular data set show the benefit of accounting for within-study serial correlation between effect sizes, simulations are required to confirm these results. PMID:27798661

  9. Hyphenation of two simultaneously employed soft photo ionization mass spectrometers with thermal analysis of biomass and biochar

    Energy Technology Data Exchange (ETDEWEB)

    Fendt, Alois [Joint Mass Spectrometry Centre, Chair of Analytical Chemistry, Institute of Chemistry, University of Rostock, 18059 Rostock (Germany); Joint Mass Spectrometry Centre, Cooperation Group for Analysis of Complex Molecular Systems, Institute of Ecological Chemistry, Helmholtz Zentrum Muenchen - German Research Center for Environmental Health (GmbH), IngolstaedterLandstr. 1, 85764 Neuherberg (Germany); Analytical Chemistry, Institute of Physics, University of Augsburg, 86159 Augsburg (Germany); Geissler, Robert [Joint Mass Spectrometry Centre, Cooperation Group for Analysis of Complex Molecular Systems, Institute of Ecological Chemistry, Helmholtz Zentrum Muenchen - German Research Center for Environmental Health (GmbH), IngolstaedterLandstr. 1, 85764 Neuherberg (Germany); Analytical Chemistry, Institute of Physics, University of Augsburg, 86159 Augsburg (Germany); Streibel, Thorsten, E-mail: thorsten.streibel@uni-rostock.de [Joint Mass Spectrometry Centre, Chair of Analytical Chemistry, Institute of Chemistry, University of Rostock, 18059 Rostock (Germany); Joint Mass Spectrometry Centre, Cooperation Group for Analysis of Complex Molecular Systems, Institute of Ecological Chemistry, Helmholtz Zentrum Muenchen - German Research Center for Environmental Health (GmbH), IngolstaedterLandstr. 1, 85764 Neuherberg (Germany); and others

    2013-01-10

    Highlights: ► First simultaneous hyphenation of two time-of-flight mass spectrometers with different soft photo ionization techniques (SPI and REMPI) to Thermal Analysis using a newly developed prototype for EGA is presented. ► Resonance enhanced multi-photon ionization (REMPI) enables sensitive and selective analysis of aromatic species. ► Single photon ionization (SPI) using VUV light supplied by an innovative electron-beam pumped excimer light source (EBEL) comprehensively ionizes (nearly) all organic molecules. ► The resulting mass spectra show distinct patterns for the evolved gases of the miscellaneous biomasses and chars thereof. ► The potential for detailed kinetic studies is apparent on account of the complex pyrolysis gas compositions. - Abstract: Evolved gas analysis (EGA) is a powerful and complementary tool for Thermal Analysis. In this context, two time-of-flight mass spectrometers with different soft photo-ionization techniques are simultaneously hyphenated to a thermo balance and applied in the form of a newly developed prototype for EGA of pyrolysis gases from biomass and biochar. Resonance enhanced multi-photon ionization (REMPI) is applied for selective analysis of aromatic species. Furthermore, single photon ionization (SPI) using VUV light supplied by an electron-beam pumped excimer light source (EBEL) was used to comprehensively ionize (nearly) all organic molecules. The soft ionization capability of photo-ionization techniques allows direct and on-line analysis of the evolved pyrolysis gases. Characteristic mass spectra with specific patterns could be obtained for the miscellaneous biomass feeds used. Temperature profiles of the biochars reveal a desorption step, followed by pyrolysis as observed for the biomasses. Furthermore, the potential for kinetic studies is apparent for this instrumental setup.

  10. Nonlinear bending and collapse analysis of a poked cylinder and other point-loaded cylinders

    International Nuclear Information System (INIS)

    Sobel, L.H.

    1983-06-01

    This paper analyzes the geometrically nonlinear bending and collapse behavior of an elastic, simply supported cylindrical shell subjected to an inward-directed point load applied at midlength. The large displacement analysis results for this thin (R/t = 638) poked cylinder were obtained from the STAGSC-1 finite element computer program. STAGSC-1 results are also presented for two other point-loaded shell problems: a pinched cylinder (R/t = 100), and a venetian blind (R/t = 250)

  11. 21 CFR 123.6 - Hazard analysis and Hazard Analysis Critical Control Point (HACCP) plan.

    Science.gov (United States)

    2010-04-01

    ... Control Point (HACCP) plan. 123.6 Section 123.6 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF... Provisions § 123.6 Hazard analysis and Hazard Analysis Critical Control Point (HACCP) plan. (a) Hazard... fish or fishery product being processed in the absence of those controls. (b) The HACCP plan. Every...

  12. 21 CFR 120.8 - Hazard Analysis and Critical Control Point (HACCP) plan.

    Science.gov (United States)

    2010-04-01

    ... 21 Food and Drugs 2 2010-04-01 2010-04-01 false Hazard Analysis and Critical Control Point (HACCP... SERVICES (CONTINUED) FOOD FOR HUMAN CONSUMPTION HAZARD ANALYSIS AND CRITICAL CONTROL POINT (HACCP) SYSTEMS General Provisions § 120.8 Hazard Analysis and Critical Control Point (HACCP) plan. (a) HACCP plan. Each...

  13. Effect of two different forms of three-point line on game actions in ...

    African Journals Online (AJOL)

    The aim of this study was to compare two different designs of the three-point line to analyze which one allows for a higher frequency of motor actions that, according to the literature, should be strengthened when including a three-point line in youth basketball. In the first of two championships, female mini-basketball players ...

  14. Two cloud-point phenomena in tetrabutylammonium perfluorooctanoate aqueous solutions: anomalous temperature-induced phase and structure transitions.

    Science.gov (United States)

    Yan, Peng; Huang, Jin; Lu, Run-Chao; Jin, Chen; Xiao, Jin-Xin; Chen, Yong-Ming

    2005-03-24

    This paper reports the phase behavior and aggregate structure of tetrabutylammonium perfluorooctanoate (TBPFO), determined by differential scanning calorimetry, electrical conductivity, static/dynamic light scattering, and rheology. We found that above a certain concentration the TBPFO solution showed anomalous temperature-dependent phase behavior and structure transitions. Such an ionic surfactant solution exhibits two cloud points. When the temperature was increased, the solution turned from a homogeneous phase to a liquid-liquid two-phase system, then to another homogeneous phase, and finally to another liquid-liquid two-phase system. In the first homogeneous-phase region, the aggregates of TBPFO were rodlike micelles and the solution was a Newtonian fluid. In the second homogeneous-phase region, the aggregates of TBPFO were large wormlike micelles, and the solution behaved as a pseudoplastic fluid that also exhibited viscoelastic behavior. We suggest that the first cloud point might be caused by the "bridge" effect of the tetrabutylammonium counterion between the micelles, and the second by the formation of the micellar network.

  15. Geometric convergence of some two-point Padé approximations

    International Nuclear Information System (INIS)

    Nemeth, G.

    1983-01-01

    The geometric convergence of some two-point Padé approximations is investigated on the real positive axis and on certain infinite sets of the complex plane. Some theorems concerning the geometric convergence of Padé approximations are proved, and bounds on geometric convergence rates are given. The results may be of interest for applications both in numerical computations and in approximation theory. As a specific case, the numerical calculations connected with the plasma dispersion function may be performed. (D.Gy.)

  16. Change-Point and Trend Analysis on Annual Maximum Discharge in Continental United States

    Science.gov (United States)

    Serinaldi, F.; Villarini, G.; Smith, J. A.; Krajewski, W. F.

    2008-12-01

    Annual maximum discharge records from 36 stations representing different hydro-climatic regimes in the continental United States with at least 100 years of records are used to investigate the presence of temporal trends and abrupt changes in mean and variance. Change point analysis is performed by means of two non-parametric (Pettitt and CUSUM), one semi-parametric (Guan), and two parametric (Rodionov and Bayesian Change Point) tests. Two non-parametric (Mann-Kendall and Spearman) and one parametric (Pearson) tests are applied to detect the presence of temporal trends. Generalized Additive Model for Location Scale and Shape (GAMLSS) models are also used to parametrically model the streamflow data exploiting their flexibility to account for changes and temporal trends in the parameters of distribution functions. Additionally, serial correlation is assessed in advance by computing the autocorrelation function (ACF), and the Hurst parameter is estimated using two estimators (aggregated variance and differenced variance methods) to investigate the presence of long range dependence. The results of this study indicate a lack of long range dependence in the maximum streamflow series. At some stations the authors found a statistically significant change point in the mean and/or variance, while in general they detected no statistically significant temporal trends.
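
    For reference, the two simplest nonparametric tests named in this record can be sketched as follows (the CUSUM, Guan, Rodionov, Bayesian and GAMLSS analyses are not reproduced); the series used is synthetic.

        import numpy as np
        from scipy.stats import norm

        def mann_kendall(x):
            # Mann-Kendall trend test (normal approximation, ties ignored)
            x = np.asarray(x, dtype=float)
            n = len(x)
            s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
            var_s = n * (n - 1) * (2 * n + 5) / 18.0
            z = (s - np.sign(s)) / np.sqrt(var_s) if s != 0 else 0.0
            return s, 2 * (1 - norm.cdf(abs(z)))

        def pettitt(x):
            # Pettitt change-point test: most probable change point and approximate p-value
            x = np.asarray(x, dtype=float)
            n = len(x)
            U = np.array([np.sign(x[t + 1:, None] - x[None, :t + 1]).sum() for t in range(n - 1)])
            K = np.abs(U).max()
            tau = int(np.abs(U).argmax()) + 1              # last index of the first segment
            p = 2.0 * np.exp(-6.0 * K**2 / (n**3 + n**2))
            return tau, K, p

        rng = np.random.default_rng(6)
        series = np.concatenate([rng.normal(100, 15, 60), rng.normal(120, 15, 60)])  # shifted mean
        s_stat, p_trend = mann_kendall(series)
        tau, K, p_change = pettitt(series)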

  17. Two-point functions and logarithmic boundary operators in boundary logarithmic conformal field theories

    International Nuclear Information System (INIS)

    Ishimoto, Yukitaka

    2004-01-01

    Amongst conformal field theories, there exist logarithmic conformal field theories such as the c_{p,1} models. We have investigated c_{p,q} models with a boundary in search of logarithmic theories and have found logarithmic solutions of two-point functions in the context of the Coulomb gas picture. We have also found the relations between coefficients in the two-point functions and correlation functions of logarithmic boundary operators, and have confirmed the solutions in [hep-th/0003184]. Other two-point functions and boundary operators have also been studied in the free boson construction of boundary CFT with SU(2)_k symmetry in regard to logarithmic theories. This paper is based on a part of D. Phil. Thesis [hep-th/0312160]. (author)

  18. An introduction to nonlinear analysis and fixed point theory

    CERN Document Server

    Pathak, Hemant Kumar

    2018-01-01

    This book systematically introduces the theory of nonlinear analysis, providing an overview of topics such as geometry of Banach spaces, differential calculus in Banach spaces, monotone operators, and fixed point theorems. It also discusses degree theory, nonlinear matrix equations, control theory, differential and integral equations, and inclusions. The book presents surjectivity theorems, variational inequalities, stochastic game theory and mathematical biology, along with a large number of applications of these theories in various other disciplines. Nonlinear analysis is characterised by its applications in numerous interdisciplinary fields, ranging from engineering to space science, hydromechanics to astrophysics, chemistry to biology, theoretical mechanics to biomechanics and economics to stochastic game theory. Organised into ten chapters, the book shows the elegance of the subject and its deep-rooted concepts and techniques, which provide the tools for developing more realistic and accurate models for ...

  19. Existence and uniqueness for a two-point interface boundary value problem

    Directory of Open Access Journals (Sweden)

    Rakhim Aitbayev

    2013-10-01

    Full Text Available We obtain sufficient conditions, easily verifiable, for the existence and uniqueness of piecewise smooth solutions of a linear two-point boundary-value problem with general interface conditions. The coefficients of the differential equation may have jump discontinuities at the interface point. As an example, the conditions obtained are applied to a problem with typical interface such as perfect contact, non-perfect contact, and flux jump conditions.

  20. Point defect characterization in HAADF-STEM images using multivariate statistical analysis

    International Nuclear Information System (INIS)

    Sarahan, Michael C.; Chi, Miaofang; Masiel, Daniel J.; Browning, Nigel D.

    2011-01-01

    Quantitative analysis of point defects is demonstrated through the use of multivariate statistical analysis. This analysis consists of principal component analysis for dimensional estimation and reduction, followed by independent component analysis to obtain physically meaningful, statistically independent factor images. Results from these analyses are presented in the form of factor images and scores. Factor images show characteristic intensity variations corresponding to physical structure changes, while scores relate how much those variations are present in the original data. The application of this technique is demonstrated on a set of experimental images of dislocation cores along a low-angle tilt grain boundary in strontium titanate. A relationship between chemical composition and lattice strain is highlighted in the analysis results, with picometer-scale shifts in several columns measurable from compositional changes in a separate column. -- Research Highlights: → Multivariate analysis of HAADF-STEM images. → Distinct structural variations among SrTiO₃ dislocation cores. → Picometer atomic column shifts correlated with atomic column population changes.
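
    A minimal sketch of the PCA-plus-ICA pipeline described, applied to a synthetic stack of sub-images rather than real HAADF-STEM data: PCA suggests the number of significant components, and ICA then yields factor images and per-image scores. The variance threshold and image model are assumptions made for illustration.

        import numpy as np
        from sklearn.decomposition import PCA, FastICA

        # synthetic stack of aligned sub-images with one "defect" pattern mixed in
        rng = np.random.default_rng(7)
        n_images, ny, nx = 40, 32, 32
        lattice = rng.random((ny, nx))                      # stand-in for the mean lattice image
        defect = np.zeros((ny, nx))
        defect[14:18, 14:18] = 1.0
        stack = np.array([lattice + rng.uniform(0, 1) * defect
                          + 0.05 * rng.standard_normal((ny, nx)) for _ in range(n_images)])
        X = stack.reshape(n_images, -1)

        # PCA for dimensionality estimation, then ICA for independent factor images
        pca = PCA().fit(X)
        n_sig = max(1, int(np.sum(pca.explained_variance_ratio_ > 0.02)))
        ica = FastICA(n_components=n_sig, random_state=0, max_iter=1000)
        scores = ica.fit_transform(X)                       # how strongly each factor appears per image
        factor_images = ica.components_.reshape(n_sig, ny, nx)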

  1. Application of a two-sinker densimeter for phase-equilibrium measurements: A new technique for the detection of dew points and measurements on the (methane + propane) system

    International Nuclear Information System (INIS)

    McLinden, Mark O.; Richter, Markus

    2016-01-01

    Highlights: • A new technique for detecting dew points in fluid mixtures is described. • The method makes use of a two-sinker densimeter. • The technique is based on a quantitative measurement of sample mass adsorbed onto the surface of the densimeter sinkers. • The dew-point density and dew-point pressure are determined with low uncertainty. • The method is applied to the (methane + propane) system and compared to traditional methods. - Abstract: We explore a novel method for determining the dew-point density and dew-point pressure of fluid mixtures and compare it to traditional methods. The (p, ρ, T, x) behavior of three (methane + propane) mixtures was investigated with a two-sinker magnetic suspension densimeter over the temperature range of (248.15–293.15) K; the measurements extended from low pressures into the two-phase region. The compositions of the gravimetrically prepared mixtures were (0.74977, 0.50688, and 0.26579) mole fraction methane. We analyzed isothermal data by: (1) a “traditional” analysis of the intersection of a virial fit of the (p vs. ρ) data in the single-phase region with a linear fit of the data in the two-phase region; and (2) an analysis of the adsorbed mass on the sinker surfaces. We compared these to a traditional isochoric experiment. We conclude that the “adsorbed mass” analysis of an isothermal experiment provides an accurate determination of the dew-point temperature, pressure, and density. However, a two-sinker densimeter is required.
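
    A sketch of the "traditional" intersection analysis mentioned in the record, using invented numbers: a virial-type polynomial is fitted to the single-phase (p, ρ) branch, a straight line to the two-phase branch, and the dew point is taken at their intersection. The adsorbed-mass analysis itself is not sketched here.

        import numpy as np

        rng = np.random.default_rng(8)
        # hypothetical isothermal (rho, p) data: curved single-phase branch and a nearly
        # flat two-phase branch meeting at the dew point (numbers are purely illustrative)
        rho_1ph = np.linspace(10, 115, 20)
        p_1ph = 8.0 * rho_1ph - 0.02 * rho_1ph**2 + rng.normal(0, 0.5, rho_1ph.size)
        rho_2ph = np.linspace(125, 180, 10)
        p_2ph = 660.0 + 0.1 * (rho_2ph - 120.0) + rng.normal(0, 0.5, rho_2ph.size)

        # virial-type polynomial fit of the single-phase branch, straight line in the two-phase region
        c_single = np.polyfit(rho_1ph, p_1ph, 3)
        c_two = np.polyfit(rho_2ph, p_2ph, 1)

        # dew point at the intersection of the two fitted curves
        roots = np.roots(np.polysub(c_single, c_two))
        real = roots[np.abs(roots.imag) < 1e-9].real
        rho_dew = real[(real > rho_1ph.max()) & (real < rho_2ph.max())]
        p_dew = np.polyval(c_two, rho_dew)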

  2. Analysis of Stress Updates in the Material-point Method

    DEFF Research Database (Denmark)

    Andersen, Søren; Andersen, Lars

    2009-01-01

    The material-point method (MPM) is a new numerical method for analysis of large strain engineering problems. The MPM applies a dual formulation, where the state of the problem (mass, stress, strain, velocity etc.) is tracked using a finite set of material points while the governing equations... are solved on a background computational grid. Several references state that one of the main advantages of the material-point method is the easy application of complicated material behaviour as the constitutive response is updated individually for each material point. However, as discussed here, the MPM way...

  3. Zero-point energies in the two-center shell model

    International Nuclear Information System (INIS)

    Reinhard, P.G.

    1975-01-01

    The zero-point energies (ZPE) contained in the potential-energy surfaces (PES) of a two-center shell model are evaluated. For the c.m. motion of the system as a whole the kinetic ZPE was found to be negligible, whereas it varies appreciably for the rotational and oscillation modes (about 5–9 MeV). For the latter two modes the ZPE also depends sensitively on the changing pairing structure, which can induce strong local fluctuations, particularly in light nuclei. The potential ZPE is very small for heavy nuclei, but might just become important in light nuclei. (Auth.)

  4. Spatial Distribution Characteristics of Healthcare Facilities in Nanjing: Network Point Pattern Analysis and Correlation Analysis

    Directory of Open Access Journals (Sweden)

    Jianhua Ni

    2016-08-01

    Full Text Available The spatial distribution of urban service facilities is largely constrained by the road network. In this study, network point pattern analysis and correlation analysis were used to analyze the relationship between road network and healthcare facility distribution. The weighted network kernel density estimation method proposed in this study identifies significant differences between the outside and inside areas of the Ming city wall. The results of network K-function analysis show that private hospitals are more evenly distributed than public hospitals, and pharmacy stores tend to cluster around hospitals along the road network. After computing the correlation analysis between different categorized hospitals and street centrality, we find that the distribution of these hospitals correlates highly with the street centralities, and that the correlations are higher with private and small hospitals than with public and large hospitals. The comprehensive analysis results could help examine the reasonability of existing urban healthcare facility distribution and optimize the location of new healthcare facilities.

  5. A study of a two stage maximum power point tracking control of a photovoltaic system under partially shaded insolation conditions

    Energy Technology Data Exchange (ETDEWEB)

    Kobayashi, Kenji; Takano, Ichiro; Sawada, Yoshio [Kogakuin University, Tokyo 163-8677 (Japan)

    2006-11-23

    A photovoltaic (PV) array shows relatively low output power density, and has a greatly drooping current-voltage (I-V) characteristic. Therefore, maximum power point tracking (MPPT) control is used to maximize the output power of the PV array. Many papers have been reported in relation to MPPT. However, the current-power (I-P) curve sometimes shows multi-local maximum point mode under non-uniform insolation conditions. The operating point of the PV system tends to converge to a local maximum output point which is not the real maximal output point on the I-P curve. Some papers have been also reported, trying to avoid this difficulty. However, most of those control systems become rather complicated. Then, the two stage MPPT control method is proposed in this paper to realize a relatively simple control system which can track the real maximum power point even under non-uniform insolation conditions. The feasibility of this control concept is confirmed for steady insolation as well as for rapidly changing insolation by simulation study using software PSIM and LabVIEW. (author)

  6. Two-point method uncertainty during control and measurement of cylindrical element diameters

    Science.gov (United States)

    Glukhov, V. I.; Shalay, V. V.; Radev, H.

    2018-04-01

    The article addresses the pressing problem of the reliability of geometric specification measurements of technical products. Its purpose is to improve the quality of control of parts' linear sizes by the two-point measurement method, and its task is to investigate the methodical extended uncertainties in measuring the linear sizes of cylindrical elements. The investigation method is geometric modeling of the shape and location deviations of the element surfaces in a rectangular coordinate system. The studies were carried out for elements of various service use, taking into account their informativeness, corresponding to the kinematic pair classes of theoretical mechanics and the number of degrees of freedom constrained by the datum element. Cylindrical elements with informativeness of 4, 2, 1 and 0 (zero) were investigated. The uncertainties of two-point measurements were estimated by comparing the results of linear dimension measurements with the maximum and minimum functional diameters of the element material. Methodical uncertainty arises when cylindrical elements with maximum informativeness have shape deviations of the cut and curvature types, and when the average size of an element is measured for any type of shape deviation. The two-point measurement method cannot take into account the location deviations of a dimensional element, so its use for elements with less than maximum informativeness creates unacceptable methodical uncertainties in measurements of the maximum, minimum and mean linear dimensions. Similar methodical uncertainties also arise in the arbitration control of the linear dimensions of cylindrical elements with two-point limit gauges.

  7. EXISTENCE OF POSITIVE SOLUTION TO TWO-POINT BOUNDARY VALUE PROBLEM FOR A SYSTEM OF SECOND ORDER ORDINARY DIFFERENTIAL EQUATIONS

    Institute of Scientific and Technical Information of China (English)

    2009-01-01

    In this paper, we consider a two-point boundary value problem for a system of second order ordinary differential equations. Under some conditions, we show the existence of positive solution to the system of second order ordinary differential equa-tions.

  8. Flow speed measurement using two-point collective light scattering

    International Nuclear Information System (INIS)

    Heinemeier, N.P.

    1998-09-01

    Measurements of turbulence in plasmas and fluids using the technique of collective light scattering have always been plagued by very poor spatial resolution. In 1994, a novel two-point collective light scattering system for the measurement of transport in a fusion plasma was proposed. This diagnostic method was designed to greatly improve the spatial resolution without sacrificing accuracy in the velocity measurement. The system was installed at the W7-AS stellarator in Garching, Germany, in 1996, and has been operating since. This master's thesis is an investigation of the possible application of this new method to the measurement of flow speeds in normal fluids, in particular air, although the results presented in this work have significance for the plasma measurements as well. The main goal of the project was the experimental verification of previous theoretical predictions. However, the theoretical considerations presented in the thesis show that the method can only be expected to work for flows that are almost laminar and shearless, which makes it of very little practical interest. Furthermore, this result also implies that the diagnostic at W7-AS cannot be expected to give the results originally hoped for. (au)

  9. Nonlinear consider covariance analysis using a sigma-point filter formulation

    Science.gov (United States)

    Lisano, Michael E.

    2006-01-01

    The research reported here extends the mathematical formulation of nonlinear, sigma-point estimators to enable consider covariance analysis for dynamical systems. This paper presents a novel sigma-point consider filter algorithm, for consider-parameterized nonlinear estimation, following the unscented Kalman filter (UKF) variation on the sigma-point filter formulation, which requires no partial derivatives of dynamics models or measurement models with respect to the parameter list. It is shown that, consistent with the attributes of sigma-point estimators, a consider-parameterized sigma-point estimator can be developed entirely without requiring the derivation of any partial-derivative matrices related to the dynamical system, the measurements, or the considered parameters, which appears to be an advantage over the formulation of a linear-theory sequential consider estimator. It is also demonstrated that a consider covariance analysis performed with this 'partial-derivative-free' formulation yields equivalent results to the linear-theory consider filter, for purely linear problems.
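
    The central claim, that no partial derivatives of the dynamics or measurement models are needed, rests on the unscented transform: a deterministic set of sigma points is propagated through the nonlinear function and re-averaged. A minimal, generic sketch of that transform (standard scaled-UKF weights, not the paper's consider-filter algorithm) is given below:

```python
import numpy as np

def unscented_transform(f, mean, cov, alpha=1e-3, beta=2.0, kappa=0.0):
    """Propagate a mean/covariance pair through a nonlinear function f
    using sigma points; no Jacobians of f are required."""
    n = mean.size
    lam = alpha**2 * (n + kappa) - n
    sqrt_cov = np.linalg.cholesky((n + lam) * cov)

    # Sigma points: the mean plus/minus the columns of the scaled covariance square root
    sigmas = np.vstack([mean, mean + sqrt_cov.T, mean - sqrt_cov.T])

    # Weights for the mean and covariance (scaled unscented parameterization)
    wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
    wc = wm.copy()
    wm[0] = lam / (n + lam)
    wc[0] = lam / (n + lam) + (1 - alpha**2 + beta)

    y = np.array([f(s) for s in sigmas])   # evaluate f at each sigma point
    y_mean = wm @ y
    diff = y - y_mean
    y_cov = (wc[:, None] * diff).T @ diff
    return y_mean, y_cov

# Example: push a Gaussian state through a mildly nonlinear map
m, P = np.array([1.0, 0.5]), np.diag([0.1, 0.2])
print(unscented_transform(lambda x: np.array([np.sin(x[0]), x[0] * x[1]]), m, P))
```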

  10. Analysis of a simple pendulum driven at its suspension point

    International Nuclear Information System (INIS)

    Yoshida, S; Findley, T

    2005-01-01

    To familiarize undergraduate students with the dynamics of a damped driven harmonic oscillator, a simple pendulum was set up and driven at its suspension point under different damping conditions. From the time domain analysis, the decay constant was estimated and used to predict the frequency response. The simple pendulum was then driven at a series of frequencies near the resonance. By measuring the maximum amplitude at each driving frequency, the frequency response was determined. With one free parameter, which was determined under the first damping condition, the predicted frequency responses showed good agreement with the measured frequency responses under all damping conditions
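
    The prediction step described above follows the standard damped driven harmonic oscillator result: once the decay constant has been estimated from the ring-down, the steady-state amplitude response can be predicted directly. A minimal sketch under that assumption, with purely illustrative numbers rather than the values of the experiment:

```python
import numpy as np

def amplitude_response(f_drive, f0, gamma):
    """Steady-state amplitude (arbitrary units) of a damped driven oscillator.

    f0    : natural frequency [Hz]
    gamma : decay constant [1/s] estimated from the ring-down, x ~ exp(-gamma * t)
    """
    w, w0 = 2 * np.pi * f_drive, 2 * np.pi * f0
    return 1.0 / np.sqrt((w0**2 - w**2) ** 2 + (2 * gamma * w) ** 2)

# Illustrative numbers: a 1 Hz pendulum with decay constant 0.05 1/s
freqs = np.linspace(0.8, 1.2, 9)
resp = amplitude_response(freqs, 1.0, 0.05)
print(resp / resp.max())   # normalized response peaks near the resonance
```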

  11. The Nielsen identities for the two-point functions of QED and QCD

    International Nuclear Information System (INIS)

    Breckenridge, J.C.; Sasketchewan Univ., Saskatoon, SK; Lavelle, M.J.; Steele, T.G.; Sasketchewan Univ., Saskatoon, SK

    1995-01-01

    We consider the Nielsen identities for the two-point functions of full QCD and QED in the class of Lorentz gauges. For pedagogical reasons the identities are first derived in QED to demonstrate the gauge independence of the photon self-energy, and of the electron mass shell. In QCD we derive the general identity and hence the identities for the quark, gluon and ghost propagators. The explicit contributions to the gluon and ghost identities are calculated to one-loop order, and then we show that the quark identity requires that in on-shell schemes the quark mass renormalisation must be gauge independent. Furthermore, we obtain formal solutions for the gluon self-energy and ghost propagator in terms of the gauge dependence of other, independent Green functions. (orig.)

  12. Identification of 'Point A' as the prevalent source of error in cephalometric analysis of lateral radiographs.

    Science.gov (United States)

    Grogger, P; Sacher, C; Weber, S; Millesi, G; Seemann, R

    2018-04-10

    Deviations in measuring dentofacial components in a lateral X-ray represent a major hurdle in the subsequent treatment of dysgnathic patients. In a retrospective study, we investigated the most prevalent source of error in the following commonly used cephalometric measurements: the angles Sella-Nasion-Point A (SNA), Sella-Nasion-Point B (SNB) and Point A-Nasion-Point B (ANB); the Wits appraisal; the anteroposterior dysplasia indicator (APDI); and the overbite depth indicator (ODI). Preoperative lateral radiographic images of patients with dentofacial deformities were collected and the landmarks digitally traced by three independent raters. Cephalometric analysis was automatically performed based on 1116 tracings. Error analysis identified the x-coordinate of Point A as the prevalent source of error in all investigated measurements, except SNB, in which it is not incorporated. In SNB, the y-coordinate of Nasion predominated error variance. SNB showed lowest inter-rater variation. In addition, our observations confirmed previous studies showing that landmark identification variance follows characteristic error envelopes in the highest number of tracings analysed up to now. Variance orthogonal to defining planes was of relevance, while variance parallel to planes was not. Taking these findings into account, orthognathic surgeons as well as orthodontists would be able to perform cephalometry more accurately and accomplish better therapeutic results. Copyright © 2018 International Association of Oral and Maxillofacial Surgeons. Published by Elsevier Ltd. All rights reserved.
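
    For orientation, each of the angular measurements above (e.g. SNA) is simply the planar angle at the middle landmark formed by the rays to the two outer landmarks, so an error in the x-coordinate of Point A propagates directly into SNA and ANB. A minimal sketch of that computation from traced landmark coordinates; the coordinates below are invented for illustration only:

```python
import numpy as np

def angle_at(vertex, p1, p2):
    """Angle (degrees) at `vertex` formed by the rays to p1 and p2, e.g. the
    SNA angle with vertex = Nasion, p1 = Sella, p2 = Point A."""
    vertex, p1, p2 = map(np.asarray, (vertex, p1, p2))
    v1, v2 = p1 - vertex, p2 - vertex
    cosang = v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

# Hypothetical landmark coordinates (mm) from a digital tracing
sella, nasion, point_a, point_b = [0, 0], [70, 5], [68, -45], [66, -50]
sna = angle_at(nasion, sella, point_a)
snb = angle_at(nasion, sella, point_b)
print(sna, snb, sna - snb)   # ANB is the difference SNA - SNB
```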

  13. Improvements of the two-dimensional FDTD method for the simulation of normal- and superconducting planar waveguides using time series analysis

    International Nuclear Information System (INIS)

    Hofschen, S.; Wolff, I.

    1996-01-01

    Time-domain simulation results of two-dimensional (2-D) planar waveguide finite-difference time-domain (FDTD) analysis are normally analyzed using Fourier transform. The method of time series analysis introduced here to extract propagation and attenuation constants reduces the required computation time drastically. Additionally, a nonequidistant discretization together with an adequate excitation technique is used to reduce the number of spatial grid points. Therefore, it is possible to simulate normal- and superconducting planar waveguide structures with very thin conductors and small dimensions, as they are used in MMIC technology. The simulation results are compared with measurements and show good agreement.

  14. Improvements of the two-dimensional FDTD method for the simulation of normal- and superconducting planar waveguides using time series analysis

    Energy Technology Data Exchange (ETDEWEB)

    Hofschen, S.; Wolff, I. [Gerhard Mercator Univ. of Duisburg (Germany). Dept. of Electrical Engineering

    1996-08-01

    Time-domain simulation results of two-dimensional (2-D) planar waveguide finite-difference time-domain (FDTD) analysis are normally analyzed using Fourier transform. The method of time series analysis introduced here to extract propagation and attenuation constants reduces the required computation time drastically. Additionally, a nonequidistant discretization together with an adequate excitation technique is used to reduce the number of spatial grid points. Therefore, it is possible to simulate normal- and superconducting planar waveguide structures with very thin conductors and small dimensions, as they are used in MMIC technology. The simulation results are compared with measurements and show good agreement.

  15. Analysis of tree stand horizontal structure using random point field methods

    Directory of Open Access Journals (Sweden)

    O. P. Sekretenko

    2015-06-01

    Full Text Available This paper uses a model approach to analyze the horizontal structure of forest stands. The main types of models of random point fields and statistical procedures that can be used to analyze spatial patterns of trees in uneven- and even-aged stands are described. We show how modern methods of spatial statistics can be used to address one of the objectives of forestry: to clarify the laws of natural thinning of a forest stand and the corresponding changes in its spatial structure over time. Studying natural forest thinning, we describe the consecutive stages of modeling: selection of the appropriate parametric model, parameter estimation and generation of point patterns in accordance with the selected model, selection of statistical functions to describe the horizontal structure of forest stands, and testing of statistical hypotheses. We show the possibilities of a specialized software package, spatstat, which is designed to meet the challenges of spatial statistics and provides software support for modern methods of analysis of spatial data. We show that a model of stand thinning that does not consider inter-tree interaction can project the size distribution of the trees properly, but the spatial pattern of the modeled stand is not quite consistent with observed data. Using data from three even-aged pine stands of 25, 55 and 90 years of age, we demonstrate that spatial point process models are useful for combining measurements from forest stands of different ages to study natural forest stand thinning.
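
    The record refers to the R package spatstat; as a language-neutral illustration of the kind of summary statistic involved, here is a minimal Python sketch of a naive Ripley's K estimate (without the edge corrections that spatstat's estimators apply), run on simulated data:

```python
import numpy as np

def ripley_k(points, r_values, area):
    """Naive Ripley's K estimate for a 2-D point pattern (no edge correction).

    K(r) ~ (area / (n*(n-1))) * number of ordered pairs closer than r.
    A clustered pattern lies above pi*r^2, a regular (inhibited) one below it.
    """
    pts = np.asarray(points)
    n = len(pts)
    d = np.sqrt(((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1))
    np.fill_diagonal(d, np.inf)                      # ignore self-pairs
    return np.array([area * (d < r).sum() / (n * (n - 1)) for r in r_values])

# Toy example: 200 uniformly random trees on a 100 m x 100 m plot
rng = np.random.default_rng(0)
pts = rng.uniform(0, 100, size=(200, 2))
r = np.linspace(1, 20, 5)
print(ripley_k(pts, r, area=100 * 100))
print(np.pi * r**2)                                  # complete spatial randomness reference
```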

  16. Analysis and research on Maximum Power Point Tracking of Photovoltaic Array with Fuzzy Logic Control and Three-point Weight Comparison Method

    Institute of Scientific and Technical Information of China (English)

    LIN; Kuang-Jang; LIN; Chii-Ruey

    2010-01-01

    A photovoltaic array has an optimal operating point at which it can deliver maximum power. However, the optimal operating point shifts with the strength and angle of solar radiation and with changes of environment and load. Because of the constant changes in these conditions, it is very difficult to locate the optimal operating point from a mathematical model. Therefore, this study focuses on the application of Fuzzy Logic Control theory and the Three-point Weight Comparison Method to locate the optimal operating point of the solar panel and achieve maximum efficiency in power generation. The Three-point Weight Comparison Method compares points on the characteristic curve of photovoltaic array voltage versus output power; it is a rather simple way to track the maximum power. The Fuzzy Logic Control, on the other hand, can be used to solve problems that cannot be effectively dealt with by calculation rules, such as concepts, contemplation, deductive reasoning and identification. This paper therefore uses these two methods in successive simulations. The simulation results show that the Three-point Comparison Method is more effective in an environment with frequent changes of solar radiation, whereas the Fuzzy Logic Control has better tracking efficiency in an environment with violent changes of solar radiation.

  17. Comparison of two intraoral scanners based on three-dimensional surface analysis

    Directory of Open Access Journals (Sweden)

    Kyung-Min Lee

    2018-02-01

    Full Text Available Abstract Background This in vivo study evaluated the difference between two well-known intraoral scanners used in dentistry, namely iTero (Align Technology) and TRIOS (3Shape). Methods Thirty-two participants underwent intraoral scans with the TRIOS and iTero scanners, as well as conventional alginate impressions. The scans obtained with the two intraoral scanners were compared with each other and with the corresponding model scans by means of three-dimensional surface analysis. The average differences between the two intraoral scans on the surfaces were evaluated by color-mapping. The average differences in the three-dimensional direction between each intraoral scan and its corresponding model scan were calculated at all points on the surfaces. Results The average differences between the two intraoral scanners were 0.057 mm at the maxilla and 0.069 mm at the mandible. Color histograms showed that local deviations between the two scanners occurred in the posterior area. As for the differences in the three-dimensional direction, there was no statistically significant difference between the two scanners. Conclusions Although there were some deviations on visual inspection, there was no statistically significant difference between the two intraoral scanners.
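
    The three-dimensional surface analysis summarized above essentially averages, over all surface points, the distance from one scan to the closest point of the other. A crude nearest-neighbour sketch of that idea is given below; the real study compared registered surface models, which this toy version with simulated point clouds does not reproduce:

```python
import numpy as np
from scipy.spatial import cKDTree

def mean_surface_deviation(scan_a, scan_b):
    """Average nearest-neighbour distance from points of scan_a to scan_b.

    A crude stand-in for 3-D surface comparison after the two scans have
    already been aligned (registered) in a common coordinate system.
    """
    dists, _ = cKDTree(scan_b).query(scan_a)
    return dists.mean()

# Toy data: scan_b is scan_a shifted by 0.05 mm in each axis plus noise
rng = np.random.default_rng(1)
scan_a = rng.uniform(0, 10, size=(5000, 3))
scan_b = scan_a + 0.05 + rng.normal(0, 0.01, size=scan_a.shape)
print(mean_surface_deviation(scan_a, scan_b))   # ~0.09 mm (0.05*sqrt(3) plus noise)
```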

  18. Throughput analysis of point-to-multi-point hybrid FSO/RF network

    KAUST Repository

    Rakia, Tamer

    2017-07-31

    This paper presents and analyzes a point-to-multi-point (P2MP) network that uses a number of free-space optical (FSO) links for data transmission from the central node to the different remote nodes. A common backup radio-frequency (RF) link is used by the central node for data transmission to any remote node in case of the failure of any one of FSO links. We develop a cross-layer Markov chain model to study the throughput from central node to a tagged remote node. Numerical examples are presented to compare the performance of the proposed P2MP hybrid FSO/RF network with that of a P2MP FSO-only network and show that the P2MP Hybrid FSO/RF network achieves considerable performance improvement over the P2MP FSO-only network.

  19. Modeling fixation locations using spatial point processes.

    Science.gov (United States)

    Barthelmé, Simon; Trukenbrod, Hans; Engbert, Ralf; Wichmann, Felix

    2013-10-01

    Whenever eye movements are measured, a central part of the analysis has to do with where subjects fixate and why they fixated where they fixated. To a first approximation, a set of fixations can be viewed as a set of points in space; this implies that fixations are spatial data and that the analysis of fixation locations can be beneficially thought of as a spatial statistics problem. We argue that thinking of fixation locations as arising from point processes is a very fruitful framework for eye-movement data, helping turn qualitative questions into quantitative ones. We provide a tutorial introduction to some of the main ideas of the field of spatial statistics, focusing especially on spatial Poisson processes. We show how point processes help relate image properties to fixation locations. In particular we show how point processes naturally express the idea that image features' predictability for fixations may vary from one image to another. We review other methods of analysis used in the literature, show how they relate to point process theory, and argue that thinking in terms of point processes substantially extends the range of analyses that can be performed and clarify their interpretation.

  20. Scattering analysis of point processes and random measures

    International Nuclear Information System (INIS)

    Hanisch, K.H.

    1984-01-01

    In the present paper scattering analysis of point processes and random measures is studied. Known formulae which connect the scattering intensity with the pair distribution function of the studied structures are proved in a rigorous manner with tools of the theory of point processes and random measures. For some special fibre processes the scattering intensity is computed. For a class of random measures, namely for 'grain-germ-models', a new formula is proved which yields the pair distribution function of the 'grain-germ-model' in terms of the pair distribution function of the underlying point process (the 'germs') and of the mean structure factor and the mean squared structure factor of the particles (the 'grains'). (author)

  1. Fingerprint Analysis with Marked Point Processes

    DEFF Research Database (Denmark)

    Forbes, Peter G. M.; Lauritzen, Steffen; Møller, Jesper

    We present a framework for fingerprint matching based on marked point process models. An efficient Monte Carlo algorithm is developed to calculate the marginal likelihood ratio for the hypothesis that two observed prints originate from the same finger against the hypothesis that they originate from...... different fingers. Our model achieves good performance on an NIST-FBI fingerprint database of 258 matched fingerprint pairs....

  2. Datum Feature Extraction and Deformation Analysis Method Based on Normal Vector of Point Cloud

    Science.gov (United States)

    Sun, W.; Wang, J.; Jin, F.; Liang, Z.; Yang, Y.

    2018-04-01

    In order to address the lack of an applicable analysis method when applying three-dimensional laser scanning technology to deformation monitoring, an efficient method for extracting datum features and analysing deformation based on the normal vectors of a point cloud is proposed. Firstly, a kd-tree is used to establish the topological relation. Datum points are detected by tracking the normal vectors of the point cloud, determined from the normal vectors of local planar fits. Then, cubic B-spline curve fitting is performed on the datum points. Finally, the datum elevation and the inclination angle of the radial point are calculated according to the fitted curve, and the deformation information is analyzed. The proposed approach was verified on a real large-scale tank data set captured with a terrestrial laser scanner in a chemical plant. The results show that the method can obtain the entire information of the monitored object quickly and comprehensively, and accurately reflect the deformation of the datum features.
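
    The method depends on per-point normal vectors estimated from local planar neighbourhoods. One common way to obtain them (not necessarily the authors' exact procedure) is a local principal-component fit over the k nearest neighbours found through the kd-tree, sketched here:

```python
import numpy as np
from scipy.spatial import cKDTree

def estimate_normals(points, k=16):
    """Estimate a unit normal per point from the smallest-eigenvalue direction
    of the covariance of its k nearest neighbours (local plane fit)."""
    pts = np.asarray(points)
    tree = cKDTree(pts)
    _, idx = tree.query(pts, k=k)
    normals = np.empty_like(pts)
    for i, nbrs in enumerate(idx):
        nb = pts[nbrs] - pts[nbrs].mean(axis=0)
        # Eigenvector of the smallest eigenvalue of the local covariance matrix
        _, vecs = np.linalg.eigh(nb.T @ nb)
        normals[i] = vecs[:, 0]
    return normals

# Toy example: noisy points on the plane z = 0; normals should be close to (0, 0, +-1)
rng = np.random.default_rng(2)
cloud = np.column_stack([rng.uniform(0, 1, 500), rng.uniform(0, 1, 500),
                         rng.normal(0, 0.001, 500)])
print(estimate_normals(cloud)[:3])
```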

  3. Selection rule for Dirac-like points in two-dimensional dielectric photonic crystals

    KAUST Repository

    Li, Yan

    2013-01-01

    We developed a selection rule for Dirac-like points in two-dimensional dielectric photonic crystals. The rule is derived from a perturbation theory and states that a non-zero, mode-coupling integral between the degenerate Bloch states guarantees a Dirac-like point, regardless of the type of the degeneracy. In fact, the selection rule can also be determined from the symmetry of the Bloch states even without computing the integral. Thus, the existence of Dirac-like points can be quickly and conclusively predicted for various photonic crystals independent of wave polarization, lattice structure, and composition. © 2013 Optical Society of America.

  4. Testing Local Independence between Two Point Processes

    DEFF Research Database (Denmark)

    Allard, Denis; Brix, Anders; Chadæuf, Joël

    2001-01-01

    Independence test, Inhomogeneous point processes, Local test, Monte Carlo, Nonstationary, Rotations, Spatial pattern, Tiger bush

  5. Fast and accurate computation of projected two-point functions

    Science.gov (United States)

    Grasshorn Gebhardt, Henry S.; Jeong, Donghui

    2018-01-01

    We present the two-point function from the fast and accurate spherical Bessel transformation (2-FAST) algorithm (code available at https://github.com/hsgg/twoFAST) for a fast and accurate computation of integrals involving one or two spherical Bessel functions. These types of integrals occur when projecting the galaxy power spectrum P(k) onto the configuration space, ξℓν(r), or spherical harmonic space, Cℓ(χ, χ'). First, we employ the FFTLog transformation of the power spectrum to divide the calculation into P(k)-dependent coefficients and P(k)-independent integrations of basis functions multiplied by spherical Bessel functions. We find analytical expressions for the latter integrals in terms of special functions, for which recursion provides a fast and accurate evaluation. The algorithm, therefore, circumvents direct integration of highly oscillating spherical Bessel functions.

  6. Infinite-component conformal fields. Spectral representation of the two-point function

    International Nuclear Information System (INIS)

    Zaikov, R.P.; Tcholakov, V.

    1975-01-01

    The infinite-component conformal fields (with respect to the stability subgroup) are considered. The spectral representation of the conformally invariant two-point function is obtained. This function is nonvanishing also for one "fundamental" and one infinite-component field.

  7. Intestinal transcriptome analysis revealed differential salinity adaptation between two tilapiine species.

    Science.gov (United States)

    Ronkin, Dana; Seroussi, Eyal; Nitzan, Tali; Doron-Faigenboim, Adi; Cnaani, Avner

    2015-03-01

    Tilapias are a group of freshwater species, which vary in their ability to adapt to high salinity water. Osmotic regulation in fish is conducted mainly in the gills, kidney, and gastrointestinal tract (GIT). The mechanisms involved in ion and water transport through the GIT is not well-characterized, with only a few described complexes. Comparing the transcriptome of the anterior and posterior intestinal sections of a freshwater and saltwater adapted fish by deep-sequencing, we examined the salinity adaptation of two tilapia species: the high salinity-tolerant Oreochromis mossambicus (Mozambique tilapia), and the less salinity-tolerant Oreochromis niloticus (Nile tilapia). This comparative analysis revealed high similarity in gene expression response to salinity change between species in the posterior intestine and large differences in the anterior intestine. Furthermore, in the anterior intestine 68 genes were saltwater up-regulated in one species and down-regulated in the other species (47 genes up-regulated in O. niloticus and down-regulated in O. mossambicus, with 21 genes showing the reverse pattern). Gene ontology (GO) analysis showed a high proportion of transporter and ion channel function among these genes. The results of this study point to a group of genes that differed in their salinity-dependent regulation pattern in the anterior intestine as potentially having a role in the differential salinity tolerance of these two closely related species. Copyright © 2015 Elsevier Inc. All rights reserved.

  8. Solving inverse two-point boundary value problems using collage coding

    Science.gov (United States)

    Kunze, H.; Murdock, S.

    2006-08-01

    The method of collage coding, with its roots in fractal imaging, is the central tool in a recently established rigorous framework for solving inverse initial value problems for ordinary differential equations (Kunze and Vrscay 1999 Inverse Problems 15 745-70). We extend these ideas to solve the following inverse problem: given a function u(x) on [A, B] (which may be the interpolation of data points), determine a two-point boundary value problem on [A, B] which admits u(x) as a solution as closely as desired. The solution of such inverse problems may be useful in parameter estimation or determination of potential functional forms of the underlying differential equation. We discuss ways to improve results, including the development of a partitioning scheme. Several examples are considered.

  9. Flow speed measurement using two-point collective light scattering

    Energy Technology Data Exchange (ETDEWEB)

    Heinemeier, N.P

    1998-09-01

    Measurements of turbulence in plasmas and fluids using the technique of collective light scattering have always been plagued by very poor spatial resolution. In 1994, a novel two-point collective light scattering system for the measurement of transport in a fusion plasma was proposed. This diagnostic method was designed to greatly improve the spatial resolution without sacrificing accuracy in the velocity measurement. The system was installed at the W7-AS stellarator in Garching, Germany, in 1996, and has been operating since. This master's thesis is an investigation of the possible application of this new method to the measurement of flow speeds in normal fluids, in particular air, although the results presented in this work have significance for the plasma measurements as well. The main goal of the project was the experimental verification of previous theoretical predictions. However, the theoretical considerations presented in the thesis show that the method can only be expected to work for flows that are almost laminar and shearless, which makes it of very little practical interest. Furthermore, this result also implies that the diagnostic at W7-AS cannot be expected to give the results originally hoped for. (au) 1 tab., 51 ills., 29 refs.

  10. Two-point correlation function for Dirichlet L-functions

    Science.gov (United States)

    Bogomolny, E.; Keating, J. P.

    2013-03-01

    The two-point correlation function for the zeros of Dirichlet L-functions at a height E on the critical line is calculated heuristically using a generalization of the Hardy-Littlewood conjecture for pairs of primes in arithmetic progression. The result matches the conjectured random-matrix form in the limit as E → ∞ and, importantly, includes finite-E corrections. These finite-E corrections differ from those in the case of the Riemann zeta-function, obtained in Bogomolny and Keating (1996 Phys. Rev. Lett. 77 1472), by certain finite products of primes which divide the modulus of the primitive character used to construct the L-function in question.
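
    For reference, the "conjectured random-matrix form" that the correlation approaches in the limit E → ∞ is the standard GUE two-point correlation for unfolded zeros, quoted here in its usual normalization (the paper's conventions may differ):

```latex
R_2(x) \;=\; 1 - \left(\frac{\sin \pi x}{\pi x}\right)^{2}.
```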

  11. Two-point correlation function for Dirichlet L-functions

    International Nuclear Information System (INIS)

    Bogomolny, E; Keating, J P

    2013-01-01

    The two-point correlation function for the zeros of Dirichlet L-functions at a height E on the critical line is calculated heuristically using a generalization of the Hardy–Littlewood conjecture for pairs of primes in arithmetic progression. The result matches the conjectured random-matrix form in the limit as E → ∞ and, importantly, includes finite-E corrections. These finite-E corrections differ from those in the case of the Riemann zeta-function, obtained in Bogomolny and Keating (1996 Phys. Rev. Lett. 77 1472), by certain finite products of primes which divide the modulus of the primitive character used to construct the L-function in question. (paper)

  12. Study of the interference of plumes released from two near-ground point sources in an open channel

    International Nuclear Information System (INIS)

    Oskouie, Shahin N.; Wang, Bing-Chen; Yee, Eugene

    2015-01-01

    Highlights: • DNS study of turbulent dispersion and mixing of passive scalars. • Interference of two passive plumes in a boundary layer flow. • Cross correlation, co-spectra and coherency spectra of two plumes. - Abstract: The dispersion and mixing of passive scalars released from two near-ground point sources into an open-channel flow are studied using direct numerical simulation. A comparative study based on eight test cases has been conducted to investigate the effects of Reynolds number and source separation distance on the dispersion and interference of the two plumes. In order to determine the nonlinear relationship between the variance of concentration fluctuations of the total plume and those produced by each of the two plumes, the covariance of the two concentration fields is studied in both physical and spectral spaces. The results show that at the source height, the streamwise evolution of the cross correlation between the fluctuating components of the two concentration fields can be classified into four stages, which feature zero, destructive and constructive interferences and a complete mixing state. The characteristics of these four stages of plume mixing are further confirmed through an analysis of the pre-multiplied co-spectra and coherency spectra. From the coherency spectrum, it is observed that there exists a range of ‘leading scales’, which are several times larger than the Kolmogorov scale but are smaller than or comparable to the scale of the most energetic eddies of turbulence. At the leading scales, the mixing between the two interfering plumes is the fastest and the coherency spectrum associated with these scales can quickly approach its asymptotic value of unity.

  13. On two-point boundary correlations in the six-vertex model with domain wall boundary conditions

    Science.gov (United States)

    Colomo, F.; Pronko, A. G.

    2005-05-01

    The six-vertex model with domain wall boundary conditions on an N × N square lattice is considered. The two-point correlation function describing the probability of having two vertices in a given state at opposite (top and bottom) boundaries of the lattice is calculated. It is shown that this two-point boundary correlator is expressible in a very simple way in terms of the one-point boundary correlators of the model on N × N and (N - 1) × (N - 1) lattices. In alternating sign matrix (ASM) language this result implies that the doubly refined x-enumerations of ASMs are just appropriate combinations of the singly refined ones.

  14. 78 FR 24816 - Pricing for the 2013 American Eagle West Point Two-Coin Silver Set

    Science.gov (United States)

    2013-04-26

    ... DEPARTMENT OF THE TREASURY United States Mint Pricing for the 2013 American Eagle West Point Two-Coin Silver Set AGENCY: United States Mint, Department of the Treasury. ACTION: Notice. SUMMARY: The United States Mint is announcing the price of the 2013 American Eagle West Point Two-Coin Silver Set. The...

  15. Integrated modeling and analysis methodology for precision pointing applications

    Science.gov (United States)

    Gutierrez, Homero L.

    2002-07-01

    Space-based optical systems that perform tasks such as laser communications, Earth imaging, and astronomical observations require precise line-of-sight (LOS) pointing. A general approach is described for integrated modeling and analysis of these types of systems within the MATLAB/Simulink environment. The approach can be applied during all stages of program development, from early conceptual design studies to hardware implementation phases. The main objective is to predict the dynamic pointing performance subject to anticipated disturbances and noise sources. Secondary objectives include assessing the control stability, levying subsystem requirements, supporting pointing error budgets, and performing trade studies. The integrated model resides in Simulink, and several MATLAB graphical user interfaces (GUIs) allow the user to configure the model, select analysis options, run analyses, and process the results. A convenient parameter naming and storage scheme, as well as model conditioning and reduction tools and run-time enhancements, are incorporated into the framework. This enables the proposed architecture to accommodate models of realistic complexity.

  16. Spatially resolved synchrotron-induced X-ray fluorescence analyses of metal point drawings and their mysterious inscriptions

    International Nuclear Information System (INIS)

    Reiche, Ina; Radtke, Martin; Berger, Achim; Goerner, Wolf; Ketelsen, Thomas; Merchel, Silke; Riederer, Josef; Riesemeier, Heinrich; Roth, Michael

    2004-01-01

    Synchrotron-induced X-ray fluorescence (Sy-XRF) analysis was used to study the chemical composition of precious Renaissance silverpoint drawings. Drawings by famous artists such as Albrecht Duerer (1471-1528) and Jan van Eyck (approximately 1395-1441) must be investigated non-destructively. Moreover, extremely sensitive synchrotron- or accelerator-based techniques are needed since only small quantities of silver are deposited on the paper. New criteria for attributing these works to a particular artist could be established based on the analysis of the chemical composition of the metal points used. We illustrate how analysis can give new art historical information by means of two case studies. Two particular drawings, one of Albrecht Duerer, showing a profile portrait of his closest friend, 'Willibald Pirckheimer' (1503), and a second one attributed to Jan van Eyck, showing a 'Portrait of an elderly man', often named 'Niccolo Albergati', are the object of intense art historical controversy. Both drawings show inscriptions next to the figures. Analyses by Sy-XRF could reveal the same kind of silverpoint for the Pirckheimer portrait and its mysterious Greek inscription, contrary to the drawing by Van Eyck where at least three different metal points were applied. Two different types of silver marks were found in this portrait. Silver containing gold marks were detected in the inscriptions and over-subscriptions. This is the first evidence of the use of gold points for metal point drawings in the Middle Ages

  17. Shot Noise Suppression in a Quantum Point Contact with Short Channel Length

    International Nuclear Information System (INIS)

    Jeong, Heejun

    2015-01-01

    An experimental study on the current shot noise of a quantum point contact with short channel length is reported. The experimentally measured maximum energy level spacing between the ground and the first excited state of the device reached up to 7.5 meV, probably due to the hard wall confinement by using shallow electron gas and sharp point contact geometry. The two-dimensional non-equilibrium shot noise contour map shows noise suppression characteristics in a wide range of bias voltage. Fano factor analysis indicates spin-polarized transport through a short quantum point contact. (paper)
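
    For reference, the Fano factor used in such an analysis is the standard ratio of the measured current-noise spectral density to the full Poissonian shot-noise value:

```latex
F \;=\; \frac{S_I}{2 e \langle I \rangle}, \qquad
F = 1 \ \text{(full shot noise)}, \quad F < 1 \ \text{(suppression)}.
```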

  18. Single cell analysis of G1 check points-the relationship between the restriction point and phosphorylation of pRb

    International Nuclear Information System (INIS)

    Martinsson, Hanna-Stina; Starborg, Maria; Erlandsson, Fredrik; Zetterberg, Anders

    2005-01-01

    Single cell analysis allows high resolution investigation of temporal relationships between transition events in G1. It has been suggested that phosphorylation of the retinoblastoma tumor suppressor protein (pRb) is the molecular mechanism behind passage through the restriction point (R). We performed a detailed single cell study of the temporal relationship between R and pRb phosphorylation in human fibroblasts using time-lapse video-microscopy combined with immunocytochemistry. Four principally different criteria for pRb phosphorylation were used, namely (i) phosphorylation of residues Ser795 and Ser780, (ii) the degree of pRb association with the nuclear structure, a property that is closely related to pRb phosphorylation status, (iii) release of the transcription factor E2F-1 from pRb, and (iv) accumulation of cyclin E, which is dependent on phosphorylation of pRb. The analyses of individual cells revealed that passage through R preceded phosphorylation of pRb, which occurs in a gradually increasing proportion of cells in late G1. Our data clearly suggest that pRb phosphorylation is not the molecular mechanism behind the passage through R. The restriction point and phosphorylation of pRb thus seem to represent two separate checkpoints in G1.

  19. Systematic Assessment of Attenuated Total Reflectance-Fourier Transform Infrared Spectroscopy Coupled with Multivariate Analysis for Forensic Analysis of Black Ball-point Pen Inks

    International Nuclear Information System (INIS)

    Lee, L.C.; Mohamed Rozali Othman; Pua, H.; Lee, L.C.

    2012-01-01

    This manuscript aims to provide a new and non-destructive method for systematic analysis of inks on a questioned document. Ink samples were analyzed in situ on the paper substrate by micro-ATR-FTIR spectroscopy, and the data obtained were processed and evaluated by a series of multivariate chemometric methods. Absorbance values at wavenumbers of 2000-675 cm-1 were first processed by cluster analysis (CA), followed by principal component analysis (PCA) to form a set of new variables. Subsequently, this variable set was used for classification, differentiation and identification of 155 sample pens comprising nine different brands. Results show that the nine black ball-point pen brands could be classified into three main groups via discriminant analysis (DA). Differentiation analyses of the nine pen brands performed using one-way ANOVA indicated that only two pairs of brands could not be differentiated at the 95% confidence interval. Finally, an identification flow chart was proposed to determine the brand of unknown pen inks. In conclusion, the proposed method of extracting and creating a new variable set from the infrared spectrum was evaluated to be satisfactory for systematic analysis of inks based on their infrared spectra. (author)
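
    The chemometric pipeline described (cluster analysis, then PCA to form a new variable set, then discriminant analysis) is standard; a minimal sketch of the PCA step on a matrix of absorbance spectra is shown below. The array shapes and the use of scikit-learn are assumptions for illustration, not details taken from the study:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Hypothetical data: 155 ink spectra, absorbance at wavenumbers 2000-675 cm-1
rng = np.random.default_rng(3)
n_samples, n_wavenumbers = 155, 1326
spectra = rng.normal(size=(n_samples, n_wavenumbers))   # placeholder for real spectra

# Scale each wavenumber channel, then compress the spectra into a few PCA scores
scores = PCA(n_components=10).fit_transform(StandardScaler().fit_transform(spectra))
print(scores.shape)   # (155, 10) -> the new variable set fed to discriminant analysis
```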

  20. Saddle-points of a two dimensional random lattice theory

    International Nuclear Information System (INIS)

    Pertermann, D.

    1985-07-01

    A two dimensional random lattice theory with a free massless scalar field is considered. We analyse the field theoretic generating functional for any given choice of positions of the lattice sites. Asking for saddle-points of this generating functional with respect to the positions we find the hexagonal lattice and a triangulated version of the hypercubic lattice as candidates. The investigation of the neighbourhood of a single lattice site yields triangulated rectangles and regular polygons extremizing the above generating functional on the local level. (author)

  1. Two-point density correlations of quasicondensates in free expansion

    DEFF Research Database (Denmark)

    Manz, S.; Bücker, R.; Betz, T.

    2010-01-01

    We measure the two-point density correlation function of freely expanding quasicondensates in the weakly interacting quasi-one-dimensional (1D) regime. While initially suppressed in the trap, density fluctuations emerge gradually during expansion as a result of initial phase fluctuations present...... in the trapped quasicondensate. Asymptotically, they are governed by the thermal coherence length of the system. Our measurements take place in an intermediate regime where density correlations are related to near-field diffraction effects and anomalous correlations play an important role. Comparison...

  2. One-point fluctuation analysis of the high-energy neutrino sky

    Energy Technology Data Exchange (ETDEWEB)

    Feyereisen, Michael R.; Ando, Shin' ichiro [GRAPPA Institute, University of Amsterdam, Science Park 904, 1098 XH Amsterdam (Netherlands); Tamborra, Irene, E-mail: m.r.feyereisen@uva.nl, E-mail: tamborra@nbi.ku.dk, E-mail: s.ando@uva.nl [Niels Bohr International Academy, Niels Bohr Institute, Blegdamsvej 17, 2100 Copenhagen (Denmark)

    2017-03-01

    We perform the first one-point fluctuation analysis of the high-energy neutrino sky. This method reveals itself to be especially suited to contemporary neutrino data, as it allows to study the properties of the astrophysical components of the high-energy flux detected by the IceCube telescope, even with low statistics and in the absence of point source detection. Besides the veto-passing atmospheric foregrounds, we adopt a simple model of the high-energy neutrino background by assuming two main extra-galactic components: star-forming galaxies and blazars. By leveraging multi-wavelength data from Herschel and Fermi , we predict the spectral and anisotropic probability distributions for their expected neutrino counts in IceCube. We find that star-forming galaxies are likely to remain a diffuse background due to the poor angular resolution of IceCube, and we determine an upper limit on the number of shower events that can reasonably be associated to blazars. We also find that upper limits on the contribution of blazars to the measured flux are unfavourably affected by the skewness of the blazar flux distribution. One-point event clustering and likelihood analyses of the IceCube HESE data suggest that this method has the potential to dramatically improve over more conventional model-based analyses, especially for the next generation of neutrino telescopes.

  3. Smooth random change point models.

    Science.gov (United States)

    van den Hout, Ardo; Muniz-Terrera, Graciela; Matthews, Fiona E

    2011-03-15

    Change point models are used to describe processes over time that show a change in direction. An example of such a process is cognitive ability, where a decline a few years before death is sometimes observed. A broken-stick model consists of two linear parts and a breakpoint where the two lines intersect. Alternatively, models can be formulated that imply a smooth change between the two linear parts. Change point models can be extended by adding random effects to account for variability between subjects. A new smooth change point model is introduced and examples are presented that show how change point models can be estimated using functions in R for mixed-effects models. The Bayesian inference using WinBUGS is also discussed. The methods are illustrated using data from a population-based longitudinal study of ageing, the Cambridge City over 75 Cohort Study. The aim is to identify how many years before death individuals experience a change in the rate of decline of their cognitive ability. Copyright © 2010 John Wiley & Sons, Ltd.
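
    The broken-stick model described above is easy to state concretely: two line segments joined at a breakpoint tau, y ≈ b0 + b1·t + b2·max(t − tau, 0). A minimal fixed-effects sketch for a single subject, scanning candidate breakpoints by least squares, is given below (simulated data; the paper's random-effects and Bayesian fits go further):

```python
import numpy as np

def broken_stick_fit(t, y, candidates):
    """Least-squares broken-stick fit: y ~ b0 + b1*t + b2*max(t - tau, 0).

    Returns the breakpoint tau (chosen from `candidates`) and the coefficients
    that minimize the residual sum of squares.
    """
    best = None
    for tau in candidates:
        X = np.column_stack([np.ones_like(t), t, np.maximum(t - tau, 0.0)])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        rss = np.sum((y - X @ beta) ** 2)
        if best is None or rss < best[0]:
            best = (rss, tau, beta)
    return best[1], best[2]

# Simulated cognitive scores: flat until 4 years before the end, then declining
rng = np.random.default_rng(4)
t = np.arange(0.0, 12.0, 0.5)
y = 25 - 1.5 * np.maximum(t - 8.0, 0.0) + rng.normal(0, 0.3, t.size)
print(broken_stick_fit(t, y, candidates=np.arange(2.0, 10.5, 0.5)))   # tau ~ 8
```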

  4. Analysis of high-resolution simulations for the Black Forest region from a point of view of tourism climatology - a comparison between two regional climate models (REMO and CLM)

    Science.gov (United States)

    Endler, Christina; Matzarakis, Andreas

    2011-03-01

    An analysis of climate simulations from a point of view of tourism climatology based on two regional climate models, namely REMO and CLM, was performed for a regional domain in the southwest of Germany, the Black Forest region, for two time frames, 1971-2000 that represents the twentieth century climate and 2021-2050 that represents the future climate. In that context, the Intergovernmental Panel on Climate Change (IPCC) scenarios A1B and B1 are used. The analysis focuses on human-biometeorological and applied climatologic issues, especially for tourism purposes - that means parameters belonging to thermal (physiologically equivalent temperature, PET), physical (precipitation, snow, wind), and aesthetic (fog, cloud cover) facets of climate in tourism. In general, both models reveal similar trends, but differ in their extent. The trend of thermal comfort is contradicting: it tends to decrease in REMO, while it shows a slight increase in CLM. Moreover, REMO reveals a wider range of future climate trends than CLM, especially for sunshine, dry days, and heat stress. Both models are driven by the same global coupled atmosphere-ocean model ECHAM5/MPI-OM. Because both models are not able to resolve meso- and micro-scale processes such as cloud microphysics, differences between model results and discrepancies in the development of even those parameters (e.g., cloud formation and cover) are due to different model parameterization and formulation. Climatic changes expected by 2050 are small compared to 2100, but may have major impacts on tourism as for example, snow cover and its duration are highly vulnerable to a warmer climate directly affecting tourism in winter. Beyond indirect impacts are of high relevance as they influence tourism as well. Thus, changes in climate, natural environment, demography, tourists' demands, among other things affect economy in general. The analysis of the CLM results and its comparison with the REMO results complete the analysis performed

  5. Two-point model for electron transport in EBT

    International Nuclear Information System (INIS)

    Chiu, S.C.; Guest, G.E.

    1980-01-01

    The electron transport in EBT is simulated by a two-point model corresponding to the central plasma and the edge. The central plasma is assumed to obey neoclassical collisionless transport. The edge plasma is assumed turbulent and modeled by Bohm diffusion. The steady-state temperatures and densities in both regions are obtained as functions of neutral influx and microwave power. It is found that as the neutral influx decreases and power increases, the edge density decreases while the core density increases. We conclude that if ring instability is responsible for the T-M mode transition, and if stability is correlated with cold electron density at the edge, it will depend sensitively on ambient gas pressure and microwave power

  6. Some exact results for the two-point function of an integrable quantum field theory

    International Nuclear Information System (INIS)

    Creamer, D.B.; Thacker, H.B.; Wilkinson, D.

    1981-01-01

    The two-point correlation function for the quantum nonlinear Schroedinger (one-dimensional delta-function gas) model is studied. An infinite-series representation for this function is derived using the quantum inverse-scattering formalism. For the case of zero temperature, the infinite-coupling (c→infinity) result of Jimbo, Miwa, Mori, and Sato is extended to give an exact expression for the order-1/c correction to the two-point function in terms of a Painleve transcendent of the fifth kind

  7. Some exact results for the two-point function of an integrable quantum field theory

    International Nuclear Information System (INIS)

    Creamer, D.B.; Thacker, H.B.; Wilkinson, D.

    1981-02-01

    The two point correlation function for the quantum nonlinear Schroedinger (delta-function gas) model is studied. An infinite series representation for this function is derived using the quantum inverse scattering formalism. For the case of zero temperature, the infinite coupling (c → infinity) result of Jimbo, Miwa, Mori and Sato is extended to give an exact expression for the order 1/c correction to the two point function in terms of a Painleve transcendent of the fifth kind

  8. A renormalization group scaling analysis for compressible two-phase flow

    International Nuclear Information System (INIS)

    Chen, Y.; Deng, Y.; Glimm, J.; Li, G.; Zhang, Q.; Sharp, D.H.

    1993-01-01

    Computational solutions to the Rayleigh--Taylor fluid mixing problem, as modeled by the two-fluid two-dimensional Euler equations, are presented. Data from these solutions are analyzed from the point of view of Reynolds averaged equations, using scaling laws derived from a renormalization group analysis. The computations, carried out with the front tracking method on an Intel iPSC/860, are highly resolved and statistical convergence of ensemble averages is achieved. The computations are consistent with the experimentally observed growth rates for nearly incompressible flows. The dynamics of the interior portion of the mixing zone is simplified by the use of scaling variables. The size of the mixing zone suggests fixed-point behavior. The profile of statistical quantities within the mixing zone exhibit self-similarity under fixed-point scaling to a limited degree. The effect of compressibility is also examined. It is found that, for even moderate compressibility, the growth rates fail to satisfy universal scaling, and moreover, increase significantly with increasing compressibility. The growth rates predicted from a renormalization group fixed-point model are in a reasonable agreement with the results of the exact numerical simulations, even for flows outside of the incompressible limit

  9. Fast Change Point Detection for Electricity Market Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Berkeley, UC; Gu, William; Choi, Jaesik; Gu, Ming; Simon, Horst; Wu, Kesheng

    2013-08-25

    Electricity is a vital part of our daily life; therefore it is important to avoid irregularities such as the California Electricity Crisis of 2000 and 2001. In this work, we seek to predict anomalies using advanced machine learning algorithms. These algorithms are effective, but computationally expensive, especially if we plan to apply them on hourly electricity market data covering a number of years. To address this challenge, we significantly accelerate the computation of the Gaussian Process (GP) for time series data. In the context of a Change Point Detection (CPD) algorithm, we reduce its computational complexity from O($n^{5}$) to O($n^{2}$). Our efficient algorithm makes it possible to compute the Change Points using the hourly price data from the California Electricity Crisis. By comparing the detected Change Points with known events, we show that the Change Point Detection algorithm is indeed effective in detecting signals preceding major events.
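
    The Gaussian-process machinery itself is beyond a short example, but the basic notion of a change point can be illustrated with a much simpler statistic: scan every possible split of the series and score how strongly the mean differs before and after the split. This is a generic sketch with simulated prices, not the authors' GP-based CPD algorithm:

```python
import numpy as np

def best_mean_shift(x):
    """Return the split index maximizing the normalized mean difference
    between the left and right segments (a crude change point score)."""
    n = len(x)
    best_k, best_score = None, -np.inf
    for k in range(2, n - 2):
        left, right = x[:k], x[k:]
        pooled = np.sqrt(left.var(ddof=1) / k + right.var(ddof=1) / (n - k))
        score = abs(left.mean() - right.mean()) / pooled
        if score > best_score:
            best_k, best_score = k, score
    return best_k, best_score

# Simulated hourly prices with an abrupt level shift at index 300
rng = np.random.default_rng(5)
prices = np.concatenate([rng.normal(30, 3, 300), rng.normal(55, 6, 200)])
print(best_mean_shift(prices))   # detected split near index 300
```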

  10. Interaction between two point-like charges in nonlinear electrostatics

    Energy Technology Data Exchange (ETDEWEB)

    Breev, A.I. [Tomsk State University, Tomsk (Russian Federation); Tomsk Polytechnic University, Tomsk (Russian Federation); Shabad, A.E. [P.N. Lebedev Physical Institute, Moscow (Russian Federation); Tomsk State University, Tomsk (Russian Federation)

    2018-01-15

    We consider two point-like charges in electrostatic interaction within the framework of a nonlinear model, associated with QED, that provides finiteness of their field energy. We find the common field of the two charges in a dipole-like approximation, where the separation between them R is much smaller than the observation distance r: with the linear accuracy with respect to the ratio R/r, and in the opposite approximation, where R >> r, up to the term quadratic in the ratio r/R. The consideration proposes the law a + bR^(1/3) for the energy, when the charges are close to one another, R → 0. This leads to the singularity of the force between them to be R^(-2/3), which is weaker than the Coulomb law, R^(-2). (orig.)

  11. Interaction between two point-like charges in nonlinear electrostatics

    Science.gov (United States)

    Breev, A. I.; Shabad, A. E.

    2018-01-01

    We consider two point-like charges in electrostatic interaction within the framework of a nonlinear model, associated with QED, that provides finiteness of their field energy. We find the common field of the two charges in a dipole-like approximation, where the separation between them R is much smaller than the observation distance r : with the linear accuracy with respect to the ratio R / r, and in the opposite approximation, where R≫ r, up to the term quadratic in the ratio r / R. The consideration proposes the law a+b R^{1/3} for the energy, when the charges are close to one another, R→ 0. This leads to the singularity of the force between them to be R^{-2/3}, which is weaker than the Coulomb law, R^{-2}.

  12. Visibility Analysis in a Point Cloud Based on the Medial Axis Transform

    NARCIS (Netherlands)

    Peters, R.; Ledoux, H.; Biljecki, F.

    2015-01-01

    Visibility analysis is an important application of 3D GIS data. Current approaches require 3D city models that are often derived from detailed aerial point clouds. We present an approach to visibility analysis that does not require a city model but works directly on the point cloud. Our approach is

  13. Streamline topologies near simple degenerate critical points in two-dimensional flow away from boundaries

    DEFF Research Database (Denmark)

    Brøns, Morten; Hartnack, Johan Nicolai

    1998-01-01

    Streamline patterns and their bifurcations in two-dimensional incompressible flow are investigated from a topological point of view. The velocity field is expanded at a point in the fluid, and the expansion coefficients are considered as bifurcation parameters. A series of non-linear coordinate c...

  14. Streamline topologies near simple degenerate critical points in two-dimensional flow away from boundaries

    DEFF Research Database (Denmark)

    Brøns, Morten; Hartnack, Johan Nicolai

    1999-01-01

    Streamline patterns and their bifurcations in two-dimensional incompressible flow are investigated from a topological point of view. The velocity field is expanded at a point in the fluid, and the expansion coefficients are considered as bifurcation parameters. A series of nonlinear coordinate ch...

  15. Benefits analysis of Soft Open Points for electrical distribution network operation

    International Nuclear Information System (INIS)

    Cao, Wanyu; Wu, Jianzhong; Jenkins, Nick; Wang, Chengshan; Green, Timothy

    2016-01-01

    Highlights: • An analysis framework was developed to quantify the operational benefits. • The framework considers both network reconfiguration and SOP control. • Benefits were analyzed through both quantitative and sensitivity analysis. - Abstract: Soft Open Points (SOPs) are power electronic devices installed in place of normally-open points in electrical power distribution networks. They are able to provide active power flow control, reactive power compensation and voltage regulation under normal network operating conditions, as well as fast fault isolation and supply restoration under abnormal conditions. A steady state analysis framework was developed to quantify the operational benefits of a distribution network with SOPs under normal network operating conditions. A generic power injection model was developed and used to determine the optimal SOP operation using an improved Powell’s Direct Set method. Physical limits and power losses of the SOP device (based on back to back voltage-source converters) were considered in the model. Distribution network reconfiguration algorithms, with and without SOPs, were developed and used to identify the benefits of using SOPs. Test results on a 33-bus distribution network compared the benefits of using SOPs, traditional network reconfiguration and the combination of both. The results showed that using only one SOP achieved a similar improvement in network operation compared to the case of using network reconfiguration with all branches equipped with remotely controlled switches. A combination of SOP control and network reconfiguration provided the optimal network operation.

  16. Two-dimensional transient thermal analysis of a fuel rod by finite volume method

    Energy Technology Data Exchange (ETDEWEB)

    Costa, Rhayanne Yalle Negreiros; Silva, Mário Augusto Bezerra da; Lira, Carlos Alberto de Oliveira, E-mail: ryncosta@gmail.com, E-mail: mabs500@gmail.com, E-mail: cabol@ufpe.br [Universidade Federal de Pernambuco (UFPE), Recife, PE (Brazil). Departamento de Energia Nuclear

    2017-07-01

    One of the greatest concerns when studying a nuclear reactor is ensuring safe temperature limits throughout the system at all times. The preservation of the core structure, along with the confinement of radioactive material within a controlled system, is the main focus during the operation of a reactor. The purpose of this paper is to present the temperature distribution for a nominal channel of the AP1000 reactor developed by Westinghouse Co. during steady-state and transient operations. In the analysis, the system was subjected to normal operating conditions and then to blockages of the coolant flow. The time necessary to achieve a new safe stationary state (when it was possible) is presented. The methodology applied in this analysis was based on a two-dimensional survey accomplished by the application of the Finite Volume Method (FVM). A steady solution is obtained and compared with an analytical analysis that disregards axial heat transport, in order to determine its relevance. The results show the importance of considering axial heat transport in this type of study. A transient analysis shows the behavior of the system when submitted to coolant blockage at the channel's entrance. Three blockages were simulated (10%, 20% and 30%) and the results show that, for a nominal channel, the system can still be considered safe (there is no bubble formation up to that point). (author)

  17. Three- and two-point one-loop integrals in heavy particle effective theories

    International Nuclear Information System (INIS)

    Bouzas, A.O.

    2000-01-01

    We give a complete analytical computation of three- and two-point loop integrals occurring in heavy particle theories, involving a velocity change, for arbitrary real values of the external masses and residual momenta. (orig.)

  18. Trend analysis and change point detection of annual and seasonal temperature series in Peninsular Malaysia

    Science.gov (United States)

    Suhaila, Jamaludin; Yusop, Zulkifli

    2017-06-01

    Most of the trend analyses that have been conducted have not considered the existence of a change point in the time series. If change points occur, the trend analysis will not be able to detect an obvious increasing or decreasing trend over certain parts of the series. Furthermore, the lack of discussion on the possible factors that influenced either the decreasing or the increasing trend in the series needs to be addressed in any trend analysis. Hence, this study proposes to investigate the trends and change point detection of mean, maximum and minimum temperature series, both annually and seasonally, in Peninsular Malaysia and to determine the possible factors that could contribute to the significant trends. In this study, Pettitt and sequential Mann-Kendall (SQ-MK) tests were used to examine the occurrence of any abrupt climate changes in the independent series. The analyses of the abrupt changes in temperature series suggested that most of the change points in Peninsular Malaysia were detected during the years 1996, 1997 and 1998. These detection points captured by the Pettitt and SQ-MK tests are possibly related to climatic factors, such as El Niño and La Niña events. The findings also showed that the majority of the significant change points that exist in the series are related to the significant trend of the stations. Significant increasing trends of annual and seasonal mean, maximum and minimum temperatures in Peninsular Malaysia were found, with a range of 2-5 °C/100 years during the last 32 years. It was observed that the magnitudes of the increasing trend in minimum temperatures were larger than those of the maximum temperatures for most of the studied stations, particularly at the urban stations. These increases are suspected to be linked to the urban heat island effect in addition to the El Niño events.

  19. Generation of arbitrary two-point correlated directed networks with given modularity

    International Nuclear Information System (INIS)

    Zhou Jie; Xiao Gaoxi; Wong, Limsoon; Fu Xiuju; Ma, Stefan; Cheng, Tee Hiang

    2010-01-01

    In this Letter, we introduce measures of correlation in directed networks and develop an efficient algorithm for generating directed networks with arbitrary two-point correlation. Furthermore, a method is proposed for adjusting community structure in directed networks without changing the correlation. Effectiveness of both methods is verified by numerical results.

  20. Fat suppression strategies in MR imaging of breast cancer at 3.0 T. Comparison of the two-point dixon technique and the frequency selective inversion method

    International Nuclear Information System (INIS)

    Kaneko Mikami, Wakako; Kazama, Toshiki; Sato, Hirotaka

    2013-01-01

    The purpose of this study was to compare two fat suppression methods in contrast-enhanced MR imaging of breast cancer at 3.0 T: the two-point Dixon method and the frequency selective inversion method. Forty female patients with breast cancer underwent contrast-enhanced three-dimensional T1-weighted MR imaging at 3.0 T. Both the two-point Dixon method and the frequency selective inversion method were applied. Quantitative analyses of the residual fat signal-to-noise ratio and the contrast noise ratio (CNR) of lesion-to-breast parenchyma, lesion-to-fat, and parenchyma-to-fat were performed. Qualitative analyses of the uniformity of fat suppression, image contrast, and the visibility of breast lesions and axillary metastatic adenopathy were performed. The signal-to-noise ratio was significantly lower in the two-point Dixon method (P<0.001). All CNR values were significantly higher in the two-point Dixon method (P<0.001 and P=0.001, respectively). According to qualitative analysis, both the uniformity of fat suppression and image contrast with the two-point Dixon method were significantly higher (P<0.001 and P=0.002, respectively). Visibility of breast lesions and metastatic adenopathy was significantly better in the two-point Dixon method (P<0.001 and P=0.03, respectively). The two-point Dixon method suppressed the fat signal more potently and improved contrast and visibility of the breast lesions and axillary adenopathy. (author)

  1. Numerical Analysis of S-CO{sub 2} Test Loop Transient Conditions near the Critical Point of CO{sub 2}

    Energy Technology Data Exchange (ETDEWEB)

    Bae, Seong Jun; Oh, Bongseong; Ahn, Yoonhan; Baik, Seongjoon; Lee, Jekyoung; Lee, Jeong Ik [KAIST, Daejeon (Korea, Republic of)

    2016-05-15

    It was identified that controlling CO{sub 2} compressor operation near the critical point is one of the most important issues for operating a S-CO{sub 2} Brayton cycle with high efficiency. Despite the growing interest in the S-CO{sub 2} Brayton cycle, little previous research on the transient analysis of the S-CO{sub 2} system has been conducted. Moreover, previous studies had some limitations in the modelled test facility, and the experiments were not performed to observe specific scenarios. The KAIST research team has conducted S-CO{sub 2} system transient experiments with the CO{sub 2} compressing test facility called SCO{sub 2}PE (Supercritical CO{sub 2} Pressurizing Experiment) at KAIST. In this study, the authors use the transient analysis code GAMMA (Gas Multidimensional Multicomponent mixture Analysis) to analyze the experiment. Two transient scenarios were selected in this study: over-cooling and under-cooling situations. The selected transient situations are of particular interest since the compressor inlet conditions start to drift away from the critical point of CO{sub 2}. The results show that the GAMMA code can simulate the S-CO{sub 2} test facility, SCO{sub 2}PE. However, as shown in the cooling water flow rate increase scenario, the GAMMA code shows calculation errors when a phase change occurs. Furthermore, although the results of the cooling water flow rate decrease case show reasonable agreement with the experimental data, there are still some unexplained differences between the experimental data and the GAMMA code prediction.

  2. A postprocessing method based on chirp Z transform for FDTD calculation of point defect states in two-dimensional phononic crystals

    International Nuclear Information System (INIS)

    Su Xiaoxing; Wang Yuesheng

    2010-01-01

    In this paper, a new postprocessing method for the finite difference time domain (FDTD) calculation of the point defect states in two-dimensional (2D) phononic crystals (PNCs) is developed based on the chirp Z transform (CZT), one of the frequency zooming techniques. The numerical results for the defect states in 2D solid/liquid PNCs with single or double point defects show that compared with the fast Fourier transform (FFT)-based postprocessing method, the method can improve the estimation accuracy of the eigenfrequencies of the point defect states significantly when the FDTD calculation is run with relatively few iterations; and furthermore it can yield the point defect bands without calculating all eigenfrequencies outside the band gaps. The efficiency and accuracy of the FDTD method can be improved significantly with this new postprocessing method.

  3. A postprocessing method based on chirp Z transform for FDTD calculation of point defect states in two-dimensional phononic crystals

    Energy Technology Data Exchange (ETDEWEB)

    Su Xiaoxing, E-mail: xxsu@bjtu.edu.c [School of Electronic and Information Engineering, Beijing Jiaotong University, Beijing 100044 (China); Wang Yuesheng [Institute of Engineering Mechanics, Beijing Jiaotong University, Beijing 100044 (China)

    2010-09-01

    In this paper, a new postprocessing method for the finite difference time domain (FDTD) calculation of the point defect states in two-dimensional (2D) phononic crystals (PNCs) is developed based on the chirp Z transform (CZT), one of the frequency zooming techniques. The numerical results for the defect states in 2D solid/liquid PNCs with single or double point defects show that compared with the fast Fourier transform (FFT)-based postprocessing method, the method can improve the estimation accuracy of the eigenfrequencies of the point defect states significantly when the FDTD calculation is run with relatively few iterations; and furthermore it can yield the point defect bands without calculating all eigenfrequencies outside the band gaps. The efficiency and accuracy of the FDTD method can be improved significantly with this new postprocessing method.
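
    To illustrate the frequency-zooming idea behind this kind of postprocessing, the sketch below evaluates a zoomed DFT of a time signal directly; this is the quantity the chirp Z transform computes efficiently, and it is only a generic illustration (the signal, band edges and sampling rate are invented for the example, and the authors' FDTD pipeline is not reproduced).

        import numpy as np

        def zoom_dft(x, fs, f1, f2, m):
            """Evaluate the spectrum of x on m frequencies between f1 and f2.

            A real implementation would use the fast chirp Z transform instead
            of this O(N*m) direct sum; the result is the same zoomed spectrum.
            """
            n = np.arange(len(x))
            freqs = np.linspace(f1, f2, m)
            kernel = np.exp(-2j * np.pi * np.outer(freqs, n) / fs)  # one row per zoomed bin
            return freqs, kernel @ x

        # Illustrative probe signal with two narrowly spaced "defect-state" peaks.
        fs = 1000.0
        t = np.arange(2048) / fs
        x = np.sin(2 * np.pi * 201.3 * t) + 0.5 * np.sin(2 * np.pi * 203.1 * t)

        freqs, spectrum = zoom_dft(x, fs, 195.0, 210.0, 512)
        print(freqs[np.argmax(np.abs(spectrum))])  # peak frequency estimate inside the zoom band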

  4. Spectrography analysis of stainless steel by the point to point technique

    International Nuclear Information System (INIS)

    Bona, A.

    1986-01-01

    A method for the determination of the elements Ni, Cr, Mn, Si, Mo, Nb, Cu, Co and V in stainless steel by emission spectrographic analysis using high-voltage spark sources is presented. The 'point-to-point' technique is employed. The experimental parameters were optimized taking into account a compromise between detection sensitivity and measurement precision. The parameters investigated were the high-voltage capacitance, the inductance, the analytical and auxiliary gaps, the pre-burn spark period and the exposure time. The edge shape of the counter electrodes, the type of polishing and the diameter variation of the stainless steel electrodes were evaluated in preliminary assays. In addition, the degradation of the chemical power of the developer was also investigated. Counter electrodes of graphite, copper, aluminium and iron were employed, and the counter electrode itself was used as an internal standard. In the case of graphite counter electrodes, the iron lines were employed as the internal standard. The relative errors were the criteria for evaluation of these experiments. The National Bureau of Standards certified reference stainless steel standards and the Eletrometal Acos Finos S.A. samples (certified by the supplier) were employed for constructing the calibration systems and analytical curves. The best results were obtained using the conventional graphite counter electrodes. The inaccuracy and the imprecision of the proposed method varied from 2% to 15% and from 1% to 9%, respectively. The present technique was compared to other instrumental techniques such as inductively coupled plasma, X-ray fluorescence and neutron activation analysis. The advantages and disadvantages for each case were discussed. (author) [pt

  5. Two-point functions in a holographic Kondo model

    Science.gov (United States)

    Erdmenger, Johanna; Hoyos, Carlos; O'Bannon, Andy; Papadimitriou, Ioannis; Probst, Jonas; Wu, Jackson M. S.

    2017-03-01

    We develop the formalism of holographic renormalization to compute two-point functions in a holographic Kondo model. The model describes a (0+1)-dimensional impurity spin of a gauged SU(N) interacting with a (1+1)-dimensional, large-N, strongly-coupled Conformal Field Theory (CFT). We describe the impurity using Abrikosov pseudo-fermions, and define an SU(N)-invariant scalar operator O built from a pseudo-fermion and a CFT fermion. At large N the Kondo interaction is of the form O^{\dagger}O, which is marginally relevant, and generates a Renormalization Group (RG) flow at the impurity. A second-order mean-field phase transition occurs in which O condenses below a critical temperature, leading to the Kondo effect, including screening of the impurity. Via holography, the phase transition is dual to holographic superconductivity in (1+1)-dimensional Anti-de Sitter space. At all temperatures, spectral functions of O exhibit a Fano resonance, characteristic of a continuum of states interacting with an isolated resonance. In contrast to Fano resonances observed for example in quantum dots, our continuum and resonance arise from a (0+1)-dimensional UV fixed point and RG flow, respectively. In the low-temperature phase, the resonance comes from a pole in the Green's function of the form -i⟨O⟩^{2}, which is characteristic of a Kondo resonance.

  6. Two-point functions in a holographic Kondo model

    Energy Technology Data Exchange (ETDEWEB)

    Erdmenger, Johanna [Institut für Theoretische Physik und Astrophysik, Julius-Maximilians-Universität Würzburg,Am Hubland, D-97074 Würzburg (Germany); Max-Planck-Institut für Physik (Werner-Heisenberg-Institut),Föhringer Ring 6, D-80805 Munich (Germany); Hoyos, Carlos [Department of Physics, Universidad de Oviedo, Avda. Calvo Sotelo 18, 33007, Oviedo (Spain); O’Bannon, Andy [STAG Research Centre, Physics and Astronomy, University of Southampton,Highfield, Southampton SO17 1BJ (United Kingdom); Papadimitriou, Ioannis [SISSA and INFN - Sezione di Trieste, Via Bonomea 265, I 34136 Trieste (Italy); Probst, Jonas [Rudolf Peierls Centre for Theoretical Physics, University of Oxford,1 Keble Road, Oxford OX1 3NP (United Kingdom); Wu, Jackson M.S. [Department of Physics and Astronomy, University of Alabama, Tuscaloosa, AL 35487 (United States)

    2017-03-07

    We develop the formalism of holographic renormalization to compute two-point functions in a holographic Kondo model. The model describes a (0+1)-dimensional impurity spin of a gauged SU(N) interacting with a (1+1)-dimensional, large-N, strongly-coupled Conformal Field Theory (CFT). We describe the impurity using Abrikosov pseudo-fermions, and define an SU(N)-invariant scalar operator O built from a pseudo-fermion and a CFT fermion. At large N the Kondo interaction is of the form O{sup †}O, which is marginally relevant, and generates a Renormalization Group (RG) flow at the impurity. A second-order mean-field phase transition occurs in which O condenses below a critical temperature, leading to the Kondo effect, including screening of the impurity. Via holography, the phase transition is dual to holographic superconductivity in (1+1)-dimensional Anti-de Sitter space. At all temperatures, spectral functions of O exhibit a Fano resonance, characteristic of a continuum of states interacting with an isolated resonance. In contrast to Fano resonances observed for example in quantum dots, our continuum and resonance arise from a (0+1)-dimensional UV fixed point and RG flow, respectively. In the low-temperature phase, the resonance comes from a pole in the Green’s function of the form −i〈O〉{sup 2}, which is characteristic of a Kondo resonance.

  7. Two-point concrete resistivity measurements: interfacial phenomena at the electrode–concrete contact zone

    International Nuclear Information System (INIS)

    McCarter, W J; Taha, H M; Suryanto, B; Starrs, G

    2015-01-01

    AC impedance spectroscopy measurements are used to critically examine the end-to-end (two-point) testing technique employed in evaluating the bulk electrical resistivity of concrete. In particular, this paper focusses on the interfacial contact region between the electrode and specimen and the influence of the contacting medium and measurement frequency on the impedance response. Two-point and four-point electrode configurations were compared, and modelling of the impedance response was undertaken to identify and quantify the contribution of the electrode–specimen contact region to the measured impedance. Measurements are presented in both Bode and Nyquist formats to aid interpretation. Concrete mixes conforming to BSEN206-1 and BS8500-1 were investigated, which included concretes containing the supplementary cementitious materials fly ash and ground granulated blast-furnace slag. A measurement protocol is presented for the end-to-end technique in terms of test frequency and electrode–specimen contacting medium in order to minimize the electrode–specimen interfacial effect and ensure correct measurement of the bulk resistivity. (paper)

  8. Super-Relaxed (η)-Proximal Point Algorithms, Relaxed (η)-Proximal Point Algorithms, Linear Convergence Analysis, and Nonlinear Variational Inclusions

    Directory of Open Access Journals (Sweden)

    Agarwal RaviP

    2009-01-01

    Full Text Available We glance at recent advances to the general theory of maximal (set-valued) monotone mappings and their role demonstrated in examining convex programming and the closely related field of nonlinear variational inequalities. We focus mostly on applications of the super-relaxed (η)-proximal point algorithm to the context of solving a class of nonlinear variational inclusion problems, based on the notion of maximal (η)-monotonicity. Investigations highlighted in this communication are greatly influenced by the celebrated work of Rockafellar (1976), while others have played a significant part as well in generalizing the proximal point algorithm considered by Rockafellar (1976) to the case of the relaxed proximal point algorithm by Eckstein and Bertsekas (1992). Even for the linear convergence analysis for the overrelaxed (or super-relaxed) (η)-proximal point algorithm, the fundamental model for Rockafellar's case does the job. Furthermore, we attempt to explore possibilities of generalizing the Yosida regularization/approximation in light of maximal (η)-monotonicity, and then applying it to first-order evolution equations/inclusions.

  9. Consistency of Trend Break Point Estimator with Underspecified Break Number

    Directory of Open Access Journals (Sweden)

    Jingjing Yang

    2017-01-01

    Full Text Available This paper discusses the consistency of trend break point estimators when the number of breaks is underspecified. The consistency of break point estimators in a simple location model with level shifts has been well documented by researchers under various settings, including extensions such as allowing a time trend in the model. Despite the consistency of break point estimators of level shifts, there are few papers on the consistency of trend shift break point estimators in the presence of an underspecified break number. The simulation study and asymptotic analysis in this paper show that the trend shift break point estimator does not converge to the true break points when the break number is underspecified. In the case of two trend shifts, the inconsistency problem worsens if the magnitudes of the breaks are similar and the breaks are either both positive or both negative. The limiting distribution for the trend break point estimator is developed and closely approximates the finite sample performance.

  10. Floquet stability analysis of the longitudinal dynamics of two hovering model insects

    Science.gov (United States)

    Wu, Jiang Hao; Sun, Mao

    2012-01-01

    Because of the periodically varying aerodynamic and inertial forces of the flapping wings, a hovering or constant-speed flying insect is a cyclically forcing system, and, generally, the flight is not in a fixed-point equilibrium, but in a cyclic-motion equilibrium. Current stability theory of insect flight is based on the averaged model and treats the flight as a fixed-point equilibrium. In the present study, we treated the flight as a cyclic-motion equilibrium and used the Floquet theory to analyse the longitudinal stability of insect flight. Two hovering model insects were considered—a dronefly and a hawkmoth. The former had relatively high wingbeat frequency and small wing-mass to body-mass ratio, and hence very small amplitude of body oscillation; while the latter had relatively low wingbeat frequency and large wing-mass to body-mass ratio, and hence relatively large amplitude of body oscillation. For comparison, analysis using the averaged-model theory (fixed-point stability analysis) was also made. Results of both the cyclic-motion stability analysis and the fixed-point stability analysis were tested by numerical simulation using complete equations of motion coupled with the Navier–Stokes equations. The Floquet theory (cyclic-motion stability analysis) agreed well with the simulation for both the model dronefly and the model hawkmoth; but the averaged-model theory gave good results only for the dronefly. Thus, for an insect with relatively large body oscillation at wingbeat frequency, cyclic-motion stability analysis is required, and for their control analysis, the existing well-developed control theories for systems of fixed-point equilibrium are no longer applicable and new methods that take the cyclic variation of the flight dynamics into account are needed. PMID:22491980
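
    The Floquet procedure described above can be illustrated on any periodically forced linear system: integrate the linearized equations over one period to build the monodromy matrix and inspect its eigenvalues (the Floquet multipliers). The sketch below does this for a Mathieu-type oscillator, which is only an illustrative stand-in, not the insect flight-dynamics model of the paper.

        import numpy as np
        from scipy.integrate import solve_ivp

        # Mathieu-type oscillator y'' + (delta + eps*cos(t)) * y = 0,
        # written as a first-order system with forcing period T = 2*pi.
        delta, eps = 0.5, 0.3
        T = 2.0 * np.pi

        def rhs(t, x):
            y, v = x
            return [v, -(delta + eps * np.cos(t)) * y]

        def monodromy_matrix():
            # Columns: solutions after one period, started from the unit vectors.
            cols = [solve_ivp(rhs, (0.0, T), e, rtol=1e-10, atol=1e-12).y[:, -1]
                    for e in np.eye(2)]
            return np.column_stack(cols)

        multipliers = np.linalg.eigvals(monodromy_matrix())
        stable = np.all(np.abs(multipliers) <= 1.0 + 1e-8)
        print(multipliers, stable)  # multipliers inside/on the unit circle indicate a stable periodic state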

  11. Spin-k/2-spin-k/2 SU(2) two-point functions on the torus

    International Nuclear Information System (INIS)

    Kirsch, Ingo; Kucharski, Piotr

    2012-11-01

    We discuss a class of two-point functions on the torus of primary operators in the SU(2) Wess-Zumino-Witten model at integer level k. In particular, we construct an explicit expression for the current blocks of the spin-(k)/(2)-spin-(k)/(2) torus two-point functions for all k. We first examine the factorization limits of the proposed current blocks and test their monodromy properties. We then prove that the current blocks solve the corresponding Knizhnik-Zamolodchikov-like differential equations using the method of Mathur, Mukhi and Sen.

  12. Spin-k/2-spin-k/2 SU(2) two-point functions on the torus

    Energy Technology Data Exchange (ETDEWEB)

    Kirsch, Ingo [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany). Gruppe Theorie; Kucharski, Piotr [Warsaw Univ. (Poland). Inst. of Theoretical Physics

    2012-11-15

    We discuss a class of two-point functions on the torus of primary operators in the SU(2) Wess-Zumino-Witten model at integer level k. In particular, we construct an explicit expression for the current blocks of the spin-(k)/(2)-spin-(k)/(2) torus two-point functions for all k. We first examine the factorization limits of the proposed current blocks and test their monodromy properties. We then prove that the current blocks solve the corresponding Knizhnik-Zamolodchikov-like differential equations using the method of Mathur, Mukhi and Sen.

  13. The homological functor of a Bieberbach group with a cyclic point group of order two

    Science.gov (United States)

    Hassim, Hazzirah Izzati Mat; Sarmin, Nor Haniza; Ali, Nor Muhainiah Mohd; Masri, Rohaidah; Idrus, Nor'ashiqin Mohd

    2014-07-01

    The generalized presentation of a Bieberbach group with cyclic point group of order two can be obtained from the fact that any Bieberbach group of dimension n is a direct product of the group of the smallest dimension with a free abelian group. In this paper, by using the group presentation, the homological functor of a Bieberbach group with a cyclic point group of order two of dimension n is found.

  14. Mean density and two-point correlation function for the CfA redshift survey slices

    International Nuclear Information System (INIS)

    De Lapparent, V.; Geller, M.J.; Huchra, J.P.

    1988-01-01

    The effect of large-scale inhomogeneities on the determination of the mean number density and the two-point spatial correlation function were investigated for two complete slices of the extension of the Center for Astrophysics (CfA) redshift survey (de Lapparent et al., 1986). It was found that the mean galaxy number density for the two strips is uncertain by 25 percent, more so than previously estimated. The large uncertainty in the mean density introduces substantial uncertainty in the determination of the two-point correlation function, particularly at large scale; thus, for the 12-deg slice of the CfA redshift survey, the amplitude of the correlation function at intermediate scales is uncertain by a factor of 2. The large uncertainties in the correlation functions might reflect the lack of a fair sample. 45 references
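
    For readers less familiar with how such statistics are built, the sketch below shows the basic pair-counting estimate of a two-point correlation function (the simple natural estimator DD/RR - 1 on mock 3D positions). It is only a toy illustration; the survey analysis described above additionally involves the survey geometry, weighting and the uncertain mean density.

        import numpy as np
        from scipy.spatial import cKDTree

        rng = np.random.default_rng(0)

        # Mock "galaxies" clustered around 50 centres in a 100^3 box, plus a uniform random catalogue.
        centres = rng.uniform(0.0, 100.0, size=(50, 3))
        data = centres[rng.integers(0, 50, size=2000)] + rng.normal(scale=2.0, size=(2000, 3))
        randoms = rng.uniform(0.0, 100.0, size=(2000, 3))

        edges = np.linspace(1.0, 20.0, 20)  # separation bins

        def pair_counts(points, edges):
            """Pairs per separation bin (self-pairs and double counting cancel in the ratio)."""
            tree = cKDTree(points)
            return np.diff(tree.count_neighbors(tree, edges))

        xi = pair_counts(data, edges) / pair_counts(randoms, edges) - 1.0  # natural estimator, equal-size catalogues
        print(np.round(xi, 2))  # positive on small scales for the clustered mock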

  15. Recent progress in fission at saddle point and scission point

    International Nuclear Information System (INIS)

    Blons, J.; Paya, D.; Signarbieux, C.

    High resolution measurements of 230 Th and 232 Th fission cross sections for neutrons exhibit a fine structure. Such a structure is interpreted as a superposition of two rotational bands in the third, asymmetric, well of the fission barrier. The fragment mass distribution in the thermal fission of 235 U and 233 U does not show any even-odd effect, even at the highest kinetic energies. This is the mark of a strong viscosity in the descent from saddle point to scission point [fr

  16. Coherent electron focusing with quantum point contacts in a two-dimensional electron gas

    NARCIS (Netherlands)

    Houten, H. van; Beenakker, C.W.J.; Williamson, J.G.; Broekaart, M.E.I.; Loosdrecht, P.H.M. van; Wees, B.J. van; Mooij, J.E.; Foxon, C.T.; Harris, J.J.

    1989-01-01

    Transverse electron focusing in a two-dimensional electron gas is investigated experimentally and theoretically for the first time. A split Schottky gate on top of a GaAs-AlxGa1–xAs heterostructure defines two point contacts of variable width, which are used as injector and collector of ballistic

  17. The shooting method and multiple solutions of two/multi-point BVPs of second-order ODE

    Directory of Open Access Journals (Sweden)

    Man Kam Kwong

    2006-06-01

    Full Text Available Within the last decade, there has been growing interest in the study of multiple solutions of two- and multi-point boundary value problems of nonlinear ordinary differential equations as fixed points of a cone mapping. Undeniably many good results have emerged. The purpose of this paper is to point out that, in the special case of second-order equations, the shooting method can be an effective tool, sometimes yielding better results than those obtainable via fixed point techniques.
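
    As a concrete illustration of the point made above, the sketch below applies simple shooting to a classical second-order two-point BVP known to admit two solutions, y'' = (3/2) y^2 with y(0) = 4 and y(1) = 1 (an illustrative textbook example, not taken from the paper): the unknown initial slope s = y'(0) is adjusted until the terminal condition holds, and each sign change of the residual yields a distinct solution.

        import numpy as np
        from scipy.integrate import solve_ivp
        from scipy.optimize import brentq

        # BVP: y'' = 1.5*y**2, y(0) = 4, y(1) = 1.  One solution is the exact
        # y(x) = 4/(1+x)**2, i.e. y'(0) = -8; a second solution also exists.

        def residual(s):
            """y(1) - 1 obtained by shooting with initial slope s."""
            sol = solve_ivp(lambda x, u: [u[1], 1.5 * u[0] ** 2],
                            (0.0, 1.0), [4.0, s], rtol=1e-9, atol=1e-9)
            return sol.y[0, -1] - 1.0

        # Scan for sign changes of the residual, then refine each bracket.
        slopes = np.linspace(-60.0, 0.0, 121)
        values = np.array([residual(s) for s in slopes])
        roots = [brentq(residual, a, b)
                 for a, b, ra, rb in zip(slopes[:-1], slopes[1:], values[:-1], values[1:])
                 if ra * rb < 0]
        print(roots)  # expect a root at s = -8 and a second root near s ≈ -36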

  18. Hazard analysis and critical control point (HACCP) history and conceptual overview.

    Science.gov (United States)

    Hulebak, Karen L; Schlosser, Wayne

    2002-06-01

    Hazard Analysis and Critical Control Point (HACCP) is a system that enables the production of safe meat and poultry products through the thorough analysis of production processes, the identification of all hazards that are likely to occur in the production establishment, the identification of the critical points in the process at which these hazards may be introduced into the product and therefore should be controlled, the establishment of critical limits for control at those points, the verification of these prescribed steps, and the methods by which the processing establishment and the regulatory authority can monitor how well process control through the HACCP plan is working. The history of the development of HACCP is reviewed, and examples of practical applications of HACCP are described.

  19. Solving Singular Two-Point Boundary Value Problems Using Continuous Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Omar Abu Arqub

    2012-01-01

    Full Text Available In this paper, the continuous genetic algorithm is applied for the solution of singular two-point boundary value problems, where smooth solution curves are used throughout the evolution of the algorithm to obtain the required nodal values. The proposed technique might be considered as a variation of the finite difference method in the sense that each of the derivatives is replaced by an appropriate difference quotient approximation. This novel approach possesses main advantages; it can be applied without any limitation on the nature of the problem, the type of singularity, and the number of mesh points. Numerical examples are included to demonstrate the accuracy, applicability, and generality of the presented technique. The results reveal that the algorithm is very effective, straightforward, and simple.

  20. Two-point boundary correlation functions of dense loop models

    Directory of Open Access Journals (Sweden)

    Alexi Morin-Duchesne, Jesper Lykke Jacobsen

    2018-06-01

    Full Text Available We investigate six types of two-point boundary correlation functions in the dense loop model. These are defined as ratios $Z/Z^0$ of partition functions on the $m\times n$ square lattice, with the boundary condition for $Z$ depending on two points $x$ and $y$. We consider: the insertion of an isolated defect (a) and a pair of defects (b) in a Dirichlet boundary condition, the transition (c) between Dirichlet and Neumann boundary conditions, and the connectivity of clusters (d), loops (e) and boundary segments (f) in a Neumann boundary condition. For the model of critical dense polymers, corresponding to a vanishing loop weight ($\beta = 0$), we find determinant and pfaffian expressions for these correlators. We extract the conformal weights of the underlying conformal fields and find $\Delta = -\frac18$, $0$, $-\frac3{32}$, $\frac38$, $1$, $\tfrac\theta\pi(1+\tfrac{2\theta}\pi)$, where $\theta$ encodes the weight of one class of loops for the correlator of type f. These results are obtained by analysing the asymptotics of the exact expressions, and by using the Cardy-Peschel formula in the case where $x$ and $y$ are set to the corners. For type b, we find a $\log|x-y|$ dependence from the asymptotics, and a $\ln(\ln n)$ term in the corner free energy. This is consistent with the interpretation of the boundary condition of type b as the insertion of a logarithmic field belonging to a rank-two Jordan cell. For the other values of $\beta = 2\cos\lambda$, we use the hypothesis of conformal invariance to predict the conformal weights and find $\Delta = \Delta_{1,2}$, $\Delta_{1,3}$, $\Delta_{0,\frac12}$, $\Delta_{1,0}$, $\Delta_{1,-1}$ and $\Delta_{\frac{2\theta}\lambda+1,\frac{2\theta}\lambda+1}$, extending the results of critical dense polymers. With the results for type f, we reproduce a Coulomb gas prediction for the valence bond entanglement entropy of Jacobsen and Saleur.

  1. Resolution enhancement of scanning four-point-probe measurements on two-dimensional systems

    DEFF Research Database (Denmark)

    Hansen, Torben Mikael; Stokbro, Kurt; Hansen, Ole

    2003-01-01

    A method to improve the resolution of four-point-probe measurements of two-dimensional (2D) and quasi-2D systems is presented. By mapping the conductance on a dense grid around a target area and postprocessing the data, the resolution can be improved by a factor of approximately 50, to better than 1/15 of the four-point-probe electrode spacing. The real conductance sheet is simulated by a grid of discrete resistances, which is optimized by means of a standard optimization algorithm until the simulated voltage-to-current ratios converge with the measurement. The method has been tested against simulated...

  2. Miniaturization for Point-of-Care Analysis: Platform Technology for Almost Every Biomedical Assay.

    Science.gov (United States)

    Schumacher, Soeren; Sartorius, Dorian; Ehrentreich-Förster, Eva; Bier, Frank F

    2012-10-01

    Platform technologies for the changing needs of diagnostics are one of the main challenges in medical device technology. From one point of view, the demand for new and more versatile diagnostics is increasing due to a deeper knowledge of biomarkers and their association with diseases. From another point of view, a decentralization of diagnostics will occur, since decisions can be made faster, resulting in a higher success of therapy. Hence, new types of technologies have to be established which enable multiparameter analysis at the point of care. Within this review-like article, a system called the Fraunhofer ivD-platform is introduced. It consists of a credit-card-sized cartridge with integrated reagents, sensors and pumps and a read-out/processing unit. Within the cartridge the assay runs fully automated within 15-20 minutes. Due to the open design of the platform, different analyses such as antibody, serological or DNA assays can be performed. Specific examples of these three different assay types are given to show the broad applicability of the system.

  3. Identification of the Key Fields and Their Key Technical Points of Oncology by Patent Analysis.

    Science.gov (United States)

    Zhang, Ting; Chen, Juan; Jia, Xiaofeng

    2015-01-01

    This paper aims to identify the key fields and their key technical points of oncology by patent analysis. Patents of oncology applied from 2006 to 2012 were searched in the Thomson Innovation database. The key fields and their key technical points were determined by analyzing the Derwent Classification (DC) and the International Patent Classification (IPC), respectively. Patent applications in the top ten DC occupied 80% of all the patent applications of oncology, which were the ten fields of oncology to be analyzed. The number of patent applications in these ten fields of oncology was standardized based on patent applications of oncology from 2006 to 2012. For each field, standardization was conducted separately for each of the seven years (2006-2012) and the mean of the seven standardized values was calculated to reflect the relative amount of patent applications in that field; meanwhile, regression analysis using time (year) and the standardized values of patent applications in seven years (2006-2012) was conducted so as to evaluate the trend of patent applications in each field. Two-dimensional quadrant analysis, together with the professional knowledge of oncology, was taken into consideration in determining the key fields of oncology. The fields located in the quadrant with high relative amount or increasing trend of patent applications are identified as key ones. By using the same method, the key technical points in each key field were identified. Altogether 116,820 patents of oncology applied from 2006 to 2012 were retrieved, and four key fields with twenty-nine key technical points were identified, including "natural products and polymers" with nine key technical points, "fermentation industry" with twelve ones, "electrical medical equipment" with four ones, and "diagnosis, surgery" with four ones. The results of this study could provide guidance on the development direction of oncology, and also help researchers broaden innovative ideas and discover new

  4. Expanded uncertainty associated with determination of isotope enrichment factors: Comparison of two point calculation and Rayleigh-plot.

    Science.gov (United States)

    Julien, Maxime; Gilbert, Alexis; Yamada, Keita; Robins, Richard J; Höhener, Patrick; Yoshida, Naohiro; Remaud, Gérald S

    2018-01-01

    The enrichment factor (ε) is a common way to express Isotope Effects (IEs) associated with a phenomenon. Many studies determine ε using a Rayleigh-plot, which needs multiple data points. More recent articles describe an alternative method using the Rayleigh equation that allows the determination of ε using only one experimental point, but this method is often subject to controversy. However, a calculation method using two points (one experimental point and one at t0) should lead to the same results, because the calculation is derived from the Rayleigh equation. Still, it is frequently asked "what is the valid domain of use of this two point calculation?" The primary aim of the present work is a systematic comparison of the results obtained with these two methodologies and the determination of the conditions required for the valid calculation of ε. In order to evaluate the efficiency of the two approaches, the expanded uncertainty (U) associated with determining ε has been calculated using experimental data from three published articles. The second objective of the present work is to describe how to determine the expanded uncertainty (U) associated with determining ε. Comparative methodologies using both the Rayleigh-plot and the two point calculation are detailed, and it is clearly demonstrated that calculation of ε using a single data point can give the same result as a Rayleigh-plot provided one strict condition is respected: that the experimental value is measured at a small fraction of unreacted substrate (f < 30%). This study will help stable isotope users to present their results in the more rigorous form ε ± U and therefore to better define the significance of experimental results prior to interpretation. Capsule: The enrichment factor can be determined through two different methods, and the calculation of the associated expanded uncertainty allows checking its significance. Copyright © 2017 Elsevier B.V. All rights reserved.
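
    To make the comparison concrete, the sketch below computes ε both ways from synthetic data, using the Rayleigh relation in δ-notation, ln[(δ + 1000)/(δ0 + 1000)] = (ε/1000) ln f, with f the unreacted fraction. The parameter values are invented for the illustration and are not taken from the cited datasets.

        import numpy as np

        eps_true = -25.0   # per mil, used only to generate the synthetic data
        delta0 = -30.0     # per mil, substrate composition at t0

        def delta_of_f(f):
            """Rayleigh evolution of the substrate delta value at unreacted fraction f."""
            return (delta0 + 1000.0) * f ** (eps_true / 1000.0) - 1000.0

        f = np.array([0.9, 0.7, 0.5, 0.3, 0.2, 0.1])  # sampled course of the reaction
        delta = delta_of_f(f)

        # (1) Rayleigh-plot: slope of ln[(delta+1000)/(delta0+1000)] against ln f gives eps/1000.
        eps_plot = 1000.0 * np.polyfit(np.log(f), np.log((delta + 1000.0) / (delta0 + 1000.0)), 1)[0]

        # (2) Two point calculation: a single experimental point (here f = 0.2) plus the t0 point.
        i = 4
        eps_two_point = 1000.0 * np.log((delta[i] + 1000.0) / (delta0 + 1000.0)) / np.log(f[i])

        print(eps_plot, eps_two_point)  # both recover eps_true on noise-free data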

  5. Inhibition effect of calcium hydroxide point and chlorhexidine point on root canal bacteria of necrosis teeth

    Directory of Open Access Journals (Sweden)

    Andry Leonard Je

    2006-03-01

    Full Text Available Calcium hydroxide points and Chlorhexidine points are new drugs for eliminating bacteria in the root canal. The points slowly and in a controlled manner release calcium hydroxide and chlorhexidine into the root canal. The purpose of the study was to determine the effectivity of the Calcium hydroxide point (Calcium hydroxide plus point) and the Chlorhexidine point in eliminating the root canal bacteria of necrotic teeth. In this study 14 subjects were divided into 2 groups. The first group was treated with Calcium hydroxide points and the second was treated with Chlorhexidine points. The bacteriological samples were measured with spectrophotometry. The paired t-test analysis (before and after) showed a significant difference for both the first and the second group. The independent t-test, which analysed the effectivity of the two groups, did not show a significant difference. Although there was no significant difference in the statistical test, the second group eliminated more bacteria than the first group. The present findings indicated that the use of the Chlorhexidine point was better than the Calcium hydroxide point over a seven-day period. The conclusion is that the Chlorhexidine point and the Calcium hydroxide point, used as root canal medicaments, effectively eliminate root canal bacteria of necrotic teeth.

  6. Conservation laws for a system of two point masses in general relativity

    International Nuclear Information System (INIS)

    Damour, Thibaut; Deruelle, Nathalie

    1981-01-01

    We study the symmetries of the generalized Lagrangian of two point masses, in the post-post-Newtonian approximation of General Relativity. We deduce, via Noether's theorem, conservation laws for energy, linear and angular momentum, as well as a generalisation of the center-of-mass theorem [fr

  7. Two-point versus multiple-point geostatistics: the ability of geostatistical methods to capture complex geobodies and their facies associations—an application to a channelized carbonate reservoir, southwest Iran

    International Nuclear Information System (INIS)

    Hashemi, Seyyedhossein; Javaherian, Abdolrahim; Ataee-pour, Majid; Khoshdel, Hossein

    2014-01-01

    This study showed how different sources of data can be employed in a multiple-point simulation algorithm to get reliable facies models. In addition, concerning the reproduction of curvilinear channel bodies, the modeling results revealed the strength of MPS algorithms (SNESIM in this study) in comparison with two-point geostatistical methods (including the SIS and TGS). (paper)

  8. Dynamical pairwise entanglement and two-point correlations in the three-ligand spin-star structure

    Science.gov (United States)

    Motamedifar, M.

    2017-10-01

    We consider the three-ligand spin-star structure through homogeneous Heisenberg interactions (XXX-3LSSS) in the framework of dynamical pairwise entanglement. It is shown that the time evolution of the central qubit "one-particle" state (COPS) brings about the generation of quantum W states at periodical time instants. On the contrary, W states cannot be generated from the time evolution of a ligand "one-particle" state (LOPS). We also investigate the dynamical behavior of two-point quantum correlations as well as the expectation values of the different spin components for each element in the XXX-3LSSS. It is found that, when a W state is generated, the same value of the concurrence between any two arbitrary qubits arises from the xx and yy two-point quantum correlations. In contrast, the zz quantum correlation between any two qubits vanishes at these time instants.

  9. Exact two-point resistance, and the simple random walk on the complete graph minus N edges

    Energy Technology Data Exchange (ETDEWEB)

    Chair, Noureddine, E-mail: n.chair@ju.edu.jo

    2012-12-15

    An analytical approach is developed to obtain the exact expressions for the two-point resistance and the total effective resistance of the complete graph minus N edges of the opposite vertices. These expressions are written in terms of certain numbers that we introduce, which we call the Bejaia and the Pisa numbers; these numbers are the natural generalizations of the bisected Fibonacci and Lucas numbers. The correspondence between random walks and resistor networks is then used to obtain the exact expressions for the first passage and mean first passage times on this graph. - Highlights: • We obtain exact formulas for the two-point resistance of the complete graph minus N edges. • We obtain also the total effective resistance of this graph. • We modified Schwatt's formula on trigonometrical power sums to suit our computations. • We introduced the generalized bisected Fibonacci and Lucas numbers: the Bejaia and the Pisa numbers. • The first passage and mean first passage times of the random walks have exact expressions.
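
    The two-point resistances discussed above can be cross-checked numerically from the graph Laplacian, using R_ij = L+_ii + L+_jj - 2 L+_ij with L+ the Moore-Penrose pseudoinverse. The sketch below does this for the complete graph on 2N vertices with the N edges between opposite vertices removed; it is a numerical illustration, not the paper's closed-form result.

        import numpy as np

        def two_point_resistance(adjacency, i, j):
            """Effective resistance between nodes i and j of a unit-resistor network."""
            laplacian = np.diag(adjacency.sum(axis=1)) - adjacency
            lp = np.linalg.pinv(laplacian)  # Moore-Penrose pseudoinverse of the Laplacian
            return lp[i, i] + lp[j, j] - 2.0 * lp[i, j]

        def complete_graph_minus_matching(n_pairs):
            """Complete graph on 2*n_pairs vertices, with the edge (k, k + n_pairs) removed for every k."""
            n = 2 * n_pairs
            a = np.ones((n, n)) - np.eye(n)
            for k in range(n_pairs):
                a[k, k + n_pairs] = a[k + n_pairs, k] = 0.0
            return a

        adj = complete_graph_minus_matching(5)  # 10 vertices, opposite edges removed
        print(two_point_resistance(adj, 0, 5))  # between two "opposite" vertices
        print(two_point_resistance(adj, 0, 1))  # between two adjacent vertices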

  10. PREMOR: a point reactor exposure model computer code for survey analysis of power plant performance

    International Nuclear Information System (INIS)

    Vondy, D.R.

    1979-10-01

    The PREMOR computer code was written to exploit a simple, two-group point nuclear reactor power plant model for survey analysis. Up to thirteen actinides, fourteen fission products, and one lumped absorber nuclide density are followed over a reactor history. Successive feed batches are accounted for with provision for from one to twenty batches resident. The effect of exposure of each of the batches to the same neutron flux is determined

  11. PREMOR: a point reactor exposure model computer code for survey analysis of power plant performance

    Energy Technology Data Exchange (ETDEWEB)

    Vondy, D.R.

    1979-10-01

    The PREMOR computer code was written to exploit a simple, two-group point nuclear reactor power plant model for survey analysis. Up to thirteen actinides, fourteen fission products, and one lumped absorber nuclide density are followed over a reactor history. Successive feed batches are accounted for with provision for from one to twenty batches resident. The effect of exposure of each of the batches to the same neutron flux is determined.

  12. Monthly variations of dew point temperature in the coterminous United States

    Science.gov (United States)

    Robinson, Peter J.

    1998-11-01

    The dew point temperature, Td, data from the surface airways data set of the U.S. National Climatic Data Center were used to develop a basic dew point climatology for the coterminous United States. Quality control procedures were an integral part of the analysis. Daily Td, derived as the average of eight observations at 3-hourly intervals, for 222 stations for the 1961-1990 period were used. The annual and seasonal pattern of average values showed a clear south-north decrease in the eastern portion of the nation, a trend which was most marked in winter. In the west, values decreased inland from the Pacific Coast. Inter-annual variability was generally low when actual mean values were high. A cluster analysis suggested that the area could be divided into six regions, two oriented north-south in the west, four aligned east-west in the area east of the Rocky Mountains. Day-to-day variability was low in all seasons in the two western clusters, but showed a distinct winter maximum in the east. This was explained in broad terms by consideration of air flow regimes, with the Pacific Ocean and the Gulf of Mexico acting as the major moisture sources. Comparison of values for pairs of nearby stations suggested that Td was rather insensitive to local moisture sources. Analysis of the patterns of occurrence of dew points exceeding the 95th percentile threshold indicated that extremes in summer tend to be localized and short-lived, while in winter they are more widespread and persistent.

  13. Dynamics in discrete two-dimensional nonlinear Schrödinger equations in the presence of point defects

    DEFF Research Database (Denmark)

    Christiansen, Peter Leth; Gaididei, Yuri Borisovich; Rasmussen, Kim

    1996-01-01

    The dynamics of two-dimensional discrete structures is studied in the framework of the generalized two-dimensional discrete nonlinear Schrodinger equation. The nonlinear coupling in the form of the Ablowitz-Ladik nonlinearity and point impurities is taken into account. The stability properties of the stationary solutions are examined. The essential importance of the existence of stable immobile solitons in the two-dimensional dynamics of the traveling pulses is demonstrated. The typical scenario of the two-dimensional quasicollapse of a moving intense pulse represents the formation of standing trapped narrow spikes. The influence of the point impurities on this dynamics is also investigated....

  14. Determination of point of incidence for the case of reflection or refraction at spherical surface knowing two points lying on the ray.

    Science.gov (United States)

    Mikš, Antonín; Novák, Pavel

    2017-09-01

    The paper is focused on the problem of determination of the point of incidence of a light ray for the case of reflection or refraction at the spherical optical surface, assuming that two fixed points in space that the sought light ray should go through are given. The requirement is that one of these points lies on the incident ray and the other point on the reflected/refracted ray. Although at first glance it seems to be a simple problem, it will be shown that it has no simple analytical solution. The basic idea of the solution is given, and it is shown that the problem leads to a nonlinear equation in one variable. The roots of the resulting nonlinear equation can be found by numerical methods of mathematical optimization. The proposed methods were implemented in MATLAB, and the proper function of these algorithms was verified on several examples.
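
    The reduction to a single nonlinear equation can be illustrated in two dimensions for the reflection case: parameterize the candidate point of incidence on the circle by an angle, require that the rays towards the two given points make mirror-image angles with the surface normal, and find the root numerically. This is a simplified 2D sketch of the general idea, with invented input points, not the authors' algorithm.

        import numpy as np
        from scipy.optimize import brentq

        R = 1.0                   # radius of the (2D) spherical mirror, centred at the origin
        A = np.array([2.0, 0.5])  # point on the incident ray
        B = np.array([1.5, 1.8])  # point on the reflected ray

        def cross2(a, b):
            return a[0] * b[1] - a[1] * b[0]

        def mirror_residual(theta):
            """Signed-angle mismatch of the reflection condition at the candidate point P(theta)."""
            p = R * np.array([np.cos(theta), np.sin(theta)])
            n = p / R                                  # outward unit normal of the circle
            u = (A - p) / np.linalg.norm(A - p)
            v = (B - p) / np.linalg.norm(B - p)
            ang_u = np.arctan2(cross2(n, u), np.dot(n, u))
            ang_v = np.arctan2(cross2(n, v), np.dot(n, v))
            return ang_u + ang_v                       # zero when u and v are mirror images about n

        thetas = np.linspace(-np.pi, np.pi, 721)
        vals = np.array([mirror_residual(t) for t in thetas])
        roots = [brentq(mirror_residual, a, b)
                 for a, b, ra, rb in zip(thetas[:-1], thetas[1:], vals[:-1], vals[1:])
                 if ra * rb < 0]
        print(roots)  # candidate angles; the physically valid ones must also face both A and B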

  15. The solution of the neutron point kinetics equation with stochastic extension: an analysis of two moments

    Energy Technology Data Exchange (ETDEWEB)

    Silva, Milena Wollmann da; Vilhena, Marco Tullio M.B.; Bodmann, Bardo Ernst J.; Vasques, Richard, E-mail: milena.wollmann@ufrgs.br, E-mail: vilhena@mat.ufrgs.br, E-mail: bardobodmann@ufrgs.br, E-mail: richard.vasques@fulbrightmail.org [Universidade Federal do Rio Grande do Sul (UFRGS), Porto Alegre, RS (Brazil). Programa de Pos-Graduacao em Engenharia Mecanica

    2015-07-01

    The neutron point kinetics equation, which models the time-dependent behavior of nuclear reactors, is often used to understand the dynamics of nuclear reactor operations. It consists of a system of coupled differential equations that models the interaction between (i) the neutron population; and (ii) the concentration of the delayed neutron precursors, which are radioactive isotopes formed in the fission process that decay through neutron emission. These equations are deterministic in nature, and therefore can provide only average values of the modeled populations. However, the actual dynamical process is stochastic: the neutron density and the delayed neutron precursor concentrations vary randomly with time. To address this stochastic behavior, Hayes and Allen have generalized the standard deterministic point kinetics equation. They derived a system of stochastic differential equations that can accurately model the random behavior of the neutron density and the precursor concentrations in a point reactor. Due to the stiffness of these equations, this system was numerically implemented using a stochastic piecewise constant approximation method (Stochastic PCA). Here, we present a study of the influence of stochastic fluctuations on the results of the neutron point kinetics equation. We reproduce the stochastic formulation introduced by Hayes and Allen and compute Monte Carlo numerical results for examples with constant and time-dependent reactivity, comparing these results with stochastic and deterministic methods found in the literature. Moreover, we introduce a modified version of the stochastic method to obtain a non-stiff solution, analogous to a previously derived deterministic approach. (author)

  16. The solution of the neutron point kinetics equation with stochastic extension: an analysis of two moments

    International Nuclear Information System (INIS)

    Silva, Milena Wollmann da; Vilhena, Marco Tullio M.B.; Bodmann, Bardo Ernst J.; Vasques, Richard

    2015-01-01

    The neutron point kinetics equation, which models the time-dependent behavior of nuclear reactors, is often used to understand the dynamics of nuclear reactor operations. It consists of a system of coupled differential equations that models the interaction between (i) the neutron population; and (ii) the concentration of the delayed neutron precursors, which are radioactive isotopes formed in the fission process that decay through neutron emission. These equations are deterministic in nature, and therefore can provide only average values of the modeled populations. However, the actual dynamical process is stochastic: the neutron density and the delayed neutron precursor concentrations vary randomly with time. To address this stochastic behavior, Hayes and Allen have generalized the standard deterministic point kinetics equation. They derived a system of stochastic differential equations that can accurately model the random behavior of the neutron density and the precursor concentrations in a point reactor. Due to the stiffness of these equations, this system was numerically implemented using a stochastic piecewise constant approximation method (Stochastic PCA). Here, we present a study of the influence of stochastic fluctuations on the results of the neutron point kinetics equation. We reproduce the stochastic formulation introduced by Hayes and Allen and compute Monte Carlo numerical results for examples with constant and time-dependent reactivity, comparing these results with stochastic and deterministic methods found in the literature. Moreover, we introduce a modified version of the stochastic method to obtain a non-stiff solution, analogous to a previously derived deterministic approach. (author)
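
    For reference, the deterministic point kinetics system that the stochastic formulation generalizes can be integrated directly; the sketch below uses one effective delayed-neutron group, illustrative parameter values and a step reactivity insertion, and does not reproduce Hayes and Allen's stochastic model or the modified method introduced in the paper.

        import numpy as np
        from scipy.integrate import solve_ivp

        # One-delayed-group point kinetics:
        #   dn/dt = (rho - beta)/Lambda * n + lam * c
        #   dc/dt = beta/Lambda * n - lam * c
        beta, lam, Lambda = 0.0065, 0.08, 1.0e-4  # illustrative parameters
        rho = 0.003                               # step reactivity, below prompt critical

        def rhs(t, y):
            n, c = y
            return [(rho - beta) / Lambda * n + lam * c,
                    beta / Lambda * n - lam * c]

        n0 = 1.0
        c0 = beta / (lam * Lambda) * n0           # precursors in equilibrium with the initial flux
        sol = solve_ivp(rhs, (0.0, 10.0), [n0, c0], method="Radau",  # stiff system
                        t_eval=np.linspace(0.0, 10.0, 11), rtol=1e-8, atol=1e-10)
        print(sol.y[0])                           # neutron density rising after the insertion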

  17. Uncertainty analysis of point-by-point sampling complex surfaces using touch probe CMMs DOE for complex surfaces verification with CMM

    DEFF Research Database (Denmark)

    Barini, Emanuele Modesto; Tosello, Guido; De Chiffre, Leonardo

    2010-01-01

    The paper describes a study concerning point-by-point sampling of complex surfaces using tactile CMMs. A four factor, two level completely randomized factorial experiment was carried out, involving measurements on a complex surface configuration item comprising a sphere, a cylinder and a cone, co...

  18. In-Plane free Vibration Analysis of an Annular Disk with Point Elastic Support

    Directory of Open Access Journals (Sweden)

    S. Bashmal

    2011-01-01

    Full Text Available In-plane free vibrations of an elastic and isotropic annular disk with elastic constraints at the inner and outer boundaries, which are applied either along the entire periphery of the disk or at a point, are investigated. The boundary characteristic orthogonal polynomials are employed in the Rayleigh-Ritz method to obtain the frequency parameters and the associated mode shapes. Boundary characteristic orthogonal polynomials are generated for the free boundary conditions of the disk, while artificial springs are used to account for different boundary conditions. The frequency parameters for different boundary conditions of the outer edge are evaluated and compared with those available in the published studies and computed from a finite element model. The computed mode shapes are presented for a disk clamped at the inner edge and point supported at the outer edge to illustrate the free in-plane vibration behavior of the disk. Results show that the addition of a point clamped support causes some of the higher modes to split into two different frequencies with different mode shapes.

  19. Mapping correlation of a simulated dark matter source and a point source in the gamma-ray sky - Oral Presentation

    Energy Technology Data Exchange (ETDEWEB)

    Gibson, Alexander [SLAC National Accelerator Lab., Menlo Park, CA (United States)

    2015-08-23

    In my research, I analyzed how two gamma-ray source models interact with one another when optimizing to fit data. This is important because it becomes hard to distinguish between the two point sources when they are close together or when looking at low-energy photons. The reason for the first is obvious; the reason they become harder to distinguish at lower photon energies is that the resolving power of the Fermi Gamma-Ray Space Telescope gets worse at lower energies. When the two point sources are highly correlated (hard to distinguish between), we need to change our method of statistical analysis. What I did was show that highly correlated sources have larger uncertainties associated with them, caused by an optimizer not knowing which point source's parameters to optimize. I also mapped out where there is high correlation for two dark matter point sources of different theoretical masses, so that people analyzing them in the future know where they have to use more sophisticated statistical analysis.

  20. Pachyonychia congenita: Report of two cases and mutation analysis

    Directory of Open Access Journals (Sweden)

    Jia-Ming Yeh

    2012-09-01

    Full Text Available Pachyonychia congenita (PC) comprises a group of rare autosomal dominant genetic disorders that involve ectodermal dysplasia. It is characterized by hypertrophic nail dystrophy, focal palmoplantar keratoderma, follicular keratoses, and oral leukokeratosis. Historically, PC has been subdivided into two subtypes, PC-1 or PC-2, on the basis of clinical presentation. However, differential diagnosis on clinical grounds, especially in young and/or not fully penetrant patients, can be difficult. In addition, recent clinical analysis of large case series has shown that there is considerable phenotypic overlap between these two subtypes. With the advent of molecular genetics and the identification of the genes causing PC, more specific nomenclature has been adopted. Therefore, diagnosis at the molecular level is useful and important to confirm the clinical impression. In this report, we describe two typical cases of PC in which mutation analysis revealed a small deletion (514_516delACC, Asn172del) and a point mutation (487 G > A, GAG → AAG, Glu163Lys) in the KRT6A gene.

  1. Holographic two-point functions for Janus interfaces in the D1/D5 CFT

    Energy Technology Data Exchange (ETDEWEB)

    Chiodaroli, Marco [Department of Physics and Astronomy, Uppsala University, SE-75108 Uppsala (Sweden); Estes, John [Department of Physics, Long Island University,1 University Plaza, Brooklyn, NY 11201 (United States); Korovin, Yegor [Max-Planck-Institut für Gravitationsphysik, Albert-Einstein-Institut, Am Mühlenberg 1, 14476 Golm (Germany)

    2017-04-26

    This paper investigates scalar perturbations in the top-down supersymmetric Janus solutions dual to conformal interfaces in the D1/D5 CFT, finding analytic closed-form solutions. We obtain an explicit representation of the bulk-to-bulk propagator and extract the two-point correlation function of the dual operator with itself, whose form is not fixed by symmetry alone. We give an expression involving the sum of conformal blocks associated with the bulk-defect operator product expansion and briefly discuss finite-temperature extensions. To our knowledge, this is the first computation of a two-point function which is not completely determined by symmetry for a fully-backreacted, top-down holographic defect.

  2. A proximal point algorithm with generalized proximal distances to BEPs

    OpenAIRE

    Bento, G. C.; Neto, J. X. Cruz; Lopes, J. O.; Soares Jr, P. A.; Soubeyran, A.

    2014-01-01

    We consider a bilevel problem involving two monotone equilibrium bifunctions and we show that this problem can be solved by a proximal point method with generalized proximal distances. We propose a framework for the convergence analysis of the sequences generated by the algorithm. This class of problems is very interesting because it covers mathematical programs and optimization problems under equilibrium constraints. As an application, we consider the problem of the stability and change dyna...

  3. The Melting Point of Palladium Using Miniature Fixed Points of Different Ceramic Materials: Part II—Analysis of Melting Curves and Long-Term Investigation

    Science.gov (United States)

    Edler, F.; Huang, K.

    2016-12-01

    Fifteen miniature fixed-point cells made of three different ceramic crucible materials (Al2O3, ZrO2, and Al2O3(86 %)+ZrO2(14 %)) were filled with pure palladium and used to calibrate type B thermocouples (Pt30 %Rh/Pt6 %Rh). A critical issue when using miniature fixed points with small amounts of fixed-point material is the analysis of the melting curves, which are characterized by significant slopes during the melting process compared to the flat melting plateaus obtainable with conventional fixed-point cells. The method of the extrapolated starting-point temperature, using a straight-line approximation of the melting plateau, was applied to analyze the melting curves. This method allowed an unambiguous determination of an electromotive force (emf) assignable as the melting temperature. The strict consideration of two constraints resulted in a unique, repeatable and objective method to determine the emf at the melting temperature within an uncertainty of about 0.1 μV. The lifetime and long-term stability of the miniature fixed points were investigated by performing more than 100 melt/freeze cycles for each crucible of the different ceramic materials. No failure of the crucibles occurred, indicating an excellent mechanical stability of the investigated miniature cells. The consequent limitation of heating rates to values below ±3.5 K·min⁻¹ above 1100 °C and the carefully and completely filled crucibles (the liquid palladium occupies the whole volume of the crucible) are the reasons for successfully preventing the crucibles from breaking. The thermal stability of the melting temperature of palladium was excellent when using the crucibles made of Al2O3(86 %)+ZrO2(14 %) and ZrO2. Emf drifts over the total duration of the long-term investigation were below a temperature equivalent of about 0.1 K to 0.2 K.
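
    The extrapolated-starting-point idea can be illustrated with a small, hypothetical example: a straight line is fitted to the sloped melting plateau (emf versus time) and extrapolated back to its intersection with the pre-melt baseline, which defines the emf assigned to the melt. The data, slopes, and fit windows below are invented and are not taken from the paper.

    ```python
    # Hypothetical melting curve: baseline + sloped plateau, with the
    # extrapolated starting point taken at the intersection of the two fits.
    import numpy as np

    t = np.linspace(0, 60, 601)                       # time / min
    emf = np.where(t < 20, 10.0 + 0.001 * t,          # pre-melt baseline
                   10.02 + 0.004 * (t - 20))          # sloped melting plateau
    emf = emf + np.random.default_rng(1).normal(0, 1e-4, t.size)

    base = np.polyfit(t[t < 18], emf[t < 18], 1)      # baseline fit (slope, intercept)
    plateau = np.polyfit(t[(t > 25) & (t < 55)],
                         emf[(t > 25) & (t < 55)], 1) # plateau fit (slope, intercept)

    t_start = (base[1] - plateau[1]) / (plateau[0] - base[0])  # intersection time
    emf_melt = np.polyval(plateau, t_start)           # emf assigned to the melt
    print(f"extrapolated starting point: t = {t_start:.1f} min, emf = {emf_melt:.4f} mV")
    ```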

  4. Life-Cycle Cost-Benefit (LCCB) Analysis of Bridges from a User and Social Point of View

    DEFF Research Database (Denmark)

    Thoft-Christensen, Palle

    2009-01-01

    During the last two decades, important progress has been made in the life-cycle cost-benefit (LCCB) analysis of structures, especially offshore platforms, bridges and nuclear installations. Due to the large uncertainties related to the deterioration, maintenance, and benefits of such structures, the purpose of this paper is to present and discuss some of these problems from a user and social point of view. A brief presentation of a preliminary study of the importance of including benefits in life-cycle cost-benefit analysis in management systems for bridges is given. Benefits may be positive as well as negative from the user point of view. In the paper, negative benefits (user costs) are discussed in relation to the maintenance of concrete bridges. A limited number of excerpts from published reports are included that relate to the importance of estimating user costs when repairs of bridges are planned, and when optimized strategies......

  5. Screw compressor analysis from a vibration point-of-view

    Science.gov (United States)

    Hübel, D.; Žitek, P.

    2017-09-01

    Vibrations are a very typical feature of all compressors and are given great attention in industry. The reason for this interest is primarily the negative influence that they can have on both the operating staff and the service life of the entire machine. The purpose of this work is to describe a methodology for analyzing screw compressors from a vibration point of view. This analysis is an essential part of the design of vibro-diagnostics of screw compressors with regard to their service life.

  6. Process for structural geologic analysis of topography and point data

    Science.gov (United States)

    Eliason, Jay R.; Eliason, Valerie L. C.

    1987-01-01

    A quantitative method of geologic structural analysis of digital terrain data is described for implementation on a computer. Assuming selected valley segments are controlled by the underlying geologic structure, topographic lows in the terrain data, defining valley bottoms, are detected, filtered and accumulated into a series of line segments defining contiguous valleys. The line segments are then vectorized to produce vector segments, defining valley segments, which may be indicative of the underlying geologic structure. Coplanar analysis is performed on vector segment pairs to determine which vectors produce planes that represent the underlying geologic structure. Point data, such as fracture phenomena that can be related to fracture planes in 3-dimensional space, can be analyzed to define common plane orientations and locations. The vectors, points, and planes are displayed in various formats for interpretation.

  7. Alternative splicing studies of the reactive oxygen species gene network in Populus reveal two isoforms of high-isoelectric-point superoxide dismutase.

    Science.gov (United States)

    Srivastava, Vaibhav; Srivastava, Manoj Kumar; Chibani, Kamel; Nilsson, Robert; Rouhier, Nicolas; Melzer, Michael; Wingsle, Gunnar

    2009-04-01

    Recent evidence has shown that alternative splicing (AS) is widely involved in the regulation of gene expression, substantially extending the diversity of numerous proteins. In this study, a subset of expressed sequence tags representing members of the reactive oxygen species gene network was selected from the PopulusDB database to investigate AS mechanisms in Populus. Examples of all known types of AS were detected, but intron retention was the most common. Interestingly, the closest Arabidopsis (Arabidopsis thaliana) homologs of half of the AS genes identified in Populus are not reportedly alternatively spliced. Two genes encoding the protein of most interest in our study (high-isoelectric-point superoxide dismutase [hipI-SOD]) have been found in black cottonwood (Populus trichocarpa), designated PthipI-SODC1 and PthipI-SODC2. Analysis of the expressed sequence tag libraries has indicated the presence of two transcripts of PthipI-SODC1 (hipI-SODC1b and hipI-SODC1s). Alignment of these sequences with the PthipI-SODC1 gene showed that hipI-SODC1b was 69 bp longer than hipI-SODC1s due to an AS event involving the use of an alternative donor splice site in the sixth intron. Transcript analysis showed that the splice variant hipI-SODC1b was differentially expressed, being clearly expressed in cambial and xylem, but not phloem, regions. In addition, immunolocalization and mass spectrometric data confirmed the presence of hipI-SOD proteins in vascular tissue. The functionalities of the spliced gene products were assessed by expressing recombinant hipI-SOD proteins and in vitro SOD activity assays.

  8. Analysis of irregularly distributed points

    DEFF Research Database (Denmark)

    Hartelius, Karsten

    1996-01-01

    conditional modes are applied to this problem. The Kalman filter is described as a powerful tool for modelling two-dimensional data. Motivated by the development of the reduced update Kalman filter, we propose a reduced update Kalman smoother which offers considerable computational savings. Kriging...... on hybridisation analysis, which comprises matching a grid to an arrayed set of DNA-clones spotted onto a hybridisation filter. The line process has proven to perform a satisfactory modelling of shifted fields (subgrids) in the hybridisation grid, and a two-staged hierarchical grid matching scheme which......

  9. Error analysis of dimensionless scaling experiments with multiple points using linear regression

    International Nuclear Information System (INIS)

    Guercan, Oe.D.; Vermare, L.; Hennequin, P.; Bourdelle, C.

    2010-01-01

    A general method of error estimation in the case of multiple point dimensionless scaling experiments, using linear regression and standard error propagation, is proposed. The method reduces to the previous result of Cordey (2009 Nucl. Fusion 49 052001) in the case of a two-point scan. On the other hand, if the points follow a linear trend, it explains how the estimated error decreases as more points are added to the scan. Based on the analytical expression that is derived, it is argued that for a low number of points, adding points to the ends of the scanned range, rather than the middle, results in a smaller error estimate. (letter)
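
    The point about adding scan points at the ends rather than the middle follows from the ordinary least-squares slope variance, Var(b) = sigma^2 / sum((x_i - mean(x))^2). The short sketch below (not from the paper) evaluates this standard error for a two-point scan and for scans extended either at the ends or in the middle of the range.

    ```python
    # Illustration: slope standard error for different placements of added points,
    # assuming equal per-point uncertainty sigma on a linear fit y = a + b*x.
    import numpy as np

    def slope_error(x, sigma=1.0):
        x = np.asarray(x, dtype=float)
        return sigma / np.sqrt(np.sum((x - x.mean()) ** 2))

    base = [0.0, 1.0]                          # two-point scan
    ends = base + [0.0, 1.0]                   # extra points at the ends of the range
    middle = base + [0.5, 0.5]                 # extra points in the middle

    for label, pts in (("two points", base), ("added at ends", ends),
                       ("added in middle", middle)):
        print(f"{label:16s}: slope std. error = {slope_error(pts):.3f}")
    ```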

  10. Third generation masses from a two Higgs model fixed point

    International Nuclear Information System (INIS)

    Froggatt, C.D.; Knowles, I.G.; Moorhouse, R.G.

    1990-01-01

    The large mass ratio between the top and bottom quarks may be attributed to a hierarchy in the vacuum expectation values of scalar doublets. We consider an effective renormalisation group fixed point determination of the quartic scalar and third generation Yukawa couplings in such a two doublet model. This predicts a mass m t =220 GeV and a mass ratio m b /m τ =2.6. In its simplest form the model also predicts the scalar masses, including a light scalar with a mass of order the b quark mass. Experimental implications are discussed. (orig.)

  11. Adiabatic physics of an exchange-coupled spin-dimer system: Magnetocaloric effect, zero-point fluctuations, and possible two-dimensional universal behavior

    International Nuclear Information System (INIS)

    Brambleby, J.; Goddard, P. A.; Singleton, John; Jaime, Marcelo; Lancaster, T.

    2017-01-01

    We present the magnetic and thermal properties of the bosonic-superfluid phase in a spin-dimer network using both quasistatic and rapidly changing pulsed magnetic fields. The entropy derived from a heat-capacity study reveals that the pulsed-field measurements are strongly adiabatic in nature and are responsible for the onset of a significant magnetocaloric effect (MCE). In contrast to previous predictions we show that the MCE is not just confined to the critical regions, but occurs for all fields greater than zero at sufficiently low temperatures. We explain the MCE using a model of the thermal occupation of exchange-coupled dimer spin states and highlight that failure to take this effect into account inevitably leads to incorrect interpretations of experimental results. In addition, the heat capacity in our material is suggestive of an extraordinary contribution from zero-point fluctuations and appears to indicate universal behavior with different critical exponents at the two field-induced critical points. Finally, the data at the upper critical point, combined with the layered structure of the system, are consistent with a two-dimensional nature of spin excitations in the system.

  12. Experimental Analysis of the Influence of Drill Point Angle and Wear on the Drilling of Woven CFRPs

    Directory of Open Access Journals (Sweden)

    Norberto Feito

    2014-05-01

    Full Text Available This paper focuses on the effect of drill geometry on the drilling of woven Carbon Fiber Reinforced Polymer composites (CFRPs). Although different geometrical effects can be considered in drilling CFRPs, the present work focuses on the influence of point angle and wear, because they are important factors influencing hole quality and machining forces. Surface quality was evaluated in terms of delamination and superficial defects. Three different point angles were tested, representative of the geometries commonly used in industry. Two wear modes were considered, representative of the wear patterns commonly observed when drilling CFRPs: flank wear and honed cutting edge. It was found that the combined influence of point angle and wear was significant for the thrust force. Delamination at the hole entry and exit showed opposite trends with the change of geometry. Cutting parameters were also examined, showing the dominant influence of the feed on surface damage.

  13. Analysis of Two-Phase Flow in Damper Seals for Cryogenic Turbopumps

    Science.gov (United States)

    Arauz, Grigory L.; SanAndres, Luis

    1996-01-01

    Cryogenic damper seals operating close to the liquid-vapor region (near the critical point or slightly sub-cooled) are likely to present two-phase flow conditions. Under single-phase flow conditions the mechanical energy conveyed to the fluid increases its temperature and causes a phase change when the fluid temperature reaches the saturation value. A bulk-flow analysis for the prediction of the dynamic force response of damper seals operating under two-phase conditions is presented; the flow is treated as all-liquid, liquid-vapor, or all-vapor, i.e., a 'continuous vaporization' model. The two-phase region is considered as a homogeneous saturated mixture in thermodynamic equilibrium. The flow in each region is described by continuity, momentum and energy transport equations. The interdependency of fluid temperature and pressure in the two-phase region (saturated mixture) does not allow the use of an energy equation in terms of fluid temperature. Instead, the energy transport is expressed in terms of fluid enthalpy. The temperature in the single-phase regions, or the mixture composition in the two-phase region, is determined from the fluid enthalpy. The flow is also regarded as adiabatic, since the large axial velocities typical of the seal application lead to small levels of heat conduction to the walls compared to the heat carried by fluid advection. Static and dynamic force characteristics of the seal are obtained from a perturbation analysis of the governing equations. The solution, expressed in terms of zeroth- and first-order fields, provides the static (leakage, torque, velocity, pressure, temperature, and mixture composition fields) and dynamic (rotordynamic force coefficients) seal parameters. Theoretical predictions show good agreement with experimental leakage pressure profiles available for nitrogen at cryogenic temperatures. Force coefficient predictions for two-phase flow conditions show significant fluid compressibility effects, particularly for mixtures with low mass

  14. Comparative analysis of peak-detection techniques for comprehensive two-dimensional chromatography.

    Science.gov (United States)

    Latha, Indu; Reichenbach, Stephen E; Tao, Qingping

    2011-09-23

    Comprehensive two-dimensional gas chromatography (GC×GC) is a powerful technology for separating complex samples. The typical goal of GC×GC peak detection is to aggregate data points of analyte peaks based on their retention times and intensities. Two techniques commonly used for two-dimensional peak detection are the two-step algorithm and the watershed algorithm. A recent study [4] compared the performance of the two-step and watershed algorithms for GC×GC data with retention-time shifts in the second-column separations. In that analysis, the peak retention-time shifts were corrected while applying the two-step algorithm but the watershed algorithm was applied without shift correction. The results indicated that the watershed algorithm has a higher probability of erroneously splitting a single two-dimensional peak than the two-step approach. This paper reconsiders the analysis by comparing peak-detection performance for resolved peaks after correcting retention-time shifts for both the two-step and watershed algorithms. Simulations with wide-ranging conditions indicate that when shift correction is employed with both algorithms, the watershed algorithm detects resolved peaks with greater accuracy than the two-step method. Copyright © 2011 Elsevier B.V. All rights reserved.

  15. Contrast and Critique of Two Approaches to Discourse Analysis: Conversation Analysis and Speech Act Theory

    Directory of Open Access Journals (Sweden)

    Nguyen Van Han

    2014-08-01

    Full Text Available Discourse analysis, as Murcia and Olshtain (2000) assume, is a vast study of language in use that extends beyond the sentence level, and it involves a more cognitive and social perspective on language use and communication exchanges. Covering a wide range of phenomena relating language to society, culture and thought, discourse analysis contains various approaches: speech act theory, pragmatics, conversation analysis, variation analysis, and critical discourse analysis. Each approach works in its own domain of discourse. On one dimension, it shares the same assumptions or general problems in discourse analysis with the other approaches: for instance, the explanation of how we organize language into units beyond sentence boundaries, or how language is used to convey information about the world, ourselves and human relationships (Schiffrin 1994: viii). On other dimensions, each approach holds distinctive characteristics contributing to the vastness of discourse analysis. This paper will mainly discuss two approaches to discourse analysis - conversation analysis and speech act theory - and will attempt to point out some similarities as well as contrasting features between the two approaches, followed by a short reflection on their strengths and weaknesses with respect to the essence of each approach. The organizational and discourse features in the exchanges among three teachers at the College of Finance and Customs in Vietnam will be analysed in terms of conversation analysis and speech act theory.

  16. Recommendations for dealing with waste contaminated with Ebola virus: a Hazard Analysis of Critical Control Points approach.

    Science.gov (United States)

    Edmunds, Kelly L; Elrahman, Samira Abd; Bell, Diana J; Brainard, Julii; Dervisevic, Samir; Fedha, Tsimbiri P; Few, Roger; Howard, Guy; Lake, Iain; Maes, Peter; Matofari, Joseph; Minnigh, Harvey; Mohamedani, Ahmed A; Montgomery, Maggie; Morter, Sarah; Muchiri, Edward; Mudau, Lutendo S; Mutua, Benedict M; Ndambuki, Julius M; Pond, Katherine; Sobsey, Mark D; van der Es, Mike; Zeitoun, Mark; Hunter, Paul R

    2016-06-01

    To assess, within communities experiencing Ebola virus outbreaks, the risks associated with the disposal of human waste and to generate recommendations for mitigating such risks. A team with expertise in the Hazard Analysis of Critical Control Points framework identified waste products from the care of individuals with Ebola virus disease and constructed, tested and confirmed flow diagrams showing the creation of such products. After listing potential hazards associated with each step in each flow diagram, the team conducted a hazard analysis, determined critical control points and made recommendations to mitigate the transmission risks at each control point. The collection, transportation, cleaning and shared use of blood-soiled fomites and the shared use of latrines contaminated with blood or bloodied faeces appeared to be associated with particularly high levels of risk of Ebola virus transmission. More moderate levels of risk were associated with the collection and transportation of material contaminated with bodily fluids other than blood, shared use of latrines soiled with such fluids, the cleaning and shared use of fomites soiled with such fluids, and the contamination of the environment during the collection and transportation of blood-contaminated waste. The risk of the waste-related transmission of Ebola virus could be reduced by the use of full personal protective equipment, appropriate hand hygiene and an appropriate disinfectant after careful cleaning. Use of the Hazard Analysis of Critical Control Points framework could facilitate rapid responses to outbreaks of emerging infectious disease.

  17. HYGIENE PRACTICES IN URBAN RESTAURANTS AND CHALLENGES TO IMPLEMENTING FOOD SAFETY AND HAZARD ANALYSIS CRITICAL CONTROL POINTS (HACCP) PROGRAMMES IN THIKA TOWN, KENYA.

    Science.gov (United States)

    Muinde, R K; Kiinyukia, C; Rombo, G O; Muoki, M A

    2012-12-01

    To determine the microbial load in food, examine safety measures and assess the possibility of implementing a Hazard Analysis Critical Control Points (HACCP) system. The target population for this study consisted of restaurant owners in Thika Municipality (n = 30). Simple random samples of restaurants were selected using a systematic sampling method for microbial analysis of cooked food, non-cooked food, raw food and water sanitation in the selected restaurants. Two hundred and ninety-eight restaurants within Thika Municipality were identified; of these, 30 were sampled for microbiological testing. From the study, 221 (74%) of the restaurants were ready-to-eat establishments where food was prepared early enough to hold, and only in 77 (26%) of the restaurants did customers order the food they wanted. 118 (63%) of the restaurant operators/staff had knowledge of quality control and food safety measures, 24 (8%) of the restaurants applied this knowledge, while 256 (86%) of the restaurant staff indicated that food contains ingredients that are hazardous if poorly handled. 238 (80%) of the restaurants used weighing and sorting of food materials, 45 (15%) used preservation methods, and the rest used dry foods as critical control points for food safety. The study showed that there was a need for implementation of a Hazard Analysis Critical Control Points (HACCP) system to enhance food safety. Knowledge of HACCP was very low, with 89 (30%) of the restaurants applying some quality measures to the food production process. There was contamination with coliforms, Escherichia coli and Staphylococcus aureus, though at very low levels. The mean counts of coliforms, Escherichia coli and Staphylococcus aureus in the sampled food were 9.7 x 10³ CFU/g, 8.2 x 10³ CFU/g and 5.4 x 10³ CFU/g respectively, with coliforms having the highest mean.

  18. Cytometry of chromatin bound Mcm6 and PCNA identifies two states in G1 that are separated functionally by the G1 restriction point

    Directory of Open Access Journals (Sweden)

    Jacobberger James W

    2010-04-01

    Full Text Available Abstract Background Cytometric measurements of DNA content and chromatin-bound Mcm2 have demonstrated bimodal patterns of expression in G1. These patterns, the replication licensing function of Mcm proteins, and a correlation between Mcm loading and cell cycle commitment for cells re-entering the cell cycle, led us to test the idea that cells expressing a defined high level of chromatin-bound Mcm6 in G1 are committed - i.e., past the G1 restriction point. We developed a cell-based assay for tightly-bound PCNA (PCNA* and Mcm6 (Mcm6*, DNA content, and a mitotic marker to clearly define G1, S, G2, and M phases of the cell cycle. hTERT-BJ1, hTERT-RPE-1, and Molt4 cells were extracted with Triton X-100 followed by methanol fixation, stained with antibodies and DAPI, then measured by cytometry. Results Bivariate analysis of cytometric data demonstrated complex patterns with distinct clustering for all combinations of the 4 variables. In G1, cells clustered in two groups characterized by low and high Mcm6* expression. Serum starvation and release experiments showed that residence in the high group was in late G1, just prior to S phase. Kinetic experiments, employing serum withdrawal, and stathmokinetic analysis with aphidicolin, mimosine or nocodazole demonstrated that cells with high levels of Mcm6* cycled with the committed phases of the cell cycle (S, G2, and M. Conclusions A multivariate assay for Mcm6*, PCNA*, DNA content, and a mitotic marker provides analysis capable of estimating the fraction of pre and post-restriction point G1 cells and supports the idea that there are at least two states in G1 defined by levels of chromatin bound Mcm proteins.

  19. Two-year-olds use adults' but not peers' points.

    Science.gov (United States)

    Kachel, Gregor; Moore, Richard; Tomasello, Michael

    2018-03-12

    In the current study, 24- to 27-month-old children (N = 37) used pointing gestures in a cooperative object choice task with either peer or adult partners. When indicating the location of a hidden toy, children pointed equally accurately for adult and peer partners but more often for adult partners. When choosing from one of three hiding places, children used adults' pointing to find a hidden toy significantly more often than they used peers'. In interaction with peers, children's choice behavior was at chance level. These results suggest that toddlers ascribe informative value to adults' but not peers' pointing gestures, and highlight the role of children's social expectations in their communicative development. © 2018 John Wiley & Sons Ltd.

  20. Dual keel Space Station payload pointing system design and analysis feasibility study

    Science.gov (United States)

    Smagala, Tom; Class, Brian F.; Bauer, Frank H.; Lebair, Deborah A.

    1988-01-01

    A Space Station attached Payload Pointing System (PPS) has been designed and analyzed. The PPS is responsible for maintaining fixed payload pointing in the presence of disturbance applied to the Space Station. The payload considered in this analysis is the Solar Optical Telescope. System performance is evaluated via digital time simulations by applying various disturbance forces to the Space Station. The PPS meets the Space Station articulated pointing requirement for all disturbances except Shuttle docking and some centrifuge cases.

  1. Giant Andreev Backscattering through a Quantum Point Contact Coupled via a Disordered Two-Dimensional Electron Gas to Superconductors

    International Nuclear Information System (INIS)

    den Hartog, S.G.; van Wees, B.J.; Klapwijk, T.M.; Nazarov, Y.V.; Borghs, G.

    1997-01-01

    We have investigated the superconducting-phase-modulated reduction in the resistance of a ballistic quantum point contact (QPC) connected via a disordered two-dimensional electron gas (2DEG) to superconductors. We show that this reduction is caused by coherent Andreev backscattering of holes through the QPC, which increases monotonically by reducing the bias voltage to zero. In contrast, the magnitude of the phase-dependent resistance of the disordered 2DEG displays a nonmonotonic reentrant behavior versus bias voltage. copyright 1997 The American Physical Society

  2. Two-phase flow instability and bifurcation analysis of inclined multiple uniformly heated channels - 15107

    International Nuclear Information System (INIS)

    Mishra, A.M.; Paul, S.; Singh, S.; Panday, V.

    2015-01-01

    In this paper the two-phase flow instability of multiple heated channels with various inclinations is studied. In addition, a bifurcation analysis is carried out to capture the nonlinear dynamics of the system and to identify the regions in parameter space for which subcritical and supercritical bifurcations exist. To carry out the analysis, the system is represented mathematically by nonlinear Partial Differential Equations (PDEs) for mass, momentum and energy in the single-phase as well as the two-phase region, which are then converted into Ordinary Differential Equations (ODEs) using the weighted residual method. A coupling equation is used under the assumption that the pressure drop across each channel is the same and that the total mass flow rate is equal to the sum of the individual channel mass flow rates. The homogeneous equilibrium model is used for the analysis. A stability map is obtained in terms of the phase change number (Npch) and the subcooling number (Nsb) by solving a set of nonlinear, coupled algebraic equations obtained at equilibrium using the Newton-Raphson method. The MATLAB code is verified by comparing its results with those obtained with MatCont (open-source software) for the same parameter values. Numerical simulations of the time-dependent, nonlinear ODEs are carried out for selected points in the operating parameter space to obtain the actual damped and growing oscillations in the channel inlet flow velocity, which confirms the stability regions across the stability map. Generalized Hopf (GH) points are observed for different inclinations; they mark the transition between subcritical and supercritical bifurcations. (authors)

  3. Codimension-two bifurcation analysis on firing activities in Chay neuron model

    International Nuclear Information System (INIS)

    Duan Lixia; Lu Qishao

    2006-01-01

    Using codimension-two bifurcation analysis in the Chay neuron model, the relationship between the electric activities and the parameters of neurons is revealed. The whole parameter space is divided into two parts, that is, the firing and silence regions of neurons. It is found that the transition sets between firing and silence regions are composed of the Hopf bifurcation curves of equilibrium states and the saddle-node bifurcation curves of limit cycles, with some codimension-two bifurcation points. The transitions from silence to firing in neurons are due to the Hopf bifurcation or the fold limit cycle bifurcation, but the codimension-two singularities lead to complexity in dynamical behaviour of neuronal firing

  4. Codimension-two bifurcation analysis on firing activities in Chay neuron model

    Energy Technology Data Exchange (ETDEWEB)

    Duan Lixia [School of Science, Beijing University of Aeronautics and Astronautics, Beijing 100083 (China); Lu Qishao [School of Science, Beijing University of Aeronautics and Astronautics, Beijing 100083 (China)]. E-mail: qishaolu@hotmail.com

    2006-12-15

    Using codimension-two bifurcation analysis in the Chay neuron model, the relationship between the electric activities and the parameters of neurons is revealed. The whole parameter space is divided into two parts, that is, the firing and silence regions of neurons. It is found that the transition sets between firing and silence regions are composed of the Hopf bifurcation curves of equilibrium states and the saddle-node bifurcation curves of limit cycles, with some codimension-two bifurcation points. The transitions from silence to firing in neurons are due to the Hopf bifurcation or the fold limit cycle bifurcation, but the codimension-two singularities lead to complexity in dynamical behaviour of neuronal firing.

  5. Research on Nonlinear Feature of Electrical Resistance of Acupuncture Points

    Directory of Open Access Journals (Sweden)

    Jianzi Wei

    2012-01-01

    Full Text Available A highly sensitive volt-ampere characteristic detection system was applied to measure the volt-ampere curves of nine acupuncture points, LU9, HT7, LI4, PC6, ST36, SP6, KI3, LR3, and SP3, and corresponding nonacupuncture points, bilaterally, in 42 healthy volunteers. The electric current intensity was increased from 0 μA to 20 μA and then returned to 0 μA again. The results showed that the volt-ampere curves of acupuncture points had a nonlinear property and a magnetic-hysteresis-like feature. At all acupuncture point spots, the volt-ampere areas of the increasing phase were significantly larger than those of the decreasing phase (P<0.01). The volt-ampere areas of ten acupuncture point spots were significantly smaller than those of the corresponding nonacupuncture point spots when the intensity was increasing (P<0.05~P<0.001). When the intensity was decreasing, eleven acupuncture point spots showed the same property as above (P<0.05~P<0.001), while two acupuncture point spots showed the opposite phenomenon, their areas being larger than those of the corresponding nonacupuncture point spots (P<0.05~P<0.01). These results show that the phenomenon of low skin resistance does not hold for all acupuncture points.

  6. Measurement Uncertainty of Dew-Point Temperature in a Two-Pressure Humidity Generator

    Science.gov (United States)

    Martins, L. Lages; Ribeiro, A. Silva; Alves e Sousa, J.; Forbes, Alistair B.

    2012-09-01

    This article describes the measurement uncertainty evaluation of the dew-point temperature when using a two-pressure humidity generator as a reference standard. The estimation of the dew-point temperature involves the solution of a non-linear equation for which iterative solution techniques, such as the Newton-Raphson method, are required. Previous studies have already been carried out using the GUM method and the Monte Carlo method but have not discussed the impact of the approximate numerical method used to provide the temperature estimation. One of the aims of this article is to take this approximation into account. Following the guidelines presented in the GUM Supplement 1, two alternative approaches can be developed: the forward measurement uncertainty propagation by the Monte Carlo method when using the Newton-Raphson numerical procedure; and the inverse measurement uncertainty propagation by Bayesian inference, based on prior available information regarding the usual dispersion of values obtained by the calibration process. The measurement uncertainties obtained using these two methods can be compared with previous results. Other relevant issues concerning this research are the broad application to measurements that require hygrometric conditions obtained from two-pressure humidity generators and, also, the ability to provide a solution that can be applied to similar iterative models. The research also studied the factors influencing both the use of the Monte Carlo method (such as the seed value and the convergence parameter) and the inverse uncertainty propagation using Bayesian inference (such as the pre-assigned tolerance, prior estimate, and standard deviation) in terms of their accuracy and adequacy.
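
    A minimal sketch of the forward Monte Carlo approach (GUM Supplement 1) applied to an iterative Newton-Raphson solve is shown below. The nonlinear relation g(td; p_high, p_low, t_sat) = 0 used here is an invented stand-in, not the actual two-pressure generator model, and all input values and uncertainties are assumptions for illustration only.

    ```python
    # Forward Monte Carlo propagation through a Newton-Raphson solve of a
    # stand-in nonlinear dew-point relation (all numbers are hypothetical).
    import numpy as np

    def solve_newton(g, dg, x0, tol=1e-10, max_iter=50):
        """Solve g(x) = 0 by Newton-Raphson starting from x0."""
        x = x0
        for _ in range(max_iter):
            step = g(x) / dg(x)
            x -= step
            if abs(step) < tol:
                break
        return x

    def g(td, p_high, p_low, t_sat):
        # Stand-in nonlinear relation between dew point td and the inputs.
        return np.exp(td / 17.0) * p_low / p_high - np.exp(t_sat / 17.0)

    def dg(td, p_high, p_low, t_sat):
        return np.exp(td / 17.0) * p_low / (17.0 * p_high)

    rng = np.random.default_rng(42)
    n = 20_000
    p_high = rng.normal(200.0, 0.2, n)   # kPa, standard uncertainty 0.2 kPa
    p_low = rng.normal(100.0, 0.1, n)    # kPa
    t_sat = rng.normal(10.0, 0.02, n)    # deg C

    td = np.array([solve_newton(lambda x: g(x, ph, pl, ts),
                                lambda x: dg(x, ph, pl, ts), x0=20.0)
                   for ph, pl, ts in zip(p_high, p_low, t_sat)])
    print(f"dew point: {td.mean():.3f} degC, standard uncertainty {td.std(ddof=1):.3f} degC")
    ```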

  7. Analysis method of beam pointing stability based on optical transmission matrix

    Science.gov (United States)

    Wang, Chuanchuan; Huang, PingXian; Li, Xiaotong; Cen, Zhaofen

    2016-10-01

    Many factors affect the beam pointing stability of an optical system; among them, element tolerance is one of the most important and most common. In some large laser systems, element tolerances can make the final micro-beam spots on the image plane deviate noticeably, so it is essential to analyze element tolerance effectively and accurately in theory. In order to make the analysis of beam pointing stability convenient and theoretical, we approximate the whole spot deviation by the transmission of a single chief ray rather than the full beam. Using optical matrices, we also reduce the complex process of light transmission to a multiplication of matrices. In this way we can set up an element tolerance model, namely a mathematical expression describing the spot deviation in an optical system with element tolerance, and thereby realize a quantitative, theoretical analysis of beam pointing stability. In the second half of the paper, we design an experiment to measure the spot deviation in a multipass optical system caused by element tolerance; we then adjust the tolerance step by step, compare the results with the data obtained from the tolerance model, and finally confirm the correctness of the tolerance model.
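
    A toy example of the matrix formulation is sketched below: a chief ray (y, theta, 1) is propagated through a chain of augmented 3x3 transfer matrices, and a small angular tolerance on one flat mirror appears as a spot deviation on the image plane. The element spacings and the tilt value are invented for illustration and are not from the paper.

    ```python
    # Chief-ray propagation with augmented ray transfer matrices; a mirror tilt
    # tolerance produces a lateral spot deviation at the image plane.
    import numpy as np

    def free_space(L):
        return np.array([[1.0, L, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])

    def flat_mirror(tilt=0.0):
        # In the unfolded path a flat mirror is the identity; a tilt error of
        # `tilt` radians kicks the ray angle by 2*tilt.
        return np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 2.0 * tilt], [0.0, 0.0, 1.0]])

    def image_plane_ray(tilt):
        # Rightmost matrix acts first: 1.5 m to the mirror, then 0.5 m to the image.
        system = free_space(0.5) @ flat_mirror(tilt) @ free_space(1.5)
        return system @ np.array([0.0, 0.0, 1.0])        # on-axis chief ray

    nominal = image_plane_ray(0.0)
    perturbed = image_plane_ray(50e-6)                    # 50 microradian mirror tilt
    print(f"spot deviation on image plane: {(perturbed - nominal)[0] * 1e6:.1f} um")
    ```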

  8. Multivariate survivorship analysis using two cross-sectional samples.

    Science.gov (United States)

    Hill, M E

    1999-11-01

    As an alternative to survival analysis with longitudinal data, I introduce a method that can be applied when one observes the same cohort in two cross-sectional samples collected at different points in time. The method allows for the estimation of log-probability survivorship models that estimate the influence of multiple time-invariant factors on survival over a time interval separating two samples. This approach can be used whenever the survival process can be adequately conceptualized as an irreversible single-decrement process (e.g., mortality, the transition to first marriage among a cohort of never-married individuals). Using data from the Integrated Public Use Microdata Series (Ruggles and Sobek 1997), I illustrate the multivariate method through an investigation of the effects of race, parity, and educational attainment on the survival of older women in the United States.

  9. Residual analysis for spatial point processes

    DEFF Research Database (Denmark)

    Baddeley, A.; Turner, R.; Møller, Jesper

    We define residuals for point process models fitted to spatial point pattern data, and propose diagnostic plots based on these residuals. The techniques apply to any Gibbs point process model, which may exhibit spatial heterogeneity, interpoint interaction and dependence on spatial covariates. Ou...... or covariate effects. Q-Q plots of the residuals are effective in diagnosing interpoint interaction. Some existing ad hoc statistics of point patterns (quadrat counts, scan statistic, kernel smoothed intensity, Berman's diagnostic) are recovered as special cases....

  10. Visualizing Uncertainty of Point Phenomena by Redesigned Error Ellipses

    Science.gov (United States)

    Murphy, Christian E.

    2018-05-01

    Visualizing uncertainty remains one of the great challenges in modern cartography. There is no overarching strategy to display the nature of uncertainty, as an effective and efficient visualization depends, besides on the spatial data feature type, heavily on the type of uncertainty. This work presents a design strategy to visualize uncertainty connected to point features. The error ellipse, well-known from mathematical statistics, is adapted to display the uncertainty of point information originating from spatial generalization. Modified designs of the error ellipse show the potential of quantitative and qualitative symbolization and simultaneous point-based uncertainty symbolization. The user can intuitively depict the centers of gravity, the major orientation of the point arrays as well as estimate the extents and possible spatial distributions of multiple point phenomena. The error ellipse represents uncertainty in an intuitive way, particularly suitable for laymen. Furthermore it is shown how applicable an adapted design of the error ellipse is to display the uncertainty of point features originating from incomplete data. The suitability of the error ellipse to display the uncertainty of point information is demonstrated within two showcases: (1) the analysis of formations of association football players, and (2) uncertain positioning of events on maps for the media.
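
    For reference, the geometry of such an error ellipse is conventionally derived from a 2x2 covariance matrix: the eigenvectors give the orientation and the square roots of the chi-square-scaled eigenvalues give the semi-axes. The sketch below is a generic illustration of that construction, not code from the paper; the covariance values are invented.

    ```python
    # Confidence error ellipse from a 2x2 covariance matrix via eigen-decomposition.
    import numpy as np
    from scipy.stats import chi2

    def error_ellipse(cov, confidence=0.95):
        """Return (semi_major, semi_minor, angle_deg) of the confidence ellipse."""
        eigvals, eigvecs = np.linalg.eigh(cov)        # eigenvalues in ascending order
        scale = chi2.ppf(confidence, df=2)
        semi_minor, semi_major = np.sqrt(scale * eigvals)
        angle = np.degrees(np.arctan2(eigvecs[1, 1], eigvecs[0, 1]))  # major-axis direction
        return semi_major, semi_minor, angle

    cov = np.array([[4.0, 1.5],
                    [1.5, 1.0]])                      # hypothetical point covariance
    print(error_ellipse(cov))
    ```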

  11. Exact Identification of a Quantum Change Point

    Science.gov (United States)

    Sentís, Gael; Calsamiglia, John; Muñoz-Tapia, Ramon

    2017-10-01

    The detection of change points is a pivotal task in statistical analysis. In the quantum realm, it is a new primitive where one aims at identifying the point where a source that supposedly prepares a sequence of particles in identical quantum states starts preparing a mutated one. We obtain the optimal procedure to identify the change point with certainty—naturally at the price of having a certain probability of getting an inconclusive answer. We obtain the analytical form of the optimal probability of successful identification for any length of the particle sequence. We show that the conditional success probabilities of identifying each possible change point show an unexpected oscillatory behavior. We also discuss local (online) protocols and compare them with the optimal procedure.

  12. Energy scales and magnetoresistance at a quantum critical point

    Energy Technology Data Exchange (ETDEWEB)

    Shaginyan, V.R. [Petersburg Nuclear Physics Institute, RAS, Gatchina, 188300 (Russian Federation); Racah Institute of Physics, Hebrew University, Jerusalem 91904 (Israel); CTSPS, Clark Atlanta University, Atlanta, GA 30314 (United States)], E-mail: vrshag@thd.pnpi.spb.ru; Amusia, M.Ya. [Racah Institute of Physics, Hebrew University, Jerusalem 91904 (Israel); Msezane, A.Z. [CTSPS, Clark Atlanta University, Atlanta, GA 30314 (United States); Popov, K.G. [Komi Science Center, Ural Division, RAS, 3a Chernova street, Syktyvkar, 167982 (Russian Federation); Stephanovich, V.A. [Opole University, Institute of Mathematics and Informatics, Opole, 45-052 (Poland)

    2009-03-02

    The magnetoresistance (MR) of CeCoIn5 is notably different from that in many conventional metals. We show that a pronounced crossover from negative to positive MR at elevated temperatures and fixed magnetic fields is determined by the scaling behavior of the quasiparticle effective mass. At a quantum critical point (QCP) this dependence generates kinks (crossover points from fast to slow growth) in thermodynamic characteristics (such as specific heat, magnetization, etc.) at the temperatures where a strongly correlated electron system transits, with rising temperature, from the magnetic-field-induced Landau-Fermi liquid (LFL) regime to the non-Fermi liquid (NFL) one. We show that this kink-like peculiarity separates two distinct energy scales in the QCP vicinity - the low-temperature LFL scale and a high-temperature scale related to the NFL regime. Our comprehensive theoretical analysis of experimental data permits us to reveal for the first time the scaling behavior of the MR and the kinks, as well as to identify the physical reasons for the above energy scales.

  13. Search for neutrino point sources with an all-sky autocorrelation analysis in IceCube

    Energy Technology Data Exchange (ETDEWEB)

    Turcati, Andrea; Bernhard, Anna; Coenders, Stefan [TU, Munich (Germany); Collaboration: IceCube-Collaboration

    2016-07-01

    The IceCube Neutrino Observatory is a cubic-kilometre-scale neutrino telescope located in the Antarctic ice. Its full-sky field of view gives unique opportunities to study the neutrino emission from the Galactic and extragalactic sky. Recently, IceCube found the first signal of astrophysical neutrinos with energies up to the PeV scale, but the origin of these particles still remains unresolved. Given the observed flux, the absence of observations of bright point sources can be explained by the presence of numerous weak sources. This scenario can be tested using autocorrelation methods. We present here the sensitivities and discovery potentials of a two-point angular correlation analysis performed on seven years of IceCube data, taken between 2008 and 2015. The test is applied to the northern and southern skies separately, using the neutrino energy information to improve the effectiveness of the method.
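
    A schematic version of a two-point angular autocorrelation is sketched below: pair counts as a function of angular separation are compared with the expectation from scrambled samples. This is a generic illustration with simulated isotropic events and an invented scrambling scheme, not the IceCube analysis code.

    ```python
    # Pair counts vs. angular separation for simulated events, compared with
    # scrambled (randomized right-ascension) background expectations.
    import numpy as np

    def angular_separations(ra, dec):
        """Pairwise great-circle separations (radians) between sky positions."""
        vec = np.column_stack([np.cos(dec) * np.cos(ra),
                               np.cos(dec) * np.sin(ra),
                               np.sin(dec)])
        cosw = np.clip(vec @ vec.T, -1.0, 1.0)
        iu = np.triu_indices(len(ra), k=1)
        return np.arccos(cosw[iu])

    rng = np.random.default_rng(7)
    n = 500
    ra = rng.uniform(0, 2 * np.pi, n)
    dec = np.arcsin(rng.uniform(-1, 1, n))            # isotropic toy sky

    bins = np.radians(np.arange(0, 11, 1))            # 0-10 degrees, 1-degree bins
    data_counts, _ = np.histogram(angular_separations(ra, dec), bins)

    # Background expectation from scrambled right ascensions.
    bg = np.mean([np.histogram(angular_separations(rng.permutation(ra), dec), bins)[0]
                  for _ in range(20)], axis=0)
    print("excess pairs per bin:", data_counts - bg)
    ```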

  14. a Two-Step Classification Approach to Distinguishing Similar Objects in Mobile LIDAR Point Clouds

    Science.gov (United States)

    He, H.; Khoshelham, K.; Fraser, C.

    2017-09-01

    Nowadays, lidar is widely used in cultural heritage documentation, urban modeling, and driverless-car technology for its fast and accurate 3D scanning ability. However, full exploitation of the potential of point cloud data for efficient and automatic object recognition remains elusive. Recently, feature-based methods have become very popular in object recognition on account of their good performance in capturing object details. Compared with global features describing the whole shape of the object, local features recording fractional details are more discriminative and are applicable to object classes with considerable similarity. In this paper, we propose a two-step classification approach based on point feature histograms and the bag-of-features method for automatic recognition of similar objects in mobile lidar point clouds. Lamp posts, street lights and traffic signs are grouped into one category in the first-step classification because of their mutual similarity compared with trees and vehicles. A finer classification of the lamp posts, street lights and traffic signs, based on the result of the first step, is implemented in the second step. The proposed two-step classification approach is shown to yield a considerable improvement over the conventional one-step classification approach.

  15. Correspondence analysis of longitudinal data

    NARCIS (Netherlands)

    Van der Heijden, P.G.M.|info:eu-repo/dai/nl/073087998

    2005-01-01

    Correspondence analysis is an exploratory tool for the analysis of associations between categorical variables, the results of which may be displayed graphically. For longitudinal data with two time points, an analysis of the transition matrix (showing the relative frequencies for pairs of

  16. A case study of lightning attachment to flat ground showing multiple unconnected upward leaders

    Science.gov (United States)

    Cummins, Kenneth L.; Krider, E. Philip; Olbinski, Mike; Holle, Ronald L.

    2018-04-01

    On 10 July 2015, a cloud-to-ground (CG) lightning flash that produced two ground terminations was photographed from inside the safety of a truck in southern New Mexico. An analysis of archived NLDN data verified that this was a two-stroke flash, and a close-up view of the first stroke shows that it also initiated at least 12 unconnected, upward leaders (or "streamers") near the ground termination. No unconnected upward leaders were seen near the second ground attachment. After combining an analysis of the photograph with information provided by the NLDN, we infer that the first stroke was of negative (normal) polarity, had modest peak current, and struck about 460 m (± 24%) from the camera. Attachment occurred when an upward-propagating positive leader reached an inferred height of about 21 m above local ground. The second stroke struck ground about 740 m from the camera, and the height of its attachment leader is estimated to be 15 m. The estimated lengths of the unconnected upward leaders in the two-dimensional (2-D) plane of the first stroke range from 2 to 8 m, and all appear to be located within 15 m (2-D) of the main ground termination, with 24% uncertainty. Many of the unconnected upward leaders (inferred to be positive) exhibit multiple upward branches, and most of those branches have upward-directed forks or splits at their ends. This is the first report showing such extensive branching for positive upward leaders in natural lightning strikes to ground. None of the upward leaders can be seen to emanate from the tops of tall, isolated, or pointed objects on the ground, but they likely begin on small plants and rocks, or flat ground. In terms of lightning safety, this photo demonstrates that numerous upward leaders can be produced near a lightning strike point and have the potential to damage or cause injury at more than one specific point on the ground.

  17. Maximum power point tracker for photovoltaic power plants

    Science.gov (United States)

    Arcidiacono, V.; Corsi, S.; Lambri, L.

    The paper describes two different closed-loop control criteria for the maximum power point tracking of the voltage-current characteristic of a photovoltaic generator. The two criteria are discussed and compared, inter alia, with regard to the setting-up problems that they pose. Although a detailed analysis is not embarked upon, the paper also provides some quantitative information on the energy advantages obtained by using electronic maximum power point tracking systems, as compared with the situation in which the point of operation of the photovoltaic generator is not controlled at all. Lastly, the paper presents two high-efficiency MPPT converters for experimental photovoltaic plants of the stand-alone and the grid-interconnected type.
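
    The abstract does not name its two control criteria, so as a generic illustration of closed-loop maximum power point tracking, the sketch below implements a simple perturb-and-observe loop on an invented photovoltaic P(V) curve; it is not the controller described in the paper.

    ```python
    # Perturb-and-observe MPPT on a toy P(V) curve (all numbers invented).
    def pv_power(v):
        """Toy photovoltaic P(V) curve with a single maximum near 16 V."""
        i = 5.0 * (1.0 - (v / 21.0) ** 8)      # crude current model, 21 V open circuit
        return max(v * i, 0.0)

    def perturb_and_observe(v0=10.0, step=0.2, iterations=200):
        v, p = v0, pv_power(v0)
        direction = 1.0
        for _ in range(iterations):
            v_new = v + direction * step
            p_new = pv_power(v_new)
            if p_new < p:                      # power dropped: reverse the perturbation
                direction = -direction
            v, p = v_new, p_new
        return v, p

    print(perturb_and_observe())               # settles into oscillation around the MPP
    ```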

  18. Microchip capillary electrophoresis for point-of-care analysis of lithium

    NARCIS (Netherlands)

    Vrouwe, E.X.; Luttge, R.; Vermes, I.; Berg, van den A.

    2007-01-01

    Background: Microchip capillary electrophoresis (CE) is a promising method for chemical analysis of complex samples such as whole blood. We evaluated the method for point-of-care testing of lithium. Methods: Chemical separation was performed on standard glass microchip CE devices with a conductivity

  19. Intraosseous blood samples for point-of-care analysis: agreement between intraosseous and arterial analyses.

    Science.gov (United States)

    Jousi, Milla; Saikko, Simo; Nurmi, Jouni

    2017-09-11

    Point-of-care (POC) testing is highly useful when treating critically ill patients. In cases of difficult vascular access, the intraosseous (IO) route is commonly used, and blood is aspirated to confirm the correct position of the IO needle. Thus, IO blood samples could easily be accessed for POC analyses in emergency situations. The aim of this study was to determine whether IO values agree sufficiently with arterial values to be used for clinical decision making. Two samples of IO blood were drawn from 31 healthy volunteers and compared with arterial samples. The samples were analysed for sodium, potassium, ionized calcium, glucose, haemoglobin, haematocrit, pH, blood gases, base excess, bicarbonate, and lactate using the i-STAT® POC device. Agreement and reliability were estimated by using the Bland-Altman method and intraclass correlation coefficient calculations. Good agreement was evident between the IO and arterial samples for pH, glucose, and lactate. Potassium levels were clearly higher in the IO samples than in arterial blood. Base excess and bicarbonate were slightly higher, and sodium and ionised calcium values were slightly lower, in the IO samples compared with the arterial values. The blood gases in the IO samples were between arterial and venous values. Haemoglobin and haematocrit showed remarkable variation in agreement. POC diagnostics of IO blood can be a useful tool to guide treatment in critical emergency care. Seeking out the reversible causes of cardiac arrest or assessing the severity of shock are examples of situations in which obtaining vascular access and blood samples can be difficult, though information about the electrolytes, acid-base balance, and lactate could guide clinical decision making. The analysis of IO samples should, however, be limited to situations in which no other option is available, and the results should be interpreted with caution, because there is not yet enough scientific evidence regarding the agreement of IO
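
    As a small illustration of the Bland-Altman statistics used for the agreement estimates, the sketch below computes the bias (mean difference) and the 95 % limits of agreement for paired intraosseous and arterial values; the numbers are made up and are not from the study.

    ```python
    # Bland-Altman bias and limits of agreement for paired measurements.
    import numpy as np

    def bland_altman(a, b):
        a, b = np.asarray(a, float), np.asarray(b, float)
        diff = a - b
        bias = diff.mean()
        loa = 1.96 * diff.std(ddof=1)
        return bias, bias - loa, bias + loa

    io_lactate = [1.1, 1.4, 0.9, 2.0, 1.6, 1.2]       # hypothetical IO values (mmol/L)
    art_lactate = [1.0, 1.5, 0.8, 1.9, 1.7, 1.1]      # paired arterial values (mmol/L)
    bias, lower, upper = bland_altman(io_lactate, art_lactate)
    print(f"bias = {bias:.2f}, limits of agreement = [{lower:.2f}, {upper:.2f}]")
    ```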

  20. DETECTION OF SLOPE MOVEMENT BY COMPARING POINT CLOUDS CREATED BY SFM SOFTWARE

    Directory of Open Access Journals (Sweden)

    K. Oda

    2016-06-01

    Full Text Available This paper proposes a method for detecting movement between point clouds created by SfM software, without setting any georeferenced points on site. SfM software, such as Smart3DCapture, PhotoScan, and Pix4D, is convenient for non-professional operators of photogrammetry, because these systems require simply the specification of a sequence of photos and output point clouds with a colour index which corresponds to the colour of the original image pixel where the point is projected. SfM software can execute aerial triangulation and create dense point clouds fully automatically. This is useful when monitoring the motion of unstable slopes, or loose rocks on slopes along roads or railroads. Most existing methods, however, use mesh-based DSMs for comparing point clouds before/after movement and cannot be applied in cases where parts of the slopes form overhangs. In some cases the movement is smaller than the precision of the ground control points, so registering the two point clouds with GCPs is not appropriate. The change detection method in this paper adopts the CCICP (Classification and Combined ICP) algorithm for registering point clouds before / after movement. The CCICP algorithm is a type of ICP (Iterative Closest Point) which minimizes point-to-plane and point-to-point distances simultaneously, and also rejects incorrect correspondences based on point classification by PCA (Principal Component Analysis). Precision tests show that the CCICP method can register two point clouds to within the order of 1 pixel in the original images. Ground control points set on site are useful for the initial alignment of the two point clouds. If there are no GCPs on the site of the slopes, initial alignment is achieved by measuring feature points as ground control points in the point clouds before movement, and creating the point clouds after movement with these ground control points. When the motion is a rigid transformation, for instance when a loose rock is moving on a slope, motion including rotation can be analysed by executing CCICP for a
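
    To illustrate the registration step that CCICP builds on, the sketch below implements a heavily simplified point-to-point ICP (nearest-neighbour matching plus an SVD rigid fit). The actual CCICP algorithm additionally minimizes point-to-plane distances and rejects correspondences using PCA-based classification, both omitted here; the synthetic point cloud and displacement are invented.

    ```python
    # Simplified point-to-point ICP: nearest-neighbour correspondences + SVD rigid fit.
    import numpy as np
    from scipy.spatial import cKDTree

    def best_rigid_transform(src, dst):
        """Least-squares rotation R and translation t mapping src onto dst."""
        c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
        H = (src - c_src).T @ (dst - c_dst)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:               # avoid reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        return R, c_dst - R @ c_src

    def icp(source, target, iterations=30):
        tree = cKDTree(target)
        current = source.copy()
        for _ in range(iterations):
            _, idx = tree.query(current)        # closest target point for each source point
            R, t = best_rigid_transform(current, target[idx])
            current = current @ R.T + t
        return current

    rng = np.random.default_rng(3)
    target = rng.uniform(0, 10, (500, 3))
    theta = np.radians(2.0)
    Rz = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta), np.cos(theta), 0],
                   [0, 0, 1]])
    source = target @ Rz.T + np.array([0.05, -0.02, 0.01])   # slightly moved copy of the cloud
    aligned = icp(source, target)
    print("mean residual:", np.linalg.norm(aligned - target, axis=1).mean())
    ```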

  1. Importance Performance Analysis as a Trade Show Performance Evaluation and Benchmarking Tool

    OpenAIRE

    Tafesse, Wondwesen; Skallerud, Kåre; Korneliussen, Tor

    2010-01-01

    Author's accepted version (post-print). The purpose of this study is to introduce importance performance analysis as a trade show performance evaluation and benchmarking tool. Importance performance analysis considers exhibitors’ performance expectation and perceived performance in unison to evaluate and benchmark trade show performance. The present study uses data obtained from exhibitors of an international trade show to demonstrate how importance performance analysis can be used to eval...

  2. Investigations on Two Co-C Fixed-Point Cells Prepared at INRIM and LNE-Cnam

    Science.gov (United States)

    Battuello, M.; Florio, M.; Sadli, M.; Bourson, F.

    2011-08-01

    INRIM and LNE-Cnam agreed to undertake a collaboration aimed at verifying, through the use of metal-carbon eutectic fixed-point cells, the methods and facilities used for defining the transition temperature of eutectic fixed points and the manufacturing procedures of the cells. For this purpose, and as a first step of the cooperation, a Co-C cell manufactured at LNE-Cnam was measured at INRIM and compared with a local cell. The two cells were of different designs: the INRIM cell had an inner volume of 10 cm3 and the LNE-Cnam one of 3.9 cm3. The external dimensions of the two cells were noticeably different, namely, 40 mm in length and 24 mm in diameter for the LNE-Cnam cell 3Co4 and 110 mm in length and 42 mm in diameter for the INRIM cell. Consequently, the investigation of the effect of temperature distributions in the heating furnace was undertaken by implementing the cells inside single-zone and three-zone furnaces. The transition temperature of the cell was determined at the two institutes making use of different techniques: at INRIM, radiation scales at 900 nm, 950 nm, and 1.6 μm were realized from In to Cu and then used to define T90(Co-C) by extrapolation. At LNE-Cnam, a radiance comparator based on a grating monochromator was used for the extrapolation from the Cu fixed point. This paper presents a comparative description of the cells and the manufacturing methods, and the results in terms of equivalence between the two cells and the melting temperatures determined at INRIM and LNE-Cnam.

  3. On two special values of temperature factor in hypersonic flow stagnation point

    Science.gov (United States)

    Bilchenko, G. G.; Bilchenko, N. G.

    2018-03-01

    The properties of a mathematical model for heat and mass transfer control in the laminar boundary layer on the permeable cylindrical and spherical surfaces of a hypersonic aircraft are investigated. Systems of nonlinear algebraic equations are obtained for two special values of the temperature factor at the hypersonic flow stagnation point. The bijectivity of the mappings between the local heat and mass transfer parameters and the controls is established. Results of computational experiments are presented: the domains of allowed "heat-friction" values are obtained.

  4. Teach yourself visually PowerPoint 2013

    CERN Document Server

    Wood, William

    2013-01-01

    A straightforward, visual approach to learning the new PowerPoint 2013! PowerPoint 2013 boasts updated features and new possibilities; this highly visual tutorial provides step-by-step instructions to help you learn all the capabilities of PowerPoint 2013. It covers the basics, as well as all the exciting new changes and additions, in a series of easy-to-follow, full-color, two-page tutorials. Learn how to create slides, dress them up using templates and graphics, add sound and animation, and more. This book is the ideal "show me, don't tell me" guide to PowerPoint 2013.

  5. Numerical analysis of sandwich beam with corrugated core under three-point bending

    Energy Technology Data Exchange (ETDEWEB)

    Wittenbeck, Leszek [Poznan University of Technology, Institute of Mathematics, Piotrowo Street No. 5, 60-965 Poznan (Poland); Grygorowicz, Magdalena; Paczos, Piotr [Poznan University of Technology, Institute of Applied Mechanics, Jana Pawla II Street No. 24, 60-965 Poznan (Poland)

    2015-03-10

    The strength problem of a sandwich beam with a corrugated core under three-point bending is presented. The beams are made of steel and formed by three mutually orthogonal corrugated layers. The finite element analysis (FEA) of the sandwich beam is performed with the use of the FEM system ABAQUS. The relationship between the applied load and the deflection in three-point bending is considered.

  6. Two-point paraxial traveltime formula for inhomogeneous isotropic and anisotropic media: Tests of accuracy

    KAUST Repository

    Waheed, Umair bin; Psencik, Ivan; Cerveny, Vlastislav; Iversen, Einar; Alkhalifah, Tariq Ali

    2013-01-01

    On several simple models of isotropic and anisotropic media, we have studied the accuracy of the two-point paraxial traveltime formula designed for the approximate calculation of the traveltime between points S' and R' located in the vicinity of points S and R on a reference ray. The reference ray may be situated in a 3D inhomogeneous isotropic or anisotropic medium with or without smooth curved interfaces. The two-point paraxial traveltime formula has the form of the Taylor expansion of the two-point traveltime with respect to spatial Cartesian coordinates up to quadratic terms at points S and R on the reference ray. The constant term and the coefficients of the linear and quadratic terms are determined from quantities obtained from ray tracing and linear dynamic ray tracing along the reference ray. The use of linear dynamic ray tracing allows the evaluation of the quadratic terms in arbitrarily inhomogeneous media and, as shown by examples, it extends the region of accurate results around the reference ray between S and R (and even outside this interval) obtained with the linear terms only. Although the formula may be used for very general 3D models, we concentrated on simple 2D models of smoothly inhomogeneous isotropic and anisotropic (~8% and ~20% anisotropy) media only. In tests in which we estimated two-point traveltimes between a shifted source and a system of shifted receivers, we found that the formula may yield more accurate results than the numerical solution of an eikonal-based differential equation. The tests also indicated that the accuracy of the formula depends primarily on the length and the curvature of the reference ray and only weakly depends on anisotropy. The greater the curvature of the reference ray, the narrower the vicinity in which the formula yields accurate results.

  8. Throughput analysis of point-to-multi-point hybrid FSO/RF network

    KAUST Repository

    Rakia, Tamer; Gebali, Fayez; Yang, Hong-Chuan; Alouini, Mohamed-Slim

    2017-01-01

    This paper presents and analyzes a point-to-multi-point (P2MP) network that uses a number of free-space optical (FSO) links for data transmission from the central node to the different remote nodes. A common backup radio-frequency (RF) link is used

  9. Cloning and expression analysis of two dehydrodolichyl diphosphate synthase genes from Tripterygium wilfordii

    Directory of Open Access Journals (Sweden)

    Lin-Hui Gao

    2018-01-01

    Full Text Available Objective: To clone and investigate two dehydrodolichyl diphosphate synthase genes of Tripterygium wilfordii by bioinformatics and tissue expression analysis. Materials and Methods: According to the T. wilfordii transcriptome database, specific primers were designed to clone the TwDHDDS1 and TwDHDDS2 genes via PCR. Based on the cloned sequences, protein structure prediction, multiple sequence alignment and phylogenetic tree construction were performed. The expression levels of the genes in different tissues of T. wilfordii were measured by real-time quantitative PCR. Results: The TwDHDDS1 gene encompassed an 873 bp open reading frame (ORF) and encoded a protein of 290 amino acids. The calculated molecular weight of the translated protein was about 33.46 kDa, and the theoretical isoelectric point (pI) was 8.67. The TwDHDDS2 gene encompassed a 768 bp ORF, encoding a protein of 255 amino acids with a calculated molecular weight of about 21.19 kDa and a theoretical isoelectric point (pI) of 7.72. Plant tissue expression analysis indicated that TwDHDDS1 and TwDHDDS2 both have relatively ubiquitous expression in all sampled organ tissues, but showed the highest transcription levels in the stems. Conclusions: The results of this study provide a basis for further functional studies of TwDHDDS1 and TwDHDDS2. Most importantly, these genes are promising genetic targets for the regulation of the biosynthetic pathways of important bioactive terpenoids such as triptolide.

  10. Two fixed-point theorems related to eigenvalues with the solution of Kazdan-Warner's problem on elliptic equations

    International Nuclear Information System (INIS)

    Vidossich, G.

    1979-01-01

    The paper presents a proof of two fixed-point theorems, which unify previous results on periodic solutions of second-order ordinary differential equations, in the sense that the existence part of these solutions becomes a corollary of the fixed-point theorems. (author)

  11. Equivalence of two Fixed-Point Semantics for Definitional Higher-Order Logic Programs

    Directory of Open Access Journals (Sweden)

    Angelos Charalambidis

    2015-09-01

    Full Text Available Two distinct research approaches have been proposed for assigning a purely extensional semantics to higher-order logic programming. The former approach uses classical domain-theoretic tools while the latter builds on a fixed-point construction defined on a syntactic instantiation of the source program. The relationships between these two approaches had not been investigated until now. In this paper we demonstrate that for a very broad class of programs, namely the class of definitional programs introduced by W. W. Wadge, the two approaches coincide (with respect to ground atoms that involve symbols of the program). On the other hand, we argue that if existential higher-order variables are allowed to appear in the bodies of program rules, the two approaches are in general different. The results of the paper contribute to a better understanding of the semantics of higher-order logic programming.

  12. Washing and chilling as critical control points in pork slaughter hazard analysis and critical control point (HACCP) systems.

    Science.gov (United States)

    Bolton, D J; Pearce, R A; Sheridan, J J; Blair, I S; McDowell, D A; Harrington, D

    2002-01-01

    The aim of this research was to examine the effects of preslaughter washing, pre-evisceration washing, final carcass washing and chilling on final carcass quality and to evaluate these operations as possible critical control points (CCPs) within a pork slaughter hazard analysis and critical control point (HACCP) system. This study estimated bacterial numbers (total viable counts) and the incidence of Salmonella at three surface locations (ham, belly and neck) on 60 animals/carcasses processed through a small commercial pork abattoir (80 pigs d(-1)). Significant reductions (P HACCP in pork slaughter plants. This research will provide a sound scientific basis on which to develop and implement effective HACCP in pork abattoirs.

  13. Point Cloud Analysis for Conservation and Enhancement of Modernist Architecture

    Science.gov (United States)

    Balzani, M.; Maietti, F.; Mugayar Kühl, B.

    2017-02-01

    Documentation of cultural assets through improved acquisition processes for advanced 3D modelling is one of the main challenges to be faced in order to address, through digital representation, advanced analysis of the shape, appearance and conservation condition of cultural heritage. 3D modelling can open up new avenues in the way tangible cultural heritage is studied, visualized, curated, displayed and monitored, improving key features such as the analysis and visualization of material degradation and state of conservation. Applied research focused on the analysis of surface specifications and material properties by means of a 3D laser scanner survey has been developed within the project of Digital Preservation of the FAUUSP building, Faculdade de Arquitetura e Urbanismo da Universidade de São Paulo, Brazil. The integrated 3D survey has been performed by the DIAPReM Center of the Department of Architecture of the University of Ferrara in cooperation with the FAUUSP. The 3D survey has allowed the realization of a point cloud model of the external surfaces, as the basis to investigate in detail the formal characteristics, geometric textures and surface features. The digital geometric model was also the basis for processing the intensity values acquired by the laser scanning instrument; this method of analysis was an essential complement to the macroscopic investigations in order to manage additional information related to surface characteristics displayable on the point cloud.

  14. Extracting Loop Bounds for WCET Analysis Using the Instrumentation Point Graph

    Science.gov (United States)

    Betts, A.; Bernat, G.

    2009-05-01

    Every calculation engine proposed in the literature of Worst-Case Execution Time (WCET) analysis requires upper bounds on loop iterations. Existing mechanisms to procure this information are either error prone, because they are gathered from the end-user, or limited in scope, because automatic analyses target very specific loop structures. In this paper, we present a technique that obtains bounds completely automatically for arbitrary loop structures. In particular, we show how to employ the Instrumentation Point Graph (IPG) to parse traces of execution (generated by an instrumented program) in order to extract bounds relative to any loop-nesting level. With this technique, therefore, non-rectangular dependencies between loops can be captured, allowing more accurate WCET estimates to be calculated. We demonstrate the improvement in accuracy by comparing WCET estimates computed through our HMB framework against those computed with state-of-the-art techniques.
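
    The trace-based bound extraction described above can be illustrated with a toy sketch that scans instrumentation-point traces and records, per loop header, the maximum iteration count observed between successive entries of the loop. The IPG construction and the real HMB tool chain are not reproduced here, and all identifiers are hypothetical.

      # Toy sketch: derive loop bounds from execution traces (illustrative, not the HMB framework).
      from collections import defaultdict

      def extract_loop_bounds(traces, loop_headers):
          """traces: list of runs, each a list of instrumentation-point ids in execution order.
          loop_headers: maps a loop-header point id -> the point id marking entry into that loop.
          Returns the maximum observed iteration count of each loop over all traces."""
          bounds = defaultdict(int)
          for run in traces:
              counts = defaultdict(int)
              for point in run:
                  if point in loop_headers:
                      counts[point] += 1                 # one more iteration of this loop
                  for header, entry in loop_headers.items():
                      if point == entry:                 # re-entering the loop: close the previous count
                          bounds[header] = max(bounds[header], counts[header])
                          counts[header] = 0
              for header, c in counts.items():           # flush counts at the end of the run
                  bounds[header] = max(bounds[header], c)
          return dict(bounds)

      # Example: loop with header point 'H' entered at point 'E'; two runs with 3 and 5 iterations.
      traces = [['E', 'H', 'H', 'H', 'X'], ['E', 'H', 'H', 'H', 'H', 'H', 'X']]
      print(extract_loop_bounds(traces, {'H': 'E'}))     # {'H': 5}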

  15. One-point fluctuation analysis of the high-energy neutrino sky

    DEFF Research Database (Denmark)

    Feyereisen, Michael R.; Tamborra, Irene; Ando, Shin'ichiro

    2017-01-01

    We perform the first one-point fluctuation analysis of the high-energy neutrino sky. This method reveals itself to be especially suited to contemporary neutrino data, as it allows one to study the properties of the astrophysical components of the high-energy flux detected by the IceCube telescope, even...

  16. Ethics in Knowledge Organization: Two Conferences Point to a New Core in the Domain

    Directory of Open Access Journals (Sweden)

    Richard P. Smiraglia

    2015-01-01

    Full Text Available http://dx.doi.org/10.5007/1518-2924.2015v20nesp1p1 Two conferences called "Ethics in Information Organization" (EIO), held in 2009 and 2013, brought together practitioners and scholars in knowledge organization (KO) to discuss ethical decision-making for the organization of knowledge. Traditionally the notion of ethics as a component of knowledge organization has occupied a sort of background position. Concepts of cultural warrant clash with concepts of literary warrant to produce harmful knowledge organization systems. Here tools of domain-analytical visualization are applied to the two EIO conferences to demonstrate the potential intension of ethics for KO. Co-word analysis helps to visualize the thematic core in the most frequently used terms: user, ethical, knowledge, national, description, and access. There clearly is a meta-level trajectory incorporating ethics and the user, while the intension includes all applied approaches to KO as well as strong recognition of national, regional, and social cultural identities. Another approach to domain analysis is to examine the social semantics (by analyzing the public record of discourse) through citation patterns. Author co-citation analysis shows work anchored in the basic theoretical premises of KO, but also bringing ideas from outside the domain to bear on the problems of objective violence. A network visualization shows how the work on ethics in KO is based on the core principles of KO, but relies also on evidence from librarianship and philosophical guidance to bring forward the issues surrounding objective violence in KOS. The authors contributing to this small pair of conferences have laid out a pathway for expanding understanding of the role of ethics in KO.

  17. Differences of Cutaneous Two-Point Discrimination Thresholds Among Students in Different Years of a Chiropractic Program.

    Science.gov (United States)

    Dane, Andrew B; Teh, Elaine; Reckelhoff, Kenneth E; Ying, Pee Kui

    2017-09-01

    The aim of this study was to investigate if there were differences in the two-point discrimination (2-PD) of fingers among students at different stages of a chiropractic program. This study measured 2-PD thresholds for the dominant and nondominant index finger and dominant and nondominant forearm in groups of students in a 4-year chiropractic program at the International Medical University in Kuala Lumpur, Malaysia. Measurements were made using digital calipers mounted on a modified weighing scale. Group comparisons were made among students for each year of the program (years 1, 2, 3, and 4). Analysis of the 2-PD threshold for differences among the year groups was performed with analysis of variance. The mean 2-PD threshold of the index finger was higher in the students who were in the higher year groups. Dominant-hand mean values for year 1 were 2.93 ± 0.04 mm and 1.69 ± 0.02 mm in year 4. There were significant differences at finger sites (P < .05) among all year groups compared with year 1. There were no significant differences measured at the dominant forearm between any year groups (P = .08). The nondominant fingers of the year groups 1, 2, and 4 showed better 2-PD compared with the dominant finger. There was a significant difference (P = .005) between the nondominant (1.93 ± 1.15) and dominant (2.27 ± 1.14) fingers when all groups were combined (n = 104). The results of this study demonstrated that the finger 2-PD of the chiropractic students later in the program was more precise than that of students in the earlier program. Copyright © 2017. Published by Elsevier Inc.

  18. [Experience of a Break-Even Point Analysis for Make-or-Buy Decision.].

    Science.gov (United States)

    Kim, Yunhee

    2006-12-01

    Cost containment through continuous quality improvement of medical services is required in an age of keen competition in the medical market. Laboratory managers should examine make-or-buy decisions periodically. On this occasion, a break-even point analysis can be useful as an analytical tool. In this study, cost accounting and break-even point (BEP) analysis were performed for the case in which the immunoassay items showing a recent increase in order volume were to be made in-house. Fixed and variable costs were calculated for the case in which alpha-fetoprotein (AFP), carcinoembryonic antigen (CEA), prostate-specific antigen (PSA), ferritin, free thyroxine (fT4), triiodothyronine (T3), thyroid-stimulating hormone (TSH), CA 125, CA 19-9, and hepatitis B envelope antibody (HBeAb) were to be tested with the Abbott AxSYM instrument. Break-even volume was calculated as the fixed cost per year divided by the purchasing cost per test minus the variable cost per test, and the BEP ratio as the total purchasing cost at break-even volume divided by the total purchasing cost at the actual annual volume. The average fixed cost per year of AFP, CEA, PSA, ferritin, fT4, T3, TSH, CA 125, CA 19-9, and HBeAb was 8,279,187 won and the average variable cost per test was 3,786 won. The average break-even volume was 1,599 and the average BEP ratio was 852%. The average BEP ratio without including quality costs such as calibration and quality control was 74%. Because the quality assurance of clinical tests cannot be waived, outsourcing all 10 items was financially more appropriate than in-house testing at the present volume. BEP analysis was useful as a financial tool for the make-or-buy decision, a matter that laboratory managers commonly face.
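
    The break-even calculation quoted above can be written out directly. The sketch below is illustrative only: the fixed and variable costs are the averages reported in the record, while the outsourcing price per test is not given and is therefore a hypothetical value back-computed so that the reported average break-even volume of about 1,599 tests is reproduced.

      # Break-even point (BEP) calculation following the definition in the record above.
      def break_even_volume(fixed_cost_per_year, purchase_cost_per_test, variable_cost_per_test):
          """Annual test volume at which the in-house cost equals the outsourcing cost."""
          return fixed_cost_per_year / (purchase_cost_per_test - variable_cost_per_test)

      fixed_cost = 8_279_187     # average fixed cost per year (won), from the record
      variable_cost = 3_786      # average variable cost per test (won), from the record
      purchase_cost = 8_963      # outsourcing price per test (won); hypothetical, back-computed

      volume = break_even_volume(fixed_cost, purchase_cost, variable_cost)
      print(f"break-even volume ~ {volume:.0f} tests/year")   # ~1599 tests/year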

  19. Equipartitioning and balancing points of polygons

    Directory of Open Access Journals (Sweden)

    Shunmugam Pillay

    2010-07-01

    Full Text Available The centre of mass G of a triangle has the property that the rays to the vertices from G sweep out triangles having equal areas. We show that such points, termed equipartitioning points in this paper, need not exist in other polygons. A necessary and sufficient condition for a quadrilateral to have an equipartitioning point is that one of its diagonals bisects the other. The general theorem, namely, necessary and sufficient conditions for equipartitioning points for arbitrary polygons to exist, is also stated and proved. When this happens, they are in general, distinct from the centre of mass. In parallelograms, and only in them, do the two points coincide.
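
    The quadrilateral criterion stated above (one diagonal bisects the other) is easy to check numerically; the sketch below is purely illustrative and the function names are not taken from the paper.

      # Check whether a quadrilateral ABCD admits an equipartitioning point,
      # i.e. whether one of its diagonals bisects the other.
      import numpy as np

      def has_equipartitioning_point(A, B, C, D, tol=1e-9):
          A, B, C, D = map(lambda p: np.asarray(p, dtype=float), (A, B, C, D))

          def on_segment(P, S, E):
              # P lies on segment SE if the cross product vanishes and P is between S and E.
              cross = (E[0] - S[0]) * (P[1] - S[1]) - (E[1] - S[1]) * (P[0] - S[0])
              if abs(cross) > tol:
                  return False
              t = np.dot(P - S, E - S) / np.dot(E - S, E - S)
              return -tol <= t <= 1 + tol

          mid_AC, mid_BD = (A + C) / 2, (B + D) / 2
          return on_segment(mid_AC, B, D) or on_segment(mid_BD, A, C)

      # A parallelogram qualifies (its diagonals bisect each other) ...
      print(has_equipartitioning_point((0, 0), (2, 0), (3, 1), (1, 1)))   # True
      # ... while a generic quadrilateral usually does not.
      print(has_equipartitioning_point((0, 0), (4, 0), (3, 3), (0, 1)))   # False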

  20. A TWO-STEP CLASSIFICATION APPROACH TO DISTINGUISHING SIMILAR OBJECTS IN MOBILE LIDAR POINT CLOUDS

    Directory of Open Access Journals (Sweden)

    H. He

    2017-09-01

    Full Text Available Nowadays, lidar is widely used in cultural heritage documentation, urban modeling, and driverless car technology for its fast and accurate 3D scanning ability. However, full exploitation of the potential of point cloud data for efficient and automatic object recognition remains elusive. Recently, feature-based methods have become very popular in object recognition on account of their good performance in capturing object details. Compared with global features describing the whole shape of the object, local features recording the fractional details are more discriminative and are applicable to object classes with considerable similarity. In this paper, we propose a two-step classification approach based on point feature histograms and the bag-of-features method for automatic recognition of similar objects in mobile lidar point clouds. Lamp posts, street lights and traffic signs are grouped into one category in the first-step classification because of their mutual similarity compared with trees and vehicles. A finer classification of the lamp posts, street lights and traffic signs, based on the result of the first-step classification, is implemented in the second step. The proposed two-step classification approach is shown to yield a considerable improvement over the conventional one-step classification approach.
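
    The coarse-to-fine strategy described above can be sketched with a generic classifier. The sketch below is purely illustrative: it uses scikit-learn support vector machines on synthetic per-object feature vectors standing in for point-feature-histogram descriptors, and none of the names or parameters come from the paper.

      # Two-step classification sketch: separate pole-like objects from trees/vehicles first,
      # then refine the pole-like objects into lamp post / street light / traffic sign.
      import numpy as np
      from sklearn.svm import SVC

      POLE_LIKE = {"lamp_post", "street_light", "traffic_sign"}

      def train_two_step(features, labels):
          coarse_labels = ["pole_like" if y in POLE_LIKE else y for y in labels]
          coarse = SVC(kernel="rbf").fit(features, coarse_labels)                  # step 1
          mask = np.array([y in POLE_LIKE for y in labels])
          fine = SVC(kernel="rbf").fit(features[mask], np.array(labels)[mask])     # step 2
          return coarse, fine

      def predict_two_step(coarse, fine, features):
          out = list(coarse.predict(features))
          pole_idx = [i for i, lab in enumerate(out) if lab == "pole_like"]
          if pole_idx:                                      # refine only the pole-like objects
              for i, lab in zip(pole_idx, fine.predict(features[pole_idx])):
                  out[i] = lab
          return out

      # Tiny synthetic demo with random 32-bin "histograms" (purely illustrative).
      rng = np.random.default_rng(0)
      X = rng.random((60, 32))
      y = np.array(["lamp_post", "street_light", "traffic_sign", "tree", "vehicle"] * 12)
      coarse, fine = train_two_step(X, y)
      print(predict_two_step(coarse, fine, X[:5]))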

  1. Two Ti13-oxo-clusters showing non-compact structures, film electrode preparation and photocurrent properties.

    Science.gov (United States)

    Hou, Jin-Le; Luo, Wen; Wu, Yin-Yin; Su, Hu-Chao; Zhang, Guang-Lin; Zhu, Qin-Yu; Dai, Jie

    2015-12-14

    Two benzene dicarboxylate (BDC) and salicylate (SAL) substituted titanium-oxo-clusters, Ti13O10(o-BDC)4(SAL)4(O(i)Pr)16 (1) and Ti13O10(o-BDC)4(SAL-Cl)4(O(i)Pr)16 (2), are prepared by a one-step in situ solvothermal synthesis. Single crystal analysis shows that the two Ti13 clusters take a paddle arrangement with an S4 symmetry. The non-compact (non-spherical) structure is stabilized by the coordination of BDC and SAL. Film photoelectrodes are prepared by a wet coating process using a solution of the clusters, and the photocurrent response properties of the electrodes are studied. It is found that the photocurrent density and photoresponsiveness of the electrodes are related to the number of coating layers and the annealing temperature. Using ligand-coordinated titanium-oxo-clusters as the molecular precursors of TiO2 anatase films is found to be effective due to their high solubility, appropriate stability in solution and hence easy controllability.

  2. Investigation of Primary Dew-Point Saturator Efficiency in Two Different Thermal Environments

    Science.gov (United States)

    Zvizdic, D.; Heinonen, M.; Sestan, D.

    2015-08-01

    The aim of this paper is to describe the evaluation of the performance of the low-range saturator (LRS) when exposed to two different thermal environments. The examined saturator was designed, built, and tested at MIKES (Centre for Metrology and Accreditation, Finland), and then transported to the Laboratory for Process Measurement (LPM) in Croatia, where it was implemented in a new dew-point calibration system. The saturator works on a single-pressure-single-pass generation principle in the dew/frost-point temperature range between and . The purpose of the various tests performed at MIKES was to examine the efficiency and non-ideality of the saturator. As the test bath facility in Croatia differs from the one used in Finland, the same tests were repeated at LPM, and the effects of different thermal conditions on saturator performance were examined. The thermometers, pressure gauges, air preparation system, and water for filling the saturator at LPM were also different from those used at MIKES. The results obtained by both laboratories indicate that the efficiency of the examined saturator was not affected either by the thermal conditions under which it was tested or by the equipment used for the tests. Both laboratories concluded that the LRS is efficient enough for a primary realization of the dew/frost-point temperature scale in the range from to , with flow rates between and . It is also shown that a considerable difference in the pre-saturator efficiency indicated by the two laboratories did not influence the overall performance of the saturator. The results of the research are presented in graphical and tabular forms. This paper also gives a brief description of the design and operation principle of the investigated low-range saturator.

  3. Two-point boundary value and Cauchy formulations in an axisymmetrical MHD equilibrium problem

    International Nuclear Information System (INIS)

    Atanasiu, C.V.; Subbotin, A.A.

    1999-01-01

    In this paper we present two equilibrium solvers for axisymmetrical toroidal configurations, both based on the expansion in poloidal angle method. The first one has been conceived as a two-point boundary value solver in a system of coordinates with straight field lines, while the second one uses a well-conditioned Cauchy formulation of the problem in a general curvilinear coordinate system. In order to check the capability of our moment methods to describe equilibrium accurately, a comparison of the moment solutions with analytical solutions obtained for a Solov'ev equilibrium has been performed. (author)

  4. Bayesian analysis of Markov point processes

    DEFF Research Database (Denmark)

    Berthelsen, Kasper Klitgaard; Møller, Jesper

    2006-01-01

    Recently Møller, Pettitt, Berthelsen and Reeves introduced a new MCMC methodology for drawing samples from a posterior distribution when the likelihood function is only specified up to a normalising constant. We illustrate the method in the setting of Bayesian inference for Markov point processes...... a partially ordered Markov point process as the auxiliary variable. As the method requires simulation from the "unknown" likelihood, perfect simulation algorithms for spatial point processes become useful....

  5. High-resolution wave number spectrum using multi-point measurements in space – the Multi-point Signal Resonator (MSR) technique

    Directory of Open Access Journals (Sweden)

    Y. Narita

    2011-02-01

    Full Text Available A new analysis method is presented that provides a high-resolution power spectrum in a broad wave number domain based on multi-point measurements. The analysis technique is referred to as the Multi-point Signal Resonator (MSR) and it benefits from Capon's minimum variance method for obtaining the proper power spectral density of the signal as well as the MUSIC algorithm (Multiple Signal Classification) for considerably reducing the noise part in the spectrum. The mathematical foundation of the analysis method is presented and it is applied to synthetic data as well as Cluster observations of the interplanetary magnetic field. Using the MSR technique for Cluster data we find a wave in the solar wind propagating parallel to the mean magnetic field with relatively small amplitude, which is not identified by the Capon spectrum. The Cluster data analysis shows the potential of the MSR technique for studying waves and turbulence using multi-point measurements.
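
    The two estimators combined in the MSR technique, Capon's minimum variance spectrum and the MUSIC pseudospectrum, can be illustrated on a toy one-dimensional frequency-estimation problem. The sketch below uses synthetic data and is not the multi-spacecraft wave-number implementation; the array size, noise level and frequencies are arbitrary assumptions.

      # Capon and MUSIC spectra on synthetic snapshots (toy 1-D illustration).
      import numpy as np
      from scipy.signal import find_peaks

      rng = np.random.default_rng(1)
      M, N = 8, 200                     # snapshot length and number of snapshots
      true_freqs = [0.12, 0.27]         # normalised frequencies of two signals

      def steering(f, M):
          return np.exp(2j * np.pi * f * np.arange(M))

      X = sum(steering(f, M)[:, None] * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
              for f in true_freqs)
      X = X + 0.1 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))

      R = X @ X.conj().T / N            # sample covariance matrix
      R_inv = np.linalg.inv(R)
      _, eigvecs = np.linalg.eigh(R)
      En = eigvecs[:, :M - len(true_freqs)]      # noise subspace (smallest eigenvalues)

      grid = np.linspace(0.0, 0.5, 501)
      capon = np.array([1.0 / np.real(steering(f, M).conj() @ R_inv @ steering(f, M)) for f in grid])
      music = np.array([1.0 / np.sum(np.abs(En.conj().T @ steering(f, M)) ** 2) for f in grid])

      def top_peaks(spectrum, grid, k=2):
          idx, _ = find_peaks(spectrum)
          return np.sort(grid[idx[np.argsort(spectrum[idx])[-k:]]])

      print("Capon peaks near:", top_peaks(capon, grid))   # ~ [0.12, 0.27]
      print("MUSIC peaks near:", top_peaks(music, grid))   # ~ [0.12, 0.27]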

  6. Point Defects in Two-Dimensional Layered Semiconductors: Physics and Its Applications

    Science.gov (United States)

    Suh, Joonki

    Recent advances in material science and semiconductor processing have been achieved largely based on the in-depth understanding, efficient management and advanced application of point defects in host semiconductors, with the relevant techniques such as doping and defect engineering serving as traditional scientific and technological solutions. Meanwhile, two-dimensional (2D) layered semiconductors currently draw tremendous attention due to industrial needs and their rich physics at the nanoscale; as we approach the end of critical device dimensions in silicon-based technology, ultra-thin semiconductors have potential as next-generation channel materials, and new physics also emerges at such reduced dimensions where confinement of electrons, phonons, and other quasi-particles is significant. It is therefore rewarding and interesting to understand and redefine the impact of lattice defects by investigating their interactions with energy/charge carriers of the host matter. Potentially, the established understanding will provide unprecedented opportunities for realizing new functionalities and enhancing the performance of energy harvesting and optoelectronic devices. In this thesis, multiple novel 2D layered semiconductors, such as bismuth and transition-metal chalcogenides, are explored. Following an introduction of conventional effects induced by point defects in semiconductors, the related physics of electronically active amphoteric defects is revisited in greater detail. This can elucidate the complication of a two-dimensional electron gas coexisting with the topological states on the surface of bismuth chalcogenides, recently suggested as topological insulators. Therefore, native point defects are still one of the keys to understanding and exploiting topological insulators. In addition to the fundamental science point of view, the effects of point defects on the integrated thermal-electrical transport, as well as the entropy-transporting process in

  7. Proprioceptive Interaction between the Two Arms in a Single-Arm Pointing Task.

    Directory of Open Access Journals (Sweden)

    Kazuyoshi Kigawa

    Full Text Available Proprioceptive signals coming from both arms are used to determine the perceived position of one arm in a two-arm matching task. Here, we examined whether the perceived position of one arm is affected by proprioceptive signals from the other arm in a one-arm pointing task in which participants specified the perceived position of an unseen reference arm with an indicator paddle. Both arms were hidden from the participant's view throughout the study. In Experiment 1, with both arms placed in front of the body, the participants received 70-80 Hz vibration to the elbow flexors of the reference arm (= right arm) to induce the illusion of elbow extension. This extension illusion was compared with that when the left arm elbow flexors were vibrated or not. The degree of the vibration-induced extension illusion of the right arm was reduced in the presence of left arm vibration. In Experiment 2, we found that this kinesthetic interaction between the two arms did not occur when the left arm was vibrated in an abducted position. In Experiment 3, the vibration-induced extension illusion of one arm was fully developed when this arm was placed at an abducted position, indicating that the brain receives increased proprioceptive input from a vibrated arm even if the arm was abducted. Our results suggest that proprioceptive interaction between the two arms occurs in a one-arm pointing task when the two arms are aligned with one another. The position sense of one arm measured using a pointer appears to include the influences of incoming information from the other arm when both arms were placed in front of the body and parallel to one another.

  8. Matching fields and lattice points of simplices

    OpenAIRE

    Loho, Georg; Smith, Ben

    2018-01-01

    We show that the Chow covectors of a linkage matching field define a bijection of lattice points and we demonstrate how one can recover the linkage matching field from this bijection. This resolves two open questions from Sturmfels & Zelevinsky (1993) on linkage matching fields. For this, we give an explicit construction that associates a bipartite incidence graph of an ordered partition of a common set to all lattice points in a dilated simplex. Given a triangulation of a product of two simp...

  9. The application of quality risk management to the bacterial endotoxins test: use of hazard analysis and critical control points.

    Science.gov (United States)

    Annalaura, Carducci; Giulia, Davini; Stefano, Ceccanti

    2013-01-01

    Risk analysis is widely used in the pharmaceutical industry to manage production processes, validation activities, training, and other activities. Several methods of risk analysis are available (for example, failure mode and effects analysis, fault tree analysis), and one or more should be chosen and adapted to the specific field where they will be applied. Among the methods available, hazard analysis and critical control points (HACCP) is a methodology that has been applied since the 1960s, and whose areas of application have expanded over time from food to the pharmaceutical industry. It can be easily and successfully applied to several processes because its main feature is the identification, assessment, and control of hazards. It can be also integrated with other tools, such as fishbone diagram and flowcharting. The aim of this article is to show how HACCP can be used to manage an analytical process, propose how to conduct the necessary steps, and provide data templates necessary to document and useful to follow current good manufacturing practices. In the quality control process, risk analysis is a useful tool for enhancing the uniformity of technical choices and their documented rationale. Accordingly, it allows for more effective and economical laboratory management, is capable of increasing the reliability of analytical results, and enables auditors and authorities to better understand choices that have been made. The aim of this article is to show how hazard analysis and critical control points can be used to manage bacterial endotoxins testing and other analytical processes in a formal, clear, and detailed manner.

  10. Anisotropic diffusion of point defects in a two-dimensional crystal of streptavidin observed by high-speed atomic force microscopy

    International Nuclear Information System (INIS)

    Yamamoto, Daisuke; Uchihashi, Takayuki; Kodera, Noriyuki; Ando, Toshio

    2008-01-01

    The diffusion of individual point defects in a two-dimensional streptavidin crystal formed on biotin-containing supported lipid bilayers was observed by high-speed atomic force microscopy. The two-dimensional diffusion of monovacancy defects exhibited anisotropy correlated with the two crystallographic axes in the orthorhombic C222 crystal; in the 2D plane, one axis (the a-axis) is comprised of contiguous biotin-bound subunit pairs whereas the other axis (the b-axis) is comprised of contiguous biotin-unbound subunit pairs. The diffusivity along the b-axis is approximately 2.4 times larger than that along the a-axis. This anisotropy is ascribed to the difference in the association free energy between the biotin-bound subunit-subunit interaction and the biotin-unbound subunit-subunit interaction. The preferred intermolecular contact occurs between the biotin-unbound subunits. The difference in the intermolecular binding energy between the two types of subunit pair is estimated to be approximately 0.52 kcal mol-1. Another observed dynamic behavior of point defects was the fusion of two point defects into a larger defect, which occurred much more frequently than the fission of a point defect into smaller defects. The diffusivity of point defects increased with increasing defect size. The fusion and the higher diffusivity of larger defects are suggested to be involved in the mechanism for the formation of defect-free crystals
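
    The quoted 0.52 kcal mol-1 can be recovered from the reported diffusivity ratio if one assumes an Arrhenius-type dependence of the hopping rate on the subunit-subunit binding energy at roughly room temperature (an assumption made here for illustration, not stated in the abstract):

      % Assuming D_b / D_a = exp(\Delta E / RT) with T \approx 300 K:
      \Delta E \approx RT \ln\!\left(\frac{D_b}{D_a}\right)
               = (1.987\times10^{-3}\ \mathrm{kcal\,mol^{-1}\,K^{-1}})(300\ \mathrm{K})\,\ln(2.4)
               \approx 0.52\ \mathrm{kcal\,mol^{-1}}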

  11. Growth study and essential oil analysis of Piper aduncum from two sites of Cerrado biome of Minas Gerais State, Brazil

    Directory of Open Access Journals (Sweden)

    Gisele L. Oliveira

    Full Text Available Piper aduncum L., Piperaceae, stands out due to its biological activities; however, it is still found in the wild and little is known about it from an agronomic point of view. The aim of this study was to evaluate the growth and to analyze the chemical composition of essential oils from leaves of P. aduncum collected in two different sites of the Cerrado, as well as in cultivated plants. The cultivation was established in a greenhouse using cuttings from adult specimens. Essential oils were obtained from fresh leaves. Plants from the two studied locations showed an erect growth habit and linear growth behavior. The essential oil composition of P. aduncum from Bocaiuva did not differ between wild and cultivated plants, with the major substance identified as 1,8-cineole. The plants from the Montes Claros site showed a distinct composition for the two samples, with the major substance characterized as trans-ocimene (13.4%) for wild and 1,8-cineole (31.3%) for cultivated plants. Samples from both locations showed a similar essential oil composition in cultivars. Our results showed that P. aduncum cultivation is feasible and that the variation in chemical composition between the two sites may indicate an environmental influence, since chemical and isoenzyme analyses did not show great differences.

  12. H-point standard additions method for simultaneous determination of sulfamethoxazole and trimethoprim in pharmaceutical formulations and biological fluids with simultaneous addition of two analytes

    Science.gov (United States)

    Givianrad, M. H.; Saber-Tehrani, M.; Aberoomand-Azar, P.; Mohagheghian, M.

    2011-03-01

    The applicability of the H-point standard additions method (HPSAM) to resolving the overlapping spectra of sulfamethoxazole and trimethoprim is verified by UV-vis spectrophotometry. The results show that the H-point standard additions method with simultaneous addition of both analytes is suitable for the simultaneous determination of sulfamethoxazole and trimethoprim in aqueous media. The results of applying the H-point standard additions method showed that the two drugs could be determined simultaneously with concentration ratios of sulfamethoxazole to trimethoprim varying from 1:18 to 16:1 in the mixed samples. The limits of detection were 0.58 and 0.37 μmol L-1 for sulfamethoxazole and trimethoprim, respectively. In addition, the mean calculated RSDs (%) were 1.63 and 2.01 for SMX and TMP, respectively, in synthetic mixtures. The proposed method has been successfully applied to the simultaneous determination of sulfamethoxazole and trimethoprim in some synthetic, pharmaceutical formulation and biological fluid samples.
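
    A generic H-point standard additions calculation can be sketched as follows: the absorbance of the mixture is measured at two wavelengths (chosen so that the interfering species absorbs equally at both) for a series of standard additions of the analyte, and the abscissa of the intersection of the two regression lines gives the analyte concentration. This is a textbook-style sketch, not the calibration procedure of the paper, and the numbers below are made up for illustration.

      # Generic H-point standard additions sketch (synthetic numbers, illustrative only).
      import numpy as np

      c_added = np.array([0.0, 2.0, 4.0, 6.0, 8.0])         # added analyte (umol/L)
      A_l1 = np.array([0.210, 0.250, 0.290, 0.330, 0.370])   # absorbance at wavelength 1
      A_l2 = np.array([0.300, 0.400, 0.500, 0.600, 0.700])   # absorbance at wavelength 2

      m1, b1 = np.polyfit(c_added, A_l1, 1)                  # straight-line fits A = m*c + b
      m2, b2 = np.polyfit(c_added, A_l2, 1)

      x_H = (b2 - b1) / (m1 - m2)                            # abscissa of the intersection (H-point)
      print(f"analyte concentration ~ {-x_H:.2f} umol/L")    # ~3.00 in this synthetic example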

  13. The two chromosomes of Vibrio cholerae are initiated at different time points in the cell cycle

    DEFF Research Database (Denmark)

    Rasmussen, Tue; Jensen, Rasmus Bugge; Skovgaard, Ole

    2007-01-01

    for analysing flow cytometry data and marker frequency analysis, we show that the small chromosome II is replicated late in the C period of the cell cycle, where most of chromosome I has been replicated. Owing to the delay in initiation of chromosome II, the two chromosomes terminate replication...... at approximately the same time and the average number of replication origins per cell is higher for chromosome I than for chromosome II. Analysis of cell-cycle parameters shows that chromosome replication and segregation is exceptionally fast in V. cholerae. The divided genome and delayed replication of chromosome...... II may reduce the metabolic burden and complexity of chromosome replication by postponing DNA synthesis to the last part of the cell cycle and reducing the need for overlapping replication cycles during rapid proliferation...

  14. Risk-analysis of global climate tipping points

    Energy Technology Data Exchange (ETDEWEB)

    Frieler, Katja; Meinshausen, Malte; Braun, N [Potsdam Institute for Climate Impact Research e.V., Potsdam (Germany). PRIMAP Research Group; and others

    2012-09-15

    There are many elements of the Earth system that are expected to change gradually with increasing global warming. Changes might prove to be reversible after global warming returns to lower levels. But there are others that have the potential of showing a threshold behavior. This means that these changes would imply a transition between qualitatively disparate states which can be triggered by only small shifts in background climate (2). These changes are often expected not to be reversible by returning to the current level of warming. The reason for that is, that many of them are characterized by self-amplifying processes that could lead to a new internally stable state which is qualitatively different from before. There are different elements of the climate system that are already identified as potential tipping elements. This group contains the mass losses of the Greenland and the West-Antarctic Ice Sheet, the decline of the Arctic summer sea ice, different monsoon systems, the degradation of coral reefs, the dieback of the Amazon rainforest, the thawing of the permafrost regions as well as the release of methane hydrates (3). Crucially, these tipping elements have regional to global scale effects on human society, biodiversity and/or ecosystem services. Several examples may have a discernable effect on global climate through a large-scale positive feedback. This means they would further amplify the human induced climate change. These tipping elements pose risks comparable to risks found in other fields of human activity: high-impact events that have at least a few percent chance to occur classify as high-risk events. In many of these examples adaptation options are limited and prevention of occurrence may be a more viable strategy. Therefore, a better understanding of the processes driving tipping points is essential. There might be other tipping elements even more critical but not yet identified. These may also lie within our socio-economic systems that are

  15. Methods for registration laser scanner point clouds in forest stands

    International Nuclear Information System (INIS)

    Bienert, A.; Pech, K.; Maas, H.-G.

    2011-01-01

    Laser scanning is a fast and efficient 3-D measurement technique to capture surface points describing the geometry of a complex object in an accurate and reliable way. Besides airborne laser scanning, terrestrial laser scanning is finding growing interest for forestry applications. These two different recording platforms show large differences in resolution, recording area and scan viewing direction. Using both datasets for a combined point cloud analysis may yield advantages because of their largely complementary information. In this paper, methods will be presented to automatically register airborne and terrestrial laser scanner point clouds of a forest stand. In a first step, tree detection is performed in both datasets in an automatic manner. In a second step, corresponding tree positions are determined using RANSAC. Finally, the geometric transformation is performed, divided into a coarse and a fine registration. After the coarse registration, the fine registration is done in an iterative manner (ICP) using the point clouds themselves. The methods are tested and validated with a dataset of a forest stand. The presented registration results provide accuracies which fulfill the forestry requirements.
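
    The coarse registration step described above (matching detected tree positions with RANSAC before refining with ICP) can be sketched as follows. The sketch assumes the tree positions are available as 2D stem coordinates and estimates a rigid transform with an SVD inside a simple RANSAC loop; all names and thresholds are illustrative, not taken from the paper.

      # RANSAC-style coarse registration from candidate tree-position correspondences (2D sketch).
      import numpy as np

      def rigid_from_pairs(src, dst):
          """Least-squares rigid transform (R, t) mapping src points (N x 2) onto dst points."""
          c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
          U, _, Vt = np.linalg.svd((src - c_src).T @ (dst - c_dst))
          R = Vt.T @ U.T
          if np.linalg.det(R) < 0:        # keep a proper rotation
              Vt[-1, :] *= -1
              R = Vt.T @ U.T
          return R, c_dst - R @ c_src

      def ransac_register(src, dst, candidates, n_iter=500, thresh=0.5, rng=None):
          """candidates: list of (i, j) index pairs of possibly corresponding trees."""
          rng = rng or np.random.default_rng(0)
          best_R, best_t, best_inliers = np.eye(2), np.zeros(2), []
          for _ in range(n_iter):
              pick = [candidates[k] for k in rng.choice(len(candidates), size=2, replace=False)]
              R, t = rigid_from_pairs(np.array([src[i] for i, _ in pick]),
                                      np.array([dst[j] for _, j in pick]))
              inliers = [(i, j) for i, j in candidates
                         if np.linalg.norm(R @ src[i] + t - dst[j]) < thresh]
              if len(inliers) > len(best_inliers):
                  best_inliers = inliers
          if best_inliers:                 # refit the transform on all inliers
              best_R, best_t = rigid_from_pairs(np.array([src[i] for i, _ in best_inliers]),
                                                np.array([dst[j] for _, j in best_inliers]))
          return best_R, best_t, best_inliers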

  16. Linear stability analysis of laminar flow near a stagnation point in the slip flow regime

    Science.gov (United States)

    Essaghir, E.; Oubarra, A.; Lahjomri, J.

    2017-12-01

    The aim of the present contribution is to analyze the effect of the slip parameter on the stability of a laminar incompressible flow near a stagnation point in the slip flow regime. The analysis is based on the traditional normal mode approach and assumes the parallel flow approximation. The Orr-Sommerfeld equation that governs the infinitesimal disturbance of the stream function imposed on the steady main flow, which is an exact solution of the Navier-Stokes equation satisfying slip boundary conditions, is solved by using the spectral Chebyshev collocation method. The effects of the slip parameter K on the hydrodynamic characteristics of the base flow, namely the velocity profile, the shear stress profile, and the boundary layer, displacement and momentum thicknesses, are illustrated and discussed. The numerical data for these characteristics, as well as those for the eigenvalues and the corresponding wave numbers, recover the results of the special case of no-slip boundary conditions. They are found to be in good agreement with previous numerical calculations. The effects of the slip parameter on the neutral stability curves, for two-dimensional disturbances in the Reynolds-wave number plane, are then obtained for the first time in the slip flow regime for stagnation point flow. Furthermore, the evolution of the critical Reynolds number against the slip parameter is established. The results show that the critical Reynolds number for instability is significantly increased with the slip parameter and that the flow turns out to be more stable when the effect of rarefaction becomes important.
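
    The building block of the Chebyshev collocation approach mentioned above is the Chebyshev differentiation matrix on Gauss-Lobatto points; eigenvalue problems such as the Orr-Sommerfeld equation are then discretized with powers of this matrix plus boundary conditions. The sketch below only constructs that matrix, following the standard Trefethen-style recipe, and does not reproduce the stability solver of the paper.

      # Chebyshev differentiation matrix on N+1 Gauss-Lobatto points (standard construction).
      import numpy as np

      def cheb(N):
          """Return (D, x): first-derivative matrix D and collocation points x on [-1, 1]."""
          if N == 0:
              return np.zeros((1, 1)), np.array([1.0])
          x = np.cos(np.pi * np.arange(N + 1) / N)
          c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
          X = np.tile(x, (N + 1, 1)).T
          dX = X - X.T
          D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
          D = D - np.diag(D.sum(axis=1))          # negative-sum trick for the diagonal entries
          return D, x

      # Quick check: differentiate sin(pi*x) and compare with pi*cos(pi*x).
      D, x = cheb(24)
      err = np.max(np.abs(D @ np.sin(np.pi * x) - np.pi * np.cos(np.pi * x)))
      print(f"max derivative error: {err:.2e}")    # spectrally small, ~1e-10 or below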

  17. [Three-point bending moment of two types of resin for temporary bridges after reinforcement with glass fibers].

    Science.gov (United States)

    Didia, E E; Akon, A B; Thiam, A; Djeredou, K B

    2010-03-01

    One of the concerns of the dental surgeon in any operative procedure is its durability. The mechanical resistance of provisional prostheses contributes in large part to their durability. Resins, in general, have weak mechanical properties. The purpose of this study was to evaluate the flexural resistance of temporary bridges reinforced with glass fibre. To remedy the weak mechanical properties of resins, we chose in this study to reinforce them with glass fibres. For this purpose, we produced, with two different resins, four groups of three-unit temporary bridges, including two groups reinforced with glass fibre and two that were not. Three-point bending tests were carried out on these bridges and the resistance to fracture was analysed. The statistical tests showed a significant difference among the four groups, with better resistance for the reinforced bridges.

  18. On one two-point BVP for the fourth order linear ordinary differential equation

    Czech Academy of Sciences Publication Activity Database

    Mukhigulashvili, Sulkhan; Manjikashvili, M.

    2017-01-01

    Roč. 24, č. 2 (2017), s. 265-275 ISSN 1072-947X Institutional support: RVO:67985840 Keywords: fourth order linear ordinary differential equations * two-point boundary value problems Subject RIV: BA - General Mathematics OBOR OECD: Applied mathematics Impact factor: 0.290, year: 2016 https://www.degruyter.com/view/j/gmj.2017.24.issue-2/gmj-2016-0077/gmj-2016-0077.xml

  20. Theoretical assessment of the disparity in the electrostatic forces between two point charges and two conductive spheres of equal radii

    Science.gov (United States)

    Kolikov, Kiril

    2016-11-01

    Coulomb's formula for the force FC of electrostatic interaction between two point charges is well known. In reality, however, interactions occur not between point charges, but between charged bodies of a certain geometric form, size and physical structure. This leads to deviation of the estimated force FC from the real force F of electrostatic interaction, thus imposing the task of evaluating the disparity. In the present paper the problem is solved theoretically for two charged conductive spheres of equal radii and arbitrary electric charges. An assessment of the deviation is given as a function of the ratio of the distance R between the spheres' centers to the sum of their radii. For this purpose, relations between FC and F derived in a preceding work of ours are employed to generalize the Coulomb interactions. At relatively short distances between the spheres, the Coulomb force FC, estimated as if induced by charges situated at the centers of the spheres, differs significantly from the real force F of interaction between the spheres. In the case of zero and non-zero charge we prove that with increasing distance between the two spheres, the force F decreases rapidly, virtually to zero, i.e., it appears to be a short-acting force.
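
    For reference, the point-charge expression against which the real sphere-sphere force F is compared is Coulomb's law in SI units (a standard fact, not a result of the paper), with q1 and q2 the charges and R the distance between the sphere centers:

      F_C = \frac{1}{4\pi\varepsilon_0}\,\frac{q_1 q_2}{R^{2}}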

  1. Intrinsic strength of sodium borosilicate glass fibers by using a two-point bending technique

    International Nuclear Information System (INIS)

    Nishikubo, Y; Yoshida, S; Sugawara, T; Matsuoka, J

    2011-01-01

    Flaws existing on a glass surface can be divided into two types, extrinsic and intrinsic. Whereas the extrinsic flaws are generated during processing and use, the intrinsic flaws are regarded as structural defects which result from thermal fluctuations. It is known that the extrinsic flaws determine glass strength, but the effects of the intrinsic flaws on the glass strength are still unclear. Since the averaged bond strength and the intrinsic flaws are considered to affect the intrinsic strength, the intrinsic strength of glass should depend on the glass composition. In this study, the intrinsic failure strain of glass fibers with the compositions 20Na2O-40xB2O3-(80-40x)SiO2 (mol%, x = 0, 0.5, 1.0, 1.5) was measured by using a two-point bending technique. The failure strength was estimated from the failure strain and the Young's modulus of the glass. It is elucidated that the two-point bending strength of the glass fibers decreases with increasing B2O3 content in the glass. The effects of the glass composition on the intrinsic strength are discussed in terms of elastic and inelastic deformation behaviors prior to fracture.
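
    The strength estimate mentioned above follows from the linear elastic relation between failure stress, Young's modulus and the failure strain measured in two-point bending, written here in its simplest form (any nonlinear elasticity correction used by the authors is not reproduced):

      \sigma_f \approx E\,\varepsilon_f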

  2. [Analysis and research on cleaning points of HVAC systems in public places].

    Science.gov (United States)

    Yang, Jiaolan; Han, Xu; Chen, Dongqing; Jin, Xin; Dai, Zizhu

    2010-03-01

    To analyze the cleaning points of HVAC systems and to provide a scientific basis for regulating the cleaning of HVAC systems. Based on survey results on the cleaning of HVAC systems around China over the past three years, we analyze the cleaning points of HVAC systems from various aspects, such as the major health risk factors of HVAC systems, the strategy for formulating the cleaning of HVAC systems, the cleaning methods and acceptance points for the air ducts and components of HVAC systems, on-site protection and individual protection, waste treatment and the cleaning of removed equipment, inspection of the cleaning results, video recording, and the final acceptance of the cleaning. An analysis of the major health risk factors of HVAC systems and the strategy for formulating the cleaning of HVAC systems is given. Specific methods for cleaning the air ducts, machine units, air ports, coil pipes and water cooling towers of HVAC systems, the acceptance points of HVAC systems and the requirements of the report on the final acceptance of the cleaning of HVAC systems are proposed. By analyzing the cleaning points of HVAC systems and proposing corresponding measures, this study provides a basis for carrying out the cleaning of HVAC systems, a novel technology service, in a scientific and regulated way, and lays a foundation for the revision of the existing cleaning regulations, which may generate technical and social benefits to some extent.

  3. Four points function fitted and first derivative procedure for determining the end points in potentiometric titration curves: statistical analysis and method comparison.

    Science.gov (United States)

    Kholeif, S A

    2001-06-01

    A new method that belongs to the differential category for determining the end points from potentiometric titration curves is presented. It uses a preprocessing step to find first-derivative values by fitting four data points in and around the region of inflection to a non-linear function, and then locates the end point, usually as a maximum or minimum, using an inverse parabolic interpolation procedure that has an analytical solution. The behavior and accuracy of the sigmoid and cumulative non-linear functions used are investigated against three factors. A statistical evaluation of the new method using linear least-squares method validation and multifactor data analysis is covered. The new method is generally applicable to symmetrical and unsymmetrical potentiometric titration curves, and the end point is calculated using numerical procedures only. It outperforms the "parent" regular differential method at almost all factor levels and gives accurate results comparable to the true or estimated true end points. End points calculated from selected experimental titration curves are also compared between the new method and methods of the equivalence-point category, such as those of Gran or Fortuin.
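
    The inverse parabolic interpolation step has a closed-form solution: given three points bracketing the extremum of the derivative curve, the abscissa of the vertex of the parabola through them is computed directly. The sketch below illustrates that step on synthetic data, using simple finite differences in place of the paper's four-point non-linear fit; all names and data are illustrative.

      # End-point location by inverse parabolic interpolation of the first derivative (sketch).
      import numpy as np

      def parabola_vertex(x1, y1, x2, y2, x3, y3):
          """Abscissa of the vertex of the parabola through three points (analytical formula)."""
          num = (x2 - x1) ** 2 * (y2 - y3) - (x2 - x3) ** 2 * (y2 - y1)
          den = (x2 - x1) * (y2 - y3) - (x2 - x3) * (y2 - y1)
          return x2 - 0.5 * num / den

      # Synthetic sigmoid-like titration curve with the inflection (end point) at V = 10.00 mL.
      V = np.arange(8.0, 12.0, 0.25)
      E = 300.0 + 120.0 * np.tanh((V - 10.0) / 0.4)          # electrode potential (mV)

      dE = np.gradient(E, V)                                  # finite-difference first derivative
      i = int(np.argmax(dE))                                  # grid point nearest the maximum
      V_end = parabola_vertex(V[i - 1], dE[i - 1], V[i], dE[i], V[i + 1], dE[i + 1])
      print(f"estimated end point: {V_end:.3f} mL")           # close to 10.000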

  4. Ghost imaging with bucket detection and point detection

    Science.gov (United States)

    Zhang, De-Jian; Yin, Rao; Wang, Tong-Biao; Liao, Qing-Hua; Li, Hong-Guo; Liao, Qinghong; Liu, Jiang-Tao

    2018-04-01

    We experimentally investigate ghost imaging with bucket detection and point detection in which three types of illuminating sources are applied: (a) a pseudo-thermal light source; (b) an amplitude-modulated true thermal light source; (c) an amplitude-modulated laser source. Experimental results show that the quality of ghost images reconstructed with true thermal light or a laser beam is insensitive to the use of a bucket or point detector; however, the quality of ghost images reconstructed with pseudo-thermal light in the bucket detector case is better than that in the point detector case. Our theoretical analysis shows that this is due to the first-order transverse coherence of the illuminating source.

  5. Point of Care Testing Services Delivery: Policy Analysis using a ...

    African Journals Online (AJOL)

    Annals of Biomedical Sciences ... The service providers (hospital management) and the testing personnel are faced with the task of trying to explain these problems. Objective of the study: To carry out a critical policy analysis of the problems of point-of-care testing with the aim of identifying the causes of these problems and ...

  6. Two-dimensional analysis of motion artifacts, including flow effects

    International Nuclear Information System (INIS)

    Litt, A.M.; Brody, A.S.; Spangler, R.A.; Scott, P.D.

    1990-01-01

    The effects of motion on magnetic resonance images have been theoretically analyzed for the case of a point-like object in simple harmonic motion and for other one-dimensional trajectories. The authors of this paper extend this analysis to a generalized two-dimensional magnetization with an arbitrary motion trajectory. The authors provide specific solutions for the clinically relevant cases of the cross-sections of cylindrical objects in the body, such as the aorta, which has a roughly one-dimensional, simple harmonic motion during respiration. By extending the solution to include inhomogeneous magnetizations, the authors present a model which allows the effects of motion artifacts and flow artifacts to be analyzed simultaneously

  7. Assessing Performance of Multipurpose Reservoir System Using Two-Point Linear Hedging Rule

    Science.gov (United States)

    Sasireka, K.; Neelakantan, T. R.

    2017-07-01

    Reservoir operation is one of the important fields of water resource management. Innovative techniques in water resource management focus on optimizing the available water and decreasing the environmental impact of water utilization on the natural environment. In the operation of a multi-reservoir system, efficient regulation of releases to satisfy the demands of various purposes such as domestic supply, irrigation and hydropower can increase the benefit from the reservoir as well as significantly reduce the damage due to floods. Hedging rules are one of the emerging techniques in reservoir operation, reducing the severity of droughts by accepting a number of smaller shortages. The key objective of this paper is to maximize the minimum power production and improve the reliability of water supply for municipal and irrigation purposes by using a hedging rule. In this paper, a Type II two-point linear hedging rule is applied to improve the operation of the Bargi reservoir in the Narmada basin in India. The results obtained from simulation of the hedging rule are compared with results from the standard operating policy; the comparison shows that the application of the hedging rule significantly improved the reliability of water supply, the reliability of irrigation releases, and firm power production.
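
    The general shape of a two-point linear hedging rule can be sketched as follows: when the available water lies between two trigger points, the release is rationed linearly between a minimum fraction of demand and the full demand; outside that band, either the minimum ration or the full demand is released. The trigger values, the minimum ration, and the function names below are illustrative assumptions, not the calibrated parameters of the Bargi study.

      # Generic two-point linear hedging rule (illustrative parameters only).
      def hedged_release(available_water, demand, lower_trigger, upper_trigger, min_ratio=0.6):
          """Release as a function of the water available for the period (storage + inflow)."""
          if available_water >= upper_trigger:                 # plenty of water: meet the demand
              return demand
          if available_water <= lower_trigger:                 # severe shortage: minimum ration
              return min_ratio * demand
          frac = (available_water - lower_trigger) / (upper_trigger - lower_trigger)
          return (min_ratio + (1.0 - min_ratio) * frac) * demand   # linear between the two points

      # Example: demand of 100 units, hedging between trigger points 150 and 300 units.
      for water in (100, 200, 350):
          print(water, "->", round(hedged_release(water, 100.0, 150.0, 300.0), 1))
      # 100 -> 60.0, 200 -> 73.3, 350 -> 100.0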

  8. COMPARISON OF TWO VARIANTS OF A KATA TECHNIQUE (UNSU): THE NEUROMECHANICAL POINT OF VIEW

    Directory of Open Access Journals (Sweden)

    Francesco Felici

    2009-11-01

    Full Text Available The objective of this work was to characterize, from a neuromechanical point of view, a jump performed within the sequence of the kata Unsu by international top-level karateka. A modified jumping technique was proposed to improve the already acquired technique. The neuromechanical evaluation, paralleled by a refereeing judgment, was then used to compare the modified and classic techniques to test whether the modification could lead to a better performance capacity, e.g. a higher score during an official competition. To this purpose, four high-ranked karateka were recruited and instructed to perform the two jumps. Surface electromyographic signals were recorded in a bipolar mode from the vastus lateralis, rectus femoris, biceps femoris, gluteus maximus, and gastrocnemius muscles of both lower limbs. Mechanical data were collected by means of a stereophotogrammetric system and force platforms. Performance was associated with parameters characterizing the initial conditions of the aerial phase and with the maximal height of the CoM. The most critical elements having a negative influence on the arbitral evaluation were associated with quantitative error indicators. 3D reconstruction of the movement and videos were used to obtain the referee scores. The Unsu jump was divided into five phases (preparation, take-off, ascending flight, descending flight, and landing) and the critical elements were highlighted. When comparing the techniques, no difference was found in the pattern of sEMG activation of the throwing-leg muscles, while the push leg showed an earlier activation of the RF and GA muscles at the beginning of the modified technique. The only significant improvement associated with the modified technique was evidenced at the beginning of the aerial phase, while there was no significant improvement of the referee score. Nevertheless, the proposed neuromechanical analysis, finalized to correlate technique features with the core performance indicators, is new in the field and is a

  9. Point of sale tobacco advertisements in India.

    Science.gov (United States)

    Chaudhry, S; Chaudhry, S; Chaudhry, K

    2007-01-01

    The effect of any legislation depends on its implementation. Limited studies indicate that tobacco companies may tend to use such provisions for surrogate advertising. A point-of-sale advertisement provision has been included in the Indian tobacco control legislation. The study was undertaken to assess the Indian scenario in this regard, i.e. to assess whether there are any violations of the point-of-sale tobacco advertisement provision under India's comprehensive tobacco control legislation in different parts of India. Boards over various shops showing advertisements of tobacco products were observed in the cities of Delhi, Mumbai, Kolkata, Trivandrum and Jaipur between September 2005 and March 2006. Point-of-sale advertisements mushroomed after the implementation of the 2004 tobacco control legislation. Tobacco advertisement boards fully satisfying the point-of-sale provision were practically non-existent. The most common violation was a board of larger size, but with the tobacco advertisement equal to the size indicated in the legislation and the remaining area often showing a picture. Invariably, two boards were placed together to give the impression of a large single repetitive advertisement; more than two boards together was uncommon. Tobacco advertisement boards were also observed on closed shops/warehouses, on shops not selling tobacco products and on several adjacent shops. The purpose of the point-of-sale advertisements seems to be surrogate advertisement of tobacco products, mainly cigarettes.

  10. Biospectral analysis of the bladder channel point in chronic low back pain patients

    Science.gov (United States)

    Vidal, Alberto Espinosa; Nava, Juan José Godina; Segura, Miguel Ángel Rodriguez; Bastida, Albino Villegas

    2012-10-01

    Chronic pain is the main cause of disability in people of productive age and is a public health problem that affects both the patient and society. On the other hand, there is no instrument to measure it; it is only estimated using subjective variables. A biospectral analysis of a bladder channel point is proposed as a diagnostic method for chronic low back pain patients. Materials and methods: We employed a study group of chronic low back pain patients and a control group of patients without low back pain. The visual analog scale (VAS) was applied to determine the level of pain. Bioelectric variables were measured for 10 seconds and the respective biostatistical analyses were made. Results: Biospectral analysis in the frequency domain shows a depression in the 60-300 Hz frequency range proportional to the chronicity of low back pain, compared against healthy patients.

  11. Hypertriglyceridemic waist phenotype in primary health care: comparison of two cutoff points

    Directory of Open Access Journals (Sweden)

    Braz MAD

    2017-09-01

    Full Text Available Marina Augusta Dias Braz,1 Jallyne Nunes Vieira,1 Flayane Oliveira Gomes,1 Priscilla Rafaella da Silva,1 Ohanna Thays de Medeiros Santos,1 Ilanna Marques Gomes da Rocha,2 Iasmin Matias de Sousa,2 Ana Paula Trussardi Fayh2 1Faculdade de Ciências da Saúde do Trairi, Universidade Federal do Rio Grande do Norte (UFRN), Santa Cruz, 2Department of Nutrition, Centro de Ciências da Saúde, UFRN, Natal, Rio Grande do Norte, Brazil. Objective: We aimed to evaluate the prevalence of the hypertriglyceridemic waist (HTGW) phenotype among users of primary health care using two different cutoff points from the literature. Methods: We evaluated adults and elderly individuals of both sexes who attended the same level of primary health care. The HTGW phenotype was determined from measurements of waist circumference (WC) and triglyceride levels and compared using the cutoff points proposed by the National Cholesterol Education Program – NCEP/ATP III (WC ≥102 cm for men and ≥88 cm for women; triglyceride levels ≥150 mg/dL for both sexes) and by Lemieux et al (WC ≥90 cm for men and ≥85 cm for women; triglyceride levels ≥177 mg/dL for both). Results: Within the sample of 437 individuals, 73.7% were female. The prevalence of the HTGW phenotype was high and statistically different between the two cutoff points. The prevalence was higher using the NCEP/ATP III criteria than with the criteria proposed by Lemieux et al (36.2% and 32.5%, respectively, p<0.05). Individuals with the phenotype also presented alterations in other traditional cardiovascular risk markers. Conclusion: The HTGW phenotype identified a high prevalence of cardiovascular risk in the population, with higher values under the NCEP/ATP III cutoff points. The difference in the frequency of risk alerts us to the need to establish cutoff points for the Brazilian population. Keywords: abdominal obesity, cardiovascular disease, dyslipidemia, cardiovascular risk
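
    For readers who want to reproduce the comparison, the two cutoff sets quoted in the abstract can be encoded directly. The function below is a minimal sketch using those published thresholds, with sex coded as "M"/"F"; it is an illustration, not the authors' analysis script.

```python
# Flag the hypertriglyceridemic waist (HTGW) phenotype under the two cutoff sets
# compared in the study (thresholds taken from the abstract above).
def htgw_phenotype(sex, waist_cm, triglycerides_mg_dl, criteria="NCEP/ATP III"):
    if criteria == "NCEP/ATP III":
        wc_cut = 102 if sex == "M" else 88
        tg_cut = 150
    else:  # Lemieux et al.
        wc_cut = 90 if sex == "M" else 85
        tg_cut = 177
    return waist_cm >= wc_cut and triglycerides_mg_dl >= tg_cut
```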

  12. Metallic and antiferromagnetic fixed points from gravity

    Science.gov (United States)

    Paul, Chandrima

    2018-06-01

    We consider an SU(2) × U(1) gauge theory coupled to matter fields in the adjoint representation and study the renormalization group flow. We construct the Callan-Symanzik equation and the corresponding β functions and study the fixed points. We find that there are two fixed points, showing metallic and antiferromagnetic behavior, and we show that the metallic phase develops an instability if certain parametric conditions are satisfied.

  13. Feedwater line break accident analysis for SMART in the view point of minimum departure from nucleate boiling ratio

    International Nuclear Information System (INIS)

    Kim Soo Hyoung; Bae, Kyoo Hwan; Chung, Young Jong; Kim, Keung Koo

    2012-01-01

    The KAERI and KEPCO consortium performed the standard design of SMART (System-integrated Modular Advanced ReacTor) from 2009 to 2011 and obtained standard design approval in July 2012. To confirm the safety of the SMART design, all of the safety-related design basis events were analyzed. A feedwater line break (FLB) is a postulated accident and is the limiting accident for a decrease in heat removal by the secondary system from the viewpoint of peak RCS pressure. It is well known that the departure from nucleate boiling ratio (DNBR) increases with increasing system pressure for conventional nuclear power plants. However, SMART has a comparatively low RCS flow rate and may therefore show different DNBR behavior depending on the system pressure. To confirm that SMART is safe in case of an FLB accident, the Korean nuclear regulatory body required that the safety analysis be performed from the viewpoint of the minimum DNBR (MDNBR) during the licensing review process for the standard design approval (SDA) of the SMART design. In this paper, the safety analysis results of the FLB accident for SMART from the viewpoint of MDNBR are described

  14. Feedwater line break accident analysis for SMART in the view point of minimum departure from nucleate boiling ratio

    Energy Technology Data Exchange (ETDEWEB)

    Kim Soo Hyoung; Bae, Kyoo Hwan; Chung, Young Jong; Kim, Keung Koo [KAERI, Daejeon (Korea, Republic of)

    2012-10-15

    The KAERI and KEPCO consortium performed the standard design of SMART (System-integrated Modular Advanced ReacTor) from 2009 to 2011 and obtained standard design approval in July 2012. To confirm the safety of the SMART design, all of the safety-related design basis events were analyzed. A feedwater line break (FLB) is a postulated accident and is the limiting accident for a decrease in heat removal by the secondary system from the viewpoint of peak RCS pressure. It is well known that the departure from nucleate boiling ratio (DNBR) increases with increasing system pressure for conventional nuclear power plants. However, SMART has a comparatively low RCS flow rate and may therefore show different DNBR behavior depending on the system pressure. To confirm that SMART is safe in case of an FLB accident, the Korean nuclear regulatory body required that the safety analysis be performed from the viewpoint of the minimum DNBR (MDNBR) during the licensing review process for the standard design approval (SDA) of the SMART design. In this paper, the safety analysis results of the FLB accident for SMART from the viewpoint of MDNBR are described.

  15. Mutual information as a two-point correlation function in stochastic lattice models

    International Nuclear Information System (INIS)

    Müller, Ulrich; Hinrichsen, Haye

    2013-01-01

    In statistical physics entropy is usually introduced as a global quantity which expresses the amount of information that would be needed to specify the microscopic configuration of a system. However, for lattice models with infinitely many possible configurations per lattice site it is also meaningful to introduce entropy as a local observable that describes the information content of a single lattice site. Likewise, the mutual information between two sites can be interpreted as a two-point correlation function which quantifies how much information a lattice site has about the state of another one and vice versa. Studying a particular growth model we demonstrate that the mutual information exhibits scaling properties that are consistent with the established phenomenological scaling picture. (paper)
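
    As a rough illustration of treating mutual information as a two-point function, the sketch below estimates it for a pair of lattice sites from sampled configurations. The sampling itself (the growth-model dynamics) is assumed to be available elsewhere; this is a generic plug-in estimator, not the paper's formalism.

```python
import numpy as np
from collections import Counter

# Plug-in estimate of the mutual information I(i; j) between two lattice sites, in nats,
# from a (n_samples x n_sites) array of sampled discrete configurations.
def mutual_information(samples, i, j):
    n = len(samples)
    pi = Counter(samples[:, i])
    pj = Counter(samples[:, j])
    pij = Counter(zip(samples[:, i], samples[:, j]))
    mi = 0.0
    for (a, b), count in pij.items():
        p_ab = count / n
        mi += p_ab * np.log(p_ab / ((pi[a] / n) * (pj[b] / n)))
    return mi
```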

  16. Two-Field Analysis of No-Scale Supergravity Inflation

    CERN Document Server

    Ellis, John; Nanopoulos, Dimitri V; Olive, Keith A

    2015-01-01

    Since the building-blocks of supersymmetric models include chiral superfields containing pairs of effective scalar fields, a two-field approach is particularly appropriate for models of inflation based on supergravity. In this paper, we generalize the two-field analysis of the inflationary power spectrum to supergravity models with arbitrary K\"ahler potential. We show how two-field effects in the context of no-scale supergravity can alter the model predictions for the scalar spectral index $n_s$ and the tensor-to-scalar ratio $r$, yielding results that interpolate between the Planck-friendly Starobinsky model and BICEP2-friendly predictions. In particular, we show that two-field effects in a chaotic no-scale inflation model with a quadratic potential are capable of reducing $r$ to very small values $\ll 0.1$. We also calculate the non-Gaussianity measure $f_{\rm NL}$, finding that it is well below the current experimental sensitivity.

  17. Maximum Power Point Tracking for Cascaded PV-Converter Modules Using Two-Stage Particle Swarm Optimization.

    Science.gov (United States)

    Mao, Mingxuan; Duan, Qichang; Zhang, Li; Chen, Hao; Hu, Bei; Duan, Pan

    2017-08-24

    The paper presents a novel two-stage particle swarm optimization (PSO) for the maximum power point tracking (MPPT) control of a PV system consisting of cascaded PV-converter modules, under partial shading conditions (PSCs). In this scheme, the grouping method of the shuffled frog leaping algorithm (SFLA) is incorporated with the basic PSO algorithm, ensuring fast and accurate searching of the global extremum. An adaptive speed factor is also introduced to improve its convergence speed. A PWM algorithm enabling permuted switching of the PV sources is applied. The method enables this PV system to achieve the maximum power generation for any number of PV and converter modules. Simulation studies of the proposed MPPT scheme are performed on a system having two chained PV buck-converter modules and a dc-ac H-bridge connected at its terminals for supplying an AC load. The results show that this type of PV system allows each module to achieve the maximum power generation according to its illumination level without affecting the others, and the proposed new control method gives significantly higher power output compared with the conventional P&O and PSO methods.
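
    The basic PSO step that such MPPT schemes build on is easy to sketch. The code below is not the paper's two-stage SFLA-grouped PSO with an adaptive speed factor; it is a plain PSO over candidate duty cycles, and `measure_power` is a placeholder for reading the PV output at a given duty cycle.

```python
import numpy as np

# Plain PSO search for the duty cycle that maximizes measured PV power.
def pso_mppt(measure_power, n_particles=8, iters=30, w=0.5, c1=1.5, c2=1.5):
    rng = np.random.default_rng(0)
    d = rng.uniform(0.0, 1.0, n_particles)            # candidate duty cycles in [0, 1]
    v = np.zeros(n_particles)
    pbest = d.copy()
    pbest_f = np.array([measure_power(x) for x in d])
    gbest = pbest[np.argmax(pbest_f)]
    for _ in range(iters):
        r1, r2 = rng.random(n_particles), rng.random(n_particles)
        v = w * v + c1 * r1 * (pbest - d) + c2 * r2 * (gbest - d)
        d = np.clip(d + v, 0.0, 1.0)
        f = np.array([measure_power(x) for x in d])
        improved = f > pbest_f
        pbest[improved], pbest_f[improved] = d[improved], f[improved]
        gbest = pbest[np.argmax(pbest_f)]
    return gbest

# Example with a toy power curve peaking at d = 0.6:
# pso_mppt(lambda d: -(d - 0.6) ** 2)
```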

  18. Analysis on Two Typical Landslide Hazard Phenomena in The Wenchuan Earthquake by Field Investigations and Shaking Table Tests

    Directory of Open Access Journals (Sweden)

    Changwei Yang

    2015-08-01

    Full Text Available Based on our field investigations of landslide hazards in the Wenchuan earthquake, some findings can be reported: (1) multi-aspect terrain, isolated mountains facing empty space, and thin ridges reacted intensely to the earthquake and were seriously damaged; (2) the slope angles of most landslides were larger than 45°. Considering the above disaster phenomena, the reasons are analyzed based on shaking table tests of one-sided, two-sided and four-sided slopes. The analysis results show that: (1) the amplification of the peak accelerations of four-sided slopes is stronger than that of two-sided slopes, while that of one-sided slopes is the weakest, which can indirectly explain why the damage to such terrain was the most serious; (2) the amplification of the peak accelerations gradually increases as the slope angle increases, with two inflection points at slope angles of 45° and 50°, respectively, which can explain the seismic phenomenon whereby landslide hazards mainly occur on slopes whose angle is larger than 45°. The amplification along the slope strike direction is basically consistent, and the step is smooth.

  19. Use of digital image analysis to estimate fluid permeability of porous materials: Application of two-point correlation functions

    International Nuclear Information System (INIS)

    Berryman, J.G.; Blair, S.C.

    1986-01-01

    Scanning electron microscope images of cross sections of several porous specimens have been digitized and analyzed using image processing techniques. The porosity and specific surface area may be estimated directly from measured two-point spatial correlation functions. The measured values of porosity and image specific surface were combined with known values of electrical formation factors to estimate fluid permeability using one version of the Kozeny-Carman empirical relation. For glass bead samples with measured permeability values in the range of a few darcies, our estimates agree well (±10-20%) with the measurements. For samples of Ironton-Galesville sandstone with a permeability in the range of hundreds of millidarcies, our best results agree with the laboratory measurements again within about 20%. For Berea sandstone with still lower permeability (tens of millidarcies), our predictions from the images agree within 10-30%. Best results for the sandstones were obtained by using the porosities obtained at magnifications of about 100× (since less resolution and better statistics are required) and the image specific surface obtained at magnifications of about 500× (since greater resolution is required)
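
    In outline, the image-based pipeline is: estimate the two-point correlation function S2(r) of the binarized pore image, read off the porosity as S2(0) and the specific surface from the slope of S2 at the origin, then combine these with the electrical formation factor in a Kozeny-Carman-type relation. The sketch below assumes an isotropic medium and a generic prefactor; it does not reproduce the exact empirical form used by the authors.

```python
import numpy as np

def two_point_correlation(img, max_lag):
    """img: 2D boolean array (True = pore). Returns S2(r) along the x axis, r in pixels."""
    s2 = []
    for r in range(max_lag + 1):
        a, b = img[:, : img.shape[1] - r], img[:, r:]
        s2.append(np.mean(a & b))
    return np.array(s2)

def permeability_estimate(s2, pixel_size, formation_factor, c=0.2):
    porosity = s2[0]                            # S2(0) equals the porosity
    slope = (s2[1] - s2[0]) / pixel_size        # dS2/dr near the origin
    specific_surface = -4.0 * slope             # Debye relation for isotropic media
    # Generic Kozeny-Carman-type estimate: k ~ c * phi / (F * s^2); c is illustrative.
    return c * porosity / (formation_factor * specific_surface ** 2)
```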

  20. Electron-density critical points analysis and catastrophe theory to forecast structure instability in periodic solids.

    Science.gov (United States)

    Merli, Marcello; Pavese, Alessandro

    2018-03-01

    The critical points analysis of electron density, i.e. ρ(x), from ab initio calculations is used in combination with the catastrophe theory to show a correlation between ρ(x) topology and the appearance of instability that may lead to transformations of crystal structures, as a function of pressure/temperature. In particular, this study focuses on the evolution of coalescing non-degenerate critical points, i.e. such that ∇ρ(x_c) = 0 and λ_1, λ_2, λ_3 ≠ 0 [λ being the eigenvalues of the Hessian of ρ(x) at x_c], towards degenerate critical points, i.e. ∇ρ(x_c) = 0 and at least one λ equal to zero. The catastrophe theory formalism provides a mathematical tool to model ρ(x) in the neighbourhood of x_c and allows one to rationalize the occurrence of instability in terms of electron-density topology and Gibbs energy. The phase/state transitions that TiO2 (rutile structure), MgO (periclase structure) and Al2O3 (corundum structure) undergo because of pressure and/or temperature are here discussed. An agreement of 3-5% is observed between the theoretical model and experimental pressure/temperature of transformation.

  1. Implementation of the Two-Point Angular Correlation Function on a High-Performance Reconfigurable Computer

    Directory of Open Access Journals (Sweden)

    Volodymyr V. Kindratenko

    2009-01-01

    Full Text Available We present a parallel implementation of an algorithm for calculating the two-point angular correlation function as applied in the field of computational cosmology. The algorithm has been specifically developed for a reconfigurable computer. Our implementation utilizes a microprocessor and two reconfigurable processors on a dual-MAP SRC-6 system. The two reconfigurable processors are used as two application-specific co-processors. Two independent computational kernels are simultaneously executed on the reconfigurable processors while data pre-fetching from disk and initial data pre-processing are executed on the microprocessor. The overall end-to-end algorithm execution speedup achieved by this implementation is over 90× as compared to a sequential implementation of the algorithm executed on a single 2.8 GHz Intel Xeon microprocessor.
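
    The computational kernel being accelerated here is pair counting: histogramming the angular separations between catalogue points. The NumPy sketch below shows only that kernel (a full estimator such as Landy-Szalay also needs data-random and random-random counts) and is O(N²) in memory, so it is suitable only for small catalogues, unlike the hardware implementation described above.

```python
import numpy as np

# Histogram of angular separations (in degrees) between all unique pairs of sky points.
def angular_separation_histogram(ra_deg, dec_deg, bin_edges_deg):
    ra, dec = np.radians(ra_deg), np.radians(dec_deg)
    xyz = np.stack([np.cos(dec) * np.cos(ra),
                    np.cos(dec) * np.sin(ra),
                    np.sin(dec)], axis=1)               # unit vectors on the sphere
    cos_theta = np.clip(xyz @ xyz.T, -1.0, 1.0)
    theta = np.degrees(np.arccos(cos_theta))
    iu = np.triu_indices(len(ra), k=1)                  # unique pairs only
    counts, _ = np.histogram(theta[iu], bins=bin_edges_deg)
    return counts
```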

  2. Two-point discrimination and kinesthetic sense disorders in productive age individuals with carpal tunnel syndrome.

    Science.gov (United States)

    Wolny, Tomasz; Saulicz, Edward; Linek, Paweł; Myśliwiec, Andrzej

    2016-06-16

    The aim of this study was to evaluate two-point discrimination (2PD) sense and kinesthetic sense dysfunctions in carpal tunnel syndrome (CTS) patients compared with a healthy group. The 2PD sense, muscle force, and kinesthetic differentiation (KD) of strength; the range of motion in radiocarpal articulation; and KD of motion were assessed. The 2PD sense assessment showed significantly higher values in all the examined fingers in the CTS group than in those in the healthy group (p<0.01). There was a significant difference in the percentage value of error in KD of pincer and cylindrical grip (p<0.01) as well as in KD of flexion and extension movement in the radiocarpal articulation (p<0.01) between the studied groups. There are significant differences in the 2PD sense and KD of strength and movement between CTS patients compared with healthy individuals.

  3. Seafood safety: economics of hazard analysis and Critical Control Point (HACCP) programmes

    National Research Council Canada - National Science Library

    Cato, James C

    1998-01-01

    .... This document on economic issues associated with seafood safety was prepared to complement the work of the Service in seafood technology, plant sanitation and Hazard Analysis Critical Control Point (HACCP) implementation...

  4. A Polygon and Point-Based Approach to Matching Geospatial Features

    Directory of Open Access Journals (Sweden)

    Juan J. Ruiz-Lendínez

    2017-12-01

    Full Text Available A methodology for matching bidimensional entities is presented in this paper. The matching is proposed for both area and point features extracted from geographical databases. The procedure used to obtain homologous entities is a two-step process: the first matching, polygon to polygon (inter-element matching), is obtained by means of a genetic algorithm that allows the classification of area features from two geographical databases. After this, we apply a point to point matching (intra-element matching) based on the comparison of changes in their turning functions. This study shows that genetic algorithms are suitable for matching polygon features even if these features are quite different. Our results show up to 40% of matched polygons with differences in geometrical attributes. With regard to point matching of the vertices of homologous polygons, the turning function and threshold values proposed in this paper provide a useful method for obtaining precise vertex matching.
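
    The intra-element step compares polygon outlines through their turning functions. The sketch below computes a simple cumulative-turning-angle representation and an L2-type distance between two polygons; the exact normalization and threshold values used in the paper are not reproduced here.

```python
import numpy as np

# Cumulative turning angle of a closed polygon as a function of normalized arc length.
def turning_function(vertices):
    v = np.asarray(vertices, dtype=float)
    edges = np.roll(v, -1, axis=0) - v                   # edge vectors, closing the polygon
    lengths = np.linalg.norm(edges, axis=1)
    headings = np.arctan2(edges[:, 1], edges[:, 0])
    turns = np.diff(np.concatenate([headings, headings[:1]]))
    turns = (turns + np.pi) % (2 * np.pi) - np.pi        # wrap turns to (-pi, pi]
    arc = np.cumsum(lengths) / lengths.sum()             # normalized arc length
    return arc, np.cumsum(turns)

# Simple L2 distance between the turning functions of two polygons.
def turning_distance(poly_a, poly_b, n_samples=200):
    s = np.linspace(0.0, 1.0, n_samples)
    fa = np.interp(s, *turning_function(poly_a))
    fb = np.interp(s, *turning_function(poly_b))
    return np.sqrt(np.mean((fa - fb) ** 2))
```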

  5. Quantum Multicriticality near the Dirac-Semimetal to Band-Insulator Critical Point in Two Dimensions: A Controlled Ascent from One Dimension

    Science.gov (United States)

    Roy, Bitan; Foster, Matthew S.

    2018-01-01

    We compute the effects of generic short-range interactions on gapless electrons residing at the quantum critical point separating a two-dimensional Dirac semimetal and a symmetry-preserving band insulator. The electronic dispersion at this critical point is anisotropic (E_k = ±√(v²k_x² + b²k_y^(2n)) with n = 2), which results in unconventional scaling of thermodynamic and transport quantities. Because of the vanishing density of states [ϱ(E) ~ |E|^(1/n)], this anisotropic semimetal (ASM) is stable against weak short-range interactions. However, for stronger interactions, the direct Dirac-semimetal to band-insulator transition can either (i) become a fluctuation-driven first-order transition (although unlikely in a particular microscopic model considered here, the anisotropic honeycomb lattice extended Hubbard model) or (ii) get avoided by an intervening broken-symmetry phase. We perform a controlled renormalization group analysis with the small parameter ε = 1/n, augmented with a 1/n expansion (parametrically suppressing quantum fluctuations in the higher dimension), by perturbing away from the one-dimensional limit, realized by setting ε = 0 and n → ∞. We identify charge density wave (CDW), antiferromagnet (AFM), and singlet s-wave superconductivity as the three dominant candidates for broken symmetry. The onset of any such order at strong coupling (~ε) takes place through a continuous quantum phase transition across an interacting multicritical point, where the ordered phase, band insulator, Dirac, and anisotropic semimetals meet. We also present the phase diagram of an extended Hubbard model for the ASM, obtained via the controlled deformation of its counterpart in one dimension. The latter displays spin-charge separation and instabilities to CDW, spin density wave, and Luther-Emery liquid phases at arbitrarily weak coupling. The spin density wave and Luther-Emery liquid phases deform into pseudospin SU(2)-symmetric quantum critical points separating the

  6. Quantum Multicriticality near the Dirac-Semimetal to Band-Insulator Critical Point in Two Dimensions: A Controlled Ascent from One Dimension

    Directory of Open Access Journals (Sweden)

    Bitan Roy

    2018-03-01

    Full Text Available We compute the effects of generic short-range interactions on gapless electrons residing at the quantum critical point separating a two-dimensional Dirac semimetal and a symmetry-preserving band insulator. The electronic dispersion at this critical point is anisotropic (E_{k}=±sqrt[v^{2}k_{x}^{2}+b^{2}k_{y}^{2n}] with n=2), which results in unconventional scaling of thermodynamic and transport quantities. Because of the vanishing density of states [ϱ(E)∼|E|^{1/n}], this anisotropic semimetal (ASM) is stable against weak short-range interactions. However, for stronger interactions, the direct Dirac-semimetal to band-insulator transition can either (i) become a fluctuation-driven first-order transition (although unlikely in a particular microscopic model considered here, the anisotropic honeycomb lattice extended Hubbard model) or (ii) get avoided by an intervening broken-symmetry phase. We perform a controlled renormalization group analysis with the small parameter ε=1/n, augmented with a 1/n expansion (parametrically suppressing quantum fluctuations in the higher dimension), by perturbing away from the one-dimensional limit, realized by setting ε=0 and n→∞. We identify charge density wave (CDW), antiferromagnet (AFM), and singlet s-wave superconductivity as the three dominant candidates for broken symmetry. The onset of any such order at strong coupling (∼ε) takes place through a continuous quantum phase transition across an interacting multicritical point, where the ordered phase, band insulator, Dirac, and anisotropic semimetals meet. We also present the phase diagram of an extended Hubbard model for the ASM, obtained via the controlled deformation of its counterpart in one dimension. The latter displays spin-charge separation and instabilities to CDW, spin density wave, and Luther-Emery liquid phases at arbitrarily weak coupling. The spin density wave and Luther-Emery liquid phases deform into pseudospin SU(2)-symmetric quantum critical

  7. Topology of streamlines and vorticity contours for two - dimensional flows

    DEFF Research Database (Denmark)

    Andersen, Morten

    on the vortex filament by the localised induction approximation the stream function is slightly modified and an extra parameter is introduced. In this setting two new flow topologies arise, but not more than two critical points occur for any combination of the parameters. The analysis of the closed form show...... by a point vortex above a wall in inviscid fluid. There is no reason to a priori expect equivalent results of the three vortex definitions. However, the study is mainly motivated by the findings of Kudela & Malecha (Fluid Dyn. Res. 41, 2009) who find good agreement between the vorticity and streamlines...

  8. Analysis of the flight dynamics of the Solar Maximum Mission (SMM) off-sun scientific pointing

    Science.gov (United States)

    Pitone, D. S.; Klein, J. R.; Twambly, B. J.

    1990-01-01

    Algorithms are presented which were created and implemented by the Goddard Space Flight Center's (GSFC's) Solar Maximum Mission (SMM) attitude operations team to support large-angle spacecraft pointing at scientific objectives. The mission objective of the post-repair SMM satellite was to study solar phenomena. However, because the scientific instruments, such as the Coronagraph/Polarimeter (CP) and the Hard X-ray Burst Spectrometer (HXRBS), were able to view objects other than the Sun, attitude operations support for attitude pointing at large angles from the nominal solar-pointing attitudes was required. Subsequently, attitude support for SMM was provided for scientific objectives such as Comet Halley, Supernova 1987A, Cygnus X-1, and the Crab Nebula. In addition, the analysis was extended to include the reverse problem, computing the right ascension and declination of a body given the off-Sun angles. This analysis led to the computation of the orbits of seven new solar comets seen in the field-of-view (FOV) of the CP. The activities necessary to meet these large-angle attitude-pointing sequences, such as slew sequence planning, viewing-period prediction, and tracking-bias computation, are described. Analysis is presented for the computation of maneuvers and pointing parameters relative to the SMM-unique, Sun-centered reference frame. Finally, science data and independent attitude solutions are used to evaluate the large-angle pointing performance.

  9. Test-retest reliability and minimal detectable change of two simplified 3-point balance measures in patients with stroke.

    Science.gov (United States)

    Chen, Yi-Miau; Huang, Yi-Jing; Huang, Chien-Yu; Lin, Gong-Hong; Liaw, Lih-Jiun; Lee, Shih-Chieh; Hsieh, Ching-Lin

    2017-10-01

    The 3-point Berg Balance Scale (BBS-3P) and 3-point Postural Assessment Scale for Stroke Patients (PASS-3P) were simplified from the BBS and PASS to overcome their complex scoring systems. The BBS-3P and PASS-3P are more feasible in busy clinical practice and showed similarly sound validity and responsiveness to the original measures. However, the reliability of the BBS-3P and PASS-3P was unknown, limiting their utility and the interpretability of scores. We aimed to examine the test-retest reliability and minimal detectable change (MDC) of the BBS-3P and PASS-3P in patients with stroke. Cross-sectional study. The rehabilitation departments of a medical center and a community hospital. A total of 51 chronic stroke patients (64.7% male). Both balance measures were administered twice, 7 days apart. The test-retest reliability of both the BBS-3P and the PASS-3P was examined using intraclass correlation coefficients (ICCs). The MDC and its percentage of the total score (MDC%) were calculated for each measure to examine the random measurement error. The ICC values of the BBS-3P and PASS-3P were 0.99 and 0.97, respectively. The MDC% (MDC) of the BBS-3P and PASS-3P were 9.1% (5.1 points) and 8.4% (3.0 points), respectively, indicating that both measures had small and acceptable random measurement errors. Our results showed that both the BBS-3P and the PASS-3P had good test-retest reliability, with small and acceptable random measurement error. These two simplified 3-level balance measures can provide reliable results over time. Our findings support the repeated administration of the BBS-3P and PASS-3P to monitor the balance of patients with stroke. The MDC values can help clinicians and researchers interpret the change scores more precisely.
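
    MDC figures of this kind are conventionally derived from the ICC and the standard deviation of the scores (SEM = SD·√(1−ICC), MDC95 = 1.96·√2·SEM); this standard formulation is assumed here for illustration rather than taken from the paper's methods section.

```python
import math

def mdc95(sd, icc):
    """Minimal detectable change at the 95% confidence level."""
    sem = sd * math.sqrt(1.0 - icc)          # standard error of measurement
    return 1.96 * math.sqrt(2.0) * sem

def mdc_percent(mdc, total_score):
    """MDC expressed as a percentage of the scale's total score."""
    return 100.0 * mdc / total_score
```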

  10. Three-dimensional distribution of cortical synapses: a replicated point pattern-based analysis

    Science.gov (United States)

    Anton-Sanchez, Laura; Bielza, Concha; Merchán-Pérez, Angel; Rodríguez, José-Rodrigo; DeFelipe, Javier; Larrañaga, Pedro

    2014-01-01

    The biggest problem when analyzing the brain is that its synaptic connections are extremely complex. Generally, the billions of neurons making up the brain exchange information through two types of highly specialized structures: chemical synapses (the vast majority) and so-called gap junctions (a substrate of one class of electrical synapse). Here we are interested in exploring the three-dimensional spatial distribution of chemical synapses in the cerebral cortex. Recent research has shown that the three-dimensional spatial distribution of synapses in layer III of the neocortex can be modeled by a random sequential adsorption (RSA) point process, i.e., synapses are distributed in space almost randomly, with the only constraint that they cannot overlap. In this study we hypothesize that RSA processes can also explain the distribution of synapses in all cortical layers. We also investigate whether there are differences in both the synaptic density and the spatial distribution of synapses between layers. Using combined focused ion beam milling and scanning electron microscopy (FIB/SEM), we obtained three-dimensional samples from the six layers of the rat somatosensory cortex and identified and reconstructed the synaptic junctions. A total tissue volume of approximately 4500 μm³ and around 4000 synapses from three different animals were analyzed. Different samples, layers and/or animals were aggregated and compared using RSA replicated spatial point processes. The results showed no significant differences in the synaptic distribution across the different rats used in the study. We found that RSA processes described the spatial distribution of synapses in all samples of each layer. We also found that the synaptic distribution in layers II to VI conforms to a common underlying RSA process with different densities per layer. Interestingly, the results showed that synapses in layer I had a slightly different spatial distribution from the other layers. PMID:25206325
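
    A random sequential adsorption process of the kind fitted above is straightforward to simulate: points are proposed uniformly at random and kept only if they do not come closer than an exclusion distance to any accepted point. The box size, exclusion distance and target count below are illustrative values, not the measured synaptic parameters.

```python
import numpy as np

# Simple 3D RSA simulation: non-overlapping points placed uniformly at random in a box.
def rsa_points(n_target, box=(10.0, 10.0, 10.0), d_min=0.5, max_tries=100_000, seed=0):
    rng = np.random.default_rng(seed)
    pts = []
    tries = 0
    while len(pts) < n_target and tries < max_tries:
        p = rng.uniform(0.0, 1.0, 3) * np.asarray(box)
        if all(np.linalg.norm(p - q) >= d_min for q in pts):
            pts.append(p)          # accept only if no overlap with existing points
        tries += 1
    return np.array(pts)
```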

  11. Alternative Splicing Studies of the Reactive Oxygen Species Gene Network in Populus Reveal Two Isoforms of High-Isoelectric-Point Superoxide Dismutase

    Science.gov (United States)

    Srivastava, Vaibhav; Srivastava, Manoj Kumar; Chibani, Kamel; Nilsson, Robert; Rouhier, Nicolas; Melzer, Michael; Wingsle, Gunnar

    2009-01-01

    Recent evidence has shown that alternative splicing (AS) is widely involved in the regulation of gene expression, substantially extending the diversity of numerous proteins. In this study, a subset of expressed sequence tags representing members of the reactive oxygen species gene network was selected from the PopulusDB database to investigate AS mechanisms in Populus. Examples of all known types of AS were detected, but intron retention was the most common. Interestingly, the closest Arabidopsis (Arabidopsis thaliana) homologs of half of the AS genes identified in Populus are not reportedly alternatively spliced. Two genes encoding the protein of most interest in our study (high-isoelectric-point superoxide dismutase [hipI-SOD]) have been found in black cottonwood (Populus trichocarpa), designated PthipI-SODC1 and PthipI-SODC2. Analysis of the expressed sequence tag libraries has indicated the presence of two transcripts of PthipI-SODC1 (hipI-SODC1b and hipI-SODC1s). Alignment of these sequences with the PthipI-SODC1 gene showed that hipI-SODC1b was 69 bp longer than hipI-SODC1s due to an AS event involving the use of an alternative donor splice site in the sixth intron. Transcript analysis showed that the splice variant hipI-SODC1b was differentially expressed, being clearly expressed in cambial and xylem, but not phloem, regions. In addition, immunolocalization and mass spectrometric data confirmed the presence of hipI-SOD proteins in vascular tissue. The functionalities of the spliced gene products were assessed by expressing recombinant hipI-SOD proteins and in vitro SOD activity assays. PMID:19176719

  12. Evenly spaced Detrended Fluctuation Analysis: Selecting the number of points for the diffusion plot

    Science.gov (United States)

    Liddy, Joshua J.; Haddad, Jeffrey M.

    2018-02-01

    Detrended Fluctuation Analysis (DFA) has become a widely used tool for examining the correlation structure of a time series and has provided insights into neuromuscular health and disease states. As the popularity of utilizing DFA in the human behavioral sciences has grown, understanding its limitations and how to properly determine parameters is becoming increasingly important. DFA examines the correlation structure of variability in a time series by computing α, the slope of the log SD vs. log n diffusion plot. When using the traditional DFA algorithm, the timescales, n, are often selected as a set of integers between a minimum and maximum length based on the number of data points in the time series. This produces non-uniformly distributed values of n in logarithmic scale, which influences the estimation of α due to a disproportionate weighting of the long-timescale regions of the diffusion plot. Recently, the evenly spaced DFA and evenly spaced average DFA algorithms were introduced. Both algorithms compute α by selecting k points for the diffusion plot based on the minimum and maximum timescales of interest, and improve the consistency of α estimates for simulated fractional Gaussian noise and fractional Brownian motion time series. Two issues that remain unaddressed are (1) how to select k and (2) whether the evenly spaced DFA algorithms show similar benefits when assessing human behavioral data. We manipulated k and examined its effects on the accuracy, consistency, and confidence limits of α in simulated and experimental time series. We demonstrate that the accuracy and consistency of α are relatively unaffected by the selection of k. However, the confidence limits of α narrow as k increases, dramatically reducing measurement uncertainty for single trials. We provide guidelines for selecting k and discuss potential uses of the evenly spaced DFA algorithms when assessing human behavioral data.
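
    The evenly spaced variant differs from traditional DFA only in how the window sizes n are chosen: k values spaced evenly in log scale between n_min and n_max rather than every admissible integer. The sketch below implements first-order detrending (DFA1) under that scheme; it is a simplified illustration, not the authors' evenly spaced average DFA code.

```python
import numpy as np

def dfa_evenly_spaced(x, n_min=4, n_max=None, k=20):
    """Return the scales, fluctuation function and scaling exponent alpha (DFA1)."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    n_max = n_max or N // 4
    y = np.cumsum(x - x.mean())                                  # integrated series
    ns = np.unique(np.geomspace(n_min, n_max, k).astype(int))    # k log-spaced scales
    fluct = []
    for n in ns:
        segs = N // n
        f2 = []
        for i in range(segs):
            seg = y[i * n:(i + 1) * n]
            t = np.arange(n)
            coef = np.polyfit(t, seg, 1)                         # local linear trend
            f2.append(np.mean((seg - np.polyval(coef, t)) ** 2))
        fluct.append(np.sqrt(np.mean(f2)))
    fluct = np.array(fluct)
    alpha = np.polyfit(np.log(ns), np.log(fluct), 1)[0]          # slope of diffusion plot
    return ns, fluct, alpha
```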

  13. A Lagrangian analysis of a two-dimensional airfoil with vortex shedding

    Energy Technology Data Exchange (ETDEWEB)

    Lipinski, Doug; Cardwell, Blake; Mohseni, Kamran [Department of Aerospace Engineering Sciences, University of Colorado, Boulder, CO 80309-0429 (United States)], E-mail: Mohseni@colorado.edu

    2008-08-29

    Using invariant material manifolds and flow topology, the flow behavior and structure of flow around a two-dimensional Eppler 387 airfoil is examined with an emphasis on vortex shedding and the time-dependent reattachment profile. The examination focuses on low Reynolds number (Re = 60 000) flow at several angles of attack. Using specialized software, we identify invariant manifolds in the flow and use these structures to illuminate the process of vortex formation and the periodic behavior of the reattachment profile. Our analysis concludes with a topological view of the flow, including fixed points and a discussion of phase plots and the frequency spectrum of several key points in the flow. The behavior of invariant manifolds directly relates to the flow topology and illuminates some aspects seen in phase space during vortex shedding. Furthermore, it highlights the reattachment behavior in ways not seen before.

  14. A Lagrangian analysis of a two-dimensional airfoil with vortex shedding

    International Nuclear Information System (INIS)

    Lipinski, Doug; Cardwell, Blake; Mohseni, Kamran

    2008-01-01

    Using invariant material manifolds and flow topology, the flow behavior and structure of flow around a two-dimensional Eppler 387 airfoil is examined with an emphasis on vortex shedding and the time-dependent reattachment profile. The examination focuses on low Reynolds number (Re = 60 000) flow at several angles of attack. Using specialized software, we identify invariant manifolds in the flow and use these structures to illuminate the process of vortex formation and the periodic behavior of the reattachment profile. Our analysis concludes with a topological view of the flow, including fixed points and a discussion of phase plots and the frequency spectrum of several key points in the flow. The behavior of invariant manifolds directly relates to the flow topology and illuminates some aspects seen in phase space during vortex shedding. Furthermore, it highlights the reattachment behavior in ways not seen before

  15. Dynamics of Multibody Systems Near Lagrangian Points

    Science.gov (United States)

    Wong, Brian

    This thesis examines the dynamics of a physically connected multi-spacecraft system in the vicinity of the Lagrangian points of a Circular Restricted Three-Body System. The spacecraft system is arranged in a wheel-spoke configuration with smaller and less massive satellites connected to a central hub using truss/beams or tether connectors. The kinematics of the system is first defined, and the kinetic, gravitational potential energy and elastic potential energy of the system are derived. The Assumed Modes Method is used to discretize the continuous variables of the system, and a general set of ordinary differential equations describing the dynamics of the connectors and the central hub are obtained using the Lagrangian method. The flexible body dynamics of the tethered and truss connected systems are examined using numerical simulations. The results show that these systems experienced only small elastic deflections when they are naturally librating or rotating at moderate angular velocities, and these deflections have relatively small effect on the attitude dynamics of the systems. Based on these results, it is determined that the connectors can be modeled as rigid when only the attitude dynamics of the system is of interest. The equations of motion of rigid satellites stationed at the Lagrangian points are linearized, and the stability conditions of the satellite are obtained from the linear equations. The required conditions are shown to be similar to those of geocentric satellites. Study of the linear equations also revealed the resonant conditions of rigid Lagrangian point satellites, when a librational natural frequency of the satellite matches the frequency of its station-keeping orbit leading to large attitude motions. For tethered satellites, the linear analysis shows that the tethers are in stable equilibrium when they lie along a line joining the two primary celestial bodies of the Three-Body System. Numerical simulations are used to study the long term

  16. APPLICABILITY ANALYSIS OF CLOTH SIMULATION FILTERING ALGORITHM FOR MOBILE LIDAR POINT CLOUD

    Directory of Open Access Journals (Sweden)

    S. Cai

    2018-04-01

    Full Text Available Classifying the original point clouds into ground and non-ground points is a key step in LiDAR (light detection and ranging) data post-processing. The cloth simulation filtering (CSF) algorithm, which is based on a physical process, has been validated as an accurate, automatic and easy-to-use algorithm for airborne LiDAR point clouds. As a new technique of three-dimensional data collection, mobile laser scanning (MLS) has gradually been applied in various fields, such as reconstruction of digital terrain models (DTM), 3D building modeling, and forest inventory and management. Compared with airborne LiDAR point clouds, mobile LiDAR point clouds have some different features (such as point density, distribution and complexity). Some filtering algorithms for airborne LiDAR data have been used directly on mobile LiDAR point clouds, but they did not give satisfactory results. In this paper, we explore the ability of the CSF algorithm for mobile LiDAR point clouds. Three samples with different terrain shapes are selected to test the performance of this algorithm, which respectively yield total errors of 0.44 %, 0.77 % and 1.20 %. Additionally, a large-area dataset is also tested to further validate the effectiveness of this algorithm, and the results show that it can quickly and accurately separate point clouds into ground and non-ground points. In summary, this algorithm is efficient and reliable for mobile LiDAR point clouds.

  17. Efficient point cloud data processing in shipbuilding: Reformative component extraction method and registration method

    Directory of Open Access Journals (Sweden)

    Jingyu Sun

    2014-07-01

    Full Text Available To survive in the current shipbuilding industry, it is of vital importance for shipyards to have the accuracy of ship components evaluated efficiently during most of the manufacturing steps. Evaluating components' accuracy by comparing each component's point cloud data, scanned by laser scanners, with the ship's design data in CAD format cannot be processed efficiently when (1) the components extracted from the point cloud data contain irregular obstacles, or when (2) the registration of the two data sets has no clear initial orientation. This paper presents reformative point cloud data processing methods to solve these problems. K-d tree construction on the point cloud data speeds up the neighbor search for each point. A region growing method performed on the neighbor points of a seed point extracts the continuous part of the component, while curved surface fitting and B-spline curve fitting at the edge of the continuous part recognize neighboring domains of the same component divided by obstacles' shadows. The ICP (Iterative Closest Point) algorithm conducts a registration of the two sets of data after a suitable registration direction is decided by principal component analysis. In experiments conducted at the shipyard, 200 curved shell plates were extracted from the scanned point cloud data, and registrations were conducted between them and the designed CAD data using the proposed methods for an accuracy evaluation. Results show that the proposed methods support accuracy-evaluation-oriented point cloud data processing efficiently in practice.
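
    The registration stage can be summarized in a few lines: nearest-neighbour correspondences from a k-d tree, then a closed-form best-fit rigid transform, iterated until convergence. The sketch below shows a single SVD-based ICP iteration using SciPy's k-d tree; the PCA-based choice of initial orientation and the component-extraction steps described above are omitted.

```python
import numpy as np
from scipy.spatial import cKDTree

# One ICP iteration: match each source point to its nearest target point, then solve
# for the best-fit rotation R and translation t (Kabsch/SVD) and apply them.
def icp_step(source, target):
    tree = cKDTree(target)
    _, idx = tree.query(source)                  # nearest target point for each source point
    matched = target[idx]
    mu_s, mu_t = source.mean(axis=0), matched.mean(axis=0)
    H = (source - mu_s).T @ (matched - mu_t)     # cross-covariance of centred clouds
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                     # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_t - R @ mu_s
    return source @ R.T + t, R, t
```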

  18. Improved DEA Cross Efficiency Evaluation Method Based on Ideal and Anti-Ideal Points

    Directory of Open Access Journals (Sweden)

    Qiang Hou

    2018-01-01

    Full Text Available A new model is introduced into the process of evaluating the efficiency values of decision making units (DMUs) through the data envelopment analysis (DEA) method. Two virtual DMUs, called the ideal point DMU and the anti-ideal point DMU, are combined to form a comprehensive model based on the DEA method. The ideal point DMU adopts a self-assessment system according to the efficiency concept, while the anti-ideal point DMU adopts an other-assessment system according to the fairness concept. The two distinctive ideal-point models are introduced into the DEA method and combined using a variance ratio. From the new model, a reasonable result can be obtained. Numerical examples are provided to illustrate the newly constructed model and to verify its rationality through comparison with the traditional DEA model.
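
    For context, the plain multiplier-form CCR efficiency score that cross-efficiency schemes start from can be computed with a small linear program. The sketch below is that baseline only; the ideal/anti-ideal virtual DMUs and the variance-ratio combination proposed in the paper are not reproduced.

```python
import numpy as np
from scipy.optimize import linprog

# Multiplier-form CCR efficiency of DMU k. X: inputs (n_dmu x m), Y: outputs (n_dmu x s).
def ccr_efficiency(X, Y, k):
    n, m = X.shape
    s = Y.shape[1]
    # Decision variables: [v (input weights, m), u (output weights, s)].
    c = np.concatenate([np.zeros(m), -Y[k]])              # maximize u . y_k
    A_eq = np.concatenate([X[k], np.zeros(s)])[None, :]   # v . x_k = 1
    b_eq = [1.0]
    A_ub = np.hstack([-X, Y])                             # u . y_j - v . x_j <= 0 for all j
    b_ub = np.zeros(n)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * (m + s), method="highs")
    return -res.fun
```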

  19. Single Point Vulnerability Analysis of Automatic Seismic Trip System

    International Nuclear Information System (INIS)

    Oh, Seo Bin; Chung, Soon Il; Lee, Yong Suk; Choi, Byung Pil

    2016-01-01

    Single Point Vulnerability (SPV) analysis is a process used to identify individual equipment whose failure alone will result in a reactor trip, a turbine generator failure, or a power reduction of more than 50%. The Automatic Seismic Trip System (ASTS) is a newly installed system to ensure the safety of the plant when an earthquake occurs. Since this system directly shuts down the reactor, the failure or malfunction of its components can cause a reactor trip more frequently than other systems. Therefore, an SPV analysis of the ASTS is necessary to maintain its essential performance. To analyze SPV for the ASTS, failure mode and effects analysis (FMEA) and fault tree analysis (FTA) were performed. In this study, the FMEA and FTA methods were used to select SPV equipment of the ASTS. The D/O, D/I and A/I cards, the seismic sensor, and the trip relay had an effect on the reactor trip, but their single failures will not cause a reactor trip. In conclusion, the ASTS is excluded as an SPV. These results can be utilized as basis data for ways to enhance facility reliability, such as design modification and improvement of the preventive maintenance procedure

  20. Single Point Vulnerability Analysis of Automatic Seismic Trip System

    Energy Technology Data Exchange (ETDEWEB)

    Oh, Seo Bin; Chung, Soon Il; Lee, Yong Suk [FNC Technology Co., Yongin (Korea, Republic of); Choi, Byung Pil [KHNP CRI, Daejeon (Korea, Republic of)

    2016-10-15

    Single Point Vulnerability (SPV) analysis is a process used to identify individual equipment whose failure alone will result in a reactor trip, a turbine generator failure, or a power reduction of more than 50%. The Automatic Seismic Trip System (ASTS) is a newly installed system to ensure the safety of the plant when an earthquake occurs. Since this system directly shuts down the reactor, the failure or malfunction of its components can cause a reactor trip more frequently than other systems. Therefore, an SPV analysis of the ASTS is necessary to maintain its essential performance. To analyze SPV for the ASTS, failure mode and effects analysis (FMEA) and fault tree analysis (FTA) were performed. In this study, the FMEA and FTA methods were used to select SPV equipment of the ASTS. The D/O, D/I and A/I cards, the seismic sensor, and the trip relay had an effect on the reactor trip, but their single failures will not cause a reactor trip. In conclusion, the ASTS is excluded as an SPV. These results can be utilized as basis data for ways to enhance facility reliability, such as design modification and improvement of the preventive maintenance procedure.

  1. MIDAS/PK code development using point kinetics model

    International Nuclear Information System (INIS)

    Song, Y. M.; Park, S. H.

    1999-01-01

    In this study, the MIDAS/PK code has been developed for analyzing ATWS (Anticipated Transients Without Scram) events, which can be among the severe accident initiating events. MIDAS is an integrated computer code based on the MELCOR code, developed by the Korea Atomic Energy Research Institute to support a severe accident risk reduction strategy. Meanwhile, the Chexal-Layman correlation in the current MELCOR, which was developed under BWR conditions, appears to be inappropriate for a PWR. To provide ATWS analysis capability to the MIDAS code, a point kinetics module, PKINETIC, was first developed as a stand-alone code whose reference model was selected from current accident analysis codes. In the next step, the MIDAS/PK code was developed by coupling PKINETIC with the MIDAS code, inter-connecting several thermal-hydraulic parameters between the two codes. Since the major concern in the ATWS analysis is the primary peak pressure during the first few minutes of the accident, the peak pressures from the PKINETIC module and from MIDAS/PK are compared with RETRAN calculations, showing good agreement between them. The MIDAS/PK code is considered valuable for analyzing the plant response during ATWS deterministically, especially for the early domestic Westinghouse plants which rely on operator procedures instead of an AMSAC (ATWS Mitigating System Actuation Circuitry) against ATWS. This capability of ATWS analysis is also important from the viewpoint of accident management and mitigation
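
    A point kinetics model of the kind referred to above couples the neutron density to six delayed-neutron precursor groups. The sketch below solves those standard equations with SciPy; the kinetics parameters and the step reactivity are generic illustrative values (not plant data), and reactivity feedback, which an ATWS analysis would require, is omitted.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative six-group kinetics parameters (generic U-235/PWR-like values).
BETA = np.array([2.1e-4, 1.4e-3, 1.3e-3, 2.6e-3, 7.5e-4, 2.7e-4])   # delayed fractions
LAM = np.array([0.0124, 0.0305, 0.111, 0.301, 1.14, 3.01])          # decay constants [1/s]
LAMBDA = 2.0e-5                                                      # generation time [s]

def point_kinetics(t, y, rho):
    """Standard point kinetics: y = [n, c1..c6], rho(t) is the reactivity."""
    n, c = y[0], y[1:]
    dndt = (rho(t) - BETA.sum()) / LAMBDA * n + np.dot(LAM, c)
    dcdt = BETA / LAMBDA * n - LAM * c
    return np.concatenate([[dndt], dcdt])

# Example: 0.1-dollar step reactivity insertion starting from equilibrium.
n0 = 1.0
c0 = BETA / (LAM * LAMBDA) * n0
sol = solve_ivp(point_kinetics, (0.0, 10.0), np.concatenate([[n0], c0]),
                args=(lambda t: 0.1 * BETA.sum(),), max_step=0.01)
```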

  2. Effect of the number of two-wheeled containers at a gathering point on the energetic workload and work efficiency in refuse collecting

    NARCIS (Netherlands)

    Kuijer, P. Paul F. M.; van der Beek, Allard J.; van Dieën, Jaap H.; Visser, Bart; Frings-Dresen, Monique H. W.

    2002-01-01

    The effect of the number of two-wheeled containers at a gathering point on the energetic workload and the work efficiency in refuse collecting was studied in order to design an optimal gathering point for two-wheeled containers. Three sizes of gathering points were investigated, i.e. with 2, 16 and

  3. Double Dirac Point Semimetal in Two-Dimensional Material: Ta2Se3

    OpenAIRE

    Ma, Yandong; Jing, Yu; Heine, Thomas

    2017-01-01

    Here, we report, based on first-principles calculations, a new stable 2D Dirac material, the Ta2Se3 monolayer. For this system, a stable layered bulk phase exists, and exfoliation should be possible. The Ta2Se3 monolayer is demonstrated to support two Dirac points close to the Fermi level, realizing an exotic 2D double Dirac semimetal. As in 2D single-Dirac and 2D node-line semimetals, spin-orbit coupling could introduce an insulating state in this new class of 2D Dirac semimetals. Moreover, the Dirac fe...

  4. H-point exciton transitions in bulk MoS2

    International Nuclear Information System (INIS)

    Saigal, Nihit; Ghosh, Sandip

    2015-01-01

    Reflectance and photoreflectance spectra of bulk MoS2 around its direct bandgap energy have been measured at 12 K. Apart from spectral features due to the A and B ground state exciton transitions at the K-point of the Brillouin zone, one observes additional features at nearby energies. Through lineshape analysis, the character of two prominent additional features is shown to be quite different from that of A and B. By comparing with reported electronic band structure calculations, these two additional features are identified as ground state exciton transitions at the H-point of the Brillouin zone involving two spin-orbit split valence bands. The excitonic energy gap at the H-point is 1.965 eV with a valence band splitting of 185 meV, while at the K-point the corresponding values are 1.920 eV and 205 meV, respectively

  5. Material-Point-Method Analysis of Collapsing Slopes

    DEFF Research Database (Denmark)

    Andersen, Søren; Andersen, Lars

    2009-01-01

    To understand the dynamic evolution of landslides and predict their physical extent, a computational model is required that is capable of analysing complex material behaviour as well as large strains and deformations. Here, a model is presented based on the so-called generalised-interpolation material-point method. Further, a deformed material description is introduced, based on time integration of the deformation gradient and utilising Gauss quadrature over the volume associated with each material point. The method has been implemented in a Fortran code and employed for the analysis of a landslide that took place during

  6. Results of safety analysis on PWR type nuclear power plants with two and three loops

    International Nuclear Information System (INIS)

    1979-01-01

    The results of a safety analysis of PWR type nuclear power plants with two and three loops, conducted by the Resource and Energy Agency in June 1979, are presented. This analysis simulated the phenomena relating to the pressurizer level gauge at the time of the TMI accident. The model plants were the Ikata nuclear power plant with two loops and the Takahama No. 1 nuclear power plant with three loops. The premise conditions for this safety analysis were as follows: 1) the main feedwater flow is totally lost suddenly during full power operation of the plants, and the feedwater pump is started manually 15 minutes after the accident initiation; 2) the relief valve on the pressurizer is kept open even after the pressure drop in the primary cooling system, and the primary cooling water flows out into the containment vessel through the rupture disc of the pressurizer relief tank; and 3) the electric circuit which sends out the signal of safety injection at abnormally low pressure in the reactor vessel is added, from the viewpoint of starting the operation of the emergency core cooling system as early as possible. According to the analytical results, the pressure in the reactor vessels changes little, the water level in the pressurizers can be regulated, and the water level in the steam generators recovers safely in both the two- and three-loop plants. It is recognized that both two- and three-loop plants show safe transient behaviour and that the integrity of the cores is maintained under the premise conditions. The evaluation of each analyzed result was conducted in detail. (Nakai, Y.)

  7. Fixed Points in Discrete Models for Regulatory Genetic Networks

    Directory of Open Access Journals (Sweden)

    Orozco Edusmildo

    2007-01-01

    Full Text Available It is desirable to have efficient mathematical methods to extract information about regulatory interactions between genes from repeated measurements of gene transcript concentrations. One piece of information of interest is when the dynamics reaches a steady state. In this paper we develop tools that enable the detection of steady states that are modeled by fixed points in discrete finite dynamical systems. We discuss two algebraic models, a univariate model and a multivariate model. We show that these two models are equivalent and that one can be converted to the other by means of a discrete Fourier transform. We give a new, more general definition of a linear finite dynamical system and we give a necessary and sufficient condition for such a system to be a fixed point system, that is, one in which all cycles are of length one. We show how this result for generalized linear systems can be used to determine when certain nonlinear systems (monomial dynamical systems over finite fields) are fixed point systems. We also show how it is possible to determine in polynomial time when an ordinary linear system (defined over a finite field) is a fixed point system. We conclude with a necessary condition for a univariate finite dynamical system to be a fixed point system.
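
    As a concrete illustration of the object being studied, the brute-force check below enumerates the fixed points of a finite dynamical system over F_q by testing f(x) = x on every state. This is exponential in the number of variables, unlike the polynomial-time criterion for linear systems established in the paper; it is offered only as a sketch of the definition.

```python
from itertools import product

def fixed_points(f, q, n):
    """Fixed points of f: F_q^n -> F_q^n, with states encoded as tuples in {0,...,q-1}^n."""
    return [x for x in product(range(q), repeat=n) if tuple(f(x)) == x]

# Example: a monomial system over F_2 in two variables, f(x0, x1) = (x0*x1, x0).
fps = fixed_points(lambda x: (x[0] * x[1] % 2, x[0] % 2), q=2, n=2)
# -> [(0, 0), (1, 1)]
```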

  8. A comprehensive evaluation of two MODIS evapotranspiration products over the conterminous United States: using point and gridded FLUXNET and water balance ET

    Science.gov (United States)

    Velpuri, Naga M.; Senay, Gabriel B.; Singh, Ramesh K.; Bohms, Stefanie; Verdin, James P.

    2013-01-01

    Remote sensing datasets are increasingly being used to provide spatially explicit large scale evapotranspiration (ET) estimates. Extensive evaluation of such large scale estimates is necessary before they can be used in various applications. In this study, two monthly MODIS 1 km ET products, MODIS global ET (MOD16) and Operational Simplified Surface Energy Balance (SSEBop) ET, are validated over the conterminous United States at both point and basin scales. Point scale validation was performed using eddy covariance FLUXNET ET (FLET) data (2001–2007) aggregated by year, land cover, elevation and climate zone. Basin scale validation was performed using annual gridded FLUXNET ET (GFET) and annual basin water balance ET (WBET) data aggregated by various hydrologic unit code (HUC) levels. Point scale validation using monthly data aggregated by years revealed that the MOD16 ET and SSEBop ET products showed overall comparable annual accuracies. For most land cover types, both ET products showed comparable results. However, SSEBop showed higher performance for Grassland and Forest classes; MOD16 showed improved performance in the Woody Savanna class. Accuracy of both the ET products was also found to be comparable over different climate zones. However, SSEBop data showed higher skill score across the climate zones covering the western United States. Validation results at different HUC levels over 2000–2011 using GFET as a reference indicate higher accuracies for MOD16 ET data. MOD16, SSEBop and GFET data were validated against WBET (2000–2009), and results indicate that both MOD16 and SSEBop ET matched the accuracies of the global GFET dataset at different HUC levels. Our results indicate that both MODIS ET products effectively reproduced basin scale ET response (up to 25% uncertainty) compared to CONUS-wide point-based ET response (up to 50–60% uncertainty) illustrating the reliability of MODIS ET products for basin-scale ET estimation. Results from this research
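
    As a rough illustration of the kind of agreement statistics behind such a validation (bias, RMSE and mean relative uncertainty between an ET product and a reference such as water-balance ET), the following hedged sketch uses made-up basin values; none of the numbers come from the study.

    ```python
    # Simple agreement statistics between a remote-sensing ET estimate and a
    # reference ET (e.g., basin water-balance ET). Values below are placeholders.
    import numpy as np

    def et_agreement(et_product, et_reference):
        """Return bias, RMSE and mean relative uncertainty (%) of product vs. reference."""
        prod = np.asarray(et_product, dtype=float)
        ref = np.asarray(et_reference, dtype=float)
        bias = np.mean(prod - ref)
        rmse = np.sqrt(np.mean((prod - ref) ** 2))
        rel_uncertainty = 100.0 * np.mean(np.abs(prod - ref) / ref)
        return bias, rmse, rel_uncertainty

    # Hypothetical annual basin ET values in mm/yr
    ssebop_et = [620.0, 540.0, 710.0, 480.0]
    wb_et = [600.0, 565.0, 690.0, 510.0]
    print(et_agreement(ssebop_et, wb_et))
    ```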

  9. Logarithmic two-point correlation functions from a z=2 Lifshitz model

    International Nuclear Information System (INIS)

    Zingg, T.

    2014-01-01

    The Einstein-Proca action is known to have asymptotically locally Lifshitz spacetimes as classical solutions. For dynamical exponent z=2, two-point correlation functions for fluctuations around such a geometry are derived analytically. It is found that the retarded correlators are stable in the sense that all quasinormal modes are situated in the lower half-plane of complex frequencies. Correlators in the longitudinal channel exhibit features that are reminiscent of a structure usually obtained in field theories that are logarithmic, i.e. contain an indecomposable but non-diagonalizable highest weight representation. This provides further evidence for conjecturing the model at hand as a candidate for a gravity dual of a logarithmic field theory with anisotropic scaling symmetry

  10. Linearity analysis and comparison study on the epoc® point-of-care blood analysis system in cardiopulmonary bypass patients

    Directory of Open Access Journals (Sweden)

    Jianing Chen

    2016-03-01

    Full Text Available The epoc® blood analysis system (Epocal Inc., Ottawa, Ontario, Canada) is a newly developed in vitro diagnostic hand-held analyzer for testing whole blood samples at point-of-care, which rapidly provides blood gas, electrolytes, ionized calcium, glucose, lactate, and hematocrit/calculated hemoglobin. The analytical performance of the epoc® system was evaluated in a tertiary hospital; see the related research article “Analytical evaluation of the epoc® point-of-care blood analysis system in cardiopulmonary bypass patients” [1]. Data presented are the linearity analysis for 9 parameters and the comparison study in 40 cardiopulmonary bypass patients across 3 epoc® meters, an Instrumentation Laboratory GEM4000, an Abbott iSTAT, a Nova CCX, and Roche Accu-Chek Inform II and Performa glucose meters.

  11. Second-order analysis of structured inhomogeneous spatio-temporal point processes

    DEFF Research Database (Denmark)

    Møller, Jesper; Ghorbani, Mohammad

    Statistical methodology for spatio-temporal point processes is in its infancy. We consider second-order analysis based on pair correlation functions and K-functions for, first, general inhomogeneous spatio-temporal point processes and, second, inhomogeneous spatio-temporal Cox processes. Assuming...... spatio-temporal separability of the intensity function, we clarify different meanings of second-order spatio-temporal separability. One is second-order spatio-temporal independence and relates e.g. to log-Gaussian Cox processes with an additive covariance structure of the underlying spatio......-temporal Gaussian process. Another concerns shot-noise Cox processes with a separable spatio-temporal covariance density. We propose diagnostic procedures for checking hypotheses of second-order spatio-temporal separability, which we apply on simulated and real data (the UK 2001 epidemic foot and mouth disease data)....

  12. Farmer cooperatives in the food economy of Western Europe: an analysis from the Marketing point of view

    NARCIS (Netherlands)

    Meulenberg, M.T.G.

    1979-01-01

    This paper is concerned with an analysis of farmer cooperatives in Western Europe from the marketing point of view. The analysis is restricted to marketing and processing cooperatives. First some basic characteristics of farmer cooperatives are discussed from a systems point of view. Afterwards

  13. Two-Stage Chaos Optimization Search Application in Maximum Power Point Tracking of PV Array

    Directory of Open Access Journals (Sweden)

    Lihua Wang

    2014-01-01

    Full Text Available In order to deliver the maximum available power to the load under the condition of varying solar irradiation and environment temperature, maximum power point tracking (MPPT) technologies have been used widely in PV systems. Among all the MPPT schemes, the chaos method is one of the hot topics in recent years. In this paper, a novel two-stage chaos optimization method is presented which makes the search faster and more effective. In the proposed chaos search, the improved logistic mapping, which has better ergodicity, is used as the first carrier process. After finding the current optimal solution with a certain guarantee, the power function carrier is used as the secondary carrier process to reduce the search space of the optimized variables and eventually find the maximum power point. Compared with the traditional chaos search method, the proposed method can track the change quickly and accurately and also has better optimization results. The proposed method provides a new efficient way to track the maximum power point of PV array.
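
    A simplified illustration of chaos-based maximum power point search is sketched below: a logistic map generates candidate voltages over the full search range (coarse stage), and a second, narrowed search refines around the best candidate. The P-V curve, voltage range and iteration counts are synthetic placeholders and this is not the authors' exact two-stage formulation (in particular, the power-function carrier is replaced here by a simple range reduction).

    ```python
    # Two-stage chaos search on a synthetic P-V curve (illustrative only).
    import numpy as np

    def pv_power(v):
        """Synthetic single-peak P-V characteristic (placeholder)."""
        return v * np.clip(5.0 * (1.0 - np.exp((v - 21.0) / 2.0)), 0.0, None)

    def chaos_search(v_min, v_max, n_iter=60, x0=0.31):
        x, best_v, best_p = x0, None, -np.inf
        for _ in range(n_iter):
            x = 4.0 * x * (1.0 - x)          # logistic map, fully chaotic at r = 4
            v = v_min + x * (v_max - v_min)  # map the chaotic variable onto the voltage range
            p = pv_power(v)
            if p > best_p:
                best_v, best_p = v, p
        return best_v, best_p

    # Stage 1: coarse chaotic search over the full voltage range
    v1, p1 = chaos_search(0.0, 22.0)
    # Stage 2: narrowed search around the stage-1 optimum
    v2, p2 = chaos_search(max(0.0, v1 - 1.0), min(22.0, v1 + 1.0))
    print(round(v2, 2), round(p2, 2))
    ```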

  14. Optimal production lot size and reorder point of a two-stage supply chain while random demand is sensitive with sales teams' initiatives

    Science.gov (United States)

    Sankar Sana, Shib

    2016-01-01

    The paper develops a production-inventory model of a two-stage supply chain consisting of one manufacturer and one retailer to study production lot size/order quantity, reorder point and sales teams' initiatives, where demand of the end customers depends simultaneously on a random variable and on the sales teams' initiatives. The manufacturer produces the order quantity of the retailer in one lot, in which the procurement cost per unit quantity follows a realistic convex function of production lot size. In the chain, the cost of the sales teams' initiatives/promotion efforts and the wholesale price of the manufacturer are negotiated at points such that their optimum profits come close to their target profits. This study helps the management of firms to determine the optimal order quantity/production quantity, reorder point and sales teams' initiatives/promotional effort in order to achieve their maximum profits. An analytical method is applied to determine the optimal values of the decision variables. Finally, numerical examples with graphical presentation and a sensitivity analysis of the key parameters are presented to illustrate more insights of the model.

  15. Analysis of Steady, Two-Dimensional Chemically Reacting Nonequilibrium Flow by an Unsteady, Asymptotically Consistent Technique. Volume I. Theoretical Development.

    Science.gov (United States)

    1982-09-01

    wall, and exit points are known collectively as boundary points. In the following discussion, the numerical treatment used for each type of mesh point...and frozen solutions and that it matches the ODK solution [Reference (10)] quite well. Also note that in this case, there is only a small departure...shows the results of the H-F system analysis. The mass-averaged temperature profile falls between the equilibrium and frozen solutions and matches the ODK

  16. Two-dimensional multifractal cross-correlation analysis

    International Nuclear Information System (INIS)

    Xi, Caiping; Zhang, Shuning; Xiong, Gang; Zhao, Huichang; Yang, Yonghong

    2017-01-01

    Highlights: • We study the mathematical models of 2D-MFXPF, 2D-MFXDFA and 2D-MFXDMA. • Present the definition of the two-dimensional N²-partitioned multiplicative cascading process. • Do the comparative analysis of 2D-MC by 2D-MFXPF, 2D-MFXDFA and 2D-MFXDMA. • Provide a reference on the choice and parameter settings of these methods in practice. - Abstract: There are a number of situations in which several signals are simultaneously recorded in complex systems, which exhibit long-term power-law cross-correlations. This paper presents two-dimensional multifractal cross-correlation analysis based on the partition function (2D-MFXPF), two-dimensional multifractal cross-correlation analysis based on the detrended fluctuation analysis (2D-MFXDFA) and two-dimensional multifractal cross-correlation analysis based on the detrended moving average analysis (2D-MFXDMA). We apply these methods to pairs of two-dimensional multiplicative cascades (2D-MC) to do a comparative study. Then, we apply the two-dimensional multifractal cross-correlation analysis based on the detrended fluctuation analysis (2D-MFXDFA) to real images and unveil intriguing multifractality in the cross correlations of the material structures. Finally, we give the main conclusions and provide a valuable reference on how to choose the multifractal algorithms in the potential applications in the field of SAR image classification and detection.

  17. Genetic interaction analysis of point mutations enables interrogation of gene function at a residue-level resolution

    Science.gov (United States)

    Braberg, Hannes; Moehle, Erica A.; Shales, Michael; Guthrie, Christine; Krogan, Nevan J.

    2014-01-01

    We have achieved a residue-level resolution of genetic interaction mapping – a technique that measures how the function of one gene is affected by the alteration of a second gene – by analyzing point mutations. Here, we describe how to interpret point mutant genetic interactions, and outline key applications for the approach, including interrogation of protein interaction interfaces and active sites, and examination of post-translational modifications. Genetic interaction analysis has proven effective for characterizing cellular processes; however, to date, systematic high-throughput genetic interaction screens have relied on gene deletions or knockdowns, which limits the resolution of gene function analysis and poses problems for multifunctional genes. Our point mutant approach addresses these issues, and further provides a tool for in vivo structure-function analysis that complements traditional biophysical methods. We also discuss the potential for genetic interaction mapping of point mutations in human cells and its application to personalized medicine. PMID:24842270

  18. Using thermal analysis techniques for identifying the flash point temperatures of some lubricant and base oils

    Directory of Open Access Journals (Sweden)

    Aksam Abdelkhalik

    2018-03-01

    Full Text Available The flash point (FP) temperatures of some lubricant and base oils were measured according to ASTM D92 and ASTM D93. In addition, the thermal stability of the oils was studied using differential scanning calorimetry (DSC) and thermogravimetric analysis (TGA) under nitrogen atmosphere. The DSC results showed that the FP temperatures, for each oil, were found during the first decomposition step, and the temperature at the peak of the first decomposition step was usually higher than the FP temperature. The TGA results indicated that the temperature at which 17.5% weight loss takes place (T17.5%) was nearly identical to the FP temperature (±10 °C) measured according to ASTM D92. The deviation percentage between FP and T17.5% was in the range from −0.8% to 3.6%. Keywords: Flash point, TGA, DSC
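
    The T17.5% quantity described above can be extracted from a TGA curve by simple interpolation. The sketch below does this for an illustrative residual-mass curve; the temperature and mass values are placeholders, not data from the study.

    ```python
    # Locate the temperature at which a TGA curve reaches 17.5% cumulative weight loss.
    import numpy as np

    def temperature_at_weight_loss(temps_c, mass_pct, loss_pct=17.5):
        """Interpolate the temperature (deg C) at a given cumulative weight loss (%)."""
        loss = 100.0 - np.asarray(mass_pct, dtype=float)   # cumulative weight loss curve
        temps = np.asarray(temps_c, dtype=float)
        # np.interp requires the x-coordinates (loss) to be increasing
        return float(np.interp(loss_pct, loss, temps))

    # Illustrative TGA data: residual mass (%) versus temperature (deg C)
    temps = [100, 150, 200, 250, 300, 350]
    mass = [100.0, 99.0, 93.0, 80.0, 55.0, 20.0]
    # Estimated T17.5%, to be compared with the ASTM D92 flash point
    print(temperature_at_weight_loss(temps, mass))
    ```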

  19. Analysis of web-based online services for GPS relative and precise point positioning techniques

    Directory of Open Access Journals (Sweden)

    Taylan Ocalan

    Full Text Available Nowadays, the Global Positioning System (GPS) has been used effectively in several engineering applications for survey purposes by multiple disciplines. Web-based online services developed by several organizations, which are user friendly, unlimited and mostly free, have become a significant alternative to high-cost scientific and commercial software for post-processing and analyzing GPS data. When centimeter (cm) or decimeter (dm) level accuracies are desired, they can be obtained easily through these services for engineering applications with different quality requirements. In this paper, a test study was conducted on the ISKI-CORS network (Istanbul, Turkey) in order to carry out an accuracy analysis of the most widely used web-based online services around the world (namely OPUS, AUSPOS, SCOUT, CSRS-PPP, GAPS, APPS, magicGNSS). These services use relative and precise point positioning (PPP) solution approaches. In this test study, the coordinates of eight stations were estimated using both the online services and the Bernese 5.0 scientific GPS processing software from a 24-hour GPS data set, and then the coordinate differences between the online services and the Bernese processing software were computed. From the evaluations, it was seen that each individual difference was less than 10 mm for the relative online services and less than 20 mm for the precise point positioning services. The accuracy analysis was based on these coordinate differences and the standard deviations of the coordinates obtained from the different techniques, and the online services were then compared to each other. The results show that the position accuracies obtained by the associated online services provide highly accurate solutions that may be used in many engineering applications and geodetic analyses.
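
    The comparison step described above amounts to differencing the service-derived coordinates against a reference solution and summarizing the spread. A hedged sketch follows; the station coordinates are hypothetical and the reference solution (e.g., Bernese) is only assumed.

    ```python
    # Coordinate differences and their statistics between an online GPS solution
    # and a reference solution. Coordinates below are made-up ECEF values in metres.
    import numpy as np

    def coordinate_differences(service_xyz, reference_xyz):
        """Per-station differences (m) and their mean/standard deviation per component."""
        d = np.asarray(service_xyz, dtype=float) - np.asarray(reference_xyz, dtype=float)
        return d, d.mean(axis=0), d.std(axis=0, ddof=1)

    service = [[4208830.101, 2334850.212, 4171267.334],
               [4209145.877, 2334412.908, 4171009.551]]
    reference = [[4208830.094, 2334850.219, 4171267.341],
                 [4209145.869, 2334412.915, 4171009.546]]
    diffs, mean_diff, std_diff = coordinate_differences(service, reference)
    print(mean_diff, std_diff)
    ```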

  20. The finite temperature density matrix and two-point correlations in the antiferromagnetic XXZ chain

    Science.gov (United States)

    Göhmann, Frank; Hasenclever, Nils P.; Seel, Alexander

    2005-10-01

    We derive finite temperature versions of integral formulae for the two-point correlation functions in the antiferromagnetic XXZ chain. The derivation is based on the summation of density matrix elements characterizing a finite chain segment of length m. On this occasion we also supply a proof of the basic integral formula for the density matrix presented in an earlier publication.

  1. OFF-Stagnation point testing in plasma facility

    Science.gov (United States)

    Viladegut, A.; Chazot, O.

    2015-06-01

    Reentry space vehicles face extreme conditions of heat flux when interacting with the atmosphere at hypersonic velocities. Stagnation point heat flux is normally used as a reference for Thermal Protection System (TPS) design; however, many critical phenomena also occur at off-stagnation points. This paper addresses the implementation of an off-stagnation point methodology able to duplicate in a ground facility the hypersonic boundary layer over a flat plate model. A first analysis using two-dimensional (2D) computational fluid dynamics (CFD) simulations is carried out to understand the limitations of this methodology when applying it in a plasma wind tunnel. The results from the testing campaign at the VKI Plasmatron are also presented.

  2. Detecting corner points from digital curves

    International Nuclear Information System (INIS)

    Sarfraz, M.

    2011-01-01

    Corners in digital images give important clues for shape representation, recognition, and analysis. Since dominant information regarding shape is usually available at the corners, they provide important features for various real life applications in disciplines like computer vision, pattern recognition, and computer graphics. Corners are robust features in the sense that they provide important information regarding objects under translation, rotation and scale change. They are also important from the viewpoint of understanding human perception of objects. They play a crucial role in decomposing or describing digital curves. They are also used in scale space theory, image representation, stereo vision, motion tracking, image matching, building mosaics and font design systems. If the corner points are identified properly, a shape can be represented in an efficient and compact way with sufficient accuracy. Corner detection schemes, based on their applications, can be broadly divided into two categories: binary (suitable for binary images) and gray level (suitable for gray level images). Corner detection approaches for binary images usually involve segmenting the image into regions and extracting boundaries from those regions that contain them. The techniques for gray level images can be categorized into two classes: (a) template based and (b) gradient based. The template based techniques utilize correlation between a sub-image and a template of a given angle. A corner point is selected by finding the maximum of the correlation output. Gradient based techniques require computing the curvature of an edge that passes through a neighborhood in a gray level image. Many corner detection algorithms have been proposed in the literature, and they can be broadly divided into two groups: one detects corner points from grayscale images and the other relates to boundary-based corner detection. This contribution mainly deals with techniques adopted for the latter approach.

  3. An integral constraint for the evolution of the galaxy two-point correlation function

    International Nuclear Information System (INIS)

    Peebles, P.J.E.; Groth, E.J.

    1976-01-01

    Under some conditions an integral over the galaxy two-point correlation function, xi(x,t), evolves with the expansion of the universe in a simple manner easily computed from linear perturbation theory. This provides a useful constraint on the possible evolution of xi(x,t) itself. We test the integral constraint with both an analytic model and numerical N-body simulations for the evolution of irregularities in an expanding universe. Some applications are discussed. (orig.) [de

  4. Futures market efficiency diagnostics via temporal two-point correlations. Russian market case study

    OpenAIRE

    Kopytin, Mikhail; Kazantsev, Evgeniy

    2013-01-01

    Using a two-point correlation technique, we study emergence of market efficiency in the emergent Russian futures market by focusing on lagged correlations. The correlation strength of leader-follower effects in the lagged inter-market correlations on the hourly time frame is seen to be significant initially (2009-2011) but gradually goes down, as the erstwhile leader instruments -- crude oil, the USD/RUB exchange rate, and the Russian stock market index -- seem to lose the leader status. An i...
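
    The lead-lag effect described above can be probed with a lagged (two-point) Pearson correlation between a "leader" and a "follower" return series. The sketch below uses synthetic series in which the follower tracks the leader with a one-step delay; the series, lag and coefficient are illustrative assumptions, not market data.

    ```python
    # Lagged cross-correlation between two synthetic return series.
    import numpy as np

    def lagged_correlation(leader, follower, lag):
        """Correlation between leader returns at time t and follower returns at t + lag."""
        leader = np.asarray(leader, dtype=float)
        follower = np.asarray(follower, dtype=float)
        if lag > 0:
            x, y = leader[:-lag], follower[lag:]
        else:
            x, y = leader, follower
        return float(np.corrcoef(x, y)[0, 1])

    rng = np.random.default_rng(0)
    oil = rng.normal(size=500)                                     # synthetic "leader" returns
    futures = 0.4 * np.roll(oil, 1) + 0.9 * rng.normal(size=500)   # follows with a one-step lag
    print(round(lagged_correlation(oil, futures, 1), 2))           # noticeably positive
    print(round(lagged_correlation(oil, futures, 0), 2))           # close to zero
    ```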

  5. Optimal analysis of gas cooler and intercooler for two-stage CO2 trans-critical refrigeration system

    International Nuclear Information System (INIS)

    Li, Wenhua

    2013-01-01

    Highlights: • Simplified model for tube-fin gas cooler for CO 2 refrigeration system was presented and validated. • Several parameters were investigated using the 1st law and 2nd law at component and system level. • Practical guidelines for the optimum design of the tube-fin gas cooler and intercooler were proposed. - Abstract: Energy-based 1st law and exergy-based 2nd law analyses are both employed in the paper to assess the optimal design of the gas cooler and intercooler for a two-stage CO 2 refrigeration system. A simplified mathematical model of the air-cooled coil is presented and validated against experimental data with good accuracy. The optimum circuit length under the influence of frontal air velocity and number of deep rows is investigated first. Thereafter, the designed coil with optimum circuit length is further evaluated within the two-stage refrigeration system. It is found that the optimum point from the 1st law does not coincide with the point from the 2nd law at the isolated component level, and that the isolated-component results from the 2nd law are closer to the system-level analysis. Results show that the optimum circuit length is much larger for the gas cooler than for the intercooler, and that the influence of frontal air velocity and number of deep rows on this length may be neglected. There does exist an optimum frontal air velocity, which decreases as the number of deep rows increases.

  6. Two-phase coolant pump model of pressurized light water nuclear reactors

    International Nuclear Information System (INIS)

    Santos, G.A. dos; Freitas, R.L.

    1990-01-01

    The two-phase coolant pump model for pressurized light water nuclear reactors is an important element of loss-of-primary-coolant accident analysis. The homologous curves describe the complete performance of the pump and are input to thermal-hydraulic accident analysis codes. This work proposes a mathematical model able to predict the two-phase homologous curves, incorporating geometric and operational pump conditions. The results were compared with experimental test data from the literature and showed good agreement. (author)

  7. An analytical approximation scheme to two-point boundary value problems of ordinary differential equations

    International Nuclear Information System (INIS)

    Boisseau, Bruno; Forgacs, Peter; Giacomini, Hector

    2007-01-01

    A new (algebraic) approximation scheme to find global solutions of two-point boundary value problems of ordinary differential equations (ODEs) is presented. The method is applicable for both linear and nonlinear (coupled) ODEs whose solutions are analytic near one of the boundary points. It is based on replacing the original ODEs by a sequence of auxiliary first-order polynomial ODEs with constant coefficients. The coefficients in the auxiliary ODEs are uniquely determined from the local behaviour of the solution in the neighbourhood of one of the boundary points. The problem of obtaining the parameters of the global (connecting) solutions, analytic at one of the boundary points, reduces to finding the appropriate zeros of algebraic equations. The power of the method is illustrated by computing the approximate values of the 'connecting parameters' for a number of nonlinear ODEs arising in various problems in field theory. We treat in particular the static and rotationally symmetric global vortex, the skyrmion, the Abrikosov-Nielsen-Olesen vortex, as well as the 't Hooft-Polyakov magnetic monopole. The total energy of the skyrmion and of the monopole is also computed by the new method. We also consider some ODEs coming from the exact renormalization group. The ground-state energy level of the anharmonic oscillator is also computed for arbitrary coupling strengths with good precision. (fast track communication)

  8. Obesity in show cats.

    Science.gov (United States)

    Corbee, R J

    2014-12-01

    Obesity is an important disease with a high prevalence in cats. Because obesity is related to several other diseases, it is important to identify the population at risk. Several risk factors for obesity have been described in the literature. A higher incidence of obesity in certain cat breeds has been suggested. The aim of this study was to determine whether obesity occurs more often in certain breeds. The second aim was to relate the increased prevalence of obesity in certain breeds to the official standards of that breed. To this end, 268 cats of 22 different breeds were investigated by determining their body condition score (BCS) on a nine-point scale by inspection and palpation, at two different cat shows. Overall, 45.5% of the show cats had a BCS > 5, and 4.5% of the show cats had a BCS > 7. There were significant differences between breeds, which could be related to the breed standards. Most overweight and obese cats were in the neutered group. It warrants firm discussions with breeders and cat show judges to come to different interpretations of the standards in order to prevent overweight conditions in certain breeds from being the standard of beauty. Neutering predisposes for obesity and requires early nutritional intervention to prevent obese conditions. Journal of Animal Physiology and Animal Nutrition © 2014 Blackwell Verlag GmbH.

  9. Modeling A.C. Electronic Transport through a Two-Dimensional Quantum Point Contact

    International Nuclear Information System (INIS)

    Aronov, I.E.; Beletskii, N.N.; Berman, G.P.; Campbell, D.K.; Doolen, G.D.; Dudiy, S.V.

    1998-01-01

    We present the results on the a.c. transport of electrons moving through a two-dimensional (2D) semiconductor quantum point contact (QPC). We concentrate our attention on the characteristic properties of the high frequency admittance (ω approximately 0–50 GHz), and on the oscillations of the admittance in the vicinity of the separatrix (when a channel opens or closes), in the presence of relaxation effects. The experimental verification of such oscillations in the admittance would be a strong confirmation of the semi-classical approach to the a.c. transport in a QPC, in the separatrix region

  10. Review Team Focused Modeling Analysis of Radial Collector Well Operation on the Hypersaline Groundwater Plume beneath the Turkey Point Site near Homestead, Florida

    International Nuclear Information System (INIS)

    Oostrom, Martinus; Vail, Lance W.

    2016-01-01

    Researchers at Pacific Northwest National Laboratory served as members of a U.S. Nuclear Regulatory Commission review team for the Florida Power & Light Company's application for two combined construction permits and operating licenses (combined licenses or COLs) for two proposed new reactor units-Turkey Point Units 6 and 7. The review team evaluated the environmental impacts of the proposed action based on the October 29, 2014 revision of the COL application, including the Environmental Report, responses to requests for additional information, and supplemental information. As part of this effort, team members tasked with assessing the environmental effects of proposed construction and operation of Units 6 and 7 at the Turkey Point site reviewed two separate modeling studies that analyzed the interaction between surface water and groundwater that would be altered by the operation of radial collector wells (RCWs) at the site. To further confirm their understanding of the groundwater hydrodynamics and to consider whether certain actions, proposed after the two earlier modeling studies were completed, would alter the earlier conclusions documented by the review team in their draft environmental impact statement (EIS; NRC 2015), a third modeling analysis was performed. The third modeling analysis is discussed in this report.

  11. RPA using a multiplexed cartridge for low cost point of care diagnostics in the field.

    Science.gov (United States)

    Ereku, Luck Tosan; Mackay, Ruth E; Craw, Pascal; Naveenathayalan, Angel; Stead, Thomas; Branavan, Manorharanehru; Balachandran, Wamadeva

    2018-04-15

    A point-of-care device utilising Lab-on-a-Chip technologies that is applicable to biological pathogens was designed, fabricated and tested, showing sample-in to answer-out capabilities. The purpose of the design was to develop a cartridge with the capability to perform nucleic acid extraction and purification from a sample using a chitosan membrane at an acidic pH. Waste was stored within the cartridge with the use of sodium polyacrylate to solidify or gelate the sample in a single chamber. Nucleic acid elution was conducted using the RPA amplification reagents (alkaline pH). Passive valves were used to regulate the fluid flow and a multiplexer was designed to distribute the fluid into six microchambers for amplification reactions. Cartridges were produced using soft lithography of silicone from 3D printed moulds, bonded to glass substrates. The isothermal technique RPA is employed for amplification. This paper shows the results from two separate experiments: the first using the RPA control nucleic acid, the second showing successful amplification from Chlamydia trachomatis. End-point analysis for the RPA experiments was gel electrophoresis, which showed that a 143 base pair DNA fragment was amplified successfully for positive samples whilst negative samples did not show amplification. End-point analysis for Chlamydia trachomatis samples was fluorescence detection, which showed successful detection of 1 copy/μL and 10 copies/μL spiked in a MES buffer. Crown Copyright © 2018. Published by Elsevier Inc. All rights reserved.

  12. Point-and-Click Pedagogy: Is it Effective for Teaching Information Technology?

    Directory of Open Access Journals (Sweden)

    Mark Angolia

    2016-09-01

    Full Text Available This paper assesses the effectiveness of the adoption of curriculum content developed and supported by a global academic university-industry alliance sponsored by one of the world’s largest information technology software providers. Academic alliances promote practical and future-oriented education while providing access to proprietary software and technology. Specifically, this paper addresses a lack of quantitative analysis to substantiate the perceived benefits of using information technology “point-and-click” instructional pedagogy to teach fundamental business processes and concepts. The analysis of over 800 test questions from 229 students allowed inferences regarding the utilization of self-directed “point-and-click” driven case studies employed to teach software applications of business processes needed for supply chain management. Correlation studies and analysis of variance investigated data collected from 10 individual course sections over a two-and-one-half-year period in a four-year public university. The data showed statistically significant positive correlations between the pedagogy and conceptual learning. Further, the research provided evidence that the methodology is equally effective for teaching information technology applications using either face-to-face or distance education delivery methods.
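
    For readers unfamiliar with the two statistical tools named above, the following sketch shows a Pearson correlation and a one-way ANOVA of the kind used to relate case-study performance to exam outcomes across course sections. All scores and group splits are made up for illustration and are not data from the study.

    ```python
    # Pearson correlation and one-way ANOVA on synthetic score data.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    case_study_scores = rng.uniform(60, 100, size=60)
    exam_scores = 0.6 * case_study_scores + rng.normal(0, 8, size=60) + 20

    # Correlation between case-study scores and exam scores
    r, p_corr = stats.pearsonr(case_study_scores, exam_scores)

    # One-way ANOVA across three hypothetical delivery groups (e.g., course sections)
    g1, g2, g3 = exam_scores[:20], exam_scores[20:40], exam_scores[40:]
    f_stat, p_anova = stats.f_oneway(g1, g2, g3)

    print(round(r, 2), round(p_corr, 4), round(f_stat, 2), round(p_anova, 3))
    ```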

  13. Classification of Phase Transitions by Microcanonical Inflection-Point Analysis

    Science.gov (United States)

    Qi, Kai; Bachmann, Michael

    2018-05-01

    By means of the principle of minimal sensitivity we generalize the microcanonical inflection-point analysis method by probing derivatives of the microcanonical entropy for signals of transitions in complex systems. A strategy of systematically identifying and locating independent and dependent phase transitions of any order is proposed. The power of the generalized method is demonstrated in applications to the ferromagnetic Ising model and a coarse-grained model for polymer adsorption onto a substrate. The results shed new light on the intrinsic phase structure of systems with cooperative behavior.
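
    As a greatly simplified stand-in for the generalized inflection-point analysis described above, the sketch below locates inflection points of a smooth, synthetic microcanonical entropy S(E) by looking for sign changes of its numerical second derivative; the entropy curve and the interpretation of the detected point as a transition signal are illustrative assumptions only.

    ```python
    # Numerical inflection-point detection on a synthetic entropy curve S(E).
    import numpy as np

    def inflection_points(E, S):
        """Return energies where the second derivative of S(E) changes sign."""
        dS = np.gradient(S, E)           # microcanonical beta(E) = dS/dE
        d2S = np.gradient(dS, E)
        sign_change = np.where(np.diff(np.sign(d2S)) != 0)[0]
        return E[sign_change]

    E = np.linspace(-2.0, 2.0, 2001)
    S = E - 0.2 * np.tanh(5.0 * E)       # synthetic entropy with one inflection region
    print(inflection_points(E, S))        # inflection near E = 0 flags a transition candidate
    ```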

  14. A meta-analysis of response-time tests of the sequential two-systems model of moral judgment.

    Science.gov (United States)

    Baron, Jonathan; Gürçay, Burcu

    2017-05-01

    The (generalized) sequential two-system ("default interventionist") model of utilitarian moral judgment predicts that utilitarian responses often arise from a system-two correction of system-one deontological intuitions. Response-time (RT) results that seem to support this model are usually explained by the fact that low-probability responses have longer RTs. Following earlier results, we predicted response probability from each subject's tendency to make utilitarian responses (A, "Ability") and each dilemma's tendency to elicit deontological responses (D, "Difficulty"), estimated from a Rasch model. At the point where A = D, the two responses are equally likely, so probability effects cannot account for any RT differences between them. The sequential two-system model still predicts that many of the utilitarian responses made at this point will result from system-two corrections of system-one intuitions, hence should take longer. However, when A = D, RT for the two responses was the same, contradicting the sequential model. Here we report a meta-analysis of 26 data sets, which replicated the earlier results of no RT difference overall at the point where A = D. The data sets used three different kinds of moral judgment items, and the RT equality at the point where A = D held for all three. In addition, we found that RT increased with A-D. This result holds for subjects (characterized by Ability) but not for items (characterized by Difficulty). We explain the main features of this unanticipated effect, and of the main results, with a drift-diffusion model.
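
    The Rasch-type relation implied above can be written in a standard logistic form (the paper's exact parameterization may differ); it makes explicit why the two responses are equally likely at the point where Ability equals Difficulty:

    ```latex
    % Standard Rasch form: probability that a subject with utilitarian "Ability" A
    % gives the utilitarian response to a dilemma with deontological "Difficulty" D.
    \[
      P(\text{utilitarian} \mid A, D) \;=\; \frac{e^{\,A-D}}{1 + e^{\,A-D}},
      \qquad
      P = \tfrac{1}{2} \ \text{ exactly when } A = D .
    \]
    ```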

  15. Improvement of two-dimensional gravity analysis by using logarithmic functions; Taisu kansu wo mochiita nijigen juryoku kaiseki no kairyo

    Energy Technology Data Exchange (ETDEWEB)

    Makino, M; Murata, Y [Geological Survey of Japan, Tsukuba (Japan)

    1996-05-01

    An examination was made, for two-dimensional tectonic analysis by gravity exploration, of a method applicable from deep underground parts to shallow geological structures by using logarithmic functions. In the examination, a case was considered in which an underground structure was divided into a basement and a covering formation and in which the boundary part had undulations. An equation to calculate the basement structure from a gravity anomaly was derived so that, taking into consideration the effect of the height of an observation point, it might be applicable to a shallow distribution of the basement depth. In the test calculation, a model was assumed in which the basement was a step structure reaching nearly to the surface. The density difference was set as 0.4 g/cm³. An analysis using an equation two-dimensionally modified from Ogihara's (1987) method produced a fairly reasonable result, showing, however, a deformed basement around the boundary of the step structure, with the appearance of a small pulse-shaped structure. The analysis using logarithmic functions revealed that the original basement structure was faithfully restored. 3 refs., 5 figs.

  16. Point interactions in two- and three-dimensional Riemannian manifolds

    International Nuclear Information System (INIS)

    Erman, Fatih; Turgut, O Teoman

    2010-01-01

    We present a non-perturbative renormalization of the bound state problem of n bosons interacting with finitely many Dirac-delta interactions on two- and three-dimensional Riemannian manifolds using the heat kernel. We formulate the problem in terms of a new operator called the principal or characteristic operator Φ(E). In order to investigate the problem in more detail, we then restrict the problem to the one particle sector. The lower bound of the ground state energy is found for a general class of manifolds, e.g. for compact and Cartan-Hadamard manifolds. The estimate of the bound state energies in the tunneling regime is calculated by perturbation theory. Non-degeneracy and uniqueness of the ground state are proven by the Perron-Frobenius theorem. Moreover, pointwise bounds on the wave function are given, and all these results are consistent with those of standard quantum mechanics. The renormalization procedure does not lead to any radical change in these cases. Finally, renormalization group equations are derived and the β function is exactly calculated. This work is a natural continuation of our previous work based on a novel approach to the renormalization of point interactions, developed by Rajeev.

  17. Multiscale change-point analysis of inhomogeneous Poisson processes using unbalanced wavelet decompositions

    NARCIS (Netherlands)

    Jansen, M.H.; Di Bucchianico, A.; Mattheij, R.M.M.; Peletier, M.A.

    2006-01-01

    We present a continuous wavelet analysis of count data with time-varying intensities. The objective is to extract intervals with significant intensities from background intervals. This includes the precise starting point of the significant interval, its exact duration and the (average) level of

  18. An Exploratory Study: A Kinesic Analysis of Academic Library Public Service Points

    Science.gov (United States)

    Kazlauskas, Edward

    1976-01-01

    An analysis of body movements of individuals at reference and circulation public service points in four academic libraries indicated that both receptive and nonreceptive nonverbal behaviors were used by all levels of library employees, and these behaviors influenced patron interaction. (Author/LS)

  19. STRUTEX: A prototype knowledge-based system for initially configuring a structure to support point loads in two dimensions

    Science.gov (United States)

    Rogers, James L.; Feyock, Stefan; Sobieszczanski-Sobieski, Jaroslaw

    1988-01-01

    The purpose of this research effort is to investigate the benefits that might be derived from applying artificial intelligence tools in the area of conceptual design. Therefore, the emphasis is on the artificial intelligence aspects of conceptual design rather than structural and optimization aspects. A prototype knowledge-based system, called STRUTEX, was developed to initially configure a structure to support point loads in two dimensions. This system combines numerical and symbolic processing by the computer with interactive problem solving aided by the vision of the user by integrating a knowledge base interface and inference engine, a data base interface, and graphics while keeping the knowledge base and data base files separate. The system writes a file which can be input into a structural synthesis system, which combines structural analysis and optimization.

  20. A Comparative Study of Precise Point Positioning (PPP Accuracy Using Online Services

    Directory of Open Access Journals (Sweden)

    Malinowski Marcin

    2016-12-01

    Full Text Available Precise Point Positioning (PPP) is a technique used to determine the position of a receiver antenna without communication with a reference station. It may be an alternative solution to differential measurements, where maintaining a connection with a single RTK station or a regional network of reference stations (RTN) is necessary. This situation is especially common in areas with poorly developed infrastructure of ground stations. A lot of research conducted so far on the use of the PPP technique has been concerned with the processing of entire-day observation sessions. However, this paper presents the results of a comparative analysis of the accuracy of absolute position determination from observations lasting between 1 and 7 hours, using four permanent services which execute calculations with the PPP technique: Automatic Precise Positioning Service (APPS), Canadian Spatial Reference System Precise Point Positioning (CSRS-PPP), GNSS Analysis and Positioning Software (GAPS) and magicPPP - Precise Point Positioning Solution (magicGNSS). On the basis of the acquired measurement results, it can be concluded that measurements of at least two hours allow an absolute position to be obtained with an accuracy of 2-4 cm. An evaluation was also conducted of how the accuracy of simultaneous positioning of a three-point test network affects the horizontal distances and relative height differences between the measured triangle vertices. Distances and relative height differences between points of the triangular test network measured with a Leica TDRA6000 laser station were adopted as references. The analyses of the results show that measurement sessions of at least two hours can be used to determine the horizontal distance or the difference in height with an accuracy of 1-2 cm. Rapid products employed in calculations conducted with the PPP technique reached a similar accuracy of coordinate determination as in elaborations which employ

  1. A Thermodynamic Analysis of Two Competing Mid-Sized Oxyfuel Combustion Combined Cycles

    Directory of Open Access Journals (Sweden)

    Egill Thorbergsson

    2016-01-01

    Full Text Available A comparative analysis of two mid-sized oxyfuel combustion combined cycles is performed. The two cycles are the semiclosed oxyfuel combustion combined cycle (SCOC-CC) and the Graz cycle. In addition, a reference cycle was established as the basis for the analysis of the oxyfuel combustion cycles. A parametric study was conducted where the pressure ratio and the turbine entry temperature were varied. The layout and the design of the SCOC-CC are considerably simpler than those of the Graz cycle, while it achieves the same net efficiency as the Graz cycle. The fact that the efficiencies for the two cycles are close to identical differs from previously reported work. Earlier studies have reported around a 3 percentage point advantage in efficiency for the Graz cycle, which is attributed to the use of a second bottoming cycle. This additional feature is omitted to make the two cycles more comparable in terms of complexity. The Graz cycle has a substantially lower pressure ratio at the optimum efficiency and a much higher gas turbine power density than both the reference cycle and the SCOC-CC.

  2. Beginning SharePoint 2010 Building Business Solutions with SharePoint

    CERN Document Server

    Perran, Amanda; Mason, Jennifer; Rogers, Laura

    2010-01-01

    Two SharePoint MVPs provide the ultimate introduction to SharePoint 2010. Beginning SharePoint 2010: Building Team Solutions with SharePoint provides information workers and site managers with extensive knowledge and expert advice, empowering them to become SharePoint champions within their organizations. Provides expansive coverage of SharePoint topics, as well as specialty areas such as forms, excel services, records management, and web content management. Details realistic usage scenarios, and includes practice examples that highlight best practices for configuration and customization. Includes de

  3. Professional SharePoint 2010 Cloud-Based Solutions

    CERN Document Server

    Fox, Steve; Stubbs, Paul; Follette, Donovan

    2011-01-01

    An authoritative guide to extending SharePoint's power with cloud-based services If you want to be part of the next major shift in the IT industry, you'll want this book. Melding two of the hottest trends in the industry—the widespread popularity of the SharePoint collaboration platform and the rapid rise of cloud computing—this practical guide shows developers how to extend their SharePoint solutions with the cloud's almost limitless capabilities. See how to get started, discover smart ways to leverage cloud data and services through Azure, start incorporating Twitter or LinkedIn

  4. Measurements on pointing error and field of view of Cimel-318 Sun photometers in the scope of AERONET

    Directory of Open Access Journals (Sweden)

    B. Torres

    2013-08-01

    Full Text Available Sensitivity studies indicate that among the diverse error sources of ground-based sky radiometer observations, the pointing error plays an important role in the correct retrieval of aerosol properties. Accurate pointing is especially critical for the characterization of desert dust aerosol. The present work relies on the analysis of two new measurement procedures (cross and matrix), specifically designed for the evaluation of the pointing error in the standard instrument of the Aerosol Robotic Network (AERONET), the Cimel CE-318 Sun photometer. The first part of the analysis contains a preliminary study whose results establish the need for a Sun movement correction for an accurate evaluation of the pointing error from both new measurements. Once this correction is applied, both measurements show equivalent results, with differences under 0.01° in the pointing error estimations. The second part of the analysis includes the incorporation of the cross procedure in the AERONET routine measurement protocol in order to monitor the pointing error in field instruments. The pointing error was evaluated using data collected over more than a year from 7 Sun photometers belonging to AERONET sites. The registered pointing error values were generally smaller than 0.1°, though in some instruments values up to 0.3° have been observed. Moreover, the pointing error analysis shows that this measurement can be useful to detect mechanical problems in the robots or dirtiness in the 4-quadrant detector used to track the Sun. Specifically, these mechanical faults can be detected due to the stable behavior of the values over time and vs. the solar zenith angle. Finally, the matrix procedure can be used to derive the value of the solid view angle of the instruments. The methodology has been implemented and applied for the characterization of 5 Sun photometers. To validate the method, a comparison with solid angles obtained from the vicarious calibration method was

  5. Beginning SharePoint 2010 Administration Windows SharePoint Foundation 2010 and Microsoft SharePoint Server 2010

    CERN Document Server

    Husman, Göran

    2010-01-01

    Complete coverage of the latest advances in SharePoint 2010 administration. SharePoint 2010 comprises an abundance of new features, and this book shows you how to take advantage of all SharePoint 2010's many improvements. Written by a four-time SharePoint MVP, Beginning SharePoint 2010 Administration begins with a comparison of SharePoint 2010 with the previous version and then examines the differences between WSS 4.0 and MSS 2010. Packed with step-by-step instructions, tips and tricks, and real-world examples, this book dives into the basics of how to install, manage, and administrate

  6. Elastic-plastic adhesive contact of rough surfaces using n-point asperity model

    International Nuclear Information System (INIS)

    Sahoo, Prasanta; Mitra, Anirban; Saha, Kashinath

    2009-01-01

    This study considers an analysis of the elastic-plastic contact of rough surfaces in the presence of adhesion using an n-point asperity model. The multiple-point asperity model, developed by Hariri et al (2006 Trans ASME: J. Tribol. 128 505-14), is integrated into the elastic-plastic adhesive contact model developed by Roy Chowdhury and Ghosh (1994 Wear 174 9-19). This n-point asperity model differs from the conventional Greenwood and Williamson model (1966 Proc. R. Soc. Lond. A 295 300-19) in considering the asperities not as fixed entities but as entities that change through the contact process, and hence it represents the asperities in a more realistic manner. The adhesion index and plasticity index newly defined for the n-point asperity model are used to consider the different conditions that arise because of varying load, surface and material parameters. A comparison between the load-separation behaviour of the new model and the conventional one shows a significant difference between the two depending on combinations of mean separation, adhesion index and plasticity index.

  7. Proposition for Improvement of Economics Situation with Use of Analysis of Break Even Point

    OpenAIRE

    Starečková, Alena

    2015-01-01

    The bachelor's thesis deals with carrying out a break-even-point analysis in a company, with the analysis of costs, and with proposals to improve the financial situation of the company, particularly from the cost perspective. The first part of the thesis defines the terms and formulas related to break-even-point analysis and to cost issues. In the second part, a break-even analysis is performed for a specific company, followed by proposals to improve the current situation.

  8. Conformal four point functions and the operator product expansion

    International Nuclear Information System (INIS)

    Dolan, F.A.; Osborn, H.

    2001-01-01

    Various aspects of the four point function for scalar fields in conformally invariant theories are analysed. This depends on an arbitrary function of two conformal invariants u,v. A recurrence relation for the function corresponding to the contribution of an arbitrary spin field in the operator product expansion to the four point function is derived. This is solved explicitly in two and four dimensions in terms of ordinary hypergeometric functions of variables z,x which are simply related to u,v. The operator product expansion analysis is applied to the explicit expressions for the four point function found for free scalar, fermion and vector field theories in four dimensions. The results for four point functions obtained by using the AdS/CFT correspondence are also analysed in terms of functions related to those appearing in the operator product discussion

  9. Measuring political polarization: Twitter shows the two sides of Venezuela

    Science.gov (United States)

    Morales, A. J.; Borondo, J.; Losada, J. C.; Benito, R. M.

    2015-03-01

    We say that a population is perfectly polarized when divided in two groups of the same size and opposite opinions. In this paper, we propose a methodology to study and measure the emergence of polarization from social interactions. We begin by proposing a model to estimate opinions in which a minority of influential individuals propagate their opinions through a social network. The result of the model is an opinion probability density function. Next, we propose an index to quantify the extent to which the resulting distribution is polarized. Finally, we apply the proposed methodology to a Twitter conversation about the late Venezuelan president, Hugo Chávez, finding a good agreement between our results and offline data. Hence, we show that our methodology can detect different degrees of polarization, depending on the structure of the network.
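
    In the spirit of the definition above, a polarization index should be largest when the opinion distribution splits into two equal-sized groups with distant, opposite centers. The sketch below computes such a toy index; the exact index proposed in the paper may be defined differently, and the opinion samples are synthetic.

    ```python
    # Toy polarization index: combines group-size balance with the distance
    # between the centers of the positive- and negative-opinion groups.
    import numpy as np

    def polarization_index(opinions):
        """Opinions are values in [-1, 1]; returns a value in [0, 1]."""
        opinions = np.asarray(opinions, dtype=float)
        pos, neg = opinions[opinions > 0], opinions[opinions < 0]
        if len(pos) == 0 or len(neg) == 0:
            return 0.0
        size_balance = 1.0 - abs(len(pos) - len(neg)) / len(opinions)   # 1 for a 50/50 split
        center_distance = (pos.mean() - neg.mean()) / 2.0               # in [0, 1]
        return size_balance * center_distance

    # Perfectly split, strongly opposed population vs. a weakly opinionated one
    print(polarization_index(np.concatenate([np.full(500, 0.8), np.full(500, -0.8)])))   # ~0.8
    print(polarization_index(np.random.default_rng(0).uniform(-0.1, 0.1, 1000)))          # near 0
    ```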

  10. Tipping point analysis of a large ocean ambient sound record

    Science.gov (United States)

    Livina, Valerie N.; Harris, Peter; Brower, Albert; Wang, Lian; Sotirakopoulos, Kostas; Robinson, Stephen

    2017-04-01

    We study a long (2003-2015) high-resolution (250 Hz) sound pressure record provided by the Comprehensive Nuclear-Test-Ban Treaty Organisation (CTBTO) from the hydro-acoustic station Cape Leeuwin (Australia). We transform the hydrophone waveforms into five bands of 10-min-average sound pressure levels (including the third-octave band) and apply tipping point analysis techniques [1-3]. We report the results of the analysis of fluctuations and trends in the data and discuss the Big Data challenges in processing this record, including handling data segments of large size and possible HPC solutions. References: [1] Livina et al, GRL 2007, [2] Livina et al, Climate of the Past 2010, [3] Livina et al, Chaos 2015.
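
    The first processing step described above, reducing a hydrophone waveform to 10-minute average sound pressure levels, can be sketched as follows. No band filtering is shown, the reference pressure of 1 µPa is the usual underwater convention rather than a value stated in the record, and the waveform is synthetic.

    ```python
    # Reduce a pressure waveform to 10-minute average sound pressure levels.
    import numpy as np

    def ten_minute_spl(pressure_pa, fs_hz, p_ref=1e-6):
        """Average SPL (dB re 1 uPa) over consecutive 10-minute windows."""
        window = int(600 * fs_hz)                       # samples per 10-minute window
        n = (len(pressure_pa) // window) * window
        segments = np.asarray(pressure_pa[:n], dtype=float).reshape(-1, window)
        p_rms = np.sqrt(np.mean(segments ** 2, axis=1))
        return 20.0 * np.log10(p_rms / p_ref)

    rng = np.random.default_rng(0)
    fs = 250                                            # Hz, as in the record described above
    signal = rng.normal(0.0, 0.05, size=fs * 3600)      # one hour of synthetic pressure data (Pa)
    print(ten_minute_spl(signal, fs))                   # six 10-minute SPL values
    ```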

  11. Electrons at the monkey saddle: A multicritical Lifshitz point

    Science.gov (United States)

    Shtyk, A.; Goldstein, G.; Chamon, C.

    2017-01-01

    We consider two-dimensional interacting electrons at a monkey saddle with dispersion ∝ p_x^3 - 3 p_x p_y^2. Such a dispersion naturally arises at the multicritical Lifshitz point when three Van Hove saddles merge in an elliptical umbilic elementary catastrophe, which we show can be realized in biased bilayer graphene. A multicritical Lifshitz point of this kind can be identified by its signature Landau level behavior E_m ∝ (Bm)^(3/2) and related oscillations in thermodynamic and transport properties, such as de Haas-Van Alphen and Shubnikov-de Haas oscillations, whose period triples as the system crosses the singularity. We show, in the case of a single monkey saddle, that the noninteracting electron fixed point is unstable to interactions under the renormalization-group flow, developing either a superconducting instability or non-Fermi-liquid features. Biased bilayer graphene, where there are two non-nested monkey saddles at the K and K' points, exhibits an interplay of competing many-body instabilities, namely, s -wave superconductivity, ferromagnetism, and spin- and charge-density waves.

  12. Describing chaotic attractors: Regular and perpetual points

    Science.gov (United States)

    Dudkowski, Dawid; Prasad, Awadhesh; Kapitaniak, Tomasz

    2018-03-01

    We study the concepts of regular and perpetual points for describing the behavior of chaotic attractors in dynamical systems. The idea of these points, which have been recently introduced to theoretical investigations, is thoroughly discussed and extended into new types of models. We analyze the correlation between regular and perpetual points, as well as their relation with phase space, showing the potential usefulness of both types of points in the qualitative description of co-existing states. The ability of perpetual points in finding attractors is indicated, along with its potential cause. The location of chaotic trajectories and sets of considered points is investigated and the study on the stability of systems is shown. The statistical analysis of the observing desired states is performed. We focus on various types of dynamical systems, i.e., chaotic flows with self-excited and hidden attractors, forced mechanical models, and semiconductor superlattices, exhibiting the universality of appearance of the observed patterns and relations.
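
    Assuming the definition of a perpetual point that is common in this literature, a phase-space point where the acceleration of the flow vanishes while the velocity does not (fixed points are where the velocity itself vanishes), such a point can be located numerically. The flow below is chosen purely for illustration and is not one of the systems studied in the record.

    ```python
    # Locate a perpetual point of a simple planar flow: acceleration = 0, velocity != 0.
    import numpy as np
    from scipy.optimize import fsolve

    def velocity(state):
        x, y = state
        return np.array([y, -y - x + x**2])

    def acceleration(state):
        # time derivative of the velocity field along the flow (chain rule)
        x, y = state
        vx, vy = velocity(state)
        return np.array([vy, -vy - vx + 2.0 * x * vx])

    guess = [0.4, -0.3]
    candidate = fsolve(acceleration, guess)
    is_perpetual = np.allclose(acceleration(candidate), 0.0) and not np.allclose(velocity(candidate), 0.0)
    print(candidate, is_perpetual)   # expected near (0.5, -0.25), True
    ```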

  13. Implementation of hazard analysis and critical control point (HACCP) in dried anchovy production process

    Science.gov (United States)

    Citraresmi, A. D. P.; Wahyuni, E. E.

    2018-03-01

    The aim of this study was to inspect the implementation of Hazard Analysis and Critical Control Point (HACCP) for identification and prevention of potential hazards in the production process of dried anchovy at PT. Kelola Mina Laut (KML), Lobuk unit, Sumenep. A cold storage process is needed in each anchovy processing step in order to maintain its physical and chemical condition. In addition, the implementation of a quality assurance system should be undertaken to maintain product quality. The research was conducted using a survey method, by following the whole process of making anchovy from the receiving of raw materials to the packaging of the final product. The method of data analysis used was the descriptive analysis method. Implementation of HACCP at PT. KML, Lobuk unit, Sumenep was conducted by applying Pre Requisite Programs (PRP) and a preparation stage consisting of 5 initial stages and the 7 principles of HACCP. The results showed that CCPs were found in the boiling process, with the significant hazard of Listeria monocytogenes bacteria, and in the final sorting process, with the significant hazard of foreign material contamination in the product. Actions taken were controlling the boiling temperature at 100–105 °C for 3–5 minutes and training the sorting process employees.

  14. Object-Based Point Cloud Analysis of Full-Waveform Airborne Laser Scanning Data for Urban Vegetation Classification

    Directory of Open Access Journals (Sweden)

    Norbert Pfeifer

    2008-08-01

    Full Text Available Airborne laser scanning (ALS) is a remote sensing technique well-suited for 3D vegetation mapping and structure characterization because the emitted laser pulses are able to penetrate small gaps in the vegetation canopy. The backscattered echoes from the foliage, woody vegetation, the terrain, and other objects are detected, leading to a cloud of points. Higher echo densities (> 20 echoes/m²) and additional classification variables from full-waveform (FWF) ALS data, namely echo amplitude, echo width and information on multiple echoes from one shot, offer new possibilities in classifying the ALS point cloud. Currently FWF sensor information is hardly used for classification purposes. This contribution presents an object-based point cloud analysis (OBPA) approach, combining segmentation and classification of the 3D FWF ALS points, designed to detect tall vegetation in urban environments. The definition of tall vegetation includes trees and shrubs, but excludes grassland and herbage. In the applied procedure FWF ALS echoes are segmented by a seeded region growing procedure. All echoes, sorted in descending order by their surface roughness, are used as seed points. Segments are grown based on echo width homogeneity. Next, segment statistics (mean, standard deviation, and coefficient of variation) are calculated by aggregating echo features such as amplitude and surface roughness. For classification, a rule base is derived automatically from a training area using a statistical classification tree. To demonstrate our method we present data from three sites with around 500,000 echoes each. The accuracy of the classified vegetation segments is evaluated for two independent validation sites. In a point-wise error assessment, where the classification is compared with manually classified 3D points, completeness and correctness better than 90% are reached for the validation sites. In comparison to many other algorithms the proposed 3D point classification works on the original
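
    A much simplified version of the seeded region growing step described above is sketched here: seeds are processed in order of decreasing surface roughness and neighbours are added while their echo width stays close to the running segment mean. The point coordinates, echo widths, roughness values and thresholds are all placeholders, not parameters from the study.

    ```python
    # Seeded region growing over a 3D point cloud based on echo width homogeneity.
    import numpy as np
    from scipy.spatial import cKDTree

    def grow_segments(xyz, echo_width, roughness, radius=1.0, width_tol=0.5):
        tree = cKDTree(xyz)
        labels = np.full(len(xyz), -1, dtype=int)
        next_label = 0
        for seed in np.argsort(-roughness):              # roughest points first
            if labels[seed] != -1:
                continue
            labels[seed] = next_label
            queue, seg_widths = [seed], [echo_width[seed]]
            while queue:
                current = queue.pop()
                for nb in tree.query_ball_point(xyz[current], radius):
                    if labels[nb] == -1 and abs(echo_width[nb] - np.mean(seg_widths)) < width_tol:
                        labels[nb] = next_label
                        seg_widths.append(echo_width[nb])
                        queue.append(nb)
            next_label += 1
        return labels

    rng = np.random.default_rng(0)
    pts = rng.uniform(0, 10, size=(200, 3))              # synthetic echo coordinates
    widths = rng.normal(4.0, 0.2, size=200)              # synthetic echo widths (ns)
    rough = rng.uniform(0, 1, size=200)                  # synthetic surface roughness
    print(np.unique(grow_segments(pts, widths, rough)).size, "segments")
    ```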

  15. Chopped or Long Roughage: What Do Calves Prefer? Using Cross Point Analysis of Double Demand Functions

    Science.gov (United States)

    Webb, Laura E.; Bak Jensen, Margit; Engel, Bas; van Reenen, Cornelis G.; Gerrits, Walter J. J.; de Boer, Imke J. M.; Bokkers, Eddie A. M.

    2014-01-01

    The present study aimed to quantify calves' (Bos taurus) preference for long versus chopped hay and straw, and hay versus straw, using cross point analysis of double demand functions, in a context where energy intake was not a limiting factor. Nine calves, fed milk replacer and concentrate, were trained to work for roughage rewards from two simultaneously available panels. The cost (number of muzzle presses) required on the panels varied in each session (left panel/right panel): 7/35, 14/28, 21/21, 28/14, 35/7. Demand functions were estimated from the proportion of rewards achieved on one panel relative to the total number of rewards achieved in one session. Cross points (cp) were calculated as the cost at which an equal number of rewards was achieved from both panels. The deviation of the cp from the midpoint (here 21) indicates the strength of the preference. Calves showed a preference for long versus chopped hay (cp = 14.5; P = 0.004), and for hay versus straw (cp = 38.9; P = 0.004), both of which improve rumen function. Long hay may stimulate chewing more than chopped hay, and the preference for hay versus straw could be related to hedonic characteristics. No preference was found for chopped versus long straw (cp = 20.8; P = 0.910). These results could be used to improve the welfare of calves in production systems; for example, in systems where calves are fed hay along with high energy concentrate, providing long hay instead of chopped could promote roughage intake, rumen development, and rumination. PMID:24558426

  16. Chopped or long roughage: what do calves prefer? Using cross point analysis of double demand functions.

    Directory of Open Access Journals (Sweden)

    Laura E Webb

    Full Text Available The present study aimed to quantify calves' (Bos taurus) preference for long versus chopped hay and straw, and hay versus straw, using cross point analysis of double demand functions, in a context where energy intake was not a limiting factor. Nine calves, fed milk replacer and concentrate, were trained to work for roughage rewards from two simultaneously available panels. The cost (number of muzzle presses) required on the panels varied in each session (left panel/right panel): 7/35, 14/28, 21/21, 28/14, 35/7. Demand functions were estimated from the proportion of rewards achieved on one panel relative to the total number of rewards achieved in one session. Cross points (cp) were calculated as the cost at which an equal number of rewards was achieved from both panels. The deviation of the cp from the midpoint (here 21) indicates the strength of the preference. Calves showed a preference for long versus chopped hay (cp = 14.5; P = 0.004), and for hay versus straw (cp = 38.9; P = 0.004), both of which improve rumen function. Long hay may stimulate chewing more than chopped hay, and the preference for hay versus straw could be related to hedonic characteristics. No preference was found for chopped versus long straw (cp = 20.8; P = 0.910). These results could be used to improve the welfare of calves in production systems; for example, in systems where calves are fed hay along with high energy concentrate, providing long hay instead of chopped could promote roughage intake, rumen development, and rumination.
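
    A minimal sketch of how a cross point of this kind can be estimated from session data. The reward counts below are invented purely for illustration; only the session cost design (7/35 ... 35/7) and the idea of fitting a demand function and finding the cost at which both panels yield equal rewards follow the study described above.

        import numpy as np

        # Costs (muzzle presses per reward) on the left panel in the five session types;
        # the right-panel cost is the complementary value (7/35, 14/28, 21/21, 28/14, 35/7).
        left_cost = np.array([7, 14, 21, 28, 35])
        # Hypothetical rewards earned on the left panel out of 40 rewards per session
        # (made-up numbers, for illustration only).
        left_rewards = np.array([34, 28, 22, 13, 6])
        total_rewards = np.full(5, 40)

        # Proportion of rewards taken on the left panel, and a linear fit of the log-odds
        # of that proportion against the left-panel cost (one simple demand-function form).
        p_left = left_rewards / total_rewards
        logit = np.log(p_left / (1 - p_left))
        slope, intercept = np.polyfit(left_cost, logit, 1)

        # Cross point: the cost at which an equal number of rewards is taken from both
        # panels, i.e. where the fitted log-odds cross zero.
        cross_point = -intercept / slope
        print(f"estimated cross point ~ {cross_point:.1f} presses (design midpoint is 21)")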

  17. Consumers' price awareness at the point-of-selection

    DEFF Research Database (Denmark)

    Jensen, Birger Boutrup

    This paper focuses on consumers' price information processing at the point-of-selection. Specifically, it updates past results on consumers' price awareness at the point-of-selection, applying both a price-recall and a price-recognition test, and tests hypotheses on potential determinants of consumers' price awareness at the point-of-selection. Both price-memory tests resulted in higher measured price awareness than in any of the past studies. Results also indicate that price recognition is not the most appropriate measure. Finally, a discriminant analysis shows that consumers who are aware of the price at the point-of-selection are more deal prone, more low-price prone, and bought a special-priced item. Implications are discussed.

  18. The quantum spectral analysis of the two-dimensional annular billiard system

    International Nuclear Information System (INIS)

    Yan-Hui, Zhang; Ji-Quan, Zhang; Xue-You, Xu; Sheng-Lu, Lin

    2009-01-01

    Based on the extended closed-orbit theory together with spectral analysis, this paper studies the correspondence between quantum mechanics and its classical counterpart in a two-dimensional annular billiard. The results demonstrate that the Fourier-transformed quantum spectra are in very good accordance with the lengths of the classical ballistic trajectories, whereas the spectral strength is intimately associated with the shapes of the possible open orbits connecting two arbitrary points in the annular cavity. This approach facilitates an intuitive understanding of basic quantum features such as quantum interference and the locations of the wavefunctions, and allows quantitative calculations in the range of high energies, where full quantum calculations may become impractical in general. This treatment provides a thread for exploring the properties of microjunction transport and even quantum chaos in much more general systems. (general)

  19. A critical analysis of the tender points in fibromyalgia.

    Science.gov (United States)

    Harden, R Norman; Revivo, Gadi; Song, Sharon; Nampiaparampil, Devi; Golden, Gary; Kirincic, Marie; Houle, Timothy T

    2007-03-01

    To pilot methodologies designed to critically assess the American College of Rheumatology's (ACR) diagnostic criteria for fibromyalgia. Prospective, psychophysical testing. An urban teaching hospital. Twenty-five patients with fibromyalgia and 31 healthy controls (convenience sample). Pressure pain threshold was determined at the 18 ACR tender points and five sham points using an algometer (dolorimeter). The patients' "algometric total scores" (sums of the patients' average pain thresholds at the 18 tender points) were derived, as well as pain thresholds across sham points. The "algometric total score" could differentiate patients with fibromyalgia from normals with an accuracy of 85.7% (P pain across sham points than across ACR tender points, sham points also could be used for diagnosis (85.7%; Ps tested vs other painful conditions. The points specified by the ACR were only modestly superior to sham points in making the diagnosis. Most importantly, this pilot suggests single points, smaller groups of points, or sham points may be as effective in diagnosing fibromyalgia as the use of all 18 points, and suggests methodologies to definitively test that hypothesis.

  20. Two α1-Globin Gene Point Mutations Causing Severe Hb H Disease.

    Science.gov (United States)

    Jiang, Hua; Huang, Lv-Yin; Zhen, Li; Jiang, Fan; Li, Dong-Zhi

    Hb H disease is generally a moderate form of α-thalassemia (α-thal) that rarely requires regular blood transfusions. In this study, two Chinese families with members carrying transfusion-dependent Hb H disease were investigated for rare mutations on the α-globin genes (HBA1, HBA2). In one family, Hb Zürich-Albisrieden [α59(E8)Gly→Arg; HBA1: c.178G>C] in combination with the Southeast Asian (--SEA) deletion was the defect responsible for the severe phenotype. In the other family, a novel hemoglobin (Hb) variant, named Hb Sichuan (HBA1: c.393_394insT), causes α-thal and a severe phenotype when associated with the --SEA deletion. As these two HBA1 mutations can present as α-thal requiring continuous blood transfusions, it is important to take this point into account when detecting carriers, especially in couples in which one partner is already a known α0-thal carrier.

  1. Analysis on Single Point Vulnerabilities of Plant Control System

    International Nuclear Information System (INIS)

    Chi, Moon Goo; Lee, Eun Chan; Bae, Yeon Kyoung

    2011-01-01

    The Plant Control System (PCS) is a system that controls pumps, valves, dampers, etc. in nuclear power plants with an OPR-1000 design. When there is a failure or spurious actuation of the critical components in the PCS, it can result in unexpected plant trips or transients. From this viewpoint, single point vulnerabilities are evaluated in detail using failure mode effect analyses (FMEA) and fault tree analyses (FTA). This evaluation demonstrates that the PCS has many vulnerable components, and the analysis results are provided for OPR-1000 plants for reliability improvements that can reduce their vulnerabilities.

  2. Analysis on Single Point Vulnerabilities of Plant Control System

    Energy Technology Data Exchange (ETDEWEB)

    Chi, Moon Goo; Lee, Eun Chan; Bae, Yeon Kyoung [Korea Hydro and Nuclear Power Co., Daejeon (Korea, Republic of)

    2011-08-15

    The Plant Control System (PCS) is a system that controls pumps, valves, dampers, etc. in nuclear power plants with an OPR-1000 design. When there is a failure or spurious actuation of the critical components in the PCS, it can result in unexpected plant trips or transients. From this viewpoint, single point vulnerabilities are evaluated in detail using failure mode effect analyses (FMEA) and fault tree analyses (FTA). This evaluation demonstrates that the PCS has many vulnerable components and the analysis results are provided for OPR-1000 plants for reliability improvements that can reduce their vulnerabilities.
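
    As a rough illustration of the fault-tree side of such an evaluation, the sketch below computes the probability of an undesired top event (for example, a spurious actuation leading to a plant transient) from assumed basic-event probabilities. The gate structure and all numbers are hypothetical and are not taken from the OPR-1000 analysis.

        # Hypothetical mini fault tree: the top event occurs if EITHER a single
        # non-redundant controller output fails spuriously (a single point vulnerability)
        # OR both channels of a redundant pair fail together.
        p_single_output = 1.0e-3    # assumed per-demand failure probability (illustrative)
        p_channel = 5.0e-3          # assumed per-demand failure probability of one redundant channel

        p_redundant_pair = p_channel ** 2                 # AND gate (independent channels)
        # OR gate via the complement rule: 1 - product of the survival probabilities.
        p_top = 1.0 - (1.0 - p_single_output) * (1.0 - p_redundant_pair)

        print(f"redundant pair contribution: {p_redundant_pair:.2e}")
        print(f"top event probability:       {p_top:.2e}")  # dominated by the single point vulnerability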

  3. Publication point indicators

    DEFF Research Database (Denmark)

    Elleby, Anita; Ingwersen, Peter

    2010-01-01

    The paper presents comparative analyses of two publication point systems, the Norwegian system and the in-house system of the interdisciplinary Danish Institute of International Studies (DIIS), used as a case in the study for publications published in 2006, and compares central citation-based indicators with novel publication point indicators (PPIs) that are formalized and exemplified: the Cumulated Publication Point Indicator (CPPI), which graphically illustrates the cumulated gain of obtained vs. ideal points, both seen as vectors, and the normalized Cumulated Publication Point Index (nCPPI), which represents the cumulated gain of publication success as index values, either graphically ... Two diachronic citation windows are applied: 2006-07 and 2006-08. Web of Science (WoS) as well as Google Scholar (GS) are applied to observe the cite delay and citedness for the different document types published by DIIS...

  4. Identification of critical points of thermal environment in broiler production

    Directory of Open Access Journals (Sweden)

    AG Menezes

    2010-03-01

    Full Text Available This paper describes an exploratory study carried out to determine critical control points and possible risks in hatcheries and broiler farms. The study was based on the identification of the potential hazards existing in broiler production, from the hatchery to the broiler farm, identifying critical control points and defining critical limits. The following rooms were analyzed in the hatchery: egg cold storage, pre-heating, incubator, and hatcher rooms. Two broiler houses were studied on two different farms. The following data were collected in the hatchery and broiler houses: temperature (ºC), relative humidity (%), air velocity (m s-1), ammonia levels, and light intensity (lx). In the broiler house study, a questionnaire using information from the Broiler Production Good Practices (BPGP) manual was applied, and workers were interviewed. Risk analysis matrices were built to determine Critical Control Points (CCP). After data collection, Statistical Process Control (SPC) was applied through the analysis of the Process Capacity Index, using the software program Minitab15®. Environmental temperature and relative humidity were the critical points identified in the hatchery and on both farms. The classes determined as critical control points in the broiler houses were poultry litter, feeding, drinking water, workers' hygiene and health, management and biosecurity, norms and legislation, facilities, and activity planning. It was concluded that CCP analysis, associated with SPC control tools and guidelines of good production practices, may contribute to improving quality control in poultry production.

  5. Low-dimensional analysis, using POD, for two mixing layer-wake interactions

    International Nuclear Information System (INIS)

    Braud, Caroline; Heitz, Dominique; Arroyo, Georges; Perret, Laurent; Delville, Joël; Bonnet, Jean-Paul

    2004-01-01

    The mixing layer-wake interaction is studied experimentally in the framework of two flow configurations. In the first one, the initial conditions of the mixing layer are modified by using a thick trailing edge, so that a wake effect is superimposed on the mixing layer from its origin (blunt trailing edge). In the second flow configuration, a canonical mixing layer is perturbed in its asymptotic region by the wake of a cylinder arranged perpendicular to the plane of the mixing layer. These interactions are analyzed mainly by using two-point velocity correlations and the proper orthogonal decomposition (POD). The two flow configurations differ in the degree of complexity they involve: the former is mainly 2D while the latter is highly 3D. The blunt trailing edge configuration is analyzed by using rakes of hot-wire probes. This flow configuration is found to be considerably different from a conventional mixing layer. It appears in particular that the scale of the large structures depends only on the trailing edge thickness and does not grow during the downstream evolution. A criterion, based on POD, is proposed in order to separate wake-dominated and mixing-layer-dominated areas in the downstream evolution of the flow. The complex 3D dynamical behaviour resulting from the interaction between the canonical plane mixing layer and the wake of a cylinder is investigated using data from particle image velocimetry measurements. An analysis of the velocity correlations shows different length scales in the regions dominated by wake-like structures and shear-layer-type structures. In order to characterize the particular organization in the plane of symmetry, a POD-Galerkin projection of the Navier-Stokes equations is performed in this plane. This leads to a low-dimensional dynamical system that allows the relationship between the dominant frequencies to be analysed. A reconstruction of the dominant periodic motion suspected from previous studies is

  6. Cost analysis of premixed multichamber bags versus compounded parenteral nutrition: breakeven point.

    Science.gov (United States)

    Bozat, Erkut; Korubuk, Gamze; Onar, Pelin; Abbasoglu, Osman

    2014-02-01

    Industrially premixed multichamber bags or hospital-manufactured compounded products can be used for parenteral nutrition. The aim of this study was to compare the cost of these 2 approaches. Costs of compounded parenteral nutrition bags in a university hospital were calculated. A total of 600 bags that were administered during 34 days between December 10, 2009 and February 17, 2010 were included in the analysis. For quality control, specific gravity evaluation of the filled bags was performed. It was calculated that the variable cost of a hospital-compounded bag was $26.15. If the annual fixed costs are taken into consideration, the production cost reaches $36.09 for each unit. It was estimated that the cost of the corresponding multichamber bag was $37.79. Taking the fixed and the variable costs into account, the breakeven point of the hospital-compounded and the premixed multichamber bags was seen at 5,404 units per year. In the specific gravity evaluation, it was observed that the mean and interval values were inside the upper and lower control margins. In this analysis, usage of hospital-compounded parenteral nutrition bags showed a cost advantage in hospitals that treat more than 15 patients per day. In small-volume hospitals, premixed multichamber bags may be more beneficial.
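
    A minimal sketch of the breakeven arithmetic described above. The per-bag costs are taken from the abstract; the annual fixed cost is an assumed figure chosen only so that the illustration is roughly consistent with the reported breakeven of 5,404 units per year.

        # Unit costs from the abstract (US dollars per bag).
        variable_cost_compounded = 26.15   # hospital-compounded bag, variable cost only
        cost_premixed = 37.79              # industrially premixed multichamber bag

        # Assumed annual fixed cost of the hospital compounding operation (not stated
        # explicitly in the abstract; chosen to reproduce the reported breakeven).
        annual_fixed_cost = 62_900.0

        # Each compounded bag saves the price difference, but the savings must also
        # cover the fixed costs of running the compounding unit.
        saving_per_bag = cost_premixed - variable_cost_compounded
        breakeven_bags_per_year = annual_fixed_cost / saving_per_bag

        print(f"breakeven at about {breakeven_bags_per_year:.0f} bags/year")     # ~5,400
        print(f"i.e. roughly {breakeven_bags_per_year / 365:.1f} patients/day")  # ~15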

  7. Two gap superconductivity in Ba0.55K0.45Fe2As2 single crystals studied by the directional point-contact Andreev reflection spectroscopy

    International Nuclear Information System (INIS)

    Szabo, P.; Pribulova, Z.; Pristas, G.; Bud'ko, S.L.; Canfield, P.C.; Samuely, P.

    2009-01-01

    The first directional point-contact Andreev reflection spectroscopy on Ba0.55K0.45Fe2As2 single crystals is presented. The spectra show significant differences when measured in the ab plane in comparison with those measured in the c direction. In the latter case no traces of a superconducting energy gap could be found, just a reduced point-contact conductance persisting up to about 100 K and indicating a reduced density of states. On the other hand, within the ab plane two nodeless superconducting energy gaps, ΔS ∼ 2-5 meV and ΔL ∼ 9-11 meV, are detected.

  8. DNA barcode analysis of butterfly species from Pakistan points towards regional endemism.

    Science.gov (United States)

    Ashfaq, Muhammad; Akhtar, Saleem; Khan, Arif M; Adamowicz, Sarah J; Hebert, Paul D N

    2013-09-01

    DNA barcodes were obtained for 81 butterfly species belonging to 52 genera from sites in north-central Pakistan to test the utility of barcoding for their identification and to gain a better understanding of regional barcode variation. These species represent 25% of the butterfly fauna of Pakistan and belong to five families, although the Nymphalidae were dominant, comprising 38% of the total specimens. Barcode analysis showed that maximum conspecific divergence was 1.6%, while there was 1.7-14.3% divergence from the nearest neighbour species. Barcode records for 55 species showed Barcode of Life Data Systems (BOLD), but only 26 of these cases involved specimens from neighbouring India and Central Asia. Analysis revealed that most species showed little incremental sequence variation when specimens from other regions were considered, but a threefold increase was noted in a few cases. There was a clear gap between maximum intraspecific and minimum nearest neighbour distance for all 81 species. Neighbour-joining cluster analysis showed that members of each species formed a monophyletic cluster with strong bootstrap support. The barcode results revealed two provisional species that could not be clearly linked to known taxa, while 24 other species gained their first coverage. Future work should extend the barcode reference library to include all butterfly species from Pakistan as well as neighbouring countries to gain a better understanding of regional variation in barcode sequences in this topographically and climatically complex region. © 2013 The Authors. Molecular Ecology Resources published by John Wiley & Sons Ltd.

  9. Thermodynamic analysis of the two-phase ejector air-conditioning system for buses

    International Nuclear Information System (INIS)

    Ünal, Şaban; Yilmaz, Tuncay

    2015-01-01

    Air-conditioning compressors of buses are usually operated with power taken from the engine of the bus. Therefore, an improvement in the air-conditioning system will reduce the fuel consumption of the bus. The improvement in the coefficient of performance (COP) of the air-conditioning system can be provided by using a two-phase ejector as an expansion valve in the air-conditioning system. In this study, the thermodynamic analysis of a bus air-conditioning system enhanced with a two-phase ejector and two evaporators is performed. The thermodynamic analysis is made assuming that the mixing process in the ejector occurs at constant cross-sectional area and constant pressure. The increase in the COP with respect to the conventional system is analyzed in terms of the subcooling, condenser and evaporator temperatures. The analysis shows that the COP improvement of the system by using the two-phase ejector as an expansion device is 15%, depending on the design parameters of the existing bus air-conditioning system. - Highlights: • Thermodynamic analysis of the two-phase ejector refrigeration system. • Analysis of the COP increase rate of the bus air-conditioning system. • Analysis of the entrainment ratio of the two-phase ejector refrigeration system

  10. Bias due to two-stage residual-outcome regression analysis in genetic association studies.

    Science.gov (United States)

    Demissie, Serkalem; Cupples, L Adrienne

    2011-11-01

    Association studies of risk factors and complex diseases require careful assessment of potential confounding factors. Two-stage regression analysis, sometimes referred to as residual- or adjusted-outcome analysis, has been increasingly used in association studies of single nucleotide polymorphisms (SNPs) and quantitative traits. In this analysis, first, a residual outcome is calculated from a regression of the outcome variable on covariates, and then the relationship between the adjusted outcome and the SNP is evaluated by a simple linear regression of the adjusted outcome on the SNP. In this article, we examine the performance of this two-stage analysis as compared with multiple linear regression (MLR) analysis. Our findings show that when a SNP and a covariate are correlated, the two-stage approach results in a biased genotypic effect and loss of power. Bias is always toward the null and increases with the squared correlation between the SNP and the covariate (r²). For example, for r² = 0, 0.1, and 0.5, two-stage analysis results in, respectively, 0, 10, and 50% attenuation in the SNP effect. As expected, MLR was always unbiased. Since individual SNPs often show little or no correlation with covariates, a two-stage analysis is expected to perform as well as MLR in many genetic studies; however, it produces considerably different results from MLR and may lead to incorrect conclusions when independent variables are highly correlated. While a useful alternative to MLR when the SNP and covariates are essentially uncorrelated, the two-stage approach has serious limitations. Its use as a simple substitute for MLR should be avoided. © 2011 Wiley Periodicals, Inc.
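
    A small simulation in the spirit of the comparison described above, showing how the two-stage (residual-outcome) estimate is attenuated when the SNP and the covariate are correlated while MLR stays unbiased. The sample size, effect sizes, and correlation values are assumptions chosen only to reproduce the 0/10/50% attenuation pattern.

        import numpy as np

        rng = np.random.default_rng(1)
        n, beta_g, beta_c = 200_000, 0.5, 1.0

        for r2 in (0.0, 0.1, 0.5):
            r = np.sqrt(r2)
            g = rng.normal(size=n)                              # SNP (standardized, illustrative)
            c = r * g + np.sqrt(1.0 - r2) * rng.normal(size=n)  # covariate correlated with the SNP
            y = beta_g * g + beta_c * c + rng.normal(size=n)    # quantitative trait

            # Multiple linear regression (MLR): y on intercept, SNP, and covariate.
            X = np.column_stack([np.ones(n), g, c])
            b_mlr = np.linalg.lstsq(X, y, rcond=None)[0][1]

            # Two-stage residual-outcome analysis: adjust y for the covariate first,
            # then regress the residual on the SNP alone.
            Xc = np.column_stack([np.ones(n), c])
            resid = y - Xc @ np.linalg.lstsq(Xc, y, rcond=None)[0]
            Xg = np.column_stack([np.ones(n), g])
            b_two_stage = np.linalg.lstsq(Xg, resid, rcond=None)[0][1]

            print(f"r2={r2:.1f}  MLR={b_mlr:.3f}  two-stage={b_two_stage:.3f}"
                  f"  (expected attenuation ~{100 * r2:.0f}%)")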

  11. Impedance analysis of acupuncture points and pathways

    International Nuclear Information System (INIS)

    Teplan, Michal; Kukucka, Marek; Ondrejkovicová, Alena

    2011-01-01

    The investigation of impedance characteristics of acupuncture points over the acoustic to radio frequency range is addressed. Discernment and localization of acupuncture points was unsuccessfully attempted by the impedance map technique in an initial single-subject study. Vector impedance analyses determined possible resonant zones in the MHz region.

  12. Simultaneous colour visualizations of multiple ALS point cloud attributes for land cover and vegetation analysis

    Science.gov (United States)

    Zlinszky, András; Schroiff, Anke; Otepka, Johannes; Mandlburger, Gottfried; Pfeifer, Norbert

    2014-05-01

    LIDAR point clouds hold valuable information for land cover and vegetation analysis, not only in the spatial distribution of the points but also in their various attributes. However, LIDAR point clouds are rarely used for visual interpretation, since for most users, the point cloud is difficult to interpret compared to passive optical imagery. Meanwhile, point cloud viewing software is available allowing interactive 3D interpretation, but typically only one attribute at a time. This results in a large number of points with the same colour, crowding the scene and often obscuring detail. We developed a scheme for mapping information from multiple LIDAR point attributes to the Red, Green, and Blue channels of a widely used LIDAR data format, which are otherwise mostly used to add information from imagery to create "photorealistic" point clouds. The possible combinations of parameters are therefore represented in a wide range of colours, but relative differences in individual parameter values of points can be well understood. The visualization was implemented in OPALS software, using a simple and robust batch script, and is viewer independent since the information is stored in the point cloud data file itself. In our case, the following colour channel assignment delivered best results: Echo amplitude in the Red, echo width in the Green and normalized height above a Digital Terrain Model in the Blue channel. With correct parameter scaling (but completely without point classification), points belonging to asphalt and bare soil are dark red, low grassland and crop vegetation are bright red to yellow, shrubs and low trees are green and high trees are blue. Depending on roof material and DTM quality, buildings are shown from red through purple to dark blue. Erroneously high or low points, or points with incorrect amplitude or echo width usually have colours contrasting from terrain or vegetation. This allows efficient visual interpretation of the point cloud in planar
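
    A minimal sketch of the attribute-to-colour mapping described above (echo amplitude to Red, echo width to Green, normalized height above the DTM to Blue). The attribute arrays and the percentile-based scaling are assumptions made for illustration; the original workflow used an OPALS batch script rather than Python.

        import numpy as np

        def to_byte(values, lo_pct=2, hi_pct=98):
            """Scale an attribute to 0-255 using robust (percentile) limits."""
            lo, hi = np.percentile(values, [lo_pct, hi_pct])
            scaled = np.clip((values - lo) / (hi - lo), 0.0, 1.0)
            return (scaled * 255).astype(np.uint8)

        # Hypothetical per-point attributes (normally read from the ALS point cloud file).
        rng = np.random.default_rng(0)
        n = 1000
        amplitude  = rng.gamma(2.0, 20.0, n)     # echo amplitude
        echo_width = rng.normal(4.5, 0.8, n)     # echo width [ns]
        height_dtm = rng.uniform(0.0, 30.0, n)   # normalized height above the DTM [m]

        # Pack the three scaled attributes into the RGB channels of the point records.
        rgb = np.column_stack([to_byte(amplitude),     # Red   <- echo amplitude
                               to_byte(echo_width),    # Green <- echo width
                               to_byte(height_dtm)])   # Blue  <- height above terrain
        print(rgb.shape, rgb.dtype)                    # (1000, 3) uint8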

  13. Patron Preference in Reference Service Points.

    Science.gov (United States)

    Morgan, Linda

    1980-01-01

    Behavior of patrons choosing between a person sitting at a counter and one sitting at a desk at each of two reference points was observed at the reference department during remodeling at the M. D. Anderson Library of the University of Houston. Results showed a statistically relevant preference for the counter. (Author/JD)

  14. Evaluation of use of MPAD trajectory tape and number of orbit points for orbiter mission thermal predictions

    Science.gov (United States)

    Vogt, R. A.

    1979-01-01

    The use of the mission planning and analysis division (MPAD) common-format trajectory data tape to predict temperatures for preflight and postflight mission analysis is presented and evaluated. All of the analyses utilized the latest Space Transportation System 1 flight (STS-1) MPAD trajectory tape and the simplified '136-node' midsection/payload bay thermal math model. For the first 6.7 hours of the STS-1 flight profile, transient temperatures are presented for selected nodal locations with the current standard method and the trajectory tape method. Whether the differences are considered significant or not depends upon the viewpoint. Other transient temperature predictions are also presented. These results were obtained to investigate an initial concern that the predicted temperature differences between the two methods would not only be caused by the inaccuracies of the current method's assumed nominal attitude profile but also be affected by a lack of a sufficient number of orbit points in the current method. Comparison between 6, 12, and 24 orbit-point parameters showed a surprising insensitivity to the number of orbit points.

  15. Analytic result for the two-loop six-point NMHV amplitude in N=4 super Yang-Mills theory

    CERN Document Server

    Dixon, Lance J.; Henn, Johannes M.

    2012-01-01

    We provide a simple analytic formula for the two-loop six-point ratio function of planar N = 4 super Yang-Mills theory. This result extends the analytic knowledge of multi-loop six-point amplitudes beyond those with maximal helicity violation. We make a natural ansatz for the symbols of the relevant functions appearing in the two-loop amplitude, and impose various consistency conditions, including symmetry, the absence of spurious poles, the correct collinear behaviour, and agreement with the operator product expansion for light-like (super) Wilson loops. This information reduces the ansatz to a small number of relatively simple functions. In order to fix these parameters uniquely, we utilize an explicit representation of the amplitude in terms of loop integrals that can be evaluated analytically in various kinematic limits. The final compact analytic result is expressed in terms of classical polylogarithms, whose arguments are rational functions of the dual conformal cross-ratios, plus precisely two function...

  16. Screen-Capturing System with Two-Layer Display for PowerPoint Presentation to Enhance Classroom Education

    Science.gov (United States)

    Lai, Yen-Shou; Tsai, Hung-Hsu; Yu, Pao-Ta

    2011-01-01

    This paper proposes a new presentation system integrating a Microsoft PowerPoint presentation in a two-layer method, called the TL system, to promote learning in a physical classroom. With the TL system, teachers can readily control hints or annotations as a way of making them visible or invisible to students so as to reduce information load. In…

  17. Automated extraction and analysis of rock discontinuity characteristics from 3D point clouds

    Science.gov (United States)

    Bianchetti, Matteo; Villa, Alberto; Agliardi, Federico; Crosta, Giovanni B.

    2016-04-01

    A reliable characterization of fractured rock masses requires an exhaustive geometrical description of discontinuities, including orientation, spacing, and size. These are required to describe discontinuum rock mass structure, perform Discrete Fracture Network and DEM modelling, or provide input for rock mass classification or equivalent continuum estimate of rock mass properties. Although several advanced methodologies have been developed in the last decades, a complete characterization of discontinuity geometry in practice is still challenging, due to scale-dependent variability of fracture patterns and difficult accessibility to large outcrops. Recent advances in remote survey techniques, such as terrestrial laser scanning and digital photogrammetry, allow a fast and accurate acquisition of dense 3D point clouds, which promoted the development of several semi-automatic approaches to extract discontinuity features. Nevertheless, these often need user supervision on algorithm parameters which can be difficult to assess. To overcome this problem, we developed an original Matlab tool, allowing fast, fully automatic extraction and analysis of discontinuity features with no requirements on point cloud accuracy, density and homogeneity. The tool consists of a set of algorithms which: (i) process raw 3D point clouds, (ii) automatically characterize discontinuity sets, (iii) identify individual discontinuity surfaces, and (iv) analyse their spacing and persistence. The tool operates in either a supervised or unsupervised mode, starting from an automatic preliminary exploration data analysis. The identification and geometrical characterization of discontinuity features is divided in steps. First, coplanar surfaces are identified in the whole point cloud using K-Nearest Neighbor and Principal Component Analysis algorithms optimized on point cloud accuracy and specified typical facet size. Then, discontinuity set orientation is calculated using Kernel Density Estimation and
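
    A minimal sketch of the first step described above: estimating a local plane (normal vector and a planarity score) for every point from its k nearest neighbours using PCA. The synthetic point cloud, the neighbourhood size, and the planarity measure are illustrative assumptions, not the authors' Matlab implementation.

        import numpy as np
        from scipy.spatial import cKDTree

        def local_planes(points, k=20):
            """Return per-point unit normals and a planarity score from k-NN PCA."""
            tree = cKDTree(points)
            _, idx = tree.query(points, k=k)
            normals = np.empty_like(points)
            planarity = np.empty(len(points))
            for i, nbrs in enumerate(idx):
                nbr_pts = points[nbrs] - points[nbrs].mean(axis=0)
                # Eigen-decomposition of the local covariance (PCA); eigenvalues ascending.
                evals, evecs = np.linalg.eigh(nbr_pts.T @ nbr_pts)
                normals[i] = evecs[:, 0]                         # smallest eigenvector = plane normal
                planarity[i] = (evals[1] - evals[0]) / evals[2]  # close to 1 for planar neighbourhoods
            return normals, planarity

        # Hypothetical cloud: a noisy tilted plane, for illustration only.
        rng = np.random.default_rng(0)
        xy = rng.uniform(0.0, 10.0, size=(2000, 2))
        z = 0.3 * xy[:, 0] + 0.1 * xy[:, 1] + rng.normal(0.0, 0.02, 2000)
        normals, planarity = local_planes(np.column_stack([xy, z]))
        print(normals[0], planarity.mean())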

  18. Accuracy of multi-point boundary crossing time analysis

    Directory of Open Access Journals (Sweden)

    J. Vogt

    2011-12-01

    Full Text Available Recent multi-spacecraft studies of solar wind discontinuity crossings using the timing (boundary plane triangulation) method gave boundary parameter estimates that are significantly different from those of the well-established single-spacecraft minimum variance analysis (MVA) technique. A large survey of directional discontinuities in Cluster data turned out to be particularly inconsistent in the sense that multi-point timing analyses did not identify any rotational discontinuities (RDs), whereas the MVA results of the individual spacecraft suggested that RDs form the majority of events. To make multi-spacecraft studies of discontinuity crossings more conclusive, the present report addresses the accuracy of the timing approach to boundary parameter estimation. Our error analysis is based on the reciprocal vector formalism and takes into account uncertainties both in crossing times and in the spacecraft positions. A rigorous error estimation scheme is presented for the general case of correlated crossing time errors and arbitrary spacecraft configurations. Crossing time error covariances are determined through cross correlation analyses of the residuals. The principal influence of the spacecraft array geometry on the accuracy of the timing method is illustrated using error formulas for the simplified case of mutually uncorrelated and identical errors at different spacecraft. The full error analysis procedure is demonstrated for a solar wind discontinuity as observed by the Cluster FGM instrument.
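
    A minimal sketch of the basic four-spacecraft timing (boundary plane triangulation) estimate whose accuracy is analysed above: given the spacecraft positions and crossing times of a planar boundary moving at constant velocity, solve for the slowness vector m = n/V, from which the boundary normal n and speed V follow. The positions, normal, and speed below are invented for illustration.

        import numpy as np

        # Hypothetical spacecraft positions [km].
        r = np.array([[  0.0,   0.0,   0.0],
                      [100.0,  10.0,   5.0],
                      [ 20.0, 120.0,  15.0],
                      [ 15.0,  25.0, 110.0]])
        true_n = np.array([0.8, 0.36, 0.48])   # unit normal used to fabricate the crossing times
        true_V = 50.0                          # boundary speed [km/s]
        t = r @ true_n / true_V                # planar boundary: n.(r_i - r_1) = V (t_i - t_1)

        # Timing method: solve (r_i - r_1) . m = (t_i - t_1) for the slowness vector m = n/V.
        A = r[1:] - r[0]
        b = t[1:] - t[0]
        m = np.linalg.solve(A, b)

        V = 1.0 / np.linalg.norm(m)
        n = m * V
        print("estimated normal:", n, " speed [km/s]:", V)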

  19. Experiment and simulation study on unidirectional carbon fiber composite component under dynamic 3 point bending loading

    Energy Technology Data Exchange (ETDEWEB)

    Zhou, Guowei; Sun, Qingping; Zeng, Danielle; Li, Dayong; Su, Xuming

    2018-04-10

    In the current work, unidirectional (UD) carbon fiber composite hat-section components with two different layups are studied under dynamic 3-point bending loading. The experiments are performed at various impact velocities, and the effects of impactor velocity and layup on the acceleration histories are compared. A macro model is established with LS-DYNA for a more detailed study. The simulation results show that delamination plays an important role during the dynamic 3-point bending test. Based on the analysis with a high-speed camera, the sidewall of the hat-section shows significant buckling rather than failure. Without considering the delamination, the current material model cannot capture the post-failure behaviour correctly. The sidewall delamination is modeled by assuming a larger failure strain together with slim parameters, and the simulation results for the different impact velocities and layups match the experimental results reasonably well.

  20. Unique solvability of some two-point boundary value problems for linear functional differential equations with singularities

    Czech Academy of Sciences Publication Activity Database

    Rontó, András; Samoilenko, A. M.

    2007-01-01

    Vol. 41 (2007), pp. 115-136. ISSN 1512-0015. R&D Projects: GA ČR(CZ) GA201/06/0254. Institutional research plan: CEZ:AV0Z10190503. Keywords: two-point problem; functional differential equation; singular boundary problem. Subject RIV: BA - General Mathematics

  1. Realization of Copper Melting Point for Thermocouple Calibrations

    Directory of Open Access Journals (Sweden)

    Y. A. ABDELAZIZ

    2011-08-01

    Full Text Available Although the temperature stability and uncertainty of the freezing plateau are better than those of the melting plateau for most thermometry fixed points, melting plateaus are easier to realize than freezing plateaus for metal fixed points. It would be convenient if melting points could be used instead of freezing points in the calibration of standard noble metal thermocouples, because of the easier realization and longer plateau duration of melting plateaus. In this work a comparison between the melting and freezing points of copper (Cu) was carried out using standard noble metal thermocouples. Platinum/platinum-10% rhodium (type S), platinum-30% rhodium/platinum-6% rhodium (type B) and platinum/palladium (Pt/Pd) thermocouples were used in this study. An uncertainty budget analysis of the melting points and freezing points is presented. The experimental results show that it is possible to replace the freezing point with the melting point of the copper cell in the calibration of standard noble metal thermocouples in secondary-level laboratories, provided that optimal methods for the realization of melting points are used.

  2. Review Team Focused Modeling Analysis of Radial Collector Well Operation on the Hypersaline Groundwater Plume beneath the Turkey Point Site near Homestead, Florida

    Energy Technology Data Exchange (ETDEWEB)

    Oostrom, Martinus [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Vail, Lance W. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2016-08-01

    Researchers at Pacific Northwest National Laboratory served as members of a U.S. Nuclear Regulatory Commission review team for the Florida Power & Light Company’s application for two combined construction permits and operating licenses (combined licenses or COLs) for two proposed new reactor units—Turkey Point Units 6 and 7. The review team evaluated the environmental impacts of the proposed action based on the October 29, 2014 revision of the COL application, including the Environmental Report, responses to requests for additional information, and supplemental information. As part of this effort, team members tasked with assessing the environmental effects of proposed construction and operation of Units 6 and 7 at the Turkey Point site reviewed two separate modeling studies that analyzed the interaction between surface water and groundwater that would be altered by the operation of radial collector wells (RCWs) at the site. To further confirm their understanding of the groundwater hydrodynamics and to consider whether certain actions, proposed after the two earlier modeling studies were completed, would alter the earlier conclusions documented by the review team in their draft environmental impact statement (EIS; NRC 2015), a third modeling analysis was performed. The third modeling analysis is discussed in this report.

  3. Dosimetric analysis at ICRU reference points in HDR-brachytherapy of cervical carcinoma.

    Science.gov (United States)

    Eich, H T; Haverkamp, U; Micke, O; Prott, F J; Müller, R P

    2000-01-01

    In vivo dosimetry in the bladder and rectum, as well as determining doses at the reference points suggested in ICRU Report 38, contribute to quality assurance in HDR brachytherapy of cervical carcinoma, especially to minimize side effects. In order to gain information regarding the radiation exposure at the ICRU reference points in the rectum, bladder, ureter and regional lymph nodes, these doses were calculated (by digitization) by means of orthogonal radiographs of 11 applications in patients with cervical carcinoma who received primary radiotherapy. In addition, the dose at the ICRU rectum reference point was compared with the results of in vivo measurements in the rectum. The in vivo measurements were a factor of 1.5 below the doses determined for the ICRU rectum reference point (4.05 +/- 0.68 Gy versus 6.11 +/- 1.63 Gy). Reasons for this were: calibration errors, non-orthogonal radiographs, movement of applicator and probe in the time span between X-ray and application, and missing contact between the probe and the anterior rectal wall. The standard deviation of the calculations at the ICRU reference points was on average +/- 30%. Possible reasons for the relatively large standard deviation were difficulties in defining the points, identifying them on the radiographs, and the different locations of the applicators. Although 3D CT-, US- or MR-based treatment planning using dose-volume histogram analysis is more and more established, this simple procedure of marking and digitizing the ICRU reference points lengthened treatment planning by only 5 to 10 minutes. The advantages of in vivo dosimetry are its easy practicability and the possibility of determining rectum doses during irradiation. The advantages of computer-aided planning at the ICRU reference points are that the calculations are available before irradiation and can still be taken into account for treatment planning. Both methods should be applied in HDR brachytherapy of cervical carcinoma.

  4. Ergodic Capacity Analysis of Free-Space Optical Links with Nonzero Boresight Pointing Errors

    KAUST Repository

    Ansari, Imran Shafique; Alouini, Mohamed-Slim; Cheng, Julian

    2015-01-01

    A unified capacity analysis of a free-space optical (FSO) link that accounts for nonzero boresight pointing errors and both types of detection techniques (i.e. intensity modulation/ direct detection as well as heterodyne detection) is addressed

  5. Synchrotron radiation phase-contrast X-ray CT imaging of acupuncture points

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Dongming; Yan, Xiaohui; Zhang, Xinyi [Fudan University, Synchrotron Radiation Research Center, State Key Laboratory of Surface Physics and Department of Physics, Shanghai (China); Liu, Chenglin [Physics Department of Yancheng Teachers' College, Yancheng (China); Dang, Ruishan [The Second Military Medical University, Shanghai (China); Xiao, Tiqiao [Chinese Academy of Sciences, Shanghai Synchrotron Radiation Facility, Shanghai Institute of Applied Physics, Shanghai (China); Zhu, Peiping [Chinese Academy of Sciences, Beijing Synchrotron Radiation Facility, Institute of High Energy Physics, Beijing (China)

    2011-08-15

    Three-dimensional (3D) topographic structures of acupuncture points were investigated by using synchrotron radiation in-line X-ray phase contrast computerized tomography. Two acupuncture points, named Zhongji (RN3) and Zusanli (ST36), were studied. We found an accumulation of microvessels at each acupuncture point region. Images of the tissues surrounding the acupuncture points do not show such kinds of structure. This is the first time that 3D images have revealed the specific structures of acupuncture points. (orig.)

  6. Synchrotron radiation phase-contrast X-ray CT imaging of acupuncture points

    International Nuclear Information System (INIS)

    Zhang, Dongming; Yan, Xiaohui; Zhang, Xinyi; Liu, Chenglin; Dang, Ruishan; Xiao, Tiqiao; Zhu, Peiping

    2011-01-01

    Three-dimensional (3D) topographic structures of acupuncture points were investigated by using synchrotron radiation in-line X-ray phase contrast computerized tomography. Two acupuncture points, named Zhongji (RN3) and Zusanli (ST36), were studied. We found an accumulation of microvessels at each acupuncture point region. Images of the tissues surrounding the acupuncture points do not show such kinds of structure. This is the first time that 3D images have revealed the specific structures of acupuncture points. (orig.)

  7. Self-Similar Spin Images for Point Cloud Matching

    Science.gov (United States)

    Pulido, Daniel

    The rapid growth of Light Detection And Ranging (Lidar) technologies that collect, process, and disseminate 3D point clouds has allowed for increasingly accurate spatial modeling and analysis of the real world. Lidar sensors can generate massive 3D point clouds of a collection area that provide highly detailed spatial and radiometric information. However, a Lidar collection can be expensive and time consuming. Simultaneously, the growth of crowdsourced Web 2.0 data (e.g., Flickr, OpenStreetMap) has provided researchers with a wealth of freely available data sources that cover a variety of geographic areas. Crowdsourced data can be of varying quality and density. In addition, since it is typically not collected as part of a dedicated experiment but rather volunteered, when and where the data is collected is arbitrary. The integration of these two sources of geoinformation can provide researchers the ability to generate products and derive intelligence that mitigate their respective disadvantages and combine their advantages. Therefore, this research will address the problem of fusing two point clouds from potentially different sources. Specifically, we will consider two problems: scale matching and feature matching. Scale matching consists of computing feature metrics of each point cloud and analyzing their distributions to determine scale differences. Feature matching consists of defining local descriptors that are invariant to common dataset distortions (e.g., rotation and translation). Additionally, after matching the point clouds they can be registered and processed further (e.g., change detection). The objective of this research is to develop novel methods to fuse and enhance two point clouds from potentially disparate sources (e.g., Lidar and crowdsourced Web 2.0 datasets). The scope of this research is to investigate both scale and feature matching between two point clouds. The specific focus of this research will be in developing a novel local descriptor
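
    A minimal sketch of the scale-matching idea mentioned above: compute a simple per-point feature (here the nearest-neighbour distance) for each cloud and compare robust summary statistics of the two distributions to estimate the scale difference. The synthetic clouds and the choice of statistic are illustrative assumptions, not the dissertation's actual feature metrics.

        import numpy as np
        from scipy.spatial import cKDTree

        def nn_distances(points):
            """Distance from each point to its nearest neighbour (excluding itself)."""
            d, _ = cKDTree(points).query(points, k=2)
            return d[:, 1]

        rng = np.random.default_rng(2)
        scale_true = 3.7
        cloud_a = rng.uniform(0.0, 10.0, size=(5000, 3))               # e.g. a Lidar tile (illustrative)
        cloud_b = scale_true * rng.uniform(0.0, 10.0, size=(5000, 3))  # same scene at a different scale

        # Ratio of median nearest-neighbour spacings as a crude scale estimate.
        # (If the two clouds also differ in point density, that bias must be corrected,
        # which is part of the harder problem the research addresses.)
        est = np.median(nn_distances(cloud_b)) / np.median(nn_distances(cloud_a))
        print(f"estimated scale ratio ~ {est:.2f} (true value {scale_true})")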

  8. On the Relation Between Facular Bright Points and the Magnetic Field

    Science.gov (United States)

    Berger, Thomas; Shine, Richard; Tarbell, Theodore; Title, Alan; Scharmer, Goran

    1994-12-01

    Multi-spectral images of magnetic structures in the solar photosphere are presented. The images were obtained in the summers of 1993 and 1994 at the Swedish Solar Telescope on La Palma using the tunable birefringent Solar Optical Universal Polarimeter (SOUP filter), a 10 Angstrom wide interference filter tuned to 4304 Angstroms in the band head of the CH radical (the Fraunhofer G-band), and a 3 Angstrom wide interference filter centered on the Ca II K absorption line. Three large-format CCD cameras with shuttered exposures on the order of 10 msec and frame rates of up to 7 frames per second were used to create time series of both quiet and active region evolution. The full field of view is 60 x 80 arcseconds (44 x 58 Mm). With the best seeing, structures as small as 0.22 arcseconds (160 km) in diameter are clearly resolved. Post-processing of the images results in rigid coalignment of the image sets to an accuracy comparable to the spatial resolution. Facular bright points with mean diameters of 0.35 arcseconds (250 km) and elongated filaments with lengths on the order of arcseconds (10^3 km) are imaged with contrast values of up to 60% by the G-band filter. Overlay of these images on contemporal Fe I 6302 Angstrom magnetograms and Ca II K images reveals that the bright points occur, without exception, on sites of magnetic flux through the photosphere. However, instances of concentrated and diffuse magnetic flux and Ca II K emission without associated bright points are common, leading to the conclusion that the presence of magnetic flux is a necessary but not sufficient condition for the occurrence of resolvable facular bright points. Comparison of the G-band and continuum images shows a complex relation between structures in the two bandwidths: bright points exceeding 350 km in extent correspond to distinct bright structures in the continuum; smaller bright points show no clear relation to continuum structures. Size and contrast statistical cross

  9. Generation of a statistical shape model with probabilistic point correspondences and the expectation maximization- iterative closest point algorithm

    International Nuclear Information System (INIS)

    Hufnagel, Heike; Pennec, Xavier; Ayache, Nicholas; Ehrhardt, Jan; Handels, Heinz

    2008-01-01

    Identification of point correspondences between shapes is required for statistical analysis of organ shape differences. Since manual identification of landmarks is not a feasible option in 3D, several methods were developed to automatically find one-to-one correspondences on shape surfaces. For unstructured point sets, however, one-to-one correspondences do not exist, but correspondence probabilities can be determined. A method was developed to compute a statistical shape model based on shapes which are represented by unstructured point sets with arbitrary point numbers. A fundamental problem when computing statistical shape models is the determination of correspondences between the points of the shape observations of the training data set. In the absence of landmarks, exact correspondences can only be determined between continuous surfaces, not between unstructured point sets. To overcome this problem, we introduce correspondence probabilities instead of exact correspondences. The correspondence probabilities are found by aligning the observation shapes with the affine expectation maximization-iterative closest point (EM-ICP) registration algorithm. In a second step, the correspondence probabilities are used as input to compute a mean shape (represented once again by an unstructured point set). Both steps are unified in a single optimization criterion which depends on the two parameters 'registration transformation' and 'mean shape'. In a last step, a variability model which best represents the variability in the training data set is computed. Experiments on synthetic data sets and in vivo brain structure data sets (MRI) were then designed to evaluate the performance of our algorithm. The new method was applied to brain MRI data sets, and the estimated point correspondences were compared to a statistical shape model built on exact correspondences. Based on established measures of 'generalization ability' and 'specificity', the estimates were very satisfactory.
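
    A minimal sketch of the correspondence-probability idea described above: instead of hard one-to-one matches, each model point receives a soft (Gaussian-weighted) probability of corresponding to every observation point, and these probabilities can then drive a mean-shape or registration update. The point sets and the kernel width are illustrative assumptions, not the authors' EM-ICP implementation.

        import numpy as np

        def correspondence_probabilities(model, observation, sigma=1.0):
            """Soft correspondence matrix P[i, j] ~ P(observation point j | model point i)."""
            diff = model[:, None, :] - observation[None, :, :]      # (n_model, n_obs, dim)
            sq_dist = np.sum(diff ** 2, axis=-1)
            # Subtract the row-wise minimum before exponentiating for numerical stability;
            # the constant factor per row cancels in the normalization.
            w = np.exp(-(sq_dist - sq_dist.min(axis=1, keepdims=True)) / (2.0 * sigma ** 2))
            return w / w.sum(axis=1, keepdims=True)

        rng = np.random.default_rng(3)
        model = rng.normal(size=(50, 3))                            # unstructured point set (illustrative)
        observation = model[rng.permutation(50)[:40]] + rng.normal(0.0, 0.05, (40, 3))

        P = correspondence_probabilities(model, observation, sigma=0.2)
        # Each model point's "virtual correspondent": the probability-weighted mean observation point.
        virtual = P @ observation
        print(P.shape, float(np.abs(virtual - model).mean()))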

  10. Hazard analysis and critical control point (HACCP) for an ultrasound food processing operation.

    Science.gov (United States)

    Chemat, Farid; Hoarau, Nicolas

    2004-05-01

    Emerging technologies such as ultrasound (US) used for food and drink production often create hazards for product safety. Classical quality control methods are inadequate to control these hazards. Hazard analysis and critical control point (HACCP) is the most secure and cost-effective method for controlling possible product contamination or cross-contamination due to physical or chemical hazards during production. The following case study on the application of HACCP to an ultrasound food-processing operation demonstrates how the hazards at the critical control points of the process are effectively controlled through the implementation of HACCP.

  11. Comments on the comparison of global methods for linear two-point boundary value problems

    International Nuclear Information System (INIS)

    de Boor, C.; Swartz, B.

    1977-01-01

    A more careful count of the operations involved in solving the linear system associated with collocation of a two-point boundary value problem using rough splines reverses results recently reported by others in this journal. In addition, it is observed that the use of the technique of "condensation of parameters" can decrease the computer storage required. Furthermore, the use of a particular highly localized basis can also reduce the setup time when the mesh is irregular. Finally, operation counts are roughly estimated for the solution of certain linear systems associated with two competing collocation methods, namely, collocation with smooth splines and collocation of the equivalent first-order system with continuous piecewise polynomials.

  12. An application of the 'end-point' method to the minimum critical mass problem in two group transport theory

    International Nuclear Information System (INIS)

    Williams, M.M.R.

    2003-01-01

    A two group integral equation derived using transport theory, which describes the fuel distribution necessary for a flat thermal flux and minimum critical mass, is solved by the classical end-point method. This method has a number of advantages and in particular highlights the changing behaviour of the fissile mass distribution function in the neighbourhood of the core-reflector interface. We also show how the reflector thermal flux behaves and explain the origin of the maximum which arises when the critical size is less than that corresponding to minimum critical mass. A comparison is made with diffusion theory and the necessary and somewhat artificial presence of surface delta functions in the fuel distribution is shown to be analogous to the edge transients that arise naturally in transport theory

  13. Dynamics of single photon transport in a one-dimensional waveguide two-point coupled with a Jaynes-Cummings system

    KAUST Repository

    Wang, Yuwen

    2016-09-22

    We study the dynamics of an ultrafast single photon pulse in a one-dimensional waveguide two-point coupled with a Jaynes-Cummings system. We find that for any single photon input the transmissivity depends periodically on the separation between the two coupling points. For a pulse containing many plane wave components it is almost impossible to suppress transmission, especially when the width of the pulse is less than 20 times the period. In contrast to plane wave input, the waveform of the pulse can be modified by controlling the coupling between the waveguide and Jaynes-Cummings system. Tailoring of the waveform is important for single photon manipulation in quantum informatics. © The Author(s) 2016.

  14. Relatively Inexact Proximal Point Algorithm and Linear Convergence Analysis

    Directory of Open Access Journals (Sweden)

    Ram U. Verma

    2009-01-01

    Full Text Available Based on a notion of relatively maximal (m)-relaxed monotonicity, the approximation solvability of a general class of inclusion problems is discussed, while generalizing Rockafellar's theorem (1976) on linear convergence using the proximal point algorithm in a real Hilbert space setting. The convergence analysis, based on this new model, is simpler and more compact than that of the celebrated technique of Rockafellar, in which the Lipschitz continuity at 0 of the inverse of the set-valued mapping is applied. Furthermore, it can be used to generalize the Yosida approximation, which, in turn, can be applied to first-order evolution equations as well as evolution inclusions.
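
    A minimal numerical illustration of the classical proximal point iteration that this work generalizes: x_{k+1} = (I + c_k T)^{-1} x_k for a maximal monotone operator T, shown here for the simple linear, strongly monotone choice T(x) = A x, where the resolvent is just a linear solve and the iterates converge linearly to the zero of T. The operator and the proximal parameter are illustrative assumptions.

        import numpy as np

        # A strongly monotone linear operator T(x) = A x (A symmetric positive definite).
        A = np.array([[2.0, 0.5],
                      [0.5, 1.0]])
        c = 1.0                      # proximal parameter c_k (kept constant here)
        I = np.eye(2)

        x = np.array([5.0, -3.0])    # arbitrary starting point
        for k in range(10):
            # Resolvent step: x_{k+1} = (I + c T)^{-1} x_k
            x = np.linalg.solve(I + c * A, x)
            print(k, np.linalg.norm(x))   # the norm shrinks geometrically (linear convergence)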

  15. Proteome Analysis of the Plant Pathogenic Fungus Monilinia laxa Showing Host Specificity

    Directory of Open Access Journals (Sweden)

    Olja Bregar

    2012-01-01

    Full Text Available Brown rot fungus Monilinia laxa (Aderh. & Ruhl.) Honey is an important plant pathogen of stone and pome fruits in Europe. We applied a proteomic approach in a study of M. laxa isolates obtained from apples and apricots in order to show the host specificity of the isolates and to analyse differentially expressed proteins in terms of host specificity, fungal pathogenicity and the identification of candidate proteins for diagnostic marker development. Extracted mycelium proteins were separated by 2-D electrophoresis (2-DE) and visualized by Coomassie staining in a non-linear pH range of 3-11 and Mr of 14-116 kDa. We set up a 2-DE reference map of M. laxa, resolving up to 800 protein spots, and used it for image analysis. The average technical coefficient of variance (13%) demonstrated a high reproducibility of protein extraction and 2-D polyacrylamide gel electrophoresis (2-DE PAGE), and the average biological coefficient of variance (23%) enabled differential proteomic analysis of the isolates. Multivariate statistical analysis (principal component analysis) discriminated isolates from the two different hosts, providing new data that support the existence of a M. laxa specialized form f. sp. mali, which infects only apples. A total of 50 differentially expressed proteins were further analyzed by LC-MS/MS, yielding 41 positive identifications. The identified mycelial proteins were functionally classified into 6 groups: amino acid and protein metabolism, energy production, carbohydrate metabolism, stress response, fatty acid metabolism and other proteins. Some proteins expressed only in apple isolates have been described as virulence factors in other fungi. Acetolactate synthase was almost 11-fold more abundant in apple-specific isolates than in apricot isolates and might be implicated in M. laxa host specificity. Ten proteins identified only in apple isolates are potential candidates for the development of M. laxa host-specific diagnostic markers.

  16. Super-resolution for a point source better than λ/500 using positive refraction

    Science.gov (United States)

    Miñano, Juan C.; Marqués, Ricardo; González, Juan C.; Benítez, Pablo; Delgado, Vicente; Grabovickic, Dejan; Freire, Manuel

    2011-12-01

    Leonhardt (2009 New J. Phys. 11 093040) demonstrated that the two-dimensional (2D) Maxwell fish eye (MFE) lens can focus perfectly 2D Helmholtz waves of arbitrary frequency; that is, it can transport perfectly an outward (monopole) 2D Helmholtz wave field, generated by a point source, towards a ‘perfect point drain’ located at the corresponding image point. Moreover, a prototype with λ/5 super-resolution property for one microwave frequency has been manufactured and tested (Ma et al 2010 arXiv:1007.2530v1; Ma et al 2010 New J. Phys. 13 033016). However, neither software simulations nor experimental measurements for a broad band of frequencies have yet been reported. Here, we present steady-state simulations with a non-perfect drain for a device equivalent to the MFE, called the spherical geodesic waveguide (SGW), which predicts up to λ/500 super-resolution close to discrete frequencies. Out of these frequencies, the SGW does not show super-resolution in the analysis carried out.

  17. Super-resolution for a point source better than λ/500 using positive refraction

    International Nuclear Information System (INIS)

    Miñano, Juan C; González, Juan C; Benítez, Pablo; Grabovickic, Dejan; Marqués, Ricardo; Delgado, Vicente; Freire, Manuel

    2011-01-01

    Leonhardt (2009 New J. Phys. 11 093040) demonstrated that the two-dimensional (2D) Maxwell fish eye (MFE) lens can focus perfectly 2D Helmholtz waves of arbitrary frequency; that is, it can transport perfectly an outward (monopole) 2D Helmholtz wave field, generated by a point source, towards a ‘perfect point drain’ located at the corresponding image point. Moreover, a prototype with λ/5 super-resolution property for one microwave frequency has been manufactured and tested (Ma et al 2010 arXiv:1007.2530v1; Ma et al 2010 New J. Phys. 13 033016). However, neither software simulations nor experimental measurements for a broad band of frequencies have yet been reported. Here, we present steady-state simulations with a non-perfect drain for a device equivalent to the MFE, called the spherical geodesic waveguide (SGW), which predicts up to λ/500 super-resolution close to discrete frequencies. Out of these frequencies, the SGW does not show super-resolution in the analysis carried out. (paper)

  18. Comprehensive two-dimensional gas chromatography applied to illicit drug analysis.

    Science.gov (United States)

    Mitrevski, Blagoj; Wynne, Paul; Marriott, Philip J

    2011-11-01

    Multidimensional gas chromatography (MDGC), and especially its latest incarnation--comprehensive two-dimensional gas chromatography (GC × GC)--have proved advantageous over and above classic one-dimensional gas chromatography (1D GC) in many areas of analysis by offering improved peak capacity, often enhanced sensitivity and, especially in the case of GC × GC, the unique feature of 'structured' chromatograms. This article reviews recent advances in MDGC and GC × GC in drug analysis with special focus on ecstasy, heroin and cocaine profiling. Although 1D GC is still the method of choice for drug profiling in most laboratories because of its simplicity and instrument availability, GC × GC is a tempting proposition for this purpose because of its ability to generate a higher net information content. Effluent refocusing due to the modulation (compression) process, combined with the separation on two 'orthogonal' columns, results in more components being well resolved and therefore being analytically and statistically useful to the profile. The spread of the components in the two-dimensional plots is strongly dependent on the extent of retention 'orthogonality' (i.e. the extent to which the two phases possess different or independent retention mechanisms towards sample constituents) between the two columns. The benefits of 'information-driven' drug profiling, where more points of reference are usually required for sample differentiation, are discussed. In addition, several limitations in application of MDGC in drug profiling, including data acquisition rate, column temperature limit, column phase orthogonality and chiral separation, are considered and discussed. Although the review focuses on the articles published in the last decade, a brief chronological preview of the profiling methods used throughout the last three decades is given.

  19. Does medicine still show an unresolved discrimination against women? Experience in two European university hospitals.

    Science.gov (United States)

    Santamaría, A; Merino, A; Viñas, O; Arrizabalaga, P

    2009-02-01

    Have invisible barriers for women been broken in 2007, or do we still have to break through medicine's glass ceiling? Data from two of the most prestigious university hospitals in Barcelona with 700-800 beds, Hospital Clínic (HC) and Hospital de la Santa Creu i Sant Pau (HSCSP) address this issue. In the HSCSP, 87% of the department chairs are men and 85% of the department unit chiefs are also men. With respect to women, only 5 (13%) are in the top position (department chair) and 4 (15%) are department unit chiefs. Similar statistics are also found at the HC: 87% of the department chairs and 89% of the department unit chiefs are men. Currently, only 6 women (13%) are in the top position and 6 (11%) are department unit chiefs. Analysis of the 2002 data of internal promotions in HC showed that for the first level (senior specialist) sex distribution was similar. Nevertheless, for the second level (consultant) only 25% were women, and for the top level (senior consultant) only 8% were women. These proportions have not changed in 2007 in spite of a 10% increase in leadership positions during this period. Similar proportions were found in HSCSP where 68% of the top promotions were held by men. The data obtained from these two different medical institutions in Barcelona are probably representative of other hospitals in Spain. It would be ethically desirable to have males and females in leadership positions in the medical profession.

  20. Three-Dimensional Adaptive Mesh Refinement Simulations of Point-Symmetric Nebulae

    NARCIS (Netherlands)

    Rijkhorst, E.-J.; Icke, V.; Mellema, G.; Meixner, M.; Kastner, J.H.; Balick, B.; Soker, N.

    2004-01-01

    Previous analytical and numerical work shows that the generalized interacting stellar winds model can explain the observed bipolar shapes of planetary nebulae very well. However, many circumstellar nebulae have a multipolar or point-symmetric shape. With two-dimensional calculations, Icke showed

  1. A three-point Taylor algorithm for three-point boundary value problems

    NARCIS (Netherlands)

    J.L. López; E. Pérez Sinusía; N.M. Temme (Nico)

    2011-01-01

    We consider second-order linear differential equations $\varphi(x)y''+f(x)y'+g(x)y=h(x)$ in the interval $(-1,1)$ with Dirichlet, Neumann or mixed Dirichlet-Neumann boundary conditions given at three points of the interval: the two extreme points $x=\pm 1$ and an interior point

  2. Coarse Point Cloud Registration by Egi Matching of Voxel Clusters

    Science.gov (United States)

    Wang, Jinhu; Lindenbergh, Roderik; Shen, Yueqian; Menenti, Massimo

    2016-06-01

    Laser scanning samples the surface geometry of objects efficiently and records versatile information as point clouds. However, often more scans are required to fully cover a scene. Therefore, a registration step is required that transforms the different scans into a common coordinate system. The registration of point clouds is usually conducted in two steps, i.e. coarse registration followed by fine registration. In this study an automatic marker-free coarse registration method for pair-wise scans is presented. First the two input point clouds are re-sampled as voxels and dimensionality features of the voxels are determined by principal component analysis (PCA). Then voxel cells with the same dimensionality are clustered. Next, the Extended Gaussian Image (EGI) descriptors of those voxel clusters are constructed using significant eigenvectors of each voxel in the cluster. Correspondences between clusters in source and target data are obtained according to the similarity between their EGI descriptors. The random sampling consensus (RANSAC) algorithm is employed to remove outlying correspondences until a coarse alignment is obtained. If necessary, a fine registration is performed in a final step. This new method is illustrated on scan data sampling two indoor scenarios. The results of the tests are evaluated by computing the point to point distance between the two input point clouds. The presented two tests resulted in mean distances of 7.6 mm and 9.5 mm, respectively, which are adequate for fine registration.
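
    The dimensionality features mentioned in this abstract are commonly obtained from the eigenvalues of the covariance matrix of the points inside each voxel. A minimal NumPy sketch of one common formulation follows; it is not the authors' implementation, and the feature definitions and the synthetic test patch are illustrative assumptions:

      import numpy as np

      def dimensionality_features(points):
          """Linearity / planarity / scattering scores for a small 3-D point set
          (e.g. the points falling in one voxel), derived from the eigenvalues of
          its covariance matrix; the exact definitions used in the paper may differ."""
          evals = np.linalg.eigvalsh(np.cov((points - points.mean(axis=0)).T))
          s3, s2, s1 = np.sqrt(np.maximum(evals, 0.0))         # ascending -> s3 <= s2 <= s1
          return (s1 - s2) / s1, (s2 - s3) / s1, s3 / s1        # linearity, planarity, scattering

      # a thin, roughly planar patch should score high on planarity
      rng = np.random.default_rng(1)
      patch = np.c_[rng.random(200), rng.random(200), 0.01 * rng.standard_normal(200)]
      print(dimensionality_features(patch))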

  3. Implementation of 5S tools as a starting point in business process reengineering

    Directory of Open Access Journals (Sweden)

    Vorkapić Miloš

    2017-01-01

    Full Text Available The paper deals with the analysis of elements which represent a starting point in the implementation of business process reengineering. In our research we used Lean tools through the analysis of the 5S model. On the example of the finalization of a finished transmitter in IHMT-CMT production, 5S tools were implemented with a focus on Quality elements, although the theory shows that BPR and TQM are two opposite activities in an enterprise. We wanted to highlight the significance of employees’ self-discipline, which helps the product finalization process to proceed on time and without waste and losses. In addition, the employees keep their workplace clean, tidy and functional.

  4. Spatially explicit spectral analysis of point clouds and geospatial data

    Science.gov (United States)

    Buscombe, Daniel D.

    2015-01-01

    The increasing use of spatially explicit analyses of high-resolution spatially distributed data (imagery and point clouds) for the purposes of characterising spatial heterogeneity in geophysical phenomena necessitates the development of custom analytical and computational tools. In recent years, such analyses have become the basis of, for example, automated texture characterisation and segmentation, roughness and grain size calculation, and feature detection and classification, from a variety of data types. In this work, much use has been made of statistical descriptors of localised spatial variations in amplitude variance (roughness), however the horizontal scale (wavelength) and spacing of roughness elements is rarely considered. This is despite the fact that the ratio of characteristic vertical to horizontal scales is not constant and can yield important information about physical scaling relationships. Spectral analysis is a hitherto under-utilised but powerful means to acquire statistical information about relevant amplitude and wavelength scales, simultaneously and with computational efficiency. Further, quantifying spatially distributed data in the frequency domain lends itself to the development of stochastic models for probing the underlying mechanisms which govern the spatial distribution of geological and geophysical phenomena. The software package PySESA (Python program for Spatially Explicit Spectral Analysis) has been developed for generic analyses of spatially distributed data in both the spatial and frequency domains. Developed predominantly in Python, it accesses libraries written in Cython and C++ for efficiency. It is open source and modular, therefore readily incorporated into, and combined with, other data analysis tools and frameworks with particular utility for supporting research in the fields of geomorphology, geophysics, hydrography, photogrammetry and remote sensing. The analytical and computational structure of the toolbox is

  5. Spatially explicit spectral analysis of point clouds and geospatial data

    Science.gov (United States)

    Buscombe, Daniel

    2016-01-01

    The increasing use of spatially explicit analyses of high-resolution spatially distributed data (imagery and point clouds) for the purposes of characterising spatial heterogeneity in geophysical phenomena necessitates the development of custom analytical and computational tools. In recent years, such analyses have become the basis of, for example, automated texture characterisation and segmentation, roughness and grain size calculation, and feature detection and classification, from a variety of data types. In this work, much use has been made of statistical descriptors of localised spatial variations in amplitude variance (roughness), however the horizontal scale (wavelength) and spacing of roughness elements is rarely considered. This is despite the fact that the ratio of characteristic vertical to horizontal scales is not constant and can yield important information about physical scaling relationships. Spectral analysis is a hitherto under-utilised but powerful means to acquire statistical information about relevant amplitude and wavelength scales, simultaneously and with computational efficiency. Further, quantifying spatially distributed data in the frequency domain lends itself to the development of stochastic models for probing the underlying mechanisms which govern the spatial distribution of geological and geophysical phenomena. The software package PySESA (Python program for Spatially Explicit Spectral Analysis) has been developed for generic analyses of spatially distributed data in both the spatial and frequency domains. Developed predominantly in Python, it accesses libraries written in Cython and C++ for efficiency. It is open source and modular, therefore readily incorporated into, and combined with, other data analysis tools and frameworks with particular utility for supporting research in the fields of geomorphology, geophysics, hydrography, photogrammetry and remote sensing. The analytical and computational structure of the toolbox is described
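
    As a rough illustration of the kind of frequency-domain information such a toolbox extracts, the sketch below (plain NumPy, not PySESA's own API; the synthetic ripple surface and the simple peak-picking rule are assumptions) estimates the dominant roughness wavelength of a gridded elevation patch from its 2-D power spectrum:

      import numpy as np

      def dominant_wavelength(z, dx):
          """Rough estimate of the dominant roughness wavelength of a gridded surface
          patch z (grid spacing dx) from the peak of its 2-D power spectrum."""
          z = z - z.mean()                                      # remove the mean level
          P = np.abs(np.fft.fft2(z)) ** 2                       # 2-D power spectrum
          fy = np.fft.fftfreq(z.shape[0], d=dx)
          fx = np.fft.fftfreq(z.shape[1], d=dx)
          fr = np.hypot(*np.meshgrid(fy, fx, indexing="ij"))    # radial frequency of each cell
          P[fr == 0] = 0.0                                      # ignore the DC component
          return 1.0 / fr[np.unravel_index(np.argmax(P), P.shape)]

      # synthetic surface: a 5 m wavelength ripple sampled on a 1 m grid
      z = np.sin(2 * np.pi * np.arange(64) / 5.0)[:, None] * np.ones((1, 64))
      print(dominant_wavelength(z, dx=1.0))                     # close to 5 m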

  6. Tokyo Motor Show 2003; Tokyo Motor Show 2003

    Energy Technology Data Exchange (ETDEWEB)

    Joly, E.

    2004-01-01

    The text which follows presents the different techniques exhibited during the 37th Tokyo Motor Show. The report points out the main development trends of the Japanese automobile industry. Hybrid electric vehicles and those equipped with fuel cells were highlighted by the Japanese manufacturers, which devote considerable budgets to research on less polluting vehicles. The exhibited models, although all different from one manufacturer to another, always use a hybrid system: fuel cell/battery. The manufacturers also stressed intelligent systems for navigation and safety, as well as design and comfort. (O.M.)

  7. On application of the S-matrix two-point function to nuclear data evaluation

    International Nuclear Information System (INIS)

    Igarasi, S.

    1992-01-01

    A statistical model calculation using the S-matrix two-point function (STF) was attempted. The results were compared with those calculated with the Hauser-Feshbach formula (HF) with and without resonance level-width fluctuation corrections (WFC). The STF gave almost the same cross sections as those calculated using Moldauer's degrees of freedom for the χ²-distributions (MCD). The effect of the WFC on final states in the continuum was also studied using the HF with WFC of the MCD and of the Porter-Thomas distribution (PTD). The HF with the MCD is recommended for practical calculation of the cross sections. (orig.)

  8. Feasibility of the Two-Point Method for Determining the One-Repetition Maximum in the Bench Press Exercise.

    Science.gov (United States)

    García-Ramos, Amador; Haff, Guy Gregory; Pestaña-Melero, Francisco Luis; Pérez-Castilla, Alejandro; Rojas, Francisco Javier; Balsalobre-Fernández, Carlos; Jaric, Slobodan

    2017-09-05

    This study compared the concurrent validity and reliability of previously proposed generalized group equations for estimating the bench press (BP) one-repetition maximum (1RM) with the individualized load-velocity relationship modelled with a two-point method. Thirty men (BP 1RM relative to body mass: 1.08 ± 0.18 kg·kg⁻¹) performed two incremental loading tests in the concentric-only BP exercise and another two in the eccentric-concentric BP exercise to assess their actual 1RM and load-velocity relationships. A high velocity (≈ 1 m·s⁻¹) and a low velocity (≈ 0.5 m·s⁻¹) were selected from their load-velocity relationships to estimate the 1RM from generalized group equations and through an individual linear model obtained from the two velocities. The directly measured 1RM was highly correlated with all predicted 1RMs (r range: 0.847-0.977). The generalized group equations systematically underestimated the actual 1RM when predicted from the concentric-only BP (P < 0.001; effect size [ES] range: 0.15-0.94), but overestimated it when predicted from the eccentric-concentric BP (P < 0.001; ES range: 0.36-0.98). Conversely, a low systematic bias (range: -2.3-0.5 kg) and random errors (range: 3.0-3.8 kg), no heteroscedasticity of errors (r² range: 0.053-0.082), and trivial ES (range: -0.17-0.04) were observed when the prediction was based on the two-point method. Although all examined methods reported the 1RM with high reliability (CV ≤ 5.1%; ICC ≥ 0.89), the direct method was the most reliable (CV < 2.0%; ICC ≥ 0.98). The quick, fatigue-free, and practical two-point method was able to predict the BP 1RM with high reliability and practically perfect validity, and therefore we recommend its use over generalized group equations.
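
    The two-point method reduces to fitting a straight line through two (load, velocity) pairs and extrapolating it to a minimal velocity threshold. A minimal sketch of that arithmetic follows; the threshold value v_1rm = 0.17 m·s⁻¹ and the example loads are illustrative assumptions, not values taken from the study:

      def estimate_1rm_two_point(load1, v1, load2, v2, v_1rm=0.17):
          """Estimate the 1RM by linear extrapolation of the individual
          load-velocity relationship built from just two measured points."""
          slope = (v2 - v1) / (load2 - load1)        # velocity lost per added kg (negative)
          intercept = v1 - slope * load1             # velocity at zero load
          return (v_1rm - intercept) / slope         # load at the minimal velocity threshold

      # e.g. 1.00 m/s against 40 kg and 0.50 m/s against 70 kg
      print(estimate_1rm_two_point(40, 1.00, 70, 0.50))   # ~89.8 kg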

  9. Comparative analysis among several methods used to solve the point kinetic equations

    International Nuclear Information System (INIS)

    Nunes, Anderson L.; Goncalves, Alessandro da C.; Martinez, Aquilino S.; Silva, Fernando Carvalho da

    2007-01-01

    The main objective of this work is to develop a methodology for comparing several methods for solving the point kinetics equations. The evaluated methods are: the finite differences method, the stiffness confinement method, the improved stiffness confinement method and the piecewise constant approximations method. These methods were implemented and compared through a systematic analysis that consists basically of determining which of the methods consumes the smallest computational time with the highest precision. A relative performance factor, whose function is to combine both criteria, was calculated in order to reach this goal. Through the analysis of the performance factor it is possible to choose the best method for the solution of the point kinetics equations. (author)

  10. Comparative analysis among several methods used to solve the point kinetic equations

    Energy Technology Data Exchange (ETDEWEB)

    Nunes, Anderson L.; Goncalves, Alessandro da C.; Martinez, Aquilino S.; Silva, Fernando Carvalho da [Universidade Federal, Rio de Janeiro, RJ (Brazil). Coordenacao dos Programas de Pos-graduacao de Engenharia. Programa de Engenharia Nuclear; E-mails: alupo@if.ufrj.br; agoncalves@con.ufrj.br; aquilino@lmp.ufrj.br; fernando@con.ufrj.br

    2007-07-01

    The main objective of this work is to develop a methodology for comparing several methods for solving the point kinetics equations. The evaluated methods are: the finite differences method, the stiffness confinement method, the improved stiffness confinement method and the piecewise constant approximations method. These methods were implemented and compared through a systematic analysis that consists basically of determining which of the methods consumes the smallest computational time with the highest precision. A relative performance factor, whose function is to combine both criteria, was calculated in order to reach this goal. Through the analysis of the performance factor it is possible to choose the best method for the solution of the point kinetics equations. (author)
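
    For context, the point kinetics equations being benchmarked couple the neutron density with the delayed-neutron precursor concentrations. The sketch below integrates the one-delayed-group form with a generic stiff ODE solver; it is not one of the four methods compared in this work, and the kinetic parameters and step reactivity are illustrative assumptions:

      import numpy as np
      from scipy.integrate import solve_ivp

      # one-delayed-group point kinetics with illustrative parameters (assumptions)
      beta, lam, Lam = 0.0065, 0.08, 1.0e-4     # delayed fraction, decay const [1/s], generation time [s]
      rho = 0.001                               # step reactivity inserted at t = 0

      def rhs(t, y):
          n, c = y                              # relative power, precursor concentration
          return [(rho - beta) / Lam * n + lam * c,
                  beta / Lam * n - lam * c]

      y0 = [1.0, beta / (Lam * lam)]            # start from steady state at unit power
      sol = solve_ivp(rhs, (0.0, 10.0), y0, method="Radau", max_step=0.01)
      print("relative power after 10 s:", sol.y[0, -1])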

  11. Two-dimensional NMR measurement and point dipole model prediction of paramagnetic shift tensors in solids

    Energy Technology Data Exchange (ETDEWEB)

    Walder, Brennan J.; Davis, Michael C.; Grandinetti, Philip J. [Department of Chemistry, Ohio State University, 100 West 18th Avenue, Columbus, Ohio 43210 (United States); Dey, Krishna K. [Department of Physics, Dr. H. S. Gour University, Sagar, Madhya Pradesh 470003 (India); Baltisberger, Jay H. [Division of Natural Science, Mathematics, and Nursing, Berea College, Berea, Kentucky 40403 (United States)

    2015-01-07

    A new two-dimensional Nuclear Magnetic Resonance (NMR) experiment to separate and correlate the first-order quadrupolar and chemical/paramagnetic shift interactions is described. This experiment, which we call the shifting-d echo experiment, allows a more precise determination of tensor principal component values and their relative orientation. It is designed using the recently introduced symmetry pathway concept. A comparison of the shifting-d experiment with earlier proposed methods is presented and experimentally illustrated in the case of ²H (I = 1) paramagnetic shift and quadrupolar tensors of CuCl₂·2D₂O. The benefits of the shifting-d echo experiment over other methods are a factor of two improvement in sensitivity and the suppression of major artifacts. From the 2D lineshape analysis of the shifting-d spectrum, the ²H quadrupolar coupling parameters are 〈C_q〉 = 118.1 kHz and 〈η_q〉 = 0.88, and the ²H paramagnetic shift tensor anisotropy parameters are 〈ζ_P〉 = −152.5 ppm and 〈η_P〉 = 0.91. The orientation of the quadrupolar coupling principal axis system (PAS) relative to the paramagnetic shift anisotropy principal axis system is given by (α, β, γ) = (π/2, π/2, 0). Using a simple ligand hopping model, the tensor parameters in the absence of exchange are estimated. On the basis of this analysis, the instantaneous principal components and orientation of the quadrupolar coupling are found to be in excellent agreement with previous measurements. A new point dipole model for predicting the paramagnetic shift tensor is proposed, yielding significantly better agreement than previously used models. In the new model, the dipoles are displaced from nuclei at positions associated with high electron density in the singly occupied molecular orbital predicted from ligand field theory.

  12. Transcriptome Analysis of Liangshan Pig Muscle Development at the Growth Curve Inflection Point and Asymptotic Stages Using Digital Gene Expression Profiling

    Science.gov (United States)

    Du, Jingjing; Liu, Chendong; Wu, Xiaoqian; Pu, Qiang; Fu, Yuhua; Tang, Qianzi; Liu, Yuanrui; Li, Qiang; Yang, Runlin; Li, Xuewei; Tang, Guoqing; Jiang, Yanzhi; Li, Mingzhou; Zhang, Shunhua; Zhu, Li

    2015-01-01

    Animal growth curves can provide essential information for animal breeders to optimize feeding and management strategies. However, the genetic mechanism underlying the phenotypic differentiation between the inflection point and asymptotic stages of the growth curve is not well characterized. Here, we employed Liangshan pigs in stages of growth at the inflection point (under inflection point: UIP) and the two asymptotic stages (before the inflection point: BIP, after the inflection point: AIP) as models to survey global gene expression in the longissimus dorsi muscle using digital gene expression (DGE) tag profiling. We found Liangshan pigs reached maximum growth rate (UIP) at 163.6 days of age and a weight of 134.6 kg. The DGE libraries generated 117 million reads of 5.89 gigabases in length. 21,331, 20,996 and 20,139 expressed transcripts were identified at BIP, UIP and AIP, respectively. Among them, we identified 757 differentially expressed genes (DEGs) between BIP and UIP, and 271 DEGs between AIP and UIP. An enrichment analysis of DEGs showed the immune system was strengthened in the AIP stage. Energy metabolism rate, global transcriptional activity and bone development intensity were highest at UIP. Meat from Liangshan pigs had the highest intramuscular fat content and the most favorable fatty acid composition at AIP. Three hundred eighty (27.70%) specifically expressed genes were highly enriched in QTL regions for growth and meat quality traits. This study completed a comprehensive analysis of diverse genetic mechanisms underlying the inflection point and asymptotic stages of growth. Our findings will serve as an important resource in the understanding of animal growth and development in indigenous pig breeds. PMID:26292092
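
    The inflection point of a growth curve is where the growth rate is maximal; for a logistic model it falls at t = t0 with weight A/2 and maximum rate A·k/4. The sketch below fits such a curve to synthetic weight-age data; the logistic form, parameter values and noise level are illustrative assumptions, and the paper does not state which growth function was actually fitted:

      import numpy as np
      from scipy.optimize import curve_fit

      def logistic(t, A, k, t0):
          """Logistic growth curve; its inflection point lies at t = t0, weight = A / 2."""
          return A / (1.0 + np.exp(-k * (t - t0)))

      # synthetic body-weight data (kg) versus age (days); values are illustrative only
      age = np.linspace(0, 400, 40)
      weight = logistic(age, 270.0, 0.02, 165.0) + np.random.default_rng(0).normal(0, 3, age.size)

      (A, k, t0), _ = curve_fit(logistic, age, weight, p0=[250.0, 0.01, 150.0])
      print(f"inflection at {t0:.1f} days, weight {A / 2:.1f} kg, max rate {A * k / 4:.2f} kg/day")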

  13. DISCRIMINATIVE ANALYSIS OF TESTS FOR EVALUATING SITUATION-MOTORIC ABILITIES BETWEEN TWO GROUPS OF BASKETBALL PLAYERS SELECTED BY THE TEST OF SOCIOMETRY

    Directory of Open Access Journals (Sweden)

    Abdulla Elezi

    2011-09-01

    Full Text Available The aim of this work was to determine the differences between two groups of basketball players selected with the modified sociometric test (Paranosić and Lazarević) in some tests for assessing situation-motor skills. The sample consisted of 20 basketball players who had the most positive points and 20 basketball players who had the most negative points, 40 players in total. A t-test was applied to determine whether there are differences between the two groups of basketball players selected with the help of the sociometric test. Analyses were made with the program SPSS 8.0. The discriminative analysis determined that there are no differences in the arithmetic means between the group of basketball players who had the most positive points and the group who had the most negative points in the tests for assessing situation-motor abilities.

  14. Blue-noise remeshing with farthest point optimization

    KAUST Repository

    Yan, Dongming

    2014-08-01

    In this paper, we present a novel method for surface sampling and remeshing with good blue-noise properties. Our approach is based on the farthest point optimization (FPO), a relaxation technique that generates high quality blue-noise point sets in 2D. We propose two important generalizations of the original FPO framework: adaptive sampling and sampling on surfaces. A simple and efficient algorithm for accelerating the FPO framework is also proposed. Experimental results show that the generalized FPO generates point sets with excellent blue-noise properties for adaptive and surface sampling. Furthermore, we demonstrate that our remeshing quality is superior to the current state-of-the-art approaches. © 2014 The Eurographics Association and John Wiley & Sons Ltd.

  15. Blue-noise remeshing with farthest point optimization

    KAUST Repository

    Yan, Dongming; Guo, Jianwei; Jia, Xiaohong; Zhang, Xiaopeng; Wonka, Peter

    2014-01-01

    In this paper, we present a novel method for surface sampling and remeshing with good blue-noise properties. Our approach is based on the farthest point optimization (FPO), a relaxation technique that generates high quality blue-noise point sets in 2D. We propose two important generalizations of the original FPO framework: adaptive sampling and sampling on surfaces. A simple and efficient algorithm for accelerating the FPO framework is also proposed. Experimental results show that the generalized FPO generates point sets with excellent blue-noise properties for adaptive and surface sampling. Furthermore, we demonstrate that our remeshing quality is superior to the current state-of-the-art approaches. © 2014 The Eurographics Association and John Wiley & Sons Ltd.
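
    The core FPO relaxation is easy to prototype: each point in turn is moved to the location farthest from all the others. A coarse, grid-based approximation in the unit square is sketched below; the exact method relies on Delaunay/Voronoi geometry rather than a candidate grid, and the point count, grid resolution and sweep count used here are arbitrary assumptions:

      import numpy as np
      from scipy.spatial.distance import pdist

      rng = np.random.default_rng(0)
      pts = rng.random((50, 2))                                   # initial random points in the unit square
      g = np.linspace(0.0, 1.0, 64)
      grid = np.stack(np.meshgrid(g, g), axis=-1).reshape(-1, 2)  # candidate locations

      for _ in range(20):                                         # a few FPO sweeps
          for i in range(len(pts)):
              others = np.delete(pts, i, axis=0)
              # distance from every candidate location to its nearest remaining point
              d = np.linalg.norm(grid[:, None, :] - others[None, :, :], axis=2).min(axis=1)
              pts[i] = grid[np.argmax(d)]                         # move point i to the emptiest spot

      # after relaxation the minimum inter-point spacing has grown substantially
      print("min spacing:", pdist(pts).min())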

  16. Analysis of two dimensional signals via curvelet transform

    Science.gov (United States)

    Lech, W.; Wójcik, W.; Kotyra, A.; Popiel, P.; Duk, M.

    2007-04-01

    This paper describes an application of the curvelet transform to the analysis of interferometric images. Compared to the two-dimensional wavelet transform, the curvelet transform has higher time-frequency resolution. The article includes numerical experiments, which were executed on a random interferometric image. As a result of nonlinear approximation, the curvelet transform yields a matrix with a smaller number of coefficients than that guaranteed by the wavelet transform. Additionally, denoising simulations show that the curvelet transform could be a very good tool to remove noise from images.

  17. Two-Scale 13C Metabolic Flux Analysis for Metabolic Engineering.

    Science.gov (United States)

    Ando, David; Garcia Martin, Hector

    2018-01-01

    Accelerating the Design-Build-Test-Learn (DBTL) cycle in synthetic biology is critical to achieving rapid and facile bioengineering of organisms for the production of, e.g., biofuels and other chemicals. The Learn phase involves using data obtained from the Test phase to inform the next Design phase. As part of the Learn phase, mathematical models of metabolic fluxes give a mechanistic level of comprehension to cellular metabolism, isolating the principal drivers of metabolic behavior from the peripheral ones, and directing future experimental designs and engineering methodologies. Furthermore, the measurement of intracellular metabolic fluxes is specifically noteworthy as providing a rapid and easy-to-understand picture of how carbon and energy flow throughout the cell. Here, we present a detailed guide to performing metabolic flux analysis in the Learn phase of the DBTL cycle, where we show how one can take the isotope labeling data from a 13C labeling experiment and immediately turn it into a determination of cellular fluxes that points in the direction of genetic engineering strategies that will advance the metabolic engineering process. For our modeling purposes we use the Joint BioEnergy Institute (JBEI) Quantitative Metabolic Modeling (jQMM) library, which provides an open-source, python-based framework for modeling internal metabolic fluxes and making actionable predictions on how to modify cellular metabolism for specific bioengineering goals. It presents a complete toolbox for performing different types of flux analysis such as Flux Balance Analysis and 13C Metabolic Flux Analysis, and it introduces the capability to use 13C labeling experimental data to constrain comprehensive genome-scale models through a technique called two-scale 13C Metabolic Flux Analysis (2S-13C MFA) [1]. In addition to several other capabilities, the jQMM is also able to predict the effects of knockouts using the MoMA and ROOM methodologies. The use of the jQMM library is
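
    Flux Balance Analysis, the simplest of the flux-analysis variants named above, is a linear program: maximise an objective flux subject to steady-state mass balance S·v = 0 and flux bounds. The toy network, bounds and solver call below are invented for illustration and are unrelated to the jQMM's own models:

      import numpy as np
      from scipy.optimize import linprog

      # Toy flux balance analysis: maximise the biomass flux v4 at steady state (S v = 0).
      # Metabolites A, B, C (rows); reactions (columns): v1: ->A, v2: A->B, v3: A->C, v4: B+C->biomass
      S = np.array([[1, -1, -1,  0],
                    [0,  1,  0, -1],
                    [0,  0,  1, -1]], dtype=float)
      bounds = [(0, 10), (0, None), (0, None), (0, None)]   # uptake v1 limited to 10 units
      res = linprog(c=[0, 0, 0, -1],                        # linprog minimises, so negate biomass
                    A_eq=S, b_eq=np.zeros(3), bounds=bounds, method="highs")
      print("optimal fluxes:", res.x)                       # expect v4 = 5 with v1 = 10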

  18. Two-dimensional errors

    International Nuclear Information System (INIS)

    Anon.

    1991-01-01

    This chapter addresses the extension of previous work in one-dimensional (linear) error theory to two-dimensional error analysis. The topics of the chapter include the definition of two-dimensional error, the probability ellipse, the probability circle, elliptical (circular) error evaluation, the application to position accuracy, and the use of control systems (points) in measurements
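
    As a worked example of the probability-ellipse concept listed above, the sketch below computes the semi-axes and orientation of the ellipse that contains a two-dimensional Gaussian position error with a chosen probability; the example covariance matrix is an arbitrary assumption, not data from the chapter:

      import numpy as np
      from scipy.stats import chi2

      def error_ellipse(cov, p=0.95):
          """Semi-major/minor axes and orientation (radians) of the ellipse containing
          a 2-D Gaussian position error with probability p."""
          vals, vecs = np.linalg.eigh(cov)                  # eigenvalues in ascending order
          k = np.sqrt(chi2.ppf(p, df=2))                    # scale factor for the requested probability
          a, b = k * np.sqrt(vals[::-1])                    # semi-major, semi-minor axis
          theta = np.arctan2(vecs[1, -1], vecs[0, -1])      # orientation of the major axis
          return a, b, theta

      cov = np.array([[4.0, 1.5],                           # example position covariance [m^2]
                      [1.5, 1.0]])
      print(error_ellipse(cov))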

  19. Mathematical pointing model establishment of the visual tracking theodolite for satellites in two kinds of observation methods.

    Science.gov (United States)

    Zhang, Yuncheng

    The establishment of the mathematical pointing model of the visual tracking theodolite for satellites, under two kinds of observation methods at Yunnan Observatory, which is related to the digitalisation reform and the optical-electronic technique reform, is introduced in this paper.

  20. Induced Temporal Signatures for Point-Source Detection

    International Nuclear Information System (INIS)

    Stephens, Daniel L.; Runkle, Robert C.; Carlson, Deborah K.; Peurrung, Anthony J.; Seifert, Allen; Wyatt, Cory R.

    2005-01-01

    Detection of radioactive point-sized sources is inherently divided into two regimes encompassing stationary and moving detectors. The two cases differ in their treatment of background radiation and its influence on detection sensitivity. In the stationary detector case the statistical fluctuation of the background determines the minimum detectable quantity. In the moving detector case the detector may be subjected to widely and irregularly varying background radiation, as a result of geographical and environmental variation. This significant systematic variation, in conjunction with the statistical variation of the background, requires a conservative threshold to be selected to yield the same false-positive rate as the stationary detection case. This results in lost detection sensitivity for real sources. This work focuses on a simple and practical modification of the detector geometry that increases point-source recognition via a distinctive temporal signature. A key part of this effort is the integrated development of detector geometries that induce a highly distinctive signature for point sources and of statistical algorithms able to optimize detection of this signature amidst varying background. The identification of temporal signatures for point sources has been demonstrated and compared with the canonical method, showing good results. This work demonstrates that temporal signatures are efficient at increasing point-source discrimination in a moving detector system.

  1. Professional SharePoint 2010 Development

    CERN Document Server

    Rizzo, Tom; Fried, Jeff; Swider, Paul J; Hillier, Scot; Schaefer, Kenneth

    2012-01-01

    Updated guidance on how to take advantage of the newest features of SharePoint programmability More than simply a portal, SharePoint is Microsoft's popular content management solution for building intranets and websites or hosting wikis and blogs. Offering broad coverage on all aspects of development for the SharePoint platform, this comprehensive book shows you exactly what SharePoint does, how to build solutions, and what features are accessible within SharePoint. Written by a team of SharePoint experts, this new edition offers an extensive selection of field-tested best practices that shows

  2. The three catalases in Deinococcus radiodurans: Only two show catalase activity

    Energy Technology Data Exchange (ETDEWEB)

    Jeong, Sun-Wook [Research Division for Biotechnology, Korea Atomic Energy Research Institute, Jeongeup, 580-185 (Korea, Republic of); Department of Biological Sciences, College of Biological Sciences and Biotechnology, Chungnam National University, Daejeon, 305-764 (Korea, Republic of); Jung, Jong-Hyun; Kim, Min-Kyu; Seo, Ho Seong [Research Division for Biotechnology, Korea Atomic Energy Research Institute, Jeongeup, 580-185 (Korea, Republic of); Lim, Heon-Man [Department of Biological Sciences, College of Biological Sciences and Biotechnology, Chungnam National University, Daejeon, 305-764 (Korea, Republic of); Lim, Sangyong, E-mail: saylim@kaeri.re.kr [Research Division for Biotechnology, Korea Atomic Energy Research Institute, Jeongeup, 580-185 (Korea, Republic of)

    2016-01-15

    Deinococcus radiodurans, which is extremely resistant to ionizing radiation and oxidative stress, is known to have three catalases (DR1998, DRA0146, and DRA0259). In this study, to investigate the role of each catalase, we constructed catalase mutants (Δdr1998, ΔdrA0146, and ΔdrA0259) of D. radiodurans. Of the three mutants, Δdr1998 exhibited the greatest decrease in hydrogen peroxide (H₂O₂) resistance and the highest increase in intracellular reactive oxygen species (ROS) levels following H₂O₂ treatments, whereas ΔdrA0146 showed no change in its H₂O₂ resistance or ROS level. Catalase activity was not attenuated in ΔdrA0146, and none of the three bands detected in an in-gel catalase activity assay disappeared in ΔdrA0146. The purified His-tagged recombinant DRA0146 did not show catalase activity. In addition, the phylogenetic analysis of the deinococcal catalases revealed that the DR1998-type catalase is common in the genus Deinococcus, but the DRA0146-type catalase was found in only 4 of 23 Deinococcus species. Taken together, these results indicate that DR1998 plays a critical role in the anti-oxidative system of D. radiodurans by detoxifying H₂O₂, but DRA0146 does not have catalase activity and is not involved in the resistance to H₂O₂ stress. - Highlights: • The dr1998 mutant strain lost 90% of its total catalase activity. • Increased ROS levels and decreased H₂O₂ resistance were observed in dr1998 mutants. • Lack of drA0146 did not affect any oxidative stress-related phenotypes. • The purified DRA0146 did not show catalase activity.

  3. The three catalases in Deinococcus radiodurans: Only two show catalase activity

    International Nuclear Information System (INIS)

    Jeong, Sun-Wook; Jung, Jong-Hyun; Kim, Min-Kyu; Seo, Ho Seong; Lim, Heon-Man; Lim, Sangyong

    2016-01-01

    Deinococcus radiodurans, which is extremely resistant to ionizing radiation and oxidative stress, is known to have three catalases (DR1998, DRA0146, and DRA0259). In this study, to investigate the role of each catalase, we constructed catalase mutants (Δdr1998, ΔdrA0146, and ΔdrA0259) of D. radiodurans. Of the three mutants, Δdr1998 exhibited the greatest decrease in hydrogen peroxide (H₂O₂) resistance and the highest increase in intracellular reactive oxygen species (ROS) levels following H₂O₂ treatments, whereas ΔdrA0146 showed no change in its H₂O₂ resistance or ROS level. Catalase activity was not attenuated in ΔdrA0146, and none of the three bands detected in an in-gel catalase activity assay disappeared in ΔdrA0146. The purified His-tagged recombinant DRA0146 did not show catalase activity. In addition, the phylogenetic analysis of the deinococcal catalases revealed that the DR1998-type catalase is common in the genus Deinococcus, but the DRA0146-type catalase was found in only 4 of 23 Deinococcus species. Taken together, these results indicate that DR1998 plays a critical role in the anti-oxidative system of D. radiodurans by detoxifying H₂O₂, but DRA0146 does not have catalase activity and is not involved in the resistance to H₂O₂ stress. - Highlights: • The dr1998 mutant strain lost 90% of its total catalase activity. • Increased ROS levels and decreased H₂O₂ resistance were observed in dr1998 mutants. • Lack of drA0146 did not affect any oxidative stress-related phenotypes. • The purified DRA0146 did not show catalase activity.

  4. Joint Clustering and Component Analysis of Correspondenceless Point Sets: Application to Cardiac Statistical Modeling.

    Science.gov (United States)

    Gooya, Ali; Lekadir, Karim; Alba, Xenia; Swift, Andrew J; Wild, Jim M; Frangi, Alejandro F

    2015-01-01

    Construction of Statistical Shape Models (SSMs) from arbitrary point sets is a challenging problem due to significant shape variation and lack of explicit point correspondence across the training data set. In medical imaging, point sets can generally represent different shape classes that span healthy and pathological exemplars. In such cases, the constructed SSM may not generalize well, largely because the probability density function (pdf) of the point sets deviates from the underlying assumption of Gaussian statistics. To this end, we propose a generative model for unsupervised learning of the pdf of point sets as a mixture of distinctive classes. A Variational Bayesian (VB) method is proposed for making joint inferences on the labels of point sets, and the principal modes of variations in each cluster. The method provides a flexible framework to handle point sets with no explicit point-to-point correspondences. We also show that by maximizing the marginalized likelihood of the model, the optimal number of clusters of point sets can be determined. We illustrate this work in the context of understanding the anatomical phenotype of the left and right ventricles in heart. To this end, we use a database containing hearts of healthy subjects, patients with Pulmonary Hypertension (PH), and patients with Hypertrophic Cardiomyopathy (HCM). We demonstrate that our method can outperform traditional PCA in both generalization and specificity measures.

  5. Comparative MR study of hepatic fat quantification using single-voxel proton spectroscopy, two-point dixon and three-point IDEAL.

    Science.gov (United States)

    Kim, Hyeonjin; Taksali, Sara E; Dufour, Sylvie; Befroy, Douglas; Goodman, T Robin; Petersen, Kitt Falk; Shulman, Gerald I; Caprio, Sonia; Constable, R Todd

    2008-03-01

    Hepatic fat fraction (HFF) was measured in 28 lean/obese humans by single-voxel proton spectroscopy (MRS), a two-point Dixon (2PD), and a three-point iterative decomposition of water and fat with echo asymmetry and least-squares estimation (IDEAL) method (3PI). For the lean, obese, and total subject groups, the range of HFF measured by MRS was 0.3-3.5% (1.1 +/- 1.4%), 0.3-41.5% (11.7 +/- 12.1), and 0.3-41.5% (10.1 +/- 11.6%), respectively. For the same groups, the HFF measured by 2PD was -6.3-2.2% (-2.0 +/- 3.7%), -2.4-42.9% (12.9 +/- 13.8%), and -6.3-42.9% (10.5 +/- 13.7%), respectively, and for 3PI they were 7.9-12.8% (10.1 +/- 2.0%), 11.1-49.3% (22.0 +/- 12.2%), and 7.9-49.3% (20.0 +/- 11.8%), respectively. The HFF measured by MRS was highly correlated with those measured by 2PD (r = 0.954, P fatty liver with the MRI methods ranged from 68-93% for 2PD and 64-89% for 3PI. Our study demonstrates that the apparent HFF measured by the MRI methods can significantly vary depending on the choice of water-fat separation methods and sequences. Such variability may limit the clinical application of the MRI methods, particularly when a diagnosis of early fatty liver needs to be performed. Therefore, protocol-specific establishment of cutoffs for liver fat content may be necessary. (c) 2008 Wiley-Liss, Inc.
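
    For readers unfamiliar with the two-point Dixon idea, it separates water and fat from in-phase and opposed-phase images by simple addition and subtraction, after which a fat fraction can be formed voxel-wise. The sketch below uses synthetic magnitude images and ignores phase errors, T2* decay and the water/fat dominance ambiguity, all of which real implementations must handle:

      import numpy as np

      # synthetic "true" water and fat magnitude images (arbitrary values)
      water = np.full((4, 4), 80.0)
      fat = np.linspace(0.0, 40.0, 16).reshape(4, 4)

      ip = water + fat                    # in-phase image:      W + F
      op = water - fat                    # opposed-phase image: W - F (assumes water-dominant voxels)

      W = 0.5 * (ip + op)                 # recovered water
      F = 0.5 * (ip - op)                 # recovered fat
      hff = F / (W + F)                   # fat fraction per voxel

      print(hff.round(3))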

  6. Two-step estimation for inhomogeneous spatial point processes

    DEFF Research Database (Denmark)

    Waagepetersen, Rasmus; Guan, Yongtao

    This paper is concerned with parameter estimation for inhomogeneous spatial point processes with a regression model for the intensity function and tractable second order properties (K-function). Regression parameters are estimated using a Poisson likelihood score estimating function, and in a second step minimum contrast estimation is applied for the residual clustering parameters. Asymptotic normality of parameter estimates is established under certain mixing conditions and we exemplify how the results may be applied in ecological studies of rain forests.

  7. Multi-lane detection based on multiple vanishing points detection

    Science.gov (United States)

    Li, Chuanxiang; Nie, Yiming; Dai, Bin; Wu, Tao

    2015-03-01

    Lane detection plays a significant role in Advanced Driver Assistance Systems (ADAS) for intelligent vehicles. In this paper we present a multi-lane detection method based on the detection of multiple vanishing points. A new multi-lane model assumes that a single lane, which has two approximately parallel boundaries, may not be parallel to others on the road plane. Non-parallel lanes are associated with different vanishing points. A biologically plausible model is used to detect multiple vanishing points and fit the lane model. Experimental results show that the proposed method can detect both parallel lanes and non-parallel lanes.
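
    A vanishing point can be estimated as the least-squares intersection of a set of image line segments expressed as homogeneous lines. The sketch below is a generic formulation, not the biologically inspired detector used in the paper, and the two example segments are synthetic:

      import numpy as np

      def vanishing_point(segments):
          """Least-squares vanishing point (pixel coordinates) of a set of image line
          segments, each given as ((x1, y1), (x2, y2))."""
          lines = []
          for (x1, y1), (x2, y2) in segments:
              l = np.cross([x1, y1, 1.0], [x2, y2, 1.0])   # homogeneous line through the endpoints
              lines.append(l / np.linalg.norm(l[:2]))      # normalise so residuals are point-line distances
          _, _, vt = np.linalg.svd(np.asarray(lines))      # v minimising ||A v|| = last right singular vector
          v = vt[-1]
          return v[:2] / v[2]

      # two lane boundaries converging towards (320, 200) in a 640x480 image (synthetic example)
      print(vanishing_point([((100, 480), (320, 200)), ((540, 480), (320, 200))]))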

  8. [Evaluation of a new blood gas analysis system: RapidPoint 500(®)].

    Science.gov (United States)

    Nicolas, Thierry; Cabrolier, Nadège; Bardonnet, Karine; Davani, Siamak

    2013-01-01

    We present here an evaluation of a new blood gas analysis system, RapidPoint 500(®) (Siemens Healthcare Diagnostics). The aim of this research was to compare the ergonomics and analytical performance of this analyser with those of the RapidLab 1265 for the following parameters: pH, partial oxygen pressure, partial carbon dioxide pressure, sodium, potassium, ionized calcium, lactate and the CO-oximetry parameters: hemoglobin, oxyhemoglobin, carboxyhemoglobin, methemoglobin, reduced hemoglobin, neonatal bilirubin; as well as with the Dimension Vista 500 results for chloride and glucose. The Valtec protocol, recommended by the French Society of Clinical Biology (SFBC), was used to analyze the study results. The experiment was carried out over a period of one month in the Department of Medical Biochemistry. One hundred and sixty-five samples from adult patients admitted to the ER or hospitalized in intensive care were tested. The RapidPoint 500(®) was highly satisfactory from an ergonomic point of view. Intra- and inter-assay coefficients of variation (CV) with the three control levels were below those recommended by the SFBC for all parameters, and the comparative study gave coefficients of determination higher than 0.91. Taken together, the RapidPoint 500(®) appears fully satisfactory in terms of ergonomics and analytical performance.

  9. Fuzzy stochastic analysis of serviceability and ultimate limit states of two-span pedestrian steel bridge

    Science.gov (United States)

    Kala, Zdeněk; Sandovič, Giedrė

    2012-09-01

    The paper deals with non-linear analysis of the ultimate and serviceability limit states of a two-span pedestrian steel bridge. The effects of random material and geometrical characteristics on the limit states are analyzed. The Monte Carlo method was applied in the stochastic analysis. For the serviceability limit state, the influence of fuzzy uncertainty of the limit deflection value on the random characteristics of the load capacity of variable action was also studied. The results prove that, for the type of structure studied, the serviceability limit state is decisive from the point of view of design. The present paper opens a discussion on the use of stochastic analysis to verify the limit deflections given in the EUROCODE standards.

  10. Efficiency analysis on a two-level three-phase quasi-soft-switching inverter

    DEFF Research Database (Denmark)

    Geng, Pan; Wu, Weimin; Huang, Min

    2013-01-01

    When designing an inverter, an engineer often needs to select a design and to predict its efficiency beforehand. For standard inverters, plenty of research analyzes the power losses, and many software tools are used for efficiency calculation. In this paper, the efficiency calculation for non-conventional inverters with a special shoot-through state is introduced and illustrated through the analysis of a special two-level three-phase quasi-soft-switching inverter. An efficiency comparison between the classical two-stage two-level three-phase inverter and the two-level three-phase quasi-soft-switching inverter is carried out. A 10 kW/380 V prototype is constructed to verify the analysis. The experimental results show that the efficiency of the new inverter is higher than that of the traditional two-stage two-level three-phase inverter.

  11. Comprehensive analysis of Curie-point depths and lithospheric effective elastic thickness at Arctic Region

    Science.gov (United States)

    Lu, Y.; Li, C. F.

    2017-12-01

    The Arctic Ocean remains at the forefront of geological exploration. Here we investigate its deep geological structures and geodynamics on the basis of gravity, magnetic and bathymetric data. We estimate Curie-point depth and lithospheric effective elastic thickness to understand deep geothermal structures and Arctic lithospheric evolution. A fractal exponent of 3.0 for the 3D magnetization model is used in the Curie-point depth inversion. The result shows that Curie-point depths are between 5 and 50 km. Curie depths are mostly small near the active mid-ocean ridges, corresponding well to high heat flow and active shallow volcanism. Large Curie depths are distributed mainly at the continental marginal seas around the Arctic Ocean. We present a map of effective elastic thickness (Te) of the lithosphere using a multitaper coherence technique; Te values are between 5 and 110 km. Te depends primarily on geothermal gradient and composition, as well as on structures in the lithosphere. We find that Te and Curie-point depths are often correlated. Large Te values are distributed mainly in the continental region and small Te values in the oceanic region. The Alpha-Mendeleyev Ridge (AMR) and the Svalbard Archipelago (SA) are symmetric about the mid-ocean ridge. AMR and SA were formed before an early stage of Eurasian basin spreading, and they are considered conjugate large igneous provinces, which show small Te and Curie-point depths. The Novaya Zemlya region has large Curie-point depths and small Te. We consider that faults and fractures near the Novaya Zemlya orogenic belt cause the small Te. A series of transform faults connect the Arctic mid-ocean ridge with the North Atlantic mid-ocean ridge. We can see large Te near the transform faults, but small Curie-point depths. We consider that although the temperature near the transform faults is high, the lithosphere there is mechanically strengthened.

  12. Exact two-point resistance, and the simple random walk on the complete graph minus N edges

    International Nuclear Information System (INIS)

    Chair, Noureddine

    2012-01-01

    An analytical approach is developed to obtain the exact expressions for the two-point resistance and the total effective resistance of the complete graph minus N edges of the opposite vertices. These expressions are written in terms of certain numbers that we introduce, which we call the Bejaia and the Pisa numbers; these numbers are the natural generalizations of the bisected Fibonacci and Lucas numbers. The correspondence between random walks and the resistor networks is then used to obtain the exact expressions for the first passage and mean first passage times on this graph. - Highlights: ► We obtain exact formulas for the two-point resistance of the complete graph minus N edges. ► We obtain also the total effective resistance of this graph. ► We modified Schwatt’s formula on trigonometrical power sum to suit our computations. ► We introduced the generalized bisected Fibonacci and Lucas numbers: the Bejaia and the Pisa numbers. ► The first passage and mean first passage times of the random walks have exact expressions.
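
    A quick numerical cross-check of such closed-form results is possible because the effective resistance between any two nodes of a unit-resistor network follows from the Moore-Penrose pseudoinverse of the graph Laplacian, R_ij = L+_ii + L+_jj - 2 L+_ij. The sketch below applies this to a small complete graph with one edge between "opposite" vertices removed; the graph size is an arbitrary example, not a case treated in the paper:

      import numpy as np

      def two_point_resistance(adj, i, j):
          """Effective resistance between nodes i and j of a network with a unit
          resistor on every edge of the (symmetric) adjacency matrix adj."""
          L = np.diag(adj.sum(axis=1)) - adj                # graph Laplacian
          Lp = np.linalg.pinv(L)                            # Moore-Penrose pseudoinverse
          return Lp[i, i] + Lp[j, j] - 2.0 * Lp[i, j]

      # complete graph on 6 vertices with the edge between opposite vertices 0 and 3 removed
      n = 6
      adj = np.ones((n, n)) - np.eye(n)
      adj[0, 3] = adj[3, 0] = 0.0
      print(two_point_resistance(adj, 0, 3))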

  13. ROMANIA – PORTUGAL: A COMPARATIVE ANALYSIS OF THE TWO COUNTRIES’ LABOUR MARKETS

    Directory of Open Access Journals (Sweden)

    DIMIAN Gina Cristina

    2011-12-01

    Full Text Available Our choice was justified by the fact that the two countries share some features that make them interesting to study from the employment point of view. Both countries are Latin, and this is why we consider them comparable, because employment means people, more precisely mentalities and attitudes to work. We considered it interesting to see how the labour market of eastern Latin Europe has evolved, in a comparable, crucial period, relative to its counterpart in western Latin Europe. First of all, we would like to point out that our intention is to analyse the periods which, from the economic history point of view, have influenced in a decisive manner the present evolution of the two countries. The Portuguese labour market is a subject of real scientific interest (we would like to mention that even Michael Porter was interested in this topic). Our paper tries to emphasize the common and different features of the two labour markets, in order to facilitate an experience-sharing process on this topic. To achieve the paper’s objectives, statistical and cluster analyses have been used. This is one of the best ways to capture the influence of determinant factors on labour market performance. The degree of originality is given by the assumed objectives, namely studying some very up-to-date problems from an interconnected perspective (historical similarities, structural changes, labour market performance) and analyzing the Romanian situation compared to other EU countries, i.e. Portugal. The main impact of the paper will be on the practical level through the model outcomes and conclusions. One of the objectives is to look for solutions to the problems identified and to persuade policy makers to give them a greater importance. Our main contribution is represented by the fact that we have approached this topic from an economic and historical perspective, trying to find explanations for the present situation in the modern past of the

  14. Forensic Analysis of Blue Ball point Pen Inks on Questioned Documents by High Performance Thin Layer Chromatography Technique (HPTLC)

    International Nuclear Information System (INIS)

    Lee, L.C.; Siti Mariam Nunurung; Abdul Aziz Ishak

    2014-01-01

    Nowadays, crimes related to forged documents are increasing. Any erasure, addition or modification in the document content always involves the use of a writing instrument such as a ballpoint pen. Hence, there is an evident need to develop a fast and accurate ink analysis protocol to solve this problem. This study aimed to determine the discrimination power of the high performance thin layer chromatography (HPTLC) technique for analyzing a set of blue ballpoint pen inks. Ink samples deposited on paper were extracted using methanol and separated with a solvent mixture of ethyl acetate, methanol and distilled water (70:35:30, v/v/v). With this method, a discrimination power of 89.40 % was achieved, which confirms that the proposed method was able to differentiate a significant number of pen-pair samples. In addition, the composition of the blue pen inks was found to be homogeneous (RSD < 2.5 %) and the proposed method showed good repeatability and reproducibility (RSD < 3.0 %). In conclusion, HPTLC is an effective tool to separate blue ballpoint pen inks. (author)
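
    The discrimination power quoted above is simply the fraction of all possible sample pairs that the method can tell apart. A minimal sketch follows; the ink profiles, their representation as sets of band positions, and the equality-based comparison rule are all illustrative assumptions:

      from itertools import combinations

      def discrimination_power(profiles, same):
          """Fraction of all sample pairs that the analytical method can tell apart."""
          pairs = list(combinations(range(len(profiles)), 2))
          discriminated = sum(1 for a, b in pairs if not same(profiles[a], profiles[b]))
          return discriminated / len(pairs)

      # toy example: each ink profile is the set of detected band positions (hypothetical Rf values)
      inks = [frozenset({0.21, 0.45, 0.78}),
              frozenset({0.21, 0.45}),
              frozenset({0.33, 0.61, 0.78})]
      print(discrimination_power(inks, same=lambda p, q: p == q))   # 1.0: all pairs differ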

  15. A stock market forecasting model combining two-directional two-dimensional principal component analysis and radial basis function neural network.

    Science.gov (United States)

    Guo, Zhiqiang; Wang, Huaiqing; Yang, Jie; Miller, David J

    2015-01-01

    In this paper, we propose and implement a hybrid model combining two-directional two-dimensional principal component analysis ((2D)2PCA) and a Radial Basis Function Neural Network (RBFNN) to forecast stock market behavior. First, 36 stock market technical variables are selected as the input features, and a sliding window is used to obtain the input data of the model. Next, (2D)2PCA is utilized to reduce the dimension of the data and extract its intrinsic features. Finally, an RBFNN accepts the data processed by (2D)2PCA to forecast the next day's stock price or movement. The proposed model is used on the Shanghai stock market index, and the experiments show that the model achieves a good level of fitness. The proposed model is then compared with one that uses the traditional dimension reduction method principal component analysis (PCA) and independent component analysis (ICA). The empirical results show that the proposed model outperforms the PCA-based model, as well as alternative models based on ICA and on the multilayer perceptron.

  16. Two-step estimation for inhomogeneous spatial point processes

    DEFF Research Database (Denmark)

    Waagepetersen, Rasmus; Guan, Yongtao

    2009-01-01

    The paper is concerned with parameter estimation for inhomogeneous spatial point processes with a regression model for the intensity function and tractable second-order properties (K-function). Regression parameters are estimated by using a Poisson likelihood score estimating function, and in the second step minimum contrast estimation is applied for the residual clustering parameters. Asymptotic normality of parameter estimates is established under certain mixing conditions and we exemplify how the results may be applied in ecological studies of rainforests.

  17. Publication point indicators

    DEFF Research Database (Denmark)

    Elleby, Anita; Ingwersen, Peter

    2010-01-01

    The paper presents comparative analyses of two publication point systems, the Norwegian one and the in-house system of the interdisciplinary Danish Institute for International Studies (DIIS), used as a case in the study for publications published in 2006, and compares central citation-based indicators with novel publication point indicators (PPIs) that are formalized and exemplified. Two diachronic citation windows are applied: 2006-07 and 2006-08. Web of Science (WoS) as well as Google Scholar (GS) are applied to observe the citation delay and citedness for the different document types published by DIIS... for all document types. Statistically significant correlations were only found between WoS and GS, and between the two publication point systems, respectively. The study demonstrates how the nCPPI can be applied to institutions as evaluation tools supplementary to JCI in various combinations...

  18. Effect of incisor inclination changes on cephalometric points A and B

    International Nuclear Information System (INIS)

    Hassan, S.; Shaikh, A.; Fida, M.

    2015-01-01

    The positions of cephalometric points A and B are liable to be affected by alveolar remodelling caused by orthodontic tooth movement during incisor retraction. This study was conducted to evaluate the change in the positions of cephalometric points A and B in the sagittal and vertical dimensions due to changes in incisor inclination. Methods: A total sample of 31 subjects was recruited into the study. The inclusion criteria were extraction of premolars in the upper and lower arches, completion of growth and orthodontic treatment. The exclusion criteria were patients with craniofacial anomalies and a history of orthodontic treatment. By superimposition of pre- and post-treatment tracings, various linear and angular parameters were measured. Various tests and multiple linear regression analysis were performed to determine changes in the outcome variables. A p-value of <0.05 was considered statistically significant. Results: A one-sample t-test showed that only the change in the position of point A was statistically significant, amounting to 1.61 mm (p<0.01) in the sagittal direction and 1.49 mm (p<0.01) in the vertical direction. Multiple linear regression analysis showed that if the upper incisor is retroclined by 10°, point A moves superiorly by 0.6 mm. Conclusions: The total change in the position of point A is in a downward and forward direction. Change in upper incisor inclination causes a change in the position of point A only in the vertical direction. (author)

  19. Filtering Photogrammetric Point Clouds Using Standard LIDAR Filters Towards DTM Generation

    Science.gov (United States)

    Zhang, Z.; Gerke, M.; Vosselman, G.; Yang, M. Y.

    2018-05-01

    Digital Terrain Models (DTMs) can be generated from point clouds acquired by laser scanning or photogrammetric dense matching. During the last two decades, much effort has been paid to developing robust filtering algorithms for the airborne laser scanning (ALS) data. With the point cloud quality from dense image matching (DIM) getting better and better, the research question that arises is whether those standard Lidar filters can be used to filter photogrammetric point clouds as well. Experiments are implemented to filter two dense matching point clouds with different noise levels. Results show that the standard Lidar filter is robust to random noise. However, artefacts and blunders in the DIM points often appear due to low contrast or poor texture in the images. Filtering will be erroneous in these locations. Filtering the DIM points pre-processed by a ranking filter will bring higher Type II error (i.e. non-ground points actually labelled as ground points) but much lower Type I error (i.e. bare ground points labelled as non-ground points). Finally, the potential DTM accuracy that can be achieved by DIM points is evaluated. Two DIM point clouds derived by Pix4Dmapper and SURE are compared. On grassland dense matching generates points higher than the true terrain surface, which will result in incorrectly elevated DTMs. The application of the ranking filter leads to a reduced bias in the DTM height, but a slightly increased noise level.

  20. Material-Point Analysis of Large-Strain Problems

    DEFF Research Database (Denmark)

    Andersen, Søren

    The aim of this thesis is to apply and improve the material-point method for modelling of geotechnical problems. One of the geotechnical phenomena that is a subject of active research is the study of landslides. A large amount of research is focused on determining when slopes become unstable. Hence, it is possible to predict if a certain slope is stable using commercial finite element or finite difference software such as PLAXIS, ABAQUS or FLAC. However, the dynamics during a landslide are less explored. The material-point method (MPM) is a novel numerical method aimed at analysing problems involving materials subjected to large strains in a dynamical time–space domain. This thesis explores the material-point method with the specific aim of improving the performance for geotechnical problems. Large-strain geotechnical problems such as landslides pose a major challenge to model numerically. Employing