WorldWideScience

Sample records for continuous pointing method

  1. Method and apparatus for continuously detecting and monitoring the hydrocarbon dew-point of gas

    Energy Technology Data Exchange (ETDEWEB)

    Boyle, G.J.; Pritchard, F.R.

    1987-08-04

    This patent describes a method and apparatus for continuously detecting and monitoring the hydrocarbon dew-point of a gas. A gas sample is supplied to a dew-point detector and the temperature of a portion of the sample gas stream to be investigated is lowered progressively prior to detection until the dew-point is reached. The presence of condensate within the flowing gas is detected and subsequently the supply gas sample is heated to above the dew-point. The procedure of cooling and heating the gas stream continuously in a cyclical manner is repeated.

  2. A continuous surface reconstruction method on point cloud captured from a 3D surface photogrammetry system

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Wenyang [Department of Bioengineering, University of California, Los Angeles, California 90095 (United States); Cheung, Yam; Sabouri, Pouya; Arai, Tatsuya J.; Sawant, Amit [Department of Radiation Oncology, University of Texas Southwestern, Dallas, Texas 75390 (United States); Ruan, Dan, E-mail: druan@mednet.ucla.edu [Department of Bioengineering, University of California, Los Angeles, California 90095 and Department of Radiation Oncology, University of California, Los Angeles, California 90095 (United States)

    2015-11-15

    Purpose: To accurately and efficiently reconstruct a continuous surface from noisy point clouds captured by a surface photogrammetry system (VisionRT). Methods: The authors have developed a level-set based surface reconstruction method on point clouds captured by a surface photogrammetry system (VisionRT). The proposed method reconstructs an implicit and continuous representation of the underlying patient surface by optimizing a regularized fitting energy, offering extra robustness to noise and missing measurements. In contrast to explicit/discrete meshing-type schemes, their continuous representation is particularly advantageous for subsequent surface registration and motion tracking by eliminating the need for maintaining explicit point correspondences as in discrete models. The authors solve the proposed method with an efficient narrowband evolving scheme. The authors evaluated the proposed method on both phantom and human subject data with two sets of complementary experiments. In the first set of experiments, the authors generated a series of surfaces, each with different black patches placed on one chest phantom. The resulting VisionRT measurements from the patched areas had different degrees of noise and missing data, since VisionRT has difficulties in detecting dark surfaces. The authors applied the proposed method to point clouds acquired under these different configurations, and quantitatively evaluated reconstructed surfaces by comparing against a high-quality reference surface with respect to root mean squared error (RMSE). In the second set of experiments, the authors applied their method to 100 clinical point clouds acquired from one human subject. In the absence of ground truth, the authors qualitatively validated reconstructed surfaces by comparing the local geometry, specifically mean curvature distributions, against that of the surface extracted from a high-quality CT obtained from the same patient. Results: On phantom point clouds, their method

  3. A continuous surface reconstruction method on point cloud captured from a 3D surface photogrammetry system.

    Science.gov (United States)

    Liu, Wenyang; Cheung, Yam; Sabouri, Pouya; Arai, Tatsuya J; Sawant, Amit; Ruan, Dan

    2015-11-01

    To accurately and efficiently reconstruct a continuous surface from noisy point clouds captured by a surface photogrammetry system (VisionRT). The authors have developed a level-set based surface reconstruction method on point clouds captured by a surface photogrammetry system (VisionRT). The proposed method reconstructs an implicit and continuous representation of the underlying patient surface by optimizing a regularized fitting energy, offering extra robustness to noise and missing measurements. In contrast to explicit/discrete meshing-type schemes, their continuous representation is particularly advantageous for subsequent surface registration and motion tracking by eliminating the need for maintaining explicit point correspondences as in discrete models. The authors solve the proposed method with an efficient narrowband evolving scheme. The authors evaluated the proposed method on both phantom and human subject data with two sets of complementary experiments. In the first set of experiments, the authors generated a series of surfaces, each with different black patches placed on one chest phantom. The resulting VisionRT measurements from the patched areas had different degrees of noise and missing data, since VisionRT has difficulties in detecting dark surfaces. The authors applied the proposed method to point clouds acquired under these different configurations, and quantitatively evaluated reconstructed surfaces by comparing against a high-quality reference surface with respect to root mean squared error (RMSE). In the second set of experiments, the authors applied their method to 100 clinical point clouds acquired from one human subject. In the absence of ground truth, the authors qualitatively validated reconstructed surfaces by comparing the local geometry, specifically mean curvature distributions, against that of the surface extracted from a high-quality CT obtained from the same patient. On phantom point clouds, their method achieved submillimeter

  4. Ewald Electrostatics for Mixtures of Point and Continuous Line Charges.

    Science.gov (United States)

    Antila, Hanne S; Tassel, Paul R Van; Sammalkorpi, Maria

    2015-10-15

    Many charged macro- or supramolecular systems, such as DNA, are approximately rod-shaped and, to the lowest order, may be treated as continuous line charges. However, the standard method used to calculate electrostatics in molecular simulation, the Ewald summation, is designed to treat systems of point charges. We extend the Ewald concept to a hybrid system containing both point charges and continuous line charges. We find the calculated force between a point charge and (i) a continuous line charge and (ii) a discrete line charge consisting of uniformly spaced point charges to be numerically equivalent when the separation greatly exceeds the discretization length. At shorter separations, discretization induces deviations in the force and energy, and point charge-point charge correlation effects. Because significant computational savings are also possible, the continuous line charge Ewald method presented here offers the possibility of accurate and efficient electrostatic calculations.
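
    The central numerical claim above, that a discretized line charge reproduces the continuous-line result once the separation greatly exceeds the discretization length, can be checked directly without the Ewald machinery. The sketch below compares the analytic perpendicular field of a finite uniform line charge with that of the same charge split into point charges; it is a minimal illustration, not the hybrid Ewald method of the paper, and all parameter values are arbitrary.

```python
import numpy as np

def e_perp_continuous(lam, L, r):
    """Analytic perpendicular field of a finite uniform line charge of density
    lam and length L, at distance r from its midpoint (Coulomb constant k = 1)."""
    return lam * L / (r * np.sqrt(r**2 + (L / 2)**2))

def e_perp_discrete(lam, L, r, n):
    """Same total charge represented by n equally spaced point charges."""
    dz = L / n
    z = -L / 2 + (np.arange(n) + 0.5) * dz       # point-charge positions along the line
    return np.sum(lam * dz * r / (r**2 + z**2) ** 1.5)

lam, L, n = 1.0, 10.0, 50                        # charge density, line length, point count
dz = L / n
for r in (0.5 * dz, 2 * dz, 10 * dz, 100 * dz):  # separation in units of the spacing
    cont, disc = e_perp_continuous(lam, L, r), e_perp_discrete(lam, L, r, n)
    print(f"r/dz = {r / dz:6.1f}   relative deviation = {abs(disc - cont) / cont:.2e}")
```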

  5. Comparison of Single-Point and Continuous Sampling Methods for Estimating Residential Indoor Temperature and Humidity.

    Science.gov (United States)

    Johnston, James D; Magnusson, Brianna M; Eggett, Dennis; Collingwood, Scott C; Bernhardt, Scott A

    2015-01-01

    Residential temperature and humidity are associated with multiple health effects. Studies commonly use single-point measures to estimate indoor temperature and humidity exposures, but there is little evidence to support this sampling strategy. This study evaluated the relationship between single-point and continuous monitoring of air temperature, apparent temperature, relative humidity, and absolute humidity over four exposure intervals (5-min, 30-min, 24-hr, and 12-days) in 9 northern Utah homes, from March-June 2012. Three homes were sampled twice, for a total of 12 observation periods. Continuous data-logged sampling was conducted in homes for 2-3 wks, and simultaneous single-point measures (n = 114) were collected using handheld thermo-hygrometers. Time-centered single-point measures were moderately correlated with short-term (30-min) data logger mean air temperature (r = 0.76, β = 0.74), apparent temperature (r = 0.79, β = 0.79), relative humidity (r = 0.70, β = 0.63), and absolute humidity (r = 0.80, β = 0.80). Data logger 12-day means were also moderately correlated with single-point air temperature (r = 0.64, β = 0.43) and apparent temperature (r = 0.64, β = 0.44), but were weakly correlated with single-point relative humidity (r = 0.53, β = 0.35) and absolute humidity (r = 0.52, β = 0.39). Of the single-point RH measures, 59 (51.8%) deviated more than ±5%, 21 (18.4%) deviated more than ±10%, and 6 (5.3%) deviated more than ±15% from data logger 12-day means. Where continuous indoor monitoring is not feasible, single-point sampling strategies should include multiple measures collected at prescribed time points based on local conditions.
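
    The statistics quoted above (Pearson r, regression slope β, and the fraction of single-point readings deviating beyond fixed thresholds) are straightforward to reproduce on paired data. The sketch below uses simulated relative-humidity pairs purely as a placeholder for the study's measurements; scipy.stats.linregress supplies both r and the slope.

```python
import numpy as np
from scipy import stats

# Hypothetical paired data: a time-centered single-point RH reading and the
# corresponding data-logger mean over one exposure interval (e.g. 30 min).
rng = np.random.default_rng(0)
logger_mean = rng.normal(40.0, 8.0, size=114)                 # % RH, simulated
single_point = logger_mean + rng.normal(0.0, 5.0, size=114)   # with sampling error

fit = stats.linregress(single_point, logger_mean)
print(f"Pearson r = {fit.rvalue:.2f}, regression slope beta = {fit.slope:.2f}")

# Fraction of single-point readings deviating from the logger mean by more
# than fixed thresholds, analogous to the deviations reported for 12-day means.
deviation = single_point - logger_mean
for thr in (5, 10, 15):
    print(f"|deviation| > {thr}% RH: {100 * np.mean(np.abs(deviation) > thr):.1f}%")
```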

  6. Polynomial approach method to solve the neutron point kinetics equations with use of the analytic continuation

    Energy Technology Data Exchange (ETDEWEB)

    Tumelero, Fernanda; Petersen, Claudio Zen; Goncalves, Glenio Aguiar [Universidade Federal de Pelotas, Capao do Leao, RS (Brazil). Programa de Pos Graduacao em Modelagem Matematica; Schramm, Marcelo [Universidade Federal do Rio Grande do Sul, Porto Alegre, RS (Brazil). Programa de Pos-Graduacao em Engenharia Mecanica

    2016-12-15

    In this work, we report a method to solve the Neutron Point Kinetics Equations applying the Polynomial Approach Method. The main idea is to expand the neutron density and the delayed neutron precursors as a power series, considering the reactivity as an arbitrary function of time in a relatively short time interval around an ordinary point. In the first interval one applies the initial conditions, and analytic continuation is used to determine the solutions of the subsequent intervals. A genuine error control is developed based on an analogy with the remainder theorem. For illustration, we also report simulations for different approximation types (linear, quadratic and cubic). The results obtained by numerical simulations for the linear approximation are compared with results in the literature.
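
    A minimal sketch of the power-series idea follows: on each short interval the neutron density and a single delayed-precursor group are expanded in a Taylor series whose coefficients follow recursively from the point-kinetics equations, and the end of one interval seeds the next (the analytic-continuation step). Unlike the paper, reactivity is held constant within each interval here, only one precursor group is used, and the parameter values are illustrative.

```python
import numpy as np

# One-delayed-group point-kinetics parameters (illustrative values)
beta, lam, Lam = 0.0065, 0.08, 1.0e-4     # delayed fraction, decay const, generation time

def taylor_step(n0, c0, rho, h, order=6):
    """Advance (n, C) over one interval of length h by a truncated power series.
    The coefficient recurrence follows from differentiating the point-kinetics
    ODEs; rho is held constant on the interval (a simplification of the paper)."""
    a_k, b_k = n0, c0                      # current Taylor coefficients of n and C
    n_h, c_h, hk = n0, c0, 1.0
    for k in range(order):
        a_k, b_k = ((rho - beta) / Lam * a_k + lam * b_k) / (k + 1), \
                   (beta / Lam * a_k - lam * b_k) / (k + 1)
        hk *= h
        n_h += a_k * hk
        c_h += b_k * hk
    return n_h, c_h

# March in time: the end of each interval seeds the next one (analytic continuation)
n, c = 1.0, beta / (lam * Lam)             # start from steady state with n = 1
rho_of_t = lambda t: 0.003                 # step reactivity insertion of 300 pcm
t, h = 0.0, 1.0e-3
for _ in range(1000):
    n, c = taylor_step(n, c, rho_of_t(t), h)
    t += h
print(f"neutron density at t = {t:.3f} s: n = {n:.4f}")
```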

  7. Estimating continuous floodplain and major river bed topography mixing ordinal contour lines and topographic points

    Science.gov (United States)

    Bailly, J. S.; Dartevelle, M.; Delenne, C.; Rousseau, A.

    2017-12-01

    Floodplain and major river bed topography govern many river biophysical processes during floods. Despite the growth of direct topographic measurements from LiDAR on riverine systems, there is still room to develop methods for large (e.g. deltas) or very local (e.g. ponds) riverine systems that take advantage of information coming from simple SAR or optical image processing on the floodplain, resulting from waterbody delineation during flood rise or recession and producing ordered contour lines. The next challenge is thus to exploit such data in order to estimate continuous topography on the floodplain by combining heterogeneous data: a topographic point dataset and a located but unvalued (elevations unknown) and ordered contour-line dataset. This article compares two methods designed to estimate continuous topography on the floodplain by mixing ordinal contour lines and topographic points. For both methods, a first step assigns an elevation value to each contour line, and a second step then estimates the continuous field from both the topographic points and the valued contour lines. The first proposed method is a stochastic method based on multi-Gaussian random fields and conditional simulation. The second is a deterministic method based on radial spline functions for thin plates (thin-plate splines) used for approximate bivariate surface construction. Results are first shown and discussed for a set of synoptic case studies presenting various topographic point densities and degrees of topographic smoothness. Next, results are shown and discussed for an actual case study in the Montagua laguna, located north of Valparaiso, Chile.

  8. A Continuation Method for Weakly Kannan Maps

    Directory of Open Access Journals (Sweden)

    Ariza-Ruiz David

    2010-01-01

    The first continuation method for contractive maps in the setting of a metric space was given by Granas. Later, Frigon extended Granas' theorem to the class of weakly contractive maps, and recently Agarwal and O'Regan have given the corresponding result for a certain type of quasicontractions which includes maps of Kannan type. In this note we introduce the concept of weakly Kannan maps and give a fixed point theorem, and then a continuation method, for this class of maps.

  9. Efficient point cloud data processing in shipbuilding: Reformative component extraction method and registration method

    Directory of Open Access Journals (Sweden)

    Jingyu Sun

    2014-07-01

    To survive in the current shipbuilding industry, it is of vital importance for shipyards to have the accuracy of ship components evaluated efficiently during most of the manufacturing steps. Evaluating a component's accuracy by comparing its point cloud data, scanned by laser scanners, with the ship's design data formatted in CAD cannot be done efficiently when (1) the components extracted from the point cloud data include irregular obstacles, or when (2) the registration of the two data sets has no clear direction setting. This paper presents reformative point cloud data processing methods to solve these problems. K-d tree construction of the point cloud data speeds up the neighbor search for each point. A region growing method performed on the neighbor points of the seed point extracts the continuous part of the component, while curved surface fitting and B-spline curve fitting at the edge of the continuous part recognize neighboring domains of the same component divided by obstacles' shadows. The ICP (Iterative Closest Point) algorithm conducts a registration of the two sets of data after the proper registration direction is decided by principal component analysis. In experiments conducted at the shipyard, 200 curved shell plates are extracted from the scanned point cloud data, and registrations are conducted between them and the designed CAD data using the proposed methods for an accuracy evaluation. Results show that the methods proposed in this paper support efficient point cloud data processing for accuracy evaluation in practice.
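
    The k-d tree plus region-growing step described above can be sketched compactly with scipy.spatial.cKDTree. The code below grows a connected patch from a seed point using only a radius criterion; the curved-surface and B-spline edge handling, the ICP registration and the PCA-based direction choice from the paper are omitted, and the toy point cloud is invented for illustration.

```python
import numpy as np
from scipy.spatial import cKDTree

def region_grow(points, seed_idx, radius):
    """Grow the connected patch around a seed: repeatedly add every neighbour
    found within `radius` of a point already in the region. (The paper refines
    the patch edge with curved-surface / B-spline fitting; omitted here.)"""
    tree = cKDTree(points)                      # k-d tree speeds up neighbour queries
    in_region = np.zeros(len(points), dtype=bool)
    in_region[seed_idx] = True
    front = [seed_idx]
    while front:
        idx = front.pop()
        for j in tree.query_ball_point(points[idx], r=radius):
            if not in_region[j]:
                in_region[j] = True
                front.append(j)
    return np.flatnonzero(in_region)

# Toy cloud: a dense plate plus a detached cluster that should not be absorbed
rng = np.random.default_rng(1)
plate = np.column_stack([rng.random(2000) * 0.5, rng.random(2000) * 0.5, np.zeros(2000)])
blob = rng.random((200, 3)) * 0.05 + np.array([1.0, 1.0, 0.3])
cloud = np.vstack([plate, blob])
patch = region_grow(cloud, seed_idx=0, radius=0.03)
print(f"grown region: {len(patch)} of {len(cloud)} points")
```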

  10. Parametric methods for spatial point processes

    DEFF Research Database (Denmark)

    Møller, Jesper

    (This text is submitted for the volume ‘A Handbook of Spatial Statistics' edited by A.E. Gelfand, P. Diggle, M. Fuentes, and P. Guttorp, to be published by Chapman and Hall/CRC Press, and planned to appear as Chapter 4.4 with the title ‘Parametric methods'.) This chapter considers inference procedures for parametric spatial point process models. The widespread use of sensible but ad hoc methods based on summary statistics of the kind studied in Chapter 4.3 has through the last two decades been supplemented by likelihood-based methods for parametric spatial point process models. ... is studied in Section 4, and Bayesian inference in Section 5. On one hand, as the development in computer technology and computational statistics continues, computationally intensive simulation-based methods for likelihood inference will probably play an increasing role in the statistical analysis of spatial point processes.

  11. CONTINUOUS ANALYZER UTILIZING BOILING POINT DETERMINATION

    Science.gov (United States)

    Pappas, W.S.

    1963-03-19

    A device is designed for continuously determining the boiling point of a mixture of liquids. The device comprises a distillation chamber for boiling a liquid; outlet conduit means for maintaining the liquid contents of said chamber at a constant level; a reflux condenser mounted above said distillation chamber; means for continuously introducing an incoming liquid sample into said reflux condenser and into intimate contact with vapors refluxing within said condenser; and means for measuring the temperature of the liquid flowing through said distillation chamber. (AEC)

  12. Automatic continuous dew point measurement in combustion gases

    Energy Technology Data Exchange (ETDEWEB)

    Fehler, D.

    1986-08-01

    Low exhaust temperatures serve to minimize energy consumption in combustion systems. This requires accurate, continuous measurement of exhaust condensation. An automatic dew point meter for continuous operation is described. The principle of measurement, the design of the measuring system, and practical aspects of operation are discussed.

  13. Continuation of connecting orbits in 3D-ODEs: (i) point-to-cycle connections.

    NARCIS (Netherlands)

    Doedel, E.J.; Kooi, B.W.; van Voorn, G.A.K.; Kuznetzov, Y.A.

    2008-01-01

    We propose new methods for the numerical continuation of point-to-cycle connecting orbits in three-dimensional autonomous ODEs using projection boundary conditions. In our approach, the projection boundary conditions near the cycle are formulated using an eigenfunction of the associated adjoint

  14. Natural Preconditioning and Iterative Methods for Saddle Point Systems

    KAUST Repository

    Pestana, Jennifer

    2015-01-01

    © 2015 Society for Industrial and Applied Mathematics. The solution of quadratic or locally quadratic extremum problems subject to linear(ized) constraints gives rise to linear systems in saddle point form. This is true whether in the continuous or the discrete setting, so saddle point systems arising from the discretization of partial differential equation problems, such as those describing electromagnetic problems or incompressible flow, lead to equations with this structure, as do, for example, interior point methods and the sequential quadratic programming approach to nonlinear optimization. This survey concerns iterative solution methods for these problems and, in particular, shows how the problem formulation leads to natural preconditioners which guarantee a fast rate of convergence of the relevant iterative methods. These preconditioners are related to the original extremum problem and their effectiveness - in terms of rapidity of convergence - is established here via a proof of general bounds on the eigenvalues of the preconditioned saddle point matrix on which iteration convergence depends.

  15. Solving Singular Two-Point Boundary Value Problems Using Continuous Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Omar Abu Arqub

    2012-01-01

    In this paper, the continuous genetic algorithm is applied to the solution of singular two-point boundary value problems, where smooth solution curves are used throughout the evolution of the algorithm to obtain the required nodal values. The proposed technique might be considered as a variation of the finite difference method in the sense that each of the derivatives is replaced by an appropriate difference quotient approximation. This novel approach possesses several advantages: it can be applied without any limitation on the nature of the problem, the type of singularity, and the number of mesh points. Numerical examples are included to demonstrate the accuracy, applicability, and generality of the presented technique. The results reveal that the algorithm is very effective, straightforward, and simple.

  16. Discrete Approximations of Determinantal Point Processes on Continuous Spaces: Tree Representations and Tail Triviality

    Science.gov (United States)

    Osada, Hirofumi; Osada, Shota

    2018-01-01

    We prove tail triviality of determinantal point processes μ on continuous spaces. Tail triviality has been proved for such processes only on discrete spaces, and hence we have generalized the result to continuous spaces. To do this, we construct tree representations, that is, discrete approximations of determinantal point processes enjoying a determinantal structure. There are many interesting examples of determinantal point processes on continuous spaces such as zero points of the hyperbolic Gaussian analytic function with Bergman kernel, and the thermodynamic limit of eigenvalues of Gaussian random matrices for the Sine₂, Airy₂, Bessel₂, and Ginibre point processes. Our main theorem proves all these point processes are tail trivial.

  17. The endogenous grid method for discrete-continuous dynamic choice models with (or without) taste shocks

    DEFF Research Database (Denmark)

    Iskhakov, Fedor; Jørgensen, Thomas H.; Rust, John

    2017-01-01

    We present a fast and accurate computational method for solving and estimating a class of dynamic programming models with discrete and continuous choice variables. The solution method we develop for structural estimation extends the endogenous grid-point method (EGM) to discrete-continuous (DC) p...
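
    For readers unfamiliar with the endogenous grid-point method the abstract builds on, a minimal EGM iteration for a plain continuous consumption-savings problem (CRRA utility, deterministic income, no discrete choice and no taste shocks, so none of the paper's extensions) is sketched below. The key step is inverting the Euler equation on an exogenous grid of end-of-period assets, which avoids root-finding; borrowing constraints are ignored for brevity and all parameter values are illustrative.

```python
import numpy as np

# Illustrative primitives: CRRA utility, deterministic income, gross return R
gamma, beta, R, y = 2.0, 0.96, 1.03, 1.0
a_grid = np.linspace(1e-6, 20.0, 200)         # exogenous end-of-period asset grid

def egm_step(m_grid, c_policy):
    """One backward EGM step: invert the Euler equation u'(c) = beta*R*u'(c')
    on the asset grid, then read off the endogenous cash-on-hand grid m = a + c."""
    m_next = R * a_grid + y                                    # next-period resources
    c_next = np.interp(m_next, m_grid, c_policy)               # next-period consumption
    c_now = (beta * R * c_next ** (-gamma)) ** (-1.0 / gamma)  # inverted Euler equation
    return a_grid + c_now, c_now                               # endogenous grid, policy

# Iterate to (approximate) convergence from the "consume everything" policy
m_grid, c_policy = a_grid + y, a_grid + y
for _ in range(500):
    m_grid, c_policy = egm_step(m_grid, c_policy)
print("consumption at cash-on-hand m = 5:", round(float(np.interp(5.0, m_grid, c_policy)), 4))
```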

  18. Novel TPPO Based Maximum Power Point Method for Photovoltaic System

    Directory of Open Access Journals (Sweden)

    ABBASI, M. A.

    2017-08-01

    Photovoltaic (PV) systems have great potential and are nowadays installed more than other renewable energy sources. However, a PV system cannot perform optimally due to its strong dependence on climate conditions. Because of this dependency, the PV system does not operate at its maximum power point (MPP). Many MPP tracking methods have been proposed for this purpose. One of these is the Perturb and Observe (P&O) method, which is the most popular due to its simplicity, low cost and fast tracking. However, it deviates from the MPP under continuously changing weather conditions, especially rapidly changing irradiance. A new Maximum Power Point Tracking (MPPT) method, Tetra Point Perturb and Observe (TPPO), has been proposed to improve PV system performance under changing irradiance conditions, and the effects of varying irradiance on the characteristic curves of the PV array module are delineated. The proposed MPPT method has shown better results in increasing the efficiency of a PV system.
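
    The abstract's starting point, the classic Perturb and Observe tracker, is easy to state in code. The sketch below is plain P&O, not the proposed TPPO variant (whose details are not given in the abstract); measure_pv and set_voltage are assumed hardware-interface callbacks, not real library functions.

```python
def perturb_and_observe(measure_pv, set_voltage, v_start=30.0, dv=0.5, steps=200):
    """Classic P&O maximum power point tracking loop (hill climbing).

    measure_pv()       -> (voltage, current) of the PV array   [assumed interface]
    set_voltage(v_ref) -> sends a new operating-voltage reference to the converter
    """
    v_ref = v_start
    set_voltage(v_ref)
    v, i = measure_pv()
    p_prev = v * i
    for _ in range(steps):
        v, i = measure_pv()
        p = v * i
        # If the last perturbation reduced power, reverse direction; otherwise keep
        # going. The operating point oscillates around the MPP and can lose track
        # under rapidly changing irradiance -- the weakness TPPO is aimed at.
        if p < p_prev:
            dv = -dv
        v_ref += dv
        set_voltage(v_ref)
        p_prev = p
    return v_ref
```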

  19. New deconvolution method for microscopic images based on the continuous Gaussian radial basis function interpolation model.

    Science.gov (United States)

    Chen, Zhaoxue; Chen, Hao

    2014-01-01

    A deconvolution method based on Gaussian radial basis function (GRBF) interpolation is proposed. Both the original image and the Gaussian point spread function are expressed in the same continuous GRBF model, so image degradation is simplified to the convolution of two continuous Gaussian functions, and image deconvolution is converted into calculating the weighting coefficients of the two-dimensional control points. Compared with the Wiener filter and the Lucy-Richardson algorithm, the GRBF method has an obvious advantage in the quality of restored images. To overcome the drawback of long computation time, graphics processing unit multithreading or an increased spacing of the control points is adopted to speed up the implementation of the GRBF method. The experiments show that, based on the continuous GRBF model, image deconvolution can be efficiently implemented by the method, which also has considerable reference value for the study of three-dimensional microscopic image deconvolution.
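
    The core identity the method relies on, that the convolution of two Gaussians is again a Gaussian with summed variances, reduces deconvolution to a linear problem in the control-point weights. The 1-D sketch below illustrates this; the basis spacing, widths, noise level and small ridge term are all illustrative choices, not the paper's settings.

```python
import numpy as np

def gaussian(x, centre, sigma):
    return np.exp(-0.5 * ((x - centre) / sigma) ** 2)

x = np.linspace(0.0, 10.0, 400)
centres = np.linspace(0.0, 10.0, 21)                # GRBF control points
sigma_rbf, sigma_psf = 0.3, 0.5                     # basis width, (known) Gaussian PSF width
sigma_blur = np.hypot(sigma_rbf, sigma_psf)         # Gaussian convolved with Gaussian -> wider Gaussian

A_sharp = gaussian(x[:, None], centres[None, :], sigma_rbf)
A_blur = gaussian(x[:, None], centres[None, :], sigma_blur)

# Synthetic object expressed in the GRBF model, observed through the PSF with noise
w_true = np.zeros(len(centres))
w_true[6], w_true[13] = 3.0, 1.5
truth = A_sharp @ w_true
observed = A_blur @ w_true + np.random.default_rng(0).normal(0.0, 0.01, x.size)

# Deconvolution = linear least squares for the weights (small ridge for stability)
ridge = 1e-6 * np.eye(len(centres))
w = np.linalg.solve(A_blur.T @ A_blur + ridge, A_blur.T @ observed)
restored = A_sharp @ w                              # deblurred image from the same weights
print("RMSE of restored image vs. ground truth:", np.sqrt(np.mean((restored - truth) ** 2)))
```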

  20. Continuous Extraction of Subway Tunnel Cross Sections Based on Terrestrial Point Clouds

    Directory of Open Access Journals (Sweden)

    Zhizhong Kang

    2014-01-01

    An efficient method for the continuous extraction of subway tunnel cross sections using terrestrial point clouds is proposed. First, the continuous central axis of the tunnel is extracted using a 2D projection of the point cloud and curve fitting with the RANSAC (RANdom SAmple Consensus) algorithm, and the axis is optimized using a global extraction strategy based on segment-wise fitting. The cross-sectional planes, which are orthogonal to the central axis, are then determined for every interval. The cross-sectional points are extracted by intersecting straight lines, rotated orthogonally around the central axis within the cross-sectional plane, with the tunnel point cloud. An interpolation algorithm based on quadric parametric surface fitting, using the BaySAC (Bayesian SAmpling Consensus) algorithm, is proposed to compute a cross-sectional point when it cannot be acquired directly from the tunnel points along the extraction direction of interest. Because the standard shape of the tunnel cross section is a circle, circle fitting is implemented using RANSAC to reduce the noise. The proposed approach is tested on terrestrial point clouds that cover a 150-m-long segment of a Shanghai subway tunnel, which were acquired using a LMS VZ-400 laser scanner. The results indicate that the proposed quadric parametric surface fitting using the optimized BaySAC achieves a higher overall fitting accuracy (0.9 mm) than that obtained by the plain RANSAC (1.6 mm). The results also show that the proposed cross-section extraction algorithm can achieve high accuracy (millimeter level), assessed by comparing the fitted radii with the designed radius of the cross section and by comparing corresponding chord lengths in different cross sections, and high efficiency (less than 3 s/section on average).
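
    The final circle-fitting stage can be illustrated with a minimal RANSAC around an exact three-point circle fit, applied to one synthetic cross section contaminated by outliers (cables, brackets, noise). The quadric surface interpolation and the BaySAC variant used in the paper are omitted; the thresholds and the design radius below are illustrative.

```python
import numpy as np

def circle_from_3pts(p1, p2, p3):
    """Centre and radius of the circle through three 2-D points (circumcircle)."""
    ax, ay = p1; bx, by = p2; cx, cy = p3
    d = 2.0 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    if abs(d) < 1e-12:                    # collinear sample: no circle
        return None
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay) + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx) + (cx**2 + cy**2) * (bx - ax)) / d
    centre = np.array([ux, uy])
    return centre, np.linalg.norm(p1 - centre)

def ransac_circle(pts, n_iter=500, tol=0.003, seed=2):
    """Keep the circle hypothesis with the largest inlier set."""
    rng = np.random.default_rng(seed)
    best, best_inliers = None, 0
    for _ in range(n_iter):
        model = circle_from_3pts(*pts[rng.choice(len(pts), 3, replace=False)])
        if model is None:
            continue
        centre, radius = model
        inliers = np.sum(np.abs(np.linalg.norm(pts - centre, axis=1) - radius) < tol)
        if inliers > best_inliers:
            best, best_inliers = model, inliers
    return best

# Synthetic cross section: a 2.75 m radius circle with noise plus gross outliers
rng = np.random.default_rng(3)
theta = rng.uniform(0, 2 * np.pi, 800)
pts = 2.75 * np.column_stack([np.cos(theta), np.sin(theta)]) + rng.normal(0, 0.001, (800, 2))
pts = np.vstack([pts, rng.uniform(-3, 3, (80, 2))])       # cables, pipes, stray returns
centre, radius = ransac_circle(pts)
print(f"fitted radius = {radius:.4f} m (design radius 2.75 m)")
```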

  1. Electrically continuous graphene from single crystal copper verified by terahertz conductance spectroscopy and micro four-point probe

    DEFF Research Database (Denmark)

    Buron, Jonas Christian Due; Pizzocchero, Filippo; Jessen, Bjarke Sørensen

    2014-01-01

    We compare the nano- and microscale electrical continuity of single layer graphene grown on centimeter-sized single crystal copper with that of previously studied graphene films, grown on commercially available copper foil, after transfer to SiO2 surfaces. The electrical continuity of the graphene films is analyzed using two noninvasive conductance characterization methods: ultrabroadband terahertz time-domain spectroscopy and micro four-point probe, which probe the electrical properties of the graphene film on different length scales, 100 nm and 10 μm, respectively. Ultrabroadband terahertz time-domain spectroscopy allows ... Micro four-point probe resistance values measured on graphene grown on single crystalline copper in two different voltage-current configurations show close agreement with the expected distributions for a continuous 2D conductor, in contrast with previous observations on graphene grown on commercial...

  2. Analysis of fatigue resistance of continuous and non-continuous welded rectangular frame intersections by finite element method

    International Nuclear Information System (INIS)

    McCoy, M. L.; Moradi, R.; Lankarani, H. M.

    2011-01-01

    Agricultural and construction equipment commonly use rectangular tubing in their structural frame designs. A typical joining method for fabricating these frames is welding, with ancillary structural plating at the connections. This allows two continuous members to pass through an intersection point of the frame with some degree of connectivity, but the connections are highly unbalanced because the tubing centroids are asymmetric. Owing to the practice of welding continuous-member frame intersections in current agricultural equipment designs, a conviction may exist that welded continuous-member frames are superior in structural strength to frame intersections implementing welded non-continuous members whose tubing centroids lie within two planes of symmetry, a connection design that would likely produce a more fatigue-resistant structural frame. Three types of welded continuous-tubing frame intersections currently observed in the designs of agricultural equipment were compared with two non-continuous frame intersection designs. Each design was subjected to the same loading condition and then examined for stress levels using the Finite Element Method to predict fatigue life. Results demonstrated that a lighter-weight, non-continuous member frame intersection design was two orders of magnitude superior in fatigue resistance to some currently implemented frame designs when using Stress-Life fatigue prediction methods and empirical fatigue strengths for fillet welds. Stress-Life predictions were also made using theoretical fatigue strength calculations at the welds for comparison with the empirically derived weld fatigue strength.

  3. CONTINUOUSLY DEFORMATION MONITORING OF SUBWAY TUNNEL BASED ON TERRESTRIAL POINT CLOUDS

    Directory of Open Access Journals (Sweden)

    Z. Kang

    2012-07-01

    The deformation monitoring of subway tunnels is of extraordinary necessity. Therefore, a method for deformation monitoring based on terrestrial point clouds is proposed in this paper. First, the traditional adjacent-station registration is replaced by section-controlled registration, so that common control points can be used by each station and error accumulation within a section is thus avoided. Afterwards, the central axis of the subway tunnel is determined through the RANSAC (Random Sample Consensus) algorithm and curve fitting. Although of very high resolution, laser points are still discrete, and thus the vertical section is computed via quadric fitting of the vicinity of interest, instead of fitting the whole model of the subway tunnel; the section is determined by the intersection line rotated about the central axis of the tunnel within a vertical plane. The extraction of the vertical section is then optimized using RANSAC for the purpose of filtering out noise. Based on the extracted vertical sections, the volume of tunnel deformation is estimated by comparing vertical sections extracted at the same position from different epochs of point clouds. Furthermore, the continuously extracted vertical sections are deployed to evaluate the convergent tendency of the tunnel. The proposed algorithms are verified using real datasets in terms of accuracy and computational efficiency. The experimental result of the fitting accuracy analysis shows that the maximum deviation between interpolated points and real points is 1.5 mm, and the minimum is 0.1 mm; the convergent tendency of the tunnel was detected by comparing adjacent fitting radii. The maximum error is 6 mm, while the minimum is 1 mm. The computation cost of vertical section extraction is within 3 seconds/section, which proves high efficiency.

  4. Experimental Tracking of Limit-Point Bifurcations and Backbone Curves Using Control-Based Continuation

    Science.gov (United States)

    Renson, Ludovic; Barton, David A. W.; Neild, Simon A.

    Control-based continuation (CBC) is a means of applying numerical continuation directly to a physical experiment for bifurcation analysis without the use of a mathematical model. CBC enables the detection and tracking of bifurcations directly, without the need for a post-processing stage as is often the case for more traditional experimental approaches. In this paper, we use CBC to directly locate limit-point bifurcations of a periodically forced oscillator and track them as forcing parameters are varied. Backbone curves, which capture the overall frequency-amplitude dependence of the system’s forced response, are also traced out directly. The proposed method is demonstrated on a single-degree-of-freedom mechanical system with a nonlinear stiffness characteristic. Results are presented for two configurations of the nonlinearity — one where it exhibits a hardening stiffness characteristic and one where it exhibits softening-hardening.

  5. Post-Processing in the Material-Point Method

    DEFF Research Database (Denmark)

    Andersen, Søren; Andersen, Lars Vabbersgaard

    The material-point method (MPM) is a numerical method for dynamic or static analysis of solids using a discretization in time and space. The method has shown to be successful in modelling physical problems involving large deformations, which are difficult to model with traditional numerical tools such as the finite element method. In the material-point method, a set of material points is utilized to track the problem in time and space, while a computational background grid is utilized to obtain spatial derivatives relevant to the physical problem. Currently, the research within the material-point method ... The first idea involves associating a volume with each material point and displaying the deformation of this volume. In the discretization process, the physical domain is divided into a number of smaller volumes each represented by a simple shape; here quadrilaterals are chosen for the presented...

  6. A Meshfree Cell-based Smoothed Point Interpolation Method for Solid Mechanics Problems

    International Nuclear Information System (INIS)

    Zhang Guiyong; Liu Guirong

    2010-01-01

    In the framework of a weakened weak (W²) formulation using a generalized gradient smoothing operation, this paper introduces a novel meshfree cell-based smoothed point interpolation method (CS-PIM) for solid mechanics problems. The W² formulation seeks solutions from a normed G space which includes both continuous and discontinuous functions and allows the use of many more types of methods to create shape functions for numerical methods. When PIM shape functions are used, the functions constructed are in general not continuous over the entire problem domain and hence are not compatible. Such an interpolation is not in a traditional H¹ space, but in a G¹ space. By introducing the generalized gradient smoothing operation properly, the requirement on the function is further weakened beyond the already weakened requirement for functions in an H¹ space, and a G¹ space can be viewed as a space of functions with a weakened weak (W²) requirement on continuity. The cell-based smoothed point interpolation method (CS-PIM) is formulated based on the W² formulation, in which the displacement field is approximated using the PIM shape functions, which possess the Kronecker delta property facilitating the enforcement of essential boundary conditions [3]. The gradient (strain) field is constructed by the generalized gradient smoothing operation within the cell-based smoothing domains, which are exactly the triangular background cells. A W² formulation of the generalized smoothed Galerkin (GS-Galerkin) weak form is used to derive the discretized system equations. It was found that the CS-PIM possesses the following attractive properties: (1) it is very easy to implement and works well with the simplest linear triangular mesh without introducing additional degrees of freedom; (2) it is at least linearly conforming; (3) this method is temporally stable and works well for dynamic analysis; (4) it possesses a close-to-exact stiffness, which is much softer than the overly-stiff FEM model and

  7. LEAP: Looking beyond pixels with continuous-space EstimAtion of Point sources

    Science.gov (United States)

    Pan, Hanjie; Simeoni, Matthieu; Hurley, Paul; Blu, Thierry; Vetterli, Martin

    2017-12-01

    Context. Two main classes of imaging algorithms have emerged in radio interferometry: the CLEAN algorithm and its multiple variants, and compressed-sensing inspired methods. They are both discrete in nature, and estimate source locations and intensities on a regular grid. For the traditional CLEAN-based imaging pipeline, the resolution power of the tool is limited by the width of the synthesized beam, which is inversely proportional to the largest baseline. The finite rate of innovation (FRI) framework is a robust method to find the locations of point-sources in a continuum without grid imposition. The continuous formulation makes the FRI recovery performance only dependent on the number of measurements and the number of sources in the sky. FRI can theoretically find sources below the perceived tool resolution. To date, FRI had never been tested in the extreme conditions inherent to radio astronomy: weak signal / high noise, huge data sets, large numbers of sources. Aims: The aims were (i) to adapt FRI to radio astronomy, (ii) verify it can recover sources in radio astronomy conditions with more accurate positioning than CLEAN, and possibly resolve some sources that would otherwise be missed, (iii) show that sources can be found using less data than would otherwise be required to find them, and (iv) show that FRI does not lead to an augmented rate of false positives. Methods: We implemented a continuous domain sparse reconstruction algorithm in Python. The angular resolution performance of the new algorithm was assessed under simulation, and with visibility measurements from the LOFAR telescope. Existing catalogs were used to confirm the existence of sources. Results: We adapted the FRI framework to radio interferometry, and showed that it is possible to determine accurate off-grid point-source locations and their corresponding intensities. In addition, FRI-based sparse reconstruction required less integration time and smaller baselines to reach a comparable

  8. A New Iterative Method for Equilibrium Problems and Fixed Point Problems

    Directory of Open Access Journals (Sweden)

    Abdul Latif

    2013-01-01

    Introducing a new iterative method, we study the existence of a common element of the set of solutions of equilibrium problems for a family of monotone, Lipschitz-type continuous mappings and the sets of fixed points of two nonexpansive semigroups in a real Hilbert space. We establish strong convergence theorems of the new iterative method for the solution of the variational inequality problem which is the optimality condition for the minimization problem. Our results improve and generalize the corresponding recent results of Anh (2012), Cianciaruso et al. (2010), and many others.

  9. Voltage stability, bifurcation parameters and continuation methods

    Energy Technology Data Exchange (ETDEWEB)

    Alvarado, F L [Wisconsin Univ., Madison, WI (United States)

    1994-12-31

    This paper considers the importance of the choice of bifurcation parameter in the determination of the voltage stability limit and the maximum power loadability of a system. When the bifurcation parameter is power demand, the two limits are equivalent. However, when other types of load models and bifurcation parameters are considered, the two concepts differ. The continuation method is considered as a method for determination of voltage stability margins. Three variants of the continuation method are described: (1) the continuation parameter is the bifurcation parameter; (2) the continuation parameter is initially the bifurcation parameter, but is free to change; and (3) the continuation parameter is a new 'arc length' parameter. Implementations of voltage stability software using continuation methods are described. (author) 23 refs., 9 figs.
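
    The third variant listed above, arc-length continuation, is sketched below on a scalar toy equation f(x, λ) = x² + λ − 1 = 0, whose solution branch folds at λ = 1 (a stand-in for the nose of a PV curve, with λ playing the role of the power-demand bifurcation parameter). This is a generic pseudo-arclength predictor-corrector written for illustration, not the software described in the paper.

```python
import numpy as np

def f(x, lam):
    """Toy equilibrium equation with a fold (saddle-node) at lam = 1."""
    return x**2 + lam - 1.0

def pseudo_arclength_trace(x0, lam0, ds=0.05, n_steps=60):
    """Trace the solution branch of f(x, lam) = 0 past the fold by treating
    arc length as the continuation parameter (predictor plus Newton corrector)."""
    branch = [(x0, lam0)]
    tx, tlam = 0.0, 1.0                                  # initial tangent: along lam
    x, lam = x0, lam0
    for _ in range(n_steps):
        xp, lamp = x + ds * tx, lam + ds * tlam          # predictor step
        for _ in range(20):                              # Newton corrector
            r1 = f(xp, lamp)
            r2 = tx * (xp - x) + tlam * (lamp - lam) - ds  # arclength constraint
            J = np.array([[2.0 * xp, 1.0],               # [df/dx, df/dlam]
                          [tx, tlam]])
            dxl = np.linalg.solve(J, -np.array([r1, r2]))
            xp, lamp = xp + dxl[0], lamp + dxl[1]
            if np.hypot(dxl[0], dxl[1]) < 1e-10:
                break
        tx, tlam = xp - x, lamp - lam                    # new tangent from the secant
        norm = np.hypot(tx, tlam)
        tx, tlam = tx / norm, tlam / norm
        x, lam = xp, lamp
        branch.append((x, lam))
    return np.array(branch)

branch = pseudo_arclength_trace(x0=-1.0, lam0=0.0)
fold = branch[np.argmax(branch[:, 1])]
print(f"fold (maximum loadability) near lam = {fold[1]:.3f}, x = {fold[0]:.3f}")
```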

  10. Summary statistics for end-point conditioned continuous-time Markov chains

    DEFF Research Database (Denmark)

    Hobolth, Asger; Jensen, Jens Ledet

    Continuous-time Markov chains are a widely used modelling tool. Applications include DNA sequence evolution, ion channel gating behavior and mathematical finance. We consider the problem of calculating properties of summary statistics (e.g. mean time spent in a state, mean number of jumps between two states and the distribution of the total number of jumps) for discretely observed continuous time Markov chains. Three alternative methods for calculating properties of summary statistics are described and the pros and cons of the methods are discussed. The methods are based on (i) an eigenvalue decomposition of the rate matrix, (ii) the uniformization method, and (iii) integrals of matrix exponentials. In particular we develop a framework that allows for analyses of rather general summary statistics using the uniformization method.
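
    Method (iii) above, integrals of matrix exponentials, can be illustrated for one summary statistic: the expected time spent in a state c on [0, T] given the end points, which equals ∫_0^T P_ac(t) P_cb(T−t) dt / P_ab(T). The sketch below evaluates the integral by simple quadrature with scipy.linalg.expm; the rate matrix is invented for illustration, and the three occupancies should sum to T as a sanity check.

```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import trapezoid

# Illustrative 3-state rate matrix (rows sum to zero)
Q = np.array([[-1.0, 0.7, 0.3],
              [ 0.4, -0.9, 0.5],
              [ 0.2, 0.6, -0.8]])

def expected_time_in_state(Q, a, b, c, T, n_grid=400):
    """E[ time spent in state c on [0, T] | X(0) = a, X(T) = b ], computed by
    quadrature of P_ac(t) * P_cb(T - t) and normalised by P_ab(T)."""
    ts = np.linspace(0.0, T, n_grid)
    vals = np.array([expm(Q * t)[a, c] * expm(Q * (T - t))[c, b] for t in ts])
    return trapezoid(vals, ts) / expm(Q * T)[a, b]

T = 2.0
occ = [expected_time_in_state(Q, a=0, b=2, c=c, T=T) for c in range(3)]
print("conditional state occupancies:", np.round(occ, 4), "| sum =", round(sum(occ), 4), "(should equal T)")
```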

  11. Pilot points method for conditioning multiple-point statistical facies simulation on flow data

    Science.gov (United States)

    Ma, Wei; Jafarpour, Behnam

    2018-05-01

    We propose a new pilot points method for conditioning discrete multiple-point statistical (MPS) facies simulation on dynamic flow data. While conditioning MPS simulation on static hard data is straightforward, calibration against nonlinear flow data is nontrivial. The proposed method generates conditional models from a conceptual model of geologic connectivity, known as a training image (TI), by strategically placing and estimating pilot points. To place pilot points, a score map is generated based on three sources of information: (i) the uncertainty in facies distribution, (ii) the model response sensitivity information, and (iii) the observed flow data. Once the pilot points are placed, the facies values at these points are inferred from production data and then are used, along with available hard data at well locations, to simulate a new set of conditional facies realizations. While facies estimation at the pilot points can be performed using different inversion algorithms, in this study the ensemble smoother (ES) is adopted to update permeability maps from production data, which are then used to statistically infer facies types at the pilot point locations. The developed method combines the information in the flow data and the TI by using the former to infer facies values at selected locations away from the wells and the latter to ensure consistent facies structure and connectivity away from measurement locations. Several numerical experiments are used to evaluate the performance of the developed method and to discuss its important properties.

  12. Interior-Point Methods for Linear Programming: A Review

    Science.gov (United States)

    Singh, J. N.; Singh, D.

    2002-01-01

    The paper reviews some recent advances in interior-point methods for linear programming and indicates directions in which future progress can be made. Most of the interior-point methods belong to any of three categories: affine-scaling methods, potential reduction methods and central path methods. These methods are discussed together with…
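
    Of the three categories mentioned, affine scaling is the simplest to sketch. The code below runs a bare primal affine-scaling iteration on a tiny standard-form LP; it is purely illustrative (no presolve, no infeasibility or unboundedness handling, fixed step fraction), not a production interior-point solver, and the LP itself is invented.

```python
import numpy as np

def affine_scaling_lp(A, b, c, x0, alpha=0.9, tol=1e-8, max_iter=200):
    """Primal affine-scaling interior-point iteration for
    min c.x  subject to  A x = b, x > 0, starting from an interior feasible x0."""
    x = x0.astype(float)
    for _ in range(max_iter):
        D2 = np.diag(x**2)                              # scaling by the current iterate
        w = np.linalg.solve(A @ D2 @ A.T, A @ D2 @ c)   # dual estimate
        r = c - A.T @ w                                 # reduced costs
        dx = -D2 @ r                                    # descent step in the scaled space
        if np.all(r >= -tol) and abs(x @ r) < tol:      # approximate optimality
            break
        step = alpha / max(np.max(-dx / x), 1e-16)      # stay strictly positive
        x = x + step * dx
    return x

# Tiny LP: max x1 + 2*x2  s.t.  x1 + x2 <= 4,  x1 + 3*x2 <= 6  (slacks s1, s2)
A = np.array([[1.0, 1.0, 1.0, 0.0],
              [1.0, 3.0, 0.0, 1.0]])
b = np.array([4.0, 6.0])
c = np.array([-1.0, -2.0, 0.0, 0.0])
x = affine_scaling_lp(A, b, c, x0=np.array([1.0, 1.0, 2.0, 2.0]))
print("x1, x2 ≈", np.round(x[:2], 4), "| objective ≈", round(float(-c @ x), 4))
```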

  13. Analytic continuation of massless two-loop four-point functions

    International Nuclear Information System (INIS)

    Gehrmann, T.; Remiddi, E.

    2002-01-01

    We describe the analytic continuation of two-loop four-point functions with one off-shell external leg and internal massless propagators from the Euclidean region of space-like 1→3 decay to Minkowskian regions relevant to all 1→3 and 2→2 reactions with one space-like or time-like off-shell external leg. Our results can be used to derive two-loop master integrals and unrenormalized matrix elements for hadronic vector-boson-plus-jet production and deep inelastic two-plus-one-jet production, from results previously obtained for three-jet production in electron-positron annihilation. (author)

  14. Dew Point Calibration System Using a Quartz Crystal Sensor with a Differential Frequency Method.

    Science.gov (United States)

    Lin, Ningning; Meng, Xiaofeng; Nie, Jing

    2016-11-18

    In this paper, the influence of temperature on quartz crystal microbalance (QCM) sensor response during dew point calibration is investigated. The aim is to present a compensation method to eliminate the temperature impact on frequency acquisition. A new sensitive structure is proposed with double QCMs. One is kept in contact with the environment, whereas the other is not exposed to the atmosphere. There is a thermal conductivity silicone pad between each crystal and a refrigeration device to keep a uniform temperature condition. A differential frequency method is described in detail and is applied to calibrate the frequency characteristics of the QCM at the dew point of -3.75 °C. It is worth noting that the frequency changes of the two QCMs were approximately opposite when the temperature conditions were changed simultaneously. The results from continuous experiments show that the frequencies of the two QCMs at the moment the dew point is reached have strong consistency and high repeatability, leading to the conclusion that the sensitive structure can calibrate dew points with high reliability.

  15. Selective Integration in the Material-Point Method

    DEFF Research Database (Denmark)

    Andersen, Lars; Andersen, Søren; Damkilde, Lars

    2009-01-01

    The paper deals with stress integration in the material-point method. In order to avoid parasitic shear in bending, a formulation is proposed, based on selective integration in the background grid that is used to solve the governing equations. The suggested integration scheme is compared to a traditional material-point-method computation in which the stresses are evaluated at the material points. The deformation of a cantilever beam is analysed, assuming elastic or elastoplastic material behaviour.

  16. End-point detection in potentiometric titration by continuous wavelet transform.

    Science.gov (United States)

    Jakubowska, Małgorzata; Baś, Bogusław; Kubiak, Władysław W

    2009-10-15

    The aim of this work was the construction of a new wavelet function and verification that a continuous wavelet transform with a specially defined, dedicated mother wavelet is a useful tool for precise detection of the end-point in a potentiometric titration. The proposed algorithm does not require any initial information about the nature or the type of analyte and/or the shape of the titration curve. Signal imperfections, as well as random noise or spikes, have no influence on the operation of the procedure. The optimization of the new algorithm was done using simulated curves, and then experimental data were considered. In the case of well-shaped and noise-free titration data, the proposed method gives the same accuracy and precision as commonly used algorithms. However, in the case of noisy or badly shaped curves, the presented approach works well (relative error mainly below 2% and coefficients of variability below 5%) while traditional procedures fail. Therefore, the proposed algorithm may be useful in the interpretation of experimental data and also in the automation of typical titration analysis, especially when random noise interferes with the analytical signal.
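
    The paper's purpose-built mother wavelet is not given in the abstract, so the sketch below uses a generic derivative-of-Gaussian wavelet to convey the idea: convolve the titration curve with the wavelet at several scales and take the location of the strongest summed response as the end-point (inflection) estimate. The synthetic curve, the scales and the edge-masking margin are all illustrative choices.

```python
import numpy as np

def dog_wavelet(scale):
    """First derivative of a Gaussian, sampled on +-4*scale, unit energy."""
    t = np.arange(-4 * scale, 4 * scale + 1, dtype=float)
    w = -t * np.exp(-0.5 * (t / scale) ** 2)
    return w / np.linalg.norm(w)

def cwt_endpoint(volume, potential, scales=(4, 8, 16)):
    """Titrant volume where the summed multi-scale wavelet response peaks."""
    signal = potential - potential.mean()
    response = np.zeros_like(signal)
    for s in scales:
        response += np.abs(np.convolve(signal, dog_wavelet(s), mode="same"))
    margin = 4 * max(scales)                      # suppress convolution edge effects
    response[:margin] = response[-margin:] = 0.0
    return volume[np.argmax(response)]

# Synthetic sigmoidal titration curve with noise and a spike artefact
v = np.linspace(0.0, 20.0, 400)
emf = 300.0 / (1.0 + np.exp(-(v - 12.3) * 4.0))
emf += np.random.default_rng(4).normal(0.0, 2.0, v.size)
emf[100] += 40.0                                  # a spike that foils naive differentiation
print(f"estimated end-point: {cwt_endpoint(v, emf):.2f} mL (true value 12.30 mL)")
```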

  17. Advanced continuous cultivation methods for systems microbiology.

    Science.gov (United States)

    Adamberg, Kaarel; Valgepea, Kaspar; Vilu, Raivo

    2015-09-01

    Increasing the throughput of systems biology-based experimental characterization of in silico-designed strains has great potential for accelerating the development of cell factories. For this, analysis of metabolism in the steady state is essential as only this enables the unequivocal definition of the physiological state of cells, which is needed for the complete description and in silico reconstruction of their phenotypes. In this review, we show that for a systems microbiology approach, high-resolution characterization of metabolism in the steady state--growth space analysis (GSA)--can be achieved by using advanced continuous cultivation methods termed changestats. In changestats, an environmental parameter is continuously changed at a constant rate within one experiment whilst maintaining cells in the physiological steady state similar to chemostats. This increases the resolution and throughput of GSA compared with chemostats, and, moreover, enables following of the dynamics of metabolism and detection of metabolic switch-points and optimal growth conditions. We also describe the concept, challenge and necessary criteria of the systematic analysis of steady-state metabolism. Finally, we propose that such systematic characterization of the steady-state growth space of cells using changestats has value not only for fundamental studies of metabolism, but also for systems biology-based metabolic engineering of cell factories.

  18. Continuous acid dew point measurement in coal-fired power plants; Kontinuierliche Saeuretaupunktmessung in Braunkohlekraftwerken

    Energy Technology Data Exchange (ETDEWEB)

    Foedisch, Holger; Schulz, Joerg; Schengber, Petra; Dietrich, Gabriele [Dr. Foedisch Umweltmesstechnik AG, Markranstaedt (Germany)

    2009-07-01

    The reduction of flue gas losses is one option for increasing power plant efficiency; the target is an optimally low waste gas temperature. When firing lignite and other high-sulphur fuels, the minimum flue gas temperature is mainly determined by the acid dew point: the temperature of the flue gas system should be some 10 to 20 K above the assumed acid dew point. The acid dew point measuring system AMD 08 is able to detect the real acid dew point in a quasi-continuous way, making it possible to deliberately lower the waste gas temperature. (orig.)

  19. Method Points: towards a metric for method complexity

    Directory of Open Access Journals (Sweden)

    Graham McLeod

    1998-11-01

    A metric for method complexity is proposed as an aid to choosing between competing methods, as well as to validating the effects of method integration or the products of method engineering work. It is based upon a generic method representation model previously developed by the author and an adaptation of concepts used in the popular Function Point metric for system size. The proposed technique is illustrated by comparing two popular I.E. deliverables with counterparts in the object-oriented Unified Modeling Language (UML). The paper recommends ways to improve the practical adoption of new methods.

  20. Dew Point Calibration System Using a Quartz Crystal Sensor with a Differential Frequency Method

    Directory of Open Access Journals (Sweden)

    Ningning Lin

    2016-11-01

    In this paper, the influence of temperature on quartz crystal microbalance (QCM) sensor response during dew point calibration is investigated. The aim is to present a compensation method to eliminate the temperature impact on frequency acquisition. A new sensitive structure is proposed with double QCMs. One is kept in contact with the environment, whereas the other is not exposed to the atmosphere. There is a thermal conductivity silicone pad between each crystal and a refrigeration device to keep a uniform temperature condition. A differential frequency method is described in detail and is applied to calibrate the frequency characteristics of the QCM at the dew point of −3.75 °C. It is worth noting that the frequency changes of the two QCMs were approximately opposite when the temperature conditions were changed simultaneously. The results from continuous experiments show that the frequencies of the two QCMs at the moment the dew point is reached have strong consistency and high repeatability, leading to the conclusion that the sensitive structure can calibrate dew points with high reliability.

  1. Quantifying regional cerebral blood flow with N-isopropyl-p-[123I]iodoamphetamine and SPECT by one-point sampling method

    International Nuclear Information System (INIS)

    Odano, Ikuo; Takahashi, Naoya; Noguchi, Eikichi; Ohtaki, Hiro; Hatano, Masayoshi; Yamazaki, Yoshihiro; Higuchi, Takeshi; Ohkubo, Masaki.

    1994-01-01

    We developed a new non-invasive technique, a one-point sampling method, for the quantitative measurement of regional cerebral blood flow (rCBF) with N-isopropyl-p-[123I]iodoamphetamine (123I-IMP) and SPECT. Although continuous withdrawal of arterial blood and octanol treatment of the blood are required in the conventional microsphere method, the new technique does not require these two procedures. The total activity of 123I-IMP obtained by continuous withdrawal of arterial blood is inferred from the activity of 123I-IMP in a one-point arterial sample using a regression line. To determine the optimum one-point sampling time for inferring the integral input function of the continuous withdrawal, and whether octanol treatment of the sampled blood was required, we examined the correlation between the total activity of arterial blood withdrawn from 0 to 5 min after the injection and the activity of a one-point sample obtained at time t, and calculated a regression line. As a result, the minimum % error for the inference using the regression line was obtained at 6 min after the 123I-IMP injection; moreover, the octanol treatment was not required. Examining the effect on rCBF values when the sampling time deviated from 6 min, we could correct the values to within approximately 3% error when the sample was obtained at 6±1 min after the injection. The one-point sampling method provides accurate and relatively non-invasive measurement of rCBF without octanol extraction of arterial blood. (author)

  2. The cross-over points in lattice gauge theories with continuous gauge groups

    International Nuclear Information System (INIS)

    Cvitanovic, P.; Greensite, J.; Lautrup, B.

    1981-01-01

    We obtain a closed expression for the weak-to-strong coupling cross-over point in all Wilson type lattice gauge theories with continuous gauge groups. We use a weak-coupling expansion of the mean-field self-consistency equation. In all cases where our results can be compared with Monte Carlo calculations the agreement is excellent. (orig.)

  3. High-precision terahertz frequency modulated continuous wave imaging method using continuous wavelet transform

    Science.gov (United States)

    Zhou, Yu; Wang, Tianyi; Dai, Bing; Li, Wenjun; Wang, Wei; You, Chengwu; Wang, Kejia; Liu, Jinsong; Wang, Shenglie; Yang, Zhengang

    2018-02-01

    Inspired by the extensive application of terahertz (THz) imaging technologies in the field of aerospace, we exploit a THz frequency modulated continuous-wave imaging method with a continuous wavelet transform (CWT) algorithm to detect a multilayer heat shield made of special materials. This method uses the frequency modulated continuous-wave system to capture the reflected THz signal and then processes the image data by the CWT with different basis functions. By calculating the sizes of the defect areas in the final images and then comparing the results with the real samples, a practical high-precision THz imaging method is demonstrated. Our method can be an effective tool for the THz nondestructive testing of composites, drugs, and some cultural heritage objects.

  4. A continuation method for emission tomography

    International Nuclear Information System (INIS)

    Lee, M.; Zubal, I.G.

    1993-01-01

    One approach to improved reconstructions in emission tomography has been the incorporation of additional source information via Gibbs priors that assume a source f that is piecewise smooth. A natural Gibbs prior for expressing such constraints is an energy function E(f,l) defined on binary valued line processes l as well as f. MAP estimation leads to the difficult problem of minimizing a mixed (continuous and binary) variable objective function. Previous approaches have used Gibbs 'potential' functions, φ(f_v) and φ(f_h), defined solely on spatial derivatives, f_v and f_h, of the source. These φ functions implicitly incorporate line processes, but only in an approximate manner. The correct φ function, φ*, consistent with the use of line processes, leads to difficult minimization problems. In this work, the authors present a method wherein the correct φ* function is approached through a sequence of smooth φ functions. This is the essence of a continuation method in which the minimum of the energy function corresponding to one member of the φ function sequence is used as an initial condition for the minimization of the next, less approximate, stage. The continuation method is implemented using a GEM-ICM procedure. Simulation results show improvement using the continuation method relative to using φ* alone, and to conventional EM reconstructions.

  5. Curvature-Continuous 3D Path-Planning Using QPMI Method

    Directory of Open Access Journals (Sweden)

    Seong-Ryong Chang

    2015-06-01

    It is impossible to achieve vertex movement and rapid velocity control in aerial robots and aerial vehicles because of momentum from the air. A continuous-curvature path ensures that such robots and vehicles can fly with stable and continuous movements. General continuous path-planning methods use spline interpolation, for example B-spline and Bézier curves. However, these methods cannot be directly applied to continuous path planning in a 3D space. These methods use a subset of the waypoints to decide curvature, and some waypoints are not included in the planned path. This paper proposes a method for constructing a curvature-continuous path in 3D space that includes every waypoint. The movements in each axis, x, y and z, are separated by the parameter u. Waypoint groups are formed, each with its own continuous path derived using quadratic polynomial interpolation. A membership function then combines the individual continuous paths into one continuous path. The continuity of the path is verified and the curvature-continuous path is produced using the proposed method.
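
    A rough 3-D sketch of the idea described above: fit a quadratic (in the parameter u) to every group of three consecutive waypoints on each axis, then blend neighbouring quadratics with a smooth membership function. Because a quadratic through three points interpolates them exactly, the blended path passes through every waypoint. The quintic smoothstep used here as the membership function is an assumption (the paper's specific function is not given in the abstract), chosen because its vanishing first and second derivatives at the ends make the blend curvature-continuous; qpmi_like_path is an illustrative name, not the paper's implementation.

```python
import numpy as np

def smooth_blend(t):
    """Quintic smoothstep: zero 1st and 2nd derivatives at t = 0 and t = 1,
    which is what makes the blended path curvature-continuous."""
    return 6 * t**5 - 15 * t**4 + 10 * t**3

def qpmi_like_path(waypoints, samples_per_segment=20):
    """Blend quadratics fitted to overlapping waypoint triples, axis by axis,
    as a function of the parameter u = 0, 1, 2, ... (one unit per waypoint)."""
    wp = np.asarray(waypoints, dtype=float)
    n, dim = wp.shape
    u_nodes = np.arange(n, dtype=float)
    # one exact quadratic per triple of consecutive waypoints, per axis
    quads = [np.array([np.polyfit(u_nodes[k:k + 3], wp[k:k + 3, d], 2) for d in range(dim)])
             for k in range(n - 2)]
    segments = []
    for i in range(n - 1):                              # segment between waypoints i and i+1
        u = np.linspace(i, i + 1, samples_per_segment, endpoint=(i == n - 2))
        left = quads[max(i - 1, 0)]                     # quadratic whose triple ends at waypoint i+1
        right = quads[min(i, n - 3)]                    # quadratic whose triple starts at waypoint i
        w = smooth_blend(u - i)
        segments.append(np.column_stack(
            [(1 - w) * np.polyval(left[d], u) + w * np.polyval(right[d], u) for d in range(dim)]))
    return np.vstack(segments)

waypoints = [(0, 0, 0), (1, 2, 1), (3, 3, 0), (4, 1, 2), (6, 0, 1)]
path = qpmi_like_path(waypoints)
print("samples:", path.shape, "| hits the last waypoint:", bool(np.allclose(path[-1], waypoints[-1])))
```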

  6. Fixed-point data-collection method of video signal

    International Nuclear Information System (INIS)

    Tang Yu; Yin Zejie; Qian Weiming; Wu Xiaoyi

    1997-01-01

    The author describes a Fixed-point data-collection method of video signal. The method provides an idea of fixed-point data-collection, and has been successfully applied in the research of real-time radiography on dose field, a project supported by National Science Fund

  7. Standard Test Methods for Insulation Integrity and Ground Path Continuity of Photovoltaic Modules

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2000-01-01

    1.1 These test methods cover procedures for (1) testing for current leakage between the electrical circuit of a photovoltaic module and its external components while a user-specified voltage is applied and (2) for testing for possible module insulation breakdown (dielectric voltage withstand test). 1.2 A procedure is described for measuring the insulation resistance between the electrical circuit of a photovoltaic module and its external components (insulation resistance test). 1.3 A procedure is provided for verifying that electrical continuity exists between the exposed external conductive surfaces of the module, such as the frame, structural members, or edge closures, and its grounding point (ground path continuity test). 1.4 This test method does not establish pass or fail levels. The determination of acceptable or unacceptable results is beyond the scope of this test method. 1.5 There is no similar or equivalent ISO standard. This standard does not purport to address all of the safety concerns, if a...

  8. A new comparison method for dew-point generators

    Science.gov (United States)

    Heinonen, Martti

    1999-12-01

    A new method for comparing dew-point generators was developed at the Centre for Metrology and Accreditation. In this method, the generators participating in a comparison are compared with a transportable saturator unit using a dew-point comparator. The method was tested by constructing a test apparatus and by comparing it with the MIKES primary dew-point generator several times in the dew-point temperature range from -40 to +75 °C. The expanded uncertainty (k = 2) of the apparatus was estimated to be between 0.05 and 0.07 °C and the difference between the comparator system and the generator is well within these limits. In particular, all of the results obtained in the range below 0 °C are within ±0.03 °C. It is concluded that a new type of a transfer standard with characteristics most suitable for dew-point comparisons can be developed on the basis of the principles presented in this paper.

  9. Analysis of relationship between registration performance of point cloud statistical model and generation method of corresponding points

    International Nuclear Information System (INIS)

    Yamaoka, Naoto; Watanabe, Wataru; Hontani, Hidekata

    2010-01-01

    When constructing a statistical point cloud model, corresponding points usually have to be calculated, and the resulting statistical model differs depending on the method used to calculate them. This article examines the effect that different methods of calculating corresponding points have on statistical models of human organs. We validated the performance of the statistical models by registering an organ surface in a 3D medical image. Two methods of calculating corresponding points are compared. The first, Generalized Multi-Dimensional Scaling (GMDS), determines the corresponding points from the shapes of two curved surfaces. The second, the entropy-based particle system, chooses corresponding points by statistically analyzing a number of curved surfaces. With these methods we construct the statistical models, and using these models we conducted registration with the medical image. For the estimation we use non-parametric belief propagation, which estimates not only the position of the organ but also the probability density of the organ position. We evaluate how the two methods of calculating corresponding points affect the statistical model through the change in the probability density of each point. (author)

  10. Analysis of Stress Updates in the Material-point Method

    DEFF Research Database (Denmark)

    Andersen, Søren; Andersen, Lars

    2009-01-01

    The material-point method (MPM) is a new numerical method for analysis of large strain engineering problems. The MPM applies a dual formulation, where the state of the problem (mass, stress, strain, velocity etc.) is tracked using a finite set of material points while the governing equations...... are solved on a background computational grid. Several references state, that one of the main advantages of the material-point method is the easy application of complicated material behaviour as the constitutive response is updated individually for each material point. However, as discussed here, the MPM way...

  11. THE GROWTH POINTS OF STATISTICAL METHODS

    OpenAIRE

    Orlov A. I.

    2014-01-01

    On the basis of a new paradigm of applied mathematical statistics, data analysis and economic-mathematical methods are identified, and five topical areas in which modern applied statistics is developing, five "growth points", are discussed: nonparametric statistics, robustness, computer-statistical methods, statistics of interval data, and statistics of non-numeric data

  12. Material-Point Method Analysis of Bending in Elastic Beams

    DEFF Research Database (Denmark)

    Andersen, Søren Mikkel; Andersen, Lars

    2007-01-01

    The aim of this paper is to test different types of spatial interpolation for the material-point method. The interpolations include quadratic elements and cubic splines. A brief introduction to the material-point method is given. Simple linear-elastic problems are tested, including the classical...... cantilevered beam problem. As shown in the paper, the use of negative shape functions is not consistent with the material-point method in its current form, necessitating other types of interpolation such as cubic splines in order to obtain smoother representations of field quantities. It is shown...

  13. Development of Quadratic Programming Algorithm Based on Interior Point Method with Estimation Mechanism of Active Constraints

    Science.gov (United States)

    Hashimoto, Hiroyuki; Takaguchi, Yusuke; Nakamura, Shizuka

    Instability of the calculation process and growth of the calculation time with increasing size of the continuous optimization problem remain the major issues to be solved before the technique can be applied to practical industrial systems. This paper proposes an enhanced quadratic programming algorithm based on the interior point method, aimed mainly at improving calculation stability. The proposed method has a dynamic estimation mechanism for active constraints on variables, which fixes the variables getting close to their upper/lower limits and afterwards releases the fixed ones as needed during the optimization process. It can be considered an algorithm-level integration of the solution strategy of the active-set method into the interior point method framework. We describe numerical results on the commonly used benchmark problems called “CUTEr” to show the effectiveness of the proposed method. Furthermore, test results on a large-scale ELD problem (Economic Load Dispatching in electric power supply scheduling) are also described as a practical industrial application.

  14. Extensions of vector-valued Baire one functions with preservation of points of continuity

    Czech Academy of Sciences Publication Activity Database

    Koc, M.; Kolář, Jan

    2016-01-01

    Roč. 442, č. 1 (2016), s. 138-148 ISSN 0022-247X R&D Projects: GA ČR(CZ) GA14-07880S Institutional support: RVO:67985840 Keywords : vector-valued Baire one functions * extensions * non-tangential limit * continuity points Subject RIV: BA - General Mathematics Impact factor: 1.064, year: 2016 http://www.sciencedirect.com/science/article/pii/S0022247X1630097X

  15. IMAGE TO POINT CLOUD METHOD OF 3D-MODELING

    Directory of Open Access Journals (Sweden)

    A. G. Chibunichev

    2012-07-01

    Full Text Available This article describes a method for constructing 3D models of objects (buildings, monuments) based on digital images and a point cloud obtained by a terrestrial laser scanner. The first step is the automated determination of the exterior orientation parameters of a digital image, which requires corresponding points between the image and the point cloud. Before the search for corresponding points, a quasi-image of the point cloud is generated. The SIFT algorithm is then applied to the quasi-image and the real image to find corresponding points, from which the exterior orientation parameters of the image are calculated. The second step is the construction of the vector object model. Vectorization is performed by the operator in an interactive mode using a single image, and the spatial coordinates of the model are calculated automatically from the cloud points. In addition, automatic edge detection with interactive editing is available: edges are detected on the point cloud and on the image, with subsequent identification of the correct edges. Experimental studies of the method have demonstrated its efficiency for building facade modeling.
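
    The correspondence step described above can be sketched with standard tools: SIFT keypoints are matched between the real image and the quasi-image, and the matches feed the orientation solution. In the illustration below the quasi-image is simulated by warping a synthetic image, and a recovered homography stands in for the exterior-orientation computation that, in the actual method, uses the 3D coordinates behind the quasi-image pixels; all data are synthetic assumptions, and opencv-python with SIFT is assumed to be available.

```python
import cv2
import numpy as np

# Synthetic "real image": blurred random texture, and a warped copy standing in for the
# quasi-image rendered from the point cloud.
rng = np.random.default_rng(0)
base = cv2.GaussianBlur(rng.random((480, 640)).astype(np.float32), (0, 0), 3)
img = cv2.normalize(base, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
H_true = np.array([[1.0, 0.02, 15.0], [-0.02, 1.0, -8.0], [0.0, 0.0, 1.0]])
quasi = cv2.warpPerspective(img, H_true, (640, 480))

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img, None)
kp2, des2 = sift.detectAndCompute(quasi, None)
matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.7 * n.distance]   # Lowe ratio test

src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
H_est, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)   # stand-in for the orientation step
print(f"{len(good)} matches; recovered transform:\n", np.round(H_est, 3))
```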

  16. C-point and V-point singularity lattice formation and index sign conversion methods

    Science.gov (United States)

    Kumar Pal, Sushanta; Ruchi; Senthilkumaran, P.

    2017-06-01

    The generic singularities in an ellipse field are C-points, namely stars, lemons and monstars in a polarization distribution, with C-point indices (-1/2), (+1/2) and (+1/2) respectively. Similar to C-point singularities, there are V-point singularities that occur in a vector field and are characterized by a Poincaré-Hopf index of integer values. In this paper we show that the superposition of three homogeneously polarized beams in different linear states leads to the formation of a polarization singularity lattice. Three point sources at the focal plane of the lens are used to create three interfering plane waves. A radial/azimuthal polarization converter (S-wave plate) placed near the focal plane modulates the polarization states of the three beams. The interference pattern is found to host C-points and V-points in a hexagonal lattice. The C-points occur at intensity maxima and V-points occur at intensity minima. Modulating the state of polarization (SOP) of the three plane waves from radial to azimuthal does not essentially change the nature of the polarization singularity lattice, as the Poincaré-Hopf index for both radial and azimuthal polarization distributions is (+1). Hence a transformation from a star to a lemon is not trivial, as such a transformation requires not a single SOP change, but a change in the whole spatial SOP distribution. Further there is no change in the lattice structure and the C- and V-points appear at locations where they were present earlier. Hence to convert an interlacing star and V-point lattice into an interlacing lemon and V-point lattice, the interferometer requires modification. We show for the first time a method to change the polarity of C-point and V-point indices. This means that lemons can be converted into stars and stars can be converted into lemons. Similarly the positive V-point can be converted to a negative V-point and vice versa. The intensity distribution in all these lattices is invariant as the SOPs of the three beams are changed in an

  17. Probabilistic Power Flow Method Considering Continuous and Discrete Variables

    Directory of Open Access Journals (Sweden)

    Xuexia Zhang

    2017-04-01

    Full Text Available This paper proposes a probabilistic power flow (PPF) method considering continuous and discrete variables (continuous and discrete power flow, CDPF) for power systems. The proposed method, based on the cumulant method (CM) and multiple deterministic power flow (MDPF) calculations, can deal with continuous variables such as wind power generation (WPG) and loads, and discrete variables such as fuel cell generation (FCG). In this paper, continuous variables follow a normal distribution (loads) or a non-normal distribution (WPG), and discrete variables follow a binomial distribution (FCG). Through testing on IEEE 14-bus and IEEE 118-bus power systems, the proposed method (CDPF) has better accuracy compared with the CM, and higher efficiency compared with the Monte Carlo simulation method (MCSM).

  18. Strike Point Control on EAST Using an Isoflux Control Method

    International Nuclear Information System (INIS)

    Xing Zhe; Xiao Bingjia; Luo Zhengping; Walker, M. L.; Humphreys, D. A.

    2015-01-01

    For the advanced tokamak, the particle deposition and thermal load on the divertor are a major challenge. By moving the strike points on the divertor target plates, the position of particle deposition and thermal load can be shifted. The poloidal field (PF) coil currents can be adjusted to achieve feedback control of the strike point position. Using the isoflux control method, the strike point position can be controlled by controlling the X-point position. On the basis of experimental data, we establish relational expressions between the X-point position and the strike point position. Benchmark experiments are carried out to validate the correctness and robustness of the control methods. The strike point position is successfully controlled to follow our command in EAST operation. (paper)

  19. Novel Ratio Subtraction and Isoabsorptive Point Methods for ...

    African Journals Online (AJOL)

    Purpose: To develop and validate two innovative spectrophotometric methods used for the simultaneous determination of ambroxol hydrochloride and doxycycline in their binary mixture. Methods: Ratio subtraction and isoabsorptive point methods were used for the simultaneous determination of ambroxol hydrochloride ...

  20. Automated and continuously operating acid dew point measuring instrument for flue gases

    Energy Technology Data Exchange (ETDEWEB)

    Reckmann, D.; Naundorf, G.

    1986-06-01

    The design and operation of a sulfuric acid dew point indicator for continuous flue gas temperature control are explained. The indicator operated successfully in trial tests over several years with brown coal, gas and oil combustion in a measurement range of 60 to 180 °C. The design is regarded as uncomplicated and easy to manufacture. Its operating principle is based on electric conductivity measurement on a surface on which sulfuric acid vapor has condensed. A ring electrode and a PtRh/Pt thermal element as central electrode are employed. A scheme of the equipment design is provided. The accuracy of the indicator was compared to manual dew point sondes manufactured by Degussa and showed a maximum deviation of 5 °C. Manual cleaning after a number of weeks of operation is required. Fly ash with a high lime content increases dust buildup and requires more frequent cleaning cycles.

  1. Nonparametric Change Point Diagnosis Method of Concrete Dam Crack Behavior Abnormality

    Directory of Open Access Journals (Sweden)

    Zhanchao Li

    2013-01-01

    Full Text Available The diagnosis of abnormal concrete dam crack behavior has long been a hot and difficult topic in the safety monitoring of hydraulic structures. Based on how concrete dam crack behavior abnormality manifests itself in parametric and nonparametric statistical models, the internal relation between crack behavior abnormality and statistical change point theory is analyzed in depth from two angles: the structural instability of the parametric statistical model and the change in the sequence distribution law of the nonparametric statistical model. On this basis, through the reduction of the change point problem, the establishment of a basic nonparametric change point model, and an asymptotic analysis of the test method for the basic change point problem, a nonparametric change point diagnosis method for concrete dam crack behavior abnormality is created that allows for the fact that, in practice, crack behavior may have more than one abnormality point. The method is applied to an actual project, demonstrating its effectiveness and scientific reasonableness. It has a complete theoretical basis and strong practicality, with broad application prospects in actual projects.
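
    As a concrete illustration of the nonparametric change-point idea invoked above, the sketch below applies a Pettitt-type rank statistic to a synthetic crack-opening series with a single shift. It is a generic illustration of rank-based change-point detection, not the diagnosis model developed in the paper, and the series and threshold are assumptions.

```python
import numpy as np

# Synthetic crack-opening series with one shift in mean at index 120.
rng = np.random.default_rng(2)
series = np.concatenate([rng.normal(0.20, 0.02, 120),
                         rng.normal(0.26, 0.02, 80)])

n = series.size
ranks = series.argsort().argsort() + 1                       # ranks 1..n
U = np.array([2 * ranks[:t].sum() - t * (n + 1) for t in range(1, n)])
t_hat = int(np.argmax(np.abs(U))) + 1                        # most likely change point
K = np.abs(U).max()
p_approx = 2.0 * np.exp(-6.0 * K ** 2 / (n ** 3 + n ** 2))   # Pettitt's approximate p-value
print(f"estimated change point at index {t_hat}, approximate p-value {p_approx:.3g}")
```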

  2. NREL Patents Method for Continuous Monitoring of Materials During Manufacturing

    Science.gov (United States)

    News Release: NREL Patents Method for Continuous Monitoring of Materials During Manufacturing. NREL's Energy Systems Integration Facility (ESIF). More information, including the published patent, can

  3. Pointing Verification Method for Spaceborne Lidars

    Directory of Open Access Journals (Sweden)

    Axel Amediek

    2017-01-01

    Full Text Available High precision acquisition of atmospheric parameters from the air or space by means of lidar requires accurate knowledge of laser pointing. Discrepancies between the assumed and actual pointing can introduce large errors due to the Doppler effect or a wrongly assumed air pressure at ground level. In this paper, a method for precisely quantifying these discrepancies for airborne and spaceborne lidar systems is presented. The method is based on the comparison of ground elevations derived from the lidar ranging data with high-resolution topography data obtained from a digital elevation model and allows for the derivation of the lateral and longitudinal deviation of the laser beam propagation direction. The applicability of the technique is demonstrated by using experimental data from an airborne lidar system, confirming that geo-referencing of the lidar ground spot trace with an uncertainty of less than 10 m with respect to the used digital elevation model (DEM can be obtained.

  4. Continuously deformation monitoring of subway tunnel based on terrestrial point clouds

    NARCIS (Netherlands)

    Kang, Z.; Tuo, L.; Zlatanova, S.

    2012-01-01

    The deformation monitoring of subway tunnels is of extraordinary necessity. Therefore, a method for deformation monitoring based on terrestrial point clouds is proposed in this paper. First, the traditional adjacent-station registration is replaced by section-controlled registration, so that the

  5. TUNNEL POINT CLOUD FILTERING METHOD BASED ON ELLIPTIC CYLINDRICAL MODEL

    Directory of Open Access Journals (Sweden)

    N. Zhu

    2016-06-01

    Full Text Available The large number of bolts and screws attached to the subway shield ring plates, along with the many accessories such as metal stents and electrical equipment mounted on the tunnel walls, cause the laser point cloud data to include many points that do not belong to the tunnel section (hereinafter referred to as non-points), which affects the accuracy of modeling and deformation monitoring. This paper proposes a filtering method for the point cloud based on an elliptic cylindrical model. The original laser point cloud data is first projected onto a horizontal plane, and a searching algorithm extracts the edge points of both sides, which are then used to fit the tunnel central axis. Along the axis the point cloud is segmented regionally and then fitted iteratively as a smooth elliptic cylindrical surface. This processing enables the automatic filtering of the non-points on the inner wall. Two groups of experiments showed consistent results: the method based on the elliptic cylindrical model can effectively filter out the non-points and meet the accuracy requirements for subway deformation monitoring. The method provides a new mode for the periodic monitoring of the all-around deformation of tunnel sections in routine subway operation and maintenance.
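
    A much-simplified 2D sketch of the fit-and-filter idea is given below: the points of one cross-section are fitted to an axis-aligned ellipse by linear least squares, and points far from the fitted ellipse (bolts, brackets, cables) are discarded iteratively. The synthetic section, the axis-aligned ellipse model and the tolerance are assumptions; the actual method fits an elliptic cylinder along the extracted tunnel axis.

```python
import numpy as np

# Synthetic cross-section: an elliptical tunnel wall plus interior clutter (bolts, stents).
rng = np.random.default_rng(3)
theta = rng.uniform(0.0, 2.0 * np.pi, 2000)
wall = np.column_stack([2.75 * np.cos(theta), 2.55 * np.sin(theta)])
wall += 0.005 * rng.standard_normal(wall.shape)
clutter = rng.uniform(-2.2, 2.2, (200, 2))
pts = np.vstack([wall, clutter])

keep = np.ones(len(pts), bool)
for _ in range(5):                                           # iterate fit -> filter
    x, y = pts[keep, 0], pts[keep, 1]
    M = np.column_stack([x ** 2, y ** 2, x, y])              # A*x^2 + B*y^2 + C*x + D*y = 1
    A, B, C, D = np.linalg.lstsq(M, np.ones_like(x), rcond=None)[0]
    xc, yc = -C / (2 * A), -D / (2 * B)
    R = 1 + A * xc ** 2 + B * yc ** 2
    a, b = np.sqrt(R / A), np.sqrt(R / B)                    # fitted semi-axes
    rho = np.sqrt(((pts[:, 0] - xc) / a) ** 2 + ((pts[:, 1] - yc) / b) ** 2)
    keep = np.abs(rho - 1.0) < 0.02                          # keep points close to the ellipse

print(f"fitted semi-axes a = {a:.2f}, b = {b:.2f}; kept {keep.sum()} of {len(pts)} points")
```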

  6. LSHSIM: A Locality Sensitive Hashing based method for multiple-point geostatistics

    Science.gov (United States)

    Moura, Pedro; Laber, Eduardo; Lopes, Hélio; Mesejo, Daniel; Pavanelli, Lucas; Jardim, João; Thiesen, Francisco; Pujol, Gabriel

    2017-10-01

    Reservoir modeling is a very important task that permits the representation of a geological region of interest, so as to generate a considerable number of possible scenarios. Since its inception, many methodologies have been proposed and, in the last two decades, multiple-point geostatistics (MPS) has been the dominant one. This methodology is strongly based on the concept of a training image (TI) and the use of its characteristics, which are called patterns. In this paper, we propose a new MPS method that combines a technique called Locality Sensitive Hashing (LSH), which accelerates the search for patterns similar to a target one, with a Run-Length Encoding (RLE) compression technique that speeds up the calculation of the Hamming similarity. Experiments with both categorical and continuous images show that LSHSIM is computationally efficient and produces good-quality realizations. In particular, for categorical data, the results suggest that LSHSIM is faster than MS-CCSIM, one of the state-of-the-art methods.

  7. Genealogical series method. Hyperpolar points screen effect

    International Nuclear Information System (INIS)

    Gorbatov, A.M.

    1991-01-01

    The fundamental quantities of the genealogical series method, the genealogical integrals (sandwiches), have been investigated. The hyperpolar-point screen effect has been found. It allows the sandwiches to be calculated for fermion systems with a large number of particles and the validity of the iterated-potential method to be ascertained. For the first time, the genealogical series method has been realized numerically for a central spin-independent potential

  8. The Purification Method of Matching Points Based on Principal Component Analysis

    Directory of Open Access Journals (Sweden)

    DONG Yang

    2017-02-01

    Full Text Available The traditional purification method for matching points usually uses a small number of the points as the initial input. Though it can meet most of the point-constraint requirements, the iterative purification solution easily falls into a local extremum, which results in the loss of correct matching points. To solve this problem, we introduce a principal component analysis method that uses the whole point set as the initial input. Through stepwise elimination of mismatching points and robust solving, a more accurate global optimal solution can be obtained, which reduces the omission rate of correct matching points and thus achieves a better purification effect. Experimental results show that this method can obtain the global optimal solution under a certain original false matching rate, and can decrease or avoid the omission of correct matching points.

  9. A dynamic method for continuously measuring magnetic field profiles in planar micropole undulators with submillimeter gaps

    International Nuclear Information System (INIS)

    Tatchyn, R.; Oregon Univ., Eugene

    1989-01-01

    Conventional techniques for measuring magnetic field profiles in ordinary undulators rely predominantly on Hall probes for making point-by-point static measurements. As undulators with submillimeter periods and gaps become available, such techniques will become untenable, owing to the relatively large size of conventional Hall probe heads and the rapidly increasing number of periods in devices of fixed length. In this paper a method is presented which can rapidly map out field profiles in undulators with periods and gaps extending down to the 100 μm range and beyond. The method, which samples the magnetic field continuously, has been used successfully in profiling a recently constructed 726 μm period undulator, and seems to offer some potential advantages over conventional Hall probe techniques in measuring large-scale undulator fields as well. (orig.)

  10. Source splitting via the point source method

    International Nuclear Information System (INIS)

    Potthast, Roland; Fazi, Filippo M; Nelson, Philip A

    2010-01-01

    We introduce a new algorithm for source identification and field splitting based on the point source method (Potthast R 1998 A point-source method for inverse acoustic and electromagnetic obstacle scattering problems IMA J. Appl. Math. 61 119-40; Potthast R 1996 A fast new method to solve inverse scattering problems Inverse Problems 12 731-42). The task is to separate the sound fields u_j, j = 1, ..., n, of n ∈ N sound sources supported in different bounded domains G_1, ..., G_n in R^3 from measurements of the field on some microphone array, mathematically speaking from the knowledge of the sum of the fields u = u_1 + ... + u_n on some open subset Λ of a plane. The main idea of the scheme is to calculate filter functions g_1, ..., g_n, n ∈ N, to construct u_l for l = 1, ..., n from u|_Λ in the form u_l(x) = ∫_Λ g_{l,x}(y) u(y) ds(y), l = 1, ..., n. (1) We will provide the complete mathematical theory for the field splitting via the point source method. In particular, we describe uniqueness, solvability of the problem and convergence and stability of the algorithm. In the second part we describe the practical realization of the splitting for real data measurements carried out at the Institute for Sound and Vibration Research at Southampton, UK. A practical demonstration of the original recording and the splitting results for real data is available online
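
    Once the filter functions have been computed, equation (1) reduces, on a sampled aperture, to a weighted sum over the microphone measurements. The snippet below only illustrates that discrete quadrature with placeholder arrays; computing the actual filter kernels g_{l,x} is the substance of the point source method and is not reproduced here.

```python
import numpy as np

# Placeholder arrays: u_meas samples u on the aperture, g holds precomputed filter values
# g_{l,x}(y) for one reconstruction point x and two sources l = 1, 2.
rng = np.random.default_rng(4)
n_mic, ds = 64, 0.02                              # number of samples and surface element
u_meas = rng.standard_normal(n_mic) + 1j * rng.standard_normal(n_mic)
g = rng.standard_normal((2, n_mic)) + 1j * rng.standard_normal((2, n_mic))

u_split = g @ u_meas * ds                         # u_l(x) ≈ Σ_y g_{l,x}(y) u(y) Δs
print("u_1(x) =", u_split[0], "  u_2(x) =", u_split[1])
```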

  11. Subsidence and Fault Displacement Along the Long Point Fault Derived from Continuous GPS Observations (2012-2017)

    Science.gov (United States)

    Tsibanos, V.; Wang, G.

    2017-12-01

    The Long Point Fault, located in Houston, Texas, is a complex system of normal faults which causes significant damage to urban infrastructure on both private and public property. This case study focuses on the 20-km-long fault using high-accuracy continuously operating global positioning satellite (GPS) stations to delineate fault movement over five years (2012 - 2017). The Long Point Fault is the longest active fault in the greater Houston area; it damages roads, buried pipes, concrete structures and buildings and creates a financial burden for the city of Houston and the residents who live in close vicinity to the fault trace. In order to monitor fault displacement along the surface, 11 permanent and continuously operating GPS stations were installed: 6 on the hanging wall and 5 on the footwall. This study is an overview of the GPS observations from 2013 to 2017. GPS positions were processed with both relative (double differencing) and absolute Precise Point Positioning (PPP) techniques. The PPP solutions, referred to the IGS08 reference frame, were transformed to the Stable Houston Reference Frame (SHRF16). Our results show no considerable horizontal displacement across the fault, but do show uneven vertical displacement attributed to regional subsidence in the range of 5 - 10 mm/yr. This subsidence can be attributed to compaction of silty clays in the Chicot and Evangeline aquifers, whose water depths are approximately 50 m and 80 m below the land surface (bls). These levels are below the regional pre-consolidation head, which is about 30 to 40 m bls. Recent research indicates subsidence will continue to occur until the aquifer levels reach the pre-consolidation head. With further GPS observations, both the Long Point Fault and regional land subsidence can be monitored, providing important geological data to the Houston community.

  12. Method to Minimize the Low-Frequency Neutral-Point Voltage Oscillations With Time-Offset Injection for Neutral-Point-Clamped Inverters

    DEFF Research Database (Denmark)

    Choi, Ui-Min; Blaabjerg, Frede; Lee, Kyo-Beum

    2015-01-01

    This paper proposes a method to reduce the low-frequency neutral-point voltage oscillations. The neutral-point voltage oscillations are considerably reduced by adding a time offset to the three-phase turn-on times. The proper time offset is simply calculated considering the phase currents and dwell time of small- and medium-voltage vectors. However, if the power factor is lower, there is a limitation to eliminate neutral-point oscillations. In this case, the proposed method can be improved by changing the switching sequence properly. Additionally, a method for neutral-point voltage balancing......

  13. An Estimating Method of Contractile State Changes Come From Continuous Isometric Contraction of Skeletal Muscle

    Energy Technology Data Exchange (ETDEWEB)

    Park, H.J.; Lee, S.J. [Wonkwang University, Iksan (Korea)

    2003-01-01

    In this study, a new method is proposed for estimating the changes in contractile state generated by continuous isometric contraction of skeletal muscle. The physiological changes (EMG, ECG) and the psychological changes mediated by the CNS (central nervous system) were measured in experiments while the muscles of the subjects contracted continuously and isometrically under a constant load. The psychological changes were represented as a three-step change, named 'fatigue', 'pain' and 'sick (great pain)', based on an oral test, and a method was developed that compares the physiological changes with the psychological changes on the basis of these three steps. Analysis of the physiological signals showed that changes in the EMG and ECG signals were observed in the vicinity of the points in time at which the psychological state was judged to change. In other words, the contractile state appears to follow a three-state pattern (stable, fatigue, pain) rather than a two-state pattern (stable, fatigue). (author). 24 refs., 7 figs.

  14. Material-point Method Analysis of Bending in Elastic Beams

    DEFF Research Database (Denmark)

    Andersen, Søren Mikkel; Andersen, Lars

    The aim of this paper is to test different types of spatial interpolation for the material-point method. The interpolations include quadratic elements and cubic splines. A brief introduction to the material-point method is given. Simple linear-elastic problems are tested, including the classical...... cantilevered beam problem. As shown in the paper, the use of negative shape functions is not consistent with the material-point method in its current form, necessitating other types of interpolation such as cubic splines in order to obtain smoother representations of field quantities. It is shown...

  15. A generalized endogenous grid method for discrete-continuous choice

    OpenAIRE

    John Rust; Bertel Schjerning; Fedor Iskhakov

    2012-01-01

    This paper extends Carroll's endogenous grid method (2006, "The method of endogenous gridpoints for solving dynamic stochastic optimization problems", Economics Letters) to models with sequential discrete and continuous choice. Unlike existing generalizations, we propose a solution algorithm that inherits both advantages of the original method: it avoids all root-finding operations and also deals efficiently with restrictions on the continuous decision variable. To further speed up the s...

  16. Second derivative continuous linear multistep methods for the ...

    African Journals Online (AJOL)

    step methods (LMM), with properties that embed the characteristics of LMM and hybrid methods. This paper gives a continuous reformulation of the Enright [5] second derivative methods. The motivation lies in the fact that the new formulation ...

  17. Taylor's series method for solving the nonlinear point kinetics equations

    International Nuclear Information System (INIS)

    Nahla, Abdallah A.

    2011-01-01

    Highlights: → Taylor's series method for nonlinear point kinetics equations is applied. → The general order of derivatives are derived for this system. → Stability of Taylor's series method is studied. → Taylor's series method is A-stable for negative reactivity. → Taylor's series method is an accurate computational technique. - Abstract: Taylor's series method for solving the point reactor kinetics equations with multi-group of delayed neutrons in the presence of Newtonian temperature feedback reactivity is applied and programmed in FORTRAN. This system is a set of coupled stiff nonlinear ordinary differential equations. This numerical method is based on the different-order derivatives of the neutron density, the precursor concentrations of the i-th group of delayed neutrons and the reactivity. The r-th order derivatives are derived. The stability of Taylor's series method is discussed. Three sets of applications: step, ramp and temperature feedback reactivities are computed. Taylor's series method is an accurate computational technique and stable for negative step, negative ramp and temperature feedback reactivities. This method is more useful than the traditional methods for solving the nonlinear point kinetics equations.
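
    The sketch below shows the core of a Taylor-series step for the point kinetics equations in the simplest setting the abstract covers: six delayed groups, a constant step reactivity and no temperature feedback, so that the higher time derivatives follow recursively from the equations themselves. The kinetic parameters, step size and expansion order are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Illustrative six-group kinetic parameters (not from the paper) and a +0.5 $ step.
beta_i = np.array([2.1e-4, 1.4e-3, 1.3e-3, 2.6e-3, 7.5e-4, 2.7e-4])
lam_i = np.array([0.0124, 0.0305, 0.111, 0.301, 1.14, 3.01])
beta, Lam = beta_i.sum(), 5.0e-5
rho = 0.5 * beta

def taylor_step(n, C, dt, order=6):
    """Advance (n, C) by dt with an order-th degree Taylor expansion (constant rho)."""
    n_new, C_new = n, C.copy()
    dn, dC = n, C.copy()                      # k-th derivatives, starting at k = 0
    fact = 1.0
    for k in range(1, order + 1):
        dn, dC = ((rho - beta) / Lam * dn + lam_i @ dC,
                  beta_i / Lam * dn - lam_i * dC)
        fact *= k
        n_new += dn * dt ** k / fact
        C_new += dC * dt ** k / fact
    return n_new, C_new

n, C = 1.0, beta_i / (Lam * lam_i)            # critical steady state with n(0) = 1
t, dt = 0.0, 1.0e-3
while t < 1.0 - 1e-12:
    n, C = taylor_step(n, C, dt)
    t += dt
print(f"n(1 s) / n(0) = {n:.4f}")
```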

  18. Direct continuous multichannel γ-spectrometric measurements - one of the main methods for control and study of radioactive environmental pollution

    International Nuclear Information System (INIS)

    Khitrov, L.M.; Rumiantsev, O.V.

    1991-01-01

    At Chernobyl, methods and equipment for direct continuous multichannel measurements were used along with the usual methods of environmental radiation control. The necessary equipment was installed both at permanent observation stations (the river Pripyat, Chernobyl, the river Dnieper, Kiev) and on mobile units (helicopters, research river-boats, automobiles). Together with continuous control of the radiation situation and its assessment in time and space, this equipment made it possible to: determine the time-spatial structure of radioactive pollution at stationary points and over areas (mapping); select representative samples for subsequent radionuclide analysis; and input data directly into the computer for storage and data-base creation. The results and conclusions drawn are important not only for the situation at the Chernobyl nuclear power station - they can and should be used for continuous radioactive monitoring of the environment, though the method and its realization remain to be modernized and unified. (author)

  19. Change Analysis in Structural Laser Scanning Point Clouds: The Baseline Method.

    Science.gov (United States)

    Shen, Yueqian; Lindenbergh, Roderik; Wang, Jinhu

    2016-12-24

    A method is introduced for detecting changes from point clouds that avoids registration. For many applications, changes are detected between two scans of the same scene obtained at different times. Traditionally, these scans are aligned to a common coordinate system having the disadvantage that this registration step introduces additional errors. In addition, registration requires stable targets or features. To avoid these issues, we propose a change detection method based on so-called baselines. Baselines connect feature points within one scan. To analyze changes, baselines connecting corresponding points in two scans are compared. As feature points either targets or virtual points corresponding to some reconstructable feature in the scene are used. The new method is implemented on two scans sampling a masonry laboratory building before and after seismic testing, that resulted in damages in the order of several centimeters. The centres of the bricks of the laboratory building are automatically extracted to serve as virtual points. Baselines connecting virtual points and/or target points are extracted and compared with respect to a suitable structural coordinate system. Changes detected from the baseline analysis are compared to a traditional cloud to cloud change analysis demonstrating the potential of the new method for structural analysis.
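
    A minimal sketch of the baseline comparison is given below: distances between feature points are computed within each scan and compared across epochs, so the two scans never need to be registered into a common frame. The coordinates, the simulated rigid offset between scanner stations and the 1 cm reporting threshold are assumptions standing in for extracted brick centres or targets.

```python
import numpy as np
from itertools import combinations

# Feature points (e.g. brick centres) in scan 1, and the same points in scan 2 expressed
# in a different scanner frame; point 7 is additionally displaced by a structural change.
rng = np.random.default_rng(5)
epoch1 = rng.uniform(0.0, 5.0, (12, 3))
ang = np.deg2rad(20.0)
Rz = np.array([[np.cos(ang), -np.sin(ang), 0.0],
               [np.sin(ang),  np.cos(ang), 0.0],
               [0.0,          0.0,         1.0]])
epoch2 = epoch1 @ Rz.T + np.array([10.0, -3.0, 0.5])
epoch2[7] += np.array([0.03, -0.02, 0.0])          # ~3.6 cm displacement

for i, j in combinations(range(len(epoch1)), 2):
    d1 = np.linalg.norm(epoch1[i] - epoch1[j])
    d2 = np.linalg.norm(epoch2[i] - epoch2[j])
    if abs(d2 - d1) > 0.01:                        # report baselines changed by > 1 cm
        print(f"baseline {i:2d}-{j:2d}: {d1:.3f} m -> {d2:.3f} m ({1000*(d2-d1):+.1f} mm)")
```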

  20. Change Analysis in Structural Laser Scanning Point Clouds: The Baseline Method

    Directory of Open Access Journals (Sweden)

    Yueqian Shen

    2016-12-01

    Full Text Available A method is introduced for detecting changes from point clouds that avoids registration. For many applications, changes are detected between two scans of the same scene obtained at different times. Traditionally, these scans are aligned to a common coordinate system having the disadvantage that this registration step introduces additional errors. In addition, registration requires stable targets or features. To avoid these issues, we propose a change detection method based on so-called baselines. Baselines connect feature points within one scan. To analyze changes, baselines connecting corresponding points in two scans are compared. As feature points either targets or virtual points corresponding to some reconstructable feature in the scene are used. The new method is implemented on two scans sampling a masonry laboratory building before and after seismic testing, that resulted in damages in the order of several centimeters. The centres of the bricks of the laboratory building are automatically extracted to serve as virtual points. Baselines connecting virtual points and/or target points are extracted and compared with respect to a suitable structural coordinate system. Changes detected from the baseline analysis are compared to a traditional cloud to cloud change analysis demonstrating the potential of the new method for structural analysis.

  1. Point Cluster Analysis Using a 3D Voronoi Diagram with Applications in Point Cloud Segmentation

    Directory of Open Access Journals (Sweden)

    Shen Ying

    2015-08-01

    Full Text Available Three-dimensional (3D) point analysis and visualization is one of the most effective methods of point cluster detection and segmentation in geospatial datasets. However, serious scattering and clotting characteristics interfere with the visual detection of 3D point clusters. To overcome this problem, this study proposes the use of 3D Voronoi diagrams to analyze and visualize 3D points instead of the original data item. The proposed algorithm computes the cluster of 3D points by applying a set of 3D Voronoi cells to describe and quantify 3D points. The decompositions of point cloud of 3D models are guided by the 3D Voronoi cell parameters. The parameter values are mapped from the Voronoi cells to 3D points to show the spatial pattern and relationships; thus, a 3D point cluster pattern can be highlighted and easily recognized. To capture different cluster patterns, continuous progressive clusters and segmentations are tested. The 3D spatial relationship is shown to facilitate cluster detection. Furthermore, the generated segmentations of real 3D data cases are exploited to demonstrate the feasibility of our approach in detecting different spatial clusters for continuous point cloud segmentation.

  2. New methods of subcooled water recognition in dew point hygrometers

    Science.gov (United States)

    Weremczuk, Jerzy; Jachowicz, Ryszard

    2001-08-01

    Two new methods of sub-cooled water recognition in dew point hygrometers are presented in this paper. The first, an impedance method, uses a new semiconductor mirror in which the dew point detector, the thermometer and the heaters are all integrated together. The second, an optical method based on a multi-section optical detector, is also discussed in the report. Experimental results of both methods are shown. New types of dew-point hygrometers able to recognize sub-cooled water are proposed.

  3. Word Length Selection Method for Controller Implementation on FPGAs Using the VHDL-2008 Fixed-Point and Floating-Point Packages

    Directory of Open Access Journals (Sweden)

    Urriza I

    2010-01-01

    Full Text Available Abstract This paper presents a word length selection method for the implementation of digital controllers in both fixed-point and floating-point hardware on FPGAs. This method uses the new types defined in the VHDL-2008 fixed-point and floating-point packages. These packages allow customizing the word length of fixed and floating point representations and shorten the design cycle simplifying the design of arithmetic operations. The method performs bit-true simulations in order to determine the word length to represent the constant coefficients and the internal signals of the digital controller while maintaining the control system specifications. A mixed-signal simulation tool is used to simulate the closed loop system as a whole in order to analyze the impact of the quantization effects and loop delays on the control system performance. The method is applied to implement a digital controller for a switching power converter. The digital circuit is implemented on an FPGA, and the simulations are experimentally verified.

  4. Point kernels and superposition methods for scatter dose calculations in brachytherapy

    International Nuclear Information System (INIS)

    Carlsson, A.K.

    2000-01-01

    Point kernels have been generated and applied for calculation of scatter dose distributions around monoenergetic point sources for photon energies ranging from 28 to 662 keV. Three different approaches for dose calculations have been compared: a single-kernel superposition method, a single-kernel superposition method where the point kernels are approximated as isotropic and a novel 'successive-scattering' superposition method for improved modelling of the dose from multiply scattered photons. An extended version of the EGS4 Monte Carlo code was used for generating the kernels and for benchmarking the absorbed dose distributions calculated with the superposition methods. It is shown that dose calculation by superposition at and below 100 keV can be simplified by using isotropic point kernels. Compared to the assumption of full in-scattering made by algorithms currently in clinical use, the single-kernel superposition method improves dose calculations in a half-phantom consisting of air and water. Further improvements are obtained using the successive-scattering superposition method, which reduces the overestimates of dose close to the phantom surface usually associated with kernel superposition methods at brachytherapy photon energies. It is also shown that scatter dose point kernels can be parametrized to biexponential functions, making them suitable for use with an effective implementation of the collapsed cone superposition algorithm. (author)
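
    The superposition principle underlying these methods can be illustrated in a few lines: the dose grid is the sum of a point kernel centred on every kernel origin. In the sketch below the kernel is a generic exp(-mu*r)/r^2 placeholder and the origins are two source positions; in the actual scatter-dose methods the kernels are Monte Carlo generated and centred on primary interaction sites with appropriate weights, so everything here is an illustrative assumption.

```python
import numpy as np

mu = 0.15                                          # 1/cm, assumed effective attenuation

def kernel(r):
    """Generic isotropic point kernel, clamped near the origin."""
    r = np.maximum(r, 0.25)
    return np.exp(-mu * r) / r ** 2

# 3-D grid with 0.5 cm voxels and two kernel origins (a tiny two-seed "implant").
axis = np.arange(-10.0, 10.5, 0.5)
grid = np.stack(np.meshgrid(axis, axis, axis, indexing="ij"), axis=-1)
origins = np.array([[0.0, 0.0, -1.0], [0.0, 0.0, 1.0]])

dose = np.zeros(grid.shape[:3])
for s in origins:                                  # superposition: one kernel per origin
    dose += kernel(np.linalg.norm(grid - s, axis=-1))

print("relative dose at (2, 0, 0) cm:", dose[24, 20, 20] / dose.max())
```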

  5. Application of the nudged elastic band method to the point-to-point radio wave ray tracing in IRI modeled ionosphere

    Science.gov (United States)

    Nosikov, I. A.; Klimenko, M. V.; Bessarab, P. F.; Zhbankov, G. A.

    2017-07-01

    Point-to-point ray tracing is an important problem in many fields of science. While direct variational methods where some trajectory is transformed to an optimal one are routinely used in calculations of pathways of seismic waves, chemical reactions, diffusion processes, etc., this approach is not widely known in ionospheric point-to-point ray tracing. We apply the Nudged Elastic Band (NEB) method to a radio wave propagation problem. In the NEB method, a chain of points which gives a discrete representation of the radio wave ray is adjusted iteratively to an optimal configuration satisfying the Fermat's principle, while the endpoints of the trajectory are kept fixed according to the boundary conditions. Transverse displacements define the radio ray trajectory, while springs between the points control their distribution along the ray. The method is applied to a study of point-to-point ionospheric ray tracing, where the propagation medium is obtained with the International Reference Ionosphere model taking into account traveling ionospheric disturbances. A 2-dimensional representation of the optical path functional is developed and used to gain insight into the fundamental difference between high and low rays. We conclude that high and low rays are minima and saddle points of the optical path functional, respectively.
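
    The sketch below applies the same NEB logic to a toy 2D medium instead of the IRI ionosphere: a chain of points with fixed endpoints is relaxed so that the component of the optical-path gradient perpendicular to the path vanishes, while spring forces keep the points distributed along the ray. The refractive-index field, spring constant, step size and numerical gradient are all assumptions made for illustration.

```python
import numpy as np

def n_medium(p):
    """Assumed refractive-index field: a smooth high-index layer around y = 1."""
    return 1.0 + 0.5 * np.exp(-((p[..., 1] - 1.0) ** 2))

def optical_path(path):
    mids = 0.5 * (path[1:] + path[:-1])
    seg = np.linalg.norm(np.diff(path, axis=0), axis=1)
    return float(np.sum(n_medium(mids) * seg))

def grad(path, h=1e-5):
    """Numerical gradient of the optical path w.r.t. the interior points."""
    g = np.zeros_like(path)
    for i in range(1, len(path) - 1):
        for d in range(2):
            p1, p2 = path.copy(), path.copy()
            p1[i, d] += h
            p2[i, d] -= h
            g[i, d] = (optical_path(p1) - optical_path(p2)) / (2 * h)
    return g

A, B, N = np.array([0.0, 0.0]), np.array([10.0, 0.0]), 21
path = np.linspace(A, B, N)                        # straight chain as the initial guess
k_spring, step = 1.0, 0.05
print("initial optical path:", round(optical_path(path), 4))

for _ in range(1500):
    g = grad(path)
    tau = path[2:] - path[:-2]                     # central-difference tangents
    tau /= np.linalg.norm(tau, axis=1, keepdims=True)
    g_perp = g[1:-1] - np.sum(g[1:-1] * tau, axis=1, keepdims=True) * tau
    spring = k_spring * (np.linalg.norm(path[2:] - path[1:-1], axis=1)
                         - np.linalg.norm(path[1:-1] - path[:-2], axis=1))[:, None] * tau
    path[1:-1] += step * (-g_perp + spring)        # true force perpendicular, spring parallel

print("relaxed optical path:", round(optical_path(path), 4),
      " max transverse deflection:", round(float(path[:, 1].min()), 3))
```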

  6. Solution of the neutron point kinetics equations with temperature feedback effects applying the polynomial approach method

    International Nuclear Information System (INIS)

    Tumelero, Fernanda; Petersen, Claudio Z.; Goncalves, Glenio A.; Lazzari, Luana

    2015-01-01

    In this work, we present a solution of the Neutron Point Kinetics Equations with temperature feedback effects applying the Polynomial Approach Method. For the solution, we consider one and six groups of delayed neutron precursors with temperature feedback effects and constant reactivity. The main idea is to expand the neutron density, delayed neutron precursors and temperature as a power series, considering the reactivity as an arbitrary function of time in a relatively short time interval around an ordinary point. In the first interval one applies the initial conditions of the problem, and the analytical continuation is used to determine the solutions of the next intervals. With the application of the Polynomial Approximation Method it is possible to overcome the stiffness problem of the equations. In such a way, one varies the time step size of the Polynomial Approach Method and performs an analysis of the precision and computational time. Moreover, we compare the method with different types of approaches (linear, quadratic and cubic) of the power series. The neutron density and temperature obtained by numerical simulations with the linear approximation are compared with results in the literature. (author)

  7. Solution of the neutron point kinetics equations with temperature feedback effects applying the polynomial approach method

    Energy Technology Data Exchange (ETDEWEB)

    Tumelero, Fernanda, E-mail: fernanda.tumelero@yahoo.com.br [Universidade Federal do Rio Grande do Sul (UFRGS), Porto Alegre, RS (Brazil). Programa de Pos-Graduacao em Engenharia Mecanica; Petersen, Claudio Z.; Goncalves, Glenio A.; Lazzari, Luana, E-mail: claudiopeteren@yahoo.com.br, E-mail: gleniogoncalves@yahoo.com.br, E-mail: luana-lazzari@hotmail.com [Universidade Federal de Pelotas (DME/UFPEL), Capao do Leao, RS (Brazil). Instituto de Fisica e Matematica

    2015-07-01

    In this work, we present a solution of the Neutron Point Kinetics Equations with temperature feedback effects applying the Polynomial Approach Method. For the solution, we consider one and six groups of delayed neutron precursors with temperature feedback effects and constant reactivity. The main idea is to expand the neutron density, delayed neutron precursors and temperature as a power series, considering the reactivity as an arbitrary function of time in a relatively short time interval around an ordinary point. In the first interval one applies the initial conditions of the problem, and the analytical continuation is used to determine the solutions of the next intervals. With the application of the Polynomial Approximation Method it is possible to overcome the stiffness problem of the equations. In such a way, one varies the time step size of the Polynomial Approach Method and performs an analysis of the precision and computational time. Moreover, we compare the method with different types of approaches (linear, quadratic and cubic) of the power series. The neutron density and temperature obtained by numerical simulations with the linear approximation are compared with results in the literature. (author)

  8. A simple method for regional cerebral blood flow measurement by one-point arterial blood sampling and 123I-IMP microsphere model (part 2). A study of time correction of one-point blood sample count

    International Nuclear Information System (INIS)

    Masuda, Yasuhiko; Makino, Kenichi; Gotoh, Satoshi

    1999-01-01

    In our previous paper regarding determination of the regional cerebral blood flow (rCBF) using the 123I-IMP microsphere model, we reported that the accuracy of determination of the integrated value of the input function from one-point arterial blood sampling can be increased by performing a correction using the 5 min:29 min ratio for the whole-brain count. However, failure to carry out the arterial blood collection at exactly 5 minutes after 123I-IMP injection causes errors with this method, and there is thus a time limitation. We have now revised our method so that the one-point arterial blood sampling can be performed at any time between 5 minutes and 20 minutes after 123I-IMP injection, with the addition of a correction step for the sampling time. This revised method permits more accurate estimation of the integral of the input function. The method was then applied to 174 experimental subjects: one-point blood samples were collected at random times between 5 and 20 minutes, and the estimated values for the continuous arterial octanol extraction count (COC) were determined. The mean error rate between the COC and the actually measured continuous arterial octanol extraction count (OC) was 3.6%, and the standard deviation was 12.7%. Accordingly, in 70% of the cases the rCBF was able to be estimated within an error rate of 13%, while estimation was possible in 95% of the cases within an error rate of 25%. This improved method is a simple technique for determination of the rCBF by the 123I-IMP microsphere model and one-point arterial blood sampling which no longer has a time limitation and does not require any octanol extraction step. (author)

  9. Real-time Continuous Assessment Method for Mental and Physiological Condition using Heart Rate Variability

    Science.gov (United States)

    Yoshida, Yutaka; Yokoyama, Kiyoko; Ishii, Naohiro

    It is necessary to monitor the daily health condition in order to prevent stress syndrome. In this study, a method was proposed for assessing mental and physiological condition, such as work stress or relaxation, using heart rate variability in real time and continuously. The instantaneous heart rate (HR) and the ratio of the number of extreme points (NEP) to the number of heart beats were calculated for assessing mental and physiological condition. In this method, 20 heart beats are used to calculate these indexes, which are updated at every beat interval. Three conditions, namely sitting at rest, performing mental arithmetic and watching a relaxation movie, were assessed using our proposed algorithm. The assessment accuracies were 71.9% and 55.8% when performing mental arithmetic and watching the relaxation movie, respectively. In this method, the mental and physiological condition is assessed using only the 20 preceding heart beats, so the method can be considered a real-time assessment method.
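
    The two indexes described above are straightforward to compute from an RR-interval series, as the sketch below illustrates on synthetic data: the instantaneous heart rate and the ratio of extreme points (local maxima and minima of the RR series) to the number of beats, evaluated over windows of 20 beats. The synthetic RR series, the non-overlapping windows and the extreme-point definition are assumptions for illustration.

```python
import numpy as np

# Synthetic RR-interval series (seconds) with respiratory-like variability plus noise.
rng = np.random.default_rng(6)
beats = np.arange(300)
rr = 0.8 + 0.05 * np.sin(2 * np.pi * beats / 4.5) + 0.02 * rng.standard_normal(beats.size)

win = 20                                            # number of beats per evaluation window
for start in range(0, len(rr) - win + 1, win):
    w = rr[start:start + win]
    hr = 60.0 / w.mean()                            # instantaneous heart rate, beats/min
    mid = w[1:-1]
    nep = np.sum(((mid > w[:-2]) & (mid > w[2:])) | ((mid < w[:-2]) & (mid < w[2:])))
    print(f"beats {start:3d}-{start + win - 1:3d}: HR = {hr:5.1f} bpm, NEP/beats = {nep / win:.2f}")
```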

  10. Continuous Problem of Function Continuity

    Science.gov (United States)

    Jayakody, Gaya; Zazkis, Rina

    2015-01-01

    We examine different definitions presented in textbooks and other mathematical sources for "continuity of a function at a point" and "continuous function" in the context of introductory level Calculus. We then identify problematic issues related to definitions of continuity and discontinuity: inconsistency and absence of…

  11. Four points function fitted and first derivative procedure for determining the end points in potentiometric titration curves: statistical analysis and method comparison.

    Science.gov (United States)

    Kholeif, S A

    2001-06-01

    A new method that belongs to the differential category for determining the end points from potentiometric titration curves is presented. It uses a preprocess to find first-derivative values by fitting four data points in and around the region of inflection to a non-linear function, and then locates the end point, usually as a maximum or minimum, using an inverse parabolic interpolation procedure that has an analytical solution. The behavior and accuracy of the sigmoid and cumulative non-linear functions used are investigated against three factors. A statistical evaluation of the new method using linear least-squares method validation and multifactor data analysis is covered. The new method is generally applied to symmetrical and unsymmetrical potentiometric titration curves, and the end point is calculated using numerical procedures only. It outperforms the "parent" regular differential method at almost all factor levels and gives accurate results comparable to the true or estimated true end points. Calculated end points from selected experimental titration curves, compatible with the equivalence-point category of methods such as Gran or Fortuin, are also compared with the new method.
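
    The final interpolation step has a closed-form answer, which the sketch below demonstrates on a synthetic titration curve: the end point is taken as the vertex of the parabola through the three first-derivative points surrounding the maximum. For brevity the derivative values here come from simple central differences rather than from the four-point non-linear fit described in the paper, and the synthetic curve and noise level are assumptions.

```python
import numpy as np

# Synthetic potentiometric titration curve (mV vs. titrant volume) with its true
# inflection at 12.34 mL, plus a little measurement noise.
v = np.linspace(0.0, 20.0, 81)
rng = np.random.default_rng(7)
emf = 250.0 + 120.0 * np.tanh((v - 12.34) / 0.35) + rng.normal(0.0, 0.3, v.size)

dEdv = np.gradient(emf, v)                          # first-derivative estimate
i = int(np.argmax(dEdv))                            # grid point nearest the end point
x0, x1, x2 = v[i - 1], v[i], v[i + 1]
y0, y1, y2 = dEdv[i - 1], dEdv[i], dEdv[i + 1]

num = (x1 - x0) ** 2 * (y1 - y2) - (x1 - x2) ** 2 * (y1 - y0)
den = (x1 - x0) * (y1 - y2) - (x1 - x2) * (y1 - y0)
end_point = x1 - 0.5 * num / den                    # vertex of the interpolating parabola
print(f"estimated end point: {end_point:.3f} mL (true inflection at 12.34 mL)")
```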

  12. The descriptive set-theoretic complexity of the set of points of continuity of a multi-valued function (Extended Abstract)

    Directory of Open Access Journals (Sweden)

    Vassilios Gregoriades

    2010-06-01

    Full Text Available In this article we treat a notion of continuity for a multi-valued function F and we compute the descriptive set-theoretic complexity of the set of all x for which F is continuous at x. We give conditions under which the latter set is either a G_δ set or the countable union of G_δ sets. Also we provide a counterexample which shows that the latter result is optimum under the same conditions. Moreover we prove that those conditions are necessary in order to obtain that the set of points of continuity of F is Borel, i.e., we show that if we drop some of the previous conditions then there is a multi-valued function F whose graph is a Borel set and the set of points of continuity of F is not a Borel set. Finally we give some analogous results regarding a stronger notion of continuity for a multi-valued function. This article is motivated by a question of M. Ziegler in "Real Computation with Least Discrete Advice: A Complexity Theory of Nonuniform Computability with Applications to Linear Algebra" (submitted).

  13. Accurate Lithium-ion battery parameter estimation with continuous-time system identification methods

    International Nuclear Information System (INIS)

    Xia, Bing; Zhao, Xin; Callafon, Raymond de; Garnier, Hugues; Nguyen, Truong; Mi, Chris

    2016-01-01

    Highlights: • Continuous-time system identification is applied in Lithium-ion battery modeling. • Continuous-time and discrete-time identification methods are compared in detail. • The instrumental variable method is employed to further improve the estimation. • Simulations and experiments validate the advantages of continuous-time methods. - Abstract: The modeling of Lithium-ion batteries usually utilizes discrete-time system identification methods to estimate parameters of discrete models. However, in real applications, there is a fundamental limitation of the discrete-time methods in dealing with sensitivity when the system is stiff and the storage resolutions are limited. To overcome this problem, this paper adopts direct continuous-time system identification methods to estimate the parameters of equivalent circuit models for Lithium-ion batteries. Compared with discrete-time system identification methods, the continuous-time system identification methods provide more accurate estimates to both fast and slow dynamics in battery systems and are less sensitive to disturbances. A case of a 2nd-order equivalent circuit model is studied which shows that the continuous-time estimates are more robust to high sampling rates, measurement noises and rounding errors. In addition, the estimation by the conventional continuous-time least squares method is further improved in the case of noisy output measurement by introducing the instrumental variable method. Simulation and experiment results validate the analysis and demonstrate the advantages of the continuous-time system identification methods in battery applications.

  14. Interior Point Methods for Large-Scale Nonlinear Programming

    Czech Academy of Sciences Publication Activity Database

    Lukšan, Ladislav; Matonoha, Ctirad; Vlček, Jan

    2005-01-01

    Roč. 20, č. 4-5 (2005), s. 569-582 ISSN 1055-6788 R&D Projects: GA AV ČR IAA1030405 Institutional research plan: CEZ:AV0Z10300504 Keywords : nonlinear programming * interior point methods * KKT systems * indefinite preconditioners * filter methods * algorithms Subject RIV: BA - General Mathematics Impact factor: 0.477, year: 2005

  15. Synthesis of Numerical Methods for Modeling Wave Energy Converter-Point Absorbers: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Li, Y.; Yu, Y. H.

    2012-05-01

    During the past few decades, wave energy has received significant attention among all forms of ocean energy. Industry has proposed hundreds of prototypes, such as the oscillating water column, the point absorber, the overtopping system, and the bottom-hinged system. In particular, many researchers have focused on modeling the floating-point absorber as the technology to extract wave energy. Several modeling methods have been used, such as the analytical method, the boundary-integral equation method, the Navier-Stokes equations method, and the empirical method. However, no standardized method has been established. To assist the development of wave energy conversion technologies, this report reviews the methods for modeling the floating-point absorber.

  16. Continuous improvement methods in the nuclear industry

    International Nuclear Information System (INIS)

    Heising, Carolyn D.

    1995-01-01

    The purpose of this paper is to investigate management methods for improved safety in the nuclear power industry. Process improvement management, methods of business process reengineering, total quality management, and continuous process improvement (KAIZEN) are explored. The anticipated advantages of extensive use of improved, process-oriented management methods in the nuclear industry are increased effectiveness and efficiency in virtually all tasks of plant operation and maintenance. Important spin-offs include increased plant safety and economy. (author). 6 refs., 1 fig

  17. Continuous method of natrium purification

    International Nuclear Information System (INIS)

    Batoux, B.; Laurent-Atthalin, A.; Salmon, M.

    1975-01-01

    An improvement of the known method for the production of highly pure sodium from technically pure sodium, which still contains several hundred ppm of metallic impurities, is proposed. These impurities, first of all Ca and Ba, are separated by oxidation with sodium peroxide. What is new is the continuous method, which can also be performed on an industrial scale and results in a residual Ca content of less than 10 ppm. Under an N₂ atmosphere, highly dispersed sodium peroxide is added to a flow of sodium and thoroughly mixed at 100 °C to 150 °C; the suspension is then heated under turbulence to 200 °C to 300 °C, and the oxides that form are separated. Exact data for optimum reaction conditions as well as a flow diagram are supplied. (UWI) [de

  18. Continuous method of natrium purification

    Energy Technology Data Exchange (ETDEWEB)

    Batoux, B; Laurent-Atthalin, A; Salmon, M

    1975-05-28

    An improvement of the known method for the production of highly pure sodium from technically pure sodium which still contains several hundred ppm metallic impurities is proposed. These impurities, first of all Ca and Ba, are separated by oxidation with sodium peroxide. The new continuous method can be performed on a technically large scale and results in a residual Ca content of less than 10 ppm. Under an N₂ atmosphere, highly dispersed sodium peroxide is added to a flow of sodium and thoroughly mixed at 100 °C to 150 °C; the suspension is heated under turbulence to 200 °C to 300 °C, and the forming oxides are separated. Exact data for optimum reaction conditions as well as a flow diagram are supplied.

  19. TH-AB-202-08: A Robust Real-Time Surface Reconstruction Method On Point Clouds Captured From a 3D Surface Photogrammetry System

    International Nuclear Information System (INIS)

    Liu, W; Sawant, A; Ruan, D

    2016-01-01

    Purpose: Surface photogrammetry (e.g. VisionRT, C-Rad) provides a noninvasive way to obtain high-frequency measurement for patient motion monitoring in radiotherapy. This work aims to develop a real-time surface reconstruction method on the acquired point clouds, whose acquisitions are subject to noise and missing measurements. In contrast to existing surface reconstruction methods that are usually computationally expensive, the proposed method reconstructs continuous surfaces with comparable accuracy in real-time. Methods: The key idea in our method is to solve and propagate a sparse linear relationship from the point cloud (measurement) manifold to the surface (reconstruction) manifold, taking advantage of the similarity in local geometric topology in both manifolds. With consistent point cloud acquisition, we propose a sparse regression (SR) model to directly approximate the target point cloud as a sparse linear combination from the training set, building the point correspondences by the iterative closest point (ICP) method. To accommodate changing noise levels and/or presence of inconsistent occlusions, we further propose a modified sparse regression (MSR) model to account for the large and sparse error built by ICP, with a Laplacian prior. We evaluated our method on both clinical acquired point clouds under consistent conditions and simulated point clouds with inconsistent occlusions. The reconstruction accuracy was evaluated w.r.t. root-mean-squared-error, by comparing the reconstructed surfaces against those from the variational reconstruction method. Results: On clinical point clouds, both the SR and MSR models achieved sub-millimeter accuracy, with mean reconstruction time reduced from 82.23 seconds to 0.52 seconds and 0.94 seconds, respectively. On simulated point cloud with inconsistent occlusions, the MSR model has demonstrated its advantage in achieving consistent performance despite the introduced occlusions. Conclusion: We have developed a real
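
    The core regression step described above can be sketched as follows: a newly acquired point cloud, once put in correspondence with the training clouds (e.g. by ICP, omitted here), is approximated as a sparse linear combination of those training clouds. The use of scikit-learn's Lasso, the data shapes and the regularization weight are assumptions for illustration.

```python
# Hedged sketch of the sparse-regression (SR) idea on synthetic, pre-corresponded data.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n_points, n_train = 500, 40

# training point clouds flattened to column vectors (one column per training cloud)
D = rng.normal(size=(3 * n_points, n_train))
w_true = np.zeros(n_train)
w_true[[3, 17, 29]] = [0.5, 0.3, 0.2]
target = D @ w_true + 0.01 * rng.normal(size=3 * n_points)    # noisy new acquisition

# sparse linear combination: minimize ||D w - target||^2 + alpha * ||w||_1
model = Lasso(alpha=0.01, fit_intercept=False).fit(D, target)
w = model.coef_

# the same sparse weights would then be applied to the corresponding training surfaces
reconstruction = D @ w
print("active training clouds:", np.nonzero(np.abs(w) > 0.05)[0])
```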

  20. Evaluation of the H-point standard additions method (HPSAM) and the generalized H-point standard additions method (GHPSAM) for the UV-analysis of two-component mixtures.

    Science.gov (United States)

    Hund, E; Massart, D L; Smeyers-Verbeke, J

    1999-10-01

    The H-point standard additions method (HPSAM) and two versions of the generalized H-point standard additions method (GHPSAM) are evaluated for the UV-analysis of two-component mixtures. Synthetic mixtures of anhydrous caffeine and phenazone as well as of atovaquone and proguanil hydrochloride were used. Furthermore, the method was applied to pharmaceutical formulations that contain these compounds as active drug substances. This paper shows both the difficulties that are related to the methods and the conditions by which acceptable results can be obtained.

  1. A multi points ultrasonic detection method for material flow of belt conveyor

    Science.gov (United States)

    Zhang, Li; He, Rongjun

    2018-03-01

    Because single-point ultrasonic ranging gives a large detection error when used for material flow detection on a belt conveyor, especially when the coal is unevenly distributed or lumpy, a material flow detection method for belt conveyors is designed based on multi-point ultrasonic ranging. The method locates multiple points on the surfaces of the material and the belt to approximate the cross-sectional area of the material, and combines this with the running speed of the belt conveyor to obtain the material flow. The test results show that the method has a smaller detection error than single-point ultrasonic ranging under the condition of large, unevenly distributed coal.
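
    The geometric idea above can be sketched in a few lines; the sensor layout, distances and belt speed below are illustrative assumptions, not values from the paper.

```python
# Hedged sketch: approximate material flow on a belt from several ultrasonic
# distance readings taken across the belt width.
import numpy as np

sensor_positions = np.linspace(0.0, 1.0, 6)      # sensor locations across the belt (m)
empty_belt_dist = np.full(6, 0.80)               # sensor-to-belt distance when empty (m)
measured_dist = np.array([0.80, 0.65, 0.52, 0.55, 0.70, 0.80])   # with material (m)

material_height = np.clip(empty_belt_dist - measured_dist, 0.0, None)
cross_section = np.trapz(material_height, sensor_positions)      # m^2, trapezoidal rule

belt_speed = 2.5                                 # m/s, e.g. from the drive encoder
volume_flow = cross_section * belt_speed         # m^3/s
print(f"cross-section ~ {cross_section:.3f} m^2, flow ~ {volume_flow:.3f} m^3/s")
```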

  2. Near-point string: Simple method to demonstrate anticipated near point for multifocal and accommodating intraocular lenses.

    Science.gov (United States)

    George, Monica C; Lazer, Zane P; George, David S

    2016-05-01

    We present a technique that uses a near-point string to demonstrate the anticipated near point of multifocal and accommodating intraocular lenses (IOLs). Beads are placed on the string at distances corresponding to the near points for diffractive and accommodating IOLs. The string is held up to the patient's eye to demonstrate where each of the IOLs is likely to provide the best near vision. None of the authors has a financial or proprietary interest in any material or method mentioned. Copyright © 2016 ASCRS and ESCRS. Published by Elsevier Inc. All rights reserved.
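
    The placement of the beads follows from simple vergence arithmetic: the near point in metres is roughly the reciprocal of the effective near add in dioptres. The sketch below illustrates this relation; the add powers listed are assumptions for illustration, not values from the article.

```python
# Hedged sketch: approximate bead positions on the near-point string from the
# effective near add power of each IOL type (values are illustrative).
def near_point_cm(near_add_diopters: float) -> float:
    """Approximate near point in centimetres for a given effective add power."""
    return 100.0 / near_add_diopters

for label, add_d in [("diffractive multifocal, +3.0 D effective add", 3.0),
                     ("diffractive multifocal, +2.5 D effective add", 2.5),
                     ("accommodating IOL, ~+1.5 D effective add", 1.5)]:
    print(f"{label}: bead at ~{near_point_cm(add_d):.0f} cm on the string")
```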

  3. [Absorption spectrum of Quasi-continuous laser modulation demodulation method].

    Science.gov (United States)

    Shao, Xin; Liu, Fu-Gui; Du, Zhen-Hui; Wang, Wei

    2014-05-01

    A software phase-locked (lock-in) amplifier demodulation method is proposed to properly demodulate the second harmonic (2f) signal in quasi-continuous laser wavelength modulation spectroscopy (WMS), based on an analysis of the signal characteristics. Through validity checking of the measurement data, filtering, phase-sensitive detection, digital filtering and other processing, the method achieves sensitive detection of the quasi-continuous signal. The method was verified in carbon dioxide detection experiments, in which the WMS-2f signals obtained by the software phase-locked amplifier and by a high-performance lock-in amplifier (SR844) were compared simultaneously. The results show that the Allan variance of the WMS-2f signal demodulated by the software phase-locked amplifier is one order of magnitude smaller than that demodulated by the SR844, corresponding to a detection limit roughly two orders of magnitude lower. The method also resolves the loss-of-lock problem caused by the small duty cycle of the quasi-continuous modulation signal, with only small distortion of the signal waveform.
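
    A minimal sketch of the software lock-in step is given below: the detector signal is multiplied by quadrature references at twice the modulation frequency and low-pass filtered to recover the 2f amplitude. The toy signal model, sampling rate and filter settings are assumptions, not the paper's implementation.

```python
# Hedged sketch of software lock-in (phase-sensitive) demodulation of a WMS-2f signal.
import numpy as np
from scipy.signal import butter, filtfilt

fs, f_mod = 100_000.0, 1_000.0                   # sample rate and modulation frequency (Hz)
t = np.arange(0, 0.1, 1 / fs)
# toy detector signal containing a 2f component plus white noise
signal = 0.02 * np.cos(2 * np.pi * 2 * f_mod * t + 0.3) + 0.01 * np.random.randn(t.size)

ref_i = np.cos(2 * np.pi * 2 * f_mod * t)        # in-phase reference at 2f
ref_q = np.sin(2 * np.pi * 2 * f_mod * t)        # quadrature reference at 2f

b, a = butter(4, 50.0 / (fs / 2))                # 4th-order low-pass, ~50 Hz cutoff
x = filtfilt(b, a, signal * ref_i)
y = filtfilt(b, a, signal * ref_q)
wms_2f = 2.0 * np.sqrt(x**2 + y**2)              # phase-insensitive 2f magnitude
print("recovered 2f amplitude ~", wms_2f.mean())
```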

  4. A Model Stitching Architecture for Continuous Full Flight-Envelope Simulation of Fixed-Wing Aircraft and Rotorcraft from Discrete Point Linear Models

    Science.gov (United States)

    2016-04-01

    Eric L. Tobias and Mark B. Tischler, Aviation Development Directorate, Aviation and Missile (...): "A Model Stitching Architecture for Continuous Full Flight-Envelope Simulation of Fixed-Wing Aircraft and Rotorcraft from Discrete-Point Linear Models." The simulation is assembled from discrete-point linear models and trim data; the model stitching simulation architecture is applicable to any aircraft configuration readily ...

  5. A Review on the Modified Finite Point Method

    Directory of Open Access Journals (Sweden)

    Nan-Jing Wu

    2014-01-01

    Full Text Available The objective of this paper is to make a review on recent advancements of the modified finite point method, named MFPM hereafter. This MFPM method is developed for solving general partial differential equations. Benchmark examples of employing this method to solve Laplace, Poisson, convection-diffusion, Helmholtz, mild-slope, and extended mild-slope equations are verified and then illustrated in fluid flow problems. Application of MFPM to numerical generation of orthogonal grids, which is governed by Laplace equation, is also demonstrated.

  6. Comparison of methods for accurate end-point detection of potentiometric titrations

    Science.gov (United States)

    Villela, R. L. A.; Borges, P. P.; Vyskočil, L.

    2015-01-01

    Detection of the end point in potentiometric titrations has wide application on experiments that demand very low measurement uncertainties mainly for certifying reference materials. Simulations of experimental coulometric titration data and consequential error analysis of the end-point values were conducted using a programming code. These simulations revealed that the Levenberg-Marquardt method is in general more accurate than the traditional second derivative technique used currently as end-point detection for potentiometric titrations. Performance of the methods will be compared and presented in this paper.
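
    To make the comparison concrete, the sketch below estimates the end point of a simulated titration curve both by a Levenberg-Marquardt sigmoid fit and by the zero crossing of the numerical second derivative. The curve model, parameter values and noise level are assumptions for illustration only.

```python
# Hedged sketch: Levenberg-Marquardt fit vs. second-derivative end-point detection.
import numpy as np
from scipy.optimize import curve_fit     # unbounded curve_fit uses Levenberg-Marquardt

def titration_curve(v, e0, slope, v_eq, drift):
    # simple sigmoid model of potential (mV) versus titrant volume (mL)
    return e0 + slope * np.tanh((v - v_eq) / 0.05) + drift * v

v = np.linspace(9.0, 11.0, 81)
e_meas = titration_curve(v, 250.0, 150.0, 10.0, 2.0) + 0.5 * np.random.randn(81)

# (1) Levenberg-Marquardt fit: the equivalence volume is a fitted parameter
popt, _ = curve_fit(titration_curve, v, e_meas, p0=[200.0, 100.0, 10.2, 0.0])
v_eq_lm = popt[2]

# (2) traditional method: zero of the second derivative at the steepest point
d1 = np.gradient(e_meas, v)
d2 = np.gradient(d1, v)
i = np.argmax(d1)                        # index of the steepest slope
j = i if d2[i] >= 0 else i - 1           # d2 changes sign between j and j + 1
v_eq_d2 = v[j] + (v[j + 1] - v[j]) * d2[j] / (d2[j] - d2[j + 1])

print(f"LM end point ~ {v_eq_lm:.3f} mL, 2nd-derivative end point ~ {v_eq_d2:.3f} mL")
```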

  7. Comparison of methods for accurate end-point detection of potentiometric titrations

    International Nuclear Information System (INIS)

    Villela, R L A; Borges, P P; Vyskočil, L

    2015-01-01

    Detection of the end point in potentiometric titrations has wide application on experiments that demand very low measurement uncertainties mainly for certifying reference materials. Simulations of experimental coulometric titration data and consequential error analysis of the end-point values were conducted using a programming code. These simulations revealed that the Levenberg-Marquardt method is in general more accurate than the traditional second derivative technique used currently as end-point detection for potentiometric titrations. Performance of the methods will be compared and presented in this paper

  8. Continuous surveillance of transformers using artificial intelligence methods; Surveillance continue des transformateurs: application des methodes d'intelligence artificielle

    Energy Technology Data Exchange (ETDEWEB)

    Schenk, A.; Germond, A. [Ecole Polytechnique Federale de Lausanne, Lausanne (Switzerland); Boss, P.; Lorin, P. [ABB Secheron SA, Geneve (Switzerland)

    2000-07-01

    The article describes a new method for the continuous surveillance of power transformers based on the application of artificial intelligence (AI) techniques. An experimental pilot project on a specially equipped, strategically important power transformer is described. Traditional surveillance methods and the use of mathematical models for the prediction of faults are described. The article describes the monitoring equipment used in the pilot project and the AI principles such as self-organising maps that are applied. The results obtained from the pilot project and methods for their graphical representation are discussed.

  9. Multi-scale calculation based on dual domain material point method combined with molecular dynamics

    Energy Technology Data Exchange (ETDEWEB)

    Dhakal, Tilak Raj [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-02-27

    This dissertation combines the dual domain material point method (DDMP) with molecular dynamics (MD) in an attempt to create a multi-scale numerical method to simulate materials undergoing large deformations with high strain rates. In these types of problems, the material is often in a thermodynamically non-equilibrium state, and conventional constitutive relations are often not available. In this method, the closure quantities, such as stress, at each material point are calculated from an MD simulation of a group of atoms surrounding the material point. Rather than restricting the multi-scale simulation to a small spatial region, such as phase interfaces or crack tips, this multi-scale method can be used to consider non-equilibrium thermodynamic effects in a macroscopic domain. The method takes advantage of the fact that the material points only communicate with mesh nodes, not among themselves; therefore the MD simulations for the material points can be performed independently in parallel. First, using a one-dimensional shock problem as an example, the numerical properties of the original material point method (MPM), the generalized interpolation material point (GIMP) method, the convected particle domain interpolation (CPDI) method, and the DDMP method are investigated. Among these methods, only the DDMP method converges as the number of particles increases, but the large number of particles needed for convergence makes the method very expensive, especially in our multi-scale method where the stress at each material point is calculated using an MD simulation. To improve DDMP, the sub-point method is introduced in this dissertation, which provides high-quality numerical solutions with a very small number of particles. The multi-scale method based on DDMP with sub-points is successfully implemented for a one-dimensional problem of shock wave propagation in a cerium crystal. The MD simulation to calculate the stress at each material point is performed on a GPU using CUDA to accelerate the

  10. Comments on the comparison of global methods for linear two-point boundary value problems

    International Nuclear Information System (INIS)

    de Boor, C.; Swartz, B.

    1977-01-01

    A more careful count of the operations involved in solving the linear system associated with collocation of a two-point boundary value problem using rough splines reverses results recently reported by others in this journal. In addition, it is observed that the use of the technique of ''condensation of parameters'' can decrease the computer storage required. Furthermore, the use of a particular highly localized basis can also reduce the setup time when the mesh is irregular. Finally, operation counts are roughly estimated for the solution of certain linear systems associated with two competing collocation methods; namely, collocation with smooth splines and collocation of the equivalent first-order system with continuous piecewise polynomials

  11. Entropy Based Test Point Evaluation and Selection Method for Analog Circuit Fault Diagnosis

    Directory of Open Access Journals (Sweden)

    Yuan Gao

    2014-01-01

    Full Text Available By simplifying the tolerance problem and treating the faulty voltages at different test points as independent variables, an integer-coded table technique has been proposed to simplify the test point selection process. However, simplifying the tolerance problem may induce a wrong solution, while the independence assumption results in an overly conservative selection. To address these problems, the tolerance problem is thoroughly considered in this paper, and the dependency relationship between different test points is taken into account at the same time. A heuristic graph search method is proposed to facilitate the test point selection process. First, the information-theoretic concept of entropy is used to evaluate the optimality of a test point. The entropy is calculated from the ambiguity sets and the faulty voltage distribution determined by component tolerance. Second, the selected optimal test point is used to expand the current graph node by using the dependence relationship between the test point and the graph node. Simulated results indicate that the proposed method finds the optimal set of test points more accurately than other methods; therefore, it is a good solution for minimizing the size of the test point set. To simplify and clarify the proposed method, only catastrophic and some specific parametric faults are discussed in this paper.
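
    The entropy criterion can be illustrated with a toy example: for each candidate test point, the faults are grouped into ambiguity sets (faults that the test point cannot distinguish), and the test point whose induced partition has maximum entropy is selected. The ambiguity sets below are invented for illustration and do not come from the paper.

```python
# Hedged sketch of entropy-based test point evaluation with toy ambiguity sets.
import math

# candidate test point -> ambiguity sets (each set lists faults it cannot separate)
ambiguity_sets = {
    "TP1": [["f1", "f2"], ["f3"], ["f4", "f5", "f6"]],
    "TP2": [["f1"], ["f2", "f3"], ["f4"], ["f5", "f6"]],
    "TP3": [["f1", "f2", "f3", "f4"], ["f5", "f6"]],
}

def entropy(sets, n_faults=6):
    # equally likely faults assumed, so p(set) = |set| / n_faults
    return -sum((len(s) / n_faults) * math.log2(len(s) / n_faults) for s in sets)

scores = {tp: entropy(sets) for tp, sets in ambiguity_sets.items()}
best = max(scores, key=scores.get)
print(scores, "-> select", best)
```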

  12. Development of a continuous energy version of KENO V.a

    International Nuclear Information System (INIS)

    Dunn, M.E.; Bentley, C.L.; Goluoglu, S.; Paschal, L.S.; Dodds, H.L.

    1997-01-01

    KENO V.a is a multigroup Monte Carlo code that solves the Boltzmann transport equation and is used extensively in the nuclear criticality safety community to calculate the effective multiplication factor k_eff of systems containing fissile material. Because of the smaller amount of disk storage and CPU time required in calculations, multigroup approaches have been preferred over continuous energy (point) approaches in the past to solve the transport equation. With the advent of high-performance computers, storage and CPU limitations are less restrictive, thereby making continuous energy methods viable for transport calculations. Moreover, continuous energy methods avoid many of the assumptions and approximations inherent in multigroup methods. Because a continuous energy version of KENO V.a does not exist, the objective of the work is to develop a new version of KENO V.a that utilizes continuous energy cross sections. Currently, a point cross-section library, which is based on a raw continuous energy cross-section library such as ENDF/B-V, is not available for implementation in KENO V.a; however, point cross-section libraries are available for MCNP, another widely used Monte Carlo transport code. Since MCNP cross sections are based on ENDF data and are readily available, a new version of KENO V.a named PKENO V.a has been developed that performs the random walk using MCNP cross sections. To utilize point cross sections, extensive modifications have been made to KENO V.a. At this point in the research, testing of the code is underway. In particular, PKENO V.a, KENO V.a, and MCNP have been used to model nine critical experiments and one subcritical problem. The results obtained with PKENO V.a are in excellent agreement with MCNP, KENO V.a, and experiments.

  13. AN IMPROVEMENT ON GEOMETRY-BASED METHODS FOR GENERATION OF NETWORK PATHS FROM POINTS

    Directory of Open Access Journals (Sweden)

    Z. Akbari

    2014-10-01

    Full Text Available Determining the network path is important for purposes such as determining road traffic, estimating the average speed of vehicles, and other network analyses. One of the required inputs is information about the network path. However, the data collected by positioning systems are often discrete points, and converting these points to a network path has become a challenge for which researchers have presented many solutions. This study investigates geometry-based methods for estimating network paths from the obtained points and improves an existing point-to-curve method. To this end, several geometry-based methods have been studied, their weaknesses have been described and illustrated, and an improved method has been proposed by applying additional conditions to the best of them.

  14. Modeling of Landslides with the Material Point Method

    DEFF Research Database (Denmark)

    Andersen, Søren Mikkel; Andersen, Lars

    2008-01-01

    A numerical model for studying the dynamic evolution of landslides is presented. The numerical model is based on the Generalized Interpolation Material Point Method. A simplified slope with a house placed on top is analysed. An elasto-plastic material model based on the Mohr-Coulomb yield criterion...

  15. Modelling of Landslides with the Material-point Method

    DEFF Research Database (Denmark)

    Andersen, Søren; Andersen, Lars

    2009-01-01

    A numerical model for studying the dynamic evolution of landslides is presented. The numerical model is based on the Generalized Interpolation Material Point Method. A simplified slope with a house placed on top is analysed. An elasto-plastic material model based on the Mohr-Coulomb yield criterion...

  16. Analysis and research on Maximum Power Point Tracking of Photovoltaic Array with Fuzzy Logic Control and Three-point Weight Comparison Method

    Institute of Scientific and Technical Information of China (English)

    LIN; Kuang-Jang; LIN; Chii-Ruey

    2010-01-01

    The photovoltaic array has an optimal operating point at which it delivers maximum power. However, this optimal operating point shifts with the strength and angle of solar radiation and with changes in the environment and load. Because these conditions change constantly, it is very difficult to locate the optimal operating point with a fixed mathematical model. Therefore, this study focuses on applying fuzzy logic control theory and the three-point weight comparison method to locate the optimal operating point of the solar panel and achieve maximum efficiency in power generation. The three-point weight comparison method compares points on the characteristic curve of photovoltaic array voltage versus output power; it is a rather simple way to track the maximum power. Fuzzy logic control, on the other hand, can be used to solve problems that cannot be dealt with effectively by calculation rules, such as concepts, contemplation, deductive reasoning, and identification. This paper therefore applies the two methods in successive simulations. The simulation results show that the three-point comparison method is more effective in environments where solar radiation changes frequently, whereas fuzzy logic control tracks more efficiently in environments where solar radiation changes abruptly.
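
    The comparison step at the heart of the three-point method can be sketched as follows; the P-V curve, step size and weighting rule are simplified assumptions, not the authors' controller.

```python
# Hedged sketch of three-point weight comparison MPPT on a toy single-peak P-V curve.
def pv_power(v, v_mpp=30.0, p_max=200.0):
    """Toy photovoltaic P-V characteristic used only for illustration."""
    return max(0.0, p_max - 0.4 * (v - v_mpp) ** 2)

def three_point_step(v, dv=0.5):
    p_minus, p_mid, p_plus = pv_power(v - dv), pv_power(v), pv_power(v + dv)
    # weight +1 when the outer point beats the middle point, -1 otherwise
    w = (1 if p_plus >= p_mid else -1) + (-1 if p_minus >= p_mid else 1)
    if w > 0:
        return v + dv        # move toward higher voltage
    if w < 0:
        return v - dv        # move toward lower voltage
    return v                 # w == 0: at (or straddling) the maximum power point

v = 20.0
for _ in range(40):
    v = three_point_step(v)
print(f"operating voltage after tracking ~ {v:.1f} V")
```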

  17. Primal-Dual Interior Point Multigrid Method for Topology Optimization

    Czech Academy of Sciences Publication Activity Database

    Kočvara, Michal; Mohammed, S.

    2016-01-01

    Roč. 38, č. 5 (2016), B685-B709 ISSN 1064-8275 Grant - others:European Commission - EC(XE) 313781 Institutional support: RVO:67985556 Keywords : topology optimization * multigrid methods * interior point method Subject RIV: BA - General Mathematics Impact factor: 2.195, year: 2016 http://library.utia.cas.cz/separaty/2016/MTR/kocvara-0462418.pdf

  18. Continuous path control of a 5-DOF parallel-serial hybrid robot

    International Nuclear Information System (INIS)

    Uchiyama, Takuma; Terada, Hidetsugu; Mitsuya, Hironori

    2010-01-01

    To realize de-burring with a 5-degree-of-freedom parallel-serial hybrid robot, new forward and inverse kinematic calculation methods based on the 'off-line teaching' method are proposed. This hybrid robot consists of a parallel stage section and a serial stage section; accordingly, the kinematics of each section are calculated individually. A continuous path control algorithm for this hybrid robot is also proposed. To verify its usefulness, a prototype robot controlled with the proposed methods is tested. The verification includes a positioning test, which evaluates the continuous path of the tool center point, and a pose test, which evaluates the pose at the tool center point. As a result, it is confirmed that the hybrid robot moves correctly using the proposed methods.

  19. A Classification-oriented Method of Feature Image Generation for Vehicle-borne Laser Scanning Point Clouds

    Directory of Open Access Journals (Sweden)

    YANG Bisheng

    2016-02-01

    Full Text Available An efficient feature-image generation method for point clouds is proposed to automatically classify dense point clouds into different categories, such as terrain points and building points. The method first sorts the points into different grids by planar projection, then calculates the weights and feature values of the grids according to the distribution of the laser scanning points, and finally generates the feature image of the point cloud. Based on the generated image, contour extraction and tracing are adopted to extract the boundaries and point clouds of man-made objects (e.g. buildings and trees) in 3D. Experiments show that the proposed method provides a promising solution for classifying and extracting man-made objects from vehicle-borne laser scanning point clouds.
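
    The grid-projection step can be sketched as follows; the cell size and the specific per-cell features (point count and height range) are illustrative choices, not necessarily those used by the authors.

```python
# Hedged sketch: project a point cloud onto a planar grid and build per-cell features.
import numpy as np

def feature_image(points, cell=0.5):
    """points: (N, 3) array of x, y, z coordinates from a laser scan."""
    xy_min = points[:, :2].min(axis=0)
    idx = np.floor((points[:, :2] - xy_min) / cell).astype(int)
    nx, ny = idx.max(axis=0) + 1
    count = np.zeros((nx, ny))
    zmin = np.full((nx, ny), np.inf)
    zmax = np.full((nx, ny), -np.inf)
    for (i, j), z in zip(idx, points[:, 2]):
        count[i, j] += 1
        zmin[i, j] = min(zmin[i, j], z)
        zmax[i, j] = max(zmax[i, j], z)
    height_range = np.where(count > 0, zmax - zmin, 0.0)   # large range hints at facades/trees
    return count, height_range

pts = np.random.rand(10_000, 3) * [50.0, 50.0, 10.0]       # stand-in point cloud
count_img, range_img = feature_image(pts)
print(count_img.shape, float(range_img.max()))
```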

  20. Minimizing convex functions by continuous descent methods

    Directory of Open Access Journals (Sweden)

    Sergiu Aizicovici

    2010-01-01

    Full Text Available We study continuous descent methods for minimizing convex functions, defined on general Banach spaces, which are associated with an appropriate complete metric space of vector fields. We show that there exists an everywhere dense open set in this space of vector fields such that each of its elements generates strongly convergent trajectories.

  1. Analytic continuation of quantum Monte Carlo data. Stochastic sampling method

    Energy Technology Data Exchange (ETDEWEB)

    Ghanem, Khaldoon; Koch, Erik [Institute for Advanced Simulation, Forschungszentrum Juelich, 52425 Juelich (Germany)

    2016-07-01

    We apply Bayesian inference to the analytic continuation of quantum Monte Carlo (QMC) data from the imaginary axis to the real axis. Demanding a proper functional Bayesian formulation of any analytic continuation method leads naturally to the stochastic sampling method (StochS) as the Bayesian method with the simplest prior, while it excludes the maximum entropy method and Tikhonov regularization. We present a new efficient algorithm for performing StochS that reduces computational times by orders of magnitude in comparison to earlier StochS methods. We apply the new algorithm to a wide variety of typical test cases: spectral functions and susceptibilities from DMFT and lattice QMC calculations. Results show that StochS performs well and is able to resolve sharp features in the spectrum.

  2. Non-Interior Continuation Method for Solving the Monotone Semidefinite Complementarity Problem

    International Nuclear Information System (INIS)

    Huang, Z.H.; Han, J.

    2003-01-01

    Recently, Chen and Tseng extended non-interior continuation smoothing methods for solving linear/ nonlinear complementarity problems to semidefinite complementarity problems (SDCP). In this paper we propose a non-interior continuation method for solving the monotone SDCP based on the smoothed Fischer-Burmeister function, which is shown to be globally linearly and locally quadratically convergent under suitable assumptions. Our algorithm needs at most to solve a linear system of equations at each iteration. In addition, in our analysis on global linear convergence of the algorithm, we need not use the assumption that the Frechet derivative of the function involved in the SDCP is Lipschitz continuous. For non-interior continuation/ smoothing methods for solving the nonlinear complementarity problem, such an assumption has been used widely in the literature in order to achieve global linear convergence results of the algorithms
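
    For reference, a common scalar form of the (smoothed) Fischer-Burmeister function on which such smoothing methods are built is shown below; the matrix-valued analogue used for the SDCP follows the same pattern, and the exact smoothing used in the paper may differ in detail.

```latex
\phi_{FB}(a,b) = \sqrt{a^{2}+b^{2}} - a - b, \qquad
\phi_{\mu}(a,b) = \sqrt{a^{2}+b^{2}+2\mu^{2}} - a - b \quad (\mu > 0),
```

    with $\phi_{FB}(a,b)=0$ exactly when $a\ge 0$, $b\ge 0$ and $ab=0$, and $\phi_{\mu}\to\phi_{FB}$ as the smoothing parameter $\mu\to 0$.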

  3. Methods for registration laser scanner point clouds in forest stands

    International Nuclear Information System (INIS)

    Bienert, A.; Pech, K.; Maas, H.-G.

    2011-01-01

    Laser scanning is a fast and efficient 3-D measurement technique to capture surface points describing the geometry of a complex object in an accurate and reliable way. Besides airborne laser scanning, terrestrial laser scanning finds growing interest for forestry applications. These two different recording platforms show large differences in resolution, recording area and scan viewing direction. Using both datasets for a combined point cloud analysis may yield advantages because of their largely complementary information. In this paper, methods will be presented to automatically register airborne and terrestrial laser scanner point clouds of a forest stand. In a first step, tree detection is performed in both datasets in an automatic manner. In a second step, corresponding tree positions are determined using RANSAC. Finally, the geometric transformation is performed, divided in a coarse and fine registration. After a coarse registration, the fine registration is done in an iterative manner (ICP) using the point clouds itself. The methods are tested and validated with a dataset of a forest stand. The presented registration results provide accuracies which fulfill the forestry requirements [de
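
    The coarse step (after corresponding tree positions have been found, e.g. with RANSAC) reduces to estimating a rigid transform from matched 2D positions, for which a closed-form Procrustes/Kabsch solution can be used. The sketch below is a generic illustration with invented coordinates, not the authors' implementation.

```python
# Hedged sketch: rigid 2D registration from matched tree positions (Kabsch/Procrustes).
import numpy as np

def rigid_transform_2d(src, dst):
    """Least-squares rotation R and translation t such that dst ~ src @ R.T + t."""
    src_c, dst_c = src - src.mean(axis=0), dst - dst.mean(axis=0)
    u, _, vt = np.linalg.svd(src_c.T @ dst_c)
    r = (u @ vt).T
    if np.linalg.det(r) < 0:                    # guard against an improper rotation
        vt[-1] *= -1
        r = (u @ vt).T
    t = dst.mean(axis=0) - r @ src.mean(axis=0)
    return r, t

# matched tree positions (x, y) in the terrestrial and airborne data (invented numbers)
trees_tls = np.array([[1.0, 2.0], [4.0, 1.0], [3.0, 5.0], [6.0, 4.0]])
theta = np.deg2rad(20.0)
r_true = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
trees_als = trees_tls @ r_true.T + np.array([10.0, -3.0])

r_est, t_est = rigid_transform_2d(trees_tls, trees_als)
print("recovered rotation angle (deg):", np.degrees(np.arctan2(r_est[1, 0], r_est[0, 0])))
print("recovered translation:", t_est)
```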

  4. The computation of fixed points and applications

    CERN Document Server

    Todd, Michael J

    1976-01-01

    Fixed-point algorithms have diverse applications in economics, optimization, game theory and the numerical solution of boundary-value problems. Since Scarf's pioneering work [56,57] on obtaining approximate fixed points of continuous mappings, a great deal of research has been done in extending the applicability and improving the efficiency of fixed-point methods. Much of this work is available only in research papers, although Scarf's book [58] gives a remarkably clear exposition of the power of fixed-point methods. However, the algorithms described by Scarf have been superseded by the more sophisticated restart and homotopy techniques of Merrill [~8,~9] and Eaves and Saigal [1~,16]. To understand the more efficient algorithms one must become familiar with the notions of triangulation and simplicial approximation, whereas Scarf stresses the concept of primitive set. These notes are intended to introduce to a wider audience the most recent fixed-point methods and their applications. Our approach is therefore ...

  5. Methods for converting continuous shrubland ecosystem component values to thematic National Land Cover Database classes

    Science.gov (United States)

    Rigge, Matthew B.; Gass, Leila; Homer, Collin G.; Xian, George Z.

    2017-10-26

    The National Land Cover Database (NLCD) provides thematic land cover and land cover change data at 30-meter spatial resolution for the United States. Although the NLCD is considered to be the leading thematic land cover/land use product and overall classification accuracy across the NLCD is high, performance and consistency in the vast shrub and grasslands of the Western United States is lower than desired. To address these issues and fulfill the needs of stakeholders requiring more accurate rangeland data, the USGS has developed a method to quantify these areas in terms of the continuous cover of several cover components. These components include the cover of shrub, sagebrush (Artemisia spp), big sagebrush (Artemisia tridentata spp.), herbaceous, annual herbaceous, litter, and bare ground, and shrub and sagebrush height. To produce maps of component cover, we collected field data that were then associated with spectral values in WorldView-2 and Landsat imagery using regression tree models. The current report outlines the procedures and results of converting these continuous cover components to three thematic NLCD classes: barren, shrubland, and grassland. To accomplish this, we developed a series of indices and conditional models using continuous cover of shrub, bare ground, herbaceous, and litter as inputs. The continuous cover data are currently available for two large regions in the Western United States. Accuracy of the “cross-walked” product was assessed relative to that of NLCD 2011 at independent validation points (n=787) across these two regions. Overall thematic accuracy of the “cross-walked” product was 0.70, compared to 0.63 for NLCD 2011. The kappa value was considerably higher for the “cross-walked” product at 0.41 compared to 0.28 for NLCD 2011. Accuracy was also evaluated relative to the values of training points (n=75,000) used in the development of the continuous cover components. Again, the “cross-walked” product outperformed NLCD

  6. Integral staggered point-matching method for millimeter-wave reflective diffraction gratings on electron cyclotron heating systems

    International Nuclear Information System (INIS)

    Xia, Donghui; Huang, Mei; Wang, Zhijiang; Zhang, Feng; Zhuang, Ge

    2016-01-01

    Highlights: • The integral staggered point-matching method for design of polarizers on the ECH systems is presented. • The availability of the integral staggered point-matching method is checked by numerical calculations. • Two polarizers are designed with the integral staggered point-matching method and the experimental results are given. - Abstract: The reflective diffraction gratings are widely used in the high power electron cyclotron heating systems for polarization strategy. This paper presents a method which we call “the integral staggered point-matching method” for design of reflective diffraction gratings. This method is based on the integral point-matching method. However, it effectively removes the convergence problems and tedious calculations of the integral point-matching method, making it easier to be used for a beginner. A code is developed based on this method. The calculation results of the integral staggered point-matching method are compared with the integral point-matching method, the coordinate transformation method and the low power measurement results. It indicates that the integral staggered point-matching method can be used as an optional method for the design of reflective diffraction gratings in electron cyclotron heating systems.

  7. Integral staggered point-matching method for millimeter-wave reflective diffraction gratings on electron cyclotron heating systems

    Energy Technology Data Exchange (ETDEWEB)

    Xia, Donghui [State Key Laboratory of Advanced Electromagnetic Engineering and Technology, Huazhong University of Science and Technology, 430074 Wuhan (China); Huang, Mei [Southwestern Institute of Physics, 610041 Chengdu (China); Wang, Zhijiang, E-mail: wangzj@hust.edu.cn [State Key Laboratory of Advanced Electromagnetic Engineering and Technology, Huazhong University of Science and Technology, 430074 Wuhan (China); Zhang, Feng [Southwestern Institute of Physics, 610041 Chengdu (China); Zhuang, Ge [State Key Laboratory of Advanced Electromagnetic Engineering and Technology, Huazhong University of Science and Technology, 430074 Wuhan (China)

    2016-10-15

    Highlights: • The integral staggered point-matching method for design of polarizers on the ECH systems is presented. • The availability of the integral staggered point-matching method is checked by numerical calculations. • Two polarizers are designed with the integral staggered point-matching method and the experimental results are given. - Abstract: The reflective diffraction gratings are widely used in the high power electron cyclotron heating systems for polarization strategy. This paper presents a method which we call “the integral staggered point-matching method” for design of reflective diffraction gratings. This method is based on the integral point-matching method. However, it effectively removes the convergence problems and tedious calculations of the integral point-matching method, making it easier to be used for a beginner. A code is developed based on this method. The calculation results of the integral staggered point-matching method are compared with the integral point-matching method, the coordinate transformation method and the low power measurement results. It indicates that the integral staggered point-matching method can be used as an optional method for the design of reflective diffraction gratings in electron cyclotron heating systems.

  8. Evaluation of the point-centred-quarter method of sampling ...

    African Journals Online (AJOL)

    -quarter method. The parameter which was most efficiently sampled was species composition (relative density), with 90% replicate similarity being achieved with 100 point-centred-quarters. However, this technique cannot be recommended, even ...

  9. The Closest Point Method and Multigrid Solvers for Elliptic Equations on Surfaces

    KAUST Repository

    Chen, Yujia

    2015-01-01

    © 2015 Society for Industrial and Applied Mathematics. Elliptic partial differential equations are important from both application and analysis points of view. In this paper we apply the closest point method to solve elliptic equations on general curved surfaces. Based on the closest point representation of the underlying surface, we formulate an embedding equation for the surface elliptic problem, then discretize it using standard finite differences and interpolation schemes on banded but uniform Cartesian grids. We prove the convergence of the difference scheme for Poisson's equation on a smooth closed curve. In order to solve the resulting large sparse linear systems, we propose a specific geometric multigrid method in the setting of the closest point method. Convergence studies in both the accuracy of the difference scheme and the speed of the multigrid algorithm show that our approaches are effective.

  10. A Hybrid Maximum Power Point Tracking Method for Automobile Exhaust Thermoelectric Generator

    Science.gov (United States)

    Quan, Rui; Zhou, Wei; Yang, Guangyou; Quan, Shuhai

    2017-05-01

    To make full use of the maximum output power of an automobile exhaust thermoelectric generator (AETEG) based on Bi2Te3 thermoelectric modules (TEMs), and taking into account the advantages and disadvantages of existing maximum power point tracking methods as well as the output characteristics of TEMs, a hybrid maximum power point tracking method combining the perturb and observe (P&O) algorithm, quadratic interpolation and constant-voltage tracking is put forward in this paper. First, the method searches for the maximum power point with the P&O algorithm and quadratic interpolation; then it holds the AETEG at its maximum power point with constant-voltage tracking. A synchronous buck converter and controller were implemented in the electric bus of the AETEG applied in a military sports utility vehicle, and the whole system was modeled and simulated in a MATLAB/Simulink environment. Simulation results demonstrate that the maximum output power of the AETEG based on the proposed hybrid method is increased by about 3.0% and 3.7% compared with that using only the P&O algorithm and the quadratic interpolation method, respectively. The tracking time is only 1.4 s, roughly half that of the P&O algorithm and of the quadratic interpolation method. The experimental results demonstrate that the maximum power tracked with the proposed hybrid method is approximately equal to the real value; the hybrid method copes with the voltage fluctuation of the AETEG better than the P&O algorithm alone, and it resolves the issue that the operating point can barely be adjusted with constant-voltage tracking alone when the operating conditions change.
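
    The quadratic-interpolation step mentioned above can be illustrated by fitting a parabola through three sampled (voltage, power) points and jumping to its vertex; the sample values below are assumptions for illustration.

```python
# Hedged sketch: vertex of the parabola through three (V, P) operating points.
def quadratic_vertex(v1, p1, v2, p2, v3, p3):
    """Voltage at the vertex of the parabola passing through the three samples."""
    num = (v1**2 - v2**2) * (p2 - p3) - (v2**2 - v3**2) * (p1 - p2)
    den = 2.0 * ((v1 - v2) * (p2 - p3) - (v2 - v3) * (p1 - p2))
    return num / den

# three operating points bracketing the maximum power point (illustrative data)
v_mpp_est = quadratic_vertex(28.0, 195.0, 30.0, 200.0, 32.0, 198.0)
print(f"interpolated MPP voltage ~ {v_mpp_est:.2f} V")
```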

  11. Optimal Control Method of Parabolic Partial Differential Equations and Its Application to Heat Transfer Model in Continuous Cast Secondary Cooling Zone

    Directory of Open Access Journals (Sweden)

    Yuan Wang

    2015-01-01

    Full Text Available Our work is devoted to a class of optimal control problems for parabolic partial differential equations. Because of the partial differential equation constraints, it is rather difficult to solve the optimization problem. The gradient of the cost function can be found by the adjoint problem approach; based on this approach, the gradient of the cost function is proved to be Lipschitz continuous. An improved conjugate gradient method is applied to solve this optimization problem, and the algorithm is proved to be convergent. The method is applied to the set-point values in the continuous cast secondary cooling zone. Based on real data from a plant, the simulation experiments show that the method can ensure the steel billet quality. From these experimental results, it is concluded that the improved conjugate gradient algorithm is convergent and that the method is effective for optimal control problems governed by partial differential equations.

  12. Gender preference between traditional and PowerPoint methods of teaching gross anatomy.

    Science.gov (United States)

    Nuhu, Saleh; Adamu, Lawan Hassan; Buba, Mohammed Alhaji; Garba, Sani Hyedima; Dalori, Babagana Mohammed; Yusuf, Ashiru Hassan

    2018-01-01

    Teaching and learning are increasingly moving from the traditional chalk-and-talk approach to the modern dynamism of information and communication technology. Medical education is no exception to this dynamism, especially in the teaching of gross anatomy, which serves as one of the bases for understanding human structure. This study was conducted to determine the gender preference of preclinical medical students for the traditional (chalk and talk) versus PowerPoint presentation in the teaching of gross anatomy. This was a cross-sectional and prospective study conducted among preclinical medical students at the University of Maiduguri, Nigeria. Using simple random sampling, a questionnaire was circulated among 280 medical students, of whom 247 filled the questionnaire appropriately. The data obtained were analyzed using SPSS version 20 (IBM Corporation, Armonk, NY, USA) to find, among other things, the method preferred by the students. The majority of the preclinical medical students at the University of Maiduguri preferred the PowerPoint method for the teaching of gross anatomy over the conventional method. A Cronbach alpha value of 0.76 was obtained, which is an acceptable level of internal consistency. A statistically significant association was found between gender and the preferred method of lecture delivery with respect to the clarity of lecture content: females preferred the conventional method of lecture delivery, whereas males preferred the PowerPoint method. For the reproducibility of text and diagrams, females preferred the PowerPoint method of teaching gross anatomy, while males preferred the conventional method. There are gender preferences with regard to the clarity of lecture content and the reproducibility of text and diagrams. This study also revealed that the majority of the preclinical medical students at the University of Maiduguri prefer PowerPoint presentation over the traditional chalk-and-talk method in most of the

  13. A Reference Point Construction Method Using Mobile Terminals and the Indoor Localization Evaluation in the Centroid Method

    Directory of Open Access Journals (Sweden)

    Takahiro Yamaguchi

    2015-05-01

    Full Text Available As smartphones become widespread, a variety of smartphone applications are being developed. This paper proposes a method for indoor localization (i.e., positioning) that uses only smartphones, which are general-purpose mobile terminals, as reference point devices. The method has the following features: (a) the localization system is built with smartphones whose movements are confined to their respective limited areas, and no fixed reference point devices are used; (b) the method does not depend on the wireless performance of the smartphones and does not require information about the propagation characteristics of the radio waves sent from the reference point devices; and (c) the method determines the location at the application layer, where location information can be easily incorporated into high-level services. We have evaluated the localization accuracy of the proposed method by building a software emulator that models an underground shopping mall. We have confirmed that the determined location lies within a small area in which the user can find target objects visually.
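
    In its simplest form, the centroid method estimates the terminal's position as the (optionally weighted) centroid of the reference points it can hear. The coordinates and weights below are illustrative assumptions.

```python
# Hedged sketch of centroid-based indoor localization.
import numpy as np

# known positions of the reference-point smartphones heard by the target device
ref_points = np.array([[2.0, 3.0], [8.0, 3.5], [5.0, 9.0]])
# optional weights, e.g. derived from received signal strength (assumed values)
weights = np.array([1.0, 0.6, 0.8])

estimate = (weights[:, None] * ref_points).sum(axis=0) / weights.sum()
print("estimated position:", estimate)
```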

  14. Improved DEA Cross Efficiency Evaluation Method Based on Ideal and Anti-Ideal Points

    Directory of Open Access Journals (Sweden)

    Qiang Hou

    2018-01-01

    Full Text Available A new model is introduced into the process of evaluating the efficiency value of decision making units (DMUs) through the data envelopment analysis (DEA) method. Two virtual DMUs, called the ideal point DMU and the anti-ideal point DMU, are combined to form a comprehensive model based on the DEA method. The ideal point DMU adopts a self-assessment system in line with the efficiency concept, while the anti-ideal point DMU adopts a peer-assessment system in line with the fairness concept. The two distinctive ideal point models are introduced into the DEA method and combined by means of the variance ratio. A reasonable result can be obtained from the new model. Numerical examples are provided to illustrate the newly constructed model and to certify its rationality through the relevant analysis with the traditional DEA model.

  15. Evaluation of a 3D point cloud tetrahedral tomographic reconstruction method

    Science.gov (United States)

    Pereira, N. F.; Sitek, A.

    2010-09-01

    Tomographic reconstruction on an irregular grid may be superior to reconstruction on a regular grid. This is achieved through an appropriate choice of the image space model, the selection of an optimal set of points and the use of any available prior information during the reconstruction process. Accordingly, a number of reconstruction-related parameters must be optimized for best performance. In this work, a 3D point cloud tetrahedral mesh reconstruction method is evaluated for quantitative tasks. A linear image model is employed to obtain the reconstruction system matrix and five point generation strategies are studied. The evaluation is performed using the recovery coefficient, as well as voxel- and template-based estimates of bias and variance measures, computed over specific regions in the reconstructed image. A similar analysis is performed for regular grid reconstructions that use voxel basis functions. The maximum likelihood expectation maximization reconstruction algorithm is used. For the tetrahedral reconstructions, of the five point generation methods that are evaluated, three use image priors. For evaluation purposes, an object consisting of overlapping spheres with varying activity is simulated. The exact parallel projection data of this object are obtained analytically using a parallel projector, and multiple Poisson noise realizations of these exact data are generated and reconstructed using the different point generation strategies. The unconstrained nature of point placement in some of the irregular mesh-based reconstruction strategies has superior activity recovery for small, low-contrast image regions. The results show that, with an appropriately generated set of mesh points, the irregular grid reconstruction methods can out-perform reconstructions on a regular grid for mathematical phantoms, in terms of the performance measures evaluated.
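
    Both the regular-grid and tetrahedral-mesh reconstructions use the same multiplicative MLEM update; only the system matrix (voxel versus tetrahedral basis functions) changes. The sketch below shows the update on a tiny random system matrix, which is purely illustrative.

```python
# Hedged sketch of the MLEM update x <- x * (A^T (y / Ax)) / (A^T 1).
import numpy as np

rng = np.random.default_rng(1)
n_bins, n_basis = 64, 32
A = rng.uniform(0.0, 1.0, size=(n_bins, n_basis))       # system matrix (projector)
x_true = rng.uniform(0.5, 2.0, size=n_basis)
y = rng.poisson(A @ x_true).astype(float)                # noisy projection data

x = np.ones(n_basis)                                     # non-negative initial image
sensitivity = A.sum(axis=0)                              # A^T 1, per-basis sensitivity
for _ in range(100):
    ratio = y / np.clip(A @ x, 1e-12, None)              # measured / current estimate
    x *= (A.T @ ratio) / sensitivity                     # multiplicative MLEM update
print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```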

  16. Evaluation of a 3D point cloud tetrahedral tomographic reconstruction method

    International Nuclear Information System (INIS)

    Pereira, N F; Sitek, A

    2010-01-01

    Tomographic reconstruction on an irregular grid may be superior to reconstruction on a regular grid. This is achieved through an appropriate choice of the image space model, the selection of an optimal set of points and the use of any available prior information during the reconstruction process. Accordingly, a number of reconstruction-related parameters must be optimized for best performance. In this work, a 3D point cloud tetrahedral mesh reconstruction method is evaluated for quantitative tasks. A linear image model is employed to obtain the reconstruction system matrix and five point generation strategies are studied. The evaluation is performed using the recovery coefficient, as well as voxel- and template-based estimates of bias and variance measures, computed over specific regions in the reconstructed image. A similar analysis is performed for regular grid reconstructions that use voxel basis functions. The maximum likelihood expectation maximization reconstruction algorithm is used. For the tetrahedral reconstructions, of the five point generation methods that are evaluated, three use image priors. For evaluation purposes, an object consisting of overlapping spheres with varying activity is simulated. The exact parallel projection data of this object are obtained analytically using a parallel projector, and multiple Poisson noise realizations of these exact data are generated and reconstructed using the different point generation strategies. The unconstrained nature of point placement in some of the irregular mesh-based reconstruction strategies has superior activity recovery for small, low-contrast image regions. The results show that, with an appropriately generated set of mesh points, the irregular grid reconstruction methods can out-perform reconstructions on a regular grid for mathematical phantoms, in terms of the performance measures evaluated.

  17. Evaluation of a 3D point cloud tetrahedral tomographic reconstruction method

    Energy Technology Data Exchange (ETDEWEB)

    Pereira, N F; Sitek, A, E-mail: nfp4@bwh.harvard.ed, E-mail: asitek@bwh.harvard.ed [Department of Radiology, Brigham and Women' s Hospital-Harvard Medical School Boston, MA (United States)

    2010-09-21

    Tomographic reconstruction on an irregular grid may be superior to reconstruction on a regular grid. This is achieved through an appropriate choice of the image space model, the selection of an optimal set of points and the use of any available prior information during the reconstruction process. Accordingly, a number of reconstruction-related parameters must be optimized for best performance. In this work, a 3D point cloud tetrahedral mesh reconstruction method is evaluated for quantitative tasks. A linear image model is employed to obtain the reconstruction system matrix and five point generation strategies are studied. The evaluation is performed using the recovery coefficient, as well as voxel- and template-based estimates of bias and variance measures, computed over specific regions in the reconstructed image. A similar analysis is performed for regular grid reconstructions that use voxel basis functions. The maximum likelihood expectation maximization reconstruction algorithm is used. For the tetrahedral reconstructions, of the five point generation methods that are evaluated, three use image priors. For evaluation purposes, an object consisting of overlapping spheres with varying activity is simulated. The exact parallel projection data of this object are obtained analytically using a parallel projector, and multiple Poisson noise realizations of these exact data are generated and reconstructed using the different point generation strategies. The unconstrained nature of point placement in some of the irregular mesh-based reconstruction strategies has superior activity recovery for small, low-contrast image regions. The results show that, with an appropriately generated set of mesh points, the irregular grid reconstruction methods can out-perform reconstructions on a regular grid for mathematical phantoms, in terms of the performance measures evaluated.

  18. Comparative analysis among several methods used to solve the point kinetic equations

    International Nuclear Information System (INIS)

    Nunes, Anderson L.; Goncalves, Alessandro da C.; Martinez, Aquilino S.; Silva, Fernando Carvalho da

    2007-01-01

    The main objective of this work is the development of a methodology for comparing several methods for solving the point kinetics equations. The evaluated methods are: the finite differences method, the stiffness confinement method, the improved stiffness confinement method and the piecewise constant approximations method. These methods were implemented and compared through a systematic analysis that basically consists of determining which of the methods consumes less computational time with higher precision. A relative performance factor, whose function is to combine both criteria, was calculated in order to reach this goal. Through the analysis of this performance factor it is possible to choose the best method for the solution of the point kinetics equations. (author)
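
    For context, the point kinetics equations that all of these solvers target are the standard stiff system (shown here in the usual six-delayed-group form):

```latex
\frac{dn(t)}{dt} = \frac{\rho(t)-\beta}{\Lambda}\,n(t) + \sum_{i=1}^{6}\lambda_i C_i(t),
\qquad
\frac{dC_i(t)}{dt} = \frac{\beta_i}{\Lambda}\,n(t) - \lambda_i C_i(t), \quad i = 1,\dots,6,
```

    where $n$ is the neutron density, $C_i$ the delayed-neutron precursor concentrations, $\rho$ the reactivity, $\beta=\sum_i \beta_i$ the total delayed neutron fraction, $\Lambda$ the prompt neutron generation time, and $\lambda_i$ the precursor decay constants; the large spread between $\Lambda$ and the $\lambda_i$ is what makes the system stiff.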

  19. Comparative analysis among several methods used to solve the point kinetic equations

    Energy Technology Data Exchange (ETDEWEB)

    Nunes, Anderson L.; Goncalves, Alessandro da C.; Martinez, Aquilino S.; Silva, Fernando Carvalho da [Universidade Federal, Rio de Janeiro, RJ (Brazil). Coordenacao dos Programas de Pos-graduacao de Engenharia. Programa de Engenharia Nuclear; E-mails: alupo@if.ufrj.br; agoncalves@con.ufrj.br; aquilino@lmp.ufrj.br; fernando@con.ufrj.br

    2007-07-01

    The main objective of this work is the development of a methodology for comparing several methods for solving the point kinetics equations. The evaluated methods are: the finite differences method, the stiffness confinement method, the improved stiffness confinement method and the piecewise constant approximations method. These methods were implemented and compared through a systematic analysis that basically consists of determining which of the methods consumes less computational time with higher precision. A relative performance factor, whose function is to combine both criteria, was calculated in order to reach this goal. Through the analysis of this performance factor it is possible to choose the best method for the solution of the point kinetics equations. (author)

  20. Interior-Point Method for Non-Linear Non-Convex Optimization

    Czech Academy of Sciences Publication Activity Database

    Lukšan, Ladislav; Matonoha, Ctirad; Vlček, Jan

    2004-01-01

    Roč. 11, č. 5-6 (2004), s. 431-453 ISSN 1070-5325 R&D Projects: GA AV ČR IAA1030103 Institutional research plan: CEZ:AV0Z1030915 Keywords : non-linear programming * interior point methods * indefinite systems * indefinite preconditioners * preconditioned conjugate gradient method * merit functions * algorithms * computational experiments Subject RIV: BA - General Mathematics Impact factor: 0.727, year: 2004

  1. Accurate facade feature extraction method for buildings from three-dimensional point cloud data considering structural information

    Science.gov (United States)

    Wang, Yongzhi; Ma, Yuqing; Zhu, A.-xing; Zhao, Hui; Liao, Lixia

    2018-05-01

    Facade features represent segmentations of building surfaces and can serve as a building framework. Extracting facade features from three-dimensional (3D) point cloud data (3D PCD) is an efficient method for 3D building modeling. By combining the advantages of 3D PCD and two-dimensional optical images, this study describes the creation of a highly accurate building facade feature extraction method from 3D PCD with a focus on structural information. The new extraction method involves three major steps: image feature extraction, exploration of the mapping method between the image features and 3D PCD, and optimization of the initial 3D PCD facade features considering structural information. Results show that the new method can extract the 3D PCD facade features of buildings more accurately and continuously. The new method is validated using a case study. In addition, the effectiveness of the new method is demonstrated by comparing it with the range image-extraction method and the optical image-extraction method in the absence of structural information. The 3D PCD facade features extracted by the new method can be applied in many fields, such as 3D building modeling and building information modeling.

  2. Primal Interior-Point Method for Large Sparse Minimax Optimization

    Czech Academy of Sciences Publication Activity Database

    Lukšan, Ladislav; Matonoha, Ctirad; Vlček, Jan

    2009-01-01

    Roč. 45, č. 5 (2009), s. 841-864 ISSN 0023-5954 R&D Projects: GA AV ČR IAA1030405; GA ČR GP201/06/P397 Institutional research plan: CEZ:AV0Z10300504 Keywords : unconstrained optimization * large-scale optimization * minimax optimization * nonsmooth optimization * interior-point methods * modified Newton methods * variable metric methods * computational experiments Subject RIV: BA - General Mathematics Impact factor: 0.445, year: 2009 http://dml.cz/handle/10338.dmlcz/140034

  3. Method and apparatus for continuous sampling

    International Nuclear Information System (INIS)

    Marcussen, C.

    1982-01-01

    An apparatus and method for continuously sampling a pulverous material flow includes means for extracting a representative subflow from a pulverous material flow. A screw conveyor is provided to cause the extracted subflow to be pushed upwardly through a duct to an overflow. Means for transmitting a radiation beam transversely to the subflow in the duct, and means for sensing the transmitted beam through opposite pairs of windows in the duct are provided to measure the concentration of one or more constituents in the subflow. (author)

  4. Evaluating the impact of continuous quality improvement methods at hospitals in Tanzania: a cluster-randomized trial.

    Science.gov (United States)

    Kamiya, Yusuke; Ishijma, Hisahiro; Hagiwara, Akiko; Takahashi, Shizu; Ngonyani, Henook A M; Samky, Eleuter

    2017-02-01

    To evaluate the impact of implementing continuous quality improvement (CQI) methods on patients' experiences and satisfaction in Tanzania. A cluster-randomized trial that randomly allocated district-level hospitals to a treatment group and a control group was conducted in sixteen district-level hospitals in the Kilimanjaro and Manyara regions of Tanzania. Outpatient exit surveys targeted a total of 3292 individuals, 1688 in the treatment and 1604 in the control group, at 3 time-points between September 2011 and September 2012. The intervention was implementation of the 5S (Sort, Set, Shine, Standardize, Sustain) approach as a CQI method at outpatient departments over 12 months; the outcomes were cleanliness, waiting time, patient experience and patient satisfaction. The 5S increased cleanliness in the outpatient department, patients' subjective waiting time and overall satisfaction. However, negligible effects were confirmed for patients' experiences of hospital staff behaviours. The 5S as a CQI method is effective in enhancing the hospital environment and the service delivery that are subjectively assessed by outpatients, even during a short intervention period. Nevertheless, continuous efforts will be needed to connect CQI practices with further improvement in the delivery of quality health care. © The Author 2016. Published by Oxford University Press in association with the International Society for Quality in Health Care. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com

  5. Rainfall Deduction Method for Estimating Non-Point Source Pollution Load for Watershed

    OpenAIRE

    Cai, Ming; Li, Huai-en; KAWAKAMI, Yoji

    2004-01-01

    Water pollution can be divided into point source pollution (PSP) and non-point source pollution (NSP). Since point source pollution has been controlled, non-point source pollution is becoming the main pollution source. The prediction of NSP load is becoming increasingly important in water pollution control and planning at the watershed scale. Considering the shortage of NSP monitoring data in China, a practical estimation method of non-point source pollution load --- rainfall deduction met...

  6. Evaluation of null-point detection methods on simulation data

    Science.gov (United States)

    Olshevsky, Vyacheslav; Fu, Huishan; Vaivads, Andris; Khotyaintsev, Yuri; Lapenta, Giovanni; Markidis, Stefano

    2014-05-01

    We model the measurements of artificial spacecraft that resemble the configuration of CLUSTER propagating in a particle-in-cell simulation of turbulent magnetic reconnection. The simulation domain contains multiple isolated X-type null-points, but the majority are O-type null-points. Simulations show that current pinches surrounded by twisted fields, analogous to laboratory pinches, are formed along the sequences of O-type nulls. In the simulation, the magnetic reconnection is mainly driven by the kinking of the pinches, at spatial scales of several ion inertial lengths. We compute the locations of magnetic null-points and detect their type. When the satellites are separated by fractions of the ion inertial length, as is the case for CLUSTER, they are able to locate both the isolated null-points and the pinches. We apply the method to real CLUSTER data and speculate on how common pinches are in the magnetosphere, and whether they play a dominant role in the dissipation of magnetic energy.

  7. A Bayesian MCMC method for point process models with intractable normalising constants

    DEFF Research Database (Denmark)

    Berthelsen, Kasper Klitgaard; Møller, Jesper

    2004-01-01

    to simulate from the "unknown distribution", perfect simulation algorithms become useful. We illustrate the method in cases where the likelihood is given by a Markov point process model. Particularly, we consider semi-parametric Bayesian inference in connection to both inhomogeneous Markov point process models...... and pairwise interaction point processes....

  8. Fast calculation method of computer-generated hologram using a depth camera with point cloud gridding

    Science.gov (United States)

    Zhao, Yu; Shi, Chen-Xiao; Kwon, Ki-Chul; Piao, Yan-Ling; Piao, Mei-Lan; Kim, Nam

    2018-03-01

    We propose a fast calculation method for a computer-generated hologram (CGH) of real objects that uses a point cloud gridding method. The depth information of the scene is acquired using a depth camera and the point cloud model is reconstructed virtually. Because each point of the point cloud is distributed precisely to the exact coordinates of each layer, each point of the point cloud can be classified into grids according to its depth. A diffraction calculation is performed on the grids using a fast Fourier transform (FFT) to obtain a CGH. The computational complexity is reduced dramatically in comparison with conventional methods. The feasibility of the proposed method was confirmed by numerical and optical experiments.
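
    A rough sketch of the layer-oriented idea described above (bin the points by depth, propagate each layer to the hologram plane with an FFT-based angular-spectrum kernel, and accumulate the fields) is given below. The wavelength, sampling pitch, grid size and the assumption that coordinates are centred on the optical axis are illustrative choices, not the authors' settings, and the depth-camera pipeline is omitted.

      import numpy as np

      # Layer-based CGH sketch: bin points by depth, propagate each layer with the
      # angular-spectrum method (one FFT pair per layer) and accumulate the fields.
      wavelength, pitch, N = 633e-9, 8e-6, 512          # illustrative optical parameters
      k = 2.0 * np.pi / wavelength
      fx = np.fft.fftfreq(N, d=pitch)
      FX, FY = np.meshgrid(fx, fx)
      arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2

      def propagate(field, z):
          """Angular-spectrum propagation of a sampled complex field over distance z."""
          H = np.exp(1j * k * z * np.sqrt(np.maximum(arg, 0.0)))   # evanescent part suppressed
          return np.fft.ifft2(np.fft.fft2(field) * H)

      def cgh_from_points(points, amplitudes, n_layers=16):
          """points: (M, 3) x, y, z in metres centred on the optical axis; returns the hologram field."""
          z = points[:, 2]
          edges = np.linspace(z.min(), z.max() + 1e-12, n_layers + 1)
          hologram = np.zeros((N, N), dtype=complex)
          for i in range(n_layers):
              sel = (z >= edges[i]) & (z < edges[i + 1])
              if not sel.any():
                  continue
              layer = np.zeros((N, N), dtype=complex)
              ix = np.clip((points[sel, 0] / pitch + N // 2).astype(int), 0, N - 1)
              iy = np.clip((points[sel, 1] / pitch + N // 2).astype(int), 0, N - 1)
              layer[iy, ix] = amplitudes[sel]
              hologram += propagate(layer, edges[i:i + 2].mean())
          return hologram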

  9. Hydrothermal optimal power flow using continuation method

    International Nuclear Information System (INIS)

    Raoofat, M.; Seifi, H.

    2001-01-01

    The problem of optimal economic operation of hydrothermal electric power systems is solved using a powerful continuation method. While in the conventional approach fixed generation voltages are used to avoid convergence problems, in this algorithm they are treated as variables so that better solutions can be obtained. The algorithm is tested on a typical 5-bus network and the 17-bus New Zealand network. Its capabilities and promising results are assessed.

  10. Method to minimize the low-frequency neutral-point voltage oscillations with time-offset injection for neutral-point-clamped inverters

    DEFF Research Database (Denmark)

    Choi, Uimin; Lee, Kyo-Beum; Blaabjerg, Frede

    2013-01-01

    This paper proposes a method to reduce the low-frequency neutral-point voltage oscillations. The neutral-point voltage oscillations are considerably reduced by adding a time-offset to the three phase turn-on times. The proper time-offset is simply calculated considering the phase currents and dwell...

  11. Symbol recognition produced by points of tactile stimulation: the illusion of linear continuity.

    Science.gov (United States)

    Gonzales, G R

    1996-11-01

    To determine whether tactile receptive communication is possible through the use of a mechanical device that produces the phi phenomenon on the body surface. Twenty-six subjects (11 blind and 15 sighted participants) were tested with use of a tactile communication device (TCD) that produces an illusion of linear continuity forming numbers on the dorsal aspect of the wrist. Recognition of a number or number set was the goal. A TCD with protruding and vibrating solenoids produced sequentially delivered points of cutaneous stimulation along a pattern resembling numbers and created the illusion of dragging a vibrating stylet to form numbers, similar to what might be felt by testing for graphesthesia. Blind subjects recognized numbers with fewer trials than did sighted subjects, although all subjects were able to recognize all the numbers produced by the TCD. Subjects who had been blind since birth and had no prior tactile exposure to numbers were able to draw the numbers after experiencing them delivered by the TCD even though they did not recognize their meaning. The phi phenomenon is probably responsible for the illusion of continuous lines in the shape of numbers as produced by the TCD. This tactile illusion could potentially be used for more complex tactile communications such as letters and words.

  12. Method for continuous synthesis of metal oxide powders

    Science.gov (United States)

    Berry, David A.; Haynes, Daniel J.; Shekhawat, Dushyant; Smith, Mark W.

    2015-09-08

    A method for the rapid and continuous production of crystalline mixed-metal oxides from a precursor solution comprised of a polymerizing agent, chelated metal ions, and a solvent. The method discharges solution droplets of less than 500 µm diameter using an atomizing or spray-type process into a reactor having multiple temperature zones. Rapid evaporation occurs in a first zone, followed by mixed-metal organic foam formation in a second zone, followed by amorphous and partially crystalline oxide precursor formation in a third zone, followed by formation of the substantially crystalline mixed-metal oxide in a fourth zone. The method operates in a continuous rather than batch manner, and the use of small droplets as the starting material for the temperature-based process allows relatively high temperature processing. In a particular embodiment, the first zone operates at 100-300 °C, the second zone operates at 300-700 °C, the third operates at 700-1000 °C, and the fourth zone operates at at least 700 °C. The resulting crystalline mixed-metal oxides display a high degree of crystallinity and sphericity with typical diameters on the order of 50 µm or less.

  13. Two-point method uncertainty during control and measurement of cylindrical element diameters

    Science.gov (United States)

    Glukhov, V. I.; Shalay, V. V.; Radev, H.

    2018-04-01

    The article is devoted to the urgent problem of the reliability of measurements of the geometric specifications of technical products. Its purpose is to improve the quality of control of part linear sizes by the two-point measurement method, and its task is to investigate the methodical extended uncertainties in measuring the linear sizes of cylindrical elements. The investigation method is geometric modeling of the shape and location deviations of the element surfaces in a rectangular coordinate system. The studies were carried out for elements of various service use, taking into account their informativeness, which corresponds to the kinematic pair classes of theoretical mechanics and to the number of degrees of freedom constrained by the datum element function. Cylindrical elements with informativeness of 4, 2, 1 and 0 (zero) were investigated. The uncertainty of two-point measurements was estimated by comparing the results of linear dimension measurements with the maximum and minimum functional diameters of the element material. Methodical uncertainty arises when cylindrical elements with maximum informativeness have shape deviations of the cut and curvature types, and it arises when measuring the element average size for all types of shape deviations. The two-point measurement method cannot take into account the location deviations of a dimensional element, so its use for elements with less than maximum informativeness creates unacceptable methodical uncertainties in measurements of the maximum, minimum and mean linear dimensions. Similar methodical uncertainties also exist in the arbitration control of the linear dimensions of cylindrical elements by limiting two-point gauges.

  14. 'Continuation rate', 'use-effectiveness' and their assessment for the diaphragm and jelly method.

    Science.gov (United States)

    Chandrasekaran, C; Karkal, M

    1972-11-01

    The application of the life-table technique in the calculation of use-effectiveness of a contraceptive was proposed by Potter in 1963.(1) The technique was also found to be useful in assessing the duration for which the use of a contraceptive was continued. The keen interest that existed in the use of the IUD in the mid-1960's was reflected in the terminology developed for assessment of the continuity of use. 'Retention rate' was a frequently used index.(2) Because of the development of the concept of segments whose end-period determined either termination of the use of a method or its continuance on a cut-off date, 'closure rate' and 'termination rate' have been used as measures of the discontinuance of the use of methods, primarily of the IUD.(3) While discussing concepts relating to acceptance, use and effectiveness of family planning methods more generally, an expert group suggested that 'continuation' should be used to denote that a client (or a couple) had begun to practise a method and that the method was still being practised.(4) Since this group defined 'an acceptor' as a person taking service and/or advice, i.e. having an IUD insertion or a sterilization operation or receiving supplies (or advice on methods such as 'rhythm' or coitus-interruptus with the intent of using the method), the base for the assessment of continuation rates, according to this group, would be only those acceptors who had begun using the method. The life-table method has also been used for the study of the continuation rate for pill acceptors.(5) Balakrishnan, et al., made a study of continuation rates of oral contraceptives using the multiple decrement life-table technique.(6)
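
    The life-table calculation referred to above reduces, in its simplest single-decrement form, to a running product of monthly continuation probabilities. A minimal sketch follows; the counts and the half-exposure convention for censored segments are illustrative assumptions, not data from the article.

      # Life-table continuation rate: cumulative probability of still using the method
      # after each month, with censored segments given half a month of exposure.
      # The counts below are invented for illustration.
      starting = 1000
      discontinued = [40, 35, 30, 28, 25, 22]   # terminations in months 1..6
      censored     = [10, 12,  8, 15, 11,  9]   # segments closed by the cut-off date in months 1..6

      exposed, rate = starting, 1.0
      for month, (d, w) in enumerate(zip(discontinued, censored), start=1):
          at_risk = exposed - w / 2.0            # conventional half-exposure for censored segments
          rate *= 1.0 - d / at_risk              # probability of continuing through this month
          exposed -= d + w
          print(f"month {month}: cumulative continuation rate = {rate:.3f}")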

  15. Numerical continuation methods for dynamical systems path following and boundary value problems

    CERN Document Server

    Krauskopf, Bernd; Galan-Vioque, Jorge

    2007-01-01

    Path following in combination with boundary value problem solvers has emerged as a continuing and strong influence in the development of dynamical systems theory and its application. It is widely acknowledged that the software package AUTO - developed by Eusebius J. Doedel about thirty years ago and further expanded and developed ever since - plays a central role in the brief history of numerical continuation. This book has been compiled on the occasion of Sebius Doedel's 60th birthday. Bringing together for the first time a large amount of material in a single, accessible source, it is hoped that the book will become the natural entry point for researchers in diverse disciplines who wish to learn what numerical continuation techniques can achieve. The book opens with a foreword by Herbert B. Keller and lecture notes by Sebius Doedel himself that introduce the basic concepts of numerical bifurcation analysis. The other chapters by leading experts discuss continuation for various types of systems and objects ...

  16. Interior Point Methods on GPU with application to Model Predictive Control

    DEFF Research Database (Denmark)

    Gade-Nielsen, Nicolai Fog

    The goal of this thesis is to investigate the application of interior point methods to solve dynamical optimization problems, using a graphical processing unit (GPU), with a focus on problems arising in Model Predictive Control (MPC). Multi-core processors have been available for over ten years now...... software package called GPUOPT, available under the non-restrictive MIT license. GPUOPT includes a primal-dual interior-point method, which supports both the CPU and the GPU. It is implemented as multiple components, where the matrix operations and the solver for the Newton directions are separated...
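
    Independent of GPUOPT itself, the core primal-dual Newton iteration for a linear program in standard form (min c'x subject to Ax = b, x >= 0) can be sketched as below. This is a textbook path-following step assuming A has full row rank, with dense linear algebra standing in for the GPU kernels and MPC structure discussed in the thesis.

      import numpy as np

      def primal_dual_lp(A, b, c, iters=50, sigma=0.1, tol=1e-8):
          """Basic primal-dual path-following method for min c'x s.t. Ax = b, x >= 0."""
          m, n = A.shape
          x, s, y = np.ones(n), np.ones(n), np.zeros(m)
          for _ in range(iters):
              r_p = b - A @ x                       # primal residual
              r_d = c - A.T @ y - s                 # dual residual
              mu = x @ s / n                        # duality measure
              if mu < tol and np.linalg.norm(r_p) < tol and np.linalg.norm(r_d) < tol:
                  break
              r_c = sigma * mu - x * s              # perturbed complementarity residual
              d = x / s
              # Newton direction via the normal equations A diag(x/s) A' dy = rhs.
              M = (A * d) @ A.T
              rhs = r_p - A @ ((r_c - x * r_d) / s)
              dy = np.linalg.solve(M, rhs)
              ds = r_d - A.T @ dy
              dx = (r_c - x * ds) / s
              # Damped step lengths keeping x and s strictly positive.
              a_p = min(1.0, 0.95 * np.min(-x[dx < 0] / dx[dx < 0])) if (dx < 0).any() else 1.0
              a_d = min(1.0, 0.95 * np.min(-s[ds < 0] / ds[ds < 0])) if (ds < 0).any() else 1.0
              x, y, s = x + a_p * dx, y + a_d * dy, s + a_d * ds
          return x, y, s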

  17. Evaluation of the performance of a point-of-care method for total and differential white blood cell count in clozapine users.

    Science.gov (United States)

    Bui, H N; Bogers, J P A M; Cohen, D; Njo, T; Herruer, M H

    2016-12-01

    We evaluated the performance of the HemoCue WBC DIFF, a point-of-care device for total and differential white cell count, primarily to test its suitability for the mandatory white blood cell monitoring in clozapine use. Leukocyte count and 5-part differentiation were performed by the point-of-care device and by the routine laboratory method in venous EDTA-blood samples from 20 clozapine users, 20 neutropenic patients, and 20 healthy volunteers. A capillary sample was also drawn from the volunteers. Intra-assay reproducibility and drop-to-drop variation were tested. The correlation between the two methods in venous samples was r > 0.95 for leukocyte, neutrophil, and lymphocyte counts. The correlation between the point-of-care (capillary sample) and routine (venous sample) methods for these cells was 0.772, 0.817 and 0.798, respectively. Intra-assay reproducibility was sufficient only for leukocyte and neutrophil counts. The point-of-care device can be used to screen for leukocyte and neutrophil counts. Because of the relatively high measurement uncertainty and poor correlation with venous samples, we recommend repeating the measurement with a venous sample if cell counts are in the lower reference range. In the case of clozapine therapy, neutropenia can probably be excluded if high neutrophil counts are found, and patients can continue their therapy. © 2016 John Wiley & Sons Ltd.

  18. a Gross Error Elimination Method for Point Cloud Data Based on Kd-Tree

    Science.gov (United States)

    Kang, Q.; Huang, G.; Yang, S.

    2018-04-01

    Point cloud data have become one of the most widely used data sources in the field of remote sensing. Key steps in the pre-processing of point cloud data focus on gross error elimination and quality control. Owing to the large volume of point cloud data, existing gross error elimination methods consume massive amounts of memory and time. This paper employs a new method that uses a Kd-tree to organize the data, a k-nearest neighbor search to examine each point's neighborhood, and an appropriate threshold to judge whether a target point is an outlier. Experimental results show that the proposed algorithm helps to delete gross errors in point cloud data while decreasing memory consumption and improving efficiency.
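
    A compact version of this kind of k-nearest-neighbor gross-error filter can be written with a kd-tree as sketched below; the neighborhood size k and the sigma-based threshold are illustrative choices rather than the values used in the paper.

      import numpy as np
      from scipy.spatial import cKDTree

      def remove_gross_errors(points, k=8, n_sigma=2.0):
          """Drop points whose mean distance to their k nearest neighbors is anomalously large.

          points : (N, 3) array; returns the filtered points and a boolean inlier mask.
          """
          tree = cKDTree(points)
          dists, _ = tree.query(points, k=k + 1)     # first neighbor is the point itself
          mean_d = dists[:, 1:].mean(axis=1)
          threshold = mean_d.mean() + n_sigma * mean_d.std()
          mask = mean_d <= threshold
          return points[mask], mask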

  19. A GROSS ERROR ELIMINATION METHOD FOR POINT CLOUD DATA BASED ON KD-TREE

    Directory of Open Access Journals (Sweden)

    Q. Kang

    2018-04-01

    Full Text Available Point cloud data have become one of the most widely used data sources in the field of remote sensing. Key steps in the pre-processing of point cloud data focus on gross error elimination and quality control. Owing to the large volume of point cloud data, existing gross error elimination methods consume massive amounts of memory and time. This paper employs a new method that uses a Kd-tree to organize the data, a k-nearest neighbor search to examine each point's neighborhood, and an appropriate threshold to judge whether a target point is an outlier. Experimental results show that the proposed algorithm helps to delete gross errors in point cloud data while decreasing memory consumption and improving efficiency.

  20. CREPT-MCNP code for efficiency calibration of HPGe detectors with the representative point method.

    Science.gov (United States)

    Saegusa, Jun

    2008-01-01

    The representative point method for the efficiency calibration of volume samples has been previously proposed. For smoothly implementing the method, a calculation code named CREPT-MCNP has been developed. The code estimates the position of a representative point which is intrinsic to each shape of volume sample. The self-absorption correction factors are also given to make correction on the efficiencies measured at the representative point with a standard point source. Features of the CREPT-MCNP code are presented.

  1. Detection of Dew-Point by substantial Raman Band Frequency Jumps (A new Method)

    DEFF Research Database (Denmark)

    Hansen, Susanne Brunsgaard; Berg, Rolf W.; Stenby, Erling Halfdan

    Detection of Dew-Point by substantial Raman Band Frequency Jumps (A new Method). See poster at http://www.kemi.dtu.dk/~ajo/rolf/jumps.pdf

  2. The Closest Point Method and Multigrid Solvers for Elliptic Equations on Surfaces

    KAUST Repository

    Chen, Yujia; Macdonald, Colin B.

    2015-01-01

    © 2015 Society for Industrial and Applied Mathematics. Elliptic partial differential equations are important from both application and analysis points of view. In this paper we apply the closest point method to solve elliptic equations on general

  3. Analysis of Spatial Interpolation in the Material-Point Method

    DEFF Research Database (Denmark)

    Andersen, Søren; Andersen, Lars

    2010-01-01

    are obtained using quadratic elements. It is shown that for more complex problems, the use of partially negative shape functions is inconsistent with the material-point method in its current form, necessitating other types of interpolation such as cubic splines in order to obtain smoother representations...

  4. New methods to interpolate large volume of data from points or particles (Mesh-Free) methods application for its scientific visualization

    International Nuclear Information System (INIS)

    Reyes Lopez, Y.; Yervilla Herrera, H.; Viamontes Esquivel, A.; Recarey Morfa, C. A.

    2009-01-01

    In this paper we develop a new method for interpolating large volumes of scattered data, focused mainly on the results of applying mesh-free methods, point methods and particle methods. In this approach, local radial basis functions are used as the interpolating functions. We also use an octree as the data structure that accelerates the localization of the data influencing the interpolated value at a new point, speeding up the application of scientific visualization techniques to generate images from the large data volumes produced by mesh-free, point and particle methods in the solution of diverse physical-mathematical models. As an example, the results obtained after applying this method with the local interpolation functions of Shepard are shown. (Author) 22 refs
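
    The core step, interpolating a value at a new point from nearby scattered samples with a local radial basis function, might be sketched as follows. A Gaussian kernel and a kd-tree neighbor query stand in for the Shepard functions and the octree acceleration described above, so this is an illustration of the idea rather than the authors' implementation.

      import numpy as np
      from scipy.spatial import cKDTree

      def local_rbf_interpolate(points, values, query, k=16, eps=1.0):
          """Interpolate scattered data at a single query point using its k nearest samples."""
          tree = cKDTree(points)
          _, idx = tree.query(query, k=k)
          p, f = points[idx], values[idx]
          # Gaussian RBF phi(r) = exp(-(eps*r)^2); solve Phi w = f on the local stencil.
          r = np.linalg.norm(p[:, None, :] - p[None, :, :], axis=-1)
          Phi = np.exp(-(eps * r) ** 2) + 1e-12 * np.eye(k)   # small jitter for conditioning
          w = np.linalg.solve(Phi, f)
          rq = np.linalg.norm(p - query, axis=-1)
          return np.exp(-(eps * rq) ** 2) @ w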

  5. Photoelastic method to quantitatively visualise the evolution of whole-field stress in 3D printed models subject to continuous loading processes

    Science.gov (United States)

    Ju, Yang; Ren, Zhangyu; Wang, Li; Mao, Lingtao; Chiang, Fu-Pen

    2018-01-01

    The combination of three-dimensional (3D) printing techniques and photoelastic testing is a promising way to quantitatively determine the continuous whole-field stress distributions in solids that are characterized by complex structures. However, photoelastic testing produces wrapped isoclinic and isochromatic phase maps, and unwrapping these maps has always been a significant challenge. To realize the visualization and transparentization of the stress fields in complex structures, we report a new approach to quantify the continuous evolution of the whole-field stress in photosensitive material that is applicable to the fabrication of complex structures using 3D printing technology. The stress fringe orders are determined by analyzing a series of continuous frames extracted from a video recording of the fringe changes over the entire loading process. The integer portion of the fringe orders at a specific point on the model can be determined by counting the valleys of the light intensity change curve over the whole loading process, and the fractional portion can be calculated based on the cosine function between the light intensity and retardation. This method allows the fringe orders to be determined from the video itself, which significantly improves characterization accuracy and simplifies the experimental operation over the entire processes. To validate the proposed method, we compare the results of the theoretical calculations to those of experiments based on the diametric compression of a circular disc prepared by a 3D printer with photosensitive resin. The results indicate that the method can accurately determine the stress fringe order, except for points where the deformation is too large to differentiate the fringes pertaining to photoplasticity.

  6. Provisional-Ideal-Point-Based Multi-objective Optimization Method for Drone Delivery Problem

    Science.gov (United States)

    Omagari, Hiroki; Higashino, Shin-Ichiro

    2018-04-01

    In this paper, we propose a new evolutionary multi-objective optimization method for solving the drone delivery problem (DDP), which can be formulated as a constrained multi-objective optimization problem. In our previous research, we proposed the "aspiration-point-based method" to solve multi-objective optimization problems. However, that method needs to calculate the optimal value of each objective function in advance. Moreover, it does not consider constraint conditions other than the objective functions. Therefore, it cannot be applied to the DDP, which has many constraint conditions. To solve these issues, we propose the "provisional-ideal-point-based method." The proposed method defines a "penalty value" to search for feasible solutions. It also defines a new reference solution named the "provisional-ideal point" to search for the solution preferred by a decision maker. In this way, we can eliminate the preliminary calculations and overcome the limited application scope. The results on the benchmark test problems show that the proposed method can generate the preferred solution efficiently. The usefulness of the proposed method is also demonstrated by applying it to the DDP. As a result, the delivery path obtained when combining one drone and one truck drastically reduces the traveling distance and the delivery time compared with the case of using only one truck.

  7. A FAST METHOD FOR MEASURING THE SIMILARITY BETWEEN 3D MODEL AND 3D POINT CLOUD

    Directory of Open Access Journals (Sweden)

    Z. Zhang

    2016-06-01

    Full Text Available This paper proposes a fast method for measuring the partial Similarity between a 3D Model and a 3D point Cloud (SimMC). It is crucial to measure SimMC for many point cloud-related applications such as 3D object retrieval and inverse procedural modelling. In our proposed method, the surface area of the model and the Distance from Model to point Cloud (DistMC) are exploited as measurements to calculate SimMC. Here, DistMC is defined as the weighted average of the distances between points sampled from the model and the point cloud. Similarly, the Distance from point Cloud to Model (DistCM) is defined as the average of the distances between the points in the point cloud and the model. In order to reduce the huge computational burden brought by the calculation of DistCM in some traditional methods, we define SimMC as the ratio of the weighted surface area of the model to DistMC. Compared to traditional SimMC measuring methods that are only able to measure global similarity, our method is capable of measuring partial similarity by employing a distance-weighted strategy. Moreover, our method is faster than other partial similarity assessment methods. We demonstrate the superiority of our method on both synthetic data and laser scanning data.
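
    One possible reading of the DistMC/SimMC construction, with an area-weighted nearest-neighbor distance standing in for the paper's exact weighting scheme, is sketched below; the function and argument names are hypothetical.

      import numpy as np
      from scipy.spatial import cKDTree

      def sim_mc(model_points, model_areas, cloud_points):
          """Similarity between a sampled model surface and a point cloud, in the spirit of SimMC.

          model_points : (M, 3) points sampled on the model surface
          model_areas  : (M,) surface area assigned to each sample, used as weights
          cloud_points : (N, 3) scanned point cloud
          """
          tree = cKDTree(cloud_points)
          d, _ = tree.query(model_points)                  # distance from each model sample to the cloud
          dist_mc = np.average(d, weights=model_areas)     # area-weighted DistMC
          return model_areas.sum() / dist_mc               # larger value = more similar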

  8. A New Continuous-Time Equality-Constrained Optimization to Avoid Singularity.

    Science.gov (United States)

    Quan, Quan; Cai, Kai-Yuan

    2016-02-01

    In equality-constrained optimization, a standard regularity assumption is often associated with feasible point methods, namely, that the gradients of constraints are linearly independent. In practice, the regularity assumption may be violated. In order to avoid such a singularity, a new projection matrix is proposed based on which a feasible point method to continuous-time, equality-constrained optimization is developed. First, the equality constraint is transformed into a continuous-time dynamical system with solutions that always satisfy the equality constraint. Second, a new projection matrix without singularity is proposed to realize the transformation. An update (or say a controller) is subsequently designed to decrease the objective function along the solutions of the transformed continuous-time dynamical system. The invariance principle is then applied to analyze the behavior of the solution. Furthermore, the proposed method is modified to address cases in which solutions do not satisfy the equality constraint. Finally, the proposed optimization approach is applied to three examples to demonstrate its effectiveness.

  9. Five-point form of the nodal diffusion method and comparison with finite-difference

    International Nuclear Information System (INIS)

    Azmy, Y.Y.

    1988-01-01

    Nodal Methods have been derived, implemented and numerically tested for several problems in physics and engineering. In the field of nuclear engineering, many nodal formalisms have been used for the neutron diffusion equation, all yielding results which were far more computationally efficient than conventional Finite Difference (FD) and Finite Element (FE) methods. However, not much effort has been devoted to theoretically comparing nodal and FD methods in order to explain the very high accuracy of the former. In this summary we outline the derivation of a simple five-point form for the lowest order nodal method and compare it to the traditional five-point, edge-centered FD scheme. The effect of the observed differences on the accuracy of the respective methods is established by considering a simple test problem. It must be emphasized that the nodal five-point scheme derived here is mathematically equivalent to previously derived lowest order nodal methods. 7 refs., 1 tab
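
    For orientation, the traditional edge-centered five-point finite-difference scheme for the one-group diffusion eigenvalue problem on a uniform mesh of width h, to which the nodal five-point form is compared, reads (constant cross sections assumed; the coefficients of the nodal scheme derived in the summary differ from these):

      -D\,\frac{\phi_{i+1,j}+\phi_{i-1,j}+\phi_{i,j+1}+\phi_{i,j-1}-4\phi_{i,j}}{h^{2}}
          + \Sigma_{a}\,\phi_{i,j} = \frac{1}{k_{\mathrm{eff}}}\,\nu\Sigma_{f}\,\phi_{i,j}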

  10. A Multi-Point Method Considering the Maximum Power Point Tracking Dynamic Process for Aerodynamic Optimization of Variable-Speed Wind Turbine Blades

    Directory of Open Access Journals (Sweden)

    Zhiqiang Yang

    2016-05-01

    Full Text Available Due to the dynamic process of maximum power point tracking (MPPT) caused by turbulence and large rotor inertia, variable-speed wind turbines (VSWTs) cannot maintain the optimal tip speed ratio (TSR) from cut-in wind speed up to the rated speed. Therefore, in order to increase the total captured wind energy, the existing aerodynamic design for VSWT blades, which only focuses on performance improvement at a single TSR, needs to be improved to a multi-point design. In this paper, based on a closed-loop system of VSWTs, including turbulent wind, rotor, drive train and MPPT controller, the distribution of operational TSR and its description based on inflow wind energy are investigated. Moreover, a multi-point method considering the MPPT dynamic process for the aerodynamic optimization of VSWT blades is proposed. In the proposed method, the distribution of operational TSR is obtained through a dynamic simulation of the closed-loop system under a specific turbulent wind, and accordingly the multiple design TSRs and the corresponding weighting coefficients in the objective function are determined. Finally, using the blade of a National Renewable Energy Laboratory (NREL) 1.5 MW wind turbine as the baseline, the proposed method is compared with the conventional single-point optimization method using the commercial software Bladed. Simulation results verify the effectiveness of the proposed method.

  11. Primal Interior Point Method for Minimization of Generalized Minimax Functions

    Czech Academy of Sciences Publication Activity Database

    Lukšan, Ladislav; Matonoha, Ctirad; Vlček, Jan

    2010-01-01

    Roč. 46, č. 4 (2010), s. 697-721 ISSN 0023-5954 R&D Projects: GA ČR GA201/09/1957 Institutional research plan: CEZ:AV0Z10300504 Keywords : unconstrained optimization * large-scale optimization * nonsmooth optimization * generalized minimax optimization * interior-point methods * modified Newton methods * variable metric methods * global convergence * computational experiments Subject RIV: BA - General Mathematics Impact factor: 0.461, year: 2010 http://dml.cz/handle/10338.dmlcz/140779

  12. Method of analytic continuation by duality in QCD: Beyond QCD sum rules

    International Nuclear Information System (INIS)

    Kremer, M.; Nasrallah, N.F.; Papadopoulos, N.A.; Schilcher, K.

    1986-01-01

    We present the method of analytic continuation by duality which allows the approximate continuation of QCD amplitudes to small values of the momentum variables where direct perturbative calculations are not possible. This allows a substantial extension of the domain of applications of hadronic QCD phenomenology. The method is illustrated by a simple example which shows its essential features

  13. Continuation of connecting orbits in 3d-ODEs. (ii) cycle-to-cycle connections.

    NARCIS (Netherlands)

    Doedel, E.J.; Kooi, B.W.; van Voorn, G.A.K.; Kuznetzov, Y.A.

    2009-01-01

    In Part I of this paper we have discussed new methods for the numerical continuation of point-to-cycle connecting orbits in three-dimensional autonomous ODEs using projection boundary conditions. In this second part we extend the method to the numerical continuation of cycle-to-cycle connecting orbits.

  14. Efficient 3D Volume Reconstruction from a Point Cloud Using a Phase-Field Method

    Directory of Open Access Journals (Sweden)

    Darae Jeong

    2018-01-01

    Full Text Available We propose an explicit hybrid numerical method for the efficient 3D volume reconstruction from unorganized point clouds using a phase-field method. The proposed three-dimensional volume reconstruction algorithm is based on the 3D binary image segmentation method. First, we define a narrow band domain embedding the unorganized point cloud and an edge indicating function. Second, we define a good initial phase-field function which speeds up the computation significantly. Third, we use a recently developed explicit hybrid numerical method for solving the three-dimensional image segmentation model to obtain efficient volume reconstruction from point cloud data. In order to demonstrate the practical applicability of the proposed method, we perform various numerical experiments.

  15. Data-Driven Method for Wind Turbine Yaw Angle Sensor Zero-Point Shifting Fault Detection

    Directory of Open Access Journals (Sweden)

    Yan Pei

    2018-03-01

    Full Text Available Wind turbine yaw control plays an important role in increasing the wind turbine production and also in protecting the wind turbine. Accurate measurement of yaw angle is the basis of an effective wind turbine yaw controller. The accuracy of yaw angle measurement is affected significantly by the problem of zero-point shifting. Hence, it is essential to evaluate the zero-point shifting error on wind turbines on-line in order to improve the reliability of yaw angle measurement in real time. Particularly, qualitative evaluation of the zero-point shifting error could be useful for wind farm operators to realize prompt and cost-effective maintenance on yaw angle sensors. With the aim of qualitatively evaluating the zero-point shifting error, the yaw angle sensor zero-point shifting fault is firstly defined in this paper. A data-driven method is then proposed to detect the zero-point shifting fault based on Supervisory Control and Data Acquisition (SCADA) data. The zero-point shifting fault is detected in the proposed method by analyzing the power performance under different yaw angles. The SCADA data are partitioned into different bins according to both wind speed and yaw angle in order to deeply evaluate the power performance. An indicator is proposed in this method for power performance evaluation under each yaw angle. The yaw angle with the largest indicator is considered as the yaw angle measurement error in our work. A zero-point shifting fault would trigger an alarm if the error is larger than a predefined threshold. Case studies from several actual wind farms proved the effectiveness of the proposed method in detecting the zero-point shifting fault and also in improving the wind turbine performance. Results of the proposed method could be useful for wind farm operators to make prompt adjustments if there is a large yaw angle measurement error.
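
    The binning-and-indicator logic described above can be approximated in a few lines; the sketch below assumes hypothetical SCADA column names ('wind_speed', 'yaw_error', 'power'), a simple mean-normalised power indicator and an arbitrary alarm threshold, none of which are taken from the paper.

      import numpy as np
      import pandas as pd

      def detect_yaw_zero_shift(scada, yaw_step=2.0, threshold_deg=4.0):
          """Rough SCADA-based check for a yaw zero-point shift.

          scada : DataFrame with hypothetical columns 'wind_speed', 'yaw_error', 'power'.
          The centre of the yaw-error bin with the best normalised power is taken as the
          measurement offset; an offset beyond threshold_deg flags a fault.
          """
          df = scada.copy()
          df["ws_bin"] = pd.cut(df["wind_speed"], bins=np.arange(3.0, 16.0, 1.0))
          df["yaw_bin"] = pd.cut(df["yaw_error"], bins=np.arange(-15.0, 15.0 + yaw_step, yaw_step))
          # Normalise power within each wind-speed bin, then average per yaw bin.
          df["p_norm"] = df.groupby("ws_bin")["power"].transform(lambda p: p / p.mean())
          indicator = df.groupby("yaw_bin")["p_norm"].mean()
          offset = indicator.idxmax().mid
          return offset, abs(offset) > threshold_deg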

  16. Interior Point Method for Solving Fuzzy Number Linear Programming Problems Using Linear Ranking Function

    Directory of Open Access Journals (Sweden)

    Yi-hua Zhong

    2013-01-01

    Full Text Available Recently, various methods have been developed for solving linear programming problems with fuzzy numbers, such as the simplex method and the dual simplex method. However, their computational complexity is exponential, which is not satisfactory for solving large-scale fuzzy linear programming problems, especially in the engineering field. A new method which can solve large-scale fuzzy number linear programming problems is presented in this paper; it is named the revised interior point method. Its idea is similar to that of the interior point method used for solving linear programming problems in a crisp environment, but its feasible direction and step size are chosen using trapezoidal fuzzy numbers, a linear ranking function, fuzzy vectors and their operations, and its stopping condition involves the linear ranking function. Their correctness and rationality are proved. Moreover, the choice of the initial interior point and some factors influencing the results of this method are also discussed and analyzed. The results of the algorithm analysis and an example study show that a proper safety factor parameter, accuracy parameter and initial interior point may reduce the number of iterations, and that they can be selected easily according to actual needs. Finally, the method proposed in this paper is an alternative method for solving fuzzy number linear programming problems.

  17. Analytical resource assessment method for continuous (unconventional) oil and gas accumulations - The "ACCESS" Method

    Science.gov (United States)

    Crovelli, Robert A.; revised by Charpentier, Ronald R.

    2012-01-01

    The U.S. Geological Survey (USGS) periodically assesses petroleum resources of areas within the United States and the world. The purpose of this report is to explain the development of an analytic probabilistic method and spreadsheet software system called Analytic Cell-Based Continuous Energy Spreadsheet System (ACCESS). The ACCESS method is based upon mathematical equations derived from probability theory. The ACCESS spreadsheet can be used to calculate estimates of the undeveloped oil, gas, and NGL (natural gas liquids) resources in a continuous-type assessment unit. An assessment unit is a mappable volume of rock in a total petroleum system. In this report, the geologic assessment model is defined first, the analytic probabilistic method is described second, and the spreadsheet ACCESS is described third. In this revised version of Open-File Report 00-044, the text has been updated to reflect modifications that were made to the ACCESS program. Two versions of the program are added as appendixes.

  18. PKI, Gamma Radiation Reactor Shielding Calculation by Point-Kernel Method

    International Nuclear Information System (INIS)

    Li Chunhuai; Zhang Liwu; Zhang Yuqin; Zhang Chuanxu; Niu Xihua

    1990-01-01

    1 - Description of program or function: This code calculates gamma-ray radiation shielding problems in a general geometric space. 2 - Method of solution: PKI uses a point-kernel integration technique, describes the radiation shielding geometry by means of a geometric space configuration method and coordinate conversion, and makes use of the calculation results for the reactor primary shielding and of the coolant flow regularity in the loop system
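
    The point-kernel integration itself reduces, for a set of isotropic point sources behind a single shield material, to a sum of attenuated and buildup-corrected inverse-square terms. The sketch below uses a linear buildup factor as a placeholder and does not reproduce PKI's geometry configuration or coolant-loop treatment.

      import numpy as np

      def point_kernel_flux(sources, strengths, detector, mu, buildup=lambda mur: 1.0 + mur):
          """Point-kernel estimate of the buildup-corrected gamma flux at a detector point.

          sources   : (N, 3) source point coordinates [cm]
          strengths : (N,) source strengths [photons/s]
          mu        : linear attenuation coefficient of the single shield material [1/cm]
          buildup   : buildup factor as a function of mu*r (linear form used as a placeholder)
          """
          flux = 0.0
          for p, S in zip(np.asarray(sources), np.asarray(strengths)):
              r = np.linalg.norm(np.asarray(detector) - p)
              mur = mu * r
              flux += S * buildup(mur) * np.exp(-mur) / (4.0 * np.pi * r ** 2)
          return flux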

  19. Methods of fast, multiple-point in vivo T1 determination

    International Nuclear Information System (INIS)

    Zhang, Y.; Spigarelli, M.; Fencil, L.E.; Yeung, H.N.

    1989-01-01

    Two methods of rapid, multiple-point determination of T1 in vivo have been evaluated with a phantom consisting of vials of gel in different Mn++ concentrations. The first method was an inversion-recovery-on-the-fly technique, and the second method used a variable-tip-angle (α) progressive saturation with two subsequences of different repetition times. In the first method, 1/T1 was evaluated by an exponential fit. In the second method, 1/T1 was obtained iteratively with a linear fit and then readjusted together with α to a model equation until self-consistency was reached
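
    For comparison, a conventional multi-point T1 estimate from inversion-recovery data is simply a nonlinear fit of S(TI) = S0(1 - 2 exp(-TI/T1)); a sketch with synthetic data is shown below. It illustrates the fitting step only, not the on-the-fly or variable-tip-angle acquisition schemes evaluated in the study, and the inversion times are placeholders.

      import numpy as np
      from scipy.optimize import curve_fit

      # Multi-point T1 estimation: fit S(TI) = S0 * (1 - 2*exp(-TI/T1)) to inversion-recovery data.
      def ir_signal(ti, s0, t1):
          return s0 * (1.0 - 2.0 * np.exp(-ti / t1))

      ti = np.array([50.0, 150.0, 400.0, 800.0, 1600.0, 3200.0])   # inversion times [ms], illustrative
      signal = ir_signal(ti, 100.0, 900.0) + np.random.default_rng(0).normal(0.0, 1.0, ti.size)

      (s0_fit, t1_fit), _ = curve_fit(ir_signal, ti, signal, p0=(signal.max(), 500.0))
      print(f"fitted T1 = {t1_fit:.0f} ms")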

  20. A new method to identify the location of the kick point during the golf swing.

    Science.gov (United States)

    Joyce, Christopher; Burnett, Angus; Matthews, Miccal

    2013-12-01

    No method currently exists to determine the location of the kick point during the golf swing. This study consisted of two phases. In the first phase, the static kick point of 10 drivers (having identical grip and head but fitted with shafts of differing mass and stiffness) was determined by two methods: (1) a visual method used by professional club fitters and (2) an algorithm using 3D locations of markers positioned on the golf club. Using level of agreement statistics, we showed the latter technique was a valid method to determine the location of the static kick point. In phase two, the validated method was used to determine the dynamic kick point during the golf swing. Twelve elite male golfers had three shots analyzed for two drivers fitted with stiff shafts of differing mass (56 g and 78 g). Excellent between-trial reliability was found for dynamic kick point location. Differences were found for dynamic kick point location when compared with static kick point location, as well as between-shaft and within-shaft. These findings have implications for future investigations examining the bending behavior of golf clubs, as well as being useful to examine relationships between properties of the shaft and launch parameters.

  1. Statistical methods for change-point detection in surface temperature records

    Science.gov (United States)

    Pintar, A. L.; Possolo, A.; Zhang, N. F.

    2013-09-01

    We describe several statistical methods to detect possible change-points in a time series of values of surface temperature measured at a meteorological station, and to assess the statistical significance of such changes, taking into account the natural variability of the measured values, and the autocorrelations between them. These methods serve to determine whether the record may suffer from biases unrelated to the climate signal, hence whether there may be a need for adjustments as considered by M. J. Menne and C. N. Williams (2009) "Homogenization of Temperature Series via Pairwise Comparisons", Journal of Climate 22 (7), 1700-1717. We also review methods to characterize patterns of seasonality (seasonal decomposition using monthly medians or robust local regression), and explain the role they play in the imputation of missing values, and in enabling robust decompositions of the measured values into a seasonal component, a possible climate signal, and a station-specific remainder. The methods for change-point detection that we describe include statistical process control, wavelet multi-resolution analysis, adaptive weights smoothing, and a Bayesian procedure, all of which are applicable to single station records.
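
    As one concrete example of the statistical-process-control family mentioned above, a two-sided CUSUM statistic for a mean shift can be sketched as follows. It deliberately ignores autocorrelation and seasonality, which the methods reviewed in the record treat explicitly, so it should be read only as an illustration of the change-point idea.

      import numpy as np

      def cusum_change_point(x, k=0.5):
          """Two-sided CUSUM statistic for a mean shift in a (deseasonalised) series.

          k is the allowance in units of the sample standard deviation. Returns the
          upper and lower CUSUM curves and the index of the largest excursion.
          """
          z = (x - x.mean()) / x.std(ddof=1)
          pos, neg = np.zeros_like(z), np.zeros_like(z)
          for i in range(1, len(z)):
              pos[i] = max(0.0, pos[i - 1] + z[i] - k)
              neg[i] = min(0.0, neg[i - 1] + z[i] + k)
          stat = np.maximum(pos, -neg)
          return pos, neg, int(stat.argmax())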

  2. Multiscale Modeling using Molecular Dynamics and Dual Domain Material Point Method

    Energy Technology Data Exchange (ETDEWEB)

    Dhakal, Tilak Raj [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). Theoretical Division. Fluid Dynamics and Solid Mechanics Group, T-3; Rice Univ., Houston, TX (United States)

    2016-07-07

    For problems involving large material deformation rates, the material deformation time scale can be shorter than the time the material takes to reach thermodynamic equilibrium. For such problems, it is difficult to obtain a constitutive relation, and history dependency becomes important because of the thermodynamic non-equilibrium. Our goal is to build a multi-scale numerical method which can bypass the need for a constitutive relation. In conclusion, a multi-scale simulation method is developed based on the dual domain material point (DDMP) method. Molecular dynamics (MD) simulation is performed to calculate the stress. Since communication among material points is not necessary, the computation can be done in an embarrassingly parallel manner on a CPU-GPU platform.

  3. A resilience perspective to water risk management: case-study application of the adaptation tipping point method

    Science.gov (United States)

    Gersonius, Berry; Ashley, Richard; Jeuken, Ad; Nasruddin, Fauzy; Pathirana, Assela; Zevenbergen, Chris

    2010-05-01

    start the identification and analysis of adaptive strategies at the end of PSIR scheme: impact and examine whether, and for how long, current risk management strategies will continue to be effective under different future conditions. The most noteworthy application of this approach is the adaptation tipping point method. Adaptation tipping points (ATP) are defined as the points where the magnitude of change is such that the current risk management strategy can no longer meet its objectives. In the ATP method, policy objectives, determining aspirational functioning, are taken as the starting point. Also, the current measures to achieve these objectives are described. This is followed by a sensitivity analysis to determine the optimal and critical boundary conditions (state). Lastly, the state is related to pressures in terms of future change. It should be noted that in the ATP method the driver for adopting a new risk management strategy is not future change as such, but rather failing to meet the policy objectives. In the current paper, the ATP method is applied to the case study of an existing stormwater system in Dordrecht (the Netherlands). This application shows the potential of the ATP method to reduce the complexity of implementing a resilience-focused approach to water risk management. It is expected that this will help foster greater practical relevance of resilience as a perspective for the planning of water management structures.

  4. Sampling trace organic compounds in water: a comparison of a continuous active sampler to continuous passive and discrete sampling methods.

    Science.gov (United States)

    Coes, Alissa L; Paretti, Nicholas V; Foreman, William T; Iverson, Jana L; Alvarez, David A

    2014-03-01

    A continuous active sampling method was compared to continuous passive and discrete sampling methods for the sampling of trace organic compounds (TOCs) in water. Results from each method are compared and contrasted in order to provide information for future investigators to use while selecting appropriate sampling methods for their research. The continuous low-level aquatic monitoring (CLAM) sampler (C.I.Agent® Storm-Water Solutions) is a submersible, low flow-rate sampler, that continuously draws water through solid-phase extraction media. CLAM samplers were deployed at two wastewater-dominated stream field sites in conjunction with the deployment of polar organic chemical integrative samplers (POCIS) and the collection of discrete (grab) water samples. All samples were analyzed for a suite of 69 TOCs. The CLAM and POCIS samples represent time-integrated samples that accumulate the TOCs present in the water over the deployment period (19-23 h for CLAM and 29 days for POCIS); the discrete samples represent only the TOCs present in the water at the time and place of sampling. Non-metric multi-dimensional scaling and cluster analysis were used to examine patterns in both TOC detections and relative concentrations between the three sampling methods. A greater number of TOCs were detected in the CLAM samples than in corresponding discrete and POCIS samples, but TOC concentrations in the CLAM samples were significantly lower than in the discrete and (or) POCIS samples. Thirteen TOCs of varying polarity were detected by all of the three methods. TOC detections and concentrations obtained by the three sampling methods, however, are dependent on multiple factors. This study found that stream discharge, constituent loading, and compound type all affected TOC concentrations detected by each method. In addition, TOC detections and concentrations were affected by the reporting limits, bias, recovery, and performance of each method. Published by Elsevier B.V.

  5. Sampling trace organic compounds in water: a comparison of a continuous active sampler to continuous passive and discrete sampling methods

    Science.gov (United States)

    Coes, Alissa L.; Paretti, Nicholas V.; Foreman, William T.; Iverson, Jana L.; Alvarez, David A.

    2014-01-01

    A continuous active sampling method was compared to continuous passive and discrete sampling methods for the sampling of trace organic compounds (TOCs) in water. Results from each method are compared and contrasted in order to provide information for future investigators to use while selecting appropriate sampling methods for their research. The continuous low-level aquatic monitoring (CLAM) sampler (C.I.Agent® Storm-Water Solutions) is a submersible, low flow-rate sampler, that continuously draws water through solid-phase extraction media. CLAM samplers were deployed at two wastewater-dominated stream field sites in conjunction with the deployment of polar organic chemical integrative samplers (POCIS) and the collection of discrete (grab) water samples. All samples were analyzed for a suite of 69 TOCs. The CLAM and POCIS samples represent time-integrated samples that accumulate the TOCs present in the water over the deployment period (19–23 h for CLAM and 29 days for POCIS); the discrete samples represent only the TOCs present in the water at the time and place of sampling. Non-metric multi-dimensional scaling and cluster analysis were used to examine patterns in both TOC detections and relative concentrations between the three sampling methods. A greater number of TOCs were detected in the CLAM samples than in corresponding discrete and POCIS samples, but TOC concentrations in the CLAM samples were significantly lower than in the discrete and (or) POCIS samples. Thirteen TOCs of varying polarity were detected by all of the three methods. TOC detections and concentrations obtained by the three sampling methods, however, are dependent on multiple factors. This study found that stream discharge, constituent loading, and compound type all affected TOC concentrations detected by each method. In addition, TOC detections and concentrations were affected by the reporting limits, bias, recovery, and performance of each method.

  6. Generalized treatment of point reactor kinetics driven by random reactivity fluctuations via the Wiener-Hermite functional method

    International Nuclear Information System (INIS)

    Behringer, K.

    1991-02-01

    In a recent paper by Behringer et al. (1990), the Wiener-Hermite Functional (WHF) method has been applied to point reactor kinetics excited by Gaussian random reactivity noise under stationary conditions, in order to calculate the neutron steady-state value and the neutron power spectral density (PSD) in a second-order (WHF-2) approximation. For simplicity, delayed neutrons and any feedback effects have been disregarded. The present study is a straightforward continuation of the previous one, treating the problem more generally by including any number of delayed neutron groups. For the case of white reactivity noise, the accuracy of the approach is determined by comparison with the exact solution available from the Fokker-Planck method. In the numerical comparisons, the first-order (WHF-1) approximation of the PSD is also considered. (author) 4 figs., 10 refs

  7. Coordinate alignment of combined measurement systems using a modified common points method

    Science.gov (United States)

    Zhao, G.; Zhang, P.; Xiao, W.

    2018-03-01

    Coordinate metrology has been extensively researched for its outstanding advantages in measurement range and accuracy. The alignment of different measurement systems is usually achieved by integrating local coordinates via common points before measurement. The alignment errors would accumulate and significantly reduce the global accuracy, and thus need to be minimized. In this work, a modified common points method (MCPM) is proposed to combine the different traceable system errors of the cooperating machines and to optimize the global accuracy by introducing mutual geometric constraints. The geometric constraints, obtained by measuring the common points in the individual local coordinate systems, make it possible to reduce the local measuring uncertainty and thereby enhance the global measuring accuracy. A simulation system is developed in Matlab to analyze the features of MCPM using the Monte Carlo method. An exemplary setup is constructed to verify the feasibility and efficiency of the proposed method with a laser tracker and an indoor iGPS system. Experimental results show that MCPM can significantly improve the alignment accuracy.
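
    The classical common-points alignment that MCPM modifies is the least-squares rigid transform between the two sets of common points, usually obtained from an SVD (the Horn/Kabsch solution); a sketch is given below. The additional mutual geometric constraints that distinguish MCPM are not reproduced.

      import numpy as np

      def rigid_align(common_src, common_dst):
          """Least-squares rigid transform (R, t) mapping common points measured in one
          instrument's frame onto the same points measured in another frame."""
          cs, cd = common_src.mean(axis=0), common_dst.mean(axis=0)
          H = (common_src - cs).T @ (common_dst - cd)
          U, _, Vt = np.linalg.svd(H)
          D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])   # guard against reflections
          R = Vt.T @ D @ U.T
          t = cd - R @ cs
          return R, t          # a source point p maps to R @ p + t in the destination frame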

  8. Acceleration of Meshfree Radial Point Interpolation Method on Graphics Hardware

    International Nuclear Information System (INIS)

    Nakata, Susumu

    2008-01-01

    This article describes a parallel computational technique to accelerate radial point interpolation method (RPIM)-based meshfree method using graphics hardware. RPIM is one of the meshfree partial differential equation solvers that do not require the mesh structure of the analysis targets. In this paper, a technique for accelerating RPIM using graphics hardware is presented. In the method, the computation process is divided into small processes suitable for processing on the parallel architecture of the graphics hardware in a single instruction multiple data manner.

  9. Continuous non-invasive blood glucose monitoring by spectral image differencing method

    Science.gov (United States)

    Huang, Hao; Liao, Ningfang; Cheng, Haobo; Liang, Jing

    2018-01-01

    Currently, the use of implantable enzyme electrode sensors is the main method for continuous blood glucose monitoring. However, the effect of electrochemical reactions and the significant drift caused by bioelectricity in the body reduce the accuracy of the glucose measurements, so enzyme-based glucose sensors need to be calibrated several times each day by finger-prick blood corrections, which increases the patient's pain. In this paper, we propose a method for continuous non-invasive blood glucose monitoring by a spectral image differencing method in the near-infrared band. The method uses a high-precision CCD detector that switches filters in a very short period of time to obtain the spectral images. The spectral image differences are then obtained using a morphological method, and the dynamic change of blood glucose is reflected in the image difference data. Experiments show that this method can be used to monitor blood glucose dynamically to a certain extent.

  10. Datum Feature Extraction and Deformation Analysis Method Based on Normal Vector of Point Cloud

    Science.gov (United States)

    Sun, W.; Wang, J.; Jin, F.; Liang, Z.; Yang, Y.

    2018-04-01

    To address the lack of applicable analysis methods when applying three-dimensional laser scanning technology to deformation monitoring, an efficient method for extracting datum features and analysing deformation based on the normal vectors of a point cloud is proposed. First, a kd-tree is used to establish the topological relations. Datum points are detected by tracking the point cloud normal vectors, each determined from the normal vector of a local plane. Then, cubic B-spline curve fitting is performed on the datum points. Finally, the datum elevation and the inclination angle of the radial points are calculated from the fitted curve, and the deformation information is analysed. The proposed approach was verified on a real large-scale tank data set captured with a terrestrial laser scanner in a chemical plant. The results show that the method can obtain complete information about the monitored object quickly and comprehensively, and accurately reflect the deformation of the datum features.
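
    The per-point normal vectors that drive the datum detection are commonly estimated from a local plane fit, i.e. the direction of least variance of each point's neighborhood; a sketch of that standard step is given below, with the neighborhood size k as an illustrative choice.

      import numpy as np
      from scipy.spatial import cKDTree

      def estimate_normals(points, k=12):
          """Estimate a unit normal for every point from the direction of least variance
          of its k-nearest-neighbor patch (a local plane fit)."""
          tree = cKDTree(points)
          _, idx = tree.query(points, k=k)
          normals = np.empty_like(points)
          for i, nb in enumerate(idx):
              patch = points[nb] - points[nb].mean(axis=0)
              _, _, vt = np.linalg.svd(patch, full_matrices=False)
              normals[i] = vt[-1]                # smallest-singular-value direction
          return normals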

  11. A portable low-cost 3D point cloud acquiring method based on structure light

    Science.gov (United States)

    Gui, Li; Zheng, Shunyi; Huang, Xia; Zhao, Like; Ma, Hao; Ge, Chao; Tang, Qiuxia

    2018-03-01

    A fast, low-cost method of acquiring 3D point cloud data is proposed in this paper, which addresses the lack of texture information and the low efficiency of acquiring point cloud data with only one pair of cheap cameras and a projector. Firstly, we put forward a scene-adaptive design method for a random encoding pattern: a coding pattern is projected onto the target surface to create texture information that is favorable for image matching. Subsequently, we design an efficient dense matching algorithm that fits the projected texture. After global-algorithm optimization and multi-kernel parallel development with the fusion of hardware and software, a fast point-cloud acquisition system is accomplished. Evaluation of point cloud accuracy shows that the point cloud acquired by the proposed method has high precision. What's more, the scanning speed meets the demands of dynamic scenes and has good practical application value.

  12. A primal-dual interior point method for large-scale free material optimization

    DEFF Research Database (Denmark)

    Weldeyesus, Alemseged Gebrehiwot; Stolpe, Mathias

    2015-01-01

    Free Material Optimization (FMO) is a branch of structural optimization in which the design variable is the elastic material tensor that is allowed to vary over the design domain. The requirements are that the material tensor is symmetric positive semidefinite with bounded trace. The resulting...... optimization problem is a nonlinear semidefinite program with many small matrix inequalities for which a special-purpose optimization method should be developed. The objective of this article is to propose an efficient primal-dual interior point method for FMO that can robustly and accurately solve large...... of iterations the interior point method requires is modest and increases only marginally with problem size. The computed optimal solutions obtain a higher precision than other available special-purpose methods for FMO. The efficiency and robustness of the method is demonstrated by numerical experiments on a set...

  13. Continuous anneal method for characterizing the thermal stability of ultraviolet Bragg gratings

    DEFF Research Database (Denmark)

    Rathje, Jacob; Kristensen, Martin; Pedersen, Jens Engholm

    2000-01-01

    We present a new method for determining the long-term stability of UV-induced fiber Bragg gratings. We use a continuous temperature ramp method in which systematic variation of the ramp speed probes both the short- and long-term stability. Results are obtained both for gratings written in D2 loaded...... we resolve two separate energy distributions, suggesting that two different defects are involved. The experiments show that complicated decays originating from various energy distributions can be analyzed with this continuous isochronal anneal method. The results have both practical applications...

  14. The overlap Dirac operator as a continued fraction

    International Nuclear Information System (INIS)

    Wenger, U.; Deutsches Elektronen-Synchrotron

    2004-03-01

    We use a continued fraction expansion of the sign-function in order to obtain a five dimensional formulation of the overlap lattice Dirac operator. Within this formulation the inverse of the overlap operator can be calculated by a single Krylov space method and nested conjugate gradient procedures are avoided. We point out that the five dimensional linear system can be made well conditioned using equivalence transformations on the continued fractions. (orig.)

  15. The complexity of interior point methods for solving discounted turn-based stochastic games

    DEFF Research Database (Denmark)

    Hansen, Thomas Dueholm; Ibsen-Jensen, Rasmus

    2013-01-01

    for general 2TBSGs. This implies that a number of interior point methods can be used to solve 2TBSGs. We consider two such algorithms: the unified interior point method of Kojima, Megiddo, Noma, and Yoshise, and the interior point potential reduction algorithm of Kojima, Megiddo, and Ye. The algorithms run...... states and discount factor γ we get κ=Θ(n(1−γ)2) , −δ=Θ(n√1−γ) , and 1/θ=Θ(n(1−γ)2) in the worst case. The lower bounds for κ, − δ, and 1/θ are all obtained using the same family of deterministic games....

  16. A feature point identification method for positron emission particle tracking with multiple tracers

    Energy Technology Data Exchange (ETDEWEB)

    Wiggins, Cody, E-mail: cwiggin2@vols.utk.edu [University of Tennessee-Knoxville, Department of Physics and Astronomy, 1408 Circle Drive, Knoxville, TN 37996 (United States); Santos, Roque [University of Tennessee-Knoxville, Department of Nuclear Engineering (United States); Escuela Politécnica Nacional, Departamento de Ciencias Nucleares (Ecuador); Ruggles, Arthur [University of Tennessee-Knoxville, Department of Nuclear Engineering (United States)

    2017-01-21

    A novel detection algorithm for Positron Emission Particle Tracking (PEPT) with multiple tracers based on optical feature point identification (FPI) methods is presented. This new method, the FPI method, is compared to a previous multiple PEPT method via analyses of experimental and simulated data. The FPI method outperforms the older method in cases of large particle numbers and fine time resolution. Simulated data show the FPI method to be capable of identifying 100 particles at 0.5 mm average spatial error. Detection error is seen to vary with the inverse square root of the number of lines of response (LORs) used for detection and increases as particle separation decreases. - Highlights: • A new approach to positron emission particle tracking is presented. • Using optical feature point identification analogs, multiple particle tracking is achieved. • Method is compared to previous multiple particle method. • Accuracy and applicability of method is explored.

  17. Hybrid kriging methods for interpolating sparse river bathymetry point data

    Directory of Open Access Journals (Sweden)

    Pedro Velloso Gomes Batista

    Full Text Available ABSTRACT Terrain models that represent riverbed topography are used for analyzing geomorphologic changes, calculating water storage capacity, and making hydrologic simulations. These models are generated by interpolating bathymetry points. River bathymetry is usually surveyed through cross-sections, which may lead to a sparse sampling pattern. Hybrid kriging methods, such as regression kriging (RK) and co-kriging (CK), employ the correlation with auxiliary predictors, as well as inter-variable correlation, to improve the predictions of the target variable. In this study, we use the orthogonal distance of an (x, y) point to the river centerline as a covariate for RK and CK. Given that riverbed elevation variability is abrupt transversely to the flow direction, it is expected that the greater the Euclidean distance of a point to the thalweg, the greater the bed elevation will be. The aim of this study was to evaluate whether the use of the proposed covariate improves the spatial prediction of riverbed topography. In order to assess this premise, we perform an external validation. Transversal cross-sections are used to make the spatial predictions, and the point data surveyed between sections are used for testing. We compare the results from CK and RK to the ones obtained from ordinary kriging (OK). The validation indicates that RK yields the lowest RMSE among the interpolators. RK predictions represent the thalweg between cross-sections, whereas the other methods under-predict the river thalweg depth. Therefore, we conclude that RK provides a simple approach for enhancing the quality of the spatial prediction from sparse bathymetry data.
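
    The covariate proposed above - the orthogonal distance of each bathymetry point to the river centerline - is straightforward to compute when the centerline is available as a polyline; a minimal sketch follows (the kriging itself, e.g. via an external geostatistics package, is not shown):

        import numpy as np

        def distance_to_centerline(points, centerline):
            """Orthogonal distance of (x, y) points to a polyline centerline,
            the auxiliary predictor used for RK and CK in the study."""
            d = np.full(len(points), np.inf)
            for a, b in zip(centerline[:-1], centerline[1:]):
                ab = b - a
                t = np.clip((points - a) @ ab / (ab @ ab), 0.0, 1.0)
                proj = a + t[:, None] * ab
                d = np.minimum(d, np.linalg.norm(points - proj, axis=1))
            return d

        # hypothetical regression-kriging trend using the covariate:
        #   bed_elevation ~ b0 + b1 * distance_to_centerline + kriged residual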

  18. Two-point versus multiple-point geostatistics: the ability of geostatistical methods to capture complex geobodies and their facies associations—an application to a channelized carbonate reservoir, southwest Iran

    International Nuclear Information System (INIS)

    Hashemi, Seyyedhossein; Javaherian, Abdolrahim; Ataee-pour, Majid; Khoshdel, Hossein

    2014-01-01

    Facies models try to explain facies architectures which have a primary control on the subsurface heterogeneities and the fluid flow characteristics of a given reservoir. In the process of facies modeling, geostatistical methods are implemented to integrate different sources of data into a consistent model. The facies models should describe facies interactions; the shape and geometry of the geobodies as they occur in reality. Two distinct categories of geostatistical techniques are two-point and multiple-point (geo) statistics (MPS). In this study, both of the aforementioned categories were applied to generate facies models. A sequential indicator simulation (SIS) and a truncated Gaussian simulation (TGS) represented two-point geostatistical methods, and a single normal equation simulation (SNESIM) selected as an MPS simulation representative. The dataset from an extremely channelized carbonate reservoir located in southwest Iran was applied to these algorithms to analyze their performance in reproducing complex curvilinear geobodies. The SNESIM algorithm needs consistent training images (TI) in which all possible facies architectures that are present in the area are included. The TI model was founded on the data acquired from modern occurrences. These analogies delivered vital information about the possible channel geometries and facies classes that are typically present in those similar environments. The MPS results were conditioned to both soft and hard data. Soft facies probabilities were acquired from a neural network workflow. In this workflow, seismic-derived attributes were implemented as the input data. Furthermore, MPS realizations were conditioned to hard data to guarantee the exact positioning and continuity of the channel bodies. A geobody extraction workflow was implemented to extract the most certain parts of the channel bodies from the seismic data. These extracted parts of the channel bodies were applied to the simulation workflow as hard data

  19. Numerical Continuation Methods for Intrusive Uncertainty Quantification Studies

    Energy Technology Data Exchange (ETDEWEB)

    Safta, Cosmin [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Najm, Habib N. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Phipps, Eric Todd [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2014-09-01

    Rigorous modeling of engineering systems relies on efficient propagation of uncertainty from input parameters to model outputs. In recent years, there has been substantial development of probabilistic polynomial chaos (PC) Uncertainty Quantification (UQ) methods, enabling studies in expensive computational models. One approach, termed "intrusive", involving reformulation of the governing equations, has been found to have superior computational performance compared to non-intrusive sampling-based methods in relevant large-scale problems, particularly in the context of emerging architectures. However, the utility of intrusive methods has been severely limited due to detrimental numerical instabilities associated with strong nonlinear physics. Previous methods for stabilizing these constructions tend to add unacceptably high computational costs, particularly in problems with many uncertain parameters. In order to address these challenges, we propose to adapt and improve numerical continuation methods for the robust time integration of intrusive PC system dynamics. We propose adaptive methods, starting with a small uncertainty for which the model has stable behavior and gradually moving to larger uncertainty where the instabilities are rampant, in a manner that provides a suitable solution.

  20. EVALUATION OF CONTINUOUS THERMODILUTION METHOD FOR CARDIAC OUTPUT MEASUREMENT

    Directory of Open Access Journals (Sweden)

    Roman Parežnik

    2001-12-01

    Full Text Available Background. Continuous monitoring of haemodynamic variables is often necessary for detection of rapid changes in critically ill patients. In our patients, the recently introduced continuous thermodilution technique (CTD) for cardiac output measurement was compared to the bolus thermodilution technique (BTD), which is the »gold standard« method for cardiac output (CO) measurement in intensive care medicine. Methods. Ten critically ill patients were included in a retrospective observational study. Using the CTD method, cardiac output was measured continuously. BTD measurements using the same equipment were performed intermittently. The data obtained by BTD were compared to those obtained by CTD just before the BTD (CTD-before) and 2–3 minutes after the BTD (CTD-after). The CO values were divided into three groups: all CO values, CO > 4.5 L/min, CO < 4.5 L/min. The bias (mean difference between values obtained by the two methods), standard deviation, 95% confidence limits and relative error were calculated, and linear regression analysis was performed. A t-test for paired data was used to compare the biases for CTD-before and CTD-after for an individual group. A p value of less than 0.05 was considered statistically significant. Results. A total of 60 data triplets were obtained. CTD-before ranged from 1.9 L/min to 12.6 L/min, CTD-after from 2.0 to 13.2 L/min and BTD from 1.9 to 12.0 L/min. For all CO values the bias for CTD-before was 0.13 ± 0.52 L/min (95% confidence limits 1.17–0.91 L/min), the relative error was 3.52 ± 15.20%, the linear regression equation was CTD-before = 0.96 × BTD + 0.01 and Pearson's correlation coefficient was 0.95. The values for CTD-after were 0.08 ± 0.46 L/min (1.0–0.84 L/min, 2.22 ± 9.05%, CTD-after = 0.98 × BTD + 0.01 and 0.98, respectively). For all CO values there was no statistically significant difference between biases for CTD-before and CTD-after (p = 0.51). There was no statistically significant difference between biases for CTD
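
    The agreement statistics quoted above (bias, standard deviation, limits of agreement and relative error between paired CTD and BTD readings) follow a standard Bland-Altman-type computation; a minimal sketch with made-up cardiac output values:

        import numpy as np

        def agreement(ctd, btd):
            """Bias, SD, approximate 95% limits and mean relative error between
            paired cardiac output readings (illustrative computation)."""
            diff = ctd - btd
            bias, sd = diff.mean(), diff.std(ddof=1)
            limits = (bias - 1.96 * sd, bias + 1.96 * sd)
            rel_err = 100.0 * diff / ((ctd + btd) / 2.0)
            return bias, sd, limits, rel_err.mean()

        ctd = np.array([4.1, 5.6, 7.9, 3.2, 9.8])   # made-up CO values, L/min
        btd = np.array([4.0, 5.5, 8.1, 3.3, 9.5])
        print(agreement(ctd, btd))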

  1. Statistical methods for assessing agreement between continuous measurements

    DEFF Research Database (Denmark)

    Sokolowski, Ineta; Hansen, Rikke Pilegaard; Vedsted, Peter

    Background: Clinical research often involves study of agreement amongst observers. Agreement can be measured in different ways, and one can obtain quite different values depending on which method one uses. Objective: We review the approaches that have been discussed to assess the agreement between...... continuous measures and discuss their strengths and weaknesses. Different methods are illustrated using actual data from the 'Delay in diagnosis of cancer in general practice' project in Aarhus, Denmark. Subjects and Methods: We use weighted kappa-statistic, intraclass correlation coefficient (ICC......), concordance coefficient, Bland-Altman limits of agreement and percentage of agreement to assess the agreement between patient reported delay and doctor reported delay in diagnosis of cancer in general practice. Key messages: The correct statistical approach is not obvious. Many studies give the product

  2. A connection between the asymptotic iteration method and the continued fractions formalism

    International Nuclear Information System (INIS)

    Matamala, A.R.; Gutierrez, F.A.; Diaz-Valdes, J.

    2007-01-01

    In this work, we show that there is a connection between the asymptotic iteration method (a method to solve second order linear ordinary differential equations) and the older method of continued fractions to solve differential equations

  3. A point-value enhanced finite volume method based on approximate delta functions

    Science.gov (United States)

    Xuan, Li-Jun; Majdalani, Joseph

    2018-02-01

    We revisit the concept of an approximate delta function (ADF), introduced by Huynh (2011) [1], in the form of a finite-order polynomial that holds identical integral properties to the Dirac delta function when used in conjunction with a finite-order polynomial integrand over a finite domain. We show that the use of generic ADF polynomials can be effective at recovering and generalizing several high-order methods, including Taylor-based and nodal-based Discontinuous Galerkin methods, as well as the Correction Procedure via Reconstruction. Based on the ADF concept, we then proceed to formulate a Point-value enhanced Finite Volume (PFV) method, which stores and updates the cell-averaged values inside each element as well as the unknown quantities and, if needed, their derivatives on nodal points. The sharing of nodal information with surrounding elements saves the number of degrees of freedom compared to other compact methods at the same order. To ensure conservation, cell-averaged values are updated using an identical approach to that adopted in the finite volume method. Here, the updating of nodal values and their derivatives is achieved through an ADF concept that leverages all of the elements within the domain of integration that share the same nodal point. The resulting scheme is shown to be very stable at successively increasing orders. Both accuracy and stability of the PFV method are verified using a Fourier analysis and through applications to the linear wave and nonlinear Burgers' equations in one-dimensional space.

  4. One step linear reconstruction method for continuous wave diffuse optical tomography

    Science.gov (United States)

    Ukhrowiyah, N.; Yasin, M.

    2017-09-01

    A one-step linear reconstruction method for continuous wave diffuse optical tomography is proposed and demonstrated on a polyvinyl chloride based material and a breast phantom. The approximation used in this method consists of selecting a regularization coefficient and evaluating the difference between two states corresponding to the data acquired without and with a change in optical properties. The method recovers optical parameters from measured boundary data of light propagation in the object. The approach is demonstrated on both simulated and experimental data: a numerical object is used to produce the simulation data, while the polyvinyl chloride based material and the breast phantom sample provide the experimental data. Comparisons between experimental and simulated results are conducted to validate the proposed method. The reconstructed images produced by the one-step linear reconstruction method are almost the same as the original object. This approach provides a means of imaging that is sensitive to changes in optical properties, which may be particularly useful for functional imaging with continuous wave diffuse optical tomography for early diagnosis of breast cancer.
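
    The "one step linear" idea - relating the boundary-data difference between the two states to a change in optical properties through a single regularized linear solve - can be sketched generically as a Tikhonov step; the Jacobian, regularization form and data below are illustrative assumptions, not the authors' exact formulation:

        import numpy as np

        def one_step_linear(J, delta_y, alpha=1e-2):
            """Single regularized linear solve mapping the boundary-data difference
            delta_y to a change in optical properties (generic Tikhonov form)."""
            JtJ = J.T @ J
            return np.linalg.solve(JtJ + alpha * np.eye(JtJ.shape[0]), J.T @ delta_y)

        # toy usage with a random sensitivity (Jacobian) matrix
        rng = np.random.default_rng(2)
        J = rng.normal(size=(64, 32))                  # 64 boundary readings, 32 pixels
        true_change = np.zeros(32); true_change[10] = 0.1
        delta_y = J @ true_change + rng.normal(0.0, 1e-3, 64)
        print(np.argmax(one_step_linear(J, delta_y)))  # expect pixel index 10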

  5. Comparison of Optimization and Two-point Methods in Estimation of Soil Water Retention Curve

    Science.gov (United States)

    Ghanbarian-Alavijeh, B.; Liaghat, A. M.; Huang, G.

    2009-04-01

    The soil water retention curve (SWRC) is a soil hydraulic property whose direct measurement is time consuming and expensive. Since its measurement is unavoidable in environmental studies such as the investigation of unsaturated hydraulic conductivity and solute transport, this study attempts to predict the soil water retention curve from two measured points. Using the Cresswell and Paydar (1996) method (two-point method) and an optimization method developed in this study, both based on two points of the SWRC, the parameters of the Tyler and Wheatcraft (1990) model (fractal dimension and air entry value) were estimated; water contents at different matric potentials were then estimated and compared with their measured values (n=180). For each method, we used both 3 and 1500 kPa (case 1) and 33 and 1500 kPa (case 2) as the two points of the SWRC. The calculated RMSE values showed that for the Cresswell and Paydar (1996) method there is no significant difference between case 1 and case 2, although the RMSE in case 2 (2.35) was slightly lower than in case 1 (2.37). The results also showed that the optimization method developed in this study had significantly lower RMSE values for cases 1 (1.63) and 2 (1.33) than the Cresswell and Paydar (1996) method.
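
    For a concrete sense of the two-point idea, the Tyler and Wheatcraft (1990) fractal model theta(psi) = theta_s * (psi_a / psi)^(3 - D) can be fitted in closed form to two measured (matric potential, water content) pairs; the model form, the assumed saturated water content and the numbers below are illustrative assumptions, not the study's optimization scheme:

        import numpy as np

        def tyler_wheatcraft_two_points(psi1, th1, psi2, th2, theta_s):
            """Closed-form fit of theta = theta_s * (psi_a / psi)**(3 - D) to two
            measured points; returns fractal dimension D and air-entry value psi_a."""
            exponent = np.log(th1 / th2) / np.log(psi2 / psi1)   # equals 3 - D
            D = 3.0 - exponent
            psi_a = psi1 * (th1 / theta_s) ** (1.0 / exponent)
            return D, psi_a

        # e.g. the two tensions of "case 2" above: 33 and 1500 kPa (water contents made up)
        D, psi_a = tyler_wheatcraft_two_points(33.0, 0.30, 1500.0, 0.12, theta_s=0.45)
        theta_100 = 0.45 * (psi_a / 100.0) ** (3.0 - D)          # predicted content at 100 kPa
        print(D, psi_a, theta_100)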

  6. Dual reference point temperature interrogating method for distributed temperature sensor

    International Nuclear Information System (INIS)

    Ma, Xin; Ju, Fang; Chang, Jun; Wang, Weijie; Wang, Zongliang

    2013-01-01

    A novel method based on dual temperature reference points is presented to interrogate the temperature in a distributed temperature sensing (DTS) system. This new method is suitable to overcome deficiencies due to the impact of DC offsets and the gain difference in the two signal channels of the sensing system during temperature interrogation. Moreover, this method can in most cases avoid the need to calibrate the gain and DC offsets in the receiver, data acquisition and conversion. An improved temperature interrogation formula is presented and the experimental results show that this method can efficiently estimate the channel amplification and system DC offset, thus improving the system accuracy. (letter)
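
    The underlying idea - removing the need to calibrate channel gain and DC offset by anchoring the interrogation to two reference temperatures - can be illustrated with a generic two-point calibration; the paper's actual DTS interrogation formula is not given in the abstract, so the linear signal model below is an assumption:

        def two_point_calibration(sig1, sig2, T1, T2):
            """Channel gain and DC offset from readings at two reference temperatures,
            assuming a linear signal model sig = gain * T + offset."""
            gain = (sig2 - sig1) / (T2 - T1)
            offset = sig1 - gain * T1
            return gain, offset

        gain, offset = two_point_calibration(0.42, 0.81, 25.0, 65.0)   # made-up readings
        print((0.60 - offset) / gain)   # temperature inferred for a raw reading of 0.60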

  7. RESEARCH ON FEATURE POINTS EXTRACTION METHOD FOR BINARY MULTISCALE AND ROTATION INVARIANT LOCAL FEATURE DESCRIPTOR

    Directory of Open Access Journals (Sweden)

    Hongwei Ying

    2014-08-01

    Full Text Available A scale-space extreme point extraction method for a binary multiscale and rotation invariant local feature descriptor is studied in this paper in order to obtain a robust and fast local image feature descriptor. Classic local feature description algorithms typically select neighborhood information around feature points that are extrema of the image scale space, obtained by constructing an image pyramid with some signal transform. However, building the image pyramid consumes a large amount of computing and storage resources and is not conducive to practical application development. This paper presents a dual multiscale FAST algorithm that does not need to build an image pyramid and can extract scale-extreme feature points quickly. Feature points extracted by the proposed method are multiscale and rotation invariant and are well suited to constructing the local feature descriptor.

  8. Continuous energy Monte Carlo method based lattice homogenization

    International Nuclear Information System (INIS)

    Li Mancang; Yao Dong; Wang Kan

    2014-01-01

    Based on the Monte Carlo code MCNP, the continuous energy Monte Carlo multi-group constants generation code MCMC has been developed. The track length scheme is used as the foundation of cross section generation. The scattering matrix and Legendre components require special techniques, and the scattering event method is proposed to solve this problem. Three methods have been developed to calculate the diffusion coefficients for diffusion reactor core codes, and the Legendre method has been applied in MCMC. To satisfy the equivalence theory, the general equivalence theory (GET) and the superhomogenization method (SPH) have been applied to the Monte Carlo based group constants, and the super equivalence method (SPE) has been proposed to further improve the equivalence. GET, SPH and SPE have all been implemented in MCMC. The numerical results show that generating homogenized multi-group constants via the Monte Carlo method overcomes the difficulties of complex geometry and treats energy as a continuum, thus providing more accurate parameters. Moreover, the same code and data library can be used for a wide range of applications because of this versatility. The MCMC scheme can be seen as a potential alternative to the widely used deterministic lattice codes. (authors)

  9. Rapid, sensitive and reproducible method for point-of-collection screening of liquid milk for adulterants using a portable Raman spectrometer with novel optimized sample well

    Science.gov (United States)

    Nieuwoudt, Michel K.; Holroyd, Steve E.; McGoverin, Cushla M.; Simpson, M. Cather; Williams, David E.

    2017-02-01

    Point-of-care diagnostics are of interest in the medical, security and food industry, the latter particularly for screening food adulterated for economic gain. Milk adulteration continues to be a major problem worldwide and different methods to detect fraudulent additives have been investigated for over a century. Laboratory based methods are limited in their application to point-of-collection diagnosis and also require expensive instrumentation, chemicals and skilled technicians. This has encouraged exploration of spectroscopic methods as more rapid and inexpensive alternatives. Raman spectroscopy has excellent potential for screening of milk because of the rich complexity inherent in its signals. The rapid advances in photonic technologies and fabrication methods are enabling increasingly sensitive portable mini-Raman systems to be placed on the market that are both affordable and feasible for both point-of-care and point-of-collection applications. We have developed a powerful spectroscopic method for rapidly screening liquid milk for sucrose and four nitrogen-rich adulterants (dicyandiamide (DCD), ammonium sulphate, melamine, urea), using a combined system: a small, portable Raman spectrometer with focusing fibre optic probe and optimized reflective focusing wells, simply fabricated in aluminium. The reliable sample presentation of this system enabled high reproducibility of 8% RSD (residual standard deviation) within four minutes. Limit of detection intervals for PLS calibrations ranged between 140 - 520 ppm for the four N-rich compounds and between 0.7 - 3.6 % for sucrose. The portability of the system and reliability and reproducibility of this technique opens opportunities for general, reagentless adulteration screening of biological fluids as well as milk, at point-of-collection.

  10. Towards Automatic Testing of Reference Point Based Interactive Methods

    OpenAIRE

    Ojalehto, Vesa; Podkopaev, Dmitry; Miettinen, Kaisa

    2016-01-01

    In order to understand strengths and weaknesses of optimization algorithms, it is important to have access to different types of test problems, well defined performance indicators and analysis tools. Such tools are widely available for testing evolutionary multiobjective optimization algorithms. To our knowledge, there do not exist tools for analyzing the performance of interactive multiobjective optimization methods based on the reference point approach to communicating ...

  11. Aerodynamic Optimization Based on Continuous Adjoint Method for a Flexible Wing

    Directory of Open Access Journals (Sweden)

    Zhaoke Xu

    2016-01-01

    Full Text Available Aerodynamic optimization based on continuous adjoint method for a flexible wing is developed using FORTRAN 90 in the present work. Aerostructural analysis is performed on the basis of high-fidelity models with Euler equations on the aerodynamic side and a linear quadrilateral shell element model on the structure side. This shell element can deal with both thin and thick shell problems with intersections, so this shell element is suitable for the wing structural model which consists of two spars, 20 ribs, and skin. The continuous adjoint formulations based on Euler equations and unstructured mesh are derived and used in the work. Sequential quadratic programming method is adopted to search for the optimal solution using the gradients from continuous adjoint method. The flow charts of rigid and flexible optimization are presented and compared. The objective is to minimize drag coefficient meanwhile maintaining lift coefficient for a rigid and flexible wing. A comparison between the results from aerostructural analysis of rigid optimization and flexible optimization is shown here to demonstrate that it is necessary to include the effect of aeroelasticity in the optimization design of a wing.

  12. Solution of Dendritic Growth in Steel by the Novel Point Automata Method

    International Nuclear Information System (INIS)

    Lorbiecka, A Z; Šarler, B

    2012-01-01

    The aim of this paper is the simulation of dendritic growth in steel in two dimensions by a coupled deterministic continuum mechanics heat and species transfer model and a stochastic localized phase change kinetics model taking into account the undercooling, curvature, kinetic, and thermodynamic anisotropy. The stochastic model receives temperature and concentration information from the deterministic model, and the deterministic heat and species diffusion equations receive the solid fraction information from the stochastic model. The heat and species transfer models are solved on a regular grid by the standard explicit Finite Difference Method (FDM). The phase-change kinetics model is solved by a novel Point Automata (PA) approach. The PA method was developed [1] in order to circumvent the mesh anisotropy problem associated with the classical Cellular Automata (CA) method. The PA approach is established on randomly distributed points and a neighbourhood configuration similar to that appearing in meshless methods. A comparison of the PA and CA methods is shown. It is demonstrated that the results of the new PA method are not sensitive to the crystallographic orientations of the dendrite.

  13. Method of Fusion Diagnosis for Dam Service Status Based on Joint Distribution Function of Multiple Points

    Directory of Open Access Journals (Sweden)

    Zhenxiang Jiang

    2016-01-01

    Full Text Available Traditional methods of diagnosing dam service status are generally suited to a single measuring point. They reflect only the local status of the dam and do not merge multisource data effectively, which makes them unsuitable for diagnosing overall service. This study proposes a new multiple-point method for diagnosing dam service status based on a joint distribution function. The function, which incorporates monitoring data from multiple points, can be established with a t-copula. The probability, an important fused value over different measuring combinations, can then be calculated, and the corresponding diagnostic criterion is established using typical small-probability theory. An engineering case study indicates that the fusion diagnosis method can be conducted in real time and that abnormal points can be detected, thereby providing a new early-warning method for engineering safety.

  14. Defining Glaucomatous Optic Neuropathy from a Continuous Measure of Optic Nerve Damage - The Optimal Cut-off Point for Risk-factor Analysis in Population-based Epidemiology

    NARCIS (Netherlands)

    Ramdas, Wishal D.; Rizopoulos, Dimitris; Wolfs, Roger C. W.; Hofman, Albert; de Jong, Paulus T. V. M.; Vingerling, Johannes R.; Jansonius, Nomdo M.

    2011-01-01

    Purpose: Diseases characterized by a continuous trait can be defined by setting a cut-off point for the disease measure in question, accepting some misclassification. The 97.5th percentile is commonly used as a cut-off point. However, it is unclear whether this percentile is the optimal cut-off

  15. An unsteady point vortex method for coupled fluid-solid problems

    Energy Technology Data Exchange (ETDEWEB)

    Michelin, Sebastien [Jacobs School of Engineering, UCSD, Department of Mechanical and Aerospace Engineering, La Jolla, CA (United States); Ecole Nationale Superieure des Mines de Paris, Paris (France); Llewellyn Smith, Stefan G. [Jacobs School of Engineering, UCSD, Department of Mechanical and Aerospace Engineering, La Jolla, CA (United States)

    2009-06-15

    A method is proposed for the study of the two-dimensional coupled motion of a general sharp-edged solid body and a surrounding inviscid flow. The formation of vorticity at the body's edges is accounted for by the shedding at each corner of point vortices whose intensity is adjusted at each time step to satisfy the regularity condition on the flow at the generating corner. The irreversible nature of vortex shedding is included in the model by requiring the vortices' intensity to vary monotonically in time. A conservation of linear momentum argument is provided for the equation of motion of these point vortices (Brown-Michael equation). The forces and torques applied on the solid body are computed as explicit functions of the solid body velocity and the vortices' position and intensity, thereby providing an explicit formulation of the vortex-solid coupled problem as a set of non-linear ordinary differential equations. The example of a falling card in a fluid initially at rest is then studied using this method. The stability of broadside-on fall is analysed and the shedding of vorticity from both plate edges is shown to destabilize this position, consistent with experimental studies and numerical simulations of this problem. The reduced-order representation of the fluid motion in terms of point vortices is used to understand the physical origin of this destabilization. (orig.)

  16. A new integral method for solving the point reactor neutron kinetics equations

    International Nuclear Information System (INIS)

    Li Haofeng; Chen Wenzhen; Luo Lei; Zhu Qian

    2009-01-01

    A numerical integral method that efficiently provides the solution of the point kinetics equations by using the better basis function (BBF) for the approximation of the neutron density in one time step integrations is described and investigated. The approach is based on an exact analytic integration of the neutron density equation, where the stiffness of the equations is overcome by the fully implicit formulation. The procedure is tested by using a variety of reactivity functions, including step reactivity insertion, ramp input and oscillatory reactivity changes. The solution of the better basis function method is compared to other analytical and numerical solutions of the point reactor kinetics equations. The results show that selecting a better basis function can improve the efficiency and accuracy of this integral method. The better basis function method can be used in real time forecasting for power reactors in order to prevent reactivity accidents.
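
    The stiffness that the integral method is designed to overcome is easy to reproduce with the standard point kinetics equations; the sketch below simply integrates the one-delayed-group system with an off-the-shelf stiff solver for a step reactivity insertion (illustrative parameters, not the paper's better-basis-function scheme):

        from scipy.integrate import solve_ivp

        beta, lam, Lam = 0.0065, 0.08, 1e-4          # delayed fraction, decay const (1/s), generation time (s)
        rho = lambda t: 0.001 if t > 0.0 else 0.0    # step reactivity insertion

        def point_kinetics(t, y):
            n, C = y                                  # relative power, precursor concentration
            return [(rho(t) - beta) / Lam * n + lam * C,
                    beta / Lam * n - lam * C]

        y0 = [1.0, beta / (lam * Lam)]               # steady-state precursor level
        sol = solve_ivp(point_kinetics, (0.0, 10.0), y0, method="BDF", rtol=1e-8, atol=1e-10)
        print(sol.y[0, -1])                          # relative power after 10 s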

  17. Using Financial Information in Continuing Education. Accepted Methods and New Approaches.

    Science.gov (United States)

    Matkin, Gary W.

    This book, which is intended as a resource/reference guide for experienced financial managers and course planners, examines accepted methods and new approaches for using financial information in continuing education. The introduction reviews theory and practice, traditional and new methods, planning and organizational management, and technology.…

  18. Fixed Point Methods in the Stability of the Cauchy Functional Equations

    Directory of Open Access Journals (Sweden)

    Z. Dehvari

    2013-03-01

    Full Text Available By using fixed point methods, we prove some generalized Hyers-Ulam stability results for homomorphisms for the Cauchy and Cauchy-Jensen functional equations on product algebras and on triple systems.

  19. Limiting Accuracy of Segregated Solution Methods for Nonsymmetric Saddle Point Problems

    Czech Academy of Sciences Publication Activity Database

    Jiránek, P.; Rozložník, Miroslav

    Roč. 215, č. 1 (2008), s. 28-37 ISSN 0377-0427 R&D Projects: GA MŠk 1M0554; GA AV ČR 1ET400300415 Institutional research plan: CEZ:AV0Z10300504 Keywords: saddle point problems * Schur complement reduction method * null-space projection method * rounding error analysis Subject RIV: BA - General Mathematics Impact factor: 1.048, year: 2008

  20. Invalid-point removal based on epipolar constraint in the structured-light method

    Science.gov (United States)

    Qi, Zhaoshuai; Wang, Zhao; Huang, Junhui; Xing, Chao; Gao, Jianmin

    2018-06-01

    In structured-light measurement, there unavoidably exist many invalid points caused by shadows, image noise and ambient light. According to the property of the epipolar constraint, because the retrieved phase of the invalid point is inaccurate, the corresponding projector image coordinate (PIC) will not satisfy the epipolar constraint. Based on this fact, a new invalid-point removal method based on the epipolar constraint is proposed in this paper. First, the fundamental matrix of the measurement system is calculated, which will be used for calculating the epipolar line. Then, according to the retrieved phase map of the captured fringes, the PICs of each pixel are retrieved. Subsequently, the epipolar line in the projector image plane of each pixel is obtained using the fundamental matrix. The distance between the corresponding PIC and the epipolar line of a pixel is defined as the invalidation criterion, which quantifies the satisfaction degree of the epipolar constraint. Finally, all pixels with a distance larger than a certain threshold are removed as invalid points. Experiments verified that the method is easy to implement and demonstrates better performance than state-of-the-art measurement systems.
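
    The invalidation criterion itself is only a few lines once the fundamental matrix F is known: the epipolar line of a camera pixel is l = F x, and the point-to-line distance of the retrieved projector image coordinate is thresholded. A minimal sketch (array shapes and the threshold are illustrative):

        import numpy as np

        def epipolar_distances(F, cam_pts, proj_pts):
            """Distance of each retrieved projector image coordinate to the epipolar
            line l = F @ x of its camera pixel; large distances flag invalid points."""
            x = np.c_[cam_pts, np.ones(len(cam_pts))]     # homogeneous camera pixels, (N, 3)
            xp = np.c_[proj_pts, np.ones(len(proj_pts))]  # retrieved projector coords, (N, 3)
            lines = x @ F.T                               # epipolar lines in the projector image
            return np.abs(np.sum(xp * lines, axis=1)) / np.hypot(lines[:, 0], lines[:, 1])

        # usage: keep pixels whose PIC lies within `tol` pixels of its epipolar line
        # valid = epipolar_distances(F, cam_pts, proj_pts) < tol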

  1. Using the method of ideal point to solve dual-objective problem for production scheduling

    Directory of Open Access Journals (Sweden)

    Mariia Marko

    2016-07-01

    Full Text Available In practice there are often problems in which several criteria must be optimized simultaneously; these are so-called multi-objective optimization problems. In this article we consider the use of the ideal point method to solve a two-objective optimization problem in production planning. The solution process consists of a series of steps in which, using the simplex method, we find the ideal point. After that, to solve the resulting scalar problem, we use the method of Lagrange multipliers.

  2. Multiperiod hydrothermal economic dispatch by an interior point method

    Directory of Open Access Journals (Sweden)

    Kimball L. M.

    2002-01-01

    Full Text Available This paper presents an interior point algorithm to solve the multiperiod hydrothermal economic dispatch (HTED. The multiperiod HTED is a large scale nonlinear programming problem. Various optimization methods have been applied to the multiperiod HTED, but most neglect important network characteristics or require decomposition into thermal and hydro subproblems. The algorithm described here exploits the special bordered block diagonal structure and sparsity of the Newton system for the first order necessary conditions to result in a fast efficient algorithm that can account for all network aspects. Applying this new algorithm challenges a conventional method for the use of available hydro resources known as the peak shaving heuristic.

  3. The quadrant method measuring four points is as reliable and accurate as the quadrant method in the evaluation after anatomical double-bundle ACL reconstruction.

    Science.gov (United States)

    Mochizuki, Yuta; Kaneko, Takao; Kawahara, Keisuke; Toyoda, Shinya; Kono, Norihiko; Hada, Masaru; Ikegami, Hiroyasu; Musha, Yoshiro

    2017-11-20

    The quadrant method was described by Bernard et al. and has been widely used for postoperative evaluation of anterior cruciate ligament (ACL) reconstruction. The purpose of this research is to further develop the quadrant method by measuring four points, which we named the four-point quadrant method, and to compare it with the quadrant method. Three-dimensional computed tomography (3D-CT) analyses were performed in 25 patients who underwent double-bundle ACL reconstruction using the outside-in technique. The four points in this study's method were defined as point 1 (highest), point 2 (deepest), point 3 (lowest), and point 4 (shallowest) of the femoral tunnel position. The depth and height values at each point were measured. The antero-medial (AM) tunnel is (depth1, height2) and the postero-lateral (PL) tunnel is (depth3, height4) in this four-point quadrant method. The 3D-CT images were evaluated independently by 2 orthopaedic surgeons. A second measurement was performed by both observers after a 4-week interval. Intra- and inter-observer reliability was calculated by means of the intra-class correlation coefficient (ICC). Also, the accuracy of the method was evaluated against the quadrant method. Intra-observer reliability was almost perfect for both the AM and PL tunnels (ICC > 0.81). Inter-observer reliability of the AM tunnel was substantial (ICC > 0.61) and that of the PL tunnel was almost perfect (ICC > 0.81). The AM tunnel position was 0.13% deeper and 0.58% higher, and the PL tunnel position 0.01% shallower and 0.13% lower, compared to the quadrant method. The four-point quadrant method was found to have high intra- and inter-observer reliability and accuracy. This method can evaluate the tunnel position regardless of the shape and morphology of the bone tunnel aperture and can provide measurements that can be compared across various reconstruction methods. The four-point quadrant method of this study is considered to have clinical relevance in that it is a detailed and accurate tool for

  4. Collective mass and zero-point energy in the generator-coordinate method

    International Nuclear Information System (INIS)

    Fiolhais, C.

    1982-01-01

    The aim of the present thesis is the study of the collective mass parameters and the zero-point energies in the GCM framework, with special regard to the fission process. After the derivation of the collective Schroedinger equation in the framework of the Gaussian overlap approximation, the inertia parameters are compared with those of the adiabatic time-dependent Hartree-Fock method. Then the kinetic and the potential zero-point energy occurring in this formulation are studied. Thereafter the practical application of the described formalism is discussed. Then a numerical calculation of the GCM mass parameter and the zero-point energy for the fission process on the basis of a two-center shell model with a pairing force in the BCS approximation is presented. (HSI) [de

  5. Method of nuclear reactor control using a variable temperature load dependent set point

    International Nuclear Information System (INIS)

    Kelly, J.J.; Rambo, G.E.

    1982-01-01

    A method and apparatus for controlling a nuclear reactor in response to a variable average reactor coolant temperature set point is disclosed. The set point is dependent upon percent of full power load demand. A manually-actuated ''droop mode'' of control is provided whereby the reactor coolant temperature is allowed to drop below the set point temperature a predetermined amount wherein the control is switched from reactor control rods exclusively to feedwater flow

  6. Numerical and Theoretical Investigations Concerning the Continuous-Surface-Curvature Effect in Compressor Blades

    Directory of Open Access Journals (Sweden)

    Yin Song

    2014-12-01

    Full Text Available Though the importance of curvature continuity on compressor blade performances has been realized, there are two major questions that need to be solved, i.e., the respective effects of curvature continuity at the leading-edge blend point and the main surface, and the contradiction between the traditional theory and experimental observations in the effect of those novel leading-edge shapes with smaller curvature discontinuity and sharper nose. In this paper, an optimization method to design continuous-curvature blade profiles which deviate little from datum blades is proposed, and numerical and theoretical analysis is carried out to investigate the continuous-curvature effect on blade performances. The results show that the curvature continuity at the leading-edge blend point helps to eliminate the separation bubble, thus improving the blade performance. The main-surface curvature continuity is also beneficial, although its effects are much smaller than those of the blend-point curvature continuity. Furthermore, it is observed that there exist two factors controlling the leading-edge spike, i.e., the curvature discontinuity at the blend point which dominates at small incidences, and the nose curvature which dominates at large incidences. To the authors’ knowledge, such mechanisms have not been reported before, and they can help to solve the sharp-leading-edge paradox.

  7. Convergence results for a class of abstract continuous descent methods

    Directory of Open Access Journals (Sweden)

    Sergiu Aizicovici

    2004-03-01

    Full Text Available We study continuous descent methods for the minimization of Lipschitzian functions defined on a general Banach space. We establish convergence theorems for those methods which are generated by approximate solutions to evolution equations governed by regular vector fields. Since the complement of the set of regular vector fields is $\sigma$-porous, we conclude that our results apply to most vector fields in the sense of Baire's categories.

  8. Five-point Element Scheme of Finite Analytic Method for Unsteady Groundwater Flow

    Institute of Scientific and Technical Information of China (English)

    Xiang Bo; Mi Xiao; Ji Changming; Luo Qingsong

    2007-01-01

    In order to improve the adaptability of the finite analytic method to irregular elements, this paper establishes a five-point element scheme of the finite analytic method by using a coordinate rotation technique. It not only solves the unsteady groundwater flow equation but also handles the boundary conditions, and it can be used to calculate three typical groundwater problems. Compared with previously computed results, the results of this method are more satisfactory.

  9. Acid dew point measurement in flue gases

    Energy Technology Data Exchange (ETDEWEB)

    Struschka, M.; Baumbach, G.

    1986-06-01

    The operation of modern boiler plants requires the continuous measurement of the acid dew point in flue gases. An existing measuring instrument was modified in such a way that it can determine acid dew points reliably, reproduceably and continuously. The authors present the mechanisms of the dew point formation, the dew point measuring principle, the modification and the operational results.

  10. Business continuity 2014: From traditional to integrated Business Continuity Management.

    Science.gov (United States)

    Ee, Henry

    As global change continues to generate new challenges and potential threats to businesses, traditional business continuity management (BCM) slowly reveals its limitations and weak points to ensuring 'business resiliency' today. Consequently, BCM professionals also face the challenge of re-evaluating traditional concepts and introducing new strategies and industry best practices. This paper points to why traditional BCM is no longer sufficient in terms of enabling businesses to survive in today's high-risk environment. It also looks into some of the misconceptions about BCM and other stumbling blocks to establishing effective BCM today. Most importantly, however, this paper provides tips based on the Business Continuity Institute's (BCI) Good Practices Guideline (GPG) and the latest international BCM standard ISO 22301 on how to overcome the issues and challenges presented.

  11. Semianalytical analysis of shear walls with the use of discrete-continual finite element method. Part 1: Mathematical foundations

    Directory of Open Access Journals (Sweden)

    Akimov Pavel

    2016-01-01

    Full Text Available The distinctive paper is devoted to the two-dimensional semi-analytical solution of boundary problems of analysis of shear walls with the use of discrete-continual finite element method (DCFEM. This approach allows obtaining the exact analytical solution in one direction (so-called “basic” direction, also decrease the size of the problem to one-dimensional common finite element analysis. The resulting multipoint boundary problem for the first-order system of ordinary differential equations with piecewise constant coefficients is solved analytically. The proposed method is rather efficient for evaluation of the boundary effect (such as the stress field near the concentrated force. DCFEM also has a completely computer-oriented algorithm, computational stability, optimal conditionality of resultant system and it is applicable for the various loads at an arbitrary point or a region of the wall.

  12. Numerical Solutions of the Mean-Value Theorem: New Methods for Downward Continuation of Potential Fields

    Science.gov (United States)

    Zhang, Chong; Lü, Qingtian; Yan, Jiayong; Qi, Guang

    2018-04-01

    Downward continuation can enhance small-scale sources and improve resolution. Nevertheless, the common methods have disadvantages in obtaining optimal results because of divergence and instability. We derive the mean-value theorem for potential fields, which could be the theoretical basis of some data processing and interpretation. Based on numerical solutions of the mean-value theorem, we present the convergent and stable downward continuation methods by using the first-order vertical derivatives and their upward continuation. By applying one of our methods to both the synthetic and real cases, we show that our method is stable, convergent and accurate. Meanwhile, compared with the fast Fourier transform Taylor series method and the integrated second vertical derivative Taylor series method, our process has very little boundary effect and is still stable in noise. We find that the characters of the fading anomalies emerge properly in our downward continuation with respect to the original fields at the lower heights.
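
    The stable building block referred to above - wavenumber-domain upward continuation, applied here to a gridded field or to its first vertical derivative - can be sketched as follows; the mean-value-theorem construction of the downward continuation itself is not reproduced:

        import numpy as np

        def upward_continue(field, dx, dy, h):
            """Classical wavenumber-domain upward continuation of a gridded
            potential field (or of its vertical derivative) by height h."""
            ny, nx = field.shape
            kx = 2.0 * np.pi * np.fft.fftfreq(nx, d=dx)
            ky = 2.0 * np.pi * np.fft.fftfreq(ny, d=dy)
            k = np.hypot(*np.meshgrid(kx, ky))            # radial wavenumber grid, (ny, nx)
            return np.real(np.fft.ifft2(np.fft.fft2(field) * np.exp(-k * h)))

        # usage: continued = upward_continue(vertical_derivative_grid, dx=100.0, dy=100.0, h=50.0)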

  13. The use of the case study method in radiation worker continuing training

    International Nuclear Information System (INIS)

    Stevens, R.D.

    1990-01-01

    Typical methods of continuing training are often viewed by employees as boring, redundant and unnecessary. It is hoped that the operating experience lesson in the required course, Radiation Worker Requalification, will be well received by employees because actual RFP events will be presented as case studies. The interactive learning atmosphere created by the case study method stimulates discussion, develops analytical abilities, and motivates employees to use lessons learned in the workplace. This problem solving approach to continuing training incorporates cause and effect analysis, a technique which is also used at RFP to investigate events. A method of designing the operating experience lesson in the Radiation Worker Requalification course is described in this paper. 7 refs., 2 figs

  14. Constructing inverse probability weights for continuous exposures: a comparison of methods.

    Science.gov (United States)

    Naimi, Ashley I; Moodie, Erica E M; Auger, Nathalie; Kaufman, Jay S

    2014-03-01

    Inverse probability-weighted marginal structural models with binary exposures are common in epidemiology. Constructing inverse probability weights for a continuous exposure can be complicated by the presence of outliers, and the need to identify a parametric form for the exposure and account for nonconstant exposure variance. We explored the performance of various methods to construct inverse probability weights for continuous exposures using Monte Carlo simulation. We generated two continuous exposures and binary outcomes using data sampled from a large empirical cohort. The first exposure followed a normal distribution with homoscedastic variance. The second exposure followed a contaminated Poisson distribution, with heteroscedastic variance equal to the conditional mean. We assessed six methods to construct inverse probability weights using: a normal distribution, a normal distribution with heteroscedastic variance, a truncated normal distribution with heteroscedastic variance, a gamma distribution, a t distribution (1, 3, and 5 degrees of freedom), and a quantile binning approach (based on 10, 15, and 20 exposure categories). We estimated the marginal odds ratio for a single-unit increase in each simulated exposure in a regression model weighted by the inverse probability weights constructed using each approach, and then computed the bias and mean squared error for each method. For the homoscedastic exposure, the standard normal, gamma, and quantile binning approaches performed best. For the heteroscedastic exposure, the quantile binning, gamma, and heteroscedastic normal approaches performed best. Our results suggest that the quantile binning approach is a simple and versatile way to construct inverse probability weights for continuous exposures.
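
    A minimal sketch of the standard-normal variant compared above (stabilized weights, with the denominator taken from a linear model of exposure on covariates); statsmodels and scipy are assumed here for convenience, and the gamma and quantile-binning variants are not shown:

        import numpy as np
        from scipy.stats import norm
        import statsmodels.api as sm

        def stabilized_ipw_normal(exposure, covariates):
            """Stabilized inverse probability weights for a continuous exposure:
            marginal normal density over conditional normal density from OLS."""
            fit = sm.OLS(exposure, sm.add_constant(covariates)).fit()
            dens_cond = norm.pdf(exposure, loc=fit.fittedvalues, scale=np.sqrt(fit.mse_resid))
            dens_marg = norm.pdf(exposure, loc=exposure.mean(), scale=exposure.std(ddof=1))
            return dens_marg / dens_cond

        # the weights then enter a weighted outcome regression (the marginal structural model)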

  15. Continuous integration congestion cost allocation based on sensitivity

    International Nuclear Information System (INIS)

    Wu, Z.Q.; Wang, Y.N.

    2004-01-01

    Congestion cost allocation is a very important topic in congestion management. Allocation methods based on the Aumann-Shapley value use discrete numerical integration, which requires the incremented OPF solution to be computed many times and is therefore unsuitable for practical application to large-scale systems. The optimal solution and the way its sensitivities change during congestion removal using a DC optimal power flow (OPF) are analysed. A simple continuous integration method based on these sensitivities is proposed for congestion cost allocation. The proposed sensitivity analysis needs less computation time than the approach based on the quadratic method and interior point iteration. Because the proposed allocation method uses continuous rather than discrete numerical integration, it does not need the incremented OPF solutions, which allows its use in large-scale systems. The method can also be used for AC OPF congestion management. (author)
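
    The Aumann-Shapley allocation being referred to integrates each user's marginal (sensitivity) cost along the scaling path t*d, t in [0, 1]; a generic numeric sketch with a toy quadratic congestion cost follows (the paper's analytic sensitivities from the DC OPF are not reproduced):

        import numpy as np

        def aumann_shapley_allocation(demands, marginal_cost, n_steps=200):
            """Allocate cost by integrating each user's marginal cost along t*demands,
            t in [0, 1] (midpoint rule); marginal_cost(d) returns the gradient dC/dd."""
            t = (np.arange(n_steps) + 0.5) / n_steps
            grads = np.array([marginal_cost(ti * demands) for ti in t])
            return demands * grads.mean(axis=0)

        # toy quadratic congestion cost C(d) = d^T Q d; the allocation should sum to C(d)
        Q = np.array([[2.0, 0.5], [0.5, 1.0]])
        d = np.array([1.0, 3.0])
        alloc = aumann_shapley_allocation(d, lambda x: 2.0 * Q @ x)
        print(alloc.sum(), d @ Q @ d)   # the two totals should (nearly) match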

  16. Apparatus and method for continuous production of materials

    Science.gov (United States)

    Chang, Chih-hung; Jin, Hyungdae

    2014-08-12

    Embodiments of a continuous-flow injection reactor and a method for continuous material synthesis are disclosed. The reactor includes a mixing zone unit and a residence time unit removably coupled to the mixing zone unit. The mixing zone unit includes at least one top inlet, a side inlet, and a bottom outlet. An injection tube, or plurality of injection tubes, is inserted through the top inlet and extends past the side inlet while terminating above the bottom outlet. A first reactant solution flows in through the side inlet, and a second reactant solution flows in through the injection tube(s). With reference to nanoparticle synthesis, the reactant solutions combine in a mixing zone and form nucleated nanoparticles. The nucleated nanoparticles flow through the residence time unit. The residence time unit may be a single conduit, or it may include an outer housing and a plurality of inner tubes within the outer housing.

  17. Transformer-based asymmetrical embedded Z-source neutral point clamped inverters with continuous input current and enhanced voltage boost capability

    DEFF Research Database (Denmark)

    Mo, W.; Loh, Poh Chiang; Blaabjerg, Frede

    2013-01-01

    Z-source Neutral Point Clamped (NPC) inverters were introduced to integrate the advantages of both Z-source inverters and NPC inverters. However, traditional Z-source inverters suffer from high voltage stress and a chopping input current. This paper proposes six types of transformer-based impedance-so......-source NPC inverters which achieve enhanced voltage boost capability and continuous input current by utilizing a transformer and an embedded dc source configuration. Experimental results are presented to verify the theoretical analysis....

  18. Improved fixed point iterative method for blade element momentum computations

    DEFF Research Database (Denmark)

    Sun, Zhenye; Shen, Wen Zhong; Chen, Jin

    2017-01-01

    The blade element momentum (BEM) theory is widely used in aerodynamic performance calculations and optimization applications for wind turbines. The fixed point iterative method is the most commonly utilized technique to solve the BEM equations. However, this method sometimes does not converge...... are addressed through both theoretical analysis and numerical tests. A term from the BEM equations that equals zero at a critical inflow angle is the source of the convergence problems. When the initial inflow angle is set larger than the critical inflow angle and the relaxation methodology is adopted...
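
    The relaxation strategy mentioned above amounts to damping the fixed-point update; stripped of the BEM-specific residual, the iteration looks like the generic sketch below (the actual update of the induction factors and the critical-angle handling are not reproduced):

        import math

        def relaxed_fixed_point(g, x0, relax=0.5, tol=1e-8, max_iter=500):
            """Under-relaxed fixed-point iteration x <- (1 - w) * x + w * g(x)."""
            x = x0
            for _ in range(max_iter):
                x_new = (1.0 - relax) * x + relax * g(x)
                if abs(x_new - x) < tol:
                    return x_new
                x = x_new
            raise RuntimeError("fixed-point iteration did not converge")

        # toy usage: solve x = cos(x)
        print(relaxed_fixed_point(math.cos, 0.0))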

  19. C1-continuous Virtual Element Method for Poisson-Kirchhoff plate problem

    Energy Technology Data Exchange (ETDEWEB)

    Gyrya, Vitaliy [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Mourad, Hashem Mohamed [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-09-20

    We present a family of C1-continuous high-order Virtual Element Methods for the Poisson-Kirchhoff plate bending problem. The convergence of the methods is tested on a variety of meshes including rectangular, quadrilateral, and meshes obtained by edge removal (i.e. highly irregular meshes). The convergence rates are presented for all of these tests.

  20. On stability of fixed points and chaos in fractional systems.

    Science.gov (United States)

    Edelman, Mark

    2018-02-01

    In this paper, we propose a method to calculate asymptotically period-two sinks and define the range of stability of fixed points for a variety of discrete fractional systems of the order 0 < … chaos is impossible in the corresponding continuous fractional systems.

  1. Perspective for applying traditional and innovative teaching and learning methods to nurses' continuing education

    OpenAIRE

    Bendinskaitė, Irmina

    2015-01-01

    Bendinskaitė I. Perspective for applying traditional and innovative teaching and learning methods to nurse’s continuing education, magister thesis / supervisor Assoc. Prof. O. Riklikienė; Departament of Nursing and Care, Faculty of Nursing, Lithuanian University of Health Sciences. – Kaunas, 2015, – p. 92 The purpose of this study was to investigate traditional and innovative teaching and learning methods perspective to nurse’s continuing education. Material and methods. In a period fro...

  2. Rapid continuous chemical methods for studies of nuclei far from stability

    CERN Document Server

    Trautmann, N; Eriksen, D; Gaggeler, H; Greulich, N; Hickmann, U; Kaffrell, N; Skarnemark, G; Stender, E; Zendel, M

    1981-01-01

    Fast continuous separation methods accomplished by combining a gas-jet recoil-transport system with a variety of chemical systems are described. Procedures for the isolation of individual elements from fission product mixtures with the multistage solvent extraction facility SISAK are presented. Thermochromatography in connection with a gas-jet has been studied as a technique for on-line separation of volatile fission halides. Based on chemical reactions in a gas-jet system itself separation procedures for tellurium, selenium and germanium from fission products have been worked out. All the continuous chemical methods can be performed within a few seconds. The application of such procedures to the investigation of nuclides far from the line of beta -stability is illustrated by a few examples. (16 refs).

  3. Damage detection and locating using tone burst and continuous excitation modulation method

    Science.gov (United States)

    Li, Zheng; Wang, Zhi; Xiao, Li; Qu, Wenzhong

    2014-03-01

    Among structural health monitoring techniques, nonlinear ultrasonic spectroscopy methods are found to be an effective diagnostic approach for detecting nonlinear damage such as fatigue cracks, owing to their sensitivity to incipient structural changes. In this paper, a nonlinear ultrasonic modulation method was developed to detect and locate a fatigue crack on an aluminum plate. The method differs from the nonlinear wave modulation method, which exploits the modulation of a low-frequency vibration and a high-frequency ultrasonic wave; instead, it exploits the modulation of a tone burst and a high-frequency ultrasonic wave. In the experiment, a Hanning-window-modulated sinusoidal tone burst and a continuous sinusoidal excitation were simultaneously imposed on the PZT array bonded on the surface of an aluminum plate. The modulation of the tone burst and the continuous sinusoidal excitation was observed in different actuator-sensor paths, indicating the presence and location of the fatigue crack. The experimental results show that the proposed method is capable of detecting and locating the fatigue crack successfully.

  4. Regional cerebral blood flow measurements by a noninvasive microsphere method using 123I-IMP. Comparison with the modified fractional uptake method and the continuous arterial blood sampling method

    International Nuclear Information System (INIS)

    Nakano, Seigo; Matsuda, Hiroshi; Tanizaki, Hiroshi; Ogawa, Masafumi; Miyazaki, Yoshiharu; Yonekura, Yoshiharu

    1998-01-01

    A noninvasive microsphere method using N-isopropyl-p-(123I)iodoamphetamine (123I-IMP), developed by Yonekura et al., was performed in 10 patients with neurological diseases to quantify regional cerebral blood flow (rCBF). Regional CBF values by this method were compared with rCBF values simultaneously estimated from both the modified fractional uptake (FU) method using cardiac output developed by Miyazaki et al. and the conventional method with continuous arterial blood sampling. In the comparison, we designated the factor which converted raw SPECT voxel counts to rCBF values as a CBF factor. A highly significant correlation (r=0.962, p<0.001) was obtained in the CBF factors between the present method and the continuous arterial blood sampling method. The CBF factors by the present method were only 2.7% higher on average than those by the continuous arterial blood sampling method. There were significant correlations (r=0.811 and r=0.798, p<0.001) in the CBF factors between the modified FU method (threshold for estimating total brain SPECT counts: 10% and 30%, respectively) and the continuous arterial blood sampling method. However, the CBF factors of the modified FU method were 31.4% and 62.3% higher on average (threshold: 10% and 30%, respectively) than those by the continuous arterial blood sampling method. In conclusion, this newly developed method for rCBF measurements was considered to be useful for routine clinical studies without any blood sampling. (author)

  5. Material-Point-Method Analysis of Collapsing Slopes

    DEFF Research Database (Denmark)

    Andersen, Søren; Andersen, Lars

    2009-01-01

    To understand the dynamic evolution of landslides and predict their physical extent, a computational model is required that is capable of analysing complex material behaviour as well as large strains and deformations. Here, a model is presented based on the so-called generalised-interpolation material-point method. … a deformed material description is introduced, based on time integration of the deformation gradient and utilising Gauss quadrature over the volume associated with each material point. The method has been implemented in a Fortran code and employed for the analysis of a landslide that took place during…

  6. Generation of a statistical shape model with probabilistic point correspondences and the expectation maximization- iterative closest point algorithm

    International Nuclear Information System (INIS)

    Hufnagel, Heike; Pennec, Xavier; Ayache, Nicholas; Ehrhardt, Jan; Handels, Heinz

    2008-01-01

    Identification of point correspondences between shapes is required for statistical analysis of organ shape differences. Since manual identification of landmarks is not a feasible option in 3D, several methods were developed to automatically find one-to-one correspondences on shape surfaces. For unstructured point sets, however, one-to-one correspondences do not exist but correspondence probabilities can be determined. A method was developed to compute a statistical shape model based on shapes which are represented by unstructured point sets with arbitrary point numbers. A fundamental problem when computing statistical shape models is the determination of correspondences between the points of the shape observations of the training data set. In the absence of landmarks, exact correspondences can only be determined between continuous surfaces, not between unstructured point sets. To overcome this problem, we introduce correspondence probabilities instead of exact correspondences. The correspondence probabilities are found by aligning the observation shapes with the affine expectation maximization-iterative closest points (EM-ICP) registration algorithm. In a second step, the correspondence probabilities are used as input to compute a mean shape (represented once again by an unstructured point set). Both steps are unified in a single optimization criterion which depends on the two parameters 'registration transformation' and 'mean shape'. In a final step, a variability model which best represents the variability in the training data set is computed. Experiments on synthetic data sets and in vivo brain structure data sets (MRI) are then designed to evaluate the performance of our algorithm. The new method was applied to brain MRI data sets, and the estimated point correspondences were compared to a statistical shape model built on exact correspondences. Based on established measures of 'generalization ability' and 'specificity', the estimates were very satisfactory

  7. Phase equilibria for mixtures containing very many components. development and application of continuous thermodynamics for chemical process design

    International Nuclear Information System (INIS)

    Cotterman, R.L.; Bender, R.; Prausnitz, J.M.

    1984-01-01

    For some multicomponent mixtures, where detailed chemical analysis is not feasible, the composition of the mixture may be described by a continuous distribution function of some convenient macroscopic property such as normal boiling point or molecular weight. To attain a quantitative description of phase equilibria for such mixtures, this work has developed thermodynamic procedures for continuous systems; that procedure is called continuous thermodynamics. To illustrate, continuous thermodynamics is used to calculate dew points for natural-gas mixtures, solvent loss in a high-pressure absorber, and liquid-liquid phase equilibria in a polymer fractionation process. Continuous thermodynamics provides a rational method for calculating phase equilibria for those mixtures where complete chemical analysis is not available but where composition can be given by some statistical description. While continuous thermodynamics is only the logical limit of the well-known pseudo-component method, it is more efficient than that method because it is less arbitrary and it often requires less computer time

  8. Comparison of the intracoronary continuous infusion method using a microcatheter and the intravenous continuous adenosine infusion method for inducing maximal hyperemia for fractional flow reserve measurement.

    Science.gov (United States)

    Yoon, Myeong-Ho; Tahk, Seung-Jea; Yang, Hyoung-Mo; Park, Jin-Sun; Zheng, Mingri; Lim, Hong-Seok; Choi, Byoung-Joo; Choi, So-Yeon; Choi, Un-Jung; Hwang, Joung-Won; Kang, Soo-Jin; Hwang, Gyo-Seung; Shin, Joon-Han

    2009-06-01

    Inducing stable maximal coronary hyperemia is essential for measurement of fractional flow reserve (FFR). We evaluated the efficacy of the intracoronary (IC) continuous adenosine infusion method via a microcatheter for inducing maximal coronary hyperemia. In 43 patients with 44 intermediate coronary lesions, FFR was measured consecutively by IC bolus adenosine injection (48-80 microg in the left coronary artery, 36-60 microg in the right coronary artery) and a standard intravenous (IV) adenosine infusion (140 microg x min(-1) x kg(-1)). After completion of the IV infusion method, the tip of an IC microcatheter (Progreat Microcatheter System, Terumo, Japan) was positioned at the coronary ostium, and FFR was measured with increasing IC continuous adenosine infusion rates from 60 to 360 microg/min via the microcatheter. Fractional flow reserve decreased with increasing IC adenosine infusion rates, and no further decrease was observed after 300 microg/min. The procedures were well tolerated by all patients. Fractional flow reserves measured by IC adenosine infusion with 180, 240, 300, and 360 microg/min were significantly lower than those by IV infusion (P < .05). Intracoronary infusion at 180, 240, 300, and 360 microg/min was able to shorten the times to induction of optimal and steady-state hyperemia compared to IV infusion (P < .05). Functional significance changed in 5 lesions by IC infusion at 240 to 360 microg/min but not by IV infusion. The results of this study suggest that an IC adenosine continuous infusion method via a microcatheter is safe and effective in inducing steady-state hyperemia and more potent and quicker in inducing optimal hyperemia than the standard IV infusion method.

  9. A Survey on Methods for Reconstructing Surfaces from Unorganized Point Sets

    Directory of Open Access Journals (Sweden)

    Vilius Matiukas

    2011-08-01

    Full Text Available This paper addresses the issue of reconstructing and visualizing surfaces from unorganized point sets. These can be acquired using different techniques, such as 3D-laser scanning, computerized tomography, magnetic resonance imaging and multi-camera imaging. The problem of reconstructing surfaces from their unorganized point sets is common for many diverse areas, including computer graphics, computer vision, computational geometry or reverse engineering. The paper presents three alternative methods that all use variations in complementary cones to triangulate and reconstruct the tested 3D surfaces. The article evaluates and contrasts the three alternatives. (Article in English)

  10. Methods and considerations to determine sphere center from terrestrial laser scanner point cloud data

    International Nuclear Information System (INIS)

    Rachakonda, Prem; Muralikrishnan, Bala; Lee, Vincent; Shilling, Meghan; Sawyer, Daniel; Cournoyer, Luc; Cheok, Geraldine

    2017-01-01

    The Dimensional Metrology Group at the National Institute of Standards and Technology is performing research to support the development of documentary standards within the ASTM E57 committee. This committee is addressing the point-to-point performance evaluation of a subclass of 3D imaging systems called terrestrial laser scanners (TLSs), which are laser-based and use a spherical coordinate system. This paper discusses the usage of sphere targets for this effort, and methods to minimize the errors due to the determination of their centers. The key contributions of this paper include methods to segment sphere data from a TLS point cloud, and the study of some of the factors that influence the determination of sphere centers. (paper)
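
    A common way to determine a sphere center from segmented scanner data is an algebraic least-squares fit; the short sketch below is a generic illustration of that step with a synthetic hemisphere and an assumed noise level, not the NIST procedure itself.

```python
# Minimal sketch of an algebraic least-squares sphere fit to a segmented
# sphere-target point cloud, solving |p - c|^2 = r^2 in the linear form
# 2*p.c + (r^2 - |c|^2) = |p|^2.
import numpy as np

def fit_sphere(points):
    """points: (N, 3) array. Returns (center, radius)."""
    A = np.hstack([2.0 * points, np.ones((points.shape[0], 1))])
    b = np.sum(points**2, axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    center = sol[:3]
    radius = np.sqrt(sol[3] + center @ center)
    return center, radius

# Synthetic test: noisy points on a hemisphere (the half seen by a scanner).
rng = np.random.default_rng(0)
true_c, true_r = np.array([1.0, -2.0, 0.5]), 0.075
phi = rng.uniform(0, 2 * np.pi, 2000)
theta = rng.uniform(0, np.pi / 2, 2000)
dirs = np.column_stack([np.sin(theta) * np.cos(phi),
                        np.sin(theta) * np.sin(phi),
                        np.cos(theta)])
pts = true_c + true_r * dirs + rng.normal(scale=2e-4, size=(2000, 3))
print(fit_sphere(pts))   # should be close to true_c and true_r
```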

  11. The Oil Point Method - A tool for indicative environmental evaluation in material and process selection

    DEFF Research Database (Denmark)

    Bey, Niki

    2000-01-01

    … of environmental evaluation and only approximate information about the product and its life cycle. This dissertation addresses this challenge in presenting a method which is tailored to these requirements of designers - the Oil Point Method (OPM). In providing environmental key information and confining itself to three essential assessment steps, the method enables rough environmental evaluations and supports in this way material- and process-related decision-making in the early stages of design. In its overall structure, the Oil Point Method is related to Life Cycle Assessment - except for two main differences…

  12. A robust real-time surface reconstruction method on point clouds captured from a 3D surface photogrammetry system

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Wenyang [Department of Bioengineering, University of California, Los Angeles, Los Angeles, California 90095 (United States); Cheung, Yam [Department of Radiation Oncology, University of Texas Southwestern, Dallas, Texas 75390 (United States); Sawant, Amit [Department of Radiation Oncology, University of Texas Southwestern, Dallas, Texas, 75390 and Department of Radiation Oncology, University of Maryland, College Park, Maryland 20742 (United States); Ruan, Dan, E-mail: druan@mednet.ucla.edu [Department of Bioengineering, University of California, Los Angeles, Los Angeles, California 90095 and Department of Radiation Oncology, University of California, Los Angeles, Los Angeles, California 90095 (United States)

    2016-05-15

    Purpose: To develop a robust and real-time surface reconstruction method on point clouds captured from a 3D surface photogrammetry system. Methods: The authors have developed a robust and fast surface reconstruction method on point clouds acquired by the photogrammetry system, without explicitly solving the partial differential equation required by a typical variational approach. Taking advantage of the overcomplete nature of the acquired point clouds, their method solves and propagates a sparse linear relationship from the point cloud manifold to the surface manifold, assuming both manifolds share similar local geometry. With relatively consistent point cloud acquisitions, the authors propose a sparse regression (SR) model to directly approximate the target point cloud as a sparse linear combination from the training set, assuming that the point correspondences built by the iterative closest point (ICP) are reasonably accurate and have residual errors following a Gaussian distribution. To accommodate changing noise levels and/or presence of inconsistent occlusions during the acquisition, the authors further propose a modified sparse regression (MSR) model to model the potentially large and sparse error built by ICP with a Laplacian prior. The authors evaluated the proposed method on both clinical point clouds acquired under consistent acquisition conditions and on point clouds with inconsistent occlusions. The authors quantitatively evaluated the reconstruction performance with respect to root-mean-squared-error, by comparing its reconstruction results against those from the variational method. Results: On clinical point clouds, both the SR and MSR models have achieved sub-millimeter reconstruction accuracy and reduced the reconstruction time by two orders of magnitude to a subsecond reconstruction time. On point clouds with inconsistent occlusions, the MSR model has demonstrated its advantage in achieving consistent and robust performance despite the introduced
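
    The sparse-regression idea can be sketched in a simplified form: approximate a flattened target point cloud as a sparse linear combination of training clouds. The example below uses scikit-learn's Lasso on assumed synthetic data with pre-established correspondences; the ICP step and the Laplacian-prior MSR variant are not reproduced.

```python
# Simplified stand-in for the sparse-regression (SR) idea: approximate a
# target point cloud as a sparse linear combination of training clouds.
# Point correspondences are assumed already established (same vertex order).
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)
n_points, n_train = 500, 40

# Assumed training set: flattened (x, y, z) clouds stacked as columns.
base = rng.normal(size=3 * n_points)
train = base[:, None] + 0.3 * rng.normal(size=(3 * n_points, n_train))

# Target cloud built from a few training clouds plus acquisition noise.
w_true = np.zeros(n_train)
w_true[[3, 17, 29]] = [0.5, 0.3, 0.2]
target = train @ w_true + 0.01 * rng.normal(size=3 * n_points)

lasso = Lasso(alpha=1e-3, fit_intercept=False, max_iter=10000)
lasso.fit(train, target)            # sparse coefficients over the training set
recon = train @ lasso.coef_         # reconstructed (denoised) surface points

rmse = np.sqrt(np.mean((recon - target) ** 2))
print("non-zero coefficients:", np.flatnonzero(lasso.coef_))
print(f"reconstruction RMSE: {rmse:.4f}")
```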

  13. A robust real-time surface reconstruction method on point clouds captured from a 3D surface photogrammetry system

    International Nuclear Information System (INIS)

    Liu, Wenyang; Cheung, Yam; Sawant, Amit; Ruan, Dan

    2016-01-01

    Purpose: To develop a robust and real-time surface reconstruction method on point clouds captured from a 3D surface photogrammetry system. Methods: The authors have developed a robust and fast surface reconstruction method on point clouds acquired by the photogrammetry system, without explicitly solving the partial differential equation required by a typical variational approach. Taking advantage of the overcomplete nature of the acquired point clouds, their method solves and propagates a sparse linear relationship from the point cloud manifold to the surface manifold, assuming both manifolds share similar local geometry. With relatively consistent point cloud acquisitions, the authors propose a sparse regression (SR) model to directly approximate the target point cloud as a sparse linear combination from the training set, assuming that the point correspondences built by the iterative closest point (ICP) are reasonably accurate and have residual errors following a Gaussian distribution. To accommodate changing noise levels and/or presence of inconsistent occlusions during the acquisition, the authors further propose a modified sparse regression (MSR) model to model the potentially large and sparse error built by ICP with a Laplacian prior. The authors evaluated the proposed method on both clinical point clouds acquired under consistent acquisition conditions and on point clouds with inconsistent occlusions. The authors quantitatively evaluated the reconstruction performance with respect to root-mean-squared-error, by comparing its reconstruction results against those from the variational method. Results: On clinical point clouds, both the SR and MSR models have achieved sub-millimeter reconstruction accuracy and reduced the reconstruction time by two orders of magnitude to a subsecond reconstruction time. On point clouds with inconsistent occlusions, the MSR model has demonstrated its advantage in achieving consistent and robust performance despite the introduced

  14. A GPU code for analytic continuation through a sampling method

    Directory of Open Access Journals (Sweden)

    Johan Nordström

    2016-01-01

    Full Text Available We here present a code for performing analytic continuation of fermionic Green’s functions and self-energies as well as bosonic susceptibilities on a graphics processing unit (GPU). The code is based on the sampling method introduced by Mishchenko et al. (2000), and is written for the widely used CUDA platform from NVidia. Detailed scaling tests are presented, for two different GPUs, in order to highlight the advantages of this code with respect to standard CPU computations. Finally, as an example of possible applications, we provide the analytic continuation of model Gaussian functions, as well as more realistic test cases from many-body physics.

  15. Constructing C1 Continuous Surface on Irregular Quad Meshes

    Institute of Scientific and Technical Information of China (English)

    HE Jun; GUO Qiang

    2013-01-01

    A new method is proposed for surface construction on irregular quad meshes as an extension of uniform B-spline surfaces. Given a number of control points, which form a regular or irregular quad mesh, a weight function is constructed for each control point. The weight function is defined on a local domain and is C1 continuous. The whole surface is then constructed by the weighted combination of all the control points. The property of the new method is that the surface is defined by piecewise C1 bi-cubic rational parametric polynomials on each quad face. It is an extension of uniform B-spline surfaces in the sense that its definition is analogous to that of the B-spline surface, and it produces a uniform bi-cubic B-spline surface if the control mesh is a regular quad mesh. Examples produced by the new method are also included.

  16. A RECOGNITION METHOD FOR AIRPLANE TARGETS USING 3D POINT CLOUD DATA

    Directory of Open Access Journals (Sweden)

    M. Zhou

    2012-07-01

    Full Text Available LiDAR is capable of obtaining three-dimensional coordinates of the terrain and targets directly and is widely applied in digital cities, disaster mitigation and environmental monitoring. Because of its ability to penetrate low-density vegetation and canopy, the LiDAR technique has clear advantages in the detection and recognition of hidden and camouflaged targets. Based on multi-echo LiDAR data and invariant moment theory, this paper presents a recognition method for classic airplanes (including hidden targets under canopy cover) using KD-tree-segmented point cloud data. The proposed algorithm first uses a KD-tree to organize and manage the point cloud data and applies a clustering method to segment objects; prior knowledge and invariant moments are then utilized to recognise airplanes. The test outcomes verified the practicality and feasibility of the proposed method, which could be applied to target measurement and modelling in subsequent data processing.
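
    A minimal version of the KD-tree-based segmentation step is sketched below as Euclidean clustering (region growing) with SciPy's cKDTree; the radius, minimum cluster size and synthetic scene are assumptions, and the invariant-moment recognition stage is not shown.

```python
# Minimal sketch of KD-tree based Euclidean clustering used to segment a
# point cloud into candidate objects before recognition; this is a generic
# illustration, not the record's exact segmentation pipeline.
import numpy as np
from scipy.spatial import cKDTree

def euclidean_clusters(points, radius=0.5, min_size=10):
    """Group points whose neighbours lie within `radius` into clusters."""
    tree = cKDTree(points)
    unvisited = set(range(len(points)))
    clusters = []
    while unvisited:
        seed = unvisited.pop()
        frontier, cluster = [seed], [seed]
        while frontier:
            idx = frontier.pop()
            for nb in tree.query_ball_point(points[idx], r=radius):
                if nb in unvisited:
                    unvisited.remove(nb)
                    frontier.append(nb)
                    cluster.append(nb)
        if len(cluster) >= min_size:
            clusters.append(np.array(cluster))
    return clusters

# Synthetic scene: two well-separated blobs standing in for segmented objects.
rng = np.random.default_rng(2)
scene = np.vstack([rng.normal([0, 0, 0], 0.2, size=(200, 3)),
                   rng.normal([5, 5, 0], 0.2, size=(150, 3))])
for i, cl in enumerate(euclidean_clusters(scene)):
    print(f"cluster {i}: {len(cl)} points")
```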

  17. Infeasible Interior-Point Methods for Linear Optimization Based on Large Neighborhood

    NARCIS (Netherlands)

    Asadi, A.R.; Roos, C.

    2015-01-01

    In this paper, we design a class of infeasible interior-point methods for linear optimization based on large neighborhood. The algorithm is inspired by a full-Newton step infeasible algorithm with a linear convergence rate in problem dimension that was recently proposed by the second author.

  18. Numerical methods for the simulation of continuous sedimentation in ideal clarifier-thickener units

    Energy Technology Data Exchange (ETDEWEB)

    Buerger, R.; Karlsen, K.H.; Risebro, N.H.; Towers, J.D.

    2001-10-01

    We consider a model of continuous sedimentation. Under idealizing assumptions, the settling of the solid particles under the influence of gravity can be described by the initial value problem for a nonlinear hyperbolic partial differential equation with a flux function that depends discontinuously on height. The purpose of this contribution is to present and demonstrate two numerical methods for simulating continuous sedimentation: a front tracking method and a finite difference method. The basic building blocks in the front tracking method are the solutions of a finite number of certain Riemann problems and a procedure for tracking local collisions of shocks. The solutions of the Riemann problems are recalled herein and the front tracking algorithm is described. As an alternative to the front tracking method, a simple scalar finite difference algorithm is proposed. This method is based on discretizing the spatially varying flux parameters on a mesh that is staggered with respect to that of the conserved variable, resulting in a straightforward generalization of the well-known Engquist-Osher upwind finite difference method. The result is an easily implemented upwind shock capturing method. Numerical examples demonstrate that the front tracking and finite difference methods can be used as efficient and accurate simulation tools for continuous sedimentation. The numerical results for the finite difference method indicate that discontinuities in the local solids concentration are resolved sharply and agree with those produced by the front tracking method. The latter is free of numerical dissipation, which leads to sharply resolved concentration discontinuities, but is more complicated to implement than the former. Available mathematical results for the proposed numerical methods are also briefly reviewed. (author)
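
    The Engquist-Osher upwind idea can be illustrated on the inviscid Burgers equation, a standard scalar conservation law; the sketch below is only a generic example, since the clarifier-thickener model of the record has a height-dependent flux and requires the staggered treatment described above.

```python
# Hedged illustration of the Engquist-Osher upwind flux on Burgers' equation
# u_t + (u^2/2)_x = 0. For the convex flux f(u) = u^2/2 the EO splitting is
# f+(u) = max(u,0)^2/2 and f-(u) = min(u,0)^2/2, and F_EO(a,b) = f+(a)+f-(b).
import numpy as np

def engquist_osher_flux(ul, ur):
    return 0.5 * np.maximum(ul, 0.0) ** 2 + 0.5 * np.minimum(ur, 0.0) ** 2

nx, T = 400, 0.5
dx = 2.0 / nx
x = np.linspace(-1.0 + 0.5 * dx, 1.0 - 0.5 * dx, nx)
u = np.where(x < 0.0, 1.0, -0.5)          # Riemann data forming a shock
t = 0.0
while t < T:
    dt = 0.4 * dx / max(np.max(np.abs(u)), 1e-12)   # CFL condition
    dt = min(dt, T - t)
    ue = np.concatenate(([u[0]], u, [u[-1]]))        # simple outflow boundaries
    F = engquist_osher_flux(ue[:-1], ue[1:])         # fluxes at cell interfaces
    u = u - dt / dx * (F[1:] - F[:-1])
    t += dt

# Rankine-Hugoniot shock speed is 0.25, so the shock sits near x = 0.125 at T.
print("shock position ~", x[np.argmin(np.abs(u - 0.25))])
```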

  19. Using the Direct Sampling Multiple-Point Geostatistical Method for Filling Gaps in Landsat 7 ETM+ SLC-off Imagery

    KAUST Repository

    Yin, Gaohong

    2016-05-01

    Since the failure of the Scan Line Corrector (SLC) instrument on Landsat 7, observable gaps occur in the acquired Landsat 7 imagery, impacting the spatial continuity of observed imagery. Due to the highly geometric and radiometric accuracy provided by Landsat 7, a number of approaches have been proposed to fill the gaps. However, all proposed approaches have evident constraints for universal application. The main issues in gap-filling are an inability to describe the continuity features such as meandering streams or roads, or maintaining the shape of small objects when filling gaps in heterogeneous areas. The aim of the study is to validate the feasibility of using the Direct Sampling multiple-point geostatistical method, which has been shown to reconstruct complicated geological structures satisfactorily, to fill Landsat 7 gaps. The Direct Sampling method uses a conditional stochastic resampling of known locations within a target image to fill gaps and can generate multiple reconstructions for one simulation case. The Direct Sampling method was examined across a range of land cover types including deserts, sparse rural areas, dense farmlands, urban areas, braided rivers and coastal areas to demonstrate its capacity to recover gaps accurately for various land cover types. The prediction accuracy of the Direct Sampling method was also compared with other gap-filling approaches, which have been previously demonstrated to offer satisfactory results, under both homogeneous area and heterogeneous area situations. Studies have shown that the Direct Sampling method provides sufficiently accurate prediction results for a variety of land cover types from homogeneous areas to heterogeneous land cover types. Likewise, it exhibits superior performances when used to fill gaps in heterogeneous land cover types without input image or with an input image that is temporally far from the target image in comparison with other gap-filling approaches.

  20. An adaptive Monte Carlo method under emission point as sampling station for deep penetration calculation

    International Nuclear Information System (INIS)

    Wang, Ruihong; Yang, Shulin; Pei, Lucheng

    2011-01-01

    The deep penetration problem has been one of the difficult problems in shielding calculations with the Monte Carlo method for several decades. In this paper, an adaptive technique that uses the emission point as a sampling station is presented. The main advantage is the choice of the most suitable sampling number at the emission point station so as to minimize the total cost of the random walk. Further, a related importance sampling method is also derived. The main principle is to define the importance function of the response with respect to the particle state and to ensure that the sampling number of the emitted particles is proportional to this importance function. The numerical results show that the adaptive method with the emission point as a station can, to some degree, overcome the difficulty of underestimating the result, and the related importance sampling method gives satisfactory results as well. (author)

  1. Displacement fields from point cloud data: Application of particle imaging velocimetry to landslide geodesy

    Science.gov (United States)

    Aryal, Arjun; Brooks, Benjamin A.; Reid, Mark E.; Bawden, Gerald W.; Pawlak, Geno

    2012-01-01

    Acquiring spatially continuous ground-surface displacement fields from Terrestrial Laser Scanners (TLS) will allow better understanding of the physical processes governing landslide motion at detailed spatial and temporal scales. Problems arise, however, when estimating continuous displacement fields from TLS point-clouds because reflecting points from sequential scans of moving ground are not defined uniquely, thus repeat TLS surveys typically do not track individual reflectors. Here, we implemented the cross-correlation-based Particle Image Velocimetry (PIV) method to derive a surface deformation field using TLS point-cloud data. We estimated associated errors using the shape of the cross-correlation function and tested the method's performance with synthetic displacements applied to a TLS point cloud. We applied the method to the toe of the episodically active Cleveland Corral Landslide in northern California using TLS data acquired in June 2005–January 2007 and January–May 2010. Estimated displacements ranged from decimeters to several meters and they agreed well with independent measurements at better than 9% root mean squared (RMS) error. For each of the time periods, the method provided a smooth, nearly continuous displacement field that coincides with independently mapped boundaries of the slide and permits further kinematic and mechanical inference. For the 2010 data set, for instance, the PIV-derived displacement field identified a diffuse zone of displacement that preceded by over a month the development of a new lateral shear zone. Additionally, the upslope and downslope displacement gradients delineated by the dense PIV field elucidated the non-rigid behavior of the slide.
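
    The core cross-correlation step behind PIV can be sketched with an FFT-based estimate of the shift between two rasterized interrogation windows; the sub-pixel peak fitting and the correlation-shape error model used in the record are omitted, and the synthetic patches below are assumptions.

```python
# Minimal sketch of the cross-correlation step behind PIV: estimate the shift
# between two interrogation windows from the peak of their FFT-based
# circular cross-correlation.
import numpy as np

def cross_correlation_shift(a, b):
    """Return the (dy, dx) displacement of patch a relative to patch b."""
    a = a - a.mean()
    b = b - b.mean()
    corr = np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b))).real
    peak = np.array(np.unravel_index(np.argmax(corr), corr.shape), dtype=float)
    dims = np.array(a.shape)
    peak[peak > dims / 2] -= dims[peak > dims / 2]   # wrap to signed shifts
    return peak

# Synthetic test: shift a random "surface texture" by a known amount.
rng = np.random.default_rng(3)
ref = rng.random((64, 64))
moved = np.roll(ref, (5, -3), axis=(0, 1))
print(cross_correlation_shift(moved, ref))   # expect approximately [ 5. -3.]
```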

  2. A travel time forecasting model based on change-point detection method

    Science.gov (United States)

    LI, Shupeng; GUANG, Xiaoping; QIAN, Yongsheng; ZENG, Junwei

    2017-06-01

    Travel time parameters obtained from road traffic sensor data play an important role in traffic management practice. In this paper, a travel time forecasting model based on change-point detection is proposed for urban road traffic sensor data. A first-order differencing operation is used to preprocess the actual loop data; a change-point detection algorithm is designed to classify the large number of travel time data items into several patterns; and a travel time forecasting model is then established based on the autoregressive integrated moving average (ARIMA) model. In computer simulations, different control parameters are chosen for the adaptive change-point search over the travel time series, which is divided into several sections of similar state. A linear weight function is then used to fit the travel time sequence and to forecast travel time. The results show that the model has high accuracy in travel time forecasting.

  3. Fourier Deconvolution Methods for Resolution Enhancement in Continuous-Wave EPR Spectroscopy.

    Science.gov (United States)

    Reed, George H; Poyner, Russell R

    2015-01-01

    An overview of resolution enhancement of conventional, field-swept, continuous-wave electron paramagnetic resonance spectra using Fourier transform-based deconvolution methods is presented. Basic steps that are involved in resolution enhancement of calculated spectra using an implementation based on complex discrete Fourier transform algorithms are illustrated. Advantages and limitations of the method are discussed. An application to an experimentally obtained spectrum is provided to illustrate the power of the method for resolving overlapped transitions. © 2015 Elsevier Inc. All rights reserved.

  4. An Improved Computational Method for the Calculation of Mixture Liquid-Vapor Critical Points

    Science.gov (United States)

    Dimitrakopoulos, Panagiotis; Jia, Wenlong; Li, Changjun

    2014-05-01

    Knowledge of critical points is important to determine the phase behavior of a mixture. This work proposes a reliable and accurate method in order to locate the liquid-vapor critical point of a given mixture. The theoretical model is developed from the rigorous definition of critical points, based on the SRK equation of state (SRK EoS) or alternatively, on the PR EoS. In order to solve the resulting system of nonlinear equations, an improved method is introduced into an existing Newton-Raphson algorithm, which can calculate all the variables simultaneously in each iteration step. The improvements mainly focus on the derivatives of the Jacobian matrix, on the convergence criteria, and on the damping coefficient. As a result, all equations and related conditions required for the computation of the scheme are illustrated in this paper. Finally, experimental data for the critical points of 44 mixtures are adopted in order to validate the method. For the SRK EoS, average absolute errors of the predicted critical-pressure and critical-temperature values are 123.82 kPa and 3.11 K, respectively, whereas the commercial software package Calsep PVTSIM's prediction errors are 131.02 kPa and 3.24 K. For the PR EoS, the two above mentioned average absolute errors are 129.32 kPa and 2.45 K, while the PVTSIM's errors are 137.24 kPa and 2.55 K, respectively.
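
    A damped Newton-Raphson iteration that updates all variables simultaneously, as described above, can be sketched generically; the placeholder equations and the finite-difference Jacobian below are assumptions, and the EoS-specific critical-point equations of the record are not reproduced.

```python
# Generic sketch of a damped Newton-Raphson iteration for a small nonlinear
# system, solving all variables simultaneously with a finite-difference
# Jacobian. The system below is only a placeholder.
import numpy as np

def residual(z):
    x, y = z
    return np.array([x**2 + y**2 - 4.0,     # placeholder equations
                     np.exp(x) + y - 1.0])

def numerical_jacobian(f, z, h=1e-7):
    n = len(z)
    J = np.zeros((n, n))
    f0 = f(z)
    for j in range(n):
        zp = z.copy()
        zp[j] += h
        J[:, j] = (f(zp) - f0) / h
    return J

def damped_newton(f, z0, damping=0.8, tol=1e-10, max_iter=100):
    z = np.asarray(z0, dtype=float)
    for _ in range(max_iter):
        F = f(z)
        if np.linalg.norm(F, np.inf) < tol:
            return z
        step = np.linalg.solve(numerical_jacobian(f, z), F)
        z = z - damping * step              # damping coefficient < 1 for robustness
    raise RuntimeError("Newton iteration did not converge")

print(damped_newton(residual, [1.0, -1.7]))
```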

  5. A highly accurate algorithm for the solution of the point kinetics equations

    International Nuclear Information System (INIS)

    Ganapol, B.D.

    2013-01-01

    Highlights: • Point kinetics equations for nuclear reactor transient analysis are numerically solved to extreme accuracy. • Results for classic benchmarks found in the literature are given to 9-digit accuracy. • Recent results of claimed accuracy are shown to be less accurate than claimed. • Arguably brings a chapter of numerical evaluation of the PKEs to a close. - Abstract: Attempts to resolve the point kinetics equations (PKEs) describing nuclear reactor transients have been the subject of numerous articles and texts over the past 50 years. Some very innovative methods, such as the RTS (Reactor Transient Simulation) and CAC (Continuous Analytical Continuation) methods of G.R. Keepin and J. Vigil respectively, have been shown to be exceptionally useful. Recently however, several authors have developed methods they consider accurate without a clear basis for their assertion. In response, this presentation will establish a definitive set of benchmarks to enable those developing PKE methods to truthfully assess the degree of accuracy of their methods. Then, with these benchmarks, two recently published methods, found in this journal will be shown to be less accurate than claimed and a legacy method from 1984 will be confirmed
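
    For readers who want to reproduce a reference solution numerically, the point kinetics equations can be integrated directly with a stiff ODE solver; the six-group parameters and step reactivity below are illustrative textbook-style values, not the benchmark cases of the record or its analytical method.

```python
# Illustrative solution of the point kinetics equations (PKEs) with a stiff
# ODE integrator (SciPy Radau); parameter values are generic, not the
# record's benchmarks.
import numpy as np
from scipy.integrate import solve_ivp

beta_i = np.array([0.000215, 0.001424, 0.001274, 0.002568, 0.000748, 0.000273])
lam_i = np.array([0.0124, 0.0305, 0.111, 0.301, 1.14, 3.01])   # 1/s
beta = beta_i.sum()
Lambda = 5.0e-5        # prompt neutron generation time, s
rho = 0.5 * beta       # step reactivity insertion

def pke(t, y):
    n, c = y[0], y[1:]
    dn = (rho - beta) / Lambda * n + np.dot(lam_i, c)
    dc = beta_i / Lambda * n - lam_i * c
    return np.concatenate(([dn], dc))

# Equilibrium initial condition: n = 1, c_i = beta_i / (Lambda * lambda_i)
y0 = np.concatenate(([1.0], beta_i / (Lambda * lam_i)))
sol = solve_ivp(pke, (0.0, 10.0), y0, method="Radau", rtol=1e-8, atol=1e-10)
print("neutron density at t = 10 s:", sol.y[0, -1])
```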

  6. A robust real-time surface reconstruction method on point clouds captured from a 3D surface photogrammetry system.

    Science.gov (United States)

    Liu, Wenyang; Cheung, Yam; Sawant, Amit; Ruan, Dan

    2016-05-01

    To develop a robust and real-time surface reconstruction method on point clouds captured from a 3D surface photogrammetry system. The authors have developed a robust and fast surface reconstruction method on point clouds acquired by the photogrammetry system, without explicitly solving the partial differential equation required by a typical variational approach. Taking advantage of the overcomplete nature of the acquired point clouds, their method solves and propagates a sparse linear relationship from the point cloud manifold to the surface manifold, assuming both manifolds share similar local geometry. With relatively consistent point cloud acquisitions, the authors propose a sparse regression (SR) model to directly approximate the target point cloud as a sparse linear combination from the training set, assuming that the point correspondences built by the iterative closest point (ICP) are reasonably accurate and have residual errors following a Gaussian distribution. To accommodate changing noise levels and/or presence of inconsistent occlusions during the acquisition, the authors further propose a modified sparse regression (MSR) model to model the potentially large and sparse error built by ICP with a Laplacian prior. The authors evaluated the proposed method on both clinical point clouds acquired under consistent acquisition conditions and on point clouds with inconsistent occlusions. The authors quantitatively evaluated the reconstruction performance with respect to root-mean-squared-error, by comparing its reconstruction results against those from the variational method. On clinical point clouds, both the SR and MSR models have achieved sub-millimeter reconstruction accuracy and reduced the reconstruction time by two orders of magnitude to a subsecond reconstruction time. On point clouds with inconsistent occlusions, the MSR model has demonstrated its advantage in achieving consistent and robust performance despite the introduced occlusions. The authors have

  7. Development of a method of continuous improvement of services using the Business Intelligence tools

    Directory of Open Access Journals (Sweden)

    Svetlana V. Kulikova

    2018-01-01

    Full Text Available The purpose of the study was to develop a method for the continuous improvement of services using Business Intelligence tools. Materials and methods: the work is based on the concept of the Deming Cycle, Business Intelligence methods and technologies, the Agile methodology and SCRUM. Results: the article considers the problem of continuous improvement of services and offers solutions using methods and technologies of Business Intelligence. In this context, the purpose of the technology is to support the final decision regarding what needs to be improved in the current organization of services. In other words, Business Intelligence helps the product manager to see what is hidden from the “human eye” on the basis of the received and processed data. The method is developed on the basis of the Deming Cycle concept, Agile methodologies and SCRUM. The article describes the main stages of the method's development based on the activity of the enterprise. A complete Business Intelligence system must be built in the enterprise to identify bottlenecks and to justify the need for their elimination and, in general, for the continuous improvement of the services. This process is represented in DFD notation. The article presents a scheme for the selection of suitable agile methodologies. The proposed concept for solving the stated objectives includes methods for the identification of problems through Business Intelligence technology, the development of a system for troubleshooting and the analysis of the results of the introduced changes. A technical description of the project is given. Conclusion: as a result of the authors' work, a concept of the method for the continuous improvement of services was formed, using Business Intelligence technology and taking into account the specifics of enterprises offering SaaS solutions. It was also found that when using this method, the recommended development methodology is SCRUM. The result of this scientific

  8. Development of continuous pharmaceutical production processes supported by process systems engineering methods and tools

    DEFF Research Database (Denmark)

    Gernaey, Krist; Cervera Padrell, Albert Emili; Woodley, John

    2012-01-01

    The pharmaceutical industry is undergoing a radical transition towards continuous production processes. Systematic use of process systems engineering (PSE) methods and tools forms the key to achieving this transition in a structured and efficient way.

  9. Apparatus and method for implementing power saving techniques when processing floating point values

    Science.gov (United States)

    Kim, Young Moon; Park, Sang Phill

    2017-10-03

    An apparatus and method are described for reducing power when reading and writing graphics data. For example, one embodiment of an apparatus comprises: a graphics processor unit (GPU) to process graphics data including floating point data; a set of registers, at least one of the registers of the set partitioned to store the floating point data; and encode/decode logic to reduce a number of binary 1 values being read from the at least one register by causing a specified set of bit positions within the floating point data to be read out as 0s rather than 1s.

  10. A comparative study of the maximum power point tracking methods for PV systems

    International Nuclear Information System (INIS)

    Liu, Yali; Li, Ming; Ji, Xu; Luo, Xi; Wang, Meidi; Zhang, Ying

    2014-01-01

    Highlights: • An improved maximum power point tracking method for PV systems was proposed. • The theoretical derivation procedure of the proposed method was provided. • Simulation models of MPPT trackers were established based on MATLAB/Simulink. • Experiments were conducted to verify the effectiveness of the proposed MPPT method. - Abstract: Maximum power point tracking (MPPT) algorithms play an important role in the optimization of the power and efficiency of a photovoltaic (PV) generation system. To address the trade-off in the classical Perturb and Observe (P&Oa) method between response speed and steady-state tracking accuracy, an improved P&O (P&Ob) method is put forward in this paper using the Aitken interpolation algorithm. To validate the correctness and performance of the proposed method, simulation and experimental studies have been carried out. Simulation models of the classical P&Oa method and the improved P&Ob method have been established in MATLAB/Simulink to analyze each technique under varying solar irradiation and temperature. The experimental results show that the tracking efficiency of the P&Ob method is on average 93%, compared to 72% for the P&Oa method; this conclusion basically agrees with the simulation study. Finally, the applicable conditions and scope of these MPPT methods in practical applications are proposed.
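
    The classical P&O loop itself is compact enough to sketch; the PV power curve below is an assumed single-peak stand-in for a real panel, and the Aitken-interpolation refinement of the improved method is not reproduced.

```python
# Minimal sketch of the classical perturb-and-observe (P&O) MPPT loop.
def pv_power(v):
    """Assumed PV power-voltage curve with a maximum near v = 30 V."""
    return max(0.0, 200.0 - 0.25 * (v - 30.0) ** 2)

def perturb_and_observe(v0=20.0, step=0.5, iterations=100):
    v, p_prev = v0, pv_power(v0)
    direction = +1.0
    for _ in range(iterations):
        v += direction * step              # perturb the operating voltage
        p = pv_power(v)
        if p < p_prev:                     # power dropped: reverse the search
            direction = -direction
        p_prev = p                         # observe and continue
    return v, p_prev

v_mpp, p_mpp = perturb_and_observe()
print(f"operating point: V ≈ {v_mpp:.2f} V, P ≈ {p_mpp:.1f} W")
```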

  11. Time discretization of the point kinetic equations using matrix exponential method and First-Order Hold

    International Nuclear Information System (INIS)

    Park, Yujin; Kazantzis, Nikolaos; Parlos, Alexander G.; Chong, Kil To

    2013-01-01

    Highlights: • Numerical solution for stiff differential equations using the matrix exponential method. • The approximation is based on the First-Order Hold assumption. • Various input examples applied to the point kinetics equations. • The method proves useful and effective. - Abstract: A system of nonlinear differential equations is derived to model the dynamics of neutron density and the delayed neutron precursors within a point kinetics equation modeling framework for a nuclear reactor. The point kinetic equations are mathematically characterized as stiff, occasionally nonlinear, ordinary differential equations, posing significant challenges when numerical solutions are sought and traditionally resulting in the need for smaller time step intervals within various computational schemes. In light of the above realization, the present paper proposes a new discretization method inspired by system-theoretic notions and technically based on a combination of the matrix exponential method (MEM) and the First-Order Hold (FOH) assumption. Under the proposed time discretization structure, the sampled-data representation of the nonlinear point kinetic system of equations is derived. The performance of the proposed time discretization procedure is evaluated using several case studies with sinusoidal reactivity profiles and multiple input examples (reactivity and neutron source function). It is shown that, by applying the proposed method under a First-Order Hold for the neutron density and the precursor concentrations at each time step interval, the stiffness problem associated with the point kinetic equations can be adequately addressed and resolved. Finally, as evidenced by the aforementioned detailed simulation studies, the proposed method retains its validity and accuracy for a wide range of reactor operating conditions, including large sampling periods dictated by physical and/or technical limitations associated with the current state of sensor and
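
    The matrix-exponential/First-Order-Hold discretization step can be sketched for a linear state-space system using SciPy's cont2discrete; the 2-state system and sampling period below are arbitrary assumptions, and the nonlinear point kinetics model of the record is not reproduced.

```python
# Hedged sketch of discretizing a linear state-space system under a
# First-Order Hold; the arbitrary stable 2-state system stands in for the
# linearised dynamics purely to show the matrix-exponential/FOH step.
import numpy as np
from scipy.signal import cont2discrete

A = np.array([[0.0, 1.0], [-4.0, -0.5]])   # assumed continuous-time dynamics
B = np.array([[0.0], [1.0]])
C = np.eye(2)
D = np.zeros((2, 1))
dt = 0.1                                    # sampling period

Ad, Bd, Cd, Dd, _ = cont2discrete((A, B, C, D), dt, method="foh")
print("Ad =\n", Ad)
print("Bd =\n", Bd)

# One step of the resulting sampled-data realization x[k+1] = Ad x[k] + Bd u[k]
x = np.array([1.0, 0.0])
u = np.array([1.0])
print("x[1] =", Ad @ x + Bd @ u)
```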

  12. A Novel Complementary Method for the Point-Scan Nondestructive Tests Based on Lamb Waves

    Directory of Open Access Journals (Sweden)

    Rahim Gorgin

    2014-01-01

    Full Text Available This study presents a novel area-scan damage identification method based on Lamb waves which can be used as a complementary method for point-scan nondestructive techniques. The proposed technique is able to identify the most probable locations of damage prior to the point-scan test, which decreases the time and cost of inspection. The test-piece surface was partitioned into smaller areas and the probability of damage presence in each area was evaluated. The A0 Lamb wave mode was generated and collected using a mobile hand-made transducer set at each area. Subsequently, a damage presence probability index (DPPI) based on the energy of the captured responses was defined for each area. The area with the highest DPPI value highlights the most probable locations of damage in the test-piece. Point-scan nondestructive methods can then be used, once these areas are found, to identify the damage in detail. The approach was validated by predicting the most probable locations of representative damage, including a through-thickness hole and a crack in aluminum plates. The experimental results demonstrated the high potential of the developed method in defining the most probable locations of damage in structures.
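
    A toy version of an energy-based damage presence probability index is sketched below; the synthetic baseline and response signals are assumptions, and the tone-burst/continuous-excitation modulation physics is not modelled.

```python
# Toy sketch of an energy-based damage presence probability index (DPPI):
# the response energy of each inspected area is compared with a baseline and
# normalised so the areas with the largest change score highest.
import numpy as np

rng = np.random.default_rng(4)
n_areas, n_samples = 6, 1024
baseline = rng.normal(size=(n_areas, n_samples))
current = baseline + 0.02 * rng.normal(size=(n_areas, n_samples))
current[3] += 0.3 * rng.normal(size=n_samples)   # area 3 "damaged": extra energy

def dppi(baseline, current):
    e_base = np.sum(baseline**2, axis=1)
    e_curr = np.sum(current**2, axis=1)
    change = np.abs(e_curr - e_base) / e_base     # relative energy change per area
    return change / change.sum()                  # normalise to an index

index = dppi(baseline, current)
print("DPPI per area:", np.round(index, 3))
print("most probable damaged area:", int(np.argmax(index)))
```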

  13. The Oblique Basis Method from an Engineering Point of View

    International Nuclear Information System (INIS)

    Gueorguiev, V G

    2012-01-01

    The oblique basis method is reviewed from an engineering point of view related to vibration and control theory. Examples are used to demonstrate and relate the oblique basis in nuclear physics to equivalent mathematical problems in vibration theory. The mathematical techniques, such as principal coordinates and root locus, used by vibration and control theory engineers are shown to be relevant to the Richardson-Gaudin pairing-like problems in nuclear physics.

  14. Measuring the exhaust gas dew point of continuously operated combustion plants

    Energy Technology Data Exchange (ETDEWEB)

    Fehler, D.

    1985-07-16

    Low waste-gas temperatures represent one means of minimizing the energy consumption of combustion facilities. However, condensation in the waste gas should be prevented, since it could destroy plant components. Measuring the waste-gas dew point allows the combustion parameters to be controlled in such a way that operation at low temperatures is possible without danger of condensation. Dew point sensors thus provide an important signal for optimizing combustion facilities.

  15. Three points of view in transport theory

    International Nuclear Information System (INIS)

    Ruben, Panta Pazos; Tilio de Vilhena, M.

    2001-01-01

    A lot of effort in Transport Theory is devoted to developing numerical methods or hybrid numerical-analytical techniques. We present in this work three points of view on transport problems. First, the C0 semigroup approach, in which the free transport operator ψ → μ·∇ψ generates a strongly continuous semigroup. The operators ψ → σt ψ and ψ → ∫ k(x,μ,μ′) ψ(x,μ′) dμ′ are bounded, and by perturbation the transport operator ψ → μ·∇ψ + σt ψ − Kψ also generates a strongly continuous semigroup. To prove the convergence of the approximations of a numerical method to the exact solution we use the approximation theorem for C0 semigroups in canonical form. Secondly, discrete-scheme theory is employed to establish the rate of convergence of numerical techniques in transport theory. For the 1D time-dependent transport problem and the two-dimensional steady-state problem we summarize some estimates, incorporating different boundary conditions. Finally, we give a survey of the dynamical behavior of the SN approximations. In order to give a unified approach, some results illustrate the equivalence of the three points of view for the case of the steady-state transport problem in slab geometry. (author)

  16. Robust Trajectory Design in Highly Perturbed Environments Leveraging Continuation Methods, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — Research is proposed to investigate continuation methods to improve the robustness of trajectory design algorithms for spacecraft in highly perturbed dynamical...

  17. Acid dew point measurements in combustion gases using the dew point measuring system AH 85100

    Energy Technology Data Exchange (ETDEWEB)

    Fehler, D.

    1984-01-01

    Measuring system for continuous monitoring of the SO2/SO3 dew point in the flue gas, characterized by a low failure rate, applicability inside the flue gas duct, maintenance-free continuous operation, and self-cleaning. The measuring principle is the cooling of the sensor element down to the 'onset of condensation' signal. Sensor surface temperatures are listed and evaluated as flue gas dew point temperatures. The measuring system is described. (DOMA).

  18. Numerical Solution of the Advection Equation with the Radial Point Interpolation Method and Time Integration with the Discontinuous Galerkin Method

    Directory of Open Access Journals (Sweden)

    Kresno Wikan Sadono

    2016-12-01

    Full Text Available Differential equations are widely used to describe various phenomena in science and engineering. Many complex problems in everyday life can be modelled with differential equations and solved by numerical methods. One class of numerical methods, the meshfree or meshless methods, has developed recently and requires no element generation on the domain. This research combines a meshless method, the radial point interpolation method (RPIM), with the discontinuous Galerkin method (DGM) for time integration; the combined method is called RPIM-DGM. The RPIM-DGM is applied to the one-dimensional advection equation. RPIM uses the multiquadric (MQ) basis function, and the time integration is derived for both linear-DGM and quadratic-DGM. The simulation results show that the method approximates the analytical solution well. The numerical results of RPIM-DGM show that increasing the number of nodes and decreasing the time increment makes the numerical results more accurate. A further result shows that, for a given time increment and number of nodes, time integration with quadratic-DGM yields higher accuracy than linear-DGM.
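
    Radial point interpolation with the multiquadric basis can be sketched on scattered 1D nodes; the shape parameter, node set and test function below are assumptions, and the DGM time integration of the record is not shown.

```python
# Small sketch of radial point interpolation with the multiquadric (MQ)
# basis on scattered 1D nodes.
import numpy as np

def mq(r, c=0.5):
    return np.sqrt(r**2 + c**2)              # multiquadric radial basis

nodes = np.linspace(0.0, 1.0, 15)
values = np.sin(2 * np.pi * nodes)           # assumed field sampled at the nodes

# Solve for the RBF weights: Phi w = values, Phi_ij = mq(|x_i - x_j|)
Phi = mq(np.abs(nodes[:, None] - nodes[None, :]))
weights = np.linalg.solve(Phi, values)

def interpolate(x):
    return mq(np.abs(x[:, None] - nodes[None, :])) @ weights

x_eval = np.linspace(0.0, 1.0, 200)
err = np.max(np.abs(interpolate(x_eval) - np.sin(2 * np.pi * x_eval)))
print(f"max interpolation error: {err:.2e}")
```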

  19. Higher-order triangular spectral element method with optimized cubature points for seismic wavefield modeling

    Science.gov (United States)

    Liu, Youshan; Teng, Jiwen; Xu, Tao; Badal, José

    2017-05-01

    The mass-lumped method avoids the cost of inverting the mass matrix and simultaneously maintains spatial accuracy by adopting additional interior integration points, known as cubature points. To date, such points are only known analytically in tensor domains, such as quadrilateral or hexahedral elements. Thus, the diagonal-mass-matrix spectral element method (SEM) in non-tensor domains always relies on numerically computed interpolation points or quadrature points. However, only the cubature points for degrees 1 to 6 are known, which is the reason that we have developed a p-norm-based optimization algorithm to obtain higher-order cubature points. In this way, we obtain and tabulate new cubature points with all positive integration weights for degrees 7 to 9. The dispersion analysis illustrates that the dispersion relation determined from the new optimized cubature points is comparable to that of the mass and stiffness matrices obtained by exact integration. Simultaneously, the Lebesgue constant for the new optimized cubature points indicates its surprisingly good interpolation properties. As a result, such points provide both good interpolation properties and integration accuracy. The Courant-Friedrichs-Lewy (CFL) numbers are tabulated for the conventional Fekete-based triangular spectral element (TSEM), the TSEM with exact integration, and the optimized cubature-based TSEM (OTSEM). A complementary study demonstrates the spectral convergence of the OTSEM. A numerical example conducted on a half-space model demonstrates that the OTSEM improves the accuracy by approximately one order of magnitude compared to the conventional Fekete-based TSEM. In particular, the accuracy of the 7th-order OTSEM is even higher than that of the 14th-order Fekete-based TSEM. Furthermore, the OTSEM produces a result that can compete in accuracy with the quadrilateral SEM (QSEM). The high accuracy of the OTSEM is also tested with a non-flat topography model. In terms of computational

  20. A Registration Method Based on Contour Point Cloud for 3D Whole-Body PET and CT Images

    Directory of Open Access Journals (Sweden)

    Zhiying Song

    2017-01-01

    Full Text Available The PET and CT fusion image, combining anatomical and functional information, has important clinical meaning. An effective registration of PET and CT images is the basis of image fusion. This paper presents a multithreaded registration method based on contour point clouds for 3D whole-body PET and CT images. Firstly, a geometric feature-based segmentation (GFS) method and a dynamic threshold denoising (DTD) method are proposed to preprocess the CT and PET images, respectively. Next, a new automated trunk slice extraction method is presented for extracting feature point clouds. Finally, the multithreaded Iterative Closest Point algorithm is adopted to drive an affine transform. We compare our method with a multiresolution registration method based on Mattes Mutual Information on 13 pairs (246~286 slices per pair) of 3D whole-body PET and CT data. Experimental results demonstrate the registration effectiveness of our method, with a lower negative normalization correlation (NC = −0.933) on feature images and a smaller Euclidean distance error (ED = 2.826) on landmark points, outperforming the source data (NC = −0.496, ED = 25.847) and the compared method (NC = −0.614, ED = 16.085). Moreover, our method is about ten times faster than the compared one.
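
    The closest-point registration step can be sketched as a simplified rigid ICP with SVD-based alignment; the record's multithreaded affine ICP on contour point clouds is not reproduced, and the synthetic clouds below are assumptions.

```python
# Simplified rigid ICP sketch: nearest-neighbour correspondences plus
# SVD-based (Kabsch) alignment, iterated a fixed number of times.
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                 # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def icp(source, target, iterations=30):
    tree = cKDTree(target)
    src = source.copy()
    for _ in range(iterations):
        _, idx = tree.query(src)             # closest-point correspondences
        R, t = best_rigid_transform(src, target[idx])
        src = src @ R.T + t
    return src

# Synthetic check: align a rotated and translated copy of the target.
rng = np.random.default_rng(5)
target = rng.random((300, 3))
angle = np.deg2rad(10)
Rz = np.array([[np.cos(angle), -np.sin(angle), 0],
               [np.sin(angle),  np.cos(angle), 0],
               [0, 0, 1]])
source = (target - 0.1) @ Rz.T
aligned = icp(source, target)
print("mean alignment error:", np.mean(np.linalg.norm(aligned - target, axis=1)))
```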

  1. Factors influencing superimposition error of 3D cephalometric landmarks by plane orientation method using 4 reference points: 4 point superimposition error regression model.

    Science.gov (United States)

    Hwang, Jae Joon; Kim, Kee-Deog; Park, Hyok; Park, Chang Seo; Jeong, Ho-Gul

    2014-01-01

    Superimposition has been used as a method to evaluate the changes produced by orthodontic or orthopedic treatment in the dental field. With the introduction of cone beam CT (CBCT), evaluating 3-dimensional changes after treatment became possible by superimposition. 4-point plane orientation is one of the simplest ways to achieve superimposition of 3-dimensional images. To find factors influencing the superimposition error of cephalometric landmarks by the 4-point plane orientation method, and to evaluate the reproducibility of cephalometric landmarks for analyzing superimposition error, 20 patients with normal skeletal and occlusal relationships who underwent CBCT for diagnosis of temporomandibular disorder were analyzed. The nasion, sella turcica, basion and the midpoint between the left and right most posterior points of the lesser wing of the sphenoidal bone were used to define a three-dimensional (3D) anatomical reference co-ordinate system. Another 15 reference cephalometric points were also determined three times in the same image. The reorientation error of each landmark could be explained substantially (23%) by a linear regression model consisting of 3 factors describing the position of each landmark relative to the reference axes and the locating error. The 4-point plane orientation system may produce a reorientation error that varies with the perpendicular distance between the landmark and the x-axis; the reorientation error also increases as the locating error and the shift of the reference axes viewed from each landmark increase. Therefore, in order to reduce the reorientation error, the accuracy of all landmarks, including the reference points, is important. Construction of the regression model using reference points of greater precision is required for the clinical application of this model.

  2. The shooting method and multiple solutions of two/multi-point BVPs of second-order ODE

    Directory of Open Access Journals (Sweden)

    Man Kam Kwong

    2006-06-01

    Full Text Available Within the last decade, there has been growing interest in the study of multiple solutions of two- and multi-point boundary value problems of nonlinear ordinary differential equations as fixed points of a cone mapping. Undeniably many good results have emerged. The purpose of this paper is to point out that, in the special case of second-order equations, the shooting method can be an effective tool, sometimes yielding better results than those obtainable via fixed point techniques.
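
    A minimal sketch of the shooting idea for a second-order two-point BVP is given below: the unknown initial slope is found as a root of the endpoint mismatch. The test problem y'' = -y + x with y(0) = 0, y(1) = 1 is chosen only for illustration (its exact solution is y = x); it is not taken from the paper.

```python
# Minimal sketch of the shooting method for the second-order two-point BVP
#   y'' = -y + x,  y(0) = 0,  y(1) = 1
# (an illustrative test problem, not one from the paper). The unknown initial
# slope s is found as a root of the endpoint mismatch F(s) = y(1; s) - 1.
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

a, b = 0.0, 1.0
ya, yb = 0.0, 1.0

def rhs(x, u):
    # u = [y, y']
    return [u[1], -u[0] + x]

def mismatch(s):
    sol = solve_ivp(rhs, (a, b), [ya, s], rtol=1e-8, atol=1e-10)
    return sol.y[0, -1] - yb

s_star = brentq(mismatch, -10.0, 10.0)   # bracket chosen by inspection
print("shooting slope:", s_star)         # exact solution is y = x, so s ~ 1.0
```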

  3. Empirical continuation of the differential cross section

    International Nuclear Information System (INIS)

    Borbely, I.

    1978-12-01

    The theoretical basis as well as the practical methods of empirical continuation of the differential cross section into the nonphysical region of the cos theta variable are discussed. The equivalence of the different methods is proved. A physical applicability condition is given and the published applications are reviewed. In many cases the correctly applied procedure turns out to provide nonsignificant or even incorrect structure information which points to the necessity for careful and statistically complete analysis of the experimental data with a physical understanding of the analysed process. (author)

  4. A Newton method for solving continuous multiple material minimum compliance problems

    DEFF Research Database (Denmark)

    Stolpe, M; Stegmann, Jan

    method, one or two linear saddle point systems are solved. These systems involve the Hessian of the objective function, which is both expensive to compute and completely dense. Therefore, the linear algebra is arranged such that the Hessian is not explicitly formed. The main concern is to solve...

  5. A Newton method for solving continuous multiple material minimum compliance problems

    DEFF Research Database (Denmark)

    Stolpe, Mathias; Stegmann, Jan

    2007-01-01

    method, one or two linear saddle point systems are solved. These systems involve the Hessian of the objective function, which is both expensive to compute and completely dense. Therefore, the linear algebra is arranged such that the Hessian is not explicitly formed. The main concern is to solve...

  6. End points and assessments in esthetic dental treatment.

    Science.gov (United States)

    Ishida, Yuichi; Fujimoto, Keiko; Higaki, Nobuaki; Goto, Takaharu; Ichikawa, Tetsuo

    2015-10-01

    There are two key considerations for successful esthetic dental treatments. This article systematically describes the two key considerations: the end points of esthetic dental treatments and assessments of esthetic outcomes, which are also important for acquiring clinical skill in esthetic dental treatments. The end point and assessment of esthetic dental treatment were discussed through literature reviews and clinical practices. Before designing a treatment plan, the end point of dental treatment should be established. The section entitled "End point of esthetic dental treatment" discusses treatments for maxillary anterior teeth and the restoration of facial profile with prostheses. The process of assessing treatment outcomes entitled "Assessments of esthetic dental treatment" discusses objective and subjective evaluation methods. Practitioners should reach an agreement regarding desired end points with patients through medical interviews, and continuing improvements and developments of esthetic assessments are required to raise the therapeutic level of esthetic dental treatments. Copyright © 2015 Japan Prosthodontic Society. Published by Elsevier Ltd. All rights reserved.

  7. New practical method for evaluation of a conventional flat plate continuous pistachio dryer

    Energy Technology Data Exchange (ETDEWEB)

    Kouchakzadeh, Ahmad [Agri Machinery Engineering, Ilam University, Ilam (Iran, Islamic Republic of); Tavakoli, Teymur [Agri Machinery Engineering, Tarbyat Modares University, Tehran (Iran, Islamic Republic of)

    2011-07-15

    Highlights: → Evaluation of a conventional flat plate continuous pistachio dryer with a new feasible method. → Using thermophysical properties of air and matter. → This manner could be utilized in similar dryers for other agricultural products. → Method shows the heat loss and power separately. -- Abstract: Testing a dryer is necessary to evaluate its absolute and comparative performance with other dryers. A conventional flat plate continuous pistachio dryer was tested by a new practical method of mass and energy equilibrium. Results showed that the average power consumption and heat loss in three tests are 62.13 and 18.99 kW, respectively. The ratio of heat loss to power consumption showed that the efficiency of the practical pistachio flat plate dryer is about 69.4%.
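
    A quick check of the reported figures, assuming the efficiency is computed as one minus the ratio of heat loss to power consumption (the abstract does not state the formula explicitly):

```python
# Reported three-test averages from the abstract.
power_kw = 62.13      # average power consumption
heat_loss_kw = 18.99  # average heat loss

efficiency = 1.0 - heat_loss_kw / power_kw   # assumed definition
print(f"estimated dryer efficiency: {efficiency:.1%}")   # ~69.4%
```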

  8. New practical method for evaluation of a conventional flat plate continuous pistachio dryer

    International Nuclear Information System (INIS)

    Kouchakzadeh, Ahmad; Tavakoli, Teymur

    2011-01-01

    Highlights: → Evaluation of a conventional flat plate continuous pistachio dryer with a new feasible method. → Using thermophysical properties of air and matter. → This manner could be utilized in similar dryers for other agricultural products. → Method shows the heat loss and power separately. -- Abstract: Testing a dryer is necessary to evaluate its absolute and comparative performance with other dryers. A conventional flat plate continuous pistachio dryer was tested by a new practical method of mass and energy equilibrium. Results showed that the average power consumption and heat loss in three tests are 62.13 and 18.99 kW, respectively. The ratio of heat loss to power consumption showed that the efficiency of the practical pistachio flat plate dryer is about 69.4%.

  9. A Robust Shape Reconstruction Method for Facial Feature Point Detection

    Directory of Open Access Journals (Sweden)

    Shuqiu Tan

    2017-01-01

    Full Text Available Facial feature point detection has seen great research advances in recent years. Numerous methods have been developed and applied in practical face analysis systems. However, it is still a quite challenging task because of the large variability in expressions and gestures and the existence of occlusions in real-world photo shoots. In this paper, we present a robust sparse reconstruction method for face alignment problems. Instead of a direct regression between the feature space and the shape space, the concept of shape increment reconstruction is introduced. Moreover, a set of coupled overcomplete dictionaries, termed the shape increment dictionary and the local appearance dictionary, are learned in a regressive manner to select robust features and fit shape increments. Additionally, to make the learned model more generalized, we select the best matched parameter set through extensive validation tests. Experimental results on three public datasets demonstrate that the proposed method achieves better robustness than state-of-the-art methods.

  10. Estimation Methods of the Point Spread Function Axial Position: A Comparative Computational Study

    Directory of Open Access Journals (Sweden)

    Javier Eduardo Diaz Zamboni

    2017-01-01

    Full Text Available The precise knowledge of the point spread function is central for any imaging system characterization. In fluorescence microscopy, point spread function (PSF determination has become a common and obligatory task for each new experimental device, mainly due to its strong dependence on acquisition conditions. During the last decade, algorithms have been developed for the precise calculation of the PSF, which fit model parameters that describe image formation on the microscope to experimental data. In order to contribute to this subject, a comparative study of three parameter estimation methods is reported, namely: I-divergence minimization (MIDIV, maximum likelihood (ML and non-linear least square (LSQR. They were applied to the estimation of the point source position on the optical axis, using a physical model. Methods’ performance was evaluated under different conditions and noise levels using synthetic images and considering success percentage, iteration number, computation time, accuracy and precision. The main results showed that the axial position estimation requires a high SNR to achieve an acceptable success level and higher still to be close to the estimation error lower bound. ML achieved a higher success percentage at lower SNR compared to MIDIV and LSQR with an intrinsic noise source. Only the ML and MIDIV methods achieved the error lower bound, but only with data belonging to the optical axis and high SNR. Extrinsic noise sources worsened the success percentage, but no difference was found between noise sources for the same method for all methods studied.

  11. Parameter identification for continuous point emission source based on Tikhonov regularization method coupled with particle swarm optimization algorithm.

    Science.gov (United States)

    Ma, Denglong; Tan, Wei; Zhang, Zaoxiao; Hu, Jun

    2017-03-05

    In order to identify the parameters of a hazardous gas emission source in the atmosphere with less prior information and a reliable probability estimate, a hybrid algorithm coupling Tikhonov regularization with particle swarm optimization (PSO) was proposed. When the source location is known, the source strength can be estimated successfully by the common Tikhonov regularization method, but this is invalid when the information about both source strength and location is absent. Therefore, a hybrid method combining linear Tikhonov regularization and the PSO algorithm was designed. With this method, the nonlinear inverse dispersion model was transformed to a linear form under some assumptions, and the source parameters, including source strength and location, were identified simultaneously by the linear Tikhonov-PSO regularization method. The regularization parameters were selected by the L-curve method. The estimation results with different regularization matrices showed that the confidence interval with a high-order regularization matrix is narrower than that with a zero-order regularization matrix, but the estimated source parameters are close to each other for the different regularization matrices. A nonlinear Tikhonov-PSO hybrid regularization was also designed with the primary nonlinear dispersion model to estimate the source parameters. The comparison of simulation and experimental cases showed that the linear Tikhonov-PSO method with the transformed linear inverse model has higher computational efficiency than the nonlinear Tikhonov-PSO method, and the confidence intervals from linear Tikhonov-PSO are more reasonable than those from the nonlinear method. The estimation results from the linear Tikhonov-PSO method are similar to those from the single PSO algorithm, and a reasonable confidence interval with some probability level can additionally be given by the Tikhonov-PSO method. Therefore, the presented linear Tikhonov-PSO regularization method is a good potential method for hazardous emission
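
    The sketch below shows only the zeroth-order Tikhonov (ridge) step on a linear forward model with a toy dispersion kernel; the paper's Tikhonov-PSO hybrid, the L-curve selection of the regularization parameter, and any real dispersion model are not reproduced, and all numbers are illustrative.

```python
# Zeroth-order Tikhonov (ridge) inversion for source strengths on a grid of
# candidate locations, with a toy dispersion kernel as the linear forward
# model. The paper's Tikhonov-PSO hybrid and L-curve selection are not shown.
import numpy as np

def kernel(receptors, sources, sigma=20.0):
    """Toy kernel: concentration per unit source strength at each receptor."""
    d = np.linalg.norm(receptors[:, None, :] - sources[None, :, :], axis=2)
    return np.exp(-(d / sigma) ** 2) / sigma ** 2

rng = np.random.default_rng(1)
receptors = rng.uniform(0.0, 100.0, size=(30, 2))   # sensor positions
candidates = rng.uniform(0.0, 100.0, size=(50, 2))  # candidate source locations
A = kernel(receptors, candidates)

# Synthetic data: one active source (index 7, strength 5) plus noise.
q_true = np.zeros(50)
q_true[7] = 5.0
c = A @ q_true + 1e-4 * rng.normal(size=30)

lam = 1e-3   # regularization parameter (chosen by the L-curve in the paper)
q_hat = np.linalg.solve(A.T @ A + lam ** 2 * np.eye(50), A.T @ c)
print("strongest candidate:", q_hat.argmax(), "estimated strength:", q_hat.max())
```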

  12. Locating single-point sources from arrival times containing large picking errors (LPEs): the virtual field optimization method (VFOM)

    Science.gov (United States)

    Li, Xi-Bing; Wang, Ze-Wei; Dong, Long-Jun

    2016-01-01

    Microseismic monitoring systems using local location techniques tend to be timely, automatic and stable. One basic requirement of these systems is the automatic picking of arrival times. However, arrival times generated by automated techniques always contain large picking errors (LPEs), which may make the location solution unreliable and cause the integrated system to be unstable. To overcome the LPE issue, we propose the virtual field optimization method (VFOM) for locating single-point sources. In contrast to existing approaches, the VFOM optimizes a continuous, virtually established objective function to search the space for the common intersection of the hyperboloids determined by sensor pairs, rather than the least residual between the model-calculated and measured arrivals. The results of numerical examples and in-situ blasts show that the VFOM can obtain more precise and stable solutions than traditional methods when the input data contain LPEs. Furthermore, we discuss the impact of LPEs on objective functions to determine the LPE-tolerant mechanism, velocity sensitivity and stopping criteria of the VFOM. The proposed method is also capable of locating acoustic sources using passive techniques such as passive sonar detection and acoustic emission.
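
    For orientation, the sketch below solves a generic arrival-time-difference localization problem by minimizing robust pairwise hyperbolic residuals; it only illustrates the kind of sensor-pair objective the VFOM builds on, and the virtual-field construction and LPE analysis of the paper are not reproduced. The geometry, wave speed and synthetic outlier are made up.

```python
# Generic arrival-time-difference localization: minimize robust pairwise
# hyperbolic residuals over sensor pairs. Geometry, wave speed and the
# synthetic picking error are made up; this is not the VFOM itself.
import numpy as np
from itertools import combinations
from scipy.optimize import minimize

v = 3000.0                                        # assumed wave speed (m/s)
rng = np.random.default_rng(7)
sensors = rng.uniform(0.0, 1000.0, size=(8, 3))   # sensor coordinates (m)
src_true = np.array([420.0, 310.0, 150.0])

t = np.linalg.norm(sensors - src_true, axis=1) / v
t[0] += 0.05                                      # one large picking error (LPE)

def objective(p):
    """Sum of soft-L1 pairwise hyperbolic residuals (downweights LPEs)."""
    d = np.linalg.norm(sensors - p, axis=1)
    res = 0.0
    for i, j in combinations(range(len(sensors)), 2):
        r = (d[i] - d[j]) / v - (t[i] - t[j])
        res += np.sqrt(r * r + 1e-8)
    return res

sol = minimize(objective, x0=np.array([500.0, 500.0, 500.0]), method="Nelder-Mead")
print("estimated source location:", np.round(sol.x, 1))
```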

  13. Phase-integral method allowing nearlying transition points

    CERN Document Server

    Fröman, Nanny

    1996-01-01

    The efficiency of the phase-integral method developed by the present authors has been shown both analytically and numerically in many publications. With the inclusion of supplementary quantities, closely related to new Stokes constants and obtained with the aid of comparison equation technique, important classes of problems in which transition points may approach each other become accessible to accurate analytical treatment. The exposition in this monograph is of a mathematical nature but has important physical applications, some examples of which are found in the adjoined papers. Thus, we would like to emphasize that, although we aim at mathematical rigor, our treatment is made primarily with physical needs in mind. To introduce the reader into the background of this book, we start by describing the phase-integral approximation of arbitrary order generated from an unspecified base function. This is done in Chapter 1, which is reprinted, after minor changes, from a review article. Chapter 2 is the re...

  14. Zero point and zero suffix methods with robust ranking for solving fully fuzzy transportation problems

    Science.gov (United States)

    Ngastiti, P. T. B.; Surarso, Bayu; Sutimin

    2018-05-01

    The transportation problem in distribution, moving a commodity or goods from supply points to demand points, aims to minimize the transportation cost. A fuzzy transportation problem is one in which the transport costs, supply and demand are fuzzy quantities. In the case study at CV. Bintang Anugerah Elektrik, a company engaged in the manufacture of gensets that has more than one distributor, we use the zero point and zero suffix methods to find the minimum transportation cost. In implementing both methods, we use robust ranking techniques for the defuzzification process. The study results show that the zero suffix method requires fewer iterations than the zero point method.

  15. On stability of fixed points and chaos in fractional systems

    Science.gov (United States)

    Edelman, Mark

    2018-02-01

    In this paper, we propose a method to calculate asymptotically period two sinks and define the range of stability of fixed points for a variety of discrete fractional systems of the order 0 logistic maps. Based on our analysis, we make a conjecture that chaos is impossible in the corresponding continuous fractional systems.

  16. A fast point-cloud computing method based on spatial symmetry of Fresnel field

    Science.gov (United States)

    Wang, Xiangxiang; Zhang, Kai; Shen, Chuan; Zhu, Wenliang; Wei, Sui

    2017-10-01

    Computer-generated holography (CGH) for real-time holographic video display is very challenging because of the high space-bandwidth product (SBP) it requires. This paper is based on the point-cloud method and takes advantage of the reversibility of Fresnel diffraction along the propagation direction and of the spatial symmetry of the fringe pattern of a point source, known as a Gabor zone plate, which can therefore be used as a basis for fast calculation of the diffraction field in CGH. A fast Fresnel CGH method based on the novel look-up table (N-LUT) method is proposed: the principal fringe patterns (PFPs) at a virtual plane are pre-calculated by the acceleration algorithm and stored; the Fresnel diffraction fringe pattern at a dummy plane is then obtained; finally, the field is propagated from the dummy plane to the hologram plane. Simulation experiments and optical experiments based on Liquid Crystal on Silicon (LCOS) are set up to demonstrate the validity of the proposed method. While preserving the quality of the 3D reconstruction, the method can be applied to shorten the computation time and improve computational efficiency.

  17. On the Convergence of the Iteration Sequence in Primal-Dual Interior-Point Methods

    National Research Council Canada - National Science Library

    Tapia, Richard A; Zhang, Yin; Ye, Yinyu

    1993-01-01

    Recently, numerous research efforts, most of them concerned with superlinear convergence of the duality gap sequence to zero in the Kojima-Mizuno-Yoshise primal-dual interior-point method for linear...

  18. Continuous registration of optical absorption spectra of periodically produced solvated electrons

    International Nuclear Information System (INIS)

    Krebs, P.

    1975-01-01

    Absorption spectra of unstable intermediates, such as solvated electrons, were usually taken point by point, recording the time-dependent light absorption after their production by a flash. The experimental arrangement for continuous recording of the spectra consists of a conventional one beam spectral photometer with a stabilized white light source, a monochromator, and a light detector. By periodic production of light absorbing intermediates such as solvated electrons, e.g., by ac uv light, a small ac signal is modulated on the light detector output which after amplification can be continuously recorded as a function of wavelength. This method allows the detection of absorption spectra when disturbances from the outside provide a signal-to-noise ratio smaller than 1

  19. A New Approximation Method for Solving Variational Inequalities and Fixed Points of Nonexpansive Mappings

    Directory of Open Access Journals (Sweden)

    Klin-eam Chakkrid

    2009-01-01

    Full Text Available Abstract A new approximation method for solving variational inequalities and fixed points of nonexpansive mappings is introduced and studied. We prove strong convergence theorem of the new iterative scheme to a common element of the set of fixed points of nonexpansive mapping and the set of solutions of the variational inequality for the inverse-strongly monotone mapping which solves some variational inequalities. Moreover, we apply our main result to obtain strong convergence to a common fixed point of nonexpansive mapping and strictly pseudocontractive mapping in a Hilbert space.

  20. An improved maximum power point tracking method for a photovoltaic system

    Science.gov (United States)

    Ouoba, David; Fakkar, Abderrahim; El Kouari, Youssef; Dkhichi, Fayrouz; Oukarfi, Benyounes

    2016-06-01

    In this paper, an improved auto-scaling variable step-size Maximum Power Point Tracking (MPPT) method for a photovoltaic (PV) system is proposed. To achieve a fast dynamic response and stable steady-state power simultaneously, a first improvement was made to the step-size scaling function of the duty cycle that controls the converter. A second algorithm was then proposed to address the wrong decisions that may be made at an abrupt change of irradiation. The proposed auto-scaling variable step-size approach was compared with several other approaches from the literature, such as the classical fixed step-size, variable step-size and a recent auto-scaling variable step-size maximum power point tracking approach. The simulation results obtained with MATLAB/SIMULINK are given and discussed for validation.
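
    A minimal sketch of a variable step-size perturb-and-observe loop is shown below, with the operating-voltage step scaled by the local |dP/dV| (a common auto-scaling heuristic); the paper's specific scaling function of the duty cycle, its irradiation-change correction and the converter model are not reproduced, and the PV curve is a toy.

```python
# Variable step-size perturb-and-observe MPPT on a toy PV curve. The operating
# voltage is perturbed directly and the step is scaled by |dP/dV|, a common
# auto-scaling heuristic; the paper's scaling function, duty-cycle control and
# irradiation-change correction are not reproduced.
import numpy as np

def pv_power(v, v_oc=40.0, i_sc=8.0):
    """Toy PV curve: current falls off exponentially near open-circuit voltage."""
    i = i_sc * (1.0 - np.exp((v - v_oc) / 3.0))
    return max(v * i, 0.0)

def mppt_po(v0=20.0, n_steps=200, base_step=0.5, k=0.05):
    v, p = v0, pv_power(v0)
    step = base_step
    for _ in range(n_steps):
        v_new = v + step
        p_new = pv_power(v_new)
        dp, dv = p_new - p, v_new - v
        # Auto-scaled step magnitude: large far from the MPP, small near it.
        step_mag = min(base_step, k * abs(dp / dv)) if dv != 0 else base_step
        # P&O sign rule: keep the direction if the power increased, else reverse.
        direction = np.sign(dp) * np.sign(dv)
        step = direction * step_mag if direction != 0 else step_mag
        v, p = v_new, p_new
    return v, p

v_mpp, p_mpp = mppt_po()
print(f"tracked operating point: V = {v_mpp:.2f} V, P = {p_mpp:.1f} W")
```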

  1. Continuation Newton methods

    Czech Academy of Sciences Publication Activity Database

    Axelsson, Owe; Sysala, Stanislav

    2015-01-01

    Roč. 70, č. 11 (2015), s. 2621-2637 ISSN 0898-1221 R&D Projects: GA ČR GA13-18652S Institutional support: RVO:68145535 Keywords : system of nonlinear equations * Newton method * load increment method * elastoplasticity Subject RIV: IN - Informatics, Computer Science Impact factor: 1.398, year: 2015 http://www.sciencedirect.com/science/article/pii/S0898122115003818

  2. Numerical continuation of families of heteroclinic connections between periodic orbits in a Hamiltonian system

    Science.gov (United States)

    Barrabés, E.; Mondelo, J. M.; Ollé, M.

    2013-10-01

    This paper is devoted to the numerical computation and continuation of families of heteroclinic connections between hyperbolic periodic orbits (POs) of a Hamiltonian system. We describe a method that requires the numerical continuation of a nonlinear system that involves the initial conditions of the two POs, the linear approximations of the corresponding manifolds and a point in a given Poincaré section where the unstable and stable manifolds match. The method is applied to compute families of heteroclinic orbits between planar Lyapunov POs around the collinear equilibrium points of the restricted three-body problem in different scenarios. In one of them, for the Sun-Jupiter mass parameter, we provide energy ranges for which the transition between different resonances is possible.

  3. Higher-order triangular spectral element method with optimized cubature points for seismic wavefield modeling

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Youshan, E-mail: ysliu@mail.iggcas.ac.cn [State Key Laboratory of Lithospheric Evolution, Institute of Geology and Geophysics, Chinese Academy of Sciences, Beijing, 100029 (China); Teng, Jiwen, E-mail: jwteng@mail.iggcas.ac.cn [State Key Laboratory of Lithospheric Evolution, Institute of Geology and Geophysics, Chinese Academy of Sciences, Beijing, 100029 (China); Xu, Tao, E-mail: xutao@mail.iggcas.ac.cn [State Key Laboratory of Lithospheric Evolution, Institute of Geology and Geophysics, Chinese Academy of Sciences, Beijing, 100029 (China); CAS Center for Excellence in Tibetan Plateau Earth Sciences, Beijing, 100101 (China); Badal, José, E-mail: badal@unizar.es [Physics of the Earth, Sciences B, University of Zaragoza, Pedro Cerbuna 12, 50009 Zaragoza (Spain)

    2017-05-01

    The mass-lumped method avoids the cost of inverting the mass matrix and simultaneously maintains spatial accuracy by adopting additional interior integration points, known as cubature points. To date, such points are only known analytically in tensor domains, such as quadrilateral or hexahedral elements. Thus, the diagonal-mass-matrix spectral element method (SEM) in non-tensor domains always relies on numerically computed interpolation points or quadrature points. However, only the cubature points for degrees 1 to 6 are known, which is the reason that we have developed a p-norm-based optimization algorithm to obtain higher-order cubature points. In this way, we obtain and tabulate new cubature points with all positive integration weights for degrees 7 to 9. The dispersion analysis illustrates that the dispersion relation determined from the new optimized cubature points is comparable to that of the mass and stiffness matrices obtained by exact integration. Simultaneously, the Lebesgue constant for the new optimized cubature points indicates its surprisingly good interpolation properties. As a result, such points provide both good interpolation properties and integration accuracy. The Courant–Friedrichs–Lewy (CFL) numbers are tabulated for the conventional Fekete-based triangular spectral element (TSEM), the TSEM with exact integration, and the optimized cubature-based TSEM (OTSEM). A complementary study demonstrates the spectral convergence of the OTSEM. A numerical example conducted on a half-space model demonstrates that the OTSEM improves the accuracy by approximately one order of magnitude compared to the conventional Fekete-based TSEM. In particular, the accuracy of the 7th-order OTSEM is even higher than that of the 14th-order Fekete-based TSEM. Furthermore, the OTSEM produces a result that can compete in accuracy with the quadrilateral SEM (QSEM). The high accuracy of the OTSEM is also tested with a non-flat topography model. In terms of computational

  4. Higher-order triangular spectral element method with optimized cubature points for seismic wavefield modeling

    International Nuclear Information System (INIS)

    Liu, Youshan; Teng, Jiwen; Xu, Tao; Badal, José

    2017-01-01

    The mass-lumped method avoids the cost of inverting the mass matrix and simultaneously maintains spatial accuracy by adopting additional interior integration points, known as cubature points. To date, such points are only known analytically in tensor domains, such as quadrilateral or hexahedral elements. Thus, the diagonal-mass-matrix spectral element method (SEM) in non-tensor domains always relies on numerically computed interpolation points or quadrature points. However, only the cubature points for degrees 1 to 6 are known, which is the reason that we have developed a p-norm-based optimization algorithm to obtain higher-order cubature points. In this way, we obtain and tabulate new cubature points with all positive integration weights for degrees 7 to 9. The dispersion analysis illustrates that the dispersion relation determined from the new optimized cubature points is comparable to that of the mass and stiffness matrices obtained by exact integration. Simultaneously, the Lebesgue constant for the new optimized cubature points indicates its surprisingly good interpolation properties. As a result, such points provide both good interpolation properties and integration accuracy. The Courant–Friedrichs–Lewy (CFL) numbers are tabulated for the conventional Fekete-based triangular spectral element (TSEM), the TSEM with exact integration, and the optimized cubature-based TSEM (OTSEM). A complementary study demonstrates the spectral convergence of the OTSEM. A numerical example conducted on a half-space model demonstrates that the OTSEM improves the accuracy by approximately one order of magnitude compared to the conventional Fekete-based TSEM. In particular, the accuracy of the 7th-order OTSEM is even higher than that of the 14th-order Fekete-based TSEM. Furthermore, the OTSEM produces a result that can compete in accuracy with the quadrilateral SEM (QSEM). The high accuracy of the OTSEM is also tested with a non-flat topography model. In terms of computational

  5. Analysis of tree stand horizontal structure using random point field methods

    Directory of Open Access Journals (Sweden)

    O. P. Sekretenko

    2015-06-01

    Full Text Available This paper uses the model approach to analyze the horizontal structure of forest stands. The main types of models of random point fields and statistical procedures that can be used to analyze spatial patterns of trees of uneven and even-aged stands are described. We show how modern methods of spatial statistics can be used to address one of the objectives of forestry – to clarify the laws of natural thinning of forest stand and the corresponding changes in its spatial structure over time. Studying natural forest thinning, we describe the consecutive stages of modeling: selection of the appropriate parametric model, parameter estimation and generation of point patterns in accordance with the selected model, the selection of statistical functions to describe the horizontal structure of forest stands and testing of statistical hypotheses. We show the possibilities of a specialized software package, spatstat, which is designed to meet the challenges of spatial statistics and provides software support for modern methods of analysis of spatial data. We show that a model of stand thinning that does not consider inter-tree interaction can project the size distribution of the trees properly, but the spatial pattern of the modeled stand is not quite consistent with observed data. Using data of three even-aged pine forest stands of 25, 55, and 90-years old, we demonstrate that the spatial point process models are useful for combining measurements in the forest stands of different ages to study the forest stand natural thinning.

  6. Reliability of an experimental method to analyse the impact point on a golf ball during putting.

    Science.gov (United States)

    Richardson, Ashley K; Mitchell, Andrew C S; Hughes, Gerwyn

    2015-06-01

    This study aimed to examine the reliability of an experimental method identifying the location of the impact point on a golf ball during putting. Forty trials were completed using a mechanical putting robot set to reproduce a putt of 3.2 m, with four different putter-ball combinations. After locating the centre of the dimple pattern (centroid) the following variables were tested; distance of the impact point from the centroid, angle of the impact point from the centroid and distance of the impact point from the centroid derived from the X, Y coordinates. Good to excellent reliability was demonstrated in all impact variables reflected in very strong relative (ICC = 0.98-1.00) and absolute reliability (SEM% = 0.9-4.3%). The highest SEM% observed was 7% for the angle of the impact point from the centroid. In conclusion, the experimental method was shown to be reliable at locating the centroid location of a golf ball, therefore allowing for the identification of the point of impact with the putter head and is suitable for use in subsequent studies.

  7. Fixed point theorems in locally convex spaces—the Schauder mapping method

    Directory of Open Access Journals (Sweden)

    S. Cobzaş

    2006-03-01

    Full Text Available In the appendix to the book by F. F. Bonsal, Lectures on Some Fixed Point Theorems of Functional Analysis (Tata Institute, Bombay, 1962), a proof by Singbal of the Schauder-Tychonoff fixed point theorem, based on a locally convex variant of the Schauder mapping method, is included. The aim of this note is to show that this method can be adapted to yield a proof of the Kakutani fixed point theorem in the locally convex case. For the sake of completeness we also include the proof of the Schauder-Tychonoff theorem based on this method. As applications, one proves a theorem of von Neumann and a minimax result in game theory.

  8. Basin boundaries and focal points in a map coming from Bairstow's method.

    Science.gov (United States)

    Gardini, Laura; Bischi, Gian-Italo; Fournier-Prunaret, Daniele

    1999-06-01

    This paper is devoted to the study of the global dynamical properties of a two-dimensional noninvertible map, with a denominator which can vanish, obtained by applying Bairstow's method to a cubic polynomial. It is shown that the complicated structure of the basins of attraction of the fixed points is due to the existence of singularities such as sets of nondefinition, focal points, and prefocal curves, which are specific to maps with a vanishing denominator, and have been recently introduced in the literature. Some global bifurcations that change the qualitative structure of the basin boundaries, are explained in terms of contacts among these singularities. The techniques used in this paper put in evidence some new dynamic behaviors and bifurcations, which are peculiar of maps with denominator; hence they can be applied to the analysis of other classes of maps coming from iterative algorithms (based on Newton's method, or others). (c) 1999 American Institute of Physics.
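
    For reference, the classical Bairstow iteration that generates the two-dimensional map studied here is sketched below for a cubic with roots 1, 2 and 3; the basin and bifurcation analysis of the paper is not reproduced, and the starting guess is chosen merely so that the example converges.

```python
# Classical Bairstow iteration for extracting a quadratic factor
# x**2 - r*x - s from a polynomial; iterating the (r, s) update is the
# two-dimensional map whose basins are analysed in the paper.
import numpy as np

def bairstow(a, r, s, tol=1e-12, max_iter=100):
    """a[i] multiplies x**i; returns (r, s) of an approximate quadratic factor."""
    n = len(a) - 1
    for _ in range(max_iter):
        b = np.zeros(n + 1)
        c = np.zeros(n + 1)
        b[n] = a[n]
        b[n - 1] = a[n - 1] + r * b[n]
        for i in range(n - 2, -1, -1):        # synthetic division by x^2 - r x - s
            b[i] = a[i] + r * b[i + 1] + s * b[i + 2]
        c[n] = b[n]
        c[n - 1] = b[n - 1] + r * c[n]
        for i in range(n - 2, 0, -1):         # second division gives the Jacobian
            c[i] = b[i] + r * c[i + 1] + s * c[i + 2]
        det = c[2] * c[2] - c[3] * c[1]       # the denominator that can vanish
        if abs(det) < 1e-300:
            break
        dr = (-b[1] * c[2] + b[0] * c[3]) / det
        ds = (-b[0] * c[2] + b[1] * c[1]) / det
        r, s = r + dr, s + ds
        if abs(dr) < tol and abs(ds) < tol:
            break
    return r, s

# p(x) = x^3 - 6x^2 + 11x - 6 = (x - 1)(x - 2)(x - 3)
r, s = bairstow([-6.0, 11.0, -6.0, 1.0], r=2.5, s=-1.5)
disc = np.sqrt(complex(r * r + 4.0 * s))
print("roots of the quadratic factor:", (r + disc) / 2, (r - disc) / 2)
```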

  9. A MODELING METHOD OF FLUTTERING LEAVES BASED ON POINT CLOUD

    OpenAIRE

    J. Tang; Y. Wang; Y. Zhao; Y. Zhao; W. Hao; X. Ning; K. Lv; Z. Shi; M. Zhao

    2017-01-01

    Leaves falling gently or fluttering are common phenomenon in nature scenes. The authenticity of leaves falling plays an important part in the dynamic modeling of natural scenes. The leaves falling model has a widely applications in the field of animation and virtual reality. We propose a novel modeling method of fluttering leaves based on point cloud in this paper. According to the shape, the weight of leaves and the wind speed, three basic trajectories of leaves falling are defined, which ar...

  10. Exact extraction method for road rutting laser lines

    Science.gov (United States)

    Hong, Zhiming

    2018-02-01

    This paper analyzes the importance of asphalt pavement rutting detection for pavement maintenance and administration, presents the shortcomings of existing rutting detection methods, and proposes a new rutting line-laser extraction method based on the peak intensity characteristic and peak continuity. The peak intensity characteristic is enhanced by a designed transverse mean filter, and an intensity map of the peak characteristic, computed over the whole road image, is obtained to determine the seed point of the rutting laser line. Taking the seed point as the starting point, the light points of the rutting line-laser are extracted based on peak continuity, providing exact basic data for the subsequent calculation of pavement rutting depths.

  11. A comparison of methods to adjust for continuous covariates in the analysis of randomised trials

    Directory of Open Access Journals (Sweden)

    Brennan C. Kahan

    2016-04-01

    Full Text Available Abstract Background Although covariate adjustment in the analysis of randomised trials can be beneficial, adjustment for continuous covariates is complicated by the fact that the association between covariate and outcome must be specified. Misspecification of this association can lead to reduced power, and potentially incorrect conclusions regarding treatment efficacy. Methods We compared several methods of adjustment to determine which is best when the association between covariate and outcome is unknown. We assessed (a) dichotomisation or categorisation; (b) assuming a linear association with outcome; (c) using fractional polynomials with one (FP1) or two (FP2) polynomial terms; and (d) using restricted cubic splines with 3 or 5 knots. We evaluated each method using simulation and through a re-analysis of trial datasets. Results Methods which kept covariates as continuous typically had higher power than methods which used categorisation. Dichotomisation, categorisation, and assuming a linear association all led to large reductions in power when the true association was non-linear. FP2 models and restricted cubic splines with 3 or 5 knots performed best overall. Conclusions For the analysis of randomised trials we recommend (1) adjusting for continuous covariates even if their association with outcome is unknown; (2) keeping covariates as continuous; and (3) using fractional polynomials with two polynomial terms or restricted cubic splines with 3 to 5 knots when a linear association is in doubt.
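
    A small sketch of the recommended spline adjustment is given below: a two-arm trial with a continuous covariate is analysed by OLS with a 3-knot restricted cubic spline, using Harrell's truncated-power parameterization of the basis. The simulated data-generating model and all variable names are illustrative, and this is not the authors' simulation code.

```python
# OLS adjustment for a continuous covariate with a 3-knot restricted cubic
# spline, using Harrell's truncated-power parameterization of the basis.
# The simulated trial and all names are illustrative.
import numpy as np

def rcs_basis(x, knots):
    """Restricted cubic spline basis: linear term plus K-2 nonlinear terms."""
    k = np.asarray(knots, dtype=float)

    def cube(u):
        return np.clip(u, 0.0, None) ** 3

    denom = k[-1] - k[-2]
    cols = [x]
    for j in range(len(k) - 2):
        cols.append(cube(x - k[j])
                    - cube(x - k[-2]) * (k[-1] - k[j]) / denom
                    + cube(x - k[-1]) * (k[-2] - k[j]) / denom)
    return np.column_stack(cols)

rng = np.random.default_rng(11)
n = 400
treat = rng.integers(0, 2, size=n)                   # randomised arm indicator
age = rng.uniform(30.0, 80.0, size=n)                # continuous covariate
y = 2.0 * treat + 0.002 * (age - 55.0) ** 2 + rng.normal(size=n)  # non-linear truth

knots = np.quantile(age, [0.1, 0.5, 0.9])
X = np.column_stack([np.ones(n), treat, rcs_basis(age, knots)])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print("adjusted treatment effect estimate:", round(beta[1], 3))   # ~2.0
```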

  12. Validation of non-rigid point-set registration methods using a porcine bladder pelvic phantom

    Science.gov (United States)

    Zakariaee, Roja; Hamarneh, Ghassan; Brown, Colin J.; Spadinger, Ingrid

    2016-01-01

    The problem of accurate dose accumulation in fractionated radiotherapy treatment for highly deformable organs, such as bladder, has garnered increasing interest over the past few years. However, more research is required in order to find a robust and efficient solution and to increase the accuracy over the current methods. The purpose of this study was to evaluate the feasibility and accuracy of utilizing non-rigid (affine or deformable) point-set registration in accumulating dose in bladders of different sizes and shapes. A pelvic phantom was built to house an ex vivo porcine bladder with fiducial landmarks adhered onto its surface. Four different volume fillings of the bladder were used (90, 180, 360 and 480 cc). The performance of MATLAB implementations of five different methods was compared in aligning the bladder contour point-sets. The approaches evaluated were coherent point drift (CPD), Gaussian mixture model, shape context, thin-plate spline robust point matching (TPS-RPM) and finite iterative closest point (ICP-finite). The evaluation metrics included registration runtime, target registration error (TRE), root-mean-square error (RMS) and Hausdorff distance (HD). The reference (source) dataset was alternated through all four point-sets in order to study the effect of reference volume on the registration outcomes. While all deformable algorithms provided reasonable registration results, CPD provided the best TRE values (6.4 mm), and TPS-RPM yielded the best mean RMS and HD values (1.4 and 6.8 mm, respectively). ICP-finite was the fastest technique and TPS-RPM the slowest.

  13. Validation of non-rigid point-set registration methods using a porcine bladder pelvic phantom

    International Nuclear Information System (INIS)

    Zakariaee, Roja; Hamarneh, Ghassan; Brown, Colin J; Spadinger, Ingrid

    2016-01-01

    The problem of accurate dose accumulation in fractionated radiotherapy treatment for highly deformable organs, such as bladder, has garnered increasing interest over the past few years. However, more research is required in order to find a robust and efficient solution and to increase the accuracy over the current methods. The purpose of this study was to evaluate the feasibility and accuracy of utilizing non-rigid (affine or deformable) point-set registration in accumulating dose in bladders of different sizes and shapes. A pelvic phantom was built to house an ex vivo porcine bladder with fiducial landmarks adhered onto its surface. Four different volume fillings of the bladder were used (90, 180, 360 and 480 cc). The performance of MATLAB implementations of five different methods was compared in aligning the bladder contour point-sets. The approaches evaluated were coherent point drift (CPD), Gaussian mixture model, shape context, thin-plate spline robust point matching (TPS-RPM) and finite iterative closest point (ICP-finite). The evaluation metrics included registration runtime, target registration error (TRE), root-mean-square error (RMS) and Hausdorff distance (HD). The reference (source) dataset was alternated through all four point-sets in order to study the effect of reference volume on the registration outcomes. While all deformable algorithms provided reasonable registration results, CPD provided the best TRE values (6.4 mm), and TPS-RPM yielded the best mean RMS and HD values (1.4 and 6.8 mm, respectively). ICP-finite was the fastest technique and TPS-RPM the slowest. (paper)

  14. Three points of view in transport theory

    Energy Technology Data Exchange (ETDEWEB)

    Ruben, Panta Pazos [Faculdade de Matematica, PUCRS, Porto Alegre, RS (Brazil); Tilio de Vilhena, M. [Instituto de Matematica, UFRGS, Porto Alegre, RS (Brazil)

    2001-07-01

    A lot of effort in transport theory goes into developing numerical methods or hybrid numerical-analytical techniques. In this work we present three points of view on transport problems. First, the C0-semigroup approach, in which the free transport operator ψ → μ·∇ψ generates a strongly continuous semigroup. The operators ψ → σt ψ and ψ → ∫ k(x,μ,μ′) ψ(x,μ′) dμ′ are bounded, so by perturbation the transport operator ψ → μ·∇ψ + σt ψ − Kψ also generates a strongly continuous semigroup. To prove the convergence of the approximations produced by a numerical method to the exact solution, we use the approximation theorem for C0 semigroups in canonical form. Secondly, the theory of discrete schemes is employed to establish the rate of convergence of numerical techniques in transport theory. For the 1D time-dependent transport problem and the two-dimensional steady-state problem we summarize some estimates, incorporating different boundary conditions. Finally, we give a survey of the dynamical behavior of the SN approximations. In order to give a unified approach, some results illustrate the equivalence of the three points of view for the steady-state transport problem in slab geometry. (author)

  15. Generic primal-dual interior point methods based on a new kernel function

    NARCIS (Netherlands)

    EL Ghami, M.; Roos, C.

    2008-01-01

    In this paper we present a generic primal-dual interior point methods (IPMs) for linear optimization in which the search direction depends on a univariate kernel function which is also used as proximity measure in the analysis of the algorithm. The proposed kernel function does not satisfy all the

  16. Methods for solving the stochastic point reactor kinetic equations

    International Nuclear Information System (INIS)

    Quabili, E.R.; Karasulu, M.

    1979-01-01

    Two new methods are presented for analysis of the statistical properties of nonlinear outputs of a point reactor to stochastic non-white reactivity inputs. They are Bourret's approximation and logarithmic linearization. The results have been compared with the exact results, previously obtained in the case of Gaussian white reactivity input. It was found that when the reactivity noise has short correlation time, Bourret's approximation should be recommended because it yields results superior to those yielded by logarithmic linearization. When the correlation time is long, Bourret's approximation is not valid, but in that case, if one can assume the reactivity noise to be Gaussian, one may use the logarithmic linearization. (author)

  17. Modular correction method of bending elastic modulus based on sliding behavior of contact point

    International Nuclear Information System (INIS)

    Ma, Zhichao; Zhao, Hongwei; Zhang, Qixun; Liu, Changyi

    2015-01-01

    During the three-point bending test, the sliding behavior of the contact point between the specimen and supports was observed, the sliding behavior was verified to affect the measurements of both deflection and span length, which directly affect the calculation of the bending elastic modulus. Based on the Hertz formula to calculate the elastic contact deformation and the theoretical calculation of the sliding behavior of the contact point, a theoretical model to precisely describe the deflection and span length as a function of bending load was established. Moreover, a modular correction method of bending elastic modulus was proposed, via the comparison between the corrected elastic modulus of three materials (H63 copper–zinc alloy, AZ31B magnesium alloy and 2026 aluminum alloy) and the standard modulus obtained from standard uniaxial tensile tests, the universal feasibility of the proposed correction method was verified. Also, the ratio of corrected to raw elastic modulus presented a monotonically decreasing tendency as the raw elastic modulus of materials increased. (technical note)

  18. New sampling method in continuous energy Monte Carlo calculation for pebble bed reactors

    International Nuclear Information System (INIS)

    Murata, Isao; Takahashi, Akito; Mori, Takamasa; Nakagawa, Masayuki.

    1997-01-01

    A pebble bed reactor generally has double heterogeneity consisting of two kinds of spherical fuel element. In the core, there exist many fuel balls piled up randomly in a high packing fraction. And each fuel ball contains a lot of small fuel particles which are also distributed randomly. In this study, to realize precise neutron transport calculation of such reactors with the continuous energy Monte Carlo method, a new sampling method has been developed. The new method has been implemented in the general purpose Monte Carlo code MCNP to develop a modified version MCNP-BALL. This method was validated by calculating inventory of spherical fuel elements arranged successively by sampling during transport calculation and also by performing criticality calculations in ordered packing models. From the results, it was confirmed that the inventory of spherical fuel elements could be reproduced using MCNP-BALL within a sufficient accuracy of 0.2%. And the comparison of criticality calculations in ordered packing models between MCNP-BALL and the reference method shows excellent agreement in neutron spectrum as well as multiplication factor. MCNP-BALL enables us to analyze pebble bed type cores such as PROTEUS precisely with the continuous energy Monte Carlo method. (author)

  19. A simple method for determining the critical point of the soil water retention curve

    DEFF Research Database (Denmark)

    Chen, Chong; Hu, Kelin; Ren, Tusheng

    2017-01-01

    The transition point between capillary water and adsorbed water, which is the critical point Pc [defined by the critical matric potential (ψc) and the critical water content (θc)] of the soil water retention curve (SWRC), demarcates the energy and water content region where flow is dominated ... a fixed tangent line method was developed to estimate Pc as an alternative to the commonly used flexible tangent line method. The relationships between Pc and particle-size distribution and specific surface area (SSA) were analyzed. For 27 soils with various textures, the mean RMSE of water content from the fixed tangent line method was 0.007 g g–1, which was slightly better than that of the flexible tangent line method. With increasing clay content or SSA, ψc was more negative initially but became less negative at clay contents above ∼30%. Increasing the silt contents resulted in more negative ψc values...

  20. A continuous exchange factor method for radiative exchange in enclosures with participating media

    International Nuclear Information System (INIS)

    Naraghi, M.H.N.; Chung, B.T.F.; Litkouhi, B.

    1987-01-01

    A continuous exchange factor method for analysis of radiative exchange in enclosures is developed. In this method two types of exchange functions are defined, direct exchange function and total exchange function. Certain integral equations relating total exchange functions to direct exchange functions are developed. These integral equations are solved using Gaussian quadrature integration method. The results obtained based on the present approach are found to be more accurate than those of the zonal method
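
    As an illustration of the quadrature step, the sketch below applies the Nystrom idea (Gauss-Legendre nodes and weights) to a generic Fredholm integral equation of the second kind; the kernel and source term are made up and are not the exchange-function kernels of the paper.

```python
# Nystrom solution of a generic Fredholm integral equation of the second kind,
#   f(x) = g(x) + int_0^1 K(x, y) f(y) dy,
# using Gauss-Legendre quadrature. The kernel and source term are made up and
# are not the exchange-function kernels of the paper.
import numpy as np

n = 20
nodes, weights = np.polynomial.legendre.leggauss(n)
x = 0.5 * (nodes + 1.0)        # map nodes from [-1, 1] to [0, 1]
w = 0.5 * weights

K = np.exp(-np.abs(x[:, None] - x[None, :]))   # toy kernel K(x, y)
g = np.ones(n)                                 # toy source term g(x)

# Collocation at the quadrature nodes: (I - K diag(w)) f = g.
f = np.linalg.solve(np.eye(n) - K * w[None, :], g)
print("f at the first three nodes:", f[:3])
```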

  1. Distance-based microfluidic quantitative detection methods for point-of-care testing.

    Science.gov (United States)

    Tian, Tian; Li, Jiuxing; Song, Yanling; Zhou, Leiji; Zhu, Zhi; Yang, Chaoyong James

    2016-04-07

    Equipment-free devices with quantitative readout are of great significance to point-of-care testing (POCT), which provides real-time readout to users and is especially important in low-resource settings. Among various equipment-free approaches, distance-based visual quantitative detection methods rely on reading the visual signal length for corresponding target concentrations, thus eliminating the need for sophisticated instruments. The distance-based methods are low-cost, user-friendly and can be integrated into portable analytical devices. Moreover, such methods enable quantitative detection of various targets by the naked eye. In this review, we first introduce the concept and history of distance-based visual quantitative detection methods. Then, we summarize the main methods for translation of molecular signals to distance-based readout and discuss different microfluidic platforms (glass, PDMS, paper and thread) in terms of applications in biomedical diagnostics, food safety monitoring, and environmental analysis. Finally, the potential and future perspectives are discussed.

  2. Stability Analysis of Continuous-Time and Discrete-Time Quaternion-Valued Neural Networks With Linear Threshold Neurons.

    Science.gov (United States)

    Chen, Xiaofeng; Song, Qiankun; Li, Zhongshan; Zhao, Zhenjiang; Liu, Yurong

    2018-07-01

    This paper addresses the problem of stability for continuous-time and discrete-time quaternion-valued neural networks (QVNNs) with linear threshold neurons. Applying the semidiscretization technique to the continuous-time QVNNs, the discrete-time analogs are obtained, which preserve the dynamical characteristics of their continuous-time counterparts. Via the plural decomposition method of quaternion, homeomorphic mapping theorem, as well as Lyapunov theorem, some sufficient conditions on the existence, uniqueness, and global asymptotical stability of the equilibrium point are derived for the continuous-time QVNNs and their discrete-time analogs, respectively. Furthermore, a uniform sufficient condition on the existence, uniqueness, and global asymptotical stability of the equilibrium point is obtained for both continuous-time QVNNs and their discrete-time version. Finally, two numerical examples are provided to substantiate the effectiveness of the proposed results.

  3. Implementation of the probability table method in a continuous-energy Monte Carlo code system

    International Nuclear Information System (INIS)

    Sutton, T.M.; Brown, F.B.

    1998-10-01

    RACER is a particle-transport Monte Carlo code that utilizes a continuous-energy treatment for neutrons and neutron cross section data. Until recently, neutron cross sections in the unresolved resonance range (URR) have been treated in RACER using smooth, dilute-average representations. This paper describes how RACER has been modified to use probability tables to treat cross sections in the URR, and the computer codes that have been developed to compute the tables from the unresolved resonance parameters contained in ENDF/B data files. A companion paper presents results of Monte Carlo calculations that demonstrate the effect of the use of probability tables versus the use of dilute-average cross sections for the URR. The next section provides a brief review of the probability table method as implemented in the RACER system. The production of the probability tables for use by RACER takes place in two steps. The first step is the generation of probability tables from the nuclear parameters contained in the ENDF/B data files. This step, and the code written to perform it, are described in Section 3. The tables produced are at energy points determined by the ENDF/B parameters and/or accuracy considerations. The tables actually used in the RACER calculations are obtained in the second step from those produced in the first. These tables are generated at energy points specific to the RACER calculation. Section 4 describes this step and the code written to implement it, as well as modifications made to RACER to enable it to use the tables. Finally, some results and conclusions are presented in Section 5
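
    The sampling step itself is simple once a table exists: pick the cross-section band whose cumulative probability brackets a random number. The sketch below illustrates this with made-up band probabilities and cross sections; it does not reproduce RACER, the table-generation codes, or any ENDF/B data.

```python
# Sampling a cross section from a probability table at one energy point:
# choose the band whose cumulative probability brackets a random number.
# The band probabilities and cross sections below are illustrative only.
import numpy as np

band_prob = np.array([0.10, 0.25, 0.30, 0.25, 0.10])   # band probabilities
band_xs = np.array([2.0, 5.0, 10.0, 20.0, 40.0])       # band cross sections (b)
cdf = np.cumsum(band_prob)

def sample_xs(rng):
    """Return one cross section sampled from the probability table."""
    return band_xs[np.searchsorted(cdf, rng.random())]

rng = np.random.default_rng(42)
samples = np.array([sample_xs(rng) for _ in range(100_000)])
print("mean sampled cross section :", samples.mean())
print("dilute-average cross section:", float(band_prob @ band_xs))
```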

  4. Coupling multipoint flux mixed finite element methods with continuous Galerkin methods for poroelasticity

    KAUST Repository

    Wheeler, Mary

    2013-11-16

    We study the numerical approximation on irregular domains with general grids of the system of poroelasticity, which describes fluid flow in deformable porous media. The flow equation is discretized by a multipoint flux mixed finite element method and the displacements are approximated by a continuous Galerkin finite element method. First-order convergence in space and time is established in appropriate norms for the pressure, velocity, and displacement. Numerical results are presented that illustrate the behavior of the method. © Springer Science+Business Media Dordrecht 2013.

  5. Measurement of regional cerebral blood flow using one-point venous blood sampling and causality model. Evaluation by comparing with conventional continuous arterial blood sampling method

    International Nuclear Information System (INIS)

    Mimura, Hiroaki; Sone, Teruki; Takahashi, Yoshitake

    2008-01-01

    Optimal setting of the input function is essential for the measurement of regional cerebral blood flow (rCBF) based on the microsphere model using N-isopropyl-4-[123I]iodoamphetamine (123I-IMP), and usually the arterial 123I-IMP concentration (integral value) in the initial 5 min is used for this purpose. We have developed a new convenient method in which the 123I-IMP concentration in an arterial blood sample is estimated from that in a venous blood sample. Brain perfusion single photon emission computed tomography (SPECT) with 123I-IMP was performed in 110 cases of central nervous system disorders. The causality was analyzed between various SPECT data parameters and the ratio of the octanol-extracted arterial radioactivity concentration during the first 5 min (Caoct) to the octanol-extracted venous radioactivity concentration at 27 min after intravenous injection of 123I-IMP (Cvoct). A high correlation was observed between the measured and estimated values of Caoct/Cvoct (r=0.856) when the following five parameters were included in the regression formula: the radioactivity concentration in venous blood sampled at 27 min (Cv), Cvoct, Cvoct/Cv, and the total brain radioactivity counts measured by a four-head gamma camera 5 min and 28 min after 123I-IMP injection. Furthermore, the rCBF values obtained using the input parameters estimated by this method were also highly correlated with the rCBF values measured using the continuous arterial blood sampling method (r=0.912). These results suggest that this method could serve as a new, convenient and less invasive method of rCBF measurement in the clinical setting. (author)

  6. PLOTTAB, Curve and Point Plotting with Error Bars

    International Nuclear Information System (INIS)

    1999-01-01

    1 - Description of program or function: PLOTTAB is designed to plot any combination of continuous curves and/or discrete points (with associated error bars) using user-supplied titles and X and Y axis labels and units. If curves are plotted, the first curve may be used as a standard; the data and the ratio of the data to the standard will be plotted. 2 - Method of solution: PLOTTAB: The program has no idea of what data is being plotted, and yet by supplying titles and X and Y axis labels and units the user can produce any number of plots, with each plot containing almost any combination of curves and points and each plot properly identified. In order to define a continuous curve between tabulated points, the program must know how to interpolate between points. By input the user may specify either the default option of linear x versus linear y interpolation or, alternatively, log x and/or log y interpolation. In all cases, regardless of the interpolation specified, the program will always interpolate the data to the plane of the plot (linear or log x and y plane) in order to present the true variation of the data between tabulated points, based on the user-specified interpolation law. Tabulated points should be tabulated at a sufficient number of x values to ensure that the difference between the specified interpolation and the 'true' variation of a curve between tabulated values is relatively small. 3 - Restrictions on the complexity of the problem: A combination of up to 30 curves and sets of discrete points may appear on each plot. If the user wishes to use this program to compare different sets of data, all of the data must be in the same units.
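
    The interpolation-law idea is easy to restate in code: interpolate in the user-selected (linear or log) plane and map back. The sketch below is a generic reimplementation, not PLOTTAB's own routine.

```python
# Interpolate tabulated points in a user-selected plane (linear or log in each
# axis) and map back; a generic reimplementation of the interpolation-law idea,
# not PLOTTAB's own routine.
import numpy as np

def interp_lawful(x, xp, yp, log_x=False, log_y=False):
    """Interpolate y(x) assuming linearity in the chosen (lin/log) plane."""
    xt, xpt = (np.log(x), np.log(xp)) if log_x else (x, xp)
    ypt = np.log(yp) if log_y else yp
    yt = np.interp(xt, xpt, ypt)
    return np.exp(yt) if log_y else yt

# A power law y = x**2 tabulated coarsely is reproduced exactly by log-log
# interpolation, while lin-lin interpolation deviates between tabulated points.
xp = np.array([1.0, 10.0, 100.0])
yp = xp ** 2
print("lin-lin at x=3 :", interp_lawful(3.0, xp, yp))                           # 23.0
print("log-log at x=3 :", interp_lawful(3.0, xp, yp, log_x=True, log_y=True))   # 9.0
```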

  7. a Modeling Method of Fluttering Leaves Based on Point Cloud

    Science.gov (United States)

    Tang, J.; Wang, Y.; Zhao, Y.; Hao, W.; Ning, X.; Lv, K.; Shi, Z.; Zhao, M.

    2017-09-01

    Leaves falling gently or fluttering are a common phenomenon in nature scenes. The authenticity of falling leaves plays an important part in the dynamic modeling of natural scenes. The falling-leaves model has wide applications in the fields of animation and virtual reality. We propose a novel modeling method of fluttering leaves based on point clouds in this paper. According to the shape, the weight of the leaves and the wind speed, three basic trajectories of falling leaves are defined: the rotation falling, the roll falling and the screw roll falling. At the same time, a parallel algorithm based on OpenMP is implemented to satisfy real-time requirements in practical applications. Experimental results demonstrate that the proposed method is amenable to the incorporation of a variety of desirable effects.

  8. Inversion of Gravity Anomalies Using Primal-Dual Interior Point Methods

    Directory of Open Access Journals (Sweden)

    Aaron A. Velasco

    2016-06-01

    Full Text Available Structural inversion of gravity datasets based on the use of density anomalies to derive robust images of the subsurface (delineating lithologies and their boundaries) constitutes a fundamental non-invasive tool for geological exploration. The use of experimental techniques in geophysics to estimate and interpret differences in the substructure based on its density properties has proven efficient; however, the inherent non-uniqueness associated with most geophysical datasets makes this the ideal scenario for the use of recently developed robust constrained optimization techniques. We present a constrained optimization approach for a least squares inversion problem aimed at characterizing 2-dimensional Earth density structure models based on Bouguer gravity anomalies. The proposed formulation is solved with a Primal-Dual Interior-Point method including equality and inequality physical and structural constraints. We validate our results using synthetic density crustal structure models with varying complexity and illustrate the behavior of the algorithm using different initial density structure models and increasing noise levels in the observations. Based on these implementations, we conclude that the algorithm using Primal-Dual Interior-Point methods is robust, and its results always honor the geophysical constraints. Some of the advantages of using this approach for structural inversion of gravity data are the incorporation of a priori information related to the model parameters (coming from actual physical properties of the subsurface) and the reduction of the solution space contingent on these boundary conditions.

  9. Improved Full-Newton Step O(nL) Infeasible Interior-Point Method for Linear Optimization

    NARCIS (Netherlands)

    Gu, G.; Mansouri, H.; Zangiabadi, M.; Bai, Y.Q.; Roos, C.

    2009-01-01

    We present several improvements of the full-Newton step infeasible interior-point method for linear optimization introduced by Roos (SIAM J. Optim. 16(4):1110–1136, 2006). Each main step of the method consists of a feasibility step and several centering steps. We use a more natural feasibility step,

  10. Numerical methods for polyline-to-point-cloud registration with applications to patient-specific stent reconstruction.

    Science.gov (United States)

    Lin, Claire Yilin; Veneziani, Alessandro; Ruthotto, Lars

    2018-03-01

    We present novel numerical methods for polyline-to-point-cloud registration and their application to patient-specific modeling of deployed coronary artery stents from image data. Patient-specific coronary stent reconstruction is an important challenge in computational hemodynamics and relevant to the design and improvement of the prostheses. It is an invaluable tool in large-scale clinical trials that computationally investigate the effect of new generations of stents on hemodynamics and eventually tissue remodeling. Given a point cloud of strut positions, which can be extracted from images, our stent reconstruction method aims at finding a geometrical transformation that aligns a model of the undeployed stent to the point cloud. Mathematically, we describe the undeployed stent as a polyline, which is a piecewise linear object defined by its vertices and edges. We formulate the nonlinear registration as an optimization problem whose objective function consists of a similarity measure, quantifying the distance between the polyline and the point cloud, and a regularization functional, penalizing undesired transformations. Using projections of points onto the polyline structure, we derive novel distance measures. Our formulation supports most commonly used transformation models including very flexible nonlinear deformations. We also propose two regularization approaches ensuring the smoothness of the estimated nonlinear transformation. We demonstrate the potential of our methods using an academic 2D example and a real-life 3D bioabsorbable stent reconstruction problem. Our results show that the registration problem can be solved to sufficient accuracy within seconds using only a small number of Gauss-Newton iterations. Copyright © 2017 John Wiley & Sons, Ltd.
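
    The core of the similarity measure described above is the projection of cloud points onto the polyline's segments. The following sketch computes point-to-polyline distances in that spirit; the function name and structure are illustrative assumptions, not the authors' code.

```python
import numpy as np

def point_to_polyline_distances(points, vertices):
    """For each point, distance to its closest segment of a polyline
    defined by consecutive vertices (a building block of a
    polyline-to-point-cloud similarity measure)."""
    points = np.asarray(points, dtype=float)            # (n, d)
    V = np.asarray(vertices, dtype=float)               # (m, d)
    A, B = V[:-1], V[1:]                                 # segment endpoints, (m-1, d)
    AB = B - A
    denom = np.einsum('kd,kd->k', AB, AB)                # squared segment lengths

    # Parameter of the orthogonal projection of every point on every segment, clipped to [0, 1]
    AP = points[:, None, :] - A[None, :, :]              # (n, m-1, d)
    t = np.clip(np.einsum('nkd,kd->nk', AP, AB) / denom, 0.0, 1.0)
    proj = A[None, :, :] + t[..., None] * AB[None, :, :]
    dists = np.linalg.norm(points[:, None, :] - proj, axis=-1)
    return dists.min(axis=1)                             # closest segment per point

# Toy example: points near an L-shaped polyline
poly = [[0, 0], [1, 0], [1, 1]]
pts = [[0.5, 0.2], [1.2, 0.5]]
print(point_to_polyline_distances(pts, poly))            # ~[0.2, 0.2]
```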

  11. Nonparametric Change Point Diagnosis Method of Concrete Dam Crack Behavior Abnormality

    OpenAIRE

    Li, Zhanchao; Gu, Chongshi; Wu, Zhongru

    2013-01-01

    The study of diagnosis methods for concrete dam crack behavior abnormality has long been a focus, and a difficulty, in the safety monitoring of hydraulic structures. Based on the performance of concrete dam crack behavior abnormality in parametric and nonparametric statistical models, the internal relation between concrete dam crack behavior abnormality and statistical change point theory is analyzed in depth from the model structure instability of the parametric statistical model ...

  12. Measurement of gas adsorption with Jäntti's method using continuously increasing pressure

    NARCIS (Netherlands)

    Poulis, J.A.; Massen, C.H.; Robens, E.

    2002-01-01

    Jäntti et al. published a method to reduce the time necessary for adsorption measurements. They proposed to extrapolate the equilibrium in the stepwise isobaric measurement of adsorption isotherms by measuring at each step three points of the kinetic curve. For that purpose they approximated the

  13. Curvature computation in volume-of-fluid method based on point-cloud sampling

    Science.gov (United States)

    Kassar, Bruno B. M.; Carneiro, João N. E.; Nieckele, Angela O.

    2018-01-01

    This work proposes a novel approach to compute interface curvature in multiphase flow simulation based on the Volume of Fluid (VOF) method. It is well documented in the literature that curvature and normal vector computation in VOF may lack accuracy, mainly due to abrupt changes in the volume fraction field across the interfaces. This may degrade the interfacial tension force estimates, often resulting in inaccurate results for interface-tension-dominated flows. Many techniques have been presented over recent years in order to enhance the accuracy of normal vector and curvature estimates, including height functions, parabolic fitting of the volume fraction, reconstructed distance functions, coupling the Level Set method with VOF, and convolving the volume fraction field with smoothing kernels, among others. We propose a novel technique based on a representation of the interface by a cloud of points. The curvatures and the interface normal vectors are computed geometrically at each point of the cloud and projected onto the Eulerian grid in a Front-Tracking manner. Results are compared to benchmark data, and a significant reduction in spurious currents as well as an improvement in the pressure jump are observed. The method was developed in the open source suite OpenFOAM® extending its standard VOF implementation, the interFoam solver.
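
    As a loose, hedged illustration of geometric curvature estimation from a point cloud (not the authors' scheme, which works on the interface cloud of a VOF simulation and projects back to the Eulerian grid), a 2D least-squares circle fit through a point's neighbours can be used; the function name and parameters are assumptions.

```python
import numpy as np

def curvature_from_neighbors(points, idx, k=8):
    """Estimate the curvature at one point of a 2D point cloud by an
    algebraic least-squares circle fit (Kasa fit) through its k nearest
    neighbours; curvature = 1/R of the fitted circle."""
    P = np.asarray(points, dtype=float)
    d = np.linalg.norm(P - P[idx], axis=1)
    nbrs = P[np.argsort(d)[:k + 1]]                       # the point itself plus k neighbours
    x, y = nbrs[:, 0], nbrs[:, 1]
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])  # x^2 + y^2 = 2*cx*x + 2*cy*y + c
    b = x**2 + y**2
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    R = np.sqrt(c + cx**2 + cy**2)
    return 1.0 / R

# Points sampled on a circle of radius 2 -> curvature ~0.5
theta = np.linspace(0, 2 * np.pi, 60, endpoint=False)
circle = np.column_stack([2 * np.cos(theta), 2 * np.sin(theta)])
print(curvature_from_neighbors(circle, idx=0))
```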

  14. A Hybrid Maximum Power Point Search Method Using Temperature Measurements in Partial Shading Conditions

    Directory of Open Access Journals (Sweden)

    Mroczka Janusz

    2014-12-01

    Full Text Available Photovoltaic panels have non-linear current-voltage characteristics and produce maximum power at only one point, called the maximum power point. In the case of uniform illumination, a single solar panel shows only one power maximum, which is also the global maximum power point. In the case of an irregularly illuminated photovoltaic panel, many local maxima can be observed on the power-voltage curve, and only one of them is the global maximum. The proposed algorithm detects whether a solar panel is in uniform insolation conditions. An appropriate strategy for tracking the maximum power point is then selected using a decision algorithm. The proposed method is simulated in an environment created by the authors, which allows photovoltaic panels to be simulated under real conditions of lighting, temperature and shading.
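
    A minimal sketch of the decision idea (uniform illumination versus partial shading) is given below; the scan-based detection and the function name are illustrative assumptions rather than the authors' algorithm, which additionally uses temperature measurements.

```python
import numpy as np

def find_mpp(voltages, powers):
    """Decision sketch: count interior local maxima of a coarse P-V scan.
    A single maximum is treated as uniform insolation (local tracking is
    enough); several maxima indicate partial shading, and the global
    maximum of the scan is taken as the operating point."""
    p = np.asarray(powers, dtype=float)
    local_max = [i for i in range(1, len(p) - 1) if p[i] >= p[i - 1] and p[i] >= p[i + 1]]
    uniform = len(local_max) <= 1
    i_best = int(np.argmax(p))
    return voltages[i_best], p[i_best], uniform

# Example: a partially shaded P-V curve with two humps
v = np.linspace(0, 40, 81)
p = np.maximum(0, 60 * np.sin(np.pi * v / 24)) + np.maximum(0, 90 * np.sin(np.pi * (v - 16) / 30))
print(find_mpp(v, p))   # global maximum lies on the second hump, uniform == False
```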

  15. Change-Point Detection Method for Clinical Decision Support System Rule Monitoring.

    Science.gov (United States)

    Liu, Siqi; Wright, Adam; Hauskrecht, Milos

    2017-06-01

    A clinical decision support system (CDSS) and its components can malfunction for various reasons. Monitoring the system and detecting its malfunctions can help one to avoid potential mistakes and their associated costs. In this paper, we investigate the problem of detecting changes in CDSS operation, in particular in its monitoring and alerting subsystem, by monitoring its rule firing counts. The detection should be performed online; that is, whenever a new datum arrives, we want a score indicating how likely it is that there has been a change in the system. We develop a new method based on Seasonal-Trend decomposition and likelihood ratio statistics to detect the changes. Experiments on real and simulated data show that our method has a lower detection delay than existing change-point detection methods.
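
    A minimal sketch of the general idea, not the authors' implementation: decompose the rule-firing counts with a Seasonal-Trend (STL) decomposition and score a recent window of residuals with a Gaussian log-likelihood ratio. The function name, window length and Gaussian assumption are all illustrative choices; statsmodels is assumed as a dependency.

```python
import numpy as np
from statsmodels.tsa.seasonal import STL

def change_score(counts, period=7, window=14):
    """Illustrative online-style change score: remove seasonality/trend
    with STL, then compare the mean of the last `window` residuals
    against the remaining residuals with a Gaussian log-likelihood ratio."""
    counts = np.asarray(counts, dtype=float)
    resid = STL(counts, period=period, robust=True).fit().resid

    hist, recent = resid[:-window], resid[-window:]
    sigma = hist.std(ddof=1) + 1e-9
    # Log-likelihood ratio for "recent mean shifted" vs "no shift"
    return window * (recent.mean() - hist.mean()) ** 2 / (2.0 * sigma ** 2)

# Example: weekly-seasonal rule-firing counts with a drop in the last two weeks
rng = np.random.default_rng(0)
series = 100 + 10 * np.sin(2 * np.pi * np.arange(120) / 7) + rng.normal(0, 3, 120)
series[-14:] -= 25
print(change_score(series))   # a large score flags a likely change
```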

  16. A Lightweight Surface Reconstruction Method for Online 3D Scanning Point Cloud Data Oriented toward 3D Printing

    Directory of Open Access Journals (Sweden)

    Buyun Sheng

    2018-01-01

    Full Text Available The existing surface reconstruction algorithms currently reconstruct large amounts of mesh data. Consequently, many of these algorithms cannot meet the efficiency requirements of real-time data transmission in a web environment. This paper proposes a lightweight surface reconstruction method for online 3D scanned point cloud data oriented toward 3D printing. The proposed online lightweight surface reconstruction algorithm is composed of a point cloud update algorithm (PCU, a rapid iterative closest point algorithm (RICP, and an improved Poisson surface reconstruction algorithm (IPSR. The generated lightweight point cloud data are pretreated using an updating and rapid registration method. The Poisson surface reconstruction is also accomplished by a pretreatment to recompute the point cloud normal vectors; this approach is based on a least squares method, and the postprocessing of the PDE patch generation was based on biharmonic-like fourth-order PDEs, which effectively reduces the amount of reconstructed mesh data and improves the efficiency of the algorithm. This method was verified using an online personalized customization system that was developed with WebGL and oriented toward 3D printing. The experimental results indicate that this method can generate a lightweight 3D scanning mesh rapidly and efficiently in a web environment.

  17. Continuation Methods and Non-Linear/Non-Gaussian Estimation for Flight Dynamics, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — We propose herein to augment current NASA spaceflight dynamics programs with algorithms and software from three domains. First, we use parameter continuation methods...

  18. Recommender engine for continuous-time quantum Monte Carlo methods

    Science.gov (United States)

    Huang, Li; Yang, Yi-feng; Wang, Lei

    2017-03-01

    Recommender systems play an essential role in the modern business world. They recommend favorable items such as books, movies, and search queries to users based on their past preferences. Applying similar ideas and techniques to Monte Carlo simulations of physical systems boosts their efficiency without sacrificing accuracy. Exploiting the quantum to classical mapping inherent in the continuous-time quantum Monte Carlo methods, we construct a classical molecular gas model to reproduce the quantum distributions. We then utilize powerful molecular simulation techniques to propose efficient quantum Monte Carlo updates. The recommender engine approach provides a general way to speed up the quantum impurity solvers.

  19. An Efficient Method to Create Digital Terrain Models from Point Clouds Collected by Mobile LiDAR Systems

    Science.gov (United States)

    Gézero, L.; Antunes, C.

    2017-05-01

    Digital terrain models (DTM) play an essential role in all types of road maintenance, water supply and sanitation projects. The demand for such information is greater in developing countries, where the lack of infrastructure is more pronounced. In recent years, the use of Mobile LiDAR Systems (MLS) has proved to be a very efficient technique for the acquisition of precise and dense point clouds. These point clouds can be a solution for obtaining the data for the production of DTM in remote areas, due mainly to the safety, precision, speed of acquisition and the detail of the information gathered. However, point cloud filtering and algorithms to separate "terrain points" from "non-terrain points", quickly and consistently, remain a challenge that has caught the interest of researchers. This work presents a method to create the DTM from point clouds collected by MLS. The method is based on two interactive steps. The first step of the process reduces the point cloud to a set of points that represent the terrain's shape, with the distance between points inversely proportional to the terrain variation. The second step is based on the Delaunay triangulation of the points resulting from the first step. The achieved results encourage a wider use of this technology as a solution for large-scale DTM production in remote areas.
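
    A very rough sketch of the two-step idea, under assumptions of my own: a lowest-point-per-cell reduction stands in for the paper's terrain-shape reduction step, and SciPy's Delaunay triangulation provides the second step.

```python
import numpy as np
from scipy.spatial import Delaunay

def simple_dtm(points, cell=1.0):
    """Keep the lowest point per planimetric grid cell as a crude
    terrain-shape reduction (step 1), then build a Delaunay
    triangulation of the retained points (step 2)."""
    pts = np.asarray(points, dtype=float)               # columns: x, y, z
    keys = np.floor(pts[:, :2] / cell).astype(int)
    lowest = {}
    for key, p in zip(map(tuple, keys), pts):
        if key not in lowest or p[2] < lowest[key][2]:
            lowest[key] = p
    ground = np.array(list(lowest.values()))
    tri = Delaunay(ground[:, :2])                       # triangulated terrain surface
    return ground, tri

# Example: a synthetic tilted plane plus some "non-terrain" outliers above it
rng = np.random.default_rng(1)
xy = rng.uniform(0, 10, (500, 2))
z = 0.1 * xy[:, 0] + rng.normal(0, 0.02, 500)
z[::50] += 2.0                                          # objects above ground
ground, tri = simple_dtm(np.column_stack([xy, z]), cell=2.0)
print(ground.shape, tri.simplices.shape)
```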

  20. Function parametrization by using 4-point transforms

    International Nuclear Information System (INIS)

    Dikusar, N.D.

    1996-01-01

    A continuous parametrization of the smooth curve f(x)=f(x;R) is suggested on the basis of four-point transformations. The coordinates of three reference points of the curve are chosen as the parameters R. This approach provides a number of advantages in function approximation and in the fitting of empirical data. The transformations make it possible to derive a new class of polynomials (monosplines) with better approximation quality than the monomials {x^n}. The approximation error behaves in a uniform manner. A three-point model of the cubic spline (TPS) is proposed. The model halves the number of unknown parameters and offers computational advantages. The new approach to function approximation and fitting is illustrated with a number of examples. The proposed approach provides a new mathematical tool and new possibilities for both practical applications and theoretical research in numerical and computational methods. 13 refs., 13 figs., 2 tabs

  1. Choosing the Optimal Number of B-spline Control Points (Part 1: Methodology and Approximation of Curves)

    Science.gov (United States)

    Harmening, Corinna; Neuner, Hans

    2016-09-01

    Due to the establishment of terrestrial laser scanners, the analysis strategies in engineering geodesy have changed from pointwise approaches to areal ones. These areal analysis strategies are commonly built on the modelling of the acquired point clouds. Freeform curves and surfaces like B-spline curves/surfaces are one possible approach to obtain space-continuous information. A variety of parameters determines the B-spline's appearance; the B-spline's complexity is mostly determined by the number of control points. Usually, this number of control points is chosen quite arbitrarily by intuitive trial-and-error procedures. In this paper, the Akaike Information Criterion and the Bayesian Information Criterion are investigated with regard to a justified and reproducible choice of the optimal number of control points of B-spline curves. Additionally, we develop a method based on the structural risk minimization of statistical learning theory. Unlike the Akaike and Bayesian Information Criteria, this method does not use the number of parameters as the complexity measure of the approximating functions but rather their Vapnik-Chervonenkis dimension. Furthermore, it is also valid for non-linear models. Thus, the three methods differ in the target function to be minimized and consequently in their definition of optimality. The present paper will be continued by a second paper dealing with the choice of the optimal number of control points of B-spline surfaces.
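
    As an illustration of the information-criterion part of the comparison (not the structural-risk-minimization method developed in the paper), the sketch below scores least-squares B-spline fits with different numbers of interior knots by AIC and BIC under a Gaussian error model; the function name, knot placement and data are assumptions, with SciPy used for the spline fits.

```python
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

def aic_bic_for_spline(x, y, n_interior_knots, degree=3):
    """Fit a least-squares B-spline with a given number of interior knots
    and return (AIC, BIC) under a Gaussian error model, so that models of
    different complexity (numbers of control points) can be compared."""
    t = np.linspace(x[0], x[-1], n_interior_knots + 2)[1:-1]   # interior knots only
    spline = LSQUnivariateSpline(x, y, t, k=degree)
    rss = float(np.sum((spline(x) - y) ** 2))
    n = len(x)
    k = len(spline.get_coeffs())                                # number of control points
    aic = n * np.log(rss / n) + 2 * k
    bic = n * np.log(rss / n) + k * np.log(n)
    return aic, bic

# Compare candidate complexities on noisy data and pick the minimum-criterion model
rng = np.random.default_rng(2)
x = np.linspace(0, 1, 200)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.05, x.size)
for m in (1, 3, 6, 12, 24):
    print(m, aic_bic_for_spline(x, y, m))
```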

  2. Method and apparatus for improved melt flow during continuous strip casting

    Science.gov (United States)

    Follstaedt, Donald W.; King, Edward L.; Schneider, Ken C.

    1991-11-12

    The continuous casting of metal strip using the melt overflow process is improved by controlling the weir conditions in the nozzle to provide a more uniform flow of molten metal across the width of the nozzle and reducing the tendency for freezing of metal along the interface with refractory surfaces. A weir design having a sloped rear wall and tapered sidewalls and critical gap controls beneath the weir has resulted in the drastic reduction in edge tearing and a significant improvement in strip uniformity. The floor of the container vessel is preferably sloped and the gap between the nozzle and the rotating substrate is critically controlled. The resulting flow patterns observed with the improved casting process have reduced thermal gradients in the bath, contained surface slag and eliminated undesirable solidification near the discharge area by increasing the flow rates at those points.

  3. Shape resonances of Be- and Mg- investigated with the method of analytic continuation

    Science.gov (United States)

    Čurík, Roman; Paidarová, I.; Horáček, J.

    2018-05-01

    The regularized method of analytic continuation is used to study the low-energy negative-ion states of the beryllium (configuration 2s²εp ²P) and magnesium (configuration 3s²εp ²P) atoms. The method applies an additional perturbation potential and requires only routine bound-state multi-electron quantum calculations. Such computations are accessible with most of the free or commercial quantum chemistry software available for atoms and molecules. The perturbation potential is implemented as a spherical Gaussian function with a fixed width. The stability of the analytic continuation technique with respect to the width and with respect to the input range of electron affinities is studied in detail. The computed resonance parameters Er = 0.282 eV, Γ = 0.316 eV for the 2p state of Be- and Er = 0.188 eV, Γ = 0.167 eV for the 3p state of Mg- agree well with the best results obtained by much more elaborate and computationally demanding present-day methods.

  4. A MODELING METHOD OF FLUTTERING LEAVES BASED ON POINT CLOUD

    Directory of Open Access Journals (Sweden)

    J. Tang

    2017-09-01

    Full Text Available Leaves falling gently or fluttering are a common phenomenon in nature scenes. The authenticity of falling leaves plays an important part in the dynamic modeling of natural scenes. The falling-leaves model has wide applications in the fields of animation and virtual reality. We propose a novel modeling method of fluttering leaves based on point clouds in this paper. According to the shape, the weight of the leaves and the wind speed, three basic trajectories of falling leaves are defined: the rotation falling, the roll falling and the screw roll falling. At the same time, a parallel algorithm based on OpenMP is implemented to satisfy real-time requirements in practical applications. Experimental results demonstrate that the proposed method is amenable to the incorporation of a variety of desirable effects.

  5. Sediment acoustic index method for computing continuous suspended-sediment concentrations

    Science.gov (United States)

    Landers, Mark N.; Straub, Timothy D.; Wood, Molly S.; Domanski, Marian M.

    2016-07-11

    Suspended-sediment characteristics can be computed using acoustic indices derived from acoustic Doppler velocity meter (ADVM) backscatter data. The sediment acoustic index method applied in these types of studies can be used to more accurately and cost-effectively provide time-series estimates of suspended-sediment concentration and load, which is essential for informed solutions to many sediment-related environmental, engineering, and agricultural concerns. Advantages of this approach over other sediment surrogate methods include: (1) better representation of cross-sectional conditions from large measurement volumes, compared to other surrogate instruments that measure data at a single point; (2) high temporal resolution of collected data; (3) data integrity when biofouling is present; and (4) less rating curve hysteresis compared to streamflow as a surrogate. An additional advantage of this technique is the potential expansion of monitoring suspended-sediment concentrations at sites with existing ADVMs used in streamflow velocity monitoring. This report provides much-needed standard techniques for sediment acoustic index methods to help ensure accurate and comparable documented results.

  6. Continuous method for refining sodium. [for use in LMFBR type reactors

    Energy Technology Data Exchange (ETDEWEB)

    Batoux, B; Laurent-Atthalin, A; Salmon, M

    1973-11-16

    The invention relates to a refining method according to which commercial sodium provides a high purity sodium with, in particular, a very small calcium content. The method consists in continuously feeding a predetermined amount of sodium peroxide into a sodium stream, mixing and causing said sodium peroxide to react with sodium at an appropriate temperature, and, finally, separating the reaction products from sodium by decanting and filtering same. The thus obtained high purity sodium meets the requirements of atomic industries in particular, in view of its possible use as coolant in nuclear reactors of the "breeder" type.

  7. Characterising fifteen years of continuous atmospheric radon activity observations at Cape Point (South Africa)

    Science.gov (United States)

    Botha, R.; Labuschagne, C.; Williams, A. G.; Bosman, G.; Brunke, E.-G.; Rossouw, A.; Lindsay, R.

    2018-03-01

    This paper describes and discusses fifteen years (1999-2013) of continuous hourly atmospheric radon (222Rn) monitoring at the coastal low-altitude Southern Hemisphere Cape Point Station in South Africa. A strong seasonal cycle is evident in the observed radon concentrations, with maxima during the winter months, when air masses arriving at the Cape Point station from over the African continental surface are more frequently observed, and minima during the summer months, when an oceanic fetch is predominant. An atmospheric mean radon activity concentration of 676 ± 2 mBq/m3 is found over the 15-year record, having a strongly skewed distribution that exhibits a large number of events falling into a compact range of low values (corresponding to oceanic air masses), and a smaller number of events with high radon values spread over a wide range (corresponding to continental air masses). The mean radon concentration from continental air masses (1 004 ± 6 mBq/m3) is about two times higher compared to oceanic air masses (479 ± 3 mBq/m3). The number of atmospheric radon events observed is strongly dependent on the wind direction. A power spectral Fast Fourier Transform analysis of the 15-year radon time series reveals prominent peaks at semi-diurnal, diurnal and annual timescales. Two inter-annual radon periodicities have been established, the diurnal 0.98 ± 0.04 day-1 and half-diurnal 2.07 ± 0.15 day-1. The annual peak reflects major seasonal changes in the patterns of offshore versus onshore flow associated with regional/hemispheric circulation patterns, whereas the diurnal and semi-diurnal peaks together reflect the influence of local nocturnal radon build-up over land, and the interplay between mesoscale sea/land breezes. The winter-time diurnal radon concentration had a significant decrease of about 200 mBq/m3 (17%) while the summer-time diurnal radon concentration revealed nearly no changes. A slow decline in the higher radon percentiles (75th and 95th) for the
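
    A hedged sketch of the kind of spectral analysis mentioned above: an FFT power spectrum of an hourly series, reported in cycles per day so that diurnal and semi-diurnal peaks appear near 1 and 2. The synthetic data and function name are illustrative only, not the Cape Point record.

```python
import numpy as np

def dominant_frequencies(hourly_series, top=3):
    """Power spectrum of an hourly time series via FFT; returns the
    frequencies (in cycles per day) of the strongest periodic components,
    e.g. diurnal (~1/day) and semi-diurnal (~2/day) peaks."""
    x = np.asarray(hourly_series, dtype=float)
    x = x - x.mean()                                    # remove the DC component
    power = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(x.size, d=1.0 / 24.0)       # sample spacing of 1 h = 1/24 day
    idx = np.argsort(power[1:])[::-1][:top] + 1         # skip the zero-frequency bin
    return freqs[idx]

# Synthetic example: diurnal + semi-diurnal cycles over 3 years of hourly data
t = np.arange(3 * 365 * 24) / 24.0                      # time in days
radon = 650 + 120 * np.sin(2 * np.pi * t) + 40 * np.sin(4 * np.pi * t)
print(dominant_frequencies(radon))                      # ~[1.0, 2.0, ...] cycles per day
```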

  8. Numerical analysis for multi-group neutron-diffusion equation using Radial Point Interpolation Method (RPIM)

    International Nuclear Information System (INIS)

    Kim, Kyung-O; Jeong, Hae Sun; Jo, Daeseong

    2017-01-01

    Highlights: • Employing the Radial Point Interpolation Method (RPIM) in the numerical analysis of the multi-group neutron-diffusion equation. • Establishing the mathematical formulation of the modified multi-group neutron-diffusion equation by RPIM. • Performing the numerical analysis for a 2D critical problem. - Abstract: A mesh-free method is introduced to overcome the drawbacks (e.g., mesh generation and connectivity definition between the meshes) of mesh-based (nodal) methods such as the finite-element method and finite-difference method. In particular, the Point Interpolation Method (PIM) using a radial basis function is employed in the numerical analysis for the multi-group neutron-diffusion equation. The benchmark calculations are performed for the 2D homogeneous and heterogeneous problems, and the Multiquadrics (MQ) and Gaussian (EXP) functions are employed to analyze the effect of the radial basis function on the numerical solution. Additionally, the effect of the dimensionless shape parameter in those functions on the calculation accuracy is evaluated. According to the results, the radial PIM (RPIM) can provide a highly accurate solution for the multiplication eigenvalue and the neutron flux distribution, and the numerical solution with the MQ radial basis function exhibits stable accuracy with respect to the reference solutions compared with the other solution. The dimensionless shape parameter directly affects the calculation accuracy and computing time. Values between 1.87 and 3.0 for the benchmark problems considered in this study lead to the most accurate solution. The difference between the analytical and numerical results for the neutron flux is significantly increased at the edge of the problem geometry, even though the maximum difference is lower than 4%. This phenomenon seems to arise from the derivative boundary condition at (x,0) and (0,y) positions, and it may be necessary to introduce an additional strategy (e.g., the method using fictitious points and
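
    A minimal sketch of radial-basis-function point interpolation with the multiquadrics (MQ) and Gaussian (EXP) bases and an explicit shape parameter, to illustrate the role that parameter plays; this is generic RBF interpolation with assumed names and values, not the RPIM formulation applied to the diffusion equation in the paper.

```python
import numpy as np

def rbf_interpolate(nodes, values, query, c=0.5, kind="MQ"):
    """Point interpolation with radial basis functions: multiquadrics
    (MQ) or Gaussian (EXP). The shape parameter c affects conditioning
    and accuracy, as discussed in the abstract."""
    nodes = np.asarray(nodes, dtype=float)
    query = np.asarray(query, dtype=float)

    def phi(r):
        return np.sqrt(r**2 + c**2) if kind == "MQ" else np.exp(-(r / c) ** 2)

    r_nn = np.linalg.norm(nodes[:, None, :] - nodes[None, :, :], axis=-1)
    coeffs = np.linalg.solve(phi(r_nn), np.asarray(values, dtype=float))
    r_qn = np.linalg.norm(query[:, None, :] - nodes[None, :, :], axis=-1)
    return phi(r_qn) @ coeffs

# 2D example: interpolate f(x, y) = x + y from scattered nodes
rng = np.random.default_rng(3)
nodes = rng.uniform(0, 1, (30, 2))
print(rbf_interpolate(nodes, nodes.sum(axis=1), [[0.25, 0.75]]))  # close to 1.0
```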

  9. Measurement of regional cerebral blood flow using one-point arterial blood sampling and microsphere model with 123I-IMP. Correction of one-point arterial sampling count by whole brain count ratio

    International Nuclear Information System (INIS)

    Makino, Kenichi; Masuda, Yasuhiko; Gotoh, Satoshi

    1998-01-01

    The experimental subjects were 189 patients with cerebrovascular disorders. 123I-IMP, 222 MBq, was administered by intravenous infusion. Continuous arterial blood sampling was carried out for 5 minutes, and arterial blood was also sampled once at 5 minutes after 123I-IMP administration. Then the whole blood count of the one-point arterial sampling was compared with the octanol-extracted count of the continuous arterial sampling. A positive correlation was found between the two values. The ratio of the continuous sampling octanol-extracted count (OC) to the one-point sampling whole blood count (TC5) was compared with the whole brain count ratio (5:29 ratio, Cn) using 1-minute planar SPECT images, centering on 5 and 29 minutes after 123I-IMP administration. Correlation was found between the two values. The following relationship was shown from the correlation equation. OC/TC5 = 0.390969 × Cn − 0.08924. Based on this correlation equation, we calculated the theoretical continuous arterial sampling octanol-extracted count (COC). COC = TC5 × (0.390969 × Cn − 0.08924). There was good correlation between the value calculated with this equation and the actually measured value. The coefficient improved to r=0.94 from the r=0.87 obtained before using the 5:29 ratio for correction. For 23 of these 189 cases, another one-point arterial sampling was carried out at 6, 7, 8, 9 and 10 minutes after the administration of 123I-IMP. The correlation coefficient was also improved for these other point samplings when this correction method using the 5:29 ratio was applied. It was concluded that it is possible to obtain highly accurate input functions, i.e., calculated continuous arterial sampling octanol-extracted counts, using one-point arterial sampling whole blood counts by performing correction using the 5:29 ratio. (K.H.)
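
    The reported regression can be applied directly; a trivial sketch (variable names are mine) is:

```python
def estimated_octanol_count(tc5, cn):
    """Estimated continuous-sampling octanol-extracted count from the
    one-point whole-blood count (TC5) and the 5:29 whole-brain count
    ratio (Cn), using the regression reported in the abstract."""
    return tc5 * (0.390969 * cn - 0.08924)

# Example with hypothetical counts
print(estimated_octanol_count(tc5=1500.0, cn=2.0))
```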

  10. H-Point Standard Addition Method for Simultaneous Determination of Eosin and Erytrosine

    Directory of Open Access Journals (Sweden)

    Amandeep Kaur

    2011-01-01

    Full Text Available A new, simple, sensitive and selective H-point standard addition method (HPSAM) has been developed for resolving a binary mixture of the food colorants eosin and erythrosine, which show overlapping spectra. The method is based on the complexation of the food dyes eosin and erythrosine with an Fe(III) complexing reagent at pH 5.5 and solubilization of the complexes in Triton X-100 micellar media. Absorbances at two pairs of wavelengths, 540 and 550 nm (when eosin acts as the analyte) or 518 and 542 nm (when erythrosine acts as the analyte), were monitored. This method has been satisfactorily applied to the determination of the eosin and erythrosine dyes in synthetic mixtures and commercial products.

  11. The "Lung": a software-controlled air accumulator for quasi-continuous multi-point measurement of agricultural greenhouse gases

    Directory of Open Access Journals (Sweden)

    R. J. Martin

    2011-10-01

    Full Text Available We describe the design and testing of a flexible bag ("Lung") accumulator attached to a gas chromatographic (GC) analyzer capable of measuring surface-atmosphere greenhouse gas exchange fluxes in a wide range of environmental/agricultural settings. In the design presented here, the Lung can collect up to three gas samples concurrently, each accumulated into a Tedlar bag over a period of 20 min or longer. Toggling collection between 2 sets of 3 bags enables quasi-continuous collection with sequential analysis and discarding of sample residues. The Lung thus provides a flexible "front end" collection system for interfacing to a GC or alternative analyzer and has been used in 2 main types of application. Firstly, it has been applied to micrometeorological assessment of paddock-scale N2O fluxes, discussed here. Secondly, it has been used for the automation of concurrent emission assessment from three sheep housed in metabolic crates with gas tracer addition and sampling multiplexed to a single GC.

    The Lung allows the same GC equipment used in laboratory discrete sample analysis to be deployed for continuous field measurement. Continuity of measurement enables spatially-averaged N2O fluxes in particular to be determined with greater accuracy, given the highly heterogeneous and episodic nature of N2O emissions. We present a detailed evaluation of the micrometeorological flux estimation alongside an independent tuneable diode laser system, reporting excellent agreement between flux estimates based on downwind vertical concentration differences. Whilst the current design is based around triplet bag sets, the basic design could be scaled up to a larger number of inlets or bags and less frequent analysis (longer accumulation times) where a greater number of sampling points are required.

  12. Fixed point theorems in CAT(0) spaces and R-trees

    Directory of Open Access Journals (Sweden)

    Kirk WA

    2004-01-01

    Full Text Available We show that if is a bounded open set in a complete space , and if is nonexpansive, then always has a fixed point if there exists such that for all . It is also shown that if is a geodesically bounded closed convex subset of a complete -tree with , and if is a continuous mapping for which for some and all , then has a fixed point. It is also noted that a geodesically bounded complete -tree has the fixed point property for continuous mappings. These latter results are used to obtain variants of the classical fixed edge theorem in graph theory.

  13. Point-point and point-line moving-window correlation spectroscopy and its applications

    Science.gov (United States)

    Zhou, Qun; Sun, Suqin; Zhan, Daqi; Yu, Zhiwu

    2008-07-01

    In this paper, we present a new extension of generalized two-dimensional (2D) correlation spectroscopy. Two new algorithms, namely point-point (P-P) correlation and point-line (P-L) correlation, have been introduced to perform the moving-window 2D correlation (MW2D) analysis. The new method has been applied to a spectral model consisting of two different processes. The results indicate that P-P correlation spectroscopy can unveil the details and reconstitute the entire process, whilst the P-L correlation provides the general features of the processes concerned. The phase transition behavior of dimyristoylphosphatidylethanolamine (DMPE) has been studied using MW2D correlation spectroscopy. The newly proposed method verifies that the phase transition temperature is 56 °C, the same as the result obtained from a differential scanning calorimeter. To illustrate the new method further, a lysine and lactose mixture has been studied under thermal perturbation. Using the P-P MW2D, the Maillard reaction of the mixture was clearly monitored, which has been very difficult using the conventional display of FTIR spectra.
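
    A hedged reading of the point-point moving-window idea, with all names and the window length chosen for illustration rather than taken from the paper: within each window of perturbation steps, correlate the mean-centred intensities at two chosen spectral variables.

```python
import numpy as np

def moving_window_sync_correlation(spectra, i, j, window=5):
    """For each window of consecutive perturbation steps, compute the
    synchronous correlation between the intensities at two chosen
    spectral variables i and j (a point-point style readout)."""
    A = np.asarray(spectra, dtype=float)                # rows: perturbation steps, cols: spectral variables
    scores = []
    for start in range(A.shape[0] - window + 1):
        block = A[start:start + window]
        dyn = block - block.mean(axis=0)                # dynamic spectra within the window
        scores.append(dyn[:, i] @ dyn[:, j] / (window - 1))
    return np.array(scores)

# Example: two bands that change together only in the second half of a process
steps = np.arange(40)
band1 = np.where(steps < 20, 1.0, 1.0 + 0.05 * (steps - 20))
band2 = np.where(steps < 20, 2.0, 2.0 + 0.10 * (steps - 20))
spectra = np.column_stack([band1, band2]) + np.random.default_rng(4).normal(0, 0.001, (40, 2))
scores = moving_window_sync_correlation(spectra, 0, 1)
print(scores[:3].round(4), scores[-3:].round(4))        # near zero early, clearly positive late
```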

  14. Model reduction method using variable-separation for stochastic saddle point problems

    Science.gov (United States)

    Jiang, Lijian; Li, Qiuqi

    2018-02-01

    In this paper, we consider a variable-separation (VS) method to solve the stochastic saddle point (SSP) problems. The VS method is applied to obtain the solution in tensor product structure for stochastic partial differential equations (SPDEs) in a mixed formulation. The aim of such a technique is to construct a reduced basis approximation of the solution of the SSP problems. The VS method attempts to get a low rank separated representation of the solution for SSP in a systematic enrichment manner. No iteration is performed at each enrichment step. In order to satisfy the inf-sup condition in the mixed formulation, we enrich the separated terms for the primal system variable at each enrichment step. For the SSP problems by regularization or penalty, we propose a more efficient variable-separation (VS) method, i.e., the variable-separation by penalty method. This can avoid further enrichment of the separated terms in the original mixed formulation. The computation of the variable-separation method decomposes into offline phase and online phase. Sparse low rank tensor approximation method is used to significantly improve the online computation efficiency when the number of separated terms is large. For the applications of SSP problems, we present three numerical examples to illustrate the performance of the proposed methods.

  15. A comparison of numerical methods for the solution of continuous-time DSGE models

    DEFF Research Database (Denmark)

    Parra-Alvarez, Juan Carlos

    This paper evaluates the accuracy of a set of techniques that approximate the solution of continuous-time DSGE models. Using the neoclassical growth model I compare linear-quadratic, perturbation and projection methods. All techniques are applied to the HJB equation and the optimality conditions...... parameters of the model and suggest the use of projection methods when a high degree of accuracy is required....

  16. Robust EM Continual Reassessment Method in Oncology Dose Finding

    Science.gov (United States)

    Yuan, Ying; Yin, Guosheng

    2012-01-01

    The continual reassessment method (CRM) is a commonly used dose-finding design for phase I clinical trials. Practical applications of this method have been restricted by two limitations: (1) the requirement that the toxicity outcome needs to be observed shortly after the initiation of the treatment; and (2) the potential sensitivity to the prespecified toxicity probability at each dose. To overcome these limitations, we naturally treat the unobserved toxicity outcomes as missing data, and use the expectation-maximization (EM) algorithm to estimate the dose toxicity probabilities based on the incomplete data to direct dose assignment. To enhance the robustness of the design, we propose prespecifying multiple sets of toxicity probabilities, each set corresponding to an individual CRM model. We carry out these multiple CRMs in parallel, across which model selection and model averaging procedures are used to make more robust inference. We evaluate the operating characteristics of the proposed robust EM-CRM designs through simulation studies and show that the proposed methods satisfactorily resolve both limitations of the CRM. Besides improving the MTD selection percentage, the new designs dramatically shorten the duration of the trial, and are robust to the prespecification of the toxicity probabilities. PMID:22375092

  17. A postprocessing method based on chirp Z transform for FDTD calculation of point defect states in two-dimensional phononic crystals

    International Nuclear Information System (INIS)

    Su Xiaoxing; Wang Yuesheng

    2010-01-01

    In this paper, a new postprocessing method for the finite difference time domain (FDTD) calculation of the point defect states in two-dimensional (2D) phononic crystals (PNCs) is developed based on the chirp Z transform (CZT), one of the frequency zooming techniques. The numerical results for the defect states in 2D solid/liquid PNCs with single or double point defects show that compared with the fast Fourier transform (FFT)-based postprocessing method, the method can improve the estimation accuracy of the eigenfrequencies of the point defect states significantly when the FDTD calculation is run with relatively few iterations; and furthermore it can yield the point defect bands without calculating all eigenfrequencies outside the band gaps. The efficiency and accuracy of the FDTD method can be improved significantly with this new postprocessing method.

  18. A postprocessing method based on chirp Z transform for FDTD calculation of point defect states in two-dimensional phononic crystals

    Energy Technology Data Exchange (ETDEWEB)

    Su Xiaoxing, E-mail: xxsu@bjtu.edu.c [School of Electronic and Information Engineering, Beijing Jiaotong University, Beijing 100044 (China); Wang Yuesheng [Institute of Engineering Mechanics, Beijing Jiaotong University, Beijing 100044 (China)

    2010-09-01

    In this paper, a new postprocessing method for the finite difference time domain (FDTD) calculation of the point defect states in two-dimensional (2D) phononic crystals (PNCs) is developed based on the chirp Z transform (CZT), one of the frequency zooming techniques. The numerical results for the defect states in 2D solid/liquid PNCs with single or double point defects show that compared with the fast Fourier transform (FFT)-based postprocessing method, the method can improve the estimation accuracy of the eigenfrequencies of the point defect states significantly when the FDTD calculation is run with relatively few iterations; and furthermore it can yield the point defect bands without calculating all eigenfrequencies outside the band gaps. The efficiency and accuracy of the FDTD method can be improved significantly with this new postprocessing method.

  19. Iterative method to compute the Fermat points and Fermat distances of multiquarks

    International Nuclear Information System (INIS)

    Bicudo, P.; Cardoso, M.

    2009-01-01

    The multiquark confining potential is proportional to the total distance of the fundamental strings linking the quarks and antiquarks. We address the computation of the total string distance and of the Fermat points where the different strings meet. For a meson the distance is trivially the quark-antiquark distance. For a baryon the problem was solved geometrically from the onset by Fermat and by Torricelli, it can be determined just with a rule and a compass, and we briefly review it. However we also show that for tetraquarks, pentaquarks, hexaquarks, etc., the geometrical solution is much more complicated. Here we provide an iterative method, converging fast to the correct Fermat points and the total distances, relevant for the multiquark potentials.
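
    For reference, a standard Weiszfeld-type fixed-point iteration converges to the Fermat point of a set of points; it is offered here only as an illustration of the iterative idea, not as the authors' scheme for multiquark string configurations (which involve several junctions).

```python
import numpy as np

def fermat_point(points, iters=200, eps=1e-12):
    """Weiszfeld iteration for the Fermat point: the location minimizing
    the total distance to a set of points (e.g. quark positions in a
    baryon). Returns the point and the total string length."""
    P = np.asarray(points, dtype=float)
    x = P.mean(axis=0)                                  # start at the centroid
    for _ in range(iters):
        d = np.linalg.norm(P - x, axis=1)
        if np.any(d < eps):                             # landed on a vertex: stop
            break
        w = 1.0 / d
        x_new = (w[:, None] * P).sum(axis=0) / w.sum()
        if np.linalg.norm(x_new - x) < eps:
            x = x_new
            break
        x = x_new
    return x, np.linalg.norm(P - x, axis=1).sum()

# Equilateral triangle: the Fermat point coincides with the centroid
tri = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, np.sqrt(3) / 2]])
print(fermat_point(tri))
```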

  20. Iterative method to compute the Fermat points and Fermat distances of multiquarks

    Energy Technology Data Exchange (ETDEWEB)

    Bicudo, P. [CFTP, Departamento de Fisica, Instituto Superior Tecnico, Av. Rovisco Pais, 1049-001 Lisboa (Portugal)], E-mail: bicudo@ist.utl.pt; Cardoso, M. [CFTP, Departamento de Fisica, Instituto Superior Tecnico, Av. Rovisco Pais, 1049-001 Lisboa (Portugal)

    2009-04-13

    The multiquark confining potential is proportional to the total distance of the fundamental strings linking the quarks and antiquarks. We address the computation of the total string distance and of the Fermat points where the different strings meet. For a meson the distance is trivially the quark-antiquark distance. For a baryon the problem was solved geometrically from the onset by Fermat and by Torricelli, it can be determined just with a rule and a compass, and we briefly review it. However we also show that for tetraquarks, pentaquarks, hexaquarks, etc., the geometrical solution is much more complicated. Here we provide an iterative method, converging fast to the correct Fermat points and the total distances, relevant for the multiquark potentials.

  1. Bayesian methods for jointly estimating genomic breeding values of one continuous and one threshold trait.

    Directory of Open Access Journals (Sweden)

    Chonglong Wang

    Full Text Available Genomic selection has become a useful tool for animal and plant breeding. Currently, genomic evaluation is usually carried out using a single-trait model. However, a multi-trait model has the advantage of using information on the correlated traits, leading to more accurate genomic prediction. To date, joint genomic prediction for a continuous and a threshold trait using a multi-trait model is scarce and needs more attention. Based on the previously proposed methods BayesCπ for single continuous trait and BayesTCπ for single threshold trait, we developed a novel method based on a linear-threshold model, i.e., LT-BayesCπ, for joint genomic prediction of a continuous trait and a threshold trait. Computing procedures of LT-BayesCπ using Markov Chain Monte Carlo algorithm were derived. A simulation study was performed to investigate the advantages of LT-BayesCπ over BayesCπ and BayesTCπ with regard to the accuracy of genomic prediction on both traits. Factors affecting the performance of LT-BayesCπ were addressed. The results showed that, in all scenarios, the accuracy of genomic prediction obtained from LT-BayesCπ was significantly increased for the threshold trait compared to that from single trait prediction using BayesTCπ, while the accuracy for the continuous trait was comparable with that from single trait prediction using BayesCπ. The proposed LT-BayesCπ could be a method of choice for joint genomic prediction of one continuous and one threshold trait.

  2. PowerPoint 2007 for Dummies

    CERN Document Server

    Lowe, Doug

    2007-01-01

    New and inexperienced PowerPoint users will discover how to use the latest enhancements to PowerPoint 2007 quickly and efficiently so that they can produce unique and informative presentations. PowerPoint continues to be the world's most popular presentation software. This updated For Dummies guide shows users different ways to create powerful and effective slideshow presentations that incorporate data from other applications in the form of charts, clip art, sound, and video. Shares the key features of PowerPoint 2007, including creating and editing slides, working with hyperlinks and action buttons

  3. Evaluation of the 5 and 8 pH point titration methods for monitoring anaerobic digesters treating solid waste.

    Science.gov (United States)

    Vannecke, T P W; Lampens, D R A; Ekama, G A; Volcke, E I P

    2015-01-01

    Simple titration methods certainly deserve consideration for on-site routine monitoring of volatile fatty acid (VFA) concentration and alkalinity during anaerobic digestion (AD), because of their simplicity, speed and cost-effectiveness. In this study, the 5 and 8 pH point titration methods for measuring the VFA concentration and carbonate system alkalinity (H2CO3*-alkalinity) were assessed and compared. For this purpose, synthetic solutions with known H2CO3*-alkalinity and VFA concentration as well as samples from anaerobic digesters treating three different kind of solid wastes were analysed. The results of these two related titration methods were verified with photometric and high-pressure liquid chromatography measurements. It was shown that photometric measurements lead to overestimations of the VFA concentration in the case of coloured samples. In contrast, the 5 pH point titration method provides an accurate estimation of the VFA concentration, clearly corresponding with the true value. Concerning the H2CO3*-alkalinity, the most accurate and precise estimations, showing very similar results for repeated measurements, were obtained using the 8 pH point titration. Overall, it was concluded that the 5 pH point titration method is the preferred method for the practical monitoring of AD of solid wastes due to its robustness, cost efficiency and user-friendliness.

  4. Improving Reference Service: The Case for Using a Continuous Quality Improvement Method.

    Science.gov (United States)

    Aluri, Rao

    1993-01-01

    Discusses the evaluation of library reference service; examines problems with past evaluations, including the lack of long-term planning and a systems perspective; and suggests a method for continuously monitoring and improving reference service using quality improvement tools such as checklists, cause and effect diagrams, Pareto charts, and…

  5. Starting Point: Linking Methods and Materials for Introductory Geoscience Courses

    Science.gov (United States)

    Manduca, C. A.; MacDonald, R. H.; Merritts, D.; Savina, M.

    2004-12-01

    Introductory courses are one of the most challenging teaching environments for geoscience faculty. Courses are often large, students have a wide variety of backgrounds and skills, and student motivation can include completing a geoscience major, preparing for a career as a teacher, fulfilling a distribution requirement, and general interest. The Starting Point site (http://serc.carleton.edu/introgeo/index.html) provides help for faculty teaching introductory courses by linking together examples of different teaching methods that have been used in entry-level courses with information about how to use the methods and relevant references from the geoscience and education literature. Examples span the content of geoscience courses including the atmosphere, biosphere, climate, Earth surface, energy/material cycles, human dimensions/resources, hydrosphere/cryosphere, ocean, solar system, solid earth and geologic time/earth history. Methods include interactive lecture (e.g., think-pair-share, ConcepTests, and in-class activities and problems), investigative cases, peer review, role playing, Socratic questioning, games, and field labs. A special section of the site devoted to using an Earth System approach provides resources with content information about the various aspects of the Earth system linked to examples of teaching this content. Examples of courses incorporating Earth systems content, and strategies for designing an Earth system course are also included. A similar section on Teaching with an Earth History approach explores geologic history as a vehicle for teaching geoscience concepts and as a framework for course design. The Starting Point site has been authored and reviewed by faculty around the country. Evaluation indicates that faculty find the examples particularly helpful both for direct implementation in their classes and for sparking ideas. The help provided for using different teaching methods makes the examples particularly useful. Examples are chosen from

  6. Application of wavelet scaling function expansion continuous-energy resonance calculation method to MOX fuel problem

    International Nuclear Information System (INIS)

    Yang, W.; Wu, H.; Cao, L.

    2012-01-01

    More and more MOX fuel has been used all over the world in the past several decades. Compared with UO2 fuel, it exhibits some new features. For example, the neutron spectrum is harder and more resonance interference effects arise within the resonance energy range because more resonant nuclides are contained in the MOX fuel. In this paper, the wavelet scaling function expansion method is applied to study the resonance behavior of the plutonium isotopes within MOX fuel. The wavelet scaling function expansion continuous-energy self-shielding method was developed recently. It has been validated and verified by comparison to Monte Carlo calculations. In this method, the continuous-energy cross-sections are utilized within the resonance energy range, which means that it is capable of solving problems with serious resonance interference effects without iterative calculations. Therefore, this method is naturally suited to the MOX fuel resonance calculation problem. Furthermore, the plutonium isotopes, especially 240Pu and 242Pu, exhibit strong oscillations of the total cross-section within the thermal energy range. To take the thermal resonance effect of the plutonium isotopes into consideration, the wavelet scaling function expansion continuous-energy resonance calculation code WAVERESON is enhanced by applying the free-gas scattering kernel to obtain the continuous-energy scattering source within the thermal energy range (2.1 eV to 4.0 eV), in contrast to the resonance energy range, in which the elastic scattering kernel is utilized. Finally, all of the calculation results of WAVERESON are compared with MCNP calculations. (authors)

  7. Cloud-point measurement for (sulphate salts + polyethylene glycol 15000 + water) systems by the particle counting method

    International Nuclear Information System (INIS)

    Imani, A.; Modarress, H.; Eliassi, A.; Abdous, M.

    2009-01-01

    The phase separation of (water + salt + polyethylene glycol 15000) systems was studied by cloud-point measurements using the particle counting method. The effects of the concentration of three kinds of sulphate salt (Na2SO4, K2SO4, (NH4)2SO4), the polyethylene glycol 15000 concentration, and the mass ratio of polymer to salt on the cloud-point temperature of these systems have been investigated. The results obtained indicate that the cloud-point temperatures decrease linearly with an increase in polyethylene glycol concentration for the different salts. Also, the cloud points decrease with an increase in the mass ratio of salt to polymer.

  8. Comparison of methods for calculating conditional expectations of sufficient statistics for continuous time Markov chains.

    Science.gov (United States)

    Tataru, Paula; Hobolth, Asger

    2011-12-05

    Continuous time Markov chains (CTMCs) are a widely used model for describing the evolution of DNA sequences on the nucleotide, amino acid or codon level. The sufficient statistics for CTMCs are the time spent in a state and the number of changes between any two states. In applications, past evolutionary events (exact times and types of changes) are inaccessible and the past must be inferred from DNA sequence data observed in the present. We describe and implement three algorithms for computing linear combinations of expected values of the sufficient statistics, conditioned on the end-points of the chain, and compare their performance with respect to accuracy and running time. The first algorithm is based on an eigenvalue decomposition of the rate matrix (EVD), the second on uniformization (UNI), and the third on integrals of matrix exponentials (EXPM). The implementation in R of the algorithms is available at http://www.birc.au.dk/~paula/. We use two different models to analyze the accuracy and eight experiments to investigate the speed of the three algorithms. We find that they have similar accuracy and that EXPM is the slowest method. Furthermore we find that UNI is usually faster than EVD.
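
    A compact sketch of the EXPM-style computation for one statistic (the expected time spent in a state, conditioned on the end-points), using the auxiliary block-matrix (Van Loan) trick for the integral of matrix exponentials; the function name and the two-state example are illustrative assumptions, and SciPy supplies the matrix exponential.

```python
import numpy as np
from scipy.linalg import expm

def expected_time_in_state(Q, t, a, b, i):
    """E[time spent in state i | X_0 = a, X_t = b] for a CTMC with rate
    matrix Q, via the integral of matrix exponentials evaluated with an
    auxiliary block matrix."""
    n = Q.shape[0]
    E = np.zeros((n, n))
    E[i, i] = 1.0                                        # indicator of occupying state i
    A = np.block([[Q, E], [np.zeros((n, n)), Q]])
    G = expm(A * t)[:n, n:]                              # = integral_0^t expm(Q s) E expm(Q (t-s)) ds
    P = expm(Q * t)
    return G[a, b] / P[a, b]

# Two-state example (exit rates 1.0 and 2.0): expected occupancy of state 0 over [0, t]
Q = np.array([[-1.0, 1.0],
              [ 2.0, -2.0]])
print(expected_time_in_state(Q, t=1.0, a=0, b=0, i=0))
```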

  9. Spent Fuel Pool Dose Rate Calculations Using Point Kernel and Hybrid Deterministic-Stochastic Shielding Methods

    International Nuclear Information System (INIS)

    Matijevic, M.; Grgic, D.; Jecmenica, R.

    2016-01-01

    This paper presents a comparison of the Krsko Power Plant simplified Spent Fuel Pool (SFP) dose rates using different computational shielding methodologies. The analysis was performed to estimate the limiting gamma dose rates on wall-mounted level instrumentation in the case of a significant loss of cooling water. The SFP was represented with simple homogenized cylinders (point kernel and Monte Carlo (MC)) or cuboids (MC) using uranium, iron, water, and dry air as bulk region materials. The pool is divided into an old and a new section, where the old one has three additional subsections representing fuel assemblies (FAs) with different burnup/cooling times (60 days, 1 year and 5 years). The new section represents the FAs with a cooling time of 10 years. The time-dependent fuel assembly isotopic composition was calculated using the ORIGEN2 code applied to the depletion of one of the fuel assemblies present in the pool (AC-29). The source used in the Microshield calculation is based on the imported isotopic activities. The time-dependent photon spectra with total source intensity from the Microshield multigroup point kernel calculations were then prepared for two hybrid deterministic-stochastic sequences. One is based on the SCALE/MAVRIC (Monaco and Denovo) methodology and the other uses the Monte Carlo code MCNP6.1.1b and the ADVANTG 3.0.1 code. Even though this model is a fairly simple one, the layers of shielding materials are thick enough to pose a significant shielding problem for the MC method without the use of an effective variance reduction (VR) technique. For that purpose the ADVANTG code was used to generate VR parameters (SB cards in SDEF and a WWINP file) for the MCNP fixed-source calculation using continuous-energy transport. ADVANTG employs the deterministic forward-adjoint transport solver Denovo, which implements the CADIS/FW-CADIS methodology. Denovo implements a structured, Cartesian-grid SN solver based on the Koch-Baker-Alcouffe parallel transport sweep algorithm across x-y domain blocks. This was first

  10. Point and interval forecasts of mortality rates and life expectancy: A comparison of ten principal component methods

    Directory of Open Access Journals (Sweden)

    Han Lin Shang

    2011-07-01

    Full Text Available Using the age- and sex-specific data of 14 developed countries, we compare the point and interval forecast accuracy and bias of ten principal component methods for forecasting mortality rates and life expectancy. The ten methods are variants and extensions of the Lee-Carter method. Based on one-step forecast errors, the weighted Hyndman-Ullah method provides the most accurate point forecasts of mortality rates and the Lee-Miller method is the least biased. For the accuracy and bias of life expectancy, the weighted Hyndman-Ullah method performs the best for female mortality and the Lee-Miller method for male mortality. While all methods underestimate variability in mortality rates, the more complex Hyndman-Ullah methods are more accurate than the simpler methods. The weighted Hyndman-Ullah method provides the most accurate interval forecasts for mortality rates, while the robust Hyndman-Ullah method provides the best interval forecast accuracy for life expectancy.
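
    For orientation, a minimal sketch of the basic Lee-Carter variant underlying the ten principal component methods compared above: an SVD of centred log-mortality rates and a random-walk-with-drift extrapolation of the period index. The names, normalisation and toy data are assumptions, not the paper's code.

```python
import numpy as np

def lee_carter_forecast(log_m, horizon=10):
    """Basic Lee-Carter sketch: log m_{x,t} = a_x + b_x k_t + e, with the
    period index k_t extrapolated by a random walk with drift.
    log_m has one row per age group and one column per year."""
    a = log_m.mean(axis=1, keepdims=True)                # age pattern
    U, s, Vt = np.linalg.svd(log_m - a, full_matrices=False)
    b = U[:, 0] / U[:, 0].sum()                          # normalised age loadings
    k = s[0] * Vt[0] * U[:, 0].sum()                     # period index
    drift = (k[-1] - k[0]) / (len(k) - 1)
    k_future = k[-1] + drift * np.arange(1, horizon + 1)
    return a + b[:, None] * k_future[None, :]            # point forecasts of log rates

# Toy data: 5 age groups, 30 years of declining log mortality
rng = np.random.default_rng(5)
years, ages = 30, 5
trend = -0.02 * np.arange(years)
log_m = -3.0 + 0.5 * np.arange(ages)[:, None] + trend[None, :] + rng.normal(0, 0.01, (ages, years))
print(lee_carter_forecast(log_m, horizon=3).shape)       # (5, 3)
```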

  11. Comparisons of adaptive TIN modelling filtering method and threshold segmentation filtering method of LiDAR point cloud

    International Nuclear Information System (INIS)

    Chen, Lin; Fan, Xiangtao; Du, Xiaoping

    2014-01-01

    Point cloud filtering is the basic and key step in LiDAR data processing. The Adaptive Triangle Irregular Network Modelling (ATINM) algorithm and the Threshold Segmentation on Elevation Statistics (TSES) algorithm are among the mature algorithms. However, few studies concentrate on the parameter selection of ATINM and the iteration condition of TSES, both of which can greatly affect the filtering results. The paper first presents these two key problems in two different terrain environments. For a flat area, small height and angle parameters perform well, while for areas with complex feature changes, large height and angle parameters perform well. One-time segmentation is enough for flat areas, whereas repeated segmentations are essential for complex areas. The paper then compares and analyses the results of the two methods. ATINM has a larger type I error in both data sets, as it sometimes removes excessive points; TSES has a larger type II error in both data sets, as it ignores topological relations between points. ATINM performs well even for a large region with dramatic topology, while TSES is more suitable for small regions with flat topology. Different parameters and iterations can cause relatively large filtering differences

  12. Practical dose point-based methods to characterize dose distribution in a stationary elliptical body phantom for a cone-beam C-arm CT system

    Energy Technology Data Exchange (ETDEWEB)

    Choi, Jang-Hwan, E-mail: jhchoi21@stanford.edu [Department of Radiology, Stanford University, Stanford, California 94305 and Department of Mechanical Engineering, Stanford University, Stanford, California 94305 (United States); Constantin, Dragos [Microwave Physics R& E, Varian Medical Systems, Palo Alto, California 94304 (United States); Ganguly, Arundhuti; Girard, Erin; Fahrig, Rebecca [Department of Radiology, Stanford University, Stanford, California 94305 (United States); Morin, Richard L. [Mayo Clinic Jacksonville, Jacksonville, Florida 32224 (United States); Dixon, Robert L. [Department of Radiology, Wake Forest University, Winston-Salem, North Carolina 27157 (United States)

    2015-08-15

    Purpose: To propose new dose point measurement-based metrics to characterize the dose distributions and the mean dose from a single partial rotation of an automatic exposure control-enabled, C-arm-based, wide cone angle computed tomography system over a stationary, large, body-shaped phantom. Methods: A small 0.6 cm³ ion chamber (IC) was used to measure the radiation dose in an elliptical body-shaped phantom made of tissue-equivalent material. The IC was placed at 23 well-distributed holes in the central and peripheral regions of the phantom and dose was recorded for six acquisition protocols with different combinations of minimum kVp (109 and 125 kVp) and z-collimator aperture (full: 22.2 cm; medium: 14.0 cm; small: 8.4 cm). Monte Carlo (MC) simulations were carried out to generate complete 2D dose distributions in the central plane (z = 0). The MC model was validated at the 23 dose points against IC experimental data. The planar dose distributions were then estimated using subsets of the point dose measurements using two proposed methods: (1) the proximity-based weighting method (method 1) and (2) the dose point surface fitting method (method 2). Twenty-eight different dose point distributions with six different point number cases (4, 5, 6, 7, 14, and 23 dose points) were evaluated to determine the optimal number of dose points and their placement in the phantom. The performances of the methods were determined by comparing their results with those of the validated MC simulations. The performances of the methods in the presence of measurement uncertainties were evaluated. Results: The 5-, 6-, and 7-point cases had differences below 2%, ranging from 1.0% to 1.7% for both methods, which is a performance comparable to that of the methods with a relatively large number of points, i.e., the 14- and 23-point cases. However, with the 4-point case, the performances of the two methods decreased sharply. Among the 4-, 5-, 6-, and 7-point cases, the 7-point case (1
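    The record does not spell out the fitting details of method 2 ("dose point surface fitting"). As an illustration only, the sketch below fits a low-order 2D polynomial dose surface to a handful of point measurements by least squares; the polynomial form, and the requirement of at least as many points as coefficients, are our assumptions, not the paper's formulation.

```python
import numpy as np

def fit_dose_surface(x, y, dose, order=2):
    """Least-squares fit of a 2D polynomial D(x, y) to point dose measurements.

    x, y  : coordinates of the ion-chamber positions (e.g. cm in the z = 0 plane)
    dose  : measured doses at those positions
    order : total polynomial order (order=2 -> 1, x, y, x^2, x*y, y^2)
    Returns a callable dose_at(xq, yq).
    """
    terms = [(i, j) for i in range(order + 1) for j in range(order + 1 - i)]
    A = np.column_stack([x**i * y**j for i, j in terms])
    coef, *_ = np.linalg.lstsq(A, dose, rcond=None)

    def dose_at(xq, yq):
        xq, yq = np.asarray(xq, float), np.asarray(yq, float)
        B = np.column_stack([xq**i * yq**j for i, j in terms])
        return B @ coef

    return dose_at

# seven illustrative measurement points (positions in cm, doses in arbitrary units)
x = np.array([0.0, 10.0, -10.0, 0.0, 0.0, 7.0, -7.0])
y = np.array([0.0, 0.0, 0.0, 6.0, -6.0, 4.0, -4.0])
d = np.array([8.1, 10.3, 10.1, 9.0, 8.8, 9.5, 9.4])
dose_at = fit_dose_surface(x, y, d)
print(dose_at([5.0], [2.0]))
```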

  13. Point Measurements of Fermi Velocities by a Time-of-Flight Method

    DEFF Research Database (Denmark)

    Falk, David S.; Henningsen, J. O.; Skriver, Hans Lomholt

    1972-01-01

    The present paper describes in detail a new method of obtaining information about the Fermi velocity of electrons in metals, point by point, along certain contours on the Fermi surface. It is based on transmission of microwaves through thin metal slabs in the presence of a static magnetic field...... applied parallel to the surface. The electrons carry the signal across the slab and arrive at the second surface with a phase delay which is measured relative to a reference signal; the velocities are derived by analyzing the magnetic field dependence of the phase delay. For silver we have in this way...... obtained one component of the velocity along half the circumference of the centrally symmetric orbit for B→∥[100]. The results are in agreement with current models for the Fermi surface. For B→∥[011], the electrons involved are not moving in a symmetry plane of the Fermi surface. In such cases one cannot...

  14. Beam-pointing error compensation method of phased array radar seeker with phantom-bit technology

    Directory of Open Access Journals (Sweden)

    Qiuqiu WEN

    2017-06-01

    Full Text Available A phased array radar seeker (PARS) must be able to effectively decouple body motion and accurately extract the line-of-sight (LOS) rate for target missile tracking. In this study, the real-time two-channel beam pointing error (BPE) compensation method of PARS for LOS rate extraction is designed. The PARS discrete beam motion principium is analyzed, and the mathematical model of beam scanning control is established. According to the principle of the antenna element shift phase, both the antenna element shift phase law and the causes of beam-pointing error under phantom-bit conditions are analyzed, and the effect of the BPE caused by phantom-bit technology (PBT) on the extraction accuracy of the LOS rate is examined. A compensation method is given, which includes coordinate transforms, beam angle margin compensation, and detector dislocation angle calculation. When the method is used, the beam angle margin in the pitch and yaw directions is calculated to reduce the effect of the missile body disturbance and to improve LOS rate extraction precision by compensating for the detector dislocation angle. The simulation results validate the proposed method.

  15. Intelligent Continuous Double Auction method For Service Allocation in Cloud Computing

    Directory of Open Access Journals (Sweden)

    Nima Farajian

    2013-10-01

    Full Text Available The market-oriented approach is an effective method for resource management because of its regulation of supply and demand, and it is suitable for the cloud environment, where computing resources, either software or hardware, are virtualized and allocated as services from providers to users. In this paper a continuous double auction method for efficient cloud service allocation is presented which (i) enables consumers to order various resources (services) for workflows and co-allocation, (ii) lets consumers and providers set bid and ask prices based on deadline and workload time, with providers additionally able to trade off between utilization time and bid price, and (iii) allows auctioneers to intelligently find optimum matchings by sharing and merging resources, which results in more trades. Experimental results show that the proposed method is efficient in terms of successful allocation rate and resource utilization.
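    The matching rule itself is not given in this record; the sketch below is a generic greedy continuous-double-auction clearing step (sort bids down, asks up, trade while they cross), shown only to make the auction mechanics concrete. It does not include the paper's workflow co-allocation or resource sharing/merging logic.

```python
def match_cda(bids, asks):
    """Greedy continuous double auction matching.

    bids : list of (consumer_id, bid_price) offers to buy a service slot
    asks : list of (provider_id, ask_price) offers to sell a service slot
    Returns a list of trades (consumer_id, provider_id, trade_price).
    """
    bids = sorted(bids, key=lambda b: b[1], reverse=True)   # best buyers first
    asks = sorted(asks, key=lambda a: a[1])                 # cheapest providers first
    trades = []
    while bids and asks and bids[0][1] >= asks[0][1]:
        consumer, bid = bids.pop(0)
        provider, ask = asks.pop(0)
        trades.append((consumer, provider, (bid + ask) / 2))  # midpoint pricing
    return trades

# example: two trades clear, the remaining bid is below the remaining ask
print(match_cda([("c1", 10), ("c2", 7), ("c3", 4)], [("p1", 5), ("p2", 6), ("p3", 9)]))
```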

  16. Short run hydrothermal coordination with network constraints using an interior point method

    International Nuclear Information System (INIS)

    Lopez Lezama, Jesus Maria; Gallego Pareja, Luis Alfonso; Mejia Giraldo, Diego

    2008-01-01

    This paper presents a linear optimization model to solve the hydrothermal coordination problem. The main contribution of this work is the inclusion of network constraints in the hydrothermal coordination problem and its solution using an interior point method. The proposed model allows working with a system that can be completely hydraulic, completely thermal, or mixed. Results are presented on the IEEE 14-bus test system
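    As a toy illustration of the kind of formulation involved (not the authors' model), a linear dispatch problem can be posed and handed to an interior-point LP solver; all cost coefficients, unit limits and the demand below are made-up numbers, and network constraints would enter as additional inequality rows.

```python
import numpy as np
from scipy.optimize import linprog

# Toy hydrothermal dispatch for one period: minimise thermal fuel cost subject to
# meeting demand with one hydro and two thermal units (illustrative numbers only).
# Variables: x = [p_hydro, p_thermal1, p_thermal2]  (MW)
cost = np.array([0.0, 20.0, 35.0])            # $/MWh (hydro treated as "free")
demand = 180.0                                 # MW

A_eq, b_eq = np.array([[1.0, 1.0, 1.0]]), np.array([demand])   # power balance
bounds = [(0, 80), (0, 100), (0, 100)]                         # unit limits (MW)

# "highs-ipm" selects the HiGHS interior-point solver (name may vary by SciPy version)
res = linprog(cost, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs-ipm")
print(res.x, res.fun)   # expect hydro at its 80 MW limit, the cheap thermal unit covering the rest
```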

  17. Continuity and Discontinuity: The Case of Second Couplehood in Old Age

    Science.gov (United States)

    Koren, Chaya

    2011-01-01

    Purpose: Continuity and discontinuity are controversial concepts in social theories on aging. The aim of this article is to explore these concepts using the experiences of older persons living in second couplehood in old age as a case in point. Design and Method: Based on a larger qualitative study on second couplehood in old age, following the…

  18. Group vector space method for estimating enthalpy of vaporization of organic compounds at the normal boiling point.

    Science.gov (United States)

    Wenying, Wei; Jinyu, Han; Wen, Xu

    2004-01-01

    The specific position of a group in the molecule has been considered, and a group vector space method for estimating the enthalpy of vaporization at the normal boiling point of organic compounds has been developed. An expression for the enthalpy of vaporization Delta(vap)H(T(b)) has been established and numerical values of the relative group parameters obtained. The average percent deviation of the estimation of Delta(vap)H(T(b)) is 1.16, which shows that the present method demonstrates a significant improvement in applicability for predicting the enthalpy of vaporization at the normal boiling point compared with conventional group methods.
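    The relative group parameters and the vector-space weighting of the paper are not reproduced here; the sketch below only illustrates the bookkeeping of a plain additive group-contribution estimate, with a purely hypothetical base term and increments (our placeholders, not the paper's values).

```python
# Placeholder group increments (kJ/mol) -- hypothetical values for illustration only;
# the paper's relative group parameters and positional weighting are not reproduced.
GROUP_INCREMENT = {"CH3": 2.4, "CH2": 2.0, "OH": 12.5}
BASE = 11.0  # hypothetical constant term

def dvapH_Tb_estimate(groups):
    """Additive group-contribution estimate of the vaporization enthalpy at T_b.

    groups: dict of {group_name: count}, e.g. n-propanol = CH3-CH2-CH2-OH
    """
    return BASE + sum(GROUP_INCREMENT[g] * n for g, n in groups.items())

print(dvapH_Tb_estimate({"CH3": 1, "CH2": 2, "OH": 1}))  # 11.0 + 2.4 + 4.0 + 12.5 = 29.9
```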

  19. Determination of area reduction rate by continuous ball indentation test

    International Nuclear Information System (INIS)

    Zou, Bin; Guan, Kai Shu; Wu, Sheng Bao

    2016-01-01

    The rate of area reduction is an important mechanical property for appraising the plasticity of metals, and it is usually obtained from the uniaxial tensile test. A methodology is proposed to determine the area reduction rate by the continuous ball indentation test technique. The continuum damage accumulation theory is adopted in this work to identify the failure point in the indentation. The corresponding indentation depth at this point can be obtained and used to estimate the area reduction rate. The local strain limit criterion proposed in the ASME VIII-2 2007 alternative rules is also adopted in this research to convert the multiaxial strain of the indentation test to the uniaxial strain of the tensile test. The pile-up and sink-in phenomena, which can affect the result significantly, are also discussed in this paper. This method can be useful in engineering practice for evaluating material degradation under severe working conditions, owing to the non-destructive nature of the ball indentation test. To validate the method, continuous ball indentation tests are performed on ferritic steel 16MnR and ASTM A193 B16, and the results are compared with those obtained from the traditional uniaxial tensile test.

  20. Continuous quality improvement process pin-points delays, speeds STEMI patients to life-saving treatment.

    Science.gov (United States)

    2011-11-01

    Using a multidisciplinary team approach, the University of California, San Diego, Health System has been able to significantly reduce average door-to-balloon angioplasty times for patients with the most severe form of heart attacks, beating national recommendations by more than a third. The multidisciplinary team meets monthly to review all cases involving patients with ST-segment-elevation myocardial infarctions (STEMI) to see where process improvements can be made. Using this continuous quality improvement (CQI) process, the health system has reduced average door-to-balloon times from 120 minutes to less than 60 minutes, and administrators are now aiming for further progress. Among the improvements instituted by the multidisciplinary team are the implementation of a "greeter" with enough clinical expertise to quickly pick up on potential STEMI heart attacks as soon as patients walk into the ED, and the purchase of an electrocardiogram (EKG) machine so that evaluations can be done in the triage area. ED staff have prepared "STEMI" packets, including items such as special IV tubing and disposable leads, so that patients headed for the catheterization laboratory are prepared to undergo the procedure soon after arrival. All the clocks and devices used in the ED are synchronized so that analysts can later review how long it took to complete each step of the care process. Points of delay can then be targeted for improvement.

  1. A Mixed-Methods Study Investigating the Relationship between Media Multitasking Orientation and Grade Point Average

    Science.gov (United States)

    Lee, Jennifer

    2012-01-01

    The intent of this study was to examine the relationship between media multitasking orientation and grade point average. The study utilized a mixed-methods approach to investigate the research questions. In the quantitative section of the study, the primary method of statistical analyses was multiple regression. The independent variables for the…

  2. Evaluating Point of Sale Tobacco Marketing Using Behavioral Laboratory Methods

    Science.gov (United States)

    Robinson, Jason D.; Drobes, David J.; Brandon, Thomas H.; Wetter, David W.; Cinciripini, Paul M.

    2018-01-01

    With passage of the 2009 Family Smoking Prevention and Tobacco Control Act, the FDA has authority to regulate tobacco advertising. As bans on traditional advertising venues and promotion of tobacco products have grown, a greater emphasis has been placed on brand exposure and price promotion in displays of products at the point-of-sale (POS). POS marketing seeks to influence attitudes and behavior towards tobacco products using a variety of explicit and implicit messaging approaches. Behavioral laboratory methods have the potential to provide the FDA with a strong scientific base for regulatory actions and a model for testing future manipulations of POS advertisements. We review aspects of POS marketing that potentially influence smoking behavior, including branding, price promotions, health claims, the marketing of emerging tobacco products, and tobacco counter-advertising. We conceptualize how POS marketing potentially influences individual attention, memory, implicit attitudes, and smoking behavior. Finally, we describe specific behavioral laboratory methods that can be adapted to measure the impact of POS marketing on these domains.

  3. Application of continuous seismic-reflection techniques to delineate paleochannels beneath the Neuse River at US Marine Corps Air Station, Cherry Point, North Carolina

    Science.gov (United States)

    Cardinell, Alex P.

    1999-01-01

    A continuous seismic-reflection profiling survey was conducted by the U.S. Geological Survey on the Neuse River near the Cherry Point Marine Corps Air Station during July 7-24, 1998. Approximately 52 miles of profiling data were collected during the survey from areas northwest of the Air Station to Flanner Beach and southeast to Cherry Point. Positioning of the seismic lines was done by using an integrated navigational system. Data from the survey were used to define and delineate paleochannel alignments under the Neuse River near the Air Station. These data also were correlated with existing surface and borehole geophysical data, including vertical seismic-profiling velocity data collected in 1995. Sediments believed to be Quaternary in age were identified at varying depths on the seismic sections as undifferentiated reflectors and lack the lateral continuity of underlying reflectors believed to represent older sediments of Tertiary age. The sediments of possible Quaternary age thicken to the southeast. Paleochannels of Quaternary age and varying depths were identified beneath the Neuse River estuary. These paleochannels range in width from 870 feet to about 6,900 feet. Two zones of buried paleochannels were identified in the continuous seismic-reflection profiling data. The eastern paleochannel zone includes two large superimposed channel features identified during this study and in re-interpreted 1995 land seismic-reflection data. The second paleochannel zone, located west of the first paleochannel zone, contains several small paleochannels near the central and south shore of the Neuse River estuary between Slocum Creek and Flanner Beach. This second zone of channel features may be continuous with those mapped by the U.S. Geological Survey in 1995 using land seismic-reflection data on the southern end of the Air Station. Most of the channels were mapped at the Quaternary-Tertiary sediment boundary. These channels appear to have been cut into the older sediments

  4. Developing Common Set of Weights with Considering Nondiscretionary Inputs and Using Ideal Point Method

    Directory of Open Access Journals (Sweden)

    Reza Kiani Mavi

    2013-01-01

    Full Text Available Data envelopment analysis (DEA) is used to evaluate the performance of decision making units (DMUs) with multiple inputs and outputs in a homogeneous group. In this way, the acquired relative efficiency score for each decision making unit lies between zero and one, and a number of them may have an equal efficiency score of one. DEA successfully divides them into the two categories of efficient and inefficient DMUs. A ranking for inefficient DMUs is given, but DEA does not provide further information about the efficient DMUs. One of the popular methods for evaluating and ranking DMUs is the common set of weights (CSW) method. We develop a CSW model that considers nondiscretionary inputs, which are beyond the control of DMUs, using the ideal point method. The main idea of this approach is to minimize the distance between the evaluated decision making unit and the ideal decision making unit (ideal point). Using an empirical example, we put our proposed model to the test by applying it to the data of some 20 bank branches and rank their efficient units.

  5. Interesting Interest Points

    DEFF Research Database (Denmark)

    Aanæs, Henrik; Dahl, Anders Lindbjerg; Pedersen, Kim Steenstrup

    2012-01-01

    Not all interest points are equally interesting. The most valuable interest points lead to optimal performance of the computer vision method in which they are employed. But a measure of this kind will be dependent on the chosen vision application. We propose a more general performance measure based on spatial invariance of interest points under changing acquisition parameters by measuring the spatial recall rate. The scope of this paper is to investigate the performance of a number of existing well-established interest point detection methods. Automatic performance evaluation of interest points is hard ... position. The LED illumination provides the option for artificially relighting the scene from a range of light directions. This data set has given us the ability to systematically evaluate the performance of a number of interest point detectors. The highlights of the conclusions are that the fixed scale...

  6. Statistics of stationary points of random finite polynomial potentials

    International Nuclear Information System (INIS)

    Mehta, Dhagash; Niemerg, Matthew; Sun, Chuang

    2015-01-01

    The stationary points (SPs) of the potential energy landscapes (PELs) of multivariate random potentials (RPs) have found many applications in many areas of Physics, Chemistry and Mathematical Biology. However, there are few reliable methods available which can find all the SPs accurately. Hence, one has to rely on indirect methods such as Random Matrix theory. With a combination of the numerical polynomial homotopy continuation method and a certification method, we obtain all the certified SPs of the most general polynomial RP for each sample chosen from the Gaussian distribution with mean 0 and variance 1. While obtaining many novel results for the finite size case of the RP, we also discuss the implications of our results on mathematics of random systems and string theory landscapes. (paper)

  7. Development of modifications to the material point method for the simulation of thin membranes, compressible fluids, and their interactions

    Energy Technology Data Exchange (ETDEWEB)

    York, A.R. II [Sandia National Labs., Albuquerque, NM (United States). Engineering and Process Dept.

    1997-07-01

    The material point method (MPM) is an evolution of the particle in cell method where Lagrangian particles or material points are used to discretize the volume of a material. The particles carry properties such as mass, velocity, stress, and strain and move through a Eulerian or spatial mesh. The momentum equation is solved on the Eulerian mesh. Modifications to the material point method are developed that allow the simulation of thin membranes, compressible fluids, and their dynamic interactions. A single layer of material points through the thickness is used to represent a membrane. The constitutive equation for the membrane is applied in the local coordinate system of each material point. Validation problems are presented and numerical convergence is demonstrated. Fluid simulation is achieved by implementing a constitutive equation for a compressible, viscous, Newtonian fluid and by solution of the energy equation. The fluid formulation is validated by simulating a traveling shock wave in a compressible fluid. Interactions of the fluid and membrane are handled naturally with the method. The fluid and membrane communicate through the Eulerian grid on which forces are calculated due to the fluid and membrane stress states. Validation problems include simulating a projectile impacting an inflated airbag. In some impact simulations with the MPM, bodies may tend to stick together when separating. Several algorithms are proposed and tested that allow bodies to separate from each other after impact. In addition, several methods are investigated to determine the local coordinate system of a membrane material point without relying upon connectivity data.
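    The particle-to-grid transfer at the core of the description above ("particles carry mass, velocity, stress, and strain" while "the momentum equation is solved on the Eulerian mesh") can be sketched in 1D with linear hat shape functions as follows. This is a generic MPM fragment, not the thesis's membrane or compressible-fluid formulation.

```python
import numpy as np

def p2g_1d(x_p, m_p, v_p, n_nodes, dx):
    """Particle-to-grid (P2G) transfer of mass and momentum with linear shape functions.

    x_p, m_p, v_p : particle positions, masses, velocities (1D arrays)
    n_nodes, dx   : number of grid nodes and grid spacing (nodes at 0, dx, 2dx, ...)
    Returns nodal masses and nodal velocities (momentum / mass).
    """
    mass = np.zeros(n_nodes)
    momentum = np.zeros(n_nodes)
    for x, m, v in zip(x_p, m_p, v_p):
        i = int(x // dx)                      # left node of the cell containing x
        w_right = x / dx - i                  # linear hat-function weights
        for node, w in ((i, 1.0 - w_right), (i + 1, w_right)):
            mass[node] += w * m
            momentum[node] += w * m * v
    vel = np.divide(momentum, mass, out=np.zeros(n_nodes), where=mass > 0)
    return mass, vel

# three particles in a 5-node grid with dx = 1
mass, vel = p2g_1d(np.array([0.3, 1.6, 2.5]), np.ones(3), np.array([1.0, 2.0, -1.0]),
                   n_nodes=5, dx=1.0)
print(mass, vel)
```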

  8. Formulations to overcome the divergence of iterative method of fixed-point in nonlinear equations solution

    Directory of Open Access Journals (Sweden)

    Wilson Rodríguez Calderón

    2015-04-01

    Full Text Available When we need to determine the solution of a nonlinear equation there are two options: closed methods, which use intervals that contain the root and reduce their size in a natural way during the iterative process, and open methods, which are an attractive option because they do not require an initial enclosing interval. In general, open methods are more efficient computationally, though they do not always converge. In this paper we present an analysis of a divergence case that arises when the fixed-point iteration method is used to find the normal depth in a rectangular channel using the Manning equation. To solve this problem, we propose applying two strategies (developed by the authors) that modify the iteration function, leading to additional formulations of the traditional method and of its convergence theorem. Although the Manning equation can be solved with other methods such as Newton's, when the fixed-point iteration method is used an interesting divergence situation appears, which can be resolved with a convergence higher than quadratic over the initial iterations. The proposed strategies have been tested in two cases; a study of the divergence of the square root of real numbers had previously been carried out by the authors for testing. Results in both cases have been successful. We present comparisons because they are important for seeing the advantage of the proposed strategies versus the most representative open methods.
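    For a rectangular channel, the Manning equation Q = (1/n) A R^(2/3) S^(1/2) with A = b·y and R = A/(b + 2y) rearranges into the fixed-point form y = g(y) used below. This is a minimal sketch of the plain iteration only; the authors' two convergence-restoring reformulations are not reproduced, and the example numbers are arbitrary.

```python
def normal_depth_fixed_point(Q, b, S, n, y0=1.0, tol=1e-8, max_iter=200):
    """Fixed-point iteration for the normal depth y of a rectangular channel.

    Manning: Q = (1/n) * (b*y) * ((b*y)/(b + 2*y))**(2/3) * S**0.5
    Rearranged to y = ((Q*n/S**0.5)**(3/5) * (b + 2*y)**(2/5)) / b.
    May diverge for some (Q, b, S, n, y0); the paper's reformulations address
    exactly those cases and are not reproduced here.
    """
    c = (Q * n / S**0.5) ** 0.6          # (Q n / sqrt(S))^(3/5)
    y = y0
    for _ in range(max_iter):
        y_new = c * (b + 2.0 * y) ** 0.4 / b
        if abs(y_new - y) < tol:
            return y_new
        y = y_new
    raise RuntimeError("fixed-point iteration did not converge")

# example with made-up SI values: Q = 12 m^3/s, b = 4 m, S = 0.001, n = 0.013
print(normal_depth_fixed_point(12.0, 4.0, 0.001, 0.013))
```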

  9. A nine-point pH titration method to determine low-concentration VFA in municipal wastewater.

    Science.gov (United States)

    Ai, Hainan; Zhang, Daijun; Lu, Peili; He, Qiang

    2011-01-01

    Characterization of volatile fatty acid (VFA) in wastewater is significant for understanding the wastewater nature and for optimizing the wastewater treatment process based on the usage of Activated Sludge Models (ASMs). In this study, a nine-point pH titration method was developed for the determination of low-concentration VFA in municipal wastewater. The method was evaluated using synthetic wastewater containing VFA at concentrations of 10-50 mg/l and the possible interfering buffer systems of carbonate, phosphate and ammonium similar to those in real municipal wastewater. In addition, a further evaluation was conducted through the assay of real wastewater using chromatography as the reference. The results showed that the recovery of VFA in the synthetic wastewater was 92%-102% and the coefficient of variation (CV) of replicate measurements was 1.68%-4.72%. The changing content of the buffering substances had little effect on the accuracy of the method. Moreover, the titration method agreed with chromatography in the determination of VFA in real municipal wastewater, with R² = 0.9987 and CV = 1.3%-1.7%. The nine-point pH titration method is capable of satisfactory determination of low-concentration VFA in municipal wastewater.

  10. An interior-point method for total variation regularized positron emission tomography image reconstruction

    Science.gov (United States)

    Bai, Bing

    2012-03-01

    There has been a lot of work on total variation (TV) regularized tomographic image reconstruction recently. Many of them use gradient-based optimization algorithms with a differentiable approximation of the TV functional. In this paper we apply TV regularization in Positron Emission Tomography (PET) image reconstruction. We reconstruct the PET image in a Bayesian framework, using Poisson noise model and TV prior functional. The original optimization problem is transformed to an equivalent problem with inequality constraints by adding auxiliary variables. Then we use an interior point method with logarithmic barrier functions to solve the constrained optimization problem. In this method, a series of points approaching the solution from inside the feasible region are found by solving a sequence of subproblems characterized by an increasing positive parameter. We use preconditioned conjugate gradient (PCG) algorithm to solve the subproblems directly. The nonnegativity constraint is enforced by bend line search. The exact expression of the TV functional is used in our calculations. Simulation results show that the algorithm converges fast and the convergence is insensitive to the values of the regularization and reconstruction parameters.
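    In the notation of the description above, the barrier subproblems being solved take roughly the following form (our transcription, consistent with the text but not copied from the paper):

```latex
% Barrier subproblem for TV-regularized Poisson (PET) reconstruction -- transcription
\min_{x,\,t}\;
\sum_i \Big[ (Ax + r)_i - y_i \log (Ax + r)_i \Big]
\;+\; \beta \sum_j t_j
\;-\; \mu \sum_j \log\!\big( t_j^{2} - \| D_j x \|_2^{2} \big)
```

    Here A is the system matrix, r the expected randoms and scatter, y the measured counts, D_j the finite-difference operator at voxel j, t_j the auxiliary variables bounding ||D_j x||_2, β the TV weight, and μ the barrier parameter that is decreased over the sequence of subproblems; the nonnegativity constraint x ≥ 0 is handled separately by the bend line search mentioned above.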

  11. The evaluation of reflective learning from the nursing student's point of view: A mixed method approach.

    Science.gov (United States)

    Fernández-Peña, Rosario; Fuentes-Pumarola, Concepció; Malagón-Aguilera, M Carme; Bonmatí-Tomàs, Anna; Bosch-Farré, Cristina; Ballester-Ferrando, David

    2016-09-01

    Adapting university programmes to European Higher Education Area criteria has required substantial changes in curricula and teaching methodologies. Reflective learning (RL) has attracted growing interest and occupies an important place in the scientific literature on theoretical and methodological aspects of university instruction. However, fewer studies have focused on evaluating the RL methodology from the point of view of nursing students. The aim was to assess nursing students' perceptions of the usefulness and challenges of the RL methodology. A mixed-methods design was used, combining a cross-sectional questionnaire and a focus group discussion. The research was conducted via a self-reported reflective learning questionnaire complemented by a focus group discussion. Students provided a positive overall evaluation of RL, highlighting the method's capacity to help them better understand themselves, engage in self-reflection about the learning process, optimize their strengths and discover additional training needs, along with searching for continuous improvement. Nonetheless, RL does not help them as much to plan their learning or identify areas of weakness or needed improvement in knowledge, skills and attitudes. Among the difficulties or challenges, students reported low motivation and lack of familiarity with this type of learning, along with concerns about the privacy of their reflective journals and about the grading criteria. In general, students evaluated RL positively. The results suggest areas of needed improvement related to unfamiliarity with the methodology, ethical aspects of developing a reflective journal and the need for clear evaluation criteria. Copyright © 2016 Elsevier Ltd. All rights reserved.

  12. Extending the alias Monte Carlo sampling method to general distributions

    International Nuclear Information System (INIS)

    Edwards, A.L.; Rathkopf, J.A.; Smidt, R.K.

    1991-01-01

    The alias method is a Monte Carlo sampling technique that offers significant advantages over more traditional methods. It equals the accuracy of table lookup and the speed of equally probable bins. The original formulation of this method sampled from discrete distributions and was easily extended to histogram distributions. We have extended the method further to applications more germane to Monte Carlo particle transport codes: continuous distributions. This paper presents the alias method as originally derived and our extensions to simple continuous distributions represented by piecewise linear functions. We also present a method to interpolate accurately between distributions tabulated at points other than the point of interest. We present timing studies that demonstrate the method's increased efficiency over table lookup and show further speedup achieved through vectorization. 6 refs., 12 figs., 2 tabs
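    The discrete alias construction that the extension starts from (the Walker/Vose table) can be sketched as follows; the paper's extensions to histogram and piecewise-linear continuous distributions, and its vectorization, are not shown.

```python
import random

def build_alias_table(probs):
    """Vose's O(n) construction of the alias table for a discrete distribution."""
    n = len(probs)
    scaled = [p * n for p in probs]
    prob, alias = [0.0] * n, [0] * n
    small = [i for i, s in enumerate(scaled) if s < 1.0]
    large = [i for i, s in enumerate(scaled) if s >= 1.0]
    while small and large:
        s, l = small.pop(), large.pop()
        prob[s], alias[s] = scaled[s], l
        scaled[l] -= 1.0 - scaled[s]
        (small if scaled[l] < 1.0 else large).append(l)
    for i in large + small:          # leftovers are numerically ~1
        prob[i] = 1.0
    return prob, alias

def sample_alias(prob, alias, rng=random):
    """Draw one index: pick an equal-probability bin, then accept it or its alias."""
    i = rng.randrange(len(prob))
    return i if rng.random() < prob[i] else alias[i]

prob, alias = build_alias_table([0.1, 0.2, 0.3, 0.4])
counts = [0, 0, 0, 0]
for _ in range(100_000):
    counts[sample_alias(prob, alias)] += 1
print(counts)   # roughly proportional to [0.1, 0.2, 0.3, 0.4]
```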

  13. Evaluation of spray and point inoculation methods for the phenotyping of Puccinia striiformis on wheat

    DEFF Research Database (Denmark)

    Sørensen, Chris Khadgi; Thach, Tine; Hovmøller, Mogens Støvring

    2016-01-01

    flexible application procedure for spray inoculation and it gave highly reproducible results for virulence phenotyping. Six point inoculation methods were compared to find the most suitable for assessment of pathogen aggressiveness. The use of Novec 7100 and dry dilution with Lycopodium spores gave...... for the assessment of quantitative epidemiological parameters. New protocols for spray and point inoculation of P. striiformis on wheat are presented, along with the prospect for applying these in rust research and resistance breeding activities....

  14. Three-point method for measuring the geometric error components of linear and rotary axes based on sequential multilateration

    International Nuclear Information System (INIS)

    Zhang, Zhenjiu; Hu, Hong

    2013-01-01

    The linear and rotary axes are fundamental parts of multi-axis machine tools. The geometric error components of the axes must be measured for motion error compensation to improve the accuracy of the machine tools. In this paper, a simple method named the three point method is proposed to measure the geometric error of the linear and rotary axes of the machine tools using a laser tracker. A sequential multilateration method, where uncertainty is verified through simulation, is applied to measure the 3D coordinates. Three noncollinear points fixed on the stage of each axis are selected. The coordinates of these points are simultaneously measured using a laser tracker to obtain their volumetric errors by comparing these coordinates with ideal values. Numerous equations can be established using the geometric error models of each axis. The geometric error components can be obtained by solving these equations. The validity of the proposed method is verified through a series of experiments. The results indicate that the proposed method can measure the geometric error of the axes to compensate for the errors in multi-axis machine tools.

  15. A modified likelihood-method to search for point-sources in the diffuse astrophysical neutrino-flux in IceCube

    Energy Technology Data Exchange (ETDEWEB)

    Reimann, Rene; Haack, Christian; Leuermann, Martin; Raedel, Leif; Schoenen, Sebastian; Schimp, Michael; Wiebusch, Christopher [III. Physikalisches Institut, RWTH Aachen (Germany); Collaboration: IceCube-Collaboration

    2015-07-01

    IceCube, a cubic-kilometer-sized neutrino detector at the geographical South Pole, has recently measured a flux of high-energy astrophysical neutrinos. Although this flux has now been observed in multiple analyses, no point sources or source classes have been identified yet. Standard point-source searches test many points in the sky individually for a point source of astrophysical neutrinos and therefore produce many trials. Our approach is to additionally use the measured diffuse spectrum to constrain the number of possible point sources and their properties. Initial studies of the method's performance are shown.
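    For context, the standard unbinned point-source likelihood that such searches extend is usually written as below; the diffuse-spectrum constraint that defines the modified method of this record is not reproduced here.

```latex
% Standard unbinned point-source likelihood (shown for context only)
\mathcal{L}(n_s, \gamma) \;=\; \prod_{i=1}^{N}
\left[ \frac{n_s}{N}\, \mathcal{S}_i(\vec{x}_s, \gamma)
\;+\; \Big(1 - \frac{n_s}{N}\Big)\, \mathcal{B}_i \right]
```

    Here N is the total number of events, n_s the fitted number of signal events, γ the assumed source spectral index, and S_i and B_i the signal and background probability densities evaluated for event i.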

  16. The ontology of Indivisibles and the structure of continuity according to Walter Burley

    Directory of Open Access Journals (Sweden)

    Alice Lamy

    2011-12-01

    Full Text Available For Aristotle, with respect to its composition in parts, the continuum is divisible, but with respect to its limits (point, line, surface and depth), the continuum is indivisible. Walter Burley, like his contemporaries, commented on the problematic coexistence of divisibility and indivisibility in the structure of the continuum. Moreover, in the course of his famous polemic against his opponent William of Ockham about the ontology of the category of quantity, he admits an original structure of the continuum which seems to contain both intervals, or divisible parts, and points, or indivisibles.

  17. Variable Camber Continuous Aerodynamic Control Surfaces and Methods for Active Wing Shaping Control

    Science.gov (United States)

    Nguyen, Nhan T. (Inventor)

    2016-01-01

    An aerodynamic control apparatus for an air vehicle improves various aerodynamic performance metrics by employing multiple spanwise flap segments that jointly form a continuous or a piecewise continuous trailing edge to minimize drag induced by lift or vortices. At least one of the multiple spanwise flap segments includes a variable camber flap subsystem having multiple chordwise flap segments that may be independently actuated. Some embodiments also employ a continuous leading edge slat system that includes multiple spanwise slat segments, each of which has one or more chordwise slat segments. A method and an apparatus for implementing active control of a wing shape are also described and include the determination of a desired lift distribution from which the improved aerodynamic deflection of the wings is obtained. Flap deflections are determined and control signals are generated to actively control the wing shape to approximate the desired deflection.

  18. Theoretical Proof and Empirical Confirmation of a Continuous Labeling Method Using Naturally 13C-Depleted Carbon Dioxide

    Institute of Scientific and Technical Information of China (English)

    Weixin Cheng; Feike A. Dijkstra

    2007-01-01

    Continuous isotope labeling and tracing is often needed to study the transformation, movement, and allocation of carbon in plant-soil systems. However, existing labeling methods have numerous limitations. The present study introduces a new continuous labeling method using naturally 13C-depleted CO2. We theoretically proved that a stable level of 13C-CO2 abundance in a labeling chamber can be maintained by controlling the rate of CO2-free air injection and the rate of ambient airflow, coupled with automatic control of the CO2 concentration using a CO2 analyzer. The theoretical results were tested and confirmed in a 54-day experiment in a plant growth chamber. This new continuous labeling method avoids the use of radioactive 14C or expensive 13C-enriched CO2 required by existing methods and therefore eliminates issues of radiation safety or unaffordable isotope cost, as well as creating new opportunities for short- or long-term labeling experiments under a controlled environment.

  19. Development of the H-point standard additions method for coupled liquid-chromatography and UV-visible spectrophotometry

    Energy Technology Data Exchange (ETDEWEB)

    Campins-Falco, Pilar; Bosch-Reig, Francisco; Herraez-Hernandez, Rosa; Sevillano-Cabeza, Adela (Universidad de Valencia (Spain). Facultad de Quimica, Departamento de Quimica Analitica)

    1992-02-10

    This work establishes the fundamentals of the H-point standard additions method for liquid chromatography for the simultaneous analysis of binary mixtures with overlapped chromatographic peaks. The method was compared with the deconvolution method of peak suppression and the second derivative of elution profiles. Different mixtures of diuretics were satisfactorily resolved. (author). 21 refs.; 9 figs.; 2 tabs.

  20. Investigation of point triangulation methods for optimality and performance in Structure from Motion systems

    DEFF Research Database (Denmark)

    Structure from Motion (SFM) systems are composed of cameras and structure in the form of 3D points and other features. Most often, the structure components outnumber the cameras by a great margin. It is not uncommon to have a configuration with 3 cameras observing more than 500 3D points ... an overview of existing triangulation methods with emphasis on performance versus optimality, and will suggest a fast triangulation algorithm based on linear constraints. The structure and camera motion estimation in an SFM system is based on the minimization of some norm of the reprojection error between
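    Whether the "fast triangulation algorithm based on linear constraints" coincides with the classic homogeneous DLT is not stated in this record; the sketch below is the generic linear baseline (stack two linear constraints per camera and take the SVD null vector), included only to make the speed-versus-optimality trade-off concrete.

```python
import numpy as np

def triangulate_linear(projections, points_2d):
    """Linear (DLT) triangulation of one 3D point seen in several cameras.

    projections : list of 3x4 camera projection matrices P_k
    points_2d   : list of (u, v) observations of the same point, one per camera
    Returns the 3D point as a length-3 array (non-homogeneous).
    """
    rows = []
    for P, (u, v) in zip(projections, points_2d):
        rows.append(u * P[2] - P[0])     # u * (p3 . X) - (p1 . X) = 0
        rows.append(v * P[2] - P[1])     # v * (p3 . X) - (p2 . X) = 0
    A = np.vstack(rows)
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                           # null-space direction of A
    return X[:3] / X[3]

# tiny self-check: two axis-aligned cameras observing the point (1, 2, 5)
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])              # camera at the origin
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0], [0]])])  # translated 1 unit in x
X_true = np.array([1.0, 2.0, 5.0, 1.0])
obs = [tuple((P @ X_true)[:2] / (P @ X_true)[2]) for P in (P1, P2)]
print(triangulate_linear([P1, P2], obs))   # ~ [1. 2. 5.]
```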

  1. Intergenerational continuity and discontinuity in Mexican-origin youths' participation in organized activities: insights from mixed-methods.

    Science.gov (United States)

    Simpkins, Sandra D; Vest, Andrea E; Price, Chara D

    2011-12-01

    Motivation theories suggest that parents are an integral support for adolescents' participation in organized activities. Despite the importance of parents, the field knows very little about how parents' own experiences in activities influence the participation of their adolescent children. The goals of this study were to examine (a) the patterns of intergenerational continuity and discontinuity in parents' activity participation during adolescence and their adolescents' activity participation, and (b) the processes underlying each of these patterns within Mexican-origin families. Qualitative and quantitative data were collected through three in-depth interviews conducted with 31 seventh-grade adolescents and their parents at three time points over a year. The quantitative data suggested there was modest intergenerational continuity in activity participation. There were three distinct patterns: nine families were continuous participants, seven families were continuous nonparticipants, and 15 families were discontinuous, where the parent did not participate but the youth did participate in activities. The continuous participant families included families in which parents valued how organized activities contributed to their own lives and actively encouraged their adolescents' participation. The continuous nonparticipant families reported less knowledge and experience with activities along with numerous barriers to participation. There were three central reasons for the change in the discontinuous families. For a third of these families, parents felt strongly about providing a different childhood for their adolescents than what they experienced. The intergenerational discontinuity in participation was also likely to be sparked by someone else in the family or an external influence (i.e., friends, schools).

  2. [An automatic peak detection method for LIBS spectrum based on continuous wavelet transform].

    Science.gov (United States)

    Chen, Peng-Fei; Tian, Di; Qiao, Shu-Jun; Yang, Guang

    2014-07-01

    Spectrum peak detection in laser-induced breakdown spectroscopy (LIBS) is an essential step, but the presence of background and noise seriously disturbs the accuracy of the peak positions. The present paper proposes a method for automatic peak detection in LIBS spectra, intended to enhance the ability to search for overlapping peaks and the adaptivity of the search. We introduce the ridge peak detection method based on the continuous wavelet transform to LIBS, discuss the choice of the mother wavelet, and optimize the scale factor and the shift factor. The method also improves the ridge peak detection method with a ridge-correction step. The experimental results show that, compared with other peak detection methods (the direct comparison method, the derivative method and the ridge peak search method), our method has a significant advantage in the ability to distinguish overlapping peaks and in the precision of peak detection, and can be applied to data processing in LIBS.
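    A generic ridge-based CWT peak search of the kind being improved here is available in SciPy; the corrected-ridge step and the optimized mother wavelet, scale and shift factors of the paper are not reproduced in this sketch, and the synthetic spectrum is made up.

```python
import numpy as np
from scipy.signal import find_peaks_cwt

# Synthetic "spectrum": two overlapping Gaussian lines on a sloping background + noise
x = np.linspace(0, 100, 2000)
spectrum = (np.exp(-0.5 * ((x - 40) / 0.8) ** 2)
            + 0.6 * np.exp(-0.5 * ((x - 42) / 0.8) ** 2)
            + 0.002 * x
            + np.random.default_rng(0).normal(0, 0.01, x.size))

# Ridge-line peak detection over a range of wavelet widths (scales);
# the default Ricker ("Mexican hat") wavelet is used here as an assumption.
peak_idx = find_peaks_cwt(spectrum, widths=np.arange(5, 40))
print(x[peak_idx])   # expected to flag the lines near x = 40 and x = 42 (depends on widths/SNR settings)
```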

  3. An improved local radial point interpolation method for transient heat conduction analysis

    Science.gov (United States)

    Wang, Feng; Lin, Gao; Zheng, Bao-Jing; Hu, Zhi-Qiang

    2013-06-01

    The smoothing thin plate spline (STPS) interpolation using the penalty function method according to the optimization theory is presented to deal with transient heat conduction problems. The smooth conditions of the shape functions and derivatives can be satisfied so that the distortions hardly occur. Local weak forms are developed using the weighted residual method locally from the partial differential equations of the transient heat conduction. Here the Heaviside step function is used as the test function in each sub-domain to avoid the need for a domain integral. Essential boundary conditions can be implemented like the finite element method (FEM) as the shape functions possess the Kronecker delta property. The traditional two-point difference method is selected for the time discretization scheme. Three selected numerical examples are presented in this paper to demonstrate the availability and accuracy of the present approach comparing with the traditional thin plate spline (TPS) radial basis functions.
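    The STPS shape functions described above add a penalty-based smoothing beyond the plain thin plate spline. As background only, an ordinary thin-plate-spline interpolant over scattered 2D nodes can be obtained with SciPy as sketched below; the smoothing parameter hints at, but does not reproduce, the paper's penalty formulation.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Scattered "nodes" and a known field to interpolate (toy data)
rng = np.random.default_rng(1)
nodes = rng.uniform(0, 1, size=(50, 2))
values = np.sin(2 * np.pi * nodes[:, 0]) * np.cos(np.pi * nodes[:, 1])

# kernel="thin_plate_spline" gives the classic TPS basis; smoothing=0 interpolates
# exactly (a smoothed variant would use smoothing > 0).
tps = RBFInterpolator(nodes, values, kernel="thin_plate_spline", smoothing=0.0)

query = np.array([[0.25, 0.5], [0.7, 0.2]])
print(tps(query))
```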

  4. An improved local radial point interpolation method for transient heat conduction analysis

    International Nuclear Information System (INIS)

    Wang Feng; Lin Gao; Hu Zhi-Qiang; Zheng Bao-Jing

    2013-01-01

    The smoothing thin plate spline (STPS) interpolation using the penalty function method according to the optimization theory is presented to deal with transient heat conduction problems. The smooth conditions of the shape functions and derivatives can be satisfied so that the distortions hardly occur. Local weak forms are developed using the weighted residual method locally from the partial differential equations of the transient heat conduction. Here the Heaviside step function is used as the test function in each sub-domain to avoid the need for a domain integral. Essential boundary conditions can be implemented like the finite element method (FEM) as the shape functions possess the Kronecker delta property. The traditional two-point difference method is selected for the time discretization scheme. Three selected numerical examples are presented in this paper to demonstrate the availability and accuracy of the present approach comparing with the traditional thin plate spline (TPS) radial basis functions

  5. A Riccati-Based Interior Point Method for Efficient Model Predictive Control of SISO Systems

    DEFF Research Database (Denmark)

    Hagdrup, Morten; Johansson, Rolf; Bagterp Jørgensen, John

    2017-01-01

    model parts separate. The controller is designed based on the deterministic model, while the Kalman filter results from the stochastic part. The controller is implemented as a primal-dual interior point (IP) method using Riccati recursion and the computational savings possible for SISO systems...

  6. Damage identification method for continuous girder bridges based on spatially-distributed long-gauge strain sensing under moving loads

    Science.gov (United States)

    Wu, Bitao; Wu, Gang; Yang, Caiqian; He, Yi

    2018-05-01

    A novel damage identification method for concrete continuous girder bridges based on spatially-distributed long-gauge strain sensing is presented in this paper. First, the variation regularity of the long-gauge strain influence line of continuous girder bridges, which changes with the location of vehicles on the bridge, is studied. According to this variation regularity, a calculation method for the distribution regularity of the area of the long-gauge strain history is investigated. Second, a numerical simulation of damage identification based on the distribution regularity of the area of the long-gauge strain history is conducted, and the results indicate that this method is effective for identifying damage and is not affected by the speed, axle number and weight of vehicles. Finally, a real bridge test on a highway is conducted, and the experimental results also show that this method is very effective for identifying damage in continuous girder bridges, while the local element stiffness distribution regularity can be revealed at the same time. This identified information is useful for the maintenance of continuous girder bridges on highways.

  7. Precipitation of stoichiometric hydroxyapatite by a continuous method

    Energy Technology Data Exchange (ETDEWEB)

    Gomez-Morales, J.; Boix, T.; Fraile, J.; Rodriguez-Clemente, R. [Consejo Superior de Investigaciones Cientificas, Barcelona (Spain). Inst. de Ciencia de Materiales; Torrent-Burgues, J. [UPC, Barcelona (Spain). Dept. d' Enginyeria Quimica

    2001-07-01

    In this paper we present the precipitation of hydroxyapatite (HA), Ca₅(OH)(PO₄)₃, from highly concentrated CaCl₂ and K₂HPO₄ solutions, carried out by a continuous method in an MSMPR reactor. The procedure consists of adding the reagents in a Ca to P ratio equal to 1.67, maintaining a temperature of 85 °C and an inert N₂ atmosphere inside the reactor, and monitoring and adjusting the pH automatically by means of a pH-stat system (pH = 9.0 ± 0.1). Under these conditions, HA with a Ca to P ratio equal or close to the stoichiometric composition (Ca/P = 1.667), with a high yield (up to 99%) and a high production rate (up to 1.17 g/l·min), is obtained at steady state. The CSD, morphology and crystallinity of the precipitates and the impurities present fit the requirements for its biomedical applications. (orig.)

  8. Continuous symmetric reductions of the Adler-Bobenko-Suris equations

    International Nuclear Information System (INIS)

    Tsoubelis, D; Xenitidis, P

    2009-01-01

    Continuously symmetric solutions of the Adler-Bobenko-Suris class of discrete integrable equations are presented. Initially defined by their invariance under the action of both of the extended three-point generalized symmetries admitted by the corresponding equations, these solutions are shown to be determined by an integrable system of partial differential equations. The connection of this system to the Nijhoff-Hone-Joshi 'generating partial differential equations' is established and an auto-Bäcklund transformation and a Lax pair for it are constructed. Applied to the H1 and Q1 δ=0 members of the Adler-Bobenko-Suris family, the method of continuously symmetric reductions yields explicit solutions determined by the Painlevé transcendents

  9. Comparison of methods for calculating conditional expectations of sufficient statistics for continuous time Markov chains

    Directory of Open Access Journals (Sweden)

    Tataru Paula

    2011-12-01

    Full Text Available Abstract Background Continuous time Markov chains (CTMCs) are a widely used model for describing the evolution of DNA sequences on the nucleotide, amino acid or codon level. The sufficient statistics for CTMCs are the time spent in a state and the number of changes between any two states. In applications, past evolutionary events (exact times and types of changes) are inaccessible and the past must be inferred from DNA sequence data observed in the present. Results We describe and implement three algorithms for computing linear combinations of expected values of the sufficient statistics, conditioned on the end-points of the chain, and compare their performance with respect to accuracy and running time. The first algorithm is based on an eigenvalue decomposition of the rate matrix (EVD), the second on uniformization (UNI), and the third on integrals of matrix exponentials (EXPM). The implementation in R of the algorithms is available at http://www.birc.au.dk/~paula/. Conclusions We use two different models to analyze the accuracy and eight experiments to investigate the speed of the three algorithms. We find that they have similar accuracy and that EXPM is the slowest method. Furthermore, we find that UNI is usually faster than EVD.
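    The EXPM idea can be sketched directly with SciPy: the integral ∫₀ᵗ e^{Qs} E e^{Q(t−s)} ds needed for the conditional expectations is the upper-right block of the exponential of an auxiliary 2n×2n matrix (Van Loan's construction). The code below is our illustration of that idea for the expected time spent in a state, not the authors' R implementation; using the single-entry matrix E_ab (scaled by Q_ab) instead gives the expected number of a→b changes.

```python
import numpy as np
from scipy.linalg import expm

def expected_time_in_state(Q, t, a, i, j):
    """E[ time spent in state a on (0, t) | X(0)=i, X(t)=j ] for a CTMC with rate matrix Q.

    Uses the EXPM idea: the integral  int_0^t exp(Qs) E_aa exp(Q(t-s)) ds
    is the upper-right n x n block of expm([[Q, E_aa], [0, Q]] * t).
    """
    n = Q.shape[0]
    E = np.zeros_like(Q)
    E[a, a] = 1.0
    aux = np.zeros((2 * n, 2 * n))
    aux[:n, :n], aux[:n, n:], aux[n:, n:] = Q, E, Q
    integral = expm(aux * t)[:n, n:]
    return integral[i, j] / expm(Q * t)[i, j]

# Jukes-Cantor-like 4-state example (rows sum to zero)
Q = np.full((4, 4), 0.1)
np.fill_diagonal(Q, -0.3)
print(expected_time_in_state(Q, t=1.0, a=0, i=0, j=1))
```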

  10. Numerical methods for finding periodic points in discrete maps. High order islands chains and noble barriers in a toroidal magnetic configuration

    Energy Technology Data Exchange (ETDEWEB)

    Steinbrecher, G. [Association Euratom-Nasti Romania, Dept. of Theoretical Physics, Physics Faculty, University of Craiova (Romania); Reuss, J.D.; Misguich, J.H. [Association Euratom-CEA Cadarache, 13 - Saint-Paul-lez-Durance (France). Dept. de Recherches sur la Fusion Controlee

    2001-11-01

    We first recall the usual physical and mathematical concepts involved in the dynamics of Hamiltonian systems, namely in chaotic systems described by discrete 2D maps (representing the intersection points of toroidal magnetic lines in a poloidal plane in situations of incomplete magnetic chaos in Tokamaks). Finding the periodic points characterizing chains of magnetic islands is an essential step not only to determine the skeleton of the phase space picture, but also to determine the flux of magnetic lines across semi-permeable barriers like Cantori. We discuss here several computational methods used to determine periodic points in N dimensions, which amounts to solving a set of N coupled nonlinear equations: the Newton method, minimization techniques, the Laplace or steepest descent method, the conjugate direction method and the Fletcher-Reeves method. We have succeeded in improving this last method in an important way, without modifying its useful double-exponential convergence. This improved method has been tested and applied to finding periodic points of high order m in the 2D 'Tokamap' mapping, for values of m along rational chains of winding number n/m converging towards a noble value where a Cantorus exists. Such precise positions of periodic points have been used in the calculation of the flux across this Cantorus. (authors)
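    For the Newton variant among the methods listed, locating a period-m point of a 2D map f reduces to solving F(z) = f^m(z) − z = 0. The sketch below uses a finite-difference Jacobian and the Chirikov standard map as a stand-in, since the 'Tokamap' itself is not reproduced here; the improved Fletcher-Reeves scheme of the paper is not shown.

```python
import numpy as np

def standard_map(z, K=0.97):
    """Chirikov standard map, used here as a stand-in 2D area-preserving map."""
    theta, p = z
    p_new = p + K * np.sin(theta)
    return np.array([(theta + p_new) % (2 * np.pi), p_new])

def iterate(f, z, m):
    for _ in range(m):
        z = f(z)
    return z

def periodic_point_newton(f, m, z0, tol=1e-12, max_iter=50, h=1e-7):
    """Newton iteration on F(z) = f^m(z) - z with a finite-difference Jacobian."""
    z = np.asarray(z0, float)
    for _ in range(max_iter):
        F = iterate(f, z, m) - z
        if np.linalg.norm(F) < tol:
            return z
        J = np.empty((2, 2))
        for k in range(2):                       # forward-difference Jacobian of F
            dz = np.zeros(2)
            dz[k] = h
            J[:, k] = (iterate(f, z + dz, m) - (z + dz) - F) / h
        z = z - np.linalg.solve(J, F)
    raise RuntimeError("Newton iteration did not converge")

# the period-1 fixed point of the standard map at (pi, 0)
print(periodic_point_newton(standard_map, m=1, z0=[3.0, 0.1]))
```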

  11. Pretreatment methods to obtain pumpable high solid loading wood–water slurries for continuous hydrothermal liquefaction systems

    DEFF Research Database (Denmark)

    Dãrãbana, Iulia-Maria; Rosendahl, Lasse Aistrup; Pedersen, Thomas Helmer

    2015-01-01

    Feedstock pretreatment is a prerequisite step for continuous processing of lignocellulosic biomass through HTL, in order to facilitate the pumpability of biomass aqueous slurries. Until now, HTL feedstock pumpability could only be achieved at solid mass content below 15%. In this work, two...... pretreatment methods to obtain wood-based slurries with more than 20% solid mass content, for continuous processing in HTL systems, are proposed. The effect of biomass particle size and pretreatment method on the feedstock pumpability is analyzed. The experimental results show that pumpable wood-based slurries...

  12. Continuous Personal Improvement.

    Science.gov (United States)

    Emiliani, M. L.

    1998-01-01

    Suggests that continuous improvement tools used in the workplace can be applied to self-improvement. Explains the use of such techniques as one-piece flow, kanban, visual controls, and total productive maintenance. Points out misapplications of these tools and describes the use of fishbone diagrams to diagnose problems. (SK)

  13. High precision micro-scale Hall Effect characterization method using in-line micro four-point probes

    DEFF Research Database (Denmark)

    Petersen, Dirch Hjorth; Hansen, Ole; Lin, Rong

    2008-01-01

    Accurate characterization of ultra shallow junctions (USJ) is important in order to understand the principles of junction formation and to develop the appropriate implant and annealing technologies. We investigate the capabilities of a new micro-scale Hall effect measurement method where Hall...... effect is measured with collinear micro four-point probes (M4PP). We derive the sensitivity to electrode position errors and describe a position error suppression method to enable rapid reliable Hall effect measurements with just two measurement points. We show with both Monte Carlo simulations...... and experimental measurements, that the repeatability of a micro-scale Hall effect measurement is better than 1 %. We demonstrate the ability to spatially resolve Hall effect on micro-scale by characterization of an USJ with a single laser stripe anneal. The micro sheet resistance variations resulting from...

  14. Characterization of finite spaces having dispersion points

    International Nuclear Information System (INIS)

    Al-Bsoul, A. T

    1997-01-01

    In this paper we characterize the finite spaces having dispersion points. Also, we prove that the dispersion point of a finite space with a dispersion point is fixed under all non-constant continuous functions, which answers affirmatively, for finite spaces, the question raised by J. Cobb and W. Voxman in 1980. Some open problems are given. (author). 16 refs

  15. Multi-point probe for testing electrical properties and a method of producing a multi-point probe

    DEFF Research Database (Denmark)

    2011-01-01

    A multi-point probe for testing electrical properties of a number of specific locations of a test sample comprises a supporting body defining a first surface, a first multitude of conductive probe arms (101-101'''), each of the probe arms defining a proximal end and a distal end. The probe arms...... of contact with the supporting body, and a maximum thickness perpendicular to its perpendicular bisector and its line of contact with the supporting body. Each of the probe arms has a specific area or point of contact (111-111''') at its distal end for contacting a specific location among the number...... of specific locations of the test sample. At least one of the probe arms has an extension defining a pointing distal end providing its specific area or point of contact located offset relative to its perpendicular bisector....

  16. Dietitians' perceptions of the continuing professional development ...

    African Journals Online (AJOL)

    ADSA) would be upheld to portray ... accumulation and awarding of points or Continuing Education Units (CEUs) and a reduced annual point ... of earnings for private practising dietitians, while out of office. Suggestions included centrally located ...

  17. Fermionic quantum critical point of spinless fermions on a honeycomb lattice

    International Nuclear Information System (INIS)

    Wang, Lei; Corboz, Philippe; Troyer, Matthias

    2014-01-01

    Spinless fermions on a honeycomb lattice provide a minimal realization of lattice Dirac fermions. Repulsive interactions between nearest neighbors drive a quantum phase transition from a Dirac semimetal to a charge-density-wave state through a fermionic quantum critical point, where the coupling of the Ising order parameter to the Dirac fermions at low energy drastically affects the quantum critical behavior. Encouraged by a recent discovery (Huffman and Chandrasekharan 2014 Phys. Rev. B 89 111101) of the absence of the fermion sign problem in this model, we study the fermionic quantum critical point using the continuous-time quantum Monte Carlo method with a worm-sampling technique. We estimate the transition point V/t=1.356(1) with the critical exponents ν=0.80(3) and η=0.302(7). Compatible results for the transition point are also obtained with infinite projected entangled-pair states. (paper)

  18. Simulating Ice Shelf Response to Potential Triggers of Collapse Using the Material Point Method

    Science.gov (United States)

    Huth, A.; Smith, B. E.

    2017-12-01

    Weakening or collapse of an ice shelf can reduce the buttressing effect of the shelf on its upstream tributaries, resulting in sea level rise as the flux of grounded ice into the ocean increases. Here we aim to improve sea level rise projections by developing a prognostic 2D plan-view model that simulates the response of an ice sheet/ice shelf system to potential triggers of ice shelf weakening or collapse, such as calving events, thinning, and meltwater ponding. We present initial results for Larsen C. Changes in local ice shelf stresses can affect flow throughout the entire domain, so we place emphasis on calibrating our model to high-resolution data and precisely evolving fracture-weakening and ice geometry throughout the simulations. We primarily derive our initial ice geometry from CryoSat-2 data, and initialize the model by conducting a dual inversion for the ice viscosity parameter and basal friction coefficient that minimizes mismatch between modeled velocities and velocities derived from Landsat data. During simulations, we implement damage mechanics to represent fracture-weakening, and track ice thickness evolution, grounding line position, and ice front position. Since these processes are poorly represented by the Finite Element Method (FEM) due to mesh resolution issues and numerical diffusion, we instead implement the Material Point Method (MPM) for our simulations. In MPM, the ice domain is discretized into a finite set of Lagrangian material points that carry all variables and are tracked throughout the simulation. Each time step, information from the material points is projected to a Eulerian grid where the momentum balance equation (shallow shelf approximation) is solved similarly to FEM, but essentially treating the material points as integration points. The grid solution is then used to determine the new positions of the material points and update variables such as thickness and damage in a diffusion-free Lagrangian frame. The grid does not store
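
    The grid/particle cycle sketched above can be illustrated with a deliberately simplified example. The sketch below is a minimal 1D elastic bar stepped with linear hat shape functions, not the 2D shallow-shelf, damage-tracking model described here; all parameters, names and the constitutive update are illustrative assumptions.

```python
import numpy as np

# Minimal 1D MPM sketch (illustrative only): an elastic bar with fixed ends.
# Material points carry mass, velocity, volume and stress; each step they are
# projected to a background grid, the momentum balance is solved on the grid,
# and the grid solution is mapped back to update the points in a Lagrangian frame.
L, n_cells, ppc = 1.0, 20, 2
dx = L / n_cells
nodes = np.linspace(0.0, L, n_cells + 1)

xp = (np.arange(n_cells * ppc) + 0.5) * dx / ppc     # material point positions
mp = np.full_like(xp, dx / ppc)                      # masses (unit density)
Vp = np.full_like(xp, dx / ppc)                      # volumes
vp = 0.1 * np.sin(np.pi * xp / L)                    # initial velocity field
sig = np.zeros_like(xp)                              # axial stress
E, dt = 100.0, 1e-4                                  # Young's modulus, time step

def hat(x):
    """Linear shape functions: adjacent node indices, weights and gradients."""
    i = min(int(x / dx), n_cells - 1)
    w = (x - nodes[i]) / dx
    return (i, i + 1), (1.0 - w, w), (-1.0 / dx, 1.0 / dx)

for _ in range(1000):
    m_g = np.zeros_like(nodes)
    p_g = np.zeros_like(nodes)
    f_g = np.zeros_like(nodes)

    # 1) project point mass, momentum and internal force to the grid
    for p in range(xp.size):
        (i, j), (wi, wj), (gi, gj) = hat(xp[p])
        m_g[i] += wi * mp[p];          m_g[j] += wj * mp[p]
        p_g[i] += wi * mp[p] * vp[p];  p_g[j] += wj * mp[p] * vp[p]
        f_g[i] -= Vp[p] * sig[p] * gi; f_g[j] -= Vp[p] * sig[p] * gj

    # 2) explicit momentum update on the grid, with fixed-end boundary conditions
    p_g += dt * f_g
    p_g[0] = p_g[-1] = 0.0
    f_g[0] = f_g[-1] = 0.0

    # 3) map the grid solution back: update point velocity, position and stress
    a_g = np.divide(f_g, m_g, out=np.zeros_like(f_g), where=m_g > 0)
    v_g = np.divide(p_g, m_g, out=np.zeros_like(p_g), where=m_g > 0)
    for p in range(xp.size):
        (i, j), (wi, wj), (gi, gj) = hat(xp[p])
        vp[p] += dt * (wi * a_g[i] + wj * a_g[j])
        xp[p] += dt * (wi * v_g[i] + wj * v_g[j])
        sig[p] += E * dt * (gi * v_g[i] + gj * v_g[j])   # linear elastic update
```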

  19. Slicing Method for curved façade and window extraction from point clouds

    Science.gov (United States)

    Iman Zolanvari, S. M.; Laefer, Debra F.

    2016-09-01

    Laser scanning technology is a fast and reliable method to survey structures. However, the automatic conversion of such data into solid models for computation remains a major challenge, especially where non-rectilinear features are present. Since openings and the overall dimensions of the buildings are the most critical elements in computational models for structural analysis, this article introduces the Slicing Method as a new, computationally efficient method for extracting overall façade and window boundary points for reconstructing a façade into a geometry compatible with computational modelling. After finding a principal plane, the technique slices a façade into limited portions, with each slice representing a unique, imaginary section passing through a building. This is done along a façade's principal axes to segregate window and door openings from structural portions of the load-bearing masonry walls. The method detects each opening area's boundaries, as well as the overall boundary of the façade, in part, by using a one-dimensional projection to accelerate processing. Slices were optimised as 14.3 slices per vertical metre of building and 25 slices per horizontal metre of building, irrespective of building configuration or complexity. The proposed procedure was validated by its application to three highly decorative, historic brick buildings. Accuracy in excess of 93% was achieved with no manual intervention on highly complex buildings and nearly 100% on simple ones. Furthermore, computational times were less than 3 sec for data sets up to 2.6 million points, while similar existing approaches required more than 16 hr for such datasets.
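
    A toy sketch of the slicing-plus-projection idea (not the authors' implementation; the function name, slice density and gap threshold are assumptions) might look like this, assuming the façade points have already been projected onto their principal plane:

```python
import numpy as np

def opening_spans(facade_xy, slices_per_metre=25, min_gap=0.3):
    """Toy sketch of the slicing idea: cut a façade (points already projected
    onto their principal plane, columns = [horizontal x, vertical y]) into thin
    horizontal slices, project each slice onto the x axis, and report empty
    spans wider than `min_gap` as candidate window/door openings."""
    x, y = facade_xy[:, 0], facade_xy[:, 1]
    openings = []
    n_slices = int(np.ceil((y.max() - y.min()) * slices_per_metre))
    edges = np.linspace(y.min(), y.max(), n_slices + 1)
    for lo, hi in zip(edges[:-1], edges[1:]):
        xs = np.sort(x[(y >= lo) & (y < hi)])
        if xs.size < 2:
            continue
        gaps = np.diff(xs)                        # 1D projection of the slice
        for k in np.flatnonzero(gaps > min_gap):  # empty spans = openings
            openings.append(((lo + hi) / 2, xs[k], xs[k + 1]))
    return openings   # list of (slice height, opening start x, opening end x)
```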

  20. Segmenting Continuous Motions with Hidden Semi-markov Models and Gaussian Processes

    Directory of Open Access Journals (Sweden)

    Tomoaki Nakamura

    2017-12-01

    Full Text Available Humans divide perceived continuous information into segments to facilitate recognition. For example, humans can segment speech waves into recognizable morphemes. Analogously, continuous motions are segmented into recognizable unit actions. People can divide continuous information into segments without using explicit segment points. This capacity for unsupervised segmentation is also useful for robots, because it enables them to flexibly learn languages, gestures, and actions. In this paper, we propose a Gaussian process-hidden semi-Markov model (GP-HSMM that can divide continuous time series data into segments in an unsupervised manner. Our proposed method consists of a generative model based on the hidden semi-Markov model (HSMM, the emission distributions of which are Gaussian processes (GPs. Continuous time series data is generated by connecting segments generated by the GP. Segmentation can be achieved by using forward filtering-backward sampling to estimate the model's parameters, including the lengths and classes of the segments. In an experiment using the CMU motion capture dataset, we tested GP-HSMM with motion capture data containing simple exercise motions; the results of this experiment showed that the proposed GP-HSMM was comparable with other methods. We also conducted an experiment using karate motion capture data, which is more complex than exercise motion capture data; in this experiment, the segmentation accuracy of GP-HSMM was 0.92, which outperformed other methods.

  1. Motion estimation using point cluster method and Kalman filter.

    Science.gov (United States)

    Senesh, M; Wolf, A

    2009-05-01

    The most frequently used method in three-dimensional human gait analysis involves placing markers on the skin of the analyzed segment. This introduces a significant artifact, which strongly influences the bone position and orientation and joint kinematic estimates. In this study, we tested and evaluated the effect of adding a Kalman filter procedure to the previously reported point cluster technique (PCT) in the estimation of a rigid body motion. We demonstrated the procedures by motion analysis of a compound planar pendulum from indirect opto-electronic measurements of markers attached to an elastic appendage that is restrained to slide along the rigid body long axis. The elastic frequency is close to the pendulum frequency, as in the biomechanical problem, where the soft tissue frequency content is similar to the actual movement of the bones. Comparison of the real pendulum angle to that obtained by several estimation procedures--PCT, Kalman filter followed by PCT, and low pass filter followed by PCT--enables evaluation of the accuracy of the procedures. When comparing the maximal amplitude, no effect was noted by adding the Kalman filter; however, a closer look at the signal revealed that the estimated angle based only on the PCT method was very noisy with fluctuations, while the estimated angle based on the Kalman filter followed by the PCT was a smooth signal. It was also noted that the instantaneous frequencies obtained from the estimated angle based on the PCT method are more dispersed than those obtained from the estimated angle based on the Kalman filter followed by the PCT method. Addition of a Kalman filter to the PCT method in the estimation procedure of rigid body motion results in a smoother signal that better represents the real motion, with less signal distortion than when using a digital low pass filter. Furthermore, it can be concluded that adding a Kalman filter to the PCT procedure substantially reduces the dispersion of the maximal and minimal...
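
    As a hedged illustration of the kind of filtering added ahead of the PCT, the sketch below applies a generic constant-velocity Kalman filter to a noisy angle signal; the state model and noise levels are assumptions, not the paper's actual filter design.

```python
import numpy as np

def kalman_smooth_angle(theta_meas, dt, q=1e-3, r=1e-2):
    """Generic constant-velocity Kalman filter applied to a noisy angle signal
    (state = [angle, angular velocity]); the process noise q and measurement
    noise r are illustrative values, not those of the paper."""
    theta_meas = np.asarray(theta_meas, dtype=float)
    F = np.array([[1.0, dt], [0.0, 1.0]])        # state transition
    H = np.array([[1.0, 0.0]])                   # only the angle is measured
    Q = q * np.array([[dt**3 / 3, dt**2 / 2], [dt**2 / 2, dt]])
    R = np.array([[r]])
    x = np.array([theta_meas[0], 0.0])
    P = np.eye(2)
    out = np.empty_like(theta_meas)
    for k, z in enumerate(theta_meas):
        x = F @ x                                # predict
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + R                      # update
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (np.array([z]) - H @ x)
        P = (np.eye(2) - K @ H) @ P
        out[k] = x[0]                            # filtered angle estimate
    return out
```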

  2. Method of Check of Statistical Hypotheses for Revealing of “Fraud” Point of Sale

    Directory of Open Access Journals (Sweden)

    T. M. Bolotskaya

    2011-06-01

    Full Text Available The application of a statistical hypothesis testing method for revealing "fraud" points of sale that work with purchasing cards and are suspected of carrying out unauthorized operations is analyzed. On the basis of the results obtained, an algorithm is developed that provides an assessment of terminal operation in off-line mode.

  3. A sparsity-regularized Born iterative method for reconstruction of two-dimensional piecewise continuous inhomogeneous domains

    KAUST Repository

    Sandhu, Ali Imran; Desmal, Abdulla; Bagci, Hakan

    2016-01-01

    A sparsity-regularized Born iterative method (BIM) is proposed for efficiently reconstructing two-dimensional piecewise-continuous inhomogeneous dielectric profiles. Such profiles are typically not spatially sparse, which reduces the efficiency of the sparsity-promoting regularization. To overcome this problem, scattered fields are represented in terms of the spatial derivative of the dielectric profile and reconstruction is carried out over samples of the dielectric profile's derivative. Then, like the conventional BIM, the nonlinear problem is iteratively converted into a sequence of linear problems (in derivative samples) and sparsity constraint is enforced on each linear problem using the thresholded Landweber iterations. Numerical results, which demonstrate the efficiency and accuracy of the proposed method in reconstructing piecewise-continuous dielectric profiles, are presented.
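
    The sparsity-enforcing linear solve used inside each BIM iteration can be sketched as iterative soft-thresholding (thresholded Landweber); here A stands for the linearised scattering operator of the current iteration, y for the scattered-field data and d for the sparse derivative samples, with the fixed step size and regularisation weight as illustrative assumptions.

```python
import numpy as np

def soft_threshold(x, tau):
    """Elementwise soft-thresholding (proximal operator of the l1 norm)."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def thresholded_landweber(A, y, reg=0.05, n_iter=300):
    """Sketch of the sparsity-constrained linear step: solve the linearised
    problem A d ~ y for the (sparse) derivative samples d by Landweber updates
    followed by soft-thresholding. A, y, reg and n_iter are placeholders for
    whatever the actual BIM iteration supplies."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2        # step size ensuring convergence
    d = np.zeros(A.shape[1])
    for _ in range(n_iter):
        d = soft_threshold(d + step * A.T @ (y - A @ d), step * reg)
    return d
```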

  4. THREE-POINT BACKWARD FINITE DIFFERENCE METHOD FOR SOLVING A SYSTEM OF MIXED HYPERBOLIC-PARABOLIC PARTIAL DIFFERENTIAL EQUATIONS. (R825549C019)

    Science.gov (United States)

    A three-point backward finite-difference method has been derived for a system of mixed hyperbolic-parabolic (convection-diffusion) partial differential equations (mixed PDEs). The method resorts to the three-point backward differenci...
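
    For reference, the standard second-order three-point backward approximation of a first derivative (a likely building block of the scheme named above, although the cited report may use a different variant) is

$$
\left.\frac{\partial u}{\partial x}\right|_{x_i} \approx \frac{3u_i - 4u_{i-1} + u_{i-2}}{2\,\Delta x},
\qquad \text{with truncation error } O(\Delta x^2).
$$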

  5. Determinants of College Grade Point Averages

    Science.gov (United States)

    Bailey, Paul Dean

    2012-01-01

    Chapter 2: The Role of Class Difficulty in College Grade Point Averages. Grade Point Averages (GPAs) are widely used as a measure of college students' ability. Low GPAs can remove a student from eligibility for scholarships, and even from continued enrollment at a university. However, GPAs are determined not only by student ability but also by the…

  6. Applications of a fast, continuous wavelet transform

    Energy Technology Data Exchange (ETDEWEB)

    Dress, W.B.

    1997-02-01

    A fast, continuous wavelet transform, based on Shannon's sampling theorem in frequency space, has been developed for use with continuous mother wavelets and sampled data sets. The method differs from the usual discrete-wavelet approach and the continuous-wavelet transform in that, here, the wavelet is sampled in the frequency domain. Since Shannon's sampling theorem lets us view the Fourier transform of the data set as a continuous function in frequency space, the continuous nature of the functions is kept up to the point of sampling the scale-translation lattice, so the scale-translation grid used to represent the wavelet transform is independent of the time-domain sampling of the signal under analysis. Computational cost and nonorthogonality aside, the inherent flexibility and shift invariance of the frequency-space wavelets have advantages. The method has been applied to forensic audio reconstruction, speaker recognition/identification, and the detection of micromotions of heavy vehicles associated with ballistocardiac impulses originating from occupants' heart beats. Audio reconstruction is aided by selection of desired regions in the 2-D representation of the magnitude of the transformed signal. The inverse transform is applied to ridges and selected regions to reconstruct areas of interest, unencumbered by noise interference lying outside these regions. To separate micromotions imparted to a mass-spring system (e.g., a vehicle) by an occupant's beating heart from gross mechanical motions due to wind and traffic vibrations, a continuous frequency-space wavelet, modeled on the frequency content of a canonical ballistocardiogram, was used to analyze time series taken from geophone measurements of vehicle micromotions. By using a family of mother wavelets, such as a set of Gaussian derivatives of various orders, features such as the glottal closing rate and word and phrase segmentation may be extracted from voice data.
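
    A generic frequency-sampled CWT in this spirit can be sketched as follows; the Morlet spectrum and the sqrt(a) normalisation are illustrative assumptions rather than the authors' exact formulation.

```python
import numpy as np

def fft_cwt(signal, scales, fs, mother_hat):
    """Generic sketch of a frequency-sampled continuous wavelet transform:
    the mother wavelet is specified directly by its Fourier spectrum
    `mother_hat(omega)` and sampled at scaled frequencies, so the
    scale-translation grid is decoupled from the time-domain sampling."""
    n = signal.size
    sig_hat = np.fft.fft(signal)
    omega = 2.0 * np.pi * np.fft.fftfreq(n, d=1.0 / fs)   # angular frequencies
    out = np.empty((len(scales), n), dtype=complex)
    for k, a in enumerate(scales):
        # correlate with the scaled wavelet entirely in the frequency domain
        out[k] = np.fft.ifft(sig_hat * np.conj(mother_hat(a * omega)) * np.sqrt(a))
    return out

# illustrative analytic Morlet spectrum (centre frequency 6 rad/s at scale 1)
morlet_hat = lambda w: np.pi ** -0.25 * np.exp(-0.5 * (w - 6.0) ** 2) * (w > 0)
```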

  7. Quantification of regional cerebral blood flow (rCBF) measurement with one point sampling by sup 123 I-IMP SPECT

    Energy Technology Data Exchange (ETDEWEB)

    Munaka, Masahiro [University of Occupational and Enviromental Health, Kitakyushu (Japan); Iida, Hidehiro; Murakami, Matsutaro

    1992-02-01

    A handy method of quantifying regional cerebral blood flow (rCBF) measurement by 123I-IMP SPECT was designed. A standard input function was constructed, and the sampling time used to calibrate this standard input function by one-point sampling was optimized. An average standard input function was obtained from continuous arterial sampling of 12 healthy adults. The best sampling time was taken as the one minimizing the difference between the integral of the standard input function calibrated by one-point sampling and that of the input function from continuous arterial sampling. This time was 8 minutes after an intravenous injection of 123I-IMP, and the error was estimated to be ±4.1%. The rCBF values obtained by this method were evaluated by comparing them with the rCBF values based on the input function from continuous arterial sampling in 2 healthy adults and a patient with cerebral infarction. A significant correlation (r=0.764, p<0.001) was obtained between the two. (author).

  8. A Frequency Domain Design Method For Sampled-Data Compensators

    DEFF Research Database (Denmark)

    Niemann, Hans Henrik; Jannerup, Ole Erik

    1990-01-01

    A new approach to the design of a sampled-data compensator in the frequency domain is investigated. The starting point is a continuous-time compensator for the continuous-time system which satisfies specific design criteria. The new design method will graphically show how the discrete...

  9. Design of Neutral-Point Voltage Controller of a Three-level NPC Inverter with Small DC-Link Capacitors

    DEFF Research Database (Denmark)

    Maheshwari, Ram Krishan; Munk-Nielsen, Stig; Busquets-Monge, S.

    2013-01-01

    A Neutral-Point-Clamped (NPC) three-level inverter with small dc-link capacitors is presented in this paper. The inverter requires zero average neutral-point current for stable neutral-point voltage. The small dc-link capacitors may not maintain capacitor voltage balance, even with zero neutral ... -point voltage control on the basis of the continuous model. The design method for optimum performance is discussed. The implementation of the proposed modulation strategy and the controller is very simple. The controller is implemented in a 7.5 kW induction machine based drive with only 14 μF dc-link capacitors...

  10. Improved Full-Newton Step O(nL) Infeasible Interior-Point Method for Linear Optimization

    OpenAIRE

    Gu, G.; Mansouri, H.; Zangiabadi, M.; Bai, Y.Q.; Roos, C.

    2009-01-01

    We present several improvements of the full-Newton step infeasible interior-point method for linear optimization introduced by Roos (SIAM J. Optim. 16(4):1110–1136, 2006). Each main step of the method consists of a feasibility step and several centering steps. We use a more natural feasibility step, which targets the μ+-center of the next pair of perturbed problems. As for the centering steps, we apply a sharper quadratic convergence result, which leads to a slightly wider neighborhood for th...

  11. Automatic markerless registration of point clouds with semantic-keypoint-based 4-points congruent sets

    Science.gov (United States)

    Ge, Xuming

    2017-08-01

    The coarse registration of point clouds from urban building scenes has become a key topic in applications of terrestrial laser scanning technology. Sampling-based algorithms in the random sample consensus (RANSAC) model have emerged as mainstream solutions to address coarse registration problems. In this paper, we propose a novel combined solution to automatically align two markerless point clouds from building scenes. Firstly, the method segments non-ground points from ground points. Secondly, the proposed method detects feature points from each cross section and then obtains semantic keypoints by connecting feature points with specific rules. Finally, the detected semantic keypoints from two point clouds act as inputs to a modified 4PCS algorithm. Examples are presented and the results compared with those of K-4PCS to demonstrate the main contributions of the proposed method, which are the extension of the original 4PCS to handle heavy datasets and the use of semantic keypoints to improve K-4PCS in relation to registration accuracy and computational efficiency.

  12. For Time-Continuous Optimisation

    DEFF Research Database (Denmark)

    Heinrich, Mary Katherine; Ayres, Phil

    2016-01-01

    Strategies for optimisation in design normatively assume an artefact end-point, disallowing continuous architecture that engages living systems, dynamic behaviour, and complex systems. In our Flora Robotica investigations of symbiotic plant-robot bio-hybrids, we require computational tools...

  13. Optimization of Control Points Number at Coordinate Measurements based on the Monte-Carlo Method

    Science.gov (United States)

    Korolev, A. A.; Kochetkov, A. V.; Zakharov, O. V.

    2018-01-01

    Improving the quality of products causes an increase in the requirements for the accuracy of the dimensions and shape of the surfaces of the workpieces. This, in turn, raises the requirements for accuracy and productivity of measuring the workpieces. The use of coordinate measuring machines is currently the most effective measuring tool for solving similar problems. The article proposes a method for optimizing the number of control points using Monte Carlo simulation. Based on the measurement of a small sample from batches of workpieces, statistical modeling is performed, which allows one to obtain interval estimates of the measurement error. This approach is demonstrated by examples of applications for flatness, cylindricity and sphericity. Four options of uniform and uneven arrangement of control points are considered and their comparison is given. It is revealed that when the number of control points decreases, the arithmetic mean decreases, the standard deviation of the measurement error increases and the probability of the measurement α-error increases. In general, it has been established that it is possible to reduce the number of control points several times over while maintaining the required measurement accuracy.
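
    A minimal sketch of the Monte Carlo idea, under the simplifying assumption that surface deviations are roughly normal and using a flatness (max minus min) criterion, could look like this; the function name and parameters are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def flatness_error_interval(sample_dev, n_points, n_trials=10_000, alpha=0.05):
    """Monte Carlo sketch: from a small sample of measured surface deviations,
    simulate repeated inspections with `n_points` control points and return the
    mean flatness error (max - min deviation) with an interval estimate."""
    mu, sigma = np.mean(sample_dev), np.std(sample_dev, ddof=1)
    sims = rng.normal(mu, sigma, size=(n_trials, n_points))
    flatness = sims.max(axis=1) - sims.min(axis=1)
    lo, hi = np.quantile(flatness, [alpha / 2, 1.0 - alpha / 2])
    return flatness.mean(), (lo, hi)

# fewer control points -> smaller mean range but wider relative uncertainty
measured = rng.normal(0.0, 5e-3, size=30)          # hypothetical deviations, mm
print(flatness_error_interval(measured, n_points=8))
print(flatness_error_interval(measured, n_points=64))
```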

  14. Calculation and decomposition of spot price using interior point nonlinear optimisation methods

    International Nuclear Information System (INIS)

    Xie, K.; Song, Y.H.

    2004-01-01

    Optimal pricing for real and reactive power is a very important issue in a deregulated environment. This paper summarises the optimal pricing problem as an extended optimal power flow problem. Then, spot prices are decomposed into different components reflecting various ancillary services. The derivation of the proposed decomposition model is described in detail. The Primal-Dual Interior Point method is applied to avoid 'go' 'no go' gauge. In addition, the proposed approach can be extended to cater for other types of ancillary services. (author)

  15. TREEDE, Point Fluxes and Currents Based on Track Rotation Estimator by Monte-Carlo Method

    International Nuclear Information System (INIS)

    Dubi, A.

    1985-01-01

    1 - Description of problem or function: TREEDE is a Monte Carlo transport code based on the Track Rotation estimator, used, in general, to calculate fluxes and currents at a point. This code served as a test code in the development of the concept of the Track Rotation estimator, and therefore analogue Monte Carlo is used (i.e. no importance biasing). 2 - Method of solution: The basic idea is to follow the particle's track in the medium and then to rotate it such that it passes through the detector point. That is, rotational symmetry considerations (even in non-spherically symmetric configurations) are applied to every history, so that a very large fraction of the track histories can be rotated and made to pass through the point of interest; in this manner the 1/r^2 singularity in the un-collided flux estimator (next event estimator) is avoided. TREEDE, being a test code, is used to estimate leakage or in-medium fluxes at given points in a 3-dimensional finite box, where the source is an isotropic point source at the centre of the z = 0 surface. However, many of the constraints of geometry and source can be easily removed. The medium is assumed homogeneous with isotropic scattering, and one energy group only is considered. 3 - Restrictions on the complexity of the problem: One energy group, a homogeneous medium, isotropic scattering

  16. Gran method for end point anticipation in monosegmented flow titration

    Directory of Open Access Journals (Sweden)

    Aquino Emerson V

    2004-01-01

    Full Text Available An automatic potentiometric monosegmented flow titration procedure based on the Gran linearisation approach has been developed. The controlling program can estimate the end point of the titration after the addition of three or four aliquots of titrant. Alternatively, the end point can be determined by the second derivative procedure. In this case, additional volumes of titrant are added until the vicinity of the end point and three points before and after the stoichiometric point are used for end point calculation. The performance of the system was assessed by the determination of chloride in isotonic beverages and parenteral solutions. The system employs a tubular Ag2S/AgCl indicator electrode. A typical titration, performed according to the IUPAC definition, requires only 60 mL of sample and about the same volume of titrant (AgNO3 solution). A complete titration can be carried out in 1 - 5 min. The accuracy and precision (relative standard deviation of ten replicates) are 2% and 1% for the Gran and 1% and 0.5% for the Gran/derivative end point determination procedures, respectively. The proposed system reduces the time to perform a titration, ensuring low sample and reagent consumption, and full automatic sampling and titrant addition in a calibration-free titration protocol.
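
    The Gran linearisation itself is easy to sketch: G = (V0 + V)·10^(±E/s) is linear in the titrant volume V before the equivalence point, so a straight-line fit through a few early points gives the end point as the x-intercept. The sign of the exponent depends on the electrode convention, and the readings below are hypothetical values generated from a Nernstian model with an end point at 5.0 mL.

```python
import numpy as np

def gran_end_point(v_titrant, emf_mV, v_sample, slope_mV=59.2, sign=-1):
    """Fit G = (V0 + V) * 10**(sign * E / s), which is linear in V before the
    equivalence point, and return the x-intercept as the anticipated end point."""
    G = (v_sample + v_titrant) * 10.0 ** (sign * emf_mV / slope_mV)
    m, b = np.polyfit(v_titrant, G, 1)        # least-squares line through G(V)
    return -b / m                             # volume where G extrapolates to 0

# hypothetical Nernstian readings consistent with an end point at 5.0 mL
V0, Veq, s = 60.0, 5.0, 59.2
v = np.array([0.5, 1.0, 1.5, 2.0])                       # titrant added
e = 150.0 - s * np.log10((Veq - v) / (V0 + v))           # electrode potential, mV
print(gran_end_point(v, e, v_sample=V0))                 # ~5.0
```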

  17. A Field Evaluation of the Time-of-Detection Method to Estimate Population Size and Density for Aural Avian Point Counts

    Directory of Open Access Journals (Sweden)

    Mathew W. Alldredge

    2007-12-01

    Full Text Available The time-of-detection method for aural avian point counts is a new method of estimating abundance, allowing for uncertain probability of detection. The method has been specifically designed to allow for variation in singing rates of birds. It involves dividing the time interval of the point count into several subintervals and recording the detection history of the subintervals when each bird sings. The method can be viewed as generating data equivalent to closed capture-recapture information. The method is different from the distance and multiple-observer methods in that it is not required that all the birds sing during the point count. As this method is new and there is some concern as to how well individual birds can be followed, we carried out a field test of the method using simulated known populations of singing birds, using a laptop computer to send signals to audio stations distributed around a point. The system mimics actual aural avian point counts, but also allows us to know the size and spatial distribution of the populations we are sampling. Fifty 8-min point counts (broken into four 2-min intervals) using eight species of birds were simulated. Singing rate of an individual bird of a species was simulated following a Markovian process (singing bouts followed by periods of silence), which we felt was more realistic than a truly random process. The main emphasis of our paper is to compare results from species singing at high and low homogeneous rates per interval with those singing at high and low heterogeneous rates. Population size was estimated accurately for the species simulated, with a high homogeneous probability of singing. Populations of simulated species with lower but homogeneous singing probabilities were somewhat underestimated. Populations of species simulated with heterogeneous singing probabilities were substantially underestimated. Underestimation was caused by both the very low detection probabilities of all distant...

  18. The Multiscale Material Point Method for Simulating Transient Responses

    Science.gov (United States)

    Chen, Zhen; Su, Yu-Chen; Zhang, Hetao; Jiang, Shan; Sewell, Thomas

    2015-06-01

    To effectively simulate multiscale transient responses such as impact and penetration without invoking master/slave treatment, the multiscale material point method (Multi-MPM) is being developed in which molecular dynamics at nanoscale and dissipative particle dynamics at mesoscale might be concurrently handled within the framework of the original MPM at microscale (continuum level). The proposed numerical scheme for concurrently linking different scales is described in this paper with simple examples for demonstration. It is shown from the preliminary study that the mapping and re-mapping procedure used in the original MPM could coarse-grain the information at fine scale and that the proposed interfacial scheme could provide a smooth link between different scales. Since the original MPM is an extension from computational fluid dynamics to solid dynamics, the proposed Multi-MPM might also become robust for dealing with multiphase interactions involving failure evolution. This work is supported in part by DTRA and NSFC.

  19. Hardware-accelerated Point Generation and Rendering of Point-based Impostors

    DEFF Research Database (Denmark)

    Bærentzen, Jakob Andreas

    2005-01-01

    This paper presents a novel scheme for generating points from triangle models. The method is fast and lends itself well to implementation using graphics hardware. The triangle to point conversion is done by rendering the models, and the rendering may be performed procedurally or by a black box API. I describe the technique in detail and discuss how the generated point sets can easily be used as impostors for the original triangle models used to create the points. Since the points reside solely in GPU memory, these impostors are fairly efficient. Source code is available online...

  20. A novel method of measuring the concentration of anaesthetic vapours using a dew-point hygrometer.

    Science.gov (United States)

    Wilkes, A R; Mapleson, W W; Mecklenburgh, J S

    1994-02-01

    The Antoine equation relates the saturated vapour pressure of a volatile substance, such as an anaesthetic agent, to the temperature. The measurement of the 'dew-point' of a dry gas mixture containing a volatile anaesthetic agent by a dew-point hygrometer permits the determination of the partial pressure of the anaesthetic agent. The accuracy of this technique is limited only by the accuracy of the Antoine coefficients and of the temperature measurement. Comparing measurements by the dew-point method with measurements by refractometry showed systematic discrepancies up to 0.2% and random discrepancies with SDs up to 0.07% concentration in the 1% to 5% range for three volatile anaesthetics. The systematic discrepancies may be due to errors in available data for the vapour pressures and/or the refractive indices of the anaesthetics.
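
    A minimal sketch of the calculation implied by the Antoine equation is given below; the coefficients A, B and C must be taken from published vapour-pressure data for the specific anaesthetic agent (none are hard-coded here), and the helper names are assumptions.

```python
def saturated_vapour_pressure(T_C, A, B, C):
    """Antoine equation: log10(p_sat) = A - B / (C + T). The pressure unit
    follows the unit in which the coefficients were fitted."""
    return 10.0 ** (A - B / (C + T_C))

def agent_concentration_percent(T_dew_C, A, B, C, total_pressure):
    """At the dew point the agent's partial pressure equals its saturated
    vapour pressure, so the measured dew-point temperature gives the partial
    pressure directly; Dalton's law then gives the volume concentration."""
    return 100.0 * saturated_vapour_pressure(T_dew_C, A, B, C) / total_pressure
```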

  1. METHOD OF GREEN FUNCTIONS IN MATHEMATICAL MODELLING FOR TWO-POINT BOUNDARY-VALUE PROBLEMS

    Directory of Open Access Journals (Sweden)

    E. V. Dikareva

    2015-01-01

    Full Text Available Summary. In many applied problems of control, optimization, system theory, theoretical and structural mechanics, problems with string and rod structures, oscillation theory, the theory of elasticity and plasticity, and mechanical problems connected with fracture dynamics and shock waves, the main instrument of study is the theory of high-order ordinary differential equations. This methodology is also applied to mathematical models in graph theory with different partitionings based on differential equations. Such equations are used not only for the theoretical foundation of mathematical models but also for constructing numerical methods and computer algorithms. These models are studied with the Green function method. The paper first presents the necessary theoretical background on the Green function method for multi-point boundary-value problems. The main equation is discussed, and the notions of multi-point boundary conditions, boundary functionals, degenerate and non-degenerate problems, and the fundamental matrix of solutions are introduced. In the main part, the problem under study is formulated in terms of shocks and deformations in the boundary conditions, after which the main results are stated. Theorem 1 proves conditions for the existence and uniqueness of solutions. Theorem 2 proves conditions for strict positivity and equal measurability of a pair of solutions. Theorem 3 proves the existence of, and estimates for, the least eigenvalue, together with spectral properties and positivity of the eigenfunctions. Theorem 4 proves the weighted positivity of the Green function. Some possible applications to signal theory and transmutation operators are considered.

  2. An alternative extragradient projection method for quasi-equilibrium problems.

    Science.gov (United States)

    Chen, Haibin; Wang, Yiju; Xu, Yi

    2018-01-01

    For the quasi-equilibrium problem where the players' costs and their strategies both depend on the rival's decisions, an alternative extragradient projection method for solving it is designed. Different from the classical extragradient projection method whose generated sequence has the contraction property with respect to the solution set, the newly designed method possesses an expansion property with respect to a given initial point. The global convergence of the method is established under the assumptions of pseudomonotonicity of the equilibrium function and of continuity of the underlying multi-valued mapping. Furthermore, we show that the generated sequence converges to the nearest point in the solution set to the initial point. Numerical experiments show the efficiency of the method.
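
    For orientation, the classical extragradient projection iteration that the paper departs from (predictor y = P_C(x − τF(x)), corrector x⁺ = P_C(x − τF(y))) can be sketched as below; the alternative method's expansion property and its specific step rules are not reproduced here.

```python
import numpy as np

def extragradient(F, project, x0, tau=0.1, n_iter=1000, tol=1e-8):
    """Classical extragradient projection method for a variational inequality
    over a closed convex set C (given through its projection operator):
    predictor y = P_C(x - tau*F(x)), corrector x+ = P_C(x - tau*F(y))."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        y = project(x - tau * F(x))
        x_next = project(x - tau * F(y))
        if np.linalg.norm(x_next - x) < tol:
            return x_next
        x = x_next
    return x

# illustrative monotone affine mapping F(x) = A x + b over the box [0, 1]^2
A = np.array([[2.0, 1.0], [1.0, 2.0]])
b = np.array([-1.0, -1.0])
x_star = extragradient(lambda x: A @ x + b,
                       lambda z: np.clip(z, 0.0, 1.0),
                       x0=np.zeros(2))          # converges to ~(1/3, 1/3)
```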

  3. A method for computing the stationary points of a function subject to linear equality constraints

    International Nuclear Information System (INIS)

    Uko, U.L.

    1989-09-01

    We give a new method for the numerical calculation of stationary points of a function when it is subject to equality constraints. An application to the solution of linear equations is given, together with a numerical example. (author). 5 refs
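
    The abstract does not spell out the new method, but the standard Lagrange-multiplier route it improves upon reduces, for a quadratic objective, to one linear KKT solve; the sketch below shows that baseline only, with illustrative data.

```python
import numpy as np

def stationary_point_quadratic(Q, c, A, b):
    """Standard Lagrange-multiplier route (not the paper's new method) for the
    stationary point of f(x) = 0.5 x^T Q x - c^T x subject to A x = b:
    stationarity gives Q x + A^T lam = c together with A x = b, a linear
    KKT system solved here directly."""
    n, m = Q.shape[0], A.shape[0]
    kkt = np.block([[Q, A.T], [A, np.zeros((m, m))]])
    rhs = np.concatenate([c, b])
    sol = np.linalg.solve(kkt, rhs)
    return sol[:n], sol[n:]        # stationary point x, multipliers lambda

# small illustrative problem: minimize over the line x1 + x2 = 1
Q = np.array([[2.0, 0.0], [0.0, 4.0]])
c = np.array([1.0, 1.0])
A = np.array([[1.0, 1.0]]); b = np.array([1.0])
x, lam = stationary_point_quadratic(Q, c, A, b)   # x = (2/3, 1/3)
```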

  4. MOCC: A Fast and Robust Correlation-Based Method for Interest Point Matching under Large Scale Changes

    OpenAIRE

    Wang Hao; Gao Wen; Huang Qingming; Zhao Feng

    2010-01-01

    Similarity measures based on correlation have been used extensively for matching tasks. However, traditional correlation-based image matching methods are sensitive to rotation and scale changes. This paper presents a fast correlation-based method for matching two images with large rotation and significant scale changes. Multiscale oriented corner correlation (MOCC) is used to evaluate the degree of similarity between the feature points. The method is rotation invariant and capable of matchin...

  5. Prospective comparison of liver stiffness measurements between two point shear wave elastography methods: Virtual touch quantification and elastography point quantification

    Energy Technology Data Exchange (ETDEWEB)

    Yoo, Hyun Suk; Lee, Jeong Min; Yoon, Jeong Hee; Lee, Dong Ho; Chang, Won; Han, Joon Koo [Seoul National University Hospital, Seoul (Korea, Republic of)

    2016-09-15

    To prospectively compare technical success rate and reliable measurements of virtual touch quantification (VTQ) elastography and elastography point quantification (ElastPQ), and to correlate liver stiffness (LS) measurements obtained by the two elastography techniques. Our study included 85 patients, 80 of whom were previously diagnosed with chronic liver disease. The technical success rate and reliable measurements of the two kinds of point shear wave elastography (pSWE) techniques were compared by χ² analysis. LS values measured using the two techniques were compared and correlated via Wilcoxon signed-rank test, Spearman correlation coefficient, and 95% Bland-Altman limit of agreement. The intraobserver reproducibility of ElastPQ was determined by 95% Bland-Altman limit of agreement and intraclass correlation coefficient (ICC). The two pSWE techniques showed similar technical success rate (98.8% for VTQ vs. 95.3% for ElastPQ, p = 0.823) and reliable LS measurements (95.3% for VTQ vs. 90.6% for ElastPQ, p = 0.509). The mean LS measurements obtained by VTQ (1.71 ± 0.47 m/s) and ElastPQ (1.66 ± 0.41 m/s) were not significantly different (p = 0.209). The LS measurements obtained by the two techniques showed strong correlation (r = 0.820); in addition, the 95% limit of agreement of the two methods was 27.5% of the mean. Finally, the ICC of repeat ElastPQ measurements was 0.991. Virtual touch quantification and ElastPQ showed similar technical success rate and reliable measurements, with strongly correlated LS measurements. However, the two methods are not interchangeable due to the large limit of agreement.

  6. Neuromuscular control of the point to point and oscillatory movements of a sagittal arm with the actor-critic reinforcement learning method.

    Science.gov (United States)

    Golkhou, Vahid; Parnianpour, Mohamad; Lucas, Caro

    2005-04-01

    In this study, we have used a single link system with a pair of muscles that are excited with alpha and gamma signals to achieve both point to point and oscillatory movements with variable amplitude and frequency. The system is highly nonlinear in all its physical and physiological attributes. The major physiological characteristics of this system are the simultaneous activation of a pair of nonlinear muscle-like actuators for control purposes, the existence of nonlinear spindle-like sensors and a Golgi tendon organ-like sensor, and the actions of gravity and external loading. Transmission delays are included in the afferent and efferent neural paths to account for a more accurate representation of the reflex loops. A reinforcement learning method with an actor-critic (AC) architecture, in place of the middle and low levels of the central nervous system (CNS), is used to track a desired trajectory. The actor in this structure is a two-layer feedforward neural network and the critic is a model of the cerebellum. The critic is trained by the state-action-reward-state-action (SARSA) method. The critic then trains the actor by supervised learning based on prior experiences. Simulation studies of oscillatory movements based on the proposed algorithm demonstrate excellent tracking capability, and after 280 epochs the RMS errors for the position and velocity profiles were 0.02 rad and 0.04 rad/s, respectively.

  7. A sparsity-regularized Born iterative method for reconstruction of two-dimensional piecewise continuous inhomogeneous domains

    KAUST Repository

    Sandhu, Ali Imran

    2016-04-10

    A sparsity-regularized Born iterative method (BIM) is proposed for efficiently reconstructing two-dimensional piecewise-continuous inhomogeneous dielectric profiles. Such profiles are typically not spatially sparse, which reduces the efficiency of the sparsity-promoting regularization. To overcome this problem, scattered fields are represented in terms of the spatial derivative of the dielectric profile and reconstruction is carried out over samples of the dielectric profile's derivative. Then, like the conventional BIM, the nonlinear problem is iteratively converted into a sequence of linear problems (in derivative samples) and sparsity constraint is enforced on each linear problem using the thresholded Landweber iterations. Numerical results, which demonstrate the efficiency and accuracy of the proposed method in reconstructing piecewise-continuous dielectric profiles, are presented.

  8. CompaRNA: a server for continuous benchmarking of automated methods for RNA secondary structure prediction

    Science.gov (United States)

    Puton, Tomasz; Kozlowski, Lukasz P.; Rother, Kristian M.; Bujnicki, Janusz M.

    2013-01-01

    We present a continuous benchmarking approach for the assessment of RNA secondary structure prediction methods implemented in the CompaRNA web server. As of 3 October 2012, the performance of 28 single-sequence and 13 comparative methods has been evaluated on RNA sequences/structures released weekly by the Protein Data Bank. We also provide a static benchmark generated on RNA 2D structures derived from the RNAstrand database. Benchmarks on both data sets offer insight into the relative performance of RNA secondary structure prediction methods on RNAs of different size and with respect to different types of structure. According to our tests, on the average, the most accurate predictions obtained by a comparative approach are generated by CentroidAlifold, MXScarna, RNAalifold and TurboFold. On the average, the most accurate predictions obtained by single-sequence analyses are generated by CentroidFold, ContextFold and IPknot. The best comparative methods typically outperform the best single-sequence methods if an alignment of homologous RNA sequences is available. This article presents the results of our benchmarks as of 3 October 2012, whereas the rankings presented online are continuously updated. We will gladly include new prediction methods and new measures of accuracy in the new editions of CompaRNA benchmarks. PMID:23435231

  9. CompaRNA: a server for continuous benchmarking of automated methods for RNA secondary structure prediction.

    Science.gov (United States)

    Puton, Tomasz; Kozlowski, Lukasz P; Rother, Kristian M; Bujnicki, Janusz M

    2013-04-01

    We present a continuous benchmarking approach for the assessment of RNA secondary structure prediction methods implemented in the CompaRNA web server. As of 3 October 2012, the performance of 28 single-sequence and 13 comparative methods has been evaluated on RNA sequences/structures released weekly by the Protein Data Bank. We also provide a static benchmark generated on RNA 2D structures derived from the RNAstrand database. Benchmarks on both data sets offer insight into the relative performance of RNA secondary structure prediction methods on RNAs of different size and with respect to different types of structure. According to our tests, on the average, the most accurate predictions obtained by a comparative approach are generated by CentroidAlifold, MXScarna, RNAalifold and TurboFold. On the average, the most accurate predictions obtained by single-sequence analyses are generated by CentroidFold, ContextFold and IPknot. The best comparative methods typically outperform the best single-sequence methods if an alignment of homologous RNA sequences is available. This article presents the results of our benchmarks as of 3 October 2012, whereas the rankings presented online are continuously updated. We will gladly include new prediction methods and new measures of accuracy in the new editions of CompaRNA benchmarks.

  10. Poisson branching point processes

    International Nuclear Information System (INIS)

    Matsuo, K.; Teich, M.C.; Saleh, B.E.A.

    1984-01-01

    We investigate the statistical properties of a special branching point process. The initial process is assumed to be a homogeneous Poisson point process (HPP). The initiating events at each branching stage are carried forward to the following stage. In addition, each initiating event independently contributes a nonstationary Poisson point process (whose rate is a specified function) located at that point. The additional contributions from all points of a given stage constitute a doubly stochastic Poisson point process (DSPP) whose rate is a filtered version of the initiating point process at that stage. The process studied is a generalization of a Poisson branching process in which random time delays are permitted in the generation of events. Particular attention is given to the limit in which the number of branching stages is infinite while the average number of added events per event of the previous stage is infinitesimal. In the special case when the branching is instantaneous this limit of continuous branching corresponds to the well-known Yule--Furry process with an initial Poisson population. The Poisson branching point process provides a useful description for many problems in various scientific disciplines, such as the behavior of electron multipliers, neutron chain reactions, and cosmic ray showers
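
    A toy simulation of the branching construction described above is sketched below; the exponentially decaying offspring rate and all numerical parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)

def branching_poisson(rate0, T, n_stages, mean_offspring, decay=1.0):
    """Toy simulation of the branching construction: stage 0 is a homogeneous
    Poisson process on [0, T]; at each later stage every event of the previous
    stage is carried forward and, in addition, independently contributes a
    nonstationary Poisson process of offspring (here with an exponentially
    decaying rate, an illustrative choice of the rate function)."""
    events = list(rng.uniform(0.0, T, rng.poisson(rate0 * T)))   # HPP stage 0
    for _ in range(n_stages):
        offspring = []
        for t in events:
            n_off = rng.poisson(mean_offspring)           # added events per event
            delays = rng.exponential(1.0 / decay, n_off)  # delays follow the rate profile
            offspring.extend(t + d for d in delays if t + d < T)
        events += offspring                               # carried forward + added
    return np.sort(events)

times = branching_poisson(rate0=2.0, T=10.0, n_stages=5, mean_offspring=0.2)
```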

  11. Detecting change-points in extremes

    KAUST Repository

    Dupuis, D. J.

    2015-01-01

    Even though most work on change-point estimation focuses on changes in the mean, changes in the variance or in the tail distribution can lead to more extreme events. In this paper, we develop a new method of detecting and estimating the change-points in the tail of multiple time series data. In addition, we adapt existing tail change-point detection methods to our specific problem and conduct a thorough comparison of different methods in terms of performance on the estimation of change-points and computational time. We also examine three locations on the U.S. northeast coast and demonstrate that the methods are useful for identifying changes in seasonally extreme warm temperatures.

  12. An Introduction to the Material Point Method using a Case Study from Gas Dynamics

    International Nuclear Information System (INIS)

    Tran, L. T.; Kim, J.; Berzins, M.

    2008-01-01

    The Material Point Method (MPM) developed by Sulsky and colleagues is currently being used to solve many challenging problems involving large deformations and/or fragmentations with considerable success as part of the Uintah code created by the CSAFE project. In order to understand the properties of this method, an analysis of the computational properties of MPM is undertaken in the context of model problems from gas dynamics. One aspect of the MPM method in the form used here is shown to have first order accuracy. Computational experiments using particle redistribution are described and show that smooth results with first order accuracy may be obtained.

  13. A method of undifferenced ambiguity resolution for GPS+GLONASS precise point positioning.

    Science.gov (United States)

    Yi, Wenting; Song, Weiwei; Lou, Yidong; Shi, Chuang; Yao, Yibin

    2016-05-25

    Integer ambiguity resolution is critical for achieving positions of high precision and for shortening the convergence time of precise point positioning (PPP). However, GLONASS adopts the signal processing technology of frequency division multiple access and results in inter-frequency code biases (IFCBs), which are currently difficult to correct. This bias makes the methods proposed for GPS ambiguity fixing unsuitable for GLONASS. To realize undifferenced GLONASS ambiguity fixing, we propose an undifferenced ambiguity resolution method for GPS+GLONASS PPP, which considers the IFCBs estimation. The experimental result demonstrates that the success rate of GLONASS ambiguity fixing can reach 75% through the proposed method. Compared with the ambiguity float solutions, the positioning accuracies of ambiguity-fixed solutions of GLONASS-only PPP are increased by 12.2%, 20.9%, and 10.3%, and that of the GPS+GLONASS PPP by 13.0%, 35.2%, and 14.1% in the North, East and Up directions, respectively.

  14. A phase quantification method based on EBSD data for a continuously cooled microalloyed steel

    Energy Technology Data Exchange (ETDEWEB)

    Zhao, H.; Wynne, B.P.; Palmiere, E.J., E-mail: e.j.palmiere@sheffield.ac.uk

    2017-01-15

    Mechanical properties of steels depend on the phase constitutions of the final microstructures which can be related to the processing parameters. Therefore, accurate quantification of different phases is necessary to investigate the relationships between processing parameters, final microstructures and mechanical properties. Point counting on micrographs observed by optical or scanning electron microscopy is widely used as a phase quantification method, and different phases are discriminated according to their morphological characteristics. However, it is difficult to differentiate some of the phase constituents with similar morphology. Differently, for EBSD based phase quantification methods, besides morphological characteristics, other parameters derived from the orientation information can also be used for discrimination. In this research, a phase quantification method based on EBSD data in the unit of grains was proposed to identify and quantify the complex phase constitutions of a microalloyed steel subjected to accelerated coolings. Characteristics of polygonal ferrite/quasi-polygonal ferrite, acicular ferrite and bainitic ferrite on grain averaged misorientation angles, aspect ratios, high angle grain boundary fractions and grain sizes were analysed and used to develop the identification criteria for each phase. Comparing the results obtained by this EBSD based method and point counting, it was found that this EBSD based method can provide accurate and reliable phase quantification results for microstructures with relatively slow cooling rates. - Highlights: •A phase quantification method based on EBSD data in the unit of grains was proposed. •The critical grain area above which GAM angles are valid parameters was obtained. •Grain size and grain boundary misorientation were used to identify acicular ferrite. •High cooling rates deteriorate the accuracy of this EBSD based method.

  15. Continuation of probability density functions using a generalized Lyapunov approach

    Energy Technology Data Exchange (ETDEWEB)

    Baars, S., E-mail: s.baars@rug.nl [Johann Bernoulli Institute for Mathematics and Computer Science, University of Groningen, P.O. Box 407, 9700 AK Groningen (Netherlands); Viebahn, J.P., E-mail: viebahn@cwi.nl [Centrum Wiskunde & Informatica (CWI), P.O. Box 94079, 1090 GB, Amsterdam (Netherlands); Mulder, T.E., E-mail: t.e.mulder@uu.nl [Institute for Marine and Atmospheric research Utrecht, Department of Physics and Astronomy, Utrecht University, Princetonplein 5, 3584 CC Utrecht (Netherlands); Kuehn, C., E-mail: ckuehn@ma.tum.de [Technical University of Munich, Faculty of Mathematics, Boltzmannstr. 3, 85748 Garching bei München (Germany); Wubs, F.W., E-mail: f.w.wubs@rug.nl [Johann Bernoulli Institute for Mathematics and Computer Science, University of Groningen, P.O. Box 407, 9700 AK Groningen (Netherlands); Dijkstra, H.A., E-mail: h.a.dijkstra@uu.nl [Institute for Marine and Atmospheric research Utrecht, Department of Physics and Astronomy, Utrecht University, Princetonplein 5, 3584 CC Utrecht (Netherlands); School of Chemical and Biomolecular Engineering, Cornell University, Ithaca, NY (United States)

    2017-05-01

    Techniques from numerical bifurcation theory are very useful to study transitions between steady fluid flow patterns and the instabilities involved. Here, we provide computational methodology to use parameter continuation in determining probability density functions of systems of stochastic partial differential equations near fixed points, under a small noise approximation. Key innovation is the efficient solution of a generalized Lyapunov equation using an iterative method involving low-rank approximations. We apply and illustrate the capabilities of the method using a problem in physical oceanography, i.e. the occurrence of multiple steady states of the Atlantic Ocean circulation.

  16. Performance Analysis of a Maximum Power Point Tracking Technique using Silver Mean Method

    Directory of Open Access Journals (Sweden)

    Shobha Rani Depuru

    2018-01-01

    Full Text Available This paper presents a simple and particularly efficacious Maximum Power Point Tracking (MPPT) algorithm based on the Silver Mean Method (SMM). This method operates by choosing a search interval from the P-V characteristics of the given solar array and converges to the MPP of the Solar Photo-Voltaic (SPV) system by shrinking its interval. After achieving the maximum power, the algorithm stops shrinking and maintains constant voltage until the next interval is decided. The tracking capability, efficiency and performance of the proposed algorithm are validated by the simulation and experimental results with a 100W solar panel for variable temperature and irradiance conditions. The results obtained confirm that even without any perturbation and observation process, the proposed method still outperforms the traditional perturb and observe (P&O) method by demonstrating far better steady state output, more accuracy and higher efficiency.
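
    The interval-shrinking search at the heart of the method can be sketched as a bracketing section search on a unimodal P-V curve; the exact interior-point ratio used by the Silver Mean Method is assumed here to be sqrt(2) - 1 ~ 0.4142, and the toy P-V curve is purely illustrative.

```python
def section_search_mpp(power_of, v_low, v_high, ratio=0.4142, tol=0.5):
    """Sketch of an interval-shrinking maximum power point search on a
    unimodal P-V curve. `power_of(v)` returns the measured panel power at
    operating voltage v; any 0 < ratio < 0.5 gives a convergent bracketing
    search, with the silver-mean-related value used here as an assumption."""
    while v_high - v_low > tol:
        d = ratio * (v_high - v_low)
        v1, v2 = v_low + d, v_high - d
        if power_of(v1) < power_of(v2):
            v_low = v1            # the maximum lies in [v1, v_high]
        else:
            v_high = v2           # the maximum lies in [v_low, v2]
    return 0.5 * (v_low + v_high)

# toy single-diode-like P-V curve for a ~100 W panel (illustrative only)
toy_power = lambda v: v * max(0.0, 5.5 * (1.0 - (v / 21.0) ** 12))
v_mpp = section_search_mpp(toy_power, 0.0, 21.0)
```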

  17. Quantitative Tomography for Continuous Variable Quantum Systems.

    Science.gov (United States)

    Landon-Cardinal, Olivier; Govia, Luke C G; Clerk, Aashish A

    2018-03-02

    We present a continuous variable tomography scheme that reconstructs the Husimi Q function (Wigner function) by Lagrange interpolation, using measurements of the Q function (Wigner function) at the Padua points, conjectured to be optimal sampling points for two dimensional reconstruction. Our approach drastically reduces the number of measurements required compared to using equidistant points on a regular grid, although reanalysis of such experiments is possible. The reconstruction algorithm produces a reconstructed function with exponentially decreasing error and quasilinear runtime in the number of Padua points. Moreover, using the interpolating polynomial of the Q function, we present a technique to directly estimate the density matrix elements of the continuous variable state, with only a linear propagation of input measurement error. Furthermore, we derive a state-independent analytical bound on this error, such that our estimate of the density matrix is accompanied by a measure of its uncertainty.

  18. Quantitative Tomography for Continuous Variable Quantum Systems

    Science.gov (United States)

    Landon-Cardinal, Olivier; Govia, Luke C. G.; Clerk, Aashish A.

    2018-03-01

    We present a continuous variable tomography scheme that reconstructs the Husimi Q function (Wigner function) by Lagrange interpolation, using measurements of the Q function (Wigner function) at the Padua points, conjectured to be optimal sampling points for two dimensional reconstruction. Our approach drastically reduces the number of measurements required compared to using equidistant points on a regular grid, although reanalysis of such experiments is possible. The reconstruction algorithm produces a reconstructed function with exponentially decreasing error and quasilinear runtime in the number of Padua points. Moreover, using the interpolating polynomial of the Q function, we present a technique to directly estimate the density matrix elements of the continuous variable state, with only a linear propagation of input measurement error. Furthermore, we derive a state-independent analytical bound on this error, such that our estimate of the density matrix is accompanied by a measure of its uncertainty.

  19. Bibliographic organisation of continuing resources in relation to the IFLA models: research within the corpus of Croatian continuing resources

    Directory of Open Access Journals (Sweden)

    Tatijana Petrić

    2016-01-01

    Full Text Available Comprehensive research on continuing resources has not been conducted in Croatia; this paper therefore examines the current bibliographic organisation of continuing resources against the parameters set by the IFLA models, as well as the potential shortcomings of the IFLA models for the bibliographic organisation of continuing resources in comparison with the valid national code used in Croatian cataloguing practice. Research on the corpus of Croatian continuing resources covered the period from 2000 to 2011. The titles to be observed were selected from this population by deliberate stratified sampling. By observing the bibliographic records of the selected sample in the NUL catalogue, the frequency of occurrence of the parameters from the IFLA models that should identify continuing resources was recorded, which should also reveal the characteristics of continuing resources. In determining the parameters of observation, FRBR is viewed in terms of bibliographic data, FRAD in terms of the other groups of entities and controlled access points for work, person and corporate body, and FRSAD in terms of the third group of entities as the subject or the subject access to continuing resources. The research results indicate that the current model of bibliographic organisation shows a high frequency of the attributes listed in the IFLA models for all types of resources, even though this was not envisaged by the PPIAK, and it is clear that practice has moved away from the national code, which does not offer solutions for all types of resources or for ever more demanding users. The current model of bibliographic organisation of the corpus of Croatian continuing resources, with regard to the new IFLA model, requires certain changes so that users can more easily access and identify continuing resources. The research results also indicate the need to update the...

  20. A new integrated dual time-point amyloid PET/MRI data analysis method

    International Nuclear Information System (INIS)

    Cecchin, Diego; Zucchetta, Pietro; Turco, Paolo; Bui, Franco; Barthel, Henryk; Tiepolt, Solveig; Sabri, Osama; Poggiali, Davide; Cagnin, Annachiara; Gallo, Paolo; Frigo, Anna Chiara

    2017-01-01

    In the initial evaluation of patients with suspected dementia and Alzheimer's disease, there is no consensus on how to perform semiquantification of amyloid in such a way that it: (1) facilitates visual qualitative interpretation, (2) takes the kinetic behaviour of the tracer into consideration particularly with regard to at least partially correcting for blood flow dependence, (3) analyses the amyloid load based on accurate parcellation of cortical and subcortical areas, (4) includes partial volume effect correction (PVEC), (5) includes MRI-derived topographical indexes, (6) enables application to PET/MRI images and PET/CT images with separately acquired MR images, and (7) allows automation. A method with all of these characteristics was retrospectively tested in 86 subjects who underwent amyloid (18F-florbetaben) PET/MRI in a clinical setting (using images acquired 90-110 min after injection, 53 were classified visually as amyloid-negative and 33 as amyloid-positive). Early images after tracer administration were acquired between 0 and 10 min after injection, and later images were acquired between 90 and 110 min after injection. PVEC of the PET data was carried out using the geometric transfer matrix method. Parametric images and some regional output parameters, including two innovative "dual time-point" indexes, were obtained. Subjects classified visually as amyloid-positive showed a sparse tracer uptake in the primary sensory, motor and visual areas in accordance with the isocortical stage of the topographic distribution of the amyloid plaque (Braak stages V/VI). In patients classified visually as amyloid-negative, the method revealed detectable levels of tracer uptake in the basal portions of the frontal and temporal lobes, areas that are known to be sites of early deposition of amyloid plaques that probably represented early accumulation (Braak stage A) that is typical of normal ageing. There was a strong correlation between age...

  1. A new integrated dual time-point amyloid PET/MRI data analysis method

    Energy Technology Data Exchange (ETDEWEB)

    Cecchin, Diego; Zucchetta, Pietro; Turco, Paolo; Bui, Franco [University Hospital of Padua, Nuclear Medicine Unit, Department of Medicine - DIMED, Padua (Italy); Barthel, Henryk; Tiepolt, Solveig; Sabri, Osama [Leipzig University, Department of Nuclear Medicine, Leipzig (Germany); Poggiali, Davide; Cagnin, Annachiara; Gallo, Paolo [University Hospital of Padua, Neurology, Department of Neurosciences (DNS), Padua (Italy); Frigo, Anna Chiara [University Hospital of Padua, Biostatistics, Epidemiology and Public Health Unit, Department of Cardiac, Thoracic and Vascular Sciences, Padua (Italy)

    2017-11-15

    In the initial evaluation of patients with suspected dementia and Alzheimer's disease, there is no consensus on how to perform semiquantification of amyloid in such a way that it: (1) facilitates visual qualitative interpretation, (2) takes the kinetic behaviour of the tracer into consideration particularly with regard to at least partially correcting for blood flow dependence, (3) analyses the amyloid load based on accurate parcellation of cortical and subcortical areas, (4) includes partial volume effect correction (PVEC), (5) includes MRI-derived topographical indexes, (6) enables application to PET/MRI images and PET/CT images with separately acquired MR images, and (7) allows automation. A method with all of these characteristics was retrospectively tested in 86 subjects who underwent amyloid (¹⁸F-florbetaben) PET/MRI in a clinical setting (using images acquired 90-110 min after injection, 53 were classified visually as amyloid-negative and 33 as amyloid-positive). Early images after tracer administration were acquired between 0 and 10 min after injection, and later images were acquired between 90 and 110 min after injection. PVEC of the PET data was carried out using the geometric transfer matrix method. Parametric images and some regional output parameters, including two innovative "dual time-point" indexes, were obtained. Subjects classified visually as amyloid-positive showed a sparse tracer uptake in the primary sensory, motor and visual areas in accordance with the isocortical stage of the topographic distribution of the amyloid plaque (Braak stages V/VI). In patients classified visually as amyloid-negative, the method revealed detectable levels of tracer uptake in the basal portions of the frontal and temporal lobes, areas that are known to be sites of early deposition of amyloid plaques that probably represented early accumulation (Braak stage A) that is typical of normal ageing. There was a strong correlation between

  2. An improved method of continuous LOD based on fractal theory in terrain rendering

    Science.gov (United States)

    Lin, Lan; Li, Lijun

    2007-11-01

    With the improvement of computer graphics hardware capability, 3D terrain rendering has become a hot topic in real-time visualization. To resolve the conflict between rendering speed and rendering realism, this paper presents an improved terrain-rendering method that extends the traditional continuous level-of-detail technique using fractal theory. Rather than repeatedly operating on memory to obtain terrain models at different resolutions, the method derives the fractal characteristic parameters of each region according to the movement of the viewpoint. Experimental results show that the method preserves the authenticity of the landscape and increases the speed of real-time 3D terrain rendering.

  3. A simple boundary element formulation for shape optimization of 2D continuous structures

    International Nuclear Information System (INIS)

    Luciano Mendes Bezerra; Jarbas de Carvalho Santos Junior; Arlindo Pires Lopes; Andre Luiz; Souza, A.C.

    2005-01-01

    For the design of nuclear equipment such as pressure vessels, steam generators, and pipelines, among others, it is very important to optimize the shape of structural systems so that they withstand prescribed loads, such as internal pressures, while respecting prescribed or limiting reference values such as stress or strain. Shape optimization of frame structural systems is common in the literature, but the same is not true for continuous structural systems. In this work, the Boundary Element Method (BEM) is applied to simple problems of shape optimization of 2D continuous structural systems. The proposed formulation is based on the BEM and on deterministic optimization methods of zero and first order, such as Powell's, Conjugate Gradient, and BFGS methods. The optimal geometric configuration of the 2D structure is obtained by minimizing an objective function. This function is written in terms of reference values (such as loads, stresses, strains or deformations) prescribed at a few points inside or on the boundary of the structure. The use of the BEM for shape optimization of continuous structures is attractive compared to other methods that discretize the whole continuum. Several numerical examples of the application of the proposed formulation to simple engineering problems are presented. (authors)

  4. Neoliberalism influence in the Chilean Social Work: Professional and users’ points of view

    Directory of Open Access Journals (Sweden)

    Luis Alberto Vivero Arriagada

    2017-01-01

    Full Text Available Objective: To analyze and interpret the influence of neoliberalism on Chilean Social Work. Method: The points of view of users and professionals involved in social programs are interpreted from a critical-hermeneutic perspective, articulated with a review of historical data on Social Work. Results: The profession is still influenced by conservative perspectives, expressed in a pragmatic/functional mode of intervention with a weak theoretical framework. Conclusions: The findings point to the need to strengthen conceptual-theoretical training, to define theoretical paths in undergraduate programs, and to maintain a continuous link between academia and the professional field of action.

  5. Method of continuously regenerating decontaminating electrolytic solution

    International Nuclear Information System (INIS)

    Sasaki, Takashi; Kobayashi, Toshio; Wada, Koichi.

    1985-01-01

    Purpose: To continuously recover radioactive metal ions from the electrolytic solution that is used for the electrolytic decontamination of radioactive equipment and whose radioactivity increases with use, and to regenerate the electrolytic solution into a high-concentration acid. Method: Liquid in an auxiliary tank is recycled to the water-containing cathode chamber of an electrodepositing regeneration tank and adjusted to pH 2 by means of a pH controller and a pH electrode. The electrolytic solution in the electrolytic decontaminating tank is introduced via an injection pump into the auxiliary tank and, interlocked with this, regenerating solution is introduced from a regenerating-solution extraction pump through an extraction pipeway into the electrolytic decontaminating tank. Meanwhile, electric current is supplied to the electrodes so that radioactive metal ions dissolved in the cathode chamber are deposited on the capturing electrode, while anions are transferred through a partition wall to the anode chamber, regenerating the electrolytic solution into a high-concentration acid. Water is supplied through an electromagnetic valve interlocked with a level meter to keep the liquid level constant. This decreases the generation of liquid wastes and also reduces the amount of radioactive secondary wastes. (Horiuchi, T.)

  6. SINGLE TREE DETECTION FROM AIRBORNE LASER SCANNING DATA USING A MARKED POINT PROCESS BASED METHOD

    Directory of Open Access Journals (Sweden)

    J. Zhang

    2013-05-01

    Full Text Available Tree detection and reconstruction is of great interest in large-scale city modelling. In this paper, we present a marked point process model to detect single trees from airborne laser scanning (ALS) data. We consider single trees in the canopy height model (CHM) recovered from ALS data as a realization of a point process of circles. Unlike a traditional marked point process, we sample the model in a constrained configuration space by making use of image processing techniques. A Gibbs energy is defined on the model, containing a data term, which judges the fitness of the model with respect to the data, and a prior term, which incorporates prior knowledge of object layouts. We search for the optimal configuration with a steepest gradient descent algorithm. The presented hybrid framework was tested on three forest plots, and the experiments show the effectiveness of the proposed method.
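
    The Gibbs energy described above combines a data term and a prior term. The sketch below, with hypothetical weights and a simple non-overlap prior, shows one way such an energy could be evaluated for a circle configuration on a CHM raster; it is an illustration under those assumptions, not the authors' exact formulation.

    ```python
    import numpy as np

    def gibbs_energy(chm, circles, w_data=1.0, w_prior=5.0):
        """Evaluate a simple Gibbs-type energy for a configuration of circles.

        chm     : 2D numpy array, canopy height model (m)
        circles : list of (row, col, radius) candidate tree crowns (pixels)
        Returns data term + prior term; lower is better.
        Hypothetical weights w_data, w_prior are illustrative only.
        """
        rows, cols = np.indices(chm.shape)
        data_term = 0.0
        for r0, c0, rad in circles:
            inside = (rows - r0) ** 2 + (cols - c0) ** 2 <= rad ** 2
            # reward circles covering high-canopy pixels (negative energy)
            data_term -= chm[inside].mean() if inside.any() else 0.0
        # prior term: penalize overlapping circles (crowns should not coincide)
        prior_term = 0.0
        for i in range(len(circles)):
            for j in range(i + 1, len(circles)):
                (r1, c1, a), (r2, c2, b) = circles[i], circles[j]
                dist = np.hypot(r1 - r2, c1 - c2)
                prior_term += max(0.0, (a + b) - dist)
        return w_data * data_term + w_prior * prior_term
    ```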

  7. A fast quadrature-based numerical method for the continuous spectrum biphasic poroviscoelastic model of articular cartilage.

    Science.gov (United States)

    Stuebner, Michael; Haider, Mansoor A

    2010-06-18

    A new and efficient method for numerical solution of the continuous spectrum biphasic poroviscoelastic (BPVE) model of articular cartilage is presented. Development of the method is based on a composite Gauss-Legendre quadrature approximation of the continuous spectrum relaxation function that leads to an exponential series representation. The separability property of the exponential terms in the series is exploited to develop a numerical scheme that can be reduced to an update rule requiring retention of the strain history at only the previous time step. The cost of the resulting temporal discretization scheme is O(N) for N time steps. Application and calibration of the method is illustrated in the context of a finite difference solution of the one-dimensional confined compression BPVE stress-relaxation problem. Accuracy of the numerical method is demonstrated by comparison to a theoretical Laplace transform solution for a range of viscoelastic relaxation times that are representative of articular cartilage. Copyright (c) 2010 Elsevier Ltd. All rights reserved.
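
    The O(N) cost quoted above comes from the separability of the exponential terms: the hereditary (convolution) integral can be updated recursively using only the internal variables from the previous time step. A minimal sketch of that recursion for a relaxation function approximated as a sum of exponentials (a Prony-type series such as the one produced by the Gauss-Legendre quadrature) is given below; the coefficients, relaxation times and the end-of-step strain-increment approximation are illustrative placeholders, not the paper's calibrated scheme.

    ```python
    import numpy as np

    def hereditary_stress(strain, dt, c, tau, g_inf=1.0):
        """Recursive O(N) evaluation of sigma(t) = g_inf*eps(t) + sum_k h_k(t),
        where the relaxation function is approximated by sum_k c_k*exp(-t/tau_k).

        strain : strain samples at uniform spacing dt
        c, tau : series coefficients and relaxation times (placeholders)
        """
        strain = np.asarray(strain, dtype=float)
        c = np.asarray(c, dtype=float)
        tau = np.asarray(tau, dtype=float)
        h = np.zeros_like(c)               # internal variables, one per exponential
        sigma = np.zeros_like(strain)
        decay = np.exp(-dt / tau)
        sigma[0] = g_inf * strain[0]
        for n in range(1, len(strain)):
            d_eps = strain[n] - strain[n - 1]
            # each internal variable needs only its value at the previous step
            h = decay * h + c * d_eps      # simple end-of-step approximation
            sigma[n] = g_inf * strain[n] + h.sum()
        return sigma
    ```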

  8. The application of entropy weight TOPSIS method to optimal points in monitoring the Xinjiang radiation environment

    International Nuclear Information System (INIS)

    Feng Guangwen; Hu Youhua; Liu Qian

    2009-01-01

    In this paper, the application of the entropy weight TOPSIS method to the optimal layout of monitoring points for the Xinjiang radiation environment is introduced. With the help of SAS software, the method was found to be feasible and well suited to the problem. It can provide a reference for further radiation environment monitoring in the same regions. Because it is very simple, flexible and effective for comprehensive evaluation, the method brings great convenience and greatly reduces the inspection workload. (authors)
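
    Entropy-weight TOPSIS is a standard two-stage procedure: the entropy of each criterion column yields objective weights, and the weighted alternatives are then ranked by their closeness to the ideal point. A minimal sketch with generic criteria (not the Xinjiang monitoring data) might look like this:

    ```python
    import numpy as np

    def entropy_weight_topsis(X, benefit=None):
        """Rank alternatives (rows of X) on criteria (columns) by entropy-weight TOPSIS.

        X       : (m, n) decision matrix with positive entries
        benefit : boolean array, True where larger values are better (default: all True)
        Returns closeness coefficients in [0, 1]; larger means closer to the ideal point.
        """
        X = np.asarray(X, dtype=float)
        m, n = X.shape
        if benefit is None:
            benefit = np.ones(n, dtype=bool)

        # 1. entropy weights
        p = X / X.sum(axis=0)                          # column-wise proportions
        entropy = -(p * np.log(p + 1e-12)).sum(axis=0) / np.log(m)
        d = 1.0 - entropy                              # degree of diversification
        w = d / d.sum()

        # 2. weighted, vector-normalized matrix
        V = w * X / np.linalg.norm(X, axis=0)

        # 3. ideal and anti-ideal points
        ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
        anti = np.where(benefit, V.min(axis=0), V.max(axis=0))

        # 4. closeness coefficient
        d_plus = np.linalg.norm(V - ideal, axis=1)
        d_minus = np.linalg.norm(V - anti, axis=1)
        return d_minus / (d_plus + d_minus)
    ```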

  9. A method for untriggered time-dependent searches for multiple flares from neutrino point sources

    International Nuclear Information System (INIS)

    Gora, D.; Bernardini, E.; Cruz Silva, A.H.

    2011-04-01

    A method for a time-dependent search for flaring astrophysical sources which can be potentially detected by large neutrino experiments is presented. The method uses a time-clustering algorithm combined with an unbinned likelihood procedure. By including in the likelihood function a signal term which describes the contribution of many small clusters of signal-like events, this method provides an effective way for looking for weak neutrino flares over different time-scales. The method is sensitive to an overall excess of events distributed over several flares which are not individually detectable. For standard cases (one flare) the discovery potential of the method is worse than a standard time-dependent point source analysis with unknown duration of the flare by a factor depending on the signal-to-background level. However, for flares sufficiently shorter than the total observation period, the method is more sensitive than a time-integrated analysis. (orig.)

  10. A method for untriggered time-dependent searches for multiple flares from neutrino point sources

    Energy Technology Data Exchange (ETDEWEB)

    Gora, D. [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany); Institute of Nuclear Physics PAN, Cracow (Poland); Bernardini, E.; Cruz Silva, A.H. [Institute of Nuclear Physics PAN, Cracow (Poland)

    2011-04-15

    A method for a time-dependent search for flaring astrophysical sources which can be potentially detected by large neutrino experiments is presented. The method uses a time-clustering algorithm combined with an unbinned likelihood procedure. By including in the likelihood function a signal term which describes the contribution of many small clusters of signal-like events, this method provides an effective way for looking for weak neutrino flares over different time-scales. The method is sensitive to an overall excess of events distributed over several flares which are not individually detectable. For standard cases (one flare) the discovery potential of the method is worse than a standard time-dependent point source analysis with unknown duration of the flare by a factor depending on the signal-to-background level. However, for flares sufficiently shorter than the total observation period, the method is more sensitive than a time-integrated analysis. (orig.)

  11. Inverse transformation algorithm of transient electromagnetic field and its high-resolution continuous imaging interpretation method

    International Nuclear Information System (INIS)

    Qi, Zhipeng; Li, Xiu; Lu, Xushan; Zhang, Yingying; Yao, Weihua

    2015-01-01

    We introduce a new and potentially useful method for wave-field inverse transformation and its application to 3D interpretation with the transient electromagnetic method (TEM). The diffusive EM field is known to have a unique integral representation in terms of a fictitious wave field that satisfies a wave equation. Once the diffusion equation has been transformed into a fictitious wave equation, continuous imaging of TEM data can be accomplished using imaging methods from seismic interpretation. The interpretation method based on imaging of the fictitious wave field can be used as a fast 3D inversion method. Moreover, because the fictitious wave field possesses wave-field features, wave-field interpretation methods can be applied to TEM to improve prospecting resolution. Wave-field transformation is a key issue in migration imaging of the fictitious wave field. The governing equation of the wave-field transformation is a Fredholm integral equation of the first kind, which is a typically ill-posed problem; in addition, the large dynamic time range of TEM data aggravates the ill-posedness. The wave-field transformation is implemented using a preconditioned regularized conjugate gradient method, and continuous imaging of the fictitious wave field is implemented using Kirchhoff integration. A synthetic-aperture and deconvolution algorithm is also introduced to improve the interpretation resolution. We interpreted field data with the proposed method and obtained a satisfactory interpretation result. (paper)
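
    The wave-field transformation mentioned above is usually posed as a first-kind Fredholm equation u(t) = ∫ K(t, τ) U(τ) dτ; a commonly quoted form of the kernel is K(t, τ) = τ/(2√(πt³)) exp(−τ²/4t). The sketch below discretizes this relation and inverts it with plain Tikhonov regularization instead of the authors' preconditioned regularized conjugate gradient scheme; the grids and the regularization parameter are illustrative assumptions.

    ```python
    import numpy as np

    def wavefield_transform(u, t, tau, lam=1e-3):
        """Estimate a fictitious wave field U(tau) from diffusive TEM data u(t).

        Discretizes u(t) = integral K(t, tau) U(tau) dtau with the commonly used
        kernel K = tau / (2*sqrt(pi*t**3)) * exp(-tau**2 / (4*t)), then solves the
        ill-posed system with zeroth-order Tikhonov regularization (parameter lam).
        """
        u = np.asarray(u, dtype=float)
        t = np.asarray(t, dtype=float)[:, None]       # (n_t, 1)
        tau = np.asarray(tau, dtype=float)[None, :]   # (1, n_tau)
        d_tau = tau[0, 1] - tau[0, 0]
        K = tau / (2.0 * np.sqrt(np.pi * t ** 3)) * np.exp(-tau ** 2 / (4.0 * t)) * d_tau
        # regularized least squares: minimize ||K U - u||^2 + lam * ||U||^2
        n = K.shape[1]
        A = np.vstack([K, np.sqrt(lam) * np.eye(n)])
        b = np.concatenate([u, np.zeros(n)])
        U, *_ = np.linalg.lstsq(A, b, rcond=None)
        return U
    ```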

  12. System and method for continuous solids slurry depressurization

    Science.gov (United States)

    Leininger, Thomas Frederick; Steele, Raymond Douglas; Yen, Hsien-Chin William; Cordes, Stephen Michael

    2017-10-10

    A continuous slag processing system includes a rotating parallel disc pump, coupled to a motor and a brake. The rotating parallel disc pump includes opposing discs coupled to a shaft, an outlet configured to continuously receive a fluid at a first pressure, and an inlet configured to continuously discharge the fluid at a second pressure less than the first pressure. The rotating parallel disc pump is configurable in a reverse-acting pump mode and a letdown turbine mode. The motor is configured to drive the opposing discs about the shaft and against a flow of the fluid to control a difference between the first pressure and the second pressure in the reverse-acting pump mode. The brake is configured to resist rotation of the opposing discs about the shaft to control the difference between the first pressure and the second pressure in the letdown turbine mode.

  13. Mortality trends for ischemic heart disease in China: an analysis of 102 continuous disease surveillance points from 1991 to 2009.

    Science.gov (United States)

    Wan, Xia; Ren, Hongyan; Ma, Enbo; Yang, Gonghuan

    2017-07-25

    In the past 20 years, the trends of ischemic heart disease (IHD) mortality in China have been described in divergent claims. This research analyzes mortality trends for IHD using data from 102 continuous Disease Surveillance Points (DSP) from 1991 to 2009. The 102 continuous DSP covered 7.3 million people during the period 1991-2000, and were then expanded to a population of 52 million in the same areas for 2004-2009. The data were adjusted using garbage code redistribution and the underreporting rate, and mapped from the international classification of diseases ICD-9 to ICD-10. The mortality rates for IHD were further adjusted by the crude death proportion multiplied by the total number of deaths in the mortality envelope, which was calculated using log r_t = a + bt. Age-standardized death rates (ASDRs) were computed using China's 2010 census population structure. The trend in IHD was calculated from ASDRs using a joinpoint regression model. The IHD ASDRs increased overall, with an average annual percentage change (AAPC) of 4.96%, especially in the Southwest (AAPC = 7.97%) and Northeast (AAPC = 7.10%) areas, and for both male and female subjects (AAPC of about 5%). In rural areas, the year 2000 was a cut-off point for the mortality rate, with the annual percentage change increasing from 3.52% in 1991-2000 to 9.02% in 2000-2009, much higher than in urban areas (AAPC = 1.05%). The proportion of deaths increased among older adults, and more male deaths than female deaths occurred before age 60. By observing a wide range of areas across China from 1991 to 2009, this paper concludes that the ASDR trend for IHD increased. These trends reflect changes in the Chinese standard of living and lifestyle, with diets higher in fat, higher blood lipids and increased body weight.
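
    The annual percentage change used above follows directly from the log-linear fit log r_t = a + bt: APC = 100·(e^b − 1). A minimal sketch with synthetic rates (not the surveillance data):

    ```python
    import numpy as np

    def annual_percentage_change(years, rates):
        """Fit log(rate) = a + b*year and return the annual percentage change (%).

        years : sequence of calendar years
        rates : corresponding mortality rates (must be positive)
        """
        b, a = np.polyfit(years, np.log(rates), 1)   # slope b, intercept a
        return 100.0 * (np.exp(b) - 1.0)

    # illustrative (not the paper's) data: a rate growing ~5% per year
    years = np.arange(1991, 2010)
    rates = 40.0 * 1.05 ** (years - 1991)
    print(round(annual_percentage_change(years, rates), 2))  # ~5.0
    ```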

  14. Variation method for optimization of Raman fiber amplifier pumped by continuous-spectrum radiation

    International Nuclear Information System (INIS)

    Ghasempour Ardekani, A.; Bahrampour, A. R.; Feizpour, A.

    2007-01-01

    In Raman fiber amplifiers, reducing the gain ripple across frequency is of great importance. In this article, the gain ripple is optimized using a variational method and a continuous-spectrum pump. It is shown that, for a 40 km line, the average gain is 1.3 dB and the gain ripple is 0.12 dB, which is lower than the latest published data.

  15. A stochastic surplus production model in continuous time

    DEFF Research Database (Denmark)

    Pedersen, Martin Wæver; Berg, Casper Willestofte

    2017-01-01

    Surplus production modelling has a long history as a method for managing data-limited fish stocks. Recent advancements have cast surplus production models as state-space models that separate random variability of stock dynamics from error in observed indices of biomass. We present a stochastic surplus production model in continuous time (SPiCT), which in addition to stock dynamics also models the dynamics of the fisheries. This enables error in the catch process to be reflected in the uncertainty of estimated model parameters and management quantities. Benefits of the continuous-time state-space model formulation include the ability to provide estimates of exploitable biomass and fishing mortality at any point in time from data sampled at arbitrary and possibly irregular intervals. We show in a simulation that the ability to analyse subannual data can increase the effective sample size...
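
    As a toy illustration of the continuous-time surplus production dynamics that underlie such models (not the SPiCT state-space implementation itself), the Schaefer form dB/dt = rB(1 − B/K) − FB can be integrated with a simple Euler scheme; all parameter values below are arbitrary.

    ```python
    import numpy as np

    def schaefer_biomass(b0, r, k, f, dt, n_steps):
        """Euler integration of dB/dt = r*B*(1 - B/K) - F*B (Schaefer dynamics).

        b0 : initial biomass, r : intrinsic growth rate, k : carrying capacity,
        f  : fishing mortality (assumed constant here), dt : step, n_steps : steps.
        """
        b = np.empty(n_steps + 1)
        b[0] = b0
        for i in range(n_steps):
            b[i + 1] = b[i] + dt * (r * b[i] * (1.0 - b[i] / k) - f * b[i])
        return b

    # illustrative parameters only
    traj = schaefer_biomass(b0=0.5, r=0.8, k=1.0, f=0.3, dt=0.05, n_steps=400)
    print(traj[-1])   # approaches the equilibrium K*(1 - F/r) = 0.625
    ```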

  16. Common Fixed Points for Weakly Compatible Maps

    Indian Academy of Sciences (India)

    The purpose of this paper is to prove a common fixed point theorem that extends results from the class of compatible continuous maps to the larger class of weakly compatible maps, without appeal to continuity, and thereby generalizes the results of Jungck [3], Fisher [1], Kang and Kim [8], Jachymski [2], and Rhoades [9].

  17. Advanced DNA-Based Point-of-Care Diagnostic Methods for Plant Diseases Detection

    Directory of Open Access Journals (Sweden)

    Han Yih Lau

    2017-12-01

    Full Text Available Diagnostic technologies for the detection of plant pathogens with point-of-care capability and high multiplexing ability are an essential tool in the fight to reduce the large agricultural production losses caused by plant diseases. The main desirable characteristics for such diagnostic assays are high specificity, sensitivity, reproducibility, speed, cost efficiency and high-throughput multiplex detection capability. This article describes and discusses various DNA-based point-of-care diagnostic methods for applications in plant disease detection. Polymerase chain reaction (PCR) is the most common DNA amplification technology used for detecting various plant and animal pathogens. Subsequent to PCR-based assays, however, several types of nucleic acid amplification technologies have been developed to achieve higher sensitivity, rapid detection and suitability for field applications, such as loop-mediated isothermal amplification, helicase-dependent amplification, rolling circle amplification, recombinase polymerase amplification, and molecular inversion probes. The principles behind these technologies have been thoroughly discussed in several review papers; herein we emphasize the application of these technologies to the detection of plant pathogens by outlining the advantages and disadvantages of each technology in detail.

  18. A Differential Scanning Calorimetry Method for Construction of Continuous Cooling Transformation Diagram of Blast Furnace Slag

    Science.gov (United States)

    Gan, Lei; Zhang, Chunxia; Shangguan, Fangqin; Li, Xiuping

    2012-06-01

    The continuous cooling crystallization of a blast furnace slag was studied by applying the differential scanning calorimetry (DSC) method. A kinetic model describing the evolution of the degree of crystallization with time was obtained. Bulk cooling experiments on the molten slag, coupled with numerical simulation of heat transfer, were conducted to validate the results of the DSC method. The degrees of crystallization of the samples from the bulk cooling experiments were estimated by means of X-ray diffraction (XRD) and the DSC method. The results from the DSC cooling and bulk cooling experiments were found to be in good agreement. The continuous cooling transformation (CCT) diagram of the blast furnace slag was constructed from the crystallization kinetic model and the experimental data. The obtained CCT diagram is characterized by two crystallization noses in different temperature ranges.

  19. A longitudinal approach to changes in the motivation of Dutch pharmacists in the current continuing education system

    NARCIS (Netherlands)

    Sharon, L.; de Boer, Anthonius; Croiset, Gerda; Kusurkar, Rashmi A; Koster, Andries S.

    Objective. To explore the changes in motivation of Dutch pharmacists for Continuing Education (CE) in the Dutch CE system. Methods. Pharmacists’ motivation was measured across three time points with the Academic Motivation Scale, based on the Self-Determination Theory of motivation. The Latent

  20. Quad-Rotor Helicopter Autonomous Navigation Based on Vanishing Point Algorithm

    Directory of Open Access Journals (Sweden)

    Jialiang Wang

    2014-01-01

    Full Text Available Quad-rotor helicopters are becoming increasingly popular because they can carry out many flight missions in challenging environments, with a lower risk of damage to themselves and their surroundings. They are employed in many applications, from military operations to civilian tasks. In this paper, quad-rotor helicopter autonomous navigation based on the vanishing point fast estimation (VPFE) algorithm, using a clustering principle, is implemented. For images collected by the camera of the quad-rotor helicopter, the system preprocesses each image, removes noise interference, extracts edges using the Canny operator, and extracts straight lines by the randomized Hough transform (RHT) method. The system then obtains the position of the vanishing point, regards it as the destination point, and controls the autonomous navigation of the quad-rotor helicopter by continuous correction according to the calculated navigation error. The experimental results show that the quad-rotor helicopter can perform destination navigation well in an indoor environment.
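
    A minimal sketch of the edge-and-line stage described above, using OpenCV's Canny detector and probabilistic Hough transform as a stand-in for the randomized Hough transform, and taking the median of pairwise line intersections as a crude vanishing-point estimate (the clustering refinement is omitted and all thresholds are illustrative):

    ```python
    import cv2
    import numpy as np

    def estimate_vanishing_point(image_bgr):
        """Rough vanishing-point estimate from straight lines in an image.

        Uses Canny edges and the probabilistic Hough transform (a stand-in for the
        randomized Hough transform in the paper), then takes the median of all
        pairwise line intersections. Thresholds are illustrative.
        """
        gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
        edges = cv2.Canny(gray, 50, 150)
        lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                                minLineLength=40, maxLineGap=10)
        if lines is None:
            return None
        pts = []
        segs = [l[0] for l in lines]
        for i in range(len(segs)):
            for j in range(i + 1, len(segs)):
                x1, y1, x2, y2 = segs[i]
                x3, y3, x4, y4 = segs[j]
                d = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
                if abs(d) < 1e-6:
                    continue                      # parallel segments
                px = ((x1 * y2 - y1 * x2) * (x3 - x4) -
                      (x1 - x2) * (x3 * y4 - y3 * x4)) / d
                py = ((x1 * y2 - y1 * x2) * (y3 - y4) -
                      (y1 - y2) * (x3 * y4 - y3 * x4)) / d
                pts.append((px, py))
        return tuple(np.median(np.array(pts), axis=0)) if pts else None
    ```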

  1. Continuation of Sets of Constrained Orbit Segments

    DEFF Research Database (Denmark)

    Schilder, Frank; Brøns, Morten; Chamoun, George Chaouki

    Sets of constrained orbit segments of time-continuous flows are collections of trajectories that represent a whole or parts of an invariant set. A non-trivial but simple example is a homoclinic orbit. A typical representation of this set consists of an equilibrium point of the flow and a trajectory that starts close and returns close to this fixed point within finite time. More complicated examples are hybrid periodic orbits of piecewise smooth systems or quasi-periodic invariant tori. Even though it is possible to define generalised two-point boundary value problems for computing sets of constrained orbit segments, this is very disadvantageous in practice. In this talk we will present an algorithm that allows the efficient continuation of sets of constrained orbit segments together with the solution of the full variational problem.

  2. Cultural continuity, traditional Indigenous language, and diabetes in Alberta First Nations: a mixed methods study.

    Science.gov (United States)

    Oster, Richard T; Grier, Angela; Lightning, Rick; Mayan, Maria J; Toth, Ellen L

    2014-10-19

    We used an exploratory sequential mixed methods approach to study the association between cultural continuity, self-determination, and diabetes prevalence in First Nations in Alberta, Canada. We conducted a qualitative description in which we interviewed 10 Cree and Blackfoot leaders (members of Chief and Council) from across the province to understand cultural continuity, self-determination, and their relationship to health and diabetes in the Alberta First Nations context. Based on the qualitative findings, we then conducted a cross-sectional analysis using provincial administrative data and publicly available data for 31 First Nations communities to quantitatively examine any relationship between cultural continuity and diabetes prevalence. Cultural continuity, or "being who we are", is foundational to health in successful First Nations. Self-determination, or "being a self-sufficient Nation", stems from cultural continuity and is seriously compromised in today's Alberta Cree and Blackfoot Nations. Unfortunately, First Nations are in a continuous struggle with government policy. The intergenerational effects of colonization continue to impact the culture, which undermines the sense of self-determination and contributes to diabetes and ill health. Crude diabetes prevalence varied dramatically among First Nations, with values as low as 1.2% and as high as 18.3%. Those First Nations that appeared to have more cultural continuity (measured by traditional Indigenous language knowledge) had significantly lower diabetes prevalence after adjustment for socio-economic factors (p = 0.007). First Nations that have been better able to preserve their culture may be relatively protected from diabetes.

  3. The Lagrangian Points

    Science.gov (United States)

    Linton, J. Oliver

    2017-01-01

    There are five unique points in a star/planet system where a satellite can be placed whose orbital period is equal to that of the planet. Simple methods for calculating the positions of these points, or at least justifying their existence, are developed.
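
    For the collinear points L1 and L2, a common first approximation places them at the Hill-radius distance r ≈ a(m/3M)^(1/3) on either side of the planet. The sketch below uses Sun-Earth values purely to illustrate the call; it is the usual approximate justification, not an exact solution of the quintic that locates the Lagrangian points.

    ```python
    def hill_radius(a, m_planet, m_star):
        """Approximate distance of L1/L2 from the planet: r ~ a * (m / (3*M))**(1/3)."""
        return a * (m_planet / (3.0 * m_star)) ** (1.0 / 3.0)

    # Sun-Earth illustration (SI units)
    a = 1.496e11          # orbital radius, m
    m_earth = 5.972e24    # kg
    m_sun = 1.989e30      # kg
    print(f"L1/L2 offset ~ {hill_radius(a, m_earth, m_sun) / 1e9:.2f} million km")  # ~1.5
    ```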

  4. Fixed points of occasionally weakly biased mappings

    OpenAIRE

    Y. Mahendra Singh, M. R. Singh

    2012-01-01

    Common fixed point results due to Pant et al. [Pant et al., Weak reciprocal continuity and fixed point theorems, Ann Univ Ferrara, 57(1), 181-190 (2011)] are extended to a class of non-commuting operators called occasionally weakly biased pairs [N. Hussain, M. A. Khamsi, A. Latif, Common fixed points for JH-operators and occasionally weakly biased pairs under relaxed conditions, Nonlinear Analysis, 74, 2133-2140 (2011)]. We also provide illustrative examples to justify the improvements.

  5. Determination of disintegration rates of a 60Co point source and volume sources by the sum-peak method

    International Nuclear Information System (INIS)

    Kawano, Takao; Ebihara, Hiroshi

    1990-01-01

    The disintegration rates of ⁶⁰Co as a point source (<2 mm in diameter on a thin plastic disc) and volume sources (10-100 mL solutions in a polyethylene bottle) are determined by the sum-peak method. The sum-peak formula gives the exact disintegration rate for the point source at different positions from the detector. However, increasing the volume of the solution results in enlarged deviations from the true disintegration rate. Extended sources must be treated as an amalgam of many point sources. (author)
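
    The sum-peak formula referred to above is commonly quoted, for a two-gamma emitter such as ⁶⁰Co, as N0 ≈ T + A1·A2/A12, where A1 and A2 are the net areas of the 1173 and 1332 keV photopeaks, A12 the area of the 2505 keV sum peak, and T the total count rate. The sketch below evaluates that commonly used form with placeholder count rates; it is not necessarily the exact correction applied in the paper.

    ```python
    def sum_peak_activity(a1, a2, a12, total):
        """Sum-peak (Brinkman-type) estimate of the 60Co disintegration rate.

        a1, a2 : net count rates of the 1173 and 1332 keV photopeaks (counts/s)
        a12    : net count rate of the 2505 keV sum peak (counts/s)
        total  : total count rate of the whole spectrum (counts/s)
        Returns the estimated disintegration rate N0 = total + a1*a2/a12.
        """
        return total + a1 * a2 / a12

    # placeholder count rates, for illustration only
    print(round(sum_peak_activity(a1=950.0, a2=870.0, a12=35.0, total=2400.0), 1))
    ```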

  6. Measuring global oil trade dependencies: An application of the point-wise mutual information method

    International Nuclear Information System (INIS)

    Kharrazi, Ali; Fath, Brian D.

    2016-01-01

    Oil trade is one of the most vital networks in the global economy. In this paper, we analyze the 1998–2012 oil trade networks using the point-wise mutual information (PMI) method and determine pairwise trade preferences and dependencies. Using examples of the USA's trade partners, this research demonstrates the usefulness of the PMI method as an additional methodological tool for evaluating the outcomes of countries' decisions to engage with preferred trading partners. A positive PMI value indicates trade preference, where trade is larger than would be expected. For example, in 2012 the USA imported 2,548.7 kbpd of oil from Canada, against an expected 358.5 kbpd. Conversely, a negative PMI value indicates trade dis-preference, where the amount of trade is smaller than would be expected. For example, the 15-year average of the annual PMI between Saudi Arabia and the USA is −0.130, and between Russia and the USA −1.596. We argue that discrepancies between actual and neutral-model trade can be related to three primary factors: position, price, and politics. The PMI can quantify the political success or failure of trade preferences and can more accurately account for temporal variation of interdependencies. - Highlights: • We analyzed global oil trade networks using the point-wise mutual information method. • We identified position, price, & politics as drivers of oil trade preference. • The PMI method is useful in research on complex trade networks and dependency theory. • A time-series analysis of PMI can track dependencies & evaluate policy decisions.
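
    Point-wise mutual information for a trade flow from exporter i to importer j compares the observed share of the flow with the share expected under independence: PMI(i, j) = log[p(i, j) / (p(i)·p(j))]. A minimal sketch on a hypothetical flow matrix (the numbers are placeholders, not the 1998–2012 data):

    ```python
    import numpy as np

    def pmi_matrix(flows):
        """Point-wise mutual information of a trade flow matrix.

        flows[i, j] is the volume exported by country i to country j.
        Positive PMI marks a preferred (larger-than-expected) partnership,
        negative PMI a dis-preferred one.
        """
        flows = np.asarray(flows, dtype=float)
        p = flows / flows.sum()
        p_exp = p.sum(axis=1, keepdims=True)       # exporter marginals
        p_imp = p.sum(axis=0, keepdims=True)       # importer marginals
        with np.errstate(divide="ignore"):
            return np.log(p / (p_exp * p_imp))

    # hypothetical 3x3 flow matrix (kbpd), illustration only
    flows = [[0, 120, 30],
             [200, 0, 50],
             [10, 40, 0]]
    print(np.round(pmi_matrix(flows), 2))
    ```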

  7. Vacuum expectation value of the stress tensor in an arbitrary curved background: The covariant point-separation method

    International Nuclear Information System (INIS)

    Christensen, S.M.

    1976-01-01

    A method known as covariant geodesic point separation is developed to calculate the vacuum expectation value of the stress tensor for a massive scalar field in an arbitrary gravitational field. The vacuum expectation value will diverge because the stress-tensor operator is constructed from products of field operators evaluated at the same space-time point. To remedy this problem, one of the field operators is taken to a nearby point. The resultant vacuum expectation value is finite and may be expressed in terms of the Hadamard elementary function. This function is calculated using a curved-space generalization of Schwinger's proper-time method for calculating the Feynman Green's function. The expression for the Hadamard function is written in terms of the biscalar of geodetic interval which gives a measure of the square of the geodesic distance between the separated points. Next, using a covariant expansion in terms of the tangent to the geodesic, the stress tensor may be expanded in powers of the length of the geodesic. Covariant expressions for each divergent term and for certain terms in the finite portion of the vacuum expectation value of the stress tensor are found. The properties, uses, and limitations of the results are discussed

  8. Improving indoor air quality through the use of continual multipoint monitoring of carbon dioxide and dew point.

    Science.gov (United States)

    Bearg, D W

    1998-09-01

    This article summarizes an approach for improving the indoor air quality (IAQ) in a building by providing feedback on the performance of the ventilation system. The delivery of adequate quantities of ventilation air to all building occupants is necessary for the achievement of good IAQ. Feedback on performance includes information on the adequacy of the ventilation provided, the effectiveness of the distribution of this air, the adequacy of the duration of operation of the ventilation system, and the identification of leakage into the return plenum of either outdoor or supply air. Keeping track of ventilation system performance is important not only for maintaining good IAQ, but also for making sure that the system continues to perform as intended after changes in building use. Information on the performance of the ventilation system is obtained by means of an automated sampling system that draws air from multiple locations and delivers it to both a carbon dioxide monitor and a dew point sensor. The use of single shared sensors facilitates calibration checks and helps to guarantee data integrity. This approach to monitoring a building's ventilation system offers the possibility of achieving sustainable performance of this important aspect of good IAQ.

  9. Evaluation of 4 years of continuous δ13C(CO2) data using a moving Keeling plot method

    Science.gov (United States)

    Vardag, Sanam Noreen; Hammer, Samuel; Levin, Ingeborg

    2016-07-01

    Different carbon dioxide (CO2) emitters can be distinguished by their carbon isotope ratios. Therefore measurements of atmospheric δ13C(CO2) and CO2 concentration contain information on the CO2 source mix in the catchment area of an atmospheric measurement site. This information may be illustratively presented as the mean isotopic source signature. Recently an increasing number of continuous measurements of δ13C(CO2) and CO2 have become available, opening the door to the quantification of CO2 shares from different sources at high temporal resolution. Here, we present a method to compute the CO2 source signature (δS) continuously and evaluate our result using model data from the Stochastic Time-Inverted Lagrangian Transport model. Only when we restrict the analysis to situations which fulfill the basic assumptions of the Keeling plot method does our approach provide correct results with minimal biases in δS. On average, this bias is 0.2 ‰ with an interquartile range of about 1.2 ‰ for hourly model data. As a consequence of applying the required strict filter criteria, 85 % of the data points - mainly daytime values - need to be discarded. Applying the method to a 4-year dataset of CO2 and δ13C(CO2) measured in Heidelberg, Germany, yields a distinct seasonal cycle of δS. Disentangling this seasonal source signature into shares of source components is, however, only possible if the isotopic end members of these sources - i.e., the biosphere, δbio, and the fuel mix, δF - are known. From the mean source signature record in 2012, δbio could be reliably estimated only for summer to (-25.0 ± 1.0) ‰ and δF only for winter to (-32.5 ± 2.5) ‰. As the isotopic end members δbio and δF were shown to change over the season, no year-round estimation of the fossil fuel or biosphere share is possible from the measured mean source signature record without additional information from emission inventories or other tracer measurements.
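
    In a Keeling plot, measured δ13C(CO2) is regressed against 1/CO2 and the intercept of the fit gives the mean source signature δS; the moving variant applies that regression in a sliding window. A minimal sketch follows (the window length is a placeholder, and the strict filtering for Keeling-plot assumptions that the authors stress is not shown):

    ```python
    import numpy as np

    def moving_keeling(co2, delta13c, window=24):
        """Moving Keeling-plot source signature.

        Regresses delta13C against 1/CO2 in a sliding window; the intercept of each
        fit is the mean source signature delta_S for that window. Returns an array
        aligned with the right edge of each window (NaN where no fit is made).
        """
        co2 = np.asarray(co2, dtype=float)
        delta13c = np.asarray(delta13c, dtype=float)
        delta_s = np.full(co2.shape, np.nan)
        for i in range(window, len(co2) + 1):
            x = 1.0 / co2[i - window:i]
            y = delta13c[i - window:i]
            slope, intercept = np.polyfit(x, y, 1)
            delta_s[i - 1] = intercept
        return delta_s
    ```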

  10. Point defects in solids

    International Nuclear Information System (INIS)

    Anon.

    1978-01-01

    The principal properties of point defects are studied: thermodynamics, electronic structure, interactions with extended defects, and production by irradiation. Some measuring methods are presented: atomic diffusion, spectroscopic methods, diffuse scattering of neutrons and X-rays, positron annihilation, and molecular dynamics. Point defects in various materials are then investigated: ionic crystals, oxides, semiconductor materials, metals, intermetallic compounds, carbides, nitrides [fr

  11. The Continued Assessment of Self-Continuity and Identity

    Science.gov (United States)

    Dunkel, Curtis S.; Minor, Leslie; Babineau, Maureen

    2010-01-01

    Studies have found that self-continuity is predictive of a substantial number of important outcome variables. However, a recent series of studies brings into question the traditional method of measuring self-continuity in favor of an alternative (B. M. Baird, K. Le, & R. E. Lucas, 2006). The present study represents a further comparison of…

  12. A hybrid metaheuristic method to optimize the order of the sequences in continuous-casting

    Directory of Open Access Journals (Sweden)

    Achraf Touil

    2016-06-01

    Full Text Available In this paper, we propose a hybrid metaheuristic algorithm to maximize production and minimize processing time in steel-making and continuous casting (SCC) by optimizing the order of the sequences, where a sequence is a group of jobs with the same chemical characteristics. Based on the work of Bellabdaoui and Teghem (2006) [Bellabdaoui, A., & Teghem, J. (2006). A mixed-integer linear programming model for the continuous casting planning. International Journal of Production Economics, 104(2), 260-270.], a mixed-integer linear programming model for scheduling steelmaking continuous casting production is presented to minimize the makespan. The order of the sequences in continuous casting is assumed to be fixed. The main contribution is to analyze an additional way to determine the optimal order of sequences. A hybrid method based on simulated annealing and a genetic algorithm restricted by a tabu list (SA-GA-TL) is used to obtain the optimal order. After parameter tuning, the proposed algorithm is tested on different instances using a .NET application and the commercial solver CPLEX v12.5. The results are compared with those obtained by SA-TL (simulated annealing restricted by a tabu list).
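
    A stripped-down sketch of the simulated-annealing-with-tabu-list component for ordering sequences is shown below, using a swap neighbourhood and a generic cost callable; the genetic-algorithm part and the casting-specific makespan objective of the paper are deliberately omitted, and all parameters are placeholders.

    ```python
    import math
    import random

    def sa_with_tabu(order, cost, n_iter=5000, t0=10.0, alpha=0.999, tabu_len=50):
        """Minimize cost(order) over permutations by simulated annealing with a tabu list.

        order : initial list of sequence indices
        cost  : callable returning the objective (e.g., makespan) of an order
        A short tabu list of recently visited orders prevents immediate cycling.
        """
        current = list(order)
        best = list(current)
        best_cost = cur_cost = cost(current)
        tabu = []
        temp = t0
        for _ in range(n_iter):
            i, j = random.sample(range(len(current)), 2)
            cand = list(current)
            cand[i], cand[j] = cand[j], cand[i]
            key = tuple(cand)
            if key in tabu:
                continue
            cand_cost = cost(cand)
            if cand_cost < cur_cost or random.random() < math.exp((cur_cost - cand_cost) / temp):
                current, cur_cost = cand, cand_cost
                tabu.append(key)
                if len(tabu) > tabu_len:
                    tabu.pop(0)
                if cur_cost < best_cost:
                    best, best_cost = list(current), cur_cost
            temp *= alpha
        return best, best_cost
    ```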

  13. Simultaneous spectrophotometric determination of uranium and zirconium using cloud point extraction and multivariate methods

    International Nuclear Information System (INIS)

    Ghasemi, Jahan B.; Hashemi, Beshare; Shamsipur, Mojtaba

    2012-01-01

    A cloud point extraction (CPE) process using the nonionic surfactant Triton X-114 for the simultaneous extraction and spectrophotometric determination of uranium and zirconium in aqueous solution, combined with partial least squares (PLS) regression, is investigated. The method is based on the complexation reaction of these cations with Alizarin Red S (ARS) and subsequent micelle-mediated extraction of the products. The chemical parameters affecting the separation phase and the detection process were studied and optimized. Under the optimum experimental conditions (i.e. pH 5.2, Triton X-114 = 0.20%, equilibrium time 10 min and cloud point temperature 45 °C), calibration graphs were linear in the range of 0.01-3 mg L⁻¹ with detection limits of 2.0 and 0.80 μg L⁻¹ for U and Zr, respectively. The experimental calibration set was composed of 16 sample solutions prepared using an orthogonal design for two-component mixtures. The root mean square errors of prediction (RMSEP) for U and Zr were 0.0907 and 0.1117, respectively. The interference effect of some anions and cations was also tested. The method was applied to the simultaneous determination of U and Zr in water samples.
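
    A minimal sketch of the PLS calibration step with scikit-learn, using simulated two-component spectra in place of the measured absorbance data (the matrices, noise level and number of latent variables below are synthetic placeholders, not the paper's calibration set):

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    rng = np.random.default_rng(0)

    # synthetic calibration set: 16 mixtures, 2 analytes (e.g., U and Zr), 100 wavelengths
    concentrations = rng.uniform(0.01, 3.0, size=(16, 2))
    pure_spectra = rng.random((2, 100))                  # placeholder pure-component spectra
    spectra = concentrations @ pure_spectra + 0.01 * rng.standard_normal((16, 100))

    pls = PLSRegression(n_components=3)
    pls.fit(spectra, concentrations)

    # predict an "unknown" mixture
    unknown = np.array([[1.2, 0.4]]) @ pure_spectra
    print(np.round(pls.predict(unknown), 2))             # should be close to [1.2, 0.4]
    ```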

  14. A novel maximum power point tracking method for PV systems using fuzzy cognitive networks (FCN)

    Energy Technology Data Exchange (ETDEWEB)

    Karlis, A.D. [Electrical Machines Laboratory, Department of Electrical & Computer Engineering, Democritus University of Thrace, V. Sofias 12, 67100 Xanthi (Greece); Kottas, T.L.; Boutalis, Y.S. [Automatic Control Systems Laboratory, Department of Electrical & Computer Engineering, Democritus University of Thrace, V. Sofias 12, 67100 Xanthi (Greece)

    2007-03-15

    Maximum power point trackers (MPPTs) play an important role in photovoltaic (PV) power systems because they maximize the power output from a PV system for a given set of conditions, and therefore maximize array efficiency. This paper presents a novel MPPT method based on fuzzy cognitive networks (FCN). The new method achieves good maximum-power operation of any PV array under varying conditions such as changing insolation and temperature. The numerical results show the effectiveness of the proposed algorithm. (author)

  15. A method for partial volume correction of PET-imaged tumor heterogeneity using expectation maximization with a spatially varying point spread function

    International Nuclear Information System (INIS)

    Barbee, David L; Holden, James E; Nickles, Robert J; Jeraj, Robert; Flynn, Ryan T

    2010-01-01

    Tumor heterogeneities observed in positron emission tomography (PET) imaging are frequently compromised by partial volume effects which may affect treatment prognosis, assessment or future implementations such as biologically optimized treatment planning (dose painting). This paper presents a method for partial volume correction of PET-imaged heterogeneous tumors. A point source was scanned on a GE Discovery LS at positions of increasing radii from the scanner's center to obtain the spatially varying point spread function (PSF). PSF images were fit in three dimensions to Gaussian distributions using least squares optimization. Continuous expressions were devised for each Gaussian width as a function of radial distance, allowing for generation of the system PSF at any position in space. A spatially varying partial volume correction (SV-PVC) technique was developed using expectation maximization (EM) and a stopping criterion based on the method's correction matrix generated for each iteration. The SV-PVC was validated using a standard tumor phantom and a tumor heterogeneity phantom and was applied to a heterogeneous patient tumor. SV-PVC results were compared to results obtained from spatially invariant partial volume correction (SINV-PVC), which used directionally uniform three-dimensional kernels. SV-PVC of the standard tumor phantom increased the maximum observed sphere activity by 55 and 40% for 10 and 13 mm diameter spheres, respectively. Tumor heterogeneity phantom results demonstrated that as net changes in the EM correction matrix decreased below 35%, further iterations improved overall quantitative accuracy by less than 1%. SV-PVC of clinically observed tumors frequently exhibited changes of ±30% in regions of heterogeneity. The SV-PVC method implemented spatially varying kernel widths and automatically determined the number of iterations for optimal restoration, parameters which are arbitrarily chosen in SINV-PVC. Comparing SV-PVC to SINV-PVC demonstrated
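
    The expectation-maximization update used in such partial volume corrections has the same multiplicative form as Richardson-Lucy deconvolution. The sketch below shows a spatially invariant version with a Gaussian PSF; the paper's spatially varying kernel widths and its correction-matrix stopping criterion are not reproduced, and the PSF width is an assumed placeholder.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def em_deconvolve(observed, sigma, n_iter=20):
        """Richardson-Lucy / EM deconvolution with a (spatially invariant) Gaussian PSF.

        observed : measured PET image (non-negative numpy array)
        sigma    : Gaussian PSF width in voxels (illustrative stand-in for the
                   spatially varying PSF described in the paper)
        """
        estimate = observed.astype(float).copy()
        eps = 1e-8
        for _ in range(n_iter):
            blurred = gaussian_filter(estimate, sigma) + eps
            ratio = observed / blurred
            # for a symmetric Gaussian PSF, the adjoint blur equals the forward blur
            estimate *= gaussian_filter(ratio, sigma)
        return estimate
    ```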

  16. APPLICATION OF PARAMETER CONTINUATION METHOD FOR INVESTIGATION OF VIBROIMPACT SYSTEMS DYNAMIC BEHAVIOUR. PROBLEM STATE. SHORT SURVEY OF WORLD SCIENTIFIC LITERATURE

    Directory of Open Access Journals (Sweden)

    V.A. Bazhenov

    2014-12-01

    Full Text Available The authors study the dynamic behaviour of vibroimpact systems by a numerical parameter continuation technique combined with shooting and Newton-Raphson methods. The technique is adapted to a two-mass, two-degree-of-freedom vibroimpact system under periodic excitation. Impact is simulated by a nonlinear contact interaction force based on Hertz's contact theory. The stability or instability of the obtained periodic solutions is determined from the eigenvalues (multipliers) of the monodromy matrix, based on Floquet theory. In the present paper we describe the state of the art of the parameter continuation method for the solution of nonlinear problems and give a short survey of the extensive contemporary literature, in English and Russian, on its application to nonlinear problems. The method is applied to vibroimpact problems more rarely because of the difficulties connected with repeated impacts.
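
    The basic pattern of natural parameter continuation combined with Newton's method can be shown on a scalar equation f(x, λ) = 0: the converged solution at one parameter value seeds the Newton iteration at the next. For vibroimpact orbits the residual would come from a shooting formulation and stability from the monodromy-matrix multipliers; the example equation and step sizes below are arbitrary.

    ```python
    import numpy as np

    def continue_branch(f, dfdx, x0, lambdas, tol=1e-10, max_newton=30):
        """Natural parameter continuation of solutions of f(x, lam) = 0.

        The converged solution at each parameter value seeds Newton's method at the
        next one. For periodic orbits, f would be the shooting residual and stability
        would be judged from the monodromy-matrix multipliers (not shown here).
        """
        branch = []
        x = x0
        for lam in lambdas:
            for _ in range(max_newton):
                step = f(x, lam) / dfdx(x, lam)
                x -= step
                if abs(step) < tol:
                    break
            branch.append((lam, x))
        return branch

    # toy example: x**3 - x - lam = 0, following the branch starting near x = 1
    f = lambda x, lam: x**3 - x - lam
    dfdx = lambda x, lam: 3 * x**2 - 1
    print(continue_branch(f, dfdx, x0=1.2, lambdas=np.linspace(0.0, 2.0, 5))[-1])
    ```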

  17. Characterizing fixed points

    Directory of Open Access Journals (Sweden)

    Sanjo Zlobec

    2017-04-01

    Full Text Available A set of sufficient conditions which guarantee the existence of a point x⋆ such that f(x⋆) = x⋆ is called a "fixed point theorem". Many such theorems are named after well-known mathematicians and economists. Fixed point theorems are among the most useful ones in applied mathematics, especially in economics and game theory. A particularly important theorem in these areas is Kakutani's fixed point theorem, which ensures the existence of a fixed point for point-to-set mappings, e.g., [2, 3, 4]. John Nash developed and applied Kakutani's ideas to prove the existence of (what became known as) "Nash equilibrium" for finite games with mixed strategies for any number of players. This work earned him a Nobel Prize in Economics that he shared with two mathematicians. Nash's life was dramatized in the movie "A Beautiful Mind" in 2001. In this paper, we approach the system f(x) = x differently. Instead of studying the existence of its solutions, our objective is to determine conditions which are both necessary and sufficient for an arbitrary point x⋆ to be a fixed point, i.e., to satisfy f(x⋆) = x⋆. The existence of solutions for a continuous function f of a single variable is easy to establish using the Intermediate Value Theorem of Calculus. However, characterizing fixed points x⋆, i.e., providing answers to the question of finding both necessary and sufficient conditions for an arbitrarily given x⋆ to satisfy f(x⋆) = x⋆, is not simple even for functions of a single variable. It is possible that constructive answers do not exist. Our objective is to find them. Our work may require some less familiar tools. One of these might be the "quadratic envelope characterization of zero-derivative point" recalled in the next section. The results are taken from the author's current research project "Studying the Essence of Fixed Points". They are believed to be original. The author has received several feedbacks on the preliminary report and on parts of the project

  18. Estimation of cylinder orientation in three-dimensional point cloud using angular distance-based optimization

    Science.gov (United States)

    Su, Yun-Ting; Hu, Shuowen; Bethel, James S.

    2017-05-01

    Light detection and ranging (LIDAR) has become a widely used tool in remote sensing for mapping, surveying, modeling, and a host of other applications. The motivation behind this work is the modeling of piping systems in industrial sites, where cylinders are the most common primitive or shape. We focus on cylinder parameter estimation in three-dimensional point clouds, proposing a mathematical formulation based on angular distance to determine the cylinder orientation. We demonstrate the accuracy and robustness of the technique on synthetically generated cylinder point clouds (where the true axis orientation is known) as well as on real LIDAR data of piping systems. The proposed algorithm is compared with a discrete space Hough transform-based approach as well as a continuous space inlier approach, which iteratively discards outlier points to refine the cylinder parameter estimates. Results show that the proposed method is more computationally efficient than the Hough transform approach and is more accurate than both the Hough transform approach and the inlier method.
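
    As a point of comparison with the angular-distance formulation proposed in the paper, a common closed-form baseline estimates the axis from surface normals: the normals of an ideal cylinder are perpendicular to its axis, so the axis is the direction minimizing Σ(nᵢ·a)², i.e. the eigenvector of the normals' scatter matrix with the smallest eigenvalue. The sketch below assumes unit normals are already available (e.g., from local plane fits) and is a baseline, not the authors' method.

    ```python
    import numpy as np

    def cylinder_axis_from_normals(normals):
        """Estimate a cylinder axis direction from unit surface normals.

        For an ideal cylinder every normal is perpendicular to the axis, so the axis
        minimizes sum((n_i . a)^2): the right singular vector of the normal matrix
        with the smallest singular value. A baseline, not the angular-distance method.
        """
        N = np.asarray(normals, dtype=float)
        N /= np.linalg.norm(N, axis=1, keepdims=True)
        _, _, vt = np.linalg.svd(N, full_matrices=False)
        return vt[-1]           # direction with the smallest singular value

    # synthetic check: normals of a cylinder whose axis is the z direction
    theta = np.linspace(0, 2 * np.pi, 200, endpoint=False)
    normals = np.column_stack([np.cos(theta), np.sin(theta), 0.02 * np.random.randn(200)])
    print(np.round(np.abs(cylinder_axis_from_normals(normals)), 2))  # ~ [0, 0, 1]
    ```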

  19. Evaluating Environmental Favorability for Tropical Cyclone Development with the Method of Point-Downscaling

    Directory of Open Access Journals (Sweden)

    David S Nolan

    2011-08-01

    Full Text Available A new method is presented to determine the favorability for tropical cyclone development of an atmospheric environment, as represented by a mean sounding of temperature, humidity, and wind as a function of height. A mesoscale model with nested, moving grids is used to simulate the evolution of a weak, precursor vortex in a large domain with doubly periodic boundary conditions. The equations of motion are modified to maintain arbitrary profiles of both zonal and meridional wind as a function of height, without the necessary large-scale temperature gradients that cannot be consistent with doubly periodic boundary conditions. Comparisons between simulations using the point-downscaling method and simulations using wind shear balanced by temperature gradients illustrate both the advantages and the limitations of the technique. Further examples of what can be learned with this method are presented using both idealized and observed soundings and wind profiles.

  20. Development of Fracture Toughness Evaluation Method for Composite Materials by Non-Destructive Testing Method

    International Nuclear Information System (INIS)

    Lee, Y. T.; Kim, K. S.

    1998-01-01

    The fracture process of continuous fiber-reinforced composites is very complex because various fracture mechanisms, such as matrix cracking, debonding, delamination and fiber breaking, occur simultaneously during crack growth. If fibers cause crack bridging during crack growth, stable and unstable crack growth appear repeatedly. It is therefore very difficult to determine exactly the starting point of crack growth and the fracture toughness at the critical crack length in composites. In this research, fracture toughness tests on CFRP were carried out using acoustic emission (AE) and real-time recording of the fracture process with a video-microscope. The starting point of crack growth, the pop-in point and the point of unstable crack growth can be determined exactly. Each fracture mechanism can be classified by analyzing the fracture process through AE and the video-microscope. A more reliable method for the fracture toughness measurement of composite materials is proposed, using a combination of the R-curve method, AE and video-microscopy.