Loizou, Nicolas
2017-12-27
In this paper we study several classes of stochastic optimization algorithms enriched with heavy ball momentum. Among the methods studied are: stochastic gradient descent, stochastic Newton, stochastic proximal point and stochastic dual subspace ascent. This is the first time momentum variants of several of these methods are studied. We choose to perform our analysis in a setting in which all of the above methods are equivalent. We prove global nonassymptotic linear convergence rates for all methods and various measures of success, including primal function values, primal iterates (in L2 sense), and dual function values. We also show that the primal iterates converge at an accelerated linear rate in the L1 sense. This is the first time a linear rate is shown for the stochastic heavy ball method (i.e., stochastic gradient descent method with momentum). Under somewhat weaker conditions, we establish a sublinear convergence rate for Cesaro averages of primal iterates. Moreover, we propose a novel concept, which we call stochastic momentum, aimed at decreasing the cost of performing the momentum step. We prove linear convergence of several stochastic methods with stochastic momentum, and show that in some sparse data regimes and for sufficiently small momentum parameters, these methods enjoy better overall complexity than methods with deterministic momentum. Finally, we perform extensive numerical testing on artificial and real datasets, including data coming from average consensus problems.
Loizou, Nicolas; Richtarik, Peter
2017-01-01
In this paper we study several classes of stochastic optimization algorithms enriched with heavy ball momentum. Among the methods studied are: stochastic gradient descent, stochastic Newton, stochastic proximal point and stochastic dual subspace ascent. This is the first time momentum variants of several of these methods are studied. We choose to perform our analysis in a setting in which all of the above methods are equivalent. We prove global nonassymptotic linear convergence rates for all methods and various measures of success, including primal function values, primal iterates (in L2 sense), and dual function values. We also show that the primal iterates converge at an accelerated linear rate in the L1 sense. This is the first time a linear rate is shown for the stochastic heavy ball method (i.e., stochastic gradient descent method with momentum). Under somewhat weaker conditions, we establish a sublinear convergence rate for Cesaro averages of primal iterates. Moreover, we propose a novel concept, which we call stochastic momentum, aimed at decreasing the cost of performing the momentum step. We prove linear convergence of several stochastic methods with stochastic momentum, and show that in some sparse data regimes and for sufficiently small momentum parameters, these methods enjoy better overall complexity than methods with deterministic momentum. Finally, we perform extensive numerical testing on artificial and real datasets, including data coming from average consensus problems.
A proximal point algorithm with generalized proximal distances to BEPs
Bento, G. C.; Neto, J. X. Cruz; Lopes, J. O.; Soares Jr, P. A.; Soubeyran, A.
2014-01-01
We consider a bilevel problem involving two monotone equilibrium bifunctions and we show that this problem can be solved by a proximal point method with generalized proximal distances. We propose a framework for the convergence analysis of the sequences generated by the algorithm. This class of problems is very interesting because it covers mathematical programs and optimization problems under equilibrium constraints. As an application, we consider the problem of the stability and change dyna...
Directory of Open Access Journals (Sweden)
Kriengsak Wattanawitoon
2011-01-01
Full Text Available We prove strong and weak convergence theorems of modified hybrid proximal-point algorithms for finding a common element of the zero point of a maximal monotone operator, the set of solutions of equilibrium problems, and the set of solution of the variational inequality operators of an inverse strongly monotone in a Banach space under different conditions. Moreover, applications to complementarity problems are given. Our results modify and improve the recently announced ones by Li and Song (2008 and many authors.
Adaptive Proximal Point Algorithms for Total Variation Image Restoration
Directory of Open Access Journals (Sweden)
Ying Chen
2015-02-01
Full Text Available Image restoration is a fundamental problem in various areas of imaging sciences. This paper presents a class of adaptive proximal point algorithms (APPA with contraction strategy for total variational image restoration. In each iteration, the proposed methods choose an adaptive proximal parameter matrix which is not necessary symmetric. In fact, there is an inner extrapolation in the prediction step, which is followed by a correction step for contraction. And the inner extrapolation is implemented by an adaptive scheme. By using the framework of contraction method, global convergence result and a convergence rate of O(1/N could be established for the proposed methods. Numerical results are reported to illustrate the efficiency of the APPA methods for solving total variation image restoration problems. Comparisons with the state-of-the-art algorithms demonstrate that the proposed methods are comparable and promising.
Directory of Open Access Journals (Sweden)
Agarwal RaviP
2009-01-01
Full Text Available We glance at recent advances to the general theory of maximal (set-valued monotone mappings and their role demonstrated to examine the convex programming and closely related field of nonlinear variational inequalities. We focus mostly on applications of the super-relaxed ( -proximal point algorithm to the context of solving a class of nonlinear variational inclusion problems, based on the notion of maximal ( -monotonicity. Investigations highlighted in this communication are greatly influenced by the celebrated work of Rockafellar (1976, while others have played a significant part as well in generalizing the proximal point algorithm considered by Rockafellar (1976 to the case of the relaxed proximal point algorithm by Eckstein and Bertsekas (1992. Even for the linear convergence analysis for the overrelaxed (or super-relaxed ( -proximal point algorithm, the fundamental model for Rockafellar's case does the job. Furthermore, we attempt to explore possibilities of generalizing the Yosida regularization/approximation in light of maximal ( -monotonicity, and then applying to first-order evolution equations/inclusions.
Relatively Inexact Proximal Point Algorithm and Linear Convergence Analysis
Directory of Open Access Journals (Sweden)
Ram U. Verma
2009-01-01
Full Text Available Based on a notion of relatively maximal (m-relaxed monotonicity, the approximation solvability of a general class of inclusion problems is discussed, while generalizing Rockafellar's theorem (1976 on linear convergence using the proximal point algorithm in a real Hilbert space setting. Convergence analysis, based on this new model, is simpler and compact than that of the celebrated technique of Rockafellar in which the Lipschitz continuity at 0 of the inverse of the set-valued mapping is applied. Furthermore, it can be used to generalize the Yosida approximation, which, in turn, can be applied to first-order evolution equations as well as evolution inclusions.
Directory of Open Access Journals (Sweden)
Minghua Xu
2014-01-01
Full Text Available We consider the problem of seeking a symmetric positive semidefinite matrix in a closed convex set to approximate a given matrix. This problem may arise in several areas of numerical linear algebra or come from finance industry or statistics and thus has many applications. For solving this class of matrix optimization problems, many methods have been proposed in the literature. The proximal alternating direction method is one of those methods which can be easily applied to solve these matrix optimization problems. Generally, the proximal parameters of the proximal alternating direction method are greater than zero. In this paper, we conclude that the restriction on the proximal parameters can be relaxed for solving this kind of matrix optimization problems. Numerical experiments also show that the proximal alternating direction method with the relaxed proximal parameters is convergent and generally has a better performance than the classical proximal alternating direction method.
Euler, Simon A; Petri, Maximilian; Venderley, Melanie B; Dornan, Grant J; Schmoelz, Werner; Turnbull, Travis Lee; Plecko, Michael; Kralinger, Franz S; Millett, Peter J
2017-09-01
Varus failure is one of the most common failure modes following surgical treatment of proximal humeral fractures. Straight antegrade nails (SAN) theoretically provide increased stability by anchoring to the densest zone of the proximal humerus (subchondral zone) with the end of the nail. The aim of this study was to biomechanically investigate the characteristics of this "proximal anchoring point" (PAP). We hypothesized that the PAP would improve stability compared to the same construct without the PAP. Straight antegrade humeral nailing was performed in 20 matched pairs of human cadaveric humeri for a simulated unstable two-part fracture. Biomechanical testing, with stepwise increasing cyclic axial loading (50-N increments each 100 cycles) at an angle of 20° abduction revealed significantly higher median loads to failure for SAN constructs with the PAP (median, 450 N; range, 200-1.000 N) compared to those without the PAP (median, 325 N; range, 100-500 N; p = 0.009). SAN constructs with press-fit proximal extensions (endcaps) showed similar median loads to failure (median, 400 N; range, 200-650 N), when compared to the undersized, commercially available SAN endcaps (median, 450 N; range, 200-600 N; p = 0.240). The PAP provided significantly increased stability in SAN constructs compared to the same setup without this additional proximal anchoring point. Varus-displacing forces to the humeral head were superiorly reduced in this setting. This study provides biomechanical evidence for the "proximal anchoring point's" rationale. Straight antegrade humeral nailing may be beneficial for patients undergoing surgical treatment for unstable proximal humeral fractures to decrease secondary varus displacement and thus potentially reduce revision rates.
PHOTOJOURNALISM AND PROXIMITY IMAGES: two points of view, two professions?
Directory of Open Access Journals (Sweden)
Daniel Thierry
2011-06-01
Full Text Available For many decades, classic photojournalistic practice, firmly anchored in a creed established since Lewis Hine (1874-1940, has developed a praxis and a doxa that have barely been affected by the transformations in the various types of journalism. From the search for the “right image” which would be totally transparent by striving to refute its enunciative features from a perspective of maximumobjectivity, to the most seductive photography at supermarkets by photo agencies, the range of images seems to be decidedly framed. However, far from constituting high-powered reportingor excellent photography that is rewarded with numerous international prizes and invitations to the media-artistic world, local press photography remains in the shadows. How does oneoffer a representation of one’s self that can be shared in the local sphere? That is the first question which editors of the local daily and weekly press must grapple with. Using illustrations of the practices, this article proposes an examination of the origins ofthese practices and an analysis grounded on the originality of theauthors of these proximity photographs.
Best Proximity Point Results in Complex Valued Metric Spaces
Directory of Open Access Journals (Sweden)
Binayak S. Choudhury
2014-01-01
complex valued metric spaces. We treat the problem as that of finding the global optimal solution of a fixed point equation although the exact solution does not in general exist. We also define and use the concept of P-property in such spaces. Our results are illustrated with examples.
Best Proximity Points of Contractive-type and Nonexpansive-type Mappings
Directory of Open Access Journals (Sweden)
R. Kavitha
2018-02-01
Full Text Available The purpose of this paper is to obtain best proximity point theorems for multivalued nonexpansive-type and contractive-type mappings on complete metric spaces and on certain closed convex subsets of Banach spaces. We obtain a convergence result under some assumptions and we prove the existence of common best proximity points for a sequence of multivalued contractive-type mappings.
Inexact proximal Newton methods for self-concordant functions
DEFF Research Database (Denmark)
Li, Jinchao; Andersen, Martin Skovgaard; Vandenberghe, Lieven
2016-01-01
with an application to L1-regularized covariance selection, in which prior constraints on the sparsity pattern of the inverse covariance matrix are imposed. In the numerical experiments the proximal Newton steps are computed by an accelerated proximal gradient method, and multifrontal algorithms for positive definite...... matrices with chordal sparsity patterns are used to evaluate gradients and matrix-vector products with the Hessian of the smooth component of the objective....
Existence and Convergence of Best Proximity Points for Semi Cyclic Contraction Pairs
Directory of Open Access Journals (Sweden)
Balwant Singh Thakur
2014-02-01
Full Text Available In this article, we introduce the notion of a semi cyclic ϕ-contraction pair of mappings, which contains semi cyclic contraction pairs as a subclass. Existence and convergence results of best proximity points for semi cyclic ϕ- contraction pair of mappings are obtained.
Correction of Misclassifications Using a Proximity-Based Estimation Method
Directory of Open Access Journals (Sweden)
Shmulevich Ilya
2004-01-01
Full Text Available An estimation method for correcting misclassifications in signal and image processing is presented. The method is based on the use of context-based (temporal or spatial information in a sliding-window fashion. The classes can be purely nominal, that is, an ordering of the classes is not required. The method employs nonlinear operations based on class proximities defined by a proximity matrix. Two case studies are presented. In the first, the proposed method is applied to one-dimensional signals for processing data that are obtained by a musical key-finding algorithm. In the second, the estimation method is applied to two-dimensional signals for correction of misclassifications in images. In the first case study, the proximity matrix employed by the estimation method follows directly from music perception studies, whereas in the second case study, the optimal proximity matrix is obtained with genetic algorithms as the learning rule in a training-based optimization framework. Simulation results are presented in both case studies and the degree of improvement in classification accuracy that is obtained by the proposed method is assessed statistically using Kappa analysis.
Effect of thermal processing methods on the proximate composition ...
African Journals Online (AJOL)
The nutritive value of raw and thermal processed castor oil seed (Ricinus communis) was investigated using the following parameters; proximate composition, gross energy, mineral constituents and ricin content. Three thermal processing methods; toasting, boiling and soaking-and-boiling were used in the processing of the ...
Sensitivity Analysis of the Proximal-Based Parallel Decomposition Methods
Directory of Open Access Journals (Sweden)
Feng Ma
2014-01-01
Full Text Available The proximal-based parallel decomposition methods were recently proposed to solve structured convex optimization problems. These algorithms are eligible for parallel computation and can be used efficiently for solving large-scale separable problems. In this paper, compared with the previous theoretical results, we show that the range of the involved parameters can be enlarged while the convergence can be still established. Preliminary numerical tests on stable principal component pursuit problem testify to the advantages of the enlargement.
An efficient dose-compensation method for proximity effect correction
International Nuclear Information System (INIS)
Wang Ying; Han Weihua; Yang Xiang; Zhang Yang; Yang Fuhua; Zhang Renping
2010-01-01
A novel simple dose-compensation method is developed for proximity effect correction in electron-beam lithography. The sizes of exposed patterns depend on dose factors while other exposure parameters (including accelerate voltage, resist thickness, exposing step size, substrate material, and so on) remain constant. This method is based on two reasonable assumptions in the evaluation of the compensated dose factor: one is that the relation between dose factors and circle-diameters is linear in the range under consideration; the other is that the compensated dose factor is only affected by the nearest neighbors for simplicity. Four-layer-hexagon photonic crystal structures were fabricated as test patterns to demonstrate this method. Compared to the uncorrected structures, the homogeneity of the corrected hole-size in photonic crystal structures was clearly improved. (semiconductor technology)
Directory of Open Access Journals (Sweden)
Jie Shen
2015-01-01
Full Text Available We describe an extension of the redistributed technique form classical proximal bundle method to the inexact situation for minimizing nonsmooth nonconvex functions. The cutting-planes model we construct is not the approximation to the whole nonconvex function, but to the local convexification of the approximate objective function, and this kind of local convexification is modified dynamically in order to always yield nonnegative linearization errors. Since we only employ the approximate function values and approximate subgradients, theoretical convergence analysis shows that an approximate stationary point or some double approximate stationary point can be obtained under some mild conditions.
Dental flossing as a diagnostic method for proximal gingivitis: a validation study.
Grellmann, Alessandra Pascotini; Kantorski, Karla Zanini; Ardenghi, Thiago Machado; Moreira, Carlos Heitor Cunha; Danesi, Cristiane Cademartori; Zanatta, Fabricio Batistin
2016-05-20
This study evaluated the clinical diagnosis of proximal gingivitis by comparing two methods: dental flossing and the gingival bleeding index (GBI). One hundred subjects (aged at least 18 years, with 15% of positive proximal sites for GBI, without proximal attachment loss) were randomized into five evaluation protocols. Each protocol consisted of two assessments with a 10-minute interval between them: first GBI/second floss, first floss/second GBI, first GBI/second GBI, first tooth floss/second floss, and first gum floss-second floss. The dental floss was slid against the tooth surface (TF) and the gingival tissue (GF). The evaluated proximal sites should present teeth with established point of contact and probing depth ≤ 3mm. One trained and calibrated examiner performed all the assessments. The mean percentages of agreement and disagreement were calculated for the sites with gingival bleeding in both evaluation methods (GBI and flossing). The primary outcome was the percentage of disagreement between the assessments in the different protocols. The data were analyzed by one-way ANOVA, McNemar, chi-square and Tukey's post hoc tests, with a 5% significance level. When gingivitis was absent in the first assessment (negative GBI), bleeding was detected in the second assessment by TF and GF in 41.7% (p gingivitis in the second assessment (negative GBI), TF and GF detected bleeding in the first assessment in 38.9% (p = 0.004) and 58.3% (p gingivitis than GBI.
PROXIMAL: a method for Prediction of Xenobiotic Metabolism.
Yousofshahi, Mona; Manteiga, Sara; Wu, Charmian; Lee, Kyongbum; Hassoun, Soha
2015-12-22
Contamination of the environment with bioactive chemicals has emerged as a potential public health risk. These substances that may cause distress or disease in humans can be found in air, water and food supplies. An open question is whether these chemicals transform into potentially more active or toxic derivatives via xenobiotic metabolizing enzymes expressed in the body. We present a new prediction tool, which we call PROXIMAL (Prediction of Xenobiotic Metabolism) for identifying possible transformation products of xenobiotic chemicals in the liver. Using reaction data from DrugBank and KEGG, PROXIMAL builds look-up tables that catalog the sites and types of structural modifications performed by Phase I and Phase II enzymes. Given a compound of interest, PROXIMAL searches for substructures that match the sites cataloged in the look-up tables, applies the corresponding modifications to generate a panel of possible transformation products, and ranks the products based on the activity and abundance of the enzymes involved. PROXIMAL generates transformations that are specific for the chemical of interest by analyzing the chemical's substructures. We evaluate the accuracy of PROXIMAL's predictions through case studies on two environmental chemicals with suspected endocrine disrupting activity, bisphenol A (BPA) and 4-chlorobiphenyl (PCB3). Comparisons with published reports confirm 5 out of 7 and 17 out of 26 of the predicted derivatives for BPA and PCB3, respectively. We also compare biotransformation predictions generated by PROXIMAL with those generated by METEOR and Metaprint2D-react, two other prediction tools. PROXIMAL can predict transformations of chemicals that contain substructures recognizable by human liver enzymes. It also has the ability to rank the predicted metabolites based on the activity and abundance of enzymes involved in xenobiotic transformation.
Method to Measure Tone of Axial and Proximal Muscle
Gurfinkel, Victor S.; Cacciatore, Timothy W.; Cordo, Paul J.; Horak, Fay B.
2011-01-01
The control of tonic muscular activity remains poorly understood. While abnormal tone is commonly assessed clinically by measuring the passive resistance of relaxed limbs1, no systems are available to study tonic muscle control in a natural, active state of antigravity support. We have developed a device (Twister) to study tonic regulation of axial and proximal muscles during active postural maintenance (i.e. postural tone). Twister rotates axial body regions relative to each other about the vertical axis during stance, so as to twist the neck, trunk or hip regions. This twisting imposes length changes on axial muscles without changing the body's relationship to gravity. Because Twister does not provide postural support, tone must be regulated to counteract gravitational torques. We quantify this tonic regulation by the restive torque to twisting, which reflects the state of all muscles undergoing length changes, as well as by electromyography of relevant muscles. Because tone is characterized by long-lasting low-level muscle activity, tonic control is studied with slow movements that produce "tonic" changes in muscle length, without evoking fast "phasic" responses. Twister can be reconfigured to study various aspects of muscle tone, such as co-contraction, tonic modulation to postural changes, tonic interactions across body segments, as well as perceptual thresholds to slow axial rotation. Twister can also be used to provide a quantitative measurement of the effects of disease on axial and proximal postural tone and assess the efficacy of intervention. PMID:22214974
Genealogical series method. Hyperpolar points screen effect
International Nuclear Information System (INIS)
Gorbatov, A.M.
1991-01-01
The fundamental values of the genealogical series method -the genealogical integrals (sandwiches) have been investigated. The hyperpolar points screen effect has been found. It allows one to calculate the sandwiches for the Fermion systems with large number of particles and to ascertain the validity of the iterated-potential method as well. For the first time the genealogical-series method has been realized numerically for the central spin-independent potential
THE GROWTH POINTS OF STATISTICAL METHODS
Orlov A. I.
2014-01-01
On the basis of a new paradigm of applied mathematical statistics, data analysis and economic-mathematical methods are identified; we have also discussed five topical areas in which modern applied statistics is developing as well as the other statistical methods, i.e. five "growth points" – nonparametric statistics, robustness, computer-statistical methods, statistics of interval data, statistics of non-numeric data
Parametric methods for spatial point processes
DEFF Research Database (Denmark)
Møller, Jesper
is studied in Section 4, and Bayesian inference in Section 5. On one hand, as the development in computer technology and computational statistics continues,computationally-intensive simulation-based methods for likelihood inference probably will play a increasing role for statistical analysis of spatial...... inference procedures for parametric spatial point process models. The widespread use of sensible but ad hoc methods based on summary statistics of the kind studied in Chapter 4.3 have through the last two decades been supplied by likelihood based methods for parametric spatial point process models......(This text is submitted for the volume ‘A Handbook of Spatial Statistics' edited by A.E. Gelfand, P. Diggle, M. Fuentes, and P. Guttorp, to be published by Chapmand and Hall/CRC Press, and planned to appear as Chapter 4.4 with the title ‘Parametric methods'.) 1 Introduction This chapter considers...
An Analog-Digital Mixed Measurement Method of Inductive Proximity Sensor
Directory of Open Access Journals (Sweden)
Yi-Xin Guo
2015-12-01
Full Text Available Inductive proximity sensors (IPSs are widely used in position detection given their unique advantages. To address the problem of temperature drift, this paper presents an analog-digital mixed measurement method based on the two-dimensional look-up table. The inductance and resistance components can be separated by processing the measurement data, thus reducing temperature drift and generating quantitative outputs. This study establishes and implements a two-dimensional look-up table that reduces the online computational complexity through structural modeling and by conducting an IPS operating principle analysis. This table is effectively compressed by considering the distribution characteristics of the sample data, thus simplifying the processing circuit. Moreover, power consumption is reduced. A real-time, built-in self-test (BIST function is also designed and achieved by analyzing abnormal sample data. Experiment results show that the proposed method obtains the advantages of both analog and digital measurements, which are stable, reliable, and taken in real time, without the use of floating-point arithmetic and process-control-based components. The quantitative output of displacement measurement accelerates and stabilizes the system control and detection process. The method is particularly suitable for meeting the high-performance requirements of the aviation and aerospace fields.
International Nuclear Information System (INIS)
Lenoir, A.
2008-01-01
We focus in this thesis, on the optimization process of large systems under uncertainty, and more specifically on solving the class of so-called deterministic equivalents with the help of splitting methods. The underlying application we have in mind is the electricity unit commitment problem under climate, market and energy consumption randomness, arising at EDF. We set the natural time-space-randomness couplings related to this application and we propose two new discretization schemes to tackle the randomness one, each of them based on non-parametric estimation of conditional expectations. This constitute an alternative to the usual scenario tree construction. We use the mathematical model consisting of the sum of two convex functions, a separable one and a coupling one. On the one hand, this simplified model offers a general framework to study decomposition-coordination algorithms by elapsing technicality due to a particular choice of subsystems. On the other hand, the convexity assumption allows to take advantage of monotone operators theory and to identify proximal methods as fixed point algorithms. We underlie the differential properties of the generalized reactions we are looking for a fixed point in order to derive bounds on the speed of convergence. Then we examine two families of decomposition-coordination algorithms resulting from operator splitting methods, namely Forward-Backward and Rachford methods. We suggest some practical method of acceleration of the Rachford class methods. To this end, we analyze the method from a theoretical point of view, furnishing as a byproduct explanations to some numerical observations. Then we propose as a response some improvements. Among them, an automatic updating strategy of scaling factors can correct a potential bad initial choice. The convergence proof is made easier thanks to stability results of some operator composition with respect to graphical convergence provided before. We also submit the idea of introducing
Source splitting via the point source method
International Nuclear Information System (INIS)
Potthast, Roland; Fazi, Filippo M; Nelson, Philip A
2010-01-01
We introduce a new algorithm for source identification and field splitting based on the point source method (Potthast 1998 A point-source method for inverse acoustic and electromagnetic obstacle scattering problems IMA J. Appl. Math. 61 119–40, Potthast R 1996 A fast new method to solve inverse scattering problems Inverse Problems 12 731–42). The task is to separate the sound fields u j , j = 1, ..., n of n element of N sound sources supported in different bounded domains G 1 , ..., G n in R 3 from measurements of the field on some microphone array—mathematically speaking from the knowledge of the sum of the fields u = u 1 + ... + u n on some open subset Λ of a plane. The main idea of the scheme is to calculate filter functions g 1 ,…, g n , n element of N, to construct u l for l = 1, ..., n from u| Λ in the form u l (x) = ∫ Λ g l,x (y)u(y)ds(y), l=1,... n. (1) We will provide the complete mathematical theory for the field splitting via the point source method. In particular, we describe uniqueness, solvability of the problem and convergence and stability of the algorithm. In the second part we describe the practical realization of the splitting for real data measurements carried out at the Institute for Sound and Vibration Research at Southampton, UK. A practical demonstration of the original recording and the splitting results for real data is available online
Energy Technology Data Exchange (ETDEWEB)
Occhino, P.; Maté, M.
2017-07-01
This paper is a first attempt to examine the role played by the geography on agrarian firms’ valuations. The geography was evaluated through the physical proximity from agrarian companies to other companies and to some strategic points which ease their accessibility to external economic agents. To get our purpose, we developed an empirical application on a sample of non-listed agrarian Spanish companies located in the region of Murcia over the period 2010-2015. We applied Discount Cash Flow methodology for non-listed companies to get their valuations. With this information, we used spatial econometric techniques to analyse the spatial distribution of agrarian firms’ valuations and model the behavior of this variable. Our results supported the assertion that agrarian firms’ valuations are conditioned by the geography. We found that firms with similar valuations tend to be grouped together in the territory. In addition, we found significant effects on agrarian firms valuations derived from the geographical proximity among closer agrarian companies and from them to external agents and transport facilities.
International Nuclear Information System (INIS)
Occhino, P.; Maté, M.
2017-01-01
This paper is a first attempt to examine the role played by the geography on agrarian firms’ valuations. The geography was evaluated through the physical proximity from agrarian companies to other companies and to some strategic points which ease their accessibility to external economic agents. To get our purpose, we developed an empirical application on a sample of non-listed agrarian Spanish companies located in the region of Murcia over the period 2010-2015. We applied Discount Cash Flow methodology for non-listed companies to get their valuations. With this information, we used spatial econometric techniques to analyse the spatial distribution of agrarian firms’ valuations and model the behavior of this variable. Our results supported the assertion that agrarian firms’ valuations are conditioned by the geography. We found that firms with similar valuations tend to be grouped together in the territory. In addition, we found significant effects on agrarian firms valuations derived from the geographical proximity among closer agrarian companies and from them to external agents and transport facilities.
Pointing Verification Method for Spaceborne Lidars
Directory of Open Access Journals (Sweden)
Axel Amediek
2017-01-01
Full Text Available High precision acquisition of atmospheric parameters from the air or space by means of lidar requires accurate knowledge of laser pointing. Discrepancies between the assumed and actual pointing can introduce large errors due to the Doppler effect or a wrongly assumed air pressure at ground level. In this paper, a method for precisely quantifying these discrepancies for airborne and spaceborne lidar systems is presented. The method is based on the comparison of ground elevations derived from the lidar ranging data with high-resolution topography data obtained from a digital elevation model and allows for the derivation of the lateral and longitudinal deviation of the laser beam propagation direction. The applicability of the technique is demonstrated by using experimental data from an airborne lidar system, confirming that geo-referencing of the lidar ground spot trace with an uncertainty of less than 10 m with respect to the used digital elevation model (DEM can be obtained.
Mazi, K.; Koussis, A. D.; Destouni, G.
2013-11-01
We investigate here seawater intrusion in three prominent Mediterranean aquifers that are subject to intensive exploitation and modified hydrologic regimes by human activities: the Nile Delta Aquifer, the Israel Coastal Aquifer and the Cyprus Akrotiri Aquifer. Using a generalized analytical sharp-interface model, we review the salinization history and current status of these aquifers, and quantify their resilience/vulnerability to current and future sea intrusion forcings. We identify two different critical limits of sea intrusion under groundwater exploitation and/or climatic stress: a limit of well intrusion, at which intruded seawater reaches key locations of groundwater pumping, and a tipping point of complete sea intrusion upto the prevailing groundwater divide of a coastal aquifer. Either limit can be reached, and ultimately crossed, under intensive aquifer exploitation and/or climate-driven change. We show that sea intrusion vulnerability for different aquifer cases can be directly compared in terms of normalized intrusion performance curves. The site-specific assessments show that the advance of seawater currently seriously threatens the Nile Delta Aquifer and the Israel Coastal Aquifer. The Cyprus Akrotiri Aquifer is currently somewhat less threatened by increased seawater intrusion.
Generic primal-dual interior point methods based on a new kernel function
EL Ghami, M.; Roos, C.
2008-01-01
In this paper we present a generic primal-dual interior point methods (IPMs) for linear optimization in which the search direction depends on a univariate kernel function which is also used as proximity measure in the analysis of the algorithm. The proposed kernel function does not satisfy all the
Distributed Solutions for Loosely Coupled Feasibility Problems Using Proximal Splitting Methods
DEFF Research Database (Denmark)
Pakazad, Sina Khoshfetrat; Andersen, Martin Skovgaard; Hansson, Anders
2014-01-01
In this paper,we consider convex feasibility problems (CFPs) where the underlying sets are loosely coupled, and we propose several algorithms to solve such problems in a distributed manner. These algorithms are obtained by applying proximal splitting methods to convex minimization reformulations ...
On evaluating the robustness of spatial-proximity-based regionalization methods.
Lebecherel , L.; Andréassian , V.; Perrin , C.
2016-01-01
International audience; In absence of streamflow data to calibrate a hydrological model, its parameters are to be inferred by a regionalization method. In this technical note, we discuss a specific class of regionalization methods, those based on spatial proximity, which transfers hydrological information (typically calibrated parameter sets) from neighbor gauged stations to the target ungauged station. The efficiency of any spatialproximity-based regionalization method will depend on the den...
On evaluating the robustness of spatial-proximity-based regionalization methods.
Lebecherel, L.; Andréassian, V.; Perrin, C.
2016-01-01
In absence of streamflow data to calibrate a hydrological model, its parameters are to be inferred by a regionalization method. In this technical note, we discuss a specific class of regionalization methods, those based on spatial proximity, which transfers hydrological information (typically calibrated parameter sets) from neighbor gauged stations to the target ungauged station. The efficiency of any spatialproximity-based regionalization method will depend on the density of the available st...
Method Points: towards a metric for method complexity
Directory of Open Access Journals (Sweden)
Graham McLeod
1998-11-01
Full Text Available A metric for method complexity is proposed as an aid to choosing between competing methods, as well as in validating the effects of method integration or the products of method engineering work. It is based upon a generic method representation model previously developed by the author and adaptation of concepts used in the popular Function Point metric for system size. The proposed technique is illustrated by comparing two popular I.E. deliverables with counterparts in the object oriented Unified Modeling Language (UML. The paper recommends ways to improve the practical adoption of new methods.
The value of different imaging methods on classification in displaced proximal humeral fractures
International Nuclear Information System (INIS)
Cai Jingyu; Zhu Qingsheng
2004-01-01
Objective: To investigate the influence of common X-ray, two-dimensional computed tomography (2D-CT), spiral computed tomography (SCT), and three-dimensional (3-D) reconstruction on the classification in displaced proximal humeral fractures. Methods: Three groups were divided on the basis of various imaging methods, including group A (common X-ray), group B (common X-ray and 2D-CT), and group C (3-D reconstruction of SCT and 2D-SCT). 46 cases of displaced proximal humeral fractures were classified with Neer system. The true rate of fracture classification by use of three methods was compared with each other, and clinical significance of SCT and 3-D reconstruction was evaluated. Results: Based on operation, 46 cases of displaced proximal humeral fractures in group A included 26 cases of Neer two-part fractures, 13 cases of three-part fractures, and 7 cases of four-part fractures. The true cases of common X-ray were 22 in Neer two-part fractures and 8 in three and four-part fractures, there was significant difference between Neer two-part fractures and Neer three and four-part fractures (P<0.05); 18 cases of proximal humeral fractures in group B included 3 cases of Neer two-part fractures, 9 cases of three-part fractures, and 6 cases of four-part fractures. The true cases of common X-ray and 2D-CT were 7 in Neer three and four-part fractures. 10 cases of proximal humeral fractures in group C included 1 case of Neer two-part fracture, 5 cases of three-part fractures, and 4 cases of four-part fractures. The true cases of 3-D reconstruction, MPR of SCT, and 2D-SCT were 8 in Neer three and four-part fractures. With regard to the true cases of the classification in Neer three and four-part fractures, there was significant difference in three groups and between group A and group C (P<0.05). All SCT and 3-D reconstruction played an important role in the treatment of proximal humeral fractures. Conclusion: Series of good quality X-ray examinations were the first imaging
Che, Yonglu; Khavari, Paul A
2017-12-01
Interactions between proteins are essential for fundamental cellular processes, and the diversity of such interactions enables the vast variety of functions essential for life. A persistent goal in biological research is to develop assays that can faithfully capture different types of protein interactions to allow their study. A major step forward in this direction came with a family of methods that delineates spatial proximity of proteins as an indirect measure of protein-protein interaction. A variety of enzyme- and DNA ligation-based methods measure protein co-localization in space, capturing novel interactions that were previously too transient or low affinity to be identified. Here we review some of the methods that have been successfully used to measure spatially proximal protein-protein interactions. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
Kouzu, Keita; Tsujimoto, Hironori; Hiraki, Shuichi; Nomura, Shinsuke; Yamamoto, Junji; Ueno, Hideki
2018-06-01
The preoperative diagnosis of T stage is important in selecting limited treatments, such as laparoscopic proximal gastrectomy (LPG), which lacks the ability to palpate the tumor. Therefore, the present study examined the accuracy of preoperative diagnosis of the depth of tumor invasion in early gastric cancer from the view point of the indication for LPG. A total of 193 patients with cT1 gastric cancer underwent LPG with gastrointestinal endoscopic examinations and a series of upper gastrointestinal radiographs. The patients with pT1 were classified into the correctly diagnosed group (163 patients, 84.5%), and those with pT2 or deeper were classified into the underestimated group (30 patients, 15.5%). Factors that were associated with underestimation of tumor depth were analyzed. Tumor size in the underestimated group was significantly larger; the lesions were more frequently located in the upper third of the stomach and were more histologically diffuse, scirrhous, with infiltrative growth, and more frequent lymphatic and venous invasion. For upper third lesions, in univariate analysis, histology (diffuse type) was associated with underestimation of tumor depth. Multivariate analysis found that tumor size (≥20 mm) and histology (diffuse type) were independently associated with underestimation of tumor depth. gastric cancer in the upper third of the stomach with diffuse type histology and >20 mm needs particular attention when considering the application of LPG.
An Approximate Proximal Bundle Method to Minimize a Class of Maximum Eigenvalue Functions
Directory of Open Access Journals (Sweden)
Wei Wang
2014-01-01
Full Text Available We present an approximate nonsmooth algorithm to solve a minimization problem, in which the objective function is the sum of a maximum eigenvalue function of matrices and a convex function. The essential idea to solve the optimization problem in this paper is similar to the thought of proximal bundle method, but the difference is that we choose approximate subgradient and function value to construct approximate cutting-plane model to solve the above mentioned problem. An important advantage of the approximate cutting-plane model for objective function is that it is more stable than cutting-plane model. In addition, the approximate proximal bundle method algorithm can be given. Furthermore, the sequences generated by the algorithm converge to the optimal solution of the original problem.
On evaluating the robustness of spatial-proximity-based regionalization methods
Lebecherel, Laure; Andréassian, Vazken; Perrin, Charles
2016-08-01
In absence of streamflow data to calibrate a hydrological model, its parameters are to be inferred by a regionalization method. In this technical note, we discuss a specific class of regionalization methods, those based on spatial proximity, which transfers hydrological information (typically calibrated parameter sets) from neighbor gauged stations to the target ungauged station. The efficiency of any spatial-proximity-based regionalization method will depend on the density of the available streamgauging network, and the purpose of this note is to discuss how to assess the robustness of the regionalization method (i.e., its resilience to an increasingly sparse hydrometric network). We compare two options: (i) the random hydrometrical reduction (HRand) method, which consists in sub-sampling the existing gauging network around the target ungauged station, and (ii) the hydrometrical desert method (HDes), which consists in ignoring the closest gauged stations. Our tests suggest that the HDes method should be preferred, because it provides a more realistic view on regionalization performance.
Method for the detection of flaws in a tube proximate a contiguous member
International Nuclear Information System (INIS)
Holt, A.E.; Wehrmeister, A.E.; Whaley, H.L.
1979-01-01
A method for deriving the eddy current signature of a flaw in a tube proximate a contiguous member which is obscured in a composite signature of the flaw and contiguous member comprises subtracting from the composite signature a reference eddy current signature generated by scanning a reference or facsimile tube and contiguous member. The method is particularly applicable to detecting flaws in the tubes of heat exchangers of fossil fuel and nuclear power plants to enable the detection of flaws which would otherwise be obscured by contiguous members such as support plates supporting the tubes. (U.K.)
Interior-Point Methods for Linear Programming: A Review
Singh, J. N.; Singh, D.
2002-01-01
The paper reviews some recent advances in interior-point methods for linear programming and indicates directions in which future progress can be made. Most of the interior-point methods belong to any of three categories: affine-scaling methods, potential reduction methods and central path methods. These methods are discussed together with…
Proximal methods for the resolution of inverse problems: application to positron emission tomography
International Nuclear Information System (INIS)
Pustelnik, N.
2010-12-01
The objective of this work is to propose reliable, efficient and fast methods for minimizing convex criteria, that are found in inverse problems for imagery. We focus on restoration/reconstruction problems when data is degraded with both a linear operator and noise, where the latter is not assumed to be necessarily additive.The reliability of the method is ensured through the use of proximal algorithms, the convergence of which is guaranteed when a convex criterion is considered. Efficiency is sought through the choice of criteria adapted to the noise characteristics, the linear operators and the image specificities. Of particular interest are regularization terms based on total variation and/or sparsity of signal frame coefficients. As a consequence of the use of frames, two approaches are investigated, depending on whether the analysis or the synthesis formulation is chosen. Fast processing requirements lead us to consider proximal algorithms with a parallel structure. Theoretical results are illustrated on several large size inverse problems arising in image restoration, stereoscopy, multi-spectral imagery and decomposition into texture and geometry components. We focus on a particular application, namely Positron Emission Tomography (PET), which is particularly difficult because of the presence of a projection operator combined with Poisson noise, leading to highly corrupted data. To optimize the quality of the reconstruction, we make use of the spatio-temporal characteristics of brain tissue activity. (author)
Hirahara, Noriyuki; Monma, Hiroyuki; Shimojo, Yoshihide; Matsubara, Takeshi; Hyakudomi, Ryoji; Yano, Seiji; Tanaka, Tsuneo
2011-01-01
Abstract Here we report the method of anastomosis based on double stapling technique (hereinafter, DST) using a trans-oral anvil delivery system (EEATM OrVilTM) for reconstructing the esophagus and lifted jejunum following laparoscopic total gastrectomy or proximal gastric resection. As a basic technique, laparoscopic total gastrectomy employed Roux-en-Y reconstruction, laparoscopic proximal gastrectomy employed double tract reconstruction, and end-to-side anastomosis was used for the cut-off...
Post-Processing in the Material-Point Method
DEFF Research Database (Denmark)
Andersen, Søren; Andersen, Lars Vabbersgaard
The material-point method (MPM) is a numerical method for dynamic or static analysis of solids using a discretization in time and space. The method has shown to be successful in modelling physical problems involving large deformations, which are difficult to model with traditional numerical tools...... such as the finite element method. In the material-point method, a set of material points is utilized to track the problem in time and space, while a computational background grid is utilized to obtain spatial derivatives relevant to the physical problem. Currently, the research within the material-point method......-point method. The first idea involves associating a volume with each material point and displaying the deformation of this volume. In the discretization process, the physical domain is divided into a number of smaller volumes each represented by a simple shape; here quadrilaterals are chosen for the presented...
Fleischer, Nancy L; Lozano, Paula; Wu, Yun-Hsuan; Hardin, James W; Meng, Gang; Liese, Angela D; Fong, Geoffrey T; Thrasher, James F
2018-03-08
To examine how point-of-sale (POS) display bans, tobacco retailer density and tobacco retailer proximity were associated with smoking cessation and relapse in a cohort of smokers in Canada, where provincial POS bans were implemented differentially over time from 2004 to 2010. Data from the 2005 to 2011 administrations of the International Tobacco Control (ITC) Canada Survey, a nationally representative cohort of adult smokers, were linked via residential geocoding with tobacco retailer data to derive for each smoker a measure of retailer density and proximity. An indicator variable identified whether the smoker's province banned POS displays at the time of the interview. Outcomes included cessation for at least 1 month at follow-up among smokers from the previous wave and relapse at follow-up among smokers who had quit at the previous wave. Logistic generalised estimating equation models were used to determine the relationship between living in a province with a POS display ban, tobacco retailer density and tobacco retailer proximity with cessation (n=4388) and relapse (n=866). Provincial POS display bans were not associated with cessation. In adjusted models, POS display bans were associated with lower odds of relapse which strengthened after adjusting for retailer density and proximity, although results were not statistically significant (OR 0.66, 95% CI 0.41 to 1.07, p=0.089). Neither tobacco retailer density nor proximity was associated with cessation or relapse. Banning POS retail displays shows promise as an additional tool to prevent relapse, although these results need to be confirmed in larger longitudinal studies. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
Im, Jeong-Soo; Choi, Soon Ho; Hong, Duho; Seo, Hwa Jeong; Park, Subin; Hong, Jin Pyo
2011-01-01
This study was conducted to examine differences in proximal risk factors and suicide methods by sex and age in the national suicide mortality data in Korea. Data were collected from the National Police Agency and the National Statistical Office of Korea on suicide completers from 2004 to 2006. The 31,711 suicide case records were used to analyze suicide rates, methods, and proximal risk factors by sex and age. Suicide rate increased with age, especially in men. The most common proximal risk factor for suicide was medical illness in both sexes. The most common proximal risk factor for subjects younger than 30 years was found to be a conflict in relationships with family members, partner, or friends. Medical illness was found to increase in prevalence as a risk factor with age. Hanging/Suffocation was the most common suicide method used by both sexes. The use of drug/pesticide poisoning to suicide increased with age. A fall from height or hanging/suffocation was more popular in the younger age groups. Because proximal risk factors and suicide methods varied with sex and age, different suicide prevention measures are required after consideration of both of these parameters. Copyright © 2011 Elsevier Inc. All rights reserved.
Analysis of Stress Updates in the Material-point Method
DEFF Research Database (Denmark)
Andersen, Søren; Andersen, Lars
2009-01-01
The material-point method (MPM) is a new numerical method for analysis of large strain engineering problems. The MPM applies a dual formulation, where the state of the problem (mass, stress, strain, velocity etc.) is tracked using a finite set of material points while the governing equations...... are solved on a background computational grid. Several references state, that one of the main advantages of the material-point method is the easy application of complicated material behaviour as the constitutive response is updated individually for each material point. However, as discussed here, the MPM way...
Selective Integration in the Material-Point Method
DEFF Research Database (Denmark)
Andersen, Lars; Andersen, Søren; Damkilde, Lars
2009-01-01
The paper deals with stress integration in the material-point method. In order to avoid parasitic shear in bending, a formulation is proposed, based on selective integration in the background grid that is used to solve the governing equations. The suggested integration scheme is compared...... to a traditional material-point-method computation in which the stresses are evaluated at the material points. The deformation of a cantilever beam is analysed, assuming elastic or elastoplastic material behaviour....
Fixed-point data-collection method of video signal
International Nuclear Information System (INIS)
Tang Yu; Yin Zejie; Qian Weiming; Wu Xiaoyi
1997-01-01
The author describes a Fixed-point data-collection method of video signal. The method provides an idea of fixed-point data-collection, and has been successfully applied in the research of real-time radiography on dose field, a project supported by National Science Fund
Strike Point Control on EAST Using an Isoflux Control Method
International Nuclear Information System (INIS)
Xing Zhe; Xiao Bingjia; Luo Zhengping; Walker, M. L.; Humphreys, D. A.
2015-01-01
For the advanced tokamak, the particle deposition and thermal load on the divertor is a big challenge. By moving the strike points on divertor target plates, the position of particle deposition and thermal load can be shifted. We could adjust the Poloidal Field (PF) coil current to achieve the strike point position feedback control. Using isoflux control method, the strike point position can be controlled by controlling the X point position. On the basis of experimental data, we establish relational expressions between X point position and strike point position. Benchmark experiments are carried out to validate the correctness and robustness of the control methods. The strike point position is successfully controlled following our command in the EAST operation. (paper)
Pilot points method for conditioning multiple-point statistical facies simulation on flow data
Ma, Wei; Jafarpour, Behnam
2018-05-01
We propose a new pilot points method for conditioning discrete multiple-point statistical (MPS) facies simulation on dynamic flow data. While conditioning MPS simulation on static hard data is straightforward, their calibration against nonlinear flow data is nontrivial. The proposed method generates conditional models from a conceptual model of geologic connectivity, known as a training image (TI), by strategically placing and estimating pilot points. To place pilot points, a score map is generated based on three sources of information: (i) the uncertainty in facies distribution, (ii) the model response sensitivity information, and (iii) the observed flow data. Once the pilot points are placed, the facies values at these points are inferred from production data and then are used, along with available hard data at well locations, to simulate a new set of conditional facies realizations. While facies estimation at the pilot points can be performed using different inversion algorithms, in this study the ensemble smoother (ES) is adopted to update permeability maps from production data, which are then used to statistically infer facies types at the pilot point locations. The developed method combines the information in the flow data and the TI by using the former to infer facies values at selected locations away from the wells and the latter to ensure consistent facies structure and connectivity where away from measurement locations. Several numerical experiments are used to evaluate the performance of the developed method and to discuss its important properties.
Behera, Prateek; Aggarwal, Sameer; Kumar, Vishal; Kumar Meena, Umesh; Saibaba, Balaji
2015-09-01
Fractures of the tibia are one of the most commonly seen orthopedic injuries. Most of them result from a high velocity trauma. While intramedullary nailing of tibial diaphyseal fractures is considered as the golden standard form of treatment for such cases, many metaphyseal and metaphyseal-diaphyseal junction fractures can also be managed by nailing. Maintenance of alignment of such fractures during surgical procedure is often challenging as the pull of patellar tendon tends to extend the proximal fragment as soon as one flexes the knee for the surgical procedure. Numerous technical modifications have been described in the literature for successfully nailing such fractures including semi extended nailing, use of medial plates and external fixators among others. In this study, it was aimed to report two cases in which we used our ingenious method of applying external fixator for maintaining alignment of the fracture and aiding in the entire process of closed intramedullary nailing of metaphyseal tibial fractures by the conventional method. We were able to get good alignment during and after the closed surgery as observed on post-operative radiographs and believe that further evaluation of this technique may be of help to surgeons who want to avoid other techniques.
Varma, Ruchi; Ghosh, Jayanta
2018-06-01
A new hybrid technique, which is a combination of neural network (NN) and support vector machine, is proposed for designing of different slotted dual band proximity coupled microstrip antennas. Slots on the patch are employed to produce the second resonance along with size reduction. The proposed hybrid model provides flexibility to design the dual band antennas in the frequency range from 1 to 6 GHz. This includes DCS (1.71-1.88 GHz), PCS (1.88-1.99 GHz), UMTS (1.92-2.17 GHz), LTE2300 (2.3-2.4 GHz), Bluetooth (2.4-2.485 GHz), WiMAX (3.3-3.7 GHz), and WLAN (5.15-5.35 GHz, 5.725-5.825 GHz) bands applications. Also, the comparative study of this proposed technique is done with the existing methods like knowledge based NN and support vector machine. The proposed method is found to be more accurate in terms of % error and root mean square % error and the results are in good accord with the measured values.
Jang, Jeong Yoon; Kang, Joon-Won; Yang, Dong Hyun; Lee, Sahmin; Sun, Byung Joo; Kim, Dae-Hee; Song, Jong-Min; Kang, Duk-Hyun; Song, Jae-Kwan
2018-03-01
Overestimation of the severity of mitral regurgitation (MR) by the proximal isovelocity surface area (PISA) method has been reported. We sought to test whether angle correction (AC) of the constrained flow field is helpful to eliminate overestimation in patients with eccentric MR. In a total of 33 patients with MR due to prolapse or flail mitral valve, both echocardiography and cardiac magnetic resonance image (CMR) were performed to calculate regurgitant volume (RV). In addition to RV by conventional PISA (RV PISA ), convergence angle (α) was measured from 2-dimensional Doppler color flow maps and RV was corrected by multiplying by α/180 (RV AC ). RV measured by CMR (RV CMR ) was used as a gold standard, which was calculated by the difference between total stroke volume measured by planimetry of the short axis slices and aortic stroke volume by phase-contrast image. The correlation between RV CMR and RV by echocardiography was modest [RV CMR vs. RV PISA (r = 0.712, p < 0.001) and RV CMR vs. RV AC (r = 0.766, p < 0.001)]. However, RV PISA showed significant overestimation (RV PISA - RV CMR = 50.6 ± 40.6 mL vs. RV AC - RV CMR = 7.7 ± 23.4 mL, p < 0.001). The overall accuracy of RV PISA for diagnosis of severe MR, defined as RV ≥ 60 mL, was 57.6% (19/33), whereas it increased to 84.8% (28/33) by using RV AC ( p = 0.028). Conventional PISA method tends to provide falsely large RV in patients with eccentric MR and a simple geometric AC of the proximal constraint flow largely eliminates overestimation.
Detection method of proximal caries using line profile in digital intra-oral radiography
Energy Technology Data Exchange (ETDEWEB)
Choi, Yong Suk; Kim, Gyu Tae; Hwang, Eui Hwan; Lee, Min Ja; Choi, Sam Jin; Park, Hun Kuk [Department of Oral and Maxillofacial Radiology, School of Dentistry and Institute of Oral Biology, Kyung Hee University, Seoul (Korea, Republic of); Park, Jeong Hoon [Department of Biomedical Engineering, College of Medicine, Kyung Hee University, Seoul (Korea, Republic of)
2009-12-15
The purpose of this study was to investigate how to detect proximal caries using line profile and validate linear measurements of proximal caries lesions by basic digital manipulation of radiographic images. The X-ray images of control group (15) and caries teeth (15) from patients were used. For each image, the line profile at the proximal caries-susceptible zone was calculated. To evaluate the contrast as a function of line profile to detect proximal caries, a difference coefficient (D) that indicates the relative difference between caries and sound dentin or intact enamel was measured. Mean values of D were 0.0354 {+-} 0.0155 in non-caries and 0.2632 {+-} 0.0982 in caries (p<0.001). The mean values of caries group were higher than non-caries group and there was correlation between proximal dental caries and D. It is demonstrated that the mean value of D from caries group was higher than that of control group. From the result, values of D possess great potentiality as a new detection parameter for proximal dental caries.
Detection method of proximal caries using line profile in digital intra-oral radiography
International Nuclear Information System (INIS)
Choi, Yong Suk; Kim, Gyu Tae; Hwang, Eui Hwan; Lee, Min Ja; Choi, Sam Jin; Park, Hun Kuk; Park, Jeong Hoon
2009-01-01
The purpose of this study was to investigate how to detect proximal caries using line profile and validate linear measurements of proximal caries lesions by basic digital manipulation of radiographic images. The X-ray images of control group (15) and caries teeth (15) from patients were used. For each image, the line profile at the proximal caries-susceptible zone was calculated. To evaluate the contrast as a function of line profile to detect proximal caries, a difference coefficient (D) that indicates the relative difference between caries and sound dentin or intact enamel was measured. Mean values of D were 0.0354 ± 0.0155 in non-caries and 0.2632 ± 0.0982 in caries (p<0.001). The mean values of caries group were higher than non-caries group and there was correlation between proximal dental caries and D. It is demonstrated that the mean value of D from caries group was higher than that of control group. From the result, values of D possess great potentiality as a new detection parameter for proximal dental caries.
McDermott, Scott D.
2017-01-01
This research study uses geographic information retrieval (GIR) to georeference toponyms and points-of-interest (POI) names from a travel journal. Travel journals are an ideal data source with which to conduct this study because they are significant accounts specific to the author's experience, and contain geographic instances based on the…
Material-Point Method Analysis of Bending in Elastic Beams
DEFF Research Database (Denmark)
Andersen, Søren Mikkel; Andersen, Lars
2007-01-01
The aim of this paper is to test different types of spatial interpolation for the material-point method. The interpolations include quadratic elements and cubic splines. A brief introduction to the material-point method is given. Simple liner-elastic problems are tested, including the classical...... cantilevered beam problem. As shown in the paper, the use of negative shape functions is not consistent with the material-point method in its current form, necessitating other types of interpolation such as cubic splines in order to obtain smoother representations of field quantities. It is shown...
DEFF Research Database (Denmark)
Ekstrand, K R; Alloza, Alvaro Luna; Promisiero, L
2011-01-01
This study aimed to determine the reliability and accuracy of the ICDAS and radiographs in detecting and estimating the depth of proximal lesions on extracted teeth. The lesions were visible to the naked eye. Three trained examiners scored a total of 132 sound/carious proximal surfaces from 106 p...
C-point and V-point singularity lattice formation and index sign conversion methods
Kumar Pal, Sushanta; Ruchi; Senthilkumaran, P.
2017-06-01
The generic singularities in an ellipse field are C-points namely stars, lemons and monstars in a polarization distribution with C-point indices (-1/2), (+1/2) and (+1/2) respectively. Similar to C-point singularities, there are V-point singularities that occur in a vector field and are characterized by Poincare-Hopf index of integer values. In this paper we show that the superposition of three homogenously polarized beams in different linear states leads to the formation of polarization singularity lattice. Three point sources at the focal plane of the lens are used to create three interfering plane waves. A radial/azimuthal polarization converter (S-wave plate) placed near the focal plane modulates the polarization states of the three beams. The interference pattern is found to host C-points and V-points in a hexagonal lattice. The C-points occur at intensity maxima and V-points occur at intensity minima. Modulating the state of polarization (SOP) of three plane waves from radial to azimuthal does not essentially change the nature of polarization singularity lattice as the Poincare-Hopf index for both radial and azimuthal polarization distributions is (+1). Hence a transformation from a star to a lemon is not trivial, as such a transformation requires not a single SOP change, but a change in whole spatial SOP distribution. Further there is no change in the lattice structure and the C- and V-points appear at locations where they were present earlier. Hence to convert an interlacing star and V-point lattice into an interlacing lemon and V-point lattice, the interferometer requires modification. We show for the first time a method to change the polarity of C-point and V-point indices. This means that lemons can be converted into stars and stars can be converted into lemons. Similarly the positive V-point can be converted to negative V-point and vice versa. The intensity distribution in all these lattices is invariant as the SOPs of the three beams are changed in an
TUNNEL POINT CLOUD FILTERING METHOD BASED ON ELLIPTIC CYLINDRICAL MODEL
Directory of Open Access Journals (Sweden)
N. Zhu
2016-06-01
Full Text Available The large number of bolts and screws that attached to the subway shield ring plates, along with the great amount of accessories of metal stents and electrical equipments mounted on the tunnel walls, make the laser point cloud data include lots of non-tunnel section points (hereinafter referred to as non-points, therefore affecting the accuracy for modeling and deformation monitoring. This paper proposed a filtering method for the point cloud based on the elliptic cylindrical model. The original laser point cloud data was firstly projected onto a horizontal plane, and a searching algorithm was given to extract the edging points of both sides, which were used further to fit the tunnel central axis. Along the axis the point cloud was segmented regionally, and then fitted as smooth elliptic cylindrical surface by means of iteration. This processing enabled the automatic filtering of those inner wall non-points. Experiments of two groups showed coincident results, that the elliptic cylindrical model based method could effectively filter out the non-points, and meet the accuracy requirements for subway deformation monitoring. The method provides a new mode for the periodic monitoring of tunnel sections all-around deformation in subways routine operation and maintenance.
Evaluation of the point-centred-quarter method of sampling ...
African Journals Online (AJOL)
-quarter method.The parameter which was most efficiently sampled was species composition relativedensity) with 90% replicate similarity being achieved with 100 point-centred-quarters. However, this technique cannot be recommended, even ...
Novel Ratio Subtraction and Isoabsorptive Point Methods for ...
African Journals Online (AJOL)
Purpose: To develop and validate two innovative spectrophotometric methods used for the simultaneous determination of ambroxol hydrochloride and doxycycline in their binary mixture. Methods: Ratio subtraction and isoabsorptive point methods were used for the simultaneous determination of ambroxol hydrochloride ...
IMAGE TO POINT CLOUD METHOD OF 3D-MODELING
Directory of Open Access Journals (Sweden)
A. G. Chibunichev
2012-07-01
Full Text Available This article describes the method of constructing 3D models of objects (buildings, monuments based on digital images and a point cloud obtained by terrestrial laser scanner. The first step is the automated determination of exterior orientation parameters of digital image. We have to find the corresponding points of the image and point cloud to provide this operation. Before the corresponding points searching quasi image of point cloud is generated. After that SIFT algorithm is applied to quasi image and real image. SIFT algorithm allows to find corresponding points. Exterior orientation parameters of image are calculated from corresponding points. The second step is construction of the vector object model. Vectorization is performed by operator of PC in an interactive mode using single image. Spatial coordinates of the model are calculated automatically by cloud points. In addition, there is automatic edge detection with interactive editing available. Edge detection is performed on point cloud and on image with subsequent identification of correct edges. Experimental studies of the method have demonstrated its efficiency in case of building facade modeling.
New methods of subcooled water recognition in dew point hygrometers
Weremczuk, Jerzy; Jachowicz, Ryszard
2001-08-01
Two new methods of sub-cooled water recognition in dew point hygrometers are presented in this paper. The first one- impedance method use a new semiconductor mirror in which the dew point detector, the thermometer and the heaters were integrated all together. The second one an optical method based on a multi-section optical detector is discussed in the report. Experimental results of both methods are shown. New types of dew pont hydrometers of ability to recognized sub-cooled water were proposed.
Natural Preconditioning and Iterative Methods for Saddle Point Systems
Pestana, Jennifer
2015-01-01
© 2015 Society for Industrial and Applied Mathematics. The solution of quadratic or locally quadratic extremum problems subject to linear(ized) constraints gives rise to linear systems in saddle point form. This is true whether in the continuous or the discrete setting, so saddle point systems arising from the discretization of partial differential equation problems, such as those describing electromagnetic problems or incompressible flow, lead to equations with this structure, as do, for example, interior point methods and the sequential quadratic programming approach to nonlinear optimization. This survey concerns iterative solution methods for these problems and, in particular, shows how the problem formulation leads to natural preconditioners which guarantee a fast rate of convergence of the relevant iterative methods. These preconditioners are related to the original extremum problem and their effectiveness - in terms of rapidity of convergence - is established here via a proof of general bounds on the eigenvalues of the preconditioned saddle point matrix on which iteration convergence depends.
Material-point Method Analysis of Bending in Elastic Beams
DEFF Research Database (Denmark)
Andersen, Søren Mikkel; Andersen, Lars
The aim of this paper is to test different types of spatial interpolation for the materialpoint method. The interpolations include quadratic elements and cubic splines. A brief introduction to the material-point method is given. Simple liner-elastic problems are tested, including the classical...... cantilevered beam problem. As shown in the paper, the use of negative shape functions is not consistent with the material-point method in its current form, necessitating other types of interpolation such as cubic splines in order to obtain smoother representations of field quantities. It is shown...
Primal-Dual Interior Point Multigrid Method for Topology Optimization
Czech Academy of Sciences Publication Activity Database
Kočvara, Michal; Mohammed, S.
2016-01-01
Roč. 38, č. 5 (2016), B685-B709 ISSN 1064-8275 Grant - others:European Commission - EC(XE) 313781 Institutional support: RVO:67985556 Keywords : topology optimization * multigrid method s * interior point method Subject RIV: BA - General Mathematics Impact factor: 2.195, year: 2016 http://library.utia.cas.cz/separaty/2016/MTR/kocvara-0462418.pdf
Interior Point Methods for Large-Scale Nonlinear Programming
Czech Academy of Sciences Publication Activity Database
Lukšan, Ladislav; Matonoha, Ctirad; Vlček, Jan
2005-01-01
Roč. 20, č. 4-5 (2005), s. 569-582 ISSN 1055-6788 R&D Projects: GA AV ČR IAA1030405 Institutional research plan: CEZ:AV0Z10300504 Keywords : nonlinear programming * interior point methods * KKT systems * indefinite preconditioners * filter methods * algorithms Subject RIV: BA - General Mathematics Impact factor: 0.477, year: 2005
Novel TPPO Based Maximum Power Point Method for Photovoltaic System
Directory of Open Access Journals (Sweden)
ABBASI, M. A.
2017-08-01
Full Text Available Photovoltaic (PV system has a great potential and it is installed more when compared with other renewable energy sources nowadays. However, the PV system cannot perform optimally due to its solid reliance on climate conditions. Due to this dependency, PV system does not operate at its maximum power point (MPP. Many MPP tracking methods have been proposed for this purpose. One of these is the Perturb and Observe Method (P&O which is the most famous due to its simplicity, less cost and fast track. But it deviates from MPP in continuously changing weather conditions, especially in rapidly changing irradiance conditions. A new Maximum Power Point Tracking (MPPT method, Tetra Point Perturb and Observe (TPPO, has been proposed to improve PV system performance in changing irradiance conditions and the effects on characteristic curves of PV array module due to varying irradiance are delineated. The Proposed MPPT method has shown better results in increasing the efficiency of a PV system.
Taylor's series method for solving the nonlinear point kinetics equations
International Nuclear Information System (INIS)
Nahla, Abdallah A.
2011-01-01
Highlights: → Taylor's series method for nonlinear point kinetics equations is applied. → The general order of derivatives are derived for this system. → Stability of Taylor's series method is studied. → Taylor's series method is A-stable for negative reactivity. → Taylor's series method is an accurate computational technique. - Abstract: Taylor's series method for solving the point reactor kinetics equations with multi-group of delayed neutrons in the presence of Newtonian temperature feedback reactivity is applied and programmed by FORTRAN. This system is the couples of the stiff nonlinear ordinary differential equations. This numerical method is based on the different order derivatives of the neutron density, the precursor concentrations of i-group of delayed neutrons and the reactivity. The r th order of derivatives are derived. The stability of Taylor's series method is discussed. Three sets of applications: step, ramp and temperature feedback reactivities are computed. Taylor's series method is an accurate computational technique and stable for negative step, negative ramp and temperature feedback reactivities. This method is useful than the traditional methods for solving the nonlinear point kinetics equations.
Primal Interior Point Method for Minimization of Generalized Minimax Functions
Czech Academy of Sciences Publication Activity Database
Lukšan, Ladislav; Matonoha, Ctirad; Vlček, Jan
2010-01-01
Roč. 46, č. 4 (2010), s. 697-721 ISSN 0023-5954 R&D Projects: GA ČR GA201/09/1957 Institutional research plan: CEZ:AV0Z10300504 Keywords : unconstrained optimization * large-scale optimization * nonsmooth optimization * generalized minimax optimization * interior-point methods * modified Newton methods * variable metric methods * global convergence * computational experiments Subject RIV: BA - General Mathematics Impact factor: 0.461, year: 2010 http://dml.cz/handle/10338.dmlcz/140779
"Push back" technique: A simple method to remove broken drill bit from the proximal femur.
Chouhan, Devendra K; Sharma, Siddhartha
2015-11-18
Broken drill bits can be difficult to remove from the proximal femur and may necessitate additional surgical exploration or special instrumentation. We present a simple technique to remove a broken drill bit that does not require any special instrumentation and can be accomplished through the existing incision. This technique is useful for those cases where the length of the broken drill bit is greater than the diameter of the bone.
A new comparison method for dew-point generators
Heinonen, Martti
1999-12-01
A new method for comparing dew-point generators was developed at the Centre for Metrology and Accreditation. In this method, the generators participating in a comparison are compared with a transportable saturator unit using a dew-point comparator. The method was tested by constructing a test apparatus and by comparing it with the MIKES primary dew-point generator several times in the dew-point temperature range from -40 to +75 °C. The expanded uncertainty (k = 2) of the apparatus was estimated to be between 0.05 and 0.07 °C and the difference between the comparator system and the generator is well within these limits. In particular, all of the results obtained in the range below 0 °C are within ±0.03 °C. It is concluded that a new type of a transfer standard with characteristics most suitable for dew-point comparisons can be developed on the basis of the principles presented in this paper.
Dual reference point temperature interrogating method for distributed temperature sensor
International Nuclear Information System (INIS)
Ma, Xin; Ju, Fang; Chang, Jun; Wang, Weijie; Wang, Zongliang
2013-01-01
A novel method based on dual temperature reference points is presented to interrogate the temperature in a distributed temperature sensing (DTS) system. This new method is suitable to overcome deficiencies due to the impact of DC offsets and the gain difference in the two signal channels of the sensing system during temperature interrogation. Moreover, this method can in most cases avoid the need to calibrate the gain and DC offsets in the receiver, data acquisition and conversion. An improved temperature interrogation formula is presented and the experimental results show that this method can efficiently estimate the channel amplification and system DC offset, thus improving the system accuracy. (letter)
Analysis of Spatial Interpolation in the Material-Point Method
DEFF Research Database (Denmark)
Andersen, Søren; Andersen, Lars
2010-01-01
are obtained using quadratic elements. It is shown that for more complex problems, the use of partially negative shape functions is inconsistent with the material-point method in its current form, necessitating other types of interpolation such as cubic splines in order to obtain smoother representations...
Modeling of Landslides with the Material Point Method
DEFF Research Database (Denmark)
Andersen, Søren Mikkel; Andersen, Lars
2008-01-01
A numerical model for studying the dynamic evolution of landslides is presented. The numerical model is based on the Generalized Interpolation Material Point Method. A simplified slope with a house placed on top is analysed. An elasto-plastic material model based on the Mohr-Coulomb yield criterion...
Modelling of Landslides with the Material-point Method
DEFF Research Database (Denmark)
Andersen, Søren; Andersen, Lars
2009-01-01
A numerical model for studying the dynamic evolution of landslides is presented. The numerical model is based on the Generalized Interpolation Material Point Method. A simplified slope with a house placed on top is analysed. An elasto-plastic material model based on the Mohr-Coulomb yield criterion...
Woodie, J B; Ruggles, A J; Litsky, A S
2000-01-01
To evaluate 2 methods of midbody proximal sesamoid bone repair--fixation by a screw placed in lag fashion and circumferential wire fixation--by comparing yield load and the adjacent soft-tissue strain during monotonic loading. Experimental study. 10 paired equine cadaver forelimbs from race-trained horses. A transverse midbody osteotomy of the medial proximal sesamoid bone (PSB) was created. The osteotomy was repaired with a 4.5-mm cortex bone screw placed in lag fashion or a 1.25-mm circumferential wire. The limbs were instrumented with differential variable reluctance transducers placed in the suspensory apparatus and distal sesamoidean ligaments. The limbs were tested in axial compression in a single cycle until failure. The cortex bone screw repairs had a mean yield load of 2,908.2 N; 1 limb did not fail when tested to 5,000 N. All circumferential wire repairs failed with a mean yield load of 3,406.3 N. There was no statistical difference in mean yield load between the 2 repair methods. The maximum strain generated in the soft tissues attached to the proximal sesamoid bones was not significantly different between repair groups. All repaired limbs were able to withstand loads equal to those reportedly applied to the suspensory apparatus in vivo during walking. Each repair technique should have adequate yield strength for repair of midbody fractures of the PSB immediately after surgery.
Multi-point probe for testing electrical properties and a method of producing a multi-point probe
DEFF Research Database (Denmark)
2011-01-01
A multi-point probe for testing electrical properties of a number of specific locations of a test sample comprises a supporting body defining a first surface, a first multitude of conductive probe arms (101-101'''), each of the probe arms defining a proximal end and a distal end. The probe arms...... of contact with the supporting body, and a maximum thickness perpendicular to its perpendicular bisector and its line of contact with the supporting body. Each of the probe arms has a specific area or point of contact (111-111''') at its distal end for contacting a specific location among the number...... of specific locations of the test sample. At least one of the probe arms has an extension defining a pointing distal end providing its specific area or point of contact located offset relative to its perpendicular bisector....
Primal Interior-Point Method for Large Sparse Minimax Optimization
Czech Academy of Sciences Publication Activity Database
Lukšan, Ladislav; Matonoha, Ctirad; Vlček, Jan
2009-01-01
Roč. 45, č. 5 (2009), s. 841-864 ISSN 0023-5954 R&D Projects: GA AV ČR IAA1030405; GA ČR GP201/06/P397 Institutional research plan: CEZ:AV0Z10300504 Keywords : unconstrained optimization * large-scale optimization * minimax optimization * nonsmooth optimization * interior-point methods * modified Newton methods * variable metric methods * computational experiments Subject RIV: BA - General Mathematics Impact factor: 0.445, year: 2009 http://dml.cz/handle/10338.dmlcz/140034
Acceleration of Meshfree Radial Point Interpolation Method on Graphics Hardware
International Nuclear Information System (INIS)
Nakata, Susumu
2008-01-01
This article describes a parallel computational technique to accelerate radial point interpolation method (RPIM)-based meshfree method using graphics hardware. RPIM is one of the meshfree partial differential equation solvers that do not require the mesh structure of the analysis targets. In this paper, a technique for accelerating RPIM using graphics hardware is presented. In the method, the computation process is divided into small processes suitable for processing on the parallel architecture of the graphics hardware in a single instruction multiple data manner.
A Review on the Modified Finite Point Method
Directory of Open Access Journals (Sweden)
Nan-Jing Wu
2014-01-01
Full Text Available The objective of this paper is to make a review on recent advancements of the modified finite point method, named MFPM hereafter. This MFPM method is developed for solving general partial differential equations. Benchmark examples of employing this method to solve Laplace, Poisson, convection-diffusion, Helmholtz, mild-slope, and extended mild-slope equations are verified and then illustrated in fluid flow problems. Application of MFPM to numerical generation of orthogonal grids, which is governed by Laplace equation, is also demonstrated.
Methods for registration laser scanner point clouds in forest stands
International Nuclear Information System (INIS)
Bienert, A.; Pech, K.; Maas, H.-G.
2011-01-01
Laser scanning is a fast and efficient 3-D measurement technique to capture surface points describing the geometry of a complex object in an accurate and reliable way. Besides airborne laser scanning, terrestrial laser scanning finds growing interest for forestry applications. These two different recording platforms show large differences in resolution, recording area and scan viewing direction. Using both datasets for a combined point cloud analysis may yield advantages because of their largely complementary information. In this paper, methods will be presented to automatically register airborne and terrestrial laser scanner point clouds of a forest stand. In a first step, tree detection is performed in both datasets in an automatic manner. In a second step, corresponding tree positions are determined using RANSAC. Finally, the geometric transformation is performed, divided in a coarse and fine registration. After a coarse registration, the fine registration is done in an iterative manner (ICP) using the point clouds itself. The methods are tested and validated with a dataset of a forest stand. The presented registration results provide accuracies which fulfill the forestry requirements [de
A MODELING METHOD OF FLUTTERING LEAVES BASED ON POINT CLOUD
J. Tang; Y. Wang; Y. Zhao; Y. Zhao; W. Hao; X. Ning; K. Lv; Z. Shi; M. Zhao
2017-01-01
Leaves falling gently or fluttering are common phenomenon in nature scenes. The authenticity of leaves falling plays an important part in the dynamic modeling of natural scenes. The leaves falling model has a widely applications in the field of animation and virtual reality. We propose a novel modeling method of fluttering leaves based on point cloud in this paper. According to the shape, the weight of leaves and the wind speed, three basic trajectories of leaves falling are defined, which ar...
The Oblique Basis Method from an Engineering Point of View
International Nuclear Information System (INIS)
Gueorguiev, V G
2012-01-01
The oblique basis method is reviewed from engineering point of view related to vibration and control theory. Examples are used to demonstrate and relate the oblique basis in nuclear physics to the equivalent mathematical problems in vibration theory. The mathematical techniques, such as principal coordinates and root locus, used by vibration and control theory engineers are shown to be relevant to the Richardson - Gaudin pairing-like problems in nuclear physics.
Towards Automatic Testing of Reference Point Based Interactive Methods
Ojalehto, Vesa; Podkopaev, Dmitry; Miettinen, Kaisa
2016-01-01
In order to understand strengths and weaknesses of optimization algorithms, it is important to have access to different types of test problems, well defined performance indicators and analysis tools. Such tools are widely available for testing evolutionary multiobjective optimization algorithms. To our knowledge, there do not exist tools for analyzing the performance of interactive multiobjective optimization methods based on the reference point approach to communicating ...
Mahmood, Hafiz Sultan; Hoogmoed, Willem B; van Henten, Eldert J
2013-11-27
Fine-scale spatial information on soil properties is needed to successfully implement precision agriculture. Proximal gamma-ray spectroscopy has recently emerged as a promising tool to collect fine-scale soil information. The objective of this study was to evaluate a proximal gamma-ray spectrometer to predict several soil properties using energy-windows and full-spectrum analysis methods in two differently managed sandy loam fields: conventional and organic. In the conventional field, both methods predicted clay, pH and total nitrogen with a good accuracy (R2 ≥ 0.56) in the top 0-15 cm soil depth, whereas in the organic field, only clay content was predicted with such accuracy. The highest prediction accuracy was found for total nitrogen (R2 = 0.75) in the conventional field in the energy-windows method. Predictions were better in the top 0-15 cm soil depths than in the 15-30 cm soil depths for individual and combined fields. This implies that gamma-ray spectroscopy can generally benefit soil characterisation for annual crops where the condition of the seedbed is important. Small differences in soil structure (conventional vs. organic) cannot be determined. As for the methodology, we conclude that the energy-windows method can establish relations between radionuclide data and soil properties as accurate as the full-spectrum analysis method.
Multiperiod hydrothermal economic dispatch by an interior point method
Directory of Open Access Journals (Sweden)
Kimball L. M.
2002-01-01
Full Text Available This paper presents an interior point algorithm to solve the multiperiod hydrothermal economic dispatch (HTED. The multiperiod HTED is a large scale nonlinear programming problem. Various optimization methods have been applied to the multiperiod HTED, but most neglect important network characteristics or require decomposition into thermal and hydro subproblems. The algorithm described here exploits the special bordered block diagonal structure and sparsity of the Newton system for the first order necessary conditions to result in a fast efficient algorithm that can account for all network aspects. Applying this new algorithm challenges a conventional method for the use of available hydro resources known as the peak shaving heuristic.
Improved fixed point iterative method for blade element momentum computations
DEFF Research Database (Denmark)
Sun, Zhenye; Shen, Wen Zhong; Chen, Jin
2017-01-01
The blade element momentum (BEM) theory is widely used in aerodynamic performance calculations and optimization applications for wind turbines. The fixed point iterative method is the most commonly utilized technique to solve the BEM equations. However, this method sometimes does not converge...... are addressed through both theoretical analysis and numerical tests. A term from the BEM equations equals to zero at a critical inflow angle is the source of the convergence problems. When the initial inflow angle is set larger than the critical inflow angle and the relaxation methodology is adopted...
Evaluation of null-point detection methods on simulation data
Olshevsky, Vyacheslav; Fu, Huishan; Vaivads, Andris; Khotyaintsev, Yuri; Lapenta, Giovanni; Markidis, Stefano
2014-05-01
We model the measurements of artificial spacecraft that resemble the configuration of CLUSTER propagating in the particle-in-cell simulation of turbulent magnetic reconnection. The simulation domain contains multiple isolated X-type null-points, but the majority are O-type null-points. Simulations show that current pinches surrounded by twisted fields, analogous to laboratory pinches, are formed along the sequences of O-type nulls. In the simulation, the magnetic reconnection is mainly driven by the kinking of the pinches, at spatial scales of several ion inertial lentghs. We compute the locations of magnetic null-points and detect their type. When the satellites are separated by the fractions of ion inertial length, as it is for CLUSTER, they are able to locate both the isolated null-points, and the pinches. We apply the method to the real CLUSTER data and speculate how common are pinches in the magnetosphere, and whether they play a dominant role in the dissipation of magnetic energy.
Diercks, Ron L.; Bain, Gregory; Itoi, Eiji; Di Giacomo, Giovanni; Sugaya, Hiroyuki
2015-01-01
This chapter describes the bony structures of the proximal humerus. The proximal humerus is often regarded as consisting of four parts, which assists in understanding function and, more specially, describes the essential parts in reconstruction after fracture or in joint replacement. These are the
Hybrid kriging methods for interpolating sparse river bathymetry point data
Directory of Open Access Journals (Sweden)
Pedro Velloso Gomes Batista
Full Text Available ABSTRACT Terrain models that represent riverbed topography are used for analyzing geomorphologic changes, calculating water storage capacity, and making hydrologic simulations. These models are generated by interpolating bathymetry points. River bathymetry is usually surveyed through cross-sections, which may lead to a sparse sampling pattern. Hybrid kriging methods, such as regression kriging (RK and co-kriging (CK employ the correlation with auxiliary predictors, as well as inter-variable correlation, to improve the predictions of the target variable. In this study, we use the orthogonal distance of a (x, y point to the river centerline as a covariate for RK and CK. Given that riverbed elevation variability is abrupt transversely to the flow direction, it is expected that the greater the Euclidean distance of a point to the thalweg, the greater the bed elevation will be. The aim of this study was to evaluate if the use of the proposed covariate improves the spatial prediction of riverbed topography. In order to asses such premise, we perform an external validation. Transversal cross-sections are used to make the spatial predictions, and the point data surveyed between sections are used for testing. We compare the results from CK and RK to the ones obtained from ordinary kriging (OK. The validation indicates that RK yields the lowest RMSE among the interpolators. RK predictions represent the thalweg between cross-sections, whereas the other methods under-predict the river thalweg depth. Therefore, we conclude that RK provides a simple approach for enhancing the quality of the spatial prediction from sparse bathymetry data.
Marimuthu, K; Thilaga, M; Kathiresan, S; Xavier, R; Mas, R H M H
2012-06-01
The effects of different cooking methods (boiling, baking, frying and grilling) on proximate and mineral composition of snakehead fish were investigated. The mean content of moisture, protein, fat and ash of raw fish was found to be 77.2 ± 2.39, 13.9 ± 2.89, 5.9 ± 0.45 and 0.77 ± 0.12% respectively. The changes in the amount of protein and fat were found to be significantly higher in frying and grilling fish. The ash content increased significantly whereas that of the minerals (Na, K, Ca, Mg, Fe, Zn and Mn) was not affected in all cooking methods. Increased in Cu contents and decreased in P contents were observed in all cooking methods except grilling. In the present study, the grilling method of cooking is found to be the best for healthy eating.
Energy Technology Data Exchange (ETDEWEB)
Schultz-Fellenz, Emily S. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2015-09-09
A portion of LANL’s FY15 SPE objectives includes initial ground-based or ground-proximal investigations at the SPE Phase 2 site. The area of interest is the U2ez location in Yucca Flat. This collection serves as a baseline for discrimination of surface features and acquisition of topographic signatures prior to any development or pre-shot activities associated with SPE Phase 2. Our team originally intended to perform our field investigations using previously vetted ground-based (GB) LIDAR methodologies. However, the extended proposed time frame of the GB LIDAR data collection, and associated data processing time and delivery date, were unacceptable. After technical consultation and careful literature research, LANL identified an alternative methodology to achieve our technical objectives and fully support critical model parameterization. Very-low-altitude unmanned aerial systems (UAS) photogrammetry appeared to satisfy our objectives in lieu of GB LIDAR. The SPE Phase 2 baseline collection was used as a test of this UAS photogrammetric methodology.
Directory of Open Access Journals (Sweden)
Jingyu Sun
2014-07-01
Full Text Available To survive in the current shipbuilding industry, it is of vital importance for shipyards to have the ship components’ accuracy evaluated efficiently during most of the manufacturing steps. Evaluating components’ accuracy by comparing each component’s point cloud data scanned by laser scanners and the ship’s design data formatted in CAD cannot be processed efficiently when (1 extract components from point cloud data include irregular obstacles endogenously, or when (2 registration of the two data sets have no clear direction setting. This paper presents reformative point cloud data processing methods to solve these problems. K-d tree construction of the point cloud data fastens a neighbor searching of each point. Region growing method performed on the neighbor points of the seed point extracts the continuous part of the component, while curved surface fitting and B-spline curved line fitting at the edge of the continuous part recognize the neighbor domains of the same component divided by obstacles’ shadows. The ICP (Iterative Closest Point algorithm conducts a registration of the two sets of data after the proper registration’s direction is decided by principal component analysis. By experiments conducted at the shipyard, 200 curved shell plates are extracted from the scanned point cloud data, and registrations are conducted between them and the designed CAD data using the proposed methods for an accuracy evaluation. Results show that the methods proposed in this paper support the accuracy evaluation targeted point cloud data processing efficiently in practice.
Methods for solving the stochastic point reactor kinetic equations
International Nuclear Information System (INIS)
Quabili, E.R.; Karasulu, M.
1979-01-01
Two new methods are presented for analysis of the statistical properties of nonlinear outputs of a point reactor to stochastic non-white reactivity inputs. They are Bourret's approximation and logarithmic linearization. The results have been compared with the exact results, previously obtained in the case of Gaussian white reactivity input. It was found that when the reactivity noise has short correlation time, Bourret's approximation should be recommended because it yields results superior to those yielded by logarithmic linearization. When the correlation time is long, Bourret's approximation is not valid, but in that case, if one can assume the reactivity noise to be Gaussian, one may use the logarithmic linearization. (author)
Directory of Open Access Journals (Sweden)
Yong Ye
2016-05-01
Full Text Available A novel method for proximity detection of moving targets (with high dielectric constants using a large-scale (the size of each sensor is 31 cm × 19 cm planar capacitive sensor system (PCSS is proposed. The capacitive variation with distance is derived, and a pair of electrodes in a planar capacitive sensor unit (PCSU with a spiral shape is found to have better performance on sensitivity distribution homogeneity and dynamic range than three other shapes (comb shape, rectangular shape, and circular shape. A driving excitation circuit with a Clapp oscillator is proposed, and a capacitance measuring circuit with sensitivity of 0.21 V p − p / pF is designed. The results of static experiments and dynamic experiments demonstrate that the voltage curves of static experiments are similar to those of dynamic experiments; therefore, the static data can be used to simulate the dynamic curves. The dynamic range of proximity detection for three projectiles is up to 60 cm, and the results of the following static experiments show that the PCSU with four neighboring units has the highest sensitivity (the sensitivities of other units are at least 4% lower; when the attack angle decreases, the intensity of sensor signal increases. This proposed method leads to the design of a feasible moving target detector with simple structure and low cost, which can be applied in the interception system.
a Modeling Method of Fluttering Leaves Based on Point Cloud
Tang, J.; Wang, Y.; Zhao, Y.; Hao, W.; Ning, X.; Lv, K.; Shi, Z.; Zhao, M.
2017-09-01
Leaves falling gently or fluttering are common phenomenon in nature scenes. The authenticity of leaves falling plays an important part in the dynamic modeling of natural scenes. The leaves falling model has a widely applications in the field of animation and virtual reality. We propose a novel modeling method of fluttering leaves based on point cloud in this paper. According to the shape, the weight of leaves and the wind speed, three basic trajectories of leaves falling are defined, which are the rotation falling, the roll falling and the screw roll falling. At the same time, a parallel algorithm based on OpenMP is implemented to satisfy the needs of real-time in practical applications. Experimental results demonstrate that the proposed method is amenable to the incorporation of a variety of desirable effects.
A MODELING METHOD OF FLUTTERING LEAVES BASED ON POINT CLOUD
Directory of Open Access Journals (Sweden)
J. Tang
2017-09-01
Full Text Available Leaves falling gently or fluttering are common phenomenon in nature scenes. The authenticity of leaves falling plays an important part in the dynamic modeling of natural scenes. The leaves falling model has a widely applications in the field of animation and virtual reality. We propose a novel modeling method of fluttering leaves based on point cloud in this paper. According to the shape, the weight of leaves and the wind speed, three basic trajectories of leaves falling are defined, which are the rotation falling, the roll falling and the screw roll falling. At the same time, a parallel algorithm based on OpenMP is implemented to satisfy the needs of real-time in practical applications. Experimental results demonstrate that the proposed method is amenable to the incorporation of a variety of desirable effects.
A Monocular SLAM Method to Estimate Relative Pose During Satellite Proximity Operations
2015-03-26
Truth rotation values are obtained from the spot of a laser pointer that is fixed to the CubeSat and points 76 Figure 44. The AFIT 6U CubeSat Air Bearing...Demonstration for Autonomous Rendezvous Technology (DART) spacecraft, which irradiates retro-reflectors of a known orienta- tion with a laser to solve the...minimize the additional sub-system requirements on the spacecraft. Most spacecraft already have star trackers , which use dedicated CPUs to perform stellar
A Robust Shape Reconstruction Method for Facial Feature Point Detection
Directory of Open Access Journals (Sweden)
Shuqiu Tan
2017-01-01
Full Text Available Facial feature point detection has been receiving great research advances in recent years. Numerous methods have been developed and applied in practical face analysis systems. However, it is still a quite challenging task because of the large variability in expression and gestures and the existence of occlusions in real-world photo shoot. In this paper, we present a robust sparse reconstruction method for the face alignment problems. Instead of a direct regression between the feature space and the shape space, the concept of shape increment reconstruction is introduced. Moreover, a set of coupled overcomplete dictionaries termed the shape increment dictionary and the local appearance dictionary are learned in a regressive manner to select robust features and fit shape increments. Additionally, to make the learned model more generalized, we select the best matched parameter set through extensive validation tests. Experimental results on three public datasets demonstrate that the proposed method achieves a better robustness over the state-of-the-art methods.
Energy Technology Data Exchange (ETDEWEB)
Watanabe, Jun; Taki, Hideharu; Teraoka, Yoshiaki [Chubu Electric Power Co., Inc., Nagoya (Japan)
1989-06-01
Chubu Electric Power Co. is promoting to introduce ultrahigh-voltage transmission line into Nagoya City in order to supply electric power to the central area of the city. As a part of this project, the company constructed the tunnel extending 1400m long in the central area of Nagoya City by shield method. In the area around the route of this tunnel, high buildings for residence stand close together, the traffic is heavy, and also four railway lines cross over the tunnel. The important points of this construction are curved surface works of the tunnel at the main intersections, crosscutting works at the major trunk railway lines, and shield carry-over method. The finished inside diameter of the tunnel is 3.6m and it is protected by the primary lining of 175mm in thickness and by the secondary one of 200mm in thickness. As the construction is performed at the intersections where traffic is heavy, or required curved surface works, it must be carried out without the process of ground improvement such as chemical injection. Therefore, excavation was performed through high strength stratum, utilizing the strength of soil itself. At the portion where the tunnel runs across main railway line, it was taken in account that high strength stratum should remain as cover rock, and excavation was performed beneath the rock. At the end portion of shield, soil pile column with H-formed steel and steel bulk head were used as means of landslide protection. Thus, the ease operation and economic execution of works were realized. 1 ref., 14 figs., 2 tabs.
Directory of Open Access Journals (Sweden)
Sivakumar JT Gowder
2009-01-01
Full Text Available Ethanol-induced folate deficiency is due to effects of ethanol on folate metabolism and absorption. We have already shown by using different methods that ethanol interferes with reabsorption of folate from the proximal tubule. In this study, we have used the folate analogue, the fluorescein methotrexate (FL-MTX, in order to evaluate effects of ethanol on FL-MTX uptake by the human proximal tubular (HPT cells by using a confocal microscope and fluoroskan microplate reader. Since endothelins (ETs play a major role in a number of diseases and also in the damage induced by a variety of chemicals, we have used endothelin-B (ET-B and protein kinase-C (PKC inhibitors to evaluate the role of endothelin in ethanol-mediated FL-MTX uptake by using fluoroskan microplate reader. Confocal microscope and fluoroskan studies reveal that cellular absorption of FL-MTX is concentration-dependent. Moreover, ethanol concentration has an impact on FL-MTX uptake. Fluoroskan studies reveal that the ethanol-induced decrease in FL-MTX uptake is reversed by adding the ET-B receptor antagonist (RES-701-1 or PKC-selective inhibitor (BIM. Thus, we can conclude that ethanol may act via ET and ET in turn may act via ET-B receptor and the PKC signaling pathway to impair FL-MTX transport.
The Multiscale Material Point Method for Simulating Transient Responses
Chen, Zhen; Su, Yu-Chen; Zhang, Hetao; Jiang, Shan; Sewell, Thomas
2015-06-01
To effectively simulate multiscale transient responses such as impact and penetration without invoking master/slave treatment, the multiscale material point method (Multi-MPM) is being developed in which molecular dynamics at nanoscale and dissipative particle dynamics at mesoscale might be concurrently handled within the framework of the original MPM at microscale (continuum level). The proposed numerical scheme for concurrently linking different scales is described in this paper with simple examples for demonstration. It is shown from the preliminary study that the mapping and re-mapping procedure used in the original MPM could coarse-grain the information at fine scale and that the proposed interfacial scheme could provide a smooth link between different scales. Since the original MPM is an extension from computational fluid dynamics to solid dynamics, the proposed Multi-MPM might also become robust for dealing with multiphase interactions involving failure evolution. This work is supported in part by DTRA and NSFC.
Material-Point-Method Analysis of Collapsing Slopes
DEFF Research Database (Denmark)
Andersen, Søren; Andersen, Lars
2009-01-01
To understand the dynamic evolution of landslides and predict their physical extent, a computational model is required that is capable of analysing complex material behaviour as well as large strains and deformations. Here, a model is presented based on the so-called generalised-interpolation mat......To understand the dynamic evolution of landslides and predict their physical extent, a computational model is required that is capable of analysing complex material behaviour as well as large strains and deformations. Here, a model is presented based on the so-called generalised......, a deformed material description is introduced, based on time integration of the deformation gradient and utilising Gauss quadrature over the volume associated with each material point. The method has been implemented in a Fortran code and employed for the analysis of a landslide that took place during...
Motion estimation using point cluster method and Kalman filter.
Senesh, M; Wolf, A
2009-05-01
The most frequently used method in a three dimensional human gait analysis involves placing markers on the skin of the analyzed segment. This introduces a significant artifact, which strongly influences the bone position and orientation and joint kinematic estimates. In this study, we tested and evaluated the effect of adding a Kalman filter procedure to the previously reported point cluster technique (PCT) in the estimation of a rigid body motion. We demonstrated the procedures by motion analysis of a compound planar pendulum from indirect opto-electronic measurements of markers attached to an elastic appendage that is restrained to slide along the rigid body long axis. The elastic frequency is close to the pendulum frequency, as in the biomechanical problem, where the soft tissue frequency content is similar to the actual movement of the bones. Comparison of the real pendulum angle to that obtained by several estimation procedures--PCT, Kalman filter followed by PCT, and low pass filter followed by PCT--enables evaluation of the accuracy of the procedures. When comparing the maximal amplitude, no effect was noted by adding the Kalman filter; however, a closer look at the signal revealed that the estimated angle based only on the PCT method was very noisy with fluctuation, while the estimated angle based on the Kalman filter followed by the PCT was a smooth signal. It was also noted that the instantaneous frequencies obtained from the estimated angle based on the PCT method is more dispersed than those obtained from the estimated angle based on Kalman filter followed by the PCT method. Addition of a Kalman filter to the PCT method in the estimation procedure of rigid body motion results in a smoother signal that better represents the real motion, with less signal distortion than when using a digital low pass filter. Furthermore, it can be concluded that adding a Kalman filter to the PCT procedure substantially reduces the dispersion of the maximal and minimal
Starting Point: Linking Methods and Materials for Introductory Geoscience Courses
Manduca, C. A.; MacDonald, R. H.; Merritts, D.; Savina, M.
2004-12-01
Introductory courses are one of the most challenging teaching environments for geoscience faculty. Courses are often large, students have a wide variety of background and skills, and student motivation can include completing a geoscience major, preparing for a career as teacher, fulfilling a distribution requirement, and general interest. The Starting Point site (http://serc.carleton.edu/introgeo/index.html) provides help for faculty teaching introductory courses by linking together examples of different teaching methods that have been used in entry-level courses with information about how to use the methods and relevant references from the geoscience and education literature. Examples span the content of geoscience courses including the atmosphere, biosphere, climate, Earth surface, energy/material cycles, human dimensions/resources, hydrosphere/cryosphere, ocean, solar system, solid earth and geologic time/earth history. Methods include interactive lecture (e.g think-pair-share, concepTests, and in-class activities and problems), investigative cases, peer review, role playing, Socratic questioning, games, and field labs. A special section of the site devoted to using an Earth System approach provides resources with content information about the various aspects of the Earth system linked to examples of teaching this content. Examples of courses incorporating Earth systems content, and strategies for designing an Earth system course are also included. A similar section on Teaching with an Earth History approach explores geologic history as a vehicle for teaching geoscience concepts and as a framework for course design. The Starting Point site has been authored and reviewed by faculty around the country. Evaluation indicates that faculty find the examples particularly helpful both for direct implementation in their classes and for sparking ideas. The help provided for using different teaching methods makes the examples particularly useful. Examples are chosen from
International Nuclear Information System (INIS)
Yamaoka, Naoto; Watanabe, Wataru; Hontani, Hidekata
2010-01-01
Most of the time when we construct statistical point cloud model, we need to calculate the corresponding points. Constructed statistical model will not be the same if we use different types of method to calculate the corresponding points. This article proposes the effect to statistical model of human organ made by different types of method to calculate the corresponding points. We validated the performance of statistical model by registering a surface of an organ in a 3D medical image. We compare two methods to calculate corresponding points. The first, the 'Generalized Multi-Dimensional Scaling (GMDS)', determines the corresponding points by the shapes of two curved surfaces. The second approach, the 'Entropy-based Particle system', chooses corresponding points by calculating a number of curved surfaces statistically. By these methods we construct the statistical models and using these models we conducted registration with the medical image. For the estimation, we use non-parametric belief propagation and this method estimates not only the position of the organ but also the probability density of the organ position. We evaluate how the two different types of method that calculates corresponding points affects the statistical model by change in probability density of each points. (author)
Evaluating Point of Sale Tobacco Marketing Using Behavioral Laboratory Methods
Robinson, Jason D.; Drobes, David J.; Brandon, Thomas H.; Wetter, David W.; Cinciripini, Paul M.
2018-01-01
With passage of the 2009 Family Smoking Prevention and Tobacco Control Act, the FDA has authority to regulate tobacco advertising. As bans on traditional advertising venues and promotion of tobacco products have grown, a greater emphasis has been placed on brand exposure and price promotion in displays of products at the point-of-sale (POS). POS marketing seeks to influence attitudes and behavior towards tobacco products using a variety of explicit and implicit messaging approaches. Behavioral laboratory methods have the potential to provide the FDA with a strong scientific base for regulatory actions and a model for testing future manipulations of POS advertisements. We review aspects of POS marketing that potentially influence smoking behavior, including branding, price promotions, health claims, the marketing of emerging tobacco products, and tobacco counter-advertising. We conceptualize how POS marketing potentially influence individual attention, memory, implicit attitudes, and smoking behavior. Finally, we describe specific behavioral laboratory methods that can be adapted to measure the impact of POS marketing on these domains.
Phase-integral method allowing nearlying transition points
Fröman, Nanny
1996-01-01
The efficiency of the phase-integral method developed by the present au thors has been shown both analytically and numerically in many publica tions. With the inclusion of supplementary quantities, closely related to new Stokes constants and obtained with the aid of comparison equation technique, important classes of problems in which transition points may approach each other become accessible to accurate analytical treatment. The exposition in this monograph is of a mathematical nature but has important physical applications, some examples of which are found in the adjoined papers. Thus, we would like to emphasize that, although we aim at mathematical rigor, our treatment is made primarily with physical needs in mind. To introduce the reader into the background of this book, we start by de scribing the phase-integral approximation of arbitrary order generated from an unspecified base function. This is done in Chapter 1, which is reprinted, after minor changes, from a review article. Chapter 2 is the re...
Quantum-Mechanical Methods for Quantifying Incorporation of Contaminants in Proximal Minerals
Directory of Open Access Journals (Sweden)
Lindsay C. Shuller-Nickles
2014-07-01
Full Text Available Incorporation reactions play an important role in dictating immobilization and release pathways for chemical species in low-temperature geologic environments. Quantum-mechanical investigations of incorporation seek to characterize the stability and geometry of incorporated structures, as well as the thermodynamics and kinetics of the reactions themselves. For a thermodynamic treatment of incorporation reactions, a source of the incorporated ion and a sink for the released ion is necessary. These sources/sinks in a real geochemical system can be solids, but more commonly, they are charged aqueous species. In this contribution, we review the current methods for ab initio calculations of incorporation reactions, many of which do not consider incorporation from aqueous species. We detail a recently-developed approach for the calculation of incorporation reactions and expand on the part that is modeling the interaction of periodic solids with aqueous source and sink phases and present new research using this approach. To model these interactions, a systematic series of calculations must be done to transform periodic solid source and sink phases to aqueous-phase clusters. Examples of this process are provided for three case studies: (1 neptunyl incorporation into studtite and boltwoodite: for the layered boltwoodite, the incorporation energies are smaller (more favorable for reactions using environmentally relevant source and sink phases (i.e., ΔErxn(oxides > ΔErxn(silicates > ΔErxn(aqueous. Estimates of the solid-solution behavior of Np5+/P5+- and U6+/Si4+-boltwoodite and Np5+/Ca2+- and U6+/K+-boltwoodite solid solutions are used to predict the limit of Np-incorporation into boltwoodite (172 and 768 ppm at 300 °C, respectively; (2 uranyl and neptunyl incorporation into carbonates and sulfates: for both carbonates and sulfates, it was found that actinyl incorporation into a defect site is more favorable than incorporation into defect-free periodic
Comparison of dew point temperature estimation methods in Southwestern Georgia
Marcus D. Williams; Scott L. Goodrick; Andrew Grundstein; Marshall Shepherd
2015-01-01
Recent upward trends in acres irrigated have been linked to increasing near-surface moisture. Unfortunately, stations with dew point data for monitoring near-surface moisture are sparse. Thus, models that estimate dew points from more readily observed data sources are useful. Daily average dew temperatures were estimated and evaluated at 14 stations in...
Gran method for end point anticipation in monosegmented flow titration
Directory of Open Access Journals (Sweden)
Aquino Emerson V
2004-01-01
Full Text Available An automatic potentiometric monosegmented flow titration procedure based on Gran linearisation approach has been developed. The controlling program can estimate the end point of the titration after the addition of three or four aliquots of titrant. Alternatively, the end point can be determined by the second derivative procedure. In this case, additional volumes of titrant are added until the vicinity of the end point and three points before and after the stoichiometric point are used for end point calculation. The performance of the system was assessed by the determination of chloride in isotonic beverages and parenteral solutions. The system employs a tubular Ag2S/AgCl indicator electrode. A typical titration, performed according to the IUPAC definition, requires only 60 mL of sample and about the same volume of titrant (AgNO3 solution. A complete titration can be carried out in 1 - 5 min. The accuracy and precision (relative standard deviation of ten replicates are 2% and 1% for the Gran and 1% and 0.5% for the Gran/derivative end point determination procedures, respectively. The proposed system reduces the time to perform a titration, ensuring low sample and reagent consumption, and full automatic sampling and titrant addition in a calibration-free titration protocol.
Natural Preconditioning and Iterative Methods for Saddle Point Systems
Pestana, Jennifer; Wathen, Andrew J.
2015-01-01
or the discrete setting, so saddle point systems arising from the discretization of partial differential equation problems, such as those describing electromagnetic problems or incompressible flow, lead to equations with this structure, as do, for example
Directory of Open Access Journals (Sweden)
Yano Seiji
2011-05-01
Full Text Available Abstract Here we report the method of anastomosis based on double stapling technique (hereinafter, DST using a trans-oral anvil delivery system (EEATM OrVilTM for reconstructing the esophagus and lifted jejunum following laparoscopic total gastrectomy or proximal gastric resection. As a basic technique, laparoscopic total gastrectomy employed Roux-en-Y reconstruction, laparoscopic proximal gastrectomy employed double tract reconstruction, and end-to-side anastomosis was used for the cut-off stump of the esophagus and lifted jejunum. We used EEATM OrVilTM as a device that permitted mechanical purse-string suture similarly to conventional EEA, and endo-Surgitie. After the gastric lymph node dissection, the esophagus was cut off using an automated stapler. EEATM OrVilTM was orally and slowly inserted from the valve tip, and a small hole was created at the tip of the obliquely cut-off stump with scissors to let the valve tip pass through. Yarn was cut to disconnect the anvil from a tube and the anvil head was retained in the esophagus. The end-Surgitie was inserted at the right subcostal margin, and after the looped-shaped thread was wrapped around the esophageal stump opening, assisting Maryland forceps inserted at the left subcostal and left abdomen were used to grasp the left and right esophageal stump. The surgeon inserted anvil grasping forceps into the right abdomen, and after grasping the esophagus with the forceps, tightened the end Surgitie, thereby completing the purse-string suture on the esophageal stump. The main unit of the automated stapler was inserted from the cut-off stump of the lifted jejunum, and a trocar was made to pass through. To prevent dropout of the small intestines from the automated stapler, the automated stapler and the lifted jejunum were fastened with silk thread, the abdomen was again inflated, and the lifted jejunum was led into the abdominal cavity. When it was confirmed that the automated stapler and center rod
International Nuclear Information System (INIS)
Kazakov, V.I.; Tarapata, M.I.; Kupdiev, Yu.I.; Navakadikyan, A.A.; Buzupov, V.A.; Tabachnikov, S.I.
1989-01-01
Method consists in the direct reproduction of geometrical figures (triangles) with different internal shading presented for the memorizing for 10 s. Proposed proximate method is described. Above method for the quantitative evaluation of man fatiguability under work load is based on the principles of physiological indication of working conditions, consequent sampling of information interrelations characterizing working stress and residual phenomena after a shift. Method was tested on persons of operator and manual labour. fig. 1; tabs. 2
Note: interpreting iterative methods convergence with diffusion point of view
Hong, Dohy
2013-01-01
In this paper, we explain the convergence speed of different iteration schemes with the fluid diffusion view when solving a linear fixed point problem. This interpretation allows one to better understand why power iteration or Jacobi iteration may converge faster or slower than Gauss-Seidel iteration.
Micro-four-point Probe Hall effect Measurement method
DEFF Research Database (Denmark)
Petersen, Dirch Hjorth; Hansen, Ole; Lin, Rong
2008-01-01
barriers and with a magnetic field applied normal to the plane of the sheet. Based on this potential, analytical expressions for the measured four-point resistance in presence of a magnetic field are derived for several simple sample geometries. We show how the sheet resistance and Hall effect...
Spatio-temporal point process filtering methods with an application
Czech Academy of Sciences Publication Activity Database
Frcalová, B.; Beneš, V.; Klement, Daniel
2010-01-01
Roč. 21, 3-4 (2010), s. 240-252 ISSN 1180-4009 R&D Projects: GA AV ČR(CZ) IAA101120604 Institutional research plan: CEZ:AV0Z50110509 Keywords : cox point process * filtering * spatio-temporal modelling * spike Subject RIV: BA - General Mathematics Impact factor: 0.750, year: 2010
Gregory, J S; Testi, D; Stewart, A; Undrill, P E; Reid, D M; Aspden, R M
2004-01-01
The shape of the proximal femur has been demonstrated to be important in the occurrence of fractures of the femoral neck. Unfortunately, multiple geometric measurements frequently used to describe this shape are highly correlated. A new method, active shape modeling (ASM) has been developed to quantify the morphology of the femur. This describes the shape in terms of orthogonal modes of variation that, consequently, are all independent. To test this method, digitized standard pelvic radiographs were obtained from 26 women who had suffered a hip fracture and compared with images from 24 age-matched controls with no fracture. All subjects also had their bone mineral density (BMD) measured at five sites using dual-energy X-ray absorptiometry. An ASM was developed and principal components analysis used to identify the modes which best described the shape. Discriminant analysis was used to determine which variable, or combination of variables, was best able to discriminate between the groups. ASM alone correctly identified 74% of the individuals and placed them in the appropriate group. Only one of the BMD values (Ward's triangle) achieved a higher value (82%). A combination of Ward's triangle BMD and ASM improved the accuracy to 90%. Geometric variables used in this study were weaker, correctly classifying less than 60% of the study group. Logistic regression showed that after adjustment for age, body mass index, and BMD, the ASM data was still independently associated with hip fracture (odds ratio (OR)=1.83, 95% confidence interval 1.08 to 3.11). The odds ratio was calculated relative to a 10% increase in the probability of belonging to the fracture group. Though these initial results were obtained from a limited data set, this study shows that ASM may be a powerful method to help identify individuals at risk of a hip fracture in the future.
Comparing Single-Point and Multi-point Calibration Methods in Modulated DSC
Energy Technology Data Exchange (ETDEWEB)
Van Buskirk, Caleb Griffith [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2017-06-14
Heat capacity measurements for High Density Polyethylene (HDPE) and Ultra-high Molecular Weight Polyethylene (UHMWPE) were performed using Modulated Differential Scanning Calorimetry (mDSC) over a wide temperature range, -70 to 115 °C, with a TA Instruments Q2000 mDSC. The default calibration method for this instrument involves measuring the heat capacity of a sapphire standard at a single temperature near the middle of the temperature range of interest. However, this method often fails for temperature ranges that exceed a 50 °C interval, likely because of drift or non-linearity in the instrument's heat capacity readings over time or over the temperature range. Therefore, in this study a method was developed to calibrate the instrument using multiple temperatures and the same sapphire standard.
Krylov Subspace Methods for Saddle Point Problems with Indefinite Preconditioning
Czech Academy of Sciences Publication Activity Database
Rozložník, Miroslav; Simoncini, V.
2002-01-01
Roč. 24, č. 2 (2002), s. 368-391 ISSN 0895-4798 R&D Projects: GA ČR GA101/00/1035; GA ČR GA201/00/0080 Institutional research plan: AV0Z1030915 Keywords : saddle point problems * preconditioning * indefinite linear systems * finite precision arithmetic * conjugate gradients Subject RIV: BA - General Mathematics Impact factor: 0.753, year: 2002
Unified analysis of preconditioning methods for saddle point matrices
Czech Academy of Sciences Publication Activity Database
Axelsson, Owe
2015-01-01
Roč. 22, č. 2 (2015), s. 233-253 ISSN 1070-5325 R&D Projects: GA MŠk ED1.1.00/02.0070 Institutional support: RVO:68145535 Keywords : saddle point problems * preconditioning * spectral properties Subject RIV: BA - General Mathematics Impact factor: 1.431, year: 2015 http://onlinelibrary.wiley.com/doi/10.1002/nla.1947/pdf
Development of a Multi-Point Microwave Interferometry (MPMI) Method
Energy Technology Data Exchange (ETDEWEB)
Specht, Paul Elliott [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Cooper, Marcia A. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Jilek, Brook Anton [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
2015-09-01
A multi-point microwave interferometer (MPMI) concept was developed for non-invasively tracking a shock, reaction, or detonation front in energetic media. Initially, a single-point, heterodyne microwave interferometry capability was established. The design, construction, and verification of the single-point interferometer provided a knowledge base for the creation of the MPMI concept. The MPMI concept uses an electro-optic (EO) crystal to impart a time-varying phase lag onto a laser at the microwave frequency. Polarization optics converts this phase lag into an amplitude modulation, which is analyzed in a heterodyne interfer- ometer to detect Doppler shifts in the microwave frequency. A version of the MPMI was constructed to experimentally measure the frequency of a microwave source through the EO modulation of a laser. The successful extraction of the microwave frequency proved the underlying physical concept of the MPMI design, and highlighted the challenges associated with the longer microwave wavelength. The frequency measurements made with the current equipment contained too much uncertainty for an accurate velocity measurement. Potential alterations to the current construction are presented to improve the quality of the measured signal and enable multiple accurate velocity measurements.
George, Monica C; Lazer, Zane P; George, David S
2016-05-01
We present a technique that uses a near-point string to demonstrate the anticipated near point of multifocal and accommodating intraocular lenses (IOLs). Beads are placed on the string at distances corresponding to the near points for diffractive and accommodating IOLs. The string is held up to the patient's eye to demonstrate where each of the IOLs is likely to provide the best near vision. None of the authors has a financial or proprietary interest in any material or method mentioned. Copyright © 2016 ASCRS and ESCRS. Published by Elsevier Inc. All rights reserved.
Novel Ratio Subtraction and Isoabsorptive Point Methods for ...
African Journals Online (AJOL)
1Department of Pharmaceutical Chemistry, College of Pharmacy, King Saud University, PO Box ... Purpose: To develop and validate two innovative spectrophotometric methods used for the ..... research through the Research Group Project no.
Unemployment estimation: Spatial point referenced methods and models
Pereira, Soraia
2017-06-26
Portuguese Labor force survey, from 4th quarter of 2014 onwards, started geo-referencing the sampling units, namely the dwellings in which the surveys are carried. This opens new possibilities in analysing and estimating unemployment and its spatial distribution across any region. The labor force survey choose, according to an preestablished sampling criteria, a certain number of dwellings across the nation and survey the number of unemployed in these dwellings. Based on this survey, the National Statistical Institute of Portugal presently uses direct estimation methods to estimate the national unemployment figures. Recently, there has been increased interest in estimating these figures in smaller areas. Direct estimation methods, due to reduced sampling sizes in small areas, tend to produce fairly large sampling variations therefore model based methods, which tend to
Directory of Open Access Journals (Sweden)
Kresno Wikan Sadono
2016-12-01
Full Text Available Persamaan differensial banyak digunakan untuk menggambarkan berbagai fenomena dalam bidang sains dan rekayasa. Berbagai masalah komplek dalam kehidupan sehari-hari dapat dimodelkan dengan persamaan differensial dan diselesaikan dengan metode numerik. Salah satu metode numerik, yaitu metode meshfree atau meshless berkembang akhir-akhir ini, tanpa proses pembuatan elemen pada domain. Penelitian ini menggabungkan metode meshless yaitu radial basis point interpolation method (RPIM dengan integrasi waktu discontinuous Galerkin method (DGM, metode ini disebut RPIM-DGM. Metode RPIM-DGM diaplikasikan pada advection equation pada satu dimensi. RPIM menggunakan basis function multiquadratic function (MQ dan integrasi waktu diturunkan untuk linear-DGM maupun quadratic-DGM. Hasil simulasi menunjukkan, metode ini mendekati hasil analitis dengan baik. Hasil simulasi numerik dengan RPIM DGM menunjukkan semakin banyak node dan semakin kecil time increment menunjukkan hasil numerik semakin akurat. Hasil lain menunjukkan, integrasi numerik dengan quadratic-DGM untuk suatu time increment dan jumlah node tertentu semakin meningkatkan akurasi dibandingkan dengan linear-DGM. [Title: Numerical solution of advection equation with radial basis interpolation method and discontinuous Galerkin method for time integration] Differential equation is widely used to describe a variety of phenomena in science and engineering. A variety of complex issues in everyday life can be modeled with differential equations and solved by numerical method. One of the numerical methods, the method meshfree or meshless developing lately, without making use of the elements in the domain. The research combines methods meshless, i.e. radial basis point interpolation method with discontinuous Galerkin method as time integration method. This method is called RPIM-DGM. The RPIM-DGM applied to one dimension advection equation. The RPIM using basis function multiquadratic function and time
Distributed Interior-point Method for Loosely Coupled Problems
DEFF Research Database (Denmark)
Pakazad, Sina Khoshfetrat; Hansson, Anders; Andersen, Martin Skovgaard
2014-01-01
In this paper, we put forth distributed algorithms for solving loosely coupled unconstrained and constrained optimization problems. Such problems are usually solved using algorithms that are based on a combination of decomposition and first order methods. These algorithms are commonly very slow a...
Sampling point selection for energy estimation in the quasicontinuum method
Beex, L.A.A.; Peerlings, R.H.J.; Geers, M.G.D.
2010-01-01
The quasicontinuum (QC) method reduces computational costs of atomistic calculations by using interpolation between a small number of so-called repatoms to represent the displacements of the complete lattice and by selecting a small number of sampling atoms to estimate the total potential energy of
Miller, Stephen; Pike, James; Chapman, Jared; Xie, Bin; Hilton, Brian N.; Ames, Susan L.; Stacy, Alan W.
2017-01-01
This study examines the point-of-sale marketing practices used to promote electronic cigarettes at stores near schools that serve at-risk youths. One hundred stores selling tobacco products within a half-mile of alternative high schools in Southern California were assessed for this study. Seventy percent of stores in the sample sold electronic…
Czech Academy of Sciences Publication Activity Database
Ondráček, Jakub; Stavárek, Petr; Jiřičný, Vladimír; Staněk, Vladimír
2006-01-01
Roč. 20, č. 2 (2006), s. 147-155 ISSN 0352-9568 R&D Projects: GA ČR(CZ) GA104/03/1558 Institutional research plan: CEZ:AV0Z40720504 Keywords : counter-current flow * flooding point * axial dispersion Subject RIV: CI - Industrial Chemistry, Chemical Engineering Impact factor: 0.357, year: 2006
The Closest Point Method and Multigrid Solvers for Elliptic Equations on Surfaces
Chen, Yujia; Macdonald, Colin B.
2015-01-01
© 2015 Society for Industrial and Applied Mathematics. Elliptic partial differential equations are important from both application and analysis points of view. In this paper we apply the closest point method to solve elliptic equations on general
Rainfall Deduction Method for Estimating Non-Point Source Pollution Load for Watershed
Cai, Ming; Li, Huai-en; KAWAKAMI, Yoji
2004-01-01
The water pollution can be divided into point source pollution (PSP) and non-point source pollution (NSP). Since the point source pollution has been controlled, the non-point source pollution is becoming the main pollution source. The prediction of NSP load is being increasingly important in water pollution controlling and planning in watershed. Considering the monitoring data shortage of NPS in China, a practical estimation method of non-point source pollution load --- rainfall deduction met...
CREPT-MCNP code for efficiency calibration of HPGe detectors with the representative point method.
Saegusa, Jun
2008-01-01
The representative point method for the efficiency calibration of volume samples has been previously proposed. For smoothly implementing the method, a calculation code named CREPT-MCNP has been developed. The code estimates the position of a representative point which is intrinsic to each shape of volume sample. The self-absorption correction factors are also given to make correction on the efficiencies measured at the representative point with a standard point source. Features of the CREPT-MCNP code are presented.
The Purification Method of Matching Points Based on Principal Component Analysis
Directory of Open Access Journals (Sweden)
DONG Yang
2017-02-01
Full Text Available The traditional purification method of matching points usually uses a small number of the points as initial input. Though it can meet most of the requirements of point constraints, the iterative purification solution is easy to fall into local extreme, which results in the missing of correct matching points. To solve this problem, we introduce the principal component analysis method to use the whole point set as initial input. And thorough mismatching points step eliminating and robust solving, more accurate global optimal solution, which intends to reduce the omission rate of correct matching points and thus reaches better purification effect, can be obtained. Experimental results show that this method can obtain the global optimal solution under a certain original false matching rate, and can decrease or avoid the omission of correct matching points.
A Bayesian MCMC method for point process models with intractable normalising constants
DEFF Research Database (Denmark)
Berthelsen, Kasper Klitgaard; Møller, Jesper
2004-01-01
to simulate from the "unknown distribution", perfect simulation algorithms become useful. We illustrate the method in cases whre the likelihood is given by a Markov point process model. Particularly, we consider semi-parametric Bayesian inference in connection to both inhomogeneous Markov point process models...... and pairwise interaction point processes....
Ashikaga, Takashi; Inagaki, Hiroshi; Satoh, Yasuhiro; Isobe, Mitsuaki
2012-01-01
Complications of retained or entrapped equipment in the coronary system are still encountered during angioplasty procedures. Although these complications are rare, it is extremely difficult to retrieve such equipments. We report on two cases that a retained IVUS catheter or an entrapped filter wire were retrieved from the coronary system using more simplified technique that does not involve in the usage of snare or any other retrieval tool. After placing an additional guidewire and balloon alongside an equipment, it was easily retrieved from the coronary system just after the proximal balloon deflation. Copyright © 2012. Published by Elsevier Inc.
Comparative analysis among several methods used to solve the point kinetic equations
International Nuclear Information System (INIS)
Nunes, Anderson L.; Goncalves, Alessandro da C.; Martinez, Aquilino S.; Silva, Fernando Carvalho da
2007-01-01
The main objective of this work consists on the methodology development for comparison of several methods for the kinetics equations points solution. The evaluated methods are: the finite differences method, the stiffness confinement method, improved stiffness confinement method and the piecewise constant approximations method. These methods were implemented and compared through a systematic analysis that consists basically of confronting which one of the methods consume smaller computational time with higher precision. It was calculated the relative which function is to combine both criteria in order to reach the goal. Through the analyses of the performance factor it is possible to choose the best method for the solution of point kinetics equations. (author)
Comparative analysis among several methods used to solve the point kinetic equations
Energy Technology Data Exchange (ETDEWEB)
Nunes, Anderson L.; Goncalves, Alessandro da C.; Martinez, Aquilino S.; Silva, Fernando Carvalho da [Universidade Federal, Rio de Janeiro, RJ (Brazil). Coordenacao dos Programas de Pos-graduacao de Engenharia. Programa de Engenharia Nuclear; E-mails: alupo@if.ufrj.br; agoncalves@con.ufrj.br; aquilino@lmp.ufrj.br; fernando@con.ufrj.br
2007-07-01
The main objective of this work consists on the methodology development for comparison of several methods for the kinetics equations points solution. The evaluated methods are: the finite differences method, the stiffness confinement method, improved stiffness confinement method and the piecewise constant approximations method. These methods were implemented and compared through a systematic analysis that consists basically of confronting which one of the methods consume smaller computational time with higher precision. It was calculated the relative which function is to combine both criteria in order to reach the goal. Through the analyses of the performance factor it is possible to choose the best method for the solution of point kinetics equations. (author)
Zhao, Yu; Shi, Chen-Xiao; Kwon, Ki-Chul; Piao, Yan-Ling; Piao, Mei-Lan; Kim, Nam
2018-03-01
We propose a fast calculation method for a computer-generated hologram (CGH) of real objects that uses a point cloud gridding method. The depth information of the scene is acquired using a depth camera and the point cloud model is reconstructed virtually. Because each point of the point cloud is distributed precisely to the exact coordinates of each layer, each point of the point cloud can be classified into grids according to its depth. A diffraction calculation is performed on the grids using a fast Fourier transform (FFT) to obtain a CGH. The computational complexity is reduced dramatically in comparison with conventional methods. The feasibility of the proposed method was confirmed by numerical and optical experiments.
A multi points ultrasonic detection method for material flow of belt conveyor
Zhang, Li; He, Rongjun
2018-03-01
For big detection error of single point ultrasonic ranging technology used in material flow detection of belt conveyor when coal distributes unevenly or is large, a material flow detection method of belt conveyor is designed based on multi points ultrasonic counter ranging technology. The method can calculate approximate sectional area of material by locating multi points on surfaces of material and belt, in order to get material flow according to running speed of belt conveyor. The test results show that the method has smaller detection error than single point ultrasonic ranging technology under the condition of big coal with uneven distribution.
Detection of Dew-Point by substantial Raman Band Frequency Jumps (A new Method)
DEFF Research Database (Denmark)
Hansen, Susanne Brunsgaard; Berg, Rolf W.; Stenby, Erling Halfdan
Detection of Dew-Point by substantial Raman Band Frequency Jumps (A new Method). See poster at http://www.kemi.dtu.dk/~ajo/rolf/jumps.pdf......Detection of Dew-Point by substantial Raman Band Frequency Jumps (A new Method). See poster at http://www.kemi.dtu.dk/~ajo/rolf/jumps.pdf...
A FAST METHOD FOR MEASURING THE SIMILARITY BETWEEN 3D MODEL AND 3D POINT CLOUD
Directory of Open Access Journals (Sweden)
Z. Zhang
2016-06-01
Full Text Available This paper proposes a fast method for measuring the partial Similarity between 3D Model and 3D point Cloud (SimMC. It is crucial to measure SimMC for many point cloud-related applications such as 3D object retrieval and inverse procedural modelling. In our proposed method, the surface area of model and the Distance from Model to point Cloud (DistMC are exploited as measurements to calculate SimMC. Here, DistMC is defined as the weighted distance of the distances between points sampled from model and point cloud. Similarly, Distance from point Cloud to Model (DistCM is defined as the average distance of the distances between points in point cloud and model. In order to reduce huge computational burdens brought by calculation of DistCM in some traditional methods, we define SimMC as the ratio of weighted surface area of model to DistMC. Compared to those traditional SimMC measuring methods that are only able to measure global similarity, our method is capable of measuring partial similarity by employing distance-weighted strategy. Moreover, our method is able to be faster than other partial similarity assessment methods. We demonstrate the superiority of our method both on synthetic data and laser scanning data.
Nonparametric Change Point Diagnosis Method of Concrete Dam Crack Behavior Abnormality
Directory of Open Access Journals (Sweden)
Zhanchao Li
2013-01-01
Full Text Available The study on diagnosis method of concrete crack behavior abnormality has always been a hot spot and difficulty in the safety monitoring field of hydraulic structure. Based on the performance of concrete dam crack behavior abnormality in parametric statistical model and nonparametric statistical model, the internal relation between concrete dam crack behavior abnormality and statistical change point theory is deeply analyzed from the model structure instability of parametric statistical model and change of sequence distribution law of nonparametric statistical model. On this basis, through the reduction of change point problem, the establishment of basic nonparametric change point model, and asymptotic analysis on test method of basic change point problem, the nonparametric change point diagnosis method of concrete dam crack behavior abnormality is created in consideration of the situation that in practice concrete dam crack behavior may have more abnormality points. And the nonparametric change point diagnosis method of concrete dam crack behavior abnormality is used in the actual project, demonstrating the effectiveness and scientific reasonableness of the method established. Meanwhile, the nonparametric change point diagnosis method of concrete dam crack behavior abnormality has a complete theoretical basis and strong practicality with a broad application prospect in actual project.
Entropy Based Test Point Evaluation and Selection Method for Analog Circuit Fault Diagnosis
Directory of Open Access Journals (Sweden)
Yuan Gao
2014-01-01
Full Text Available By simplifying tolerance problem and treating faulty voltages on different test points as independent variables, integer-coded table technique is proposed to simplify the test point selection process. Usually, simplifying tolerance problem may induce a wrong solution while the independence assumption will result in over conservative result. To address these problems, the tolerance problem is thoroughly considered in this paper, and dependency relationship between different test points is considered at the same time. A heuristic graph search method is proposed to facilitate the test point selection process. First, the information theoretic concept of entropy is used to evaluate the optimality of test point. The entropy is calculated by using the ambiguous sets and faulty voltage distribution, determined by component tolerance. Second, the selected optimal test point is used to expand current graph node by using dependence relationship between the test point and graph node. Simulated results indicate that the proposed method more accurately finds the optimal set of test points than other methods; therefore, it is a good solution to minimize the size of the test point set. To simplify and clarify the proposed method, only catastrophic and some specific parametric faults are discussed in this paper.
Energy Technology Data Exchange (ETDEWEB)
Choi, Jang-Hwan, E-mail: jhchoi21@stanford.edu [Department of Radiology, Stanford University, Stanford, California 94305 and Department of Mechanical Engineering, Stanford University, Stanford, California 94305 (United States); Constantin, Dragos [Microwave Physics R& E, Varian Medical Systems, Palo Alto, California 94304 (United States); Ganguly, Arundhuti; Girard, Erin; Fahrig, Rebecca [Department of Radiology, Stanford University, Stanford, California 94305 (United States); Morin, Richard L. [Mayo Clinic Jacksonville, Jacksonville, Florida 32224 (United States); Dixon, Robert L. [Department of Radiology, Wake Forest University, Winston-Salem, North Carolina 27157 (United States)
2015-08-15
Purpose: To propose new dose point measurement-based metrics to characterize the dose distributions and the mean dose from a single partial rotation of an automatic exposure control-enabled, C-arm-based, wide cone angle computed tomography system over a stationary, large, body-shaped phantom. Methods: A small 0.6 cm{sup 3} ion chamber (IC) was used to measure the radiation dose in an elliptical body-shaped phantom made of tissue-equivalent material. The IC was placed at 23 well-distributed holes in the central and peripheral regions of the phantom and dose was recorded for six acquisition protocols with different combinations of minimum kVp (109 and 125 kVp) and z-collimator aperture (full: 22.2 cm; medium: 14.0 cm; small: 8.4 cm). Monte Carlo (MC) simulations were carried out to generate complete 2D dose distributions in the central plane (z = 0). The MC model was validated at the 23 dose points against IC experimental data. The planar dose distributions were then estimated using subsets of the point dose measurements using two proposed methods: (1) the proximity-based weighting method (method 1) and (2) the dose point surface fitting method (method 2). Twenty-eight different dose point distributions with six different point number cases (4, 5, 6, 7, 14, and 23 dose points) were evaluated to determine the optimal number of dose points and their placement in the phantom. The performances of the methods were determined by comparing their results with those of the validated MC simulations. The performances of the methods in the presence of measurement uncertainties were evaluated. Results: The 5-, 6-, and 7-point cases had differences below 2%, ranging from 1.0% to 1.7% for both methods, which is a performance comparable to that of the methods with a relatively large number of points, i.e., the 14- and 23-point cases. However, with the 4-point case, the performances of the two methods decreased sharply. Among the 4-, 5-, 6-, and 7-point cases, the 7-point case (1
Multi-scale calculation based on dual domain material point method combined with molecular dynamics
Energy Technology Data Exchange (ETDEWEB)
Dhakal, Tilak Raj [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2017-02-27
This dissertation combines the dual domain material point method (DDMP) with molecular dynamics (MD) in an attempt to create a multi-scale numerical method to simulate materials undergoing large deformations with high strain rates. In these types of problems, the material is often in a thermodynamically non-equilibrium state, and conventional constitutive relations are often not available. In this method, the closure quantities, such as stress, at each material point are calculated from a MD simulation of a group of atoms surrounding the material point. Rather than restricting the multi-scale simulation in a small spatial region, such as phase interfaces, or crack tips, this multi-scale method can be used to consider non-equilibrium thermodynamic e ects in a macroscopic domain. This method takes advantage that the material points only communicate with mesh nodes, not among themselves; therefore MD simulations for material points can be performed independently in parallel. First, using a one-dimensional shock problem as an example, the numerical properties of the original material point method (MPM), the generalized interpolation material point (GIMP) method, the convected particle domain interpolation (CPDI) method, and the DDMP method are investigated. Among these methods, only the DDMP method converges as the number of particles increases, but the large number of particles needed for convergence makes the method very expensive especially in our multi-scale method where we calculate stress in each material point using MD simulation. To improve DDMP, the sub-point method is introduced in this dissertation, which provides high quality numerical solutions with a very small number of particles. The multi-scale method based on DDMP with sub-points is successfully implemented for a one dimensional problem of shock wave propagation in a cerium crystal. The MD simulation to calculate stress in each material point is performed in GPU using CUDA to accelerate the
Method of nuclear reactor control using a variable temperature load dependent set point
International Nuclear Information System (INIS)
Kelly, J.J.; Rambo, G.E.
1982-01-01
A method and apparatus for controlling a nuclear reactor in response to a variable average reactor coolant temperature set point is disclosed. The set point is dependent upon percent of full power load demand. A manually-actuated ''droop mode'' of control is provided whereby the reactor coolant temperature is allowed to drop below the set point temperature a predetermined amount wherein the control is switched from reactor control rods exclusively to feedwater flow
AN IMPROVEMENT ON GEOMETRY-BASED METHODS FOR GENERATION OF NETWORK PATHS FROM POINTS
Directory of Open Access Journals (Sweden)
Z. Akbari
2014-10-01
Full Text Available Determining network path is important for different purposes such as determination of road traffic, the average speed of vehicles, and other network analysis. One of the required input data is information about network path. Nevertheless, the data collected by the positioning systems often lead to the discrete points. Conversion of these points to the network path have become one of the challenges which different researchers, presents many ways for solving it. This study aims at investigating geometry-based methods to estimate the network paths from the obtained points and improve an existing point to curve method. To this end, some geometry-based methods have been studied and an improved method has been proposed by applying conditions on the best method after describing and illustrating weaknesses of them.
Directory of Open Access Journals (Sweden)
Zhenxiang Jiang
2016-01-01
Full Text Available The traditional methods of diagnosing dam service status are always suitable for single measuring point. These methods also reflect the local status of dams without merging multisource data effectively, which is not suitable for diagnosing overall service. This study proposes a new method involving multiple points to diagnose dam service status based on joint distribution function. The function, including monitoring data of multiple points, can be established with t-copula function. Therefore, the possibility, which is an important fusing value in different measuring combinations, can be calculated, and the corresponding diagnosing criterion is established with typical small probability theory. Engineering case study indicates that the fusion diagnosis method can be conducted in real time and the abnormal point can be detected, thereby providing a new early warning method for engineering safety.
Datum Feature Extraction and Deformation Analysis Method Based on Normal Vector of Point Cloud
Sun, W.; Wang, J.; Jin, F.; Liang, Z.; Yang, Y.
2018-04-01
In order to solve the problem lacking applicable analysis method in the application of three-dimensional laser scanning technology to the field of deformation monitoring, an efficient method extracting datum feature and analysing deformation based on normal vector of point cloud was proposed. Firstly, the kd-tree is used to establish the topological relation. Datum points are detected by tracking the normal vector of point cloud determined by the normal vector of local planar. Then, the cubic B-spline curve fitting is performed on the datum points. Finally, datum elevation and the inclination angle of the radial point are calculated according to the fitted curve and then the deformation information was analyzed. The proposed approach was verified on real large-scale tank data set captured with terrestrial laser scanner in a chemical plant. The results show that the method could obtain the entire information of the monitor object quickly and comprehensively, and reflect accurately the datum feature deformation.
Improved DEA Cross Efficiency Evaluation Method Based on Ideal and Anti-Ideal Points
Directory of Open Access Journals (Sweden)
Qiang Hou
2018-01-01
Full Text Available A new model is introduced in the process of evaluating efficiency value of decision making units (DMUs through data envelopment analysis (DEA method. Two virtual DMUs called ideal point DMU and anti-ideal point DMU are combined to form a comprehensive model based on the DEA method. The ideal point DMU is taking self-assessment system according to efficiency concept. The anti-ideal point DMU is taking other-assessment system according to fairness concept. The two distinctive ideal point models are introduced to the DEA method and combined through using variance ration. From the new model, a reasonable result can be obtained. Numerical examples are provided to illustrate the new constructed model and certify the rationality of the constructed model through relevant analysis with the traditional DEA model.
Energy Technology Data Exchange (ETDEWEB)
Yoo, Hyun Suk; Lee, Jeong Min; Yoon, Jeong Hee; Lee, Dong Ho; Chang, Won; Han, Joon Koo [Seoul National University Hospital, Seoul (Korea, Republic of)
2016-09-15
To prospectively compare technical success rate and reliable measurements of virtual touch quantification (VTQ) elastography and elastography point quantification (ElastPQ), and to correlate liver stiffness (LS) measurements obtained by the two elastography techniques. Our study included 85 patients, 80 of whom were previously diagnosed with chronic liver disease. The technical success rate and reliable measurements of the two kinds of point shear wave elastography (pSWE) techniques were compared by χ{sup 2} analysis. LS values measured using the two techniques were compared and correlated via Wilcoxon signed-rank test, Spearman correlation coefficient, and 95% Bland-Altman limit of agreement. The intraobserver reproducibility of ElastPQ was determined by 95% Bland-Altman limit of agreement and intraclass correlation coefficient (ICC). The two pSWE techniques showed similar technical success rate (98.8% for VTQ vs. 95.3% for ElastPQ, p = 0.823) and reliable LS measurements (95.3% for VTQ vs. 90.6% for ElastPQ, p = 0.509). The mean LS measurements obtained by VTQ (1.71 ± 0.47 m/s) and ElastPQ (1.66 ± 0.41 m/s) were not significantly different (p = 0.209). The LS measurements obtained by the two techniques showed strong correlation (r = 0.820); in addition, the 95% limit of agreement of the two methods was 27.5% of the mean. Finally, the ICC of repeat ElastPQ measurements was 0.991. Virtual touch quantification and ElastPQ showed similar technical success rate and reliable measurements, with strongly correlated LS measurements. However, the two methods are not interchangeable due to the large limit of agreement.
Directory of Open Access Journals (Sweden)
Omolara Olusola Oluwaniyi
2016-10-01
Full Text Available The proximate, amino acid and fatty acid compositions of the fillet and oil from Clarias gariepinus (Catfish and Oreochromis niloticus (Tilapia were determined. The moisture content ranged from 76.27 % for catfish to 79.97 % for tilapia while the oil content ranged from 7.80 % for tilapia and 11.00 % for catfish. Ash content was in the range 8.03 – 9.16 % and the protein content was 15.83 - 18.48 %. Cooking – boiling, roasting or frying – resulted in a variation in the nutrient composition but with no significant effect on the amino acid composition except for the samples fried with palm oil which resulted in significantly reduced essential amino acids contents. All the fish samples – both fresh and processed – have amino acid scores less than 100, with lysine, threonine and the sulphur-containing amino acids being among the limiting amino acids. Both fish samples contain more unsaturated than saturated fatty acids.
Synthesis of Numerical Methods for Modeling Wave Energy Converter-Point Absorbers: Preprint
Energy Technology Data Exchange (ETDEWEB)
Li, Y.; Yu, Y. H.
2012-05-01
During the past few decades, wave energy has received significant attention among all ocean energy formats. Industry has proposed hundreds of prototypes such as an oscillating water column, a point absorber, an overtopping system, and a bottom-hinged system. In particular, many researchers have focused on modeling the floating-point absorber as the technology to extract wave energy. Several modeling methods have been used such as the analytical method, the boundary-integral equation method, the Navier-Stokes equations method, and the empirical method. However, no standardized method has been decided. To assist the development of wave energy conversion technologies, this report reviews the methods for modeling the floating-point absorber.
Change Analysis in Structural Laser Scanning Point Clouds: The Baseline Method.
Shen, Yueqian; Lindenbergh, Roderik; Wang, Jinhu
2016-12-24
A method is introduced for detecting changes from point clouds that avoids registration. For many applications, changes are detected between two scans of the same scene obtained at different times. Traditionally, these scans are aligned to a common coordinate system having the disadvantage that this registration step introduces additional errors. In addition, registration requires stable targets or features. To avoid these issues, we propose a change detection method based on so-called baselines. Baselines connect feature points within one scan. To analyze changes, baselines connecting corresponding points in two scans are compared. As feature points either targets or virtual points corresponding to some reconstructable feature in the scene are used. The new method is implemented on two scans sampling a masonry laboratory building before and after seismic testing, that resulted in damages in the order of several centimeters. The centres of the bricks of the laboratory building are automatically extracted to serve as virtual points. Baselines connecting virtual points and/or target points are extracted and compared with respect to a suitable structural coordinate system. Changes detected from the baseline analysis are compared to a traditional cloud to cloud change analysis demonstrating the potential of the new method for structural analysis.
Change Analysis in Structural Laser Scanning Point Clouds: The Baseline Method
Directory of Open Access Journals (Sweden)
Yueqian Shen
2016-12-01
Full Text Available A method is introduced for detecting changes from point clouds that avoids registration. For many applications, changes are detected between two scans of the same scene obtained at different times. Traditionally, these scans are aligned to a common coordinate system having the disadvantage that this registration step introduces additional errors. In addition, registration requires stable targets or features. To avoid these issues, we propose a change detection method based on so-called baselines. Baselines connect feature points within one scan. To analyze changes, baselines connecting corresponding points in two scans are compared. As feature points either targets or virtual points corresponding to some reconstructable feature in the scene are used. The new method is implemented on two scans sampling a masonry laboratory building before and after seismic testing, that resulted in damages in the order of several centimeters. The centres of the bricks of the laboratory building are automatically extracted to serve as virtual points. Baselines connecting virtual points and/or target points are extracted and compared with respect to a suitable structural coordinate system. Changes detected from the baseline analysis are compared to a traditional cloud to cloud change analysis demonstrating the potential of the new method for structural analysis.
A new method to identify the location of the kick point during the golf swing.
Joyce, Christopher; Burnett, Angus; Matthews, Miccal
2013-12-01
No method currently exists to determine the location of the kick point during the golf swing. This study consisted of two phases. In the first phase, the static kick point of 10 drivers (having identical grip and head but fitted with shafts of differing mass and stiffness) was determined by two methods: (1) a visual method used by professional club fitters and (2) an algorithm using 3D locations of markers positioned on the golf club. Using level of agreement statistics, we showed the latter technique was a valid method to determine the location of the static kick point. In phase two, the validated method was used to determine the dynamic kick point during the golf swing. Twelve elite male golfers had three shots analyzed for two drivers fitted with stiff shafts of differing mass (56 g and 78 g). Excellent between-trial reliability was found for dynamic kick point location. Differences were found for dynamic kick point location when compared with static kick point location, as well as between-shaft and within-shaft. These findings have implications for future investigations examining the bending behavior of golf clubs, as well as being useful to examine relationships between properties of the shaft and launch parameters.
A feature point identification method for positron emission particle tracking with multiple tracers
Energy Technology Data Exchange (ETDEWEB)
Wiggins, Cody, E-mail: cwiggin2@vols.utk.edu [University of Tennessee-Knoxville, Department of Physics and Astronomy, 1408 Circle Drive, Knoxville, TN 37996 (United States); Santos, Roque [University of Tennessee-Knoxville, Department of Nuclear Engineering (United States); Escuela Politécnica Nacional, Departamento de Ciencias Nucleares (Ecuador); Ruggles, Arthur [University of Tennessee-Knoxville, Department of Nuclear Engineering (United States)
2017-01-21
A novel detection algorithm for Positron Emission Particle Tracking (PEPT) with multiple tracers based on optical feature point identification (FPI) methods is presented. This new method, the FPI method, is compared to a previous multiple PEPT method via analyses of experimental and simulated data. The FPI method outperforms the older method in cases of large particle numbers and fine time resolution. Simulated data show the FPI method to be capable of identifying 100 particles at 0.5 mm average spatial error. Detection error is seen to vary with the inverse square root of the number of lines of response (LORs) used for detection and increases as particle separation decreases. - Highlights: • A new approach to positron emission particle tracking is presented. • Using optical feature point identification analogs, multiple particle tracking is achieved. • Method is compared to previous multiple particle method. • Accuracy and applicability of method is explored.
Directory of Open Access Journals (Sweden)
Hongwei Ying
2014-08-01
Full Text Available An extreme point of scale space extraction method for binary multiscale and rotation invariant local feature descriptor is studied in this paper in order to obtain a robust and fast method for local image feature descriptor. Classic local feature description algorithms often select neighborhood information of feature points which are extremes of image scale space, obtained by constructing the image pyramid using certain signal transform method. But build the image pyramid always consumes a large amount of computing and storage resources, is not conducive to the actual applications development. This paper presents a dual multiscale FAST algorithm, it does not need to build the image pyramid, but can extract feature points of scale extreme quickly. Feature points extracted by proposed method have the characteristic of multiscale and rotation Invariant and are fit to construct the local feature descriptor.
On the Convergence of the Iteration Sequence in Primal-Dual Interior-Point Methods
National Research Council Canada - National Science Library
Tapia, Richard A; Zhang, Yin; Ye, Yinyu
1993-01-01
Recently, numerous research efforts, most of them concerned with superlinear convergence of the duality gap sequence to zero in the Kojima-Mizuno-Yoshise primal-dual interior-point method for linear...
Fixed Point Methods in the Stability of the Cauchy Functional Equations
Directory of Open Access Journals (Sweden)
Z. Dehvari
2013-03-01
Full Text Available By using the fixed point methods, we prove some generalized Hyers-Ulam stability of homomorphisms for Cauchy and CauchyJensen functional equations on the product algebras and on the triple systems.
A method for computing the stationary points of a function subject to linear equality constraints
International Nuclear Information System (INIS)
Uko, U.L.
1989-09-01
We give a new method for the numerical calculation of stationary points of a function when it is subject to equality constraints. An application to the solution of linear equations is given, together with a numerical example. (author). 5 refs
Method and apparatus for continuously detecting and monitoring the hydrocarbon dew-point of gas
Energy Technology Data Exchange (ETDEWEB)
Boyle, G.J.; Pritchard, F.R.
1987-08-04
This patent describes a method and apparatus for continuously detecting and monitoring the hydrocarbon dew-point of a gas. A gas sample is supplied to a dew-point detector and the temperature of a portion of the sample gas stream to be investigated is lowered progressively prior to detection until the dew-point is reached. The presence of condensate within the flowing gas is detected and subsequently the supply gas sample is heated to above the dew-point. The procedure of cooling and heating the gas stream continuously in a cyclical manner is repeated.
Five-point form of the nodal diffusion method and comparison with finite-difference
International Nuclear Information System (INIS)
Azmy, Y.Y.
1988-01-01
Nodal Methods have been derived, implemented and numerically tested for several problems in physics and engineering. In the field of nuclear engineering, many nodal formalisms have been used for the neutron diffusion equation, all yielding results which were far more computationally efficient than conventional Finite Difference (FD) and Finite Element (FE) methods. However, not much effort has been devoted to theoretically comparing nodal and FD methods in order to explain the very high accuracy of the former. In this summary we outline the derivation of a simple five-point form for the lowest order nodal method and compare it to the traditional five-point, edge-centered FD scheme. The effect of the observed differences on the accuracy of the respective methods is established by considering a simple test problem. It must be emphasized that the nodal five-point scheme derived here is mathematically equivalent to previously derived lowest order nodal methods. 7 refs., 1 tab
a Gross Error Elimination Method for Point Cloud Data Based on Kd-Tree
Kang, Q.; Huang, G.; Yang, S.
2018-04-01
Point cloud data has been one type of widely used data sources in the field of remote sensing. Key steps of point cloud data's pro-processing focus on gross error elimination and quality control. Owing to the volume feature of point could data, existed gross error elimination methods need spend massive memory both in space and time. This paper employed a new method which based on Kd-tree algorithm to construct, k-nearest neighbor algorithm to search, settled appropriate threshold to determine with result turns out a judgement that whether target point is or not an outlier. Experimental results show that, our proposed algorithm will help to delete gross error in point cloud data and facilitate to decrease memory consumption, improve efficiency.
A GROSS ERROR ELIMINATION METHOD FOR POINT CLOUD DATA BASED ON KD-TREE
Directory of Open Access Journals (Sweden)
Q. Kang
2018-04-01
Full Text Available Point cloud data has been one type of widely used data sources in the field of remote sensing. Key steps of point cloud data’s pro-processing focus on gross error elimination and quality control. Owing to the volume feature of point could data, existed gross error elimination methods need spend massive memory both in space and time. This paper employed a new method which based on Kd-tree algorithm to construct, k-nearest neighbor algorithm to search, settled appropriate threshold to determine with result turns out a judgement that whether target point is or not an outlier. Experimental results show that, our proposed algorithm will help to delete gross error in point cloud data and facilitate to decrease memory consumption, improve efficiency.
The complexity of interior point methods for solving discounted turn-based stochastic games
DEFF Research Database (Denmark)
Hansen, Thomas Dueholm; Ibsen-Jensen, Rasmus
2013-01-01
for general 2TBSGs. This implies that a number of interior point methods can be used to solve 2TBSGs. We consider two such algorithms: the unified interior point method of Kojima, Megiddo, Noma, and Yoshise, and the interior point potential reduction algorithm of Kojima, Megiddo, and Ye. The algorithms run...... states and discount factor γ we get κ=Θ(n(1−γ)2) , −δ=Θ(n√1−γ) , and 1/θ=Θ(n(1−γ)2) in the worst case. The lower bounds for κ, − δ, and 1/θ are all obtained using the same family of deterministic games....
Comparison of methods for accurate end-point detection of potentiometric titrations
International Nuclear Information System (INIS)
Villela, R L A; Borges, P P; Vyskočil, L
2015-01-01
Detection of the end point in potentiometric titrations has wide application on experiments that demand very low measurement uncertainties mainly for certifying reference materials. Simulations of experimental coulometric titration data and consequential error analysis of the end-point values were conducted using a programming code. These simulations revealed that the Levenberg-Marquardt method is in general more accurate than the traditional second derivative technique used currently as end-point detection for potentiometric titrations. Performance of the methods will be compared and presented in this paper
Comparison of methods for accurate end-point detection of potentiometric titrations
Villela, R. L. A.; Borges, P. P.; Vyskočil, L.
2015-01-01
Detection of the end point in potentiometric titrations has wide application on experiments that demand very low measurement uncertainties mainly for certifying reference materials. Simulations of experimental coulometric titration data and consequential error analysis of the end-point values were conducted using a programming code. These simulations revealed that the Levenberg-Marquardt method is in general more accurate than the traditional second derivative technique used currently as end-point detection for potentiometric titrations. Performance of the methods will be compared and presented in this paper.
Ngastiti, P. T. B.; Surarso, Bayu; Sutimin
2018-05-01
Transportation issue of the distribution problem such as the commodity or goods from the supply tothe demmand is to minimize the transportation costs. Fuzzy transportation problem is an issue in which the transport costs, supply and demand are in the form of fuzzy quantities. Inthe case study at CV. Bintang Anugerah Elektrik, a company engages in the manufacture of gensets that has more than one distributors. We use the methods of zero point and zero suffix to investigate the transportation minimum cost. In implementing both methods, we use robust ranking techniques for the defuzzification process. The studyresult show that the iteration of zero suffix method is less than that of zero point method.
DEFF Research Database (Denmark)
Bey, Niki
2000-01-01
to three essential assessment steps, the method enables rough environmental evaluations and supports in this way material- and process-related decision-making in the early stages of design. In its overall structure, the Oil Point Method is related to Life Cycle Assessment - except for two main differences...... of environmental evaluation and only approximate information about the product and its life cycle. This dissertation addresses this challenge in presenting a method, which is tailored to these requirements of designers - the Oil Point Method (OPM). In providing environmental key information and confining itself...
Data-Driven Method for Wind Turbine Yaw Angle Sensor Zero-Point Shifting Fault Detection
Directory of Open Access Journals (Sweden)
Yan Pei
2018-03-01
Full Text Available Wind turbine yaw control plays an important role in increasing the wind turbine production and also in protecting the wind turbine. Accurate measurement of yaw angle is the basis of an effective wind turbine yaw controller. The accuracy of yaw angle measurement is affected significantly by the problem of zero-point shifting. Hence, it is essential to evaluate the zero-point shifting error on wind turbines on-line in order to improve the reliability of yaw angle measurement in real time. Particularly, qualitative evaluation of the zero-point shifting error could be useful for wind farm operators to realize prompt and cost-effective maintenance on yaw angle sensors. In the aim of qualitatively evaluating the zero-point shifting error, the yaw angle sensor zero-point shifting fault is firstly defined in this paper. A data-driven method is then proposed to detect the zero-point shifting fault based on Supervisory Control and Data Acquisition (SCADA data. The zero-point shifting fault is detected in the proposed method by analyzing the power performance under different yaw angles. The SCADA data are partitioned into different bins according to both wind speed and yaw angle in order to deeply evaluate the power performance. An indicator is proposed in this method for power performance evaluation under each yaw angle. The yaw angle with the largest indicator is considered as the yaw angle measurement error in our work. A zero-point shifting fault would trigger an alarm if the error is larger than a predefined threshold. Case studies from several actual wind farms proved the effectiveness of the proposed method in detecting zero-point shifting fault and also in improving the wind turbine performance. Results of the proposed method could be useful for wind farm operators to realize prompt adjustment if there exists a large error of yaw angle measurement.
Directory of Open Access Journals (Sweden)
Yi-hua Zhong
2013-01-01
Full Text Available Recently, various methods have been developed for solving linear programming problems with fuzzy number, such as simplex method and dual simplex method. But their computational complexities are exponential, which is not satisfactory for solving large-scale fuzzy linear programming problems, especially in the engineering field. A new method which can solve large-scale fuzzy number linear programming problems is presented in this paper, which is named a revised interior point method. Its idea is similar to that of interior point method used for solving linear programming problems in crisp environment before, but its feasible direction and step size are chosen by using trapezoidal fuzzy numbers, linear ranking function, fuzzy vector, and their operations, and its end condition is involved in linear ranking function. Their correctness and rationality are proved. Moreover, choice of the initial interior point and some factors influencing the results of this method are also discussed and analyzed. The result of algorithm analysis and example study that shows proper safety factor parameter, accuracy parameter, and initial interior point of this method may reduce iterations and they can be selected easily according to the actual needs. Finally, the method proposed in this paper is an alternative method for solving fuzzy number linear programming problems.
The Closest Point Method and Multigrid Solvers for Elliptic Equations on Surfaces
Chen, Yujia
2015-01-01
© 2015 Society for Industrial and Applied Mathematics. Elliptic partial differential equations are important from both application and analysis points of view. In this paper we apply the closest point method to solve elliptic equations on general curved surfaces. Based on the closest point representation of the underlying surface, we formulate an embedding equation for the surface elliptic problem, then discretize it using standard finite differences and interpolation schemes on banded but uniform Cartesian grids. We prove the convergence of the difference scheme for the Poisson\\'s equation on a smooth closed curve. In order to solve the resulting large sparse linear systems, we propose a specific geometric multigrid method in the setting of the closest point method. Convergence studies in both the accuracy of the difference scheme and the speed of the multigrid algorithm show that our approaches are effective.
Efficient 3D Volume Reconstruction from a Point Cloud Using a Phase-Field Method
Directory of Open Access Journals (Sweden)
Darae Jeong
2018-01-01
Full Text Available We propose an explicit hybrid numerical method for the efficient 3D volume reconstruction from unorganized point clouds using a phase-field method. The proposed three-dimensional volume reconstruction algorithm is based on the 3D binary image segmentation method. First, we define a narrow band domain embedding the unorganized point cloud and an edge indicating function. Second, we define a good initial phase-field function which speeds up the computation significantly. Third, we use a recently developed explicit hybrid numerical method for solving the three-dimensional image segmentation model to obtain efficient volume reconstruction from point cloud data. In order to demonstrate the practical applicability of the proposed method, we perform various numerical experiments.
Point kernels and superposition methods for scatter dose calculations in brachytherapy
International Nuclear Information System (INIS)
Carlsson, A.K.
2000-01-01
Point kernels have been generated and applied for calculation of scatter dose distributions around monoenergetic point sources for photon energies ranging from 28 to 662 keV. Three different approaches for dose calculations have been compared: a single-kernel superposition method, a single-kernel superposition method where the point kernels are approximated as isotropic and a novel 'successive-scattering' superposition method for improved modelling of the dose from multiply scattered photons. An extended version of the EGS4 Monte Carlo code was used for generating the kernels and for benchmarking the absorbed dose distributions calculated with the superposition methods. It is shown that dose calculation by superposition at and below 100 keV can be simplified by using isotropic point kernels. Compared to the assumption of full in-scattering made by algorithms currently in clinical use, the single-kernel superposition method improves dose calculations in a half-phantom consisting of air and water. Further improvements are obtained using the successive-scattering superposition method, which reduces the overestimates of dose close to the phantom surface usually associated with kernel superposition methods at brachytherapy photon energies. It is also shown that scatter dose point kernels can be parametrized to biexponential functions, making them suitable for use with an effective implementation of the collapsed cone superposition algorithm. (author)
Five-point Element Scheme of Finite Analytic Method for Unsteady Groundwater Flow
Institute of Scientific and Technical Information of China (English)
Xiang Bo; Mi Xiao; Ji Changming; Luo Qingsong
2007-01-01
In order to improve the finite analytic method's adaptability for irregular unit, by using coordinates rotation technique this paper establishes a five-point element scheme of finite analytic method. It not only solves unsteady groundwater flow equation but also gives the boundary condition. This method can be used to calculate the three typical questions of groundwater. By compared with predecessor's computed result, the result of this method is more satisfactory.
International Nuclear Information System (INIS)
Holcomb, M.J.
1999-01-01
A composite superconducting material made of coated particles of ceramic superconducting material and a metal matrix material is disclosed. The metal matrix material fills the regions between the coated particles. The coating material is a material that is chemically nonreactive with the ceramic. Preferably, it is silver. The coating serves to chemically insulate the ceramic from the metal matrix material. The metal matrix material is a metal that is susceptible to the superconducting proximity effect. Preferably, it is a NbTi alloy. The metal matrix material is induced to become superconducting by the superconducting proximity effect when the temperature of the material goes below the critical temperature of the ceramic. The material has the improved mechanical properties of the metal matrix material. Preferably, the material consists of approximately 10% NbTi, 90% coated ceramic particles (by volume). Certain aspects of the material and method will depend upon the particular ceramic superconductor employed. An alternative embodiment of the invention utilizes A15 compound superconducting particles in a metal matrix material which is preferably a NbTi alloy
Holcomb, Matthew J.
1999-01-01
A composite superconducting material made of coated particles of ceramic superconducting material and a metal matrix material. The metal matrix material fills the regions between the coated particles. The coating material is a material that is chemically nonreactive with the ceramic. Preferably, it is silver. The coating serves to chemically insulate the ceramic from the metal matrix material. The metal matrix material is a metal that is susceptible to the superconducting proximity effect. Preferably, it is a NbTi alloy. The metal matrix material is induced to become superconducting by the superconducting proximity effect when the temperature of the material goes below the critical temperature of the ceramic. The material has the improved mechanical properties of the metal matrix material. Preferably, the material consists of approximately 10% NbTi, 90% coated ceramic particles (by volume). Certain aspects of the material and method will depend upon the particular ceramic superconductor employed. An alternative embodiment of the invention utilizes A15 compound superconducting particles in a metal matrix material which is preferably a NbTi alloy.
Children's proximal societal conditions
DEFF Research Database (Denmark)
Stanek, Anja Hvidtfeldt
2018-01-01
that is above or outside the institutional setting or the children’s everyday life, but something that is represented through societal structures and actual persons participating (in political ways) within the institutional settings, in ways that has meaning to children’s possibilities to participate, learn...... and develop. Understanding school or kindergarten as (part of) the children’s proximal societal conditions for development and learning, means for instance that considerations about an inclusive agenda are no longer simply thoughts about the school – for economic reasons – having space for as many pupils...... as possible (schools for all). Such thoughts can be supplemented by reflections about which version of ‘the societal’ we wish to present our children with, and which version of ‘the societal’ we wish to set up as the condition for children’s participation and development. The point is to clarify or sharpen...
DEFF Research Database (Denmark)
Choi, Ui-Min; Blaabjerg, Frede; Lee, Kyo-Beum
2015-01-01
time of small- and medium-voltage vectors. However, if the power factor is lower, there is a limitation to eliminate neutral-point oscillations. In this case, the proposed method can be improved by changing the switching sequence properly. Additionally, a method for neutral-point voltage balancing......This paper proposes a method to reduce the low-frequency neutral-point voltage oscillations. The neutral-point voltage oscillations are considerably reduced by adding a time offset to the three-phase turn-on times. The proper time offset is simply calculated considering the phase currents and dwell...
Using the method of ideal point to solve dual-objective problem for production scheduling
Directory of Open Access Journals (Sweden)
Mariia Marko
2016-07-01
Full Text Available In practice, there are often problems, which must simultaneously optimize several criterias. This so-called multi-objective optimization problem. In the article we consider the use of the method ideal point to solve the two-objective optimization problem of production planning. The process of finding solution to the problem consists of a series of steps where using simplex method, we find the ideal point. After that for solving a scalar problems, we use the method of Lagrange multipliers
PKI, Gamma Radiation Reactor Shielding Calculation by Point-Kernel Method
International Nuclear Information System (INIS)
Li Chunhuai; Zhang Liwu; Zhang Yuqin; Zhang Chuanxu; Niu Xihua
1990-01-01
1 - Description of program or function: This code calculates radiation shielding problem of gamma-ray in geometric space. 2 - Method of solution: PKI uses a point kernel integration technique, describes radiation shielding geometric space by using geometric space configuration method and coordinate conversion, and makes use of calculation result of reactor primary shielding and flow regularity in loop system for coolant
Lee, Jennifer
2012-01-01
The intent of this study was to examine the relationship between media multitasking orientation and grade point average. The study utilized a mixed-methods approach to investigate the research questions. In the quantitative section of the study, the primary method of statistical analyses was multiple regression. The independent variables for the…
International Nuclear Information System (INIS)
Wang, Ruihong; Yang, Shulin; Pei, Lucheng
2011-01-01
Deep penetration problem has been one of the difficult problems in shielding calculation with Monte Carlo method for several decades. In this paper, an adaptive technique under the emission point as a sampling station is presented. The main advantage is to choose the most suitable sampling number from the emission point station to get the minimum value of the total cost in the process of the random walk. Further, the related importance sampling method is also derived. The main principle is to define the importance function of the response due to the particle state and ensure the sampling number of the emission particle is proportional to the importance function. The numerical results show that the adaptive method under the emission point as a station could overcome the difficulty of underestimation to the result in some degree, and the related importance sampling method gets satisfied results as well. (author)
DEFF Research Database (Denmark)
Sørensen, Chris Khadgi; Thach, Tine; Hovmøller, Mogens Støvring
2016-01-01
flexible application procedure for spray inoculation and it gave highly reproducible results for virulence phenotyping. Six point inoculation methods were compared to find the most suitable for assessment of pathogen aggressiveness. The use of Novec 7100 and dry dilution with Lycopodium spores gave...... for the assessment of quantitative epidemiological parameters. New protocols for spray and point inoculation of P. striiformis on wheat are presented, along with the prospect for applying these in rust research and resistance breeding activities....
Apparatus and method for implementing power saving techniques when processing floating point values
Kim, Young Moon; Park, Sang Phill
2017-10-03
An apparatus and method are described for reducing power when reading and writing graphics data. For example, one embodiment of an apparatus comprises: a graphics processor unit (GPU) to process graphics data including floating point data; a set of registers, at least one of the registers of the set partitioned to store the floating point data; and encode/decode logic to reduce a number of binary 1 values being read from the at least one register by causing a specified set of bit positions within the floating point data to be read out as 0s rather than 1s.
Directory of Open Access Journals (Sweden)
Klin-eam Chakkrid
2009-01-01
Full Text Available Abstract A new approximation method for solving variational inequalities and fixed points of nonexpansive mappings is introduced and studied. We prove strong convergence theorem of the new iterative scheme to a common element of the set of fixed points of nonexpansive mapping and the set of solutions of the variational inequality for the inverse-strongly monotone mapping which solves some variational inequalities. Moreover, we apply our main result to obtain strong convergence to a common fixed point of nonexpansive mapping and strictly pseudocontractive mapping in a Hilbert space.
Comparison of Optimization and Two-point Methods in Estimation of Soil Water Retention Curve
Ghanbarian-Alavijeh, B.; Liaghat, A. M.; Huang, G.
2009-04-01
Soil water retention curve (SWRC) is one of the soil hydraulic properties in which its direct measurement is time consuming and expensive. Since, its measurement is unavoidable in study of environmental sciences i.e. investigation of unsaturated hydraulic conductivity and solute transport, in this study the attempt is to predict soil water retention curve from two measured points. By using Cresswell and Paydar (1996) method (two-point method) and an optimization method developed in this study on the basis of two points of SWRC, parameters of Tyler and Wheatcraft (1990) model (fractal dimension and air entry value) were estimated and then water content at different matric potentials were estimated and compared with their measured values (n=180). For each method, we used both 3 and 1500 kPa (case 1) and 33 and 1500 kPa (case 2) as two points of SWRC. The calculated RMSE values showed that in the Creswell and Paydar (1996) method, there exists no significant difference between case 1 and case 2. However, the calculated RMSE value in case 2 (2.35) was slightly less than case 1 (2.37). The results also showed that the developed optimization method in this study had significantly less RMSE values for cases 1 (1.63) and 2 (1.33) rather than Cresswell and Paydar (1996) method.
Gender preference between traditional and PowerPoint methods of teaching gross anatomy.
Nuhu, Saleh; Adamu, Lawan Hassan; Buba, Mohammed Alhaji; Garba, Sani Hyedima; Dalori, Babagana Mohammed; Yusuf, Ashiru Hassan
2018-01-01
Teaching and learning process is increasingly metamorphosing from the traditional chalk and talk to the modern dynamism in the information and communication technology. Medical education is no exception to this dynamism more especially in the teaching of gross anatomy, which serves as one of the bases of understanding the human structure. This study was conducted to determine the gender preference of preclinical medical students on the use of traditional (chalk and talk) and PowerPoint presentation in the teaching of gross anatomy. This was cross-sectional and prospective study, which was conducted among preclinical medical students in the University of Maiduguri, Nigeria. Using simple random techniques, a questionnaire was circulated among 280 medical students, where 247 students filled the questionnaire appropriately. The data obtained was analyzed using SPSS version 20 (IBM Corporation, Armonk, NY, USA) to find the method preferred by the students among other things. Majority of the preclinical medical students in the University of Maiduguri preferred PowerPoint method in the teaching of gross anatomy over the conventional methods. The Cronbach alpha value of 0.76 was obtained which is an acceptable level of internal consistency. A statistically significant association was found between gender and preferred method of lecture delivery on the clarity of lecture content where females prefer the conventional method of lecture delivery whereas males prefer the PowerPoint method, On the reproducibility of text and diagram, females prefer PowerPoint method of teaching gross anatomy while males prefer the conventional method of teaching gross anatomy. There are gender preferences with regard to clarity of lecture contents and reproducibility of text and diagram. It was also revealed from this study that majority of the preclinical medical students in the University of Maiduguri prefer PowerPoint presentation over the traditional chalk and talk method in most of the
Energy Technology Data Exchange (ETDEWEB)
Liu, Youshan, E-mail: ysliu@mail.iggcas.ac.cn [State Key Laboratory of Lithospheric Evolution, Institute of Geology and Geophysics, Chinese Academy of Sciences, Beijing, 100029 (China); Teng, Jiwen, E-mail: jwteng@mail.iggcas.ac.cn [State Key Laboratory of Lithospheric Evolution, Institute of Geology and Geophysics, Chinese Academy of Sciences, Beijing, 100029 (China); Xu, Tao, E-mail: xutao@mail.iggcas.ac.cn [State Key Laboratory of Lithospheric Evolution, Institute of Geology and Geophysics, Chinese Academy of Sciences, Beijing, 100029 (China); CAS Center for Excellence in Tibetan Plateau Earth Sciences, Beijing, 100101 (China); Badal, José, E-mail: badal@unizar.es [Physics of the Earth, Sciences B, University of Zaragoza, Pedro Cerbuna 12, 50009 Zaragoza (Spain)
2017-05-01
The mass-lumped method avoids the cost of inverting the mass matrix and simultaneously maintains spatial accuracy by adopting additional interior integration points, known as cubature points. To date, such points are only known analytically in tensor domains, such as quadrilateral or hexahedral elements. Thus, the diagonal-mass-matrix spectral element method (SEM) in non-tensor domains always relies on numerically computed interpolation points or quadrature points. However, only the cubature points for degrees 1 to 6 are known, which is the reason that we have developed a p-norm-based optimization algorithm to obtain higher-order cubature points. In this way, we obtain and tabulate new cubature points with all positive integration weights for degrees 7 to 9. The dispersion analysis illustrates that the dispersion relation determined from the new optimized cubature points is comparable to that of the mass and stiffness matrices obtained by exact integration. Simultaneously, the Lebesgue constant for the new optimized cubature points indicates its surprisingly good interpolation properties. As a result, such points provide both good interpolation properties and integration accuracy. The Courant–Friedrichs–Lewy (CFL) numbers are tabulated for the conventional Fekete-based triangular spectral element (TSEM), the TSEM with exact integration, and the optimized cubature-based TSEM (OTSEM). A complementary study demonstrates the spectral convergence of the OTSEM. A numerical example conducted on a half-space model demonstrates that the OTSEM improves the accuracy by approximately one order of magnitude compared to the conventional Fekete-based TSEM. In particular, the accuracy of the 7th-order OTSEM is even higher than that of the 14th-order Fekete-based TSEM. Furthermore, the OTSEM produces a result that can compete in accuracy with the quadrilateral SEM (QSEM). The high accuracy of the OTSEM is also tested with a non-flat topography model. In terms of computational
International Nuclear Information System (INIS)
Liu, Youshan; Teng, Jiwen; Xu, Tao; Badal, José
2017-01-01
The mass-lumped method avoids the cost of inverting the mass matrix and simultaneously maintains spatial accuracy by adopting additional interior integration points, known as cubature points. To date, such points are only known analytically in tensor domains, such as quadrilateral or hexahedral elements. Thus, the diagonal-mass-matrix spectral element method (SEM) in non-tensor domains always relies on numerically computed interpolation points or quadrature points. However, only the cubature points for degrees 1 to 6 are known, which is the reason that we have developed a p-norm-based optimization algorithm to obtain higher-order cubature points. In this way, we obtain and tabulate new cubature points with all positive integration weights for degrees 7 to 9. The dispersion analysis illustrates that the dispersion relation determined from the new optimized cubature points is comparable to that of the mass and stiffness matrices obtained by exact integration. Simultaneously, the Lebesgue constant for the new optimized cubature points indicates its surprisingly good interpolation properties. As a result, such points provide both good interpolation properties and integration accuracy. The Courant–Friedrichs–Lewy (CFL) numbers are tabulated for the conventional Fekete-based triangular spectral element (TSEM), the TSEM with exact integration, and the optimized cubature-based TSEM (OTSEM). A complementary study demonstrates the spectral convergence of the OTSEM. A numerical example conducted on a half-space model demonstrates that the OTSEM improves the accuracy by approximately one order of magnitude compared to the conventional Fekete-based TSEM. In particular, the accuracy of the 7th-order OTSEM is even higher than that of the 14th-order Fekete-based TSEM. Furthermore, the OTSEM produces a result that can compete in accuracy with the quadrilateral SEM (QSEM). The high accuracy of the OTSEM is also tested with a non-flat topography model. In terms of computational
A Hybrid Maximum Power Point Tracking Method for Automobile Exhaust Thermoelectric Generator
Quan, Rui; Zhou, Wei; Yang, Guangyou; Quan, Shuhai
2017-05-01
To make full use of the maximum output power of automobile exhaust thermoelectric generator (AETEG) based on Bi2Te3 thermoelectric modules (TEMs), taking into account the advantages and disadvantages of existing maximum power point tracking methods, and according to the output characteristics of TEMs, a hybrid maximum power point tracking method combining perturb and observe (P&O) algorithm, quadratic interpolation and constant voltage tracking method was put forward in this paper. Firstly, it searched the maximum power point with P&O algorithms and a quadratic interpolation method, then, it forced the AETEG to work at its maximum power point with constant voltage tracking. A synchronous buck converter and controller were implemented in the electric bus of the AETEG applied in a military sports utility vehicle, and the whole system was modeled and simulated with a MATLAB/Simulink environment. Simulation results demonstrate that the maximum output power of the AETEG based on the proposed hybrid method is increased by about 3.0% and 3.7% compared with that using only the P&O algorithm and the quadratic interpolation method, respectively. The shorter tracking time is only 1.4 s, which is reduced by half compared with that of the P&O algorithm and quadratic interpolation method, respectively. The experimental results demonstrate that the tracked maximum power is approximately equal to the real value using the proposed hybrid method,and it can preferentially deal with the voltage fluctuation of the AETEG with only P&O algorithm, and resolve the issue that its working point can barely be adjusted only with constant voltage tracking when the operation conditions change.
Methods of fast, multiple-point in vivo T1 determination
International Nuclear Information System (INIS)
Zhang, Y.; Spigarelli, M.; Fencil, L.E.; Yeung, H.N.
1989-01-01
Two methods of rapid, multiple-point determination of T1 in vivo have been evaluated with a phantom consisting of vials of gel in different Mn + + concentrations. The first method was an inversion-recovery- on-the-fly technique, and the second method used a variable- tip-angle (α) progressive saturation with two sub- sequences of different repetition times. In the first method, 1/T1 was evaluated by an exponential fit. In the second method, 1/T1 was obtained iteratively with a linear fit and then readjusted together with α to a model equation until self-consistency was reached
Interior Point Methods on GPU with application to Model Predictive Control
DEFF Research Database (Denmark)
Gade-Nielsen, Nicolai Fog
The goal of this thesis is to investigate the application of interior point methods to solve dynamical optimization problems, using a graphical processing unit (GPU) with a focus on problems arising in Model Predictice Control (MPC). Multi-core processors have been available for over ten years now...... software package called GPUOPT, available under the non-restrictive MIT license. GPUOPT includes includes a primal-dual interior-point method, which supports both the CPU and the GPU. It is implemented as multiple components, where the matrix operations and solver for the Newton directions is separated...
Multiscale Modeling using Molecular Dynamics and Dual Domain Material Point Method
Energy Technology Data Exchange (ETDEWEB)
Dhakal, Tilak Raj [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). Theoretical Division. Fluid Dynamics and Solid Mechanics Group, T-3; Rice Univ., Houston, TX (United States)
2016-07-07
For problems involving large material deformation rate, the material deformation time scale can be shorter than the material takes to reach a thermodynamical equilibrium. For such problems, it is difficult to obtain a constitutive relation. History dependency become important because of thermodynamic non-equilibrium. Our goal is to build a multi-scale numerical method which can bypass the need for a constitutive relation. In conclusion, multi-scale simulation method is developed based on the dual domain material point (DDMP). Molecular dynamics (MD) simulation is performed to calculate stress. Since the communication among material points is not necessary, the computation can be done embarrassingly parallel in CPU-GPU platform.
A primal-dual interior point method for large-scale free material optimization
DEFF Research Database (Denmark)
Weldeyesus, Alemseged Gebrehiwot; Stolpe, Mathias
2015-01-01
Free Material Optimization (FMO) is a branch of structural optimization in which the design variable is the elastic material tensor that is allowed to vary over the design domain. The requirements are that the material tensor is symmetric positive semidefinite with bounded trace. The resulting...... optimization problem is a nonlinear semidefinite program with many small matrix inequalities for which a special-purpose optimization method should be developed. The objective of this article is to propose an efficient primal-dual interior point method for FMO that can robustly and accurately solve large...... of iterations the interior point method requires is modest and increases only marginally with problem size. The computed optimal solutions obtain a higher precision than other available special-purpose methods for FMO. The efficiency and robustness of the method is demonstrated by numerical experiments on a set...
Reliability of an experimental method to analyse the impact point on a golf ball during putting.
Richardson, Ashley K; Mitchell, Andrew C S; Hughes, Gerwyn
2015-06-01
This study aimed to examine the reliability of an experimental method identifying the location of the impact point on a golf ball during putting. Forty trials were completed using a mechanical putting robot set to reproduce a putt of 3.2 m, with four different putter-ball combinations. After locating the centre of the dimple pattern (centroid) the following variables were tested; distance of the impact point from the centroid, angle of the impact point from the centroid and distance of the impact point from the centroid derived from the X, Y coordinates. Good to excellent reliability was demonstrated in all impact variables reflected in very strong relative (ICC = 0.98-1.00) and absolute reliability (SEM% = 0.9-4.3%). The highest SEM% observed was 7% for the angle of the impact point from the centroid. In conclusion, the experimental method was shown to be reliable at locating the centroid location of a golf ball, therefore allowing for the identification of the point of impact with the putter head and is suitable for use in subsequent studies.
Nosikov, I. A.; Klimenko, M. V.; Bessarab, P. F.; Zhbankov, G. A.
2017-07-01
Point-to-point ray tracing is an important problem in many fields of science. While direct variational methods where some trajectory is transformed to an optimal one are routinely used in calculations of pathways of seismic waves, chemical reactions, diffusion processes, etc., this approach is not widely known in ionospheric point-to-point ray tracing. We apply the Nudged Elastic Band (NEB) method to a radio wave propagation problem. In the NEB method, a chain of points which gives a discrete representation of the radio wave ray is adjusted iteratively to an optimal configuration satisfying the Fermat's principle, while the endpoints of the trajectory are kept fixed according to the boundary conditions. Transverse displacements define the radio ray trajectory, while springs between the points control their distribution along the ray. The method is applied to a study of point-to-point ionospheric ray tracing, where the propagation medium is obtained with the International Reference Ionosphere model taking into account traveling ionospheric disturbances. A 2-dimensional representation of the optical path functional is developed and used to gain insight into the fundamental difference between high and low rays. We conclude that high and low rays are minima and saddle points of the optical path functional, respectively.
Cassereau, Didier; Nauleau, Pierre; Bendjoudi, Aniss; Minonzio, Jean-Gabriel; Laugier, Pascal; Bossy, Emmanuel; Grimal, Quentin
2014-07-01
The development of novel quantitative ultrasound (QUS) techniques to measure the hip is critically dependent on the possibility to simulate the ultrasound propagation. One specificity of hip QUS is that ultrasounds propagate through a large thickness of soft tissue, which can be modeled by a homogeneous fluid in a first approach. Finite difference time domain (FDTD) algorithms have been widely used to simulate QUS measurements but they are not adapted to simulate ultrasonic propagation over long distances in homogeneous media. In this paper, an hybrid numerical method is presented to simulate hip QUS measurements. A two-dimensional FDTD simulation in the vicinity of the bone is coupled to the semi-analytic calculation of the Rayleigh integral to compute the wave propagation between the probe and the bone. The method is used to simulate a setup dedicated to the measurement of circumferential guided waves in the cortical compartment of the femoral neck. The proposed approach is validated by comparison with a full FDTD simulation and with an experiment on a bone phantom. For a realistic QUS configuration, the computation time is estimated to be sixty times less with the hybrid method than with a full FDTD approach. Copyright © 2013 Elsevier B.V. All rights reserved.
Directory of Open Access Journals (Sweden)
Mroczka Janusz
2014-12-01
Full Text Available Photovoltaic panels have a non-linear current-voltage characteristics to produce the maximum power at only one point called the maximum power point. In the case of the uniform illumination a single solar panel shows only one maximum power, which is also the global maximum power point. In the case an irregularly illuminated photovoltaic panel many local maxima on the power-voltage curve can be observed and only one of them is the global maximum. The proposed algorithm detects whether a solar panel is in the uniform insolation conditions. Then an appropriate strategy of tracking the maximum power point is taken using a decision algorithm. The proposed method is simulated in the environment created by the authors, which allows to stimulate photovoltaic panels in real conditions of lighting, temperature and shading.
Kholeif, S A
2001-06-01
A new method that belongs to the differential category for determining the end points from potentiometric titration curves is presented. It uses a preprocess to find first derivative values by fitting four data points in and around the region of inflection to a non-linear function, and then locate the end point, usually as a maximum or minimum, using an inverse parabolic interpolation procedure that has an analytical solution. The behavior and accuracy of the sigmoid and cumulative non-linear functions used are investigated against three factors. A statistical evaluation of the new method using linear least-squares method validation and multifactor data analysis are covered. The new method is generally applied to symmetrical and unsymmetrical potentiometric titration curves, and the end point is calculated using numerical procedures only. It outperforms the "parent" regular differential method in almost all factors levels and gives accurate results comparable to the true or estimated true end points. Calculated end points from selected experimental titration curves compatible with the equivalence point category of methods, such as Gran or Fortuin, are also compared with the new method.
A portable low-cost 3D point cloud acquiring method based on structure light
Gui, Li; Zheng, Shunyi; Huang, Xia; Zhao, Like; Ma, Hao; Ge, Chao; Tang, Qiuxia
2018-03-01
A fast and low-cost method of acquiring 3D point cloud data is proposed in this paper, which can solve the problems of lack of texture information and low efficiency of acquiring point cloud data with only one pair of cheap cameras and projector. Firstly, we put forward a scene adaptive design method of random encoding pattern, that is, a coding pattern is projected onto the target surface in order to form texture information, which is favorable for image matching. Subsequently, we design an efficient dense matching algorithm that fits the projected texture. After the optimization of global algorithm and multi-kernel parallel development with the fusion of hardware and software, a fast acquisition system of point-cloud data is accomplished. Through the evaluation of point cloud accuracy, the results show that point cloud acquired by the method proposed in this paper has higher precision. What`s more, the scanning speed meets the demand of dynamic occasion and has better practical application value.
A comparative study of the maximum power point tracking methods for PV systems
International Nuclear Information System (INIS)
Liu, Yali; Li, Ming; Ji, Xu; Luo, Xi; Wang, Meidi; Zhang, Ying
2014-01-01
Highlights: • An improved maximum power point tracking method for PV system was proposed. • Theoretical derivation procedure of the proposed method was provided. • Simulation models of MPPT trackers were established based on MATLAB/Simulink. • Experiments were conducted to verify the effectiveness of the proposed MPPT method. - Abstract: Maximum power point tracking (MPPT) algorithms play an important role in the optimization of the power and efficiency of a photovoltaic (PV) generation system. According to the contradiction of the classical Perturb and Observe (P and Oa) method between the corresponding speed and the tracking accuracy on steady-state, an improved P and O (P and Ob) method has been put forward in this paper by using the Atken interpolation algorithm. To validate the correctness and performance of the proposed method, simulation and experimental study have been implemented. Simulation models of classical P and Oa method and improved P and Ob method have been established by MATLAB/Simulink to analyze each technique under varying solar irradiation and temperature. The experimental results show that the tracking efficiency of P and Ob method is an average of 93% compared to 72% for P and Oa method, this conclusion basically agree with the simulation study. Finally, we proposed the applicable conditions and scope of these MPPT methods in the practical application
Evaluation of a 3D point cloud tetrahedral tomographic reconstruction method
Energy Technology Data Exchange (ETDEWEB)
Pereira, N F; Sitek, A, E-mail: nfp4@bwh.harvard.ed, E-mail: asitek@bwh.harvard.ed [Department of Radiology, Brigham and Women' s Hospital-Harvard Medical School Boston, MA (United States)
2010-09-21
Tomographic reconstruction on an irregular grid may be superior to reconstruction on a regular grid. This is achieved through an appropriate choice of the image space model, the selection of an optimal set of points and the use of any available prior information during the reconstruction process. Accordingly, a number of reconstruction-related parameters must be optimized for best performance. In this work, a 3D point cloud tetrahedral mesh reconstruction method is evaluated for quantitative tasks. A linear image model is employed to obtain the reconstruction system matrix and five point generation strategies are studied. The evaluation is performed using the recovery coefficient, as well as voxel- and template-based estimates of bias and variance measures, computed over specific regions in the reconstructed image. A similar analysis is performed for regular grid reconstructions that use voxel basis functions. The maximum likelihood expectation maximization reconstruction algorithm is used. For the tetrahedral reconstructions, of the five point generation methods that are evaluated, three use image priors. For evaluation purposes, an object consisting of overlapping spheres with varying activity is simulated. The exact parallel projection data of this object are obtained analytically using a parallel projector, and multiple Poisson noise realizations of these exact data are generated and reconstructed using the different point generation strategies. The unconstrained nature of point placement in some of the irregular mesh-based reconstruction strategies has superior activity recovery for small, low-contrast image regions. The results show that, with an appropriately generated set of mesh points, the irregular grid reconstruction methods can out-perform reconstructions on a regular grid for mathematical phantoms, in terms of the performance measures evaluated.
Evaluation of a 3D point cloud tetrahedral tomographic reconstruction method
Pereira, N. F.; Sitek, A.
2010-09-01
Tomographic reconstruction on an irregular grid may be superior to reconstruction on a regular grid. This is achieved through an appropriate choice of the image space model, the selection of an optimal set of points and the use of any available prior information during the reconstruction process. Accordingly, a number of reconstruction-related parameters must be optimized for best performance. In this work, a 3D point cloud tetrahedral mesh reconstruction method is evaluated for quantitative tasks. A linear image model is employed to obtain the reconstruction system matrix and five point generation strategies are studied. The evaluation is performed using the recovery coefficient, as well as voxel- and template-based estimates of bias and variance measures, computed over specific regions in the reconstructed image. A similar analysis is performed for regular grid reconstructions that use voxel basis functions. The maximum likelihood expectation maximization reconstruction algorithm is used. For the tetrahedral reconstructions, of the five point generation methods that are evaluated, three use image priors. For evaluation purposes, an object consisting of overlapping spheres with varying activity is simulated. The exact parallel projection data of this object are obtained analytically using a parallel projector, and multiple Poisson noise realizations of these exact data are generated and reconstructed using the different point generation strategies. The unconstrained nature of point placement in some of the irregular mesh-based reconstruction strategies has superior activity recovery for small, low-contrast image regions. The results show that, with an appropriately generated set of mesh points, the irregular grid reconstruction methods can out-perform reconstructions on a regular grid for mathematical phantoms, in terms of the performance measures evaluated.
Evaluation of a 3D point cloud tetrahedral tomographic reconstruction method
International Nuclear Information System (INIS)
Pereira, N F; Sitek, A
2010-01-01
Tomographic reconstruction on an irregular grid may be superior to reconstruction on a regular grid. This is achieved through an appropriate choice of the image space model, the selection of an optimal set of points and the use of any available prior information during the reconstruction process. Accordingly, a number of reconstruction-related parameters must be optimized for best performance. In this work, a 3D point cloud tetrahedral mesh reconstruction method is evaluated for quantitative tasks. A linear image model is employed to obtain the reconstruction system matrix and five point generation strategies are studied. The evaluation is performed using the recovery coefficient, as well as voxel- and template-based estimates of bias and variance measures, computed over specific regions in the reconstructed image. A similar analysis is performed for regular grid reconstructions that use voxel basis functions. The maximum likelihood expectation maximization reconstruction algorithm is used. For the tetrahedral reconstructions, of the five point generation methods that are evaluated, three use image priors. For evaluation purposes, an object consisting of overlapping spheres with varying activity is simulated. The exact parallel projection data of this object are obtained analytically using a parallel projector, and multiple Poisson noise realizations of these exact data are generated and reconstructed using the different point generation strategies. The unconstrained nature of point placement in some of the irregular mesh-based reconstruction strategies has superior activity recovery for small, low-contrast image regions. The results show that, with an appropriately generated set of mesh points, the irregular grid reconstruction methods can out-perform reconstructions on a regular grid for mathematical phantoms, in terms of the performance measures evaluated.
Provisional-Ideal-Point-Based Multi-objective Optimization Method for Drone Delivery Problem
Omagari, Hiroki; Higashino, Shin-Ichiro
2018-04-01
In this paper, we proposed a new evolutionary multi-objective optimization method for solving drone delivery problems (DDP). It can be formulated as a constrained multi-objective optimization problem. In our previous research, we proposed the "aspiration-point-based method" to solve multi-objective optimization problems. However, this method needs to calculate the optimal values of each objective function value in advance. Moreover, it does not consider the constraint conditions except for the objective functions. Therefore, it cannot apply to DDP which has many constraint conditions. To solve these issues, we proposed "provisional-ideal-point-based method." The proposed method defines a "penalty value" to search for feasible solutions. It also defines a new reference solution named "provisional-ideal point" to search for the preferred solution for a decision maker. In this way, we can eliminate the preliminary calculations and its limited application scope. The results of the benchmark test problems show that the proposed method can generate the preferred solution efficiently. The usefulness of the proposed method is also demonstrated by applying it to DDP. As a result, the delivery path when combining one drone and one truck drastically reduces the traveling distance and the delivery time compared with the case of using only one truck.
Directory of Open Access Journals (Sweden)
Chia-Ming Chang
2017-01-01
Full Text Available Objective: Despite its possible role in knee arthroplasty, the proximal-distal condylar length (PDCL of the femur has never been reported in the literature. We conducted an anatomic study of the proximal aspect of the medial femoral condyle to propose a method for measuring the PDCL. Materials and Methods: Inspection of dried bone specimens was carried out to assure the most proximal condylar margin (MPCM as the eligible starting point to measure the PDCL. Simulation surgery was performed on seven pairs of cadaveric knees to verify the clinical application of measuring the PDCL after locating the MPCM. Interobserver reliability of this procedure was also analyzed. Results: Observation of the bone specimens showed that the MPCM is a concavity formed by the junction of the distal end of the supracondylar ridge and the proximal margin of the medial condyle. This anatomically distinctive structure made the MPCM an unambiguous landmark. The cadaveric simulation surgical dissection demonstrated that the MPCM is easily accessed in a surgical setting, making the measurement of the PDCL plausible. The intraclass correlation coefficient was 0.78, indicating good interobserver reliability for this technique. Conclusion: This study has suggested that the PDCL can be measured based on the MPCM in a surgical setting. PDCL measurement might be useful in joint line position management, selection of femoral component sizes, and other applications related to the proximal-distal dimension of the knee. Further investigation is required.
International Nuclear Information System (INIS)
Rachakonda, Prem; Muralikrishnan, Bala; Lee, Vincent; Shilling, Meghan; Sawyer, Daniel; Cournoyer, Luc; Cheok, Geraldine
2017-01-01
The Dimensional Metrology Group at the National Institute of Standards and Technology is performing research to support the development of documentary standards within the ASTM E57 committee. This committee is addressing the point-to-point performance evaluation of a subclass of 3D imaging systems called terrestrial laser scanners (TLSs), which are laser-based and use a spherical coordinate system. This paper discusses the usage of sphere targets for this effort, and methods to minimize the errors due to the determination of their centers. The key contributions of this paper include methods to segment sphere data from a TLS point cloud, and the study of some of the factors that influence the determination of sphere centers. (paper)
Energy Technology Data Exchange (ETDEWEB)
Liu, Wenyang [Department of Bioengineering, University of California, Los Angeles, California 90095 (United States); Cheung, Yam; Sabouri, Pouya; Arai, Tatsuya J.; Sawant, Amit [Department of Radiation Oncology, University of Texas Southwestern, Dallas, Texas 75390 (United States); Ruan, Dan, E-mail: druan@mednet.ucla.edu [Department of Bioengineering, University of California, Los Angeles, California 90095 and Department of Radiation Oncology, University of California, Los Angeles, California 90095 (United States)
2015-11-15
Purpose: To accurately and efficiently reconstruct a continuous surface from noisy point clouds captured by a surface photogrammetry system (VisionRT). Methods: The authors have developed a level-set based surface reconstruction method on point clouds captured by a surface photogrammetry system (VisionRT). The proposed method reconstructs an implicit and continuous representation of the underlying patient surface by optimizing a regularized fitting energy, offering extra robustness to noise and missing measurements. By contrast to explicit/discrete meshing-type schemes, their continuous representation is particularly advantageous for subsequent surface registration and motion tracking by eliminating the need for maintaining explicit point correspondences as in discrete models. The authors solve the proposed method with an efficient narrowband evolving scheme. The authors evaluated the proposed method on both phantom and human subject data with two sets of complementary experiments. In the first set of experiment, the authors generated a series of surfaces each with different black patches placed on one chest phantom. The resulting VisionRT measurements from the patched area had different degree of noise and missing levels, since VisionRT has difficulties in detecting dark surfaces. The authors applied the proposed method to point clouds acquired under these different configurations, and quantitatively evaluated reconstructed surfaces by comparing against a high-quality reference surface with respect to root mean squared error (RMSE). In the second set of experiment, the authors applied their method to 100 clinical point clouds acquired from one human subject. In the absence of ground-truth, the authors qualitatively validated reconstructed surfaces by comparing the local geometry, specifically mean curvature distributions, against that of the surface extracted from a high-quality CT obtained from the same patient. Results: On phantom point clouds, their method
International Nuclear Information System (INIS)
Park, Yujin; Kazantzis, Nikolaos; Parlos, Alexander G.; Chong, Kil To
2013-01-01
Highlights: • Numerical solution for stiff differential equations using matrix exponential method. • The approximation is based on First Order Hold assumption. • Various input examples applied to the point kinetics equations. • The method shows superior useful and effective activity. - Abstract: A system of nonlinear differential equations is derived to model the dynamics of neutron density and the delayed neutron precursors within a point kinetics equation modeling framework for a nuclear reactor. The point kinetic equations are mathematically characterized as stiff, occasionally nonlinear, ordinary differential equations, posing significant challenges when numerical solutions are sought and traditionally resulting in the need for smaller time step intervals within various computational schemes. In light of the above realization, the present paper proposes a new discretization method inspired by system-theoretic notions and technically based on a combination of the matrix exponential method (MEM) and the First-Order Hold (FOH) assumption. Under the proposed time discretization structure, the sampled-data representation of the nonlinear point kinetic system of equations is derived. The performance of the proposed time discretization procedure is evaluated using several case studies with sinusoidal reactivity profiles and multiple input examples (reactivity and neutron source function). It is shown, that by applying the proposed method under a First-Order Hold for the neutron density and the precursor concentrations at each time step interval, the stiffness problem associated with the point kinetic equations can be adequately addressed and resolved. Finally, as evidenced by the aforementioned detailed simulation studies, the proposed method retains its validity and accuracy for a wide range of reactor operating conditions, including large sampling periods dictated by physical and/or technical limitations associated with the current state of sensor and
A new integral method for solving the point reactor neutron kinetics equations
International Nuclear Information System (INIS)
Li Haofeng; Chen Wenzhen; Luo Lei; Zhu Qian
2009-01-01
A numerical integral method that efficiently provides the solution of the point kinetics equations by using the better basis function (BBF) for the approximation of the neutron density in one time step integrations is described and investigated. The approach is based on an exact analytic integration of the neutron density equation, where the stiffness of the equations is overcome by the fully implicit formulation. The procedure is tested by using a variety of reactivity functions, including step reactivity insertion, ramp input and oscillatory reactivity changes. The solution of the better basis function method is compared to other analytical and numerical solutions of the point reactor kinetics equations. The results show that selecting a better basis function can improve the efficiency and accuracy of this integral method. The better basis function method can be used in real time forecasting for power reactors in order to prevent reactivity accidents.
A simple method for determining the critical point of the soil water retention curve
DEFF Research Database (Denmark)
Chen, Chong; Hu, Kelin; Ren, Tusheng
2017-01-01
he transition point between capillary water and adsorbed water, which is the critical point Pc [defined by the critical matric potential (ψc) and the critical water content (θc)] of the soil water retention curve (SWRC), demarcates the energy and water content region where flow is dominated......, a fixed tangent line method was developed to estimate Pc as an alternative to the commonly used flexible tangent line method. The relationships between Pc, and particle-size distribution and specific surface area (SSA) were analyzed. For 27 soils with various textures, the mean RMSE of water content from...... the fixed tangent line method was 0.007 g g–1, which was slightly better than that of the flexible tangent line method. With increasing clay content or SSA, ψc was more negative initially but became less negative at clay contents above ∼30%. Increasing the silt contents resulted in more negative ψc values...
Liu, Youshan; Teng, Jiwen; Xu, Tao; Badal, José
2017-05-01
The mass-lumped method avoids the cost of inverting the mass matrix and simultaneously maintains spatial accuracy by adopting additional interior integration points, known as cubature points. To date, such points are only known analytically in tensor domains, such as quadrilateral or hexahedral elements. Thus, the diagonal-mass-matrix spectral element method (SEM) in non-tensor domains always relies on numerically computed interpolation points or quadrature points. However, only the cubature points for degrees 1 to 6 are known, which is the reason that we have developed a p-norm-based optimization algorithm to obtain higher-order cubature points. In this way, we obtain and tabulate new cubature points with all positive integration weights for degrees 7 to 9. The dispersion analysis illustrates that the dispersion relation determined from the new optimized cubature points is comparable to that of the mass and stiffness matrices obtained by exact integration. Simultaneously, the Lebesgue constant for the new optimized cubature points indicates its surprisingly good interpolation properties. As a result, such points provide both good interpolation properties and integration accuracy. The Courant-Friedrichs-Lewy (CFL) numbers are tabulated for the conventional Fekete-based triangular spectral element (TSEM), the TSEM with exact integration, and the optimized cubature-based TSEM (OTSEM). A complementary study demonstrates the spectral convergence of the OTSEM. A numerical example conducted on a half-space model demonstrates that the OTSEM improves the accuracy by approximately one order of magnitude compared to the conventional Fekete-based TSEM. In particular, the accuracy of the 7th-order OTSEM is even higher than that of the 14th-order Fekete-based TSEM. Furthermore, the OTSEM produces a result that can compete in accuracy with the quadrilateral SEM (QSEM). The high accuracy of the OTSEM is also tested with a non-flat topography model. In terms of computational
International Nuclear Information System (INIS)
Feng Guangwen; Hu Youhua; Liu Qian
2009-01-01
In this paper, the application of the entropy weight TOPSIS method to optimal layout points in monitoring the Xinjiang radiation environment has been indroduced. With the help of SAS software, It has been found that the method is more ideal and feasible. The method can provide a reference for us to monitor radiation environment in the same regions further. As the method could bring great convenience and greatly reduce the inspecting work, it is very simple, flexible and effective for a comprehensive evaluation. (authors)
Liu, Wenyang; Cheung, Yam; Sabouri, Pouya; Arai, Tatsuya J; Sawant, Amit; Ruan, Dan
2015-11-01
To accurately and efficiently reconstruct a continuous surface from noisy point clouds captured by a surface photogrammetry system (VisionRT). The authors have developed a level-set based surface reconstruction method on point clouds captured by a surface photogrammetry system (VisionRT). The proposed method reconstructs an implicit and continuous representation of the underlying patient surface by optimizing a regularized fitting energy, offering extra robustness to noise and missing measurements. By contrast to explicit/discrete meshing-type schemes, their continuous representation is particularly advantageous for subsequent surface registration and motion tracking by eliminating the need for maintaining explicit point correspondences as in discrete models. The authors solve the proposed method with an efficient narrowband evolving scheme. The authors evaluated the proposed method on both phantom and human subject data with two sets of complementary experiments. In the first set of experiment, the authors generated a series of surfaces each with different black patches placed on one chest phantom. The resulting VisionRT measurements from the patched area had different degree of noise and missing levels, since VisionRT has difficulties in detecting dark surfaces. The authors applied the proposed method to point clouds acquired under these different configurations, and quantitatively evaluated reconstructed surfaces by comparing against a high-quality reference surface with respect to root mean squared error (RMSE). In the second set of experiment, the authors applied their method to 100 clinical point clouds acquired from one human subject. In the absence of ground-truth, the authors qualitatively validated reconstructed surfaces by comparing the local geometry, specifically mean curvature distributions, against that of the surface extracted from a high-quality CT obtained from the same patient. On phantom point clouds, their method achieved submillimeter
Infeasible Interior-Point Methods for Linear Optimization Based on Large Neighborhood
Asadi, A.R.; Roos, C.
2015-01-01
In this paper, we design a class of infeasible interior-point methods for linear optimization based on large neighborhood. The algorithm is inspired by a full-Newton step infeasible algorithm with a linear convergence rate in problem dimension that was recently proposed by the second author.
Method of Check of Statistical Hypotheses for Revealing of “Fraud” Point of Sale
Directory of Open Access Journals (Sweden)
T. M. Bolotskaya
2011-06-01
Full Text Available Application method checking of statistical hypotheses fraud Point of Sale working with purchasing cards and suspected of accomplishment of unauthorized operations is analyzed. On the basis of the received results the algorithm is developed, allowing receive an assessment of works of terminals in regime off-line.
A Riccati-Based Interior Point Method for Efficient Model Predictive Control of SISO Systems
DEFF Research Database (Denmark)
Hagdrup, Morten; Johansson, Rolf; Bagterp Jørgensen, John
2017-01-01
model parts separate. The controller is designed based on the deterministic model, while the Kalman filter results from the stochastic part. The controller is implemented as a primal-dual interior point (IP) method using Riccati recursion and the computational savings possible for SISO systems...
A novel maximum power point tracking method for PV systems using fuzzy cognitive networks (FCN)
Energy Technology Data Exchange (ETDEWEB)
Karlis, A.D. [Electrical Machines Laboratory, Department of Electrical & amp; Computer Engineering, Democritus University of Thrace, V. Sofias 12, 67100 Xanthi (Greece); Kottas, T.L.; Boutalis, Y.S. [Automatic Control Systems Laboratory, Department of Electrical & amp; Computer Engineering, Democritus University of Thrace, V. Sofias 12, 67100 Xanthi (Greece)
2007-03-15
Maximum power point trackers (MPPTs) play an important role in photovoltaic (PV) power systems because they maximize the power output from a PV system for a given set of conditions, and therefore maximize the array efficiency. This paper presents a novel MPPT method based on fuzzy cognitive networks (FCN). The new method gives a good maximum power operation of any PV array under different conditions such as changing insolation and temperature. The numerical results show the effectiveness of the proposed algorithm. (author)
Two-point method uncertainty during control and measurement of cylindrical element diameters
Glukhov, V. I.; Shalay, V. V.; Radev, H.
2018-04-01
The topic of the article is devoted to the urgent problem of the reliability of technical products geometric specifications measurements. The purpose of the article is to improve the quality of parts linear sizes control by the two-point measurement method. The article task is to investigate methodical extended uncertainties in measuring cylindrical element linear sizes. The investigation method is a geometric modeling of the element surfaces shape and location deviations in a rectangular coordinate system. The studies were carried out for elements of various service use, taking into account their informativeness, corresponding to the kinematic pairs classes in theoretical mechanics and the number of constrained degrees of freedom in the datum element function. Cylindrical elements with informativity of 4, 2, 1 and θ (zero) were investigated. The uncertainties estimation of in two-point measurements was made by comparing the results of of linear dimensions measurements with the functional diameters maximum and minimum of the element material. Methodical uncertainty is formed when cylindrical elements with maximum informativeness have shape deviations of the cut and the curvature types. Methodical uncertainty is formed by measuring the element average size for all types of shape deviations. The two-point measurement method cannot take into account the location deviations of a dimensional element, so its use for elements with informativeness less than the maximum creates unacceptable methodical uncertainties in measurements of the maximum, minimum and medium linear dimensions. Similar methodical uncertainties also exist in the arbitration control of the linear dimensions of the cylindrical elements by limiting two-point gauges.
Interior-Point Method for Non-Linear Non-Convex Optimization
Czech Academy of Sciences Publication Activity Database
Lukšan, Ladislav; Matonoha, Ctirad; Vlček, Jan
2004-01-01
Roč. 11, č. 5-6 (2004), s. 431-453 ISSN 1070-5325 R&D Projects: GA AV ČR IAA1030103 Institutional research plan: CEZ:AV0Z1030915 Keywords : non-linear programming * interior point methods * indefinite systems * indefinite preconditioners * preconditioned conjugate gradient method * merit functions * algorithms * computational experiments Subject RIV: BA - General Mathematics Impact factor: 0.727, year: 2004
Limiting Accuracy of Segregated Solution Methods for Nonsymmetric Saddle Point Problems
Czech Academy of Sciences Publication Activity Database
Jiránek, P.; Rozložník, Miroslav
Roc. 215, c. 1 (2008), s. 28-37 ISSN 0377-0427 R&D Projects: GA MŠk 1M0554; GA AV ČR 1ET400300415 Institutional research plan: CEZ:AV0Z10300504 Keywords : saddle point problems * Schur complement reduction method * null-space projection method * rounding error analysis Subject RIV: BA - General Mathematics Impact factor: 1.048, year: 2008
DEFF Research Database (Denmark)
Choi, Uimin; Lee, Kyo-Beum; Blaabjerg, Frede
2013-01-01
This paper proposes a method to reduce the low-frequency neutral-point voltage oscillations. The neutral-point voltage oscillations are considerably reduced by adding a time-offset to the three phase turn-on times. The proper time-offset is simply calculated considering the phase currents and dwell...
A point-value enhanced finite volume method based on approximate delta functions
Xuan, Li-Jun; Majdalani, Joseph
2018-02-01
We revisit the concept of an approximate delta function (ADF), introduced by Huynh (2011) [1], in the form of a finite-order polynomial that holds identical integral properties to the Dirac delta function when used in conjunction with a finite-order polynomial integrand over a finite domain. We show that the use of generic ADF polynomials can be effective at recovering and generalizing several high-order methods, including Taylor-based and nodal-based Discontinuous Galerkin methods, as well as the Correction Procedure via Reconstruction. Based on the ADF concept, we then proceed to formulate a Point-value enhanced Finite Volume (PFV) method, which stores and updates the cell-averaged values inside each element as well as the unknown quantities and, if needed, their derivatives on nodal points. The sharing of nodal information with surrounding elements saves the number of degrees of freedom compared to other compact methods at the same order. To ensure conservation, cell-averaged values are updated using an identical approach to that adopted in the finite volume method. Here, the updating of nodal values and their derivatives is achieved through an ADF concept that leverages all of the elements within the domain of integration that share the same nodal point. The resulting scheme is shown to be very stable at successively increasing orders. Both accuracy and stability of the PFV method are verified using a Fourier analysis and through applications to the linear wave and nonlinear Burgers' equations in one-dimensional space.
Directory of Open Access Journals (Sweden)
Takahiro Yamaguchi
2015-05-01
Full Text Available As smartphones become widespread, a variety of smartphone applications are being developed. This paper proposes a method for indoor localization (i.e., positioning that uses only smartphones, which are general-purpose mobile terminals, as reference point devices. This method has the following features: (a the localization system is built with smartphones whose movements are confined to respective limited areas. No fixed reference point devices are used; (b the method does not depend on the wireless performance of smartphones and does not require information about the propagation characteristics of the radio waves sent from reference point devices, and (c the method determines the location at the application layer, at which location information can be easily incorporated into high-level services. We have evaluated the level of localization accuracy of the proposed method by building a software emulator that modeled an underground shopping mall. We have confirmed that the determined location is within a small area in which the user can find target objects visually.
DEFF Research Database (Denmark)
Structure from Motion (SFM) systems are composed of cameras and structure in the form of 3D points and other features. It is most often that the structure components outnumber the cameras by a great margin. It is not uncommon to have a configuration with 3 cameras observing more than 500 3D points...... an overview of existing triangulation methods with emphasis on performance versus optimality, and will suggest a fast triangulation algorithm based on linear constraints. The structure and camera motion estimation in a SFM system is based on the minimization of some norm of the reprojection error between...
A Survey on Methods for Reconstructing Surfaces from Unorganized Point Sets
Directory of Open Access Journals (Sweden)
Vilius Matiukas
2011-08-01
Full Text Available This paper addresses the issue of reconstructing and visualizing surfaces from unorganized point sets. These can be acquired using different techniques, such as 3D-laser scanning, computerized tomography, magnetic resonance imaging and multi-camera imaging. The problem of reconstructing surfaces from their unorganized point sets is common for many diverse areas, including computer graphics, computer vision, computational geometry or reverse engineering. The paper presents three alternative methods that all use variations in complementary cones to triangulate and reconstruct the tested 3D surfaces. The article evaluates and contrasts three alternatives.Article in English
An Improved Computational Method for the Calculation of Mixture Liquid-Vapor Critical Points
Dimitrakopoulos, Panagiotis; Jia, Wenlong; Li, Changjun
2014-05-01
Knowledge of critical points is important to determine the phase behavior of a mixture. This work proposes a reliable and accurate method in order to locate the liquid-vapor critical point of a given mixture. The theoretical model is developed from the rigorous definition of critical points, based on the SRK equation of state (SRK EoS) or alternatively, on the PR EoS. In order to solve the resulting system of nonlinear equations, an improved method is introduced into an existing Newton-Raphson algorithm, which can calculate all the variables simultaneously in each iteration step. The improvements mainly focus on the derivatives of the Jacobian matrix, on the convergence criteria, and on the damping coefficient. As a result, all equations and related conditions required for the computation of the scheme are illustrated in this paper. Finally, experimental data for the critical points of 44 mixtures are adopted in order to validate the method. For the SRK EoS, average absolute errors of the predicted critical-pressure and critical-temperature values are 123.82 kPa and 3.11 K, respectively, whereas the commercial software package Calsep PVTSIM's prediction errors are 131.02 kPa and 3.24 K. For the PR EoS, the two above mentioned average absolute errors are 129.32 kPa and 2.45 K, while the PVTSIM's errors are 137.24 kPa and 2.55 K, respectively.
Proximity credentials: A survey
International Nuclear Information System (INIS)
Wright, L.J.
1987-04-01
Credentials as a means of identifying individuals have traditionally been a photo badge and more recently, the coded credential. Another type of badge, the proximity credential, is making inroads in the personnel identification field. This badge can be read from a distance instead of being veiewed by a guard or inserted into a reading device. This report reviews proximity credentials, identifies the companies marketing or developing proximity credentials, and describes their respective credentials. 3 tabs
Directory of Open Access Journals (Sweden)
Wilson Rodríguez Calderón
2015-04-01
Full Text Available When we need to determine the solution of a nonlinear equation there are two options: closed-methods which use intervals that contain the root and during the iterative process reduce the size of natural way, and, open-methods that represent an attractive option as they do not require an initial interval enclosure. In general, we know open-methods are more efficient computationally though they do not always converge. In this paper we are presenting a divergence case analysis when we use the method of fixed point iteration to find the normal height in a rectangular channel using the Manning equation. To solve this problem, we propose applying two strategies (developed by authors that allow to modifying the iteration function making additional formulations of the traditional method and its convergence theorem. Although Manning equation is solved with other methods like Newton when we use the iteration method of fixed-point an interesting divergence situation is presented which can be solved with a convergence higher than quadratic over the initial iterations. The proposed strategies have been tested in two cases; a study of divergence of square root of real numbers was made previously by authors for testing. Results in both cases have been successful. We present comparisons because are important for seeing the advantage of proposed strategies versus the most representative open-methods.
Federal Laboratory Consortium — The Proximal Probes Facility consists of laboratories for microscopy, spectroscopy, and probing of nanostructured materials and their functional properties. At the...
An Introduction to the Material Point Method using a Case Study from Gas Dynamics
International Nuclear Information System (INIS)
Tran, L. T.; Kim, J.; Berzins, M.
2008-01-01
The Material Point Method (MPM) developed by Sulsky and colleagues is currently being used to solve many challenging problems involving large deformations and/or fragementations with considerable success as part of the Uintah code created by the CSAFE project. In order to understand the properties of this method an analysis of the considerable computational properties of MPM is undertaken in the context of model problems from gas dynamics. One aspect of the MPM method in the form used here is shown to have first order accuracy. Computational experiments using particle redistribution are described and show that smooth results with first order accuracy may be obtained.
A method for untriggered time-dependent searches for multiple flares from neutrino point sources
International Nuclear Information System (INIS)
Gora, D.; Bernardini, E.; Cruz Silva, A.H.
2011-04-01
A method for a time-dependent search for flaring astrophysical sources which can be potentially detected by large neutrino experiments is presented. The method uses a time-clustering algorithm combined with an unbinned likelihood procedure. By including in the likelihood function a signal term which describes the contribution of many small clusters of signal-like events, this method provides an effective way for looking for weak neutrino flares over different time-scales. The method is sensitive to an overall excess of events distributed over several flares which are not individually detectable. For standard cases (one flare) the discovery potential of the method is worse than a standard time-dependent point source analysis with unknown duration of the flare by a factor depending on the signal-to-background level. However, for flares sufficiently shorter than the total observation period, the method is more sensitive than a time-integrated analysis. (orig.)
A method for untriggered time-dependent searches for multiple flares from neutrino point sources
Energy Technology Data Exchange (ETDEWEB)
Gora, D. [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany); Institute of Nuclear Physics PAN, Cracow (Poland); Bernardini, E.; Cruz Silva, A.H. [Institute of Nuclear Physics PAN, Cracow (Poland)
2011-04-15
A method for a time-dependent search for flaring astrophysical sources which can be potentially detected by large neutrino experiments is presented. The method uses a time-clustering algorithm combined with an unbinned likelihood procedure. By including in the likelihood function a signal term which describes the contribution of many small clusters of signal-like events, this method provides an effective way for looking for weak neutrino flares over different time-scales. The method is sensitive to an overall excess of events distributed over several flares which are not individually detectable. For standard cases (one flare) the discovery potential of the method is worse than a standard time-dependent point source analysis with unknown duration of the flare by a factor depending on the signal-to-background level. However, for flares sufficiently shorter than the total observation period, the method is more sensitive than a time-integrated analysis. (orig.)
Institute of Scientific and Technical Information of China (English)
LIN; Kuang-Jang; LIN; Chii-Ruey
2010-01-01
The Photovoltaic Array has a best optimal operating point where the array operating can obtain the maximum power.However, the optimal operating point can be compromised by the strength of solar radiation,angle,and by the change of environment and load.Due to the constant changes in these conditions,it has become very difficult to locate the optimal operating point by following a mathematical model.Therefore,this study will focus mostly on the application of Fuzzy Logic Control theory and Three-point Weight Comparison Method in effort to locate the optimal operating point of solar panel and achieve maximum efficiency in power generation. The Three-point Weight Comparison Method is the comparison between the characteristic curves of the voltage of photovoltaic array and output power;it is a rather simple way to track the maximum power.The Fuzzy Logic Control,on the other hand,can be used to solve problems that cannot be effectively dealt with by calculation rules,such as concepts,contemplation, deductive reasoning,and identification.Therefore,this paper uses these two kinds of methods to make simulation successively. The simulation results show that,the Three-point Comparison Method is more effective under the environment with more frequent change of solar radiation;however,the Fuzzy Logic Control has better tacking efficiency under the environment with violent change of solar radiation.
A travel time forecasting model based on change-point detection method
LI, Shupeng; GUANG, Xiaoping; QIAN, Yongsheng; ZENG, Junwei
2017-06-01
Travel time parameters obtained from road traffic sensors data play an important role in traffic management practice. A travel time forecasting model is proposed for urban road traffic sensors data based on the method of change-point detection in this paper. The first-order differential operation is used for preprocessing over the actual loop data; a change-point detection algorithm is designed to classify the sequence of large number of travel time data items into several patterns; then a travel time forecasting model is established based on autoregressive integrated moving average (ARIMA) model. By computer simulation, different control parameters are chosen for adaptive change point search for travel time series, which is divided into several sections of similar state.Then linear weight function is used to fit travel time sequence and to forecast travel time. The results show that the model has high accuracy in travel time forecasting.
A novel method of measuring the concentration of anaesthetic vapours using a dew-point hygrometer.
Wilkes, A R; Mapleson, W W; Mecklenburgh, J S
1994-02-01
The Antoine equation relates the saturated vapour pressure of a volatile substance, such as an anaesthetic agent, to the temperature. The measurement of the 'dew-point' of a dry gas mixture containing a volatile anaesthetic agent by a dew-point hygrometer permits the determination of the partial pressure of the anaesthetic agent. The accuracy of this technique is limited only by the accuracy of the Antoine coefficients and of the temperature measurement. Comparing measurements by the dew-point method with measurements by refractometry showed systematic discrepancies up to 0.2% and random discrepancies with SDS up to 0.07% concentration in the 1% to 5% range for three volatile anaesthetics. The systematic discrepancies may be due to errors in available data for the vapour pressures and/or the refractive indices of the anaesthetics.
Collective mass and zero-point energy in the generator-coordinate method
International Nuclear Information System (INIS)
Fiolhais, C.
1982-01-01
The aim of the present thesis if the study of the collective mass parameters and the zero-point energies in the GCM framework with special regards to the fission process. After the derivation of the collective Schroedinger equation in the framework of the Gaussian overlap approximation the inertia parameters are compared with those of the adiabatic time-dependent Hartree-Fock method. Then the kinetic and the potential zero-point energy occurring in this formulation are studied. Thereafter the practical application of the described formalism is discussed. Then a numerical calculation of the GCM mass parameter and the zero-point energy for the fission process on the base of a two-center shell model with a pairing force in the BCS approximation is presented. (HSI) [de
SINGLE TREE DETECTION FROM AIRBORNE LASER SCANNING DATA USING A MARKED POINT PROCESS BASED METHOD
Directory of Open Access Journals (Sweden)
J. Zhang
2013-05-01
Full Text Available Tree detection and reconstruction is of great interest in large-scale city modelling. In this paper, we present a marked point process model to detect single trees from airborne laser scanning (ALS data. We consider single trees in ALS recovered canopy height model (CHM as a realization of point process of circles. Unlike traditional marked point process, we sample the model in a constraint configuration space by making use of image process techniques. A Gibbs energy is defined on the model, containing a data term which judge the fitness of the model with respect to the data, and prior term which incorporate the prior knowledge of object layouts. We search the optimal configuration through a steepest gradient descent algorithm. The presented hybrid framework was test on three forest plots and experiments show the effectiveness of the proposed method.
Validation of non-rigid point-set registration methods using a porcine bladder pelvic phantom
Zakariaee, Roja; Hamarneh, Ghassan; Brown, Colin J.; Spadinger, Ingrid
2016-01-01
The problem of accurate dose accumulation in fractionated radiotherapy treatment for highly deformable organs, such as bladder, has garnered increasing interest over the past few years. However, more research is required in order to find a robust and efficient solution and to increase the accuracy over the current methods. The purpose of this study was to evaluate the feasibility and accuracy of utilizing non-rigid (affine or deformable) point-set registration in accumulating dose in bladder of different sizes and shapes. A pelvic phantom was built to house an ex vivo porcine bladder with fiducial landmarks adhered onto its surface. Four different volume fillings of the bladder were used (90, 180, 360 and 480 cc). The performance of MATLAB implementations of five different methods were compared, in aligning the bladder contour point-sets. The approaches evaluated were coherent point drift (CPD), gaussian mixture model, shape context, thin-plate spline robust point matching (TPS-RPM) and finite iterative closest point (ICP-finite). The evaluation metrics included registration runtime, target registration error (TRE), root-mean-square error (RMS) and Hausdorff distance (HD). The reference (source) dataset was alternated through all four points-sets, in order to study the effect of reference volume on the registration outcomes. While all deformable algorithms provided reasonable registration results, CPD provided the best TRE values (6.4 mm), and TPS-RPM yielded the best mean RMS and HD values (1.4 and 6.8 mm, respectively). ICP-finite was the fastest technique and TPS-RPM, the slowest.
Validation of non-rigid point-set registration methods using a porcine bladder pelvic phantom
International Nuclear Information System (INIS)
Zakariaee, Roja; Hamarneh, Ghassan; Brown, Colin J; Spadinger, Ingrid
2016-01-01
The problem of accurate dose accumulation in fractionated radiotherapy treatment for highly deformable organs, such as bladder, has garnered increasing interest over the past few years. However, more research is required in order to find a robust and efficient solution and to increase the accuracy over the current methods. The purpose of this study was to evaluate the feasibility and accuracy of utilizing non-rigid (affine or deformable) point-set registration in accumulating dose in bladder of different sizes and shapes. A pelvic phantom was built to house an ex vivo porcine bladder with fiducial landmarks adhered onto its surface. Four different volume fillings of the bladder were used (90, 180, 360 and 480 cc). The performance of MATLAB implementations of five different methods were compared, in aligning the bladder contour point-sets. The approaches evaluated were coherent point drift (CPD), gaussian mixture model, shape context, thin-plate spline robust point matching (TPS-RPM) and finite iterative closest point (ICP-finite). The evaluation metrics included registration runtime, target registration error (TRE), root-mean-square error (RMS) and Hausdorff distance (HD). The reference (source) dataset was alternated through all four points-sets, in order to study the effect of reference volume on the registration outcomes. While all deformable algorithms provided reasonable registration results, CPD provided the best TRE values (6.4 mm), and TPS-RPM yielded the best mean RMS and HD values (1.4 and 6.8 mm, respectively). ICP-finite was the fastest technique and TPS-RPM, the slowest. (paper)
Mang, Samuel; Bucher, Hannes; Nickolaus, Peter
2016-01-01
The scintillation proximity assay (SPA) technology has been widely used to establish high throughput screens (HTS) for a range of targets in the pharmaceutical industry. PDE12 (aka. 2'- phosphodiesterase) has been published to participate in the degradation of oligoadenylates that are involved in the establishment of an antiviral state via the activation of ribonuclease L (RNAse-L). Degradation of oligoadenylates by PDE12 terminates these antiviral activities, leading to decreased resistance of cells for a variety of viral pathogens. Therefore inhibitors of PDE12 are discussed as antiviral therapy. Here we describe the use of the yttrium silicate SPA bead technology to assess inhibitory activity of compounds against PDE12 in a homogeneous, robust HTS feasible assay using tritiated adenosine-P-adenylate ([3H]ApA) as substrate. We found that the used [3H]ApA educt, was not able to bind to SPA beads, whereas the product [3H]AMP, as known before, was able to bind to SPA beads. This enables the measurement of PDE12 activity on [3H]ApA as a substrate using a wallac microbeta counter. This method describes a robust and high throughput capable format in terms of specificity, commonly used compound solvents, ease of detection and assay matrices. The method could facilitate the search for PDE12 inhibitors as antiviral compounds.
Directory of Open Access Journals (Sweden)
YANG Bisheng
2016-02-01
Full Text Available An efficient method of feature image generation of point clouds to automatically classify dense point clouds into different categories is proposed, such as terrain points, building points. The method first uses planar projection to sort points into different grids, then calculates the weights and feature values of grids according to the distribution of laser scanning points, and finally generates the feature image of point clouds. Thus, the proposed method adopts contour extraction and tracing means to extract the boundaries and point clouds of man-made objects (e.g. buildings and trees in 3D based on the image generated. Experiments show that the proposed method provides a promising solution for classifying and extracting man-made objects from vehicle-borne laser scanning point clouds.
Directory of Open Access Journals (Sweden)
Y. Susilo
2012-12-01
Full Text Available Research was carried out to obtain a selective ligand which strongly bind to estrogen receptors through determination of binding affinity of estradiol-17β-hemisuccinate. Selectivity of these compounds for estrogen receptor was studied using Scintillation Proximity Assay (SPA method. Primary reagents required in the SPA method including radioligand and receptor, the former was obtained by labeling of estradiol-17β-hemisuccinate with 125I, while MCF7 was used as the receptor. The labeling process was performed by indirect method via two-stage reaction. In this procedure, first step was activation of estradiol-17β-hemisuccinate using isobutylchloroformate and tributylamine as a catalist, while labeling of histamine with 125I was carried out using chloramin-T method to produce 125I-histamine. The second stage was conjugation of activated estradiol-17β-hemisuccinate with 125I-histamine. The product of estradiol-17β-hemisuccinate labeled 125I was extracted using toluene. Furtherly, the organic layer was purified by TLC system. Characterization of estradiol-17β-hemisuccinate labeled 125I from this solvent extraction was carried out by determining its radiochemical purity and the result was obtained using paper electrophoresis and TLC were 79.8% and 84.4% respectively. Radiochemical purity could be increased when purification step was repeated using TLC system, the result showed up to 97.8%. Determination of binding affinity by the SPA method was carried out using MCF7 cell lines which express estrogen receptors showed the value of Kd at 7.192 x 10-3 nM and maximum binding at 336.1 nM. This low value of Kd indicated that binding affinity of estradiol-17β-hemisuccinate was high or strongly binds to estrogen recepto
International Nuclear Information System (INIS)
Gao, H
2016-01-01
Purpose: This work is to develop a general framework, namely filtered iterative reconstruction (FIR) method, to incorporate analytical reconstruction (AR) method into iterative reconstruction (IR) method, for enhanced CT image quality. Methods: FIR is formulated as a combination of filtered data fidelity and sparsity regularization, and then solved by proximal forward-backward splitting (PFBS) algorithm. As a result, the image reconstruction decouples data fidelity and image regularization with a two-step iterative scheme, during which an AR-projection step updates the filtered data fidelity term, while a denoising solver updates the sparsity regularization term. During the AR-projection step, the image is projected to the data domain to form the data residual, and then reconstructed by certain AR to a residual image which is in turn weighted together with previous image iterate to form next image iterate. Since the eigenvalues of AR-projection operator are close to the unity, PFBS based FIR has a fast convergence. Results: The proposed FIR method is validated in the setting of circular cone-beam CT with AR being FDK and total-variation sparsity regularization, and has improved image quality from both AR and IR. For example, AIR has improved visual assessment and quantitative measurement in terms of both contrast and resolution, and reduced axial and half-fan artifacts. Conclusion: FIR is proposed to incorporate AR into IR, with an efficient image reconstruction algorithm based on PFBS. The CBCT results suggest that FIR synergizes AR and IR with improved image quality and reduced axial and half-fan artifacts. The authors was partially supported by the NSFC (#11405105), the 973 Program (#2015CB856000), and the Shanghai Pujiang Talent Program (#14PJ1404500).
Estimation Methods of the Point Spread Function Axial Position: A Comparative Computational Study
Directory of Open Access Journals (Sweden)
Javier Eduardo Diaz Zamboni
2017-01-01
Full Text Available The precise knowledge of the point spread function is central for any imaging system characterization. In fluorescence microscopy, point spread function (PSF determination has become a common and obligatory task for each new experimental device, mainly due to its strong dependence on acquisition conditions. During the last decade, algorithms have been developed for the precise calculation of the PSF, which fit model parameters that describe image formation on the microscope to experimental data. In order to contribute to this subject, a comparative study of three parameter estimation methods is reported, namely: I-divergence minimization (MIDIV, maximum likelihood (ML and non-linear least square (LSQR. They were applied to the estimation of the point source position on the optical axis, using a physical model. Methods’ performance was evaluated under different conditions and noise levels using synthetic images and considering success percentage, iteration number, computation time, accuracy and precision. The main results showed that the axial position estimation requires a high SNR to achieve an acceptable success level and higher still to be close to the estimation error lower bound. ML achieved a higher success percentage at lower SNR compared to MIDIV and LSQR with an intrinsic noise source. Only the ML and MIDIV methods achieved the error lower bound, but only with data belonging to the optical axis and high SNR. Extrinsic noise sources worsened the success percentage, but no difference was found between noise sources for the same method for all methods studied.
A Novel Complementary Method for the Point-Scan Nondestructive Tests Based on Lamb Waves
Directory of Open Access Journals (Sweden)
Rahim Gorgin
2014-01-01
Full Text Available This study presents a novel area-scan damage identification method based on Lamb waves which can be used as a complementary method for point-scan nondestructive techniques. The proposed technique is able to identify the most probable locations of damages prior to point-scan test which lead to decreasing the time and cost of inspection. The test-piece surface was partitioned with some smaller areas and the damage probability presence of each area was evaluated. A0 mode of Lamb wave was generated and collected using a mobile handmade transducer set at each area. Subsequently, a damage presence probability index (DPPI based on the energy of captured responses was defined for each area. The area with the highest DPPI value highlights the most probable locations of damages in test-piece. Point-scan nondestructive methods can then be used once these areas are found to identify the damage in detail. The approach was validated by predicting the most probable locations of representative damages including through-thickness hole and crack in aluminum plates. The obtained experimental results demonstrated the high potential of developed method in defining the most probable locations of damages in structures.
TREEDE, Point Fluxes and Currents Based on Track Rotation Estimator by Monte-Carlo Method
International Nuclear Information System (INIS)
Dubi, A.
1985-01-01
1 - Description of problem or function: TREEDE is a Monte Carlo transport code based on the Track Rotation estimator, used, in general, to calculate fluxes and currents at a point. This code served as a test code in the development of the concept of the Track Rotation estimator, and therefore analogue Monte Carlo is used (i.e. no importance biasing). 2 - Method of solution: The basic idea is to follow the particle's track in the medium and then to rotate it such that it passes through the detector point. That is, rotational symmetry considerations (even in non-spherically symmetric configurations) are applied to every history, so that a very large fraction of the track histories can be rotated and made to pass through the point of interest; in this manner the 1/r 2 singularity in the un-collided flux estimator (next event estimator) is avoided. TREEDE, being a test code, is used to estimate leakage or in-medium fluxes at given points in a 3-dimensional finite box, where the source is an isotropic point source at the centre of the z = 0 surface. However, many of the constraints of geometry and source can be easily removed. The medium is assumed homogeneous with isotropic scattering, and one energy group only is considered. 3 - Restrictions on the complexity of the problem: One energy group, a homogeneous medium, isotropic scattering
Invalid-point removal based on epipolar constraint in the structured-light method
Qi, Zhaoshuai; Wang, Zhao; Huang, Junhui; Xing, Chao; Gao, Jianmin
2018-06-01
In structured-light measurement, there unavoidably exist many invalid points caused by shadows, image noise and ambient light. According to the property of the epipolar constraint, because the retrieved phase of the invalid point is inaccurate, the corresponding projector image coordinate (PIC) will not satisfy the epipolar constraint. Based on this fact, a new invalid-point removal method based on the epipolar constraint is proposed in this paper. First, the fundamental matrix of the measurement system is calculated, which will be used for calculating the epipolar line. Then, according to the retrieved phase map of the captured fringes, the PICs of each pixel are retrieved. Subsequently, the epipolar line in the projector image plane of each pixel is obtained using the fundamental matrix. The distance between the corresponding PIC and the epipolar line of a pixel is defined as the invalidation criterion, which quantifies the satisfaction degree of the epipolar constraint. Finally, all pixels with a distance larger than a certain threshold are removed as invalid points. Experiments verified that the method is easy to implement and demonstrates better performance than state-of-the-art measurement systems.
Dew Point Calibration System Using a Quartz Crystal Sensor with a Differential Frequency Method
Directory of Open Access Journals (Sweden)
Ningning Lin
2016-11-01
Full Text Available In this paper, the influence of temperature on quartz crystal microbalance (QCM sensor response during dew point calibration is investigated. The aim is to present a compensation method to eliminate temperature impact on frequency acquisition. A new sensitive structure is proposed with double QCMs. One is kept in contact with the environment, whereas the other is not exposed to the atmosphere. There is a thermal conductivity silicone pad between each crystal and a refrigeration device to keep a uniform temperature condition. A differential frequency method is described in detail and is applied to calibrate the frequency characteristics of QCM at the dew point of −3.75 °C. It is worth noting that frequency changes of two QCMs were approximately opposite when temperature conditions were changed simultaneously. The results from continuous experiments show that the frequencies of two QCMs as the dew point moment was reached have strong consistency and high repeatability, leading to the conclusion that the sensitive structure can calibrate dew points with high reliability.
Dew Point Calibration System Using a Quartz Crystal Sensor with a Differential Frequency Method.
Lin, Ningning; Meng, Xiaofeng; Nie, Jing
2016-11-18
In this paper, the influence of temperature on quartz crystal microbalance (QCM) sensor response during dew point calibration is investigated. The aim is to present a compensation method to eliminate temperature impact on frequency acquisition. A new sensitive structure is proposed with double QCMs. One is kept in contact with the environment, whereas the other is not exposed to the atmosphere. There is a thermal conductivity silicone pad between each crystal and a refrigeration device to keep a uniform temperature condition. A differential frequency method is described in detail and is applied to calibrate the frequency characteristics of QCM at the dew point of -3.75 °C. It is worth noting that frequency changes of two QCMs were approximately opposite when temperature conditions were changed simultaneously. The results from continuous experiments show that the frequencies of two QCMs as the dew point moment was reached have strong consistency and high repeatability, leading to the conclusion that the sensitive structure can calibrate dew points with high reliability.
Basin boundaries and focal points in a map coming from Bairstow's method.
Gardini, Laura; Bischi, Gian-Italo; Fournier-Prunaret, Daniele
1999-06-01
This paper is devoted to the study of the global dynamical properties of a two-dimensional noninvertible map, with a denominator which can vanish, obtained by applying Bairstow's method to a cubic polynomial. It is shown that the complicated structure of the basins of attraction of the fixed points is due to the existence of singularities such as sets of nondefinition, focal points, and prefocal curves, which are specific to maps with a vanishing denominator, and have been recently introduced in the literature. Some global bifurcations that change the qualitative structure of the basin boundaries, are explained in terms of contacts among these singularities. The techniques used in this paper put in evidence some new dynamic behaviors and bifurcations, which are peculiar of maps with denominator; hence they can be applied to the analysis of other classes of maps coming from iterative algorithms (based on Newton's method, or others). (c) 1999 American Institute of Physics.
Energy Technology Data Exchange (ETDEWEB)
Barboza, Luciano Vitoria [Sul-riograndense Federal Institute for Education, Science and Technology (IFSul), Pelotas, RS (Brazil)], E-mail: luciano@pelotas.ifsul.edu.br
2009-07-01
This paper presents an overview about the maximum load ability problem and aims to study the main factors that limit this load ability. Specifically this study focuses its attention on determining which electric system buses influence directly on the power demand supply. The proposed approach uses the conventional maximum load ability method modelled by an optimization problem. The solution of this model is performed using the Interior Point methodology. As consequence of this solution method, the Lagrange multipliers are used as parameters that identify the probable 'bottlenecks' in the electric power system. The study also shows the relationship between the Lagrange multipliers and the cost function in the Interior Point optimization interpreted like sensitivity parameters. In order to illustrate the proposed methodology, the approach was applied to an IEEE test system and to assess its performance, a real equivalent electric system from the South- Southeast region of Brazil was simulated. (author)
Solution of Dendritic Growth in Steel by the Novel Point Automata Method
International Nuclear Information System (INIS)
Lorbiecka, A Z; Šarler, B
2012-01-01
The aim of this paper is the simulation of dendritic growth in steel in two dimensions by a coupled deterministic continuum mechanics heat and species transfer model and a stochastic localized phase change kinetics model taking into account the undercooling, curvature, kinetic, and thermodynamic anisotropy. The stochastic model receives temperature and concentration information from the deterministic model and the deterministic heat, and species diffusion equations receive the solid fraction information from the stochastic model. The heat and species transfer models are solved on a regular grid by the standard explicit Finite Difference Method (FDM). The phase-change kinetics model is solved by a novel Point Automata (PA) approach. The PA method was developed [1] in order to circumvent the mesh anisotropy problem, associated with the classical Cellular Automata (CA) method. The PA approach is established on randomly distributed points and neighbourhood configuration, similar as appears in meshless methods. A comparison of the PA and CA methods is shown. It is demonstrated that the results with the new PA method are not sensitive to the crystallographic orientations of the dendrite.
Teaching Methods in Mathematics and the Current Pedagogical Point of View in School Education.
岩崎, 潔; Kiyosi, Iwasaki
1995-01-01
It should be a basic principal that studies in teaching profession in universities should take into consideration the current pedagogical points of view in education and the future prospects of that education. This paper discusses the findings of a survey on the degree of recognition that students in our Math courses have about the currents pedagogical understading of teacher trainig. In this paper I will consider how to teach effectively teaching methods in Mathematics.
Nonparametric Change Point Diagnosis Method of Concrete Dam Crack Behavior Abnormality
Li, Zhanchao; Gu, Chongshi; Wu, Zhongru
2013-01-01
The study on diagnosis method of concrete crack behavior abnormality has always been a hot spot and difficulty in the safety monitoring field of hydraulic structure. Based on the performance of concrete dam crack behavior abnormality in parametric statistical model and nonparametric statistical model, the internal relation between concrete dam crack behavior abnormality and statistical change point theory is deeply analyzed from the model structure instability of parametric statistical model ...
Standard test method for determination of breaking strength of ceramic tiles by three-point loading
American Society for Testing and Materials. Philadelphia
2001-01-01
1.1 This test method covers the determination of breaking strength of ceramic tiles by three-point loading. 1.2 This standard does not purport to address all of the safety concerns, if any, associated with its use. It is the responsibility of the user of this standard to establish appropriate safety and health practices and determine the applicability of regulatory limitations prior to use.
Short run hydrothermal coordination with network constraints using an interior point method
International Nuclear Information System (INIS)
Lopez Lezama, Jesus Maria; Gallego Pareja, Luis Alfonso; Mejia Giraldo, Diego
2008-01-01
This paper presents a lineal optimization model to solve the hydrothermal coordination problem. The main contribution of this work is the inclusion of the network constraints to the hydrothermal coordination problem and its solution using an interior point method. The proposed model allows working with a system that can be completely hydraulic, thermal or mixed. Results are presented on the IEEE 14 bus test system
Coordinate alignment of combined measurement systems using a modified common points method
Zhao, G.; Zhang, P.; Xiao, W.
2018-03-01
The co-ordinate metrology has been extensively researched for its outstanding advantages in measurement range and accuracy. The alignment of different measurement systems is usually achieved by integrating local coordinates via common points before measurement. The alignment errors would accumulate and significantly reduce the global accuracy, thus need to be minimized. In this thesis, a modified common points method (MCPM) is proposed to combine different traceable system errors of the cooperating machines, and optimize the global accuracy by introducing mutual geometric constraints. The geometric constraints, obtained by measuring the common points in individual local coordinate systems, provide the possibility to reduce the local measuring uncertainty whereby enhance the global measuring certainty. A simulation system is developed in Matlab to analyze the feature of MCPM using the Monto-Carlo method. An exemplary setup is constructed to verify the feasibility and efficiency of the proposed method associated with laser tracker and indoor iGPS systems. Experimental results show that MCPM could significantly improve the alignment accuracy.
A New Iterative Method for Equilibrium Problems and Fixed Point Problems
Directory of Open Access Journals (Sweden)
Abdul Latif
2013-01-01
Full Text Available Introducing a new iterative method, we study the existence of a common element of the set of solutions of equilibrium problems for a family of monotone, Lipschitz-type continuous mappings and the sets of fixed points of two nonexpansive semigroups in a real Hilbert space. We establish strong convergence theorems of the new iterative method for the solution of the variational inequality problem which is the optimality condition for the minimization problem. Our results improve and generalize the corresponding recent results of Anh (2012, Cianciaruso et al. (2010, and many others.
A time domain inverse dynamic method for the end point tracking control of a flexible manipulator
Kwon, Dong-Soo; Book, Wayne J.
1991-01-01
The inverse dynamic equation of a flexible manipulator was solved in the time domain. By dividing the inverse system equation into the causal part and the anticausal part, we calculated the torque and the trajectories of all state variables for a given end point trajectory. The interpretation of this method in the frequency domain was explained in detail using the two-sided Laplace transform and the convolution integral. The open loop control of the inverse dynamic method shows an excellent result in simulation. For real applications, a practical control strategy is proposed by adding a feedback tracking control loop to the inverse dynamic feedforward control, and its good experimental performance is presented.
International Nuclear Information System (INIS)
Xia, Donghui; Huang, Mei; Wang, Zhijiang; Zhang, Feng; Zhuang, Ge
2016-01-01
Highlights: • The integral staggered point-matching method for design of polarizers on the ECH systems is presented. • The availability of the integral staggered point-matching method is checked by numerical calculations. • Two polarizers are designed with the integral staggered point-matching method and the experimental results are given. - Abstract: The reflective diffraction gratings are widely used in the high power electron cyclotron heating systems for polarization strategy. This paper presents a method which we call “the integral staggered point-matching method” for design of reflective diffraction gratings. This method is based on the integral point-matching method. However, it effectively removes the convergence problems and tedious calculations of the integral point-matching method, making it easier to be used for a beginner. A code is developed based on this method. The calculation results of the integral staggered point-matching method are compared with the integral point-matching method, the coordinate transformation method and the low power measurement results. It indicates that the integral staggered point-matching method can be used as an optional method for the design of reflective diffraction gratings in electron cyclotron heating systems.
Energy Technology Data Exchange (ETDEWEB)
Xia, Donghui [State Key Laboratory of Advanced Electromagnetic Engineering and Technology, Huazhong University of Science and Technology, 430074 Wuhan (China); Huang, Mei [Southwestern Institute of Physics, 610041 Chengdu (China); Wang, Zhijiang, E-mail: wangzj@hust.edu.cn [State Key Laboratory of Advanced Electromagnetic Engineering and Technology, Huazhong University of Science and Technology, 430074 Wuhan (China); Zhang, Feng [Southwestern Institute of Physics, 610041 Chengdu (China); Zhuang, Ge [State Key Laboratory of Advanced Electromagnetic Engineering and Technology, Huazhong University of Science and Technology, 430074 Wuhan (China)
2016-10-15
Highlights: • The integral staggered point-matching method for design of polarizers on the ECH systems is presented. • The availability of the integral staggered point-matching method is checked by numerical calculations. • Two polarizers are designed with the integral staggered point-matching method and the experimental results are given. - Abstract: The reflective diffraction gratings are widely used in the high power electron cyclotron heating systems for polarization strategy. This paper presents a method which we call “the integral staggered point-matching method” for design of reflective diffraction gratings. This method is based on the integral point-matching method. However, it effectively removes the convergence problems and tedious calculations of the integral point-matching method, making it easier to be used for a beginner. A code is developed based on this method. The calculation results of the integral staggered point-matching method are compared with the integral point-matching method, the coordinate transformation method and the low power measurement results. It indicates that the integral staggered point-matching method can be used as an optional method for the design of reflective diffraction gratings in electron cyclotron heating systems.
A fast point-cloud computing method based on spatial symmetry of Fresnel field
Wang, Xiangxiang; Zhang, Kai; Shen, Chuan; Zhu, Wenliang; Wei, Sui
2017-10-01
Aiming at the great challenge for Computer Generated Hologram (CGH) duo to the production of high spatial-bandwidth product (SBP) is required in the real-time holographic video display systems. The paper is based on point-cloud method and it takes advantage of the propagating reversibility of Fresnel diffraction in the propagating direction and the fringe pattern of a point source, known as Gabor zone plate has spatial symmetry, so it can be used as a basis for fast calculation of diffraction field in CGH. A fast Fresnel CGH method based on the novel look-up table (N-LUT) method is proposed, the principle fringe patterns (PFPs) at the virtual plane is pre-calculated by the acceleration algorithm and be stored. Secondly, the Fresnel diffraction fringe pattern at dummy plane can be obtained. Finally, the Fresnel propagation from dummy plan to hologram plane. The simulation experiments and optical experiments based on Liquid Crystal On Silicon (LCOS) is setup to demonstrate the validity of the proposed method under the premise of ensuring the quality of 3D reconstruction the method proposed in the paper can be applied to shorten the computational time and improve computational efficiency.
Hashimoto, Hiroyuki; Takaguchi, Yusuke; Nakamura, Shizuka
Instability of calculation process and increase of calculation time caused by increasing size of continuous optimization problem remain the major issues to be solved to apply the technique to practical industrial systems. This paper proposes an enhanced quadratic programming algorithm based on interior point method mainly for improvement of calculation stability. The proposed method has dynamic estimation mechanism of active constraints on variables, which fixes the variables getting closer to the upper/lower limit on them and afterwards releases the fixed ones as needed during the optimization process. It is considered as algorithm-level integration of the solution strategy of active-set method into the interior point method framework. We describe some numerical results on commonly-used bench-mark problems called “CUTEr” to show the effectiveness of the proposed method. Furthermore, the test results on large-sized ELD problem (Economic Load Dispatching problems in electric power supply scheduling) are also described as a practical industrial application.
Performance Analysis of a Maximum Power Point Tracking Technique using Silver Mean Method
Directory of Open Access Journals (Sweden)
Shobha Rani Depuru
2018-01-01
Full Text Available The proposed paper presents a simple and particularly efficacious Maximum Power Point Tracking (MPPT algorithm based on Silver Mean Method (SMM. This method operates by choosing a search interval from the P-V characteristics of the given solar array and converges to MPP of the Solar Photo-Voltaic (SPV system by shrinking its interval. After achieving the maximum power, the algorithm stops shrinking and maintains constant voltage until the next interval is decided. The tracking capability efficiency and performance analysis of the proposed algorithm are validated by the simulation and experimental results with a 100W solar panel for variable temperature and irradiance conditions. The results obtained confirm that even without any perturbation and observation process, the proposed method still outperforms the traditional perturb and observe (P&O method by demonstrating far better steady state output, more accuracy and higher efficiency.
An improved local radial point interpolation method for transient heat conduction analysis
Wang, Feng; Lin, Gao; Zheng, Bao-Jing; Hu, Zhi-Qiang
2013-06-01
The smoothing thin plate spline (STPS) interpolation using the penalty function method according to the optimization theory is presented to deal with transient heat conduction problems. The smooth conditions of the shape functions and derivatives can be satisfied so that the distortions hardly occur. Local weak forms are developed using the weighted residual method locally from the partial differential equations of the transient heat conduction. Here the Heaviside step function is used as the test function in each sub-domain to avoid the need for a domain integral. Essential boundary conditions can be implemented like the finite element method (FEM) as the shape functions possess the Kronecker delta property. The traditional two-point difference method is selected for the time discretization scheme. Three selected numerical examples are presented in this paper to demonstrate the availability and accuracy of the present approach comparing with the traditional thin plate spline (TPS) radial basis functions.
An improved local radial point interpolation method for transient heat conduction analysis
International Nuclear Information System (INIS)
Wang Feng; Lin Gao; Hu Zhi-Qiang; Zheng Bao-Jing
2013-01-01
The smoothing thin plate spline (STPS) interpolation using the penalty function method according to the optimization theory is presented to deal with transient heat conduction problems. The smooth conditions of the shape functions and derivatives can be satisfied so that the distortions hardly occur. Local weak forms are developed using the weighted residual method locally from the partial differential equations of the transient heat conduction. Here the Heaviside step function is used as the test function in each sub-domain to avoid the need for a domain integral. Essential boundary conditions can be implemented like the finite element method (FEM) as the shape functions possess the Kronecker delta property. The traditional two-point difference method is selected for the time discretization scheme. Three selected numerical examples are presented in this paper to demonstrate the availability and accuracy of the present approach comparing with the traditional thin plate spline (TPS) radial basis functions
Optimization of Control Points Number at Coordinate Measurements based on the Monte-Carlo Method
Korolev, A. A.; Kochetkov, A. V.; Zakharov, O. V.
2018-01-01
Improving the quality of products causes an increase in the requirements for the accuracy of the dimensions and shape of the surfaces of the workpieces. This, in turn, raises the requirements for accuracy and productivity of measuring of the workpieces. The use of coordinate measuring machines is currently the most effective measuring tool for solving similar problems. The article proposes a method for optimizing the number of control points using Monte Carlo simulation. Based on the measurement of a small sample from batches of workpieces, statistical modeling is performed, which allows one to obtain interval estimates of the measurement error. This approach is demonstrated by examples of applications for flatness, cylindricity and sphericity. Four options of uniform and uneven arrangement of control points are considered and their comparison is given. It is revealed that when the number of control points decreases, the arithmetic mean decreases, the standard deviation of the measurement error increases and the probability of the measurement α-error increases. In general, it has been established that it is possible to repeatedly reduce the number of control points while maintaining the required measurement accuracy.
Directory of Open Access Journals (Sweden)
Hosein Ghaffarzadeh
Full Text Available Abstract This paper investigates the numerical modeling of the flexural wave propagation in Euler-Bernoulli beams using the Hermite-type radial point interpolation method (HRPIM under the damage quantification approach. HRPIM employs radial basis functions (RBFs and their derivatives for shape function construction as a meshfree technique. The performance of Multiquadric(MQ RBF to the assessment of the reflection ratio was evaluated. HRPIM signals were compared with the theoretical and finite element responses. Results represent that MQ is a suitable RBF for HRPIM and wave propagation. However, the range of the proper shape parameters is notable. The number of field nodes is the main parameter for accurate wave propagation modeling using HRPIM. The size of support domain should be less thanan upper bound in order to prevent high error. With regard to the number of quadrature points, providing the minimum numbers of points are adequate for the stable solution, but the existence of more points in damage region does not leads to necessarily the accurate responses. It is concluded that the pure HRPIM, without any polynomial terms, is acceptable but considering a few terms will improve the accuracy; even though more terms make the problem unstable and inaccurate.
International Nuclear Information System (INIS)
Kim, Kyung-O; Jeong, Hae Sun; Jo, Daeseong
2017-01-01
Highlights: • Employing the Radial Point Interpolation Method (RPIM) in numerical analysis of multi-group neutron-diffusion equation. • Establishing mathematical formation of modified multi-group neutron-diffusion equation by RPIM. • Performing the numerical analysis for 2D critical problem. - Abstract: A mesh-free method is introduced to overcome the drawbacks (e.g., mesh generation and connectivity definition between the meshes) of mesh-based (nodal) methods such as the finite-element method and finite-difference method. In particular, the Point Interpolation Method (PIM) using a radial basis function is employed in the numerical analysis for the multi-group neutron-diffusion equation. The benchmark calculations are performed for the 2D homogeneous and heterogeneous problems, and the Multiquadrics (MQ) and Gaussian (EXP) functions are employed to analyze the effect of the radial basis function on the numerical solution. Additionally, the effect of the dimensionless shape parameter in those functions on the calculation accuracy is evaluated. According to the results, the radial PIM (RPIM) can provide a highly accurate solution for the multiplication eigenvalue and the neutron flux distribution, and the numerical solution with the MQ radial basis function exhibits the stable accuracy with respect to the reference solutions compared with the other solution. The dimensionless shape parameter directly affects the calculation accuracy and computing time. Values between 1.87 and 3.0 for the benchmark problems considered in this study lead to the most accurate solution. The difference between the analytical and numerical results for the neutron flux is significantly increased in the edge of the problem geometry, even though the maximum difference is lower than 4%. This phenomenon seems to arise from the derivative boundary condition at (x,0) and (0,y) positions, and it may be necessary to introduce additional strategy (e.g., the method using fictitious points and
Modular correction method of bending elastic modulus based on sliding behavior of contact point
International Nuclear Information System (INIS)
Ma, Zhichao; Zhao, Hongwei; Zhang, Qixun; Liu, Changyi
2015-01-01
During the three-point bending test, the sliding behavior of the contact point between the specimen and supports was observed, the sliding behavior was verified to affect the measurements of both deflection and span length, which directly affect the calculation of the bending elastic modulus. Based on the Hertz formula to calculate the elastic contact deformation and the theoretical calculation of the sliding behavior of the contact point, a theoretical model to precisely describe the deflection and span length as a function of bending load was established. Moreover, a modular correction method of bending elastic modulus was proposed, via the comparison between the corrected elastic modulus of three materials (H63 copper–zinc alloy, AZ31B magnesium alloy and 2026 aluminum alloy) and the standard modulus obtained from standard uniaxial tensile tests, the universal feasibility of the proposed correction method was verified. Also, the ratio of corrected to raw elastic modulus presented a monotonically decreasing tendency as the raw elastic modulus of materials increased. (technical note)
King, Nathan D.; Ruuth, Steven J.
2017-05-01
Maps from a source manifold M to a target manifold N appear in liquid crystals, color image enhancement, texture mapping, brain mapping, and many other areas. A numerical framework to solve variational problems and partial differential equations (PDEs) that map between manifolds is introduced within this paper. Our approach, the closest point method for manifold mapping, reduces the problem of solving a constrained PDE between manifolds M and N to the simpler problems of solving a PDE on M and projecting to the closest points on N. In our approach, an embedding PDE is formulated in the embedding space using closest point representations of M and N. This enables the use of standard Cartesian numerics for general manifolds that are open or closed, with or without orientation, and of any codimension. An algorithm is presented for the important example of harmonic maps and generalized to a broader class of PDEs, which includes p-harmonic maps. Improved efficiency and robustness are observed in convergence studies relative to the level set embedding methods. Harmonic and p-harmonic maps are computed for a variety of numerical examples. In these examples, we denoise texture maps, diffuse random maps between general manifolds, and enhance color images.
Analysis of tree stand horizontal structure using random point field methods
Directory of Open Access Journals (Sweden)
O. P. Sekretenko
2015-06-01
Full Text Available This paper uses the model approach to analyze the horizontal structure of forest stands. The main types of models of random point fields and statistical procedures that can be used to analyze spatial patterns of trees of uneven and even-aged stands are described. We show how modern methods of spatial statistics can be used to address one of the objectives of forestry – to clarify the laws of natural thinning of forest stand and the corresponding changes in its spatial structure over time. Studying natural forest thinning, we describe the consecutive stages of modeling: selection of the appropriate parametric model, parameter estimation and generation of point patterns in accordance with the selected model, the selection of statistical functions to describe the horizontal structure of forest stands and testing of statistical hypotheses. We show the possibilities of a specialized software package, spatstat, which is designed to meet the challenges of spatial statistics and provides software support for modern methods of analysis of spatial data. We show that a model of stand thinning that does not consider inter-tree interaction can project the size distribution of the trees properly, but the spatial pattern of the modeled stand is not quite consistent with observed data. Using data of three even-aged pine forest stands of 25, 55, and 90-years old, we demonstrate that the spatial point process models are useful for combining measurements in the forest stands of different ages to study the forest stand natural thinning.
An unsteady point vortex method for coupled fluid-solid problems
Energy Technology Data Exchange (ETDEWEB)
Michelin, Sebastien [Jacobs School of Engineering, UCSD, Department of Mechanical and Aerospace Engineering, La Jolla, CA (United States); Ecole Nationale Superieure des Mines de Paris, Paris (France); Llewellyn Smith, Stefan G. [Jacobs School of Engineering, UCSD, Department of Mechanical and Aerospace Engineering, La Jolla, CA (United States)
2009-06-15
A method is proposed for the study of the two-dimensional coupled motion of a general sharp-edged solid body and a surrounding inviscid flow. The formation of vorticity at the body's edges is accounted for by the shedding at each corner of point vortices whose intensity is adjusted at each time step to satisfy the regularity condition on the flow at the generating corner. The irreversible nature of vortex shedding is included in the model by requiring the vortices' intensity to vary monotonically in time. A conservation of linear momentum argument is provided for the equation of motion of these point vortices (Brown-Michael equation). The forces and torques applied on the solid body are computed as explicit functions of the solid body velocity and the vortices' position and intensity, thereby providing an explicit formulation of the vortex-solid coupled problem as a set of non-linear ordinary differential equations. The example of a falling card in a fluid initially at rest is then studied using this method. The stability of broadside-on fall is analysed and the shedding of vorticity from both plate edges is shown to destabilize this position, consistent with experimental studies and numerical simulations of this problem. The reduced-order representation of the fluid motion in terms of point vortices is used to understand the physical origin of this destabilization. (orig.)
Nazemizadeh, M.; Rahimi, H. N.; Amini Khoiy, K.
2012-03-01
This paper presents an optimal control strategy for optimal trajectory planning of mobile robots by considering nonlinear dynamic model and nonholonomic constraints of the system. The nonholonomic constraints of the system are introduced by a nonintegrable set of differential equations which represent kinematic restriction on the motion. The Lagrange's principle is employed to derive the nonlinear equations of the system. Then, the optimal path planning of the mobile robot is formulated as an optimal control problem. To set up the problem, the nonlinear equations of the system are assumed as constraints, and a minimum energy objective function is defined. To solve the problem, an indirect solution of the optimal control method is employed, and conditions of the optimality derived as a set of coupled nonlinear differential equations. The optimality equations are solved numerically, and various simulations are performed for a nonholonomic mobile robot to illustrate effectiveness of the proposed method.
Model reduction method using variable-separation for stochastic saddle point problems
Jiang, Lijian; Li, Qiuqi
2018-02-01
In this paper, we consider a variable-separation (VS) method to solve the stochastic saddle point (SSP) problems. The VS method is applied to obtain the solution in tensor product structure for stochastic partial differential equations (SPDEs) in a mixed formulation. The aim of such a technique is to construct a reduced basis approximation of the solution of the SSP problems. The VS method attempts to get a low rank separated representation of the solution for SSP in a systematic enrichment manner. No iteration is performed at each enrichment step. In order to satisfy the inf-sup condition in the mixed formulation, we enrich the separated terms for the primal system variable at each enrichment step. For the SSP problems by regularization or penalty, we propose a more efficient variable-separation (VS) method, i.e., the variable-separation by penalty method. This can avoid further enrichment of the separated terms in the original mixed formulation. The computation of the variable-separation method decomposes into offline phase and online phase. Sparse low rank tensor approximation method is used to significantly improve the online computation efficiency when the number of separated terms is large. For the applications of SSP problems, we present three numerical examples to illustrate the performance of the proposed methods.
Energy Technology Data Exchange (ETDEWEB)
Tumelero, Fernanda, E-mail: fernanda.tumelero@yahoo.com.br [Universidade Federal do Rio Grande do Sul (UFRGS), Porto Alegre, RS (Brazil). Programa de Pos-Graduacao em Engenharia Mecanica; Petersen, Claudio Z.; Goncalves, Glenio A.; Lazzari, Luana, E-mail: claudiopeteren@yahoo.com.br, E-mail: gleniogoncalves@yahoo.com.br, E-mail: luana-lazzari@hotmail.com [Universidade Federal de Pelotas (DME/UFPEL), Capao do Leao, RS (Brazil). Instituto de Fisica e Matematica
2015-07-01
In this work, we present a solution of the Neutron Point Kinetics Equations with temperature feedback effects applying the Polynomial Approach Method. For the solution, we consider one and six groups of delayed neutrons precursors with temperature feedback effects and constant reactivity. The main idea is to expand the neutron density, delayed neutron precursors and temperature as a power series considering the reactivity as an arbitrary function of the time in a relatively short time interval around an ordinary point. In the first interval one applies the initial conditions of the problem and the analytical continuation is used to determine the solutions of the next intervals. With the application of the Polynomial Approximation Method it is possible to overcome the stiffness problem of the equations. In such a way, one varies the time step size of the Polynomial Approach Method and performs an analysis about the precision and computational time. Moreover, we compare the method with different types of approaches (linear, quadratic and cubic) of the power series. The answer of neutron density and temperature obtained by numerical simulations with linear approximation are compared with results in the literature. (author)
International Nuclear Information System (INIS)
Tumelero, Fernanda; Petersen, Claudio Z.; Goncalves, Glenio A.; Lazzari, Luana
2015-01-01
In this work, we present a solution of the Neutron Point Kinetics Equations with temperature feedback effects applying the Polynomial Approach Method. For the solution, we consider one and six groups of delayed neutrons precursors with temperature feedback effects and constant reactivity. The main idea is to expand the neutron density, delayed neutron precursors and temperature as a power series considering the reactivity as an arbitrary function of the time in a relatively short time interval around an ordinary point. In the first interval one applies the initial conditions of the problem and the analytical continuation is used to determine the solutions of the next intervals. With the application of the Polynomial Approximation Method it is possible to overcome the stiffness problem of the equations. In such a way, one varies the time step size of the Polynomial Approach Method and performs an analysis about the precision and computational time. Moreover, we compare the method with different types of approaches (linear, quadratic and cubic) of the power series. The answer of neutron density and temperature obtained by numerical simulations with linear approximation are compared with results in the literature. (author)
Statistical methods for change-point detection in surface temperature records
Pintar, A. L.; Possolo, A.; Zhang, N. F.
2013-09-01
We describe several statistical methods to detect possible change-points in a time series of values of surface temperature measured at a meteorological station, and to assess the statistical significance of such changes, taking into account the natural variability of the measured values, and the autocorrelations between them. These methods serve to determine whether the record may suffer from biases unrelated to the climate signal, hence whether there may be a need for adjustments as considered by M. J. Menne and C. N. Williams (2009) "Homogenization of Temperature Series via Pairwise Comparisons", Journal of Climate 22 (7), 1700-1717. We also review methods to characterize patterns of seasonality (seasonal decomposition using monthly medians or robust local regression), and explain the role they play in the imputation of missing values, and in enabling robust decompositions of the measured values into a seasonal component, a possible climate signal, and a station-specific remainder. The methods for change-point detection that we describe include statistical process control, wavelet multi-resolution analysis, adaptive weights smoothing, and a Bayesian procedure, all of which are applicable to single station records.
Beam-pointing error compensation method of phased array radar seeker with phantom-bit technology
Directory of Open Access Journals (Sweden)
Qiuqiu WEN
2017-06-01
Full Text Available A phased array radar seeker (PARS must be able to effectively decouple body motion and accurately extract the line-of-sight (LOS rate for target missile tracking. In this study, the real-time two-channel beam pointing error (BPE compensation method of PARS for LOS rate extraction is designed. The PARS discrete beam motion principium is analyzed, and the mathematical model of beam scanning control is finished. According to the principle of the antenna element shift phase, both the antenna element shift phase law and the causes of beam-pointing error under phantom-bit conditions are analyzed, and the effect of BPE caused by phantom-bit technology (PBT on the extraction accuracy of the LOS rate is examined. A compensation method is given, which includes coordinate transforms, beam angle margin compensation, and detector dislocation angle calculation. When the method is used, the beam angle margin in the pitch and yaw directions is calculated to reduce the effect of the missile body disturbance and to improve LOS rate extraction precision by compensating for the detector dislocation angle. The simulation results validate the proposed method.
Energy Technology Data Exchange (ETDEWEB)
Tumelero, Fernanda; Petersen, Claudio Zen; Goncalves, Glenio Aguiar [Universidade Federal de Pelotas, Capao do Leao, RS (Brazil). Programa de Pos Graduacao em Modelagem Matematica; Schramm, Marcelo [Universidade Federal do Rio Grande do Sul, Porto Alegre, RS (Brazil). Programa de Pos-Graduacao em Engenharia Mecanica
2016-12-15
In this work, we report a solution to solve the Neutron Point Kinetics Equations applying the Polynomial Approach Method. The main idea is to expand the neutron density and delayed neutron precursors as a power series considering the reactivity as an arbitrary function of the time in a relatively short time interval around an ordinary point. In the first interval one applies the initial conditions and the analytical continuation is used to determine the solutions of the next intervals. A genuine error control is developed based on an analogy with the Rest Theorem. For illustration, we also report simulations for different approaches types (linear, quadratic and cubic). The results obtained by numerical simulations for linear approximation are compared with results in the literature.
An improved maximum power point tracking method for a photovoltaic system
Ouoba, David; Fakkar, Abderrahim; El Kouari, Youssef; Dkhichi, Fayrouz; Oukarfi, Benyounes
2016-06-01
In this paper, an improved auto-scaling variable step-size Maximum Power Point Tracking (MPPT) method for photovoltaic (PV) system was proposed. To achieve simultaneously a fast dynamic response and stable steady-state power, a first improvement was made on the step-size scaling function of the duty cycle that controls the converter. An algorithm was secondly proposed to address wrong decision that may be made at an abrupt change of the irradiation. The proposed auto-scaling variable step-size approach was compared to some various other approaches from the literature such as: classical fixed step-size, variable step-size and a recent auto-scaling variable step-size maximum power point tracking approaches. The simulation results obtained by MATLAB/SIMULINK were given and discussed for validation.
Point Measurements of Fermi Velocities by a Time-of-Flight Method
DEFF Research Database (Denmark)
Falk, David S.; Henningsen, J. O.; Skriver, Hans Lomholt
1972-01-01
The present paper describes in detail a new method of obtaining information about the Fermi velocity of electrons in metals, point by point, along certain contours on the Fermi surface. It is based on transmission of microwaves through thin metal slabs in the presence of a static magnetic field...... applied parallel to the surface. The electrons carry the signal across the slab and arrive at the second surface with a phase delay which is measured relative to a reference signal; the velocities are derived by analyzing the magnetic field dependence of the phase delay. For silver we have in this way...... obtained one component of the velocity along half the circumference of the centrally symmetric orbit for B→∥[100]. The results are in agreement with current models for the Fermi surface. For B→∥[011], the electrons involved are not moving in a symmetry plane of the Fermi surface. In such cases one cannot...
Iterative method to compute the Fermat points and Fermat distances of multiquarks
International Nuclear Information System (INIS)
Bicudo, P.; Cardoso, M.
2009-01-01
The multiquark confining potential is proportional to the total distance of the fundamental strings linking the quarks and antiquarks. We address the computation of the total string distance and of the Fermat points where the different strings meet. For a meson the distance is trivially the quark-antiquark distance. For a baryon the problem was solved geometrically from the onset by Fermat and by Torricelli, it can be determined just with a rule and a compass, and we briefly review it. However we also show that for tetraquarks, pentaquarks, hexaquarks, etc., the geometrical solution is much more complicated. Here we provide an iterative method, converging fast to the correct Fermat points and the total distances, relevant for the multiquark potentials.
Iterative method to compute the Fermat points and Fermat distances of multiquarks
Energy Technology Data Exchange (ETDEWEB)
Bicudo, P. [CFTP, Departamento de Fisica, Instituto Superior Tecnico, Av. Rovisco Pais, 1049-001 Lisboa (Portugal)], E-mail: bicudo@ist.utl.pt; Cardoso, M. [CFTP, Departamento de Fisica, Instituto Superior Tecnico, Av. Rovisco Pais, 1049-001 Lisboa (Portugal)
2009-04-13
The multiquark confining potential is proportional to the total distance of the fundamental strings linking the quarks and antiquarks. We address the computation of the total string distance and of the Fermat points where the different strings meet. For a meson the distance is trivially the quark-antiquark distance. For a baryon the problem was solved geometrically from the onset by Fermat and by Torricelli, it can be determined just with a rule and a compass, and we briefly review it. However we also show that for tetraquarks, pentaquarks, hexaquarks, etc., the geometrical solution is much more complicated. Here we provide an iterative method, converging fast to the correct Fermat points and the total distances, relevant for the multiquark potentials.
Pain point system scale (PPSS: a method for postoperative pain estimation in retrospective studies
Directory of Open Access Journals (Sweden)
Gkotsi A
2012-11-01
Full Text Available Anastasia Gkotsi,1 Dimosthenis Petsas,2 Vasilios Sakalis,3 Asterios Fotas,3 Argyrios Triantafyllidis,3 Ioannis Vouros,3 Evangelos Saridakis,2 Georgios Salpiggidis,3 Athanasios Papathanasiou31Department of Experimental Physiology, Aristotle University of Thessaloniki, Thessaloniki, Greece; 2Department of Anesthesiology, 3Department of Urology, Hippokration General Hospital, Thessaloniki, GreecePurpose: Pain rating scales are widely used for pain assessment. Nevertheless, a new tool is required for pain assessment needs in retrospective studies.Methods: The postoperative pain episodes, during the first postoperative day, of three patient groups were analyzed. Each pain episode was assessed by a visual analog scale, numerical rating scale, verbal rating scale, and a new tool – pain point system scale (PPSS – based on the analgesics administered. The type of analgesic was defined based on the authors’ clinic protocol, patient comorbidities, pain assessment tool scores, and preadministered medications by an artificial neural network system. At each pain episode, each patient was asked to fill the three pain scales. Bartlett’s test and Kaiser–Meyer–Olkin criterion were used to evaluate sample sufficiency. The proper scoring system was defined by varimax rotation. Spearman’s and Pearson’s coefficients assessed PPSS correlation to the known pain scales.Results: A total of 262 pain episodes were evaluated in 124 patients. The PPSS scored one point for each dose of paracetamol, three points for each nonsteroidal antiinflammatory drug or codeine, and seven points for each dose of opioids. The correlation between the visual analog scale and PPSS was found to be strong and linear (rho: 0.715; P <0.001 and Pearson: 0.631; P < 0.001.Conclusion: PPSS correlated well with the known pain scale and could be used safely in the evaluation of postoperative pain in retrospective studies.Keywords: pain scale, retrospective studies, pain point system
Directory of Open Access Journals (Sweden)
Urriza I
2010-01-01
Full Text Available Abstract This paper presents a word length selection method for the implementation of digital controllers in both fixed-point and floating-point hardware on FPGAs. This method uses the new types defined in the VHDL-2008 fixed-point and floating-point packages. These packages allow customizing the word length of fixed and floating point representations and shorten the design cycle simplifying the design of arithmetic operations. The method performs bit-true simulations in order to determine the word length to represent the constant coefficients and the internal signals of the digital controller while maintaining the control system specifications. A mixed-signal simulation tool is used to simulate the closed loop system as a whole in order to analyze the impact of the quantization effects and loop delays on the control system performance. The method is applied to implement a digital controller for a switching power converter. The digital circuit is implemented on an FPGA, and the simulations are experimentally verified.
Simulating Ice Shelf Response to Potential Triggers of Collapse Using the Material Point Method
Huth, A.; Smith, B. E.
2017-12-01
Weakening or collapse of an ice shelf can reduce the buttressing effect of the shelf on its upstream tributaries, resulting in sea level rise as the flux of grounded ice into the ocean increases. Here we aim to improve sea level rise projections by developing a prognostic 2D plan-view model that simulates the response of an ice sheet/ice shelf system to potential triggers of ice shelf weakening or collapse, such as calving events, thinning, and meltwater ponding. We present initial results for Larsen C. Changes in local ice shelf stresses can affect flow throughout the entire domain, so we place emphasis on calibrating our model to high-resolution data and precisely evolving fracture-weakening and ice geometry throughout the simulations. We primarily derive our initial ice geometry from CryoSat-2 data, and initialize the model by conducting a dual inversion for the ice viscosity parameter and basal friction coefficient that minimizes mismatch between modeled velocities and velocities derived from Landsat data. During simulations, we implement damage mechanics to represent fracture-weakening, and track ice thickness evolution, grounding line position, and ice front position. Since these processes are poorly represented by the Finite Element Method (FEM) due to mesh resolution issues and numerical diffusion, we instead implement the Material Point Method (MPM) for our simulations. In MPM, the ice domain is discretized into a finite set of Lagrangian material points that carry all variables and are tracked throughout the simulation. Each time step, information from the material points is projected to a Eulerian grid where the momentum balance equation (shallow shelf approximation) is solved similarly to FEM, but essentially treating the material points as integration points. The grid solution is then used to determine the new positions of the material points and update variables such as thickness and damage in a diffusion-free Lagrangian frame. The grid does not store
Comparison of P&O and INC Methods in Maximum Power Point Tracker for PV Systems
Chen, Hesheng; Cui, Yuanhui; Zhao, Yue; Wang, Zhisen
2018-03-01
In the context of renewable energy, the maximum power point tracker (MPPT) is often used to increase the solar power efficiency, taking into account the randomness and volatility of solar energy due to changes in temperature and photovoltaic. In all MPPT techniques, perturb & observe and incremental conductance are widely used in MPPT controllers, because of their simplicity and ease of operation. According to the internal structure of the photovoltaic cell and the output volt-ampere characteristic, this paper established the circuit model and establishes the dynamic simulation model in Matlab/Simulink with the preparation of the s function. The perturb & observe MPPT method and the incremental conductance MPPT method were analyzed and compared by the theoretical analysis and digital simulation. The simulation results have shown that the system with INC MPPT method has better dynamic performance and improves the output power of photovoltaic power generation.
A method of undifferenced ambiguity resolution for GPS+GLONASS precise point positioning.
Yi, Wenting; Song, Weiwei; Lou, Yidong; Shi, Chuang; Yao, Yibin
2016-05-25
Integer ambiguity resolution is critical for achieving positions of high precision and for shortening the convergence time of precise point positioning (PPP). However, GLONASS adopts the signal processing technology of frequency division multiple access and results in inter-frequency code biases (IFCBs), which are currently difficult to correct. This bias makes the methods proposed for GPS ambiguity fixing unsuitable for GLONASS. To realize undifferenced GLONASS ambiguity fixing, we propose an undifferenced ambiguity resolution method for GPS+GLONASS PPP, which considers the IFCBs estimation. The experimental result demonstrates that the success rate of GLONASS ambiguity fixing can reach 75% through the proposed method. Compared with the ambiguity float solutions, the positioning accuracies of ambiguity-fixed solutions of GLONASS-only PPP are increased by 12.2%, 20.9%, and 10.3%, and that of the GPS+GLONASS PPP by 13.0%, 35.2%, and 14.1% in the North, East and Up directions, respectively.
Directory of Open Access Journals (Sweden)
David S Nolan
2011-08-01
Full Text Available A new method is presented to determine the favorability for tropical cyclone development of an atmospheric environment, as represented by a mean sounding of temperature, humidity, and wind as a function of height. A mesoscale model with nested, moving grids is used to simulate the evolution of a weak, precursor vortex in a large domain with doubly periodic boundary conditions. The equations of motion are modified to maintain arbitrary profiles of both zonal and meridional wind as a function of height, without the necessary large-scale temperature gradients that cannot be consistent with doubly periodic boundary conditions. Comparisons between simulations using the point-downscaling method and simulations using wind shear balanced by temperature gradients illustrate both the advantages and the limitations of the technique. Further examples of what can be learned with this method are presented using both idealized and observed soundings and wind profiles.
H-Point Standard Addition Method for Simultaneous Determination of Eosin and Erytrosine
Directory of Open Access Journals (Sweden)
Amandeep Kaur
2011-01-01
Full Text Available A new, simple, sensitive and selective H-point standard addition method (HPSAM has been developed for resolving binary mixture of food colorants eosin and erythrosine, which show overlapped spectra. The method is based on the complexation of food dyes eosin and erythrosine with Fe(III complexing reagent at pH 5.5 and solubilizing complexes in triton x-100 micellar media. Absorbances at the two pairs of wavelengths, 540 and 550 nm (when eosin acts as analyte or 518 and 542 nm (when erythrosine act as analyte were monitored. This method has satisfactorily been applied for the determination of eosin and erythrosine dyes in synthetic mixtures and commercial products.
Change-Point Detection Method for Clinical Decision Support System Rule Monitoring.
Liu, Siqi; Wright, Adam; Hauskrecht, Milos
2017-06-01
A clinical decision support system (CDSS) and its components can malfunction due to various reasons. Monitoring the system and detecting its malfunctions can help one to avoid any potential mistakes and associated costs. In this paper, we investigate the problem of detecting changes in the CDSS operation, in particular its monitoring and alerting subsystem, by monitoring its rule firing counts. The detection should be performed online, that is whenever a new datum arrives, we want to have a score indicating how likely there is a change in the system. We develop a new method based on Seasonal-Trend decomposition and likelihood ratio statistics to detect the changes. Experiments on real and simulated data show that our method has a lower delay in detection compared with existing change-point detection methods.
Directory of Open Access Journals (Sweden)
Yin Yanshu
2017-12-01
Full Text Available In this paper, a location-based multiple point statistics method is developed to model a non-stationary reservoir. The proposed method characterizes the relationship between the sedimentary pattern and the deposit location using the relative central position distance function, which alleviates the requirement that the training image and the simulated grids have the same dimension. The weights in every direction of the distance function can be changed to characterize the reservoir heterogeneity in various directions. The local integral replacements of data events, structured random path, distance tolerance and multi-grid strategy are applied to reproduce the sedimentary patterns and obtain a more realistic result. This method is compared with the traditional Snesim method using a synthesized 3-D training image of Poyang Lake and a reservoir model of Shengli Oilfield in China. The results indicate that the new method can reproduce the non-stationary characteristics better than the traditional method and is more suitable for simulation of delta-front deposits. These results show that the new method is a powerful tool for modelling a reservoir with non-stationary characteristics.
A Meshfree Cell-based Smoothed Point Interpolation Method for Solid Mechanics Problems
International Nuclear Information System (INIS)
Zhang Guiyong; Liu Guirong
2010-01-01
In the framework of a weakened weak (W 2 ) formulation using a generalized gradient smoothing operation, this paper introduces a novel meshfree cell-based smoothed point interpolation method (CS-PIM) for solid mechanics problems. The W 2 formulation seeks solutions from a normed G space which includes both continuous and discontinuous functions and allows the use of much more types of methods to create shape functions for numerical methods. When PIM shape functions are used, the functions constructed are in general not continuous over the entire problem domain and hence are not compatible. Such an interpolation is not in a traditional H 1 space, but in a G 1 space. By introducing the generalized gradient smoothing operation properly, the requirement on function is now further weakened upon the already weakened requirement for functions in a H 1 space and G 1 space can be viewed as a space of functions with weakened weak (W 2 ) requirement on continuity. The cell-based smoothed point interpolation method (CS-PIM) is formulated based on the W 2 formulation, in which displacement field is approximated using the PIM shape functions, which possess the Kronecker delta property facilitating the enforcement of essential boundary conditions [3]. The gradient (strain) field is constructed by the generalized gradient smoothing operation within the cell-based smoothing domains, which are exactly the triangular background cells. A W 2 formulation of generalized smoothed Galerkin (GS-Galerkin) weak form is used to derive the discretized system equations. It was found that the CS-PIM possesses the following attractive properties: (1) It is very easy to implement and works well with the simplest linear triangular mesh without introducing additional degrees of freedom; (2) it is at least linearly conforming; (3) this method is temporally stable and works well for dynamic analysis; (4) it possesses a close-to-exact stiffness, which is much softer than the overly-stiff FEM model and
METHOD OF GREEN FUNCTIONS IN MATHEMATICAL MODELLING FOR TWO-POINT BOUNDARY-VALUE PROBLEMS
Directory of Open Access Journals (Sweden)
E. V. Dikareva
2015-01-01
Full Text Available Summary. In many applied problems of control, optimization, system theory, theoretical and construction mechanics, for problems with strings and nods structures, oscillation theory, theory of elasticity and plasticity, mechanical problems connected with fracture dynamics and shock waves, the main instrument for study these problems is a theory of high order ordinary differential equations. This methodology is also applied for studying mathematical models in graph theory with different partitioning based on differential equations. Such equations are used for theoretical foundation of mathematical models but also for constructing numerical methods and computer algorithms. These models are studied with use of Green function method. In the paper first necessary theoretical information is included on Green function method for multi point boundary-value problems. The main equation is discussed, notions of multi-point boundary conditions, boundary functionals, degenerate and non-degenerate problems, fundamental matrix of solutions are introduced. In the main part the problem to study is formulated in terms of shocks and deformations in boundary conditions. After that the main results are formulated. In theorem 1 conditions for existence and uniqueness of solutions are proved. In theorem 2 conditions are proved for strict positivity and equal measureness for a pair of solutions. In theorem 3 existence and estimates are proved for the least eigenvalue, spectral properties and positivity of eigenfunctions. In theorem 4 the weighted positivity is proved for the Green function. Some possible applications are considered for a signal theory and transmutation operators.
A RECOGNITION METHOD FOR AIRPLANE TARGETS USING 3D POINT CLOUD DATA
Directory of Open Access Journals (Sweden)
M. Zhou
2012-07-01
Full Text Available LiDAR is capable of obtaining three dimension coordinates of the terrain and targets directly and is widely applied in digital city, emergent disaster mitigation and environment monitoring. Especially because of its ability of penetrating the low density vegetation and canopy, LiDAR technique has superior advantages in hidden and camouflaged targets detection and recognition. Based on the multi-echo data of LiDAR, and combining the invariant moment theory, this paper presents a recognition method for classic airplanes (even hidden targets mainly under the cover of canopy using KD-Tree segmented point cloud data. The proposed algorithm firstly uses KD-tree to organize and manage point cloud data, and makes use of the clustering method to segment objects, and then the prior knowledge and invariant recognition moment are utilized to recognise airplanes. The outcomes of this test verified the practicality and feasibility of the method derived in this paper. And these could be applied in target measuring and modelling of subsequent data processing.
Directory of Open Access Journals (Sweden)
Reza Kiani Mavi
2013-01-01
Full Text Available Data envelopment analysis (DEA is used to evaluate the performance of decision making units (DMUs with multiple inputs and outputs in a homogeneous group. In this way, the acquired relative efficiency score for each decision making unit lies between zero and one where a number of them may have an equal efficiency score of one. DEA successfully divides them into two categories of efficient DMUs and inefficient DMUs. A ranking for inefficient DMUs is given but DEA does not provide further information about the efficient DMUs. One of the popular methods for evaluating and ranking DMUs is the common set of weights (CSW method. We generate a CSW model with considering nondiscretionary inputs that are beyond the control of DMUs and using ideal point method. The main idea of this approach is to minimize the distance between the evaluated decision making unit and the ideal decision making unit (ideal point. Using an empirical example we put our proposed model to test by applying it to the data of some 20 bank branches and rank their efficient units.
Curvature computation in volume-of-fluid method based on point-cloud sampling
Kassar, Bruno B. M.; Carneiro, João N. E.; Nieckele, Angela O.
2018-01-01
This work proposes a novel approach to compute interface curvature in multiphase flow simulation based on Volume of Fluid (VOF) method. It is well documented in the literature that curvature and normal vector computation in VOF may lack accuracy mainly due to abrupt changes in the volume fraction field across the interfaces. This may cause deterioration on the interface tension forces estimates, often resulting in inaccurate results for interface tension dominated flows. Many techniques have been presented over the last years in order to enhance accuracy in normal vectors and curvature estimates including height functions, parabolic fitting of the volume fraction, reconstructing distance functions, coupling Level Set method with VOF, convolving the volume fraction field with smoothing kernels among others. We propose a novel technique based on a representation of the interface by a cloud of points. The curvatures and the interface normal vectors are computed geometrically at each point of the cloud and projected onto the Eulerian grid in a Front-Tracking manner. Results are compared to benchmark data and significant reduction on spurious currents as well as improvement in the pressure jump are observed. The method was developed in the open source suite OpenFOAM® extending its standard VOF implementation, the interFoam solver.
A Regularized Algorithm for the Proximal Split Feasibility Problem
Directory of Open Access Journals (Sweden)
Zhangsong Yao
2014-01-01
Full Text Available The proximal split feasibility problem has been studied. A regularized method has been presented for solving the proximal split feasibility problem. Strong convergence theorem is given.
Directory of Open Access Journals (Sweden)
Zhiqiang Yang
2016-05-01
Full Text Available Due to the dynamic process of maximum power point tracking (MPPT caused by turbulence and large rotor inertia, variable-speed wind turbines (VSWTs cannot maintain the optimal tip speed ratio (TSR from cut-in wind speed up to the rated speed. Therefore, in order to increase the total captured wind energy, the existing aerodynamic design for VSWT blades, which only focuses on performance improvement at a single TSR, needs to be improved to a multi-point design. In this paper, based on a closed-loop system of VSWTs, including turbulent wind, rotor, drive train and MPPT controller, the distribution of operational TSR and its description based on inflow wind energy are investigated. Moreover, a multi-point method considering the MPPT dynamic process for the aerodynamic optimization of VSWT blades is proposed. In the proposed method, the distribution of operational TSR is obtained through a dynamic simulation of the closed-loop system under a specific turbulent wind, and accordingly the multiple design TSRs and the corresponding weighting coefficients in the objective function are determined. Finally, using the blade of a National Renewable Energy Laboratory (NREL 1.5 MW wind turbine as the baseline, the proposed method is compared with the conventional single-point optimization method using the commercial software Bladed. Simulation results verify the effectiveness of the proposed method.
International Nuclear Information System (INIS)
Chen, Lin; Fan, Xiangtao; Du, Xiaoping
2014-01-01
Point cloud filtering is the basic and key step in LiDAR data processing. Adaptive Triangle Irregular Network Modelling (ATINM) algorithm and Threshold Segmentation on Elevation Statistics (TSES) algorithm are among the mature algorithms. However, few researches concentrate on the parameter selections of ATINM and the iteration condition of TSES, which can greatly affect the filtering results. First the paper presents these two key problems under two different terrain environments. For a flat area, small height parameter and angle parameter perform well and for areas with complex feature changes, large height parameter and angle parameter perform well. One-time segmentation is enough for flat areas, and repeated segmentations are essential for complex areas. Then the paper makes comparisons and analyses of the results by these two methods. ATINM has a larger I error in both two data sets as it sometimes removes excessive points. TSES has a larger II error in both two data sets as it ignores topological relations between points. ATINM performs well even with a large region and a dramatic topology while TSES is more suitable for small region with flat topology. Different parameters and iterations can cause relative large filtering differences
Lei Guo; Haoran Jiang; Xinhua Wang; Fangai Liu
2017-01-01
Point-of-interest (POI) recommendation has been well studied in recent years. However, most of the existing methods focus on the recommendation scenarios where users can provide explicit feedback. In most cases, however, the feedback is not explicit, but implicit. For example, we can only get a user’s check-in behaviors from the history of what POIs she/he has visited, but never know how much she/he likes and why she/he does not like them. Recently, some researchers have noticed this problem ...
Calculation and decomposition of spot price using interior point nonlinear optimisation methods
International Nuclear Information System (INIS)
Xie, K.; Song, Y.H.
2004-01-01
Optimal pricing for real and reactive power is a very important issue in a deregulation environment. This paper summarises the optimal pricing problem as an extended optimal power flow problem. Then, spot prices are decomposed into different components reflecting various ancillary services. The derivation of the proposed decomposition model is described in detail. Primary-Dual Interior Point method is applied to avoid 'go' 'no go' gauge. In addition, the proposed approach can be extended to cater for other types of ancillary services. (author)
Approximate Dual Averaging Method for Multiagent Saddle-Point Problems with Stochastic Subgradients
Directory of Open Access Journals (Sweden)
Deming Yuan
2014-01-01
Full Text Available This paper considers the problem of solving the saddle-point problem over a network, which consists of multiple interacting agents. The global objective function of the problem is a combination of local convex-concave functions, each of which is only available to one agent. Our main focus is on the case where the projection steps are calculated approximately and the subgradients are corrupted by some stochastic noises. We propose an approximate version of the standard dual averaging method and show that the standard convergence rate is preserved, provided that the projection errors decrease at some appropriate rate and the noises are zero-mean and have bounded variance.
Modular endoprosthetic replacement for metastatic tumours of the proximal femur
Directory of Open Access Journals (Sweden)
Carter Simon R
2008-11-01
Full Text Available Abstract Background and aims Endoprosthetic replacements of the proximal femur are commonly required to treat destructive metastases with either impending or actual pathological fractures at this site. Modular prostheses provide an off the shelf availability and can be adapted to most reconstructive situations for proximal femoral replacements. The aim of this study was to assess the clinical and functional outcomes following modular tumour prosthesis reconstruction of the proximal femur in 100 consecutive patients with metastatic tumours and to compare them with the published results of patients with modular and custom made endoprosthetic replacements. Methods 100 consecutive patients who underwent modular tumour prosthetic reconstruction of the proximal femur for metastases using the METS system from 2001 to 2007 were studied. The patient, tumour and treatment factors in relation to overall survival, local control, implant survival and complications were analysed. Functional scores were obtained from surviving patients. Results and conclusion There were 45 male and 55 female patients. The mean age was 60.2 years. The indications were metastases. Seventy five patients presented with pathological fracture or with failed fixation and 25 patients were at a high risk of developing a fracture. The mean follow up was 15.9 months [range 0–77]. Three patients died within 2 weeks following surgery. 69 patients have died and 31 are alive. Of the 69 patients who were dead 68 did not need revision surgery indicating that the implant provided single definitive treatment which outlived the patient. There were three dislocations (2/5 with THR and 1/95 with unipolar femoral heads. 6 patients had deep infections. The estimated five year implant survival (Kaplan-Meier analysis was 83.1% with revision as end point. The mean TESS score was 64% (54%–82%. We conclude that METS modular tumour prosthesis for proximal femur provides versatility; low implant related
A new maximum power point method based on a sliding mode approach for solar energy harvesting
International Nuclear Information System (INIS)
Farhat, Maissa; Barambones, Oscar; Sbita, Lassaad
2017-01-01
Highlights: • Create a simple, easy of implement and accurate V_M_P_P estimator. • Stability analysis of the proposed system based on the Lyapunov’s theory. • A comparative study versus P&O, highlight SMC good performances. • Construct a new PS-SMC algorithm to include the partial shadow case. • Experimental validation of the SMC MPP tracker. - Abstract: This paper presents a photovoltaic (PV) system with a maximum power point tracking (MPPT) facility. The goal of this work is to maximize power extraction from the photovoltaic generator (PVG). This goal is achieved using a sliding mode controller (SMC) that drives a boost converter connected between the PVG and the load. The system is modeled and tested under MATLAB/SIMULINK environment. In simulation, the sliding mode controller offers fast and accurate convergence to the maximum power operating point that outperforms the well-known perturbation and observation method (P&O). The sliding mode controller performance is evaluated during steady-state, against load varying and panel partial shadow (PS) disturbances. To confirm the above conclusion, a practical implementation of the maximum power point tracker based sliding mode controller on a hardware setup is performed on a dSPACE real time digital control platform. The data acquisition and the control system are conducted all around dSPACE 1104 controller board and its RTI environment. The experimental results demonstrate the validity of the proposed control scheme over a stand-alone real photovoltaic system.
Wang, D.; Hollaus, M.; Pfeifer, N.
2017-09-01
Classification of wood and leaf components of trees is an essential prerequisite for deriving vital tree attributes, such as wood mass, leaf area index (LAI) and woody-to-total area. Laser scanning emerges to be a promising solution for such a request. Intensity based approaches are widely proposed, as different components of a tree can feature discriminatory optical properties at the operating wavelengths of a sensor system. For geometry based methods, machine learning algorithms are often used to separate wood and leaf points, by providing proper training samples. However, it remains unclear how the chosen machine learning classifier and features used would influence classification results. To this purpose, we compare four popular machine learning classifiers, namely Support Vector Machine (SVM), Na¨ıve Bayes (NB), Random Forest (RF), and Gaussian Mixture Model (GMM), for separating wood and leaf points from terrestrial laser scanning (TLS) data. Two trees, an Erytrophleum fordii and a Betula pendula (silver birch) are used to test the impacts from classifier, feature set, and training samples. Our results showed that RF is the best model in terms of accuracy, and local density related features are important. Experimental results confirmed the feasibility of machine learning algorithms for the reliable classification of wood and leaf points. It is also noted that our studies are based on isolated trees. Further tests should be performed on more tree species and data from more complex environments.
Directory of Open Access Journals (Sweden)
D. Wang
2017-09-01
Full Text Available Classification of wood and leaf components of trees is an essential prerequisite for deriving vital tree attributes, such as wood mass, leaf area index (LAI and woody-to-total area. Laser scanning emerges to be a promising solution for such a request. Intensity based approaches are widely proposed, as different components of a tree can feature discriminatory optical properties at the operating wavelengths of a sensor system. For geometry based methods, machine learning algorithms are often used to separate wood and leaf points, by providing proper training samples. However, it remains unclear how the chosen machine learning classifier and features used would influence classification results. To this purpose, we compare four popular machine learning classifiers, namely Support Vector Machine (SVM, Na¨ıve Bayes (NB, Random Forest (RF, and Gaussian Mixture Model (GMM, for separating wood and leaf points from terrestrial laser scanning (TLS data. Two trees, an Erytrophleum fordii and a Betula pendula (silver birch are used to test the impacts from classifier, feature set, and training samples. Our results showed that RF is the best model in terms of accuracy, and local density related features are important. Experimental results confirmed the feasibility of machine learning algorithms for the reliable classification of wood and leaf points. It is also noted that our studies are based on isolated trees. Further tests should be performed on more tree species and data from more complex environments.
Proximity functions for general right cylinders
International Nuclear Information System (INIS)
Kellerer, A.M.
1981-01-01
Distributions of distances between pairs of points within geometrical objects, or the closely related proximity functions and geometric reduction factors, have applications to dosimetric and microdosimetric calculations. For convex bodies these functions are linked to the chord-length distributions that result from random intersections by straight lines. A synopsis of the most important relations is given. The proximity functions and related functions are derived for right cylinders with arbitrary cross sections. The solution utilizes the fact that the squares of the distances between two random points are sums of independently distributed squares of distances parallel and perpendicular to the axis of the cylinder. Analogous formulas are derived for the proximity functions or geometric reduction factors for a cylinder relative to a point. This requires only a minor modification of the solution
Directory of Open Access Journals (Sweden)
Ilaria Iaconeta
2017-09-01
Full Text Available The simulation of large deformation problems, involving complex history-dependent constitutive laws, is of paramount importance in several engineering fields. Particular attention has to be paid to the choice of a suitable numerical technique such that reliable results can be obtained. In this paper, a Material Point Method (MPM and a Galerkin Meshfree Method (GMM are presented and verified against classical benchmarks in solid mechanics. The aim is to demonstrate the good behavior of the methods in the simulation of cohesive-frictional materials, both in static and dynamic regimes and in problems dealing with large deformations. The vast majority of MPM techniques in the literatrue are based on some sort of explicit time integration. The techniques proposed in the current work, on the contrary, are based on implicit approaches, which can also be easily adapted to the simulation of static cases. The two methods are presented so as to highlight the similarities to rather than the differences from “standard” Updated Lagrangian (UL approaches commonly employed by the Finite Elements (FE community. Although both methods are able to give a good prediction, it is observed that, under very large deformation of the medium, GMM lacks robustness due to its meshfree natrue, which makes the definition of the meshless shape functions more difficult and expensive than in MPM. On the other hand, the mesh-based MPM is demonstrated to be more robust and reliable for extremely large deformation cases.
Przewłócki, Jarosław; Górski, Jarosław; Świdziński, Waldemar
2016-12-01
The paper deals with the probabilistic analysis of the settlement of a non-cohesive soil layer subjected to cyclic loading. Originally, the settlement assessment is based on a deterministic compaction model, which requires integration of a set of differential equations. However, with the use of the Bessel functions, the settlement of a soil stratum can be calculated by a simplified algorithm. The compaction model parameters were determined for soil samples taken from subsoil near the Izmit Bay, Turkey. The computations were performed for various sets of random variables. The point estimate method was applied, and the results were verified by the Monte Carlo method. The outcome leads to a conclusion that can be useful in the prediction of soil settlement under seismic loading.
A method for the solvent extraction of low-boiling-point plant volatiles.
Xu, Ning; Gruber, Margaret; Westcott, Neil; Soroka, Julie; Parkin, Isobel; Hegedus, Dwayne
2005-01-01
A new method has been developed for the extraction of volatiles from plant materials and tested on seedling tissue and mature leaves of Arabidopsis thaliana, pine needles and commercial mixtures of plant volatiles. Volatiles were extracted with n-pentane and then subjected to quick distillation at a moderate temperature. Under these conditions, compounds such as pigments, waxes and non-volatile compounds remained undistilled, while short-chain volatile compounds were distilled into a receiving flask using a high-efficiency condenser. Removal of the n-pentane and concentration of the volatiles in the receiving flask was carried out using a Vigreux column condenser prior to GC-MS. The method is ideal for the rapid extraction of low-boiling-point volatiles from small amounts of plant material, such as is required when conducting metabolic profiling or defining biological properties of volatile components from large numbers of mutant lines.
A Corner-Point-Grid-Based Voxelization Method for Complex Geological Structure Model with Folds
Chen, Qiyu; Mariethoz, Gregoire; Liu, Gang
2017-04-01
3D voxelization is the foundation of geological property modeling, and is also an effective approach to realize the 3D visualization of the heterogeneous attributes in geological structures. The corner-point grid is a representative data model among all voxel models, and is a structured grid type that is widely applied at present. When carrying out subdivision for complex geological structure model with folds, we should fully consider its structural morphology and bedding features to make the generated voxels keep its original morphology. And on the basis of which, they can depict the detailed bedding features and the spatial heterogeneity of the internal attributes. In order to solve the shortage of the existing technologies, this work puts forward a corner-point-grid-based voxelization method for complex geological structure model with folds. We have realized the fast conversion from the 3D geological structure model to the fine voxel model according to the rule of isocline in Ramsay's fold classification. In addition, the voxel model conforms to the spatial features of folds, pinch-out and other complex geological structures, and the voxels of the laminas inside a fold accords with the result of geological sedimentation and tectonic movement. This will provide a carrier and model foundation for the subsequent attribute assignment as well as the quantitative analysis and evaluation based on the spatial voxels. Ultimately, we use examples and the contrastive analysis between the examples and the Ramsay's description of isoclines to discuss the effectiveness and advantages of the method proposed in this work when dealing with the voxelization of 3D geologic structural model with folds based on corner-point grids.
Distance-based microfluidic quantitative detection methods for point-of-care testing.
Tian, Tian; Li, Jiuxing; Song, Yanling; Zhou, Leiji; Zhu, Zhi; Yang, Chaoyong James
2016-04-07
Equipment-free devices with quantitative readout are of great significance to point-of-care testing (POCT), which provides real-time readout to users and is especially important in low-resource settings. Among various equipment-free approaches, distance-based visual quantitative detection methods rely on reading the visual signal length for corresponding target concentrations, thus eliminating the need for sophisticated instruments. The distance-based methods are low-cost, user-friendly and can be integrated into portable analytical devices. Moreover, such methods enable quantitative detection of various targets by the naked eye. In this review, we first introduce the concept and history of distance-based visual quantitative detection methods. Then, we summarize the main methods for translation of molecular signals to distance-based readout and discuss different microfluidic platforms (glass, PDMS, paper and thread) in terms of applications in biomedical diagnostics, food safety monitoring, and environmental analysis. Finally, the potential and future perspectives are discussed.
The shooting method and multiple solutions of two/multi-point BVPs of second-order ODE
Directory of Open Access Journals (Sweden)
Man Kam Kwong
2006-06-01
Full Text Available Within the last decade, there has been growing interest in the study of multiple solutions of two- and multi-point boundary value problems of nonlinear ordinary differential equations as fixed points of a cone mapping. Undeniably many good results have emerged. The purpose of this paper is to point out that, in the special case of second-order equations, the shooting method can be an effective tool, sometimes yielding better results than those obtainable via fixed point techniques.
Inversion of Gravity Anomalies Using Primal-Dual Interior Point Methods
Directory of Open Access Journals (Sweden)
Aaron A. Velasco
2016-06-01
Full Text Available Structural inversion of gravity datasets based on the use of density anomalies to derive robust images of the subsurface (delineating lithologies and their boundaries constitutes a fundamental non-invasive tool for geological exploration. The use of experimental techniques in geophysics to estimate and interpret di erences in the substructure based on its density properties have proven e cient; however, the inherent non-uniqueness associated with most geophysical datasets make this the ideal scenario for the use of recently developed robust constrained optimization techniques. We present a constrained optimization approach for a least squares inversion problem aimed to characterize 2-Dimensional Earth density structure models based on Bouguer gravity anomalies. The proposed formulation is solved with a Primal-Dual Interior-Point method including equality and inequality physical and structural constraints. We validate our results using synthetic density crustal structure models with varying complexity and illustrate the behavior of the algorithm using di erent initial density structure models and increasing noise levels in the observations. Based on these implementations, we conclude that the algorithm using Primal-Dual Interior-Point methods is robust, and its results always honor the geophysical constraints. Some of the advantages of using this approach for structural inversion of gravity data are the incorporation of a priori information related to the model parameters (coming from actual physical properties of the subsurface and the reduction of the solution space contingent on these boundary conditions.
Bai, Bing
2012-03-01
There has been a lot of work on total variation (TV) regularized tomographic image reconstruction recently. Many of them use gradient-based optimization algorithms with a differentiable approximation of the TV functional. In this paper we apply TV regularization in Positron Emission Tomography (PET) image reconstruction. We reconstruct the PET image in a Bayesian framework, using Poisson noise model and TV prior functional. The original optimization problem is transformed to an equivalent problem with inequality constraints by adding auxiliary variables. Then we use an interior point method with logarithmic barrier functions to solve the constrained optimization problem. In this method, a series of points approaching the solution from inside the feasible region are found by solving a sequence of subproblems characterized by an increasing positive parameter. We use preconditioned conjugate gradient (PCG) algorithm to solve the subproblems directly. The nonnegativity constraint is enforced by bend line search. The exact expression of the TV functional is used in our calculations. Simulation results show that the algorithm converges fast and the convergence is insensitive to the values of the regularization and reconstruction parameters.
International Nuclear Information System (INIS)
Ghasemi, Jahan B.; Hashemi, Beshare; Shamsipur, Mojtaba
2012-01-01
A cloud point extraction (CPE) process using the nonionic surfactant Triton X-114 to simultaneous extraction and spectrophotometric determination of uranium and zirconium from aqueous solution using partial least squares (PLS) regression is investigated. The method is based on the complexation reaction of these cations with Alizarin Red S (ARS) and subsequent micelle-mediated extraction of products. The chemical parameters affecting the separation phase and detection process were studied and optimized. Under the optimum experimental conditions (i.e. pH 5.2, Triton X-114 = 0.20%, equilibrium time 10 min and cloud point 45 C), calibration graphs were linear in the range of 0.01-3 mg L -1 with detection limits of 2.0 and 0.80 μg L -1 for U and Zr, respectively. The experimental calibration set was composed of 16 sample solutions using an orthogonal design for two component mixtures. The root mean square error of predictions (RMSEPs) for U and Zr were 0.0907 and 0.1117, respectively. The interference effect of some anions and cations was also tested. The method was applied to the simultaneous determination of U and Zr in water samples.
Development of a cloud-point extraction method for copper and nickel determination in food samples
International Nuclear Information System (INIS)
Azevedo Lemos, Valfredo; Selis Santos, Moacy; Teixeira David, Graciete; Vasconcelos Maciel, Mardson; Almeida Bezerra, Marcos de
2008-01-01
A new, simple and versatile cloud-point extraction (CPE) methodology has been developed for the separation and preconcentration of copper and nickel. The metals in the initial aqueous solution were complexed with 2-(2'-benzothiazolylazo)-5-(N,N-diethyl)aminophenol (BDAP) and Triton X-114 was added as surfactant. Dilution of the surfactant-rich phase with acidified methanol was performed after phase separation, and the copper and nickel contents were measured by flame atomic absorption spectrometry. The variables affecting the cloud-point extraction were optimized using a Box-Behnken design. Under the optimum experimental conditions, enrichment factors of 29 and 25 were achieved for copper and nickel, respectively. The accuracy of the method was evaluated and confirmed by analysis of the followings certified reference materials: Apple Leaves, Spinach Leaves and Tomato Leaves. The limits of detection expressed to solid sample analysis were 0.1 μg g -1 (Cu) and 0.4 μg g -1 (Ni). The precision for 10 replicate measurements of 75 μg L -1 Cu or Ni was 6.4 and 1.0, respectively. The method has been successfully applied to the analysis of food samples
Slicing Method for curved façade and window extraction from point clouds
Iman Zolanvari, S. M.; Laefer, Debra F.
2016-09-01
Laser scanning technology is a fast and reliable method to survey structures. However, the automatic conversion of such data into solid models for computation remains a major challenge, especially where non-rectilinear features are present. Since, openings and the overall dimensions of the buildings are the most critical elements in computational models for structural analysis, this article introduces the Slicing Method as a new, computationally-efficient method for extracting overall façade and window boundary points for reconstructing a façade into a geometry compatible for computational modelling. After finding a principal plane, the technique slices a façade into limited portions, with each slice representing a unique, imaginary section passing through a building. This is done along a façade's principal axes to segregate window and door openings from structural portions of the load-bearing masonry walls. The method detects each opening area's boundaries, as well as the overall boundary of the façade, in part, by using a one-dimensional projection to accelerate processing. Slices were optimised as 14.3 slices per vertical metre of building and 25 slices per horizontal metre of building, irrespective of building configuration or complexity. The proposed procedure was validated by its application to three highly decorative, historic brick buildings. Accuracy in excess of 93% was achieved with no manual intervention on highly complex buildings and nearly 100% on simple ones. Furthermore, computational times were less than 3 sec for data sets up to 2.6 million points, while similar existing approaches required more than 16 hr for such datasets.
Measuring global oil trade dependencies: An application of the point-wise mutual information method
International Nuclear Information System (INIS)
Kharrazi, Ali; Fath, Brian D.
2016-01-01
Oil trade is one of the most vital networks in the global economy. In this paper, we analyze the 1998–2012 oil trade networks using the point-wise mutual information (PMI) method and determine the pairwise trade preferences and dependencies. Using examples of the USA's trade partners, this research demonstrates the usefulness of the PMI method as an additional methodological tool to evaluate the outcomes from countries' decisions to engage in preferred trading partners. A positive PMI value indicates trade preference where trade is larger than would be expected. For example, in 2012 the USA imported 2,548.7 kbpd despite an expected 358.5 kbpd of oil from Canada. Conversely, a negative PMI value indicates trade dis-preference where the amount of trade is smaller than what would be expected. For example, the 15-year average of annual PMI between Saudi Arabia and the U.S.A. is −0.130 and between Russia and the USA −1.596. We reflect the three primary reasons of discrepancies between actual and neutral model trade can be related to position, price, and politics. The PMI can quantify the political success or failure of trade preferences and can more accurately account temporal variation of interdependencies. - Highlights: • We analyzed global oil trade networks using the point-wise mutual information method. • We identified position, price, & politics as drivers of oil trade preference. • The PMI method is useful in research on complex trade networks and dependency theory. • A time-series analysis of PMI can track dependencies & evaluate policy decisions.
Hwang, Jae Joon; Kim, Kee-Deog; Park, Hyok; Park, Chang Seo; Jeong, Ho-Gul
2014-01-01
Superimposition has been used as a method to evaluate the changes of orthodontic or orthopedic treatment in the dental field. With the introduction of cone beam CT (CBCT), evaluating 3 dimensional changes after treatment became possible by superimposition. 4 point plane orientation is one of the simplest ways to achieve superimposition of 3 dimensional images. To find factors influencing superimposition error of cephalometric landmarks by 4 point plane orientation method and to evaluate the reproducibility of cephalometric landmarks for analyzing superimposition error, 20 patients were analyzed who had normal skeletal and occlusal relationship and took CBCT for diagnosis of temporomandibular disorder. The nasion, sella turcica, basion and midpoint between the left and the right most posterior point of the lesser wing of sphenoidal bone were used to define a three-dimensional (3D) anatomical reference co-ordinate system. Another 15 reference cephalometric points were also determined three times in the same image. Reorientation error of each landmark could be explained substantially (23%) by linear regression model, which consists of 3 factors describing position of each landmark towards reference axes and locating error. 4 point plane orientation system may produce an amount of reorientation error that may vary according to the perpendicular distance between the landmark and the x-axis; the reorientation error also increases as the locating error and shift of reference axes viewed from each landmark increases. Therefore, in order to reduce the reorientation error, accuracy of all landmarks including the reference points is important. Construction of the regression model using reference points of greater precision is required for the clinical application of this model.
THE PROXIMATE COMPOSITION OF AFRICAN BUSH MANGO ...
African Journals Online (AJOL)
BIG TIMMY
Information regarding previous studies on these physico-chemical ... This behaviour may be attributed to its high myristic acid ... The authors express deep appreciation to the. Heads of ... of a typical rural processing method on the proximate ...
Directory of Open Access Journals (Sweden)
Ssennoga Twaha
2017-12-01
Full Text Available This study proposes and implements maximum power Point Tracking (MPPT control on thermoelectric generation system using an extremum seeking control (ESC algorithm. The MPPT is applied to guarantee maximum power extraction from the TEG system. The work has been carried out through modelling of thermoelectric generator/dc-dc converter system using Matlab/Simulink. The effectiveness of ESC technique has been assessed by comparing the results with those of the Perturb and Observe (P&O MPPT method under the same operating conditions. Results indicate that ESC MPPT method extracts more power than the P&O technique, where the output power of ESC technique is higher than that of P&O by 0.47 W or 6.1% at a hot side temperature of 200 °C. It is also noted that the ESC MPPT based model is almost fourfold faster than the P&O method. This is attributed to smaller MPPT circuit of ESC compared to that of P&O, hence we conclude that the ESC MPPT method outperforms the P&O technique.
Evaluation of factor for one-point venous blood sampling method based on the causality model
International Nuclear Information System (INIS)
Matsutomo, Norikazu; Onishi, Hideo; Kobara, Kouichi; Sasaki, Fumie; Watanabe, Haruo; Nagaki, Akio; Mimura, Hiroaki
2009-01-01
One-point venous blood sampling method (Mimura, et al.) can evaluate the regional cerebral blood flow (rCBF) value with a high degree of accuracy. However, the method is accompanied by complexity of technique because it requires a venous blood Octanol value, and its accuracy is affected by factors of input function. Therefore, we evaluated the factors that are used for input function to determine the accuracy input function and simplify the technique. The input function which uses the time-dependent brain count of 5 minutes, 15 minutes, and 25 minutes from administration, and the input function in which an objective variable is used as the artery octanol value to exclude the venous blood octanol value are created. Therefore, a correlation between these functions and rCBF value by the microsphere (MS) method is evaluated. Creation of a high-accuracy input function and simplification of technique are possible. The rCBF value obtained by the input function, the factor of which is a time-dependent brain count of 5 minutes from administration, and the objective variable is artery octanol value, had a high correlation with the MS method (y=0.899x+4.653, r=0.842). (author)
Different seeds to solve the equations of stochastic point kinetics using the Euler-Maruyama method
International Nuclear Information System (INIS)
Suescun D, D.; Oviedo T, M.
2017-09-01
In this paper, a numerical study of stochastic differential equations that describe the kinetics in a nuclear reactor is presented. These equations, known as the stochastic equations of punctual kinetics they model temporal variations in neutron population density and concentrations of deferred neutron precursors. Because these equations are probabilistic in nature (since random oscillations in the neutrons and population of precursors were considered to be approximately normally distributed, and these equations also possess strong coupling and stiffness properties) the proposed method for the numerical simulations is the Euler-Maruyama scheme that provides very good approximations for calculating the neutron population and concentrations of deferred neutron precursors. The method proposed for this work was computationally tested for different seeds, initial conditions, experimental data and forms of reactivity for a group of precursors and then for six groups of deferred neutron precursors at each time step with 5000 Brownian movements per seed. In a paper reported in the literature, the Euler-Maruyama method was proposed, but there are many doubts about the reported values, in addition to not reporting the seed used, so in this work is expected to rectify the reported values. After taking the average of the different seeds used to generate the pseudo-random numbers the results provided by the Euler-Maruyama scheme will be compared in mean and standard deviation with other methods reported in the literature and results of the deterministic model of the equations of the punctual kinetics. This comparison confirms in particular that the Euler-Maruyama scheme is an efficient method to solve the equations of stochastic point kinetics but different from the values found and reported by another author. The Euler-Maruyama method is simple and easy to implement, provides acceptable results for neutron population density and concentration of deferred neutron precursors and
Ryvolová, Markéta; Preisler, Jan; Foret, Frantisek; Hauser, Peter C; Krásenský, Pavel; Paull, Brett; Macka, Mirek
2010-01-01
This work for the first time combines three on-capillary detection methods, namely, capacitively coupled contactless conductometric (C(4)D), photometric (PD), and fluorimetric (FD), in a single (identical) point of detection cell, allowing concurrent measurements at a single point of detection for use in capillary electrophoresis, capillary electrochromatography, and capillary/nanoliquid chromatography. The novel design is based on a standard 6.3 mm i.d. fiber-optic SMA adapter with a drilled opening for the separation capillary to go through, to which two concentrically positioned C(4)D detection electrodes with a detection gap of 7 mm were added on each side acting simultaneously as capillary guides. The optical fibers in the SMA adapter were used for the photometric signal (absorbance), and another optical fiber at a 45 degrees angle to the capillary was applied to collect the emitted light for FD. Light emitting diodes (255 and 470 nm) were used as light sources for the PD and FD detection modes. LOD values were determined under flow-injection conditions to exclude any stacking effects: For the 470 nm LED limits of detection (LODs) for FD and PD were for fluorescein (1 x 10(-8) mol/L) and tartrazine (6 x 10(-6) mol/L), respectively, and the LOD for the C(4)D was for magnesium chloride (5 x 10(-7) mol/L). The advantage of the three different detection signals in a single point is demonstrated in capillary electrophoresis using model mixtures and samples including a mixture of fluorescent and nonfluorescent dyes and common ions, underivatized amino acids, and a fluorescently labeled digest of bovine serum albumin.
Advanced DNA-Based Point-of-Care Diagnostic Methods for Plant Diseases Detection
Directory of Open Access Journals (Sweden)
Han Yih Lau
2017-12-01
Full Text Available Diagnostic technologies for the detection of plant pathogens with point-of-care capability and high multiplexing ability are an essential tool in the fight to reduce the large agricultural production losses caused by plant diseases. The main desirable characteristics for such diagnostic assays are high specificity, sensitivity, reproducibility, quickness, cost efficiency and high-throughput multiplex detection capability. This article describes and discusses various DNA-based point-of care diagnostic methods for applications in plant disease detection. Polymerase chain reaction (PCR is the most common DNA amplification technology used for detecting various plant and animal pathogens. However, subsequent to PCR based assays, several types of nucleic acid amplification technologies have been developed to achieve higher sensitivity, rapid detection as well as suitable for field applications such as loop-mediated isothermal amplification, helicase-dependent amplification, rolling circle amplification, recombinase polymerase amplification, and molecular inversion probe. The principle behind these technologies has been thoroughly discussed in several review papers; herein we emphasize the application of these technologies to detect plant pathogens by outlining the advantages and disadvantages of each technology in detail.
Electromagnetic properties of proximity systems
Kresin, Vladimir Z.
1985-07-01
Magnetic screening in the proximity system Sα-Mβ, where Mβ is a normal metal N, semiconductor (semimetal), or a superconductor, is studied. Main attention is paid to the low-temperature region where nonlocality plays an important role. The thermodynamic Green's-function method is employed in order to describe the behavior of the proximity system in an external field. The temperature and thickness dependences of the penetration depth λ are obtained. The dependence λ(T) differs in a striking way from the dependence in usual superconductors. The strong-coupling effect is taken into account. A special case of screening in a superconducting film backed by a size-quantizing semimetal film is considered. The results obtained are in good agreement with experimental data.
Electromagnetic properties of proximity systems
International Nuclear Information System (INIS)
Kresin, V.Z.
1985-01-01
Magnetic screening in the proximity system S/sub α/-M/sub β/, where M/sub β/ is a normal metal N, semiconductor (semimetal), or a superconductor, is studied. Main attention is paid to the low-temperature region where nonlocality plays an important role. The thermodynamic Green's-function method is employed in order to describe the behavior of the proximity system in an external field. The temperature and thickness dependences of the penetration depth lambda are obtained. The dependence lambda(T) differs in a striking way from the dependence in usual superconductors. The strong-coupling effect is taken into account. A special case of screening in a superconducting film backed by a size-quantizing semimetal film is considered. The results obtained are in good agreement with experimental data
A new integrated dual time-point amyloid PET/MRI data analysis method
Energy Technology Data Exchange (ETDEWEB)
Cecchin, Diego; Zucchetta, Pietro; Turco, Paolo; Bui, Franco [University Hospital of Padua, Nuclear Medicine Unit, Department of Medicine - DIMED, Padua (Italy); Barthel, Henryk; Tiepolt, Solveig; Sabri, Osama [Leipzig University, Department of Nuclear Medicine, Leipzig (Germany); Poggiali, Davide; Cagnin, Annachiara; Gallo, Paolo [University Hospital of Padua, Neurology, Department of Neurosciences (DNS), Padua (Italy); Frigo, Anna Chiara [University Hospital of Padua, Biostatistics, Epidemiology and Public Health Unit, Department of Cardiac, Thoracic and Vascular Sciences, Padua (Italy)
2017-11-15
In the initial evaluation of patients with suspected dementia and Alzheimer's disease, there is no consensus on how to perform semiquantification of amyloid in such a way that it: (1) facilitates visual qualitative interpretation, (2) takes the kinetic behaviour of the tracer into consideration particularly with regard to at least partially correcting for blood flow dependence, (3) analyses the amyloid load based on accurate parcellation of cortical and subcortical areas, (4) includes partial volume effect correction (PVEC), (5) includes MRI-derived topographical indexes, (6) enables application to PET/MRI images and PET/CT images with separately acquired MR images, and (7) allows automation. A method with all of these characteristics was retrospectively tested in 86 subjects who underwent amyloid ({sup 18}F-florbetaben) PET/MRI in a clinical setting (using images acquired 90-110 min after injection, 53 were classified visually as amyloid-negative and 33 as amyloid-positive). Early images after tracer administration were acquired between 0 and 10 min after injection, and later images were acquired between 90 and 110 min after injection. PVEC of the PET data was carried out using the geometric transfer matrix method. Parametric images and some regional output parameters, including two innovative ''dual time-point'' indexes, were obtained. Subjects classified visually as amyloid-positive showed a sparse tracer uptake in the primary sensory, motor and visual areas in accordance with the isocortical stage of the topographic distribution of the amyloid plaque (Braak stages V/VI). In patients classified visually as amyloid-negative, the method revealed detectable levels of tracer uptake in the basal portions of the frontal and temporal lobes, areas that are known to be sites of early deposition of amyloid plaques that probably represented early accumulation (Braak stage A) that is typical of normal ageing. There was a strong correlation between
A new integrated dual time-point amyloid PET/MRI data analysis method
International Nuclear Information System (INIS)
Cecchin, Diego; Zucchetta, Pietro; Turco, Paolo; Bui, Franco; Barthel, Henryk; Tiepolt, Solveig; Sabri, Osama; Poggiali, Davide; Cagnin, Annachiara; Gallo, Paolo; Frigo, Anna Chiara
2017-01-01
In the initial evaluation of patients with suspected dementia and Alzheimer's disease, there is no consensus on how to perform semiquantification of amyloid in such a way that it: (1) facilitates visual qualitative interpretation, (2) takes the kinetic behaviour of the tracer into consideration particularly with regard to at least partially correcting for blood flow dependence, (3) analyses the amyloid load based on accurate parcellation of cortical and subcortical areas, (4) includes partial volume effect correction (PVEC), (5) includes MRI-derived topographical indexes, (6) enables application to PET/MRI images and PET/CT images with separately acquired MR images, and (7) allows automation. A method with all of these characteristics was retrospectively tested in 86 subjects who underwent amyloid ( 18 F-florbetaben) PET/MRI in a clinical setting (using images acquired 90-110 min after injection, 53 were classified visually as amyloid-negative and 33 as amyloid-positive). Early images after tracer administration were acquired between 0 and 10 min after injection, and later images were acquired between 90 and 110 min after injection. PVEC of the PET data was carried out using the geometric transfer matrix method. Parametric images and some regional output parameters, including two innovative ''dual time-point'' indexes, were obtained. Subjects classified visually as amyloid-positive showed a sparse tracer uptake in the primary sensory, motor and visual areas in accordance with the isocortical stage of the topographic distribution of the amyloid plaque (Braak stages V/VI). In patients classified visually as amyloid-negative, the method revealed detectable levels of tracer uptake in the basal portions of the frontal and temporal lobes, areas that are known to be sites of early deposition of amyloid plaques that probably represented early accumulation (Braak stage A) that is typical of normal ageing. There was a strong correlation between age
Directory of Open Access Journals (Sweden)
Kaijun Zhou
2017-09-01
Full Text Available The Jump Point Search (JPS algorithm is adopted for local path planning of the driverless car under urban environment, and it is a fast search method applied in path planning. Firstly, a vector Geographic Information System (GIS map, including Global Positioning System (GPS position, direction, and lane information, is built for global path planning. Secondly, the GIS map database is utilized in global path planning for the driverless car. Then, the JPS algorithm is adopted to avoid the front obstacle, and to find an optimal local path for the driverless car in the urban environment. Finally, 125 different simulation experiments in the urban environment demonstrate that JPS can search out the optimal and safety path successfully, and meanwhile, it has a lower time complexity compared with the Vector Field Histogram (VFH, the Rapidly Exploring Random Tree (RRT, A*, and the Probabilistic Roadmaps (PRM algorithms. Furthermore, JPS is validated usefully in the structured urban environment.
Comments on the comparison of global methods for linear two-point boundary value problems
International Nuclear Information System (INIS)
de Boor, C.; Swartz, B.
1977-01-01
A more careful count of the operations involved in solving the linear system associated with collocation of a two-point boundary value problem using a rough splines reverses results recently reported by others in this journal. In addition, it is observed that the use of the technique of ''condensation of parameters'' can decrease the computer storage required. Furthermore, the use of a particular highly localized basis can also reduce the setup time when the mesh is irregular. Finally, operation counts are roughly estimated for the solution of certain linear system associated with two competing collocation methods; namely, collocation with smooth splines and collocation of the equivalent first order system with continuous piecewise polynomials
Solving eigenvalue problems on curved surfaces using the Closest Point Method
Macdonald, Colin B.
2011-06-01
Eigenvalue problems are fundamental to mathematics and science. We present a simple algorithm for determining eigenvalues and eigenfunctions of the Laplace-Beltrami operator on rather general curved surfaces. Our algorithm, which is based on the Closest Point Method, relies on an embedding of the surface in a higher-dimensional space, where standard Cartesian finite difference and interpolation schemes can be easily applied. We show that there is a one-to-one correspondence between a problem defined in the embedding space and the original surface problem. For open surfaces, we present a simple way to impose Dirichlet and Neumann boundary conditions while maintaining second-order accuracy. Convergence studies and a series of examples demonstrate the effectiveness and generality of our approach. © 2011 Elsevier Inc.
Improved incremental conductance method for maximum power point tracking using cuk converter
Directory of Open Access Journals (Sweden)
M. Saad Saoud
2014-03-01
Full Text Available The Algerian government relies on a strategy focused on the development of inexhaustible resources such as solar and uses to diversify energy sources and prepare the Algeria of tomorrow: about 40% of the production of electricity for domestic consumption will be from renewable sources by 2030, Therefore it is necessary to concentrate our forces in order to reduce the application costs and to increment their performances, Their performance is evaluated and compared through theoretical analysis and digital simulation. This paper presents simulation of improved incremental conductance method for maximum power point tracking (MPPT using DC-DC cuk converter. This improved algorithm is used to track MPPs because it performs precise control under rapidly changing Atmospheric conditions, Matlab/ Simulink were employed for simulation studies.
Li, Xuxu; Li, Xinyang; wang, Caixia
2018-03-01
This paper proposes an efficient approach to decrease the computational costs of correlation-based centroiding methods used for point source Shack-Hartmann wavefront sensors. Four typical similarity functions have been compared, i.e. the absolute difference function (ADF), ADF square (ADF2), square difference function (SDF), and cross-correlation function (CCF) using the Gaussian spot model. By combining them with fast search algorithms, such as three-step search (TSS), two-dimensional logarithmic search (TDL), cross search (CS), and orthogonal search (OS), computational costs can be reduced drastically without affecting the accuracy of centroid detection. Specifically, OS reduces calculation consumption by 90%. A comprehensive simulation indicates that CCF exhibits a better performance than other functions under various light-level conditions. Besides, the effectiveness of fast search algorithms has been verified.
Directory of Open Access Journals (Sweden)
Florin POPESCU
2017-12-01
Full Text Available Early warning system (EWS based on a reliable forecasting process has become a critical component of the management of large complex industrial projects in the globalized transnational environment. The purpose of this research is to critically analyze the forecasting methods from the point of view of early warning, choosing those useful for the construction of EWS. This research addresses complementary techniques, using Bayesian Networks, which addresses both uncertainties and causality in project planning and execution, with the goal of generating early warning signals for project managers. Even though Bayesian networks have been widely used in a range of decision-support applications, their application as early warning systems for project management is still new.
International Nuclear Information System (INIS)
Matijevic, M.; Grgic, D.; Jecmenica, R.
2016-01-01
This paper presents comparison of the Krsko Power Plant simplified Spent Fuel Pool (SFP) dose rates using different computational shielding methodologies. The analysis was performed to estimate limiting gamma dose rates on wall mounted level instrumentation in case of significant loss of cooling water. The SFP was represented with simple homogenized cylinders (point kernel and Monte Carlo (MC)) or cuboids (MC) using uranium, iron, water, and dry-air as bulk region materials. The pool is divided on the old and new section where the old one has three additional subsections representing fuel assemblies (FAs) with different burnup/cooling time (60 days, 1 year and 5 years). The new section represents the FAs with the cooling time of 10 years. The time dependent fuel assembly isotopic composition was calculated using ORIGEN2 code applied to the depletion of one of the fuel assemblies present in the pool (AC-29). The source used in Microshield calculation is based on imported isotopic activities. The time dependent photon spectra with total source intensity from Microshield multigroup point kernel calculations was then prepared for two hybrid deterministic-stochastic sequences. One is based on SCALE/MAVRIC (Monaco and Denovo) methodology and another uses Monte Carlo code MCNP6.1.1b and ADVANTG3.0.1. code. Even though this model is a fairly simple one, the layers of shielding materials are thick enough to pose a significant shielding problem for MC method without the use of effective variance reduction (VR) technique. For that purpose the ADVANTG code was used to generate VR parameters (SB cards in SDEF and WWINP file) for MCNP fixed-source calculation using continuous energy transport. ADVATNG employs a deterministic forward-adjoint transport solver Denovo which implements CADIS/FW-CADIS methodology. Denovo implements a structured, Cartesian-grid SN solver based on the Koch-Baker-Alcouffe parallel transport sweep algorithm across x-y domain blocks. This was first
Wada, Yuji; Yuge, Kohei; Tanaka, Hiroki; Nakamura, Kentaro
2016-07-01
Numerical analysis of the rotation of an ultrasonically levitated droplet with a free surface boundary is discussed. The ultrasonically levitated droplet is often reported to rotate owing to the surface tangential component of acoustic radiation force. To observe the torque from an acoustic wave and clarify the mechanism underlying the phenomena, it is effective to take advantage of numerical simulation using the distributed point source method (DPSM) and moving particle semi-implicit (MPS) method, both of which do not require a calculation grid or mesh. In this paper, the numerical treatment of the viscoacoustic torque, which emerges from the viscous boundary layer and governs the acoustical droplet rotation, is discussed. The Reynolds stress traction force is calculated from the DPSM result using the idea of effective normal particle velocity through the boundary layer and input to the MPS surface particles. A droplet levitated in an acoustic chamber is simulated using the proposed calculation method. The droplet is vertically supported by a plane standing wave from an ultrasonic driver and subjected to a rotating sound field excited by two acoustic sources on the side wall with different phases. The rotation of the droplet is successfully reproduced numerically and its acceleration is discussed and compared with those in the literature.
Lenton, T. M.; Livina, V. N.; Dakos, V.; Van Nes, E. H.; Scheffer, M.
2012-01-01
We address whether robust early warning signals can, in principle, be provided before a climate tipping point is reached, focusing on methods that seek to detect critical slowing down as a precursor of bifurcation. As a test bed, six previously analysed datasets are reconsidered, three palaeoclimate records approaching abrupt transitions at the end of the last ice age and three models of varying complexity forced through a collapse of the Atlantic thermohaline circulation. Approaches based on examining the lag-1 autocorrelation function or on detrended fluctuation analysis are applied together and compared. The effects of aggregating the data, detrending method, sliding window length and filtering bandwidth are examined. Robust indicators of critical slowing down are found prior to the abrupt warming event at the end of the Younger Dryas, but the indicators are less clear prior to the Bølling-Allerød warming, or glacial termination in Antarctica. Early warnings of thermohaline circulation collapse can be masked by inter-annual variability driven by atmospheric dynamics. However, rapidly decaying modes can be successfully filtered out by using a long bandwidth or by aggregating data. The two methods have complementary strengths and weaknesses and we recommend applying them together to improve the robustness of early warnings. PMID:22291229
Neighborhoods and manageable proximity
Directory of Open Access Journals (Sweden)
Stavros Stavrides
2011-08-01
Full Text Available The theatricality of urban encounters is above all a theatricality of distances which allow for the encounter. The absolute “strangeness” of the crowd (Simmel 1997: 74 expressed, in its purest form, in the absolute proximity of a crowded subway train, does not generally allow for any movements of approach, but only for nervous hostile reactions and submissive hypnotic gestures. Neither forced intersections in the course of pedestrians or vehicles, nor the instantaneous crossing of distances by the technology of live broadcasting and remote control give birth to places of encounter. In the forced proximity of the metropolitan crowd which haunted the city of the 19th and 20th century, as well as in the forced proximity of the tele-presence which haunts the dystopic prospect of the future “omnipolis” (Virilio 1997: 74, the necessary distance, which is the stage of an encounter between different instances of otherness, is dissipated.
Comparison of point-of-care methods for preparation of platelet concentrate (platelet-rich plasma).
Weibrich, Gernot; Kleis, Wilfried K G; Streckbein, Philipp; Moergel, Maximilian; Hitzler, Walter E; Hafner, Gerd
2012-01-01
This study analyzed the concentrations of platelets and growth factors in platelet-rich plasma (PRP), which are likely to depend on the method used for its production. The cellular composition and growth factor content of platelet concentrates (platelet-rich plasma) produced by six different procedures were quantitatively analyzed and compared. Platelet and leukocyte counts were determined on an automatic cell counter, and analysis of growth factors was performed using enzyme-linked immunosorbent assay. The principal differences between the analyzed PRP production methods (blood bank method of intermittent flow centrifuge system/platelet apheresis and by the five point-of-care methods) and the resulting platelet concentrates were evaluated with regard to resulting platelet, leukocyte, and growth factor levels. The platelet counts in both whole blood and PRP were generally higher in women than in men; no differences were observed with regard to age. Statistical analysis of platelet-derived growth factor AB (PDGF-AB) and transforming growth factor β1 (TGF-β1) showed no differences with regard to age or gender. Platelet counts and TGF-β1 concentration correlated closely, as did platelet counts and PDGF-AB levels. There were only rare correlations between leukocyte counts and PDGF-AB levels, but comparison of leukocyte counts and PDGF-AB levels demonstrated certain parallel tendencies. TGF-β1 levels derive in substantial part from platelets and emphasize the role of leukocytes, in addition to that of platelets, as a source of growth factors in PRP. All methods of producing PRP showed high variability in platelet counts and growth factor levels. The highest growth factor levels were found in the PRP prepared using the Platelet Concentrate Collection System manufactured by Biomet 3i.
Probabilistic multiobjective wind-thermal economic emission dispatch based on point estimated method
International Nuclear Information System (INIS)
Azizipanah-Abarghooee, Rasoul; Niknam, Taher; Roosta, Alireza; Malekpour, Ahmad Reza; Zare, Mohsen
2012-01-01
In this paper, wind power generators are being incorporated in the multiobjective economic emission dispatch problem which minimizes wind-thermal electrical energy cost and emissions produced by fossil-fueled power plants, simultaneously. Large integration of wind energy sources necessitates an efficient model to cope with uncertainty arising from random wind variation. Hence, a multiobjective stochastic search algorithm based on 2m point estimated method is implemented to analyze the probabilistic wind-thermal economic emission dispatch problem considering both overestimation and underestimation of available wind power. 2m point estimated method handles the system uncertainties and renders the probability density function of desired variables efficiently. Moreover, a new population-based optimization algorithm called modified teaching-learning algorithm is proposed to determine the set of non-dominated optimal solutions. During the simulation, the set of non-dominated solutions are kept in an external memory (repository). Also, a fuzzy-based clustering technique is implemented to control the size of the repository. In order to select the best compromise solution from the repository, a niching mechanism is utilized such that the population will move toward a smaller search space in the Pareto-optimal front. In order to show the efficiency and feasibility of the proposed framework, three different test systems are represented as case studies. -- Highlights: ► WPGs are being incorporated in the multiobjective economic emission dispatch problem. ► 2m PEM handles the system uncertainties. ► A MTLBO is proposed to determine the set of non-dominated (Pareto) optimal solutions. ► A fuzzy-based clustering technique is implemented to control the size of the repository.
ProxImaL: efficient image optimization using proximal algorithms
Heide, Felix
2016-07-11
Computational photography systems are becoming increasingly diverse, while computational resources-for example on mobile platforms-are rapidly increasing. As diverse as these camera systems may be, slightly different variants of the underlying image processing tasks, such as demosaicking, deconvolution, denoising, inpainting, image fusion, and alignment, are shared between all of these systems. Formal optimization methods have recently been demonstrated to achieve state-of-the-art quality for many of these applications. Unfortunately, different combinations of natural image priors and optimization algorithms may be optimal for different problems, and implementing and testing each combination is currently a time-consuming and error-prone process. ProxImaL is a domain-specific language and compiler for image optimization problems that makes it easy to experiment with different problem formulations and algorithm choices. The language uses proximal operators as the fundamental building blocks of a variety of linear and nonlinear image formation models and cost functions, advanced image priors, and noise models. The compiler intelligently chooses the best way to translate a problem formulation and choice of optimization algorithm into an efficient solver implementation. In applications to the image processing pipeline, deconvolution in the presence of Poisson-distributed shot noise, and burst denoising, we show that a few lines of ProxImaL code can generate highly efficient solvers that achieve state-of-the-art results. We also show applications to the nonlinear and nonconvex problem of phase retrieval.
This study compared the utility of three sampling methods for ecological monitoring based on: interchangeability of data (rank correlations), precision (coefficient of variation), cost (minutes/transect), and potential of each method to generate multiple indicators. Species richness and foliar cover...
LSHSIM: A Locality Sensitive Hashing based method for multiple-point geostatistics
Moura, Pedro; Laber, Eduardo; Lopes, Hélio; Mesejo, Daniel; Pavanelli, Lucas; Jardim, João; Thiesen, Francisco; Pujol, Gabriel
2017-10-01
Reservoir modeling is a very important task that permits the representation of a geological region of interest, so as to generate a considerable number of possible scenarios. Since its inception, many methodologies have been proposed and, in the last two decades, multiple-point geostatistics (MPS) has been the dominant one. This methodology is strongly based on the concept of training image (TI) and the use of its characteristics, which are called patterns. In this paper, we propose a new MPS method that combines the application of a technique called Locality Sensitive Hashing (LSH), which permits to accelerate the search for patterns similar to a target one, with a Run-Length Encoding (RLE) compression technique that speeds up the calculation of the Hamming similarity. Experiments with both categorical and continuous images show that LSHSIM is computationally efficient and produce good quality realizations. In particular, for categorical data, the results suggest that LSHSIM is faster than MS-CCSIM, one of the state-of-the-art methods.
Application of distributed point source method (DPSM) to wave propagation in anisotropic media
Fooladi, Samaneh; Kundu, Tribikram
2017-04-01
Distributed Point Source Method (DPSM) was developed by Placko and Kundu1, as a technique for modeling electromagnetic and elastic wave propagation problems. DPSM has been used for modeling ultrasonic, electrostatic and electromagnetic fields scattered by defects and anomalies in a structure. The modeling of such scattered field helps to extract valuable information about the location and type of defects. Therefore, DPSM can be used as an effective tool for Non-Destructive Testing (NDT). Anisotropy adds to the complexity of the problem, both mathematically and computationally. Computation of the Green's function which is used as the fundamental solution in DPSM is considerably more challenging for anisotropic media, and it cannot be reduced to a closed-form solution as is done for isotropic materials. The purpose of this study is to investigate and implement DPSM for an anisotropic medium. While the mathematical formulation and the numerical algorithm will be considered for general anisotropic media, more emphasis will be placed on transversely isotropic materials in the numerical example presented in this paper. The unidirectional fiber-reinforced composites which are widely used in today's industry are good examples of transversely isotropic materials. Development of an effective and accurate NDT method based on these modeling results can be of paramount importance for in-service monitoring of damage in composite structures.
Arahman, Nasrul; Maimun, Teuku; Mukramah, Syawaliah
2017-01-01
The composition of polymer solution and the methods of membrane preparation determine the solidification process of membrane. The formation of membrane structure prepared via non-solvent induced phase separation (NIPS) method is mostly determined by phase separation process between polymer, solvent, and non-solvent. This paper discusses the phase separation process of polymer solution containing Polyethersulfone (PES), N-methylpirrolidone (NMP), and surfactant Tetronic 1307 (Tet). Cloud point experiment is conducted to determine the amount of non-solvent needed on induced phase separation. Amount of water required as a non-solvent decreases by the addition of surfactant Tet. Kinetics of phase separation for such system is studied by the light scattering measurement. With the addition of Tet., the delayed phase separation is observed and the structure growth rate decreases. Moreover, the morphology of fabricated membrane from those polymer systems is analyzed by scanning electron microscopy (SEM). The images of both systems show the formation of finger-like macrovoids through the cross-section.
Fernández-Peña, Rosario; Fuentes-Pumarola, Concepció; Malagón-Aguilera, M Carme; Bonmatí-Tomàs, Anna; Bosch-Farré, Cristina; Ballester-Ferrando, David
2016-09-01
Adapting university programmes to European Higher Education Area criteria has required substantial changes in curricula and teaching methodologies. Reflective learning (RL) has attracted growing interest and occupies an important place in the scientific literature on theoretical and methodological aspects of university instruction. However, fewer studies have focused on evaluating the RL methodology from the point of view of nursing students. To assess nursing students' perceptions of the usefulness and challenges of RL methodology. Mixed method design, using a cross-sectional questionnaire and focus group discussion. The research was conducted via self-reported reflective learning questionnaire complemented by focus group discussion. Students provided a positive overall evaluation of RL, highlighting the method's capacity to help them better understand themselves, engage in self-reflection about the learning process, optimize their strengths and discover additional training needs, along with searching for continuous improvement. Nonetheless, RL does not help them as much to plan their learning or identify areas of weakness or needed improvement in knowledge, skills and attitudes. Among the difficulties or challenges, students reported low motivation and lack of familiarity with this type of learning, along with concerns about the privacy of their reflective journals and about the grading criteria. In general, students evaluated RL positively. The results suggest areas of needed improvement related to unfamiliarity with the methodology, ethical aspects of developing a reflective journal and the need for clear evaluation criteria. Copyright © 2016 Elsevier Ltd. All rights reserved.
Directory of Open Access Journals (Sweden)
Ibrahim Karahan
2016-04-01
Full Text Available Let C be a nonempty closed convex subset of a real Hilbert space H. Let {T_{n}}:C›H be a sequence of nearly nonexpansive mappings such that F:=?_{i=1}^{?}F(T_{i}?Ø. Let V:C›H be a ?-Lipschitzian mapping and F:C›H be a L-Lipschitzian and ?-strongly monotone operator. This paper deals with a modified iterative projection method for approximating a solution of the hierarchical fixed point problem. It is shown that under certain approximate assumptions on the operators and parameters, the modified iterative sequence {x_{n}} converges strongly to x^{*}?F which is also the unique solution of the following variational inequality: ?0, ?x?F. As a special case, this projection method can be used to find the minimum norm solution of above variational inequality; namely, the unique solution x^{*} to the quadratic minimization problem: x^{*}=argmin_{x?F}?x?². The results here improve and extend some recent corresponding results of other authors.
New encapsulation method using low-melting-point alloy for sealing micro heat pipes
Energy Technology Data Exchange (ETDEWEB)
Li, Congming; Wang, Xiaodong; Zhou, Chuanpeng; Luo, Yi; Li, Zhixin; Li, Sidi [Dalian University of Technology, Dalian (China)
2017-06-15
This study proposed a method using Low-melting-point alloy (LMPA) to seal Micro heat pipes (MHPs), which were made of Si substrates and glass covers. Corresponding MHP structures with charging and sealing channels were designed. Three different auxiliary structures were investigated to study the sealability of MHPs with LMPA. One structure is rectangular and the others are triangular with corner angles of 30° and 45°, respectively. Each auxiliary channel for LMPA is 0.5 mm wide and 135 μm deep. LMPA was heated to molten state, injected to channels, and then cooled to room temperature. According to the material characteristic of LMPA, the alloy should swell in the following 12 hours to form strong interaction force between LMPA and Si walls. Experimental results show that the flow speed of liquid LMPA in channels plays an important role in sealing MHPs, and the sealing performance of triangular structures is always better than that of rectangular structure. Therefore, triangular structures are more suitable in sealing MHPs than rectangular ones. LMPA sealing is a plane packaging method that can be applied in the thermal management of high-power IC device and LEDs. Meanwhile, implanting in commercialized fabrication of MHP is easy.
New encapsulation method using low-melting-point alloy for sealing micro heat pipes
International Nuclear Information System (INIS)
Li, Congming; Wang, Xiaodong; Zhou, Chuanpeng; Luo, Yi; Li, Zhixin; Li, Sidi
2017-01-01
This study proposed a method using Low-melting-point alloy (LMPA) to seal Micro heat pipes (MHPs), which were made of Si substrates and glass covers. Corresponding MHP structures with charging and sealing channels were designed. Three different auxiliary structures were investigated to study the sealability of MHPs with LMPA. One structure is rectangular and the others are triangular with corner angles of 30° and 45°, respectively. Each auxiliary channel for LMPA is 0.5 mm wide and 135 μm deep. LMPA was heated to molten state, injected to channels, and then cooled to room temperature. According to the material characteristic of LMPA, the alloy should swell in the following 12 hours to form strong interaction force between LMPA and Si walls. Experimental results show that the flow speed of liquid LMPA in channels plays an important role in sealing MHPs, and the sealing performance of triangular structures is always better than that of rectangular structure. Therefore, triangular structures are more suitable in sealing MHPs than rectangular ones. LMPA sealing is a plane packaging method that can be applied in the thermal management of high-power IC device and LEDs. Meanwhile, implanting in commercialized fabrication of MHP is easy.
Proximal collagenous gastroenteritides:
DEFF Research Database (Denmark)
Nielsen, Ole Haagen; Riis, Lene Buhl; Danese, Silvio
2014-01-01
AIM: While collagenous colitis represents the most common form of the collagenous gastroenteritides, the collagenous entities affecting the proximal part of the gastrointestinal tract are much less recognized and possibly overlooked. The aim was to summarize the latest information through a syste...
DEFF Research Database (Denmark)
Palm, Henrik; Teixidor, Jordi
2015-01-01
searched the homepages of the national heath authorities and national orthopedic societies in West Europe and found 11 national or regional (in case of no national) guidelines including any type of proximal femoral fracture surgery. RESULTS: Pathway consensus is outspread (internal fixation for un...
Donahue, Craig J.; Rais, Elizabeth A.
2009-01-01
This lab experiment illustrates the use of thermogravimetric analysis (TGA) to perform proximate analysis on a series of coal samples of different rank. Peat and coke are also examined. A total of four exercises are described. These are dry exercises as students interpret previously recorded scans. The weight percent moisture, volatile matter,…
Fixed point theorems in locally convex spacesÃ¢Â€Â”the Schauder mapping method
Directory of Open Access Journals (Sweden)
S. Cobzaş
2006-03-01
Full Text Available In the appendix to the book by F. F. Bonsal, Lectures on Some Fixed Point Theorems of Functional Analysis (Tata Institute, Bombay, 1962 a proof by Singbal of the Schauder-Tychonoff fixed point theorem, based on a locally convex variant of Schauder mapping method, is included. The aim of this note is to show that this method can be adapted to yield a proof of Kakutani fixed point theorem in the locally convex case. For the sake of completeness we include also the proof of Schauder-Tychonoff theorem based on this method. As applications, one proves a theorem of von Neumann and a minimax result in game theory.
Directory of Open Access Journals (Sweden)
Lei Guo
2017-02-01
Full Text Available Point-of-interest (POI recommendation has been well studied in recent years. However, most of the existing methods focus on the recommendation scenarios where users can provide explicit feedback. In most cases, however, the feedback is not explicit, but implicit. For example, we can only get a user’s check-in behaviors from the history of what POIs she/he has visited, but never know how much she/he likes and why she/he does not like them. Recently, some researchers have noticed this problem and began to learn the user preferences from the partial order of POIs. However, these works give equal weight to each POI pair and cannot distinguish the contributions from different POI pairs. Intuitively, for the two POIs in a POI pair, the larger the frequency difference of being visited and the farther the geographical distance between them, the higher the contribution of this POI pair to the ranking function. Based on the above observations, we propose a weighted ranking method for POI recommendation. Specifically, we first introduce a Bayesian personalized ranking criterion designed for implicit feedback to POI recommendation. To fully utilize the partial order of POIs, we then treat the cost function in a weighted way, that is give each POI pair a different weight according to their frequency of being visited and the geographical distance between them. Data analysis and experimental results on two real-world datasets demonstrate the existence of user preference on different POI pairs and the effectiveness of our weighted ranking method.
Gézero, L.; Antunes, C.
2017-05-01
The digital terrain models (DTM) assume an essential role in all types of road maintenance, water supply and sanitation projects. The demand of such information is more significant in developing countries, where the lack of infrastructures is higher. In recent years, the use of Mobile LiDAR Systems (MLS) proved to be a very efficient technique in the acquisition of precise and dense point clouds. These point clouds can be a solution to obtain the data for the production of DTM in remote areas, due mainly to the safety, precision, speed of acquisition and the detail of the information gathered. However, the point clouds filtering and algorithms to separate "terrain points" from "no terrain points", quickly and consistently, remain a challenge that has caught the interest of researchers. This work presents a method to create the DTM from point clouds collected by MLS. The method is based in two interactive steps. The first step of the process allows reducing the cloud point to a set of points that represent the terrain's shape, being the distance between points inversely proportional to the terrain variation. The second step is based on the Delaunay triangulation of the points resulting from the first step. The achieved results encourage a wider use of this technology as a solution for large scale DTM production in remote areas.
Hund, E; Massart, D L; Smeyers-Verbeke, J
1999-10-01
The H-point standard additions method (HPSAM) and two versions of the generalized H-point standard additions method (GHPSAM) are evaluated for the UV-analysis of two-component mixtures. Synthetic mixtures of anhydrous caffeine and phenazone as well as of atovaquone and proguanil hydrochloride were used. Furthermore, the method was applied to pharmaceutical formulations that contain these compounds as active drug substances. This paper shows both the difficulties that are related to the methods and the conditions by which acceptable results can be obtained.
International Nuclear Information System (INIS)
Imani, A.; Modarress, H.; Eliassi, A.; Abdous, M.
2009-01-01
The phase separation of (water + salt + polyethylene glycol 15000) systems was studied by cloud-point measurements using the particle counting method. The effect of three kinds of sulphate salt (Na 2 SO 4 , K 2 SO 4 , (NH 4 ) 2 SO 4 ) concentration, polyethylene glycol 15000 concentration, mass ratio of polymer to salt on the cloud-point temperature of these systems have been investigated. The results obtained indicate that the cloud-point temperatures decrease linearly with increase in polyethylene glycol concentrations for different salts. Also, the cloud points decrease with an increase in mass ratio of salt to polymer.
Computation of point reactor dynamics equations with thermal feedback via weighted residue method
International Nuclear Information System (INIS)
Suo Changan; Liu Xiaoming
1986-01-01
Point reactor dynamics equations with six groups of delayed neutrons have been computed via weighted-residual method in which the delta function was taken as a weighting function, and the parabolic with or without exponential factor as a trial function respectively for an insertion of large or smaller reactivity. The reactivity inserted into core can be varied with time, including insertion in forms of step function, polynomials up to second power and sine function. A thermal feedback of single flow channel model was added in. The thermal equations concerned were treated by use of a backward difference technique. A WRK code has been worked out, including implementation of an automatic selection of time span based on an input of error requirement and of an automatic change between computation with large reactivity and that with smaller one. On the condition of power varied slowly and without feedback, the results are not sensitive to the selection of values of time span. At last, the comparison of relevant results has shown that the agreement is quite well
A mixed-methods evaluation of a multidisciplinary point of care ultrasound program.
Smith, Andrew; Parsons, Michael; Renouf, Tia; Boyd, Sarah; Rogers, Peter
2018-04-24
Point of Care Ultrasound (PoCUS) is well established within emergency medicine, however, the availability of formal training for other clinical disciplines is limited. Memorial University has established a cost-efficient, multidisciplinary PoCUS training program focusing on training residents' discipline-specific ultrasound skills. This study evaluates the skills, knowledge, and attitudes of residents who participated in the program. Analysis was conducted using a mixed-methods, sequential exploratory approach. Initially, a focus group of seven first year residents was conducted to generate themes that were used to guide development of a survey administered to residents over a two-year period. Thirty residents responded to the survey (response rate 63.8%) with 53.3% meeting the training requirements for focused assessment using sonography in trauma, 43.3% for pleural effusion, 40.0% for aortic aneurysms, and 40.0% for cardiac scans. Early pregnancy assessment was the skill of least interest with 46.6% not interested. Over half the residents (53.6%) agreed or strongly agreed that a multidisciplinary program met their needs while 21.4% disagreed. The focus group found the multidisciplinary approach adequate. A single PoCUS curriculum has been shown to meet the needs and expectations of a majority of residents from multiple disciplines. It can enhance collaboration and bridge gaps between increasingly compartmentalized practices of medicine.
Development of a Cloud-Point Extraction Method for Cobalt Determination in Natural Water Samples
Directory of Open Access Journals (Sweden)
Mohammad Reza Jamali
2013-01-01
Full Text Available A new, simple, and versatile cloud-point extraction (CPE methodology has been developed for the separation and preconcentration of cobalt. The cobalt ions in the initial aqueous solution were complexed with 4-Benzylpiperidinedithiocarbamate, and Triton X-114 was added as surfactant. Dilution of the surfactant-rich phase with acidified ethanol was performed after phase separation, and the cobalt content was measured by flame atomic absorption spectrometry. The main factors affecting CPE procedure, such as pH, concentration of ligand, amount of Triton X-114, equilibrium temperature, and incubation time were investigated and optimized. Under the optimal conditions, the limit of detection (LOD for cobalt was 0.5 μg L-1, with sensitivity enhancement factor (EF of 67. Calibration curve was linear in the range of 2–150 μg L-1, and relative standard deviation was 3.2% (c=100 μg L-1; n=10. The proposed method was applied to the determination of trace cobalt in real water samples with satisfactory analytical results.
CaFE: a tool for binding affinity prediction using end-point free energy methods.
Liu, Hui; Hou, Tingjun
2016-07-15
Accurate prediction of binding free energy is of particular importance to computational biology and structure-based drug design. Among those methods for binding affinity predictions, the end-point approaches, such as MM/PBSA and LIE, have been widely used because they can achieve a good balance between prediction accuracy and computational cost. Here we present an easy-to-use pipeline tool named Calculation of Free Energy (CaFE) to conduct MM/PBSA and LIE calculations. Powered by the VMD and NAMD programs, CaFE is able to handle numerous static coordinate and molecular dynamics trajectory file formats generated by different molecular simulation packages and supports various force field parameters. CaFE source code and documentation are freely available under the GNU General Public License via GitHub at https://github.com/huiliucode/cafe_plugin It is a VMD plugin written in Tcl and the usage is platform-independent. tingjunhou@zju.edu.cn. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
Electronic structures of β-SiC containing point defects studied by DX-Xα method
International Nuclear Information System (INIS)
Sawabe, Takashi; Yano, Toyohiko
2008-01-01
The DV-Xα method was used to calculate the bond order between atoms in cubic silicon carbide (β-SiC) with a point defect. Three types of β-SiC cluster models were used: pure cluster, vacancy cluster and interstitial cluster. The bond order was influenced by the kind to defects. The bonds between C interstitial and neighboring C atoms were composed of anti-bonding type interactions, while the bonds between Si interstitial and neighboring C and Si atoms were composed of bonding type interactions. The overlap population of each molecular orbital was examined to obtain detailed information of the chemical bonding. It appeared more difficult to recombine interstitial atoms in a cluster with a C atom vacancy than in a cluster with a Si atom vacancy, due to the stronger Si-Si bonds surrounding the C atom vacancy. The C interstitial atom had C2s and C2p anti-bonding interactions with high energy levels. The Si interstitial had minimal anti-bonding interactions. (author)
Energy Technology Data Exchange (ETDEWEB)
Campins-Falco, Pilar; Bosch-Reig, Francisco; Herraez-Hernandez, Rosa; Sevillano-Cabeza, Adela (Universidad de Valencia (Spain). Facultad de Quimica, Departamento de Quimica Analitica)
1992-02-10
This work establishes the fundamentals of the H-point standard additions method for liquid chromatography for the simultaneous analysis of binary mixtures with overlapped chromatographic peaks. The method was compared with the deconvolution method of peak suppression and the second derivative of elution profiles. Different mixtures of diuretics were satisfactorily resolved. (author). 21 refs.; 9 figs.; 2 tabs.
The most remote point method for the site selection of the future GGOS network
Hase, Hayo; Pedreros, Felipe
2014-10-01
The Global Geodetic Observing System (GGOS) proposes 30-40 geodetic observatories as global infrastructure for the most accurate reference frame to monitor the global change. To reach this goal, several geodetic observatories have upgrade plans to become GGOS stations. Most initiatives are driven by national institutions following national interests. From a global perspective, the site distribution remains incomplete and the initiatives to improve this are up until now insufficient. This article is a contribution to answer the question on where to install new GGOS observatories and where to add observation techniques to existing observatories. It introduces the iterative most remote point (MRP) method for filling in the largest gaps in existing technique-specific networks. A spherical version of the Voronoi-diagram is used to pick the optimal location of the new observatory, but practical concerns determine its realistic location. Once chosen, the process is iterated. A quality and a homogeneity parameter of global networks measure the progress of improving the homogeneity of the global site distribution. This method is applied to the global networks of VGOS, and VGOS co-located with SLR to derive some clues about where additional observatory sites or additional observation techniques at existing observatories will improve the GGOS network configuration. With only six additional VGOS-stations, the homogeneity of the global VGOS-network could be significantly improved by more than . From the presented analysis, 25 known or new co-located VGOS and SLR sites are proposed as the future GGOS backbone: Colombo, Easter Island, Fairbanks, Fortaleza, Galapagos, GGAO, Hartebeesthoek, Honiara, Ibadan, Kokee Park, La Plata, Mauritius, McMurdo, Metsahövi, Ny Alesund, Riyadh, San Diego, Santa Maria, Shanghai, Syowa, Tahiti, Tristan de Cunha, Warkworth, Wettzell, and Yarragadee.
A Numerical Investigation of CFRP-Steel Interfacial Failure with Material Point Method
International Nuclear Information System (INIS)
Shen Luming; Faleh, Haydar; Al-Mahaidi, Riadh
2010-01-01
The success of retrofitting steel structures by using the Carbon Fibre Reinforced Polymers (CFRP) significantly depends on the performance and integrity of CFRP-steel joint and the effectiveness of the adhesive used. Many of the previous numerical studies focused on the design and structural performance of the CFRP-steel system and neglected the mechanical responses of adhesive layer, which results in the lack of understanding in how the adhesive layer between the CFRP and steel performs during the loading and failure stages. Based on the recent observation on the failure of CFRP-steel bond in the double lap shear tests, a numerical approach is proposed in this study to simulate the delamination process of CFRP sheet from steel plate using the Material Point Method (MPM). In the proposed approach, an elastoplasticity model with a linear hardening and softening law is used to model the epoxy layer. The MPM, which does not employ fixed mesh-connectivity, is employed as a robust spatial discretization method to accommodate the multi-scale discontinuities involved in the CFRP-steel bond failure process. To demonstrate the potential of the proposed approach, a parametric study is conducted to investigate the effects of bond length and loading rates on the capacity and failure modes of CFRP-steel system. The evolution of the CFRP-steel bond failure and the distribution of stress and strain along bond length direction will be presented. The simulation results not only well match the available experimental data but also provide a better understanding on the physics behind the CFRP sheet delamination process.
International Nuclear Information System (INIS)
Liu, Wenyang; Cheung, Yam; Sawant, Amit; Ruan, Dan
2016-01-01
Purpose: To develop a robust and real-time surface reconstruction method on point clouds captured from a 3D surface photogrammetry system. Methods: The authors have developed a robust and fast surface reconstruction method on point clouds acquired by the photogrammetry system, without explicitly solving the partial differential equation required by a typical variational approach. Taking advantage of the overcomplete nature of the acquired point clouds, their method solves and propagates a sparse linear relationship from the point cloud manifold to the surface manifold, assuming both manifolds share similar local geometry. With relatively consistent point cloud acquisitions, the authors propose a sparse regression (SR) model to directly approximate the target point cloud as a sparse linear combination from the training set, assuming that the point correspondences built by the iterative closest point (ICP) is reasonably accurate and have residual errors following a Gaussian distribution. To accommodate changing noise levels and/or presence of inconsistent occlusions during the acquisition, the authors further propose a modified sparse regression (MSR) model to model the potentially large and sparse error built by ICP with a Laplacian prior. The authors evaluated the proposed method on both clinical point clouds acquired under consistent acquisition conditions and on point clouds with inconsistent occlusions. The authors quantitatively evaluated the reconstruction performance with respect to root-mean-squared-error, by comparing its reconstruction results against that from the variational method. Results: On clinical point clouds, both the SR and MSR models have achieved sub-millimeter reconstruction accuracy and reduced the reconstruction time by two orders of magnitude to a subsecond reconstruction time. On point clouds with inconsistent occlusions, the MSR model has demonstrated its advantage in achieving consistent and robust performance despite the introduced
Energy Technology Data Exchange (ETDEWEB)
Liu, Wenyang [Department of Bioengineering, University of California, Los Angeles, Los Angeles, California 90095 (United States); Cheung, Yam [Department of Radiation Oncology, University of Texas Southwestern, Dallas, Texas 75390 (United States); Sawant, Amit [Department of Radiation Oncology, University of Texas Southwestern, Dallas, Texas, 75390 and Department of Radiation Oncology, University of Maryland, College Park, Maryland 20742 (United States); Ruan, Dan, E-mail: druan@mednet.ucla.edu [Department of Bioengineering, University of California, Los Angeles, Los Angeles, California 90095 and Department of Radiation Oncology, University of California, Los Angeles, Los Angeles, California 90095 (United States)
2016-05-15
Purpose: To develop a robust and real-time surface reconstruction method on point clouds captured from a 3D surface photogrammetry system. Methods: The authors have developed a robust and fast surface reconstruction method on point clouds acquired by the photogrammetry system, without explicitly solving the partial differential equation required by a typical variational approach. Taking advantage of the overcomplete nature of the acquired point clouds, their method solves and propagates a sparse linear relationship from the point cloud manifold to the surface manifold, assuming both manifolds share similar local geometry. With relatively consistent point cloud acquisitions, the authors propose a sparse regression (SR) model to directly approximate the target point cloud as a sparse linear combination from the training set, assuming that the point correspondences built by the iterative closest point (ICP) is reasonably accurate and have residual errors following a Gaussian distribution. To accommodate changing noise levels and/or presence of inconsistent occlusions during the acquisition, the authors further propose a modified sparse regression (MSR) model to model the potentially large and sparse error built by ICP with a Laplacian prior. The authors evaluated the proposed method on both clinical point clouds acquired under consistent acquisition conditions and on point clouds with inconsistent occlusions. The authors quantitatively evaluated the reconstruction performance with respect to root-mean-squared-error, by comparing its reconstruction results against that from the variational method. Results: On clinical point clouds, both the SR and MSR models have achieved sub-millimeter reconstruction accuracy and reduced the reconstruction time by two orders of magnitude to a subsecond reconstruction time. On point clouds with inconsistent occlusions, the MSR model has demonstrated its advantage in achieving consistent and robust performance despite the introduced
Application of a practical method for the isocenter point in vivo dosimetry by a transit signal
International Nuclear Information System (INIS)
Piermattei, Angelo; Fidanzio, Andrea; Azario, Luigi
2007-01-01
This work reports the results of the application of a practical method to determine the in vivo dose at the isocenter point, D iso , of brain thorax and pelvic treatments using a transit signal S t . The use of a stable detector for the measurement of the signal S t (obtained by the x-ray beam transmitted through the patient) reduces many of the disadvantages associated with the use of solid-state detectors positioned on the patient as their periodic recalibration, and their positioning is time consuming. The method makes use of a set of correlation functions, obtained by the ratio between S t and the mid-plane dose value, D m , in standard water-equivalent phantoms, both determined along the beam central axis. The in vivo measurement of D iso required the determination of the water-equivalent thickness of the patient along the beam central axis by the treatment planning system that uses the electron densities supplied by calibrated Hounsfield numbers of the computed tomography scanner. This way it is, therefore, possible to compare D iso with the stated doses, D iso,TPS , generally used by the treatment planning system for the determination of the monitor units. The method was applied in five Italian centers that used beams of 6 MV, 10 MV, 15 MV x-rays and 60 Co γ-rays. In particular, in four centers small ion-chambers were positioned below the patient and used for the S t measurement. In only one center, the S t signals were obtained directly by the central pixels of an EPID (electronic portal imaging device) equipped with commercial software that enabled its use as a stable detector. In the four centers where an ion-chamber was positioned on the EPID, 60 pelvic treatments were followed for two fields, an anterior-posterior or a posterior-anterior irradiation and a lateral-lateral irradiation. Moreover, ten brain tumors were checked for a lateral-lateral irradiation, and five lung tumors carried out with three irradiations with different gantry angles were
Application of a practical method for the isocenter point in vivo dosimetry by a transit signal
Energy Technology Data Exchange (ETDEWEB)
Piermattei, Angelo [UO di Fisica Sanitaria, Centro di Ricerca e Formazione ad Alta Tecnologia nelle Scienze Biomediche dell' Universita Cattolica Sacro Cuore, Campobasso (Italy); Fidanzio, Andrea [Istituto di Fisica, Universita Cattolica del Sacro Cuore, Rome (Italy); Azario, Luigi [Istituto di Fisica, Universita Cattolica del Sacro Cuore, Rome (Italy)] (and others)
2007-08-21
This work reports the results of the application of a practical method to determine the in vivo dose at the isocenter point, D{sub iso}, of brain thorax and pelvic treatments using a transit signal S{sub t}. The use of a stable detector for the measurement of the signal S{sub t} (obtained by the x-ray beam transmitted through the patient) reduces many of the disadvantages associated with the use of solid-state detectors positioned on the patient as their periodic recalibration, and their positioning is time consuming. The method makes use of a set of correlation functions, obtained by the ratio between S{sub t} and the mid-plane dose value, D{sub m}, in standard water-equivalent phantoms, both determined along the beam central axis. The in vivo measurement of D{sub iso} required the determination of the water-equivalent thickness of the patient along the beam central axis by the treatment planning system that uses the electron densities supplied by calibrated Hounsfield numbers of the computed tomography scanner. This way it is, therefore, possible to compare D{sub iso} with the stated doses, D{sub iso,TPS}, generally used by the treatment planning system for the determination of the monitor units. The method was applied in five Italian centers that used beams of 6 MV, 10 MV, 15 MV x-rays and {sup 60}Co {gamma}-rays. In particular, in four centers small ion-chambers were positioned below the patient and used for the S{sub t} measurement. In only one center, the S{sub t} signals were obtained directly by the central pixels of an EPID (electronic portal imaging device) equipped with commercial software that enabled its use as a stable detector. In the four centers where an ion-chamber was positioned on the EPID, 60 pelvic treatments were followed for two fields, an anterior-posterior or a posterior-anterior irradiation and a lateral-lateral irradiation. Moreover, ten brain tumors were checked for a lateral-lateral irradiation, and five lung tumors carried out with
International Nuclear Information System (INIS)
Heller, E.J.
1996-01-01
It is well known that at long wavelengths λ an s-wave scatterer can have a scattering cross section σ on the order of λ 2 , much larger than its physical size, as measured by the range of its potential. Very interesting phenomena can arise when two or more identical scatterers are placed close together, well within one wavelength. We show that, for a pair of identical scatterers, an extremely narrow p-wave open-quote open-quote proximity close-quote close-quote resonance develops from a broader s-wave resonance of the individual scatterers. A new s-wave resonance of the pair also appears. The relation of these proximity resonances (so called because they appear when the scatterers are close together) to the Thomas and Efimov effects is discussed. copyright 1996 The American Physical Society
A comparison of point counts with a new acoustic sampling method ...
African Journals Online (AJOL)
We showed that the estimates of species richness, abundance and community composition based on point counts and post-hoc laboratory listening to acoustic samples are very similar, especially for a distance limited up to 50 m. Species that were frequently missed during both point counts and listening to acoustic samples ...
Performances improvement of maximum power point tracking perturb and observe method
Energy Technology Data Exchange (ETDEWEB)
Egiziano, L.; Femia, N.; Granozio, D.; Petrone, G.; Spagnuolo, G. [Salermo Univ., Salermo (Italy); Vitelli, M. [Seconda Univ. di Napoli, Napoli (Italy)
2006-07-01
Perturb and observe best operation conditions were investigated in order to identify edge efficiency performance capabilities of a maximum power point (MPP) tracking technique for photovoltaic (PV) applications. The strategy was developed to ensure a 3-points behavior across the MPP under a fixed irradiation level with a central point blocked on the MPP and 2 operating points operating at voltage values that guaranteed the same power levels. The system was also devised to quickly detect the MPP movement in the presence of varying atmospheric conditions by increasing the perturbation so that the MPP was guaranteed within a few sampling periods. A perturbation equation was selected where amplitude was represented as a function of the actual power drawn from the PV field together with the adoption of a parabolic interpolation of the sequence of the final 3 acquired voltage power couples corresponding to as many operating points. The technique was developed to ensure that the power difference between 2 consecutive operating points was higher than the power quantization error. Simulations were conducted to demonstrate that the proposed technique arranged operating points symmetrically around the MPP. The average power of the 3-points set was achieved by means of the parabolic prediction. Experiments conducted to validate the simulation showed a reduced power oscillation below the MPP and a real power gain. 2 refs., 8 figs.
Uncemented allograft-prosthetic composite reconstruction of the proximal femur
Directory of Open Access Journals (Sweden)
Li Min
2014-01-01
Full Text Available Background: Allograft-prosthetic composite can be divided into three groups names cemented, uncemented, and partially cemented. Previous studies have mainly reported outcomes in cemented and partially cemented allograft-prosthetic composites, but have rarely focused on the uncemented allograft-prosthetic composites. The objectives of our study were to describe a surgical technique for using proximal femoral uncemented allograft-prosthetic composite and to present the radiographic and clinical results. Materials and Methods: Twelve patients who underwent uncemented allograft-prosthetic composite reconstruction of the proximal femur after bone tumor resection were retrospectively evaluated at an average followup of 24.0 months. Clinical records and radiographs were evaluated. Results: In our series, union occurred in all the patients (100%; range 5-9 months. Until the most recent followup, there were no cases with infection, nonunion of the greater trochanter, junctional bone resorption, dislocation, allergic reaction, wear of acetabulum socket, recurrence, and metastasis. But there were three periprosthetic fractures which were fixed using cerclage wire during surgery. Five cases had bone resorption in and around the greater trochanter. The average Musculoskeletal Tumor Society (MSTS score and Harris hip score (HHS were 26.2 points (range 24-29 points and 80.6 points (range 66.2-92.7 points, respectively. Conclusions: These results showed that uncemented allograft-prosthetic composite could promote bone union through compression at the host-allograft junction and is a good choice for proximal femoral resection. Although this technology has its own merits, long term outcomes are yet not validated.
Liu, Wenyang; Cheung, Yam; Sawant, Amit; Ruan, Dan
2016-05-01
To develop a robust and real-time surface reconstruction method on point clouds captured from a 3D surface photogrammetry system. The authors have developed a robust and fast surface reconstruction method on point clouds acquired by the photogrammetry system, without explicitly solving the partial differential equation required by a typical variational approach. Taking advantage of the overcomplete nature of the acquired point clouds, their method solves and propagates a sparse linear relationship from the point cloud manifold to the surface manifold, assuming both manifolds share similar local geometry. With relatively consistent point cloud acquisitions, the authors propose a sparse regression (SR) model to directly approximate the target point cloud as a sparse linear combination from the training set, assuming that the point correspondences built by the iterative closest point (ICP) is reasonably accurate and have residual errors following a Gaussian distribution. To accommodate changing noise levels and/or presence of inconsistent occlusions during the acquisition, the authors further propose a modified sparse regression (MSR) model to model the potentially large and sparse error built by ICP with a Laplacian prior. The authors evaluated the proposed method on both clinical point clouds acquired under consistent acquisition conditions and on point clouds with inconsistent occlusions. The authors quantitatively evaluated the reconstruction performance with respect to root-mean-squared-error, by comparing its reconstruction results against that from the variational method. On clinical point clouds, both the SR and MSR models have achieved sub-millimeter reconstruction accuracy and reduced the reconstruction time by two orders of magnitude to a subsecond reconstruction time. On point clouds with inconsistent occlusions, the MSR model has demonstrated its advantage in achieving consistent and robust performance despite the introduced occlusions. The authors have
Energy Technology Data Exchange (ETDEWEB)
York, A.R. II [Sandia National Labs., Albuquerque, NM (United States). Engineering and Process Dept.
1997-07-01
The material point method (MPM) is an evolution of the particle in cell method where Lagrangian particles or material points are used to discretize the volume of a material. The particles carry properties such as mass, velocity, stress, and strain and move through a Eulerian or spatial mesh. The momentum equation is solved on the Eulerian mesh. Modifications to the material point method are developed that allow the simulation of thin membranes, compressible fluids, and their dynamic interactions. A single layer of material points through the thickness is used to represent a membrane. The constitutive equation for the membrane is applied in the local coordinate system of each material point. Validation problems are presented and numerical convergence is demonstrated. Fluid simulation is achieved by implementing a constitutive equation for a compressible, viscous, Newtonian fluid and by solution of the energy equation. The fluid formulation is validated by simulating a traveling shock wave in a compressible fluid. Interactions of the fluid and membrane are handled naturally with the method. The fluid and membrane communicate through the Eulerian grid on which forces are calculated due to the fluid and membrane stress states. Validation problems include simulating a projectile impacting an inflated airbag. In some impact simulations with the MPM, bodies may tend to stick together when separating. Several algorithms are proposed and tested that allow bodies to separate from each other after impact. In addition, several methods are investigated to determine the local coordinate system of a membrane material point without relying upon connectivity data.
Johnston, James D; Magnusson, Brianna M; Eggett, Dennis; Collingwood, Scott C; Bernhardt, Scott A
2015-01-01
Residential temperature and humidity are associated with multiple health effects. Studies commonly use single-point measures to estimate indoor temperature and humidity exposures, but there is little evidence to support this sampling strategy. This study evaluated the relationship between single-point and continuous monitoring of air temperature, apparent temperature, relative humidity, and absolute humidity over four exposure intervals (5-min, 30-min, 24-hr, and 12-days) in 9 northern Utah homes, from March-June 2012. Three homes were sampled twice, for a total of 12 observation periods. Continuous data-logged sampling was conducted in homes for 2-3 wks, and simultaneous single-point measures (n = 114) were collected using handheld thermo-hygrometers. Time-centered single-point measures were moderately correlated with short-term (30-min) data logger mean air temperature (r = 0.76, β = 0.74), apparent temperature (r = 0.79, β = 0.79), relative humidity (r = 0.70, β = 0.63), and absolute humidity (r = 0.80, β = 0.80). Data logger 12-day means were also moderately correlated with single-point air temperature (r = 0.64, β = 0.43) and apparent temperature (r = 0.64, β = 0.44), but were weakly correlated with single-point relative humidity (r = 0.53, β = 0.35) and absolute humidity (r = 0.52, β = 0.39). Of the single-point RH measures, 59 (51.8%) deviated more than ±5%, 21 (18.4%) deviated more than ±10%, and 6 (5.3%) deviated more than ±15% from data logger 12-day means. Where continuous indoor monitoring is not feasible, single-point sampling strategies should include multiple measures collected at prescribed time points based on local conditions.
Mochizuki, Yuta; Kaneko, Takao; Kawahara, Keisuke; Toyoda, Shinya; Kono, Norihiko; Hada, Masaru; Ikegami, Hiroyasu; Musha, Yoshiro
2017-11-20
The quadrant method was described by Bernard et al. and it has been widely used for postoperative evaluation of anterior cruciate ligament (ACL) reconstruction. The purpose of this research is to further develop the quadrant method measuring four points, which we named four-point quadrant method, and to compare with the quadrant method. Three-dimensional computed tomography (3D-CT) analyses were performed in 25 patients who underwent double-bundle ACL reconstruction using the outside-in technique. The four points in this study's quadrant method were defined as point1-highest, point2-deepest, point3-lowest, and point4-shallowest, in femoral tunnel position. Value of depth and height in each point was measured. Antero-medial (AM) tunnel is (depth1, height2) and postero-lateral (PL) tunnel is (depth3, height4) in this four-point quadrant method. The 3D-CT images were evaluated independently by 2 orthopaedic surgeons. A second measurement was performed by both observers after a 4-week interval. Intra- and inter-observer reliability was calculated by means of intra-class correlation coefficient (ICC). Also, the accuracy of the method was evaluated against the quadrant method. Intra-observer reliability was almost perfect for both AM and PL tunnel (ICC > 0.81). Inter-observer reliability of AM tunnel was substantial (ICC > 0.61) and that of PL tunnel was almost perfect (ICC > 0.81). The AM tunnel position was 0.13% deep, 0.58% high and PL tunnel position was 0.01% shallow, 0.13% low compared to quadrant method. The four-point quadrant method was found to have high intra- and inter-observer reliability and accuracy. This method can evaluate the tunnel position regardless of the shape and morphology of the bone tunnel aperture for use of comparison and can provide measurement that can be compared with various reconstruction methods. The four-point quadrant method of this study is considered to have clinical relevance in that it is a detailed and accurate tool for
International Nuclear Information System (INIS)
Reyes Lopez, Y.; Yervilla Herrera, H.; Viamontes Esquivel, A.; Recarey Morfa, C. A.
2009-01-01
In the following paper we developed a new method to interpolate large volumes of scattered data, focused mainly on the results of the Mesh free Methods, Points Methods and the Particles Methods application. Through this one, we use local radial basis function as interpolating functions. We also use over-tree as the data structure that allows to accelerate the localization of the data that influences to interpolate the values at a new point, speeding up the application of scientific visualization techniques to generate images from large data volumes from the application of Mesh-free Methods, Points and Particle Methods, in the resolution of diverse models of physics-mathematics. As an example, the results obtained after applying this method using the local interpolation functions of Shepard are shown. (Author) 22 refs
Golkhou, Vahid; Parnianpour, Mohamad; Lucas, Caro
2005-04-01
In this study, we have used a single link system with a pair of muscles that are excited with alpha and gamma signals to achieve both point to point and oscillatory movements with variable amplitude and frequency.The system is highly nonlinear in all its physical and physiological attributes. The major physiological characteristics of this system are simultaneous activation of a pair of nonlinear muscle-like-actuators for control purposes, existence of nonlinear spindle-like sensors and Golgi tendon organ-like sensor, actions of gravity and external loading. Transmission delays are included in the afferent and efferent neural paths to account for a more accurate representation of the reflex loops.A reinforcement learning method with an actor-critic (AC) architecture instead of middle and low level of central nervous system (CNS), is used to track a desired trajectory. The actor in this structure is a two layer feedforward neural network and the critic is a model of the cerebellum. The critic is trained by state-action-reward-state-action (SARSA) method. The critic will train the actor by supervisory learning based on the prior experiences. Simulation studies of oscillatory movements based on the proposed algorithm demonstrate excellent tracking capability and after 280 epochs the RMS error for position and velocity profiles were 0.02, 0.04 rad and rad/s, respectively.
A Registration Method Based on Contour Point Cloud for 3D Whole-Body PET and CT Images
Directory of Open Access Journals (Sweden)
Zhiying Song
2017-01-01
Full Text Available The PET and CT fusion image, combining the anatomical and functional information, has important clinical meaning. An effective registration of PET and CT images is the basis of image fusion. This paper presents a multithread registration method based on contour point cloud for 3D whole-body PET and CT images. Firstly, a geometric feature-based segmentation (GFS method and a dynamic threshold denoising (DTD method are creatively proposed to preprocess CT and PET images, respectively. Next, a new automated trunk slices extraction method is presented for extracting feature point clouds. Finally, the multithread Iterative Closet Point is adopted to drive an affine transform. We compare our method with a multiresolution registration method based on Mattes Mutual Information on 13 pairs (246~286 slices per pair of 3D whole-body PET and CT data. Experimental results demonstrate the registration effectiveness of our method with lower negative normalization correlation (NC = −0.933 on feature images and less Euclidean distance error (ED = 2.826 on landmark points, outperforming the source data (NC = −0.496, ED = 25.847 and the compared method (NC = −0.614, ED = 16.085. Moreover, our method is about ten times faster than the compared one.
McGill, Matthew J. (Inventor); Scott, Vibart S. (Inventor); Marzouk, Marzouk (Inventor)
2001-01-01
A holographic optical element transforms a spectral distribution of light to image points. The element comprises areas, each of which acts as a separate lens to image the light incident in its area to an image point. Each area contains the recorded hologram of a point source object. The image points can be made to lie in a line in the same focal plane so as to align with a linear array detector. A version of the element has been developed that has concentric equal areas to match the circular fringe pattern of a Fabry-Perot interferometer. The element has high transmission efficiency, and when coupled with high quantum efficiency solid state detectors, provides an efficient photon-collecting detection system. The element may be used as part of the detection system in a direct detection Doppler lidar system or multiple field of view lidar system.
Comparative study of building footprint estimation methods from LiDAR point clouds
Rozas, E.; Rivera, F. F.; Cabaleiro, J. C.; Pena, T. F.; Vilariño, D. L.
2017-10-01
Building area calculation from LiDAR points is still a difficult task with no clear solution. Their different characteristics, such as shape or size, have made the process too complex to automate. However, several algorithms and techniques have been used in order to obtain an approximated hull. 3D-building reconstruction or urban planning are examples of important applications that benefit of accurate building footprint estimations. In this paper, we have carried out a study of accuracy in the estimation of the footprint of buildings from LiDAR points. The analysis focuses on the processing steps following the object recognition and classification, assuming that labeling of building points have been previously performed. Then, we perform an in-depth analysis of the influence of the point density over the accuracy of the building area estimation. In addition, a set of buildings with different size and shape were manually classified, in such a way that they can be used as benchmark.
Calculation Method for Equilibrium Points in Dynamical Systems Based on Adaptive Sinchronization
Directory of Open Access Journals (Sweden)
Manuel Prian Rodríguez
2017-12-01
Full Text Available In this work, a control system is proposed as an equivalent numerical procedure whose aim is to obtain the natural equilibrium points of a dynamical system. These equilibrium points may be employed later as setpoint signal for different control techniques. The proposed procedure is based on the adaptive synchronization between an oscillator and a reference model driven by the oscillator state variables. A stability analysis is carried out and a simplified algorithm is proposed. Finally, satisfactory simulation results are shown.
Advanced DNA-Based Point-of-Care Diagnostic Methods for Plant Diseases Detection
Lau, Han Yih; Botella, Jose R.
2017-01-01
Diagnostic technologies for the detection of plant pathogens with point-of-care capability and high multiplexing ability are an essential tool in the fight to reduce the large agricultural production losses caused by plant diseases. The main desirable characteristics for such diagnostic assays are high specificity, sensitivity, reproducibility, quickness, cost efficiency and high-throughput multiplex detection capability. This article describes and discusses various DNA-based point-of care di...
Wenying, Wei; Jinyu, Han; Wen, Xu
2004-01-01
The specific position of a group in the molecule has been considered, and a group vector space method for estimating enthalpy of vaporization at the normal boiling point of organic compounds has been developed. Expression for enthalpy of vaporization Delta(vap)H(T(b)) has been established and numerical values of relative group parameters obtained. The average percent deviation of estimation of Delta(vap)H(T(b)) is 1.16, which show that the present method demonstrates significant improvement in applicability to predict the enthalpy of vaporization at the normal boiling point, compared the conventional group methods.
Improved Full-Newton Step O(nL) Infeasible Interior-Point Method for Linear Optimization
Gu, G.; Mansouri, H.; Zangiabadi, M.; Bai, Y.Q.; Roos, C.
2009-01-01
We present several improvements of the full-Newton step infeasible interior-point method for linear optimization introduced by Roos (SIAM J. Optim. 16(4):1110–1136, 2006). Each main step of the method consists of a feasibility step and several centering steps. We use a more natural feasibility step,
Directory of Open Access Journals (Sweden)
Buyun Sheng
2018-01-01
Full Text Available The existing surface reconstruction algorithms currently reconstruct large amounts of mesh data. Consequently, many of these algorithms cannot meet the efficiency requirements of real-time data transmission in a web environment. This paper proposes a lightweight surface reconstruction method for online 3D scanned point cloud data oriented toward 3D printing. The proposed online lightweight surface reconstruction algorithm is composed of a point cloud update algorithm (PCU, a rapid iterative closest point algorithm (RICP, and an improved Poisson surface reconstruction algorithm (IPSR. The generated lightweight point cloud data are pretreated using an updating and rapid registration method. The Poisson surface reconstruction is also accomplished by a pretreatment to recompute the point cloud normal vectors; this approach is based on a least squares method, and the postprocessing of the PDE patch generation was based on biharmonic-like fourth-order PDEs, which effectively reduces the amount of reconstructed mesh data and improves the efficiency of the algorithm. This method was verified using an online personalized customization system that was developed with WebGL and oriented toward 3D printing. The experimental results indicate that this method can generate a lightweight 3D scanning mesh rapidly and efficiently in a web environment.
Multiple intramedullary nailing of proximal phalangeal fractures of hand
Directory of Open Access Journals (Sweden)
Patankar Hemant
2008-01-01
Full Text Available Background: Proximal phalangeal fractures are commonly encountered fractures in the hand. Majority of them are stable and can be treated by non-operative means. However, unstable fractures i.e. those with shortening, displacement, angulation, rotational deformity or segmental fractures need surgical intervention. This prospective study was undertaken to evaluate the functional outcome after surgical stabilization of these fractures with joint-sparing multiple intramedullary nailing technique. Materials and Methods: Thirty-five patients with 35 isolated unstable proximal phalangeal shaft fractures of hand were managed by surgical stabilization with multiple intramedullary nailing technique. Fractures of the thumb were excluded. All the patients were followed up for a minimum of six months. They were assessed radiologically and clinically. The clinical evaluation was based on two criteria. 1. total active range of motion for digital functional assessment as suggested by the American Society for Surgery of Hand and 2. grip strength. Results: All the patients showed radiological union at six weeks. The overall results were excellent in all the patients. Adventitious bursitis was observed at the point of insertion of nails in one patient. Conclusion: Joint-sparing multiple intramedullary nailing of unstable proximal phalangeal fractures of hand provides satisfactory results with good functional outcome and fewer complications.
International Nuclear Information System (INIS)
Krappe, H.J.
1989-01-01
The contribution of inelastic excitations to radial and tangential friction form-factors in heavy-ion collisions is investigated in the frame-work of perturbation theory. The dependence of the form factors on the essential geometrical and level-density parameters of the scattering system is exhibited in a rather closed form. The conditions for the existence of time-local friction coefficients are discussed. Results are compared to form factors from other models, in particular the transfer-related proximity friction. For the radial friction coefficient the inelastic excitation mechanism seems to be the dominant contribution in peripheral collisions. (orig.)
Webb, Lawrence X
2002-01-01
Fractures of the proximal femur include fractures of the head, neck, intertrochanteric, and subtrochanteric regions. Head fractures commonly accompany dislocations. Neck fractures and intertrochanteric fractures occur with greatest frequency in elderly patients with a low bone mineral density and are produced by low-energy mechanisms. Subtrochanteric fractures occur in a predominantly strong cortical osseous region which is exposed to large compressive stresses. Implants used to address these fractures must be able to accommodate significant loads while the fractures consolidate. Complications secondary to these injuries produce significant morbidity and include infection, nonunion, malunion, decubitus ulcers, fat emboli, deep venous thrombosis, pulmonary embolus, pneumonia, myocardial infarction, stroke, and death.
International Nuclear Information System (INIS)
Kawano, Takao; Ebihara, Hiroshi
1990-01-01
The disintegration rates of 60 Co as a point source (<2 mm in diameter on a thin plastic disc) and volume sources (10-100 mL solutions in a polyethylene bottle) are determined by the sum-peak method. The sum-peak formula gives the exact disintegration rate for the point source at different positions from the detector. However, increasing the volume of the solution results in enlarged deviations from the true disintegration rate. Extended sources must be treated as an amalgam of many point sources. (author)
Wang Hao; Gao Wen; Huang Qingming; Zhao Feng
2010-01-01
Similarity measures based on correlation have been used extensively for matching tasks. However, traditional correlation-based image matching methods are sensitive to rotation and scale changes. This paper presents a fast correlation-based method for matching two images with large rotation and significant scale changes. Multiscale oriented corner correlation (MOCC) is used to evaluate the degree of similarity between the feature points. The method is rotation invariant and capable of matchin...
Czech Academy of Sciences Publication Activity Database
Kopačka, Ján; Gabriel, Dušan; Plešek, Jiří; Ulbin, M.
2016-01-01
Roč. 105, č. 11 (2016), s. 803-833 ISSN 0029-5981 R&D Projects: GA ČR(CZ) GAP101/12/2315; GA MŠk(CZ) ME10114 Institutional support: RVO:61388998 Keywords : closest point projection * local contact search * quadratic elements * Newtons methods * geometric iteration methods * simplex method Subject RIV: JC - Computer Hardware ; Software Impact factor: 2.162, year: 2016 http://onlinelibrary.wiley.com/doi/10.1002/nme.4994/abstract
Comparison of a point-of-care analyser for the determination of HbA1c with HPLC method
Grant, D.A.; Dunseath, G.J.; Churm, R.; Luzio, S.D.
2017-01-01
Aims: As the use of Point of Care Testing (POCT) devices for measurement of glycated haemoglobin (HbA1c) increases, it is imperative to determine how their performance compares to laboratory methods. This study compared the performance of the automated Quo-Test POCT device (EKF Diagnostics), which uses boronate fluorescence quenching technology, with a laboratory based High Performance Liquid Chromatography (HPLC) method (Biorad D10) for measurement of HbA1c. Methods: Whole blood EDTA samples...
Tan, W.F.; Lu, S.J.; Liu, F.; Feng, X.H.; He, J.Z.; Koopal, L.K.
2008-01-01
Manganese (Mn) oxides are important components in soils and sediments. Points-of-zero charge (PZC) of three synthetic Mn oxides (birnessite, cryptomelane, and todorokite) were determined by using three classical techniques (potentiometric titration or PT, rapid PT or R-PT, and salt titration or ST)
Echosonography with proximity sensors
International Nuclear Information System (INIS)
Thaisiam, W; Laithong, T; Meekhun, S; Chaiwathyothin, N; Thanlarp, P; Danworaphong, S
2013-01-01
We propose the use of a commercial ultrasonic proximity sensor kit for profiling an altitude-varying surface by employing echosonography. The proximity sensor kit, two identical transducers together with its dedicated operating circuit, is used as a profiler for the construction of an image. Ultrasonic pulses are emitted from one of the transducers and received by the other. The time duration between the pulses allows us to determine the traveling distance of each pulse. In the experiment, the circuit is used with the addition of two copper wires for directing the outgoing and incoming signals to an oscilloscope. The time of flight of ultrasonic pulses can thus be determined. Square grids of 5 × 5 cm 2 are made from fishing lines, forming pixels in the image. The grids are designed to hold the detection unit in place, about 30 cm above a flat surface. The surface to be imaged is constructed to be height varying and placed on the flat surface underneath the grids. Our result shows that an image of the profiled surface can be created by varying the location of the detection unit along the grid. We also investigate the deviation in relation to the time of flight of the ultrasonic pulse. Such an experiment should be valuable for conveying the concept of ultrasonic imaging to physical and medical science undergraduate students. Due to its simplicity, the setup could be made in any undergraduate laboratory relatively inexpensively and it requires no complex parts. The results illustrate the concept of echosonography. (paper)
Point Estimation Method of Electromagnetic Flowmeters Life Based on Randomly Censored Failure Data
Directory of Open Access Journals (Sweden)
Zhen Zhou
2014-08-01
Full Text Available This paper analyzes the characteristics of the enterprise after-sale service records for field failure data, and summarizes the types of field data. Maximum likelihood estimation method and the least squares method are presented for the complexity and difficulty of field failure data processing, and Monte Carlo simulation method is proposed. Monte Carlo simulation, the relatively simple calculation method, is an effective method, whose result is closed to that of the other two methods. Through the after-sale service records analysis of a specific electromagnetic flowmeter enterprises, this paper illustrates the effectiveness of field failure data processing methods.
Radar Image Simulation: Validation of the Point Scattering Method. Volume 2
1977-09-01
the Engineer Topographic Labor - atory (ETL), Fort Belvoir, Virginia. This Radar Simulation Study was performed to validate the point tcattering radar...e.n For radar, the number of Independent samples in a given re.-olution cell is given by 5 ,: N L 2w (16) L Acoso where: 0 Radar incidence angle; w
Shi, Yixun
2009-01-01
Based on a sequence of points and a particular linear transformation generalized from this sequence, two recent papers (E. Mauch and Y. Shi, "Using a sequence of number pairs as an example in teaching mathematics". Math. Comput. Educ., 39 (2005), pp. 198-205; Y. Shi, "Case study projects for college mathematics courses based on a particular…
Vannecke, T P W; Lampens, D R A; Ekama, G A; Volcke, E I P
2015-01-01
Simple titration methods certainly deserve consideration for on-site routine monitoring of volatile fatty acid (VFA) concentration and alkalinity during anaerobic digestion (AD), because of their simplicity, speed and cost-effectiveness. In this study, the 5 and 8 pH point titration methods for measuring the VFA concentration and carbonate system alkalinity (H2CO3*-alkalinity) were assessed and compared. For this purpose, synthetic solutions with known H2CO3*-alkalinity and VFA concentration as well as samples from anaerobic digesters treating three different kind of solid wastes were analysed. The results of these two related titration methods were verified with photometric and high-pressure liquid chromatography measurements. It was shown that photometric measurements lead to overestimations of the VFA concentration in the case of coloured samples. In contrast, the 5 pH point titration method provides an accurate estimation of the VFA concentration, clearly corresponding with the true value. Concerning the H2CO3*-alkalinity, the most accurate and precise estimations, showing very similar results for repeated measurements, were obtained using the 8 pH point titration. Overall, it was concluded that the 5 pH point titration method is the preferred method for the practical monitoring of AD of solid wastes due to its robustness, cost efficiency and user-friendliness.
Doursounian, L; Grimberg, J; Cazeau, C; Touzard, R C
1996-01-01
The authors describe a new internal fixation device, and report on 17 proximal humeral fractures managed with this technique. The fracture patterns, using Neer's classification were: 9 displaced three-part fractures, 4 displaced four-part fractures and 4 interior fracture dislocations (mean age of the patients: 70 years). The device is a two-part titanium device. The humeral component has a long vertical stem cemented in the humeral shaft; and a short proximal portion set at an angle of 135 degrees on the stem, with a neck and a Morse taper cone. The other part is a crown-shaped stapple, whose base is a perforated disk with a central Morse taper socket. The rim of the crown has five prongs which, together with the central socket, are impacted in the cancellous bone of the humeral head. The taper of the humeral component is inserted into the central socket of the stapple to provide fracture fixation. Tuberosities are reattached to the shaft with non absorbable sutures. Mean follow-up was 29 months. The global ratings were as follows: 4 excellent results, 6 good results, 4 fair results, 3 poor results. Mean active forward flexion: 100 degrees, and mean active external rotation 22 degrees. After exclusion of the 4 fracture-dislocations, the global rating became: 4 excellent results, 5 good results, 3 fair results, 1 poor result. Mean active forward flexion: 110 degrees and mean active external rotation: 31.5 degrees. There were no case of avascular necrosis in 13 patients. Complications requiring surgery occurred in one case: an upper protrusion of the stapple which required replacement of the stapple by a prosthetic humeral head. Other complications included: 2 asymptomatic partial protrusions of the stapple, 2 complete and two partial avascular necrosis in fracture-dislocations. Except for the fracture-dislocations our device confers several major benefits. The humeral head is preserved. Typical problems associated with joint replacement (dislocations, loosening
Vasileios Psychas, Dimitrios; Delikaraoglou, Demitris
2016-04-01
The future Global Navigation Satellite Systems (GNSS), including modernized GPS, GLONASS, Galileo and BeiDou, offer three or more signal carriers for civilian use and much more redundant observables. The additional frequencies can significantly improve the capabilities of the traditional geodetic techniques based on GPS signals at two frequencies, especially with regard to the availability, accuracy, interoperability and integrity of high-precision GNSS applications. Furthermore, highly redundant measurements can allow for robust simultaneous estimation of static or mobile user states including more parameters such as real-time tropospheric biases and more reliable ambiguity resolution estimates. This paper presents an investigation and analysis of accuracy improvement techniques in the Precise Point Positioning (PPP) method using signals from the fully operational (GPS and GLONASS), as well as the emerging (Galileo and BeiDou) GNSS systems. The main aim was to determine the improvement in both the positioning accuracy achieved and the time convergence it takes to achieve geodetic-level (10 cm or less) accuracy. To this end, freely available observation data from the recent Multi-GNSS Experiment (MGEX) of the International GNSS Service, as well as the open source program RTKLIB were used. Following a brief background of the PPP technique and the scope of MGEX, the paper outlines the various observational scenarios that were used in order to test various data processing aspects of PPP solutions with multi-frequency, multi-constellation GNSS systems. Results from the processing of multi-GNSS observation data from selected permanent MGEX stations are presented and useful conclusions and recommendations for further research are drawn. As shown, data fusion from GPS, GLONASS, Galileo and BeiDou systems is becoming increasingly significant nowadays resulting in a position accuracy increase (mostly in the less favorable East direction) and a large reduction of convergence
Directory of Open Access Journals (Sweden)
Han Lin Shang
2011-07-01
Full Text Available Using the age- and sex-specific data of 14 developed countries, we compare the point and interval forecast accuracy and bias of ten principal component methods for forecasting mortality rates and life expectancy. The ten methods are variants and extensions of the Lee-Carter method. Based on one-step forecast errors, the weighted Hyndman-Ullah method provides the most accurate point forecasts of mortality rates and the Lee-Miller method is the least biased. For the accuracy and bias of life expectancy, the weighted Hyndman-Ullah method performs the best for female mortality and the Lee-Miller method for male mortality. While all methods underestimate variability in mortality rates, the more complex Hyndman-Ullah methods are more accurate than the simpler methods. The weighted Hyndman-Ullah method provides the most accurate interval forecasts for mortality rates, while the robust Hyndman-Ullah method provides the best interval forecast accuracy for life expectancy.
Directory of Open Access Journals (Sweden)
L. Hoegner
2016-06-01
Full Text Available This paper discusses the automatic coregistration and fusion of 3d point clouds generated from aerial image sequences and corresponding thermal infrared (TIR images. Both RGB and TIR images have been taken from a RPAS platform with a predefined flight path where every RGB image has a corresponding TIR image taken from the same position and with the same orientation with respect to the accuracy of the RPAS system and the inertial measurement unit. To remove remaining differences in the exterior orientation, different strategies for coregistering RGB and TIR images are discussed: (i coregistration based on 2D line segments for every single TIR image and the corresponding RGB image. This method implies a mainly planar scene to avoid mismatches; (ii coregistration of both the dense 3D point clouds from RGB images and from TIR images by coregistering 2D image projections of both point clouds; (iii coregistration based on 2D line segments in every single TIR image and 3D line segments extracted from intersections of planes fitted in the segmented dense 3D point cloud; (iv coregistration of both the dense 3D point clouds from RGB images and from TIR images using both ICP and an adapted version based on corresponding segmented planes; (v coregistration of both image sets based on point features. The quality is measured by comparing the differences of the back projection of homologous points in both corrected RGB and TIR images.
DEFF Research Database (Denmark)
Cordua, Knud Skou; Hansen, Thomas Mejer; Lange, Katrine
In order to move beyond simplified covariance based a priori models, which are typically used for inverse problems, more complex multiple-point-based a priori models have to be considered. By means of marginal probability distributions ‘learned’ from a training image, sequential simulation has...... proven to be an efficient way of obtaining multiple realizations that honor the same multiple-point statistics as the training image. The frequency matching method provides an alternative way of formulating multiple-point-based a priori models. In this strategy the pattern frequency distributions (i.......e. marginals) of the training image and a subsurface model are matched in order to obtain a solution with the same multiple-point statistics as the training image. Sequential Gibbs sampling is a simulation strategy that provides an efficient way of applying sequential simulation based algorithms as a priori...
Analysis method of beam pointing stability based on optical transmission matrix
Wang, Chuanchuan; Huang, PingXian; Li, Xiaotong; Cen, Zhaofen
2016-10-01
Quite a lot of factors will make effects on beam pointing stability of an optical system, Among them, the element tolerance is one of the most important and common factors. In some large laser systems, it will make final micro beams spot on the image plane deviate obviously. So it is essential for us to achieve effective and accurate analysis theoretically on element tolerance. In order to make the analysis of beam pointing stability convenient and theoretical, we consider transmission of a single chief ray rather than beams approximately to stand for the whole spot deviation. According to optical matrix, we also simplify this complex process of light transmission to multiplication of many matrices. So that we can set up element tolerance model, namely having mathematical expression to illustrate spot deviation in an optical system with element tolerance. In this way, we can realize quantitative analysis of beam pointing stability theoretically. In second half of the paper, we design an experiment to get the spot deviation in a multipass optical system caused by element tolerance, then we adjust the tolerance step by step and compare the results with the datum got from tolerance model, finally prove the correction of tolerance model successfully.
Study on Scattered Data Points Interpolation Method Based on Multi-line Structured Light
International Nuclear Information System (INIS)
Fan, J Y; Wang, F G; W, Y; Zhang, Y L
2006-01-01
Aiming at the range image obtained through multi-line structured light, a regional interpolation method is put forward in this paper. This method divides interpolation into two parts according to the memory format of the scattered data, one is interpolation of the data on the stripes, and the other is interpolation of data between the stripes. Trend interpolation method is applied to the data on the stripes, and Gauss wavelet interpolation method is applied to the data between the stripes. Experiments prove regional interpolation method feasible and practical, and it also promotes the speed and precision
Solving point reactor kinetic equations by time step-size adaptable numerical methods
International Nuclear Information System (INIS)
Liao Chaqing
2007-01-01
Based on the analysis of effects of time step-size on numerical solutions, this paper showed the necessity of step-size adaptation. Based on the relationship between error and step-size, two-step adaptation methods for solving initial value problems (IVPs) were introduced. They are Two-Step Method and Embedded Runge-Kutta Method. PRKEs were solved by implicit Euler method with step-sizes optimized by using Two-Step Method. It was observed that the control error has important influence on the step-size and the accuracy of solutions. With suitable control errors, the solutions of PRKEs computed by the above mentioned method are accurate reasonably. The accuracy and usage of MATLAB built-in ODE solvers ode23 and ode45, both of which adopt Runge-Kutta-Fehlberg method, were also studied and discussed. (authors)
Proximity detection system underground
Energy Technology Data Exchange (ETDEWEB)
Denis Kent [Mine Site Technologies (Australia)
2008-04-15
Mine Site Technologies (MST) with the support ACARP and Xstrata Coal NSW, as well as assistance from Centennial Coal, has developed a Proximity Detection System to proof of concept stage as per plan. The basic aim of the project was to develop a system to reduce the risk of the people coming into contact with vehicles in an uncontrolled manner (i.e. being 'run over'). The potential to extend the developed technology into other areas, such as controls for vehicle-vehicle collisions and restricting access of vehicle or people into certain zones (e.g. non FLP vehicles into Hazardous Zones/ERZ) was also assessed. The project leveraged off MST's existing Intellectual Property and experience gained with our ImPact TRACKER tagging technology, allowing the development to be fast tracked. The basic concept developed uses active RFID Tags worn by miners underground to be detected by vehicle mounted Readers. These Readers in turn provide outputs that can be used to alert a driver (e.g. by light and/or audible alarm) that a person (Tag) approaching within their vicinity. The prototype/test kit developed proved the concept and technology, the four main components being: Active RFID Tags to send out signals for detection by vehicle mounted receivers; Receiver electronics to detect RFID Tags approaching within the vicinity of the unit to create a long range detection system (60 m to 120 m); A transmitting/exciter device to enable inner detection zone (within 5 m to 20 m); and A software/hardware device to process & log incoming Tags reads and create certain outputs. Tests undertaken in the laboratory and at a number of mine sites, confirmed the technology path taken could form the basis of a reliable Proximity Detection/Alert System.
Zhang, Yuncheng
The mathematical pointing model is establishment of the visual tracking theodolite for satellites in two kinds of observation methods at Yunnan Observatory, which is related to the digitalisation reform and the optical-electronic technique reform, is introduced respectively in this paper.
DEFF Research Database (Denmark)
Sokoler, Leo Emil; Frison, Gianluca; Edlund, Kristian
2013-01-01
In this paper, we develop an efficient interior-point method (IPM) for the linear programs arising in economic model predictive control of linear systems. The novelty of our algorithm is that it combines a homogeneous and self-dual model, and a specialized Riccati iteration procedure. We test...
DEFF Research Database (Denmark)
Khoshfetrat Pakazad, Sina; Hansson, Anders; Andersen, Martin S.
2017-01-01
In this paper, we propose a distributed algorithm for solving coupled problems with chordal sparsity or an inherent tree structure which relies on primal–dual interior-point methods. We achieve this by distributing the computations at each iteration, using message-passing. In comparison to existi...
Derado, Josip; Garner, Mary L.; Tran, Thu-Hang
2016-01-01
Students' abilities and interests vary dramatically in the college mathematics classroom. How do we teach all of these students effectively? In this paper, we present the Point Reward System (PRS), a new method of assessment that addresses this problem. We designed the PRS with three main goals in mind: to increase the retention rates; to keep all…
Energy Technology Data Exchange (ETDEWEB)
Baldwin, J.M. [Sandia National Labs., Livermore, CA (United States). Integrated Manufacturing Systems
1996-04-01
The Dimensional Inspection Techniques Specification (DITS) Project is an ongoing effort to produce tools and guidelines for optimum sampling and data analysis of machined parts, when measured using point-sample methods of dimensional metrology. This report is a compilation of results of a literature survey, conducted in support of the DITS. Over 160 citations are included, with author abstracts where available.
International Nuclear Information System (INIS)
Hagag, O.M.; Nafee, S.S.; Naeem, M.A.; El Khatib, A.M.
2011-01-01
The direct mathematical method has been developed for calculating the total efficiency of many cylindrical gamma detectors, especially HPGe and NaI detector. Different source geometries are considered (point and disk). Further into account is taken of gamma attenuation from detector window or any interfacing absorbing layer. Results are compared with published experimental data to study the validity of the direct mathematical method to calculate total efficiency for any gamma detector size.
Musaiger, Abdulrahman O; D'Souza, Reshma
2008-03-01
This study analyzed eight cooked species of fish and one species of shrimps (grilled, curried, fried and cooked in rice) commonly consumed in Bahrain for their proximate, mineral and heavy metal content. The results reveled that the protein content was in the range of 22.8-29.2 g/100 g, while the fat content was between 2.9-11.9 g/100 g. The energy content was the highest in the fried Scomberomorus commerson being 894.2 KJ/100 g, followed by Scomberomorus commerson cooked in rice (867.3 KJ/100 g). The samples also had a considerable content of sodium ranging from 120-600 mg/100 g, potassium (310-560 mg/100 g) phosphorous (200-330 mg/100 g), magnesium (26-54 mg/100 g) and zinc (0.4-2.0 mg/100 g), while the other minerals were present to a lower extent. Lead was present to an extent of 0.30 microg/g in the grilled Plectorhinchus sordidus while Lethrinus nebulosus cooked in rice contained 0.35 microg/g of mercury. Cadmium levels were constant at cooking fish and shrimps have an effect on their nutrient composition and heavy metal content hence, it is advisable to avoid excessive frying and use minimal salt. In addition, consuming of a wide variety of species of fish and alternating between the various modes of cooking is the best approach to achieve improved dietary habits, minimizing mercury exposure and increasing omega-3 fatty acid intake.
Random fixed point equations and inverse problems using "collage method" for contraction mappings
Kunze, H. E.; La Torre, D.; Vrscay, E. R.
2007-10-01
In this paper we are interested in the direct and inverse problems for the following class of random fixed point equations T(w,x(w))=x(w) where is a given operator, [Omega] is a probability space and X is a Polish metric space. The inverse problem is solved by recourse to the collage theorem for contractive maps. We then consider two applications: (i) random integral equations, and (ii) random iterated function systems with greyscale maps (RIFSM), for which noise is added to the classical IFSM.
Czech Academy of Sciences Publication Activity Database
Pastorek, Lukáš; Sobol, Margaryta; Hozák, Pavel
2016-01-01
Roč. 146, č. 4 (2016), s. 391-406 ISSN 0948-6143 R&D Projects: GA TA ČR(CZ) TE01020118; GA ČR GA15-08738S; GA MŠk(CZ) ED1.1.00/02.0109; GA MŠk(CZ) LM2015062 Grant - others:Human Frontier Science Program(FR) RGP0017/2013 Institutional support: RVO:68378050 Keywords : Colocalization * Quantitative analysis * Pointed patterns * Transmission electron microscopy * Manders' coefficients * Immunohistochemistry Subject RIV: EB - Genetics ; Molecular Biology Impact factor: 2.553, year: 2016
Direct measurement of surface-state conductance by microscopic four-point probe method
DEFF Research Database (Denmark)
Hasegawa, S.; Shiraki, I.; Tanikawa, T.
2002-01-01
For in situ measurements of local electrical conductivity of well defined crystal surfaces in ultrahigh vacuum, we have developed microscopic four-point probes with a probe spacing of several micrometres, installed in a scanning-electron - microscope/electron-diffraction chamber. The probe...... is precisely positioned on targeted areas of the sample surface by using piezoactuators. This apparatus enables conductivity measurement with extremely high surface sensitivity, resulting in direct access to surface-state conductivity of the surface superstructures, and clarifying the influence of atomic steps...
Electrostatics of a Point Charge between Intersecting Planes: Exact Solutions and Method of Images
Mei, W. N.; Holloway, A.
2005-01-01
In this work, the authors present a commonly used example in electrostatics that could be solved exactly in a conventional manner, yet expressed in a compact form, and simultaneously work out special cases using the method of images. Then, by plotting the potentials and electric fields obtained from these two methods, the authors demonstrate that…
Creating the Data Basis for Environmental Evaluations with the Oil Point Method
DEFF Research Database (Denmark)
Bey, Niki; Lenau, Torben Anker
1999-01-01
In order to support designers in decision-making, some methods have been developed which are based on environmental indicators. These methods, however, can only be used, if indicators for the specific product concept exist and are readily available.Based on this situation, the authors developed a...
Calculation of condition indices for road structures using a deduct points method
CSIR Research Space (South Africa)
Roux, MP
2016-07-01
Full Text Available The DER-rating method has been adopted as the national standard for the rating of road structures. This method is defects-based and involves the rating of defects on the various inspections items of road structures in terms of degree (D), extent (E...
Development of safe mechanism for surgical robots using equilibrium point control method.
Park, Shinsuk; Lim, Hokjin; Kim, Byeong-sang; Song, Jae-bok
2006-01-01
This paper introduces a novel mechanism for surgical robotic systems to generate human arm-like compliant motion. The mechanism is based on the idea of the equilibrium point control hypothesis which claims that multi-joint limb movements are achieved by shifting the limbs' equilibrium positions defined by neuromuscular activity. The equilibrium point control can be implemented on a robot manipulator by installing two actuators at each joint of the manipulator, one to control the joint position, and the other to control the joint stiffness. This double-actuator mechanism allows us to arbitrarily manipulate the stiffness (or impedance) of a robotic manipulator as well as its position. Also, the force at the end-effector can be estimated based on joint stiffness and joint angle changes without using force transducers. A two-link manipulator and a three-link manipulator with the double-actuator units have been developed, and experiments and simulation results show the potential of the proposed approach. By creating the human arm-like behavior, this mechanism can improve the performance of robot manipulators to execute stable and safe movement in surgical environments by using a simple control scheme.
Directory of Open Access Journals (Sweden)
Dominique Placko
2016-10-01
Full Text Available The distributed point source method, or DPSM, developed in the last decade has been used for solving various engineering problems—such as elastic and electromagnetic wave propagation, electrostatic, and fluid flow problems. Based on a semi-analytical formulation, the DPSM solution is generally built by superimposing the point source solutions or Green’s functions. However, the DPSM solution can be also obtained by superimposing elemental solutions of volume sources having some source density called the equivalent source density (ESD. In earlier works mostly point sources were used. In this paper the DPSM formulation is modified to introduce a new kind of ESD, replacing the classical single point source by a family of point sources that are referred to as quantum sources. The proposed formulation with these quantum sources do not change the dimension of the global matrix to be inverted to solve the problem when compared with the classical point source-based DPSM formulation. To assess the performance of this new formulation, the ultrasonic field generated by a circular planer transducer was compared with the classical DPSM formulation and analytical solution. The results show a significant improvement in the near field computation.
PROXIMITY MANAGEMENT IN CRISIS CONDITIONS
Directory of Open Access Journals (Sweden)
Ion Dorin BUMBENECI
2010-01-01
Full Text Available The purpose of this study is to evaluate the level of assimilation for the terms "Proximity Management" and "Proximity Manager", both in the specialized literature and in practice. The study has two parts: the theoretical research of the two terms, and an evaluation of the use of Proximity management in 32 companies in Gorj, Romania. The object of the evaluation resides in 27 companies with less than 50 employees and 5 companies with more than 50 employees.
Industrial Computed Tomography using Proximal Algorithm
Zang, Guangming
2016-04-14
In this thesis, we present ProxiSART, a flexible proximal framework for robust 3D cone beam tomographic reconstruction based on the Simultaneous Algebraic Reconstruction Technique (SART). We derive the proximal operator for the SART algorithm and use it for minimizing the data term in a proximal algorithm. We show the flexibility of the framework by plugging in different powerful regularizers, and show its robustness in achieving better reconstruction results in the presence of noise and using fewer projections. We compare our framework to state-of-the-art methods and existing popular software tomography reconstruction packages, on both synthetic and real datasets, and show superior reconstruction quality, especially from noisy data and a small number of projections.
PROXIMATE AND ELEMENTAL COMPOSITION OF WHITE GRUBS
African Journals Online (AJOL)
DR. AMINU
This study determined the proximate and mineral element composition of whole white grubs using standard methods of analysis. ... and 12.75 ± 3.65% respectively. Mineral contents of white grub in terms of relative concentration .... of intracellular Ca, bone mineralization, blood coagulation, and plasma membrane potential ...
Phytochemical Screening, Proximate and Mineral Composition of ...
African Journals Online (AJOL)
Leaves of sweet potato (Ipomoea batatas) grown in Tepi area was studied for their class of phytochemicals, mineral and proximate composition using standard analytical methods. The phytochemical screening revealed the presence of alkaloids, flavonoid, terpenoids, saponins, quinones, phenol, tannins, amino acid and ...
Disability occurrence and proximity to death
Klijs, Bart; Mackenbach, Johan P.; Kunst, Anton E.
2010-01-01
Purpose. This paper aims to assess whether disability occurrence is related more strongly to proximity to death than to age. Method. Self reported disability and vital status were available from six annual waves and a subsequent 12-year mortality follow-up of the Dutch GLOBE longitudinal study.
Directory of Open Access Journals (Sweden)
Ana Tobar
Full Text Available BACKGROUND: Obesity is associated with glomerular hyperfiltration, increased proximal tubular sodium reabsorption, glomerular enlargement and renal hypertrophy. A single experimental study reported an increased glomerular urinary space in obese dogs. Whether proximal tubular volume is increased in obese subjects and whether their glomerular and tubular urinary spaces are enlarged is unknown. OBJECTIVE: To determine whether proximal tubules and glomerular and tubular urinary space are enlarged in obese subjects with proteinuria and glomerular hyperfiltration. METHODS: Kidney biopsies from 11 non-diabetic obese with proteinuria and 14 non-diabetic lean patients with a creatinine clearance above 50 ml/min and with mild or no interstitial fibrosis were retrospectively analyzed using morphometric methods. The cross-sectional area of the proximal tubular epithelium and lumen, the volume of the glomerular tuft and of Bowman's space and the nuclei number per tubular profile were estimated. RESULTS: Creatinine clearance was higher in the obese than in the lean group (P=0.03. Proteinuria was similarly increased in both groups. Compared to the lean group, the obese group displayed a 104% higher glomerular tuft volume (P=0.001, a 94% higher Bowman's space volume (P=0.003, a 33% higher cross-sectional area of the proximal tubular epithelium (P=0.02 and a 54% higher cross-sectional area of the proximal tubular lumen (P=0.01. The nuclei number per proximal tubular profile was similar in both groups, suggesting that the increase in tubular volume is due to hypertrophy and not to hyperplasia. CONCLUSIONS: Obesity-related glomerular hyperfiltration is associated with proximal tubular epithelial hypertrophy and increased glomerular and tubular urinary space volume in subjects with proteinuria. The expanded glomerular and urinary space is probably a direct consequence of glomerular hyperfiltration. These effects may be involved in the pathogenesis of obesity
Point-splitting as a regularization method for λφ4-type vertices: Abelian case
International Nuclear Information System (INIS)
Moura-Melo, Winder A.; Helayel Neto, J.A.
1998-11-01
We obtained regularized Abelian Lagrangians containing λφ 4 -type vertices by means of a suitable point-splitting procedure. The calculation is developed in details for a general Lagrangian, whose fields (gauge and matter ones) satisfy certain conditions. We illustrates our results by considering some special cases, such as the Abelian Higgs, the (ψ-barψ) 2 and the Avdeev-Chizov (real rank-2 antisymmetric tensor as matter fields) models. We also discuss some features of the obtained Lagrangian such as the regularity and non-locality of its new integrating terms. Moreover, the resolution of the Abelian case may teach us some useful technical aspects when dealing with the non-Abelian one. (author)
Directory of Open Access Journals (Sweden)
Phayap Katchang
2010-01-01
Full Text Available The purpose of this paper is to investigate the problem of finding a common element of the set of solutions for mixed equilibrium problems, the set of solutions of the variational inclusions with set-valued maximal monotone mappings and inverse-strongly monotone mappings, and the set of fixed points of a family of finitely nonexpansive mappings in the setting of Hilbert spaces. We propose a new iterative scheme for finding the common element of the above three sets. Our results improve and extend the corresponding results of the works by Zhang et al. (2008, Peng et al. (2008, Peng and Yao (2009, as well as Plubtieng and Sriprad (2009 and some well-known results in the literature.
International Nuclear Information System (INIS)
Zhang, Zhenjiu; Hu, Hong
2013-01-01
The linear and rotary axes are fundamental parts of multi-axis machine tools. The geometric error components of the axes must be measured for motion error compensation to improve the accuracy of the machine tools. In this paper, a simple method named the three point method is proposed to measure the geometric error of the linear and rotary axes of the machine tools using a laser tracker. A sequential multilateration method, where uncertainty is verified through simulation, is applied to measure the 3D coordinates. Three noncollinear points fixed on the stage of each axis are selected. The coordinates of these points are simultaneously measured using a laser tracker to obtain their volumetric errors by comparing these coordinates with ideal values. Numerous equations can be established using the geometric error models of each axis. The geometric error components can be obtained by solving these equations. The validity of the proposed method is verified through a series of experiments. The results indicate that the proposed method can measure the geometric error of the axes to compensate for the errors in multi-axis machine tools.
A new method to obtain ground control points based on SRTM data
Wang, Pu; An, Wei; Deng, Xin-pu; Zhang, Xi
2013-09-01
The GCPs are widely used in remote sense image registration and geometric correction. Normally, the DRG and DOM are the major data source from which GCPs are extracted. But the high accuracy products of DRG and DOM are usually costly to obtain. Some of the production are free, yet without any guarantee. In order to balance the cost and the accuracy, the paper proposes a method of extracting the GCPs from SRTM data. The method consist of artificial assistance, binarization, data resample and reshape. With artificial assistance to find out which part of SRTM data could be used as GCPs, such as the islands or sharp coast line. By utilizing binarization algorithm , the shape information of the region is obtained while other information is excluded. Then the binary data is resampled to a suitable resolution required by specific application. At last, the data would be reshaped according to satellite imaging type to obtain the GCPs which could be used. There are three advantages of the method proposed in the paper. Firstly, the method is easy for implementation. Unlike the DRG data or DOM data that charges a lot, the SRTM data is totally free to access without any constricts. Secondly, the SRTM has a high accuracy about 90m that is promised by its producer, so the GCPs got from it can also obtain a high quality. Finally, given the SRTM data covers nearly all the land surface of earth between latitude -60° and latitude +60°, the GCPs which are produced by the method can cover most important regions of the world. The method which obtain GCPs from SRTM data can be used in meteorological satellite image or some situation alike, which have a relative low requirement about the accuracy. Through plenty of simulation test, the method is proved convenient and effective.
Cai, Y
2003-01-01
We have investigate a method of measuring the complete lattice functions including the coupling parameters at any azimuthal position in a periodic an symplectic system. In particular, the method is applied to measure the lattice functions at the interaction point where the beams collide. It has been demonstrate that a complete set of lattice functions can be accurately measured with two adjacent beam position monitors and the known transformation matrix between them. As a by-product, the method also automatically measures the complete one-turn matrix.
Directory of Open Access Journals (Sweden)
Firmino Cardoso Pereira
2015-05-01
Full Text Available This article evaluates the effectiveness of the methods fixed area plots (AP and point-centered quarters (PQ to describe a woody community from typical Cerrado. We used 10 APs and 140 PQs, distributed into 5 transects. We compared the density of individuals, floristic composition, richness of families, genera, and species, and vertical and horizontal vegetation structure. The AP method was more effective to sample the density of individuals. The PQ method was more effective for characterizing species richness, vertical vegetation structure, and record of species with low abundance. The composition of families, genera, and species, as well as the species with higher importance value index in the community were similarly determined by the 2 methods. The methods compared are complementary. We suggest that the use of AP, PQ, or both methods may be aimed at the vegetation parameter under study.
Xu, Jun; Dang, Chao; Kong, Fan
2017-10-01
This paper presents a new method for efficient structural reliability analysis. In this method, a rotational quasi-symmetric point method (RQ-SPM) is proposed for evaluating the fractional moments of the performance function. Then, the derivation of the performance function's probability density function (PDF) is carried out based on the maximum entropy method in which constraints are specified in terms of fractional moments. In this regard, the probability of failure can be obtained by a simple integral over the performance function's PDF. Six examples, including a finite element-based reliability analysis and a dynamic system with strong nonlinearity, are used to illustrate the efficacy of the proposed method. All the computed results are compared with those by Monte Carlo simulation (MCS). It is found that the proposed method can provide very accurate results with low computational effort.
Watanabe, Takashi; Yoshida, Toshiya; Ohniwa, Katsumi
This paper discusses a new control strategy for photovoltaic power generation systems with consideration of dynamic characteristics of the photovoltaic cells. The controller estimates internal currents of an equivalent circuit for the cells. This estimated, or the virtual current and the actual voltage of the cells are fed to a conventional Maximum-Power-Point-Tracking (MPPT) controller. Consequently, this MPPT controller still tracks the optimum point even though it is so designed that the seeking speed of the operating point is extremely high. This system may suit for applications, which are installed in rapidly changeable insolation and temperature-conditions e.g. automobiles, trains, and airplanes. The proposed method is verified by experiment with a combination of this estimating function and the modified Boehringer's MPPT algorithm.
Ito, Yukihiro; Natsu, Wataru; Kunieda, Masanori
This paper describes the influences of anisotropy found in the elastic modulus of monocrystalline silicon wafers on the measurement accuracy of the three-point-support inverting method which can measure the warp and thickness of thin large panels simultaneously. Deflection due to gravity depends on the crystal orientation relative to the positions of the three-point-supports. Thus the deviation of actual crystal orientation from the direction indicated by the notch fabricated on the wafer causes measurement errors. Numerical analysis of the deflection confirmed that the uncertainty of thickness measurement increases from 0.168µm to 0.524µm due to this measurement error. In addition, experimental results showed that the rotation of crystal orientation relative to the three-point-supports is effective for preventing wafer vibration excited by disturbance vibration because the resonance frequency of wafers can be changed. Thus, surface shape measurement accuracy was improved by preventing resonant vibration during measurement.
DEFF Research Database (Denmark)
Barfod, Adrian; Straubhaar, Julien; Høyer, Anne-Sophie
2017-01-01
the incorporation of elaborate datasets and provides a framework for stochastic hydrostratigraphic modelling. This paper focuses on comparing three MPS methods: snesim, DS and iqsim. The MPS methods are tested and compared on a real-world hydrogeophysical survey from Kasted in Denmark, which covers an area of 45 km......2. The comparison of the stochastic hydrostratigraphic MPS models is carried out in an elaborate scheme of visual inspection, mathematical similarity and consistency with boreholes. Using the Kasted survey data, a practical example for modelling new survey areas is presented. A cognitive...
An automated method for the evaluation of the pointing accuracy of Sun-tracking devices
Baumgartner, Dietmar J.; Pötzi, Werner; Freislich, Heinrich; Strutzmann, Heinz; Veronig, Astrid M.; Rieder, Harald E.
2017-03-01
The accuracy of solar radiation measurements, for direct (DIR) and diffuse (DIF) radiation, depends significantly on the precision of the operational Sun-tracking device. Thus, rigid targets for instrument performance and operation have been specified for international monitoring networks, e.g., the Baseline Surface Radiation Network (BSRN) operating under the auspices of the World Climate Research Program (WCRP). Sun-tracking devices that fulfill these accuracy requirements are available from various instrument manufacturers; however, none of the commercially available systems comprise an automatic accuracy control system allowing platform operators to independently validate the pointing accuracy of Sun-tracking sensors during operation. Here we present KSO-STREAMS (KSO-SunTRackEr Accuracy Monitoring System), a fully automated, system-independent, and cost-effective system for evaluating the pointing accuracy of Sun-tracking devices. We detail the monitoring system setup, its design and specifications, and the results from its application to the Sun-tracking system operated at the Kanzelhöhe Observatory (KSO) Austrian radiation monitoring network (ARAD) site. The results from an evaluation campaign from March to June 2015 show that the tracking accuracy of the device operated at KSO lies within BSRN specifications (i.e., 0.1° tracking accuracy) for the vast majority of observations (99.8 %). The evaluation of manufacturer-specified active-tracking accuracies (0.02°), during periods with direct solar radiation exceeding 300 W m-2, shows that these are satisfied in 72.9 % of observations. Tracking accuracies are highest during clear-sky conditions and on days where prevailing clear-sky conditions are interrupted by frontal movement; in these cases, we obtain the complete fulfillment of BSRN requirements and 76.4 % of observations within manufacturer-specified active-tracking accuracies. Limitations to tracking surveillance arise during overcast conditions and
GOCE in ocean modelling - Point mass method applied on GOCE gravity gradients
DEFF Research Database (Denmark)
Herceg, Matija; Knudsen, Per
This presentation is an introduction to my Ph.D project. The main objective of the study is to improve the methodology for combining GOCE gravity field models with satellite altimetry to derive optimal dynamic ocean topography models for oceanography. Here a method for geoid determination using...
Criticism on Jäntti's Three Point Method on curtailing gas adsorption measurements
Massen, C.H.; Poulis, J.A.; Robens, E.
2000-01-01
Jäntti introduced a method to reduce the time required for the stepwise measurement of adsorption isotherms (Jäntti et al., Progress in Vacuum Microbalance Techniques, Vol. 1, Heyden, London, pp. 345–353, 1972). After a pressure change he measured the adsorbed mass three times and calculated its
A sensorless method for measuring the point mobility of mechanical structures
Boulandet, R.; Michau, M.; Herzog, P.; Micheau, P.; Berry, A.
2016-09-01
This paper presents a convenient and cost-effective experimental tool for measuring the mobility characteristics of a mechanical structure. The objective is to demonstrate that the point mobility measurement can be performed using only an electrodynamic inertial exciter. Unlike previous work based on voice coil actuators, no load cell or accelerometer is needed. Instead, it is theoretically shown that the mobility characteristics of the structure can be estimated from variations in the electrical input impedance of the actuator fixed onto it, provided that the electromechanical parameters of the actuator are known. The proof of concept is made experimentally using a cheap commercially available actuator on a simply supported plate, leading to a good dynamic range from 100 Hz to 1 kHz. The methodology to assess the basic parameters of the actuator is also given. Measured data are compared to a standard shaker testing and the strengths and weaknesses of the sensorless mobility measuring device are discussed. It is believed that this sensorless mobility measuring device can be a convenient experimental tool to determine the dynamic characteristics of a wide range of mechanical structures.
Analytic method study of point-reactor kinetic equation when cold start-up
International Nuclear Information System (INIS)
Zhang Fan; Chen Wenzhen; Gui Xuewen
2008-01-01
The reactor cold start-up is a process of inserting reactivity by lifting control rod discontinuously. Inserting too much reactivity will cause short-period and may cause an overpressure accident in the primary loop. It is therefore very important to understand the rule of neutron density variation and to find out the relationships among the speed of lifting control rod, and the duration and speed of neutron density response. It is also helpful for the operators to grasp the rule in order to avoid a start-up accident. This paper starts with one-group delayed neutron point-reactor kinetics equations and provides their analytic solution when reactivity is introduced by lifting control rods discontinuously. The analytic expression is validated by comparison with practical data. It is shown that the analytic solution agrees well with numerical solution. Using this analytical solution, the relationships among neutron density response with the speed of lifting control rod and its duration are also studied. By comparing the results with those under the condition of step inserted reactivity, useful conclusions are drawn
Analytical three-point Dixon method: With applications for spiral water-fat imaging.
Wang, Dinghui; Zwart, Nicholas R; Li, Zhiqiang; Schär, Michael; Pipe, James G
2016-02-01
The goal of this work is to present a new three-point analytical approach with flexible even or uneven echo increments for water-fat separation and to evaluate its feasibility with spiral imaging. Two sets of possible solutions of water and fat are first found analytically. Then, two field maps of the B0 inhomogeneity are obtained by linear regression. The initial identification of the true solution is facilitated by the root-mean-square error of the linear regression and the incorporation of a fat spectrum model. The resolved field map after a region-growing algorithm is refined iteratively for spiral imaging. The final water and fat images are recalculated using a joint water-fat separation and deblurring algorithm. Successful implementations were demonstrated with three-dimensional gradient-echo head imaging and single breathhold abdominal imaging. Spiral, high-resolution T1 -weighted brain images were shown with comparable sharpness to the reference Cartesian images. With appropriate choices of uneven echo increments, it is feasible to resolve the aliasing of the field map voxel-wise. High-quality water-fat spiral imaging can be achieved with the proposed approach. © 2015 Wiley Periodicals, Inc.
Development of Precise Point Positioning Method Using Global Positioning System Measurements
Directory of Open Access Journals (Sweden)
Byung-Kyu Choi
2011-09-01
Full Text Available Precise point positioning (PPP is increasingly used in several parts such as monitoring of crustal movement and maintaining an international terrestrial reference frame using global positioning system (GPS measurements. An accuracy of PPP data processing has been increased due to the use of the more precise satellite orbit/clock products. In this study we developed PPP algorithm that utilizes data collected by a GPS receiver. The measurement error modelling including the tropospheric error and the tidal model in data processing was considered to improve the positioning accuracy. The extended Kalman filter has been also employed to estimate the state parameters such as positioning information and float ambiguities. For the verification, we compared our results to other of International GNSS Service analysis center. As a result, the mean errors of the estimated position on the East-West, North-South and Up-Down direction for the five days were 0.9 cm, 0.32 cm, and 1.14 cm in 95% confidence level.
A Practical Computational Method for the Anisotropic Redshift-Space 3-Point Correlation Function
Slepian, Zachary; Eisenstein, Daniel J.
2018-04-01
We present an algorithm enabling computation of the anisotropic redshift-space galaxy 3-point correlation function (3PCF) scaling as N2, with N the number of galaxies. Our previous work showed how to compute the isotropic 3PCF with this scaling by expanding the radially-binned density field around each galaxy in the survey into spherical harmonics and combining these coefficients to form multipole moments. The N2 scaling occurred because this approach never explicitly required the relative angle between a galaxy pair about the primary galaxy. Here we generalize this work, demonstrating that in the presence of azimuthally-symmetric anisotropy produced by redshift-space distortions (RSD) the 3PCF can be described by two triangle side lengths, two independent total angular momenta, and a spin. This basis for the anisotropic 3PCF allows its computation with negligible additional work over the isotropic 3PCF. We also present the covariance matrix of the anisotropic 3PCF measured in this basis. Our algorithm tracks the full 5-D redshift-space 3PCF, uses an accurate line of sight to each triplet, is exact in angle, and easily handles edge correction. It will enable use of the anisotropic large-scale 3PCF as a probe of RSD in current and upcoming large-scale redshift surveys.
Chen, Rui; Wang, Haotian; Shi, Jun; Hu, Pei
2016-05-01
CYP2D6 is a high polymorphic enzyme. Determining its phenotype before CYP2D6 substrate treatment can avoid dose-dependent adverse events or therapeutic failures. Alternative phenotyping methods of CYP2D6 were compared to aluate the appropriate and precise time points for phenotyping after single-dose and ultiple-dose of 30-mg controlled-release (CR) dextromethorphan (DM) and to explore the antimodes for potential sampling methods. This was an open-label, single and multiple-dose study. 21 subjects were assigned to receive a single dose of CR DM 30 mg orally, followed by a 3-day washout period prior to oral administration of CR DM 30 mg every 12 hours for 6 days. Metabolic ratios (MRs) from AUC∞ after single dosing and from AUC0-12h at steady state were taken as the gold standard. The correlations of metabolic ratios of DM to dextrorphan (MRDM/DX) values based on different phenotyping methods were assessed. Linear regression formulas were derived to calculate the antimodes for potential sample methods. In the single-dose part of the study statistically significant correlations were found between MRDM/DX from AUC∞ and from serial plasma points from 1 to 30 hours or from urine (all p-values < 0.001). In the multiple-dose part, statistically significant correlations were found between MRDM/DX from AUC0-12h on day 6 and MRDM/DX from serial plasma points from 0 to 36 hours after the last dosing (all p-values < 0.001). Based on reported urinary antimode and linear regression analysis, the antimodes of AUC and plasma points were derived to profile the trend of antimodes as the drug concentrations changed. MRDM/DX from plasma points had good correlations with MRDM/DX from AUC. Plasma points from 1 to 30 hours after single dose of 30-mg CR DM and any plasma point at steady state after multiple doses of CR DM could potentially be used for phenotyping of CYP2D6.
Lin, Claire Yilin; Veneziani, Alessandro; Ruthotto, Lars
2018-03-01
We present novel numerical methods for polyline-to-point-cloud registration and their application to patient-specific modeling of deployed coronary artery stents from image data. Patient-specific coronary stent reconstruction is an important challenge in computational hemodynamics and relevant to the design and improvement of the prostheses. It is an invaluable tool in large-scale clinical trials that computationally investigate the effect of new generations of stents on hemodynamics and eventually tissue remodeling. Given a point cloud of strut positions, which can be extracted from images, our stent reconstruction method aims at finding a geometrical transformation that aligns a model of the undeployed stent to the point cloud. Mathematically, we describe the undeployed stent as a polyline, which is a piecewise linear object defined by its vertices and edges. We formulate the nonlinear registration as an optimization problem whose objective function consists of a similarity measure, quantifying the distance between the polyline and the point cloud, and a regularization functional, penalizing undesired transformations. Using projections of points onto the polyline structure, we derive novel distance measures. Our formulation supports most commonly used transformation models including very flexible nonlinear deformations. We also propose 2 regularization approaches ensuring the smoothness of the estimated nonlinear transformation. We demonstrate the potential of our methods using an academic 2D example and a real-life 3D bioabsorbable stent reconstruction problem. Our results show that the registration problem can be solved to sufficient accuracy within seconds using only a few number of Gauss-Newton iterations. Copyright © 2017 John Wiley & Sons, Ltd.
International Nuclear Information System (INIS)
Christensen, S.M.
1976-01-01
A method known as covariant geodesic point separation is developed to calculate the vacuum expectation value of the stress tensor for a massive scalar field in an arbitrary gravitational field. The vacuum expectation value will diverge because the stress-tensor operator is constructed from products of field operators evaluated at the same space-time point. To remedy this problem, one of the field operators is taken to a nearby point. The resultant vacuum expectation value is finite and may be expressed in terms of the Hadamard elementary function. This function is calculated using a curved-space generalization of Schwinger's proper-time method for calculating the Feynman Green's function. The expression for the Hadamard function is written in terms of the biscalar of geodetic interval which gives a measure of the square of the geodesic distance between the separated points. Next, using a covariant expansion in terms of the tangent to the geodesic, the stress tensor may be expanded in powers of the length of the geodesic. Covariant expressions for each divergent term and for certain terms in the finite portion of the vacuum expectation value of the stress tensor are found. The properties, uses, and limitations of the results are discussed
International Nuclear Information System (INIS)
Zhang Zhiming; Zhu Xuesong; Bao Zhaohua; Yang Huilin
2012-01-01
Objective: to explore the initial outcome and efficacy of S 3 proximal humerus locking plate in the treatment of proximal humerus fractures. Methods: Twenty-two patients with proximal humerus fracture were treated with the S 3 proximal humerus locking plate. Most of the fractures were complex, two-part (n=4), three-part (n=11) and four-part (n=7) fractures according to the Neer classification of the proximal humerus fractures. Results: All patients were followed up for 3∼15 months. There were no complications related to the implant including loosening or breakage of the plate. Good and excellent results were documented in 17 patients fair results in 4 patients according the Neer scores of shoulder. Conclusion: New design concepts of S 3 proximal humerus plate provide the subchondral support and the internal fixation support. With the addition of the proper exercise of the shoulder joint, the outcomes would be satisfied. (authors)
Method of local pointed function reduction of original shape in Fourier transformation
International Nuclear Information System (INIS)
Dosch, H.; Slavyanov, S.Yu.
2002-01-01
The method for analytical reduction of the original shape in the one-dimensional Fourier transformation by the fourier image modulus is proposed. The basic concept of the method consists in the presentation of the model shape in the form of the local peak functions sum. The eigenfunctions, generated by the linear differential equations with the polynomial coefficients, are selected as the latter ones. This provides for the possibility of managing the Fourier transformation without numerical integration. This reduces the reverse task to the nonlinear regression with a small number of the evaluated parameters and to the numerical or asymptotic study on the model peak functions - the eigenfunctions of the differential tasks and their fourier images [ru
Directory of Open Access Journals (Sweden)
Wenzhen Chen
2013-01-01
Full Text Available The singularly perturbed method (SPM is proposed to obtain the analytical solution for the delayed supercritical process of nuclear reactor with temperature feedback and small step reactivity inserted. The relation between the reactivity and time is derived. Also, the neutron density (or power and the average density of delayed neutron precursors as the function of reactivity are presented. The variations of neutron density (or power and temperature with time are calculated and plotted and compared with those by accurate solution and other analytical methods. It is shown that the results by the SPM are valid and accurate in the large range and the SPM is simpler than those in the previous literature.
An Entry Point for Formal Methods: Specification and Analysis of Event Logs
Directory of Open Access Journals (Sweden)
Howard Barringer
2010-03-01
Full Text Available Formal specification languages have long languished, due to the grave scalability problems faced by complete verification methods. Runtime verification promises to use formal specifications to automate part of the more scalable art of testing, but has not been widely applied to real systems, and often falters due to the cost and complexity of instrumentation for online monitoring. In this paper we discuss work in progress to apply an event-based specification system to the logging mechanism of the Mars Science Laboratory mission at JPL. By focusing on log analysis, we exploit the "instrumentation" already implemented and required for communicating with the spacecraft. We argue that this work both shows a practical method for using formal specifications in testing and opens interesting research avenues, including a challenging specification learning problem.
Some Critical Points in the Methods and Philosphy of Physical Sciences
Bozdemir, Süleyman
2018-01-01
Nowadays, it seems that there are not enough studies on the philosophy and methods of physical sciences that would be attractive to the researchers in the field. However, many revolutionary inventions have come from the mechanism of the philosophical thought of the physical sciences. This is, of course, a vast and very interesting topic that must be investigated in detail by philosophers, scientists or philosopher-scientists such as physicists. In order to do justice to it one has to write a ...
Zhao, Feng; Huang, Qingming; Wang, Hao; Gao, Wen
2010-12-01
Similarity measures based on correlation have been used extensively for matching tasks. However, traditional correlation-based image matching methods are sensitive to rotation and scale changes. This paper presents a fast correlation-based method for matching two images with large rotation and significant scale changes. Multiscale oriented corner correlation (MOCC) is used to evaluate the degree of similarity between the feature points. The method is rotation invariant and capable of matching image pairs with scale changes up to a factor of 7. Moreover, MOCC is much faster in comparison with the state-of-the-art matching methods. Experimental results on real images show the robustness and effectiveness of the proposed method.
Directory of Open Access Journals (Sweden)
Wang Hao
2010-01-01
Full Text Available Similarity measures based on correlation have been used extensively for matching tasks. However, traditional correlation-based image matching methods are sensitive to rotation and scale changes. This paper presents a fast correlation-based method for matching two images with large rotation and significant scale changes. Multiscale oriented corner correlation (MOCC is used to evaluate the degree of similarity between the feature points. The method is rotation invariant and capable of matching image pairs with scale changes up to a factor of 7. Moreover, MOCC is much faster in comparison with the state-of-the-art matching methods. Experimental results on real images show the robustness and effectiveness of the proposed method.
Directory of Open Access Journals (Sweden)
Antonio Roberto Balbo
2012-01-01
Full Text Available This paper proposes a predictor-corrector primal-dual interior point method which introduces line search procedures (IPLS in both the predictor and corrector steps. The Fibonacci search technique is used in the predictor step, while an Armijo line search is used in the corrector step. The method is developed for application to the economic dispatch (ED problem studied in the field of power systems analysis. The theory of the method is examined for quadratic programming problems and involves the analysis of iterative schemes, computational implementation, and issues concerning the adaptation of the proposed algorithm to solve ED problems. Numerical results are presented, which demonstrate improvements and the efficiency of the IPLS method when compared to several other methods described in the literature. Finally, postoptimization analyses are performed for the solution of ED problems.
Directory of Open Access Journals (Sweden)
Ramazan Gürkan
2013-01-01
Full Text Available A new cloud point extraction (CPE method was developed for the separation and preconcentration of copper (II prior to spectrophotometric analysis. For this purpose, 1-(2,4-dimethylphenyl azonapthalen-2-ol (Sudan II was used as a chelating agent and the solution pH was adjusted to 10.0 with borate buffer. Polyethylene glycol tert-octylphenyl ether (Triton X-114 was used as an extracting agent in the presence of sodium dodecylsulphate (SDS. After phase separation, based on the cloud point of the mixture, the surfactant-rich phase was diluted with acetone, and the enriched analyte was spectrophotometrically determined at 537 nm. The variables affecting CPE efficiency were optimized. The calibration curve was linear within the range 0.285-20 µg L-1 with a detection limit of 0.085 µg L-1. The method was successfully applied to the quantification of copper in different beverage samples.
High precision micro-scale Hall Effect characterization method using in-line micro four-point probes
DEFF Research Database (Denmark)
Petersen, Dirch Hjorth; Hansen, Ole; Lin, Rong
2008-01-01
Accurate characterization of ultra shallow junctions (USJ) is important in order to understand the principles of junction formation and to develop the appropriate implant and annealing technologies. We investigate the capabilities of a new micro-scale Hall effect measurement method where Hall...... effect is measured with collinear micro four-point probes (M4PP). We derive the sensitivity to electrode position errors and describe a position error suppression method to enable rapid reliable Hall effect measurements with just two measurement points. We show with both Monte Carlo simulations...... and experimental measurements, that the repeatability of a micro-scale Hall effect measurement is better than 1 %. We demonstrate the ability to spatially resolve Hall effect on micro-scale by characterization of an USJ with a single laser stripe anneal. The micro sheet resistance variations resulting from...
Two Stages repair of proximal hypospadias: Review of 33 cases
African Journals Online (AJOL)
HussamHassan
Background/Purpose: Proximal hypospadias with chordee is the most challenging variant of hypospadias to reconstruct. During the last 10 years, the approach to sever hypospadias has been controversial. Materials & Methods: During the period from June 2002 to December 2009, I performed 33 cases with proximal.
Czech Academy of Sciences Publication Activity Database
Červinka, Michal
2010-01-01
Roč. 2010, č. 4 (2010), s. 730-753 ISSN 0023-5954 Institutional research plan: CEZ:AV0Z10750506 Keywords : equilibrium problems with complementarity constraints * homotopy * C-stationarity Subject RIV: BC - Control Systems Theory Impact factor: 0.461, year: 2010 http://library.utia.cas.cz/separaty/2010/MTR/cervinka-on computation of c-stationary points for equilibrium problems with linear complementarity constraints via homotopy method.pdf
International Nuclear Information System (INIS)
Durani, Smeer; Mathur, Neerja; Chowdary, G.S.
2007-01-01
The cloud point extraction behavior (CPE) of vanadium (V) using 5,7 dibromo 8-hydroxyquinoline (DBHQ) and triton X 100 was investigated. Vanadium (V) was extracted with 4 ml of 0.5 mg/ml DBHQ and 6 ml of 8% (V/V) triton X 100 at the pH 3.7. A few hydrogeochemical samples were analysed for vanadium using the above method. (author)
Energy Technology Data Exchange (ETDEWEB)
Huang, Zhenyu; Zhou, Ning; Tuffner, Francis K.; Chen, Yousu; Trudnowski, Daniel J.; Diao, Ruisheng; Fuller, Jason C.; Mittelstadt, William A.; Hauer, John F.; Dagle, Jeffery E.
2010-10-18
Small signal stability problems are one of the major threats to grid stability and reliability in the U.S. power grid. An undamped mode can cause large-amplitude oscillations and may result in system breakups and large-scale blackouts. There have been several incidents of system-wide oscillations. Of those incidents, the most notable is the August 10, 1996 western system breakup, a result of undamped system-wide oscillations. Significant efforts have been devoted to monitoring system oscillatory behaviors from measurements in the past 20 years. The deployment of phasor measurement units (PMU) provides high-precision, time-synchronized data needed for detecting oscillation modes. Measurement-based modal analysis, also known as ModeMeter, uses real-time phasor measurements to identify system oscillation modes and their damping. Low damping indicates potential system stability issues. Modal analysis has been demonstrated with phasor measurements to have the capability of estimating system modes from both oscillation signals and ambient data. With more and more phasor measurements available and ModeMeter techniques maturing, there is yet a need for methods to bring modal analysis from monitoring to actions. The methods should be able to associate low damping with grid operating conditions, so operators or automated operation schemes can respond when low damping is observed. The work presented in this report aims to develop such a method and establish a Modal Analysis for Grid Operation (MANGO) procedure to aid grid operation decision making to increase inter-area modal damping. The procedure can provide operation suggestions (such as increasing generation or decreasing load) for mitigating inter-area oscillations.
Methods for cross point analysis of double-demand functions in assessing animal preferences
DEFF Research Database (Denmark)
Engel, Bas; Webb, Laura E.; Jensen, Margit Bak
2014-01-01
a required number of times (the workload). Each panel is linked to one of the simultaneously presented resources. Workloads are varied over sessions and resources. Per session, for each resource the number of times that an animal is rewarded by access to the resource is observed. Four statistical approaches...... for analysis of these observations, including two novel approaches, are presented and discussed. The two novel approaches are based on relative numbers of rewards, i.e. analyses of proportions, while the other two methods that have been used before are based on absolute numbers of rewards, i.e. analyses...
Modeling of Semiconductors and Correlated Oxides with Point Defects by First Principles Methods
Wang, Hao
2014-06-15
Point defects in silicon, vanadium dioxide, and doped ceria are investigated by density functional theory. Defects involving vacancies and interstitial oxygen and carbon in silicon are after formed in outer space and significantly affect device performances. The screened hybrid functional by Heyd-Scuseria-Ernzerhof is used to calculate formation energies, binding energies, and electronic structures of the defective systems because standard density functional theory underestimates the bang gap of silicon. The results indicate for the A-center a −2 charge state. Tin is proposed to be an effective dopant to suppress the formation of A-centers. For the total energy difference between the A- and B-type carbon related G-centers we find close agreement with the experiment. The results indicate that the C-type G-center is more stable than both the A- and B-types. The electronic structures of the monoclinic and rutile phases of vanadium dioxide are also studied using the Heyd-Scuseria-Ernzerhof functional. The ground states of the pure phases obtained by calculations including spin polarization disagree with the experimental observations that the monoclinic phase should not be magnetic, the rutile phase should be metallic, and the monoclinic phase should have a lower total energy than the rutile phase. By tuning the Hartree-Fock fraction α to 10% the agreement with experiments is improved in terms of band gaps and relative energies of the phases. A calculation scheme is proposed to simulate the relationship between the transition temperature of the metal-insulator transition and the dopant concentration in tungsten doped vanadium dioxide. We achieve good agreement with the experimental situation. 18.75% and 25% yttrium, lanthanum, praseodymium, samarium, and gadolinium doped ceria supercells generated by the special quasirandom structure approach are employed to investigate the impact of doping on the O diffusion. The experimental behavior of the conductivity for the
Modeling of Semiconductors and Correlated Oxides with Point Defects by First Principles Methods
Wang, Hao
2014-01-01
Point defects in silicon, vanadium dioxide, and doped ceria are investigated by density functional theory. Defects involving vacancies and interstitial oxygen and carbon in silicon are after formed in outer space and significantly affect device performances. The screened hybrid functional by Heyd-Scuseria-Ernzerhof is used to calculate formation energies, binding energies, and electronic structures of the defective systems because standard density functional theory underestimates the bang gap of silicon. The results indicate for the A-center a −2 charge state. Tin is proposed to be an effective dopant to suppress the formation of A-centers. For the total energy difference between the A- and B-type carbon related G-centers we find close agreement with the experiment. The results indicate that the C-type G-center is more stable than both the A- and B-types. The electronic structures of the monoclinic and rutile phases of vanadium dioxide are also studied using the Heyd-Scuseria-Ernzerhof functional. The ground states of the pure phases obtained by calculations including spin polarization disagree with the experimental observations that the monoclinic phase should not be magnetic, the rutile phase should be metallic, and the monoclinic phase should have a lower total energy than the rutile phase. By tuning the Hartree-Fock fraction α to 10% the agreement with experiments is improved in terms of band gaps and relative energies of the phases. A calculation scheme is proposed to simulate the relationship between the transition temperature of the metal-insulator transition and the dopant concentration in tungsten doped vanadium dioxide. We achieve good agreement with the experimental situation. 18.75% and 25% yttrium, lanthanum, praseodymium, samarium, and gadolinium doped ceria supercells generated by the special quasirandom structure approach are employed to investigate the impact of doping on the O diffusion. The experimental behavior of the conductivity for the
Energy Technology Data Exchange (ETDEWEB)
Kim, Hyun Jin; Beak, Il Kwon; Kim, Kyu Han; Jang, Seok Pil [Korea Aerospace University, Goyang (Korea, Republic of)
2014-08-15
In the present study, the melting temperature depression of Sn nanoparticles manufactured using the modified evaporation method was investigated. For this purpose, a modified evaporation method with mass productivity was developed. Using the manufacturing process, Sn nanoparticles of 10 nm size was manufactured in benzyl alcohol solution to prevent oxidation. To examine the morphology and size distribution of the nanonoparticles, a transmission electron microscope was used. The melting temperature of the Sn nanoparticles was measured using a Differential scanning calorimetry (DSC) which can calculate the endothermic energy during the phase changing process and an X-ray photoelectron spectroscopy (XPS) used for observing the manufactured Sn nanoparticle compound. The melting temperature of the Sn nanoparticles was observed to be 129 ℃, which is 44 ℃ lower than that of the bulk material. Finally, the melting temperature was compared with the Gibbs Thomson and Lai's equations, which can predict the melting temperature according to the particle size. Based on the experimental results, the melting temperature of the Sn nanoparticles was found to match well with those recommended by the Lai's equation.
Karakawa, Ayako; Murata, Hiroshi; Hirasawa, Hiroyo; Mayama, Chihiro; Asaoka, Ryo
2013-01-01
To compare the performance of newly proposed point-wise linear regression (PLR) with the binomial test (binomial PLR) against mean deviation (MD) trend analysis and permutation analyses of PLR (PoPLR), in detecting global visual field (VF) progression in glaucoma. 15 VFs (Humphrey Field Analyzer, SITA standard, 24-2) were collected from 96 eyes of 59 open angle glaucoma patients (6.0 ± 1.5 [mean ± standard deviation] years). Using the total deviation of each point on the 2(nd) to 16(th) VFs (VF2-16), linear regression analysis was carried out. The numbers of VF test points with a significant trend at various probability levels (pbinomial test (one-side). A VF series was defined as "significant" if the median p-value from the binomial test was binomial PLR method (0.14 to 0.86) was significantly higher than MD trend analysis (0.04 to 0.89) and PoPLR (0.09 to 0.93). The PIS of the proposed method (0.0 to 0.17) was significantly lower than the MD approach (0.0 to 0.67) and PoPLR (0.07 to 0.33). The PBNS of the three approaches were not significantly different. The binomial BLR method gives more consistent results than MD trend analysis and PoPLR, hence it will be helpful as a tool to 'flag' possible VF deterioration.
Proximal caries detection: Sirona Sidexis versus Kodak Ektaspeed Plus.
Khan, Emad A; Tyndall, Donald A; Ludlow, John B; Caplan, Daniel
2005-01-01
This study compared the accuracy of intraoral film and a charge-coupled device (CCD) receptor for proximal caries detection. Four observers evaluated images of the proximal surfaces of 40 extracted posterior teeth. The presence or absence of caries was scored using a five-point confidence scale. The actual status of each surface was determined from ground section histology. Responses were evaluated by means of receiver operating characteristic (ROC) analysis. Areas under ROC curves (Az) were assessed through a paired t-test. The performance of the CCD-based intraoral sensor was not different statistically from Ektaspeed Plus film in detecting proximal caries.
Wang, Yongzhi; Ma, Yuqing; Zhu, A.-xing; Zhao, Hui; Liao, Lixia
2018-05-01
Facade features represent segmentations of building surfaces and can serve as a building framework. Extracting facade features from three-dimensional (3D) point cloud data (3D PCD) is an efficient method for 3D building modeling. By combining the advantages of 3D PCD and two-dimensional optical images, this study describes the creation of a highly accurate building facade feature extraction method from 3D PCD with a focus on structural information. The new extraction method involves three major steps: image feature extraction, exploration of the mapping method between the image features and 3D PCD, and optimization of the initial 3D PCD facade features considering structural information. Results show that the new method can extract the 3D PCD facade features of buildings more accurately and continuously. The new method is validated using a case study. In addition, the effectiveness of the new method is demonstrated by comparing it with the range image-extraction method and the optical image-extraction method in the absence of structural information. The 3D PCD facade features extracted by the new method can be applied in many fields, such as 3D building modeling and building information modeling.
Improved Full-Newton Step O(nL) Infeasible Interior-Point Method for Linear Optimization
Gu, G.; Mansouri, H.; Zangiabadi, M.; Bai, Y.Q.; Roos, C.
2009-01-01
We present several improvements of the full-Newton step infeasible interior-point method for linear optimization introduced by Roos (SIAM J. Optim. 16(4):1110–1136, 2006). Each main step of the method consists of a feasibility step and several centering steps. We use a more natural feasibility step, which targets the ?+-center of the next pair of perturbed problems. As for the centering steps, we apply a sharper quadratic convergence result, which leads to a slightly wider neighborhood for th...
Hemiarthroplasty for proximal humeral fracture: restoration of the Gothic arch.
Krishnan, Sumant G; Bennion, Phillip W; Reineck, John R; Burkhead, Wayne Z
2008-10-01
Proximal humerus fractures are the most common fractures of the shoulder girdle, and initial management of these injuries often determines final outcome. When arthroplasty is used to manage proximal humeral fractures, surgery remains technically demanding, and outcomes have been unpredictable. Recent advances in both technique and prosthetic implants have led to more successful and reproducible results. Key technical points include restoration of the Gothic arch, anatomic tuberosity reconstruction, and minimal soft tissue dissection.
Respondent driven sampling: determinants of recruitment and a method to improve point estimation.
Directory of Open Access Journals (Sweden)
Nicky McCreesh
Full Text Available Respondent-driven sampling (RDS is a variant of a link-tracing design intended for generating unbiased estimates of the composition of hidden populations that typically involves giving participants several coupons to recruit their peers into the study. RDS may generate biased estimates if coupons are distributed non-randomly or if potential recruits present for interview non-randomly. We explore if biases detected in an RDS study were due to either of these mechanisms, and propose and apply weights to reduce bias due to non-random presentation for interview.Using data from the total population, and the population to whom recruiters offered their coupons, we explored how age and socioeconomic status were associated with being offered a coupon, and, if offered a coupon, with presenting for interview. Population proportions were estimated by weighting by the assumed inverse probabilities of being offered a coupon (as in existing RDS methods, and also of presentation for interview if offered a coupon by age and socioeconomic status group.Younger men were under-recruited primarily because they were less likely to be offered coupons. The under-recruitment of higher socioeconomic status men was due in part to them being less likely to present for interview. Consistent with these findings, weighting for non-random presentation for interview by age and socioeconomic status group greatly improved the estimate of the proportion of men in the lowest socioeconomic group, reducing the root-mean-squared error of RDS estimates of socioeconomic status by 38%, but had little effect on estimates for age. The weighting also improved estimates for tribe and religion (reducing root-mean-squared-errors by 19-29%, but had little effect for sexual activity or HIV status.Data collected from recruiters on the characteristics of men to whom they offered coupons may be used to reduce bias in RDS studies. Further evaluation of this new method is required.
A nine-point pH titration method to determine low-concentration VFA in municipal wastewater.
Ai, Hainan; Zhang, Daijun; Lu, Peili; He, Qiang
2011-01-01
Characterization of volatile fatty acid (VFA) in wastewater is significant for understanding the wastewater nature and the wastewater treatment process optimization based on the usage of Activated Sludge Models (ASMs). In this study, a nine-point pH titration method was developed for the determination of low-concentration VFA in municipal wastewater. The method was evaluated using synthetic wastewater containing VFA with the concentration of 10-50 mg/l and the possible interfering buffer systems of carbonate, phosphate and ammonium similar to those in real municipal wastewater. In addition, the further evaluation was conducted through the assay of real wastewater using chromatography as reference. The results showed that the recovery of VFA in the synthetic wastewater was 92%-102 and the coefficient of variance (CV) of reduplicate measurements 1.68%-4.72%. The changing content of the buffering substances had little effect on the accuracy of the method. Moreover, the titration method was agreed with chromatography in the determination of VFA in real municipal wastewater with R(2)= 0.9987 and CV =1.3-1.7. The nine-point pH titration method is capable of satisfied determination of low-concentration VFA in municipal wastewater.
A periodic point-based method for the analysis of Nash equilibria in 2 x 2 symmetric quantum games
International Nuclear Information System (INIS)
Schneider, David
2011-01-01
We present a novel method of looking at Nash equilibria in 2 x 2 quantum games. Our method is based on a mathematical connection between the problem of identifying Nash equilibria in game theory, and the topological problem of the periodic points in nonlinear maps. To adapt our method to the original protocol designed by Eisert et al (1999 Phys. Rev. Lett. 83 3077-80) to study quantum games, we are forced to extend the space of strategies from the initial proposal. We apply our method to the extended strategy space version of the quantum Prisoner's dilemma and find that a new set of Nash equilibria emerge in a natural way. Nash equilibria in this set are optimal as Eisert's solution of the quantum Prisoner's dilemma and include this solution as a limit case.
A periodic point-based method for the analysis of Nash equilibria in 2 x 2 symmetric quantum games
Energy Technology Data Exchange (ETDEWEB)
Schneider, David, E-mail: schneide@tandar.cnea.gov.ar [Departamento de Fisica, Comision Nacional de EnergIa Atomica. Av. del Libertador 8250, 1429 Buenos Aires (Argentina)
2011-03-04
We present a novel method of looking at Nash equilibria in 2 x 2 quantum games. Our method is based on a mathematical connection between the problem of identifying Nash equilibria in game theory, and the topological problem of the periodic points in nonlinear maps. To adapt our method to the original protocol designed by Eisert et al (1999 Phys. Rev. Lett. 83 3077-80) to study quantum games, we are forced to extend the space of strategies from the initial proposal. We apply our method to the extended strategy space version of the quantum Prisoner's dilemma and find that a new set of Nash equilibria emerge in a natural way. Nash equilibria in this set are optimal as Eisert's solution of the quantum Prisoner's dilemma and include this solution as a limit case.
Yehia, Ali M.
2013-05-01
New, simple, specific, accurate and precise spectrophotometric technique utilizing ratio spectra is developed for simultaneous determination of two different binary mixtures. The developed ratio H-point standard addition method (RHPSAM) was managed successfully to resolve the spectral overlap in itopride hydrochloride (ITO) and pantoprazole sodium (PAN) binary mixture, as well as, mosapride citrate (MOS) and PAN binary mixture. The theoretical background and advantages of the newly proposed method are presented. The calibration curves are linear over the concentration range of 5-60 μg/mL, 5-40 μg/mL and 4-24 μg/mL for ITO, MOS and PAN, respectively. Specificity of the method was investigated and relative standard deviations were less than 1.5. The accuracy, precision and repeatability were also investigated for the proposed method according to ICH guidelines.
DEFF Research Database (Denmark)
Barfod, Adrian
The PhD thesis presents a new method for analyzing the relationship between resistivity and lithology, as well as a method for quantifying the hydrostratigraphic modeling uncertainty related to Multiple-Point Statistical (MPS) methods. Three-dimensional (3D) geological models are im...... is to improve analysis and research of the resistivity-lithology relationship and ensemble geological/hydrostratigraphic modeling. The groundwater mapping campaign in Denmark, beginning in the 1990’s, has resulted in the collection of large amounts of borehole and geophysical data. The data has been compiled...... in two publicly available databases, the JUPITER and GERDA databases, which contain borehole and geophysical data, respectively. The large amounts of available data provided a unique opportunity for studying the resistivity-lithology relationship. The method for analyzing the resistivity...
Coco, Armando; Russo, Giovanni
2018-05-01
In this paper we propose a second-order accurate numerical method to solve elliptic problems with discontinuous coefficients (with general non-homogeneous jumps in the solution and its gradient) in 2D and 3D. The method consists of a finite-difference method on a Cartesian grid in which complex geometries (boundaries and interfaces) are embedded, and is second order accurate in the solution and the gradient itself. In order to avoid the drop in accuracy caused by the discontinuity of the coefficients across the interface, two numerical values are assigned on grid points that are close to the interface: a real value, that represents the numerical solution on that grid point, and a ghost value, that represents the numerical solution extrapolated from the other side of the interface, obtained by enforcing the assigned non-homogeneous jump conditions on the solution and its flux. The method is also extended to the case of matrix coefficient. The linear system arising from the discretization is solved by an efficient multigrid approach. Unlike the 1D case, grid points are not necessarily aligned with the normal derivative and therefore suitable stencils must be chosen to discretize interface conditions in order to achieve second order accuracy in the solution and its gradient. A proper treatment of the interface conditions will allow the multigrid to attain the optimal convergence factor, comparable with the one obtained by Local Fourier Analysis for rectangular domains. The method is robust enough to handle large jump in the coefficients: order of accuracy, monotonicity of the errors and good convergence factor are maintained by the scheme.
International Nuclear Information System (INIS)
Bižić, Milan B; Petrović, Dragan Z; Tomić, Miloš C; Djinović, Zoran V
2017-01-01
This paper presents the development of a unique method for experimental determination of wheel–rail contact forces and contact point position by using the instrumented wheelset (IWS). Solutions of key problems in the development of IWS are proposed, such as the determination of optimal locations, layout, number and way of connecting strain gauges as well as the development of an inverse identification algorithm (IIA). The base for the solution of these problems is the wheel model and results of FEM calculations, while IIA is based on the method of blind source separation using independent component analysis. In the first phase, the developed method was tested on a wheel model and a high accuracy was obtained (deviations of parameters obtained with IIA and really applied parameters in the model are less than 2%). In the second phase, experimental tests on the real object or IWS were carried out. The signal-to-noise ratio was identified as the main influential parameter on the measurement accuracy. Ð¢hÐµ obtained results have shown that the developed method enables measurement of vertical and lateral wheel–rail contact forces Q and Y and their ratio Y / Q with estimated errors of less than 10%, while the estimated measurement error of contact point position is less than 15%. At flange contact and higher values of ratio Y / Q or Y force, the measurement errors are reduced, which is extremely important for the reliability and quality of experimental tests of safety against derailment of railway vehicles according to the standards UIC 518 and EN 14363. The obtained results have shown that the proposed method can be successfully applied in solving the problem of high accuracy measurement of wheel–rail contact forces and contact point position using IWS. (paper)
DEFF Research Database (Denmark)
Hasheminamin, Maryam; Agelidis, Vassilios; Ahmadi, Abdollah
2018-01-01
Voltage rise (VR) due to reverse power flow is an important obstacle for high integration of Photovoltaic (PV) into residential networks. This paper introduces and elaborates a novel methodology of an index-based single-point-reactive power-control (SPRPC) methodology to mitigate voltage rise by ...... system with high r/x ratio. Efficacy, effectiveness and cost study of SPRPC is compared to droop control to evaluate its advantages.......Voltage rise (VR) due to reverse power flow is an important obstacle for high integration of Photovoltaic (PV) into residential networks. This paper introduces and elaborates a novel methodology of an index-based single-point-reactive power-control (SPRPC) methodology to mitigate voltage rise...... by absorbing adequate reactive power from one selected point. The proposed index utilizes short circuit analysis to select the best point to apply this Volt/Var control method. SPRPC is supported technically and financially by distribution network operator that makes it cost effective, simple and efficient...
Lahanas, M; Baltas, D; Giannouli, S; Milickovic, N; Zamboglou, N
2000-05-01
We have studied the accuracy of statistical parameters of dose distributions in brachytherapy using actual clinical implants. These include the mean, minimum and maximum dose values and the variance of the dose distribution inside the PTV (planning target volume), and on the surface of the PTV. These properties have been studied as a function of the number of uniformly distributed sampling points. These parameters, or the variants of these parameters, are used directly or indirectly in optimization procedures or for a description of the dose distribution. The accurate determination of these parameters depends on the sampling point distribution from which they have been obtained. Some optimization methods ignore catheters and critical structures surrounded by the PTV or alternatively consider as surface dose points only those on the contour lines of the PTV. D(min) and D(max) are extreme dose values which are either on the PTV surface or within the PTV. They must be avoided for specification and optimization purposes in brachytherapy. Using D(mean) and the variance of D which we have shown to be stable parameters, achieves a more reliable description of the dose distribution on the PTV surface and within the PTV volume than does D(min) and D(max). Generation of dose points on the real surface of the PTV is obligatory and the consideration of catheter volumes results in a realistic description of anatomical dose distributions.
rCBF measurement by one-point venous sampling with the ARG method
International Nuclear Information System (INIS)
Yoshida, Nobuhiro; Okamoto, Toshiaki; Takahashi, Hidekado; Hattori, Teruo
1997-01-01
We investigated the possibility of using venous blood sampling instead of arterial blood sampling for the current method of ARG (autoradiography) used to determine regional cerebral blood flow (rCBF) on the basis of one session of arterial blood sampling and SPECT. For this purpose, the ratio of the arterial blood radioactivity count to the venous blood radioactivity count, the coefficient of variation, and the correlation and differences between arterial blood-based rCBF and venous blood-based rCBF were analyzed. The coefficient of variation was lowest (4.1%) 20 minutes after injection into the dorsum manus. When the relationship between venous and arterial blood counts was analyzed, arterial blood counts correlated well with venous blood counts collected at the dorsum manus 20 or 30 minutes after intravenous injection and with venous blood counts collected at the wrist 20 minutes after intravenous injection (r=0.97 or higher). The difference from rCBF determined on the basis of arterial blood was smallest (0.7) for rCBF determined on the basis of venous blood collected at the dorsum manus 20 minutes after intravenous injection. (author)
ProxImaL: efficient image optimization using proximal algorithms
Heide, Felix; Diamond, Steven; Nieß ner, Matthias; Ragan-Kelley, Jonathan; Heidrich, Wolfgang; Wetzstein, Gordon
2016-01-01
domain-specific language and compiler for image optimization problems that makes it easy to experiment with different problem formulations and algorithm choices. The language uses proximal operators as the fundamental building blocks of a variety