Loizou, Nicolas
2017-12-27
In this paper we study several classes of stochastic optimization algorithms enriched with heavy ball momentum. Among the methods studied are: stochastic gradient descent, stochastic Newton, stochastic proximal point and stochastic dual subspace ascent. This is the first time momentum variants of several of these methods are studied. We choose to perform our analysis in a setting in which all of the above methods are equivalent. We prove global nonasymptotic linear convergence rates for all methods and various measures of success, including primal function values, primal iterates (in the L2 sense), and dual function values. We also show that the primal iterates converge at an accelerated linear rate in the L1 sense. This is the first time a linear rate is shown for the stochastic heavy ball method (i.e., the stochastic gradient descent method with momentum). Under somewhat weaker conditions, we establish a sublinear convergence rate for Cesàro averages of primal iterates. Moreover, we propose a novel concept, which we call stochastic momentum, aimed at decreasing the cost of performing the momentum step. We prove linear convergence of several stochastic methods with stochastic momentum, and show that in some sparse data regimes and for sufficiently small momentum parameters, these methods enjoy better overall complexity than methods with deterministic momentum. Finally, we perform extensive numerical testing on artificial and real datasets, including data coming from average consensus problems.
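As a rough, self-contained illustration of the stochastic heavy ball update analyzed above: the sketch below runs x_{k+1} = x_k - gamma*g_i(x_k) + beta*(x_k - x_{k-1}) on an invented consistent least-squares problem (b_i = 2*a_i, so every stochastic gradient vanishes at the solution x* = 2); all data and parameter values are made up for the sketch, not taken from the paper.

```python
import random

# Minimize f(x) = (1/n) * sum_i 0.5*(a_i*x - b_i)^2 with the stochastic
# heavy ball method. The system is consistent (b_i = 2*a_i), so x* = 2.
a = [1.0, 2.0, 3.0]
b = [2.0, 4.0, 6.0]                  # b_i = 2 * a_i  =>  x* = 2
gamma, beta = 0.05, 0.5              # illustrative step size and momentum
x, x_prev = 0.0, 0.0
rng = random.Random(0)
for _ in range(2000):
    i = rng.randrange(len(a))        # sample one term of the finite sum
    grad = a[i] * (a[i] * x - b[i])  # stochastic gradient of that term
    # heavy ball update: gradient step plus momentum term
    x, x_prev = x - gamma * grad + beta * (x - x_prev), x
print(abs(x - 2.0))                  # distance to the solution
```

In this interpolation-style regime the iterates contract linearly, which is the flavor of result the abstract describes.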
Directory of Open Access Journals (Sweden)
Kriengsak Wattanawitoon
2011-01-01
We prove strong and weak convergence theorems of modified hybrid proximal-point algorithms for finding a common element of the set of zero points of a maximal monotone operator, the set of solutions of equilibrium problems, and the set of solutions of the variational inequality for an inverse strongly monotone operator in a Banach space, under different conditions. Moreover, applications to complementarity problems are given. Our results modify and improve the recently announced ones by Li and Song (2008) and many other authors.
Best Proximity Points for a New Class of Generalized Proximal Mappings
Directory of Open Access Journals (Sweden)
Tayyab Kamran
2017-03-01
The best proximity points are usually used to find the optimal approximate solution of the operator equation Tx = x when T has no fixed point. In this paper, we prove some best proximity point theorems for nonself multivalued operators, following the footsteps of Basha and Shahzad [Best proximity point theorems for generalized proximal contractions, Fixed Point Theory Appl., 2012, 2012:42].
On the existence of best proximity points for generalized contractions
Directory of Open Access Journals (Sweden)
V. Vetrivel
2014-04-01
In this article we establish the existence of a unique best proximity point for some generalized nonself contractions on a metric space in a simpler way, using a geometric result. Our results generalize some recent best proximity point theorems and several fixed point theorems proved by various authors.
Interproximal contact points and proximal caries in posterior primary teeth.
Allison, Paul J; Schwartz, Stephane
2003-01-01
The purpose of this study was to investigate the hypothesis that the risk of proximal caries in posterior primary teeth is higher when interproximal contact points are closed than when they are open. A cross-sectional study design was used with a sample of 286 children aged 24 to 72 months (mean age 54 months +/- 16 months). Children with any permanent dentition were excluded. Caries (defined as a lesion halfway through enamel or further) was assessed radiographically by a single dentist. The open/closed nature of contact points was assessed by a different dentist through resistance to dental floss. Data concerning known risk factors and indicators for caries were also collected. Analyses were performed at the level of the contact point, comparing the same contact points in different children. Multiple logistic regression was used to assess the relationship between open/closed status and caries status for each posterior contact point. In 7 of the 8 contact points examined, the odds for caries were significantly increased when contact points were closed. This research suggests that the risk for proximal caries in the posterior primary dentition is raised if contact points are closed compared to those that are open.
Directory of Open Access Journals (Sweden)
Minghua Xu
2014-01-01
We consider the problem of seeking a symmetric positive semidefinite matrix in a closed convex set to approximate a given matrix. This problem may arise in several areas of numerical linear algebra or come from the finance industry or statistics, and thus has many applications. Many methods have been proposed in the literature for solving this class of matrix optimization problems. The proximal alternating direction method is one of those methods and can be easily applied to these matrix optimization problems. Generally, the proximal parameters of the proximal alternating direction method are required to be greater than zero. In this paper, we show that this restriction on the proximal parameters can be relaxed for this kind of matrix optimization problem. Numerical experiments also show that the proximal alternating direction method with the relaxed proximal parameters is convergent and generally performs better than the classical proximal alternating direction method.
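The computational kernel in this class of matrix nearness problems is projection onto the positive semidefinite cone. The sketch below uses plain alternating projections (not the paper's proximal alternating direction method) on a made-up instance: approximate an indefinite symmetric matrix by one that is PSD with unit diagonal.

```python
import numpy as np

def proj_psd(A):
    """Project a symmetric matrix onto the PSD cone (drop negative eigenvalues)."""
    w, V = np.linalg.eigh((A + A.T) / 2)
    return (V * np.clip(w, 0, None)) @ V.T

G = np.array([[1.0, 0.9, 0.7],
              [0.9, 1.0, -0.9],
              [0.7, -0.9, 1.0]])          # indefinite "target" matrix (invented)
X = G.copy()
for _ in range(1000):
    np.fill_diagonal(X, 1.0)              # project onto the unit-diagonal set
    X = proj_psd(X)                       # project onto the PSD cone
print(np.linalg.eigvalsh(X).min(), abs(np.diag(X) - 1).max())
```

Since the identity matrix lies in the interior of the PSD cone and has unit diagonal, the two convex sets intersect and the alternating projections converge to a feasible point.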
A General Simulation Method for Multiple Bodies in Proximate Flight
Meakin, Robert L.
2003-01-01
Methods of unsteady aerodynamic simulation for an arbitrary number of independent bodies flying in close proximity are considered. A novel method to efficiently detect collision contact points is described. A method to compute body trajectories in response to aerodynamic loads, applied loads, and inter-body collisions is also given. The physical correctness of the methods is verified by comparison to a set of analytic solutions. The methods, combined with a Navier-Stokes solver, are used to demonstrate the possibility of predicting the unsteady aerodynamics and flight trajectories of moving bodies that involve rigid-body collisions.
Best proximity points of contractive mappings on a metric space with a graph and applications
Directory of Open Access Journals (Sweden)
Asrifa Sultana
2017-04-01
We establish an existence and uniqueness theorem on best proximity point for contractive mappings on a metric space endowed with a graph. As an application of this theorem, we obtain a result on the existence of unique best proximity point for uniformly locally contractive mappings. Moreover, our theorem subsumes and generalizes many recent fixed point and best proximity point results.
Alternating proximal gradient method for nonnegative matrix factorization
Xu, Yangyang
2011-01-01
Nonnegative matrix factorization has been widely applied in face recognition, text mining, and spectral analysis. This paper proposes an alternating proximal gradient method for solving this problem. Under a uniformly positive lower bound assumption on the iterates, any limit point can be shown to satisfy the first-order optimality conditions. A Nesterov-type extrapolation technique is then applied to accelerate the algorithm. Though this technique was originally developed for convex programs, it turns out to work very well for the nonconvex nonnegative matrix factorization problem. Extensive numerical experiments illustrate the efficiency of the alternating proximal gradient method and the acceleration technique. In particular, on real data the accelerated method is considerably faster than state-of-the-art algorithms while producing solutions of comparable quality.
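A minimal sketch of the alternating proximal gradient idea for NMF, without the Nesterov-type extrapolation: each block takes a gradient step followed by the prox of the nonnegativity constraint, which is just clipping at zero. All sizes and step-size choices below are illustrative, not the paper's.

```python
import numpy as np

# min ||M - W @ H||_F^2  subject to  W >= 0, H >= 0, alternating over W and H.
rng = np.random.default_rng(0)
M = rng.random((6, 5))                              # invented data matrix
W, H = rng.random((6, 2)), rng.random((2, 5))       # rank-2 factors
for _ in range(500):
    Lw = np.linalg.norm(H @ H.T, 2) + 1e-12         # Lipschitz const. of W-block
    W = np.clip(W - (W @ H - M) @ H.T / Lw, 0, None)  # grad step + prox (clip)
    Lh = np.linalg.norm(W.T @ W, 2) + 1e-12         # Lipschitz const. of H-block
    H = np.clip(H - W.T @ (W @ H - M) / Lh, 0, None)
err = np.linalg.norm(M - W @ H)
print(float(err))
```

The acceleration discussed in the abstract would extrapolate each block from its two most recent iterates before the gradient step.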
Directory of Open Access Journals (Sweden)
Manuel De la Sen
2017-04-01
The main objective of this paper is to deal with some properties of interest in two types of fuzzy ordered proximal contractions of cyclic self-mappings T integrated in a pair (g, T) of mappings. In particular, g is a non-contractive fuzzy self-mapping in the framework of non-Archimedean ordered fuzzy complete metric spaces, and T is a p-cyclic proximal contraction. Two types of such contractions (so-called type I and type II) are dealt with. In particular, the existence, uniqueness and limit properties of sequences converging to optimal fuzzy best proximity coincidence points are investigated for such pairs of mappings.
A Best Proximity Point Result in Modular Spaces with the Fatou Property
Directory of Open Access Journals (Sweden)
Mohamed Jleli
2013-01-01
Consider a nonself-mapping T: A → B, where (A, B) is a pair of nonempty subsets of a modular space X_ρ. A best proximity point of T is a point x ∈ A satisfying the condition d_ρ(x, Tx) = d_ρ(A, B). In this paper, we introduce the class of proximal quasicontraction nonself-mappings in modular spaces with the Fatou property. For such mappings, we provide sufficient conditions assuring the existence and uniqueness of best proximity points.
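A toy numerical illustration of the best proximity point concept (the sets and the mapping below are invented, not from the paper): with A = [0, 1], B = [2, 3] and T(x) = 3 - x, T has no fixed point, and a best proximity point is an x in A attaining |x - Tx| = dist(A, B) = 1.

```python
# Grid search for the point of A = [0, 1] closest to its image under
# T(x) = 3 - x, which maps A into B = [2, 3].
T = lambda x: 3 - x
A = [i / 1000 for i in range(1001)]              # discretization of A
best = min(A, key=lambda x: abs(x - T(x)))       # minimize |x - Tx| over A
print(best, abs(best - T(best)))
```

Here |x - Tx| = 3 - 2x on A, so the minimum is attained at x = 1 with value 1 = dist(A, B), i.e. x = 1 is a best proximity point.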
Inexact proximal Newton methods for self-concordant functions
DEFF Research Database (Denmark)
Li, Jinchao; Andersen, Martin Skovgaard; Vandenberghe, Lieven
2016-01-01
We analyze the proximal Newton method for minimizing a sum of a self-concordant function and a convex function with an inexpensive proximal operator. We present new results on the global and local convergence of the method when inexact search directions are used. The method is illustrated with an application to L1-regularized covariance selection, in which prior constraints on the sparsity pattern of the inverse covariance matrix are imposed. In the numerical experiments the proximal Newton steps are computed by an accelerated proximal gradient method, and multifrontal algorithms for positive definite matrices with chordal sparsity patterns are used to evaluate gradients and matrix-vector products with the Hessian of the smooth component of the objective.
Best Proximity Point Theorems in Partially Ordered b-Quasi Metric Spaces
Directory of Open Access Journals (Sweden)
Ali Abkar
2016-11-01
In this paper, we introduce the notion of an ordered rational proximal contraction in partially ordered b-quasi metric spaces. We shall then prove some best proximity point theorems in partially ordered b-quasi metric spaces.
PHOTOJOURNALISM AND PROXIMITY IMAGES: two points of view, two professions?
Directory of Open Access Journals (Sweden)
Daniel Thierry
2011-06-01
For many decades, classic photojournalistic practice, firmly anchored in a creed established since Lewis Hine (1874-1940), has developed a praxis and a doxa that have barely been affected by the transformations in the various types of journalism. From the search for the “right image”, which would be totally transparent by striving to refute its enunciative features from a perspective of maximum objectivity, to the most seductive photography at supermarkets by photo agencies, the range of images seems to be decidedly framed. However, far from constituting high-powered reporting or excellent photography that is rewarded with numerous international prizes and invitations to the media-artistic world, local press photography remains in the shadows. How does one offer a representation of one’s self that can be shared in the local sphere? That is the first question which editors of the local daily and weekly press must grapple with. Using illustrations of the practices, this article proposes an examination of the origins of these practices and an analysis grounded on the originality of the authors of these proximity photographs.
Best Proximity Point Results in Complex Valued Metric Spaces
Directory of Open Access Journals (Sweden)
Binayak S. Choudhury
2014-01-01
... complex valued metric spaces. We treat the problem as that of finding the global optimal solution of a fixed point equation although the exact solution does not in general exist. We also define and use the concept of P-property in such spaces. Our results are illustrated with examples.
A Line-Search-Based Partial Proximal Alternating Directions Method for Separable Convex Optimization
Directory of Open Access Journals (Sweden)
Yu-hua Zeng
2014-01-01
We propose an appealing line-search-based partial proximal alternating directions (LSPPAD) method for solving a class of separable convex optimization problems. The problems under consideration are common in practice. The proposed method solves two subproblems at each iteration: one is solved by a proximal point method, while the proximal term is absent from the other. Both subproblems admit inexact solutions. A line search technique is used to guarantee the convergence. The convergence of the LSPPAD method is established under some suitable conditions. The advantage of the proposed method is that it provides tractability of the subproblem in which the proximal term is absent. Numerical tests show that the LSPPAD method performs better than the existing alternating projection based prediction-correction (APBPC) method when both are employed to solve the described problem.
Effect of processing method on the Proximate composition, mineral ...
African Journals Online (AJOL)
Effect of processing method on the Proximate composition, mineral content and antinutritional factors of Taro (Colocasia esculenta, L.) growth in Ethiopia. T Adane, A Shimelis, R Negussie, B Tilahun, GD Haki ...
Proximal extrapolated gradient methods for variational inequalities.
Malitsky, Yu
2018-01-01
The paper concerns novel first-order methods for monotone variational inequalities. They use a very simple linesearch procedure that takes into account local information about the operator. The methods do not require Lipschitz continuity of the operator, and the linesearch procedure uses only values of the operator. Moreover, when the operator is affine our linesearch becomes very simple, namely, it needs only simple vector-vector operations. For all our methods, we establish an ergodic convergence rate. In addition, we modify one of the proposed methods for the case of composite minimization. Preliminary results from numerical experiments are quite promising.
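A minimal sketch of a classical extragradient-type method for a monotone variational inequality: a generic fixed-step relative of the linesearch methods above, on an invented rotation operator F(x) = (x2, -x1), which is monotone (but not strongly) and whose unique solution is the origin.

```python
# Find x with <F(x), y - x> >= 0 for all y (here over all of R^2, so F(x*) = 0).
def F(x):
    return (x[1], -x[0])                         # monotone rotation operator

t = 0.3                                          # fixed step (illustrative)
x = (1.0, 1.0)
for _ in range(300):
    fx = F(x)
    y = (x[0] - t * fx[0], x[1] - t * fx[1])     # prediction step
    fy = F(y)
    x = (x[0] - t * fy[0], x[1] - t * fy[1])     # correction step at the predicted point
print(abs(x[0]) + abs(x[1]))
```

Plain gradient steps x - t*F(x) spiral outward on this operator; the extra evaluation at the predicted point is what makes the iteration contract, which is why such operators need extragradient-type (or reflected/extrapolated) schemes.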
The Random Material Point Method
Wang, B.; Vardon, P.J.; Hicks, M.A.
2017-01-01
The material point method is a finite element variant which allows the material, represented by a point-wise discretization, to move through the background mesh. This means that large deformations, such as those observed post slope failure, can be computed. By coupling this material level ...
The Proximities of Asteroids and Critical Points of the Distance Function
Directory of Open Access Journals (Sweden)
Milisavljevic, S.
2010-06-01
The proximities are important for different purposes, for example to evaluate the risk of collisions of asteroids or comets with the Solar-System planets. We describe a simple and efficient method for finding the asteroid proximities in the case of elliptical orbits with a common focus. In several examples we have compared our method with the recent excellent algebraic and polynomial solutions of Gronchi (2002, 2005).
Effect of Processing Methods on the Proximate and Energy ...
African Journals Online (AJOL)
Four methods of processing were assessed to investigate the effect of processing methods on the digestibility, proximate and energy composition of Lablab purpureus (Rongai) beans. The processing methods were boiling (in water), fermentation, toasting, and fermentation plus toasting. Some of the beans were boiled for 0, ...
Rathee, Savita; Dhingra, Kusum; Kumar, Anil
2016-01-01
Here, we extend the notion of the (E.A.) property in a convex metric space defined by Kumar and Rathee (Fixed Point Theory Appl 1-14, 2014) by introducing a new class of self-maps which satisfy the common property (E.A.) in the context of a convex metric space, and we ensure the existence of a common fixed point for this newly introduced class of self-maps. We also guarantee the existence of common best proximity points for this class of maps satisfying a generalized non-expansive type condition. We furnish an example in support of the proved results.
Correction of Misclassifications Using a Proximity-Based Estimation Method
Directory of Open Access Journals (Sweden)
Shmulevich Ilya
2004-01-01
An estimation method for correcting misclassifications in signal and image processing is presented. The method is based on the use of context-based (temporal or spatial) information in a sliding-window fashion. The classes can be purely nominal, that is, an ordering of the classes is not required. The method employs nonlinear operations based on class proximities defined by a proximity matrix. Two case studies are presented. In the first, the proposed method is applied to one-dimensional signals for processing data that are obtained by a musical key-finding algorithm. In the second, the estimation method is applied to two-dimensional signals for correction of misclassifications in images. In the first case study, the proximity matrix employed by the estimation method follows directly from music perception studies, whereas in the second case study, the optimal proximity matrix is obtained with genetic algorithms as the learning rule in a training-based optimization framework. Simulation results are presented in both case studies, and the degree of improvement in classification accuracy obtained by the proposed method is assessed statistically using Kappa analysis.
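A simplified sketch of the idea (an assumed variant, not the authors' exact operator): each sample's class is re-estimated as the class with the largest summed proximity, under a given proximity matrix, to the classes observed in a sliding window around it. The proximity values and the noisy sequence are invented.

```python
# Symmetric proximity matrix over three nominal classes (invented values):
# 'a' and 'c' are distant, 'b' is intermediate.
P = {('a', 'a'): 1.0, ('a', 'b'): 0.6, ('a', 'c'): 0.1,
     ('b', 'a'): 0.6, ('b', 'b'): 1.0, ('b', 'c'): 0.6,
     ('c', 'a'): 0.1, ('c', 'b'): 0.6, ('c', 'c'): 1.0}
classes = ['a', 'b', 'c']

def correct(seq, half_window=2):
    """Re-estimate each label from its sliding-window context."""
    out = []
    for i in range(len(seq)):
        window = seq[max(0, i - half_window): i + half_window + 1]
        out.append(max(classes, key=lambda c: sum(P[(c, s)] for s in window)))
    return out

noisy = ['a', 'a', 'c', 'a', 'a', 'b', 'b', 'b']
print(correct(noisy))
```

The isolated 'c' inside the run of 'a' labels is corrected, because 'c' has low proximity to its neighbors' classes; an ordinary mode filter would do the same here, but the proximity matrix also lets nearby classes support each other.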
Effect of thermal processing methods on the proximate composition ...
African Journals Online (AJOL)
The nutritive value of raw and thermal processed castor oil seed (Ricinus communis) was investigated using the following parameters; proximate composition, gross energy, mineral constituents and ricin content. Three thermal processing methods; toasting, boiling and soaking-and-boiling were used in the processing of the ...
Convergence Analysis of a Proximal Point Algorithm for Minimizing Differences of Functions
An, Nguyen Thai; Nam, Nguyen Mau
2015-01-01
Several optimization schemes are known for convex optimization problems. However, numerical algorithms for solving nonconvex optimization problems are still underdeveloped. Progress beyond convexity was made by considering the class of functions representable as differences of convex functions. In this paper, we introduce a generalized proximal point algorithm to minimize the difference of a nonconvex function and a convex function. We also study convergence results of this algorithm...
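A minimal DCA-style sketch in the spirit of the proximal point scheme above (a simplified relative, with an invented objective): minimize f(x) = x^4 - 2x^2 by writing it as a difference of convex functions g(x) = x^4 and h(x) = 2x^2, then repeatedly linearize h and solve the convex subproblem.

```python
# Subproblem: x_{k+1} = argmin_x g(x) - h'(x_k) * x, i.e. 4*x^3 = 4*x_k,
# which has the closed-form solution x_{k+1} = x_k ** (1/3) for x_k > 0.
x = 0.3                          # invented starting point
for _ in range(60):
    x = x ** (1.0 / 3.0)         # closed-form convex-subproblem solution
print(x)                         # approaches the critical point x = 1
```

The limit x = 1 satisfies f'(x) = 4x^3 - 4x = 0 and is a local minimizer of the nonconvex objective; a proximal term (1/(2t))*(x - x_k)^2 added to the subproblem, as in the generalized scheme, would damp the steps without changing the fixed points.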
Best Proximity Point Results in Non-Archimedean Modular Metric Space
Directory of Open Access Journals (Sweden)
Mohadeshe Paknazar
2017-04-01
In this paper, we introduce the new notion of Suzuki-type (α, β, θ, γ)-contractive mapping and investigate the existence and uniqueness of the best proximity point for such mappings in non-Archimedean modular metric space using the weak P_λ-property. Meanwhile, we present an illustrative example to emphasize the realized improvements. These obtained results extend and improve certain well-known results in the literature.
DEFF Research Database (Denmark)
Ekstrand, K R; Alloza, Alvaro Luna; Promisiero, L
2011-01-01
This study aimed to determine the reliability and accuracy of the ICDAS and radiographs in detecting and estimating the depth of proximal lesions on extracted teeth. The lesions were visible to the naked eye. Three trained examiners scored a total of 132 sound/carious proximal surfaces from 106 primary teeth and 160 sound/carious proximal surfaces from 140 permanent teeth. The selected surfaces were first scored visually, using the 7 classes in the ICDAS. They were then assessed on radiographs using a 5-point classification system. Reexaminations were conducted with both scoring systems. Teeth ... and the radiographs. The associations between the 2 detection methods were found to be moderate. In particular, the ICDAS was accurate in predicting lesion depth (histologically) confined to the enamel/outer third of the dentine versus deeper lesions. This study shows that when proximal lesions are open...
On the convergence of a linesearch based proximal-gradient method for nonconvex optimization
Bonettini, S.; Loris, I.; Porta, F.; Prato, M.; Rebegoldi, S.
2017-05-01
We consider a variable metric linesearch based proximal gradient method for the minimization of the sum of a smooth, possibly nonconvex function and a convex, possibly nonsmooth term. We prove convergence of this iterative algorithm to a critical point if the objective function satisfies the Kurdyka-Łojasiewicz property at each point of its domain, under the assumption that a limit point exists. The proposed method is applied to a wide collection of image processing problems, and our numerical tests show that the algorithm is flexible, robust and competitive when compared to recently proposed approaches for the optimization problems arising in the considered applications.
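A minimal sketch of a linesearch-based proximal gradient step, shown on a convex instance (lasso) rather than the paper's nonconvex setting: the smooth part supplies gradients, the l1 term is handled by its prox (soft-thresholding), and backtracking enforces a standard sufficient-decrease condition. All problem data and parameters are invented.

```python
import numpy as np

def soft(v, t):
    """Prox of t*||.||_1: soft-thresholding."""
    return np.sign(v) * np.clip(np.abs(v) - t, 0, None)

rng = np.random.default_rng(1)
A = rng.standard_normal((20, 5))
b = A @ np.array([1.0, 0.0, -2.0, 0.0, 0.5]) + 0.01 * rng.standard_normal(20)
lam = 0.1
f = lambda z: 0.5 * np.sum((A @ z - b) ** 2)      # smooth part

x = np.zeros(5)
for _ in range(200):
    g = A.T @ (A @ x - b)                          # gradient of smooth part
    t = 1.0
    while True:                                    # backtracking linesearch
        z = soft(x - t * g, t * lam)               # proximal gradient trial step
        if f(z) <= f(x) + g @ (z - x) + np.sum((z - x) ** 2) / (2 * t):
            break                                  # sufficient decrease holds
        t *= 0.5
    x = z
print(np.round(x, 2))
```

The linesearch removes the need to know the gradient's Lipschitz constant in advance, which is the role the (more sophisticated) linesearch plays in the method above.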
Sensitivity Analysis of the Proximal-Based Parallel Decomposition Methods
Directory of Open Access Journals (Sweden)
Feng Ma
2014-01-01
The proximal-based parallel decomposition methods were recently proposed to solve structured convex optimization problems. These algorithms are eligible for parallel computation and can be used efficiently for solving large-scale separable problems. In this paper, compared with the previous theoretical results, we show that the range of the involved parameters can be enlarged while convergence can still be established. Preliminary numerical tests on the stable principal component pursuit problem testify to the advantages of the enlargement.
Distributed Interior-point Method for Loosely Coupled Problems
DEFF Research Database (Denmark)
Pakazad, Sina Khoshfetrat; Hansson, Anders; Andersen, Martin Skovgaard
2014-01-01
In this paper, we put forth distributed algorithms for solving loosely coupled unconstrained and constrained optimization problems. Such problems are usually solved using algorithms that are based on a combination of decomposition and first order methods. These algorithms are commonly very slow and require many iterations to converge. In order to alleviate this issue, we propose algorithms that combine the Newton and interior-point methods with proximal splitting methods for solving such problems. Particularly, the algorithm for solving unconstrained loosely coupled problems is based on Newton's method and utilizes proximal splitting to distribute the computations for calculating the Newton step at each iteration. A combination of this algorithm and the interior-point method is then used to introduce a distributed algorithm for solving constrained loosely coupled problems. We also provide...
Nonlinear Rescaling and Proximal-Like Methods in Convex Optimization
Polyak, Roman; Teboulle, Marc
1997-01-01
The nonlinear rescaling principle (NRP) consists of transforming the objective function and/or the constraints of a given constrained optimization problem into another problem which is equivalent to the original one in the sense that their optimal sets of solutions coincide. A nonlinear transformation parameterized by a positive scalar parameter and based on a smooth scaling function is used to transform the constraints. The methods based on NRP consist of sequential unconstrained minimization of the classical Lagrangian for the equivalent problem, followed by an explicit formula updating the Lagrange multipliers. We first show that the NRP leads naturally to proximal methods with an entropy-like kernel, which is defined by the conjugate of the scaling function, and establish that the two methods are dually equivalent for convex constrained minimization problems. We then study the convergence properties of the nonlinear rescaling algorithm and the corresponding entropy-like proximal methods for convex constrained optimization problems. Special cases of the nonlinear rescaling algorithm are presented. In particular a new class of exponential penalty-modified barrier functions methods is introduced.
A method for sex estimation using the proximal femur.
Curate, Francisco; Coelho, João; Gonçalves, David; Coelho, Catarina; Ferreira, Maria Teresa; Navega, David; Cunha, Eugénia
2016-09-01
The assessment of sex is crucial to the establishment of a biological profile of an unidentified skeletal individual. The best methods currently available for the sexual diagnosis of human skeletal remains generally rely on the presence of well-preserved pelvic bones, which is not always the case. Postcranial elements, including the femur, have been used to accurately estimate sex in skeletal remains from forensic and bioarcheological settings. In this study, we present an approach to estimate sex using two measurements (femoral neck width [FNW] and femoral neck axis length [FNAL]) of the proximal femur. FNW and FNAL were obtained in a training sample (114 females and 138 males) from the Luís Lopes Collection (National History Museum of Lisbon). Logistic regression and the C4.5 algorithm were used to develop models to predict sex in unknown individuals. Proposed cross-validated models correctly predicted sex in 82.5-85.7% of the cases. The models were also evaluated in a test sample (96 females and 96 males) from the Coimbra Identified Skeletal Collection (University of Coimbra), resulting in a sex allocation accuracy of 80.1-86.2%. This study supports the relative value of the proximal femur to estimate sex in skeletal remains, especially when other exceedingly dimorphic skeletal elements are not accessible for analysis.
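A toy sketch of the statistical idea: logistic regression on two measurements. The data below are synthetic stand-ins with invented means and spreads, not the reference-collection measurements used by the authors, and the fit is plain batch gradient descent rather than the study's procedure.

```python
import numpy as np

# Synthetic stand-ins for (FNW, FNAL) measurements of two groups.
rng = np.random.default_rng(0)
X = np.vstack([np.column_stack([rng.normal(30, 2, 200), rng.normal(85, 4, 200)]),
               np.column_stack([rng.normal(35, 2, 200), rng.normal(95, 4, 200)])])
y = np.array([0] * 200 + [1] * 200)              # 0 = female, 1 = male (toy labels)
X = (X - X.mean(axis=0)) / X.std(axis=0)         # standardize the measurements

w, b = np.zeros(2), 0.0
for _ in range(500):                             # batch gradient descent
    p = 1 / (1 + np.exp(-(X @ w + b)))           # predicted probabilities
    w -= 0.5 * X.T @ (p - y) / len(y)
    b -= 0.5 * (p - y).mean()
acc = float((((X @ w + b) > 0) == (y == 1)).mean())
print(round(acc, 2))
```

With group separations of roughly two standard deviations per measurement, the toy model classifies most samples correctly, in the same accuracy range the abstract reports for the real collections.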
Dental flossing as a diagnostic method for proximal gingivitis: a validation study.
Grellmann, Alessandra Pascotini; Kantorski, Karla Zanini; Ardenghi, Thiago Machado; Moreira, Carlos Heitor Cunha; Danesi, Cristiane Cademartori; Zanatta, Fabricio Batistin
2016-05-20
This study evaluated the clinical diagnosis of proximal gingivitis by comparing two methods: dental flossing and the gingival bleeding index (GBI). One hundred subjects (aged at least 18 years, with 15% of proximal sites positive for GBI, and without proximal attachment loss) were randomized into five evaluation protocols. Each protocol consisted of two assessments with a 10-minute interval between them: first GBI/second floss, first floss/second GBI, first GBI/second GBI, first tooth floss/second floss, and first gum floss/second floss. The dental floss was slid against the tooth surface (TF) or the gingival tissue (GF). The evaluated proximal sites had to present teeth with an established point of contact and probing depth ≤ 3 mm. One trained and calibrated examiner performed all the assessments. The mean percentages of agreement and disagreement were calculated for the sites with gingival bleeding in both evaluation methods (GBI and flossing). The primary outcome was the percentage of disagreement between the assessments in the different protocols. The data were analyzed by one-way ANOVA, McNemar, chi-square and Tukey's post hoc tests, with a 5% significance level. When gingivitis was absent in the first assessment (negative GBI), bleeding was detected in the second assessment by TF and GF in 41.7% (p ...) of sites. When there was no gingivitis in the second assessment (negative GBI), TF and GF detected bleeding in the first assessment in 38.9% (p = 0.004) and 58.3% (p ...) of sites. Flossing thus detected more gingivitis than GBI.
Directory of Open Access Journals (Sweden)
Adriela Azevedo Souza Mariath
2007-12-01
The purpose of this study was to validate the elastomeric impression after temporary tooth separation as a method of cavitation detection in proximal caries lesions in primary molars with outer half dentin radiolucency. Fifty-one children (4-10 years old) presenting radiolucency in the outer half of the dentin at the proximal surfaces of primary molars, and proximal anatomic contact with the adjacent tooth (without restoration or cavitated caries lesion), were enrolled in the study. Temporary tooth separation was performed with an orthodontic rubber ring placed around the contact point for 2-3 days. Thereafter, an impression of the proximal surfaces was made. The elastomeric impressions were classified as "non-cavitated" or "cavitated" surfaces. Visual inspection after tooth separation was considered the gold standard. Examiner reliability of visual inspection after tooth separation was determined (kappa = 0.92). Impression examination was repeated every 5 participants to evaluate the reproducibility of the method. The frequency of cavitated lesions was 65%, and 67% of those were inactive. Sensitivity, specificity, and positive and negative predictive values were 0.88 (95%CI 0.73-0.95), 0.89 (95%CI 0.67-0.97), 0.94 (95%CI 0.79-0.98) and 0.80 (95%CI 0.58-0.92), respectively. Impression examination showed total agreement regarding cavitation. The evaluation of an elastomeric impression after tooth separation is a useful clinical resource for cavitation detection for clinicians and researchers when visual inspection is doubtful.
Directory of Open Access Journals (Sweden)
Somayya Komal
2016-10-01
In this article, we introduce best proximity point theorems for $\mathcal{Z}$-contractions and Suzuki type $\mathcal{Z}$-contractions in the setting of complete metric spaces. Also, with the help of the weak $P$-property and the $P$-property, we prove existence and uniqueness of best proximity points. A simple example shows the validity of our results. Our results extend and unify many existing results in the literature. Moreover, an application to a fractional order functional differential equation is discussed.
Effect of cooking method on proximate and mineral composition of ...
African Journals Online (AJOL)
This study investigated the effect(s) of cooking fresh fish (Lake Malawi tilapia) by boiling, roasting, pan frying and using a locally made fireless cooker on its proximate (protein, fat, ash and moisture) and mineral (calcium, magnesium, zinc, iron and phosphorus) composition. The highest and lowest values for crude protein were ...
Effect of processing method on the proximate ...
African Journals Online (AJOL)
Dr. Tilahun
2013-04-02
Apr 2, 2013 ... Although taro is widely grown in Ethiopia, it is an underutilized crop and little is known about its proximate and micro-element composition and the antinutritional factors of the raw, boiled and fermented products. Boiling and fermentation processing techniques are widely used in the country, especially ...
Jayaseelan, Dhinu J; Moats, Nick; Ricardo, Christopher R
2014-03-01
Case report. Proximal hamstring tendinopathy is a relatively uncommon overuse injury seen in runners. In contrast to the significant amount of literature guiding the evaluation and treatment of hamstring strains, there is little literature about the physical therapy management of proximal hamstring tendinopathy, other than the general recommendations to increase strength and flexibility. Two runners were treated in physical therapy for proximal hamstring tendinopathy. Each presented with buttock pain with running and sitting, as well as tenderness to palpation at the ischial tuberosity. Each patient was prescribed a specific exercise program focusing on eccentric loading of the hamstrings and lumbopelvic stabilization exercises. Trigger point dry needling was also used with both runners to facilitate improved joint motion and to decrease pain. Both patients were treated in 8 to 9 visits over 8 to 10 weeks. Clinically significant improvements were seen in pain, tenderness, and function in each case. Each patient returned to running and sitting without symptoms. Proximal hamstring tendinopathy can be difficult to treat. In these 2 runners, eccentric loading of the hamstrings, lumbopelvic stabilization exercises, and trigger point dry needling provided short- and long-term pain reduction and functional benefits. Further research is needed to determine the effectiveness of this cluster of interventions for this condition. Level of evidence: therapy, level 4.
Ekstrand, K R; Luna, L E; Promisiero, L; Cortes, A; Cuevas, S; Reyes, J F; Torres, C E; Martignon, S
2011-01-01
This study aimed to determine the reliability and accuracy of the ICDAS and radiographs in detecting and estimating the depth of proximal lesions on extracted teeth. The lesions were visible to the naked eye. Three trained examiners scored a total of 132 sound/carious proximal surfaces from 106 primary teeth and 160 sound/carious proximal surfaces from 140 permanent teeth. The selected surfaces were first scored visually, using the 7 classes in the ICDAS. They were then assessed on radiographs using a 5-point classification system. Reexaminations were conducted with both scoring systems. Teeth were then sectioned and the selected surfaces histologically classified using a stereomicroscope (×5). Intra-examiner reproducibility values (weighted kappa statistics) for the ICDAS for both primary and permanent teeth were >0.9, and for the radiographs between 0.6 and 0.8. Inter-examiner reproducibility values for the ICDAS were >0.85, for the radiographs >0.6. For both primary and permanent teeth, the accuracy of each examiner (Spearman's correlation coefficient) for the ICDAS was ≥0.85, and for the radiographs ≥0.45. Corresponding data were achieved when using pooled data from the 3 examiners for both the ICDAS and the radiographs. The associations between the 2 detection methods were measured to be moderate. In particular, the ICDAS was accurate in predicting lesion depth (histologically) confined to the enamel/outer third of the dentine versus deeper lesions. This study shows that when proximal lesions are open for inspection, the ICDAS is a more reliable and accurate method than the radiograph for detecting and estimating the depth of the lesion in both primary and permanent teeth. Copyright © 2011 S. Karger AG, Basel.
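The weighted kappa behind the reproducibility figures penalizes disagreements by how far apart the two scores are. A minimal sketch with linear disagreement weights (the confusion matrices below are illustrative, not the study's data):

```python
def weighted_kappa(m):
    """Linearly weighted kappa for a square confusion matrix m
    (rows: rating 1, columns: rating 2); disagreement weight |i - j|."""
    k = len(m)
    n = sum(sum(row) for row in m)
    row = [sum(m[i]) for i in range(k)]
    col = [sum(m[i][j] for i in range(k)) for j in range(k)]
    # Observed and chance-expected weighted disagreement (as proportions).
    obs = sum(abs(i - j) * m[i][j] for i in range(k) for j in range(k)) / n
    exp = sum(abs(i - j) * row[i] * col[j]
              for i in range(k) for j in range(k)) / n ** 2
    return 1.0 - obs / exp

# Perfect agreement gives kappa = 1; a partially agreeing table gives less.
k_perfect = weighted_kappa([[10, 0], [0, 20]])
k_partial = weighted_kappa([[10, 2], [3, 15]])
```

For two categories, the linear weighting reduces to ordinary (unweighted) Cohen's kappa.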
Parametric methods for spatial point processes
DEFF Research Database (Denmark)
Møller, Jesper
(This text is submitted for the volume ‘A Handbook of Spatial Statistics' edited by A.E. Gelfand, P. Diggle, M. Fuentes, and P. Guttorp, to be published by Chapman and Hall/CRC Press, and planned to appear as Chapter 4.4 with the title ‘Parametric methods'.) 1 Introduction This chapter considers inference procedures for parametric spatial point process models. The widespread use of sensible but ad hoc methods based on summary statistics of the kind studied in Chapter 4.3 has over the last two decades been supplemented by likelihood-based methods for parametric spatial point process models. The increasing development of such likelihood-based methods, whether frequentist or Bayesian, has led to more objective and efficient statistical procedures. When checking a fitted parametric point process model, summary statistics and residual analysis (Chapter 4.5) play an important role in combination...
Method to measure tone of axial and proximal muscle.
Gurfinkel, Victor S; Cacciatore, Timothy W; Cordo, Paul J; Horak, Fay B
2011-12-14
The control of tonic muscular activity remains poorly understood. While abnormal tone is commonly assessed clinically by measuring the passive resistance of relaxed limbs, no systems are available to study tonic muscle control in a natural, active state of antigravity support. We have developed a device (Twister) to study tonic regulation of axial and proximal muscles during active postural maintenance (i.e. postural tone). Twister rotates axial body regions relative to each other about the vertical axis during stance, so as to twist the neck, trunk or hip regions. This twisting imposes length changes on axial muscles without changing the body's relationship to gravity. Because Twister does not provide postural support, tone must be regulated to counteract gravitational torques. We quantify this tonic regulation by the resistive torque to twisting, which reflects the state of all muscles undergoing length changes, as well as by electromyography of relevant muscles. Because tone is characterized by long-lasting low-level muscle activity, tonic control is studied with slow movements that produce "tonic" changes in muscle length, without evoking fast "phasic" responses. Twister can be reconfigured to study various aspects of muscle tone, such as co-contraction, tonic modulation to postural changes, tonic interactions across body segments, as well as perceptual thresholds to slow axial rotation. Twister can also be used to provide a quantitative measurement of the effects of disease on axial and proximal postural tone and assess the efficacy of intervention.
Device and method for determining freezing points
Mathiprakasam, Balakrishnan (Inventor)
1986-01-01
A freezing point method and device (10) are disclosed. The method and device pertain to an inflection point technique for determining the freezing points of mixtures. In both the method and device (10), the mixture is cooled to a point below its anticipated freezing point and then warmed at a substantially linear rate. During the warming process, the rate of increase of temperature of the mixture is monitored by, for example, thermocouple (28) with the thermocouple output signal being amplified and differentiated by a differentiator (42). The rate of increase of temperature data are analyzed and a peak rate of increase of temperature is identified. In the preferred device (10) a computer (22) is utilized to analyze the rate of increase of temperature data following the warming process. Once the maximum rate of increase of temperature is identified, the corresponding temperature of the mixture is located and earmarked as being substantially equal to the freezing point of the mixture. In a preferred device (10), the computer (22), in addition to collecting the temperature and rate of change of temperature data, controls a programmable power supply (14) to provide a predetermined amount of cooling and warming current to thermoelectric modules (56).
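The inflection-point technique can be sketched numerically: warm the sample at a nominally linear rate, track dT/dt, and take the freezing point as the temperature at the peak warming rate. The trace below is synthetic (the bump in dT/dt standing in for the extra warming once melting completes); all parameter values are illustrative only:

```python
import math

# Synthetic warming trace: baseline warming rate plus a bump in dT/dt
# centered at t = 60 s (standing in for the end-of-melting inflection).
dt = 0.1
times = [i * dt for i in range(1200)]
rate = [0.05 + 0.4 * math.exp(-((t - 60.0) / 4.0) ** 2) for t in times]

temps = [-20.0]
for r in rate[:-1]:
    temps.append(temps[-1] + r * dt)          # integrate dT/dt

# Differentiate the "measured" trace and locate the peak warming rate,
# as the differentiator and computer do in the patented device.
deriv = [(temps[i + 1] - temps[i - 1]) / (2 * dt)
         for i in range(1, len(temps) - 1)]
i_peak = max(range(len(deriv)), key=deriv.__getitem__) + 1
freezing_point = temps[i_peak]                # temperature at max dT/dt
```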
Chang, Chih-Yuan; Owen, Gerry; Pease, Roger Fabian W.; Kailath, Thomas
1992-07-01
Dose correction is commonly used to compensate for the proximity effect in electron lithography. The computation of the required dose modulation is usually carried out using 'self-consistent' algorithms that work by solving a large number of simultaneous linear equations. However, there are two major drawbacks: the resulting correction is not exact, and the computation time is excessively long. A computational scheme, as shown in Figure 1, has been devised to eliminate this problem by the deconvolution of the point spread function in the pattern domain. The method is iterative, based on a steepest descent algorithm. The scheme has been successfully tested on a simple pattern with a minimum feature size of 0.5 micrometers, exposed on a MEBES tool at 10 keV in 0.2 micrometers of PMMA resist on a silicon substrate.
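The deconvolution-by-steepest-descent idea can be sketched in 1D: minimize ||K d − p||² over the dose d, where K is convolution with a symmetric point spread function. The kernel, step size and pattern below are illustrative, not the parameters of the MEBES experiment, and the physical nonnegativity constraint on dose is omitted:

```python
import numpy as np

# Illustrative symmetric proximity-effect kernel (PSF) and target exposure.
psf = np.array([0.2, 0.6, 0.2])
pattern = np.zeros(64)
pattern[20:30] = 1.0                      # desired exposure profile
pattern[40:44] = 1.0

def blur(d):
    return np.convolve(d, psf, mode="same")   # K d

# Steepest descent on 0.5*||K d - p||^2: the gradient is K^T(K d - p),
# which equals K(K d - p) because the kernel is symmetric.
dose = pattern.copy()
r0 = np.linalg.norm(blur(dose) - pattern)     # initial residual
for _ in range(300):
    dose = dose - 1.0 * blur(blur(dose) - pattern)
r_final = np.linalg.norm(blur(dose) - pattern)
```

The corrected dose overshoots inside features and dips near their edges, which is exactly the modulation proximity-effect correction is meant to produce.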
Revisiting Blasius Flow by Fixed Point Method
Directory of Open Access Journals (Sweden)
Ding Xu
2014-01-01
Full Text Available The well-known Blasius flow is governed by a third-order nonlinear ordinary differential equation with two-point boundary conditions. In particular, one of the boundary conditions is asymptotically assigned on the first derivative at infinity, which is the main challenge in handling this problem. Through introducing two transformations, not only for the independent variable but also for the function, the difficulty originating from the semi-infinite interval and the asymptotic boundary condition is overcome. The deduced nonlinear differential equation is subsequently investigated with the fixed point method, so the original complex nonlinear equation is replaced by a series of integrable linear equations. Meanwhile, in order to improve the convergence and stability of the iteration procedure, a sequence of relaxation factors is introduced in the framework of the fixed point method and determined by a steepest descent seeking algorithm in a convenient manner.
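The role of a relaxation factor can be seen on a scalar toy problem. For x = g(x) with |g'(x*)| = 1, the plain iteration stalls, while the under-relaxed iteration x ← (1−ω)x + ωg(x) converges; for g(x) = 3/x and ω = 1/2 it reduces to Heron's method for √3. This only illustrates the relaxation idea, not the paper's Blasius scheme:

```python
import math

def g(x):
    return 3.0 / x   # fixed point at sqrt(3), where g'(sqrt(3)) = -1

# The plain fixed-point iteration oscillates forever between x0 and 3/x0 ...
x_plain = 2.0
for _ in range(100):
    x_plain = g(x_plain)            # 2.0 -> 1.5 -> 2.0 -> 1.5 -> ...

# ... while relaxation with omega = 0.5 damps the oscillation and converges.
omega = 0.5
x_relaxed = 2.0
for _ in range(30):
    x_relaxed = (1 - omega) * x_relaxed + omega * g(x_relaxed)
```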
Pointing Verification Method for Spaceborne Lidars
Directory of Open Access Journals (Sweden)
Axel Amediek
2017-01-01
Full Text Available High precision acquisition of atmospheric parameters from the air or space by means of lidar requires accurate knowledge of laser pointing. Discrepancies between the assumed and actual pointing can introduce large errors due to the Doppler effect or a wrongly assumed air pressure at ground level. In this paper, a method for precisely quantifying these discrepancies for airborne and spaceborne lidar systems is presented. The method is based on the comparison of ground elevations derived from the lidar ranging data with high-resolution topography data obtained from a digital elevation model and allows for the derivation of the lateral and longitudinal deviation of the laser beam propagation direction. The applicability of the technique is demonstrated by using experimental data from an airborne lidar system, confirming that geo-referencing of the lidar ground spot trace with an uncertainty of less than 10 m with respect to the used digital elevation model (DEM can be obtained.
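The core of such a verification is a shift search: slide the lidar-derived elevation profile along the DEM profile and pick the offset that minimizes the misfit. A 1D along-track sketch with synthetic terrain (all values — terrain shape, 60 m offset, noise level — are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic along-track DEM: smooth terrain sampled every 10 m.
x = np.arange(0.0, 5000.0, 10.0)
dem = 300.0 + 40.0 * np.sin(x / 400.0) + 15.0 * np.sin(x / 130.0)

# Lidar ground returns: the same terrain seen with a 60 m along-track
# pointing offset plus ranging noise.
true_offset = 60.0
lidar = np.interp(x + true_offset, x, dem) + rng.normal(0.0, 0.5, x.size)

# Grid search over candidate offsets: minimize RMS elevation misfit.
candidates = np.arange(-200.0, 201.0, 5.0)
rms = [np.sqrt(np.mean((np.interp(x + c, x, dem) - lidar) ** 2))
       for c in candidates]
est_offset = candidates[int(np.argmin(rms))]
```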
Yang, S.-X.; Fotso, H.; Su, S.-Q.; Galanakis, D.; Khatami, E.; She, J.-H.; Moreno, J.; Zaanen, J.; Jarrell, M.
2011-01-01
We use the dynamical cluster approximation to understand the proximity of the superconducting dome to the quantum critical point in the two-dimensional Hubbard model. In a BCS formalism, Tc may be enhanced through an increase in the d-wave pairing interaction (Vd) or the bare pairing susceptibility (χ0d). At optimal doping, where Vd is revealed to be featureless, we find a power-law behavior of χ0d(ω=0), replacing the BCS log, and strongly enhanced Tc. We suggest experiments to verify our predictions.
Effect of different heat processing methods on the proximate ...
African Journals Online (AJOL)
Plants (legumes) are important sources of dietary protein for both human and animals, but the presence of antinutritive factors affect the nutritional quality of the legumes. Unless these factors are destroyed by processing methods, they can exert adverse physiological effects when ingested by animals. To improve the ...
Method Points: towards a metric for method complexity
Directory of Open Access Journals (Sweden)
Graham McLeod
1998-11-01
Full Text Available A metric for method complexity is proposed as an aid to choosing between competing methods, as well as in validating the effects of method integration or the products of method engineering work. It is based upon a generic method representation model previously developed by the author and an adaptation of concepts used in the popular Function Point metric for system size. The proposed technique is illustrated by comparing two popular I.E. deliverables with counterparts in the object oriented Unified Modeling Language (UML). The paper recommends ways to improve the practical adoption of new methods.
A method for designing plates in treatments of proximal humeral fracture and distal radial fracture
Directory of Open Access Journals (Sweden)
Lin Wang
2016-11-01
Full Text Available The purpose of this paper was to quickly design fixation plates for fractured proximal humerus and distal radius according to the requirements of surgical treatment. Therefore, a new method to quickly design a cloverleaf plate appropriate for the proximal humerus and a volar plate appropriate for the distal radius is put forward. First, three-dimensional (3D) reconstruction models of the fractured proximal humerus and distal radius were generated based on deforming mean parametric models of the proximal humerus and distal radius, respectively. Second, based on regions-of-interest marked on the 3D reconstruction models of the proximal humerus and distal radius, abutted surfaces of the cloverleaf plate and volar plate were established, respectively. Then, a parametric abutted surface was established after setting rational parameters for the surface of the cloverleaf plate; the parametric abutted surface of the volar plate was established using the same method. Finally, the parametric cloverleaf plate and volar plate are generated by thickening their respective parametric abutted surfaces. The parametric plates, acting as templates, accelerate and simplify the design process and therefore allow users to construct plates easily by editing valid parameters. Groups of cloverleaf plates and volar plates with different sizes were generated quickly, showing that the proposed method is feasible and effective.
Existing bridge evaluation using deficiency point method
Directory of Open Access Journals (Sweden)
Vičan Josef
2016-01-01
Full Text Available In the transforming EU countries, transportation infrastructure has a prominent position in advancing industry and society. Recent developments show that attention should be moved from the design of new structures towards the repair and reconstruction of existing ones to ensure and increase their satisfactory structural reliability and durability. The problem is very urgent because many construction projects, especially in transport infrastructure, in most European countries are more than 50-60 years old and require rehabilitations based on objective evaluations. Therefore, the paper presents a methodology of existing bridge evaluation based on a reliability concept using the Deficiency Point Method. The methodology was prepared from the viewpoint of determining the priority order for existing bridge rehabilitation.
Computation of the heat and entropy of adsorption in proximity of inflection points
Poursaeidesfahani, A.; Torres-Knoop, A.; Rigutto, M.; Nair, N.; Dubbeldam, D.; Vlugt, T.J.H.
2016-01-01
The adsorption of different heptane isomers in MFI- and MEL-type zeolites is studied to investigate the performance of molecular simulation for computing the heat and entropy of adsorption as a function of loading. It is shown that none of the conventional methods are capable of computing the heat
Hurdebise, Quentin; Heinesch, Bernard; De Ligne, Anne; Vincke, Caroline; Aubinet, Marc
2017-04-01
Understanding if and how the spatial and temporal variability of the surrounding environment affects turbulence is essential for long-term eddy covariance measurements above growing and heterogeneous ecosystems. It requires characterizing the surrounding environment. One way to achieve this is to analyse the canopy aerodynamic distance, which is the difference between measurement height (z) and displacement height (d). In this study, twenty years of eddy covariance measurements from the Vielsalm Terrestrial Observatory, a site located in a mixed temperate forest, were used. Canopy aerodynamic distance (z-d) estimates were obtained using two micrometeorological methods: the first, which as far as we know is original, was based on analysing sensible heat cospectra; the second was derived from the wind speed profile equation. Canopy height estimates based on inventories were used to validate both methods. The micrometeorological methods allowed the z-d variations due to changes in canopy or measurement height to be detected. In addition, the results obtained using the two methods were well correlated, spatially and temporally, with the z-d derived from canopy height measurements. The micrometeorological approaches used could therefore be a promising tool for investigating z-d variability at a high directional and temporal resolution. Questions remain, however, particularly with regard to the variability observed that cannot be explained by canopy or measurement height variation. Forest management practices and the non-fulfilment of similarity relationships were suspected to be the main explanatory factors.
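The wind-profile route to z−d follows from inverting the neutral logarithmic profile u(z) = (u*/κ) ln((z−d)/z0): given the friction velocity u* from the eddy covariance system and the mean wind speed u at the sensor, z−d = z0·exp(κu/u*). A round-trip sketch with invented values (z0, u* and neutral stratification are assumptions, not the site's actual parameters):

```python
import math

KAPPA = 0.4          # von Karman constant

def wind_speed(z_minus_d, u_star, z0):
    """Neutral log wind profile: u = (u*/kappa) * ln((z - d) / z0)."""
    return (u_star / KAPPA) * math.log(z_minus_d / z0)

def canopy_aerodynamic_distance(u, u_star, z0):
    """Invert the log profile for z - d (neutral stratification assumed)."""
    return z0 * math.exp(KAPPA * u / u_star)

# Round trip with illustrative values: z - d = 25 m, z0 = 2 m, u* = 0.6 m/s.
u = wind_speed(25.0, 0.6, 2.0)
zd = canopy_aerodynamic_distance(u, 0.6, 2.0)
```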
Evaluation of the wheel-point and step-point methods of veld ...
African Journals Online (AJOL)
The step-point method yielded results on percentage veld composition and on veld composition score which did not differ in precision or in absolute amount from those obtained using the wheel-point apparatus. Adoption of the step point method in preference to the wheel-point method saves in equipment and manpower, ...
Proximity-dependent labeling methods for proteomic profiling in living cells.
Chen, Chiao-Lin; Perrimon, Norbert
2017-07-01
Characterizing the proteome composition of organelles and subcellular regions of living cells can facilitate the understanding of cellular organization as well as protein interactome networks. Proximity labeling-based methods coupled with mass spectrometry (MS) offer a high-throughput approach for systematic analysis of spatially restricted proteomes. Proximity labeling utilizes enzymes that generate reactive radicals to covalently tag neighboring proteins with biotin. The biotinylated endogenous proteins can then be isolated for further analysis by MS. To analyze protein-protein interactions or identify components that localize to discrete subcellular compartments, spatial expression is achieved by fusing the enzyme to specific proteins or signal peptides that target to particular subcellular regions. Although these technologies have only been introduced recently, they have already provided deep insights into a wide range of biological processes. Here, we describe and compare current methods of proximity labeling as well as their applications. As each method has its own unique features, the goal of this review is to describe how different proximity labeling methods can be used to answer different biological questions. WIREs Dev Biol 2017, 6:e272. doi: 10.1002/wdev.272 For further resources related to this article, please visit the WIREs website. © 2017 Wiley Periodicals, Inc.
Energy Technology Data Exchange (ETDEWEB)
Baker, Lucas R.; Pierzynski, Gary M.; Hettiarachchi, Ganga M.; Scheckel, Kirk G.; Newville, Matthew
2012-01-01
The use of P to immobilize Pb in contaminated soils has been well documented. However, the influence of P on Zn speciation in soils has not been extensively examined, and these two metals often occur as co-contaminants. We hypothesized that additions of P to a Pb/Zn-contaminated soil would induce Zn phosphate mineral formation and fluid P sources would be more effective than granular P amendments. A combination of different synchrotron-based techniques, namely, spatially resolved micro-X-ray fluorescence (μ-XRF), micro-extended X-ray absorption fine structure spectroscopy (μ-EXAFS), and micro-X-ray diffraction (μ-XRD), were used to speciate Zn at two incubation times in the proximity of application points (0 to 4 mm) for fluid and granular P amendments in a Pb/Zn smelter-contaminated soil. Phosphate rock (PR), triple super phosphate (TSP), monoammonium phosphate (MAP), and fluid ammonium polyphosphate induced Zn phosphate formation. Ammonium polyphosphate was more effective at greater distances (up to 3.7 mm) from the point of P application. Phosphoric acid increased the presence of soluble Zn species because of increased acidity. Soluble Zn has implications with respect to Zn bioavailability, which may negatively impact vegetation and other sensitive organisms. Although additions of P immobilize Pb, this practice needs close monitoring due to potential increases in Zn solubility in a Pb/Zn smelter-contaminated soil.
Che, Yonglu; Khavari, Paul A
2017-12-01
Interactions between proteins are essential for fundamental cellular processes, and the diversity of such interactions enables the vast variety of functions essential for life. A persistent goal in biological research is to develop assays that can faithfully capture different types of protein interactions to allow their study. A major step forward in this direction came with a family of methods that delineates spatial proximity of proteins as an indirect measure of protein-protein interaction. A variety of enzyme- and DNA ligation-based methods measure protein co-localization in space, capturing novel interactions that were previously too transient or low affinity to be identified. Here we review some of the methods that have been successfully used to measure spatially proximal protein-protein interactions. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
An Approximate Proximal Bundle Method to Minimize a Class of Maximum Eigenvalue Functions
Directory of Open Access Journals (Sweden)
Wei Wang
2014-01-01
Full Text Available We present an approximate nonsmooth algorithm to solve a minimization problem in which the objective function is the sum of a maximum eigenvalue function of matrices and a convex function. The essential idea for solving the optimization problem in this paper is similar to that of the proximal bundle method, but the difference is that we choose approximate subgradients and function values to construct an approximate cutting-plane model for the problem mentioned above. An important advantage of the approximate cutting-plane model for the objective function is that it is more stable than the cutting-plane model. In addition, an approximate proximal bundle algorithm is given. Furthermore, the sequences generated by the algorithm converge to the optimal solution of the original problem.
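The cutting-plane machinery behind a proximal bundle method can be sketched in 1D on a simple nonsmooth convex function: keep linearizations f(xi) + gi(x − xi) built from (here exact, not approximate) subgradients, and minimize their maximum plus a proximal term around the current center. The example minimizes f(x) = |x| and solves the prox subproblem by grid search, so it is a toy illustration only, not the paper's algorithm:

```python
def f(x):
    return abs(x)

def subgrad(x):          # a subgradient of |x|
    return 1.0 if x > 0 else (-1.0 if x < 0 else 0.0)

mu = 1.0                 # proximal parameter
center = 1.5
planes = [(center, f(center), subgrad(center))]   # bundle of cutting planes
grid = [-2.0 + i * 0.001 for i in range(4001)]

for _ in range(20):
    def model(x):        # piecewise-linear lower model of f
        return max(fi + gi * (x - xi) for xi, fi, gi in planes)
    # Prox subproblem: min model(x) + (mu/2)(x - center)^2, by grid search.
    cand = min(grid, key=lambda x: model(x) + 0.5 * mu * (x - center) ** 2)
    planes.append((cand, f(cand), subgrad(cand)))  # enrich the bundle
    if f(cand) < f(center):                        # serious step
        center = cand
```

After a few serious and null steps the model equals |x| near the minimizer and the center settles at 0.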
Material-Point Method Analysis of Bending in Elastic Beams
DEFF Research Database (Denmark)
Andersen, Søren Mikkel; Andersen, Lars
2007-01-01
The aim of this paper is to test different kinds of spatial interpolation for the material-point method.
Interior-Point Methods for Linear Programming: A Review
Singh, J. N.; Singh, D.
2002-01-01
The paper reviews some recent advances in interior-point methods for linear programming and indicates directions in which future progress can be made. Most of the interior-point methods belong to any of three categories: affine-scaling methods, potential reduction methods and central path methods. These methods are discussed together with…
Slope failure analysis using the random material point method
Wang, B.; Hicks, M.A.; Vardon, P.J.
2016-01-01
The random material point method (RMPM), which combines random field theory and the material point method (MPM), is proposed. It differs from the random finite-element method (RFEM) by assigning random field (cell) values to material points that are free to move relative to the computational grid.
In vivo Biotinylation Based Method for the Study of Protein-Protein Proximity in Eukaryotic Cells
Directory of Open Access Journals (Sweden)
Arman Kulyyassov
2014-01-01
Full Text Available Introduction: The spatiotemporal order plays an important role in cell functioning and is affected in many pathologies such as cancer and neurodegenerative diseases. One of the ultimate goals of molecular biology is reconstruction of the spatiotemporal structure of a living cell at the molecular level. This task includes determination of proximities between different molecular components in the cell and monitoring their time- and physiological state-dependent changes. In many cases, proximity between macromolecules arises due to their interactions; however, the contribution of dynamic self-organization in generation of spatiotemporal order is emerging as another viable possibility. Specifically, in proteomics, this implies that the detection of protein-protein proximity is a more general task than gaining information about physical interactions between proteins, as it could detail aspects of spatial order in vivo that are challenging to reconstitute in binding experiments in vitro. Methods: In this work, we have developed a method of monitoring protein-protein proximity in vivo. For this purpose, the biotin ligase BirA was fused to one of the interaction partners, whereas the biotin acceptor peptide (BAP) was modified to make the detection of its biotinylation possible by mass spectrometry. Results: Using several experimental systems, we showed that the biotinylation is interaction dependent. In addition, we demonstrated that BAP domains with different primary amino acid structures and thus with different molecular weights can be used in the same experiment, providing the possibility of multiplexing. Alternatively to the changes in primary amino acid structure, the stable isotope format can also be used, providing another way to perform multiplexing experiments. Finally, we also demonstrated that our system could help to overcome another limitation of current methodologies to detect protein-protein proximity. For example, one can follow the state of a protein of interest at a defined
Khoo, B C C; Beck, T J; Brown, K; Price, R I
2013-09-01
DXA-derived bone structural geometry has been reported extensively but lacks an accuracy standard. In this study, we describe a novel anthropometric structural geometry phantom that simulates the proximal femur for use in assessing the accuracy of geometry measurements by DXA or other X-ray methods. The phantom consists of seven different interchangeable neck modules with geometries that span the range of dimensions in an adult human proximal femur, including those representing osteoporosis. Ten repeated hip scans of each neck module using two current DXA scanner models were performed without repositioning. After scanner-specific calibration, hip structure analysis was used to derive structural geometry. Scanner performance was similar for the two manufacturers. DXA-derived HSA geometric measurements were highly correlated with values derived directly from phantom geometry and position; R² between DXA and phantom measures was greater than 94% for all parameters, while precision error ranged between 0.3 and 3.9%. Despite the high R² there were some systematic geometry errors for both scanners, which were small for outer diameter but increased with the complexity of the geometric parameter (e.g., buckling ratio). In summary, the anthropometric phantom and its fabrication concept were shown to be appropriate for evaluating proximal femoral structural geometry in two different DXA systems.
Directory of Open Access Journals (Sweden)
Bin Gao
2016-08-01
Full Text Available The single image super-resolution (SISR) problem represents a class of efficient models appealing in many computer vision applications. In this paper, we focus on designing a proximal symmetric alternating direction method of multipliers (SADMM) for the SISR problem. By fully exploiting the special structure, the method enjoys the advantage of being easily implementable: the quadratic term of the subproblems in the SISR problem is linearized, and with this linearization the resulting subproblems easily achieve closed-form solutions. A global convergence result is established for the proposed method. Preliminary numerical results demonstrate that the proposed method is efficient, saving nearly 40% of the computing time compared with several state-of-the-art methods.
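The linearization trick described above — replacing the quadratic data term by its gradient at the current point plus a proximal term so each subproblem has a closed form — can be illustrated on a tiny ℓ1-regularized least-squares step, where the closed-form solution is soft-thresholding. This is a generic sketch of the linearized-subproblem idea, not the paper's SADMM for SISR:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(30, 10))
x_true = np.zeros(10)
x_true[[2, 7]] = [1.5, -2.0]          # sparse ground truth (illustrative)
b = A @ x_true

lam = 0.1
tau = 1.0 / np.linalg.norm(A.T @ A, 2)   # step bounded by 1 / ||A^T A||

def soft(v, t):                           # closed-form prox of t*||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def obj(x):
    return 0.5 * np.sum((A @ x - b) ** 2) + lam * np.abs(x).sum()

# Linearized step: the quadratic term 0.5*||Ax - b||^2 enters only through
# its gradient, so each subproblem is solved exactly by soft-thresholding.
x = np.zeros(10)
obj0 = obj(x)
for _ in range(500):
    x = soft(x - tau * A.T @ (A @ x - b), tau * lam)
obj_final = obj(x)
```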
Distributed Solutions for Loosely Coupled Feasibility Problems Using Proximal Splitting Methods
DEFF Research Database (Denmark)
Pakazad, Sina Khoshfetrat; Andersen, Martin Skovgaard; Hansson, Anders
2014-01-01
In this paper, we consider convex feasibility problems (CFPs) where the underlying sets are loosely coupled, and we propose several algorithms to solve such problems in a distributed manner. These algorithms are obtained by applying proximal splitting methods to convex minimization reformulations of CFPs. We also put forth distributed convergence tests which enable us to establish feasibility or infeasibility of the problem distributedly, and we provide convergence rate results. Under the assumption that the problem is feasible and boundedly linearly regular, these convergence results are given in terms of the distance of the iterates to the feasible set, and are similar to those of classical projection methods. In case the feasibility problem is infeasible, we provide convergence rate results that concern the convergence of certain error bounds.
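The classical projection methods mentioned as a benchmark can be sketched with von Neumann alternating projections onto two convex sets; the sets below (the unit disc and a horizontal line in the plane) are arbitrary illustrations, not the loosely coupled sets of the paper:

```python
import math

# Two convex sets in the plane: the unit disc and the line y = 0.5.
def proj_disc(p):
    x, y = p
    n = math.hypot(x, y)
    return (x / n, y / n) if n > 1.0 else (x, y)

def proj_line(p):
    return (p[0], 0.5)

# Alternating projections converge to a point of the intersection
# (here the chord of the disc at height 0.5).
p = (5.0, -3.0)
for _ in range(100):
    p = proj_disc(proj_line(p))

in_disc = math.hypot(*p) <= 1.0 + 1e-9
on_line = abs(p[1] - 0.5) <= 1e-6
```

When the intersection is nonempty the iterates converge to it; the distance-to-set convergence rates in the abstract quantify exactly this behaviour for the distributed variants.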
Directory of Open Access Journals (Sweden)
Mirela Marinova-Takorova
2014-06-01
Aim: The aim of the presented study is to compare the effectiveness of diagnosis with a dental microscope, laser fluorescence (DIAGNOcam) and X-ray examination in proximal caries diagnosis. Material and methods: Thirty-eight adult patients were examined. They were first examined with a dental mirror and a probe under 6.4× magnification. After that, a diagnosis with DIAGNOcam was performed, and bitewing X-ray images were taken. The data from the three diagnostic methods were compared using the SPSS 16 package for Windows. The lesions that were diagnosed as involving dentin were then excavated, which served as confirmation of the diagnosis. Results: The results of the study showed that dentinal lesions were detected with a high degree of correlation by all three diagnostic methods. The visual examination seriously underestimated lesions involving only enamel; in these cases there was good correlation between the laser fluorescence and X-ray data. Conclusions: Based on the conducted study, we conclude that the diagnosis of proximal caries with DIAGNOcam is equivalent to X-ray examination, both being more accurate in cases with early lesions compared to visual diagnosis.
Biased gradient squared descent saddle point finding method.
Duncan, Juliana; Wu, Qiliang; Promislow, Keith; Henkelman, Graeme
2014-05-21
The harmonic approximation to transition state theory simplifies the problem of calculating a chemical reaction rate to identifying relevant low energy saddle points in a chemical system. Here, we present a saddle point finding method which does not require knowledge of specific product states. In the method, the potential energy landscape is transformed into the square of the gradient, which converts all critical points of the original potential energy surface into global minima. A biasing term is added to the gradient squared landscape to stabilize the low energy saddle points near a minimum of interest, and destabilize other critical points. We demonstrate that this method is competitive with the dimer min-mode following method in terms of the number of force evaluations required to find a set of low-energy saddle points around a reactant minimum.
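The landscape transformation at the heart of the method can be sketched in one dimension. The double-well potential below is an illustrative stand-in, and the biasing term the paper uses to stabilize low-energy saddles near a minimum of interest is omitted for brevity:

```python
# Toy 1-D sketch of gradient squared descent (illustrative only; the paper's
# method adds a biasing term and works on high-dimensional chemical systems).
def V(x):                      # double-well potential: minima at x = -1, +1
    return x**4 / 4 - x**2 / 2

def dV(x):                     # gradient of V
    return x**3 - x

def W(x):                      # gradient-squared landscape: every critical
    return dV(x) ** 2          # point of V becomes a global minimum of W

def dW(x, h=1e-6):             # finite-difference gradient of W
    return (W(x + h) - W(x - h)) / (2 * h)

x = 0.5                        # start between a minimum and the barrier
for _ in range(2000):
    x -= 0.05 * dW(x)          # plain gradient descent on W

# x has converged to the barrier top x = 0, a critical point of V that
# ordinary descent on V itself would slide away from.
print(abs(x) < 1e-3)
```

Plain descent on V from x = 0.5 would fall into the minimum at x = 1; descent on the squared gradient instead lands on the barrier top, which is the kind of critical point relevant for reaction rates.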
Miu, A
2015-01-01
Fractures are a very important issue in pediatric orthopedic pathology. Although long neglected, being considered "not too serious" or "rare", and although children heal better and faster and suffer fewer sequelae than adults, children's fractures remain an important chapter of traumatology in general. Because of the rising prevalence of pediatric osteoarticular trauma, as well as new, less invasive treatment methods, this topic remains current. The paper analyzes particular cases of bone fractures that appeared after minor traumas in bones with high brittleness, localized especially in the long bones. Although such fractures of pathological bone can be seen at all levels of the human skeleton, this paper focuses on fractures located in the proximal third of the femur. A group of children admitted with this diagnosis to the Pediatric Orthopedic Department of the "M.S. Curie" Hospital, Bucharest, between 2009 and 2013 was analyzed.
Sivasubramani, S.; Ahmad, Md. Samar
2014-06-01
This paper proposes a new hybrid algorithm combining the harmony search (HS) algorithm and the interior point method (IPM) for the economic dispatch (ED) problem with valve-point effect. The ED problem with valve-point effect is modeled as a non-linear, constrained and non-convex optimization problem with several local minima. IPM is one of the best non-linear optimization methods for convex problems; since the ED problem with valve-point effect has multiple local minima, IPM alone yields only a local optimum. In order to avoid IPM getting trapped in a local optimum, it is combined with HS, an evolutionary algorithm that is good at global exploration. In the hybrid method, HS is used for global search and IPM for local search. The hybrid method has been tested on three different test systems to prove its effectiveness. Finally, the simulation results are compared with other methods reported in the literature.
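The valve-point effect is commonly modeled by adding a rectified sinusoid to the quadratic fuel cost, which is what makes the problem non-convex. The sketch below uses made-up coefficients for a single generator, not data from the paper:

```python
import math

# Single-generator fuel cost with valve-point effect (illustrative
# coefficients, not from the paper):
#   F(P) = a + b*P + c*P^2 + |e * sin(f * (Pmin - P))|
def fuel_cost(P, a=150.0, b=7.0, c=0.002, e=80.0, f=0.08, Pmin=10.0):
    return a + b * P + c * P ** 2 + abs(e * math.sin(f * (Pmin - P)))

# The rectified-sine ripple breaks convexity: the cost at the midpoint of
# two operating points can exceed the chord between them, so a pure interior
# point method can stall in a local minimum and a global explorer (harmony
# search) is hybridized with it.
print(fuel_cost(15.0) > (fuel_cost(10.0) + fuel_cost(20.0)) / 2)
```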
Liu, Xiaoqiang; Chen, Yanming; Cheng, Liang; Yao, Mengru; Deng, Shulin; Li, Manchun; Cai, Dong
2017-01-01
Filtering of airborne laser scanning (ALS) point clouds into ground and nonground points is a core postprocessing step for ALS data. A hierarchical filtering method, which has high operating efficiency and accuracy because of the combination of multiscale morphology and progressive triangulated irregular network (TIN) densification (PTD), is proposed. In the proposed method, the grid is first constructed for the ALS point clouds, and virtual seed points are set by analyzing the shape and elevation distribution of points within the grid. Then, the virtual seed points are classified as ground or nonground using the multiscale morphological method. Finally, the virtual ground seed points are utilized to generate the initial TIN, and the filter is completed by iteratively densifying the initial TIN. We used various ALS data to test the performance of the proposed method. The experimental results show that the proposed filtering method has strong applicability for a variety of landscapes and, in particular, has lower commission error than the classical PTD filtering method in urban areas.
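A minimal sketch of the morphological idea behind such filters (an illustration, not the authors' full hierarchical multiscale method with virtual seed points and TIN densification): a grey-scale opening of gridded minimum elevations suppresses raised objects narrower than the window, and points standing well above the opened surface are flagged as nonground.

```python
# 1-D morphological ground filtering sketch (hedged: illustrative only).
def erode(z, w):
    r = w // 2
    return [min(z[max(0, i - r):i + r + 1]) for i in range(len(z))]

def dilate(z, w):
    r = w // 2
    return [max(z[max(0, i - r):i + r + 1]) for i in range(len(z))]

def opening(z, w=5):          # erosion followed by dilation
    return dilate(erode(z, w), w)

# Flat ground at 0 m with a 3-cell-wide "building" of height 10 m.
z = [0.0] * 10 + [10.0] * 3 + [0.0] * 10
surface = opening(z)          # building is narrower than the window: removed
nonground = [i for i, (zi, si) in enumerate(zip(z, surface)) if zi - si > 1.0]
print(nonground)              # indices of the three building cells
```

Running the opening at several window sizes, as multiscale methods do, lets objects of different footprints (cars, buildings) be removed while terrain is kept.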
Spadafore, Maxwell; Najarian, Kayvan; Boyle, Alan P
2017-11-29
Transcription factors (TFs) form a complex regulatory network within the cell that is crucial to cell functioning and human health. While methods to determine where a TF binds to DNA are well established, they provide no information describing how TFs interact with one another when they do bind. TFs tend to bind the genome in clusters, and current methods to identify these clusters are either limited in scope, unable to detect relationships beyond motif similarity, or not applied to TF-TF interactions. Here, we present a proximity-based graph clustering approach to identify TF clusters using either ChIP-seq or motif search data. We use TF co-occurrence to construct a filtered, normalized adjacency matrix and use the Markov Clustering Algorithm to partition the graph while maintaining TF-cluster and cluster-cluster interactions. We then apply our graph structure beyond clustering, using it to increase the accuracy of motif-based TF binding site (TFBS) searching for an example TF. We show that our method produces small, manageable clusters that encapsulate many known, experimentally validated transcription factor interactions and that our method is capable of capturing interactions that motif similarity methods might miss. Our graph structure is able to significantly increase the accuracy of motif TFBS searching, demonstrating that the TF-TF connections within the graph correlate with biological TF-TF interactions. The interactions identified by our method correspond to biological reality and allow for fast exploration of TF clustering and regulatory dynamics.
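The Markov Clustering step named above can be sketched on a toy co-occurrence graph. This is the generic MCL algorithm (expansion, inflation, renormalization), not the authors' full pipeline with ChIP-seq filtering and normalization:

```python
import numpy as np

# Minimal Markov Clustering (MCL) sketch on a toy adjacency matrix.
def mcl(adj, expansion=2, inflation=2.0, iters=50):
    m = adj + np.eye(len(adj))                     # self-loops stabilize iteration
    m = m / m.sum(axis=0)                          # column-stochastic matrix
    for _ in range(iters):
        m = np.linalg.matrix_power(m, expansion)   # expansion: flow spreads
        m = m ** inflation                         # inflation: strong flow wins
        m = m / m.sum(axis=0)                      # renormalize columns
    clusters = {}                                  # each node joins its attractor
    for node in range(len(adj)):
        attractor = int(np.argmax(m[:, node]))
        clusters.setdefault(attractor, set()).add(node)
    return list(clusters.values())

# Two triangles joined by a single bridging edge.
A = np.array([[0, 1, 1, 0, 0, 0],
              [1, 0, 1, 0, 0, 0],
              [1, 1, 0, 1, 0, 0],
              [0, 0, 1, 0, 1, 1],
              [0, 0, 0, 1, 0, 1],
              [0, 0, 0, 1, 1, 0]], dtype=float)
print(mcl(A))
```

On this graph the weak bridge is cut and the two triangles come out as separate clusters, mirroring how co-occurring TFs group together while cross-cluster links are preserved in the original matrix.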
A Proximal Fully Parallel Splitting Method for Stable Principal Component Pursuit
Directory of Open Access Journals (Sweden)
Hongchun Sun
2017-01-01
As a special three-block separable convex programming problem, the stable principal component pursuit (SPCP) arises in many different disciplines, such as statistical learning, signal processing, and web data ranking. In this paper, we propose a proximal fully parallel splitting method (PFPSM) for solving SPCP, in which the resulting subproblems all admit closed-form solutions and can be solved in a distributed manner. Compared with other similar algorithms in the literature, PFPSM attaches a Glowinski relaxation factor η ∈ (√3/2, 2/√3) to the updating formula for its Lagrange multiplier, which can be used to accelerate the convergence of the generated sequence. Under mild conditions, the global convergence of PFPSM is proved. Preliminary computational results show that the proposed algorithm works very well in practice.
Apparatus and methods for determining at least one characteristic of a proximate environment
Novascone, Stephen R [Idaho Falls, ID; West, Phillip B [Idaho Falls, ID; Anderson, Michael J [Troy, ID
2008-04-15
Methods and an apparatus for determining at least one characteristic of an environment are disclosed. Vibrational energy may be imparted into an environment, a magnitude of damping of the vibrational energy may be measured, and at least one characteristic of the environment may be determined. Particularly, a vibratory source may be operated and coupled to an environment. At least one characteristic of the environment may be determined based on a shift in at least one steady-state frequency of oscillation of the vibratory source. An apparatus may include at least one vibratory source and a structure for positioning the at least one vibratory source proximate to an environment. Further, the apparatus may include an analysis device for determining at least one characteristic of the environment based at least partially upon a shift in a steady-state oscillation frequency of the vibratory source for a given impetus.
Directory of Open Access Journals (Sweden)
Arnaud Landry Suffo Kamela
2016-01-01
The effects of various processing methods on the proximate composition and dietetic value of Amaranthus hybridus and Amaranthus cruentus from West Cameroon were investigated in this study. Leaves of both amaranths were subjected to the same treatments (sun-dried and unsliced; sliced and cooked; milled) and analysed for their mineral and proximate composition. Thirty-six Wistar albino rats aged 21 to 24 days were distributed into six groups and fed for 14 days with 10% protein-based diets named D0 (protein-free diet), DI (egg white as reference protein), DII (sun-dried and unsliced A. hybridus), DIII (cooked and sliced A. hybridus), DIV (sun-dried and unsliced A. cruentus), and DV (cooked and sliced A. cruentus). Protein bioavailability and haematological and biochemical parameters were assessed in the rats. The results showed that K, P, Mg, Zn, and Fe had the highest contents in both samples regardless of processing method. The sun-dried and unsliced A. cruentus contained the highest value of crude protein, 32.22 g/100 g DM (dry matter), while the highest crude lipid contents, 3.80 and 2.58%, were observed in sun-dried and unsliced A. hybridus and in cooked and sliced A. cruentus, respectively. Cooked and sliced A. hybridus and A. cruentus contained high crude fiber, 14 and 12.18%, respectively. Rats fed with diet DIII showed the best protein bioavailability and haematological parameters, whereas a 100% mortality rate was recorded in the group fed with diet DIV. From this study, it is evident that cooked and sliced A. hybridus and A. cruentus could play a role in weight-reduction regimes.
Post-Processing in the Material-Point Method
DEFF Research Database (Denmark)
Andersen, Søren; Andersen, Lars Vabbersgaard
The material-point method (MPM) is a numerical method for dynamic or static analysis of solids using a discretization in time and space. The method has shown to be successful in modelling physical problems involving large deformations, which are difficult to model with traditional numerical tools. […] The first idea involves associating a volume with each material point and displaying the deformation of this volume. In the discretization process, the physical domain is divided into a number of smaller volumes, each represented by a simple shape; here quadrilaterals are chosen for the presented […] strain problems. It is noted that this idea is also relevant for other point-based methods, such as smoothed particle hydrodynamics, where the history-dependent variables are tracked by a set of particles. The second idea introduced in the article involves the fact that, while the stresses may oscillate […]
A Novel Fast Method for Point-sampled Model Simplification
Directory of Open Access Journals (Sweden)
Cao Zhi
2016-01-01
A novel fast simplification method for point-sampled statue models is proposed. Simplification for 3D model reconstruction is a hot topic in the field of 3D surface construction, but it is difficult because the point clouds of many 3D models are very large, so running times become very long. In this paper, a two-stage simplification method is proposed. First, a feature-preserving non-uniform simplification method for cloud points is presented, which simplifies the data set to remove redundancy while preserving the features of the model. Second, an affinity-clustering simplification method is used to classify each point as a sharp point or a simple point. The advantages of affinity propagation clustering are its message passing among data points and its fast processing speed. Together with re-sampling, it can dramatically reduce the duration of the process while keeping memory cost low. Both theoretical analysis and experimental results show that after the simplification the proposed method is efficient and the details of the surface are preserved well.
C-point and V-point singularity lattice formation and index sign conversion methods
Kumar Pal, Sushanta; Ruchi; Senthilkumaran, P.
2017-06-01
The generic singularities in an ellipse field are C-points, namely stars, lemons and monstars, in a polarization distribution, with C-point indices (-1/2), (+1/2) and (+1/2) respectively. Similar to C-point singularities, there are V-point singularities that occur in a vector field and are characterized by Poincaré-Hopf indices of integer values. In this paper we show that the superposition of three homogeneously polarized beams in different linear states leads to the formation of a polarization singularity lattice. Three point sources at the focal plane of the lens are used to create three interfering plane waves. A radial/azimuthal polarization converter (S-wave plate) placed near the focal plane modulates the polarization states of the three beams. The interference pattern is found to host C-points and V-points in a hexagonal lattice. The C-points occur at intensity maxima and the V-points occur at intensity minima. Modulating the state of polarization (SOP) of the three plane waves from radial to azimuthal does not essentially change the nature of the polarization singularity lattice, as the Poincaré-Hopf index for both radial and azimuthal polarization distributions is (+1). Hence a transformation from a star to a lemon is not trivial, as such a transformation requires not a single SOP change, but a change in the whole spatial SOP distribution. Further, there is no change in the lattice structure, and the C- and V-points appear at the locations where they were present earlier. Hence, to convert an interlacing star and V-point lattice into an interlacing lemon and V-point lattice, the interferometer requires modification. We show for the first time a method to change the polarity of C-point and V-point indices. This means that lemons can be converted into stars and stars can be converted into lemons. Similarly, a positive V-point can be converted to a negative V-point and vice versa. The intensity distribution in all these lattices is invariant as the SOPs of the three beams are changed […]
Analysis of Stress Updates in the Material-point Method
DEFF Research Database (Denmark)
2009-01-01
[…] are solved on a background computational grid. Several references state that one of the main advantages of the material-point method is the easy application of complicated material behaviour, as the constitutive response is updated individually for each material point. However, as discussed here, the MPM way […]
Surface processing methods for point sets using finite elements
Clarenz, Ulrich; Rumpf, Martin; Telea, Alexandru
2004-01-01
We present a framework for processing point-based surfaces via partial differential equations (PDEs). Our framework efficiently and effectively brings well-known PDE-based processing techniques to the field of point-based surfaces. At the core of our method is a finite element discretization of PDEs.
Novel Ratio Subtraction and Isoabsorptive Point Methods for ...
African Journals Online (AJOL)
Purpose: To develop and validate two innovative spectrophotometric methods used for the simultaneous determination of ambroxol hydrochloride and doxycycline in their binary mixture. Methods: Ratio subtraction and isoabsorptive point methods were used for the simultaneous determination of ambroxol hydrochloride ...
Image to Point Cloud Method of 3D-MODELING
Chibunichev, A. G.; Galakhov, V. P.
2012-07-01
This article describes a method of constructing 3D models of objects (buildings, monuments) based on digital images and a point cloud obtained by a terrestrial laser scanner. The first step is the automated determination of the exterior orientation parameters of the digital image, which requires finding corresponding points between the image and the point cloud. Before the corresponding-point search, a quasi-image of the point cloud is generated. The SIFT algorithm is then applied to the quasi-image and the real image to find corresponding points, from which the exterior orientation parameters of the image are calculated. The second step is the construction of the vector object model. Vectorization is performed by a PC operator in interactive mode using a single image, and the spatial coordinates of the model are calculated automatically from the cloud points. In addition, automatic edge detection with interactive editing is available: edge detection is performed on the point cloud and on the image, with subsequent identification of the correct edges. Experimental studies of the method have demonstrated its efficiency in the case of building facade modelling.
A new screening method to detect proximal dental caries using fluorescence imaging.
Kim, Eun-Soo; Lee, Eun-Song; Kang, Si-Mook; Jung, Eun-Ha; de Josselin de Jong, Elbert; Jung, Hoi-In; Kim, Baek-Il
2017-12-01
This study aimed to assess the screening performance of quantitative light-induced fluorescence (QLF) technology for detecting proximal caries using both fluorescence loss and red fluorescence in a clinical situation. Moreover, a new simplified QLF score for proximal caries (QS-Proximal) is proposed and its validity for detecting proximal caries is evaluated. This clinical study included 280 proximal surfaces, which were assessed by visual-tactile and radiographic examinations and scored by each scoring system according to lesion severity. The occlusal QLF images were analysed in two different ways: (1) a quantitative analysis producing fluorescence loss (ΔF) and red fluorescence (ΔR) parameters; and (2) the new QLF scoring index. For both the quantitative parameters and QS-Proximal, the sensitivity, specificity, and area under the receiver operating characteristic curve (AUROC) were calculated as a function of the radiographic scoring index at the enamel and dentine caries levels. Both ΔF and ΔR showed excellent AUROC values at the dentine caries level (ΔF = 0.860, ΔR = 0.902), whereas relatively lower values were observed at the enamel caries level (ΔF = 0.655, ΔR = 0.686). The QS-Proximal also showed excellent AUROC values, ranging from 0.826 to 0.864, for detecting proximal caries at the dentine level. The QS-Proximal, which represents fluorescence changes, showed excellent performance in detecting proximal caries using the radiographic score as the gold standard. Copyright © 2017 Elsevier B.V. All rights reserved.
Kemoli, A.M.; van Amerongen, W.E.; Opinya, G.N.
2010-01-01
AIM: To evaluate the influence of two methods of tooth isolation on the survival rate of proximal ART (atraumatic restorative treatment) restorations in primary molars. METHODS: The study was conducted in two rural divisions in Kenya, with 7 operators randomly paired to a group of 8 assistants. A total of 804 children […]
Natural Preconditioning and Iterative Methods for Saddle Point Systems
Pestana, Jennifer
2015-01-01
© 2015 Society for Industrial and Applied Mathematics. The solution of quadratic or locally quadratic extremum problems subject to linear(ized) constraints gives rise to linear systems in saddle point form. This is true whether in the continuous or the discrete setting, so saddle point systems arising from the discretization of partial differential equation problems, such as those describing electromagnetic problems or incompressible flow, lead to equations with this structure, as do, for example, interior point methods and the sequential quadratic programming approach to nonlinear optimization. This survey concerns iterative solution methods for these problems and, in particular, shows how the problem formulation leads to natural preconditioners which guarantee a fast rate of convergence of the relevant iterative methods. These preconditioners are related to the original extremum problem and their effectiveness - in terms of rapidity of convergence - is established here via a proof of general bounds on the eigenvalues of the preconditioned saddle point matrix on which iteration convergence depends.
Selective Integration in the Material-Point Method
DEFF Research Database (Denmark)
2009-01-01
The paper deals with stress integration in the material-point method. In order to avoid parasitic shear in bending, a formulation is proposed, based on selective integration in the background grid that is used to solve the governing equations. The suggested integration scheme is compared to a traditional material-point-method computation in which the stresses are evaluated at the material points. The deformation of a cantilever beam is analysed, assuming elastic or elastoplastic material behaviour.
Material-point Method Analysis of Bending in Elastic Beams
DEFF Research Database (Denmark)
Andersen, Søren Mikkel; Andersen, Lars
The aim of this paper is to test different types of spatial interpolation for the material-point method. The interpolations include quadratic elements and cubic splines. A brief introduction to the material-point method is given. Simple linear-elastic problems are tested, including the classical cantilevered beam problem. As shown in the paper, the use of negative shape functions is not consistent with the material-point method in its current form, necessitating other types of interpolation, such as cubic splines, in order to obtain smoother representations of field quantities. It is shown […]
A fixed point method to compute solvents of matrix polynomials
Marcos, Fernando; Pereira, Edgar
2009-01-01
Matrix polynomials play an important role in the theory of matrix differential equations. We develop a fixed point method to compute solvents of matrix polynomial equations, where the matricial elements of the matrix polynomial are considered separately as complex polynomials. Numerical examples illustrate the presented method.
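The fixed-point idea can be illustrated on a small quadratic matrix equation. This is a hedged sketch of the general principle, not the authors' exact scheme: rewriting the equation so that X appears alone on the left gives an iteration that contracts toward one solvent.

```python
import numpy as np

# Find a solvent X of the matrix polynomial equation
#   X^2 - 5*X + 6*I = 0,
# whose solvents include X = 2*I and X = 3*I (illustrative example).
# Rewriting as X = (X^2 + 6*I) / 5 gives a contraction near X = 2*I.
I = np.eye(2)
X = np.zeros((2, 2))
for _ in range(200):
    X = (X @ X + 6 * I) / 5    # fixed point iteration

print(np.allclose(X, 2 * I, atol=1e-8))
```

The iteration converges to the solvent 2·I because the map has derivative of norm 4/5 < 1 there; the other solvent 3·I is a repelling fixed point of this particular rewriting, which is why the choice of rewriting matters in fixed-point schemes.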
Full-step interior-point methods for symmetric optimization
Gu, G.
2009-01-01
In [SIAM J. Optim., 16(4):1110-1136 (electronic), 2006] Roos proposed a full-Newton step Infeasible Interior-Point Method (IIPM) for Linear Optimization (LO). It is a primal-dual homotopy method; it differs from the classical IIPMs in that it uses only full steps. This means that no line searches are needed.
Novel TPPO Based Maximum Power Point Method for Photovoltaic System
Directory of Open Access Journals (Sweden)
ABBASI, M. A.
2017-08-01
The photovoltaic (PV) system has great potential, and nowadays it is installed more than other renewable energy sources. However, a PV system cannot perform optimally due to its strong dependence on climate conditions; because of this dependency, the PV system does not always operate at its maximum power point (MPP). Many MPP tracking methods have been proposed for this purpose. One of these is the Perturb and Observe (P&O) method, which is the most popular due to its simplicity, low cost and fast tracking, but it deviates from the MPP in continuously changing weather conditions, especially under rapidly changing irradiance. A new Maximum Power Point Tracking (MPPT) method, Tetra Point Perturb and Observe (TPPO), has been proposed to improve PV system performance under changing irradiance conditions, and the effects of varying irradiance on the characteristic curves of a PV array module are delineated. The proposed MPPT method shows better results in increasing the efficiency of a PV system.
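Since TPPO builds on P&O, the baseline P&O loop is worth sketching. The concave power curve below is a toy stand-in for a real PV characteristic, and the step size and iteration count are made-up illustration values:

```python
# Minimal perturb-and-observe (P&O) MPPT sketch (toy PV model, not TPPO).
def pv_power(v):
    # Illustrative power-voltage curve with a single maximum at v = 30 V.
    return -0.1 * (v - 30.0) ** 2 + 90.0

def perturb_and_observe(v0=20.0, step=0.5, iters=100):
    v, p = v0, pv_power(v0)
    direction = 1.0
    for _ in range(iters):
        v_new = v + direction * step
        p_new = pv_power(v_new)
        if p_new < p:              # power dropped: reverse the perturbation
            direction = -direction
        v, p = v_new, p_new
    return v

v_mpp = perturb_and_observe()
print(abs(v_mpp - 30.0) <= 0.5)    # settles within one step of the true MPP
```

The sketch also shows P&O's known weakness mentioned above: the operating point never stops at the MPP but oscillates around it by one perturbation step, and a sudden irradiance change (a shifted power curve between iterations) can fool the power comparison into stepping the wrong way.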
A Point-Set-Based Footprint Model and Spatial Ranking Method for Geographic Information Retrieval
Directory of Open Access Journals (Sweden)
Yong Gao
2016-07-01
In the recent big data era, massive amounts of spatially related data are continuously generated and collected from various sources, and acquiring accurate geographic information is urgently demanded. How to accurately retrieve desired geographic information has become a prominent issue that needs to be resolved with high priority. The key technologies in geographic information retrieval are modeling document footprints and ranking documents based on their similarity evaluation. Traditional spatial similarity evaluation methods are mainly performed using an MBR (Minimum Bounding Rectangle) footprint model. However, due to its simplification and roughness, the results of traditional methods tend to be isotropic and space-redundant. In this paper, a new model that constructs footprints in the form of point sets is presented. The point-set-based footprint matches the nature of place names in web pages, so it is redundancy-free, consistent, accurate, and anisotropic in describing the spatial extents of documents, and it can handle multi-scale geographic information. A corresponding spatial ranking method is also presented based on the point-set-based model. The new similarity evaluation algorithm of this method first measures multiple distances for spatial proximity across different scales, and then combines the frequency of place names to improve accuracy and precision. The experimental results show that the proposed method outperforms the traditional methods with higher accuracy under different search scenarios.
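One way to picture point-set footprint ranking (an illustrative scoring rule, not the paper's exact multi-scale formula): score each document by combining the proximity of its place-name points to the query location with the frequency of each place name.

```python
import math

# Hedged sketch: rank documents by a point-set footprint score that mixes
# spatial proximity with place-name frequency (weighting is illustrative).
def footprint_score(query, footprint, freq):
    # footprint: geocoded place-name points; freq[i]: occurrences in the text
    score = 0.0
    for (x, y), f in zip(footprint, freq):
        d = math.hypot(x - query[0], y - query[1])
        score += f / (1.0 + d)      # closer and more frequent => higher score
    return score

doc_a = [(0.0, 0.0), (1.0, 0.0)]    # footprint points near the query
doc_b = [(10.0, 10.0)]              # footprint far away
q = (0.0, 0.0)
print(footprint_score(q, doc_a, [3, 1]) > footprint_score(q, doc_b, [5]))
```

Unlike an MBR, the point set keeps the footprint anisotropic: a document mentioning places along a river is scored along that line rather than over the whole bounding rectangle.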
Directory of Open Access Journals (Sweden)
Yuk Fai Lau
2017-12-01
Broken medullary tubes have been reported during intramedullary (IM) nailing of femoral and tibial fractures. In these reported cases, fragments of the medullary tube were retrieved by opening the fracture sites or left in situ, which might jeopardize the periosteal blood supply. We herein present the case of a 58-year-old woman who underwent IM nailing for a proximal humeral fracture, which was complicated by breakage of the medullary tube intraoperatively. Different instruments, including guide rods, straight forceps, and a cement extractor hook, were used to retrieve the retained fragments from the medullary canal, but these attempts were unsuccessful. Finally, the fragments were successfully removed using an anterior cruciate ligament (ACL) ENDOBUTTON depth gauge. This case highlights that medullary tubes can break during humeral IM nailing, which could be minimized by ensuring the integrity of the medullary tube prior to surgery and discarding medullary tubes with more than 100 exposures. A novel method of using an ACL ENDOBUTTON depth gauge to retrieve retained tube fragments is recommended because of its long and slim design.
Bussaneli, D G; Restrepo, M; Boldieri, T; Albertoni, T H; Santos-Pinto, L; Cordeiro, R C L
2015-12-01
The aim of this clinical study was to evaluate and compare the performance of visual examination using the Nyvad criteria (VE), interproximal bitewing radiography (BW), a laser fluorescence device (DIAGNOdent Pen, DDPen), and their associations in the diagnosis of proximal lesions in primary teeth. For this purpose, 45 children of both sexes aged between 5 and 9 years (n = 59 surfaces) were selected, who presented healthy primary molars or primary molars with signs suggestive of caries lesions. The surfaces were clinically evaluated and coded according to the Nyvad criteria and immediately afterwards with the DDPen. Radiographic examination was performed only on surfaces coded with Nyvad scores 2, 3, 5, or 6. Active caries lesions and/or those with discontinuous surfaces were restored, with the depth of the lesion taken as the reference standard. Sensitivity, specificity, accuracy, and area under the ROC curve were calculated for each technique and its associations. Visual examination with the Nyvad criteria presented the highest specificity, accuracy, and area under the ROC curve; the DDPen presented the highest sensitivity. Association with one or more methods resulted in an increase in specificity. The performance of the visual, radiographic, and DDPen examinations and their associations was good; however, clinical examination with the Nyvad criteria was sufficient for the diagnosis of interproximal lesions in primary teeth.
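For reference, the reported measures follow directly from a 2×2 confusion table. The counts below are made up for illustration (chosen only to sum to the study's 59 surfaces), not the study's data:

```python
# Diagnostic accuracy measures from a 2x2 confusion table (hedged sketch;
# counts are illustrative, not from the study).
def diagnostic_metrics(tp, fp, fn, tn):
    sensitivity = tp / (tp + fn)              # lesions correctly detected
    specificity = tn / (tn + fp)              # sound surfaces correctly cleared
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return sensitivity, specificity, accuracy

sens, spec, acc = diagnostic_metrics(tp=18, fp=3, fn=4, tn=34)
print(round(sens, 2), round(spec, 2), round(acc, 2))
```

The trade-off described in the abstract is visible here: a method that flags more surfaces (like the DDPen) gains sensitivity at the cost of false positives, which is why combining methods raised specificity.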
Detection method of proximal caries using line profile in digital intra-oral radiography
Energy Technology Data Exchange (ETDEWEB)
Choi, Yong Suk; Kim, Gyu Tae; Hwang, Eui Hwan; Lee, Min Ja; Choi, Sam Jin; Park, Hun Kuk [Department of Oral and Maxillofacial Radiology, School of Dentistry and Institute of Oral Biology, Kyung Hee University, Seoul (Korea, Republic of); Park, Jeong Hoon [Department of Biomedical Engineering, College of Medicine, Kyung Hee University, Seoul (Korea, Republic of)
2009-12-15
The purpose of this study was to investigate how to detect proximal caries using a line profile and to validate linear measurements of proximal caries lesions by basic digital manipulation of radiographic images. X-ray images of a control group (15 teeth) and of carious teeth (15) from patients were used. For each image, the line profile at the proximal caries-susceptible zone was calculated. To evaluate the contrast as a function of the line profile, a difference coefficient (D), which indicates the relative difference between caries and sound dentin or intact enamel, was measured. Mean values of D were 0.0354 ± 0.0155 in the non-caries group and 0.2632 ± 0.0982 in the caries group (p < 0.001); the mean value of the caries group was thus higher than that of the control group, and there was a correlation between proximal dental caries and D. From these results, D shows great potential as a new detection parameter for proximal dental caries.
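A plausible reading of the difference coefficient (an assumption; the paper's exact definition may differ) is the relative intensity drop of a suspected lesion region against surrounding sound tissue along the line profile:

```python
# Hedged sketch of a line-profile difference coefficient D for a radiograph.
def difference_coefficient(profile, lesion, sound):
    lesion_mean = sum(profile[i] for i in lesion) / len(lesion)
    sound_mean = sum(profile[i] for i in sound) / len(sound)
    return abs(sound_mean - lesion_mean) / sound_mean

# Synthetic gray-level profile across a proximal surface: demineralized
# (radiolucent) tissue shows up as a dip in intensity.
profile = [200, 198, 150, 148, 152, 199, 201]
D = difference_coefficient(profile, lesion=[2, 3, 4], sound=[0, 1, 5, 6])
print(D > 0.2)    # comparable in magnitude to the reported caries-group mean
```

A sound surface would give a nearly flat profile and D close to zero, matching the separation between the two group means reported above.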
Analysis of Spatial Interpolation in the Material-Point Method
DEFF Research Database (Denmark)
Andersen, Søren; Andersen, Lars
2010-01-01
This paper analyses different types of spatial interpolation for the material-point method. The interpolations include quadratic elements and cubic splines in addition to the standard linear shape functions usually applied. For the small-strain problem of a vibrating bar, the best results are obtained […]
Micro-four-point Probe Hall effect Measurement method
DEFF Research Database (Denmark)
Petersen, Dirch Hjorth; Hansen, Ole; Lin, Rong
2008-01-01
[…] contributions may be separated using dual-configuration measurements. The method differs from conventional van der Pauw measurements since the probe pins are placed in the interior of the sample region, not just on the perimeter. We experimentally verify the method by micro-four-point probe measurements on ultrashallow junctions in silicon and germanium. On a cleaved silicon ultrashallow junction sample we determine carrier mobility, sheet carrier density, and sheet resistance from micro-four-point probe measurements under various experimental conditions, and show with these conditions reproducibility within […]
Multi-point probe for testing electrical properties and a method of producing a multi-point probe
DEFF Research Database (Denmark)
2011-01-01
A multi-point probe for testing electrical properties of a number of specific locations of a test sample comprises a supporting body defining a first surface, a first multitude of conductive probe arms (101-101'''), each of the probe arms defining a proximal end and a distal end. The probe arms...... are connected to the supporting body (105) at the proximal ends, and the distal ends are freely extending from the supporting body, giving individually flexible motion to the probe arms. Each of the probe arms defines a maximum width perpendicular to its perpendicular bisector and parallel with its line...... of contact with the supporting body, and a maximum thickness perpendicular to its perpendicular bisector and its line of contact with the supporting body. Each of the probe arms has a specific area or point of contact (111-111''') at its distal end for contacting a specific location among the number...
Modelling of Landslides with the Material-point Method
DEFF Research Database (Denmark)
Andersen, Søren; Andersen, Lars
2009-01-01
A numerical model for studying the dynamic evolution of landslides is presented. The numerical model is based on the Generalized Interpolation Material Point Method. A simplified slope with a house placed on top is analysed. An elasto-plastic material model based on the Mohr-Coulomb yield criterion...
Incompressible material point method for free surface flow
Zhang, Fan; Zhang, Xiong; Sze, Kam Yim; Lian, Yanping; Liu, Yan
2017-02-01
To overcome the shortcomings of the weakly compressible material point method (WCMPM) for modeling free surface flow problems, an incompressible material point method (iMPM) is proposed based on an operator splitting technique which splits the solution of the momentum equation into two steps. An intermediate velocity field is first obtained by solving the momentum equations while ignoring the pressure gradient term; the intermediate velocity field is then corrected by the pressure term to obtain a divergence-free velocity field. A level set function representing the signed distance to the free surface is used to track the free surface and apply the pressure boundary conditions. Moreover, hourglass damping is introduced to suppress the spurious velocity modes caused by discretizing the cell-center velocity divergence from the grid vertex velocities when solving the pressure Poisson equations. Numerical examples, including dam break, oscillation of a cubic liquid drop, and a droplet impacting a deep pool, show that the proposed incompressible material point method is much more accurate and efficient than the weakly compressible material point method for free surface flow problems.
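A minimal sketch of the pressure-correction (operator splitting) step this abstract describes, on an illustrative 2D periodic grid with a Jacobi Poisson solve. The grid size, test field, and solver choice are assumptions for demonstration, not details from the paper:

```python
import numpy as np

def div(u, v, dx):
    """Central-difference divergence on a periodic grid."""
    return ((np.roll(u, -1, 1) - np.roll(u, 1, 1))
          + (np.roll(v, -1, 0) - np.roll(v, 1, 0))) / (2 * dx)

def project(u, v, dx, n_iter=400):
    """Pressure-correction step: solve a Poisson equation for pressure,
    then subtract its gradient so the velocity field becomes
    (numerically) divergence-free.  The stride-2 Jacobi stencil matches
    the composite of the central-difference divergence and gradient."""
    d = div(u, v, dx)
    p = np.zeros_like(u)
    for _ in range(n_iter):
        p = (np.roll(p, 2, 0) + np.roll(p, -2, 0)
           + np.roll(p, 2, 1) + np.roll(p, -2, 1) - (2 * dx) ** 2 * d) / 4
    # velocity correction: subtract the pressure gradient
    u = u - (np.roll(p, -1, 1) - np.roll(p, 1, 1)) / (2 * dx)
    v = v - (np.roll(p, -1, 0) - np.roll(p, 1, 0)) / (2 * dx)
    return u, v

# intermediate velocity field with nonzero divergence (periodic domain)
n = 32
dx = 1.0 / n
x = np.arange(n) * dx
X, Y = np.meshgrid(x, x)
u_star = np.sin(2 * np.pi * X) * np.cos(2 * np.pi * Y)
v_star = np.cos(2 * np.pi * X) * np.sin(2 * np.pi * Y)
u_new, v_new = project(u_star, v_star, dx)
```

After the projection, the discrete divergence of `(u_new, v_new)` is reduced by orders of magnitude relative to the intermediate field, which is the essential property the iMPM correction step relies on.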
Material-Point Method Analysis of Bending in Elastic Beams
DEFF Research Database (Denmark)
Andersen, Søren Mikkel; Andersen, Lars
2007-01-01
cantilevered beam problem. As shown in the paper, the use of negative shape functions is not consistent with the material-point method in its current form, necessitating other types of interpolation such as cubic splines in order to obtain smoother representations of field quantities. It is shown...
Material-Point-Method Analysis of Collapsing Slopes
DEFF Research Database (Denmark)
Andersen, Søren; Andersen, Lars
2009-01-01
-interpolation material-point method, combining a Eulerian grid for solving the governing equations of a continuum with a Lagrangian description for the material. The method is extended to analyse interaction between multiple bodies, introducing a master-slave algorithm for frictional contact along interfaces. Further......, a deformed material description is introduced, based on time integration of the deformation gradient and utilising Gauss quadrature over the volume associated with each material point. The method has been implemented in a Fortran code and employed for the analysis of a landslide that took place during...... the night of December 1st, 2008, near Lønstrup, Denmark. Using a simple Mohr-Coulomb model for the soil, the computational model is able to reproduce the change in the slope geometry at the site....
Dai, Wenqing; Richardella, Anthony; Du, Renzhong; Zhao, Weiwei; Liu, Xin; Liu, C X; Huang, Song-Hsun; Sankar, Raman; Chou, Fangcheng; Samarth, Nitin; Li, Qi
2017-08-09
Proximity-effect-induced superconductivity was studied in epitaxial topological insulator Bi2Se3 thin films grown on superconducting NbSe2 single crystals. A point contact spectroscopy (PCS) method was used at low temperatures down to 40 mK. An induced superconducting gap in Bi2Se3 was observed in the spectra, which decreased with increasing Bi2Se3 layer thickness, consistent with the proximity effect in the bulk states of Bi2Se3 induced by NbSe2. At very low temperatures, an extra point contact feature, which may correspond to a second energy gap, appeared in the spectrum. For a 16-quintuple-layer Bi2Se3 on NbSe2 sample, the bulk-state gap value near the top surface is ~159 μeV, while the second gap value is ~120 μeV at 40 mK. The second gap value decreased with increasing Bi2Se3 layer thickness, but the ratio between the second gap and the bulk-state gap remained about the same for different Bi2Se3 thicknesses. It is plausible that this is due to superconductivity in Bi2Se3 topological surface states induced through the bulk states. The two induced gaps in the PCS measurement are consistent with the three-dimensional bulk-state and two-dimensional surface-state superconducting gaps observed in angle-resolved photoemission spectroscopy (ARPES) measurements.
A Review on the Modified Finite Point Method
Directory of Open Access Journals (Sweden)
Nan-Jing Wu
2014-01-01
Full Text Available The objective of this paper is to review recent advancements of the modified finite point method, named MFPM hereafter. The MFPM is developed for solving general partial differential equations. Benchmark examples of employing this method to solve Laplace, Poisson, convection-diffusion, Helmholtz, mild-slope, and extended mild-slope equations are verified and then illustrated in fluid flow problems. Application of the MFPM to the numerical generation of orthogonal grids, which is governed by the Laplace equation, is also demonstrated.
Computation of multi-material interactions using point method
Energy Technology Data Exchange (ETDEWEB)
Zhang, Duan Z [Los Alamos National Laboratory; Ma, Xia [Los Alamos National Laboratory; Giguere, Paul T [Los Alamos National Laboratory
2009-01-01
Calculations of fluid flows are often based on an Eulerian description, while calculations of solid deformations are often based on a Lagrangian description of the material. When Eulerian descriptions are applied to problems of solid deformation, state variables such as stress and damage need to be advected, causing significant numerical diffusion error. When Lagrangian methods are applied to problems involving large solid deformations or fluid flows, mesh distortion and entanglement are significant sources of error and often lead to failure of the calculation. Either method faces significant difficulties when applied to problems involving large deformation of solids. To address these difficulties, the particle-in-cell (PIC) method was introduced in the 1960s. In this method, Eulerian meshes stay fixed while Lagrangian particles move through them during the material deformation. Since its introduction, many improvements to the method have been made. The work of Sulsky et al. (1995, Comput. Phys. Commun., v. 87, p. 236) provides a mathematical foundation for an improved version of the PIC method, the material point method (MPM). The unique advantages of the MPM have led to many attempts to apply the method to problems involving the interaction of different materials, such as fluid-structure interactions. These are multiphase flow or multimaterial deformation problems, in which pressures, material densities, and volume fractions are determined by satisfying the continuity constraint. However, due to the difference in approximations between the material point method and the Eulerian method, erroneous pressure results will be obtained if the scheme used in Eulerian methods for multiphase flows is applied directly. To resolve this issue, we introduce a numerical scheme that satisfies the continuity requirement to a higher order of accuracy in the sense of weak solutions for the continuity equations.
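The core PIC/MPM mechanism mentioned above, particles carrying state through a fixed grid, rests on a particle-to-grid transfer. A minimal 1D sketch with linear (hat) shape functions follows; the array names and node layout are illustrative assumptions, not from the paper:

```python
import numpy as np

def p2g(xp, mp, vp, n_nodes, dx):
    """Particle-to-grid transfer with linear (hat) shape functions, 1D.
    Each particle spreads its mass and momentum to the two nearest grid
    nodes, weighted by the shape-function value at the particle."""
    mass = np.zeros(n_nodes)
    mom = np.zeros(n_nodes)
    for x, m, v in zip(xp, mp, vp):
        i = int(x // dx)            # left node of the particle's cell
        w = (x - i * dx) / dx       # fractional position in the cell
        mass[i] += (1 - w) * m
        mom[i] += (1 - w) * m * v
        mass[i + 1] += w * m
        mom[i + 1] += w * m * v
    # nodal velocities where mass is present (zero elsewhere)
    vel = np.divide(mom, mass, out=np.zeros(n_nodes), where=mass > 0)
    return mass, mom, vel
```

By construction the transfer conserves total mass and total momentum, which is what makes the fixed Eulerian grid a consistent scratchpad for the moving Lagrangian particles.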
An improved maximum power point tracking method for photovoltaic systems
Energy Technology Data Exchange (ETDEWEB)
Tafticht, T.; Agbossou, K.; Doumbia, M.L.; Cheriti, A. [Institut de recherche sur l' hydrogene, Departement de genie electrique et genie informatique, Universite du Quebec a Trois-Rivieres, C.P. 500, Trois-Rivieres (QC) (Canada)
2008-07-15
In most of the maximum power point tracking (MPPT) methods currently described in the literature, the optimal operating point of photovoltaic (PV) systems is estimated by linear approximations. However, these approximations can lead to less than optimal operating conditions and hence considerably reduce the performance of the PV system. This paper proposes a new approach to determine the maximum power point (MPP) based on measurements of the open-circuit voltage of the PV modules; a nonlinear expression for the optimal operating voltage is developed based on this open-circuit voltage. The approach is thus a combination of the nonlinear and perturbation-and-observation (P and O) methods. The experimental results show that the approach clearly improves the tracking efficiency of the maximum power available at the output of the PV modules. The new method reduces the oscillations around the MPP and increases the average efficiency of the MPPT. The new MPPT method will deliver more power to any generic load or energy storage medium. (author)
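For reference, the P and O component this method combines with the nonlinear expression can be sketched in a few lines. The power curve below is a hypothetical single-peak model (a real curve depends on irradiance and temperature), and the step size and voltages are illustrative assumptions:

```python
def pv_power(v):
    """Hypothetical single-peak PV power curve P(V) = 8V*(1 - (V/20)^5);
    the real curve depends on irradiance and temperature.  Peak near 14 V."""
    return 8.0 * v * (1.0 - (v / 20.0) ** 5)

def p_and_o(p_curve, v0, dv=0.5, steps=100):
    """Perturb-and-observe MPPT: nudge the operating voltage by dv each
    step and reverse direction whenever the measured power drops."""
    v = v0
    p_prev = p_curve(v)
    direction = 1.0
    for _ in range(steps):
        v += direction * dv
        p = p_curve(v)
        if p < p_prev:
            direction = -direction  # power dropped: reverse perturbation
        p_prev = p
    return v
```

The sketch also shows the weakness the paper addresses: plain P and O never settles, oscillating around the MPP with an amplitude set by the perturbation step `dv`.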
Multiperiod hydrothermal economic dispatch by an interior point method
Directory of Open Access Journals (Sweden)
Kimball L. M.
2002-01-01
Full Text Available This paper presents an interior point algorithm to solve the multiperiod hydrothermal economic dispatch (HTED) problem. The multiperiod HTED is a large-scale nonlinear programming problem. Various optimization methods have been applied to the multiperiod HTED, but most neglect important network characteristics or require decomposition into thermal and hydro subproblems. The algorithm described here exploits the special bordered block-diagonal structure and sparsity of the Newton system for the first-order necessary conditions, resulting in a fast, efficient algorithm that can account for all network aspects. Applying this new algorithm challenges a conventional method for the use of available hydro resources known as the peak-shaving heuristic.
A Robust Shape Reconstruction Method for Facial Feature Point Detection.
Tan, Shuqiu; Chen, Dongyi; Guo, Chenggang; Huang, Zhiqi
2017-01-01
Facial feature point detection has seen great research advances in recent years. Numerous methods have been developed and applied in practical face analysis systems. However, it is still a quite challenging task because of the large variability in expressions and gestures and the existence of occlusions in real-world photo shoots. In this paper, we present a robust sparse reconstruction method for face alignment problems. Instead of a direct regression between the feature space and the shape space, the concept of shape increment reconstruction is introduced. Moreover, a set of coupled overcomplete dictionaries, termed the shape increment dictionary and the local appearance dictionary, are learned in a regressive manner to select robust features and fit shape increments. Additionally, to make the learned model more generalized, we select the best-matched parameter set through extensive validation tests. Experimental results on three public datasets demonstrate that the proposed method achieves better robustness than the state-of-the-art methods.
Perceived effectiveness of teaching methods for point of care ultrasound.
Cartier, Rudolph A; Skinner, Carl; Laselle, Brooks
2014-07-01
Point of care ultrasound (POCUS) is a rapidly expanding aspect of both the practice and education of emergency physicians. The most effective methods of teaching these valuable skills have not been explored. This project aimed to identify the methods that provide the best educational value as determined by the learner. Data were collected from pre- and post-course surveys administered to students of the introductory POCUS course provided to emergency medicine residents each year at our facility. Data were collected in 2010 and 2011. Participants were asked to evaluate the effectiveness of small- vs. large-group format, still images vs. video clips, and PowerPoint slides vs. live demonstration vs. hands-on scanning. Students felt the most effective methods to be the small-group format, video-clip examples, and hands-on scanning sessions. Students also rated hands-on sessions, still images, and video images as more effective in post-course surveys than in pre-course surveys. The methods perceived as most effective for POCUS education are the small-group format, video-clip examples, and hands-on scanning sessions. Published by Elsevier Inc.
Directory of Open Access Journals (Sweden)
Jingyu Sun
2014-07-01
Full Text Available To survive in the current shipbuilding industry, it is of vital importance for shipyards to have ship components’ accuracy evaluated efficiently during most of the manufacturing steps. Evaluating components’ accuracy by comparing each component’s point cloud data, scanned by laser scanners, with the ship’s design data in CAD format cannot be performed efficiently when (1) components extracted from the point cloud data contain irregular obstacles, or when (2) registration of the two data sets has no clear direction setting. This paper presents reformative point cloud data processing methods to solve these problems. K-d tree construction over the point cloud data speeds up the neighbor search for each point. A region growing method performed on the neighbor points of a seed point extracts the continuous part of the component, while curved surface fitting and B-spline curve fitting at the edge of the continuous part recognize neighboring domains of the same component divided by obstacles’ shadows. The ICP (Iterative Closest Point) algorithm conducts a registration of the two data sets after the proper registration direction is decided by principal component analysis. In experiments conducted at the shipyard, 200 curved shell plates were extracted from the scanned point cloud data, and registrations were conducted between them and the designed CAD data using the proposed methods for an accuracy evaluation. Results show that the proposed methods support accuracy-evaluation-targeted point cloud data processing efficiently in practice.
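The region growing step described above can be sketched as a breadth-first expansion over spatial neighbors. A brute-force neighbor search stands in here for the paper's k-d tree, and the radius and point layout are illustrative assumptions:

```python
import numpy as np
from collections import deque

def region_grow(points, seed_idx, radius):
    """Grow a region from a seed point: repeatedly absorb every point
    within `radius` of a point already in the region.  Brute-force
    distance computation stands in for the paper's k-d tree search."""
    n = len(points)
    in_region = np.zeros(n, dtype=bool)
    in_region[seed_idx] = True
    queue = deque([seed_idx])
    while queue:
        i = queue.popleft()
        d = np.linalg.norm(points - points[i], axis=1)
        for j in np.nonzero((d < radius) & ~in_region)[0]:
            in_region[j] = True
            queue.append(j)
    return np.nonzero(in_region)[0]
```

Replacing the brute-force distance pass with a k-d tree range query turns each expansion step from O(n) into roughly O(log n), which is the speedup the paper's preprocessing targets.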
Fast calculation method of a CGH for a patch model using a point-based method.
Ogihara, Y; Sakamoto, Y
2015-01-01
Holography is a three-dimensional display technology. Computer-generated holograms (CGHs) are created by simulating light propagation on a computer, and they are able to display a virtual object. There are mainly two types of calculation methods for CGHs: the point-based method and the fast-Fourier-transform (FFT)-based method. The FFT-based method is based on a patch model, and it is suited to accelerating the calculations as it computes the light propagation across a patch as a whole. The calculations of the point-based method are characterized by a high degree of parallelism, making it suited to acceleration on graphics processing units (GPUs), but the point-based method is not well suited to calculations with the patch model. This paper proposes a fast calculation algorithm for a patch model with the point-based method. The proposed method calculates each line on a patch as a whole, regardless of the number of points on the line. When implemented on a GPU, the proposed method's calculation time is shorter than that of the conventional point-based method.
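As background, the conventional point-based CGH calculation the paper accelerates sums a spherical-wave contribution from every object point at every hologram-plane sample. A minimal sketch (grid sizes, wavelength, and point placement are illustrative assumptions):

```python
import numpy as np

def cgh_point_based(points, amps, xs, ys, wavelength):
    """Point-based CGH: accumulate the spherical-wave contribution of
    every object point (x, y, z>0) at every hologram-plane sample
    (x, y, z=0).  Returns the complex field on the hologram plane."""
    k = 2 * np.pi / wavelength
    X, Y = np.meshgrid(xs, ys)
    field = np.zeros(X.shape, dtype=complex)
    for (px, py, pz), a in zip(points, amps):
        r = np.sqrt((X - px) ** 2 + (Y - py) ** 2 + pz ** 2)
        field += a * np.exp(1j * k * r) / r  # spherical wave amp/r
    return field
```

Each object point's contribution is independent, which is the parallelism that makes the point-based method GPU-friendly; the cost scales with (number of points) × (number of hologram samples), which is what the proposed line-at-a-time patch calculation reduces.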
Fujibuchi, Taketsugu; Matsumoto, Seiichi; Shimoji, Takashi; Ae, Keisuke; Tanizawa, Taisuke; Gokita, Tabu; Hayakawa, Keiko
2015-06-01
Endoprosthetic reconstruction of the proximal humerus is one of the standard procedures after resection of tumors of the proximal humerus and has been considered a reliable method to reconstruct the proximal humerus in recent reports. However, instability of the shoulder joint caused by loss of the rotator cuff and deltoid muscle function is often observed after such an endoprosthetic reconstruction. We performed the endoprosthesis suspension method with polypropylene monofilament knitted mesh. This suspension method, by which the endoprosthesis is suspended from the bone structure, was used after resection of tumors in 9 patients. We assessed postoperative stability of the shoulder joint by comparing these patients with 12 patients who underwent the conventional surgical technique, by which the mesh-wrapped endoprosthesis is attached only to soft tissue. In radiographic and physical evaluation, 4 of the 12 patients in the soft tissue reconstruction group showed shoulder joint instability. No patient in the suspension method group showed subluxation of the humeral prosthesis. The mean shoulder flexion was 35° and 65° and the mean shoulder abduction was 40° and 40° for the soft tissue reconstruction group and the suspension method group, respectively. Shoulder joint subluxation sometimes occurs because of elongation of the attached soft tissue in the conventional reconstruction with mesh, whereas no shoulder joint subluxation occurs after endoprosthetic reconstruction in the suspension method because the bone structure has no leeway for elongation. Excellent stability of our new method enables exercise of the surgical shoulder at an early stage, leading to improved range of shoulder joint motion. Copyright © 2015 Journal of Shoulder and Elbow Surgery Board of Trustees. Published by Elsevier Inc. All rights reserved.
Modeling of Landslides with the Material Point Method
DEFF Research Database (Denmark)
Andersen, Søren Mikkel; Andersen, Lars
2008-01-01
A numerical model for studying the dynamic evolution of landslides is presented. The numerical model is based on the Generalized Interpolation Material Point Method. A simplified slope with a house placed on top is analysed. An elasto-plastic material model based on the Mohr-Coulomb yield criterion...... is employed for the soil. The slide is triggered for the initially stable slope by removing the cohesion of the soil and the slide is followed from the triggering until a state of equilibrium is again reached. Parameter studies, in which the angle of internal friction of the soil and the degree...
A Searching Method of Candidate Segmentation Point in SPRINT Classification
Directory of Open Access Journals (Sweden)
Zhihao Wang
2016-01-01
Full Text Available The SPRINT algorithm is a classical algorithm for building a decision tree, a widely used method of data classification. However, the SPRINT algorithm has a high computational cost in the calculation of attribute segmentation. In this paper, an improved SPRINT algorithm is proposed, which searches for better candidate segmentation points for discrete and continuous attributes. The experimental results demonstrate that the proposed algorithm can reduce the computational cost and improve the efficiency of the algorithm by improving the segmentation of continuous and discrete attributes.
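The attribute-segmentation step whose cost is at issue can be sketched as follows: for one continuous attribute, scan candidate split points (midpoints between consecutive sorted values) and score each by weighted Gini index. This is a generic SPRINT-style sketch assuming binary 0/1 labels, not the paper's improved search:

```python
def best_split(values, labels):
    """Scan candidate split points of one continuous attribute and
    return (score, split) with the lowest weighted Gini index."""
    pairs = sorted(zip(values, labels))
    n = len(pairs)

    def gini(lab):
        if not lab:
            return 0.0
        p = sum(lab) / len(lab)   # assumes binary 0/1 labels
        return 2 * p * (1 - p)

    best = (float("inf"), None)
    for k in range(1, n):
        if pairs[k][0] == pairs[k - 1][0]:
            continue              # no split between equal values
        left = [l for _, l in pairs[:k]]
        right = [l for _, l in pairs[k:]]
        score = (len(left) * gini(left) + len(right) * gini(right)) / n
        split = (pairs[k][0] + pairs[k - 1][0]) / 2
        if score < best[0]:
            best = (score, split)
    return best
```

Evaluating every midpoint this way is what makes segmentation expensive for large continuous attributes; the paper's contribution is pruning this candidate set.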
Li, Jun; Woods, Susan L.; Healey, Sue; Beesley, Jonathan; Chen, Xiaoqing; Lee, Jason S.; Sivakumaran, Haran; Wayte, Nicci; Nones, Katia; Waterfall, Joshua J.; Pearson, John; Patch, Anne-Marie; Senz, Janine; Ferreira, Manuel A.; Kaurah, Pardeep; Mackenzie, Robertson; Heravi-Moussavi, Alireza; Hansford, Samantha; Lannagan, Tamsin R.M.; Spurdle, Amanda B.; Simpson, Peter T.; da Silva, Leonard; Lakhani, Sunil R.; Clouston, Andrew D.; Bettington, Mark; Grimpen, Florian; Busuttil, Rita A.; Di Costanzo, Natasha; Boussioutas, Alex; Jeanjean, Marie; Chong, George; Fabre, Aurélie; Olschwang, Sylviane; Faulkner, Geoffrey J.; Bellos, Evangelos; Coin, Lachlan; Rioux, Kevin; Bathe, Oliver F.; Wen, Xiaogang; Martin, Hilary C.; Neklason, Deborah W.; Davis, Sean R.; Walker, Robert L.; Calzone, Kathleen A.; Avital, Itzhak; Heller, Theo; Koh, Christopher; Pineda, Marbin; Rudloff, Udo; Quezado, Martha; Pichurin, Pavel N.; Hulick, Peter J.; Weissman, Scott M.; Newlin, Anna; Rubinstein, Wendy S.; Sampson, Jone E.; Hamman, Kelly; Goldgar, David; Poplawski, Nicola; Phillips, Kerry; Schofield, Lyn; Armstrong, Jacqueline; Kiraly-Borri, Cathy; Suthers, Graeme K.; Huntsman, David G.; Foulkes, William D.; Carneiro, Fatima; Lindor, Noralane M.; Edwards, Stacey L.; French, Juliet D.; Waddell, Nicola; Meltzer, Paul S.; Worthley, Daniel L.; Schrader, Kasmintan A.; Chenevix-Trench, Georgia
2016-01-01
Gastric adenocarcinoma and proximal polyposis of the stomach (GAPPS) is an autosomal-dominant cancer-predisposition syndrome with a significant risk of gastric, but not colorectal, adenocarcinoma. We mapped the gene to 5q22 and found loss of the wild-type allele on 5q in fundic gland polyps from affected individuals. Whole-exome and -genome sequencing failed to find causal mutations but, through Sanger sequencing, we identified point mutations in APC promoter 1B that co-segregated with disease in all six families. The mutations reduced binding of the YY1 transcription factor and impaired activity of the APC promoter 1B in luciferase assays. Analysis of blood and saliva from carriers showed allelic imbalance of APC, suggesting that these mutations lead to decreased allele-specific expression in vivo. Similar mutations in APC promoter 1B occur in rare families with familial adenomatous polyposis (FAP). Promoter 1A is methylated in GAPPS and sporadic FGPs and in normal stomach, which suggests that 1B transcripts are more important than 1A in gastric mucosa. This might explain why all known GAPPS-affected families carry promoter 1B point mutations but only rare FAP-affected families carry similar mutations, the colonic cells usually being protected by the expression of the 1A isoform. Gastric polyposis and cancer have been previously described in some FAP-affected individuals with large deletions around promoter 1B. Our finding that GAPPS is caused by point mutations in the same promoter suggests that families with mutations affecting the promoter 1B are at risk of gastric adenocarcinoma, regardless of whether or not colorectal polyps are present. PMID:27087319
A MODELING METHOD OF FLUTTERING LEAVES BASED ON POINT CLOUD
Directory of Open Access Journals (Sweden)
J. Tang
2017-09-01
Full Text Available Leaves falling gently or fluttering are a common phenomenon in natural scenes. The authenticity of falling leaves plays an important part in the dynamic modeling of natural scenes, and the falling-leaves model has wide applications in the fields of animation and virtual reality. We propose a novel modeling method for fluttering leaves based on point clouds in this paper. According to the shape and weight of the leaves and the wind speed, three basic trajectories of falling leaves are defined: rotation falling, roll falling, and screw-roll falling. At the same time, a parallel algorithm based on OpenMP is implemented to satisfy real-time requirements in practical applications. Experimental results demonstrate that the proposed method is amenable to the incorporation of a variety of desirable effects.
Robust maximum power point tracking method for photovoltaic cells
Energy Technology Data Exchange (ETDEWEB)
Chu, C.C.; Chen, C.L. [National Cheng Kung Univ., Taiwan (China). Dept. of Aeronautics and Astronautics
2007-07-01
This paper described a peak power tracking method that uses a sliding mode control system. The method was designed to track the maximum power point (MPP) of photovoltaic (PV) applications. The performance of the controller was demonstrated through a series of numerical studies that simulated a PV module designed to deliver a maximum of 60 W of power. An approaching control approach was used to guarantee that the system states reached the PV surface and produced the MPP consistently. A state-space averaging method was used to represent the system dynamics. The proposed control law ensured that the output voltage remained higher than the input voltage. The PV model and proposed approach were modelled and evaluated with respect to robustness to irradiance, temperature, and load. The study demonstrated that the sliding mode approach maintained maximum power output while remaining robust under various external conditions. The system attained steady state, for given irradiance levels, within milliseconds. The system was also tested under rapid changes of temperature, where the sliding mode approach was able to maintain output at optimum points. It was concluded that the approach nearly reaches the theoretical maximum power for known irradiance and temperature. 20 refs., 1 tab., 9 figs.
A Monocular SLAM Method to Estimate Relative Pose During Satellite Proximity Operations
2015-03-26
System specifications: 2.0 GB RAM, 4 processor cores, 64-bit system. A Point Grey Research Flea 3 USB 3.0 camera (model FL3-U3-13S2C/M-CS; 1.3 megapixels; Sony IMX035 CMOS 1/3” imaging sensor) was used to acquire the video sequences analyzed. A threshold is assigned based on the rate of relative motion. Automated initialization based on initial track point motion results in two pose estimates.
Estimation of Water Stress in Grapevines Using Proximal and Remote Sensing Methods
Directory of Open Access Journals (Sweden)
Alessandro Matese
2018-01-01
Full Text Available In light of climate change and its impacts on plant physiology, optimizing water usage and improving irrigation practices play a crucial role in crop management. In recent years, new optical remote sensing techniques have become widespread since they allow a non-invasive evaluation of plant water stress dynamics in a timely manner. Unmanned aerial vehicles (UAVs) currently represent one of the most advanced platforms for remote sensing applications. In this study, remote and proximal sensing measurements were compared with plant physiological variables, with the aim of testing innovative services and support systems for farmers to optimize irrigation practices and scheduling. The experiment, conducted in two vineyards located in Sardinia, Italy, consisted of two regulated deficit irrigation (RDI) treatments and two reference treatments maintained under stress and well-watered conditions. Indicators of crop water status (the Crop Water Stress Index, CWSI, and a linear thermal index) were calculated from UAV images and ground infrared thermal images and then related to physiological measurements. The CWSI values for moderate water deficit (RDI-1) were 0.72, 0.28 and 0.43 for ‘Vermentino’, ‘Cabernet’ and ‘Cagnulari’ respectively, while for severe water deficit (RDI-2) the values were 0.90, 0.34 and 0.51. The highest differences in net photosynthetic rate (Pn) and stomatal conductance (Gs) between RDI-1 and RDI-2 were observed in ‘Vermentino’. The highest significant correlations were found between CWSI and Pn (R = −0.80), ΦPSII (R = −0.49) and Fv’/Fm’ (R = −0.48) in ‘Cagnulari’, while the only significant correlation found in ‘Vermentino’ was between CWSI and non-photochemical quenching (NPQ) (R = 0.47). Pn, as well as the efficiency of light use by photosystem II (PSII), declined under stress conditions and as CWSI values increased. Under the experimental water stress conditions, grapevines were able to recover
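The CWSI used in this study is, in its standard form, the canopy temperature normalized between wet and dry reference temperatures. A minimal sketch (the example temperatures are illustrative, not the study's data):

```python
def cwsi(t_canopy, t_wet, t_dry):
    """Crop Water Stress Index from canopy temperature and wet/dry
    reference temperatures: 0 = well-watered, 1 = fully stressed."""
    if t_dry <= t_wet:
        raise ValueError("dry reference must exceed wet reference")
    return (t_canopy - t_wet) / (t_dry - t_wet)
```

In UAV thermal workflows like the one described, `t_canopy` comes from the thermal image pixels over the canopy, while the wet/dry references come from reference surfaces or from the extremes of the observed temperature distribution.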
Energy Technology Data Exchange (ETDEWEB)
Schultz-Fellenz, Emily S. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2015-09-09
A portion of LANL’s FY15 SPE objectives includes initial ground-based or ground-proximal investigations at the SPE Phase 2 site. The area of interest is the U2ez location in Yucca Flat. This collection serves as a baseline for discrimination of surface features and acquisition of topographic signatures prior to any development or pre-shot activities associated with SPE Phase 2. Our team originally intended to perform our field investigations using previously vetted ground-based (GB) LIDAR methodologies. However, the extended proposed time frame of the GB LIDAR data collection, and associated data processing time and delivery date, were unacceptable. After technical consultation and careful literature research, LANL identified an alternative methodology to achieve our technical objectives and fully support critical model parameterization. Very-low-altitude unmanned aerial systems (UAS) photogrammetry appeared to satisfy our objectives in lieu of GB LIDAR. The SPE Phase 2 baseline collection was used as a test of this UAS photogrammetric methodology.
Gao, Hao
2015-01-01
This work develops a general framework, the filtered iterative reconstruction (FIR) method, to incorporate an analytical reconstruction (AR) method into an iterative reconstruction (IR) method for enhanced CT image quality. Specifically, FIR is formulated as a combination of filtered data fidelity and sparsity regularization, and then solved by the proximal forward-backward splitting (PFBS) algorithm. As a result, the image reconstruction decouples data fidelity and image regularization with a two-step iterative scheme, during which an AR-projection step updates the filtered data fidelity term, while a denoising solver updates the sparsity regularization term. During the AR-projection step, the image is projected to the data domain to form the data residual and then reconstructed by a certain AR into a residual image, which is in turn weighted together with the previous image iterate to form the next image iterate. Since the eigenvalues of the AR-projection operator are close to unity, PFBS-based FIR has a fast convergenc...
Kawabata, Yusuke; Matsuo, Kosuke; Nezu, Yutaka; Kamiishi, Takayuki; Inaba, Yutaka; Saito, Tomoyuki
2017-09-01
Patients who have lytic bone lesions in their proximal femurs are at risk for pathological fracture. Lesions with high fracture risk are surgically treated using prophylactic osteosynthesis, whereas low-risk lesions are treated conservatively. However, it is difficult to discriminate between high- and low-risk lesions based on clinical and radiographic findings. The computed tomography (CT)-based finite element (FE) models are useful for predicting the fracture load on proximal femoral lytic lesions. FE models were constructed from the quantitative CT scans of the femurs using software that created individual bone shapes and density distributions. Three independent observers measured the lesion size, Mirels' score, and thickness of the proximal femur along the horizontal plane. The predictive risk values of the proximal femur measured using the CT-based FE analysis were statistically compared. The patients were divided into two groups (high and low risk). The mean fracture load was significantly higher in the high-risk group than in the low-risk group (5395 ± 525 N, 2622 ± 364 N, respectively, p = 0.0003). No significant differences in age, body weight, lesion size or Mirels' score were observed between groups. However, the thickness of the medial cortex in the high-risk group according to the FE analysis was significantly thinner than that in the low-risk group. Furthermore, the medial cortex thickness was positively correlated with the predicted fracture load. An optimal cut-off value of 3.67 mm for the thickness of the inner cortex resulted in 100% sensitivity and 75.1% specificity values for classifying the patients based on their fracture risk. Our findings indicate that the FE method is useful for the prediction of the pathological fracture. This method shows a versatile potential for the prediction of pathological fracture and might aid in judging the optimal treatment to prevent fracture. Copyright © 2017 The Japanese Orthopaedic Association
Directory of Open Access Journals (Sweden)
Yong Ye
2016-05-01
Full Text Available A novel method for proximity detection of moving targets (with high dielectric constants) using a large-scale (each sensor is 31 cm × 19 cm) planar capacitive sensor system (PCSS) is proposed. The capacitive variation with distance is derived, and a pair of electrodes in a planar capacitive sensor unit (PCSU) with a spiral shape is found to have better performance in sensitivity distribution homogeneity and dynamic range than three other shapes (comb shape, rectangular shape, and circular shape). A driving excitation circuit with a Clapp oscillator is proposed, and a capacitance measuring circuit with a sensitivity of 0.21 Vp−p/pF is designed. The results of static and dynamic experiments demonstrate that the voltage curves of the static experiments are similar to those of the dynamic experiments; therefore, the static data can be used to simulate the dynamic curves. The dynamic range of proximity detection for three projectiles is up to 60 cm, and the results of the subsequent static experiments show that the PCSU with four neighboring units has the highest sensitivity (the sensitivities of the other units are at least 4% lower); when the attack angle decreases, the intensity of the sensor signal increases. The proposed method leads to the design of a feasible moving-target detector with simple structure and low cost, which can be applied in interception systems.
Phase-integral method allowing nearlying transition points
Fröman, Nanny
1996-01-01
The efficiency of the phase-integral method developed by the present authors has been shown both analytically and numerically in many publications. With the inclusion of supplementary quantities, closely related to new Stokes constants and obtained with the aid of the comparison equation technique, important classes of problems in which transition points may approach each other become accessible to accurate analytical treatment. The exposition in this monograph is of a mathematical nature but has important physical applications, some examples of which are found in the adjoined papers. Thus, we would like to emphasize that, although we aim at mathematical rigor, our treatment is made primarily with physical needs in mind. To introduce the reader into the background of this book, we start by describing the phase-integral approximation of arbitrary order generated from an unspecified base function. This is done in Chapter 1, which is reprinted, after minor changes, from a review article. Chapter 2 is the re...
Directory of Open Access Journals (Sweden)
Lin Jin
2016-01-01
Full Text Available Background. The use of locking plates has gained popularity in the treatment of proximal humeral fractures. However, the complication rates remain high. A biomechanical study suggested that subchondral screw-tip abutment significantly increases the stability of the implant. We present a simple method to obtain the proper screw length with the depth gauge in elderly patients and compare its clinical effects with the traditional measuring method. Methods. 40 patients were separated into two groups according to the two surgical methods: the probing method with the depth gauge and the traditional measuring method. The intraoperative indexes and postoperative complications were recorded. The Constant and Murley score was used for functional assessment in the 12th month. Results. Operative time and intraoperative blood loss showed no statistical differences. X-ray exposure time and the number of patients with a screw path penetrating the articular cartilage differed significantly. Postoperative complications and the Constant and Murley score showed no statistical differences. Conclusions. The probing method with a depth gauge is an appropriate alternative for determining the screw length, which can make the screw tip adjoin the subchondral bone, keep the articular surface of the humeral head intact, and at the same time effectively avoid frequent X-ray fluoroscopy and screw adjustment.
Kraft, Kate H.; Shukla, Aseem R.; Canning, Douglas A.
2011-01-01
Hypospadias results from abnormal development of the penis that leaves the urethral meatus proximal to its normal glanular position. Meatal position may be located anywhere along the penile shaft, but more severe forms of hypospadias may have a urethral meatus located at the scrotum or perineum. The spectrum of abnormalities may also include ventral curvature of the penis, a dorsally redundant prepuce, and atrophic corpus spongiosum. Due to the severity of these abnormalities, proximal hypospadias often requires more extensive reconstruction in order to achieve an anatomically and functionally successful result. We review the spectrum of proximal hypospadias etiology, presentation, correction, and possible associated complications. PMID:21516286
de Agustin, Jose Alberto; Mejia, Hernan; Viliani, Dafne; Marcos-Alberca, Pedro; Gomez de Diego, Jose Juan; Nuñez-Gil, Ivan Javier; Almeria, Carlos; Rodrigo, Jose Luis; Luaces, Maria; Garcia-Fernandez, Miguel Angel; Macaya, Carlos; Perez de Isla, Leopoldo
2014-08-01
The two-dimensional (2D) proximal isovelocity surface area (PISA) method has important technical limitations for mitral valve orifice area (MVA) assessment in mitral stenosis (MS), mainly the geometric assumptions of PISA shape and the requirement of an angle correction factor. Single-beat real-time three-dimensional (3D) color Doppler imaging allows the direct measurement of PISA without geometric assumptions or the requirement of an angle correction factor. The aim of this study was to validate this method in patients with rheumatic MS. Sixty-three consecutive patients with rheumatic MS were included. MVA was assessed using the transthoracic 2D and 3D PISA methods. Planimetry of MVA (2D and 3D) and the pressure half-time method were used as reference methods. The 3D PISA method correlated better with the reference methods (with 2D planimetry, r = 0.85) than the 2D PISA method (with 2D planimetry, r = 0.63), and a significant underestimation of MVA by the 2D PISA method was observed. A high percentage (30%) of patients with nonsevere MS by 3D planimetry were misclassified by the 2D PISA method as having severe MS, whereas the 3D PISA method had 94% agreement with 3D planimetry. Good intra- and interobserver agreement for 3D PISA measurements was observed, with intraclass correlation coefficients of 0.95 and 0.90, respectively. MVA assessment using PISA by single-beat real-time 3D color Doppler echocardiography is feasible in the clinical setting and more accurate than the conventional 2D PISA method. Copyright © 2014 American Society of Echocardiography. Published by Mosby, Inc. All rights reserved.
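The hemispheric 2D PISA relation underlying this abstract can be sketched in a few lines. The numeric inputs below are purely illustrative, and the funnel-angle factor is the angle correction the abstract notes 2D PISA requires (which 3D PISA avoids):

```python
import math

def pisa_mva_2d(r_cm, v_aliasing_cm_s, v_peak_cm_s, funnel_angle_deg):
    """Mitral valve area (cm^2) by the 2D PISA method.

    Assumes a hemispheric isovelocity shell of radius r; the factor
    (alpha / 180) corrects for the funnel opening angle of the inflow.
    """
    flow_cm3_s = 2.0 * math.pi * r_cm ** 2 * v_aliasing_cm_s
    return flow_cm3_s / v_peak_cm_s * (funnel_angle_deg / 180.0)

# Hypothetical values: r = 1.0 cm, aliasing velocity 38 cm/s,
# peak transmitral velocity 150 cm/s, funnel angle 120 degrees.
mva = pisa_mva_2d(1.0, 38.0, 150.0, 120.0)
```

A smaller funnel angle shrinks the computed area, which is one way the 2D method can misclassify severity when the correction is inaccurate.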
A Radical Method for Calculating Muzzle Motion from Proximity Sensor Data
2013-09-01
[Figure residue: displacement (mm) and pointing-angle (mrad) traces versus time (ms), horizontal (+ right) and vertical (+ up) components, with shot exit marked; 40 mm Jump Test, Shot 27698, M203, M433.]
System and method for confining an object to a region of fluid flow having a stagnation point
Schroeder, Charles M. (Inventor); Shaqfeh, Eric S. G. (Inventor); Babcock, Hazen P. (Inventor); Chu, Steven (Inventor)
2006-01-01
A device for confining an object to a region proximate to a fluid flow stagnation point includes one or more inlets for carrying the fluid into the region, one or more outlets for carrying the fluid out of the region, and a controller, in fluidic communication with the inlets and outlets, for adjusting the motion of the fluid to produce a stagnation point in the region, thereby confining the object to the region. Applications include, for example, prolonged observation of the object, manipulation of the object, etc. The device optionally may employ a feedback control mechanism, a sensing apparatus (e.g., for imaging), and a storage medium for storing, and a computer for analyzing and manipulating, data acquired from observing the object. The invention further provides methods of using such a device and system in a number of fields, including biology, chemistry, physics, material science, and medical science.
Quantum-Mechanical Methods for Quantifying Incorporation of Contaminants in Proximal Minerals
Directory of Open Access Journals (Sweden)
Lindsay C. Shuller-Nickles
2014-07-01
Full Text Available Incorporation reactions play an important role in dictating immobilization and release pathways for chemical species in low-temperature geologic environments. Quantum-mechanical investigations of incorporation seek to characterize the stability and geometry of incorporated structures, as well as the thermodynamics and kinetics of the reactions themselves. For a thermodynamic treatment of incorporation reactions, a source of the incorporated ion and a sink for the released ion are necessary. These sources/sinks in a real geochemical system can be solids, but more commonly, they are charged aqueous species. In this contribution, we review the current methods for ab initio calculations of incorporation reactions, many of which do not consider incorporation from aqueous species. We detail a recently-developed approach for the calculation of incorporation reactions, expand on the modeling of the interaction of periodic solids with aqueous source and sink phases, and present new research using this approach. To model these interactions, a systematic series of calculations must be done to transform periodic solid source and sink phases to aqueous-phase clusters. Examples of this process are provided for three case studies: (1) neptunyl incorporation into studtite and boltwoodite: for the layered boltwoodite, the incorporation energies are smaller (more favorable) for reactions using environmentally relevant source and sink phases (i.e., ΔErxn(oxides) > ΔErxn(silicates) > ΔErxn(aqueous)). Estimates of the solid-solution behavior of Np5+/P5+- and U6+/Si4+-boltwoodite and Np5+/Ca2+- and U6+/K+-boltwoodite solid solutions are used to predict the limit of Np-incorporation into boltwoodite (172 and 768 ppm at 300 °C, respectively); (2) uranyl and neptunyl incorporation into carbonates and sulfates: for both carbonates and sulfates, it was found that actinyl incorporation into a defect site is more favorable than incorporation into defect-free periodic
Directory of Open Access Journals (Sweden)
Kong Minxiu
2016-01-01
Full Text Available Optimal point-to-point motion planning of a flexible parallel manipulator is investigated in this paper, taking the 3RRR parallel manipulator as the object of study. First, an optimal point-to-point motion planning problem is constructed with consideration of the rigid-flexible coupling dynamic model and actuator dynamics. Then, the multi-interval Legendre–Gauss–Radau (LGR) pseudospectral method is introduced to transform the optimal control problem into a Nonlinear Programming (NLP) problem. At last, simulations and experiments were carried out on the flexible parallel manipulator. Compared with the line motion of quintic polynomial planning, the proposed method can constrain the flexible displacement amplitude and suppress the residual vibration.
Simulation Method of Cumulative Flow without Axial Stagnation Point
Directory of Open Access Journals (Sweden)
I. V. Minin
2015-01-01
Full Text Available The paper describes a developed analytical model of the non-stationary formation of a cumulative jet without an axial stagnation point. It shows that it is possible to control the weight, size, speed, and momentum of the jet with parameter values that are not achievable in the classical mode of jet formation. The considered jet-formation principle can be used to conduct laboratory simulation of astro-like plasma jets.
Comparing Single-Point and Multi-point Calibration Methods in Modulated DSC
Energy Technology Data Exchange (ETDEWEB)
Van Buskirk, Caleb Griffith [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2017-06-14
Heat capacity measurements for High Density Polyethylene (HDPE) and Ultra-high Molecular Weight Polyethylene (UHMWPE) were performed using Modulated Differential Scanning Calorimetry (mDSC) over a wide temperature range, -70 to 115 °C, with a TA Instruments Q2000 mDSC. The default calibration method for this instrument involves measuring the heat capacity of a sapphire standard at a single temperature near the middle of the temperature range of interest. However, this method often fails for temperature ranges that exceed a 50 °C interval, likely because of drift or non-linearity in the instrument's heat capacity readings over time or over the temperature range. Therefore, in this study a method was developed to calibrate the instrument using multiple temperatures and the same sapphire standard.
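The difference between the two calibration strategies can be sketched as follows. The K(T) values are hypothetical stand-ins for sapphire-derived heat-capacity correction factors, not measured data; the point is that the multi-point scheme tracks drift across the range where a single constant cannot:

```python
def calib_factor_single(k_mid):
    """Single-point calibration: one constant K for the whole range."""
    return lambda temp_c: k_mid

def calib_factor_multi(points):
    """Multi-point calibration: piecewise-linear interpolation of the
    sapphire-derived factor K(T) between calibration temperatures."""
    pts = sorted(points)  # (temperature_C, K) pairs
    def k(temp_c):
        if temp_c <= pts[0][0]:
            return pts[0][1]
        if temp_c >= pts[-1][0]:
            return pts[-1][1]
        for (t0, k0), (t1, k1) in zip(pts, pts[1:]):
            if t0 <= temp_c <= t1:
                return k0 + (k1 - k0) * (temp_c - t0) / (t1 - t0)
    return k

# Hypothetical K values measured against sapphire at four temperatures
k_multi = calib_factor_multi([(-70, 1.08), (0, 1.02), (60, 0.99), (115, 0.97)])
k_single = calib_factor_single(1.02)
```

Under this sketch the single-point factor is exact only near the mid-range temperature, mirroring the failure mode the abstract describes for intervals beyond about 50 °C.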
Development of a Multi-Point Microwave Interferometry (MPMI) Method
Energy Technology Data Exchange (ETDEWEB)
Specht, Paul Elliott [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Cooper, Marcia A. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Jilek, Brook Anton [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
2015-09-01
A multi-point microwave interferometer (MPMI) concept was developed for non-invasively tracking a shock, reaction, or detonation front in energetic media. Initially, a single-point, heterodyne microwave interferometry capability was established. The design, construction, and verification of the single-point interferometer provided a knowledge base for the creation of the MPMI concept. The MPMI concept uses an electro-optic (EO) crystal to impart a time-varying phase lag onto a laser at the microwave frequency. Polarization optics converts this phase lag into an amplitude modulation, which is analyzed in a heterodyne interferometer to detect Doppler shifts in the microwave frequency. A version of the MPMI was constructed to experimentally measure the frequency of a microwave source through the EO modulation of a laser. The successful extraction of the microwave frequency proved the underlying physical concept of the MPMI design, and highlighted the challenges associated with the longer microwave wavelength. The frequency measurements made with the current equipment contained too much uncertainty for an accurate velocity measurement. Potential alterations to the current construction are presented to improve the quality of the measured signal and enable multiple accurate velocity measurements.
Esfandiar, Habib; Habibnejad Korayem, Moharam
2017-01-01
This paper aims at planning an optimal point-to-point path for a flexible manipulator under large deformation. For this purpose, the researchers use a direct method and a meta-heuristic optimization process. In this paper, the maximum load carried by the manipulator and the minimum transmission time are taken as objective functions of the optimization process to get optimal path profiles. Kinematic constraints, the maximum velocity and acceleration, the dynamic constraint of the maximum torque ...
Djorojevic, Mirjana; Roldán, Concepción; Botella, Miguel; Alemán, Inmaculada
2016-01-01
The current study was undertaken to test the validity and reproducibility of the Purkait triangle method and some alternative proposals for sex prediction from the proximal femur in the adult population of Spain. To that end, sexual dimorphism of the maximum femoral head diameter and the minimum femoral neck diameter were also evaluated. The study was conducted on 186 femora (109 males and 77 females) taken from the San José collection of identified individuals (Southern Spain). Discriminant function analyses (DFA) employing the jackknife procedure for cross-validations were considered. Overall, more than 94% of individuals of both sexes were correctly classified. The most dimorphic single variable from the triangle method was the intertrochanteric apex distance (BC) that reached 85.5% accuracy, falling below those obtained for the femoral head and femoral neck diameter, respectively, (89.8 and 91.9%). Combining BC with the neck diameter, the predictive ability increased to 92.5%; when femoral head diameter was added to the latter two, the classification success rate improved further up to 94.6% (94.1% after cross-validation). We conclude that the classification success rates of the Purkait's method remained considerably below any of those obtained with the models proposed in the present study which proved to be a much better and more reliable choice both as single predictors and in combination with other variables.
Novel method for rail wear inspection based on the sparse iterative closest point method
Yi, Bing; Yang, Yue; Yi, Qian; Dai, Wanlin; Li, Xiongbing
2017-12-01
As trains become progressively faster, it is becoming imperative to automatically and precisely inspect the rail profile of high-speed railways to ensure their safety and reliability. To realize this, a new method based on the sparse iterative closest point method is proposed in this study. Moreover, the noncontact method is mainly used for convenience and practicality. First, a line laser-based measurement system is constructed, and the position of the line laser is calculated to ensure that both the top and sides of the rail are in range of the line laser. Then, the measured data of the rail profile are separated into a baseline part and worn part. The baseline is involved in registering the measured data and reference profile by the sparse iterative closest point method. The worn part is then transformed by the same matrix of the baseline part. Finally, the Hausdorff distance is introduced to measure the distance between the wear model and reference model. The experimental results demonstrate the effectiveness and efficiency of the proposed method.
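The final comparison step in the abstract rests on the Hausdorff distance between the registered wear model and the reference model. A minimal pure-Python sketch, with a hypothetical worn profile standing in for measured rail data, is:

```python
def hausdorff(set_a, set_b):
    """Symmetric Hausdorff distance between two 2-D point sets:
    the largest distance from any point in one set to its nearest
    neighbor in the other set."""
    def dist(p, q):
        return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5
    def directed(xs, ys):
        return max(min(dist(x, y) for y in ys) for x in xs)
    return max(directed(set_a, set_b), directed(set_b, set_a))

reference = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]       # reference profile
worn = [(0.0, -0.1), (1.0, -0.4), (2.0, -0.2)]          # hypothetical worn part
wear = hausdorff(reference, worn)
```

Here the distance reports the deepest wear (0.4 in these made-up units); in practice it would be evaluated after the sparse-ICP registration aligns the baseline part of the profile.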
Eubanks-Carter, Catherine; Gorman, Bernard S; Muran, J Christopher
2012-01-01
Analysis of change points in psychotherapy process could increase our understanding of mechanisms of change. In particular, naturalistic change point detection methods that identify turning points or breakpoints in time series data could enhance our ability to identify and study alliance ruptures and resolutions. This paper presents four categories of statistical methods for detecting change points in psychotherapy process: criterion-based methods, control chart methods, partitioning methods, and regression methods. Each method's utility for identifying shifts in the alliance is illustrated using a case example from the Beth Israel Psychotherapy Research program. Advantages and disadvantages of the various methods are discussed.
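Of the four categories, control-chart methods are the most mechanical to sketch. The CUSUM detector below and the alliance-rating series it scans are illustrative assumptions, not the program's actual procedure or data:

```python
def cusum_change_point(series, k=0.5, h=4.0):
    """Two-sided CUSUM control-chart detector: returns the first index
    at which the cumulative standardized deviation from the baseline
    mean exceeds h, or None if no shift is flagged."""
    baseline = series[:5]
    mean = sum(baseline) / len(baseline)
    var = sum((x - mean) ** 2 for x in baseline) / len(baseline)
    sd = var ** 0.5 or 1.0  # guard against a flat baseline
    s_hi = s_lo = 0.0
    for i, x in enumerate(series):
        z = (x - mean) / sd
        s_hi = max(0.0, s_hi + z - k)  # accumulates upward drift
        s_lo = max(0.0, s_lo - z - k)  # accumulates downward drift
        if s_hi > h or s_lo > h:
            return i
    return None

# Hypothetical session-by-session alliance ratings with a shift at index 10
ratings = [3.0, 3.1, 2.9, 3.0, 3.2, 3.0, 2.9, 3.1, 3.0, 2.9] + [4.5] * 8
shift_at = cusum_change_point(ratings)
```

The slack parameter k and threshold h trade off sensitivity against false alarms, which is the same trade-off the paper weighs when comparing method categories.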
2013-05-29
together with the Novatel OEMV-1 GPS receiver and patch antenna, complete the guidance, navigation and control suite. During proximity operations, the... are produced by the upper atmospheric particles colliding with the CubeSat. The worst-case scenario of solar max activity is modeled for the cal... atmosphere at the nominal altitude of 500 km gives a composition of 94% Oxygen and 6% Nitrogen. The number and mass densities are n = 3.769 × 10^14 m^−3 and ρ
Improved fixed point iterative method for blade element momentum computations
DEFF Research Database (Denmark)
Sun, Zhenye; Shen, Wen Zhong; Chen, Jin
2017-01-01
, the convergence ability of the iterative method will be greatly enhanced. Numerical tests have been performed under different combinations of local tip speed ratio, local solidity, local twist and airfoil aerodynamic data. Results show that the simple iterative methods have a good convergence ability which... to the physical solution, especially for the locations near the blade tip and root where the failure rate of the iterative method is high. The stability and accuracy of aerodynamic calculations and optimizations are greatly reduced due to this problem. The intrinsic mechanisms leading to convergence problems...
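The relaxation idea behind such improved fixed-point iterations can be illustrated generically. The update rule and sample map below are a sketch of under-relaxation for an x = f(x) problem, not the paper's exact blade-element-momentum scheme:

```python
import math

def relaxed_fixed_point(f, x0, relax=0.5, tol=1e-10, max_iter=500):
    """Under-relaxed fixed-point iteration x <- (1 - w) * x + w * f(x).
    Damping the update stabilizes maps whose plain iteration
    oscillates or diverges near the solution."""
    x = x0
    for _ in range(max_iter):
        x_new = (1.0 - relax) * x + relax * f(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return None  # failed to converge within max_iter

# Hypothetical update map: plain iteration (relax = 1.0) falls into an
# oscillation because |f'| > 1 at the fixed point, but damping converges.
f = lambda x: math.exp(-3.0 * x)
a = relaxed_fixed_point(f, 0.5, relax=0.5)
```

With relax = 1.0 the same call returns None, which mirrors the failure mode near blade tip and root described in the abstract, while the damped version converges quickly.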
De la Sen, Manuel; Abbas, Mujahid; Saleem, Naeem
2016-01-01
This paper discusses some convergence properties in fuzzy ordered proximal approaches defined by [Formula: see text]-sequences of pairs, where [Formula: see text] is a surjective self-mapping and [Formula: see text] where A and B are nonempty subsets of an abstract nonempty set X and [Formula: see text] is a partially ordered non-Archimedean fuzzy metric space which is endowed with a fuzzy metric M, a triangular norm * and an ordering [Formula: see text] The fuzzy set M takes values in a sequence or set [Formula: see text] where the elements of the so-called switching rule [Formula: see text] are defined from [Formula: see text] to a subset of [Formula: see text] Such a switching rule selects a particular realization of M at the nth iteration and is parameterized by a growth evolution sequence [Formula: see text] and a sequence or set [Formula: see text] which belongs to the so-called [Formula: see text]-lower-bounding mappings, which are defined from [0, 1] to [0, 1]. Some application examples concerning discrete systems under switching rules and best approximation solvability of algebraic equations are discussed.
Novel Ratio Subtraction and Isoabsorptive Point Methods for ...
African Journals Online (AJOL)
Directory of Open Access Journals (Sweden)
Yano Seiji
2011-05-01
Full Text Available Abstract Here we report a method of anastomosis based on the double stapling technique (hereinafter, DST) using a trans-oral anvil delivery system (EEA™ OrVil™) for reconstructing the esophagus and lifted jejunum following laparoscopic total gastrectomy or proximal gastric resection. As a basic technique, laparoscopic total gastrectomy employed Roux-en-Y reconstruction, laparoscopic proximal gastrectomy employed double tract reconstruction, and end-to-side anastomosis was used for the cut-off stump of the esophagus and lifted jejunum. We used the EEA™ OrVil™ as a device that permitted mechanical purse-string suturing similarly to conventional EEA, together with the Endo Surgitie. After the gastric lymph node dissection, the esophagus was cut off using an automated stapler. The EEA™ OrVil™ was orally and slowly inserted from the valve tip, and a small hole was created at the tip of the obliquely cut-off stump with scissors to let the valve tip pass through. The yarn was cut to disconnect the anvil from the tube, and the anvil head was retained in the esophagus. The Endo Surgitie was inserted at the right subcostal margin, and after the loop-shaped thread was wrapped around the esophageal stump opening, assisting Maryland forceps inserted at the left subcostal margin and left abdomen were used to grasp the left and right esophageal stump. The surgeon inserted anvil-grasping forceps into the right abdomen and, after grasping the esophagus with the forceps, tightened the Endo Surgitie, thereby completing the purse-string suture on the esophageal stump. The main unit of the automated stapler was inserted from the cut-off stump of the lifted jejunum, and a trocar was made to pass through. To prevent dropout of the small intestines from the automated stapler, the automated stapler and the lifted jejunum were fastened with silk thread, the abdomen was again inflated, and the lifted jejunum was led into the abdominal cavity. When it was confirmed that the automated stapler and center rod
Unemployment estimation: Spatial point referenced methods and models
Pereira, Soraia
2017-06-26
The Portuguese Labor Force Survey, from the 4th quarter of 2014 onwards, started geo-referencing the sampling units, namely the dwellings in which the surveys are carried out. This opens new possibilities in analysing and estimating unemployment and its spatial distribution across any region. The labor force survey chooses, according to a preestablished sampling criterion, a certain number of dwellings across the nation and surveys the number of unemployed in these dwellings. Based on this survey, the National Statistical Institute of Portugal presently uses direct estimation methods to estimate the national unemployment figures. Recently, there has been increased interest in estimating these figures in smaller areas. Direct estimation methods, due to reduced sampling sizes in small areas, tend to produce fairly large sampling variations, therefore model based methods, which tend to
Estimation of focusing operators using the Common Focal Point method
Bolte, J.F.B.
2003-01-01
The objective of this PhD project is to present a data-driven method to determine one-way focusing operators. Focusing operators are the input for imaging a subsurface structure from measurements at the surface. They can be used in imaging the earth's interior, and also in non-destructive imaging of
Surveying method of points on axis of cylindrical pipe
Directory of Open Access Journals (Sweden)
Xu Jinjun
2012-02-01
Full Text Available Determining the axis line of a pipe or a structural building with a round normal section is a recurrent task in actual building works. The verticality measurement of chimneys and TV towers and the deflection measurement of pipes are important for finding deviations from the design during construction, installation and completion. This paper discusses the measurement technique and data-processing method for the axis line of a round normal section based on reflectorless distance measurement. Simulation and practical results showed its feasibility and high efficiency.
Hamilton, H. H., II
1982-01-01
An approximate method for calculating heating rates at general three-dimensional stagnation points is presented. The application of the method for making stagnation-point heating calculations during atmospheric entry is described. Comparisons with results from boundary layer calculations indicate that the method should be sufficiently accurate for engineering-type design and analysis applications.
Bakkum, Arjan J. T.; Janssen, Thomas W. J.; Rolf, Marijn P.; Roos, Jan C.; Burcksen, Jos; Knol, Dirk L.; de Groot, Sonja
Purpose: To assess the intra- and inter-rater reliability of a standardized protocol for measuring proximal tibia and distal femur bone mineral density (BMD) using dual-energy X-ray absorptiometry (DXA). Methods: Ten able-bodied individuals (7 males) participated in this study. During one
Directory of Open Access Journals (Sweden)
Gilles Drogue
2016-12-01
New hydrological insights for the region: Results show that when streamflow is known at the outlet of a catchment, the optimal rainfall input for a lumped catchment model is mostly computed with a subset of raingages. When streamflow is unknown at the outlet of a catchment, a regionalization approach of model parameter values based on spatial proximity is not able to take advantage of neighbor-catchment-based knowledge of the optimal rainfall input. This report encourages the search for a catchment-model regionalization approach based on spatial proximity which makes no explicit use of measured rainfall to estimate streamflow at an ungauged location.
A Fixed-Point of View on Gradient Methods for Big Data
Directory of Open Access Journals (Sweden)
Alexander Jung
2017-09-01
Full Text Available Interpreting gradient methods as fixed-point iterations, we provide a detailed analysis of those methods for minimizing convex objective functions. Due to their conceptual and algorithmic simplicity, gradient methods are widely used in machine learning for massive data sets (big data). In particular, stochastic gradient methods are considered the de facto standard for training deep neural networks. Studying gradient methods within the realm of fixed-point theory provides us with powerful tools to analyze their convergence properties. In particular, gradient methods using inexact or noisy gradients, such as stochastic gradient descent, can be studied conveniently using well-known results on inexact fixed-point iterations. Moreover, as we demonstrate in this paper, the fixed-point approach allows an elegant derivation of accelerations for basic gradient methods. In particular, we will show how gradient descent can be accelerated by a fixed-point preserving transformation of an operator associated with the objective function.
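The fixed-point view described here is easy to make concrete: plain gradient descent iterates the map T(x) = x - lr * grad(x), whose fixed points are exactly the stationary points, and heavy-ball momentum is one acceleration of that map. A minimal scalar sketch, with step sizes chosen purely for illustration:

```python
def grad_descent(grad, x0, lr, steps):
    """Plain gradient descent viewed as the fixed-point iteration
    x <- T(x) with T(x) = x - lr * grad(x)."""
    x = x0
    for _ in range(steps):
        x = x - lr * grad(x)
    return x

def heavy_ball(grad, x0, lr, beta, steps):
    """Heavy-ball (momentum) acceleration of the same fixed-point map:
    x <- x - lr * grad(x) + beta * (x - x_prev)."""
    x_prev, x = x0, x0
    for _ in range(steps):
        x, x_prev = x - lr * grad(x) + beta * (x - x_prev), x
    return x

# Minimize f(x) = 0.5 * x^2 (gradient = x); the unique minimizer is 0.
g = lambda x: x
plain = grad_descent(g, 10.0, lr=0.1, steps=100)
accel = heavy_ball(g, 10.0, lr=0.1, beta=0.5, steps=100)
```

On this quadratic the plain iteration contracts by a factor 0.9 per step, while the momentum iteration contracts roughly like sqrt(0.5) per step, so after 100 steps the accelerated iterate is many orders of magnitude closer to the fixed point.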
Wang, Zhao-Hui; Deng, Dun; Chen, Li-Qiu; Zhang, Wei-Kang; Yan, Hai-Bo; Chen, Xiao-Yu; Liang, Zhong; Jiang, Zheng-Hui
2013-05-01
To evaluate the clinical effects of the combined method of minimally invasive percutaneous proximal humeral internal locking system (PHILOS) fixation and injectable bone for the treatment of proximal humerus fractures in elderly patients. From January 2006 to January 2012, 80 patients with proximal humerus fractures were randomly divided into two groups (n = 40). The patients in the research group were treated with minimally invasive PHILOS fixation combined with injectable bone, including 20 males and 20 females, with an average age of (68.4 +/- 11.9) years; according to the AO classification, 2 cases of type A1, 3 cases of type A2, 6 cases of type B1, 7 cases of type B2, 9 cases of type B3, 6 cases of type C1, 7 cases of type C2. The patients in the control group were treated with PHILOS fixation alone, including 18 males and 22 females, with an average age of (65.4 +/- 10.7) years; according to the AO classification, 3 cases of type A1, 4 cases of type A2, 5 cases of type B1, 8 cases of type B2, 10 cases of type B3, 5 cases of type C1, and 5 cases of type C2. The BMD, satisfaction rate, postoperative complications, bone healing time, and Constant-Murley score in the two groups were reviewed and compared. In the research group, no patients had necrosis of the humeral head, 1 patient had shoulder varus, 1 patient had internal fixation loosening, 36 patients were satisfied with the treatment results, BMD was (1.013 +/- 0.109) g/cm2, bone healing time averaged (12.00 +/- 3.79) weeks, and the Constant-Murley score was 97.2 +/- 4.6. In the control group, 3 patients had necrosis of the humeral head, 5 patients had shoulder varus, 6 patients had internal fixation loosening, 32 patients were satisfied with the treatment results, BMD was (0.812 +/- 0.089) g/cm2, bone healing time averaged (20.00 +/- 8.67) weeks, and the Constant-Murley score was 78.5 +/- 3.2. The results of BMD, satisfaction rate, postoperative complications, bone healing time, and Constant-Murley score in the research group were better
Zhao, Yu; Shi, Chen-Xiao; Kwon, Ki-Chul; Piao, Yan-Ling; Piao, Mei-Lan; Kim, Nam
2018-03-01
We propose a fast calculation method for a computer-generated hologram (CGH) of real objects that uses a point cloud gridding method. The depth information of the scene is acquired using a depth camera and the point cloud model is reconstructed virtually. Because each point of the point cloud is distributed precisely to the exact coordinates of each layer, each point of the point cloud can be classified into grids according to its depth. A diffraction calculation is performed on the grids using a fast Fourier transform (FFT) to obtain a CGH. The computational complexity is reduced dramatically in comparison with conventional methods. The feasibility of the proposed method was confirmed by numerical and optical experiments.
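The gridding step, classifying each point of the cloud into a depth layer before the per-layer FFT diffraction, can be sketched as follows. The tiny point cloud is hypothetical, and the layer count would in practice match the depth resolution of the camera:

```python
def grid_by_depth(points, num_layers):
    """Classify point-cloud points (x, y, z) into depth layers: the
    'gridding' step, after which each layer's (x, y) points can be
    diffracted in a single FFT pass instead of point by point."""
    zs = [p[2] for p in points]
    z_min, z_max = min(zs), max(zs)
    dz = (z_max - z_min) / num_layers or 1.0  # guard: all-equal depths
    layers = {i: [] for i in range(num_layers)}
    for x, y, z in points:
        idx = min(int((z - z_min) / dz), num_layers - 1)
        layers[idx].append((x, y))
    return layers

# Hypothetical 4-point cloud split into 2 depth layers
pts = [(0, 0, 0.0), (1, 0, 0.4), (0, 1, 0.9), (1, 1, 1.0)]
layers = grid_by_depth(pts, 2)
```

The complexity gain claimed in the abstract comes from replacing one diffraction per point with one FFT per occupied layer.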
Interior Point Method Evaluation for Reactive Power Flow Optimization in the Power System
Directory of Open Access Journals (Sweden)
Zbigniew Lubośny
2013-03-01
Full Text Available The paper verifies the performance of an interior point method in reactive power flow optimization in the power system. The study was conducted on a 28 node CIGRE system, using the interior point method optimization procedures implemented in Power Factory software.
Detection of Dew-Point by substantial Raman Band Frequency Jumps (A new Method)
DEFF Research Database (Denmark)
Hansen, Susanne Brunsgaard; Berg, Rolf W.; Stenby, Erling Halfdan
Detection of Dew-Point by substantial Raman Band Frequency Jumps (A new Method). See poster at http://www.kemi.dtu.dk/~ajo/rolf/jumps.pdf
Directory of Open Access Journals (Sweden)
Ali Behnamfard
2017-06-01
Full Text Available Proximate analysis is the most common form of coal evaluation and reveals the quality of a coal sample. It examines four factors: moisture, ash, volatile matter (VM), and fixed carbon (FC). Each factor is determined through a distinct experimental procedure under ASTM-specified conditions. These determinations are time consuming and require a significant amount of laboratory equipment. The calorific value is one of the most important properties of a solid fuel, and its experimental determination requires special instrumentation and a highly trained analyst to operate it. This paper develops mathematical and ANFIS models for estimating two of the proximate analysis factors from the other two. Furthermore, the calorific value of coal samples is estimated from the proximate analysis factors using multivariable regression (Minitab 16 software package) and ANFIS (Matlab software package). The results indicate that ANFIS is a more powerful tool than the multivariable regression method for estimating proximate analysis factors and calorific value. The following equation estimates the calorific value of coal samples with high precision: Calorific value (btu/lb) = 12204 - 170 Moisture + 46.8 FC - 127 Ash.
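The regression reported in the abstract can be applied directly; the sample input percentages below are hypothetical.

```python
def calorific_value(moisture, fixed_carbon, ash):
    """Regression from the abstract (btu/lb), inputs in wt%:
    CV = 12204 - 170*Moisture + 46.8*FC - 127*Ash."""
    return 12204 - 170 * moisture + 46.8 * fixed_carbon - 127 * ash

# Hypothetical coal sample: 5% moisture, 55% fixed carbon, 10% ash
cv = calorific_value(moisture=5.0, fixed_carbon=55.0, ash=10.0)
print(round(cv, 1))  # → 12658.0
```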
Miller, Stephen; Pike, James; Chapman, Jared; Xie, Bin; Hilton, Brian N.; Ames, Susan L.; Stacy, Alan W.
2017-01-01
This study examines the point-of-sale marketing practices used to promote electronic cigarettes at stores near schools that serve at-risk youths. One hundred stores selling tobacco products within a half-mile of alternative high schools in Southern California were assessed for this study. Seventy percent of stores in the sample sold electronic…
Shock waves simulated using the dual domain material point method combined with molecular dynamics
Zhang, Duan Z.; Dhakal, Tilak R.
2017-04-01
In this work we combine the dual domain material point method with molecular dynamics in an attempt to create a multiscale numerical method to simulate materials undergoing large deformations with high strain rates. In these types of problems, the material is often in a thermodynamically nonequilibrium state, and conventional constitutive relations or equations of state are often not available. In this method, the closure quantities, such as stress, at each material point are calculated from a molecular dynamics simulation of a group of atoms surrounding the material point. Rather than restricting the multiscale simulation to a small spatial region, such as phase interfaces or crack tips, this multiscale method can be used to consider nonequilibrium thermodynamic effects in a macroscopic domain. This method takes advantage of the fact that the material points only communicate with mesh nodes, not among themselves; therefore molecular dynamics simulations for material points can be performed independently in parallel. The dual domain material point method is chosen for this multiscale method because it can be used in history-dependent problems with large deformation without generating numerical noise as material points move across cells, and also because of its convergence and conservation properties. To demonstrate the feasibility and accuracy of this method, we compare the results of shock wave propagation in a cerium crystal calculated using direct molecular dynamics simulation with the results from this combined multiscale calculation.
Facial plastic surgery area acquisition method based on point cloud mathematical model solution.
Li, Xuwu; Liu, Fei
2013-09-01
Finding a quick and accurate method of acquiring the facial plastic surgery area, so as to provide a sufficient but not redundant autologous or in vitro skin source for covering extensive wounds, trauma, and burnt areas, is a topical research problem. At present, acquisition of the facial plastic surgery area mainly involves model laser scanning, point cloud data acquisition, pretreatment of the point cloud data, three-dimensional model reconstruction, and computation of the area. With this approach the area can be computed accurately, but the random error is hard to control and a comparatively long computation period is required. In this article, a facial plastic surgery area acquisition method based on the solution of a point cloud mathematical model is proposed. The method applies symmetric treatment to the point cloud after the pretreatment of the point cloud data, from which a color difference map of the point cloud error before and after symmetrization is obtained. A slicing mathematical model of the facial plastic area is derived from the color difference map. By solving the point cloud data in this area directly, the facial plastic area is acquired. Because the point cloud data are operated on directly, the method completes the surgery area computation accurately and efficiently. Comparative analysis shows that the method is effective for facial plastic surgery.
Mears, Chad S; Langston, Tanner D; Phippen, Colton M; Burkhead, Wayne Z; Skedros, John G
2017-10-01
Measurements made on routine A-P radiographs can predict strength/quality of the proximal humerus, as shown in terms of two easy-to-measure parameters: Cortical index (CI) and mean-combined cortical thickness (MCCT). Because of high variability inherent when using established methods to measure these parameters, we describe a new orientation system. Using digitized radiographs of 33 adult proximal humeri, five observers measured anatomical reference locations in accordance with: (i) Tingart et al. (2003) method, (ii) Mather et al. (2013) method, and (iii) our new humeral head Circle-Fit method (CFM). The Tingart and Mather methods measure CI and MCCT with respect to upper and lower edges of 20 mm tall rectangles fit to a proximal diaphyseal location where endosteal (Tingart) or periosteal (Mather) cortical margins become parallel. But high intra- and inter-observer variability occurs when placing the rectangles because of uncertainty in identifying cortical parallelism. With the CFM an adjustable circle is fit to the humeral head articular surface, which reliably and easily establishes a proximal metaphyseal landmark (M1) at the surgical neck. Distal locations are then designated at successive 10 mm increments below M1, including a second metaphyseal landmark (M2) followed by diaphyseal (D) locations (D1, D2 ⋯D6). D1 corresponds most closely to the proximal edges of the rectangles used in the other methods. Results showed minimal inter-observer variations (mean error, 1.5 ± 1.1 mm) when the CFM is used to establish diaphyseal locations for making CI and MCCT measurements when compared to each of the other methods (mean error range, 10.7 ± 5.9 to 13.3 ± 6.7 mm) (p < 0.001). © 2017 Orthopaedic Research Society. Published by Wiley Periodicals, Inc. J Orthop Res 35:2313-2322, 2017. © 2017 Orthopaedic Research Society. Published by Wiley Periodicals, Inc.
Full-Newton step interior-point methods for conic optimization
Mansouri, H.
2008-01-01
In the theory of polynomial-time interior-point methods (IPMs) two important classes of methods are distinguished: small-update and large-update methods, respectively. Small-update IPMs have the best theoretical iteration bound and IPMs with full-Newton steps belong to this class of methods. Within
A hybrid solar panel maximum power point search method that uses light and temperature sensors
Ostrowski, Mariusz
2016-04-01
Solar cells have low efficiency and non-linear characteristics. To increase the output power, solar cells are connected in more complex structures. Solar panels consist of series-connected solar cells with a few bypass diodes, which avoid the negative effects of partial shading conditions. Solar panels are connected to a special device named the maximum power point tracker. This device adapts the output power from the solar panels to the load requirements and also has a built-in algorithm to track the maximum power point of the solar panels. Bypass diodes may cause local maxima to appear on the power-voltage curve when the panel surface is illuminated irregularly. In this case, traditional maximum power point tracking algorithms can find only a local maximum power point. In this article a hybrid maximum power point search algorithm is presented. The main goal of the proposed method is the combination of two algorithms: a method that uses temperature sensors to track the maximum power point under partial shading conditions, and a method that uses an illumination sensor to track the maximum power point under uniform illumination. In comparison to other methods, the proposed algorithm uses correlation functions to determine the relationship between the values of the illumination and temperature sensors and the corresponding values of current and voltage at the maximum power point. Under partial shading, the algorithm calculates the local maximum power points based on the temperature values and the correlation function, measures the power at each calculated point, chooses the one with the biggest value, and on its basis runs the perturb and observe search algorithm. Under uniform illumination, the algorithm calculates the maximum power point based on the illumination value and the correlation function and on its basis runs the perturb and observe algorithm. In addition, the proposed method uses a special coefficient modification of the correlation functions algorithm. This sub
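The perturb and observe stage that the hybrid method falls back on can be sketched as follows, using a hypothetical single-peak power-voltage curve (the step size, start voltage, and curve are assumptions for illustration, not the paper's parameters):

```python
def perturb_and_observe(power_fn, v0=10.0, step=0.2, iters=200):
    """Minimal perturb-and-observe loop: nudge the operating voltage
    and keep moving in the direction that increased the power."""
    v, p = v0, power_fn(v0)
    direction = 1.0
    for _ in range(iters):
        v_new = v + direction * step
        p_new = power_fn(v_new)
        if p_new < p:
            direction = -direction     # power dropped: reverse direction
        v, p = v_new, p_new
    return v, p

# Hypothetical single-peak panel curve with its maximum at 17 V
power = lambda v: 85.0 - 0.5 * (v - 17.0) ** 2
v_mpp, p_mpp = perturb_and_observe(power)
print(round(v_mpp, 1))
```

The loop settles into a small oscillation around the true maximum power point, which is exactly why the paper seeds it near a sensor-predicted candidate peak rather than running it blindly over a multi-peak curve.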
Entropy Based Test Point Evaluation and Selection Method for Analog Circuit Fault Diagnosis
Directory of Open Access Journals (Sweden)
Yuan Gao
2014-01-01
Full Text Available By simplifying the tolerance problem and treating faulty voltages at different test points as independent variables, the integer-coded table technique was proposed to simplify the test point selection process. However, simplifying the tolerance problem may induce a wrong solution, while the independence assumption results in an over-conservative result. To address these problems, the tolerance problem is thoroughly considered in this paper, and the dependency relationship between different test points is considered at the same time. A heuristic graph search method is proposed to facilitate the test point selection process. First, the information-theoretic concept of entropy is used to evaluate the optimality of a test point. The entropy is calculated using the ambiguity sets and the faulty voltage distribution determined by component tolerance. Second, the selected optimal test point is used to expand the current graph node, using the dependence relationship between the test point and the graph node. Simulated results indicate that the proposed method finds the optimal set of test points more accurately than other methods; therefore, it is a good solution for minimizing the size of the test point set. To simplify and clarify the proposed method, only catastrophic and some specific parametric faults are discussed in this paper.
Energy Technology Data Exchange (ETDEWEB)
Choi, Jang-Hwan, E-mail: jhchoi21@stanford.edu [Department of Radiology, Stanford University, Stanford, California 94305 and Department of Mechanical Engineering, Stanford University, Stanford, California 94305 (United States); Constantin, Dragos [Microwave Physics R& E, Varian Medical Systems, Palo Alto, California 94304 (United States); Ganguly, Arundhuti; Girard, Erin; Fahrig, Rebecca [Department of Radiology, Stanford University, Stanford, California 94305 (United States); Morin, Richard L. [Mayo Clinic Jacksonville, Jacksonville, Florida 32224 (United States); Dixon, Robert L. [Department of Radiology, Wake Forest University, Winston-Salem, North Carolina 27157 (United States)
2015-08-15
Purpose: To propose new dose point measurement-based metrics to characterize the dose distributions and the mean dose from a single partial rotation of an automatic exposure control-enabled, C-arm-based, wide cone angle computed tomography system over a stationary, large, body-shaped phantom. Methods: A small 0.6 cm³ ion chamber (IC) was used to measure the radiation dose in an elliptical body-shaped phantom made of tissue-equivalent material. The IC was placed at 23 well-distributed holes in the central and peripheral regions of the phantom and dose was recorded for six acquisition protocols with different combinations of minimum kVp (109 and 125 kVp) and z-collimator aperture (full: 22.2 cm; medium: 14.0 cm; small: 8.4 cm). Monte Carlo (MC) simulations were carried out to generate complete 2D dose distributions in the central plane (z = 0). The MC model was validated at the 23 dose points against IC experimental data. The planar dose distributions were then estimated using subsets of the point dose measurements using two proposed methods: (1) the proximity-based weighting method (method 1) and (2) the dose point surface fitting method (method 2). Twenty-eight different dose point distributions with six different point number cases (4, 5, 6, 7, 14, and 23 dose points) were evaluated to determine the optimal number of dose points and their placement in the phantom. The performances of the methods were determined by comparing their results with those of the validated MC simulations. The performances of the methods in the presence of measurement uncertainties were evaluated. Results: The 5-, 6-, and 7-point cases had differences below 2%, ranging from 1.0% to 1.7% for both methods, which is a performance comparable to that of the methods with a relatively large number of points, i.e., the 14- and 23-point cases. However, with the 4-point case, the performances of the two methods decreased sharply. Among the 4-, 5-, 6-, and 7-point cases, the 7-point case (1
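The proximity-based weighting method (method 1) is not fully specified in the abstract; a generic inverse-distance-weighting sketch conveys the idea of estimating planar dose from a handful of point measurements. The coordinates, doses, and the distance exponent are hypothetical, not the paper's scheme.

```python
def idw_dose(x, y, samples, power=2.0):
    """Inverse-distance-weighted dose estimate at (x, y) from point
    measurements [(xi, yi, dose_i), ...]: nearby points dominate."""
    num = den = 0.0
    for xi, yi, di in samples:
        d2 = (x - xi) ** 2 + (y - yi) ** 2
        if d2 == 0.0:
            return di                  # exactly at a measured point
        w = 1.0 / d2 ** (power / 2.0)
        num += w * di
        den += w
    return num / den

# Hypothetical measurements at four phantom holes (arbitrary units)
pts = [(0, 0, 10.0), (10, 0, 20.0), (0, 10, 20.0), (10, 10, 30.0)]
print(round(idw_dose(5, 5, pts), 2))   # → 20.0 (symmetric centre)
```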
Multi-scale calculation based on dual domain material point method combined with molecular dynamics
Energy Technology Data Exchange (ETDEWEB)
Dhakal, Tilak Raj [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2017-02-27
This dissertation combines the dual domain material point method (DDMP) with molecular dynamics (MD) in an attempt to create a multi-scale numerical method to simulate materials undergoing large deformations with high strain rates. In these types of problems, the material is often in a thermodynamically non-equilibrium state, and conventional constitutive relations are often not available. In this method, the closure quantities, such as stress, at each material point are calculated from a MD simulation of a group of atoms surrounding the material point. Rather than restricting the multi-scale simulation to a small spatial region, such as phase interfaces or crack tips, this multi-scale method can be used to consider non-equilibrium thermodynamic effects in a macroscopic domain. This method takes advantage of the fact that the material points only communicate with mesh nodes, not among themselves; therefore MD simulations for material points can be performed independently in parallel. First, using a one-dimensional shock problem as an example, the numerical properties of the original material point method (MPM), the generalized interpolation material point (GIMP) method, the convected particle domain interpolation (CPDI) method, and the DDMP method are investigated. Among these methods, only the DDMP method converges as the number of particles increases, but the large number of particles needed for convergence makes the method very expensive, especially in our multi-scale method where we calculate stress in each material point using MD simulation. To improve DDMP, the sub-point method is introduced in this dissertation, which provides high-quality numerical solutions with a very small number of particles. The multi-scale method based on DDMP with sub-points is successfully implemented for a one-dimensional problem of shock wave propagation in a cerium crystal. The MD simulation to calculate stress in each material point is performed on a GPU using CUDA to accelerate the
AN IMPROVEMENT ON GEOMETRY-BASED METHODS FOR GENERATION OF NETWORK PATHS FROM POINTS
Directory of Open Access Journals (Sweden)
Z. Akbari
2014-10-01
Full Text Available Determining the network path is important for different purposes, such as determination of road traffic, the average speed of vehicles, and other network analyses. One of the required inputs is information about the network path. Nevertheless, the data collected by positioning systems often consist of discrete points, and the conversion of these points to a network path has become a challenge for which different researchers have presented many solutions. This study aims at investigating geometry-based methods for estimating network paths from the obtained points and at improving an existing point-to-curve method. To this end, some geometry-based methods have been studied, and an improved method is proposed by applying conditions to the best of them after describing and illustrating their weaknesses.
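The basic point-to-curve idea can be sketched by snapping each positioning point to its nearest point on any road segment; the road layout below is hypothetical and the network is assumed to be a set of straight segments.

```python
def project_to_network(p, segments):
    """Snap a GPS point to the nearest point on any road segment
    (point-to-curve matching): project onto each segment's line,
    clamp to the segment, keep the closest projection."""
    best = None
    for (ax, ay), (bx, by) in segments:
        dx, dy = bx - ax, by - ay
        t = ((p[0] - ax) * dx + (p[1] - ay) * dy) / (dx * dx + dy * dy)
        t = max(0.0, min(1.0, t))      # clamp onto the segment
        q = (ax + t * dx, ay + t * dy)
        d2 = (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
        if best is None or d2 < best[0]:
            best = (d2, q)
    return best[1]

roads = [((0, 0), (10, 0)), ((0, 0), (0, 10))]     # hypothetical network
print(project_to_network((3.0, 1.0), roads))       # snaps to the x-axis road
```

The geometry-based improvements the paper studies add conditions (e.g. continuity with previously matched segments) on top of this purely nearest-segment rule.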
Synthesis of Numerical Methods for Modeling Wave Energy Converter-Point Absorbers: Preprint
Energy Technology Data Exchange (ETDEWEB)
Li, Y.; Yu, Y. H.
2012-05-01
During the past few decades, wave energy has received significant attention among all ocean energy formats. Industry has proposed hundreds of prototypes such as an oscillating water column, a point absorber, an overtopping system, and a bottom-hinged system. In particular, many researchers have focused on modeling the floating-point absorber as the technology to extract wave energy. Several modeling methods have been used, such as the analytical method, the boundary-integral equation method, the Navier-Stokes equations method, and the empirical method. However, no standardized method has been agreed upon. To assist the development of wave energy conversion technologies, this report reviews the methods for modeling the floating-point absorber.
Change Analysis in Structural Laser Scanning Point Clouds: The Baseline Method
Directory of Open Access Journals (Sweden)
Yueqian Shen
2016-12-01
Full Text Available A method is introduced for detecting changes from point clouds that avoids registration. For many applications, changes are detected between two scans of the same scene obtained at different times. Traditionally, these scans are aligned to a common coordinate system, with the disadvantage that this registration step introduces additional errors. In addition, registration requires stable targets or features. To avoid these issues, we propose a change detection method based on so-called baselines. Baselines connect feature points within one scan. To analyze changes, baselines connecting corresponding points in two scans are compared. As feature points, either targets or virtual points corresponding to some reconstructable feature in the scene are used. The new method is implemented on two scans sampling a masonry laboratory building before and after seismic testing, which resulted in damage on the order of several centimeters. The centres of the bricks of the laboratory building are automatically extracted to serve as virtual points. Baselines connecting virtual points and/or target points are extracted and compared with respect to a suitable structural coordinate system. Changes detected from the baseline analysis are compared to a traditional cloud-to-cloud change analysis, demonstrating the potential of the new method for structural analysis.
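The baseline comparison can be sketched as follows: distances between pairs of corresponding feature points are invariant to each scan's coordinate frame, so length changes flag deformation without any registration. Point labels and coordinates are hypothetical, and correspondence between the two epochs is assumed known.

```python
import math

def baseline_changes(points_t0, points_t1, tol=0.01):
    """Compare baseline lengths (distances between feature-point pairs)
    in two epochs; report pairs whose length changed by more than tol."""
    names = sorted(points_t0)
    changed = []
    for i, m in enumerate(names):
        for n in names[i + 1:]:
            d0 = math.dist(points_t0[m], points_t0[n])
            d1 = math.dist(points_t1[m], points_t1[n])
            if abs(d1 - d0) > tol:
                changed.append((m, n, d1 - d0))
    return changed

# Epoch 1 is epoch 0 translated by (1, 1, 0), except point C moved 5 cm
t0 = {"A": (0, 0, 0), "B": (4, 0, 0), "C": (0, 3, 0)}
t1 = {"A": (1, 1, 0), "B": (5, 1, 0), "C": (1, 4.05, 0)}
print(baseline_changes(t0, t1))
```

Only the two baselines involving the moved point C are reported; the rigid-body translation between scans is invisible to the method, which is the point of avoiding registration.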
Energy Technology Data Exchange (ETDEWEB)
Yoo, Hyun Suk; Lee, Jeong Min; Yoon, Jeong Hee; Lee, Dong Ho; Chang, Won; Han, Joon Koo [Seoul National University Hospital, Seoul (Korea, Republic of)
2016-09-15
To prospectively compare technical success rate and reliable measurements of virtual touch quantification (VTQ) elastography and elastography point quantification (ElastPQ), and to correlate liver stiffness (LS) measurements obtained by the two elastography techniques. Our study included 85 patients, 80 of whom were previously diagnosed with chronic liver disease. The technical success rate and reliable measurements of the two kinds of point shear wave elastography (pSWE) techniques were compared by χ² analysis. LS values measured using the two techniques were compared and correlated via Wilcoxon signed-rank test, Spearman correlation coefficient, and 95% Bland-Altman limit of agreement. The intraobserver reproducibility of ElastPQ was determined by 95% Bland-Altman limit of agreement and intraclass correlation coefficient (ICC). The two pSWE techniques showed similar technical success rate (98.8% for VTQ vs. 95.3% for ElastPQ, p = 0.823) and reliable LS measurements (95.3% for VTQ vs. 90.6% for ElastPQ, p = 0.509). The mean LS measurements obtained by VTQ (1.71 ± 0.47 m/s) and ElastPQ (1.66 ± 0.41 m/s) were not significantly different (p = 0.209). The LS measurements obtained by the two techniques showed strong correlation (r = 0.820); in addition, the 95% limit of agreement of the two methods was 27.5% of the mean. Finally, the ICC of repeat ElastPQ measurements was 0.991. Virtual touch quantification and ElastPQ showed similar technical success rate and reliable measurements, with strongly correlated LS measurements. However, the two methods are not interchangeable due to the large limit of agreement.
Fujii, Atsunori; Ohsugi, Yudai; Yamamoto, Yuki; Nakamura, Takabun; Sugiura, Toshifumi; Tauchi, Masaki
2007-05-01
In order to find out the most suitable and accurate pointing methods to study the sound localizability of persons with visual impairment, we compared the accuracy of three different pointing methods for indicating the direction of sound sources in a semi-anechoic dark room. Six subjects with visual impairment (two totally blind and four with low vision) participated in this experiment. The three pointing methods employed were (1) directing the face, (2) directing the body trunk on a revolving chair and (3) indicating a tactile cue placed horizontally in front of the subject. Seven sound emitters were arranged in a semicircle 2.0 m from the subject, 0 degrees to +/-80 degrees of the subject's midline, at a height of 1.2 m. The accuracy of the pointing methods was evaluated by measuring the deviation between the angle of the target sound source and that of the subject's response. The result was that all methods indicated that as the angle of the sound source increased from midline, the accuracy decreased. The deviations recorded toward the left and the right of midline were symmetrical. In the whole frontal area (-80 degrees to +80 degrees from midline), both the tactile cue and the body trunk methods were more accurate than the face-pointing method. There was no significant difference in the center (-40 degrees to +40 degrees from midline). In the periphery (-80 degrees and +80 degrees ), the tactile cue pointing method was the most accurate of all and the body trunk method was the next best. These results suggest that the most suitable pointing methods to study the sound localizability of the frontal azimuth for subjects who are visually impaired are the tactile cue and the body trunk methods because of their higher accuracy in the periphery.
Zhang, Zhen; Chen, Siqing; Zheng, Huadong; Sun, Tao; Yu, Yingjie; Gao, Hongyue; Asundi, Anand K.
2017-06-01
Computer holography has made notable progress in recent years. The point-based method and the slice-based method are the chief calculation algorithms for generating holograms for holographic display. Although both methods have been validated numerically and optically, the differences in the imaging quality of these methods have not been specifically analyzed. In this paper, we analyze the imaging quality of computer-generated phase holograms generated by point-based Fresnel zone plates (PB-FZP), the point-based Fresnel diffraction algorithm (PB-FDA), and the slice-based Fresnel diffraction algorithm (SB-FDA). The calculation formulas and hologram generation with the three methods are demonstrated. In order to suppress speckle noise, sequential phase-only holograms are generated in our work. The numerically and experimentally reconstructed images are also exhibited. By comparing the imaging quality, the merits and drawbacks of the three methods are analyzed, and conclusions are drawn.
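The point-based Fresnel zone plate (PB-FZP) idea can be sketched in the paraxial approximation: each object point contributes a quadratic phase pattern to the hologram plane, and a hologram is the sum of one such zone plate per point. Pixel pitch, wavelength, depth, and grid size below are assumed values, not the paper's.

```python
import numpy as np

def fzp_phase(n=256, pitch=8e-6, wavelength=532e-9, z=0.1, x0=0.0, y0=0.0):
    """Phase contributed by one object point at depth z to an n x n
    hologram (paraxial Fresnel approximation):
    phi(x, y) = pi / (wavelength * z) * ((x - x0)^2 + (y - y0)^2)."""
    c = (np.arange(n) - n / 2) * pitch        # pixel coordinates (m)
    X, Y = np.meshgrid(c, c)
    return (np.pi / (wavelength * z)) * ((X - x0) ** 2 + (Y - y0) ** 2)

phase = np.mod(fzp_phase(), 2 * np.pi)        # wrap to [0, 2*pi)
print(phase.shape)
```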
A feature point identification method for positron emission particle tracking with multiple tracers
Energy Technology Data Exchange (ETDEWEB)
Wiggins, Cody, E-mail: cwiggin2@vols.utk.edu [University of Tennessee-Knoxville, Department of Physics and Astronomy, 1408 Circle Drive, Knoxville, TN 37996 (United States); Santos, Roque [University of Tennessee-Knoxville, Department of Nuclear Engineering (United States); Escuela Politécnica Nacional, Departamento de Ciencias Nucleares (Ecuador); Ruggles, Arthur [University of Tennessee-Knoxville, Department of Nuclear Engineering (United States)
2017-01-21
A novel detection algorithm for Positron Emission Particle Tracking (PEPT) with multiple tracers based on optical feature point identification (FPI) methods is presented. This new method, the FPI method, is compared to a previous multiple PEPT method via analyses of experimental and simulated data. The FPI method outperforms the older method in cases of large particle numbers and fine time resolution. Simulated data show the FPI method to be capable of identifying 100 particles at 0.5 mm average spatial error. Detection error is seen to vary with the inverse square root of the number of lines of response (LORs) used for detection and increases as particle separation decreases. - Highlights: • A new approach to positron emission particle tracking is presented. • Using optical feature point identification analogs, multiple particle tracking is achieved. • Method is compared to previous multiple particle method. • Accuracy and applicability of method is explored.
A Novel Line Space Voting Method for Vanishing-Point Detection of General Road Images
Directory of Open Access Journals (Sweden)
Zongsheng Wu
2016-06-01
Full Text Available Vanishing-point detection is an important component for the visual navigation system of an autonomous mobile robot. In this paper, we present a novel line space voting method for fast vanishing-point detection. First, the line segments are detected from the road image by the line segment detector (LSD method according to the pixel’s gradient and texture orientation computed by the Sobel operator. Then, the vanishing-point of the road is voted on by considering the points of the lines and their neighborhood spaces with weighting methods. Our algorithm is simple, fast, and easy to implement with high accuracy. It has been experimentally tested with over hundreds of structured and unstructured road images. The experimental results indicate that the proposed method is effective and can meet the real-time requirements of navigation for autonomous mobile robots and unmanned ground vehicles.
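The voting idea can be illustrated with homogeneous-coordinate line intersections accumulated on a coarse grid; this is a generic sketch, not the paper's LSD/Sobel-based implementation, and the image size, cell size, and synthetic lines are assumptions.

```python
import numpy as np

def vanishing_point(lines, img_w=640, img_h=480, cell=8):
    """Intersect every pair of line segments (extended to infinite
    lines) and vote on a grid; the densest cell approximates the
    vanishing point. Lines are given as (x1, y1, x2, y2)."""
    acc = np.zeros((img_h // cell, img_w // cell))
    L = [np.cross([x1, y1, 1.0], [x2, y2, 1.0])
         for (x1, y1, x2, y2) in lines]        # homogeneous line coords
    for i in range(len(L)):
        for j in range(i + 1, len(L)):
            p = np.cross(L[i], L[j])           # homogeneous intersection
            if abs(p[2]) < 1e-9:
                continue                       # parallel lines: no vote
            x, y = p[0] / p[2], p[1] / p[2]
            if 0 <= x < img_w and 0 <= y < img_h:
                acc[int(y) // cell, int(x) // cell] += 1
    r, c = np.unravel_index(np.argmax(acc), acc.shape)
    return (c * cell + cell // 2, r * cell + cell // 2)

# Three synthetic road edges converging near (320, 200)
lines = [(100, 480, 320, 200), (540, 480, 320, 200), (0, 300, 320, 200)]
print(vanishing_point(lines))
```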
Directory of Open Access Journals (Sweden)
Hongwei Ying
2014-08-01
Full Text Available An extreme-point-of-scale-space extraction method for a binary multiscale and rotation-invariant local feature descriptor is studied in this paper, in order to obtain a robust and fast local image feature descriptor. Classic local feature description algorithms often select the neighborhood information of feature points that are extrema of the image scale space, obtained by constructing an image pyramid using a certain signal transform method. But building the image pyramid always consumes a large amount of computing and storage resources and is not conducive to practical application development. This paper presents a dual multiscale FAST algorithm; it does not need to build the image pyramid, yet can extract scale-extremum feature points quickly. Feature points extracted by the proposed method are multiscale and rotation invariant and are well suited to constructing the local feature descriptor.
Pei-Jing Rong; Jing-Jun Zhao; Lei Wang; Li-Qun Zhou
2016-01-01
The international standardization of auricular acupuncture points (AAPs) is an important basis for auricular therapy or auricular diagnosis and treatment. The study on the international standardization of AAPs has gone through a long process, in which the location method is one of the key research projects. There are different points of view in the field of AAPs among experts from different countries or regions. By only analyzing the nine representative location methods, this paper tried to o...
Directory of Open Access Journals (Sweden)
Pei-Jing Rong
2016-01-01
Full Text Available The international standardization of auricular acupuncture points (AAPs) is an important basis for auricular therapy or auricular diagnosis and treatment. The study of the international standardization of AAPs has gone through a long process, in which the location method is one of the key research projects. There are different points of view in the field of AAPs among experts from different countries and regions. By analyzing nine representative location methods, this paper tries to offer a proper method for locating AAPs. Through analysis of the pros and cons of each location method, the location method applied in the WFAS international standard for AAPs is considered an appropriate one. It is important to keep the right direction while developing an International Organization for Standardization (ISO) international standard for auricular acupuncture points and to improve the research quality of international standardization for AAPs.
An effective method based on reference point for glucose sensing at 1100-1600nm
Zheng, Jiaxiang; Xu, Kexin; Yang, Yue
2011-03-01
Non-invasive blood glucose sensing by near-infrared spectroscopy is easily disturbed by strong background variations compared to the weak glucose signals. In this work, according to the distribution of diffuse reflectance intensity at different source-detector separations, a method based on a reference point and a measuring point, where the diffuse reflectance intensity is insensitive and most sensitive to the variation of glucose concentration, respectively, is applied. The data processing method based on the information from the two points is investigated to improve the precision of glucose sensing. Based on a Monte Carlo simulation of a 5% intralipid solution model, the corresponding optical probe is designed, which includes two detecting points: a reference point located at 1.3-1.7 mm and a measuring point located at 1.7-2.1 mm. Using the probe, an in vitro experiment with different glucose concentrations in the intralipid solution is conducted at 1100-1600 nm. As a result, compared to the PLS model built from the signal of the measuring point alone, the root mean square error of prediction (RMSEP) and the root mean square error of calibration (RMSEC) of the corrected model built from the reference point and the measuring point are reduced by 45.10% and 32.15%, respectively.
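A minimal sketch of why a glucose-insensitive reference point helps: if the reference channel sees only the common background drift, dividing it out of the measuring channel removes the drift. All intensities and the correction scheme below are hypothetical illustrations, not the paper's PLS-based model.

```python
def background_corrected(measuring, reference, ref_baseline):
    """Two-point correction sketch: the reference detector (glucose-
    insensitive source-detector separation) tracks background drift
    only, so dividing it out isolates the glucose response."""
    drift = reference / ref_baseline   # pure background factor
    return measuring / drift

# Hypothetical intensities: background drifts by +5% between readings;
# the true glucose-dependent signal is 0.98 of baseline
raw = background_corrected(measuring=0.98 * 1.05,
                           reference=1.05, ref_baseline=1.0)
print(round(raw, 3))
```

The correction recovers the glucose-dependent factor regardless of the common drift, mirroring the improvement the paper reports over the single-point model.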
A method for improved accuracy in three dimensions for determining wheel/rail contact points
Yang, Xinwen; Gu, Shaojie; Zhou, Shunhua; Zhou, Yu; Lian, Songliang
2015-11-01
Searching for the contact points between wheels and rails is important because these points represent the points of exerted contact forces. In order to obtain an accurate contact point and an in-depth description of wheel/rail contact behaviour on a curved track or in a turnout, a method with improved accuracy in three dimensions is proposed to determine the contact points and the contact patches between the wheel and the rail, considering the effect of the yaw angle and the roll angle on the motion of the wheel set. The proposed method, with no need for curve fitting of the wheel and rail profiles, can accurately, directly, and comprehensively determine the contact distances between the wheel and the rail. A range iteration algorithm is used to improve the computational efficiency and reduce the calculation required. The present computation method is applied to the analysis of the contact between CHINA (CHN) 75 kg/m rails and the wearing-type treads of the wheel sets of China's freight cars. In addition, the results of the proposed method are shown to be consistent with those of Kalker's program CONTACT, and the maximum deviation in the wheel/rail contact patch area between the two methods is approximately 5%. The proposed method can also be used to investigate static wheel/rail contact. Some wheel/rail contact points and contact patch distributions are discussed and assessed, for both non-worn and worn wheel and rail profiles.
Non-Linear Aeroelastic Analysis Using the Point Transformation Method, Part 1: Freeplay Model
LIU, L.; WONG, Y. S.; LEE, B. H. K.
2002-05-01
A point transformation technique is developed to investigate the non-linear behavior of a two-dimensional aeroelastic system with freeplay models. Two formulations of the point transformation method are presented, which can be applied to accurately predict the frequency and amplitude of limit cycle oscillations. Moreover, it is demonstrated that the developed formulations are capable of detecting complex aeroelastic responses such as periodic motions with harmonics, period doubling, chaotic motions and the coexistence of stable limit cycles. Applications of the point transformation method to several test examples are presented. It is concluded that the formulations developed in this paper are efficient and effective.
Lange, Aleksandra; Palka, Przemyslaw; Donnelly, J; Burstow, Darryl
2002-11-01
The evaluation of mitral regurgitation (MR) by 3-dimensional (3D) echo has generally been performed by reconstruction of Doppler regurgitant jets, but there are few data on measuring the anatomic regurgitant orifice area (AROA) directly from 3D mitral valve (MV) reconstructions. Transoesophageal echo (TOE) 3D images were acquired from 38 unselected patients (age 59+/-11 years, ten in atrial fibrillation) with various degrees of MR. In all patients the MV was reconstructed en face from the left atrium (LA) and the left ventricle (LV). AROA was measured by planimetry from the 3D images and compared with the effective regurgitant orifice area (EROA) by proximal isovelocity surface area and proximal MR jet width from 2D echo. AROA was measured in 95% of patients from the LA, 89% from the LV, and 84% from both the LA and LV. Good correlation was found between EROA and AROA measured from both the LA and LV (r=0.97, P<0.001). An AROA >=25 mm(2) differentiated mild MR (graded 1-2) from moderately severe MR (graded 3-4) with 80-90% accuracy. 3D TOE provides important quantitative information on both the mechanism and the severity of MR in an unselected group of patients. AROA enables quantification of MR with excellent agreement with the accepted clinical method of proximal flow convergence.
Creating the data basis for environmental evaluations with the Oil Point Method
DEFF Research Database (Denmark)
Bey, Niki; Lenau, Torben Anker
1999-01-01
A simple, indicator-based method for environmental evaluations, the Oil Point Method, has been developed. Oil Points are derived from energy data and refer to kilograms of oil, hence the name. In the Oil Point Method, a certain degree of inaccuracy is explicitly accepted, as is the case with rules-of-thumb. The central idea is that missing indicators can be calculated or estimated by the designers themselves. After discussing energy-related environmental evaluation and arguing for its application in the evaluation of concepts, the paper focuses on the basic problem of missing data and describes the way in which the problem may be solved by making Oil Point evaluations. Sources of energy data are mentioned. Typical deficits to be aware of, such as the negligence of efficiency factors, are revealed and discussed. Comparative case studies which have shown encouraging results are mentioned as well.
The Closest Point Method and Multigrid Solvers for Elliptic Equations on Surfaces
Chen, Yujia
2015-01-01
© 2015 Society for Industrial and Applied Mathematics. Elliptic partial differential equations are important from both application and analysis points of view. In this paper we apply the closest point method to solve elliptic equations on general curved surfaces. Based on the closest point representation of the underlying surface, we formulate an embedding equation for the surface elliptic problem, then discretize it using standard finite differences and interpolation schemes on banded but uniform Cartesian grids. We prove the convergence of the difference scheme for Poisson's equation on a smooth closed curve. In order to solve the resulting large sparse linear systems, we propose a specific geometric multigrid method in the setting of the closest point method. Convergence studies in both the accuracy of the difference scheme and the speed of the multigrid algorithm show that our approaches are effective.
A steady-state target calculation method based on "point" model for integrating processes.
Pang, Qiang; Zou, Tao; Zhang, Yanyan; Cong, Qiumei
2015-05-01
To eliminate the influence of model uncertainty on steady-state target calculation for integrating processes, this paper presents an optimization method based on a "point" model, together with a method for determining whether a feasible steady-state target exists. The optimization method solves the steady-state optimization problem of integrating processes within the framework of a two-stage structure: it builds a simple "point" model for the steady-state prediction and compensates the error between the "point" model and the real process in each sampling interval. Simulation results illustrate that the outputs of the integrating variables can be kept within their constraints and that the calculation errors between actual outputs and optimal set-points are small, indicating that the steady-state prediction model can accurately predict the future outputs of the integrating variables. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.
Directory of Open Access Journals (Sweden)
Ricardo Souza e Silva Morelli
2010-01-01
Full Text Available OBJECTIVE: To compare the results of treatment of fractures of the proximal extremity of the humerus: osteosynthesis with a small-fragment T-plate (group A), providing relative stabilization, versus a locking screw plate (group B), providing rigid fixation. METHODS: Eighteen patients were randomly allocated and evaluated prospectively according to clinical criteria, a functional score, and radiographic parameters of the reduction obtained. RESULTS: On the visual analogue pain scale, the mean at six months of follow-up was 2.1 for group A and 2.2 for group B; the range of forward elevation was 140° in group A and 143° in group B, and the UCLA functional scores were 30 and 31, respectively. On the radiographs evaluated, three patients in group A had angles measured after stabilization between 0° and 10° of deviation from normal anatomy and six between 11° and 40°; in group B, seven patients had angles between 0° and 10° and two between 11° and 20°. CONCLUSIONS: In the early and late results there were no clinical or functional differences between the two groups, with a high incidence of good results prevailing; the radiographic measurements of the reductions obtained were closer to anatomical in the group treated with locking plates.
Milk freezing point determination with infrared spectroscopy and thermistor cryoscopy method
Directory of Open Access Journals (Sweden)
Nataša Pintić Pukec
2009-09-01
Full Text Available Two analytical methods were used to determine the freezing point of identical raw milk test samples. The aim of this research was to investigate the possibility of using an infrared spectrometry method, with the MilkoScan FT 6000 milk analyzer, for determination of the milk freezing point, compared with results obtained using the reference thermistor cryoscopy method with a Cryoscope 4C3 analyzer. Over a period of four months, a total of 320 milk samples were analyzed. Once a week, milk samples were taken from the collection reservoirs of twenty milk producers. The milk freezing point was analyzed with each of the investigated methods in three consecutive repetitions. The freezing point results obtained with the reference method were higher than those obtained with the infrared spectroscopy method. A mean difference of 1.31 to 5.28 m°C (3.43 m°C on average) was determined between the results of the infrared spectroscopy and reference methods. Mean repeatability results for both methods showed only a slight difference: sr%=0.194 for the reference method and sr%=0.193 for the infrared spectrometry method. No statistically significant difference was found between the means of the results obtained with the two methods (P>0.05; P>0.01). The results indicate that the infrared spectroscopy method can be used as a screening method for detecting adulteration of milk by the addition of water. Based on the results obtained, use of the infrared spectrometry method for determination of the raw milk freezing point is recommended, because it is faster and can be carried out with analyzers already used for determination of other milk quality parameters, such as the MilkoScan FT 6000.
DEFF Research Database (Denmark)
Palm, Henrik; Teixidor, Jordi
2015-01-01
...guidelines for hip fracture surgery and discuss a method for future pathway/guideline implementation and evaluation. METHODS: By a PubMed search in March 2015, six studies of surgical treatment pathways covering all types of proximal femoral fractures and published after 1995 were identified. We also searched the homepages of the national health authorities and national orthopedic societies in Western Europe and found 11 national or regional (where no national guideline exists) guidelines covering any type of proximal femoral fracture surgery. RESULTS: Pathway consensus is widespread (internal fixation for undisplaced femoral neck fractures and prosthesis for displaced fractures among the elderly; sliding hip screw for stable and intramedullary nails for unstable and subtrochanteric fractures), but the guidelines are based on a variety of criteria and definitions, and often leave wide space for the individual surgeon's subjective...
Proximal renal tubular acidosis
Renal tubular acidosis - proximal; Type II RTA; RTA - proximal; Renal tubular acidosis type II ... by alkaline substances, mainly bicarbonate. Proximal renal tubular acidosis (Type II RTA) occurs when bicarbonate is not ...
Slope failure with the material point method : An investigation of post-peak material behaviour
Vardon, P.J.; Wang, B.; Hicks, M.A.
2017-01-01
The material point method (MPM) has the potential to simulate the onset, the full evolution and the final condition of a slope failure. It is a variant of the finite element method (FEM) in which the material is able to move through the mesh, thereby avoiding one of the major problems of the FEM: mesh distortion.
Lee, Jennifer
2012-01-01
The intent of this study was to examine the relationship between media multitasking orientation and grade point average. The study utilized a mixed-methods approach to investigate the research questions. In the quantitative section of the study, the primary method of statistical analyses was multiple regression. The independent variables for the…
Compound material point method (CMPM) to improve stress recovery for quasi-static problems
Gonzalez Acosta, J.L.; Vardon, P.J.; Hicks, M.A.
2017-01-01
Stress oscillations and inaccuracies are commonly reported in the material point method (MPM). This paper investigates the causes and presents a method to reduce them. The oscillations are shown to result from, at least in part, two distinctly different causes, both originating from the shape
On the practical use of the Material Point Method for offshore geotechnical applications
Brinkgreve, R.B.J.; Burg, M; Liim, L.J.; Andreykiv, A
2017-01-01
The Material Point Method (MPM) has been developed as a special finite element-based method for large deformation analysis, material flow and contact problems. When it comes to applications in soil, MPM can provide solutions where conventional FEM faces its limitations. Examples of geotechnical
A Control Method for Maximum Power Point Tracking in Stand-Alone-Type PV Generation Systems
Itako, Kazutaka; Mori, Takeaki
In this paper, a new control method for maximum power point tracking (MPPT) in stand-alone-type PV generation systems is proposed. In this control method, the operations of detecting the maximum power point and tracking that point are alternately carried out using a step-up DC-DC converter. The method requires neither measurement of the temperature or insolation level nor a PV array model. For a stand-alone-type application with a battery load, the design method for the boost inductance L of the step-up DC-DC converter is described, and the experimental results show that the use of the proposed MPPT control increases the PV generated energy by 14.8% compared to the conventional system.
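The detect-and-track operation described in this abstract can be illustrated with the classic perturb-and-observe loop on which most MPPT controllers are built. This is a minimal sketch under assumed conditions: the toy single-peak power curve, step size, and iteration count below are illustrative stand-ins, not the paper's converter, battery load, or boost-inductance design.

```python
def pv_power(v):
    # Toy single-peak PV power curve with its maximum at v = 17 V (an assumed
    # model; a real panel's curve depends on temperature and insolation).
    return 100.0 - (v - 17.0) ** 2

def perturb_and_observe(v0, step=0.5, iters=100):
    """Classic P&O: perturb the operating voltage, keep the direction that
    increased power, and reverse it whenever power drops."""
    v, p = v0, pv_power(v0)
    direction = 1.0
    for _ in range(iters):
        v_next = v + direction * step
        p_next = pv_power(v_next)
        if p_next < p:            # power fell: reverse the perturbation
            direction = -direction
        v, p = v_next, p_next
    return v, p

v_mpp, p_mpp = perturb_and_observe(10.0)
```

As is well known for P&O, the controller ends up oscillating within one perturbation step of the true maximum rather than settling exactly on it.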
Evaluation of methods for rapid determination of freezing point of aviation fuels
Mathiprakasam, B.
1982-01-01
Methods for identification of the more promising concepts for the development of a portable instrument to rapidly determine the freezing point of aviation fuels are described. The evaluation process consisted of: (1) collection of information on techniques previously used for the determination of the freezing point, (2) screening and selection of these techniques for further evaluation of their suitability in a portable unit for rapid measurement, and (3) an extensive experimental evaluation of the selected techniques and a final selection of the most promising technique. Test apparatuses employing differential thermal analysis and the change in optical transparency during phase change were evaluated and tested. A technique similar to differential thermal analysis using no reference fuel was investigated. In this method, the freezing point was obtained by digitizing the data and locating the point of inflection. Results obtained using this technique compare well with those obtained elsewhere using different techniques. A conceptual design of a portable instrument incorporating this technique is presented.
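The inflection-point technique selected in this abstract can be sketched numerically: digitize a cooling curve and locate the sign change of the discrete second derivative. The synthetic tanh-shaped curve and sampling grid below are assumptions for illustration, not the paper's apparatus data.

```python
import math

def inflection_point(ts, temps):
    """Return the time where the discrete second derivative of the digitized
    curve changes sign (linearly interpolating the zero crossing)."""
    for i in range(1, len(ts) - 2):
        d2a = temps[i - 1] - 2.0 * temps[i] + temps[i + 1]
        d2b = temps[i] - 2.0 * temps[i + 1] + temps[i + 2]
        if d2a == 0.0:
            return ts[i]
        if d2a * d2b < 0.0:
            # Interpolate the zero crossing between the two grid points.
            return ts[i] + (ts[i + 1] - ts[i]) * d2a / (d2a - d2b)
    return None

# Synthetic cooling curve: a tanh step centred at t = 5 s (assumed data).
ts = [0.1 * k for k in range(101)]
temps = [20.0 - 15.0 * math.tanh(t - 5.0) for t in ts]
t_freeze = inflection_point(ts, temps)
```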
DEFF Research Database (Denmark)
Choi, Ui-Min; Blaabjerg, Frede; Lee, Kyo-Beum
2015-01-01
...time of small- and medium-voltage vectors. However, if the power factor is lower, there is a limitation in eliminating neutral-point oscillations. In this case, the proposed method can be improved by changing the switching sequence properly. Additionally, a method for neutral-point voltage balancing...
a Data Driven Method for Flat Roof Building Reconstruction from LiDAR Point Clouds
Mahphood, A.; Arefi, H.
2017-09-01
3D building modeling is one of the most important applications in photogrammetry and remote sensing. Airborne LiDAR (Light Detection And Ranging) is one of the primary information sources for building modeling. In this paper, a new data-driven method is proposed for 3D modeling of flat-roof buildings. First, roof segmentation is implemented using a region-growing method, utilizing the distance between roof points and the height difference of the points. Next, the building edge points are detected using a new method that employs grid data, and the roof lines are then regularized using straight-line approximation; the centroid point and direction of each line are estimated in this step. Finally, the 3D model is reconstructed by integrating the roof and wall models. A qualitative and quantitative assessment shows that the proposed method can successfully and automatically model flat-roof buildings from a LiDAR point cloud.
A method for automatic feature points extraction of human vertebrae three-dimensional model
Wu, Zhen; Wu, Junsheng
2017-05-01
A method for automatic extraction of the feature points of the human vertebrae three-dimensional model is presented. Firstly, the statistical model of vertebrae feature points is established based on the results of manual vertebrae feature points extraction. Then anatomical axial analysis of the vertebrae model is performed according to the physiological and morphological characteristics of the vertebrae. Using the axial information obtained from the analysis, a projection relationship between the statistical model and the vertebrae model to be extracted is established. According to the projection relationship, the statistical model is matched with the vertebrae model to get the estimated position of the feature point. Finally, by analyzing the curvature in the spherical neighborhood with the estimated position of feature points, the final position of the feature points is obtained. According to the benchmark result on multiple test models, the mean relative errors of feature point positions are less than 5.98%. At more than half of the positions, the error rate is less than 3% and the minimum mean relative error is 0.19%, which verifies the effectiveness of the method.
Study of characteristic point identification and preprocessing method for pulse wave signals.
Sun, Wei; Tang, Ning; Jiang, Guiping
2015-02-01
Characteristic points in pulse wave signals (PWSs) carry physiological and pathological information about the human cardiovascular system. Identification of characteristic points in PWSs therefore plays a significant role in analyzing the human cardiovascular system. These characteristic points are person-dependent and easily disturbed, so acquiring a signal with high signal-to-noise ratio (SNR) and integrity is fundamentally important for identifying them precisely. Based on mathematical morphology theory, we design a combined filter, which can effectively suppress baseline drift and remove high-frequency noise simultaneously, to preprocess the PWSs. Furthermore, the characteristic points of the preprocessed signal are extracted according to their position relations with the zero-crossing points of the wavelet coefficients of the signal. In addition, a differential method is adopted to calibrate the position offset of the characteristic points caused by the wavelet transform. We investigated four typical PWSs reconstructed from three Gaussian functions with tunable parameters. The numerical results suggest that the proposed method can identify the characteristic points of PWSs accurately.
Apparatus and method for implementing power saving techniques when processing floating point values
Kim, Young Moon; Park, Sang Phill
2017-10-03
An apparatus and method are described for reducing power when reading and writing graphics data. For example, one embodiment of an apparatus comprises: a graphics processor unit (GPU) to process graphics data including floating point data; a set of registers, at least one of the registers of the set partitioned to store the floating point data; and encode/decode logic to reduce a number of binary 1 values being read from the at least one register by causing a specified set of bit positions within the floating point data to be read out as 0s rather than 1s.
Comparison of Optimization and Two-point Methods in Estimation of Soil Water Retention Curve
Ghanbarian-Alavijeh, B.; Liaghat, A. M.; Huang, G.
2009-04-01
The soil water retention curve (SWRC) is a soil hydraulic property whose direct measurement is time consuming and expensive. Since its measurement is unavoidable in environmental studies, e.g. the investigation of unsaturated hydraulic conductivity and solute transport, this study attempts to predict the soil water retention curve from two measured points. Using the Cresswell and Paydar (1996) method (the two-point method) and an optimization method developed in this study on the basis of two points of the SWRC, the parameters of the Tyler and Wheatcraft (1990) model (fractal dimension and air-entry value) were estimated; water contents at different matric potentials were then estimated and compared with their measured values (n=180). For each method, we used both 3 and 1500 kPa (case 1) and 33 and 1500 kPa (case 2) as the two points of the SWRC. The calculated RMSE values showed no significant difference between case 1 and case 2 for the Cresswell and Paydar (1996) method, although the RMSE value in case 2 (2.35) was slightly less than in case 1 (2.37). The results also showed that the optimization method developed in this study had significantly lower RMSE values for cases 1 (1.63) and 2 (1.33) than the Cresswell and Paydar (1996) method.
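The two-point idea can be made concrete with the Tyler and Wheatcraft (1990) retention model theta(psi) = theta_s * (psi_a / psi)^(3-D): two measured (psi, theta) pairs determine the fractal dimension D and the air-entry value psi_a in closed form. The closed-form fit and the synthetic parameter values below are illustrative assumptions; the paper's optimization variant fits the same two points by minimizing error instead.

```python
import math

def fit_two_points(psi1, th1, psi2, th2, theta_s):
    """Closed-form fit of theta = theta_s * (psi_a / psi) ** (3 - D)
    through two measured points of the retention curve."""
    m = math.log(th1 / th2) / math.log(psi2 / psi1)   # m = 3 - D
    D = 3.0 - m
    psi_a = psi1 * (th1 / theta_s) ** (1.0 / m)       # air-entry value
    return D, psi_a

# Synthetic check: generate the two "measured" points (33 and 1500 kPa, as in
# case 2 of the abstract) from assumed parameters, then recover them.
theta_s, D_true, psi_a_true = 0.45, 2.6, 2.0
th33 = theta_s * (psi_a_true / 33.0) ** (3.0 - D_true)
th1500 = theta_s * (psi_a_true / 1500.0) ** (3.0 - D_true)
D_fit, psi_a_fit = fit_two_points(33.0, th33, 1500.0, th1500, theta_s)
```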
A Novel Gaze Tracking Method Based on the Generation of Virtual Calibration Points
Directory of Open Access Journals (Sweden)
Hwan Heo
2013-08-01
Full Text Available Most conventional gaze-tracking systems require that users look at many points during the initial calibration stage, which is inconvenient for them. To avoid this requirement, we propose a new gaze-tracking method with four important characteristics. First, our gaze-tracking system uses a large screen located at a distance from the user, who wears a lightweight device. Second, our system requires that users look at only four calibration points during the initial calibration stage, during which four pupil centers are noted. Third, five additional points (virtual pupil centers) are generated with a multilayer perceptron using the four actual points (detected pupil centers) as inputs. Fourth, when a user gazes at a large screen, the shape defined by the positions of the four pupil centers is a distorted quadrangle because of the nonlinear movement of the human eyeball. The gaze-detection accuracy is reduced if we map the pupil movement area onto the screen area using a single transform function. We overcame this problem by calculating the gaze position based on multi-geometric transforms using the five virtual points and the four actual points. Experimental results show that the accuracy of the proposed method is better than that of other methods.
PROXIMATE AND ELEMENTAL COMPOSITION OF WHITE GRUBS
African Journals Online (AJOL)
DR. AMINU
Alhassan, A. J.; Sule, M. S.; ... This study determined the proximate and mineral element composition of whole white grubs using standard methods of analysis. ... days, before being pulverized to powder and kept in a plastic container.
Alternative Methods for Estimating Plane Parameters Based on a Point Cloud
Stryczek, Roman
2017-12-01
Non-contact measurement techniques based on triangulation optical sensors are increasingly popular in measurements performed with industrial robots directly on production lines. The result of such measurements is often a cloud of measurement points characterized by considerable measurement noise, the presence of a number of points that do not belong to the reference model, and excessive errors that must be eliminated from the analysis. To obtain vector information about the reference models described by the points contained in the cloud, the data obtained during a measurement must be subjected to appropriate processing operations. The present paper analyzes the suitability of the RANdom SAmple Consensus (RANSAC), Monte Carlo (MCM), and Particle Swarm Optimization (PSO) methods for extraction of the reference model. The effectiveness of the tested methods is illustrated by examples of measuring the height of an object and the angle of a plane, based on experiments carried out under workshop conditions.
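Of the three approaches compared in this abstract, RANSAC is the easiest to sketch: repeatedly fit a plane to three randomly sampled points and keep the hypothesis that explains the most points within a distance tolerance. The synthetic point cloud, tolerance, and iteration count below are illustrative assumptions, not the paper's experimental setup.

```python
import random

def plane_from_points(p1, p2, p3):
    # Plane through three points: unit normal via the cross product.
    ux, uy, uz = p2[0] - p1[0], p2[1] - p1[1], p2[2] - p1[2]
    vx, vy, vz = p3[0] - p1[0], p3[1] - p1[1], p3[2] - p1[2]
    nx, ny, nz = uy * vz - uz * vy, uz * vx - ux * vz, ux * vy - uy * vx
    norm = (nx * nx + ny * ny + nz * nz) ** 0.5
    if norm == 0.0:               # degenerate (collinear) sample
        return None
    nx, ny, nz = nx / norm, ny / norm, nz / norm
    return nx, ny, nz, -(nx * p1[0] + ny * p1[1] + nz * p1[2])

def ransac_plane(points, iters=200, tol=0.05, seed=1):
    # Keep the 3-point plane hypothesis with the most inliers.
    rng = random.Random(seed)
    best, best_inliers = None, -1
    for _ in range(iters):
        plane = plane_from_points(*rng.sample(points, 3))
        if plane is None:
            continue
        a, b, c, d = plane
        inliers = sum(1 for x, y, z in points
                      if abs(a * x + b * y + c * z + d) < tol)
        if inliers > best_inliers:
            best, best_inliers = plane, inliers
    return best, best_inliers

# Synthetic cloud: 80 points on the plane z = 0 plus 20 elevated outliers.
rng = random.Random(0)
cloud = [(0.1 * i, 0.1 * j, 0.0) for i in range(8) for j in range(10)]
cloud += [(rng.random(), rng.random(), 1.0 + rng.random()) for _ in range(20)]
(a, b, c, d), n_inliers = ransac_plane(cloud)
```

With 80% inliers, 200 random triples make it overwhelmingly likely that at least one all-inlier sample is drawn, so the recovered plane is the exact z = 0 plane despite the outliers.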
Directory of Open Access Journals (Sweden)
Goutaudier C.
2013-07-01
Full Text Available In many ternary systems with a miscibility gap, at least one critical point, stable or metastable, can be observed under isobaric and isothermal conditions. The experimental determination of this invariant point is difficult, but its knowledge is essential. The authors propose a method for calculating the composition of the invariant solution starting from the compositions of the liquid phases in equilibrium. The computing method is based on the barycentric properties of the conjugate solutions (binodal points) and an extension of the straight diameter method. A systematic study was carried out on a large number of ternary systems involving diverse constituents (230 ternary systems at various temperatures). The results are presented and analyzed by means of consistency tests.
Fang, W.; Quan, S. H.; Xie, C. J.; Tang, X. F.; Wang, L. L.; Huang, L.
2016-03-01
In this study, a direct-current/direct-current (DC/DC) converter with maximum power point tracking (MPPT) is developed to down-convert the high-voltage DC output of a thermoelectric generator to the lower voltage required to charge batteries. To improve the tracking accuracy and speed of the converter, a novel MPPT control scheme characterized by an aggregated dichotomy and gradient (ADG) method is proposed. In the first stage, the dichotomy algorithm is used as a fast search method to find the approximate region of the maximum power point. The gradient method is then applied for rapid and accurate tracking of the maximum power point. To validate the proposed MPPT method, a test bench composed of an automobile exhaust thermoelectric generator was constructed for harvesting automotive exhaust heat energy. Steady-state and transient tracking experiments under five different load conditions were carried out using a DC/DC converter with the proposed ADG method and with three traditional methods. The experimental results show that the ADG method can track the maximum power within 140 ms with a 1.1% error rate when the engine operates at 3300 rpm @ 71 N·m, which is superior to the single dichotomy method, the single gradient method, and the perturbation and observation method in terms of tracking accuracy and speed.
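The two-stage ADG idea can be sketched as follows: a dichotomy (bisection) on the sign of the numerical slope dP/dV quickly brackets the peak, after which gradient ascent refines the operating point. The quadratic power curve, step sizes, and iteration counts below are illustrative assumptions, not the paper's converter model.

```python
def adg_track(power, lo, hi, coarse_iters=10, fine_iters=60, lr=0.05, eps=1e-3):
    """Aggregated dichotomy-and-gradient search for the voltage maximizing
    power(v) on [lo, hi], sketched with a central-difference slope."""
    # Stage 1: dichotomy -- bisect on the sign of the slope dP/dV.
    for _ in range(coarse_iters):
        mid = 0.5 * (lo + hi)
        slope = (power(mid + eps) - power(mid - eps)) / (2.0 * eps)
        if slope > 0.0:
            lo = mid              # peak lies to the right of mid
        else:
            hi = mid              # peak lies to the left of mid
    v = 0.5 * (lo + hi)
    # Stage 2: gradient ascent for fine tracking of the maximum power point.
    for _ in range(fine_iters):
        slope = (power(v + eps) - power(v - eps)) / (2.0 * eps)
        v += lr * slope
    return v

# Toy generator curve with its maximum power point at 24 V (assumed model).
v_mpp = adg_track(lambda v: 50.0 - 0.8 * (v - 24.0) ** 2, 0.0, 40.0)
```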
APPLICATION OF POINT-CENTERED QUARTER METHOD FOR MEASUREMENT THE BEACH CRAB (OCYPODE SPP DENSITY
Directory of Open Access Journals (Sweden)
Hanifa Marisa
2015-10-01
Full Text Available The point-centred quarter method is a procedure for measuring plant community structure. The technique is based on measuring the distance to the nearest plant or tree in each of the four quarters formed by crossed lines at sampling points along a transect through the study area. In forest sampling, the point-centred quarter method is considered efficient, reliable and accurate, not only for mean distance and density but also for species frequency and dominance. It is therefore of interest to test whether the method can be applied to animals, especially crabs. The method was applied to the crab population of Padang Beach on December 22nd, 2014. Ten quartered points were made and the distance to each Ocypode sp. crab burrow was measured with a ruler. The mean distance was obtained by dividing the sum of the burrow-to-point distances by the number of quarters (20). Density per hectare is 10,000 m² divided by the square of the mean distance. In this case, the mean distance was 0.41 m and the estimated crab population is 59,488.34 individuals per hectare. Compared to other species, e.g. Scylla serrata, this population is larger, even though the beach is polluted and littered with waste.
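The density arithmetic in this abstract can be reproduced directly: the point-centred quarter estimator takes density as unit area divided by the square of the mean point-to-individual distance. The uniform distance list below is an illustrative assumption standing in for the field measurements.

```python
def pcq_density_per_hectare(distances_m):
    """Point-centred quarter density estimate: individuals per hectare is
    10,000 m^2 divided by the square of the mean distance (in metres)."""
    mean_d = sum(distances_m) / len(distances_m)
    return 10_000.0 / mean_d ** 2

# With the abstract's mean burrow distance of 0.41 m over 20 quarters:
density = pcq_density_per_hectare([0.41] * 20)
```

This reproduces the abstract's figure of roughly 59,488 burrows per hectare.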
Energy Technology Data Exchange (ETDEWEB)
Liu, Youshan, E-mail: ysliu@mail.iggcas.ac.cn [State Key Laboratory of Lithospheric Evolution, Institute of Geology and Geophysics, Chinese Academy of Sciences, Beijing, 100029 (China); Teng, Jiwen, E-mail: jwteng@mail.iggcas.ac.cn [State Key Laboratory of Lithospheric Evolution, Institute of Geology and Geophysics, Chinese Academy of Sciences, Beijing, 100029 (China); Xu, Tao, E-mail: xutao@mail.iggcas.ac.cn [State Key Laboratory of Lithospheric Evolution, Institute of Geology and Geophysics, Chinese Academy of Sciences, Beijing, 100029 (China); CAS Center for Excellence in Tibetan Plateau Earth Sciences, Beijing, 100101 (China); Badal, José, E-mail: badal@unizar.es [Physics of the Earth, Sciences B, University of Zaragoza, Pedro Cerbuna 12, 50009 Zaragoza (Spain)
2017-05-01
The mass-lumped method avoids the cost of inverting the mass matrix and simultaneously maintains spatial accuracy by adopting additional interior integration points, known as cubature points. To date, such points are only known analytically in tensor domains, such as quadrilateral or hexahedral elements. Thus, the diagonal-mass-matrix spectral element method (SEM) in non-tensor domains always relies on numerically computed interpolation points or quadrature points. However, only the cubature points for degrees 1 to 6 are known, which is the reason that we have developed a p-norm-based optimization algorithm to obtain higher-order cubature points. In this way, we obtain and tabulate new cubature points with all positive integration weights for degrees 7 to 9. The dispersion analysis illustrates that the dispersion relation determined from the new optimized cubature points is comparable to that of the mass and stiffness matrices obtained by exact integration. Simultaneously, the Lebesgue constant for the new optimized cubature points indicates its surprisingly good interpolation properties. As a result, such points provide both good interpolation properties and integration accuracy. The Courant–Friedrichs–Lewy (CFL) numbers are tabulated for the conventional Fekete-based triangular spectral element (TSEM), the TSEM with exact integration, and the optimized cubature-based TSEM (OTSEM). A complementary study demonstrates the spectral convergence of the OTSEM. A numerical example conducted on a half-space model demonstrates that the OTSEM improves the accuracy by approximately one order of magnitude compared to the conventional Fekete-based TSEM. In particular, the accuracy of the 7th-order OTSEM is even higher than that of the 14th-order Fekete-based TSEM. Furthermore, the OTSEM produces a result that can compete in accuracy with the quadrilateral SEM (QSEM). The high accuracy of the OTSEM is also tested with a non-flat topography model. In terms of computational
A Hybrid Maximum Power Point Tracking Method for Automobile Exhaust Thermoelectric Generator
Quan, Rui; Zhou, Wei; Yang, Guangyou; Quan, Shuhai
2017-05-01
To make full use of the maximum output power of an automobile exhaust thermoelectric generator (AETEG) based on Bi2Te3 thermoelectric modules (TEMs), and taking into account the advantages and disadvantages of existing maximum power point tracking methods as well as the output characteristics of TEMs, a hybrid maximum power point tracking method combining the perturb and observe (P&O) algorithm, quadratic interpolation, and constant voltage tracking is put forward in this paper. It first searches for the maximum power point with the P&O algorithm and quadratic interpolation, and then forces the AETEG to work at its maximum power point with constant voltage tracking. A synchronous buck converter and controller were implemented on the electrical bus of the AETEG applied in a military sports utility vehicle, and the whole system was modeled and simulated in a MATLAB/Simulink environment. Simulation results demonstrate that the maximum output power of the AETEG based on the proposed hybrid method is increased by about 3.0% and 3.7% compared with using only the P&O algorithm and only the quadratic interpolation method, respectively. The tracking time is only 1.4 s, reduced by half compared with that of the P&O algorithm and of the quadratic interpolation method. The experimental results demonstrate that the maximum power tracked by the proposed hybrid method is approximately equal to the real value; the hybrid method handles the voltage fluctuation of the AETEG better than the P&O algorithm alone, and resolves the issue that the working point can barely be adjusted with constant voltage tracking alone when operating conditions change.
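Of the three techniques combined in this abstract, the quadratic-interpolation step is the most self-contained: fit a parabola through three sampled (voltage, power) points and jump to its vertex. This is a generic sketch of that step under an assumed power curve, not the paper's controller code.

```python
def quad_vertex(v1, p1, v2, p2, v3, p3):
    """Voltage at the vertex of the parabola through three (V, P) samples,
    using the standard Lagrange-form coefficients."""
    denom = (v1 - v2) * (v1 - v3) * (v2 - v3)
    a = (v3 * (p2 - p1) + v2 * (p1 - p3) + v1 * (p3 - p2)) / denom
    b = (v3 ** 2 * (p1 - p2) + v2 ** 2 * (p3 - p1) + v1 ** 2 * (p2 - p3)) / denom
    return -b / (2.0 * a)          # vertex of a*v^2 + b*v + c

# Three samples of an assumed power curve P(v) = -(v - 5)^2:
v_est = quad_vertex(1.0, -16.0, 4.0, -1.0, 6.0, -1.0)
```

For a truly quadratic curve a single interpolation lands exactly on the maximum; in a tracker the three most recent operating points are refit as new samples arrive.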
A Study of Impact Point Detecting Method Based on Seismic Signal
Huo, Pengju; Zhang, Yu; Xu, Lina; Huang, Yong
The projectile landing position has to be determined for projectile recovery and range measurement in targeting tests. In this paper, a global search method based on the velocity variance is proposed. In order to verify the applicability of this method, simulation analysis over an area of four million square meters has been conducted with the same array structure as the commonly used linear positioning method, and MATLAB was used to compare and analyze the two methods. The simulation results show that the global search method based on the velocity variance has high positioning accuracy and stability, and can meet the needs of impact point location.
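The velocity-variance criterion can be sketched as follows: at the true impact point, the propagation speeds implied by every sensor's arrival time agree, so their variance vanishes, and a grid search over candidate points minimizes it. All sensor positions, arrival times and the grid are hypothetical, not the paper's configuration.

```python
import itertools
import math
import statistics

def locate_impact(sensors, times, xs, ys):
    """Grid search for the impact point: at the true location the
    propagation speeds implied by each sensor (relative to sensor 0)
    agree, so the variance of those speed estimates is minimal."""
    def speed_variance(q):
        d = [math.dist(q, s) for s in sensors]
        speeds = []
        for i in range(1, len(sensors)):
            dt = times[i] - times[0]
            if abs(dt) < 1e-12:          # candidate equidistant: undefined
                return float("inf")
            speeds.append((d[i] - d[0]) / dt)
        return statistics.pvariance(speeds)
    return min(itertools.product(xs, ys), key=speed_variance)

# Synthetic test: linear sensor array, impact at (120, 60), v = 300 m/s.
sensors = [(0.0, 0.0), (100.0, 0.0), (200.0, 0.0), (300.0, 0.0)]
src, v = (120.0, 60.0), 300.0
times = [math.dist(src, s) / v for s in sensors]
best = locate_impact(sensors, times,
                     [float(x) for x in range(0, 301, 10)],
                     [float(y) for y in range(0, 151, 10)])
```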
Multiscale Modeling using Molecular Dynamics and Dual Domain Material Point Method
Energy Technology Data Exchange (ETDEWEB)
Dhakal, Tilak Raj [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). Theoretical Division. Fluid Dynamics and Solid Mechanics Group, T-3; Rice Univ., Houston, TX (United States)
2016-07-07
For problems involving a large material deformation rate, the material deformation time scale can be shorter than the time the material takes to reach thermodynamic equilibrium. For such problems it is difficult to obtain a constitutive relation, and history dependency becomes important because of the thermodynamic non-equilibrium. Our goal is to build a multi-scale numerical method which can bypass the need for a constitutive relation. To this end, a multi-scale simulation method is developed based on the dual domain material point (DDMP) method, in which molecular dynamics (MD) simulation is performed to calculate the stress. Since communication among material points is not necessary, the computation can be done in an embarrassingly parallel fashion on a CPU-GPU platform.
DEFF Research Database (Denmark)
Skajaa, Anders; Andersen, Erling D.; Ye, Yinyu
2013-01-01
We present two strategies for warmstarting primal-dual interior point methods for the homogeneous self-dual model when applied to mixed linear and quadratic conic optimization problems. Common to both strategies is their use of only the final (optimal) iterate of the initial problem …
A primal-dual interior point method for large-scale free material optimization
DEFF Research Database (Denmark)
Weldeyesus, Alemseged Gebrehiwot; Stolpe, Mathias
2015-01-01
Free Material Optimization (FMO) is a branch of structural optimization in which the design variable is the elastic material tensor that is allowed to vary over the design domain. The requirements are that the material tensor is symmetric positive semidefinite with bounded trace. The resulting optimization problem is a nonlinear semidefinite program with many small matrix inequalities for which a special-purpose optimization method should be developed. The objective of this article is to propose an efficient primal-dual interior point method for FMO that can robustly and accurately solve large … of iterations the interior point method requires is modest and increases only marginally with problem size. The computed optimal solutions obtain a higher precision than other available special-purpose methods for FMO. The efficiency and robustness of the method is demonstrated by numerical experiments on a set …
Use of Finite Point Method for Wave Propagation in Nonhomogeneous Unbounded Domains
Directory of Open Access Journals (Sweden)
S. Moazam
2015-01-01
Full Text Available Wave propagation in an unbounded domain surrounding the stimulation source is one of the important issues for engineers. Past literature has mainly concentrated on the modelling and estimation of wave propagation in partially layered, homogeneous, unbounded domains with harmonic properties. In this study, a new approach based on the Finite Point Method (FPM) has been introduced to analyze and solve problems of wave propagation in any nonhomogeneous unbounded domain. The proposed method can take the domain properties, specified coordinate by coordinate, as input. Therefore, there is no restriction on the form of the domain properties, such as the requirement of periodicity found in existing similar numerical methods. The proposed method can model the boundary points between phases with only trace errors, and its results satisfy both the decay and radiation conditions.
The Three-Point Sinuosity Method for Calculating the Fractal Dimension of Machined Surface Profile
Zhou, Yuankai; Li, Yan; Zhu, Hua; Zuo, Xue; Yang, Jianhua
2015-04-01
The three-point sinuosity (TPS) method is proposed to calculate the fractal dimension of a surface profile accurately. In this method a new measure, the TPS, is defined to represent the structural complexity of fractal curves, and it has been proved to follow a power law. Thus, the fractal dimension can be calculated from the slope of the fitted line in the log-log plot. Weierstrass-Mandelbrot (W-M) fractal curves, as well as real surface profiles obtained by grinding, sand blasting and turning, are used to validate the effectiveness of the proposed method. The calculated values are compared to those obtained from the root-mean-square (RMS) method, the box-counting (BC) method and the variation method. The results show that the TPS method has the widest scaling region, the least fit error and the highest accuracy among the methods examined, which demonstrates that the fractal characteristics of the curves can be well revealed by the proposed method.
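The final step shared by the TPS, box-counting and variation methods — estimating the dimension from the slope of the fitted line in the log-log plot — can be sketched as follows. The exact TPS measure is defined in the paper; here a synthetic power-law measure stands in.

```python
import math

def slope_loglog(scales, measures):
    """Least-squares slope of log(measure) vs log(scale); the fractal
    dimension is then derived from this slope (for box counting,
    D = -slope; the TPS measure follows an analogous power law)."""
    xs = [math.log(s) for s in scales]
    ys = [math.log(m) for m in measures]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    return (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
            / sum((x - xbar) ** 2 for x in xs))

# Synthetic check: a measure obeying N(s) = s**(-1.5) must yield D = 1.5.
scales = [2.0 ** -k for k in range(1, 8)]
measures = [s ** -1.5 for s in scales]
D = -slope_loglog(scales, measures)
```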
The Curvature-Augmented Closest Point method with vesicle inextensibility application
Vogl, Christopher J.
2017-09-01
The Closest Point method, initially developed by Ruuth and Merriman, allows for the numerical solution of surface partial differential equations without the need for a parameterization of the surface itself. Surface quantities are embedded into the surrounding domain by assigning each value at a given spatial location to the corresponding value at the closest point on the surface. This embedding allows for surface derivatives to be replaced by their Cartesian counterparts (e.g. ∇s = ∇). This equivalence is only valid on the surface, and thus, interpolation is used to enforce what is known as the side condition away from the surface. To improve upon the method, this work derives an operator embedding that incorporates curvature information, making it valid in a neighborhood of the surface. With this, direct enforcement of the side condition is no longer needed. Comparisons in R2 and R3 show that the resulting Curvature-Augmented Closest Point method has better accuracy and requires less memory, through increased matrix sparsity, than the Closest Point method, while maintaining similar matrix condition numbers. To demonstrate the utility of the method in a physical application, simulations of inextensible, bi-lipid vesicles evolving toward equilibrium shapes are also included.
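The closest-point embedding itself can be illustrated on the unit circle: the extension assigns each ambient point the surface value at its closest point, so it is constant along normals, and a surface quantity read on the surface and read at a nearby off-surface point agree. This is a minimal sketch of the embedding only, not the authors' solver.

```python
import math

def cp_circle(x, y):
    """Closest point on the unit circle to (x, y) (undefined at the origin)."""
    r = math.hypot(x, y)
    return x / r, y / r

def cp_extend(u, x, y):
    """Closest Point extension: assign the surface value at the closest
    point to every ambient point, making the extension constant along
    normals (so Cartesian derivatives match surface derivatives there)."""
    return u(*cp_circle(x, y))

# Surface function u(theta) = cos(theta), written in ambient coordinates.
u = lambda px, py: px            # cos(theta) equals x on the unit circle
theta = 0.7
on_surface = u(math.cos(theta), math.sin(theta))
off_surface = cp_extend(u, 0.4 * math.cos(theta), 0.4 * math.sin(theta))
```

The curvature-augmented variant described above modifies this embedded operator so that the equivalence of derivatives holds in a neighborhood of the surface, removing the need for the interpolation-based side condition.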
Nosikov, I. A.; Klimenko, M. V.; Bessarab, P. F.; Zhbankov, G. A.
2017-07-01
Point-to-point ray tracing is an important problem in many fields of science. While direct variational methods, in which some trajectory is transformed into an optimal one, are routinely used in calculations of pathways of seismic waves, chemical reactions, diffusion processes, etc., this approach is not widely known in ionospheric point-to-point ray tracing. We apply the Nudged Elastic Band (NEB) method to a radio wave propagation problem. In the NEB method, a chain of points which gives a discrete representation of the radio wave ray is adjusted iteratively to an optimal configuration satisfying Fermat's principle, while the endpoints of the trajectory are kept fixed according to the boundary conditions. Transverse displacements define the radio ray trajectory, while springs between the points control their distribution along the ray. The method is applied to a study of point-to-point ionospheric ray tracing, where the propagation medium is obtained with the International Reference Ionosphere model taking into account traveling ionospheric disturbances. A 2-dimensional representation of the optical path functional is developed and used to gain insight into the fundamental difference between high and low rays. We conclude that high and low rays are minima and saddle points of the optical path functional, respectively.
On the maximum likelihood method for estimating molecular trees: uniqueness of the likelihood point.
Fukami, K; Tateno, Y
1989-05-01
Studies are carried out on the uniqueness of the stationary point of the likelihood function for estimating molecular phylogenetic trees, yielding proof that there exists at most one stationary point, i.e., the maximum point, in the parameter range for the one-parameter model of nucleotide substitution. The proof is simple yet applicable to any type of tree topology with an arbitrary number of operational taxonomic units (OTUs). The proof ensures that any valid approximation algorithm is able to reach the unique maximum point under the conditions mentioned above. An algorithm incorporating Newton's approximation method is then developed and compared with the conventional one by means of computer simulation. The results show that the newly developed algorithm always requires less CPU time than the conventional one, whereas both algorithms lead to identical molecular phylogenetic trees, in accordance with the proof.
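The Newton iteration on the score function can be sketched for a one-parameter likelihood whose maximum is provably unique; here a binomial model stands in for the nucleotide-substitution parameter (a hypothetical example, not the paper's tree likelihood).

```python
def newton_mle(k, n, p0=0.1, tol=1e-12, max_iter=50):
    """Newton's method on the score (derivative of the log-likelihood)
    of a one-parameter binomial model:
        score(p)  = k/p - (n - k)/(1 - p)
        score'(p) = -k/p**2 - (n - k)/(1 - p)**2   (always < 0: unique max)
    """
    p = p0
    for _ in range(max_iter):
        score = k / p - (n - k) / (1 - p)
        hess = -k / p ** 2 - (n - k) / (1 - p) ** 2
        step = score / hess
        p -= step
        if abs(step) < tol:
            break
    return p

p_hat = newton_mle(30, 100)     # analytic maximum is k/n = 0.3
```

Because the score is strictly decreasing on the parameter range, Newton's iteration cannot be trapped at a spurious stationary point, which mirrors the uniqueness argument above.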
Directory of Open Access Journals (Sweden)
Mroczka Janusz
2014-12-01
Full Text Available Photovoltaic panels have non-linear current-voltage characteristics and produce maximum power at only one point, called the maximum power point. Under uniform illumination a single solar panel shows only one power maximum, which is also the global maximum power point. When a photovoltaic panel is irregularly illuminated, many local maxima can be observed on the power-voltage curve, and only one of them is the global maximum. The proposed algorithm detects whether a solar panel is under uniform insolation conditions; an appropriate strategy for tracking the maximum power point is then chosen by a decision algorithm. The proposed method is simulated in an environment created by the authors, which allows photovoltaic panels to be simulated under real conditions of lighting, temperature and shading.
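The fallback strategy for non-uniform insolation can be sketched as a coarse sweep of the P-V curve that recovers the global maximum where a plain hill climb could lock onto a local one. The two-peak power curve and voltage grid below are hypothetical.

```python
import math

def global_mpp(power, v_grid):
    """Coarse global scan of the P-V curve; under partial shading the
    curve is multimodal, so scanning all candidate voltages finds the
    global maximum power point rather than a local one."""
    return max(v_grid, key=power)

# Toy two-peak curve: local maximum near v = 5 V, global maximum near 15 V.
def power(v):
    return 40.0 * math.exp(-(v - 5.0) ** 2) + 90.0 * math.exp(-(v - 15.0) ** 2)

v_grid = [0.1 * i for i in range(0, 201)]   # 0 .. 20 V
v_star = global_mpp(power, v_grid)
```

In practice the scan only needs to bracket the global peak; a local tracker such as P&O can then refine the operating point from there.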
Deng, Kai-Wen; He, Fu-Yuan
2013-05-01
To analyze the status of meridian-reaching research on Chinese materia medica (CMM) and to propose the point-medicine method. The studied relationships between "materials", i.e., the constituents of CMM that form the material foundation of meridian reaching, and "image", i.e., the function states of the zang-fu viscera, are reviewed and analyzed to identify the open problems and the measures needed to solve them. There are imprinting relationships among "materials" (constituents of CMM with similar metabolic pathways forming the meridian-reaching material foundation), "image" (the function of the zang-fu viscera related to the meridians) and "symptom" (their function states), which can be represented and explored through the corresponding meridian-related constituents of CMM as quantitative pharmacological parameters, and which are also modified by specific acupuncture points. On this basis a new method for determining meridian reaching according to meridian point-medicine action is established, which also makes it possible to relate the constituents of CMM to the network targets of disease, killing two birds with one stone. The point-medicine method for assessing meridian reaching is the simplest way to investigate meridian reaching for CMM, and is also an important way to investigate visceral and meridian manifestations.
Kholeif, S A
2001-06-01
A new method of the differential category for determining end points from potentiometric titration curves is presented. It uses a preprocess to find first-derivative values by fitting four data points in and around the region of inflection to a non-linear function, and then locates the end point, usually a maximum or minimum, using an inverse parabolic interpolation procedure that has an analytical solution. The behavior and accuracy of the sigmoid and cumulative non-linear functions used are investigated against three factors. A statistical evaluation of the new method using linear least-squares validation and multifactor data analysis is covered. The new method applies generally to symmetrical and unsymmetrical potentiometric titration curves, and the end point is calculated using numerical procedures only. It outperforms the "parent" regular differential method at almost all factor levels and gives accurate results comparable to the true or estimated true end points. Calculated end points from selected experimental titration curves are also compared between the new method and methods of the equivalence-point category, such as those of Gran or Fortuin.
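The inverse parabolic interpolation step indeed has an analytical solution: the abscissa of the vertex of the parabola through three samples of the fitted first derivative. The derivative samples below are hypothetical stand-ins for the preprocessed titration data.

```python
def parabola_vertex(x, y):
    """Abscissa of the vertex of the parabola through the three points
    (x[i], y[i]); locates the extremum of the fitted first derivative
    analytically, as in inverse (successive) parabolic interpolation."""
    x0, x1, x2 = x
    y0, y1, y2 = y
    num = (x1 - x0) ** 2 * (y1 - y2) - (x1 - x2) ** 2 * (y1 - y0)
    den = (x1 - x0) * (y1 - y2) - (x1 - x2) * (y1 - y0)
    return x1 - 0.5 * num / den

# Samples of dE/dV = -(V - 7.25)**2 + 4 around its maximum at V = 7.25 mL.
xs = (6.5, 7.0, 7.8)
ys = tuple(-(v - 7.25) ** 2 + 4.0 for v in xs)
end_point = parabola_vertex(xs, ys)
```

For data that are exactly parabolic the formula is exact; for real derivative samples it gives the usual second-order-accurate end-point estimate.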
Diagnosis of solid breast lesions by elastography 5-point score and strain ratio method
Energy Technology Data Exchange (ETDEWEB)
Zhao, Qiao Ling, E-mail: imagingzhaoql@126.com [Department of Ultrasound, the First Affiliated Hospital, Medical College of Xi'an Jiaotong University, Xi'an Yanta West Road No. 277, Shaanxi 710061 (China); Ruan, Li Tao, E-mail: ruanlitao@163.com [Department of Ultrasound, the First Affiliated Hospital, Medical College of Xi'an Jiaotong University, Xi'an Yanta West Road No. 277, Shaanxi 710061 (China); Zhang, Hua, E-mail: Zhanghua54322@163.com [Department of Ultrasound, the First Affiliated Hospital, Medical College of Xi'an Jiaotong University, Xi'an Yanta West Road No. 277, Shaanxi 710061 (China); Yin, Yi Min, E-mail: yymxbh@yahoo.cn [Department of Ultrasound, the First Affiliated Hospital, Medical College of Xi'an Jiaotong University, Xi'an Yanta West Road No. 277, Shaanxi 710061 (China); Duan, Shao Xue, E-mail: doujiaoyueer@163.com [Department of Ultrasound, the First Affiliated Hospital, Medical College of Xi'an Jiaotong University, Xi'an Yanta West Road No. 277, Shaanxi 710061 (China)
2012-11-15
Purpose: To compare the diagnostic performance of the 5-point scoring system and the strain ratio obtained by sonoelastography in the assessment of solid breast lesions. Material and methods: One hundred and eighty-seven solid masses in 155 patients were scanned by two-dimensional ultrasonography and sonoelastography. Elasticity scores were determined with a 5-point scoring method, and the strain ratio was based on the comparison of the average strain measured in the lesion with that of the adjacent breast tissue at the same depth. Pathological results were taken as the gold standard to compare the diagnostic efficacy of the two methods using clinical diagnostic tests and receiver operating characteristic (ROC) curves. Results: Among the 187 lesions, 130 were benign and 57 were malignant. The mean scores (1.62 ± 0.69 for benign vs 4.07 ± 0.26 for malignant, P < 0.05) and strain ratios (2.06 ± 1.27 vs 6.66 ± 4.62, P < 0.05) were significantly higher for malignant lesions. The area under the curve was 0.892 for the 5-point scoring system and 0.909 for strain-ratio-based elastographic analysis (P > 0.05). For 5-point scoring, sonoelastography had 84.2% sensitivity, 84.6% specificity, 84.5% accuracy, 70.6% positive predictive value and 92.4% negative predictive value. When a cutoff point of 3.06 was used for the strain ratio, the sensitivity, specificity, accuracy, positive and negative predictive values were 87.7%, 88.5%, 88.2%, 76.9% and 94.3%, respectively (P > 0.05). Conclusions: The 5-point scoring system and the strain ratio have similar diagnostic performance, and the strain ratio can be the more objective criterion for masses that are difficult to judge with the 5-point scoring system on sonoelastographic images.
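Sensitivity and specificity at a fixed strain-ratio cutoff can be computed as follows. The strain ratios and labels below are hypothetical illustrations, not the study data.

```python
def sens_spec(scores, labels, cutoff):
    """Sensitivity and specificity of the rule 'malignant if
    score >= cutoff' (labels: 1 = malignant, 0 = benign)."""
    tp = sum(1 for s, l in zip(scores, labels) if l == 1 and s >= cutoff)
    fn = sum(1 for s, l in zip(scores, labels) if l == 1 and s < cutoff)
    tn = sum(1 for s, l in zip(scores, labels) if l == 0 and s < cutoff)
    fp = sum(1 for s, l in zip(scores, labels) if l == 0 and s >= cutoff)
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical strain ratios: benign clustered near 2, malignant near 6.
scores = [1.5, 2.2, 2.8, 3.5, 5.0, 6.4, 7.1, 2.9]
labels = [0,   0,   0,   0,   1,   1,   1,   1]
sens, spec = sens_spec(scores, labels, cutoff=3.06)
```

Sweeping the cutoff over all observed scores and plotting sensitivity against (1 - specificity) yields the ROC curve whose area is reported above.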
Energy Technology Data Exchange (ETDEWEB)
Liu, Wenyang [Department of Bioengineering, University of California, Los Angeles, California 90095 (United States); Cheung, Yam; Sabouri, Pouya; Arai, Tatsuya J.; Sawant, Amit [Department of Radiation Oncology, University of Texas Southwestern, Dallas, Texas 75390 (United States); Ruan, Dan, E-mail: druan@mednet.ucla.edu [Department of Bioengineering, University of California, Los Angeles, California 90095 and Department of Radiation Oncology, University of California, Los Angeles, California 90095 (United States)
2015-11-15
Purpose: To accurately and efficiently reconstruct a continuous surface from noisy point clouds captured by a surface photogrammetry system (VisionRT). Methods: The authors have developed a level-set based surface reconstruction method for point clouds captured by a surface photogrammetry system (VisionRT). The proposed method reconstructs an implicit and continuous representation of the underlying patient surface by optimizing a regularized fitting energy, offering extra robustness to noise and missing measurements. In contrast to explicit/discrete meshing-type schemes, their continuous representation is particularly advantageous for subsequent surface registration and motion tracking by eliminating the need for maintaining explicit point correspondences as in discrete models. The authors solve the proposed method with an efficient narrowband evolving scheme. The authors evaluated the proposed method on both phantom and human subject data with two sets of complementary experiments. In the first set of experiments, the authors generated a series of surfaces, each with different black patches placed on one chest phantom. The resulting VisionRT measurements from the patched area had different degrees of noise and missing data, since VisionRT has difficulties in detecting dark surfaces. The authors applied the proposed method to point clouds acquired under these different configurations, and quantitatively evaluated reconstructed surfaces by comparing against a high-quality reference surface with respect to root mean squared error (RMSE). In the second set of experiments, the authors applied their method to 100 clinical point clouds acquired from one human subject. In the absence of ground truth, the authors qualitatively validated reconstructed surfaces by comparing the local geometry, specifically mean curvature distributions, against that of the surface extracted from a high-quality CT obtained from the same patient. Results: On phantom point clouds, their method achieved submillimeter …
Mathematical Method for Predicting Nickel Deposit Based on Data from Drilling Points
Directory of Open Access Journals (Sweden)
Edi Cahyono
2011-01-01
Full Text Available In this article we discuss several methods for predicting the nickel ore content of the soil under a given area/region. This prediction is the main objective of the exploration activity and, from an economic point of view, is very important for conducting the exploitation activity. The prediction methods are based on data obtained from drilling at several 'points'. The data yield information on the nickel density at those points. The nickel density over the region is approximated (with an approximate function) by applying interpolation and/or extrapolation based on the data from those points. The nickel content is then predicted by applying the integral of the approximate function over the given region.
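A one-dimensional sketch of this interpolate-then-integrate idea, assuming hypothetical drill positions and nickel densities along a transect:

```python
def trapezoid(xs, ys):
    """Integral of the piecewise-linear interpolant through the
    drilling-point data (xs strictly increasing): interpolation
    between points followed by integration over the transect."""
    return sum((xs[i + 1] - xs[i]) * (ys[i] + ys[i + 1]) / 2.0
               for i in range(len(xs) - 1))

# Hypothetical transect: drill positions (m) and nickel density (kg/m).
positions = [0.0, 50.0, 100.0, 150.0, 200.0]
density = [1.0, 3.0, 2.0, 4.0, 2.0]
total = trapezoid(positions, density)   # predicted content along transect
```

Over a two-dimensional region the same scheme applies with a bivariate interpolant and a double integral in place of the trapezoid rule.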
Rachakonda, Prem; Muralikrishnan, Bala; Cournoyer, Luc; Cheok, Geraldine; Lee, Vincent; Shilling, Meghan; Sawyer, Daniel
2017-10-01
The Dimensional Metrology Group at the National Institute of Standards and Technology is performing research to support the development of documentary standards within the ASTM E57 committee. This committee is addressing the point-to-point performance evaluation of a subclass of 3D imaging systems called terrestrial laser scanners (TLSs), which are laser-based and use a spherical coordinate system. This paper discusses the usage of sphere targets for this effort, and methods to minimize the errors due to the determination of their centers. The key contributions of this paper include methods to segment sphere data from a TLS point cloud, and the study of some of the factors that influence the determination of sphere centers.
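Determining a sphere center from segmented point-cloud data is commonly done with an algebraic least-squares fit, which is linear in the unknowns. The following is a sketch on synthetic points, not NIST's procedure.

```python
import math
import random

def fit_sphere(points):
    """Algebraic least-squares sphere fit: each point gives the linear
    equation  2ax + 2by + 2cz + d = x^2 + y^2 + z^2  in the unknowns
    center (a, b, c) and d = r^2 - a^2 - b^2 - c^2."""
    A = [[2 * x, 2 * y, 2 * z, 1.0] for x, y, z in points]
    rhs = [x * x + y * y + z * z for x, y, z in points]
    n = 4
    # Normal equations (A^T A) u = A^T rhs, solved by Gauss-Jordan.
    M = [[sum(A[k][i] * A[k][j] for k in range(len(A))) for j in range(n)]
         + [sum(A[k][i] * rhs[k] for k in range(len(A)))] for i in range(n)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [mr - f * mc for mr, mc in zip(M[r], M[col])]
    a, b, c, d = (M[i][n] / M[i][i] for i in range(n))
    return (a, b, c), math.sqrt(d + a * a + b * b + c * c)

# Synthetic target: points on a sphere centered at (1, 2, 3), radius 0.5.
random.seed(0)
pts = []
for _ in range(200):
    u, v = random.uniform(0, 2 * math.pi), random.uniform(-1, 1)
    s = math.sqrt(1 - v * v)
    pts.append((1 + 0.5 * s * math.cos(u),
                2 + 0.5 * s * math.sin(u),
                3 + 0.5 * v))
center, radius = fit_sphere(pts)
```

With real TLS data the fit is typically followed by an orthogonal (geometric) refinement, and the segmentation step determines which cloud points enter the fit.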
Interior Point Methods on GPU with application to Model Predictive Control
DEFF Research Database (Denmark)
Gade-Nielsen, Nicolai Fog
The goal of this thesis is to investigate the application of interior point methods to solve dynamical optimization problems, using a graphical processing unit (GPU), with a focus on problems arising in Model Predictive Control (MPC). Multi-core processors have been available for over ten years now, and manycore processors, such as GPUs, have also become a standard component in any consumer computer. The GPU offers faster floating point operations and higher memory bandwidth than the CPU, but requires algorithms to be redesigned and implemented to match the underlying architecture. A large number … software package called GPUOPT, available under the non-restrictive MIT license. GPUOPT includes a primal-dual interior-point method, which supports both the CPU and the GPU. It is implemented as multiple components, in which the matrix operations and the solver for the Newton directions are separated …
Multiple Break-Points Detection in Array CGH Data via the Cross-Entropy Method.
Priyadarshana, W J R M; Sofronov, Georgy
2015-01-01
Array comparative genome hybridization (aCGH) is a widely used methodology to detect copy number variations of a genome in high resolution. Knowing the number of break-points and their corresponding locations in genomic sequences serves different biological needs. Primarily, it helps to identify disease-causing genes that have functional importance in characterizing genome wide diseases. For human autosomes the normal copy number is two, whereas at the sites of oncogenes it increases (gain of DNA) and at the tumour suppressor genes it decreases (loss of DNA). The majority of the current detection methods are deterministic in their set-up and use dynamic programming or different smoothing techniques to obtain the estimates of copy number variations. These approaches limit the search space of the problem due to different assumptions considered in the methods and do not represent the true nature of the uncertainty associated with the unknown break-points in genomic sequences. We propose the Cross-Entropy method, which is a model-based stochastic optimization technique as an exact search method, to estimate both the number and locations of the break-points in aCGH data. We model the continuous scale log-ratio data obtained by the aCGH technique as a multiple break-point problem. The proposed methodology is compared with well established publicly available methods using both artificially generated data and real data. Results show that the proposed procedure is an effective way of estimating number and especially the locations of break-points with high level of precision. Availability: The methods described in this article are implemented in the new R package breakpoint and it is available from the Comprehensive R Archive Network at http://CRAN.R-project.org/package=breakpoint.
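A minimal Cross-Entropy sketch for a single break-point in the mean: sample candidate locations from a parametric distribution, keep the elite (lowest-error) candidates, and refit the sampling distribution to them. The package handles multiple break-points; all data here are synthetic.

```python
import random
import statistics

def sse_two_segments(data, k):
    """Fitting error when the sequence mean changes at index k."""
    left, right = data[:k], data[k:]
    return (sum((x - statistics.fmean(left)) ** 2 for x in left)
            + sum((x - statistics.fmean(right)) ** 2 for x in right))

def ce_breakpoint(data, n_samples=60, elite=10, iters=25):
    """Cross-Entropy search: sample break-point candidates from a normal
    distribution over locations, score them, and update the distribution
    from the elite set until it concentrates on the best location."""
    mu, sigma = len(data) / 2.0, len(data) / 3.0
    for _ in range(iters):
        cands = [min(max(int(round(random.gauss(mu, sigma))), 1),
                     len(data) - 1)
                 for _ in range(n_samples)]
        cands.sort(key=lambda k: sse_two_segments(data, k))
        best = cands[:elite]
        mu = statistics.fmean(best)
        sigma = max(statistics.pstdev(best), 0.5)   # keep exploring a little
    return int(round(mu))

random.seed(1)
# Log-ratio-like sequence: mean 0 for 40 probes, then mean 1.5 (a gain).
data = ([random.gauss(0.0, 0.3) for _ in range(40)]
        + [random.gauss(1.5, 0.3) for _ in range(40)])
k_hat = ce_breakpoint(data)
```

The full method jointly samples the number of break-points and their locations; this sketch fixes the number at one to show the sampling-and-elite-update loop.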
DEFF Research Database (Denmark)
Bey, Niki
2000-01-01
… industrial activity world-wide makes it increasingly evident that our current way of life is not sustainable. A major contribution to society's negative impact on the environment is related to industrial products and the processes during their life cycle, from raw materials extraction over manufacturing … of environmental evaluation and only approximate information about the product and its life cycle. This dissertation addresses this challenge in presenting a method which is tailored to these requirements of designers: the Oil Point Method (OPM). In providing environmental key information and confining itself to three essential assessment steps, the method enables rough environmental evaluations and supports in this way material- and process-related decision-making in the early stages of design. In its overall structure, the Oil Point Method is related to Life Cycle Assessment, except for two main differences …
A novel method of measuring the melting point of animal fats.
Lloyd, S S; Dawkins, S T; Dawkins, R L
2014-10-01
The melting point (TM) of fat is relevant to health, but available methods of determining TM are cumbersome. One of the standard methods of measuring TM for animal and vegetable fats is the slip point, also known as the open capillary method. This method is imprecise and not amenable to automation or mass testing. We have developed a technique for measuring TM of animal fat using the Rotor-Gene Q (Qiagen, Hilden, Germany). The assay has an intra-assay SD of 0.08°C. A single operator can extract and assay up to 250 samples of animal fat in 24 h, including the time to extract the fat from the adipose tissue. This technique will improve the quality of research into genetic and environmental contributions to fat composition of meat.
DEFF Research Database (Denmark)
… an overview of existing triangulation methods with emphasis on performance versus optimality, and will suggest a fast triangulation algorithm based on linear constraints. The structure and camera motion estimation in an SFM system is based on the minimization of some norm of the reprojection error between the 3D points and their images in the cameras. Most classical methods are based on minimizing the sum of squared errors, the L2 norm, after initializing the structure by an algebraic method ([2]). It has been shown (in [4] amongst others) that, first, the algebraic method can produce initial estimates …
Liu, Youshan; Teng, Jiwen; Xu, Tao; Badal, José
2017-05-01
The mass-lumped method avoids the cost of inverting the mass matrix and simultaneously maintains spatial accuracy by adopting additional interior integration points, known as cubature points. To date, such points are only known analytically in tensor domains, such as quadrilateral or hexahedral elements. Thus, the diagonal-mass-matrix spectral element method (SEM) in non-tensor domains always relies on numerically computed interpolation points or quadrature points. However, only the cubature points for degrees 1 to 6 are known, which is the reason that we have developed a p-norm-based optimization algorithm to obtain higher-order cubature points. In this way, we obtain and tabulate new cubature points with all positive integration weights for degrees 7 to 9. The dispersion analysis illustrates that the dispersion relation determined from the new optimized cubature points is comparable to that of the mass and stiffness matrices obtained by exact integration. Simultaneously, the Lebesgue constant for the new optimized cubature points indicates its surprisingly good interpolation properties. As a result, such points provide both good interpolation properties and integration accuracy. The Courant-Friedrichs-Lewy (CFL) numbers are tabulated for the conventional Fekete-based triangular spectral element (TSEM), the TSEM with exact integration, and the optimized cubature-based TSEM (OTSEM). A complementary study demonstrates the spectral convergence of the OTSEM. A numerical example conducted on a half-space model demonstrates that the OTSEM improves the accuracy by approximately one order of magnitude compared to the conventional Fekete-based TSEM. In particular, the accuracy of the 7th-order OTSEM is even higher than that of the 14th-order Fekete-based TSEM. Furthermore, the OTSEM produces a result that can compete in accuracy with the quadrilateral SEM (QSEM). The high accuracy of the OTSEM is also tested with a non-flat topography model. In terms of computational
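The Lebesgue constant cited above can be computed by maximizing the sum of absolute Lagrange basis functions over the element. A 1-D sketch comparing equispaced and Chebyshev-Lobatto nodes (stand-ins for the triangular cubature points) illustrates why a small constant signals good interpolation properties:

```python
import math

def lebesgue_constant(nodes, n_eval=2000):
    """Maximum over [-1, 1] of the Lebesgue function, i.e. the sum of
    absolute Lagrange basis polynomials built on the given nodes."""
    def lam(x):
        total = 0.0
        for i, xi in enumerate(nodes):
            li = 1.0
            for j, xj in enumerate(nodes):
                if j != i:
                    li *= (x - xj) / (xi - xj)
            total += abs(li)
        return total
    return max(lam(-1.0 + 2.0 * k / n_eval) for k in range(n_eval + 1))

n = 10
equi = [-1.0 + 2.0 * i / n for i in range(n + 1)]            # equispaced
cheb = [math.cos(math.pi * i / n) for i in range(n + 1)]     # Chebyshev-Lobatto
L_equi = lebesgue_constant(equi)
L_cheb = lebesgue_constant(cheb)
```

A small Lebesgue constant bounds the interpolation error relative to the best polynomial approximation, which is why the optimized cubature points' small constants translate into good interpolation properties.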
Raykov, Tenko; Marcoulides, George A.
2015-01-01
A direct approach to point and interval estimation of Cronbach's coefficient alpha for multiple component measuring instruments is outlined. The procedure is based on a latent variable modeling application with widely circulated software. As a by-product, using sample data the method permits ascertaining whether the population discrepancy…
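The point estimate of coefficient alpha itself can be computed directly from item scores; the article's latent-variable interval procedure is not reproduced here, and the parallel items below are synthetic.

```python
import random
import statistics

def cronbach_alpha(items):
    """Cronbach's alpha from item score columns:
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))."""
    k = len(items)
    n = len(items[0])
    totals = [sum(item[i] for item in items) for i in range(n)]
    item_var = sum(statistics.pvariance(col) for col in items)
    return k / (k - 1) * (1 - item_var / statistics.pvariance(totals))

# Three parallel items: a common signal plus independent noise.
random.seed(2)
signal = [random.gauss(0, 1) for _ in range(500)]
items = [[s + random.gauss(0, 0.5) for s in signal] for _ in range(3)]
alpha = cronbach_alpha(items)
```

With signal variance 1 and noise variance 0.25, the Spearman-Brown prediction for three parallel items is about 0.92, which the sample estimate approaches.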
William J. Zielinski; Fredrick V. Schlexer; T. Luke George; Kristine L. Pilgrim; Michael K. Schwartz
2013-01-01
The Point Arena mountain beaver (Aplodontia rufa nigra) is federally listed as an endangered subspecies that is restricted to a small geographic range in coastal Mendocino County, California. Management of this imperiled taxon requires accurate information on its demography and vital rates. We developed noninvasive survey methods, using hair snares to sample DNA and to...
Novel methods for point-of-care diagnosis of nerve agent exposure (Abstract)
Noort, D.; Schans, M.J. van der; Fidder, A.; Verstappen, D.R.W.; Hulst, A.G.; Mars-Groenendijk, R.
2012-01-01
Methods to unequivocally and rapidly assess exposure to nerve agents are highly valuable from a military and security perspective. Within this framework we currently follow two different approaches towards rapid point-of-care diagnosis. Regarding the first approach we hypothesized that proteins in
Infeasible Interior-Point Methods for Linear Optimization Based on Large Neighborhood
Asadi, A.R.; Roos, C.
2015-01-01
In this paper, we design a class of infeasible interior-point methods for linear optimization based on large neighborhood. The algorithm is inspired by a full-Newton step infeasible algorithm with a linear convergence rate in problem dimension that was recently proposed by the second author.
Exploring the potential of the descending-point method to measure ...
African Journals Online (AJOL)
The descending-point method of vegetation survey proved effective in measuring meaningful plant cover changes during a grazing period. No significant changes in basal cover or plant height were detected. Changes in canopy spread and canopy cover could only be used to detect changes in utilization at levels lighter ...
DEFF Research Database (Denmark)
Sørensen, Chris Khadgi; Thach, Tine; Hovmøller, Mogens Støvring
2016-01-01
The fungus Puccinia striiformis causes yellow (stripe) rust on wheat worldwide. In the present article, new methods utilizing an engineered fluid (Novec 7100) as a carrier of urediniospores were compared with commonly used inoculation methods. In general, Novec 7100 facilitated a faster and more … for the assessment of quantitative epidemiological parameters. New protocols for spray and point inoculation of P. striiformis on wheat are presented, along with the prospect for applying these in rust research and resistance breeding activities.
A point-value enhanced finite volume method based on approximate delta functions
Xuan, Li-Jun; Majdalani, Joseph
2018-02-01
We revisit the concept of an approximate delta function (ADF), introduced by Huynh (2011) [1], in the form of a finite-order polynomial that holds identical integral properties to the Dirac delta function when used in conjunction with a finite-order polynomial integrand over a finite domain. We show that the use of generic ADF polynomials can be effective at recovering and generalizing several high-order methods, including Taylor-based and nodal-based Discontinuous Galerkin methods, as well as the Correction Procedure via Reconstruction. Based on the ADF concept, we then proceed to formulate a Point-value enhanced Finite Volume (PFV) method, which stores and updates the cell-averaged values inside each element as well as the unknown quantities and, if needed, their derivatives on nodal points. The sharing of nodal information with surrounding elements reduces the number of degrees of freedom compared to other compact methods at the same order. To ensure conservation, cell-averaged values are updated using an approach identical to that adopted in the finite volume method. The updating of nodal values and their derivatives is achieved through an ADF concept that leverages all of the elements within the domain of integration that share the same nodal point. The resulting scheme is shown to be very stable at successively increasing orders. Both accuracy and stability of the PFV method are verified using a Fourier analysis and through applications to the linear wave and nonlinear Burgers' equations in one-dimensional space.
A Two-Point Newton Method Suitable for Nonconvergent Cases and with Super-Quadratic Convergence
Directory of Open Access Journals (Sweden)
Ababu Teklemariam Tiruneh
2013-01-01
Full Text Available An iterative formula based on Newton's method alone is presented for the iterative solution of equations; it ensures convergence in cases where the traditional Newton method may fail to converge to the desired root. In addition, the method has super-quadratic convergence of order 2.414 (i.e., 1 + √2). Newton's method is said to fail in certain cases, leading to oscillation, divergence to increasingly large values, or overshooting away to another root farther from the desired domain or into an invalid domain where the function may not be defined. In addition, when the derivative at the iteration point is zero, Newton's method stalls. In most of these cases, hybrids of several methods, such as the Newton, bisection and secant methods, are suggested as substitutes, and Newton's method is essentially blended with other methods or abandoned altogether. This paper argues that a solution is still possible in most of these cases by applying Newton's method alone, without resorting to other methods and with the same computational effort (two function evaluations per iteration) as the traditional Newton method. In addition, the proposed modified formula based on Newton's method has better convergence characteristics than the traditional Newton method.
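The failure modes above can be illustrated with a minimal Newton-type solver that keeps two points. This is a generic sketch with a secant fallback for zero derivatives, not the paper's exact order-2.414 formula:

```python
def two_point_newton(f, df, x0, x1=None, tol=1e-12, max_iter=100):
    """Newton iteration safeguarded with a secant step when f'(x) ~ 0.

    Illustrative two-point scheme only; the paper's formula of order
    1 + sqrt(2) is not reproduced here.
    """
    x_prev = x0 if x1 is None else x1
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        d = df(x)
        if abs(d) > 1e-14:
            x_new = x - fx / d  # ordinary Newton step
        else:
            # derivative (nearly) zero: secant step using the previous point
            fp = f(x_prev)
            x_new = x - fx * (x - x_prev) / (fx - fp)
        x_prev, x = x, x_new
    return x

root = two_point_newton(lambda t: t**2 - 2.0, lambda t: 2.0 * t, 1.0)
```

The secant fallback avoids the stall at points where the derivative vanishes, at the cost of one extra function evaluation in that (rare) branch.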
DEFF Research Database (Denmark)
Choi, Uimin; Lee, Kyo-Beum; Blaabjerg, Frede
2013-01-01
This paper proposes a method to reduce the low-frequency neutral-point voltage oscillations. The neutral-point voltage oscillations are considerably reduced by adding a time-offset to the three phase turn-on times. The proper time-offset is simply calculated considering the phase currents and dwe...
Directory of Open Access Journals (Sweden)
Takahiro Yamaguchi
2015-05-01
Full Text Available As smartphones become widespread, a variety of smartphone applications are being developed. This paper proposes a method for indoor localization (i.e., positioning) that uses only smartphones, which are general-purpose mobile terminals, as reference point devices. This method has the following features: (a) the localization system is built with smartphones whose movements are confined to respective limited areas; no fixed reference point devices are used; (b) the method does not depend on the wireless performance of smartphones and does not require information about the propagation characteristics of the radio waves sent from reference point devices; and (c) the method determines the location at the application layer, at which location information can be easily incorporated into high-level services. We have evaluated the level of localization accuracy of the proposed method by building a software emulator that modeled an underground shopping mall. We have confirmed that the determined location is within a small area in which the user can find target objects visually.
A new method to extract stable feature points based on self-generated simulation images
Long, Fei; Zhou, Bin; Ming, Delie; Tian, Jinwen
2015-10-01
Recently, image processing has attracted considerable attention in the fields of photogrammetry, medical image processing, etc. Matching two or more images of the same scene taken at different times, by different cameras, or from different viewpoints is a popular and important problem, and feature extraction plays an important part in image matching. Traditional SIFT detectors reject unstable points by eliminating low-contrast and edge-response points; the disadvantage is that the thresholds must be set manually. The main idea of this paper is to identify stable extrema with a machine learning algorithm. First, we use the ASIFT approach, coupled with illumination changes and blur, to generate multi-view simulated images, which make up the simulated-image set of the original image. Because of the way the simulated-image set is generated, the affine transformation of each generated image is known exactly; compared with the traditional matching process, which relies on the unstable RANSAC method to recover the affine transformation, this approach is more stable and accurate. Second, we calculate a stability value for each feature point from the image set and its affine transformations, and collect the feature properties of each point, such as DoG response, scale, edge-point density, etc. These form the training set, with the stability value as the dependent variable and the feature properties as the independent variables. Finally, training with Rank-SVM yields a weight vector. At run time, the feature properties of each point and the trained weight vector give a ranking score that reflects the stability value, by which the feature points are sorted. In conclusion, we compared our algorithm with the original SIFT detector; under various viewpoint changes, blurs and illuminations, the experimental results show that our algorithm is more effective.
A Survey on Methods for Reconstructing Surfaces from Unorganized Point Sets
Directory of Open Access Journals (Sweden)
Vilius Matiukas
2011-08-01
Full Text Available This paper addresses the issue of reconstructing and visualizing surfaces from unorganized point sets. These can be acquired using different techniques, such as 3D laser scanning, computerized tomography, magnetic resonance imaging and multi-camera imaging. The problem of reconstructing surfaces from unorganized point sets is common to many diverse areas, including computer graphics, computer vision, computational geometry and reverse engineering. The paper presents three alternative methods that all use variations in complementary cones to triangulate and reconstruct the tested 3D surfaces, and it evaluates and contrasts the three alternatives.
A DATA DRIVEN METHOD FOR BUILDING RECONSTRUCTION FROM LiDAR POINT CLOUDS
Directory of Open Access Journals (Sweden)
M. Sajadian
2014-10-01
Full Text Available Airborne laser scanning, commonly referred to as LiDAR, is a superior technology for three-dimensional data acquisition from the Earth's surface with high speed and density. Building reconstruction, one of the main applications of LiDAR systems, is considered in this study. For a 3D reconstruction of buildings, the building points must first be separated from other points, such as ground and vegetation. In this paper, a multi-agent strategy is proposed for simultaneous extraction and segmentation of buildings from LiDAR point clouds. Height values, the number of returned pulses, the lengths of triangles, the directions of normal vectors, and area are the five criteria utilized in this step. Next, the building edge points are detected using a new method named "Grid Erosion". A RANSAC-based technique is employed for edge line extraction, and regularization constraints are applied to achieve the final lines. Finally, by modelling the roofs and walls, the 3D building model is reconstructed. The results indicate that the proposed method can successfully extract buildings from LiDAR data and generate building models automatically. A qualitative and quantitative assessment of the proposed method is also provided.
[Method to Calculate the Yield Load of Bone Plate in Four-point Bending Test].
Jia, Xiaohang; Zhou, Jun; Ma, Jun; Wen, Yan
2015-09-01
This paper develops a calculation method to obtain the yield load P of a bone plate during the four-point bending test. The method is based on the displacement-force (δ-F) curve function f(M)(δ) obtained from the test: slopes along the curve are calculated piecewise, and the linear segment in the elastic deformation region of f(M)(δ) is located by setting a minimum slope T. The slope S is obtained by linear fitting and used to build the parallel displaced line f(L)(δ); the approximate intersection point of f(M)(δ) and f(L)(δ) is then found by linear interpolation, giving the yield load P. The method conforms to the YY/T 0342-2002 standard and is easy to implement in software. The calculation is independent of whether the test starts preloaded or unloaded, so no origin correction is needed. In addition, T is set at a fitting level guaranteed by the coefficient of determination R², so S is very close to the true value and P is obtained with high accuracy.
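The offset-line procedure described in the abstract (fit the elastic slope S, displace the line, locate the intersection by linear interpolation) can be sketched as follows. The fraction of points used for the elastic fit is an assumption of this sketch, not a value from the standard:

```python
import numpy as np

def yield_load(delta, force, offset, fit_frac=0.3):
    """Estimate the yield load from a displacement-force curve.

    Fits the elastic slope S on the initial (assumed linear) portion of
    the curve, shifts that line by `offset` along the displacement
    axis, and locates its intersection with the measured curve by
    linear interpolation.  `fit_frac` is an illustrative assumption.
    """
    delta = np.asarray(delta, dtype=float)
    force = np.asarray(force, dtype=float)
    n_fit = max(2, int(len(delta) * fit_frac))
    # elastic slope S from a least-squares fit through the early points
    S = np.polyfit(delta[:n_fit], force[:n_fit], 1)[0]
    # offset line f_L(delta) = S * (delta - offset)
    gap = force - S * (delta - offset)   # positive while curve is above line
    idx = np.where(gap <= 0)[0]
    if len(idx) == 0:
        raise ValueError("offset line never intersects the curve")
    i = idx[0]
    # linear interpolation between points i-1 and i for the crossing
    t = gap[i - 1] / (gap[i - 1] - gap[i])
    return force[i - 1] + t * (force[i] - force[i - 1])
```

On a bilinear test curve the interpolated crossing is exact, which is why the piecewise treatment of the abstract needs no origin correction.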
An Improved Method for Power-Line Reconstruction from Point Cloud Data
Directory of Open Access Journals (Sweden)
Bo Guo
2016-01-01
Full Text Available This paper presents a robust algorithm to reconstruct power lines using ALS technology. Point cloud data are automatically classified into five target classes before reconstruction. To overcome the shortcomings of traditional methods, which use only the local shape properties of a single power-line span, the distribution properties of the power-line group between two neighboring pylons and contextual information from the related pylon objects are used to improve the reconstruction results. First, the distribution properties of power-line sets are detected using a similarity detection method. Based on the probability of neighboring points belonging to the same span, a RANSAC-based algorithm is then introduced to reconstruct power lines through two important advancements: reliable initial parameter fitting and efficient candidate sample detection. Our experiments indicate that the proposed method is effective for the reconstruction of power lines from complex scenarios.
Directory of Open Access Journals (Sweden)
Wilson Rodríguez Calderón
2015-04-01
Full Text Available When we need to determine the solution of a nonlinear equation, there are two options: closed methods, which use intervals containing the root and naturally reduce their size during the iterative process, and open methods, which are an attractive option because they do not require an initial enclosing interval. In general, open methods are more efficient computationally, although they do not always converge. In this paper we present a divergence case analysis of the fixed-point iteration method applied to finding the normal depth of a rectangular channel with the Manning equation. To solve this problem, we propose two strategies (developed by the authors) that modify the iteration function, with additional formulations of the traditional method and of its convergence theorem. Although the Manning equation is solved with other methods, such as Newton's, when the fixed-point iteration method is used an interesting divergence situation arises, which can be resolved with better-than-quadratic convergence over the initial iterations. The proposed strategies have been tested in two cases; a study of divergence of square roots of real numbers was previously made by the authors for testing. Results in both cases have been successful. We present comparisons because they are important for seeing the advantage of the proposed strategies over the most representative open methods.
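For concreteness, a minimal fixed-point iteration for the normal depth of a rectangular channel under the Manning equation (SI units assumed) can be sketched as below. This particular rearrangement happens to be a contraction and converges; other rearrangements of the same equation may diverge, which is the situation the paper analyzes:

```python
import math

def normal_depth(Q, b, n, S, y0=1.0, tol=1e-10, max_iter=200):
    """Normal depth of a rectangular channel by fixed-point iteration.

    Manning (SI units): Q = (1/n) * A * R**(2/3) * sqrt(S), with
    A = b*y and R = b*y / (b + 2*y).  Rearranged as the iteration
    y = (n*Q/sqrt(S))**0.6 * (b + 2*y)**0.4 / b.
    """
    c = (n * Q / math.sqrt(S)) ** 0.6
    y = y0
    for _ in range(max_iter):
        y_new = c * (b + 2.0 * y) ** 0.4 / b
        if abs(y_new - y) < tol:
            return y_new
        y = y_new
    return y

# illustrative channel data (assumed, not from the paper)
y = normal_depth(Q=5.0, b=3.0, n=0.013, S=0.001)
```

The exponent 0.4 on the (b + 2y) term damps the update, which is why this rearrangement converges from a wide range of starting depths.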
[Online soft sensing method for freezing point of diesel fuel based on NIR spectrometry].
Wu, De-Hui
2008-07-01
To solve the problem of real-time online measurement of the freezing point of diesel fuel products, a soft sensing method based on near-infrared (NIR) spectrometry was proposed. First, the information of diesel fuel samples in the spectral region of 750-1550 nm was extracted by a spectrum analyzer, and a polynomial convolution algorithm was applied for spectrogram smoothing, baseline correction and standardization. Principal component analysis (PCA) was then used to extract the features of the NIR spectrum data sets, which not only reduced the input dimensionality but also increased sensitivity to the output. Finally, the soft sensing model for the freezing point was built using the SVR algorithm. One hundred and fifty diesel fuel samples were used as experimental materials, 100 of which served as training (calibration) samples and the rest as testing samples. Through PCA, the 401-dimensional original NIR absorption spectrum data sets were reduced to 6 dimensions. To investigate the measurement performance, the freezing points of the testing samples were estimated by four different soft sensing models: BP, SVR, PCA+BP and PCA+SVR. Experimental results show that (1) the soft sensing models using PCA to extract features are generally better than those working directly in the spectrum wavelength domain; (2) the SVR-based model outperforms its main competitor, the BP model, on the limited training data, with an error only half that of the latter; (3) the MSE between the values estimated by the presented method and the standard chemical values of the freezing point obtained by the condensing method is less than 4.2. The research suggests that the proposed method can be used for fast measurement of the freezing point of diesel fuel products by NIR spectrometry.
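The PCA-plus-regression pipeline described above can be sketched with synthetic data. Ridge regression is used here as a simple stand-in for SVR, and the data generator is invented for illustration (the real inputs are diesel NIR spectra):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for NIR spectra: 150 samples x 401 wavelengths,
# driven by a few latent factors, with a target that depends on them.
n_samples, n_wavelengths, n_latent = 150, 401, 6
latent = rng.normal(size=(n_samples, n_latent))
loadings = rng.normal(size=(n_latent, n_wavelengths))
X = latent @ loadings + 0.01 * rng.normal(size=(n_samples, n_wavelengths))
y = latent @ rng.normal(size=n_latent) + 0.01 * rng.normal(size=n_samples)

# PCA by SVD: project the 401-D spectra onto 6 principal components.
# (For simplicity the PCA is fitted on all samples; a strict pipeline
# would fit it on the training split only.)
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:6].T                      # 150 x 6 feature matrix

# Ridge regression on the scores (stand-in for the paper's SVR model)
train, test = slice(0, 100), slice(100, None)
A = np.hstack([scores[train], np.ones((100, 1))])
w = np.linalg.solve(A.T @ A + 1e-6 * np.eye(7), A.T @ y[train])
pred = np.hstack([scores[test], np.ones((50, 1))]) @ w
mse = np.mean((pred - y[test]) ** 2)
```

The dimensionality reduction (401 to 6) is the step the abstract credits with both noise suppression and improved sensitivity of the downstream regressor.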
Proximate Sources of Collective Teacher Efficacy
Adams, Curt M.; Forsyth, Patrick B.
2006-01-01
Purpose: Recent scholarship has augmented Bandura's theory underlying efficacy formation by pointing to more proximate sources of efficacy information involved in forming collective teacher efficacy. These proximate sources of efficacy information theoretically shape a teacher's perception of the teaching context, operationalizing the difficulty…
da Silva, Rodrigo; Pearce, Jonathan V.; Machin, Graham
2017-06-01
The fixed points of the International Temperature Scale of 1990 (ITS-90) are the basis of the calibration of standard platinum resistance thermometers (SPRTs). Impurities in the fixed point material at the level of parts per million can give rise to an elevation or depression of the fixed point temperature of order of millikelvins, which often represents the most significant contribution to the uncertainty of SPRT calibrations. A number of methods for correcting for the effect of impurities have been advocated, but it is becoming increasingly evident that no single method can be used in isolation. In this investigation, a suite of five aluminium fixed point cells (defined ITS-90 freezing temperature 660.323 °C) have been constructed, each cell using metal sourced from a different supplier. The five cells have very different levels and types of impurities. For each cell, chemical assays based on the glow discharge mass spectroscopy (GDMS) technique have been obtained from three separate laboratories. In addition a series of high quality, long duration freezing curves have been obtained for each cell, using three different high quality SPRTs, all measured under nominally identical conditions. The set of GDMS analyses and freezing curves were then used to compare the different proposed impurity correction methods. It was found that the most consistent corrections were obtained with a hybrid correction method based on the sum of individual estimates (SIE) and overall maximum estimate (OME), namely the SIE/Modified-OME method. Also highly consistent was the correction technique based on fitting a Scheil solidification model to the measured freezing curves, provided certain well defined constraints are applied. Importantly, the most consistent methods are those which do not depend significantly on the chemical assay.
Lee, Tao Yu; Tseng, Chi-Jen; Chiao, Chia-Ding; Chiou, Chuen-Wang; Mar, Guang-Yuan; Liu, Chun-Peng; Lin, Shao Lin; Chiang, Hung-Tin
2004-01-01
Evaluation of the severity of valvular mitral stenosis and measurement of the effective rheumatic mitral valve area by noninvasive echocardiography has been well accepted. The area is measured by the two-dimensional planimetry (PLM) method and the Doppler pressure half-time (PHT) method. Recently, the proximal isovelocity surface area (PISA) by color Doppler technique has been used as a quantitative measurement for valvular heart disease. However, this method needs more validation. The aim of this study was therefore to investigate the clinical applicability of the PISA method in the measurement of effective mitral valve area in patients with rheumatic valvular heart disease. Forty-seven patients aged from 23 to 71 years, with a mean age of 53 +/- 13 (25 male and 22 female, 15 with sinus rhythm, mean heart rate of 83 +/- 14 beats per minute, with rheumatic valvular mitral stenosis without hemodynamically significant mitral regurgitation) were included in the study. Effective mitral valve area (MVA) derived by the PISA method was calculated as follows: 2 x pi x (proximal aliasing color zone radius)^2 x aliasing velocity / peak velocity across the mitral orifice. Effective mitral valve areas measured by the three different methods (PLM, PHT, and PISA) were compared and correlated with those calculated by the "gold standard" invasive Gorlin's formula. The MVA derived from PHT, PLM, PISA and Gorlin's formula were 1.00 +/- 0.31 cm2, 0.99 +/- 0.30 cm2, 0.95 +/- 0.30 cm2 and 0.91 +/- 0.29 cm2, respectively. The correlation coefficients (r values) between PHT, PLM, PISA, and Gorlin's formula, respectively, were 0.66 (P = 0.032, SEE = 0.64), 0.67 (P = 0.25, SEE = 0.72) and 0.80 (P = 0.002, SEE = 0.53). In conclusion, the PISA method is useful clinically in the measurement of effective mitral valve area in patients with rheumatic mitral valve stenosis. The technique is relatively simple, highly feasible and accurate when compared with the PHT, PLM, and Gorlin's formula. Therefore, this
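The PISA formula quoted above is straightforward to compute. A minimal sketch follows; the optional angle-correction parameter is an addition of this sketch (commonly applied when the flow convergence zone is constrained), not part of the formula in the abstract:

```python
import math

def mva_pisa(radius_cm, aliasing_velocity, peak_velocity, angle_deg=180.0):
    """Effective mitral valve area (cm^2) by the PISA method.

    MVA = 2 * pi * r^2 * (V_aliasing / V_peak), optionally scaled by
    angle/180; with angle_deg = 180 this reduces to the abstract's
    formula (no angle correction).
    """
    return (2.0 * math.pi * radius_cm ** 2
            * (aliasing_velocity / peak_velocity)
            * (angle_deg / 180.0))
```

For example, a 1.0 cm aliasing radius at an aliasing velocity of 33 cm/s and a peak transmitral velocity of 200 cm/s (illustrative values) gives an MVA of about 1.04 cm².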
SINGLE TREE DETECTION FROM AIRBORNE LASER SCANNING DATA USING A MARKED POINT PROCESS BASED METHOD
Directory of Open Access Journals (Sweden)
J. Zhang
2013-05-01
Full Text Available Tree detection and reconstruction is of great interest in large-scale city modelling. In this paper, we present a marked point process model to detect single trees from airborne laser scanning (ALS) data. We consider single trees in the canopy height model (CHM) recovered from ALS as a realization of a point process of circles. Unlike the traditional marked point process, we sample the model in a constrained configuration space by making use of image processing techniques. A Gibbs energy is defined on the model, containing a data term, which judges the fitness of the model with respect to the data, and a prior term, which incorporates prior knowledge of object layouts. We search for the optimal configuration through a steepest gradient descent algorithm. The presented hybrid framework was tested on three forest plots, and the experiments show the effectiveness of the proposed method.
A novel point-of-use water treatment method by antimicrobial nanosilver textile material.
Liu, Hongjun; Tang, Xiaosheng; Liu, Qishan
2014-12-01
Pathogenic bacteria are one of the main causes of worldwide water-borne disease, posing a major threat to public health; hence there is an urgent need to develop cost-effective water treatment technologies. Nanomaterials in point-of-use systems have recently attracted considerable research and commercial interest, as they can overcome the drawbacks of traditional water treatment techniques. We have developed a new point-of-use water disinfection kit based on a nanosilver textile material. The silver nanoparticles were generated in situ and immobilized onto cotton textile, which was then fixed to a plastic tube to make a water disinfection kit. By soaking and stirring the kit in water, pathogenic bacteria are killed within minutes. The silver leaching from the kit was insignificant, with values … water. The nanosilver textile water disinfection kit could thus be a new, efficient and cost-effective point-of-use water treatment method for rural areas and emergency preparedness.
Estimation Methods of the Point Spread Function Axial Position: A Comparative Computational Study
Directory of Open Access Journals (Sweden)
Javier Eduardo Diaz Zamboni
2017-01-01
Full Text Available The precise knowledge of the point spread function is central to any imaging system characterization. In fluorescence microscopy, point spread function (PSF) determination has become a common and obligatory task for each new experimental device, mainly due to its strong dependence on acquisition conditions. During the last decade, algorithms have been developed for the precise calculation of the PSF, which fit model parameters that describe image formation in the microscope to experimental data. To contribute to this subject, a comparative study of three parameter estimation methods is reported, namely: I-divergence minimization (MIDIV), maximum likelihood (ML) and non-linear least squares (LSQR). They were applied to the estimation of the point source position on the optical axis, using a physical model. The methods' performance was evaluated under different conditions and noise levels using synthetic images, considering success percentage, iteration number, computation time, accuracy and precision. The main results showed that axial position estimation requires a high SNR to achieve an acceptable success level, and a higher one still to approach the estimation error lower bound. ML achieved a higher success percentage at lower SNR compared to MIDIV and LSQR with an intrinsic noise source. Only the ML and MIDIV methods reached the error lower bound, and only with data belonging to the optical axis and high SNR. Extrinsic noise sources worsened the success percentage, but, for each method studied, no difference was found between noise sources.
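As a toy illustration of the least-squares flavor of estimation compared in the study, the following fits the center of a one-dimensional Gaussian profile to noisy samples by grid search. The physical PSF model and the actual MIDIV/ML estimators of the study are not reproduced; all numbers here are invented:

```python
import numpy as np

rng = np.random.default_rng(3)

# Noisy samples of a 1-D Gaussian "PSF profile" with unknown center
z = np.linspace(-3, 3, 121)
true_pos, sigma = 0.7, 0.8
data = np.exp(-0.5 * ((z - true_pos) / sigma) ** 2) \
       + 0.02 * rng.normal(size=z.size)

# Least-squares estimate of the center: grid search over candidates
candidates = np.linspace(-2, 2, 4001)
sse = [np.sum((data - np.exp(-0.5 * ((z - c) / sigma) ** 2)) ** 2)
       for c in candidates]
est = candidates[int(np.argmin(sse))]
```

Even this toy version shows the SNR dependence the study reports: raising the noise amplitude quickly degrades the precision of the recovered position.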
A Novel Complementary Method for the Point-Scan Nondestructive Tests Based on Lamb Waves
Directory of Open Access Journals (Sweden)
Rahim Gorgin
2014-01-01
Full Text Available This study presents a novel area-scan damage identification method based on Lamb waves, which can be used as a complement to point-scan nondestructive techniques. The proposed technique is able to identify the most probable locations of damage prior to the point-scan test, which decreases the time and cost of inspection. The test-piece surface was partitioned into smaller areas, and the damage presence probability of each area was evaluated. The A0 mode of the Lamb wave was generated and collected using a mobile handmade transducer set at each area. Subsequently, a damage presence probability index (DPPI) based on the energy of the captured responses was defined for each area. The area with the highest DPPI value highlights the most probable locations of damage in the test-piece. Once these areas are found, point-scan nondestructive methods can be used to identify the damage in detail. The approach was validated by predicting the most probable locations of representative damage, including a through-thickness hole and a crack in aluminum plates. The experimental results demonstrated the high potential of the developed method for identifying the most probable locations of damage in structures.
Comparison of point-of-care-compatible lysis methods for bacteria and viruses.
Heiniger, Erin K; Buser, Joshua R; Mireles, Lillian; Zhang, Xiaohong; Ladd, Paula D; Lutz, Barry R; Yager, Paul
2016-09-01
Nucleic acid sample preparation has been an especially challenging barrier to point-of-care nucleic acid amplification tests in low-resource settings. Here we provide a head-to-head comparison of methods for lysis of, and nucleic acid release from, several pathogenic bacteria and viruses; the methods compared are adaptable to point-of-care usage in low-resource settings. Digestion with achromopeptidase, a mixture of proteases and peptidoglycan-specific hydrolases, followed by thermal deactivation in a boiling water bath, effectively released amplifiable nucleic acid from Staphylococcus aureus, Bordetella pertussis, respiratory syncytial virus, and influenza virus. Achromopeptidase was functional after dehydration and reconstitution, even after eleven months of dry storage without refrigeration. Mechanical lysis methods proved effective against a hard-to-lyse Mycobacterium species, and a miniature bead mill, the AudioLyse, is shown to be capable of releasing amplifiable DNA and RNA from this species. We conclude that point-of-care-compatible sample preparation methods for nucleic acid tests need not introduce amplification inhibitors and can provide amplification-ready lysates from a wide range of bacterial and viral pathogens.
Point-source localization in blurred images by a frequency-domain eigenvector-based method.
Gunsay, M; Jeffs, B D
1995-01-01
We address the problem of resolving and localizing blurred point sources in intensity images. Telescopic star-field images blurred by atmospheric turbulence or optical aberrations are typical examples of this class of images. A new approach to image restoration is introduced, which is a generalization of 2-D sensor array processing techniques originating from the field of direction-of-arrival (DOA) estimation. It is shown that, in the frequency domain, blurred point source images can be modeled with a structure analogous to the response of linear sensor arrays to coherent signal sources. Thus, the problem may be cast in the form of DOA estimation, and eigenvector-based subspace decomposition algorithms, such as MUSIC, may be adapted to search for these point sources. For deterministic point images the signal subspace is degenerate, with rank one, so rank enhancement techniques are required before MUSIC or related algorithms may be used. The presence of blur prohibits the use of existing rank enhancement methods. A generalized array smoothing method is introduced for rank enhancement in the presence of blur, and to regularize the ill-posed nature of the image restoration. The new algorithm achieves inter-pixel super-resolution and is computationally efficient. Examples of star image deblurring using the algorithm are presented.
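The underlying sensor-array technique that the paper generalizes is classic MUSIC. Below is a minimal one-dimensional DOA sketch (uniform linear array, source count assumed known); the image-domain adaptation and rank-enhancement steps of the paper are not implemented here:

```python
import numpy as np

M, snapshots = 8, 400                    # sensors (half-wavelength spacing)
true_angles = np.deg2rad([-20.0, 25.0])  # illustrative source directions

def steering(theta):
    """Array response of the uniform linear array for angle theta."""
    return np.exp(1j * np.pi * np.arange(M) * np.sin(theta))

rng = np.random.default_rng(1)
A = np.column_stack([steering(t) for t in true_angles])
S = rng.normal(size=(2, snapshots)) + 1j * rng.normal(size=(2, snapshots))
noise = 0.01 * (rng.normal(size=(M, snapshots))
                + 1j * rng.normal(size=(M, snapshots)))
X = A @ S + noise
R = X @ X.conj().T / snapshots           # sample covariance matrix

# Noise subspace: eigenvectors of the M-2 smallest eigenvalues
w, V = np.linalg.eigh(R)
En = V[:, :M - 2]

# MUSIC pseudospectrum: peaks where the steering vector is orthogonal
# to the noise subspace
grid = np.deg2rad(np.linspace(-90.0, 90.0, 1801))
p = np.array([1.0 / np.sum(np.abs(En.conj().T @ steering(t)) ** 2)
              for t in grid])
peaks = [i for i in range(1, len(p) - 1) if p[i] > p[i - 1] and p[i] > p[i + 1]]
top2 = sorted(sorted(peaks, key=lambda i: p[i])[-2:])
estimated = np.rad2deg(grid[top2])       # close to [-20, 25]
```

The paper's contribution is making this subspace search usable when the "array response" is a blurred point source and the signal subspace is rank-deficient.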
Salem Omar, Alaa Mabrouk; Tanaka, Hidekazu; AbdelDayem, Tarek Khairy; Sadek, Ayman S; Raslaan, Halah; Al-Sherbiny, Ashraf; Yamawaki, Kohei; Ryo, Keiko; Fukuda, Yuko; Norisada, Kazuko; Tatsumi, Kazuhiro; Onishi, Tetsuari; Matsumoto, Kensuke; Kawai, Hiroya; Hirata, Ken-ichi
2011-04-01
The aim of this study was to test the hypothesis that, unlike calculation of the mitral valve area (MVA) with the pressure half-time method (PHT), the proximal isovelocity surface area method (PISA) is not affected by changes in net atrioventricular compliance (C(n)). We studied 51 patients with mitral stenosis (MS) from two centres. MVA was assessed with the PISA (MVA(PISA)), PHT (MVA(PHT)), and planimetry (MVA(PLN), serving as the gold standard) methods. C(n) was calculated with a previously validated equation using 2D echocardiography. MVA(PISA) closely correlated with MVA(PLN) (r = 0.96, P < 0.001), and no significant correlation was found between the agreement of MVA(PISA) with MVA(PLN) and C(n) (r = 0.1, P = 0.388). MVA calculated with both the PISA and PHT methods correlated well with MVA calculated with the planimetry method. However, PISA rather than PHT is recommended for patients with MS and extreme C(n) values because PISA, unlike PHT, is not affected by changes in C(n).
Directory of Open Access Journals (Sweden)
Omer Yiginer
2011-08-01
Full Text Available Aim To simplify the proximal isovelocity surface area (PISA) method for mitral valve area (MVA) calculation so that it requires neither a calculator nor angle correction, and to compare values estimated using this novel method with the values obtained by the conventional PISA, planimetry and pressure half-time (PHT) methods. Methods We evaluated patients with a wide range of mitral stenosis (MS) severity. The MVA was measured by the PHT (MVAPHT), planimetry (MVApl), conventional PISA (MVAC-PISA) and the novel simple PISA (MVAS-PISA) methods. Simple PISA was applied by dividing the peak mitral inflow velocity by four and measuring the radius after adjusting the aliasing velocity to this value; the square of the radius then gives MVAS-PISA. Results Twenty patients were enrolled in the study. Peak and mean pressure gradients were 20 ± 6 mmHg and 10 ± 4 mmHg, respectively. The average values of MVApl, MVAPHT, MVAC-PISA and MVAS-PISA were 1.54 ± 0.41, 1.65 ± 0.40, 1.58 ± 0.42 and 1.57 ± 0.44 cm2, respectively. MVAS-PISA had a strong correlation with MVAC-PISA, MVApl and MVAPHT, and there was no significant difference between simple PISA and the other methods. The agreement between the planimetry and simple PISA methods for detecting severe mitral stenosis (MVA < 1.5 cm2), determined by ROC analysis, was very good, with a sensitivity of 100% and a specificity of 92%. Conclusion Simple PISA is a user-friendly method which saves time and gives simple and correct results. If the diagnostic power of the technique is confirmed by more comprehensive studies, it could supersede the conventional PISA method.
A simple method for determining the critical point of the soil water retention curve
DEFF Research Database (Denmark)
Chen, Chong; Hu, Kelin; Ren, Tusheng
2017-01-01
The transition point between capillary water and adsorbed water, which is the critical point Pc [defined by the critical matric potential (ψc) and the critical water content (θc)] of the soil water retention curve (SWRC), demarcates the energy and water content region where flow is dominated by capillarity or by liquid film flow. Accurate estimation of Pc is crucial for modeling water movement in the vadose zone. By modeling the dry-end (matric potential < −10^4.2 cm H2O) sections of the SWRC using the models of Campbell and Shiozawa and of van Genuchten, a fixed tangent line method was developed to estimate Pc as an alternative to the commonly used flexible tangent line method. The relationships between Pc and particle-size distribution and specific surface area (SSA) were analyzed. For 27 soils with various textures, the mean RMSE of water content from…
Cassereau, Didier; Nauleau, Pierre; Bendjoudi, Aniss; Minonzio, Jean-Gabriel; Laugier, Pascal; Bossy, Emmanuel; Grimal, Quentin
2014-07-01
The development of novel quantitative ultrasound (QUS) techniques to measure the hip is critically dependent on the possibility to simulate the ultrasound propagation. One specificity of hip QUS is that ultrasound propagates through a large thickness of soft tissue, which can be modeled by a homogeneous fluid in a first approach. Finite difference time domain (FDTD) algorithms have been widely used to simulate QUS measurements, but they are not adapted to simulating ultrasonic propagation over long distances in homogeneous media. In this paper, a hybrid numerical method is presented to simulate hip QUS measurements. A two-dimensional FDTD simulation in the vicinity of the bone is coupled to the semi-analytic calculation of the Rayleigh integral to compute the wave propagation between the probe and the bone. The method is used to simulate a setup dedicated to the measurement of circumferential guided waves in the cortical compartment of the femoral neck. The proposed approach is validated by comparison with a full FDTD simulation and with an experiment on a bone phantom. For a realistic QUS configuration, the computation time is estimated to be about sixty times shorter with the hybrid method than with a full FDTD approach. Copyright © 2013 Elsevier B.V. All rights reserved.
An ECL-PCR method for quantitative detection of point mutation
Zhu, Debin; Xing, Da; Shen, Xingyan; Chen, Qun; Liu, Jinfeng
2005-04-01
A new method for identifying point mutations is proposed. Polymerase chain reaction (PCR) amplification of a sequence from genomic DNA is followed by digestion with a restriction enzyme that cuts only the wild-type amplicon containing its recognition site. Reaction products are detected by an electrochemiluminescence (ECL) assay after adsorption of the resulting DNA duplexes onto the solid phase. One strand of the PCR products carries biotin, which binds to a streptavidin-coated microbead for sample selection. The other strand carries Ru(bpy)3(2+) (TBR), which reacts with tripropylamine (TPA) to emit light for ECL detection. The method was applied to detect a specific point mutation in the H-ras oncogene in the T24 cell line. The results show that the detection limit for the H-ras amplicon is 100 fmol and that the linear range spans more than 3 orders of magnitude, thus making quantitative analysis possible. The genotype can be clearly discriminated. These results suggest that ECL-PCR is a feasible quantitative method for safe, sensitive and rapid detection of point mutations in human genes.
Energy Technology Data Exchange (ETDEWEB)
Xia, Donghui [State Key Laboratory of Advanced Electromagnetic Engineering and Technology, Huazhong University of Science and Technology, 430074 Wuhan (China); Huang, Mei [Southwestern Institute of Physics, 610041 Chengdu (China); Wang, Zhijiang, E-mail: wangzj@hust.edu.cn [State Key Laboratory of Advanced Electromagnetic Engineering and Technology, Huazhong University of Science and Technology, 430074 Wuhan (China); Zhang, Feng [Southwestern Institute of Physics, 610041 Chengdu (China); Zhuang, Ge [State Key Laboratory of Advanced Electromagnetic Engineering and Technology, Huazhong University of Science and Technology, 430074 Wuhan (China)
2016-10-15
Highlights: • The integral staggered point-matching method for the design of polarizers in ECH systems is presented. • The validity of the integral staggered point-matching method is checked by numerical calculations. • Two polarizers are designed with the integral staggered point-matching method and experimental results are given. - Abstract: Reflective diffraction gratings are widely used for polarization control in high power electron cyclotron heating (ECH) systems. This paper presents a method, which we call "the integral staggered point-matching method", for the design of reflective diffraction gratings. The method is based on the integral point-matching method, but it effectively removes that method's convergence problems and tedious calculations, making it easier for a beginner to use. A code has been developed based on this method. The calculation results of the integral staggered point-matching method are compared with the integral point-matching method, the coordinate transformation method and low power measurement results. The comparison indicates that the integral staggered point-matching method can be used as an alternative for the design of reflective diffraction gratings in electron cyclotron heating systems.
A Direct Maximum Power Point Tracking Method for Single-Phase Grid Connected PV Inverters
DEFF Research Database (Denmark)
EL Aamri, Faicel; Maker, Hattab; Sera, Dezso
2018-01-01
A direct maximum power point tracking (MPPT) method for PV systems is proposed in this work. The method addresses two of the main drawbacks of the Perturb and Observe (P&O) MPPT: (i) the tradeoff between tracking speed and steady-state oscillations, and (ii) its poor effectiveness in dynamic conditions, especially at low irradiance, when signal measurements become more sensitive to noise. The proposed MPPT is designed for single-phase single-stage grid-connected PV inverters and is based on estimating the instantaneous PV power and voltage ripples using second…
Thyagarajan David; Haridas Samarth; Jones Denise; Dent Colin; Evans Richard; Williams Rhys
2009-01-01
Aim: To assess the functional outcome following internal fixation with the PHILOS (proximal humeral interlocking system) plate for displaced proximal humeral fractures. Patients and Methods: We reviewed 30 consecutive patients treated surgically with the proximal humeral locking plate for a displaced proximal humeral fracture. Functional outcome was determined using the American Shoulder and Elbow Surgeons (ASES) score and the Constant-Murley score. Results: The average age of the patients was 58 years...
The research of motion in a neighborhood of collinear libration point by conservative methods
Shmyrov, A.; Shmyrov, V.; Shymanchuk, D.
2017-10-01
In this paper we study orbital motion described by equations in Hamiltonian form. The shift map along a trajectory of the motion is canonical, which makes it possible to apply conservative methods; examples of the application of such methods to problems of celestial mechanics are given. The first-order approximation of the generating function of the shift map along the trajectory is constructed for uncontrolled motion in a neighborhood of a collinear libration point of the Sun-Earth system. This approach is also applied to controlled motion with a special kind of control that preserves the Hamiltonian form of the equations of motion. The form of the iterative schemes for numerical modeling of the motion is given. For a fixed number of iterations, the accuracy of the presented numerical method is estimated in comparison with the fourth-order Runge-Kutta method. An analytical representation of the generating function up to second-order terms in the time increment is given.
Optimization of Control Points Number at Coordinate Measurements based on the Monte-Carlo Method
Korolev, A. A.; Kochetkov, A. V.; Zakharov, O. V.
2018-01-01
Improving the quality of products increases the requirements for the accuracy of the dimensions and shape of workpiece surfaces. This, in turn, raises the requirements for the accuracy and productivity of workpiece measurement. Coordinate measuring machines are currently the most effective measuring tool for such problems. The article proposes a method for optimizing the number of control points using Monte Carlo simulation. Based on the measurement of a small sample from batches of workpieces, statistical modeling is performed, which allows one to obtain interval estimates of the measurement error. The approach is demonstrated by examples of applications for flatness, cylindricity and sphericity. Four options of uniform and non-uniform arrangement of control points are considered and compared. It is revealed that as the number of control points decreases, the arithmetic mean of the measured deviation decreases, while the standard deviation of the measurement error and the probability of a measurement α-error increase. In general, it is established that the number of control points can be reduced severalfold while maintaining the required measurement accuracy.
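The effect reported above (fewer control points give a smaller mean estimate and a larger spread) can be reproduced with a minimal Monte Carlo sketch. This is not the authors' procedure; it simply samples a peak-to-valley flatness deviation from Gaussian form error at different numbers of control points:

```python
import random

def flatness_estimate(deviations):
    """Peak-to-valley flatness deviation from sampled control points."""
    return max(deviations) - min(deviations)

def simulate(n_points, n_trials=2000, sigma=1.0, seed=1):
    """Monte Carlo distribution of the flatness estimate for n_points
    control points drawn from Gaussian form error of spread sigma."""
    rng = random.Random(seed)
    estimates = [flatness_estimate([rng.gauss(0.0, sigma)
                                    for _ in range(n_points)])
                 for _ in range(n_trials)]
    mean = sum(estimates) / n_trials
    sd = (sum((e - mean) ** 2 for e in estimates) / n_trials) ** 0.5
    return mean, sd

mean_few, sd_few = simulate(5)     # sparse control points
mean_many, sd_many = simulate(50)  # dense control points
```

With fewer points, the sampled range systematically underestimates the true form deviation (smaller mean) and fluctuates more from part to part (larger standard deviation), which is exactly the trend the abstract reports.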
Generalized four-point characterization method using capacitive and ohmic contacts.
Kim, Brian S; Zhou, Wang; Shah, Yash D; Zhou, Chuanle; Işık, N; Grayson, M
2012-02-01
In this paper, a four-point characterization method is developed for samples that have either capacitive or ohmic contacts. When capacitive contacts are used, capacitive current and voltage dividers result in a capacitive scaling factor not present in four-point measurements with only ohmic contacts. From a circuit equivalent of the complete measurement system, one can determine both the measurement frequency band and the capacitive scaling factor for various four-point characterization configurations. The technique is first demonstrated with a discrete-element four-point test device and then with a capacitively and ohmically contacted Hall bar sample over a wide frequency range (1 Hz-100 kHz) using lock-in measurement techniques. In all cases, the data fit well to a circuit simulation of the entire measurement system, and the best results are achieved with large-area capacitive contacts and a high-input-impedance preamplifier stage. An undesirable asymmetry offset in the measurement signal, which can arise due to asymmetric voltage contacts, is also described.
King, Nathan D.; Ruuth, Steven J.
2017-05-01
Maps from a source manifold M to a target manifold N appear in liquid crystals, color image enhancement, texture mapping, brain mapping, and many other areas. A numerical framework to solve variational problems and partial differential equations (PDEs) that map between manifolds is introduced within this paper. Our approach, the closest point method for manifold mapping, reduces the problem of solving a constrained PDE between manifolds M and N to the simpler problems of solving a PDE on M and projecting to the closest points on N. In our approach, an embedding PDE is formulated in the embedding space using closest point representations of M and N. This enables the use of standard Cartesian numerics for general manifolds that are open or closed, with or without orientation, and of any codimension. An algorithm is presented for the important example of harmonic maps and generalized to a broader class of PDEs, which includes p-harmonic maps. Improved efficiency and robustness are observed in convergence studies relative to the level set embedding methods. Harmonic and p-harmonic maps are computed for a variety of numerical examples. In these examples, we denoise texture maps, diffuse random maps between general manifolds, and enhance color images.
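A minimal sketch of the closest point idea, for the simplest possible target: harmonic map heat flow from S¹ to S¹, alternating a standard Cartesian diffusion step in the embedding space R² with closest-point projection back onto the unit circle. The grid size, time step and initial perturbation are illustrative assumptions, not values from the paper:

```python
import math

def closest_point_circle(p):
    """Closest-point projection of p in R^2 onto the unit circle S^1."""
    n = math.hypot(p[0], p[1])
    return (p[0] / n, p[1] / n)

def dirichlet_energy(u):
    """Discrete Dirichlet energy of a closed polyline of mapped points."""
    m = len(u)
    return sum((u[i][0] - u[(i + 1) % m][0]) ** 2 +
               (u[i][1] - u[(i + 1) % m][1]) ** 2 for i in range(m))

# initial map S^1 -> S^1: a degree-1 wrap with a phase perturbation
m = 100
u = [(math.cos(2 * math.pi * i / m + 0.3 * math.sin(10 * math.pi * i / m)),
      math.sin(2 * math.pi * i / m + 0.3 * math.sin(10 * math.pi * i / m)))
     for i in range(m)]

e_init = dirichlet_energy(u)
dt = 0.2
for _ in range(200):
    # diffusion (heat-flow) step using plain Cartesian finite differences ...
    v = [(u[i][0] + dt * (u[i - 1][0] - 2 * u[i][0] + u[(i + 1) % m][0]),
          u[i][1] + dt * (u[i - 1][1] - 2 * u[i][1] + u[(i + 1) % m][1]))
         for i in range(m)]
    # ... followed by re-projection to the closest point on the target manifold
    u = [closest_point_circle(p) for p in v]
e_final = dirichlet_energy(u)
```

The perturbation diffuses away and the map relaxes toward the uniform (harmonic) degree-1 wrap; the constrained PDE never has to be discretized on the manifold itself, only the projection encodes the target geometry.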
Analysis of tree stand horizontal structure using random point field methods
Directory of Open Access Journals (Sweden)
O. P. Sekretenko
2015-06-01
This paper uses a model-based approach to analyze the horizontal structure of forest stands. The main types of random point field models and the statistical procedures that can be used to analyze spatial patterns of trees in uneven- and even-aged stands are described. We show how modern methods of spatial statistics can be used to address one of the objectives of forestry: to clarify the laws of natural thinning of a forest stand and the corresponding changes in its spatial structure over time. Studying natural forest thinning, we describe the consecutive stages of modeling: selection of an appropriate parametric model, parameter estimation and generation of point patterns in accordance with the selected model, selection of statistical functions to describe the horizontal structure of forest stands, and testing of statistical hypotheses. We show the capabilities of a specialized software package, spatstat, which is designed for the tasks of spatial statistics and provides software support for modern methods of spatial data analysis. We show that a model of stand thinning that does not consider inter-tree interaction can reproduce the size distribution of the trees properly, but the spatial pattern of the modeled stand is not fully consistent with observed data. Using data from three even-aged pine stands of 25, 55 and 90 years of age, we demonstrate that spatial point process models are useful for combining measurements from forest stands of different ages to study natural stand thinning.
Iwakura, Katsuomi; Ito, Hiroshi; Kawano, Shigeo; Okamura, Atsushi; Kurotobi, Toshiya; Date, Motoo; Inoue, Koichi; Fujii, Kenshi
2006-06-01
Effective regurgitant orifice area is a useful index of the severity of mitral regurgitation (MR). The calculation of regurgitant orifice area using the proximal isovelocity surface area (PISA) method has some technical limitations. Three-dimensional reconstruction of the MR jet was performed using the Live 3D system on a Sonos 7500 to measure regurgitant orifice area directly in 109 cases of MR. Regurgitant orifice area was also measured by quantitative 2-dimensional echocardiography and by the PISA method. To analyze the shape of the regurgitant orifice, the ratio of the long axis to the short axis of the orifice (the L/S ratio) was calculated. Regurgitant orifice area on 3-dimensional echocardiography showed an almost identical correlation with that obtained by quantitative echocardiography (r = 0.91) and with that obtained by the PISA method (r = 0.93). Regurgitant orifice area on 3-dimensional echocardiography was significantly larger than that obtained using the PISA method in the whole study group and in the 62 cases of MR with L/S ratios >1.5, whereas the values were almost identical in cases of MR with L/S ratios <1.5. The PISA method also underestimated the regurgitant orifice area obtained by quantitative echocardiography in cases of MR with L/S ratios >1.5. Three-dimensional echocardiography provided robust values independent of the eccentricity of the MR jet or of cardiac rhythm. In conclusion, direct measurement of the regurgitant orifice area of MR with 3-dimensional Doppler echocardiography could be a promising method to overcome the limitations of the PISA method, especially in cases of MR with elliptic orifice shapes.
3D registration method based on scattered point cloud from B-model ultrasound image
Hu, Lei; Xu, Xiaojun; Wang, Lifeng; Guo, Na; Xie, Feng
2017-01-01
The paper proposes a method for registering a 3D point cloud of the bone tissue surface, extracted from B-mode ultrasound images, to a CT model. B-mode ultrasound is used to acquire two-dimensional images of the femur. A binocular stereo-vision tracker is used to obtain the spatial position and orientation of an optical positioning device fixed on the ultrasound probe; combining the two kinds of data generates a 3D point cloud of the bone surface. The pixel coordinates of the bone surface are obtained automatically from the ultrasound images using an improved local phase symmetry (PS) method, and the mapping between pixel coordinates in the ultrasound image and 3D space is obtained through a series of calibration procedures. To evaluate the registration, six markers were implanted in a complete fresh pig femur, and the coordinates of the markers were measured in two ways: first with measuring tools in a common coordinate system, and second in the CT model registered to the 3D point cloud with the ICP registration algorithm, in the same coordinate system. Ten registration experiments were carried out in the same way, and error results were obtained by comparing the two sets of marker coordinates: the minimum error is 1.34 mm, the maximum error is 3.22 mm, and the average error is 2.52 mm; the ICP registration algorithm itself yields an average error of 0.89 mm with a standard deviation of 0.62 mm. This evaluation standard of registration accuracy differs from the average error reported by the ICP registration algorithm and directly reflects the error introduced by the clinician's operation. With reference to the accuracy requirements of different orthopedic operations, the method can be applied to bone reduction and anterior cruciate ligament surgery.
Energy Technology Data Exchange (ETDEWEB)
Tumelero, Fernanda, E-mail: fernanda.tumelero@yahoo.com.br [Universidade Federal do Rio Grande do Sul (UFRGS), Porto Alegre, RS (Brazil). Programa de Pos-Graduacao em Engenharia Mecanica; Petersen, Claudio Z.; Goncalves, Glenio A.; Lazzari, Luana, E-mail: claudiopeteren@yahoo.com.br, E-mail: gleniogoncalves@yahoo.com.br, E-mail: luana-lazzari@hotmail.com [Universidade Federal de Pelotas (DME/UFPEL), Capao do Leao, RS (Brazil). Instituto de Fisica e Matematica
2015-07-01
In this work, we present a solution of the neutron point kinetics equations with temperature feedback effects applying the Polynomial Approach Method. For the solution, we consider one and six groups of delayed neutron precursors with temperature feedback effects and constant reactivity. The main idea is to expand the neutron density, the delayed neutron precursor concentrations and the temperature as power series, considering the reactivity as an arbitrary function of time in a relatively short time interval around an ordinary point. In the first interval, the initial conditions of the problem are applied, and analytic continuation is used to determine the solutions of the subsequent intervals. With the Polynomial Approach Method it is possible to overcome the stiffness of the equations. We vary the time step size of the Polynomial Approach Method and analyze the precision and computational time, and we compare linear, quadratic and cubic approximations of the power series. The neutron density and temperature obtained by numerical simulations with the linear approximation are compared with results in the literature. (author)
Beam-pointing error compensation method of phased array radar seeker with phantom-bit technology
Directory of Open Access Journals (Sweden)
Qiuqiu WEN
2017-06-01
A phased array radar seeker (PARS) must be able to effectively decouple body motion and accurately extract the line-of-sight (LOS) rate for target missile tracking. In this study, a real-time two-channel beam pointing error (BPE) compensation method for LOS rate extraction with PARS is designed. The discrete beam motion principle of PARS is analyzed, and a mathematical model of beam scanning control is established. Based on the principle of the antenna element phase shift, both the antenna element phase-shift law and the causes of beam-pointing error under phantom-bit conditions are analyzed, and the effect of the BPE caused by phantom-bit technology (PBT) on the extraction accuracy of the LOS rate is examined. A compensation method is given, which includes coordinate transforms, beam angle margin compensation, and detector dislocation angle calculation. With this method, the beam angle margin in the pitch and yaw directions is calculated to reduce the effect of missile body disturbance, and LOS rate extraction precision is improved by compensating for the detector dislocation angle. The simulation results validate the proposed method.
Iterative method to compute the Fermat points and Fermat distances of multiquarks
Energy Technology Data Exchange (ETDEWEB)
Bicudo, P. [CFTP, Departamento de Fisica, Instituto Superior Tecnico, Av. Rovisco Pais, 1049-001 Lisboa (Portugal)], E-mail: bicudo@ist.utl.pt; Cardoso, M. [CFTP, Departamento de Fisica, Instituto Superior Tecnico, Av. Rovisco Pais, 1049-001 Lisboa (Portugal)
2009-04-13
The multiquark confining potential is proportional to the total length of the fundamental strings linking the quarks and antiquarks. We address the computation of the total string length and of the Fermat points where the different strings meet. For a meson the length is trivially the quark-antiquark distance. For a baryon the problem was solved geometrically from the onset by Fermat and by Torricelli; the Fermat point can be determined with just a ruler and a compass, and we briefly review this construction. However, we also show that for tetraquarks, pentaquarks, hexaquarks, etc., the geometrical solution is much more complicated. Here we provide an iterative method that converges quickly to the correct Fermat points and total string lengths, relevant for the multiquark potentials.
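For the baryon (single-junction) case, the iterative idea can be sketched with a Weiszfeld-type fixed-point update, which converges to the Fermat point of three points. This is an illustration, not the authors' exact multiquark scheme, which handles several junctions with a fixed string topology:

```python
import math

def weiszfeld(points, iters=200, eps=1e-12):
    """Iteratively compute the Fermat point (geometric median) of 2D points:
    x <- sum(p_i / d_i) / sum(1 / d_i), starting from the centroid."""
    x = [sum(p[0] for p in points) / len(points),
         sum(p[1] for p in points) / len(points)]
    for _ in range(iters):
        wsum, nx = 0.0, [0.0, 0.0]
        for p in points:
            d = math.hypot(x[0] - p[0], x[1] - p[1])
            if d < eps:          # landed on a vertex: stop
                return x
            w = 1.0 / d
            wsum += w
            nx[0] += w * p[0]
            nx[1] += w * p[1]
        x = [nx[0] / wsum, nx[1] / wsum]
    return x

def total_string_length(points, junction):
    """Total length of strings joining each (anti)quark to the junction."""
    return sum(math.hypot(p[0] - junction[0], p[1] - junction[1])
               for p in points)

# equilateral "baryon": the Fermat point is the centroid by symmetry
tri = [(0.0, 0.0), (1.0, 0.0), (0.5, math.sqrt(3) / 2)]
f = weiszfeld(tri)
L = total_string_length(tri, f)
```

For the equilateral triangle of unit side, the construction is exact: the Fermat point is the centroid and the total string length is √3.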
Energy Technology Data Exchange (ETDEWEB)
Tumelero, Fernanda; Petersen, Claudio Zen; Goncalves, Glenio Aguiar [Universidade Federal de Pelotas, Capao do Leao, RS (Brazil). Programa de Pos Graduacao em Modelagem Matematica; Schramm, Marcelo [Universidade Federal do Rio Grande do Sul, Porto Alegre, RS (Brazil). Programa de Pos-Graduacao em Engenharia Mecanica
2016-12-15
In this work, we report a solution of the neutron point kinetics equations obtained with the Polynomial Approach Method. The main idea is to expand the neutron density and the delayed neutron precursor concentrations as power series, considering the reactivity as an arbitrary function of time in a relatively short time interval around an ordinary point. In the first interval, the initial conditions are applied, and analytic continuation is used to determine the solutions of the subsequent intervals. A genuine error control is developed based on an analogy with the remainder theorem. For illustration, we also report simulations for different approximation types (linear, quadratic and cubic). The results obtained by numerical simulation with the linear approximation are compared with results in the literature.
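A minimal sketch of the power-series stepping for one delayed-neutron group with constant reactivity (parameters are illustrative assumptions, not the paper's cases): derivatives are generated recursively from the kinetics equations, the Taylor series is summed over a short interval, and analytic continuation restarts the series at each step. The result is checked against the exact solution of the underlying 2×2 linear system:

```python
import math

def polynomial_step(n0, c0, h, order, rho, beta, lam, Lam):
    """One power-series (Taylor) step for one-group point kinetics with
    constant reactivity: derivatives are generated recursively, then summed."""
    dn, dc = [n0], [c0]
    for k in range(order):
        dn.append((rho - beta) / Lam * dn[k] + lam * dc[k])
        dc.append(beta / Lam * dn[k] - lam * dc[k])
    n = sum(dn[k] * h ** k / math.factorial(k) for k in range(order + 1))
    c = sum(dc[k] * h ** k / math.factorial(k) for k in range(order + 1))
    return n, c

# illustrative parameters: step reactivity insertion, equilibrium start
rho, beta, lam, Lam = 0.003, 0.0065, 0.08, 1.0e-4
prec0 = beta / (lam * Lam)              # equilibrium precursors for n0 = 1
n, c = 1.0, prec0
h, order, steps = 0.001, 4, 1000        # interval-by-interval continuation
for _ in range(steps):
    n, c = polynomial_step(n, c, h, order, rho, beta, lam, Lam)

# exact solution of the 2x2 linear system, for comparison
a = (rho - beta) / Lam
tr, det = a - lam, -lam * rho / Lam
disc = math.sqrt(tr * tr - 4.0 * det)
w1, w2 = (tr + disc) / 2.0, (tr - disc) / 2.0
s1, s2 = (w1 - a) / lam, (w2 - a) / lam      # eigenvectors (1, s)
a1 = (prec0 - s2) / (s1 - s2)
n_exact = a1 * math.exp(w1 * steps * h) + (1.0 - a1) * math.exp(w2 * steps * h)
```

Despite the stiff prompt mode (eigenvalue near −35 s⁻¹ here), the short-interval series remains stable and tracks the exact delayed-supercritical growth closely.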
A new method for retrieving silver points and separated instruments from root canals.
Suter, B
1998-06-01
A new method for the removal of metallic canal obstructions is presented. After gaining access to the coronal end of the separated instrument or silver point, a circular groove is prepared around it using ultrasonic tips. A short piece of fine stainless-steel tubing can then be pushed over the exposed end of the object. A Hedström file is advanced with a clockwise turning motion through the tube so that it wedges between the tube and the end of the object. This produces a good interlock between the separated instrument or silver point, the tube, and the Hedström file. The three connected objects can then be removed coronally using relatively high forces. This technique may be more efficient than the endo-extractor technique, which uses a tube and cyanoacrylate.
Directory of Open Access Journals (Sweden)
A. Kandil
2016-10-01
A multiset is a collection of objects in which repetition of elements is essential. This paper is an attempt to explore the theoretical aspects of multisets by extending the notions of compactness, proximity relation and proximal neighborhood to the multiset context. Examples of new multiset topologies, open multiset covers, compact multisets and many identities involving the concept of a multiset are introduced. Further, illustrative examples of multiset proximity relations are obtained. A multiset topology induced by a multiset proximity relation on a multiset M is presented. The concept of a multiset δ-neighborhood in a multiset proximity space, which furnishes an alternative approach to the study of multiset proximity spaces, is also introduced. Finally, some results on this new approach are obtained; one of the most important is that every T4 multiset space is semi-compatible with the multiset proximity relation δ on M (Theorem 5.10).
Directory of Open Access Journals (Sweden)
Urriza I
2010-01-01
This paper presents a word length selection method for the implementation of digital controllers in both fixed-point and floating-point hardware on FPGAs. The method uses the new types defined in the VHDL-2008 fixed-point and floating-point packages. These packages allow customizing the word length of fixed- and floating-point representations and shorten the design cycle by simplifying the design of arithmetic operations. The method performs bit-true simulations to determine the word length needed to represent the constant coefficients and the internal signals of the digital controller while maintaining the control system specifications. A mixed-signal simulation tool is used to simulate the closed-loop system as a whole in order to analyze the impact of quantization effects and loop delays on control system performance. The method is applied to the implementation of a digital controller for a switching power converter. The digital circuit is implemented on an FPGA, and the simulations are experimentally verified.
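The bit-true idea can be sketched outside VHDL: quantize the controller coefficients and internal signals to a given number of fractional bits and compare against the double-precision reference to decide how many bits preserve the specification. The first-order low-pass filter and the bit widths below are illustrative assumptions, not the paper's controller:

```python
def quantize(x, frac_bits):
    """Round to the nearest fixed-point value with the given number of
    fractional bits (Q-format, round-to-nearest)."""
    scale = 1 << frac_bits
    return round(x * scale) / scale

def step_response(a, n_steps=200, frac_bits=None):
    """First-order IIR low-pass y[k] = a*y[k-1] + (1-a)*x[k] with a unit
    step input. If frac_bits is given, coefficients and signals are
    quantized (a bit-true simulation); otherwise double precision is used."""
    q = (lambda v: quantize(v, frac_bits)) if frac_bits is not None \
        else (lambda v: v)
    aq, bq = q(a), q(1.0 - a)
    y, out = 0.0, []
    for _ in range(n_steps):
        y = q(aq * y + bq * 1.0)   # quantize the accumulator each cycle
        out.append(y)
    return out

ref = step_response(0.9)  # double-precision reference
err8 = max(abs(r - s) for r, s in zip(ref, step_response(0.9, frac_bits=8)))
err16 = max(abs(r - s) for r, s in zip(ref, step_response(0.9, frac_bits=16)))
```

Sweeping `frac_bits` and checking the worst-case deviation against the system tolerance is the software analogue of the bit-true word-length search the abstract describes.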
Lague, D.
2014-12-01
High Resolution Topographic (HRT) datasets are predominantly stored and analyzed as 2D raster grids of elevations (i.e., Digital Elevation Models). Raster grid processing is common in GIS software and benefits from a large library of fast algorithms dedicated to geometrical analysis, drainage network computation and topographic change measurement. Yet all instruments or methods currently generating HRT datasets (e.g., ALS, TLS, SfM, stereo satellite imagery) natively output 3D unstructured point clouds that are (i) non-regularly sampled, (ii) incomplete (e.g., submerged parts of river channels are rarely measured), and (iii) include 3D elements (e.g., vegetation, vertical features such as river banks or cliffs) that cannot be accurately described in a DEM. Interpolating the raw point cloud onto a 2D grid generally results in a loss of position accuracy and spatial resolution, and in more or less controlled interpolation. Here I demonstrate how studying earth surface topography and processes directly on native 3D point cloud datasets offers several advantages over raster-based methods: point cloud methods preserve the accuracy of the original data, can better handle the evaluation of uncertainty associated with topographic change measurements, and are more suitable for studying vegetation characteristics and steep features of the landscape. In this presentation, I illustrate and compare point-cloud-based and raster-based workflows with various examples involving ALS, TLS and SfM for the analysis of bank erosion processes in bedrock and alluvial rivers, rockfall statistics (including rockfall volume estimation directly from point clouds) and the interaction of vegetation, hydraulics and sedimentation in salt marshes. These workflows use two recently published algorithms for point cloud classification (CANUPO) and point cloud comparison (M3C2), now implemented in the open source software CloudCompare.
Eren, M; Dagdeviren, B; Bolca, O; Polat, M; Gürlertop, Y; Norgaz, T; Tezel, T
2001-02-01
This study was designed to assess the reliability of the proximal isovelocity surface area (PISA) method for shunt quantification in perimembranous ventricular septal defects (PVSD). The study group was composed of 30 patients (age 11 +/- 7 years, 13 female) with PVSD. The shunt flow (Qp-Qs) and the ratio of pulmonary to systemic flow (Qp/Qs) were calculated by spectral Doppler and by catheterization. The Qp-Qs, the defect area (DA), and the shunt volume (SV) were obtained by the PISA method, which estimates the DA (cm(2)/m(2)), the SV (cm(3)/m(2)), and the Qp-Qs (L/min/m(2)) as (2 x pi x R(2) x NL)/(V(max) x body surface area), DA x TVI(shunt), and SV x heart rate, respectively (R is the distance of the maximal PISA from the first aliasing line to the left ventricular side of the defect, NL is the Nyquist limit, and V(max) and TVI(shunt) are the peak velocity and time-velocity integral of the transdefect Doppler tracing obtained by continuous-wave Doppler). The PISA method (3.4 +/- 1.5 L/min/m(2)) underestimated the Qp-Qs relative to spectral Doppler (r = 0.96). Good correlations were found between the PISA findings (Qp-Qs, DA, SV) and the catheterization Qp/Qs (r = 0.86, 0.84, and 0.86, respectively), and the accuracies of the PISA findings in identifying large defects were high (0.90, 0.93, and 0.90 for cut-off values of Qp-Qs = 3.67 L/min/m(2), DA = 0.44 cm(2)/m(2), and SV = 43 cm(3)/m(2), respectively). As a result, the PISA method can be a simple and reliable alternative to the spectral Doppler method for identifying large shunts in PVSD.
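The abstract's formulas translate directly into code; the sketch below implements them with illustrative input values (the radius, Nyquist limit, velocities, heart rate and body surface area are assumptions, not patient data):

```python
import math

def pisa_shunt_indices(r_cm, nyquist_cm_s, v_max_cm_s, tvi_cm,
                       heart_rate, bsa_m2):
    """Shunt indices from the PISA formulas quoted in the abstract:
    DA    = (2*pi*R^2*NL) / (Vmax*BSA)   [cm^2/m^2]
    SV    = DA * TVI                     [cm^3/m^2 per beat]
    Qp-Qs = SV * HR / 1000               [L/min/m^2]"""
    da = 2.0 * math.pi * r_cm ** 2 * nyquist_cm_s / (v_max_cm_s * bsa_m2)
    sv = da * tvi_cm
    qp_qs = sv * heart_rate / 1000.0
    return da, sv, qp_qs

# illustrative inputs: 0.8 cm PISA radius, 40 cm/s Nyquist limit,
# 5 m/s transdefect jet, 150 cm TVI, HR 90 bpm, BSA 1.0 m^2
da, sv, qp_qs = pisa_shunt_indices(0.8, 40.0, 500.0, 150.0, 90, 1.0)
```

The division by 1000 only converts cm³/min/m² to L/min/m²; with these inputs the indexed shunt flow lands above the abstract's 3.67 L/min/m² cut-off for a large defect.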
Optimization Research on Ampacity of Underground High Voltage Cable Based on Interior Point Method
Huang, Feng; Li, Jing
2017-12-01
The conservative operation method, which takes a unified current-carrying capacity as the maximum load current, cannot make full use of the overall power transmission capacity of the cables; it is not the optimal operating state for a cable cluster. In order to improve the transmission capacity of underground cables in a cluster, this paper takes the maximum overall load current as the objective function, with the constraint that the temperature of every cable remains below its maximum permissible temperature. The interior point method, which is very effective for nonlinear problems, is used to solve this extremal problem and determine the optimal operating current of each loop. The results show that the optimal solution obtained with the proposed method increases the total load current by about 5%, which greatly improves the economic performance of the cable cluster.
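A minimal log-barrier sketch of the idea (production interior point solvers are considerably more sophisticated): maximize the total current of a two-cable toy cluster subject to per-cable temperature limits, with a quadratic self- and mutual-heating model. The thermal coefficients and limits are illustrative assumptions, not data from the paper:

```python
import math

T_AMB, T_MAX = 25.0, 90.0
K = [[0.010, 0.004],     # deg C per A^2: self- and mutual-heating coefficients
     [0.004, 0.008]]     # (cable 2 sits in a cooler position)

def temps(I):
    """Steady-state conductor temperatures for loop currents I."""
    return [T_AMB + sum(K[i][j] * I[j] ** 2 for j in range(2))
            for i in range(2)]

def barrier(I, mu):
    """Objective -(I1+I2) plus log-barrier on the temperature constraints."""
    t = temps(I)
    if any(ti >= T_MAX for ti in t):
        return float("inf")
    return -(I[0] + I[1]) - mu * sum(math.log(T_MAX - ti) for ti in t)

def barrier_grad(I, mu):
    t = temps(I)
    return [-1.0 + mu * sum(2.0 * K[i][j] * I[j] / (T_MAX - t[i])
                            for i in range(2))
            for j in range(2)]

I, mu = [10.0, 10.0], 10.0        # strictly feasible start
while mu > 1e-4:                  # shrink the barrier parameter (central path)
    for _ in range(500):          # inner minimization: descent + backtracking
        g = barrier_grad(I, mu)
        f0, step = barrier(I, mu), 1.0
        while step > 1e-12:
            trial = [I[0] - step * g[0], I[1] - step * g[1]]
            if barrier(trial, mu) < f0:
                I = trial
                break
            step *= 0.5
    mu *= 0.3

# conservative baseline: equal currents limited by the hottest cable
total_uniform = 2.0 * math.sqrt((T_MAX - T_AMB) / (K[0][0] + K[0][1]))
total_optimized = I[0] + I[1]
```

The optimizer loads the cooler cable harder while keeping every conductor below its limit, so the cluster total exceeds the uniform-rating baseline by a few percent, in line with the ~5% gain the abstract reports.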
Matsuki, Keisuke; Kenmoku, Tomonori; Ochiai, Nobuyasu; Sugaya, Hiroyuki; Banks, Scott A
2016-06-14
Several published articles have reported 3-dimensional glenohumeral kinematics using model-image registration techniques; however, these articles used different methods to compute the translations. The purpose of this study was to compare glenohumeral translations calculated with three different methods. Fifteen healthy males with a mean age of 31 years (range, 27-36 years) were enrolled in this study. Fluoroscopic images during scapular-plane elevation were recorded at 30 frames per second for the right shoulder of each subject, and CT-derived models of the humerus and scapula were matched with the silhouettes of the bones in the fluoroscopic images using model-image registration techniques. Glenohumeral translations were computed with three methods: the relative position of the origins of the humeral and scapular models, the contact points of the two models, and relative positions based upon the calculated glenohumeral center of rotation (CoR). In the supero-inferior direction, translations calculated with the three methods were roughly parallel, with a maximum difference of 1.6 mm. Translations computed with the origins and with the CoR were parallel, whereas translations computed with the origins and with the contact points describe arcs that differ by almost 2 mm at low humeral elevation angles and converge at higher elevation angles. Translations calculated with the three methods thus showed statistically significant differences that may be important when comparing detailed results of different studies. However, these relatively small differences are likely subclinical, so all three methods can reasonably be used to describe glenohumeral translations.
Preliminary phytochemical screening, proximate and elemental ...
African Journals Online (AJOL)
The seed powder of Moringa oleifera was analysed for its phytochemical, proximate and elemental composition using the Folin-Denis spectrophotometric method, the gravimetric method and the energy-dispersive X-ray fluorescence (EDXRF) transmission emission technique, respectively. The seed powder had the following proximate ...
A RECOGNITION METHOD FOR AIRPLANE TARGETS USING 3D POINT CLOUD DATA
Directory of Open Access Journals (Sweden)
M. Zhou
2012-07-01
Full Text Available LiDAR is capable of obtaining three-dimensional coordinates of terrain and targets directly and is widely applied in digital city modelling, disaster mitigation and environmental monitoring. In particular, because of its ability to penetrate low-density vegetation and canopy, the LiDAR technique has a clear advantage in detecting and recognising hidden and camouflaged targets. Based on multi-echo LiDAR data, and combining invariant moment theory, this paper presents a recognition method for classic airplanes (even hidden targets, mainly under canopy cover) using KD-tree-segmented point cloud data. The proposed algorithm first uses a KD-tree to organize and manage the point cloud data and a clustering method to segment objects; prior knowledge and invariant moments are then used to recognise airplanes. The outcomes of this test verify the practicality and feasibility of the method, which could be applied to target measurement and modelling in subsequent data processing.
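The KD-tree organization and clustering segmentation step can be sketched as a simple Euclidean region-growing pass over the cloud. This is a generic stand-in rather than the authors' exact algorithm, and the `radius` and `min_size` parameters are assumptions:

```python
import numpy as np
from scipy.spatial import cKDTree

def euclidean_cluster(points, radius=1.0, min_size=5):
    """Greedy region-growing segmentation over a KD-tree: points within
    `radius` of a cluster member join that cluster."""
    tree = cKDTree(points)
    unvisited = set(range(len(points)))
    clusters = []
    while unvisited:
        seed = unvisited.pop()
        frontier, cluster = [seed], [seed]
        while frontier:
            idx = frontier.pop()
            for nb in tree.query_ball_point(points[idx], r=radius):
                if nb in unvisited:
                    unvisited.remove(nb)
                    frontier.append(nb)
                    cluster.append(nb)
        if len(cluster) >= min_size:
            clusters.append(np.array(cluster))
    return clusters

# Two well-separated synthetic "objects" in 3D.
rng = np.random.default_rng(0)
a = rng.normal(0.0, 0.2, size=(50, 3))
b = rng.normal(5.0, 0.2, size=(60, 3))
clusters = euclidean_cluster(np.vstack([a, b]), radius=0.8)
```

Each resulting cluster would then be passed to the moment-based recognition stage.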
Curvature computation in volume-of-fluid method based on point-cloud sampling
Kassar, Bruno B. M.; Carneiro, João N. E.; Nieckele, Angela O.
2018-01-01
This work proposes a novel approach to compute interface curvature in multiphase flow simulations based on the Volume of Fluid (VOF) method. It is well documented in the literature that curvature and normal vector computation in VOF may lack accuracy, mainly due to abrupt changes in the volume fraction field across the interfaces. This may deteriorate the interface tension force estimates, often resulting in inaccurate results for interface-tension-dominated flows. Many techniques have been presented over the years to improve the accuracy of normal vector and curvature estimates, including height functions, parabolic fitting of the volume fraction, reconstructed distance functions, coupling the Level Set method with VOF, and convolving the volume fraction field with smoothing kernels, among others. We propose a novel technique based on a representation of the interface by a cloud of points. The curvatures and interface normal vectors are computed geometrically at each point of the cloud and projected onto the Eulerian grid in a Front-Tracking manner. Results are compared to benchmark data, and a significant reduction in spurious currents as well as an improvement in the pressure jump are observed. The method was developed in the open source suite OpenFOAM®, extending its standard VOF implementation, the interFoam solver.
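Estimating curvature geometrically from sampled interface points can be illustrated in 2D with an algebraic (Kåsa) circle fit, which recovers the curvature as the reciprocal of the fitted radius. This is a minimal sketch of the geometric idea only; the VOF point-cloud generation and grid projection machinery are omitted, and the circle fit is just one simple choice:

```python
import numpy as np

def kasa_circle_curvature(pts):
    """Fit x^2 + y^2 = 2*cx*x + 2*cy*y + c by least squares and return
    the curvature 1/R of the fitted circle."""
    x, y = pts[:, 0], pts[:, 1]
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    b = x**2 + y**2
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    R = np.sqrt(c + cx**2 + cy**2)  # c = R^2 - cx^2 - cy^2
    return 1.0 / R

# Interface samples on a quarter arc of a circle with radius 3.
theta = np.linspace(0.0, np.pi / 2, 40)
pts = np.column_stack([3.0 * np.cos(theta), 3.0 * np.sin(theta)])
kappa = kasa_circle_curvature(pts)  # expect ~ 1/3
```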
Directory of Open Access Journals (Sweden)
L. Gézero
2017-05-01
Full Text Available In the last few years, LiDAR sensors installed in terrestrial vehicles have proven to be an efficient way to collect very dense 3D georeferenced information. The possibility of creating very dense point clouds representing the surface surrounding the sensor at a given moment, in a very fast, detailed and easy way, shows the potential of this technology for large-scale cartography and digital terrain model production. However, there are still some limitations associated with the use of this technology. When several acquisitions of the same area are made with the same device, differences between the clouds can be observed, ranging from a few centimetres to several tens of centimetres, mainly in urban and high-vegetation areas where occlusion of the GNSS signal degrades the georeferenced trajectory. In this article, a different point cloud registration method is proposed. Besides its efficiency and speed of execution, the main advantage of the method is that the adjustment is made continuously along the trajectory, based on GPS time. The process is fully automatic and uses only information recorded in the standard LAS files, without the need for any auxiliary information, in particular regarding the trajectory.
Gézero, L.; Antunes, C.
2017-05-01
In the last few years, LiDAR sensors installed in terrestrial vehicles have proven to be an efficient way to collect very dense 3D georeferenced information. The possibility of creating very dense point clouds representing the surface surrounding the sensor at a given moment, in a very fast, detailed and easy way, shows the potential of this technology for large-scale cartography and digital terrain model production. However, there are still some limitations associated with the use of this technology. When several acquisitions of the same area are made with the same device, differences between the clouds can be observed, ranging from a few centimetres to several tens of centimetres, mainly in urban and high-vegetation areas where occlusion of the GNSS signal degrades the georeferenced trajectory. In this article, a different point cloud registration method is proposed. Besides its efficiency and speed of execution, the main advantage of the method is that the adjustment is made continuously along the trajectory, based on GPS time. The process is fully automatic and uses only information recorded in the standard LAS files, without the need for any auxiliary information, in particular regarding the trajectory.
METHOD OF GREEN FUNCTIONS IN MATHEMATICAL MODELLING FOR TWO-POINT BOUNDARY-VALUE PROBLEMS
Directory of Open Access Journals (Sweden)
E. V. Dikareva
2015-01-01
Full Text Available Summary. In many applied problems of control, optimization, system theory, theoretical and construction mechanics, in problems with string and rod structures, oscillation theory, the theory of elasticity and plasticity, and mechanical problems connected with fracture dynamics and shock waves, the main instrument of study is the theory of high-order ordinary differential equations. This methodology is also applied to mathematical models in graph theory with different partitionings based on differential equations. Such equations are used not only for the theoretical foundation of mathematical models but also for constructing numerical methods and computer algorithms. These models are studied with the Green function method. The paper first presents the necessary theory of the Green function method for multi-point boundary-value problems: the main equation is discussed, and the notions of multi-point boundary conditions, boundary functionals, degenerate and non-degenerate problems, and the fundamental matrix of solutions are introduced. In the main part, the problem under study is formulated in terms of shocks and deformations in the boundary conditions, after which the main results are stated. Theorem 1 proves conditions for the existence and uniqueness of solutions. Theorem 2 proves conditions for strict positivity and equal measurability of a pair of solutions. Theorem 3 establishes existence and estimates for the least eigenvalue, spectral properties and positivity of eigenfunctions. Theorem 4 proves the weighted positivity of the Green function. Possible applications to signal theory and transmutation operators are considered.
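The Green-function representation can be made concrete for the classical two-point case: below, -u'' = f on (0,1) with u(0) = u(1) = 0 is solved by quadrature against the known closed-form Green function G(x,s) = s(1-x) for s ≤ x and x(1-s) for s > x. This is a textbook special case for illustration, not the paper's multi-point setting:

```python
import numpy as np

def green_solve(f, n=2001):
    """Solve -u'' = f on (0,1), u(0) = u(1) = 0, via
    u(x) = integral_0^1 G(x, s) f(s) ds with the closed-form Green function."""
    s = np.linspace(0.0, 1.0, n)
    fs = f(s)
    def u(x):
        G = np.where(s <= x, s * (1 - x), x * (1 - s))
        g = G * fs
        # trapezoidal quadrature
        return float(np.sum(0.5 * (g[1:] + g[:-1]) * np.diff(s)))
    return u

# For f = 1 the exact solution is u(x) = x(1 - x)/2.
u = green_solve(lambda s: np.ones_like(s))
```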
Approximate Dual Averaging Method for Multiagent Saddle-Point Problems with Stochastic Subgradients
Directory of Open Access Journals (Sweden)
Deming Yuan
2014-01-01
Full Text Available This paper considers the problem of solving the saddle-point problem over a network, which consists of multiple interacting agents. The global objective function of the problem is a combination of local convex-concave functions, each of which is only available to one agent. Our main focus is on the case where the projection steps are calculated approximately and the subgradients are corrupted by some stochastic noises. We propose an approximate version of the standard dual averaging method and show that the standard convergence rate is preserved, provided that the projection errors decrease at some appropriate rate and the noises are zero-mean and have bounded variance.
A Riccati-Based Interior Point Method for Efficient Model Predictive Control of SISO Systems
DEFF Research Database (Denmark)
Hagdrup, Morten; Johansson, Rolf; Bagterp Jørgensen, John
2017-01-01
This paper presents an algorithm for Model Predictive Control of SISO systems. Based on a quadratic objective in addition to (hard) input constraints, it features soft upper as well as lower constraints on the output and an input rate-of-change penalty term. It keeps the deterministic and stochastic model parts separate: the controller is designed based on the deterministic model, while the Kalman filter results from the stochastic part. The controller is implemented as a primal-dual interior point (IP) method using Riccati recursion, and the computational savings possible for SISO systems...
Mang, Samuel; Bucher, Hannes; Nickolaus, Peter
2016-01-01
The scintillation proximity assay (SPA) technology has been widely used to establish high-throughput screens (HTS) for a range of targets in the pharmaceutical industry. PDE12 (a.k.a. 2'-phosphodiesterase) has been reported to participate in the degradation of oligoadenylates that are involved in the establishment of an antiviral state via the activation of ribonuclease L (RNase L). Degradation of oligoadenylates by PDE12 terminates these antiviral activities, leading to decreased resistance of cells to a variety of viral pathogens. Therefore, inhibitors of PDE12 are discussed as antiviral therapy. Here we describe the use of yttrium silicate SPA bead technology to assess the inhibitory activity of compounds against PDE12 in a homogeneous, robust, HTS-feasible assay using tritiated adenosine-P-adenylate ([3H]ApA) as substrate. We found that the [3H]ApA educt was not able to bind to SPA beads, whereas the product [3H]AMP, as known before, was able to bind to SPA beads. This enables the measurement of PDE12 activity on [3H]ApA as a substrate using a Wallac MicroBeta counter. The method provides a robust, high-throughput-capable format in terms of specificity, commonly used compound solvents, ease of detection and assay matrices, and could facilitate the search for PDE12 inhibitors as antiviral compounds.
New spatial upscaling methods for multi-point measurements: From normal to p-normal
Liu, Feng; Li, Xin
2017-12-01
Careful attention must be given to determining whether the geophysical variables of interest are normally distributed, since the assumption of a normal distribution may not accurately reflect the probability distribution of some variables. As a generalization of the normal distribution, the p-normal distribution and its corresponding maximum likelihood estimation (the least power estimation, LPE) were introduced in upscaling methods for multi-point measurements. Six methods, including three normal-based methods, i.e., arithmetic average, least square estimation, block kriging, and three p-normal-based methods, i.e., LPE, geostatistics LPE and inverse distance weighted LPE are compared in two types of experiments: a synthetic experiment to evaluate the performance of the upscaling methods in terms of accuracy, stability and robustness, and a real-world experiment to produce real-world upscaling estimates using soil moisture data obtained from multi-scale observations. The results show that the p-normal-based methods produced lower mean absolute errors and outperformed the other techniques due to their universality and robustness. We conclude that introducing appropriate statistical parameters into an upscaling strategy can substantially improve the estimation, especially if the raw measurements are disorganized; however, further investigation is required to determine which parameter is the most effective among variance, spatial correlation information and parameter p.
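The least power estimation (LPE) at the heart of the p-normal methods can be sketched for the scalar case: the LPE of a sample minimizes the sum of p-th powers of absolute deviations, recovering the arithmetic mean at p = 2 and the median at p = 1. The sample values below are illustrative:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def lpe(samples, p):
    """Least power estimate: argmin_mu sum_i |x_i - mu|**p (scalar case)."""
    cost = lambda mu: np.sum(np.abs(samples - mu) ** p)
    return minimize_scalar(cost, bounds=(samples.min(), samples.max()),
                           method="bounded").x

x = np.array([1.0, 2.0, 3.0, 4.0, 100.0])  # one gross outlier
mu2 = lpe(x, 2.0)  # ~ arithmetic mean (22.0), dragged by the outlier
mu1 = lpe(x, 1.0)  # ~ median (3.0), robust to the outlier
```

This illustrates why p-normal-based upscaling can be more robust than normal-based averaging when raw measurements are disorganized.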
Wang, D.; Hollaus, M.; Pfeifer, N.
2017-09-01
Classification of wood and leaf components of trees is an essential prerequisite for deriving vital tree attributes, such as wood mass, leaf area index (LAI) and woody-to-total area. Laser scanning is emerging as a promising solution for this task. Intensity-based approaches are widely proposed, as different components of a tree can feature discriminatory optical properties at the operating wavelengths of a sensor system. For geometry-based methods, machine learning algorithms are often used to separate wood and leaf points, given proper training samples. However, it remains unclear how the chosen machine learning classifier and the features used influence classification results. To this end, we compare four popular machine learning classifiers, namely Support Vector Machine (SVM), Naïve Bayes (NB), Random Forest (RF), and Gaussian Mixture Model (GMM), for separating wood and leaf points from terrestrial laser scanning (TLS) data. Two trees, an Erythrophleum fordii and a Betula pendula (silver birch), are used to test the impacts of classifier, feature set, and training samples. Our results show that RF is the best model in terms of accuracy, and that local-density-related features are important. The experimental results confirm the feasibility of machine learning algorithms for the reliable classification of wood and leaf points. Note that our studies are based on isolated trees; further tests should be performed on more tree species and data from more complex environments.
Directory of Open Access Journals (Sweden)
D. Wang
2017-09-01
Full Text Available Classification of wood and leaf components of trees is an essential prerequisite for deriving vital tree attributes, such as wood mass, leaf area index (LAI) and woody-to-total area. Laser scanning is emerging as a promising solution for this task. Intensity-based approaches are widely proposed, as different components of a tree can feature discriminatory optical properties at the operating wavelengths of a sensor system. For geometry-based methods, machine learning algorithms are often used to separate wood and leaf points, given proper training samples. However, it remains unclear how the chosen machine learning classifier and the features used influence classification results. To this end, we compare four popular machine learning classifiers, namely Support Vector Machine (SVM), Naïve Bayes (NB), Random Forest (RF), and Gaussian Mixture Model (GMM), for separating wood and leaf points from terrestrial laser scanning (TLS) data. Two trees, an Erythrophleum fordii and a Betula pendula (silver birch), are used to test the impacts of classifier, feature set, and training samples. Our results show that RF is the best model in terms of accuracy, and that local-density-related features are important. The experimental results confirm the feasibility of machine learning algorithms for the reliable classification of wood and leaf points. Note that our studies are based on isolated trees; further tests should be performed on more tree species and data from more complex environments.
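A classifier comparison of this kind can be sketched with scikit-learn. The synthetic features below are stand-ins for the per-point geometric/density features used in the study, so the resulting scores are illustrative only:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC

# Synthetic stand-in for per-point features (e.g. local density, planarity)
# with binary wood/leaf labels.
X, y = make_classification(n_samples=600, n_features=6, n_informative=4,
                           random_state=0)

models = {"SVM": SVC(),
          "NB": GaussianNB(),
          "RF": RandomForestClassifier(n_estimators=100, random_state=0)}

# 5-fold cross-validated accuracy for each classifier.
scores = {name: cross_val_score(clf, X, y, cv=5).mean()
          for name, clf in models.items()}
```

On real TLS features, which classifier wins depends on the feature set and training samples, which is exactly the question the paper investigates.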
A Semantic Modelling Framework-Based Method for Building Reconstruction from Point Clouds
Directory of Open Access Journals (Sweden)
Qingdong Wang
2016-09-01
Full Text Available Over the past few years, there has been an increasing need for semantic information in automatic city modelling. However, due to the complexity of building structure, the semantic reconstruction of buildings is still a challenging task, because it is difficult to extract architectural rules and semantic information from the data. To address these insufficiencies, we present a semantic modelling framework-based approach for automated building reconstruction using semantic information extracted from point clouds or images. In this approach, a semantic modelling framework is designed to describe and generate the building model, and a workflow is established for extracting the semantic information of buildings from an unorganized point cloud and converting it into the semantic modelling framework. The technical feasibility of our method is validated using three airborne laser scanning datasets, and the results are compared comprehensively with other related works, indicating that our approach can simplify the reconstruction process from a point cloud and generate 3D building models with high accuracy and rich semantic information.
Feature extraction from 3D lidar point clouds using image processing methods
Zhu, Ling; Shortridge, Ashton; Lusch, David; Shi, Ruoming
2011-10-01
Airborne LiDAR data have become cost-effective to produce at local and regional scales across the United States and internationally. These data are typically collected and processed into surface data products by contractors for state and local communities. Current algorithms for advanced processing of LiDAR point cloud data are normally implemented in specialized, expensive software that is not available to many users, who are therefore unable to experiment with the LiDAR point cloud data directly to extract desired feature classes. The objective of this research is to identify and assess automated, readily implementable GIS procedures to extract features like buildings, vegetated areas, parking lots and roads from LiDAR data using standard image processing tools, as such tools are relatively mature with many effective classification methods. The final procedure adopted employs four distinct stages. First, interpolation is used to transfer the 3D points to a high-resolution raster; raster grids of both height and intensity are generated. Second, multiple raster maps - a normalized digital surface model (nDSM), difference of returns, slope, and the LiDAR intensity map - are conflated to generate a multi-channel image. Third, a feature space of this image is created. Finally, supervised classification on the feature space is implemented. The approach is demonstrated in both a conceptual model and on a complex real-world case study, and its strengths and limitations are addressed.
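The first stage, interpolating scattered returns onto a high-resolution raster, can be sketched with SciPy's `griddata`. The synthetic height field below stands in for real LiDAR returns:

```python
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(2)
# Scattered "returns": (x, y) positions with a smooth synthetic height field.
xy = rng.uniform(0.0, 10.0, size=(800, 2))
z = np.sin(xy[:, 0]) + 0.1 * xy[:, 1]

# Interpolate onto a regular 50 x 50 grid (the high-resolution raster).
gx, gy = np.meshgrid(np.linspace(0, 10, 50), np.linspace(0, 10, 50))
dsm = griddata(xy, z, (gx, gy), method="linear")  # NaN outside convex hull
```

In the full pipeline this raster would be stacked with intensity, slope, and difference-of-returns layers into the multi-channel image used for supervised classification.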
An unconventional GIS-based method to assess landslide susceptibility using point data features
Adami, S.; Bresolin, M.; Carraretto, M.; Castelletti, P.; Corò, D.; Di Mario, F.; Fiaschi, S.; Frasson, T.; Gandolfo, L.; Mazzalai, L.; Padovan, T.; Sartori, F.; Viganò, A.; Zulian, A.; De Agostini, A.; Pajola, M.; Floris, M.
2012-04-01
This work reports the results of a project carried out by the students attending the course "GIS techniques in Applied Geology" at the master level of the Geological Sciences degree of the Department of Geosciences, University of Padua. The project concerns the evaluation of landslide susceptibility in the Val d'Agno basin, located in the north-eastern Italian Alps within the Vicenza Province (Veneto Region, NE Italy). As is well known, most of the models proposed to assess landslide susceptibility are based on the availability of spatial information on landslides and the related predisposing environmental factors. Landslides and related factors are spatially combined in GIS systems to weight the influence of each predisposing factor and produce landslide susceptibility maps. The first and most important input is the landslide layer, which must contain at minimum the shape and type of the landslides, so it must be a polygon feature. In Italy, as in many countries around the world, the location and type of landslides are available in the main spatial databases (the AVI and IFFI projects), but in only a few cases are the mass movements delimited; they are thus often spatially represented by point features. As an example, in the Vicenza Province, the IFFI database contains 1692 landslides stored as a point feature, but only 383 were delimited and stored as a polygon feature. In order to provide a method that allows the use of all the available information and makes an effective spatial prediction also in areas where mass movements are mainly stored as point features, the point data representing landslides in the Val d'Agno basin have been buffered to obtain polygon features, which have been combined with morphometric (elevation, slope, aspect and curvature) and non-morphometric (land use, distance from roads and distance from rivers) factors. Two buffers have been created: the first has a radius of 10 meters, the minimum required for the analysis, and the second
Directory of Open Access Journals (Sweden)
Ilaria Iaconeta
2017-09-01
Full Text Available The simulation of large deformation problems involving complex history-dependent constitutive laws is of paramount importance in several engineering fields. Particular attention has to be paid to the choice of a suitable numerical technique such that reliable results can be obtained. In this paper, a Material Point Method (MPM) and a Galerkin Meshfree Method (GMM) are presented and verified against classical benchmarks in solid mechanics. The aim is to demonstrate the good behavior of the methods in the simulation of cohesive-frictional materials, both in static and dynamic regimes and in problems dealing with large deformations. The vast majority of MPM techniques in the literature are based on some sort of explicit time integration. The techniques proposed in the current work, on the contrary, are based on implicit approaches, which can also be easily adapted to the simulation of static cases. The two methods are presented so as to highlight the similarities to, rather than the differences from, “standard” Updated Lagrangian (UL) approaches commonly employed by the Finite Element (FE) community. Although both methods are able to give good predictions, it is observed that, under very large deformation of the medium, the GMM lacks robustness due to its meshfree nature, which makes the definition of the meshless shape functions more difficult and expensive than in the MPM. On the other hand, the mesh-based MPM is demonstrated to be more robust and reliable for extremely large deformation cases.
Chen, Yong; Chen, Chang
2014-08-01
In optical pressure measurements in wind-tunnel tests, a triangle mesh is usually built to rectify images that are geometrically distorted. In this paper, a novel method for selecting the control points of the triangle mesh is proposed, combining artificial points with margin control points. Because margin control points are difficult to extract under wind-on conditions due to model distortion and grey-level variation, an improved Smallest Univalue Segment Assimilating Nucleus (SUSAN) algorithm based on region selection and adaptive thresholding is designed. A connection method is employed to verify the validity of points, which prevents noisy points from being mistakenly regarded as corner points. The distorted images of an aircraft model are rectified and the results analyzed. Experiments demonstrate that the proposed method greatly improves the rectification effect.
Creating the Data Basis for Environmental Evaluations with the Oil Point Method
DEFF Research Database (Denmark)
Bey, Niki; Lenau, Torben Anker
1999-01-01
In order to support designers in decision-making, some methods have been developed which are based on environmental indicators. These methods, however, can only be used if indicators for the specific product concept exist and are readily available. Based on this situation, the authors developed the Oil Point Method ... as it is the case with rules-of-thumb. The central idea is that missing indicators can be calculated or estimated by the designers themselves. After discussing energy-related environmental evaluation and arguing for its application in the evaluation of concepts, the paper focuses on the basic problem of missing data and describes the way in which the problem may be solved by making Oil Point evaluations. Sources of energy data are mentioned. Typical deficits to be aware of, such as the negligence of efficiency factors, are revealed and discussed. Comparative case studies which have shown encouraging results are mentioned.
Distance-based microfluidic quantitative detection methods for point-of-care testing.
Tian, Tian; Li, Jiuxing; Song, Yanling; Zhou, Leiji; Zhu, Zhi; Yang, Chaoyong James
2016-04-07
Equipment-free devices with quantitative readout are of great significance to point-of-care testing (POCT), which provides real-time readout to users and is especially important in low-resource settings. Among various equipment-free approaches, distance-based visual quantitative detection methods rely on reading the visual signal length for corresponding target concentrations, thus eliminating the need for sophisticated instruments. The distance-based methods are low-cost, user-friendly and can be integrated into portable analytical devices. Moreover, such methods enable quantitative detection of various targets by the naked eye. In this review, we first introduce the concept and history of distance-based visual quantitative detection methods. Then, we summarize the main methods for translation of molecular signals to distance-based readout and discuss different microfluidic platforms (glass, PDMS, paper and thread) in terms of applications in biomedical diagnostics, food safety monitoring, and environmental analysis. Finally, the potential and future perspectives are discussed.
A node-based smoothed point interpolation method for dynamic analysis of rotating flexible beams
Du, C. F.; Zhang, D. G.; Li, L.; Liu, G. R.
2017-10-01
We propose a mesh-free method, the node-based smoothed point interpolation method (NS-PIM), for the dynamic analysis of rotating beams. A gradient smoothing technique is used, and the consistency requirements on the displacement functions are further weakened. In static problems, beams with three types of boundary conditions are analyzed, and the results are compared with the exact solutions, which shows the effectiveness of the method and that it can provide an upper-bound solution for the deflection; in other words, the NS-PIM softens the system. The NS-PIM is then extended to a rigid-flexible coupled system dynamics problem, a rotating flexible cantilever beam that undergoes not only transverse but also longitudinal deformations. The rigid-flexible coupled dynamic equations of the system are derived by employing Lagrange's equations of the second kind. Simulation results of the NS-PIM are compared with those obtained using the finite element method (FEM) and the assumed mode method. It is found that, compared with the FEM, the NS-PIM is more robust against ill-conditioning under the same calculation conditions.
A novel method for fast Change-Point detection on simulated time series and electrocardiogram data.
Directory of Open Access Journals (Sweden)
Jin-Peng Qi
Full Text Available Although the Kolmogorov-Smirnov (KS) statistic is a widely used method, it has some weaknesses for investigating abrupt Change Point (CP) problems; e.g., it is time-consuming and sometimes invalid. To detect abrupt change in time series quickly, a novel method is proposed based on the Haar Wavelet (HW) and the KS statistic (HWKS). First, two Binary Search Trees (BSTs), termed TcA and TcD, are constructed by multi-level HW decomposition of the diagnosed time series; the HWKS framework is implemented by introducing a modified KS statistic and two search rules based on the two BSTs; fast CP detection is then implemented by two HWKS-based algorithms. Second, the performance of HWKS is evaluated on a simulated time series dataset. The simulations show that HWKS is faster, more sensitive and more efficient than the KS, HW and T methods. Finally, HWKS is applied to electrocardiogram (ECG) time series; the experimental results show that the proposed method can find abrupt changes in ECG segments with maximal data fluctuation more quickly and efficiently, which is very helpful for inspecting and diagnosing the state of health from a patient's ECG signal.
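The baseline that HWKS accelerates can be sketched as a brute-force scan: evaluate the two-sample KS statistic at every candidate split and take the argmax. The Haar-wavelet search trees in the paper are precisely what removes the cost of this exhaustive scan; the synthetic series below is illustrative:

```python
import numpy as np
from scipy.stats import ks_2samp

def ks_change_point(x, min_seg=20):
    """Brute-force KS change-point locator: return the split index with the
    largest two-sample KS statistic between the left and right segments."""
    best_t, best_stat = None, -1.0
    for t in range(min_seg, len(x) - min_seg):
        stat = ks_2samp(x[:t], x[t:]).statistic
        if stat > best_stat:
            best_t, best_stat = t, stat
    return best_t, best_stat

# Series with an abrupt mean shift at index 150.
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0.0, 1.0, 150), rng.normal(2.0, 1.0, 150)])
t_hat, stat = ks_change_point(x)
```

This scan is O(n) KS evaluations per series, which is exactly the time cost the BST-based search rules are designed to avoid.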
A method of 3D object recognition and localization in a cloud of points
Bielicki, Jerzy; Sitnik, Robert
2013-12-01
The method proposed in this article is designed for the analysis of data in the form of point clouds obtained directly from 3D measurements. It is intended for end-user applications and can be integrated directly with 3D scanning software. The method utilizes locally calculated feature vectors (FVs) in point cloud data. Recognition is based on comparison of the analyzed scene with a reference object library. A global descriptor in the form of a set of spatially distributed FVs is created for each reference model. During the detection process, the correlation of subsets of reference FVs with FVs calculated in the scene is computed. The features used in the algorithm are based on parameters that qualitatively estimate mean and Gaussian curvatures. Replacing differentiation with averaging in the curvature estimation makes the algorithm more resistant to discontinuities and poor quality of the input data. The use of FV subsets allows detection of partially occluded and cluttered objects in the scene, while additional spatial information keeps the false positive rate at a reasonably low level.
National Research Council Canada - National Science Library
Adane, T; Tilahun, B; Haki, Gulelat Desse; Shimelis, A; Negussie, R
2013-01-01
Although taro is widely grown in Ethiopia, it is an underutilized crop and little is known about its proximate and micro-element composition and the antinutritional factors of the raw, boiled and fermented products...
N. Somaratne; K. R. J. Smettem
2014-01-01
Application of the conventional chloride mass balance (CMB) method to point recharge dominant groundwater basins can substantially under-estimate long-term average annual recharge by not accounting for the effects of localized surface water inputs. This is because the conventional CMB method ignores the duality of infiltration and recharge found in karstic systems, where point recharge can be a contributing factor. When point recharge is present in groundwater basins,...
Directory of Open Access Journals (Sweden)
Jae Joon Hwang
Full Text Available Superimposition has been used as a method to evaluate the changes of orthodontic or orthopedic treatment in the dental field. With the introduction of cone beam CT (CBCT), evaluating 3-dimensional changes after treatment became possible by superimposition. 4 point plane orientation is one of the simplest ways to achieve superimposition of 3-dimensional images. To find the factors influencing the superimposition error of cephalometric landmarks in the 4 point plane orientation method, and to evaluate the reproducibility of cephalometric landmarks for analyzing superimposition error, 20 patients were analyzed who had normal skeletal and occlusal relationships and underwent CBCT for diagnosis of temporomandibular disorder. The nasion, sella turcica, basion and the midpoint between the left and right most posterior points of the lesser wing of the sphenoid bone were used to define a three-dimensional (3D) anatomical reference co-ordinate system. Another 15 reference cephalometric points were also determined three times in the same image. The reorientation error of each landmark could be explained substantially (23%) by a linear regression model consisting of 3 factors describing the position of each landmark relative to the reference axes and the locating error. The 4 point plane orientation system may produce a reorientation error that varies according to the perpendicular distance between the landmark and the x-axis; the reorientation error also increases as the locating error and the shift of the reference axes viewed from each landmark increase. Therefore, in order to reduce the reorientation error, the accuracy of all landmarks, including the reference points, is important. Construction of the regression model using reference points of greater precision is required for the clinical application of this model.
Hwang, Jae Joon; Kim, Kee-Deog; Park, Hyok; Park, Chang Seo; Jeong, Ho-Gul
2014-01-01
Superimposition has been used as a method to evaluate the changes of orthodontic or orthopedic treatment in the dental field. With the introduction of cone beam CT (CBCT), evaluating 3-dimensional changes after treatment became possible by superimposition. 4-point plane orientation is one of the simplest ways to achieve superimposition of 3-dimensional images. To find factors influencing the superimposition error of cephalometric landmarks under the 4-point plane orientation method, and to evaluate the reproducibility of cephalometric landmarks for analyzing superimposition error, 20 patients were analyzed who had a normal skeletal and occlusal relationship and underwent CBCT for diagnosis of temporomandibular disorder. The nasion, sella turcica, basion and the midpoint between the left and right most posterior points of the lesser wing of the sphenoid bone were used to define a three-dimensional (3D) anatomical reference co-ordinate system. Another 15 reference cephalometric points were also determined three times in the same image. The reorientation error of each landmark could be explained substantially (23%) by a linear regression model consisting of 3 factors describing the position of each landmark with respect to the reference axes and the locating error. The 4-point plane orientation system may produce an amount of reorientation error that varies according to the perpendicular distance between the landmark and the x-axis; the reorientation error also increases as the locating error and the shift of the reference axes viewed from each landmark increase. Therefore, in order to reduce the reorientation error, the accuracy of all landmarks, including the reference points, is important. Construction of the regression model using reference points of greater precision is required for the clinical application of this model.
Federal Laboratory Consortium — The Proximal Probes Facility consists of laboratories for microscopy, spectroscopy, and probing of nanostructured materials and their functional properties. At the...
Yang, Fengping; Xiao, Fangfei
2017-03-01
Hardware control and software control are the existing approaches to the inherent neutral-point voltage imbalance problem of the three-level NPC inverter; hardware control is rarely used due to its high cost. In this paper, a new compound control method is presented, based on the virtual space vector method and traditional hysteresis control of the neutral-point voltage. It compensates for the shortcoming of virtual-space-vector control, which lacks a feedback loop for the neutral-point voltage, and for the blind area of hysteresis control, and it regulates both the deviation and the ripple of the neutral-point voltage. The accuracy of this method is demonstrated by simulation.
Nikazad, Touraj; Abbasi, Mokhtar
2017-04-01
In this paper, we introduce a subclass of strictly quasi-nonexpansive operators which includes several well-known operators: paracontracting operators (e.g., strictly nonexpansive operators, metric projections, Newton and gradient operators), subgradient projections, a useful part of cutter operators, strictly relaxed cutter operators, and locally strongly Fejér operators. The members of this subclass, which can be discontinuous, may be employed in fixed point iteration methods; in particular, in iterative methods used for convex feasibility problems. The closedness of this subclass with respect to composition and convex combination of operators makes it useful and remarkable. Another advantage of members of this subclass is the possibility of adapting them to handle convex constraints. We give a convergence result, under mild conditions, for a perturbation-resilient iterative method based on an infinite pool of operators in this subclass. Perturbation-resilient iterative methods are relevant and important for their possible use in the framework of the recently developed superiorization methodology for constrained minimization problems. To assess the convergence result, the class of operators, and the assumed conditions, we illustrate some extensions of existing research as well as some new results.
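Metric projections, listed above among the members of the subclass, are the classic building block of fixed point iterations for convex feasibility. A minimal cyclic-projection sketch (a generic illustration, not the paper's perturbation-resilient scheme):

```python
import numpy as np

def project_halfspace(x, a, b):
    """Metric projection onto the half-space {x : a.x <= b}."""
    viol = a @ x - b
    if viol <= 0:
        return x                      # already feasible: projection is identity
    return x - viol * a / (a @ a)     # move orthogonally onto the boundary

# Convex feasibility by cyclic projections onto two half-spaces
x = np.array([3.0, 4.0])
constraints = [(np.array([1.0, 0.0]), 1.0), (np.array([0.0, 1.0]), 1.0)]
for _ in range(50):
    for a, b in constraints:
        x = project_halfspace(x, a, b)
print(x)  # lands in the intersection {x <= 1, y <= 1}
```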
Hartig, Dave; Waluga, Thomas; Scholl, Stephan
2015-09-25
The elution by characteristic point (ECP) method provides a rapid approach to determine whole-isotherm data with small material usage. It is especially desirable wherever the adsorbent or the adsorbate is expensive, toxic, or only available in small amounts. However, the ECP method is limited to adsorbents that are well optimized for chromatographic use and therefore provide a high number of theoretical plates when packed into columns (2000 or more is suggested for Langmuir-type isotherms). Here we present a novel approach that uses a new profile correction to apply the ECP method to poorly optimized adsorbents with fewer than 200 theoretical plates. Non-ideality effects are determined using a dead-volume marker injection, and the resulting marker profile is used to compensate for these effects, considering their dependence on the actual concentration instead of assuming rectangular profiles. Experimental and literature data are used to compare the new ECP approach with batch method results. Copyright © 2015 Elsevier B.V. All rights reserved.
Directory of Open Access Journals (Sweden)
YAN Li
2016-04-01
Full Text Available This paper proposes a rigorous registration method for multi-view point clouds, constrained by closed-loop conditions, to address the shortcomings of existing algorithms. In our approach, the point-to-tangent-plane iterative closest point algorithm is first used to calculate the coordinate transformation parameters of all adjacent point clouds. The single-site point cloud is then regarded as the registration unit and the transformation parameters are treated as random observations to construct condition equations, after which the transformation parameters are corrected by conditional adjustment to achieve a global optimum. Two practical experiments on point clouds acquired by a terrestrial laser scanner demonstrate the feasibility and validity of our method. Experimental results show that the registration accuracy and reliability of point clouds with sampling intervals at the millimeter or centimeter level can be improved by increasing the scanning overlap.
Nguyen, Hoang Long; Belton, David; Helmholz, Petra
2016-06-01
The demand for accurate spatial data has been increasing rapidly in recent years. Mobile laser scanning (MLS) systems have become a mainstream technology for measuring 3D spatial data. In an MLS point cloud, the point density of captured features can vary: it can be sparse and heterogeneous, or it can be dense. This is caused by several factors such as the speed of the carrier vehicle and the specifications of the laser scanner(s). The MLS point cloud data needs to be processed to obtain meaningful information; e.g., segmentation can be used to find meaningful features (planes, corners, etc.) that can serve as the inputs for many processing steps (e.g., registration, modelling) that are more difficult when using the raw point cloud alone. Planar features dominate in man-made environments and are widely used in point cloud registration and calibration processes. Several approaches for the segmentation and extraction of planar objects are available; however, these methods do not focus on properly segmenting MLS point clouds automatically while accounting for the different point densities. This research presents an extension of a segmentation method based on the planarity of features. The proposed method was verified using both simulated and real MLS point cloud datasets. The results show that planar objects in MLS point clouds can be properly segmented and extracted by the proposed segmentation method.
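The paper's own planarity-based segmentation is not spelled out in the abstract; as a generic stand-in, a RANSAC plane fit illustrates how a dominant planar feature can be extracted from a noisy point cloud (all data below is synthetic):

```python
import random
import numpy as np

def ransac_plane(points, n_iter=200, tol=0.05, seed=0):
    """Fit the dominant plane; returns (unit normal, d, inlier mask)."""
    rng = random.Random(seed)
    pts = np.asarray(points, float)
    best = (None, None, np.zeros(len(pts), bool))
    for _ in range(n_iter):
        i, j, k = rng.sample(range(len(pts)), 3)
        n = np.cross(pts[j] - pts[i], pts[k] - pts[i])
        norm = np.linalg.norm(n)
        if norm < 1e-12:
            continue                      # degenerate (collinear) sample
        n /= norm
        d = -n @ pts[i]
        inliers = np.abs(pts @ n + d) < tol
        if inliers.sum() > best[2].sum():
            best = (n, d, inliers)
    return best

# Synthetic scene: a noisy z = 0 plane plus uniform outliers
gen = np.random.default_rng(0)
plane_pts = np.c_[gen.uniform(-1, 1, (200, 2)), gen.normal(0, 0.01, 200)]
outliers = gen.uniform(-1, 1, (40, 3))
n, d, inliers = ransac_plane(np.vstack([plane_pts, outliers]))
print(inliers[:200].mean())  # nearly all plane points are recovered
```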
Directory of Open Access Journals (Sweden)
H. L. Nguyen
2016-06-01
Treatment of three- and four-part proximal humeral fractures with locking proximal humerus plate.
Sun, Jing-Cheng; Li, Yu-Lin; Ning, Guang-Zhi; Wu, Qiang; Feng, Shi-Qing
2013-08-01
The purpose of this study was to evaluate the effectiveness and complications of the locking proximal humerus plate for treating proximal humerus fractures. A retrospective clinical trial. Department of Orthopaedics, Tianjin Medical University General Hospital. Sixty-eight consecutive patients with three- or four-part fractures of the proximal humerus were treated with locking proximal humerus plates. The deltopectoral anterolateral acromial approach to the proximal humerus was used; open reduction and a locking proximal humerus plate were applied. The Constant Score was used to measure shoulder functional recovery, and the Visual Analog Scale (VAS) was used for subjective evaluation of pain. Radiographs were reviewed. After an average of 26.7 months of follow-up, the average Constant Score was 72.6 ± 13.2 points and the average VAS was 1.2 ± 0.8 points. Complications, including screw perforation into the glenohumeral joint, screw loosening, soft tissue infection, avascular necrosis and delayed union, occurred in eight cases (11.8 %). The effectiveness of the locking proximal humerus plate was similar to that reported in other published studies on treating fractures of the proximal humerus; however, a lower complication rate over a short follow-up time was observed in this study. It may potentially provide a favorable option for treating three- or four-part fractures of the proximal humerus. In dealing with each particular fracture pattern, surgeons should decide on an appropriate method of internal fixation.
Zürcher, Fabian; Brugger, Nicolas; Jahren, Silje Ekroll; de Marchi, Stefano Fausto; Seiler, Christian
2017-05-01
The accuracy of the proximal isovelocity surface area (PISA) method for the quantification of mitral regurgitation (MR), in the case of multiple jets, is unknown. The aim of this study was to evaluate different two-dimensional (2D) and three-dimensional (3D) PISA methods using 3D color Doppler data sets. Several regurgitant volumes (Rvols) were simulated using a pulsatile pump connected to a phantom equipped with single and double regurgitant orifices of different sizes and interspaces. A flowmeter served as the reference method. Transthoracic (TTE) and transoesophageal echocardiography (TEE) were used to acquire the 3D data sets. Offline, Rvols were calculated by 2D PISA methods based on hemispheric and hemicylindric assumptions and by 3D integrated PISA. A fusion of the PISA was observed in the setting of narrow-spaced regurgitant orifices; compared with flowmeter, Rvol was underestimated using the single hemispheric PISA model (TTE: Bland-Altman bias ± limit of agreement, -17.5 ± 8.9 mL; TEE: -15.9 ± 7.3 mL) and overestimated using the double hemispheric PISA model (TTE: +7.1 ± 14.6 mL; TEE: +10.4 ± 11.9 mL). The combined approach (hemisphere for single orifice, hemicylinder with two bases for nonfused PISAs, and hemicylinder with one base for fused PISAs) was more precise (TTE: -3.4 ± 6.3 mL; TEE: -1.9 ± 5.6 mL). Three-dimensional integrated PISA was the most accurate method to quantify Rvol (TTE: -2.1 ± 6.5 mL; TEE -3.2 ± 4.8 mL). In the setting of double MR orifices, the 2D combined approach and integrated 3D PISA appear to be superior as compared with the conventional hemispheric method, thus providing tools for the challenging quantification of MR with multiple jets. Copyright © 2017 American Society of Echocardiography. Published by Elsevier Inc. All rights reserved.
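For reference, the conventional single-hemisphere PISA computation evaluated above combines the aliasing radius and velocity into an instantaneous flow, an effective orifice area, and finally a regurgitant volume. A sketch with hypothetical Doppler values (the numbers are illustrative, not from the study):

```python
import math

def pisa_flow_hemisphere(r_cm, v_alias_cm_s):
    """Instantaneous regurgitant flow from a hemispheric PISA: Q = 2*pi*r^2*Va."""
    return 2 * math.pi * r_cm ** 2 * v_alias_cm_s  # cm^3/s

def regurgitant_volume(r_cm, v_alias, v_peak, vti_cm):
    """EROA = Q / v_peak; Rvol = EROA * VTI of the regurgitant jet."""
    eroa = pisa_flow_hemisphere(r_cm, v_alias) / v_peak  # cm^2
    return eroa * vti_cm  # mL

# Hypothetical example: PISA radius 1.0 cm, aliasing velocity 40 cm/s,
# peak MR velocity 500 cm/s, jet VTI 150 cm
print(round(regurgitant_volume(1.0, 40.0, 500.0, 150.0), 1))  # 75.4 mL
```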
Robust numerical method for integration of point-vortex trajectories in two dimensions
Smith, Spencer A.; Boghosian, Bruce M.
2011-05-01
The venerable two-dimensional (2D) point-vortex model plays an important role as a simplified version of many disparate physical systems, including superfluids, Bose-Einstein condensates, certain plasma configurations, and inviscid turbulence. This system is also a veritable mathematical playground, touching upon many different disciplines from topology to dynamic systems theory. Point-vortex dynamics are described by a relatively simple system of nonlinear ordinary differential equations which can easily be integrated numerically using an appropriate adaptive time stepping method. As the separation between a pair of vortices relative to all other intervortex length scales decreases, however, the computational time required diverges. Accuracy is usually the most discouraging casualty when trying to account for such vortex motion, though the varying energy of this ostensibly Hamiltonian system is a potentially more serious problem. We solve these problems by a series of coordinate transformations: We first transform to action-angle coordinates, which, to lowest order, treat the close pair as a single vortex amongst all others with an internal degree of freedom. We next, and most importantly, apply Lie transform perturbation theory to remove the higher-order correction terms in succession. The overall transformation drastically increases the numerical efficiency and ensures that the total energy remains constant to high accuracy.
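The point-vortex equations of motion mentioned above form a small nonlinear ODE system; a bare fixed-step RK4 integration of two equal vortices (which simply co-rotate, so their separation is a convenient conserved quantity to monitor) illustrates the baseline that the paper's coordinate transformations improve upon:

```python
import math

def derivs(state, gammas):
    """Velocities of 2D point vortices: the classic Biot-Savart sum."""
    n = len(gammas)
    out = [0.0] * (2 * n)
    for i in range(n):
        xi, yi = state[2 * i], state[2 * i + 1]
        for j in range(n):
            if i == j:
                continue
            dx, dy = xi - state[2 * j], yi - state[2 * j + 1]
            r2 = dx * dx + dy * dy
            out[2 * i] += -gammas[j] * dy / (2 * math.pi * r2)
            out[2 * i + 1] += gammas[j] * dx / (2 * math.pi * r2)
    return out

def rk4_step(state, gammas, h):
    k1 = derivs(state, gammas)
    k2 = derivs([s + h / 2 * k for s, k in zip(state, k1)], gammas)
    k3 = derivs([s + h / 2 * k for s, k in zip(state, k2)], gammas)
    k4 = derivs([s + h * k for s, k in zip(state, k3)], gammas)
    return [s + h / 6 * (a + 2 * b + 2 * c + e)
            for s, a, b, c, e in zip(state, k1, k2, k3, k4)]

# Two equal vortices co-rotate about their midpoint, so their separation
# is conserved -- a handy accuracy check for the integrator.
state, gammas = [-0.5, 0.0, 0.5, 0.0], [1.0, 1.0]
for _ in range(2000):
    state = rk4_step(state, gammas, 0.005)
sep = math.hypot(state[0] - state[2], state[1] - state[3])
print(sep)  # stays very close to 1.0
```

As the abstract notes, this naive approach degrades as a vortex pair tightens, which is exactly where the action-angle and Lie-transform machinery pays off.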
Wen, Xiaodong; He, Lei; Shi, Chunsheng; Deng, Qingwen; Wang, Jiwei; Zhao, Xia
2013-11-01
In this work, the analytical performance of a conventional spectrophotometer was improved through the coupling of an effective preconcentration method with spectrophotometric determination. Rapidly synergistic cloud point extraction (RS-CPE) was used to preconcentrate ultra-trace cobalt and was coupled with spectrophotometric determination for the first time. The developed coupling was simple, rapid and efficient. The factors influencing RS-CPE and the spectrophotometer were optimized. Under the optimal conditions, the limit of detection (LOD) was 0.6 μg L-1, with a sensitivity enhancement factor of 23. The relative standard deviation (RSD) for seven replicate measurements of 50 μg L-1 of cobalt was 4.3%. The recoveries for the spiked samples were in the acceptable range of 93.8-105%.
The Methods of Hilbert Spaces and Structure of the Fixed-Point Set of Lipschitzian Mapping
Directory of Open Access Journals (Sweden)
Jarosław Górnicki
2009-01-01
Full Text Available The purpose of this paper is to prove, by asymptotic center techniques and the methods of Hilbert spaces, the following theorem. Let H be a Hilbert space, let C be a nonempty bounded closed convex subset of H, and let M = [a_{n,k}]_{n,k≥1} be a strongly ergodic matrix. If T:C→C is a Lipschitzian mapping such that lim inf_{n→∞} inf_{m=0,1,...} ∑_{k=1}^{∞} a_{n,k}·‖T^{k+m}‖² < 2, then the set of fixed points Fix T = {x∈C : Tx=x} is a retract of C. This result extends and improves the corresponding results of [7, Corollary 9] and [8, Corollary 1].
Directory of Open Access Journals (Sweden)
Florin POPESCU
2017-12-01
Full Text Available Early warning systems (EWS) based on a reliable forecasting process have become a critical component of the management of large, complex industrial projects in the globalized transnational environment. The purpose of this research is to critically analyze forecasting methods from the point of view of early warning, choosing those useful for the construction of an EWS. This research addresses complementary techniques using Bayesian networks, which capture both uncertainty and causality in project planning and execution, with the goal of generating early warning signals for project managers. Even though Bayesian networks have been widely used in a range of decision-support applications, their application as early warning systems for project management is still new.
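The early-warning idea can be illustrated with a toy Bayesian network evaluated by exact enumeration; the variables and probability tables below are entirely hypothetical stand-ins for project risk factors:

```python
# Hypothetical two-cause network: SupplierRisk (S) and BadWeather (W) are
# root nodes; ProjectDelay (D) depends on both. All probabilities invented.
p_s = {True: 0.2, False: 0.8}                     # P(S)
p_w = {True: 0.3, False: 0.7}                     # P(W)
p_d = {(True, True): 0.9, (True, False): 0.6,
       (False, True): 0.4, (False, False): 0.05}  # P(D=True | S, W)

def p_delay(evidence=None):
    """Exact inference by enumerating the root variables."""
    evidence = evidence or {}
    num = den = 0.0
    for s in (True, False):
        for w in (True, False):
            if evidence.get("S", s) != s or evidence.get("W", w) != w:
                continue  # inconsistent with the observed evidence
            joint = p_s[s] * p_w[w]
            num += joint * p_d[(s, w)]
            den += joint
    return num / den

prior = p_delay()
warning = p_delay({"S": True})   # evidence arrives: supplier flagged as risky
print(prior, warning)            # about 0.26 -> 0.69: raise an early warning
```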
Directory of Open Access Journals (Sweden)
Kaijun Zhou
2017-09-01
Full Text Available The Jump Point Search (JPS) algorithm, a fast search method for path planning, is adopted for local path planning of a driverless car in an urban environment. Firstly, a vector Geographic Information System (GIS) map, including Global Positioning System (GPS) position, direction, and lane information, is built for global path planning. Secondly, the GIS map database is utilized in global path planning for the driverless car. Then, the JPS algorithm is adopted to avoid obstacles ahead and to find an optimal local path for the driverless car in the urban environment. Finally, 125 different simulation experiments in the urban environment demonstrate that JPS can successfully find an optimal and safe path, and that it has lower time complexity compared with the Vector Field Histogram (VFH), Rapidly Exploring Random Tree (RRT), A*, and Probabilistic Roadmap (PRM) algorithms. Furthermore, JPS is validated as useful in the structured urban environment.
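To give a feel for the jump idea, here is a heavily simplified 4-connected sketch that slides along straight lines until the goal line, a forced neighbour, or a dead end. Real JPS works on 8-connected grids with diagonal pruning rules; the stopping rules below are a conservative assumption of this sketch, not the paper's implementation:

```python
import heapq

def jps4(grid, start, goal):
    """Simplified 4-connected jump point search on a uniform-cost grid.
    grid[y][x] == 0 means free; returns the optimal path cost or None."""
    H, W = len(grid), len(grid[0])
    free = lambda x, y: 0 <= x < W and 0 <= y < H and grid[y][x] == 0

    def jump(x, y, dx, dy):
        """Slide in direction (dx, dy) until a cell where a turn may pay off."""
        dist = 0
        while True:
            nx, ny = x + dx, y + dy
            if not free(nx, ny):
                return (x, y, dist) if dist else None   # dead end: stop here
            x, y, dist = nx, ny, dist + 1
            if (x, y) == goal:
                return x, y, dist
            if (dx and x == goal[0]) or (dy and y == goal[1]):
                return x, y, dist                       # aligned with the goal
            if dx:  # forced neighbour: a lateral cell opens past an obstacle
                if (free(x, y + 1) and not free(x - dx, y + 1)) or \
                   (free(x, y - 1) and not free(x - dx, y - 1)):
                    return x, y, dist
            else:
                if (free(x + 1, y) and not free(x + 1, y - dy)) or \
                   (free(x - 1, y) and not free(x - 1, y - dy)):
                    return x, y, dist

    h = lambda x, y: abs(x - goal[0]) + abs(y - goal[1])  # Manhattan heuristic
    best = {start: 0}
    openq = [(h(*start), 0, start)]
    while openq:
        f, g, (x, y) = heapq.heappop(openq)
        if (x, y) == goal:
            return g
        if g > best.get((x, y), float("inf")):
            continue
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            jp = jump(x, y, dx, dy)
            if jp:
                nx, ny, d = jp
                if g + d < best.get((nx, ny), float("inf")):
                    best[(nx, ny)] = g + d
                    heapq.heappush(openq, (g + d + h(nx, ny), g + d, (nx, ny)))
    return None

# A 5x5 map with a wall at x == 2 for rows y = 0..3 forces a detour over the top
grid = [[0] * 5 for _ in range(5)]
for y in range(4):
    grid[y][2] = 1
print(jps4(grid, (0, 0), (4, 0)))  # 12: four up, four across, four down
```

The speedup over plain A* comes from expanding only jump points rather than every neighbouring cell.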
Improved incremental conductance method for maximum power point tracking using cuk converter
Directory of Open Access Journals (Sweden)
M. Saad Saoud
2014-03-01
Full Text Available The Algerian government relies on a strategy focused on the development of inexhaustible resources such as solar energy in order to diversify energy sources and prepare the Algeria of tomorrow: about 40% of the electricity produced for domestic consumption will come from renewable sources by 2030. It is therefore necessary to concentrate efforts on reducing application costs and increasing performance, which is evaluated and compared here through theoretical analysis and digital simulation. This paper presents a simulation of an improved incremental conductance method for maximum power point tracking (MPPT) using a DC-DC Cuk converter. The improved algorithm is used to track MPPs because it performs precise control under rapidly changing atmospheric conditions. Matlab/Simulink was employed for the simulation studies.
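The incremental conductance rule compares dI/dV with -I/V: equality marks the maximum power point (where dP/dV = 0), and the sign of the difference tells the tracker which way to move the operating voltage. A sketch against a toy PV curve (the I-V model below is a hypothetical shape, not a physical diode model, and the converter stage is omitted):

```python
def pv_current(v, isc=8.0, voc=20.0):
    """Toy PV I-V curve (hypothetical shape, not a physical diode model)."""
    return max(0.0, isc * (1 - (v / voc) ** 10))

def inc_cond_step(v, i, v_prev, i_prev, step=0.1):
    """One incremental-conductance update: compare dI/dV against -I/V."""
    dv, di = v - v_prev, i - i_prev
    if dv == 0:
        if di > 0:
            v += step
        elif di < 0:
            v -= step
    else:
        g_inc, g_neg = di / dv, -i / v
        if abs(g_inc - g_neg) < 1e-6:
            pass                       # dP/dV = 0: sitting on the MPP
        elif g_inc > g_neg:
            v += step                  # left of the MPP: raise the voltage
        else:
            v -= step                  # right of the MPP: lower the voltage
    return v

v_prev = 5.0
i_prev = pv_current(v_prev)
v = v_prev + 0.1
for _ in range(300):
    i = pv_current(v)
    v_new = inc_cond_step(v, i, v_prev, i_prev)
    v_prev, i_prev, v = v, i, v_new
print(v)  # settles near the maximum power point (~15.7 V for this curve)
```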
Solving eigenvalue problems on curved surfaces using the Closest Point Method
Macdonald, Colin B.
2011-06-01
Eigenvalue problems are fundamental to mathematics and science. We present a simple algorithm for determining eigenvalues and eigenfunctions of the Laplace-Beltrami operator on rather general curved surfaces. Our algorithm, which is based on the Closest Point Method, relies on an embedding of the surface in a higher-dimensional space, where standard Cartesian finite difference and interpolation schemes can be easily applied. We show that there is a one-to-one correspondence between a problem defined in the embedding space and the original surface problem. For open surfaces, we present a simple way to impose Dirichlet and Neumann boundary conditions while maintaining second-order accuracy. Convergence studies and a series of examples demonstrate the effectiveness and generality of our approach. © 2011 Elsevier Inc.
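The key identity behind the Closest Point Method is that a function extended to be constant along surface normals has a Cartesian Laplacian that agrees with the Laplace-Beltrami operator on the surface. A quick numerical check on the unit circle, where cos(n·θ) is an eigenfunction with eigenvalue -n² (a sanity check of the principle, not the paper's full embedding-grid solver):

```python
import math

def surface_fn(theta, n=3):
    """An eigenfunction of the Laplace-Beltrami operator on the unit circle."""
    return math.cos(n * theta)

def extension(x, y):
    """Closest-point extension off the unit circle: u(x, y) = u(cp(x, y)),
    i.e. constant along radial (normal) directions."""
    return surface_fn(math.atan2(y, x))

def cartesian_laplacian(f, x, y, h=1e-3):
    """Standard 5-point finite-difference Laplacian in the embedding plane."""
    return (f(x + h, y) + f(x - h, y) + f(x, y + h) + f(x, y - h)
            - 4.0 * f(x, y)) / h ** 2

# On the surface, the Cartesian Laplacian of the extension equals the
# Laplace-Beltrami operator: for u = cos(3*theta), that is -9*u.
theta = 0.7
x, y = math.cos(theta), math.sin(theta)
lap = cartesian_laplacian(extension, x, y)
print(lap, -9.0 * surface_fn(theta))  # the two values agree closely
```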
Combining Accuracy and Efficiency: An Incremental Focal-Point Method Based on Pair Natural Orbitals.
Fiedler, Benjamin; Schmitz, Gunnar; Hättig, Christof; Friedrich, Joachim
2017-12-12
In this work, we present a new pair natural orbitals (PNO)-based incremental scheme to calculate CCSD(T) and CCSD(T0) reaction, interaction, and binding energies. We perform an extensive analysis, which shows small incremental errors similar to previous non-PNO calculations. Furthermore, slight PNO errors are obtained by using T_PNO = T_TNO with appropriate values of 10^-7 to 10^-8 for reactions and 10^-8 for interaction or binding energies. The combination with the efficient MP2 focal-point approach yields chemical accuracy relative to the complete basis-set (CBS) limit. In this method, small basis sets (cc-pVDZ, def2-TZVP) for the CCSD(T) part are sufficient in the case of reactions or interactions, while somewhat larger ones (e.g., (aug)-cc-pVTZ) are necessary for molecular clusters. For these larger basis sets, we show the very high efficiency of our scheme. We obtain not only tremendous decreases in the wall times (i.e., factors > 10^2) due to the parallelization of the increment calculations, as well as in the total times due to the application of PNOs (i.e., compared to the normal incremental scheme), but also smaller total times with respect to the standard PNO method. In this way, our new method combines excellent accuracy with very high efficiency and extends accessibility to larger systems due to the separation of the full computation into several small increments.
Lenton, T M; Livina, V N; Dakos, V; van Nes, E H; Scheffer, M
2012-03-13
We address whether robust early warning signals can, in principle, be provided before a climate tipping point is reached, focusing on methods that seek to detect critical slowing down as a precursor of bifurcation. As a test bed, six previously analysed datasets are reconsidered, three palaeoclimate records approaching abrupt transitions at the end of the last ice age and three models of varying complexity forced through a collapse of the Atlantic thermohaline circulation. Approaches based on examining the lag-1 autocorrelation function or on detrended fluctuation analysis are applied together and compared. The effects of aggregating the data, detrending method, sliding window length and filtering bandwidth are examined. Robust indicators of critical slowing down are found prior to the abrupt warming event at the end of the Younger Dryas, but the indicators are less clear prior to the Bølling-Allerød warming, or glacial termination in Antarctica. Early warnings of thermohaline circulation collapse can be masked by inter-annual variability driven by atmospheric dynamics. However, rapidly decaying modes can be successfully filtered out by using a long bandwidth or by aggregating data. The two methods have complementary strengths and weaknesses and we recommend applying them together to improve the robustness of early warnings.
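The lag-1 autocorrelation indicator discussed above can be sketched on synthetic data: an AR(1) process whose memory parameter drifts toward 1 mimics critical slowing down, and the indicator rises accordingly (illustrative only; the detrending, sliding windows, and bandwidth filtering examined in the paper are omitted):

```python
import random
import statistics

def lag1_autocorr(xs):
    """Lag-1 autocorrelation of a series (no detrending, for brevity)."""
    m = statistics.fmean(xs)
    num = sum((a - m) * (b - m) for a, b in zip(xs, xs[1:]))
    den = sum((a - m) ** 2 for a in xs)
    return num / den

# AR(1) noise whose memory parameter phi creeps toward 1, mimicking
# critical slowing down as a tipping point is approached.
rng = random.Random(42)
series, x, n = [], 0.0, 4000
for t in range(n):
    phi = 0.2 + 0.7 * t / n
    x = phi * x + rng.gauss(0, 1)
    series.append(x)

window = 500
early = lag1_autocorr(series[:window])
late = lag1_autocorr(series[-window:])
print(early, late)  # the indicator rises as the transition nears
```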
Experimental Method for Determination of Self-Heating at the Point of Measurement
Sestan, D.; Zvizdic, D.; Grgec-Bermanec, L.
2017-09-01
This paper presents a new experimental method and algorithm for the determination of the self-heating of a platinum resistance thermometer (PRT) when the temperature instability of the medium of interest would prevent accurate self-heating determination using standard methods. In temperature measurements performed with a PRT, self-heating is one of the most common sources of error; it arises from the increase in sensor temperature caused by the dissipation of electrical heat when the measurement current is applied to the temperature-sensing element. This increase depends mainly on the applied current and the thermal resistances between the thermometer sensing element and the environment surrounding the thermometer. The method is used for the determination of the self-heating of a 100 Ω industrial PRT intended for measurement of the air temperature inside the saturation chamber of the primary dew/frost point generator at the Laboratory for Process Measurement (HMI/FSB-LPM). Self-heating is first determined under the conditions present during comparison calibration of the thermometer, using the calibration bath. The measurements were then repeated with the thermometer placed in an air stream inside the saturation chamber. The experiment covers the temperature range between -65°C and 10°C. Self-heating is determined for two different air velocities and two different vertical positions of the PRT in relation to the chamber bottom.
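A common way to quantify the self-heating error described above is to read the thermometer at two currents (for example i and √2·i) and extrapolate linearly in dissipated power to zero power; this is the standard-method baseline, and the numbers below are hypothetical:

```python
def zero_power_reading(t1, p1, t2, p2):
    """Extrapolate two temperature readings, taken at different dissipated
    powers, linearly back to zero power."""
    slope = (t2 - t1) / (p2 - p1)          # self-heating coefficient, K/W
    return t1 - slope * p1

# Hypothetical PRT readings at 1 mA and sqrt(2) mA through ~100 ohm
r = 100.0
i1, i2 = 1e-3, 2 ** 0.5 * 1e-3
p1, p2 = r * i1 ** 2, r * i2 ** 2          # 0.1 mW and 0.2 mW dissipated
t1, t2 = 20.0030, 20.0060                  # deg C: +3 mK per 0.1 mW here
print(zero_power_reading(t1, p1, t2, p2))  # ~20.0 deg C at zero current
```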
Application of distributed point source method (DPSM) to wave propagation in anisotropic media
Fooladi, Samaneh; Kundu, Tribikram
2017-04-01
Distributed Point Source Method (DPSM) was developed by Placko and Kundu [1] as a technique for modeling electromagnetic and elastic wave propagation problems. DPSM has been used for modeling ultrasonic, electrostatic and electromagnetic fields scattered by defects and anomalies in a structure. The modeling of such scattered fields helps to extract valuable information about the location and type of defects. Therefore, DPSM can be used as an effective tool for Non-Destructive Testing (NDT). Anisotropy adds to the complexity of the problem, both mathematically and computationally. Computation of the Green's function, which is used as the fundamental solution in DPSM, is considerably more challenging for anisotropic media, and it cannot be reduced to a closed-form solution as is done for isotropic materials. The purpose of this study is to investigate and implement DPSM for an anisotropic medium. While the mathematical formulation and the numerical algorithm will be considered for general anisotropic media, more emphasis will be placed on transversely isotropic materials in the numerical example presented in this paper. The unidirectional fiber-reinforced composites which are widely used in today's industry are good examples of transversely isotropic materials. Development of an effective and accurate NDT method based on these modeling results can be of paramount importance for in-service monitoring of damage in composite structures.
LSHSIM: A Locality Sensitive Hashing based method for multiple-point geostatistics
Moura, Pedro; Laber, Eduardo; Lopes, Hélio; Mesejo, Daniel; Pavanelli, Lucas; Jardim, João; Thiesen, Francisco; Pujol, Gabriel
2017-10-01
Reservoir modeling is a very important task that permits the representation of a geological region of interest and the generation of a considerable number of possible scenarios. Since its inception, many methodologies have been proposed and, in the last two decades, multiple-point geostatistics (MPS) has been the dominant one. This methodology is strongly based on the concept of a training image (TI) and the use of its characteristics, which are called patterns. In this paper, we propose a new MPS method that combines the application of a technique called Locality Sensitive Hashing (LSH), which makes it possible to accelerate the search for patterns similar to a target one, with a Run-Length Encoding (RLE) compression technique that speeds up the calculation of the Hamming similarity. Experiments with both categorical and continuous images show that LSHSIM is computationally efficient and produces good quality realizations. In particular, for categorical data, the results suggest that LSHSIM is faster than MS-CCSIM, one of the state-of-the-art methods.
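The LSH component can be sketched with classic bit sampling for Hamming distance: each hash table keys a pattern by a few sampled positions, so similar patterns collide with high probability and only the colliding candidates need an exact Hamming comparison. This generic sketch omits the paper's RLE speedup and uses random binary patterns rather than training-image data:

```python
import random

def make_hashes(dim, n_tables=8, bits_per_hash=6, seed=0):
    """Bit-sampling LSH for Hamming space: each table samples a few positions."""
    rng = random.Random(seed)
    return [tuple(rng.sample(range(dim), bits_per_hash)) for _ in range(n_tables)]

def index_patterns(patterns, hashes):
    tables = [{} for _ in hashes]
    for pid, p in enumerate(patterns):
        for t, positions in enumerate(hashes):
            tables[t].setdefault(tuple(p[i] for i in positions), []).append(pid)
    return tables

def query(target, patterns, hashes, tables):
    """Gather colliding candidates, then rank by exact Hamming distance."""
    cand = set()
    for t, positions in enumerate(hashes):
        cand.update(tables[t].get(tuple(target[i] for i in positions), []))
    ham = lambda a, b: sum(u != w for u, w in zip(a, b))
    return min(cand, key=lambda pid: ham(patterns[pid], target), default=None)

rng = random.Random(1)
dim = 64
patterns = [tuple(rng.randint(0, 1) for _ in range(dim)) for _ in range(200)]
hashes = make_hashes(dim)
tables = index_patterns(patterns, hashes)

target = list(patterns[17])       # corrupt two bits of a known pattern
for pos in (3, 40):
    target[pos] ^= 1
print(query(tuple(target), patterns, hashes, tables))  # recovers pattern 17
```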
Numerical simulation of electromagnetic acoustic transducers using distributed point source method.
Eskandarzade, M; Kundu, T; Liebeaux, N; Placko, D; Mobadersani, F
2010-05-01
In spite of many advances in analytical and numerical modeling techniques for solving different engineering problems, an efficient solution technique for wave propagation modeling of an electromagnetic acoustic transducer (EMAT) system is still missing. The distributed point source method (DPSM) is a semi-analytical technique developed since 2000 by Placko and Kundu (2007) [12] that is very powerful and straightforward for solving various engineering problems, including acoustic and electromagnetic modeling problems. In this study, DPSM has been employed to model a Lorentz-type EMAT with a meander line and a flat spiral type coil. The wave propagation problem has been solved, and the eddy currents and Lorentz forces have been calculated. The displacement field has been obtained as well. In modeling the Lorentz force, the effect of the dynamic magnetic field, which most current analyses ignore, has been considered. Results from this analysis have been compared with finite element method (FEM) based predictions. It should be noted that, with the current state of knowledge, this problem can otherwise be solved only by FEM. Copyright 2009 Elsevier B.V. All rights reserved.
Arahman, Nasrul; Maimun, Teuku; Mukramah, Syawaliah
2017-01-01
The composition of the polymer solution and the method of membrane preparation determine the solidification process of the membrane. The structure of membranes prepared via the non-solvent induced phase separation (NIPS) method is mostly determined by the phase separation process between polymer, solvent, and non-solvent. This paper discusses the phase separation process of a polymer solution containing polyethersulfone (PES), N-methylpyrrolidone (NMP), and the surfactant Tetronic 1307 (Tet). A cloud point experiment is conducted to determine the amount of non-solvent needed to induce phase separation. The amount of water required as a non-solvent decreases with the addition of surfactant Tet. The kinetics of phase separation for such a system is studied by light scattering measurement. With the addition of Tet, delayed phase separation is observed and the structure growth rate decreases. Moreover, the morphology of membranes fabricated from these polymer systems is analyzed by scanning electron microscopy (SEM). The images of both systems show the formation of finger-like macrovoids through the cross-section.
Directory of Open Access Journals (Sweden)
Ibrahim Karahan
2016-04-01
Full Text Available Let C be a nonempty closed convex subset of a real Hilbert space H. Let {T_n}:C→H be a sequence of nearly nonexpansive mappings such that F := ∩_{i=1}^{∞} F(T_i) ≠ Ø. Let V:C→H be a γ-Lipschitzian mapping and F:C→H be an L-Lipschitzian and η-strongly monotone operator. This paper deals with a modified iterative projection method for approximating a solution of the hierarchical fixed point problem. It is shown that, under certain approximate assumptions on the operators and parameters, the modified iterative sequence {x_n} converges strongly to x* ∈ F, which is also the unique solution of the following variational inequality: ⟨(μF − γV)x*, x − x*⟩ ≥ 0, ∀x ∈ F. As a special case, this projection method can be used to find the minimum-norm solution of the above variational inequality; namely, the unique solution x* of the quadratic minimization problem x* = argmin_{x∈F} ‖x‖². The results here improve and extend some recent corresponding results of other authors.
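The minimum-norm special case mentioned above can be illustrated with a classical Halpern-type iteration (a generic scheme, not the paper's modified projection method): with anchor u = 0, the iterates converge to the fixed point of a nonexpansive map nearest the origin. Here T is a metric projection onto a line, so its fixed point set is the line itself:

```python
import numpy as np

def proj_line(x, p, d):
    """Metric projection onto the line {p + t*d}: a nonexpansive map."""
    d = d / np.linalg.norm(d)
    return p + d * np.dot(x - p, d)

# Halpern-type iteration x_{n+1} = b_n*u + (1 - b_n)*T(x_n), b_n = 1/(n+1).
# With anchor u = 0 it converges to the fixed point of T nearest the origin,
# i.e. the minimum-norm point of the fixed point set (here, a line).
p, d = np.array([0.0, 2.0]), np.array([1.0, 1.0])
T = lambda x: proj_line(x, p, d)
u = np.zeros(2)
x = np.array([5.0, -3.0])
for n in range(1, 20000):
    b = 1.0 / (n + 1)
    x = b * u + (1 - b) * T(x)
print(x)  # approaches (-1, 1), the point of the line nearest the origin
```

Choosing a nonzero anchor u instead selects the fixed point nearest u, which is the viscosity-type behaviour exploited in hierarchical fixed point methods.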
Fernández-Peña, Rosario; Fuentes-Pumarola, Concepció; Malagón-Aguilera, M Carme; Bonmatí-Tomàs, Anna; Bosch-Farré, Cristina; Ballester-Ferrando, David
2016-09-01
Adapting university programmes to European Higher Education Area criteria has required substantial changes in curricula and teaching methodologies. Reflective learning (RL) has attracted growing interest and occupies an important place in the scientific literature on the theoretical and methodological aspects of university instruction. However, fewer studies have focused on evaluating the RL methodology from the point of view of nursing students. To assess nursing students' perceptions of the usefulness and challenges of the RL methodology. Mixed-method design, using a cross-sectional questionnaire and focus group discussion. The research was conducted via a self-reported reflective learning questionnaire complemented by focus group discussion. Students provided a positive overall evaluation of RL, highlighting the method's capacity to help them better understand themselves, engage in self-reflection about the learning process, optimize their strengths and discover additional training needs, along with searching for continuous improvement. Nonetheless, RL does not help them as much to plan their learning or to identify areas of weakness or needed improvement in knowledge, skills and attitudes. Among the difficulties or challenges, students reported low motivation and lack of familiarity with this type of learning, along with concerns about the privacy of their reflective journals and about the grading criteria. In general, students evaluated RL positively. The results suggest areas of needed improvement related to unfamiliarity with the methodology, the ethical aspects of keeping a reflective journal and the need for clear evaluation criteria. Copyright © 2016 Elsevier Ltd. All rights reserved.
New encapsulation method using low-melting-point alloy for sealing micro heat pipes
Energy Technology Data Exchange (ETDEWEB)
Li, Congming; Wang, Xiaodong; Zhou, Chuanpeng; Luo, Yi; Li, Zhixin; Li, Sidi [Dalian University of Technology, Dalian (China)
2017-06-15
This study proposed a method using Low-melting-point alloy (LMPA) to seal Micro heat pipes (MHPs), which were made of Si substrates and glass covers. Corresponding MHP structures with charging and sealing channels were designed. Three different auxiliary structures were investigated to study the sealability of MHPs with LMPA. One structure is rectangular and the others are triangular with corner angles of 30° and 45°, respectively. Each auxiliary channel for LMPA is 0.5 mm wide and 135 μm deep. LMPA was heated to a molten state, injected into the channels, and then cooled to room temperature. According to the material characteristics of LMPA, the alloy should swell over the following 12 hours to form a strong interaction force between the LMPA and the Si walls. Experimental results show that the flow speed of liquid LMPA in the channels plays an important role in sealing MHPs, and that the sealing performance of the triangular structures is always better than that of the rectangular structure. Therefore, triangular structures are more suitable for sealing MHPs than rectangular ones. LMPA sealing is a plane packaging method that can be applied in the thermal management of high-power IC devices and LEDs. Meanwhile, the method is easy to implement in commercialized MHP fabrication.
Fixed point theorems in locally convex spaces—the Schauder mapping method
Directory of Open Access Journals (Sweden)
S. Cobzaş
2006-03-01
Full Text Available In the appendix to the book by F. F. Bonsall, Lectures on Some Fixed Point Theorems of Functional Analysis (Tata Institute, Bombay, 1962), a proof by Singbal of the Schauder-Tychonoff fixed point theorem, based on a locally convex variant of the Schauder mapping method, is included. The aim of this note is to show that this method can be adapted to yield a proof of the Kakutani fixed point theorem in the locally convex case. For the sake of completeness we also include the proof of the Schauder-Tychonoff theorem based on this method. As applications, one proves a theorem of von Neumann and a minimax result in game theory.
A Regularized Algorithm for the Proximal Split Feasibility Problem
Directory of Open Access Journals (Sweden)
Zhangsong Yao
2014-01-01
Full Text Available We study the proximal split feasibility problem and present a regularized method for solving it. A strong convergence theorem is established.
Xu, Ying; Zhou, Hongde
2017-09-01
Soluble microbial products, consisting of protein, carbohydrate and humics, are generally considered the main membrane foulants during the operation of membrane bioreactors. Nitrate and nitrite have been shown to affect the determination of carbohydrate when the anthrone-sulfuric acid photometric method is used. In this study, three chemical analytical methods based on photometric assay, including the standard curve method, the conventional standard addition method and the H-point standard addition method, were assessed for the quantification of carbohydrate in order to reduce this interference. The three methods were applied to both artificial and real wastewater samples. The results indicated a significant amount of matrix interference, which could be eliminated through the use of H-point standard addition. This study proposed the H-point standard addition method as a more accurate and convenient option for carbohydrate determination.
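The standard addition idea underlying the abstract above can be sketched as a simple least-squares extrapolation. This helper is illustrative only, not the authors' implementation (the H-point variant additionally uses measurements at two wavelengths to cancel the interferent):

```python
def standard_addition_conc(added, signal):
    """Estimate analyte concentration by conventional standard addition.

    added:  known spiked concentrations (same units as the result)
    signal: instrument response for each spiked sample
    Fits signal = a + b * added and returns a / b, the magnitude of the
    x-axis intercept, i.e. the analyte concentration in the sample.
    """
    n = len(added)
    mx = sum(added) / n
    my = sum(signal) / n
    b = sum((x - mx) * (y - my) for x, y in zip(added, signal)) / \
        sum((x - mx) ** 2 for x in added)
    a = my - b * mx
    return a / b

# Hypothetical sample containing 2.0 units of carbohydrate, spiked with 0..3 units:
conc = standard_addition_conc([0, 1, 2, 3], [1.0, 1.5, 2.0, 2.5])  # conc = 2.0
```

Extrapolating the calibration line back to zero signal cancels proportional matrix effects, which is why the abstract finds standard addition superior to an external standard curve in wastewater matrices.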
Proximity Effects in Superconductor-Graphene Junctions
Cuellar, Fabian A.; Perconte, David; Martin, Marie-Blandine; Dlubak, Bruno; Piquemail, Maelis; Bernard, Rozenn; Trastoy, Juan; Moreau-Luchaire, Constance; Seneor, Pierre; Villegas, Javier E.; Kidambi, Piran; Hofmann, Stephan; Robertson, John
2015-03-01
Superconducting proximity effects are of particular interest in graphene: because of its band structure, an unconventional (specular) Andreev reflection is expected. In this context, high-Tc superconductor-graphene junctions are especially attractive. In these, the size of the superconducting energy gap may exceed the graphene doping inhomogeneities around the Dirac point, which should favor the observation of the specular Andreev reflection. Yet, the fabrication of high-Tc superconductor-graphene junctions is challenging: the usual growth and lithography processes in the two materials are incompatible. We report here on a fabrication method that allows us to produce planar cuprate superconductor-graphene junctions, which we characterize via conductance spectroscopy. We analyze the features in the conductance spectra as a function of graphene doping, and discuss them in the framework of the Andreev reflection. Work supported by Labex Nanosaclay.
Brunner, Alexander; Thormann, Sebastian; Babst, Reto
2012-08-01
This study evaluated our results after minimally invasive percutaneous plating of proximal humeral shaft fractures with the Proximal Humerus Internal Locking System (PHILOS, Synthes, Switzerland). Between 2005 and 2008, 15 patients with unilateral displaced proximal humeral shaft fractures were treated and followed up over a median period of 27 months (range, 12-38 months). The final follow-up included anteroposterior and lateral x-rays, range of shoulder motion, pain by visual analog scale (VAS), the Constant-Murley shoulder score, the Disabilities of Arm, Shoulder and Elbow (DASH) score, and the Short Form 36 (SF36) assessment. No intraoperative or postoperative complications occurred. No secondary fracture displacement or radial neuropathy was observed postoperatively. One patient had open reduction and internal fixation for pseudoarthrosis 16 months after the initial surgery. At the final follow-up, the median range of motion of the operated shoulder was flexion, 145°; extension, 45°; internal rotation, 40°; external rotation, 70°; and abduction, 135°. Median results on outcome assessments were VAS pain score, 0 points; Constant-Murley score, 74 points, representing 87.5% of the median Constant-Murley score of the unaffected shoulder; DASH score, 34 points, and the SF36, 83 points. Minimally invasive percutaneous plating with the PHILOS offers a valid option in the treatment of proximal humeral shaft fractures with comparable rates of nonunion and lower rates of radial neuropathy compared with open procedures. Furthermore, the results indicate that this method is associated with lower rates of wound infection and a shorter stay in the hospital for the patient. Copyright © 2012 Journal of Shoulder and Elbow Surgery Board of Trustees. Published by Mosby, Inc. All rights reserved.
Development of a Cloud-Point Extraction Method for Cobalt Determination in Natural Water Samples
Directory of Open Access Journals (Sweden)
Mohammad Reza Jamali
2013-01-01
Full Text Available A new, simple, and versatile cloud-point extraction (CPE) methodology has been developed for the separation and preconcentration of cobalt. The cobalt ions in the initial aqueous solution were complexed with 4-benzylpiperidinedithiocarbamate, and Triton X-114 was added as surfactant. Dilution of the surfactant-rich phase with acidified ethanol was performed after phase separation, and the cobalt content was measured by flame atomic absorption spectrometry. The main factors affecting the CPE procedure, such as pH, concentration of ligand, amount of Triton X-114, equilibrium temperature, and incubation time, were investigated and optimized. Under the optimal conditions, the limit of detection (LOD) for cobalt was 0.5 μg L-1, with a sensitivity enhancement factor (EF) of 67. The calibration curve was linear in the range of 2–150 μg L-1, and the relative standard deviation was 3.2% (c=100 μg L-1; n=10). The proposed method was applied to the determination of trace cobalt in real water samples with satisfactory analytical results.
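The figures of merit quoted above follow from standard definitions; the sketch below shows the generic calculations (the numeric inputs are hypothetical, not the paper's raw data):

```python
def limit_of_detection(blank_sd, slope):
    # Common IUPAC-style estimate: LOD = 3 * s_blank / calibration slope.
    return 3.0 * blank_sd / slope

def enhancement_factor(slope_preconc, slope_direct):
    # Sensitivity gain of the preconcentration step: ratio of the
    # calibration slopes with and without CPE.
    return slope_preconc / slope_direct

# Illustrative numbers only:
lod = limit_of_detection(blank_sd=0.005, slope=0.03)       # 0.5 (µg/L)
ef = enhancement_factor(slope_preconc=0.03, slope_direct=0.00045)  # about 67
```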
CaFE: a tool for binding affinity prediction using end-point free energy methods.
Liu, Hui; Hou, Tingjun
2016-07-15
Accurate prediction of binding free energy is of particular importance to computational biology and structure-based drug design. Among the methods for binding affinity prediction, the end-point approaches, such as MM/PBSA and LIE, have been widely used because they achieve a good balance between prediction accuracy and computational cost. Here we present an easy-to-use pipeline tool named Calculation of Free Energy (CaFE) to conduct MM/PBSA and LIE calculations. Powered by the VMD and NAMD programs, CaFE is able to handle numerous static coordinate and molecular dynamics trajectory file formats generated by different molecular simulation packages and supports various force field parameters. CaFE source code and documentation are freely available under the GNU General Public License via GitHub at https://github.com/huiliucode/cafe_plugin. It is a VMD plugin written in Tcl and its usage is platform-independent. Contact: tingjunhou@zju.edu.cn. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
An end-point method based on graphene oxide for RNase H analysis and inhibitors screening.
Zhao, Chuan; Fan, Jialong; Peng, Lan; Zhao, Lijian; Tong, Chunyi; Wang, Wei; Liu, Bin
2017-04-15
As a highly conserved damage repair protein, RNase H can hydrolyze DNA-RNA heteroduplexes endonucleolytically and cleave RNA-DNA junctions as well. In this study, we have developed an accurate and sensitive RNase H assay based on fluorophore-labeled chimeric substrate hydrolysis and the differential affinity of graphene oxide for RNA strands of different lengths. This end-point measurement method can detect RNase H in a range of 0.01 to 1 units/mL with a detection limit of 5.0×10⁻³ units/mL under optimal conditions. We demonstrate the utility of the assay by screening antibiotics, resulting in the identification of gentamycin, streptomycin and kanamycin as inhibitors with IC50 values of 60±5 µM, 70±8 µM and 300±20 µM, respectively. Furthermore, the assay was reliably used to detect RNase H in complicated biosamples, and we found that RNase H activity in tumor cells was inhibited by gentamycin and streptomycin sulfate in a concentration-dependent manner. The average level of RNase H in sera of the HBV infection group was similar to that of the control group. In summary, the assay provides an alternative tool for biochemical analysis of this enzyme and indicates the feasibility of high-throughput screening of RNase H inhibitors in vitro and in vivo. Copyright © 2016 Elsevier B.V. All rights reserved.
An efficient method for removing point sources from full-sky radio interferometric maps
Berger, Philippe; Oppermann, Niels; Pen, Ue-Li; Shaw, J. Richard
2017-12-01
A new generation of wide-field radio interferometers designed for 21-cm surveys is being built as drift scan instruments allowing them to observe large fractions of the sky. With large numbers of antennas and frequency channels, the enormous instantaneous data rates of these telescopes require novel, efficient, data management and analysis techniques. The m-mode formalism exploits the periodicity of such data with the sidereal day, combined with the assumption of statistical isotropy of the sky, to achieve large computational savings and render optimal analysis methods computationally tractable. We present an extension to that work that allows us to adopt a more realistic sky model and treat objects such as bright point sources. We develop a linear procedure for deconvolving maps, using a Wiener filter reconstruction technique, which simultaneously allows filtering of these unwanted components. We construct an algorithm, based on the Sherman-Morrison-Woodbury formula, to efficiently invert the data covariance matrix, as required for any optimal signal-to-noise ratio weighting. The performance of our algorithm is demonstrated using simulations of a cylindrical transit telescope.
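The Sherman-Morrison-Woodbury step described above can be written in a few lines. This is the generic identity, not the telescope pipeline's code; it assumes the base covariance A is cheap to invert (e.g. diagonal noise) while the correction U C V is low rank, so only a small k×k system is ever solved:

```python
import numpy as np

def woodbury_inverse(A_inv, U, C, V):
    """(A + U C V)^{-1} = A^{-1} - A^{-1} U (C^{-1} + V A^{-1} U)^{-1} V A^{-1}.

    A_inv: (n, n) inverse of the easy part (often diagonal).
    U: (n, k), C: (k, k), V: (k, n) with k << n, so the only dense
    inversion is k x k instead of n x n.
    """
    small = np.linalg.inv(np.linalg.inv(C) + V @ A_inv @ U)
    return A_inv - A_inv @ U @ small @ V @ A_inv
```

For diagonal A the cost is dominated by the matrix products and the k×k solve, rather than the O(n³) of inverting the full covariance directly, which is what makes optimal signal-to-noise weighting tractable at survey data volumes.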
A Numerical Investigation of CFRP-Steel Interfacial Failure with Material Point Method
Shen, Luming; Faleh, Haydar; Al-Mahaidi, Riadh
2010-05-01
The success of retrofitting steel structures with Carbon Fibre Reinforced Polymers (CFRP) significantly depends on the performance and integrity of the CFRP-steel joint and the effectiveness of the adhesive used. Many previous numerical studies focused on the design and structural performance of the CFRP-steel system and neglected the mechanical response of the adhesive layer, which results in a lack of understanding of how the adhesive layer between the CFRP and steel performs during the loading and failure stages. Based on recent observations of the failure of the CFRP-steel bond in double lap shear tests [1], a numerical approach is proposed in this study to simulate the delamination of the CFRP sheet from the steel plate using the Material Point Method (MPM). In the proposed approach, an elastoplasticity model with a linear hardening and softening law is used to model the epoxy layer. The MPM [2], which does not employ fixed mesh-connectivity, is used as a robust spatial discretization method to accommodate the multi-scale discontinuities involved in the CFRP-steel bond failure process. To demonstrate the potential of the proposed approach, a parametric study is conducted to investigate the effects of bond length and loading rate on the capacity and failure modes of the CFRP-steel system. The evolution of the CFRP-steel bond failure and the distribution of stress and strain along the bond length direction will be presented. The simulation results not only match the available experimental data well but also provide a better understanding of the physics behind the CFRP sheet delamination process.
Simulation of size segregation in granular flow with material point method
Directory of Open Access Journals (Sweden)
Fei Minglong
2017-01-01
Full Text Available Segregation is common in granular flows consisting of mixtures of particles differing in size or density. In gravity-driven flows, both gradients in total pressure (induced by gravity) and gradients in velocity fluctuation fields (often associated with shear rate gradients) work together to govern the evolution of segregation. Since the local shear rate and velocity fluctuations are dependent on the local concentration of the components, understanding the co-evolution of segregation and flow is critical for understanding and predicting flows where there can be a variety of particle sizes and densities, such as in nature and industry. Kinetic theory has proven to be a robust framework for predicting this simultaneous evolution but has a limit in its applicability to dense systems where collisions are highly correlated. In this paper, we introduce a model that captures the co-evolution of these dynamics for high-density gravity-driven granular mixtures. For the segregation dynamics we use a recently developed mixture theory (Fan & Hill 2011, New J. Phys.; Hill & Tan 2014, J. Fluid Mech.), which captures the combined effects of gravity and fluctuation fields on segregation evolution in high-density granular flows. For the mixture flow dynamics, we use a recently proposed viscous-elastic-plastic constitutive model, which can describe the multi-state behaviors of granular materials, i.e. the granular solid, granular liquid and granular gas mechanical states (Fei et al. 2016, Powder Technol.). The platform we use for implementing this model is a modified Material Point Method (MPM), and we use discrete element method simulations of gravity-driven flow in an inclined channel to demonstrate that this new MPM model can predict both the final segregation distribution and the flow velocity profile well. We then discuss ongoing work where we are using this platform to test the effectiveness of particular segregation models under different boundary conditions.
Energy Technology Data Exchange (ETDEWEB)
Liu, Wenyang [Department of Bioengineering, University of California, Los Angeles, Los Angeles, California 90095 (United States); Cheung, Yam [Department of Radiation Oncology, University of Texas Southwestern, Dallas, Texas 75390 (United States); Sawant, Amit [Department of Radiation Oncology, University of Texas Southwestern, Dallas, Texas, 75390 and Department of Radiation Oncology, University of Maryland, College Park, Maryland 20742 (United States); Ruan, Dan, E-mail: druan@mednet.ucla.edu [Department of Bioengineering, University of California, Los Angeles, Los Angeles, California 90095 and Department of Radiation Oncology, University of California, Los Angeles, Los Angeles, California 90095 (United States)
2016-05-15
Purpose: To develop a robust and real-time surface reconstruction method on point clouds captured from a 3D surface photogrammetry system. Methods: The authors have developed a robust and fast surface reconstruction method on point clouds acquired by the photogrammetry system, without explicitly solving the partial differential equation required by a typical variational approach. Taking advantage of the overcomplete nature of the acquired point clouds, their method solves and propagates a sparse linear relationship from the point cloud manifold to the surface manifold, assuming both manifolds share similar local geometry. With relatively consistent point cloud acquisitions, the authors propose a sparse regression (SR) model to directly approximate the target point cloud as a sparse linear combination from the training set, assuming that the point correspondences built by the iterative closest point (ICP) are reasonably accurate and have residual errors following a Gaussian distribution. To accommodate changing noise levels and/or presence of inconsistent occlusions during the acquisition, the authors further propose a modified sparse regression (MSR) model to model the potentially large and sparse error built by ICP with a Laplacian prior. The authors evaluated the proposed method on both clinical point clouds acquired under consistent acquisition conditions and on point clouds with inconsistent occlusions. The authors quantitatively evaluated the reconstruction performance with respect to root-mean-squared-error, by comparing its reconstruction results against that from the variational method. Results: On clinical point clouds, both the SR and MSR models have achieved sub-millimeter reconstruction accuracy and reduced the reconstruction time by two orders of magnitude to a subsecond reconstruction time. On point clouds with inconsistent occlusions, the MSR model has demonstrated its advantage in achieving consistent and robust performance despite the introduced occlusions.
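The sparse linear combination at the heart of the SR model can be illustrated with a generic iterative soft-thresholding (ISTA) solver for the lasso problem. This is a sketch of the optimization idea only, not the authors' implementation:

```python
import numpy as np

def ista_lasso(D, y, lam=0.1, iters=500):
    """Minimize 0.5 * ||D w - y||^2 + lam * ||w||_1 by soft-thresholding.

    D: (m, n) dictionary whose columns are flattened training point clouds;
    y: (m,) flattened target cloud. Returns sparse weights w, so the target
    is approximated as a sparse combination of training samples.
    """
    step = 1.0 / np.linalg.norm(D, 2) ** 2   # 1 / Lipschitz constant of the gradient
    w = np.zeros(D.shape[1])
    for _ in range(iters):
        g = w - step * (D.T @ (D @ w - y))                        # gradient step
        w = np.sign(g) * np.maximum(np.abs(g) - lam * step, 0.0)  # shrinkage step
    return w
```

The L1 penalty is what keeps the combination sparse; the MSR variant described above would additionally model large sparse ICP errors with a Laplacian (L1-like) noise term.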
Comparison of the Selected State-Of-The-Art 3D Indoor Scanning and Point Cloud Generation Methods
Directory of Open Access Journals (Sweden)
Ville V. Lehtola
2017-08-01
Full Text Available Accurate three-dimensional (3D) data from indoor spaces are of high importance for various applications in construction, indoor navigation and real estate management. Mobile scanning techniques offer an efficient way to produce point clouds, but with a lower accuracy than traditional terrestrial laser scanning (TLS). In this paper, we first tackle the problem of how the quality of a point cloud should be rigorously evaluated. Previous evaluations typically operate on some point cloud subset, using a manually-given length scale, which would perhaps describe the ranging precision or the properties of the environment. Instead, the metrics that we propose perform the quality evaluation on the full point cloud and over all of the length scales, revealing the method precision along with some possible problems related to the point clouds, such as outliers, over-completeness and misregistration. The proposed methods are used to evaluate the end product point clouds of some of the latest methods. In detail, point clouds are obtained from five commercial indoor mapping systems, Matterport, NavVis, Zebedee, Stencil and Leica Pegasus: Backpack, and three research prototypes, Aalto VILMA, FGI Slammer and the Würzburg backpack. These are compared against survey-grade TLS point clouds captured from three distinct test sites that each have different properties. Based on the presented experimental findings, we discuss the properties of the proposed metrics and the strengths and weaknesses of the above mapping systems and then suggest directions for future research.
Application of a practical method for the isocenter point in vivo dosimetry by a transit signal
Energy Technology Data Exchange (ETDEWEB)
Piermattei, Angelo [UO di Fisica Sanitaria, Centro di Ricerca e Formazione ad Alta Tecnologia nelle Scienze Biomediche dell' Universita Cattolica Sacro Cuore, Campobasso (Italy); Fidanzio, Andrea [Istituto di Fisica, Universita Cattolica del Sacro Cuore, Rome (Italy); Azario, Luigi [Istituto di Fisica, Universita Cattolica del Sacro Cuore, Rome (Italy)] (and others)
2007-08-21
This work reports the results of the application of a practical method to determine the in vivo dose at the isocenter point, D{sub iso}, of brain, thorax and pelvic treatments using a transit signal S{sub t}. The use of a stable detector for the measurement of the signal S{sub t} (obtained by the x-ray beam transmitted through the patient) reduces many of the disadvantages associated with the use of solid-state detectors positioned on the patient, such as the need for periodic recalibration and time-consuming positioning. The method makes use of a set of correlation functions, obtained by the ratio between S{sub t} and the mid-plane dose value, D{sub m}, in standard water-equivalent phantoms, both determined along the beam central axis. The in vivo measurement of D{sub iso} required the determination of the water-equivalent thickness of the patient along the beam central axis by the treatment planning system, which uses the electron densities supplied by calibrated Hounsfield numbers of the computed tomography scanner. In this way it is possible to compare D{sub iso} with the stated doses, D{sub iso,TPS}, generally used by the treatment planning system for the determination of the monitor units. The method was applied in five Italian centers that used beams of 6 MV, 10 MV, 15 MV x-rays and {sup 60}Co {gamma}-rays. In particular, in four centers small ion-chambers were positioned below the patient and used for the S{sub t} measurement. In only one center, the S{sub t} signals were obtained directly by the central pixels of an EPID (electronic portal imaging device) equipped with commercial software that enabled its use as a stable detector. In the four centers where an ion-chamber was positioned on the EPID, 60 pelvic treatments were followed for two fields, an anterior-posterior or a posterior-anterior irradiation and a lateral-lateral irradiation. Moreover, ten brain tumors were checked for a lateral-lateral irradiation, and five lung tumors carried out with
A comparison of point counts with a new acoustic sampling method ...
African Journals Online (AJOL)
In our study, we compared results of traditional point counts with simultaneous acoustic samples obtained by automated soundscape recording units in the montane forest of Mount Cameroon. We showed that the estimates of species richness, abundance and community composition based on point counts and post-hoc ...
Energy Technology Data Exchange (ETDEWEB)
York, A.R. II [Sandia National Labs., Albuquerque, NM (United States). Engineering and Process Dept.
1997-07-01
The material point method (MPM) is an evolution of the particle in cell method where Lagrangian particles or material points are used to discretize the volume of a material. The particles carry properties such as mass, velocity, stress, and strain and move through a Eulerian or spatial mesh. The momentum equation is solved on the Eulerian mesh. Modifications to the material point method are developed that allow the simulation of thin membranes, compressible fluids, and their dynamic interactions. A single layer of material points through the thickness is used to represent a membrane. The constitutive equation for the membrane is applied in the local coordinate system of each material point. Validation problems are presented and numerical convergence is demonstrated. Fluid simulation is achieved by implementing a constitutive equation for a compressible, viscous, Newtonian fluid and by solution of the energy equation. The fluid formulation is validated by simulating a traveling shock wave in a compressible fluid. Interactions of the fluid and membrane are handled naturally with the method. The fluid and membrane communicate through the Eulerian grid on which forces are calculated due to the fluid and membrane stress states. Validation problems include simulating a projectile impacting an inflated airbag. In some impact simulations with the MPM, bodies may tend to stick together when separating. Several algorithms are proposed and tested that allow bodies to separate from each other after impact. In addition, several methods are investigated to determine the local coordinate system of a membrane material point without relying upon connectivity data.
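The particle-to-grid transfer that underlies the MPM can be sketched in one dimension with linear shape functions. This illustrative helper is not from the report itself; it only shows how particle mass and momentum are scattered to the Eulerian grid nodes on which the momentum equation is then solved:

```python
def particles_to_grid(xs, ms, vs, n_cells, dx):
    """Scatter particle mass and momentum to grid nodes (1D, linear weights).

    xs, ms, vs: particle positions, masses, velocities.
    Returns nodal mass, nodal momentum and nodal velocity arrays of
    length n_cells + 1 (one entry per grid node).
    """
    mass = [0.0] * (n_cells + 1)
    mom = [0.0] * (n_cells + 1)
    for x, m, v in zip(xs, ms, vs):
        i = int(x // dx)              # left node of the particle's cell
        w = (x - i * dx) / dx         # linear weight toward the right node
        mass[i] += m * (1.0 - w)
        mom[i] += m * v * (1.0 - w)
        mass[i + 1] += m * w
        mom[i + 1] += m * v * w
    vel = [p / q if q > 0 else 0.0 for p, q in zip(mom, mass)]
    return mass, mom, vel
```

Because the weights at the two nodes sum to one for every particle, total mass and momentum are conserved by the transfer, which is the property that lets the membrane and fluid phases described above communicate consistently through the shared grid.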
Ammari, Michelle Mikhael; Soviero, Vera Mendes; da Silva Fidalgo, Tatiana Kelly; Lenzi, Michele; Ferreira, Daniele Masterson T P; Mattos, Cláudia Trindade; de Souza, Ivete Pomarico Ribeiro; Maia, Lucianne Cople
2014-10-01
The aim of this study was to perform a systematic review and meta-analysis on the effectiveness of sealing non-cavitated proximal caries lesions in primary and permanent teeth. Only controlled clinical trials and randomized controlled clinical trials that evaluated the effectiveness of sealing on non-cavitated proximal caries with a minimum follow-up of 12 months were included in the study. The primary outcome was arrestment/progression of proximal caries evaluated by bitewing radiographs. A risk of bias evaluation based on the Cochrane Collaboration common scheme for bias was carried out for each study. The meta-analysis was performed on the studies considered low risk of bias and with pair-wise visual reading results through RevMan software. A comprehensive search was performed in the systematic electronic databases Pubmed, Cochrane Library, Scopus, IBI Web of Science, Lilacs and SIGLE, and on the website ClinicalTrials.gov, through June 2013. From 967 studies identified, 10 articles and 3 studies with partial results were assessed for eligibility. However, three articles were excluded and our final sample included 10 studies. According to the risk of bias evaluation, six studies were considered "high" risk of bias, and four "low" risk of bias. The forest plot of the meta-analysis showed low heterogeneity (I²=29%) and a favourable outcome for the infiltrant. The chance of caries progression when this technique was used was significantly lower (p=0.002) compared with placebo. Our results suggest that the technique of sealing non-cavitated proximal caries seems to be effective in controlling proximal caries in the short and medium term. Further long-term randomized clinical trials are still necessary to increase this evidence. Contemporary dentistry is focused on minimally invasive approaches that prevent the destruction of sound dental tissues next to carious lesions. This paper searches for evidence of the efficacy of sealing/infiltrating non
Directory of Open Access Journals (Sweden)
L. Gézero
2017-05-01
Full Text Available Digital terrain models (DTM) assume an essential role in all types of road maintenance, water supply and sanitation projects. The demand for such information is more significant in developing countries, where the lack of infrastructure is greater. In recent years, the use of Mobile LiDAR Systems (MLS) has proved to be a very efficient technique for the acquisition of precise and dense point clouds. These point clouds can be a solution for obtaining the data for the production of DTM in remote areas, due mainly to the safety, precision, speed of acquisition and the detail of the information gathered. However, filtering point clouds with algorithms that separate "terrain points" from "non-terrain points" quickly and consistently remains a challenge that has caught the interest of researchers. This work presents a method to create the DTM from point clouds collected by MLS. The method is based on two steps. The first step of the process reduces the point cloud to a set of points that represent the terrain's shape, with the distance between points inversely proportional to the terrain variation. The second step is based on the Delaunay triangulation of the points resulting from the first step. The achieved results encourage a wider use of this technology as a solution for large-scale DTM production in remote areas.
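The first step above, spacing the kept points in inverse proportion to terrain variation, can be illustrated on a 1D road profile. This toy filter (the function name and parameters are ours, not the paper's) keeps points more densely where the local slope is higher; a Delaunay triangulation of the kept points would then yield the DTM surface:

```python
def thin_profile(points, base_step, k):
    """Thin a sorted (distance, height) profile.

    base_step: spacing between kept points on flat ground.
    k: how strongly the local slope shrinks the spacing, so rougher
       terrain is sampled more densely.
    """
    kept = [points[0]]
    for d, z in points[1:]:
        d0, z0 = kept[-1]
        slope = abs(z - z0) / max(d - d0, 1e-9)
        step = base_step / (1.0 + k * slope)   # smaller step where terrain varies
        if d - d0 >= step:
            kept.append((d, z))
    return kept
```

On a flat profile this keeps roughly one point per base_step, while on a steep profile nearly every point survives, which matches the adaptive-density behaviour described in the abstract.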
Johnston, James D; Magnusson, Brianna M; Eggett, Dennis; Collingwood, Scott C; Bernhardt, Scott A
2015-01-01
Residential temperature and humidity are associated with multiple health effects. Studies commonly use single-point measures to estimate indoor temperature and humidity exposures, but there is little evidence to support this sampling strategy. This study evaluated the relationship between single-point and continuous monitoring of air temperature, apparent temperature, relative humidity, and absolute humidity over four exposure intervals (5-min, 30-min, 24-hr, and 12-days) in 9 northern Utah homes, from March-June 2012. Three homes were sampled twice, for a total of 12 observation periods. Continuous data-logged sampling was conducted in homes for 2-3 wks, and simultaneous single-point measures (n = 114) were collected using handheld thermo-hygrometers. Time-centered single-point measures were moderately correlated with short-term (30-min) data logger mean air temperature (r = 0.76, β = 0.74), apparent temperature (r = 0.79, β = 0.79), relative humidity (r = 0.70, β = 0.63), and absolute humidity (r = 0.80, β = 0.80). Data logger 12-day means were also moderately correlated with single-point air temperature (r = 0.64, β = 0.43) and apparent temperature (r = 0.64, β = 0.44), but were weakly correlated with single-point relative humidity (r = 0.53, β = 0.35) and absolute humidity (r = 0.52, β = 0.39). Of the single-point RH measures, 59 (51.8%) deviated more than ±5%, 21 (18.4%) deviated more than ±10%, and 6 (5.3%) deviated more than ±15% from data logger 12-day means. Where continuous indoor monitoring is not feasible, single-point sampling strategies should include multiple measures collected at prescribed time points based on local conditions.
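The agreement statistics reported above (Pearson correlation r and regression slope β of single-point measures on data-logger means) can be computed with a small helper; this is a generic sketch, not the study's analysis code:

```python
def correlation_and_slope(x, y):
    """Return Pearson r and the least-squares slope of y regressed on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5, sxy / sxx
```

A slope well below 1 with moderate r, as seen for the 12-day humidity comparisons, indicates that single-point measures systematically understate variation in the longer-term average, which is why the study recommends multiple timed measurements.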
A Registration Method Based on Contour Point Cloud for 3D Whole-Body PET and CT Images.
Song, Zhiying; Jiang, Huiyan; Yang, Qiyao; Wang, Zhiguo; Zhang, Guoxu
2017-01-01
The PET and CT fusion image, combining anatomical and functional information, has important clinical meaning. An effective registration of PET and CT images is the basis of image fusion. This paper presents a multithreaded registration method based on contour point clouds for 3D whole-body PET and CT images. Firstly, a geometric feature-based segmentation (GFS) method and a dynamic threshold denoising (DTD) method are proposed to preprocess CT and PET images, respectively. Next, a new automated trunk slice extraction method is presented for extracting feature point clouds. Finally, the multithreaded Iterative Closest Point algorithm is adopted to drive an affine transform. We compare our method with a multiresolution registration method based on Mattes Mutual Information on 13 pairs (246~286 slices per pair) of 3D whole-body PET and CT data. Experimental results demonstrate the registration effectiveness of our method, with a lower negative normalized correlation (NC = -0.933) on feature images and a smaller Euclidean distance error (ED = 2.826) on landmark points, outperforming the source data (NC = -0.496, ED = 25.847) and the compared method (NC = -0.614, ED = 16.085). Moreover, our method is about ten times faster than the compared one.
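The core of the registration stage, the Iterative Closest Point loop, can be sketched as follows. This is a single-threaded toy version on synthetic data, fitting a rigid (rotation-plus-translation) transform rather than the full affine transform of the paper.

```python
import numpy as np
from scipy.spatial import cKDTree

def best_fit_transform(A, B):
    """Least-squares rigid transform (Kabsch/SVD) mapping points A onto B."""
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    H = (A - ca).T @ (B - cb)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                 # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cb - R @ ca
    return R, t

def icp(source, target, iters=50):
    """Basic ICP: match nearest neighbours, fit a rigid transform, repeat."""
    tree = cKDTree(target)
    src = source.copy()
    for _ in range(iters):
        _, idx = tree.query(src)
        R, t = best_fit_transform(src, target[idx])
        src = src @ R.T + t
    return src

rng = np.random.default_rng(1)
target = rng.normal(size=(200, 3))
theta = 0.05                                 # small known rotation about z
Rz = np.array([[np.cos(theta), -np.sin(theta), 0],
               [np.sin(theta),  np.cos(theta), 0],
               [0, 0, 1]])
source = target @ Rz.T + np.array([0.1, -0.05, 0.05])
aligned = icp(source, target)
print(np.abs(aligned - target).mean())
```

With a small initial misalignment the nearest-neighbour correspondences are mostly correct from the start, so the loop converges to the exact transform within a few iterations.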
A Registration Method Based on Contour Point Cloud for 3D Whole-Body PET and CT Images
Directory of Open Access Journals (Sweden)
Zhiying Song
2017-01-01
Comparative study of building footprint estimation methods from LiDAR point clouds
Rozas, E.; Rivera, F. F.; Cabaleiro, J. C.; Pena, T. F.; Vilariño, D. L.
2017-10-01
Building area calculation from LiDAR points is still a difficult task with no clear solution. The varied characteristics of buildings, such as shape and size, make the process too complex to automate. However, several algorithms and techniques have been used to obtain an approximated hull. 3D building reconstruction and urban planning are examples of important applications that benefit from accurate building footprint estimation. In this paper, we carry out a study of the accuracy of building footprint estimation from LiDAR points. The analysis focuses on the processing steps following object recognition and classification, assuming that labeling of building points has been performed beforehand. We then perform an in-depth analysis of the influence of point density on the accuracy of the building area estimation. In addition, a set of buildings of different sizes and shapes was manually classified so that it can be used as a benchmark.
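As a baseline for the footprint estimation discussed above, the area enclosed by classified building points can be approximated with a convex hull; this is a minimal sketch with synthetic points, and note that a convex hull overestimates concave footprints (alpha-shapes or concave hulls are common refinements).

```python
import numpy as np
from scipy.spatial import ConvexHull

# toy example: LiDAR returns classified as "building", a 20 m x 10 m block
rng = np.random.default_rng(2)
pts = rng.uniform([0, 0], [20, 10], size=(500, 2))

hull = ConvexHull(pts)          # for 2D input, hull.volume is the enclosed AREA
print(hull.volume)              # approaches 200 m^2 as point density grows
```

The gap between the hull area and the true 200 m^2 shrinks with density, which mirrors the paper's point that footprint accuracy depends strongly on point density.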
Calculation Method for Equilibrium Points in Dynamical Systems Based on Adaptive Synchronization
Directory of Open Access Journals (Sweden)
Manuel Prian Rodríguez
2017-12-01
Full Text Available In this work, a control system is proposed as an equivalent numerical procedure whose aim is to obtain the natural equilibrium points of a dynamical system. These equilibrium points may later be employed as setpoint signals for different control techniques. The proposed procedure is based on adaptive synchronization between an oscillator and a reference model driven by the oscillator's state variables. A stability analysis is carried out and a simplified algorithm is proposed. Finally, satisfactory simulation results are shown.
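For comparison, the classical numerical route to the same goal is direct root finding on the vector field. The sketch below locates equilibria of the Lorenz oscillator (a stand-in system, not necessarily the one used in the paper) with SciPy, rather than by the adaptive-synchronization procedure proposed above.

```python
import numpy as np
from scipy.optimize import fsolve

def lorenz(v, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Right-hand side of the Lorenz system; equilibria satisfy f(v) = 0."""
    x, y, z = v
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

# one initial guess near each of the three known equilibria
guesses = [(8.0, 8.0, 27.0), (-8.0, -8.0, 27.0), (0.1, 0.1, 0.1)]
eqs = [fsolve(lorenz, g) for g in guesses]
for e in eqs:
    print(np.round(e, 4))
```

Root finding needs a good initial guess per equilibrium, which is precisely the limitation that motivates procedures converging to equilibria "naturally", as in the paper.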
Two-Step Robust Diagnostic Method for Identification of Multiple High Leverage Points
Arezoo Bagheri; Habshah Midi; A. H.M.R. Imon
2009-01-01
Problem statement: High leverage points are extreme outliers in the X-direction. In regression analysis, the detection of these leverage points is important because of their arbitrarily large effects on the estimates as well as on multicollinearity problems. The Mahalanobis Distance (MD) has been used as a diagnostic tool for identifying outliers in multivariate analysis, where it measures the distance between the normal and abnormal groups of the data. Since the computation of MD relies on non-robus...
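The classical (non-robust) Mahalanobis distance that the diagnostic starts from is straightforward to compute; a minimal sketch on illustrative data, not data from the paper:

```python
import numpy as np

def mahalanobis_distances(X):
    """Classical MD of each row from the sample mean:
    sqrt((x - mu)^T S^{-1} (x - mu)), with S the sample covariance."""
    mu = X.mean(axis=0)
    S_inv = np.linalg.inv(np.cov(X, rowvar=False))
    d = X - mu
    return np.sqrt(np.einsum('ij,jk,ik->i', d, S_inv, d))

rng = np.random.default_rng(3)
X = rng.normal(size=(100, 2))
X[0] = [10.0, 10.0]                      # plant one high leverage point
md = mahalanobis_distances(X)
print(md.argmax())                       # the planted point stands out
```

Because the mean and covariance themselves are distorted by the outliers (masking), robust alternatives, as the abstract goes on to discuss, replace them with robust estimates.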
On the String Averaging Method for Sparse Common Fixed Points Problems.
Censor, Yair; Segal, Alexander
2009-07-01
We study the common fixed point problem for the class of directed operators. This class is important because many commonly used nonlinear operators in convex optimization belong to it. We propose a definition of sparseness of a family of operators and investigate a string-averaging algorithmic scheme that favorably handles the common fixed points problem when the family of operators is sparse. The convex feasibility problem is treated as a special case and a new subgradient projections algorithmic scheme is obtained.
Quality Assessment and Proximate Analysis of Amaranthus hybridus ...
African Journals Online (AJOL)
The aim of this research is to determine the quality and proximate composition of Amaranthus hybridus, Celosia argentea, and Talinum triangulare obtained from open markets in Benin City, Nigeria. Microbiological and proximate analyses were carried out using standard methods. Results of the proximate analysis revealed ...
Improved Full-Newton Step O(nL) Infeasible Interior-Point Method for Linear Optimization
Gu, G.; Mansouri, H.; Zangiabadi, M.; Bai, Y.Q.; Roos, C.
2009-01-01
We present several improvements of the full-Newton step infeasible interior-point method for linear optimization introduced by Roos (SIAM J. Optim. 16(4):1110–1136, 2006). Each main step of the method consists of a feasibility step and several centering steps. We use a more natural feasibility step,
DEFF Research Database (Denmark)
Cordua, Knud Skou; Hansen, Thomas Mejer; Lange, Katrine
… proven to be an efficient way of obtaining multiple realizations that honor the same multiple-point statistics as the training image. The frequency matching method provides an alternative way of formulating multiple-point-based a priori models. In this strategy the pattern frequency distributions (i.e. marginals) of the training image and a subsurface model are matched in order to obtain a solution with the same multiple-point statistics as the training image. Sequential Gibbs sampling is a simulation strategy that provides an efficient way of applying sequential simulation based algorithms as a priori information in probabilistic inverse problems. Unfortunately, when this strategy is applied with the multiple-point-based simulation algorithm SNESIM, the reproducibility of training image patterns is violated. In this study we suggest combining sequential simulation with the frequency matching method …
Kronberg, James W.
1994-01-01
A proximity sensor based on a closed field circuit. The circuit comprises a ring oscillator using a symmetrical array of plates that creates an oscillating displacement current. The displacement current varies as a function of the proximity of objects to the plate array. Preferably the plates are in the form of three pairs of symmetric plates having a common center, arranged in a hexagonal pattern with opposing plates linked as a pair. The sensor produces logic level pulses suitable for interfacing with a computer or process controller. The proximity sensor can be incorporated into a load cell, a differential pressure gauge, or a device for measuring the consistency of a characteristic of a material where a variation in the consistency causes the dielectric constant of the material to change.
Neighborhoods and manageable proximity
Directory of Open Access Journals (Sweden)
Stavros Stavrides
2011-08-01
Full Text Available The theatricality of urban encounters is above all a theatricality of distances which allow for the encounter. The absolute “strangeness” of the crowd (Simmel 1997: 74 expressed, in its purest form, in the absolute proximity of a crowded subway train, does not generally allow for any movements of approach, but only for nervous hostile reactions and submissive hypnotic gestures. Neither forced intersections in the course of pedestrians or vehicles, nor the instantaneous crossing of distances by the technology of live broadcasting and remote control give birth to places of encounter. In the forced proximity of the metropolitan crowd which haunted the city of the 19th and 20th century, as well as in the forced proximity of the tele-presence which haunts the dystopic prospect of the future “omnipolis” (Virilio 1997: 74, the necessary distance, which is the stage of an encounter between different instances of otherness, is dissipated.
Atrofia muscular proximal familiar
Directory of Open Access Journals (Sweden)
José Antonio Levy
1962-09-01
Full Text Available The authors report two cases of familial proximal muscular atrophy, a disease characterized by motor deficit and muscular atrophy of proximal distribution, secondary to lesions of peripheral neurons. As in other cases described in the literature, a diagnosis of progressive muscular dystrophy was initially made. The correct diagnosis was reached with the aid of electromyography and muscle biopsy.
Habit control during growth on GaN point seed crystals by Na-flux method
Honjo, Masatomo; Imanishi, Masayuki; Imabayashi, Hiroki; Nakamura, Kosuke; Murakami, Kosuke; Matsuo, Daisuke; Maruyama, Mihoko; Imade, Mamoru; Yoshimura, Masashi; Mori, Yusuke
2017-01-01
The formation of the pyramidal habit is one of the requirements for the dramatic reduction of dislocations during growth on a tiny GaN seed called a “point seed”. In this study, we focus on controlling the growth habit to form a pyramidal shape in order to reduce the number of dislocations in the c-growth sector during growth on GaN point seeds. High temperature growth was found to change the growth habit from the truncated pyramidal shape to the pyramidal shape. As a result, the number of dislocations in the c-growth sector tended to decrease with increasing growth temperature.
Vannecke, T P W; Lampens, D R A; Ekama, G A; Volcke, E I P
2015-01-01
Simple titration methods certainly deserve consideration for on-site routine monitoring of volatile fatty acid (VFA) concentration and alkalinity during anaerobic digestion (AD), because of their simplicity, speed and cost-effectiveness. In this study, the 5 and 8 pH point titration methods for measuring the VFA concentration and carbonate system alkalinity (H2CO3*-alkalinity) were assessed and compared. For this purpose, synthetic solutions with known H2CO3*-alkalinity and VFA concentration as well as samples from anaerobic digesters treating three different kinds of solid wastes were analysed. The results of these two related titration methods were verified with photometric and high-pressure liquid chromatography measurements. It was shown that photometric measurements lead to overestimations of the VFA concentration in the case of coloured samples. In contrast, the 5 pH point titration method provides an accurate estimation of the VFA concentration, clearly corresponding with the true value. Concerning the H2CO3*-alkalinity, the most accurate and precise estimations, showing very similar results for repeated measurements, were obtained using the 8 pH point titration. Overall, it was concluded that the 5 pH point titration method is the preferred method for the practical monitoring of AD of solid wastes due to its robustness, cost efficiency and user-friendliness.
Bockmann, Benjamin; Buecking, Benjamin; Franz, Daniel; Zettl, Ralph; Ruchholtz, Steffen; Mohr, Juliane
2015-07-04
The optimal treatment for proximal humeral fractures remains under debate. In this article, we report the mid-term results of patients who underwent less-invasive implantation of a polyaxial locking plate for displaced proximal humeral fractures. This study included patients treated with a polyaxial locking plate via an anterolateral deltoid-split approach from May 2008 to December 2011. We evaluated outcome parameters after a minimum follow-up period of 2.5 years (median 4.5 years, follow-up rate 62 %), including the age- and gender-dependent Constant score, the activities of daily living score, and the visual analog scale for both pain and subjective shoulder function. Of the 140 patients who underwent surgery, 114 were included in the follow-up and 71 completed the questionnaire. Fifteen patients (21 %) exhibited 2-fragment fractures, and 56 patients (79 %) exhibited 3- and 4-part fractures. The Constant score improved significantly (4.5 years: 70 ± 21). The activities of daily living score did not return to pre-fracture levels (before trauma: 27 ± 5; 4.5 years: 20 ± 8). Outcomes did not differ significantly by fracture morphology or gender. Although the less-invasive surgical procedure is a feasible treatment option in proximal humeral fractures, with acceptable complications and considerable improvement during the first six months, a lengthy recovery time is required. The majority of our patients did not become pain-free or reach pre-fracture activity levels.
Point-of-use filtration method for the prevention of fungal contamination of hospital water.
Warris, A.; Onken, A.; Gaustad, P.; Janssen, W.; Lee, H. van der; Verweij, P.E.; Abrahamsen, T.G.
2010-01-01
Published data implicate hospital water as a potential source of opportunistic fungi that may cause life-threatening infections in immunocompromised patients. Point-of-care filters are known to retain bacteria, but little is known about their efficacy in reducing exposure to moulds. We investigated
Doing Close-Relative Research: Sticking Points, Method and Ethical Considerations
Degabriele Pace, Geraldine
2015-01-01
Doing insider research can raise many problematic issues, particularly if the insiders are also close relatives. This paper deals with complexities arising from research which is participatory in nature. Thus, this paper seeks to describe the various sticking points that were encountered by the researcher when she decided to embark on insider…
Tian, Zhen; Jia, Xun; Jiang, Steve B
2013-01-01
In treatment plan optimization for intensity modulated radiation therapy (IMRT), the dose-deposition coefficient (DDC) matrix is often pre-computed to parameterize the dose contribution to each voxel in the volume of interest from each beamlet of unit intensity. However, due to limited computer memory and the requirement of computational efficiency, matrix elements of small values are usually truncated in practice, which inevitably compromises the quality of the resulting plan. A fixed-point iteration scheme has been applied in IMRT optimization to solve this problem, and has been reported to be effective and efficient on the basis of numerical experiments. In this paper, we aim to point out the mathematics behind this scheme and to answer the following three questions: 1) does the fixed-point iteration algorithm converge? 2) when it converges, is the fixed-point solution the same as the original solution obtained with the complete DDC matrix? 3) if not the same, wh...
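Question 1 is the textbook territory of the Banach fixed-point theorem: the iteration x_{k+1} = g(x_k) converges whenever g is a contraction. A generic one-dimensional illustration (not the IMRT-specific operator studied in the paper):

```python
import math

def fixed_point(g, x0, tol=1e-12, max_iter=200):
    """Iterate x_{k+1} = g(x_k); converges when g is a contraction."""
    x = x0
    for _ in range(max_iter):
        x_new = g(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    raise RuntimeError("no convergence")

# g(x) = cos(x) has |g'(x)| < 1 near its fixed point, so iteration converges
x_star = fixed_point(math.cos, 1.0)
print(x_star)   # ~0.7390851332 (the Dottie number)
```

Questions 2 and 3 are exactly where the truncation matters: the contraction defined with the truncated matrix generally has a different fixed point from the full-matrix solution.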
Shi, Yixun
2009-01-01
Based on a sequence of points and a particular linear transformation generalized from this sequence, two recent papers (E. Mauch and Y. Shi, "Using a sequence of number pairs as an example in teaching mathematics". Math. Comput. Educ., 39 (2005), pp. 198-205; Y. Shi, "Case study projects for college mathematics courses based on a particular…
Numerical Time Integration Methods for a Point Absorber Wave Energy Converter
DEFF Research Database (Denmark)
Zurkinden, Andrew Stephen; Kramer, Morten
2012-01-01
The objective of this abstract is to provide a review of models for motion simulation of marine structures with a special emphasis on wave energy converters. The time-domain model is applied to a point absorber system working in pitch mode only. The device is similar to the well-known Wavestar...
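A common starting point for such time-domain models is a single-degree-of-freedom equation of motion integrated with a classical Runge-Kutta scheme. The sketch below uses illustrative coefficients (not Wavestar data) and ignores the radiation-memory convolution that a full wave-energy-converter model would include.

```python
import numpy as np

# single-DOF pitch model: (I + A) q'' + B q' + C q = M(t)
# illustrative inertia, damping, restoring and excitation values
I_A, B, C = 2.0, 0.5, 8.0
M = lambda t: np.cos(1.5 * t)                # harmonic wave excitation moment

def rhs(t, y):
    q, qd = y
    return np.array([qd, (M(t) - B * qd - C * q) / I_A])

def rk4_step(f, t, y, h):
    """One classical fourth-order Runge-Kutta step."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

h, y = 0.01, np.array([0.0, 0.0])            # start at rest
for i in range(int(60 / h)):                 # integrate 60 s
    y = rk4_step(rhs, i * h, y, h)
print(y)                                     # bounded steady-state response
```

With positive damping the response settles onto a bounded harmonic steady state, which is the sanity check one expects from any stable time-integration scheme for this class of model.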
Direct measurement of surface-state conductance by microscopic four-point probe method
DEFF Research Database (Denmark)
Hasegawa, S.; Shiraki, I.; Tanikawa, T.
2002-01-01
For in situ measurements of local electrical conductivity of well defined crystal surfaces in ultrahigh vacuum, we have developed microscopic four-point probes with a probe spacing of several micrometres, installed in a scanning-electron - microscope/electron-diffraction chamber. The probe...
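For an infinite two-dimensional sheet, the idealized surface-conductance case, the collinear equal-spacing four-point probe geometry has the standard closed-form geometric factor pi/ln 2. A minimal calculation with illustrative numbers (not measurements from the paper):

```python
import math

def sheet_resistance(V, I):
    """Collinear, equally spaced four-point probe on an infinite thin
    sheet: R_s = (pi / ln 2) * V / I, geometric factor ~4.532."""
    return (math.pi / math.log(2)) * V / I

# illustrative: 1 uA injected, 10 mV measured between the inner probes
Rs = sheet_resistance(10e-3, 1e-6)
print(Rs)        # sheet resistance in ohms per square
```

Micrometre probe spacing matters because it confines the current to the surface layer, so the measured R_s reflects the surface-state sheet conductance rather than the bulk.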
Proximal collagenous gastroenteritides:
DEFF Research Database (Denmark)
Nielsen, Ole Haagen; Riis, Lene Buhl; Danese, Silvio
2014-01-01
… a systematic review of collagenous gastritis, collagenous sprue, and a combination thereof. METHOD: The search yielded 117 studies which were suitable for inclusion in the systematic review. Excluding repeated cases, 89 case reports and 28 case series were reported, whereas no prospective studies … of these disorders is presented. The prognosis of both collagenous gastritis and sprue seems not to be as dismal as previously considered. Data point to involvement of immune or autoimmune mechanisms, potentially driven by luminal antigens initiating the fibroinflammatory condition. CONCLUSIONS: To reach …
Painful Spastic Hip Dislocation: Proximal Femoral Resection
Albiñana, Javier; Gonzalez-Moran, Gaspar
2002-01-01
The dislocated hip in a non-ambulatory child with spastic paresis tends to interfere painfully with sleep, upright sitting, and perineal care. Proximal femoral resection-interposition arthroplasty is one method of treatment for this condition. We reviewed eight hips, including two bilateral cases, with a mean follow-up of 30 months. Clinical improvement was observed in all except one case with respect to pain relief and sitting tolerance. Some proximal migration was observed in three cases, despit...
Vasileios Psychas, Dimitrios; Delikaraoglou, Demitris
2016-04-01
The future Global Navigation Satellite Systems (GNSS), including modernized GPS, GLONASS, Galileo and BeiDou, offer three or more signal carriers for civilian use and many more redundant observables. The additional frequencies can significantly improve the capabilities of the traditional geodetic techniques based on GPS signals at two frequencies, especially with regard to the availability, accuracy, interoperability and integrity of high-precision GNSS applications. Furthermore, highly redundant measurements allow robust simultaneous estimation of static or mobile user states, including additional parameters such as real-time tropospheric biases, and more reliable ambiguity resolution estimates. This paper presents an investigation and analysis of accuracy improvement techniques in the Precise Point Positioning (PPP) method using signals from the fully operational (GPS and GLONASS), as well as the emerging (Galileo and BeiDou) GNSS systems. The main aim was to determine the improvement in both the positioning accuracy achieved and the convergence time needed to achieve geodetic-level (10 cm or less) accuracy. To this end, freely available observation data from the recent Multi-GNSS Experiment (MGEX) of the International GNSS Service, as well as the open source program RTKLIB were used. Following a brief background of the PPP technique and the scope of MGEX, the paper outlines the various observational scenarios that were used in order to test various data processing aspects of PPP solutions with multi-frequency, multi-constellation GNSS systems. Results from the processing of multi-GNSS observation data from selected permanent MGEX stations are presented and useful conclusions and recommendations for further research are drawn. As shown, data fusion from the GPS, GLONASS, Galileo and BeiDou systems is becoming increasingly significant nowadays, resulting in increased position accuracy (mostly in the less favorable East direction) and a large reduction of convergence time.
Evaluation of a multi-point method for determining acoustic impedance
Jones, Michael G.; Parrott, Tony L.
1989-01-01
A multipoint method for determining acoustic impedance was evaluated in comparison with the traditional standing wave and two-microphone methods using 30 test samples covering the reflection factor magnitude range 0.004-0.999. The multipoint method is shown to combine the strengths of the standing wave and two-microphone methods while avoiding some of their inherent weaknesses. In particular, the results obtained suggest that the multipoint method will be less subject to flow induced random error than the two-microphone method in the presence of significant broadband noise levels associated with mean flow.
[Experimental proximal carpectomy. Biodynamics].
Kuhlmann, J N
1992-01-01
Proximal carpectomy was performed on 10 fresh cadaver wrists. Dynamic x-rays were taken and the forces necessary to obtain different movements before and after the operation were measured. Comparison of these parameters clearly defines the advantages and limitations of carpectomy and indicates the reasons.
Donahue, Craig J.; Rais, Elizabeth A.
2009-01-01
This lab experiment illustrates the use of thermogravimetric analysis (TGA) to perform proximate analysis on a series of coal samples of different rank. Peat and coke are also examined. A total of four exercises are described. These are dry exercises as students interpret previously recorded scans. The weight percent moisture, volatile matter,…
... from which the bone was taken if the foot/ankle surgeries done at the same time allow for it. ... problems after a PTBG include infection, fracture of the proximal tibia and pain related ...
A primal-dual algorithm framework for convex saddle-point optimization.
Zhang, Benxin; Zhu, Zhibin
2017-01-01
In this study, we introduce a primal-dual prediction-correction algorithm framework for convex optimization problems with known saddle-point structure. Our unified framework adds a proximal term with a positive definite weighting matrix. Moreover, different choices of the proximal parameters in the framework recover some existing well-known algorithms and yield a class of new primal-dual schemes. We prove the convergence of the proposed framework from the perspective of proximal-point-algorithm-like contraction methods and the variational inequality approach. The O(1/t) convergence rate in the ergodic and nonergodic senses is also given, where t denotes the iteration number.
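The proximal machinery underlying such schemes is easy to illustrate in its simplest closed form: the proximal operator of the scaled l1 norm is soft-thresholding. This is a generic building block of primal-dual and proximal-point methods, not the paper's full prediction-correction frame.

```python
import numpy as np

def prox_l1(v, lam):
    """prox_{lam*||.||_1}(v) = argmin_x lam*||x||_1 + 0.5*||x - v||^2,
    which has the closed-form soft-thresholding solution."""
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

v = np.array([3.0, -0.4, 1.2, -2.5, 0.05])
print(prox_l1(v, 1.0))   # entries shrink toward zero; small ones vanish
```

In a primal-dual iteration, each step solves a subproblem of exactly this shape, with the weighting matrix of the proximal term controlling the step sizes.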
Precision nutrition - review of methods for point-of-care assessment of nutritional status.
Srinivasan, Balaji; Lee, Seoho; Erickson, David; Mehta, Saurabh
2017-04-01
Precision nutrition encompasses prevention and treatment strategies for optimizing health that consider individual variability in diet, lifestyle, environment and genes by accurately determining an individual's nutritional status. This is particularly important as malnutrition now affects a third of the global population, with most of those affected or their care providers having limited means of determining their nutritional status. Similarly, program implementers often have no way of determining the impact or success of their interventions, thus hindering their scale-up. Exciting new developments in the area of point-of-care diagnostics promise to provide improved access to nutritional status assessment, as a first step towards enabling precision nutrition and tailored interventions at both the individual and community levels. In this review, we focus on the current advances in developing portable diagnostics for assessment of nutritional status at point-of-care, along with the numerous design challenges in this process and potential solutions.
Evaluation of percutaneous pinning in unstable proximal humeral fractures: A novel technique
Directory of Open Access Journals (Sweden)
Nishikant Kumar
2013-01-01
Full Text Available Management of unstable proximal humeral fractures has long remained controversial. Open reduction and internal fixation can result in devastating complications such as stiffness of the shoulder joint, avascular necrosis, and infection. We present a novel method of percutaneous pinning of unstable proximal humeral fractures. All 32 cases were treated closed, without soft tissue stripping. All cases were followed up for a period of 3 years, and results were assessed according to the 100-point Constant score. A total of 75% of cases showed excellent to good results. To minimize complications such as pin site infection, loosening, and neurovascular damage, we used a fixed pin-site insertion technique and threaded pins in osteoporotic patients. Percutaneous pinning is thus a safe and novel method of managing unstable proximal humeral fractures if certain principles are borne in mind before using it.
Winzor, Donald J
2004-02-15
As a response to recent expression of concern about possible unreliability of vapor pressure deficit measurements (K. Kiyosawa, Biophys. Chem. 104 (2003) 171-188), the results of published studies on the temperature dependence of the osmotic pressure of aqueous polyethylene glycol solutions are shown to account for the observed discrepancies between osmolality estimates obtained by freezing point depression and vapor pressure deficit osmometry--the cause of the concern.
Point Measurements of Fermi Velocities by a Time-of-Flight Method
DEFF Research Database (Denmark)
Falk, David S.; Henningsen, J. O.; Skriver, Hans Lomholt
1972-01-01
… obtained one component of the velocity along half the circumference of the centrally symmetric orbit for B→∥[100]. The results are in agreement with current models for the Fermi surface. For B→∥[011], the electrons involved are not moving in a symmetry plane of the Fermi surface. In such cases one cannot … masses for symmetry orbits of the Fermi surface, but differing slightly at general points. The comparison favors the Fourier model.
Directory of Open Access Journals (Sweden)
Marwan Abukhaled
2013-01-01
Full Text Available The variational iteration method is applied to solve a class of nonlinear singular boundary value problems that arise in physiology. The process of the method, which produces solutions in terms of convergent series, is explained. The Lagrange multipliers needed to construct the correctional functional are found in terms of the exponential integral and Whittaker functions. The method easily overcomes the obstacle of singularities. Examples will be presented to test the method and compare it to other existing methods in order to confirm fast convergence and significant accuracy.
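The correctional functional at the heart of the variational iteration method has a standard generic shape (shown here as a template; the paper's particular Lagrange multiplier is expressed via the exponential integral and Whittaker functions for the singular case):

```latex
u_{n+1}(x) = u_n(x) + \int_0^x \lambda(s)\,\bigl[\mathcal{L}u_n(s) + \mathcal{N}\tilde{u}_n(s) - g(s)\bigr]\,ds ,
```

where \(\mathcal{L}\) and \(\mathcal{N}\) are the linear and nonlinear parts of the operator, \(g\) is the source term, \(\tilde{u}_n\) denotes a restricted variation, and \(\lambda\) is determined by making the functional stationary with respect to \(u_n\).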
Uncemented allograft-prosthetic composite reconstruction of the proximal femur
Directory of Open Access Journals (Sweden)
Li Min
2014-01-01
Full Text Available Background: Allograft-prosthetic composites can be divided into three groups, namely cemented, uncemented, and partially cemented. Previous studies have mainly reported outcomes of cemented and partially cemented allograft-prosthetic composites, but have rarely focused on uncemented allograft-prosthetic composites. The objectives of our study were to describe a surgical technique for using a proximal femoral uncemented allograft-prosthetic composite and to present the radiographic and clinical results. Materials and Methods: Twelve patients who underwent uncemented allograft-prosthetic composite reconstruction of the proximal femur after bone tumor resection were retrospectively evaluated at an average followup of 24.0 months. Clinical records and radiographs were evaluated. Results: In our series, union occurred in all patients (100%; range 5-9 months). Up to the most recent followup, there were no cases of infection, nonunion of the greater trochanter, junctional bone resorption, dislocation, allergic reaction, wear of the acetabular socket, recurrence, or metastasis. However, there were three periprosthetic fractures, which were fixed using cerclage wire during surgery. Five cases had bone resorption in and around the greater trochanter. The average Musculoskeletal Tumor Society (MSTS) score and Harris hip score (HHS) were 26.2 points (range 24-29 points) and 80.6 points (range 66.2-92.7 points), respectively. Conclusions: These results showed that an uncemented allograft-prosthetic composite can promote bone union through compression at the host-allograft junction and is a good choice after proximal femoral resection. Although this technique has its own merits, long term outcomes are not yet validated.
Sugawara, Takuya; Ogihara, Yuki; Sakamoto, Yuji
2016-01-20
The point-based method and the fast-Fourier-transform-based method are commonly used to calculate computer-generated holograms. This paper proposes a novel fast calculation method for a patch model, which uses the point-based method. The method provides a calculation time that is proportional to the number of patches but not to the number of point light sources. This means that the method is suitable for quickly calculating a wide area covered by patches. Experiments using a graphics processing unit indicated that the proposed method is about 8 times or more faster than the ordinary point-based method.
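The ordinary point-based method that serves as the baseline above amounts to superposing spherical waves from each object point at every hologram-plane sample. A minimal, unoptimized sketch with illustrative wavelength and geometry, not the paper's patch-model acceleration:

```python
import numpy as np

wavelength = 532e-9                     # green laser, illustrative value
k = 2 * np.pi / wavelength

# a few object points (x, y, z) with unit amplitude
points = np.array([[0.0, 0.0, 0.1], [1e-4, 5e-5, 0.12]])

# hologram plane: small sample grid at z = 0
n = 64
xs = np.linspace(-5e-4, 5e-4, n)
X, Y = np.meshgrid(xs, xs)

# point-based method: superpose a spherical wave from every object point
field = np.zeros((n, n), dtype=complex)
for px, py, pz in points:
    r = np.sqrt((X - px)**2 + (Y - py)**2 + pz**2)
    field += np.exp(1j * k * r) / r

hologram = np.angle(field)              # e.g. a phase-only hologram
print(hologram.shape)
```

The cost is O(points x pixels), which is why grouping point sources into patches, as the paper does, pays off for wide hologram areas.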
Energy Technology Data Exchange (ETDEWEB)
Zhu, Yong; Jiang, Wan-lu; Kong, Xiang-dong [Yanshan University, Hebei (China)
2017-02-15
In mechanical fault diagnosis and condition monitoring, extracting and eliminating the trend term of a machinery signal is necessary. In this paper, an adaptive extraction method for the trend term of a machinery signal based on Extreme-point symmetric mode decomposition (ESMD) is proposed. This method fully utilizes ESMD, including its self-adaptive decomposition feature and optimal fitting strategy. The effectiveness and practicability of the method are tested through simulation analysis and measured-data validation. Results indicate that the method can adaptively extract various trend terms hidden in machinery signals and has commendable self-adaptability. Moreover, the extraction results are better than those of empirical mode decomposition.
Energy Technology Data Exchange (ETDEWEB)
Baldwin, J.M. [Sandia National Labs., Livermore, CA (United States). Integrated Manufacturing Systems
1996-04-01
The Dimensional Inspection Techniques Specification (DITS) Project is an ongoing effort to produce tools and guidelines for optimum sampling and data analysis of machined parts, when measured using point-sample methods of dimensional metrology. This report is a compilation of results of a literature survey, conducted in support of the DITS. Over 160 citations are included, with author abstracts where available.
DEFF Research Database (Denmark)
Khoshfetrat Pakazad, Sina; Hansson, Anders; Andersen, Martin S.
2017-01-01
In this paper, we propose a distributed algorithm for solving coupled problems with chordal sparsity or an inherent tree structure which relies on primal–dual interior-point methods. We achieve this by distributing the computations at each iteration, using message-passing. In comparison to existi...
Solving a system of Volterra-Fredholm integral equations of the second kind via fixed point method
Hasan, Talaat I.; Salleh, Shaharuddin; Sulaiman, Nejmaddin A.
2015-12-01
In this paper, we consider the system of Volterra-Fredholm integral equations of the second kind (SVFI-2) and propose a fixed point method (FPM) for solving it. In addition, a few theorems and a new algorithm are introduced. They are supported by numerical examples and simulations using Matlab. The results agree reasonably well with the exact solutions.
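The fixed point (Picard) iteration behind an FPM can be sketched for a single Fredholm equation of the second kind, u(x) = f(x) + λ∫₀¹K(x,t)u(t)dt, discretized with trapezoidal quadrature. The kernel, λ and right-hand side below are toy choices, not the SVFI-2 system of the paper, picked so that the exact solution is u(x) = x:

```python
import numpy as np

# Picard iteration u_{k+1} = f + lam * K u_k for a Fredholm equation of
# the second kind, with K(x,t) = x*t, lam = 1/2 and f chosen so that the
# exact solution is u(x) = x (illustrative values only).
n = 201
x = np.linspace(0.0, 1.0, n)
w = np.full(n, 1.0 / (n - 1))          # trapezoidal quadrature weights
w[0] = w[-1] = 0.5 / (n - 1)

lam = 0.5
K = np.outer(x, x)                     # K(x,t) = x * t
f = (5.0 / 6.0) * x                    # makes u(x) = x the exact solution

u = np.zeros(n)                        # initial guess u_0 = 0
for _ in range(50):
    u = f + lam * (K * w) @ u          # quadrature of the integral term

err = float(np.max(np.abs(u - x)))
print(round(err, 6))
```

Convergence is guaranteed here because the integral operator is a contraction (|λ| times the kernel norm is well below 1).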
Czech Academy of Sciences Publication Activity Database
Pastorek, Lukáš; Sobol, Margaryta; Hozák, Pavel
2016-01-01
Roč. 146, č. 4 (2016), s. 391-406 ISSN 0948-6143 R&D Projects: GA TA ČR(CZ) TE01020118; GA ČR GA15-08738S; GA MŠk(CZ) ED1.1.00/02.0109; GA MŠk(CZ) LM2015062 Grant - others:Human Frontier Science Program(FR) RGP0017/2013 Institutional support: RVO:68378050 Keywords : Colocalization * Quantitative analysis * Pointed patterns * Transmission electron microscopy * Manders' coefficients * Immunohistochemistry Subject RIV: EB - Genetics ; Molecular Biology Impact factor: 2.553, year: 2016
Payrau, Bernard; Quéré, Nadine; Bois, Danis
2011-01-01
Background A first study on vascular fasciatherapy enabled us to observe a turbulent blood flow turning into a laminar one, which raised questions about the process involved in this transformation. The first question was: what is the nature of the artery from the point of view of the fascia? The second was: what link underlies the process observed in our first study? This time, therefore, we are investigating a specific aspect of the big question that polarizes the interest...
Directory of Open Access Journals (Sweden)
Dominique Placko
2016-10-01
Full Text Available The distributed point source method (DPSM), developed over the last decade, has been used to solve various engineering problems, such as elastic and electromagnetic wave propagation, electrostatic, and fluid flow problems. Based on a semi-analytical formulation, the DPSM solution is generally built by superimposing point source solutions or Green's functions. However, the DPSM solution can also be obtained by superimposing elemental solutions of volume sources having a source density called the equivalent source density (ESD). In earlier works, mostly point sources were used. In this paper the DPSM formulation is modified to introduce a new kind of ESD, replacing the classical single point source by a family of point sources referred to as quantum sources. The proposed formulation with these quantum sources does not change the dimension of the global matrix to be inverted compared with the classical point-source-based DPSM formulation. To assess the performance of this new formulation, the ultrasonic field generated by a circular planar transducer was compared with the classical DPSM formulation and the analytical solution. The results show a significant improvement in the near-field computation.
McGraner, Kristin L.; Robbins, Daniel
2010-01-01
Although many research questions in English education demand the use of qualitative methods, this paper will briefly explore how English education researchers and doctoral students may use statistics and quantitative methods to inform, complement, and/or deepen their inquiries. First, the authors will provide a general overview of the survey areas…
Calculation of condition indices for road structures using a deduct points method
CSIR Research Space (South Africa)
Roux, MP
2016-07-01
Full Text Available ) and relevancy (R) rating. The DER-rating method has been included in the Draft TMH19 Manual for the Visual Assessment of Road Structures. The D, E, and R ratings are used to calculate condition indices for road structures. The method used is a deduct...
Electrostatics of a Point Charge between Intersecting Planes: Exact Solutions and Method of Images
Mei, W. N.; Holloway, A.
2005-01-01
In this work, the authors present a commonly used example in electrostatics that could be solved exactly in a conventional manner, yet expressed in a compact form, and simultaneously work out special cases using the method of images. Then, by plotting the potentials and electric fields obtained from these two methods, the authors demonstrate that…
Comparison of methods for estimating density of forest songbirds from point counts
Jennifer L. Reidy; Frank R. Thompson; J. Wesley. Bailey
2011-01-01
New analytical methods have been promoted for estimating the probability of detection and density of birds from count data but few studies have compared these methods using real data. We compared estimates of detection probability and density from distance and time-removal models and survey protocols based on 5- or 10-min counts and outer radii of 50 or 100 m. We...
Mauro, Craig S.
2011-01-01
Proximal humeral fractures may present with many different configurations in patients with varying co-morbidities and expectations. As a result, the treating physician must understand the fracture pattern, the quality of the bone, other patient-related factors, and the expanding range of reconstructive options to achieve the best functional outcome and to minimize complications. Current treatment options range from non-operative treatment with physical therapy to fracture fixation using percuta...
Directory of Open Access Journals (Sweden)
Du Wei-Shih
2011-01-01
Full Text Available Abstract In this paper, we introduce a new approach for finding a common element in the intersection of the set of solutions of a finite family of equilibrium problems and the set of fixed points of a nonexpansive mapping in a real Hilbert space. Under appropriate conditions, some strong convergence theorems are established. The results obtained in this paper are new, and a few examples illustrating them are given. Finally, we point out that some 'so-called' mixed equilibrium problems and generalized equilibrium problems in the literature are still usual equilibrium problems. 2010 Mathematics Subject Classification: 47H09; 47H10; 47J25.
Energy Technology Data Exchange (ETDEWEB)
Rider, M.J.; Castro, C.A.; Garcia, A.V. [State University of Campinas (Brazil). Electric Energy Systems Dept.; Paucar, V.L. [Federal University of Maranhao (Brazil). Electrical Engineering Dept.
2004-07-01
A method for computing the minimum active power loss in competitive electric power markets is proposed. The active power loss minimisation problem is formulated as an optimal power flow (OPF) with equality and inequality nonlinear constraints which take power system security into account. The OPF is solved using the multiple predictor-corrector interior-point method (MPC) of the family of higher-order interior-point methods, enhanced with a procedure for step-length computation during Newton iterations. The proposed enhanced MPC converges in fewer iterations and with better computational times than some results reported in the literature. An efficient computation of the primal and dual step sizes reduces the primal and dual objective function errors, respectively, assuring continuously decreasing errors during the iterations of the interior-point procedure. The proposed method has been simulated on several IEEE test systems and on two real systems, including a 464-bus configuration of the interconnected Peruvian power system and a 2256-bus scenario of the South-Southeast interconnected Brazilian system. The tests show that convergence is facilitated and the number of iterations may be small. (author)
Treatment of phantom pain with contralateral injection into tender points: a new method of treatment
Directory of Open Access Journals (Sweden)
Alaa A El Aziz Labeeb
2015-01-01
Conclusion Contralateral injections of 1 ml of 0.25% bupivacaine into the myofascial hyperalgesic areas attenuated phantom limb pain and affected phantom limb sensation. Our study provides the basis for a new method of managing this kind of severe pain and improving the rehabilitation of amputees. However, further longitudinal studies with larger numbers of patients are needed to confirm our findings.
DEFF Research Database (Denmark)
Barfod, Adrian; Straubhaar, Julien; Høyer, Anne-Sophie
2017-01-01
Creating increasingly realistic hydrological models involves the inclusion of additional geological and geophysical data in the hydrostratigraphic modelling procedure. Using Multiple Point Statistics (MPS) for stochastic hydrostratigraphic modelling provides a degree of flexibility that allows......2. The comparison of the stochastic hydrostratigraphic MPS models is carried out in an elaborate scheme of visual inspection, mathematical similarity and consistency with boreholes. Using the Kasted survey data, a practical example for modelling new survey areas is presented. A cognitive...... soft data variable. The computation time of 2-3 h for snesim was in between DS and iqsim. The snesim implementation used here is part of the Stanford Geostatistical Modeling Software, or SGeMS. The snesim setup was not trivial, with numerous parameter settings, usage of multiple grids and a search tree...
Directory of Open Access Journals (Sweden)
Phayap Katchang
2010-01-01
Full Text Available The purpose of this paper is to investigate the problem of finding a common element of the set of solutions of mixed equilibrium problems, the set of solutions of variational inclusions with set-valued maximal monotone mappings and inverse-strongly monotone mappings, and the set of fixed points of a finite family of nonexpansive mappings in the setting of Hilbert spaces. We propose a new iterative scheme for finding the common element of the above three sets. Our results improve and extend the corresponding results of Zhang et al. (2008), Peng et al. (2008), Peng and Yao (2009), Plubtieng and Sriprad (2009), and some well-known results in the literature.
Xu, Jun; Dang, Chao; Kong, Fan
2017-10-01
This paper presents a new method for efficient structural reliability analysis. In this method, a rotational quasi-symmetric point method (RQ-SPM) is proposed for evaluating the fractional moments of the performance function. Then, the derivation of the performance function's probability density function (PDF) is carried out based on the maximum entropy method in which constraints are specified in terms of fractional moments. In this regard, the probability of failure can be obtained by a simple integral over the performance function's PDF. Six examples, including a finite element-based reliability analysis and a dynamic system with strong nonlinearity, are used to illustrate the efficacy of the proposed method. All the computed results are compared with those by Monte Carlo simulation (MCS). It is found that the proposed method can provide very accurate results with low computational effort.
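The RQ-SPM and maximum-entropy machinery is involved, but the Monte Carlo simulation (MCS) benchmark the method is compared against is easy to sketch for a toy linear performance function whose exact failure probability is known in closed form. The reliability index beta and the sample size below are assumed values:

```python
import math
import numpy as np

# Crude MCS estimate of the failure probability P(G < 0) for the toy
# performance function G(X) = beta - (X1 + X2)/sqrt(2), X1, X2 ~ N(0,1).
# Since (X1 + X2)/sqrt(2) ~ N(0,1), the exact answer is Phi(-beta).
rng = np.random.default_rng(1)
beta = 2.0
n = 200_000
z = (rng.standard_normal(n) + rng.standard_normal(n)) / math.sqrt(2.0)
g = beta - z                          # failure whenever G < 0
pf_mcs = float(np.mean(g < 0.0))
pf_exact = 0.5 * math.erfc(beta / math.sqrt(2.0))
print(round(pf_mcs, 4), round(pf_exact, 4))
```

The point of moment-based methods like the one in the paper is to match this accuracy with orders of magnitude fewer performance-function evaluations than the 200,000 used here.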
DEFF Research Database (Denmark)
Hasheminamin, Maryam; Agelidis, Vassilios; Ahmadi, Abdollah
2018-01-01
Voltage rise (VR) due to reverse power flow is an important obstacle for high integration of photovoltaics (PV) into residential networks. This paper introduces and elaborates a novel index-based single-point reactive power control (SPRPC) methodology to mitigate voltage rise in a system with high r/x ratio by absorbing adequate reactive power from one selected point. The proposed index utilizes short circuit analysis to select the best point at which to apply this Volt/Var control method. SPRPC is supported technically and financially by the distribution network operator, which makes it cost effective, simple and efficient. The efficacy, effectiveness and cost of SPRPC are compared to droop control to evaluate its advantages.
DEFF Research Database (Denmark)
Barfod, Adrian; Straubhaar, Julien; Høyer, Anne-Sophie
2017-01-01
the incorporation of elaborate datasets and provides a framework for stochastic hydrostratigraphic modelling. This paper focuses on comparing three MPS methods: snesim, DS and iqsim. The MPS methods are tested and compared on a real-world hydrogeophysical survey from Kasted in Denmark, which covers an area of 45 km...... soft data variable. The computation time of 2-3 h for snesim was in between DS and iqsim. The snesim implementation used here is part of the Stanford Geostatistical Modeling Software, or SGeMS. The snesim setup was not trivial, with numerous parameter settings, usage of multiple grids and a search tree...
Multiple intramedullary nailing of proximal phalangeal fractures of hand
Directory of Open Access Journals (Sweden)
Patankar Hemant
2008-01-01
Full Text Available Background: Proximal phalangeal fractures are commonly encountered fractures of the hand. The majority are stable and can be treated by non-operative means. However, unstable fractures, i.e. those with shortening, displacement, angulation, rotational deformity or segmental fractures, need surgical intervention. This prospective study was undertaken to evaluate the functional outcome after surgical stabilization of these fractures with a joint-sparing multiple intramedullary nailing technique. Materials and Methods: Thirty-five patients with 35 isolated unstable proximal phalangeal shaft fractures of the hand were managed by surgical stabilization with the multiple intramedullary nailing technique. Fractures of the thumb were excluded. All patients were followed up for a minimum of six months and were assessed radiologically and clinically. The clinical evaluation was based on two criteria: (1) total active range of motion for digital functional assessment, as suggested by the American Society for Surgery of the Hand, and (2) grip strength. Results: All patients showed radiological union at six weeks. The overall results were excellent in all patients. Adventitious bursitis was observed at the point of insertion of the nails in one patient. Conclusion: Joint-sparing multiple intramedullary nailing of unstable proximal phalangeal fractures of the hand provides satisfactory results with good functional outcome and few complications.
Directory of Open Access Journals (Sweden)
Takashi Fuse
2017-12-01
Full Text Available Three-dimensional (3D road maps have garnered significant attention recently because of applications such as autonomous driving. For 3D road maps to remain accurate and up-to-date, an appropriate updating method is crucial. However, there are currently no updating methods with both satisfactorily high frequency and accuracy. An effective strategy would be to frequently acquire point clouds from regular vehicles, and then take detailed measurements only where necessary. However, there are three challenges when using data from regular vehicles. First, the accuracy and density of the points are comparatively low. Second, the measurement ranges vary for different measurements. Third, tentative changes such as pedestrians must be discriminated from real changes. The method proposed in this paper consists of registration and change detection methods. We first prepare the synthetic data obtained from regular vehicles using mobile mapping system data as a base reference. We then apply our proposed change detection method, in which the occupancy grid method is integrated with Dempster–Shafer theory to deal with occlusions and tentative changes. The results show that the proposed method can detect road environment changes, and it is easy to find changed parts through visualization. The work contributes towards sustainable updates and applications of 3D road maps.
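The Dempster-Shafer combination step at the heart of the change detection above can be sketched for a single grid cell with the frame {occupied, free}: evidence from two passes is fused with Dempster's rule, which renormalizes after discarding the conflicting mass. The mass values below are made up for illustration:

```python
# Dempster's rule of combination over the frame {occupied, free}, in the
# spirit of fusing occupancy evidence from repeated drives. 'unknown'
# carries the mass assigned to the whole frame.
def combine(m1, m2):
    """Combine two basic mass assignments over {'occ', 'free', 'unknown'}."""
    # Conflict: one source says occupied while the other says free.
    conflict = m1['occ'] * m2['free'] + m1['free'] * m2['occ']
    k = 1.0 - conflict                 # normalization constant
    out = {}
    out['occ'] = (m1['occ'] * m2['occ'] + m1['occ'] * m2['unknown']
                  + m1['unknown'] * m2['occ']) / k
    out['free'] = (m1['free'] * m2['free'] + m1['free'] * m2['unknown']
                   + m1['unknown'] * m2['free']) / k
    out['unknown'] = m1['unknown'] * m2['unknown'] / k
    return out

# Drive 1 weakly sees an obstacle; drive 2 sees it more strongly.
m1 = {'occ': 0.5, 'free': 0.2, 'unknown': 0.3}
m2 = {'occ': 0.7, 'free': 0.1, 'unknown': 0.2}
fused = combine(m1, m2)
print({key: round(v, 3) for key, v in fused.items()})
```

A pedestrian passing through the cell produces a one-off conflicting observation, which this fusion largely discounts; a real change accumulates consistent mass across drives.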
Directory of Open Access Journals (Sweden)
Wei Liu
2016-12-01
Full Text Available High-accuracy surface measurement of large aviation parts is a significant guarantee of high-quality aircraft assembly, and boundary measurement results are a significant parameter in aviation-part measurement. This paper proposes a method for accurately measuring the surface and boundary of an aviation part using feature compression extraction and a directed edge-point criterion. To improve the measurement accuracy of both the surface and the boundary of large parts, a global boundary extraction method is combined with feature analysis of the local stripe. The center feature of the laser stripe is obtained with high accuracy and little calculation using a sub-pixel centroid extraction method based on compression processing, which consists of an image compression step and a judgment criterion for laser stripe centers. An edge-point extraction method based on a directed arc-length criterion is proposed to obtain an accurate boundary. Finally, a high-precision reconstruction of the aerospace part is achieved. Experiments were performed both in a laboratory and in an industrial field. The physical measurements validate that the mean distance deviation of the proposed method is 0.47 mm, and the field experiments show the validity of the proposed method.
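The sub-pixel centroid idea for locating a laser stripe centre can be sketched on a synthetic image: in each column, the centre is the intensity-weighted mean row index, with a simple threshold standing in (very loosely) for the paper's compression step. All numbers below are assumed:

```python
import numpy as np

# Column-wise sub-pixel centroid of a synthetic laser stripe: the stripe
# is a Gaussian profile centred at a non-integer row, and the centroid
# recovers that centre to sub-pixel accuracy.
h, w = 64, 32
rows = np.arange(h, dtype=float)
true_center = 20.25                    # sub-pixel row of the stripe
profile = np.exp(-0.5 * ((rows[:, None] - true_center) / 1.5) ** 2)
img = np.repeat(profile, w, axis=1)    # identical stripe in every column

# Crude thresholding so only bright stripe pixels contribute (assumed
# threshold; the paper uses a compression-based selection instead).
weights = np.where(img > 0.1, img, 0.0)
centers = (rows[:, None] * weights).sum(axis=0) / weights.sum(axis=0)

print(round(float(centers[0]), 3))
```

Even with whole-pixel sampling, the weighted mean lands within a fraction of a pixel of the true stripe centre, which is the reason centroid methods are standard for stripe extraction.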
An automated method for the evaluation of the pointing accuracy of Sun-tracking devices
Baumgartner, Dietmar J.; Pötzi, Werner; Freislich, Heinrich; Strutzmann, Heinz; Veronig, Astrid M.; Rieder, Harald E.
2017-03-01
The accuracy of solar radiation measurements, for direct (DIR) and diffuse (DIF) radiation, depends significantly on the precision of the operational Sun-tracking device. Thus, rigid targets for instrument performance and operation have been specified for international monitoring networks, e.g., the Baseline Surface Radiation Network (BSRN) operating under the auspices of the World Climate Research Program (WCRP). Sun-tracking devices that fulfill these accuracy requirements are available from various instrument manufacturers; however, none of the commercially available systems comprise an automatic accuracy control system allowing platform operators to independently validate the pointing accuracy of Sun-tracking sensors during operation. Here we present KSO-STREAMS (KSO-SunTRackEr Accuracy Monitoring System), a fully automated, system-independent, and cost-effective system for evaluating the pointing accuracy of Sun-tracking devices. We detail the monitoring system setup, its design and specifications, and the results from its application to the Sun-tracking system operated at the Kanzelhöhe Observatory (KSO) Austrian radiation monitoring network (ARAD) site. The results from an evaluation campaign from March to June 2015 show that the tracking accuracy of the device operated at KSO lies within BSRN specifications (i.e., 0.1° tracking accuracy) for the vast majority of observations (99.8 %). The evaluation of manufacturer-specified active-tracking accuracies (0.02°), during periods with direct solar radiation exceeding 300 W m-2, shows that these are satisfied in 72.9 % of observations. Tracking accuracies are highest during clear-sky conditions and on days where prevailing clear-sky conditions are interrupted by frontal movement; in these cases, we obtain the complete fulfillment of BSRN requirements and 76.4 % of observations within manufacturer-specified active-tracking accuracies. Limitations to tracking surveillance arise during overcast conditions and
GOCE in ocean modelling - Point mass method applied on GOCE gravity gradients
DEFF Research Database (Denmark)
Herceg, Matija
2009-01-01
This presentation is an introduction to my Ph.D project. The main objective of the study is to improve the methodology for combining GOCE gravity field models with satellite altimetry to derive optimal dynamic ocean topography models for oceanography. Here a method for geoid determination using...
A Bayesian MCMC method for point process models with intractable normalising constants
DEFF Research Database (Denmark)
Berthelsen, Kasper Klitgaard; Møller, Jesper
2004-01-01
We present new methodology for drawing samples from a posterior distribution when the likelihood function is only specified up to a normalising constant. Our method is "on-line" as compared with alternative approaches to the problem which require "off-line" computations. Since it is needed...
A method for finding the ridge between saddle points applied to rare event rate estimates
DEFF Research Database (Denmark)
Maronsson, Jon Bergmann; Jónsson, Hannes; Vegge, Tejs
2012-01-01
to the path. The method is applied to Al adatom diffusion on the Al(100) surface to find the ridge between 2-, 3- and 4-atom concerted displacements and hop mechanisms. A correction to the harmonic approximation of transition state theory was estimated by direct evaluation of the configuration integral along...
Directory of Open Access Journals (Sweden)
П.В. Артамонов
2008-03-01
Full Text Available This article describes the findings of an investigation of a dynamic method for measuring the hinge moments of strain-gauged rudder surfaces located on a model half-wing of an airplane. The measurements were carried out in a wind tunnel while continuously moving the model through angles of attack in the selected range.
Directory of Open Access Journals (Sweden)
Braud Isabelle
2017-09-01
Full Text Available Topsoil field-saturated hydraulic conductivity, Kfs, is a parameter that controls the partition of rainfall between infiltration and runoff and is a key parameter in most distributed hydrological models. There is a mismatch between the scale of local in situ Kfs measurements and the scale at which the parameter is required in models for regional mapping, so methods for extrapolating local Kfs values to larger mapping units are required. The paper explores the feasibility of mapping Kfs in the Cévennes-Vivarais region, in south-east France, using more easily available GIS data concerning geology and land cover. Our analysis makes use of a data set of infiltration measurements performed in the area and its vicinity over more than ten years. The data set is composed of Kfs values derived from infiltration measurements performed using various methods: Guelph permeameters, double-ring and single-ring infiltrometers, and tension infiltrometers. The different methods resulted in a large variation in Kfs of up to several orders of magnitude. A method is proposed to pool the data from the different infiltration methods to create an equivalent set of Kfs. Statistical tests showed significant differences in Kfs distributions as a function of geological formation and land cover, so the mapping of Kfs at the regional scale was based on geological formations and land cover. This map was compared to a map based on the Rawls and Brakensiek (RB) pedotransfer function (mainly based on texture), and the two maps showed very different patterns. The RB values did not fit the observed equivalent Kfs at the local scale, highlighting that soil texture alone is not a good predictor of Kfs.
Lin, Claire Yilin; Veneziani, Alessandro; Ruthotto, Lars
2017-10-26
We present novel numerical methods for polyline-to-point-cloud registration and their application to patient-specific modeling of deployed coronary artery stents from image data. Patient-specific coronary stent reconstruction is an important challenge in computational hemodynamics and relevant to the design and improvement of the prostheses. It is an invaluable tool in large-scale clinical trials that computationally investigate the effect of new generations of stents on hemodynamics and eventually tissue remodeling. Given a point cloud of strut positions, which can be extracted from images, our stent reconstruction method aims at finding a geometrical transformation that aligns a model of the undeployed stent to the point cloud. Mathematically, we describe the undeployed stent as a polyline, a piecewise linear object defined by its vertices and edges. We formulate the nonlinear registration as an optimization problem whose objective function consists of a similarity measure, quantifying the distance between the polyline and the point cloud, and a regularization functional, penalizing undesired transformations. Using projections of points onto the polyline structure, we derive novel distance measures. Our formulation supports most commonly used transformation models, including very flexible nonlinear deformations. We also propose two regularization approaches ensuring the smoothness of the estimated nonlinear transformation. We demonstrate the potential of our methods using an academic 2D example and a real-life 3D bioabsorbable stent reconstruction problem. Our results show that the registration problem can be solved to sufficient accuracy within seconds using only a few Gauss-Newton iterations. Copyright © 2017 John Wiley & Sons, Ltd.
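The projection of a point onto a polyline, the geometric building block of the distance measures described above, can be sketched in 2D. The polyline and query points below are toy data, not a stent model:

```python
import numpy as np

# Distance from a point to a polyline: project the point onto each
# segment (clamping to the segment's endpoints) and keep the minimum.
def point_to_polyline(p, verts):
    p = np.asarray(p, float)
    verts = np.asarray(verts, float)
    best = np.inf
    for a, b in zip(verts[:-1], verts[1:]):
        ab = b - a
        t = np.dot(p - a, ab) / np.dot(ab, ab)
        t = np.clip(t, 0.0, 1.0)       # stay within the segment
        best = min(best, float(np.linalg.norm(p - (a + t * ab))))
    return best

square = [(0, 0), (1, 0), (1, 1), (0, 1), (0, 0)]  # closed toy polyline
print(round(point_to_polyline((0.5, -0.3), square), 6))  # nearest: bottom edge
print(round(point_to_polyline((2.0, 0.0), square), 6))   # nearest: corner (1,0)
```

Summing such distances (suitably weighted) over all cloud points gives a similarity measure of the kind the registration objective minimizes.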
Development of Precise Point Positioning Method Using Global Positioning System Measurements
Directory of Open Access Journals (Sweden)
Byung-Kyu Choi
2011-09-01
Full Text Available Precise point positioning (PPP) is increasingly used in several areas, such as monitoring crustal movement and maintaining the international terrestrial reference frame, using global positioning system (GPS) measurements. The accuracy of PPP data processing has increased owing to the use of more precise satellite orbit/clock products. In this study we developed a PPP algorithm that processes data collected by a GPS receiver. Measurement error modelling, including the tropospheric error and the tidal model, was considered to improve the positioning accuracy. An extended Kalman filter was employed to estimate state parameters such as the positioning information and float ambiguities. For verification, we compared our results with those of an International GNSS Service analysis center. The mean errors of the estimated position in the East-West, North-South and Up-Down directions over the five days were 0.9 cm, 0.32 cm and 1.14 cm at the 95% confidence level.
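The Kalman filtering idea can be sketched in its most minimal scalar form: a static position is refined from noisy measurements while the state covariance shrinks. This is only a one-dimensional illustration with made-up numbers, not the PPP filter with its tropospheric, tidal and ambiguity states:

```python
import numpy as np

# Minimal scalar Kalman filter: estimate a constant position from noisy
# measurements. With a static state, the prediction step leaves the
# estimate and its variance unchanged; only the update step acts.
rng = np.random.default_rng(42)
true_pos = 3.7                        # metres (made-up)
meas_std = 0.5
zs = true_pos + rng.normal(0.0, meas_std, size=200)

x, p = 0.0, 100.0                     # initial estimate and variance
r = meas_std ** 2                     # measurement noise variance
for zk in zs:
    kgain = p / (p + r)               # Kalman gain
    x = x + kgain * (zk - x)          # measurement update
    p = (1.0 - kgain) * p             # covariance update

print(round(x, 2), round(p, 5))
```

The same predict/update cycle, with a much larger state vector and nonlinear measurement models handled by the extended Kalman filter, underlies the PPP estimator described in the abstract.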
Directory of Open Access Journals (Sweden)
Byung-Kyu Choi
2012-09-01
Full Text Available Kinematic global positioning system precise point positioning (GPS PPP) technology is widely used in several areas, such as monitoring crustal movement and precise orbit determination (POD), using dual-frequency GPS observations. In this study we developed a kinematic PPP technology and applied a 3-pass (forward/backward/forward) filter to stabilize the initial state of the parameters to be estimated. For verification, we obtained GPS data sets from six international GPS reference stations (ALGO, AMC2, BJFS, GRAZ, IENG and TSKB) and processed them on a daily basis using the developed software. The mean position errors of the kinematic PPP were 0.51 cm in the east-west direction, 0.31 cm in the north-south direction and 1.02 cm in the up-down direction. The corresponding root mean square values were 1.59 cm for the east-west component, 1.26 cm for the north-south component and 2.95 cm for the up-down component.
Leveraging Data Fusion Strategies in Multireceptor Lead Optimization MM/GBSA End-Point Methods.
Knight, Jennifer L; Krilov, Goran; Borrelli, Kenneth W; Williams, Joshua; Gunn, John R; Clowes, Alec; Cheng, Luciano; Friesner, Richard A; Abel, Robert
2014-08-12
Accurate and efficient affinity calculations are critical to enhancing the contribution of in silico modeling during the lead optimization phase of a drug discovery campaign. Here, we present a large-scale study of the efficacy of data fusion strategies to leverage results from end-point MM/GBSA calculations in multiple receptors to identify potent inhibitors among an ensemble of congeneric ligands. The retrospective analysis of 13 congeneric ligand series curated from publicly available data across seven biological targets demonstrates that in 90% of the individual receptor structures MM/GBSA scores successfully identify subsets of inhibitors that are more potent than a random selection, and data fusion strategies that combine MM/GBSA scores from each of the receptors significantly increase the robustness of the predictions. Among nine different data fusion metrics based on consensus scores or receptor rankings, the SumZScore (i.e., converting MM/GBSA scores into standardized Z-Scores within a receptor and computing the sum of the Z-Scores for a given ligand across the ensemble of receptors) is found to be a robust and physically meaningful metric for combining results across multiple receptors. Perhaps most surprisingly, even with relatively low to modest overall correlations between SumZScore and experimental binding affinities, SumZScore tends to reliably prioritize subsets of inhibitors that are at least as potent as those that are prioritized from a "best" single receptor identified from known compounds within the congeneric series.
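The SumZScore metric itself is straightforward to reproduce: standardize the MM/GBSA scores within each receptor, then sum the Z-scores per ligand across receptors. The score matrix below is made up (rows are receptors, columns are ligands; more negative MM/GBSA scores are better):

```python
import numpy as np

# SumZScore fusion: Z-score the scores within each receptor (row), then
# sum per ligand (column). The most negative fused score is the
# predicted most potent ligand. Scores are illustrative, not real data.
scores = np.array([
    [-45.0, -52.0, -38.0, -60.0],    # receptor A
    [-30.0, -41.0, -29.0, -44.0],    # receptor B
    [-55.0, -50.0, -48.0, -58.0],    # receptor C
])

z = (scores - scores.mean(axis=1, keepdims=True)) / scores.std(axis=1, keepdims=True)
sum_z = z.sum(axis=0)                # one fused score per ligand
best = int(np.argmin(sum_z))         # most negative = predicted most potent
print(best)
```

Standardizing within each receptor before summing is what makes scores from structurally different receptors commensurable, which is the physical motivation the authors give for the metric.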
Taheri, Navid; Rezasoltani, Asghar; Okhovatian, Farshad; Karami, Mehdi; Hosseini, Sayed Mohsen; Kouhzad Mohammadi, Hosein
2016-07-01
Myofascial pain syndrome (MPS) is a neuromuscular dysfunction consisting of both motor and sensory abnormalities. Considering the high prevalence of MPS and its related disabilities and costs, this study was designed to determine the reliability of new ultrasonographic indexes of the upper trapezius muscle, as well as the sensitivity and specificity of 2D ultrasound imaging for diagnostic purposes. Furthermore, we sought to evaluate the effectiveness of dry needling (DN) on the studied ultrasonographic indexes. This study will be performed in two steps with two different designs. The first is a pilot study, designed as a semi-experimental study to determine the sensitivity and specificity of ultrasonography for the diagnosis of MPS and the reliability of ultrasonographic measurements such as muscle thickness, area of myofascial trigger points (MTrPs) in longitudinal view, echogenicity of MTrPs in longitudinal view, echogenicity of muscle with MTrPs in longitudinal and transverse views, and pennation angle of the upper trapezius muscle. The second is an interventional study designed to investigate the effectiveness of DN on the ultrasonographic measurements for which reliability was determined in the first study. We will quantify the effectiveness of DN on MTrPs and muscle tissue by using novel ultrasonographic indexes. The results of the current study will provide baseline information for designing further interventional studies to improve the evaluation of other treatments of MPS. Copyright © 2015 Elsevier Ltd. All rights reserved.
Liu, Xiao-Na; Zheng, Qiu-Sheng; Che, Xiao-Qing; Wu, Zhi-Sheng; Qiao, Yan-Jiang
2017-03-01
The blending end-point determination of Angong Niuhuang Wan (AGNH) is a key technological problem. The control strategy based on the quality by design (QbD) concept proposed here provides a whole-blending end-point determination method and a methodology for blending Chinese materia medica containing mineral substances. Based on the QbD concept, laser-induced breakdown spectroscopy (LIBS) was used in a pilot-scale experiment to assess the blending of cinnabar, realgar and pearl powder in AGNH, and especially the whole-blending end point. The blending variability of the three mineral medicines was measured by the moving window relative standard deviation (MWRSD) based on LIBS. The time profiles of realgar and pearl powder were not completely consistent, but all components reached even blending at the last blending stage, so the whole-blending end point could be determined. LIBS is a promising process analytical technology (PAT) for process control. Unlike other elemental determination technologies, such as ICP-OES, LIBS does not need an elaborate digestion procedure, making it a promising and rapid technique for understanding the blending process of Chinese materia medica (CMM) containing cinnabar, realgar and other mineral traditional Chinese medicines. This study proposed a novel method for the study of a large variety of traditional Chinese medicines. Copyright© by the Chinese Pharmaceutical Association.
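The MWRSD statistic used to call the blending end point can be sketched directly: within a moving window over the elemental intensity series, compute the relative standard deviation std/mean and declare homogeneity once it drops below a threshold. The intensity series, window length and 2% threshold below are assumed, not real LIBS data:

```python
import numpy as np

# Moving window relative standard deviation (MWRSD): RSD = std/mean over
# a sliding window; blending is declared even once RSD stays below a
# chosen threshold. Readings below are synthetic element intensities.
def mwrsd(values, win):
    out = []
    for i in range(len(values) - win + 1):
        w = np.asarray(values[i:i + win], float)
        out.append(float(w.std(ddof=1) / w.mean()))
    return out

# Intensity settling toward a uniform blend (made-up readings)
intensity = [9.0, 5.5, 8.0, 6.2, 7.1, 6.9, 7.0, 7.1, 6.95, 7.05, 7.0]
rsd = mwrsd(intensity, win=4)
end_point = next(i for i, v in enumerate(rsd) if v < 0.02)  # 2% threshold
print(end_point)
```

In the multi-component setting of the paper, the whole-blending end point is the first window index at which every tracked mineral's MWRSD is below the threshold simultaneously.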
Preliminary phytochemical screening, proximate and elemental
African Journals Online (AJOL)
DR. AMINU
ABSTRACT. The seed powder of Moringa oleifera was analysed for its phytochemical, proximate and elemental composition using the Folin-Denis spectrophotometric method, the gravimetric method and the energy dispersive X-ray fluorescence (EDXRF) transmission emission technique, respectively. The seed powder had the ...
An Entry Point for Formal Methods: Specification and Analysis of Event Logs
Directory of Open Access Journals (Sweden)
Howard Barringer
2010-03-01
Full Text Available Formal specification languages have long languished, due to the grave scalability problems faced by complete verification methods. Runtime verification promises to use formal specifications to automate part of the more scalable art of testing, but has not been widely applied to real systems, and often falters due to the cost and complexity of instrumentation for online monitoring. In this paper we discuss work in progress to apply an event-based specification system to the logging mechanism of the Mars Science Laboratory mission at JPL. By focusing on log analysis, we exploit the "instrumentation" already implemented and required for communicating with the spacecraft. We argue that this work both shows a practical method for using formal specifications in testing and opens interesting research avenues, including a challenging specification learning problem.
DEFF Research Database (Denmark)
Barfod, Adrian
The PhD thesis presents a new method for analyzing the relationship between resistivity and lithology, as well as a method for quantifying the hydrostratigraphic modeling uncertainty related to Multiple-Point Statistical (MPS) methods. Three-dimensional (3D) geological models are im… … in two publicly available databases, the JUPITER and GERDA databases, which contain borehole and geophysical data, respectively. The large amounts of available data provided a unique opportunity for studying the resistivity-lithology relationship. The method for analyzing the resistivity… … from a deterministic 3D geological model of the study area. The stochastic ensemble modeling approach is used to compare three different MPS methods (Paper II). However, visually comparing a large set of 3D hydrostratigraphic models is no trivial task. Therefore, a quantitative comparison technique…
Directory of Open Access Journals (Sweden)
Antonio Roberto Balbo
2012-01-01
Full Text Available This paper proposes a predictor-corrector primal-dual interior point method which introduces line search procedures (IPLS in both the predictor and corrector steps. The Fibonacci search technique is used in the predictor step, while an Armijo line search is used in the corrector step. The method is developed for application to the economic dispatch (ED problem studied in the field of power systems analysis. The theory of the method is examined for quadratic programming problems and involves the analysis of iterative schemes, computational implementation, and issues concerning the adaptation of the proposed algorithm to solve ED problems. Numerical results are presented, which demonstrate improvements and the efficiency of the IPLS method when compared to several other methods described in the literature. Finally, postoptimization analyses are performed for the solution of ED problems.
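The Armijo search used in the corrector step is the standard sufficient-decrease backtracking rule. A generic sketch follows (not the IPLS implementation; the parameter values sigma and beta are illustrative defaults):

```python
import numpy as np

def armijo_step(f, grad_f, x, d, alpha0=1.0, beta=0.5, sigma=1e-4, max_iter=50):
    """Backtracking (Armijo) line search: shrink alpha until f decreases sufficiently."""
    alpha = alpha0
    fx = f(x)
    slope = grad_f(x) @ d  # directional derivative; d must be a descent direction
    for _ in range(max_iter):
        if f(x + alpha * d) <= fx + sigma * alpha * slope:
            return alpha
        alpha *= beta
    return alpha

# Example: one Armijo-damped step on a simple quadratic.
f = lambda x: 0.5 * x @ x
grad = lambda x: x
x = np.array([3.0, -4.0])
alpha = armijo_step(f, grad, x, -grad(x))  # the full step already satisfies the rule here
```

In an interior-point setting the same rule is applied to a merit function along the predictor or corrector direction, additionally capped so the iterate stays strictly feasible.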
High precision micro-scale Hall Effect characterization method using in-line micro four-point probes
DEFF Research Database (Denmark)
Petersen, Dirch Hjorth; Hansen, Ole; Lin, Rong
2008-01-01
Accurate characterization of ultra shallow junctions (USJ) is important in order to understand the principles of junction formation and to develop the appropriate implant and annealing technologies. We investigate the capabilities of a new micro-scale Hall effect measurement method where Hall effect is measured with collinear micro four-point probes (M4PP). We derive the sensitivity to electrode position errors and describe a position error suppression method to enable rapid reliable Hall effect measurements with just two measurement points. We show with both Monte Carlo simulations and experimental measurements that the repeatability of a micro-scale Hall effect measurement is better than 1%. We demonstrate the ability to spatially resolve Hall effect on the micro-scale by characterization of an USJ with a single laser stripe anneal. The micro sheet resistance variations resulting from…
Zaffina, S; Camisa, V; Poscia, A; Tucci, M G; Montaldi, V; Cerabona, V; Wachocka, M; Moscato, U
2012-01-01
Several studies have shown that occupational exposure to anesthetic gases might be higher during pediatric surgery, probably due to the increased use of inhalational induction techniques. Our study aims to assess the level of exposure to sevoflurane in two pediatric surgery rooms, using a multi-point sampling method for environmental monitoring. The gas concentrations as well as their dispersion were measured at strategic points in the rooms for a total of 44 surgical interventions. Although the average of these concentrations was rather low (1.32, SD +/- 1.55 ppm), the results documented a significant difference in distribution kinetics inside the rooms as a function of multiple factors, among which were the anesthetic technique used and the team involved. The method described therefore allows the spread of anesthetic gases to be analyzed correctly and suggests a different risk stratification, which may depend on the professionals' way of working.
Energy Technology Data Exchange (ETDEWEB)
Huang, Zhenyu; Zhou, Ning; Tuffner, Francis K.; Chen, Yousu; Trudnowski, Daniel J.; Diao, Ruisheng; Fuller, Jason C.; Mittelstadt, William A.; Hauer, John F.; Dagle, Jeffery E.
2010-10-18
Small signal stability problems are one of the major threats to grid stability and reliability in the U.S. power grid. An undamped mode can cause large-amplitude oscillations and may result in system breakups and large-scale blackouts. There have been several incidents of system-wide oscillations. Of those incidents, the most notable is the August 10, 1996 western system breakup, a result of undamped system-wide oscillations. Significant efforts have been devoted to monitoring system oscillatory behaviors from measurements in the past 20 years. The deployment of phasor measurement units (PMU) provides high-precision, time-synchronized data needed for detecting oscillation modes. Measurement-based modal analysis, also known as ModeMeter, uses real-time phasor measurements to identify system oscillation modes and their damping. Low damping indicates potential system stability issues. Modal analysis has been demonstrated with phasor measurements to have the capability of estimating system modes from both oscillation signals and ambient data. With more and more phasor measurements available and ModeMeter techniques maturing, there is yet a need for methods to bring modal analysis from monitoring to actions. The methods should be able to associate low damping with grid operating conditions, so operators or automated operation schemes can respond when low damping is observed. The work presented in this report aims to develop such a method and establish a Modal Analysis for Grid Operation (MANGO) procedure to aid grid operation decision making to increase inter-area modal damping. The procedure can provide operation suggestions (such as increasing generation or decreasing load) for mitigating inter-area oscillations.
Modeling of Semiconductors and Correlated Oxides with Point Defects by First Principles Methods
Wang, Hao
2014-06-15
Point defects in silicon, vanadium dioxide, and doped ceria are investigated by density functional theory. Defects involving vacancies and interstitial oxygen and carbon in silicon are often formed in outer space and significantly affect device performance. The screened hybrid functional of Heyd-Scuseria-Ernzerhof is used to calculate formation energies, binding energies, and electronic structures of the defective systems, because standard density functional theory underestimates the band gap of silicon. The results indicate a −2 charge state for the A-center. Tin is proposed to be an effective dopant to suppress the formation of A-centers. For the total energy difference between the A- and B-type carbon-related G-centers we find close agreement with experiment. The results indicate that the C-type G-center is more stable than both the A- and B-types. The electronic structures of the monoclinic and rutile phases of vanadium dioxide are also studied using the Heyd-Scuseria-Ernzerhof functional. The ground states of the pure phases obtained by calculations including spin polarization disagree with the experimental observations that the monoclinic phase should not be magnetic, the rutile phase should be metallic, and the monoclinic phase should have a lower total energy than the rutile phase. By tuning the Hartree-Fock fraction α to 10%, the agreement with experiment is improved in terms of band gaps and relative energies of the phases. A calculation scheme is proposed to simulate the relationship between the transition temperature of the metal-insulator transition and the dopant concentration in tungsten-doped vanadium dioxide, achieving good agreement with experiment. 18.75% and 25% yttrium, lanthanum, praseodymium, samarium, and gadolinium doped ceria supercells generated by the special quasirandom structure approach are employed to investigate the impact of doping on O diffusion. The experimental behavior of the conductivity for the
DEFF Research Database (Denmark)
Sokoler, Leo Emil; Frison, Gianluca; Edlund, Kristian
2013-01-01
In this paper, we develop an efficient interior-point method (IPM) for the linear programs arising in economic model predictive control of linear systems. The novelty of our algorithm is that it combines a homogeneous and self-dual model, and a specialized Riccati iteration procedure. We test the algorithm in a conceptual study of power systems management. Simulations show that in comparison to state-of-the-art software implementations of IPMs, our method is significantly faster and scales in a favourable way.
Directory of Open Access Journals (Sweden)
Jitpeera Thanyarat
2011-01-01
Full Text Available We introduce a new iterative method for finding a common element of the set of solutions of a mixed equilibrium problem, the set of solutions of the variational inequality for a -inverse-strongly monotone mapping, and the set of fixed points of a family of finitely nonexpansive mappings in a real Hilbert space, by using the viscosity and Cesàro mean approximation method. We prove that the sequence converges strongly to a common element of the above three sets under some mild conditions. Our results improve and extend the corresponding results of Kumam and Katchang (2009), Peng and Yao (2009), Shimizu and Takahashi (1997), and others.
Yehia, Ali M.
2013-05-01
A new, simple, specific, accurate and precise spectrophotometric technique utilizing ratio spectra was developed for the simultaneous determination of two different binary mixtures. The developed ratio H-point standard addition method (RHPSAM) successfully resolved the spectral overlap in the itopride hydrochloride (ITO) and pantoprazole sodium (PAN) binary mixture, as well as in the mosapride citrate (MOS) and PAN binary mixture. The theoretical background and advantages of the newly proposed method are presented. The calibration curves are linear over the concentration ranges of 5-60 μg/mL, 5-40 μg/mL and 4-24 μg/mL for ITO, MOS and PAN, respectively. Specificity of the method was investigated and relative standard deviations were less than 1.5. The accuracy, precision and repeatability of the proposed method were also investigated according to ICH guidelines.
Unique construction methods used to expand Australia's Hay Point Port
Energy Technology Data Exchange (ETDEWEB)
McRobert, J.D.
1977-04-01
The expanded port was to accommodate a throughput of approximately 20,000,000 tons per year. This expansion required an extra rail loop and two-car tippler, two new rail-mounted stacker reclaimer units installed on two more double stockpile areas--each served by one stacker reclaimer, one on-shore surge bin of 1,000 tons capacity, a new approach conveyor over the existing conveyor trestle, and a new berth and shiploader. The new rail unloading system was identical to the original system of a twin car McDowell Wellman design. The new Dravo stacker reclaimers were each designed for an average output of 3,000 tons per hour. The new stockpile yard conveyors were interconnected with the old system at both ends of the approach trestle for maximum flexibility of loading. The on-land stockpile storage was increased to more than 3,000 tons. The new berth shiploader was designed to load ships at the rate of 6,000 tons per hour. Feed from the surge bins minimized deviation from the mean shiploading rate. Construction methods are described.
Ghasemi, Elham; Kaykhaii, Massoud
2015-01-01
A fast, simple, and economical method was developed for the simultaneous spectrophotometric determination of uranium(VI) and vanadium(V) in water samples based on micro cloud point extraction (MCPE) at room temperature. This is the first report on the simultaneous extraction and determination of U(VI) and V(V). In this method, Triton X-114 was employed as a non-ionic surfactant for the cloud point procedure and 4-(2-pyridylazo)resorcinol (PAR) was used as the chelating agent for both analytes. To reach the cloud point at room temperature, the MCPE procedure was carried out in brine. The factors influencing the extraction efficiency were investigated and optimized. Under the optimized conditions, the calibration curves were linear in the concentration ranges of 100-750 and 50-600 μg L(-1) for U(VI) and V(V), respectively, with limits of detection of 17.03 μg L(-1) (U) and 5.51 μg L(-1) (V). Total analysis time, including the microextraction, was less than 5 min.
Yamanouchi, Tsuneaki; Horiuchi, Kenichi; Ishii, Kazunari; Mimura, Yasuhiko; Kato, Atsushi; Adachi, Isao
2014-01-01
The adsorption of Bevacizumab, Trastuzumab, Rituximab, Nedaplatin, Vincristine sulfate, Nogitecan hydrochloride, Actinomycin D and Ramosetron hydrochloride to 0.2 μm endotoxin-retentive in-line filters was evaluated at pediatric doses by UV spectrophotometry. The results indicated some drug adsorption with Nogitecan hydrochloride, Actinomycin D and Ramosetron hydrochloride, and good recovery with the other five drugs. For the three drugs which showed losses, drug recovery was investigated at multiple test doses. The approximation formula for each drug's adsorption was recorded as Y=100-A/X (X: dose (mg), Y: recovery rate (%), A: a constant for the individual drug). The results showed a high correlation between the reciprocal of the test drug dose and the recovery rate. Furthermore, in the cases where adsorption to the filter was observed, it was found to be possible to determine the relationship between dose and recovery rate from a filterability test with a one-point pediatric dose. Since the recovery rate obtained from the approximation formula with multiple doses and that calculated from the prediction formula with a one-point pediatric dose were almost the same, it was concluded that it is not necessary to conduct filterability tests with multiple doses. We have shown that using UV spectrophotometry and carrying out a filterability test at a one-point pediatric dose is a relatively easy method that reduces effort and expense. This method for analysis of drug adsorption is extremely useful when using in-line filters with infusion therapy.
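The reported relationship Y = 100 - A/X makes the one-point procedure explicit: a single pediatric-dose measurement determines A, which then predicts the recovery rate at any other dose. The sketch below uses hypothetical numbers, not values from the study:

```python
def fit_adsorption_constant(dose_mg, recovery_pct):
    """Solve Y = 100 - A/X for the drug-specific constant A from one test point."""
    return (100.0 - recovery_pct) * dose_mg

def predicted_recovery(dose_mg, A):
    """Recovery rate Y (%) predicted by Y = 100 - A/X at dose X (mg)."""
    return 100.0 - A / dose_mg

# Hypothetical one-point pediatric test: 90% recovery observed at a 0.5 mg dose.
A = fit_adsorption_constant(0.5, 90.0)
```

Adsorption losses matter most at low doses: with the hypothetical A above, predicted recovery rises from 90% at 0.5 mg to 99% at 5 mg, mirroring the dose dependence the study describes.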
Shayanfar, Mohsen Ali; Barkhordari, Mohammad Ali; Roudak, Mohammad Amin
2017-06-01
Monte Carlo simulation (MCS) is a useful tool for computing the probability of failure in reliability analysis. However, the large number of required random samples makes it time-consuming. The response surface method (RSM) is another common method in reliability analysis. Although RSM is widely used for its simplicity, it cannot be trusted in highly nonlinear problems due to its linear nature. In this paper, a new efficient algorithm is proposed, employing the combination of importance sampling, as a class of MCS, and RSM. In the proposed algorithm, the analysis starts with importance sampling concepts and a proposed two-step updating rule for the design point. This part finishes after a small number of samples are generated. Then RSM starts to work using Bucher's experimental design, with the last design point and a proposed effective length as the center point and radius of Bucher's approach, respectively. Through illustrative numerical examples, the simplicity and efficiency of the proposed algorithm and the effectiveness of the proposed rules are shown.
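For context, the crude MCS baseline that the proposed algorithm improves upon estimates the failure probability P[g(X) <= 0] by direct sampling; the limit-state function g below is a toy example, not one from the paper:

```python
import numpy as np

def mcs_failure_probability(g, n=100_000, dim=2, seed=0):
    """Crude Monte Carlo estimate of P[g(X) <= 0] for independent standard normal X."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal((n, dim))
    return float(np.mean(g(x) <= 0.0))

# Toy linear limit state: failure iff x1 > 3, so Pf = 1 - Phi(3), about 1.35e-3.
g = lambda x: 3.0 - x[:, 0]
pf = mcs_failure_probability(g)
```

The small Pf illustrates the paper's motivation: crude MCS needs on the order of 10^5 samples here just to observe a few hundred failures, which is what importance sampling centered on the design point is meant to avoid.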
Ghane, Alireza; Mazaheri, Mehdi; Mohammad Vali Samani, Jamal
2016-09-15
The pollution of rivers due to accidental spills is a major threat to the environment and human health. To protect river systems from accidental spills, it is essential to introduce a reliable tool for the identification process. The Backward Probability Method (BPM) is one of the most recommended tools, able to provide information on the prior location and the release time of the pollution. This method was originally developed and employed in groundwater pollution source identification problems. One objective of this study is to apply this method to identifying the pollution source location and release time in surface waters, mainly in rivers. To accomplish this task, a numerical model is developed based on adjoint analysis. The developed model is then verified using an analytical solution and some real data. The second objective of this study is to extend the method to pollution source identification in river networks. In this regard, a hypothetical test case is considered. In these simulations, all of the suspected points are identified using only one backward simulation. The results demonstrated that all suspected points determined by the BPM could be possible pollution sources. The proposed approach is accurate and computationally efficient and does not need any simplification of river geometry and flow. Due to this simplicity, it is highly recommended for practical purposes. Copyright © 2016. Published by Elsevier Ltd.
Yasokawa, Toshiki; Ishimaru, Ichirou; Kondo, Masahiro; Kuriyama, Shigeki; Masaki, Tsutomu; Takegawa, Kaoru; Tanaka, Naotaka
2007-07-01
This paper describes a method for measuring the three-dimensional (3D) refractive-index distribution in a single cell. The method can be used to observe the distribution of cell components without fluorescence staining. The two-dimensional optical path length distributions from multiple directions are obtained by non-contact rotation of the cell. These optical path lengths are converted into the line integrals of the refractive index, and the 3D refractive-index distribution is reconstructed by means of computed tomography. The refractive-index distribution in a breast cancer cell can be measured using a phase-shifting Mach-Zehnder interferometer in conjunction with proximal two-beam optical tweezers.
Diagnosis of Proximal Caries in Primary Molars with DIAGNOdent pen
Ermler, Romy
2010-01-01
Proximal surfaces, together with fissures, are the areas where most primary caries occur. Due to the anatomy of the deciduous molars, proximal caries cannot be detected at an early stage in crowded teeth by simply using a mirror and probe. Therefore, additional methods to find early proximal caries have to be used. KaVo uses laser fluorescence to detect caries. Originally, the DIAGNOdent devices were able to detect only occlusal caries (56, 61, 62, 65, 66). New results are now also available ...
Harper, Lane; Powell, Jeff; Pijl, Em M
2017-07-31
Given the current opioid crisis around the world, harm reduction agencies are seeking to help people who use drugs to do so more safely. Many harm reduction agencies are exploring techniques to test illicit drugs to identify and, where possible, quantify their constituents, allowing their users to make informed decisions. While these technologies have been used for years in Europe (Nightlife Empowerment & Well-being Implementation Project, Drug Checking Service: Good Practice Standards; Trans European Drugs Information (TEDI) Workgroup, Factsheet on Drug Checking in Europe, 2011; European Monitoring Centre for Drugs and Drug Addiction, An Inventory of On-site Pill-Testing Interventions in the EU: Fact Files, 2001), they are only now starting to be utilized in this context in North America. The goal of this paper is to describe the most common methods for testing illicit substances and then, based on this broad, encompassing review, recommend the most appropriate methods for testing at point of care. Based on our review, the best methods for point-of-care drug testing are handheld infrared spectroscopy, Raman spectroscopy, and ion mobility spectrometry; mass spectrometry is the current gold standard in forensic drug analysis. It would be prudent for agencies or clinics that can obtain the funding to contact the companies who produce these devices to discuss possible usage in a harm reduction setting. Lower tech options, such as spot/color tests and immunoassays, are limited in their use but affordable and easy to use.
Some Properties of Fuzzy Soft Proximity Spaces
Demir, İzzettin; Özbakır, Oya Bedre
2015-01-01
We study the fuzzy soft proximity spaces in Katsaras's sense. First, we show how a fuzzy soft topology is derived from a fuzzy soft proximity. Also, we define the notion of fuzzy soft δ-neighborhood in the fuzzy soft proximity space, which offers an alternative approach to the study of fuzzy soft proximity spaces. Later, we obtain the initial fuzzy soft proximity determined by a family of fuzzy soft proximities. Finally, we investigate the relationship between fuzzy soft proximities and proximities. PMID:25793224
Complications in proximal humeral fractures.
Calori, Giorgio Maria; Colombo, Massimiliano; Bucci, Miguel Simon; Fadigati, Piero; Colombo, Alessandra Ines Maria; Mazzola, Simone; Cefalo, Vittorio; Mazza, Emilio
2016-10-01
Necrosis of the humeral head, infections and non-unions are among the most dangerous and difficult-to-treat complications of proximal humeral fractures. The aim of this work was to analyse non-unions and post-traumatic bone defects in detail and to suggest an algorithm of care. Treatment options are based not only on the radiological picture, but also on a detailed analysis of the patient, who is classified using a risk factor analysis. This method enables the surgeon to choose the most suitable treatment for the patient, thereby facilitating return of function in the shortest possible time. The treatment of such serious complications requires the surgeon to be knowledgeable about the following possible solutions: increased mechanical stability; biological stimulation; and reconstructive techniques in two steps, with application of biotechnologies and prosthetic substitution. Copyright © 2016 Elsevier Ltd. All rights reserved.
Siahkouhian, M; Meamarbashi, A
2013-02-01
The aim of the present study was to compare the heart rate deflection point (HRDP) determined by the long distance maximum (L.Dmax) and short distance maximum (S.Dmax) methods with plasma lactate measurements as the criterion. Fifteen healthy and active male volunteers, aged 20-24, were selected as subjects and performed the exhaustive testing protocol on a calibrated electronically braked cycle ergometer. To determine the HRDP, each subject's data was recorded during the exercise test and analyzed by a designed computer program. Venous blood samples were drawn for the measurement of plasma lactate concentration by a direct method. A downward inflection of HRDP was noticed in all subjects. Comparison of the S.Dmax and L.Dmax methods with the criterion (lactate) method showed that while HRDP determined by the S.Dmax and lactate methods was not significantly different (167±8.83 vs. 168±8.17 b/min; P=0.86), a significant difference emerged between HRDP determined by the L.Dmax and lactate methods (167±8.83 vs. 139.56±6.73 b/min; P≤0.001). Bland-Altman plots revealed good agreement between the S.Dmax and lactate methods (95% CI=-5 to +3.6 b/min), while there was no agreement between the L.Dmax and lactate methods (95% CI=+4.9 to +71.3 b/min). A significant correlation was observed between the criterion and the S.Dmax model (r=0.944), whereas there was no significant correlation between the criterion and the L.Dmax model (r=0.158). Based on these results, it could be suggested that the S.Dmax method is an accurate and reliable alternative to the cumbersome, expensive, and time-consuming lactate method.
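The Dmax family of methods locates the deflection point as the sample farthest from the chord joining the endpoints of the heart-rate/workload curve (the S.Dmax and L.Dmax variants differ in the data range over which the chord is drawn). A generic sketch, not the authors' program:

```python
import numpy as np

def dmax_index(x, y):
    """Index of the point with maximal perpendicular distance from the endpoint chord."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    p0 = np.array([x[0], y[0]])
    chord = np.array([x[-1], y[-1]]) - p0
    chord = chord / np.linalg.norm(chord)
    pts = np.stack([x, y], axis=1) - p0
    # 2D cross product with the unit chord gives the perpendicular distance to the line
    dist = np.abs(pts[:, 0] * chord[1] - pts[:, 1] * chord[0])
    return int(np.argmax(dist))
```

On a curve that rises linearly and then flattens, the returned index falls where the curvature changes, which is the behavior the Dmax criterion exploits for HRDP detection.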
Proximate, mineral composition, antioxidant activity, and total ...
African Journals Online (AJOL)
Four varieties of red pepper fruits (Capsicum species) were evaluated for chemical composition, antioxidant activity and total phenolic contents using standard analytical techniques, the ferric-ion reducing antioxidant potential (FRAP) assay and the Folin-Ciocalteu method, respectively. The proximate composition values ...
Phytochemical Screening, Proximate and Mineral Composition of ...
African Journals Online (AJOL)
Leaves of sweet potato (Ipomoea batatas) grown in the Tepi area were studied for their class of phytochemicals, mineral and proximate composition using standard analytical methods. The phytochemical screening revealed the presence of alkaloids, flavonoids, terpenoids, saponins, quinones, phenols, tannins, amino acids and ...
Disability occurrence and proximity to death
Klijs, Bart; Mackenbach, Johan P.; Kunst, Anton E.
2010-01-01
Purpose. This paper aims to assess whether disability occurrence is related more strongly to proximity to death than to age. Method. Self-reported disability and vital status were available from six annual waves and a subsequent 12-year mortality follow-up of the Dutch GLOBE longitudinal study.
comparative proximate composition and antioxidant vitamins
African Journals Online (AJOL)
DR. AMINU
ABSTRACT. The proximate composition and antioxidant vitamin analysis of two varieties of honey (dark amber and light amber) were carried out using standard methods. The values for moisture, ash, crude lipid, crude protein and crude carbohydrate contents of the two honeys (light amber and dark amber) are 9.39 ...
Preliminary Phytochemical Screening, Elemental and Proximate ...
African Journals Online (AJOL)
The study aimed at the phytochemical screening and the elemental and proximate composition of two varieties of Cyperus esculentus (tiger nut), big yellow and small brown nuts, using standard methods. The phytochemicals tested for were alkaloid, saponin, tannin, glycoside, flavonoid, steroid and resin. All the aforementioned ...
Phytochemical screening, proximate and elemental analysis of ...
African Journals Online (AJOL)
Michael Horsfall
2009). The aim of this study was to analyse the extract of Citrus sinensis peels for the phytochemical, proximate and elemental composition. MATERIALS AND METHODS. Plant materials: Fresh peels of Citrus sinensis were collected from Uselu market in Benin City, Edo State, Nigeria. It was identified and authenticated by.
Keldysh proximity action for disordered superconductors
Indian Academy of Sciences (India)
Abstract. We review a novel approach to the superconductive proximity effect in disordered normal–superconducting (N–S) structures. The method is based on the multicharge Keldysh action and is suitable for the treatment of interaction and fluctuation effects. As an application of the formalism, we study the subgap ...
The non-operative resin treatment of proximal caries lesions.
Ekstrand, Kim; Martignon, Stefania; Bakhshandeh, Azam; Ricketts, David N J
2012-11-01
Epidemiological data show that the prevalence of caries on proximal surfaces in need of operative treatment is very high around the world, in both the primary and the permanent dentition. This article presents two new treatment methods: proximal sealing and proximal infiltration. The indications are progressing proximal caries lesions, radiographically with a depth around the enamel-dentine junction. A small number of studies regarding the effect of sealing and infiltration on proximal caries, versus the use of fluoride varnish, placebo treatment and flossing instructions, have been carried out. About half of the studies disclose no significant difference between test and control treatments. In the other half, the therapeutic effect is significant and corresponds to about a 30% reduction in lesion progression. However, longitudinal studies of longer duration are lacking. Proximal sealing and proximal infiltration may have a place in the treatment of non-cavitated proximal lesions. Proximal caries is a problem in both primary and permanent dentitions. Proximal sealants or lesion infiltration are possible treatments.
DEFF Research Database (Denmark)
McKnight, Ursula S.; Sonne, Anne Thobo; Fjordbøge, Annika Sidelmann
2013-01-01
an increasingly important activity for the hydrogeological investigations of rivers and streams. In cases where groundwater contaminant plumes are discharging to streams, determination of flow paths and groundwater fluxes is essential for evaluating the transport, fate and potential impact of the plume… …by two major polluting point sources, Grindsted factory and Grindsted landfill, representing two of the 43 large-scale contaminated sites in Denmark. Our overall aim was therefore to (i) test the applicability of different methods for mapping groundwater pollution as it enters streams at a complex site…
DEFF Research Database (Denmark)
Sokoler, Leo Emil; Skajaa, Anders; Frison, Gianluca
2013-01-01
In this paper, we present a warm-started homogeneous and self-dual interior-point method (IPM) for the linear programs arising in economic model predictive control (MPC) of linear systems. To exploit the structure in the optimization problems, our algorithm utilizes a Riccati iteration procedure… …algorithm in MATLAB and its performance is analyzed based on a smart grid power management case study. Closed loop simulations show that 1) our algorithm is significantly faster than state-of-the-art IPMs based on sparse linear algebra routines, and 2) warm-starting reduces the number of iterations…
DEFF Research Database (Denmark)
Li, Kai; Wei, Min; Xie, Chuan
2017-01-01
In order to control the neutral point voltage of inverters with discontinuous PWM (DPWM), this paper proposes a generalized discontinuous PWM (GDPWM) based neutral point voltage balancing method for the three-level neutral point clamped (NPC) voltage source inverter (VSI). Firstly, a triangle carrier ...
Industrial Computed Tomography using Proximal Algorithm
Zang, Guangming
2016-04-14
In this thesis, we present ProxiSART, a flexible proximal framework for robust 3D cone beam tomographic reconstruction based on the Simultaneous Algebraic Reconstruction Technique (SART). We derive the proximal operator for the SART algorithm and use it for minimizing the data term in a proximal algorithm. We show the flexibility of the framework by plugging in different powerful regularizers, and show its robustness in achieving better reconstruction results in the presence of noise and using fewer projections. We compare our framework to state-of-the-art methods and existing popular software tomography reconstruction packages, on both synthetic and real datasets, and show superior reconstruction quality, especially from noisy data and a small number of projections.
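The proximal pattern underlying ProxiSART can be sketched generically. The snippet below is plain ISTA-style proximal gradient for an ℓ1-regularized least-squares problem, standing in for the thesis's SART-derived data term and its regularizers; it illustrates the technique, not ProxiSART itself:

```python
import numpy as np

def prox_l1(v, t):
    """Proximal operator of t*||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def proximal_gradient(A, b, lam=0.1, iters=200):
    """Minimize 0.5*||Ax - b||^2 + lam*||x||_1 by proximal gradient (ISTA)."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1/L, L = Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - b)  # gradient of the least-squares data term
        x = prox_l1(x - step * grad, step * lam)
    return x
```

The design point is the same as in the thesis: the data term is handled by a gradient or proximal step, while the regularizer enters only through its proximal operator, so regularizers can be swapped without touching the reconstruction machinery.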
Aucouturier, Julien; Rance, Mélanie; Meyer, Martine; Isacco, Laurie; Thivel, David; Fellmann, Nicole; Duclos, Martine; Duché, Pascale
2009-01-01
We aimed to examine the interchangeability of techniques used to assess maximal oxygen consumption (VO2max) and maximal aerobic power (MAP) employed to express the maximal fat oxidation point in obese children and adolescents. Rates of fat oxidation were measured in 24 obese subjects (13.0 +/- 2.4 years; Body Mass Index 30.2 +/- 6.3 kg m(-2)) who performed a five-stage (4 min per stage) submaximal incremental cycling exercise. A second cycling exercise was performed to measure VO2max. Results are those of the 20 children who achieved the RER criterion (>1.02) for the attainment of VO2max. Although correlations between results obtained by different methods were strong, Bland-Altman plots showed little agreement between the maximal fat oxidation point expressed as a percentage of measured VO2max and as % VO2max estimated according to ACSM guidelines (underestimation: -5.9%) or using the predictive equations of Wasserman (-13.9%). Despite a mean underestimation of 1.4%, several values were out of the limits of agreement when comparing measured MAP and theoretical MAP. Estimations of VO2max lead to underestimations of the maximal fat oxidation point.
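The Bland-Altman analysis used in these comparisons reduces to the mean difference (bias) between paired measurements and its 95% limits of agreement. A minimal sketch; the paired values in the test are invented for illustration:

```python
import numpy as np

def bland_altman(a, b):
    """Mean bias and 95% limits of agreement between two measurement methods."""
    diff = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    bias = diff.mean()
    sd = diff.std(ddof=1)  # sample SD of the paired differences
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)
```

Agreement is judged by whether the limits are narrow enough to be clinically acceptable, which is why strongly correlated methods (as in the study above) can still disagree in the Bland-Altman sense.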
PROXIMITY MANAGEMENT IN CRISIS CONDITIONS
Directory of Open Access Journals (Sweden)
Ion Dorin BUMBENECI
2010-01-01
Full Text Available The purpose of this study is to evaluate the level of assimilation of the terms "Proximity Management" and "Proximity Manager", both in the specialized literature and in practice. The study has two parts: a theoretical study of the two terms, and an evaluation of the use of proximity management in 32 companies in Gorj, Romania. The evaluation covers 27 companies with fewer than 50 employees and 5 companies with more than 50 employees.
Directory of Open Access Journals (Sweden)
Ana Tobar
Full Text Available BACKGROUND: Obesity is associated with glomerular hyperfiltration, increased proximal tubular sodium reabsorption, glomerular enlargement and renal hypertrophy. A single experimental study reported an increased glomerular urinary space in obese dogs. Whether proximal tubular volume is increased in obese subjects and whether their glomerular and tubular urinary spaces are enlarged is unknown. OBJECTIVE: To determine whether proximal tubules and glomerular and tubular urinary space are enlarged in obese subjects with proteinuria and glomerular hyperfiltration. METHODS: Kidney biopsies from 11 non-diabetic obese patients with proteinuria and 14 non-diabetic lean patients with a creatinine clearance above 50 ml/min and with mild or no interstitial fibrosis were retrospectively analyzed using morphometric methods. The cross-sectional area of the proximal tubular epithelium and lumen, the volume of the glomerular tuft and of Bowman's space and the nuclei number per tubular profile were estimated. RESULTS: Creatinine clearance was higher in the obese than in the lean group (P=0.03). Proteinuria was similarly increased in both groups. Compared to the lean group, the obese group displayed a 104% higher glomerular tuft volume (P=0.001), a 94% higher Bowman's space volume (P=0.003), a 33% higher cross-sectional area of the proximal tubular epithelium (P=0.02) and a 54% higher cross-sectional area of the proximal tubular lumen (P=0.01). The nuclei number per proximal tubular profile was similar in both groups, suggesting that the increase in tubular volume is due to hypertrophy and not to hyperplasia. CONCLUSIONS: Obesity-related glomerular hyperfiltration is associated with proximal tubular epithelial hypertrophy and increased glomerular and tubular urinary space volume in subjects with proteinuria. The expanded glomerular and urinary space is probably a direct consequence of glomerular hyperfiltration. These effects may be involved in the pathogenesis of obesity
Directory of Open Access Journals (Sweden)
Md Nabiul Islam Khan
Full Text Available In the Point-Centred Quarter Method (PCQM), the mean distance of the first nearest plants in each quadrant of a number of random sample points is converted to plant density. It is a quick method for plant density estimation. In recent publications the estimator equations of simple PCQM (PCQM1) and higher order ones (PCQM2 and PCQM3, which use the distance of the second and third nearest plants, respectively) show discrepancy. This study attempts to review PCQM estimators in order to find the most accurate equation form. We tested the accuracy of different PCQM equations using Monte Carlo Simulations in simulated (having 'random', 'aggregated' and 'regular' spatial patterns) plant populations and empirical ones. PCQM requires at least 50 sample points to ensure a desired level of accuracy. PCQM with a corrected estimator is more accurate than with a previously published estimator. The published PCQM versions (PCQM1, PCQM2 and PCQM3) show significant differences in accuracy of density estimation, i.e. the higher order PCQM provides higher accuracy. However, the corrected PCQM versions show no significant differences among them as tested in various spatial patterns except in plant assemblages with a strong repulsion (plant competition). If N is number of sample points and R is distance, the corrected estimator of PCQM1 is 4(4N - 1)/(π ∑ R2) but not 12N/(π ∑ R2), of PCQM2 is 4(8N - 1)/(π ∑ R2) but not 28N/(π ∑ R2) and of PCQM3 is 4(12N - 1)/(π ∑ R2) but not 44N/(π ∑ R2) as published. If the spatial pattern of a plant association is random, PCQM1 with a corrected equation estimator and over 50 sample points would be sufficient to provide accurate density estimation. PCQM using just the nearest tree in each quadrant is therefore sufficient, which facilitates sampling of trees, particularly in areas with just a few hundred trees per hectare. PCQM3 provides the best density estimations for all types of plant assemblages including the repulsion process
Khan, Md Nabiul Islam; Hijbeek, Renske; Berger, Uta; Koedam, Nico; Grueters, Uwe; Islam, S M Zahirul; Hasan, Md Asadul; Dahdouh-Guebas, Farid
2016-01-01
In the Point-Centred Quarter Method (PCQM), the mean distance of the first nearest plants in each quadrant of a number of random sample points is converted to plant density. It is a quick method for plant density estimation. In recent publications the estimator equations of simple PCQM (PCQM1) and higher order ones (PCQM2 and PCQM3, which uses the distance of the second and third nearest plants, respectively) show discrepancy. This study attempts to review PCQM estimators in order to find the most accurate equation form. We tested the accuracy of different PCQM equations using Monte Carlo Simulations in simulated (having 'random', 'aggregated' and 'regular' spatial patterns) plant populations and empirical ones. PCQM requires at least 50 sample points to ensure a desired level of accuracy. PCQM with a corrected estimator is more accurate than with a previously published estimator. The published PCQM versions (PCQM1, PCQM2 and PCQM3) show significant differences in accuracy of density estimation, i.e. the higher order PCQM provides higher accuracy. However, the corrected PCQM versions show no significant differences among them as tested in various spatial patterns except in plant assemblages with a strong repulsion (plant competition). If N is number of sample points and R is distance, the corrected estimator of PCQM1 is 4(4N - 1)/(π ∑ R2) but not 12N/(π ∑ R2), of PCQM2 is 4(8N - 1)/(π ∑ R2) but not 28N/(π ∑ R2) and of PCQM3 is 4(12N - 1)/(π ∑ R2) but not 44N/(π ∑ R2) as published. If the spatial pattern of a plant association is random, PCQM1 with a corrected equation estimator and over 50 sample points would be sufficient to provide accurate density estimation. PCQM using just the nearest tree in each quadrant is therefore sufficient, which facilitates sampling of trees, particularly in areas with just a few hundred trees per hectare. PCQM3 provides the best density estimations for all types of plant assemblages including the repulsion process
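The corrected estimators quoted in the abstract can be written as one small function. This is a sketch of the formulas as stated (4(4kN - 1)/(π ∑ R²) for the k-th order PCQM), not the authors' own code:

```python
import math

def pcqm_density(distances, n_points, order=1):
    """Corrected PCQM density estimator (per the abstract above).

    distances : the 4*N point-to-plant distances R (one per quadrant
        per sample point) to the `order`-th nearest plant.
    n_points  : number of random sample points N.
    order     : 1, 2 or 3 for PCQM1 / PCQM2 / PCQM3.

    PCQM1: 4(4N - 1)  / (pi * sum R^2)
    PCQM2: 4(8N - 1)  / (pi * sum R^2)
    PCQM3: 4(12N - 1) / (pi * sum R^2)
    """
    sum_r2 = sum(r * r for r in distances)
    numerator = 4 * (4 * order * n_points - 1)
    return numerator / (math.pi * sum_r2)
```

For example, with N = 50 sample points (200 quadrant distances) the function returns the estimated plant density per unit area in the square of the distance unit used.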
Directory of Open Access Journals (Sweden)
Jose Ernie C. Lope
2013-12-01
Full Text Available In their 2012 work, Lope, Roque, and Tahara considered singular nonlinear partial differential equations of the form tu_t = F(t, x, u, u_x), where the function F is assumed to be continuous in t and holomorphic in the other variables. They have shown that under some growth conditions on the coefficients of the partial Taylor expansion of F as t → 0, the equation has a unique solution u(t, x) with the same growth order as that of F(t, x, 0, 0). Koike considered systems of partial differential equations using the Banach fixed point theorem and the iterative method of Nishida and Nirenberg. In this paper, we prove the result obtained by Lope and others using the method of Koike, thereby avoiding the repetitive step of differentiating a recursive equation with respect to x as was done by the aforementioned authors.
Directory of Open Access Journals (Sweden)
David I Flores
Full Text Available The automatic identification of catalytic residues remains an important challenge in structural bioinformatics. Sequence-based methods are good alternatives when the query shares a high percentage of identity with a well-annotated enzyme. However, when the homology is not apparent, which occurs with many structures from the structural genomics initiative, structural information should be exploited. A local structural comparison is preferred to a global structural comparison when predicting functional residues. CMASA is a recently proposed method for predicting catalytic residues based on local structure comparison. The method achieves high accuracy and a high Matthews correlation coefficient. However, point substitutions or a lack of relevant data strongly affect its performance. In the present study, we propose a simple extension of the CMASA method to overcome this difficulty. Extensive computational experiments are presented as proof-of-concept instances, as well as a few real cases. The results show that the extension performs well when the catalytic site contains mutated residues or when some residues are missing. The proposed modification correctly predicted the catalytic residues of a mutant thymidylate synthase, 1EVF. It also successfully predicted the catalytic residues for 3HRC despite the lack of information for a relevant side-chain atom in the PDB file.
Directory of Open Access Journals (Sweden)
T. R. Jordana
2016-06-01
Full Text Available Documentation of the three-dimensional (3D) cultural landscape has traditionally been conducted during site visits using conventional photographs, standard ground surveys and manual measurements. In recent years, there have been rapid developments in technologies that produce highly accurate 3D point clouds, including aerial LiDAR, terrestrial laser scanning, and photogrammetric data reduction from unmanned aerial systems (UAS) images and hand-held photographs using Structure from Motion (SfM) methods. These 3D point clouds can be precisely scaled and used to conduct measurements of features even after the site visit has ended. As a consequence, it is becoming increasingly possible to collect non-destructive data for a wide variety of cultural site features, including landscapes, buildings, vegetation, artefacts and gardens. As part of a project for the U.S. National Park Service, a variety of data sets have been collected for the Wormsloe State Historic Site, near Savannah, Georgia, USA. In an effort to demonstrate the utility and versatility of these methods at a range of scales, comparisons of the features mapped with different techniques will be discussed with regards to accuracy, data set completeness, cost and ease-of-use.
Sato, Hiroyuki; Hirakawa, Akihiro; Hamada, Chikuma
2016-10-15
The paradigm of oncology drug development is expanding from developing cytotoxic agents to developing biological or molecularly targeted agents (MTAs). Although it is common for the efficacy and toxicity of cytotoxic agents to increase monotonically with dose escalation, the efficacy of some MTAs may exhibit non-monotonic patterns in their dose-efficacy relationships. Many adaptive dose-finding approaches in the available literature account for the non-monotonic dose-efficacy behavior by including additional model parameters. In this study, we propose a novel adaptive dose-finding approach based on binary efficacy and toxicity outcomes in phase I trials for monotherapy using an MTA. We develop a dose-efficacy model, the parameters of which are allowed to change in the vicinity of the change point of the dose level, in order to consider the non-monotonic pattern of the dose-efficacy relationship. The change point is obtained as the dose that maximizes the log-likelihood of the assumed dose-efficacy and dose-toxicity models. The dose-finding algorithm is based on the weighted Mahalanobis distance, calculated using the posterior probabilities of efficacy and toxicity outcomes. We compare the operating characteristics between the proposed and existing methods and examine the sensitivity of the proposed method by simulation studies under various scenarios. Copyright © 2016 John Wiley & Sons, Ltd.
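The dose-finding algorithm above ranks doses by a weighted Mahalanobis distance; the weighting via posterior probabilities is specific to the paper and omitted here, but the underlying distance itself can be sketched as:

```python
import numpy as np

def mahalanobis(x, mean, cov):
    """Mahalanobis distance of point x from a distribution with the
    given mean and covariance: sqrt((x - mean)^T cov^{-1} (x - mean))."""
    d = np.asarray(x, dtype=float) - np.asarray(mean, dtype=float)
    # Solve cov @ y = d rather than inverting cov explicitly.
    return float(np.sqrt(d @ np.linalg.solve(np.asarray(cov, dtype=float), d)))

# With identity covariance it reduces to the Euclidean distance.
print(mahalanobis([1.0, 1.0], [0.0, 0.0], np.eye(2)))  # sqrt(2)
```

Intuitively, the covariance rescales each direction, so doses whose (efficacy, toxicity) profile deviates along a low-variance direction are penalized more heavily than Euclidean distance would suggest.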
A three-point backward finite-difference method has been derived for a system of mixed hyperbolic-parabolic (convection-diffusion) partial differential equations (mixed PDEs). The method resorts to the three-point backward differenci...
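For reference, the standard second-order three-point backward difference (an assumption here, since the record above is truncated before the derivation) approximates f'(x) from the current point and the two preceding grid points:

```python
def backward_diff(f, x, h):
    """Three-point (second-order) backward finite difference for f'(x):
    f'(x) ≈ (3 f(x) - 4 f(x - h) + f(x - 2h)) / (2 h)
    """
    return (3 * f(x) - 4 * f(x - h) + f(x - 2 * h)) / (2 * h)

# Check against a known derivative: d/dx x^3 = 3x^2, so f'(1) = 3.
approx = backward_diff(lambda x: x ** 3, 1.0, 1e-3)
print(approx)  # close to 3.0, with O(h^2) error
```

Backward (upwind-style) stencils of this kind are commonly used for the convection term of mixed convection-diffusion systems because they only reference points behind the current node.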
Determination of Proximate Composition and Amino Acid Profile of ...
African Journals Online (AJOL)
The proximate composition and amino acid profile of the seed of 30 Nigerian sesame genotypes were determined based on the standard methods of the Association of Official Analytical Chemists (AOAC) and the Sequential Multi-sample amino acid Analyzer (TSM). Proximate analysis showed that sesame seed contained ...
Management of proximal interphalangeal joint dislocations in athletes.
Bindra, Randy R; Foster, Brian J
2009-08-01
Proximal interphalangeal joint dislocations are common athletic injuries. In dislocations and fracture dislocations, the most important treatment principle is congruent joint reduction and maintenance of stability. This article reviews the relevant anatomy, injury characteristics, and treatment options for proximal interphalangeal joint dislocations and fracture dislocations. Treatment methods discussed include closed reduction, percutaneous fixation, and open reduction.
Proximate composition and mineral contents of Pebbly fish, Alestes ...
African Journals Online (AJOL)
ACSS
The objective of this study was to determine the proximate composition and mineral contents of A. ... and also develop suitable processing method. This study determined the proximate composition and mineral contents of A. baremoze fillets based on fish size. Materials and .... moisture contents can vary with sex of the fish ...
Internal fixation of proximal humerus fractures using the T2-proximal humeral nail.
Popescu, Dragos; Fernandez-Valencia, Jenaro A; Rios, Moisés; Cuñé, Jordi; Domingo, Anna; Prat, Salvi
2009-09-01
Surgical management of proximal humerus fractures remains controversial and there is an increasing interest in intramedullary nailing. Created to improve on previous designs, the T2-proximal humeral nail (PHN) (Stryker) has recently been released, and the English literature lacks a series evaluating its results. We present a prospective clinical study evaluating this implant for proximal humeral fractures. We evaluated the functional and radiological results and possible complications. Twenty-nine patients with displaced fractures of the proximal humerus were treated with this nail. One patient was lost to follow-up right after surgery and was excluded from the assessment. Eighteen patients were older than 70 years. There were 21 fractures of the proximal part of the humerus and 7 fractures that also involved the shaft; 15 of the fractures were two-part fractures (surgical neck), 5 were three-part fractures, and 1 was a four-part fracture. All fractures healed in a mean period of 2.7 months. There was one delayed union that healed in 4 months. One case of avascular necrosis of the humeral head was observed (a four-part fracture), but it remained asymptomatic and did not require further treatment. In one case, back-out of one proximal screw was observed. A final evaluation with a minimum 1-year follow-up was performed by an independent observer; in 18 patients, the mean Constant score was 65.7, or 76.1% after adjustment for age and gender; in 19 patients, the mean Oxford Shoulder Score was 21.7. The results obtained with the T2-PHN nail indicate that it represents a safe and reliable method for the treatment of two- and three-part fractures of the proximal humerus. The proximal fixation mechanism diminishes the rate of screw back-out, a frequent complication described in the literature. Better functional results were obtained in patients younger than 70 years, but the difference was not statistically significant.
Eskandari, Habibollah
2006-02-01
The H-point standard addition method (HPSAM) has been applied to the simultaneous determination of palladium and cobalt at trace levels, using disodium 1-nitroso-2-naphthol-3,6-disulphonate (nitroso-R salt) as a selective chromogenic reagent. At neutral pH, palladium and cobalt form red complexes with nitroso-R in aqueous solution, making spectrophotometric monitoring possible. Simultaneous determination of palladium and cobalt was performed by HPSAM combined with first-derivative spectrophotometry. First-derivative signals at the two pairs of wavelengths, 523 and 589 nm or 513 and 554 nm, were monitored with the addition of standard solutions of palladium or cobalt, respectively. The method can accurately determine palladium/cobalt ratios from 1:10 to 15:1 (wt/wt). The accuracy and reproducibility of the method were evaluated for various known amounts of palladium and cobalt in binary mixtures. To investigate the selectivity of the method and to ensure that no serious interferences occur, the effects of diverse ions on the determination of palladium and cobalt were also studied. The recommended procedure was successfully applied to real and synthetic cobalt or palladium alloys, B-complex ampoules, a palladium-charcoal mixture and real water matrices.
Roberts, T Edward; Bridge, Thomas C; Caley, M Julian; Baird, Andrew H
2016-01-01
Understanding patterns in species richness and diversity over environmental gradients (such as altitude and depth) is an enduring component of ecology. As most biological communities feature few common and many rare species, quantifying the presence and abundance of rare species is a crucial requirement for analysis of these patterns. Coral reefs present specific challenges for data collection, with limitations on time and site accessibility making efficiency crucial. Many commonly used methods, such as line intercept transects (LIT), are poorly suited to questions requiring the detection of rare events or species. Here, an alternative method for surveying reef-building corals is presented: the point count transect (PCT). The PCT consists of a count of coral colonies at a series of sample stations located at regular intervals along a transect. In contrast, the LIT records the proportion of each species occurring under a transect tape of a given length. The same site was surveyed using PCT and LIT to compare species richness estimates between the methods. The total number of species increased faster per individual sampled and per unit of time invested using PCT. Furthermore, 41 of the 44 additional species recorded by the PCT occurred ≤ 3 times, demonstrating the increased capacity of PCT to detect rare species. PCT provides a more accurate estimate of local-scale species richness than the LIT, and is an efficient alternative method for surveying reef corals to address questions associated with alpha-diversity, and rare or incidental events.
Directory of Open Access Journals (Sweden)
Mohammad Hosein Soruraddin
2011-01-01
Full Text Available A simple, rapid, and sensitive spectrophotometric method for the determination of trace amounts of selenium (IV) is described. In this method, all selenium species are reduced to selenium (IV) using 6 M HCl. Cloud point extraction was applied as a preconcentration step for the spectrophotometric determination of selenium (IV) in aqueous solution. The proposed method is based on the complexation of selenium (IV) with dithizone at pH < 1 in a micellar medium (Triton X-100). After complexation with dithizone, the analyte was quantitatively extracted into the surfactant-rich phase by centrifugation and diluted to 5 mL with methanol. Since the absorption maxima of the complex (424 nm) and dithizone (434 nm) overlap, the corrected absorbance, Acorr, was used to overcome the problem. With regard to the preconcentration, the tested parameters were the pH of the extraction, the concentration of the surfactant, the concentration of dithizone, and the equilibration temperature and time. The detection limit is 4.4 ng mL-1; the relative standard deviation for six replicate measurements is 2.18% for 50 ng mL-1 of selenium. The procedure was applied successfully to the determination of selenium in two kinds of pharmaceutical samples.
Proximal caries detection: Sirona Sidexis versus Kodak Ektaspeed Plus.
Khan, Emad A; Tyndall, Donald A; Ludlow, John B; Caplan, Daniel
2005-01-01
This study compared the accuracy of intraoral film and a charge-coupled device (CCD) receptor for proximal caries detection. Four observers evaluated images of the proximal surfaces of 40 extracted posterior teeth. The presence or absence of caries was scored using a five-point confidence scale. The actual status of each surface was determined from ground section histology. Responses were evaluated by means of receiver operating characteristic (ROC) analysis. Areas under ROC curves (Az) were assessed through a paired t-test. The performance of the CCD-based intraoral sensor was not different statistically from Ektaspeed Plus film in detecting proximal caries.
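The Az statistic referenced above is the area under the ROC curve, which for rating data is equivalent to the Mann-Whitney two-sample statistic: the probability that a randomly chosen diseased surface receives a higher confidence score than a sound one (ties counting one half). A sketch with made-up five-point confidence scores:

```python
def roc_auc(scores_pos, scores_neg):
    """Area under the ROC curve (Az) via the Mann-Whitney statistic:
    the fraction of (positive, negative) pairs where the positive
    scores higher, counting ties as 1/2."""
    n_pairs = len(scores_pos) * len(scores_neg)
    wins = sum((p > n) + 0.5 * (p == n)
               for p in scores_pos for n in scores_neg)
    return wins / n_pairs

# Hypothetical confidence scores for carious vs. sound surfaces.
print(roc_auc([4, 5, 3, 5], [1, 2, 2, 3]))  # 0.96875
```

An Az of 0.5 corresponds to chance performance and 1.0 to perfect discrimination; comparing Az values between imaging modalities is what the paired t-test above operates on.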
Directory of Open Access Journals (Sweden)
Huili Jiang
2015-01-01
Conclusion: EA at combined distal and proximal acupoints, at distal points alone, and at proximal points alone attenuated the upregulation of spinal IL-1β, alleviated neuropathic pain hypersensitivity, and increased the mechanical withdrawal threshold, resulting in EA analgesia.
Mario Du Preez; T Lottering
2011-01-01
This study applied the hedonic pricing method to determine whether a disused, solid waste landfill site has an adverse effect on the prices of low-cost houses in New Brighton, a neighbourhood of the Nelson Mandela Metropole, Eastern Cape, South Africa. The results of the study show that the landfill site has a negative effect on New Brighton house prices. The average increase in house value is R36.00 per one hundred metres from the landfill site. This increase amounts to 0.44 percent of the v...
Roma, E; Bond, T; Jeffrey, P
2014-09-01
Many scientific studies have suggested that point-of-use water treatment can improve water quality and reduce the risk of infectious disease. Despite the ease of use and relatively low cost of such methods, experience shows that the benefits derived from the provision of such systems depend on recipients' acceptance of the technology and its sustained use. To date, few contributions have addressed the problem of user experience in the post-implementation phase, which can diagnose challenges that undermine system longevity and sustained use. A qualitative evaluation of two household water treatment systems, solar disinfection (SODIS) and chlorine tablets (Aquatabs), was conducted in three villages using a diagnostic tool focusing on technology performance and experience. Cross-sectional surveys and in-depth interviews were used to investigate the perceptions of the stakeholders involved (users, implementers and local government). Results show that economic and functional factors were significant in the use of SODIS, whilst perceptions of economic, taste and odour components were important in the use of Aquatabs. The conclusions address closing the gap between the factors that technology implementers and users perceive as key to the sustained deployment of point-of-use disinfection technologies.
Two-Step Proximal Gradient Algorithm for Low-Rank Matrix Completion
Directory of Open Access Journals (Sweden)
Qiuyu Wang
2016-06-01
Full Text Available In this paper, we propose a two-step proximal gradient algorithm to solve nuclear norm regularized least squares for the purpose of recovering a low-rank data matrix from a sampling of its entries. Each iteration generated by the proposed algorithm is a combination of the latest three points, namely, the previous point, the current iterate, and its proximal gradient point. This algorithm preserves the computational simplicity of the classical proximal gradient algorithm, where a singular value decomposition in the proximal operator is involved. Global convergence follows directly from results in the literature. Numerical results are reported to show the efficiency of the algorithm.
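The proximal operator of the nuclear norm is singular value thresholding (SVT), and the one-step baseline that the two-step algorithm above builds on can be sketched as follows. This is a generic illustration of proximal gradient matrix completion, not the authors' two-step extrapolation scheme:

```python
import numpy as np

def svt(X, tau):
    """Proximal operator of tau * nuclear norm: soft-threshold the
    singular values of X by tau (singular value thresholding)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def prox_gradient_completion(M, mask, lam=0.01, step=1.0, iters=200):
    """Classical proximal gradient for
        min_X  0.5 * || mask * (X - M) ||_F^2 + lam * ||X||_*
    where mask is 1 on observed entries and 0 elsewhere.
    """
    X = np.zeros_like(M)
    for _ in range(iters):
        grad = mask * (X - M)                 # gradient of the data term
        X = svt(X - step * grad, step * lam)  # proximal (SVT) step
    return X

# Toy example: recover a rank-1 matrix from ~60% of its entries.
rng = np.random.default_rng(0)
M = np.outer(rng.standard_normal(8), rng.standard_normal(8))
mask = (rng.random(M.shape) < 0.6).astype(float)
X_hat = prox_gradient_completion(M, mask)
```

The two-step method in the paper additionally extrapolates over the previous point, the current iterate, and its proximal gradient point, but each iteration still costs one SVD, exactly as here.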
Ising Ferromagnets on Proximity Graphs with Varying Disorder of the Node Placement.
Schawe, Hendrik; Norrenbrock, Christoph; Hartmann, Alexander K
2017-08-14
We perform Monte Carlo simulations to determine the critical temperatures of Ising Ferromagnets (IFM) on different types of two-dimensional proximity graphs, in which the distribution of their underlying node sets has been changed systematically by means of a parameter σ. This allows us to interpolate between regular grids and proximity graphs based on completely random placement of nodes. Each edge of the planar proximity graphs carries a weighted ferromagnetic coupling. The coupling strengths are determined via the Euclidean distances between coupled spins. The simulations are carried out on graphs with N = 16² to N = 128² nodes utilising the Wolff cluster algorithm and the parallel tempering method in a wide temperature range around the critical point to measure the Binder cumulant in order to obtain the critical temperature for different values of σ. Interestingly, the critical temperatures depend partially non-monotonously on the disorder parameter σ, corresponding to a non-monotonous change of the graph structure. For completeness, we further verify using finite-size scaling methods that the IFM on proximity graphs is, for all values of the disorder, in the same universality class as the IFM on the two-dimensional square lattice.
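The Binder cumulant used above to locate the critical temperature has the standard fourth-order form U = 1 - ⟨m⁴⟩ / (3⟨m²⟩²); curves of U(T) for different system sizes cross at T_c. A minimal sketch of the estimator from sampled magnetisations:

```python
import numpy as np

def binder_cumulant(m):
    """Fourth-order Binder cumulant U = 1 - <m^4> / (3 <m^2>^2),
    computed from a series of per-configuration magnetisations m."""
    m = np.asarray(m, dtype=float)
    m2 = np.mean(m ** 2)
    m4 = np.mean(m ** 4)
    return 1.0 - m4 / (3.0 * m2 ** 2)

# Deep in the ordered phase m = +/-1, so U -> 2/3; for a Gaussian
# (disordered-phase) magnetisation distribution, U -> 0.
print(binder_cumulant([1.0, -1.0, 1.0, -1.0]))  # ≈ 2/3
```

Because U is dimensionless and size-independent at criticality (up to corrections to scaling), the crossing point of U(T) for several lattice sizes gives T_c without fitting the order parameter itself.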
Nabwey, Hossam A.; Boumazgour, Mohamed; Rashad, A. M.
2017-07-01
The group method analysis is applied to study the steady mixed convection stagnation-point flow of a non-Newtonian nanofluid towards a vertical stretching surface. The model utilized for the nanofluid incorporates the Brownian motion and thermophoresis effects. Applying a one-parameter transformation group, which reduces the number of independent variables by one, the system of governing partial differential equations is converted to a set of nonlinear ordinary differential equations, and these equations are then solved numerically using an implicit finite-difference scheme. Comparison with previously published studies is carried out and the results are found to be in excellent agreement. Results for the velocity, temperature, and nanoparticle volume fraction profiles, as well as the local skin-friction coefficient and local Nusselt number, are presented in graphical and tabular forms and discussed for different values of the governing parameters to show interesting features of the solutions.
Energy Technology Data Exchange (ETDEWEB)
Reimann, Rene; Haack, Christian; Leuermann, Martin; Raedel, Leif; Schoenen, Sebastian; Schimp, Michael; Wiebusch, Christopher [III. Physikalisches Institut, RWTH Aachen (Germany); Collaboration: IceCube-Collaboration
2015-07-01
IceCube, a cubic-kilometer sized neutrino detector at the geographical South Pole, has recently measured a flux of high-energy astrophysical neutrinos. Although this flux has now been observed in multiple analyses, no point sources or source classes have been identified yet. Standard point-source searches test many points in the sky individually for a point source of astrophysical neutrinos and therefore incur many trials. Our approach is to additionally use the measured diffuse spectrum to constrain the number of possible point sources and their properties. Initial studies of the method's performance are shown.
Lee, Joseph G L; Henriksen, Lisa; Myers, Allison E; Dauphinee, Amanda L; Ribisl, Kurt M
2014-03-01
Over four-fifths of reported expenditures for marketing tobacco products occur at the retail point of sale (POS). To date, no systematic review has synthesised the methods used for surveillance of POS marketing. This review sought to describe the audit objectives, methods and measures used to study retail tobacco environments. We systematically searched 11 academic databases for papers indexed on or before 14 March 2012, identifying 2906 papers. Two coders independently reviewed each abstract or full text to identify papers with the following criteria: (1) data collectors visited and assessed (2) retail environments using (3) a data collection instrument for (4) tobacco products or marketing. We excluded papers where limited measures of products and/or marketing were incidental. Two abstractors independently coded included papers for research aims, locale, methods, measures used and measurement properties. We calculated descriptive statistics regarding the use of four P's of marketing (product, price, placement, promotion) and for measures of study design, sampling strategy and sample size. We identified 88 store audit studies. Most studies focus on enumerating the number of signs or other promotions. Several strengths, particularly in sampling, are noted, but substantial improvements are indicated in the reporting of reliability, validity and audit procedures. Audits of POS tobacco marketing have made important contributions to understanding industry behaviour, the uses of marketing and resulting health behaviours. Increased emphasis on standardisation and the use of theory are needed in the field. We propose key components of audit methodology that should be routinely reported.
Yin, Gaohong
2016-05-01
Since the failure of the Scan Line Corrector (SLC) instrument on Landsat 7, gaps occur in the acquired Landsat 7 imagery, impacting the spatial continuity of the observed imagery. Because of the high geometric and radiometric accuracy provided by Landsat 7, a number of approaches have been proposed to fill the gaps. However, all proposed approaches have evident constraints for universal application. The main issues in gap-filling are the inability to describe continuity features such as meandering streams or roads, or to maintain the shape of small objects when filling gaps in heterogeneous areas. The aim of this study is to validate the feasibility of using the Direct Sampling multiple-point geostatistical method, which has been shown to reconstruct complicated geological structures satisfactorily, to fill Landsat 7 gaps. The Direct Sampling method uses a conditional stochastic resampling of known locations within a target image to fill gaps and can generate multiple reconstructions for one simulation case. The method was examined across a range of land cover types, including deserts, sparse rural areas, dense farmlands, urban areas, braided rivers and coastal areas, to demonstrate its capacity to recover gaps accurately for various land cover types. Its prediction accuracy was also compared with other gap-filling approaches that have previously been demonstrated to offer satisfactory results, in both homogeneous and heterogeneous areas. The results show that the Direct Sampling method provides sufficiently accurate predictions for a variety of land cover types, from homogeneous areas to heterogeneous ones. Likewise, it exhibits superior performance when used to fill gaps in heterogeneous land cover types without an input image, or with an input image that is temporally far from the target image, in comparison with other gap-filling approaches.
Directory of Open Access Journals (Sweden)
Mario Du Preez
2011-08-01
Full Text Available This study applied the hedonic pricing method to determine whether a disused, solid waste landfill site has an adverse effect on the prices of low-cost houses in New Brighton, a neighbourhood of the Nelson Mandela Metropole, Eastern Cape, South Africa. The results of the study show that the landfill site has a negative effect on New Brighton house prices. The average increase in house value is R36.00 per one hundred metres from the landfill site. This increase amounts to 0.44 percent of the value of a house per 100 metres from the landfill. When the change in value is summed for all the properties in the sample area (allowing for variation in value change due to differing distances from the landfill site), the total disamenity effect of the landfill site is approximately R1.4 million.
Selbig, William R.; Bannerman, Roger T.
2011-01-01
The U.S. Geological Survey, in cooperation with the Wisconsin Department of Natural Resources (WDNR) and in collaboration with the Root River Municipal Stormwater Permit Group, monitored eight urban source areas representing six source-area types in or near Madison, Wis., in an effort to improve characterization of particle-size distributions in urban stormwater by use of fixed-point sample collection methods. The source-area types were parking lot, feeder street, collector street, arterial street, rooftop, and mixed use. This information can be used by environmental managers and engineers when selecting the most appropriate control devices for the removal of solids from urban stormwater. Mixed-use and parking-lot study areas had the lowest median particle sizes (42 and 54 µm, respectively), followed by the collector street study area (70 µm). Both arterial street and institutional roof study areas had similar median particle sizes of approximately 95 µm. Finally, the feeder street study area showed the largest median particle size, nearly 200 µm. Median particle sizes measured as part of this study were broadly comparable to those reported in previous studies of similar source areas. The majority of particle mass in four of the six source areas consisted of silt and clay particles less than 32 µm in size. Distributions of particles ranging up to 500 µm were highly variable both within and between source areas. The results suggest that substantial variability in the data can inhibit the development of a single particle-size distribution that is representative of stormwater runoff generated from a single source area or land use. Continued development of improved sample collection methods, such as the depth-integrated sample arm, may reduce variability in particle-size distributions by mitigating the sediment bias inherent in a fixed-point sampler.
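A median (d50) particle size of the kind reported above is conventionally read off the cumulative mass distribution at the 50% crossing. A minimal sketch, with hypothetical size classes and mass fractions (not the study's data):

```python
import numpy as np

def d50(sizes_um, mass_fractions):
    """Median particle size: interpolate where cumulative mass crosses 50%."""
    order = np.argsort(sizes_um)
    s = np.asarray(sizes_um, float)[order]
    cum = np.cumsum(np.asarray(mass_fractions, float)[order])
    cum /= cum[-1]                       # normalize to a fraction of total mass
    return float(np.interp(0.5, cum, s))

# hypothetical sieve data (size-class upper bounds in µm, mass fraction per class)
sizes = [8, 16, 32, 63, 125, 250, 500]
mass  = [0.10, 0.15, 0.20, 0.25, 0.15, 0.10, 0.05]
median = d50(sizes, mass)
```

With these invented fractions, half the mass sits below about 38 µm, consistent with the silt-and-clay-dominated distributions described for four of the six source areas.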
Kheifets, Leeka; Crespi, Catherine M; Hooper, Chris; Oksuzyan, Sona; Cockburn, Myles; Ly, Thomas; Mezei, Gabor
2015-01-01
We conducted a large epidemiologic case-control study in California to examine the association between childhood cancer risk and distance from the home address at birth to the nearest high-voltage overhead transmission line as a replication of the study of Draper et al. in the United Kingdom. We present a detailed description of the study design, methods of case ascertainment, control selection, exposure assessment and data analysis plan. A total of 5788 childhood leukemia cases and 3308 childhood central nervous system cancer cases (included for comparison) and matched controls were available for analysis. Birth and diagnosis addresses of cases and birth addresses of controls were geocoded. Distance from the home to nearby overhead transmission lines was ascertained on the basis of the electric power companies' geographic information system (GIS) databases, additional Google Earth aerial evaluation and site visits to selected residences. We evaluated distances to power lines up to 2000 m and included consideration of lower voltages (60-69 kV). Distance measures based on GIS and Google Earth evaluation showed close agreement (Pearson correlation >0.99). Our three-tiered approach to exposure assessment allowed us to achieve high specificity, which is crucial for studies of rare diseases with low exposure prevalence.
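On projected (planar) coordinates, the residence-to-line distance underlying the exposure tiers above reduces to point-to-segment geometry over the polyline segments of each transmission line. A minimal sketch (the coordinates and polyline below are hypothetical, not from the GIS databases used in the study):

```python
import math

def dist_to_segment(p, a, b):
    """Shortest distance from point p to line segment a-b (planar coords, metres)."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return math.hypot(px - ax, py - ay)
    # project p onto the segment, clamping to its endpoints
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def dist_to_line(p, segments):
    """Distance from a residence to the nearest segment of a transmission line."""
    return min(dist_to_segment(p, a, b) for a, b in segments)

line = [((0, 0), (1000, 0)), ((1000, 0), (1000, 1000))]   # hypothetical polyline
d = dist_to_line((200, 150), line)                         # residence 150 m away
```

Repeating this over all lines within a 2000 m buffer of a geocoded birth address gives the exposure distance used for each case and control.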
Directory of Open Access Journals (Sweden)
Nuttada Panpradist
Full Text Available BACKGROUND: The global need for disease detection and control has increased effort to engineer point-of-care (POC) tests that are simple, robust, affordable, and non-instrumented. In many POC tests, sample collection involves swabbing the site (e.g., nose, skin), agitating the swab in a fluid to release the sample, and transferring the fluid to a device for analysis. Poor performance in sample transfer can reduce sensitivity and reproducibility. METHODS: In this study, we compared the bacterial release efficiency of seven swab types using manual-agitation methods typical of POC devices. Transfer efficiency was measured using quantitative PCR (qPCR) for Staphylococcus aureus under conditions representing a range of sampling scenarios: (1) spiking low-volume samples onto the swab, (2) submerging the swab in excess-volume samples, and (3) swabbing dried sample from a surface. RESULTS: Excess-volume samples gave the expected recovery for most swabs (based on tip fluid capacity); a polyurethane swab showed enhanced recovery, suggesting an ability to accumulate organisms during sampling. Dry samples led to recovery of ∼20-30% for all swabs tested, suggesting that swab structure and volume are less important when organisms are applied to the outer swab surface. Low-volume samples led to the widest range of transfer efficiencies between swab types. Rayon swabs (63 µL capacity) performed well for excess-volume samples but showed poor recovery for low-volume samples. Nylon (100 µL) and polyester (27 µL) swabs showed intermediate recovery for both low-volume and excess-volume samples. Polyurethane swabs (16 µL) showed excellent recovery for all sample types. This work demonstrates that swab transfer efficiency can be affected by swab material, structure, and fluid capacity, as well as by details of the sample. Results and quantitative analysis methods from this study will assist POC assay developers in selecting appropriate swab types and transfer methods.
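Transfer efficiency of the kind compared above is simply the recovered fraction of the spiked input, with both quantities measured by qPCR. A sketch with invented genome-copy numbers (not the study's measurements; the relative ordering merely echoes the qualitative results reported):

```python
def transfer_efficiency(recovered_copies, input_copies):
    """Fraction of the spiked input recovered from the swab (qPCR-quantified)."""
    return recovered_copies / input_copies

# hypothetical qPCR quantities (genome copies) for a low-volume S. aureus sample
input_copies = 1.0e5
recovered = {"rayon": 1.2e4, "polyester": 4.5e4, "polyurethane": 9.1e4}
efficiency = {swab: transfer_efficiency(n, input_copies)
              for swab, n in recovered.items()}
```

Comparing such fractions across swab types and sampling scenarios is what separates, say, a rayon swab's poor low-volume recovery from a polyurethane swab's near-complete recovery.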
De Rosario, Helios; Page, Alvaro; Mata, Vicente
2014-05-07
This paper proposes a variation of the instantaneous helical pivot technique for locating centers of rotation. The point of optimal kinematic error (POKE), which minimizes the velocity at the center of rotation, may be obtained simply by adding a weighting factor equal to the square of the angular velocity in Woltring's equation of the pivot of instantaneous helical axes (PIHA). Calculations are simplified with respect to the original method, since explicit calculation of the helical axis is not necessary, and the effect of accidental errors is reduced. The improved performance of this method was validated by simulations based on a functional calibration task for the gleno-humeral joint center. Noisy data caused a systematic dislocation of the calculated center of rotation towards the center of the arm marker cluster. This error in PIHA could even exceed the effect of soft tissue artifacts associated with small and medium deformations, but it was successfully reduced by the POKE estimation.
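A weighted estimate of this kind can be posed as a linear least-squares problem: minimize Σᵢ |ωᵢ|² ‖vᵢ + ωᵢ × (c − pᵢ)‖² over the center c, where ωᵢ and vᵢ are the angular and linear velocities observed at frame i. The following is a minimal numerical sketch of that formulation, not the authors' implementation; the noise-free simulated motion is invented for illustration.

```python
import numpy as np

def skew(w):
    """Cross-product matrix: skew(w) @ v == np.cross(w, v)."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def poke_center(omegas, points, velocities):
    """Weighted least-squares center of rotation (weights |omega|^2).

    Each frame contributes the linear condition [w]x c = [w]x p - v,
    scaled by |w| so the squared residual is weighted by |w|^2.
    """
    rows, rhs = [], []
    for w, p, v in zip(omegas, points, velocities):
        S = skew(w)
        scale = np.sqrt(np.dot(w, w))      # sqrt of the |omega|^2 weight
        rows.append(scale * S)
        rhs.append(scale * (S @ p - v))
    A = np.vstack(rows)
    b = np.concatenate(rhs)
    c, *_ = np.linalg.lstsq(A, b, rcond=None)
    return c

# simulated calibration motion about a fixed center, varying rotation axes
rng = np.random.default_rng(1)
c_true = np.array([1.0, 2.0, 0.5])
omegas = rng.normal(size=(50, 3))
points = c_true + rng.normal(size=(50, 3))
velocities = [np.cross(w, p - c_true) for w, p in zip(omegas, points)]
c_est = poke_center(omegas, points, velocities)
```

Down-weighting frames with small |ω| is the key design choice: at low angular speed the center of rotation is poorly determined, and those frames are exactly where measurement noise dislocates the unweighted PIHA estimate.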
Directory of Open Access Journals (Sweden)
Lurdes Borges Silva
2017-01-01
Full Text Available Tree density is an important parameter affecting ecosystem function and management decisions, while tree distribution patterns affect sampling design. Pittosporum undulatum stands in the Azores are being targeted with a biomass valorization program, for which efficient tree density estimators are required. We compared T-Square sampling, the Point Centered Quarter Method (PCQM), and N-tree sampling with benchmark quadrat (QD) sampling in six 900 m2 plots established in P. undulatum stands on São Miguel Island. A total of 15 estimators were tested using a data resampling approach. The estimated density range (344–5056 trees/ha) was found to agree with previous studies using PCQM only. Although with a tendency to underestimate tree density (in comparison with QD), overall, T-Square sampling appeared to be the most accurate and precise method, followed by PCQM. The tree distribution pattern was found to be slightly aggregated in 4 of the 6 stands. Considering (1) the low level of bias and high precision, (2) the consistency among three estimators, (3) the possibility of use with aggregated patterns, and (4) the possibility of obtaining a larger number of independent tree parameter estimates, we recommend the use of T-Square sampling in P. undulatum stands within the framework of a biomass valorization program.
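For orientation, the classical Cottam & Curtis form of the PCQM estimator, one member of the family compared above, sets density to the inverse square of the mean point-to-nearest-tree distance taken over all four quarters at every sample point. A minimal sketch assuming that classical form (the paper tests 15 estimator variants, and the measurements below are invented):

```python
def pcqm_density(quarter_distances_m):
    """Classical Cottam & Curtis PCQM estimator: density = 1 / (mean distance)^2.

    quarter_distances_m: flat list of point-to-nearest-tree distances in metres,
    four per sample point (one per quadrant). Returns trees per hectare.
    """
    d_bar = sum(quarter_distances_m) / len(quarter_distances_m)
    return 10_000.0 / (d_bar ** 2)   # 10,000 m^2 per hectare

# hypothetical measurements at three sample points (4 quarters each, metres)
distances = [1.8, 2.2, 2.0, 2.0,
             1.5, 2.5, 2.0, 2.0,
             2.1, 1.9, 2.3, 1.7]
density = pcqm_density(distances)
```

Distance-based estimators like this assume random spatial patterns, which is why the slight aggregation found in four of the six stands matters when choosing among them.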
Gupta, Ritu; Reifenberger, Ronald G; Kulkarni, Giridhar U
2014-03-26
In this study, we demonstrate that a disposable chip periodically patterned with suitable ligands, an ordinary cellphone camera, and simple pattern-recognition software can potentially be used for quantitative diagnostics. A key factor in this demonstration is the design of a calibration grid around the chip that, through a contrast transfer process, enables reliable analysis of images collected under variable ambient lighting conditions. After exposure to a dispersion of amine-terminated silica beads used as an analyte mimicking pathogens, an epoxy-terminated glass substrate microcontact-printed with octadecyltrichlorosilane (250 μm periodicity) developed a characteristic pattern of beads which could be easily imaged with a 3.2 MP cellphone camera. A simple pattern-recognition algorithm using the fast Fourier transform produced a quantitative estimate of the analyte concentration present in the test solution. Importantly, in this method neither the chip fabrication process nor the fill factor of the periodic pattern need be perfect to arrive at a conclusive diagnosis. The method suggests a viable platform that may find use in fault-tolerant and robust point-of-care diagnostic applications.
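An FFT-based quantification of this kind can be sketched as reading off the spectral component at the pattern's known spatial frequency: the more beads decorate the periodic ligand stripes, the stronger that single Fourier peak. The synthetic stripe "images" below are purely illustrative and are not the paper's data or algorithm.

```python
import numpy as np

def periodic_signal_strength(image, period_px):
    """Magnitude of the FFT component at the pattern's spatial frequency,
    relative to the DC term, as a proxy for bead coverage of the pattern."""
    profile = image.mean(axis=0)               # collapse rows to a 1-D profile
    f = np.fft.rfft(profile)
    k = image.shape[1] // period_px            # index of the pattern frequency
    return abs(f[k]) / abs(f[0])

# synthetic chip images: stripes every 25 px; bead coverage scales the contrast
x = np.arange(500)
low  = 0.5 + 0.1 * np.cos(2 * np.pi * x / 25)   # sparse bead coverage
high = 0.5 + 0.4 * np.cos(2 * np.pi * x / 25)   # dense bead coverage
img_low  = np.tile(low,  (50, 1))
img_high = np.tile(high, (50, 1))
s_low  = periodic_signal_strength(img_low, 25)
s_high = periodic_signal_strength(img_high, 25)
```

Because only the energy at one known frequency is read out, random defects elsewhere in the image contribute little, which is the source of the fault tolerance claimed above.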
Energy Technology Data Exchange (ETDEWEB)
Su Xiaoxing, E-mail: xxsu@bjtu.edu.c [School of Electronic and Information Engineering, Beijing Jiaotong University, Beijing 100044 (China); Wang Yuesheng [Institute of Engineering Mechanics, Beijing Jiaotong University, Beijing 100044 (China)
2010-09-01
In this paper, a new postprocessing method for the finite difference time domain (FDTD) calculation of point defect states in two-dimensional (2D) phononic crystals (PNCs) is developed based on the chirp Z transform (CZT), one of the frequency zooming techniques. Numerical results for the defect states in 2D solid/liquid PNCs with single or double point defects show that, compared with the fast Fourier transform (FFT)-based postprocessing method, the new method improves the estimation accuracy of the eigenfrequencies of the point defect states significantly when the FDTD calculation is run with relatively few iterations; furthermore, it can yield the point defect bands without calculating all eigenfrequencies outside the band gaps. The efficiency and accuracy of the FDTD method can thus be improved significantly with this new postprocessing method.
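The frequency-zooming idea behind the CZT can be illustrated by evaluating the z-transform directly at many points on a narrow arc of the unit circle, which resolves a tone far below the FFT bin spacing of the same record length. This naive O(N·m) sketch (illustrative, not the paper's code) produces the same values as fast Bluestein-based CZT implementations:

```python
import numpy as np

def czt_zoom(x, f_lo, f_hi, m, fs=1.0):
    """Naive chirp-Z evaluation of the spectrum of x at m points in [f_lo, f_hi].

    Direct evaluation of X(z_k) = sum_n x[n] z_k^{-n} on an arc of the unit
    circle; a short time record can thus be inspected on an arbitrarily fine
    frequency grid within the band of interest.
    """
    n = np.arange(len(x))
    freqs = np.linspace(f_lo, f_hi, m)
    z = np.exp(2j * np.pi * freqs[:, None] / fs)   # points on the unit circle
    return freqs, (z ** -n) @ x

# short record of a 123.4 Hz tone: FFT bin spacing is fs/N ~ 0.49 Hz,
# but zooming the 120-127 Hz band onto 701 points gives a 0.01 Hz grid
fs = 1000.0
t = np.arange(2048) / fs
x = np.sin(2 * np.pi * 123.4 * t)
freqs, X = czt_zoom(x, 120.0, 127.0, 701, fs)
f_peak = freqs[np.argmax(np.abs(X))]
```

The same mechanism lets a relatively short FDTD time series yield sharp eigenfrequency estimates inside a band gap without resolving the rest of the spectrum.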
Fractures of the proximal humerus
DEFF Research Database (Denmark)
Brorson, Stig
2013-01-01
Fractures of the proximal humerus have been diagnosed and managed since the earliest known surgical texts. For more than four millennia the preferred treatment was forceful traction, closed reduction, and immobilization with linen soaked in combinations of oil, honey, alum, wine, or cerate. The bandages were further supported by splints made of wood or coarse grass. Healing was expected in forty days. Different fracture patterns have been discussed and classified since Ancient Greece. Current classification of proximal humeral fractures mainly relies on the classifications proposed by Charles ... Nevertheless, classification of proximal humeral fractures remains a challenge for the conduct, reporting, and interpretation of clinical trials. The evidence for the benefits of surgery in complex fractures of the proximal humerus is weak. In three systematic reviews I studied the outcome after locking plate osteosynthesis ...
The infrastructure of psychological proximity
DEFF Research Database (Denmark)
Nickelsen, Niels Christian Mossfeldt
2015-01-01
The experience of psychological proximity between patient and nurse is provided through confidence, continuity and the practical set-up. This constitutes an important enactment of skillfulness, which may render telemedicine a convincing health service in the future. Methodology: The study draws on a pilot ... (Langstrup & Winthereik 2008). This study contributes by showing the infrastructure of psychological proximity, which is provided by way of device, confidence, continuity and accountability.
Inter-organizational proximity in the context of logistics – research challenges
Directory of Open Access Journals (Sweden)
Patrycja Klimas
2015-03-01
Full Text Available Background: One of the major areas of modern management research covers inter-organizational networks (including supply chains) and the cooperation processes within them aimed at improving the effectiveness of their performance. Logistics is the main factor responsible for the effectiveness of the supply chain. A possible and quite new direction of research into the performance of inter-organizational cooperation processes is the proximity hypothesis, which is considered in five dimensions (geographical, organizational, social, cognitive, and institutional). However, according to many authors, there is a lack of research on supply chains conducted from the logistics point of view, and the proximity hypothesis in this area can be seen as a novelty. Therefore, this paper presents the proximity concept from the perspective of management science, an overview of prior research covering inter-organizational proximity in supply chains from the logistics point of view, and possible future directions of empirical efforts. Methods: The aim of this paper is to present previous theoretical and empirical results of research covering inter-organizational proximity in logistics and to show current research challenges in this area. The method of critical literature analysis is used to realize this goal. Results: Knowledge about the influence of inter-organizational proximity on the performance of supply chains is rather limited, and the research conducted so far is fragmentary and not free of conceptual and methodological limitations. Additional rationales for further research in this area include the knowledge and cognitive gaps identified in this paper. According to the authors, the aims of future empirical research should be as follows: (1) unification and update of the conceptual and methodological approaches used
National Oceanic and Atmospheric Administration, Department of Commerce — The Line Point-Intercept (LPI) method is one of two benthic surveys conducted in Puerto Rico as part of the National Coral Reef Monitoring Program (NCRMP). The LPI...