Method and apparatus for continuously detecting and monitoring the hydrocarbon dew-point of gas
Energy Technology Data Exchange (ETDEWEB)
Boyle, G.J.; Pritchard, F.R.
1987-08-04
This patent describes a method and apparatus for continuously detecting and monitoring the hydrocarbon dew-point of a gas. A gas sample is supplied to a dew-point detector, and the temperature of the portion of the sample gas stream under investigation is lowered progressively until the dew-point is reached. The presence of condensate within the flowing gas is detected, and the sample gas is then heated back above the dew-point. This cooling and heating procedure is repeated continuously in a cyclical manner.
Energy Technology Data Exchange (ETDEWEB)
Liu, Wenyang [Department of Bioengineering, University of California, Los Angeles, California 90095 (United States); Cheung, Yam; Sabouri, Pouya; Arai, Tatsuya J.; Sawant, Amit [Department of Radiation Oncology, University of Texas Southwestern, Dallas, Texas 75390 (United States); Ruan, Dan, E-mail: druan@mednet.ucla.edu [Department of Bioengineering, University of California, Los Angeles, California 90095 and Department of Radiation Oncology, University of California, Los Angeles, California 90095 (United States)
2015-11-15
Purpose: To accurately and efficiently reconstruct a continuous surface from noisy point clouds captured by a surface photogrammetry system (VisionRT). Methods: The authors have developed a level-set based surface reconstruction method for point clouds captured by a surface photogrammetry system (VisionRT). The proposed method reconstructs an implicit and continuous representation of the underlying patient surface by optimizing a regularized fitting energy, offering extra robustness to noise and missing measurements. In contrast to explicit/discrete meshing-type schemes, this continuous representation is particularly advantageous for subsequent surface registration and motion tracking, as it eliminates the need for maintaining explicit point correspondences as in discrete models. The authors solve the proposed formulation with an efficient narrowband evolving scheme. The authors evaluated the proposed method on both phantom and human subject data with two sets of complementary experiments. In the first set of experiments, the authors generated a series of surfaces, each with different black patches placed on one chest phantom. The resulting VisionRT measurements from the patched areas exhibited differing degrees of noise and missing data, since VisionRT has difficulty detecting dark surfaces. The authors applied the proposed method to point clouds acquired under these different configurations, and quantitatively evaluated the reconstructed surfaces by comparing them against a high-quality reference surface with respect to root mean squared error (RMSE). In the second set of experiments, the authors applied their method to 100 clinical point clouds acquired from one human subject. In the absence of ground truth, the authors qualitatively validated the reconstructed surfaces by comparing the local geometry, specifically mean curvature distributions, against that of the surface extracted from a high-quality CT obtained from the same patient. Results: On phantom point clouds, their method
Liu, Wenyang; Cheung, Yam; Sabouri, Pouya; Arai, Tatsuya J; Sawant, Amit; Ruan, Dan
2015-11-01
To accurately and efficiently reconstruct a continuous surface from noisy point clouds captured by a surface photogrammetry system (VisionRT). The authors have developed a level-set based surface reconstruction method for point clouds captured by a surface photogrammetry system (VisionRT). The proposed method reconstructs an implicit and continuous representation of the underlying patient surface by optimizing a regularized fitting energy, offering extra robustness to noise and missing measurements. In contrast to explicit/discrete meshing-type schemes, this continuous representation is particularly advantageous for subsequent surface registration and motion tracking, as it eliminates the need for maintaining explicit point correspondences as in discrete models. The authors solve the proposed formulation with an efficient narrowband evolving scheme. The authors evaluated the proposed method on both phantom and human subject data with two sets of complementary experiments. In the first set of experiments, the authors generated a series of surfaces, each with different black patches placed on one chest phantom. The resulting VisionRT measurements from the patched areas exhibited differing degrees of noise and missing data, since VisionRT has difficulty detecting dark surfaces. The authors applied the proposed method to point clouds acquired under these different configurations, and quantitatively evaluated the reconstructed surfaces by comparing them against a high-quality reference surface with respect to root mean squared error (RMSE). In the second set of experiments, the authors applied their method to 100 clinical point clouds acquired from one human subject. In the absence of ground truth, the authors qualitatively validated the reconstructed surfaces by comparing the local geometry, specifically mean curvature distributions, against that of the surface extracted from a high-quality CT obtained from the same patient. On phantom point clouds, their method achieved submillimeter
Energy Technology Data Exchange (ETDEWEB)
Tumelero, Fernanda; Petersen, Claudio Zen; Goncalves, Glenio Aguiar [Universidade Federal de Pelotas, Capao do Leao, RS (Brazil). Programa de Pos Graduacao em Modelagem Matematica; Schramm, Marcelo [Universidade Federal do Rio Grande do Sul, Porto Alegre, RS (Brazil). Programa de Pos-Graduacao em Engenharia Mecanica
2016-12-15
In this work, we report a solution of the Neutron Point Kinetics Equations obtained by applying the Polynomial Approach Method. The main idea is to expand the neutron density and the delayed neutron precursor concentrations as power series, treating the reactivity as an arbitrary function of time within a relatively short interval around an ordinary point. In the first interval the initial conditions are applied, and analytic continuation is used to determine the solutions of the subsequent intervals. A genuine error control is developed based on an analogy with the Remainder Theorem. For illustration, we also report simulations for different approximation orders (linear, quadratic, and cubic). The results obtained by numerical simulation for the linear approximation are compared with results in the literature.
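A minimal numerical sketch of the polynomial (power-series) approach, using one delayed-neutron group and constant reactivity per interval; the kinetics parameters and step sizes below are illustrative choices, not values from the paper:

```python
# Sketch: truncated power-series (Taylor) stepping of the point kinetics
# equations  dn/dt = ((rho - beta)/Lam) n + lam C,  dC/dt = (beta/Lam) n - lam C.
# Derivatives are generated recursively; the end state of each interval
# seeds the next one (the "analytic continuation" step).

def step_taylor(n, c, rho, beta, lam, Lam, h, order):
    """Advance (n, C) by h with a truncated power series of given order."""
    a, b = (rho - beta) / Lam, beta / Lam
    dn, dc = n, c                      # k-th derivatives, starting at k = 0
    n_new, c_new, fact, hk = n, c, 1.0, 1.0
    for k in range(1, order + 1):
        dn, dc = a * dn + lam * dc, b * dn - lam * dc
        fact *= k
        hk *= h
        n_new += dn * hk / fact
        c_new += dc * hk / fact
    return n_new, c_new

def solve(rho, t_end, h=1e-3, order=3, beta=0.0065, lam=0.08, Lam=1e-4):
    """Analytic continuation: restart the expansion at each interval."""
    n = 1.0
    c = beta * n / (lam * Lam)         # precursor equilibrium at t = 0
    t = 0.0
    while t < t_end - 1e-12:
        n, c = step_taylor(n, c, rho, beta, lam, Lam, h, order)
        t += h
    return n
```

With zero reactivity the equilibrium is preserved; a positive step reactivity below prompt critical produces the familiar prompt jump followed by a slow rise.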
Johnston, James D; Magnusson, Brianna M; Eggett, Dennis; Collingwood, Scott C; Bernhardt, Scott A
2015-01-01
Residential temperature and humidity are associated with multiple health effects. Studies commonly use single-point measures to estimate indoor temperature and humidity exposures, but there is little evidence to support this sampling strategy. This study evaluated the relationship between single-point and continuous monitoring of air temperature, apparent temperature, relative humidity, and absolute humidity over four exposure intervals (5-min, 30-min, 24-hr, and 12-days) in 9 northern Utah homes, from March-June 2012. Three homes were sampled twice, for a total of 12 observation periods. Continuous data-logged sampling was conducted in homes for 2-3 wks, and simultaneous single-point measures (n = 114) were collected using handheld thermo-hygrometers. Time-centered single-point measures were moderately correlated with short-term (30-min) data logger mean air temperature (r = 0.76, β = 0.74), apparent temperature (r = 0.79, β = 0.79), relative humidity (r = 0.70, β = 0.63), and absolute humidity (r = 0.80, β = 0.80). Data logger 12-day means were also moderately correlated with single-point air temperature (r = 0.64, β = 0.43) and apparent temperature (r = 0.64, β = 0.44), but were weakly correlated with single-point relative humidity (r = 0.53, β = 0.35) and absolute humidity (r = 0.52, β = 0.39). Of the single-point RH measures, 59 (51.8%) deviated more than ±5%, 21 (18.4%) deviated more than ±10%, and 6 (5.3%) deviated more than ±15% from data logger 12-day means. Where continuous indoor monitoring is not feasible, single-point sampling strategies should include multiple measures collected at prescribed time points based on local conditions.
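The comparisons above reduce to correlating time-centered spot readings with window means of a continuously logged series. A small illustrative sketch (not the study's code), assuming the log is a list of (time, value) pairs:

```python
# Sketch: compare a single-point reading with the mean of a continuous
# logger record over a window centred on the reading, and compute the
# Pearson correlation across many such pairs.
from statistics import mean

def window_mean(log, t, half_width):
    """Mean of logged (time, value) pairs within t +/- half_width."""
    vals = [v for (ts, v) in log if abs(ts - t) <= half_width]
    return mean(vals)

def pearson_r(xs, ys):
    """Plain Pearson correlation coefficient of two equal-length lists."""
    mx, my = mean(xs), mean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = (sum((x - mx) ** 2 for x in xs) * sum((y - my) ** 2 for y in ys)) ** 0.5
    return num / den
```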
Ma, Denglong; Tan, Wei; Zhang, Zaoxiao; Hu, Jun
2017-03-05
In order to identify the parameters of a hazardous gas emission source in the atmosphere with little prior information and reliable probability estimation, a hybrid algorithm coupling Tikhonov regularization with particle swarm optimization (PSO) was proposed. When the source location is known, the source strength can be estimated successfully by the common Tikhonov regularization method, but this fails when information about both source strength and location is absent. Therefore, a hybrid method combining linear Tikhonov regularization and the PSO algorithm was designed. With this method, the nonlinear inverse dispersion model was transformed to a linear form under some assumptions, and the source parameters, including source strength and location, were identified simultaneously by the linear Tikhonov-PSO regularization method. The regularization parameters were selected by the L-curve method. Estimation results with different regularization matrices showed that the confidence interval with a high-order regularization matrix is narrower than that with a zero-order matrix, although the estimated source parameters are close to each other for the different regularization matrices. A nonlinear Tikhonov-PSO hybrid regularization was also designed, using the primary nonlinear dispersion model, to estimate the source parameters. Comparisons on simulated and experimental cases showed that the linear Tikhonov-PSO method with the transformed linear inverse model has higher computational efficiency than the nonlinear Tikhonov-PSO method, and its confidence intervals are more reasonable than those of the nonlinear method. The estimation results from the linear Tikhonov-PSO method are similar to those from the PSO algorithm alone, while a reasonable confidence interval at given probability levels can additionally be given by the Tikhonov-PSO method. Therefore, the presented linear Tikhonov-PSO regularization method is a good potential method for hazardous emission
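In such a hybrid scheme, PSO searches over candidate source locations while a linear Tikhonov solve recovers the strength for each candidate. A sketch of the zero-order Tikhonov step alone, with an assumed illustrative forward matrix A mapping unit emissions to receptor readings:

```python
# Sketch of zero-order Tikhonov estimation of source strength from a
# linear forward model: q = argmin ||A q - c||^2 + alpha^2 ||q||^2.
# The forward matrix and regularization weight below are illustrative.
import numpy as np

def tikhonov(A, c, alpha):
    """Closed-form zero-order Tikhonov solution."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha**2 * np.eye(n), A.T @ c)
```

In the paper's setting the L-curve would guide the choice of alpha; here a tiny alpha simply stabilizes an already well-posed toy problem.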
Ewald Electrostatics for Mixtures of Point and Continuous Line Charges.
Antila, Hanne S; Tassel, Paul R Van; Sammalkorpi, Maria
2015-10-15
Many charged macro- or supramolecular systems, such as DNA, are approximately rod-shaped and, to the lowest order, may be treated as continuous line charges. However, the standard method used to calculate electrostatics in molecular simulation, the Ewald summation, is designed to treat systems of point charges. We extend the Ewald concept to a hybrid system containing both point charges and continuous line charges. We find the calculated force between a point charge and (i) a continuous line charge and (ii) a discrete line charge consisting of uniformly spaced point charges to be numerically equivalent when the separation greatly exceeds the discretization length. At shorter separations, discretization induces deviations in the force and energy, and point charge-point charge correlation effects. Because significant computational savings are also possible, the continuous line charge Ewald method presented here offers the possibility of accurate and efficient electrostatic calculations.
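The equivalence and its breakdown described above are easy to reproduce in a toy setting: the perpendicular force on a unit point charge from a finite uniform line charge, computed analytically and from a uniformly discretized line. This is a direct-summation illustration of the limit only, not the Ewald method itself:

```python
# Sketch: force on a unit point charge at perpendicular distance d from
# the midpoint of a uniform line charge of length L and density lam,
# (i) from the analytic integral and (ii) from N point charges.
import math

def force_continuous(lam, L, d, k=1.0):
    """Analytic result: k*lam*L / (d*sqrt(d^2 + L^2/4))."""
    return k * lam * L / (d * math.sqrt(d * d + L * L / 4.0))

def force_discrete(lam, L, d, N, k=1.0):
    """Same force with the line replaced by N equally spaced point charges."""
    dq, dz = lam * L / N, L / N
    total = 0.0
    for j in range(N):
        z = -L / 2.0 + (j + 0.5) * dz      # midpoint positions along the line
        total += k * dq * d / (z * z + d * d) ** 1.5
    return total
```

When the separation greatly exceeds the discretization length the two agree closely; at separations below the spacing the discrete force deviates strongly, mirroring the behaviour reported above.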
CONTINUOUS ANALYZER UTILIZING BOILING POINT DETERMINATION
Pappas, W.S.
1963-03-19
A device is designed for continuously determining the boiling point of a mixture of liquids. The device comprises a distillation chamber for boiling a liquid; outlet conduit means for maintaining the liquid contents of said chamber at a constant level; a reflux condenser mounted above said distillation chamber; means for continuously introducing an incoming liquid sample into said reflux condenser and into intimate contact with vapors refluxing within said condenser; and means for measuring the temperature of the liquid flowing through said distillation chamber. (AEC)
Czech Academy of Sciences Publication Activity Database
Axelsson, Owe; Sysala, Stanislav
2015-01-01
Roč. 70, č. 11 (2015), s. 2621-2637 ISSN 0898-1221 R&D Projects: GA ČR GA13-18652S Institutional support: RVO:68145535 Keywords : system of nonlinear equations * Newton method * load increment method * elastoplasticity Subject RIV: IN - Informatics, Computer Science Impact factor: 1.398, year: 2015 http://www.sciencedirect.com/science/article/pii/S0898122115003818
International Nuclear Information System (INIS)
Mimura, Hiroaki; Sone, Teruki; Takahashi, Yoshitake
2008-01-01
Optimal setting of the input function is essential for the measurement of regional cerebral blood flow (rCBF) based on the microsphere model using N-isopropyl-4-[123I]iodoamphetamine (123I-IMP), and usually the arterial 123I-IMP concentration (integral value) in the initial 5 min is used for this purpose. We have developed a new convenient method in which the 123I-IMP concentration in an arterial blood sample is estimated from that in a venous blood sample. Brain perfusion single photon emission computed tomography (SPECT) with 123I-IMP was performed in 110 cases of central nervous system disorders. The causality was analyzed between the various parameters of SPECT data and the ratio of octanol-extracted arterial radioactivity concentration during the first 5 min (Caoct) to octanol-extracted venous radioactivity concentration at 27 min after intravenous injection of 123I-IMP (Cvoct). A high correlation was observed between the measured and estimated values of Caoct/Cvoct (r=0.856) when the following five parameters were included in the regression formula: radioactivity concentration in venous blood sampled at 27 min (Cv), Cvoct, Cvoct/Cv, and total brain radioactivity counts that were measured by a four-head gamma camera 5 min and 28 min after 123I-IMP injection. Furthermore, the rCBF values obtained using the input parameters estimated by this method were also highly correlated with the rCBF values measured using the continuous arterial blood sampling method (r=0.912). These results suggest that this method would serve as a new, convenient, and less invasive method of rCBF measurement in the clinical setting. (author)
Automatic continuous dew point measurement in combustion gases
Energy Technology Data Exchange (ETDEWEB)
Fehler, D.
1986-08-01
Low exhaust temperatures serve to minimize energy consumption in combustion systems. This requires accurate, continuous measurement of exhaust condensation. An automatic dew point meter for continuous operation is described. The principle of measurement, the design of the measuring system, and practical aspects of operation are discussed.
Parametric methods for spatial point processes
DEFF Research Database (Denmark)
Møller, Jesper
(This text is submitted for the volume 'A Handbook of Spatial Statistics' edited by A.E. Gelfand, P. Diggle, M. Fuentes, and P. Guttorp, to be published by Chapman and Hall/CRC Press, and planned to appear as Chapter 4.4 with the title 'Parametric methods'.) 1 Introduction This chapter considers … inference procedures for parametric spatial point process models. The widespread use of sensible but ad hoc methods based on summary statistics of the kind studied in Chapter 4.3 have through the last two decades been supplemented by likelihood-based methods for parametric spatial point process models … is studied in Section 4, and Bayesian inference in Section 5. On one hand, as the development in computer technology and computational statistics continues, computationally-intensive simulation-based methods for likelihood inference probably will play an increasing role for statistical analysis of spatial …
A Continuation Method for Weakly Kannan Maps
Directory of Open Access Journals (Sweden)
Ariza-Ruiz David
2010-01-01
Full Text Available The first continuation method for contractive maps in the setting of a metric space was given by Granas. Later, Frigon extended Granas' theorem to the class of weakly contractive maps, and recently Agarwal and O'Regan have given the corresponding result for a certain type of quasicontraction which includes maps of Kannan type. In this note we introduce the concept of weakly Kannan maps and give a fixed point theorem, and then a continuation method, for this class of maps.
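Fixed point theorems of this kind are constructive: the fixed point can be located by Picard iteration. A small sketch with an illustrative map T(x) = x/4, which is of Kannan type on the real line (one may take the Kannan constant α = 1/3); this example is ours, not from the paper:

```python
# Sketch: Picard iteration x_{n+1} = T(x_n), stopping when successive
# iterates are within tol of each other.
def picard(T, x0, tol=1e-12, max_iter=10_000):
    x = x0
    for _ in range(max_iter):
        x_new = T(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x
```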
Continuous method of natrium purification
International Nuclear Information System (INIS)
Batoux, B.; Laurent-Atthalin, A.; Salmon, M.
1975-01-01
An improvement of the known method for the production of highly pure sodium from technically pure sodium, which still contains several hundred ppm of metallic impurities, is proposed. These impurities, above all Ca and Ba, are separated by oxidation with sodium peroxide. What is new is the continuous method, which can also be carried out on an industrial scale and results in a purity of less than 10 ppm Ca. Under an N2 atmosphere, highly dispersed sodium peroxide is added to a flow of sodium at 100 °C to 150 °C and thoroughly mixed; the suspension is then heated under turbulence to 200 °C to 300 °C, and the oxides formed are separated. Exact data for optimum reaction control as well as a flow diagram are supplied. (UWI) [de
Continuous method of natrium purification
Energy Technology Data Exchange (ETDEWEB)
Batoux, B; Laurent-Atthalin, A; Salmon, M
1975-05-28
An improvement of the known method for the production of highly pure sodium from technically pure sodium, which still contains several hundred ppm of metallic impurities, is proposed. These impurities, above all Ca and Ba, are separated by oxidation with sodium peroxide. The new continuous method can be carried out on an industrial scale and results in a purity of less than 10 ppm Ca. Under an N2 atmosphere, highly dispersed sodium peroxide is added to a flow of sodium at 100 °C to 150 °C and thoroughly mixed; the suspension is then heated under turbulence to 200 °C to 300 °C, and the oxides formed are separated. Exact data for optimum reaction control as well as a flow diagram are supplied.
Method of continuously cleaning condensers
International Nuclear Information System (INIS)
Tomita, Akira; Takahashi, Sankichi.
1982-01-01
Purpose: To prevent marine organisms from settling on the inside of ball-recycling pipeways. Method: Copper electrodes are provided downstream of the sponge ball collector in a sponge-ball recycling pipeway used for cleaning the cooling pipes of a condenser. Electric current is supplied to the electrodes by way of a variable resistor, and the copper ions resulting from the dissolution of the electrodes are fed into the pipes to kill marine organisms such as barnacles and prevent them from settling on the inside of the sponge-ball recycling pipeway. (Seki, T.)
Bailly, J. S.; Dartevelle, M.; Delenne, C.; Rousseau, A.
2017-12-01
Floodplain and major river bed topography govern many river biophysical processes during floods. Despite the growth of direct topographic measurement by LiDAR on riverine systems, there is still room to develop methods for large (e.g. deltas) or very local (e.g. ponds) riverine systems that take advantage of information obtained by simple SAR or optical image processing over the floodplain, resulting from waterbody delineation during flood rise or recession and producing ordered contour lines. The challenge is then to exploit such data to estimate continuous topography on the floodplain by combining heterogeneous data: a topographic point dataset and a located but unvalued, ordered contour-line dataset. This article compares two methods designed to estimate continuous floodplain topography by mixing ordinal contour lines and continuous topographic points. For both methods, a first estimation step assigns an elevation to each contour line, and a second step estimates the continuous field from both the topographic points and the valued contour lines. The first proposed method is a stochastic one based on multi-Gaussian random fields and conditional simulation. The second is a deterministic method based on thin-plate radial spline functions, as used for approximate bivariate surface construction. Results are first shown and discussed on a set of synthetic case studies with varying topographic point density and topographic smoothness. Results are then shown and discussed on an actual case study in the Montagua laguna, located north of Valparaiso, Chile.
Advanced continuous cultivation methods for systems microbiology.
Adamberg, Kaarel; Valgepea, Kaspar; Vilu, Raivo
2015-09-01
Increasing the throughput of systems biology-based experimental characterization of in silico-designed strains has great potential for accelerating the development of cell factories. For this, analysis of metabolism in the steady state is essential as only this enables the unequivocal definition of the physiological state of cells, which is needed for the complete description and in silico reconstruction of their phenotypes. In this review, we show that for a systems microbiology approach, high-resolution characterization of metabolism in the steady state--growth space analysis (GSA)--can be achieved by using advanced continuous cultivation methods termed changestats. In changestats, an environmental parameter is continuously changed at a constant rate within one experiment whilst maintaining cells in the physiological steady state similar to chemostats. This increases the resolution and throughput of GSA compared with chemostats, and, moreover, enables following of the dynamics of metabolism and detection of metabolic switch-points and optimal growth conditions. We also describe the concept, challenge and necessary criteria of the systematic analysis of steady-state metabolism. Finally, we propose that such systematic characterization of the steady-state growth space of cells using changestats has value not only for fundamental studies of metabolism, but also for systems biology-based metabolic engineering of cell factories.
Voltage stability, bifurcation parameters and continuation methods
Energy Technology Data Exchange (ETDEWEB)
Alvarado, F L [Wisconsin Univ., Madison, WI (United States)
1994-12-31
This paper considers the importance of the choice of bifurcation parameter in the determination of the voltage stability limit and the maximum loadability of a system. When the bifurcation parameter is power demand, the two limits are equivalent. However, when other types of load models and bifurcation parameters are considered, the two concepts differ. The continuation method is considered as a method for determination of voltage stability margins. Three variants of the continuation method are described: the continuation parameter is the bifurcation parameter; the continuation parameter is initially the bifurcation parameter, but is free to change; and the continuation parameter is a new 'arc length' parameter. Implementations of voltage stability software using continuation methods are described. (author) 23 refs., 9 figs.
Continuation of connecting orbits in 3d-ODEs' (i) point-to-cycle connections.
Doedel, E.J.; Kooi, B.W.; van Voorn, G.A.K.; Kuznetzov, Y.A.
2008-01-01
We propose new methods for the numerical continuation of point-to-cycle connecting orbits in three-dimensional autonomous ODEs using projection boundary conditions. In our approach, the projection boundary conditions near the cycle are formulated using an eigenfunction of the associated adjoint
Genealogical series method. Hyperpolar points screen effect
International Nuclear Information System (INIS)
Gorbatov, A.M.
1991-01-01
The fundamental quantities of the genealogical series method, the genealogical integrals (sandwiches), have been investigated. The hyperpolar points screen effect has been found. It allows one to calculate the sandwiches for fermion systems with a large number of particles and to ascertain the validity of the iterated-potential method as well. For the first time, the genealogical series method has been realized numerically for a central spin-independent potential
THE GROWTH POINTS OF STATISTICAL METHODS
Orlov A. I.
2014-01-01
On the basis of a new paradigm of applied mathematical statistics, data analysis and economic-mathematical methods, we identify and discuss five topical areas in which modern applied statistics is developing, the five "growth points": nonparametric statistics, robustness, computer-statistical methods, statistics of interval data, and statistics of non-numeric data
Source splitting via the point source method
International Nuclear Information System (INIS)
Potthast, Roland; Fazi, Filippo M; Nelson, Philip A
2010-01-01
We introduce a new algorithm for source identification and field splitting based on the point source method (Potthast 1998 A point-source method for inverse acoustic and electromagnetic obstacle scattering problems IMA J. Appl. Math. 61 119-40, Potthast R 1996 A fast new method to solve inverse scattering problems Inverse Problems 12 731-42). The task is to separate the sound fields u_j, j = 1, ..., n, of n sound sources supported in different bounded domains G_1, ..., G_n in R^3 from measurements of the field on some microphone array, mathematically speaking from the knowledge of the sum of the fields u = u_1 + ... + u_n on some open subset Λ of a plane. The main idea of the scheme is to calculate filter functions g_1, ..., g_n to construct u_l for l = 1, ..., n from u|_Λ in the form u_l(x) = ∫_Λ g_{l,x}(y) u(y) ds(y), l = 1, ..., n. (1) We will provide the complete mathematical theory for the field splitting via the point source method. In particular, we describe uniqueness, solvability of the problem, and convergence and stability of the algorithm. In the second part we describe the practical realization of the splitting for real data measurements carried out at the Institute of Sound and Vibration Research at Southampton, UK. A practical demonstration of the original recording and the splitting results for real data is available online
Natural Preconditioning and Iterative Methods for Saddle Point Systems
Pestana, Jennifer
2015-01-01
© 2015 Society for Industrial and Applied Mathematics. The solution of quadratic or locally quadratic extremum problems subject to linear(ized) constraints gives rise to linear systems in saddle point form. This is true whether in the continuous or the discrete setting, so saddle point systems arising from the discretization of partial differential equation problems, such as those describing electromagnetic problems or incompressible flow, lead to equations with this structure, as do, for example, interior point methods and the sequential quadratic programming approach to nonlinear optimization. This survey concerns iterative solution methods for these problems and, in particular, shows how the problem formulation leads to natural preconditioners which guarantee a fast rate of convergence of the relevant iterative methods. These preconditioners are related to the original extremum problem and their effectiveness - in terms of rapidity of convergence - is established here via a proof of general bounds on the eigenvalues of the preconditioned saddle point matrix on which iteration convergence depends.
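The effect of such natural block preconditioning can be checked numerically: with the "ideal" preconditioner P = diag(A, B A^{-1} B^T), the preconditioned saddle point matrix has only the three eigenvalues 1 and (1 ± √5)/2, so a Krylov method like MINRES converges in at most three iterations. The matrices below are random stand-ins, not from any particular discretization:

```python
# Sketch: verify the three-eigenvalue clustering of the ideally
# preconditioned saddle point matrix K = [[A, B^T], [B, 0]].
import numpy as np

rng = np.random.default_rng(0)
n, m = 8, 3
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)          # SPD (1,1) block
B = rng.standard_normal((m, n))      # full-rank constraint block

K = np.block([[A, B.T], [B, np.zeros((m, m))]])
S = B @ np.linalg.solve(A, B.T)      # Schur complement B A^{-1} B^T
P = np.block([[A, np.zeros((n, m))], [np.zeros((m, n)), S]])

eigs = np.linalg.eigvals(np.linalg.solve(P, K))
golden = (1 + np.sqrt(5)) / 2
targets = np.array([1.0, golden, 1 - golden])
```

Every eigenvalue of P^{-1}K falls on one of the three target values, which is why the convergence bounds in the survey are uniform in the mesh size.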
Pointing Verification Method for Spaceborne Lidars
Directory of Open Access Journals (Sweden)
Axel Amediek
2017-01-01
Full Text Available High precision acquisition of atmospheric parameters from the air or space by means of lidar requires accurate knowledge of laser pointing. Discrepancies between the assumed and actual pointing can introduce large errors due to the Doppler effect or a wrongly assumed air pressure at ground level. In this paper, a method for precisely quantifying these discrepancies for airborne and spaceborne lidar systems is presented. The method is based on the comparison of ground elevations derived from the lidar ranging data with high-resolution topography data obtained from a digital elevation model and allows for the derivation of the lateral and longitudinal deviation of the laser beam propagation direction. The applicability of the technique is demonstrated by using experimental data from an airborne lidar system, confirming that geo-referencing of the lidar ground spot trace with an uncertainty of less than 10 m with respect to the used digital elevation model (DEM can be obtained.
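A toy one-dimensional version of the core idea (assumed for illustration, not the authors' implementation): slide the lidar-derived ground-elevation profile along the DEM profile and take the offset with minimum RMS mismatch as the pointing deviation:

```python
# Sketch: find the lateral shift (in samples) that best aligns a lidar
# elevation profile with a DEM profile by minimizing the RMS mismatch.
def best_offset(dem, lidar, max_shift):
    """dem, lidar: equally spaced elevation samples; returns the shift."""
    def rmse(shift):
        pairs = [(dem[i + shift], lidar[i])
                 for i in range(len(lidar)) if 0 <= i + shift < len(dem)]
        return (sum((a - b) ** 2 for a, b in pairs) / len(pairs)) ** 0.5
    return min(range(-max_shift, max_shift + 1), key=rmse)
```

In the paper the same matching is done against a high-resolution DEM in two lateral dimensions; the recovered offset is the geo-referencing error of the ground spot trace.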
Continuous improvement methods in the nuclear industry
International Nuclear Information System (INIS)
Heising, Carolyn D.
1995-01-01
The purpose of this paper is to investigate management methods for improved safety in the nuclear power industry. Process improvement management, methods of business process reengineering, total quality management, and continuous process improvement (KAIZEN) are explored. The anticipated advantages of extensive use of improved process-oriented management methods in the nuclear industry are increased effectiveness and efficiency in virtually all tasks of plant operation and maintenance. Important spin-offs include increased plant safety and economy. (author). 6 refs., 1 fig
Novel TPPO Based Maximum Power Point Method for Photovoltaic System
Directory of Open Access Journals (Sweden)
ABBASI, M. A.
2017-08-01
Full Text Available Photovoltaic (PV) systems have great potential and are now more widely installed than other renewable energy sources. However, a PV system cannot perform optimally due to its strong dependence on weather conditions, and because of this dependency it does not always operate at its maximum power point (MPP). Many MPP tracking methods have been proposed for this purpose. One of these is the Perturb and Observe method (P&O), which is the most popular due to its simplicity, low cost, and fast tracking. But it deviates from the MPP in continuously changing weather conditions, especially under rapidly changing irradiance. A new Maximum Power Point Tracking (MPPT) method, Tetra Point Perturb and Observe (TPPO), has been proposed to improve PV system performance in changing irradiance conditions, and the effects of varying irradiance on the characteristic curves of a PV array module are delineated. The proposed MPPT method has shown better results in increasing the efficiency of a PV system.
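For reference, the classic P&O loop that TPPO sets out to improve can be sketched as follows; the P-V curve model and perturbation step are illustrative assumptions, not from the paper:

```python
# Sketch of the classic Perturb and Observe MPPT loop: keep perturbing
# the operating voltage in the direction that last increased power, and
# reverse the perturbation when power drops.
def p_and_o(power_at, v0=10.0, dv=0.1, steps=200):
    """power_at(v) models the PV P-V curve (an assumed stand-in here)."""
    v, step = v0, dv
    p = power_at(v)
    for _ in range(steps):
        v_new = v + step
        p_new = power_at(v_new)
        if p_new < p:          # power dropped: reverse the perturbation
            step = -step
        v, p = v_new, p_new
    return v
```

Note the steady-state behaviour: the operating point oscillates around the MPP by one perturbation step, which is exactly the inefficiency that refined schemes target.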
Minimizing convex functions by continuous descent methods
Directory of Open Access Journals (Sweden)
Sergiu Aizicovici
2010-01-01
Full Text Available We study continuous descent methods for minimizing convex functions, defined on general Banach spaces, which are associated with an appropriate complete metric space of vector fields. We show that there exists an everywhere dense open set in this space of vector fields such that each of its elements generates strongly convergent trajectories.
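A discrete-time caricature of a continuous descent trajectory x'(t) = -grad f(x(t)), with explicit Euler steps and an illustrative convex quadratic; this sketches the general idea only, not the Banach-space construction of the paper:

```python
# Sketch: follow the descent trajectory x'(t) = -grad f(x(t)) by explicit
# Euler steps; for a convex f the trajectory converges to the minimizer.
def descend(grad, x0, dt=0.01, t_end=20.0):
    x, t = list(x0), 0.0
    while t < t_end:
        g = grad(x)
        x = [xi - dt * gi for xi, gi in zip(x, g)]
        t += dt
    return x
```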
A continuation method for emission tomography
International Nuclear Information System (INIS)
Lee, M.; Zubal, I.G.
1993-01-01
One approach to improved reconstructions in emission tomography has been the incorporation of additional source information via Gibbs priors that assume a source f that is piecewise smooth. A natural Gibbs prior for expressing such constraints is an energy function E(f, l) defined on binary-valued line processes l as well as f. MAP estimation leads to the difficult problem of minimizing a mixed (continuous and binary) variable objective function. Previous approaches have used Gibbs 'potential' functions, φ(f_v) and φ(f_h), defined solely on spatial derivatives, f_v and f_h, of the source. These φ functions implicitly incorporate line processes, but only in an approximate manner. The correct φ function, φ*, consistent with the use of line processes, leads to difficult minimization problems. In this work, the authors present a method wherein the correct φ* function is approached through a sequence of smooth φ functions. This is the essence of a continuation method, in which the minimum of the energy function corresponding to one member of the φ function sequence is used as an initial condition for the minimization of the next, less approximate, stage. The continuation method is implemented using a GEM-ICM procedure. Simulation results show improvement using the continuation method relative to using φ* alone, and to conventional EM reconstructions
Osada, Hirofumi; Osada, Shota
2018-01-01
We prove tail triviality of determinantal point processes μ on continuous spaces. Tail triviality has been proved for such processes only on discrete spaces, and hence we have generalized the result to continuous spaces. To do this, we construct tree representations, that is, discrete approximations of determinantal point processes enjoying a determinantal structure. There are many interesting examples of determinantal point processes on continuous spaces such as zero points of the hyperbolic Gaussian analytic function with Bergman kernel, and the thermodynamic limit of eigenvalues of Gaussian random matrices for the Sine_2, Airy_2, Bessel_2, and Ginibre point processes. Our main theorem proves all these point processes are tail trivial.
Method Points: towards a metric for method complexity
Directory of Open Access Journals (Sweden)
Graham McLeod
1998-11-01
Full Text Available A metric for method complexity is proposed as an aid to choosing between competing methods, as well as in validating the effects of method integration or the products of method engineering work. It is based upon a generic method representation model previously developed by the author and adaptation of concepts used in the popular Function Point metric for system size. The proposed technique is illustrated by comparing two popular I.E. deliverables with counterparts in the object-oriented Unified Modeling Language (UML). The paper recommends ways to improve the practical adoption of new methods.
Continuous Extraction of Subway Tunnel Cross Sections Based on Terrestrial Point Clouds
Directory of Open Access Journals (Sweden)
Zhizhong Kang
2014-01-01
Full Text Available An efficient method for the continuous extraction of subway tunnel cross sections from terrestrial point clouds is proposed. First, the continuous central axis of the tunnel is extracted using a 2D projection of the point cloud and curve fitting with the RANSAC (RANdom SAmple Consensus) algorithm, and the axis is optimized using a global extraction strategy based on segment-wise fitting. The cross-sectional planes, which are orthogonal to the central axis, are then determined for every interval. The cross-sectional points are extracted by intersecting straight lines, which rotate orthogonally around the central axis within the cross-sectional plane, with the tunnel point cloud. An interpolation algorithm based on quadric parametric surface fitting, using the BaySAC (Bayesian SAmpling Consensus) algorithm, is proposed to compute a cross-sectional point when it cannot be acquired directly from the tunnel points along the extraction direction of interest. Because the standard shape of the tunnel cross section is a circle, circle fitting is implemented using RANSAC to reduce the noise. The proposed approach is tested on terrestrial point clouds that cover a 150-m-long segment of a Shanghai subway tunnel, acquired using a LMS VZ-400 laser scanner. The results indicate that the proposed quadric parametric surface fitting using the optimized BaySAC achieves a higher overall fitting accuracy (0.9 mm) than that obtained by the plain RANSAC (1.6 mm). The results also show that the proposed cross-section extraction algorithm achieves high accuracy (millimeter level), assessed by comparing the fitted radii with the designed radius of the cross section and comparing corresponding chord lengths in different cross sections, and high efficiency (less than 3 s/section on average).
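The RANSAC circle-fitting step used to denoise each cross section can be sketched as follows; the three-point minimal sampling scheme, inlier tolerance and iteration count are assumptions for illustration:

```python
import numpy as np

def circle_from_3pts(p1, p2, p3):
    # Circumscribed circle of three non-collinear 2D points.
    ax, ay = p1; bx, by = p2; cx, cy = p3
    d = 2.0 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    if abs(d) < 1e-12:
        return None                    # degenerate (collinear) sample
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    return np.array([ux, uy]), np.hypot(ax - ux, ay - uy)

def ransac_circle(points, n_iter=300, tol=0.02, rng=None):
    """RANSAC circle fit: repeatedly fit a circle to 3 random points and
    keep the model with the most inliers (distance-to-circle below tol)."""
    rng = np.random.default_rng(rng)
    best, best_inliers = None, -1
    for _ in range(n_iter):
        idx = rng.choice(len(points), 3, replace=False)
        model = circle_from_3pts(*points[idx])
        if model is None:
            continue
        c, r = model
        resid = np.abs(np.linalg.norm(points - c, axis=1) - r)
        n_in = int((resid < tol).sum())
        if n_in > best_inliers:
            best, best_inliers = (c, r), n_in
    return best
```

In practice the best model would be refined by a least-squares fit over its inliers; the sketch stops at the consensus stage.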
Hydrothermal optimal power flow using continuation method
International Nuclear Information System (INIS)
Raoofat, M.; Seifi, H.
2001-01-01
The problem of optimal economic operation of hydrothermal electric power systems is solved using a powerful continuation method. While the conventional approach uses fixed generation voltages to avoid convergence problems, in this algorithm they are treated as variables so that better solutions can be obtained. The algorithm is tested on typical 5-bus and 17-bus New Zealand networks. Its capabilities and promising results are assessed
Directory of Open Access Journals (Sweden)
Jingyu Sun
2014-07-01
Full Text Available To survive in the current shipbuilding industry, it is of vital importance for shipyards to have the accuracy of ship components evaluated efficiently during most manufacturing steps. Evaluating a component's accuracy by comparing its point cloud data, scanned by laser scanners, with the ship's design data in CAD format cannot be done efficiently when (1) the components extracted from the point cloud data contain irregular obstacles, or when (2) the registration of the two data sets has no clear direction setting. This paper presents reformative point cloud data processing methods to solve these problems. K-d tree construction on the point cloud data speeds up the neighbor search for each point. A region growing method performed on the neighbor points of a seed point extracts the continuous part of the component, while curved surface fitting and B-spline curve fitting at the edge of the continuous part recognize neighboring domains of the same component divided by obstacles' shadows. The ICP (Iterative Closest Point) algorithm conducts a registration of the two data sets after the proper registration direction is decided by principal component analysis. In experiments conducted at the shipyard, 200 curved shell plates were extracted from the scanned point cloud data, and registrations were conducted between them and the designed CAD data using the proposed methods for accuracy evaluation. Results show that the proposed methods support accuracy-evaluation-targeted point cloud data processing efficiently in practice.
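The region-growing extraction step can be sketched as follows. This toy version uses brute-force neighbor search instead of the k-d tree acceleration described in the abstract, and the growth radius is an assumed parameter:

```python
import numpy as np

def region_grow(points, seed_idx, radius=0.15):
    """Grow a connected region from a seed point: repeatedly absorb every
    point within `radius` of a point already in the region (brute-force
    neighbor search; a k-d tree would replace the distance scan)."""
    n = len(points)
    in_region = np.zeros(n, dtype=bool)
    in_region[seed_idx] = True
    frontier = [seed_idx]
    while frontier:
        i = frontier.pop()
        d = np.linalg.norm(points - points[i], axis=1)
        new = (d < radius) & ~in_region
        in_region[new] = True
        frontier.extend(np.flatnonzero(new).tolist())
    return np.flatnonzero(in_region)
```

Seeding inside one component returns only the points connected to it, which is how a continuous plate is separated from unrelated clutter in the scan.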
Method and apparatus for continuous sampling
International Nuclear Information System (INIS)
Marcussen, C.
1982-01-01
An apparatus and method for continuously sampling a pulverous material flow includes means for extracting a representative subflow from a pulverous material flow. A screw conveyor is provided to cause the extracted subflow to be pushed upwardly through a duct to an overflow. Means for transmitting a radiation beam transversely to the subflow in the duct, and means for sensing the transmitted beam through opposite pairs of windows in the duct are provided to measure the concentration of one or more constituents in the subflow. (author)
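The attenuation-based concentration measurement rests on the Beer-Lambert law; a minimal sketch, assuming a known attenuation coefficient and path length (both hypothetical values, not the patent's calibration):

```python
import numpy as np

def concentration_from_transmission(I0, I, mu, path_length):
    """Invert the Beer-Lambert attenuation law I = I0 * exp(-mu * c * L)
    for the constituent concentration c. `mu` (attenuation coefficient per
    unit concentration) and `path_length` L are assumed to be known from
    calibration; this sketches the measuring principle, not the apparatus."""
    return np.log(I0 / I) / (mu * path_length)
```

Sensing the transmitted beam through opposite window pairs gives I for a known source intensity I0, from which the subflow concentration follows directly.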
Solving Singular Two-Point Boundary Value Problems Using Continuous Genetic Algorithm
Directory of Open Access Journals (Sweden)
Omar Abu Arqub
2012-01-01
Full Text Available In this paper, the continuous genetic algorithm is applied to the solution of singular two-point boundary value problems, where smooth solution curves are used throughout the evolution of the algorithm to obtain the required nodal values. The proposed technique may be considered a variation of the finite difference method in the sense that each derivative is replaced by an appropriate difference quotient approximation. This novel approach possesses several advantages: it can be applied without any limitation on the nature of the problem, the type of singularity, or the number of mesh points. Numerical examples are included to demonstrate the accuracy, applicability, and generality of the presented technique. The results reveal that the algorithm is very effective, straightforward, and simple.
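The finite-difference formulation that the continuous genetic algorithm varies can be sketched on a simple, non-singular model problem; a direct linear solve is shown here instead of the evolutionary search, and the mesh size and test equation are assumptions:

```python
import numpy as np

def solve_bvp_fd(f, n=200):
    """Finite-difference solve of u''(x) = f(x), u(0) = u(1) = 0, on a
    uniform mesh: each derivative is replaced by a difference quotient,
    turning the BVP into a tridiagonal linear system for the nodal values
    (the paper evolves these nodal values with a continuous GA instead)."""
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1.0 - h, n)           # interior mesh points
    A = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
         + np.diag(np.ones(n - 1), -1)) / h**2
    u = np.linalg.solve(A, f(x))
    return x, u

# Model problem with known solution u(x) = sin(pi x).
x, u = solve_bvp_fd(lambda x: -np.pi**2 * np.sin(np.pi * x))
```

The second-order accuracy of the difference quotient gives an O(h^2) error against the exact solution, which is the baseline the GA-based nodal search competes with.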
Renson, Ludovic; Barton, David A. W.; Neild, Simon A.
Control-based continuation (CBC) is a means of applying numerical continuation directly to a physical experiment for bifurcation analysis without the use of a mathematical model. CBC enables the detection and tracking of bifurcations directly, without the need for a post-processing stage as is often the case for more traditional experimental approaches. In this paper, we use CBC to directly locate limit-point bifurcations of a periodically forced oscillator and track them as forcing parameters are varied. Backbone curves, which capture the overall frequency-amplitude dependence of the system’s forced response, are also traced out directly. The proposed method is demonstrated on a single-degree-of-freedom mechanical system with a nonlinear stiffness characteristic. Results are presented for two configurations of the nonlinearity — one where it exhibits a hardening stiffness characteristic and one where it exhibits softening-hardening.
CONTINUOUSLY DEFORMATION MONITORING OF SUBWAY TUNNEL BASED ON TERRESTRIAL POINT CLOUDS
Directory of Open Access Journals (Sweden)
Z. Kang
2012-07-01
Full Text Available The deformation monitoring of subway tunnels is of extraordinary necessity. Therefore, a method for deformation monitoring based on terrestrial point clouds is proposed in this paper. First, the traditional adjacent-station registration is replaced by section-controlled registration, so that common control points can be used by each station and error accumulation within a section is thus avoided. Afterwards, the central axis of the subway tunnel is determined through the RANSAC (Random Sample Consensus) algorithm and curve fitting. Although of very high resolution, laser points are still discrete, and thus the vertical section is computed via quadric fitting of the vicinity of interest, instead of fitting the whole model of the subway tunnel; it is determined by the intersection line rotated about the central axis of the tunnel within a vertical plane. The extraction of the vertical section is then optimized using RANSAC for the purpose of filtering out noise. Based on the extracted vertical sections, the volume of tunnel deformation is estimated by comparing vertical sections extracted at the same position from different epochs of point clouds. Furthermore, the continuously extracted vertical sections are deployed to evaluate the convergent tendency of the tunnel. The proposed algorithms are verified using real datasets in terms of accuracy and computational efficiency. The fitting accuracy analysis shows that the maximum deviation between interpolated and real points is 1.5 mm, and the minimum is 0.1 mm; the convergent tendency of the tunnel was detected by comparing adjacent fitting radii, with a maximum error of 6 mm and a minimum of 1 mm. The computation cost of vertical section extraction is within 3 seconds/section, which proves high efficiency.
Interior-Point Methods for Linear Programming: A Review
Singh, J. N.; Singh, D.
2002-01-01
The paper reviews some recent advances in interior-point methods for linear programming and indicates directions in which future progress can be made. Most interior-point methods belong to one of three categories: affine-scaling methods, potential reduction methods and central path methods. These methods are discussed together with…
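A compact sketch of the affine-scaling family named above, applied to a tiny standard-form LP; the step fraction `gamma`, the test problem and the interior starting point are illustrative assumptions:

```python
import numpy as np

def affine_scaling(A, b, c, x0, gamma=0.6, n_iter=100):
    """Primal affine-scaling iteration for min c^T x s.t. A x = b, x > 0:
    rescale by the current interior point, step against the projected cost,
    and stay strictly inside the positive orthant (b enters only through
    the feasible starting point x0)."""
    x = np.asarray(x0, dtype=float)
    assert np.allclose(A @ x, b)                        # x0 must be feasible
    for _ in range(n_iter):
        D2 = np.diag(x**2)
        w = np.linalg.solve(A @ D2 @ A.T, A @ D2 @ c)   # dual estimate
        dx = -D2 @ (c - A.T @ w)                        # scaled descent direction
        neg = dx < 0
        if not neg.any() or np.linalg.norm(dx) < 1e-12:
            break                                       # optimal (or unbounded)
        alpha = gamma * np.min(-x[neg] / dx[neg])       # fraction of way to boundary
        x = x + alpha * dx
    return x

# max x1 + 2*x2  s.t.  x1 + x2 <= 1, x2 <= 0.6  (slacks appended, cost negated)
A = np.array([[1.0, 1.0, 1.0, 0.0],
              [0.0, 1.0, 0.0, 1.0]])
b = np.array([1.0, 0.6])
c = np.array([-1.0, -2.0, 0.0, 0.0])
x = affine_scaling(A, b, c, x0=[0.2, 0.2, 0.6, 0.4])
```

Because A dx = 0 by construction, every iterate remains feasible; with gamma below 2/3 the iteration converges to the optimal vertex for nondegenerate problems.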
Method of continuously regenerating decontaminating electrolytic solution
International Nuclear Information System (INIS)
Sasaki, Takashi; Kobayashi, Toshio; Wada, Koichi.
1985-01-01
Purpose: To continuously recover radioactive metal ions from electrolytic solution that has been used for the electrolytic decontamination of radioactive equipment and whose radioactivity has increased, and to regenerate the solution into a high-concentration acid. Method: Liquid in an auxiliary tank is recycled to a cathode chamber, containing the water of an electro-depositing regeneration tank, to maintain pH = 2 by means of a pH controller and a pH electrode. The electrolytic solution in the electrolytic decontamination tank is introduced via an injection pump into the auxiliary tank and, interlocked with this, regenerating solution is fed from a regenerating-solution extraction pump through an extraction pipe to the electrolytic decontamination tank. Meanwhile, electric current is supplied to the electrode to deposit the radioactive metal ions dissolved in the cathode chamber onto the capturing electrode, while anions are transferred through a partition wall to an anode chamber, regenerating the electrolytic solution into a high-concentration acid solution. Water is supplied via an electromagnetic valve interlocked with a level meter to keep the liquid level constant. This decreases the generation of liquid wastes and also reduces the amount of radioactive secondary waste. (Horiuchi, T.)
Energy Technology Data Exchange (ETDEWEB)
Foedisch, Holger; Schulz, Joerg; Schengber, Petra; Dietrich, Gabriele [Dr. Foedisch Umweltmesstechnik AG, Markranstaedt (Germany)
2009-07-01
The reduction of flue gas losses is one option for increasing power plant efficiency; the target is an optimally low waste gas temperature. When firing lignite and other high-sulphur fuels, the minimum flue gas temperature is mainly determined by the acid dew point: the temperature in the flue gas system should be some 10 to 20 K above the assumed acid dew point. The acid dew point measuring system AMD 08 is able to detect the real acid dew point in a quasi-continuous way. Thus, it is possible to deliberately decrease the waste gas temperature. (orig.)
LEAP: Looking beyond pixels with continuous-space EstimAtion of Point sources
Pan, Hanjie; Simeoni, Matthieu; Hurley, Paul; Blu, Thierry; Vetterli, Martin
2017-12-01
Context. Two main classes of imaging algorithms have emerged in radio interferometry: the CLEAN algorithm and its multiple variants, and compressed-sensing inspired methods. They are both discrete in nature, and estimate source locations and intensities on a regular grid. For the traditional CLEAN-based imaging pipeline, the resolution power of the tool is limited by the width of the synthesized beam, which is inversely proportional to the largest baseline. The finite rate of innovation (FRI) framework is a robust method to find the locations of point-sources in a continuum without grid imposition. The continuous formulation makes the FRI recovery performance only dependent on the number of measurements and the number of sources in the sky. FRI can theoretically find sources below the perceived tool resolution. To date, FRI had never been tested in the extreme conditions inherent to radio astronomy: weak signal / high noise, huge data sets, large numbers of sources. Aims: The aims were (i) to adapt FRI to radio astronomy, (ii) verify it can recover sources in radio astronomy conditions with more accurate positioning than CLEAN, and possibly resolve some sources that would otherwise be missed, (iii) show that sources can be found using less data than would otherwise be required to find them, and (iv) show that FRI does not lead to an augmented rate of false positives. Methods: We implemented a continuous domain sparse reconstruction algorithm in Python. The angular resolution performance of the new algorithm was assessed under simulation, and with visibility measurements from the LOFAR telescope. Existing catalogs were used to confirm the existence of sources. Results: We adapted the FRI framework to radio interferometry, and showed that it is possible to determine accurate off-grid point-source locations and their corresponding intensities. In addition, FRI-based sparse reconstruction required less integration time and smaller baselines to reach a comparable
DEFF Research Database (Denmark)
Iskhakov, Fedor; Jørgensen, Thomas H.; Rust, John
2017-01-01
We present a fast and accurate computational method for solving and estimating a class of dynamic programming models with discrete and continuous choice variables. The solution method we develop for structural estimation extends the endogenous grid-point method (EGM) to discrete-continuous (DC) p...
Post-Processing in the Material-Point Method
DEFF Research Database (Denmark)
Andersen, Søren; Andersen, Lars Vabbersgaard
The material-point method (MPM) is a numerical method for dynamic or static analysis of solids using a discretization in time and space. The method has been shown to be successful in modelling physical problems involving large deformations, which are difficult to model with traditional numerical tools...... such as the finite element method. In the material-point method, a set of material points is utilized to track the problem in time and space, while a computational background grid is utilized to obtain spatial derivatives relevant to the physical problem. Currently, the research within the material-point method......-point method. The first idea involves associating a volume with each material point and displaying the deformation of this volume. In the discretization process, the physical domain is divided into a number of smaller volumes, each represented by a simple shape; here quadrilaterals are chosen for the presented...
Continuously rotating cat scanning apparatus and method
International Nuclear Information System (INIS)
Bax, R.F.
1980-01-01
A tomographic scanner with a continuously rotating source of radiation is energized by converting inertial mechanical energy to electrical energy. The mechanical-to-electrical conversion apparatus is mounted with the x-ray source to be energized on a rotating flywheel. The inertial mechanical energy stored in the rotating conversion apparatus, flywheel and x-ray source is utilized for generating electrical energy used, in turn, to energize the x-ray source
NREL Patents Method for Continuous Monitoring of Materials During Manufacturing
News Release
NREL's Energy Systems Integration Facility (ESIF). More information, including the published patent, can
The cross-over points in lattice gauge theories with continuous gauge groups
International Nuclear Information System (INIS)
Cvitanovic, P.; Greensite, J.; Lautrup, B.
1981-01-01
We obtain a closed expression for the weak-to-strong coupling cross-over point in all Wilson type lattice gauge theories with continuous gauge groups. We use a weak-coupling expansion of the mean-field self-consistency equation. In all cases where our results can be compared with Monte Carlo calculations the agreement is excellent. (orig.)
DEFF Research Database (Denmark)
Buron, Jonas Christian Due; Pizzocchero, Filippo; Jessen, Bjarke Sørensen
2014-01-01
The electrical continuity of the graphene films is analyzed using two noninvasive conductance characterization methods: ultrabroadband terahertz time-domain spectroscopy and micro four-point probe, which probe the electrical properties of the graphene film on different length scales, 100 nm and 10 μm, respectively. Ultrabroadband terahertz time-domain spectroscopy allows...... - and microscale electrical continuity of single layer graphene grown on centimeter-sized single crystal copper is compared with that of previously studied graphene films, grown on commercially available copper foil, after transfer to SiO2 surfaces. Micro four-point probe resistance values measured on graphene grown on single crystalline copper in two different voltage-current configurations show close agreement with the expected distributions for a continuous 2D conductor, in contrast with previous observations on graphene grown on commercial...
Analysis of Stress Updates in the Material-point Method
DEFF Research Database (Denmark)
Andersen, Søren; Andersen, Lars
2009-01-01
The material-point method (MPM) is a new numerical method for analysis of large strain engineering problems. The MPM applies a dual formulation, where the state of the problem (mass, stress, strain, velocity etc.) is tracked using a finite set of material points while the governing equations...... are solved on a background computational grid. Several references state, that one of the main advantages of the material-point method is the easy application of complicated material behaviour as the constitutive response is updated individually for each material point. However, as discussed here, the MPM way...
Selective Integration in the Material-Point Method
DEFF Research Database (Denmark)
Andersen, Lars; Andersen, Søren; Damkilde, Lars
2009-01-01
The paper deals with stress integration in the material-point method. In order to avoid parasitic shear in bending, a formulation is proposed, based on selective integration in the background grid that is used to solve the governing equations. The suggested integration scheme is compared...... to a traditional material-point-method computation in which the stresses are evaluated at the material points. The deformation of a cantilever beam is analysed, assuming elastic or elastoplastic material behaviour....
Continuously deformation monitoring of subway tunnel based on terrestrial point clouds
Kang, Z.; Tuo, L.; Zlatanova, S.
2012-01-01
The deformation monitoring of subway tunnel is of extraordinary necessity. Therefore, a method for deformation monitoring based on terrestrial point clouds is proposed in this paper. First, the traditional adjacent stations registration is replaced by sectioncontrolled registration, so that the
Fixed-point data-collection method of video signal
International Nuclear Information System (INIS)
Tang Yu; Yin Zejie; Qian Weiming; Wu Xiaoyi
1997-01-01
The authors describe a fixed-point data-collection method for video signals. The method introduces the idea of fixed-point data collection and has been successfully applied in research on real-time radiography of dose fields, a project supported by the National Science Fund
Continual integration method in the polaron model
International Nuclear Information System (INIS)
Kochetov, E.A.; Kuleshov, S.P.; Smondyrev, M.A.
1981-01-01
The article is devoted to the investigation of a polaron system on the basis of a variational approach formulated in the language of continuum integration. A variational method generalizing Feynman's to the case of nonzero total momentum of the system has been formulated. The polaron state has been investigated at zero temperature. The problem of the bound state of two polarons exchanging quanta of a scalar field, as well as the problem of polaron scattering by an external field in the Born approximation, have been considered. The thermodynamics of the polaron system has been investigated: high-temperature expansions for the mean energy and effective polaron mass have been studied [ru
Strike Point Control on EAST Using an Isoflux Control Method
International Nuclear Information System (INIS)
Xing Zhe; Xiao Bingjia; Luo Zhengping; Walker, M. L.; Humphreys, D. A.
2015-01-01
For advanced tokamaks, the particle deposition and thermal load on the divertor are a major challenge. By moving the strike points on the divertor target plates, the position of particle deposition and thermal load can be shifted. The Poloidal Field (PF) coil currents can be adjusted to achieve strike-point position feedback control. Using the isoflux control method, the strike point position can be controlled by controlling the X-point position. On the basis of experimental data, we establish relational expressions between the X-point position and the strike point position. Benchmark experiments are carried out to validate the correctness and robustness of the control methods. The strike point position is successfully controlled following our command in EAST operation. (paper)
Pilot points method for conditioning multiple-point statistical facies simulation on flow data
Ma, Wei; Jafarpour, Behnam
2018-05-01
We propose a new pilot points method for conditioning discrete multiple-point statistical (MPS) facies simulation on dynamic flow data. While conditioning MPS simulation on static hard data is straightforward, their calibration against nonlinear flow data is nontrivial. The proposed method generates conditional models from a conceptual model of geologic connectivity, known as a training image (TI), by strategically placing and estimating pilot points. To place pilot points, a score map is generated based on three sources of information: (i) the uncertainty in facies distribution, (ii) the model response sensitivity information, and (iii) the observed flow data. Once the pilot points are placed, the facies values at these points are inferred from production data and then are used, along with available hard data at well locations, to simulate a new set of conditional facies realizations. While facies estimation at the pilot points can be performed using different inversion algorithms, in this study the ensemble smoother (ES) is adopted to update permeability maps from production data, which are then used to statistically infer facies types at the pilot point locations. The developed method combines the information in the flow data and the TI by using the former to infer facies values at selected locations away from the wells and the latter to ensure consistent facies structure and connectivity away from measurement locations. Several numerical experiments are used to evaluate the performance of the developed method and to discuss its important properties.
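The score-map construction for placing pilot points can be sketched as follows. The entropy-based uncertainty measure, the max-normalization and the equal weights are assumptions about how the three information sources might be combined, not the paper's exact formula:

```python
import numpy as np

def pilot_point_scores(facies_prob, sensitivity, data_mismatch, weights=(1.0, 1.0, 1.0)):
    """Combine three per-cell information sources into one score map:
    (i) facies uncertainty (binary entropy of the facies probability),
    (ii) model-response sensitivity, and (iii) local data mismatch.
    Each map is normalized to [0, 1] and summed with assumed weights."""
    p = np.clip(facies_prob, 1e-9, 1.0 - 1e-9)
    entropy = -(p * np.log(p) + (1.0 - p) * np.log(1.0 - p))
    def norm(a):
        return a / (np.max(a) + 1e-12)
    w1, w2, w3 = weights
    return w1 * norm(entropy) + w2 * norm(sensitivity) + w3 * norm(data_mismatch)

def place_pilot_points(score, n_points):
    # Pick the n_points highest-scoring grid cells as pilot point locations.
    flat = np.argsort(score, axis=None)[::-1][:n_points]
    return np.column_stack(np.unravel_index(flat, score.shape))
```

In a full workflow, spacing constraints would keep selected points apart; the sketch simply ranks cells by score.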
End-point detection in potentiometric titration by continuous wavelet transform.
Jakubowska, Małgorzata; Baś, Bogusław; Kubiak, Władysław W
2009-10-15
The aim of this work was the construction of a new wavelet function and verification that a continuous wavelet transform with a specially defined, dedicated mother wavelet is a useful tool for precise detection of the end-point in a potentiometric titration. The proposed algorithm does not require any initial information about the nature or type of the analyte and/or the shape of the titration curve. Signal imperfections, as well as random noise or spikes, have no influence on the operation of the procedure. The optimization of the new algorithm was done using simulated curves, and then experimental data were considered. In the case of well-shaped and noise-free titration data, the proposed method gives the same accuracy and precision as commonly used algorithms. However, in the case of noisy or badly shaped curves, the presented approach works well (relative error mainly below 2% and coefficients of variability below 5%) while traditional procedures fail. Therefore, the proposed algorithm may be useful in interpreting experimental data and also in automating typical titration analysis, especially when random noise interferes with the analytical signal.
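A minimal sketch of wavelet-based end-point detection, assuming a derivative-of-Gaussian stand-in for the paper's dedicated mother wavelet and a single fixed scale (the end-point is taken at the extremum of the transform, i.e. the steepest part of the titration curve):

```python
import numpy as np

def dog_wavelet(scale, width=5.0):
    # First derivative of a Gaussian, used here as the mother wavelet
    # (assumption: the paper constructs its own dedicated wavelet instead).
    t = np.arange(-width * scale, width * scale + 1)
    return -t / scale**2 * np.exp(-t**2 / (2.0 * scale**2))

def endpoint_cwt(volume, emf, scale=8):
    """Detect the titration end-point as the extremum of the continuous
    wavelet transform of the potentiometric curve at one fixed scale.
    The antisymmetric wavelet responds maximally at the inflection point."""
    psi = dog_wavelet(scale)
    response = np.convolve(emf - emf.mean(), psi, mode='same')
    return volume[np.argmax(np.abs(response))]
```

Because the wavelet has zero mean, slowly varying baseline and broadband noise are suppressed, which is what makes the approach robust on badly shaped curves.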
Second derivative continuous linear multistep methods for the ...
African Journals Online (AJOL)
step methods (LMM), with properties that embed the characteristics of LMM and hybrid methods. This paper gives a continuous reformulation of the Enright [5] second derivative methods. The motivation lies in the fact that the new formulation ...
Extensions of vector-valued Baire one functions with preservation of points of continuity
Czech Academy of Sciences Publication Activity Database
Koc, M.; Kolář, Jan
2016-01-01
Roč. 442, č. 1 (2016), s. 138-148 ISSN 0022-247X R&D Projects: GA ČR(CZ) GA14-07880S Institutional support: RVO:67985840 Keywords : vector-valued Baire one functions * extensions * non-tangential limit * continuity points Subject RIV: BA - General Mathematics Impact factor: 1.064, year: 2016 http://www.sciencedirect.com/science/article/pii/S0022247X1630097X
Material-Point Method Analysis of Bending in Elastic Beams
DEFF Research Database (Denmark)
Andersen, Søren Mikkel; Andersen, Lars
2007-01-01
The aim of this paper is to test different types of spatial interpolation for the material-point method. The interpolations include quadratic elements and cubic splines. A brief introduction to the material-point method is given. Simple linear-elastic problems are tested, including the classical...... cantilevered beam problem. As shown in the paper, the use of negative shape functions is not consistent with the material-point method in its current form, necessitating other types of interpolation, such as cubic splines, in order to obtain smoother representations of field quantities. It is shown
C-point and V-point singularity lattice formation and index sign conversion methods
Kumar Pal, Sushanta; Ruchi; Senthilkumaran, P.
2017-06-01
The generic singularities in an ellipse field are C-points, namely stars, lemons and monstars, in a polarization distribution, with C-point indices (-1/2), (+1/2) and (+1/2) respectively. Similar to C-point singularities, there are V-point singularities that occur in a vector field and are characterized by a Poincare-Hopf index of integer value. In this paper we show that the superposition of three homogeneously polarized beams in different linear states leads to the formation of a polarization singularity lattice. Three point sources at the focal plane of the lens are used to create three interfering plane waves. A radial/azimuthal polarization converter (S-wave plate) placed near the focal plane modulates the polarization states of the three beams. The interference pattern is found to host C-points and V-points in a hexagonal lattice. The C-points occur at intensity maxima and the V-points occur at intensity minima. Modulating the state of polarization (SOP) of the three plane waves from radial to azimuthal does not essentially change the nature of the polarization singularity lattice, as the Poincare-Hopf index for both radial and azimuthal polarization distributions is (+1). Hence a transformation from a star to a lemon is not trivial, as such a transformation requires not a single SOP change, but a change in the whole spatial SOP distribution. Further, there is no change in the lattice structure, and the C- and V-points appear at the locations where they were present earlier. Hence, to convert an interlacing star and V-point lattice into an interlacing lemon and V-point lattice, the interferometer requires modification. We show for the first time a method to change the polarity of C-point and V-point indices. This means that lemons can be converted into stars and stars can be converted into lemons. Similarly, a positive V-point can be converted to a negative V-point and vice versa. The intensity distribution in all these lattices is invariant as the SOPs of the three beams are changed in an
TUNNEL POINT CLOUD FILTERING METHOD BASED ON ELLIPTIC CYLINDRICAL MODEL
Directory of Open Access Journals (Sweden)
N. Zhu
2016-06-01
Full Text Available The large number of bolts and screws attached to the subway shield ring plates, along with the great number of metal stent accessories and electrical equipment mounted on the tunnel walls, make the laser point cloud data include many non-tunnel-section points (hereinafter referred to as non-points), thus affecting the accuracy of modeling and deformation monitoring. This paper proposes a filtering method for the point cloud based on an elliptic cylindrical model. The original laser point cloud data are first projected onto a horizontal plane, and a searching algorithm is given to extract the edge points of both sides, which are used further to fit the tunnel central axis. Along the axis the point cloud is segmented regionally, and then fitted as a smooth elliptic cylindrical surface by means of iteration. This processing enables the automatic filtering of the inner-wall non-points. Two groups of experiments showed consistent results: the elliptic-cylindrical-model-based method can effectively filter out the non-points and meet the accuracy requirements for subway deformation monitoring. The method provides a new mode for the periodic monitoring of the all-around deformation of tunnel sections in subways' routine operation and maintenance.
Evaluation of the point-centred-quarter method of sampling ...
African Journals Online (AJOL)
-quarter method. The parameter which was most efficiently sampled was species composition (relative density), with 90% replicate similarity being achieved with 100 point-centred-quarters. However, this technique cannot be recommended, even ...
Novel Ratio Subtraction and Isoabsorptive Point Methods for ...
African Journals Online (AJOL)
Purpose: To develop and validate two innovative spectrophotometric methods for the simultaneous determination of ambroxol hydrochloride and doxycycline in their binary mixture. Methods: Ratio subtraction and isoabsorptive point methods were used for the simultaneous determination of ambroxol hydrochloride ...
IMAGE TO POINT CLOUD METHOD OF 3D-MODELING
Directory of Open Access Journals (Sweden)
A. G. Chibunichev
2012-07-01
Full Text Available This article describes a method of constructing 3D models of objects (buildings, monuments) based on digital images and a point cloud obtained by a terrestrial laser scanner. The first step is the automated determination of the exterior orientation parameters of a digital image. To do this, corresponding points between the image and the point cloud must be found. Before the search for corresponding points, a quasi-image of the point cloud is generated. The SIFT algorithm is then applied to the quasi-image and the real image to find corresponding points, from which the exterior orientation parameters of the image are calculated. The second step is construction of the vector object model. Vectorization is performed by a PC operator in interactive mode using a single image, with the spatial coordinates of the model calculated automatically from the point cloud. In addition, automatic edge detection with interactive editing is available: edges are detected on the point cloud and on the image, with subsequent identification of the correct edges. Experimental studies of the method have demonstrated its efficiency for building facade modeling.
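The quasi-image generation step can be illustrated with a minimal sketch. The central projection, focal length and nearest-point-per-pixel rule below are illustrative assumptions, not the paper's actual camera model.

```python
import math

def quasi_image(points, width, height):
    """Project a point cloud of (x, y, z, intensity) tuples onto a
    pixel grid by a simple central projection, keeping the nearest
    point per pixel.  Pixel values come from laser intensity, so the
    result looks image-like enough for SIFT matching."""
    img = [[0.0] * width for _ in range(height)]
    depth = [[math.inf] * width for _ in range(height)]
    f = 100.0  # assumed focal length in pixels
    for x, y, z, inten in points:
        if z <= 0:
            continue  # behind the projection centre
        u = int(width / 2 + f * x / z)
        v = int(height / 2 + f * y / z)
        if 0 <= u < width and 0 <= v < height and z < depth[v][u]:
            depth[v][u] = z       # nearest surface wins
            img[v][u] = inten     # intensity becomes the pixel value
    return img

img = quasi_image([(0.0, 0.0, 5.0, 0.8), (0.1, 0.0, 5.0, 0.5)], 16, 16)
```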
New methods of subcooled water recognition in dew point hygrometers
Weremczuk, Jerzy; Jachowicz, Ryszard
2001-08-01
Two new methods of sub-cooled water recognition in dew point hygrometers are presented in this paper. The first, an impedance method, uses a new semiconductor mirror in which the dew point detector, the thermometer and the heaters are all integrated together. The second, an optical method based on a multi-section optical detector, is discussed in the report. Experimental results of both methods are shown. New types of dew point hygrometers able to recognize sub-cooled water are proposed.
Summary statistics for end-point conditioned continuous-time Markov chains
DEFF Research Database (Denmark)
Hobolth, Asger; Jensen, Jens Ledet
Continuous-time Markov chains are a widely used modelling tool. Applications include DNA sequence evolution, ion channel gating behavior and mathematical finance. We consider the problem of calculating properties of summary statistics (e.g. mean time spent in a state, mean number of jumps between two states and the distribution of the total number of jumps) for discretely observed continuous-time Markov chains. Three alternative methods for calculating properties of summary statistics are described and the pros and cons of the methods are discussed. The methods are based on (i) an eigenvalue decomposition of the rate matrix, (ii) the uniformization method, and (iii) integrals of matrix exponentials. In particular we develop a framework that allows for analyses of rather general summary statistics using the uniformization method.
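The uniformization method mentioned under (ii) can be sketched for transition probabilities. The chain, rates and truncation order below are illustrative; uniformization writes exp(Qt) as a Poisson mixture of powers of the stochastic matrix P = I + Q/lam with lam >= max_i |q_ii|.

```python
import math

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def uniformized_transition(Q, t, terms=60):
    """Transition matrix exp(Qt) of a CTMC via uniformization:
    exp(Qt) = sum_n Poisson(n; lam*t) * P^n with P = I + Q/lam."""
    n = len(Q)
    lam = max(-Q[i][i] for i in range(n)) or 1.0
    P = [[(1.0 if i == j else 0.0) + Q[i][j] / lam for j in range(n)]
         for i in range(n)]
    acc = [[0.0] * n for _ in range(n)]
    Pn = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    w = math.exp(-lam * t)  # Poisson weight for n = 0
    for k in range(terms):
        for i in range(n):
            for j in range(n):
                acc[i][j] += w * Pn[i][j]
        Pn = matmul(Pn, P)
        w *= lam * t / (k + 1)   # next Poisson weight
    return acc

# Two-state chain with rates 1 (state 0 -> 1) and 2 (state 1 -> 0):
Q = [[-1.0, 1.0], [2.0, -2.0]]
Pt = uniformized_transition(Q, 1.0)
```

For this two-state chain the closed form P00(t) = 2/3 + (1/3)exp(-3t) provides an exact check.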
International Nuclear Information System (INIS)
McCoy, M. L.; Moradi, R.; Lankarani, H. M.
2011-01-01
Agricultural and construction equipment frames are commonly designed with rectangular tubing. A typical joining method for fabricating these frames is welding, with ancillary structural plating at the connections. This allows two continuous members to pass through an intersection point of the frame with some degree of connectivity, but the connections are highly unbalanced because the tubing centroids exhibit asymmetry. Because welded continuous-member frame intersections are standard practice in current agricultural equipment designs, a conviction may exist that such frames are structurally stronger than frames with intersections of welded non-continuous members whose tubing centroids lie within two planes of symmetry, a connection design that would likely yield a more fatigue-resistant structural frame. Three types of welded continuous tubing frame intersections currently observed in agricultural equipment designs were compared to two non-continuous frame intersection designs. Each design was subjected to the same loading condition and then examined for stress levels using the finite element method to predict fatigue life. Results demonstrated that a lighter-weight, non-continuous member frame intersection design was two orders of magnitude more fatigue resistant than some currently implemented frame designs when using stress-life fatigue prediction methods and empirical fatigue strengths for fillet welds. Stress-life predictions were also made using theoretical fatigue strength calculations at the welds for comparison to the empirically derived weld fatigue strength.
Chen, Zhaoxue; Chen, Hao
2014-01-01
A deconvolution method based on Gaussian radial basis function (GRBF) interpolation is proposed. Both the original image and the Gaussian point spread function are expressed in the same continuous GRBF model, so image degradation is simplified to the convolution of two continuous Gaussian functions, and image deconvolution is converted to calculating the weighted coefficients of two-dimensional control points. Compared with the Wiener filter and the Lucy-Richardson algorithm, the GRBF method has an obvious advantage in the quality of restored images. To overcome its drawback of long computation times, graphics processing unit multithreading or an increased spacing of control points is adopted, respectively, to speed up the implementation of the GRBF method. The experiments show that, based on the continuous GRBF model, image deconvolution can be efficiently implemented by the method, which also has considerable reference value for the study of three-dimensional microscopic image deconvolution.
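The simplification the model exploits, that the convolution of two Gaussians is again a Gaussian with summed variances, can be checked numerically. The profile widths and grid below are arbitrary choices for the check, not values from the paper.

```python
import math

def gaussian(sigma, xs):
    """Normalised Gaussian density evaluated on the grid xs."""
    return [math.exp(-x * x / (2 * sigma * sigma)) /
            (sigma * math.sqrt(2 * math.pi)) for x in xs]

# Blurring a Gaussian of width s1 with a Gaussian PSF of width s2
# yields a Gaussian of width sqrt(s1^2 + s2^2); this is what turns
# deconvolution into algebra on the GRBF control-point weights.
h = 0.01
xs = [i * h for i in range(-1000, 1001)]
f = gaussian(1.0, xs)   # "image" profile, s1 = 1.0
g = gaussian(0.5, xs)   # point spread function, s2 = 0.5

# Discrete convolution evaluated at x = 0 only, for brevity:
# (f*g)(0) = integral of f(t) g(-t) dt, approximated by a Riemann sum.
conv0 = h * sum(fi * gi for fi, gi in zip(f, reversed(g)))
```

The value at the origin should match the density of a zero-mean Gaussian with variance 1.0^2 + 0.5^2 = 1.25 evaluated at 0.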
Automated and continuously operating acid dew point measuring instrument for flue gases
Energy Technology Data Exchange (ETDEWEB)
Reckmann, D.; Naundorf, G.
1986-06-01
The design and operation of a sulfuric acid dew point indicator for continuous flue gas temperature control are explained. The indicator operated successfully in trial tests over several years with brown coal, gas and oil combustion in a measurement range of 60 to 180 C. The design is regarded as uncomplicated and easy to manufacture. Its operating principle is based on electric conductivity measurement on a surface on which sulfuric acid vapor has condensed. A ring electrode and a PtRh/Pt thermocouple as central electrode are employed. A scheme of the equipment design is provided. The accuracy of the indicator was compared to manual dew point sondes manufactured by Degussa and showed a maximum deviation of 5 C. Manual cleaning after a number of weeks of operation is required. Fly ash with a high lime content increases dust buildup and requires more frequent cleaning cycles.
Material-point Method Analysis of Bending in Elastic Beams
DEFF Research Database (Denmark)
Andersen, Søren Mikkel; Andersen, Lars
The aim of this paper is to test different types of spatial interpolation for the material-point method. The interpolations include quadratic elements and cubic splines. A brief introduction to the material-point method is given. Simple linear-elastic problems are tested, including the classical cantilevered beam problem. As shown in the paper, the use of negative shape functions is not consistent with the material-point method in its current form, necessitating other types of interpolation such as cubic splines in order to obtain smoother representations of field quantities. It is shown ...
Analytic continuation of massless two-loop four-point functions
International Nuclear Information System (INIS)
Gehrmann, T.; Remiddi, E.
2002-01-01
We describe the analytic continuation of two-loop four-point functions with one off-shell external leg and internal massless propagators from the Euclidean region of space-like 1→3 decay to Minkowskian regions relevant to all 1→3 and 2→2 reactions with one space-like or time-like off-shell external leg. Our results can be used to derive two-loop master integrals and unrenormalized matrix elements for hadronic vector-boson-plus-jet production and deep inelastic two-plus-one-jet production, from results previously obtained for three-jet production in electron-positron annihilation. (author)
Primal-Dual Interior Point Multigrid Method for Topology Optimization
Czech Academy of Sciences Publication Activity Database
Kočvara, Michal; Mohammed, S.
2016-01-01
Roč. 38, č. 5 (2016), B685-B709 ISSN 1064-8275 Grant - others:European Commission - EC(XE) 313781 Institutional support: RVO:67985556 Keywords : topology optimization * multigrid methods * interior point method Subject RIV: BA - General Mathematics Impact factor: 2.195, year: 2016 http://library.utia.cas.cz/separaty/2016/MTR/kocvara-0462418.pdf
Interior Point Methods for Large-Scale Nonlinear Programming
Czech Academy of Sciences Publication Activity Database
Lukšan, Ladislav; Matonoha, Ctirad; Vlček, Jan
2005-01-01
Roč. 20, č. 4-5 (2005), s. 569-582 ISSN 1055-6788 R&D Projects: GA AV ČR IAA1030405 Institutional research plan: CEZ:AV0Z10300504 Keywords : nonlinear programming * interior point methods * KKT systems * indefinite preconditioners * filter methods * algorithms Subject RIV: BA - General Mathematics Impact factor: 0.477, year: 2005
A generalized endogenous grid method for discrete-continuous choice
John Rust; Bertel Schjerning; Fedor Iskhakov
2012-01-01
This paper extends Carroll's endogenous grid method (2006, "The method of endogenous gridpoints for solving dynamic stochastic optimization problems", Economics Letters) to models with sequential discrete and continuous choice. Unlike existing generalizations, we propose a solution algorithm that inherits both advantages of the original method: it avoids all root-finding operations and also efficiently deals with restrictions on the continuous decision variable. To further speed up the s...
Taylor's series method for solving the nonlinear point kinetics equations
International Nuclear Information System (INIS)
Nahla, Abdallah A.
2011-01-01
Highlights: → Taylor's series method for the nonlinear point kinetics equations is applied. → General-order derivatives are derived for this system. → The stability of Taylor's series method is studied. → Taylor's series method is A-stable for negative reactivity. → Taylor's series method is an accurate computational technique. - Abstract: Taylor's series method for solving the point reactor kinetics equations with multiple groups of delayed neutrons in the presence of Newtonian temperature feedback reactivity is applied and programmed in FORTRAN. This system is a set of coupled stiff nonlinear ordinary differential equations. The numerical method is based on the different-order derivatives of the neutron density, the precursor concentrations of the i-th group of delayed neutrons and the reactivity. The r-th order derivatives are derived. The stability of Taylor's series method is discussed. Three sets of applications are computed: step, ramp and temperature feedback reactivities. Taylor's series method is an accurate computational technique and is stable for negative step, negative ramp and temperature feedback reactivities. This method is more useful than the traditional methods for solving the nonlinear point kinetics equations.
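For the special case of a constant step reactivity without feedback, the point kinetics system is linear and the Taylor step reduces to a truncated series of repeated matrix-vector products. The sketch below uses one delayed group and illustrative parameters, not the paper's multi-group FORTRAN implementation.

```python
# One delayed-neutron group point kinetics, y = (n, C):
#   dn/dt = ((rho - beta)/Lambda) n + lam C
#   dC/dt = (beta/Lambda) n - lam C
# Illustrative parameters (not from the paper):
beta, lam, Lambda, rho = 0.0065, 0.08, 1e-4, 0.003

A = [[(rho - beta) / Lambda, lam],
     [beta / Lambda, -lam]]

def taylor_step(y, h, order=20):
    """Advance y by one step of size h using the derivatives
    y^(k) = A^k y, summed to the given order."""
    out = list(y)
    deriv = list(y)
    fact = 1.0
    for k in range(1, order + 1):
        deriv = [A[0][0] * deriv[0] + A[0][1] * deriv[1],
                 A[1][0] * deriv[0] + A[1][1] * deriv[1]]
        fact *= k
        out = [out[i] + (h ** k) / fact * deriv[i] for i in range(2)]
    return out

# Start from equilibrium precursors at n = 1 and advance to t = 0.1 s.
y = [1.0, beta / (lam * Lambda)]
for _ in range(1000):
    y = taylor_step(y, 1e-4)
```

For this positive step below prompt critical, the neutron density should settle near the prompt-jump value beta/(beta - rho), about 1.86, after the fast transient decays.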
Primal Interior Point Method for Minimization of Generalized Minimax Functions
Czech Academy of Sciences Publication Activity Database
Lukšan, Ladislav; Matonoha, Ctirad; Vlček, Jan
2010-01-01
Roč. 46, č. 4 (2010), s. 697-721 ISSN 0023-5954 R&D Projects: GA ČR GA201/09/1957 Institutional research plan: CEZ:AV0Z10300504 Keywords : unconstrained optimization * large-scale optimization * nonsmooth optimization * generalized minimax optimization * interior-point methods * modified Newton methods * variable metric methods * global convergence * computational experiments Subject RIV: BA - General Mathematics Impact factor: 0.461, year: 2010 http://dml.cz/handle/10338.dmlcz/140779
A new comparison method for dew-point generators
Heinonen, Martti
1999-12-01
A new method for comparing dew-point generators was developed at the Centre for Metrology and Accreditation. In this method, the generators participating in a comparison are compared with a transportable saturator unit using a dew-point comparator. The method was tested by constructing a test apparatus and by comparing it with the MIKES primary dew-point generator several times in the dew-point temperature range from -40 to +75 °C. The expanded uncertainty (k = 2) of the apparatus was estimated to be between 0.05 and 0.07 °C and the difference between the comparator system and the generator is well within these limits. In particular, all of the results obtained in the range below 0 °C are within ±0.03 °C. It is concluded that a new type of a transfer standard with characteristics most suitable for dew-point comparisons can be developed on the basis of the principles presented in this paper.
Probabilistic Power Flow Method Considering Continuous and Discrete Variables
Directory of Open Access Journals (Sweden)
Xuexia Zhang
2017-04-01
Full Text Available This paper proposes a probabilistic power flow (PPF) method considering continuous and discrete variables (continuous and discrete power flow, CDPF) for power systems. The proposed method, based on the cumulant method (CM) and multiple deterministic power flow (MDPF) calculations, can deal with continuous variables such as wind power generation (WPG) and loads, and discrete variables such as fuel cell generation (FCG). In this paper, continuous variables follow a normal distribution (loads) or a non-normal distribution (WPG), and discrete variables follow a binomial distribution (FCG). Through testing on IEEE 14-bus and IEEE 118-bus power systems, the proposed method (CDPF) shows better accuracy compared with the CM and higher efficiency compared with the Monte Carlo simulation method (MCSM).
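The cumulant method rests on the additivity of cumulants of independent variables, which is what lets input uncertainties be combined without Monte Carlo sampling. A minimal check of that property, using binomial variables as stand-ins for discrete fuel cell generation:

```python
import math

def binom_pmf(n, p):
    """Probability mass function of Binomial(n, p) on {0, ..., n}."""
    return [math.comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1)]

def cumulants_123(pmf):
    """First three cumulants from a pmf on {0, 1, ...}:
    kappa1 = mean, kappa2 = variance, kappa3 = third central moment."""
    m1 = sum(k * p for k, p in enumerate(pmf))
    m2 = sum((k - m1) ** 2 * p for k, p in enumerate(pmf))
    m3 = sum((k - m1) ** 3 * p for k, p in enumerate(pmf))
    return m1, m2, m3

def convolve(a, b):
    """pmf of the sum of two independent discrete variables."""
    out = [0.0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

X, Y = binom_pmf(8, 0.3), binom_pmf(5, 0.6)
kx, ky = cumulants_123(X), cumulants_123(Y)
ks = cumulants_123(convolve(X, Y))  # cumulants of X + Y
```

Each cumulant of the sum equals the sum of the corresponding input cumulants, which is the algebraic core that the CM and CDPF build on.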
Zhou, Yu; Wang, Tianyi; Dai, Bing; Li, Wenjun; Wang, Wei; You, Chengwu; Wang, Kejia; Liu, Jinsong; Wang, Shenglie; Yang, Zhengang
2018-02-01
Inspired by the extensive application of terahertz (THz) imaging technologies in the field of aerospace, we exploit a THz frequency-modulated continuous-wave imaging method with the continuous wavelet transform (CWT) algorithm to inspect a multilayer heat shield made of special materials. The method uses the frequency-modulated continuous-wave system to capture the reflected THz signal and then processes the image data by the CWT with different basis functions. By calculating the sizes of the defect areas in the final images and comparing the results with the real samples, a practical high-precision THz imaging method is demonstrated. Our method can be an effective tool for the THz nondestructive testing of composites, drugs, and some cultural heritage objects.
Dual reference point temperature interrogating method for distributed temperature sensor
International Nuclear Information System (INIS)
Ma, Xin; Ju, Fang; Chang, Jun; Wang, Weijie; Wang, Zongliang
2013-01-01
A novel method based on dual temperature reference points is presented to interrogate the temperature in a distributed temperature sensing (DTS) system. This new method is suitable to overcome deficiencies due to the impact of DC offsets and the gain difference in the two signal channels of the sensing system during temperature interrogation. Moreover, this method can in most cases avoid the need to calibrate the gain and DC offsets in the receiver, data acquisition and conversion. An improved temperature interrogation formula is presented and the experimental results show that this method can efficiently estimate the channel amplification and system DC offset, thus improving the system accuracy. (letter)
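The dual-reference-point idea can be sketched with a deliberately simplified linear channel model; the actual DTS response is based on the Raman anti-Stokes/Stokes ratio, and the function names and numbers below are invented for illustration.

```python
# The raw channel reading for a fibre section is modelled here as
#   s = G * r(T) + d
# with unknown channel gain G and DC offset d, and (simplification)
# a linear response r(T) = T.  Two fibre sections held at known
# reference temperatures T1 and T2 pin down G and d, after which any
# other section's temperature follows without a separate receiver
# calibration -- the deficiency the paper's method is built to avoid.

def calibrate(s1, T1, s2, T2):
    """Solve for gain and offset from two reference-point readings."""
    G = (s1 - s2) / (T1 - T2)
    d = s1 - G * T1
    return G, d

def interrogate(s, G, d):
    """Invert the channel model to recover temperature."""
    return (s - d) / G

# Simulated channel with gain 2.5 and offset 0.3:
G_true, d_true = 2.5, 0.3
s_ref1 = G_true * 20.0 + d_true   # reference section at 20 C
s_ref2 = G_true * 60.0 + d_true   # reference section at 60 C
G, d = calibrate(s_ref1, 20.0, s_ref2, 60.0)
T = interrogate(G_true * 45.0 + d_true, G, d)  # unknown section
```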
Analysis of Spatial Interpolation in the Material-Point Method
DEFF Research Database (Denmark)
Andersen, Søren; Andersen, Lars
2010-01-01
are obtained using quadratic elements. It is shown that for more complex problems, the use of partially negative shape functions is inconsistent with the material-point method in its current form, necessitating other types of interpolation such as cubic splines in order to obtain smoother representations...
Modeling of Landslides with the Material Point Method
DEFF Research Database (Denmark)
Andersen, Søren Mikkel; Andersen, Lars
2008-01-01
A numerical model for studying the dynamic evolution of landslides is presented. The numerical model is based on the Generalized Interpolation Material Point Method. A simplified slope with a house placed on top is analysed. An elasto-plastic material model based on the Mohr-Coulomb yield criterion...
Modelling of Landslides with the Material-point Method
DEFF Research Database (Denmark)
Andersen, Søren; Andersen, Lars
2009-01-01
A numerical model for studying the dynamic evolution of landslides is presented. The numerical model is based on the Generalized Interpolation Material Point Method. A simplified slope with a house placed on top is analysed. An elasto-plastic material model based on the Mohr-Coulomb yield criterion...
Symbol recognition produced by points of tactile stimulation: the illusion of linear continuity.
Gonzales, G R
1996-11-01
To determine whether tactile receptive communication is possible through the use of a mechanical device that produces the phi phenomenon on the body surface. Twenty-six subjects (11 blind and 15 sighted participants) were tested with use of a tactile communication device (TCD) that produces an illusion of linear continuity forming numbers on the dorsal aspect of the wrist. Recognition of a number or number set was the goal. A TCD with protruding and vibrating solenoids produced sequentially delivered points of cutaneous stimulation along a pattern resembling numbers and created the illusion of dragging a vibrating stylet to form numbers, similar to what might be felt by testing for graphesthesia. Blind subjects recognized numbers with fewer trials than did sighted subjects, although all subjects were able to recognize all the numbers produced by the TCD. Subjects who had been blind since birth and had no prior tactile exposure to numbers were able to draw the numbers after experiencing them delivered by the TCD even though they did not recognize their meaning. The phi phenomenon is probably responsible for the illusion of continuous lines in the shape of numbers as produced by the TCD. This tactile illusion could potentially be used for more complex tactile communications such as letters and words.
Primal Interior-Point Method for Large Sparse Minimax Optimization
Czech Academy of Sciences Publication Activity Database
Lukšan, Ladislav; Matonoha, Ctirad; Vlček, Jan
2009-01-01
Roč. 45, č. 5 (2009), s. 841-864 ISSN 0023-5954 R&D Projects: GA AV ČR IAA1030405; GA ČR GP201/06/P397 Institutional research plan: CEZ:AV0Z10300504 Keywords : unconstrained optimization * large-scale optimization * minimax optimization * nonsmooth optimization * interior-point methods * modified Newton methods * variable metric methods * computational experiments Subject RIV: BA - General Mathematics Impact factor: 0.445, year: 2009 http://dml.cz/handle/10338.dmlcz/140034
Acceleration of Meshfree Radial Point Interpolation Method on Graphics Hardware
International Nuclear Information System (INIS)
Nakata, Susumu
2008-01-01
This article describes a parallel computational technique to accelerate radial point interpolation method (RPIM)-based meshfree method using graphics hardware. RPIM is one of the meshfree partial differential equation solvers that do not require the mesh structure of the analysis targets. In this paper, a technique for accelerating RPIM using graphics hardware is presented. In the method, the computation process is divided into small processes suitable for processing on the parallel architecture of the graphics hardware in a single instruction multiple data manner.
A Review on the Modified Finite Point Method
Directory of Open Access Journals (Sweden)
Nan-Jing Wu
2014-01-01
Full Text Available The objective of this paper is to review recent advancements of the modified finite point method (MFPM). The MFPM is developed for solving general partial differential equations. Benchmark examples of employing this method to solve Laplace, Poisson, convection-diffusion, Helmholtz, mild-slope, and extended mild-slope equations are verified and then illustrated in fluid flow problems. Application of the MFPM to the numerical generation of orthogonal grids, which is governed by the Laplace equation, is also demonstrated.
Methods for registration laser scanner point clouds in forest stands
International Nuclear Information System (INIS)
Bienert, A.; Pech, K.; Maas, H.-G.
2011-01-01
Laser scanning is a fast and efficient 3-D measurement technique to capture surface points describing the geometry of a complex object in an accurate and reliable way. Besides airborne laser scanning, terrestrial laser scanning finds growing interest for forestry applications. These two recording platforms show large differences in resolution, recording area and scan viewing direction. Using both datasets for a combined point cloud analysis may yield advantages because of their largely complementary information. In this paper, methods are presented to automatically register airborne and terrestrial laser scanner point clouds of a forest stand. In a first step, tree detection is performed in both datasets automatically. In a second step, corresponding tree positions are determined using RANSAC. Finally, the geometric transformation is performed, divided into a coarse and a fine registration. After the coarse registration, the fine registration is done iteratively (ICP) using the point clouds themselves. The methods are tested and validated with a dataset of a forest stand. The presented registration results provide accuracies which fulfill the forestry requirements.
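The coarse registration step, once corresponding tree positions are known, reduces to estimating a rigid transform; a 2D least-squares (Procrustes-style) sketch is shown below. The RANSAC correspondence search and ICP refinement described in the paper are omitted.

```python
import math

def rigid_2d(src, dst):
    """Least-squares rotation + translation mapping src tree positions
    onto dst: centre both sets, get the rotation angle from the
    cross-covariance terms, then solve for the translation."""
    n = len(src)
    csx = sum(p[0] for p in src) / n; csy = sum(p[1] for p in src) / n
    cdx = sum(p[0] for p in dst) / n; cdy = sum(p[1] for p in dst) / n
    sxx = sum((s[0]-csx)*(d[0]-cdx) + (s[1]-csy)*(d[1]-cdy)
              for s, d in zip(src, dst))    # "dot" accumulation
    sxy = sum((s[0]-csx)*(d[1]-cdy) - (s[1]-csy)*(d[0]-cdx)
              for s, d in zip(src, dst))    # "cross" accumulation
    theta = math.atan2(sxy, sxx)
    tx = cdx - (csx*math.cos(theta) - csy*math.sin(theta))
    ty = cdy - (csx*math.sin(theta) + csy*math.cos(theta))
    return theta, tx, ty

# Terrestrial tree positions, and the same trees rotated by 30 degrees
# and shifted, as an airborne scan might report them:
src = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0), (7.0, 4.0)]
c, s = math.cos(math.pi/6), math.sin(math.pi/6)
dst = [(c*x - s*y + 5.0, s*x + c*y - 2.0) for x, y in src]
theta, tx, ty = rigid_2d(src, dst)
```

With noisy tree positions the same estimator returns the least-squares transform rather than an exact one, which is why a fine ICP stage follows.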
A MODELING METHOD OF FLUTTERING LEAVES BASED ON POINT CLOUD
J. Tang; Y. Wang; Y. Zhao; Y. Zhao; W. Hao; X. Ning; K. Lv; Z. Shi; M. Zhao
2017-01-01
Leaves falling gently or fluttering are a common phenomenon in natural scenes. The authenticity of falling leaves plays an important part in the dynamic modeling of natural scenes, and the falling-leaves model has wide applications in the fields of animation and virtual reality. We propose a novel modeling method for fluttering leaves based on point clouds in this paper. According to the shape and weight of the leaves and the wind speed, three basic trajectories of falling leaves are defined, which ar...
The Oblique Basis Method from an Engineering Point of View
International Nuclear Information System (INIS)
Gueorguiev, V G
2012-01-01
The oblique basis method is reviewed from an engineering point of view related to vibration and control theory. Examples are used to demonstrate and relate the oblique basis in nuclear physics to equivalent mathematical problems in vibration theory. The mathematical techniques, such as principal coordinates and root locus, used by vibration and control engineers are shown to be relevant to the Richardson-Gaudin pairing-like problems in nuclear physics.
Towards Automatic Testing of Reference Point Based Interactive Methods
Ojalehto, Vesa; Podkopaev, Dmitry; Miettinen, Kaisa
2016-01-01
In order to understand strengths and weaknesses of optimization algorithms, it is important to have access to different types of test problems, well defined performance indicators and analysis tools. Such tools are widely available for testing evolutionary multiobjective optimization algorithms. To our knowledge, there do not exist tools for analyzing the performance of interactive multiobjective optimization methods based on the reference point approach to communicating ...
Multiperiod hydrothermal economic dispatch by an interior point method
Directory of Open Access Journals (Sweden)
Kimball L. M.
2002-01-01
Full Text Available This paper presents an interior point algorithm to solve the multiperiod hydrothermal economic dispatch (HTED. The multiperiod HTED is a large scale nonlinear programming problem. Various optimization methods have been applied to the multiperiod HTED, but most neglect important network characteristics or require decomposition into thermal and hydro subproblems. The algorithm described here exploits the special bordered block diagonal structure and sparsity of the Newton system for the first order necessary conditions to result in a fast efficient algorithm that can account for all network aspects. Applying this new algorithm challenges a conventional method for the use of available hydro resources known as the peak shaving heuristic.
Improved fixed point iterative method for blade element momentum computations
DEFF Research Database (Denmark)
Sun, Zhenye; Shen, Wen Zhong; Chen, Jin
2017-01-01
The blade element momentum (BEM) theory is widely used in aerodynamic performance calculations and optimization applications for wind turbines. The fixed point iterative method is the most commonly utilized technique to solve the BEM equations. However, this method sometimes does not converge. These convergence problems are addressed through both theoretical analysis and numerical tests. A term of the BEM equations that equals zero at a critical inflow angle is the source of the convergence problems. When the initial inflow angle is set larger than the critical inflow angle and the relaxation methodology is adopted ...
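The relaxation idea can be illustrated on a generic scalar fixed-point problem; the map g below is a stand-in with |g'| > 1 near the root, where plain iteration oscillates, and is not the actual BEM inflow-angle system.

```python
import math

def fixed_point(g, x0, relax=0.5, tol=1e-12, itmax=1000):
    """Solve x = g(x) by under-relaxed fixed-point iteration:
    x_new = (1 - relax) * x + relax * g(x)."""
    x = x0
    for _ in range(itmax):
        x_new = (1 - relax) * x + relax * g(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    raise RuntimeError("no convergence")

g = lambda x: math.cos(3 * x)   # |g'| is about 2.8 at the root, so the
                                # plain iteration (relax = 1) fails
root = fixed_point(g, 0.5, relax=0.3)
```

With relax = 0.3 the derivative of the relaxed map at the root drops to roughly -0.13 in magnitude, restoring contraction; the same damping principle is what stabilizes the BEM iteration near the critical inflow angle.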
Analytic continuation of quantum Monte Carlo data. Stochastic sampling method
Energy Technology Data Exchange (ETDEWEB)
Ghanem, Khaldoon; Koch, Erik [Institute for Advanced Simulation, Forschungszentrum Juelich, 52425 Juelich (Germany)
2016-07-01
We apply Bayesian inference to the analytic continuation of quantum Monte Carlo (QMC) data from the imaginary axis to the real axis. Demanding a proper functional Bayesian formulation of any analytic continuation method leads naturally to the stochastic sampling method (StochS) as the Bayesian method with the simplest prior, while it excludes the maximum entropy method and Tikhonov regularization. We present a new efficient algorithm for performing StochS that reduces computational times by orders of magnitude in comparison to earlier StochS methods. We apply the new algorithm to a wide variety of typical test cases: spectral functions and susceptibilities from DMFT and lattice QMC calculations. Results show that StochS performs well and is able to resolve sharp features in the spectrum.
Botha, R.; Labuschagne, C.; Williams, A. G.; Bosman, G.; Brunke, E.-G.; Rossouw, A.; Lindsay, R.
2018-03-01
This paper describes and discusses fifteen years (1999-2013) of continuous hourly atmospheric radon (222Rn) monitoring at the coastal low-altitude Southern Hemisphere Cape Point Station in South Africa. A strong seasonal cycle is evident in the observed radon concentrations, with maxima during the winter months, when air masses arriving at the Cape Point station from over the African continental surface are more frequently observed, and minima during the summer months, when an oceanic fetch is predominant. An atmospheric mean radon activity concentration of 676 ± 2 mBq/m3 is found over the 15-year record, having a strongly skewed distribution that exhibits a large number of events falling into a compact range of low values (corresponding to oceanic air masses), and a smaller number of events with high radon values spread over a wide range (corresponding to continental air masses). The mean radon concentration from continental air masses (1 004 ± 6 mBq/m3) is about two times higher compared to oceanic air masses (479 ± 3 mBq/m3). The number of atmospheric radon events observed is strongly dependent on the wind direction. A power spectral Fast Fourier Transform analysis of the 15-year radon time series reveals prominent peaks at semi-diurnal, diurnal and annual timescales. Two inter-annual radon periodicities have been established, the diurnal 0.98 ± 0.04 day-1 and half-diurnal 2.07 ± 0.15 day-1. The annual peak reflects major seasonal changes in the patterns of offshore versus onshore flow associated with regional/hemispheric circulation patterns, whereas the diurnal and semi-diurnal peaks together reflect the influence of local nocturnal radon build-up over land, and the interplay between mesoscale sea/land breezes. The winter-time diurnal radon concentration had a significant decrease of about 200 mBq/m3 (17%) while the summer-time diurnal radon concentration revealed nearly no changes. A slow decline in the higher radon percentiles (75th and 95th) for the
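The spectral analysis can be reproduced in miniature on a synthetic series containing diurnal and semi-diurnal components; a plain DFT is used below for brevity where the paper's analysis uses an FFT, and the amplitudes are invented.

```python
import cmath
import math

# Synthetic hourly "radon" series: a mean level plus diurnal
# (1 cycle/day) and semi-diurnal (2 cycles/day) components.
N = 24 * 16            # 16 days of hourly samples
x = [10.0
     + 3.0 * math.cos(2 * math.pi * t / 24)     # diurnal
     + 1.0 * math.cos(2 * math.pi * t / 12)     # semi-diurnal
     for t in range(N)]

def dft_mag(x):
    """Magnitude spectrum up to the Nyquist bin (naive O(N^2) DFT)."""
    n = len(x)
    return [abs(sum(x[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n)))
            for k in range(n // 2)]

mag = dft_mag(x)
# Skip bin 0 (the mean) and take the dominant frequency, converting
# the bin index to cycles/day: freq = k * 24 / N.
peak = max(range(1, len(mag)), key=mag.__getitem__)
freq = peak * 24 / N
```

The dominant peak lands at 1 cycle/day, matching the diurnal periodicity reported for the Cape Point record; the semi-diurnal component appears at bin 2*peak with a third of the amplitude.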
Evaluation of null-point detection methods on simulation data
Olshevsky, Vyacheslav; Fu, Huishan; Vaivads, Andris; Khotyaintsev, Yuri; Lapenta, Giovanni; Markidis, Stefano
2014-05-01
We model the measurements of artificial spacecraft that resemble the configuration of CLUSTER propagating in a particle-in-cell simulation of turbulent magnetic reconnection. The simulation domain contains multiple isolated X-type null-points, but the majority are O-type null-points. Simulations show that current pinches surrounded by twisted fields, analogous to laboratory pinches, are formed along the sequences of O-type nulls. In the simulation, the magnetic reconnection is mainly driven by the kinking of the pinches, at spatial scales of several ion inertial lengths. We compute the locations of magnetic null-points and detect their type. When the satellites are separated by fractions of an ion inertial length, as they are for CLUSTER, they are able to locate both the isolated null-points and the pinches. We apply the method to real CLUSTER data and speculate how common pinches are in the magnetosphere, and whether they play a dominant role in the dissipation of magnetic energy.
2011-11-01
Using a multidisciplinary team approach, the University of California, San Diego, Health System has been able to significantly reduce average door-to-balloon angioplasty times for patients with the most severe form of heart attacks, beating national recommendations by more than a third. The multidisciplinary team meets monthly to review all cases involving patients with ST-segment-elevation myocardial infarctions (STEMI) to see where process improvements can be made. Using this continuous quality improvement (CQI) process, the health system has reduced average door-to-balloon times from 120 minutes to less than 60 minutes, and administrators are now aiming for further progress. Among the improvements instituted by the multidisciplinary team are the implementation of a "greeter" with enough clinical expertise to quickly pick up on potential STEMI heart attacks as soon as patients walk into the ED, and the purchase of an electrocardiogram (EKG) machine so that evaluations can be done in the triage area. ED staff have prepared "STEMI" packets, including items such as special IV tubing and disposable leads, so that patients headed for the catheterization laboratory are prepared to undergo the procedure soon after arrival. All the clocks and devices used in the ED are synchronized so that analysts can later review how long it took to complete each step of the care process. Points of delay can then be targeted for improvement.
Convergence results for a class of abstract continuous descent methods
Directory of Open Access Journals (Sweden)
Sergiu Aizicovici
2004-03-01
Full Text Available We study continuous descent methods for the minimization of Lipschitzian functions defined on a general Banach space. We establish convergence theorems for those methods which are generated by approximate solutions to evolution equations governed by regular vector fields. Since the complement of the set of regular vector fields is σ-porous, we conclude that our results apply to most vector fields in the sense of Baire's categories.
Hybrid kriging methods for interpolating sparse river bathymetry point data
Directory of Open Access Journals (Sweden)
Pedro Velloso Gomes Batista
Full Text Available ABSTRACT Terrain models that represent riverbed topography are used for analyzing geomorphologic changes, calculating water storage capacity, and making hydrologic simulations. These models are generated by interpolating bathymetry points. River bathymetry is usually surveyed through cross-sections, which may lead to a sparse sampling pattern. Hybrid kriging methods, such as regression kriging (RK) and co-kriging (CK), employ the correlation with auxiliary predictors, as well as inter-variable correlation, to improve the predictions of the target variable. In this study, we use the orthogonal distance of an (x, y) point to the river centerline as a covariate for RK and CK. Given that riverbed elevation variability is abrupt transversely to the flow direction, it is expected that the greater the Euclidean distance of a point to the thalweg, the greater the bed elevation will be. The aim of this study was to evaluate whether the use of the proposed covariate improves the spatial prediction of riverbed topography. To assess this premise, we perform an external validation. Transversal cross-sections are used to make the spatial predictions, and the point data surveyed between sections are used for testing. We compare the results from CK and RK to the ones obtained from ordinary kriging (OK). The validation indicates that RK yields the lowest RMSE among the interpolators. RK predictions represent the thalweg between cross-sections, whereas the other methods under-predict the river thalweg depth. Therefore, we conclude that RK provides a simple approach for enhancing the quality of the spatial prediction from sparse bathymetry data.
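As a rough illustration of the regression-kriging idea (not the authors' implementation; the exponential covariance model, its parameters, and the synthetic data below are all assumptions), one can fit a linear trend on the distance-to-centerline covariate and apply simple kriging to the residuals:

```python
import numpy as np

def regression_kriging(coords, d_centerline, z, coords_new, d_new,
                       sill=1.0, rng=50.0, nugget=1e-6):
    """Regression-kriging sketch: linear trend on a covariate (distance to the
    river centerline) plus simple kriging of the trend residuals."""
    # 1. Fit the trend z ~ a + b * distance by ordinary least squares.
    X = np.column_stack([np.ones_like(d_centerline), d_centerline])
    beta, *_ = np.linalg.lstsq(X, z, rcond=None)
    resid = z - X @ beta

    # 2. Simple kriging of residuals with an exponential covariance model.
    def cov(h):
        return sill * np.exp(-h / rng)

    H = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    C = cov(H) + nugget * np.eye(len(z))
    w = np.linalg.solve(C, resid)            # data weights for the residual field

    h_new = np.linalg.norm(coords_new[:, None, :] - coords[None, :, :], axis=-1)
    resid_hat = cov(h_new) @ w

    # 3. Prediction = trend at the new points + kriged residual.
    Xn = np.column_stack([np.ones_like(d_new), d_new])
    return Xn @ beta + resid_hat

# Tiny synthetic check: with a near-zero nugget, RK honors the data points.
coords = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0], [5.0, 5.0]])
dist = np.array([0.0, 1.0, 2.0, 3.0, 1.5])
z = 2.0 + 0.5 * dist + np.array([0.10, -0.20, 0.05, 0.00, 0.15])
z_hat = regression_kriging(coords, dist, z, coords, dist)
```

Exact interpolation at the sampled cross-section points is a basic sanity check; the practical value of the covariate shows up between sections, as the external validation in the abstract describes.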
Methods for solving the stochastic point reactor kinetic equations
International Nuclear Information System (INIS)
Quabili, E.R.; Karasulu, M.
1979-01-01
Two new methods are presented for analyzing the statistical properties of the nonlinear response of a point reactor to stochastic non-white reactivity inputs: Bourret's approximation and logarithmic linearization. The results have been compared with the exact results previously obtained in the case of Gaussian white reactivity input. It was found that when the reactivity noise has a short correlation time, Bourret's approximation should be recommended because it yields results superior to those yielded by logarithmic linearization. When the correlation time is long, Bourret's approximation is not valid, but in that case, if one can assume the reactivity noise to be Gaussian, one may use the logarithmic linearization. (author)
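The system being analyzed can be sketched directly (a hedged illustration only, not Bourret's approximation or logarithmic linearization): one-delayed-group point kinetics driven by exponentially correlated, i.e. non-white, reactivity noise, integrated by Euler-Maruyama. All parameter values are illustrative assumptions.

```python
import numpy as np

def point_kinetics_ou(T=10.0, dt=1e-4, beta=0.0065, Lam=1e-4,
                      lam=0.08, tau_c=0.1, sigma=1e-3, seed=0):
    """One-delayed-group point kinetics driven by Ornstein-Uhlenbeck
    (exponentially correlated, non-white) reactivity noise."""
    rng = np.random.default_rng(seed)
    steps = int(T / dt)
    n, c, rho = 1.0, beta / (Lam * lam), 0.0   # start at equilibrium
    out = np.empty(steps)
    for k in range(steps):
        # OU reactivity: correlation time tau_c, stationary std sigma
        rho += (-rho / tau_c) * dt + sigma * np.sqrt(2 * dt / tau_c) * rng.standard_normal()
        dn = ((rho - beta) / Lam * n + lam * c) * dt   # neutron density
        dc = (beta / Lam * n - lam * c) * dt           # precursor density
        n, c = n + dn, c + dc
        out[k] = n
    return out

n_hist = point_kinetics_ou()
```

Ensemble statistics of such sample paths are what the approximate analytical methods in the abstract aim to predict without brute-force simulation.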
A GPU code for analytic continuation through a sampling method
Directory of Open Access Journals (Sweden)
Johan Nordström
2016-01-01
Full Text Available We here present a code for performing analytic continuation of fermionic Green’s functions and self-energies as well as bosonic susceptibilities on a graphics processing unit (GPU). The code is based on the sampling method introduced by Mishchenko et al. (2000), and is written for the widely used CUDA platform from NVIDIA. Detailed scaling tests are presented, for two different GPUs, in order to highlight the advantages of this code with respect to standard CPU computations. Finally, as an example of possible applications, we provide the analytic continuation of model Gaussian functions, as well as more realistic test cases from many-body physics.
a Modeling Method of Fluttering Leaves Based on Point Cloud
Tang, J.; Wang, Y.; Zhao, Y.; Hao, W.; Ning, X.; Lv, K.; Shi, Z.; Zhao, M.
2017-09-01
Leaves falling gently or fluttering are a common phenomenon in nature scenes. The authenticity of falling leaves plays an important part in the dynamic modeling of natural scenes, and the falling-leaves model has wide applications in the fields of animation and virtual reality. We propose a novel modeling method for fluttering leaves based on point clouds in this paper. According to the shape and weight of the leaves and the wind speed, three basic trajectories of falling leaves are defined: rotation falling, roll falling, and screw roll falling. At the same time, a parallel algorithm based on OpenMP is implemented to satisfy real-time requirements in practical applications. Experimental results demonstrate that the proposed method is amenable to the incorporation of a variety of desirable effects.
A Robust Shape Reconstruction Method for Facial Feature Point Detection
Directory of Open Access Journals (Sweden)
Shuqiu Tan
2017-01-01
Full Text Available Facial feature point detection has seen great research advances in recent years. Numerous methods have been developed and applied in practical face analysis systems. However, it is still a quite challenging task because of the large variability in expressions and gestures and the existence of occlusions in real-world photo shoots. In this paper, we present a robust sparse reconstruction method for the face alignment problem. Instead of a direct regression between the feature space and the shape space, the concept of shape increment reconstruction is introduced. Moreover, a set of coupled overcomplete dictionaries, termed the shape increment dictionary and the local appearance dictionary, are learned in a regressive manner to select robust features and fit shape increments. Additionally, to make the learned model more generalized, we select the best matched parameter set through extensive validation tests. Experimental results on three public datasets demonstrate that the proposed method achieves better robustness than the state-of-the-art methods.
[Absorption spectrum of Quasi-continuous laser modulation demodulation method].
Shao, Xin; Liu, Fu-Gui; Du, Zhen-Hui; Wang, Wei
2014-05-01
A software phase-locked amplifier demodulation method is proposed to properly demodulate the second harmonic (2f) signal of quasi-continuous laser wavelength modulation spectroscopy (WMS), based on an analysis of its signal characteristics. Through validity checking of the measurement data, filtering, phase-sensitive detection, digital filtering, and other processing, the method achieves sensitive detection of the quasi-continuous signal. The method was verified in carbon dioxide detection experiments. The WMS-2f signals obtained by the software phase-locked amplifier and by a high-performance lock-in amplifier (SR844) were compared simultaneously. The results show that the Allan variance of the WMS-2f signal demodulated by the software phase-locked amplifier is one order of magnitude smaller than that demodulated by the SR844, corresponding to a detection limit two orders of magnitude lower. The method is also able to solve the unlocking problem caused by the small duty cycle of the quasi-continuous modulation signal, with little signal waveform distortion.
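The core of a software lock-in amplifier can be sketched in a few lines (a generic digital lock-in, not the authors' implementation; signal parameters and noise level are assumptions): mix the record with quadrature references at twice the modulation frequency and low-pass filter, here by a plain mean over an integer number of periods.

```python
import numpy as np

def lockin_2f(signal, f_mod, fs):
    """Software lock-in: recover the second-harmonic (2f) amplitude of a
    wavelength-modulation signal by mixing with quadrature references at
    2*f_mod and low-pass filtering (here: a mean over the whole record)."""
    t = np.arange(len(signal)) / fs
    ref_x = np.cos(2 * np.pi * 2 * f_mod * t)
    ref_y = np.sin(2 * np.pi * 2 * f_mod * t)
    x = 2.0 * np.mean(signal * ref_x)   # in-phase component
    y = 2.0 * np.mean(signal * ref_y)   # quadrature component
    return np.hypot(x, y)               # 2f magnitude, phase-insensitive

# Synthetic test: strong 1f carrier plus a weak 2f harmonic in noise.
fs, f_mod = 100_000.0, 1_000.0
t = np.arange(50_000) / fs
sig = (1.0 * np.sin(2 * np.pi * f_mod * t)
       + 0.05 * np.cos(2 * np.pi * 2 * f_mod * t)
       + 0.2 * np.random.default_rng(1).standard_normal(t.size))
amp_2f = lockin_2f(sig, f_mod, fs)
```

Because the 1f carrier and the noise are orthogonal to the 2f references over an integer number of periods, the recovered magnitude is close to the true 2f amplitude of 0.05.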
Energy Technology Data Exchange (ETDEWEB)
Schenk, A.; Germond, A. [Ecole Polytechnique Federale de Lausanne, Lausanne (Switzerland); Boss, P.; Lorin, P. [ABB Secheron SA, Geneve (Switzerland)
2000-07-01
The article describes a new method for the continuous surveillance of power transformers based on the application of artificial intelligence (AI) techniques. An experimental pilot project on a specially equipped, strategically important power transformer is described. Traditional surveillance methods and the use of mathematical models for the prediction of faults are described. The article describes the monitoring equipment used in the pilot project and the AI principles such as self-organising maps that are applied. The results obtained from the pilot project and methods for their graphical representation are discussed.
The Multiscale Material Point Method for Simulating Transient Responses
Chen, Zhen; Su, Yu-Chen; Zhang, Hetao; Jiang, Shan; Sewell, Thomas
2015-06-01
To effectively simulate multiscale transient responses such as impact and penetration without invoking master/slave treatment, the multiscale material point method (Multi-MPM) is being developed in which molecular dynamics at nanoscale and dissipative particle dynamics at mesoscale might be concurrently handled within the framework of the original MPM at microscale (continuum level). The proposed numerical scheme for concurrently linking different scales is described in this paper with simple examples for demonstration. It is shown from the preliminary study that the mapping and re-mapping procedure used in the original MPM could coarse-grain the information at fine scale and that the proposed interfacial scheme could provide a smooth link between different scales. Since the original MPM is an extension from computational fluid dynamics to solid dynamics, the proposed Multi-MPM might also become robust for dealing with multiphase interactions involving failure evolution. This work is supported in part by DTRA and NSFC.
Material-Point-Method Analysis of Collapsing Slopes
DEFF Research Database (Denmark)
Andersen, Søren; Andersen, Lars
2009-01-01
To understand the dynamic evolution of landslides and predict their physical extent, a computational model is required that is capable of analysing complex material behaviour as well as large strains and deformations. Here, a model is presented based on the so-called generalised-interpolation material point method. In the model, a deformed material description is introduced, based on time integration of the deformation gradient and utilising Gauss quadrature over the volume associated with each material point. The method has been implemented in a Fortran code and employed for the analysis of a landslide that took place during...
Motion estimation using point cluster method and Kalman filter.
Senesh, M; Wolf, A
2009-05-01
The most frequently used method in three-dimensional human gait analysis involves placing markers on the skin of the analyzed segment. This introduces a significant artifact, which strongly influences the bone position and orientation and joint kinematic estimates. In this study, we tested and evaluated the effect of adding a Kalman filter procedure to the previously reported point cluster technique (PCT) in the estimation of a rigid body motion. We demonstrated the procedures by motion analysis of a compound planar pendulum from indirect opto-electronic measurements of markers attached to an elastic appendage that is restrained to slide along the rigid body long axis. The elastic frequency is close to the pendulum frequency, as in the biomechanical problem, where the soft tissue frequency content is similar to the actual movement of the bones. Comparison of the real pendulum angle to that obtained by several estimation procedures--PCT, Kalman filter followed by PCT, and low pass filter followed by PCT--enables evaluation of the accuracy of the procedures. When comparing the maximal amplitude, no effect was noted by adding the Kalman filter; however, a closer look at the signal revealed that the estimated angle based only on the PCT method was very noisy with fluctuations, while the estimated angle based on the Kalman filter followed by the PCT was a smooth signal. It was also noted that the instantaneous frequencies obtained from the estimated angle based on the PCT method are more dispersed than those obtained from the estimated angle based on the Kalman filter followed by the PCT method. Addition of a Kalman filter to the PCT method in the estimation procedure of rigid body motion results in a smoother signal that better represents the real motion, with less signal distortion than when using a digital low pass filter. Furthermore, it can be concluded that adding a Kalman filter to the PCT procedure substantially reduces the dispersion of the maximal and minimal
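The smoothing role the Kalman filter plays ahead of the PCT step can be illustrated with a generic constant-velocity filter on a noisy angle signal (a stand-in sketch only; the state model, noise covariances, and test signal are assumptions, not the authors' setup):

```python
import numpy as np

def kalman_smooth(z, dt, q=0.1, r=1e-2):
    """Constant-velocity Kalman filter over a scalar measurement series z.
    State: [angle, angular velocity]; returns the filtered angle estimates."""
    F = np.array([[1.0, dt], [0.0, 1.0]])        # state transition
    H = np.array([[1.0, 0.0]])                   # we observe the angle only
    Q = q * np.array([[dt**3 / 3, dt**2 / 2],
                      [dt**2 / 2, dt]])          # process noise (white accel.)
    R = np.array([[r]])                          # measurement noise variance
    x = np.array([[z[0]], [0.0]])
    P = np.eye(2)
    out = np.empty(len(z))
    for k, zk in enumerate(z):
        x = F @ x                                # predict
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + R                      # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)           # Kalman gain
        x = x + K @ (np.array([[zk]]) - H @ x)   # update with measurement
        P = (np.eye(2) - K @ H) @ P
        out[k] = x[0, 0]
    return out

# Noisy oscillatory "angle" signal, loosely analogous to the pendulum data.
rng = np.random.default_rng(0)
t = np.arange(0.0, 10.0, 0.01)
truth = np.sin(t)
noisy = truth + 0.1 * rng.standard_normal(t.size)
est = kalman_smooth(noisy, dt=0.01)
```

After the initial transient, the filtered angle tracks the true signal with markedly lower error than the raw measurements, which is the qualitative behavior the abstract reports for Kalman-plus-PCT.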
Starting Point: Linking Methods and Materials for Introductory Geoscience Courses
Manduca, C. A.; MacDonald, R. H.; Merritts, D.; Savina, M.
2004-12-01
Introductory courses are one of the most challenging teaching environments for geoscience faculty. Courses are often large, students have a wide variety of backgrounds and skills, and student motivations can include completing a geoscience major, preparing for a career as a teacher, fulfilling a distribution requirement, and general interest. The Starting Point site (http://serc.carleton.edu/introgeo/index.html) provides help for faculty teaching introductory courses by linking together examples of different teaching methods that have been used in entry-level courses with information about how to use the methods and relevant references from the geoscience and education literature. Examples span the content of geoscience courses including the atmosphere, biosphere, climate, Earth surface, energy/material cycles, human dimensions/resources, hydrosphere/cryosphere, ocean, solar system, solid earth and geologic time/earth history. Methods include interactive lecture (e.g., think-pair-share, ConcepTests, and in-class activities and problems), investigative cases, peer review, role playing, Socratic questioning, games, and field labs. A special section of the site devoted to using an Earth System approach provides resources with content information about the various aspects of the Earth system linked to examples of teaching this content. Examples of courses incorporating Earth systems content, and strategies for designing an Earth system course are also included. A similar section on Teaching with an Earth History approach explores geologic history as a vehicle for teaching geoscience concepts and as a framework for course design. The Starting Point site has been authored and reviewed by faculty around the country. Evaluation indicates that faculty find the examples particularly helpful both for direct implementation in their classes and for sparking ideas. The help provided for using different teaching methods makes the examples particularly useful. Examples are chosen from
Rozemeijer, J.; Jansen, S.; de Jonge, H.; Lindblad Vendelboe, A.
2014-12-01
Considering their crucial role in water and solute transport, enhanced monitoring and modeling of agricultural subsurface tube drain systems is important for adequate water quality management. For example, previous work in lowland agricultural catchments has shown that subsurface tube drain effluent contributed up to 80% of the annual discharge and 90-92% of the annual NO3 loads from agricultural fields towards the surface water. However, existing monitoring techniques for flow and contaminant loads from tube drains are expensive and labor-intensive. Therefore, despite the unambiguous relevance of this transport route, tube drain monitoring data are scarce. The presented study aimed at developing a cheap, simple, and robust method to monitor loads from tube drains. We are now ready to introduce the Flowcap, which can be attached to the outlet of tube drains and is capable of registering total flow, contaminant loads, and flow-averaged concentrations. The Flowcap builds on the existing SorbiCells, a modern passive sampling technique that measures average concentrations over longer periods of time (days to months) for various substances. By mounting SorbiCells in our Flowcap, a flow-proportional part of the drain effluent is sampled from the main stream. Laboratory testing yielded good linear relations (R-squared of 0.98) between drainage flow rates and sampling rates. The Flowcap was tested in practice for measuring NO3 loads from two agricultural fields and one glasshouse in the Netherlands. The Flowcap registers contaminant loads from tube drains without any need for housing, electricity, or maintenance. This enables large-scale monitoring of non-point contaminant loads via tube drains, which would facilitate the improvement of contaminant transport models and would yield valuable information for the selection and evaluation of mitigation options to improve water quality.
Apparatus and method for continuous production of materials
Chang, Chih-hung; Jin, Hyungdae
2014-08-12
Embodiments of a continuous-flow injection reactor and a method for continuous material synthesis are disclosed. The reactor includes a mixing zone unit and a residence time unit removably coupled to the mixing zone unit. The mixing zone unit includes at least one top inlet, a side inlet, and a bottom outlet. An injection tube, or plurality of injection tubes, is inserted through the top inlet and extends past the side inlet while terminating above the bottom outlet. A first reactant solution flows in through the side inlet, and a second reactant solution flows in through the injection tube(s). With reference to nanoparticle synthesis, the reactant solutions combine in a mixing zone and form nucleated nanoparticles. The nucleated nanoparticles flow through the residence time unit. The residence time unit may be a single conduit, or it may include an outer housing and a plurality of inner tubes within the outer housing.
Continued SOFC cell and stack technology and improved production methods
Energy Technology Data Exchange (ETDEWEB)
Wandel, M.; Brodersen, K.; Phair, J. (and others)
2009-05-15
Within this project significant results are obtained in a number of very diverse areas, ranging from development of cell production and metallic creep in interconnects to assembling and testing of stacks with a foot print larger than 500 cm². Out of 38 milestones, 28 have been fulfilled and 10 have been partly fulfilled. This project has focused on three main areas: 1) Continued cell development and optimization of manufacturing processes, aiming at production of large-foot-print cells, improving cell performance, and developing environmentally more benign production methods. 2) Stack technology - especially stacks with large foot print and improving the stack design with respect to flow geometry and gas leakages. 3) Development of stack components with emphasis on sealing (for 2G as well as 3G), interconnect (coat, architecture and creep) and test development. Production of cells with a foot print larger than 500 cm² is very difficult due to the brittleness of the cells, and great effort has been put into this topic. Eight cells were successfully produced, making it possible to assemble and test a real stack, thereby giving valuable results on the prospects of stacks with large foot print. However, the yield rate is very low and a significant development to increase this yield lies ahead. Several lessons were learned on the stack level regarding 'large foot print' stacks. Modelling studies showed that the width of the cell primarily is limited by production and handling of the cell, whereas the length (in the flow direction) is limited by e.g. pressure drop and necessary manifolding. The optimal cell size in the flow direction was calculated to be between approximately 20 cm and 30 cm. From an economical point of view the production yield is crucial, and stacks with large-foot-print cell area are only feasible if the cell production yield is significantly enhanced. Co-casting has been pursued as a production technique due to the possibilities in large scale production
Continuous energy Monte Carlo method based lattice homogenization
International Nuclear Information System (INIS)
Li Mancang; Yao Dong; Wang Kan
2014-01-01
Based on the Monte Carlo code MCNP, the continuous-energy Monte Carlo multi-group constants generation code MCMC has been developed. The track length scheme has been used as the foundation of cross section generation. The scattering matrix and Legendre components require special techniques, and the scattering event method has been proposed to solve this problem. Three methods have been developed to calculate the diffusion coefficients for diffusion reactor core codes, and the Legendre method has been applied in MCMC. To satisfy equivalence theory, the general equivalence theory (GET) and the superhomogenization method (SPH) have been applied to the Monte Carlo based group constants. The super equivalence method (SPE) has been proposed to improve the equivalence. GET, SPH and SPE have been implemented in MCMC. The numerical results showed that generating the homogenized multi-group constants via the Monte Carlo method overcomes the difficulties in geometry and treats energy as a continuum, thus providing more accurate parameters. Besides, the same code and data library can be used for a wide range of applications due to this versatility. The MCMC scheme can be seen as a potential alternative to the widely used deterministic lattice codes. (authors)
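At its core, the homogenization step reduces to flux-volume weighting of region cross sections, Σ_hom = Σ_i Σ_i φ_i V_i / Σ_i φ_i V_i. A hedged numpy sketch with purely illustrative numbers (not MCMC output):

```python
import numpy as np

def homogenize(sigma, flux, volume):
    """Flux-volume-weighted homogenized cross section over heterogeneous
    regions: Sigma_hom = sum_i sigma_i * phi_i * V_i / sum_i phi_i * V_i."""
    w = flux * volume
    return np.sum(sigma * w) / np.sum(w)

# Two-region example: fuel and moderator with different fluxes.
sigma = np.array([0.30, 0.05])    # macroscopic cross sections [1/cm]
flux = np.array([1.0, 1.5])       # region-averaged scalar flux
vol = np.array([2.0, 6.0])        # region volumes [cm^3]
s_hom = homogenize(sigma, flux, vol)
```

The equivalence corrections the abstract mentions (GET, SPH, SPE) then adjust such flux-weighted constants so that the homogenized model reproduces the heterogeneous reaction rates.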
Statistical methods for assessing agreement between continuous measurements
DEFF Research Database (Denmark)
Sokolowski, Ineta; Hansen, Rikke Pilegaard; Vedsted, Peter
Background: Clinical research often involves study of agreement amongst observers. Agreement can be measured in different ways, and one can obtain quite different values depending on which method one uses. Objective: We review the approaches that have been discussed to assess the agreement between continuous measures and discuss their strengths and weaknesses. Different methods are illustrated using actual data from the 'Delay in diagnosis of cancer in general practice' project in Aarhus, Denmark. Subjects and Methods: We use the weighted kappa statistic, intraclass correlation coefficient (ICC), concordance coefficient, Bland-Altman limits of agreement, and percentage of agreement to assess the agreement between patient-reported delay and doctor-reported delay in diagnosis of cancer in general practice. Key messages: The correct statistical approach is not obvious. Many studies give the product
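Of the approaches listed, the Bland-Altman limits of agreement are the simplest to compute: the mean of the paired differences plus or minus 1.96 standard deviations. A minimal sketch (the delay values below are invented illustration, not the Aarhus data):

```python
import numpy as np

def bland_altman_limits(a, b):
    """Bland-Altman analysis for two paired continuous measurements:
    returns (mean difference, lower and upper 95% limits of agreement)."""
    diff = np.asarray(a, float) - np.asarray(b, float)
    bias = diff.mean()
    sd = diff.std(ddof=1)           # sample standard deviation of differences
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Example: patient-reported vs doctor-reported delay in days (illustrative).
patient = np.array([10, 14, 21, 7, 30, 12, 18])
doctor = np.array([12, 13, 25, 6, 28, 15, 20])
bias, lo, hi = bland_altman_limits(patient, doctor)
```

If roughly 95% of the paired differences fall inside (lo, hi) and that interval is clinically acceptable, the two raters can be said to agree.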
International Nuclear Information System (INIS)
Yamaoka, Naoto; Watanabe, Wataru; Hontani, Hidekata
2010-01-01
When constructing a statistical point cloud model, one usually needs to compute corresponding points, and the resulting statistical model differs depending on the method used to compute them. This article examines how the choice of method for computing corresponding points affects statistical models of human organs. We validated the performance of the statistical models by registering an organ surface in a 3D medical image. We compare two methods for computing corresponding points. The first, 'Generalized Multi-Dimensional Scaling (GMDS)', determines the corresponding points from the shapes of two curved surfaces. The second, the 'entropy-based particle system', chooses corresponding points by statistically evaluating a number of curved surfaces. Using these methods we construct the statistical models, and with these models we perform registration against the medical image. For the estimation we use non-parametric belief propagation, which estimates not only the position of the organ but also the probability density of the organ position. We evaluate how the two methods for computing corresponding points affect the statistical model through changes in the probability density at each point. (author)
Measuring the exhaust gas dew point of continuously operated combustion plants
Energy Technology Data Exchange (ETDEWEB)
Fehler, D.
1985-07-16
Low waste-gas temperatures are one means of minimizing the energy consumption of combustion facilities. However, condensation in the waste gas must be prevented, since it could destroy plant components. Measuring the waste-gas dew point makes it possible to control combustion parameters in such a way that the plant can operate at low temperatures without danger of condensation. Dew point sensors thus provide an important signal for optimizing combustion facilities.
Evaluating Point of Sale Tobacco Marketing Using Behavioral Laboratory Methods
Robinson, Jason D.; Drobes, David J.; Brandon, Thomas H.; Wetter, David W.; Cinciripini, Paul M.
2018-01-01
With passage of the 2009 Family Smoking Prevention and Tobacco Control Act, the FDA has authority to regulate tobacco advertising. As bans on traditional advertising venues and promotion of tobacco products have grown, a greater emphasis has been placed on brand exposure and price promotion in displays of products at the point-of-sale (POS). POS marketing seeks to influence attitudes and behavior towards tobacco products using a variety of explicit and implicit messaging approaches. Behavioral laboratory methods have the potential to provide the FDA with a strong scientific base for regulatory actions and a model for testing future manipulations of POS advertisements. We review aspects of POS marketing that potentially influence smoking behavior, including branding, price promotions, health claims, the marketing of emerging tobacco products, and tobacco counter-advertising. We conceptualize how POS marketing potentially influences individual attention, memory, implicit attitudes, and smoking behavior. Finally, we describe specific behavioral laboratory methods that can be adapted to measure the impact of POS marketing on these domains.
Method for continuous synthesis of metal oxide powders
Berry, David A.; Haynes, Daniel J.; Shekhawat, Dushyant; Smith, Mark W.
2015-09-08
A method for the rapid and continuous production of crystalline mixed-metal oxides from a precursor solution comprised of a polymerizing agent, chelated metal ions, and a solvent. The method discharges solution droplets of less than 500 µm diameter using an atomizing or spray-type process into a reactor having multiple temperature zones. Rapid evaporation occurs in a first zone, followed by mixed-metal organic foam formation in a second zone, followed by amorphous and partially crystalline oxide precursor formation in a third zone, followed by formation of the substantially crystalline mixed-metal oxide in a fourth zone. The method operates in a continuous rather than batch manner, and the use of small droplets as the starting material for the temperature-based process allows relatively high temperature processing. In a particular embodiment, the first zone operates at 100-300 °C, the second zone operates at 300-700 °C, the third zone operates at 700-1000 °C, and the fourth zone operates at at least 700 °C. The resulting crystalline mixed-metal oxides display a high degree of crystallinity and sphericity, with typical diameters on the order of 50 µm or less.
Numerical Continuation Methods for Intrusive Uncertainty Quantification Studies
Energy Technology Data Exchange (ETDEWEB)
Safta, Cosmin [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Najm, Habib N. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Phipps, Eric Todd [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
2014-09-01
Rigorous modeling of engineering systems relies on efficient propagation of uncertainty from input parameters to model outputs. In recent years, there has been substantial development of probabilistic polynomial chaos (PC) Uncertainty Quantification (UQ) methods, enabling studies in expensive computational models. One approach, termed "intrusive", involving reformulation of the governing equations, has been found to have superior computational performance compared to non-intrusive sampling-based methods in relevant large-scale problems, particularly in the context of emerging architectures. However, the utility of intrusive methods has been severely limited due to detrimental numerical instabilities associated with strong nonlinear physics. Previous methods for stabilizing these constructions tend to add unacceptably high computational costs, particularly in problems with many uncertain parameters. In order to address these challenges, we propose to adapt and improve numerical continuation methods for the robust time integration of intrusive PC system dynamics. We propose adaptive methods, starting with a small uncertainty for which the model has stable behavior and gradually moving to larger uncertainty where the instabilities are rampant, in a manner that provides a suitable solution.
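The continuation strategy described, starting at a parameter value where the solve is easy and marching toward the difficult regime while warm-starting each solve from the previous solution, can be sketched on a scalar model equation (the cubic, tolerances, and step sizes are assumptions for illustration, not the authors' PC system):

```python
import numpy as np

def newton(f, df, x0, tol=1e-12, maxit=50):
    """Plain Newton iteration for a scalar equation f(x) = 0."""
    x = x0
    for _ in range(maxit):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            return x
    raise RuntimeError("Newton failed to converge")

def continuation(lams, x_start):
    """Natural-parameter continuation for f(x; lam) = x**3 - x - lam = 0:
    march lam in small increments, using the previous root as the Newton
    initial guess so every solve starts close to the solution branch."""
    xs, x = [], x_start
    for lam in lams:
        x = newton(lambda t: t**3 - t - lam, lambda t: 3 * t**2 - 1, x)
        xs.append(x)
    return np.array(xs)

# Follow the upper solution branch from lam = 0 (root x = 1) to lam = 2.
branch = continuation(np.linspace(0.0, 2.0, 101), 1.0)
```

A cold-start Newton solve at the final parameter value may diverge or land on the wrong branch; the incremental march keeps every guess inside the basin of attraction, which is exactly the role gradual uncertainty ramping plays for the intrusive PC equations.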
Recommender engine for continuous-time quantum Monte Carlo methods
Huang, Li; Yang, Yi-feng; Wang, Lei
2017-03-01
Recommender systems play an essential role in the modern business world. They recommend favorable items such as books, movies, and search queries to users based on their past preferences. Applying similar ideas and techniques to Monte Carlo simulations of physical systems boosts their efficiency without sacrificing accuracy. Exploiting the quantum to classical mapping inherent in the continuous-time quantum Monte Carlo methods, we construct a classical molecular gas model to reproduce the quantum distributions. We then utilize powerful molecular simulation techniques to propose efficient quantum Monte Carlo updates. The recommender engine approach provides a general way to speed up the quantum impurity solvers.
Directory of Open Access Journals (Sweden)
Kênia Lara Silva
2012-09-01
Full Text Available This is a qualitative study that aims to analyze the Primary Health Care Strategic Planning in a continuing education process, as well as the professionals' preparation to work as its facilitators. Data were obtained through interviews with 11 nurses who had acted as the plan's facilitators in a municipality within Belo Horizonte. The results indicate that the experience as facilitators allowed them to reflect on the work process, and that this practice contributed to the incorporation of new tools into the primary health care system. The participants reported the difficulties faced when conducting the experience and the gaps in the professionals' preparation to act in the PHC and to put processes of continuing education into practice on a day-to-day basis. In conclusion, the Planning represents an important continuing education strategy, and it is significant for transforming processes and practices in the primary health care service.
A New Iterative Method for Equilibrium Problems and Fixed Point Problems
Directory of Open Access Journals (Sweden)
Abdul Latif
2013-01-01
Full Text Available Introducing a new iterative method, we study the existence of a common element of the set of solutions of equilibrium problems for a family of monotone, Lipschitz-type continuous mappings and the sets of fixed points of two nonexpansive semigroups in a real Hilbert space. We establish strong convergence theorems of the new iterative method for the solution of the variational inequality problem which is the optimality condition for the minimization problem. Our results improve and generalize the corresponding recent results of Anh (2012), Cianciaruso et al. (2010), and many others.
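A classical relative of such iterative schemes for nonexpansive mappings is the Krasnoselskii-Mann averaged iteration, sketched below as a hedged illustration (a textbook method, not the authors' new algorithm; the rotation example is an assumption chosen for clarity):

```python
import numpy as np

def mann_iteration(T, x0, alpha=0.5, steps=200):
    """Krasnoselskii-Mann iteration x_{n+1} = (1 - alpha) x_n + alpha T(x_n),
    which converges to a fixed point of a nonexpansive T for 0 < alpha < 1."""
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        x = (1 - alpha) * x + alpha * T(x)
    return x

# Nonexpansive example: rotation by 90 degrees, whose only fixed point is 0.
# Plain Picard iteration x -> T(x) circles forever; the averaged iteration
# contracts toward the fixed point.
R = np.array([[0.0, -1.0], [1.0, 0.0]])
x_fix = mann_iteration(lambda v: R @ v, [1.0, 1.0])
```

The averaging is what buys convergence: the map (1-α)I + αR has spectral radius below one even though R itself is merely norm-preserving.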
Phase-integral method allowing nearlying transition points
Fröman, Nanny
1996-01-01
The efficiency of the phase-integral method developed by the present authors has been shown both analytically and numerically in many publications. With the inclusion of supplementary quantities, closely related to new Stokes constants and obtained with the aid of comparison equation technique, important classes of problems in which transition points may approach each other become accessible to accurate analytical treatment. The exposition in this monograph is of a mathematical nature but has important physical applications, some examples of which are found in the adjoined papers. Thus, we would like to emphasize that, although we aim at mathematical rigor, our treatment is made primarily with physical needs in mind. To introduce the reader into the background of this book, we start by describing the phase-integral approximation of arbitrary order generated from an unspecified base function. This is done in Chapter 1, which is reprinted, after minor changes, from a review article. Chapter 2 is the re...
Comparison of dew point temperature estimation methods in Southwestern Georgia
Marcus D. Williams; Scott L. Goodrick; Andrew Grundstein; Marshall Shepherd
2015-01-01
Recent upward trends in acres irrigated have been linked to increasing near-surface moisture. Unfortunately, stations with dew point data for monitoring near-surface moisture are sparse. Thus, models that estimate dew points from more readily observed data sources are useful. Daily average dew point temperatures were estimated and evaluated at 14 stations in...
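One widely used way to estimate dew point from the more readily observed air temperature and relative humidity is the Magnus approximation. The sketch below uses the Alduchov-Eskridge coefficients; it is a generic formula, not necessarily one of the models evaluated in the study:

```python
import math

def dewpoint_magnus(temp_c: float, rh_pct: float) -> float:
    """Estimate dew point (deg C) from air temperature (deg C) and relative
    humidity (%) using the Magnus approximation."""
    a, b = 17.625, 243.04  # Magnus coefficients (Alduchov & Eskridge, 1996)
    gamma = math.log(rh_pct / 100.0) + a * temp_c / (b + temp_c)
    return b * gamma / (a - gamma)

# At 100% RH the dew point equals the air temperature by construction.
td = dewpoint_magnus(30.0, 60.0)   # roughly 21.4 deg C
```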
Anderson, J M; Reimer Kirkham, S; Browne, A J; Lynam, M J
2007-09-01
Postcolonial feminist theories provide the analytic tools to address issues of structural inequities in groups that historically have been socially and economically disadvantaged. In this paper we question what value might be added to postcolonial feminist theories on culture by drawing on Bourdieu. Are there points of connection? Like postcolonial feminists, he puts forward a position that aims to unmask oppressive structures. We argue that, while there are points of connection, there are also epistemologic and methodologic differences between postcolonial feminist perspectives and Bourdieu's work. Nonetheless, engagement with different theoretical perspectives carries the promise of new insights - new ways of 'seeing' and 'understanding' that might enhance a praxis-oriented theoretical perspective in healthcare delivery.
Energy Technology Data Exchange (ETDEWEB)
Barlow, Nathaniel S., E-mail: nsbsma@rit.edu [School of Mathematical Sciences, Rochester Institute of Technology, Rochester, New York 14623 (United States); Schultz, Andrew J., E-mail: ajs42@buffalo.edu; Kofke, David A., E-mail: kofke@buffalo.edu [Department of Chemical and Biological Engineering, University at Buffalo, State University of New York, Buffalo, New York 14260 (United States); Weinstein, Steven J., E-mail: sjweme@rit.edu [Department of Chemical Engineering, Rochester Institute of Technology, Rochester, New York 14623 (United States)
2015-08-21
The mathematical structure imposed by the thermodynamic critical point motivates an approximant that synthesizes two theoretically sound equations of state: the parametric and the virial. The former is constructed to describe the critical region, incorporating all scaling laws; the latter is an expansion about zero density, developed from molecular considerations. The approximant is shown to yield an equation of state capable of accurately describing properties over a large portion of the thermodynamic parameter space, far greater than that covered by each treatment alone.
EVALUATION OF CONTINUOUS THERMODILUTION METHOD FOR CARDIAC OUTPUT MEASUREMENT
Directory of Open Access Journals (Sweden)
Roman Parežnik
2001-12-01
Full Text Available Background. Continuous monitoring of haemodynamic variables is often necessary for detection of rapid changes in critically ill patients. In our patients, the recently introduced continuous thermodilution technique (CTD) for cardiac output measurement was compared to the bolus thermodilution technique (BTD), which is the »gold standard« method for cardiac output (CO) measurement in intensive care medicine. Methods. Ten critically ill patients were included in a retrospective observational study. Using the CTD method, cardiac output was measured continuously. BTD measurements using the same equipment were performed intermittently. The data obtained by BTD were compared to those obtained by CTD just before the BTD (CTD-before) and 2–3 minutes after the BTD (CTD-after). The CO values were divided into three groups: all CO values, CO > 4.5 L/min, CO < 4.5 L/min. The bias (mean difference between values obtained by the two methods), standard deviation, 95% confidence limits and relative error were calculated, and linear regression analysis was performed. The t-test for paired data was used to compare the biases for CTD-before and CTD-after for an individual group. A p value of less than 0.05 was considered statistically significant. Results. A total of 60 data triplets were obtained. CTD-before ranged from 1.9 L/min to 12.6 L/min, CTD-after from 2.0 to 13.2 L/min and BTD from 1.9 to 12.0 L/min. For all CO values the bias for CTD-before was 0.13 ± 0.52 L/min (95% confidence limits −0.91 to 1.17 L/min), the relative error was 3.52 ± 15.20%, the linear regression equation was CTD-before = 0.96 × BTD + 0.01 and Pearson's correlation coefficient was 0.95. The values for CTD-after were 0.08 ± 0.46 L/min (−0.84 to 1.00 L/min), 2.22 ± 9.05%, CTD-after = 0.98 × BTD + 0.01 and 0.98, respectively. For all CO values there was no statistically significant difference between biases for CTD-before and CTD-after (p = 0.51). There was no statistically significant difference between biases for CTD
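The agreement statistics used above (bias as mean difference, standard deviation of the differences, 95% limits, relative error) can be computed as follows. The CO pairs are hypothetical illustration values, not the study's data:

```python
import statistics

def agreement_stats(ref, test):
    """Bland-Altman-style agreement statistics between two methods:
    bias (mean difference), SD of the differences, 95% limits of
    agreement (bias +/- 1.96*SD), and mean relative error in %."""
    diffs = [t - r for r, t in zip(ref, test)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    limits = (bias - 1.96 * sd, bias + 1.96 * sd)
    rel_err = statistics.mean(100.0 * d / r for d, r in zip(diffs, ref))
    return bias, sd, limits, rel_err

# Hypothetical CO pairs (L/min): bolus (BTD) vs continuous (CTD) readings
btd = [4.2, 5.1, 3.8, 6.0, 7.5]
ctd = [4.4, 5.0, 4.0, 6.3, 7.4]
bias, sd, (lo, hi), rel = agreement_stats(btd, ctd)
```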
Gran method for end point anticipation in monosegmented flow titration
Directory of Open Access Journals (Sweden)
Aquino Emerson V
2004-01-01
Full Text Available An automatic potentiometric monosegmented flow titration procedure based on the Gran linearisation approach has been developed. The controlling program can estimate the end point of the titration after the addition of three or four aliquots of titrant. Alternatively, the end point can be determined by the second-derivative procedure. In this case, additional volumes of titrant are added until the vicinity of the end point, and three points before and after the stoichiometric point are used for end point calculation. The performance of the system was assessed by the determination of chloride in isotonic beverages and parenteral solutions. The system employs a tubular Ag2S/AgCl indicator electrode. A typical titration, performed according to the IUPAC definition, requires only 60 mL of sample and about the same volume of titrant (AgNO3 solution). A complete titration can be carried out in 1–5 min. The accuracy and precision (relative standard deviation of ten replicates) are 2% and 1% for the Gran and 1% and 0.5% for the Gran/derivative end point determination procedures, respectively. The proposed system reduces the time needed to perform a titration, ensuring low sample and reagent consumption and fully automatic sampling and titrant addition in a calibration-free titration protocol.
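The Gran anticipation idea, estimating the equivalence volume by linear extrapolation from only a few pre-equivalence points, can be sketched as below. The Nernstian slope and sign convention are generic assumptions for illustration, not the authors' exact controlling program:

```python
import numpy as np

def gran_endpoint(v_titrant, emf_mv, v0, slope_mv=59.16):
    """Anticipate the titration end point from a few pre-equivalence points.

    The Gran function G = (V0 + V) * 10^(E/slope) is approximately linear
    in titrant volume V before the equivalence point and extrapolates to
    zero at the equivalence volume (sign convention depends on the
    electrode; here G decreases toward the end point)."""
    v = np.asarray(v_titrant, dtype=float)
    g = (v0 + v) * 10.0 ** (np.asarray(emf_mv, dtype=float) / slope_mv)
    a, b = np.polyfit(v, g, 1)   # least-squares line G ~ a*V + b
    return -b / a                # volume where the line crosses zero
```

With three or four (V, EMF) pairs this returns the anticipated end-point volume, matching the "three or four aliquots" behaviour described above.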
Robust EM Continual Reassessment Method in Oncology Dose Finding
Yuan, Ying; Yin, Guosheng
2012-01-01
The continual reassessment method (CRM) is a commonly used dose-finding design for phase I clinical trials. Practical applications of this method have been restricted by two limitations: (1) the requirement that the toxicity outcome needs to be observed shortly after the initiation of the treatment; and (2) the potential sensitivity to the prespecified toxicity probability at each dose. To overcome these limitations, we naturally treat the unobserved toxicity outcomes as missing data, and use the expectation-maximization (EM) algorithm to estimate the dose toxicity probabilities based on the incomplete data to direct dose assignment. To enhance the robustness of the design, we propose prespecifying multiple sets of toxicity probabilities, each set corresponding to an individual CRM model. We carry out these multiple CRMs in parallel, across which model selection and model averaging procedures are used to make more robust inference. We evaluate the operating characteristics of the proposed robust EM-CRM designs through simulation studies and show that the proposed methods satisfactorily resolve both limitations of the CRM. Besides improving the MTD selection percentage, the new designs dramatically shorten the duration of the trial, and are robust to the prespecification of the toxicity probabilities. PMID:22375092
Natural Preconditioning and Iterative Methods for Saddle Point Systems
Pestana, Jennifer; Wathen, Andrew J.
2015-01-01
or the discrete setting, so saddle point systems arising from the discretization of partial differential equation problems, such as those describing electromagnetic problems or incompressible flow, lead to equations with this structure, as do, for example
[End-of-life debate: Citizen's point of view about deep and continuous sedation].
Toporski, J; Jonveaux-Rivasseau, T; Lamouille-Chevalier, C
2017-12-01
Sedation in palliative care meets a precise definition and corresponds to a medical practice. We assessed the comprehension of this practice by the French population. In 2015, citizens expressed their views on the Claeys-Leonetti bill by means of a consultative forum made available on the Internet site of the National Assembly. The content of the messages filed regarding the right to deep and continuous sedation until death was analyzed using the ALCESTE textual data analysis software, supplemented by a thematic analysis, in order to identify the perception that Internet users had of this practice. Among the 1819 Internet users who participated in the forum, 67 expressed their views as health professionals, 25 of whom were directly involved in palliative care, as well as 10 sick persons. Analysis with the ALCESTE software highlighted two classes of statements. The first dealt with deep and continuous sedation, reflecting the specificity of the discourse of the Internet users. The second consisted of textual units in which modal verbs were dominant and overrepresented, thus providing information on the participants' perceptions. The thematic analysis highlighted four themes: death, intent, treatment and fear. Deep and continuous sedation is perceived as a euthanasic practice or raises fear of such a drift. Provision of extended and accurate information to the population and health professionals is essential to ensure that this new model of sedation is integrated into the care of terminally ill patients and their families. Copyright © 2017 Société Nationale Française de Médecine Interne (SNFMI). Published by Elsevier SAS. All rights reserved.
Fixed points for some non-obviously contractive operators defined in a space of continuous functions
C. Avramescu; Cristian Vladimirescu
2004-01-01
Let $X$ be an arbitrary (real or complex) Banach space, endowed with the norm $\left| \cdot \right|$. Consider the space of the continuous functions $C\left( \left[ 0,T\right] ,X\right)$ $\left( T>0\right)$, endowed with the usual topology, and let $M$ be a closed subset of it. One proves that each operator $A:M\rightarrow M$ fulfilling for all $x,y\in M$ and for all $t\in \left[ 0,T\right]$ the condition \begin{eqnarray*} \left| \left( Ax\right) \left( t\right) -\left( Ay\right) \l...
Homogeneity study of fixed-point continuous marine environmental and meteorological data: a review
Yang, Jinkun; Yang, Yang; Miao, Qingsheng; Dong, Mingmei; Wan, Fangfang
2018-02-01
The principle of inhomogeneity and the classification of homogeneity test methods are briefly described, and several common inhomogeneity methods and their relative merits are described in detail. Then, based on applications of the different homogeneity methods to ground meteorological data and marine environmental data, the present status and progress are reviewed. At present, homogeneity research on radiosonde and ground meteorological data is mature both domestically and abroad, and research and application to marine environmental data should also be given full attention. Carrying out a variety of test and correction methods, combined with the use of a multi-mode test system, will make the results more reasonable and scientific, and can provide accurate first-hand information for coastal climate change research.
Dew Point Calibration System Using a Quartz Crystal Sensor with a Differential Frequency Method
Directory of Open Access Journals (Sweden)
Ningning Lin
2016-11-01
Full Text Available In this paper, the influence of temperature on quartz crystal microbalance (QCM) sensor response during dew point calibration is investigated. The aim is to present a compensation method to eliminate the impact of temperature on frequency acquisition. A new sensitive structure is proposed with double QCMs. One is kept in contact with the environment, whereas the other is not exposed to the atmosphere. There is a thermally conductive silicone pad between each crystal and a refrigeration device to keep a uniform temperature condition. A differential frequency method is described in detail and is applied to calibrate the frequency characteristics of the QCMs at the dew point of −3.75 °C. It is worth noting that the frequency changes of the two QCMs were approximately opposite when the temperature conditions were changed simultaneously. The results from continuous experiments show that the frequencies of the two QCMs at the moment the dew point was reached have strong consistency and high repeatability, leading to the conclusion that the sensitive structure can calibrate dew points with high reliability.
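A common way to realize this kind of differential compensation is to subtract the sealed (reference) crystal's frequency shift, which tracks only temperature, from the exposed crystal's shift, which contains both temperature and mass-loading (condensation) contributions. This is a generic sketch with hypothetical frequencies; the exact sign convention depends on how the two crystals respond (the paper reports approximately opposite changes):

```python
def differential_frequency(f_exposed, f_sealed, f0_exposed, f0_sealed):
    """Temperature-compensated frequency shift of the exposed QCM.

    Both crystals share the same thermal environment, so the sealed
    (reference) crystal's drift approximates the temperature-induced part
    of the exposed crystal's drift; subtracting the two shifts leaves
    (approximately) the mass-loading signal from dew condensation."""
    shift_exposed = f_exposed - f0_exposed   # temperature + mass loading
    shift_sealed = f_sealed - f0_sealed      # temperature only
    return shift_exposed - shift_sealed      # ~mass-loading shift
```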
Note: interpreting iterative methods convergence with diffusion point of view
Hong, Dohy
2013-01-01
In this paper, we explain the convergence speed of different iteration schemes with the fluid diffusion view when solving a linear fixed point problem. This interpretation allows one to better understand why power iteration or Jacobi iteration may converge faster or slower than Gauss-Seidel iteration.
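The convergence-speed comparison discussed above can be reproduced on a small diagonally dominant system; for such matrices Gauss-Seidel typically reaches a given residual in fewer sweeps than Jacobi. A minimal sketch:

```python
import numpy as np

def jacobi_step(A, b, x):
    # x_new = D^{-1} (b - (A - D) x), with D the diagonal of A
    D = np.diag(A)
    return (b - (A @ x - D * x)) / D

def gauss_seidel_step(A, b, x):
    # Sweep through rows, using already-updated components immediately
    x = x.copy()
    for i in range(len(b)):
        s = A[i] @ x - A[i, i] * x[i]
        x[i] = (b[i] - s) / A[i, i]
    return x

# Diagonally dominant test system (hypothetical values)
A = np.array([[4.0, 1.0, 1.0],
              [1.0, 5.0, 2.0],
              [1.0, 2.0, 6.0]])
b = np.array([6.0, 8.0, 9.0])

def iterate(step, tol=1e-10, max_iter=500):
    """Apply a step function until the residual ||Ax - b|| drops below tol."""
    x = np.zeros_like(b)
    for k in range(1, max_iter + 1):
        x = step(A, b, x)
        if np.linalg.norm(A @ x - b) < tol:
            return x, k
    return x, max_iter

x_j, k_j = iterate(jacobi_step)
x_gs, k_gs = iterate(gauss_seidel_step)   # usually fewer sweeps than Jacobi
```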
Micro-four-point Probe Hall effect Measurement method
DEFF Research Database (Denmark)
Petersen, Dirch Hjorth; Hansen, Ole; Lin, Rong
2008-01-01
barriers and with a magnetic field applied normal to the plane of the sheet. Based on this potential, analytical expressions for the measured four-point resistance in presence of a magnetic field are derived for several simple sample geometries. We show how the sheet resistance and Hall effect...
Spatio-temporal point process filtering methods with an application
Czech Academy of Sciences Publication Activity Database
Frcalová, B.; Beneš, V.; Klement, Daniel
2010-01-01
Roč. 21, 3-4 (2010), s. 240-252 ISSN 1180-4009 R&D Projects: GA AV ČR(CZ) IAA101120604 Institutional research plan: CEZ:AV0Z50110509 Keywords : cox point process * filtering * spatio-temporal modelling * spike Subject RIV: BA - General Mathematics Impact factor: 0.750, year: 2010
Benchmarking: a method for continuous quality improvement in health.
Ettorchi-Tardy, Amina; Levif, Marie; Michel, Philippe
2012-05-01
Benchmarking, a management approach for implementing best practices at best cost, is a recent concept in the healthcare system. The objectives of this paper are to better understand the concept and its evolution in the healthcare sector, to propose an operational definition, and to describe some French and international experiences of benchmarking in the healthcare sector. To this end, we reviewed the literature on this approach's emergence in the industrial sector, its evolution, its fields of application and examples of how it has been used in the healthcare sector. Benchmarking is often thought to consist simply of comparing indicators and is not perceived in its entirety, that is, as a tool based on voluntary and active collaboration among several organizations to create a spirit of competition and to apply best practices. The key feature of benchmarking is its integration within a comprehensive and participatory policy of continuous quality improvement (CQI). Conditions for successful benchmarking focus essentially on careful preparation of the process, monitoring of the relevant indicators, staff involvement and inter-organizational visits. Compared to methods previously implemented in France (CQI and collaborative projects), benchmarking has specific features that set it apart as a healthcare innovation. This is especially true for healthcare or medical-social organizations, as the principle of inter-organizational visiting is not part of their culture. Thus, this approach will need to be assessed for feasibility and acceptability before it is more widely promoted.
Precipitation of stoichiometric hydroxyapatite by a continuous method
Energy Technology Data Exchange (ETDEWEB)
Gomez-Morales, J.; Boix, T.; Fraile, J.; Rodriguez-Clemente, R. [Consejo Superior de Investigaciones Cientificas, Barcelona (Spain). Inst. de Ciencia de Materiales; Torrent-Burgues, J. [UPC, Barcelona (Spain). Dept. d' Enginyeria Quimica
2001-07-01
In this paper we present the precipitation of hydroxyapatite (HA), Ca{sub 5}(OH)(PO{sub 4}){sub 3}, from highly concentrated CaCl{sub 2} and K{sub 2}HPO{sub 4} solutions, carried out by a continuous method in an MSMPR reactor. The procedure consists of adding the reagents in a Ca to P ratio equal to 1.67, maintaining a temperature of 85 C and an inert N{sub 2} atmosphere inside the reactor, and monitoring and adjusting the pH automatically by means of a pH-stat system (pH = 9.0 {+-} 0.1). Under these conditions, HA with a Ca to P ratio equal or close to the stoichiometric composition (Ca/P = 1.667) is obtained at steady state, with a high yield (up to 99%) and a high production rate (up to 1.17 g/l.min). The crystal size distribution (CSD), morphology and crystallinity of the precipitates, and the impurities present, fit the requirements for biomedical applications. (orig.)
Comparing Single-Point and Multi-point Calibration Methods in Modulated DSC
Energy Technology Data Exchange (ETDEWEB)
Van Buskirk, Caleb Griffith [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2017-06-14
Heat capacity measurements for High Density Polyethylene (HDPE) and Ultra-high Molecular Weight Polyethylene (UHMWPE) were performed using Modulated Differential Scanning Calorimetry (mDSC) over a wide temperature range, -70 to 115 °C, with a TA Instruments Q2000 mDSC. The default calibration method for this instrument involves measuring the heat capacity of a sapphire standard at a single temperature near the middle of the temperature range of interest. However, this method often fails for temperature ranges that exceed a 50 °C interval, likely because of drift or non-linearity in the instrument's heat capacity readings over time or over the temperature range. Therefore, in this study a method was developed to calibrate the instrument using multiple temperatures and the same sapphire standard.
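A multi-point calibration of the kind described, computing a temperature-dependent calibration factor from the sapphire standard at several temperatures and interpolating between them, might be sketched as follows. The calibration values below are hypothetical, not the report's data:

```python
import numpy as np

# Hypothetical sapphire calibration data: reference heat capacity vs. the
# instrument's apparent (measured) reading at several temperatures.
cal_T = np.array([-60.0, -20.0, 20.0, 60.0, 100.0])   # deg C
cp_ref = np.array([0.55, 0.65, 0.77, 0.85, 0.92])     # J/(g*K), reference
cp_meas = np.array([0.57, 0.66, 0.76, 0.82, 0.87])    # J/(g*K), measured

# Temperature-dependent calibration factor K(T) = Cp_ref / Cp_measured
k_factors = cp_ref / cp_meas

def calibrate(temp_c, cp_apparent):
    """Multi-point calibration: interpolate K(T) between calibration
    temperatures instead of applying one constant factor taken from a
    single mid-range temperature."""
    k = np.interp(temp_c, cal_T, k_factors)
    return k * cp_apparent
```

A single-point scheme would apply `k_factors` evaluated at one temperature over the whole range, which is exactly what fails when the instrument response drifts or is non-linear across a wide interval.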
Krylov Subspace Methods for Saddle Point Problems with Indefinite Preconditioning
Czech Academy of Sciences Publication Activity Database
Rozložník, Miroslav; Simoncini, V.
2002-01-01
Roč. 24, č. 2 (2002), s. 368-391 ISSN 0895-4798 R&D Projects: GA ČR GA101/00/1035; GA ČR GA201/00/0080 Institutional research plan: AV0Z1030915 Keywords : saddle point problems * preconditioning * indefinite linear systems * finite precision arithmetic * conjugate gradients Subject RIV: BA - General Mathematics Impact factor: 0.753, year: 2002
Unified analysis of preconditioning methods for saddle point matrices
Czech Academy of Sciences Publication Activity Database
Axelsson, Owe
2015-01-01
Roč. 22, č. 2 (2015), s. 233-253 ISSN 1070-5325 R&D Projects: GA MŠk ED1.1.00/02.0070 Institutional support: RVO:68145535 Keywords : saddle point problems * preconditioning * spectral properties Subject RIV: BA - General Mathematics Impact factor: 1.431, year: 2015 http://onlinelibrary.wiley.com/doi/10.1002/nla.1947/pdf
Development of a Multi-Point Microwave Interferometry (MPMI) Method
Energy Technology Data Exchange (ETDEWEB)
Specht, Paul Elliott [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Cooper, Marcia A. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Jilek, Brook Anton [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
2015-09-01
A multi-point microwave interferometer (MPMI) concept was developed for non-invasively tracking a shock, reaction, or detonation front in energetic media. Initially, a single-point, heterodyne microwave interferometry capability was established. The design, construction, and verification of the single-point interferometer provided a knowledge base for the creation of the MPMI concept. The MPMI concept uses an electro-optic (EO) crystal to impart a time-varying phase lag onto a laser at the microwave frequency. Polarization optics converts this phase lag into an amplitude modulation, which is analyzed in a heterodyne interferometer to detect Doppler shifts in the microwave frequency. A version of the MPMI was constructed to experimentally measure the frequency of a microwave source through the EO modulation of a laser. The successful extraction of the microwave frequency proved the underlying physical concept of the MPMI design, and highlighted the challenges associated with the longer microwave wavelength. The frequency measurements made with the current equipment contained too much uncertainty for an accurate velocity measurement. Potential alterations to the current construction are presented to improve the quality of the measured signal and enable multiple accurate velocity measurements.
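Once a clean Doppler shift is extracted, converting it to a front velocity is straightforward for normal-incidence reflection. A sketch with an assumed source frequency (not a value from the report):

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def velocity_from_doppler(f_doppler_hz, f_source_hz):
    """Front velocity from the measured Doppler shift of a microwave signal
    reflected at normal incidence: f_d = 2*v/lambda, so v = f_d*lambda/2.
    (In a real charge the wavelength in the medium, not vacuum, applies.)"""
    wavelength = C / f_source_hz
    return f_doppler_hz * wavelength / 2.0

# Hypothetical example: 35 GHz source, 1 MHz Doppler shift -> ~4.3 km/s
v = velocity_from_doppler(1.0e6, 35.0e9)
```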
George, Monica C; Lazer, Zane P; George, David S
2016-05-01
We present a technique that uses a near-point string to demonstrate the anticipated near point of multifocal and accommodating intraocular lenses (IOLs). Beads are placed on the string at distances corresponding to the near points for diffractive and accommodating IOLs. The string is held up to the patient's eye to demonstrate where each of the IOLs is likely to provide the best near vision. None of the authors has a financial or proprietary interest in any material or method mentioned. Copyright © 2016 ASCRS and ESCRS. Published by Elsevier Inc. All rights reserved.
Yoshida, Yutaka; Yokoyama, Kiyoko; Ishii, Naohiro
It is necessary to monitor daily health condition to prevent stress syndrome. In this study, we propose a method for assessing mental and physiological condition, such as work stress or relaxation, using heart rate variability in real time and continuously. The instantaneous heart rate (HR) and the ratio of the number of extreme points (NEP) to the number of heart beats were calculated for assessing mental and physiological condition. In this method, 20 heart beats were used to calculate these indexes, which were updated at every beat interval. Three conditions (sitting at rest, performing mental arithmetic and watching a relaxation movie) were assessed using our proposed algorithm. The assessment accuracies were 71.9% and 55.8% for performing mental arithmetic and watching the relaxation movie, respectively. Because the mental and physiological condition is assessed using only the 20 most recent heart beats, this method can be considered a real-time assessment method.
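The NEP index described above, the ratio of extreme points (local maxima and minima of the RR-interval series) to the number of beats in a 20-beat window, can be sketched as:

```python
def extreme_point_ratio(rr_intervals):
    """Ratio of the number of extreme points (local maxima/minima) to the
    number of beats in a window of RR intervals (e.g., the last 20 beats).
    An interior sample is an extreme point when the slope changes sign."""
    nep = sum(
        1
        for i in range(1, len(rr_intervals) - 1)
        if (rr_intervals[i] - rr_intervals[i - 1])
        * (rr_intervals[i + 1] - rr_intervals[i]) < 0
    )
    return nep / len(rr_intervals)
```

Sliding this over the most recent 20 beats and recomputing at each new beat gives the per-beat update described in the abstract.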
International Nuclear Information System (INIS)
Tatchyn, R.; Oregon Univ., Eugene
1989-01-01
Conventional techniques for measuring magnetic field profiles in ordinary undulators rely predominantly on Hall probes for making point-by-point static measurements. As undulators with submillimeter periods and gaps become available, such techniques will become untenable, owing to the relatively large size of conventional Hall probe heads and the rapidly increasing number of periods in devices of fixed length. In this paper a method is presented which can rapidly map out field profiles in undulators with periods and gaps extending down to the 100 μm range and beyond. The method, which samples the magnetic field continuously, has been used successfully in profiling a recently constructed 726 μm period undulator, and seems to offer some potential advantages over conventional Hall probe techniques in measuring large-scale undulator fields as well. (orig.)
A Meshfree Cell-based Smoothed Point Interpolation Method for Solid Mechanics Problems
International Nuclear Information System (INIS)
Zhang Guiyong; Liu Guirong
2010-01-01
In the framework of a weakened weak (W²) formulation using a generalized gradient smoothing operation, this paper introduces a novel meshfree cell-based smoothed point interpolation method (CS-PIM) for solid mechanics problems. The W² formulation seeks solutions from a normed G space which includes both continuous and discontinuous functions and allows the use of many more types of methods to create shape functions for numerical methods. When PIM shape functions are used, the functions constructed are in general not continuous over the entire problem domain and hence are not compatible. Such an interpolation is not in a traditional H¹ space, but in a G¹ space. By introducing the generalized gradient smoothing operation properly, the requirement on functions is further weakened beyond the already weakened requirement for functions in an H¹ space, and a G¹ space can be viewed as a space of functions with a weakened weak (W²) requirement on continuity. The cell-based smoothed point interpolation method (CS-PIM) is formulated based on the W² formulation, in which the displacement field is approximated using the PIM shape functions, which possess the Kronecker delta property facilitating the enforcement of essential boundary conditions [3]. The gradient (strain) field is constructed by the generalized gradient smoothing operation within the cell-based smoothing domains, which are exactly the triangular background cells. A W² formulation of the generalized smoothed Galerkin (GS-Galerkin) weak form is used to derive the discretized system equations. It was found that the CS-PIM possesses the following attractive properties: (1) it is very easy to implement and works well with the simplest linear triangular mesh without introducing additional degrees of freedom; (2) it is at least linearly conforming; (3) this method is temporally stable and works well for dynamic analysis; (4) it possesses a close-to-exact stiffness, which is much softer than the overly-stiff FEM model and
Tsibanos, V.; Wang, G.
2017-12-01
The Long Point Fault, located in Houston, Texas, is a complex system of normal faults which causes significant damage to urban infrastructure on both private and public property. This case study focuses on the 20-km-long fault, using high-accuracy continuously operating global positioning satellite (GPS) stations to delineate fault movement over five years (2012–2017). The Long Point Fault is the longest active fault in the greater Houston area; it damages roads, buried pipes, concrete structures and buildings and creates a financial burden for the city of Houston and the residents who live in close vicinity to the fault trace. In order to monitor fault displacement along the surface, 11 permanent and continuously operating GPS stations were installed: 6 on the hanging wall and 5 on the footwall. This study is an overview of the GPS observations from 2013 to 2017. GPS positions were processed with both relative (double differencing) and absolute Precise Point Positioning (PPP) techniques. The PPP solutions, which are referred to the IGS08 reference frame, were transformed to the Stable Houston Reference Frame (SHRF16). Our results show no considerable horizontal displacements across the fault, but do show uneven vertical displacement attributed to regional subsidence in the range of 5–10 mm/yr. This subsidence can be attributed to compaction of silty clays in the Chicot and Evangeline aquifers, whose water depths are approximately 50 m and 80 m below the land surface (bls). These levels are below the regional pre-consolidation head, which is about 30 to 40 m bls. Recent research indicates subsidence will continue to occur until the aquifer levels reach the pre-consolidation head. With further GPS observations, both the Long Point Fault and regional land subsidence can be monitored, providing important geological data to the Houston community.
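A vertical displacement rate of the kind quoted (mm/yr) is typically obtained from a least-squares line fit to a station's up-component time series. A sketch with hypothetical values, not the study's data:

```python
import numpy as np

def vertical_rate_mm_per_yr(decimal_years, up_mm):
    """Linear vertical displacement rate (mm/yr) from a GPS up-component
    time series via a least-squares straight-line fit."""
    slope, _ = np.polyfit(decimal_years, up_mm, 1)
    return slope

# Hypothetical station time series: about 7 mm/yr of subsidence
t = np.array([2013.0, 2014.0, 2015.0, 2016.0, 2017.0])
up = np.array([0.0, -7.2, -14.1, -21.0, -28.3])   # mm, relative to start
rate = vertical_rate_mm_per_yr(t, up)
```

Comparing such rates on the hanging wall and footwall is one way to separate fault slip from the regional subsidence signal.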
Energy Technology Data Exchange (ETDEWEB)
Park, H.J.; Lee, S.J. [Wonkwang University, Iksan (Korea)
2003-01-01
In this study, a new method was proposed for estimating the contractile state changes generated during continuous isometric contraction of skeletal muscle. The physiological changes (EMG, ECG) and the psychological changes governed by the CNS (central nervous system) were measured experimentally while the subjects' muscles contracted continuously and isometrically under a constant load. The psychological changes were represented as a three-step change, named 'fatigue', 'pain' and 'sick (great pain)', based on an oral test, and a method comparing physiological change with psychological change on the basis of these three steps was developed. Analysis of the physiological signals showed that EMG and ECG signal changes were observed near the points in time at which the psychological state changed. That is, contractile states appear to follow a three-state pattern (stable, fatigue, pain) rather than a two-state pattern (stable, fatigue). (author). 24 refs., 7 figs.
Hashimoto, Hiroyuki; Takaguchi, Yusuke; Nakamura, Shizuka
Instability of the calculation process and increase of calculation time caused by the increasing size of continuous optimization problems remain the major issues to be solved in applying the technique to practical industrial systems. This paper proposes an enhanced quadratic programming algorithm based on the interior point method, aimed mainly at improving calculation stability. The proposed method has a dynamic estimation mechanism for active constraints on variables, which fixes the variables getting closer to their upper/lower limits and afterwards releases the fixed ones as needed during the optimization process. It can be considered an algorithm-level integration of the solution strategy of the active-set method into the interior point method framework. We describe some numerical results on the commonly used benchmark problems called “CUTEr” to show the effectiveness of the proposed method. Furthermore, test results on a large-sized ELD problem (economic load dispatching in electric power supply scheduling) are also described as a practical industrial application.
Evaluating clinical accuracy of continuous glucose monitoring devices: other methods
Wentholt, Iris M. E.; Hart, August A.; Hoekstra, Joost B. L.; DeVries, J. Hans
2008-01-01
With more and more continuous glucose monitoring devices entering the market, the importance of adequate accuracy assessment grows. This review discusses pros and cons of Regression Analysis and Correlation Coefficient, Relative Difference measures, Bland Altman plot, ISO criteria, combined curve
Novel Ratio Subtraction and Isoabsorptive Point Methods for ...
African Journals Online (AJOL)
1Department of Pharmaceutical Chemistry, College of Pharmacy, King Saud University, PO Box ... Purpose: To develop and validate two innovative spectrophotometric methods used for the ..... research through the Research Group Project no.
Energy Technology Data Exchange (ETDEWEB)
Tumelero, Fernanda, E-mail: fernanda.tumelero@yahoo.com.br [Universidade Federal do Rio Grande do Sul (UFRGS), Porto Alegre, RS (Brazil). Programa de Pos-Graduacao em Engenharia Mecanica; Petersen, Claudio Z.; Goncalves, Glenio A.; Lazzari, Luana, E-mail: claudiopeteren@yahoo.com.br, E-mail: gleniogoncalves@yahoo.com.br, E-mail: luana-lazzari@hotmail.com [Universidade Federal de Pelotas (DME/UFPEL), Capao do Leao, RS (Brazil). Instituto de Fisica e Matematica
2015-07-01
In this work, we present a solution of the Neutron Point Kinetics Equations with temperature feedback effects using the Polynomial Approach Method. For the solution, we consider one and six groups of delayed neutron precursors with temperature feedback effects and constant reactivity. The main idea is to expand the neutron density, the delayed neutron precursors and the temperature as a power series, considering the reactivity as an arbitrary function of time in a relatively short time interval around an ordinary point. In the first interval one applies the initial conditions of the problem, and analytical continuation is used to determine the solutions of the subsequent intervals. With the application of the Polynomial Approach Method it is possible to overcome the stiffness of the equations. One varies the time step size of the Polynomial Approach Method and performs an analysis of precision and computational time. Moreover, we compare different orders of approximation (linear, quadratic and cubic) of the power series. The neutron density and temperature obtained by numerical simulations with the linear approximation are compared with results in the literature. (author)
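The series-plus-analytic-continuation idea can be illustrated on the simplest case the abstract mentions: one precursor group, constant reactivity, and (for brevity) no temperature feedback. The sketch below derives the power-series coefficients directly from the kinetics ODEs on each short interval and restarts the expansion from the interval endpoint; it is our minimal reading of the approach, not the authors' code.

```python
def point_kinetics_series(n0, rho, beta, lam, Lam, t_end, h=0.001, order=6):
    """One-group point kinetics with constant reactivity and no feedback:
        n'(t) = ((rho - beta)/Lam) n + lam C,
        C'(t) = (beta/Lam) n - lam C.
    On each interval of length h the solution is expanded as a power series
    whose coefficients follow by repeatedly differentiating the ODEs; the
    endpoint values seed the next interval (analytic continuation), which
    sidesteps the stiffness of the system for small h."""
    C = beta * n0 / (lam * Lam)        # start from precursor equilibrium
    n = n0
    for _ in range(round(t_end / h)):
        ak, bk = n, C                  # 0th-order series coefficients
        n_new, C_new, hp = n, C, 1.0
        for k in range(order):
            # a_{k+1} = (((rho-beta)/Lam) a_k + lam b_k) / (k+1), same for b
            ak, bk = ((((rho - beta) / Lam) * ak + lam * bk) / (k + 1),
                      (((beta / Lam) * ak - lam * bk) / (k + 1)))
            hp *= h
            n_new += ak * hp
            C_new += bk * hp
        n, C = n_new, C_new
    return n
```

With zero reactivity the density stays at its initial equilibrium value, which is a quick sanity check on the recursion.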
Unemployment estimation: Spatial point referenced methods and models
Pereira, Soraia
2017-06-26
The Portuguese Labor Force Survey, from the 4th quarter of 2014 onwards, started geo-referencing the sampling units, namely the dwellings in which the surveys are carried out. This opens new possibilities for analysing and estimating unemployment and its spatial distribution across any region. The survey selects, according to a pre-established sampling criterion, a certain number of dwellings across the nation and records the number of unemployed in these dwellings. Based on this survey, the National Statistical Institute of Portugal presently uses direct estimation methods to produce the national unemployment figures. Recently, there has been increased interest in estimating these figures for smaller areas. Direct estimation methods, due to the reduced sample sizes in small areas, tend to produce fairly large sampling variations; therefore model-based methods, which tend to
Directory of Open Access Journals (Sweden)
Kresno Wikan Sadono
2016-12-01
[Title: Numerical solution of the advection equation with the radial basis point interpolation method and the discontinuous Galerkin method for time integration] Differential equations are widely used to describe a variety of phenomena in science and engineering, and many complex everyday problems can be modelled with differential equations and solved by numerical methods. One class of numerical methods, the meshfree or meshless methods, has developed recently; it requires no element construction on the domain. This research combines a meshless method, the radial basis point interpolation method (RPIM), with discontinuous Galerkin method (DGM) time integration; the combined method is called RPIM-DGM. The RPIM-DGM is applied to the one-dimensional advection equation. The RPIM uses the multiquadratic function (MQ) as basis function, and the time integration is derived for both linear-DGM and quadratic-DGM. The simulation results show that the method agrees well with the analytical solution, and that the numerical results become more accurate as the number of nodes increases and the time increment decreases. The results also show that, for a given time increment and number of nodes, quadratic-DGM time integration improves the accuracy compared with linear-DGM.
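The spatial ingredient of RPIM is radial basis interpolation with the multiquadric (MQ) basis mentioned in the abstract. The following is a generic one-dimensional MQ interpolation sketch, not the RPIM-DGM code; the shape parameter `c` is a free choice of ours.

```python
import numpy as np

def mq_interpolate(x_nodes, f_nodes, x_eval, c=0.1):
    """Radial point interpolation with the multiquadric (MQ) basis
    phi(r) = sqrt(r^2 + c^2): solve A w = f at the nodes, then evaluate
    the interpolant sum_j w_j * phi(|x - x_j|) at the requested points."""
    r = np.abs(x_nodes[:, None] - x_nodes[None, :])
    w = np.linalg.solve(np.sqrt(r**2 + c**2), f_nodes)
    re = np.abs(x_eval[:, None] - x_nodes[None, :])
    return np.sqrt(re**2 + c**2) @ w
```

By construction the interpolant reproduces the nodal values exactly (up to conditioning of the MQ matrix), which is the property RPIM exploits when building shape functions node by node.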
Distributed Interior-point Method for Loosely Coupled Problems
DEFF Research Database (Denmark)
Pakazad, Sina Khoshfetrat; Hansson, Anders; Andersen, Martin Skovgaard
2014-01-01
In this paper, we put forth distributed algorithms for solving loosely coupled unconstrained and constrained optimization problems. Such problems are usually solved using algorithms that are based on a combination of decomposition and first order methods. These algorithms are commonly very slow a...
Sampling point selection for energy estimation in the quasicontinuum method
Beex, L.A.A.; Peerlings, R.H.J.; Geers, M.G.D.
2010-01-01
The quasicontinuum (QC) method reduces computational costs of atomistic calculations by using interpolation between a small number of so-called repatoms to represent the displacements of the complete lattice and by selecting a small number of sampling atoms to estimate the total potential energy of
System and method for continuous solids slurry depressurization
Leininger, Thomas Frederick; Steele, Raymond Douglas; Yen, Hsien-Chin William; Cordes, Stephen Michael
2017-10-10
A continuous slag processing system includes a rotating parallel disc pump, coupled to a motor and a brake. The rotating parallel disc pump includes opposing discs coupled to a shaft, an outlet configured to continuously receive a fluid at a first pressure, and an inlet configured to continuously discharge the fluid at a second pressure less than the first pressure. The rotating parallel disc pump is configurable in a reverse-acting pump mode and a letdown turbine mode. The motor is configured to drive the opposing discs about the shaft and against a flow of the fluid to control a difference between the first pressure and the second pressure in the reverse-acting pump mode. The brake is configured to resist rotation of the opposing discs about the shaft to control the difference between the first pressure and the second pressure in the letdown turbine mode.
System and method for continuous solids slurry depressurization
Leininger, Thomas Frederick; Steele, Raymond Douglas; Cordes, Stephen Michael
2017-07-11
A system includes a first pump having a first outlet and a first inlet, and a controller. The first pump is configured to continuously receive a flow of a slurry into the first outlet at a first pressure and to continuously discharge the flow of the slurry from the first inlet at a second pressure less than the first pressure. The controller is configured to control a first speed of the first pump against the flow of the slurry based at least in part on the first pressure, wherein the first speed of the first pump is configured to resist a backflow of the slurry from the first outlet to the first inlet.
Crovelli, Robert A.; revised by Charpentier, Ronald R.
2012-01-01
The U.S. Geological Survey (USGS) periodically assesses petroleum resources of areas within the United States and the world. The purpose of this report is to explain the development of an analytic probabilistic method and spreadsheet software system called Analytic Cell-Based Continuous Energy Spreadsheet System (ACCESS). The ACCESS method is based upon mathematical equations derived from probability theory. The ACCESS spreadsheet can be used to calculate estimates of the undeveloped oil, gas, and NGL (natural gas liquids) resources in a continuous-type assessment unit. An assessment unit is a mappable volume of rock in a total petroleum system. In this report, the geologic assessment model is defined first, the analytic probabilistic method is described second, and the spreadsheet ACCESS is described third. In this revised version of Open-File Report 00-044, the text has been updated to reflect modifications that were made to the ACCESS program. Two versions of the program are added as appendixes.
Continuous-Flow Biochips: Technology, Physical Design Methods and Testing
DEFF Research Database (Denmark)
Pop, Paul; Araci, Ismail Emre; Chakrabarty, Krishnendu
2015-01-01
This article is a tutorial on continuous-flow biochips, where the basic building blocks are microchannels and microvalves; by combining them, more complex units such as mixers, switches, and multiplexers can be built. It also presents the state of the art in flow-based biochip technology...
Continuous Multistep Methods for Volterra Integro-Differential
African Journals Online (AJOL)
Kamoh et al.
DIFFERENTIAL EQUATIONS OF THE SECOND ORDER. 1Kamoh N.M. ... methods, Volterra integro-differential equation, Convergent, ...... Research of a Multistep Method Applied to Numerical Solution of. Volterra ... Congress on Engineering.
Continuous multistep methods for volterra integro-differential ...
African Journals Online (AJOL)
A new class of numerical methods for Volterra integro-differential equations of the second order is developed. The methods are based on interpolation and collocation of the shifted Legendre polynomial as basis function with Trapezoidal quadrature rules. The convergence analysis revealed that the methods are consistent ...
The Closest Point Method and Multigrid Solvers for Elliptic Equations on Surfaces
Chen, Yujia; Macdonald, Colin B.
2015-01-01
© 2015 Society for Industrial and Applied Mathematics. Elliptic partial differential equations are important from both application and analysis points of view. In this paper we apply the closest point method to solve elliptic equations on general
Rainfall Deduction Method for Estimating Non-Point Source Pollution Load for Watershed
Cai, Ming; Li, Huai-en; KAWAKAMI, Yoji
2004-01-01
Water pollution can be divided into point source pollution (PSP) and non-point source pollution (NPS). Since point source pollution has been brought under control, non-point source pollution is becoming the main pollution source, and prediction of the NPS load is increasingly important in water pollution control and planning for a watershed. Considering the shortage of NPS monitoring data in China, a practical estimation method for the non-point source pollution load --- the rainfall deduction met...
Methods and Conditions for Achieving Continuous Improvement of Processes
Florica BADEA; Catalina RADU; Ana-Maria GRIGORE
2010-01-01
In the early twentieth century, the Taylor model improved the efficiency of production processes in a spectacular manner. It allowed high productivity to be obtained with low-skilled workers, employed in large numbers in production. Currently this model is questioned by experts and has been replaced by the concept of "continuous improvement". The first signs of change date from the '80s, with the appearance of quality circles and groups of operators addressing quality issues, principles whi...
Performance of wet process method alternatives : terminal or continuous blend
Fontes, Liseane P. T. L.; Pereira, Paulo A. A.; Pais, Jorge C.; Trichês, Glicério
2006-01-01
This study presents the results of research investigating asphalt rubber mixtures produced with asphalt rubber binder obtained from two different processes: (i) terminal blend (produced in a refinery); (ii) continuous blend (produced in the laboratory). The experiment included the evaluation of the fatigue and permanent deformation resistance of two gap-graded mixtures (Caltrans ARHM-GG; ADOT AR-AC) and a dense-graded Asphalt Institute (AI) mix (type IV). Two asphalt rubbers from terminal blend...
CREPT-MCNP code for efficiency calibration of HPGe detectors with the representative point method.
Saegusa, Jun
2008-01-01
The representative point method for the efficiency calibration of volume samples has been proposed previously. To implement the method smoothly, a calculation code named CREPT-MCNP has been developed. The code estimates the position of the representative point, which is intrinsic to each shape of volume sample. Self-absorption correction factors are also given to correct the efficiencies measured at the representative point with a standard point source. Features of the CREPT-MCNP code are presented.
Evaluation of Continuation Desire as an Iterative Game Development Method
DEFF Research Database (Denmark)
Schoenau-Fog, Henrik; Birke, Alexander; Reng, Lars
2012-01-01
When developing a game it is always valuable to use feedback from players in each iteration, in order to plan the design of the next iteration. However, it can be challenging to devise a simple approach to acquiring information about a player's engagement while playing. In this paper we will thus...... concerning a crowd game which is controlled by smartphones and is intended to be played by audiences in cinemas and at venues with large screens. The case study demonstrates how the approach can be used to help improve the desire to continue when developing a game....
The Purification Method of Matching Points Based on Principal Component Analysis
Directory of Open Access Journals (Sweden)
DONG Yang
2017-02-01
The traditional purification method for matching points usually uses a small number of points as initial input. Though this can meet most point-constraint requirements, the iterative purification solution easily falls into local extrema, which results in correct matching points being missed. To solve this problem, we introduce a principal component analysis method that uses the whole point set as initial input. Through stepwise elimination of mismatching points and robust solving, a more accurate global optimal solution can be obtained, which reduces the omission rate of correct matching points and thus achieves a better purification effect. Experimental results show that this method can obtain the global optimal solution under a certain original false matching rate, and can decrease or avoid the omission of correct matching points.
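The whole-set-in, iteratively-purify workflow can be illustrated with a simple stand-in: assume inlier matches share a common displacement, whiten the displacements with their covariance (a PCA-equivalent step), and repeatedly drop statistical outliers. This is our own illustrative scheme, not the paper's algorithm; thresholds and helper names are assumptions.

```python
import numpy as np

def purify_matches(p, q, thresh=3.0, iters=10):
    """Purify putative matches p[i] <-> q[i], assuming inliers share a
    common displacement. Starts from the WHOLE set, then repeatedly
    removes points whose displacement is an outlier under a Mahalanobis
    test (i.e. distance in the PCA-whitened frame of the displacements)."""
    d = q - p
    keep = np.ones(len(d), dtype=bool)
    for _ in range(iters):
        mu = d[keep].mean(axis=0)
        cov = np.cov((d[keep] - mu).T) + 1e-9 * np.eye(d.shape[1])
        m = np.einsum('ij,jk,ik->i', d - mu, np.linalg.inv(cov), d - mu)
        new_keep = m < thresh**2           # squared Mahalanobis cutoff
        if (new_keep == keep).all():       # converged: stable inlier set
            break
        keep = new_keep
    return keep
```

Because every point is re-tested against the current statistics on each pass, a point wrongly discarded early can be re-admitted later, which is the behaviour the abstract contrasts with small-initial-set purification.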
Stitching Architecture for Continuous Full Flight-Envelope Simulation of Fixed-Wing Aircraft and Rotorcraft from Discrete-Point Linear Models
Tobias, Eric L.; Tischler, Mark B. (Aviation Development Directorate, Aviation and Missile...)
2016-04-01
The model stitching simulation architecture combines discrete-point linear models and trim data, and is applicable to any aircraft configuration readily
Methods for Automated and Continuous Commissioning of Building Systems
Energy Technology Data Exchange (ETDEWEB)
Larry Luskay; Michael Brambley; Srinivas Katipamula
2003-04-30
Avoidance of poorly installed HVAC systems is best accomplished at the close of construction by having a building and its systems put ''through their paces'' with a well conducted commissioning process. This research project focused on developing key components to enable the development of tools that will automatically detect and correct equipment operating problems, thus providing continuous and automatic commissioning of the HVAC systems throughout the life of a facility. A study of pervasive operating problems revealed that the following would most benefit from an automated and continuous commissioning process: (1) faulty economizer operation; (2) malfunctioning sensors; (3) malfunctioning valves and dampers; and (4) access to project design data. Methodologies for detecting system operation faults in these areas were developed and validated in ''bare-bones'' form within standard software such as spreadsheets, databases, and statistical or mathematical packages. Demonstrations included flow diagrams and simplified mock-up applications. Techniques to manage data were demonstrated by illustrating how test forms could be populated with original design information and the recommended sequence of operation for equipment systems. The proposed tools would use measured data, design data, and equipment operating parameters to diagnose system problems. Steps for future research are suggested to move toward practical application of automated commissioning and its high potential to improve equipment availability, increase occupant comfort, and extend the life of system equipment.
Two step continuous method to synthesize colloidal spheroid gold nanorods.
Chandra, S; Doran, J; McCormack, S J
2015-12-01
This research investigated a two-step continuous process to synthesize a colloidal suspension of spheroid gold nanorods. In the first step, the gold precursor was reduced to seed-like particles in the presence of polyvinylpyrrolidone and ascorbic acid. In the continuous second step, silver nitrate and alkaline sodium hydroxide produced Au nanoparticles of various shapes and sizes. The shape was manipulated through the weight ratio of ascorbic acid to silver nitrate by varying the silver nitrate concentration. A weight ratio of 1.35-1.75 grew spheroid gold nanorods of aspect ratio ∼1.85 to ∼2.2, while a lower weight ratio of 0.5-1.1 formed spherical nanoparticles. The alkaline medium increased the yield of gold nanorods and reduced the reaction time at room temperature. The synthesized gold nanorods retained their shape and size in ethanol. The surface plasmon resonance was red-shifted by ∼5 nm due to the higher refractive index of ethanol relative to water. Copyright © 2015 Elsevier Inc. All rights reserved.
Comments on the comparison of global methods for linear two-point boundary value problems
International Nuclear Information System (INIS)
de Boor, C.; Swartz, B.
1977-01-01
A more careful count of the operations involved in solving the linear system associated with collocation of a two-point boundary value problem using rough splines reverses results recently reported by others in this journal. In addition, it is observed that the use of the technique of ''condensation of parameters'' can decrease the computer storage required. Furthermore, the use of a particular highly localized basis can also reduce the setup time when the mesh is irregular. Finally, operation counts are roughly estimated for the solution of certain linear systems associated with two competing collocation methods, namely collocation with smooth splines and collocation of the equivalent first-order system with continuous piecewise polynomials.
Wan, Xia; Ren, Hongyan; Ma, Enbo; Yang, Gonghuan
2017-07-25
In the past 20 years, trends in ischemic heart disease (IHD) mortality in China have been described in divergent claims. This research analyzes mortality trends for IHD using data from 102 continuous Disease Surveillance Points (DSPs) from 1991 to 2009. The 102 continuous DSPs covered 7.3 million people during 1991-2000 and were then expanded to a population of 52 million in the same areas for 2004-2009. The data were adjusted using garbage-code redistribution and the underreporting rate, and mapped from the international classification of diseases ICD-9 to ICD-10. The mortality rates for IHD were further adjusted by the crude death proportion multiplied by the total number of deaths in the mortality envelope, which was calculated using the log-linear model log r_t = a + bt. Age-standardized death rates (ASDRs) were computed using China's 2010 census population structure. The trend in IHD was calculated from the ASDRs using a joinpoint regression model. The IHD ASDRs increased overall, with an average annual percentage change (AAPC) of 4.96%, especially in the Southwest (AAPC = 7.97%) and Northeast (AAPC = 7.10%), and for both male and female subjects (AAPC about 5%). In rural areas, the year 2000 was a cut-off point, with the annual percentage change in the mortality rate increasing from 3.52% in 1991-2000 to 9.02% in 2000-2009, much higher than in urban areas (AAPC = 1.05%). The proportion of deaths increased in older adults, and more male deaths than female deaths occurred before age 60. By observing a wide range of areas across China from 1991 to 2009, this paper concludes that the ASDR trend for IHD increased. These trends reflect changes in the Chinese standard of living and lifestyle, with diets higher in fat, higher blood lipids and increased body weight.
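The annual percent change figures above come from the log-linear trend model log r_t = a + bt: within a segment, the fitted slope b implies APC = 100·(e^b − 1). A minimal sketch of that single-segment computation (joinpoint software additionally searches for the change-points and weights segment APCs into an AAPC):

```python
import numpy as np

def annual_percent_change(years, rates):
    """Annual percent change implied by the log-linear trend model
    log r_t = a + b*t used in joinpoint analysis: APC = 100*(e^b - 1)."""
    b, _a = np.polyfit(years, np.log(rates), 1)   # slope of log-rate vs time
    return 100.0 * (np.exp(b) - 1.0)
```

A series growing exactly 5% per year therefore returns an APC of 5.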
Method for continuous measurement of export from a leaf
International Nuclear Information System (INIS)
Geiger, D.R.; Fondy, B.R.
1979-01-01
Export of labeled material produced by continuous photosynthesis in ¹⁴CO₂ was monitored with a Geiger-Mueller detector positioned next to an exporting leaf blade. The rate of export of labeled material was calculated from the difference between the rates of retention and net photosynthesis of labeled carbon for the observed leaf. Given certain conditions, including a nearly constant distribution of labeled material among minor veins and the various cell types, the count rate data for the source leaf can be converted to a rate of carbon export. Changes in counting efficiency resulting from changes in leaf water status can be corrected with data from a transducer that measures leaf thickness. Export data agreed with data obtained by monitoring the arrival of ¹⁴C in the sink region; isolated leaves gave values near zero for export of labeled carbon from a given leaf on an intact plant. The technique detects changes in export with a resolution of 10 to 20 minutes.
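The balance underlying the method is simply export = net photosynthetic fixation minus the rate of change of label retained in the leaf. A forward-difference sketch of that bookkeeping (function name and units are our own, for illustration):

```python
def export_rate(net_photo, retained, dt):
    """Export rate of labeled carbon from a leaf: E(t) = P(t) - dR/dt,
    where P is the net photosynthesis rate series and R the retained-label
    series sampled every dt. Forward difference; returns len-1 values."""
    return [p - (retained[i + 1] - retained[i]) / dt
            for i, p in enumerate(net_photo[:-1])]
```

For instance, a leaf fixing 10 units per interval while retention grows by 4 per interval is exporting 6 per interval.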
[Continuous subcutaneous infusion in palliative care, an undervalued method].
van Marum, R J; de Vogel, E M; Zylicz, Z
2002-11-23
Three patients, 2 men aged 55 and 54 years and a woman aged 86 years, were admitted to hospital for treatment of symptoms resulting from terminal disease (pain, agitation, nausea etc.). In all three patients, continuous subcutaneous infusion (CSI) of medication was successfully used to control the symptoms. Compared with intravenous infusion, the technique of CSI is easy to learn and is associated with fewer complications. Its reliability and ease-of-use make it a technique that can be used not only in a hospital setting, but also in general practice and nursing homes. Medication used in palliative care (e.g. morphine, haloperidol, metoclopramide, levomepromazine, midazolam) can often be administered safely by CSI. In palliative care, where goals should be accomplished with minimal burden to the patient, CSI must be considered the technique of choice in patients who are unable to swallow their medication.
A Bayesian MCMC method for point process models with intractable normalising constants
DEFF Research Database (Denmark)
Berthelsen, Kasper Klitgaard; Møller, Jesper
2004-01-01
to simulate from the "unknown distribution", perfect simulation algorithms become useful. We illustrate the method in cases whre the likelihood is given by a Markov point process model. Particularly, we consider semi-parametric Bayesian inference in connection to both inhomogeneous Markov point process models...... and pairwise interaction point processes....
Coes, Alissa L; Paretti, Nicholas V; Foreman, William T; Iverson, Jana L; Alvarez, David A
2014-03-01
A continuous active sampling method was compared to continuous passive and discrete sampling methods for the sampling of trace organic compounds (TOCs) in water. Results from each method are compared and contrasted in order to provide information for future investigators to use while selecting appropriate sampling methods for their research. The continuous low-level aquatic monitoring (CLAM) sampler (C.I.Agent® Storm-Water Solutions) is a submersible, low flow-rate sampler that continuously draws water through solid-phase extraction media. CLAM samplers were deployed at two wastewater-dominated stream field sites in conjunction with the deployment of polar organic chemical integrative samplers (POCIS) and the collection of discrete (grab) water samples. All samples were analyzed for a suite of 69 TOCs. The CLAM and POCIS samples represent time-integrated samples that accumulate the TOCs present in the water over the deployment period (19-23 h for CLAM and 29 days for POCIS); the discrete samples represent only the TOCs present in the water at the time and place of sampling. Non-metric multi-dimensional scaling and cluster analysis were used to examine patterns in both TOC detections and relative concentrations between the three sampling methods. A greater number of TOCs were detected in the CLAM samples than in corresponding discrete and POCIS samples, but TOC concentrations in the CLAM samples were significantly lower than in the discrete and (or) POCIS samples. Thirteen TOCs of varying polarity were detected by all of the three methods. TOC detections and concentrations obtained by the three sampling methods, however, are dependent on multiple factors. This study found that stream discharge, constituent loading, and compound type all affected TOC concentrations detected by each method. In addition, TOC detections and concentrations were affected by the reporting limits, bias, recovery, and performance of each method. Published by Elsevier B.V.
The continuous, desingularized Newton method for meromorphic functions
Jongen, H.Th.; Jonker, P.; Twilt, F.
For any (nonconstant) meromorphic function, we present a real analytic dynamical system, which may be interpreted as an infinitesimal version of Newton's method for finding its zeros. A fairly complete description of the local and global features of the phase portrait of such a system is obtained.
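The "infinitesimal Newton's method" referred to here is the continuous Newton flow ż = −f(z)/f′(z); the desingularized version in the title multiplies this field by |f′(z)|², giving ż = −conj(f′(z))·f(z), which stays defined at zeros of f′ while tracing the same trajectories. A forward-Euler sketch under these standard definitions (step size and iteration count are our own choices):

```python
def newton_flow(f, df, z0, h=0.01, steps=2000):
    """Forward-Euler integration of the desingularized continuous Newton
    flow z' = -conj(f'(z)) * f(z). Multiplying the classical Newton field
    -f/f' by |f'|^2 removes the singularities at zeros of f'; trajectories
    still descend |f|^2 toward a zero of f. Minimal sketch."""
    z = complex(z0)
    for _ in range(steps):
        z = z - h * complex(df(z)).conjugate() * f(z)
    return z
```

Starting at 2 + 0.5j for f(z) = z² − 1, the flow settles on the zero z = 1, the attractor of that half-plane.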
Continuation Methods for Qualitative Analysis of Aircraft Dynamics
Cummings, Peter A.
2004-01-01
A class of numerical methods for constructing bifurcation curves for systems of coupled, non-linear ordinary differential equations is presented. Foundations are discussed, and several variations are outlined along with their respective capabilities. Appropriate background material from dynamical systems theory is presented.
Confluent education: an integrative method for nursing (continuing) education.
Francke, A.L.; Erkens, T.
1994-01-01
Confluent education is presented as a method to bridge the gap between cognitive and affective learning. Attention is focused on three main characteristics of confluent education: (a) the integration of four overlapping domains in a learning process (readiness, the cognitive domain, the affective
Directory of Open Access Journals (Sweden)
R. J. Martin
2011-10-01
We describe the design and testing of a flexible-bag ("Lung") accumulator attached to a gas chromatographic (GC) analyzer capable of measuring surface-atmosphere greenhouse gas exchange fluxes in a wide range of environmental/agricultural settings. In the design presented here, the Lung can collect up to three gas samples concurrently, each accumulated into a Tedlar bag over a period of 20 min or longer. Toggling collection between two sets of three bags enables quasi-continuous collection with sequential analysis and discarding of sample residues. The Lung thus provides a flexible "front end" collection system for interfacing to a GC or alternative analyzer and has been used in two main types of application. Firstly, it has been applied to micrometeorological assessment of paddock-scale N₂O fluxes, discussed here. Secondly, it has been used to automate concurrent emission assessment from three sheep housed in metabolic crates, with gas tracer addition and sampling multiplexed to a single GC.
The Lung allows the same GC equipment used in laboratory discrete-sample analysis to be deployed for continuous field measurement. Continuity of measurement enables spatially averaged N₂O fluxes in particular to be determined with greater accuracy, given the highly heterogeneous and episodic nature of N₂O emissions. We present a detailed evaluation of the micrometeorological flux estimation alongside an independent tuneable diode laser system, reporting excellent agreement between flux estimates based on downwind vertical concentration differences. Whilst the current design is based around triplet bag sets, the basic design could be scaled up to a larger number of inlets or bags and less frequent analysis (longer accumulation times) where a greater number of sampling points are required.
Experimental continuously reinforced concrete pavement parameterization using nondestructive methods
Directory of Open Access Journals (Sweden)
L. S. Salles
Full Text Available ABSTRACT Four continuously reinforced concrete pavement (CRCP) sections were built at the University of São Paulo campus in order to analyze the pavement performance in a tropical environment. The sections' short length, coupled with particular project aspects, caused the experimental CRCP cracking to differ from that of traditional CRCP. Three years after construction, a series of nondestructive tests - Falling Weight Deflectometer (FWD) loadings - was performed to verify and to parameterize the pavement structural condition based on two main properties: the elasticity modulus of concrete (E) and the modulus of subgrade reaction (k). These properties were estimated by matching real deflection basins with basins simulated in EverFE, with the load at the slab center between two consecutive cracks. The backcalculation results show that the lack of anchorage at the sections' ends decreases the E and k values and that the longitudinal reinforcement percentage provides additional stiffness to the pavement. Additionally, FWD loadings tangential to the cracks allowed the load transfer efficiency (LTE) across cracks to be estimated. The LTE resulted in values above 90% for all cracks.
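The LTE figure quoted above is conventionally the ratio of the deflections measured on the two sides of a crack under FWD loading; a minimal sketch of that arithmetic (function name and micrometre units are illustrative assumptions, not values from the study):

```python
# Illustrative only: load transfer efficiency (LTE) as the ratio of the
# deflection on the unloaded side of a crack to the deflection on the
# loaded side, expressed in percent. Units (micrometres) are assumed.
def load_transfer_efficiency(d_loaded_um, d_unloaded_um):
    return 100.0 * d_unloaded_um / d_loaded_um
```

For example, 200 µm under the load plate and 185 µm across the crack would score 92.5 %, consistent with the above-90 % values reported.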
Standard Test Methods for Insulation Integrity and Ground Path Continuity of Photovoltaic Modules
American Society for Testing and Materials. Philadelphia
2000-01-01
1.1 These test methods cover procedures for (1) testing for current leakage between the electrical circuit of a photovoltaic module and its external components while a user-specified voltage is applied and (2) for testing for possible module insulation breakdown (dielectric voltage withstand test). 1.2 A procedure is described for measuring the insulation resistance between the electrical circuit of a photovoltaic module and its external components (insulation resistance test). 1.3 A procedure is provided for verifying that electrical continuity exists between the exposed external conductive surfaces of the module, such as the frame, structural members, or edge closures, and its grounding point (ground path continuity test). 1.4 This test method does not establish pass or fail levels. The determination of acceptable or unacceptable results is beyond the scope of this test method. 1.5 There is no similar or equivalent ISO standard. This standard does not purport to address all of the safety concerns, if a...
Comparative analysis among several methods used to solve the point kinetic equations
International Nuclear Information System (INIS)
Nunes, Anderson L.; Goncalves, Alessandro da C.; Martinez, Aquilino S.; Silva, Fernando Carvalho da
2007-01-01
The main objective of this work is to develop a methodology for comparing several methods of solving the point kinetics equations. The evaluated methods are: the finite differences method, the stiffness confinement method, the improved stiffness confinement method and the piecewise constant approximations method. These methods were implemented and compared through a systematic analysis consisting basically of determining which method consumes the least computational time with the highest precision. A relative performance factor, whose function is to combine both criteria, was calculated. Through the analysis of this performance factor it is possible to choose the best method for the solution of the point kinetics equations. (author)
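As a hedged illustration of the simplest family compared here, the finite differences method, the point kinetics equations with one delayed-neutron group can be advanced with a backward Euler step; all parameter values below are generic textbook assumptions, not taken from the paper:

```python
import numpy as np

# Minimal sketch: one delayed-neutron group point kinetics,
#   dn/dt = ((rho - beta)/Lambda) n + lam C
#   dC/dt = (beta/Lambda) n - lam C
# advanced with an implicit (backward Euler) finite-difference step,
# which stays stable despite the stiffness of the system.
def point_kinetics_implicit(rho, beta=0.0065, lam=0.08, Lambda=1e-4,
                            dt=1e-3, t_end=1.0):
    n = 1.0
    C = beta / (Lambda * lam)          # equilibrium precursor level at rho = 0
    for _ in range(int(round(t_end / dt))):
        # Backward Euler: solve the 2x2 linear system for (n_next, C_next)
        A = np.array([[1 - dt * (rho - beta) / Lambda, -dt * lam],
                      [-dt * beta / Lambda,            1 + dt * lam]])
        n, C = np.linalg.solve(A, np.array([n, C]))
    return n
```

A small positive reactivity step (rho below beta) makes the relative neutron density rise above 1, a negative step makes it fall, as expected from the inhour relation.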
Comparative analysis among several methods used to solve the point kinetic equations
Energy Technology Data Exchange (ETDEWEB)
Nunes, Anderson L.; Goncalves, Alessandro da C.; Martinez, Aquilino S.; Silva, Fernando Carvalho da [Universidade Federal, Rio de Janeiro, RJ (Brazil). Coordenacao dos Programas de Pos-graduacao de Engenharia. Programa de Engenharia Nuclear; E-mails: alupo@if.ufrj.br; agoncalves@con.ufrj.br; aquilino@lmp.ufrj.br; fernando@con.ufrj.br
2007-07-01
The main objective of this work is to develop a methodology for comparing several methods of solving the point kinetics equations. The evaluated methods are: the finite differences method, the stiffness confinement method, the improved stiffness confinement method and the piecewise constant approximations method. These methods were implemented and compared through a systematic analysis consisting basically of determining which method consumes the least computational time with the highest precision. A relative performance factor, whose function is to combine both criteria, was calculated. Through the analysis of this performance factor it is possible to choose the best method for the solution of the point kinetics equations. (author)
Continuous Context-Aware Device Comfort Evaluation Method
DEFF Research Database (Denmark)
Guo, Jingjing; Jensen, Christian D.; Ma, Jianfeng
2015-01-01
Mobile devices have become more powerful and are increasingly integrated in the everyday life of people; from playing games, taking pictures and interacting with social media to replacing credit cards in payment solutions. The security of a mobile device is therefore increasingly linked to its...... context, such as its location, surroundings (e.g. objects and people in the immediate environment) and so on, because some actions may only be appropriate in some situations; this is not captured by traditional security models. In this paper, we examine the notion of Device Comfort and propose a way...... to calculate the sensitivity of a specific action to the context. We present two different methods for a mobile device to dynamically evaluate its security status when an action is requested, either by the user or by another device. The first method uses the predefined ideal context as a standard to assess...
A qualitative diagnosis method for a continuous process monitor system
International Nuclear Information System (INIS)
Lucas, B.; Evrard, J.M.; Lorre, J.P.
1993-01-01
SEXTANT, an expert system for the analysis of transients, was built initially to study physical transients in nuclear reactors. It combines several knowledge bases concerning measurements, models and qualitative behavior of the plant with a generate-and-test mechanism and a set of numerical models of the physical process. The integration into SEXTANT of an improved diagnosis method using a mixed model, in order to take into account the existence and reliability of only a small number of sensors, knowledge of failures and the possibility of unanticipated failures, is presented. This diagnosis method is based on two complementary qualitative models of the process and a methodology to build these models from a system description. 8 figs., 17 refs
Methods for the continuous production of plastic scintillator materials
Bross, Alan; Pla-Dalmau, Anna; Mellott, Kerry
1999-10-19
Methods for producing plastic scintillating material employing either two major steps (tumble-mix) or a single major step (inline-coloring or inline-doping). Using the two step method, the polymer pellets are mixed with silicone oil, and the mixture is then tumble mixed with the dopants necessary to yield the proper response from the scintillator material. The mixture is then placed in a compounder and compounded in an inert gas atmosphere. The resultant scintillator material is then extruded and pelletized or formed. When only a single step is employed, the polymer pellets and dopants are metered into an inline-coloring extruding system. The mixture is then processed under an inert gas atmosphere, usually argon or nitrogen, to form plastic scintillator material in the form of either scintillator pellets, for subsequent processing, or as material in the direct formation of the final scintillator shape or form.
A simple scintigraphic method for continuous monitoring of gastric emptying
Energy Technology Data Exchange (ETDEWEB)
Lipp, R.W.; Hammer, H.F.; Schnedl, W.; Dobnig, H.; Passath, A.; Leb, G.; Krejs, G.J. (Graz Univ. (Austria). Div. of Nuclear Medicine and Endocrinology)
1993-03-01
A new and simple scintigraphic method for the measurement of gastric emptying was developed and validated. The test meal consists of 200 g potato mash mixed with 0.5 g Dowex 2X8 particles (mesh 20-50) labelled with 37 MBq (1 mCi) technetium-99m. After ingestion of the meal, sequential dynamic 15-s anteroposterior exposures in the supine position are obtained for 90 min. A second recording sequence of 20 min is added after a 30-min interval. The results can be displayed as immediate cine-replay, as time-activity diagrams and/or as activity retention values. Complicated mathematical fittings are not necessary. The method lends itself equally to the testing of in- and outpatients. (orig.).
Methods of mathematical modelling: continuous systems and differential equations
Witelski, Thomas
2015-01-01
This book presents mathematical modelling and the integrated process of formulating sets of equations to describe real-world problems. It describes methods for obtaining solutions of challenging differential equations stemming from problems in areas such as chemical reactions, population dynamics, mechanical systems, and fluid mechanics. Chapters 1 to 4 cover essential topics in ordinary differential equations, transport equations and the calculus of variations that are important for formulating models. Chapters 5 to 11 then develop more advanced techniques including similarity solutions, matched asymptotic expansions, multiple scale analysis, long-wave models, and fast/slow dynamical systems. Methods of Mathematical Modelling will be useful for advanced undergraduate or beginning graduate students in applied mathematics, engineering and other applied sciences.
Rigge, Matthew B.; Gass, Leila; Homer, Collin G.; Xian, George Z.
2017-10-26
The National Land Cover Database (NLCD) provides thematic land cover and land cover change data at 30-meter spatial resolution for the United States. Although the NLCD is considered to be the leading thematic land cover/land use product and overall classification accuracy across the NLCD is high, performance and consistency in the vast shrub and grasslands of the Western United States is lower than desired. To address these issues and fulfill the needs of stakeholders requiring more accurate rangeland data, the USGS has developed a method to quantify these areas in terms of the continuous cover of several cover components. These components include the cover of shrub, sagebrush (Artemisia spp), big sagebrush (Artemisia tridentata spp.), herbaceous, annual herbaceous, litter, and bare ground, and shrub and sagebrush height. To produce maps of component cover, we collected field data that were then associated with spectral values in WorldView-2 and Landsat imagery using regression tree models. The current report outlines the procedures and results of converting these continuous cover components to three thematic NLCD classes: barren, shrubland, and grassland. To accomplish this, we developed a series of indices and conditional models using continuous cover of shrub, bare ground, herbaceous, and litter as inputs. The continuous cover data are currently available for two large regions in the Western United States. Accuracy of the “cross-walked” product was assessed relative to that of NLCD 2011 at independent validation points (n=787) across these two regions. Overall thematic accuracy of the “cross-walked” product was 0.70, compared to 0.63 for NLCD 2011. The kappa value was considerably higher for the “cross-walked” product at 0.41 compared to 0.28 for NLCD 2011. Accuracy was also evaluated relative to the values of training points (n=75,000) used in the development of the continuous cover components. Again, the “cross-walked” product outperformed NLCD
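The conditional models described above can be pictured with a toy rule set; the thresholds and class logic below are illustrative assumptions only, not the USGS rules:

```python
def crosswalk_class(shrub, bare, herb, litter):
    """Hypothetical conditional model mapping continuous cover components
    (percent, 0-100) to the three thematic classes discussed in the report.
    The thresholds are made-up illustrations of the approach, not the
    values used to produce the "cross-walked" product."""
    vegetated = shrub + herb + litter
    if bare >= 85 and vegetated < 10:   # nearly unvegetated ground
        return "barren"
    if shrub >= herb:                   # woody cover dominates
        return "shrubland"
    return "grassland"
```

Applied per pixel over the continuous cover rasters, rules of this shape produce a thematic map that can then be scored against validation points, as done in the report.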
Zhao, Yu; Shi, Chen-Xiao; Kwon, Ki-Chul; Piao, Yan-Ling; Piao, Mei-Lan; Kim, Nam
2018-03-01
We propose a fast calculation method for a computer-generated hologram (CGH) of real objects that uses a point cloud gridding method. The depth information of the scene is acquired using a depth camera and the point cloud model is reconstructed virtually. Because each point of the point cloud is distributed precisely to the exact coordinates of each layer, each point of the point cloud can be classified into grids according to its depth. A diffraction calculation is performed on the grids using a fast Fourier transform (FFT) to obtain a CGH. The computational complexity is reduced dramatically in comparison with conventional methods. The feasibility of the proposed method was confirmed by numerical and optical experiments.
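A minimal sketch of the point cloud gridding idea, assuming a simple angular-spectrum propagation per depth layer (grid size, pixel pitch, wavelength and propagation distance below are placeholder values, not the authors' configuration):

```python
import numpy as np

# Illustrative sketch: bin the point cloud into depth layers, rasterize
# each layer onto a grid, propagate every layer to the hologram plane
# with one FFT-based angular-spectrum step, and sum the fields.
def pcg_hologram(points, n=128, pitch=8e-6, wavelength=532e-9, n_layers=8):
    pts = np.asarray(points, float)                  # columns: x, y, z (m)
    zmin, zmax = pts[:, 2].min(), pts[:, 2].max()
    edges = np.linspace(zmin, zmax + 1e-12, n_layers + 1)
    fx = np.fft.fftfreq(n, d=pitch)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    holo = np.zeros((n, n), complex)
    for k in range(n_layers):
        sel = (pts[:, 2] >= edges[k]) & (pts[:, 2] < edges[k + 1])
        if not sel.any():
            continue
        grid = np.zeros((n, n))                      # one grid per depth
        ix = np.clip(((pts[sel, 0] / pitch) + n // 2).astype(int), 0, n - 1)
        iy = np.clip(((pts[sel, 1] / pitch) + n // 2).astype(int), 0, n - 1)
        grid[iy, ix] = 1.0
        z = 0.5 * (edges[k] + edges[k + 1]) + 0.05   # assumed 5 cm offset
        H = np.exp(2j * np.pi * z / wavelength * np.sqrt(np.maximum(arg, 0)))
        holo += np.fft.ifft2(np.fft.fft2(grid) * H)  # one FFT pass per grid
    return holo
```

The saving claimed in the abstract comes from this structure: a handful of FFTs over whole depth grids replaces a per-point diffraction calculation.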
A multi points ultrasonic detection method for material flow of belt conveyor
Zhang, Li; He, Rongjun
2018-03-01
Because of the large detection error of single-point ultrasonic ranging when coal is unevenly distributed or lumpy, a material flow detection method for belt conveyors was designed based on multi-point ultrasonic counter-ranging technology. The method calculates the approximate cross-sectional area of the material by locating multiple points on the surfaces of the material and the belt, and obtains the material flow from the running speed of the belt conveyor. The test results show that the method has a smaller detection error than single-point ultrasonic ranging under the condition of large, unevenly distributed coal.
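The area-times-speed computation can be sketched as follows, assuming evenly spaced sensors across the belt and a known empty-belt reference distance (names and geometry are illustrative, not from the paper):

```python
# Illustrative sketch: each ultrasonic sensor measures the distance down
# to the material surface; subtracting it from the distance to the empty
# belt gives a height profile, the trapezoidal rule gives the cross
# section, and flow is the section area times the belt speed.
def material_flow(dist_to_material, dist_to_belt, spacing, belt_speed):
    heights = [max(b - m, 0.0)
               for m, b in zip(dist_to_material, dist_to_belt)]
    area = sum((h0 + h1) / 2 * spacing
               for h0, h1 in zip(heights, heights[1:]))
    return area * belt_speed   # volumetric flow, m^3/s
```

With more sensing points the trapezoidal profile tracks an uneven coal surface far better than a single centre-line measurement, which is the error reduction the abstract reports.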
Detection of Dew-Point by substantial Raman Band Frequency Jumps (A new Method)
DEFF Research Database (Denmark)
Hansen, Susanne Brunsgaard; Berg, Rolf W.; Stenby, Erling Halfdan
Detection of Dew-Point by substantial Raman Band Frequency Jumps (A new Method). See poster at http://www.kemi.dtu.dk/~ajo/rolf/jumps.pdf
A FAST METHOD FOR MEASURING THE SIMILARITY BETWEEN 3D MODEL AND 3D POINT CLOUD
Directory of Open Access Journals (Sweden)
Z. Zhang
2016-06-01
Full Text Available This paper proposes a fast method for measuring the partial Similarity between a 3D Model and a 3D point Cloud (SimMC). It is crucial to measure SimMC for many point cloud-related applications such as 3D object retrieval and inverse procedural modelling. In our proposed method, the surface area of the model and the Distance from Model to point Cloud (DistMC) are exploited as measurements to calculate SimMC. Here, DistMC is defined as the weighted average of the distances between points sampled from the model and the point cloud. Similarly, the Distance from point Cloud to Model (DistCM) is defined as the average of the distances between points in the point cloud and the model. In order to reduce the huge computational burden brought by the calculation of DistCM in some traditional methods, we define SimMC as the ratio of the weighted surface area of the model to DistMC. Compared to traditional SimMC measuring methods that are only able to measure global similarity, our method is capable of measuring partial similarity by employing a distance-weighted strategy. Moreover, our method is faster than other partial similarity assessment methods. We demonstrate the superiority of our method both on synthetic data and laser scanning data.
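A rough sketch of the DistMC-based ratio, with uniform weights standing in for the paper's distance-weighted strategy (an assumption, noted in the comments):

```python
import numpy as np

# Illustrative SimMC computation: sample points from the model surface,
# take each sample's nearest-neighbour distance to the cloud, and divide
# a surface-area term by the mean distance (DistMC). Uniform weights are
# an assumption here; the paper uses a distance-weighted strategy.
def sim_mc(model_samples, cloud, surface_area):
    m = np.asarray(model_samples, float)
    c = np.asarray(cloud, float)
    # pairwise Euclidean distances, then nearest cloud point per sample
    d = np.sqrt(((m[:, None, :] - c[None, :, :]) ** 2).sum(-1)).min(axis=1)
    dist_mc = d.mean() + 1e-12        # DistMC; epsilon guards exact matches
    return surface_area / dist_mc     # larger value = more similar
```

A cloud that coincides with the model samples scores far higher than one displaced away from the surface, which is the behaviour the ratio is built to capture.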
A connection between the asymptotic iteration method and the continued fractions formalism
International Nuclear Information System (INIS)
Matamala, A.R.; Gutierrez, F.A.; Diaz-Valdes, J.
2007-01-01
In this work, we show that there is a connection between the asymptotic iteration method (a method to solve second order linear ordinary differential equations) and the older continued-fractions method for solving differential equations.
Bearg, D W
1998-09-01
This article summarizes an approach for improving the indoor air quality (IAQ) in a building by providing feedback on the performance of the ventilation system. The delivery of adequate quantities of ventilation to all building occupants is necessary for the achievement of good IAQ. Feedback on the performance includes information on the adequacy of ventilation provided, the effectiveness of the distribution of this air, the adequacy of the duration of operation of the ventilation system, and the identification of leakage into the return plenum, of either outdoor or supply air. Keeping track of ventilation system performance is important not only in terms of maintaining good IAQ, but also in making sure that this system continues to perform as intended after changes in building use. Information on the performance of the ventilation system is obtained by means of an automated sampling system that draws air from multiple locations and delivers it to both a carbon dioxide monitor and a dew point sensor. The use of single shared sensors facilitates calibration checks as well as helps to guarantee data integrity. This approach to monitoring a building's ventilation system offers the possibility of achieving sustainable performance of this important aspect of good IAQ.
Nonparametric Change Point Diagnosis Method of Concrete Dam Crack Behavior Abnormality
Directory of Open Access Journals (Sweden)
Zhanchao Li
2013-01-01
Full Text Available The diagnosis of concrete dam crack behavior abnormality has always been a hot spot and a difficulty in the safety monitoring of hydraulic structures. Based on the manifestation of crack behavior abnormality in parametric and nonparametric statistical models, the internal relation between crack behavior abnormality and statistical change point theory is analyzed in depth, from the model structure instability of the parametric model and the change of the sequence distribution law of the nonparametric model. On this basis, through the reduction of the change point problem, the establishment of a basic nonparametric change point model, and asymptotic analysis of the test method for the basic change point problem, a nonparametric change point diagnosis method for concrete dam crack behavior abnormality is created, allowing for the situation in which, in practice, crack behavior may have several abnormality points. The method is applied to an actual project, demonstrating its effectiveness and scientific reasonableness. It has a complete theoretical basis and strong practicality, with broad application prospects in actual projects.
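As a concrete stand-in for the nonparametric machinery described above, the classical Pettitt rank statistic locates a single change point in a monitoring sequence without distributional assumptions; this illustrates the general idea only, not the paper's test:

```python
# Illustrative Pettitt-type change point detection: U_t sums the signs of
# pairwise differences across every candidate split, and the split with
# the largest |U_t| is the most likely change point. The recursion
# U_t = U_{t-1} + sum_j sgn(x_t - x_j) keeps the scan at O(n^2).
def pettitt_change_point(x):
    n = len(x)
    sgn = lambda v: (v > 0) - (v < 0)
    u, best_t, best_u = 0, 1, 0
    for t in range(1, n):
        u += sum(sgn(x[t - 1] - xj) for xj in x)
        if abs(u) > abs(best_u):
            best_t, best_u = t, u
    return best_t, best_u    # split index and K statistic
```

Applied to a crack-opening series, a shift in level after some epoch produces a sharp extremum of the statistic at that epoch.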
LSHSIM: A Locality Sensitive Hashing based method for multiple-point geostatistics
Moura, Pedro; Laber, Eduardo; Lopes, Hélio; Mesejo, Daniel; Pavanelli, Lucas; Jardim, João; Thiesen, Francisco; Pujol, Gabriel
2017-10-01
Reservoir modeling is a very important task that permits the representation of a geological region of interest, so as to generate a considerable number of possible scenarios. Since its inception, many methodologies have been proposed and, in the last two decades, multiple-point geostatistics (MPS) has been the dominant one. This methodology is strongly based on the concept of a training image (TI) and the use of its characteristics, which are called patterns. In this paper, we propose a new MPS method that combines the application of a technique called Locality Sensitive Hashing (LSH), which accelerates the search for patterns similar to a target one, with a Run-Length Encoding (RLE) compression technique that speeds up the calculation of the Hamming similarity. Experiments with both categorical and continuous images show that LSHSIM is computationally efficient and produces good-quality realizations. In particular, for categorical data, the results suggest that LSHSIM is faster than MS-CCSIM, one of the state-of-the-art methods.
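The RLE-accelerated Hamming comparison can be sketched as below; this is an illustrative reimplementation of the idea, not the authors' code:

```python
# Illustrative sketch: store categorical patterns as (value, run_length)
# lists and count mismatches run-by-run instead of cell-by-cell, so long
# constant runs are compared in one step.
def rle(seq):
    runs, prev, count = [], seq[0], 0
    for v in seq:
        if v == prev:
            count += 1
        else:
            runs.append((prev, count))
            prev, count = v, 1
    runs.append((prev, count))
    return runs

def hamming_from_rle(a, b):
    ra, rb = list(a), list(b)
    i = j = mismatches = 0
    while i < len(ra) and j < len(rb):
        (va, la), (vb, lb) = ra[i], rb[j]
        step = min(la, lb)            # consume the overlapping run chunk
        if va != vb:
            mismatches += step
        ra[i], rb[j] = (va, la - step), (vb, lb - step)
        if ra[i][1] == 0:
            i += 1
        if rb[j][1] == 0:
            j += 1
    return mismatches
```

On patterns dominated by a few facies values the run lists are short, so the loop touches far fewer entries than the raw pattern length.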
Fernández-Peña, Rosario; Fuentes-Pumarola, Concepció; Malagón-Aguilera, M Carme; Bonmatí-Tomàs, Anna; Bosch-Farré, Cristina; Ballester-Ferrando, David
2016-09-01
Adapting university programmes to European Higher Education Area criteria has required substantial changes in curricula and teaching methodologies. Reflective learning (RL) has attracted growing interest and occupies an important place in the scientific literature on theoretical and methodological aspects of university instruction. However, fewer studies have focused on evaluating the RL methodology from the point of view of nursing students. To assess nursing students' perceptions of the usefulness and challenges of RL methodology. Mixed method design, using a cross-sectional questionnaire and focus group discussion. The research was conducted via a self-reported reflective learning questionnaire complemented by a focus group discussion. Students provided a positive overall evaluation of RL, highlighting the method's capacity to help them better understand themselves, engage in self-reflection about the learning process, optimize their strengths and discover additional training needs, along with searching for continuous improvement. Nonetheless, RL does not help them as much to plan their learning or identify areas of weakness or needed improvement in knowledge, skills and attitudes. Among the difficulties or challenges, students reported low motivation and lack of familiarity with this type of learning, along with concerns about the privacy of their reflective journals and about the grading criteria. In general, students evaluated RL positively. The results suggest areas of needed improvement related to unfamiliarity with the methodology, ethical aspects of developing a reflective journal and the need for clear evaluation criteria. Copyright © 2016 Elsevier Ltd. All rights reserved.
Entropy Based Test Point Evaluation and Selection Method for Analog Circuit Fault Diagnosis
Directory of Open Access Journals (Sweden)
Yuan Gao
2014-01-01
Full Text Available By simplifying the tolerance problem and treating faulty voltages on different test points as independent variables, the integer-coded table technique has been proposed to simplify the test point selection process. However, simplifying the tolerance problem may induce a wrong solution, while the independence assumption will result in an overly conservative result. To address these problems, the tolerance problem is thoroughly considered in this paper, and the dependency relationship between different test points is considered at the same time. A heuristic graph search method is proposed to facilitate the test point selection process. First, the information-theoretic concept of entropy is used to evaluate the optimality of a test point. The entropy is calculated using the ambiguity sets and the faulty voltage distribution determined by component tolerance. Second, the selected optimal test point is used to expand the current graph node by using the dependence relationship between the test point and the graph node. Simulated results indicate that the proposed method finds the optimal set of test points more accurately than other methods; therefore, it is a good solution to minimize the size of the test point set. To simplify and clarify the proposed method, only catastrophic and some specific parametric faults are discussed in this paper.
International Nuclear Information System (INIS)
Khitrov, L.M.; Rumiantsev, O.V.
1991-01-01
In Chernobyl, along with the usual methods of environmental radiation control, methods and equipment for direct continuous multichannel measurement were used. The necessary equipment was installed both at permanent observation stations (river Pripyat, Chernobyl; river Dnieper, Kiev) and on mobile units (helicopters, scientific river-boats, automobiles). Together with continuous control of the radioactive situation and its estimation in time and space, this equipment made it possible to carry out the following: - determination of the time-spatial structure of radioactive pollution at stationary points and over space (mapping); - selection of representative samples for subsequent radionuclide analysis; - direct data input into the computer, data storage and database creation. The results and conclusions drawn are important not only for the situation at the Chernobyl atomic station - they may and should be used for continuous radioactive monitoring of the environment, though the method and its realization remain to be modernized and unified. (author)
Ramdas, Wishal D.; Rizopoulos, Dimitris; Wolfs, Roger C. W.; Hofman, Albert; de Jong, Paulus T. V. M.; Vingerling, Johannes R.; Jansonius, Nomdo M.
2011-01-01
Purpose: Diseases characterized by a continuous trait can be defined by setting a cut-off point for the disease measure in question, accepting some misclassification. The 97.5th percentile is commonly used as a cut-off point. However, it is unclear whether this percentile is the optimal cut-off
Robust Trajectory Design in Highly Perturbed Environments Leveraging Continuation Methods, Phase I
National Aeronautics and Space Administration — Research is proposed to investigate continuation methods to improve the robustness of trajectory design algorithms for spacecraft in highly perturbed dynamical...
Accurate Lithium-ion battery parameter estimation with continuous-time system identification methods
International Nuclear Information System (INIS)
Xia, Bing; Zhao, Xin; Callafon, Raymond de; Garnier, Hugues; Nguyen, Truong; Mi, Chris
2016-01-01
Highlights: • Continuous-time system identification is applied in Lithium-ion battery modeling. • Continuous-time and discrete-time identification methods are compared in detail. • The instrumental variable method is employed to further improve the estimation. • Simulations and experiments validate the advantages of continuous-time methods. - Abstract: The modeling of Lithium-ion batteries usually utilizes discrete-time system identification methods to estimate parameters of discrete models. However, in real applications, there is a fundamental limitation of the discrete-time methods in dealing with sensitivity when the system is stiff and the storage resolutions are limited. To overcome this problem, this paper adopts direct continuous-time system identification methods to estimate the parameters of equivalent circuit models for Lithium-ion batteries. Compared with discrete-time system identification methods, the continuous-time system identification methods provide more accurate estimates of both fast and slow dynamics in battery systems and are less sensitive to disturbances. A case of a 2nd-order equivalent circuit model is studied which shows that the continuous-time estimates are more robust to high sampling rates, measurement noises and rounding errors. In addition, the estimation by the conventional continuous-time least squares method is further improved in the case of noisy output measurement by introducing the instrumental variable method. Simulation and experiment results validate the analysis and demonstrate the advantages of the continuous-time system identification methods in battery applications.
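A minimal continuous-time least-squares sketch for a single first-order RC branch (central differences stand in for the paper's filtering machinery, and the instrumental variable refinement is omitted; all names are illustrative):

```python
import numpy as np

# Illustrative sketch: for a parallel RC branch, C dv/dt = i - v/R, i.e.
#   vdot = a*v + b*i   with  a = -1/(R*C),  b = 1/C.
# Fit (a, b) directly in continuous time by regressing an approximated
# derivative on the sampled voltage and current signals.
def fit_rc(t, v, i):
    dt = t[1] - t[0]
    vdot = np.gradient(v, dt)             # central-difference derivative
    Phi = np.column_stack([v, i])
    a, b = np.linalg.lstsq(Phi, vdot, rcond=None)[0]
    R, C = -b / a, 1.0 / b                # invert the parameterization
    return R, C
```

On clean data this recovers the physical parameters directly, without the discretize-then-convert step a discrete-time estimator would need; noise handling is exactly where the paper's instrumental variable method comes in.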
Multi-scale calculation based on dual domain material point method combined with molecular dynamics
Energy Technology Data Exchange (ETDEWEB)
Dhakal, Tilak Raj [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2017-02-27
This dissertation combines the dual domain material point method (DDMP) with molecular dynamics (MD) in an attempt to create a multi-scale numerical method to simulate materials undergoing large deformations with high strain rates. In these types of problems, the material is often in a thermodynamically non-equilibrium state, and conventional constitutive relations are often not available. In this method, the closure quantities, such as stress, at each material point are calculated from a MD simulation of a group of atoms surrounding the material point. Rather than restricting the multi-scale simulation to a small spatial region, such as phase interfaces or crack tips, this multi-scale method can be used to consider non-equilibrium thermodynamic effects in a macroscopic domain. This method takes advantage of the fact that the material points only communicate with mesh nodes, not among themselves; therefore MD simulations for material points can be performed independently in parallel. First, using a one-dimensional shock problem as an example, the numerical properties of the original material point method (MPM), the generalized interpolation material point (GIMP) method, the convected particle domain interpolation (CPDI) method, and the DDMP method are investigated. Among these methods, only the DDMP method converges as the number of particles increases, but the large number of particles needed for convergence makes the method very expensive, especially in our multi-scale method where we calculate stress in each material point using MD simulation. To improve DDMP, the sub-point method is introduced in this dissertation, which provides high quality numerical solutions with a very small number of particles. The multi-scale method based on DDMP with sub-points is successfully implemented for a one dimensional problem of shock wave propagation in a cerium crystal. The MD simulation to calculate stress in each material point is performed in GPU using CUDA to accelerate the
Method of nuclear reactor control using a variable temperature load dependent set point
International Nuclear Information System (INIS)
Kelly, J.J.; Rambo, G.E.
1982-01-01
A method and apparatus for controlling a nuclear reactor in response to a variable average reactor coolant temperature set point is disclosed. The set point is dependent upon percent of full power load demand. A manually-actuated ''droop mode'' of control is provided whereby the reactor coolant temperature is allowed to drop below the set point temperature a predetermined amount wherein the control is switched from reactor control rods exclusively to feedwater flow
AN IMPROVEMENT ON GEOMETRY-BASED METHODS FOR GENERATION OF NETWORK PATHS FROM POINTS
Directory of Open Access Journals (Sweden)
Z. Akbari
2014-10-01
Full Text Available Determining network paths is important for different purposes, such as the determination of road traffic, the average speed of vehicles, and other network analyses. One of the required inputs is information about the network path. Nevertheless, the data collected by positioning systems often consist of discrete points. Conversion of these points to the network path has become a challenge which different researchers have addressed, presenting many ways of solving it. This study aims at investigating geometry-based methods to estimate network paths from the obtained points and at improving an existing point-to-curve method. To this end, some geometry-based methods have been studied and an improved method has been proposed by applying conditions to the best of them after describing and illustrating their weaknesses.
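The baseline point-to-curve operation that the study improves can be sketched as snapping each observed point to its nearest location on a candidate polyline (a generic textbook version, not the improved method):

```python
# Illustrative point-to-curve matching: project a point onto each segment
# of a road polyline and keep the closest projection.
def project_to_segment(p, a, b):
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    L2 = dx * dx + dy * dy
    # clamp the projection parameter so the result stays on the segment
    t = 0.0 if L2 == 0 else max(0.0, min(1.0,
        ((px - ax) * dx + (py - ay) * dy) / L2))
    return (ax + t * dx, ay + t * dy)

def snap_to_polyline(p, polyline):
    best, best_d2 = None, float("inf")
    for a, b in zip(polyline, polyline[1:]):
        q = project_to_segment(p, a, b)
        d2 = (q[0] - p[0]) ** 2 + (q[1] - p[1]) ** 2
        if d2 < best_d2:
            best, best_d2 = q, d2
    return best
```

The known weakness of this basic scheme, snapping to the geometrically nearest edge even when it is topologically wrong, is what the conditions added in the improved method are meant to address.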
Directory of Open Access Journals (Sweden)
Zhenxiang Jiang
2016-01-01
Full Text Available Traditional methods of diagnosing dam service status are usually suited to a single measuring point. These methods reflect the local status of a dam without merging multisource data effectively, and are therefore not suitable for diagnosing the overall service status. This study proposes a new method involving multiple points to diagnose dam service status based on a joint distribution function. The function, incorporating the monitoring data of multiple points, can be established with a t-copula function. Therefore the possibility, which is an important fused value for different measuring-point combinations, can be calculated, and the corresponding diagnostic criterion is established with typical small probability theory. An engineering case study indicates that the fusion diagnosis method can be conducted in real time and abnormal points can be detected, thereby providing a new early warning method for engineering safety.
International Nuclear Information System (INIS)
Matijevic, M.; Grgic, D.; Jecmenica, R.
2016-01-01
This paper presents a comparison of simplified Spent Fuel Pool (SFP) dose rates for the Krsko Power Plant using different computational shielding methodologies. The analysis was performed to estimate limiting gamma dose rates on wall-mounted level instrumentation in case of a significant loss of cooling water. The SFP was represented with simple homogenized cylinders (point kernel and Monte Carlo (MC)) or cuboids (MC) using uranium, iron, water, and dry air as bulk region materials. The pool is divided into an old and a new section, where the old one has three additional subsections representing fuel assemblies (FAs) with different burnup/cooling times (60 days, 1 year and 5 years). The new section represents the FAs with a cooling time of 10 years. The time-dependent fuel assembly isotopic composition was calculated using the ORIGEN2 code applied to the depletion of one of the fuel assemblies present in the pool (AC-29). The source used in the Microshield calculation is based on the imported isotopic activities. The time-dependent photon spectra with total source intensity from Microshield multigroup point-kernel calculations were then prepared for two hybrid deterministic-stochastic sequences. One is based on the SCALE/MAVRIC (Monaco and Denovo) methodology and the other uses the Monte Carlo code MCNP6.1.1b and the ADVANTG3.0.1 code. Even though this model is a fairly simple one, the layers of shielding materials are thick enough to pose a significant shielding problem for the MC method without the use of an effective variance reduction (VR) technique. For that purpose the ADVANTG code was used to generate VR parameters (SB cards in SDEF and the WWINP file) for the MCNP fixed-source calculation using continuous-energy transport. ADVANTG employs the deterministic forward-adjoint transport solver Denovo, which implements the CADIS/FW-CADIS methodology. Denovo implements a structured, Cartesian-grid SN solver based on the Koch-Baker-Alcouffe parallel transport sweep algorithm across x-y domain blocks. This was first
Datum Feature Extraction and Deformation Analysis Method Based on Normal Vector of Point Cloud
Sun, W.; Wang, J.; Jin, F.; Liang, Z.; Yang, Y.
2018-04-01
In order to address the lack of an applicable analysis method when three-dimensional laser scanning is applied to deformation monitoring, an efficient method for extracting datum features and analysing deformation based on the normal vectors of a point cloud is proposed. First, a kd-tree is used to establish the topological relation. Datum points are detected by tracking the point-cloud normal vectors, which are determined from the normal vectors of local planes. Then, cubic B-spline curve fitting is performed on the datum points. Finally, the datum elevation and the inclination angle of each radial point are calculated from the fitted curve, and the deformation information is analysed. The proposed approach was verified on a real large-scale tank data set captured with a terrestrial laser scanner in a chemical plant. The results show that the method can obtain complete information about the monitored object quickly and comprehensively, and accurately reflects the deformation of the datum features.
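The normal vectors that drive the datum-point detection are commonly estimated by fitting a local plane to each point's neighbourhood. A minimal sketch, assuming a PCA-style plane fit with brute-force neighbour search rather than the authors' exact procedure:

```python
import numpy as np

def estimate_normals(points, k=8):
    """Estimate a unit normal at every point from the best-fitting local
    plane: the eigenvector of the neighbourhood covariance with the
    smallest eigenvalue (brute-force k-NN, fine for small clouds)."""
    pts = np.asarray(points, dtype=float)
    normals = np.empty_like(pts)
    d2 = np.sum((pts[:, None, :] - pts[None, :, :]) ** 2, axis=-1)
    for i in range(len(pts)):
        nbrs = pts[np.argsort(d2[i])[:k]]
        cov = np.cov((nbrs - nbrs.mean(axis=0)).T)
        eigvals, eigvecs = np.linalg.eigh(cov)
        normals[i] = eigvecs[:, 0]   # direction of smallest eigenvalue
    return normals

# Noise-free points on the plane z = 0: every normal should be +/-(0, 0, 1).
xy = np.stack(np.meshgrid(np.arange(5.0), np.arange(5.0)), -1).reshape(-1, 2)
cloud = np.hstack([xy, np.zeros((len(xy), 1))])
normals = estimate_normals(cloud)
```

For large scans, the brute-force distance matrix would be replaced by the kd-tree neighbour search the paper describes.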
Improved DEA Cross Efficiency Evaluation Method Based on Ideal and Anti-Ideal Points
Directory of Open Access Journals (Sweden)
Qiang Hou
2018-01-01
Full Text Available A new model is introduced into the process of evaluating the efficiency of decision making units (DMUs) through the data envelopment analysis (DEA) method. Two virtual DMUs, called the ideal point DMU and the anti-ideal point DMU, are combined to form a comprehensive model based on the DEA method. The ideal point DMU adopts a self-assessment system according to the efficiency concept; the anti-ideal point DMU adopts an other-assessment system according to the fairness concept. The two distinctive ideal-point models are introduced into the DEA method and combined using a variance ratio. The new model yields a reasonable result. Numerical examples are provided to illustrate the constructed model and to verify its rationality through comparison with the traditional DEA model.
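For context, the self-assessment side of such models is the classical input-oriented CCR multiplier LP. A sketch using `scipy.optimize.linprog` on an invented one-input, one-output example; the paper's ideal/anti-ideal extensions are not reproduced here.

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, o):
    """Input-oriented CCR multiplier model for DMU `o`:
    max u.y_o  s.t.  v.x_o = 1,  u.y_j - v.x_j <= 0 for all j,  u, v >= 0.
    Decision vector is [v (inputs), u (outputs)]."""
    n, m = X.shape          # n DMUs, m inputs
    _, s = Y.shape          # s outputs
    c = np.concatenate([np.zeros(m), -Y[o]])   # linprog minimises, so -u.y_o
    A_ub = np.hstack([-X, Y])                  # u.y_j - v.x_j <= 0
    b_ub = np.zeros(n)
    A_eq = np.concatenate([X[o], np.zeros(s)])[None, :]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * (m + s), method="highs")
    return -res.fun

# Three DMUs, one input, one output; DMUs 0 and 2 have the best ratio.
X = np.array([[2.0], [4.0], [4.0]])
Y = np.array([[2.0], [2.0], [4.0]])
effs = [ccr_efficiency(X, Y, o) for o in range(3)]
```

The ideal point DMU of the paper would be appended as an extra (virtual) row of `X` and `Y` before running the same kind of LP.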
Energy Technology Data Exchange (ETDEWEB)
Yoo, Hyun Suk; Lee, Jeong Min; Yoon, Jeong Hee; Lee, Dong Ho; Chang, Won; Han, Joon Koo [Seoul National University Hospital, Seoul (Korea, Republic of)
2016-09-15
To prospectively compare technical success rate and reliable measurements of virtual touch quantification (VTQ) elastography and elastography point quantification (ElastPQ), and to correlate liver stiffness (LS) measurements obtained by the two elastography techniques. Our study included 85 patients, 80 of whom were previously diagnosed with chronic liver disease. The technical success rate and reliable measurements of the two kinds of point shear wave elastography (pSWE) techniques were compared by χ² analysis. LS values measured using the two techniques were compared and correlated via Wilcoxon signed-rank test, Spearman correlation coefficient, and 95% Bland-Altman limit of agreement. The intraobserver reproducibility of ElastPQ was determined by 95% Bland-Altman limit of agreement and intraclass correlation coefficient (ICC). The two pSWE techniques showed similar technical success rate (98.8% for VTQ vs. 95.3% for ElastPQ, p = 0.823) and reliable LS measurements (95.3% for VTQ vs. 90.6% for ElastPQ, p = 0.509). The mean LS measurements obtained by VTQ (1.71 ± 0.47 m/s) and ElastPQ (1.66 ± 0.41 m/s) were not significantly different (p = 0.209). The LS measurements obtained by the two techniques showed strong correlation (r = 0.820); in addition, the 95% limit of agreement of the two methods was 27.5% of the mean. Finally, the ICC of repeat ElastPQ measurements was 0.991. Virtual touch quantification and ElastPQ showed similar technical success rate and reliable measurements, with strongly correlated LS measurements. However, the two methods are not interchangeable due to the large limit of agreement.
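The 95% Bland-Altman limits of agreement used in the study reduce to a short computation: bias ± 1.96 × SD of the paired differences. The stiffness values below are invented for illustration, not the study data.

```python
import numpy as np

def bland_altman(a, b):
    """95% Bland-Altman limits of agreement between two paired methods:
    mean difference (bias) +/- 1.96 x SD of the differences."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    diff = a - b
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Illustrative paired stiffness readings (m/s) from two pSWE techniques.
vtq     = np.array([1.2, 1.5, 1.7, 2.0, 1.4, 1.6, 1.8, 1.3])
elastpq = np.array([1.1, 1.6, 1.6, 1.9, 1.5, 1.5, 1.7, 1.3])
bias, (loa_low, loa_high) = bland_altman(vtq, elastpq)
```

A wide interval between `loa_low` and `loa_high` relative to the mean is exactly the "large limit of agreement" that led the authors to call the two methods non-interchangeable.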
Synthesis of Numerical Methods for Modeling Wave Energy Converter-Point Absorbers: Preprint
Energy Technology Data Exchange (ETDEWEB)
Li, Y.; Yu, Y. H.
2012-05-01
During the past few decades, wave energy has received significant attention among all forms of ocean energy. Industry has proposed hundreds of prototypes, such as oscillating water columns, point absorbers, overtopping systems, and bottom-hinged systems. In particular, many researchers have focused on modeling the floating-point absorber as the technology to extract wave energy. Several modeling methods have been used, such as the analytical method, the boundary-integral equation method, the Navier-Stokes equations method, and the empirical method. However, no standardized method has been adopted. To assist the development of wave energy conversion technologies, this report reviews the methods for modeling the floating-point absorber.
Change Analysis in Structural Laser Scanning Point Clouds: The Baseline Method.
Shen, Yueqian; Lindenbergh, Roderik; Wang, Jinhu
2016-12-24
A method is introduced for detecting changes from point clouds that avoids registration. For many applications, changes are detected between two scans of the same scene obtained at different times. Traditionally, these scans are aligned to a common coordinate system, a registration step with the disadvantage of introducing additional errors; in addition, registration requires stable targets or features. To avoid these issues, we propose a change detection method based on so-called baselines. Baselines connect feature points within one scan. To analyze changes, baselines connecting corresponding points in two scans are compared. Either targets or virtual points corresponding to some reconstructable feature in the scene are used as feature points. The new method is demonstrated on two scans sampling a masonry laboratory building before and after seismic testing, which resulted in damage on the order of several centimeters. The centres of the bricks of the laboratory building are automatically extracted to serve as virtual points. Baselines connecting virtual points and/or target points are extracted and compared with respect to a suitable structural coordinate system. Changes detected from the baseline analysis are compared to a traditional cloud-to-cloud change analysis, demonstrating the potential of the new method for structural analysis.
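The core of the baseline idea can be sketched in a few lines: because inter-point distances are invariant to rigid motions, baselines computed independently in each epoch can be compared without any registration. The coordinates and the simulated 0.05 m displacement below are illustrative.

```python
import numpy as np
from itertools import combinations

def baselines(points):
    """All pairwise distances ('baselines') between labelled feature points."""
    return {(i, j): float(np.linalg.norm(points[i] - points[j]))
            for i, j in combinations(sorted(points), 2)}

def baseline_changes(before, after):
    """Change in each baseline between two epochs; no registration needed,
    because distances are invariant to a rigid motion of the whole scan."""
    b0, b1 = baselines(before), baselines(after)
    return {k: b1[k] - b0[k] for k in b0 if k in b1}

# Epoch 1: three virtual points (e.g. brick centres), metres.
epoch1 = {"A": np.array([0.0, 0.0, 0.0]),
          "B": np.array([4.0, 0.0, 0.0]),
          "C": np.array([0.0, 3.0, 0.0])}
# Epoch 2: the whole scene translated (a different scanner set-up),
# and point C additionally displaced 0.05 m in y (structural change).
shift = np.array([10.0, -2.0, 1.0])
epoch2 = {k: p + shift for k, p in epoch1.items()}
epoch2["C"] = epoch2["C"] + np.array([0.0, 0.05, 0.0])
changes = baseline_changes(epoch1, epoch2)
```

Baselines that do not involve the displaced point are unchanged despite the translation, while those touching C shift, which is how the method localises the damage.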
Change Analysis in Structural Laser Scanning Point Clouds: The Baseline Method
Directory of Open Access Journals (Sweden)
Yueqian Shen
2016-12-01
Full Text Available A method is introduced for detecting changes from point clouds that avoids registration. For many applications, changes are detected between two scans of the same scene obtained at different times. Traditionally, these scans are aligned to a common coordinate system, a registration step with the disadvantage of introducing additional errors; in addition, registration requires stable targets or features. To avoid these issues, we propose a change detection method based on so-called baselines. Baselines connect feature points within one scan. To analyze changes, baselines connecting corresponding points in two scans are compared. Either targets or virtual points corresponding to some reconstructable feature in the scene are used as feature points. The new method is demonstrated on two scans sampling a masonry laboratory building before and after seismic testing, which resulted in damage on the order of several centimeters. The centres of the bricks of the laboratory building are automatically extracted to serve as virtual points. Baselines connecting virtual points and/or target points are extracted and compared with respect to a suitable structural coordinate system. Changes detected from the baseline analysis are compared to a traditional cloud-to-cloud change analysis, demonstrating the potential of the new method for structural analysis.
A new method to identify the location of the kick point during the golf swing.
Joyce, Christopher; Burnett, Angus; Matthews, Miccal
2013-12-01
No method currently exists to determine the location of the kick point during the golf swing. This study consisted of two phases. In the first phase, the static kick point of 10 drivers (having identical grip and head but fitted with shafts of differing mass and stiffness) was determined by two methods: (1) a visual method used by professional club fitters and (2) an algorithm using 3D locations of markers positioned on the golf club. Using level of agreement statistics, we showed the latter technique was a valid method to determine the location of the static kick point. In phase two, the validated method was used to determine the dynamic kick point during the golf swing. Twelve elite male golfers had three shots analyzed for two drivers fitted with stiff shafts of differing mass (56 g and 78 g). Excellent between-trial reliability was found for dynamic kick point location. Differences were found for dynamic kick point location when compared with static kick point location, as well as between-shaft and within-shaft. These findings have implications for future investigations examining the bending behavior of golf clubs, as well as being useful to examine relationships between properties of the shaft and launch parameters.
A feature point identification method for positron emission particle tracking with multiple tracers
Energy Technology Data Exchange (ETDEWEB)
Wiggins, Cody, E-mail: cwiggin2@vols.utk.edu [University of Tennessee-Knoxville, Department of Physics and Astronomy, 1408 Circle Drive, Knoxville, TN 37996 (United States); Santos, Roque [University of Tennessee-Knoxville, Department of Nuclear Engineering (United States); Escuela Politécnica Nacional, Departamento de Ciencias Nucleares (Ecuador); Ruggles, Arthur [University of Tennessee-Knoxville, Department of Nuclear Engineering (United States)
2017-01-21
A novel detection algorithm for Positron Emission Particle Tracking (PEPT) with multiple tracers based on optical feature point identification (FPI) methods is presented. This new method, the FPI method, is compared to a previous multiple PEPT method via analyses of experimental and simulated data. The FPI method outperforms the older method in cases of large particle numbers and fine time resolution. Simulated data show the FPI method to be capable of identifying 100 particles at 0.5 mm average spatial error. Detection error is seen to vary with the inverse square root of the number of lines of response (LORs) used for detection and increases as particle separation decreases. - Highlights: • A new approach to positron emission particle tracking is presented. • Using optical feature point identification analogs, multiple particle tracking is achieved. • Method is compared to previous multiple particle method. • Accuracy and applicability of method is explored.
Directory of Open Access Journals (Sweden)
Hongwei Ying
2014-08-01
Full Text Available An extreme-point-of-scale-space extraction method for a binary multiscale and rotation-invariant local feature descriptor is studied in this paper, in order to obtain a robust and fast local image feature descriptor. Classic local feature description algorithms select neighbourhood information around feature points that are extrema of the image scale space, obtained by constructing an image pyramid with some signal transform. However, building the image pyramid consumes a large amount of computing and storage resources, which hinders practical application development. This paper presents a dual multiscale FAST algorithm that extracts scale-extremum feature points quickly without building an image pyramid. Feature points extracted by the proposed method are multiscale and rotation invariant and are suitable for constructing a local feature descriptor.
On the Convergence of the Iteration Sequence in Primal-Dual Interior-Point Methods
National Research Council Canada - National Science Library
Tapia, Richard A; Zhang, Yin; Ye, Yinyu
1993-01-01
Recently, numerous research efforts, most of them concerned with superlinear convergence of the duality gap sequence to zero in the Kojima-Mizuno-Yoshise primal-dual interior-point method for linear...
Fixed Point Methods in the Stability of the Cauchy Functional Equations
Directory of Open Access Journals (Sweden)
Z. Dehvari
2013-03-01
Full Text Available By using fixed point methods, we prove generalized Hyers-Ulam stability of homomorphisms for Cauchy and Cauchy-Jensen functional equations on product algebras and on triple systems.
A method for computing the stationary points of a function subject to linear equality constraints
International Nuclear Information System (INIS)
Uko, U.L.
1989-09-01
We give a new method for the numerical calculation of stationary points of a function when it is subject to equality constraints. An application to the solution of linear equations is given, together with a numerical example. (author). 5 refs
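For the common special case of a quadratic function, a stationary point under linear equality constraints follows directly from the KKT (Lagrange) system. This is a sketch of that special case, not the author's method, which targets general functions.

```python
import numpy as np

def stationary_point_quadratic(Q, c, A, b):
    """Stationary point of f(x) = 0.5 x^T Q x + c^T x subject to A x = b,
    found by solving the KKT system  [Q A^T; A 0] [x; lam] = [-c; b]."""
    n, m = Q.shape[0], A.shape[0]
    K = np.block([[Q, A.T], [A, np.zeros((m, m))]])
    rhs = np.concatenate([-c, b])
    sol = np.linalg.solve(K, rhs)
    return sol[:n], sol[n:]            # x and the Lagrange multipliers

# Minimise x1^2 + x2^2 subject to x1 + x2 = 1  ->  x = (0.5, 0.5).
Q = 2.0 * np.eye(2)
c = np.zeros(2)
A = np.array([[1.0, 1.0]])
b = np.array([1.0])
x, lam = stationary_point_quadratic(Q, c, A, b)
```

The application to linear equations mentioned in the abstract fits this mould, since solving A x = b can be posed as finding the stationary point of a quadratic subject to those constraints.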
Curvature-Continuous 3D Path-Planning Using QPMI Method
Directory of Open Access Journals (Sweden)
Seong-Ryong Chang
2015-06-01
Full Text Available Aerial robots and aerial vehicles cannot execute sharp vertex movements or rapid velocity changes because of their momentum in the air. A continuous-curvature path ensures that such robots and vehicles can fly with stable and continuous movements. General continuous path-planning methods use spline interpolation, for example B-splines and Bézier curves. However, these methods cannot be directly applied to continuous path planning in 3D space: they use only a subset of the waypoints to determine curvature, so some waypoints are not included in the planned path. This paper proposes a method for constructing a curvature-continuous path in 3D space that includes every waypoint. The movements along each axis, x, y and z, are separated by the parameter u. Waypoint groups are formed, each with its own continuous path derived using quadratic polynomial interpolation. A membership function then combines the individual paths into one continuous path. The continuity of the path is verified, and a curvature-continuous path is produced using the proposed method.
Five-point form of the nodal diffusion method and comparison with finite-difference
International Nuclear Information System (INIS)
Azmy, Y.Y.
1988-01-01
Nodal Methods have been derived, implemented and numerically tested for several problems in physics and engineering. In the field of nuclear engineering, many nodal formalisms have been used for the neutron diffusion equation, all yielding results which were far more computationally efficient than conventional Finite Difference (FD) and Finite Element (FE) methods. However, not much effort has been devoted to theoretically comparing nodal and FD methods in order to explain the very high accuracy of the former. In this summary we outline the derivation of a simple five-point form for the lowest order nodal method and compare it to the traditional five-point, edge-centered FD scheme. The effect of the observed differences on the accuracy of the respective methods is established by considering a simple test problem. It must be emphasized that the nodal five-point scheme derived here is mathematically equivalent to previously derived lowest order nodal methods. 7 refs., 1 tab
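For reference, the traditional five-point, edge-centered FD scheme against which the nodal scheme is compared can be exercised on a manufactured Poisson problem. A minimal Jacobi-iteration sketch; the nodal five-point scheme derived in the summary differs only in its coefficients, not in its stencil shape.

```python
import numpy as np

def solve_poisson_5pt(f, h, iters=5000):
    """Jacobi iteration on the classic five-point Laplacian for
    -Laplacian(u) = f on the unit square with u = 0 on the boundary."""
    u = np.zeros_like(f)
    for _ in range(iters):
        u[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] +
                                u[1:-1, :-2] + u[1:-1, 2:] +
                                h * h * f[1:-1, 1:-1])
    return u

n = 33
h = 1.0 / (n - 1)
x = np.linspace(0.0, 1.0, n)
X, Yg = np.meshgrid(x, x, indexing="ij")
# Manufactured solution u = sin(pi x) sin(pi y)  ->  f = 2 pi^2 u.
exact = np.sin(np.pi * X) * np.sin(np.pi * Yg)
f = 2.0 * np.pi ** 2 * exact
u = solve_poisson_5pt(f, h)
err = float(np.max(np.abs(u - exact)))
```

The observed error is dominated by the O(h²) truncation error of the five-point stencil; the point of the summary is that the nodal coefficients reduce this error dramatically for the same stencil cost.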
Method of analytic continuation by duality in QCD: Beyond QCD sum rules
International Nuclear Information System (INIS)
Kremer, M.; Nasrallah, N.F.; Papadopoulos, N.A.; Schilcher, K.
1986-01-01
We present the method of analytic continuation by duality which allows the approximate continuation of QCD amplitudes to small values of the momentum variables where direct perturbative calculations are not possible. This allows a substantial extension of the domain of applications of hadronic QCD phenomenology. The method is illustrated by a simple example which shows its essential features
a Gross Error Elimination Method for Point Cloud Data Based on Kd-Tree
Kang, Q.; Huang, G.; Yang, S.
2018-04-01
Point cloud data has become one of the most widely used data sources in the field of remote sensing. Key steps in point cloud pre-processing focus on gross error elimination and quality control. Owing to the volume of point cloud data, existing gross error elimination methods consume massive amounts of memory and time. This paper employs a new method based on a kd-tree for construction and a k-nearest-neighbour algorithm for searching, with an appropriately chosen threshold used to judge whether a target point is an outlier. Experimental results show that the proposed algorithm removes gross errors from point cloud data while decreasing memory consumption and improving efficiency.
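The kd-tree outlier test described above can be sketched with `scipy.spatial.cKDTree`: a point is flagged as a gross error when its mean k-nearest-neighbour distance is far above the cloud-wide statistic. The threshold rule (mean + 3·SD) is an assumption for illustration; the paper's exact threshold choice may differ.

```python
import numpy as np
from scipy.spatial import cKDTree

def remove_gross_errors(points, k=4, factor=3.0):
    """Flag a point as a gross error when its mean distance to its k nearest
    neighbours exceeds  mean + factor * std  of that statistic over the cloud."""
    pts = np.asarray(points, float)
    tree = cKDTree(pts)
    dists, _ = tree.query(pts, k=k + 1)   # nearest neighbour is the point itself
    mean_knn = dists[:, 1:].mean(axis=1)
    thresh = mean_knn.mean() + factor * mean_knn.std()
    keep = mean_knn <= thresh
    return pts[keep], keep

rng = np.random.default_rng(1)
cloud = rng.uniform(0.0, 1.0, size=(500, 3))                  # dense cloud
outliers = np.array([[10.0, 10.0, 10.0], [-8.0, 0.0, 5.0]])   # gross errors
clean, keep = remove_gross_errors(np.vstack([cloud, outliers]))
```

The kd-tree makes each neighbour query logarithmic rather than linear in the cloud size, which is the memory/time saving the abstract refers to.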
A GROSS ERROR ELIMINATION METHOD FOR POINT CLOUD DATA BASED ON KD-TREE
Directory of Open Access Journals (Sweden)
Q. Kang
2018-04-01
Full Text Available Point cloud data has become one of the most widely used data sources in the field of remote sensing. Key steps in point cloud pre-processing focus on gross error elimination and quality control. Owing to the volume of point cloud data, existing gross error elimination methods consume massive amounts of memory and time. This paper employs a new method based on a kd-tree for construction and a k-nearest-neighbour algorithm for searching, with an appropriately chosen threshold used to judge whether a target point is an outlier. Experimental results show that the proposed algorithm removes gross errors from point cloud data while decreasing memory consumption and improving efficiency.
The complexity of interior point methods for solving discounted turn-based stochastic games
DEFF Research Database (Denmark)
Hansen, Thomas Dueholm; Ibsen-Jensen, Rasmus
2013-01-01
for general 2TBSGs. This implies that a number of interior point methods can be used to solve 2TBSGs. We consider two such algorithms: the unified interior point method of Kojima, Megiddo, Noma, and Yoshise, and the interior point potential reduction algorithm of Kojima, Megiddo, and Ye. The algorithms run...... states and discount factor γ we get κ=Θ(n(1−γ)2) , −δ=Θ(n√1−γ) , and 1/θ=Θ(n(1−γ)2) in the worst case. The lower bounds for κ, − δ, and 1/θ are all obtained using the same family of deterministic games....
Comparison of methods for accurate end-point detection of potentiometric titrations
International Nuclear Information System (INIS)
Villela, R L A; Borges, P P; Vyskočil, L
2015-01-01
Detection of the end point in potentiometric titrations has wide application in experiments that demand very low measurement uncertainties, mainly for certifying reference materials. Simulations of experimental coulometric titration data and the consequent error analysis of the end-point values were conducted using a programming code. These simulations revealed that the Levenberg-Marquardt method is in general more accurate than the traditional second-derivative technique currently used for end-point detection in potentiometric titrations. The performance of the methods is compared and presented in this paper
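The two end-point detectors compared in the paper can be sketched on synthetic data: a grid-limited estimate from the steepest slope (where the second derivative changes sign) versus a Levenberg-Marquardt fit of an idealised sigmoidal titration curve (`scipy.optimize.curve_fit` uses LM for unbounded problems). The curve parameters and noise level are invented.

```python
import numpy as np
from scipy.optimize import curve_fit

def sigmoid(v, e0, k, v_ep, b):
    """Idealised titration curve: electrode potential vs. titrant volume,
    with the end point at the inflection v_ep."""
    return e0 + k / (1.0 + np.exp(-(v - v_ep) / b))

rng = np.random.default_rng(2)
v = np.linspace(0.0, 20.0, 201)
true_ep = 12.34
e = sigmoid(v, 200.0, 150.0, true_ep, 0.4) + rng.normal(0.0, 0.5, v.size)

# Traditional estimate: the inflection (zero of the second derivative),
# located here as the grid point of steepest slope, so it is grid-limited.
ep_traditional = v[np.argmax(np.gradient(e, v))]

# Levenberg-Marquardt estimate: the fitted v_ep is not tied to the grid.
popt, _ = curve_fit(sigmoid, v, e, p0=[190.0, 140.0, 10.0, 1.0])
ep_lm = popt[2]
```

The fitted end point uses all measurements at once, which is why the LM approach tolerates noise better than differentiating the raw curve twice.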
Comparison of methods for accurate end-point detection of potentiometric titrations
Villela, R. L. A.; Borges, P. P.; Vyskočil, L.
2015-01-01
Detection of the end point in potentiometric titrations has wide application in experiments that demand very low measurement uncertainties, mainly for certifying reference materials. Simulations of experimental coulometric titration data and the consequent error analysis of the end-point values were conducted using a programming code. These simulations revealed that the Levenberg-Marquardt method is in general more accurate than the traditional second-derivative technique currently used for end-point detection in potentiometric titrations. The performance of the methods is compared and presented in this paper.
Ngastiti, P. T. B.; Surarso, Bayu; Sutimin
2018-05-01
The transportation problem in distribution concerns shipping a commodity or goods from supply to demand so as to minimize transportation costs. A fuzzy transportation problem is one in which the transport costs, supply, and demand are fuzzy quantities. In a case study at CV. Bintang Anugerah Elektrik, a company that manufactures gensets and has more than one distributor, we use the zero point and zero suffix methods to find the minimum transportation cost. In implementing both methods, we use robust ranking techniques for the defuzzification process. The study results show that the zero suffix method requires fewer iterations than the zero point method.
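Neither the zero point nor the zero suffix tableau method is reproduced here, but the minimum cost they both target can be checked with a plain LP formulation of the (defuzzified) balanced transportation problem. The costs, supply, and demand below are invented, not the case-study data.

```python
import numpy as np
from scipy.optimize import linprog

def transport_min_cost(cost, supply, demand):
    """Minimum-cost transportation plan via linear programming (the optimum
    that the zero point and zero suffix methods should both reach)."""
    m, n = cost.shape
    c = cost.ravel()
    A_eq, b_eq = [], []
    for i in range(m):                       # each supply fully shipped
        row = np.zeros(m * n); row[i * n:(i + 1) * n] = 1.0
        A_eq.append(row); b_eq.append(supply[i])
    for j in range(n):                       # each demand fully met
        row = np.zeros(m * n); row[j::n] = 1.0
        A_eq.append(row); b_eq.append(demand[j])
    res = linprog(c, A_eq=np.array(A_eq), b_eq=b_eq,
                  bounds=[(0, None)] * (m * n), method="highs")
    return res.fun, res.x.reshape(m, n)

# Illustrative defuzzified (e.g. robust-ranked) unit costs, 2 sources x 3 sinks.
cost = np.array([[4.0, 6.0, 8.0],
                 [5.0, 3.0, 7.0]])
total, plan = transport_min_cost(cost, supply=[30.0, 40.0],
                                 demand=[20.0, 30.0, 20.0])
```

In the fuzzy setting, robust ranking would first map each fuzzy cost, supply, and demand to the crisp numbers fed into a formulation like this one.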
DEFF Research Database (Denmark)
Bey, Niki
2000-01-01
to three essential assessment steps, the method enables rough environmental evaluations and supports in this way material- and process-related decision-making in the early stages of design. In its overall structure, the Oil Point Method is related to Life Cycle Assessment - except for two main differences...... of environmental evaluation and only approximate information about the product and its life cycle. This dissertation addresses this challenge in presenting a method, which is tailored to these requirements of designers - the Oil Point Method (OPM). In providing environmental key information and confining itself...
Data-Driven Method for Wind Turbine Yaw Angle Sensor Zero-Point Shifting Fault Detection
Directory of Open Access Journals (Sweden)
Yan Pei
2018-03-01
Full Text Available Wind turbine yaw control plays an important role in increasing wind turbine production and in protecting the wind turbine. Accurate measurement of the yaw angle is the basis of an effective wind turbine yaw controller, and that accuracy is affected significantly by the problem of zero-point shifting. Hence, it is essential to evaluate the zero-point shifting error on wind turbines on-line in order to improve the reliability of yaw angle measurement in real time. In particular, qualitative evaluation of the zero-point shifting error can help wind farm operators carry out prompt and cost-effective maintenance of yaw angle sensors. With the aim of qualitatively evaluating the zero-point shifting error, the yaw angle sensor zero-point shifting fault is first defined in this paper. A data-driven method is then proposed to detect the zero-point shifting fault based on Supervisory Control and Data Acquisition (SCADA) data. The fault is detected by analyzing the power performance under different yaw angles. The SCADA data are partitioned into bins according to both wind speed and yaw angle in order to evaluate the power performance in depth, and an indicator is proposed for power performance evaluation under each yaw angle. The yaw angle with the largest indicator is taken as the yaw angle measurement error. A zero-point shifting fault triggers an alarm if the error is larger than a predefined threshold. Case studies from several actual wind farms demonstrated the effectiveness of the proposed method in detecting the zero-point shifting fault and in improving wind turbine performance. The results can help wind farm operators make prompt adjustments when a large yaw angle measurement error exists.
Directory of Open Access Journals (Sweden)
Yi-hua Zhong
2013-01-01
Full Text Available Recently, various methods have been developed for solving linear programming problems with fuzzy numbers, such as the simplex method and the dual simplex method. But their computational complexity is exponential, which is not satisfactory for solving large-scale fuzzy linear programming problems, especially in the engineering field. A new method that can solve large-scale fuzzy number linear programming problems is presented in this paper: a revised interior point method. Its idea is similar to that of the interior point method for solving crisp linear programming problems, but its feasible direction and step size are chosen using trapezoidal fuzzy numbers, a linear ranking function, fuzzy vectors, and their operations, and its stopping condition involves the linear ranking function. The correctness and rationality of the method are proved. Moreover, the choice of the initial interior point and some factors influencing the results of the method are discussed and analyzed. Algorithm analysis and an example study show that proper choices of the safety factor parameter, the accuracy parameter, and the initial interior point may reduce the number of iterations, and that these can easily be selected according to actual needs. The method proposed in this paper is thus an alternative method for solving fuzzy number linear programming problems.
The Closest Point Method and Multigrid Solvers for Elliptic Equations on Surfaces
Chen, Yujia
2015-01-01
© 2015 Society for Industrial and Applied Mathematics. Elliptic partial differential equations are important from both application and analysis points of view. In this paper we apply the closest point method to solve elliptic equations on general curved surfaces. Based on the closest point representation of the underlying surface, we formulate an embedding equation for the surface elliptic problem, then discretize it using standard finite differences and interpolation schemes on banded but uniform Cartesian grids. We prove the convergence of the difference scheme for the Poisson's equation on a smooth closed curve. In order to solve the resulting large sparse linear systems, we propose a specific geometric multigrid method in the setting of the closest point method. Convergence studies in both the accuracy of the difference scheme and the speed of the multigrid algorithm show that our approaches are effective.
Efficient 3D Volume Reconstruction from a Point Cloud Using a Phase-Field Method
Directory of Open Access Journals (Sweden)
Darae Jeong
2018-01-01
Full Text Available We propose an explicit hybrid numerical method for the efficient 3D volume reconstruction from unorganized point clouds using a phase-field method. The proposed three-dimensional volume reconstruction algorithm is based on the 3D binary image segmentation method. First, we define a narrow band domain embedding the unorganized point cloud and an edge indicating function. Second, we define a good initial phase-field function which speeds up the computation significantly. Third, we use a recently developed explicit hybrid numerical method for solving the three-dimensional image segmentation model to obtain efficient volume reconstruction from point cloud data. In order to demonstrate the practical applicability of the proposed method, we perform various numerical experiments.
Point kernels and superposition methods for scatter dose calculations in brachytherapy
International Nuclear Information System (INIS)
Carlsson, A.K.
2000-01-01
Point kernels have been generated and applied for calculation of scatter dose distributions around monoenergetic point sources for photon energies ranging from 28 to 662 keV. Three different approaches for dose calculations have been compared: a single-kernel superposition method, a single-kernel superposition method where the point kernels are approximated as isotropic and a novel 'successive-scattering' superposition method for improved modelling of the dose from multiply scattered photons. An extended version of the EGS4 Monte Carlo code was used for generating the kernels and for benchmarking the absorbed dose distributions calculated with the superposition methods. It is shown that dose calculation by superposition at and below 100 keV can be simplified by using isotropic point kernels. Compared to the assumption of full in-scattering made by algorithms currently in clinical use, the single-kernel superposition method improves dose calculations in a half-phantom consisting of air and water. Further improvements are obtained using the successive-scattering superposition method, which reduces the overestimates of dose close to the phantom surface usually associated with kernel superposition methods at brachytherapy photon energies. It is also shown that scatter dose point kernels can be parametrized to biexponential functions, making them suitable for use with an effective implementation of the collapsed cone superposition algorithm. (author)
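The primary-plus-buildup structure underlying point-kernel dose calculations can be sketched as follows; the linear buildup factor and all coefficients are illustrative stand-ins, not values from the generated kernels.

```python
import math

def point_kernel_dose_rate(S, mu, mu_en_rho, r, a=1.0, b=0.05):
    """Dose rate from an isotropic point source at distance r:
    attenuated 1/(4 pi r^2) fluence times a simple linear buildup factor
    B(mu r) = a + b * mu * r (illustrative coefficients), times the
    mass energy-absorption coefficient."""
    mur = mu * r
    buildup = a + b * mur
    fluence = S * math.exp(-mur) / (4.0 * math.pi * r * r)
    return fluence * buildup * mu_en_rho

# Dose rate falls off faster than 1/r^2 because of attenuation.
d1 = point_kernel_dose_rate(S=1e6, mu=0.1, mu_en_rho=0.03, r=10.0)
d2 = point_kernel_dose_rate(S=1e6, mu=0.1, mu_en_rho=0.03, r=20.0)
```

The superposition methods of the paper replace the crude buildup factor here with tabulated scatter kernels and integrate such contributions over the source distribution.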
Five-point Element Scheme of Finite Analytic Method for Unsteady Groundwater Flow
Institute of Scientific and Technical Information of China (English)
Xiang Bo; Mi Xiao; Ji Changming; Luo Qingsong
2007-01-01
In order to improve the adaptability of the finite analytic method to irregular elements, this paper establishes a five-point element scheme of the finite analytic method using a coordinate rotation technique. It not only solves the unsteady groundwater flow equation but also handles the boundary conditions. The method can be used to calculate the three typical groundwater problems. Compared with previously computed results, the results of this method are more satisfactory.
Zhang, Chong; Lü, Qingtian; Yan, Jiayong; Qi, Guang
2018-04-01
Downward continuation can enhance small-scale sources and improve resolution. Nevertheless, the common methods have disadvantages in obtaining optimal results because of divergence and instability. We derive the mean-value theorem for potential fields, which could be the theoretical basis of some data processing and interpretation. Based on numerical solutions of the mean-value theorem, we present the convergent and stable downward continuation methods by using the first-order vertical derivatives and their upward continuation. By applying one of our methods to both the synthetic and real cases, we show that our method is stable, convergent and accurate. Meanwhile, compared with the fast Fourier transform Taylor series method and the integrated second vertical derivative Taylor series method, our process has very little boundary effect and is still stable in noise. We find that the characters of the fading anomalies emerge properly in our downward continuation with respect to the original fields at the lower heights.
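The wavenumber-domain continuation operator at the heart of such methods is short to write down: multiplying the spectrum by exp(-|k| dz) continues the field upward (stable), while the inverse factor needed for downward continuation amplifies short wavelengths and noise, which is the instability the authors' derivative-based scheme addresses. A 1-D sketch checked against the analytic field of a buried line source:

```python
import numpy as np

def continue_field(field, dx, dz):
    """Continue a 1-D potential-field profile by dz (dz > 0: upward, stable;
    dz < 0: downward, which amplifies short wavelengths and noise)."""
    k = np.abs(2.0 * np.pi * np.fft.fftfreq(field.size, d=dx))
    return np.real(np.fft.ifft(np.fft.fft(field) * np.exp(-k * dz)))

# Field of a 2-D line source at depth z0: f(x) proportional to z/(x^2 + z^2),
# whose upward continuation by dz is the same expression with z0 + dz.
x = np.linspace(-50.0, 50.0, 1024)
dx = x[1] - x[0]
z0 = 5.0
f_surface = z0 / (x ** 2 + z0 ** 2)
f_up = continue_field(f_surface, dx, dz=2.0)          # field 2 units higher
f_theory = (z0 + 2.0) / (x ** 2 + (z0 + 2.0) ** 2)    # analytic upward field
```

Calling `continue_field` with a negative `dz` on noisy data shows the divergence the paper sets out to avoid, since exp(+|k| |dz|) blows up at the grid's highest wavenumbers.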
Bendinskaitė, Irmina
2015-01-01
Bendinskaitė I. Perspective for applying traditional and innovative teaching and learning methods to nurses' continuing education, master's thesis / supervisor Assoc. Prof. O. Riklikienė; Department of Nursing and Care, Faculty of Nursing, Lithuanian University of Health Sciences. – Kaunas, 2015. – 92 p. The purpose of this study was to investigate the perspective of traditional and innovative teaching and learning methods in nurses' continuing education. Material and methods. In a period fro...
DEFF Research Database (Denmark)
Choi, Ui-Min; Blaabjerg, Frede; Lee, Kyo-Beum
2015-01-01
This paper proposes a method to reduce the low-frequency neutral-point voltage oscillations. The neutral-point voltage oscillations are considerably reduced by adding a time offset to the three-phase turn-on times. The proper time offset is simply calculated considering the phase currents and dwell time of small- and medium-voltage vectors. However, if the power factor is lower, there is a limitation to eliminating neutral-point oscillations. In this case, the proposed method can be improved by changing the switching sequence properly. Additionally, a method for neutral-point voltage balancing...
Using the method of ideal point to solve dual-objective problem for production scheduling
Directory of Open Access Journals (Sweden)
Mariia Marko
2016-07-01
Full Text Available In practice, there are often problems in which several criteria must be optimized simultaneously; these are so-called multi-objective optimization problems. In this article we consider the use of the ideal point method to solve a two-objective optimization problem of production planning. The process of finding a solution consists of a series of steps in which, using the simplex method, we find the ideal point. After that, to solve the scalar problems, we use the method of Lagrange multipliers.
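The two-step procedure described here (find the ideal point by optimizing each objective separately with the simplex method, then solve a scalarized problem) can be sketched with SciPy's LP solver. The production data are hypothetical, and a Chebyshev-distance scalarization, itself solvable as an LP, stands in for the Lagrange-multiplier step described by the author:

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative two-objective production-planning LP (all data hypothetical):
# minimize c1.x (cost) and maximize profit (i.e. minimize c2.x with c2 = -profit),
# subject to resource constraints A x <= b and x >= 0 (linprog's default bounds).
c1 = np.array([2.0, 3.0])       # unit costs
c2 = np.array([-4.0, -5.0])     # negated unit profits
A = np.array([[1.0, 2.0], [3.0, 1.0]])
b = np.array([10.0, 15.0])

# Step 1: optimize each objective separately -> ideal point (z1*, z2*).
z_ideal = np.array([linprog(c, A_ub=A, b_ub=b).fun for c in (c1, c2)])

# Step 2: scalarize -- find the feasible point whose objective vector is
# closest to the ideal point in the Chebyshev norm, as an LP in (x, t):
#   minimize t  s.t.  ci.x - t <= zi*,  A x <= b,  x >= 0.
A_t = np.vstack([np.hstack([A, np.zeros((2, 1))]),
                 np.hstack([c1[None, :], [[-1.0]]]),
                 np.hstack([c2[None, :], [[-1.0]]])])
b_t = np.concatenate([b, z_ideal])
res = linprog(np.array([0.0, 0.0, 1.0]), A_ub=A_t, b_ub=b_t)
x_comp = res.x[:2]              # compromise production plan
```

The ideal point is generally infeasible (no plan is simultaneously cheapest and most profitable), so the compromise plan is the feasible point nearest to it under the chosen norm.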
PKI, Gamma Radiation Reactor Shielding Calculation by Point-Kernel Method
International Nuclear Information System (INIS)
Li Chunhuai; Zhang Liwu; Zhang Yuqin; Zhang Chuanxu; Niu Xihua
1990-01-01
1 - Description of program or function: This code calculates gamma-ray radiation shielding problems in geometric space. 2 - Method of solution: PKI uses a point-kernel integration technique, describes the radiation shielding geometry by a geometric-space configuration method with coordinate conversion, and makes use of the calculated results of the reactor primary shielding and of the coolant flow regularity in the loop system.
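A minimal sketch of the point-kernel idea (not the PKI code itself): the uncollided flux from an isotropic point source is attenuated exponentially through the shield and multiplied by a buildup factor to account for scattered photons. The linear buildup model used here is an assumption for illustration; production codes interpolate tabulated buildup data:

```python
import math

def point_kernel_flux(S, mu, t, r):
    """Point-kernel flux estimate for an isotropic point source of strength
    S (photons/s) seen through a slab of thickness t (cm) with attenuation
    coefficient mu (1/cm), at source-detector distance r (cm).
    B = 1 + mu*t is a crude linear buildup model, for illustration only."""
    B = 1.0 + mu * t
    return S * B * math.exp(-mu * t) / (4.0 * math.pi * r * r)

# Hypothetical example: 1e9 photons/s, 30 cm of mu = 0.06 1/cm, at 100 cm.
flux = point_kernel_flux(1.0e9, 0.06, 30.0, 100.0)
```

For extended sources, a code like PKI integrates this kernel over the source volume, which is where the geometric configuration and coordinate conversion machinery comes in.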
Lee, Jennifer
2012-01-01
The intent of this study was to examine the relationship between media multitasking orientation and grade point average. The study utilized a mixed-methods approach to investigate the research questions. In the quantitative section of the study, the primary method of statistical analyses was multiple regression. The independent variables for the…
International Nuclear Information System (INIS)
Wang, Ruihong; Yang, Shulin; Pei, Lucheng
2011-01-01
The deep penetration problem has been one of the difficult problems in shielding calculations with the Monte Carlo method for several decades. In this paper, an adaptive technique that treats the emission point as a sampling station is presented. Its main advantage is choosing the most suitable number of samples from the emission-point station so as to minimize the total cost of the random walk. Further, a related importance sampling method is also derived. Its main principle is to define an importance function of the response due to the particle state and to ensure that the number of emitted particles sampled is proportional to the importance function. The numerical results show that the adaptive emission-point method can to some degree overcome the difficulty of underestimating the result, and the related importance sampling method gives satisfactory results as well. (author)
DEFF Research Database (Denmark)
Sørensen, Chris Khadgi; Thach, Tine; Hovmøller, Mogens Støvring
2016-01-01
... flexible application procedure for spray inoculation, and it gave highly reproducible results for virulence phenotyping. Six point inoculation methods were compared to find the most suitable for assessment of pathogen aggressiveness. The use of Novec 7100 and dry dilution with Lycopodium spores gave ... for the assessment of quantitative epidemiological parameters. New protocols for spray and point inoculation of P. striiformis on wheat are presented, along with the prospect of applying these in rust research and resistance breeding activities.
Apparatus and method for implementing power saving techniques when processing floating point values
Kim, Young Moon; Park, Sang Phill
2017-10-03
An apparatus and method are described for reducing power when reading and writing graphics data. For example, one embodiment of an apparatus comprises: a graphics processor unit (GPU) to process graphics data including floating point data; a set of registers, at least one of the registers of the set partitioned to store the floating point data; and encode/decode logic to reduce a number of binary 1 values being read from the at least one register by causing a specified set of bit positions within the floating point data to be read out as 0s rather than 1s.
Directory of Open Access Journals (Sweden)
Klin-eam Chakkrid
2009-01-01
Full Text Available Abstract A new approximation method for solving variational inequalities and fixed points of nonexpansive mappings is introduced and studied. We prove a strong convergence theorem for the new iterative scheme to a common element of the set of fixed points of a nonexpansive mapping and the set of solutions of the variational inequality for an inverse-strongly monotone mapping, which solves some variational inequalities. Moreover, we apply our main result to obtain strong convergence to a common fixed point of a nonexpansive mapping and a strictly pseudocontractive mapping in a Hilbert space.
Comparison of Optimization and Two-point Methods in Estimation of Soil Water Retention Curve
Ghanbarian-Alavijeh, B.; Liaghat, A. M.; Huang, G.
2009-04-01
The soil water retention curve (SWRC) is one of the soil hydraulic properties whose direct measurement is time consuming and expensive. Since its measurement is unavoidable in environmental studies, e.g. the investigation of unsaturated hydraulic conductivity and solute transport, this study attempts to predict the soil water retention curve from two measured points. Using the Cresswell and Paydar (1996) method (the two-point method) and an optimization method developed in this study on the basis of two points of the SWRC, the parameters of the Tyler and Wheatcraft (1990) model (fractal dimension and air-entry value) were estimated; water contents at different matric potentials were then estimated and compared with their measured values (n=180). For each method, we used both 3 and 1500 kPa (case 1) and 33 and 1500 kPa (case 2) as the two points of the SWRC. The calculated RMSE values showed that for the Cresswell and Paydar (1996) method there is no significant difference between case 1 and case 2, although the RMSE in case 2 (2.35) was slightly lower than in case 1 (2.37). The results also showed that the optimization method developed in this study had significantly lower RMSE values for cases 1 (1.63) and 2 (1.33) than the Cresswell and Paydar (1996) method.
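Because the Tyler and Wheatcraft (1990) model is a power law, a two-point estimate of its parameters has a closed form. The sketch below, on synthetic retention data with assumed parameter values, illustrates that two-point idea (not the optimization method developed in the study):

```python
import math

def fit_tyler_wheatcraft(h1, theta1, h2, theta2, theta_s):
    """Closed-form two-point estimate of the fractal dimension D and
    air-entry value h_a of the Tyler-Wheatcraft (1990) model
        theta(h) = theta_s * (h_a / h) ** (3 - D),   h >= h_a,
    from two measured (h, theta) points. Illustrative sketch only."""
    m = math.log(theta1 / theta2) / math.log(h2 / h1)  # m = 3 - D
    D = 3.0 - m
    h_a = h1 * (theta1 / theta_s) ** (1.0 / m)
    return D, h_a

# Synthetic check: generate the 33 and 1500 kPa points from assumed
# parameters (theta_s = 0.45, D = 2.8, h_a = 3 kPa) and recover them.
theta_s, D_true, h_a_true = 0.45, 2.8, 3.0
swrc = lambda h: theta_s * (h_a_true / h) ** (3.0 - D_true)
D_est, h_a_est = fit_tyler_wheatcraft(33.0, swrc(33.0), 1500.0, swrc(1500.0), theta_s)
```

With noisy field data the two points over-determine nothing, which is why an optimization fit over both points (as in the study) can reduce the RMSE relative to the direct closed form.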
Implementation of the probability table method in a continuous-energy Monte Carlo code system
International Nuclear Information System (INIS)
Sutton, T.M.; Brown, F.B.
1998-10-01
RACER is a particle-transport Monte Carlo code that utilizes a continuous-energy treatment for neutrons and neutron cross-section data. Until recently, neutron cross sections in the unresolved resonance range (URR) have been treated in RACER using smooth, dilute-average representations. This paper describes how RACER has been modified to use probability tables to treat cross sections in the URR, and the computer codes that have been developed to compute the tables from the unresolved resonance parameters contained in ENDF/B data files. A companion paper presents results of Monte Carlo calculations that demonstrate the effect of using probability tables versus dilute-average cross sections for the URR. The next section provides a brief review of the probability table method as implemented in the RACER system. The production of the probability tables for use by RACER takes place in two steps. The first step is the generation of probability tables from the nuclear parameters contained in the ENDF/B data files; this step, and the code written to perform it, are described in Section 3. The tables produced are at energy points determined by the ENDF/B parameters and/or accuracy considerations. The tables actually used in the RACER calculations are obtained in the second step from those produced in the first; these tables are generated at energy points specific to the RACER calculation. Section 4 describes this step and the code written to implement it, as well as modifications made to RACER to enable it to use the tables. Finally, some results and conclusions are presented in Section 5.
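The core of the probability table method is band sampling: at a given energy, a uniform random number selects a cross-section band according to tabulated cumulative probabilities, so that the sampled cross section reproduces the statistical distribution in the unresolved range rather than its dilute average. A schematic sketch (not the RACER table format):

```python
import random

def sample_xs(table, u=None):
    """Sample a cross-section value from a probability table.
    `table` is a list of (cumulative_probability, cross_section) band
    entries in increasing cumulative order; a uniform random number u
    selects the band. Schematic illustration only."""
    if u is None:
        u = random.random()
    for cum_p, xs in table:
        if u <= cum_p:
            return xs
    return table[-1][1]

# Example table: three bands with probabilities 0.2, 0.5 and 0.3 of
# cross sections 10, 25 and 80 barns (hypothetical numbers).
table = [(0.2, 10.0), (0.7, 25.0), (1.0, 80.0)]
xs = sample_xs(table, 0.5)
```

Averaging many such samples recovers the dilute-average value (here 0.2·10 + 0.5·25 + 0.3·80 = 38.5 barns), but individual histories see the correct fluctuating cross section, which is what changes self-shielded results.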
Gender preference between traditional and PowerPoint methods of teaching gross anatomy.
Nuhu, Saleh; Adamu, Lawan Hassan; Buba, Mohammed Alhaji; Garba, Sani Hyedima; Dalori, Babagana Mohammed; Yusuf, Ashiru Hassan
2018-01-01
The teaching and learning process is increasingly metamorphosing from traditional chalk and talk to the modern dynamism of information and communication technology. Medical education is no exception to this dynamism, especially in the teaching of gross anatomy, which serves as one of the bases for understanding the human structure. This study was conducted to determine the gender preference of preclinical medical students regarding the use of traditional (chalk and talk) and PowerPoint presentation in the teaching of gross anatomy. This was a cross-sectional and prospective study conducted among preclinical medical students at the University of Maiduguri, Nigeria. Using simple random techniques, a questionnaire was circulated among 280 medical students, of whom 247 filled the questionnaire appropriately. The data obtained were analyzed using SPSS version 20 (IBM Corporation, Armonk, NY, USA) to find, among other things, the method preferred by the students. The majority of the preclinical medical students at the University of Maiduguri preferred the PowerPoint method for the teaching of gross anatomy over the conventional method. A Cronbach alpha value of 0.76 was obtained, which is an acceptable level of internal consistency. A statistically significant association was found between gender and preferred method of lecture delivery with respect to the clarity of lecture content, where females preferred the conventional method of lecture delivery whereas males preferred the PowerPoint method. On the reproducibility of text and diagrams, females preferred the PowerPoint method of teaching gross anatomy while males preferred the conventional method. There are gender preferences with regard to clarity of lecture contents and reproducibility of text and diagrams. This study also revealed that the majority of the preclinical medical students at the University of Maiduguri prefer PowerPoint presentation over the traditional chalk and talk method in most of the
Tataru, Paula; Hobolth, Asger
2011-12-05
Continuous time Markov chains (CTMCs) are a widely used model for describing the evolution of DNA sequences at the nucleotide, amino acid or codon level. The sufficient statistics for CTMCs are the time spent in a state and the number of changes between any two states. In applications, past evolutionary events (exact times and types of changes) are inaccessible and the past must be inferred from DNA sequence data observed in the present. We describe and implement three algorithms for computing linear combinations of expected values of the sufficient statistics, conditioned on the end-points of the chain, and compare their performance with respect to accuracy and running time. The first algorithm is based on an eigenvalue decomposition of the rate matrix (EVD), the second on uniformization (UNI), and the third on integrals of matrix exponentials (EXPM). The implementation in R of the algorithms is available at http://www.birc.au.dk/~paula/. We use two different models to analyze the accuracy and eight experiments to investigate the speed of the three algorithms. We find that they have similar accuracy and that EXPM is the slowest method. Furthermore, we find that UNI is usually faster than EVD.
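The EXPM approach, integrating products of matrix exponentials, can be sketched by direct quadrature for the expected time spent in a state conditioned on the chain's end-points. The two-state rate matrix below is an illustrative assumption, and this is a sketch of the idea, not the authors' R implementation:

```python
import numpy as np
from scipy.linalg import expm

def expected_time_in_state(Q, T, a, b, c, n=400):
    """Expected time the CTMC with rate matrix Q spends in state c during
    [0, T], conditioned on X(0) = a and X(T) = b:
      E = ( integral_0^T [e^{Qs}]_{a,c} [e^{Q(T-s)}]_{c,b} ds ) / [e^{QT}]_{a,b}
    evaluated by trapezoidal quadrature of matrix exponentials."""
    s = np.linspace(0.0, T, n)
    vals = np.array([expm(Q * si)[a, c] * expm(Q * (T - si))[c, b] for si in s])
    ds = T / (n - 1)
    integral = ds * (vals.sum() - 0.5 * (vals[0] + vals[-1]))
    return integral / expm(Q * T)[a, b]

# Two-state chain with symmetric rate alpha (a toy substitution model).
alpha = 1.0
Q = np.array([[-alpha, alpha], [alpha, -alpha]])
t_in_0 = expected_time_in_state(Q, 1.0, 0, 0, 0)  # time in state 0, given 0 -> 0
t_in_1 = expected_time_in_state(Q, 1.0, 0, 0, 1)  # time in state 1, given 0 -> 0
```

By Chapman-Kolmogorov the two conditional expectations must sum to the total interval T, which gives a useful sanity check on the quadrature.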
Directory of Open Access Journals (Sweden)
Tataru Paula
2011-12-01
Full Text Available Abstract Background Continuous time Markov chains (CTMCs) are a widely used model for describing the evolution of DNA sequences at the nucleotide, amino acid or codon level. The sufficient statistics for CTMCs are the time spent in a state and the number of changes between any two states. In applications, past evolutionary events (exact times and types of changes) are inaccessible and the past must be inferred from DNA sequence data observed in the present. Results We describe and implement three algorithms for computing linear combinations of expected values of the sufficient statistics, conditioned on the end-points of the chain, and compare their performance with respect to accuracy and running time. The first algorithm is based on an eigenvalue decomposition of the rate matrix (EVD), the second on uniformization (UNI), and the third on integrals of matrix exponentials (EXPM). The implementation in R of the algorithms is available at http://www.birc.au.dk/~paula/. Conclusions We use two different models to analyze the accuracy and eight experiments to investigate the speed of the three algorithms. We find that they have similar accuracy and that EXPM is the slowest method. Furthermore, we find that UNI is usually faster than EVD.
Multi-point probe for testing electrical properties and a method of producing a multi-point probe
DEFF Research Database (Denmark)
2011-01-01
A multi-point probe for testing electrical properties of a number of specific locations of a test sample comprises a supporting body defining a first surface and a first multitude of conductive probe arms (101-101'''), each probe arm defining a proximal end and a distal end. The probe arms ... of contact with the supporting body, and a maximum thickness perpendicular to its perpendicular bisector and its line of contact with the supporting body. Each probe arm has a specific area or point of contact (111-111''') at its distal end for contacting a specific location among the number ... of specific locations of the test sample. At least one of the probe arms has an extension defining a pointing distal end providing its specific area or point of contact located offset relative to its perpendicular bisector.
Energy Technology Data Exchange (ETDEWEB)
Liu, Youshan, E-mail: ysliu@mail.iggcas.ac.cn [State Key Laboratory of Lithospheric Evolution, Institute of Geology and Geophysics, Chinese Academy of Sciences, Beijing, 100029 (China); Teng, Jiwen, E-mail: jwteng@mail.iggcas.ac.cn [State Key Laboratory of Lithospheric Evolution, Institute of Geology and Geophysics, Chinese Academy of Sciences, Beijing, 100029 (China); Xu, Tao, E-mail: xutao@mail.iggcas.ac.cn [State Key Laboratory of Lithospheric Evolution, Institute of Geology and Geophysics, Chinese Academy of Sciences, Beijing, 100029 (China); CAS Center for Excellence in Tibetan Plateau Earth Sciences, Beijing, 100101 (China); Badal, José, E-mail: badal@unizar.es [Physics of the Earth, Sciences B, University of Zaragoza, Pedro Cerbuna 12, 50009 Zaragoza (Spain)
2017-05-01
The mass-lumped method avoids the cost of inverting the mass matrix and simultaneously maintains spatial accuracy by adopting additional interior integration points, known as cubature points. To date, such points are only known analytically in tensor domains, such as quadrilateral or hexahedral elements. Thus, the diagonal-mass-matrix spectral element method (SEM) in non-tensor domains always relies on numerically computed interpolation points or quadrature points. However, only the cubature points for degrees 1 to 6 are known, which is the reason that we have developed a p-norm-based optimization algorithm to obtain higher-order cubature points. In this way, we obtain and tabulate new cubature points with all positive integration weights for degrees 7 to 9. The dispersion analysis illustrates that the dispersion relation determined from the new optimized cubature points is comparable to that of the mass and stiffness matrices obtained by exact integration. Simultaneously, the Lebesgue constant for the new optimized cubature points indicates its surprisingly good interpolation properties. As a result, such points provide both good interpolation properties and integration accuracy. The Courant–Friedrichs–Lewy (CFL) numbers are tabulated for the conventional Fekete-based triangular spectral element (TSEM), the TSEM with exact integration, and the optimized cubature-based TSEM (OTSEM). A complementary study demonstrates the spectral convergence of the OTSEM. A numerical example conducted on a half-space model demonstrates that the OTSEM improves the accuracy by approximately one order of magnitude compared to the conventional Fekete-based TSEM. In particular, the accuracy of the 7th-order OTSEM is even higher than that of the 14th-order Fekete-based TSEM. Furthermore, the OTSEM produces a result that can compete in accuracy with the quadrilateral SEM (QSEM). The high accuracy of the OTSEM is also tested with a non-flat topography model. In terms of computational
International Nuclear Information System (INIS)
Liu, Youshan; Teng, Jiwen; Xu, Tao; Badal, José
2017-01-01
The mass-lumped method avoids the cost of inverting the mass matrix and simultaneously maintains spatial accuracy by adopting additional interior integration points, known as cubature points. To date, such points are only known analytically in tensor domains, such as quadrilateral or hexahedral elements. Thus, the diagonal-mass-matrix spectral element method (SEM) in non-tensor domains always relies on numerically computed interpolation points or quadrature points. However, only the cubature points for degrees 1 to 6 are known, which is the reason that we have developed a p-norm-based optimization algorithm to obtain higher-order cubature points. In this way, we obtain and tabulate new cubature points with all positive integration weights for degrees 7 to 9. The dispersion analysis illustrates that the dispersion relation determined from the new optimized cubature points is comparable to that of the mass and stiffness matrices obtained by exact integration. Simultaneously, the Lebesgue constant for the new optimized cubature points indicates its surprisingly good interpolation properties. As a result, such points provide both good interpolation properties and integration accuracy. The Courant–Friedrichs–Lewy (CFL) numbers are tabulated for the conventional Fekete-based triangular spectral element (TSEM), the TSEM with exact integration, and the optimized cubature-based TSEM (OTSEM). A complementary study demonstrates the spectral convergence of the OTSEM. A numerical example conducted on a half-space model demonstrates that the OTSEM improves the accuracy by approximately one order of magnitude compared to the conventional Fekete-based TSEM. In particular, the accuracy of the 7th-order OTSEM is even higher than that of the 14th-order Fekete-based TSEM. Furthermore, the OTSEM produces a result that can compete in accuracy with the quadrilateral SEM (QSEM). The high accuracy of the OTSEM is also tested with a non-flat topography model. In terms of computational
Continuation Methods and Non-Linear/Non-Gaussian Estimation for Flight Dynamics, Phase II
National Aeronautics and Space Administration — We propose herein to augment current NASA spaceflight dynamics programs with algorithms and software from three domains. First, we use parameter continuation methods...
A Hybrid Maximum Power Point Tracking Method for Automobile Exhaust Thermoelectric Generator
Quan, Rui; Zhou, Wei; Yang, Guangyou; Quan, Shuhai
2017-05-01
To make full use of the maximum output power of an automobile exhaust thermoelectric generator (AETEG) based on Bi2Te3 thermoelectric modules (TEMs), and taking into account the advantages and disadvantages of existing maximum power point tracking methods as well as the output characteristics of TEMs, a hybrid maximum power point tracking method combining the perturb and observe (P&O) algorithm, quadratic interpolation and constant voltage tracking is put forward in this paper. Firstly, the method searches for the maximum power point with the P&O algorithm and quadratic interpolation; then it forces the AETEG to work at its maximum power point with constant voltage tracking. A synchronous buck converter and controller were implemented in the electric bus of the AETEG applied in a military sports utility vehicle, and the whole system was modeled and simulated in a MATLAB/Simulink environment. Simulation results demonstrate that the maximum output power of the AETEG based on the proposed hybrid method is increased by about 3.0% and 3.7% compared with using only the P&O algorithm and the quadratic interpolation method, respectively. The tracking time is only 1.4 s, which is half that of the P&O algorithm and of the quadratic interpolation method. The experimental results demonstrate that the maximum power tracked with the proposed hybrid method is approximately equal to the real value; the method deals better with the voltage fluctuation of the AETEG than the P&O algorithm alone, and it resolves the issue that the operating point can barely be adjusted with constant voltage tracking alone when the operating conditions change.
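The perturb-and-observe component of the hybrid tracker can be sketched in a few lines: perturb the operating voltage, keep the perturbation direction while power rises, and reverse it when power falls. The power-voltage curve below is a hypothetical unimodal stand-in for the AETEG characteristic:

```python
def p_and_o(power_of_v, v0, step=0.1, iters=200):
    """Textbook perturb-and-observe maximum power point tracking on a
    power-voltage curve power_of_v(v). Not the authors' hybrid controller."""
    v, direction = v0, 1.0
    p = power_of_v(v)
    for _ in range(iters):
        v_new = v + direction * step
        p_new = power_of_v(v_new)
        if p_new < p:
            direction = -direction   # power fell: reverse the perturbation
        v, p = v_new, p_new
    return v, p

# Illustrative unimodal curve with its maximum power point at v = 17 V.
curve = lambda v: max(0.0, 100.0 - (v - 17.0) ** 2)
v_mp, p_mp = p_and_o(curve, v0=10.0)
```

Plain P&O oscillates around the maximum with amplitude set by the step size; that residual oscillation is exactly what the quadratic-interpolation and constant-voltage stages of the hybrid method are meant to remove.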
Numerical continuation methods for dynamical systems path following and boundary value problems
Krauskopf, Bernd; Galan-Vioque, Jorge
2007-01-01
Path following in combination with boundary value problem solvers has emerged as a continuing and strong influence in the development of dynamical systems theory and its application. It is widely acknowledged that the software package AUTO - developed by Eusebius J. Doedel about thirty years ago and further expanded and developed ever since - plays a central role in the brief history of numerical continuation. This book has been compiled on the occasion of Sebius Doedel's 60th birthday. Bringing together for the first time a large amount of material in a single, accessible source, it is hoped that the book will become the natural entry point for researchers in diverse disciplines who wish to learn what numerical continuation techniques can achieve. The book opens with a foreword by Herbert B. Keller and lecture notes by Sebius Doedel himself that introduce the basic concepts of numerical bifurcation analysis. The other chapters by leading experts discuss continuation for various types of systems and objects ...
Methods of fast, multiple-point in vivo T1 determination
International Nuclear Information System (INIS)
Zhang, Y.; Spigarelli, M.; Fencil, L.E.; Yeung, H.N.
1989-01-01
Two methods of rapid, multiple-point determination of T1 in vivo have been evaluated with a phantom consisting of vials of gel in different Mn2+ concentrations. The first method was an inversion-recovery-on-the-fly technique, and the second used a variable-tip-angle (α) progressive saturation with two subsequences of different repetition times. In the first method, 1/T1 was evaluated by an exponential fit. In the second method, 1/T1 was obtained iteratively with a linear fit and then readjusted together with α to a model equation until self-consistency was reached.
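The exponential fit used in the first (inversion-recovery) method can be sketched on synthetic data; the ideal IR signal model and the T1 value below are illustrative assumptions, not the phantom measurements:

```python
import numpy as np
from scipy.optimize import curve_fit

def ir_signal(ti, m0, t1):
    """Ideal inversion-recovery signal M(TI) = M0 * (1 - 2*exp(-TI/T1))."""
    return m0 * (1.0 - 2.0 * np.exp(-ti / t1))

# Synthetic multi-point measurement for a gel vial with an assumed T1 = 0.8 s,
# sampled at several inversion times TI (seconds).
ti = np.array([0.05, 0.1, 0.2, 0.4, 0.8, 1.6, 3.2])
sig = ir_signal(ti, 1.0, 0.8)

popt, _ = curve_fit(ir_signal, ti, sig, p0=(0.5, 0.5))
m0_est, t1_est = popt
```

In practice magnitude images lose the sign of the early, inverted points, so real fits either restore polarity first or fit the absolute value of this model.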
Interior Point Methods on GPU with application to Model Predictive Control
DEFF Research Database (Denmark)
Gade-Nielsen, Nicolai Fog
The goal of this thesis is to investigate the application of interior point methods to solving dynamical optimization problems on a graphical processing unit (GPU), with a focus on problems arising in Model Predictive Control (MPC). Multi-core processors have been available for over ten years now ... software package called GPUOPT, available under the non-restrictive MIT license. GPUOPT includes a primal-dual interior-point method which supports both the CPU and the GPU. It is implemented as multiple components, where the matrix operations and the solver for the Newton directions are separated ...
Multiscale Modeling using Molecular Dynamics and Dual Domain Material Point Method
Energy Technology Data Exchange (ETDEWEB)
Dhakal, Tilak Raj [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). Theoretical Division. Fluid Dynamics and Solid Mechanics Group, T-3; Rice Univ., Houston, TX (United States)
2016-07-07
For problems involving a large material deformation rate, the material deformation time scale can be shorter than the time the material takes to reach thermodynamic equilibrium. For such problems it is difficult to obtain a constitutive relation, and history dependence becomes important because of the thermodynamic non-equilibrium. Our goal is to build a multi-scale numerical method which can bypass the need for a constitutive relation. In conclusion, a multi-scale simulation method is developed based on the dual domain material point (DDMP) method. Molecular dynamics (MD) simulation is performed to calculate the stress. Since communication among material points is not necessary, the computation can be done embarrassingly parallel on a CPU-GPU platform.
DEFF Research Database (Denmark)
Gernaey, Krist; Cervera Padrell, Albert Emili; Woodley, John
2012-01-01
The pharmaceutical industry is undergoing a radical transition towards continuous production processes. Systematic use of process systems engineering (PSE) methods and tools forms the key to achieving this transition in a structured and efficient way.
A primal-dual interior point method for large-scale free material optimization
DEFF Research Database (Denmark)
Weldeyesus, Alemseged Gebrehiwot; Stolpe, Mathias
2015-01-01
Free Material Optimization (FMO) is a branch of structural optimization in which the design variable is the elastic material tensor, which is allowed to vary over the design domain. The requirements are that the material tensor is symmetric positive semidefinite with bounded trace. The resulting optimization problem is a nonlinear semidefinite program with many small matrix inequalities, for which a special-purpose optimization method should be developed. The objective of this article is to propose an efficient primal-dual interior point method for FMO that can robustly and accurately solve large-scale problems. The number of iterations the interior point method requires is modest and increases only marginally with problem size. The computed optimal solutions obtain a higher precision than other available special-purpose methods for FMO. The efficiency and robustness of the method are demonstrated by numerical experiments on a set ...
Reliability of an experimental method to analyse the impact point on a golf ball during putting.
Richardson, Ashley K; Mitchell, Andrew C S; Hughes, Gerwyn
2015-06-01
This study aimed to examine the reliability of an experimental method for identifying the location of the impact point on a golf ball during putting. Forty trials were completed using a mechanical putting robot set to reproduce a putt of 3.2 m, with four different putter-ball combinations. After locating the centre of the dimple pattern (centroid), the following variables were tested: distance of the impact point from the centroid, angle of the impact point from the centroid, and distance of the impact point from the centroid derived from the X, Y coordinates. Good to excellent reliability was demonstrated in all impact variables, reflected in very strong relative (ICC = 0.98-1.00) and absolute reliability (SEM% = 0.9-4.3%). The highest SEM% observed was 7% for the angle of the impact point from the centroid. In conclusion, the experimental method was shown to be reliable at locating the centroid of a golf ball, therefore allowing identification of the point of impact with the putter head, and is suitable for use in subsequent studies.
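The derived impact variables (distance and angle of the impact point relative to the centroid, computed from X, Y coordinates) are elementary to reproduce; a minimal sketch:

```python
import math

def impact_metrics(centroid, impact):
    """Distance and angle (degrees, anticlockwise from the +x axis) of an
    impact point relative to the dimple-pattern centroid, from X, Y
    coordinates. Mirrors the derived variables described in the abstract."""
    dx = impact[0] - centroid[0]
    dy = impact[1] - centroid[1]
    return math.hypot(dx, dy), math.degrees(math.atan2(dy, dx))

# Hypothetical coordinates (e.g. pixels in a calibrated image).
dist, ang = impact_metrics((0.0, 0.0), (3.0, 4.0))
```

Note that the angle metric becomes numerically unstable for impacts very close to the centroid (dx and dy both near zero), which is consistent with the angle variable showing the highest SEM% in the study.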
DEFF Research Database (Denmark)
Müller, Sabine; Neergaard, Helle; Ulhøi, John Parm
The aim of this paper is to propose an integrated methodological approach to study complex and longitudinal processes such as continuous innovation and business development in high-tech SME clusters. It draws on four existing and well-recognised approaches for studying events ... It is especially helpful for studies which focus on continuous innovation and business development in high-tech SME clusters, as these studies could benefit tremendously from more qualitative approaches, which facilitate in-depth understanding of continuous and changing processes. Therefore, major ...
Nosikov, I. A.; Klimenko, M. V.; Bessarab, P. F.; Zhbankov, G. A.
2017-07-01
Point-to-point ray tracing is an important problem in many fields of science. While direct variational methods, in which some trajectory is transformed into an optimal one, are routinely used in calculations of pathways of seismic waves, chemical reactions, diffusion processes, etc., this approach is not widely known in ionospheric point-to-point ray tracing. We apply the Nudged Elastic Band (NEB) method to a radio wave propagation problem. In the NEB method, a chain of points which gives a discrete representation of the radio wave ray is adjusted iteratively to an optimal configuration satisfying Fermat's principle, while the endpoints of the trajectory are kept fixed according to the boundary conditions. Transverse displacements define the radio ray trajectory, while springs between the points control their distribution along the ray. The method is applied to a study of point-to-point ionospheric ray tracing, where the propagation medium is obtained with the International Reference Ionosphere model taking into account traveling ionospheric disturbances. A 2-dimensional representation of the optical path functional is developed and used to gain insight into the fundamental difference between high and low rays. We conclude that high and low rays are minima and saddle points of the optical path functional, respectively.
Directory of Open Access Journals (Sweden)
Mroczka Janusz
2014-12-01
Full Text Available Photovoltaic panels have a non-linear current-voltage characteristic and produce maximum power at only one point, called the maximum power point. In the case of uniform illumination a single solar panel shows only one power maximum, which is also the global maximum power point. In the case of an irregularly illuminated photovoltaic panel many local maxima can be observed on the power-voltage curve, and only one of them is the global maximum. The proposed algorithm detects whether a solar panel is under uniform insolation conditions; an appropriate strategy for tracking the maximum power point is then chosen by a decision algorithm. The proposed method is simulated in an environment created by the authors, which allows photovoltaic panels to be simulated under real conditions of lighting, temperature and shading.
Kholeif, S A
2001-06-01
A new method that belongs to the differential category for determining end points from potentiometric titration curves is presented. It uses a preprocessing step to find first-derivative values by fitting four data points in and around the region of inflection to a non-linear function, and then locates the end point, usually as a maximum or minimum, using an inverse parabolic interpolation procedure that has an analytical solution. The behavior and accuracy of the sigmoid and cumulative non-linear functions used are investigated with respect to three factors. A statistical evaluation of the new method using linear least-squares validation and multifactor data analysis is covered. The new method applies generally to symmetrical and unsymmetrical potentiometric titration curves, and the end point is calculated using numerical procedures only. It outperforms the "parent" regular differential method at almost all factor levels and gives accurate results comparable to the true or estimated true end points. Calculated end points from selected experimental titration curves are also compared between the new method and methods of the equivalence-point category, such as those of Gran or Fortuin.
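The differential idea, locating the maximum of the first derivative and refining it by inverse parabolic interpolation, can be sketched as follows. The sigmoidal titration curve is synthetic, and a generic finite-difference derivative peak stands in for the paper's fitted non-linear functions:

```python
import numpy as np

def end_point(volume, ph):
    """Locate a titration end point as the maximum of the first derivative
    d(pH)/dV, refined by inverse parabolic interpolation through the peak
    and its two neighbours. A generic sketch of the differential approach."""
    v_mid = 0.5 * (volume[1:] + volume[:-1])       # midpoints of intervals
    dphdv = np.diff(ph) / np.diff(volume)          # finite-difference slope
    i = int(np.argmax(dphdv))
    x0, x1, x2 = v_mid[i - 1], v_mid[i], v_mid[i + 1]
    y0, y1, y2 = dphdv[i - 1], dphdv[i], dphdv[i + 1]
    num = (x1 - x0) ** 2 * (y1 - y2) - (x1 - x2) ** 2 * (y1 - y0)
    den = (x1 - x0) * (y1 - y2) - (x1 - x2) * (y1 - y0)
    return x1 - 0.5 * num / den                    # vertex of fitted parabola

# Synthetic sigmoidal curve with its true end point at V = 25.0 mL.
v = np.linspace(20.0, 30.0, 101)
ph = 7.0 + 3.0 * np.tanh(2.0 * (v - 25.0))
ep = end_point(v, ph)
```

The parabolic refinement recovers sub-grid accuracy from just three derivative values, which is the appeal of having an analytical solution for the interpolation step.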
International Nuclear Information System (INIS)
Sharma, Sushil; Deshpande, Bhavana
2009-01-01
The purpose of this paper is to prove some common fixed point theorems for a finite number of discontinuous, noncompatible mappings on noncomplete intuitionistic fuzzy metric spaces. Our results extend, generalize and intuitionistically fuzzify several known results in fuzzy metric spaces. We give an example and also provide formulas for the total number of commutativity conditions for a finite number of mappings.
A portable low-cost 3D point cloud acquiring method based on structure light
Gui, Li; Zheng, Shunyi; Huang, Xia; Zhao, Like; Ma, Hao; Ge, Chao; Tang, Qiuxia
2018-03-01
A fast and low-cost method of acquiring 3D point cloud data is proposed in this paper, which addresses the lack of texture information and the low efficiency of point cloud acquisition using only one pair of cheap cameras and a projector. Firstly, we put forward a scene-adaptive design method for a random encoding pattern: a coding pattern is projected onto the target surface to create texture information that is favorable for image matching. Subsequently, we design an efficient dense matching algorithm that fits the projected texture. After global optimization and multi-kernel parallel development fusing hardware and software, a fast point cloud acquisition system is accomplished. Evaluation of point cloud accuracy shows that point clouds acquired by the proposed method have higher precision. What's more, the scanning speed meets the demands of dynamic scenes, giving the method good practical application value.
'Continuation rate', 'use-effectiveness' and their assessment for the diaphragm and jelly method.
Chandrasekaran, C; Karkal, M
1972-11-01
The application of the life-table technique to the calculation of use-effectiveness of a contraceptive was proposed by Potter in 1963.(1) The technique was also found useful in assessing the duration for which use of a contraceptive was continued. The keen interest in the IUD in the mid-1960s was reflected in the terminology developed for assessing continuity of use; 'retention rate' was a frequently used index.(2) With the development of the concept of segments, whose end-period marked either termination of use of a method or its continuance on a cut-off date, 'closure rate' and 'termination rate' came to be used as measures of discontinuance, primarily for the IUD.(3) In discussing concepts relating to acceptance, use and effectiveness of family planning methods more generally, an expert group suggested that 'continuation' should denote that a client (or a couple) had begun to practise a method and was still practising it.(4) Since this group defined 'an acceptor' as a person taking service and/or advice, i.e. having an IUD insertion or a sterilization operation or receiving supplies (or advice on methods such as 'rhythm' or coitus interruptus with the intent of using the method), the base for the assessment of continuation rates, according to this group, would be only those acceptors who had begun using the method. The life-table method has also been used to study the continuation rate for pill acceptors.(5) Balakrishnan et al. made a study of continuation rates of oral contraceptives using the multiple-decrement life-table technique.(6)
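The life-table calculation underlying the 'continuation rate' can be sketched as follows: each month, the users discontinuing the method are related to those exposed to the risk of discontinuing, with withdrawals (censored segments) conventionally counted as exposed for half a month, and the monthly survival probabilities are multiplied up. The function name, the sample numbers and the actuarial half-month adjustment are illustrative conventions, not taken from the cited sources:

```python
def continuation_rates(n_start, discontinued, withdrawn):
    """Life-table cumulative continuation rates by month.

    discontinued[i]: users stopping the method in month i;
    withdrawn[i]: users lost to follow-up (censored) in month i,
    counted as exposed for half the month (actuarial adjustment).
    """
    rates = []
    alive, cum = n_start, 1.0
    for d, w in zip(discontinued, withdrawn):
        exposed = alive - w / 2.0          # exposure this month
        cum *= 1.0 - d / exposed           # monthly continuation probability
        rates.append(cum)
        alive -= d + w
    return rates

# 1000 acceptors, hypothetical monthly discontinuations and withdrawals
monthly = continuation_rates(1000, [50, 40, 30], [10, 10, 10])
```

The cumulative rates decline month by month, as a continuation curve must.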
A comparative study of the maximum power point tracking methods for PV systems
International Nuclear Information System (INIS)
Liu, Yali; Li, Ming; Ji, Xu; Luo, Xi; Wang, Meidi; Zhang, Ying
2014-01-01
Highlights: • An improved maximum power point tracking method for PV systems is proposed. • The theoretical derivation procedure of the proposed method is provided. • Simulation models of MPPT trackers were established in MATLAB/Simulink. • Experiments were conducted to verify the effectiveness of the proposed MPPT method. - Abstract: Maximum power point tracking (MPPT) algorithms play an important role in optimizing the power and efficiency of a photovoltaic (PV) generation system. To address the trade-off in the classical perturb and observe (P&O) method between response speed and steady-state tracking accuracy, an improved P&O method based on the Aitken interpolation algorithm is put forward in this paper. To validate the correctness and performance of the proposed method, simulation and experimental studies were carried out. Simulation models of the classical and improved P&O methods were established in MATLAB/Simulink to analyze each technique under varying solar irradiation and temperature. The experimental results show that the tracking efficiency of the improved P&O method averages 93%, compared to 72% for the classical P&O method; this conclusion basically agrees with the simulation study. Finally, we propose the applicable conditions and scope of these MPPT methods in practical applications.
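The classical perturb and observe logic referred to above can be stated in a few lines: perturb the operating voltage, keep the perturbation direction if power rose, and reverse it otherwise. A minimal sketch on a hypothetical single-peak power curve (the step size and the toy curve are illustrative, not the paper's PV model):

```python
def perturb_and_observe(v, p, v_prev, p_prev, step=0.1):
    """One classical P&O iteration: keep perturbing in the direction
    that increased power, reverse otherwise. Returns the next voltage."""
    if p >= p_prev:
        direction = 1.0 if v >= v_prev else -1.0
    else:
        direction = -1.0 if v >= v_prev else 1.0
    return v + direction * step

# Toy PV curve with its maximum power at 17 V: p(v) = 100 - (v - 17)^2
def pv_power(v):
    return 100.0 - (v - 17.0) ** 2

v_prev, v = 12.0, 12.1
for _ in range(200):
    v_next = perturb_and_observe(v, pv_power(v), v_prev, pv_power(v_prev))
    v_prev, v = v, v_next
```

The tracker climbs to the maximum and then oscillates within one or two step sizes of it, which is exactly the steady-state accuracy limitation the improved method targets.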
Wang, Yongzhi; Ma, Yuqing; Zhu, A.-xing; Zhao, Hui; Liao, Lixia
2018-05-01
Facade features represent segmentations of building surfaces and can serve as a building framework. Extracting facade features from three-dimensional (3D) point cloud data (3D PCD) is an efficient method for 3D building modeling. By combining the advantages of 3D PCD and two-dimensional optical images, this study describes the creation of a highly accurate building facade feature extraction method from 3D PCD with a focus on structural information. The new extraction method involves three major steps: image feature extraction, exploration of the mapping method between the image features and 3D PCD, and optimization of the initial 3D PCD facade features considering structural information. Results show that the new method can extract the 3D PCD facade features of buildings more accurately and continuously. The new method is validated using a case study. In addition, the effectiveness of the new method is demonstrated by comparing it with the range image-extraction method and the optical image-extraction method in the absence of structural information. The 3D PCD facade features extracted by the new method can be applied in many fields, such as 3D building modeling and building information modeling.
Evaluation of a 3D point cloud tetrahedral tomographic reconstruction method
Energy Technology Data Exchange (ETDEWEB)
Pereira, N F; Sitek, A, E-mail: nfp4@bwh.harvard.ed, E-mail: asitek@bwh.harvard.ed [Department of Radiology, Brigham and Women's Hospital-Harvard Medical School Boston, MA (United States)
2010-09-21
Tomographic reconstruction on an irregular grid may be superior to reconstruction on a regular grid. This is achieved through an appropriate choice of the image space model, the selection of an optimal set of points and the use of any available prior information during the reconstruction process. Accordingly, a number of reconstruction-related parameters must be optimized for best performance. In this work, a 3D point cloud tetrahedral mesh reconstruction method is evaluated for quantitative tasks. A linear image model is employed to obtain the reconstruction system matrix and five point generation strategies are studied. The evaluation is performed using the recovery coefficient, as well as voxel- and template-based estimates of bias and variance measures, computed over specific regions in the reconstructed image. A similar analysis is performed for regular grid reconstructions that use voxel basis functions. The maximum likelihood expectation maximization reconstruction algorithm is used. For the tetrahedral reconstructions, of the five point generation methods that are evaluated, three use image priors. For evaluation purposes, an object consisting of overlapping spheres with varying activity is simulated. The exact parallel projection data of this object are obtained analytically using a parallel projector, and multiple Poisson noise realizations of these exact data are generated and reconstructed using the different point generation strategies. The unconstrained nature of point placement in some of the irregular mesh-based reconstruction strategies has superior activity recovery for small, low-contrast image regions. The results show that, with an appropriately generated set of mesh points, the irregular grid reconstruction methods can out-perform reconstructions on a regular grid for mathematical phantoms, in terms of the performance measures evaluated.
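The maximum likelihood expectation maximization (MLEM) update used in the evaluation can be sketched independently of the basis (voxel or tetrahedral): each iteration forward-projects the current estimate, compares it with the measured counts, and back-projects the ratio. A minimal dense-matrix sketch on a toy noise-free system; the tiny system matrix and iteration count are illustrative only:

```python
import numpy as np

def mlem(A, y, n_iter=2000):
    """Maximum-likelihood EM for Poisson data y ~ Poisson(A @ x).

    A: (n_bins, n_basis) system matrix (voxel or tetrahedral basis);
    y: measured projection counts. Returns the activity estimate x.
    """
    x = np.ones(A.shape[1])
    sens = A.sum(axis=0)                  # sensitivity of each basis function
    for _ in range(n_iter):
        proj = A @ x                      # forward projection
        ratio = np.where(proj > 0, y / proj, 0.0)
        x *= (A.T @ ratio) / sens         # multiplicative EM update
    return x

A = np.array([[1.0, 0.5], [0.2, 1.0], [0.7, 0.3]])
x_true = np.array([2.0, 3.0])
y = A @ x_true                            # noise-free projections
x_hat = mlem(A, y)
```

With consistent (noise-free) data the iteration converges to the true activity; with Poisson noise realizations, as in the paper, it converges to the maximum-likelihood estimate instead.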
Provisional-Ideal-Point-Based Multi-objective Optimization Method for Drone Delivery Problem
Omagari, Hiroki; Higashino, Shin-Ichiro
2018-04-01
In this paper, we propose a new evolutionary multi-objective optimization method for solving drone delivery problems (DDPs), which can be formulated as constrained multi-objective optimization problems. In our previous research, we proposed the "aspiration-point-based method" to solve multi-objective optimization problems. However, that method needs the optimal value of each objective function to be calculated in advance, and it does not consider constraint conditions other than the objective functions. It therefore cannot be applied to DDPs, which have many constraint conditions. To resolve these issues, we propose the "provisional-ideal-point-based method." The proposed method defines a "penalty value" to search for feasible solutions, and a new reference solution named the "provisional ideal point" to search for the solution preferred by a decision maker. In this way, we eliminate the preliminary calculations and broaden the scope of application. Results on benchmark test problems show that the proposed method generates the preferred solution efficiently. Its usefulness is also demonstrated by applying it to a DDP: the delivery path combining one drone with one truck drastically reduces the traveling distance and the delivery time compared with using only one truck.
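The selection idea in the abstract (penalty values for infeasible candidates, distance to a provisional ideal point for feasible ones) can be illustrated as follows. The particular penalty rule and Euclidean distance used here are simplifying assumptions for the sketch, not the authors' formulas:

```python
def select_preferred(objectives, violations):
    """Pick the candidate closest to a provisional ideal point.

    objectives: list of objective vectors (all to be minimized);
    violations: total constraint violation per candidate (0 = feasible).
    Feasible candidates compete on distance to the provisional ideal
    point (componentwise minimum over feasible objective vectors);
    infeasible ones are ranked by violation only (a simple penalty rule).
    """
    feas = [f for f, v in zip(objectives, violations) if v == 0]
    if not feas:                       # nothing feasible yet: least violation
        return min(range(len(objectives)), key=lambda i: violations[i])
    ideal = [min(col) for col in zip(*feas)]
    def key(i):
        if violations[i] > 0:
            return (1, violations[i])  # infeasible candidates rank last
        d = sum((a - b) ** 2 for a, b in zip(objectives[i], ideal)) ** 0.5
        return (0, d)
    return min(range(len(objectives)), key=key)

objs = [[2.0, 5.0], [3.0, 3.0], [1.0, 9.0], [0.5, 0.5]]
viol = [0.0, 0.0, 0.0, 4.0]            # last candidate violates constraints
best = select_preferred(objs, viol)
```

Note that the ideal point here is provisional: it is recomputed from the current feasible set, so no optimal objective values need to be known in advance.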
International Nuclear Information System (INIS)
Rachakonda, Prem; Muralikrishnan, Bala; Lee, Vincent; Shilling, Meghan; Sawyer, Daniel; Cournoyer, Luc; Cheok, Geraldine
2017-01-01
The Dimensional Metrology Group at the National Institute of Standards and Technology is performing research to support the development of documentary standards within the ASTM E57 committee. This committee is addressing the point-to-point performance evaluation of a subclass of 3D imaging systems called terrestrial laser scanners (TLSs), which are laser-based and use a spherical coordinate system. This paper discusses the usage of sphere targets for this effort, and methods to minimize the errors due to the determination of their centers. The key contributions of this paper include methods to segment sphere data from a TLS point cloud, and the study of some of the factors that influence the determination of sphere centers. (paper)
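A common first step in determining sphere-target centers from segmented TLS data is an algebraic least-squares sphere fit, which reduces the problem to a linear system. This is a generic sketch of that step, not the NIST procedure; the sphere size and position are illustrative:

```python
import numpy as np

def fit_sphere(points):
    """Algebraic least-squares sphere fit to an (n, 3) point array.

    Rewrites |p|^2 = 2 c . p + k with k = r^2 - |c|^2, which is linear
    in (c, k); often used as an initial fit before robust refinement.
    """
    p = np.asarray(points, float)
    A = np.hstack([2.0 * p, np.ones((len(p), 1))])
    b = (p ** 2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    center, k = sol[:3], sol[3]
    radius = np.sqrt(k + center @ center)
    return center, radius

# Synthetic scan: points on a 75 mm radius sphere centered at (1, 2, 3) m
rng = np.random.default_rng(0)
d = rng.normal(size=(500, 3))
d /= np.linalg.norm(d, axis=1, keepdims=True)   # random unit directions
pts = np.array([1.0, 2.0, 3.0]) + 0.075 * d
c, r = fit_sphere(pts)
```

On noise-free data the fit is exact; real TLS measurements add range noise and partial coverage, which is where the center-determination errors studied in the paper arise.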
Non-Interior Continuation Method for Solving the Monotone Semidefinite Complementarity Problem
International Nuclear Information System (INIS)
Huang, Z.H.; Han, J.
2003-01-01
Recently, Chen and Tseng extended non-interior continuation smoothing methods for solving linear/nonlinear complementarity problems to semidefinite complementarity problems (SDCP). In this paper we propose a non-interior continuation method for solving the monotone SDCP based on the smoothed Fischer-Burmeister function, which is shown to be globally linearly and locally quadratically convergent under suitable assumptions. Our algorithm needs to solve at most one linear system of equations at each iteration. In addition, in our analysis of the global linear convergence of the algorithm, we do not need the assumption that the Frechet derivative of the function involved in the SDCP is Lipschitz continuous. For non-interior continuation/smoothing methods for solving the nonlinear complementarity problem, such an assumption has been used widely in the literature in order to achieve global linear convergence results.
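In its scalar form the smoothed Fischer-Burmeister function is phi_mu(a, b) = a + b - sqrt(a^2 + b^2 + 2 mu^2); at mu = 0 its roots are exactly the complementary pairs (a >= 0, b >= 0, ab = 0). A scalar sketch only: the SDCP applies this function to matrices via their spectral decompositions, which is omitted here:

```python
import math

def smoothed_fb(a, b, mu):
    """Smoothed Fischer-Burmeister function (scalar version):
    phi_mu(a, b) = a + b - sqrt(a^2 + b^2 + 2*mu^2).
    For mu = 0, phi(a, b) = 0 iff a >= 0, b >= 0 and a*b = 0,
    i.e. it encodes the complementarity condition as an equation."""
    return a + b - math.sqrt(a * a + b * b + 2.0 * mu * mu)

# A complementary pair (a = 0, b = 3) is an exact root at mu = 0 ...
root = smoothed_fb(0.0, 3.0, 0.0)
# ... and the smoothing perturbs the equation only slightly for small mu
near = smoothed_fb(0.0, 3.0, 0.01)
# A non-complementary pair is not a root
off = smoothed_fb(1.0, 1.0, 0.0)
```

The smoothing parameter mu > 0 makes the function continuously differentiable everywhere, which is what the continuation method drives to zero along its path.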
International Nuclear Information System (INIS)
Park, Yujin; Kazantzis, Nikolaos; Parlos, Alexander G.; Chong, Kil To
2013-01-01
Highlights: • Numerical solution of stiff differential equations using the matrix exponential method. • The approximation is based on a First-Order Hold assumption. • Various input examples applied to the point kinetics equations. • The method proves useful and effective. - Abstract: A system of nonlinear differential equations is derived to model the dynamics of the neutron density and the delayed neutron precursors within a point kinetics equation modeling framework for a nuclear reactor. The point kinetics equations are mathematically characterized as stiff, occasionally nonlinear, ordinary differential equations, posing significant challenges when numerical solutions are sought and traditionally requiring smaller time step intervals within various computational schemes. In light of the above, the present paper proposes a new discretization method inspired by system-theoretic notions and technically based on a combination of the matrix exponential method (MEM) and the First-Order Hold (FOH) assumption. Under the proposed time discretization structure, the sampled-data representation of the nonlinear point kinetics system of equations is derived. The performance of the proposed time discretization procedure is evaluated using several case studies with sinusoidal reactivity profiles and multiple input examples (reactivity and neutron source function). It is shown that by applying the proposed method under a First-Order Hold for the neutron density and the precursor concentrations at each time step interval, the stiffness problem associated with the point kinetics equations can be adequately addressed and resolved. Finally, as evidenced by the detailed simulation studies, the proposed method retains its validity and accuracy for a wide range of reactor operating conditions, including large sampling periods dictated by physical and/or technical limitations associated with the current state of sensor and
A new integral method for solving the point reactor neutron kinetics equations
International Nuclear Information System (INIS)
Li Haofeng; Chen Wenzhen; Luo Lei; Zhu Qian
2009-01-01
A numerical integral method that efficiently provides the solution of the point kinetics equations by using the better basis function (BBF) for the approximation of the neutron density in one time step integrations is described and investigated. The approach is based on an exact analytic integration of the neutron density equation, where the stiffness of the equations is overcome by the fully implicit formulation. The procedure is tested by using a variety of reactivity functions, including step reactivity insertion, ramp input and oscillatory reactivity changes. The solution of the better basis function method is compared to other analytical and numerical solutions of the point reactor kinetics equations. The results show that selecting a better basis function can improve the efficiency and accuracy of this integral method. The better basis function method can be used in real time forecasting for power reactors in order to prevent reactivity accidents.
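The role of a fully implicit formulation in overcoming stiffness can be illustrated with the one-delayed-group point kinetics equations, where each backward Euler step reduces to a 2x2 linear solve. This is a generic sketch, not the better-basis-function method itself, and the kinetics parameters below are typical illustrative values, not from the paper:

```python
def point_kinetics_step(n, C, rho, dt, beta=0.0065, Lam=1e-4, lam=0.08):
    """One fully implicit (backward Euler) step of the one-delayed-group
    point kinetics equations
        dn/dt = ((rho - beta)/Lam) * n + lam * C
        dC/dt = (beta/Lam) * n - lam * C.
    The implicitness tames the stiffness; each step solves the 2x2
    system (I - dt*A) x_new = x_old by Cramer's rule."""
    a11 = 1.0 - dt * (rho - beta) / Lam
    a12 = -dt * lam
    a21 = -dt * beta / Lam
    a22 = 1.0 + dt * lam
    det = a11 * a22 - a12 * a21
    n_new = (a22 * n - a12 * C) / det
    C_new = (a11 * C - a21 * n) / det
    return n_new, C_new

# Critical reactor (rho = 0) at equilibrium: C = beta * n / (Lam * lam)
n, C = 1.0, 0.0065 * 1.0 / (1e-4 * 0.08)
for _ in range(100):
    n, C = point_kinetics_step(n, C, rho=0.0, dt=0.1)
```

Note the time step of 0.1 s, far larger than the prompt neutron generation time of 1e-4 s; an explicit scheme would blow up at this step size, while the implicit step preserves the critical equilibrium.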
A simple method for determining the critical point of the soil water retention curve
DEFF Research Database (Denmark)
Chen, Chong; Hu, Kelin; Ren, Tusheng
2017-01-01
The transition point between capillary water and adsorbed water, which is the critical point Pc [defined by the critical matric potential (ψc) and the critical water content (θc)] of the soil water retention curve (SWRC), demarcates the energy and water content region where flow is dominated…, a fixed tangent line method was developed to estimate Pc as an alternative to the commonly used flexible tangent line method. The relationships between Pc and particle-size distribution and specific surface area (SSA) were analyzed. For 27 soils with various textures, the mean RMSE of water content from… the fixed tangent line method was 0.007 g g–1, which was slightly better than that of the flexible tangent line method. With increasing clay content or SSA, ψc was initially more negative but became less negative at clay contents above ∼30%. Increasing the silt contents resulted in more negative ψc values…
Liu, Youshan; Teng, Jiwen; Xu, Tao; Badal, José
2017-05-01
The mass-lumped method avoids the cost of inverting the mass matrix and simultaneously maintains spatial accuracy by adopting additional interior integration points, known as cubature points. To date, such points are only known analytically in tensor domains, such as quadrilateral or hexahedral elements. Thus, the diagonal-mass-matrix spectral element method (SEM) in non-tensor domains always relies on numerically computed interpolation points or quadrature points. However, only the cubature points for degrees 1 to 6 are known, which is the reason that we have developed a p-norm-based optimization algorithm to obtain higher-order cubature points. In this way, we obtain and tabulate new cubature points with all positive integration weights for degrees 7 to 9. The dispersion analysis illustrates that the dispersion relation determined from the new optimized cubature points is comparable to that of the mass and stiffness matrices obtained by exact integration. Simultaneously, the Lebesgue constant for the new optimized cubature points indicates its surprisingly good interpolation properties. As a result, such points provide both good interpolation properties and integration accuracy. The Courant-Friedrichs-Lewy (CFL) numbers are tabulated for the conventional Fekete-based triangular spectral element (TSEM), the TSEM with exact integration, and the optimized cubature-based TSEM (OTSEM). A complementary study demonstrates the spectral convergence of the OTSEM. A numerical example conducted on a half-space model demonstrates that the OTSEM improves the accuracy by approximately one order of magnitude compared to the conventional Fekete-based TSEM. In particular, the accuracy of the 7th-order OTSEM is even higher than that of the 14th-order Fekete-based TSEM. Furthermore, the OTSEM produces a result that can compete in accuracy with the quadrilateral SEM (QSEM). The high accuracy of the OTSEM is also tested with a non-flat topography model. In terms of computational
International Nuclear Information System (INIS)
Feng Guangwen; Hu Youhua; Liu Qian
2009-01-01
In this paper, the application of the entropy-weight TOPSIS method to optimizing the layout of points for monitoring the Xinjiang radiation environment is introduced. With the help of SAS software, the method was found to be ideal and feasible, and it can serve as a reference for monitoring the radiation environment in similar regions. Since the method brings great convenience and greatly reduces the inspection workload, it is simple, flexible and effective for comprehensive evaluation. (authors)
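An entropy-weight TOPSIS ranking of candidate monitoring points can be sketched as follows: criterion weights are derived from the information entropy of the column-normalized decision matrix, and alternatives are scored by relative closeness to the ideal solution. The decision matrix below is illustrative; the paper's actual criteria and SAS implementation are not reproduced:

```python
import numpy as np

def entropy_topsis(X, benefit):
    """Entropy-weight TOPSIS scores for candidate monitoring points.

    X: (m alternatives, n criteria) decision matrix, strictly positive;
    benefit: True for criteria to maximize, False for those to minimize.
    Returns closeness scores in (0, 1); higher is better."""
    P = X / X.sum(axis=0)                         # column-normalized shares
    m = len(X)
    E = -(P * np.log(P)).sum(axis=0) / np.log(m)  # entropy per criterion
    w = (1 - E) / (1 - E).sum()                   # entropy weights
    V = w * X / np.sqrt((X ** 2).sum(axis=0))     # weighted normalized matrix
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    worst = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_pos = np.sqrt(((V - ideal) ** 2).sum(axis=1))
    d_neg = np.sqrt(((V - worst) ** 2).sum(axis=1))
    return d_neg / (d_pos + d_neg)

# Three candidate points scored on two benefit criteria
X = np.array([[0.8, 0.2], [0.5, 0.5], [0.2, 0.9]])
scores = entropy_topsis(X, benefit=np.array([True, True]))
```

Criteria whose values vary more across alternatives carry lower entropy and hence higher weight, which is what makes the weighting objective rather than expert-assigned.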
Infeasible Interior-Point Methods for Linear Optimization Based on Large Neighborhood
Asadi, A.R.; Roos, C.
2015-01-01
In this paper, we design a class of infeasible interior-point methods for linear optimization based on large neighborhood. The algorithm is inspired by a full-Newton step infeasible algorithm with a linear convergence rate in problem dimension that was recently proposed by the second author.
Method of Check of Statistical Hypotheses for Revealing of “Fraud” Point of Sale
Directory of Open Access Journals (Sweden)
T. M. Bolotskaya
2011-06-01
A method of testing statistical hypotheses is applied to reveal "fraud" points of sale that work with purchasing cards and are suspected of performing unauthorized operations. On the basis of the results obtained, an algorithm is developed that makes it possible to assess the operation of terminals in off-line mode.
Generic primal-dual interior point methods based on a new kernel function
EL Ghami, M.; Roos, C.
2008-01-01
In this paper we present generic primal-dual interior-point methods (IPMs) for linear optimization in which the search direction depends on a univariate kernel function that is also used as a proximity measure in the analysis of the algorithm. The proposed kernel function does not satisfy all the
A Riccati-Based Interior Point Method for Efficient Model Predictive Control of SISO Systems
DEFF Research Database (Denmark)
Hagdrup, Morten; Johansson, Rolf; Bagterp Jørgensen, John
2017-01-01
model parts separate. The controller is designed based on the deterministic model, while the Kalman filter results from the stochastic part. The controller is implemented as a primal-dual interior point (IP) method using Riccati recursion and the computational savings possible for SISO systems...
A continuous exchange factor method for radiative exchange in enclosures with participating media
International Nuclear Information System (INIS)
Naraghi, M.H.N.; Chung, B.T.F.; Litkouhi, B.
1987-01-01
A continuous exchange factor method for the analysis of radiative exchange in enclosures is developed. In this method two types of exchange functions are defined: the direct exchange function and the total exchange function. Integral equations relating total exchange functions to direct exchange functions are developed and solved using the Gaussian quadrature integration method. The results obtained with the present approach are found to be more accurate than those of the zonal method.
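The Gaussian quadrature step used to solve such integral equations can be illustrated with a generic Gauss-Legendre routine; the exponential integrand below stands in for an attenuation-type kernel and is not the paper's exchange function:

```python
import numpy as np

def gauss_integrate(f, a, b, n=8):
    """Integrate f over [a, b] by n-point Gauss-Legendre quadrature,
    mapping the standard nodes on [-1, 1] to the interval [a, b]."""
    x, w = np.polynomial.legendre.leggauss(n)
    t = 0.5 * (b - a) * x + 0.5 * (b + a)     # affine node mapping
    return 0.5 * (b - a) * (w * f(t)).sum()

# Example: integral of exp(-x) over [0, 1], exactly 1 - exp(-1)
val = gauss_integrate(lambda x: np.exp(-x), 0.0, 1.0)
```

For smooth kernels an n-point rule is exact for polynomials up to degree 2n - 1, which is why a modest number of nodes suffices when discretizing the exchange-function integral equations.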
A novel maximum power point tracking method for PV systems using fuzzy cognitive networks (FCN)
Energy Technology Data Exchange (ETDEWEB)
Karlis, A.D. [Electrical Machines Laboratory, Department of Electrical & Computer Engineering, Democritus University of Thrace, V. Sofias 12, 67100 Xanthi (Greece); Kottas, T.L.; Boutalis, Y.S. [Automatic Control Systems Laboratory, Department of Electrical & Computer Engineering, Democritus University of Thrace, V. Sofias 12, 67100 Xanthi (Greece)
2007-03-15
Maximum power point trackers (MPPTs) play an important role in photovoltaic (PV) power systems because they maximize the power output from a PV system for a given set of conditions, and therefore maximize the array efficiency. This paper presents a novel MPPT method based on fuzzy cognitive networks (FCN). The new method gives a good maximum power operation of any PV array under different conditions such as changing insolation and temperature. The numerical results show the effectiveness of the proposed algorithm. (author)
Two-point method uncertainty during control and measurement of cylindrical element diameters
Glukhov, V. I.; Shalay, V. V.; Radev, H.
2018-04-01
The article addresses the urgent problem of the reliability of measurements of the geometric specifications of technical products. Its purpose is to improve the quality of control of the linear sizes of parts by the two-point measurement method, and its task is to investigate methodical extended uncertainties in measuring the linear sizes of cylindrical elements. The investigation method is geometric modeling of the shape and location deviations of element surfaces in a rectangular coordinate system. The studies were carried out for elements of various service use, taking into account their informativeness, corresponding to the kinematic pair classes of theoretical mechanics and the number of constrained degrees of freedom in the datum element function. Cylindrical elements with informativeness of 4, 2, 1 and 0 (zero) were investigated. The uncertainties in two-point measurements were estimated by comparing the results of linear dimension measurements with the maximum and minimum functional diameters of the element material. Methodical uncertainty arises when cylindrical elements with maximum informativeness have shape deviations of the cut and curvature types, and when the average size of an element is measured for all types of shape deviations. The two-point measurement method cannot take into account the location deviations of a dimensional element, so its use for elements with less than maximum informativeness creates unacceptable methodical uncertainties in measurements of the maximum, minimum and medium linear dimensions. Similar methodical uncertainties also exist in the arbitration control of the linear dimensions of cylindrical elements by limiting two-point gauges.
Interior-Point Method for Non-Linear Non-Convex Optimization
Czech Academy of Sciences Publication Activity Database
Lukšan, Ladislav; Matonoha, Ctirad; Vlček, Jan
2004-01-01
Roč. 11, č. 5-6 (2004), s. 431-453 ISSN 1070-5325 R&D Projects: GA AV ČR IAA1030103 Institutional research plan: CEZ:AV0Z1030915 Keywords : non-linear programming * interior point methods * indefinite systems * indefinite preconditioners * preconditioned conjugate gradient method * merit functions * algorithms * computational experiments Subject RIV: BA - General Mathematics Impact factor: 0.727, year: 2004
Limiting Accuracy of Segregated Solution Methods for Nonsymmetric Saddle Point Problems
Czech Academy of Sciences Publication Activity Database
Jiránek, P.; Rozložník, Miroslav
Roč. 215, č. 1 (2008), s. 28-37 ISSN 0377-0427 R&D Projects: GA MŠk 1M0554; GA AV ČR 1ET400300415 Institutional research plan: CEZ:AV0Z10300504 Keywords : saddle point problems * Schur complement reduction method * null-space projection method * rounding error analysis Subject RIV: BA - General Mathematics Impact factor: 1.048, year: 2008
DEFF Research Database (Denmark)
Choi, Uimin; Lee, Kyo-Beum; Blaabjerg, Frede
2013-01-01
This paper proposes a method to reduce the low-frequency neutral-point voltage oscillations. The neutral-point voltage oscillations are considerably reduced by adding a time-offset to the three phase turn-on times. The proper time-offset is simply calculated considering the phase currents and dwell...
A point-value enhanced finite volume method based on approximate delta functions
Xuan, Li-Jun; Majdalani, Joseph
2018-02-01
We revisit the concept of an approximate delta function (ADF), introduced by Huynh (2011) [1], in the form of a finite-order polynomial that holds identical integral properties to the Dirac delta function when used in conjunction with a finite-order polynomial integrand over a finite domain. We show that the use of generic ADF polynomials can be effective at recovering and generalizing several high-order methods, including Taylor-based and nodal-based Discontinuous Galerkin methods, as well as the Correction Procedure via Reconstruction. Based on the ADF concept, we then proceed to formulate a Point-value enhanced Finite Volume (PFV) method, which stores and updates the cell-averaged values inside each element as well as the unknown quantities and, if needed, their derivatives on nodal points. The sharing of nodal information with surrounding elements saves the number of degrees of freedom compared to other compact methods at the same order. To ensure conservation, cell-averaged values are updated using an identical approach to that adopted in the finite volume method. Here, the updating of nodal values and their derivatives is achieved through an ADF concept that leverages all of the elements within the domain of integration that share the same nodal point. The resulting scheme is shown to be very stable at successively increasing orders. Both accuracy and stability of the PFV method are verified using a Fourier analysis and through applications to the linear wave and nonlinear Burgers' equations in one-dimensional space.
DEFF Research Database (Denmark)
Buron, Jonas Christian Due; Pizzocchero, Filippo; Jessen, Bjarke Sørensen
2014-01-01
The electrical performance of graphene synthesized by chemical vapor deposition and transferred to insulating surfaces may be compromised by extended defects, including for instance grain boundaries, cracks, wrinkles, and tears. In this study, we experimentally investigate and compare the nano- and microscale electrical continuity of single layer graphene grown on centimeter-sized single crystal copper with that of previously studied graphene films, grown on commercially available copper foil, after transfer to SiO2 surfaces. The electrical continuity of the graphene films is analyzed using two… for measurement of the complex conductance response in the frequency range 1-15 terahertz, covering the entire intraband conductance spectrum, and reveals that the conductance response for the graphene grown on single crystalline copper intimately follows the Drude model for a barrier-free conductor. In contrast…
Yoon, Myeong-Ho; Tahk, Seung-Jea; Yang, Hyoung-Mo; Park, Jin-Sun; Zheng, Mingri; Lim, Hong-Seok; Choi, Byoung-Joo; Choi, So-Yeon; Choi, Un-Jung; Hwang, Joung-Won; Kang, Soo-Jin; Hwang, Gyo-Seung; Shin, Joon-Han
2009-06-01
Inducing stable maximal coronary hyperemia is essential for measurement of fractional flow reserve (FFR). We evaluated the efficacy of the intracoronary (IC) continuous adenosine infusion method via a microcatheter for inducing maximal coronary hyperemia. In 43 patients with 44 intermediate coronary lesions, FFR was measured consecutively by IC bolus adenosine injection (48-80 microg in the left coronary artery, 36-60 microg in the right coronary artery) and a standard intravenous (IV) adenosine infusion (140 microg x min(-1) x kg(-1)). After completion of the IV infusion method, the tip of an IC microcatheter (Progreat Microcatheter System, Terumo, Japan) was positioned at the coronary ostium, and FFR was measured with increasing IC continuous adenosine infusion rates from 60 to 360 microg/min via the microcatheter. Fractional flow reserve decreased with increasing IC adenosine infusion rates, and no further decrease was observed after 300 microg/min. All patients tolerated the procedures well. Fractional flow reserves measured by IC adenosine infusion at 180, 240, 300, and 360 microg/min were significantly lower than those by IV infusion (P < .05). Intracoronary infusion at 180, 240, 300, and 360 microg/min shortened the times to induction of optimal and steady-state hyperemia compared with IV infusion (P < .05). Functional significance changed in 5 lesions with IC infusion at 240 to 360 microg/min but not with IV infusion. The results of this study suggest that an IC adenosine continuous infusion method via a microcatheter is safe and effective in inducing steady-state hyperemia, and more potent and quicker in inducing optimal hyperemia than the standard IV infusion method.
C1-continuous Virtual Element Method for Poisson-Kirchhoff plate problem
Energy Technology Data Exchange (ETDEWEB)
Gyrya, Vitaliy [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Mourad, Hashem Mohamed [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2016-09-20
We present a family of C1-continuous high-order Virtual Element Methods for the Poisson-Kirchhoff plate bending problem. The convergence of the methods is tested on a variety of meshes including rectangular, quadrilateral, and meshes obtained by edge removal (i.e. highly irregular meshes). The convergence rates are presented for all of these tests.
Using Financial Information in Continuing Education. Accepted Methods and New Approaches.
Matkin, Gary W.
This book, which is intended as a resource/reference guide for experienced financial managers and course planners, examines accepted methods and new approaches for using financial information in continuing education. The introduction reviews theory and practice, traditional and new methods, planning and organizational management, and technology.…
Continuous anneal method for characterizing the thermal stability of ultraviolet Bragg gratings
DEFF Research Database (Denmark)
Rathje, Jacob; Kristensen, Martin; Pedersen, Jens Engholm
2000-01-01
We present a new method for determining the long-term stability of UV-induced fiber Bragg gratings. We use a continuous temperature ramp method in which systematic variation of the ramp speed probes both the short- and long-term stability. Results are obtained both for gratings written in D2 loaded...... we resolve two separate energy distributions, suggesting that two different defects are involved. The experiments show that complicated decays originating from various energy distributions can be analyzed with this continuous isochronal anneal method. The results have both practical applications...
Directory of Open Access Journals (Sweden)
Takahiro Yamaguchi
2015-05-01
Full Text Available As smartphones become widespread, a variety of smartphone applications are being developed. This paper proposes a method for indoor localization (i.e., positioning) that uses only smartphones, which are general-purpose mobile terminals, as reference point devices. This method has the following features: (a) the localization system is built with smartphones whose movements are confined to respective limited areas, and no fixed reference point devices are used; (b) the method does not depend on the wireless performance of smartphones and does not require information about the propagation characteristics of the radio waves sent from reference point devices; and (c) the method determines the location at the application layer, at which location information can be easily incorporated into high-level services. We have evaluated the level of localization accuracy of the proposed method by building a software emulator that modeled an underground shopping mall. We have confirmed that the determined location is within a small area in which the user can find target objects visually.
DEFF Research Database (Denmark)
Structure from Motion (SFM) systems are composed of cameras and structure in the form of 3D points and other features. Most often, the structure components outnumber the cameras by a great margin. It is not uncommon to have a configuration with 3 cameras observing more than 500 3D points...... an overview of existing triangulation methods with emphasis on performance versus optimality, and will suggest a fast triangulation algorithm based on linear constraints. The structure and camera motion estimation in a SFM system is based on the minimization of some norm of the reprojection error between...
A Survey on Methods for Reconstructing Surfaces from Unorganized Point Sets
Directory of Open Access Journals (Sweden)
Vilius Matiukas
2011-08-01
Full Text Available This paper addresses the issue of reconstructing and visualizing surfaces from unorganized point sets. These can be acquired using different techniques, such as 3D laser scanning, computerized tomography, magnetic resonance imaging and multi-camera imaging. The problem of reconstructing surfaces from their unorganized point sets is common to many diverse areas, including computer graphics, computer vision, computational geometry and reverse engineering. The paper presents three alternative methods that all use variations in complementary cones to triangulate and reconstruct the tested 3D surfaces. The article evaluates and contrasts the three alternatives. Article in English
An Improved Computational Method for the Calculation of Mixture Liquid-Vapor Critical Points
Dimitrakopoulos, Panagiotis; Jia, Wenlong; Li, Changjun
2014-05-01
Knowledge of critical points is important for determining the phase behavior of a mixture. This work proposes a reliable and accurate method to locate the liquid-vapor critical point of a given mixture. The theoretical model is developed from the rigorous definition of critical points, based on the SRK equation of state (SRK EoS) or, alternatively, on the PR EoS. In order to solve the resulting system of nonlinear equations, an improved method is introduced into an existing Newton-Raphson algorithm, which can calculate all the variables simultaneously in each iteration step. The improvements mainly focus on the derivatives of the Jacobian matrix, on the convergence criteria, and on the damping coefficient. All equations and related conditions required for the computation scheme are given in this paper. Finally, experimental data for the critical points of 44 mixtures are adopted in order to validate the method. For the SRK EoS, average absolute errors of the predicted critical-pressure and critical-temperature values are 123.82 kPa and 3.11 K, respectively, whereas the commercial software package Calsep PVTSIM's prediction errors are 131.02 kPa and 3.24 K. For the PR EoS, the two above-mentioned average absolute errors are 129.32 kPa and 2.45 K, while PVTSIM's errors are 137.24 kPa and 2.55 K, respectively.
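The improved solver described above updates all variables simultaneously in each Newton-Raphson step. A generic damped Newton sketch on a toy two-equation system illustrates that structure; this is an assumption-laden stand-in, not the SRK/PR critical-point conditions themselves.

```python
import numpy as np

def newton_system(F, x0, tol=1e-10, max_iter=100, damping=1.0):
    """Damped Newton-Raphson for F(x) = 0, updating all variables per step.
    The Jacobian is approximated here by forward differences."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        f = F(x)
        if np.linalg.norm(f) < tol:
            return x
        n = len(x)
        J = np.empty((n, n))
        h = 1e-7
        for j in range(n):  # forward-difference Jacobian, column by column
            xp = x.copy()
            xp[j] += h
            J[:, j] = (F(xp) - f) / h
        x = x - damping * np.linalg.solve(J, f)  # damped full-vector update
    return x

# toy system standing in for the critical-point conditions:
# x^2 + y^2 = 4 and x*y = 1
F = lambda v: np.array([v[0] ** 2 + v[1] ** 2 - 4.0, v[0] * v[1] - 1.0])
root = newton_system(F, [2.0, 0.3])
```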
Skinner, Carl G; Patel, Manish M; Thomas, Jerry D; Miller, Michael A
2011-01-01
Statistical methods are pervasive in medical research and general medical literature. Understanding general statistical concepts will enhance our ability to critically appraise the current literature and ultimately improve the delivery of patient care. This article intends to provide an overview of the common statistical methods relevant to medicine.
Directory of Open Access Journals (Sweden)
Wilson Rodríguez Calderón
2015-04-01
Full Text Available When we need to determine the solution of a nonlinear equation, there are two options: closed methods, which use intervals that contain the root and reduce the interval size in a natural way during the iterative process, and open methods, which are an attractive option because they do not require an initial enclosing interval. In general, open methods are more computationally efficient, though they do not always converge. In this paper we present a divergence case analysis that arises when the fixed-point iteration method is used to find the normal height in a rectangular channel with the Manning equation. To solve this problem, we propose applying two strategies (developed by the authors) that modify the iteration function by making additional formulations of the traditional method and its convergence theorem. Although the Manning equation is solved with other methods such as Newton's, an interesting divergence situation arises when the fixed-point iteration method is used, which can be solved with a convergence higher than quadratic over the initial iterations. The proposed strategies have been tested in two cases; a study of the divergence of the square root of real numbers was previously made by the authors for testing. Results in both cases have been successful. We present comparisons because they are important for seeing the advantage of the proposed strategies versus the most representative open methods.
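The underlying fixed-point iteration can be sketched as below. This is a minimal illustration of the standard rearrangement of the Manning equation for a rectangular channel, not the authors' modified strategies, and the channel values are made up.

```python
import math

# Normal depth y for a rectangular channel from the Manning equation
#   Q = (1/n) * A * R^(2/3) * sqrt(S),  with A = b*y and R = b*y / (b + 2*y),
# rearranged into the fixed-point form  y = g(y).
def normal_depth(Q, b, n, S, y0=1.0, tol=1e-10, max_iter=200):
    k = (n * Q / math.sqrt(S)) ** 0.6  # from (b*y)^(5/3) = n*Q*(b+2y)^(2/3)/sqrt(S)
    y = y0
    for _ in range(max_iter):
        y_new = k * (b + 2.0 * y) ** 0.4 / b  # g(y)
        if abs(y_new - y) < tol:
            return y_new
        y = y_new
    return y

# illustrative values: discharge (m^3/s), width (m), Manning n, bed slope
Q, b, n, S = 5.0, 3.0, 0.013, 0.001
y = normal_depth(Q, b, n, S)
```

For these values the iteration map has |g'(y)| well below 1 near the root, so the plain scheme converges; the paper's point is that other parameter choices can make it diverge.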
An Introduction to the Material Point Method using a Case Study from Gas Dynamics
International Nuclear Information System (INIS)
Tran, L. T.; Kim, J.; Berzins, M.
2008-01-01
The Material Point Method (MPM) developed by Sulsky and colleagues is currently being used to solve many challenging problems involving large deformations and/or fragmentations, with considerable success, as part of the Uintah code created by the CSAFE project. In order to understand the properties of this method, an analysis of the computational properties of MPM is undertaken in the context of model problems from gas dynamics. One aspect of the MPM method in the form used here is shown to have first-order accuracy. Computational experiments using particle redistribution are described and show that smooth results with first-order accuracy may be obtained.
Directory of Open Access Journals (Sweden)
Vassilios Gregoriades
2010-06-01
Full Text Available In this article we treat a notion of continuity for a multi-valued function F and we compute the descriptive set-theoretic complexity of the set of all x for which F is continuous at x. We give conditions under which the latter set is either a G_delta set or the countable union of G_delta sets. We also provide a counterexample which shows that the latter result is optimal under the same conditions. Moreover, we prove that those conditions are necessary in order to obtain that the set of points of continuity of F is Borel; i.e., we show that if we drop some of the previous conditions then there is a multi-valued function F whose graph is a Borel set and the set of points of continuity of F is not a Borel set. Finally, we give some analogous results regarding a stronger notion of continuity for a multi-valued function. This article is motivated by a question of M. Ziegler in "Real Computation with Least Discrete Advice: A Complexity Theory of Nonuniform Computability with Applications to Linear Algebra" (submitted).
Continuous non-invasive blood glucose monitoring by spectral image differencing method
Huang, Hao; Liao, Ningfang; Cheng, Haobo; Liang, Jing
2018-01-01
Currently, implantable enzyme electrode sensors are the main method for continuous blood glucose monitoring. However, electrochemical reactions and the significant drift caused by bioelectricity in the body reduce the accuracy of the glucose measurements, so enzyme-based glucose sensors need to be calibrated several times each day by finger-prick blood corrections, which increases the patient's pain. In this paper, we propose a method for continuous non-invasive blood glucose monitoring by a spectral image differencing method in the near-infrared band. The method uses a high-precision CCD detector to switch the filter in a very short period of time and obtain the spectral images. A morphological method is then used to obtain the spectral image differences, and the dynamic change of blood sugar is reflected in the image difference data. Experiments showed that this method can be used to monitor blood glucose dynamically to a certain extent.
A method for untriggered time-dependent searches for multiple flares from neutrino point sources
International Nuclear Information System (INIS)
Gora, D.; Bernardini, E.; Cruz Silva, A.H.
2011-04-01
A method for a time-dependent search for flaring astrophysical sources which can be potentially detected by large neutrino experiments is presented. The method uses a time-clustering algorithm combined with an unbinned likelihood procedure. By including in the likelihood function a signal term which describes the contribution of many small clusters of signal-like events, this method provides an effective way for looking for weak neutrino flares over different time-scales. The method is sensitive to an overall excess of events distributed over several flares which are not individually detectable. For standard cases (one flare) the discovery potential of the method is worse than a standard time-dependent point source analysis with unknown duration of the flare by a factor depending on the signal-to-background level. However, for flares sufficiently shorter than the total observation period, the method is more sensitive than a time-integrated analysis. (orig.)
A method for untriggered time-dependent searches for multiple flares from neutrino point sources
Energy Technology Data Exchange (ETDEWEB)
Gora, D. [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany); Institute of Nuclear Physics PAN, Cracow (Poland); Bernardini, E.; Cruz Silva, A.H. [Institute of Nuclear Physics PAN, Cracow (Poland)
2011-04-15
A method for a time-dependent search for flaring astrophysical sources which can be potentially detected by large neutrino experiments is presented. The method uses a time-clustering algorithm combined with an unbinned likelihood procedure. By including in the likelihood function a signal term which describes the contribution of many small clusters of signal-like events, this method provides an effective way for looking for weak neutrino flares over different time-scales. The method is sensitive to an overall excess of events distributed over several flares which are not individually detectable. For standard cases (one flare) the discovery potential of the method is worse than a standard time-dependent point source analysis with unknown duration of the flare by a factor depending on the signal-to-background level. However, for flares sufficiently shorter than the total observation period, the method is more sensitive than a time-integrated analysis. (orig.)
Institute of Scientific and Technical Information of China (English)
LIN; Kuang-Jang; LIN; Chii-Ruey
2010-01-01
The photovoltaic array has an optimal operating point at which it can deliver maximum power. However, the optimal operating point shifts with the strength and angle of solar radiation and with changes in environment and load. Due to the constant changes in these conditions, it has become very difficult to locate the optimal operating point by following a mathematical model. Therefore, this study focuses on the application of Fuzzy Logic Control theory and the Three-point Weight Comparison Method to locate the optimal operating point of a solar panel and achieve maximum efficiency in power generation. The Three-point Weight Comparison Method compares points on the characteristic curve of photovoltaic array voltage versus output power; it is a rather simple way to track the maximum power. Fuzzy Logic Control, on the other hand, can be used to solve problems that cannot be effectively dealt with by calculation rules, such as concepts, contemplation, deductive reasoning, and identification. This paper therefore uses these two methods in successive simulations. The simulation results show that the Three-point Comparison Method is more effective in environments with more frequent changes of solar radiation, whereas Fuzzy Logic Control has better tracking efficiency in environments with violent changes of solar radiation.
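The three-point comparison idea can be sketched as a toy hill-climber: sample the array power at V - dV, V, and V + dV and step the operating voltage toward the larger power. The P-V curve below is a made-up concave stand-in, not a real panel model or the paper's fuzzy controller.

```python
# Toy P-V curve with a maximum power point at 30 V (illustrative only)
def pv_power(v):
    return max(0.0, -(v - 30.0) ** 2 / 10.0 + 100.0)

def three_point_mppt(v, dv=0.5, steps=200):
    # Compare power at three neighbouring voltages and step toward the maximum
    for _ in range(steps):
        p_left, p_mid, p_right = pv_power(v - dv), pv_power(v), pv_power(v + dv)
        if p_right > p_mid:    # power rising to the right: step up
            v += dv
        elif p_left > p_mid:   # power rising to the left: step down
            v -= dv
        # otherwise the three points bracket the maximum: hold
    return v

v_op = three_point_mppt(18.0)  # converges near the 30 V maximum power point
```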
A travel time forecasting model based on change-point detection method
LI, Shupeng; GUANG, Xiaoping; QIAN, Yongsheng; ZENG, Junwei
2017-06-01
Travel time parameters obtained from road traffic sensor data play an important role in traffic management practice. In this paper, a travel time forecasting model is proposed for urban road traffic sensor data based on a change-point detection method. A first-order differencing operation is used to preprocess the actual loop data; a change-point detection algorithm is designed to classify the sequence of a large number of travel time data items into several patterns; then a travel time forecasting model is established based on the autoregressive integrated moving average (ARIMA) model. By computer simulation, different control parameters are chosen for adaptive change-point search over the travel time series, which is divided into several sections of similar state. Then a linear weight function is used to fit the travel time sequence and to forecast travel time. The results show that the model has high accuracy in travel time forecasting.
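The differencing-based change-point step can be sketched as below; the MAD-style thresholding rule is an illustrative stand-in, not the paper's algorithm.

```python
import numpy as np

def change_points(series, k=3.0):
    # First-order differencing, then flag points where the absolute jump
    # exceeds a robust (median-absolute-deviation style) threshold.
    d = np.diff(series)
    thr = k * np.median(np.abs(d - np.median(d))) + 1e-12
    return np.where(np.abs(d) > thr)[0] + 1  # index where each new regime starts

# synthetic travel times (s): free-flow regime, then a congested regime
series = np.array([60.0] * 50 + [90.0] * 50)
cps = change_points(series)  # detects the regime change at index 50
```

Each segment between detected change points could then be fitted separately (e.g. with an ARIMA model, as in the paper).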
A novel method of measuring the concentration of anaesthetic vapours using a dew-point hygrometer.
Wilkes, A R; Mapleson, W W; Mecklenburgh, J S
1994-02-01
The Antoine equation relates the saturated vapour pressure of a volatile substance, such as an anaesthetic agent, to the temperature. The measurement of the 'dew-point' of a dry gas mixture containing a volatile anaesthetic agent by a dew-point hygrometer permits the determination of the partial pressure of the anaesthetic agent. The accuracy of this technique is limited only by the accuracy of the Antoine coefficients and of the temperature measurement. Comparing measurements by the dew-point method with measurements by refractometry showed systematic discrepancies up to 0.2% and random discrepancies with SDs up to 0.07% concentration in the 1% to 5% range for three volatile anaesthetics. The systematic discrepancies may be due to errors in available data for the vapour pressures and/or the refractive indices of the anaesthetics.
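A minimal sketch of the dew-point-to-partial-pressure step via the Antoine equation, log10(P) = A - B/(C + T). The coefficients used here are the standard ones for water (P in mmHg, T in degrees C, roughly 1-100 C) as a stand-in: agent-specific Antoine coefficients, on whose accuracy the paper's method depends, would be substituted in practice.

```python
# Saturated vapour pressure at the measured dew-point temperature equals the
# partial pressure of the vapour in the gas mixture.
def antoine_pressure(T_c, A=8.07131, B=1730.63, C=233.426):
    """Antoine equation; default coefficients are for water (mmHg, deg C)."""
    return 10.0 ** (A - B / (C + T_c))

p = antoine_pressure(100.0)  # ~760 mmHg at water's normal boiling point
```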
Collective mass and zero-point energy in the generator-coordinate method
International Nuclear Information System (INIS)
Fiolhais, C.
1982-01-01
The aim of the present thesis is the study of the collective mass parameters and the zero-point energies in the GCM framework with special regard to the fission process. After the derivation of the collective Schroedinger equation in the framework of the Gaussian overlap approximation, the inertia parameters are compared with those of the adiabatic time-dependent Hartree-Fock method. Then the kinetic and the potential zero-point energy occurring in this formulation are studied. Thereafter the practical application of the described formalism is discussed. Finally, a numerical calculation of the GCM mass parameter and the zero-point energy for the fission process on the basis of a two-center shell model with a pairing force in the BCS approximation is presented. (HSI)
SINGLE TREE DETECTION FROM AIRBORNE LASER SCANNING DATA USING A MARKED POINT PROCESS BASED METHOD
Directory of Open Access Journals (Sweden)
J. Zhang
2013-05-01
Full Text Available Tree detection and reconstruction is of great interest in large-scale city modelling. In this paper, we present a marked point process model to detect single trees from airborne laser scanning (ALS) data. We consider single trees in the ALS-recovered canopy height model (CHM) as a realization of a point process of circles. Unlike a traditional marked point process, we sample the model in a constrained configuration space by making use of image processing techniques. A Gibbs energy is defined on the model, containing a data term which judges the fitness of the model with respect to the data, and a prior term which incorporates prior knowledge of object layouts. We search for the optimal configuration through a steepest gradient descent algorithm. The presented hybrid framework was tested on three forest plots, and experiments show the effectiveness of the proposed method.
The continual reassessment method: comparison of Bayesian stopping rules for dose-ranging studies.
Zohar, S; Chevret, S
2001-10-15
The continual reassessment method (CRM) provides a Bayesian estimation of the maximum tolerated dose (MTD) in phase I clinical trials and is also used to estimate the minimal efficacy dose (MED) in phase II clinical trials. In this paper we propose Bayesian stopping rules for the CRM, based on either posterior or predictive probability distributions, that can be applied sequentially during the trial. These rules aim at early detection of either the mis-choice of dose range or a prefixed gain in the point estimate or accuracy of the estimated probability of response associated with the MTD (or MED). They were compared through a simulation study under six situations that could represent the underlying unknown dose-response (either toxicity or failure) relationship, in terms of sample size, probability of correct selection and bias of the response probability associated with the MTD (or MED). Our results show that the stopping rules act correctly, with early stopping by the first two rules, based on the posterior distribution, when the actual underlying dose-response relationship is far from that initially supposed, while the rules based on predictive gain functions stop inclusions after 20 patients on average whatever the actual dose-response curve, that is, depending mostly on the accumulated data. The stopping rules were then applied to a data set from a dose-ranging phase II clinical trial aiming at estimating the MED of midazolam in the sedation of infants during cardiac catheterization. All these findings suggest the early use of the first two rules to detect a mis-choice of dose range, while they confirm the requirement of including at least 20 patients at the same dose to reach an accurate estimate of the MTD (MED). A two-stage design is under study. Copyright 2001 John Wiley & Sons, Ltd.
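A generic posterior-probability check in the spirit of such stopping rules can be sketched with a Beta-binomial model; the prior, threshold, and data below are illustrative assumptions, not the paper's rules or dose-response model.

```python
import numpy as np

# With a Beta(a, b) prior on the toxicity probability at the current dose and
# x toxicities observed in n patients, estimate P(p_tox > target | data) by
# sampling the Beta(a + x, b + n - x) posterior.
def posterior_exceedance(x, n, target, a=1.0, b=1.0, draws=200_000, seed=1):
    rng = np.random.default_rng(seed)
    samples = rng.beta(a + x, b + n - x, size=draws)
    return float((samples > target).mean())

prob = posterior_exceedance(x=2, n=10, target=0.5)
# a rule of this kind might stop or de-escalate only when this probability is
# high (e.g. > 0.9), signalling that the dose range was mis-chosen
```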
Oster, Richard T; Grier, Angela; Lightning, Rick; Mayan, Maria J; Toth, Ellen L
2014-10-19
We used an exploratory sequential mixed methods approach to study the association between cultural continuity, self-determination, and diabetes prevalence in First Nations in Alberta, Canada. We conducted a qualitative description where we interviewed 10 Cree and Blackfoot leaders (members of Chief and Council) from across the province to understand cultural continuity, self-determination, and their relationship to health and diabetes, in the Alberta First Nations context. Based on the qualitative findings, we then conducted a cross-sectional analysis using provincial administrative data and publicly available data for 31 First Nations communities to quantitatively examine any relationship between cultural continuity and diabetes prevalence. Cultural continuity, or "being who we are", is foundational to health in successful First Nations. Self-determination, or "being a self-sufficient Nation", stems from cultural continuity and is seriously compromised in today's Alberta Cree and Blackfoot Nations. Unfortunately, First Nations are in a continuous struggle with government policy. The intergenerational effects of colonization continue to impact the culture, which undermines the sense of self-determination, and contributes to diabetes and ill health. Crude diabetes prevalence varied dramatically among First Nations with values as low as 1.2% and as high as 18.3%. Those First Nations that appeared to have more cultural continuity (measured by traditional Indigenous language knowledge) had significantly lower diabetes prevalence after adjustment for socio-economic factors (p = 0.007). First Nations that have been better able to preserve their culture may be relatively protected from diabetes.
Validation of non-rigid point-set registration methods using a porcine bladder pelvic phantom
Zakariaee, Roja; Hamarneh, Ghassan; Brown, Colin J.; Spadinger, Ingrid
2016-01-01
The problem of accurate dose accumulation in fractionated radiotherapy treatment for highly deformable organs, such as bladder, has garnered increasing interest over the past few years. However, more research is required in order to find a robust and efficient solution and to increase the accuracy over the current methods. The purpose of this study was to evaluate the feasibility and accuracy of utilizing non-rigid (affine or deformable) point-set registration in accumulating dose in bladders of different sizes and shapes. A pelvic phantom was built to house an ex vivo porcine bladder with fiducial landmarks adhered onto its surface. Four different volume fillings of the bladder were used (90, 180, 360 and 480 cc). The performance of MATLAB implementations of five different methods was compared in aligning the bladder contour point-sets. The approaches evaluated were coherent point drift (CPD), Gaussian mixture model, shape context, thin-plate spline robust point matching (TPS-RPM) and finite iterative closest point (ICP-finite). The evaluation metrics included registration runtime, target registration error (TRE), root-mean-square error (RMS) and Hausdorff distance (HD). The reference (source) dataset was alternated through all four point-sets in order to study the effect of reference volume on the registration outcomes. While all deformable algorithms provided reasonable registration results, CPD provided the best TRE values (6.4 mm), and TPS-RPM yielded the best mean RMS and HD values (1.4 and 6.8 mm, respectively). ICP-finite was the fastest technique and TPS-RPM the slowest.
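Two of the evaluation metrics used above (RMS of nearest-neighbour distances and the symmetric Hausdorff distance between point sets) can be sketched in plain numpy; the tiny 3D point sets below are illustrative.

```python
import numpy as np

def nn_dists(P, Q):
    # distance from each point of P (N, 3) to its nearest point in Q (M, 3)
    d = np.linalg.norm(P[:, None, :] - Q[None, :, :], axis=2)
    return d.min(axis=1)

def rms_error(P, Q):
    # root-mean-square of nearest-neighbour distances from P to Q
    return float(np.sqrt(np.mean(nn_dists(P, Q) ** 2)))

def hausdorff(P, Q):
    # symmetric Hausdorff distance: worst-case nearest-neighbour mismatch
    return float(max(nn_dists(P, Q).max(), nn_dists(Q, P).max()))

P = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
Q = P + np.array([0.1, 0.0, 0.0])  # Q is P shifted by 0.1 along x
# both metrics evaluate to 0.1 for this rigid shift
```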
Validation of non-rigid point-set registration methods using a porcine bladder pelvic phantom
International Nuclear Information System (INIS)
Zakariaee, Roja; Hamarneh, Ghassan; Brown, Colin J; Spadinger, Ingrid
2016-01-01
The problem of accurate dose accumulation in fractionated radiotherapy treatment for highly deformable organs, such as bladder, has garnered increasing interest over the past few years. However, more research is required in order to find a robust and efficient solution and to increase the accuracy over the current methods. The purpose of this study was to evaluate the feasibility and accuracy of utilizing non-rigid (affine or deformable) point-set registration in accumulating dose in bladders of different sizes and shapes. A pelvic phantom was built to house an ex vivo porcine bladder with fiducial landmarks adhered onto its surface. Four different volume fillings of the bladder were used (90, 180, 360 and 480 cc). The performance of MATLAB implementations of five different methods was compared in aligning the bladder contour point-sets. The approaches evaluated were coherent point drift (CPD), Gaussian mixture model, shape context, thin-plate spline robust point matching (TPS-RPM) and finite iterative closest point (ICP-finite). The evaluation metrics included registration runtime, target registration error (TRE), root-mean-square error (RMS) and Hausdorff distance (HD). The reference (source) dataset was alternated through all four point-sets in order to study the effect of reference volume on the registration outcomes. While all deformable algorithms provided reasonable registration results, CPD provided the best TRE values (6.4 mm), and TPS-RPM yielded the best mean RMS and HD values (1.4 and 6.8 mm, respectively). ICP-finite was the fastest technique and TPS-RPM the slowest. (paper)
International Nuclear Information System (INIS)
Behringer, K.
1991-02-01
In a recent paper by Behringer et al. (1990), the Wiener-Hermite Functional (WHF) method has been applied to point reactor kinetics excited by Gaussian random reactivity noise under stationary conditions, in order to calculate the neutron steady-state value and the neutron power spectral density (PSD) in a second-order (WHF-2) approximation. For simplicity, delayed neutrons and any feedback effects have been disregarded. The present study is a straightforward continuation of the previous one, treating the problem more generally by including any number of delayed neutron groups. For the case of white reactivity noise, the accuracy of the approach is determined by comparison with the exact solution available from the Fokker-Planck method. In the numerical comparisons, the first-order (WHF-1) approximation of the PSD is also considered. (author) 4 figs., 10 refs
Directory of Open Access Journals (Sweden)
YANG Bisheng
2016-02-01
Full Text Available An efficient method of generating feature images from point clouds is proposed to automatically classify dense point clouds into different categories, such as terrain points and building points. The method first uses planar projection to sort points into different grids, then calculates the weights and feature values of grids according to the distribution of laser scanning points, and finally generates the feature image of the point clouds. The proposed method then applies contour extraction and tracing to extract the boundaries and point clouds of man-made objects (e.g. buildings and trees) in 3D based on the generated image. Experiments show that the proposed method provides a promising solution for classifying and extracting man-made objects from vehicle-borne laser scanning point clouds.
International Nuclear Information System (INIS)
Gu Junhua; Xu Haiguang; Wang Jingying; Chen Wen; An Tao
2013-01-01
We propose a continuous wavelet transform based non-parametric foreground subtraction method for the detection of the redshifted 21 cm signal from the epoch of reionization. The method is based on the assumption that the foreground spectra are smooth in the frequency domain, while the 21 cm signal spectrum is full of saw-tooth-like structures, so their characteristic scales are significantly different. We can therefore distinguish them easily in the wavelet coefficient space and perform the foreground subtraction. Compared with the traditional spectral-fitting based method, our method is more tolerant of complex foregrounds. Furthermore, we also find that when the instrument has uncorrected response error, our method works significantly better than the spectral-fitting based method. Our method obtains results similar to those of the Wp smoothing method, which is also non-parametric, but consumes much less computing time.
[An automatic peak detection method for LIBS spectrum based on continuous wavelet transform].
Chen, Peng-Fei; Tian, Di; Qiao, Shu-Jun; Yang, Guang
2014-07-01
Spectrum peak detection in laser-induced breakdown spectroscopy (LIBS) is an essential step, but the presence of background and noise seriously disturbs the accuracy of peak positions. This paper proposes a method for automatic peak detection in LIBS spectra, intended to enhance the ability to find overlapping peaks and to improve adaptivity. We introduce the ridge peak detection method based on the continuous wavelet transform to LIBS, discuss the choice of the mother wavelet, and optimize the scale factor and the shift factor. We also improve the ridge peak detection method with a ridge-correction step. The experimental results show that, compared with other peak detection methods (the direct comparison method, the derivative method and the ridge peak search method), our method has a significant advantage in its ability to distinguish overlapping peaks and in the precision of peak detection, and can be applied to data processing in LIBS.
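A simplified, numpy-only sketch of CWT-based peak picking hints at the idea: correlate the spectrum with Ricker (Mexican-hat) wavelets at several widths and take local maxima of the summed response. Proper ridge detection, as in the paper (or in scipy.signal.find_peaks_cwt), additionally links maxima across scales; the widths and threshold below are illustrative.

```python
import numpy as np

def ricker(n_points, a):
    # Ricker ("Mexican hat") wavelet of width parameter a, centred, odd length
    x = np.arange(n_points) - (n_points - 1) / 2.0
    return (1.0 - (x / a) ** 2) * np.exp(-x ** 2 / (2.0 * a ** 2))

def cwt_peaks(y, widths=(4, 8, 16), rel_height=0.3):
    # Sum wavelet responses over scales, then keep local maxima above threshold
    score = np.zeros_like(y, dtype=float)
    for w in widths:
        score += np.convolve(y, ricker(10 * w + 1, w), mode="same") / w
    thr = rel_height * score.max()
    return [i for i in range(1, len(y) - 1)
            if score[i] > score[i - 1] and score[i] >= score[i + 1]
            and score[i] > thr]

x = np.arange(400, dtype=float)
spectrum = np.exp(-(x - 100) ** 2 / 50.0) + np.exp(-(x - 300) ** 2 / 50.0)
peaks = cwt_peaks(spectrum)  # finds the two synthetic peaks near 100 and 300
```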
Estimation Methods of the Point Spread Function Axial Position: A Comparative Computational Study
Directory of Open Access Journals (Sweden)
Javier Eduardo Diaz Zamboni
2017-01-01
The precise knowledge of the point spread function is central to the characterization of any imaging system. In fluorescence microscopy, point spread function (PSF) determination has become a common and obligatory task for each new experimental device, mainly due to its strong dependence on acquisition conditions. During the last decade, algorithms have been developed for the precise calculation of the PSF, which fit model parameters describing image formation on the microscope to experimental data. As a contribution to this subject, a comparative study of three parameter estimation methods is reported, namely: I-divergence minimization (MIDIV), maximum likelihood (ML) and non-linear least squares (LSQR). They were applied to the estimation of the point-source position on the optical axis, using a physical model. The methods' performance was evaluated under different conditions and noise levels using synthetic images, considering success percentage, iteration number, computation time, accuracy and precision. The main results showed that axial position estimation requires a high SNR to achieve an acceptable success level, and a higher one still to approach the lower bound on the estimation error. ML achieved a higher success percentage at lower SNR than MIDIV and LSQR with an intrinsic noise source. Only the ML and MIDIV methods reached the error lower bound, and only for data on the optical axis at high SNR. Extrinsic noise sources worsened the success percentage, but for a given method no difference was found between noise sources; this held for all methods studied.
A Novel Complementary Method for the Point-Scan Nondestructive Tests Based on Lamb Waves
Directory of Open Access Journals (Sweden)
Rahim Gorgin
2014-01-01
This study presents a novel area-scan damage identification method based on Lamb waves which can be used as a complement to point-scan nondestructive techniques. The proposed technique identifies the most probable locations of damage prior to the point-scan test, which decreases the time and cost of inspection. The test-piece surface is partitioned into smaller areas and the probability of damage presence in each area is evaluated. The A0 mode of the Lamb wave is generated and collected using a mobile handmade transducer set at each area. Subsequently, a damage presence probability index (DPPI) based on the energy of the captured responses is defined for each area. The area with the highest DPPI value highlights the most probable locations of damage in the test-piece. Point-scan nondestructive methods can then be used, once these areas are found, to identify the damage in detail. The approach was validated by predicting the most probable locations of representative damages, including a through-thickness hole and a crack in aluminum plates. The experimental results demonstrate the high potential of the developed method for locating the most probable damage sites in structures.
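One plausible formalization of the energy-based DPPI (the paper does not give its exact formula here, so the index below is an assumption) is the relative change in captured signal energy per area; the area with the largest index is then handed to the point-scan step. The waveforms are a toy toneburst model, not measured Lamb-wave data.

```python
import numpy as np

rng = np.random.default_rng(2)
n_areas, n_samples = 6, 400
t = np.linspace(0, 1, n_samples)

# Baseline A0-mode responses per area (decaying toneburst, arbitrary model)
# and "current" responses in which area 3 carries extra scattered energy.
baseline = np.array([np.sin(2 * np.pi * 50 * t) * np.exp(-5 * t)
                     for _ in range(n_areas)])
current = baseline.copy()
current[3] += 0.4 * np.sin(2 * np.pi * 80 * t) * np.exp(-8 * t)  # damage scatter
current += rng.normal(0, 0.01, current.shape)

# Assumed DPPI: relative change in captured signal energy per area.
e_base = np.sum(baseline ** 2, axis=1)
e_cur = np.sum(current ** 2, axis=1)
dppi = np.abs(e_cur - e_base) / e_base
most_probable = int(np.argmax(dppi))   # area to inspect with point-scan NDT
```

In this toy run the damaged area's index stands well clear of the measurement-noise floor in the other areas, which is the separation the method relies on.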
TREEDE, Point Fluxes and Currents Based on Track Rotation Estimator by Monte-Carlo Method
International Nuclear Information System (INIS)
Dubi, A.
1985-01-01
1 - Description of problem or function: TREEDE is a Monte Carlo transport code based on the Track Rotation estimator, used, in general, to calculate fluxes and currents at a point. This code served as a test code in the development of the concept of the Track Rotation estimator, and therefore analogue Monte Carlo is used (i.e. no importance biasing). 2 - Method of solution: The basic idea is to follow the particle's track in the medium and then to rotate it such that it passes through the detector point. That is, rotational symmetry considerations (even in non-spherically symmetric configurations) are applied to every history, so that a very large fraction of the track histories can be rotated and made to pass through the point of interest; in this manner the 1/r^2 singularity in the un-collided flux estimator (next event estimator) is avoided. TREEDE, being a test code, is used to estimate leakage or in-medium fluxes at given points in a 3-dimensional finite box, where the source is an isotropic point source at the centre of the z = 0 surface. However, many of the constraints of geometry and source can be easily removed. The medium is assumed homogeneous with isotropic scattering, and one energy group only is considered. 3 - Restrictions on the complexity of the problem: One energy group, a homogeneous medium, isotropic scattering
Invalid-point removal based on epipolar constraint in the structured-light method
Qi, Zhaoshuai; Wang, Zhao; Huang, Junhui; Xing, Chao; Gao, Jianmin
2018-06-01
In structured-light measurement, there unavoidably exist many invalid points caused by shadows, image noise and ambient light. According to the property of the epipolar constraint, because the retrieved phase of the invalid point is inaccurate, the corresponding projector image coordinate (PIC) will not satisfy the epipolar constraint. Based on this fact, a new invalid-point removal method based on the epipolar constraint is proposed in this paper. First, the fundamental matrix of the measurement system is calculated, which will be used for calculating the epipolar line. Then, according to the retrieved phase map of the captured fringes, the PICs of each pixel are retrieved. Subsequently, the epipolar line in the projector image plane of each pixel is obtained using the fundamental matrix. The distance between the corresponding PIC and the epipolar line of a pixel is defined as the invalidation criterion, which quantifies the satisfaction degree of the epipolar constraint. Finally, all pixels with a distance larger than a certain threshold are removed as invalid points. Experiments verified that the method is easy to implement and demonstrates better performance than state-of-the-art measurement systems.
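The invalidation criterion described above is a point-to-line distance in the projector image plane. The sketch below implements just that geometric core with a hypothetical fundamental matrix for a pure horizontal baseline (for which epipolar lines are rows of equal y); the matrix, coordinates and threshold are illustrative, not from the paper's calibrated system.

```python
import numpy as np

def epipolar_distances(F, cam_pts, proj_pts):
    """Distance from each projector image coordinate (PIC) to the epipolar
    line F @ x of the corresponding camera pixel (homogeneous points, Nx3)."""
    lines = cam_pts @ F.T                          # lines a*x + b*y + c = 0
    num = np.abs(np.sum(lines * proj_pts, axis=1))
    return num / np.hypot(lines[:, 0], lines[:, 1])

# Hypothetical system: pure horizontal translation between camera and
# projector, whose fundamental matrix maps a pixel to the line y' = y.
F = np.array([[0.0, 0.0, 0.0],
              [0.0, 0.0, -1.0],
              [0.0, 1.0, 0.0]])
cam = np.array([[100.0, 50.0, 1.0],
                [200.0, 80.0, 1.0]])
# First PIC satisfies the constraint; the second simulates a corrupted
# phase retrieval that lands on the wrong row.
pic = np.array([[140.0, 50.0, 1.0],
                [260.0, 86.0, 1.0]])

d = epipolar_distances(F, cam, pic)
valid = d < 2.0    # pixels beyond the threshold are removed as invalid
```

For the second pixel the distance equals the 6-pixel row error, so it is flagged; the threshold would in practice be set from the system's calibration residuals.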
Energy Technology Data Exchange (ETDEWEB)
Vignet, P; Gabilly, R; Lutz, J; Zermizogilou, R [Commissariat a l' Energie Atomique, Saclay (France). Centre d' Etudes Nucleaires
1964-07-01
}/kg of water. (authors) [Translated from French] Three instruments have been developed to measure gases dissolved in water at pressures up to 200 kg/cm{sup 2}. 1 - Hydrogen analyser: The relative change in resistance of a filament immersed in the water, made of pure palladium or of a 75/25 palladium-platinum alloy, is measured as it absorbs hydrogen. Calibration curves were plotted for a palladium wire at temperatures from 180 to 280 deg. C. At room temperature, a palladium-platinum alloy ribbon was used. A theoretical and experimental study of the kinetics of absorption, by a metal, of the hydrogen dissolved in water made it possible to define the operating conditions under which the analyser responds quickly, within a few minutes. The measurement currently uses an AC-fed bridge (Kohlrausch bridge) with temperature compensation and a system for periodic regeneration of the filament. 2 - Oxygen analyser: Oxygen dissolved in water is known to react with thallium to form thallium hydroxide, a soluble compound whose presence increases the electrical conductivity of the water. To implement an assay method based on this principle, a differential conductivity meter was developed whose characteristics yield excellent precision over a range from 0.001 ppm to a few tens of ppm of oxygen. The conditions under which the background noise of the instrument, caused by parasitic reactions, can be made negligible are examined; the kinetics of the oxidation of thallium by dissolved oxygen, which is diffusion-controlled, are then studied. The conversion rate could thus be related quantitatively to the physical and hydrodynamic parameters. 3 - Overall analysis of dissolved gases: A volume of water at high pressure and temperature is isolated in a bomb placed in a bypass on a pressurized circuit. After ...
DEFF Research Database (Denmark)
Mo, W.; Loh, Poh Chiang; Blaabjerg, Frede
2013-01-01
Z-source Neutral Point Clamped (NPC) inverters were introduced to integrate the advantages of both Z-source inverters and NPC inverters. However, traditional Z-source inverters suffer from high voltage stress and chopping input current. This paper proposes six types of transformer-based impedance-source NPC inverters, which have enhanced voltage boost capability and continuous input current by making use of transformer and embedded dc-source configurations. Experimental results are presented to verify the theoretical analysis.
Basin boundaries and focal points in a map coming from Bairstow's method.
Gardini, Laura; Bischi, Gian-Italo; Fournier-Prunaret, Daniele
1999-06-01
This paper is devoted to the study of the global dynamical properties of a two-dimensional noninvertible map, with a denominator which can vanish, obtained by applying Bairstow's method to a cubic polynomial. It is shown that the complicated structure of the basins of attraction of the fixed points is due to the existence of singularities such as sets of nondefinition, focal points, and prefocal curves, which are specific to maps with a vanishing denominator and have recently been introduced in the literature. Some global bifurcations that change the qualitative structure of the basin boundaries are explained in terms of contacts among these singularities. The techniques used in this paper reveal some new dynamic behaviors and bifurcations, which are peculiar to maps with a denominator; hence they can be applied to the analysis of other classes of maps arising from iterative algorithms (based on Newton's method, or others). (c) 1999 American Institute of Physics.
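The map under study comes from Bairstow's method: Newton iteration on the coefficients (u, v) of a trial quadratic factor x² − u x − v of the cubic, driving the division remainder to zero. A minimal sketch for a monic cubic x³ + p₂x² + p₁x + p₀ (the example cubic and starting point are my own choices, not the paper's):

```python
import numpy as np

def bairstow_cubic(p2, p1, p0, u, v, iters=50):
    """Bairstow iteration for x^3 + p2 x^2 + p1 x + p0: Newton's method on
    (u, v) so that x^2 - u x - v divides the cubic exactly."""
    for _ in range(iters):
        c = p2 + u                        # quotient is x + c
        r1 = p1 + u * c + v               # remainder is r1*x + r0
        r0 = p0 + v * c
        J = np.array([[p2 + 2 * u, 1.0],  # Jacobian of (r1, r0) w.r.t. (u, v)
                      [v, c]])
        du, dv = np.linalg.solve(J, [-r1, -r0])
        u, v = u + du, v + dv
    return u, v

# (x - 1)(x - 2)(x - 3) = x^3 - 6x^2 + 11x - 6; the factor x^2 - 3x + 2
# corresponds to (u, v) = (3, -2), and the remaining root is -c = 3.
u, v = bairstow_cubic(-6.0, 11.0, -6.0, u=2.5, v=-1.5)
```

The Jacobian here can become singular for some (u, v), which is exactly the vanishing-denominator structure whose focal points and prefocal curves the paper analyzes.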
Energy Technology Data Exchange (ETDEWEB)
Barboza, Luciano Vitoria [Sul-riograndense Federal Institute for Education, Science and Technology (IFSul), Pelotas, RS (Brazil)], E-mail: luciano@pelotas.ifsul.edu.br
2009-07-01
This paper presents an overview of the maximum loadability problem and aims to study the main factors that limit this loadability. Specifically, the study focuses on determining which electric system buses directly influence the supply of power demand. The proposed approach uses the conventional maximum loadability method modelled as an optimization problem. The solution of this model is obtained using the Interior Point methodology. As a consequence of this solution method, the Lagrange multipliers are used as parameters that identify the probable 'bottlenecks' in the electric power system. The study also shows the relationship between the Lagrange multipliers and the cost function in the Interior Point optimization, interpreted as sensitivity parameters. In order to illustrate the proposed methodology, the approach was applied to an IEEE test system and, to assess its performance, a real equivalent electric system from the South-Southeast region of Brazil was simulated. (author)
Solution of Dendritic Growth in Steel by the Novel Point Automata Method
International Nuclear Information System (INIS)
Lorbiecka, A Z; Šarler, B
2012-01-01
The aim of this paper is the simulation of dendritic growth in steel in two dimensions by a coupled deterministic continuum mechanics heat and species transfer model and a stochastic localized phase change kinetics model taking into account the undercooling, curvature, kinetic, and thermodynamic anisotropy. The stochastic model receives temperature and concentration information from the deterministic model, and the deterministic heat and species diffusion equations receive the solid fraction information from the stochastic model. The heat and species transfer models are solved on a regular grid by the standard explicit Finite Difference Method (FDM). The phase-change kinetics model is solved by a novel Point Automata (PA) approach. The PA method was developed [1] in order to circumvent the mesh anisotropy problem associated with the classical Cellular Automata (CA) method. The PA approach is established on randomly distributed points and a neighbourhood configuration, similar to those appearing in meshless methods. A comparison of the PA and CA methods is shown. It is demonstrated that the results with the new PA method are not sensitive to the crystallographic orientations of the dendrite.
Teaching Methods in Mathematics and the Current Pedagogical Point of View in School Education.
岩崎, 潔; Kiyosi, Iwasaki
1995-01-01
It should be a basic principle that studies in the teaching profession in universities take into consideration the current pedagogical points of view in education and the future prospects of that education. This paper discusses the findings of a survey on the degree of recognition that students in our mathematics courses have of the current pedagogical understanding of teacher training. In this paper I consider how to teach teaching methods in mathematics effectively.
Nonparametric Change Point Diagnosis Method of Concrete Dam Crack Behavior Abnormality
Li, Zhanchao; Gu, Chongshi; Wu, Zhongru
2013-01-01
The study of diagnosis methods for concrete dam crack behavior abnormality has always been a hot topic, and a difficult one, in the safety monitoring field of hydraulic structures. Based on the performance of concrete dam crack behavior abnormality in parametric and nonparametric statistical models, the internal relation between concrete dam crack behavior abnormality and statistical change point theory is deeply analyzed from the model structure instability of the parametric statistical model ...
Standard test method for determination of breaking strength of ceramic tiles by three-point loading
American Society for Testing and Materials. Philadelphia
2001-01-01
1.1 This test method covers the determination of breaking strength of ceramic tiles by three-point loading. 1.2 This standard does not purport to address all of the safety concerns, if any, associated with its use. It is the responsibility of the user of this standard to establish appropriate safety and health practices and determine the applicability of regulatory limitations prior to use.
Short run hydrothermal coordination with network constraints using an interior point method
International Nuclear Information System (INIS)
Lopez Lezama, Jesus Maria; Gallego Pareja, Luis Alfonso; Mejia Giraldo, Diego
2008-01-01
This paper presents a linear optimization model to solve the hydrothermal coordination problem. The main contribution of this work is the inclusion of the network constraints in the hydrothermal coordination problem and its solution using an interior point method. The proposed model allows working with a system that can be completely hydraulic, completely thermal, or mixed. Results are presented on the IEEE 14-bus test system.
Energy Technology Data Exchange (ETDEWEB)
Rogers, L.A.; Boardman, C.R.; Bebout, D.G.; Bachman, A.L. (eds.)
1981-01-01
The available well logs, production records and geological structure maps were analyzed for the Hollywood, Duson, and Church Point, Louisiana oil and gas fields to determine the areal extent of the sealed geopressured blocks and to identify which aquifer sands within the blocks are connected to commercial production of hydrocarbons. Studies such as these are needed for the Department of Energy program to identify geopressured brine reservoirs that are not connected to commercial production. The analysis showed that, over the depth intervals of the geopressured zones shown on the logs, essentially all of the sands of any substantial thickness had gas production somewhere in the fault block. It is therefore expected that the sands which are fully brine saturated in many of the wells are the water-drive portion of gas/oil production elsewhere within the fault block. In this study only one deep sand was identified, in the Hollywood field, which was apparently not connected to a producing horizon elsewhere in the field. Estimates of the reservoir parameters were made for this sand, and a hypothetical production calculation showed the probable production to be less than 10,000 b/d. The gas price required to produce this gas profitably is well above the current market price.
Coordinate alignment of combined measurement systems using a modified common points method
Zhao, G.; Zhang, P.; Xiao, W.
2018-03-01
Coordinate metrology has been extensively researched for its outstanding advantages in measurement range and accuracy. The alignment of different measurement systems is usually achieved by integrating local coordinates via common points before measurement. The alignment errors accumulate and significantly reduce the global accuracy, and thus need to be minimized. In this paper, a modified common points method (MCPM) is proposed to combine the different traceable system errors of the cooperating machines and to optimize the global accuracy by introducing mutual geometric constraints. The geometric constraints, obtained by measuring the common points in the individual local coordinate systems, make it possible to reduce the local measurement uncertainty and thereby enhance the global measurement certainty. A simulation system is developed in Matlab to analyze the features of MCPM using the Monte Carlo method. An exemplary setup is constructed to verify the feasibility and efficiency of the proposed method with laser tracker and indoor iGPS systems. Experimental results show that MCPM can significantly improve the alignment accuracy.
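The baseline step that MCPM modifies, aligning two instruments' frames from common points, is classically solved as a least-squares rigid transform via SVD (the Kabsch method). The sketch below shows that baseline only, with a made-up rotation, translation and point set standing in for tracker and iGPS measurements; it does not include the paper's mutual geometric constraints.

```python
import numpy as np

def fit_rigid(P, Q):
    """Least-squares rigid transform (R, t) with Q ~ P @ R.T + t,
    estimated from common points via SVD (Kabsch method)."""
    pc, qc = P.mean(axis=0), Q.mean(axis=0)
    H = (P - pc).T @ (Q - qc)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, qc - R @ pc

rng = np.random.default_rng(3)
common = rng.normal(size=(6, 3)) * 100.0        # common points, frame A (e.g. tracker)
theta = np.deg2rad(30.0)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([10.0, -5.0, 2.0])
measured = common @ R_true.T + t_true           # same points, frame B (e.g. iGPS)

R, t = fit_rigid(common, measured)
aligned = common @ R.T + t
```

With noisy common points this estimate carries the accumulated alignment error that motivates MCPM's extra constraints.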
A comparison of methods to adjust for continuous covariates in the analysis of randomised trials
Directory of Open Access Journals (Sweden)
Brennan C. Kahan
2016-04-01
Background: Although covariate adjustment in the analysis of randomised trials can be beneficial, adjustment for continuous covariates is complicated by the fact that the association between covariate and outcome must be specified. Misspecification of this association can lead to reduced power, and potentially incorrect conclusions regarding treatment efficacy. Methods: We compared several methods of adjustment to determine which is best when the association between covariate and outcome is unknown. We assessed (a) dichotomisation or categorisation; (b) assuming a linear association with outcome; (c) using fractional polynomials with one (FP1) or two (FP2) polynomial terms; and (d) using restricted cubic splines with 3 or 5 knots. We evaluated each method using simulation and through a re-analysis of trial datasets. Results: Methods which kept covariates as continuous typically had higher power than methods which used categorisation. Dichotomisation, categorisation, and assuming a linear association all led to large reductions in power when the true association was non-linear. FP2 models and restricted cubic splines with 3 or 5 knots performed best overall. Conclusions: For the analysis of randomised trials we recommend (1) adjusting for continuous covariates even if their association with outcome is unknown; (2) keeping covariates as continuous; and (3) using fractional polynomials with two polynomial terms or restricted cubic splines with 3 to 5 knots when a linear association is in doubt.
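An FP2 model picks two powers from a fixed grid and fits y on x^p1 and x^p2 by least squares. The sketch below shows that selection step on simulated data (a deliberately non-linear truth of my choosing); it omits repeated-power FP2 terms (x^p and x^p·log x) and all of the trial-analysis machinery.

```python
import numpy as np

def fp_basis(x, p):
    """Fractional-polynomial term x^p, with p = 0 meaning log(x) (x > 0)."""
    return np.log(x) if p == 0 else x ** p

def fit_rss(X, y):
    """Residual sum of squares of a least-squares fit with an intercept."""
    A = np.column_stack([np.ones_like(y)] + X)
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return np.sum((y - A @ beta) ** 2)

powers = [-2, -1, -0.5, 0, 0.5, 1, 2, 3]     # conventional FP power grid
rng = np.random.default_rng(4)
x = rng.uniform(0.5, 5.0, 300)
y = 2.0 + 1.5 * np.sqrt(x) - 1.0 / x + rng.normal(0, 0.1, x.size)

# FP2: search distinct pairs of powers, keep the pair with the smallest RSS.
pairs = [(p1, p2) for i, p1 in enumerate(powers) for p2 in powers[i + 1:]]
best_rss, best_pair = min(
    (fit_rss([fp_basis(x, p1), fp_basis(x, p2)], y), (p1, p2))
    for p1, p2 in pairs)
linear_rss = fit_rss([x], y)
```

Because the grid contains pairs whose span includes the linear model plus extra curvature, the best FP2 fit can only improve on the plain linear adjustment when the true association is non-linear, which mirrors the trial's power comparison.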
A time domain inverse dynamic method for the end point tracking control of a flexible manipulator
Kwon, Dong-Soo; Book, Wayne J.
1991-01-01
The inverse dynamic equation of a flexible manipulator was solved in the time domain. By dividing the inverse system equation into the causal part and the anticausal part, we calculated the torque and the trajectories of all state variables for a given end point trajectory. The interpretation of this method in the frequency domain was explained in detail using the two-sided Laplace transform and the convolution integral. The open loop control of the inverse dynamic method shows an excellent result in simulation. For real applications, a practical control strategy is proposed by adding a feedback tracking control loop to the inverse dynamic feedforward control, and its good experimental performance is presented.
International Nuclear Information System (INIS)
Xia, Donghui; Huang, Mei; Wang, Zhijiang; Zhang, Feng; Zhuang, Ge
2016-01-01
Highlights: • The integral staggered point-matching method for the design of polarizers in ECH systems is presented. • The availability of the integral staggered point-matching method is checked by numerical calculations. • Two polarizers are designed with the integral staggered point-matching method and the experimental results are given. - Abstract: Reflective diffraction gratings are widely used in high power electron cyclotron heating systems for polarization control. This paper presents a method, which we call “the integral staggered point-matching method”, for the design of reflective diffraction gratings. The method is based on the integral point-matching method; however, it effectively removes the convergence problems and tedious calculations of that method, making it easier for a beginner to use. A code has been developed based on this method. The calculation results of the integral staggered point-matching method are compared with those of the integral point-matching method, the coordinate transformation method and low power measurements. This indicates that the integral staggered point-matching method can be used as an alternative method for the design of reflective diffraction gratings in electron cyclotron heating systems.
Energy Technology Data Exchange (ETDEWEB)
Xia, Donghui [State Key Laboratory of Advanced Electromagnetic Engineering and Technology, Huazhong University of Science and Technology, 430074 Wuhan (China); Huang, Mei [Southwestern Institute of Physics, 610041 Chengdu (China); Wang, Zhijiang, E-mail: wangzj@hust.edu.cn [State Key Laboratory of Advanced Electromagnetic Engineering and Technology, Huazhong University of Science and Technology, 430074 Wuhan (China); Zhang, Feng [Southwestern Institute of Physics, 610041 Chengdu (China); Zhuang, Ge [State Key Laboratory of Advanced Electromagnetic Engineering and Technology, Huazhong University of Science and Technology, 430074 Wuhan (China)
2016-10-15
Highlights: • The integral staggered point-matching method for the design of polarizers in ECH systems is presented. • The availability of the integral staggered point-matching method is checked by numerical calculations. • Two polarizers are designed with the integral staggered point-matching method and the experimental results are given. - Abstract: Reflective diffraction gratings are widely used in high power electron cyclotron heating systems for polarization control. This paper presents a method, which we call “the integral staggered point-matching method”, for the design of reflective diffraction gratings. The method is based on the integral point-matching method; however, it effectively removes the convergence problems and tedious calculations of that method, making it easier for a beginner to use. A code has been developed based on this method. The calculation results of the integral staggered point-matching method are compared with those of the integral point-matching method, the coordinate transformation method and low power measurements. This indicates that the integral staggered point-matching method can be used as an alternative method for the design of reflective diffraction gratings in electron cyclotron heating systems.
A Newton method for solving continuous multiple material minimum compliance problems
DEFF Research Database (Denmark)
Stolpe, M; Stegmann, Jan
method, one or two linear saddle point systems are solved. These systems involve the Hessian of the objective function, which is both expensive to compute and completely dense. Therefore, the linear algebra is arranged such that the Hessian is not explicitly formed. The main concern is to solve...
A Newton method for solving continuous multiple material minimum compliance problems
DEFF Research Database (Denmark)
Stolpe, Mathias; Stegmann, Jan
2007-01-01
method, one or two linear saddle point systems are solved. These systems involve the Hessian of the objective function, which is both expensive to compute and completely dense. Therefore, the linear algebra is arranged such that the Hessian is not explicitly formed. The main concern is to solve...
Measurement of gas adsorption with Jäntti's method using continuously increasing pressure
Poulis, J.A.; Massen, C.H.; Robens, E.
2002-01-01
Jäntti et al. published a method to reduce the time necessary for adsorption measurements. They proposed to extrapolate the equilibrium in the stepwise isobaric measurement of adsorption isotherms by measuring at each step three points of the kinetic curve. For that purpose they approximated the
A fast point-cloud computing method based on spatial symmetry of Fresnel field
Wang, Xiangxiang; Zhang, Kai; Shen, Chuan; Zhu, Wenliang; Wei, Sui
2017-10-01
Computer-generated holography (CGH) faces a great challenge in real-time holographic video display systems because of the high space-bandwidth product (SBP) required. This paper is based on the point-cloud method and takes advantage of the propagation reversibility of Fresnel diffraction along the propagation direction, together with the spatial symmetry of the fringe pattern of a point source, known as the Gabor zone plate, which can therefore be used as a basis for fast calculation of the diffraction field in CGH. A fast Fresnel CGH method based on the novel look-up table (N-LUT) method is proposed: first, the principal fringe patterns (PFPs) at the virtual plane are pre-calculated by the acceleration algorithm and stored; second, the Fresnel diffraction fringe pattern at the dummy plane is obtained; finally, the field is propagated from the dummy plane to the hologram plane. Simulation experiments and optical experiments based on liquid crystal on silicon (LCOS) were set up to demonstrate the validity of the proposed method. Under the premise of ensuring the quality of the 3D reconstruction, the proposed method shortens the computation time and improves computational efficiency.
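The spatial symmetry being exploited is easy to exhibit: the Fresnel fringe of an on-axis point source depends only on r², so the sampled Gabor zone plate is invariant under point reflection about the optical axis, halving (or better) what a look-up table must store. The wavelength, distance and pixel pitch below are arbitrary illustrative values.

```python
import numpy as np

# Fresnel point-source kernel (Gabor zone plate) sampled on a square grid;
# the parameter values are illustrative, not from the paper.
wavelength, z, pitch, n = 532e-9, 0.2, 8e-6, 512
coords = (np.arange(n) - n / 2 + 0.5) * pitch    # symmetric about the axis
X, Y = np.meshgrid(coords, coords)
zone_plate = np.exp(1j * np.pi * (X ** 2 + Y ** 2) / (wavelength * z))
```

Flipping the array along both axes maps (x, y) to (-x, -y) and leaves r² unchanged, so the pattern is identical; a point-cloud LUT scheme only ever needs the fringe of one reference point per depth, shifted and reused for the others.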
Numerical methods for the simulation of continuous sedimentation in ideal clarifier-thickener units
Energy Technology Data Exchange (ETDEWEB)
Buerger, R.; Karlsen, K.H.; Risebro, N.H.; Towers, J.D.
2001-10-01
We consider a model of continuous sedimentation. Under idealizing assumptions, the settling of the solid particles under the influence of gravity can be described by the initial value problem for a nonlinear hyperbolic partial differential equation with a flux function that depends discontinuously on height. The purpose of this contribution is to present and demonstrate two numerical methods for simulating continuous sedimentation: a front tracking method and a finite difference method. The basic building blocks in the front tracking method are the solutions of a finite number of certain Riemann problems and a procedure for tracking local collisions of shocks. The solutions of the Riemann problems are recalled herein and the front tracking algorithm is described. As an alternative to the front tracking method, a simple scalar finite difference algorithm is proposed. This method is based on discretizing the spatially varying flux parameters on a mesh that is staggered with respect to that of the conserved variable, resulting in a straightforward generalization of the well-known Engquist-Osher upwind finite difference method. The result is an easily implemented upwind shock capturing method. Numerical examples demonstrate that the front tracking and finite difference methods can be used as efficient and accurate simulation tools for continuous sedimentation. The numerical results for the finite difference method indicate that discontinuities in the local solids concentration are resolved sharply and agree with those produced by the front tracking method. The latter is free of numerical dissipation, which leads to sharply resolved concentration discontinuities, but is more complicated to implement than the former. Available mathematical results for the proposed numerical methods are also briefly reviewed. (author)
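The Engquist-Osher scheme the authors generalize splits the flux into increasing and decreasing parts and upwinds each. A minimal sketch on Burgers' equation (a stand-in for the sedimentation flux, with grid and Riemann data of my choosing) shows the shock-capturing behavior:

```python
import numpy as np

def eo_flux(u, v):
    """Engquist-Osher numerical flux for Burgers' equation f(u) = u^2/2:
    F(u, v) = f+(u) + f-(v), with f+(u) = max(u,0)^2/2, f-(v) = min(v,0)^2/2."""
    return 0.5 * np.maximum(u, 0.0) ** 2 + 0.5 * np.minimum(v, 0.0) ** 2

dx, dt, steps = 0.01, 0.005, 100               # CFL = max|u| * dt / dx = 0.5
x = np.arange(-0.5, 0.5, dx) + dx / 2          # cell centers
u = np.where(x < 0.0, 1.0, 0.0)                # Riemann data: shock, speed 1/2

for _ in range(steps):
    F = eo_flux(u[:-1], u[1:])                 # interface fluxes F_{i+1/2}
    u[1:-1] -= dt / dx * (F[1:] - F[:-1])      # conservative update, ends held

# After t = 0.5 the shock front should sit near x = 0.25.
front = x[np.argmin(np.abs(u - 0.5))]
```

Because the update is conservative, the discrete shock travels at the correct Rankine-Hugoniot speed even though the first-order scheme smears it over a few cells; the paper's method additionally staggers the height-dependent flux parameters, which this sketch omits.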
Loizou, Nicolas
2017-12-27
In this paper we study several classes of stochastic optimization algorithms enriched with heavy ball momentum. Among the methods studied are: stochastic gradient descent, stochastic Newton, stochastic proximal point and stochastic dual subspace ascent. This is the first time momentum variants of several of these methods are studied. We choose to perform our analysis in a setting in which all of the above methods are equivalent. We prove global nonasymptotic linear convergence rates for all methods and various measures of success, including primal function values, primal iterates (in L2 sense), and dual function values. We also show that the primal iterates converge at an accelerated linear rate in the L1 sense. This is the first time a linear rate is shown for the stochastic heavy ball method (i.e., stochastic gradient descent method with momentum). Under somewhat weaker conditions, we establish a sublinear convergence rate for Cesàro averages of primal iterates. Moreover, we propose a novel concept, which we call stochastic momentum, aimed at decreasing the cost of performing the momentum step. We prove linear convergence of several stochastic methods with stochastic momentum, and show that in some sparse data regimes and for sufficiently small momentum parameters, these methods enjoy better overall complexity than methods with deterministic momentum. Finally, we perform extensive numerical testing on artificial and real datasets, including data coming from average consensus problems.
Loizou, Nicolas; Richtarik, Peter
2017-01-01
In this paper we study several classes of stochastic optimization algorithms enriched with heavy ball momentum. Among the methods studied are: stochastic gradient descent, stochastic Newton, stochastic proximal point and stochastic dual subspace ascent. This is the first time momentum variants of several of these methods are studied. We choose to perform our analysis in a setting in which all of the above methods are equivalent. We prove global nonasymptotic linear convergence rates for all methods and various measures of success, including primal function values, primal iterates (in L2 sense), and dual function values. We also show that the primal iterates converge at an accelerated linear rate in the L1 sense. This is the first time a linear rate is shown for the stochastic heavy ball method (i.e., stochastic gradient descent method with momentum). Under somewhat weaker conditions, we establish a sublinear convergence rate for Cesàro averages of primal iterates. Moreover, we propose a novel concept, which we call stochastic momentum, aimed at decreasing the cost of performing the momentum step. We prove linear convergence of several stochastic methods with stochastic momentum, and show that in some sparse data regimes and for sufficiently small momentum parameters, these methods enjoy better overall complexity than methods with deterministic momentum. Finally, we perform extensive numerical testing on artificial and real datasets, including data coming from average consensus problems.
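In the setting where the methods above coincide, SGD with heavy ball momentum on a consistent linear system reduces to randomized Kaczmarz with momentum. A minimal sketch (problem size, seed and momentum value are my own heuristic choices, not the paper's tuned parameters):

```python
import numpy as np

rng = np.random.default_rng(5)
m, n = 30, 5
A = rng.normal(size=(m, n))
x_star = rng.normal(size=n)
b = A @ x_star                                   # consistent linear system

beta = 0.3                                       # heavy ball momentum (heuristic)
x = np.zeros(n)
x_prev = x.copy()
for _ in range(3000):
    i = rng.integers(m)                          # sample a row uniformly
    a = A[i]
    kaczmarz_step = (a @ x - b[i]) / (a @ a) * a # project toward row i's hyperplane
    x, x_prev = x - kaczmarz_step + beta * (x - x_prev), x
```

The momentum term β(xₖ − xₖ₋₁) reuses the previous displacement at negligible cost per iteration; the paper's "stochastic momentum" variant goes further and sparsifies this step.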
Performance Analysis of a Maximum Power Point Tracking Technique using Silver Mean Method
Directory of Open Access Journals (Sweden)
Shobha Rani Depuru
2018-01-01
This paper presents a simple and particularly efficacious Maximum Power Point Tracking (MPPT) algorithm based on the Silver Mean Method (SMM). The method operates by choosing a search interval from the P-V characteristics of the given solar array and converges to the MPP of the Solar Photo-Voltaic (SPV) system by shrinking this interval. After the maximum power is reached, the algorithm stops shrinking and maintains a constant voltage until the next interval is decided. The tracking capability, efficiency and performance of the proposed algorithm are validated by simulation and experimental results with a 100 W solar panel under variable temperature and irradiance conditions. The results obtained confirm that, even without any perturbation and observation process, the proposed method still outperforms the traditional perturb and observe (P&O) method, demonstrating far better steady-state output, higher accuracy and higher efficiency.
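The interval-shrinking idea can be sketched as a golden-section-style search that places its probe points using the silver ratio 1 + √2, in the spirit of the SMM (the paper's exact update rule, restart logic and P-V model are not reproduced; the toy power curve below is an arbitrary unimodal stand-in).

```python
import numpy as np

def pv_power(v):
    """Toy P-V curve with a single maximum at V = 17 (illustrative only)."""
    return 100.0 - (v - 17.0) ** 2

def silver_section_search(f, a, b, iters=60):
    """Interval-shrinking search for the maximum of a unimodal f, placing the
    two probe points with the silver-ratio fraction 1 / (1 + sqrt(2))."""
    rho = 1.0 / (1.0 + np.sqrt(2.0))     # ~0.414: interior-point fraction
    for _ in range(iters):
        x1 = a + rho * (b - a)
        x2 = b - rho * (b - a)
        if f(x1) < f(x2):
            a = x1                        # the maximum lies in [x1, b]
        else:
            b = x2                        # the maximum lies in [a, x2]
    return 0.5 * (a + b)

v_mpp = silver_section_search(pv_power, 0.0, 21.0)
```

Each iteration shrinks the bracket by a fixed factor without any perturb-and-observe dithering, which is why such section searches settle to a steady operating voltage once converged.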
An improved local radial point interpolation method for transient heat conduction analysis
Wang, Feng; Lin, Gao; Zheng, Bao-Jing; Hu, Zhi-Qiang
2013-06-01
The smoothing thin plate spline (STPS) interpolation using the penalty function method according to optimization theory is presented to deal with transient heat conduction problems. The smoothness conditions of the shape functions and their derivatives can be satisfied so that distortions hardly occur. Local weak forms are developed using the weighted residual method locally from the partial differential equations of transient heat conduction. Here the Heaviside step function is used as the test function in each sub-domain to avoid the need for a domain integral. Essential boundary conditions can be implemented as in the finite element method (FEM) because the shape functions possess the Kronecker delta property. The traditional two-point difference method is selected for the time discretization scheme. Three selected numerical examples are presented to demonstrate the applicability and accuracy of the present approach compared with the traditional thin plate spline (TPS) radial basis functions.
An improved local radial point interpolation method for transient heat conduction analysis
International Nuclear Information System (INIS)
Wang Feng; Lin Gao; Hu Zhi-Qiang; Zheng Bao-Jing
2013-01-01
The smoothing thin plate spline (STPS) interpolation using the penalty function method according to optimization theory is presented to deal with transient heat conduction problems. The smoothness conditions of the shape functions and their derivatives can be satisfied so that distortions hardly occur. Local weak forms are developed using the weighted residual method locally from the partial differential equations of transient heat conduction. Here the Heaviside step function is used as the test function in each sub-domain to avoid the need for a domain integral. Essential boundary conditions can be implemented as in the finite element method (FEM) because the shape functions possess the Kronecker delta property. The traditional two-point difference method is selected for the time discretization scheme. Three selected numerical examples are presented to demonstrate the applicability and accuracy of the present approach compared with the traditional thin plate spline (TPS) radial basis functions.
Concepts of analytical user interface evaluation method for continuous work in NPP main control room
International Nuclear Information System (INIS)
Lee, S. J.; Heo, G. Y.; Jang, S. H.
2003-01-01
This paper describes a conceptual study of an analytical evaluation method for computer-based user interfaces in the main control room of an advanced nuclear power plant. User interfaces can be classified into two groups: static interfaces and dynamic interfaces. Existing evaluation and design methods have mainly addressed static user interfaces. Dynamic user interfaces, however, are useful for controlling complex systems, and proper evaluation methods for them are scarce. Therefore, an evaluation method for dynamic user interfaces suited to continuous work is proposed, based on measures of cognitive load and interface similarity.
Optimization of Control Points Number at Coordinate Measurements based on the Monte-Carlo Method
Korolev, A. A.; Kochetkov, A. V.; Zakharov, O. V.
2018-01-01
Improving the quality of products causes an increase in the requirements for the accuracy of the dimensions and shape of workpiece surfaces. This, in turn, raises the requirements for the accuracy and productivity of workpiece measurement. Coordinate measuring machines are currently the most effective measuring tools for solving such problems. The article proposes a method for optimizing the number of control points using Monte Carlo simulation. Based on the measurement of a small sample from batches of workpieces, statistical modeling is performed, which allows one to obtain interval estimates of the measurement error. This approach is demonstrated by example applications for flatness, cylindricity and sphericity. Four options of uniform and uneven arrangement of control points are considered and compared. It is revealed that as the number of control points decreases, the arithmetic mean decreases, while the standard deviation of the measurement error and the probability of a measurement α-error increase. In general, it has been established that the number of control points can be reduced severalfold while maintaining the required measurement accuracy.
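A minimal Monte Carlo sketch of the idea: simulate repeated flatness measurements with a given number of control points and read off interval estimates of the measurement error. The surface model, noise level, and trial count below are illustrative assumptions, not the article's values:

```python
import numpy as np

rng = np.random.default_rng(1)

def simulated_flatness(n_points, true_flatness=0.02, noise=0.005, trials=2000):
    """Monte Carlo interval estimate of a flatness measurement with n control points.

    The surface deviation at each control point is modeled (illustratively) as
    uniform within the true flatness band, plus Gaussian probe noise.
    """
    dev = rng.uniform(-true_flatness / 2, true_flatness / 2, size=(trials, n_points))
    dev += rng.normal(0.0, noise, size=(trials, n_points))
    measured = dev.max(axis=1) - dev.min(axis=1)   # peak-to-valley flatness per trial
    return measured.mean(), measured.std()

for n in (5, 20, 100):
    mean, std = simulated_flatness(n)
    print(f"{n:4d} points: flatness {mean:.4f} +/- {std:.4f}")
```

Consistent with the article's finding, the simulated mean flatness drops when fewer control points are used, because sparse sampling is likely to miss the extreme deviations.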
Directory of Open Access Journals (Sweden)
Hosein Ghaffarzadeh
Abstract: This paper investigates the numerical modeling of flexural wave propagation in Euler-Bernoulli beams using the Hermite-type radial point interpolation method (HRPIM) under the damage quantification approach. HRPIM employs radial basis functions (RBFs) and their derivatives for shape function construction as a meshfree technique. The performance of the multiquadric (MQ) RBF in assessing the reflection ratio was evaluated. HRPIM signals were compared with the theoretical and finite element responses. The results show that MQ is a suitable RBF for HRPIM and wave propagation, although the range of proper shape parameters is notable. The number of field nodes is the main parameter for accurate wave propagation modeling using HRPIM. The size of the support domain should be less than an upper bound in order to prevent high error. With regard to the number of quadrature points, the minimum number of points is adequate for a stable solution, but adding more points in the damage region does not necessarily lead to more accurate responses. It is concluded that pure HRPIM, without any polynomial terms, is acceptable and that considering a few terms will improve the accuracy, though too many terms make the problem unstable and inaccurate.
International Nuclear Information System (INIS)
Kim, Kyung-O; Jeong, Hae Sun; Jo, Daeseong
2017-01-01
Highlights: • Employing the Radial Point Interpolation Method (RPIM) in numerical analysis of the multi-group neutron-diffusion equation. • Establishing the mathematical formulation of the modified multi-group neutron-diffusion equation by RPIM. • Performing the numerical analysis for a 2D critical problem. - Abstract: A mesh-free method is introduced to overcome the drawbacks (e.g., mesh generation and connectivity definition between the meshes) of mesh-based (nodal) methods such as the finite-element method and finite-difference method. In particular, the Point Interpolation Method (PIM) using a radial basis function is employed in the numerical analysis of the multi-group neutron-diffusion equation. The benchmark calculations are performed for 2D homogeneous and heterogeneous problems, and the Multiquadrics (MQ) and Gaussian (EXP) functions are employed to analyze the effect of the radial basis function on the numerical solution. Additionally, the effect of the dimensionless shape parameter in those functions on the calculation accuracy is evaluated. According to the results, the radial PIM (RPIM) can provide a highly accurate solution for the multiplication eigenvalue and the neutron flux distribution, and the numerical solution with the MQ radial basis function exhibits more stable accuracy with respect to the reference solutions than the Gaussian alternative. The dimensionless shape parameter directly affects the calculation accuracy and computing time. Values between 1.87 and 3.0 for the benchmark problems considered in this study lead to the most accurate solution. The difference between the analytical and numerical results for the neutron flux is significantly increased at the edge of the problem geometry, even though the maximum difference is lower than 4%. This phenomenon seems to arise from the derivative boundary condition at the (x,0) and (0,y) positions, and it may be necessary to introduce an additional strategy (e.g., the method using fictitious points and
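The role of the radial basis function and its shape parameter can be seen in a stripped-down 1D point interpolation example. This is ordinary RBF interpolation for illustration, not the multi-group diffusion solver of the record; the node count, test function, and shape parameter are arbitrary choices:

```python
import numpy as np

def rbf_interpolate(x_nodes, f_nodes, x_eval, c=0.5):
    """Point interpolation with multiquadric RBFs phi(r) = sqrt(r^2 + c^2).

    c is the shape parameter; as the abstract notes, accuracy (and conditioning)
    depend directly on its value.
    """
    r = np.abs(x_nodes[:, None] - x_nodes[None, :])
    A = np.sqrt(r**2 + c**2)              # interpolation matrix (nonsingular for MQ)
    w = np.linalg.solve(A, f_nodes)       # RBF weights
    r_eval = np.abs(x_eval[:, None] - x_nodes[None, :])
    return np.sqrt(r_eval**2 + c**2) @ w

x = np.linspace(0.0, 1.0, 15)
f = np.sin(2 * np.pi * x)
xe = np.linspace(0.0, 1.0, 200)
err = np.max(np.abs(rbf_interpolate(x, f, xe) - np.sin(2 * np.pi * xe)))
print(err)
```

Sweeping `c` in this toy setup reproduces the qualitative trade-off discussed in the abstract: larger shape parameters flatten the basis and can improve accuracy until ill-conditioning takes over.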
A comparison of numerical methods for the solution of continuous-time DSGE models
DEFF Research Database (Denmark)
Parra-Alvarez, Juan Carlos
This paper evaluates the accuracy of a set of techniques that approximate the solution of continuous-time DSGE models. Using the neoclassical growth model I compare linear-quadratic, perturbation and projection methods. All techniques are applied to the HJB equation and the optimality conditions...... parameters of the model and suggest the use of projection methods when a high degree of accuracy is required....
Improving Reference Service: The Case for Using a Continuous Quality Improvement Method.
Aluri, Rao
1993-01-01
Discusses the evaluation of library reference service; examines problems with past evaluations, including the lack of long-term planning and a systems perspective; and suggests a method for continuously monitoring and improving reference service using quality improvement tools such as checklists, cause and effect diagrams, Pareto charts, and…
An Easy Method for Drainage of Fluid in Cases of Continuous Irrigation of the Hand
Makhijani, Sumeet
2016-01-01
Summary: Description of a novel method to perform continuous irrigation for flexor tenosynovitis in a way that is comfortable for the patient and convenient for nursing staff by placing the hand in the suction pouch of a lithotomy style drape attached to wall suction. PMID:28293498
DEFF Research Database (Denmark)
Lauridsen, Mette Munk; Grønbæk, Henning; Næser, Esben
2012-01-01
Abstract Minimal hepatic encephalopathy (MHE) is a metabolic brain disorder occurring in patients with liver cirrhosis. MHE lessens a patient's quality of life, but is treatable when identified. The continuous reaction times (CRT) method is used in screening for MHE. Gender and age effects...
Affleck, Louise; Jennett, Penny
1998-01-01
Chart audit (assessment of patient medical records) is a cost-effective continuing-education needs-assessment method. Chart stimulated recall, in which physicians' memory of particular cases is stimulated by records, potentially increases content validity and exploration of clinical reasoning as well as the context of clinical decisions. (SK)
International Nuclear Information System (INIS)
Yang, W.; Wu, H.; Cao, L.
2012-01-01
More and more MOX fuel has been used around the world over the past several decades. Compared with UO2 fuel, it has some new features. For example, the neutron spectrum is harder, and more resonance interference effects arise within the resonance energy range because of the additional resonant nuclides contained in MOX fuel. In this paper, the wavelet scaling function expansion method is applied to study the resonance behavior of plutonium isotopes within MOX fuel. The wavelet scaling function expansion continuous-energy self-shielding method was developed recently and has been validated and verified by comparison to Monte Carlo calculations. In this method, continuous-energy cross-sections are utilized within the resonance energy range, which means that it is capable of solving problems with serious resonance interference effects without iteration. Therefore, this method is naturally suited to the MOX fuel resonance calculation problem. Furthermore, plutonium isotopes exhibit strong oscillations of the total cross-section within the thermal energy range, especially 240Pu and 242Pu. To take the thermal resonance effect of plutonium isotopes into consideration, the wavelet scaling function expansion continuous-energy resonance calculation code WAVERESON is enhanced by applying the free-gas scattering kernel to obtain the continuous-energy scattering source within the thermal energy range (2.1 eV to 4.0 eV), in contrast to the resonance energy range, in which the elastic scattering kernel is utilized. Finally, all of the calculation results of WAVERESON are compared with MCNP calculations. (authors)
Modular correction method of bending elastic modulus based on sliding behavior of contact point
International Nuclear Information System (INIS)
Ma, Zhichao; Zhao, Hongwei; Zhang, Qixun; Liu, Changyi
2015-01-01
During the three-point bending test, sliding of the contact point between the specimen and the supports was observed; this sliding was shown to affect the measurement of both deflection and span length, which directly enter the calculation of the bending elastic modulus. Based on the Hertz formula for elastic contact deformation and a theoretical treatment of the contact-point sliding, a model precisely describing the deflection and span length as functions of bending load was established. Moreover, a modular correction method for the bending elastic modulus was proposed. By comparing the corrected elastic moduli of three materials (H63 copper–zinc alloy, AZ31B magnesium alloy and 2026 aluminum alloy) with the standard moduli obtained from standard uniaxial tensile tests, the universal feasibility of the proposed correction method was verified. Also, the ratio of corrected to raw elastic modulus decreased monotonically as the raw elastic modulus of the materials increased. (technical note)
King, Nathan D.; Ruuth, Steven J.
2017-05-01
Maps from a source manifold M to a target manifold N appear in liquid crystals, color image enhancement, texture mapping, brain mapping, and many other areas. A numerical framework to solve variational problems and partial differential equations (PDEs) that map between manifolds is introduced within this paper. Our approach, the closest point method for manifold mapping, reduces the problem of solving a constrained PDE between manifolds M and N to the simpler problems of solving a PDE on M and projecting to the closest points on N. In our approach, an embedding PDE is formulated in the embedding space using closest point representations of M and N. This enables the use of standard Cartesian numerics for general manifolds that are open or closed, with or without orientation, and of any codimension. An algorithm is presented for the important example of harmonic maps and generalized to a broader class of PDEs, which includes p-harmonic maps. Improved efficiency and robustness are observed in convergence studies relative to the level set embedding methods. Harmonic and p-harmonic maps are computed for a variety of numerical examples. In these examples, we denoise texture maps, diffuse random maps between general manifolds, and enhance color images.
Analysis of tree stand horizontal structure using random point field methods
Directory of Open Access Journals (Sweden)
O. P. Sekretenko
2015-06-01
This paper uses a model approach to analyze the horizontal structure of forest stands. The main types of models of random point fields and statistical procedures that can be used to analyze spatial patterns of trees in uneven- and even-aged stands are described. We show how modern methods of spatial statistics can be used to address one of the objectives of forestry: to clarify the laws of natural thinning of a forest stand and the corresponding changes in its spatial structure over time. Studying natural forest thinning, we describe the consecutive stages of modeling: selection of the appropriate parametric model, parameter estimation and generation of point patterns in accordance with the selected model, selection of statistical functions to describe the horizontal structure of forest stands, and testing of statistical hypotheses. We show the possibilities of a specialized software package, spatstat, which is designed for spatial statistics and provides software support for modern methods of spatial data analysis. We show that a model of stand thinning that does not consider inter-tree interaction can reproduce the size distribution of the trees properly, but the spatial pattern of the modeled stand is not quite consistent with observed data. Using data from three even-aged pine stands aged 25, 55, and 90 years, we demonstrate that spatial point process models are useful for combining measurements from forest stands of different ages to study natural stand thinning.
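As a small taste of the point-field toolbox (the paper itself uses the R package spatstat; this Python sketch is an illustrative stand-in), the code below simulates a homogeneous Poisson "stand" of trees and computes the Clark-Evans nearest-neighbour index, a basic test of complete spatial randomness:

```python
import numpy as np

rng = np.random.default_rng(4)

def clark_evans_index(pts, area):
    """Clark-Evans aggregation index: observed over expected mean
    nearest-neighbour distance under complete spatial randomness (CSR).
    Values near 1 indicate CSR, below 1 clustering, above 1 regularity.
    (No edge correction is applied in this simple version.)"""
    n = len(pts)
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)
    observed = d.min(axis=1).mean()
    expected = 0.5 / np.sqrt(n / area)
    return observed / expected

# Homogeneous Poisson ("random") stand of trees on a 100 m x 100 m plot
n_trees = rng.poisson(200)
trees = rng.uniform(0.0, 100.0, size=(n_trees, 2))
print(clark_evans_index(trees, 100.0 * 100.0))
```

Replacing the uniform positions with a clustered or inhibited pattern shifts the index away from 1, which is the kind of diagnostic used when comparing modeled and observed stand structure.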
An unsteady point vortex method for coupled fluid-solid problems
Energy Technology Data Exchange (ETDEWEB)
Michelin, Sebastien [Jacobs School of Engineering, UCSD, Department of Mechanical and Aerospace Engineering, La Jolla, CA (United States); Ecole Nationale Superieure des Mines de Paris, Paris (France); Llewellyn Smith, Stefan G. [Jacobs School of Engineering, UCSD, Department of Mechanical and Aerospace Engineering, La Jolla, CA (United States)
2009-06-15
A method is proposed for the study of the two-dimensional coupled motion of a general sharp-edged solid body and a surrounding inviscid flow. The formation of vorticity at the body's edges is accounted for by the shedding at each corner of point vortices whose intensity is adjusted at each time step to satisfy the regularity condition on the flow at the generating corner. The irreversible nature of vortex shedding is included in the model by requiring the vortices' intensity to vary monotonically in time. A conservation of linear momentum argument is provided for the equation of motion of these point vortices (Brown-Michael equation). The forces and torques applied on the solid body are computed as explicit functions of the solid body velocity and the vortices' position and intensity, thereby providing an explicit formulation of the vortex-solid coupled problem as a set of non-linear ordinary differential equations. The example of a falling card in a fluid initially at rest is then studied using this method. The stability of broadside-on fall is analysed and the shedding of vorticity from both plate edges is shown to destabilize this position, consistent with experimental studies and numerical simulations of this problem. The reduced-order representation of the fluid motion in terms of point vortices is used to understand the physical origin of this destabilization. (orig.)
Nazemizadeh, M.; Rahimi, H. N.; Amini Khoiy, K.
2012-03-01
This paper presents an optimal control strategy for trajectory planning of mobile robots that considers the nonlinear dynamic model and nonholonomic constraints of the system. The nonholonomic constraints are introduced by a nonintegrable set of differential equations representing a kinematic restriction on the motion. Lagrange's principle is employed to derive the nonlinear equations of the system. Then, the optimal path planning of the mobile robot is formulated as an optimal control problem. To set up the problem, the nonlinear equations of the system are taken as constraints, and a minimum-energy objective function is defined. To solve the problem, an indirect solution of the optimal control method is employed, and the optimality conditions are derived as a set of coupled nonlinear differential equations. These equations are solved numerically, and various simulations are performed for a nonholonomic mobile robot to illustrate the effectiveness of the proposed method.
Model reduction method using variable-separation for stochastic saddle point problems
Jiang, Lijian; Li, Qiuqi
2018-02-01
In this paper, we consider a variable-separation (VS) method to solve stochastic saddle point (SSP) problems. The VS method is applied to obtain the solution in tensor product structure for stochastic partial differential equations (SPDEs) in a mixed formulation. The aim of the technique is to construct a reduced basis approximation of the solution of the SSP problems. The VS method attempts to obtain a low-rank separated representation of the solution for SSP in a systematic enrichment manner. No iteration is performed at each enrichment step. In order to satisfy the inf-sup condition in the mixed formulation, we enrich the separated terms for the primal system variable at each enrichment step. For SSP problems treated by regularization or penalty, we propose a more efficient variant, the variable-separation by penalty method, which avoids further enrichment of the separated terms in the original mixed formulation. The computation of the variable-separation method decomposes into an offline phase and an online phase. A sparse low-rank tensor approximation method is used to significantly improve the online computational efficiency when the number of separated terms is large. We present three numerical examples of SSP problems to illustrate the performance of the proposed methods.
Statistical methods for change-point detection in surface temperature records
Pintar, A. L.; Possolo, A.; Zhang, N. F.
2013-09-01
We describe several statistical methods to detect possible change-points in a time series of values of surface temperature measured at a meteorological station, and to assess the statistical significance of such changes, taking into account the natural variability of the measured values, and the autocorrelations between them. These methods serve to determine whether the record may suffer from biases unrelated to the climate signal, hence whether there may be a need for adjustments as considered by M. J. Menne and C. N. Williams (2009) "Homogenization of Temperature Series via Pairwise Comparisons", Journal of Climate 22 (7), 1700-1717. We also review methods to characterize patterns of seasonality (seasonal decomposition using monthly medians or robust local regression), and explain the role they play in the imputation of missing values, and in enabling robust decompositions of the measured values into a seasonal component, a possible climate signal, and a station-specific remainder. The methods for change-point detection that we describe include statistical process control, wavelet multi-resolution analysis, adaptive weights smoothing, and a Bayesian procedure, all of which are applicable to single station records.
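Among the simplest members of this family of change-point statistics is the CUSUM statistic. The minimal sketch below (synthetic data, not a real station record) locates a single mean shift as the argmax of the cumulative sum of deviations from the overall mean:

```python
import numpy as np

rng = np.random.default_rng(2)

def cusum_changepoint(x):
    """Locate the most likely mean-shift point via the CUSUM statistic.

    Returns the index k maximizing |S_k|, where S_k is the cumulative sum of
    deviations from the overall mean; significance would be assessed separately,
    accounting for autocorrelation as the abstract discusses.
    """
    s = np.cumsum(x - x.mean())
    return int(np.argmax(np.abs(s)))

# Synthetic "temperature" record with a 0.8-degree step after index 119
series = np.concatenate([rng.normal(10.0, 0.3, 120), rng.normal(10.8, 0.3, 80)])
k = cusum_changepoint(series)
print(k)
```

On real station data one would first remove the seasonal component (e.g., monthly medians or robust local regression, as reviewed above) before applying any such detector to the remainder.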
Beam-pointing error compensation method of phased array radar seeker with phantom-bit technology
Directory of Open Access Journals (Sweden)
Qiuqiu WEN
2017-06-01
A phased array radar seeker (PARS) must be able to effectively decouple body motion and accurately extract the line-of-sight (LOS) rate for target missile tracking. In this study, a real-time two-channel beam pointing error (BPE) compensation method of PARS for LOS rate extraction is designed. The PARS discrete beam motion principle is analyzed, and a mathematical model of beam scanning control is established. According to the principle of the antenna element shift phase, both the antenna element shift phase law and the causes of beam-pointing error under phantom-bit conditions are analyzed, and the effect of BPE caused by phantom-bit technology (PBT) on the extraction accuracy of the LOS rate is examined. A compensation method is given, which includes coordinate transforms, beam angle margin compensation, and detector dislocation angle calculation. When the method is used, the beam angle margin in the pitch and yaw directions is calculated to reduce the effect of missile body disturbance and to improve LOS rate extraction precision by compensating for the detector dislocation angle. The simulation results validate the proposed method.
An improved maximum power point tracking method for a photovoltaic system
Ouoba, David; Fakkar, Abderrahim; El Kouari, Youssef; Dkhichi, Fayrouz; Oukarfi, Benyounes
2016-06-01
In this paper, an improved auto-scaling variable step-size Maximum Power Point Tracking (MPPT) method for a photovoltaic (PV) system is proposed. To simultaneously achieve a fast dynamic response and stable steady-state power, a first improvement was made to the step-size scaling function of the duty cycle that controls the converter. An algorithm was then proposed to address the wrong decisions that may be made at an abrupt change in irradiation. The proposed auto-scaling variable step-size approach was compared with various other approaches from the literature, such as classical fixed step-size, variable step-size, and a recent auto-scaling variable step-size MPPT approach. The simulation results obtained with MATLAB/SIMULINK are given and discussed for validation.
Point Measurements of Fermi Velocities by a Time-of-Flight Method
DEFF Research Database (Denmark)
Falk, David S.; Henningsen, J. O.; Skriver, Hans Lomholt
1972-01-01
The present paper describes in detail a new method of obtaining information about the Fermi velocity of electrons in metals, point by point, along certain contours on the Fermi surface. It is based on transmission of microwaves through thin metal slabs in the presence of a static magnetic field...... applied parallel to the surface. The electrons carry the signal across the slab and arrive at the second surface with a phase delay which is measured relative to a reference signal; the velocities are derived by analyzing the magnetic field dependence of the phase delay. For silver we have in this way...... obtained one component of the velocity along half the circumference of the centrally symmetric orbit for B ∥ [100]. The results are in agreement with current models for the Fermi surface. For B ∥ [011], the electrons involved are not moving in a symmetry plane of the Fermi surface. In such cases one cannot
Iterative method to compute the Fermat points and Fermat distances of multiquarks
International Nuclear Information System (INIS)
Bicudo, P.; Cardoso, M.
2009-01-01
The multiquark confining potential is proportional to the total distance of the fundamental strings linking the quarks and antiquarks. We address the computation of the total string distance and of the Fermat points where the different strings meet. For a meson the distance is trivially the quark-antiquark distance. For a baryon the problem was solved geometrically from the outset by Fermat and by Torricelli; it can be determined with just a ruler and a compass, and we briefly review it. However, we also show that for tetraquarks, pentaquarks, hexaquarks, etc., the geometrical solution is much more complicated. Here we provide an iterative method, converging fast to the correct Fermat points and the total distances, relevant for the multiquark potentials.
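For the baryon (three-body) case, a standard iterative scheme for the Fermat point is Weiszfeld's fixed-point iteration; the sketch below uses it for illustration. The authors' multiquark algorithm generalizes this idea to several coupled junction points, which this simple single-junction version does not handle:

```python
import numpy as np

def fermat_point(points, iters=200):
    """Weiszfeld's iteration for the Fermat point (geometric median):
    the point minimizing the total distance to the given points."""
    pts = np.asarray(points, dtype=float)
    x = pts.mean(axis=0)                       # centroid as starting guess
    for _ in range(iters):
        d = np.linalg.norm(pts - x, axis=1)
        if np.any(d < 1e-12):                  # iterate landed on a vertex
            break
        w = 1.0 / d                            # inverse-distance weights
        x = (pts * w[:, None]).sum(axis=0) / w.sum()
    return x

# For an equilateral triangle the Fermat point is the centroid
tri = [(0.0, 0.0), (1.0, 0.0), (0.5, np.sqrt(3) / 2)]
print(fermat_point(tri))
```

For a triangle with an angle of 120° or more, the minimizer sits at that vertex, which is why the iteration checks whether it has landed on one of the input points.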
Iterative method to compute the Fermat points and Fermat distances of multiquarks
Energy Technology Data Exchange (ETDEWEB)
Bicudo, P. [CFTP, Departamento de Fisica, Instituto Superior Tecnico, Av. Rovisco Pais, 1049-001 Lisboa (Portugal)], E-mail: bicudo@ist.utl.pt; Cardoso, M. [CFTP, Departamento de Fisica, Instituto Superior Tecnico, Av. Rovisco Pais, 1049-001 Lisboa (Portugal)
2009-04-13
The multiquark confining potential is proportional to the total distance of the fundamental strings linking the quarks and antiquarks. We address the computation of the total string distance and of the Fermat points where the different strings meet. For a meson the distance is trivially the quark-antiquark distance. For a baryon the problem was solved geometrically from the outset by Fermat and by Torricelli; it can be determined with just a ruler and a compass, and we briefly review it. However, we also show that for tetraquarks, pentaquarks, hexaquarks, etc., the geometrical solution is much more complicated. Here we provide an iterative method, converging fast to the correct Fermat points and the total distances, relevant for the multiquark potentials.
Pain point system scale (PPSS): a method for postoperative pain estimation in retrospective studies
Directory of Open Access Journals (Sweden)
Gkotsi A
2012-11-01
Anastasia Gkotsi,1 Dimosthenis Petsas,2 Vasilios Sakalis,3 Asterios Fotas,3 Argyrios Triantafyllidis,3 Ioannis Vouros,3 Evangelos Saridakis,2 Georgios Salpiggidis,3 Athanasios Papathanasiou3 (1 Department of Experimental Physiology, Aristotle University of Thessaloniki, Thessaloniki, Greece; 2 Department of Anesthesiology and 3 Department of Urology, Hippokration General Hospital, Thessaloniki, Greece). Purpose: Pain rating scales are widely used for pain assessment. Nevertheless, a new tool is required for pain assessment in retrospective studies. Methods: The postoperative pain episodes, during the first postoperative day, of three patient groups were analyzed. Each pain episode was assessed by a visual analog scale, numerical rating scale, verbal rating scale, and a new tool, the pain point system scale (PPSS), based on the analgesics administered. The type of analgesic was defined by an artificial neural network system based on the authors' clinic protocol, patient comorbidities, pain assessment tool scores, and preadministered medications. At each pain episode, each patient was asked to fill in the three pain scales. Bartlett's test and the Kaiser–Meyer–Olkin criterion were used to evaluate sample sufficiency. The proper scoring system was defined by varimax rotation. Spearman's and Pearson's coefficients assessed the correlation of the PPSS to the known pain scales. Results: A total of 262 pain episodes were evaluated in 124 patients. The PPSS scored one point for each dose of paracetamol, three points for each nonsteroidal antiinflammatory drug or codeine, and seven points for each dose of opioids. The correlation between the visual analog scale and the PPSS was found to be strong and linear (rho = 0.715, P < 0.001; Pearson = 0.631, P < 0.001). Conclusion: The PPSS correlated well with the known pain scales and could be used safely in the evaluation of postoperative pain in retrospective studies. Keywords: pain scale, retrospective studies, pain point system
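The PPSS scoring rule stated in the abstract (one point per paracetamol dose, three per NSAID or codeine dose, seven per opioid dose) is simple to apply programmatically; the function below is a direct transcription of that rule, with the dose labels chosen for illustration:

```python
# PPSS point values as stated in the abstract
POINTS = {"paracetamol": 1, "nsaid": 3, "codeine": 3, "opioid": 7}

def ppss_score(doses):
    """Sum PPSS points over the analgesic doses given in one pain episode."""
    return sum(POINTS[drug] for drug in doses)

# Hypothetical first-day record: 2x paracetamol, 1x NSAID, 1x opioid
print(ppss_score(["paracetamol", "paracetamol", "nsaid", "opioid"]))  # 12
```

Because the score is computed from the medication chart alone, it can be evaluated retrospectively for episodes where no pain scale was recorded, which is the gap the tool is meant to fill.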
Directory of Open Access Journals (Sweden)
Urriza I
2010-01-01
This paper presents a word length selection method for the implementation of digital controllers in both fixed-point and floating-point hardware on FPGAs. The method uses the new types defined in the VHDL-2008 fixed-point and floating-point packages. These packages allow customizing the word length of fixed- and floating-point representations and shorten the design cycle by simplifying the design of arithmetic operations. The method performs bit-true simulations in order to determine the word length needed to represent the constant coefficients and the internal signals of the digital controller while maintaining the control system specifications. A mixed-signal simulation tool is used to simulate the closed-loop system as a whole in order to analyze the impact of quantization effects and loop delays on control system performance. The method is applied to implement a digital controller for a switching power converter. The digital circuit is implemented on an FPGA, and the simulations are experimentally verified.
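The core of a bit-true word-length search can be illustrated in a few lines: quantize candidate coefficients at each trial word length and check the resulting error. This sketch models only coefficient rounding; the controller gains are made-up values, and a real flow would also simulate the internal signals and the closed loop, as the paper does:

```python
def quantize(value, frac_bits):
    """Round to the nearest representable fixed-point value with frac_bits
    fractional bits (a bit-true model of rounding a VHDL sfixed coefficient)."""
    scale = 1 << frac_bits
    return round(value * scale) / scale

# Hypothetical PI controller coefficients (not from the paper)
kp, ki = 0.8731, 0.0129

for frac_bits in (8, 12, 16):
    kp_q, ki_q = quantize(kp, frac_bits), quantize(ki, frac_bits)
    print(f"Q{frac_bits}: kp error {abs(kp - kp_q):.2e}, ki error {abs(ki - ki_q):.2e}")
```

The rounding error is bounded by half of the least significant bit, so widening the fractional field by one bit halves the worst-case coefficient error; the word-length search stops at the narrowest format that still meets the control specification.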
Directory of Open Access Journals (Sweden)
Akimov Pavel
2016-01-01
The paper is devoted to a two-dimensional semi-analytical solution of boundary problems of analysis of shear walls with the use of the discrete-continual finite element method (DCFEM). This approach yields the exact analytical solution in one direction (the so-called “basic” direction) and reduces the problem to a one-dimensional finite element analysis. The resulting multipoint boundary problem for a first-order system of ordinary differential equations with piecewise constant coefficients is solved analytically. The proposed method is rather efficient for evaluation of boundary effects (such as the stress field near a concentrated force). DCFEM also has a completely computer-oriented algorithm, computational stability, and optimal conditionality of the resultant system, and it is applicable for various loads at an arbitrary point or region of the wall.
Simulating Ice Shelf Response to Potential Triggers of Collapse Using the Material Point Method
Huth, A.; Smith, B. E.
2017-12-01
Weakening or collapse of an ice shelf can reduce the buttressing effect of the shelf on its upstream tributaries, resulting in sea level rise as the flux of grounded ice into the ocean increases. Here we aim to improve sea level rise projections by developing a prognostic 2D plan-view model that simulates the response of an ice sheet/ice shelf system to potential triggers of ice shelf weakening or collapse, such as calving events, thinning, and meltwater ponding. We present initial results for Larsen C. Changes in local ice shelf stresses can affect flow throughout the entire domain, so we place emphasis on calibrating our model to high-resolution data and precisely evolving fracture-weakening and ice geometry throughout the simulations. We primarily derive our initial ice geometry from CryoSat-2 data, and initialize the model by conducting a dual inversion for the ice viscosity parameter and basal friction coefficient that minimizes mismatch between modeled velocities and velocities derived from Landsat data. During simulations, we implement damage mechanics to represent fracture-weakening, and track ice thickness evolution, grounding line position, and ice front position. Since these processes are poorly represented by the Finite Element Method (FEM) due to mesh resolution issues and numerical diffusion, we instead implement the Material Point Method (MPM) for our simulations. In MPM, the ice domain is discretized into a finite set of Lagrangian material points that carry all variables and are tracked throughout the simulation. Each time step, information from the material points is projected to a Eulerian grid where the momentum balance equation (shallow shelf approximation) is solved similarly to FEM, but essentially treating the material points as integration points. The grid solution is then used to determine the new positions of the material points and update variables such as thickness and damage in a diffusion-free Lagrangian frame. The grid does not store
Constructing inverse probability weights for continuous exposures: a comparison of methods.
Naimi, Ashley I; Moodie, Erica E M; Auger, Nathalie; Kaufman, Jay S
2014-03-01
Inverse probability-weighted marginal structural models with binary exposures are common in epidemiology. Constructing inverse probability weights for a continuous exposure can be complicated by the presence of outliers, and the need to identify a parametric form for the exposure and account for nonconstant exposure variance. We explored the performance of various methods to construct inverse probability weights for continuous exposures using Monte Carlo simulation. We generated two continuous exposures and binary outcomes using data sampled from a large empirical cohort. The first exposure followed a normal distribution with homoscedastic variance. The second exposure followed a contaminated Poisson distribution, with heteroscedastic variance equal to the conditional mean. We assessed six methods to construct inverse probability weights using: a normal distribution, a normal distribution with heteroscedastic variance, a truncated normal distribution with heteroscedastic variance, a gamma distribution, a t distribution (1, 3, and 5 degrees of freedom), and a quantile binning approach (based on 10, 15, and 20 exposure categories). We estimated the marginal odds ratio for a single-unit increase in each simulated exposure in a regression model weighted by the inverse probability weights constructed using each approach, and then computed the bias and mean squared error for each method. For the homoscedastic exposure, the standard normal, gamma, and quantile binning approaches performed best. For the heteroscedastic exposure, the quantile binning, gamma, and heteroscedastic normal approaches performed best. Our results suggest that the quantile binning approach is a simple and versatile way to construct inverse probability weights for continuous exposures.
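As a hedged illustration of the quantile-binning approach the abstract recommends, the sketch below discretizes a continuous exposure into quantile categories and forms stabilized weights as the marginal category probability over the category probability conditional on a confounder. The data and the single binary confounder are invented for illustration; a real analysis would model the conditional probabilities with a regression:

```python
def quantile_bins(x, n_bins):
    """Assign each value its quantile-bin index (0..n_bins-1)."""
    order = sorted(range(len(x)), key=lambda i: x[i])
    bins = [0] * len(x)
    for rank, i in enumerate(order):
        bins[i] = min(rank * n_bins // len(x), n_bins - 1)
    return bins

def stabilized_weights(exposure, confounder, n_bins):
    """Stabilized IPW: P(bin) / P(bin | confounder stratum)."""
    bins = quantile_bins(exposure, n_bins)
    n = len(exposure)
    marg = {b: bins.count(b) / n for b in set(bins)}
    weights = []
    for i in range(n):
        stratum = [j for j in range(n) if confounder[j] == confounder[i]]
        cond = sum(1 for j in stratum if bins[j] == bins[i]) / len(stratum)
        weights.append(marg[bins[i]] / cond)
    return weights

exposure = [0.1, 0.4, 0.5, 0.9, 1.2, 1.5, 2.0, 2.2]
confounder = [0, 0, 0, 0, 1, 1, 1, 1]
w = stabilized_weights(exposure, confounder, n_bins=2)
```

In this toy data the confounder fully determines the exposure bin, so every stabilized weight equals the marginal bin probability.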
The use of the case study method in radiation worker continuing training
International Nuclear Information System (INIS)
Stevens, R.D.
1990-01-01
Typical methods of continuing training are often viewed by employees as boring, redundant and unnecessary. It is hoped that the operating experience lesson in the required course, Radiation Worker Requalification, will be well received by employees because actual RFP events will be presented as case studies. The interactive learning atmosphere created by the case study method stimulates discussion, develops analytical abilities, and motivates employees to use lessons learned in the workplace. This problem solving approach to continuing training incorporates cause and effect analysis, a technique which is also used at RFP to investigate events. A method of designing the operating experience lesson in the Radiation Worker Requalification course is described in this paper. 7 refs., 2 figs
Gan, Lei; Zhang, Chunxia; Shangguan, Fangqin; Li, Xiuping
2012-06-01
The continuous cooling crystallization of a blast furnace slag was studied by the application of the differential scanning calorimetry (DSC) method. A kinetic model describing the evolution of the degree of crystallization with time was obtained. Bulk cooling experiments of the molten slag, coupled with numerical simulation of heat transfer, were conducted to validate the results of the DSC method. The degrees of crystallization of the samples from the bulk cooling experiments were estimated by means of X-ray diffraction (XRD) and the DSC method. It was found that the results from the DSC cooling and bulk cooling experiments are in good agreement. The continuous cooling transformation (CCT) diagram of the blast furnace slag was constructed according to the crystallization kinetic model and experimental data. The obtained CCT diagram is characterized by two crystallization noses in different temperature ranges.
New practical method for evaluation of a conventional flat plate continuous pistachio dryer
International Nuclear Information System (INIS)
Kouchakzadeh, Ahmad; Tavakoli, Teymur
2011-01-01
Highlights: → Evaluation of a conventional flat plate continuous pistachio dryer with a new feasible method. → Using thermophysical properties of air and matter. → This method could be utilized in similar dryers for other agricultural products. → Method shows the heat loss and power separately. -- Abstract: Testing a dryer is necessary to evaluate its absolute and comparative performance with other dryers. A conventional flat plate continuous pistachio dryer was tested by a new practical method of mass and energy equilibrium. Results showed that the average power consumption and heat loss in three tests are 62.13 and 18.99 kW, respectively. The ratio of heat loss to power consumption showed that the efficiency of the flat plate pistachio dryer is about 69.4%.
New practical method for evaluation of a conventional flat plate continuous pistachio dryer
Energy Technology Data Exchange (ETDEWEB)
Kouchakzadeh, Ahmad [Agri Machinery Engineering, Ilam University, Ilam (Iran, Islamic Republic of); Tavakoli, Teymur [Agri Machinery Engineering, Tarbyat Modares University, Tehran (Iran, Islamic Republic of)
2011-07-15
Highlights: → Evaluation of a conventional flat plate continuous pistachio dryer with a new feasible method. → Using thermophysical properties of air and matter. → This method could be utilized in similar dryers for other agricultural products. → Method shows the heat loss and power separately. -- Abstract: Testing a dryer is necessary to evaluate its absolute and comparative performance with other dryers. A conventional flat plate continuous pistachio dryer was tested by a new practical method of mass and energy equilibrium. Results showed that the average power consumption and heat loss in three tests are 62.13 and 18.99 kW, respectively. The ratio of heat loss to power consumption showed that the efficiency of the flat plate pistachio dryer is about 69.4%.
Comparison of P&O and INC Methods in Maximum Power Point Tracker for PV Systems
Chen, Hesheng; Cui, Yuanhui; Zhao, Yue; Wang, Zhisen
2018-03-01
In the context of renewable energy, the maximum power point tracker (MPPT) is often used to increase solar power efficiency, taking into account the randomness and volatility of solar energy due to changes in temperature and irradiance. Among MPPT techniques, perturb & observe (P&O) and incremental conductance (INC) are widely used in MPPT controllers because of their simplicity and ease of operation. Based on the internal structure of the photovoltaic cell and its output volt-ampere characteristic, this paper establishes the circuit model and a dynamic simulation model in Matlab/Simulink using an S-function. The P&O and INC MPPT methods were analyzed and compared through theoretical analysis and digital simulation. The simulation results show that the system with the INC MPPT method has better dynamic performance and improves the output power of photovoltaic power generation.
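The incremental conductance decision rule the paper compares against P&O follows from the MPP condition dP/dV = 0, which is equivalent to dI/dV = -I/V. A minimal sketch (fixed step size and a toy linear PV curve are illustrative assumptions, not the paper's parameters):

```python
def inc_step(v, i, dv, di, v_step=0.1):
    """One INC decision: move the operating voltage toward the MPP."""
    if dv == 0:
        if di == 0:
            return v                    # operating point unchanged: hold
        return v + v_step if di > 0 else v - v_step
    g_inc = di / dv                     # incremental conductance dI/dV
    g = -i / v                          # negative instantaneous conductance
    if abs(g_inc - g) < 1e-9:
        return v                        # dP/dV == 0: at the MPP
    return v + v_step if g_inc > g else v - v_step

# toy PV curve I = 5 - V, whose MPP (dP/dV = 0) sits at V = 2.5
v_up = inc_step(v=2.0, i=3.0, dv=0.1, di=-0.1)    # left of MPP
v_down = inc_step(v=3.0, i=2.0, dv=0.1, di=-0.1)  # right of MPP
```

Unlike P&O, this rule can detect when it is exactly at the MPP and stop perturbing, which is the source of its better dynamic behavior reported above.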
A method of undifferenced ambiguity resolution for GPS+GLONASS precise point positioning.
Yi, Wenting; Song, Weiwei; Lou, Yidong; Shi, Chuang; Yao, Yibin
2016-05-25
Integer ambiguity resolution is critical for achieving positions of high precision and for shortening the convergence time of precise point positioning (PPP). However, GLONASS adopts frequency division multiple access (FDMA) signal processing, which gives rise to inter-frequency code biases (IFCBs) that are currently difficult to correct. This bias makes the methods proposed for GPS ambiguity fixing unsuitable for GLONASS. To realize undifferenced GLONASS ambiguity fixing, we propose an undifferenced ambiguity resolution method for GPS+GLONASS PPP that includes estimation of the IFCBs. The experimental results demonstrate that the success rate of GLONASS ambiguity fixing can reach 75% with the proposed method. Compared with the ambiguity float solutions, the positioning accuracies of the ambiguity-fixed solutions of GLONASS-only PPP are improved by 12.2%, 20.9%, and 10.3%, and those of GPS+GLONASS PPP by 13.0%, 35.2%, and 14.1% in the North, East and Up directions, respectively.
Directory of Open Access Journals (Sweden)
David S Nolan
2011-08-01
Full Text Available A new method is presented to determine the favorability for tropical cyclone development of an atmospheric environment, as represented by a mean sounding of temperature, humidity, and wind as a function of height. A mesoscale model with nested, moving grids is used to simulate the evolution of a weak, precursor vortex in a large domain with doubly periodic boundary conditions. The equations of motion are modified to maintain arbitrary profiles of both zonal and meridional wind as a function of height, without the large-scale temperature gradients that would normally accompany them but cannot be made consistent with doubly periodic boundary conditions. Comparisons between simulations using the point-downscaling method and simulations using wind shear balanced by temperature gradients illustrate both the advantages and the limitations of the technique. Further examples of what can be learned with this method are presented using both idealized and observed soundings and wind profiles.
H-Point Standard Addition Method for Simultaneous Determination of Eosin and Erythrosine
Directory of Open Access Journals (Sweden)
Amandeep Kaur
2011-01-01
Full Text Available A new, simple, sensitive and selective H-point standard addition method (HPSAM) has been developed for resolving a binary mixture of the food colorants eosin and erythrosine, which show overlapped spectra. The method is based on the complexation of the food dyes eosin and erythrosine with an Fe(III) complexing reagent at pH 5.5 and solubilizing the complexes in Triton X-100 micellar media. Absorbances at the two pairs of wavelengths, 540 and 550 nm (when eosin acts as the analyte) or 518 and 542 nm (when erythrosine acts as the analyte), were monitored. The method has been satisfactorily applied to the determination of eosin and erythrosine dyes in synthetic mixtures and commercial products.
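The computational core of HPSAM is locating the intersection (the H-point) of the two standard-addition lines recorded at the paired wavelengths; the x-coordinate of the intersection gives minus the analyte concentration. A sketch with least-squares line fits on synthetic data (the slopes, concentrations, and interferent offset are invented for illustration):

```python
def fit_line(x, y):
    """Ordinary least-squares slope and intercept."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return b, my - b * mx

def h_point(added, a1, a2):
    """Intersect the two addition lines; -x at the H-point = analyte conc."""
    b1, c1 = fit_line(added, a1)
    b2, c2 = fit_line(added, a2)
    xh = (c2 - c1) / (b1 - b2)
    return xh, b1 * xh + c1

# synthetic analyte conc 2.0, sensitivities 0.5 and 0.3 at the two
# wavelengths, constant interferent absorbance 0.2 at both (HPSAM premise)
added = [0.0, 1.0, 2.0, 3.0]
a1 = [0.5 * (2.0 + a) + 0.2 for a in added]
a2 = [0.3 * (2.0 + a) + 0.2 for a in added]
xh, ah = h_point(added, a1, a2)
conc = -xh
```

At the H-point both lines share the interferent's contribution (0.2 here), which is why the method cancels the overlapping spectrum.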
Change-Point Detection Method for Clinical Decision Support System Rule Monitoring.
Liu, Siqi; Wright, Adam; Hauskrecht, Milos
2017-06-01
A clinical decision support system (CDSS) and its components can malfunction due to various reasons. Monitoring the system and detecting its malfunctions can help one to avoid potential mistakes and their associated costs. In this paper, we investigate the problem of detecting changes in the CDSS operation, in particular its monitoring and alerting subsystem, by monitoring its rule firing counts. The detection should be performed online; that is, whenever a new datum arrives, we want a score indicating how likely it is that there is a change in the system. We develop a new method based on Seasonal-Trend decomposition and likelihood ratio statistics to detect the changes. Experiments on real and simulated data show that our method has a lower delay in detection compared with existing change-point detection methods.
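The two-stage idea (deseasonalize, then score a shift with a likelihood ratio) can be sketched as below. Per-season means stand in for the Seasonal-Trend decomposition the paper uses, and the unit-variance Gaussian mean-shift statistic and the data are illustrative assumptions:

```python
def deseasonalize(x, period):
    """Subtract per-season means (a simple stand-in for STL)."""
    means = [sum(x[i::period]) / len(x[i::period]) for i in range(period)]
    return [v - means[i % period] for i, v in enumerate(x)]

def lr_change_score(r):
    """Max 2*log-likelihood-ratio for a mean shift in unit-variance noise."""
    n = len(r)
    base = n * (sum(r) / n) ** 2
    best = 0.0
    for k in range(1, n):
        m1 = sum(r[:k]) / k
        m2 = sum(r[k:]) / (n - k)
        best = max(best, k * m1 * m1 + (n - k) * m2 * m2 - base)
    return best

# seasonal counts (period 2) whose level shifts from 0/1 to 3/4 halfway
x = [0, 1] * 5 + [3, 4] * 5
score = lr_change_score(deseasonalize(x, period=2))
```

A large score flags a likely change; thresholding it each time a new count arrives yields the online detector described above.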
International Nuclear Information System (INIS)
Qi, Zhipeng; Li, Xiu; Lu, Xushan; Zhang, Yingying; Yao, Weihua
2015-01-01
We introduce a new and potentially useful method for wave field inverse transformation and its application in transient electromagnetic method (TEM) 3D interpretation. The diffusive EM field is known to have a unique integral representation in terms of a fictitious wave field that satisfies a wave equation. The continuous imaging of TEM can be accomplished using the imaging methods of seismic interpretation after the diffusion equation is transformed into a fictitious wave equation. The interpretation method based on the imaging of a fictitious wave field could be used as a fast 3D inversion method. Moreover, the fictitious wave field possesses wave field features that make it possible to apply a wave field interpretation method in TEM to improve the prospecting resolution. Wave field transformation is a key issue in the migration imaging of a fictitious wave field. The wave field transformation equation is a Fredholm integral equation of the first kind, which is a typical ill-posed equation. Additionally, TEM has a large dynamic time range, which further aggravates the ill-posedness of the problem. The wave field transformation is implemented using a pre-conditioned regularized conjugate gradient method. The continuous imaging of the fictitious wave field is implemented using Kirchhoff integration. A synthetic aperture and deconvolution algorithm is also introduced to improve the interpretation resolution. We interpreted field data by the method proposed in this paper, and obtained a satisfying interpretation result. (paper)
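Regularized conjugate gradients for a discretized first-kind Fredholm equation can be sketched as plain Tikhonov-regularized CG on the normal equations (the authors' pre-conditioned variant is more elaborate; the tiny ill-conditioned matrix here is an illustrative stand-in for the discretized kernel):

```python
def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

def tikhonov_cg(A, b, lam, iters=50):
    """CG on the regularized normal equations (A^T A + lam I) x = A^T b."""
    At = transpose(A)
    n = len(A[0])
    def op(x):  # apply (A^T A + lam I)
        y = matvec(At, matvec(A, x))
        return [yi + lam * xi for yi, xi in zip(y, x)]
    x = [0.0] * n
    r = matvec(At, b)            # residual of normal equations at x = 0
    p = r[:]
    rs = sum(ri * ri for ri in r)
    for _ in range(iters):
        Ap = op(p)
        alpha = rs / sum(pi * ai for pi, ai in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * ai for ri, ai in zip(r, Ap)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < 1e-20:
            break
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x

A = [[1.0, 0.99], [0.99, 1.0]]   # ill-conditioned toy "kernel"
b = matvec(A, [1.0, 2.0])
x = tikhonov_cg(A, b, lam=1e-8)
```

The regularization parameter lam trades fidelity for stability; for severely ill-posed kernels and noisy data it must be chosen much larger than in this noise-free toy.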
Directory of Open Access Journals (Sweden)
Yin Yanshu
2017-12-01
Full Text Available In this paper, a location-based multiple point statistics method is developed to model a non-stationary reservoir. The proposed method characterizes the relationship between the sedimentary pattern and the deposit location using the relative central position distance function, which alleviates the requirement that the training image and the simulated grids have the same dimension. The weights in every direction of the distance function can be changed to characterize the reservoir heterogeneity in various directions. The local integral replacements of data events, structured random path, distance tolerance and multi-grid strategy are applied to reproduce the sedimentary patterns and obtain a more realistic result. This method is compared with the traditional Snesim method using a synthesized 3-D training image of Poyang Lake and a reservoir model of Shengli Oilfield in China. The results indicate that the new method can reproduce the non-stationary characteristics better than the traditional method and is more suitable for simulation of delta-front deposits. These results show that the new method is a powerful tool for modelling a reservoir with non-stationary characteristics.
Yin, Gaohong
2016-05-01
Since the failure of the Scan Line Corrector (SLC) instrument on Landsat 7, observable gaps occur in the acquired Landsat 7 imagery, impacting the spatial continuity of the observed imagery. Due to the high geometric and radiometric accuracy provided by Landsat 7, a number of approaches have been proposed to fill the gaps. However, all proposed approaches have evident constraints for universal application. The main issues in gap-filling are the inability to reproduce continuous features such as meandering streams or roads, and to maintain the shape of small objects when filling gaps in heterogeneous areas. The aim of the study is to validate the feasibility of using the Direct Sampling multiple-point geostatistical method, which has been shown to reconstruct complicated geological structures satisfactorily, to fill Landsat 7 gaps. The Direct Sampling method uses a conditional stochastic resampling of known locations within a target image to fill gaps and can generate multiple reconstructions for one simulation case. The Direct Sampling method was examined across a range of land cover types including deserts, sparse rural areas, dense farmlands, urban areas, braided rivers and coastal areas to demonstrate its capacity to recover gaps accurately for various land cover types. The prediction accuracy of the Direct Sampling method was also compared with other gap-filling approaches, which have previously been demonstrated to offer satisfactory results, under both homogeneous and heterogeneous area situations. The results showed that the Direct Sampling method provides sufficiently accurate prediction results for a variety of land cover types, from homogeneous areas to heterogeneous land cover types. Likewise, it exhibits superior performance when used to fill gaps in heterogeneous land cover types without an input image, or with an input image that is temporally far from the target image, in comparison with other gap-filling approaches.
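The conditional resampling described above can be illustrated in 1-D (real use is on 2-D imagery): each unknown sample is filled by scanning the known data in random order and copying the first value whose surrounding data event is within a distance tolerance. Window size, tolerance, and the series are illustrative assumptions:

```python
import random

def direct_sampling_fill(series, half_win=1, tol=0.0, seed=7):
    """Fill None entries by conditional resampling of the known data."""
    rng = random.Random(seed)
    out = list(series)
    known = [i for i, v in enumerate(out) if v is not None]
    for i in [j for j, v in enumerate(out) if v is None]:
        # the data event: known neighbors of the gap within the window
        event = [(o, out[i + o]) for o in range(-half_win, half_win + 1)
                 if o != 0 and 0 <= i + o < len(out) and out[i + o] is not None]
        fill = out[rng.choice(known)]            # fallback: random known value
        for c in rng.sample(known, len(known)):  # random scan path
            neigh = {o: out[c + o] for o, _ in event
                     if 0 <= c + o < len(out) and out[c + o] is not None}
            if len(neigh) == len(event):
                if sum(abs(v - neigh[o]) for o, v in event) <= tol:
                    fill = out[c]                # accept first match
                    break
        out[i] = fill
    return out

series = [0, 1, 2, 0, 1, 2, 0, None, 2, 0, 1, 2]
filled = direct_sampling_fill(series)
```

Because the scan path is random and the first acceptable match is taken, repeated runs with different seeds yield the multiple equally plausible reconstructions the abstract mentions.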
Development of a method of continuous improvement of services using the Business Intelligence tools
Directory of Open Access Journals (Sweden)
Svetlana V. Kulikova
2018-01-01
Full Text Available The purpose of the study was to develop a method of continuous improvement of services using Business Intelligence tools. Materials and methods: the method builds on the concept of the Deming Cycle, Business Intelligence methods and technologies, the Agile methodology and SCRUM. Results: the article considers the problem of continuous improvement of services and offers solutions using methods and technologies of Business Intelligence. In this case, the purpose of the technology is to support the final decision regarding what needs to be improved in the current organization of services. In other words, Business Intelligence helps the product manager to see what is hidden from the “human eye” on the basis of received and processed data. The article describes the main stages of development of the method based on the activity of the enterprise. It is necessary to fully build the Business Intelligence system in the enterprise to identify bottlenecks, to justify the need for their elimination and, in general, to support continuous improvement of the services. This process is represented in DFD notation. The article presents a scheme for the selection of suitable agile methodologies. The proposed solution concept includes methods for identifying problems through Business Intelligence technology, a system for troubleshooting, and analysis of the results of the introduced changes. A technical description of the project is given. Conclusion: the authors formed the concept of a method for the continuous improvement of services using Business Intelligence technology, tailored to the specifics of enterprises offering SaaS solutions. It was also found that when using this method, the recommended development methodology is SCRUM. The result of this scientific
Wheeler, Mary
2013-11-16
We study the numerical approximation on irregular domains with general grids of the system of poroelasticity, which describes fluid flow in deformable porous media. The flow equation is discretized by a multipoint flux mixed finite element method and the displacements are approximated by a continuous Galerkin finite element method. First-order convergence in space and time is established in appropriate norms for the pressure, velocity, and displacement. Numerical results are presented that illustrate the behavior of the method. © Springer Science+Business Media Dordrecht 2013.
Fourier Deconvolution Methods for Resolution Enhancement in Continuous-Wave EPR Spectroscopy.
Reed, George H; Poyner, Russell R
2015-01-01
An overview of resolution enhancement of conventional, field-swept, continuous-wave electron paramagnetic resonance spectra using Fourier transform-based deconvolution methods is presented. Basic steps that are involved in resolution enhancement of calculated spectra using an implementation based on complex discrete Fourier transform algorithms are illustrated. Advantages and limitations of the method are discussed. An application to an experimentally obtained spectrum is provided to illustrate the power of the method for resolving overlapped transitions. © 2015 Elsevier Inc. All rights reserved.
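The deconvolution pipeline the overview describes (divide the spectrum's Fourier transform by that of the broadening lineshape, then impose a narrower target lineshape before transforming back) can be sketched as below. The naive DFT, the eps guard, and the toy lineshapes are illustrative assumptions; a practical implementation would use an FFT and a proper apodization filter to limit noise amplification:

```python
import cmath

def dft(x, inverse=False):
    """Naive discrete Fourier transform (O(n^2), fine for small spectra)."""
    n, s = len(x), (1 if inverse else -1)
    out = [sum(x[k] * cmath.exp(s * 2j * cmath.pi * j * k / n)
               for k in range(n)) for j in range(n)]
    return [v / n for v in out] if inverse else out

def fourier_deconvolve(signal, broadening, target):
    """Swap the broadening lineshape for a narrower target lineshape."""
    S, B, T = dft(signal), dft(broadening), dft(target)
    eps = 1e-12   # crude guard against division by ~0; real data needs
                  # apodization here to suppress high-frequency noise
    ratio = [s * t / (b if abs(b) > eps else eps)
             for s, b, t in zip(S, B, T)]
    return [v.real for v in dft(ratio, inverse=True)]

# sanity case: deconvolving a lineshape by itself must return the target
sig = [4.0, 2.0, 1.0, 0.0, 0.0, 0.0, 1.0, 2.0]
tgt = [2.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0]
sharp = fourier_deconvolve(sig, sig, tgt)
```

The choice of target lineshape controls the trade-off the overview discusses: the narrower the target, the greater the resolution gain but also the noise amplification.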
METHOD OF GREEN FUNCTIONS IN MATHEMATICAL MODELLING FOR TWO-POINT BOUNDARY-VALUE PROBLEMS
Directory of Open Access Journals (Sweden)
E. V. Dikareva
2015-01-01
Full Text Available Summary. In many applied problems of control, optimization, system theory, theoretical and construction mechanics, for problems with string and rod structures, oscillation theory, the theory of elasticity and plasticity, and mechanical problems connected with fracture dynamics and shock waves, the main instrument of study is the theory of high-order ordinary differential equations. This methodology is also applied to mathematical models in graph theory with different partitionings based on differential equations. Such equations are used not only for the theoretical foundation of mathematical models but also for constructing numerical methods and computer algorithms. These models are studied with the use of the Green function method. The paper first presents the necessary theoretical background on the Green function method for multi-point boundary-value problems. The main equation is discussed, and the notions of multi-point boundary conditions, boundary functionals, degenerate and non-degenerate problems, and the fundamental matrix of solutions are introduced. In the main part the problem under study is formulated in terms of shocks and deformations in the boundary conditions. After that the main results are formulated. Theorem 1 proves conditions for the existence and uniqueness of solutions. Theorem 2 proves conditions for strict positivity and equal measureness for a pair of solutions. Theorem 3 proves existence and estimates for the least eigenvalue, spectral properties, and positivity of eigenfunctions. Theorem 4 proves the weighted positivity of the Green function. Some possible applications to signal theory and transmutation operators are considered.
A RECOGNITION METHOD FOR AIRPLANE TARGETS USING 3D POINT CLOUD DATA
Directory of Open Access Journals (Sweden)
M. Zhou
2012-07-01
Full Text Available LiDAR is capable of obtaining three-dimensional coordinates of the terrain and targets directly and is widely applied in digital city, emergency disaster mitigation and environment monitoring. Especially because of its ability to penetrate low-density vegetation and canopy, the LiDAR technique has superior advantages in hidden and camouflaged target detection and recognition. Based on the multi-echo data of LiDAR, and combining invariant moment theory, this paper presents a recognition method for classic airplanes (even hidden targets mainly under the cover of canopy) using KD-tree segmented point cloud data. The proposed algorithm first uses a KD-tree to organize and manage the point cloud data and makes use of a clustering method to segment objects, and then prior knowledge and invariant moments are utilized to recognise airplanes. The test outcomes verified the practicality and feasibility of the method derived in this paper, which could be applied to target measurement and modelling in subsequent data processing.
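The segmentation step described above groups returns into candidate objects by Euclidean proximity. A KD-tree accelerates the neighbor search in practice; this illustrative version uses a brute-force search with union-find, which gives the same clusters for small clouds:

```python
def cluster_points(points, radius):
    """Group 3-D points into clusters of mutually reachable neighbors."""
    parent = list(range(len(points)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i
    r2 = radius * radius
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            if sum((a - b) ** 2 for a, b in zip(points[i], points[j])) <= r2:
                parent[find(i)] = find(j)   # union the two components
    groups = {}
    for i in range(len(points)):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())

pts = [(0, 0, 0), (0.5, 0, 0), (1.0, 0, 0), (10, 0, 0), (10.4, 0, 0)]
clusters = cluster_points(pts, radius=0.6)
```

Each resulting cluster would then be passed to the invariant-moment recognition stage; the radius is the main tuning parameter and is an assumption here.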
Directory of Open Access Journals (Sweden)
Reza Kiani Mavi
2013-01-01
Full Text Available Data envelopment analysis (DEA) is used to evaluate the performance of decision making units (DMUs) with multiple inputs and outputs in a homogeneous group. The acquired relative efficiency score for each decision making unit lies between zero and one, and a number of them may have an equal efficiency score of one. DEA successfully divides them into the two categories of efficient DMUs and inefficient DMUs. A ranking for inefficient DMUs is given, but DEA does not provide further information about the efficient DMUs. One of the popular methods for evaluating and ranking DMUs is the common set of weights (CSW) method. We generate a CSW model that considers nondiscretionary inputs, which are beyond the control of the DMUs, using the ideal point method. The main idea of this approach is to minimize the distance between the evaluated decision making unit and the ideal decision making unit (ideal point). Using an empirical example, we put our proposed model to the test by applying it to the data of 20 bank branches and ranking their efficient units.
Curvature computation in volume-of-fluid method based on point-cloud sampling
Kassar, Bruno B. M.; Carneiro, João N. E.; Nieckele, Angela O.
2018-01-01
This work proposes a novel approach to compute interface curvature in multiphase flow simulation based on the Volume of Fluid (VOF) method. It is well documented in the literature that curvature and normal vector computation in VOF may lack accuracy, mainly due to abrupt changes in the volume fraction field across the interfaces. This may degrade the interface tension force estimates, often resulting in inaccurate results for interface-tension-dominated flows. Many techniques have been presented over the last years to enhance the accuracy of normal vector and curvature estimates, including height functions, parabolic fitting of the volume fraction, reconstructed distance functions, coupling the Level Set method with VOF, and convolving the volume fraction field with smoothing kernels, among others. We propose a novel technique based on a representation of the interface by a cloud of points. The curvatures and the interface normal vectors are computed geometrically at each point of the cloud and projected onto the Eulerian grid in a Front-Tracking manner. Results are compared to benchmark data, and a significant reduction in spurious currents as well as improvement in the pressure jump are observed. The method was developed in the open source suite OpenFOAM® extending its standard VOF implementation, the interFoam solver.
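A geometric curvature estimate at a cloud point, in the spirit of the approach above, is the Menger curvature of the point and two neighbors: the reciprocal of the radius of the circle through them. A full implementation would fit over a larger neighborhood and work in 3-D; this three-point 2-D version is an illustrative minimum:

```python
import math

def menger_curvature(p, q, r):
    """Curvature 1/R of the circle through three 2-D points: 4*Area/(abc)."""
    a = math.dist(q, r)
    b = math.dist(p, r)
    c = math.dist(p, q)
    # twice the triangle area from the cross product of two edge vectors
    area2 = abs((q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0]))
    return 2.0 * area2 / (a * b * c)   # = 4*Area/(abc), since area2 = 2*Area

# three points on the unit circle: curvature must be 1/R = 1
k = menger_curvature((1.0, 0.0), (0.0, 1.0), (-1.0, 0.0))
```

Computing such estimates on the point cloud and projecting them to the grid avoids differentiating the discontinuous volume fraction field directly, which is the source of the inaccuracy discussed above.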
Directory of Open Access Journals (Sweden)
Zhiqiang Yang
2016-05-01
Full Text Available Due to the dynamic process of maximum power point tracking (MPPT caused by turbulence and large rotor inertia, variable-speed wind turbines (VSWTs cannot maintain the optimal tip speed ratio (TSR from cut-in wind speed up to the rated speed. Therefore, in order to increase the total captured wind energy, the existing aerodynamic design for VSWT blades, which only focuses on performance improvement at a single TSR, needs to be improved to a multi-point design. In this paper, based on a closed-loop system of VSWTs, including turbulent wind, rotor, drive train and MPPT controller, the distribution of operational TSR and its description based on inflow wind energy are investigated. Moreover, a multi-point method considering the MPPT dynamic process for the aerodynamic optimization of VSWT blades is proposed. In the proposed method, the distribution of operational TSR is obtained through a dynamic simulation of the closed-loop system under a specific turbulent wind, and accordingly the multiple design TSRs and the corresponding weighting coefficients in the objective function are determined. Finally, using the blade of a National Renewable Energy Laboratory (NREL 1.5 MW wind turbine as the baseline, the proposed method is compared with the conventional single-point optimization method using the commercial software Bladed. Simulation results verify the effectiveness of the proposed method.
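The weighting idea described above (distribute the objective over several design TSRs according to how much inflow wind energy is captured while operating near each) can be sketched simply. The nearest-TSR assignment and the v³ energy proxy are illustrative assumptions, not the paper's exact formulation:

```python
def tsr_weights(tsr_series, wind_speeds, design_tsrs):
    """Energy-weighted share of operation assigned to each design TSR."""
    w = {t: 0.0 for t in design_tsrs}
    for tsr, v in zip(tsr_series, wind_speeds):
        nearest = min(design_tsrs, key=lambda t: abs(t - tsr))
        w[nearest] += v ** 3          # inflow wind energy flux ~ v^3
    total = sum(w.values())
    return {t: wi / total for t, wi in w.items()}

# toy operational record from a closed-loop simulation
weights = tsr_weights([7.0, 8.0, 9.0, 8.0], [1.0, 1.0, 1.0, 1.0], [7.0, 9.0])
```

The resulting coefficients would weight the aerodynamic objective at each design TSR, so blade shapes are rewarded for performing well where the turbine actually spends its energy-relevant operating time.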
Variable Camber Continuous Aerodynamic Control Surfaces and Methods for Active Wing Shaping Control
Nguyen, Nhan T. (Inventor)
2016-01-01
An aerodynamic control apparatus for an air vehicle improves various aerodynamic performance metrics by employing multiple spanwise flap segments that jointly form a continuous or a piecewise continuous trailing edge to minimize drag induced by lift or vortices. At least one of the multiple spanwise flap segments includes a variable camber flap subsystem having multiple chordwise flap segments that may be independently actuated. Some embodiments also employ a continuous leading edge slat system that includes multiple spanwise slat segments, each of which has one or more chordwise slat segment. A method and an apparatus for implementing active control of a wing shape are also described and include the determination of desired lift distribution to determine the improved aerodynamic deflection of the wings. Flap deflections are determined and control signals are generated to actively control the wing shape to approximate the desired deflection.
International Nuclear Information System (INIS)
Chen, Lin; Fan, Xiangtao; Du, Xiaoping
2014-01-01
Point cloud filtering is the basic and key step in LiDAR data processing. The Adaptive Triangulated Irregular Network Modelling (ATINM) algorithm and the Threshold Segmentation on Elevation Statistics (TSES) algorithm are among the mature algorithms. However, few studies concentrate on the parameter selection of ATINM and the iteration condition of TSES, which can greatly affect the filtering results. First the paper presents these two key problems under two different terrain environments. For a flat area, small height and angle parameters perform well, while for areas with complex feature changes, large height and angle parameters perform well. One-time segmentation is enough for flat areas, and repeated segmentations are essential for complex areas. The paper then compares and analyses the results of the two methods. ATINM has a larger type I error in both data sets as it sometimes removes excessive points. TSES has a larger type II error in both data sets as it ignores topological relations between points. ATINM performs well even with a large region and dramatic topology, while TSES is more suitable for small regions with flat topology. Different parameters and iterations can cause relatively large filtering differences
Lei Guo; Haoran Jiang; Xinhua Wang; Fangai Liu
2017-01-01
Point-of-interest (POI) recommendation has been well studied in recent years. However, most of the existing methods focus on recommendation scenarios where users can provide explicit feedback. In most cases, however, the feedback is not explicit, but implicit. For example, we can only get a user’s check-in behaviors from the history of what POIs she/he has visited, but never know how much she/he likes them or why she/he does not like them. Recently, some researchers have noticed this problem ...
Calculation and decomposition of spot price using interior point nonlinear optimisation methods
International Nuclear Information System (INIS)
Xie, K.; Song, Y.H.
2004-01-01
Optimal pricing for real and reactive power is a very important issue in a deregulated environment. This paper formulates the optimal pricing problem as an extended optimal power flow problem. Then, spot prices are decomposed into different components reflecting various ancillary services. The derivation of the proposed decomposition model is described in detail. A Primal-Dual Interior Point method is applied to avoid the 'go' 'no go' gauge. In addition, the proposed approach can be extended to cater for other types of ancillary services. (author)
Approximate Dual Averaging Method for Multiagent Saddle-Point Problems with Stochastic Subgradients
Directory of Open Access Journals (Sweden)
Deming Yuan
2014-01-01
Full Text Available This paper considers the problem of solving the saddle-point problem over a network, which consists of multiple interacting agents. The global objective function of the problem is a combination of local convex-concave functions, each of which is only available to one agent. Our main focus is on the case where the projection steps are calculated approximately and the subgradients are corrupted by some stochastic noises. We propose an approximate version of the standard dual averaging method and show that the standard convergence rate is preserved, provided that the projection errors decrease at some appropriate rate and the noises are zero-mean and have bounded variance.
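The paper's method handles approximate projections and averaging over a network of agents; the single-agent toy below keeps only the core dual-averaging update with optional subgradient noise, applied to a simple convex-concave function. The problem, step schedule, and box constraint are illustrative assumptions:

```python
import random

def dual_averaging_saddle(grad_x, grad_y, steps=2000, noise=0.0, seed=1):
    """Approximate dual averaging for min_x max_y L(x, y) on [-1, 1]^2."""
    rng = random.Random(seed)
    proj = lambda v: max(-1.0, min(1.0, v))   # projection onto the box
    x, y = 1.0, 1.0
    Gx = Gy = 0.0                             # accumulated subgradients
    x_sum = y_sum = 0.0
    for t in range(1, steps + 1):
        Gx += grad_x(x, y) + noise * rng.gauss(0, 1)   # noisy subgradient
        Gy += grad_y(x, y) + noise * rng.gauss(0, 1)
        b = t ** 0.5                          # growing proximal weight
        x = proj(-Gx / b)                     # x-player minimizes
        y = proj(Gy / b)                      # y-player maximizes
        x_sum += x
        y_sum += y
    return x_sum / steps, y_sum / steps       # ergodic (averaged) iterates

# L(x, y) = x^2 - y^2 + x*y: convex-concave with saddle point (0, 0)
xa, ya = dual_averaging_saddle(lambda x, y: 2 * x + y,
                               lambda x, y: x - 2 * y)
```

Returning the averaged iterates rather than the last ones is what gives the standard convergence guarantee the abstract refers to, and the zero-mean bounded-variance noise model matches the assumption stated there.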
Gersonius, Berry; Ashley, Richard; Jeuken, Ad; Nasruddin, Fauzy; Pathirana, Assela; Zevenbergen, Chris
2010-05-01
start the identification and analysis of adaptive strategies at the end of PSIR scheme: impact and examine whether, and for how long, current risk management strategies will continue to be effective under different future conditions. The most noteworthy application of this approach is the adaptation tipping point method. Adaptation tipping points (ATP) are defined as the points where the magnitude of change is such that the current risk management strategy can no longer meet its objectives. In the ATP method, policy objectives, determining aspirational functioning, are taken as the starting point. Also, the current measures to achieve these objectives are described. This is followed by a sensitivity analysis to determine the optimal and critical boundary conditions (state). Lastly, the state is related to pressures in terms of future change. It should be noted that in the ATP method the driver for adopting a new risk management strategy is not future change as such, but rather failing to meet the policy objectives. In the current paper, the ATP method is applied to the case study of an existing stormwater system in Dordrecht (the Netherlands). This application shows the potential of the ATP method to reduce the complexity of implementing a resilience-focused approach to water risk management. It is expected that this will help foster greater practical relevance of resilience as a perspective for the planning of water management structures.
A new maximum power point method based on a sliding mode approach for solar energy harvesting
International Nuclear Information System (INIS)
Farhat, Maissa; Barambones, Oscar; Sbita, Lassaad
2017-01-01
Highlights: • Creates a simple, easy-to-implement and accurate V_MPP estimator. • Stability analysis of the proposed system based on Lyapunov theory. • A comparative study versus P&O highlights the SMC's good performance. • Constructs a new PS-SMC algorithm to cover the partial-shadow case. • Experimental validation of the SMC MPP tracker. - Abstract: This paper presents a photovoltaic (PV) system with a maximum power point tracking (MPPT) facility. The goal of this work is to maximize power extraction from the photovoltaic generator (PVG). This goal is achieved using a sliding mode controller (SMC) that drives a boost converter connected between the PVG and the load. The system is modeled and tested in the MATLAB/SIMULINK environment. In simulation, the sliding mode controller offers fast and accurate convergence to the maximum power operating point, outperforming the well-known perturb and observe method (P&O). The sliding mode controller's performance is evaluated in steady state and under load variations and panel partial shadow (PS) disturbances. To confirm these conclusions, a practical implementation of the sliding-mode-based maximum power point tracker is performed on a hardware setup built around a dSPACE real-time digital control platform. Data acquisition and control are handled by the dSPACE 1104 controller board and its RTI environment. The experimental results demonstrate the validity of the proposed control scheme on a stand-alone real photovoltaic system.
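The P&O baseline against which the abstract compares its sliding mode controller can be sketched in a few lines. The hill-climbing loop below uses an invented single-peak power-voltage curve, not the paper's PV model: it perturbs the operating voltage by a fixed step and reverses direction whenever the observed power drops.

```python
# toy PV power curve P(V) with a single maximum (assumed shape; MPP near 29.7 V)
def pv_power(v):
    return v * max(0.0, 8.0 * (1.0 - (v / 40.0) ** 7))

def perturb_and_observe(v0=20.0, dv=0.5, steps=200):
    v, direction = v0, +1.0
    p_prev = pv_power(v)
    for _ in range(steps):
        v += direction * dv          # perturb the operating voltage
        p = pv_power(v)
        if p < p_prev:               # observe: power dropped, so reverse direction
            direction = -direction
        p_prev = p
    return v

v_mpp = perturb_and_observe()
print(round(v_mpp, 1))  # oscillates around the maximum power point
```

The persistent oscillation around the MPP, visible in the final iterates, is exactly the steady-state drawback that sliding mode and other model-based trackers aim to reduce.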
Wang, D.; Hollaus, M.; Pfeifer, N.
2017-09-01
Classification of the wood and leaf components of trees is an essential prerequisite for deriving vital tree attributes, such as wood mass, leaf area index (LAI) and woody-to-total area. Laser scanning has emerged as a promising solution for this task. Intensity-based approaches are widely proposed, as different components of a tree can have discriminatory optical properties at the operating wavelengths of a sensor system. For geometry-based methods, machine learning algorithms are often used to separate wood and leaf points, given proper training samples. However, it remains unclear how the chosen machine learning classifier and the features used influence classification results. To this end, we compare four popular machine learning classifiers, namely Support Vector Machine (SVM), Naïve Bayes (NB), Random Forest (RF), and Gaussian Mixture Model (GMM), for separating wood and leaf points in terrestrial laser scanning (TLS) data. Two trees, an Erythrophleum fordii and a Betula pendula (silver birch), are used to test the impact of classifier, feature set, and training samples. Our results show that RF is the most accurate model and that local-density-related features are important. The experimental results confirm the feasibility of machine learning algorithms for the reliable classification of wood and leaf points. Note that our study is based on isolated trees; further tests should be performed on more tree species and on data from more complex environments.
Directory of Open Access Journals (Sweden)
D. Wang
2017-09-01
Full Text Available Classification of the wood and leaf components of trees is an essential prerequisite for deriving vital tree attributes, such as wood mass, leaf area index (LAI) and woody-to-total area. Laser scanning has emerged as a promising solution for this task. Intensity-based approaches are widely proposed, as different components of a tree can have discriminatory optical properties at the operating wavelengths of a sensor system. For geometry-based methods, machine learning algorithms are often used to separate wood and leaf points, given proper training samples. However, it remains unclear how the chosen machine learning classifier and the features used influence classification results. To this end, we compare four popular machine learning classifiers, namely Support Vector Machine (SVM), Naïve Bayes (NB), Random Forest (RF), and Gaussian Mixture Model (GMM), for separating wood and leaf points in terrestrial laser scanning (TLS) data. Two trees, an Erythrophleum fordii and a Betula pendula (silver birch), are used to test the impact of classifier, feature set, and training samples. Our results show that RF is the most accurate model and that local-density-related features are important. The experimental results confirm the feasibility of machine learning algorithms for the reliable classification of wood and leaf points. Note that our study is based on isolated trees; further tests should be performed on more tree species and on data from more complex environments.
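The four-classifier comparison described above can be sketched with scikit-learn on synthetic stand-in features; the actual TLS features, trees, and tuning are not reproduced here. SVM, NB and RF are trained supervised, while GMM, being a density model, is fitted per class and points are assigned to the class with the higher likelihood.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB
from sklearn.ensemble import RandomForestClassifier
from sklearn.mixture import GaussianMixture
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)

# synthetic stand-in for per-point TLS features (e.g. local density, planarity):
# wood points have denser, more planar neighbourhoods than leaf points
n = 1000
wood = rng.normal([5.0, 0.8], [1.0, 0.1], size=(n, 2))
leaf = rng.normal([2.0, 0.3], [1.0, 0.1], size=(n, 2))
X = np.vstack([wood, leaf])
y = np.repeat([1, 0], n)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)

scores = {}
for name, clf in [("SVM", SVC()), ("NB", GaussianNB()),
                  ("RF", RandomForestClassifier(random_state=0))]:
    clf.fit(Xtr, ytr)
    scores[name] = clf.score(Xte, yte)

# GMM: fit one Gaussian per class, classify each point by likelihood
gmms = {c: GaussianMixture(n_components=1, random_state=0).fit(Xtr[ytr == c])
        for c in (0, 1)}
pred = np.where(gmms[1].score_samples(Xte) > gmms[0].score_samples(Xte), 1, 0)
scores["GMM"] = (pred == yte).mean()

for name, s in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {s:.3f}")
```

On real TLS data the ranking depends on the feature set and training samples, which is precisely the sensitivity the paper investigates.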
Kamiya, Yusuke; Ishijma, Hisahiro; Hagiwara, Akiko; Takahashi, Shizu; Ngonyani, Henook A M; Samky, Eleuter
2017-02-01
To evaluate the impact of implementing continuous quality improvement (CQI) methods on patients' experiences and satisfaction in Tanzania. A cluster-randomized trial was conducted, randomly allocating district-level hospitals to a treatment group and a control group. Sixteen district-level hospitals in the Kilimanjaro and Manyara regions of Tanzania. Outpatient exit surveys targeting a total of 3292 individuals, 1688 in the treatment group and 1604 in the control group, at 3 time-points between September 2011 and September 2012. Implementation of the 5S (Sort, Set, Shine, Standardize, Sustain) approach as a CQI method in outpatient departments over 12 months. Cleanliness, waiting time, patient experience, patient satisfaction. The 5S improved cleanliness in the outpatient department, patients' subjective waiting time and overall satisfaction. However, only negligible effects were found on patients' experiences of hospital staff behaviours. The 5S as a CQI method is effective in enhancing the hospital environment and service delivery as subjectively assessed by outpatients, even over a short intervention period. Nevertheless, continuous efforts will be needed to connect CQI practices with further improvement in the delivery of quality health care. © The Author 2016. Published by Oxford University Press in association with the International Society for Quality in Health Care. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com
Directory of Open Access Journals (Sweden)
Chonglong Wang
Full Text Available Genomic selection has become a useful tool for animal and plant breeding. Currently, genomic evaluation is usually carried out using a single-trait model. However, a multi-trait model has the advantage of using information on correlated traits, leading to more accurate genomic prediction. To date, joint genomic prediction for a continuous and a threshold trait using a multi-trait model is scarce and needs more attention. Based on the previously proposed methods BayesCπ for a single continuous trait and BayesTCπ for a single threshold trait, we developed a novel method based on a linear-threshold model, LT-BayesCπ, for joint genomic prediction of a continuous trait and a threshold trait. Computing procedures for LT-BayesCπ using a Markov chain Monte Carlo algorithm were derived. A simulation study was performed to investigate the advantages of LT-BayesCπ over BayesCπ and BayesTCπ with regard to the accuracy of genomic prediction for both traits. Factors affecting the performance of LT-BayesCπ were addressed. The results showed that, in all scenarios, the accuracy of genomic prediction obtained with LT-BayesCπ was significantly increased for the threshold trait compared to single-trait prediction using BayesTCπ, while the accuracy for the continuous trait was comparable with that from single-trait prediction using BayesCπ. The proposed LT-BayesCπ can be a method of choice for joint genomic prediction of one continuous and one threshold trait.
Ambrogioni, Luca; Güçlü, Umut; van Gerven, Marcel A. J.; Maris, Eric
2017-01-01
This paper introduces the kernel mixture network, a new method for nonparametric estimation of conditional probability densities using neural networks. We model arbitrarily complex conditional densities as linear combinations of a family of kernel functions centered at a subset of training points. The weights are determined by the outer layer of a deep neural network, trained by minimizing the negative log likelihood. This generalizes the popular quantized softmax approach, which can be seen ...
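The core construction above, a conditional density formed as a softmax-weighted mixture of kernels centered at a subset of training targets, can be sketched without the neural network itself. In the sketch below the outer-layer logits are an invented stand-in for the network's output at some input x; in the paper they would be produced by a deep network trained on the negative log-likelihood.

```python
import numpy as np

# kernels sit at a subset of training targets (the method's key idea);
# centers and bandwidth below are invented for illustration
centers = np.array([-2.0, 0.0, 1.0, 3.0])
bandwidth = 0.5

def gaussian_kernel(y, c, h):
    return np.exp(-0.5 * ((y - c) / h) ** 2) / (h * np.sqrt(2 * np.pi))

def conditional_density(y, logits):
    # mixture weights come from a softmax over the network's outer-layer logits
    w = np.exp(logits - logits.max())
    w /= w.sum()
    return np.sum(w * gaussian_kernel(y, centers, bandwidth))

# stand-in for the network output at one input x (in practice: a deep net)
logits = np.array([0.1, 2.0, 0.5, -1.0])

# the resulting density integrates to ~1 over y (check by a Riemann sum)
ys = np.linspace(-6.0, 8.0, 2001)
dens = np.array([conditional_density(y, logits) for y in ys])
mass = dens.sum() * (ys[1] - ys[0])
print(round(mass, 3))  # ≈ 1.0

# training criterion for an observed target: negative log-likelihood
nll = -np.log(conditional_density(0.3, logits))
```

Because the weights are a proper softmax and each kernel is normalized, the mixture is automatically a valid density, unlike a raw quantized softmax over bins.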
International Nuclear Information System (INIS)
Masuda, Yasuhiko; Makino, Kenichi; Gotoh, Satoshi
1999-01-01
In a previous paper on determination of regional cerebral blood flow (rCBF) using the ¹²³I-IMP microsphere model, we reported that the accuracy of determining the integrated value of the input function from one-point arterial blood sampling can be increased by applying a correction using the 5 min : 29 min ratio of the whole-brain count. However, failure to collect the arterial blood at exactly 5 minutes after ¹²³I-IMP injection causes errors with this method, so there is a time limitation. We have now revised our method so that the one-point arterial blood sample can be taken at any time between 5 and 20 minutes after ¹²³I-IMP injection, with an added correction step for the sampling time. The revised method permits more accurate estimation of the integral of the input function. The method was then applied to 174 subjects: one-point blood samples were collected at random times between 5 and 20 minutes, and estimated values of the continuous arterial octanol extraction count (COC) were determined. The mean error rate between the COC and the measured continuous arterial octanol extraction count (OC) was 3.6%, with a standard deviation of 12.7%. Accordingly, in 70% of cases the rCBF could be estimated within an error rate of 13%, and in 95% of cases within an error rate of 25%. This improved method is a simple technique for determining rCBF with the ¹²³I-IMP microsphere model and one-point arterial blood sampling that no longer has a time limitation and does not require an octanol extraction step. (author)
Lattin, Frank G.; Paul, Donald G.
1996-11-01
A sorbent-based gas chromatographic method provides continuous quantitative measurement of phosgene, hydrogen cyanide, and cyanogen chloride in ambient air. These compounds are subject to workplace exposure limits as well as regulation under the terms of the Chemical Arms Treaty and Title III of the 1990 Clean Air Act amendments. The method was developed for on-site use in a mobile laboratory during remediation operations. Incorporated into the method are automated multi-level calibrations at time-weighted-average concentrations or lower. Gaseous standards are prepared in fused-silica-lined air sampling canisters, then transferred to the analytical system through dynamic spiking. Precision and accuracy studies performed to validate the method are described. Also described are system deactivation and passivation techniques critical to optimum method performance.
Directory of Open Access Journals (Sweden)
Ilaria Iaconeta
2017-09-01
Full Text Available The simulation of large deformation problems, involving complex history-dependent constitutive laws, is of paramount importance in several engineering fields. Particular attention has to be paid to the choice of a suitable numerical technique such that reliable results can be obtained. In this paper, a Material Point Method (MPM) and a Galerkin Meshfree Method (GMM) are presented and verified against classical benchmarks in solid mechanics. The aim is to demonstrate the good behavior of the methods in the simulation of cohesive-frictional materials, in both static and dynamic regimes and in problems dealing with large deformations. The vast majority of MPM techniques in the literature are based on some form of explicit time integration. The techniques proposed in the current work, by contrast, are based on implicit approaches, which can also be easily adapted to the simulation of static cases. The two methods are presented so as to highlight the similarities to, rather than the differences from, "standard" Updated Lagrangian (UL) approaches commonly employed by the Finite Elements (FE) community. Although both methods are able to give good predictions, it is observed that, under very large deformation of the medium, GMM lacks robustness due to its meshfree nature, which makes the definition of the meshless shape functions more difficult and expensive than in MPM. On the other hand, the mesh-based MPM is demonstrated to be more robust and reliable for extremely large deformation cases.
Puton, Tomasz; Kozlowski, Lukasz P.; Rother, Kristian M.; Bujnicki, Janusz M.
2013-01-01
We present a continuous benchmarking approach for the assessment of RNA secondary structure prediction methods implemented in the CompaRNA web server. As of 3 October 2012, the performance of 28 single-sequence and 13 comparative methods has been evaluated on RNA sequences/structures released weekly by the Protein Data Bank. We also provide a static benchmark generated on RNA 2D structures derived from the RNAstrand database. Benchmarks on both data sets offer insight into the relative performance of RNA secondary structure prediction methods on RNAs of different size and with respect to different types of structure. According to our tests, on the average, the most accurate predictions obtained by a comparative approach are generated by CentroidAlifold, MXScarna, RNAalifold and TurboFold. On the average, the most accurate predictions obtained by single-sequence analyses are generated by CentroidFold, ContextFold and IPknot. The best comparative methods typically outperform the best single-sequence methods if an alignment of homologous RNA sequences is available. This article presents the results of our benchmarks as of 3 October 2012, whereas the rankings presented online are continuously updated. We will gladly include new prediction methods and new measures of accuracy in the new editions of CompaRNA benchmarks. PMID:23435231
New sampling method in continuous energy Monte Carlo calculation for pebble bed reactors
International Nuclear Information System (INIS)
Murata, Isao; Takahashi, Akito; Mori, Takamasa; Nakagawa, Masayuki.
1997-01-01
A pebble bed reactor generally has double heterogeneity, consisting of two kinds of spherical fuel element: the core contains many fuel balls piled up randomly at a high packing fraction, and each fuel ball contains many small fuel particles that are also distributed randomly. In this study, to enable precise neutron transport calculation of such reactors with the continuous energy Monte Carlo method, a new sampling method has been developed. The new method has been implemented in the general-purpose Monte Carlo code MCNP, yielding a modified version, MCNP-BALL. The method was validated by calculating the inventory of spherical fuel elements arranged successively by sampling during a transport calculation, and by performing criticality calculations for ordered packing models. The results confirmed that the inventory of spherical fuel elements could be reproduced by MCNP-BALL to within 0.2%. A comparison of criticality calculations for ordered packing models between MCNP-BALL and the reference method shows excellent agreement in neutron spectrum as well as multiplication factor. MCNP-BALL enables us to analyze pebble bed type cores such as PROTEUS precisely with the continuous energy Monte Carlo method. (author)
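The geometric sampling problem at the heart of such codes, deciding whether a sampled point lies inside one of many packed fuel balls, can be illustrated with a Monte Carlo estimate of the packing fraction of an ordered arrangement. This toy uses a simple cubic lattice, far easier than the random, doubly heterogeneous packing handled by MCNP-BALL, and is only meant to show the point-in-sphere test.

```python
import numpy as np

rng = np.random.default_rng(0)

# simple cubic lattice of unit-diameter fuel balls centred on integer lattice
# points: a sampled point in the unit cell lies inside a ball iff its distance
# to the nearest lattice centre is below the ball radius 0.5
n = 1_000_000
pts = rng.uniform(0.0, 1.0, (n, 3))
nearest = np.round(pts)                     # nearest lattice centre (cell corner)
inside = np.linalg.norm(pts - nearest, axis=1) < 0.5
packing = inside.mean()
print(round(packing, 3))  # ≈ pi/6 ≈ 0.524 for simple cubic packing
```

The analytic packing fraction of simple cubic packing is π/6, so the estimate doubles as a correctness check on the nearest-centre lookup, the part that becomes nontrivial for randomly piled pebbles.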
One step linear reconstruction method for continuous wave diffuse optical tomography
Ukhrowiyah, N.; Yasin, M.
2017-09-01
A one-step linear reconstruction method for continuous-wave diffuse optical tomography is proposed and demonstrated on a polyvinyl chloride based material and a breast phantom. The approximation used in this method consists of selecting a regularization coefficient and evaluating the difference between two states, corresponding to data acquired without and with a change in optical properties. The method is used to recover optical parameters from measured boundary data of light propagation in the object. The approach is demonstrated with both simulated and experimental data: a numerical object is used to produce the simulation data, while the polyvinyl chloride based material and breast phantom samples provide the experimental data. Comparisons between experimental and simulation results are made to validate the proposed method. The reconstructed images produced by the one-step linear reconstruction method are almost the same as the original object. This approach provides a means of imaging that is sensitive to changes in optical properties, which may be particularly useful for functional imaging with continuous-wave diffuse optical tomography in the early diagnosis of breast cancer.
Puton, Tomasz; Kozlowski, Lukasz P; Rother, Kristian M; Bujnicki, Janusz M
2013-04-01
We present a continuous benchmarking approach for the assessment of RNA secondary structure prediction methods implemented in the CompaRNA web server. As of 3 October 2012, the performance of 28 single-sequence and 13 comparative methods has been evaluated on RNA sequences/structures released weekly by the Protein Data Bank. We also provide a static benchmark generated on RNA 2D structures derived from the RNAstrand database. Benchmarks on both data sets offer insight into the relative performance of RNA secondary structure prediction methods on RNAs of different size and with respect to different types of structure. According to our tests, on the average, the most accurate predictions obtained by a comparative approach are generated by CentroidAlifold, MXScarna, RNAalifold and TurboFold. On the average, the most accurate predictions obtained by single-sequence analyses are generated by CentroidFold, ContextFold and IPknot. The best comparative methods typically outperform the best single-sequence methods if an alignment of homologous RNA sequences is available. This article presents the results of our benchmarks as of 3 October 2012, whereas the rankings presented online are continuously updated. We will gladly include new prediction methods and new measures of accuracy in the new editions of CompaRNA benchmarks.
Rapid continuous chemical methods for studies of nuclei far from stability
Trautmann, N; Eriksen, D; Gaggeler, H; Greulich, N; Hickmann, U; Kaffrell, N; Skarnemark, G; Stender, E; Zendel, M
1981-01-01
Fast continuous separation methods accomplished by combining a gas-jet recoil-transport system with a variety of chemical systems are described. Procedures for the isolation of individual elements from fission product mixtures with the multistage solvent extraction facility SISAK are presented. Thermochromatography in connection with a gas-jet has been studied as a technique for on-line separation of volatile fission halides. Based on chemical reactions in the gas-jet system itself, separation procedures for tellurium, selenium and germanium from fission products have been worked out. All the continuous chemical methods can be performed within a few seconds. The application of such procedures to the investigation of nuclides far from the line of β-stability is illustrated by a few examples. (16 refs).
Directory of Open Access Journals (Sweden)
Elham Ghandi
2016-09-01
Full Text Available The free vibration of frame structures has usually been studied in the literature without considering the effect of axial loads. In this paper, the continuous system method is employed to investigate this effect on the free flexural and torsional vibration of two- and three-dimensional symmetric frames. In the continuous system method, commonly used for the approximate analysis of buildings, the structure is replaced by an equivalent beam that matches the dominant characteristics of the structure. Accordingly, the natural frequencies of symmetric frame structures are obtained by solving the governing differential equation of the equivalent beam, whose stiffness and mass are assumed to be uniformly distributed along the length. The corresponding axial load applied to the equivalent beam is calculated from the total weight and the number of stories of the building. A numerical example is presented to show the simplicity and efficiency of the proposed solution.
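The effect of axial load on the equivalent beam's fundamental frequency can be sketched with the classical single-mode approximation ω ≈ ω₀·√(1 − P/P_cr), where ω₀ is the frequency without axial load and P_cr the Euler buckling load. This is a textbook approximation, not the paper's formulation, and every numeric property below is invented for illustration.

```python
import math

# equivalent cantilever (clamped-free) beam standing in for a building core;
# all properties are assumed illustrative values, not from the paper
E, I = 30e9, 2.0       # Pa, m^4: modulus and second moment of area
m, L = 5.0e4, 60.0     # kg/m, m: mass per unit length and height

lam1 = 1.8751          # first clamped-free eigenvalue of the beam equation
omega0 = lam1 ** 2 * math.sqrt(E * I / (m * L ** 4))   # rad/s, no axial load
P_cr = math.pi ** 2 * E * I / (2.0 * L) ** 2           # Euler buckling load
P = 0.3 * P_cr                                         # assumed gravity load

# single-mode approximation: compression lowers the natural frequency
omega = omega0 * math.sqrt(1.0 - P / P_cr)
print(round(omega0, 3), round(omega, 3))
```

As P approaches P_cr the frequency tends to zero, which is why frequency measurements are sometimes used to infer how close a structure is to buckling.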
Przewłócki, Jarosław; Górski, Jarosław; Świdziński, Waldemar
2016-12-01
The paper deals with the probabilistic analysis of the settlement of a non-cohesive soil layer subjected to cyclic loading. Originally, the settlement assessment is based on a deterministic compaction model, which requires integration of a set of differential equations. However, with the use of the Bessel functions, the settlement of a soil stratum can be calculated by a simplified algorithm. The compaction model parameters were determined for soil samples taken from subsoil near the Izmit Bay, Turkey. The computations were performed for various sets of random variables. The point estimate method was applied, and the results were verified by the Monte Carlo method. The outcome leads to a conclusion that can be useful in the prediction of soil settlement under seismic loading.
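The point estimate method used above, in Rosenblueth's two-point form, evaluates the model at the 2ⁿ combinations of mean ± standard deviation of the n random inputs, weights the results equally, and forms output moments; a Monte Carlo run then verifies them. The settlement function below is an invented smooth stand-in for the paper's compaction model, and the input distributions are assumed independent Gaussians.

```python
import numpy as np

# illustrative stand-in for the settlement model: s = g(a, b), a smooth
# function of two random soil parameters (not the paper's actual model)
def g(a, b):
    return 0.05 * a ** 2 / b

mu = np.array([2.0, 0.5])      # means of the two random variables
sigma = np.array([0.2, 0.05])  # standard deviations (assumed independent)

# Rosenblueth's two-point estimate: 2^2 = 4 evaluations at mu +/- sigma,
# each with weight 1/4, then moments of the output
pts = [(mu[0] + s0 * sigma[0], mu[1] + s1 * sigma[1])
       for s0 in (-1, 1) for s1 in (-1, 1)]
vals = np.array([g(a, b) for a, b in pts])
pem_mean = vals.mean()
pem_std = np.sqrt((vals ** 2).mean() - pem_mean ** 2)

# Monte Carlo verification, as in the paper's validation step
rng = np.random.default_rng(0)
a = rng.normal(mu[0], sigma[0], 200_000)
b = rng.normal(mu[1], sigma[1], 200_000)
mc = g(a, b)
print(round(pem_mean, 3), round(mc.mean(), 3))  # the two means agree closely
```

The appeal of the point estimate method is exactly this economy: 2ⁿ model runs instead of the tens of thousands a Monte Carlo study needs.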
A method for the solvent extraction of low-boiling-point plant volatiles.
Xu, Ning; Gruber, Margaret; Westcott, Neil; Soroka, Julie; Parkin, Isobel; Hegedus, Dwayne
2005-01-01
A new method has been developed for the extraction of volatiles from plant materials and tested on seedling tissue and mature leaves of Arabidopsis thaliana, pine needles and commercial mixtures of plant volatiles. Volatiles were extracted with n-pentane and then subjected to quick distillation at a moderate temperature. Under these conditions, compounds such as pigments, waxes and non-volatile compounds remained undistilled, while short-chain volatile compounds were distilled into a receiving flask using a high-efficiency condenser. Removal of the n-pentane and concentration of the volatiles in the receiving flask was carried out using a Vigreux column condenser prior to GC-MS. The method is ideal for the rapid extraction of low-boiling-point volatiles from small amounts of plant material, such as is required when conducting metabolic profiling or defining biological properties of volatile components from large numbers of mutant lines.
A Corner-Point-Grid-Based Voxelization Method for Complex Geological Structure Model with Folds
Chen, Qiyu; Mariethoz, Gregoire; Liu, Gang
2017-04-01
3D voxelization is the foundation of geological property modeling, and is also an effective approach to realizing the 3D visualization of heterogeneous attributes in geological structures. The corner-point grid is a representative voxel data model and a structured grid type that is widely applied at present. When subdividing a complex geological structure model with folds, the structural morphology and bedding features must be fully considered so that the generated voxels preserve the original morphology and can depict the detailed bedding features and the spatial heterogeneity of the internal attributes. To address the shortcomings of existing techniques, this work puts forward a corner-point-grid-based voxelization method for complex geological structure models with folds. We have realized a fast conversion from the 3D geological structure model to a fine voxel model according to the isocline rule in Ramsay's fold classification. The voxel model conforms to the spatial features of folds, pinch-outs and other complex geological structures, and the voxels of the laminas inside a fold accord with the results of geological sedimentation and tectonic movement. This provides a carrier and model foundation for subsequent attribute assignment as well as quantitative analysis and evaluation based on the spatial voxels. Finally, we use examples, and a comparison between these examples and Ramsay's description of isoclines, to discuss the effectiveness and advantages of the proposed method for the voxelization of 3D geological structure models with folds based on corner-point grids.
Distance-based microfluidic quantitative detection methods for point-of-care testing.
Tian, Tian; Li, Jiuxing; Song, Yanling; Zhou, Leiji; Zhu, Zhi; Yang, Chaoyong James
2016-04-07
Equipment-free devices with quantitative readout are of great significance to point-of-care testing (POCT), which provides real-time readout to users and is especially important in low-resource settings. Among various equipment-free approaches, distance-based visual quantitative detection methods rely on reading the visual signal length for corresponding target concentrations, thus eliminating the need for sophisticated instruments. The distance-based methods are low-cost, user-friendly and can be integrated into portable analytical devices. Moreover, such methods enable quantitative detection of various targets by the naked eye. In this review, we first introduce the concept and history of distance-based visual quantitative detection methods. Then, we summarize the main methods for translation of molecular signals to distance-based readout and discuss different microfluidic platforms (glass, PDMS, paper and thread) in terms of applications in biomedical diagnostics, food safety monitoring, and environmental analysis. Finally, the potential and future perspectives are discussed.
The shooting method and multiple solutions of two/multi-point BVPs of second-order ODE
Directory of Open Access Journals (Sweden)
Man Kam Kwong
2006-06-01
Full Text Available Within the last decade, there has been growing interest in the study of multiple solutions of two- and multi-point boundary value problems of nonlinear ordinary differential equations as fixed points of a cone mapping. Undeniably many good results have emerged. The purpose of this paper is to point out that, in the special case of second-order equations, the shooting method can be an effective tool, sometimes yielding better results than those obtainable via fixed point techniques.
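A concrete illustration of the paper's point: for the classical Bratu two-point BVP y'' + λeʸ = 0, y(0) = y(1) = 0 (a standard example, not one taken from the paper), which is known to admit two solutions for λ below about 3.51, shooting on the initial slope s = y'(0) and bisecting the sign changes of y(1; s) recovers both solutions for λ = 1.

```python
import math

LAM = 1.0  # Bratu problem: y'' + LAM * exp(y) = 0, y(0) = y(1) = 0

def integrate(s, n=1000):
    # RK4 on the first-order system (y, v), shooting with initial slope y'(0) = s
    h = 1.0 / n
    y, v = 0.0, s
    f = lambda y, v: (v, -LAM * math.exp(y))
    for _ in range(n):
        k1 = f(y, v)
        k2 = f(y + h/2*k1[0], v + h/2*k1[1])
        k3 = f(y + h/2*k2[0], v + h/2*k2[1])
        k4 = f(y + h*k3[0], v + h*k3[1])
        y += h/6 * (k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
        v += h/6 * (k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
    return y  # y(1; s): we seek roots of this function of s

def bisect(lo, hi, tol=1e-9):
    flo = integrate(lo)
    while hi - lo > tol:
        mid = (lo + hi) / 2
        fmid = integrate(mid)
        if fmid * flo > 0:
            lo, flo = mid, fmid
        else:
            hi = mid
    return (lo + hi) / 2

# scan the slope axis for sign changes of y(1; s): each bracket is one solution
brackets, prev_s, prev_f = [], 0.0, integrate(0.0)
s = 0.5
while s <= 12.0:
    fs = integrate(s)
    if prev_f * fs < 0:
        brackets.append((prev_s, s))
    prev_s, prev_f = s, fs
    s += 0.5

slopes = [bisect(a, b) for a, b in brackets]
print(len(slopes))  # two distinct initial slopes, hence two BVP solutions
```

A fixed-point iteration tuned to one cone would typically deliver only one of these solutions; the slope scan makes the multiplicity visible directly, which is the advantage the paper argues for.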
Inversion of Gravity Anomalies Using Primal-Dual Interior Point Methods
Directory of Open Access Journals (Sweden)
Aaron A. Velasco
2016-06-01
Full Text Available Structural inversion of gravity datasets based on the use of density anomalies to derive robust images of the subsurface (delineating lithologies and their boundaries) constitutes a fundamental non-invasive tool for geological exploration. The use of experimental techniques in geophysics to estimate and interpret differences in the substructure based on its density properties has proven efficient; however, the inherent non-uniqueness associated with most geophysical datasets makes this the ideal scenario for the use of recently developed robust constrained optimization techniques. We present a constrained optimization approach for a least squares inversion problem aimed at characterizing 2-dimensional Earth density structure models based on Bouguer gravity anomalies. The proposed formulation is solved with a Primal-Dual Interior-Point method including equality and inequality physical and structural constraints. We validate our results using synthetic density crustal structure models of varying complexity and illustrate the behavior of the algorithm using different initial density structure models and increasing noise levels in the observations. Based on these implementations, we conclude that the algorithm using Primal-Dual Interior-Point methods is robust, and its results always honor the geophysical constraints. Among the advantages of using this approach for the structural inversion of gravity data are the incorporation of a priori information related to the model parameters (coming from actual physical properties of the subsurface) and the reduction of the solution space contingent on these boundary conditions.
Bai, Bing
2012-03-01
There has been a lot of work on total variation (TV) regularized tomographic image reconstruction recently. Many of them use gradient-based optimization algorithms with a differentiable approximation of the TV functional. In this paper we apply TV regularization in Positron Emission Tomography (PET) image reconstruction. We reconstruct the PET image in a Bayesian framework, using Poisson noise model and TV prior functional. The original optimization problem is transformed to an equivalent problem with inequality constraints by adding auxiliary variables. Then we use an interior point method with logarithmic barrier functions to solve the constrained optimization problem. In this method, a series of points approaching the solution from inside the feasible region are found by solving a sequence of subproblems characterized by an increasing positive parameter. We use preconditioned conjugate gradient (PCG) algorithm to solve the subproblems directly. The nonnegativity constraint is enforced by bend line search. The exact expression of the TV functional is used in our calculations. Simulation results show that the algorithm converges fast and the convergence is insensitive to the values of the regularization and reconstruction parameters.
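The logarithmic-barrier idea described above can be shown on a one-variable toy problem (not the PET objective): each barrier subproblem is solved by damped Newton iterations that stay strictly inside the feasible region, and as the barrier parameter μ decreases, the subproblem minimizers trace the central path toward the constrained solution on the boundary.

```python
# minimize f(x) = (x + 1)^2 subject to x >= 0 (solution x* = 0, on the boundary)
# barrier subproblem: phi_mu(x) = (x + 1)^2 - mu * ln(x), solved for decreasing mu
import math

def newton_barrier(mu, x0, iters=50):
    x = x0
    for _ in range(iters):
        g = 2.0 * (x + 1.0) - mu / x       # phi'
        h = 2.0 + mu / x ** 2              # phi'' > 0 (strictly convex)
        step = g / h
        # damped step keeps the iterate strictly inside the feasible region
        while x - step <= 0.0:
            step *= 0.5
        x -= step
    return x

x, path = 1.0, []
for mu in [1.0, 0.1, 0.01, 0.001, 1e-4]:   # central path: mu -> 0
    x = newton_barrier(mu, x)              # warm-start from the previous solution
    path.append(x)

print([round(v, 4) for v in path])  # iterates approach the constrained minimum 0
```

The warm start across decreasing μ is the same device the paper exploits, there with a PCG inner solver in place of the scalar Newton step.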
International Nuclear Information System (INIS)
Ghasemi, Jahan B.; Hashemi, Beshare; Shamsipur, Mojtaba
2012-01-01
A cloud point extraction (CPE) process using the nonionic surfactant Triton X-114 for the simultaneous extraction and spectrophotometric determination of uranium and zirconium from aqueous solution using partial least squares (PLS) regression is investigated. The method is based on the complexation reaction of these cations with Alizarin Red S (ARS) and subsequent micelle-mediated extraction of the products. The chemical parameters affecting the separation phase and detection process were studied and optimized. Under the optimum experimental conditions (pH 5.2, Triton X-114 0.20%, equilibrium time 10 min and cloud point temperature 45 °C), calibration graphs were linear in the range 0.01-3 mg L⁻¹, with detection limits of 2.0 and 0.80 μg L⁻¹ for U and Zr, respectively. The experimental calibration set was composed of 16 sample solutions using an orthogonal design for two-component mixtures. The root mean square errors of prediction (RMSEP) for U and Zr were 0.0907 and 0.1117, respectively. The interference effect of some anions and cations was also tested. The method was applied to the simultaneous determination of U and Zr in water samples.
Development of a cloud-point extraction method for copper and nickel determination in food samples
International Nuclear Information System (INIS)
Azevedo Lemos, Valfredo; Selis Santos, Moacy; Teixeira David, Graciete; Vasconcelos Maciel, Mardson; Almeida Bezerra, Marcos de
2008-01-01
A new, simple and versatile cloud-point extraction (CPE) methodology has been developed for the separation and preconcentration of copper and nickel. The metals in the initial aqueous solution were complexed with 2-(2'-benzothiazolylazo)-5-(N,N-diethyl)aminophenol (BDAP), and Triton X-114 was added as surfactant. Dilution of the surfactant-rich phase with acidified methanol was performed after phase separation, and the copper and nickel contents were measured by flame atomic absorption spectrometry. The variables affecting the cloud-point extraction were optimized using a Box-Behnken design. Under the optimum experimental conditions, enrichment factors of 29 and 25 were achieved for copper and nickel, respectively. The accuracy of the method was evaluated and confirmed by analysis of the following certified reference materials: Apple Leaves, Spinach Leaves and Tomato Leaves. The limits of detection for solid sample analysis were 0.1 μg g⁻¹ (Cu) and 0.4 μg g⁻¹ (Ni). The precision for 10 replicate measurements of 75 μg L⁻¹ Cu or Ni was 6.4 and 1.0, respectively. The method has been successfully applied to the analysis of food samples.
Slicing Method for curved façade and window extraction from point clouds
Iman Zolanvari, S. M.; Laefer, Debra F.
2016-09-01
Laser scanning technology is a fast and reliable method to survey structures. However, the automatic conversion of such data into solid models for computation remains a major challenge, especially where non-rectilinear features are present. Since openings and the overall dimensions of a building are the most critical elements in computational models for structural analysis, this article introduces the Slicing Method as a new, computationally efficient method for extracting overall façade and window boundary points and reconstructing a façade into a geometry compatible with computational modelling. After finding a principal plane, the technique slices a façade into limited portions, with each slice representing a unique, imaginary section passing through the building. This is done along the façade's principal axes to segregate window and door openings from the structural portions of the load-bearing masonry walls. The method detects each opening area's boundaries, as well as the overall boundary of the façade, in part by using a one-dimensional projection to accelerate processing. Slice densities were optimised at 14.3 slices per vertical metre of building and 25 slices per horizontal metre, irrespective of building configuration or complexity. The proposed procedure was validated by its application to three highly decorative, historic brick buildings. Accuracy in excess of 93% was achieved with no manual intervention on highly complex buildings, and nearly 100% on simple ones. Furthermore, computational times were less than 3 sec for data sets of up to 2.6 million points, whereas similar existing approaches required more than 16 hr for such datasets.
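The core of the one-dimensional projection step can be sketched as a density histogram along a façade axis, with openings appearing as runs of empty bins. The synthetic wall, window span, and bin width below are illustrative assumptions, not the paper's parameters.

```python
# 1-D projection sketch of the Slicing Method's core idea: project a slice's
# points onto a facade axis and detect openings as gaps in point density.
# The synthetic wall and bin size are made up for illustration.

def find_gaps(xs, bin_width=0.25, extent=10.0):
    """Histogram point x-coordinates and return empty-bin runs as openings."""
    n_bins = int(extent / bin_width)
    counts = [0] * n_bins
    for x in xs:
        b = min(int(x / bin_width), n_bins - 1)
        counts[b] += 1
    gaps, start = [], None
    for i, c in enumerate(counts):
        if c == 0 and start is None:
            start = i                      # gap begins
        elif c != 0 and start is not None:
            gaps.append((start * bin_width, i * bin_width))
            start = None                   # gap ends
    if start is not None:
        gaps.append((start * bin_width, n_bins * bin_width))
    return gaps

# Synthetic wall slice: points everywhere except a window between x = 3 and 5.
points = [i * 0.1 for i in range(101) if not (3.0 <= i * 0.1 < 5.0)]
print(find_gaps(points))  # -> [(3.0, 5.0)]
```

Running the same detector per slice, along both principal axes, yields the rectangular opening boundaries that are then assembled into the façade geometry.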
Measuring global oil trade dependencies: An application of the point-wise mutual information method
International Nuclear Information System (INIS)
Kharrazi, Ali; Fath, Brian D.
2016-01-01
Oil trade is one of the most vital networks in the global economy. In this paper, we analyze the 1998–2012 oil trade networks using the point-wise mutual information (PMI) method and determine pairwise trade preferences and dependencies. Using examples of the USA's trade partners, this research demonstrates the usefulness of the PMI method as an additional methodological tool for evaluating the outcomes of countries' decisions to engage with preferred trading partners. A positive PMI value indicates trade preference, where trade is larger than would be expected. For example, in 2012 the USA imported 2,548.7 kbpd of oil from Canada against an expected 358.5 kbpd. Conversely, a negative PMI value indicates trade dis-preference, where the amount of trade is smaller than would be expected. For example, the 15-year average of the annual PMI between Saudi Arabia and the USA is −0.130, and between Russia and the USA −1.596. We posit that the three primary reasons for discrepancies between actual and neutral-model trade relate to position, price, and politics. The PMI can quantify the political success or failure of trade preferences and can more accurately account for temporal variation of interdependencies. - Highlights: • We analyzed global oil trade networks using the point-wise mutual information method. • We identified position, price, & politics as drivers of oil trade preference. • The PMI method is useful in research on complex trade networks and dependency theory. • A time-series analysis of PMI can track dependencies & evaluate policy decisions.
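The PMI of a trade link can be sketched directly from its definition, PMI(i,j) = log[p(i,j) / (p(i)·p(j))], where the marginals come from total exports and imports. The toy trade volumes below are illustrative, not the paper's data.

```python
import math

# Hypothetical oil-trade volumes (kbpd): keys are (exporter, importer).
# Names and numbers are illustrative stand-ins, not the paper's dataset.
trade = {
    ("Canada", "USA"): 2548.7,
    ("Canada", "China"): 200.0,
    ("Saudi Arabia", "USA"): 1361.0,
    ("Saudi Arabia", "China"): 1100.0,
}

total = sum(trade.values())

def pmi(exporter, importer):
    """Point-wise mutual information of one trade link.

    PMI = log( p(i,j) / (p(i) * p(j)) ): positive when the pair trades more
    than a neutral (independence) model would expect, negative when less.
    """
    p_ij = trade[(exporter, importer)] / total
    p_i = sum(v for (e, _), v in trade.items() if e == exporter) / total
    p_j = sum(v for (_, m), v in trade.items() if m == importer) / total
    return math.log(p_ij / (p_i * p_j))

print(round(pmi("Canada", "USA"), 3))
```

In this toy network Canada-USA comes out positive (preference) and Saudi Arabia-USA negative (dis-preference), mirroring the sign convention used in the paper.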
Variation method for optimization of Raman fiber amplifier pumped by continuous-spectrum radiation
International Nuclear Information System (INIS)
Ghasempour Ardekani, A.; Bahrampour, A. R.; Feizpour, A.
2007-01-01
In Raman fiber amplifiers, reducing the gain ripple versus frequency is of great importance. In this article, the gain ripple is minimized using the variational method and a continuous-spectrum pump. It is shown that for a 40 km line the average gain is 1.3 dB and the gain ripple is 0.12 dB, which is lower than the most recently published data.
Continuous shear - a method for studying material elements passing a stationary shear plane
DEFF Research Database (Denmark)
Lindegren, Maria; Wiwe, Birgitte; Wanheim, Tarras
2003-01-01
circumferential groove. Normally shear in metal forming processes is of another nature, namely where the material elements move through a stationary shear zone, often of small width. In this paper a method enabling the simulation of this situation is presented. A tool for continuous shear has been manufactured...... and tested with AlMgSi1 and copper. The sheared material has thereafter been tested in plane strain compression with different orientations concerning the angle between the shear plane and the compression direction....
The Concept of Method for Determining the Minimum Level of Airport Business Continuity
Directory of Open Access Journals (Sweden)
Kozłowski Michał
2016-07-01
The paper presents the problem of determining the minimum acceptable level of products and services for airport business continuity. A study of the legal requirements and operational needs was conducted, the components of a BCMS (ISO 22301) were characterized, and the relationship between measures of reliability and capacity in the airport BCMS was determined. On this basis, a concept for using the reliability gamma-percent resource measure and RCM methods in the airport BCMS is presented.
Directory of Open Access Journals (Sweden)
Meng Lu
2013-01-01
Thickness of tundish cover flux (TCF) plays an important role in the continuous casting (CC) steelmaking process. The traditional means of measuring TCF thickness are the single- and double-wire methods, which suffer from several problems such as risks to personal safety, sensitivity to the operator, and poor repeatability. To solve these problems, we designed and built a dedicated instrument and present a novel method to measure TCF thickness. The instrument is composed of a measurement bar, a mechanical device, a high-definition industrial camera, a Siemens S7-200 programmable logic controller (PLC), and a computer. Our measurement method is based on computer vision algorithms, including image denoising, monocular range measurement, the scale-invariant feature transform (SIFT), and image gray-gradient detection. Using this instrument and method, images in the CC tundish are collected by the camera and transferred to the computer for image processing. Experiments showed that the instrument and method worked well on site at steel plants, can accurately measure the thickness of TCF, and overcome the disadvantages of the traditional measurement methods, or could even replace them.
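The monocular range measurement mentioned above reduces, for a pinhole camera and an object of known size, to a similar-triangles relation. The focal length and reference-mark dimensions below are invented numbers, not the instrument's calibration.

```python
# Pinhole-camera monocular range sketch: distance follows from similar
# triangles once a feature of known physical size is identified in the image
# (e.g. via SIFT matching). All numbers here are illustrative assumptions.

def range_from_pixels(focal_px, real_width_mm, width_px):
    """Distance (mm) to an object of known width from its apparent pixel width:
    distance = focal_length_px * real_width / apparent_width_px."""
    return focal_px * real_width_mm / width_px

# A 50 mm reference mark on the measurement bar imaged at 125 px by a camera
# with a 2000 px focal length lies 800 mm from the camera.
print(range_from_pixels(2000, 50.0, 125))  # -> 800.0
```

Differencing two such ranges, to the flux surface and to the steel surface, would give a thickness estimate in the spirit of the instrument described.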
Aerodynamic Optimization Based on Continuous Adjoint Method for a Flexible Wing
Directory of Open Access Journals (Sweden)
Zhaoke Xu
2016-01-01
Aerodynamic optimization based on the continuous adjoint method for a flexible wing is developed using FORTRAN 90 in the present work. Aerostructural analysis is performed on the basis of high-fidelity models, with the Euler equations on the aerodynamic side and a linear quadrilateral shell element model on the structural side. This shell element can deal with both thin- and thick-shell problems with intersections, making it suitable for the wing structural model, which consists of two spars, 20 ribs, and skin. The continuous adjoint formulations based on the Euler equations and unstructured meshes are derived and used in this work. A sequential quadratic programming method is adopted to search for the optimal solution using the gradients from the continuous adjoint method. The flow charts of rigid and flexible optimization are presented and compared. The objective is to minimize the drag coefficient while maintaining the lift coefficient for both a rigid and a flexible wing. A comparison between the aerostructural analysis results of rigid and flexible optimization demonstrates that it is necessary to include the effect of aeroelasticity in the optimization design of a wing.
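The gradient-driven, lift-constrained loop can be sketched with a toy stand-in: a quadratic "drag" surrogate minimized on a linear "lift" constraint by projected gradient steps, where an analytic gradient plays the role the adjoint solution plays for the real PDE problem. The models and step size are assumptions, and the simple projection loop stands in for the paper's SQP solver.

```python
# Toy constrained-minimization sketch in the spirit of "minimize drag subject
# to a lift constraint". The quadratic drag model and linear lift model are
# illustrative stand-ins for the Euler-equation aerostructural analysis.

def drag(x):            # surrogate drag coefficient
    return x[0] ** 2 + x[1] ** 2

def grad_drag(x):       # the gradient a continuous adjoint solve would supply
    return [2 * x[0], 2 * x[1]]

a, b = [1.0, 2.0], 1.0  # lift constraint: a . x = b

def project(x):
    """Project x back onto the lift-constraint plane a . x = b."""
    r = (a[0] * x[0] + a[1] * x[1] - b) / (a[0] ** 2 + a[1] ** 2)
    return [x[0] - r * a[0], x[1] - r * a[1]]

x = project([1.0, 1.0])
for _ in range(200):
    g = grad_drag(x)
    x = project([x[0] - 0.1 * g[0], x[1] - 0.1 * g[1]])
print([round(v, 3) for v in x])  # converges to the KKT point [0.2, 0.4]
```

The Lagrange-condition solution of this toy problem is x = (0.2, 0.4), which the loop recovers; an SQP method would reach it in far fewer, more expensive iterations.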
Hwang, Jae Joon; Kim, Kee-Deog; Park, Hyok; Park, Chang Seo; Jeong, Ho-Gul
2014-01-01
Superimposition has been used as a method to evaluate the changes of orthodontic or orthopedic treatment in the dental field. With the introduction of cone beam CT (CBCT), evaluating three-dimensional changes after treatment became possible by superimposition. 4 point plane orientation is one of the simplest ways to achieve superimposition of three-dimensional images. To find the factors influencing the superimposition error of cephalometric landmarks by the 4 point plane orientation method, and to evaluate the reproducibility of cephalometric landmarks for analyzing superimposition error, 20 patients were analyzed who had normal skeletal and occlusal relationships and underwent CBCT for diagnosis of temporomandibular disorder. The nasion, sella turcica, basion and the midpoint between the left and right most posterior points of the lesser wings of the sphenoid bone were used to define a three-dimensional (3D) anatomical reference coordinate system. Another 15 reference cephalometric points were also determined three times in the same image. The reorientation error of each landmark could be explained substantially (23%) by a linear regression model consisting of three factors describing the position of each landmark relative to the reference axes and the locating error. The 4 point plane orientation system may produce a reorientation error that varies with the perpendicular distance between the landmark and the x-axis; the reorientation error also increases as the locating error and the shift of the reference axes viewed from each landmark increase. Therefore, in order to reduce the reorientation error, the accuracy of all landmarks, including the reference points, is important. Construction of the regression model using reference points of greater precision is required for the clinical application of this model.
The Application of the Method of Continuous Casting for Manufacturing of Welding Wire AMg6
International Nuclear Information System (INIS)
Azhazha, V.M.; Sverdlov, V.Ya.; Kondratov, A.A.; Rudycheva, T.Yu.
2007-01-01
The method of manufacturing semifinished items of highly alloyed aluminum, silver and copper alloys has been investigated on the basis of the continuous casting method. The sample of aluminum alloy AMg6 consists of small grains with a cross-sectional dimension of ∼15 μm that are stretched in the direction of the longitudinal axis of the sample. Such a microstructure is favourable for plastic deformation of the sample. Welding wire which meets the demands of the standards for commercial welding wires of this grade has been produced by drawing from the sample.
Continuous method for refining sodium. [for use in LMFBR type reactors
Energy Technology Data Exchange (ETDEWEB)
Batoux, B; Laurent-Atthalin, A; Salmon, M
1973-11-16
The invention relates to a refining method by which commercial sodium yields a high-purity sodium with, in particular, a very small calcium content. The method consists in continuously feeding a predetermined amount of sodium peroxide into a sodium stream, mixing and causing said sodium peroxide to react with the sodium at an appropriate temperature, and, finally, separating the reaction products from the sodium by decanting and filtering. The thus-obtained high-purity sodium meets the requirements of the atomic industry, in particular in view of its possible use as a coolant in nuclear reactors of the ''breeder'' type.
Damage detection and locating using tone burst and continuous excitation modulation method
Li, Zheng; Wang, Zhi; Xiao, Li; Qu, Wenzhong
2014-03-01
Among structural health monitoring techniques, nonlinear ultrasonic spectroscopy methods are found to be effective diagnostic approaches for detecting nonlinear damage such as fatigue cracks, owing to their sensitivity to incipient structural changes. In this paper, a nonlinear ultrasonic modulation method was developed to detect and locate a fatigue crack on an aluminum plate. The method differs from the nonlinear wave modulation method, which exploits the modulation between a low-frequency vibration and a high-frequency ultrasonic wave; instead, it exploits the modulation between a tone burst and a high-frequency ultrasonic wave. In the experiment, a Hanning-window-modulated sinusoidal tone burst and a continuous sinusoidal excitation were simultaneously imposed on a PZT array bonded to the surface of an aluminum plate. The modulation of the tone burst and the continuous sinusoidal excitation was observed in different actuator-sensor paths, indicating the presence and location of the fatigue crack. The experimental results show that the proposed method is capable of detecting and locating the fatigue crack successfully.
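The modulation signature this method looks for is the appearance of spectral sidebands at the sum and difference of the two excitation frequencies, which only a nonlinear (e.g. breathing-crack) response produces. The frequencies, amplitudes, and mixing coefficient below are illustrative assumptions, not the experiment's values.

```python
import math

# Sideband-detection sketch for nonlinear modulation: a crack mixes a
# low-frequency excitation f_lo with a high-frequency probe f_hi, producing
# sidebands at f_hi +/- f_lo. All signal parameters are made up.

fs, n = 1000, 1000          # 1 kHz sampling, 1 s record -> 1 Hz DFT bins
f_lo, f_hi = 10, 200

def dft_mag(x, f):
    """Magnitude of the DFT bin at integer frequency f (Hz), normalized by n."""
    re = sum(x[k] * math.cos(2 * math.pi * f * k / n) for k in range(n))
    im = -sum(x[k] * math.sin(2 * math.pi * f * k / n) for k in range(n))
    return math.hypot(re, im) / n

def response(damaged):
    sig = []
    for k in range(n):
        t = k / fs
        s = math.sin(2 * math.pi * f_lo * t) + math.sin(2 * math.pi * f_hi * t)
        if damaged:  # nonlinear mixing term, as produced by a breathing crack
            s += 0.2 * math.sin(2 * math.pi * f_lo * t) * math.sin(2 * math.pi * f_hi * t)
        sig.append(s)
    return sig

for state in (False, True):
    print(state, round(dft_mag(response(state), f_hi + f_lo), 3))
```

The intact response has no energy at f_hi + f_lo, while the damaged one shows a clear sideband; comparing sideband levels across actuator-sensor paths is what localizes the crack.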
Zhu, Yanqun; Zhou, Jinsong; He, Sheng; Cai, Xiaoshu; Hu, Changxin; Zheng, Jianming; Zhang, Le; Luo, Zhongyang; Cen, Kefa
2007-06-01
Mercury emission control is attracting increasing attention, and accurate measurement of mercury speciation is the first step. Because the OH method (the accepted reference method) cannot provide real-time data and requires about two weeks to obtain results, on-line mercury continuous emission monitors (Hg-CEMs) are needed. Firstly, gaseous elemental and oxidized mercury were measured using both the OH and CEM methods under normal operating conditions of a PC boiler downstream of the ESP; the results of the two methods show good consistency. Secondly, across the ESP, gaseous oxidized mercury decreased slightly and particulate mercury was reduced somewhat, whereas elemental mercury behaved in the opposite way. In addition, the WFGD system achieved a gaseous oxidized mercury removal of 53.4%, while the removals of gaseous overall mercury and elemental mercury were 37.1% and 22.1%, respectively.
International Nuclear Information System (INIS)
Liu, W; Sawant, A; Ruan, D
2016-01-01
Purpose: Surface photogrammetry (e.g. VisionRT, C-Rad) provides a noninvasive way to obtain high-frequency measurement for patient motion monitoring in radiotherapy. This work aims to develop a real-time surface reconstruction method on the acquired point clouds, whose acquisitions are subject to noise and missing measurements. In contrast to existing surface reconstruction methods that are usually computationally expensive, the proposed method reconstructs continuous surfaces with comparable accuracy in real-time. Methods: The key idea in our method is to solve and propagate a sparse linear relationship from the point cloud (measurement) manifold to the surface (reconstruction) manifold, taking advantage of the similarity in local geometric topology in both manifolds. With consistent point cloud acquisition, we propose a sparse regression (SR) model to directly approximate the target point cloud as a sparse linear combination from the training set, building the point correspondences by the iterative closest point (ICP) method. To accommodate changing noise levels and/or presence of inconsistent occlusions, we further propose a modified sparse regression (MSR) model to account for the large and sparse error built by ICP, with a Laplacian prior. We evaluated our method on both clinical acquired point clouds under consistent conditions and simulated point clouds with inconsistent occlusions. The reconstruction accuracy was evaluated w.r.t. root-mean-squared-error, by comparing the reconstructed surfaces against those from the variational reconstruction method. Results: On clinical point clouds, both the SR and MSR models achieved sub-millimeter accuracy, with mean reconstruction time reduced from 82.23 seconds to 0.52 seconds and 0.94 seconds, respectively. On simulated point cloud with inconsistent occlusions, the MSR model has demonstrated its advantage in achieving consistent performance despite the introduced occlusions. Conclusion: We have developed a real
Directory of Open Access Journals (Sweden)
Ssennoga Twaha
2017-12-01
This study proposes and implements maximum power point tracking (MPPT) control of a thermoelectric generation (TEG) system using an extremum seeking control (ESC) algorithm. The MPPT is applied to guarantee maximum power extraction from the TEG system. The work has been carried out through modelling of the thermoelectric generator/DC-DC converter system in Matlab/Simulink. The effectiveness of the ESC technique has been assessed by comparing the results with those of the Perturb and Observe (P&O) MPPT method under the same operating conditions. Results indicate that the ESC MPPT method extracts more power than the P&O technique, with the output power of the ESC technique higher than that of P&O by 0.47 W, or 6.1%, at a hot-side temperature of 200 °C. It is also noted that the ESC MPPT based model is almost fourfold faster than the P&O method. This is attributed to the smaller MPPT circuit of ESC compared to that of P&O; hence we conclude that the ESC MPPT method outperforms the P&O technique.
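The P&O baseline used for comparison can be sketched in a few lines: perturb the operating voltage, observe the power, and reverse direction when power drops. The quadratic power-voltage curve and the step size are assumptions for illustration, not the TEG model of the study.

```python
# Minimal perturb-and-observe (P&O) MPPT sketch on a toy power curve.
# The quadratic P(V) model and step size are illustrative assumptions.

def power(v):
    # Toy power-voltage curve with its maximum (25 W) at v = 5.0 V
    return -(v - 5.0) ** 2 + 25.0

def perturb_and_observe(v0=1.0, step=0.1, iters=200):
    v, p = v0, power(v0)
    direction = 1
    for _ in range(iters):
        v_new = v + direction * step
        p_new = power(v_new)
        if p_new < p:            # power dropped: reverse the perturbation
            direction = -direction
        v, p = v_new, p_new
    return v, p

v_mp, p_mp = perturb_and_observe()
print(round(v_mp, 2), round(p_mp, 2))
```

Note the characteristic P&O behaviour the study exploits in its comparison: the tracker never settles, but oscillates around the maximum power point within one step of the optimum, which is one reason ESC can extract slightly more power.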
Evaluation of factor for one-point venous blood sampling method based on the causality model
International Nuclear Information System (INIS)
Matsutomo, Norikazu; Onishi, Hideo; Kobara, Kouichi; Sasaki, Fumie; Watanabe, Haruo; Nagaki, Akio; Mimura, Hiroaki
2009-01-01
The one-point venous blood sampling method (Mimura, et al.) can evaluate the regional cerebral blood flow (rCBF) value with a high degree of accuracy. However, the method involves a complex technique because it requires a venous blood octanol value, and its accuracy is affected by the factors of the input function. Therefore, we evaluated the factors used for the input function in order to determine an accurate input function and simplify the technique. Input functions were created using the time-dependent brain counts at 5, 15, and 25 minutes after administration, as well as an input function in which the objective variable is the arterial octanol value, so as to exclude the venous blood octanol value. The correlation between these functions and the rCBF value obtained by the microsphere (MS) method was then evaluated. Creation of a high-accuracy input function and simplification of the technique proved possible. The rCBF value obtained by the input function whose factor is the time-dependent brain count at 5 minutes after administration and whose objective variable is the arterial octanol value showed a high correlation with the MS method (y=0.899x+4.653, r=0.842). (author)
Li, Xi-Bing; Wang, Ze-Wei; Dong, Long-Jun
2016-01-01
Microseismic monitoring systems using local location techniques tend to be timely, automatic and stable. One basic requirement of these systems is the automatic picking of arrival times. However, arrival times generated by automated techniques always contain large picking errors (LPEs), which may make the location solution unreliable and cause the integrated system to be unstable. To overcome the LPE issue, we propose the virtual field optimization method (VFOM) for locating single-point sources. In contrast to existing approaches, the VFOM optimizes a continuous, virtually established objective function to search the space for the common intersection of the hyperboloids determined by sensor pairs, rather than minimizing the residual between model-calculated and measured arrivals. The results of numerical examples and in-situ blasts show that the VFOM can obtain more precise and stable solutions than traditional methods when the input data contain LPEs. Furthermore, we discuss the impact of LPEs on the objective function to determine the LPE-tolerant mechanism, velocity sensitivity and stopping criteria of the VFOM. The proposed method is also capable of locating acoustic sources using passive techniques such as passive sonar detection and acoustic emission.
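The virtual-field idea can be sketched in 2-D: each sensor pair's arrival-time difference defines a hyperbola, and instead of summing squared residuals, each pair contributes a smooth field that peaks where its residual vanishes, so the field maximum marks the common intersection. The geometry, wave speed, and Gaussian field width below are illustrative choices, not the paper's formulation.

```python
import math

# 2-D virtual-field sketch: sum a smooth per-pair "field" that peaks where
# the pair's hyperbola residual is zero; the global maximum sits at the
# common intersection (the source). All parameters are made up.

v = 3.0                               # assumed wave speed
sensors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
source = (4.0, 6.0)

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

arrivals = [dist(source, s) / v for s in sensors]   # noise-free picks

def field_value(x, y, width=0.5):
    total = 0.0
    for i in range(len(sensors)):
        for j in range(i + 1, len(sensors)):
            # residual of the hyperbola defined by sensor pair (i, j)
            r = (dist((x, y), sensors[i]) - dist((x, y), sensors[j])) \
                - v * (arrivals[i] - arrivals[j])
            total += math.exp(-(r / width) ** 2)    # each pair adds a ridge
    return total

# Coarse grid search for the field maximum (a real solver would optimize).
best = max(((x * 0.1, y * 0.1) for x in range(101) for y in range(101)),
           key=lambda p: field_value(*p))
print(best)
```

Because a large picking error merely flattens one pair's ridge rather than dragging a squared-residual sum, the field maximum stays near the true source, which is the LPE-tolerance the paper exploits.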
Method and apparatus for improved melt flow during continuous strip casting
Follstaedt, Donald W.; King, Edward L.; Schneider, Ken C.
1991-11-12
The continuous casting of metal strip using the melt overflow process is improved by controlling the weir conditions in the nozzle to provide a more uniform flow of molten metal across the width of the nozzle and to reduce the tendency for freezing of metal along the interface with refractory surfaces. A weir design having a sloped rear wall and tapered sidewalls, together with critical gap controls beneath the weir, has resulted in a drastic reduction in edge tearing and a significant improvement in strip uniformity. The floor of the container vessel is preferably sloped, and the gap between the nozzle and the rotating substrate is critically controlled. The flow patterns observed with the improved casting process have reduced thermal gradients in the bath, contained surface slag, and eliminated undesirable solidification near the discharge area by increasing the flow rates at those points.
A hybrid metaheuristic method to optimize the order of the sequences in continuous-casting
Directory of Open Access Journals (Sweden)
Achraf Touil
2016-06-01
In this paper, we propose a hybrid metaheuristic algorithm to maximize production and minimize processing time in steel-making and continuous casting (SCC) by optimizing the order of the sequences, where a sequence is a group of jobs with the same chemical characteristics. Based on the work of Bellabdaoui and Teghem (2006) [Bellabdaoui, A., & Teghem, J. (2006). A mixed-integer linear programming model for the continuous casting planning. International Journal of Production Economics, 104(2), 260-270.], a mixed-integer linear programming model for scheduling steelmaking continuous casting production is presented to minimize the makespan, with the order of the sequences in continuous casting assumed to be fixed. The main contribution is to analyze an additional way to determine the optimal order of the sequences. A hybrid method based on simulated annealing and a genetic algorithm restricted by a tabu list (SA-GA-TL) is proposed to obtain the optimal order. After parameter tuning, the proposed algorithm is tested on different instances using a .NET application and the commercial solver Cplex v12.5. These results are compared with those obtained by SA-TL (simulated annealing restricted by a tabu list).
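The SA-TL component of such a hybrid can be sketched on a toy sequencing problem: simulated annealing over permutations, with a tabu list barring recently visited orders. The weighted-completion-time objective and all parameters below are illustrative stand-ins, not the paper's MILP or tuned settings.

```python
import math, random

random.seed(0)

# Toy stand-in for sequence ordering: minimize total (unit-weight) completion
# time over permutations via simulated annealing restricted by a tabu list.
durations = [7, 3, 9, 2, 5, 4]

def cost(order):
    t, total = 0, 0
    for i in order:
        t += durations[i]
        total += t          # completion time of each sequence accumulates
    return total

def sa_with_tabu(n_iter=2000, t0=50.0, cooling=0.995, tabu_len=20):
    order = list(range(len(durations)))
    best = order[:]
    tabu = []               # recently visited solutions (the tabu list)
    temp = t0
    for _ in range(n_iter):
        i, j = random.sample(range(len(order)), 2)
        cand = order[:]
        cand[i], cand[j] = cand[j], cand[i]     # swap-neighbourhood move
        if tuple(cand) in tabu:
            continue                            # forbidden: recently visited
        delta = cost(cand) - cost(order)
        if delta < 0 or random.random() < math.exp(-delta / temp):
            order = cand
            tabu.append(tuple(order))
            tabu = tabu[-tabu_len:]
            if cost(order) < cost(best):
                best = order[:]
        temp *= cooling
    return best, cost(best)

best_order, best_cost = sa_with_tabu()
print(best_order, best_cost)
```

For this toy objective the optimum is the shortest-processing-time order (cost 81); the GA layer of SA-GA-TL would evolve a population of such orders instead of a single one.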
A phase quantification method based on EBSD data for a continuously cooled microalloyed steel
Energy Technology Data Exchange (ETDEWEB)
Zhao, H.; Wynne, B.P.; Palmiere, E.J., E-mail: e.j.palmiere@sheffield.ac.uk
2017-01-15
Mechanical properties of steels depend on the phase constitution of the final microstructures, which can be related to the processing parameters. Therefore, accurate quantification of the different phases is necessary to investigate the relationships between processing parameters, final microstructures and mechanical properties. Point counting on micrographs observed by optical or scanning electron microscopy is widely used as a phase quantification method, with the different phases discriminated according to their morphological characteristics. However, it is difficult to differentiate some of the phase constituents with similar morphology. In contrast, EBSD-based phase quantification methods can use not only morphological characteristics but also parameters derived from the orientation information for discrimination. In this research, a phase quantification method based on EBSD data at the level of individual grains was proposed to identify and quantify the complex phase constitution of a microalloyed steel subjected to accelerated cooling. The characteristics of polygonal ferrite/quasi-polygonal ferrite, acicular ferrite and bainitic ferrite with respect to grain-averaged misorientation (GAM) angles, aspect ratios, high-angle grain boundary fractions and grain sizes were analysed and used to develop identification criteria for each phase. Comparing the results obtained by this EBSD-based method and point counting, it was found that the EBSD-based method can provide accurate and reliable phase quantification for microstructures produced at relatively slow cooling rates. - Highlights: •A phase quantification method based on EBSD data at the grain level was proposed. •The critical grain area above which GAM angles are valid parameters was obtained. •Grain size and grain boundary misorientation were used to identify acicular ferrite. •High cooling rates deteriorate the accuracy of this EBSD-based method.
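A grain-level identification scheme of this kind amounts to a cascade of threshold rules on per-grain EBSD descriptors. The thresholds below are invented for illustration; the paper's actual criteria were calibrated against the measured distributions.

```python
# Rule-based grain classification sketch in the spirit of the EBSD criteria
# described above. Threshold values are made-up illustrations, not the
# calibrated criteria from the paper.

def classify_grain(gam_deg, aspect_ratio, hagb_fraction, size_um):
    """Classify one grain from its grain-averaged misorientation (GAM) angle,
    aspect ratio, high-angle grain boundary (HAGB) fraction, and size."""
    if gam_deg < 0.6 and hagb_fraction > 0.5:
        return "polygonal/quasi-polygonal ferrite"
    if aspect_ratio > 3.0 and size_um < 10.0:
        return "acicular ferrite"
    return "bainitic ferrite"

grains = [
    (0.4, 1.5, 0.7, 25.0),   # low internal misorientation, equiaxed
    (1.2, 4.2, 0.3, 6.0),    # elongated, fine
    (1.5, 1.8, 0.2, 15.0),   # high misorientation, blocky
]
for g in grains:
    print(classify_grain(*g))
```

Phase fractions then follow by summing the areas of grains assigned to each class, which is the EBSD analogue of point counting.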
Different seeds to solve the equations of stochastic point kinetics using the Euler-Maruyama method
International Nuclear Information System (INIS)
Suescun D, D.; Oviedo T, M.
2017-09-01
In this paper, a numerical study of the stochastic differential equations that describe the kinetics in a nuclear reactor is presented. These equations, known as the stochastic point kinetics equations, model temporal variations in the neutron population density and in the concentrations of delayed neutron precursors. Because these equations are probabilistic in nature (the random oscillations in the neutron and precursor populations are considered to be approximately normally distributed) and also possess strong coupling and stiffness properties, the proposed method for the numerical simulations is the Euler-Maruyama scheme, which provides very good approximations for calculating the neutron population and the concentrations of delayed neutron precursors. The proposed method was computationally tested for different seeds, initial conditions, experimental data and forms of reactivity, first for one group of precursors and then for six groups of delayed neutron precursors, at each time step with 5000 Brownian motions per seed. In a paper reported in the literature, the Euler-Maruyama method was proposed, but there are many doubts about the reported values, and the seed used was not reported, so this work is expected to rectify the reported values. After taking the average over the different seeds used to generate the pseudo-random numbers, the results provided by the Euler-Maruyama scheme are compared in mean and standard deviation with other methods reported in the literature and with the results of the deterministic model of the point kinetics equations. This comparison confirms in particular that the Euler-Maruyama scheme is an efficient method to solve the stochastic point kinetics equations, although the values differ from those found and reported by the other author. The Euler-Maruyama method is simple and easy to implement, and provides acceptable results for the neutron population density and the concentrations of delayed neutron precursors and
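The Euler-Maruyama scheme itself is a one-line update: x_{k+1} = x_k + f(x_k) dt + g(x_k) sqrt(dt) N(0,1). Below is a one-precursor-group sketch of stochastic point kinetics; the kinetic parameters, noise amplitude, and noise model are illustrative assumptions, not the paper's data (which uses a full covariance-based noise term).

```python
import math, random

random.seed(42)

# One-group stochastic point-kinetics sketch (illustrative parameters):
# neutron density n and delayed-neutron precursor concentration c.
rho, beta, Lam, lam = 0.003, 0.0065, 1e-4, 0.08  # reactivity, delayed fraction,
                                                 # generation time, decay const
sigma = 0.5        # assumed additive noise amplitude on the neutron channel
dt, steps = 1e-4, 1000

def euler_maruyama(n0=1.0):
    """Advance (n, c) with Euler-Maruyama:
    x_{k+1} = x_k + f(x_k)*dt + g(x_k)*sqrt(dt)*N(0,1)."""
    n = n0
    c = beta * n0 / (lam * Lam)          # start precursors at equilibrium
    for _ in range(steps):
        dW = random.gauss(0.0, math.sqrt(dt))          # Brownian increment
        dn = ((rho - beta) / Lam * n + lam * c) * dt + sigma * dW
        dc = (beta / Lam * n - lam * c) * dt           # precursors: drift only
        n, c = n + dn, c + dc
    return n, c

n_end, c_end = euler_maruyama()
print(n_end, c_end)
```

With positive reactivity the neutron population drifts upward around the deterministic trajectory; averaging many seeded runs, as the paper does, recovers the mean and standard deviation used for comparison with deterministic point kinetics.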
Ryvolová, Markéta; Preisler, Jan; Foret, Frantisek; Hauser, Peter C; Krásenský, Pavel; Paull, Brett; Macka, Mirek
2010-01-01
This work for the first time combines three on-capillary detection methods, namely, capacitively coupled contactless conductometric (C(4)D), photometric (PD), and fluorimetric (FD), in a single (identical) point of detection cell, allowing concurrent measurements at a single point of detection for use in capillary electrophoresis, capillary electrochromatography, and capillary/nanoliquid chromatography. The novel design is based on a standard 6.3 mm i.d. fiber-optic SMA adapter with a drilled opening for the separation capillary to go through, to which two concentrically positioned C(4)D detection electrodes with a detection gap of 7 mm were added on each side acting simultaneously as capillary guides. The optical fibers in the SMA adapter were used for the photometric signal (absorbance), and another optical fiber at a 45 degrees angle to the capillary was applied to collect the emitted light for FD. Light emitting diodes (255 and 470 nm) were used as light sources for the PD and FD detection modes. LOD values were determined under flow-injection conditions to exclude any stacking effects: For the 470 nm LED limits of detection (LODs) for FD and PD were for fluorescein (1 x 10(-8) mol/L) and tartrazine (6 x 10(-6) mol/L), respectively, and the LOD for the C(4)D was for magnesium chloride (5 x 10(-7) mol/L). The advantage of the three different detection signals in a single point is demonstrated in capillary electrophoresis using model mixtures and samples including a mixture of fluorescent and nonfluorescent dyes and common ions, underivatized amino acids, and a fluorescently labeled digest of bovine serum albumin.
Application of Wielandt method in continuous-energy nuclear data sensitivity analysis with RMC code
International Nuclear Information System (INIS)
Qiu Yishu; Wang Kan; She Ding
2015-01-01
The Iterated Fission Probability (IFP) method, an accurate method to estimate adjoint-weighted quantities in continuous-energy Monte Carlo criticality calculations, has been widely used for calculating kinetic parameters and nuclear data sensitivity coefficients. Because of its waiting strategy, however, this method faces the challenge of high memory usage to store the tallies of original contributions, whose size is proportional to the number of particle histories in each cycle. Recently, the Wielandt method, applied in the Monte Carlo code McCARD to calculate kinetic parameters, estimates adjoint fluxes within a single particle history and thus can reduce memory usage. In this work, the Wielandt method has been applied in the Reactor Monte Carlo code RMC for nuclear data sensitivity analysis. The methodology and algorithm of applying the Wielandt method to the estimation of adjoint-based sensitivity coefficients are discussed. Verification is performed by comparing the sensitivity coefficients calculated by the Wielandt method with analytical solutions, with those computed by the IFP method, which is also implemented in RMC for sensitivity analysis, and with those from the multi-group TSUNAMI-3D module in the SCALE code package. (author)
Advanced DNA-Based Point-of-Care Diagnostic Methods for Plant Diseases Detection
Directory of Open Access Journals (Sweden)
Han Yih Lau
2017-12-01
Diagnostic technologies for the detection of plant pathogens with point-of-care capability and high multiplexing ability are an essential tool in the fight to reduce the large agricultural production losses caused by plant diseases. The main desirable characteristics of such diagnostic assays are high specificity, sensitivity, reproducibility, speed, cost efficiency and high-throughput multiplex detection capability. This article describes and discusses various DNA-based point-of-care diagnostic methods for applications in plant disease detection. The polymerase chain reaction (PCR) is the most common DNA amplification technology used for detecting various plant and animal pathogens. However, subsequent to PCR-based assays, several types of nucleic acid amplification technologies have been developed to achieve higher sensitivity, rapid detection and suitability for field applications, such as loop-mediated isothermal amplification, helicase-dependent amplification, rolling circle amplification, recombinase polymerase amplification, and molecular inversion probes. The principles behind these technologies have been thoroughly discussed in several review papers; herein we emphasize the application of these technologies to detect plant pathogens by outlining the advantages and disadvantages of each technology in detail.
Cardinell, Alex P.
1999-01-01
A continuous seismic-reflection profiling survey was conducted by the U.S. Geological Survey on the Neuse River near the Cherry Point Marine Corps Air Station during July 7-24, 1998. Approximately 52 miles of profiling data were collected during the survey from areas northwest of the Air Station to Flanner Beach and southeast to Cherry Point. Positioning of the seismic lines was done by using an integrated navigational system. Data from the survey were used to define and delineate paleochannel alignments under the Neuse River near the Air Station. These data also were correlated with existing surface and borehole geophysical data, including vertical seismic-profiling velocity data collected in 1995. Sediments believed to be Quaternary in age were identified at varying depths on the seismic sections as undifferentiated reflectors and lack the lateral continuity of underlying reflectors believed to represent older sediments of Tertiary age. The sediments of possible Quaternary age thicken to the southeast. Paleochannels of Quaternary age and varying depths were identified beneath the Neuse River estuary. These paleochannels range in width from 870 feet to about 6,900 feet. Two zones of buried paleochannels were identified in the continuous seismic-reflection profiling data. The eastern paleochannel zone includes two large superimposed channel features identified during this study and in re-interpreted 1995 land seismic-reflection data. The second paleochannel zone, located west of the first paleochannel zone, contains several small paleochannels near the central and south shore of the Neuse River estuary between Slocum Creek and Flanner Beach. This second zone of channel features may be continuous with those mapped by the U.S. Geological Survey in 1995 using land seismic-reflection data on the southern end of the Air Station. Most of the channels were mapped at the Quaternary-Tertiary sediment boundary. These channels appear to have been cut into the older sediments
A new integrated dual time-point amyloid PET/MRI data analysis method
Energy Technology Data Exchange (ETDEWEB)
Cecchin, Diego; Zucchetta, Pietro; Turco, Paolo; Bui, Franco [University Hospital of Padua, Nuclear Medicine Unit, Department of Medicine - DIMED, Padua (Italy); Barthel, Henryk; Tiepolt, Solveig; Sabri, Osama [Leipzig University, Department of Nuclear Medicine, Leipzig (Germany); Poggiali, Davide; Cagnin, Annachiara; Gallo, Paolo [University Hospital of Padua, Neurology, Department of Neurosciences (DNS), Padua (Italy); Frigo, Anna Chiara [University Hospital of Padua, Biostatistics, Epidemiology and Public Health Unit, Department of Cardiac, Thoracic and Vascular Sciences, Padua (Italy)
2017-11-15
In the initial evaluation of patients with suspected dementia and Alzheimer's disease, there is no consensus on how to perform semiquantification of amyloid in such a way that it: (1) facilitates visual qualitative interpretation, (2) takes the kinetic behaviour of the tracer into consideration particularly with regard to at least partially correcting for blood flow dependence, (3) analyses the amyloid load based on accurate parcellation of cortical and subcortical areas, (4) includes partial volume effect correction (PVEC), (5) includes MRI-derived topographical indexes, (6) enables application to PET/MRI images and PET/CT images with separately acquired MR images, and (7) allows automation. A method with all of these characteristics was retrospectively tested in 86 subjects who underwent amyloid ({sup 18}F-florbetaben) PET/MRI in a clinical setting (using images acquired 90-110 min after injection, 53 were classified visually as amyloid-negative and 33 as amyloid-positive). Early images after tracer administration were acquired between 0 and 10 min after injection, and later images were acquired between 90 and 110 min after injection. PVEC of the PET data was carried out using the geometric transfer matrix method. Parametric images and some regional output parameters, including two innovative ''dual time-point'' indexes, were obtained. Subjects classified visually as amyloid-positive showed a sparse tracer uptake in the primary sensory, motor and visual areas in accordance with the isocortical stage of the topographic distribution of the amyloid plaque (Braak stages V/VI). In patients classified visually as amyloid-negative, the method revealed detectable levels of tracer uptake in the basal portions of the frontal and temporal lobes, areas that are known to be sites of early deposition of amyloid plaques that probably represented early accumulation (Braak stage A) that is typical of normal ageing. There was a strong correlation between
A new integrated dual time-point amyloid PET/MRI data analysis method
International Nuclear Information System (INIS)
Cecchin, Diego; Zucchetta, Pietro; Turco, Paolo; Bui, Franco; Barthel, Henryk; Tiepolt, Solveig; Sabri, Osama; Poggiali, Davide; Cagnin, Annachiara; Gallo, Paolo; Frigo, Anna Chiara
2017-01-01
In the initial evaluation of patients with suspected dementia and Alzheimer's disease, there is no consensus on how to perform semiquantification of amyloid in such a way that it: (1) facilitates visual qualitative interpretation, (2) takes the kinetic behaviour of the tracer into consideration particularly with regard to at least partially correcting for blood flow dependence, (3) analyses the amyloid load based on accurate parcellation of cortical and subcortical areas, (4) includes partial volume effect correction (PVEC), (5) includes MRI-derived topographical indexes, (6) enables application to PET/MRI images and PET/CT images with separately acquired MR images, and (7) allows automation. A method with all of these characteristics was retrospectively tested in 86 subjects who underwent amyloid ( 18 F-florbetaben) PET/MRI in a clinical setting (using images acquired 90-110 min after injection, 53 were classified visually as amyloid-negative and 33 as amyloid-positive). Early images after tracer administration were acquired between 0 and 10 min after injection, and later images were acquired between 90 and 110 min after injection. PVEC of the PET data was carried out using the geometric transfer matrix method. Parametric images and some regional output parameters, including two innovative ''dual time-point'' indexes, were obtained. Subjects classified visually as amyloid-positive showed a sparse tracer uptake in the primary sensory, motor and visual areas in accordance with the isocortical stage of the topographic distribution of the amyloid plaque (Braak stages V/VI). In patients classified visually as amyloid-negative, the method revealed detectable levels of tracer uptake in the basal portions of the frontal and temporal lobes, areas that are known to be sites of early deposition of amyloid plaques that probably represented early accumulation (Braak stage A) that is typical of normal ageing. There was a strong correlation between age
A pilot test of a new stated preference valuation method. Continuous attribute-based stated choice
International Nuclear Information System (INIS)
Ready, Richard; Fisher, Ann; Guignet, Dennis; Stedman, Richard; Wang, Junchao
2006-01-01
A new stated preference nonmarket valuation technique is developed. In an interactive computerized survey, respondents move continuous sliders to vary levels of environmental attributes. The total cost of the combination of attributes is calculated according to a preprogrammed cost function, continuously updated and displayed as respondents move the sliders. Each registered choice reveals the respondent's marginal willingness to pay for each of the attributes. The method is tested in a museum exhibit on global climate change. Two construct validity tests were conducted. Responses are sensitive to the shape of the cost function in ways that are consistent with expectations based on economic theory. Implied marginal willingness to pay values were similar to those estimated using a more traditional paired comparisons stated choice format. However, responses showed range effects that indicate potential cognitive biases. (author)
Directory of Open Access Journals (Sweden)
Kaijun Zhou
2017-09-01
Full Text Available The Jump Point Search (JPS) algorithm is adopted for local path planning of a driverless car in an urban environment; it is a fast search method for path planning. Firstly, a vector Geographic Information System (GIS) map, including Global Positioning System (GPS) position, direction, and lane information, is built for global path planning. Secondly, the GIS map database is used for global path planning of the driverless car. Then, the JPS algorithm is adopted to avoid obstacles ahead and to find an optimal local path for the driverless car in the urban environment. Finally, 125 different simulation experiments in the urban environment demonstrate that JPS successfully finds optimal, safe paths, and that it has lower time complexity than the Vector Field Histogram (VFH), Rapidly Exploring Random Tree (RRT), A*, and Probabilistic Roadmaps (PRM) algorithms. Furthermore, JPS is validated as useful in the structured urban environment.
Solving eigenvalue problems on curved surfaces using the Closest Point Method
Macdonald, Colin B.
2011-06-01
Eigenvalue problems are fundamental to mathematics and science. We present a simple algorithm for determining eigenvalues and eigenfunctions of the Laplace-Beltrami operator on rather general curved surfaces. Our algorithm, which is based on the Closest Point Method, relies on an embedding of the surface in a higher-dimensional space, where standard Cartesian finite difference and interpolation schemes can be easily applied. We show that there is a one-to-one correspondence between a problem defined in the embedding space and the original surface problem. For open surfaces, we present a simple way to impose Dirichlet and Neumann boundary conditions while maintaining second-order accuracy. Convergence studies and a series of examples demonstrate the effectiveness and generality of our approach. © 2011 Elsevier Inc.
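The embedding idea behind the Closest Point Method can be illustrated with a minimal sketch, not the paper's eigenvalue solver: evolve the heat equation on the unit circle by alternating a standard Cartesian heat step with a closest-point extension (the Ruuth-Merriman iteration), then read off the first nonzero Laplace-Beltrami eigenvalue from the decay rate of the mode cos(θ), which decays like exp(-t). Grid size, time step, and interpolation order are my own illustrative choices.

```python
import numpy as np
from scipy.ndimage import map_coordinates

# Cartesian grid on [-2, 2]^2 embedding the unit circle
h = 0.05
x = np.arange(-2.0, 2.0 + h / 2, h)
X, Y = np.meshgrid(x, x, indexing="ij")

# Closest point on the unit circle for every grid node,
# expressed in fractional grid-index coordinates
r = np.hypot(X, Y)
r[r == 0] = 1.0                               # avoid division by zero at the origin
ci = np.stack([((X / r) + 2.0) / h,
               ((Y / r) + 2.0) / h])

u = np.cos(np.arctan2(Y, X))                  # eigenfunction cos(theta), eigenvalue -1

dt, T = 0.25 * h**2, 0.5
for _ in range(int(round(T / dt))):
    lap = np.zeros_like(u)                    # standard 5-point Cartesian Laplacian
    lap[1:-1, 1:-1] = (u[2:, 1:-1] + u[:-2, 1:-1] + u[1:-1, 2:]
                       + u[1:-1, :-2] - 4.0 * u[1:-1, 1:-1]) / h**2
    u = u + dt * lap                          # heat step in the embedding space
    u = map_coordinates(u, ci, order=3)       # closest point extension (cubic interp)

# cos(theta) decays like exp(lambda*t) with lambda = -1 on the unit circle
lam = -np.log(np.abs(u).max()) / T
print(f"estimated |lambda_1| = {lam:.3f}")
```

The recovered decay rate should sit close to the exact eigenvalue 1; cubic interpolation is used because linear interpolation introduces too much artificial dissipation at this time-step size.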
Improved incremental conductance method for maximum power point tracking using cuk converter
Directory of Open Access Journals (Sweden)
M. Saad Saoud
2014-03-01
Full Text Available The Algerian government's strategy focuses on developing inexhaustible resources such as solar energy in order to diversify energy sources and prepare the Algeria of tomorrow: about 40% of the electricity produced for domestic consumption will come from renewable sources by 2030. It is therefore necessary to concentrate efforts on reducing application costs and increasing performance, which is evaluated and compared here through theoretical analysis and digital simulation. This paper presents a simulation of an improved incremental conductance method for maximum power point tracking (MPPT) using a DC-DC Cuk converter. The improved algorithm is used to track maximum power points because it performs precise control under rapidly changing atmospheric conditions. Matlab/Simulink was employed for the simulation studies.
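The classic incremental conductance rule underlying such trackers can be sketched on a toy single-diode PV model. This is a minimal illustration with invented parameters and no converter dynamics, not the paper's improved algorithm: the tracker compares dI/dV with -I/V and steps the voltage reference toward the point where they match.

```python
import numpy as np

# Toy single-diode PV model (illustrative parameters, not from the paper)
ISC, I0, VT = 5.0, 1e-9, 0.7      # short-circuit current [A], saturation current [A], n*Vt [V]

def pv_current(v):
    return ISC - I0 * np.expm1(v / VT)

def inc_cond_step(v, i, v_prev, i_prev, dv=0.05):
    """One incremental-conductance update: compare dI/dV with -I/V.

    Left of the MPP dI/dV > -I/V (raise the voltage reference);
    right of it dI/dV < -I/V (lower it)."""
    dV, dI = v - v_prev, i - i_prev
    if dV == 0.0:                      # operating point unchanged: react to dI only
        return v if dI == 0.0 else (v + dv if dI > 0 else v - dv)
    if abs(dI / dV + i / v) < 1e-3:    # dI/dV == -I/V: at the MPP, hold
        return v
    return v + dv if dI / dV > -i / v else v - dv

# Track the MPP starting from a low operating voltage
v_prev, v = 4.95, 5.0
i_prev = pv_current(v_prev)
for _ in range(400):
    i = pv_current(v)
    v_next = inc_cond_step(v, i, v_prev, i_prev)
    v_prev, i_prev, v = v, i, v_next

print(f"settled near V = {v:.2f} V, P = {v * pv_current(v):.2f} W")
```

The reference either latches at the maximum power point or oscillates within one voltage step of it, which is why the delivered power ends up within a fraction of a percent of the true maximum.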
Li, Xuxu; Li, Xinyang; Wang, Caixia
2018-03-01
This paper proposes an efficient approach to decrease the computational costs of correlation-based centroiding methods used for point source Shack-Hartmann wavefront sensors. Four typical similarity functions have been compared, i.e. the absolute difference function (ADF), ADF square (ADF2), square difference function (SDF), and cross-correlation function (CCF) using the Gaussian spot model. By combining them with fast search algorithms, such as three-step search (TSS), two-dimensional logarithmic search (TDL), cross search (CS), and orthogonal search (OS), computational costs can be reduced drastically without affecting the accuracy of centroid detection. Specifically, OS reduces calculation consumption by 90%. A comprehensive simulation indicates that CCF exhibits a better performance than other functions under various light-level conditions. Besides, the effectiveness of fast search algorithms has been verified.
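The coarse-to-fine idea behind these fast search algorithms can be sketched with a three-step-style search that minimizes a square-difference similarity on a synthetic Gaussian spot. Function names, image sizes, and parameters here are my own illustrative choices, not the authors' code.

```python
import numpy as np

def gaussian_spot(shape, cx, cy, sigma=2.0):
    y, x = np.indices(shape)
    return np.exp(-((x - cx) ** 2 + (y - cy) ** 2) / (2 * sigma ** 2))

def sdf(img, tmpl, dx, dy):
    """Square-difference similarity between the image and the template shifted by (dx, dy)."""
    shifted = np.roll(np.roll(tmpl, dy, axis=0), dx, axis=1)
    return np.sum((img - shifted) ** 2)

def three_step_search(img, tmpl, step=4):
    """Coarse-to-fine search: test a 3x3 neighbourhood, move to the best
    offset, halve the step -- far fewer SDF evaluations than a full scan."""
    dx = dy = 0
    while step >= 1:
        cands = [(dx + i * step, dy + j * step) for i in (-1, 0, 1) for j in (-1, 0, 1)]
        dx, dy = min(cands, key=lambda c: sdf(img, tmpl, *c))
        step //= 2
    return dx, dy

tmpl = gaussian_spot((32, 32), 16, 16)
img = gaussian_spot((32, 32), 19, 14)        # spot shifted by (+3, -2)
print(three_step_search(img, tmpl))          # -> (3, -2)
```

Because the SDF landscape of a Gaussian spot is unimodal, the search halves its step without losing the true offset, which is the property the fast search algorithms in the paper exploit.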
Directory of Open Access Journals (Sweden)
Florin POPESCU
2017-12-01
Full Text Available Early warning systems (EWS) based on a reliable forecasting process have become a critical component of the management of large complex industrial projects in the globalized transnational environment. The purpose of this research is to critically analyze forecasting methods from the point of view of early warning, choosing those useful for the construction of an EWS. This research addresses complementary techniques using Bayesian networks, which address both uncertainty and causality in project planning and execution, with the goal of generating early warning signals for project managers. Even though Bayesian networks have been widely used in a range of decision-support applications, their application as early warning systems for project management is still new.
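How a Bayesian network can turn an observed warning sign into an updated delay probability can be shown with a minimal hand-rolled sketch. The network structure and every probability below are invented for illustration only; real project EWS networks would be larger and learned from data.

```python
# Minimal Bayesian-network sketch for a project early-warning signal.
# Structure (invented): Risk -> SupplierWarning, Risk -> Delay.
P_risk = 0.2                                   # prior P(Risk = true)
P_warn = {True: 0.7, False: 0.1}               # P(Warning | Risk)
P_delay = {True: 0.8, False: 0.15}             # P(Delay | Risk)

def p_delay_given_warning(warning=True):
    """Posterior P(Delay | Warning) by summing out the hidden Risk node."""
    num = den = 0.0
    for risk in (True, False):
        pr = P_risk if risk else 1 - P_risk
        pw = P_warn[risk] if warning else 1 - P_warn[risk]
        den += pr * pw
        num += pr * pw * P_delay[risk]
    return num / den

print(f"P(Delay | Warning) = {p_delay_given_warning():.3f}")   # 0.564
print(f"P(Delay)           = {P_risk * 0.8 + (1 - P_risk) * 0.15:.3f}")  # 0.280
```

Observing the warning roughly doubles the delay probability (0.28 to 0.56), which is exactly the kind of signal an EWS would raise to a project manager.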
Wada, Yuji; Yuge, Kohei; Tanaka, Hiroki; Nakamura, Kentaro
2016-07-01
Numerical analysis of the rotation of an ultrasonically levitated droplet with a free surface boundary is discussed. The ultrasonically levitated droplet is often reported to rotate owing to the surface tangential component of acoustic radiation force. To observe the torque from an acoustic wave and clarify the mechanism underlying the phenomena, it is effective to take advantage of numerical simulation using the distributed point source method (DPSM) and moving particle semi-implicit (MPS) method, both of which do not require a calculation grid or mesh. In this paper, the numerical treatment of the viscoacoustic torque, which emerges from the viscous boundary layer and governs the acoustical droplet rotation, is discussed. The Reynolds stress traction force is calculated from the DPSM result using the idea of effective normal particle velocity through the boundary layer and input to the MPS surface particles. A droplet levitated in an acoustic chamber is simulated using the proposed calculation method. The droplet is vertically supported by a plane standing wave from an ultrasonic driver and subjected to a rotating sound field excited by two acoustic sources on the side wall with different phases. The rotation of the droplet is successfully reproduced numerically and its acceleration is discussed and compared with those in the literature.
Lenton, T. M.; Livina, V. N.; Dakos, V.; Van Nes, E. H.; Scheffer, M.
2012-01-01
We address whether robust early warning signals can, in principle, be provided before a climate tipping point is reached, focusing on methods that seek to detect critical slowing down as a precursor of bifurcation. As a test bed, six previously analysed datasets are reconsidered, three palaeoclimate records approaching abrupt transitions at the end of the last ice age and three models of varying complexity forced through a collapse of the Atlantic thermohaline circulation. Approaches based on examining the lag-1 autocorrelation function or on detrended fluctuation analysis are applied together and compared. The effects of aggregating the data, detrending method, sliding window length and filtering bandwidth are examined. Robust indicators of critical slowing down are found prior to the abrupt warming event at the end of the Younger Dryas, but the indicators are less clear prior to the Bølling-Allerød warming, or glacial termination in Antarctica. Early warnings of thermohaline circulation collapse can be masked by inter-annual variability driven by atmospheric dynamics. However, rapidly decaying modes can be successfully filtered out by using a long bandwidth or by aggregating data. The two methods have complementary strengths and weaknesses and we recommend applying them together to improve the robustness of early warnings. PMID:22291229
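The lag-1 autocorrelation indicator described above can be sketched on a synthetic series, not the paper's palaeoclimate data: an AR(1) process whose coefficient drifts toward 1 mimics critical slowing down, and a sliding-window lag-1 autocorrelation rises as the "tipping point" approaches. Window and step sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic series approaching a tipping point: AR(1) whose coefficient
# drifts toward 1, mimicking critical slowing down (illustrative only)
n = 4000
phi = np.linspace(0.2, 0.95, n)
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi[t] * x[t - 1] + rng.normal()

def lag1_autocorr(w):
    return np.corrcoef(w[:-1], w[1:])[0, 1]

# Sliding-window lag-1 autocorrelation as an early warning indicator
win = 500
ac = [lag1_autocorr(x[i:i + win]) for i in range(0, n - win, 250)]
print(f"indicator early: {ac[0]:.2f}, late: {ac[-1]:.2f}")  # rises toward 1
```

In real records the trend must be separated from detrending and windowing choices, which is precisely the sensitivity analysis the paper performs.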
Comparison of point-of-care methods for preparation of platelet concentrate (platelet-rich plasma).
Weibrich, Gernot; Kleis, Wilfried K G; Streckbein, Philipp; Moergel, Maximilian; Hitzler, Walter E; Hafner, Gerd
2012-01-01
This study analyzed the concentrations of platelets and growth factors in platelet-rich plasma (PRP), which are likely to depend on the method used for its production. The cellular composition and growth factor content of platelet concentrates (platelet-rich plasma) produced by six different procedures were quantitatively analyzed and compared. Platelet and leukocyte counts were determined on an automatic cell counter, and analysis of growth factors was performed using enzyme-linked immunosorbent assay. The principal differences between the analyzed PRP production methods (the blood bank method of an intermittent-flow centrifuge system/platelet apheresis, and the five point-of-care methods) and the resulting platelet concentrates were evaluated with regard to the resulting platelet, leukocyte, and growth factor levels. The platelet counts in both whole blood and PRP were generally higher in women than in men; no differences were observed with regard to age. Statistical analysis of platelet-derived growth factor AB (PDGF-AB) and transforming growth factor β1 (TGF-β1) showed no differences with regard to age or gender. Platelet counts and TGF-β1 concentration correlated closely, as did platelet counts and PDGF-AB levels. Correlations between leukocyte counts and PDGF-AB levels were rare, but their comparison demonstrated certain parallel tendencies. TGF-β1 levels derive in substantial part from platelets; the findings also emphasize the role of leukocytes, in addition to that of platelets, as a source of growth factors in PRP. All methods of producing PRP showed high variability in platelet counts and growth factor levels. The highest growth factor levels were found in the PRP prepared using the Platelet Concentrate Collection System manufactured by Biomet 3i.
Probabilistic multiobjective wind-thermal economic emission dispatch based on point estimated method
International Nuclear Information System (INIS)
Azizipanah-Abarghooee, Rasoul; Niknam, Taher; Roosta, Alireza; Malekpour, Ahmad Reza; Zare, Mohsen
2012-01-01
In this paper, wind power generators are being incorporated in the multiobjective economic emission dispatch problem which minimizes wind-thermal electrical energy cost and emissions produced by fossil-fueled power plants, simultaneously. Large integration of wind energy sources necessitates an efficient model to cope with uncertainty arising from random wind variation. Hence, a multiobjective stochastic search algorithm based on 2m point estimated method is implemented to analyze the probabilistic wind-thermal economic emission dispatch problem considering both overestimation and underestimation of available wind power. 2m point estimated method handles the system uncertainties and renders the probability density function of desired variables efficiently. Moreover, a new population-based optimization algorithm called modified teaching-learning algorithm is proposed to determine the set of non-dominated optimal solutions. During the simulation, the set of non-dominated solutions are kept in an external memory (repository). Also, a fuzzy-based clustering technique is implemented to control the size of the repository. In order to select the best compromise solution from the repository, a niching mechanism is utilized such that the population will move toward a smaller search space in the Pareto-optimal front. In order to show the efficiency and feasibility of the proposed framework, three different test systems are represented as case studies. -- Highlights: ► WPGs are being incorporated in the multiobjective economic emission dispatch problem. ► 2m PEM handles the system uncertainties. ► A MTLBO is proposed to determine the set of non-dominated (Pareto) optimal solutions. ► A fuzzy-based clustering technique is implemented to control the size of the repository.
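The 2m point estimate scheme referenced above can be sketched for independent, zero-skewness inputs (Hong's scheme): each of the m random variables is evaluated at its mean plus/minus sqrt(m) standard deviations while the others stay at their means, and the 2m outcomes are weighted by 1/(2m). The test function below is my own; for this quadratic case the estimated mean is exact.

```python
import numpy as np

def pem_2m(f, mu, sigma):
    """Hong's 2m point-estimate method (zero-skewness case): evaluate f at
    mu_k +/- sqrt(m)*sigma_k, one variable at a time, weight each by 1/(2m)."""
    mu, sigma = np.asarray(mu, float), np.asarray(sigma, float)
    m = len(mu)
    e1 = e2 = 0.0
    for k in range(m):
        for s in (+1.0, -1.0):
            xk = mu.copy()
            xk[k] += s * np.sqrt(m) * sigma[k]   # the two concentrations of X_k
            y = f(xk)
            e1 += y / (2 * m)                    # first raw moment of Y
            e2 += y**2 / (2 * m)                 # second raw moment of Y
    return e1, np.sqrt(max(e2 - e1**2, 0.0))

# Y = X1 + X2^2 with independent normals: E[Y] = mu1 + mu2^2 + sigma2^2
mu, sigma = [1.0, 2.0], [0.5, 0.3]
mean, std = pem_2m(lambda x: x[0] + x[1] ** 2, mu, sigma)
print(f"E[Y] = {mean:.2f} (exact: 5.09), std = {std:.2f}")
```

Only 2m deterministic evaluations of the (here trivial, in the paper expensive) dispatch function are needed, which is the method's appeal over Monte Carlo sampling.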
Intelligent Continuous Double Auction method For Service Allocation in Cloud Computing
Directory of Open Access Journals (Sweden)
Nima Farajian
2013-10-01
Full Text Available The market-oriented approach is an effective method for resource management because it regulates supply and demand, and it is suitable for the cloud environment, where computing resources, whether software or hardware, are virtualized and allocated as services from providers to users. In this paper a continuous double auction method for efficient cloud service allocation is presented which (i) enables consumers to order various resources (services) for workflows and co-allocation, (ii) lets consumers and providers set bid and ask prices based on deadline and workload time, with providers additionally able to trade off between utilization time and bid price, and (iii) allows auctioneers to intelligently find optimal matchings by sharing and merging resources, which results in more trades. Experimental results show that the proposed method is efficient in terms of successful allocation rate and resource utilization.
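The continuous double auction mechanism itself can be sketched with a minimal order book: a trade executes the moment the best bid meets the best ask. The paper's co-allocation and resource-merging intelligence is omitted; names and prices below are illustrative.

```python
import heapq

class ContinuousDoubleAuction:
    """Minimal CDA order book: a trade executes as soon as the best bid
    meets or exceeds the best ask (the paper's sharing/merging logic omitted)."""

    def __init__(self):
        self.bids = []   # max-heap via negated prices: (-price, trader)
        self.asks = []   # min-heap: (price, trader)

    def submit(self, side, price, trader):
        """Register an order, then clear all crossing bid/ask pairs."""
        if side == "bid":
            heapq.heappush(self.bids, (-price, trader))
        else:
            heapq.heappush(self.asks, (price, trader))
        trades = []
        while self.bids and self.asks and -self.bids[0][0] >= self.asks[0][0]:
            neg_bid, buyer = heapq.heappop(self.bids)
            ask, seller = heapq.heappop(self.asks)
            trades.append((buyer, seller, (ask - neg_bid) / 2))  # midpoint price
        return trades

cda = ContinuousDoubleAuction()
cda.submit("ask", 12.0, "P1")            # provider P1 asks 12
cda.submit("ask", 10.0, "P2")            # provider P2 asks 10
trades = cda.submit("bid", 11.0, "C1")   # consumer C1 bids 11: crosses P2's ask
print(trades)                            # [('C1', 'P2', 10.5)]
```

The "continuous" in CDA refers to this immediate clearing on every submission, as opposed to batching all orders into periodic call auctions.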
An improved method of continuous LOD based on fractal theory in terrain rendering
Lin, Lan; Li, Lijun
2007-11-01
With the improvement of computer graphics hardware capability, algorithms for 3D terrain rendering have become a hot topic in real-time visualization. To resolve the conflict between rendering speed and rendering realism, this paper gives an improved method of terrain rendering that builds on the traditional continuous level-of-detail technique using fractal theory. With this method, the program need not repeatedly manipulate memory to obtain terrain models at different resolutions; instead, it obtains the fractal characteristic parameters of each region according to the movement of the viewpoint. Experimental results show that the method preserves the authenticity of the landscape while increasing the speed of real-time 3D terrain rendering.
This study compared the utility of three sampling methods for ecological monitoring based on: interchangeability of data (rank correlations), precision (coefficient of variation), cost (minutes/transect), and potential of each method to generate multiple indicators. Species richness and foliar cover...
Shape resonances of Be- and Mg- investigated with the method of analytic continuation
Čurík, Roman; Paidarová, I.; Horáček, J.
2018-05-01
The regularized method of analytic continuation is used to study the low-energy negative-ion states of beryllium (configuration 2s²εp ²P) and magnesium (configuration 3s²εp ²P) atoms. The method applies an additional perturbation potential and requires only routine bound-state multi-electron quantum calculations. Such computations are accessible with most of the free or commercial quantum chemistry software available for atoms and molecules. The perturbation potential is implemented as a spherical Gaussian function with a fixed width. The stability of the analytic continuation technique with respect to the width and to the input range of electron affinities is studied in detail. The computed resonance parameters Er = 0.282 eV, Γ = 0.316 eV for the 2p state of Be- and Er = 0.188 eV, Γ = 0.167 eV for the 3p state of Mg- agree well with the best results obtained by much more elaborate and computationally demanding present-day methods.
Directory of Open Access Journals (Sweden)
S. I. Bartsev
2015-06-01
Full Text Available A possible method for experimentally determining the parameters of a previously proposed continual mathematical model of soil organic matter transformation is considered theoretically. The model, previously proposed by the authors, uses the rate of matter transformation as a continual scale of its recalcitrance and describes the transformation process phenomenologically, without going into the detail of its microbiological mechanisms; this keeps the model simple. The model is a single first-order partial differential equation with an analytical solution in elementary functions. The equation contains a small number of empirical parameters that characterize the environmental conditions under which transformation occurs and the initial properties of the plant litter. Given values for these parameters, the dynamics of soil organic matter stocks and their distribution over transformation rate can be calculated. In the present study, possible approaches to determining the model parameters are considered and a simple method for measuring them experimentally is proposed: incubating chemically homogeneous samples in soil and repeatedly measuring the sample mass loss over time. An equation for the time dynamics of the mass loss of an incubated homogeneous sample is derived from the basic assumption of the presented soil organic matter transformation model. Fitting the parameters of the calculated mass-loss curve to the measurements by the least squares method then determines the parameters of the general equation of the soil organic matter transformation model.
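The least-squares fitting step can be sketched numerically. For simplicity a single-exponential mass-loss law is assumed here; the model's actual mass-loss curve may differ, and all sampling times, masses, and noise levels below are invented.

```python
import numpy as np
from scipy.optimize import curve_fit

# Assumed mass-loss law for the incubated homogeneous sample; the model's
# actual curve may differ -- this only illustrates the least-squares step
def mass_remaining(t, m0, k):
    return m0 * np.exp(-k * t)

rng = np.random.default_rng(1)
t = np.linspace(0, 24, 13)                       # sampling times, months
m_true = mass_remaining(t, 10.0, 0.15)           # "true" sample masses, grams
m_obs = m_true + rng.normal(0, 0.05, t.size)     # measurement noise

(m0_fit, k_fit), _ = curve_fit(mass_remaining, t, m_obs, p0=(8.0, 0.1))
print(f"fitted m0 = {m0_fit:.2f} g, k = {k_fit:.3f} / month")
```

Repeated weighings of the same incubated samples give the time series; the fitted rate constant then feeds the general transformation equation as described in the abstract.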
International Nuclear Information System (INIS)
Odano, Ikuo; Takahashi, Naoya; Noguchi, Eikichi; Ohtaki, Hiro; Hatano, Masayoshi; Yamazaki, Yoshihiro; Higuchi, Takeshi; Ohkubo, Masaki.
1994-01-01
We developed a new non-invasive technique, the one-point sampling method, for quantitative measurement of regional cerebral blood flow (rCBF) with N-isopropyl-p-[ 123 I]iodoamphetamine ( 123 I-IMP) and SPECT. Although the conventional microsphere method requires continuous withdrawal of arterial blood and octanol treatment of the blood, the new technique does not require these two procedures. The total activity of 123 I-IMP that would be obtained by continuous withdrawal of arterial blood is inferred from the activity of a single arterial sample using a regression line. To determine the optimal one-point sampling time for inferring the integral input function of the continuous withdrawal, and whether octanol treatment of the sampled blood was required, we examined the correlation between the total activity of arterial blood withdrawn from 0 to 5 min after injection and the activity of a single sample obtained at time t, and calculated a regression line. The minimum % error of the inference using the regression line was obtained at 6 min after the 123 I-IMP injection; moreover, octanol treatment was not required. Examining the effect on the rCBF values when the sampling time deviated from 6 min, we could correct the values to within approximately 3% error when the sample was obtained at 6±1 min after injection. The one-point sampling method provides accurate and relatively non-invasive measurement of rCBF without octanol extraction of arterial blood. (author)
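The regression-line inference at the heart of the method can be sketched with synthetic numbers. The calibration values below (a slope of 4.2, 30 subjects, the noise level) are entirely invented for illustration; only the structure, a linear regression from the single-sample activity to the integrated input function, follows the abstract.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic calibration data (arbitrary units, for illustration only):
# integrated 0-5 min arterial activity vs. a single 6-min sample per subject
one_point = rng.uniform(50, 150, 30)                 # activity of the 6-min sample
integral = 4.2 * one_point + rng.normal(0, 10, 30)   # assumed linear relation

slope, intercept = np.polyfit(one_point, integral, 1)

def infer_input_integral(sample_activity):
    """Infer the integral input function from one arterial sample."""
    return slope * sample_activity + intercept

print(f"slope = {slope:.2f}, inferred integral for a 100-unit sample: "
      f"{infer_input_integral(100):.1f}")
```

Once the regression line is calibrated on a reference group, each new patient needs only the single 6-min sample, which is what makes the method relatively non-invasive.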
Study of N-13 decay on time using continuous kinetic function method
International Nuclear Information System (INIS)
Tran Dai Nghiep; Vu Hoang Lam; Nguyen Ngoc Son; Nguyen Duc Thanh
1993-01-01
The decay function of the radioisotope 13 N formed in the reaction 14 N(γ,n) 13 N was registered by a high-resolution gamma spectrometer in multiscanning mode at a gamma energy of 511 keV. The experimental data were processed by the common and the kinetic function methods. Continuous comparison of the decay function over time makes it possible to determine possible deviations from a purely exponential decay curve. The results were described by several decay theories, and the degree of correspondence between theory and experiment was evaluated by a goodness factor. A complex type of decay was considered. (author). 9 refs, 2 tabs, 6 figs
Frank, Andrew A.
1984-01-01
A control system and method for a power delivery system, such as in an automotive vehicle, having an engine coupled to a continuously variable ratio transmission (CVT). Totally independent control of engine and transmission enable the engine to precisely follow a desired operating characteristic, such as the ideal operating line for minimum fuel consumption. CVT ratio is controlled as a function of commanded power or torque and measured load, while engine fuel requirements (e.g., throttle position) are strictly a function of measured engine speed. Fuel requirements are therefore precisely adjusted in accordance with the ideal characteristic for any load placed on the engine.
Palmesi, P.; Exl, L.; Bruckner, F.; Abert, C.; Suess, D.
2017-11-01
The long-range magnetic field is the most time-consuming part of micromagnetic simulations, and computational improvements can relieve problems related to this bottleneck. This work presents an efficient implementation of the Fast Multipole Method [FMM] for the magnetic scalar potential as used in micromagnetics. The novelty lies in extending FMM to linearly magnetized tetrahedral sources, making it interesting also for other areas of computational physics. We treat the near field directly and use (exact) numerical integration of the multipole expansion in the far field. This approach tackles important issues like the vectorial and continuous nature of the magnetic field. By using FMM the calculations scale linearly in time and memory.
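The far-field principle that makes FMM fast can be shown in a minimal sketch, not the authors' tetrahedral implementation: a distant cluster of point sources is replaced by a truncated multipole expansion about its centre (here monopole plus dipole terms for a 1/r potential), trading a tiny, controlled error for a large reduction in work. All source positions and strengths are random illustrative values.

```python
import numpy as np

rng = np.random.default_rng(0)

# A cluster of point "charges" near the origin and a far-away target
src = rng.uniform(-0.5, 0.5, (200, 3))
q = rng.uniform(0.1, 1.0, 200)
target = np.array([10.0, 4.0, 3.0])

# Direct sum of 1/r potentials (the exact near-field-style treatment)
direct = np.sum(q / np.linalg.norm(target - src, axis=1))

# Truncated multipole expansion about the cluster centre:
# monopole (total charge) plus dipole correction
centre = src.mean(axis=0)
R = target - centre
r = np.linalg.norm(R)
monopole = q.sum() / r
dipole = np.dot(q @ (src - centre), R) / r**3
approx = monopole + dipole

print(f"relative error of the 2-term expansion: {abs(approx - direct) / direct:.2e}")
```

The neglected terms shrink like (cluster radius / distance) squared, so grouping distant sources this way is what lets FMM evaluate the field in linear rather than quadratic time.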
Optimization of the uniformity of a metal flow during continuous extrusion by the Conform method
Lyubanova, A. Sh.; Gorokhov, Yu. V.; Solopko, I. V.; Ziborov, A. Yu.
2010-03-01
The scheme of plastic deformation of a billet in a container is considered as part of continuous extrusion by the Conform method. A mathematical model of the motion of a viscoplastic Bingham liquid is used to determine the metal velocity distribution in the plastic-deformation zone. As a result, the optimum angle between the longitudinal axes of the die and container is estimated. This angle is found to be one of the main factors affecting the nonuniformity of deformation when a metal flows into the die. The calculated results are compared to experimental data.
Application of distributed point source method (DPSM) to wave propagation in anisotropic media
Fooladi, Samaneh; Kundu, Tribikram
2017-04-01
The Distributed Point Source Method (DPSM) was developed by Placko and Kundu [1] as a technique for modeling electromagnetic and elastic wave propagation problems. DPSM has been used for modeling ultrasonic, electrostatic and electromagnetic fields scattered by defects and anomalies in a structure. Modeling such scattered fields helps to extract valuable information about the location and type of defects; DPSM can therefore be used as an effective tool for Non-Destructive Testing (NDT). Anisotropy adds to the complexity of the problem, both mathematically and computationally: computation of the Green's function, which is used as the fundamental solution in DPSM, is considerably more challenging for anisotropic media and cannot be reduced to a closed-form solution as is done for isotropic materials. The purpose of this study is to investigate and implement DPSM for an anisotropic medium. While the mathematical formulation and the numerical algorithm are considered for general anisotropic media, more emphasis is placed on transversely isotropic materials in the numerical example presented in this paper. The unidirectional fiber-reinforced composites widely used in today's industry are good examples of transversely isotropic materials. Development of an effective and accurate NDT method based on these modeling results can be of paramount importance for in-service monitoring of damage in composite structures.
Arahman, Nasrul; Maimun, Teuku; Mukramah, Syawaliah
2017-01-01
The composition of the polymer solution and the method of membrane preparation determine the solidification process of the membrane. The structure of membranes prepared via the non-solvent induced phase separation (NIPS) method is mostly determined by the phase separation process between polymer, solvent, and non-solvent. This paper discusses the phase separation process of a polymer solution containing polyethersulfone (PES), N-methylpyrrolidone (NMP), and the surfactant Tetronic 1307 (Tet). A cloud point experiment is conducted to determine the amount of non-solvent needed to induce phase separation. The amount of water required as a non-solvent decreases with the addition of the surfactant Tet. The kinetics of phase separation for such systems is studied by light scattering measurements. With the addition of Tet, delayed phase separation is observed and the structure growth rate decreases. Moreover, the morphology of the membranes fabricated from these polymer systems is analyzed by scanning electron microscopy (SEM). The images of both systems show the formation of finger-like macrovoids through the cross-section.
Directory of Open Access Journals (Sweden)
Ibrahim Karahan
2016-04-01
Full Text Available Let C be a nonempty closed convex subset of a real Hilbert space H. Let {T_n}: C → H be a sequence of nearly nonexpansive mappings such that F := ∩_{i=1}^{∞} F(T_i) ≠ Ø. Let V: C → H be a ρ-Lipschitzian mapping and F: C → H be an L-Lipschitzian and η-strongly monotone operator. This paper deals with a modified iterative projection method for approximating a solution of the hierarchical fixed point problem. It is shown that, under certain approximate assumptions on the operators and parameters, the modified iterative sequence {x_n} converges strongly to x* ∈ F, which is also the unique solution of the following variational inequality: ⟨(μF − γV)x*, x − x*⟩ ≥ 0, ∀x ∈ F. As a special case, this projection method can be used to find the minimum norm solution of the above variational inequality; namely, the unique solution x* of the quadratic minimization problem x* = argmin_{x∈F} ‖x‖². The results here improve and extend some recent corresponding results of other authors.
New encapsulation method using low-melting-point alloy for sealing micro heat pipes
Energy Technology Data Exchange (ETDEWEB)
Li, Congming; Wang, Xiaodong; Zhou, Chuanpeng; Luo, Yi; Li, Zhixin; Li, Sidi [Dalian University of Technology, Dalian (China)
2017-06-15
This study proposes a method using a low-melting-point alloy (LMPA) to seal micro heat pipes (MHPs) made of Si substrates and glass covers. Corresponding MHP structures with charging and sealing channels were designed. Three auxiliary structures were investigated to study the sealability of MHPs with LMPA: one rectangular and two triangular, with corner angles of 30° and 45°, respectively. Each auxiliary channel for the LMPA is 0.5 mm wide and 135 μm deep. The LMPA was heated to a molten state, injected into the channels, and then cooled to room temperature. Owing to the material characteristics of the LMPA, the alloy swells over the following 12 hours to form a strong interaction force between the LMPA and the Si walls. Experimental results show that the flow speed of the liquid LMPA in the channels plays an important role in sealing the MHPs, and that the sealing performance of the triangular structures is consistently better than that of the rectangular structure; triangular structures are therefore more suitable for sealing MHPs. LMPA sealing is a planar packaging method that can be applied in the thermal management of high-power IC devices and LEDs, and it can easily be implemented in commercial MHP fabrication.
New encapsulation method using low-melting-point alloy for sealing micro heat pipes
International Nuclear Information System (INIS)
Li, Congming; Wang, Xiaodong; Zhou, Chuanpeng; Luo, Yi; Li, Zhixin; Li, Sidi
2017-01-01
This study proposes a method using a low-melting-point alloy (LMPA) to seal micro heat pipes (MHPs) made of Si substrates and glass covers. Corresponding MHP structures with charging and sealing channels were designed. Three auxiliary structures were investigated to study the sealability of MHPs with LMPA: one rectangular and two triangular, with corner angles of 30° and 45°, respectively. Each auxiliary channel for the LMPA is 0.5 mm wide and 135 μm deep. The LMPA was heated to a molten state, injected into the channels, and then cooled to room temperature. Owing to the material characteristics of the LMPA, the alloy swells over the following 12 hours to form a strong interaction force between the LMPA and the Si walls. Experimental results show that the flow speed of the liquid LMPA in the channels plays an important role in sealing the MHPs, and that the sealing performance of the triangular structures is consistently better than that of the rectangular structure; triangular structures are therefore more suitable for sealing MHPs. LMPA sealing is a planar packaging method that can be applied in the thermal management of high-power IC devices and LEDs, and it can easily be implemented in commercial MHP fabrication.
Directory of Open Access Journals (Sweden)
T. A. Mikhailova
2016-01-01
Full Text Available This paper presents a Monte Carlo algorithm for modeling the continuous low-temperature free-radical emulsion copolymerization of butadiene and styrene. This process underlies the industrial production of butadiene-styrene synthetic rubber, the most widespread large-capacity general-purpose rubber. The algorithm is based on simulating the growth of each macromolecule of the forming copolymer and tracking the processes it undergoes. The modeling accounts for the residence-time distribution of particles in the system, which makes it possible to study the process as it proceeds through a battery of series-connected polymerization reactors, each treated as a continuous stirred-tank reactor. Since the process is continuous, the continuous addition of fresh reaction mixture to the first reactor of the battery is taken into account. The model makes it possible to study the molecular-weight and viscosity characteristics of the copolymerization product, to predict the mass content of butadiene and styrene in the copolymer, and to calculate the molecular-weight distribution of the product at any moment of the process. Computational experiments were used to analyze how the mode of introducing the regulator during the process affects the characteristics of the forming butadiene-styrene copolymer. Since the process involves monomers of two types, the model also allows the compositional heterogeneity of the product to be studied, that is, the calculation of the composition distribution and the distribution of macromolecules by size and structure. On the basis of the proposed algorithm, a software tool was created that tracks changes in the characteristics of the resulting product over time.
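The chain-growth part of such a simulation can be illustrated with a deliberately reduced sketch: a kinetic Monte Carlo walk over one macromolecule using the Mayo-Lewis terminal model. This is not the authors' full model (no residence-time distribution, no reactor battery, no regulator); the reactivity ratios and chain length below are illustrative values.

```python
import random

def grow_chain(rng, f1, r1, r2, length=1000):
    """Kinetic Monte Carlo growth of one copolymer chain (terminal model).

    f1     : mole fraction of monomer 1 (butadiene) in the feed
    r1, r2 : Mayo-Lewis reactivity ratios k11/k12 and k22/k21
    Returns the fraction of monomer-1 units added to the chain.
    """
    f2 = 1.0 - f1
    end = 1 if rng.random() < f1 else 2   # type of the current chain end
    n1 = 0
    for _ in range(length):
        if end == 1:
            # probability of adding M1 to a chain ending in an M1 unit
            p11 = r1 * f1 / (r1 * f1 + f2)
            end = 1 if rng.random() < p11 else 2
        else:
            p22 = r2 * f2 / (r2 * f2 + f1)
            end = 2 if rng.random() < p22 else 1
        n1 += (end == 1)
    return n1 / length

rng = random.Random(42)
# ideal copolymerization (r1 = r2 = 1) at equimolar feed: ~50/50 copolymer
comp_ideal = sum(grow_chain(rng, 0.5, 1.0, 1.0) for _ in range(200)) / 200
# hypothetical ratios favoring monomer 1: the chain becomes M1-rich
comp_biased = grow_chain(rng, 0.5, 5.0, 0.2)
```

Averaging over many simulated chains gives the copolymer composition; tracking the per-chain unit counts instead of only the fraction would yield the molecular-weight and composition distributions discussed in the abstract.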
Fixed point theorems in locally convex spaces - the Schauder mapping method
Directory of Open Access Journals (Sweden)
S. Cobzaş
2006-03-01
Full Text Available In the appendix to the book by F. F. Bonsall, Lectures on Some Fixed Point Theorems of Functional Analysis (Tata Institute, Bombay, 1962), a proof by Singbal of the Schauder-Tychonoff fixed point theorem, based on a locally convex variant of the Schauder mapping method, is included. The aim of this note is to show that this method can be adapted to yield a proof of the Kakutani fixed point theorem in the locally convex case. For the sake of completeness we also include the proof of the Schauder-Tychonoff theorem based on this method. As applications, a theorem of von Neumann and a minimax result in game theory are proved.
Application of the method of continued fractions for electron scattering by linear molecules
International Nuclear Information System (INIS)
Lee, M.-T.; Iga, I.; Fujimoto, M.M.; Lara, O.; Brasilia Univ., DF
1995-01-01
The method of continued fractions (MCF) of Horacek and Sasakawa is adapted for the first time to study low-energy electron scattering by linear molecules. In particular, we have calculated the reactance K-matrices for electron scattering by the hydrogen molecule and the hydrogen molecular ion, as well as by the polar LiH molecule, at the static-exchange level. For all the applications studied herein, the calculated physical quantities converge rapidly to the correct values, even for a strongly polar molecule such as LiH, and in most cases the convergence is monotonic. Our study suggests that the MCF could be an efficient method for studying electron-molecule scattering and also photoionization of molecules. (Author)
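The MCF builds the K-matrix as an operator-level continued fraction whose successive truncations converge to the exact result. The convergence behavior can be illustrated with a scalar analogue (an assumption of this sketch, not the scattering formalism itself): the convergents of the simple continued fraction of √2 = [1; 2, 2, 2, ...], computed with the standard three-term recurrences.

```python
def cf_convergents(a0, coeffs):
    """Successive convergents h_n/k_n of the simple continued fraction
    [a0; a1, a2, ...], via the standard three-term recurrences."""
    h_prev, h = 1, a0
    k_prev, k = 0, 1
    out = [h / k]
    for a in coeffs:
        h_prev, h = h, a * h + h_prev
        k_prev, k = k, a * k + k_prev
        out.append(h / k)
    return out

# sqrt(2) = [1; 2, 2, 2, ...]: each extra level sharply reduces the error
conv = cf_convergents(1, [2] * 10)
errors = [abs(c - 2 ** 0.5) for c in conv]
```

After only ten levels the convergent agrees with √2 to about eight digits, and the error shrinks monotonically, mirroring the rapid and mostly monotonic convergence the abstract reports for the K-matrix elements.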
Directory of Open Access Journals (Sweden)
Lei Guo
2017-02-01
Full Text Available Point-of-interest (POI) recommendation has been well studied in recent years. However, most existing methods focus on recommendation scenarios where users can provide explicit feedback. In most cases, however, the feedback is not explicit but implicit: we can only obtain a user's check-in behavior from the history of POIs she/he has visited, but never know how much she/he likes a POI, or why she/he does not like it. Recently, some researchers have noticed this problem and begun to learn user preferences from the partial order of POIs. However, these works give equal weight to each POI pair and cannot distinguish the contributions of different POI pairs. Intuitively, for the two POIs in a pair, the larger the difference in their visit frequencies and the farther the geographical distance between them, the higher the contribution of that pair to the ranking function. Based on these observations, we propose a weighted ranking method for POI recommendation. Specifically, we first introduce a Bayesian personalized ranking criterion designed for implicit feedback to POI recommendation. To fully utilize the partial order of POIs, we then treat the cost function in a weighted way, that is, we give each POI pair a different weight according to the POIs' visit frequencies and the geographical distance between them. Data analysis and experimental results on two real-world datasets demonstrate the existence of user preferences on different POI pairs and the effectiveness of our weighted ranking method.
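The weighted pairwise cost described above can be sketched as follows. The weight function here (frequency difference times a distance term) is one illustrative choice consistent with the intuition in the abstract, not the paper's exact formula, and all scores and coordinates are synthetic.

```python
import math

def wbpr_loss(scores, pairs, freq, coords, lam=1.0):
    """Weighted BPR-style loss over POI pairs (i visited more than j).

    scores : model score per POI
    pairs  : list of (i, j) with POI i preferred over POI j
    freq   : visit count per POI
    coords : (x, y) location per POI
    Each pair's weight grows with the visit-frequency difference and
    the geographic distance between the two POIs.
    """
    total = 0.0
    for i, j in pairs:
        dx = coords[i][0] - coords[j][0]
        dy = coords[i][1] - coords[j][1]
        dist = math.hypot(dx, dy)
        w = (freq[i] - freq[j]) * (1.0 + lam * dist)
        # BPR term: -log sigmoid(s_i - s_j), scaled by the pair weight
        total += -w * math.log(1.0 / (1.0 + math.exp(-(scores[i] - scores[j]))))
    return total

scores = [2.0, 0.5, -1.0]
freq = [10, 4, 1]
coords = [(0.0, 0.0), (1.0, 0.0), (5.0, 0.0)]
pairs = [(0, 1), (0, 2), (1, 2)]
loss = wbpr_loss(scores, pairs, freq, coords)
# widening the score gap for the preferred POI reduces the loss
loss_better = wbpr_loss([3.0, 0.5, -1.0], pairs, freq, coords)
```

In a full recommender, this loss would be minimized by stochastic gradient descent over latent user/POI factors; the sketch only evaluates it for fixed scores.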
Grant, K.; Rohling, E. J.; Amies, J.
2017-12-01
Sea-level (SL) reconstructions over glacial-interglacial timeframes are critical for understanding the equilibrium response of ice sheets to sustained warming. In particular, continuous and high-resolution SL records are essential for accurately quantifying 'natural' rates of SL rise. Global SL changes are well-constrained since the last glacial maximum (~20,000 years ago; 20 ky) by radiometrically-dated corals and paleoshoreline data, and fairly well-constrained over the last glacial cycle (~150 ky). Prior to that, however, studies of ice-volume:SL relationships tend to rely on benthic δ18O, as geomorphological evidence is far more sparse and less reliably dated. An alternative SL reconstruction method (the 'marginal basin' approach) was developed for the Red Sea over the last ~500 ky, and recently attempted for the Mediterranean over the last ~5 My (Rohling et al., 2014, Nature). This method exploits the strong sensitivity of seawater δ18O in these basins to SL changes in the relatively narrow and shallow straits which connect the basins with the open ocean. However, the initial Mediterranean SL method did not resolve sea-level highstands during Northern Hemisphere insolation maxima, when African monsoon run-off, strongly depleted in δ18O, reached the Mediterranean. Here, we present improvements to the 'marginal basin' sea-level reconstruction method. These include a new 'Med-Red SL stack', which combines new probabilistic Mediterranean and Red Sea sea-level stacks spanning the last 500 ky. We also show how a box model-data comparison of water-column δ18O changes over a monsoon interval allows us to quantify the monsoon versus SL δ18O imprint on Mediterranean foraminiferal carbonate δ18O records. This paves the way for a more accurate and fully continuous SL reconstruction extending back through the Pliocene.
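The core of the 'marginal basin' sensitivity can be caricatured with a single-box balance: the basin's seawater δ18O reflects a trade-off between exchange with the open ocean through the strait and evaporative enrichment, and the exchange flux shrinks as sea level falls toward the sill. This is only a conceptual sketch; every parameter value below is hypothetical and not taken from the Red Sea or Mediterranean models.

```python
def steady_delta(sl, delta_ocean=1.0, evap=1.0, eps=5.0, q0=50.0, sill=-120.0):
    """Steady-state basin seawater d18O for a given sea level `sl` (m).

    Single-box balance: Q(sl) * (delta_ocean - delta) + evap * eps = 0,
    with the exchange flux Q shrinking linearly as sea level falls
    toward the sill depth. All parameter values are hypothetical.
    """
    q = q0 * (sl - sill) / (0.0 - sill)   # water column above sill, normalized
    return delta_ocean + evap * eps / q

def integrate_delta(sl, years=5000.0, dt=1.0, volume=100.0):
    """Euler integration of the same box model toward steady state."""
    delta_ocean, evap, eps, q0, sill = 1.0, 1.0, 5.0, 50.0, -120.0
    q = q0 * (sl - sill) / (0.0 - sill)
    d = delta_ocean
    t = 0.0
    while t < years:
        d += dt * (q * (delta_ocean - d) + evap * eps) / volume
        t += dt
    return d

hi = steady_delta(0.0)      # interglacial sea level: open exchange
lo = steady_delta(-100.0)   # glacial lowstand: restricted exchange
```

The lowstand value exceeds the highstand value: restricted exchange lets evaporative enrichment dominate, which is the signal the method inverts for sea level (before the monsoon run-off correction discussed in the abstract).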
Directory of Open Access Journals (Sweden)
Guo-Qiang Zeng
2014-01-01
Full Text Available As a novel evolutionary optimization method, extremal optimization (EO) has been successfully applied to a variety of combinatorial optimization problems. However, applications of EO to continuous optimization problems are relatively rare. This paper proposes an improved real-coded population-based EO method (IRPEO) for continuous unconstrained optimization problems. The key operations of IRPEO include generation of a real-coded random initial population, evaluation of individual and population fitness, selection of bad elements according to a power-law probability distribution, generation of a new population based on uniform random mutation, and updating the population by accepting the new population unconditionally. Experimental results on 10 benchmark test functions with dimension N=30 show that IRPEO is competitive with or even better than recently reported genetic algorithm (GA) variants with different mutation operations in terms of simplicity, effectiveness, and efficiency. Furthermore, the superiority of IRPEO over other evolutionary algorithms such as the original population-based EO, particle swarm optimization (PSO), and the hybrid PSO-EO is also demonstrated by experimental results on some benchmark functions.
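The key operations listed above can be sketched as follows. This is a minimal interpretation, assuming the sphere function as the benchmark and simple parameter choices (population size, iteration count, and power-law exponent tau are illustrative, not the paper's settings).

```python
import random

def irpeo_sphere(dim=10, pop_size=20, iters=3000, tau=1.5, seed=7):
    """Sketch of real-coded population-based extremal optimization on the
    sphere function f(x) = sum(x_i^2) over [-5, 5]^dim.

    Per EO: rank each individual's variables by their contribution to
    the cost ('component fitness'), pick one via a power-law over ranks
    (P(k) ~ k^-tau), mutate it uniformly, accept the new individual
    unconditionally, and remember the best solution seen so far.
    """
    rng = random.Random(seed)
    pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    best = min(sum(v * v for v in x) for x in pop)
    weights = [k ** -tau for k in range(1, dim + 1)]  # power-law over ranks
    total_w = sum(weights)
    for _ in range(iters):
        for x in pop:
            # rank components: worst (largest x_i^2) first
            order = sorted(range(dim), key=lambda i: -x[i] * x[i])
            r = rng.random() * total_w
            acc, k = 0.0, 0
            for k, w in enumerate(weights):
                acc += w
                if acc >= r:
                    break
            # uniform mutation of the selected component, accepted unconditionally
            x[order[k]] = rng.uniform(-5, 5)
            best = min(best, sum(v * v for v in x))
    return best

best_cost = irpeo_sphere()
```

Because acceptance is unconditional, the population never converges in the GA sense; progress is recorded through the best-so-far solution, which the power-law bias toward mutating the worst components steadily improves.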
Gézero, L.; Antunes, C.
2017-05-01
Digital terrain models (DTM) assume an essential role in all types of road maintenance, water supply, and sanitation projects. The demand for such information is greater in developing countries, where the lack of infrastructure is higher. In recent years, the use of Mobile LiDAR Systems (MLS) has proved to be a very efficient technique for the acquisition of precise and dense point clouds. These point clouds can be a solution for obtaining the data needed to produce DTM in remote areas, due mainly to the safety, precision, speed of acquisition, and detail of the information gathered. However, filtering the point clouds and designing algorithms that separate "terrain points" from "non-terrain points" quickly and consistently remain a challenge that has caught the interest of researchers. This work presents a method to create the DTM from point clouds collected by MLS. The method is based on two iterative steps. The first step reduces the point cloud to a set of points that represent the terrain's shape, with the distance between points inversely proportional to the terrain variation. The second step applies a Delaunay triangulation to the points resulting from the first step. The achieved results encourage a wider use of this technology as a solution for large-scale DTM production in remote areas.
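A first-pass version of the terrain/non-terrain separation can be sketched with a simple grid filter: keep, per horizontal cell, only the points near that cell's lowest elevation. This is a common baseline assumption, not the authors' exact algorithm, and the cell size and height tolerance below are illustrative.

```python
from collections import defaultdict

def ground_filter(points, cell=1.0, dz=0.3):
    """Grid-based ground extraction from an MLS point cloud (sketch).

    Rasterize the cloud into a horizontal grid and keep, per cell, the
    points within `dz` of that cell's lowest elevation; off-ground
    returns (vehicles, poles, vegetation) are rejected as non-terrain.
    """
    cells = defaultdict(list)
    for x, y, z in points:
        cells[(int(x // cell), int(y // cell))].append((x, y, z))
    ground = []
    for pts in cells.values():
        zmin = min(p[2] for p in pts)
        ground.extend(p for p in pts if p[2] - zmin <= dz)
    return ground

# synthetic strip: a flat road at z = 0 with one 2 m-high 'pole' return
cloud = [(0.1 * i, 0.5, 0.0) for i in range(100)] + [(5.05, 0.5, 2.0)]
terrain = ground_filter(cloud)
```

The surviving points would then be thinned adaptively and passed to a Delaunay triangulation (e.g., `scipy.spatial.Delaunay`) to build the DTM surface, as in the second step of the method.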
Hund, E; Massart, D L; Smeyers-Verbeke, J
1999-10-01
The H-point standard additions method (HPSAM) and two versions of the generalized H-point standard additions method (GHPSAM) are evaluated for the UV analysis of two-component mixtures. Synthetic mixtures of anhydrous caffeine and phenazone, as well as of atovaquone and proguanil hydrochloride, were used. Furthermore, the method was applied to pharmaceutical formulations that contain these compounds as active drug substances. This paper shows both the difficulties related to the methods and the conditions under which acceptable results can be obtained.
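The HPSAM idea can be sketched numerically: standard additions of the analyte are measured at two wavelengths chosen so the interferent contributes the same absorbance at both; the two straight calibration lines then intersect at the H-point (-C_H, A_H), where C_H equals the analyte concentration and A_H the interferent's signal. The sensitivities and concentrations below are synthetic values for illustration.

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit y = m*x + b; returns (m, b)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    m = sxy / sxx
    return m, my - m * mx

def h_point(c_add, a_lam1, a_lam2):
    """Intersect the two standard-additions lines; returns
    (analyte concentration, interferent absorbance at the H-point)."""
    m1, b1 = fit_line(c_add, a_lam1)
    m2, b2 = fit_line(c_add, a_lam2)
    c_h = (b2 - b1) / (m1 - m2)       # x-coordinate of the intersection
    return -c_h, b1 + m1 * c_h

# synthetic mixture: analyte at 2.0 (arbitrary conc. units); interferent adds
# a constant 0.30 absorbance at both wavelengths (the HPSAM working condition);
# analyte sensitivities k1 = 0.10 and k2 = 0.25 at the two wavelengths
c_add = [0.0, 1.0, 2.0, 3.0, 4.0]
a1 = [0.10 * (2.0 + c) + 0.30 for c in c_add]
a2 = [0.25 * (2.0 + c) + 0.30 for c in c_add]
c0, a_h = h_point(c_add, a1, a2)
```

On noise-free data the intersection recovers the analyte concentration (2.0) and the interferent absorbance (0.30) exactly; with real spectra, the difficulties the paper discusses arise from noise and from how well the equal-interferent-absorbance condition holds.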
Standard Test Methods for Properties of Continuous Filament Carbon and Graphite Fiber Tows
American Society for Testing and Materials. Philadelphia
1999-01-01
1.1 These test methods cover the preparation and tensile testing of resin-impregnated and consolidated test specimens made from continuous filament carbon and graphite yarns, rovings, and tows to determine their tensile properties. 1.2 These test methods also cover the determination of the density and mass per unit length of the yarn, roving, or tow to provide supplementary data for tensile property calculation. 1.3 These test methods include a procedure for sizing removal to provide the preferred desized fiber samples for density measurement. This procedure may also be used to determine the weight percent sizing. 1.4 These test methods include a procedure for determining the weight percent moisture adsorption of carbon or graphite fiber. 1.5 The values stated in SI units are to be regarded as the standard. The values in parentheses are for information only. 1.6 This standard does not purport to address all of the safety concerns, if any, associated with its use. It is the responsibility of the user of t...
Continuous energy Monte Carlo method based homogenization multi-group constants calculation
International Nuclear Information System (INIS)
Li Mancang; Wang Kan; Yao Dong
2012-01-01
The efficiency of the standard two-step reactor physics calculation relies on the accuracy of the multi-group constants generated by the assembly-level homogenization process. In contrast to traditional deterministic methods, generating the homogenized cross sections via the Monte Carlo method overcomes the difficulties in geometry and treats energy as continuous, thus providing more accurate parameters. Besides, the same code and data bank can be used for a wide range of applications, making Monte Carlo codes versatile for homogenization. As the first stage in realizing Monte Carlo based lattice homogenization, the track-length scheme is used as the foundation of cross section generation, which is straightforward. The scattering matrix and Legendre components, however, require special techniques; the scattering-event method was proposed to solve this problem. There are no continuous-energy counterparts in the Monte Carlo calculation for neutron diffusion coefficients, so P1 cross sections were used to calculate the diffusion coefficients for diffusion reactor simulator codes. B_N theory is applied to take the leakage effect into account when an infinite lattice of identical symmetric motives is assumed. The MCMC code was developed and applied to four assembly configurations to assess its accuracy and applicability. At the core level, a PWR prototype core was examined. The results show that the Monte Carlo based multi-group constants behave well on average. The method could be applied to nuclear reactor cores with complicated configurations to gain higher accuracy. (authors)
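The track-length scheme at the heart of the cross section generation can be sketched in a toy setting: particle track lengths tally the scalar flux per fine group, and group constants are collapsed as flux-weighted averages. This caricature (two fine groups, an infinite pure absorber, made-up cross sections) omits everything the MCMC code actually handles, such as transport in geometry, scattering matrices, and diffusion coefficients.

```python
import random

def condensed_sigma(n_particles=200000, seed=1):
    """Track-length based one-group condensation sketch.

    Two fine energy groups in an infinite pure absorber: a particle is
    born in group 0 with probability 0.7, travels an exponential path
    of mean 1/sigma_t[g], and is absorbed. The track lengths tally the
    scalar flux per group, and the condensed cross section is the
    flux-weighted average sigma = sum(sigma_g * T_g) / sum(T_g).
    """
    rng = random.Random(seed)
    sigma_t = [2.0, 0.5]          # fine-group total cross sections (1/cm)
    track = [0.0, 0.0]            # accumulated track length per group
    for _ in range(n_particles):
        g = 0 if rng.random() < 0.7 else 1
        track[g] += rng.expovariate(sigma_t[g])
    num = sum(s * t for s, t in zip(sigma_t, track))
    den = sum(track)
    return num / den

sigma_1g = condensed_sigma()
```

For these numbers the analytic answer is 1/0.95 ≈ 1.053: the per-group fluxes are proportional to 0.7/2.0 and 0.3/0.5, so the condensed value is pulled well below the simple source-weighted average of the two cross sections, which is exactly why flux weighting matters.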
International Nuclear Information System (INIS)
Imani, A.; Modarress, H.; Eliassi, A.; Abdous, M.
2009-01-01
The phase separation of (water + salt + polyethylene glycol 15000) systems was studied by cloud-point measurements using the particle counting method. The effects of the concentration of three sulphate salts (Na2SO4, K2SO4, (NH4)2SO4), the polyethylene glycol 15000 concentration, and the mass ratio of polymer to salt on the cloud-point temperature of these systems have been investigated. The results indicate that the cloud-point temperatures decrease linearly with increasing polyethylene glycol concentration for the different salts. The cloud points also decrease with an increasing mass ratio of salt to polymer.