WorldWideScience

Sample records for previous numerical estimates

  1. Developmental and Individual Differences in Pure Numerical Estimation

    Science.gov (United States)

    Booth, Julie L.; Siegler, Robert S.

    2006-01-01

    The authors examined developmental and individual differences in pure numerical estimation, the type of estimation that depends solely on knowledge of numbers. Children between kindergarten and 4th grade were asked to solve 4 types of numerical estimation problems: computational, numerosity, measurement, and number line. In Experiment 1,…

  2. Numerical Estimation in Preschoolers

    Science.gov (United States)

    Berteletti, Ilaria; Lucangeli, Daniela; Piazza, Manuela; Dehaene, Stanislas; Zorzi, Marco

    2010-01-01

    Children's sense of numbers before formal education is thought to rely on an approximate number system based on logarithmically compressed analog magnitudes that increases in resolution throughout childhood. School-age children performing a numerical estimation task have been shown to increasingly rely on a formally appropriate, linear…

  3. Numerical simulation of the shot peening process under previous loading conditions

    International Nuclear Information System (INIS)

    Romero-Ángeles, B; Urriolagoitia-Sosa, G; Torres-San Miguel, C R; Molina-Ballinas, A; Benítez-García, H A; Vargas-Bustos, J A; Urriolagoitia-Calderón, G

    2015-01-01

    This research presents a numerical simulation of the shot peening process and determines the residual stress field induced into a component with a previous loading history. The importance of this analysis lies in the fact that mechanical elements subjected to shot peening have also undergone manufacturing processes, which convert raw material into a finished product. However, the material is not provided in a virgin state; it has a previous loading history caused by the manner in which it is fabricated. This condition could alter some beneficial aspects of the residual stress induced by shot peening and could accelerate crack nucleation and propagation. Studies were performed on beams subjected to strain hardening in tension (5ε_y) before shot peening was applied. These results were then compared with a numerical assessment of the residual stress field induced by shot peening in a component (beam) without any previous loading history. This paper clearly shows the detrimental or beneficial effect that a previous loading history can have on a mechanical component and how it can be controlled to improve the mechanical behavior of the material.

  4. Development of Numerical Estimation in Young Children

    Science.gov (United States)

    Siegler, Robert S.; Booth, Julie L.

    2004-01-01

    Two experiments examined kindergartners', first graders', and second graders' numerical estimation, the internal representations that gave rise to the estimates, and the general hypothesis that developmental sequences within a domain tend to repeat themselves in new contexts. Development of estimation in this age range on 0-to-100 number lines…

  5. Developmental and individual differences in pure numerical estimation.

    Science.gov (United States)

    Booth, Julie L; Siegler, Robert S

    2006-01-01

    The authors examined developmental and individual differences in pure numerical estimation, the type of estimation that depends solely on knowledge of numbers. Children between kindergarten and 4th grade were asked to solve 4 types of numerical estimation problems: computational, numerosity, measurement, and number line. In Experiment 1, kindergartners and 1st, 2nd, and 3rd graders were presented problems involving the numbers 0-100; in Experiment 2, 2nd and 4th graders were presented problems involving the numbers 0-1,000. Parallel developmental trends, involving increasing reliance on linear representations of numbers and decreasing reliance on logarithmic ones, emerged across different types of estimation. Consistent individual differences across tasks were also apparent, and all types of estimation skill were positively related to math achievement test scores. Implications for understanding of mathematics learning in general are discussed. Copyright 2006 APA, all rights reserved.
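
    The linear-versus-logarithmic comparison described in this abstract is typically operationalized by fitting both functional forms to each child's number-line estimates and comparing the variance explained. The sketch below shows one minimal way to do that; the data values are invented for illustration and are not from the study.

```python
# Hypothetical illustration (not the authors' code): comparing how well linear
# vs. logarithmic functions describe number-line estimates on a 0-100 line.
import numpy as np

presented = np.array([3, 7, 18, 25, 42, 67, 71, 86, 94])   # numbers shown (made up)
estimates = np.array([9, 15, 30, 36, 50, 66, 70, 83, 92])  # marked positions (made up)

def r_squared(y, y_fit):
    ss_res = np.sum((y - y_fit) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return 1.0 - ss_res / ss_tot

# Linear model: estimate = a * number + b
lin_coef = np.polyfit(presented, estimates, 1)
r2_linear = r_squared(estimates, np.polyval(lin_coef, presented))

# Logarithmic model: estimate = a * ln(number) + b
log_coef = np.polyfit(np.log(presented), estimates, 1)
r2_log = r_squared(estimates, np.polyval(log_coef, np.log(presented)))

print(f"linear R^2 = {r2_linear:.3f}, logarithmic R^2 = {r2_log:.3f}")
```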

  6. Numerical estimation in individuals with Down syndrome.

    Science.gov (United States)

    Lanfranchi, Silvia; Berteletti, Ilaria; Torrisi, Erika; Vianello, Renzo; Zorzi, Marco

    2014-10-31

    We investigated numerical estimation in children with Down syndrome (DS) in order to assess whether their pattern of performance is tied to experience (age), overall cognitive level, or specifically impaired. Siegler and Opfer's (2003) number to position task, which requires translating a number into a spatial position on a number line, was administered to a group of 21 children with DS and to two control groups of typically developing children (TD), matched for mental and chronological age. Results suggest that numerical estimation and the developmental transition between logarithmic and linear patterns of estimates in children with DS are more similar to those of children with the same mental age than to children with the same chronological age. Moreover, linearity was related to cognitive level in DS, while in TD children it was related to experience level. Copyright © 2014. Published by Elsevier Ltd.

  7. Representational Change and Children's Numerical Estimation

    Science.gov (United States)

    Opfer, John E.; Siegler, Robert S.

    2007-01-01

    We applied overlapping waves theory and microgenetic methods to examine how children improve their estimation proficiency, and in particular how they shift from reliance on immature to mature representations of numerical magnitude. We also tested the theoretical prediction that feedback on problems on which the discrepancy between two…

  8. GM-PHD Filter Combined with Track-Estimate Association and Numerical Interpolation

    Directory of Open Access Journals (Sweden)

    Jinguang Chen

    2015-01-01

    Full Text Available For the standard Gaussian mixture probability hypothesis density (GM-PHD) filter, the number of targets can be overestimated if the clutter rate is too high or underestimated if the detection rate is too low. These problems seriously affect the accuracy of multitarget tracking because the number and values of measurements and clutter cannot be distinguished and recognized. Therefore, we propose an improved GM-PHD filter to tackle these problems. Firstly, a track-estimate association is implemented in the filtering process to detect and remove false-alarm targets. Secondly, a numerical interpolation technique is used to compensate for missed targets caused by a low detection rate. At the end of this paper, simulation results are presented to demonstrate that the proposed GM-PHD algorithm is more effective in estimating the number and state of targets than previous ones.

  9. Estimating surface fluxes using eddy covariance and numerical ogive optimization

    DEFF Research Database (Denmark)

    Sievers, J.; Papakyriakou, T.; Larsen, Søren Ejling

    2015-01-01

    Estimating representative surface fluxes using eddy covariance leads invariably to questions concerning inclusion or exclusion of low-frequency flux contributions. For studies where fluxes are linked to local physical parameters and up-scaled through numerical modelling efforts, low-frequency contributions…

  10. Advantages in using Kalman phasor estimation in numerical differential protective relaying compared to the Fourier estimation method

    DEFF Research Database (Denmark)

    Bukh, Bjarne; Gudmundsdottir, Unnur Stella; Balle Holst, Per

    2007-01-01

    This paper demonstrates the results obtained from detailed studies of Kalman phasor estimation used in numerical differential protective relaying of a power transformer. The accuracy and expeditiousness of the current estimate in the numerical differential protection is critical for correct and not unnecessary activation of the breakers by the protecting relay, and the objective of the study was to utilize the capability of Kalman phasor estimation in a signal model representing the expected current signal from the current transformers of the power transformer. The used signal model included…
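
    The abstract does not give the paper's signal model, so the following is only a generic sketch of the idea behind Kalman phasor estimation: a linear Kalman filter tracking the in-phase and quadrature components of a nominally 50 Hz current from noisy samples. All values (frequency, sampling rate, noise levels) are assumptions made for illustration.

```python
# Generic sketch of Kalman phasor estimation (assumed constant-phasor model,
# synthetic data); not the signal model used in the paper.
import numpy as np

np.random.seed(0)
f, fs = 50.0, 1000.0                       # assumed nominal frequency and sampling rate
n = np.arange(200)
true_phasor = 10.0 * np.exp(1j * 0.3)      # magnitude 10 A, phase 0.3 rad (made up)
samples = np.real(true_phasor * np.exp(2j * np.pi * f * n / fs)) \
          + np.random.normal(0, 0.5, n.size)

x = np.zeros(2)                            # state: [I, Q] components of the phasor
P = np.eye(2) * 100.0                      # state covariance
Q = np.eye(2) * 1e-4                       # process noise (phasor nearly constant)
R = 0.25                                   # measurement noise variance

for k, z in zip(n, samples):
    P = P + Q                              # prediction: state carries over
    w = 2 * np.pi * f * k / fs
    H = np.array([np.cos(w), -np.sin(w)])  # measurement model: z = I*cos(wk) - Q*sin(wk)
    S = H @ P @ H + R
    K = P @ H / S
    x = x + K * (z - H @ x)
    P = P - np.outer(K, H @ P)

print("estimated magnitude:", np.hypot(*x), "estimated phase:", np.arctan2(x[1], x[0]))
```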

  11. Estimation of state and material properties during heat-curing molding of composite materials using data assimilation: A numerical study

    Directory of Open Access Journals (Sweden)

    Ryosuke Matsuzaki

    2018-03-01

    Full Text Available Accurate simulations of carbon fiber-reinforced plastic (CFRP) molding are vital for the development of high-quality products. However, such simulations are challenging and previous attempts to improve the accuracy of simulations by incorporating the data acquired from mold monitoring have not been completely successful. Therefore, in the present study, we developed a method to accurately predict various CFRP thermoset molding characteristics based on data assimilation, a process that combines theoretical and experimental values. The degree of cure as well as temperature and thermal conductivity distributions during the molding process were estimated using both temperature data and numerical simulations. An initial numerical experiment demonstrated that the internal mold state could be determined solely from the surface temperature values. A subsequent numerical experiment to validate this method showed that estimations based on surface temperatures were highly accurate in the case of degree of cure and internal temperature, although predictions of thermal conductivity were more difficult. Keywords: Engineering, Materials science, Applied mathematics

  12. Estimation of Dynamic Friction Process of the Akatani Landslide Based on the Waveform Inversion and Numerical Simulation

    Science.gov (United States)

    Yamada, M.; Mangeney, A.; Moretti, L.; Matsushi, Y.

    2014-12-01

    Understanding physical parameters, such as frictional coefficients, velocity change, and dynamic history, is an important issue for assessing and managing the risks posed by deep-seated catastrophic landslides. Previously, landslide motion has been inferred qualitatively from topographic changes caused by the event, and occasionally from eyewitness reports. However, these conventional approaches are unable to evaluate source processes and dynamic parameters. In this study, we use broadband seismic recordings to trace the dynamic process of the deep-seated Akatani landslide that occurred on the Kii Peninsula, Japan, which is one of the best recorded large slope failures. Based on the previous results of waveform inversions and precise topographic surveys done before and after the event, we applied numerical simulations using the SHALTOP numerical model (Mangeney et al., 2007). This model describes homogeneous continuous granular flows on a 3D topography based on a depth-averaged thin layer approximation. We assume a Coulomb friction law with a constant friction coefficient, i.e. the friction is independent of the sliding velocity. We varied the friction coefficients in the simulation so that the resulting force acting on the surface agrees with the single force estimated from the seismic waveform inversion. The figure shows the force history of the east-west components after band-pass filtering between 10-100 seconds. The force history of the simulation with frictional coefficient 0.27 (thin red line) agrees best with the result of the seismic waveform inversion (thick gray line). Although the amplitude is slightly different, the phases are coherent for the three main pulses. This is evidence that the point-source approximation works reasonably well for this particular event. The friction coefficient during sliding was estimated to be 0.38 based on the seismic waveform inversion performed by the previous study and on the sliding block model (Yamada et al., 2013…

  13. Estimating local atmosphere-surface fluxes using eddy covariance and numerical Ogive optimization

    DEFF Research Database (Denmark)

    Sievers, Jakob; Papakyriakou, Tim; Larsen, Søren

    2014-01-01

    Estimating representative surface-fluxes using eddy covariance leads invariably to questions concerning inclusion or exclusion of low-frequency flux contributions. For studies where fluxes are linked to local physical parameters and up-scaled through numerical modeling efforts, low-frequency contributions…

  14. CONTROL BASED ON NUMERICAL METHODS AND RECURSIVE BAYESIAN ESTIMATION IN A CONTINUOUS ALCOHOLIC FERMENTATION PROCESS

    Directory of Open Access Journals (Sweden)

    Olga L. Quintero

    Full Text Available Biotechnological processes represent a challenge in the control field due to their high nonlinearity. In particular, continuous alcoholic fermentation from Zymomonas mobilis (Z.m) presents a significant challenge. This bioprocess has high ethanol performance, but it exhibits oscillatory behavior in process variables due to the influence of inhibition dynamics (rate of ethanol concentration over biomass, substrate, and product concentrations). In this work a new solution for control of biotechnological variables in the fermentation process is proposed, based on numerical methods and linear algebra. In addition, an improvement to a previously reported state estimator, based on particle filtering techniques, is used in the control loop. The feasibility of the estimator and its performance are demonstrated in the proposed control loop. This methodology makes it possible to develop a controller design through the use of dynamic analysis with a tested biomass estimator in Z.m and without the use of complex calculations.

  15. A different approach to estimate nonlinear regression model using numerical methods

    Science.gov (United States)

    Mahaboob, B.; Venkateswarlu, B.; Mokeshrayalu, G.; Balasiddamuni, P.

    2017-11-01

    This research paper concerns the computational methods, namely the Gauss-Newton method and the gradient algorithm methods (Newton-Raphson method, Steepest Descent or Steepest Ascent algorithm method, the Method of Scoring, the Method of Quadratic Hill-Climbing), based on numerical analysis to estimate the parameters of a nonlinear regression model in a very different way. Principles of matrix calculus have been used to discuss the gradient algorithm methods. Yonathan Bard [1] discussed a comparison of gradient methods for the solution of nonlinear parameter estimation problems; this article, however, discusses an analytical approach to the gradient algorithm methods in a different way. This paper describes a new iterative technique, namely a Gauss-Newton method, which differs from the iterative technique proposed by Gorden K. Smyth [2]. Hans Georg Bock et al. [10] proposed numerical methods for parameter estimation in DAEs (differential algebraic equations). Isabel Reis Dos Santos et al. [11] introduced a weighted least squares procedure for estimating the unknown parameters of a nonlinear regression metamodel. For large-scale nonsmooth convex minimization, the Hager and Zhang (HZ) conjugate gradient method and the modified HZ (MHZ) method were presented by Gonglin Yuan et al. [12].
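
    As a concrete illustration of the Gauss-Newton idea discussed in this record, the sketch below fits a simple exponential regression model by iteratively linearizing it around the current parameter estimate. The model, data, and starting values are invented and are not taken from the paper.

```python
# Generic Gauss-Newton iteration for a toy nonlinear regression model
# y = a * exp(b * x) + noise (model and data are made up for illustration).
import numpy as np

np.random.seed(0)
x = np.linspace(0, 2, 25)
y = 2.0 * np.exp(1.3 * x) + np.random.normal(0, 0.1, x.size)

def model(theta, x):
    a, b = theta
    return a * np.exp(b * x)

def jacobian(theta, x):
    a, b = theta
    return np.column_stack([np.exp(b * x), a * x * np.exp(b * x)])

theta = np.array([1.8, 1.2])                      # initial guess, assumed reasonably close
for _ in range(20):
    r = y - model(theta, x)                       # residuals at current estimate
    J = jacobian(theta, x)                        # Jacobian of the model
    step, *_ = np.linalg.lstsq(J, r, rcond=None)  # Gauss-Newton step: min ||J*step - r||
    theta = theta + step
    if np.linalg.norm(step) < 1e-10:
        break

print("estimated parameters:", theta)
```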

  16. Numerical method for estimating the size of chaotic regions of phase space

    International Nuclear Information System (INIS)

    Henyey, F.S.; Pomphrey, N.

    1987-10-01

    A numerical method for estimating irregular volumes of phase space is derived. The estimate weights the irregular area on a surface of section with the average return time to the section. We illustrate the method by application to the stadium and oval billiard systems and also apply the method to the continuous Henon-Heiles system. 15 refs., 10 figs

  17. Estimation of Water Diffusion Coefficient into Polycarbonate at Different Temperatures Using Numerical Simulation

    DEFF Research Database (Denmark)

    Shojaee Nasirabadi, Parizad; Jabbaribehnam, Mirmasoud; Hattel, Jesper Henri

    2016-01-01

    Nowadays, many electronic systems are exposed to harsh conditions of relative humidity and temperature. Mass transport properties of electronic packaging materials are needed in order to investigate the influence of moisture and temperature on reliability of electronic devices. Polycarbonate (PC) is widely used in the electronics industry. Thus, in this work the water diffusion coefficient into PC is investigated. Furthermore, numerical methods used for estimation of the diffusion coefficient and their assumptions are discussed. 1D and 3D numerical solutions are compared and based on this, it is shown how the estimated value can be different depending on the choice of dimensionality in the model.

  18. Parameter estimation in IMEX-trigonometrically fitted methods for the numerical solution of reaction-diffusion problems

    Science.gov (United States)

    D'Ambrosio, Raffaele; Moccaldi, Martina; Paternoster, Beatrice

    2018-05-01

    In this paper, an adapted numerical scheme for reaction-diffusion problems generating periodic wavefronts is introduced. Adapted numerical methods for such evolutionary problems are specially tuned to follow prescribed qualitative behaviors of the solutions, making the numerical scheme more accurate and efficient as compared with traditional schemes already known in the literature. Adaptation through the so-called exponential fitting technique leads to methods whose coefficients depend on unknown parameters related to the dynamics and aimed to be numerically computed. Here we propose a strategy for a cheap and accurate estimation of such parameters, which consists essentially in minimizing the leading term of the local truncation error whose expression is provided in a rigorous accuracy analysis. In particular, the presented estimation technique has been applied to a numerical scheme based on combining an adapted finite difference discretization in space with an implicit-explicit time discretization. Numerical experiments confirming the effectiveness of the approach are also provided.

  19. Estimation of water diffusion coefficient into polycarbonate at different temperatures using numerical simulation

    Energy Technology Data Exchange (ETDEWEB)

    Nasirabadi, P. Shojaee; Jabbari, M.; Hattel, J. H. [Process Modelling Group, Department of Mechanical Engineering, Technical University of Denmark, Nils Koppels Allé, 2800 Kgs. Lyngby (Denmark)

    2016-06-08

    Nowadays, many electronic systems are exposed to harsh conditions of relative humidity and temperature. Mass transport properties of electronic packaging materials are needed in order to investigate the influence of moisture and temperature on reliability of electronic devices. Polycarbonate (PC) is widely used in the electronics industry. Thus, in this work the water diffusion coefficient into PC is investigated. Furthermore, numerical methods used for estimation of the diffusion coefficient and their assumptions are discussed. 1D and 3D numerical solutions are compared and based on this, it is shown how the estimated value can be different depending on the choice of dimensionality in the model.

  20. Asynchronous machine rotor speed estimation using a tabulated numerical approach

    Science.gov (United States)

    Nguyen, Huu Phuc; De Miras, Jérôme; Charara, Ali; Eltabach, Mario; Bonnet, Stéphane

    2017-12-01

    This paper proposes a new method to estimate the rotor speed of the asynchronous machine by looking at the estimation problem as a nonlinear optimal control problem. The behavior of the nonlinear plant model is approximated off-line as a prediction map using a numerical one-step time discretization obtained from simulations. At each time-step, the speed of the induction machine is selected satisfying the dynamic fitting problem between the plant output and the predicted output, leading the system to adopt its dynamical behavior. Thanks to the limitation of the prediction horizon to a single time-step, the execution time of the algorithm can be completely bounded. It can thus easily be implemented and embedded into a real-time system to observe the speed of the real induction motor. Simulation results show the performance and robustness of the proposed estimator.
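
    The selection step described above can be pictured as follows: evaluate an off-line prediction map at the current output for a grid of candidate speeds, and keep the speed whose one-step prediction best matches the next measurement. The sketch below uses an invented scalar plant purely to illustrate that search; the actual machine model, prediction map, and grid are not given in the abstract.

```python
# Toy sketch of the tabulated one-step prediction idea: the scalar "plant"
# below stands in for the simulated machine model and is invented.
import numpy as np

np.random.seed(0)
dt = 1e-3
speed_grid = np.linspace(0.0, 100.0, 501)           # candidate rotor speeds (assumed grid)

def plant_step(y, speed):
    # invented one-step dynamics standing in for the off-line prediction map
    return y + dt * (-2.0 * y + 0.5 * speed)

def estimate_speed(y_now, y_next_measured):
    predictions = plant_step(y_now, speed_grid)      # predicted next output per candidate
    return speed_grid[np.argmin(np.abs(predictions - y_next_measured))]

true_speed = 37.0
y = 1.0
y_next = plant_step(y, true_speed) + np.random.normal(0, 1e-5)
print("selected speed:", estimate_speed(y, y_next))
```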

  1. Quantity estimation based on numerical cues in the mealworm beetle (Tenebrio molitor)

    Directory of Open Access Journals (Sweden)

    Pau eCarazo

    2012-11-01

    Full Text Available In this study, we used a biologically relevant experimental procedure to ask whether mealworm beetles (Tenebrio molitor) are spontaneously capable of assessing quantities based on numerical cues. Like other insect species, mealworm beetles adjust their reproductive behaviour (i.e. investment in mate guarding) according to the perceived risk of sperm competition (i.e. probability that a female will mate with another male). To test whether males have the ability to estimate numerosity based on numerical cues, we staged matings between virgin females and virgin males in which we varied the number of rival males the experimental male had access to immediately preceding mating as a cue to sperm competition risk (from 1 to 4). Rival males were presented sequentially, and we controlled for continuous cues by ensuring that males in all treatments were exposed to the same amount of male-male contact. Males exhibited a marked increase in the time they devoted to mate guarding in response to an increase in the number of different rival males they were exposed to. Since males could not rely on continuous cues we conclude that they kept a running tally of the number of individuals they encountered serially, which meets the requirements of the basic ordinality and cardinality principles of proto-counting. Our results thus offer good evidence of ‘true’ numerosity estimation or quantity estimation and, along with recent studies in honey-bees, suggest that vertebrates and invertebrates share similar core systems of non-verbal numerical representation.

  2. Numerical estimation of the effective electrical conductivity in carbon paper diffusion media

    International Nuclear Information System (INIS)

    Zamel, Nada; Li, Xianguo; Shen, Jun

    2012-01-01

    Highlights: ► Anisotropic effective electrical conductivity of the GDL is estimated numerically. ► The electrical conductivity is a key component in understanding the structure of the GDL. ► Expressions for evaluating the electrical conductivity were proposed. ► The tortuosity factor was evaluated as 1.7 and 3.4 in the in- and through-plane directions, respectively. - Abstract: The transport of electrons through the gas diffusion layer (GDL) of polymer electrolyte membrane (PEM) fuel cells has a significant impact on the optimal design and operation of PEM fuel cells and is directly affected by the anisotropic nature of the carbon paper material. In this study, a three-dimensional reconstruction of the GDL is used to numerically estimate the directional dependent effective electrical conductivity of the layer for various porosity values. The distribution of the fibers in the through-plane direction results in high electrical resistivity; hence, decreasing the overall effective electrical conductivity in this direction. This finding is in agreement with measured experimental data. Further, using the numerical results of this study, two mathematical expressions were proposed for the calculation of the effective electrical conductivity of the carbon paper GDL. Finally, the tortuosity factor was evaluated as 1.7 and 3.4 in the in- and through-plane directions, respectively.

  3. Dynamic State Estimation for Multi-Machine Power System by Unscented Kalman Filter With Enhanced Numerical Stability

    Energy Technology Data Exchange (ETDEWEB)

    Qi, Junjian; Sun, Kai; Wang, Jianhui; Liu, Hui

    2018-03-01

    In this paper, in order to enhance the numerical stability of the unscented Kalman filter (UKF) used for power system dynamic state estimation, a new UKF with guaranteed positive semidefinite estimation error covariance (UKFGPS) is proposed and compared with five existing approaches, including UKFschol, UKF-kappa, UKFmodified, UKF-Delta Q, and the square-root UKF (SRUKF). These methods and the extended Kalman filter (EKF) are tested by performing dynamic state estimation on the WSCC 3-machine 9-bus system and the NPCC 48-machine 140-bus system. For the WSCC system, all methods obtain good estimates. However, for the NPCC system, both the EKF and the classic UKF fail. It is found that UKFschol, UKF-kappa, and UKF-Delta Q do not work well in some estimations, while UKFGPS works well in most cases. UKFmodified and SRUKF always work well, indicating their better scalability mainly due to the enhanced numerical stability.
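
    The numerical-stability problem targeted here arises when roundoff leaves the estimation error covariance indefinite, so that the factorization needed to generate sigma points fails. One common repair, shown below as a generic sketch rather than the paper's UKFGPS construction, is to project the covariance onto the nearest symmetric positive semidefinite matrix by clipping negative eigenvalues.

```python
# Generic covariance repair by eigenvalue clipping (illustrative only;
# not necessarily the UKFGPS algorithm proposed in the paper).
import numpy as np

def nearest_psd(P, eps=1e-12):
    P = 0.5 * (P + P.T)                  # symmetrize first
    w, V = np.linalg.eigh(P)             # eigen-decomposition of the symmetric matrix
    w_clipped = np.clip(w, eps, None)    # force eigenvalues to be non-negative
    return V @ np.diag(w_clipped) @ V.T

# Example: a covariance that became indefinite through roundoff (made up)
P_bad = np.array([[1.0, 0.9999],
                  [0.9999, 0.998]])
print("before:", np.linalg.eigvalsh(P_bad))          # one negative eigenvalue
print("after: ", np.linalg.eigvalsh(nearest_psd(P_bad)))  # all eigenvalues >= eps
```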

  4. Numerical discretization-based estimation methods for ordinary differential equation models via penalized spline smoothing with applications in biomedical research.

    Science.gov (United States)

    Wu, Hulin; Xue, Hongqi; Kumar, Arun

    2012-06-01

    Differential equations are extensively used for modeling dynamics of physical processes in many scientific fields such as engineering, physics, and biomedical sciences. Parameter estimation of differential equation models is a challenging problem because of high computational cost and high-dimensional parameter space. In this article, we propose a novel class of methods for estimating parameters in ordinary differential equation (ODE) models, which is motivated by HIV dynamics modeling. The new methods exploit the form of numerical discretization algorithms for an ODE solver to formulate estimating equations. First, a penalized-spline approach is employed to estimate the state variables and the estimated state variables are then plugged into a discretization formula of an ODE solver to obtain the ODE parameter estimates via a regression approach. We consider discretization methods of three different orders: Euler's method, the trapezoidal rule, and the Runge-Kutta method. A higher-order numerical algorithm reduces numerical error in the approximation of the derivative, which produces a more accurate estimate, but its computational cost is higher. To balance the computational cost and estimation accuracy, we demonstrate, via simulation studies, that the trapezoidal discretization-based estimate is the best and is recommended for practical use. The asymptotic properties for the proposed numerical discretization-based estimators are established. Comparisons between the proposed methods and existing methods show a clear benefit of the proposed methods in regards to the trade-off between computational cost and estimation accuracy. We apply the proposed methods to an HIV study to further illustrate the usefulness of the proposed approaches. © 2012, The International Biometric Society.
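
    The trapezoidal discretization-based estimator recommended above can be illustrated on a toy one-parameter ODE: applying the trapezoidal rule to the smoothed state trajectory turns parameter estimation into a linear regression. In the sketch below the "smoothed" states are simply noisy samples of the true solution (in the paper they would come from penalized-spline smoothing), and all numbers are invented.

```python
# Sketch of the trapezoidal discretization-based estimating equation for the
# toy ODE dx/dt = -theta * x. The trapezoidal rule gives
# x_{i+1} - x_i ≈ -(theta*dt/2) * (x_i + x_{i+1}), a regression for theta.
import numpy as np

np.random.seed(0)
theta_true, dt = 0.8, 0.1
t = np.arange(0, 5, dt)
x_hat = 2.0 * np.exp(-theta_true * t) + np.random.normal(0, 0.01, t.size)  # stand-in "smoothed" states

dx = x_hat[1:] - x_hat[:-1]
regressor = -(dt / 2.0) * (x_hat[:-1] + x_hat[1:])
theta_hat = np.sum(regressor * dx) / np.sum(regressor ** 2)   # least-squares slope

print("estimated theta:", theta_hat)
```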

  5. Is the difference between chemical and numerical estimates of baseflow meaningful?

    Science.gov (United States)

    Cartwright, Ian; Gilfedder, Ben; Hofmann, Harald

    2014-05-01

    Both chemical and numerical techniques are commonly used to calculate baseflow inputs to gaining rivers. In general the chemical methods yield lower estimates of baseflow than the numerical techniques. In part, this may be due to the techniques assuming two components (event water and baseflow) whereas there may also be multiple transient stores of water. Bank return waters, interflow, or waters stored on floodplains are delayed components that may be geochemically similar to the surface water from which they are derived; numerical techniques may record these components as baseflow whereas chemical mass balance studies are likely to aggregate them with the surface water component. This study compares baseflow estimates using chemical mass balance, local minimum methods, and recursive digital filters in the upper reaches of the Barwon River, southeast Australia. While more sophisticated techniques exist, these methods of estimating baseflow are readily applied with the available data and have been used widely elsewhere. During the early stages of high-discharge events, chemical mass balance overestimates groundwater inflows, probably due to flushing of saline water from wetlands and marshes, soils, or the unsaturated zone. Overall, however, estimates of baseflow from the local minimum and recursive digital filters are higher than those from chemical mass balance using Cl calculated from continuous electrical conductivity. Between 2001 and 2011, the baseflow contribution to the upper Barwon River calculated using chemical mass balance is between 12 and 25% of annual discharge. Recursive digital filters predict higher baseflow contributions of 19 to 52% of annual discharge. These estimates are similar to those from the local minimum method (16 to 45% of annual discharge). These differences most probably reflect how the different techniques characterise the transient water sources in this catchment. The local minimum and recursive digital filters aggregate much of the
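
    The abstract does not specify which recursive digital filter was applied, so the sketch below uses one widely cited single-parameter form (the Lyne-Hollick filter) purely to illustrate how such a filter separates baseflow from total streamflow; the discharge series and filter parameter are invented.

```python
# One-parameter recursive digital filter for baseflow separation (Lyne-Hollick
# form), shown as a generic illustration rather than the filter used in the study.
import numpy as np

def lyne_hollick_baseflow(Q, alpha=0.925):
    qf = np.zeros_like(Q)                        # filtered quickflow component
    for i in range(1, len(Q)):
        qf[i] = alpha * qf[i - 1] + 0.5 * (1 + alpha) * (Q[i] - Q[i - 1])
        qf[i] = min(max(qf[i], 0.0), Q[i])       # keep quickflow within [0, Q]
    return Q - qf                                # baseflow = total flow minus quickflow

Q = np.array([5.0, 5.2, 9.0, 20.0, 14.0, 9.5, 7.0, 6.0, 5.5, 5.3])  # made-up daily discharge, m^3/s
bf = lyne_hollick_baseflow(Q)
print("baseflow fraction of total flow:", bf.sum() / Q.sum())
```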

  6. Numerical algorithm for rigid body position estimation using the quaternion approach

    Science.gov (United States)

    Zigic, Miodrag; Grahovac, Nenad

    2017-11-01

    This paper deals with rigid body attitude estimation on the basis of the data obtained from an inertial measurement unit mounted on the body. The aim of this work is to present the numerical algorithm, which can be easily applied to the wide class of problems concerning rigid body positioning, arising in aerospace and marine engineering, or in increasingly popular robotic systems and unmanned aerial vehicles. Following the considerations of kinematics of rigid bodies, the relations between accelerations of different points of the body are given. A rotation matrix is formed using the quaternion approach to avoid singularities. We present numerical procedures for determination of the absolute accelerations of the center of mass and of an arbitrary point of the body expressed in the inertial reference frame, as well as its attitude. An application of the algorithm to the example of a heavy symmetrical gyroscope is presented, where input data for the numerical procedure are obtained from the solution of differential equations of motion, instead of using sensor measurements.
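
    The quaternion approach mentioned here avoids the singularities of Euler-angle parameterizations by building the rotation matrix directly from a unit quaternion. The sketch below shows the standard conversion formula as a generic illustration, not the authors' implementation.

```python
# Standard conversion from a unit quaternion to a rotation matrix
# (generic formula; not code from the paper).
import numpy as np

def quat_to_rotmat(q):
    """q = (w, x, y, z); normalized inside the function."""
    w, x, y, z = q / np.linalg.norm(q)
    return np.array([
        [1 - 2*(y*y + z*z),     2*(x*y - w*z),     2*(x*z + w*y)],
        [    2*(x*y + w*z), 1 - 2*(x*x + z*z),     2*(y*z - w*x)],
        [    2*(x*z - w*y),     2*(y*z + w*x), 1 - 2*(x*x + y*y)],
    ])

# Example: 90-degree rotation about the z-axis
q = np.array([np.cos(np.pi / 4), 0.0, 0.0, np.sin(np.pi / 4)])
print(quat_to_rotmat(q).round(6))
```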

  7. Estimating the numerical diapycnal mixing in an eddy-permitting ocean model

    Science.gov (United States)

    Megann, Alex

    2018-01-01

    Constant-depth (or "z-coordinate") ocean models such as MOM4 and NEMO have become the de facto workhorse in climate applications, having attained a mature stage in their development and are well understood. A generic shortcoming of this model type, however, is a tendency for the advection scheme to produce unphysical numerical diapycnal mixing, which in some cases may exceed the explicitly parameterised mixing based on observed physical processes, and this is likely to have effects on the long-timescale evolution of the simulated climate system. Despite this, few quantitative estimates have been made of the typical magnitude of the effective diapycnal diffusivity due to numerical mixing in these models. GO5.0 is a recent ocean model configuration developed jointly by the UK Met Office and the National Oceanography Centre. It forms the ocean component of the GC2 climate model, and is closely related to the ocean component of the UKESM1 Earth System Model, the UK's contribution to the CMIP6 model intercomparison. GO5.0 uses version 3.4 of the NEMO model, on the ORCA025 global tripolar grid. An approach to quantifying the numerical diapycnal mixing in this model, based on the isopycnal watermass analysis of Lee et al. (2002), is described, and the estimates thereby obtained of the effective diapycnal diffusivity in GO5.0 are compared with the values of the explicit diffusivity used by the model. It is shown that the effective mixing in this model configuration is up to an order of magnitude higher than the explicit mixing in much of the ocean interior, implying that mixing in the model below the mixed layer is largely dominated by numerical mixing. This is likely to have adverse consequences for the representation of heat uptake in climate models intended for decadal climate projections, and in particular is highly relevant to the interpretation of the CMIP6 class of climate models, many of which use constant-depth ocean models at ¼° resolution

  8. Low cycle fatigue numerical estimation of a high pressure turbine disc for the AL-31F jet engine

    Directory of Open Access Journals (Sweden)

    Spodniak Miroslav

    2017-01-01

    Full Text Available This article deals with the description of an approximate numerical estimation approach for the low cycle fatigue of a high pressure turbine disc of the AL-31F turbofan jet engine. The numerical estimation is based on the finite element method carried out in the SolidWorks software. The low cycle fatigue assessment of the high pressure turbine disc was carried out on the basis of dimensional, shape and material disc characteristics, which are available for the particular high pressure engine turbine. The method described here enables a relatively fast and economically feasible low cycle fatigue assessment of the high pressure turbine disc using commercially available software. The accuracy of the numerical low cycle fatigue estimation depends on the accuracy of the required input data for the particular investigated object.

  9. Estimating the mirror seeing for a large optical telescope with a numerical method

    Science.gov (United States)

    Zhang, En-Peng; Cui, Xiang-Qun; Li, Guo-Ping; Zhang, Yong; Shi, Jian-Rong; Zhao, Yong-Heng

    2018-05-01

    It is widely accepted that mirror seeing is caused by turbulent fluctuations in the index of air refraction in the vicinity of a telescope mirror. Computational Fluid Dynamics (CFD) is a useful tool to evaluate the effects of mirror seeing. In this paper, we present a numerical method to estimate the mirror seeing for a large optical telescope (∼ 4 m) in cases of natural convection with the ANSYS ICEPAK software. We get the FWHM of the image for different inclination angles (i) of the mirror and different temperature differences (ΔT) between the mirror and ambient air. Our results show that the mirror seeing depends very weakly on i, which agrees with observational data from the Canada-France-Hawaii Telescope. The numerical model can be used to estimate mirror seeing in the case of natural convection although with some limitations. We can determine ΔT for thermal control of the primary mirror according to the simulation, empirical data and site seeing.

  10. New Estimates of Numerical Values Related to a Simplex

    Directory of Open Access Journals (Sweden)

    Mikhail V. Nevskii

    2017-01-01

    …if \(\xi_n = n\). This statement is valid only in one direction. There exists a simplex \(S \subset Q_5\) such that the boundary of the simplex \(5S\) contains all the vertices of the cube \(Q_5\). We describe a one-parameter family of simplices contained in \(Q_5\) with the property \(\alpha(S) = \xi(S) = 5\). These simplices were found with the use of numerical and symbolic computations. Another new result is the inequality \(\xi_6 < 6.0166\) (the previous estimate was \(6 \leq \xi_6 \leq 6.6\)). We also systematize some of our estimates of the numbers \(\xi_n\), \(\theta_n\), \(\varkappa_n\) derived by now. The symbol \(\theta_n\) denotes the minimal norm of an interpolation projection on the space of linear functions of \(n\) variables as an operator from \(C(Q_n)\) to \(C(Q_n)\).

  11. Estimating and localizing the algebraic and total numerical errors using flux reconstructions

    Czech Academy of Sciences Publication Activity Database

    Papež, Jan; Strakoš, Z.; Vohralík, M.

    2018-01-01

    Roč. 138, č. 3 (2018), s. 681-721 ISSN 0029-599X R&D Projects: GA ČR GA13-06684S Grant - others:GA MŠk(CZ) LL1202 Institutional support: RVO:67985807 Keywords : numerical solution of partial differential equations * finite element method * a posteriori error estimation * algebraic error * discretization error * stopping criteria * spatial distribution of the error Subject RIV: BA - General Mathematics Impact factor: 2.152, year: 2016

  12. Numerical estimation of structure constants in the three-dimensional Ising conformal field theory through Markov chain uv sampler

    Science.gov (United States)

    Herdeiro, Victor

    2017-09-01

    Herdeiro and Doyon [Phys. Rev. E 94, 043322 (2016), 10.1103/PhysRevE.94.043322] introduced a numerical recipe, dubbed uv sampler, offering precise estimations of the conformal field theory (CFT) data of the planar two-dimensional (2D) critical Ising model. It made use of scale invariance emerging at the critical point in order to sample finite sublattice marginals of the infinite plane Gibbs measure of the model by producing holographic boundary distributions. The main ingredient of the Markov chain Monte Carlo sampler is the invariance under dilation. This paper presents a generalization to higher dimensions with the critical 3D Ising model. This leads to numerical estimations of a subset of the CFT data—scaling weights and structure constants—through fitting of measured correlation functions. The results are shown to agree with the recent most precise estimations from numerical bootstrap methods [Kos, Poland, Simmons-Duffin, and Vichi, J. High Energy Phys. 08 (2016) 036, 10.1007/JHEP08(2016)036].

  13. Effects of numerical dissipation and unphysical excursions on scalar-mixing estimates in large-eddy simulations

    Science.gov (United States)

    Sharan, Nek; Matheou, Georgios; Dimotakis, Paul

    2017-11-01

    Artificial numerical dissipation decreases dispersive oscillations and can play a key role in mitigating unphysical scalar excursions in large eddy simulations (LES). Its influence on scalar mixing can be assessed through the resolved-scale scalar, Z , its probability density function (PDF), variance, spectra, and the budget of the horizontally averaged equation for Z2. LES of incompressible temporally evolving shear flow enabled us to study the influence of numerical dissipation on unphysical scalar excursions and mixing estimates. Flows with different mixing behavior, with both marching and non-marching scalar PDFs, are studied. Scalar fields for each flow are compared for different grid resolutions and numerical scalar-convection term schemes. As expected, increasing numerical dissipation enhances scalar mixing in the development stage of shear flow characterized by organized large-scale pairings with a non-marching PDF, but has little influence in the self-similar stage of flows with marching PDFs. Flow parameters and regimes sensitive to numerical dissipation help identify approaches to mitigate unphysical excursions while minimizing dissipation.

  14. Estimating the Numerical Diapycnal Mixing in the GO5.0 Ocean Model

    Science.gov (United States)

    Megann, A.; Nurser, G.

    2014-12-01

    Constant-depth (or "z-coordinate") ocean models such as MOM4 and NEMO have become the de facto workhorse in climate applications, and have attained a mature stage in their development and are well understood. A generic shortcoming of this model type, however, is a tendency for the advection scheme to produce unphysical numerical diapycnal mixing, which in some cases may exceed the explicitly parameterised mixing based on observed physical processes, and this is likely to have effects on the long-timescale evolution of the simulated climate system. Despite this, few quantitative estimations have been made of the magnitude of the effective diapycnal diffusivity due to numerical mixing in these models. GO5.0 is the latest ocean model configuration developed jointly by the UK Met Office and the National Oceanography Centre (Megann et al, 2014), and forms part of the GC1 and GC2 climate models. It uses version 3.4 of the NEMO model, on the ORCA025 ¼° global tripolar grid. We describe various approaches to quantifying the numerical diapycnal mixing in this model, and present results from analysis of the GO5.0 model based on the isopycnal watermass analysis of Lee et al (2002) that indicate that numerical mixing does indeed form a significant component of the watermass transformation in the ocean interior.

  15. Parameter Estimation for Partial Differential Equations by Collage-Based Numerical Approximation

    Directory of Open Access Journals (Sweden)

    Xiaoyan Deng

    2009-01-01

    …into a minimization problem of a function of several variables after the partial differential equation is approximated by a differential dynamical system. Then numerical schemes for solving this minimization problem are proposed, including grid approximation and ant colony optimization. The proposed schemes are applied to a parameter estimation problem for the Belousov-Zhabotinskii equation, and the results show that the proposed approximation method is efficient for both linear and nonlinear partial differential equations with respect to unknown parameters. At worst, the presented method provides an excellent starting point for traditional inversion methods that must first select a good starting point.

  16. Evaluation and adjustment of altimeter measurement and numerical hindcast in wave height trend estimation in China's coastal seas

    Science.gov (United States)

    Li, Shuiqing; Guan, Shoude; Hou, Yijun; Liu, Yahao; Bi, Fan

    2018-05-01

    A long-term trend of significant wave height (SWH) in China's coastal seas was examined based on three datasets derived from satellite measurements and numerical hindcasts. One set of altimeter data were obtained from the GlobWave, while the other two datasets of numerical hindcasts were obtained from the third-generation wind wave model, WAVEWATCH III, forced by wind fields from the Cross-Calibrated Multi-Platform (CCMP) and NCEP's Climate Forecast System Reanalysis (CFSR). The mean and extreme wave trends were estimated for the period 1992-2010 with respect to the annual mean and the 99th-percentile values of SWH, respectively. The altimeter wave trend estimates feature considerable uncertainties owing to the sparse sampling rate. Furthermore, the extreme wave trend tends to be overestimated because of the increasing sampling rate over time. Numerical wave trends strongly depend on the quality of the wind fields, as the CCMP waves significantly overestimate the wave trend, whereas the CFSR waves tend to underestimate the trend. Corresponding adjustments were applied which effectively improved the trend estimates from the altimeter and numerical data. The adjusted results show generally increasing mean wave trends, while the extreme wave trends are more spatially-varied, from decreasing trends prevailing in the South China Sea to significant increasing trends mainly in the East China Sea.
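
    The mean wave trends compared in this record are, in essence, least-squares slopes of annual mean SWH against time. A minimal sketch of such a trend estimate is given below; the annual means are invented for illustration, and the sampling-rate adjustments discussed in the abstract are not included.

```python
# Minimal least-squares trend of annual mean SWH (annual means are made up).
import numpy as np

np.random.seed(0)
years = np.arange(1992, 2011)
annual_mean_swh = 1.50 + 0.004 * (years - 1992) + np.random.normal(0, 0.03, years.size)  # metres

slope, intercept = np.polyfit(years, annual_mean_swh, 1)
print(f"trend: {slope * 100:.2f} cm/yr")
```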

  17. A novel numerical model for estimating the collapse pressure of flexible pipes

    Energy Technology Data Exchange (ETDEWEB)

    Nogueira, Victor P.P.; Antoun Netto, Theodoro [Universidade Federal do Rio de Janeiro (COPPE/UFRJ), RJ (Brazil). Coordenacao dos Programas de Pos-graduacao em Engenharia], e-mail: victor@lts.coppe.ufrj.br

    2009-07-01

    As the worldwide oil and gas industry operational environments move to ultra-deep waters, failure mechanisms in flexible pipes such as instability of the armor layers under compression and hydrostatic collapse are more likely to occur. Therefore, it is important to develop reliable numerical tools to reproduce the failure mechanisms that may occur in flexible pipes. This work presents a representative finite element model of a flexible pipe capable of reproducing its pre- and post-collapse behavior under hydrostatic pressure. The model, developed in the scope of this work, uses beam elements and includes nonlinear kinematics and material behavior influences. The dependability of the numerical results is assessed in light of experimental tests on flexible pipes with 4-inch and 8-inch nominal diameters available in the literature (Souza, 2002). The applied methodology provided coherent values for the estimated collapse pressures, and the results have shown that the proposed model is capable of reproducing the experimental results. (author)

  18. Estimating the Cross-Shelf Export of Riverine Materials: Part 1. General Relationships From an Idealized Numerical Model

    Science.gov (United States)

    Izett, Jonathan G.; Fennel, Katja

    2018-02-01

    Rivers deliver large amounts of terrestrially derived materials (such as nutrients, sediments, and pollutants) to the coastal ocean, but a global quantification of the fate of this delivery is lacking. Nutrients can accumulate on shelves, potentially driving high levels of primary production with negative consequences like hypoxia, or be exported across the shelf to the open ocean where impacts are minimized. Global biogeochemical models cannot resolve the relatively small-scale processes governing river plume dynamics and cross-shelf export; instead, river inputs are often parameterized assuming an "all or nothing" approach. Recently, Sharples et al. (2017), https://doi.org/10.1002/2016GB005483 proposed the SP number—a dimensionless number relating the estimated size of a plume as a function of latitude to the local shelf width—as a simple estimator of cross-shelf export. We extend their work, which is solely based on theoretical and empirical scaling arguments, and address some of its limitations using a numerical model of an idealized river plume. In a large number of simulations, we test whether the SP number can accurately describe export in unforced cases and with tidal and wind forcings imposed. Our numerical experiments confirm that the SP number can be used to estimate export and enable refinement of the quantitative relationships proposed by Sharples et al. We show that, in general, external forcing has only a weak influence compared to latitude and derive empirical relationships from the results of the numerical experiments that can be used to estimate riverine freshwater export to the open ocean.

  19. Biased calculations: Numeric anchors influence answers to math equations

    Directory of Open Access Journals (Sweden)

    Andrew R. Smith

    2011-02-01

    Full Text Available People must often perform calculations in order to produce a numeric estimate (e.g., a grocery-store shopper estimating the total price of his or her shopping cart contents). The current studies were designed to test whether estimates based on calculations are influenced by comparisons with irrelevant anchors. Previous research has demonstrated that estimates across a wide range of contexts assimilate toward anchors, but none has examined estimates based on calculations. In two studies, we had participants compare the answers to math problems with anchors. In both studies, participants' estimates assimilated toward the anchor values. This effect was moderated by time limit such that the anchoring effects were larger when the participants' ability to engage in calculations was limited by a restrictive time limit.

  20. Numerical experiments to investigate the accuracy of broad-band moment magnitude, Mwp

    Science.gov (United States)

    Hara, Tatsuhiko; Nishimura, Naoki

    2011-12-01

    We perform numerical experiments to investigate the accuracy of broad-band moment magnitude, Mwp. We conduct these experiments by measuring Mwp from synthetic seismograms and comparing the resulting values to the moment magnitudes used in the calculation of the synthetic seismograms. In the numerical experiments using point sources, we have found that there is a significant dependence of Mwp on focal mechanisms, and that depth phases have a large impact on Mwp estimates, especially for large shallow earthquakes. Numerical experiments using line sources suggest that the effects of source finiteness and rupture propagation on Mwp estimates are on the order of 0.2 magnitude units for vertical fault planes with pure dip-slip mechanisms and 45° dipping fault planes with pure dip-slip (thrust) mechanisms, but that the dependence is small for strike-slip events on a vertical fault plane. Numerical experiments for huge thrust faulting earthquakes on a fault plane with a shallow dip angle suggest that the Mwp estimates do not saturate in the moment magnitude range between 8 and 9, although they are underestimates. Our results are consistent with previous studies that compared Mwp estimates to moment magnitudes calculated from seismic moment tensors obtained by analyses of observed data.

  1. Parameter estimation method that directly compares gravitational wave observations to numerical relativity

    Science.gov (United States)

    Lange, J.; O'Shaughnessy, R.; Boyle, M.; Calderón Bustillo, J.; Campanelli, M.; Chu, T.; Clark, J. A.; Demos, N.; Fong, H.; Healy, J.; Hemberger, D. A.; Hinder, I.; Jani, K.; Khamesra, B.; Kidder, L. E.; Kumar, P.; Laguna, P.; Lousto, C. O.; Lovelace, G.; Ossokine, S.; Pfeiffer, H.; Scheel, M. A.; Shoemaker, D. M.; Szilagyi, B.; Teukolsky, S.; Zlochower, Y.

    2017-11-01

    We present and assess a Bayesian method to interpret gravitational wave signals from binary black holes. Our method directly compares gravitational wave data to numerical relativity (NR) simulations. In this study, we present a detailed investigation of the systematic and statistical parameter estimation errors of this method. This procedure bypasses approximations used in semianalytical models for compact binary coalescence. In this work, we use the full posterior parameter distribution for only generic nonprecessing binaries, drawing inferences away from the set of NR simulations used, via interpolation of a single scalar quantity (the marginalized log likelihood, ln L ) evaluated by comparing data to nonprecessing binary black hole simulations. We also compare the data to generic simulations, and discuss the effectiveness of this procedure for generic sources. We specifically assess the impact of higher order modes, repeating our interpretation with both l ≤2 as well as l ≤3 harmonic modes. Using the l ≤3 higher modes, we gain more information from the signal and can better constrain the parameters of the gravitational wave signal. We assess and quantify several sources of systematic error that our procedure could introduce, including simulation resolution and duration; most are negligible. We show through examples that our method can recover the parameters for equal mass, zero spin, GW150914-like, and unequal mass, precessing spin sources. Our study of this new parameter estimation method demonstrates that we can quantify and understand the systematic and statistical error. This method allows us to use higher order modes from numerical relativity simulations to better constrain the black hole binary parameters.

  2. ESTIMATION OF THE WANDA GLACIER (SOUTH SHETLANDS) SEDIMENT EROSION RATE USING NUMERICAL MODELLING

    Directory of Open Access Journals (Sweden)

    Kátia Kellem Rosa

    2013-09-01

    Full Text Available Glacial sediment yield results from glacial erosion and is influenced by several factors including glacial retreat rate, ice flow velocity and thermal regime. This paper estimates the contemporary subglacial erosion rate and sediment yield of Wanda Glacier (King George Island, South Shetlands). This work also examines basal sediment evacuation mechanisms by runoff and glacial erosion processes during subglacial transport. This is a small temperate glacier that has been retreating for the last decades. In this work, we examine basal sediment evacuation mechanisms by runoff and analyze glacial erosion processes occurring during subglacial transport. The glacial erosion rate at Wanda Glacier, estimated using a numerical model that considers sediment evacuated to outlet streams, ice flow velocity, ice thickness and glacier area, is 1.1 ton m yr-1.

  3. Approximate method in estimation sensitivity responses to variations in delayed neutron energy spectra

    Energy Technology Data Exchange (ETDEWEB)

    Yoo, J; Shin, H S; Song, T Y; Park, W S [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    1998-12-31

    Our previous numerical results in computing point kinetics equations show a possibility of developing approximations to estimate sensitivity responses of a nuclear reactor. We recalculate sensitivity responses by maintaining the corrections to first order in the sensitivity parameter. We present a method for computing sensitivity responses of a nuclear reactor based on an approximation derived from point kinetics equations. Exploiting this approximation, we found that the first order approximation works to estimate variations in the time to reach peak power because of their linear dependence on a sensitivity parameter, and that there are errors in estimating the peak power in the first order approximation for larger sensitivity parameters. To confirm the legitimacy of our approximation, these approximate results are compared with exact results obtained from our previous numerical study. 4 refs., 2 figs., 3 tabs. (Author)

  4. Approximate method in estimation sensitivity responses to variations in delayed neutron energy spectra

    Energy Technology Data Exchange (ETDEWEB)

    Yoo, J.; Shin, H. S.; Song, T. Y.; Park, W. S. [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    1997-12-31

    Our previous numerical results in computing point kinetics equations show a possibility of developing approximations to estimate sensitivity responses of a nuclear reactor. We recalculate sensitivity responses by maintaining the corrections to first order in the sensitivity parameter. We present a method for computing sensitivity responses of a nuclear reactor based on an approximation derived from point kinetics equations. Exploiting this approximation, we found that the first order approximation works to estimate variations in the time to reach peak power because of their linear dependence on a sensitivity parameter, and that there are errors in estimating the peak power in the first order approximation for larger sensitivity parameters. To confirm the legitimacy of our approximation, these approximate results are compared with exact results obtained from our previous numerical study. 4 refs., 2 figs., 3 tabs. (Author)

  5. Unconditional convergence and error estimates for bounded numerical solutions of the barotropic Navier-Stokes system

    Czech Academy of Sciences Publication Activity Database

    Feireisl, Eduard; Hošek, Radim; Maltese, D.; Novotný, A.

    2017-01-01

    Roč. 33, č. 4 (2017), s. 1208-1223 ISSN 0749-159X EU Projects: European Commission(XE) 320078 - MATHEF Institutional support: RVO:67985840 Keywords: convergence * error estimates * mixed numerical method * Navier–Stokes system Subject RIV: BA - General Mathematics OBOR OECD: Pure mathematics Impact factor: 1.079, year: 2016 http://onlinelibrary.wiley.com/doi/10.1002/num.22140/abstract

  6. Joint Center Estimation Using Single-Frame Optimization: Part 1: Numerical Simulation.

    Science.gov (United States)

    Frick, Eric; Rahmatalla, Salam

    2018-04-04

    The biomechanical models used to refine and stabilize motion capture processes are almost invariably driven by joint center estimates, and any errors in joint center calculation carry over and can be compounded when calculating joint kinematics. Unfortunately, accurate determination of joint centers is a complex task, primarily due to measurements being contaminated by soft-tissue artifact (STA). This paper proposes a novel approach to joint center estimation implemented via sequential application of single-frame optimization (SFO). First, the method minimizes the variance of individual time frames’ joint center estimations via the developed variance minimization method to obtain accurate overall initial conditions. These initial conditions are used to stabilize an optimization-based linearization of human motion that determines a time-varying joint center estimation. In this manner, the complex and nonlinear behavior of human motion contaminated by STA can be captured as a continuous series of unique rigid-body realizations without requiring a complex analytical model to describe the behavior of STA. This article intends to offer proof of concept, and the presented method must be further developed before it can be reasonably applied to human motion. Numerical simulations were introduced to verify and substantiate the efficacy of the proposed methodology. When directly compared with a state-of-the-art inertial method, SFO reduced the error due to soft-tissue artifact in all cases by more than 45%. Instead of producing a single vector value to describe the joint center location during a motion capture trial as existing methods often do, the proposed method produced time-varying solutions that were highly correlated ( r > 0.82) with the true, time-varying joint center solution.

  7. Joint Center Estimation Using Single-Frame Optimization: Part 1: Numerical Simulation

    Directory of Open Access Journals (Sweden)

    Eric Frick

    2018-04-01

    Full Text Available The biomechanical models used to refine and stabilize motion capture processes are almost invariably driven by joint center estimates, and any errors in joint center calculation carry over and can be compounded when calculating joint kinematics. Unfortunately, accurate determination of joint centers is a complex task, primarily due to measurements being contaminated by soft-tissue artifact (STA). This paper proposes a novel approach to joint center estimation implemented via sequential application of single-frame optimization (SFO). First, the method minimizes the variance of individual time frames’ joint center estimations via the developed variance minimization method to obtain accurate overall initial conditions. These initial conditions are used to stabilize an optimization-based linearization of human motion that determines a time-varying joint center estimation. In this manner, the complex and nonlinear behavior of human motion contaminated by STA can be captured as a continuous series of unique rigid-body realizations without requiring a complex analytical model to describe the behavior of STA. This article intends to offer proof of concept, and the presented method must be further developed before it can be reasonably applied to human motion. Numerical simulations were introduced to verify and substantiate the efficacy of the proposed methodology. When directly compared with a state-of-the-art inertial method, SFO reduced the error due to soft-tissue artifact in all cases by more than 45%. Instead of producing a single vector value to describe the joint center location during a motion capture trial as existing methods often do, the proposed method produced time-varying solutions that were highly correlated (r > 0.82) with the true, time-varying joint center solution.
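
    For orientation, a much simpler classical approach to the same underlying problem is the algebraic sphere fit: markers rigidly attached to a segment rotating about a fixed joint center lie on a sphere around that center, so the center can be recovered by linear least squares. The sketch below is illustrative only (it is not the authors' SFO method), uses synthetic marker data, and adds noise as a crude stand-in for soft-tissue artifact.

```python
# Least-squares joint-center estimation by an algebraic sphere fit on synthetic
# marker data (illustrative only; not the SFO method of the record above).
import numpy as np

rng = np.random.default_rng(0)
center_true = np.array([0.10, 0.20, 0.90])   # hypothetical joint center [m]
radius = 0.25                                 # marker-to-center distance [m]

# synthetic marker positions: a rigid marker swinging about the center, plus noise
az = rng.uniform(0.0, np.pi / 2, 200)
el = rng.uniform(-np.pi / 4, np.pi / 4, 200)
dirs = np.column_stack([np.cos(el) * np.cos(az), np.cos(el) * np.sin(az), np.sin(el)])
markers = center_true + radius * dirs
markers += 0.003 * rng.standard_normal(markers.shape)   # crude stand-in for STA/noise

# algebraic sphere fit: |p - c|^2 = r^2  =>  2 p.c - k = |p|^2 with k = |c|^2 - r^2
A = np.hstack([2.0 * markers, -np.ones((len(markers), 1))])
b = np.sum(markers**2, axis=1)
sol, *_ = np.linalg.lstsq(A, b, rcond=None)
center_est, k = sol[:3], sol[3]
radius_est = np.sqrt(np.dot(center_est, center_est) - k)
print("estimated center:", center_est, "radius:", radius_est)
```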

  8. Multi-channel PSD Estimators for Speech Dereverberation

    DEFF Research Database (Denmark)

    Kuklasinski, Adam; Doclo, Simon; Gerkmann, Timo

    2015-01-01

    densities (PSDs). We first derive closed-form expressions for the mean square error (MSE) of both PSD estimators and then show that one estimatorpreviously used for speech dereverberation by the authors – always yields a better MSE. Only in the case of a two microphone array or for special spatial...... distributions of the interference both estimators yield the same MSE. The theoretically derived MSE values are in good agreement with numerical simulation results and with instrumental speech quality measures in a realistic speech dereverberation task for binaural hearing aids....

  9. Comparison of maximum runup through analytical and numerical approaches for different fault parameters estimates

    Science.gov (United States)

    Kanoglu, U.; Wronna, M.; Baptista, M. A.; Miranda, J. M. A.

    2017-12-01

    The one-dimensional analytical runup theory in combination with near-shore synthetic waveforms is a promising tool for tsunami rapid early warning systems. Its application in realistic cases with complex bathymetry and initial wave conditions from inverse modelling has shown that maximum runup values can be estimated reasonably well. In this study we generate simplistic bathymetry domains which resemble realistic near-shore features. We investigate the sensitivity of the analytical runup formulae to variations of the fault source parameters and near-shore bathymetric features. To do this we systematically vary the fault plane parameters to compute the initial tsunami wave condition. Subsequently, we use the initial conditions to run the numerical tsunami model on a coupled system of four nested grids and compare the results to the analytical estimates. Variation of the dip angle of the fault plane showed that analytical estimates differ by less than 10% for angles of 5-45 degrees in a simple bathymetric domain. These results show that the use of analytical formulae for fast runup estimates constitutes a very promising approach in a simple bathymetric domain and might be implemented in Hazard Mapping and Early Warning.
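
    As an illustration of what a one-dimensional analytical runup estimate looks like, the sketch below evaluates the classical solitary-wave runup law of Synolakis (1987), R/d = 2.831 * sqrt(cot(beta)) * (H/d)^(5/4). This is a stand-in only; the specific formulae used by the authors may differ, and the wave height, depth and slope values are hypothetical.

```python
# Classical non-breaking solitary-wave runup law (Synolakis, 1987), used here
# as an illustrative analytical runup estimate; not necessarily the authors'.
import math

def max_runup(H, d, beta_deg):
    """H: offshore wave height [m], d: water depth [m], beta_deg: beach slope angle [deg]."""
    cot_beta = 1.0 / math.tan(math.radians(beta_deg))
    return d * 2.831 * math.sqrt(cot_beta) * (H / d) ** 1.25

for beta in (1.0, 2.0, 5.0):                 # hypothetical beach slopes
    print(beta, "deg slope ->", round(max_runup(H=1.0, d=50.0, beta_deg=beta), 2), "m runup")
```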

  10. NUMERICAL AND ANALYTIC METHODS OF ESTIMATION BRIDGES’ CONSTRUCTIONS

    Directory of Open Access Journals (Sweden)

    Y. Y. Luchko

    2010-03-01

    Full Text Available In this article the numerical and analytical methods of calculation of the stressed-and-strained state of bridge constructions are considered. The task on increasing of reliability and accuracy of the numerical method and its solution by means of calculations in two bases are formulated. The analytical solution of the differential equation of deformation of a ferro-concrete plate under the action of local loads is also obtained.

  11. Numerical tools to estimate the flux of a gas across the air–water interface and assess the heterogeneity of its forcing functions

    Directory of Open Access Journals (Sweden)

    V. M. N. C. S. Vieira

    2013-03-01

    Full Text Available A numerical tool was developed for the estimation of gas fluxes across the air–water interface. The primary objective is to use it to estimate CO2 fluxes. Nevertheless, application to other gases is easily accomplished by changing the values of the parameters related to the physical properties of the gases. User-friendly software was developed that allows a custom-made gas flux model, with the preferred parameterizations, to be built upon a standard kernel. These include single or double layer models; several numerical schemes for the effects of wind on the air-side and water-side transfer velocities; the effects of atmospheric stability, surface roughness and turbulence from current drag with the bottom; and the effects on solubility of water temperature, salinity, air temperature and pressure. An analysis was also developed which decomposes the difference between the fluxes in a reference situation and in alternative situations into its several forcing functions. This analysis relies on the Taylor expansion of the gas flux model, requiring the numerical estimation of partial derivatives by a multivariate version of the collocation polynomial. Both the flux model and the difference decomposition analysis were tested with data taken from surveys done in the lagoon system of Ria Formosa, south Portugal, in which the CO2 fluxes were estimated using the infrared gas analyzer (IRGA) and floating chamber method, whereas the CO2 concentrations were estimated using the IRGA and degasification chamber. Observations and estimations show a remarkable fit.
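
    A minimal single-layer sketch of such a flux calculation is given below. It uses the widely cited quadratic wind-speed relation for the transfer velocity, k_660 = 0.31 * U10^2 [cm/h] (Wanninkhof, 1992), scaled by the Schmidt number; the tool described above supports several alternative parameterizations, and the Schmidt number and concentrations below are hypothetical inputs.

```python
# Single-layer, wind-driven air-water gas flux sketch (illustrative values).
def co2_flux(u10, schmidt, c_water, c_air_eq):
    """Flux [mol m-2 d-1], positive from water to air.
    u10: 10-m wind speed [m/s]; schmidt: Schmidt number of the gas at in-situ T;
    c_water, c_air_eq: dissolved and air-equilibrium concentrations [mol/m3]."""
    k_cm_per_h = 0.31 * u10**2 * (schmidt / 660.0) ** -0.5   # transfer velocity
    k_m_per_d = k_cm_per_h * 24.0 / 100.0                    # unit conversion
    return k_m_per_d * (c_water - c_air_eq)

print(co2_flux(u10=6.0, schmidt=720.0, c_water=0.022, c_air_eq=0.015))
```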

  12. Compressive Parameter Estimation for Sparse Translation-Invariant Signals Using Polar Interpolation

    DEFF Research Database (Denmark)

    Fyhn, Karsten; Duarte, Marco F.; Jensen, Søren Holdt

    2015-01-01

    We propose new compressive parameter estimation algorithms that make use of polar interpolation to improve the estimator precision. Our work extends previous approaches involving polar interpolation for compressive parameter estimation in two aspects: (i) we extend the formulation from real non...... to attain good estimation precision and keep the computational complexity low. Our numerical experiments show that the proposed algorithms outperform existing approaches that either leverage polynomial interpolation or are based on a conversion to a frequency-estimation problem followed by a super...... interpolation increases the estimation precision....

  13. The inverse Numerical Computer Program FLUX-BOT for estimating Vertical Water Fluxes from Temperature Time-Series.

    Science.gov (United States)

    Trauth, N.; Schmidt, C.; Munz, M.

    2016-12-01

    Heat as a natural tracer to quantify water fluxes between groundwater and surface water has evolved into a standard hydrological method. Typically, time series of temperatures in the surface water and in the sediment are observed and are subsequently evaluated with a vertical 1D representation of heat transport by advection and dispersion. Several analytical solutions, as well as their implementations in user-friendly software, exist for estimating water fluxes from the observed temperatures. Analytical solutions can be easily implemented, but assumptions on the boundary conditions have to be made a priori, e.g. a sinusoidal upper temperature boundary. Numerical models offer more flexibility and can handle temperature data characterized by irregular variations, such as storm-event induced temperature changes, which cannot readily be incorporated in analytical solutions. This also reduces the effort of data preprocessing, such as the extraction of the diurnal temperature variation. We developed software to estimate water FLUXes Based On Temperatures (FLUX-BOT). FLUX-BOT is a numerical code written in MATLAB which is intended to calculate vertical water fluxes in saturated sediments, based on the inversion of measured temperature time series observed at multiple depths. It applies a cell-centered Crank-Nicolson implicit finite difference scheme to solve the one-dimensional heat advection-conduction equation. Besides its core inverse numerical routines, FLUX-BOT includes functions for visualizing the results and functions for performing uncertainty analysis. We provide applications of FLUX-BOT to generic as well as measured temperature data to demonstrate its performance.
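
    The forward model at the core of such a code can be sketched compactly: a Crank-Nicolson discretization of dT/dt = D d2T/dz2 - v dT/dz, with the thermal front velocity v proportional to the water flux. The sketch below is illustrative only (it is not the FLUX-BOT code), and all parameter values are hypothetical; the inverse step would wrap a forward model like this in an optimizer that adjusts the flux until simulated and measured temperatures match.

```python
# Crank-Nicolson sketch of 1D heat advection-conduction in saturated sediment.
import numpy as np

nz, depth = 51, 0.5                 # grid points, domain depth [m]
z = np.linspace(0.0, depth, nz)
dz, dt = z[1] - z[0], 60.0          # grid spacing [m], time step [s]
D = 1.0e-6                          # effective thermal diffusivity [m^2/s]
q = 1.0e-5                          # vertical Darcy flux [m/s], downward positive
gamma = 0.8                         # rho_w*c_w / (rho*c), heat capacity ratio
v = gamma * q                       # thermal front velocity [m/s]

A = np.zeros((nz, nz)); B = np.zeros((nz, nz))
for i in range(1, nz - 1):          # interior nodes, central differences
    lo = D / dz**2 + v / (2 * dz)
    di = -2 * D / dz**2
    up = D / dz**2 - v / (2 * dz)
    A[i, i-1:i+2] = [-0.5*dt*lo, 1 - 0.5*dt*di, -0.5*dt*up]
    B[i, i-1:i+2] = [ 0.5*dt*lo, 1 + 0.5*dt*di,  0.5*dt*up]
A[0, 0] = A[-1, -1] = 1.0           # Dirichlet boundaries
B[0, 0] = B[-1, -1] = 1.0

T = np.full(nz, 10.0)               # initial temperature [deg C]
for n in range(1, 24 * 60):         # one day, 1-minute steps
    t = n * dt
    rhs = B @ T
    rhs[0] = 10.0 + 2.0 * np.sin(2*np.pi*t/86400.0)  # diurnal surface signal
    rhs[-1] = 10.0                                   # constant deep temperature
    T = np.linalg.solve(A, rhs)

print(T[::10])                      # damped, lagged diurnal signal with depth
```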

  14. Strategy for a numerical Rock Mechanics Site Descriptive Model. Further development of the theoretical/numerical approach

    International Nuclear Information System (INIS)

    Olofsson, Isabelle; Fredriksson, Anders

    2005-05-01

    The Swedish Nuclear Fuel and Waste Management Company (SKB) is conducting Preliminary Site Investigations at two different locations in Sweden in order to study the possibility of a Deep Repository for spent fuel. Within the frame of these Site Investigations, Site Descriptive Models are produced. These products are the result of an interaction of several disciplines such as geology, hydrogeology, and meteorology. The Rock Mechanics Site Descriptive Model constitutes one of these models. Before the start of the Site Investigations, a numerical method using Discrete Fracture Network (DFN) models and the 2D numerical software UDEC was developed. Numerical simulations were the tool chosen for applying the theoretical approach for characterising the mechanical rock mass properties. Some shortcomings were identified when developing the methodology. Their impacts on the modelling (in terms of time and quality assurance of results) were estimated to be so important that the improvement of the methodology with another numerical tool was investigated. The theoretical approach is still based on DFN models, but the numerical software used is 3DEC. The main assets of the programme compared to UDEC are an optimised algorithm for the generation of fractures in the model and for the assignment of mechanical fracture properties. Due to some numerical constraints, the test conditions were set up in order to simulate 2D plane strain tests. Numerical simulations were conducted on the same data set as used previously for the UDEC modelling in order to estimate and validate the results from the new methodology. A real 3D simulation was also conducted in order to assess the effect of the '2D' conditions in the 3DEC model. Based on the quality of the results, it was decided to update the theoretical model and introduce the new methodology based on DFN models and 3DEC simulations for the establishment of the Rock Mechanics Site Descriptive Model. By separating the spatial variability into two parts, one

  15. Numerical estimation of concrete beams reinforced with FRP bars

    Directory of Open Access Journals (Sweden)

    Protchenko Kostiantyn

    2016-01-01

    Full Text Available This paper introduces a numerical investigation of the mechanical performance of a concrete beam reinforced with Fibre Reinforced Polymer (FRP) bars, which can be a competitive alternative to steel bars for enhancing concrete structures. The objective of this work is to elaborate a reliable numerical model for predicting the strength capacity of structural elements using Finite Element Analysis (FEA). The numerical model is based on an experimental study prepared for beams reinforced with Basalt FRP (BFRP) bars and with steel bars (for comparison). The results obtained for the beams reinforced with steel bars are found to be in close agreement with the experimental results. However, the beams reinforced with BFRP bars demonstrated a higher bearing capacity in the experimental programme than those reinforced with steel bars, which is not in good agreement with the numerical results. The authors attempt to describe the reasons for the higher bearing capacity of the BFRP-reinforced beams achieved experimentally.

  16. ESTIMATION OF TURBULENT DIFFUSIVITY WITH DIRECT NUMERICAL SIMULATION OF STELLAR CONVECTION

    Energy Technology Data Exchange (ETDEWEB)

    Hotta, H.; Iida, Y.; Yokoyama, T., E-mail: hotta.h@eps.s.u-tokyo.ac.jp [Department of Earth and Planetary Science, University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-0033 (Japan)

    2012-05-20

    We investigate the value of the horizontal turbulent diffusivity η by numerical calculation of thermal convection. In this study, we introduce a new method whereby the turbulent diffusivity is estimated by monitoring the time development of the passive scalar, which is initially distributed in a given Gaussian function with a spatial scale d_0. Our conclusions are as follows: (1) assuming the relation η = L_c v_rms/3, where v_rms is the root-mean-square (rms) velocity, the characteristic length L_c is restricted by the shortest one among the pressure (density) scale height and the region depth. (2) The value of turbulent diffusivity becomes greater with the larger initial distribution scale d_0. (3) The approximation of turbulent diffusion holds better when the ratio of the initial distribution scale d_0 to the characteristic length L_c is larger.
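
    The monitoring idea itself can be shown in one dimension: evolve an initially Gaussian passive scalar and read the effective diffusivity off the growth rate of its variance, sigma^2(t) = d_0^2 + 2*eta*t. The sketch below uses a plain explicit diffusion scheme in place of a convection simulation, so it merely recovers the prescribed diffusivity; all values are hypothetical.

```python
# Recover an effective diffusivity from the spreading of a passive scalar
# (1D toy version of the monitoring method; not the convection code above).
import numpy as np

eta, d0 = 5.0e-3, 0.1
x = np.linspace(-2.0, 2.0, 801)
dx = x[1] - x[0]
dt = 0.2 * dx**2 / eta                      # stable explicit time step
c = np.exp(-x**2 / (2.0 * d0**2))           # initial Gaussian scalar

def variance(field):
    m0 = np.trapz(field, x)
    mu = np.trapz(x * field, x) / m0
    return np.trapz((x - mu)**2 * field, x) / m0

t, samples = 0.0, [(0.0, variance(c))]
for step in range(1, 5001):
    c[1:-1] += eta * dt / dx**2 * (c[2:] - 2*c[1:-1] + c[:-2])
    t += dt
    if step % 1000 == 0:
        samples.append((t, variance(c)))

times, sig2 = np.array(samples).T
print("recovered eta:", np.polyfit(times, sig2, 1)[0] / 2.0, "input eta:", eta)
```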

  17. Satellite telemetry reveals higher fishing mortality rates than previously estimated, suggesting overfishing of an apex marine predator.

    Science.gov (United States)

    Byrne, Michael E; Cortés, Enric; Vaudo, Jeremy J; Harvey, Guy C McN; Sampson, Mark; Wetherbee, Bradley M; Shivji, Mahmood

    2017-08-16

    Overfishing is a primary cause of population declines for many shark species of conservation concern. However, means of obtaining information on fishery interactions and mortality, necessary for the development of successful conservation strategies, are often fisheries-dependent and of questionable quality for many species of commercially exploited pelagic sharks. We used satellite telemetry as a fisheries-independent tool to document fisheries interactions, and quantify fishing mortality of the highly migratory shortfin mako shark (Isurus oxyrinchus) in the western North Atlantic Ocean. Forty satellite-tagged shortfin mako sharks tracked over 3 years entered the Exclusive Economic Zones of 19 countries and were harvested in fisheries of five countries, with 30% of tagged sharks harvested. Our tagging-derived estimates of instantaneous fishing mortality rates (F = 0.19-0.56) were 10-fold higher than previous estimates from fisheries-dependent data (approx. 0.015-0.024), suggesting data used in stock assessments may considerably underestimate fishing mortality. Additionally, our estimates of F were greater than those associated with maximum sustainable yield, suggesting a state of overfishing. This information has direct application to evaluations of stock status and for effective management of populations, and thus satellite tagging studies have potential to provide more accurate estimates of fishing mortality and survival than traditional fisheries-dependent methodology. © 2017 The Author(s).

  18. Modelling and development of estimation and control algorithms: application to a bio process; Modelisation et elaboration d'algorithmes d'estimation et de commande: application a un bioprocede

    Energy Technology Data Exchange (ETDEWEB)

    Maher, M

    1995-02-03

    Modelling, estimation and control of an alcoholic fermentation process are the purpose of this thesis. A simple mathematical model of a fermentation process is established by using experimental results obtained on the plant. This nonlinear model is used for numerical simulation, analysis and synthesis of estimation and control algorithms. The problem of nonlinear state and parameter estimation of bio-processes is studied. Two estimation techniques are developed and proposed to bypass the lack of sensors for certain physical variables. Their performances are studied by numerical simulation. One of these estimators is validated on experimental results of batch and continuous fermentations. An adaptive control law is proposed for the regulation and tracking of the substrate concentration of the plant by acting on the dilution rate. It is a nonlinear control strategy coupled with the previously validated estimator. The performance of this control law is evaluated by a real application to a continuous flow fermentation process. (author) refs.

  19. Bayesian estimation of the discrete coefficient of determination.

    Science.gov (United States)

    Chen, Ting; Braga-Neto, Ulisses M

    2016-12-01

    The discrete coefficient of determination (CoD) measures the nonlinear interaction between discrete predictor and target variables and has had far-reaching applications in Genomic Signal Processing. Previous work has addressed the inference of the discrete CoD using classical parametric and nonparametric approaches. In this paper, we introduce a Bayesian framework for the inference of the discrete CoD. We derive analytically the optimal minimum mean-square error (MMSE) CoD estimator, as well as a CoD estimator based on the Optimal Bayesian Predictor (OBP). For the latter estimator, exact expressions for its bias, variance, and root-mean-square (RMS) are given. The accuracy of both Bayesian CoD estimators with non-informative and informative priors, under fixed or random parameters, is studied via analytical and numerical approaches. We also demonstrate the application of the proposed Bayesian approach in the inference of gene regulatory networks, using gene-expression data from a previously published study on metastatic melanoma.

  20. When is best-worst best? A comparison of best-worst scaling, numeric estimation, and rating scales for collection of semantic norms.

    Science.gov (United States)

    Hollis, Geoff; Westbury, Chris

    2018-02-01

    Large-scale semantic norms have become both prevalent and influential in recent psycholinguistic research. However, little attention has been directed towards understanding the methodological best practices of such norm collection efforts. We compared the quality of semantic norms obtained through rating scales, numeric estimation, and a less commonly used judgment format called best-worst scaling. We found that best-worst scaling usually produces norms with higher predictive validities than other response formats, and does so requiring less data to be collected overall. We also found evidence that the various response formats may be producing qualitatively, rather than just quantitatively, different data. This raises the issue of potential response format bias, which has not been addressed by previous efforts to collect semantic norms, likely because of previous reliance on a single type of response format for a single type of semantic judgment. We have made available software for creating best-worst stimuli and scoring best-worst data. We also made available new norms for age of acquisition, valence, arousal, and concreteness collected using best-worst scaling. These norms include entries for 1,040 words, of which 1,034 are also contained in the ANEW norms (Bradley & Lang, Affective norms for English words (ANEW): Instruction manual and affective ratings (pp. 1-45). Technical report C-1, the center for research in psychophysiology, University of Florida, 1999).

  1. Statistical methods of parameter estimation for deterministically chaotic time series

    Science.gov (United States)

    Pisarenko, V. F.; Sornette, D.

    2004-03-01

    We discuss the possibility of applying some standard statistical methods (the least-squares method, the maximum likelihood method, and the method of statistical moments for estimation of parameters) to a deterministically chaotic low-dimensional dynamic system (the logistic map) containing an observational noise. A “segmentation fitting” maximum likelihood (ML) method is suggested to estimate the structural parameter of the logistic map along with the initial value x1, considered as an additional unknown parameter. The segmentation fitting method, called “piece-wise” ML, is similar in spirit to the “multiple shooting” approach previously proposed, but is simpler and has smaller bias. Comparisons with different previously proposed techniques on simulated numerical examples give favorable results (at least, for the investigated combinations of sample size N and noise level). Besides, unlike some suggested techniques, our method does not require a priori knowledge of the noise variance. We also clarify the nature of the inherent difficulties in the statistical analysis of deterministically chaotic time series and the status of previously proposed Bayesian approaches. We note the trade-off between the need to use a large number of data points in the ML analysis to decrease the bias (to guarantee consistency of the estimation) and the unstable nature of dynamical trajectories with exponentially fast loss of memory of the initial condition. The method of statistical moments for the estimation of the parameter of the logistic map is discussed. This method seems to be the only method whose consistency for deterministically chaotic time series has so far been proved theoretically (not only numerically).
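
    A crude numerical illustration of the piece-wise idea is sketched below: the noisy series is cut into short segments, each segment's initial value is fit by a dense grid search, and the structural parameter r common to all segments is chosen to minimize the total squared residual. This is only a sketch of the idea with hypothetical settings, not the authors' segmentation-fitting ML estimator.

```python
# Piece-wise least-squares fit of the logistic-map parameter from a noisy series.
import numpy as np

rng = np.random.default_rng(2)
r_true, noise, seg_len, n_seg = 3.8, 0.005, 8, 6

def logistic_series(r, x0, n):
    x = np.empty(n)
    x[0] = x0
    for i in range(1, n):
        x[i] = r * x[i - 1] * (1.0 - x[i - 1])
    return x

series = logistic_series(r_true, 0.3, seg_len * n_seg)
series += noise * rng.standard_normal(series.size)
segments = series.reshape(n_seg, seg_len)

def best_segment_cost(r, obs, x0_grid):
    """Smallest squared residual of one segment over a grid of initial values."""
    x = x0_grid.copy()
    cost = (obs[0] - x) ** 2
    for j in range(1, len(obs)):
        x = r * x * (1.0 - x)
        cost += (obs[j] - x) ** 2
    return cost.min()

x0_grid = np.linspace(0.01, 0.99, 981)
r_grid = np.linspace(3.7, 3.9, 201)
total = [sum(best_segment_cost(r, obs, x0_grid) for obs in segments) for r in r_grid]
print("estimated r:", r_grid[int(np.argmin(total))], "true r:", r_true)
```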

  2. Air Space Proportion in Pterosaur Limb Bones Using Computed Tomography and Its Implications for Previous Estimates of Pneumaticity

    Science.gov (United States)

    Martin, Elizabeth G.; Palmer, Colin

    2014-01-01

    Air Space Proportion (ASP) is a measure of how much air is present within a bone, which allows for a quantifiable comparison of pneumaticity between specimens and species. Measured from zero to one, higher ASP means more air and less bone. Conventionally, it is estimated from measurements of the internal and external bone diameter, or by analyzing cross-sections. To date, the only pterosaur ASP study has been carried out by visual inspection of sectioned bones within matrix. Here, computed tomography (CT) scans are used to calculate ASP in a small sample of pterosaur wing bones (mainly phalanges) and to assess how the values change throughout the bone. These results show higher ASPs than previous pterosaur pneumaticity studies, and more significantly, higher ASP values in the heads of wing bones than the shaft. This suggests that pneumaticity has been underestimated previously in pterosaurs, birds, and other archosaurs when shaft cross-sections are used to estimate ASP. Furthermore, ASP in pterosaurs is higher than those found in birds and most sauropod dinosaurs, giving them among the highest ASP values of animals studied so far, supporting the view that pterosaurs were some of the most pneumatized animals to have lived. The high degree of pneumaticity found in pterosaurs is proposed to be a response to the wing bone bending stiffness requirements of flight rather than a means to reduce mass, as is often suggested. Mass reduction may be a secondary result of pneumaticity that subsequently aids flight. PMID:24817312

  3. A simple numerical model to estimate the effect of coal selection on pulverized fuel burnout

    Energy Technology Data Exchange (ETDEWEB)

    Sun, J.K.; Hurt, R.H.; Niksa, S.; Muzio, L.; Mehta, A.; Stallings, J. [Brown University, Providence, RI (USA). Division Engineering

    2003-06-01

    The amount of unburned carbon in ash is an important performance characteristic in commercial boilers fired with pulverized coal. Unburned carbon levels are known to be sensitive to fuel selection, and there is great interest in methods of estimating the burnout propensity of coals based on proximate and ultimate analysis - the only fuel properties readily available to utility practitioners. A simple numerical model is described that is specifically designed to estimate the effects of coal selection on burnout in a way that is useful for commercial coal screening. The model is based on a highly idealized description of the combustion chamber but employs detailed descriptions of the fundamental fuel transformations. The model is validated against data from laboratory and pilot-scale combustors burning a range of international coals, and then against data obtained from full-scale units during periods of coal switching. The validated model form is then used in a series of sensitivity studies to explore the role of various individual fuel properties that influence burnout.

  4. Market projections of cellulose nanomaterial-enabled products-- Part 2: Volume estimates

    Science.gov (United States)

    John Cowie; E.M. (Ted) Bilek; Theodore H. Wegner; Jo Anne Shatkin

    2014-01-01

    Nanocellulose has enormous potential to provide an important materials platform in numerous product sectors. This study builds on previous work by the same authors in which likely high-volume, low-volume, and novel applications for cellulosic nanomaterials were identified. In particular, this study creates a transparent methodology and estimates the potential annual...

  5. Numerical estimation on balance coefficients of central difference averaging method for quench detection of the KSTAR PF coils

    International Nuclear Information System (INIS)

    Kim, Jin Sub; An, Seok Chan; Ko, Tae Kuk; Chu, Yong

    2016-01-01

    A quench detection system for the KSTAR Poloidal Field (PF) coils is indispensable for stable operation, because the normal zone generates overheating when a quench occurs. Recently, a new voltage quench detection method, a combination of Central Difference Averaging (CDA) and Mutual Inductance Compensation (MIK), which compensates the mutual inductive voltage more effectively than the conventional voltage detection method, has been suggested and studied. For better cancellation of the mutual induction from adjacent coils in the CDA+MIK method for the KSTAR coil system, the balance coefficients of CDA must first be estimated and adjusted. In this paper, the balance coefficients of CDA for the KSTAR PF coils were numerically estimated. The estimated result was adopted and tested by using simulation. The CDA method adopting the balance coefficients effectively eliminated the mutual inductive voltage, and it is expected to improve the performance of the CDA+MIK method for quench detection of the KSTAR PF coils.
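
    The essence of balance-coefficient estimation can be sketched with a least-squares fit: choose coefficients a and b so that the combination V_center - (a*V_left + b*V_right) cancels the inductive pickup seen during quench-free operation, leaving only noise (so that a resistive quench voltage would stand out). The coil couplings and current waveforms below are hypothetical stand-ins for the KSTAR PF coil signals, and the actual procedure used by the authors may differ.

```python
# Least-squares balance coefficients for a CDA-style voltage combination.
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.0, 2.0, 2001)
dI1 = np.gradient(np.sin(2 * np.pi * 0.7 * t) + 0.3 * t, t)   # own-coil current ramp
dI2 = np.gradient(0.5 * np.cos(2 * np.pi * 1.3 * t), t)        # neighbouring coil current

# inductive voltages of three adjacent coil sections (no quench present)
V_left   = 1.00 * dI1 + 0.40 * dI2 + 0.002 * rng.standard_normal(t.size)
V_center = 0.95 * dI1 + 0.55 * dI2 + 0.002 * rng.standard_normal(t.size)
V_right  = 1.05 * dI1 + 0.70 * dI2 + 0.002 * rng.standard_normal(t.size)

# fit V_center ~ a*V_left + b*V_right in the least-squares sense
A = np.column_stack([V_left, V_right])
(a, b), *_ = np.linalg.lstsq(A, V_center, rcond=None)
residual = V_center - (a * V_left + b * V_right)
print("a, b:", a, b, "max residual:", np.max(np.abs(residual)))  # residual ~ noise only
```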

  6. Accounting for Antenna in Half-Space Fresnel Coefficient Estimation

    Directory of Open Access Journals (Sweden)

    A. D'Alterio

    2012-01-01

    Full Text Available The problem of retrieving the Fresnel reflection coefficients of a half-space medium starting from measurements collected under a reflection-mode multistatic configuration is dealt with. According to our previous results, reflection coefficient estimation is cast as the inversion of a linear operator. However, here we take a step towards more realistic scenarios, as the role of the antennas (both transmitting and receiving) is embodied in the estimation procedure. Numerical results are presented to show the effectiveness of the method for different types of half-space media.

  7. Higher Order Numerical Methods and Use of Estimation Techniques to Improve Modeling of Two-Phase Flow in Pipelines and Wells

    Energy Technology Data Exchange (ETDEWEB)

    Lorentzen, Rolf Johan

    2002-04-01

    The main objective of this thesis is to develop methods which can be used to improve predictions of two-phase flow (liquid and gas) in pipelines and wells. More reliable predictions are accomplished by improvements of numerical methods, and by using measured data to tune the mathematical model which describes the two-phase flow. We present a way to extend simple numerical methods to second-order spatial accuracy. These methods are implemented, tested and compared with a second-order Godunov-type scheme. In addition, a new (and faster) version of the Godunov-type scheme utilizing primitive (observable) variables is presented. We introduce a least squares method which is used to tune parameters embedded in the two-phase flow model. This method is tested using synthetically generated measurements. We also present an ensemble Kalman filter which is used to tune physical state variables and model parameters. This technique is tested on synthetically generated measurements, but also on several sets of full-scale experimental measurements. The thesis is divided into an introductory part, and a part consisting of four papers. The introduction serves both as a summary of the material treated in the papers, and as supplementary background material. It contains five sections, where the first gives an overview of the main topics which are addressed in the thesis. Section 2 contains a description and discussion of mathematical models for two-phase flow in pipelines. Section 3 deals with the numerical methods which are used to solve the equations arising from the two-phase flow model. The numerical scheme described in Section 3.5 is not included in the papers. This section includes results in addition to an outline of the numerical approach. Section 4 gives an introduction to estimation theory, and leads towards application of the two-phase flow model. The material in Sections 4.6 and 4.7 is not discussed in the papers, but is included in the thesis as it gives an important validation

  8. BAESNUM, a conversational computer program for the Bayesian estimation of a parameter by a numerical method

    International Nuclear Information System (INIS)

    Colombo, A.G.; Jaarsma, R.J.

    1982-01-01

    This report describes a conversational computer program which, via Bayes' theorem, numerically combines the prior distribution of a parameter with a likelihood function. Any type of prior and likelihood function can be considered. The present version of the program includes six types of prior and employs the binomial likelihood. As input the program requires the law and parameters of the prior distribution and the sample data. As output it gives the posterior distribution as a histogram. The use of the program for estimating the constant failure rate of an item is briefly described
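
    The numerical Bayes update performed by a program of this kind is compact enough to sketch directly: a prior for a failure probability is discretized on a grid, multiplied by a binomial likelihood, and renormalized to give a histogram-like posterior. The beta prior and the sample data below are hypothetical choices; the program described above supports several prior laws.

```python
# Grid-based Bayesian update of a failure probability with a binomial likelihood.
import numpy as np
from scipy.stats import beta, binom

p = np.linspace(1e-4, 0.2, 400)             # grid for the failure probability
prior = beta.pdf(p, 1.5, 30.0)              # one possible prior law (hypothetical)
k, n = 2, 150                                # observed failures out of n demands
posterior = prior * binom.pmf(k, n, p)       # Bayes' theorem (unnormalized)
posterior /= np.trapz(posterior, p)          # normalize on the grid

print("posterior mean:", np.trapz(p * posterior, p))
```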

  9. Estimation of Resting Energy Expenditure: Validation of Previous and New Predictive Equations in Obese Children and Adolescents.

    Science.gov (United States)

    Acar-Tek, Nilüfer; Ağagündüz, Duygu; Çelik, Bülent; Bozbulut, Rukiye

    2017-08-01

    Accurate estimation of resting energy expenditure (REE) in children and adolescents is important to establish estimated energy requirements. The aim of the present study was to measure REE in obese children and adolescents by the indirect calorimetry method, compare these values with REE values estimated by equations, and develop the most appropriate equation for this group. One hundred and three obese children and adolescents (57 males, 46 females) between 7 and 17 years (10.6 ± 2.19 years) were recruited for the study. REE measurements of subjects were made with indirect calorimetry (COSMED, FitMatePro, Rome, Italy) and body compositions were analyzed. In females, the percentage of accurate prediction varied from 32.6 (World Health Organization [WHO]) to 43.5 (Molnar and Lazzer). The bias for equations was -0.2% (Kim), 3.7% (Molnar), and 22.6% (Derumeaux-Burel). Kim's (266 kcal/d), Schmelzle's (267 kcal/d), and Henry's (268 kcal/d) equations had the lowest root mean square error (RMSE). The equation that had the highest RMSE values among female subjects was the Derumeaux-Burel equation (394 kcal/d). In males, while the Institute of Medicine (IOM) had the lowest accurate prediction value (12.3%), the highest values were found using Schmelzle's (42.1%), Henry's (43.9%), and Müller's fat-free mass (FFM)-based (45.6%) equations. While Kim and Müller had the smallest bias (-0.6%, 9.9%), Schmelzle's equation had the smallest RMSE (331 kcal/d). The new specific equation based on FFM was generated as follows: REE = 451.722 + (23.202 * FFM). According to Bland-Altman plots, the new equations are distributed randomly in both males and females. Previously developed predictive equations mostly provided inaccurate and biased estimates of REE. However, the new predictive equations allow clinicians to estimate REE in obese children and adolescents with sufficient and acceptable accuracy.

  10. Sensitivity analysis of numerical solutions for environmental fluid problems

    International Nuclear Information System (INIS)

    Tanaka, Nobuatsu; Motoyama, Yasunori

    2003-01-01

    In this study, we present a new numerical method to quantitatively analyze the error of numerical solutions by using sensitivity analysis. If a reference case with typical parameters is calculated once with the method, no additional calculations are required to estimate the results for other numerical parameters, such as more detailed solutions. Furthermore, we can estimate the exact solution from the sensitivity analysis results and can quantitatively evaluate the reliability of the numerical solution by calculating the numerical error. (author)

  11. Testing gravitational-wave searches with numerical relativity waveforms: results from the first Numerical INJection Analysis (NINJA) project

    International Nuclear Information System (INIS)

    Aylott, Benjamin; Baker, John G; Camp, Jordan; Centrella, Joan; Boggs, William D; Buonanno, Alessandra; Boyle, Michael; Buchman, Luisa T; Chu, Tony; Brady, Patrick R; Brown, Duncan A; Bruegmann, Bernd; Cadonati, Laura; Campanelli, Manuela; Faber, Joshua; Chatterji, Shourov; Christensen, Nelson; Diener, Peter; Dorband, Nils; Etienne, Zachariah B

    2009-01-01

    The Numerical INJection Analysis (NINJA) project is a collaborative effort between members of the numerical relativity and gravitational-wave data analysis communities. The purpose of NINJA is to study the sensitivity of existing gravitational-wave search algorithms using numerically generated waveforms and to foster closer collaboration between the numerical relativity and data analysis communities. We describe the results of the first NINJA analysis which focused on gravitational waveforms from binary black hole coalescence. Ten numerical relativity groups contributed numerical data which were used to generate a set of gravitational-wave signals. These signals were injected into a simulated data set, designed to mimic the response of the initial LIGO and Virgo gravitational-wave detectors. Nine groups analysed this data using search and parameter-estimation pipelines. Matched filter algorithms, un-modelled-burst searches and Bayesian parameter estimation and model-selection algorithms were applied to the data. We report the efficiency of these search methods in detecting the numerical waveforms and measuring their parameters. We describe preliminary comparisons between the different search methods and suggest improvements for future NINJA analyses.

  12. Estimation of Radar Cross Section of a Target under Track

    Directory of Open Access Journals (Sweden)

    Hong Sun-Mog

    2010-01-01

    Full Text Available In allocating a radar beam for tracking a target, it is attempted to maintain the signal-to-noise ratio (SNR) of the signal returning from the illuminated target close to an optimum value for efficient track updates. An estimate of the average radar cross section (RCS) of the target is required in order to adjust the transmitted power based on the estimate such that a desired SNR can be realized. In this paper, a maximum-likelihood (ML) approach is presented for estimating the average RCS, and a numerical solution to the approach is proposed based on a generalized expectation maximization (GEM) algorithm. Estimation accuracy of the approach is compared to that of a previously reported procedure.

  13. A hybrid numerical prediction scheme for solar radiation estimation in un-gauged catchments.

    Science.gov (United States)

    Shamim, M. A.; Bray, M.; Ishak, A. M.; Remesan, R.; Han, D.

    2009-09-01

    The importance of solar radiation on the earth's surface is depicted in its wide range of applications in the fields of meteorology, agricultural sciences, engineering, hydrology, crop water requirements, climatic changes and energy assessment. It is quite random in nature as it has to go through different processes of assimilation and dispersion while on its way to earth. Compared to other meteorological parameters, solar radiation is quite infrequently measured; for example, the worldwide ratio of stations collecting solar radiation to those collecting temperature is 1:500 (Badescu, 2008). Researchers, therefore, have to rely on indirect techniques of estimation that include nonlinear models, artificial intelligence (e.g. neural networks), remote sensing and numerical weather predictions (NWP). This study proposes a hybrid numerical prediction scheme for solar radiation estimation in un-gauged catchments. It uses the PSU/NCAR's Mesoscale Modelling system (MM5) (Grell et al., 1995) to parameterise the cloud effect on extraterrestrial radiation by dividing the atmosphere into four layers of very high (6-12 km), high (3-6 km), medium (1.5-3 km) and low (0-1.5 km) altitudes from earth. It is believed that various cloud forms exist within each of these layers. An hourly time series of upper air pressure and relative humidity data sets corresponding to all of these layers is determined for the Brue catchment, southwest UK, using MM5. The Cloud Index (CI) was then determined using (Yang and Koike, 2002): ci = (1 / (pbi - pti)) * integral from pti to pbi of max[0.0, (Rh - Rhcri) / (1 - Rhcri)] dp, where pbi and pti represent the air pressure at the top and bottom of each layer and Rhcri is the critical value of relative humidity at which a certain cloud type is formed. Output from a global clear sky solar radiation model (MRM v-5) (Kambezidis and Psiloglu, 2008) is used along with meteorological datasets of temperature and precipitation and astronomical information. The analysis is aided by the
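
    A small numerical evaluation of the layer cloud-index integral as reconstructed above is sketched below; the exact form in the original scheme should be checked against Yang and Koike (2002), and the pressure levels and humidity profile used here are hypothetical.

```python
# Layer-mean cloud index from pressure-level relative humidity (illustrative).
import numpy as np

def cloud_index(p, rh, rh_cri):
    """p: pressure levels [hPa] from layer top to bottom; rh: relative humidity (0-1)."""
    w = np.maximum(0.0, (rh - rh_cri) / (1.0 - rh_cri))
    return np.trapz(w, p) / (p[-1] - p[0])   # pressure-weighted mean of the excess humidity

p = np.linspace(700.0, 850.0, 7)             # hypothetical mid-level layer
rh = np.array([0.55, 0.60, 0.72, 0.80, 0.90, 0.85, 0.70])
print(cloud_index(p, rh, rh_cri=0.6))
```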

  14. Count on dopamine: influences of COMT polymorphisms on numerical cognition

    Directory of Open Access Journals (Sweden)

    Annelise eJúlio-Costa

    2013-08-01

    Full Text Available Catechol-O-methyltransferase (COMT) is an enzyme that is particularly important for the metabolism of dopamine. Functional polymorphisms of COMT have been implicated in working memory and numerical cognition. This is an exploratory study that aims at investigating associations between COMT polymorphisms, working memory and numerical cognition. Elementary school children from 2nd to 6th grades were divided into two groups according to their COMT val158met polymorphism (children homozygous for the valine allele [n = 61] versus heterozygous plus methionine homozygous children, the met+ group [n = 94]). Both groups were matched for age and intelligence. Working memory was assessed through digit span and Corsi blocks. Symbolic numerical processing was assessed through transcoding and single-digit word problem tasks. Non-symbolic magnitude comparison and estimation tasks were used to assess number sense. Between-group differences were found in symbolic and non-symbolic numerical tasks, but not in working memory tasks. Children in the met+ group showed better performance in all numerical tasks, while val homozygous children presented slower development of non-symbolic magnitude representations. These results suggest COMT-related dopaminergic modulation may be related not only to working memory, as found in previous studies, but also to the development of magnitude processing and magnitude representations.

  15. Finger-Based Numerical Skills Link Fine Motor Skills to Numerical Development in Preschoolers.

    Science.gov (United States)

    Suggate, Sebastian; Stoeger, Heidrun; Fischer, Ursula

    2017-12-01

    Previous studies investigating the association between fine-motor skills (FMS) and mathematical skills have lacked specificity. In this study, we test whether an FMS link to numerical skills is due to the involvement of finger representations in early mathematics. We gave 81 pre-schoolers (mean age of 4 years, 9 months) a set of FMS measures and numerical tasks with and without a specific finger focus. Additionally, we used receptive vocabulary and chronological age as control measures. FMS linked more closely to finger-based than to nonfinger-based numerical skills even after accounting for the control variables. Moreover, the relationship between FMS and numerical skill was entirely mediated by finger-based numerical skills. We concluded that FMS are closely related to early numerical skill development through finger-based numerical counting that aids the acquisition of mathematical mental representations.

  16. Numerical relativity and asymptotic flatness

    International Nuclear Information System (INIS)

    Deadman, E; Stewart, J M

    2009-01-01

    It is highly plausible that the region of spacetime far from an isolated gravitating body is, in some sense, asymptotically Minkowskian. However theoretical studies of the full nonlinear theory, initiated by Bondi et al (1962 Proc. R. Soc. A 269 21-51), Sachs (1962 Proc. R. Soc. A 270 103-26) and Newman and Unti (1962 J. Math. Phys. 3 891-901), rely on careful, clever, a priori choices of a chart (and tetrad) and so are not readily accessible to the numerical relativist, who chooses her/his chart on the basis of quite different grounds. This paper seeks to close this gap. Starting from data available in a typical numerical evolution, we construct a chart and tetrad which are, asymptotically, sufficiently close to the theoretical ones, so that the key concepts of the Bondi news function, Bondi mass and its rate of decrease can be estimated. In particular, these estimates can be expressed in the numerical relativist's chart as numerical relativity recipes.

  17. Stochastic goal-oriented error estimation with memory

    Science.gov (United States)

    Ackmann, Jan; Marotzke, Jochem; Korn, Peter

    2017-11-01

    We propose a stochastic dual-weighted error estimator for the viscous shallow-water equation with boundaries. For this purpose, previous work on memory-less stochastic dual-weighted error estimation is extended by incorporating memory effects. The memory is introduced by describing the local truncation error as a sum of time-correlated random variables. The random variables themselves represent the temporal fluctuations in local truncation errors and are estimated from high-resolution information at near-initial times. The resulting error estimator is evaluated experimentally in two classical ocean-type experiments, the Munk gyre and the flow around an island. In these experiments, the stochastic process is adapted locally to the respective dynamical flow regime. Our stochastic dual-weighted error estimator is shown to provide meaningful error bounds for a range of physically relevant goals. We prove, as well as show numerically, that our approach can be interpreted as a linearized stochastic-physics ensemble.

  18. Estimation of Anaerobic Debromination Rate Constants of PBDE Pathways Using an Anaerobic Dehalogenation Model.

    Science.gov (United States)

    Karakas, Filiz; Imamoglu, Ipek

    2017-04-01

    This study aims to estimate anaerobic debromination rate constants (k_m) of PBDE pathways using previously reported laboratory soil data. k_m values of pathways are estimated by modifying a previously developed model, the Anaerobic Dehalogenation Model. Debromination activities published in the literature in terms of bromine substitutions, as well as specific microorganisms and their combinations, are used for identification of pathways. The range of estimated k_m values is between 0.0003 and 0.0241 d^-1. The median and maximum of the k_m values are found to be comparable to the few available biologically confirmed rate constants published in the literature. The estimated k_m values can be used as input to numerical fate and transport models for a better and more detailed investigation of the fate of individual PBDEs in contaminated sediments. Various remediation scenarios, such as monitored natural attenuation or bioremediation with bioaugmentation, can be handled in a more quantitative manner with the help of the k_m values estimated in this study.

  19. Fast focus estimation using frequency analysis in digital holography.

    Science.gov (United States)

    Oh, Seungtaik; Hwang, Chi-Young; Jeong, Il Kwon; Lee, Sung-Keun; Park, Jae-Hyeung

    2014-11-17

    A novel fast frequency-based method to estimate the focus distance of digital hologram for a single object is proposed. The focus distance is computed by analyzing the distribution of intersections of smoothed-rays. The smoothed-rays are determined by the directions of energy flow which are computed from local spatial frequency spectrum based on the windowed Fourier transform. So our method uses only the intrinsic frequency information of the optical field on the hologram and therefore does not require any sequential numerical reconstructions and focus detection techniques of conventional photography, both of which are the essential parts in previous methods. To show the effectiveness of our method, numerical results and analysis are presented as well.

  20. Numerical study of the evaporation process and parameter estimation analysis of an evaporation experiment

    Directory of Open Access Journals (Sweden)

    K. Schneider-Zapp

    2010-05-01

    Full Text Available Evaporation is an important process in soil-atmosphere interaction. The determination of hydraulic properties is one of the crucial parts in the simulation of water transport in porous media. Schneider et al. (2006) developed a new evaporation method to improve the estimation of hydraulic properties in the dry range. In this study we used numerical simulations of the experiment to study the physical dynamics in more detail, to optimise the boundary conditions and to choose the optimal combination of measurements. The physical analysis exposed, in accordance with experimental findings in the literature, two different evaporation regimes: (i) a soil-atmosphere boundary layer dominated regime (regime I) close to saturation and (ii) a hydraulically dominated regime (regime II). During this second regime a drying front (the interface between the unsaturated and dry zones) with very steep gradients forms, which penetrates deeper into the soil as time passes. The sensitivity analysis showed that the result is especially sensitive at the transition between the two regimes. By changing the boundary conditions it is possible to force the system to switch between the two regimes, e.g. from II back to I. Based on these findings a multistep experiment was developed. The response surfaces for all parameter combinations are flat and have a unique, localised minimum. Best parameter estimates are obtained if the evaporation flux and a potential measurement in 2 cm depth are used as target variables. Parameter estimation from simulated experiments with realistic measurement errors with a two-stage Monte-Carlo Levenberg-Marquardt procedure and manual rejection of obvious misfits led to acceptable results for three different soil textures.

  1. [Estimating child mortality using the previous child technique, with data from health centers and household surveys: methodological aspects].

    Science.gov (United States)

    Aguirre, A; Hill, A G

    1988-01-01

    2 trials of the previous child or preceding birth technique in Bamako, Mali, and Lima, Peru, gave very promising results for measurement of infant and early child mortality using data on survivorship of the 2 most recent births. In the Peruvian study, another technique was tested in which each woman was asked about her last 3 births. The preceding birth technique described by Brass and Macrae has rapidly been adopted as a simple means of estimating recent trends in early childhood mortality. The questions formulated and the analysis of results are direct when the mothers are visited at the time of birth or soon after. Several technical aspects of the method believed to introduce unforeseen biases have now been studied and found to be relatively unimportant. But the problems arising when the data come from a nonrepresentative fraction of the total fertile-aged population have not been resolved. The analysis based on data from 5 maternity centers including 1 hospital in Bamako, Mali, indicated some practical problems and the information obtained showed the kinds of subtle biases that can result from the effects of selection. The study in Lima tested 2 abbreviated methods for obtaining recent early childhood mortality estimates in countries with deficient vital registration. The basic idea was that a few simple questions added to household surveys on immunization or diarrheal disease control for example could produce improved child mortality estimates. The mortality estimates in Peru were based on 2 distinct sources of information in the questionnaire. All women were asked their total number of live born children and the number still alive at the time of the interview. The proportion of deaths was converted into a measure of child survival using a life table. Then each woman was asked for a brief history of the 3 most recent live births. Dates of birth and death were noted in month and year of occurrence. The interviews took only slightly longer than the basic survey

  2. Theory and numerics of gravitational waves from preheating after inflation

    International Nuclear Information System (INIS)

    Dufaux, Jean-Francois; Kofman, Lev; Bergman, Amanda; Felder, Gary; Uzan, Jean-Philippe

    2007-01-01

    Preheating after inflation involves large, time-dependent field inhomogeneities, which act as a classical source of gravitational radiation. The resulting spectrum might be probed by direct detection experiments if inflation occurs at a low enough energy scale. In this paper, we develop a theory and algorithm to calculate, analytically and numerically, the spectrum of energy density in gravitational waves produced from an inhomogeneous background of stochastic scalar fields in an expanding universe. We derive some generic analytical results for the emission of gravity waves by stochastic media of random fields, which can test the validity/accuracy of numerical calculations. We contrast our method with other numerical methods in the literature, and then we apply it to preheating after chaotic inflation. In this case, we are able to check analytically our numerical results, which differ significantly from previous works. We discuss how the gravity-wave spectrum builds up with time and find that the amplitude and the frequency of its peak depend in a relatively simple way on the characteristic spatial scale amplified during preheating. We then estimate the peak frequency and amplitude of the spectrum produced in two models of preheating after hybrid inflation, which for some parameters may be relevant for gravity-wave interferometric experiments

  3. Numerical Analysis of Partial Differential Equations

    CERN Document Server

    Lions, Jacques-Louis

    2011-01-01

    S. Albertoni: Alcuni metodi di calcolo nella teoria della diffusione dei neutroni.- I. Babuska: Optimization and numerical stability in computations.- J.H. Bramble: Error estimates in elliptic boundary value problems.- G. Capriz: The numerical approach to hydrodynamic problems.- A. Dou: Energy inequalities in an elastic cylinder.- T. Doupont: On the existence of an iterative method for the solution of elliptic difference equation with an improved work estimate.- J. Douglas, J.R. Cannon: The approximation of harmonic and parabolic functions of half-spaces from interior data.- B.E. Hubbard: Erro

  4. Numerical validation of selected computer programs in nonlinear analysis of steel frame exposed to fire

    Science.gov (United States)

    Maślak, Mariusz; Pazdanowski, Michał; Woźniczka, Piotr

    2018-01-01

    Validation of fire resistance for the same steel frame bearing structure is performed here using three different numerical models: a bar model prepared in the SAFIR environment, and two 3D models, one developed within the framework of Autodesk Simulation Mechanical (ASM) and an alternative one developed in the environment of the Abaqus code. The results of the computer simulations performed are compared with the experimental results obtained previously, in a laboratory fire test, on a structure having the same characteristics and subjected to the same heating regimen. Comparison of the experimental and numerically determined displacement evolution paths for selected nodes of the considered frame during the simulated fire exposure constitutes the basic criterion applied to evaluate the validity of the numerical results obtained. The experimental and numerically determined estimates of critical temperature specific to the considered frame, related to the limit state of bearing capacity in fire, have been verified as well.

  5. Wildlife Loss Estimates and Summary of Previous Mitigation Related to Hydroelectric Projects in Montana, Volume Three, Hungry Horse Project.

    Energy Technology Data Exchange (ETDEWEB)

    Casey, Daniel

    1984-10-01

    This assessment addresses the impacts to the wildlife populations and wildlife habitats due to the Hungry Horse Dam project on the South Fork of the Flathead River, and previous mitigation of these losses. In order to develop and focus mitigation efforts, it was first necessary to estimate wildlife and wildlife habitat losses attributable to the construction and operation of the project. The purpose of this report was to document the best available information concerning the degree of impacts to target wildlife species. Indirect benefits to wildlife species not listed will be identified during the development of alternative mitigation measures. Wildlife species incurring positive impacts attributable to the project were identified.

  6. A Polynomial Estimate of Railway Line Delay

    DEFF Research Database (Denmark)

    Cerreto, Fabrizio; Harrod, Steven; Nielsen, Otto Anker

    2017-01-01

    Railway service may be measured by the aggregate delay over a time horizon or due to an event. Timetables for railway service may dampen aggregate delay by the addition of process time, either supplement time or buffer time. The evaluation of these variables has previously been performed...... by numerical analysis with simulation. This paper proposes an analytical estimate of aggregate delay with a polynomial form. The function returns the aggregate delay of a railway line resulting from an initial, primary, delay. Analysis of the function demonstrates that there should be a balance between the two......

  7. Toward an enhanced Bayesian estimation framework for multiphase flow soft-sensing

    International Nuclear Information System (INIS)

    Luo, Xiaodong; Lorentzen, Rolf J; Stordal, Andreas S; Nævdal, Geir

    2014-01-01

    In this work the authors study the multiphase flow soft-sensing problem based on a previously established framework. There are three functional modules in this framework, namely, a transient well flow model that describes the response of certain physical variables in a well, for instance, temperature, velocity and pressure, to the flow rates entering and leaving the well zones; a Markov jump process that is designed to capture the potential abrupt changes in the flow rates; and an estimation method that is adopted to estimate the underlying flow rates based on the measurements from the physical sensors installed in the well. In the previous studies, the variances of the flow rates in the Markov jump process are chosen manually. To fill this gap, in the current work two automatic approaches are proposed in order to optimize the variance estimation. Through a numerical example, we show that, when the estimation framework is used in conjunction with these two proposed variance-estimation approaches, it can achieve reasonable performance in terms of matching both the measurements of the physical sensors and the true underlying flow rates. (paper)

  8. Numerical Estimation Method for the NonStationary Thrust of Pulsejet Ejector Nozzle

    Directory of Open Access Journals (Sweden)

    A. Yu. Mikushkin

    2016-01-01

    Full Text Available The article considers a calculation method for the non-stationary thrust of a pulsejet ejector nozzle based on detonation combustion of gaseous fuel. To determine the initial distributions of the thermodynamic parameters inside the detonation tube, a rapid analysis based on x-t diagrams of the motion of the glowing combustion products was carried out. For this purpose, a section with transparent walls was connected to the outlet of the tube to register the movement of the combustion products. Based on the obtained images and on gas-dynamic and thermodynamic equations, the velocity distribution of the combustion products, as well as their density, pressure and temperature required for the numerical analysis, was calculated. The world literature presents data on the distribution of these parameters, but only for direct initiation of detonation at the closed end and for a chemically "frozen" gas composition. The article presents interpolation methods for parameters measured at temperatures of 2500-2800 K. The estimation of the thermodynamic parameters is based on the Chapman-Jouguet condition that the speed of the combustion products directly behind the detonation wave front, taken with respect to the wave front, is equal to the local speed of sound of these products. The method of minimizing the enthalpy of the final thermodynamic state was used to calculate the equilibrium parameters; for this, the software package «IVTANTHERMO», a database of thermodynamic properties of many individual substances over a wide temperature range, was used. The integral thrust over the ejector nozzle surface was calculated numerically by solving the Navier-Stokes equations with a second-order finite-difference Roe scheme. The combustion products were considered both as an inert mixture with "frozen" composition and as a mixture in chemical equilibrium with changing temperature. A comparison with experimental results was made. The above method can be used for rapid
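
    The Chapman-Jouguet condition invoked in the abstract can be stated compactly as follows (standard form, not quoted from the paper):

```latex
% Chapman--Jouguet condition: in the frame of the detonation front moving at speed D,
% the burnt gases leave the front at exactly their local speed of sound, i.e.
\[
  D - u_{\mathrm{products}} \;=\; c_{\mathrm{products}},
\]
% so prescribing D together with the equilibrium equation of state fixes the pressure,
% density and temperature immediately behind the front.
```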

  9. Numerically stable algorithm for combining census and sample estimates with the multivariate composite estimator

    Science.gov (United States)

    R. L. Czaplewski

    2009-01-01

    The minimum variance multivariate composite estimator is a relatively simple sequential estimator for complex sampling designs (Czaplewski 2009). Such designs combine a probability sample of expensive field data with multiple censuses and/or samples of relatively inexpensive multi-sensor, multi-resolution remotely sensed data. Unfortunately, the multivariate composite...
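
    As background for the estimator named here, a simplified scalar analogue of minimum-variance composition: two independent estimates of the same quantity are combined with inverse-variance weights. The multivariate estimator of Czaplewski (2009) generalises this to vectors with full covariance matrices; the numbers below are illustrative.

```python
# Simplified scalar analogue of a minimum-variance composite estimator: combine a field-sample
# estimate and a census/remote-sensing estimate with inverse-variance weights.
def composite_estimate(x1, var1, x2, var2):
    w1, w2 = 1.0 / var1, 1.0 / var2
    est = (w1 * x1 + w2 * x2) / (w1 + w2)
    var = 1.0 / (w1 + w2)          # never larger than the smaller of the two input variances
    return est, var

print(composite_estimate(120.0, 25.0, 135.0, 100.0))
```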

  10. Adaptive optimisation-offline cyber attack on remote state estimator

    Science.gov (United States)

    Huang, Xin; Dong, Jiuxiang

    2017-10-01

    Security issues of cyber-physical systems have received increasing attention in recent years. In this paper, deception attacks on a remote state estimator equipped with a chi-squared failure detector are considered, and it is assumed that the attacker can monitor and modify all the sensor data. A novel adaptive optimisation-offline cyber attack strategy is proposed in which, using the current and previous sensor data, the attack yields the largest estimation error covariance while remaining undetected by the chi-squared monitor. From the attacker's perspective, the attack degrades the system performance more than the existing linear deception attacks. Finally, some numerical examples are provided to demonstrate the theoretical results.
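
    A sketch of the chi-squared detector such an attack must evade (a generic residual test, not the paper's exact formulation): the innovation is accepted while its normalized squared norm stays below a chi-squared quantile. The values below are illustrative.

```python
import numpy as np
from scipy.stats import chi2

# Generic chi-squared failure detector: accept the innovation r (covariance S) while
# r^T S^{-1} r stays below the chi-squared quantile for the chosen false-alarm rate alpha.
def passes_chi2_detector(residual, S, alpha=0.01):
    g = residual @ np.linalg.solve(S, residual)
    threshold = chi2.ppf(1.0 - alpha, df=residual.size)
    return g <= threshold

r = np.array([0.3, -0.1])
S = np.diag([0.25, 0.25])
print(passes_chi2_detector(r, S))   # True: this innovation would not raise an alarm
```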

  11. Transportation package design using numerical optimization

    International Nuclear Information System (INIS)

    Harding, D.C.; Witkowski, W.R.

    1991-01-01

    The purpose of this overview is twofold: first, to outline the theory and basic elements of numerical optimization; and second, to show how numerical optimization can be applied to the transportation packaging industry and used to increase efficiency and safety of radioactive and hazardous material transportation packages. A more extensive review of numerical optimization and its applications to radioactive material transportation package design was performed previously by the authors (Witkowski and Harding 1992). A proof-of-concept Type B package design is also presented as a simplified example of potential improvements achievable using numerical optimization in the design process
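
    As a generic illustration of what numerical optimization in a design process can look like (a hypothetical objective and constraint, not the Type B package model from the report): minimize a mass-like objective over two wall thicknesses subject to a stress-margin constraint evaluated by a surrogate response.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical design-optimization sketch: x = [inner shell thickness, overpack thickness] in m.
def mass(x):
    return 7850.0 * x[0] + 500.0 * x[1]            # kg per unit area, illustrative densities

def stress_margin(x):
    # >= 0 when the impact stress stays below an allowable value; purely illustrative surrogate
    return 250.0e6 - 4.0e6 / (x[0] + 0.5 * x[1])

result = minimize(mass, x0=np.array([0.02, 0.10]),
                  bounds=[(0.005, 0.05), (0.02, 0.30)],
                  constraints=[{"type": "ineq", "fun": stress_margin}])
print(result.x, result.fun)
```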

  12. Cardiovascular magnetic resonance in adults with previous cardiovascular surgery.

    Science.gov (United States)

    von Knobelsdorff-Brenkenhoff, Florian; Trauzeddel, Ralf Felix; Schulz-Menger, Jeanette

    2014-03-01

    Cardiovascular magnetic resonance (CMR) is a versatile non-invasive imaging modality that serves a broad spectrum of indications in clinical cardiology and is supported by a solid evidence base. Most of its numerous applications are appropriate in patients with previous cardiovascular surgery in the same manner as in non-surgical subjects. However, some specifics have to be considered. This review article is intended to provide information about the application of CMR in adults with previous cardiovascular surgery. In particular, the two main scenarios, i.e. following coronary artery bypass surgery and following heart valve surgery, are highlighted. Furthermore, several pictorial descriptions of other potential indications for CMR after cardiovascular surgery are given.

  13. Learning linear spatial-numeric associations improves accuracy of memory for numbers

    Directory of Open Access Journals (Sweden)

    Clarissa Ann Thompson

    2016-01-01

    Full Text Available Memory for numbers improves with age and experience. One potential source of improvement is a logarithmic-to-linear shift in children's representations of magnitude. To test this, kindergartners and second graders estimated the location of numbers on number lines and recalled numbers presented in vignettes (Study 1). Accuracy at number-line estimation predicted memory accuracy on a numerical recall task after controlling for the effect of age and the ability to approximately order magnitudes (mapper status). To test more directly whether linear numeric magnitude representations caused improvements in memory, half of the children were given feedback on their number-line estimates (Study 2). As expected, learning linear representations was again linked to memory for numerical information even after controlling for age and mapper status. These results suggest that linear representations of numerical magnitude may be a causal factor in the development of numeric recall accuracy.
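
    The logarithmic-to-linear shift mentioned here is typically quantified by fitting both candidate models to a child's number-line estimates and comparing the variance each explains; the sketch below uses made-up estimates for illustration.

```python
import numpy as np

# Fit linear and logarithmic models to number-line estimates (0-100 line) and compare R^2.
numbers   = np.array([3, 7, 12, 25, 42, 58, 71, 86, 96], dtype=float)   # presented numbers
estimates = np.array([9, 18, 27, 45, 55, 64, 74, 83, 90], dtype=float)  # illustrative estimates

def r_squared(x, y):
    slope, intercept = np.polyfit(x, y, 1)
    residuals = y - (slope * x + intercept)
    return 1.0 - residuals.var() / y.var()

print("linear fit R^2:     ", r_squared(numbers, estimates))
print("logarithmic fit R^2:", r_squared(np.log(numbers), estimates))
```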

  14. Estimating EQ-5D values from the Oswestry Disability Index and numeric rating scales for back and leg pain.

    Science.gov (United States)

    Carreon, Leah Y; Bratcher, Kelly R; Das, Nandita; Nienhuis, Jacob B; Glassman, Steven D

    2014-04-15

    Cross-sectional cohort. The purpose of this study is to determine whether the EuroQOL-5D (EQ-5D) can be derived from commonly available low back disease-specific health-related quality of life measures. The Oswestry Disability Index (ODI) and numeric rating scales (0-10) for back pain (BP) and leg pain (LP) are widely used disease-specific measures in patients with lumbar degenerative disorders. Increasingly, the EQ-5D is being used as a measure of utility due to ease of administration and scoring. The EQ-5D, ODI, BP, and LP were prospectively collected in 14,544 patients seen in clinic for lumbar degenerative disorders. Pearson correlation coefficients for paired observations from multiple time points between ODI, BP, LP, and EQ-5D were determined. Regression modeling was done to compute the EQ-5D score from the ODI, BP, and LP. The mean age was 53.3 ± 16.4 years and 41% were male. Correlations between the EQ-5D and the ODI, BP, and LP were statistically significant (P < 0.0001) with correlation coefficients of -0.77, -0.50, and -0.57, respectively. The regression equation: [0.97711 + (-0.00687 × ODI) + (-0.01488 × LP) + (-0.01008 × BP)] to predict EQ-5D, had an R2 of 0.61 and a root mean square error of 0.149. The model using ODI alone had an R2 of 0.57 and a root mean square error of 0.156. The model using the individual ODI items had an R2 of 0.64 and a root mean square error of 0.143. The correlation coefficient between the observed and estimated EQ-5D score was 0.78. There was no statistically significant difference between the actual EQ-5D (0.553 ± 0.238) and the estimated EQ-5D score (0.553 ± 0.186) using the ODI, BP, and LP regression model. However, rounding off the coefficients to less than 5 decimal places produced less accurate results. Unlike previous studies showing a robust relationship between low back-specific measures and the Short Form-6D, a similar relationship was not seen between the ODI, BP, LP, and the EQ-5D. Thus, the EQ-5D cannot be
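
    The reported regression model can be applied directly; the function below implements the equation as printed in the abstract (ODI on its 0-100 scale, back and leg pain on 0-10 numeric rating scales). The example input values are arbitrary.

```python
# EQ-5D estimated from the Oswestry Disability Index (ODI), leg pain (LP) and back pain (BP)
# using the coefficients reported in the abstract. Note the abstract's caution that rounding
# the coefficients to fewer than five decimal places degrades the estimate.
def estimated_eq5d(odi: float, back_pain: float, leg_pain: float) -> float:
    return (0.97711
            + (-0.00687 * odi)
            + (-0.01488 * leg_pain)
            + (-0.01008 * back_pain))

print(estimated_eq5d(odi=40.0, back_pain=6.0, leg_pain=5.0))
```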

  15. Investigation of error estimation method of observational data and comparison method between numerical and observational results toward V and V of seismic simulation

    International Nuclear Information System (INIS)

    Suzuki, Yoshio; Kawakami, Yoshiaki; Nakajima, Norihiro

    2017-01-01

    The method to estimate errors included in observational data and the method to compare numerical results with observational results are investigated toward the verification and validation (V and V) of a seismic simulation. For the error-estimation method, 144 publications from the past 5 years (2010 to 2014) in the structural engineering and earthquake engineering fields, where descriptions of acceleration data are frequent, are surveyed. It is found that processes to remove components regarded as errors from observational data are used in about 30% of those publications. Errors are caused by the resolution, the linearity, the temperature coefficient for sensitivity, the temperature coefficient for zero shift, the transverse sensitivity, the seismometer property, aliasing, and so on. Those processes can be exploited to estimate errors individually. For the method to compare numerical results with observational results, public materials of the ASME V and V Symposium 2012-2015, their references, and the 144 publications above are surveyed. It is found that six methods have been mainly proposed in existing research. Evaluating those methods against nine items, their advantages and disadvantages are summarized. No method is yet well established, so it is necessary either to employ the existing methods while compensating for their disadvantages or to search for a novel method. (author)

  16. Effect of Numerical Error on Gravity Field Estimation for GRACE and Future Gravity Missions

    Science.gov (United States)

    McCullough, Christopher; Bettadpur, Srinivas

    2015-04-01

    In recent decades, gravity field determination from low Earth orbiting satellites, such as the Gravity Recovery and Climate Experiment (GRACE), has become increasingly more effective due to the incorporation of high accuracy measurement devices. Since instrumentation quality will only increase in the near future and the gravity field determination process is computationally and numerically intensive, numerical error from the use of double precision arithmetic will eventually become a prominent error source. While using double-extended or quadruple precision arithmetic will reduce these errors, the numerical limitations of current orbit determination algorithms and processes must be accurately identified and quantified in order to adequately inform the science data processing techniques of future gravity missions. The most obvious numerical limitation in the orbit determination process is evident in the comparison of measured observables with computed values, derived from mathematical models relating the satellites' numerically integrated state to the observable. Significant error in the computed trajectory will corrupt this comparison and induce error in the least squares solution of the gravitational field. In addition, errors in the numerically computed trajectory propagate into the evaluation of the mathematical measurement model's partial derivatives. These errors amalgamate in turn with numerical error from the computation of the state transition matrix, computed using the variational equations of motion, in the least squares mapping matrix. Finally, the solution of the linearized least squares system, computed using a QR factorization, is also susceptible to numerical error. Certain interesting combinations of each of these numerical errors are examined in the framework of GRACE gravity field determination to analyze and quantify their effects on gravity field recovery.
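
    The last numerical-error source named in this abstract, the QR-based least-squares solve, can be illustrated on a small ill-conditioned problem: the same system solved in single and double precision shows how working precision alone limits the recovered parameters. The toy design matrix below is an assumption for illustration, not GRACE data.

```python
import numpy as np

# Toy illustration of precision-limited least squares: solve the same ill-conditioned
# system via QR factorization in double and in single precision and compare the errors.
rng = np.random.default_rng(1)
A = np.vander(np.linspace(0.0, 1.0, 60), 12)     # ill-conditioned design matrix
x_true = rng.standard_normal(12)
b = A @ x_true

def qr_solve(A, b):
    Q, R = np.linalg.qr(A)                       # reduced QR factorization
    return np.linalg.solve(R, Q.T @ b)           # back-substitution on the triangular factor

x64 = qr_solve(A, b)
x32 = qr_solve(A.astype(np.float32), b.astype(np.float32))
print("double-precision max error:", np.max(np.abs(x64 - x_true)))
print("single-precision max error:", np.max(np.abs(x32 - x_true)))
```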

  17. Numerical simulation of laser resonators

    International Nuclear Information System (INIS)

    Yoo, J. G.; Jeong, Y. U.; Lee, B. C.; Rhee, Y. J.; Cho, S. O.

    2004-01-01

    We developed numerical simulation packages for laser resonators on the bases of a pair of integral equations. Two numerical schemes, a matrix formalism and an iterative method, were programmed for finding numeric solutions to the pair of integral equations. The iterative method was tried by Fox and Li, but it was not applicable for high Fresnel numbers since the numerical errors involved propagate and accumulate uncontrollably. In this paper, we implement the matrix method to extend the computational limit further. A great number of case studies are carried out with various configurations of stable and unstable resonators to compute diffraction losses, phase shifts, intensity distributions and phases of the radiation fields on mirrors. Our results presented in this paper show not only a good agreement with the results previously obtained by Fox and Li, but also the legitimacy of our numerical procedures for high Fresnel numbers.
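
    A minimal version of the matrix formalism mentioned here (a toy 1-D strip-resonator setup, not the described package): discretize the Fresnel propagation kernel between the two mirrors and take the dominant eigenvalue, whose magnitude gives the diffraction loss per transit. The geometry and resolution below are assumptions.

```python
import numpy as np

# Toy matrix-formalism Fox-Li calculation for a symmetric strip resonator.
wavelength, length, a = 1.0e-6, 1.0, 1.0e-3   # wavelength, mirror spacing, mirror half-width (m)
n = 400
x = np.linspace(-a, a, n)
dx = x[1] - x[0]

k = 2.0 * np.pi / wavelength
# Discretized Fresnel kernel: one propagation from mirror to mirror.
K = np.sqrt(1j / (wavelength * length)) * np.exp(
        -1j * k * (x[:, None] - x[None, :]) ** 2 / (2.0 * length)) * dx

eigenvalues = np.linalg.eigvals(K)
gamma = eigenvalues[np.argmax(np.abs(eigenvalues))]   # lowest-loss transverse mode
print("power loss per transit:", 1.0 - abs(gamma) ** 2)
```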

  18. NUMERICAL ESTIMATES OF ELECTRODYNAMICS PROCESSES IN THE INDUCTOR SYSTEM WITH AN ATTRACTIVE SCREEN AND A FLAT RECTANGULAR SOLENOID

    Directory of Open Access Journals (Sweden)

    E. A. Chaplygin

    2018-04-01

    Full Text Available Purpose. To carry out numerical estimates of the currents and forces in the investigated inductor system with an attractive screen (ISAS) and to determine the effectiveness of the force attraction. Methodology. The calculated relationships and graphical constructions were obtained using the initial data of the system: the current induced in the screen and the sheet metal; the distributed force of attraction (Ampère force); the repulsive force acting on the sheet metal (Lorentz force); the amplitude values of the forces of attraction and repulsion; and the phase dependences of the force of attraction, the repulsive force and the total resulting force. Results. The results of the calculations are presented as graphical dependences of the electrodynamic processes in the region under the conductors of the rectangular solenoid of the inductor system with an attractive screen. Graphs of the forces and currents in the region of the dent are obtained. The paper analyses the electrodynamic processes over the whole area under the winding of the inductor system with an attractive screen and shows how these processes develop in the region of a dent of a given geometry. Originality. The considered inductor system with an attractive screen and a rectangular solenoid is improved in comparison with the previously developed ISAS. It has a working area under the lines of parallel conductors in the cross section of a rectangular solenoid, which allows a predetermined portion of the sheet metal to be placed anywhere within the working region. Comparison of the indicators of the electrodynamic processes in the considered calculation variants shows that the force indicators in the region of the accepted dent are approximately 1.5 times higher than the corresponding values for the entire area under the winding of the ISAS. Practical value. The results obtained are important for the practice of real estimates of the excited forces of attraction. With a decrease in the dent, the amplitude of the

  19. Wind gust estimation by combining numerical weather prediction model and statistical post-processing

    Science.gov (United States)

    Patlakas, Platon; Drakaki, Eleni; Galanis, George; Spyrou, Christos; Kallos, George

    2017-04-01

    The continuous rise of off-shore and near-shore activities, as well as the development of structures such as wind farms and various offshore platforms, requires the employment of state-of-the-art risk assessment techniques. Such analysis is used to set safety standards and can be characterized as a climatologically oriented approach. Nevertheless, reliable operational support is also needed in order to minimize cost drawbacks and human danger during the construction and functioning stages as well as during maintenance activities. One of the most important parameters for this kind of analysis is the wind speed intensity and variability. A critical measure associated with this variability is the presence and magnitude of wind gusts as estimated at the reference level of 10 m. The latter can be attributed to different processes that range among boundary-layer turbulence, convective activity, mountain waves and wake phenomena. The purpose of this work is the development of a wind gust forecasting methodology combining a Numerical Weather Prediction model and a dynamical statistical tool based on Kalman filtering. To this end, the Wind Gust Estimate parameterization was implemented within the framework of the atmospheric model SKIRON/Dust. The new modeling tool combines the atmospheric model with a statistical local adaptation methodology based on Kalman filters. This has been tested over the offshore west coastline of the United States. The main purpose is to provide a useful tool for wind analysis and prediction and for applications related to offshore wind energy (power prediction, operation and maintenance). The results have been evaluated using observational data from NOAA's buoy network. The predicted output shows good behavior that is further improved after the local adjustment post-process.
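
    The Kalman-filter post-processing referred to here is typically a bias-correction step: the systematic error of the raw forecast is tracked as a slowly varying state and subtracted from the next forecast. The sketch below is a generic scalar version with assumed process and observation variances, not the SKIRON/Dust implementation.

```python
import numpy as np

# Scalar Kalman-filter bias correction of a gust forecast series.
def bias_corrected_forecasts(raw_forecasts, observations, q=0.01, r=1.0):
    bias, p = 0.0, 1.0                             # initial bias estimate and its variance
    corrected = []
    for forecast, obs in zip(raw_forecasts, observations):
        corrected.append(forecast - bias)          # issue the corrected forecast
        p += q                                     # predict: the bias may drift
        gain = p / (p + r)                         # Kalman gain
        bias += gain * ((forecast - obs) - bias)   # update bias with the latest forecast error
        p *= (1.0 - gain)
    return np.array(corrected)

raw = np.array([14.0, 15.5, 13.0, 16.2, 17.8])     # illustrative gust forecasts (m/s)
obs = np.array([12.5, 14.2, 11.8, 15.0, 16.1])     # matching buoy observations (m/s)
print(bias_corrected_forecasts(raw, obs))
```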

  20. Reconciling experimental and static-dynamic numerical estimations of seismic anisotropy in Alpine Fault mylonites

    Science.gov (United States)

    Adam, L.; Frehner, M.; Sauer, K. M.; Toy, V.; Guerin-Marthe, S.; Boulton, C. J.

    2017-12-01

    Quartzo-feldspathic mylonites and schists are the main contributors to seismic wave anisotropy in the vicinity of the Alpine Fault (New Zealand). We must determine how the physical properties of rocks like these influence elastic wave anisotropy if we want to unravel both the reasons for heterogeneous seismic wave propagation and the deformation processes in fault zones. To study such controls on velocity anisotropy we can: 1) experimentally measure elastic wave anisotropy on cores at in-situ conditions or 2) estimate wave velocities by static (effective medium averaging) or dynamic (finite element) modelling based on EBSD data or photomicrographs. Here we compare all three approaches in a study of schist and mylonite samples from the Alpine Fault. Volumetric proportions of intrinsically anisotropic micas in cleavage domains and comparatively isotropic quartz+feldspar in microlithons commonly vary significantly within one sample. Our analysis examines the effects of these phases and their arrangement, and further addresses how heterogeneity influences elastic wave anisotropy. We compare P-wave seismic anisotropy estimates based on millimetre-scale ultrasonic waves under in situ conditions with simulations that account for micrometre-scale variations in the elastic properties of constituent minerals using the MTEX toolbox and finite-element wave propagation on EBSD images. We observe that the sorts of variations in the distribution of micas and quartz+feldspar within any one of our real core samples can change the elastic wave anisotropy by 10

  1. Numerical estimation of transport properties of cementitious materials using 3D digital images

    NARCIS (Netherlands)

    Ukrainczyk, N.; Koenders, E.A.B.; Van Breugel, K.

    2012-01-01

    A multi-scale characterisation of the transport process within a cementitious microstructure poses a great challenge in terms of modelling and schematization. In this paper a numerical method is proposed to mitigate the resolution problems in numerical methods for calculating effective transport

  2. Estimation of permeability and permeability anisotropy in horizontal wells through numerical simulation of mud filtrate invasion

    Energy Technology Data Exchange (ETDEWEB)

    Pereira, Nelson [PETROBRAS S.A., Rio de Janeiro, RJ (Brazil). Exploracao e Producao; Altman, Raphael; Rasmus, John; Oliveira, Jansen [Schlumberger Servicos de Petroleo Ltda., Rio de Janeiro, RJ (Brazil)

    2008-07-01

    This paper describes how permeability and permeability anisotropy are estimated in horizontal wells using LWD (logging-while-drilling) laterolog resistivity data. Laterolog-while-drilling resistivity passes, acquired both while drilling and in time-lapse mode (while reaming), were used to capture the invasion process. Radial positions of the water-based-mud invasion fronts were calculated from the while-drilling and reaming resistivity data. The invasion process was then recreated by constructing forward models with a fully implicit, near-wellbore numerical simulator such that the invasion front at a given time was consistent with the position of the front predicted by the resistivity inversions. The radial position of the invasion front was shown to be sensitive to formation permeability. The while-drilling environment provides a fertile scenario to investigate reservoir dynamic properties because mud cake integrity and growth are not fully developed, which means that the position of the invasion front at a particular point in time is more sensitive to formation permeability. The estimation of dynamic formation properties in horizontal wells is of particular value in marginal fields and deep-water offshore developments where running wireline and obtaining core is not always feasible, and where the accuracy of reservoir models can reduce the risk in field development decisions. (author)

  3. Numerical analysis using Sage

    CERN Document Server

    Anastassiou, George A

    2015-01-01

    This is the first numerical analysis text to use Sage for the implementation of algorithms and can be used in a one-semester course for undergraduates in mathematics, math education, computer science/information technology, engineering, and physical sciences. The primary aim of this text is to simplify understanding of the theories and ideas from a numerical analysis/numerical methods course via a modern programming language like Sage. Aside from the presentation of fundamental theoretical notions of numerical analysis throughout the text, each chapter concludes with several exercises that are oriented to real-world application. Answers may be verified using Sage. The presented code, written in core components of Sage, is backward compatible, i.e., easily applicable to other software systems such as Mathematica®. Sage is open source software and uses Python-like syntax. Previous Python programming experience is not a requirement for the reader, though familiarity with any programming language is a p...

  4. Revised age estimates of the Euphrosyne family

    Science.gov (United States)

    Carruba, Valerio; Masiero, Joseph R.; Cibulková, Helena; Aljbaae, Safwan; Espinoza Huaman, Mariela

    2015-08-01

    The Euphrosyne family, a high inclination asteroid family in the outer main belt, is considered one of the most peculiar groups of asteroids. It is characterized by the steepest size frequency distribution (SFD) among families in the main belt, and it is the only family crossed near its center by the ν6 secular resonance. Previous studies have shown that the steep size frequency distribution may be the result of the dynamical evolution of the family. In this work we further explore the unique dynamical configuration of the Euphrosyne family by refining the previous age values, considering the effects of changes in the shapes of the asteroids during the YORP cycle ("stochastic YORP"), the long-term effect of close encounters of family members with (31) Euphrosyne itself, and the effect that changing key parameters of the Yarkovsky force (such as density and thermal conductivity) has on the estimate of the family age obtained using Monte Carlo methods. Numerical simulations accounting for the interaction with the local web of secular and mean-motion resonances allow us to refine previous estimates of the family age. The cratering event that formed the Euphrosyne family most likely occurred between 560 and 1160 Myr ago, and no earlier than 1400 Myr ago when we allow for larger uncertainties in the key parameters of the Yarkovsky force.

  5. METRIC CHARACTERISTICS OF VARIOUS METHODS FOR NUMERICAL DENSITY ESTIMATION IN TRANSMISSION LIGHT MICROSCOPY – A COMPUTER SIMULATION

    Directory of Open Access Journals (Sweden)

    Miroslav Kališnik

    2011-05-01

    Full Text Available In the introduction, the evolution of methods for the numerical density estimation of particles is presented briefly. Three pairs of methods have been analysed and compared: (1) classical methods for particle counting in thin and thick sections, (2) original and modified differential counting methods and (3) physical and optical disector methods. Metric characteristics such as accuracy, efficiency, robustness, and feasibility of the methods have been estimated and compared. Logical, geometrical and mathematical analysis as well as computer simulations have been applied. In the computer simulations, a model of randomly distributed equal spheres with maximal contrast against the surroundings has been used. According to our computer simulation, all methods give accurate results provided that the sample is representative and sufficiently large. However, there are differences in their efficiency, robustness and feasibility. Efficiency and robustness increase with increasing slice thickness in all three pairs of methods. Robustness is superior in both differential and both disector methods compared to both classical methods. Feasibility can be judged according to the additional equipment as well as the histotechnical and counting procedures necessary for performing the individual counting methods. However, it is evident that not all practical problems can efficiently be solved with models.
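
    For reference, the disector estimate of numerical density compared in this study is usually written in the following standard form (stated here from general stereology, not quoted from the paper):

```latex
% Disector estimate of numerical density:
\[
  N_V \;=\; \frac{\sum Q^{-}}{h \, \sum A},
\]
% where \sum Q^{-} counts particles seen in the reference section but not in the look-up
% section, h is the disector height, and \sum A is the total sampled frame area.
```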

  6. Reasoning with Previous Decisions: Beyond the Doctrine of Precedent

    DEFF Research Database (Denmark)

    Komárek, Jan

    2013-01-01

    … law method’, but they are no less rational and intellectually sophisticated. The reason for the rather conceited attitude of some comparatists is in the dominance of the common law paradigm of precedent and the accompanying ‘case law method’. If we want to understand how courts and lawyers in different jurisdictions use previous judicial decisions in their argument, we need to move beyond the concept of precedent to a wider notion, which would embrace practices and theories in legal systems outside the Common law tradition. This article presents the concept of ‘reasoning with previous decisions’ as such an alternative and develops its basic models. The article first points out several shortcomings inherent in limiting the inquiry into reasoning with previous decisions by the common law paradigm (1). On the basis of numerous examples provided in section (1), I will present two basic models of reasoning...

  7. BCJ numerators from reduced Pfaffian

    Energy Technology Data Exchange (ETDEWEB)

    Du, Yi-Jian [Center for Theoretical Physics, School of Physics and Technology, Wuhan University,No. 299 Bayi Road, Wuhan 430072 (China); Teng, Fei [Department of Physics and Astronomy, University of Utah,115 South 1400 East, Salt Lake City, UT 84112 (United States)

    2017-04-07

    By expanding the reduced Pfaffian in the tree level Cachazo-He-Yuan (CHY) integrands for Yang-Mills (YM) and the nonlinear sigma model (NLSM), we can get the Bern-Carrasco-Johansson (BCJ) numerators in Del Duca-Dixon-Maltoni (DDM) form for an arbitrary number of particles in any spacetime dimension. In this work, we give a set of very straightforward graphic rules based on spanning trees for a direct evaluation of the BCJ numerators for YM and NLSM. Such rules can be derived from the Laplace expansion of the corresponding reduced Pfaffian. For YM, each one of the (n−2)! DDM-form BCJ numerators contains exactly (n−1)! terms, corresponding to the increasing trees with respect to the color order. For NLSM, the number of nonzero numerators is at most (n−2)!−(n−3)!, less than in several previous constructions.

  8. Nonspinning numerical relativity waveform surrogates: assessing the model

    Science.gov (United States)

    Field, Scott; Blackman, Jonathan; Galley, Chad; Scheel, Mark; Szilagyi, Bela; Tiglio, Manuel

    2015-04-01

    Recently, multi-modal gravitational waveform surrogate models have been built directly from data numerically generated by the Spectral Einstein Code (SpEC). I will describe ways in which the surrogate model error can be quantified. This task, in turn, requires (i) characterizing differences between waveforms computed by SpEC with those predicted by the surrogate model and (ii) estimating errors associated with the SpEC waveforms from which the surrogate is built. Both pieces can have numerous sources of numerical and systematic errors. We make an attempt to study the most dominant error sources and, ultimately, the surrogate model's fidelity. These investigations yield information about the surrogate model's uncertainty as a function of time (or frequency) and parameter, and could be useful in parameter estimation studies which seek to incorporate model error. Finally, I will conclude by comparing the numerical relativity surrogate model to other inspiral-merger-ringdown models. A companion talk will cover the building of multi-modal surrogate models.

  9. Estimation of Poisson-Dirichlet Parameters with Monotone Missing Data

    Directory of Open Access Journals (Sweden)

    Xueqin Zhou

    2017-01-01

    Full Text Available This article considers the estimation of the unknown numerical parameters and the density of the base measure in a Poisson-Dirichlet process prior with grouped monotone missing data. The numerical parameters are estimated by the method of maximum likelihood and the density function is estimated by the kernel method. A set of simulations was conducted, which shows that the estimates perform well.

  10. On Numerical Characteristics of a Simplex and their Estimates

    Directory of Open Access Journals (Sweden)

    M. V. Nevskii

    2016-01-01

    precisely\\(\\frac{19+5\\sqrt{13}}{9}\\. Applying this valuein numerical computations we achive the value$$\\varkappa_4 = \\frac{4+\\sqrt{13}}{5}=1.5211\\ldots$$Denote by \\(\\theta_n\\ the minimal normof interpolation projection on the space of linear functions of \\(n\\variables as an operator from\\(C(Q_n\\in \\(C(Q_n\\. It is known that, for each \\(n\\,$$\\xi_n\\leq \\frac{n+1}{2}\\left(\\theta_n-1\\right+1,$$and for \\(n=1,2,3,7\\ here we have an equality.Using computer methods we obtain the result \\(\\theta_4=\\frac{7}{3}\\.Hence, the minimal \\(n\\ such that the above inequality has a strong formis equal to 4.%, a principal architecture of common purpose CPU and its main components are discussed, CPUs evolution is considered and drawbacks that prevent future CPU development are mentioned. Further, solutions proposed so far are addressed and new CPU architecture is introduced. The proposed architecture is based on wireless cache access that enables reliable interaction between cores in multicore CPUs using terahertz band, 0.1-10THz. The presented architecture addresses the scalability problem of existing processors and may potentially allow to scale them to tens of cores. As in-depth analysis of the applicability of suggested architecture requires accurate prediction of traffic in current and next generations of processors we then consider a set of approaches for traffic estimation in modern CPUs discussing their benefits and drawbacks. The authors identify traffic measurements using existing software tools as the most promising approach for traffic estimation, and use Intel Performance Counter Monitor for this purpose. Three types of CPU loads are considered including two artificial tests and background system load. For each load type the amount of data transmitted through the L2-L3 interface is reported for various input parameters including the number of active cores and their dependences on number of cores and operational frequency.

  11. Reconciling estimates of the ratio of heat and salt fluxes at the ice-ocean interface

    Science.gov (United States)

    Keitzl, T.; Mellado, J. P.; Notz, D.

    2016-12-01

    The heat exchange between floating ice and the underlying ocean is determined by the interplay of diffusive fluxes directly at the ice-ocean interface and turbulent fluxes away from it. In this study, we examine this interplay through direct numerical simulations of free convection. Our results show that an estimation of the interface flux ratio based on direct measurements of the turbulent fluxes can be difficult because the flux ratio varies with depth. As an alternative, we present a consistent evaluation of the flux ratio based on the total heat and salt fluxes across the boundary layer. This approach allows us to reconcile previous estimates of the ice-ocean interface conditions. We find that the ratio of heat and salt fluxes directly at the interface is 83-100 rather than 33 as determined by previous turbulence measurements in the outer layer. This can cause errors in the estimated ice-ablation rate from field measurements of up to 40% if they are based on the three-equation formulation.

  12. Numerical computation of the transport matrix in toroidal plasma with a stochastic magnetic field

    Science.gov (United States)

    Zhu, Siqiang; Chen, Dunqiang; Dai, Zongliang; Wang, Shaojie

    2018-04-01

    A new numerical method, based on integrating along the full orbit of guiding centers, to compute the transport matrix is realized. The method is successfully applied to compute the phase-space diffusion tensor of passing electrons in a tokamak with a stochastic magnetic field. The new method also computes the Lagrangian correlation function, which can be used to evaluate the Lagrangian correlation time and the turbulence correlation length. For the case of the stochastic magnetic field, we find that the order of magnitude of the parallel correlation length can be estimated by qR0, as expected previously.

  13. Estimating groundwater-ephemeral stream exchange in hyper-arid environments: Field experiments and numerical simulations

    Science.gov (United States)

    Wang, Ping; Pozdniakov, Sergey P.; Vasilevskiy, Peter Yu.

    2017-12-01

    Surface water infiltration from ephemeral dryland streams is particularly important in hyporheic exchange and biogeochemical processes in arid and semi-arid regions. However, streamflow transmission losses can vary significantly, partly due to spatiotemporal variations in streambed permeability. To extend our understanding of changes in streambed hydraulic properties, field investigations of streambed hydraulic conductivity were conducted in an ephemeral dryland stream in north-western China during high and low streamflow periods. Additionally, streamflow transmission losses were numerically estimated using combined stream and groundwater hydraulic head data and stream and streambed temperature data. An analysis of slug test data at two different river flow stages (one test was performed at a low river stage with clean water and the other at a high river stage with muddy water) suggested that sedimentation from fine-grained particles, i.e., physical clogging processes, likely led to a reduction in streambed hydraulic properties. To account for the effects of streambed clogging on changes in hydraulic properties, an iteratively increasing total hydraulic resistance during the slug test was considered to correct the estimation of streambed hydraulic conductivity. The stream and streambed temperature can also greatly influence the hydraulic properties of the streambed. One-dimensional coupled water and heat flux modelling with HYDRUS-1D was used to quantify the effects of seasonal changes in stream and streambed temperature on streamflow losses. During the period from 6 August 2014 to 4 June 2015, the total infiltration estimated using temperature-dependent hydraulic conductivity accounted for approximately 88% of that using temperature-independent hydraulic conductivity. Streambed clogging processes associated with fine particle settling/wash up cycles during flow events, and seasonal changes in streamflow temperature are two considerable factors that affect water
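
    One of the two factors highlighted here, the dependence of streamflow losses on water temperature, follows from the fact that hydraulic conductivity scales with the inverse of water viscosity. The sketch below uses a Vogel-type empirical viscosity formula for liquid water and neglects density variation; the reference conductivity and temperatures are assumed values, and this is not the paper's HYDRUS-1D setup.

```python
# Temperature correction of streambed hydraulic conductivity via water viscosity.
def water_viscosity(temp_c: float) -> float:
    """Dynamic viscosity of water in Pa*s for a temperature in degrees Celsius."""
    t_kelvin = temp_c + 273.15
    return 2.414e-5 * 10.0 ** (247.8 / (t_kelvin - 140.0))

def conductivity_at_temperature(k_ref: float, temp_ref_c: float, temp_c: float) -> float:
    """Scale a reference hydraulic conductivity k_ref measured at temp_ref_c to temp_c."""
    return k_ref * water_viscosity(temp_ref_c) / water_viscosity(temp_c)

k_summer = 5.0e-5                                           # m/s at 20 degC, illustrative
print(conductivity_at_temperature(k_summer, 20.0, 5.0))     # noticeably smaller in winter
```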

  14. A methodology for modeling photocatalytic reactors for indoor pollution control using previously estimated kinetic parameters

    Energy Technology Data Exchange (ETDEWEB)

    Passalia, Claudio; Alfano, Orlando M. [INTEC - Instituto de Desarrollo Tecnologico para la Industria Quimica, CONICET - UNL, Gueemes 3450, 3000 Santa Fe (Argentina); FICH - Departamento de Medio Ambiente, Facultad de Ingenieria y Ciencias Hidricas, Universidad Nacional del Litoral, Ciudad Universitaria, 3000 Santa Fe (Argentina); Brandi, Rodolfo J., E-mail: rbrandi@santafe-conicet.gov.ar [INTEC - Instituto de Desarrollo Tecnologico para la Industria Quimica, CONICET - UNL, Gueemes 3450, 3000 Santa Fe (Argentina); FICH - Departamento de Medio Ambiente, Facultad de Ingenieria y Ciencias Hidricas, Universidad Nacional del Litoral, Ciudad Universitaria, 3000 Santa Fe (Argentina)

    2012-04-15

    Highlights: ► Indoor pollution control via photocatalytic reactors. ► Scaling-up methodology based on previously determined mechanistic kinetics. ► Radiation interchange model between catalytic walls using configuration factors. ► Modeling and experimental validation of a complex geometry photocatalytic reactor. - Abstract: A methodology for modeling photocatalytic reactors for their application in indoor air pollution control is carried out. The methodology implies, firstly, the determination of intrinsic reaction kinetics for the removal of formaldehyde. This is achieved by means of a simple geometry, continuous reactor operating under kinetic control regime and steady state. The kinetic parameters were estimated from experimental data by means of a nonlinear optimization algorithm. The second step was the application of the obtained kinetic parameters to a very different photoreactor configuration. In this case, the reactor is a corrugated wall type using nanosize TiO2 as catalyst irradiated by UV lamps that provided a spatially uniform radiation field. The radiative transfer within the reactor was modeled through a superficial emission model for the lamps, the ray tracing method and the computation of view factors. The velocity and concentration fields were evaluated by means of a commercial CFD tool (Fluent 12) where the radiation model was introduced externally. The results of the model were compared experimentally in a corrugated wall, bench scale reactor constructed in the laboratory. The overall pollutant conversion showed good agreement between model predictions and experiments, with a root mean square error less than 4%.

  15. Numerical estimation of temperature field in a laser welded butt joint made of dissimilar materials

    Directory of Open Access Journals (Sweden)

    Saternus Zbigniew

    2018-01-01

    Full Text Available The paper concerns the numerical analysis of thermal phenomena occurring in the butt welding of two different materials by laser beam welding. The temperature distribution for the welded butt joint is obtained on the basis of numerical simulations performed in the ABAQUS program. The numerical analysis takes into account the thermophysical properties of a welded plate made of two different materials. The temperature distribution in the analysed joints is obtained from numerical simulation in the Abaqus/Standard solver, which allowed the geometry of the laser-welded butt joint to be determined.

  16. A numerical integration-based yield estimation method for integrated circuits

    International Nuclear Information System (INIS)

    Liang Tao; Jia Xinzhang

    2011-01-01

    A novel integration-based yield estimation method is developed for yield optimization of integrated circuits. This method tries to integrate the joint probability density function on the acceptability region directly. To achieve this goal, the simulated performance data of unknown distribution should be converted to follow a multivariate normal distribution by using Box-Cox transformation (BCT). In order to reduce the estimation variances of the model parameters of the density function, orthogonal array-based modified Latin hypercube sampling (OA-MLHS) is presented to generate samples in the disturbance space during simulations. The principle of variance reduction of model parameters estimation through OA-MLHS together with BCT is also discussed. Two yield estimation examples, a fourth-order OTA-C filter and a three-dimensional (3D) quadratic function are used for comparison of our method with Monte Carlo based methods including Latin hypercube sampling and importance sampling under several combinations of sample sizes and yield values. Extensive simulations show that our method is superior to other methods with respect to accuracy and efficiency under all of the given cases. Therefore, our method is more suitable for parametric yield optimization. (semiconductor integrated circuits)

  17. A numerical integration-based yield estimation method for integrated circuits

    Energy Technology Data Exchange (ETDEWEB)

    Liang Tao; Jia Xinzhang, E-mail: tliang@yahoo.cn [Key Laboratory of Ministry of Education for Wide Bandgap Semiconductor Materials and Devices, School of Microelectronics, Xidian University, Xi'an 710071 (China)

    2011-04-15

    A novel integration-based yield estimation method is developed for yield optimization of integrated circuits. This method tries to integrate the joint probability density function on the acceptability region directly. To achieve this goal, the simulated performance data of unknown distribution should be converted to follow a multivariate normal distribution by using Box-Cox transformation (BCT). In order to reduce the estimation variances of the model parameters of the density function, orthogonal array-based modified Latin hypercube sampling (OA-MLHS) is presented to generate samples in the disturbance space during simulations. The principle of variance reduction of model parameters estimation through OA-MLHS together with BCT is also discussed. Two yield estimation examples, a fourth-order OTA-C filter and a three-dimensional (3D) quadratic function are used for comparison of our method with Monte Carlo based methods including Latin hypercube sampling and importance sampling under several combinations of sample sizes and yield values. Extensive simulations show that our method is superior to other methods with respect to accuracy and efficiency under all of the given cases. Therefore, our method is more suitable for parametric yield optimization. (semiconductor integrated circuits)
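
    Two of the ingredients named in this abstract, Latin hypercube sampling of the disturbance space and a Box-Cox transformation of the performance data toward normality, can be illustrated in a few lines. The mock "circuit performance" function, bounds and specification limit below are assumptions, and the orthogonal-array modification and the direct integration over the acceptability region are not reproduced.

```python
import numpy as np
from scipy.stats import boxcox, norm, qmc

# Latin hypercube sample of a 2-D disturbance space, a mock performance evaluated at the
# samples, a Box-Cox transform toward normality, and a parametric yield estimate against
# an upper specification limit (assumes the fitted Box-Cox lambda is nonzero).
rng = np.random.default_rng(7)
sampler = qmc.LatinHypercube(d=2, seed=7)
disturbances = qmc.scale(sampler.random(n=200), l_bounds=[-3.0, -3.0], u_bounds=[3.0, 3.0])

def performance(d):                # stand-in for the circuit simulation, illustrative only
    return (10.0 + 0.8 * d[:, 0] - 0.5 * d[:, 1] + 0.1 * d[:, 0] ** 2
            + rng.normal(0.0, 0.05, len(d)))

y = performance(disturbances)
y_t, lam = boxcox(y)                               # transformed data and fitted lambda
spec_t = (12.0 ** lam - 1.0) / lam                 # upper spec limit mapped through the transform
yield_estimate = norm.cdf(spec_t, loc=y_t.mean(), scale=y_t.std(ddof=1))
print("estimated parametric yield:", yield_estimate)
```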

  18. Sub-dominant cogrowth behaviour and the viability of deciding amenability numerically

    OpenAIRE

    Elder, Murray; Rogers, Cameron

    2016-01-01

    We critically analyse a recent numerical method due to the first author, Rechnitzer and van Rensburg, which attempts to detect amenability or non-amenability in a finitely generated group by numerically estimating its asymptotic cogrowth rate. We identify two potential sources of error. We then propose a modification of the method that enables it to easily compute surprisingly accurate estimates for initial terms of the cogrowth sequence.

  19. Observer variability in estimating numbers: An experiment

    Science.gov (United States)

    Erwin, R.M.

    1982-01-01

    Census estimates of bird populations provide an essential framework for a host of research and management questions. However, with some exceptions, the reliability of numerical estimates and the factors influencing them have received insufficient attention. Independent of the problems associated with habitat type, weather conditions, cryptic coloration, etc., estimates may vary widely due only to intrinsic differences in observers' abilities to estimate numbers. Lessons learned in the field of perceptual psychology may be usefully applied to 'real world' problems in field ornithology. Based largely on dot discrimination tests in the laboratory, it was found that numerical abundance, density of objects, spatial configuration, color, background, and other variables influence individual accuracy in estimating numbers. The primary purpose of the present experiment was to assess the effects of observer, prior experience, and numerical range on accuracy in estimating numbers of waterfowl from black-and-white photographs. By using photographs of animals rather than black dots, I felt the results could be applied more meaningfully to field situations. Further, reinforcement was provided throughout some experiments to examine the influence of training on accuracy.

  20. Review of Methods and Approaches for Deriving Numeric ...

    Science.gov (United States)

    EPA will propose numeric criteria for nitrogen/phosphorus pollution to protect estuaries, coastal areas and South Florida inland flowing waters that have been designated Class I, II and III, as well as downstream protective values (DPVs) to protect estuarine and marine waters. In accordance with the formal determination and pursuant to a subsequent consent decree, these numeric criteria are being developed to translate and implement Florida’s existing narrative nutrient criterion, to protect the designated use that Florida has previously set for these waters, at Rule 62-302.530(47)(b), F.A.C., which provides that “In no case shall nutrient concentrations of a body of water be altered so as to cause an imbalance in natural populations of aquatic flora or fauna.” Under the Clean Water Act and EPA’s implementing regulations, these numeric criteria must be based on sound scientific rationale and reflect the best available scientific knowledge. EPA has previously published a series of peer reviewed technical guidance documents to develop numeric criteria to address nitrogen/phosphorus pollution in different water body types. EPA recognizes that available and reliable data sources for use in numeric criteria development vary across estuarine and coastal waters in Florida and flowing waters in South Florida. In addition, scientifically defensible approaches for numeric criteria development have different requirements that must be taken into consideration

  1. Extensible numerical library in JAVA

    International Nuclear Information System (INIS)

    Aso, T.; Okazawa, H.; Takashimizu, N.

    2001-01-01

    The authors present the current status of the project to develop a numerical library in JAVA. At a previous conference, the authors presented how object-oriented techniques improve both the usage and the development of numerical libraries compared with the conventional approach. Many functions needed for data analysis are not provided by the JAVA language itself, for example good random number generators, special functions and so on. The development strategy is focused on ease of implementation and on adding new features by users themselves, not only by developers. In the HPC field there are other efforts to develop numerical libraries in JAVA; however, their focus is on execution performance, not ease of extension. Following this strategy, the authors have designed and implemented more classes for random number generators and so on

  2. Numerical Calculation of Transport Based on the Drift-Kinetic Equation for Plasmas in General Toroidal Magnetic Geometry: Numerical Methods

    International Nuclear Information System (INIS)

    Reynolds, J. M.; Lopez-Bruna, D.

    2009-01-01

    In this report we continue the description of a newly developed numerical method to solve the drift kinetic equation for ions and electrons in toroidal plasmas. Several numerical aspects, already outlined in a previous report [Informes Tecnicos Ciemat 1165, mayo 2009], will now be treated in more detail. Aside from discussing the method in the context of other existing codes, various aspects will be explained from the viewpoint of numerical methods: the way convection equations are solved, the adopted boundary conditions, the real-space meshing procedures along with new software developed to build them, and some additional questions related to the parallelization and the numerical integration. (Author) 16 refs

  3. Evaluation of steel corrosion by numerical analysis

    OpenAIRE

    Kawahigashi, Tatsuo

    2017-01-01

    Recently, various non-destructive and numerical methods have been used and many cases of steel corrosion have been examined. For example, methods for evaluating corrosion through various numerical analyses and for evaluating macro-cell and micro-cell corrosion from measurements have been proposed. However, there are few reports on estimating corrosion loss that distinguish between macro-cell and micro-cell corrosion and that resemble the actual phenomenon. In this study, for distinguishin...

  4. Modelling landscape-level numerical responses of predators to prey: the case of cats and rabbits.

    Directory of Open Access Journals (Sweden)

    Jennyffer Cruz

    Full Text Available Predator-prey systems can extend over large geographical areas but empirical modelling of predator-prey dynamics has been largely limited to localised scales. This is due partly to difficulties in estimating predator and prey abundances over large areas. Collection of data at suitably large scales has been a major problem in previous studies of European rabbits (Oryctolagus cuniculus and their predators. This applies in Western Europe, where conserving rabbits and predators such as Iberian lynx (Lynx pardinus is important, and in other parts of the world where rabbits are an invasive species supporting populations of introduced, and sometimes native, predators. In pastoral regions of New Zealand, rabbits are the primary prey of feral cats (Felis catus that threaten native fauna. We estimate the seasonal numerical response of cats to fluctuations in rabbit numbers in grassland-shrubland habitat across the Otago and Mackenzie regions of the South Island of New Zealand. We use spotlight counts over 1645 km of transects to estimate rabbit and cat abundances with a novel modelling approach that accounts simultaneously for environmental stochasticity, density dependence and varying detection probability. Our model suggests that cat abundance is related consistently to rabbit abundance in spring and summer, possibly through increased rabbit numbers improving the fecundity and juvenile survival of cats. Maintaining rabbits at low abundance should therefore suppress cat numbers, relieving predation pressure on native prey. Our approach provided estimates of the abundance of cats and rabbits over a large geographical area. This was made possible by repeated sampling within each season, which allows estimation of detection probabilities. A similar approach could be applied to predator-prey systems elsewhere, and could be adapted to any method of direct observation in which there is no double-counting of individuals. Reliable estimates of numerical

  5. Numerical modeling of economic uncertainty

    DEFF Research Database (Denmark)

    Schjær-Jacobsen, Hans

    2007-01-01

    Representation and modeling of economic uncertainty is addressed by different modeling methods, namely stochastic variables and probabilities, interval analysis, and fuzzy numbers, in particular triple estimates. Focusing on discounted cash flow analysis, numerical results are presented, comparisons are made between alternative modeling methods, and characteristics of the methods are discussed....
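
    A small illustration of the triple-estimate idea applied to a discounted cash flow (my own sketch, not one of the paper's cases): each yearly cash flow is given as a pessimistic/most-likely/optimistic triple, and because the net present value is monotone in every cash flow, the three NPVs follow by propagating the corresponding components directly. The figures are assumed.

```python
# Triple-estimate (pessimistic, most likely, optimistic) discounted cash flow.
def npv(cash_flows, rate, investment):
    return sum(cf / (1.0 + rate) ** (t + 1) for t, cf in enumerate(cash_flows)) - investment

triple_cash_flows = [(30.0, 40.0, 55.0), (35.0, 45.0, 60.0), (35.0, 50.0, 65.0)]  # per year
rate, investment = 0.08, 100.0

npv_triple = tuple(npv([cf[i] for cf in triple_cash_flows], rate, investment) for i in range(3))
print("pessimistic / most likely / optimistic NPV:", npv_triple)
```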

  6. Stochastic spectral Galerkin and collocation methods for PDEs with random coefficients: A numerical comparison

    KAUST Repository

    Bäck, Joakim

    2010-09-17

    Much attention has recently been devoted to the development of Stochastic Galerkin (SG) and Stochastic Collocation (SC) methods for uncertainty quantification. An open and relevant research topic is the comparison of these two methods. By introducing a suitable generalization of the classical sparse grid SC method, we are able to compare SG and SC on the same underlying multivariate polynomial space in terms of accuracy vs. computational work. The approximation spaces considered here include isotropic and anisotropic versions of Tensor Product (TP), Total Degree (TD), Hyperbolic Cross (HC) and Smolyak (SM) polynomials. Numerical results for linear elliptic SPDEs indicate a slight computational work advantage of isotropic SC over SG, with SC-SM and SG-TD being the best choices of approximation spaces for each method. Finally, numerical results corroborate the optimality of the theoretical estimate of anisotropy ratios introduced by the authors in a previous work for the construction of anisotropic approximation spaces. © 2011 Springer.

  7. Order statistics & inference estimation methods

    CERN Document Server

    Balakrishnan, N

    1991-01-01

    The literature on order statistics and inference is quite extensive and covers a large number of fields, but most of it is dispersed throughout numerous publications. This volume is the consolidation of the most important results and places an emphasis on estimation. Both theoretical and computational procedures are presented to meet the needs of researchers, professionals, and students. The methods of estimation discussed are well illustrated with numerous practical examples from both the physical and life sciences, including sociology, psychology, and electrical and chemical engineering. A co

  8. Numerical models for differential problems

    CERN Document Server

    Quarteroni, Alfio

    2017-01-01

    In this text, we introduce the basic concepts for the numerical modelling of partial differential equations. We consider the classical elliptic, parabolic and hyperbolic linear equations, but also the diffusion, transport, and Navier-Stokes equations, as well as equations representing conservation laws, saddle-point problems and optimal control problems. Furthermore, we provide numerous physical examples which underline such equations. We then analyze numerical solution methods based on finite elements, finite differences, finite volumes, spectral methods and domain decomposition methods, and reduced basis methods. In particular, we discuss the algorithmic and computer implementation aspects and provide a number of easy-to-use programs. The text does not require any previous advanced mathematical knowledge of partial differential equations: the absolutely essential concepts are reported in a preliminary chapter. It is therefore suitable for students of bachelor and master courses in scientific disciplines, an...

  9. Estimation of flushing time in a monsoonal estuary using observational and numerical approaches

    Digital Repository Service at National Institute of Oceanography (India)

    Manoj, N.T.

    and numerical model simulations to correlate TF with monthly mean river discharges. The power regression equation derived from FOS (numerical model) showed good statistical fit with data (r=-0.997 (-1.0)) for any given river discharge compared... was to calculate the TF in the Mandovi during three different seasons in a year. For this purpose, we adopted two approaches, first the computation of TF from FOS. The application of H2N-Model was another approach to calculate the TF in the estuary. The FWF...

  10. Duration and numerical estimation in right brain-damaged patients with and without neglect: Lack of support for a mental time line.

    Science.gov (United States)

    Masson, Nicolas; Pesenti, Mauro; Dormal, Valérie

    2016-08-01

    Previous studies have shown that left neglect patients are impaired when they have to orient their attention leftward relative to a standard in numerical comparison tasks. This finding has been accounted for by the idea that numerical magnitudes are represented along a spatial continuum oriented from left to right with small magnitudes on the left and large magnitudes on the right. Similarly, it has been proposed that duration could be represented along a mental time line that shares the properties of the number continuum. By comparing directly duration and numerosity processing, this study investigates whether or not the performance of neglect patients supports the hypothesis of a mental time line. Twenty-two right brain-damaged patients (11 with and 11 without left neglect), as well as 11 age-matched healthy controls, had to judge whether a single dot presented visually lasted shorter or longer than 500 ms and whether a sequence of flashed dots was smaller or larger than 5. Digit spans were also assessed to measure verbal working memory capacities. In duration comparison, no spatial-duration bias was found in neglect patients. Moreover, a significant correlation between verbal working memory and duration performance was observed in right brain-damaged patients, irrespective of the presence or absence of neglect. In numerical comparison, only neglect patients showed an enhanced distance effect for numerical magnitude smaller than the standard. These results do not support the hypothesis of the existence of a mental continuum oriented from left to right for duration. We discuss an alternative account to explain the duration impairment observed in right brain-damaged patients. © 2015 The British Psychological Society.

  11. Risk approximation in decision making: approximative numeric abilities predict advantageous decisions under objective risk.

    Science.gov (United States)

    Mueller, Silke M; Schiebener, Johannes; Delazer, Margarete; Brand, Matthias

    2018-01-22

    Many decision situations in everyday life involve mathematical considerations. In decisions under objective risk, i.e., when explicit numeric information is available, executive functions and abilities to handle exact numbers and ratios are predictors of objectively advantageous choices. Although still debated, exact numeric abilities, e.g., normative calculation skills, are assumed to be related to approximate number processing skills. The current study investigates the effects of approximative numeric abilities on decision making under objective risk. Participants (N = 153) performed a paradigm measuring number-comparison, quantity-estimation, risk-estimation, and decision-making skills on the basis of rapid dot comparisons. Additionally, a risky decision-making task with exact numeric information was administered, and tasks measuring executive functions and exact numeric abilities, e.g., mental calculation and ratio processing skills, were conducted. Approximative numeric abilities significantly predicted advantageous decision making, even beyond the effects of executive functions and exact numeric skills. Especially being able to make accurate risk estimations seemed to contribute to superior choices. We recommend approximation skills and approximate number processing to be a subject of future investigations on decision making under risk.

  12. Comparisons of Crosswind Velocity Profile Estimates Used in Fast-Time Wake Vortex Prediction Models

    Science.gov (United States)

    Pruis, Mathew J.; Delisi, Donald P.; Ahmad, Nashat N.

    2011-01-01

    Five methods for estimating crosswind profiles used in fast-time wake vortex prediction models are compared in this study. Previous investigations have shown that temporal and spatial variations in the crosswind vertical profile have a large impact on the transport and time evolution of the trailing vortex pair. The most important crosswind parameters are the magnitude of the crosswind and the gradient in the crosswind shear. It is known that pulsed and continuous wave lidar measurements can provide good estimates of the wind profile in the vicinity of airports. In this study comparisons are made between estimates of the crosswind profiles from a priori information on the trajectory of the vortex pair as well as crosswind profiles derived from different sensors and a regional numerical weather prediction model.

  13. Numerical estimation of structural integrity of salt cavern wells.

    NARCIS (Netherlands)

    Orlic, B.; Thienen-Visser, K. van; Schreppers, G.J.

    2016-01-01

    Finite element analyses were performed to estimate axial deformation of cavern wells due to gas storage operations in solution-mined salt caverns. Caverns shrink over time due to salt creep and the cavern roof subsides potentially threatening well integrity. Cavern deformation, deformation of salt

  14. Response to health insurance by previously uninsured rural children.

    Science.gov (United States)

    Tilford, J M; Robbins, J M; Shema, S J; Farmer, F L

    1999-08-01

    To examine the healthcare utilization and costs of previously uninsured rural children. Four years of claims data from a school-based health insurance program located in the Mississippi Delta. All children who were not Medicaid-eligible or were uninsured, were eligible for limited benefits under the program. The 1987 National Medical Expenditure Survey (NMES) was used to compare utilization of services. The study represents a natural experiment in the provision of insurance benefits to a previously uninsured population. Premiums for the claims cost were set with little or no information on expected use of services. Claims from the insurer were used to form a panel data set. Mixed model logistic and linear regressions were estimated to determine the response to insurance for several categories of health services. The use of services increased over time and approached the level of utilization in the NMES. Conditional medical expenditures also increased over time. Actuarial estimates of claims cost greatly exceeded actual claims cost. The provision of a limited medical, dental, and optical benefit package cost approximately $20-$24 per member per month in claims paid. An important uncertainty in providing health insurance to previously uninsured populations is whether a pent-up demand exists for health services. Evidence of a pent-up demand for medical services was not supported in this study of rural school-age children. States considering partnerships with private insurers to implement the State Children's Health Insurance Program could lower premium costs by assembling basic data on previously uninsured children.

  15. A numerical simulation of pre-big bang cosmology

    CERN Document Server

    Maharana, J P; Veneziano, Gabriele

    1998-01-01

    We analyse numerically the onset of pre-big bang inflation in an inhomogeneous, spherically symmetric Universe. Adding a small dilatonic perturbation to a trivial (Milne) background, we find that suitable regions of space undergo dilaton-driven inflation and quickly become spatially flat ($\\Omega \\to 1$). Numerical calculations are pushed close enough to the big bang singularity to allow cross checks against previously proposed analytic asymptotic solutions.

  16. On Kolmogorov asymptotics of estimators of the misclassification error rate in linear discriminant analysis

    KAUST Repository

    Zollanvari, Amin

    2013-05-24

    We provide a fundamental theorem that can be used in conjunction with Kolmogorov asymptotic conditions to derive the first moments of well-known estimators of the actual error rate in linear discriminant analysis of a multivariate Gaussian model under the assumption of a common known covariance matrix. The estimators studied in this paper are plug-in and smoothed resubstitution error estimators, both of which have not been studied before under Kolmogorov asymptotic conditions. As a result of this work, we present an optimal smoothing parameter that makes the smoothed resubstitution an unbiased estimator of the true error. For the sake of completeness, we further show how to utilize the presented fundamental theorem to achieve several previously reported results, namely the first moment of the resubstitution estimator and the actual error rate. We provide numerical examples to show the accuracy of the succeeding finite sample approximations in situations where the number of dimensions is comparable or even larger than the sample size.

  17. On Kolmogorov asymptotics of estimators of the misclassification error rate in linear discriminant analysis

    KAUST Repository

    Zollanvari, Amin; Genton, Marc G.

    2013-01-01

    We provide a fundamental theorem that can be used in conjunction with Kolmogorov asymptotic conditions to derive the first moments of well-known estimators of the actual error rate in linear discriminant analysis of a multivariate Gaussian model under the assumption of a common known covariance matrix. The estimators studied in this paper are plug-in and smoothed resubstitution error estimators, both of which have not been studied before under Kolmogorov asymptotic conditions. As a result of this work, we present an optimal smoothing parameter that makes the smoothed resubstitution an unbiased estimator of the true error. For the sake of completeness, we further show how to utilize the presented fundamental theorem to achieve several previously reported results, namely the first moment of the resubstitution estimator and the actual error rate. We provide numerical examples to show the accuracy of the succeeding finite sample approximations in situations where the number of dimensions is comparable or even larger than the sample size.

  18. β- and γ-comparative dose estimates on Enewetak Atoll

    Energy Technology Data Exchange (ETDEWEB)

    Crase, K.W.; Gudiksen, P.H.; Robison, W.L. (California Univ., Livermore (USA). Lawrence Livermore National Lab.)

    1982-05-01

    Enewetak Atoll in the Pacific was used for atmospheric testing of U.S. nuclear weapons. Beta dose and γ-ray exposure measurements were made on two islands of the Enewetak Atoll during July-August 1976 to determine the β and low-energy γ contribution to the total external radiation doses to the returning Marshallese. Measurements were made at numerous locations with thermoluminescent dosimeters (TLD), pressurized ionization chambers, portable NaI detectors, and thin-window pancake GM probes. Results of the TLD measurements with and without a β-attenuator indicate that approx. 29% of the total dose rate at 1 m in air is due to the β or low-energy γ contribution. The contribution at any particular site, however, is reduced by vegetation. Integral 30-yr external shallow dose estimates for future inhabitants were made and compared with external dose estimates of a previous large-scale radiological survey. Integral 30-yr shallow external dose estimates are 25-50% higher than whole-body estimates. Due to the low penetrating ability of the β's or low-energy γ's, however, several remedial actions can be taken to reduce the shallow dose contribution to the total external dose.

  19. Estimation of complex permittivity using loop antenna

    DEFF Research Database (Denmark)

    Lenler-Eriksen, Hans-Rudolph; Meincke, Peter

    2004-01-01

    A method for estimating the complex permittivity of materials in the vicinity of a loop antenna is proposed. The method is based on comparing measured and numerically calculated input admittances for the loop antenna.

  20. A numerical method to estimate AC loss in superconducting coated conductors by finite element modelling

    Energy Technology Data Exchange (ETDEWEB)

    Hong, Z; Jiang, Q; Pei, R; Campbell, A M; Coombs, T A [Engineering Department, University of Cambridge, Trumpington Street, Cambridge CB2 1PZ (United Kingdom)

    2007-04-15

    A finite element method code based on the critical state model is proposed to solve the AC loss problem in YBCO coated conductors. This numerical method is based on a set of partial differential equations (PDEs) in which the magnetic field is used as the state variable. The AC loss problem has been investigated under both self-field and external-field conditions. Two numerical approaches have been introduced: the first model is configured on the cross-section plane of the YBCO tape to simulate an infinitely long superconducting tape. The second model represents the plane in which the critical current flows and is able to simulate a YBCO tape of finite length where the end effect is accounted for. An AC loss measurement has been performed to verify the numerical results and shows good agreement with the numerical solution.

  1. Constructing exact symmetric informationally complete measurements from numerical solutions

    Science.gov (United States)

    Appleby, Marcus; Chien, Tuan-Yow; Flammia, Steven; Waldron, Shayne

    2018-04-01

    Recently, several intriguing conjectures have been proposed connecting symmetric informationally complete quantum measurements (SIC POVMs, or SICs) and algebraic number theory. These conjectures relate the SICs to their minimal defining algebraic number field. Testing or sharpening these conjectures requires that the SICs are expressed exactly, rather than as numerical approximations. While many exact solutions of SICs have been constructed previously using Gröbner bases, this method has probably been taken as far as is possible with current computer technology (except in special cases where there are additional symmetries). Here, we describe a method for converting high-precision numerical solutions into exact ones using an integer relation algorithm in conjunction with the Galois symmetries of an SIC. Using this method, we have calculated 69 new exact solutions, including nine new dimensions, where previously only numerical solutions were known—which more than triples the number of known exact solutions. In some cases, the solutions require number fields with degrees as high as 12 288. We use these solutions to confirm that they obey the number-theoretic conjectures, and address two questions suggested by the previous work.
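
    The key ingredient described above, turning a high-precision numerical value into an exact algebraic one via an integer relation algorithm, can be sketched as follows. This is only an illustration of the idea, not the authors' pipeline (which also exploits the Galois symmetries of the SIC); mpmath's PSLQ routine is used, and the golden ratio stands in for a numerically computed SIC quantity.

```python
# Sketch: recover a candidate minimal polynomial from a high-precision value.
from mpmath import mp, sqrt, pslq

mp.dps = 50                        # work at 50 significant digits
x = (1 + sqrt(5)) / 2              # stand-in for a numerically known SIC quantity

# Look for small integers c0, c1, c2 with c0 + c1*x + c2*x**2 approximately 0.
relation = pslq([x**k for k in range(3)], maxcoeff=10**6, maxsteps=10**4)
print(relation)                    # e.g. [-1, -1, 1]  ->  x**2 - x - 1 = 0
```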

  2. Numerical and Experimental Modal Control of Flexible Rotor Using Electromagnetic Actuator

    Directory of Open Access Journals (Sweden)

    Edson Hideki Koroishi

    2014-01-01

    Full Text Available The present work is dedicated to active modal control applied to flexible rotors. The effectiveness of the corresponding techniques for controlling a flexible rotor is tested numerically and experimentally. Two different approaches are used to determine the appropriate controllers. The first uses the linear quadratic regulator and the second approach is fuzzy modal control. This paper is focused on the electromagnetic actuator, which in this case is part of a hybrid bearing. For numerical reasons it was necessary to reduce the size of the model of the rotating system so that the design of the controllers and estimator could be performed. The role of the Kalman estimator in the present contribution is to estimate the modal states of the system and to determine the displacement of the rotor at the position of the hybrid bearing. Finally, numerical and experimental results demonstrate the success of the proposed methodology.
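
    As a rough sketch of the first approach (linear quadratic regulator) on a generic reduced state-space model, the feedback gain can be obtained from the continuous-time algebraic Riccati equation; the matrices below are toy values for illustration, not the rotor model of the paper.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

def lqr_gain(A, B, Q, R):
    """Continuous-time LQR: solve the algebraic Riccati equation and
    return the state-feedback gain K such that u = -K x."""
    P = solve_continuous_are(A, B, Q, R)
    return np.linalg.solve(R, B.T @ P)          # K = R^{-1} B^T P

# Toy 2-state example (a lightly damped oscillator, not the rotor model).
A = np.array([[0.0, 1.0], [-4.0, -0.1]])
B = np.array([[0.0], [1.0]])
K = lqr_gain(A, B, Q=np.eye(2), R=np.array([[1.0]]))
print(K)
```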

  3. Is 27 a Big Number? Correlational and Causal Connections among Numerical Categorization, Number Line Estimation, and Numerical Magnitude Comparison

    Science.gov (United States)

    Laski, Elida V.; Siegler, Robert S.

    2007-01-01

    This study examined the generality of the logarithmic to linear transition in children's representations of numerical magnitudes and the role of subjective categorization of numbers in the acquisition of more advanced understanding. Experiment 1 (49 girls and 41 boys, ages 5-8 years) suggested parallel transitions from kindergarten to second grade…

  4. Overconfidence in Interval Estimates

    Science.gov (United States)

    Soll, Jack B.; Klayman, Joshua

    2004-01-01

    Judges were asked to make numerical estimates (e.g., "In what year was the first flight of a hot air balloon?"). Judges provided high and low estimates such that they were X% sure that the correct answer lay between them. They exhibited substantial overconfidence: The correct answer fell inside their intervals much less than X% of the time. This…

  5. Comparison of scale analysis and numerical simulation for saturated zone convective mixing processes

    International Nuclear Information System (INIS)

    Oldenburg, C.M.

    1998-01-01

    Scale analysis can be used to predict a variety of quantities arising from natural systems where processes are described by partial differential equations. For example, scale analysis can be applied to estimate the effectiveness of convective mixing on the dilution of contaminants in groundwater. Scale analysis involves substituting simple quotients for partial derivatives and identifying and equating the dominant terms in an order-of-magnitude sense. For free convection due to sidewall heating of saturated porous media, scale analysis shows that vertical convective velocity in the thermal boundary layer region is proportional to the Rayleigh number, horizontal convective velocity is proportional to the square root of the Rayleigh number, and thermal boundary layer thickness is proportional to the inverse square root of the Rayleigh number. These scale analysis estimates are corroborated by numerical simulations of an idealized system. A scale analysis estimate of mixing time for a tracer mixing by hydrodynamic dispersion in a convection cell also agrees well with numerical simulation for two different Rayleigh numbers. Scale analysis for the heating-from-below scenario produces estimates of maximum velocity one-half as large as in the sidewall case. At small values of the Rayleigh number, this estimate is confirmed by numerical simulation. For larger Rayleigh numbers, simulation results suggest maximum velocities are similar to the sidewall heating scenario. In general, agreement between scale analysis estimates and numerical simulation results serves to validate the method of scale analysis. The application is to radioactive waste repositories.
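
    The proportionalities quoted above can be turned into order-of-magnitude numbers as in the sketch below; the characteristic velocity scale alpha/H (thermal diffusivity over layer height) and the unit prefactors are assumptions of the sketch, not statements from the abstract.

```python
import math

def porous_convection_scales(Ra, alpha, H):
    """Order-of-magnitude estimates for sidewall heating of a saturated porous
    layer. Only the Rayleigh-number exponents come from the text; the velocity
    scale alpha/H and the prefactors of one are assumptions."""
    v_vertical = (alpha / H) * Ra                 # boundary-layer vertical velocity ~ Ra
    v_horizontal = (alpha / H) * math.sqrt(Ra)    # horizontal velocity ~ Ra**0.5
    delta = H / math.sqrt(Ra)                     # thermal boundary-layer thickness ~ Ra**-0.5
    return v_vertical, v_horizontal, delta

print(porous_convection_scales(Ra=1.0e4, alpha=1.0e-6, H=10.0))
```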

  6. Parameter Estimation of Partial Differential Equation Models

    KAUST Repository

    Xun, Xiaolei; Cao, Jiguo; Mallick, Bani; Maity, Arnab; Carroll, Raymond J.

    2013-01-01

    PDEs used in practice have no analytic solutions, and can only be solved with numerical methods. Currently, methods for estimating PDE parameters require repeatedly solving PDEs numerically under thousands of candidate parameter values, and thus

  7. Comparison of reactivity estimation performance between two extended Kalman filtering schemes

    International Nuclear Information System (INIS)

    Peng, Xingjie; Cai, Yun; Li, Qing; Wang, Kan

    2016-01-01

    Highlights: • The performances of two EKF schemes using different Jacobian matrices are compared. • Numerical simulations are used for the validation and comparison of these two EKF schemes. • The simulation results show that the EKF scheme adopted in this paper performs better than the one adopted in previous studies. - Abstract: The extended Kalman filtering (EKF) technique has been utilized in the estimation of reactivity, which is a significantly important parameter indicating the status of the nuclear reactor. In this paper, the performances of two EKF schemes using different Jacobian matrices are compared. Numerical simulations are used for the validation and comparison of these two EKF schemes, and the results show that the Jacobian matrix obtained directly from the discrete-time state model performs better than the one which is the discretized form of the Jacobian matrix obtained from the continuous-time state model.
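
    For context, a generic discrete-time EKF predict/update step is sketched below; this is textbook material rather than the reactor point-kinetics model of the paper, and the two schemes compared above differ only in how the state-transition Jacobian F is obtained.

```python
import numpy as np

def ekf_step(x, P, z, f, h, F, H, Q, R):
    """One predict/update cycle of a discrete-time extended Kalman filter.
    F and H are the Jacobians of f and h evaluated at the current estimate."""
    x_pred = f(x)                               # state prediction
    P_pred = F @ P @ F.T + Q                    # covariance prediction
    y = z - h(x_pred)                           # innovation
    S = H @ P_pred @ H.T + R                    # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)         # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Tiny linear example (constant F and H), just to show the call pattern.
f = lambda x: 0.9 * x
h = lambda x: x
F = np.array([[0.9]]); H = np.array([[1.0]])
Q = np.array([[1e-4]]); R = np.array([[1e-2]])
x, P = np.array([0.0]), np.array([[1.0]])
for z in [0.8, 0.7, 0.65]:
    x, P = ekf_step(x, P, np.array([z]), f, h, F, H, Q, R)
print(x)
```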

  8. On the Hughes model and numerical aspects

    KAUST Repository

    Gomes, Diogo A.

    2017-01-05

    We study a crowd model proposed by R. Hughes in [11] and we describe a numerical approach to solve it. This model comprises a Fokker-Planck equation coupled with an eikonal equation with Dirichlet or Neumann data. First, we establish a priori estimates for the solutions. Second, we study radial solutions and identify a shock formation mechanism. Third, we illustrate the existence of congestion, the breakdown of the model, and the trend to the equilibrium. Finally, we propose a new numerical method and consider two examples.

  9. Joint Center Estimation Using Single-Frame Optimization: Part 1: Numerical Simulation

    OpenAIRE

    Eric Frick; Salam Rahmatalla

    2018-01-01

    The biomechanical models used to refine and stabilize motion capture processes are almost invariably driven by joint center estimates, and any errors in joint center calculation carry over and can be compounded when calculating joint kinematics. Unfortunately, accurate determination of joint centers is a complex task, primarily due to measurements being contaminated by soft-tissue artifact (STA). This paper proposes a novel approach to joint center estimation implemented via sequential applic...

  10. Numerical modeling of slow shocks

    International Nuclear Information System (INIS)

    Winske, D.

    1987-01-01

    This paper reviews previous attempts and the present status of efforts to understand the structure of slow shocks by means of time-dependent numerical calculations. Studies carried out using MHD or hybrid-kinetic codes have demonstrated qualitative agreement with theory. A number of unresolved issues related to hybrid simulations of the internal shock structure are discussed in some detail. 43 refs., 8 figs

  11. Root water extraction and limiting soil hydraulic conditions estimated by numerical simulation

    NARCIS (Netherlands)

    Jong van Lier, de Q.; Metselaar, K.; Dam, van J.C.

    2006-01-01

    Root density, soil hydraulic functions, and hydraulic head gradients play an important role in the determination of transpiration-rate-limiting soil water contents. We developed an implicit numerical root water extraction model to solve the Richards equation for the modeling of radial root water

  12. Some Numerical Aspects on Crowd Motion - The Hughes Model

    KAUST Repository

    Gomes, Diogo A.

    2016-01-06

    Here, we study a crowd model proposed by R. Hughes in [5] and we describe a numerical approach to solve it. This model comprises a Fokker-Planck equation coupled with an Eikonal equation with Dirichlet or Neumann data. First, we establish a priori estimates for the solution. Second, we study radial solutions and identify a shock formation mechanism. Third, we illustrate the existence of congestion, the breakdown of the model, and the trend to the equilibrium. Finally, we propose a new numerical method and consider two numerical examples.

  13. [Estimating non work-related sickness leave absences related to a previous occupational injury in Catalonia (Spain)].

    Science.gov (United States)

    Molinero-Ruiz, Emilia; Navarro, Albert; Moriña, David; Albertí-Casas, Constança; Jardí-Lliberia, Josefina; de Montserrat-Nonó, Jaume

    2015-01-01

    To estimate the frequency of non-work sickness absence (ITcc) related to previous occupational injuries with (ATB) or without (ATSB) sick leave. Prospective longitudinal study. Workers with ATB or ATSB notified to the Occupational Accident Registry of Catalonia were selected in the last term of 2009. They were followed up for six months after returning to work (ATB) or after the accident (ATSB), by sex and occupation. Official labor and health authority registries were used as information sources. An "injury-associated ITcc" was defined when the sick leave occurred in the following six months and within the same diagnosis group. The absolute and relative frequencies were calculated according to time elapsed and its duration (cumulated days, measures of central tendency and dispersion), by diagnosis group or affected body area, as compared to all of Catalonia. 2.9% of ATB (n=627) had an injury-associated ITcc, with differences by diagnosis, sex and occupation; this was also the case for 2.1% of ATSB (n=496). With the same diagnosis, duration of ITcc was longer among those who had an associated injury, and with respect to all of Catalonia. Some of the under-reporting of occupational pathology corresponds to episodes initially recognized as being work-related. Duration of sickness absence depends not only on diagnosis and clinical course, but also on criteria established by the entities managing the case. This could imply that more complicated injuries are referred to the national health system, resulting in personal, legal, healthcare and economic cost consequences for all involved stakeholders. Copyright belongs to the Societat Catalana de Salut Laboral.

  14. Uncertainty Quantification in Numerical Aerodynamics

    KAUST Repository

    Litvinenko, Alexander

    2017-05-16

    We consider an uncertainty quantification problem in aerodynamic simulations. We identify input uncertainties, classify them, suggest an appropriate statistical model and, finally, estimate the propagation of these uncertainties into the solution (pressure, velocity and density fields as well as the lift and drag coefficients). The deterministic problem under consideration is a compressible transonic Reynolds-averaged Navier-Stokes flow around an airfoil with random/uncertain data. Input uncertainties include: uncertain angle of attack, the Mach number, random perturbations in the airfoil geometry, mesh, shock location, turbulence model and parameters of this turbulence model. This problem requires efficient numerical/statistical methods since it is computationally expensive, especially for the uncertainties caused by random geometry variations, which involve a large number of variables. In the numerical section we compare five methods, including quasi-Monte Carlo quadrature, polynomial chaos with coefficients determined by sparse quadrature, a gradient-enhanced version of Kriging, radial basis functions and point collocation polynomial chaos, in their efficiency in estimating statistics of aerodynamic performance upon random perturbation to the airfoil geometry [D. Liu et al. '17]. For modeling we used the TAU code, developed at DLR, Germany.
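
    As a toy illustration of one of the comparisons mentioned, plain Monte Carlo versus a quasi-Monte Carlo (Sobol) sequence for estimating the mean of an uncertain response, the sketch below replaces the expensive CFD solver with a cheap analytic stand-in; everything in it is an assumption made for illustration only.

```python
import numpy as np
from scipy.stats import qmc

# Toy response surface standing in for an expensive CFD evaluation of, say,
# a lift coefficient as a function of two uncertain inputs in [0, 1].
def toy_response(u):
    return np.sin(2 * np.pi * u[:, 0]) * np.exp(u[:, 1])

rng = np.random.default_rng(0)
n = 1024

mc_mean = toy_response(rng.random((n, 2))).mean()     # plain Monte Carlo

sobol = qmc.Sobol(d=2, scramble=True, seed=0)
qmc_mean = toy_response(sobol.random(n)).mean()       # quasi-Monte Carlo

print(mc_mean, qmc_mean)
```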

  15. Numerical Estimation of the Outer Bank Resistance Characteristics in an Evolving Meandering River

    Science.gov (United States)

    Wang, D.; Konsoer, K. M.; Rhoads, B. L.; Garcia, M. H.; Best, J.

    2017-12-01

    Few studies have examined the three-dimensional flow structure and its interaction with bed morphology within elongate loops of large meandering rivers. The present study uses a numerical model to simulate the flow pattern and sediment transport, especially the flow close to the outer bank, at two elongate meandering loops of the Wabash River, USA. The numerical grid for the model is based on a combination of airborne LIDAR data on the floodplains and multibeam data within the river channel. A Finite Element Method (FEM) is used to solve the non-hydrostatic RANS equations using a k-epsilon turbulence closure scheme. High-resolution topographic data allow detailed numerical simulation of flow patterns along the outer bank, and model calibration involves comparing simulated velocities to ADCP measurements at 41 cross sections near this bank. Results indicate that flow along the outer bank is strongly influenced by large resistance elements, including woody debris, large erosional scallops within the bank face, and outcropping bedrock. In general, patterns of bank migration conform to zones of high near-bank velocity and shear stress. Using the existing model, different virtual events can be simulated to explore the impacts of different resistance characteristics on patterns of flow, sediment transport, and bank erosion.

  16. Precarious Rock Methodology for Seismic Hazard: Physical Testing, Numerical Modeling and Coherence Studies

    Energy Technology Data Exchange (ETDEWEB)

    Anooshehpoor, Rasool; Purvance, Matthew D.; Brune, James N.; Preston, Leiph A.; Anderson, John G.; Smith, Kenneth D.

    2006-09-29

    This report covers the following projects: shake table tests of precarious rock methodology, field tests of precarious rocks at Yucca Mountain and comparison of the results with PSHA predictions, study of the coherence of the wave field in the ESF, and a limited survey of precarious rocks south of the proposed repository footprint. A series of shake table experiments has been carried out at the University of Nevada, Reno Large Scale Structures Laboratory. The bulk of the experiments involved scaling acceleration time histories (uniaxial forcing) from 0.1g to the point where the objects on the shake table overturned a specified number of times. The results of these experiments have been compared with numerical overturning predictions. Numerical predictions for toppling of large objects with simple contact conditions (e.g., I-beams with sharp basal edges) agree well with shake-table results. The numerical model slightly underpredicts the overturning of small rectangular blocks. It overpredicts the overturning PGA for asymmetric granite boulders with complex basal contact conditions. In general the results confirm the approximate predictions of previous studies. Field testing of several rocks at Yucca Mountain has approximately confirmed the preliminary results from previous studies, suggesting that the PSHA predictions are too high, possibly because of the uncertainty in the mean of the attenuation relations. Study of the coherence of wavefields in the ESF has provided results which will be very important in the design of the canister distribution, in particular a preliminary estimate of the wavelengths at which the wavefields become incoherent. No evidence was found for extreme focusing by lens-like inhomogeneities. A limited survey for precarious rocks confirmed that they extend south of the repository, and one of these has been field tested.

  17. Numerical Simulation of Polynomial-Speed Convergence Phenomenon

    Science.gov (United States)

    Li, Yao; Xu, Hui

    2017-11-01

    We provide a hybrid method that captures the polynomial speed of convergence and polynomial speed of mixing for Markov processes. The hybrid method that we introduce is based on the coupling technique and renewal theory. We propose to replace some estimates in classical results about the ergodicity of Markov processes by numerical simulations when the corresponding analytical proof is difficult. After that, all remaining conclusions can be derived from rigorous analysis. We then apply our results to seek numerical justification for the ergodicity of two 1D microscopic heat conduction models. The mixing rates of these two models are expected to be polynomial but are very difficult to prove. In both examples, our numerical results match the expected polynomial mixing rate well.

  18. Numerical orbit generators of artificial earth satellites

    Science.gov (United States)

    Kugar, H. K.; Dasilva, W. C. C.

    1984-04-01

    A numerical orbit integrator is presented that incorporates updates and improvements relative to the previous ones used by the Departamento de Mecanica Espacial e Controle (DMC) of INPE, as well as newer models resulting from the experience acquired over time. Flexibility and modularity were taken into account in order to allow future extensions and modifications. Numerical accuracy, processing speed and memory savings, as well as usability aspects, were also considered. A user's handbook, the complete program listing and a qualitative analysis of accuracy, processing time and orbit perturbation effects are included as well.

  19. Uncertainty Quantification in Numerical Aerodynamics

    KAUST Repository

    Litvinenko, Alexander; Matthies, Hermann G.; Liu, Dishi; Schillings, Claudia; Schulz, Volker

    2017-01-01

    In the numerical section we compare five methods, including quasi-Monte Carlo quadrature, polynomial chaos with coefficients determined by sparse quadrature, a gradient-enhanced version of Kriging, radial basis functions and point collocation polynomial chaos, in their efficiency in estimating statistics of aerodynamic performance upon random perturbation to the airfoil geometry [D. Liu et al. '17]. For modeling we used the TAU code, developed at DLR, Germany.

  20. Numerical approach of the bond stress behavior of steel bars embedded in self-compacting concrete and in ordinary concrete using beam models

    Directory of Open Access Journals (Sweden)

    F.M. Almeida Filho

    Full Text Available The present study evaluates the bond behavior between steel bars and concrete by means of a numerical analysis based on Finite Element Method. Results of a previously conducted experimental program on reinforced concrete beams subjected to monotonic loading are also presented. Two concrete types, self-compacting concrete and ordinary concrete, were considered in the study. Non-linear constitutive relations were used to represent concrete and steel in the proposed numerical model, aiming to reproduce the bond behavior observed in the tests. Experimental analysis showed similar results for the bond resistances of self-compacting and ordinary concrete, with self-compacting concrete presenting a better performance in some cases. The results given by the numerical modeling showed a good agreement with the tests for both types of concrete, especially in the pre-peak branch of the load vs. slip and load vs. displacement curves. As a consequence, the proposed numerical model could be used to estimate a reliable development length, allowing a possible reduction of the structure costs.

  1. Efficacy of a numerical value of a fixed-effect estimator in stochastic frontier analysis as an indicator of hospital production structure

    Directory of Open Access Journals (Sweden)

    Kawaguchi Hiroyuki

    2012-09-01

    Full Text Available Abstract Background The casemix-based payment system has been adopted in many countries, although it often needs complementary adjustment taking account of each hospital’s unique production structure such as teaching and research duties, and non-profit motives. It has been challenging to numerically evaluate the impact of such structural heterogeneity on production, separately of production inefficiency. The current study adopted stochastic frontier analysis and proposed a method to assess unique components of hospital production structures using a fixed-effect variable. Methods There were two stages of analyses in this study. In the first stage, we estimated the efficiency score from the hospital production function using a true fixed-effect model (TFEM in stochastic frontier analysis. The use of a TFEM allowed us to differentiate the unobserved heterogeneity of individual hospitals as hospital-specific fixed effects. In the second stage, we regressed the obtained fixed-effect variable for structural components of hospitals to test whether the variable was explicitly related to the characteristics and local disadvantages of the hospitals. Results In the first analysis, the estimated efficiency score was approximately 0.6. The mean value of the fixed-effect estimator was 0.784, the standard deviation was 0.137, the range was between 0.437 and 1.212. The second-stage regression confirmed that the value of the fixed effect was significantly correlated with advanced technology and local conditions of the sample hospitals. Conclusion The obtained fixed-effect estimator may reflect hospitals’ unique structures of production, considering production inefficiency. The values of fixed-effect estimators can be used as evaluation tools to improve fairness in the reimbursement system for various functions of hospitals based on casemix classification.

  2. Numerical Estimation of Balanced and Falling States for Constrained Legged Systems

    Science.gov (United States)

    Mummolo, Carlotta; Mangialardi, Luigi; Kim, Joo H.

    2017-08-01

    Instability and risk of fall during standing and walking are common challenges for biped robots. While existing criteria from state-space dynamical systems approach or ground reference points are useful in some applications, complete system models and constraints have not been taken into account for prediction and indication of fall for general legged robots. In this study, a general numerical framework that estimates the balanced and falling states of legged systems is introduced. The overall approach is based on the integration of joint-space and Cartesian-space dynamics of a legged system model. The full-body constrained joint-space dynamics includes the contact forces and moments term due to current foot (or feet) support and another term due to altered contact configuration. According to the refined notions of balanced, falling, and fallen, the system parameters, physical constraints, and initial/final/boundary conditions for balancing are incorporated into constrained nonlinear optimization problems to solve for the velocity extrema (representing the maximum perturbation allowed to maintain balance without changing contacts) in the Cartesian space at each center-of-mass (COM) position within its workspace. The iterative algorithm constructs the stability boundary as a COM state-space partition between balanced and falling states. Inclusion in the resulting six-dimensional manifold is a necessary condition for a state of the given system to be balanced under the given contact configuration, while exclusion is a sufficient condition for falling. The framework is used to analyze the balance stability of example systems with various degrees of complexities. The manifold for a 1-degree-of-freedom (DOF) legged system is consistent with the experimental and simulation results in the existing studies for specific controller designs. The results for a 2-DOF system demonstrate the dependency of the COM state-space partition upon joint-space configuration (elbow-up vs

  3. Automatic trend estimation

    CERN Document Server

    Vamoş, Călin

    2013-01-01

    Our book introduces a method to evaluate the accuracy of trend estimation algorithms under conditions similar to those encountered in real time series processing. This method is based on Monte Carlo experiments with artificial time series numerically generated by an original algorithm. The second part of the book contains several automatic algorithms for trend estimation and time series partitioning. The source codes of the computer programs implementing these original automatic algorithms are given in the appendix and will be freely available on the web. The book contains clear statement of the conditions and the approximations under which the algorithms work, as well as the proper interpretation of their results. We illustrate the functioning of the analyzed algorithms by processing time series from astrophysics, finance, biophysics, and paleoclimatology. The numerical experiment method extensively used in our book is already in common use in computational and statistical physics.

  4. Theoretical stability in coefficient inverse problems for general hyperbolic equations with numerical reconstruction

    Science.gov (United States)

    Yu, Jie; Liu, Yikan; Yamamoto, Masahiro

    2018-04-01

    In this article, we investigate the determination of the spatial component in the time-dependent second order coefficient of a hyperbolic equation from both theoretical and numerical aspects. By the Carleman estimates for general hyperbolic operators and an auxiliary Carleman estimate, we establish local Hölder stability with either partial boundary or interior measurements under certain geometrical conditions. For numerical reconstruction, we minimize a Tikhonov functional which penalizes the gradient of the unknown function. Based on the resulting variational equation, we design an iteration method which is updated by solving a Poisson equation at each step. One-dimensional prototype examples illustrate the numerical performance of the proposed iteration.

  5. Coastal Amplification Laws for the French Tsunami Warning Center: Numerical Modeling and Fast Estimate of Tsunami Wave Heights Along the French Riviera

    Science.gov (United States)

    Gailler, A.; Hébert, H.; Schindelé, F.; Reymond, D.

    2018-04-01

    Tsunami modeling tools in the French Tsunami Warning Center operational context provide rapidly derived warning levels with a dimensionless variable at basin scale. A new forecast method based on coastal amplification laws has been tested to estimate the tsunami onshore height, with a focus on the French Riviera test site (Nice area). This fast prediction tool provides a coastal tsunami height distribution, calculated from the numerical simulation of the deep-ocean tsunami amplitude and using a transfer function derived from Green's law. Due to a lack of tsunami observations in the western Mediterranean basin, the coastal amplification parameters are here defined with reference to high-resolution nested-grid simulations. The preliminary results for the Nice test site, based on nine historical and synthetic sources, show good agreement with the time-consuming high-resolution modeling: the linear approximation is obtained within 1 min in general and provides estimates within a factor of two in amplitude, although resonance effects in harbors and bays are not reproduced. In Nice harbor especially, the variation in tsunami amplitude cannot be fully assessed because of the range of magnitudes and maximum-energy azimuths of the possible events to account for. However, this method is well suited for a fast first estimate of the coastal tsunami threat.
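
    The core of such a transfer function is Green's law; the sketch below shows only the plain law, with the site-specific coastal amplification parameters of the operational tool (fitted to the nested-grid simulations) deliberately left out.

```python
def greens_law_amplitude(a_deep, depth_deep, depth_coast):
    """Green's-law shoaling: wave amplitude grows as depth**(-1/4) under
    energy-flux conservation. The operational transfer function adds
    empirically fitted coastal amplification parameters not reproduced here."""
    return a_deep * (depth_deep / depth_coast) ** 0.25

# e.g. a 0.2 m deep-ocean amplitude at 2500 m depth reaching the 10 m isobath
print(greens_law_amplitude(0.2, 2500.0, 10.0))   # roughly 0.8 m
```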

  6. The implications of selective attrition for estimates of intergenerational elasticity of family income.

    Science.gov (United States)

    Schoeni, Robert F; Wiemers, Emily E

    2015-09-01

    Numerous studies have estimated a high intergenerational correlation in economic status. Such studies do not typically attend to potential biases that may arise due to survey attrition. Using the Panel Study of Income Dynamics - the data source most commonly used in prior studies - we demonstrate that attrition is particularly high for low-income adult children with low-income parents and particularly low for high-income adult children with high-income parents. Because of this pattern of attrition, intergenerational upward mobility has been overstated for low-income families and downward mobility has been understated for high-income families. The bias among low-income families is greater than the bias among high-income families implying that intergenerational elasticity in family income is higher than previous estimates with the Panel Study of Income Dynamics would suggest.

  7. Numerical Hydrodynamics and Magnetohydrodynamics in General Relativity

    Directory of Open Access Journals (Sweden)

    Font José A.

    2008-09-01

    Full Text Available This article presents a comprehensive overview of numerical hydrodynamics and magnetohydrodynamics (MHD) in general relativity. Some significant additions have been incorporated with respect to the previous two versions of this review (2000, 2003), most notably the coverage of general-relativistic MHD, a field in which remarkable activity and progress has occurred in the last few years. Correspondingly, the discussion of astrophysical simulations in general-relativistic hydrodynamics is enlarged to account for recent relevant advances, while those dealing with general-relativistic MHD are amply covered in this review for the first time. The basic outline of this article is nevertheless similar to its earlier versions, save for the addition of MHD-related issues throughout. Hence, different formulations of both the hydrodynamics and MHD equations are presented, with special mention of conservative and hyperbolic formulations well adapted to advanced numerical methods. A large sample of numerical approaches for solving such hyperbolic systems of equations is discussed, paying particular attention to solution procedures based on schemes exploiting the characteristic structure of the equations through linearized Riemann solvers. As previously stated, a comprehensive summary of astrophysical simulations in strong gravitational fields is also presented. These are detailed in three basic sections, namely gravitational collapse, black-hole accretion, and neutron-star evolutions; despite the boundaries, these sections may (and in fact do) overlap throughout the discussion. The material contained in these sections highlights the numerical challenges of various representative simulations. It also follows, to some extent, the chronological development of the field, concerning advances in the formulation of the gravitational field, hydrodynamics and MHD equations and the numerical methodology designed to solve them. To keep the length of this article reasonable

  8. Estimating the time evolution of NMR systems via a quantum-speed-limit-like expression

    Science.gov (United States)

    Villamizar, D. V.; Duzzioni, E. I.; Leal, A. C. S.; Auccaise, R.

    2018-05-01

    Finding the solutions of the equations that describe the dynamics of a given physical system is crucial in order to obtain important information about its evolution. However, by using estimation theory, it is possible to obtain, under certain limitations, some information on its dynamics. The quantum-speed-limit (QSL) theory was originally used to estimate the shortest time in which a Hamiltonian drives an initial state to a final one for a given fidelity. Using the QSL theory in a slightly different way, we are able to estimate the running time of a given quantum process. For that purpose, we impose the saturation of the Anandan-Aharonov bound in a rotating frame of reference where the state of the system travels slower than in the original frame (laboratory frame). Through this procedure it is possible to estimate the actual evolution time in the laboratory frame of reference with good accuracy when compared to previous methods. Our method is tested successfully to predict the time spent in the evolution of nuclear spins 1/2 and 3/2 in NMR systems. We find that the estimated time according to our method is better than previous approaches by up to four orders of magnitude. One disadvantage of our method is that we need to solve a number of transcendental equations, which increases with the system dimension and parameter discretization used to solve such equations numerically.

  9. Numerical calculation of particle collection efficiency in an ...

    Indian Academy of Sciences (India)

    Theoretical and numerical research has been previously done on ESPs to predict the efficiency ... Lagrangian simulations of particle transport in wire–plate ESP were .... The collection efficiency can be defined as the ratio of the number of ...

  10. Mathematical properties of numerical inversion for jet calibrations

    Energy Technology Data Exchange (ETDEWEB)

    Cukierman, Aviv [Physics Department, Stanford University, Stanford, CA 94305 (United States); SLAC National Accelerator Laboratory, Stanford University, Menlo Park, CA 94025 (United States); Nachman, Benjamin, E-mail: bnachman@cern.ch [Physics Department, Stanford University, Stanford, CA 94305 (United States); SLAC National Accelerator Laboratory, Stanford University, Menlo Park, CA 94025 (United States); Physics Division, Lawrence Berkeley National Laboratory, Berkeley, CA 94704 (United States)

    2017-06-21

    Numerical inversion is a general detector calibration technique that is independent of the underlying spectrum. This procedure is formalized and important statistical properties are presented, using high energy jets at the Large Hadron Collider as an example setting. In particular, numerical inversion is inherently biased and common approximations to the calibrated jet energy tend to over-estimate the resolution. Analytic approximations to the closure and calibrated resolutions are demonstrated to effectively predict the full forms under realistic conditions. Finally, extensions of numerical inversion are presented which can reduce the inherent biases. These methods will be increasingly important to consider with degraded resolution at low jet energies due to a much higher instantaneous luminosity in the near future.
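
    A simplified sketch of the basic numerical-inversion idea is given below (it is not the formalization or the bias analysis of the paper): the mean response R(E_true) = <E_reco>/E_true is tabulated on simulation, and a measured jet energy is calibrated by numerically solving the corresponding inverse problem. The toy response function is an assumption for illustration only.

```python
import numpy as np
from scipy.optimize import brentq

def make_calibration(e_true_grid, mean_e_reco):
    """Return a calibration function built by numerically inverting the
    simulated mean response <E_reco>(E_true)."""
    response = mean_e_reco / e_true_grid               # R(E_true) = <E_reco>/E_true
    def calibrate(e_reco):
        # solve <E_reco>(E) = R(E) * E = e_reco for the true energy E
        g = lambda e: np.interp(e, e_true_grid, response) * e - e_reco
        return brentq(g, e_true_grid[0], e_true_grid[-1])
    return calibrate

# Toy response: jets reconstructed about 5% low, slowly recovering with energy.
e_true = np.linspace(20.0, 500.0, 50)
calib = make_calibration(e_true, 0.95 * e_true + 0.01 * e_true * np.log(e_true / 20.0))
print(calib(100.0))     # calibrated estimate of the true energy for E_reco = 100
```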

  11. Extraction of gravitational waves in numerical relativity.

    Science.gov (United States)

    Bishop, Nigel T; Rezzolla, Luciano

    2016-01-01

    A numerical-relativity calculation yields in general a solution of the Einstein equations including also a radiative part, which is in practice computed in a region of finite extent. Since gravitational radiation is properly defined only at null infinity and in an appropriate coordinate system, the accurate estimation of the emitted gravitational waves represents an old and non-trivial problem in numerical relativity. A number of methods have been developed over the years to "extract" the radiative part of the solution from a numerical simulation and these include: quadrupole formulas, gauge-invariant metric perturbations, Weyl scalars, and characteristic extraction. We review and discuss each method, in terms of both its theoretical background as well as its implementation. Finally, we provide a brief comparison of the various methods in terms of their inherent advantages and disadvantages.

  12. Error Estimation and Accuracy Improvements in Nodal Transport Methods

    International Nuclear Information System (INIS)

    Zamonsky, O.M.

    2000-01-01

    The accuracy of the solutions produced by the Discrete Ordinates neutron transport nodal methods is analyzed. The new numerical methodologies obtained increase the accuracy of the analyzed schemes and give a posteriori error estimators. The accuracy improvement is obtained with new equations that make the numerical procedure free of truncation errors and by proposing spatial reconstructions of the angular fluxes that are more accurate than those used until now. An a posteriori error estimator is rigorously obtained for one-dimensional systems that, in certain types of problems, allows the accuracy of the solutions to be quantified. From comparisons with the one-dimensional results, an a posteriori error estimator is also obtained for multidimensional systems. Local indicators, which quantify the spatial distribution of the errors, are obtained by decomposition of the mentioned estimators. This makes the proposed methodology suitable for performing adaptive calculations. Some numerical examples are presented to validate the theoretical developments and to illustrate the ranges where the proposed approximations are valid.

  13. Analysis of pumping tests of partially penetrating wells in an unconfined aquifer using inverse numerical optimization

    Science.gov (United States)

    Hvilshøj, S.; Jensen, K. H.; Barlebo, H. C.; Madsen, B.

    1999-08-01

    Inverse numerical modeling was applied to analyze pumping tests of partially penetrating wells carried out in three wells established in an unconfined aquifer in Vejen, Denmark, where extensive field investigations had previously been carried out, including tracer tests, mini-slug tests, and other hydraulic tests. Drawdown data from multiple piezometers located at various horizontal and vertical distances from the pumping well were included in the optimization. Horizontal and vertical hydraulic conductivities, specific storage, and specific yield were estimated, assuming that the aquifer was either a homogeneous system with vertical anisotropy or composed of two or three layers of different hydraulic properties. In two out of three cases, a more accurate interpretation was obtained for a multi-layer model defined on the basis of lithostratigraphic information obtained from geological descriptions of sediment samples, gammalogs, and flow-meter tests. Analysis of the pumping tests resulted in values for horizontal hydraulic conductivities that are in good accordance with those obtained from slug tests and mini-slug tests. Besides the horizontal hydraulic conductivity, it is possible to determine the vertical hydraulic conductivity, specific yield, and specific storage based on a pumping test of a partially penetrating well. The study demonstrates that pumping tests of partially penetrating wells can be analyzed using inverse numerical models. The model used in the study was a finite-element flow model combined with a non-linear regression model. Such a model can accommodate more geological information and complex boundary conditions, and the parameter-estimation procedure can be formalized to obtain optimum estimates of hydraulic parameters and their standard deviations.

  14. Numerical Radius Inequalities for Finite Sums of Operators

    Directory of Open Access Journals (Sweden)

    Mirmostafaee Alireza Kamel

    2014-12-01

    Full Text Available In this paper, we obtain some sharp inequalities for the numerical radius of finite sums of operators. Moreover, we give some applications of our result to the estimation of the spectral radius. We also compare our results with some known results.

  15. Left ventricular asynergy score as an indicator of previous myocardial infarction

    International Nuclear Information System (INIS)

    Backman, C.; Jacobsson, K.A.; Linderholm, H.; Osterman, G.

    1986-01-01

    Sixty-eight patients with coronary heart disease (CHD), i.e. a history of angina of effort and/or previous 'possible infarction', were examined inter alia with ECG and cinecardioangiography. A system of scoring was designed which allowed a semiquantitative estimate of the left ventricular asynergy from cinecardioangiography - the left ventricular motion score (LVMS). The LVMS was associated with the presence of a previous myocardial infarction (MI), as indicated by the history and ECG findings. The ECG changes specific for a previous MI were associated with high LVMS values, and unspecific or absent ECG changes with low LVMS values. Decision thresholds for ECG changes and asynergy in diagnosing a previous MI were evaluated by means of a ROC analysis. The accuracy of ECG in detecting a previous MI was slightly higher when asynergy indicated a 'true MI' than when the autopsy result did so in a comparable group. Therefore the accuracy of asynergy (LVMS ≥ 1) in detecting a previous MI or myocardial fibrosis in patients with CHD should be at least comparable with that of autopsy (scar > 1 cm). (orig.)

  16. Contribution to the asymptotic estimation of the global error of single step numerical integration methods. Application to the simulation of electric power networks; Contribution a l'estimation asymptotique de l'erreur globale des methodes d'integration numerique a un pas. Application a la simulation des reseaux electriques

    Energy Technology Data Exchange (ETDEWEB)

    Aid, R.

    1998-01-07

    This work originates from an industrial problem of validating numerical solutions of ordinary differential equations modeling power systems. The problem is addressed using asymptotic estimators of the global error. Four techniques are studied: the Richardson estimator (RS), Zadunaisky's technique (ZD), integration of the variational equation (EV), and solving for the correction (SC). We give precise results on the order of SC relative to the order of the numerical method. A new variant of ZD is proposed that uses the modified equation. In the case of variable step sizes, it is shown that, under suitable restrictions on the step-size selection, ZD and SC remain valid. Moreover, some Runge-Kutta methods are shown to require weaker hypotheses on the step sizes to exhibit a valid order of convergence for ZD and SC. Numerical tests conclude this analysis. Industrial cases are given. Finally, an algorithm is proposed to avoid the a priori specification of the integration path for complex-time differential equations. (author)
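
    As a reminder of what the Richardson (RS) estimator does in its simplest form, the sketch below assumes the usual situation where the leading global error term scales as h^p; the thesis treats far more general settings, including variable step sizes.

```python
import numpy as np

def rk4_step(f, t, y, h):
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def integrate(f, t0, y0, t_end, h):
    n = int(round((t_end - t0) / h))       # assumes (t_end - t0) is a multiple of h
    y, t = np.asarray(y0, dtype=float), t0
    for _ in range(n):
        y = rk4_step(f, t, y, h)
        t += h
    return y

f = lambda t, y: -y                        # toy ODE y' = -y, y(0) = 1
y_h  = integrate(f, 0.0, [1.0], 1.0, 0.1)
y_h2 = integrate(f, 0.0, [1.0], 1.0, 0.05)
p = 4                                      # order of the RK4 method
err_estimate = (y_h - y_h2) / (2**p - 1)   # Richardson estimate of the error in y_h2
print(err_estimate, y_h2 - np.exp(-1.0))   # estimate vs. actual global error
```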

  17. Numerical vs. turbulent diffusion in geophysical flow modelling

    International Nuclear Information System (INIS)

    D'Isidoro, M.; Maurizi, A.; Tampieri, F.

    2008-01-01

    Numerical advection schemes induce the spreading of passive tracers from localized sources. The effects of changing resolution and Courant number are investigated using the WAF advection scheme, which leads to a sub-diffusive process. The spreading rate from an instantaneous source is compared with the physical diffusion necessary to simulate unresolved turbulent motions. The time at which the physical diffusion process overpowers the numerical spreading is estimated, and is shown to reduce as the resolution increases, and to increase as the wind velocity increases.
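    As a back-of-envelope illustration of the crossover argument, the short sketch below equates a hypothetical sub-diffusive numerical spreading law with linear turbulent diffusion; the coefficient, exponent and diffusivity are invented numbers, not values from the study.

```python
# Assume (hypothetically) that numerical spreading grows sub-diffusively as
# sigma^2 = A * t**p with p < 1, while physical turbulent diffusion grows as
# sigma^2 = 2 * K * t. The crossover time solves A * t**p = 2 * K * t.
A, p = 600.0, 0.8        # numerical spreading coefficient/exponent (assumed)
K = 50.0                 # turbulent diffusivity [m^2/s] (assumed)

t_cross = (A / (2.0 * K)) ** (1.0 / (1.0 - p))   # time when physical diffusion takes over
print(f"crossover after ~{t_cross/3600:.1f} hours")
```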

  18. M-estimator for the 3D symmetric Helmert coordinate transformation

    Science.gov (United States)

    Chang, Guobin; Xu, Tianhe; Wang, Qianxin

    2018-01-01

    The M-estimator for the 3D symmetric Helmert coordinate transformation problem is developed. The small-angle rotation assumption is abandoned. The direction cosine matrix or the quaternion is used to represent the rotation. A 3 × 1 multiplicative error vector is defined to represent the rotation estimation error. An analytical solution can be employed to provide the initial approximation for the iteration, if the outliers are not large. The iteration is carried out using the iterative reweighted least-squares scheme. In each iteration after the first one, the measurement equation is linearized using the available parameter estimates, the reweighting matrix is constructed using the residuals obtained in the previous iteration, and then the parameter estimates with their variance-covariance matrix are calculated. The influence functions of a single pseudo-measurement on the least-squares estimator and on the M-estimator are derived to show the robustness theoretically. In the solution process, the parameters are rescaled in order to improve the numerical stability. Monte Carlo experiments are conducted to check the developed method. Different cases are considered to investigate whether the assumed stochastic model is correct. The results with simulated data slightly deviating from the true model are used to show the developed method's statistical efficiency at the assumed stochastic model, its robustness against deviations from the assumed stochastic model, and the validity of the estimated variance-covariance matrix whether or not the assumed stochastic model is correct.
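    The sketch below illustrates only the iterative reweighted least-squares core of such an M-estimator, applied for brevity to an ordinary linear model with Huber weights rather than to the full 3D symmetric Helmert transformation; the data, tuning constant and iteration count are illustrative assumptions.

```python
# Sketch of the iterative reweighted least-squares (IRLS) core of an M-estimator,
# shown on a plain linear model rather than the full 3D symmetric Helmert
# transformation (rotation, translation, scale) treated in the paper.
import numpy as np

def huber_weights(res, k=1.345):
    """Huber weight function applied to residuals standardized by a robust scale."""
    s = 1.4826 * np.median(np.abs(res)) + 1e-12   # MAD-based scale estimate
    u = np.abs(res) / s
    return np.where(u <= k, 1.0, k / u)

def irls(A, y, n_iter=20):
    x = np.linalg.lstsq(A, y, rcond=None)[0]      # ordinary LS as initial value
    for _ in range(n_iter):
        res = y - A @ x
        W = np.diag(huber_weights(res))           # reweighting matrix from residuals
        x = np.linalg.solve(A.T @ W @ A, A.T @ W @ y)
    return x

# Toy data with two gross outliers
rng = np.random.default_rng(0)
A = np.column_stack([np.ones(50), rng.uniform(0, 10, 50)])
y = A @ np.array([1.0, 2.0]) + rng.normal(0, 0.1, 50)
y[:2] += 20.0                                     # outliers
print(irls(A, y))                                 # close to [1, 2] despite outliers
```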

  19. Estimation of population mean under systematic sampling

    Science.gov (United States)

    Noor-ul-amin, Muhammad; Javaid, Amjad

    2017-11-01

    In this study we propose a generalized ratio estimator under non-response for systematic random sampling. We also generate a class of estimators through special cases of generalized estimator using different combinations of coefficients of correlation, kurtosis and variation. The mean square errors and mathematical conditions are also derived to prove the efficiency of proposed estimators. Numerical illustration is included using three populations to support the results.
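    For orientation, the sketch below computes the classical ratio estimator of the population mean from a systematic sample; it omits the non-response adjustment and the correlation/kurtosis/variation coefficients that define the generalized estimator of the paper, and all data are synthetic.

```python
# Classical ratio estimator of the population mean under systematic sampling.
import numpy as np

rng = np.random.default_rng(1)
N, n = 1000, 50
x_pop = rng.gamma(4.0, 2.0, N)                    # auxiliary variable (mean known)
y_pop = 3.0 * x_pop + rng.normal(0, 2.0, N)       # study variable

k = N // n                                        # sampling interval
start = rng.integers(0, k)
idx = np.arange(start, N, k)[:n]                  # systematic sample

y_bar, x_bar, X_bar = y_pop[idx].mean(), x_pop[idx].mean(), x_pop.mean()
ratio_estimate = y_bar * (X_bar / x_bar)          # ratio estimator of the mean
print(ratio_estimate, y_pop.mean())
```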

  20. Multifractal rainfall extremes: Theoretical analysis and practical estimation

    International Nuclear Information System (INIS)

    Langousis, Andreas; Veneziano, Daniele; Furcolo, Pierluigi; Lepore, Chiara

    2009-01-01

    We study the extremes generated by a multifractal model of temporal rainfall and propose a practical method to estimate the Intensity-Duration-Frequency (IDF) curves. The model assumes that rainfall is a sequence of independent and identically distributed multiplicative cascades of the beta-lognormal type, with common duration D. When properly fitted to data, this simple model was found to produce accurate IDF results [Langousis A, Veneziano D. Intensity-duration-frequency curves from scaling representations of rainfall. Water Resour Res 2007;43. (doi:10.1029/2006WR005245)]. Previous studies also showed that the IDF values from multifractal representations of rainfall scale with duration d and return period T under either d → 0 or T → ∞, with different scaling exponents in the two cases. We determine the regions of the (d, T)-plane in which each asymptotic scaling behavior applies in good approximation, find expressions for the IDF values in the scaling and non-scaling regimes, and quantify the bias when estimating the asymptotic power-law tail of rainfall intensity from finite-duration records, as was often done in the past. Numerically calculated exact IDF curves are compared to several analytic approximations. The approximations are found to be accurate and are used to propose a practical IDF estimation procedure.

  1. Estimating biozone hydraulic conductivity in wastewater soil-infiltration systems using inverse numerical modeling.

    Science.gov (United States)

    Bumgarner, Johnathan R; McCray, John E

    2007-06-01

    During operation of an onsite wastewater treatment system, a low-permeability biozone develops at the infiltrative surface (IS) during application of wastewater to soil. Inverse numerical-model simulations were used to estimate the biozone saturated hydraulic conductivity (K(biozone)) under variably saturated conditions for 29 wastewater infiltration test cells installed in a sandy loam field soil. Test cells employed two loading rates (4 and 8 cm/day) and three IS designs: open chamber, gravel, and synthetic bundles. The ratio of K(biozone) to the saturated hydraulic conductivity of the natural soil (K(s)) was used to quantify the reductions in the IS hydraulic conductivity. A smaller value of K(biozone)/K(s) reflects a greater reduction in hydraulic conductivity. The IS hydraulic conductivity was reduced by 1-3 orders of magnitude. The reduction in IS hydraulic conductivity was primarily influenced by wastewater loading rate and IS type and not by the K(s) of the native soil. The higher loading rate yielded greater reductions in IS hydraulic conductivity than the lower loading rate for bundle and gravel cells, but the difference was not statistically significant for chamber cells. Bundle and gravel cells exhibited a greater reduction in IS hydraulic conductivity than chamber cells at the higher loading rate, while the difference between gravel and bundle systems was not statistically significant. At the lower rate, bundle cells exhibited generally lower K(biozone)/K(s) values, but not at a statistically significant level, while gravel and chamber cells were statistically similar. Gravel cells exhibited the greatest variability in measured values, which may complicate design efforts based on K(biozone) evaluations for these systems. These results suggest that chamber systems may provide for a more robust design, particularly for high or variable wastewater infiltration rates.

  2. Numerosity estimation in visual stimuli in the absence of luminance-based cues.

    Directory of Open Access Journals (Sweden)

    Peter Kramer

    2011-02-01

    Numerosity estimation is a basic preverbal ability that humans share with many animal species and that is believed to be foundational of numeracy skills. It is notoriously difficult, however, to establish whether numerosity estimation is based on numerosity itself, or on one or more non-numerical cues such as, in visual stimuli, spatial extent and density. Frequently, different non-numerical cues are held constant on different trials. This strategy, however, still allows numerosity estimation to be based on a combination of non-numerical cues rather than on any particular one by itself. Here we introduce a novel method, based on second-order (contrast-based) visual motion, to create stimuli that exclude all first-order (luminance-based) cues to numerosity. We show that numerosities can be estimated almost as well in second-order motion as in first-order motion. The results show that numerosity estimation need not be based on first-order spatial filtering, first-order density perception, or any other processing of luminance-based cues to numerosity. Our method can be used as an effective tool to control non-numerical variables in studies of numerosity estimation.

  3. EFFECTS OF DIFFERENT NUMERICAL INTERFACE METHODS ON HYDRODYNAMICS INSTABILITY

    Energy Technology Data Exchange (ETDEWEB)

    FRANCOIS, MARIANNE M. [Los Alamos National Laboratory; DENDY, EDWARD D. [Los Alamos National Laboratory; LOWRIE, ROBERT B. [Los Alamos National Laboratory; LIVESCU, DANIEL [Los Alamos National Laboratory; STEINKAMP, MICHAEL J. [Los Alamos National Laboratory

    2007-01-11

    The authors compare the effects of different numerical schemes for the advection and material interface treatments on the single-mode Rayleigh-Taylor instability, using the RAGE hydro-code. The interface growth and its surface density (interfacial area) versus time are investigated. The surface-density metric proves better suited than the conventional interface-growth metric to characterize differences in the flow. They found that Van Leer's limiter combined with no interface treatment leads to the largest surface area. Finally, to quantify the difference between the numerical methods, they estimated the numerical viscosity in the linear regime at different scales.

  4. Preliminary estimates of spatially distributed net infiltration and recharge for the Death Valley region, Nevada-California

    International Nuclear Information System (INIS)

    Hevesi, J.A.; Flint, A.L.; Flint, L.E.

    2002-01-01

    A three-dimensional ground-water flow model has been developed to evaluate the Death Valley regional flow system, which includes ground water beneath the Nevada Test Site. Estimates of spatially distributed net infiltration and recharge are needed to define upper boundary conditions. This study presents a preliminary application of a conceptual and numerical model of net infiltration. The model was developed in studies at Yucca Mountain, Nevada, which is located in the approximate center of the Death Valley ground-water flow system. The conceptual model describes the effects of precipitation, runoff, evapotranspiration, and redistribution of water in the shallow unsaturated zone on predicted rates of net infiltration; precipitation and soil depth are the two most significant variables. The conceptual model was tested using a preliminary numerical model based on energy- and water-balance calculations. Daily precipitation for 1980 through 1995, averaging 202 millimeters per year over the 39,556 square kilometers area of the ground-water flow model, was input to the numerical model to simulate net infiltration ranging from zero for a soil thickness greater than 6 meters to over 350 millimeters per year for thin soils at high elevations in the Spring Mountains overlying permeable bedrock. Estimated average net infiltration over the entire ground-water flow model domain is 7.8 millimeters per year. To evaluate the application of the net-infiltration model developed on a local scale at Yucca Mountain, to net-infiltration estimates representing the magnitude and distribution of recharge on a regional scale, the net-infiltration results were compared with recharge estimates obtained using empirical methods. Comparison of model results with previous estimates of basinwide recharge suggests that the net-infiltration estimates obtained using this model may overestimate recharge because of uncertainty in modeled precipitation, bedrock permeability, and soil properties for

  5. A NEW MODIFIED RATIO ESTIMATOR FOR ESTIMATION OF POPULATION MEAN WHEN MEDIAN OF THE AUXILIARY VARIABLE IS KNOWN

    Directory of Open Access Journals (Sweden)

    Jambulingam Subramani

    2013-10-01

    The present paper deals with a modified ratio estimator for estimation of the population mean of the study variable when the population median of the auxiliary variable is known. The bias and mean squared error of the proposed estimator are derived and are compared with those of existing modified ratio estimators for certain known populations. Further, we have also derived the conditions under which the proposed estimator performs better than the existing modified ratio estimators. From the numerical study it is also observed that the proposed modified ratio estimator performs better than the existing modified ratio estimators for certain known populations.

  6. Parameter estimation in nonlinear models for pesticide degradation

    International Nuclear Information System (INIS)

    Richter, O.; Pestemer, W.; Bunte, D.; Diekkrueger, B.

    1991-01-01

    A wide class of environmental transfer models is formulated as ordinary or partial differential equations. With the availability of fast computers, the numerical solution of large systems became feasible. The main difficulty in performing a realistic and convincing simulation of the fate of a substance in the biosphere is not the implementation of numerical techniques but rather the incomplete data basis for parameter estimation. Parameter estimation is a synonym for statistical and numerical procedures to derive reasonable numerical values for model parameters from data. The classical method is the familiar linear regression technique, which dates back to the 18th century. Because it is easy to handle, linear regression has long been established as a convenient tool for analysing relationships. However, the wide use of linear regression has led to an overemphasis of linear relationships. In nature, most relationships are nonlinear, and linearization often gives a poor approximation of reality. Furthermore, pure regression models are not capable of mapping the dynamics of a process. Therefore, realistic models involve the evolution in time (and space). This leads in a natural way to the formulation of differential equations. To establish the link between data and dynamical models, advanced numerical parameter-identification methods have been developed in recent years. This paper demonstrates the application of these techniques to estimation problems in the field of pesticide dynamics. (7 refs., 5 figs., 2 tabs.)
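    As a minimal illustration of nonlinear parameter estimation in this setting, the sketch below fits a first-order pesticide-decay model to synthetic concentration data by nonlinear least squares; the sampling times, concentrations and starting values are assumptions, and real applications involve the full differential-equation models discussed above.

```python
# Nonlinear least-squares identification of a simple first-order decay model.
import numpy as np
from scipy.optimize import curve_fit

def decay(t, c0, k):
    """Concentration C(t) = C0 * exp(-k * t)."""
    return c0 * np.exp(-k * t)

t = np.array([0, 3, 7, 14, 28, 56, 84], dtype=float)    # sampling times [days] (made up)
c = np.array([10.2, 8.1, 6.6, 4.3, 2.0, 0.6, 0.2])      # concentrations [mg/kg] (made up)

(c0_hat, k_hat), pcov = curve_fit(decay, t, c, p0=(10.0, 0.05))
half_life = np.log(2) / k_hat
print(f"k = {k_hat:.3f} 1/day, DT50 = {half_life:.1f} days")
```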

  7. Extensive numerical study of a D-brane, anti-D-brane system in AdS5/CFT4

    International Nuclear Information System (INIS)

    Hegedűs, Árpád

    2015-01-01

    In this paper the hybrid-NLIE approach of http://dx.doi.org/10.1007/JHEP08(2012)022 is extended to the ground state of a D-brane anti-D-brane system in AdS/CFT. The hybrid-NLIE equations presented in the paper are finite-component alternatives to the previously proposed TBA equations, and they provide an appropriate framework for the numerical investigation of the ground state of the problem. Straightforward numerical iterative methods fail to converge, so new numerical methods are worked out to solve the equations. Our numerical data confirm the previous TBA data. In view of the numerical results, the mysterious L=1 case is also commented on in the paper.

  8. European Conference on Numerical Mathematics and Advanced Applications

    CERN Document Server

    Manguoğlu, Murat; Tezer-Sezgin, Münevver; Göktepe, Serdar; Uğur, Ömür

    2016-01-01

    The European Conference on Numerical Mathematics and Advanced Applications (ENUMATH), held every 2 years, provides a forum for discussing recent advances in and aspects of numerical mathematics and scientific and industrial applications. The previous ENUMATH meetings took place in Paris (1995), Heidelberg (1997), Jyvaskyla (1999), Ischia (2001), Prague (2003), Santiago de Compostela (2005), Graz (2007), Uppsala (2009), Leicester (2011) and Lausanne (2013). This book presents a selection of invited and contributed lectures from the ENUMATH 2015 conference, which was organised by the Institute of Applied Mathematics (IAM), Middle East Technical University, Ankara, Turkey, from September 14 to 18, 2015. It offers an overview of central recent developments in numerical analysis, computational mathematics, and applications in the form of contributions by leading experts in the field.

  9. Numerical Model based Reliability Estimation of Selective Laser Melting Process

    DEFF Research Database (Denmark)

    Mohanty, Sankhya; Hattel, Jesper Henri

    2014-01-01

    Selective laser melting is developing into a standard manufacturing technology with applications in various sectors. However, the process is still far from being on par with conventional processes such as welding and casting, the primary reason being the unreliability of the process. While... of the selective laser melting process. A validated 3D finite-volume alternating-direction-implicit numerical technique is used to model the selective laser melting process, and is calibrated against results from single-track formation experiments. Correlation coefficients are determined for process input parameters such as laser power, speed, beam profile, etc. Subsequently, uncertainties in the processing parameters are utilized to predict a range for the various outputs, using a Monte Carlo method based uncertainty analysis methodology, and the reliability of the process is established.

  10. When three is not some: on the pragmatics of numerals.

    Science.gov (United States)

    Shetreet, Einat; Chierchia, Gennaro; Gaab, Nadine

    2014-04-01

    Both numerals and quantifiers (like some) have more than one possible interpretation (i.e., weak and strong interpretations). Some studies have found similar behavior for numerals and quantifiers, whereas others have shown critical differences. It is, therefore, debated whether they are processed in the same way. A previous fMRI investigation showed that the left inferior frontal gyrus is linked to the computation of the strong interpretation of quantifiers (derived by a scalar implicature) and that the left middle frontal gyrus and the medial frontal gyrus are linked to processing the mismatch between the strong interpretation of quantifiers and the context in which they are presented. In the current study, we attempted to characterize the similarities and differences between numbers and quantifiers by examining brain activation patterns related to the processing of numerals in these brain regions. When numbers were presented in a mismatch context (i.e., where their strong interpretation did not match the context), they elicited brain activations similar to those previously observed with quantifiers in the same context type. Conversely, in a match context (i.e., where both interpretations of the scalar item matched the context), numbers elicited a different activation pattern than the one observed with quantifiers: Left inferior frontal gyrus activations in response to the match condition showed decrease for numbers (but not for quantifiers). Our results support previous findings suggesting that, although they share some features, numbers and quantifiers are processed differently. We discuss our results in light of various theoretical approaches linked to the representation of numerals.

  11. Real-Time Estimation of Volcanic ASH/SO2 Cloud Height from Combined Uv/ir Satellite Observations and Numerical Modeling

    Science.gov (United States)

    Vicente, Gilberto A.

    An efficient iterative method has been developed to estimate the vertical profile of SO2 and ash clouds from volcanic eruptions by comparing near real-time satellite observations with numerical modeling outputs. The approach uses UV based SO2 concentration and IR based ash cloud images, the volcanic ash transport model PUFF and wind speed, height and directional information to find the best match between the simulated and the observed displays. The method is computationally fast and is being implemented for operational use at the NOAA Volcanic Ash Advisory Centers (VAACs) in Washington, DC, USA, to support the Federal Aviation Administration (FAA) effort to detect, track and measure volcanic ash cloud heights for air traffic safety and management. The presentation will show the methodology, results, statistical analysis and SO2 and Aerosol Index input products derived from the Ozone Monitoring Instrument (OMI) onboard the NASA EOS/Aura research satellite and from the Global Ozone Monitoring Experiment-2 (GOME-2) instrument in the MetOp-A. The volcanic ash products are derived from AVHRR instruments in the NOAA POES-16, 17, 18, 19 as well as MetOp-A. The presentation will also show how a VAAC volcanic ash analyst interacts with the system providing initial condition inputs such as location and time of the volcanic eruption, followed by the automatic real-time tracking of all the satellite data available, subsequent activation of the iterative approach and the data/product delivery process in numerical and graphical format for operational applications.

  12. The Cognitive Estimation Task Is Nonunitary: Evidence for Multiple Magnitude Representation Mechanisms Among Normative and ADHD College Students

    Directory of Open Access Journals (Sweden)

    Sarit Ashkenazi

    2017-02-01

    There is a current debate on whether the cognitive system has a shared representation for all magnitudes or whether there are unique representations. To investigate this question, we used the Biber cognitive estimation task. In this task, participants were asked to provide estimates for questions such as, “How many sticks of spaghetti are in a package?” The task uses different estimation categories (e.g., time, numerical quantity, distance, and weight) to look at real-life magnitude representations. Experiment 1 assessed (N = 95) a Hebrew version of the Biber Cognitive Estimation Task and found that different estimation categories had different relations; for example, weight, time, and distance shared variance, but numerical estimation did not. We suggest that numerical estimation does not require the use of measurement in units; hence, it represents a more “pure” numerical estimation. Experiment 2 found that different factors explain individual abilities in different estimation categories. For example, numerical estimation was predicted by preverbal innate quantity understanding (approximate number sense) and working memory, whereas time estimations were supported by IQ. These results demonstrate that cognitive estimation is not a unified construct.

  13. An information-guided channel-hopping scheme for block-fading channels with estimation errors

    KAUST Repository

    Yang, Yuli

    2010-12-01

    The information-guided channel-hopping technique employing multiple transmit antennas was previously proposed for supporting high data rate transmission over fading channels. This scheme achieves higher data rates than some mature schemes, such as the well-known cyclic transmit antenna selection and space-time block coding, by exploiting the independence character of multiple channels, which effectively results in having an additional information-transmitting channel. Moreover, maximum likelihood decoding may be performed by simply decoupling the signals conveyed by the different mapping methods. In this paper, we investigate the achievable spectral efficiency of this scheme in the case of channel estimation errors, with optimum pilot overhead for minimum mean-square error channel estimation, when transmitting over block-fading channels. Our numerical results further substantiate the robustness of the presented scheme, even with imperfect channel state information. ©2010 IEEE.

  14. Varieties of Quantity Estimation in Children

    Science.gov (United States)

    Sella, Francesco; Berteletti, Ilaria; Lucangeli, Daniela; Zorzi, Marco

    2015-01-01

    In the number-to-position task, with increasing age and numerical expertise, children's pattern of estimates shifts from a biased (nonlinear) to a formal (linear) mapping. This widely replicated finding concerns symbolic numbers, whereas less is known about other types of quantity estimation. In Experiment 1, Preschool, Grade 1, and Grade 3…

  15. Adaptive Methods for Permeability Estimation and Smart Well Management

    Energy Technology Data Exchange (ETDEWEB)

    Lien, Martha Oekland

    2005-04-01

    The main focus of this thesis is on adaptive regularization methods. We consider two different applications, the inverse problem of absolute permeability estimation and the optimal control problem of estimating smart well management. Reliable estimates of absolute permeability are crucial in order to develop a mathematical description of an oil reservoir. Due to the nature of most oil reservoirs, mainly indirect measurements are available. In this work, dynamic production data from wells are considered. More specifically, we have investigated the resolution power of pressure data for permeability estimation. The inversion of production data into permeability estimates constitutes a severely ill-posed problem. Hence, regularization techniques are required. In this work, deterministic regularization based on adaptive zonation is considered, i.e. a solution approach with adaptive multiscale estimation in conjunction with level set estimation is developed for coarse-scale permeability estimation. A good mathematical reservoir model is a valuable tool for future production planning. Recent developments within well technology have given us smart wells, which yield increased flexibility in reservoir management. In this work, we investigate the problem of finding the optimal smart well management by means of hierarchical regularization techniques based on multiscale parameterization and refinement indicators. The thesis is divided into two main parts, where Part I gives a theoretical background for a collection of research papers that have been written by the candidate in collaboration with others. These constitute the most important part of the thesis and are presented in Part II. A brief outline of the thesis follows below. Numerical aspects concerning calculations of derivatives will also be discussed. Based on the introduction to regularization given in Chapter 2, methods for multiscale zonation, i.e. adaptive multiscale estimation and refinement

  16. Online Wavelet Complementary velocity Estimator.

    Science.gov (United States)

    Righettini, Paolo; Strada, Roberto; KhademOlama, Ehsan; Valilou, Shirin

    2018-02-01

    In this paper, we have proposed a new online Wavelet Complementary velocity Estimator (WCE) over position and acceleration data gathered from an electro-hydraulic servo shaking table. This is a batch-type estimator based on wavelet filter banks, which extract the high- and low-resolution content of the data. The proposed complementary estimator combines these two resolutions of velocity, which are acquired from numerical differentiation and integration of the position and acceleration signals, by considering a fixed moving-horizon window as input to the wavelet filter. Because wavelet filters are used, it can be implemented in a parallel procedure. With this method the velocity is estimated without the high noise of differentiators or the drifting bias of integration, and with less delay, which is suitable for active vibration control in high-precision mechatronic systems by Direct Velocity Feedback (DVF) methods. This method allows us to make velocity sensors with fewer mechanically moving parts, which makes it suitable for fast miniature structures. We have compared this method with Kalman and Butterworth filters in terms of stability and delay, and benchmarked them by integrating the estimated velocity over a long time to recover the initial position data. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
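    The sketch below illustrates the complementary-fusion idea only, blending differentiated position with integrated acceleration through a simple first-order filter; the wavelet filter banks, moving-horizon batching and parallel implementation of the WCE are not reproduced, and the signal, noise levels and time constant are assumed values.

```python
# Simplified complementary velocity estimate: low-frequency content from the
# differentiated position signal, high-frequency content from the integrated
# acceleration signal.
import numpy as np

def complementary_velocity(pos, acc, dt, tau=0.1):
    """Blend d(pos)/dt (trusted at low frequency) with integrated acc (trusted at high frequency)."""
    alpha = tau / (tau + dt)
    v_diff = np.gradient(pos, dt)            # noisy numerical differentiation
    v_est = np.zeros_like(pos)
    v_est[0] = v_diff[0]
    for i in range(1, len(pos)):
        v_pred = v_est[i-1] + acc[i] * dt    # propagate with acceleration
        v_est[i] = alpha * v_pred + (1.0 - alpha) * v_diff[i]
    return v_est

# Synthetic shaking-table-like signal (assumed frequency and noise levels)
rng = np.random.default_rng(0)
dt = 1e-3
t = np.arange(0.0, 2.0, dt)
omega = 2 * np.pi * 1.5
true_v = omega * np.cos(omega * t)
pos = np.sin(omega * t) + rng.normal(0, 1e-4, t.size)               # noisy position
acc = -omega**2 * np.sin(omega * t) + rng.normal(0, 0.05, t.size)   # noisy acceleration
v = complementary_velocity(pos, acc, dt)
print(np.sqrt(np.mean((v - true_v) ** 2)))   # RMS error of the fused estimate
```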

  17. A study for estimate of contamination source with numerical simulation method in the turbulent type clean room

    International Nuclear Information System (INIS)

    Han, Sang Mok; Hwang, Young Kyu; Kim, Dong Kwon

    2015-01-01

    Contamination in a clean room may appear even more complicated owing to the effects of complex manufacturing processes and indoor equipment. For this reason, detailed information about the concentration of pollutant particles in the clean room is needed to control the level of contamination economically and efficiently without disturbing the manufacturing process. The allocation method has been developed as one of the main ideas for controlling contamination in this situation. With this method, weighting factors can be predicted from the cleanliness measured at sampling spots and from values obtained by numerical analysis; here, a weighting factor indicates how each contaminant source influences the concentration of pollutant in the clean room. In this paper, we propose a zoning method to accelerate the calculation when the allocation method is applied, and we apply it to the actual improvement of cleanliness in a turbulent-type clean room. As a result, we could quantitatively estimate the amount of contamination generated by the pollution sources, and experiments proved that it is possible to improve the level of cleanliness of clean rooms by using these results.

  18. The Adriatic response to the bora forcing. A numerical study

    International Nuclear Information System (INIS)

    Rachev, N.

    2001-01-01

    This paper deals with the bora wind effect on the Adriatic Sea circulation as simulated by a 3-D numerical code (the DieCAST model). The main result of this forcing is the formation of intense upwelling along the eastern coast in agreement with previous theoretical studies and observations. Different numerical experiments are discussed for various boundary and initial conditions to evaluate their influence on both circulation and upwelling patterns

  19. On the mechanics of cerebral aneurysms: experimental research and numerical simulation

    Science.gov (United States)

    Parshin, D. V.; Kuianova, I. O.; Yunoshev, A. S.; Ovsyannikov, K. S.; Dubovoy, A. V.

    2017-10-01

    This research extends existing experimental data for CA tissues [1, 2] and presents the preliminary results of numerical calculations. Experiments were performed to measure aneurysm wall stiffness and the data obtained was analyzed. To reconstruct the geometry of the CAs, DICOM images of real patients with aneurysms and ITK Snap [3] were used. In addition, numerical calculations were performed in ANSYS (commercial software, License of Lavrentyev Institute of Hydrodynamics). The results of these numerical calculations show a high level of agreement with experimental data from previous literature.

  20. Cross-property relations and permeability estimation in model porous media

    International Nuclear Information System (INIS)

    Schwartz, L.M.; Martys, N.; Bentz, D.P.; Garboczi, E.J.; Torquato, S.

    1993-01-01

    Results from a numerical study examining cross-property relations linking fluid permeability to diffusive and electrical properties are presented. Numerical solutions of the Stokes equations in three-dimensional consolidated granular packings are employed to provide a basis of comparison between different permeability estimates. Estimates based on the Λ parameter (a length derived from electrical conduction) and on d_c (a length derived from immiscible displacement) are found to be considerably more reliable than estimates based on rigorous permeability bounds related to pore space diffusion. We propose two hybrid relations based on diffusion which provide more accurate estimates than either of the rigorous permeability bounds

  1. beta- and gamma-Comparative dose estimates on Eniwetok Atoll

    Energy Technology Data Exchange (ETDEWEB)

    Crase, K.W.; Gudiksen, P.H.; Robison, W.L.

    1982-05-01

    Eniwetok Atoll is one of the Pacific atolls used for atmospheric testing of U.S. nuclear weapons. Beta dose and gamma-ray exposure measurements were made on two islands of the Eniwetok Atoll during July-August 1976 to determine the beta and low-energy gamma contribution to the total external radiation doses to the returning Marshallese. Measurements were made at numerous locations with thermoluminescent dosimeters (TLD), pressurized ionization chambers, portable NaI detectors, and thin-window pancake GM probes. Results of the TLD measurements with and without a beta attenuator indicate that approx. 29% of the total dose rate at 1 m in air is due to the beta or low-energy gamma contribution. The contribution at any particular site, however, is somewhat dependent on ground cover, since a minimal amount of vegetation will reduce it significantly from that over bare soil, but thick stands of vegetation have little effect on any further reductions. Integral 30-yr external shallow dose estimates for future inhabitants were made and compared with external dose estimates of a previous large-scale radiological survey (En73). Integral 30-yr shallow external dose estimates are 25-50% higher than whole-body estimates. Due to the low penetrating ability of the betas and low-energy gammas, however, several remedial actions can be taken to reduce the shallow dose contribution to the total external dose.

  2. Cuba: Multidimensional numerical integration library

    Science.gov (United States)

    Hahn, Thomas

    2016-08-01

    The Cuba library offers four independent routines for multidimensional numerical integration: Vegas, Suave, Divonne, and Cuhre. The four algorithms work by very different methods; all can integrate vector integrands and have very similar Fortran, C/C++, and Mathematica interfaces. Their invocation is very similar, making it easy to cross-check results by substituting one method for another. For further safeguarding, the output is supplemented by a chi-square probability which quantifies the reliability of the error estimate.
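    To make the reported quantities concrete, the sketch below performs plain Monte Carlo integration and returns an estimate with a one-sigma statistical error; it is written against NumPy and does not use the Cuba interfaces, whose calling conventions are documented with the library.

```python
# Plain Monte Carlo integration over the unit hypercube with an error estimate.
import numpy as np

def mc_integrate(f, ndim, n=100_000, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.random((n, ndim))             # uniform samples on [0, 1]^ndim
    fx = f(x)
    integral = fx.mean()
    error = fx.std(ddof=1) / np.sqrt(n)   # 1-sigma statistical error estimate
    return integral, error

# Example integrand f(x) = x1*x2*x3 on [0,1]^3, whose exact integral is 1/8
val, err = mc_integrate(lambda x: np.prod(x, axis=1), ndim=3)
print(val, "+/-", err, "(exact 0.125)")
```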

  3. Explicit estimating equations for semiparametric generalized linear latent variable models

    KAUST Repository

    Ma, Yanyuan

    2010-07-05

    We study generalized linear latent variable models without requiring a distributional assumption of the latent variables. Using a geometric approach, we derive consistent semiparametric estimators. We demonstrate that these models have a property which is similar to that of a sufficient complete statistic, which enables us to simplify the estimating procedure and explicitly to formulate the semiparametric estimating equations. We further show that the explicit estimators have the usual root n consistency and asymptotic normality. We explain the computational implementation of our method and illustrate the numerical performance of the estimators in finite sample situations via extensive simulation studies. The advantage of our estimators over the existing likelihood approach is also shown via numerical comparison. We employ the method to analyse a real data example from economics. © 2010 Royal Statistical Society.

  4. Development of a biometric method to estimate age on hand radiographs.

    Science.gov (United States)

    Remy, Floriane; Hossu, Gabriela; Cendre, Romain; Micard, Emilien; Mainard-Simard, Laurence; Felblinger, Jacques; Martrille, Laurent; Lalys, Loïc

    2017-02-01

    Age estimation of living individuals aged less than 13, 18 or 21 years, which are some relevant legal ages in most European countries, is currently problematic in the forensic context. Numerous methods are available to legal authorities, although their efficiency can be questioned. For those reasons, we aimed to propose a new method based on the biometric analysis of hand bones. 451 hand radiographs of French individuals under the age of 21 were retrospectively analyzed. This total sample was divided into three subgroups bounded by the relevant legal ages previously mentioned: 0-13, 13-18 and 18-21 years. On these radiographs, we numerically applied the osteometric board method used in anthropology, by including each metacarpal and proximal phalange of the five hand rays in the smallest rectangle possible. In this way, their length and width information can be accessed thanks to a measurement protocol developed specifically for this purpose with the ORS Visual® software. A statistical analysis was then performed on these biometric data: a Linear Discriminant Analysis (LDA) evaluated the probability of an individual belonging to one of the age groups (0-13, 13-18 or 18-21), and several multivariate regression models were tested to establish age-estimation formulas for each of these age groups. The mean correlation coefficient between chronological age and both lengths and widths of hand bones is equal to 0.90 for the total sample. Repeatability and reproducibility were assessed. The LDA could more easily predict membership of the 0-13 age group. Age can be estimated with a mean standard error which never exceeds 1 year for the 95% confidence interval. Finally, compared to the literature, we can conclude that estimating age from the biometric information of metacarpals and proximal phalanges is promising. Copyright © 2016. Published by Elsevier B.V.
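    A hedged sketch of the classification step alone is given below: a Linear Discriminant Analysis assigning individuals to the three age groups from bone lengths and widths, using scikit-learn on synthetic measurements; the feature means, spreads and group sizes are invented and do not reflect the study's data.

```python
# LDA classification into age groups from synthetic hand-bone measurements.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
n_per_group = 100
# 20 features: lengths and widths of 5 metacarpals and 5 proximal phalanges
means = {0: 40.0, 1: 52.0, 2: 58.0}          # assumed mean bone sizes per age group [mm]
X = np.vstack([rng.normal(means[g], 4.0, (n_per_group, 20)) for g in range(3)])
y = np.repeat([0, 1, 2], n_per_group)        # 0: 0-13, 1: 13-18, 2: 18-21 years

lda = LinearDiscriminantAnalysis()
lda.fit(X, y)

new_case = rng.normal(53.0, 4.0, (1, 20))    # hypothetical new radiograph measurements
print(lda.predict(new_case), lda.predict_proba(new_case))
```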

  5. Numerically robust geometry engine for compound solid geometries

    International Nuclear Information System (INIS)

    Vlachoudis, V.; Sinuela-Pastor, D.

    2013-01-01

    Monte Carlo programs rely heavily on fast and numerically robust solid geometry engines. However, the success of solid modeling depends on facilities for specifying and editing parameterized models through a user-friendly graphical front-end. Such a user interface has to be fast enough to be interactive for 2D and/or 3D displays, but at the same time numerically robust in order to display possible modeling errors in real time, which could be critical for the simulation. The graphical user interface Flair for FLUKA currently employs such an engine, where special emphasis has been given to being fast and numerically robust. The numerical robustness is achieved by a novel method of estimating the floating-point precision of the operations, which dynamically adapts all the decision operations accordingly. Moreover, a predictive caching mechanism ensures that logical errors in the geometry description are found online, without compromising the processing time by checking all regions. (authors)

  6. Uncertainty estimation and ensemble forecasting with a chemistry-transport model - application to the numerical simulation of air quality

    Energy Technology Data Exchange (ETDEWEB)

    Mallet, V

    2005-12-15

    The aim of this work is the evaluation of the quality of a chemistry-transport model, not by a classical comparison with observations, but by the estimation of its uncertainties due to the input data, to the model formulation and to the numerical approximations. The study of these three sources of uncertainty is carried out with Monte Carlo simulations, with multi-model simulations and with comparisons between numerical schemes, respectively. A high uncertainty is shown for ozone concentrations. To overcome the uncertainty-related limitations, one strategy is to use ensemble forecasting. By combining several models (up to 48) on the basis of past observations, forecasts can be significantly improved. This work was also the occasion to develop an innovative modeling system, named Polyphemus. (J.S.)

  7. Moyamoya disease in a child with previous acute necrotizing encephalopathy

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Taik-Kun; Cha, Sang Hoon; Chung, Kyoo Byung; Kim, Jung Hyuck; Kim, Baek Hyun; Chung, Hwan Hoon [Department of Diagnostic Radiology, Korea University College of Medicine, Ansan Hospital, 516 Kojan-Dong, Ansan City, Kyungki-Do 425-020 (Korea); Eun, Baik-Lin [Department of Pediatrics, Korea University College of Medicine, Seoul (Korea)

    2003-09-01

    A previously healthy 24-day-old boy presented with a 2-day history of fever and had a convulsion on the day of admission. MRI showed abnormal signal in the thalami, caudate nuclei and central white matter. Acute necrotising encephalopathy was diagnosed, other causes having been excluded after biochemical and haematological analysis of blood, urine and CSF. He recovered, but with spastic quadriparesis. At the age of 28 months, he suffered sudden deterioration of consciousness and motor weakness of his right limbs. MRI was consistent with an acute cerebrovascular accident. Angiography showed bilateral middle cerebral artery stenosis or frank occlusion with numerous lenticulostriate collateral vessels consistent with moyamoya disease. (orig.)

  8. Analyses of more than 60,000 exomes questions the role of numerous genes previously associated with dilated cardiomyopathy

    DEFF Research Database (Denmark)

    Nouhravesh, Nina; Ahlberg, Gustav; Ghouse, Jonas

    2016-01-01

    BACKGROUND: Hundreds of genetic variants have been described as disease causing in dilated cardiomyopathy (DCM). Some of these associations are now being questioned. We aimed to identify the prevalence of previously DCM-associated variants in the Exome Aggregation Consortium (ExAC), in order to identify potentially false-positive DCM variants. METHODS: Variants listed as DCM disease-causing variants in the Human Gene Mutation Database were extracted from ExAC. Pathogenicity predictions for these variants were mined from the dbNSFP v 2.9 database. RESULTS: Of the 473 DCM variants listed in HGMD, 148 (31%) were found in ExAC. The expected number of individuals with DCM in ExAC is 25 based on the prevalence in the general population. Yet, 35 variants were found in more than 25 individuals. In 13 genes, we identified all variants previously associated with DCM; four genes contained variants above

  9. Numerical study of thermal test of a cask of transportation for radioactive material

    International Nuclear Information System (INIS)

    Vieira, Tiago A.S.; Santos, André A.C. dos; Vidal, Guilherme A.M.; Silva Junior, Geraldo E.

    2017-01-01

    In this study, numerical simulations of a transport cask for radioactive material were performed and the numerical results were compared with experimental results from tests carried out on two different occasions. A mesh study was also conducted on the previously designed geometry of the same cask, in order to evaluate its impact on the stability of the numerical results for this type of problem. The comparison of the numerical and experimental results made it possible to assess the need to plan and carry out a new test in order to validate the CFD codes used in the numerical simulations

  10. Generalized shrunken type-GM estimator and its application

    International Nuclear Information System (INIS)

    Ma, C Z; Du, Y L

    2014-01-01

    The parameter estimation problem in the linear model is considered when multicollinearity and outliers exist simultaneously. A class of new robust biased estimators, the Generalized Shrunken Type-GM estimators, together with their calculation methods, is established by combining the GM estimator with biased estimators including the ridge estimate, the principal components estimate and the Liu estimate. A numerical example shows that the most attractive advantage of these new estimators is that they can not only overcome the multicollinearity of the coefficient matrix and the outliers but also have the ability to control the influence of leverage points

  11. Generalized shrunken type-GM estimator and its application

    Science.gov (United States)

    Ma, C. Z.; Du, Y. L.

    2014-03-01

    The parameter estimation problem in the linear model is considered when multicollinearity and outliers exist simultaneously. A class of new robust biased estimators, the Generalized Shrunken Type-GM estimators, together with their calculation methods, is established by combining the GM estimator with biased estimators including the ridge estimate, the principal components estimate and the Liu estimate. A numerical example shows that the most attractive advantage of these new estimators is that they can not only overcome the multicollinearity of the coefficient matrix and the outliers but also have the ability to control the influence of leverage points.

  12. Numerical solution of neutral functional-differential equations with proportional delays

    Directory of Open Access Journals (Sweden)

    Mehmet Giyas Sakar

    2017-07-01

    In this paper, the homotopy analysis method is improved with optimal determination of the auxiliary parameter by use of a residual error function for solving neutral functional-differential equations (NFDEs) with proportional delays. A convergence analysis and an error estimate of the method are given. Some numerical examples are solved and comparisons are made with existing results. The numerical results show that the homotopy analysis method with the residual error function is very effective and simple.

  13. Contributions to the numerical modeling of concrete structures cracking with creep and estimation of the permeability

    International Nuclear Information System (INIS)

    Dufour, F.

    2007-12-01

    The industrial context of this research work is the study of the durability of the internal barriers of nuclear power plants. This work is divided into two parts, the first relative to the crack-damage state and the second to the consequences of creep on the rupture properties of concrete. In the first part, the analysis of experimental results (carried out on a compression cylinder on which the radial permeability was measured) shows that the permeability decreases up to a deformation of half of that at the force peak, by re-closure of the pre-existing microcracks in the material; the permeability then increases strongly until after the force peak, by initiation, connection and opening of cracks, and finally increases less rapidly up to rupture, because only the opening of the macro-cracks increases. In order to simulate these phenomena, two original methods are presented, applied in a post-processing phase, for estimating leakage from a mechanical computation based on the finite element method. The first method estimates the permeability from the damage field and from a permeability-damage relation which links the Poiseuille law to an empirical law established for weak damage. The second method is based on the strain field, from which the position and opening of the crack are calculated; the Poiseuille relation is then applied along the crack to estimate the leakage rates. The relation between concrete creep and its mechanical characteristics is analyzed in the second part. In particular, the consequences of creep on the long-term mechanical properties are studied. After presenting the experimental results, which essentially show an embrittlement of the material after creep, a qualitative analysis based on a bifurcation study is proposed, followed by a discrete numerical method that recovers the same influence of visco-elasticity on the embrittlement at rupture observed experimentally. At last, the first results of

  14. Numerical Solution of Stochastic Nonlinear Fractional Differential Equations

    KAUST Repository

    El-Beltagy, Mohamed A.

    2015-01-07

    Using the Wiener-Hermite expansion (WHE) technique in the solution of stochastic partial differential equations (SPDEs) has the advantage of converting the problem to a system of deterministic equations that can be solved efficiently using standard deterministic numerical methods [1]. WHE is the only known expansion that handles white/colored noise exactly. This work introduces a numerical estimation of the stochastic response of the Duffing oscillator with fractional or variable-order damping and driven by white noise. The WHE technique is integrated with the Grunwald-Letnikov approximation in the case of fractional-order damping and with the Coimbra approximation in the case of variable-order damping. The numerical solver was tested against the analytic solution and against Monte Carlo simulations. The developed mixed technique was shown to be efficient in simulating SPDEs.
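    The sketch below shows only the Grunwald-Letnikov piece named above: a uniform-grid approximation of a fractional derivative, checked against the known derivative of t^2; the Wiener-Hermite expansion and the stochastic Duffing oscillator are not reproduced, and the grid and order are arbitrary choices.

```python
# Grunwald-Letnikov approximation of a fractional derivative on a uniform grid.
import numpy as np
from scipy.special import gamma

def gl_fractional_derivative(f_vals, alpha, h):
    """D^alpha f at every grid point, Grunwald-Letnikov definition."""
    n = len(f_vals)
    c = np.empty(n)
    c[0] = 1.0
    for k in range(1, n):                       # GL coefficients by recursion
        c[k] = c[k-1] * (1.0 - (alpha + 1.0) / k)
    d = np.zeros(n)
    for i in range(n):
        d[i] = np.dot(c[:i+1], f_vals[i::-1]) / h**alpha
    return d

h = 0.01
t = np.arange(0.0, 1.0 + h, h)
f = t**2
# For f(t) = t^2 the order-0.5 derivative is 2*t^1.5 / Gamma(2.5)
print(gl_fractional_derivative(f, 0.5, h)[-1], 2 * t[-1]**1.5 / gamma(2.5))
```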

  15. Numerical Solution of Stochastic Nonlinear Fractional Differential Equations

    KAUST Repository

    El-Beltagy, Mohamed A.; Al-Juhani, Amnah

    2015-01-01

    Using the Wiener-Hermite expansion (WHE) technique in the solution of stochastic partial differential equations (SPDEs) has the advantage of converting the problem to a system of deterministic equations that can be solved efficiently using standard deterministic numerical methods [1]. WHE is the only known expansion that handles white/colored noise exactly. This work introduces a numerical estimation of the stochastic response of the Duffing oscillator with fractional or variable-order damping and driven by white noise. The WHE technique is integrated with the Grunwald-Letnikov approximation in the case of fractional-order damping and with the Coimbra approximation in the case of variable-order damping. The numerical solver was tested against the analytic solution and against Monte Carlo simulations. The developed mixed technique was shown to be efficient in simulating SPDEs.

  16. NUMERICAL SIMULATION OF POLLUTION DISPERSION IN URBAN STREET

    Directory of Open Access Journals (Sweden)

    M. M. Biliaiev

    2017-08-01

    Purpose. The paper addresses the development of a 2D numerical model that allows quick computation of air pollution in streets caused by vehicle emissions. The aim of the work is to develop a numerical model able to predict the level of air pollution when protective barriers are used along the road. Methodology. The developed model is based on the equations of inviscid flow and pollutant transport. The potential-flow equation is used to compute the velocity field of the air flow near the road when protective barriers are applied. To solve the potential-flow equation, an implicit difference scheme of «conditional approximation» is used. An implicit change-triangle difference scheme is used to solve the equation of convective-diffusive dispersion. Numerical integration is carried out using a rectangular difference grid. A porosity technique («markers method») is used to represent the shape of the complex computational region. The emission of toxic gases from a vehicle is modeled using a delta function for a point source. Findings. The authors developed a 2D numerical model that takes into account the main physical factors affecting the dispersion of pollutants in the atmosphere from vehicle emissions, including protective barriers near the road. On the basis of the developed numerical model, a computational experiment was performed to estimate the level of air pollution in the street. Originality. A numerical model has been created which makes it possible to calculate the 2D aerodynamics of the wind flow in the presence of protective barriers and the mass transfer of toxic gas emissions from the motorway. The model allows taking into account the presence of a car on the road, the form of a protective barrier, and the presence of a curb. Calculations have been performed to determine the contamination zone formed at a protective barrier located along the motorway. Practical value. An effective numerical model that can be applied in the

  17. Numerical fatigue analysis of premolars restored by CAD/CAM ceramic crowns.

    Science.gov (United States)

    Homaei, Ehsan; Jin, Xiao-Zhuang; Pow, Edmond Ho Nang; Matinlinna, Jukka Pekka; Tsoi, James Kit-Hon; Farhangdoost, Khalil

    2018-04-10

    The purpose of this study was to estimate the fatigue life of premolars restored with two dental ceramics, lithium disilicate (LD) and polymer-infiltrated ceramic (PIC), using a numerical method and to compare it with published in vitro data. A premolar restored with a full-coverage crown was digitized. The volumetric shapes of the tooth tissues and crowns were created in Mimics®. They were transferred to IA-FEMesh for mesh generation and the model was analyzed with Abaqus. By combining the stress distribution results with the fatigue stress-life (S-N) approach, the lifetime of the restored premolars was predicted. The predicted lifetime was 1,231,318 cycles for LD with a fatigue load of 1400 N, while that for PIC was 475,063 cycles with a load of 870 N. The peak value of the maximum principal stress occurred at the contact area (LD: 172 MPa and PIC: 96 MPa) and the central fossa (LD: 100 MPa and PIC: 64 MPa) for both ceramics, which were the most frequently observed failure areas in the experiment. In the adhesive layer, the maximum shear stress was observed at the shoulder area (LD: 53.6 MPa and PIC: 29 MPa). The fatigue life and failure modes of all-ceramic crowns determined by the numerical method seem to correlate well with the previous experimental study. Copyright © 2018 The Academy of Dental Materials. Published by Elsevier Inc. All rights reserved.

  18. The use of a numerical mass-balance model to estimate rates of soil redistribution on uncultivated land from 137Cs measurements

    International Nuclear Information System (INIS)

    Owens, P.N.; Walling, D.E.

    1988-01-01

    A numerical mass-balance model is developed which can be used to estimate rates of soil redistribution on uncultivated land from measurements of bomb-derived 137Cs inventories. The model uses a budgeting approach, which takes account of temporal variations in atmospheric fallout of 137Cs, radioactive decay, and net gains or losses of 137Cs due to erosion and deposition processes, combined with parameters which describe internal 137Cs redistribution processes, to estimate the 137Cs content of topsoil and the 137Cs inventory at specific points, from the start of 137Cs fallout in the 1950s to the present day. The model is also able to account for potential differences in particle size composition and organic matter content between mobilised soil particles and the original soil, and the effect that these may have on 137Cs concentrations and inventories. By running the model for a range of soil erosion and deposition rates, a calibration relationship can be constructed which relates the 137Cs inventory at a sampling point to the average net soil loss or gain at that location. In addition to the magnitude and temporal distribution of the 137Cs atmospheric fallout flux, the soil redistribution rates estimated by the model are sensitive to parameters which describe the relative texture and organic matter content of the eroded or deposited material, and the ability of the soil to retain 137Cs in the upper part of the soil profile. (Copyright (c) 1988 Elsevier Science B.V., Amsterdam. All rights reserved.)
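    In the spirit of the budgeting approach described above, the heavily simplified sketch below steps a point 137Cs inventory through yearly decay, fallout and erosion loss; the fallout series, erosion rate, mixing depth, bulk density and enrichment factor are all assumed values, and the particle-size and organic-matter corrections of the full model are omitted.

```python
# Highly simplified year-by-year 137Cs budget: decay + fallout + erosion loss.
import numpy as np

LAMBDA = np.log(2) / 30.17                 # 137Cs decay constant [1/yr]

def cs137_inventory(fallout, erosion_rate, soil_bulk_density=1.3e3,
                    mixing_depth=0.02, enrichment=1.0):
    """Return the point inventory [Bq/m^2] after applying the yearly budget."""
    inventory = 0.0
    for dep in fallout:                                    # one deposition value per year
        inventory = inventory * np.exp(-LAMBDA) + dep
        mass_depth = soil_bulk_density * mixing_depth      # kg/m^2 of mobilised layer
        frac_removed = enrichment * erosion_rate / mass_depth
        inventory *= max(0.0, 1.0 - frac_removed)
    return inventory

# Assumed fallout history: rise to a mid-1960s peak, then decline
fallout_series = np.concatenate([np.linspace(50, 400, 10),
                                 np.linspace(400, 0, 25)])
ref = cs137_inventory(fallout_series, erosion_rate=0.0)       # undisturbed reference site
eroded = cs137_inventory(fallout_series, erosion_rate=2.0)    # 2 kg/m^2/yr soil loss
print(eroded / ref)    # relative 137Cs depletion used to calibrate soil loss rates
```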

  19. Numerical Simulation of Aerogasdynamics Processes in A Longwall Panel for Estimation of Spontaneous Combustion Hazards

    Science.gov (United States)

    Meshkov, Sergey; Sidorenko, Andrey

    2017-11-01

    The relevance of solving the problem of endogenous fire safety in seams liable to self-ignition is shown, and the possibilities of numerical methods for studying gas-dynamic processes are considered. Methodical approaches are analysed with the purpose of creating models and carrying out numerical studies of aerogasdynamic processes in longwall panels of gassy mines. Parameters of the gob for longwall mining are considered. The significant influence of the geological and mining conditions on the distribution of air streams in longwall panels and on the effective management of gas emission is shown. An aerogasdynamic model of longwall panels for further research into the influence of ventilation parameters and gob properties is presented. The results of numerical studies, including the distribution of air streams and the fields of methane and oxygen concentration for various ventilation schemes, are given for the conditions of prospective mines of the Pechora basin and Kuzbass. Recommendations for increasing the efficiency of mining coal seams liable to self-ignition are made. The directions of further research are defined.

  20. Numerical estimation of phase transformations in solid state during Yb:YAG laser heating of steel sheets

    Energy Technology Data Exchange (ETDEWEB)

    Kubiak, Marcin, E-mail: kubiak@imipkm.pcz.pl; Piekarska, Wiesława; Domański, Tomasz; Saternus, Zbigniew [Institute of Mechanics and Machine Design Foundations, Częstochowa University of Technology, Dąbrowskiego 73, 42-200 Częstochowa (Poland); Stano, Sebastian [Welding Technologies Department, Welding Institute, Błogosławionego Czesława 16-18, 44-100 Gliwice (Poland)

    2015-03-10

    This work concerns the numerical modeling of heat transfer and solid-state phase transformations occurring during the Yb:YAG laser beam heating process. The temperature field is obtained by numerically solving the transient heat transfer equation with a convective term. The laser beam heat source model is developed using the Kriging interpolation method, with experimental measurements of the Yb:YAG laser beam profile taken into account. Phase transformations are calculated on the basis of the Johnson-Mehl-Avrami (JMA) and Koistinen-Marburger (KM) kinetics models as well as continuous heating transformation (CHT) and continuous cooling transformation (CCT) diagrams for S355 steel. On the basis of the developed numerical algorithms, 3D computer simulations are performed in order to predict the temperature history and phase transformations in the Yb:YAG laser heating process.
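    For reference, the sketch below evaluates the standard forms of the two kinetics models named above; the rate constant, Avrami exponent, martensite-start temperature and KM coefficient are illustrative values, not the calibrated parameters for S355 steel.

```python
# Standard Johnson-Mehl-Avrami and Koistinen-Marburger kinetics expressions.
import numpy as np

def jma_fraction(t, k=0.05, n=2.5):
    """Johnson-Mehl-Avrami: fraction transformed after time t (isothermal case)."""
    return 1.0 - np.exp(-k * t**n)

def km_martensite_fraction(T, Ms=400.0, b=0.011):
    """Koistinen-Marburger: martensite fraction on cooling below Ms [deg C]."""
    return np.where(T < Ms, 1.0 - np.exp(-b * (Ms - T)), 0.0)

print(jma_fraction(np.array([1.0, 5.0, 10.0])))           # transformed fraction vs time
print(km_martensite_fraction(np.array([450.0, 300.0, 150.0])))  # fraction vs temperature
```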

  1. Analytical estimates of structural behavior

    CERN Document Server

    Dym, Clive L

    2012-01-01

    Explicitly reintroducing the idea of modeling to the analysis of structures, Analytical Estimates of Structural Behavior presents an integrated approach to modeling and estimating the behavior of structures. With the increasing reliance on computer-based approaches in structural analysis, it is becoming even more important for structural engineers to recognize that they are dealing with models of structures, not with the actual structures. As tempting as it is to run innumerable simulations, closed-form estimates can be effectively used to guide and check numerical results, and to confirm phys

  2. Vehicle State Information Estimation with the Unscented Kalman Filter

    Directory of Open Access Journals (Sweden)

    Hongbin Ren

    2014-01-01

    Full Text Available The vehicle state information plays an important role in vehicle active safety systems; this paper proposes a new concept to estimate the instantaneous vehicle speed, yaw rate, tire forces, and tire kinematics information in real time. The estimator is based on a 3DoF vehicle model combined with a piecewise linear tire model. The estimator is realized using the unscented Kalman filter (UKF), since it is based on the unscented transform technique and considers higher-order terms during the measurement and update stages. Numerical simulations are carried out to further investigate the performance of the estimator under high-friction and low-friction road conditions in the MATLAB/Simulink environment combined with CarSim. The simulation results are compared with the numerical results from the CarSim software, which indicate that the UKF can estimate the vehicle state information accurately and in real time; the proposed estimator will provide the necessary and reliable state information to the vehicle controller in the future.
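
    As a hedged illustration of the unscented transform on which a UKF is built (not the 3DoF vehicle estimator itself), a minimal Python sketch is given below; the state, covariance, weights, and nonlinear map are arbitrary examples.

import numpy as np

def unscented_transform(mean, cov, f, alpha=1.0, beta=2.0, kappa=0.0):
    """Propagate (mean, cov) through a nonlinear map f via sigma points."""
    n = mean.size
    lam = alpha**2 * (n + kappa) - n
    sqrt_cov = np.linalg.cholesky((n + lam) * cov)
    offsets = [sqrt_cov[:, i] for i in range(n)]
    sigmas = [mean] + [mean + o for o in offsets] + [mean - o for o in offsets]
    wm = np.full(2 * n + 1, 1.0 / (2.0 * (n + lam)))
    wc = wm.copy()
    wm[0] = lam / (n + lam)
    wc[0] = lam / (n + lam) + (1.0 - alpha**2 + beta)
    ys = np.array([f(s) for s in sigmas])
    y_mean = wm @ ys
    y_cov = sum(w * np.outer(y - y_mean, y - y_mean) for w, y in zip(wc, ys))
    return y_mean, y_cov

# Propagate a 2-D state through a mildly nonlinear map.
m, P = np.array([1.0, 0.5]), np.diag([0.1, 0.2])
print(unscented_transform(m, P, lambda x: np.array([np.sin(x[0]), x[0] * x[1]])))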

  3. High-order accurate numerical algorithm for three-dimensional transport prediction

    Energy Technology Data Exchange (ETDEWEB)

    Pepper, D W [Savannah River Lab., Aiken, SC; Baker, A J

    1980-01-01

    The numerical solution of the three-dimensional pollutant transport equation is obtained with the method of fractional steps; advection is solved by the method of moments and diffusion by cubic splines. Topography and variable mesh spacing are accounted for with coordinate transformations. First-estimate wind fields are obtained by interpolation to grid points surrounding specific data locations. Numerical results agree with results obtained from analytical Gaussian plume relations for ideal conditions. The numerical model is used to simulate the transport of tritium released from the Savannah River Plant on 2 May 1974. The predicted ground-level air concentration 56 km from the release point is within 38% of the experimentally measured value.
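
    For reference, an analytical Gaussian plume relation of the kind used for such ideal-condition checks can be sketched as follows; the release rate, wind speed, release height, and dispersion parameters are hypothetical values, not those of the tritium release simulation.

import math

def plume_concentration(q, u, y, z, h, sigma_y, sigma_z):
    """Ground-reflected Gaussian plume concentration at crosswind distance y and height z."""
    lateral = math.exp(-y**2 / (2.0 * sigma_y**2))
    vertical = (math.exp(-(z - h)**2 / (2.0 * sigma_z**2)) +
                math.exp(-(z + h)**2 / (2.0 * sigma_z**2)))
    return q / (2.0 * math.pi * u * sigma_y * sigma_z) * lateral * vertical

# Ground-level centreline value for a hypothetical 60 m release of 1e9 Bq/s in a 3 m/s wind.
print(plume_concentration(q=1.0e9, u=3.0, y=0.0, z=0.0, h=60.0, sigma_y=200.0, sigma_z=80.0))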

  4. Numerical method for three dimensional steady-state two-phase flow calculations

    International Nuclear Information System (INIS)

    Raymond, P.; Toumi, I.

    1992-01-01

    This paper presents the numerical scheme which was developed for the FLICA-4 computer code to calculate three dimensional steady state two phase flows. This computer code is devoted to steady state and transient thermal hydraulics analysis of nuclear reactor cores [1,3]. The first section briefly describes the FLICA-4 flow modelling. Then in order to introduce the numerical method for steady state computations, some details are given about the implicit numerical scheme based upon an approximate Riemann solver which was developed for calculation of flow transients. The third section deals with the numerical method for steady state computations, which is derived from this previous general scheme and its optimization. We give some numerical results for steady state calculations and comparisons on required CPU time and memory for various meshing and linear system solvers.

  5. Fast fundamental frequency estimation

    DEFF Research Database (Denmark)

    Nielsen, Jesper Kjær; Jensen, Tobias Lindstrøm; Jensen, Jesper Rindom

    2017-01-01

    Modelling signals as being periodic is common in many applications. Such periodic signals can be represented by a weighted sum of sinusoids with frequencies being an integer multiple of the fundamental frequency. Due to its widespread use, numerous methods have been proposed to estimate the funda...
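
    As one illustration of the underlying signal model (a weighted sum of sinusoids at integer multiples of the fundamental frequency), a basic harmonic-summation estimator is sketched below in Python; it is not the fast method proposed in the cited work, and the sampling rate, candidate grid, and test signal are arbitrary.

import numpy as np

def estimate_f0(x, fs, f0_grid, n_harmonics=5):
    """Pick the candidate f0 whose harmonics carry the most periodogram energy."""
    spectrum = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)

    def score(f0):
        idx = [np.argmin(np.abs(freqs - k * f0)) for k in range(1, n_harmonics + 1)]
        return spectrum[idx].sum()

    return max(f0_grid, key=score)

fs = 8000.0
t = np.arange(0, 0.1, 1.0 / fs)
x = (np.cos(2 * np.pi * 220 * t) + 0.5 * np.cos(2 * np.pi * 440 * t)
     + 0.3 * np.cos(2 * np.pi * 660 * t))
print(estimate_f0(x, fs, f0_grid=np.arange(100.0, 400.0, 1.0)))   # close to 220 Hz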

  6. Error estimates for a numerical method for the compressible Navier-Stokes system on sufficiently smooth domains

    Czech Academy of Sciences Publication Activity Database

    Feireisl, Eduard; Hošek, Radim; Maltese, D.; Novotný, A.

    2017-01-01

    Roč. 51, č. 1 (2017), s. 279-319 ISSN 0764-583X EU Projects: European Commission(XE) 320078 - MATHEF Institutional support: RVO:67985840 Keywords: Navier-Stokes system * finite element numerical method * finite volume numerical method Subject RIV: BA - General Mathematics OBOR OECD: Pure mathematics Impact factor: 1.727, year: 2016 http://www.esaim-m2an.org/articles/m2an/abs/2017/01/m2an150157/m2an150157.html

  7. Automatic estimation of aquifer parameters using long-term water supply pumping and injection records

    Science.gov (United States)

    Luo, Ning; Illman, Walter A.

    2016-09-01

    Analyses are presented of long-term hydrographs perturbed by variable pumping/injection events in a confined aquifer at a municipal water-supply well field in the Region of Waterloo, Ontario (Canada). Such records are typically not considered for aquifer test analysis. Here, the water-level variations are fingerprinted to pumping/injection rate changes using the Theis model implemented in the WELLS code coupled with PEST. Analyses of these records yield a set of transmissivity (T) and storativity (S) estimates between each monitoring and production borehole. These individual estimates are found to poorly predict water-level variations at nearby monitoring boreholes not used in the calibration effort. On the other hand, the geometric means of the individual T and S estimates are similar to those obtained from previous pumping tests conducted at the same site and adequately predict water-level variations in other boreholes. The analyses reveal that long-term municipal water-level records are amenable to analyses using a simple analytical solution to estimate aquifer parameters. However, uniform parameters estimated with analytical solutions should be considered as first rough estimates. More accurate hydraulic parameters should be obtained by calibrating a three-dimensional numerical model that rigorously captures the complexities of the site with these data.
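
    A minimal sketch of the Theis drawdown calculation that such analyses rely on is given below, using SciPy's exponential integral for the well function; the pumping rate and aquifer parameters are placeholders, not Region of Waterloo values.

import numpy as np
from scipy.special import exp1   # well function W(u) = E1(u)

def theis_drawdown(q, transmissivity, storativity, r, t):
    """Drawdown (m) at radius r (m) and time t (s) for a constant pumping rate q (m^3/s)."""
    u = r**2 * storativity / (4.0 * transmissivity * t)
    return q / (4.0 * np.pi * transmissivity) * exp1(u)

# Hypothetical values: 20 L/s pumping, observed 50 m away after one day.
print(theis_drawdown(q=0.02, transmissivity=5e-3, storativity=2e-4, r=50.0, t=86400.0))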

  8. Black hole spectroscopy: Systematic errors and ringdown energy estimates

    Science.gov (United States)

    Baibhav, Vishal; Berti, Emanuele; Cardoso, Vitor; Khanna, Gaurav

    2018-02-01

    The relaxation of a distorted black hole to its final state provides important tests of general relativity within the reach of current and upcoming gravitational wave facilities. In black hole perturbation theory, this phase consists of a simple linear superposition of exponentially damped sinusoids (the quasinormal modes) and of a power-law tail. How many quasinormal modes are necessary to describe waveforms with a prescribed precision? What error do we incur by only including quasinormal modes, and not tails? What other systematic effects are present in current state-of-the-art numerical waveforms? These issues, which are basic to testing fundamental physics with distorted black holes, have hardly been addressed in the literature. We use numerical relativity waveforms and accurate evolutions within black hole perturbation theory to provide some answers. We show that (i) a determination of the fundamental ℓ = m = 2 quasinormal frequencies and damping times to within 1% or better requires the inclusion of at least the first overtone, and preferably of the first two or three overtones; (ii) a determination of the black hole mass and spin with precision better than 1% requires the inclusion of at least two quasinormal modes for any given angular harmonic mode (ℓ, m). We also improve on previous estimates and fits for the ringdown energy radiated in the various multipoles. These results are important to quantify theoretical (as opposed to instrumental) limits in parameter estimation accuracy and tests of general relativity allowed by ringdown measurements with high signal-to-noise ratio gravitational wave detectors.
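
    Purely as an illustration of the ringdown signal model (a superposition of exponentially damped sinusoids), a short Python sketch follows; the mode amplitudes, frequencies, and damping times are arbitrary and the power-law tail is omitted.

import numpy as np

def ringdown(t, modes):
    """Sum of damped sinusoids; modes = [(amplitude, frequency_Hz, damping_time_s, phase), ...]."""
    h = np.zeros_like(t)
    for amp, freq, tau, phi in modes:
        h += amp * np.exp(-t / tau) * np.cos(2.0 * np.pi * freq * t + phi)
    return h

t = np.linspace(0.0, 0.05, 2000)
waveform = ringdown(t, [(1.0, 250.0, 0.004, 0.0),     # "fundamental" (made-up numbers)
                        (0.4, 245.0, 0.0013, 1.0)])   # "first overtone": shorter damping time
print(waveform[:5])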

  9. Interpolation Inequalities and Spectral Estimates for Magnetic Operators

    Science.gov (United States)

    Dolbeault, Jean; Esteban, Maria J.; Laptev, Ari; Loss, Michael

    2018-05-01

    We prove magnetic interpolation inequalities and Keller-Lieb-Thirring estimates for the principal eigenvalue of magnetic Schrödinger operators. We establish explicit upper and lower bounds for the best constants and show by numerical methods that our theoretical estimates are accurate.

  10. Numerical methods for hyperbolic differential functional problems

    Directory of Open Access Journals (Sweden)

    Roman Ciarski

    2008-01-01

    Full Text Available The paper deals with the initial boundary value problem for quasilinear first order partial differential functional systems. A general class of difference methods for the problem is constructed. Theorems on the error estimate of approximate solutions for difference functional systems are presented. The convergence results are proved by means of consistency and stability arguments. A numerical example is given.

  11. An inverse hyperbolic heat conduction problem in estimating surface heat flux by the conjugate gradient method

    International Nuclear Information System (INIS)

    Huang, C.-H.; Wu, H.-H.

    2006-01-01

    In the present study an inverse hyperbolic heat conduction problem is solved by the conjugate gradient method (CGM) in estimating the unknown boundary heat flux based on the boundary temperature measurements. Results obtained in this inverse problem will be justified based on the numerical experiments where three different heat flux distributions are to be determined. Results show that the inverse solutions can always be obtained with any arbitrary initial guesses of the boundary heat flux. Moreover, the drawbacks of the previous study for this similar inverse problem, such as (1) the inverse solution has phase error and (2) the inverse solution is sensitive to measurement error, can be avoided in the present algorithm. Finally, it is concluded that accurate boundary heat flux can be estimated in this study.

  12. Microwave Breast Imaging System Prototype with Integrated Numerical Characterization

    Directory of Open Access Journals (Sweden)

    Mark Haynes

    2012-01-01

    Full Text Available The increasing number of experimental microwave breast imaging systems and the need to properly model them have motivated our development of an integrated numerical characterization technique. We use Ansoft HFSS and a formalism we developed previously to numerically characterize an S-parameter-based breast imaging system and link it to an inverse scattering algorithm. We show successful reconstructions of simple test objects using synthetic and experimental data. We demonstrate the sensitivity of image reconstructions to knowledge of the background dielectric properties and show the limits of the current model.

  13. Numerical compliance testing of human exposure to electromagnetic radiation from smart-watches.

    Science.gov (United States)

    Hong, Seon-Eui; Lee, Ae-Kyoung; Kwon, Jong-Hwa; Pack, Jeong-Ki

    2016-10-07

    In this study, we investigated the electromagnetic dosimetry for smart-watches. At present, the standard for compliance testing of body-mounted and handheld devices specifies the use of a flat phantom to provide conservative estimates of the peak spatial-averaged specific absorption rate (SAR). This means that the SAR estimated using a flat phantom should be higher than the SAR in the exposed part of an anatomical human-body model. To verify this, we numerically calculated the SAR for a flat phantom and compared it with the numerically calculated SAR for four anatomical human-body models of different ages. The numerical analysis was performed using the finite-difference time-domain (FDTD) method. The smart-watch models used three antenna types: a shorted planar inverted-F antenna (PIFA), a loop antenna, and a monopole antenna. Numerical smart-watch models were implemented for cellular communication and wireless local-area network operation at 835, 1850, and 2450 MHz. The peak spatial-averaged SARs of the smart-watch models were calculated for the flat phantom and the anatomical human-body models in the wrist-worn and next-to-mouth positions. The results show that the flat phantom does not provide a consistent conservative SAR estimate. We concluded that the difference in the SAR results between an anatomical human-body model and a flat phantom can be attributed to the different phantom shapes and tissue structures.

  14. Resilient Distributed Estimation Through Adversary Detection

    Science.gov (United States)

    Chen, Yuan; Kar, Soummya; Moura, Jose M. F.

    2018-05-01

    This paper studies resilient multi-agent distributed estimation of an unknown vector parameter when a subset of the agents is adversarial. We present and analyze a Flag Raising Distributed Estimator ($\\mathcal{FRDE}$) that allows the agents under attack to perform accurate parameter estimation and detect the adversarial agents. The $\\mathcal{FRDE}$ algorithm is a consensus+innovations estimator in which agents combine estimates of neighboring agents (consensus) with local sensing information (innovations). We establish that, under $\\mathcal{FRDE}$, either the uncompromised agents' estimates are almost surely consistent or the uncompromised agents detect compromised agents if and only if the network of uncompromised agents is connected and globally observable. Numerical examples illustrate the performance of $\\mathcal{FRDE}$.
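
    A generic consensus+innovations update of the kind FRDE builds on can be sketched as follows; this is illustrative only, with an invented three-agent network and gains, and it does not reproduce the flag-raising adversary-detection logic of the paper.

import numpy as np

def ci_step(x, neighbors, H, y, alpha=0.05, beta=0.3):
    """One consensus+innovations update of each agent's estimate of a common parameter."""
    x_new = np.copy(x)
    for i, nbrs in neighbors.items():
        consensus = sum(x[i] - x[j] for j in nbrs)           # disagreement with neighbours
        innovation = H[i].T @ (y[i] - H[i] @ x[i])           # local sensing residual
        x_new[i] = x[i] - beta * consensus + alpha * innovation
    return x_new

theta = np.array([1.0, -2.0])                                # unknown parameter
H = {0: np.array([[1.0, 0.0]]), 1: np.array([[0.0, 1.0]]), 2: np.array([[1.0, 1.0]])}
y = {i: H[i] @ theta for i in H}                             # noiseless measurements for brevity
neighbors = {0: [1], 1: [0, 2], 2: [1]}                      # connected chain of three agents
x = np.zeros((3, 2))
for _ in range(2000):
    x = ci_step(x, neighbors, H, y)
print(x)   # each row approaches theta: the network is connected and globally observable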

  15. Numerical properties of staggered overlap fermions

    CERN Document Server

    de Forcrand, Philippe; Panero, Marco

    2010-01-01

    We report the results of a numerical study of staggered overlap fermions, following the construction of Adams which reduces the number of tastes from 4 to 2 without fine-tuning. We study the sensitivity of the operator to the topology of the gauge field, its locality and its robustness to fluctuations of the gauge field. We make a first estimate of the computing cost of a quark propagator calculation, and compare with Neuberger's overlap.

  16. Numerical studies of fermionic field theories at large-N

    International Nuclear Information System (INIS)

    Dickens, T.A.

    1987-01-01

    A description of an algorithm, which may be used to study large-N theories with or without fermions, is presented. As an initial test of the method, the spectrum of continuum QCD in 1 + 1 dimensions is determined and compared to previously obtained results. Exact solutions of 1 + 1 dimensional lattice versions of the free fermion theory, the Gross-Neveu model, and QCD are obtained. Comparison of these exact results with results from the numerical algorithm is used to test the algorithms, and more importantly, to determine the errors incurred from the approximations used in the numerical technique. Numerical studies of the above three lattice theories in higher dimensions are also presented. The results are again compared to exact solutions for free fermions and the Gross-Neveu model; perturbation theory is used to derive expansions with which the numerical results for QCD may be compared. The numerical algorithm may also be used to study the euclidean formulation of lattice gauge theories. Results for 1 + 1 dimensional euclidean lattice QCD are compared to the exact solution of this model

  17. Extensive numerical study of a D-brane, anti-D-brane system in AdS{sub 5}/CFT{sub 4}

    Energy Technology Data Exchange (ETDEWEB)

    Hegedűs, Árpád [MTA Lendület Holographic QFT Group, Wigner Research Centre,H-1525 Budapest 114, P.O.B. 49 (Hungary)

    2015-04-20

    In this paper the hybrid-NLIE approach of http://dx.doi.org/10.1007/JHEP08(2012)022 is extended to the ground state of a D-brane anti-D-brane system in AdS/CFT. The hybrid-NLIE equations presented in the paper are finite component alternatives of the previously proposed TBA equations and they admit an appropriate framework for the numerical investigation of the ground state of the problem. Straightforward numerical iterative methods fail to converge, thus new numerical methods are worked out to solve the equations. Our numerical data confirm the previous TBA data. In view of the numerical results the mysterious L=1 case is also commented on in the paper.

  18. NUMERICAL SIMULATION OF TOXIC CHEMICAL DISPERSION AFTER ACCIDENT AT RAILWAY

    Directory of Open Access Journals (Sweden)

    M. M. Biliaiev

    2016-04-01

    Full Text Available Purpose. This research focuses on the development of an applied numerical model to calculate the dynamics of atmospheric pollution following the release of dangerous chemical substances during transportation by railway. Methodology. The transport of a dangerous chemical substance in the atmosphere is modelled with the convection-diffusion equation for pollutant transport. This equation takes into account the effect of wind, atmospheric diffusion, the strength of the emission source, and the movement of the emission source (a depressurized tank) on pollutant dispersion. The wind-speed profile is also taken into account in the computational experiments. For the numerical integration of the pollutant transport equation, an implicit finite-difference splitting scheme is used. The calculation is divided into four splitting steps, and at each step the unknown concentration of the hazardous substance is determined by an explicit marching scheme. On the basis of the numerical model, a code was written in FORTRAN. Computational experiments were conducted to assess the level of air pollution near the railway station «Illarionovo» in the event of a possible accident during transportation of ammonia. Findings. The proposed model allows rapid calculation of air pollution after the release of a chemically hazardous substance, taking into account the motion of the emission source. The model makes it possible to determine the size of the ground-surface pollution zones and the amount of pollutant deposited on a specific area. Using the developed numerical model, the environmental damage near the railway station «Illarionovo» was estimated. Originality. The numerical model can be used to calculate the size and intensity of chemical contamination zones after transport accidents. Practical value
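
    As a much-simplified illustration of the convection-diffusion transport model (not the implicit 3-D splitting scheme of the cited work), a one-dimensional explicit step is sketched below; the wind speed, diffusivity, and grid values are arbitrary and the boundaries are periodic for brevity.

import numpy as np

def advect_diffuse(c, u, k, dx, dt):
    """One explicit step of dc/dt + u dc/dx = k d2c/dx2 (u > 0, first-order upwind, periodic)."""
    adv = -u * (c - np.roll(c, 1)) / dx
    dif = k * (np.roll(c, -1) - 2.0 * c + np.roll(c, 1)) / dx**2
    return c + dt * (adv + dif)

c = np.zeros(200)
c[100] = 1.0                                   # instantaneous point release
for _ in range(200):
    c = advect_diffuse(c, u=2.0, k=0.5, dx=1.0, dt=0.2)
print(c.argmax(), round(c.max(), 4))           # the plume has drifted downwind and spread out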

  19. A novel numerical approach for workspace determination of parallel mechanisms

    Energy Technology Data Exchange (ETDEWEB)

    Zhou, Yiqun; Niu, Junchuan; Liu, Zhihui; Zhang, Fuliang [Shandong University, Shandong (China)

    2017-06-15

    In this paper, a novel numerical approach is proposed for workspace determination of parallel mechanisms. Compared with the classical numerical approaches, this presented approach discretizes both location and orientation of the mechanism simultaneously, not only one of the two. This technique makes the presented numerical approach applicable in determining almost all types of workspaces, while traditional numerical approaches are only applicable in determining the constant orientation workspace and orientation workspace. The presented approach and its steps to determine the inclusive orientation workspace and total orientation workspace are described in detail. A lower-mobility parallel mechanism and a six-degrees-of-freedom Stewart platform are set as examples, the workspaces of these mechanisms are estimated and visualized by the proposed numerical approach. Furthermore, the efficiency of the presented approach is discussed. The examples show that the presented approach is applicable in determining the inclusive orientation workspace and total orientation workspace of parallel mechanisms with high efficiency.

  20. Estimating non-circular motions in barred galaxies using numerical N-body simulations

    Science.gov (United States)

    Randriamampandry, T. H.; Combes, F.; Carignan, C.; Deg, N.

    2015-12-01

    The observed velocities of the gas in barred galaxies are a combination of the azimuthally averaged circular velocity and non-circular motions, primarily caused by gas streaming along the bar. These non-circular flows must be accounted for before the observed velocities can be used in mass modelling. In this work, we examine the performance of the tilted-ring method and the DISKFIT algorithm for transforming velocity maps of barred spiral galaxies into rotation curves (RCs) using simulated data. We find that the tilted-ring method, which does not account for streaming motions, under-/overestimates the circular motions when the bar is parallel/perpendicular to the projected major axis. DISKFIT, which does include streaming motions, is limited to orientations where the bar is not aligned with either the major or minor axis of the image. Therefore, we propose a method of correcting RCs based on numerical simulations of galaxies. We correct the RC derived from the tilted-ring method based on a numerical simulation of a galaxy with similar properties and projections as the observed galaxy. Using observations of NGC 3319, which has a bar aligned with the major axis, as a test case, we show that the inferred mass models from the uncorrected and corrected RCs are significantly different. These results show the importance of correcting for the non-circular motions and demonstrate that new methods of accounting for these motions are necessary as current methods fail for specific bar alignments.

  1. A quasi-stationary numerical model of atomized metal droplets, II: Prediction and assessment

    DEFF Research Database (Denmark)

    Pryds, Nini H.; Hattel, Jesper Henri; Thorborg, Jesper

    1999-01-01

    A new model which extends previous studies and includes the interaction between enveloping gas and an array of droplets has been developed and presented in a previous paper. The model incorporates the probability density function of atomized metallic droplets into the heat transfer equations. The main thrust of the model is that the gas temperature was not predetermined and calculated empirically but calculated numerically based on heat balance considerations. In this paper, the accuracy of the numerical model and the applicability of the model as a predictive tool have been investigated ... been illustrated. A comparison between the numerical model and the experimental results shows an excellent agreement and demonstrates the validity of the present model, e.g. the calculated gas temperature, which has an important influence on the droplet solidification behaviour, as well as the calculated ...

  2. Bin mode estimation methods for Compton camera imaging

    International Nuclear Information System (INIS)

    Ikeda, S.; Odaka, H.; Uemura, M.; Takahashi, T.; Watanabe, S.; Takeda, S.

    2014-01-01

    We study the image reconstruction problem of a Compton camera which consists of semiconductor detectors. The image reconstruction is formulated as a statistical estimation problem. We employ a bin-mode estimation (BME) and extend an existing framework to a Compton camera with multiple scatterers and absorbers. Two estimation algorithms are proposed: an accelerated EM algorithm for the maximum likelihood estimation (MLE) and a modified EM algorithm for the maximum a posteriori (MAP) estimation. Numerical simulations demonstrate the potential of the proposed methods
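
    For orientation, a generic multiplicative ML-EM update of the kind such estimators generalize is sketched below; the system matrix and counts are synthetic, and the bin-mode and MAP variants of the paper are not reproduced.

import numpy as np

def mlem(A, y, n_iter=50):
    """Multiplicative ML-EM update; A is the system matrix (bins x voxels), y the counts."""
    lam = np.ones(A.shape[1])                  # initial image
    sensitivity = A.sum(axis=0)                # per-voxel sensitivity
    for _ in range(n_iter):
        projection = A @ lam
        projection[projection == 0] = 1e-12    # guard against division by zero
        lam *= (A.T @ (y / projection)) / sensitivity
    return lam

rng = np.random.default_rng(0)
A = rng.random((40, 10))                       # synthetic system matrix
true_image = rng.random(10) * 5.0
counts = rng.poisson(A @ true_image)           # Poisson-distributed measurements
print(np.round(mlem(A, counts), 2))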

  3. Wave Transformation Over Reefs: Evaluation of One-Dimensional Numerical Models

    National Research Council Canada - National Science Library

    Demirbilek, Zeki; Nwogu, Okey G; Ward, Donald L; Sanchez, Alejandro

    2009-01-01

    Three one-dimensional (1D) numerical wave models are evaluated for wave transformation over reefs and estimates of wave setup, runup, and ponding levels in an island setting where the beach is fronted by fringing reef and lagoons...

  4. Numerical Study on Density Gradient Carbon-Carbon Composite for Vertical Launching System

    Science.gov (United States)

    Yoon, Jin-Young; Kim, Chun-Gon; Lim, Juhwan

    2018-04-01

    This study presents a new carbon-carbon (C/C) composite that has a density gradient within a single material, and estimates its heat conduction performance by a numerical method. To address the high heat conduction of a high-density C/C, which can cause adhesion separation in the steel structures of vertical launching systems, a density gradient carbon-carbon (DGCC) composite is proposed, as it exhibits low thermal conductivity as well as excellent ablative resistance. DGCC is manufactured by hybridizing two different carbonization processes into a single carbon preform. One part exhibits a low density, using phenolic resin carbonization, to reduce heat conduction, and the other exhibits a high density, using thermal gradient-chemical vapor infiltration, for excellent ablative resistance. Numerical analysis of DGCC is performed for a heat conduction problem, and internal temperature distributions are estimated by the forward finite difference method. The material properties of the transition density layer, which is inevitably formed during DGCC manufacturing, are assumed to be a combination of the two density layers for the numerical analysis. By comparing numerical results with experimental data, we validate that DGCC exhibits a low thermal conductivity and can serve as a highly effective ablative material for vertical launching systems.
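
    As a rough illustration of a forward finite-difference conduction estimate through a two-density layup, a one-dimensional sketch follows; the layer thicknesses, conductivities, heat capacities, and applied heat flux are invented placeholders, not DGCC property data.

import numpy as np

def conduction_step(T, k, rho_cp, dx, dt, q_in):
    """Explicit update for 1-D conduction with spatially varying conductivity."""
    flux = -0.5 * (k[1:] + k[:-1]) * (T[1:] - T[:-1]) / dx   # interface heat fluxes
    dTdt = np.zeros_like(T)
    dTdt[1:-1] = (flux[:-1] - flux[1:]) / (rho_cp[1:-1] * dx)
    dTdt[0] = (q_in - flux[0]) / (rho_cp[0] * dx)            # heated (ablative) front face
    dTdt[-1] = flux[-1] / (rho_cp[-1] * dx)                  # insulated back face
    return T + dt * dTdt

n = 100
high_density = np.arange(n) < 50                             # front half: high-density layer
k = np.where(high_density, 60.0, 5.0)                        # conductivity, W/(m K) (invented)
rho_cp = np.where(high_density, 2.7e6, 1.2e6)                # volumetric heat capacity, J/(m^3 K)
T = np.full(n, 300.0)
for _ in range(20000):                                       # 100 s of heating at 0.5 MW/m^2
    T = conduction_step(T, k, rho_cp, dx=1e-3, dt=5e-3, q_in=5e5)
print(round(T[0], 1), round(T[-1], 1))                       # front face far hotter than the back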

  5. Prediction of successful trial of labour in patients with a previous caesarean section

    International Nuclear Information System (INIS)

    Shaheen, N.; Khalil, S.; Iftikhar, P.

    2014-01-01

    Objective: To determine the prediction rate of success in trial of labour after one previous caesarean section. Methods: The cross-sectional study was conducted at the Department of Obstetrics and Gynaecology, Cantonment General Hospital, Rawalpindi, from January 1, 2012 to January 31, 2013, and comprised women with one previous Caesarean section and a single live foetus at 37-41 weeks of gestation. Women with more than one Caesarean section, unknown site of uterine scar, bony pelvic deformity, placenta previa, intra-uterine growth restriction, deep transverse arrest in previous labour and non-reassuring foetal status at the time of admission were excluded. Intrapartum risk assessment included Bishop score at admission, rate of cervical dilatation and scar tenderness. SPSS 21 was used for statistical analysis. Results: Out of a total of 95 women, the trial was successful in 68 (71.6%). Estimated foetal weight and number of prior vaginal deliveries had a high predictive value for successful trial of labour after Caesarean section. Estimated foetal weight had an odds ratio of 0.46 (p<0.001), while number of prior vaginal deliveries had an odds ratio of 0.85 (p=0.010). Other factors found to be predictive of a successful trial included Bishop score at the time of admission (p<0.037) and rate of cervical dilatation in the first stage of labour (p<0.021). Conclusion: History of prior vaginal deliveries, higher Bishop score at the time of admission, rapid rate of cervical dilatation and lower estimated foetal weight were predictive of a successful trial of labour after Caesarean section. (author)

  6. Analytic Investigation Into Effect of Population Heterogeneity on Parameter Ratio Estimates

    International Nuclear Information System (INIS)

    Schinkel, Colleen; Carlone, Marco; Warkentin, Brad; Fallone, B. Gino

    2007-01-01

    Purpose: A homogeneous tumor control probability (TCP) model has previously been used to estimate the α/β ratio for prostate cancer from clinical dose-response data. For the ratio to be meaningful, it must be assumed that parameter ratios are not sensitive to the type of tumor control model used. We investigated the validity of this assumption by deriving analytic relationships between the α/β estimates from a homogeneous TCP model, ignoring interpatient heterogeneity, and those of the corresponding heterogeneous (population-averaged) model that incorporated heterogeneity. Methods and Materials: The homogeneous and heterogeneous TCP models can both be written in terms of the geometric parameters D50 and γ50. We show that the functional forms of these models are similar. This similarity was used to develop an expression relating the homogeneous and heterogeneous estimates for the α/β ratio. The expression was verified numerically by generating pseudo-data from a TCP curve with known parameters and then using the homogeneous and heterogeneous TCP models to estimate the α/β ratio for the pseudo-data. Results: When the dominant form of interpatient heterogeneity is that of radiosensitivity, the homogeneous and heterogeneous α/β estimates differ. This indicates that the presence of this heterogeneity affects the value of the α/β ratio derived from analysis of TCP curves. Conclusions: The α/β ratio estimated from clinical dose-response data is model dependent: a heterogeneous TCP model that accounts for heterogeneity in radiosensitivity will produce a greater α/β estimate than that resulting from a homogeneous TCP model.

  7. A numerical library in Java for scientists and engineers

    CERN Document Server

    Lau, Hang T

    2003-01-01

    At last researchers have an inexpensive library of Java-based numeric procedures for use in scientific computation. The first and only book of its kind, A Numeric Library in Java for Scientists and Engineers is a translation into Java of the library NUMAL (NUMerical procedures in ALgol 60). This groundbreaking text presents procedural descriptions for linear algebra, ordinary and partial differential equations, optimization, parameter estimation, mathematical physics, and other tools that are indispensable to any dynamic research group. The book offers test programs that allow researchers to execute the examples provided; users are free to construct their own tests and apply the numeric procedures to them in order to observe a successful computation or simulate failure. The entry for each procedure is logically presented, with name, usage parameters, and Java code included. This handbook serves as a powerful research tool, enabling the performance of critical computations in Java. It stands as a cost-effi...

  8. Numerical analysis of biosonar beamforming mechanisms and strategies in bats.

    Science.gov (United States)

    Müller, Rolf

    2010-09-01

    Beamforming is critical to the function of most sonar systems. The conspicuous noseleaf and pinna shapes in bats suggest that beamforming mechanisms based on diffraction of the outgoing and incoming ultrasonic waves play a major role in bat biosonar. Numerical methods can be used to investigate the relationships between baffle geometry, acoustic mechanisms, and resulting beampatterns. Key advantages of numerical approaches are: efficient, high-resolution estimation of beampatterns, spatially dense predictions of near-field amplitudes, and the malleability of the underlying shape representations. A numerical approach that combines near-field predictions based on a finite-element formulation for harmonic solutions to the Helmholtz equation with a free-field projection based on the Kirchhoff integral to obtain estimates of the far-field beampattern is reviewed. This method has been used to predict physical beamforming mechanisms such as frequency-dependent beamforming with half-open resonance cavities in the noseleaf of horseshoe bats and beam narrowing through extension of the pinna aperture with skin folds in false vampire bats. The fine structure of biosonar beampatterns is discussed for the case of the Chinese noctule and methods for assessing the spatial information conveyed by beampatterns are demonstrated for the brown long-eared bat.

  9. Comptonization in Ultra-Strong Magnetic Fields: Numerical Solution to the Radiative Transfer Problem

    Science.gov (United States)

    Ceccobello, C.; Farinelli, R.; Titarchuk, L.

    2014-01-01

    We consider the radiative transfer problem in a plane-parallel slab of thermal electrons in the presence of an ultra-strong magnetic field (B ≳ B_c ≈ 4.4 × 10^13 G). Under these conditions, the magnetic field behaves like a birefringent medium for the propagating photons, and the electromagnetic radiation is split into two polarization modes, ordinary and extraordinary, that have different cross-sections. When the optical depth of the slab is large, the ordinary-mode photons are strongly Comptonized and the photon field is dominated by an isotropic component. Aims. The radiative transfer problem in strong magnetic fields presents many mathematical issues and analytical or numerical solutions can be obtained only under some given approximations. We investigate this problem both from the analytical and numerical point of view, provide a test of the previous analytical estimates, and extend these results with numerical techniques. Methods. We consider here the case of low temperature black-body photons propagating in a sub-relativistic temperature plasma, which allows us to deal with a semi-Fokker-Planck approximation of the radiative transfer equation. The problem can then be treated with the variable separation method, and we use a numerical technique to find solutions to the eigenvalue problem in the case of a singular kernel of the space operator. The singularity of the space kernel is the result of the strong angular dependence of the electron cross-section in the presence of a strong magnetic field. Results. We provide the numerical solution obtained for eigenvalues and eigenfunctions of the space operator, and the emerging Comptonization spectrum of the ordinary-mode photons for any eigenvalue of the space equation and for energies significantly less than the cyclotron energy, which is on the order of MeV for the intensity of the magnetic field here considered. Conclusions. We derived the specific intensity of the

  10. Real-time 3-D space numerical shake prediction for earthquake early warning

    Science.gov (United States)

    Wang, Tianyun; Jin, Xing; Huang, Yandan; Wei, Yongxiang

    2017-12-01

    In earthquake early warning systems, real-time shake prediction through wave propagation simulation is a promising approach. Compared with traditional methods, it does not suffer from the inaccurate estimation of source parameters. For computation efficiency, the wave is assumed to propagate on the 2-D surface of the earth in these methods. In fact, since the seismic wave propagates in the 3-D sphere of the earth, the 2-D space modeling of wave direction results in inaccurate wave estimation. In this paper, we propose a 3-D space numerical shake prediction method, which simulates the wave propagation in 3-D space using radiative transfer theory, and incorporates a data assimilation technique to estimate the distribution of wave energy. The 2011 Tohoku earthquake is studied as an example to show the validity of the proposed model. The 2-D space model and the 3-D space model are compared in this article, and the prediction results show that numerical shake prediction based on the 3-D space model can estimate the real-time ground motion precisely, and overprediction is alleviated when using the 3-D space model.

  11. Algorithms for Brownian first-passage-time estimation

    Science.gov (United States)

    Adib, Artur B.

    2009-09-01

    A class of algorithms in discrete space and continuous time for Brownian first-passage-time estimation is considered. A simple algorithm is derived that yields exact mean first-passage times (MFPTs) for linear potentials in one dimension, regardless of the lattice spacing. When applied to nonlinear potentials and/or higher spatial dimensions, numerical evidence suggests that this algorithm yields MFPT estimates that either outperform or rival Langevin-based (discrete time and continuous space) estimates.
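
    One common discrete-space, continuous-time scheme for such estimates is kinetic Monte Carlo with potential-dependent hop rates, sketched below; the particular rate rule and the linear test potential are standard illustrative choices, not necessarily the exact-MFPT algorithm derived in the paper.

import math
import numpy as np

def mfpt(potential, x0, target, h=0.1, diffusivity=1.0, n_walkers=500, seed=1):
    """Mean first-passage time from x0 to x >= target on a lattice of spacing h."""
    rng = np.random.default_rng(seed)
    base_rate = diffusivity / h**2
    times = []
    for _ in range(n_walkers):
        x, t = x0, 0.0
        while x < target:
            r_plus = base_rate * math.exp(-(potential(x + h) - potential(x)) / 2.0)
            r_minus = base_rate * math.exp(-(potential(x - h) - potential(x)) / 2.0)
            total = r_plus + r_minus
            t += rng.exponential(1.0 / total)                # waiting time before the next hop
            x += h if rng.random() < r_plus / total else -h
        times.append(t)
    return float(np.mean(times))

# Linear potential U(x) = -2x drives the walker toward the target at x = 1;
# the continuum answer for this constant drift is distance / drift = 0.5.
print(mfpt(potential=lambda x: -2.0 * x, x0=0.0, target=1.0))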

  12. Application of spreadsheet to estimate infiltration parameters

    OpenAIRE

    Zakwan, Mohammad; Muzzammil, Mohammad; Alam, Javed

    2016-01-01

    Infiltration is the process of flow of water into the ground through the soil surface. Although soil water contributes a negligible fraction of the total water present on the earth's surface, it is of utmost importance for plant life. Estimation of infiltration rates is of paramount importance for estimating effective rainfall, groundwater recharge, and designing irrigation systems. Numerous infiltration models are in use for estimation of infiltration rates. The conventional graphical approach ...

  13. Power system static state estimation using Kalman filter algorithm

    Directory of Open Access Journals (Sweden)

    Saikia Anupam

    2016-01-01

    Full Text Available State estimation of a power system is an important tool for operation, analysis and forecasting of the electric power system. In this paper, a Kalman filter algorithm is presented for static estimation of power system state variables. The IEEE 14-bus system is employed to check the accuracy of this method. A Newton-Raphson load flow study is first carried out on the test system, and a set of data from the output of the load flow program is taken as measurement input. Measurement inputs are simulated by adding zero-mean Gaussian noise. The results of the Kalman estimation are compared with the traditional Weighted Least Squares (WLS) method, and it is observed that the Kalman filter algorithm is numerically more efficient than the traditional WLS method. Estimation accuracy is also tested in the presence of parametric error in the system. In addition, the numerical stability of the Kalman filter algorithm is tested by including zero-mean errors in the initial estimates.
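
    For reference, a textbook linear Kalman filter predict/update step is sketched below; the measurement matrix and noise levels are arbitrary, and the power-system (IEEE 14-bus) measurement model of the paper is not reproduced.

import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    """One predict/update cycle of a linear Kalman filter."""
    x_pred = F @ x                                   # state prediction
    P_pred = F @ P @ F.T + Q
    S = H @ P_pred @ H.T + R                         # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)              # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Static two-element state observed through a fixed measurement matrix with noise.
F, H = np.eye(2), np.array([[1.0, 0.5], [0.0, 1.0]])
Q, R = 1e-6 * np.eye(2), 0.01 * np.eye(2)
x, P = np.zeros(2), np.eye(2)
true_state = np.array([1.05, -0.2])
rng = np.random.default_rng(0)
for _ in range(50):
    z = H @ true_state + rng.normal(0.0, 0.1, size=2)
    x, P = kalman_step(x, P, z, F, H, Q, R)
print(np.round(x, 3))   # should be close to true_state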

  14. Inclusion estimation from a single electrostatic boundary measurement

    DEFF Research Database (Denmark)

    Karamehmedovic, Mirza; Knudsen, Kim

    2013-01-01

    We present a numerical method for the detection and estimation of perfectly conducting inclusions in conducting homogeneous host media in . The estimation is based on the evaluation of an indicator function that depends on a single pair of Cauchy data (electric potential and current) given at the...

  15. Estimating surface solar radiation from upper-air humidity

    Energy Technology Data Exchange (ETDEWEB)

    Kun Yang [Telecommunications Advancement Organization of Japan, Tokyo (Japan); Koike, Toshio [University of Tokyo (Japan). Dept. of Civil Engineering

    2002-07-01

    A numerical model is developed to estimate global solar irradiance from upper-air humidity. In this model, solar radiation under clear skies is calculated through a simple model with radiation-damping processes under consideration. A sky clearness indicator is parameterized from relative humidity profiles within three atmospheric sublayers, and the indicator is used to connect global solar radiation under clear skies and that under cloudy skies. Model inter-comparisons at 18 sites in Japan suggest (1) global solar radiation strongly depends on the sky clearness indicator, (2) the new model generally gives better estimation to hourly-mean solar irradiance than the other three methods used in numerical weather predictions, and (3) the new model may be applied to estimate long-term solar radiation. In addition, a study at one site in the Tibetan Plateau shows vigorous convective activities in the region may cause some uncertainties to radiation estimations due to the small-scale and short life of convective systems. (author)

  16. A student's guide to numerical methods

    CERN Document Server

    Hutchinson, Ian H

    2015-01-01

    This concise, plain-language guide for senior undergraduates and graduate students aims to develop intuition, practical skills and an understanding of the framework of numerical methods for the physical sciences and engineering. It provides accessible self-contained explanations of mathematical principles, avoiding intimidating formal proofs. Worked examples and targeted exercises enable the student to master the realities of using numerical techniques for common needs such as solution of ordinary and partial differential equations, fitting experimental data, and simulation using particle and Monte Carlo methods. Topics are carefully selected and structured to build understanding, and illustrate key principles such as: accuracy, stability, order of convergence, iterative refinement, and computational effort estimation. Enrichment sections and in-depth footnotes form a springboard to more advanced material and provide additional background. Whether used for self-study, or as the basis of an accelerated introdu...

  17. Numerical aspects of drift kinetic turbulence: Ill-posedness, regularization and a priori estimates of sub-grid-scale terms

    KAUST Repository

    Samtaney, Ravi

    2012-01-01

    We present a numerical method based on an Eulerian approach to solve the Vlasov-Poisson system for 4D drift kinetic turbulence. Our numerical approach uses a conservative formulation with high-order (fourth and higher) evaluation of the numerical fluxes coupled with a fourth-order accurate Poisson solver. The fluxes are computed using a low-dissipation high-order upwind differencing method or a tuned high-resolution finite difference method with no numerical dissipation. Numerical results are presented for the case of imposed ion temperature and density gradients. Different forms of controlled regularization to achieve a well-posed system are used to obtain convergent resolved simulations. The regularization of the equations is achieved by means of a simple collisional model, by inclusion of an ad-hoc hyperviscosity or artificial viscosity term or by implicit dissipation in upwind schemes. Comparisons between the various methods and regularizations are presented. We apply a filtering formalism to the Vlasov equation and derive sub-grid-scale (SGS) terms analogous to the Reynolds stress terms in hydrodynamic turbulence. We present a priori quantifications of these SGS terms in resolved simulations of drift-kinetic turbulence by applying a sharp filter. © 2012 IOP Publishing Ltd.

  18. Numerical aspects of drift kinetic turbulence: ill-posedness, regularization and a priori estimates of sub-grid-scale terms

    International Nuclear Information System (INIS)

    Samtaney, Ravi

    2012-01-01

    We present a numerical method based on an Eulerian approach to solve the Vlasov-Poisson system for 4D drift kinetic turbulence. Our numerical approach uses a conservative formulation with high-order (fourth and higher) evaluation of the numerical fluxes coupled with a fourth-order accurate Poisson solver. The fluxes are computed using a low-dissipation high-order upwind differencing method or a tuned high-resolution finite difference method with no numerical dissipation. Numerical results are presented for the case of imposed ion temperature and density gradients. Different forms of controlled regularization to achieve a well-posed system are used to obtain convergent resolved simulations. The regularization of the equations is achieved by means of a simple collisional model, by inclusion of an ad-hoc hyperviscosity or artificial viscosity term or by implicit dissipation in upwind schemes. Comparisons between the various methods and regularizations are presented. We apply a filtering formalism to the Vlasov equation and derive sub-grid-scale (SGS) terms analogous to the Reynolds stress terms in hydrodynamic turbulence. We present a priori quantifications of these SGS terms in resolved simulations of drift-kinetic turbulence by applying a sharp filter.

  19. Experimental and numerical investigations of soil water balance at the hinterland of the Badain Jaran Desert for groundwater recharge estimation

    Science.gov (United States)

    Hou, Lizhu; Wang, Xu-Sheng; Hu, Bill X.; Shang, Jie; Wan, Li

    2016-09-01

    Quantification of groundwater recharge from precipitation in the huge sand dunes is an issue in accounting for regional water balance in the Badain Jaran Desert (BJD) where about 100 lakes exist between dunes. In this study, field observations were conducted on a sand dune near a large saline lake in the BJD to investigate soil water movement through a thick vadose zone for groundwater estimation. The hydraulic properties of the soils at the site were determined using in situ experiments and laboratory measurements. A HYDRUS-1D model was built up for simulating the coupling processes of vertical water-vapor movement and heat transport in the desert soil. The model was well calibrated and validated using the site measurements of the soil water and temperature at various depths. Then, the model was applied to simulate the vertical flow across a 3-m-depth soil during a 53-year period under variable climate conditions. The simulated flow rate at the depth is an approximate estimation of groundwater recharge from the precipitation in the desert. It was found that the annual groundwater recharge would be 11-30 mm during 1983-2012, while the annual precipitation varied from 68 to 172 mm in the same period. The recharge rates are significantly higher than those estimated from the previous studies using chemical information. The modeling results highlight the role of the local precipitation as an essential source of groundwater in the BJD.

  20. Preliminary Upper Estimate of Peak Currents in Transcranial Magnetic Stimulation at Distant Locations From a TMS Coil.

    Science.gov (United States)

    Makarov, Sergey N; Yanamadala, Janakinadh; Piazza, Matthew W; Helderman, Alex M; Thang, Niang S; Burnham, Edward H; Pascual-Leone, Alvaro

    2016-09-01

    Transcranial magnetic stimulation (TMS) is increasingly used as a diagnostic and therapeutic tool for numerous neuropsychiatric disorders. The use of TMS might cause whole-body exposure to undesired induced currents in patients and TMS operators. The aim of this study is to test and justify a simple analytical model known previously, which may be helpful as an upper estimate of eddy current density at a particular distant observation point for any body composition and any coil setup. We compare the analytical solution with comprehensive adaptive mesh refinement-based FEM simulations of a detailed full-body human model, two coil types, five coil positions, about 100 000 observation points, and two distinct pulse rise times; thus, providing a representative number of different datasets for comparison, while also using other numerical data. Our simulations reveal that, after a certain modification, the analytical model provides an upper estimate for the eddy current density at any location within the body. In particular, it overestimates the peak eddy currents at distant locations from a TMS coil by a factor of 10 on average. The simple analytical model tested in this study may be valuable as a rapid method to safely estimate levels of TMS currents at different locations within a human body. At present, safe limits of general exposure to TMS electric and magnetic fields are an open subject, including fetal exposure for pregnant women.

  1. The distance effect in numerical memory-updating tasks.

    Science.gov (United States)

    Lendínez, Cristina; Pelegrina, Santiago; Lechuga, Teresa

    2011-05-01

    Two experiments examined the role of numerical distance in updating numerical information in working memory. In the first experiment, participants had to memorize a new number only when it was smaller than a previously memorized number. In the second experiment, updating was based on an external signal, which removed the need to perform any numerical comparison. In both experiments, distance between the memorized number and the new one was manipulated. The results showed that smaller distances between the new and the old information led to shorter updating times. This graded facilitation suggests that the process by which information is substituted in the focus of attention involves maintaining the shared features between the new and the old number activated and selecting other new features to be activated. Thus, the updating cost may be related to amount of new features to be activated in the focus of attention.

  2. Estimating Soil Hydraulic Parameters using Gradient Based Approach

    Science.gov (United States)

    Rai, P. K.; Tripathi, S.

    2017-12-01

    The conventional way of estimating parameters of a differential equation is to minimize the error between the observations and their estimates. The estimates are produced from the forward solution (numerical or analytical) of the differential equation assuming a set of parameters. Parameter estimation using the conventional approach requires high computational cost, setting-up of initial and boundary conditions, and formation of difference equations in case the forward solution is obtained numerically. Gaussian process based approaches like Gaussian Process Ordinary Differential Equation (GPODE) and Adaptive Gradient Matching (AGM) have been developed to estimate the parameters of Ordinary Differential Equations without explicitly solving them. Claims have been made that these approaches can straightforwardly be extended to Partial Differential Equations; however, it has never been demonstrated. This study extends the AGM approach to PDEs and applies it for estimating parameters of Richards equation. Unlike the conventional approach, the AGM approach does not require setting-up of initial and boundary conditions explicitly, which is often difficult in real-world applications of Richards equation. The developed methodology was applied to synthetic soil moisture data. It was seen that the proposed methodology can estimate the soil hydraulic parameters correctly and can be a potential alternative to the conventional method.

  3. Wave Velocity Estimation in Heterogeneous Media

    KAUST Repository

    Asiri, Sharefa M.; Laleg-Kirati, Taous-Meriem

    2016-01-01

    In this paper, modulating functions-based method is proposed for estimating space-time dependent unknown velocity in the wave equation. The proposed method simplifies the identification problem into a system of linear algebraic equations. Numerical

  4. Numerical estimates of the evolution of quark and gluon populations inside QCD jets

    International Nuclear Information System (INIS)

    Garetto, M.

    1980-01-01

    The system of first-order differential equations for the probabilities of producing n_g gluons and n_q quarks in a single gluon or quark jet is solved numerically for a convenient choice of the parameters A, Ã, B. Relevant branching ratios as the evolution parameter Y increases are shown. The different behaviour of the distributions in the quark jet and in the gluon jet is discussed. (author)

  5. Error Estimation and Uncertainty Propagation in Computational Fluid Mechanics

    Science.gov (United States)

    Zhu, J. Z.; He, Guowei; Bushnell, Dennis M. (Technical Monitor)

    2002-01-01

    Numerical simulation has now become an integral part of engineering design process. Critical design decisions are routinely made based on the simulation results and conclusions. Verification and validation of the reliability of the numerical simulation is therefore vitally important in the engineering design processes. We propose to develop theories and methodologies that can automatically provide quantitative information about the reliability of the numerical simulation by estimating numerical approximation error, computational model induced errors and the uncertainties contained in the mathematical models so that the reliability of the numerical simulation can be verified and validated. We also propose to develop and implement methodologies and techniques that can control the error and uncertainty during the numerical simulation so that the reliability of the numerical simulation can be improved.

  6. Technogenic Rock Dumps Physical Properties' Prognosis via Results of the Structure Numerical Modeling

    Directory of Open Access Journals (Sweden)

    Markov Sergey

    2017-01-01

    Full Text Available Understanding the internal structure of technogenic rock dumps (gob dumps) is a necessary condition for assessing their use as filtration massifs for the treatment of mine wastewater. The internal structure of gob piles depends strongly on the dumping technology, which places restrictions on their use as filtration massifs. Numerical modelling of gob dumps allows their physical parameters, such as the filtration coefficient and density, to be estimated adequately. The results of numerical modelling of gob dumps are given in this article; in particular, the grain size distribution of selected fractions was examined as a function of dump height. It is shown that the filtration coefficient depends nonlinearly on the amounts of several rock fractions in the gob dump. The adequacy of the numerical model, both for the gob structure and for the dependence of the filtration coefficient on gob height, is confirmed by the agreement between calculated and measured filtration coefficient values. The results of this research can be applied to peripheral dumping technology.

  7. Water vapor estimation using digital terrestrial broadcasting waves

    Science.gov (United States)

    Kawamura, S.; Ohta, H.; Hanado, H.; Yamamoto, M. K.; Shiga, N.; Kido, K.; Yasuda, S.; Goto, T.; Ichikawa, R.; Amagai, J.; Imamura, K.; Fujieda, M.; Iwai, H.; Sugitani, S.; Iguchi, T.

    2017-03-01

    A method of estimating water vapor (propagation delay due to water vapor) using digital terrestrial broadcasting waves is proposed. Our target is to improve the accuracy of numerical weather forecasts for severe weather phenomena, such as localized heavy rainstorms in urban areas, through data assimilation. In this method, we estimate water vapor near the ground surface from the propagation delay of digital terrestrial broadcasting waves. A real-time delay measurement system based on a software-defined radio technique is developed and tested. The data obtained using digital terrestrial broadcasting waves show good agreement with those obtained by ground-based meteorological observation. The main features of this method are that it requires no transmitters (receiving only), that it is applicable wherever digital terrestrial broadcasting is available, and that it offers high time resolution. This study shows the possibility of estimating water vapor using digital terrestrial broadcasting waves. In the future, we will investigate the impact of these data on numerical weather forecasts through data assimilation. Developing a system that monitors water vapor near the ground surface with time and space resolutions of 30 s and several kilometers would improve the accuracy of the numerical weather forecast of localized severe weather phenomena.

  8. Accurate and quantitative polarization-sensitive OCT by unbiased birefringence estimator with noise-stochastic correction

    Science.gov (United States)

    Kasaragod, Deepa; Sugiyama, Satoshi; Ikuno, Yasushi; Alonso-Caneiro, David; Yamanari, Masahiro; Fukuda, Shinichi; Oshika, Tetsuro; Hong, Young-Joo; Li, En; Makita, Shuichi; Miura, Masahiro; Yasuno, Yoshiaki

    2016-03-01

    Polarization sensitive optical coherence tomography (PS-OCT) is a functional extension of OCT that contrasts the polarization properties of tissues. It has been applied to ophthalmology, cardiology, etc. Proper quantitative imaging is required for widespread clinical utility. However, the conventional method of averaging to improve the signal-to-noise ratio (SNR) and the contrast of the phase retardation (or birefringence) images introduces a noise bias offset from the true value. This bias reduces the effectiveness of birefringence contrast for a quantitative study. Although coherent averaging of Jones matrix tomography has been widely utilized and has improved the image quality, the fundamental limitation of the nonlinear dependency of phase retardation and birefringence on the SNR was not overcome. So the birefringence obtained by PS-OCT was still not accurate enough for quantitative imaging. The nonlinear effect of SNR on phase retardation and birefringence measurement was previously formulated in detail for Jones matrix OCT (JM-OCT) [1]. Based on this, we had developed a maximum a posteriori (MAP) estimator and quantitative birefringence imaging was demonstrated [2]. However, this first version of the estimator had a theoretical shortcoming: it did not take into account the stochastic nature of the SNR of the OCT signal. In this paper, we present an improved version of the MAP estimator which takes into account the stochastic property of SNR. This estimator uses a probability distribution function (PDF) of the true local retardation, which is proportional to birefringence, under a specific set of measurements of the birefringence and SNR. The PDF was pre-computed by a Monte-Carlo (MC) simulation based on the mathematical model of JM-OCT before the measurement. A comparison between this new MAP estimator, our previous MAP estimator [2], and the standard mean estimator is presented. The comparisons are performed both by numerical simulation and in vivo measurements of anterior and

  9. Economic impact of feeding a phenylalanine-restricted diet to adults with previously untreated phenylketonuria.

    Science.gov (United States)

    Brown, M C; Guest, J F

    1999-02-01

    The aim of the present study was to estimate the direct healthcare cost of managing adults with previously untreated phenylketonuria (PKU) for one year before any dietary restrictions and for the first year after a phenylalanine- (PHE-) restricted diet was introduced. The resource use and corresponding costs were estimated from medical records and interviews with health care professionals experienced in caring for adults with previously untreated PKU. The mean annual cost of caring for a client being fed an unrestricted diet was estimated to be £83,996. In the first year after introducing a PHE-restricted diet, the mean annual cost was reduced by £20,647 to £63,348 as a result of a reduction in nursing time, hospitalizations, outpatient clinic visits and medications. However, the economic benefit of the diet depended on whether the clients were previously high or low users of nursing care. Nursing time was the key cost-driver, accounting for 79% of the cost of managing high users and 31% of the management cost for low users. In contrast, the acquisition cost of a PHE-restricted diet accounted for up to 6% of the cost for managing high users and 15% of the management cost for low users. Sensitivity analyses showed that introducing a PHE-restricted diet reduces the annual cost of care, provided that annual nursing time was reduced by more than 8% or more than 5% of clients respond to the diet. The clients showed fewer negative behaviours when being fed a PHE-restricted diet, which may account for the observed reduction in nursing time needed to care for these clients. In conclusion, feeding a PHE-restricted diet to adults with previously untreated PKU leads to economic benefits to the UK's National Health Service and society in general.

  10. Numerical relativity

    International Nuclear Information System (INIS)

    Piran, T.

    1982-01-01

    There are many recent developments in numerical relativity, but there remain important unsolved theoretical and practical problems. The author reviews existing numerical approaches to solution of the exact Einstein equations. A framework for classification and comparison of different numerical schemes is presented. Recent numerical codes are compared using this framework. The discussion focuses on new developments and on currently open questions, excluding a review of numerical techniques. (Auth.)

  11. Numerical distribution functions of fractional unit root and cointegration tests

    DEFF Research Database (Denmark)

    MacKinnon, James G.; Nielsen, Morten Ørregaard

    We calculate numerically the asymptotic distribution functions of likelihood ratio tests for fractional unit roots and cointegration rank. Because these distributions depend on a real-valued parameter, b, which must be estimated, simple tabulation is not feasible. Partly due to the presence...

  12. Numerical investigation into the failure of a micropile retaining wall

    OpenAIRE

    Prat Catalán, Pere

    2017-01-01

    The paper presents a numerical investigation into the failure of a micropile wall that collapsed while the adjacent ground was being excavated. The main objectives are: to estimate the strength parameters of the ground; to perform a sensitivity analysis on the back slope height; and to obtain the shape and position of the failure surface. Because of the uncertainty in the original strength parameters, a simplified back-analysis using a range of cohesion/friction pairs has been used to estimate the most realis...

  13. Generating human reliability estimates using expert judgment. Volume 2. Appendices

    International Nuclear Information System (INIS)

    Comer, M.K.; Seaver, D.A.; Stillwell, W.G.; Gaddy, C.D.

    1984-11-01

    The US Nuclear Regulatory Commission is conducting a research program to determine the practicality, acceptability, and usefulness of several different methods for obtaining human reliability data and estimates that can be used in nuclear power plant probabilistic risk assessments (PRA). One method, investigated as part of this overall research program, uses expert judgment to generate human error probability (HEP) estimates and associated uncertainty bounds. The project described in this document evaluated two techniques for using expert judgment: paired comparisons and direct numerical estimation. Volume 2 provides detailed procedures for using the techniques, detailed descriptions of the analyses performed to evaluate the techniques, and HEP estimates generated as part of this project. The results of the evaluation indicate that techniques using expert judgment should be given strong consideration for use in developing HEP estimates. Judgments were shown to be consistent and to provide HEP estimates with a good degree of convergent validity. Of the two techniques tested, direct numerical estimation appears to be preferable in terms of ease of application and quality of results
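
    As an illustration of the direct numerical estimation idea (not the report's documented procedure), a minimal sketch of pooling expert HEP judgments in log space, with lognormal-style uncertainty bounds, might look as follows; the example probabilities are hypothetical.

      import numpy as np

      def pool_hep_estimates(expert_heps):
          """Pool direct numerical HEP estimates from several experts by averaging in
          log10 space and returning approximate 5th/95th percentile bounds.
          This is a generic aggregation sketch, not the procedure from the report."""
          logs = np.log10(np.asarray(expert_heps, dtype=float))
          mean_log, sd_log = logs.mean(), logs.std(ddof=1)
          hep = 10 ** mean_log                       # geometric mean of the estimates
          lower = 10 ** (mean_log - 1.645 * sd_log)  # ~5th percentile (lognormal assumption)
          upper = 10 ** (mean_log + 1.645 * sd_log)  # ~95th percentile
          return hep, (lower, upper)

      print(pool_hep_estimates([3e-3, 1e-2, 5e-3, 2e-2]))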

  14. Adaptive OFDM Radar Waveform Design for Improved Micro-Doppler Estimation

    Energy Technology Data Exchange (ETDEWEB)

    Sen, Satyabrata [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Center for Engineering Science Advanced Research, Computer Science and Mathematics Division

    2014-07-01

    Here we analyze the performance of a wideband orthogonal frequency division multiplexing (OFDM) signal in estimating the micro-Doppler frequency of a rotating target having multiple scattering centers. The use of a frequency-diverse OFDM signal enables us to independently analyze the micro-Doppler characteristics with respect to a set of orthogonal subcarrier frequencies. We characterize the accuracy of micro-Doppler frequency estimation by computing the Cramer-Rao bound (CRB) on the angular-velocity estimate of the target. Additionally, to improve the accuracy of the estimation procedure, we formulate and solve an optimization problem by minimizing the CRB on the angular-velocity estimate with respect to the OFDM spectral coefficients. We present several numerical examples to demonstrate the CRB variations with respect to the signal-to-noise ratios, number of temporal samples, and number of OFDM subcarriers. We also analysed numerically the improvement in estimation accuracy due to the adaptive waveform design. A grid-based maximum likelihood estimation technique is applied to evaluate the corresponding mean-squared error performance.

  15. Dynamic estimator for determining operating conditions in an internal combustion engine

    Science.gov (United States)

    Hellstrom, Erik; Stefanopoulou, Anna; Jiang, Li; Larimore, Jacob

    2016-01-05

    Methods and systems are provided for estimating engine performance information for a combustion cycle of an internal combustion engine. Estimated performance information for a previous combustion cycle is retrieved from memory. The estimated performance information includes an estimated value of at least one engine performance variable. Actuator settings applied to engine actuators are also received. The performance information for the current combustion cycle is then estimated based, at least in part, on the estimated performance information for the previous combustion cycle and the actuator settings applied during the previous combustion cycle. The estimated performance information for the current combustion cycle is then stored to the memory to be used in estimating performance information for a subsequent combustion cycle.

  16. Applicability of numerical model for seabed topography changes by tsunami flow. Analysis of formulae for sediment transport and simulations in a rectangular harbor

    International Nuclear Information System (INIS)

    Matsuyama, Masafumi

    2009-01-01

    Characteristics of formulae for bed-load transport and the pick-up rate in suspended transport are investigated in order to clarify their impact on seabed topography changes caused by tsunami flow. The impact of bed-load transport depends on the Froude number and the water surface slope. Bed-load transport causes deposition at the front face of the tsunami wave for Froude numbers below about 6/7. The pick-up rate has a more dominant influence on seabed topography changes than bed-load transport. 2-D numerical simulations with the formulae of Ikeno et al. were carried out to simulate topography changes around a harbor in a flume induced by tsunami flow. The results indicate that this numerical model is more applicable for estimating deposition and erosion than a numerical model using previous formulae, because the pick-up rate formula is valid over a wide range of sand diameters, from 0.08 mm to 0.2 mm. An upper limit on the suspended sediment concentration needs to be set to avoid excessively large concentrations in the numerical model. A comparison of real-scale numerical results with 1% and 5% upper limits clearly shows that the computed topography changes depend strongly on the upper limit value. The upper limit value is therefore one of the dominant factors in evaluating seabed topography changes with the 2-D numerical simulations using the formulae of Ikeno et al. at real scale. (author)

  17. Measured and estimated glomerular filtration rate. Numerous methods of measurements (Part I)

    Directory of Open Access Journals (Sweden)

    Jaime Pérez Loredo

    2017-04-01

    Equations applied for estimating GFR in population studies should be reconsidered, given their imperfection and the difficulty for clinicians who are not specialists on the subject to interpret the results.

  18. Advanced Numerical Model for Irradiated Concrete

    Energy Technology Data Exchange (ETDEWEB)

    Giorla, Alain B. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2015-03-01

    In this report, we establish a numerical model for concrete exposed to irradiation to address these three critical points. The model accounts for creep in the cement paste and its coupling with damage, temperature and relative humidity. The shift in failure mode with the loading rate is also properly represented. The numerical model for creep has been validated and calibrated against different experiments in the literature [Wittmann, 1970, Le Roy, 1995]. Results from a simplified model are shown to showcase the ability of numerical homogenization to simulate irradiation effects in concrete. In future works, the complete model will be applied to the analysis of the irradiation experiments of Elleuch et al. [1972] and Kelly et al. [1969]. This requires a careful examination of the experimental environmental conditions, as in both cases certain critical information is missing, including the relative humidity history. A sensitivity analysis will be conducted to provide lower and upper bounds of the concrete expansion under irradiation, and to check whether the scatter in the simulated results matches that found in the experiments. The numerical and experimental results will be compared in terms of expansion and loss of mechanical stiffness and strength. Both effects should be captured accordingly by the model to validate it. Once the model has been validated on these two experiments, it can be applied to simulate concrete from nuclear power plants. To do so, the materials used in these concretes must be as well characterized as possible. The main parameters required are the mechanical properties of each constituent in the concrete (aggregates, cement paste), namely the elastic modulus, the creep properties, the tensile and compressive strength, the thermal expansion coefficient, and the drying shrinkage. These can be either measured experimentally, estimated from the initial composition in the case of cement paste, or back-calculated from mechanical tests on concrete. If some

  19. Numerical investigations on pressurized AL-composite vessel response to hypervelocity impacts: Comparison between experimental works and a numerical code

    Directory of Open Access Journals (Sweden)

    Mespoulet Jérôme

    2015-01-01

    Full Text Available The response of pressurized composite-Al vessels to the hypervelocity impact of aluminum spheres has been numerically investigated to evaluate the influence of the initial pressure on the vulnerability of these vessels. The investigated tanks are carbon-fiber overwrapped prestressed Al vessels. The explored internal air pressure ranges from 1 bar to 300 bar, and the impact velocities are around 4400 m/s. Data obtained from experiments (X-ray radiographs, particle velocity measurements and post-mortem vessels) have been compared to numerical results given by LS-DYNA ALE-Lagrange-SPH fully coupled models. The simulations exhibit an underestimation in terms of debris cloud evolution and shock wave propagation in pressurized air, but the main modes of damage/rupture of the vessels given by the simulations are consistent with the post-mortem vessels recovered from the experiments. The first results of this numerical work are promising, and further simulation investigations with additional experimental data will be done to increase the reliability of the simulation model. The final aim of this combined work is to numerically explore a wide range of impact conditions (impact angle, projectile weight, impact velocity, initial pressure) that cannot be explored experimentally. These results will provide rules of thumb for the definition of an analytical vulnerability model for a given pressurized vessel.

  20. Numerical and Experimental Analysis of Aircraft Wing Subjected to Fatigue Loading

    Directory of Open Access Journals (Sweden)

    Hatem Rahim Wasmi

    2016-10-01

    Full Text Available This study deals with the numerical and experimental analysis of an aircraft wing subjected to fatigue loading. The wing is analyzed numerically using ANSYS 15.0 software and experimentally using loading programs applied to fatigue test specimens in the laboratory, in order to estimate the life of the wing material (aluminum alloy 7075-T651), to compare the numerical and experimental results, and to formulate an experimental mathematical model that gives safe life estimates for metals and the most common alloys used to build aircraft wings under given conditions. In the experimental work, 34 specimens of aluminum alloy 7075-T651 were tested using an alternating bending fatigue machine; 18 specimens were used to establish the S-N curve and endurance limit, and the remaining specimens were used for variable-amplitude tests represented by loading programs that reproduce actual flight conditions. Safe fatigue curves described by mathematical formulas were also obtained. The ANSYS results show convergence with the experimental results for the cumulative fatigue damage (D). A mathematical model is proposed to estimate the life; this model gives good results for actual loading programs. The Miner and Marsh rules are also applied to the specimens and compared with the proposed mathematical model for estimating the life of the wing material under actual flight loading conditions. The comparison shows that the proposed mathematical model can be relied upon rather than the Miner and Marsh theories, because it gives safe results that agree well with the experimental work.

  1. A Numerical Model for Trickle Bed Reactors

    Science.gov (United States)

    Propp, Richard M.; Colella, Phillip; Crutchfield, William Y.; Day, Marcus S.

    2000-12-01

    Trickle bed reactors are governed by equations of flow in porous media such as Darcy's law and the conservation of mass. Our numerical method for solving these equations is based on a total-velocity splitting, sequential formulation which leads to an implicit pressure equation and a semi-implicit mass conservation equation. We use high-resolution finite-difference methods to discretize these equations. Our solution scheme extends previous work in modeling porous media flows in two ways. First, we incorporate physical effects due to capillary pressure, a nonlinear inlet boundary condition, spatial porosity variations, and inertial effects on phase mobilities. In particular, capillary forces introduce a parabolic component into the recast evolution equation, and the inertial effects give rise to hyperbolic nonconvexity. Second, we introduce a modification of the slope-limiting algorithm to prevent our numerical method from producing spurious shocks. We present a numerical algorithm for accommodating these difficulties, show the algorithm is second-order accurate, and demonstrate its performance on a number of simplified problems relevant to trickle bed reactor modeling.
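
    The authors' modified slope-limiting algorithm is not described in the abstract; as a generic illustration of what a slope limiter does in a high-resolution scheme, here is a sketch of the classical minmod-limited slope computation for one-dimensional cell data.

      import numpy as np

      def minmod(a, b):
          """Classic minmod limiter: keep the smaller-magnitude one-sided slope when
          both slopes agree in sign, and return zero at local extrema."""
          return np.where(a * b > 0.0, np.where(np.abs(a) < np.abs(b), a, b), 0.0)

      def limited_slopes(u, dx):
          """Minmod-limited cell slopes for a 1-D array of cell averages."""
          du_left = (u[1:-1] - u[:-2]) / dx
          du_right = (u[2:] - u[1:-1]) / dx
          slopes = np.zeros_like(u)
          slopes[1:-1] = minmod(du_left, du_right)
          return slopes

      u = np.array([0.0, 1.0, 2.0, 4.0, 4.0])
      print(limited_slopes(u, dx=1.0))   # [0. 1. 1. 0. 0.]: the plateau edge is limited to zero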

  2. Numerical Investigation of Pressure Losses in Axisymmetric Sudden Expansion with a Chamfer

    Energy Technology Data Exchange (ETDEWEB)

    Bae, Youngmin; Kim, Youngin; Kim, Keung Koo [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2014-05-15

    In this paper, the pressure losses through axisymmetric sudden expansions with a chamfer are analyzed by means of numerical simulation, with an emphasis on the effect of the Reynolds number. In this study, we investigate numerically the turbulent flow in axisymmetric sudden expansions having a slight chamfer on the edge. With the aim of investigating the impact of Reynolds number on the expansion losses in a time-averaged sense, an extensive set of simulations is carried out. On the basis of numerical results, we also propose a general correlation to estimate the local loss coefficient in sudden expansions with a chamfer.
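
    The paper's chamfer- and Reynolds-number-dependent correlation is not reproduced in the abstract; the classical sharp-edged (Borda-Carnot) loss coefficient below is the usual baseline against which such corrections are expressed, and is shown only as a reference calculation with hypothetical numbers.

      def borda_carnot_loss_coefficient(d_small, d_large):
          """Loss coefficient K (based on the upstream velocity head) for a sharp-edged
          sudden expansion: K = (1 - A1/A2)**2. Chamfer and Reynolds-number effects,
          which the paper correlates, are not included in this classical estimate."""
          area_ratio = (d_small / d_large) ** 2
          return (1.0 - area_ratio) ** 2

      # Pressure loss for water flowing from a 50 mm into a 100 mm pipe at 2 m/s
      rho, v1 = 1000.0, 2.0
      K = borda_carnot_loss_coefficient(0.05, 0.10)
      print(K, K * 0.5 * rho * v1 ** 2)   # K = 0.5625, loss = 1125 Pa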

  3. Numerical Investigation of Pressure Losses in Axisymmetric Sudden Expansion with a Chamfer

    International Nuclear Information System (INIS)

    Bae, Youngmin; Kim, Youngin; Kim, Keung Koo

    2014-01-01

    In this paper, the pressure losses through axisymmetric sudden expansions with a chamfer are analyzed by means of numerical simulation, with an emphasis on the effect of the Reynolds number. In this study, we investigate numerically the turbulent flow in axisymmetric sudden expansions having a slight chamfer on the edge. With the aim of investigating the impact of Reynolds number on the expansion losses in a time-averaged sense, an extensive set of simulations is carried out. On the basis of numerical results, we also propose a general correlation to estimate the local loss coefficient in sudden expansions with a chamfer

  4. How to prevent type 2 diabetes in women with previous gestational diabetes?

    DEFF Research Database (Denmark)

    Pedersen, Anne Louise Winkler; Terkildsen Maindal, Helle; Juul, Lise

    2017-01-01

    OBJECTIVES: Women with previous gestational diabetes (GDM) have a seven times higher risk of developing type 2 diabetes (T2DM) than women without. We aimed to review the evidence of effective behavioural interventions seeking to prevent T2DM in this high-risk group. METHODS: A systematic review of RCTs in several databases in March 2016. RESULTS: No specific intervention or intervention components were found superior. The pooled effect on diabetes incidence (four trials) was estimated to: -5.02 per 100 (95% CI: -9.24; -0.80). CONCLUSIONS: This study indicates that intervention is superior to no intervention in prevention of T2DM among women with previous GDM.

  5. Rigid inclusions-Comparison between analytical and numerical methods

    International Nuclear Information System (INIS)

    Gomez Perez, R.; Melentijevic, S.

    2014-01-01

    This paper compares different analytical methods for the analysis of rigid inclusions with finite element modelling. First, the load transfer in the distribution layer is analyzed for different layer thicknesses and inclusion grids in order to define the range of agreement between the results obtained by analytical and numerical methods. The interaction between the soft soil and the inclusion in the estimation of settlements is studied as well: settlements obtained analytically and numerically are compared for different stiffnesses of the soft soil. The influence of the soft soil modulus of elasticity on the depth of the neutral point was also studied by finite elements. This depth is of great importance for defining the total length of the rigid inclusion. (Author)

  6. Numerical soliton-like solutions of the potential Kadomtsev-Petviashvili equation by the decomposition method

    International Nuclear Information System (INIS)

    Kaya, Dogan; El-Sayed, Salah M.

    2003-01-01

    In this Letter we present the Adomian decomposition method (ADM) for obtaining numerical soliton-like solutions of the potential Kadomtsev-Petviashvili (PKP) equation. We prove the convergence of the ADM and obtain exact and numerical solitary-wave solutions of the PKP equation for certain initial conditions. The ADM yields an analytic approximate solution with a fast convergence rate and high accuracy, as shown in previous works. The numerical solutions are compared with the known analytical solutions.
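
    A compact way to see the ADM recursion (series decomposition of the solution, Adomian polynomials for the nonlinearity, and term-by-term inversion of the linear operator) is to apply it to a scalar toy problem with a known exact solution. The sketch below does this for u' = -u**2 with u(0) = 1, whose exact solution is 1/(1+t); it does not treat the PKP equation itself.

      import sympy as sp

      t, lam = sp.symbols("t lambda")

      n_terms = 6
      u = [sp.Integer(1)]                       # u0 comes from the initial condition
      for n in range(n_terms - 1):
          # Adomian polynomial A_n of the nonlinearity N(u) = u**2
          phi = sum(lam ** k * u[k] for k in range(n + 1))
          A_n = sp.diff(phi ** 2, lam, n).subs(lam, 0) / sp.factorial(n)
          # Recursion u_{n+1} = -integral_0^t A_n dt, i.e. inversion of d/dt
          u.append(sp.integrate(-A_n, (t, 0, t)))

      print(sp.expand(sum(u)))                  # 1 - t + t**2 - t**3 + t**4 - t**5
      print(sp.series(1 / (1 + t), t, 0, 6))    # matches the Taylor series of the exact solution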

  7. Numerical simulation of electro-osmotic consolidation coupling non-linear variation of soil parameters

    Science.gov (United States)

    Wu, Hui; Hu, Liming; Wen, Qingbo

    2017-06-01

    Electro-osmotic consolidation is an effective method for soft ground improvement. A main limitation of previous numerical models of this technique is that they ignore the non-linear variation of soil parameters. In the present study, a multi-field numerical model is developed that takes into account the non-linear variation of soil parameters during the electro-osmotic consolidation process. Numerical simulations of an axisymmetric model indicate that the non-linear variation of soil parameters has a remarkable impact on the development of the excess pore water pressure and the degree of consolidation. A field experiment with complex geometry, boundary conditions, electrode configuration and voltage application was further simulated with the developed numerical model. The comparison between field and numerical data indicates that the numerical model coupling the non-linear variation of soil parameters gives more reasonable results. The developed numerical model is capable of analyzing engineering cases with complex operating conditions.

  8. Transportation package design using numerical optimization

    International Nuclear Information System (INIS)

    Harding, D.C.; Witkowski, W.R.

    1992-01-01

    The design of structures and engineering systems has always been an iterative process whose complexity was dependent upon the boundary conditions, constraints and available analytical tools. Transportation packaging design is no exception with structural, thermal and radiation shielding constraints based on regulatory hypothetical accident conditions. Transportation packaging design is often accomplished by a group of specialists, each designing a single component based on one or more simple criteria, pooling results with the group, evaluating the "pooled" design, and then reiterating the entire process until a satisfactory design is reached. The manual iterative methods used by the designer/analyst can be summarized in the following steps: design the part, analyze the part, interpret the analysis results, modify the part, and re-analyze the part. The inefficiency of this design practice and the frequently conservative result suggests the need for a more structured design methodology, which can simultaneously consider all of the design constraints. Numerical optimization is a structured design methodology whose maturity in development has allowed it to become a primary design tool in many industries. The purpose of this overview is twofold: first, to outline the theory and basic elements of numerical optimization; and second, to show how numerical optimization can be applied to the transportation packaging industry and used to increase efficiency and safety of radioactive and hazardous material transportation packages. A more extensive review of numerical optimization and its applications to radioactive material transportation package design was performed previously by the authors (Witkowski and Harding 1992). A proof-of-concept Type B package design is also presented as a simplified example of potential improvements achievable using numerical optimization in the design process.

  9. Numerical Simulation of Steady Supercavitating Flows

    OpenAIRE

    Ali Jafarian; Ahmad-Reza Pishevar

    2016-01-01

    In this research, the supercavitation phenomenon in compressible liquid flows is simulated. The one-fluid method based on a new exact two-phase Riemann solver is used for the modeling. Cavitation is considered as an isothermal process, and an equation of state consistent with the physical behavior of water is used. High-speed flows of water over a cylinder and over a projectile are simulated, and the results are compared with previous numerical and experimental results. The cavitation bubble p...

  10. Uncertainty estimation and ensemble forecasting with a chemistry-transport model - application to the numerical simulation of air quality; Estimation de l'incertitude et prevision d'ensemble avec un modele de chimie transport - Application a la simulation numerique de la qualite de l'air

    Energy Technology Data Exchange (ETDEWEB)

    Mallet, V.

    2005-12-15

    The aim of this work is the evaluation of the quality of a chemistry-transport model, not by a classical comparison with observations, but by the estimation of its uncertainties due to the input data, to the model formulation and to the numerical approximations. The study of these three sources of uncertainty is carried out with Monte Carlo simulations, with multi-model simulations and with comparisons between numerical schemes, respectively. A high uncertainty is found for ozone concentrations. To overcome the uncertainty-related limitations, one strategy is to use ensemble forecasting. By combining several models (up to 48) on the basis of past observations, forecasts can be significantly improved. This work also led to the development of an innovative modeling system, named Polyphemus. (J.S.)
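
    One elementary way to combine several models on the basis of past observations is to choose weights that minimize the past squared forecast error; the sketch below illustrates this with hypothetical data and is far simpler than the ensemble methods actually explored with Polyphemus.

      import numpy as np

      def ensemble_weights(past_forecasts, past_obs):
          """Least-squares weights for combining model forecasts.
          past_forecasts: (n_times, n_models) past predictions; past_obs: (n_times,) observations."""
          w, *_ = np.linalg.lstsq(past_forecasts, past_obs, rcond=None)
          return w

      rng = np.random.default_rng(1)
      truth = 40 + 10 * np.sin(np.linspace(0, 6, 200))                   # synthetic ozone signal
      models = np.column_stack([truth + rng.normal(0, 5, 200) + b for b in (3, -4, 8)])
      w = ensemble_weights(models[:150], truth[:150])                    # learn weights on the past
      combined = models[150:] @ w                                        # apply them to new forecasts
      print(w, np.sqrt(np.mean((combined - truth[150:]) ** 2)))          # RMSE of the combination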

  11. Experimental and numerical analyses of different extended surfaces

    International Nuclear Information System (INIS)

    Diani, A; Mancin, S; Zilio, C; Rossetto, L

    2012-01-01

    Air is a cheap and safe fluid, widely used in electronic, aerospace and air conditioning applications. Because of its poor heat transfer properties, it always flows through extended surfaces, such as finned surfaces, to enhance the convective heat transfer. In this paper, experimental results are reviewed and numerical studies of forced air convection through extended surfaces are presented. The thermal and hydraulic behaviour of a reference trapezoidal finned surface, experimentally evaluated by the present authors in an open-circuit wind tunnel, has been compared with numerical simulations carried out using the commercial CFD software COMSOL Multiphysics. Once the model had been validated, the numerical simulations were extended to other rectangular finned configurations in order to study the effects of the fin thickness, fin pitch and fin height on the thermo-hydraulic behaviour of the extended surfaces. Moreover, several pin fin surfaces have been simulated in the same range of operating conditions previously analyzed. Numerical results for heat transfer and pressure drop, for both plain finned and pin fin surfaces, have been compared with empirical correlations from the open literature, and more accurate equations have been developed, proposed, and validated.

  12. A numerical study on the heat and mass transfer of a micro heat pipe with a phase-change interface analysis

    Science.gov (United States)

    Jung, Eui Guk; Boo, Joon Hong

    2017-11-01

    A numerical study was conducted to analyze the heat and mass transfer in a micro heat pipe, with the thin-film theory applied to the phase change at the liquid-vapor interface. The model described the liquid and vapor distributions, phase change rate, wall temperature, pressure drop, and heat transfer rate in a micro heat pipe under normal operation. The reference cross-sectional geometry of the micro heat pipe was triangular, but the model could be applied to various geometries by utilizing a hydraulic diameter. In previous studies, to predict the thermal performance of a micro heat pipe, the phase change interface has usually been modeled using the Young-Laplace capillary equation, and the phase-change ratio has been estimated using terms such as vapor pressure, liquid pressure, and capillary pressure. In this study, a thermal numerical model for a micro heat pipe was developed using an augmented Young-Laplace equation. Consequently, terms that have been commonly excluded in previous studies, including the disjoining pressure, were included. The validity of the model was verified using the experimental results for the wall temperature of the micro heat pipe, wherein the relative error bound was less than 1 °C and 6 °C for the operating and condenser temperatures, respectively. The influence of the disjoining pressure on the heat transfer was analyzed and discussed for various operating temperatures and tilt angles.

  13. Improved Estimates of Thermodynamic Parameters

    Science.gov (United States)

    Lawson, D. D.

    1982-01-01

    Techniques refined for estimating heat of vaporization and other parameters from molecular structure. Using parabolic equation with three adjustable parameters, heat of vaporization can be used to estimate boiling point, and vice versa. Boiling points and vapor pressures for some nonpolar liquids were estimated by improved method and compared with previously reported values. Technique for estimating thermodynamic parameters should make it easier for engineers to choose among candidate heat-exchange fluids for thermochemical cycles.

  14. Experimental and numerical investigations of aerodynamic loads and 3D flow over non-rotating MEXICO blades

    NARCIS (Netherlands)

    Zhang, Y.; Gillebaart, T.; van Zuijlen, A.H.; van Bussel, G.J.W.; Bijl, H.

    2017-01-01

    This paper presents the experimental and numerical study on MEXICO wind turbine blades. Previous work by other researchers shows that large deviations exist in the loads comparison between numerical predictions and experimental data for the rotating MEXICO wind turbine. To reduce complexities and

  15. Estimating the effect of current, previous and never use of drugs in studies based on prescription registries

    DEFF Research Database (Denmark)

    Nielsen, Lars Hougaard; Løkkegaard, Ellen; Andreasen, Anne Helms

    2009-01-01

    PURPOSE: Many studies which investigate the effect of drugs categorize the exposure variable into never, current, and previous use of the study drug. When prescription registries are used to make this categorization, the exposure variable possibly gets misclassified since the registries do not carry any information on the time of discontinuation of treatment. In this study, we investigated the amount of misclassification of exposure (never, current, previous use) to hormone therapy (HT) when the exposure variable was based on prescription data. Furthermore, we evaluated the significance of this misclassification for analysing the risk of breast cancer. MATERIALS AND METHODS: Prescription data were obtained from the Danish Registry of Medicinal Products Statistics and we applied various methods to approximate treatment episodes. We analysed the duration of HT episodes to study the ability to identify...
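
    Because the registries record redemptions but not discontinuation, such studies must assume that each redeemed prescription covers some fixed supply window; the sketch below classifies exposure at an index date under such an assumed window. The window length and the dates are hypothetical, and the choice of window is precisely what drives the misclassification investigated here.

      from datetime import date, timedelta

      def classify_exposure(redemption_dates, index_date, assumed_supply_days=90):
          """Classify exposure at index_date as 'never', 'current' or 'previous' use,
          assuming each redeemed prescription covers `assumed_supply_days` of treatment
          (the registry itself carries no discontinuation date, hence the assumption)."""
          past = [d for d in redemption_dates if d <= index_date]
          if not past:
              return "never"
          last_covered = max(past) + timedelta(days=assumed_supply_days)
          return "current" if index_date <= last_covered else "previous"

      redemptions = [date(2007, 1, 10), date(2007, 4, 2)]
      print(classify_exposure(redemptions, date(2007, 5, 1)))   # current
      print(classify_exposure(redemptions, date(2008, 5, 1)))   # previous
      print(classify_exposure([], date(2008, 5, 1)))            # never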

  16. A posteriori error estimates in voice source recovery

    Science.gov (United States)

    Leonov, A. S.; Sorokin, V. N.

    2017-12-01

    The inverse problem of voice source pulse recovery from a segment of a speech signal is under consideration. A special mathematical model relating these quantities is used for the solution. A variational method of solving the inverse problem of voice source recovery is proposed for a new parametric class of sources, namely piecewise-linear sources (PWL-sources). A technique for a posteriori numerical error estimation of the obtained solutions is also presented. A computer study of the adequacy of the adopted speech production model with PWL-sources is performed by solving the inverse problem for various types of voice signals, together with a corresponding study of the a posteriori error estimates. Numerical experiments with speech signals show satisfactory properties of the proposed a posteriori error estimates, which represent upper bounds on the possible errors in solving the inverse problem. The estimate of the most probable error in determining the source-pulse shapes is about 7-8% for the investigated speech material. It is noted that a posteriori error estimates can be used as a quality criterion for the obtained voice source pulses in application to speaker recognition.

  17. A novel inverse numerical modeling method for the estimation of water and salt mass transfer coefficients during ultrasonic assisted-osmotic dehydration of cucumber cubes.

    Science.gov (United States)

    Kiani, Hosein; Karimi, Farzaneh; Labbafi, Mohsen; Fathi, Morteza

    2018-06-01

    The objective of this paper was to study the moisture and salt diffusivity during ultrasonic assisted-osmotic dehydration of cucumbers. Experimental measurements of moisture and salt concentration versus time were carried out and an inverse numerical method was performed by coupling a CFD package (OpenFOAM) with a parameter estimation software (DAKOTA) to determine mass transfer coefficients. A good agreement between experimental and numerical results was observed. Mass transfer coefficients ranged from 3.5 × 10^-9 to 7 × 10^-9 m/s for water and from 4.8 × 10^-9 to 7.4 × 10^-9 m/s for salt at different conditions (diffusion coefficients of around 3.5 × 10^-12 to 11.5 × 10^-12 m^2/s for water and 5 × 10^-12 to 12 × 10^-12 m^2/s for salt). Ultrasound irradiation could increase the mass transfer coefficient. The values obtained by this method were closer to the actual data. The inverse simulation method can be an accurate technique to study the mass transfer phenomena during food processing. Copyright © 2018 Elsevier B.V. All rights reserved.

  18. Estimation of the thermal properties in alloys as an inverse problem

    International Nuclear Information System (INIS)

    Zueco, J.; Alhama, F.

    2005-01-01

    This paper provides an efficient numerical method for estimating the thermal conductivity and heat capacity of alloys as a function of temperature, starting from temperature measurements (including errors) in heating and cooling processes. The proposed procedure is a modification of the known function estimation technique, typical of the inverse problem field, in conjunction with the network simulation method (already verified in many non-linear problems) as the numerical tool. The estimation requires only a single measurement point. The methodology is applied to determine these thermal properties in alloys within temperature ranges where allotropic changes take place. These changes are characterized by sharp temperature dependencies. (Author) 13 refs

  19. Parameter estimation for stiff deterministic dynamical systems via ensemble Kalman filter

    International Nuclear Information System (INIS)

    Arnold, Andrea; Calvetti, Daniela; Somersalo, Erkki

    2014-01-01

    A commonly encountered problem in numerous areas of applications is to estimate the unknown coefficients of a dynamical system from direct or indirect observations at discrete times of some of the components of the state vector. A related problem is to estimate unobserved components of the state. An egregious example of such a problem is provided by metabolic models, in which the numerous model parameters and the concentrations of the metabolites in tissue are to be estimated from concentration data in the blood. A popular method for addressing similar questions in stochastic and turbulent dynamics is the ensemble Kalman filter (EnKF), a particle-based filtering method that generalizes classical Kalman filtering. In this work, we adapt the EnKF algorithm for deterministic systems in which the numerical approximation error is interpreted as a stochastic drift with variance based on classical error estimates of numerical integrators. This approach, which is particularly suitable for stiff systems where the stiffness may depend on the parameters, allows us to effectively exploit the parallel nature of particle methods. Moreover, we demonstrate how spatial prior information about the state vector, which helps the stability of the computed solution, can be incorporated into the filter. The viability of the approach is shown by computed examples, including a metabolic system modeling an ischemic episode in skeletal muscle, with a high number of unknown parameters. (paper)
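
    A minimal sketch of the parameter-estimation idea, augmenting the state with the unknown parameter and applying the standard EnKF update, is given below for a scalar decay equation. The paper's treatment of the integrator-error variance for stiff systems is not reproduced; the small model-error term and all numbers here are assumptions for illustration.

      import numpy as np

      rng = np.random.default_rng(2)

      # Truth: dx/dt = -theta * x with unknown theta; noisy observations of x
      theta_true, x0, dt, n_steps, obs_noise = 0.8, 5.0, 0.1, 40, 0.05
      truth = x0 * np.exp(-theta_true * dt * np.arange(1, n_steps + 1))
      obs = truth + rng.normal(0, obs_noise, n_steps)

      # Ensemble of augmented states [x, theta]
      n_ens = 100
      ens = np.column_stack([np.full(n_ens, x0), rng.uniform(0.1, 2.0, n_ens)])

      for y in obs:
          # Forecast: one explicit Euler step per member, plus a small model-error term
          ens[:, 0] += -ens[:, 1] * ens[:, 0] * dt + rng.normal(0, 1e-3, n_ens)
          # Analysis: EnKF update with perturbed observations, observation operator H = [1, 0]
          pert_obs = y + rng.normal(0, obs_noise, n_ens)
          anomalies = ens - ens.mean(axis=0)
          P = anomalies.T @ anomalies / (n_ens - 1)
          K = P[:, 0] / (P[0, 0] + obs_noise ** 2)          # Kalman gain, shape (2,)
          ens += np.outer(pert_obs - ens[:, 0], K)

      print(ens[:, 1].mean())   # posterior mean of theta, which should approach 0.8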

  20. Load estimation from planar PIV measurement in vortex dominated flows

    Science.gov (United States)

    McClure, Jeffrey; Yarusevych, Serhiy

    2017-11-01

    Control volume-based loading estimates are employed on experimental and synthetic numerical planar Particle Image Velocimetry (PIV) data of a stationary cylinder and a cylinder undergoing one degree-of-freedom (1DOF) Vortex Induced Vibration (VIV). The results reveal the necessity of including out of plane terms, identified from a general formulation of the control volume momentum balance, when evaluating loads from planar measurements in three-dimensional flows. Reynolds stresses from out of plane fluctuations are shown to be significant for both instantaneous and mean force estimates when the control volume encompasses vortex dominated regions. For planar measurement, invoking a divergence-free assumption allows accurate estimation of half the identified terms. Towards evaluating the fidelity of PIV-based loading estimates for obtaining the forcing function unobtrusively in VIV experiments, the accuracy of the control volume-based loading methodology is evaluated using the numerical data with synthetically generated experimental PIV error, and a comparison is made between experimental PIV-based estimates and simultaneous force balance measurements.

  1. Numerical relativity

    CERN Document Server

    Shibata, Masaru

    2016-01-01

    This book is composed of two parts. The first part describes the basics of numerical relativity, that is, the formulations and methods for solving Einstein's equation and the general relativistic matter field equations. This part will be helpful for beginners in numerical relativity who would like to understand its content and background. The second part focuses on applications of numerical relativity. A wide variety of scientific numerical results are introduced, focusing in particular on the merger of binary neutron stars and black holes.

  2. Numerical analysis

    CERN Document Server

    Khabaza, I M

    1960-01-01

    Numerical Analysis is an elementary introduction to numerical analysis, its applications, limitations, and pitfalls. Methods suitable for digital computers are emphasized, but some desk computations are also described. Topics covered range from the use of digital computers in numerical work to errors in computations using desk machines, finite difference methods, and numerical solution of ordinary differential equations. This book is comprised of eight chapters and begins with an overview of the importance of digital computers in numerical analysis, followed by a discussion on errors in comput

  3. Application of Stochastic Unsaturated Flow Theory, Numerical Simulations, and Comparisons to Field Observations

    DEFF Research Database (Denmark)

    Jensen, Karsten Høgh; Mantoglou, Aristotelis

    1992-01-01

    unsaturated flow equation representing the mean system behavior is solved using a finite difference numerical solution technique. The effective parameters are evaluated from the stochastic theory formulas before entering them into the numerical solution for each iteration. The stochastic model is applied...... seems to offer a rational framework for modeling large-scale unsaturated flow and estimating areal averages of soil-hydrological processes in spatially variable soils....

  4. Operational Modal Analysis based Stress Estimation in Friction Systems

    DEFF Research Database (Denmark)

    Tarpø, Marius; Friis, Tobias; Nabuco, Bruna

    this assumption. In this paper, the precision of estimating the strain response of a nonlinear system is investigated using the operational response of numerical simulations. Local nonlinearities are introduced by adding friction to the test specimen and this paper finds that this approach of strain estimation...

  5. Estimation of Conditional Quantile using Neural Networks

    DEFF Research Database (Denmark)

    Kulczycki, P.; Schiøler, Henrik

    1999-01-01

    The problem of estimating conditional quantiles using neural networks is investigated here. A basic structure is developed using the methodology of kernel estimation, and a theory guaranteeing consistency on a mild set of assumptions is provided. The constructed structure constitutes a basis for the design of a variety of different neural networks, some of which are considered in detail. The task of estimating conditional quantiles is related to Bayes point estimation whereby a broad range of applications within engineering, economics and management can be suggested. Numerical results illustrating the capabilities of the elaborated neural network are also given.
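
    The kernel-based neural structures themselves are beyond a short sketch, but the ingredient that makes conditional quantile estimation work is the pinball (check) loss; the sketch below fits a linear conditional quantile by subgradient descent on that loss, using synthetic heteroscedastic data.

      import numpy as np

      def pinball_loss(y, y_hat, tau):
          """Check (pinball) loss, whose minimiser is the tau-quantile."""
          e = y - y_hat
          return np.mean(np.maximum(tau * e, (tau - 1.0) * e))

      def fit_linear_quantile(x, y, tau, lr=0.05, epochs=2000):
          """Fit y_hat = a*x + b for the conditional tau-quantile by subgradient descent."""
          a, b = 0.0, 0.0
          for _ in range(epochs):
              e = y - (a * x + b)
              grad = np.where(e > 0, -tau, 1.0 - tau)   # subgradient of the loss w.r.t. y_hat
              a -= lr * np.mean(grad * x)
              b -= lr * np.mean(grad)
          return a, b

      rng = np.random.default_rng(3)
      x = rng.uniform(0, 2, 500)
      y = 1.0 + 2.0 * x + rng.normal(0, 0.5 + 0.5 * x)    # heteroscedastic noise
      a90, b90 = fit_linear_quantile(x, y, tau=0.9)
      print(a90, b90, pinball_loss(y, a90 * x + b90, 0.9))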

  6. Bayesian estimates of linkage disequilibrium

    Directory of Open Access Journals (Sweden)

    Abad-Grau María M

    2007-06-01

    Full Text Available Abstract Background The maximum likelihood estimator of D' – a standard measure of linkage disequilibrium – is biased toward disequilibrium, and the bias is particularly evident in small samples and rare haplotypes. Results This paper proposes a Bayesian estimation of D' to address this problem. The reduction of the bias is achieved by using a prior distribution on the pair-wise associations between single nucleotide polymorphisms (SNPs) that increases the likelihood of equilibrium with increasing physical distances between pairs of SNPs. We show how to compute the Bayesian estimate using a stochastic estimation based on MCMC methods, and also propose a numerical approximation to the Bayesian estimates that can be used to estimate patterns of LD in large datasets of SNPs. Conclusion Our Bayesian estimator of D' corrects the bias toward disequilibrium that affects the maximum likelihood estimator. A consequence of this feature is a more objective view about the extent of linkage disequilibrium in the human genome, and a more realistic number of tagging SNPs to fully exploit the power of genome wide association studies.
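
    The MCMC machinery is not reproduced here, but the quantity being estimated can be: the sketch below computes Lewontin's D' from a two-SNP haplotype frequency and the two allele frequencies, which is the statistic whose maximum likelihood estimate the Bayesian approach corrects.

      def d_prime(p_ab, p_a, p_b):
          """Lewontin's D' from the haplotype frequency p_AB and allele frequencies p_A, p_B."""
          d = p_ab - p_a * p_b
          if d >= 0:
              d_max = min(p_a * (1 - p_b), (1 - p_a) * p_b)
          else:
              d_max = min(p_a * p_b, (1 - p_a) * (1 - p_b))
          return d / d_max if d_max > 0 else 0.0

      # Example: haplotype AB observed at 15%, with allele frequencies 30% and 40%
      print(d_prime(0.15, 0.30, 0.40))   # about 0.167, i.e. weak positive LD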

  7. Conservative numerical methods for solitary wave interactions

    Energy Technology Data Exchange (ETDEWEB)

    Duran, A; Lopez-Marcos, M A [Departamento de Matematica Aplicada y Computacion, Facultad de Ciencias, Universidad de Valladolid, Paseo del Prado de la Magdalena s/n, 47005 Valladolid (Spain)

    2003-07-18

    The purpose of this paper is to show the advantages that represent the use of numerical methods that preserve invariant quantities in the study of solitary wave interactions for the regularized long wave equation. It is shown that the so-called conservative methods are more appropriate to study the phenomenon and provide a dynamic point of view that allows us to estimate the changes in the parameters of the solitary waves after the collision.

  8. The Recidivism Patterns of Previously Deported Aliens Released from a Local Jail: Are They High-Risk Offenders?

    Science.gov (United States)

    Hickman, Laura J.; Suttorp, Marika J.

    2010-01-01

    Previously deported aliens are a group about which numerous claims are made but very few facts are known. Using data on male deportable aliens released from a local jail, the study sought to test the ubiquitous claim that they pose a high risk of recidivism. Using multiple measures of recidivism and propensity score weighting to account for…

  9. Hartree-Fock-Bogoliubov model: a theoretical and numerical perspective

    International Nuclear Information System (INIS)

    Paul, S.

    2012-01-01

    This work is devoted to the theoretical and numerical study of Hartree-Fock-Bogoliubov (HFB) theory for attractive quantum systems, which is one of the main methods in nuclear physics. We first present the model and its main properties, and then explain how to obtain numerical solutions. We prove some convergence results, in particular for the simple fixed point algorithm (sometimes called Roothaan). We show that it either converges or oscillates between two states, neither of which is a solution. This generalizes to the HFB case previous results of Cances and Le Bris for the simpler Hartree-Fock model in the repulsive case. Following these authors, we also propose a relaxed constraint algorithm for which convergence is guaranteed. In the last part of the thesis, we illustrate the behavior of these algorithms by some numerical experiments. We first consider a system where the particles only interact through the Newton potential. Our numerical results show that the pairing matrix never vanishes, a fact that has not yet been proved rigorously. We then study a very simplified model for protons and neutrons in a nucleus. (author)

  10. Estimating linear effects in ANOVA designs: the easy way.

    Science.gov (United States)

    Pinhas, Michal; Tzelgov, Joseph; Ganor-Stern, Dana

    2012-09-01

    Research in cognitive science has documented numerous phenomena that are approximated by linear relationships. In the domain of numerical cognition, the use of linear regression for estimating linear effects (e.g., distance and SNARC effects) became common following Fias, Brysbaert, Geypens, and d'Ydewalle's (1996) study on the SNARC effect. While their work has become the model for analyzing linear effects in the field, it requires statistical analysis of individual participants and does not provide measures of the proportions of variability accounted for (cf. Lorch & Myers, 1990). In the present methodological note, using both the distance and SNARC effects as examples, we demonstrate how linear effects can be estimated in a simple way within the framework of repeated measures analysis of variance. This method allows for estimating effect sizes in terms of both slope and proportions of variability accounted for. Finally, we show that our method can easily be extended to estimate linear interaction effects, not just linear effects calculated as main effects.
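
    In the same spirit (though not necessarily the authors' exact computation), a linear effect can be estimated by computing a slope per participant across the factor levels, testing the slopes against zero, and reporting how much of the condition variability the linear trend captures; the response-time data below are simulated.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(4)
      levels = np.array([1.0, 2.0, 3.0, 4.0])            # e.g. numerical distance levels
      n_subj = 20
      # rt[i, j]: mean response time of participant i at level j (simulated)
      rt = 600 - 15 * levels + rng.normal(0, 20, size=(n_subj, levels.size))

      # Per-participant slope across levels (the linear effect), tested against zero
      slopes = np.array([np.polyfit(levels, rt[i], 1)[0] for i in range(n_subj)])
      t_stat, p_val = stats.ttest_1samp(slopes, 0.0)

      # Proportion of the condition variability captured by the linear trend
      cond_means = rt.mean(axis=0)
      lin_fit = np.polyval(np.polyfit(levels, cond_means, 1), levels)
      r2_linear = 1 - np.sum((cond_means - lin_fit) ** 2) / np.sum((cond_means - cond_means.mean()) ** 2)

      print(slopes.mean(), t_stat, p_val, r2_linear)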

  11. Java technology for implementing efficient numerical analysis in intranet

    International Nuclear Information System (INIS)

    Song, Hee Yong; Ko, Sung Ho

    2001-01-01

    This paper introduces some useful Java technologies for utilizing the internet in numerical analysis, and suggests an architecture for performing efficient numerical analysis on an intranet using them. The present work verifies its feasibility by implementing parts of this architecture in two simple examples. One is based on Servlet-Applet communication, JDBC and Swing. The other adds multi-threading, file transfer and Java remote method invocation to the former. This work is intended to lay the groundwork for later, more advanced and practical research that will include efficiency estimates of this architecture and deal with advanced load balancing.

  12. Spectrum response estimation for deep-water floating platforms via retardation function representation

    Science.gov (United States)

    Liu, Fushun; Liu, Chengcheng; Chen, Jiefeng; Wang, Bin

    2017-08-01

    The key concept of spectrum response estimation with commercial software, such as the SESAM software tool, typically includes two main steps: finding a suitable loading spectrum and computing the response amplitude operators (RAOs) subjected to a frequency-specified wave component. In this paper, we propose a nontraditional spectrum response estimation method that uses a numerical representation of the retardation functions. Based on estimated added mass and damping matrices of the structure, we decompose and replace the convolution terms with a series of poles and corresponding residues in the Laplace domain. Then, we estimate the power density corresponding to each frequency component using the improved periodogram method. The advantage of this approach is that the frequency-dependent motion equations in the time domain can be transformed into the Laplace domain without requiring Laplace-domain expressions for the added mass and damping. To validate the proposed method, we use a numerical semi-submerged pontoon from the SESAM. The numerical results show that the responses of the proposed method match well with those obtained from the traditional method. Furthermore, the estimated spectrum also matches well, which indicates its potential application to deep-water floating structures.
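
    The spectral-estimation step can be illustrated on its own: the averaged (Welch) periodogram below, applied to a synthetic heave response, is one common refinement of the raw periodogram; the pole-residue representation of the retardation functions is not shown here, and all signal parameters are hypothetical.

      import numpy as np
      from scipy.signal import welch

      fs = 10.0                                  # sampling frequency, Hz
      t = np.arange(0, 600, 1 / fs)
      rng = np.random.default_rng(5)
      # Toy heave response: a 0.08 Hz wave-frequency component plus broadband noise
      response = 1.5 * np.sin(2 * np.pi * 0.08 * t) + 0.3 * rng.standard_normal(t.size)

      # Averaged periodogram: segment, window, transform and average to reduce variance
      f, pxx = welch(response, fs=fs, window="hann", nperseg=1024)
      print(f[np.argmax(pxx)])                   # peak close to 0.08 Hz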

  13. Adaptive Spontaneous Transitions between Two Mechanisms of Numerical Averaging.

    Science.gov (United States)

    Brezis, Noam; Bronfman, Zohar Z; Usher, Marius

    2015-06-04

    We investigated the mechanism with which humans estimate numerical averages. Participants were presented with 4, 8 or 16 (two-digit) numbers, serially and rapidly (2 numerals/second) and were instructed to convey the sequence average. As predicted by a dual, but not a single-component account, we found a non-monotonic influence of set-size on accuracy. Moreover, we observed a marked decrease in RT as set-size increases and RT-accuracy tradeoff in the 4-, but not in the 16-number condition. These results indicate that in accordance with the normative directive, participants spontaneously employ analytic/sequential thinking in the 4-number condition and intuitive/holistic thinking in the 16-number condition. When the presentation rate is extreme (10 items/sec) we find that, while performance still remains high, the estimations are now based on intuitive processing. The results are accounted for by a computational model postulating population-coding underlying intuitive-averaging and working-memory-mediated symbolic procedures underlying analytical-averaging, with flexible allocation between the two.

  14. Implementation of a Simplified State Estimator for Wind Turbine Monitoring on an Embedded System

    DEFF Research Database (Denmark)

    Rasmussen, Theis Bo; Yang, Guangya; Nielsen, Arne Hejde

    2017-01-01

    The transition towards a cyber-physical energy system (CPES) entails an increased dependency on valid data. Simultaneously, an increasing implementation of renewable generation leads to possible control actions at individual distributed energy resources (DERs). A state estimation covering the whole system, including individual DER, is time consuming and numerically challenging. This paper presents the approach and results of implementing a simplified state estimator onto an embedded system for improving DER monitoring. The implemented state estimator is based on numerically robust orthogonal...
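
    The abstract is truncated at "orthogonal", which presumably refers to an orthogonal (QR-type) factorization of the weighted least-squares estimation problem; the sketch below shows that generic formulation with a hypothetical three-measurement example, not the estimator actually deployed on the embedded system.

      import numpy as np

      def wls_state_estimate(H, z, sigma):
          """Weighted least-squares estimate of the state x from measurements z = H x + e,
          solved via a QR factorization of the weighted Jacobian rather than by forming
          the potentially ill-conditioned normal-equation matrix H^T W H."""
          w = 1.0 / np.asarray(sigma)            # weight each row by 1/std of the measurement
          q, r = np.linalg.qr(H * w[:, None])
          return np.linalg.solve(r, q.T @ (z * w))

      # Hypothetical example: two state variables observed through three redundant measurements
      H = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, -1.0]])
      x_true = np.array([1.02, 0.98])
      sigma = np.array([0.01, 0.01, 0.02])
      z = H @ x_true + np.array([0.005, -0.003, 0.01])
      print(wls_state_estimate(H, z, sigma))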

  15. Efficient approximation of random fields for numerical applications

    KAUST Repository

    Harbrecht, Helmut; Peters, Michael; Siebenmorgen, Markus

    2015-01-01

    We consider the rapid computation of separable expansions for the approximation of random fields. We compare approaches based on techniques from the approximation of non-local operators on the one hand and based on the pivoted Cholesky decomposition on the other hand. We provide an a-posteriori error estimate for the pivoted Cholesky decomposition in terms of the trace. Numerical examples validate and quantify the considered methods.
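
    A sketch of the pivoted Cholesky idea, with the trace of the residual used as the a-posteriori error estimate and stopping criterion, is given below; the exponential covariance kernel in the usage example is an arbitrary choice for illustration.

      import numpy as np

      def pivoted_cholesky(get_entry, n, tol, max_rank=None):
          """Low-rank approximation A ~ L L^T of a symmetric positive semi-definite matrix.
          Stops when the trace of the residual (the a-posteriori error estimate) falls
          below tol; only the entries requested through get_entry(i, j) are evaluated."""
          d = np.array([get_entry(i, i) for i in range(n)], dtype=float)   # residual diagonal
          cols, pivots = [], []
          error = d.sum()
          max_rank = max_rank or n
          while error > tol and len(cols) < max_rank:
              i = int(np.argmax(d))
              pivots.append(i)
              col = np.array([get_entry(j, i) for j in range(n)], dtype=float)
              for c in cols:
                  col -= c[i] * c
              col /= np.sqrt(d[i])
              cols.append(col)
              d = np.maximum(d - col ** 2, 0.0)      # guard against round-off
              error = d.sum()                        # trace of the residual
          return np.array(cols).T, pivots, error

      # Example: exponential covariance kernel evaluated on 200 points in [0, 1]
      x = np.linspace(0.0, 1.0, 200)
      L, piv, err = pivoted_cholesky(lambda i, j: np.exp(-abs(x[i] - x[j]) / 0.3), 200, 1e-6)
      print(L.shape, err)   # rank well below 200, trace error below the tolerance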

  16. Efficient approximation of random fields for numerical applications

    KAUST Repository

    Harbrecht, Helmut

    2015-01-07

    We consider the rapid computation of separable expansions for the approximation of random fields. We compare approaches based on techniques from the approximation of non-local operators on the one hand and based on the pivoted Cholesky decomposition on the other hand. We provide an a-posteriori error estimate for the pivoted Cholesky decomposition in terms of the trace. Numerical examples validate and quantify the considered methods.

  17. Numerical simulation of avascular tumor growth

    Energy Technology Data Exchange (ETDEWEB)

    Slezak, D Fernandez; Suarez, C; Soba, A; Risk, M; Marshall, G [Laboratorio de Sistemas Complejos, Departamento de Computacion, Facultad de Ciencias Exactas y Naturales, Universidad de Buenos Aires (C1428EGA) Buenos Aires (Argentina)

    2007-11-15

    A mathematical and numerical model for the description of different aspects of microtumor development is presented. The model is based on the solution of a system of partial differential equations describing avascular tumor growth. A detailed second-order numerical algorithm for solving this system is described. Parameters are swept to cover a range of feasible physiological values. While previously published works used a single set of parameter values, here we present a wide range of feasible solutions for tumor growth, covering a more realistic scenario. The model is validated by experimental data obtained with a multicellular spheroid model, a specific type of in vitro biological model which is at present considered to be optimum for the study of complex aspects of avascular microtumor physiology. Moreover, a dynamical analysis of the local behaviour of the system is presented, showing chaotic situations for particular sets of parameter values at some fixed points. Further biological experiments related to those specific points may give potentially interesting results.

  18. Parameter Estimation of Partial Differential Equation Models.

    Science.gov (United States)

    Xun, Xiaolei; Cao, Jiguo; Mallick, Bani; Carroll, Raymond J; Maity, Arnab

    2013-01-01

    Partial differential equation (PDE) models are commonly used to model complex dynamic systems in applied sciences such as biology and finance. The forms of these PDE models are usually proposed by experts based on their prior knowledge and understanding of the dynamic system. Parameters in PDE models often have interesting scientific interpretations, but their values are often unknown, and need to be estimated from the measurements of the dynamic system in the presence of measurement errors. Most PDEs used in practice have no analytic solutions, and can only be solved with numerical methods. Currently, methods for estimating PDE parameters require repeatedly solving PDEs numerically under thousands of candidate parameter values, and thus the computational load is high. In this article, we propose two methods to estimate parameters in PDE models: a parameter cascading method and a Bayesian approach. In both methods, the underlying dynamic process modeled with the PDE model is represented via basis function expansion. For the parameter cascading method, we develop two nested levels of optimization to estimate the PDE parameters. For the Bayesian method, we develop a joint model for data and the PDE, and develop a novel hierarchical model allowing us to employ Markov chain Monte Carlo (MCMC) techniques to make posterior inference. Simulation studies show that the Bayesian method and parameter cascading method are comparable, and both outperform other available methods in terms of estimation accuracy. The two methods are demonstrated by estimating parameters in a PDE model from LIDAR data.

  19. Numerical estimate of fracture parameters under elastic and elastic-plastic conditions

    International Nuclear Information System (INIS)

    Soba, Alejandro; Denis, Alicia C.

    2003-01-01

    The importance of the stress intensity factor K in elastic fracture analysis is well known. In this work, three methods are developed to estimate the parameter K_I, corresponding to the normal (mode I) loading mode, employing the finite element method. The elastic-plastic condition is also analyzed, where the line integral J is the relevant parameter. Two cases of interest are studied: a sample with a crack at its center and tubes under internal pressure. (author)
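
    The paper's estimates are obtained numerically with finite elements; for the centre-cracked case the usual benchmark is the handbook secant-correction formula, sketched below as a reference value (the numbers in the example are hypothetical).

      import math

      def k1_center_crack(sigma, a, width):
          """Handbook estimate of K_I for a plate of finite width containing a central
          through-crack of length 2a under remote tension sigma (mode I):
          K_I = sigma * sqrt(pi * a) * sqrt(sec(pi * a / width)).
          Shown only as a benchmark for FEM estimates; valid for 2a/width below about 0.8."""
          correction = math.sqrt(1.0 / math.cos(math.pi * a / width))
          return sigma * math.sqrt(math.pi * a) * correction

      # 100 MPa remote stress, 10 mm half-crack length, 100 mm wide plate
      print(k1_center_crack(100e6, 0.010, 0.100) / 1e6)   # about 18.2 MPa*sqrt(m)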

  20. Estimation of moderator temperature coefficient of actual PWRs using wavelet transform

    International Nuclear Information System (INIS)

    Katsumata, Ryosuke; Shimazu, Yoichiro

    2001-01-01

    Recently, the applicability of the wavelet transform to the estimation of the moderator temperature coefficient was demonstrated in numerical simulations. The basic concept of the wavelet transform is to eliminate noise in the measured signals. The concept is similar to that of the Fourier transform method, in which the analyzed reactivity component is divided by the analyzed component of the relevant parameter. In order to apply the method to measured data from actual PWRs, we carried out numerical simulations on data more similar to actual data and proposed a method for estimating the moderator temperature coefficient using the wavelet transform. In the numerical simulations we obtained moderator temperature coefficients with a relative error of less than 4%. Based on this result we applied the method to measured data from actual PWRs, and the results have proved that the method is applicable for the estimation of moderator temperature coefficients in actual PWRs. It is expected that this method can reduce the required data length during the measurement. We expect to expand the applicability of this method to estimate other reactivity coefficients from data of short transients. (author)
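
    The core idea, denoising the measured signals before dividing the reactivity component by the moderator temperature component, can be sketched as follows. This is an illustrative toy example rather than the procedure of the paper: it assumes the PyWavelets package is available, uses a synthetic linear relation between reactivity and temperature, and uses a simple least-squares ratio in place of the paper's component-wise division.

        import numpy as np
        import pywt  # PyWavelets, assumed available

        def denoise(signal, wavelet="db4", level=4):
            """Soft-threshold wavelet denoising with a universal threshold."""
            coeffs = pywt.wavedec(signal, wavelet, level=level)
            sigma = np.median(np.abs(coeffs[-1])) / 0.6745        # noise estimate from the finest scale
            thr = sigma * np.sqrt(2.0 * np.log(len(signal)))
            coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
            return pywt.waverec(coeffs, wavelet)[: len(signal)]

        # Toy data: reactivity = MTC * temperature deviation + measurement noise
        rng = np.random.default_rng(1)
        t = np.linspace(0.0, 100.0, 1024)
        dT = 2.0 * np.sin(2.0 * np.pi * t / 50.0)                 # moderator temperature change (K)
        true_mtc = -4.0e-5                                        # (dk/k)/K, illustrative value only
        rho = true_mtc * dT + 1.0e-5 * rng.standard_normal(t.size)

        rho_d, dT_d = denoise(rho), denoise(dT)
        mtc_hat = np.dot(rho_d, dT_d) / np.dot(dT_d, dT_d)        # least-squares ratio of denoised signals
        print("estimated MTC:", mtc_hat, "relative error:", abs(mtc_hat / true_mtc - 1.0))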

  1. Numerically Accelerated Importance Sampling for Nonlinear Non-Gaussian State Space Models

    NARCIS (Netherlands)

    Koopman, S.J.; Lucas, A.; Scharth, M.

    2015-01-01

    We propose a general likelihood evaluation method for nonlinear non-Gaussian state-space models using the simulation-based method of efficient importance sampling. We minimize the simulation effort by replacing some key steps of the likelihood estimation procedure by numerical integration. We refer

  2. Numerical experiment to estimate the validity of negative ion diagnostic using photo-detachment combined with Langmuir probing

    Energy Technology Data Exchange (ETDEWEB)

    Oudini, N. [Laboratoire des plasmas de décharges, Centre de Développement des Technologies Avancées, Cité du 20 Aout BP 17 Baba Hassen, 16081 Algiers (Algeria); Sirse, N.; Ellingboe, A. R. [Plasma Research Laboratory, School of Physical Sciences and NCPST, Dublin City University, Dublin 9 (Ireland); Benallal, R. [Unité de Recherche Matériaux et Energies Renouvelables, BP 119, Université Abou Bekr Belkaïd, Tlemcen 13000 (Algeria); Taccogna, F. [Istituto di Metodologie Inorganiche e di Plasmi, CNR, via Amendola 122/D, 70126 Bari (Italy); Aanesland, A. [Laboratoire de Physique des Plasmas, (CNRS, Ecole Polytechnique, Sorbonne Universités, UPMC Univ Paris 06, Univ Paris-Sud), École Polytechnique, 91128 Palaiseau Cedex (France); Bendib, A. [Laboratoire d' Electronique Quantique, Faculté de Physique, USTHB, El Alia BP 32, Bab Ezzouar, 16111 Algiers (Algeria)

    2015-07-15

    This paper presents a critical assessment of the theory of the photo-detachment diagnostic method used to probe the negative ion density and electronegativity α = n-/ne. In this method, a laser pulse is used to photo-detach all negative ions located within the electropositive channel (laser spot region). The negative ion density is estimated based on the assumption that the increase of the current collected by an electrostatic probe biased positively with respect to the plasma results only from the creation of photo-detached electrons. In parallel, the background electron density and temperature are considered constant during this diagnostic. However, the numerical experiments performed here show that the background electron density and temperature increase due to the formation of an electrostatic potential barrier around the electropositive channel. The time scale of the potential barrier rise is about 2 ns, which is comparable to the time required to completely photo-detach the negative ions in the electropositive channel (∼3 ns). We find that neglecting the effect of the potential barrier on the background plasma leads to an erroneous determination of the negative ion density. Moreover, the background electron velocity distribution function within the electropositive channel is not Maxwellian. This is due to the acceleration of these electrons through the electrostatic potential barrier. In this work, the validity of the photo-detachment diagnostic assumptions is questioned and our results illustrate the weakness of these assumptions.

  3. Development of a Numerical Approach to Simulate Compressed Air Energy Storage Subjected to Cyclic Internal Pressure

    Directory of Open Access Journals (Sweden)

    Song-Hun Chong

    2017-10-01

    This paper analyzes the long-term response of unlined energy storage located at shallow depth in order to reduce the distance between a wind farm and the storage. The numerical approach follows a hybrid scheme that combines a mechanical constitutive model, used to extract stresses and strains at the first cycle, with polynomial-type strain accumulation functions that track the progressive plastic deformation. In particular, the strain functions include the fundamental features required to simulate the long-term response of geomaterials: volumetric strain (terminal void ratio), shear strain (shakedown and ratcheting), the strain accumulation rate, and stress obliquity. The model is tested with a triaxial strain boundary condition under different stress obliquities. The unlined storage subjected to cyclic internal stress is simulated with different storage geometries and stress amplitudes, which play a crucial role in estimating the long-term mechanical stability of underground storage. The simulations show the evolution of the ground surface, yet the incremental settlement rate approaches a terminal void ratio. With regular and smooth displacement fields over the large number of cycles, the inflection point is estimated with the previously proposed surface settlement model.

  4. Polynomial Phase Estimation Based on Adaptive Short-Time Fourier Transform.

    Science.gov (United States)

    Jing, Fulong; Zhang, Chunjie; Si, Weijian; Wang, Yu; Jiao, Shuhong

    2018-02-13

    Polynomial phase signals (PPSs) have numerous applications in many fields including radar, sonar, geophysics, and radio communication systems. Therefore, estimation of PPS coefficients is very important. In this paper, a novel approach to PPS parameter estimation based on the adaptive short-time Fourier transform (ASTFT), called the PPS-ASTFT estimator, is proposed. Using the PPS-ASTFT estimator, both one-dimensional and multi-dimensional searches and error propagation problems, which are widespread in the PPS field, are avoided. In the proposed algorithm, the instantaneous frequency (IF) is estimated by the S-transform (ST), which can preserve information on the signal phase and provide a variable resolution similar to the wavelet transform (WT). The width of the ASTFT analysis window is equal to the local stationary length, which is measured by the instantaneous frequency gradient (IFG). The IFG is calculated by principal component analysis (PCA), which is robust to noise. Moreover, to improve estimation accuracy, a refinement strategy is presented to estimate the signal parameters. Since the PPS-ASTFT avoids parameter searches, the proposed algorithm can be computed in a reasonable amount of time. The estimation performance, computational cost, and implementation of the PPS-ASTFT are also analyzed. The conducted numerical simulations support our theoretical results and demonstrate an excellent statistical performance of the proposed algorithm.
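
    A much simplified illustration of the building block involved, estimating the instantaneous frequency of a polynomial phase signal from the ridge of a time-frequency representation, is sketched below. It uses a fixed-window STFT from SciPy rather than the adaptive-window ASTFT/S-transform of the paper, and the chirp coefficients and sampling rate are hypothetical.

        import numpy as np
        from scipy.signal import stft

        # Simplified illustration: estimate the instantaneous frequency (IF) of a
        # chirp-like signal by picking the ridge of a fixed-window STFT, then fit
        # the phase-polynomial coefficients to the IF samples.
        fs = 1000.0
        t = np.arange(0, 1.0, 1.0 / fs)
        a1, a2 = 100.0, 150.0                      # Hz and Hz/s, illustrative coefficients
        x = np.cos(2.0 * np.pi * (a1 * t + 0.5 * a2 * t**2))

        f, tau, Z = stft(x, fs=fs, nperseg=128, noverlap=96)
        if_est = f[np.argmax(np.abs(Z), axis=0)]   # ridge of the spectrogram
        if_true = a1 + a2 * tau                    # analytic instantaneous frequency

        coef = np.polyfit(tau, if_est, 1)          # [a2_hat, a1_hat]
        print("estimated (a2, a1):", coef)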

  5. Current Status of the IAU Working Group for Numerical Standards of Fundamental Astronomy

    National Research Council Canada - National Science Library

    Luzum, B; Capitaine, N; Fienga, A; Folkner, W; Fukushima, T; Hilton, J; Hohenkerk, C; Krasinsky, G; Petit, G; Pitjeva, E; Soffel, M; Wallace, P

    2007-01-01

    ...) for Numerical Standards of Fundamental Astronomy. The goals of the WG are to update the "IAU Current Best Estimates" conforming with IAU Resolutions, the International Earth Rotation and Reference Systems Service (IERS...

  6. Assessing groundwater availability in a folded carbonate aquifer through the development of a numerical model

    Science.gov (United States)

    Di Salvo, Cristina; Romano, Emanuele; Guyennon, Nicolas; Bruna Petrangeli, Anna; Preziosi, Elisabetta

    2015-04-01

    The study of aquifer systems from a quantitative point of view is fundamental for adopting water management plans aimed at preserving water resources and reducing environmental risks related to changes in groundwater level and discharge. This is also what the European Union Water Framework Directive (WFD, 2000/60/EC) states, holding the development of numerical models as a key aspect of groundwater management. The objectives of this research are to (i) define a methodology for modeling a complex hydrogeological structure in a structurally folded carbonate area and (ii) estimate the concurrent effects of exploitation and climate change on groundwater availability through the implementation of a 3D groundwater flow model. This study concerns the Monte Coscerno karst aquifer located in the Apennine chain in Central Italy, in the Nera River Valley. This aquifer is planned to be exploited in the near future for water supply. Negative trends of precipitation in Central Italy have been reported in relation to global climate change, which is expected to affect the availability of recharge to carbonate aquifers throughout the region. A great concern is the combined impact of climate change and groundwater exploitation; hence, scenarios are needed that take into account the effect of possible temperature and precipitation trends on recharge rates. Following a previous experience with model conceptualization and long-term simulation of groundwater flow, an integrated three-dimensional groundwater model has been developed for the Monte Coscerno aquifer. In a previous paper (Preziosi et al., 2014) the spatial distribution of recharge to this aquifer was estimated through the Thornthwaite-Mather model at a daily time step, using as inputs past precipitation and temperature values (1951-2013) as well as soil and landscape properties. In this paper the numerical model development is described. On the basis of well logs from private consulting companies and literature cross sections the

  7. Kidnapping Detection and Recognition in Previous Unknown Environment

    Directory of Open Access Journals (Sweden)

    Yang Tian

    2017-01-01

    An event of which the robot is unaware, referred to as kidnapping, makes the localization estimate incorrect. In a previously unknown environment, an incorrect localization result caused by kidnapping leads to an incorrect mapping result in Simultaneous Localization and Mapping (SLAM). In this situation, the explored and unexplored areas are divided in a way that makes kidnapping recovery difficult. To provide sufficient information on kidnapping, a framework to judge whether kidnapping has occurred and to identify the type of kidnapping with filter-based SLAM is proposed. The framework is called double kidnapping detection and recognition (DKDR) and performs two checks, before and after the “update” process, with different metrics in real time. To explain one of the principles of DKDR, we describe a property of filter-based SLAM that corrects the mapping result of the environment using the current observations after the “update” process. Two classical filter-based SLAM algorithms, Extended Kalman Filter (EKF) SLAM and Particle Filter (PF) SLAM, are modified to show that DKDR can be simply and widely applied in existing filter-based SLAM algorithms. Furthermore, a technique to determine adaptive thresholds for the metrics in real time, without previous data, is presented. Both simulated and experimental results demonstrate the validity and accuracy of the proposed method.

  8. Can rarefaction be used to estimate song repertoire size in birds?

    Directory of Open Access Journals (Sweden)

    Kathleen R. PESHEK, Daniel T. BLUMSTEIN

    2011-06-01

    Song repertoire size is the number of distinct syllables, phrases, or song types produced by an individual or population. Repertoire size estimation is particularly difficult for species that produce highly variable songs and those that produce many song types. Estimating repertoire size is important for ecological and evolutionary studies of speciation, studies of sexual selection, as well as studies of how species may adapt their songs to various acoustic environments. There are several methods to estimate repertoire size; however, prior studies discovered that all but a full numerical count of song types might have substantial inaccuracies associated with them. We evaluated a somewhat novel approach to estimate repertoire size, rarefaction: a technique ecologists use to measure species diversity at individual and population levels. Using the syllables within American robins’ (Turdus migratorius) repertoires, we compared the most commonly used techniques for estimating repertoires with the results of a rarefaction analysis. American robins have elaborate and unique songs with few syllables shared between individuals, and there is no evidence that robins mimic their neighbors. Thus, they are an ideal system in which to compare techniques. We found that the rarefaction results resembled those of the numerical count, and were better than two alternative methods (behavioral accumulation curves and capture–recapture) for estimating syllable repertoire size. Future estimates of repertoire size, particularly in vocally complex species, may benefit from using rarefaction techniques when numerical counts cannot be performed [Current Zoology 57 (3): 300–306, 2011].
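
    For readers unfamiliar with rarefaction, the basic computation can be sketched as repeated random subsampling of the recorded syllable sequence and counting the distinct types at each subsample size; the plateau of the resulting curve suggests the repertoire size. The data below are hypothetical, not the American robin recordings of the study.

        import numpy as np

        rng = np.random.default_rng(2)

        # Hypothetical recording: 500 syllables drawn (unevenly) from a 30-type repertoire
        true_types = np.arange(30)
        probs = np.linspace(1.0, 5.0, 30)
        probs /= probs.sum()
        recording = rng.choice(true_types, size=500, p=probs)

        def rarefaction_curve(sample, n_points=20, n_rep=200):
            """Mean number of distinct syllable types in random subsamples of increasing size."""
            sizes = np.linspace(10, len(sample), n_points, dtype=int)
            means = [np.mean([len(np.unique(rng.choice(sample, n, replace=False)))
                              for _ in range(n_rep)]) for n in sizes]
            return sizes, np.array(means)

        sizes, richness = rarefaction_curve(recording)
        print("subsample size -> mean distinct types")
        for n, r in zip(sizes, richness):
            print(f"{n:4d} -> {r:5.1f}")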

  9. Numerical modeling of optical coherent transient processes with complex configurations-III: Noisy laser source

    International Nuclear Information System (INIS)

    Chang Tiejun; Tian Mingzhen

    2007-01-01

    A previously developed numerical model based on the Maxwell-Bloch equations was modified to simulate optical coherent transient and spectral hole burning processes with noisy laser sources. Random walk phase noise was simulated using laser-phase sequences generated numerically according to the normal distribution of the phase shift. The noise model was tested by comparing the simulated spectral hole burning effect with the analytical solution. The noise effects on a few typical optical coherent transient processes were investigated using this numerical tool. Flicker and random walk frequency noises were considered in the accumulation process.

  10. Criteria for the reliability of numerical approximations to the solution of fluid flow problems

    International Nuclear Information System (INIS)

    Foias, C.

    1986-01-01

    The numerical approximation of the solutions of fluid flow models is a difficult problem in many cases of energy research. In all numerical methods implementable on digital computers, a basic question is whether the number N of elements (Galerkin modes, finite-difference cells, finite elements, etc.) is sufficient to describe the long-time behavior of the exact solutions. It was shown, using several approaches, that some of the estimates of N based on physical intuition are rigorously valid under very general conditions and follow directly from the mathematical theory of the Navier-Stokes equations. Among the mathematical approaches to these estimates, the most promising (which can be and has already been applied to many other dissipative partial differential systems) consists in giving upper estimates of the fractal dimension of the attractor associated with one (or all) solution(s) of the respective partial differential equations. 56 refs

  11. A desiccant-enhanced evaporative air conditioner: Numerical model and experiments

    International Nuclear Information System (INIS)

    Woods, Jason; Kozubal, Eric

    2013-01-01

    Highlights: ► We studied a new process combining liquid desiccants and evaporative cooling. ► We modeled the process using a finite-difference numerical model. ► We measured the performance of the process with experimental prototypes. ► Results show agreement between model and experiment of ±10%. ► Results add confidence to previous modeled energy savings estimates of 40–85%. - Abstract: This article presents modeling and experimental results on a recently proposed liquid desiccant air conditioner, which consists of two stages: a liquid desiccant dehumidifier and an indirect evaporative cooler. Each stage is a stack of channel pairs, where a channel pair is a process air channel separated from an exhaust air channel with a thin plastic plate. In the first stage, a liquid desiccant film, which lines the process air channels, removes moisture from the air through a porous hydrophobic membrane. An evaporating water film wets the surface of the exhaust channels and transfers the enthalpy of vaporization from the liquid desiccant into an exhaust airstream, cooling the desiccant and enabling lower outlet humidity. The second stage is a counterflow indirect evaporative cooler that siphons off and uses a portion of the cool-dry air exiting the second stage as the evaporative sink. The objectives of this article are to (1) present fluid-thermal numerical models for each stage, (2) present experimental results of prototypes for each stage, and (3) compare the modeled and experimental results. Several experiments were performed on the prototypes over a range of inlet temperatures and humidities, process and exhaust air flow rates, and desiccant concentrations and flow rates. The model predicts the experiments within ±10%.

  12. Spent Fuel Ratio Estimates from Numerical Models in ALE3D

    Energy Technology Data Exchange (ETDEWEB)

    Margraf, J. D. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Dunn, T. A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2016-08-02

    The potential threat of intentional sabotage of spent nuclear fuel storage facilities is of significant importance to national security. Paramount is the study of focused energy attacks on these materials and the potential release of aerosolized hazardous particulates into the environment. Depleted uranium oxide (DUO2) is often chosen as a surrogate material for testing due to the unreasonable cost and safety demands of conducting full-scale tests with real spent nuclear fuel. To account for differences in mechanical response resulting in changes to the particle distribution, it is necessary to scale the DUO2 results to obtain a proper measure for spent fuel. This is accomplished with the spent fuel ratio (SFR), the ratio of respirable aerosol mass released under identical damage conditions between spent fuel and a surrogate material such as DUO2. A very limited number of full-scale experiments have been carried out to capture these data, and the oft-questioned validity of the results typically leads to overly conservative risk estimates. In the present work, the ALE3D hydrocode is used to simulate DUO2 and spent nuclear fuel pellets impacted by metal jets. The results demonstrate an alternative approach to estimate the respirable release fraction of fragmented nuclear fuel.

  13. Numerical Estimation of Fatigue Life of Wind Turbines due to Shadow Effect

    DEFF Research Database (Denmark)

    Thoft-Christensen, Palle; Pedersen, Ronnie; Nielsen, Søren R.K.

    2009-01-01

    The influence of tower design on damage accumulation in up-wind turbine blades during tower passage is discussed. The fatigue life of a blade is estimated for a tripod tower configuration and a standard mono-tower. The blade stresses are determined from a dynamic mechanical model with a delay...

  14. Inferring uncertainty from interval estimates: Effects of alpha level and numeracy

    Directory of Open Access Journals (Sweden)

    Luke F. Rinne

    2013-05-01

    Interval estimates are commonly used to descriptively communicate the degree of uncertainty in numerical values. Conventionally, low alpha levels (e.g., .05) ensure a high probability of capturing the target value between the interval endpoints. Here, we test whether alpha levels and individual differences in numeracy influence distributional inferences. In the reported experiment, participants received prediction intervals for fictitious towns' annual rainfall totals (assuming approximately normal distributions). Then, participants estimated probabilities that future totals would be captured within varying margins about the mean, indicating the approximate shapes of their inferred probability distributions. Results showed that low alpha levels (vs. moderate levels, e.g., .25) more frequently led to inferences of over-dispersed approximately normal distributions or approximately uniform distributions, reducing estimate accuracy. Highly numerate participants made more accurate estimates overall, but were more prone to inferring approximately uniform distributions. These findings have important implications for presenting interval estimates to various audiences.
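
    The normative inference that participants were asked to approximate can be written down directly: recover the normal distribution implied by a symmetric (1 - alpha) prediction interval and compute the probability that a future value falls within a given margin of the mean. The sketch below uses SciPy and hypothetical rainfall numbers.

        from scipy.stats import norm

        def implied_normal(lower, upper, alpha):
            """Normal distribution implied by a symmetric (1 - alpha) prediction interval."""
            mu = 0.5 * (lower + upper)
            z = norm.ppf(1.0 - alpha / 2.0)
            sigma = (upper - lower) / (2.0 * z)
            return mu, sigma

        def capture_probability(margin, mu, sigma):
            """P(|future value - mu| <= margin) under the implied normal distribution."""
            return norm.cdf(margin / sigma) - norm.cdf(-margin / sigma)

        # Example: a 95% interval of [30, 50] cm of annual rainfall (hypothetical numbers)
        mu, sigma = implied_normal(30.0, 50.0, alpha=0.05)
        for margin in (2.0, 5.0, 10.0):
            print(f"P within +/-{margin} of the mean: {capture_probability(margin, mu, sigma):.2f}")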

  15. A numerical method for two-dimensional anisotropic transport problem in cylindrical geometry

    International Nuclear Information System (INIS)

    Du Mingsheng; Feng Tiekai; Fu Lianxiang; Cao Changshu; Liu Yulan

    1988-01-01

    The authors deal with the triangular-mesh discontinuous finite element method for solving the time-dependent anisotropic neutron transport problem in two-dimensional cylindrical geometry. An a priori estimate of the numerical solution is given. Stability is proved. The authors have computed a two-dimensional anisotropic neutron transport problem and a tungsten-carbide critical assembly problem using the numerical method. In comparison with the DSN method and the experimental results obtained by others both at home and abroad, the method is satisfactory

  16. Beach steepness effects on nonlinear infragravity-wave interactions : A numerical study

    NARCIS (Netherlands)

    de Bakker, A. T M; Tissier, M. F S; Ruessink, B. G.

    2016-01-01

    The numerical model SWASH is used to investigate nonlinear energy transfers between waves for a diverse set of beach profiles and wave conditions, with a specific focus on infragravity waves. We use bispectral analysis to study the nonlinear triad interactions, and estimate energy transfers to

  17. Eigenvector of gravity gradient tensor for estimating fault dips considering fault type

    Science.gov (United States)

    Kusumoto, Shigekazu

    2017-12-01

    The dips of boundaries in faults and caldera walls play an important role in understanding their formation mechanisms. The fault dip is a particularly important parameter in numerical simulations for hazard map creation, as the fault dip affects estimations of the area of disaster occurrence. In this study, I introduce a technique for estimating the fault dip using the eigenvectors of the observed or calculated gravity gradient tensor on a profile and investigate its properties through numerical simulations. From the numerical simulations, it was found that the maximum eigenvector of the tensor points to the high-density causative body and that the dip of the maximum eigenvector closely follows the dip of a normal fault. It was also found that the minimum eigenvector of the tensor points to the low-density causative body and that the dip of the minimum eigenvector closely follows the dip of a reverse fault. Thus, which eigenvector of the gravity gradient tensor should be used for estimating the fault dip is determined by the fault type. As an application of this technique, I estimated the dip of the Kurehayama Fault located in Toyama, Japan, and obtained a result that corresponds to conventional fault dip estimates from geology and geomorphology. Because the gravity gradient tensor is required for this analysis, I also present a technique that estimates the gravity gradient tensor from the gravity anomaly on a profile.
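
    The core computation, an eigendecomposition of the 2-D gravity gradient tensor on a profile followed by conversion of the relevant eigenvector into a dip angle, can be sketched in a few lines. The tensor components below are hypothetical values for a single profile point, not data from the study.

        import numpy as np

        def eigenvector_dips(gxx, gxz, gzz):
            """Dips (degrees from horizontal) of the max- and min-eigenvalue eigenvectors
            of the 2-D gravity gradient tensor on a profile (x horizontal, z vertical)."""
            T = np.array([[gxx, gxz],
                          [gxz, gzz]])
            vals, vecs = np.linalg.eigh(T)          # eigenvalues in ascending order
            v_min, v_max = vecs[:, 0], vecs[:, 1]
            dip = lambda v: np.degrees(np.arctan2(abs(v[1]), abs(v[0])))
            return dip(v_max), dip(v_min)

        # Hypothetical tensor components (in Eotvos) at one profile point
        dip_max, dip_min = eigenvector_dips(gxx=-20.0, gxz=35.0, gzz=20.0)
        print("dip of maximum eigenvector (normal-fault case):", round(dip_max, 1))
        print("dip of minimum eigenvector (reverse-fault case):", round(dip_min, 1))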

  18. Goal-oriented error estimation for Cahn-Hilliard models of binary phase transition

    KAUST Repository

    van der Zee, Kristoffer G.

    2010-10-27

    A posteriori estimates of errors in quantities of interest are developed for the nonlinear system of evolution equations embodied in the Cahn-Hilliard model of binary phase transition. These involve the analysis of wellposedness of dual backward-in-time problems and the calculation of residuals. Mixed finite element approximations are developed and used to deliver numerical solutions of representative problems in one- and two-dimensional domains. Estimated errors are shown to be quite accurate in these numerical examples. © 2010 Wiley Periodicals, Inc.

  19. Subspace Based Blind Sparse Channel Estimation

    DEFF Research Database (Denmark)

    Hayashi, Kazunori; Matsushima, Hiroki; Sakai, Hideaki

    2012-01-01

    The paper proposes a subspace-based blind sparse channel estimation method using ℓ1-ℓ2 optimization, replacing the ℓ2-norm minimization in the conventional subspace-based method with an ℓ1-norm minimization problem. Numerical results confirm that the proposed method can significantly improve...

  20. Deep sea animal density and size estimated using a Dual-frequency IDentification SONar (DIDSON) offshore the island of Hawaii

    Science.gov (United States)

    Giorli, Giacomo; Drazen, Jeffrey C.; Neuheimer, Anna B.; Copeland, Adrienne; Au, Whitlow W. L.

    2018-01-01

    Pelagic animals that form deep sea scattering layers (DSLs) represent an important link in the food web between zooplankton and top predators. While estimating the composition, density and location of DSLs is important for understanding mesopelagic ecosystem dynamics and predicting top predators' distribution, DSL composition and density are often estimated from trawls, which may suffer from extrusion, avoidance, and gear-associated biases. Instead, the location and biomass of DSLs can be estimated with active acoustic techniques, though estimates are often made in aggregate, without size- or taxon-specific information. For the first time in the open ocean, we used a DIDSON sonar to characterize the fauna in DSLs. Estimates of the numerical density and length of animals at different depths and locations along the Kona coast of the Island of Hawaii were determined. Data were collected below and inside the DSLs with the sonar mounted on a profiler. A total of 7068 animals were counted and sized. We estimated numerical densities ranging from 1 to 7 animals/m3, and individuals as long as 3 m were detected. These numerical densities were orders of magnitude higher than those estimated from trawls, and the average sizes of animals were much larger as well. A mixed model was used to characterize the numerical density and length of animals as a function of the deep sea layer sampled, location, time of day, and day of the year. Numerical density and length of animals varied by month, with numerical density also a function of depth. The DIDSON proved to be a good tool for open-ocean/deep-sea estimation of the numerical density and size of marine animals, especially larger ones. Further work is needed to understand how this methodology relates to estimates of volume backscatter obtained with standard echosounding techniques and to density measures obtained with other sampling methodologies, and to precisely evaluate sampling biases.

  1. Estimation of measurement variances

    International Nuclear Information System (INIS)

    Anon.

    1981-01-01

    In the previous two sessions, it was assumed that the measurement error variances were known quantities when the variances of the safeguards indices were calculated. These known quantities are actually estimates based on historical data and on data generated by the measurement program. Session 34 discusses how measurement error parameters are estimated for different situations. The various error types are considered. The purpose of the session is to enable participants to: (1) estimate systematic error variances from standard data; (2) estimate random error variances from data such as replicate measurement data; (3) perform a simple analysis of variance to characterize the measurement error structure when biases vary over time
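
    Item (2), estimating a random error variance from replicate measurement data, can be sketched as pooling the within-item sample variances (assuming an equal number of replicates per item); the data below are hypothetical.

        import numpy as np

        # Hypothetical replicate measurements: each row is one item measured three times
        replicates = np.array([
            [10.12, 10.08, 10.15],
            [ 4.97,  5.03,  5.01],
            [ 7.55,  7.49,  7.52],
            [12.30, 12.41, 12.35],
        ])

        # The pooled within-item variance estimates the random (repeatability) error variance
        within_var = replicates.var(axis=1, ddof=1)       # per-item sample variances
        random_error_var = within_var.mean()              # pooled estimate (equal replicate counts)
        print("estimated random error variance:", random_error_var)
        print("estimated random error std dev :", np.sqrt(random_error_var))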

  2. Numerical model of phase transformation of steel C80U during hardening

    Directory of Open Access Journals (Sweden)

    T. Domański

    2007-12-01

    The article concerns numerical modelling of the phase transformations in the solid state during hardening of tool steel C80U. The following transformations were assumed: initial structure – austenite, austenite – pearlite/bainite, and austenite – martensite. The model for evaluating the phase fractions and their kinetics is based on the continuous heating diagram (CHT) and the continuous cooling diagram (CCT). Dilatometric tests were performed on a thermal cycle simulator. The results of the dilatometric tests were compared with the results of the test numerical simulations. In this way the derived models for evaluating phase content and the kinetics of transformations in heating and cooling processes were verified. The results of the numerical simulations confirm the correctness of the algorithms that were worked out. In the numerical example, a simulated estimation of the phase fractions in a hardened axisymmetric element was performed.

  3. Moving Horizon Estimation and Control

    DEFF Research Database (Denmark)

    Jørgensen, John Bagterp

    successful and applied methodology beyond PID control for the control of industrial processes. The main contribution of this thesis is the introduction and definition of the extended linear quadratic optimal control problem for the solution of numerical problems arising in moving horizon estimation and control ... problems. Chapter 1 motivates moving horizon estimation and control as a paradigm for the control of industrial processes. It introduces the extended linear quadratic control problem and discusses its central role in moving horizon estimation and control. Introduction, application and efficient solution ... It provides an algorithm for computation of the maximal output admissible set for linear model predictive control. Appendix D provides results concerning linear regression. Appendix E discusses prediction error methods for identification of linear models tailored for model predictive control ...

  4. Asymptotic preserving error estimates for numerical solutions of compressible Navier-Stokes equations in the low Mach number regime

    Czech Academy of Sciences Publication Activity Database

    Feireisl, Eduard; Medviďová-Lukáčová, M.; Nečasová, Šárka; Novotný, A.; She, Bangwei

    2018-01-01

    Roč. 16, č. 1 (2018), s. 150-183 ISSN 1540-3459 R&D Projects: GA ČR GA16-03230S EU Projects: European Commission(XE) 320078 - MATHEF Institutional support: RVO:67985840 Keywords : Navier-Stokes system * finite element numerical method * finite volume numerical method * asymptotic preserving schemes Subject RIV: BA - General Mathematics OBOR OECD: Pure mathematics Impact factor: 1.865, year: 2016 http://epubs.siam.org/doi/10.1137/16M1094233

  5. Robust Estimation for a CSTR Using a High Order Sliding Mode Observer and an Observer-Based Estimator

    Directory of Open Access Journals (Sweden)

    Esteban Jiménez-Rodríguez

    2016-12-01

    This paper presents an estimation structure for a continuous stirred-tank reactor, which is composed of a sliding mode observer-based estimator coupled with a high-order sliding-mode observer. The whole scheme allows the robust estimation of the state and some parameters, specifically the concentration of the reactive mass, the heat of reaction and the global heat transfer coefficient, by measuring the temperature inside the reactor and the temperature inside the jacket. In order to verify the results, a convergence proof of the proposed structure is given, and numerical simulations are presented with noiseless and noisy measurements, suggesting the applicability of the proposed approach.

  6. Numerical Solution of Stokes Flow in a Circular Cavity Using Mesh-free Local RBF-DQ

    DEFF Research Database (Denmark)

    Kutanaai, S Soleimani; Roshan, Naeem; Vosoughi, A

    2012-01-01

    This work reports the results of a numerical investigation of the Stokes flow problem in a circular cavity, as an irregular geometry, using the mesh-free local radial basis function-based differential quadrature (RBF-DQ) method. This method is the combination of differential quadrature approximation of derivatives ... in the solution of partial differential equations (PDEs) ... The method is applied on a two-dimensional geometry. The results obtained from the numerical simulations are compared with those from previous works. Outcomes prove that the current technique is in very good agreement with previous investigations and that the RBF-DQ method is an accurate and flexible method...

  7. Information content of slug tests for estimating hydraulic properties in realistic, high-conductivity aquifer scenarios

    Science.gov (United States)

    Cardiff, Michael; Barrash, Warren; Thoma, Michael; Malama, Bwalya

    2011-06-01

    A recently developed unified model for partially-penetrating slug tests in unconfined aquifers (Malama et al., in press) provides a semi-analytical solution for the aquifer response at the wellbore in the presence of inertial effects and wellbore skin, and is able to model the full range of responses from overdamped/monotonic to underdamped/oscillatory. While the model provides a unifying framework for realistically analyzing slug tests in aquifers (with the ultimate goal of determining aquifer properties such as hydraulic conductivity K and specific storage Ss), it is currently unclear whether the parameters of this model can be well identified without significant prior information and, thus, what degree of information content can be expected from such slug tests. In this paper, we examine the information content of slug tests in realistic field scenarios with respect to estimating aquifer properties, through analysis of both numerical experiments and field datasets. First, through numerical experiments using Markov chain Monte Carlo methods for gauging parameter uncertainty and identifiability, we find that: (1) as noted by previous researchers, estimation of aquifer storage parameters using slug test data is highly unreliable and subject to significant uncertainty; (2) joint estimation of aquifer and skin parameters contributes to significant uncertainty in both unless prior knowledge is available; and (3) similarly, without prior information, joint estimation of both aquifer radial and vertical conductivity may be unreliable. These results have significant implications for the types of information that must be collected prior to slug test analysis in order to obtain reliable aquifer parameter estimates. For example, plausible estimates of aquifer anisotropy ratios and bounds on wellbore skin K should be obtained a priori, if possible. Second, through analysis of field data - consisting of over 2500 records from partially-penetrating slug tests in a

  8. Contributions to reinforced concrete structures numerical simulations

    International Nuclear Information System (INIS)

    Badel, P.B.

    2001-07-01

    In order to carry out simulations of reinforced concrete structures, two aspects must be addressed: the behaviour laws have to reflect the complex behaviour of concrete, and a numerical environment has to be developed in order to spare the user the difficulties due to the softening nature of the behaviour. This work deals with these two subjects. After an accurate assessment of two behaviour models (micro-plane and mesoscopic models), two damage models (the first one using a scalar variable, the other one a second-order damage tensor) are proposed. These two models belong to the framework of generalized standard materials, which renders their numerical integration easy and efficient. A method of load control is developed in order to ease the convergence of the calculations. Finally, simulations of industrial structures illustrate the efficiency of the method. (O.M.)

  9. Numerical and experimental investigations on cavitation erosion

    Science.gov (United States)

    Fortes Patella, R.; Archer, A.; Flageul, C.

    2012-11-01

    A method is proposed to predict cavitation damage from cavitating flow simulations. For this purpose, a numerical process coupling cavitating flow simulations and erosion models was developed and applied to a two-dimensional (2D) hydrofoil tested at TUD (Darmstadt University of Technology, Germany) [1] and to a NACA 65012 tested at LMH-EPFL (Lausanne Polytechnic School) [2]. Cavitation erosion tests (pitting tests) were carried out and a 3D laser profilometry was used to analyze surfaces damaged by cavitation [3]. The method allows evaluating the pit characteristics, and mainly the volume damage rates. The paper describes the developed erosion model, the technique of cavitation damage measurement and presents some comparisons between experimental results and numerical damage predictions. The extent of cavitation erosion was correctly estimated in both hydrofoil geometries. The simulated qualitative influence of flow velocity, sigma value and gas content on cavitation damage agreed well with experimental observations.

  10. Prototype bucket foundation for wind turbines - natural frequency estimation

    Energy Technology Data Exchange (ETDEWEB)

    Ibsen, Lars Bo; Liingaard, M.

    2006-12-15

    The first full-scale prototype bucket foundation for wind turbines was installed in October 2002 at the Aalborg University offshore test facility in Frederikshavn, Denmark. The suction caisson and the wind turbine have been equipped with an online monitoring system consisting of 15 accelerometers and a real-time data-acquisition system. The report concerns the in-service performance of the wind turbine, with focus on estimation of the natural frequencies of the structure/foundation. The natural frequencies are initially estimated by means of experimental output-only modal analysis. The experimental estimates are then compared with numerical simulations of the suction caisson foundation and the wind turbine. The numerical model consists of a finite element section for the wind turbine tower and nacelle. The soil-structure interaction of the soil-foundation section is modelled by lumped-parameter models capable of simulating the dynamic frequency-dependent behaviour of the structure-foundation system. (au)

  11. Numerical model SMODERP

    Science.gov (United States)

    Kavka, P.; Jeřábek, J.; Strouhal, L.

    2016-12-01

    The contribution presents the numerical model SMODERP, which is used for the calculation and prediction of surface runoff and soil erosion from agricultural land. The physically based model includes the processes of infiltration (Philip equation), surface runoff routing (kinematic wave based equation), surface retention, surface roughness and vegetation impact on runoff. The model is being developed at the Department of Irrigation, Drainage and Landscape Engineering, Civil Engineering Faculty, CTU in Prague. A 2D version of the model was introduced in recent years. The script uses ArcGIS system tools for data preparation. The physical relations are implemented through Python scripts. The main computing part is standalone, working on NumPy arrays. Flow direction is calculated by a steepest descent algorithm and by a multiple flow algorithm. Sheet flow is described by a modified kinematic wave equation. Parameters for five different soil textures were calibrated on a set of a hundred measurements performed with laboratory and field rainfall simulators. Spatially distributed models make it possible to estimate not only surface runoff but also flow in the rills. Development of the rills is based on critical shear stress and critical velocity. For modelling the rills a specific submodel was created. This submodel uses the Manning formula for flow estimation. Flow in ditches and streams is also computed. Numerical stability of the model is controlled by the Courant criterion. The spatial scale is fixed. The time step is dynamic and depends on the actual discharge. The model is used in the framework of the project "Variability of Short-term Precipitation and Runoff in Small Czech Drainage Basins and its Influence on Water Resources Management". The main goal of the project is to elaborate a methodology and an online utility for deriving short-term design precipitation series, which could be utilized by a broad community of scientists, state administration as well as design planners. The methodology will account for
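
    The dynamic time step mentioned above can be sketched as a Courant-criterion limit computed from the current maximum flow velocity and the cell size; the velocity, cell size and Courant number below are illustrative assumptions, not SMODERP's actual settings.

        def courant_time_step(max_velocity, cell_size, courant_number=0.5):
            """Largest stable time step (s) allowed by the Courant criterion
            dt <= C * dx / v for the current maximum flow velocity."""
            if max_velocity <= 0.0:
                return float("inf")   # no flow yet: no Courant restriction
            return courant_number * cell_size / max_velocity

        # Hypothetical situation: 2 m cells, current peak sheet-flow velocity 0.25 m/s
        print(courant_time_step(max_velocity=0.25, cell_size=2.0))   # -> 4.0 s

    As the simulated discharge (and hence velocity) grows, the admissible step shrinks, which is why the model recomputes it each step.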

  12. Mathematical and numerical modeling of early atherosclerotic lesions

    Directory of Open Access Journals (Sweden)

    Raoult Annie

    2010-12-01

    This article is devoted to the construction of a mathematical model describing the early formation of atherosclerotic lesions. The early stage of atherosclerosis is an inflammatory process that starts with the penetration of low density lipoproteins in the intima and with their oxidation. This phenomenon is closely linked to the local blood flow dynamics. Extending a previous work [5] that was mainly restricted to a one-dimensional setting, we couple a simple lesion growth model, relying on the biomolecular process that takes place in the intima, with blood flow dynamics and mass transfer. We perform numerical simulations on a two-dimensional geometry taken from [6,7] that mimics a carotid artery deformed by a perivascular cast, and we compare the numerical results with experimental data.

  13. Symbolic-Numeric Integration of the Dynamical Cosserat Equations

    KAUST Repository

    Lyakhov, Dmitry A.

    2017-08-29

    We devise a symbolic-numeric approach to the integration of the dynamical part of the Cosserat equations, a system of nonlinear partial differential equations describing the mechanical behavior of slender structures, like fibers and rods. This is based on our previous results on the construction of a closed form general solution to the kinematic part of the Cosserat system. Our approach combines methods of numerical exponential integration and symbolic integration of the intermediate system of nonlinear ordinary differential equations describing the dynamics of one of the arbitrary vector-functions in the general solution of the kinematic part in terms of the module of the twist vector-function. We present an experimental comparison with the well-established generalized α-method illustrating the computational efficiency of our approach for problems in structural mechanics.

  14. Symbolic-Numeric Integration of the Dynamical Cosserat Equations

    KAUST Repository

    Lyakhov, Dmitry A.; Gerdt, Vladimir P.; Weber, Andreas G.; Michels, Dominik L.

    2017-01-01

    We devise a symbolic-numeric approach to the integration of the dynamical part of the Cosserat equations, a system of nonlinear partial differential equations describing the mechanical behavior of slender structures, like fibers and rods. This is based on our previous results on the construction of a closed form general solution to the kinematic part of the Cosserat system. Our approach combines methods of numerical exponential integration and symbolic integration of the intermediate system of nonlinear ordinary differential equations describing the dynamics of one of the arbitrary vector-functions in the general solution of the kinematic part in terms of the module of the twist vector-function. We present an experimental comparison with the well-established generalized α-method illustrating the computational efficiency of our approach for problems in structural mechanics.

  15. Numerical estimation of aircrafts' unsteady lateral-directional stability derivatives

    Directory of Open Access Journals (Sweden)

    Maričić N.L.

    2006-01-01

    A technique for predicting steady and oscillatory aerodynamic loads on a general configuration has been developed. The prediction is based on the Doublet-Lattice Method, Slender Body Theory and the Method of Images. The chordwise and spanwise loading on lifting surfaces and the load distributions on longitudinal bodies (in the horizontal and vertical planes) are determined. The configuration may be composed of an assemblage of lifting surfaces (with control surfaces) and bodies (with circular cross sections and a longitudinal variation of radius). Loadings predicted by this method are used to calculate (estimate) steady and unsteady (dynamic) lateral-directional stability derivatives. A short outline of the methods used is given in [1], [2], [3], [4] and [5]. Applying the described methodology, the software DERIV was developed. The results obtained from DERIV are compared to NASTRAN examples HA21B and HA21D from [4]. In the first example (HA21B), the jet transport wing (BAH wing) is in steady roll and lateral stability derivatives are determined. In the second example (HA21D), lateral-directional stability derivatives are calculated for a forward-swept-wing (FSW) airplane in antisymmetric quasi-steady maneuvers. Acceptable agreement is achieved between the results from [4] and DERIV.

  16. Beach steepness effects on nonlinear infragravity-wave interactions : A numerical study

    NARCIS (Netherlands)

    De Bakker, A. T M; Tissier, M.F.S.; Ruessink, B. G.

    2016-01-01

    The numerical model SWASH is used to investigate nonlinear energy transfers between waves for a diverse set of beach profiles and wave conditions, with a specific focus on infragravity waves. We use bispectral analysis to study the nonlinear triad interactions, and estimate energy transfers to

  17. Evaluation of wave runup predictions from numerical and parametric models

    Science.gov (United States)

    Stockdon, Hilary F.; Thompson, David M.; Plant, Nathaniel G.; Long, Joseph W.

    2014-01-01

    Wave runup during storms is a primary driver of coastal evolution, including shoreline and dune erosion and barrier island overwash. Runup and its components, setup and swash, can be predicted from a parameterized model that was developed by comparing runup observations to offshore wave height, wave period, and local beach slope. Because observations during extreme storms are often unavailable, a numerical model is used to simulate the storm-driven runup to compare to the parameterized model and then develop an approach to improve the accuracy of the parameterization. Numerically simulated and parameterized runup were compared to observations to evaluate model accuracies. The analysis demonstrated that setup was accurately predicted by both the parameterized model and numerical simulations. Infragravity swash heights were most accurately predicted by the parameterized model. The numerical model suffered from bias and gain errors that depended on whether a one-dimensional or two-dimensional spatial domain was used. Nonetheless, all of the predictions were significantly correlated to the observations, implying that the systematic errors can be corrected. The numerical simulations did not resolve the incident-band swash motions, as expected, and the parameterized model performed best at predicting incident-band swash heights. An assimilated prediction using a weighted average of the parameterized model and the numerical simulations resulted in a reduction in prediction error variance. Finally, the numerical simulations were extended to include storm conditions that have not been previously observed. These results indicated that the parameterized predictions of setup may need modification for extreme conditions; numerical simulations can be used to extend the validity of the parameterized predictions of infragravity swash; and numerical simulations systematically underpredict incident swash, which is relatively unimportant under extreme conditions.
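
    The assimilated prediction described above can be sketched as an inverse-error-variance weighted average of the parameterized and numerically simulated runup components; the values and variances below are hypothetical and do not reproduce the calibration of the study.

        import numpy as np

        def assimilate(pred_param, pred_numeric, var_param, var_numeric):
            """Inverse-variance weighted average of two runup predictions.
            The error variances would come from comparisons with observations."""
            w_p = 1.0 / var_param
            w_n = 1.0 / var_numeric
            combined = (w_p * pred_param + w_n * pred_numeric) / (w_p + w_n)
            combined_var = 1.0 / (w_p + w_n)
            return combined, combined_var

        # Hypothetical infragravity swash predictions (m) and error variances
        runup, var = assimilate(pred_param=1.10, pred_numeric=0.85,
                                var_param=0.04, var_numeric=0.09)
        print(f"assimilated prediction: {runup:.2f} m (variance {var:.3f})")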

  18. A Numerical Approach to Solving an Inverse Heat Conduction Problem Using the Levenberg-Marquardt Algorithm

    Directory of Open Access Journals (Sweden)

    Tao Min

    2014-01-01

    This paper provides a numerical algorithm involving the combined use of the Levenberg-Marquardt algorithm and the Galerkin finite element method for estimating the diffusion coefficient in an inverse heat conduction problem (IHCP). In the present study, the functional form of the diffusion coefficient is unknown a priori. The unknown diffusion coefficient is approximated by a polynomial form and the present numerical algorithm is employed to find the solution. Numerical experiments are presented to show the efficiency of the proposed method.
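
    The outer estimation loop can be sketched with SciPy's Levenberg-Marquardt implementation fitting the polynomial coefficients of the unknown coefficient function. A trivial stand-in forward model is used below in place of the Galerkin finite element solver of the paper, and all numbers are illustrative.

        import numpy as np
        from scipy.optimize import least_squares

        # Sketch of the estimation loop only. In the paper the forward model is a
        # Galerkin finite element solution of the heat conduction problem; here a
        # simple stand-in maps the polynomial coefficients of the diffusion
        # coefficient a(x) = c0 + c1*x + c2*x**2 to synthetic observations.
        x_obs = np.linspace(0.0, 1.0, 25)

        def forward_model(coeffs, x):
            """Placeholder forward model (NOT the FEM solver of the article)."""
            a = np.polyval(coeffs[::-1], x)          # diffusion coefficient a(x)
            return np.exp(-a * x)                    # toy observable depending on a(x)

        true_coeffs = np.array([1.0, 0.5, -0.3])
        rng = np.random.default_rng(3)
        obs = forward_model(true_coeffs, x_obs) + 0.005 * rng.standard_normal(x_obs.size)

        residuals = lambda c: forward_model(c, x_obs) - obs
        fit = least_squares(residuals, x0=np.zeros(3), method="lm")   # Levenberg-Marquardt
        print("estimated polynomial coefficients:", fit.x)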

  19. Estimation of groundwater flow from temperature monitoring in a borehole heat exchanger during a thermal response test

    Science.gov (United States)

    Yoshioka, Mayumi; Takakura, Shinichi; Uchida, Youhei

    2018-05-01

    To estimate the groundwater flow around a borehole heat exchanger (BHE), the thermal properties of geological core samples were measured and a thermal response test (TRT) was performed in the Tsukuba upland, Japan. The thermal properties were measured at 57 points along a 50-m-long geological core, consisting predominantly of sand, silt, and clay, drilled near the BHE. In this TRT, the vertical temperature in the BHE was also monitored during and after the test. Results for the thermal properties of the core samples and from the monitoring indicated that groundwater flow enhanced heat transfer, especially at shallow depths. The groundwater velocities around the BHE were estimated using a two-dimensional numerical model together with the monitored temperature changes. According to the results, the estimated groundwater velocity was generally consistent with hydrogeological data from previous studies, except for the data collected at shallow depths, which consist of a clay layer. The reasons for this discrepancy at shallow depths were inferred to be preferential flow and the occurrence of vertical flow through the BHE grout, induced by the hydrogeological conditions.

  20. Neglect Of Parameter Estimation Uncertainty Can Significantly Overestimate Structural Reliability

    Directory of Open Access Journals (Sweden)

    Rózsás Árpád

    2015-12-01

    Parameter estimation uncertainty is often neglected in reliability studies, i.e., point estimates of distribution parameters are used for representative fractiles and in probabilistic models. A numerical example examines the effect of this uncertainty on structural reliability using Bayesian statistics. The study reveals that neglecting parameter estimation uncertainty might lead to an order-of-magnitude underestimation of the failure probability.
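
    The effect can be illustrated numerically: compare a failure probability computed from point estimates of the load distribution parameters with one that averages over parameter estimation uncertainty. The sketch below uses a bootstrap over a small hypothetical load sample as a simple stand-in for the Bayesian treatment of the paper; all numbers are illustrative.

        import numpy as np
        from scipy.stats import norm

        rng = np.random.default_rng(4)

        # Small sample of load observations; resistance assumed deterministic = 4.0
        load_sample = rng.normal(loc=1.0, scale=1.0, size=20)
        resistance = 4.0

        # (a) Point-estimate approach: plug in the sample mean and standard deviation
        mu_hat, sig_hat = load_sample.mean(), load_sample.std(ddof=1)
        pf_point = norm.sf(resistance, loc=mu_hat, scale=sig_hat)

        # (b) Account for parameter estimation uncertainty by bootstrap resampling
        pf_draws = []
        for _ in range(5000):
            bs = rng.choice(load_sample, size=load_sample.size, replace=True)
            pf_draws.append(norm.sf(resistance, loc=bs.mean(), scale=bs.std(ddof=1)))
        pf_predictive = np.mean(pf_draws)

        print("failure probability, point estimates        :", pf_point)
        print("failure probability, with param. uncertainty:", pf_predictive)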

  1. Sequential bayes estimation algorithm with cubic splines on uniform meshes

    International Nuclear Information System (INIS)

    Hossfeld, F.; Mika, K.; Plesser-Walk, E.

    1975-11-01

    After outlining the principles of some recent developments in parameter estimation, a sequential numerical algorithm for generalized curve-fitting applications is presented, combining results from statistical estimation concepts and spline analysis. Due to its recursive nature, the algorithm can be used most efficiently in online experimentation. Using computer-simulated and experimental data, the efficiency and the flexibility of this sequential estimation procedure are extensively demonstrated. (orig.) [de

  2. Intentional and Automatic Numerical Processing as Predictors of Mathematical Abilities in Primary School Children

    Directory of Open Access Journals (Sweden)

    Violeta ePina

    2015-03-01

    Previous studies have suggested that numerical processing relates to mathematical performance, but it seems that such a relationship is more evident for intentional than for automatic numerical processing. In the present study we assessed the relationship between the two types of numerical processing and specific mathematical abilities in a sample of 109 children in grades 1 to 6. Participants were tested on an ample range of mathematical tests and also performed both a numerical and a size comparison task. The results showed that numerical processing related to mathematical performance only when inhibitory control was involved in the comparison tasks. Concretely, we found that intentional numerical processing, as indexed by the numerical distance effect in the numerical comparison task, was related to mathematical reasoning skills only when the task-irrelevant dimension (the physical size) was incongruent; whereas automatic numerical processing, indexed by the congruency effect in the size comparison task, was related to mathematical calculation skills only when digits were separated by a small distance. The observed double dissociation highlights the relevance of both intentional and automatic numerical processing in mathematical skills, but only when inhibitory control is also involved.

  3. A numerical study on manoeuvrability of wind turbine installation vessel using OpenFOAM

    Directory of Open Access Journals (Sweden)

    Sungwook Lee

    2015-05-01

    In this study, a numerical prediction method for the manoeuvrability of a Wind Turbine Installation Vessel (WTIV) is presented. A Planar Motion Mechanism (PMM) captive test for the bare hull of the WTIV is carried out in the model basin and compared with numerical results from a RANS simulation based on the Open-source Field Operation And Manipulation (OpenFOAM) code to validate the developed method. The manoeuvrability of the WTIV with and without a skeg is investigated using the numerical approach along with the captive model test. In the numerical calculations, the dynamic stability index, which indicates the course-keeping ability, is evaluated and compared for three different hull configurations, i.e., the bare hull and two other hulls with a center skeg and a twin skeg. This paper shows that the numerical approach using RANS simulation can be readily applied to estimate the manoeuvrability of a WTIV at the initial design stage.

  4. Detailed numerical modeling of a linear parallel-plate Active Magnetic Regenerator

    DEFF Research Database (Denmark)

    Nielsen, Kaspar Kirstein; Bahl, Christian Robert Haffenden; Smith, Anders

    2009-01-01

    A numerical model simulating Active Magnetic Regeneration (AMR) is presented and compared to a selection of experiments. The model is an extension and re-implementation of a previous two-dimensional model. The new model is extended to 2.5D, meaning that parasitic thermal losses are included...

  5. Source Estimation for the Damped Wave Equation Using Modulating Functions Method: Application to the Estimation of the Cerebral Blood Flow

    KAUST Repository

    Asiri, Sharefa M.

    2017-10-19

    In this paper, a method based on modulating functions is proposed to estimate the Cerebral Blood Flow (CBF). The problem is written as an input estimation problem for a damped wave equation, which is used to model the spatiotemporal variations of blood mass density. The method is described and its performance is assessed through numerical simulations. The robustness of the method in the presence of noise is also studied.

  6. Numerical solution of an inverse 2D Cauchy problem connected with the Helmholtz equation

    International Nuclear Information System (INIS)

    Wei, T; Qin, H H; Shi, R

    2008-01-01

    In this paper, the Cauchy problem for the Helmholtz equation is investigated. By Green's formulation, the problem can be transformed into a moment problem. We then propose a numerical algorithm for obtaining an approximate solution to the Neumann data on the unspecified boundary. An error estimate and a convergence analysis are also given. Finally, we present numerical results for several examples and show the effectiveness of the proposed method

  7. Numerical Simulations of Settlement of Jet Grouting Columns

    Directory of Open Access Journals (Sweden)

    Juzwa Anna

    2016-03-01

    The paper presents a comparison of the results of numerical analyses of the interaction between a group of jet grouting columns and the subsoil. The analyses were conducted for a single column and for groups of three, seven and nine columns. The simulations are based on full-scale experimental research carried out by the authors. The final goal of the research is to estimate the influence of the interaction between columns working in a group.

  8. Prediction of RNA secondary structure using generalized centroid estimators.

    Science.gov (United States)

    Hamada, Michiaki; Kiryu, Hisanori; Sato, Kengo; Mituyama, Toutai; Asai, Kiyoshi

    2009-02-15

    Recent studies have shown that methods for predicting secondary structures of RNAs on the basis of posterior decoding of the base-pairing probabilities have an advantage in prediction accuracy over the conventionally used minimum free energy methods. However, there is room for improvement in the objective functions presented in previous studies, which are maximized in the posterior decoding with respect to the accuracy measures for secondary structures. We propose novel estimators which improve the accuracy of secondary structure prediction of RNAs. The proposed estimators maximize an objective function which is the weighted sum of the expected number of true positives and that of true negatives among the base pairs. The proposed estimators are also improved versions of the ones used in previous works, namely CONTRAfold for secondary structure prediction from a single RNA sequence and McCaskill-MEA for common secondary structure prediction from multiple alignments of RNA sequences. We clarify the relations between the proposed estimators and the estimators presented in previous works, and theoretically show that the previous estimators include additional unnecessary terms in the evaluation measures with respect to accuracy. Furthermore, computational experiments confirm the theoretical analysis by indicating improvement in the empirical accuracy. The proposed estimators represent extensions of the centroid estimators proposed in Ding et al. and Carvalho and Lawrence, and are applicable to a wide variety of problems in bioinformatics. Supporting information and the CentroidFold software are available online at: http://www.ncrna.org/software/centroidfold/.
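
    The gain function described above can be maximized with a Nussinov-style dynamic program over the base-pair probability matrix. The sketch below assumes the common reduction in which each candidate pair (i, j) is scored by (γ + 1)·p(i,j) − 1 (so a pair is worth including only when p(i,j) > 1/(γ + 1)); it returns only the optimal gain, omits the traceback, and is not the CentroidFold implementation.

    ```python
    import numpy as np

    def centroid_gain(P, gamma=1.0, min_loop=3):
        """Nussinov-style DP that maximizes the total pair score
        (gamma + 1) * P[i, j] - 1 over a nested structure (assumed
        reduction of the expected-TP/TN gain); returns the optimal gain."""
        n = P.shape[0]
        M = np.zeros((n, n))
        for span in range(min_loop + 1, n):
            for i in range(n - span):
                j = i + span
                best = max(M[i + 1, j], M[i, j - 1])      # i or j unpaired
                pair_score = (gamma + 1.0) * P[i, j] - 1.0
                if pair_score > 0:                        # pair i with j
                    best = max(best, M[i + 1, j - 1] + pair_score)
                for k in range(i + 1, j):                 # bifurcation
                    best = max(best, M[i, k] + M[k + 1, j])
                M[i, j] = best
        return M[0, n - 1]

    # Toy base-pair probability matrix (symmetric, random)
    rng = np.random.default_rng(0)
    P = rng.random((12, 12))
    P = np.triu(P, 1); P = P + P.T
    print(centroid_gain(P, gamma=2.0))
    ```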

  9. Numerical analysis

    CERN Document Server

    Rao, G Shanker

    2006-01-01

    About the Book: This book provides an introduction to Numerical Analysis for students of Mathematics and Engineering. The book is designed in accordance with the common core syllabus of Numerical Analysis of the Universities of Andhra Pradesh and also the syllabus prescribed in most Indian universities. Salient features: Approximate and Numerical Solutions of Algebraic and Transcendental Equations; Interpolation of Functions; Numerical Differentiation and Integration; and Numerical Solution of Ordinary Differential Equations. The last three chapters deal with Curve Fitting, Eigenvalues and Eigenvectors of a Matrix, and Regression Analysis. Each chapter is supplemented with a number of worked-out examples as well as a number of problems to be solved by the students. This helps in a better understanding of the subject. Contents: Errors; Solution of Algebraic and Transcendental Equations; Finite Differences; Interpolation with Equal Intervals; Interpolation with Unequal Int...

  10. Numerical experiment on variance biases and Monte Carlo neutronics analysis with thermal hydraulic feedback

    International Nuclear Information System (INIS)

    Shim, Hyung Jin; Han, Beom Seok; Kim, Chang Hyo

    2003-01-01

    The Monte Carlo (MC) power method based on a fixed number of fission sites at the beginning of each cycle is known to cause biases in the variances of the k-eigenvalue (keff) and fission reaction rate estimates. Because of the biases, the apparent variances of keff and the fission reaction rate estimates from a single MC run tend to be smaller or larger than the real variances of the corresponding quantities, depending on the degree of the inter-generational correlation of the sample. We demonstrate this through a numerical experiment involving 100 independent MC runs for the neutronics analysis of a 17 x 17 fuel assembly of a pressurized water reactor (PWR). We also demonstrate through the numerical experiment that Gelbard and Prael's batch method and Ueki et al.'s covariance estimation method enable one to estimate the approximate real variances of keff and the fission reaction rate estimates from a single MC run. We then show that the use of the approximate real variances from the two bias-predicting methods instead of the apparent variances provides an efficient MC power iteration scheme that is required in the MC neutronics analysis of a real system to determine the pin power distribution consistent with the thermal hydraulic (TH) conditions of individual pins of the system. (authors)
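
    As a minimal illustration of the batch idea mentioned above, the following sketch compares the apparent variance of the cycle-wise mean (which treats cycles as independent) with a batch-means estimate that absorbs inter-cycle correlation. The batch size and the synthetic AR(1) cycle data are assumptions for demonstration only, not values from the paper.

    ```python
    import numpy as np

    def apparent_and_batch_variance(keff_cycles, batch_size=10):
        """Variance of the mean assuming independent cycles ('apparent') vs.
        a batch-means estimate that absorbs inter-cycle correlation."""
        x = np.asarray(keff_cycles, dtype=float)
        n = x.size
        apparent = x.var(ddof=1) / n
        nb = n // batch_size
        batch_means = x[:nb * batch_size].reshape(nb, batch_size).mean(axis=1)
        batched = batch_means.var(ddof=1) / nb
        return apparent, batched

    # Synthetic AR(1)-correlated cycle estimates, only to exercise the code
    rng = np.random.default_rng(1)
    noise = rng.normal(scale=2e-4, size=500)
    for i in range(1, noise.size):
        noise[i] += 0.6 * noise[i - 1]
    keff = 1.0 + noise
    print(apparent_and_batch_variance(keff))
    ```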

  11. Unbiased estimators for spatial distribution functions of classical fluids

    Science.gov (United States)

    Adib, Artur B.; Jarzynski, Christopher

    2005-01-01

    We use a statistical-mechanical identity closely related to the familiar virial theorem, to derive unbiased estimators for spatial distribution functions of classical fluids. In particular, we obtain estimators for both the fluid density ρ(r) in the vicinity of a fixed solute and the pair correlation g(r) of a homogeneous classical fluid. We illustrate the utility of our estimators with numerical examples, which reveal advantages over traditional histogram-based methods of computing such distributions.

  12. Numerical Analysis and Experimental Verification of Stresses Building up in Microelectronics Packaging

    NARCIS (Netherlands)

    Rezaie Adli, A.R.

    2017-01-01

    This thesis comprises a thorough study of the microelectronics packaging process by means of various experimental and numerical methods to estimate the process induced residual stresses. The main objective of the packaging is to encapsulate the die, interconnections and the other exposed internal

  13. Computational Enhancements for Direct Numerical Simulations of Statistically Stationary Turbulent Premixed Flames

    KAUST Repository

    Mukhadiyev, Nurzhan

    2017-05-01

    Combustion at extreme conditions, such as a turbulent flame at high Karlovitz and Reynolds numbers, is still a vast and uncertain field for researchers. Direct numerical simulation of a turbulent flame is a superior tool to unravel detailed information that is not accessible to the most sophisticated state-of-the-art experiments. However, the computational cost of such simulations remains a challenge even for modern supercomputers, as the physical size, the level of turbulence intensity, and the chemical complexity of the problems continue to increase. As a result, there is a strong demand for computational cost reduction methods as well as for acceleration of existing methods. The main scope of this work was the development of computational and numerical tools for high-fidelity direct numerical simulations of premixed planar flames interacting with turbulence. The first part of this work was the development of the KAUST Adaptive Reacting Flow Solver (KARFS). KARFS is a high-order compressible reacting flow solver using detailed chemical kinetics mechanisms; it is capable of running on various types of heterogeneous computational architectures. In this work, it was shown that KARFS is capable of running efficiently on both CPUs and GPUs. The second part of this work was numerical tools for direct numerical simulations of planar premixed flames, such as linear turbulence forcing and dynamic inlet control. Previous DNS of premixed turbulent flames injected velocity fluctuations at an inlet. Turbulence injected at the inlet decayed significantly before reaching the flame, which made it necessary to inject stronger fluctuations than actually needed. A solution to this issue is to maintain the turbulence strength on the way to the flame using turbulence forcing. Therefore, linear turbulence forcing was implemented into KARFS to enhance the turbulence intensity. The linear turbulence forcing developed previously by other groups was corrected with a net added momentum removal mechanism to prevent mean
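
    A schematic reading of the linear forcing mentioned above is a source term f = A·u′ added to the momentum equation, with the coefficient chosen to nudge the turbulent kinetic energy toward a target and the mean of the forcing subtracted so that no net momentum is added. The single-step sketch below on a periodic box is only an illustration of that idea, not KARFS code; the coefficient formula and all numbers are assumptions made for demonstration.

    ```python
    import numpy as np

    def linear_forcing(u, k_target, dt):
        """Return f = A * u' with A chosen to nudge the resolved turbulent
        kinetic energy toward k_target; the mean of f is removed so the
        forcing adds no net momentum (schematic, assumed form)."""
        u_mean = u.mean(axis=(1, 2, 3), keepdims=True)
        u_fluct = u - u_mean
        k = 0.5 * np.mean(np.sum(u_fluct**2, axis=0))      # current TKE
        A = (k_target - k) / (2.0 * dt * max(k, 1e-30))
        f = A * u_fluct
        f -= f.mean(axis=(1, 2, 3), keepdims=True)          # net momentum removal
        return f

    # Toy 3-component velocity field on a 16^3 periodic box
    rng = np.random.default_rng(2)
    u = rng.normal(size=(3, 16, 16, 16))
    f = linear_forcing(u, k_target=2.0, dt=1e-3)
    print(f.shape, float(np.abs(f.mean(axis=(1, 2, 3))).max()))
    ```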

  14. Numerical simulation of spin motion in circular accelerators using spinor formulation

    International Nuclear Information System (INIS)

    Nghiem, P.; Tkatchenko, A.

    1992-07-01

    A simple method based on the spinor algebra formalism is presented for tracking the spin motion in circular accelerators. Using an analytical expression for the one-turn transformation matrix, including the effects of perturbing fields or of Siberian snakes, a simple and very fast numerical code has been written for studying spin motion in various circumstances. In particular, the effects of synchrotron oscillations on the final polarization after an isolated resonance crossing are simulated. The results of these calculations agree very well with those obtained previously from analytical approaches or from other numerical-simulation programs. (author) 8 refs.; 14 figs

  15. Numerical Analysis on the Free Fall Motion of the Control Rod Assembly for the Sodium Cooled Fast Reactor

    Energy Technology Data Exchange (ETDEWEB)

    Oh, Se-Hong; Choi, Choengryul; Son, Sung-Man [ELSOLTEC, Yongin (Korea, Republic of); Kim, Jae-Yong; Yoon, Kyung-Ho [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2015-10-15

    On receiving the scram signal, the control rod assemblies are released and fall into the reactor core under their own weight. The drop time and falling velocity of the control rod assembly must therefore be estimated for the safety evaluation. However, because of its complex shape, it is difficult to estimate the drop time by theoretical methods. In this study, numerical analysis has been carried out to estimate the drop time and falling velocity of the control rod assembly for a sodium-cooled fast reactor and to provide the underlying data for design optimization. Before performing the numerical analysis for the control rod assembly, a sphere-drop experiment was carried out to verify the CFD methodology; the numerical result for this verification case is almost the same as the experimental result. The falling velocity and drag force increase rapidly at the beginning and then approach a steady state. When the piston head of the control rod assembly is inserted into the damper, the drag force increases instantaneously and the falling velocity decreases quickly. The falling velocity is reduced by about 14% by the damper. The total drop time of the control rod assembly is about 1.47 s. In the next study, an experiment on the control rod assembly will be carried out and its results will be compared with the CFD analysis results.
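
    The drop-time problem can be caricatured by a one-degree-of-freedom balance between weight, buoyancy, and quadratic hydraulic drag; the explicit Euler sketch below shows how such a model produces a drop-time estimate. The mass, area, drag coefficient, and fluid density are placeholder values, and the real assembly geometry (and the damper phase) requires the CFD analysis described in the record.

    ```python
    def drop_time(mass, area, drop_height, rho_fluid=850.0, cd=1.2,
                  rho_steel=7800.0, g=9.81, dt=1e-4):
        """Integrate m dv/dt = m g - buoyancy - 0.5 rho Cd A v^2 with explicit
        Euler until the assembly has fallen drop_height (illustrative only)."""
        buoyancy = rho_fluid * (mass / rho_steel) * g
        v = z = t = 0.0
        while z < drop_height:
            drag = 0.5 * rho_fluid * cd * area * v * v
            v += (mass * g - buoyancy - drag) / mass * dt
            z += v * dt
            t += dt
        return t, v

    # Placeholder values, not the real assembly data
    t, v = drop_time(mass=60.0, area=0.01, drop_height=1.0)
    print(f"drop time ~ {t:.2f} s, velocity at bottom ~ {v:.2f} m/s")
    ```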

  16. Numerical modeling and design of a disk-type rotating permanent magnet induction pump

    Energy Technology Data Exchange (ETDEWEB)

    Koroteeva, E., E-mail: koroteeva@physics.msu.ru [Institute of Physics of University of Latvia, Salaspils 2169 (Latvia); Lomonosov Moscow State University, Moscow 119991 (Russian Federation); Ščepanskis, M. [Laboratory for Mathematical Modelling of Environmental and Technological Processes, University of Latvia, Rīga 1002 (Latvia); Bucenieks, I.; Platacis, E. [Institute of Physics of University of Latvia, Salaspils 2169 (Latvia)

    2016-05-15

    Highlights: • The design and performance of a disk-type induction pump are described. • A 3D numerical model based on an iterative coupling between EM and hydrodynamic solvers is developed. • The model is verified by comparing with the experiments in a Pb-Bi loop facility. • The suggestions are given to estimate the pump performance in a Pb-Li loop at high pressures. - Abstract: Electromagnetic induction pumps with rotating permanent magnets appear to be the most promising devices to transport liquid metals in high-temperature applications. Here we present a numerical methodology to simulate the operation of one particular modification of these types of pumps: a disk-type induction pump. The numerical model allows for the calculation and analysis of the flow parameters, including the pressure–flow rate characteristics of the pump. The simulations are based on an iterative fully coupled scheme for electromagnetic and hydrodynamic solvers. The developed model is verified by comparing with experimental data obtained using a Pb-Bi loop test facility, for pressures up to 4 bar and flow rates up to 9 kg/s. The verified model is then expanded to higher pressures, beyond the limits of the experimental loop. Based on the numerical simulations, suggestions are given to extrapolate experimental data to higher (industrially important) pressure ranges. Using the numerical model and analytical estimation, the pump performance for the Pb-Li loop is also examined, and the ability of the designed pump to develop pressure heads over 6 bar and to provide flow rates over 15 kg/s is shown.

  17. The oldest magnetic record in our solar system identified using nanometric imaging and numerical modeling.

    Science.gov (United States)

    Shah, Jay; Williams, Wyn; Almeida, Trevor P; Nagy, Lesleis; Muxworthy, Adrian R; Kovács, András; Valdez-Grijalva, Miguel A; Fabian, Karl; Russell, Sara S; Genge, Matthew J; Dunin-Borkowski, Rafal E

    2018-03-21

    Recordings of magnetic fields, thought to be crucial to our solar system's rapid accretion, are potentially retained in unaltered nanometric low-Ni kamacite (~ metallic Fe) grains encased within dusty olivine crystals, found in the chondrules of unequilibrated chondrites. However, most of these kamacite grains are magnetically non-uniform, so their ability to retain four-billion-year-old magnetic recordings cannot be estimated by previous theories, which assume only uniform magnetization. Here, we demonstrate that non-uniformly magnetized nanometric kamacite grains are stable over solar system timescales and likely the primary carrier of remanence in dusty olivine. By performing in-situ temperature-dependent nanometric magnetic measurements using off-axis electron holography, we demonstrate the thermal stability of multi-vortex kamacite grains from the chondritic Bishunpur meteorite. Combined with numerical micromagnetic modeling, we determine the stability of the magnetization of these grains. Our study shows that dusty olivine kamacite grains are capable of retaining magnetic recordings from the accreting solar system.

  18. The numerical solution of ICRF fields in axisymmetric mirrors

    International Nuclear Information System (INIS)

    Phillips, M.W.; Todd, A.M.M.

    1986-01-01

    The numerical methods of a code called GARFIELD (Grumman Aerospace RF fIELD code), designed to calculate the three-dimensional structure of ICRF fields in axisymmetric mirrors, are presented. The code solves the electromagnetic wave equation for the electric field using a cold plasma dispersion relation with a small collision term to simulate absorption. The full wave solution including E·B is computed. The fields are Fourier analyzed in the poloidal direction and solved on a grid in the axial and radial directions. A two-dimensional equilibrium can be used as the source of equilibrium data. This allows us to extend previous studies of ICRF wave propagation and absorption in mirrors to include the effect of axial variation of the magnetic field and density. (orig.)

  19. Estimation of delays and other parameters in nonlinear functional differential equations

    Science.gov (United States)

    Banks, H. T.; Lamm, P. K. D.

    1983-01-01

    A spline-based approximation scheme for nonlinear nonautonomous delay differential equations is discussed. Convergence results (using dissipative type estimates on the underlying nonlinear operators) are given in the context of parameter estimation problems which include estimation of multiple delays and initial data as well as the usual coefficient-type parameters. A brief summary of some of the related numerical findings is also given.

  20. Combustion Behaviour of Pulverised Wood - Numerical and Experimental Studies. Part 1 Numerical Study

    Energy Technology Data Exchange (ETDEWEB)

    Elfasakhany, A.; Xue-Song Bai [Lund Inst. of Tech. (Sweden). Dept. of Heat and Power Engineering

    2002-12-01

    This report describes a theoretical/numerical investigation of the particle motion and the particle drying, pyrolysis, and oxidation of volatiles and char in a pulverised biofuel (wood) flame. This work, along with the experimental measurement of a pulverised wood flame in a vertical furnace at TPS, is supported by the Swedish Energy Agency, STEM. The fundamental combustion process of a pulverised wood flame with a determined size distribution and anisotropy character is studied. Comprehensive submodels are studied and some models not available in the literature are developed. The submodels are integrated into a CFD code previously developed at LTH. The numerical code is used to simulate the experimental flame carried out at TPS (as sub-task 2 within the project). The sub-models describe the drying, devolatilization and char formation of wood particles, and the oxidation reactions of char and the gas-phase volatiles. At the present stage, attention is focused on the understanding and modelling of non-spherical particle dynamics and the drying, pyrolysis, and oxidation of volatiles and char. Validation of the sub-models against the experimental data is presented and discussed in this study. The influence of different factors on the pulverised wood flame in the TPS vertical furnace is investigated. This includes the shape of the particles, the effect of volatile release, as well as the effect of particle orientation on the motion of the particles. The effect of particle size on the flame structure (distribution of species and temperature along the axis of the furnace) is also studied. The numerical simulation is in close agreement with the TPS experimental data in the concentrations of the species O2 and CO2 as well as temperature. Some discrepancy between the model simulations and measurements is observed, which suggests that further improvement in our understanding and modelling of the pulverised wood flame is needed.

  1. Tensor estimation for double-pulsed diffusional kurtosis imaging.

    Science.gov (United States)

    Shaw, Calvin B; Hui, Edward S; Helpern, Joseph A; Jensen, Jens H

    2017-07-01

    Double-pulsed diffusional kurtosis imaging (DP-DKI) represents the double diffusion encoding (DDE) MRI signal in terms of six-dimensional (6D) diffusion and kurtosis tensors. Here a method for estimating these tensors from experimental data is described. A standard numerical algorithm for tensor estimation from conventional (i.e. single diffusion encoding) diffusional kurtosis imaging (DKI) data is generalized to DP-DKI. This algorithm is based on a weighted least squares (WLS) fit of the signal model to the data combined with constraints designed to minimize unphysical parameter estimates. The numerical algorithm then takes the form of a quadratic programming problem. The principal change required to adapt the conventional DKI fitting algorithm to DP-DKI is replacing the three-dimensional diffusion and kurtosis tensors with the 6D tensors needed for DP-DKI. In this way, the 6D diffusion and kurtosis tensors for DP-DKI can be conveniently estimated from DDE data by using constrained WLS, providing a practical means for condensing DDE measurements into well-defined mathematical constructs that may be useful for interpreting and applying DDE MRI. Data from healthy volunteers for brain are used to demonstrate the DP-DKI tensor estimation algorithm. In particular, representative parametric maps of selected tensor-derived rotational invariants are presented. Copyright © 2017 John Wiley & Sons, Ltd.
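
    The core step of the fitting procedure described above is a weighted least squares fit of a linearized signal model; the generic closed-form step is sketched below. The construction of the 6D design matrix and the physical constraints (which turn the fit into a quadratic program) are omitted, so this shows only the unconstrained core on synthetic data.

    ```python
    import numpy as np

    def weighted_least_squares(X, y, w):
        """Solve min_b (X b - y)^T W (X b - y) with W = diag(w); in DKI-type
        fits y holds log-signals and X the acquisition design (assumption)."""
        XtW = X.T * w                      # equivalent to X.T @ diag(w)
        return np.linalg.solve(XtW @ X, XtW @ y)

    # Tiny synthetic check
    rng = np.random.default_rng(3)
    X = rng.normal(size=(50, 4))
    beta_true = np.array([1.0, -0.5, 0.2, 0.8])
    y = X @ beta_true + rng.normal(scale=0.05, size=50)
    print(weighted_least_squares(X, y, w=np.ones(50)))
    ```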

  2. Line impedance estimation using model based identification technique

    DEFF Research Database (Denmark)

    Ciobotaru, Mihai; Agelidis, Vassilios; Teodorescu, Remus

    2011-01-01

    The estimation of the line impedance can be used by the control of numerous grid-connected systems, such as active filters, islanding detection techniques, non-linear current controllers, and detection of the on/off grid operation mode. Therefore, estimating the line impedance can add extra functions to the operation of the grid-connected power converters. This paper describes a quasi-passive method for estimating the line impedance of the distribution electricity network. The method uses a model based identification technique to obtain the resistive and inductive parts of the line impedance. The quasi
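
    One simple reading of a quasi-passive, model-based estimate is a fit of a series R-L grid model to two steady-state operating points of the converter, assuming the grid EMF does not change between them; the sketch below does exactly that on synthetic phasors and is not the authors' algorithm.

    ```python
    import numpy as np

    def line_impedance(v1, i1, v2, i2, omega=2 * np.pi * 50):
        """Estimate series R, L from two steady-state operating points
        (phasors), assuming the grid EMF is unchanged between them and the
        current i is the converter current injected toward the grid."""
        z = (v1 - v2) / (i1 - i2)
        return z.real, z.imag / omega

    # Synthetic check with R = 0.5 ohm, L = 1 mH behind a 230 V EMF
    omega = 2 * np.pi * 50
    Z = 0.5 + 1j * omega * 1e-3
    e = 230.0 + 0j
    i1, i2 = 10.0 + 0j, 12.0 - 3j
    v1, v2 = e + Z * i1, e + Z * i2      # terminal voltages at the two points
    print(line_impedance(v1, i1, v2, i2))
    ```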

  3. A delta-rule model of numerical and non-numerical order processing.

    Science.gov (United States)

    Verguts, Tom; Van Opstal, Filip

    2014-06-01

    Numerical and non-numerical order processing share empirical characteristics (distance effect and semantic congruity), but there are also important differences (in size effect and end effect). At the same time, models and theories of numerical and non-numerical order processing developed largely separately. Currently, we combine insights from 2 earlier models to integrate them in a common framework. We argue that the same learning principle underlies numerical and non-numerical orders, but that environmental features determine the empirical differences. Implications for current theories on order processing are pointed out. PsycINFO Database Record (c) 2014 APA, all rights reserved.
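
    The delta rule named in the title is the classic error-correction update Δw = α(t − y)x. The toy sketch below applies it to one-hot item codes mapped to ordinal targets purely to illustrate the learning principle; it is not the specific model of the paper.

    ```python
    import numpy as np

    def delta_rule_train(X, targets, lr=0.1, epochs=200, seed=0):
        """Classic delta rule: w <- w + lr * (target - prediction) * input."""
        rng = np.random.default_rng(seed)
        w = rng.normal(scale=0.1, size=X.shape[1])
        for _ in range(epochs):
            for x, t in zip(X, targets):
                w += lr * (t - w @ x) * x
        return w

    # One-hot coded items 1..5 mapped to ordinal targets 1..5
    X = np.eye(5)
    targets = np.arange(1, 6, dtype=float)
    print(np.round(X @ delta_rule_train(X, targets), 2))
    ```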

  4. GIS-based two-dimensional numerical simulation of rainfall-induced debris flow

    Directory of Open Access Journals (Sweden)

    C. Wang

    2008-02-01

    This paper presents a numerical method to simulate the propagation and deposition of debris flow across three-dimensional complex terrain. A depth-averaged two-dimensional numerical model is developed, in which the debris and water mixture is assumed to be a continuous, incompressible, unsteady flow. The model is based on the continuity equations and the Navier-Stokes equations. Raster grid networks of a digital elevation model in GIS provide a uniform grid system to describe the complex topography. As the raster grid can be used as the finite difference mesh, the continuity and momentum equations are solved numerically using the finite difference method. The numerical model is applied to simulate the rainfall-induced debris flow that occurred on 20 July 2003 in Minamata City, southern Kyushu, Japan. The simulation reproduces the propagation and deposition, and the results are in good agreement with the field investigation. The synthesis of the numerical method and GIS makes it possible to solve debris flow over realistic terrain and can be used to estimate the flow range and to define potentially hazardous areas for homes and road sections.

  5. Analytical and Numerical Studies of Several Fluid Mechanical Problems

    Science.gov (United States)

    Kong, D. L.

    2014-03-01

    In this thesis, three parts, each with several chapters, are respectively devoted to hydrostatic, viscous, and inertial fluids theories and applications. Involved topics include planetary, biological fluid systems, and high performance computing technology. In the hydrostatics part, the classical Maclaurin spheroids theory is generalized, for the first time, to a more realistic multi-layer model, establishing geometries of both the outer surface and the interfaces. For one of its astrophysical applications, the theory explicitly predicts physical shapes of surface and core-mantle-boundary for layered terrestrial planets, which enables the studies of some gravity problems, and the direct numerical simulations of dynamo flows in rotating planetary cores. As another application of the figure theory, the zonal flow in the deep atmosphere of Jupiter is investigated for a better understanding of the Jovian gravity field. An upper bound of gravity field distortions, especially in higher-order zonal gravitational coefficients, induced by deep zonal winds is estimated firstly. The oblate spheroidal shape of an undistorted Jupiter resulting from its fast solid body rotation is fully taken into account, which marks the most significant improvement from previous approximation based Jovian wind theories. High viscosity flows, for example Stokes flows, occur in a lot of processes involving low-speed motions in fluids. Microorganism swimming is such a typical case. A fully three dimensional analytic solution of incompressible Stokes equation is derived in the exterior domain of an arbitrarily translating and rotating prolate spheroid, which models a large family of microorganisms such as cocci bacteria. The solution is then applied to the magnetotactic bacteria swimming problem, and good consistency has been found between theoretical predictions and laboratory observations of the moving patterns of such bacteria under magnetic fields. In the analysis of dynamics of planetary

  6. Steady-state transport equation resolution by particle methods, and numerical results

    International Nuclear Information System (INIS)

    Mercier, B.

    1985-10-01

    A method to solve the steady-state transport equation is given. The principles of the method are presented. The method is studied in two different cases; estimates given by the theory are compared to numerical results. Results obtained in 1-D (spherical geometry) and in 2-D (axisymmetric geometry) are given.

  7. Combining four Monte Carlo estimators for radiation momentum deposition

    International Nuclear Information System (INIS)

    Hykes, Joshua M.; Urbatsch, Todd J.

    2011-01-01

    Using four distinct Monte Carlo estimators for momentum deposition - analog, absorption, collision, and track-length estimators - we compute a combined estimator. In the wide range of problems tested, the combined estimator always has a figure of merit (FOM) equal to or better than the other estimators. In some instances the FOM of the combined estimator is only a few percent higher than the FOM of the best solo estimator, the track-length estimator, while in one instance it is better by a factor of 2.5. Over the majority of configurations, the combined estimator's FOM is 10 - 20% greater than any of the solo estimators' FOM. The numerical results show that the track-length estimator is the most important term in computing the combined estimator, followed far behind by the analog estimator. The absorption and collision estimators make negligible contributions. (author)
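
    The abstract does not spell out the combination rule; a standard way to combine independent unbiased estimators is inverse-variance weighting, and the figure of merit quoted for Monte Carlo work is conventionally FOM = 1/(σ²·T). The sketch below illustrates both under an independence assumption, which is a simplification of what the paper actually does, and the numbers are invented.

    ```python
    import numpy as np

    def combine_inverse_variance(means, variances):
        """Minimum-variance combination of independent unbiased estimators."""
        v = np.asarray(variances, dtype=float)
        w = (1.0 / v) / np.sum(1.0 / v)
        return float(np.dot(w, means)), 1.0 / float(np.sum(1.0 / v))

    def figure_of_merit(variance, cpu_time):
        """Conventional Monte Carlo figure of merit: FOM = 1 / (sigma^2 * T)."""
        return 1.0 / (variance * cpu_time)

    # Invented numbers for analog, absorption, collision, track-length estimates
    means = [1.02, 0.99, 1.01, 1.00]
    variances = [4e-4, 9e-4, 6e-4, 1e-4]
    m, var = combine_inverse_variance(means, variances)
    print(m, var, figure_of_merit(var, cpu_time=120.0))
    ```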

  8. Numerical Investigation of Mixing Characteristics in Cavity Flow at Various Aspect Ratios

    Energy Technology Data Exchange (ETDEWEB)

    Shin, Myung Seob [Dongyang Mirae University, Seoul (Korea, Republic of); Yang, Seung Deok; Yoon, Joon Yong [Hanyang University, Seoul (Korea, Republic of)

    2015-01-15

    This study numerically examined the mixing characteristics of rectangular cavity flows by using the hybrid lattice Boltzmann method (HLBM) applied to the finite difference method (FDM). Multi-relaxation time was used along with a passive scalar method which assumes that two substances have the same mass and that there is no interaction. First, we studied numerical results such as the stream function, position of vortices, and velocity profile for a square cavity and rectangular cavity with an aspect ratio of 2. The data were compared with previous numerical results that have been proven to be reliable. We also studied the mixing characteristics of a rectangular cavity flow such as the concentration profile and average Sherwood number at various Pe numbers and aspect ratios.

  9. Interest of numerical dosimetry in radiation protection: means of substitution or measurement consolidation?

    International Nuclear Information System (INIS)

    Lahaye, T.; Chau, Q.; Ferragut, A.; Gillot, J.Y.

    2003-01-01

    The use of calculation codes makes it possible to reduce costs and lead times. These codes provide operators with elements to reinforce their projected dosimetry. In cases of accidental overexposure, numerical dosimetry complements clinical and biological investigations to give as precise an estimate as possible of the received dose. For particular situations where no suitable instrumentation exists, numerical dosimetry can substitute for the conventional techniques used in regulatory dosimetry (project for aviation personnel). (N.C.)

  10. Modeling and numerical simulation of multi-component flow in porous media

    International Nuclear Information System (INIS)

    Saad, B.

    2011-01-01

    This work deals with the modelling and numerical simulation of two-phase multi-component flow in porous media. The study is divided into two parts. First, we study and prove the mathematical existence, in a weak sense, of two degenerate parabolic systems modelling two-phase (liquid and gas), two-component (water and hydrogen) flow in porous media. In the first model, we assume that there is a local thermodynamic equilibrium between both phases of hydrogen by using Henry's law. The second model consists of a relaxation of the previous model: the kinetics of the mass exchange between dissolved hydrogen and hydrogen in the gas phase is no longer instantaneous. The second part is devoted to the numerical analysis of those models. Firstly, we propose a numerical scheme to compare numerical solutions obtained with the first model to numerical solutions obtained with the second model when the characteristic time to recover the thermodynamic equilibrium goes to zero. Secondly, we present a finite volume scheme with a phase-by-phase upstream weighting scheme without simplifying assumptions on the state law of gas densities. We also validate this scheme on 2D test cases. (author)

  11. Sensitivity analysis of numerical results of one- and two-dimensional advection-diffusion problems

    International Nuclear Information System (INIS)

    Motoyama, Yasunori; Tanaka, Nobuatsu

    2005-01-01

    Numerical simulation has been playing an increasingly important role in the fields of science and engineering. However, every numerical result contains errors, such as modeling, truncation, and computing errors, and the magnitude of the errors quantitatively contained in the results is unknown. This situation forces large design margins in analysis-based design and prevents further cost reduction through design optimization. To overcome this situation, we developed a new method to numerically analyze the quantitative error of a numerical solution by using the sensitivity analysis method and the modified equation approach. If a reference case with typical parameters is calculated once by this method, then no additional calculation is required to estimate the results for other numerical parameters, such as those with higher resolutions. Furthermore, we can predict the exact solution from the sensitivity analysis results and can quantitatively evaluate the error of numerical solutions. Since the method incorporates the features of the conventional sensitivity analysis method, it can evaluate the effect of the modeling error as well as the truncation error. In this study, we confirm the effectiveness of the method through some numerical benchmark problems of one- and two-dimensional advection-diffusion problems. (author)

  12. Numerical Modeling of a Wave Energy Point Absorber

    DEFF Research Database (Denmark)

    Hernandez, Lorenzo Banos; Frigaard, Peter; Kirkegaard, Poul Henning

    2009-01-01

    The present study deals with numerical modelling of the Wave Star Energy (WSE) device. Linear potential theory is applied via a BEM code to the wave hydrodynamics exciting the floaters. Time and frequency domain solutions of the floater response are determined for regular and irregular seas. Furthermore, these results are used to estimate the power and the energy absorbed by a single oscillating floater. Finally, a latching control strategy is analysed in open-loop configuration for energy maximization.

  13. Assessment of a soil moisture retrieval with numerical weather prediction model temperature

    Science.gov (United States)

    The effect of using a Numerical Weather Prediction (NWP) soil temperature product, instead of estimates provided by concurrent 37 GHz data, on the satellite-based passive microwave retrieval of soil moisture was evaluated. This was prompted by the change in system configuration of preceding mult...

  14. Playing Linear Numerical Board Games Promotes Low-Income Children's Numerical Development

    Science.gov (United States)

    Siegler, Robert S.; Ramani, Geetha B.

    2008-01-01

    The numerical knowledge of children from low-income backgrounds trails behind that of peers from middle-income backgrounds even before the children enter school. This gap may reflect differing prior experience with informal numerical activities, such as numerical board games. Experiment 1 indicated that the numerical magnitude knowledge of…

  15. Determining ecoregional numeric nutrient criteria by stressor-response models in Yungui ecoregion lakes, China.

    Science.gov (United States)

    Huo, Shouliang; Ma, Chunzi; Xi, Beidou; Tong, Zhonghua; He, Zhuoshi; Su, Jing; Wu, Fengchang

    2014-01-01

    The importance of developing numeric nutrient criteria has been recognized to protect the designated uses of water bodies from nutrient enrichment that is associated with broadly occurring levels of nitrogen/phosphorus pollution. The identification and estimation of stressor-response models in aquatic ecosystems has been shown to be useful in the determination of nutrient criteria. In this study, three methods based on stressor-response relationships were applied to determine nutrient criteria for Yungui ecoregion lakes with respect to total phosphorus (TP), total nitrogen (TN), and planktonic chlorophyll a (Chl a). Simple linear regression (SLR) models were established to provide an estimate of the relationship between a response variable and a stressor. Multiple linear regressions were used to simultaneously estimate the effect of TP and TN on Chl a. A morphoedaphic index (MEI) was applied to derive nutrient criteria using data from Yungui ecoregion lakes, which were considered as areas with less anthropogenic influences. Nutrient criteria, as determined by these three methods, showed broad agreement for all parameters. The ranges of numeric nutrient criteria for Yungui ecoregion lakes were determined as follows: TP 0.008-0.010 mg/L and TN 0.140-0.178 mg/L. The stressor-response analysis described will be of benefit to support countries in their numeric criteria development programs and to further the goal of reducing nitrogen/phosphorus pollution in China.
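
    A stressor-response criterion of the kind described can be obtained by regressing log Chl a on log TP and inverting the fitted line at a target Chl a level. The sketch below uses synthetic lake data and an arbitrary 2 µg/L target, so the resulting number has nothing to do with the Yungui criteria reported above.

    ```python
    import numpy as np

    def tp_criterion_from_slr(tp, chla, chla_target):
        """Fit log10(Chl a) = a + b*log10(TP) and invert at chla_target."""
        b, a = np.polyfit(np.log10(tp), np.log10(chla), 1)   # slope, intercept
        return 10.0 ** ((np.log10(chla_target) - a) / b)

    # Synthetic lake survey (TP in mg/L, Chl a in ug/L) and an arbitrary target
    rng = np.random.default_rng(4)
    tp = 10 ** rng.uniform(-2.2, -0.5, size=60)
    chla = 10 ** (1.4 + 0.9 * np.log10(tp) + rng.normal(scale=0.15, size=60))
    print(tp_criterion_from_slr(tp, chla, chla_target=2.0))
    ```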

  16. Modulating Function-Based Method for Parameter and Source Estimation of Partial Differential Equations

    KAUST Repository

    Asiri, Sharefa M.

    2017-10-08

    Partial Differential Equations (PDEs) are commonly used to model complex systems that arise for example in biology, engineering, chemistry, and elsewhere. The parameters (or coefficients) and the source of PDE models are often unknown and are estimated from available measurements. Despite its importance, solving the estimation problem is mathematically and numerically challenging, especially when the measurements are corrupted by noise, which is often the case. Various methods have been proposed to solve estimation problems in PDEs which can be classified into optimization methods and recursive methods. The optimization methods are usually computationally heavy, especially when the number of unknowns is large. In addition, they are sensitive to the initial guess and stop condition, and they suffer from the lack of robustness to noise. Recursive methods, such as observer-based approaches, are limited by their dependence on some structural properties such as observability and identifiability which might be lost when approximating the PDE numerically. Moreover, most of these methods provide asymptotic estimates which might not be useful for control applications for example. An alternative non-asymptotic approach with less computational burden has been proposed in engineering fields based on the so-called modulating functions. In this dissertation, we propose to mathematically and numerically analyze the modulating functions based approaches. We also propose to extend these approaches to different situations. The contributions of this thesis are as follows. (i) Provide a mathematical analysis of the modulating function-based method (MFBM) which includes: its well-posedness, statistical properties, and estimation errors. (ii) Provide a numerical analysis of the MFBM through some estimation problems, and study the sensitivity of the method to the modulating functions' parameters. (iii) Propose an effective algorithm for selecting the method's design parameters

  17. Parameter Estimation of Partial Differential Equation Models

    KAUST Repository

    Xun, Xiaolei

    2013-09-01

    Partial differential equation (PDE) models are commonly used to model complex dynamic systems in applied sciences such as biology and finance. The forms of these PDE models are usually proposed by experts based on their prior knowledge and understanding of the dynamic system. Parameters in PDE models often have interesting scientific interpretations, but their values are often unknown and need to be estimated from the measurements of the dynamic system in the presence of measurement errors. Most PDEs used in practice have no analytic solutions, and can only be solved with numerical methods. Currently, methods for estimating PDE parameters require repeatedly solving PDEs numerically under thousands of candidate parameter values, and thus the computational load is high. In this article, we propose two methods to estimate parameters in PDE models: a parameter cascading method and a Bayesian approach. In both methods, the underlying dynamic process modeled with the PDE model is represented via basis function expansion. For the parameter cascading method, we develop two nested levels of optimization to estimate the PDE parameters. For the Bayesian method, we develop a joint model for data and the PDE and develop a novel hierarchical model allowing us to employ Markov chain Monte Carlo (MCMC) techniques to make posterior inference. Simulation studies show that the Bayesian method and parameter cascading method are comparable, and both outperform other available methods in terms of estimation accuracy. The two methods are demonstrated by estimating parameters in a PDE model from long-range infrared light detection and ranging data. Supplementary materials for this article are available online. © 2013 American Statistical Association.

  18. Various types of numerical schemes for the one-dimensional spherical geometry transport equation

    International Nuclear Information System (INIS)

    Jaber, Abdelouhab.

    1981-07-01

    Mathematical and numerical studies of new schemes possessing high-accuracy properties in the spatial variables are described and the corresponding results presented. To do this, the [0,R] x [-1,+1] rectangle is decomposed into rectangles K_ij = [r_i, r_(i+1)] x [μ_j, μ_(j+1)]. Continuous finite element methods employing polynomials of degree 1 in μ and degree 2 in r are defined on each element. In chapter I, different ways of rendering the particular equation (for μ = -1) discrete are studied. In chapter II, numerical schemes are described and their stability investigated. In chapter III, error estimation theories are presented and numerical results for different right-hand sides S are given.

  19. Exponentially convergent state estimation for delayed switched recurrent neural networks.

    Science.gov (United States)

    Ahn, Choon Ki

    2011-11-01

    This paper deals with the delay-dependent exponentially convergent state estimation problem for delayed switched neural networks. A set of delay-dependent criteria is derived under which the resulting estimation error system is exponentially stable. It is shown that the gain matrix of the proposed state estimator is characterised in terms of the solution to a set of linear matrix inequalities (LMIs), which can be checked readily by using some standard numerical packages. An illustrative example is given to demonstrate the effectiveness of the proposed state estimator.

  20. Influence of hypo- and hyperthermia on death time estimation - A simulation study.

    Science.gov (United States)

    Muggenthaler, H; Hubig, M; Schenkl, S; Mall, G

    2017-09-01

    Numerous physiological and pathological mechanisms can cause elevated or lowered body core temperatures. Deviations from the physiological level of about 37°C can influence temperature-based death time estimations. However, it has not been investigated by means of thermodynamics to what extent hypo- and hyperthermia bias death time estimates. Using numerical simulation, the present study investigates the errors inherent in temperature-based death time estimation in the case of an elevated or lowered body core temperature before death. The most considerable errors with regard to the normothermic model occur in the first few hours post-mortem. With decreasing body core temperature and increasing post-mortem time the error diminishes and stagnates at a nearly constant level. Copyright © 2017 Elsevier B.V. All rights reserved.

  1. Numerical algorithms for intragranular diffusional fission gas release incorporated in the Transuranus code

    International Nuclear Information System (INIS)

    Lassmann, K.

    2002-01-01

    Complicated physical processes govern diffusional fission gas release in nuclear fuels. In addition to the physical problem there exists a numerical problem, as some solutions of the underlying diffusion equation contain numerical errors that by far exceed the physical details. In this paper the two algorithms incorporated in the TRANSURANUS code, the URGAS and the new FORMAS algorithm are compared. The previously reported deficiency of the most elegant and mathematically sound FORMAS algorithm at low release could be overcome. Both algorithms are simple, fast, without numerical problems, insensitive to time step lengths and well balanced over the entire range of fission gas release. They can be made available on request as FORTRAN subroutines. (author)

  2. CYGNSS Surface Wind Observations and Surface Flux Estimates within Low-Latitude Extratropical Cyclones

    Science.gov (United States)

    Crespo, J.; Posselt, D. J.

    2017-12-01

    The Cyclone Global Navigation Satellite System (CYGNSS), launched in December 2016, aims to improve estimates of surface wind speeds over the tropical oceans. While CYGNSS's core mission is to provide better estimates of surface winds within the core of tropical cyclones, previous research has shown that the constellation, with its orbital inclination of 35°, also has the ability to observe numerous extratropical cyclones that form in the lower latitudes. Along with its high spatial and temporal resolution, CYGNSS can provide new insights into how extratropical cyclones develop and evolve, especially in the presence of thick clouds and precipitation. We will demonstrate this by presenting case studies of multiple extratropical cyclones observed by CYGNSS early on in its mission in both Northern and Southern Hemispheres. By using the improved estimates of surface wind speeds from CYGNSS, we can obtain better estimates of surface latent and sensible heat fluxes within and around extratropical cyclones. Surface heat fluxes, driven by surface winds and strong vertical gradients of water vapor and temperature, play a key role in marine cyclogenesis as they increase instability within the boundary layer and may contribute to extreme marine cyclogenesis. In the past, it has been difficult to estimate surface heat fluxes from space borne instruments, as these fluxes cannot be observed directly from space, and deficiencies in spatial coverage and attenuation from clouds and precipitation lead to inaccurate estimates of surface flux components, such as surface wind speeds. While CYGNSS only contributes estimates of surface wind speeds, we can combine this data with other reanalysis and satellite data to provide improved estimates of surface sensible and latent heat fluxes within and around extratropical cyclones and throughout the entire CYGNSS mission.
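
    The link between wind-speed observations and surface heat fluxes runs through the standard bulk aerodynamic formulas. The sketch below uses fixed exchange coefficients and invented ambient values; operational flux estimates would instead use a stability-dependent algorithm (e.g. COARE) together with the reanalysis fields mentioned above.

    ```python
    def bulk_fluxes(wind_speed, sst, t_air, q_surf, q_air,
                    rho=1.2, cp=1004.0, lv=2.5e6, ch=1.2e-3, ce=1.2e-3):
        """Bulk aerodynamic fluxes (W/m^2):
        SH = rho*cp*Ch*U*(SST - Tair), LH = rho*Lv*Ce*U*(qs - qa)."""
        sh = rho * cp * ch * wind_speed * (sst - t_air)
        lh = rho * lv * ce * wind_speed * (q_surf - q_air)
        return sh, lh

    # Example: 15 m/s wind, 2 K air-sea contrast, 4 g/kg humidity deficit
    print(bulk_fluxes(wind_speed=15.0, sst=292.0, t_air=290.0,
                      q_surf=0.014, q_air=0.010))
    ```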

  3. Numerical analysis of anisotropic diffusion effect on ICF hydrodynamic instabilities

    Directory of Open Access Journals (Sweden)

    Olazabal-Loumé M.

    2013-11-01

    The effect of anisotropic diffusion on hydrodynamic instabilities in the context of Inertial Confinement Fusion (ICF) flows is numerically assessed. This anisotropy occurs in indirect drive when laminated ablators are used to modify the lateral transport [1,2]. In direct drive, non-local transport mechanisms and magnetic fields may modify the lateral conduction [3]. In this work, numerical simulations obtained with the code PERLE [4], dedicated to linear stability analysis, are compared with previous theoretical results [5]. In these approaches, the diffusion anisotropy can be controlled by a characteristic coefficient which enables a comprehensive study. This work provides new results on the ablative Rayleigh-Taylor (RT), ablative Richtmyer-Meshkov (RM) and Darrieus-Landau (DL) instabilities.

  4. Fast numerical upscaling of heat equation for fibrous materials

    KAUST Repository

    Iliev, Oleg; Lazarov, Raytcho; Willems, Joerg

    2010-01-01

    We are interested in numerical methods for computing the effective heat conductivities of fibrous insulation materials, such as glass or mineral wool, characterized by low solid volume fractions and high contrasts, i.e., high ratios between the thermal conductivities of the fibers and the surrounding air. We consider a fast numerical method for solving some auxiliary cell problems appearing in this upscaling procedure. The auxiliary problems are boundary value problems of the steady-state heat equation in a representative elementary volume occupied by fibers and air. We make a simplification by replacing these problems with appropriate boundary value problems in the domain occupied by the fibers only. Finally, the obtained problems are further simplified by taking advantage of the slender shape of the fibers and assuming that they form a network. A discretization on the graph defined by the fibers is presented and error estimates are provided. The resulting algorithm is discussed and the accuracy and the performance of the method are illustrated on a number of numerical experiments. © Springer-Verlag 2010.

  5. Fast numerical upscaling of heat equation for fibrous materials

    KAUST Repository

    Iliev, Oleg

    2010-08-01

    We are interested in numerical methods for computing the effective heat conductivities of fibrous insulation materials, such as glass or mineral wool, characterized by low solid volume fractions and high contrasts, i.e., high ratios between the thermal conductivities of the fibers and the surrounding air. We consider a fast numerical method for solving some auxiliary cell problems appearing in this upscaling procedure. The auxiliary problems are boundary value problems of the steady-state heat equation in a representative elementary volume occupied by fibers and air. We make a simplification by replacing these problems with appropriate boundary value problems in the domain occupied by the fibers only. Finally, the obtained problems are further simplified by taking advantage of the slender shape of the fibers and assuming that they form a network. A discretization on the graph defined by the fibers is presented and error estimates are provided. The resulting algorithm is discussed and the accuracy and the performance of the method are illustrated on a number of numerical experiments. © Springer-Verlag 2010.

  6. Consistent estimate of ocean warming, land ice melt and sea level rise from Observations

    Science.gov (United States)

    Blazquez, Alejandro; Meyssignac, Benoît; Lemoine, Jean Michel

    2016-04-01

    Based on the sea level budget closure approach, this study investigates the consistency of observed Global Mean Sea Level (GMSL) estimates from satellite altimetry, observed Ocean Thermal Expansion (OTE) estimates from in-situ hydrographic data (based on Argo for depths above 2000 m and oceanic cruises below) and GRACE observations of land water storage and land ice melt for the period January 2004 to December 2014. The consistency between these datasets is a key issue if we want to constrain missing contributions to sea level rise such as the deep ocean contribution. Numerous previous studies have addressed this question by summing up the different contributions to sea level rise and comparing it to satellite altimetry observations (see for example Llovel et al. 2015, Dieng et al. 2015). Here we propose a novel approach which consists of correcting GRACE solutions over the ocean (essentially corrections of stripes and leakage from ice caps) with mass observations deduced from the difference between satellite altimetry GMSL and in-situ hydrographic data OTE estimates. We check that the resulting GRACE corrected solutions are consistent with original GRACE estimates of the geoid spherical harmonic coefficients within error bars, and we compare the resulting GRACE estimates of land water storage and land ice melt with independent results from the literature. This method provides a new mass redistribution from GRACE consistent with observations from altimetry and OTE. We test the sensitivity of this method to the deep ocean contribution and the GIA models and propose best estimates.

  7. Gradient-based stochastic estimation of the density matrix

    Science.gov (United States)

    Wang, Zhentao; Chern, Gia-Wei; Batista, Cristian D.; Barros, Kipton

    2018-03-01

    Fast estimation of the single-particle density matrix is key to many applications in quantum chemistry and condensed matter physics. The best numerical methods leverage the fact that the density matrix elements f(H)_ij decay rapidly with the distance r_ij between orbitals. This decay is usually exponential. However, for the special case of metals at zero temperature, algebraic decay of the density matrix appears and poses a significant numerical challenge. We introduce a gradient-based probing method to estimate all local density matrix elements at a computational cost that scales linearly with system size. For zero-temperature metals, the stochastic error scales like S^{-(d+2)/2d}, where d is the dimension and S is a prefactor to the computational cost. The convergence becomes exponential if the system is at finite temperature or is insulating.

  8. Numerical Uncertainty Analysis for Computational Fluid Dynamics using Student T Distribution -- Application of CFD Uncertainty Analysis Compared to Exact Analytical Solution

    Science.gov (United States)

    Groves, Curtis E.; Ilie, marcel; Shallhorn, Paul A.

    2014-01-01

    Computational Fluid Dynamics (CFD) is the standard numerical tool used by Fluid Dynamists to estimate solutions to many problems in academia, government, and industry. CFD is known to have errors and uncertainties and there is no universally adopted method to estimate such quantities. This paper describes an approach to estimate CFD uncertainties strictly numerically using inputs and the Student-T distribution. The approach is compared to an exact analytical solution of fully developed, laminar flow between infinite, stationary plates. It is shown that treating all CFD input parameters as oscillatory uncertainty terms coupled with the Student-T distribution can encompass the exact solution.
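
    The core of a Student-t based uncertainty statement is the confidence half-width t·s/√n computed from repeated CFD evaluations with perturbed inputs; the sketch below shows only that step, on made-up samples, and is not the full procedure of the paper.

    ```python
    import numpy as np
    from scipy import stats

    def student_t_uncertainty(samples, confidence=0.95):
        """Mean and confidence half-width t_{n-1} * s / sqrt(n)."""
        x = np.asarray(samples, dtype=float)
        n = x.size
        t = stats.t.ppf(0.5 + confidence / 2.0, df=n - 1)
        return x.mean(), t * x.std(ddof=1) / np.sqrt(n)

    # Made-up outputs of repeated CFD runs with perturbed inputs
    runs = [101.3, 103.1, 99.8, 102.4, 100.9, 101.7]
    mean, half_width = student_t_uncertainty(runs)
    print(f"{mean:.1f} +/- {half_width:.1f}")
    ```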

  9. New numerical method for solving the solute transport equation

    International Nuclear Information System (INIS)

    Ross, B.; Koplik, C.M.

    1978-01-01

    The solute transport equation can be solved numerically by approximating the water flow field by a network of stream tubes and using a Green's function solution within each stream tube. Compared to previous methods, this approach permits greater computational efficiency and easier representation of small discontinuities, and the results are easier to interpret physically. The method has been used to study hypothetical sites for disposal of high-level radioactive waste

  10. Wave Velocity Estimation in Heterogeneous Media

    KAUST Repository

    Asiri, Sharefa M.

    2016-03-21

    In this paper, modulating functions-based method is proposed for estimating space-time dependent unknown velocity in the wave equation. The proposed method simplifies the identification problem into a system of linear algebraic equations. Numerical simulations on noise-free and noisy cases are provided in order to show the effectiveness of the proposed method.

  11. Ensemble Kalman Filtering with Residual Nudging: An Extension to State Estimation Problems with Nonlinear Observation Operators

    KAUST Repository

    Luo, Xiaodong

    2014-10-01

    The ensemble Kalman filter (EnKF) is an efficient algorithm for many data assimilation problems. In certain circumstances, however, divergence of the EnKF might be spotted. In previous studies, the authors proposed an observation-space-based strategy, called residual nudging, to improve the stability of the EnKF when dealing with linear observation operators. The main idea behind residual nudging is to monitor and, if necessary, adjust the distances (misfits) between the real observations and the simulated ones of the state estimates, in the hope that by doing so one may be able to obtain better estimation accuracy. In the present study, residual nudging is extended and modified in order to handle nonlinear observation operators. Such extension and modification result in an iterative filtering framework that, under suitable conditions, is able to achieve the objective of residual nudging for data assimilation problems with nonlinear observation operators. The 40-dimensional Lorenz-96 model is used to illustrate the performance of the iterative filter. Numerical results show that, while a normal EnKF may diverge with nonlinear observation operators, the proposed iterative filter remains stable and leads to reasonable estimation accuracy under various experimental settings.

  12. Computable error estimates of a finite difference scheme for option pricing in exponential Lévy models

    KAUST Repository

    Kiessling, Jonas

    2014-05-06

    Option prices in exponential Lévy models solve certain partial integro-differential equations. This work focuses on developing novel, computable error approximations for a finite difference scheme that is suitable for solving such PIDEs. The scheme was introduced in (Cont and Voltchkova, SIAM J. Numer. Anal. 43(4):1596-1626, 2005). The main results of this work are new estimates of the dominating error terms, namely the time and space discretisation errors. In addition, the leading order terms of the error estimates are determined in a form that is more amenable to computations. The payoff is only assumed to satisfy an exponential growth condition, it is not assumed to be Lipschitz continuous as in previous works. If the underlying Lévy process has infinite jump activity, then the jumps smaller than some (Formula presented.) are approximated by diffusion. The resulting diffusion approximation error is also estimated, with leading order term in computable form, as well as the dependence of the time and space discretisation errors on this approximation. Consequently, it is possible to determine how to jointly choose the space and time grid sizes and the cut off parameter (Formula presented.). © 2014 Springer Science+Business Media Dordrecht.

  13. Quantitative hyperbolicity estimates in one-dimensional dynamics

    International Nuclear Information System (INIS)

    Day, S; Kokubu, H; Pilarczyk, P; Luzzatto, S; Mischaikow, K; Oka, H

    2008-01-01

    We develop a rigorous computational method for estimating the Lyapunov exponents in uniformly expanding regions of the phase space for one-dimensional maps. Our method uses rigorous numerics and graph algorithms to provide results that are mathematically meaningful and can be achieved in an efficient way
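
    For a one-dimensional map, the plain (non-rigorous) numerical counterpart of the quantity being bounded is the orbit average of log|f′(x)|; the sketch below computes it for the logistic map at r = 4, whose exponent is known to be log 2. The paper's contribution is to make such estimates rigorous with interval arithmetic and graph algorithms, which this snippet does not attempt.

    ```python
    import numpy as np

    def lyapunov_exponent(f, df, x0, n_iter=100_000, n_transient=1_000):
        """Average of log|f'(x)| along an orbit (plain floats, not rigorous)."""
        x = x0
        for _ in range(n_transient):
            x = f(x)
        total = 0.0
        for _ in range(n_iter):
            total += np.log(abs(df(x)))
            x = f(x)
        return total / n_iter

    # Logistic map at r = 4: the exact exponent is log 2 ~ 0.693
    r = 4.0
    print(lyapunov_exponent(lambda x: r * x * (1 - x),
                            lambda x: r * (1 - 2 * x), x0=0.3))
    ```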

  14. Numerical Investigation of Masonry Strengthened with Composites

    Directory of Open Access Journals (Sweden)

    Giancarlo Ramaglia

    2018-03-01

    In this work, two main fiber strengthening systems typically applied in masonry structures have been investigated: composites made of basalt and hemp fibers, coupled with an inorganic matrix. Starting from the experimental results on the composites, the out-of-plane behavior of the strengthened masonry was assessed through several numerical analyses. In a first step, the ultimate behavior was assessed in terms of the P (axial load)-M (bending moment) domain (i.e., failure surface), changing several mechanical parameters. In order to assess the ductility capacity of the strengthened masonry elements, the P-M domain was estimated starting from the bending moment-curvature diagrams. Key information about the impact of several mechanical parameters on both the capacity and the ductility was considered. Furthermore, the numerical analyses allow the assessment of the efficiency of the strengthening system as the main mechanical properties change. Basalt fibers had lower efficiency when applied to weak masonry. In this case, the elastic properties of the masonry did not influence the structural behavior under a no-tension assumption for the masonry. Conversely, their impact became non-negligible, especially for higher values of the compressive strength of the masonry. The stress-strain curve used to model the composite impacted the flexural strength. Natural fibers provided similar outcomes, but a first difference regards the higher mechanical compatibility of the strengthening system with the substrate. In this case, the ultimate condition is due to the failure mode of the composite. The stress-strain curves used to model the strengthening system are crucial in the ductility estimation of the strengthened masonry. However, the behavior of the composite strongly influences the curvature ductility in the case of higher compressive strength for masonry. The numerical results discussed in this paper provide the base to develop normalized capacity models able to

  15. Application of numerical inverse method in calculation of composition-dependent interdiffusion coefficients in finite diffusion couples

    DEFF Research Database (Denmark)

    Liu, Yuanrong; Chen, Weimin; Zhong, Jing

    2017-01-01

    The previously developed numerical inverse method was applied to determine the composition-dependent interdiffusion coefficients in single-phase finite diffusion couples. The numerical inverse method was first validated in a fictitious binary finite diffusion couple by pre-assuming four standard sets of interdiffusion coefficients. After that, the numerical inverse method was adopted in a ternary Al-Cu-Ni finite diffusion couple. Based on the measured composition profiles, the ternary interdiffusion coefficients along the entire diffusion path of the target ternary diffusion couple were obtained by using the numerical inverse approach. The comprehensive comparisons between the computations and the experiments indicate that the numerical inverse method is also applicable to high-throughput determination of the composition-dependent interdiffusion coefficients in finite diffusion couples.

  16. Procedures for using expert judgment to estimate human-error probabilities in nuclear power plant operations

    International Nuclear Information System (INIS)

    Seaver, D.A.; Stillwell, W.G.

    1983-03-01

    This report describes and evaluates several procedures for using expert judgment to estimate human-error probabilities (HEPs) in nuclear power plant operations. These HEPs are currently needed for several purposes, particularly for probabilistic risk assessments. Data do not exist for estimating these HEPs, so expert judgment can provide these estimates in a timely manner. Five judgmental procedures are described here: paired comparisons, ranking and rating, direct numerical estimation, indirect numerical estimation and multiattribute utility measurement. These procedures are evaluated in terms of several criteria: quality of judgments, difficulty of data collection, empirical support, acceptability, theoretical justification, and data processing. Situational constraints such as the number of experts available, the number of HEPs to be estimated, the time available, the location of the experts, and the resources available are discussed in regard to their implications for selecting a procedure for use

  17. Linearized motion estimation for articulated planes.

    Science.gov (United States)

    Datta, Ankur; Sheikh, Yaser; Kanade, Takeo

    2011-04-01

    In this paper, we describe the explicit application of articulation constraints for estimating the motion of a system of articulated planes. We relate articulations to the relative homography between planes and show that these articulations translate into linearized equality constraints on a linear least-squares system, which can be solved efficiently using a Karush-Kuhn-Tucker system. The articulation constraints can be applied for both gradient-based and feature-based motion estimation algorithms and to illustrate this, we describe a gradient-based motion estimation algorithm for an affine camera and a feature-based motion estimation algorithm for a projective camera that explicitly enforces articulation constraints. We show that explicit application of articulation constraints leads to numerically stable estimates of motion. The simultaneous computation of motion estimates for all of the articulated planes in a scene allows us to handle scene areas where there is limited texture information and areas that leave the field of view. Our results demonstrate the wide applicability of the algorithm in a variety of challenging real-world cases such as human body tracking, motion estimation of rigid, piecewise planar scenes, and motion estimation of triangulated meshes.
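
    The key computational ingredient described above is a linear least-squares problem with linear equality constraints solved through a Karush-Kuhn-Tucker (KKT) system. The Python sketch below illustrates that generic construction only; the matrices are arbitrary stand-ins, not the homography parametrisation or the articulation constraints of the paper.

      import numpy as np

      def constrained_least_squares(A, b, C, d):
          """Solve min ||A x - b||^2 subject to C x = d via the KKT system

              [ 2 A^T A   C^T ] [ x      ]   [ 2 A^T b ]
              [   C        0  ] [ lambda ] = [    d    ]
          """
          n = A.shape[1]
          m = C.shape[0]
          K = np.block([[2.0 * A.T @ A, C.T],
                        [C, np.zeros((m, m))]])
          rhs = np.concatenate([2.0 * A.T @ b, d])
          sol = np.linalg.solve(K, rhs)
          return sol[:n]        # constrained estimate; sol[n:] are the multipliers

      # Toy example: fit x subject to the constraint that its components sum to 1
      A = np.random.default_rng(0).normal(size=(20, 3))
      b = A @ np.array([0.2, 0.3, 0.5])
      C = np.ones((1, 3))
      d = np.array([1.0])
      print(constrained_least_squares(A, b, C, d))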

  18. Use of computational methods for substitution and numerical dosimetry of real bones

    International Nuclear Information System (INIS)

    Silva, I.C.S.; Gonzalez, K.M.L.; Barbosa, A.J.A.; Lucindo Junior, C.R.; Vieira, J.W.; Lima, F.R.A.

    2017-01-01

    Estimating the dose that ionizing radiation deposits in the soft tissues of the skeleton within the cavities of the trabecular bones represents one of the greatest difficulties faced by numerical dosimetry. The Numerical Dosimetry Group (GDN/CNPq), Recife-PE, Brazil, has used a method based on micro-CT images. The problem with the implementation of micro-CT is the difficulty in obtaining samples of real bones (OR). The objective of this work was to evaluate a sample of a virtual block of trabecular bone, obtained through the nonparametric method based on voxel frequencies (VF), and samples of the climbing plant Luffa aegyptica, whose dry fruit is known as vegetable sponge (BV), as substitutes for the OR samples. For this, a theoretical study of the two techniques developed by the GDN was made. The study showed, for both techniques, after the dosimetric evaluations, that the real sample can be replaced by the synthetic samples, since they yielded dose estimates close to the real one.

  19. Minimax Rate-optimal Estimation of High-dimensional Covariance Matrices with Incomplete Data.

    Science.gov (United States)

    Cai, T Tony; Zhang, Anru

    2016-09-01

    Missing data occur frequently in a wide range of applications. In this paper, we consider estimation of high-dimensional covariance matrices in the presence of missing observations under a general missing completely at random model in the sense that the missingness is not dependent on the values of the data. Based on incomplete data, estimators for bandable and sparse covariance matrices are proposed and their theoretical and numerical properties are investigated. Minimax rates of convergence are established under the spectral norm loss and the proposed estimators are shown to be rate-optimal under mild regularity conditions. Simulation studies demonstrate that the estimators perform well numerically. The methods are also illustrated through an application to data from four ovarian cancer studies. The key technical tools developed in this paper are of independent interest and potentially useful for a range of related problems in high-dimensional statistical inference with missing data.

  20. Minimax Rate-optimal Estimation of High-dimensional Covariance Matrices with Incomplete Data*

    Science.gov (United States)

    Cai, T. Tony; Zhang, Anru

    2016-01-01

    Missing data occur frequently in a wide range of applications. In this paper, we consider estimation of high-dimensional covariance matrices in the presence of missing observations under a general missing completely at random model in the sense that the missingness is not dependent on the values of the data. Based on incomplete data, estimators for bandable and sparse covariance matrices are proposed and their theoretical and numerical properties are investigated. Minimax rates of convergence are established under the spectral norm loss and the proposed estimators are shown to be rate-optimal under mild regularity conditions. Simulation studies demonstrate that the estimators perform well numerically. The methods are also illustrated through an application to data from four ovarian cancer studies. The key technical tools developed in this paper are of independent interest and potentially useful for a range of related problems in high-dimensional statistical inference with missing data. PMID:27777471

  1. Assessing the performance of dynamical trajectory estimates

    Science.gov (United States)

    Bröcker, Jochen

    2014-06-01

    Estimating trajectories and parameters of dynamical systems from observations is a problem frequently encountered in various branches of science; geophysicists, for example, refer to this problem as data assimilation. Unlike estimation problems with exchangeable observations, in data assimilation the observations cannot easily be divided into separate sets for estimation and validation; this creates serious problems, since simply using the same observations for estimation and validation might result in overly optimistic performance assessments. To circumvent this problem, a result is presented which allows us to estimate this optimism, thus allowing for a more realistic performance assessment in data assimilation. The presented approach becomes particularly simple for data assimilation methods employing a linear error feedback (such as synchronization schemes, nudging, incremental 3DVAR and 4DVar, and various Kalman filter approaches). Numerical examples considering a high gain observer confirm the theory.

  2. Phytoremediation: realistic estimation of modern efficiency and future possibility

    International Nuclear Information System (INIS)

    Kravets, A.; Pavlenko, Y.; Kusmenko, L.; Ermak, M.

    1996-01-01

    Kinetic peculiarities of radionuclide migration in the 'soil-plant' system of the Chernobyl region have been investigated by means of numerical modelling. A quantitative estimate of the half-time of natural soil cleaning has been obtained. The potential and efficiency of modern phytoremediation technology have been estimated. The general requirements and future prospects of phytoremediation biotechnology have been outlined. (author)

  3. Phytoremediation: realistic estimation of modern efficiency and future possibility

    Energy Technology Data Exchange (ETDEWEB)

    Kravets, A; Pavlenko, Y [Institute of Cell Biology and Genetic Engineering NAS, Kiev (Ukraine); Kusmenko, L; Ermak, M [Institute of Plant Physiology and Genetic NAS, Vasilkovsky, Kiev (Ukraine)

    1996-11-01

    Kinetic peculiarities of radionuclide migration in the 'soil-plant' system of the Chernobyl region have been investigated by means of numerical modelling. A quantitative estimate of the half-time of natural soil cleaning has been obtained. The potential and efficiency of modern phytoremediation technology have been estimated. The general requirements and future prospects of phytoremediation biotechnology have been outlined. (author)

  4. External cephalic version among women with a previous cesarean delivery: report on 36 cases and review of the literature.

    Science.gov (United States)

    Abenhaim, Haim A; Varin, Jocelyne; Boucher, Marc

    2009-01-01

    Whether or not women with a previous cesarean section should be considered for an external cephalic version remains unclear. In our study, we sought to examine the relationship between a history of previous cesarean section and outcomes of external cephalic version for pregnancies at 36 completed weeks of gestation or more. Data on obstetrical history and on external cephalic version outcomes were obtained from the C.H.U. Sainte-Justine External Cephalic Version Database. Baseline clinical characteristics were compared among women with and without a history of previous cesarean section. We used logistic regression analysis to evaluate the effect of previous cesarean section on the success of external cephalic version while adjusting for parity, maternal body mass index, gestational age, estimated fetal weight, and amniotic fluid index. Over a 15-year period, 1425 external cephalic versions were attempted, of which 36 (2.5%) were performed on women with a previous cesarean section. Although women with a history of previous cesarean section were more likely to be older and para >2 (38.93% vs. 15.0%), there was no difference in gestational age, estimated fetal weight, or amniotic fluid index. Women with a prior cesarean section had a success rate similar to women without [50.0% vs. 51.6%, adjusted OR: 1.31 (0.48-3.59)]. Women with a previous cesarean section who undergo an external cephalic version have success rates similar to those of women without. Concern about procedural success in women with a previous cesarean section is unwarranted and should not deter attempting an external cephalic version.

  5. Residents' numeric inputting error in computerized physician order entry prescription.

    Science.gov (United States)

    Wu, Xue; Wu, Changxu; Zhang, Kan; Wei, Dong

    2016-04-01

    Computerized physician order entry (CPOE) systems with embedded clinical decision support (CDS) can significantly reduce certain types of prescription error. However, prescription errors still occur. Various factors, such as the numeric inputting methods used in human-computer interaction (HCI), produce different error rates and types, but this has received relatively little attention. This study aimed to examine the effects of numeric inputting methods and urgency levels on numeric inputting errors in prescriptions, as well as to categorize the types of errors. Thirty residents participated in four prescribing tasks in which two factors were manipulated: numeric inputting method (numeric row in the main keyboard vs. numeric keypad) and urgency level (urgent situation vs. non-urgent situation). Multiple aspects of participants' prescribing behavior were measured in sober prescribing situations. The results revealed that in urgent situations, participants were prone to make mistakes when using the numeric row in the main keyboard. After controlling for performance in the sober prescribing situation, the effects of the input methods disappeared, and urgency was found to play a significant role in the generalized linear model. Most errors were either omission or substitution types, but the proportion of transposition and intrusion error types was significantly higher than in previous research. Among the numbers 3, 8, and 9, which were the less common digits used in prescriptions, the error rate was higher, which poses a considerable risk to patient safety. Urgency played a more important role in CPOE numeric typing errors than typing skills and typing habits. It is recommended that input be made with the numeric keypad, which had lower error rates in urgent situations. An alternative design could consider increasing the sensitivity of the keys with lower frequency of occurrence and decimals. To improve the usability of CPOE, numeric keyboard design and error detection could benefit from spatial

  6. Estimation of doses to patients from 'complex' conventional X-ray examinations

    International Nuclear Information System (INIS)

    Calzado, A.; Vano, E.; Moran, P.; Ruiz, S.; Gonzalez, L.; Castellote, C.

    1991-01-01

    A numerical method has been developed to estimate organ doses and the effective dose-equivalent for patients undergoing three 'complex' examinations (barium meal, barium enema and intravenous urography). The separation of radiological procedures into a set of standard numerical views is based on the use of Monte Carlo conversion factors and measurements within a Remab phantom. Radiation doses measured in a phantom for such examinations were compared with the predictions of the 'numerical' method. Dosimetric measurements with thermoluminescent dosemeters attached to the patient's skin, along with measurements of the dose-area product during the examination, have enabled organ doses to be derived and the effective dose-equivalent to be estimated. Mean frequency-weighted values of dose-area product, energy imparted to the patient, doses to a set of organs and the effective dose-equivalent in the area of Madrid are reported. Comparisons of the results with those from similar surveys in other countries were made. (author)
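
    For orientation, the effective dose-equivalent referred to above is a weighted sum of organ doses, H_E = sum_T w_T * H_T. The snippet below is a minimal illustration of that sum only; the organ list, dose values and weighting factors are placeholders, not the conversion factors or survey data used by the authors.

      # Minimal sketch of the weighted organ-dose sum H_E = sum_T w_T * H_T.
      # Organ list, dose values and weighting factors are placeholders for
      # illustration only, not the factors or survey data used by the authors.
      organ_doses_mGy = {"lung": 0.80, "stomach": 1.20, "liver": 0.60}   # hypothetical
      weights         = {"lung": 0.12, "stomach": 0.12, "liver": 0.05}   # placeholders

      # Assuming a radiation weighting factor of 1 for X-rays, mGy ~ mSv here.
      effective_dose_mSv = sum(weights[o] * d for o, d in organ_doses_mGy.items())
      print(f"Illustrative effective dose-equivalent: {effective_dose_mSv:.3f} mSv")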

  7. Numerical modeling of batch formation in waste incineration plants

    Directory of Open Access Journals (Sweden)

    Obroučka Karel

    2015-03-01

    The aim of this paper is a mathematical description of an algorithm for the controlled assembly of an incinerated batch of waste. The basis for the formation of a batch is selected parameters of the incinerated waste, such as its calorific value or content of pollutants, or a combination of both. The numerical model allows, based on the selected criteria, a batch of waste to be compiled that continuously follows the previous batch, which is a prerequisite for the optimized operation of the incinerator. The model was prepared both for waste stored in containers and for waste stored in continuously refilled boxes. The mathematical model was developed into a computer program and its functionality was verified both by practical measurements and by numerical simulations. The proposed model can be used in incinerators for hazardous and municipal waste.

  8. Self-learning estimation of quantum states

    International Nuclear Information System (INIS)

    Hannemann, Th.; Reiss, D.; Balzer, Ch.; Neuhauser, W.; Toschek, P.E.; Wunderlich, Ch.

    2002-01-01

    We report the experimental estimation of arbitrary qubit states using a succession of N measurements on individual qubits, where the measurement basis is changed during the estimation procedure conditioned on the outcome of previous measurements (self-learning estimation). Two hyperfine states of a single trapped 171Yb+ ion serve as a qubit. It is demonstrated that the difference in fidelity between this adaptive strategy and passive strategies increases in the presence of decoherence

  9. Evaluation and comparison of estimation methods for failure rates and probabilities

    Energy Technology Data Exchange (ETDEWEB)

    Vaurio, Jussi K. [Fortum Power and Heat Oy, P.O. Box 23, 07901 Loviisa (Finland)]. E-mail: jussi.vaurio@fortum.com; Jaenkaelae, Kalle E. [Fortum Nuclear Services, P.O. Box 10, 00048 Fortum (Finland)

    2006-02-01

    An updated parametric robust empirical Bayes (PREB) estimation methodology is presented as an alternative to several two-stage Bayesian methods used to assimilate failure data from multiple units or plants. PREB is based on prior-moment matching and avoids multi-dimensional numerical integrations. The PREB method is presented for failure-truncated and time-truncated data. Erlangian and Poisson likelihoods with gamma priors are used for failure rate estimation, and binomial data with beta priors are used for estimating the failure probability per demand. Combined models and assessment uncertainties are accounted for. One objective is to compare several methods with numerical examples and to show that PREB works as well as, if not better than, the alternative more complex methods, especially in demanding problems of small samples, identical data and zero failures. False claims and misconceptions are straightened out, and practical applications in risk studies are presented.
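
    As a hedged illustration of the Bayesian machinery underlying such failure-rate estimation, the sketch below performs a single conjugate gamma-Poisson update. It is not the PREB prior-moment-matching procedure itself, and the prior parameters, failure count and exposure time are invented for the example (SciPy is assumed available).

      from scipy import stats

      # Conjugate gamma-Poisson update for a failure rate (illustrative values).
      # Prior: lambda ~ Gamma(alpha0, beta0); data: k failures in exposure time T.
      alpha0, beta0 = 0.5, 100.0      # hypothetical generic prior (shape, rate [1/h])
      k, T = 2, 5000.0                # observed failures and operating hours

      alpha_post = alpha0 + k         # posterior shape
      beta_post = beta0 + T           # posterior rate

      posterior = stats.gamma(a=alpha_post, scale=1.0 / beta_post)
      print("posterior mean failure rate:", posterior.mean())
      print("90% credible interval:", posterior.interval(0.90))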

  10. Force-controlled absorption in a fully-nonlinear numerical wave tank

    International Nuclear Information System (INIS)

    Spinneken, Johannes; Christou, Marios; Swan, Chris

    2014-01-01

    An active control methodology for the absorption of water waves in a numerical wave tank is introduced. This methodology is based upon a force-feedback technique which has previously been shown to be very effective in physical wave tanks. Unlike other methods, a priori knowledge of the wave conditions in the tank is not required; the absorption controller is designed to respond automatically to a wide range of wave conditions. In comparison to numerical sponge layers, effective wave absorption is achieved on the boundary, thereby minimising the spatial extent of the numerical wave tank. In contrast to the imposition of radiation conditions, the scheme is inherently capable of absorbing irregular waves. Most importantly, simultaneous generation and absorption can be achieved. This is an important advance when considering the inclusion of reflective bodies within the numerical wave tank. In designing the absorption controller, an infinite impulse response filter is adopted, thereby eliminating the problem of non-causality in the controller optimisation. Two alternative controllers are considered, both implemented in a fully-nonlinear wave tank based on a multiple-flux boundary element scheme. To simplify the problem under consideration, the present analysis is limited to water waves propagating in a two-dimensional domain. The paper presents an extensive numerical validation which demonstrates the success of the method for a wide range of wave conditions including regular, focused and random waves. The numerical investigation also highlights some of the limitations of the method, particularly in simultaneously generating and absorbing large amplitude or highly-nonlinear waves. The findings of the present numerical study are directly applicable to related fields where optimum absorption is sought; these include physical wavemaking, wave power absorption and a wide range of numerical wave tank schemes.

  11. Incorporation of the capillary hysteresis model HYSTR into the numerical code TOUGH

    International Nuclear Information System (INIS)

    Niemi, A.; Bodvarsson, G.S.; Pruess, K.

    1991-11-01

    As part of the work performed to model flow in the unsaturated zone at Yucca Mountain, Nevada, a capillary hysteresis model has been developed. The computer program HYSTR has been developed to compute the hysteretic capillary pressure-liquid saturation relationship through interpolation of tabulated data. The code can be easily incorporated into any numerical unsaturated flow simulator. A complete description of HYSTR, including a brief summary of the previous hysteresis literature, a detailed description of the program, and instructions for its incorporation into a numerical simulator, is given in the HYSTR user's manual (Niemi and Bodvarsson, 1991a). This report describes the incorporation of HYSTR into the numerical code TOUGH (Transport of Unsaturated Groundwater and Heat; Pruess, 1986). The changes made and the procedures for the use of TOUGH for hysteresis modeling are documented
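
    The core operation HYSTR performs, interpolation of a tabulated capillary pressure-saturation relationship, can be sketched in a few lines. The table values below are made up for illustration, and the sketch ignores the hysteresis branch selection that HYSTR actually handles.

      import numpy as np

      # Tabulated (saturation, capillary pressure) points for one scanning curve.
      # Values are invented for illustration; HYSTR's real tables and its
      # drying/wetting branch logic are more involved.
      saturation = np.array([0.10, 0.30, 0.50, 0.70, 0.90])
      pcap_pa    = np.array([-9.0e4, -4.0e4, -2.0e4, -8.0e3, -1.0e3])

      def capillary_pressure(s_liquid):
          """Linear interpolation of the tabulated Pc(S_l) relationship."""
          return np.interp(s_liquid, saturation, pcap_pa)

      print(capillary_pressure(0.62))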

  12. Coupling impedance of an in-vacuum undulator: Measurement, simulation, and analytical estimation

    Science.gov (United States)

    Smaluk, Victor; Fielder, Richard; Blednykh, Alexei; Rehm, Guenther; Bartolini, Riccardo

    2014-07-01

    One of the important issues in the design of an in-vacuum undulator is the coupling impedance of the vacuum chamber, which includes tapered transitions with variable gap size. To get complete and reliable information on the impedance, analytical estimates, numerical simulations and beam-based measurements have been performed at Diamond Light Source, a forthcoming upgrade of which includes introducing additional insertion device (ID) straights. The impedance of an already existing ID vessel geometrically similar to the new one has been measured using the orbit bump method. The measurement results, in comparison with analytical estimations and numerical simulations, are discussed in this paper.

  13. On the elimination of numerical Cerenkov radiation in PIC simulations

    International Nuclear Information System (INIS)

    Greenwood, Andrew D.; Cartwright, Keith L.; Luginsland, John W.; Baca, Ernest A.

    2004-01-01

    Particle-in-cell (PIC) simulations are a useful tool in modeling plasma in physical devices. The Yee finite difference time domain (FDTD) method is commonly used in PIC simulations to model the electromagnetic fields. However, in the Yee FDTD method, poorly resolved waves at frequencies near the cut-off frequency of the grid travel more slowly than the physical speed of light. These slowly traveling, poorly resolved waves are not a problem in many simulations because the physics of interest is at much lower frequencies. However, when high energy particles are present, the particles may travel faster than the numerical speed of their own radiation, leading to non-physical, numerical Cerenkov radiation. Due to non-linear interaction between the particles and the fields, the numerical Cerenkov radiation couples into the frequency band of physical interest and corrupts the PIC simulation. There are two methods of mitigating the effects of the numerical Cerenkov radiation. The computational stencil used to approximate the curl operator can be altered to improve the high frequency physics, or a filtering scheme can be introduced to attenuate the waves that cause the numerical Cerenkov radiation. Altering the computational stencil is more physically accurate but is difficult to implement while maintaining charge conservation in the code. Thus, filtering is more commonly used. Two previously published filters, by Godfrey and Friedman, are analyzed and compared to ideally desired filter properties

  14. Eulerian and Lagrangian statistics from high resolution numerical simulations of weakly compressible turbulence

    NARCIS (Netherlands)

    Benzi, R.; Biferale, L.; Fisher, R.T.; Lamb, D.Q.; Toschi, F.

    2009-01-01

    We report a detailed study of Eulerian and Lagrangian statistics from high resolution Direct Numerical Simulations of isotropic weakly compressible turbulence. The Reynolds number at the Taylor microscale is estimated to be around 600. Eulerian and Lagrangian statistics are evaluated over a huge data set

  15. Systematic Approach for Decommissioning Planning and Estimating

    International Nuclear Information System (INIS)

    Dam, A. S.

    2002-01-01

    Nuclear facility decommissioning, satisfactorily completed at the lowest cost, relies on a systematic approach to planning, estimating, and documenting the work. High quality information is needed to properly perform the planning and estimating. A systematic approach to collecting and maintaining the needed information is recommended, using a knowledgebase system for information management. A systematic approach is also recommended to develop the decommissioning plan, cost estimate and schedule. A probabilistic project cost and schedule risk analysis is included as part of the planning process. The entire effort is performed by an experienced team of decommissioning planners, cost estimators, schedulers, and facility-knowledgeable owner representatives. The plant data, work plans, cost and schedule are entered into a knowledgebase. This systematic approach has been used successfully for decommissioning planning and cost estimating for a commercial nuclear power plant. Elements of this approach have been used for numerous cost estimates and estimate reviews. The plan and estimate in the knowledgebase should be a living document, updated periodically, to support decommissioning fund provisioning, with the plan ready for use when the need arises

  16. Reliability of Estimation Pile Load Capacity Methods

    Directory of Open Access Journals (Sweden)

    Yudhi Lastiasih

    2014-04-01

    It is not known how accurate any of the numerous previous methods for predicting pile capacity are when compared with the actual ultimate capacity of piles tested to failure. The authors of the present paper have conducted such an analysis, based on 130 data sets of field loading tests. Out of these 130 data sets, only 44 could be analysed, of which 15 were conducted until the piles actually reached failure. The pile prediction methods used were: Brinch Hansen's method (1963), Chin's method (1970), Decourt's Extrapolation Method (1999), Mazurkiewicz's method (1972), Van der Veen's method (1953), and the Quadratic Hyperbolic Method proposed by Lastiasih et al. (2012). It was found that all the above methods were sufficiently reliable when applied to data from pile loading tests loaded to failure. However, when applied to data from pile loading tests that did not reach failure, the methods that yield lower values of the correction factor N are recommended. Finally, the empirical method of Reese and O'Neill (1988) was found to be reliable enough to be used to estimate the Qult of a pile foundation based on soil data only.

  17. Obtaining numerically consistent estimates from a mix of administrative data and surveys

    OpenAIRE

    de Waal, A.G.

    2016-01-01

    National statistical institutes (NSIs) fulfil an important role as providers of objective and undisputed statistical information on many different aspects of society. To this end NSIs try to construct data sets that are rich in information content and that can be used to estimate a large variety of population figures. At the same time NSIs aim to construct these rich data sets as efficiently and cost-effectively as possible. This can be achieved by utilizing already available administrative data...

  18. Numerical modeling of fires on gas pipelines

    International Nuclear Information System (INIS)

    Zhao Yang; Jianbo Lai; Lu Liu

    2011-01-01

    When natural gas is released through a hole in a high-pressure pipeline, it disperses in the atmosphere as a jet. A jet fire will occur when the leaked gas meets an ignition source. To estimate the dangerous area, the shape and size of the fire must be known. The evolution of the jet fire in air is predicted by using a finite-volume procedure to solve the flow equations. The model is three-dimensional, elliptic, and uses a compressibility-corrected version of the k-ε turbulence model; it also includes a probability density function/laminar flamelet model of the turbulent non-premixed combustion process. Radiation heat transfer is described using an adaptive version of the discrete transfer method. The model is compared, with success, against wind-tunnel experiments on a horizontal jet fire reported in the literature. The influence of wind and jet velocity on the fire shape has been investigated, and a correlation based on the numerical results is proposed for predicting the stoichiometric flame length. - Research highlights: → We developed a model to predict the evolution of turbulent jet diffusion flames. → Measurements of temperature distributions match well with the numerical predictions. → A correlation has been proposed to predict the stoichiometric flame length. → Buoyancy effects are higher in the numerical results. → The radiative heat loss is bigger in the experimental results.

  19. Experiments and Numerical Simulations of Electrodynamic Tether

    Science.gov (United States)

    Iki, Kentaro; Kawamoto, Satomi; Takahashi, Ayaka; Ishimoto, Tomori; Yanagida, Atsushi; Toda, Susumu

    As an effective means of suppressing space debris growth, the Aerospace Research and Development Directorate of the Japan Aerospace Exploration Agency (JAXA) has been investigating an active space debris removal system that employs highly efficient electrodynamic tether (EDT) technology for orbital transfer. This study investigates tether deployment dynamics by means of on-ground experiments and numerical simulations of an electrodynamic tether system. Some key parameters used in the numerical simulations, such as the elastic modulus and damping ratio of the tether, the spring constant of the coiling of the tether, and the deployment friction, must be estimated, and various experiments were conducted to determine these values. As a result, the following values were obtained: the elastic modulus of the tether was 40 GPa, and the damping ratio of the tether was 0.02. The spring constant and the damping ratio of the tether coiling were 10^-4 N/m and 0.025, respectively. The deployment friction was 0.038ν + 0.005 N. In numerical simulations using a multiple-mass tether model, tethers with lengths of several kilometers are deployed and the attitude dynamics of satellites attached to the end of the tether and the tether libration are calculated. As a result, the simulations confirmed successful deployment of the tether with a length of 500 m using the electrodynamic tether system.

  20. Estimating the mass of the Local Group using machine learning applied to numerical simulations

    Science.gov (United States)

    McLeod, M.; Libeskind, N.; Lahav, O.; Hoffman, Y.

    2017-12-01

    We present a new approach to calculating the combined mass of the Milky Way (MW) and Andromeda (M31), which together account for the bulk of the mass of the Local Group (LG). We base our work on an ensemble of 30,190 halo pairs from the Small MultiDark simulation, assuming a ΛCDM (Cosmological Constant and Cold Dark Matter) cosmology. This is used in conjunction with machine learning methods (artificial neural networks, ANN) to investigate the relationship between the mass and selected parameters characterising the orbit and local environment of the binary. ANN are employed to take account of additional physics arising from interactions with larger structures or dynamical effects which are not analytically well understood. Results from the ANN are most successful when the velocity shear is provided, which demonstrates the flexibility of machine learning to model physical phenomena and readily incorporate new information. The resulting estimate for the Local Group mass, when shear information is included, is 4.9×10^12 Msolar, with an error of ±0.8×10^12 Msolar from the 68% uncertainty in observables, and an r.m.s. scatter interval of +1.7/-1.3×10^12 Msolar estimated from the differences between the model estimates and simulation masses for a testing sample of halo pairs. We also consider a recently reported large relative transverse velocity of M31 and the Milky Way, and produce an alternative mass estimate of 3.6±0.3 (+2.1/-1.3)×10^12 Msolar. Although the methods used predict similar values for the most likely mass of the LG, application of ANN compared to the traditional Timing Argument reduces the scatter in the log mass by approximately half when tested on samples from the simulation.
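
    As a rough illustration of the regression task described above (not the authors' network, training set or features), the sketch below fits a small feed-forward neural network to synthetic halo-pair data with scikit-learn; all feature names and values are assumptions.

      import numpy as np
      from sklearn.neural_network import MLPRegressor

      rng = np.random.default_rng(1)

      # Stand-in training set: rows are halo pairs, columns are orbital/environment
      # features (e.g. separation, radial velocity, tangential velocity, shear);
      # the target is log10 of the combined pair mass.  All values are synthetic.
      X = rng.normal(size=(5000, 4))
      y = 12.0 + 0.3 * X[:, 0] - 0.2 * X[:, 1] + 0.1 * rng.normal(size=5000)

      model = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=2000, random_state=0)
      model.fit(X, y)

      # Predict the log-mass for one "observed" set of pair features
      observed = rng.normal(size=(1, 4))
      print(model.predict(observed))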

  1. Circular orbits of corotating binary black holes: Comparison between analytical and numerical results

    International Nuclear Information System (INIS)

    Damour, Thibault; Gourgoulhon, Eric; Grandclement, Philippe

    2002-01-01

    We compare recent numerical results, obtained within a 'helical Killing vector' approach, on circular orbits of corotating binary black holes to the analytical predictions made by the effective one-body (EOB) method (which has been recently extended to the case of spinning bodies). On the scale of the differences between the results obtained by different numerical methods, we find good agreement between numerical data and analytical predictions for several invariant functions describing the dynamical properties of circular orbits. This agreement is robust against the post-Newtonian accuracy used for the analytical estimates, as well as under choices of the resummation method for the EOB 'effective potential', and gets better as one uses a higher post-Newtonian accuracy. These findings open the way to a significant 'merging' of analytical and numerical methods, i.e. to matching an EOB-based analytical description of the (early and late) inspiral, up to the beginning of the plunge, to a numerical description of the plunge and merger. We illustrate also the 'flexibility' of the EOB approach, i.e. the possibility of determining some 'best fit' values for the analytical parameters by comparison with numerical data

  2. Estimation of biochemical variables using quantum-behaved particle ...

    African Journals Online (AJOL)

    To generate a more efficient neural network estimator, we employed the previously proposed quantum-behaved particle swarm optimization (QPSO) algorithm for neural network training. The experimental results of the L-glutamic acid fermentation process showed that our established estimator could predict variables such as the ...

  3. Theoretical and numerical study of an optimum design algorithm

    International Nuclear Information System (INIS)

    Destuynder, Philippe.

    1976-08-01

    This work can be separated into two main parts. First, the behavior of the solution of an elliptic variational equation is analyzed when the domain is subjected to a small perturbation. The case of variational inequalities is also considered. Secondly, the previous results are used to derive an optimum design algorithm. This algorithm was suggested by the center method proposed by Huard. Numerical results show the superiority of the method over other optimization techniques [fr

  4. A Numerical Approach to Estimate the Ballistic Coefficient of Space Debris from TLE Orbital Data

    Science.gov (United States)

    Narkeliunas, Jonas

    2016-01-01

    According to theoretical simulations, even a few continuous-mode 10 kW ground-based lasers, focused by 1.5 m telescopes with adaptive optics, were enough to prevent a significant number of debris collisions. Simulations were done by propagating all space objects in LEO 1 year into the future and checking whether the probability of collision was high. For those space objects, different ground-based lasers were used to divert them, after which the collision probabilities were reevaluated. However, the actual accuracy of the LightForce software, which has been developed at NASA Ames Research Center, depends on the veracity of the input parameters, one of which is the object's ballistic coefficient. It is a measure of a body's ability to overcome air resistance, which has a significant impact on debris in LEO, and it is thus responsible for the shape of the trajectory of the debris. Having the exact values of the ballistic coefficient would make for significantly better collision predictions; unfortunately, we do not know the values for most of the objects. In this research, we worked with the part of the LightForce code which estimates the ballistic coefficient from ephemerides. The previously used method gave highly inaccurate values when compared to known objects, and it needed to be changed. The goal of this work was to try out a different method of estimating the ballistic coefficient and to check whether or not it gives noticeable improvements.

  5. An iterative stochastic ensemble method for parameter estimation of subsurface flow models

    International Nuclear Information System (INIS)

    Elsheikh, Ahmed H.; Wheeler, Mary F.; Hoteit, Ibrahim

    2013-01-01

    Parameter estimation for subsurface flow models is an essential step for maximizing the value of numerical simulations for future prediction and the development of effective control strategies. We propose the iterative stochastic ensemble method (ISEM) as a general method for parameter estimation based on stochastic estimation of gradients using an ensemble of directional derivatives. ISEM eliminates the need for adjoint coding and deals with the numerical simulator as a black box. The proposed method employs directional derivatives within a Gauss-Newton iteration. The update equation in ISEM resembles the update step in the ensemble Kalman filter; however, the inverse of the output covariance matrix in ISEM is regularized using standard truncated singular value decomposition or Tikhonov regularization. We also investigate the performance of a set of shrinkage-based covariance estimators within ISEM. The proposed method is successfully applied to several nonlinear parameter estimation problems for subsurface flow models. The efficiency of the proposed algorithm is demonstrated by the small size of the utilized ensembles and in terms of error convergence rates

  6. An iterative stochastic ensemble method for parameter estimation of subsurface flow models

    KAUST Repository

    Elsheikh, Ahmed H.

    2013-06-01

    Parameter estimation for subsurface flow models is an essential step for maximizing the value of numerical simulations for future prediction and the development of effective control strategies. We propose the iterative stochastic ensemble method (ISEM) as a general method for parameter estimation based on stochastic estimation of gradients using an ensemble of directional derivatives. ISEM eliminates the need for adjoint coding and deals with the numerical simulator as a black box. The proposed method employs directional derivatives within a Gauss-Newton iteration. The update equation in ISEM resembles the update step in the ensemble Kalman filter; however, the inverse of the output covariance matrix in ISEM is regularized using standard truncated singular value decomposition or Tikhonov regularization. We also investigate the performance of a set of shrinkage-based covariance estimators within ISEM. The proposed method is successfully applied to several nonlinear parameter estimation problems for subsurface flow models. The efficiency of the proposed algorithm is demonstrated by the small size of the utilized ensembles and in terms of error convergence rates. © 2013 Elsevier Inc.
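
    The regularization step mentioned in the abstract, a truncated-SVD pseudo-inverse of an ill-conditioned covariance matrix, can be sketched as follows. This is a generic illustration rather than the ISEM update itself; the tolerance and the toy matrix are arbitrary choices.

      import numpy as np

      def tsvd_solve(C, rhs, tol=1e-6):
          """Solve C x = rhs using a truncated-SVD pseudo-inverse.

          Singular values below tol * s_max are discarded, which is the kind of
          regularisation applied to an ill-conditioned output covariance matrix.
          """
          U, s, Vt = np.linalg.svd(C)
          keep = s > tol * s[0]
          s_inv = np.zeros_like(s)
          s_inv[keep] = 1.0 / s[keep]
          return Vt.T @ (s_inv * (U.T @ rhs))

      # Ill-conditioned toy covariance matrix
      C = np.array([[1.0, 0.999], [0.999, 1.0]])
      rhs = np.array([1.0, 2.0])
      print(tsvd_solve(C, rhs, tol=1e-3))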

  7. A CFD numerical model for the flow distribution in a MTR fuel element

    International Nuclear Information System (INIS)

    Andrade, Delvonei Alves de; Santos, Pedro Henrique Di Giovanni; Oliveira, Fabio Branco Vaz de; Torres, Walmir Maximo; Umbehaun, Pedro Ernesto; Souza, Jose Antonio Batista de; Belchior Junior, Antonio; Sabundjian, Gaiane; Prado, Adelk de Carvalho; Angelo, Gabriel

    2015-01-01

    Previously, an instrumented dummy fuel element (DMPV-01), with the same geometric characteristics as an MTR fuel element, was designed and constructed for pressure drop and flow distribution measurement experiments at the IEA-R1 reactor core. This dummy element was also used to measure the flow distribution among the rectangular flow channels formed by the element's fuel plates. A CFD numerical model was developed to complement the studies. This work presents the proposed CFD model as well as a comparison between numerical and experimental results for the flow rate distribution among the internal flow channels. Numerical results show that the model reproduces the experiments very well and can be used for the studies as a more convenient and complementary tool. (author)

  8. A CFD numerical model for the flow distribution in a MTR fuel element

    Energy Technology Data Exchange (ETDEWEB)

    Andrade, Delvonei Alves de; Santos, Pedro Henrique Di Giovanni; Oliveira, Fabio Branco Vaz de; Torres, Walmir Maximo; Umbehaun, Pedro Ernesto; Souza, Jose Antonio Batista de; Belchior Junior, Antonio; Sabundjian, Gaiane; Prado, Adelk de Carvalho, E-mail: acprado@ipen.br, E-mail: delvonei@ipen.br, E-mail: dpedro_digiovanni_s@hotmail.com, E-mail: fabio@ipen.br, E-mail: wmtorres@ipen.br, E-mail: umbehaun@ipen.br, E-mail: jasouza@ipen.br, E-mail: abelchior@ipen.br, E-mail: gdjian@ipen.br [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil). Centro de Engenharia Nuclear; Angelo, Edvaldo, E-mail: eangelo@mackenzie.br [Universidade Presbiteriana Mackenzie, Sao Paulo, SP (Brazil); Angelo, Gabriel, E-mail: gangelo@fei.edu.br [Fundacao Educacional Inaciana (FEI), Sao Bernardo do Campo, SP (Brazil)

    2015-07-01

    Previously, an instrumented dummy fuel element (DMPV-01), with the same geometric characteristics as an MTR fuel element, was designed and constructed for pressure drop and flow distribution measurement experiments at the IEA-R1 reactor core. This dummy element was also used to measure the flow distribution among the rectangular flow channels formed by the element's fuel plates. A CFD numerical model was developed to complement the studies. This work presents the proposed CFD model as well as a comparison between numerical and experimental results for the flow rate distribution among the internal flow channels. Numerical results show that the model reproduces the experiments very well and can be used for the studies as a more convenient and complementary tool. (author)

  9. Experimental and Numerical Simulations Predictions Comparison of Power and Efficiency in Hydraulic Turbine

    Directory of Open Access Journals (Sweden)

    Laura Castro

    2011-01-01

    On-site power and mass flow rate measurements were conducted in a hydroelectric power plant (Mexico). The mass flow rate was obtained using Gibson's water hammer-based method. A numerical counterpart was carried out using commercial CFD software, and flow simulations were performed for the principal components of a hydraulic turbine: the runner and the draft tube. Inlet boundary conditions for the runner were obtained from a previous simulation conducted in the spiral case. The computed results at the runner's outlet were used to conduct the subsequent draft tube simulation. The numerical results from the runner's flow simulation provided data to compute the torque and the turbine's power. Power-versus-efficiency curves were built, and very good agreement was found between experimental and numerical data.

  10. A numerical guide to the solution of the bidomain equations of cardiac electrophysiology

    KAUST Repository

    Pathmanathan, Pras

    2010-06-01

    Simulation of cardiac electrical activity using the bidomain equations can be a massively computationally demanding problem. This study provides a comprehensive guide to numerical bidomain modelling. Each component of bidomain simulations (discretisation, ODE solution, linear system solution, and parallelisation) is discussed; previously-used methods are reviewed, new methods are proposed, and issues which cause particular difficulty are highlighted. Particular attention is paid to the choice of stimulus currents, compatibility conditions for the equations, the solution of singular linear systems, and convergence of the numerical scheme. © 2010 Elsevier Ltd.

  11. A numerical guide to the solution of the bidomain equations of cardiac electrophysiology

    KAUST Repository

    Pathmanathan, Pras; Bernabeu, Miguel O.; Bordas, Rafel; Cooper, Jonathan; Garny, Alan; Pitt-Francis, Joe M.; Whiteley, Jonathan P.; Gavaghan, David J.

    2010-01-01

    Simulation of cardiac electrical activity using the bidomain equations can be a massively computationally demanding problem. This study provides a comprehensive guide to numerical bidomain modelling. Each component of bidomain simulations (discretisation, ODE solution, linear system solution, and parallelisation) is discussed; previously-used methods are reviewed, new methods are proposed, and issues which cause particular difficulty are highlighted. Particular attention is paid to the choice of stimulus currents, compatibility conditions for the equations, the solution of singular linear systems, and convergence of the numerical scheme. © 2010 Elsevier Ltd.

  12. A new approach for estimation of component failure rate

    International Nuclear Information System (INIS)

    Jordan Cizelj, R.; Kljenak, I.

    1999-01-01

    In this paper, a formal method for component failure rate estimation is described; it is proposed for use with components for which no specific numerical data necessary for probabilistic estimation exist. The framework of the method is the Bayesian updating procedure. A prior distribution is selected from a generic database, whereas the likelihood distribution is assessed from specific data on the component state using principles of fuzzy logic theory. With the proposed method, the component failure rate estimation is based on a much larger quantity of information compared to presently used classical methods. (author)

  13. Registration and Summation of Respiratory-Gated or Breath-Hold PET Images Based on Deformation Estimation of Lung from CT Image

    Directory of Open Access Journals (Sweden)

    Hideaki Haneishi

    2016-01-01

    Lung motion due to respiration causes image degradation in medical imaging, especially in nuclear medicine, which requires long acquisition times. We have developed a method for image correction between respiratory-gated (RG) PET images in different respiration phases or breath-hold (BH) PET images in inconsistent respiration phases. In the method, the RG or BH PET images in different respiration phases are deformed under two criteria: similarity of the image intensity distribution and smoothness of the estimated motion vector field (MVF). However, these criteria alone may cause unnatural estimates of lung motion. In this paper, assuming the use of a PET-CT scanner, we add another criterion: similarity to the motion direction estimated from inhalation and exhalation CT images. The proposed method was first applied to the numerical phantom XCAT with tumors and then applied to BH PET image data for seven patients. The resultant tumor contrasts and the estimated motion vector fields were compared with those obtained by our previous method. Through these experiments we confirmed that the proposed method can provide improved and more stable image quality for both RG and BH PET images.

  14. Improved linear least squares estimation using bounded data uncertainty

    KAUST Repository

    Ballal, Tarig

    2015-04-01

    This paper addresses the problem of linear least squares (LS) estimation of a vector x from linearly related observations. In spite of being unbiased, the original LS estimator suffers from high mean squared error, especially at low signal-to-noise ratios. The mean squared error (MSE) of the LS estimator can be improved by introducing some form of regularization based on certain constraints. We propose an improved LS (ILS) estimator that approximately minimizes the MSE, without imposing any constraints. To achieve this, we allow for perturbation in the measurement matrix. Then we utilize a bounded data uncertainty (BDU) framework to derive a simple iterative procedure to estimate the regularization parameter. Numerical results demonstrate that the proposed BDU-ILS estimator is superior to the original LS estimator, and it converges to the best linear estimator, the linear minimum mean squared error estimator (LMMSE), when the elements of x are statistically white.

  15. Improved linear least squares estimation using bounded data uncertainty

    KAUST Repository

    Ballal, Tarig; Al-Naffouri, Tareq Y.

    2015-01-01

    This paper addresses the problem of linear least squares (LS) estimation of a vector x from linearly related observations. In spite of being unbiased, the original LS estimator suffers from high mean squared error, especially at low signal-to-noise ratios. The mean squared error (MSE) of the LS estimator can be improved by introducing some form of regularization based on certain constraints. We propose an improved LS (ILS) estimator that approximately minimizes the MSE, without imposing any constraints. To achieve this, we allow for perturbation in the measurement matrix. Then we utilize a bounded data uncertainty (BDU) framework to derive a simple iterative procedure to estimate the regularization parameter. Numerical results demonstrate that the proposed BDU-ILS estimator is superior to the original LS estimator, and it converges to the best linear estimator, the linear minimum mean squared error estimator (LMMSE), when the elements of x are statistically white.
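
    The BDU iteration for the regularization parameter is specific to the paper, but the kind of regularized least-squares estimate it produces can be illustrated with a fixed Tikhonov parameter, as in the hedged sketch below; the problem sizes, noise level and regularization parameter are arbitrary choices.

      import numpy as np

      rng = np.random.default_rng(0)
      n, m = 50, 10
      A = rng.normal(size=(n, m))
      x_true = rng.normal(size=m)
      y = A @ x_true + 0.5 * rng.normal(size=n)        # noisy observations

      # Ordinary least squares
      x_ls = np.linalg.lstsq(A, y, rcond=None)[0]

      # Tikhonov-regularised least squares, x = (A^T A + gamma I)^{-1} A^T y.
      # The BDU-ILS paper derives gamma iteratively; here it is simply fixed.
      gamma = 1.0
      x_reg = np.linalg.solve(A.T @ A + gamma * np.eye(m), A.T @ y)

      print("LS  error:", np.linalg.norm(x_ls - x_true))
      print("Reg error:", np.linalg.norm(x_reg - x_true))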

  16. Estimation of single plane unbalance parameters of a rotor-bearing system using Kalman filtering based force estimation technique

    Science.gov (United States)

    Shrivastava, Akash; Mohanty, A. R.

    2018-03-01

    This paper proposes a model-based method to estimate single-plane unbalance parameters (amplitude and phase angle) in a rotor using a Kalman filter and a recursive least squares based input force estimation technique. The Kalman filter based input force estimation technique requires a state-space model and response measurements. A modified system equivalent reduction expansion process (SEREP) technique is employed to obtain a reduced-order model of the rotor system so that limited response measurements can be used. The method is demonstrated using numerical simulations on a rotor-disk-bearing system. Results are presented for different measurement sets including displacement, velocity, and rotational response. Effects of measurement noise level, filter parameters (process noise covariance and forgetting factor), and modeling error are also presented, and it is observed that the unbalance parameter estimation is robust with respect to measurement noise.
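
    For readers unfamiliar with the estimator at the core of this approach, the sketch below shows one predict/update cycle of a generic linear Kalman filter. It is not the SEREP-reduced rotor model or the input-force estimation scheme of the paper; the system matrices, noise covariances and demo values are assumptions for illustration.

      import numpy as np

      def kalman_step(x, P, z, F, H, Q, R):
          """One predict/update cycle of a linear Kalman filter.

          x, P : state estimate and covariance
          z    : new measurement
          F, H : state-transition and measurement matrices
          Q, R : process and measurement noise covariances
          """
          # Predict
          x_pred = F @ x
          P_pred = F @ P @ F.T + Q
          # Update
          S = H @ P_pred @ H.T + R                    # innovation covariance
          K = P_pred @ H.T @ np.linalg.inv(S)         # Kalman gain
          x_new = x_pred + K @ (z - H @ x_pred)
          P_new = (np.eye(len(x)) - K @ H) @ P_pred
          return x_new, P_new

      # Tiny demo: a 1-D constant state observed in noise (all values illustrative)
      x, P = np.array([0.0]), np.eye(1)
      F = H = np.eye(1)
      Q, R = 0.01 * np.eye(1), 0.25 * np.eye(1)
      for z in [0.9, 1.1, 1.0]:
          x, P = kalman_step(x, P, np.array([z]), F, H, Q, R)
      print(x)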

  17. Response-Based Estimation of Sea State Parameters

    DEFF Research Database (Denmark)

    Nielsen, Ulrik Dam

    2007-01-01

    of measured ship responses. It is therefore interesting to investigate how the filtering aspect, introduced by the FRF, affects the final outcome of the estimation procedures. The paper contains a study based on numerically generated time series, and the study shows that filtering has an influence... calculated by a 3-D time domain code and by closed-form (analytical) expressions, respectively. Based on comparisons with wave radar measurements and satellite measurements, it is seen that the wave estimations based on closed-form expressions exhibit a reasonable energy content, but the distribution of energy...

  18. Numerical Calculation of Transport Based on the Drift-Kinetic Equation for Plasmas in General Toroidal Magnetic Geometry: Numerical Methods; Calculo Numerico de Transporte mediante la Ecuacion Cinetica de Deriva para Plasmas en Geometria Magnetica Toroidal: Metodos Numericos

    Energy Technology Data Exchange (ETDEWEB)

    Reynolds, J. M.; Lopez-Bruna, D.

    2009-10-12

    In this report we continue the description of a newly developed numerical method to solve the drift-kinetic equation for ions and electrons in toroidal plasmas. Several numerical aspects, already outlined in a previous report [Informes Tecnicos Ciemat 1165, May 2009], will now be treated in more detail. Aside from discussing the method in the context of other existing codes, various aspects will now be explained from the viewpoint of numerical methods: the way convection equations are solved, the adopted boundary conditions, the real-space meshing procedures along with new software developed to build them, and some additional questions related to the parallelization and the numerical integration. (Author) 16 refs.

  19. An alternative procedure for estimating the population mean in simple random sampling

    Directory of Open Access Journals (Sweden)

    Housila P. Singh

    2012-03-01

    This paper deals with the problem of estimating the finite population mean using auxiliary information in simple random sampling. First, we suggest a correction to the mean squared error of the estimator proposed by Gupta and Shabbir [On improvement in estimating the population mean in simple random sampling. Jour. Appl. Statist. 35(5) (2008), pp. 559-566]. We then propose a ratio-type estimator and study its properties in simple random sampling. Numerically, we show that the proposed class of estimators is more efficient than various known estimators, including the Gupta and Shabbir (2008) estimator.
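
    For context, the classical ratio estimator that this family of estimators builds on is ybar_R = (ybar / xbar) * Xbar, where Xbar is the known population mean of the auxiliary variable. The sketch below computes it on made-up data; it is not the corrected Gupta-Shabbir estimator or the new estimator proposed in the paper.

      import numpy as np

      def ratio_estimate(y_sample, x_sample, x_pop_mean):
          """Classical ratio estimator of the population mean of y in simple
          random sampling: ybar_R = (ybar / xbar) * Xbar, where Xbar is the
          known population mean of the auxiliary variable x."""
          return y_sample.mean() / x_sample.mean() * x_pop_mean

      # Toy illustration with made-up data
      rng = np.random.default_rng(42)
      x = rng.uniform(10, 20, size=30)           # auxiliary variable in the sample
      y = 2.0 * x + rng.normal(0, 1, size=30)    # study variable, correlated with x
      print(ratio_estimate(y, x, x_pop_mean=15.0))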

  20. PMICALC: an R code-based software for estimating post-mortem interval (PMI) compatible with Windows, Mac and Linux operating systems.

    Science.gov (United States)

    Muñoz-Barús, José I; Rodríguez-Calvo, María Sol; Suárez-Peñaranda, José M; Vieira, Duarte N; Cadarso-Suárez, Carmen; Febrero-Bande, Manuel

    2010-01-30

    In legal medicine the correct determination of the time of death is of utmost importance. Recent advances in estimating the post-mortem interval (PMI) have made use of vitreous humour chemistry in conjunction with linear regression, but the results are questionable. In this paper we present PMICALC, an R code-based freeware package which estimates the PMI in cadavers of recent death by measuring the concentrations of potassium ([K+]), hypoxanthine ([Hx]) and urea ([U]) in the vitreous humour, using two different regression models: Additive Models (AM) and Support Vector Machines (SVM), which offer more flexibility than the previously used linear regression. The results from both models are better than those published to date and can give a numerical expression of the PMI, with confidence intervals and graphic support, within 20 min. The program also takes into account the cause of death. 2009 Elsevier Ireland Ltd. All rights reserved.

  1. Monitoring hydraulic fractures: state estimation using an extended Kalman filter

    International Nuclear Information System (INIS)

    Rochinha, Fernando Alves; Peirce, Anthony

    2010-01-01

    There is considerable interest in using remote elastostatic deformations to identify the evolving geometry of underground fractures that are forced to propagate by the injection of high pressure viscous fluids. These so-called hydraulic fractures are used to increase the permeability in oil and gas reservoirs as well as to pre-fracture ore-bodies for enhanced mineral extraction. The undesirable intrusion of these hydraulic fractures into environmentally sensitive areas or into regions in mines which might pose safety hazards has stimulated the search for techniques to enable the evolving hydraulic fracture geometries to be monitored. Previous approaches to this problem have involved the inversion of the elastostatic data at isolated time steps in the time series provided by tiltmeter measurements of the displacement gradient field at selected points in the elastic medium. At each time step, parameters in simple static models of the fracture (e.g. a single displacement discontinuity) are identified. The approach adopted in this paper is not to regard the sequence of sampled elastostatic data as independent, but rather to treat the data as linked by the coupled elastic-lubrication equations that govern the propagation of the evolving hydraulic fracture. We combine the Extended Kalman Filter (EKF) with features of a recently developed implicit numerical scheme to solve the coupled free boundary problem in order to form a novel algorithm to identify the evolving fracture geometry. Numerical experiments demonstrate that, despite excluding significant physical processes in the forward numerical model, the EKF-numerical algorithm is able to compensate for the un-modeled dynamics by using the information fed back from tiltmeter data. Indeed the proposed algorithm is able to provide reasonably faithful estimates of the fracture geometry, which are shown to converge to the actual hydraulic fracture geometry as the number of tiltmeters is increased. Since the location of

  2. Model uncertainty of various settlement estimation methods in shallow tunnels excavation; case study: Qom subway tunnel

    Science.gov (United States)

    Khademian, Amir; Abdollahipour, Hamed; Bagherpour, Raheb; Faramarzi, Lohrasb

    2017-10-01

    In addition to the numerous planning and executive challenges, underground excavation in urban areas is always followed by certain destructive effects, especially on the ground surface; ground settlement is the most important of these effects, and it can be estimated by different empirical, analytical and numerical methods. Since geotechnical models are associated with considerable model uncertainty, this study characterized the model uncertainty of settlement estimation models through a systematic comparison between model predictions and past performance data derived from instrumentation. To do so, the amount of surface settlement induced by excavation of the Qom subway tunnel was estimated via empirical (Peck), analytical (Loganathan and Poulos) and numerical (FDM) methods; the resulting maximum settlement values were 1.86, 2.02 and 1.52 cm, respectively. The comparison of these predicted values with the actual data from instrumentation was used to quantify the uncertainty of each model. The numerical model outcomes, with a relative error of 3.8%, best matched the reality, and the analytical method, with a relative error of 27.8%, yielded the highest level of model uncertainty.

  3. Ferrofluids: Modeling, numerical analysis, and scientific computation

    Science.gov (United States)

    Tomas, Ignacio

    This dissertation presents some developments in the Numerical Analysis of Partial Differential Equations (PDEs) describing the behavior of ferrofluids. The most widely accepted PDE model for ferrofluids is the Micropolar model proposed by R.E. Rosensweig. The Micropolar Navier-Stokes Equations (MNSE) are a subsystem of PDEs within the Rosensweig model. Being a simplified version of the much larger system of PDEs proposed by Rosensweig, the MNSE are a natural starting point of this thesis. The MNSE couple linear velocity u, angular velocity w, and pressure p. We propose and analyze a first-order semi-implicit fully-discrete scheme for the MNSE, which decouples the computation of the linear and angular velocities, is unconditionally stable and delivers optimal convergence rates under assumptions analogous to those used for the Navier-Stokes equations. Moving on to the much more complex Rosensweig model, we provide a definition (approximation) for the effective magnetizing field h, and explain the assumptions behind this definition. Unlike previous definitions available in the literature, this new definition is able to accommodate the effect of external magnetic fields. Using this definition we set up the system of PDEs coupling linear velocity u, pressure p, angular velocity w, magnetization m, and magnetic potential ϕ. We show that this system is energy-stable and devise a numerical scheme that mimics the same stability property. We prove that solutions of the numerical scheme always exist and, under certain simplifying assumptions, that the discrete solutions converge. A notable outcome of the analysis of the numerical scheme for the Rosensweig model is the choice of finite element spaces that allow the construction of an energy-stable scheme. Finally, with the lessons learned from the Rosensweig model, we develop a diffuse-interface model describing the behavior of two-phase ferrofluid flows and present an energy-stable numerical scheme for this model. For a

  4. Numerical analysis

    CERN Document Server

    Scott, L Ridgway

    2011-01-01

    Computational science is fundamentally changing how technological questions are addressed. The design of aircraft, automobiles, and even racing sailboats is now done by computational simulation. The mathematical foundation of this new approach is numerical analysis, which studies algorithms for computing expressions defined with real numbers. Emphasizing the theory behind the computation, this book provides a rigorous and self-contained introduction to numerical analysis and presents the advanced mathematics that underpin industrial software, including complete details that are missing from most textbooks. Using an inquiry-based learning approach, Numerical Analysis is written in a narrative style, provides historical background, and includes many of the proofs and technical details in exercises. Students will be able to go beyond an elementary understanding of numerical simulation and develop deep insights into the foundations of the subject. They will no longer have to accept the mathematical gaps that ex...

  5. Numerical method for the nonlinear Fokker-Planck equation

    International Nuclear Information System (INIS)

    Zhang, D.S.; Wei, G.W.; Kouri, D.J.; Hoffman, D.K.

    1997-01-01

    A practical method based on distributed approximating functionals (DAFs) is proposed for numerically solving a general class of nonlinear time-dependent Fokker-Planck equations. The method relies on a numerical scheme that couples the usual path-integral concept to the DAF idea. The high accuracy and reliability of the method are illustrated by applying it to an exactly solvable nonlinear Fokker-Planck equation, and the method is compared with the accurate K-point Stirling interpolation formula finite-difference method. The approach is also used successfully to solve a nonlinear self-consistent dynamic mean-field problem for which both the cumulant expansion and scaling theory have been found by Drozdov and Morillo [Phys. Rev. E 54, 931 (1996)] to be inadequate to describe the occurrence of a long-lived transient bimodality. The standard interpretation of the transient bimodality in terms of the flat region in the kinetic potential fails for the present case. An alternative analysis based on the effective potential of the Schroedinger-like Fokker-Planck equation is suggested. Our analysis of the transient bimodality is strongly supported by two examples that are numerically much more challenging than other examples that have been previously reported for this problem. copyright 1997 The American Physical Society

  6. Field Evaluation of the System Identification Approach for Tension Estimation of External Tendons

    Directory of Open Access Journals (Sweden)

    Myung-Hyun Noh

    2015-01-01

    Full Text Available Various types of external tendons are considered to verify the applicability of a tension estimation method based on a finite element model with a system identification technique. The proposed method is applied to estimate the tension of a benchmark numerical example, a model structure, and a field structure. The numerical and experimental results show that existing methods such as taut string theory and the linear regression method produce large errors in the estimated tension when the condition of the external tendon differs from the basic assumptions used in deriving the relationship between tension and natural frequency. However, the proposed method gives reasonable results for all of the external tendons considered in this study. Furthermore, the proposed method can evaluate the accuracy of the estimated tension indirectly by comparing the measured and calculated natural frequencies. Therefore, the proposed method can be effectively used for field application to various types of external tendons.
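
    For reference, the taut string theory mentioned above relates tension to the n-th natural frequency through T = 4 m L² (f_n / n)², with m the mass per unit length. The sketch below evaluates this baseline formula; the tendon length, mass and frequency are illustrative values, not those of the benchmark example in the paper.

    def taut_string_tension(f_n, n, length_m, mass_per_m):
        """Tension (N) from the n-th natural frequency f_n (Hz) of a taut string."""
        return 4.0 * mass_per_m * length_m ** 2 * (f_n / n) ** 2

    # Illustrative tendon: 12 m long, 50 kg/m, first mode measured at 4.2 Hz -> ~508 kN
    print(taut_string_tension(f_n=4.2, n=1, length_m=12.0, mass_per_m=50.0))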

  7. Axisymmetric Numerical Modeling of Pulse Detonation Rocket Engines

    Science.gov (United States)

    Morris, Christopher I.

    2005-01-01

    Pulse detonation rocket engines (PDREs) have generated research interest in recent years as a chemical propulsion system potentially offering improved performance and reduced complexity compared to conventional rocket engines. The detonative mode of combustion employed by these devices offers a thermodynamic advantage over the constant-pressure deflagrative combustion mode used in conventional rocket engines and gas turbines. However, while this theoretical advantage has spurred considerable interest in building PDRE devices, the unsteady blowdown process intrinsic to the PDRE has made realistic estimates of the actual propulsive performance problematic. The recent review article by Kailasanath highlights some of the progress that has been made in comparing the available experimental measurements with analytical and numerical models. In recent work by the author, a quasi-one-dimensional, finite rate chemistry CFD model was utilized to study the gasdynamics and performance characteristics of PDREs over a range of blowdown pressure ratios from 1-1000. Models of this type are computationally inexpensive, and enable first-order parametric studies of the effect of several nozzle and extension geometries on PDRE performance over a wide range of conditions. However, the quasi-one-dimensional approach is limited in that it cannot properly capture the multidimensional blast wave and flow expansion downstream of the PDRE, nor can it resolve nozzle flow separation if present. Moreover, the previous work was limited to single-pulse calculations. In this paper, an axisymmetric finite rate chemistry model is described and utilized to study these issues in greater detail. Example Mach number contour plots showing the multidimensional blast wave and nozzle exhaust plume are shown. The performance results are compared with the quasi-one-dimensional results from the previous paper. Both Euler and Navier-Stokes solutions are calculated in order to determine the effect of viscous

  8. Analysis of control rod behavior based on numerical simulation

    International Nuclear Information System (INIS)

    Ha, D. G.; Park, J. K.; Park, N. G.; Suh, J. M.; Jeon, K. L.

    2010-01-01

    The main function of a control rod is to control core reactivity change during operation associated with changes in power, coolant temperature, and dissolved boron concentration by the insertion and withdrawal of control rods from the fuel assemblies. In a scram, the control rod assemblies are released from the CRDMs (Control Rod Drive Mechanisms) and, due to gravity, drop rapidly into the fuel assemblies. The control rod insertion time during a scram must be within the time limits established by the overall core safety analysis. To assure the control rod operational functions, the guide thimbles shall not obstruct the insertion and withdrawal of the control rods or cause any damage to the fuel assembly. When fuel assembly bow occurs, it can affect both the operating performance and the core safety. In this study, the drag forces of the control rod are estimated by a numerical simulation to evaluate the guide tube bow effect on control rod withdrawal. The contact condition effects are also considered. A full scale 3D model is developed for the evaluation, and ANSYS - commercial numerical analysis code - is used for this numerical simulation. (authors)

  9. Key issues review: numerical studies of turbulence in stars

    Science.gov (United States)

    Arnett, W. David; Meakin, Casey

    2016-10-01

    Three major problems of single-star astrophysics are convection, magnetic fields and rotation. Numerical simulations of convection in stars now have sufficient resolution to be truly turbulent, with effective Reynolds numbers of Re > 10⁴, and some turbulent boundary layers have been resolved. Implications of these developments are discussed for stellar structure, evolution and explosion as supernovae. Methods for three-dimensional (3D) simulations of stars are compared and discussed for 3D atmospheres, solar rotation, core-collapse and stellar boundary layers. Reynolds-averaged Navier-Stokes (RANS) analysis of the numerical simulations has been shown to provide a novel and quantitative estimate of resolution errors. Present treatments of stellar boundaries require revision, even for early burning stages (e.g. for mixing regions during He-burning). As stellar core-collapse is approached, asymmetry and fluctuations grow, rendering spherically symmetric models of progenitors more unrealistic. The numerical resolution of several different types of 3D stellar simulations is compared; it is suggested that core-collapse simulations may be under-resolved. The Rayleigh-Taylor instability in explosions has a deep connection to convection, for which the abundance structure in supernova remnants may provide evidence.

  10. Coupling impedance of an in-vacuum undulator: Measurement, simulation, and analytical estimation

    Directory of Open Access Journals (Sweden)

    Victor Smaluk

    2014-07-01

    Full Text Available One of the important issues of the in-vacuum undulator design is the coupling impedance of the vacuum chamber, which includes tapered transitions with variable gap size. To get complete and reliable information on the impedance, analytical estimates, numerical simulations and beam-based measurements have been performed at Diamond Light Source, a forthcoming upgrade of which includes introducing additional insertion device (ID) straights. The impedance of an already existing ID vessel geometrically similar to the new one has been measured using the orbit bump method. The measurement results are discussed in this paper in comparison with analytical estimates and numerical simulations.

  11. Equilibrium method for estimating the first hydrolysis constant of tetravalent plutonium

    International Nuclear Information System (INIS)

    Silver, G.L.

    2010-01-01

    A new method for estimating the numerical value of the first hydrolysis constant of tetravalent plutonium is illustrated by examples. It uses the pH and the equilibrium fractions of two of the Pu oxidation states. They are substituted into one or more of a choice of formulas that render explicit estimates of the hydrolysis constant. (author)

  12. Analytical Model for Estimating Terrestrial Cosmic Ray Fluxes Nearly Anytime and Anywhere in the World: Extension of PARMA/EXPACS.

    Directory of Open Access Journals (Sweden)

    Tatsuhiko Sato

    Full Text Available By extending our previously established model, here we present a new model called "PHITS-based Analytical Radiation Model in the Atmosphere (PARMA) version 3.0," which can instantaneously estimate terrestrial cosmic ray fluxes of neutrons, protons, ions with charge up to 28 (Ni), muons, electrons, positrons, and photons nearly anytime and anywhere in the Earth's atmosphere. The model comprises numerous analytical functions with parameters whose numerical values were fitted to reproduce the results of the extensive air shower (EAS) simulation performed by the Particle and Heavy Ion Transport code System (PHITS). The accuracy of the EAS simulation was well verified using various experimental data, while that of PARMA3.0 was confirmed by the high R² values of the fit. The models to be used for estimating radiation doses due to cosmic ray exposure, cosmic-ray-induced ionization rates, and count rates of neutron monitors were validated by investigating their capability to reproduce those quantities measured under various conditions. PARMA3.0 is available freely and is easy to use, as implemented in the open-access software program EXcel-based Program for Calculating Atmospheric Cosmic ray Spectrum (EXPACS). Because of these features, the new version of PARMA/EXPACS can be an important tool in various research fields such as geosciences, cosmic ray physics, and radiation research.

  13. Joint Estimation of Multiple Precision Matrices with Common Structures.

    Science.gov (United States)

    Lee, Wonyul; Liu, Yufeng

    Estimation of inverse covariance matrices, known as precision matrices, is important in various areas of statistical analysis. In this article, we consider estimation of multiple precision matrices sharing some common structures. In this setting, estimating each precision matrix separately can be suboptimal as it ignores potential common structures. This article proposes a new approach to parameterize each precision matrix as a sum of common and unique components and estimate multiple precision matrices in a constrained ℓ1 minimization framework. We establish both estimation and selection consistency of the proposed estimator in the high dimensional setting. The proposed estimator achieves a faster convergence rate for the common structure in certain cases. Our numerical examples demonstrate that our new estimator can perform better than several existing methods in terms of the entropy loss and Frobenius loss. An application to a glioblastoma cancer data set reveals some interesting gene networks across multiple cancer subtypes.
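
    The constrained ℓ1 common/unique decomposition of the paper is not reproduced here; as a simpler, loosely related baseline, the sketch below fits one sparse precision matrix per group with scikit-learn's graphical lasso and inspects the support shared across groups. The data and the regularization level are placeholders chosen for illustration.

    import numpy as np
    from sklearn.covariance import GraphicalLasso

    rng = np.random.default_rng(1)
    # Two synthetic groups of observations on the same 5 variables
    groups = [rng.multivariate_normal(np.zeros(5), np.eye(5), size=100) for _ in range(2)]

    precisions = [GraphicalLasso(alpha=0.1).fit(X).precision_ for X in groups]

    # Crude "common structure": entries that are non-zero in both estimated precision matrices
    common_support = (np.abs(precisions[0]) > 1e-3) & (np.abs(precisions[1]) > 1e-3)
    print(common_support.astype(int))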

  14. Temperature field and heat flow of the Danish-German border region − borehole measurements and numerical modelling

    DEFF Research Database (Denmark)

    Fuchs, Sven; Balling, Niels

    We present a regional 3D numerical crustal temperature model and analyze the present-day conductive thermal field of the Danish-German border region located in the North German Basin. A comprehensive analysis of borehole and well-log data on a regional scale is conducted to derive both the model......W/m² higher than low values reported in some previous studies for this region. Heat flow from the mantle is estimated to be between 33 and 40 mW/m² (q1–q3; mean of 37 ± 7 mW/m²). Pronounced lateral temperature variations are caused mainly by complex geological structures, including a large amount of salt...... structures and marked lateral variations in the thickness of basin sediments. The associated variations in rock thermal conductivity generate significant variations in model heat flow and large variations in temperature gradients. Major geothermal sandstone reservoirs (e.g. Rhaetian and Middle Buntsandstein...

  15. Numerical analysis for prediction of fatigue crack opening level

    International Nuclear Information System (INIS)

    Choi, Hyeon Chang

    2004-01-01

    Finite Element Analysis (FEA) is the most popular numerical method for simulating plasticity-induced fatigue crack closure and can predict fatigue crack closure behavior. Finite element analysis under a plane stress state using 4-node isoparametric elements is performed to investigate the detailed closure behavior of fatigue cracks, and the numerical results are compared with experimental results. A mesh of constant-size elements on the crack surface cannot correctly predict the opening level of a fatigue crack, as shown in previous works. The crack opening behavior for a mesh whose element size changes linearly shows an almost flat stress level after the crack tip has passed through the monotonic plastic zone. The predicted crack opening level agrees well with published experimental data regardless of stress ratio when the elements are sized in proportion to the reversed plastic zone associated with the opening stress intensity factor. Numerical interpolation of the finite element results can precisely predict the crack opening level. This method shows good agreement with the experimental data regardless of the stress ratio and the kind of material.

  16. New population-based exome data question the pathogenicity of some genetic variants previously associated with Marfan syndrome

    DEFF Research Database (Denmark)

    Yang, Ren-Qiang; Jabbari, Javad; Cheng, Xiao-Shu

    2014-01-01

    BACKGROUND: Marfan syndrome (MFS) is a rare autosomal dominantly inherited connective tissue disorder with an estimated prevalence of 1:5,000. More than 1000 variants have been previously reported to be associated with MFS. However, the disease-causing effect of these variants may be questionable...

  17. Orientation estimation algorithm applied to high-spin projectiles

    International Nuclear Information System (INIS)

    Long, D F; Lin, J; Zhang, X M; Li, J

    2014-01-01

    High-spin projectiles are low cost military weapons. Accurate orientation information is critical to the performance of the high-spin projectiles control system. However, orientation estimators have not been well translated from flight vehicles since they are too expensive, lack launch robustness, do not fit within the allotted space, or are too application specific. This paper presents an orientation estimation algorithm specific for these projectiles. The orientation estimator uses an integrated filter to combine feedback from a three-axis magnetometer, two single-axis gyros and a GPS receiver. As a new feature of this algorithm, the magnetometer feedback estimates roll angular rate of projectile. The algorithm also incorporates online sensor error parameter estimation performed simultaneously with the projectile attitude estimation. The second part of the paper deals with the verification of the proposed orientation algorithm through numerical simulation and experimental tests. Simulations and experiments demonstrate that the orientation estimator can effectively estimate the attitude of high-spin projectiles. Moreover, online sensor calibration significantly enhances the estimation performance of the algorithm. (paper)

  18. Orientation estimation algorithm applied to high-spin projectiles

    Science.gov (United States)

    Long, D. F.; Lin, J.; Zhang, X. M.; Li, J.

    2014-06-01

    High-spin projectiles are low cost military weapons. Accurate orientation information is critical to the performance of the high-spin projectiles control system. However, orientation estimators have not been well translated from flight vehicles since they are too expensive, lack launch robustness, do not fit within the allotted space, or are too application specific. This paper presents an orientation estimation algorithm specific for these projectiles. The orientation estimator uses an integrated filter to combine feedback from a three-axis magnetometer, two single-axis gyros and a GPS receiver. As a new feature of this algorithm, the magnetometer feedback estimates roll angular rate of projectile. The algorithm also incorporates online sensor error parameter estimation performed simultaneously with the projectile attitude estimation. The second part of the paper deals with the verification of the proposed orientation algorithm through numerical simulation and experimental tests. Simulations and experiments demonstrate that the orientation estimator can effectively estimate the attitude of high-spin projectiles. Moreover, online sensor calibration significantly enhances the estimation performance of the algorithm.
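
    One intuition behind using magnetometer feedback to estimate roll rate, as described above, is that the transverse field components measured in the body frame of a spinning projectile rotate at the roll rate. The sketch below demonstrates that idea on simulated data; it is not the paper's integrated magnetometer/gyro/GPS filter, and the spin rate and noise levels are assumed values.

    import numpy as np

    rng = np.random.default_rng(1)
    dt = 1.0e-3                          # sample period (s)
    true_roll_rate = 2.0 * np.pi * 50.0  # 50 rev/s spin (rad/s), an assumed value
    t = np.arange(0.0, 0.2, dt)
    phase = true_roll_rate * t

    # Transverse magnetometer components seen in the body frame rotate at the roll rate
    by = np.cos(phase) + rng.normal(0.0, 0.01, t.size)
    bz = np.sin(phase) + rng.normal(0.0, 0.01, t.size)

    meas_phase = np.unwrap(np.arctan2(bz, by))     # recover the continuous roll angle
    roll_rate_est = np.diff(meas_phase) / dt       # finite-difference roll rate (rad/s)
    print(roll_rate_est.mean(), true_roll_rate)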

  19. Estimation and Control for Linear Systems with Additive Cauchy Noise

    Science.gov (United States)

    2013-12-17

    Abstract not recovered; only reference-list fragments were extracted (citations to J. L. Speyer and W. H. Chung, Stochastic Processes, Estimation, and Control, SIAM, 2008, and N. N. Taleb, The Black Swan: The Impact of the Highly Improbable). The surviving text indicates that the estimator was evaluated numerically for a third-order example.

  20. Numerical simulation and experimental validation of coiled adiabatic capillary tubes

    Energy Technology Data Exchange (ETDEWEB)

    Garcia-Valladares, O. [Centro de Investigacion en Energia, Universidad Nacional Autonoma de Mexico (UNAM), Apdo. Postal 34, 62580 Temixco, Morelos (Mexico)

    2007-04-15

    The objective of this study is to extend and validate the model developed and presented in previous works [O. Garcia-Valladares, C.D. Perez-Segarra, A. Oliva, Numerical simulation of capillary tube expansion devices behaviour with pure and mixed refrigerants considering metastable region. Part I: mathematical formulation and numerical model, Applied Thermal Engineering 22 (2) (2002) 173-182; O. Garcia-Valladares, C.D. Perez-Segarra, A. Oliva, Numerical simulation of capillary tube expansion devices behaviour with pure and mixed refrigerants considering metastable region. Part II: experimental validation and parametric studies, Applied Thermal Engineering 22 (4) (2002) 379-391] to coiled adiabatic capillary tube expansion devices working with pure and mixed refrigerants. The discretized governing equations are coupled using an implicit step by step method. A special treatment has been implemented in order to consider transitions (subcooled liquid region, metastable liquid region, metastable two-phase region and equilibrium two-phase region). All the flow variables (enthalpies, temperatures, pressures, vapor qualities, velocities, heat fluxes, etc.) together with the thermophysical properties are evaluated at each point of the grid in which the domain is discretized. The numerical model allows analysis of aspects such as geometry, type of fluid (pure substances and mixtures), critical or non-critical flow conditions, metastable regions, and transient aspects. Comparison of the numerical simulation with a wide range of experimental data presented in the technical literature will be shown in the present article in order to validate the model developed. (author)

  1. Numerical Development

    Science.gov (United States)

    Siegler, Robert S.; Braithwaite, David W.

    2016-01-01

    In this review, we attempt to integrate two crucial aspects of numerical development: learning the magnitudes of individual numbers and learning arithmetic. Numerical magnitude development involves gaining increasingly precise knowledge of increasing ranges and types of numbers: from non-symbolic to small symbolic numbers, from smaller to larger…

  2. Numerical investigation of fluid flow and heat transfer under high heat flux using rectangular micro-channels

    KAUST Repository

    Mansoor, Mohammad M.

    2012-02-01

    A 3D-conjugate numerical investigation was conducted to predict heat transfer characteristics in a rectangular cross-sectional micro-channel employing simultaneously developing single-phase flows. The numerical code was validated by comparison with previous experimental and numerical results for the same micro-channel dimensions and with classical correlations based on conventionally sized channels. High heat fluxes up to 130 W/cm² were applied to investigate micro-channel thermal characteristics. The entire computational domain was discretized using a 120×160×100 grid for the micro-channel with an aspect ratio of α = 4.56 and examined for Reynolds numbers in the laminar range (Re = 500-2000) using FLUENT. De-ionized water served as the cooling fluid while the micro-channel substrate was made of copper. Validation results were found to be in good agreement with previous experimental and numerical data [1], with an average deviation of less than 4.2%. As the applied heat flux increased, an increase in heat transfer coefficient values was observed. Also, the Reynolds number required for transition from single-phase to two-phase flow was found to increase. A correlation is proposed for the average Nusselt number describing the heat transfer characteristics in micro-channels with simultaneously developing, single-phase flows. © 2011 Elsevier Ltd.

  3. A numerical approach for assessing effects of shear on equivalent permeability and nonlinear flow characteristics of 2-D fracture networks

    Science.gov (United States)

    Liu, Richeng; Li, Bo; Jiang, Yujing; Yu, Liyuan

    2018-01-01

    Hydro-mechanical properties of rock fractures are core issues for many geoscience and geo-engineering practices. Previous experimental and numerical studies have revealed that shear processes could greatly enhance the permeability of single rock fractures, yet the shear effects on hydraulic properties of fractured rock masses have received little attention. In most previous fracture network models, single fractures are typically presumed to be formed by parallel plates and flow is presumed to obey the cubic law. However, related studies have suggested that the parallel plate model cannot realistically represent the surface characters of natural rock fractures, and the relationship between flow rate and pressure drop will no longer be linear at sufficiently large Reynolds numbers. In the present study, a numerical approach was established to assess the effects of shear on the hydraulic properties of 2-D discrete fracture networks (DFNs) in both linear and nonlinear regimes. DFNs considering fracture surface roughness and variation of aperture in space were generated using an originally developed code, DFNGEN. Numerical simulations solving the Navier-Stokes equations were performed to simulate the fluid flow through these DFNs. A fracture that cuts through each model was sheared and, by varying the shear and normal displacements, the effects of shear on equivalent permeability and nonlinear flow characteristics of DFNs were estimated. The results show that the transition from a linear to a nonlinear flow regime occurs at a critical hydraulic gradient J on the order of 10⁻⁴. Shear can reduce the equivalent permeability significantly in the orientation perpendicular to the sheared fracture; the reduction reaches 53.86% when J = 1, shear displacement Ds = 7 mm, and normal displacement Dn = 1 mm. By fitting the calculated results, a mathematical expression for δ2 is established to help choose the proper governing equations when
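
    For context, the cubic law mentioned in the abstract gives the laminar flow rate between parallel plates of aperture b as Q = w b³ ΔP / (12 μ L). The sketch below evaluates this relation for a single illustrative fracture; none of the numbers come from the paper's DFN simulations.

    def cubic_law_flow(aperture_m, width_m, dp_pa, length_m, mu_pa_s=1.0e-3):
        """Volumetric flow rate (m^3/s) through a parallel-plate fracture (cubic law)."""
        return width_m * aperture_m ** 3 * dp_pa / (12.0 * mu_pa_s * length_m)

    # Illustrative case: 0.5 mm aperture, 1 m wide, 100 Pa drop over 10 m, water at 20 C
    print(cubic_law_flow(aperture_m=5.0e-4, width_m=1.0, dp_pa=100.0, length_m=10.0))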

  4. Robustness of numerical TIG welding simulation of 3D structures in stainless steel 316L

    International Nuclear Information System (INIS)

    El-Ahmar, W.

    2007-04-01

    Numerical welding simulation is considered to be one of those mechanical problems that involve a high level of nonlinearity and require good knowledge of various scientific fields. 'Robustness analysis' is a suitable tool to control the quality and guarantee the reliability of numerical welding results. The robustness of a numerical simulation of welding is related to the sensitivity of the results to the modelling assumptions and input parameters. A simulation is said to be robust if the result it produces is not very sensitive to uncertainties in the input data. The term 'robust' was coined in statistics by G.E.P. Box in 1953. Various definitions of greater or lesser mathematical rigor are possible for the term, but in general, referring to a statistical estimator, it means 'insensitive to small deviations from the idealized assumptions for which the estimator is optimized'. In order to evaluate the robustness of numerical welding simulation, sensitivity analyses on thermomechanical models and parameters have been conducted. In the first step, we search for a reference solution which gives the best agreement with the thermal and mechanical experimental results. The second step consists in determining, through numerical simulations, which parameters have the largest influence on the residual stresses induced by the welding process. The residual stresses were predicted using the finite element method performed with Code-Aster of EDF and SYSWELD of ESI-GROUP. An analysis of robustness can prove heavy and expensive, making it seem an unjustifiable route. However, only by developing such analysis tools can predictive methods become a useful tool for industry. (author)

  5. Combining weather radar nowcasts and numerical weather prediction models to estimate short-term quantitative precipitation and uncertainty

    DEFF Research Database (Denmark)

    Jensen, David Getreuer

    The topic of this Ph.D. thesis is short-term forecasting of precipitation for up to 6 hours, known as nowcasting. The focus is on improving the precision of deterministic nowcasts, assimilating radar extrapolation model (REM) data into the Danish Meteorological Institute's (DMI) HIRLAM numerical weather

  6. Numerical Optimization in Microfluidics

    DEFF Research Database (Denmark)

    Jensen, Kristian Ejlebjærg

    2017-01-01

    Numerical modelling can illuminate the working mechanism and limitations of microfluidic devices. Such insights are useful in their own right, but one can take advantage of numerical modelling in a systematic way using numerical optimization. In this chapter we will discuss when and how numerical optimization is best used.

  7. Twelve previously unknown phage genera are ubiquitous in global oceans.

    Science.gov (United States)

    Holmfeldt, Karin; Solonenko, Natalie; Shah, Manesh; Corrier, Kristen; Riemann, Lasse; Verberkmoes, Nathan C; Sullivan, Matthew B

    2013-07-30

    Viruses are fundamental to ecosystems ranging from oceans to humans, yet our ability to study them is bottlenecked by the lack of ecologically relevant isolates, resulting in "unknowns" dominating culture-independent surveys. Here we present genomes from 31 phages infecting multiple strains of the aquatic bacterium Cellulophaga baltica (Bacteroidetes) to provide data for an underrepresented and environmentally abundant bacterial lineage. Comparative genomics delineated 12 phage groups that (i) each represent a new genus, and (ii) represent one novel and four well-known viral families. This diversity contrasts the few well-studied marine phage systems, but parallels the diversity of phages infecting human-associated bacteria. Although all 12 Cellulophaga phages represent new genera, the podoviruses and icosahedral, nontailed ssDNA phages were exceptional, with genomes up to twice as large as those previously observed for each phage type. Structural novelty was also substantial, requiring experimental phage proteomics to identify 83% of the structural proteins. The presence of uncommon nucleotide metabolism genes in four genera likely underscores the importance of scavenging nutrient-rich molecules as previously seen for phages in marine environments. Metagenomic recruitment analyses suggest that these particular Cellulophaga phages are rare and may represent a first glimpse into the phage side of the rare biosphere. However, these analyses also revealed that these phage genera are widespread, occurring in 94% of 137 investigated metagenomes. Together, this diverse and novel collection of phages identifies a small but ubiquitous fraction of unknown marine viral diversity and provides numerous environmentally relevant phage-host systems for experimental hypothesis testing.

  8. Laparoscopy After Previous Laparotomy

    Directory of Open Access Journals (Sweden)

    Zulfo Godinjak

    2006-11-01

    Full Text Available Following abdominal surgery, extensive adhesions often occur and they can cause difficulties during laparoscopic operations. However, previous laparotomy is not considered to be a contraindication for laparoscopy. The aim of this study is to show that insertion of a Veres needle in the region of the umbilicus is a safe method for creating a pneumoperitoneum for laparoscopic operations after previous laparotomy. In the last three years, we have performed 144 laparoscopic operations in patients who previously underwent one or two laparotomies. Pathology of the digestive system, genital organs, Cesarean section or abdominal war injuries were the most common reasons for the previous laparotomy. During those operations, and during entry into the abdominal cavity, we did not experience any complications, while in 7 patients we performed conversion to laparotomy following the diagnostic laparoscopy. In all patients, insertion of the Veres needle and trocar in the umbilical region was performed, namely the closed laparoscopy technique. In no patient were adhesions found in the region of the umbilicus, and no abdominal organs were injured.

  9. Performance analysis of numeric solutions applied to biokinetics of radionuclides

    International Nuclear Information System (INIS)

    Mingatos, Danielle dos Santos; Bevilacqua, Joyce da Silva

    2013-01-01

    Biokinetics models for radionuclides applied to dosimetry problems are constantly reviewed by the ICRP. The radionuclide trajectory can be represented by compartmental models, assuming constant transfer rates between compartments. A better understanding of physiological and biochemical phenomena improves the comprehension of radionuclide behavior in the human body and, in general, more complex compartmental models are proposed, increasing the difficulty of obtaining the analytical solution for the system of first-order differential equations. Even with constant transfer rates, numerical solutions must be carefully implemented because of the nearly singular character of the coefficient matrix. In this work we compare numerical methods with different strategies for the ICRP-78 models for Thorium-228 and Uranium-234. The impact of uncertainty in the parameters of the equations is also estimated for local and global truncation errors. (author)
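
    As a generic illustration of the compartmental models discussed above (not the ICRP-78 models for Th-228 or U-234), the sketch below integrates a two-compartment system with constant transfer rates using a stiff ODE solver; all rate constants are invented for the example.

    from scipy.integrate import solve_ivp

    # Illustrative rate constants (1/day): transfer 1->2, 2->1, excretion from 1, decay
    k12, k21, k_out, lam = 0.5, 0.05, 0.1, 1.0e-3

    def rhs(t, y):
        c1, c2 = y
        return [-(k12 + k_out + lam) * c1 + k21 * c2,
                k12 * c1 - (k21 + lam) * c2]

    # Unit intake into compartment 1 at t = 0, integrated over one year with a stiff solver
    sol = solve_ivp(rhs, (0.0, 365.0), [1.0, 0.0], method="BDF", rtol=1e-8, atol=1e-12)
    print(sol.y[:, -1])   # retained fractions in each compartment after one year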

  10. Hindi Numerals.

    Science.gov (United States)

    Bright, William

    In most languages encountered by linguists, the numerals, considered as a paradigmatic set, constitute a morpho-syntactic problem of only moderate complexity. The Indo-Aryan language family of North India, however, presents a curious contrast. The relatively regular numeral system of Sanskrit, as it has developed historically into the modern…

  11. Possible dendritic contribution to unimodal numerosity tuning and Weber-Fechner law-dependent numerical cognition

    Directory of Open Access Journals (Sweden)

    Kenji Morita

    2009-08-01

    Full Text Available Humans and animals are known to share an ability to estimate or compare the numerosity of visual stimuli, and this ability is considered to be supported by the cortical neurons that have unimodal tuning for numerosity, referred to as the numerosity detector neurons. How such unimodal numerosity tuning is shaped through plasticity mechanisms is unknown. Here I propose a testable hypothetical mechanism based on recently revealed features of the neuronal dendrite, namely, cooperative plasticity induction and nonlinear input integration at nearby dendritic sites, on the basis of the existing proposal that individual visual stimuli are represented as similar localized activities regardless of the size or the shape in a cortical region in the dorsal visual pathway. Intriguingly, the proposed mechanism naturally explains a prominent feature of the numerosity detector neurons, namely, the broadening of the tuning curve in proportion to the preferred numerosity, which is considered to underlie the known Weber-Fechner law-dependent accuracy of numerosity estimation and comparison. The simulated tuning curves are less sharp than reality, however, and together with the evidence from human imaging studies that numerical representation is a distributed phenomenon, it may not be likely that the proposed mechanism operates by itself. Rather, the proposed mechanism might facilitate the formation of hierarchical circuitry proposed in the previous studies, which includes neurons with monotonic numerosity tuning as well as those with sharp unimodal tuning, by serving as an efficient initial condition.

  12. Methods of numerical relativity

    International Nuclear Information System (INIS)

    Piran, T.

    1983-01-01

    Numerical Relativity is an alternative to analytical methods for obtaining solutions of the Einstein equations. Numerical methods are particularly useful for studying the generation of gravitational radiation by potentially strong sources. The author reviews the analytical background, the numerical analysis aspects and techniques, and some of the difficulties involved in numerical relativity. (Auth.)

  13. On the numerical verification of industrial codes

    International Nuclear Information System (INIS)

    Montan, Sethy Akpemado

    2013-01-01

    Numerical verification of industrial codes, such as those developed at EDF R and D, is required to estimate the precision and the quality of computed results, even more so for codes running in HPC environments where millions of instructions are performed each second. These programs usually use external libraries (MPI, BLACS, BLAS, LAPACK). In this context, it is necessary to have a tool that is as non-intrusive as possible, to avoid rewriting the original code. In this regard, the CADNA library, which implements Discrete Stochastic Arithmetic, appears to be a promising approach for industrial applications. In the first part of this work, we are interested in an efficient implementation of the BLAS routine DGEMM (General Matrix Multiply) using Discrete Stochastic Arithmetic. The implementation of a basic algorithm for matrix product using stochastic types leads to an overhead greater than 1000 for a 1024 × 1024 matrix compared to the standard version and commercial versions of xGEMM. Here, we detail different solutions to reduce this overhead and the results we have obtained. A new routine, Dgemm-CADNA, has been designed. This routine reduces the overhead from 1100 to 35 compared to optimized BLAS implementations (GotoBLAS). Then, we focus on the numerical verification of Telemac-2D computed results. Performing a numerical validation with the CADNA library shows that more than 30% of the numerical instabilities occurring during an execution come from the dot product function. A more accurate implementation of the dot product using compensated algorithms is presented in this work. We show that implementing these kinds of algorithms to improve the accuracy of computed results does not degrade the code performance. (author)
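
    The compensated dot product mentioned above can be illustrated with a Kahan-compensated variant, a simpler relative of the Dot2-style algorithms typically used in this context; the sketch below is a generic illustration, not the thesis' implementation.

    import random

    def dot_kahan(x, y):
        """Dot product with Kahan-compensated summation of the products."""
        s, c = 0.0, 0.0                  # running sum and compensation term
        for xi, yi in zip(x, y):
            t = xi * yi - c
            tmp = s + t
            c = (tmp - s) - t            # low-order bits lost in the addition
            s = tmp
        return s

    random.seed(0)
    x = [random.uniform(-1.0, 1.0) for _ in range(10**5)]
    y = [random.uniform(-1.0, 1.0) for _ in range(10**5)]
    print(dot_kahan(x, y), sum(a * b for a, b in zip(x, y)))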

  14. Numerical calculation of velocity distribution near a vertical flat plate immersed in bubble flow

    International Nuclear Information System (INIS)

    Matsuura, Akihiro; Nakamura, Hajime; Horihata, Hideyuki; Hiraoka, Setsuro; Aragaki, Tsutomu; Yamada, Ikuho; Isoda, Shinji.

    1992-01-01

    Liquid and gas velocity distributions for bubble flow near a vertical flat plate were calculated numerically using the SIMPLER method, where the flow was assumed to be laminar, two-dimensional, and at steady state. The two-fluid flow model was used in the numerical analysis. To calculate the drag force on a small bubble, Stokes' law for a rigid sphere is applicable. The dimensionless velocity distributions, scaled by the characteristic boundary layer thickness and the maximum liquid velocity, collapsed onto a single curve whose form is similar to that of single-phase wall-jet flow. The average wall shear stress derived from the velocity gradient at the plate wall was strongly affected by bubble diameter but not by inlet liquid velocity. The dimensionless velocity distributions obtained numerically agreed well with previous experimental results, validating the proposed numerical algorithm. (author)

  15. Power Estimation in Multivariate Analysis of Variance

    Directory of Open Access Journals (Sweden)

    Jean François Allaire

    2007-09-01

    Full Text Available Power is often overlooked in designing multivariate studies for the simple reason that it is believed to be too complicated. In this paper, it is shown that power estimation in multivariate analysis of variance (MANOVA) can be approximated using an F distribution for the three popular statistics (Hotelling-Lawley trace, Pillai-Bartlett trace, Wilks' likelihood ratio). Consequently, the same procedure as in any statistical test can be used: computation of the critical F value, computation of the noncentrality parameter (as a function of the effect size), and finally estimation of power using a noncentral F distribution. Various numerical examples are provided which help to understand and to apply the method. Problems related to post hoc power estimation are discussed.
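
    The procedure described above (critical F value, noncentrality parameter, noncentral F distribution) can be sketched in a few lines; the degrees of freedom and noncentrality below are illustrative placeholders, not values from the paper's examples.

    from scipy.stats import f, ncf

    alpha, df1, df2 = 0.05, 6, 40        # illustrative degrees of freedom
    lam = 12.0                           # noncentrality parameter from the effect size (assumed)

    f_crit = f.ppf(1.0 - alpha, df1, df2)     # critical value of the central F
    power = ncf.sf(f_crit, df1, df2, lam)     # P(noncentral F > critical value)
    print(f_crit, power)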

  16. Numerical simulation model of hyperacute/acute stage white matter infarction.

    Science.gov (United States)

    Sakai, Koji; Yamada, Kei; Oouchi, Hiroyuki; Nishimura, Tsunehiko

    2008-01-01

    Although previous studies have revealed the mechanisms of changes in diffusivity (apparent diffusion coefficient [ADC]) in acute brain infarction, changes in diffusion anisotropy (fractional anisotropy [FA]) in white matter have not been examined. We hypothesized that membrane permeability as well as axonal swelling play important roles, and we therefore constructed a simulation model using random walks to replicate the diffusion of water molecules. We implemented a numerical diffusion simulation model of normal and infarcted human brains in C++. We constructed this 2-pool model using simple tubes aligned in a single direction, and water diffusion was simulated by a random walk. Axon diameters and membrane permeability were then altered in step-wise fashion. To estimate the effects of axonal swelling, axon diameters were changed from 6 to 10 μm. Membrane permeability was altered from 0% to 40%. Finally, both elements were combined to explain the increase in FA in the hyperacute stage of white matter infarction. The simulation demonstrated that a simple water shift into the intracellular space reduces ADC and increases FA, but not to the extent expected from actual human cases (ADC approximately 50%; FA approximately +20%). Similarly, membrane permeability alone was insufficient to explain this phenomenon. However, a combination of both factors successfully replicated the changes in diffusivity indices. Both axonal swelling and reduced membrane permeability appear important in explaining the changes in ADC and FA, based on eigenvalues, in hyperacute-stage white matter infarction.
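
    As a stripped-down illustration of estimating diffusivity from a random walk (without the paper's 2-pool, permeable-membrane geometry), the sketch below recovers an apparent diffusion coefficient from an isotropic 2-D walk via the relation <r²> = 4 D t; the step length and time step are arbitrary choices.

    import numpy as np

    rng = np.random.default_rng(2)
    n_walkers, n_steps = 5000, 1000
    step, dt = 1.0e-6, 1.0e-4            # 1 um steps, 0.1 ms per step (assumed values)

    # Isotropic 2-D random walk: each walker takes fixed-length steps in random directions
    angles = rng.uniform(0.0, 2.0 * np.pi, (n_walkers, n_steps))
    x = np.sum(step * np.cos(angles), axis=1)
    y = np.sum(step * np.sin(angles), axis=1)

    msd = np.mean(x ** 2 + y ** 2)       # mean squared displacement at t = n_steps * dt
    adc = msd / (4.0 * n_steps * dt)     # Einstein relation in 2-D: <r^2> = 4 D t
    print(adc, step ** 2 / (4.0 * dt))   # estimate vs. the theoretical value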

  17. Accounting for subgroup structure in line-transect abundance estimates of false killer whales (Pseudorca crassidens in Hawaiian waters.

    Directory of Open Access Journals (Sweden)

    Amanda L Bradford

    Full Text Available For biological populations that form aggregations (or clusters) of individuals, cluster size is an important parameter in line-transect abundance estimation and should be accurately measured. Cluster size in cetaceans has traditionally been represented as the total number of individuals in a group, but group size may be underestimated if group members are spatially diffuse. Groups of false killer whales (Pseudorca crassidens) can comprise numerous subgroups that are dispersed over tens of kilometers, leading to a spatial mismatch between a detected group and the theoretical framework of line-transect analysis. Three stocks of false killer whales are found within the U.S. Exclusive Economic Zone of the Hawaiian Islands (Hawaiian EEZ): an insular main Hawaiian Islands stock, a pelagic stock, and a Northwestern Hawaiian Islands (NWHI) stock. A ship-based line-transect survey of the Hawaiian EEZ was conducted in the summer and fall of 2010, resulting in six systematic-effort visual sightings of pelagic (n = 5) and NWHI (n = 1) false killer whale groups. The maximum number and spatial extent of subgroups per sighting were 18 subgroups and 35 km, respectively. These sightings were combined with data from similar previous surveys and analyzed within the conventional line-transect estimation framework. The detection function, mean cluster size, and encounter rate were estimated separately to appropriately incorporate data collected using different methods. Unlike previous line-transect analyses of cetaceans, subgroups were treated as the analytical cluster instead of groups because subgroups better conform to the specifications of line-transect theory. Bootstrap values (n = 5,000) of the line-transect parameters were randomly combined to estimate the variance of the stock-specific abundance estimates. Hawai'i pelagic and NWHI false killer whales were estimated to number 1,552 (CV = 0.66; 95% CI = 479-5,030) and 552 (CV = 1.09; 95% CI = 97
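
    For orientation, the conventional line-transect abundance estimator that the analysis above builds on can be written as N = A n E[s] / (2 w L P), where n is the number of detected clusters (here, subgroups), E[s] the mean cluster size, L the survey effort, w the truncation half-width and P the average detection probability. The sketch below evaluates this with entirely hypothetical numbers, not the survey's estimates.

    def line_transect_abundance(area_km2, n_clusters, mean_cluster_size,
                                effort_km, half_width_km, p_detect):
        """Conventional line-transect estimate: density times surveyed area."""
        density = (n_clusters * mean_cluster_size) / (2.0 * half_width_km * effort_km * p_detect)
        return density * area_km2

    # Entirely hypothetical inputs: 2.4 million km^2 study area, 30 subgroup detections of
    # mean size 5, 15,000 km of effort, 5.5 km truncation distance, detection probability 0.4
    print(line_transect_abundance(2.4e6, 30, 5.0, 1.5e4, 5.5, 0.4))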

  18. The analytical and numerical study of the fluorination of uranium dioxide particles

    International Nuclear Information System (INIS)

    Sazhin, S.S.

    1997-01-01

    A detailed analytical study of the equations describing the fluorination of UO2 particles is presented for some limiting cases, assuming that the mass flowrate of these particles is so small that they do not affect the state of the gas. The analytical solutions obtained can be used for approximate estimates of the effect of fluorination on particle diameter and temperature; their major application, however, is probably in the verification of self-consistent numerical solutions. Computational results are presented and discussed for a self-consistent problem in which both the effects of the gas on the particles and of the particles on the gas are accounted for. It has been shown that in the limiting cases for which analytical solutions have been obtained, the agreement between numerical and analytical results is almost exact. This can be considered a verification of both the analytical and numerical solutions. (orig.)

  19. A residual-based a posteriori error estimator for single-phase Darcy flow in fractured porous media

    KAUST Repository

    Chen, Huangxin

    2016-12-09

    In this paper we develop an a posteriori error estimator for a mixed finite element method for single-phase Darcy flow in a two-dimensional fractured porous media. The discrete fracture model is applied to model the fractures by one-dimensional fractures in a two-dimensional domain. We consider Raviart–Thomas mixed finite element method for the approximation of the coupled Darcy flows in the fractures and the surrounding porous media. We derive a robust residual-based a posteriori error estimator for the problem with non-intersecting fractures. The reliability and efficiency of the a posteriori error estimator are established for the error measured in an energy norm. Numerical results verifying the robustness of the proposed a posteriori error estimator are given. Moreover, our numerical results indicate that the a posteriori error estimator also works well for the problem with intersecting fractures.

  20. The Guarani Aquifer System: estimation of recharge along the Uruguay-Brazil border

    Science.gov (United States)

    Gómez, Andrea A.; Rodríguez, Leticia B.; Vives, Luis S.

    2010-11-01

    The cities of Rivera and Santana do Livramento are located on the outcropping area of the sandstone Guarani Aquifer on the Brazil-Uruguay border, where the aquifer is being increasingly exploited. Therefore, recharge estimates are needed to address sustainability. First, a conceptual model of the area was developed. A multilayer, heterogeneous and anisotropic groundwater-flow model was built to validate the conceptual model and to estimate recharge. A field campaign was conducted to collect water samples and monitor water levels used for model calibration. Field data revealed that there exists vertical gradients between confining basalts and underlying sandstones, suggesting basalts could indirectly recharge sandstone in fractured areas. Simulated downward flow between them was a small amount within the global water budget. Calibrated recharge rates over basalts and over outcropping sandstones were 1.3 and 8.1% of mean annual precipitation, respectively. A big portion of sandstone recharge would be drained by streams. The application of a water balance yielded a recharge of 8.5% of average annual precipitation. The numerical model and the water balance yielded similar recharge values consistent with determinations from previous authors in the area and other regions of the aquifer, providing an upper bound for recharge in this transboundary aquifer.