WorldWideScience

Sample records for previous numerical estimates

  1. Numerical Estimation in Preschoolers

    Science.gov (United States)

    Berteletti, Ilaria; Lucangeli, Daniela; Piazza, Manuela; Dehaene, Stanislas; Zorzi, Marco

    2010-01-01

    Children's sense of numbers before formal education is thought to rely on an approximate number system based on logarithmically compressed analog magnitudes that increases in resolution throughout childhood. School-age children performing a numerical estimation task have been shown to increasingly rely on a formally appropriate, linear…

  2. Development of Numerical Estimation in Young Children

    Science.gov (United States)

    Siegler, Robert S.; Booth, Julie L.

    2004-01-01

    Two experiments examined kindergartners', first graders', and second graders' numerical estimation, the internal representations that gave rise to the estimates, and the general hypothesis that developmental sequences within a domain tend to repeat themselves in new contexts. Development of estimation in this age range on 0-to-100 number lines…

  3. Representational Change and Children's Numerical Estimation

    Science.gov (United States)

    Opfer, John E.; Siegler, Robert S.

    2007-01-01

    We applied overlapping waves theory and microgenetic methods to examine how children improve their estimation proficiency, and in particular how they shift from reliance on immature to mature representations of numerical magnitude. We also tested the theoretical prediction that feedback on problems on which the discrepancy between two…

  4. Numerical estimation in individuals with Down syndrome.

    Science.gov (United States)

    Lanfranchi, Silvia; Berteletti, Ilaria; Torrisi, Erika; Vianello, Renzo; Zorzi, Marco

    2014-10-31

    We investigated numerical estimation in children with Down syndrome (DS) in order to assess whether their pattern of performance is tied to experience (age), overall cognitive level, or specifically impaired. Siegler and Opfer's (2003) number to position task, which requires translating a number into a spatial position on a number line, was administered to a group of 21 children with DS and to two control groups of typically developing children (TD), matched for mental and chronological age. Results suggest that numerical estimation and the developmental transition between logarithmic and linear patterns of estimates in children with DS are more similar to those of children with the same mental age than to those of children with the same chronological age. Moreover, linearity was related to cognitive level in DS, while in TD children it was related to experience level. Copyright © 2014. Published by Elsevier Ltd.

  5. Numerical simulation of the shot peening process under previous loading conditions

    International Nuclear Information System (INIS)

    Romero-Ángeles, B; Urriolagoitia-Sosa, G; Torres-San Miguel, C R; Molina-Ballinas, A; Benítez-García, H A; Vargas-Bustos, J A; Urriolagoitia-Calderón, G

    2015-01-01

    This research presents a numerical simulation of the shot peening process and determines the residual stress field induced into a component with a previous loading history. The importance of this analysis is based on the fact that mechanical elements subjected to shot peening are also subjected to manufacturing processes, which convert raw material into the finished product. However, the material is not provided in a virgin state; it has a previous loading history caused by the manner in which it is fabricated. This condition could alter some beneficial aspects of the residual stress induced by shot peening and could accelerate crack nucleation and propagation. Studies were performed on beams subjected to strain hardening in tension (5ε_y) before shot peening was applied. These results were then compared against a numerical assessment of the residual stress field induced by shot peening in a component (beam) without any previous loading history. This paper clearly shows the detrimental or beneficial effect that a previous loading history can have on the mechanical component and how it can be controlled to improve the mechanical behavior of the material

  6. Developmental and Individual Differences in Pure Numerical Estimation

    Science.gov (United States)

    Booth, Julie L.; Siegler, Robert S.

    2006-01-01

    The authors examined developmental and individual differences in pure numerical estimation, the type of estimation that depends solely on knowledge of numbers. Children between kindergarten and 4th grade were asked to solve 4 types of numerical estimation problems: computational, numerosity, measurement, and number line. In Experiment 1,…

  7. Developmental and individual differences in pure numerical estimation.

    Science.gov (United States)

    Booth, Julie L; Siegler, Robert S

    2006-01-01

    The authors examined developmental and individual differences in pure numerical estimation, the type of estimation that depends solely on knowledge of numbers. Children between kindergarten and 4th grade were asked to solve 4 types of numerical estimation problems: computational, numerosity, measurement, and number line. In Experiment 1, kindergartners and 1st, 2nd, and 3rd graders were presented problems involving the numbers 0-100; in Experiment 2, 2nd and 4th graders were presented problems involving the numbers 0-1,000. Parallel developmental trends, involving increasing reliance on linear representations of numbers and decreasing reliance on logarithmic ones, emerged across different types of estimation. Consistent individual differences across tasks were also apparent, and all types of estimation skill were positively related to math achievement test scores. Implications for understanding of mathematics learning in general are discussed. Copyright 2006 APA, all rights reserved.

  8. Estimating surface fluxes using eddy covariance and numerical ogive optimization

    DEFF Research Database (Denmark)

    Sievers, J.; Papakyriakou, T.; Larsen, Søren Ejling

    2015-01-01

    Estimating representative surface fluxes using eddy covariance leads invariably to questions concerning inclusion or exclusion of low-frequency flux contributions. For studies where fluxes are linked to local physical parameters and up-scaled through numerical modelling efforts, low-frequency contributions…

  9. New Estimates of Numerical Values Related to a Simplex

    Directory of Open Access Journals (Sweden)

    Mikhail V. Nevskii

    2017-01-01

    if \(\xi_n=n\). This statement is valid only in one direction. There exists a simplex \(S\subset Q_5\) such that the boundary of the simplex \(5S\) contains all the vertices of the cube \(Q_5\). We describe a one-parameter family of simplices contained in \(Q_5\) with the property \(\alpha(S)=\xi(S)=5\). These simplices were found with the use of numerical and symbolic computations. Another new result is the inequality \(\xi_6<6.0166\) (the previous estimate was \(6\leq \xi_6\leq 6.6\)). We also systematize some of our estimates of the numbers \(\xi_n\), \(\theta_n\), \(\varkappa_n\) derived by now. The symbol \(\theta_n\) denotes the minimal norm of an interpolation projection onto the space of linear functions of \(n\) variables as an operator from \(C(Q_n)\) to \(C(Q_n)\).

  10. Asynchronous machine rotor speed estimation using a tabulated numerical approach

    Science.gov (United States)

    Nguyen, Huu Phuc; De Miras, Jérôme; Charara, Ali; Eltabach, Mario; Bonnet, Stéphane

    2017-12-01

    This paper proposes a new method to estimate the rotor speed of the asynchronous machine by looking at the estimation problem as a nonlinear optimal control problem. The behavior of the nonlinear plant model is approximated off-line as a prediction map using a numerical one-step time discretization obtained from simulations. At each time-step, the speed of the induction machine is selected satisfying the dynamic fitting problem between the plant output and the predicted output, leading the system to adopt its dynamical behavior. Thanks to the limitation of the prediction horizon to a single time-step, the execution time of the algorithm can be completely bounded. It can thus easily be implemented and embedded into a real-time system to observe the speed of the real induction motor. Simulation results show the performance and robustness of the proposed estimator.
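
    The core idea, an off-line prediction map queried at run time to pick the speed whose one-step prediction best matches the measured output, can be sketched as follows. This is a minimal illustration with an assumed toy plant model and assumed grid ranges, not the authors' induction-machine implementation.

```python
import numpy as np

# Minimal sketch of a tabulated one-step prediction map (assumed toy plant,
# not the induction-machine model from the paper).
def plant_step(x, speed, dt=1e-3):
    # Hypothetical first-order dynamics whose behaviour depends on the speed.
    return x + dt * (-2.0 * x + speed)

speeds = np.linspace(0.0, 100.0, 201)          # candidate rotor speeds
states = np.linspace(-1.0, 1.0, 101)           # coarse grid of plant states

# Off-line: tabulate the predicted next output for every (state, speed) pair.
prediction_map = np.array([[plant_step(x, w) for w in speeds] for x in states])

def estimate_speed(x_now, y_next_measured):
    """Pick the tabulated speed whose one-step prediction best fits the measurement."""
    i = np.argmin(np.abs(states - x_now))          # nearest tabulated state
    errors = np.abs(prediction_map[i] - y_next_measured)
    return speeds[np.argmin(errors)]

# Usage: the true speed is 40; the estimator recovers it from one output sample.
x0 = 0.3
y1 = plant_step(x0, 40.0)
print(estimate_speed(x0, y1))
```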

  11. A methodology for modeling photocatalytic reactors for indoor pollution control using previously estimated kinetic parameters

    Energy Technology Data Exchange (ETDEWEB)

    Passalia, Claudio; Alfano, Orlando M. [INTEC - Instituto de Desarrollo Tecnologico para la Industria Quimica, CONICET - UNL, Gueemes 3450, 3000 Santa Fe (Argentina); FICH - Departamento de Medio Ambiente, Facultad de Ingenieria y Ciencias Hidricas, Universidad Nacional del Litoral, Ciudad Universitaria, 3000 Santa Fe (Argentina); Brandi, Rodolfo J., E-mail: rbrandi@santafe-conicet.gov.ar [INTEC - Instituto de Desarrollo Tecnologico para la Industria Quimica, CONICET - UNL, Gueemes 3450, 3000 Santa Fe (Argentina); FICH - Departamento de Medio Ambiente, Facultad de Ingenieria y Ciencias Hidricas, Universidad Nacional del Litoral, Ciudad Universitaria, 3000 Santa Fe (Argentina)

    2012-04-15

    Highlights: ► Indoor pollution control via photocatalytic reactors. ► Scaling-up methodology based on previously determined mechanistic kinetics. ► Radiation interchange model between catalytic walls using configuration factors. ► Modeling and experimental validation of a complex geometry photocatalytic reactor. - Abstract: A methodology for modeling photocatalytic reactors for their application in indoor air pollution control is carried out. The methodology implies, firstly, the determination of intrinsic reaction kinetics for the removal of formaldehyde. This is achieved by means of a simple geometry, continuous reactor operating under kinetic control regime and steady state. The kinetic parameters were estimated from experimental data by means of a nonlinear optimization algorithm. The second step was the application of the obtained kinetic parameters to a very different photoreactor configuration. In this case, the reactor is a corrugated wall type using nanosize TiO₂ as catalyst, irradiated by UV lamps that provided a spatially uniform radiation field. The radiative transfer within the reactor was modeled through a superficial emission model for the lamps, the ray tracing method and the computation of view factors. The velocity and concentration fields were evaluated by means of a commercial CFD tool (Fluent 12) into which the radiation model was introduced externally. The results of the model were compared with experiments in a corrugated wall, bench-scale reactor constructed in the laboratory. The overall pollutant conversion showed good agreement between model predictions and experiments, with a root mean square error of less than 4%.
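
    The first step of the methodology, estimating kinetic parameters from kinetically controlled reactor data by nonlinear optimization, can be sketched as below. The Langmuir-Hinshelwood-type rate form and the data points are placeholder assumptions, not the formaldehyde kinetics of the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

# Sketch of the first step of the methodology: estimating kinetic parameters
# from kinetically controlled reactor data by nonlinear least squares.
# The Langmuir-Hinshelwood-type rate form and the data below are assumptions,
# not the formaldehyde kinetics reported in the paper.
def rate(C, k, K):
    return k * K * C / (1.0 + K * C)

C_exp = np.array([0.5, 1.0, 2.0, 4.0, 8.0])        # pollutant concentration (a.u.)
r_exp = np.array([0.18, 0.30, 0.44, 0.57, 0.66])   # measured removal rate (a.u.)

(k_fit, K_fit), cov = curve_fit(rate, C_exp, r_exp, p0=[1.0, 1.0])
print(f"k = {k_fit:.3f}, K = {K_fit:.3f}")
```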

  12. CONTROL BASED ON NUMERICAL METHODS AND RECURSIVE BAYESIAN ESTIMATION IN A CONTINUOUS ALCOHOLIC FERMENTATION PROCESS

    Directory of Open Access Journals (Sweden)

    Olga L. Quintero

    Full Text Available Biotechnological processes represent a challenge in the control field due to their high nonlinearity. In particular, continuous alcoholic fermentation from Zymomonas mobilis (Z.m) presents a significant challenge. This bioprocess has high ethanol performance, but it exhibits an oscillatory behavior in process variables due to the influence of inhibition dynamics (rate of ethanol concentration over biomass, substrate, and product concentrations). In this work a new solution for control of biotechnological variables in the fermentation process is proposed, based on numerical methods and linear algebra. In addition, an improvement to a previously reported state estimator, based on particle filtering techniques, is used in the control loop. The feasibility of the estimator and its performance are demonstrated in the proposed control loop. This methodology makes it possible to develop a controller design through the use of dynamic analysis with a tested biomass estimator in Z.m and without the use of complex calculations.
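
    The recursive Bayesian (particle filtering) state estimator referred to above can be illustrated with a minimal bootstrap particle filter. The scalar random-walk process model, the Gaussian measurement model and the synthetic measurements are assumptions for illustration, not the Zymomonas mobilis fermentation model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Minimal bootstrap particle filter, sketching the kind of recursive Bayesian
# state estimator referred to in the abstract. The scalar random-walk process
# model and Gaussian measurement model are placeholder assumptions.
N = 500
particles = rng.normal(1.0, 0.2, N)     # initial guess of biomass concentration
weights = np.full(N, 1.0 / N)

def pf_step(particles, weights, y_meas, q=0.05, r=0.1):
    # Propagate through the (assumed) process model.
    particles = particles + rng.normal(0.0, q, particles.size)
    # Weight by the likelihood of the measurement.
    weights = weights * np.exp(-0.5 * ((y_meas - particles) / r) ** 2)
    weights /= weights.sum()
    # Systematic resampling to avoid weight degeneracy.
    positions = (rng.random() + np.arange(particles.size)) / particles.size
    idx = np.searchsorted(np.cumsum(weights), positions)
    return particles[idx], np.full(particles.size, 1.0 / particles.size)

for y in [1.05, 1.12, 1.20, 1.31]:      # synthetic biomass measurements
    particles, weights = pf_step(particles, weights, y)
    print(np.average(particles, weights=weights))
```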

  13. Numerical estimation of concrete beams reinforced with FRP bars

    Directory of Open Access Journals (Sweden)

    Protchenko Kostiantyn

    2016-01-01

    Full Text Available This paper presents a numerical investigation of the mechanical performance of a concrete beam reinforced with Fibre Reinforced Polymer (FRP) bars, which can be a competitive alternative to steel bars for enhancing concrete structures. The objective of this work is to elaborate a reliable numerical model for predicting the strength capacity of structural elements using Finite Element Analysis (FEA). The numerical model is based on an experimental study prepared for beams reinforced with Basalt FRP (BFRP) bars and with steel bars (for comparison). The results obtained for the beams reinforced with steel bars are found to be in close agreement with the experimental results. However, the beams reinforced with BFRP bars in the experimental programme demonstrated higher bearing capacity than those reinforced with steel bars, which is not in good agreement with the numerical results. The authors attempt to explain the reasons for the experimentally higher bearing capacity of the beams reinforced with BFRP bars.

  14. NUMERICAL AND ANALYTIC METHODS OF ESTIMATION BRIDGES’ CONSTRUCTIONS

    Directory of Open Access Journals (Sweden)

    Y. Y. Luchko

    2010-03-01

    Full Text Available In this article, numerical and analytical methods for calculating the stress-strain state of bridge structures are considered. The task of increasing the reliability and accuracy of the numerical method, and its solution by means of calculations in two bases, is formulated. The analytical solution of the differential equation of deformation of a reinforced-concrete plate under the action of local loads is also obtained.

  15. On Numerical Characteristics of a Simplex and their Estimates

    Directory of Open Access Journals (Sweden)

    M. V. Nevskii

    2016-01-01

    precisely \(\frac{19+5\sqrt{13}}{9}\). Applying this value in numerical computations we achieve the value $$\varkappa_4 = \frac{4+\sqrt{13}}{5}=1.5211\ldots$$ Denote by \(\theta_n\) the minimal norm of an interpolation projection onto the space of linear functions of \(n\) variables as an operator from \(C(Q_n)\) to \(C(Q_n)\). It is known that, for each \(n\), $$\xi_n\leq \frac{n+1}{2}\left(\theta_n-1\right)+1,$$ and for \(n=1,2,3,7\) we have here an equality. Using computer methods we obtain the result \(\theta_4=\frac{7}{3}\). Hence, the minimal \(n\) such that the above inequality is strict is equal to 4.

  16. GM-PHD Filter Combined with Track-Estimate Association and Numerical Interpolation

    Directory of Open Access Journals (Sweden)

    Jinguang Chen

    2015-01-01

    Full Text Available For the standard Gaussian mixture probability hypothesis density (GM-PHD) filter, the number of targets can be overestimated if the clutter rate is too high or underestimated if the detection rate is too low. These problems seriously affect the accuracy of multitarget tracking, because the number and values of measurements and clutter cannot be distinguished and recognized. Therefore, we proposed an improved GM-PHD filter to tackle these problems. Firstly, a track-estimate association was implemented in the filtering process to detect and remove false-alarm targets. Secondly, a numerical interpolation technique was used to compensate for the missing targets caused by a low detection rate. At the end of this paper, simulation results are presented to demonstrate that the proposed GM-PHD algorithm is more effective in estimating the number and state of targets than the previous ones.

  17. Analyses of more than 60,000 exomes questions the role of numerous genes previously associated with dilated cardiomyopathy

    DEFF Research Database (Denmark)

    Nouhravesh, Nina; Ahlberg, Gustav; Ghouse, Jonas

    2016-01-01

    BACKGROUND: Hundreds of genetic variants have been described as disease causing in dilated cardiomyopathy (DCM). Some of these associations are now being questioned. We aimed to identify the prevalence of previously DCM associated variants in the Exome Aggregation Consortium (ExAC), in order to identify potentially false-positive DCM variants. METHODS: Variants listed as DCM disease-causing variants in the Human Gene Mutation Database were extracted from ExAC. Pathogenicity predictions for these variants were mined from the dbNSFP v 2.9 database. RESULTS: Of the 473 DCM variants listed in HGMD, 148 (31%) were found in ExAC. The expected number of individuals with DCM in ExAC is 25 based on the prevalence in the general population. Yet, 35 variants were found in more than 25 individuals. In 13 genes, we identified all variants previously associated with DCM; four genes contained variants above...

  18. Numerical estimation of structural integrity of salt cavern wells.

    NARCIS (Netherlands)

    Orlic, B.; Thienen-Visser, K. van; Schreppers, G.J.

    2016-01-01

    Finite element analyses were performed to estimate axial deformation of cavern wells due to gas storage operations in solution-mined salt caverns. Caverns shrink over time due to salt creep and the cavern roof subsides potentially threatening well integrity. Cavern deformation, deformation of salt

  19. Numerical Model based Reliability Estimation of Selective Laser Melting Process

    DEFF Research Database (Denmark)

    Mohanty, Sankhya; Hattel, Jesper Henri

    2014-01-01

    Selective laser melting is developing into a standard manufacturing technology with applications in various sectors. However, the process is still far from being at par with conventional processes such as welding and casting, the primary reason of which is the unreliability of the process. While … of the selective laser melting process. A validated 3D finite-volume alternating-direction-implicit numerical technique is used to model the selective laser melting process, and is calibrated against results from single track formation experiments. Correlation coefficients are determined for process input parameters such as laser power, speed, beam profile, etc. Subsequently, uncertainties in the processing parameters are utilized to predict a range for the various outputs, using a Monte Carlo method based uncertainty analysis methodology, and the reliability of the process is established.

  20. Satellite telemetry reveals higher fishing mortality rates than previously estimated, suggesting overfishing of an apex marine predator.

    Science.gov (United States)

    Byrne, Michael E; Cortés, Enric; Vaudo, Jeremy J; Harvey, Guy C McN; Sampson, Mark; Wetherbee, Bradley M; Shivji, Mahmood

    2017-08-16

    Overfishing is a primary cause of population declines for many shark species of conservation concern. However, means of obtaining information on fishery interactions and mortality, necessary for the development of successful conservation strategies, are often fisheries-dependent and of questionable quality for many species of commercially exploited pelagic sharks. We used satellite telemetry as a fisheries-independent tool to document fisheries interactions, and quantify fishing mortality of the highly migratory shortfin mako shark (Isurus oxyrinchus) in the western North Atlantic Ocean. Forty satellite-tagged shortfin mako sharks tracked over 3 years entered the Exclusive Economic Zones of 19 countries and were harvested in fisheries of five countries, with 30% of tagged sharks harvested. Our tagging-derived estimates of instantaneous fishing mortality rates (F = 0.19-0.56) were 10-fold higher than previous estimates from fisheries-dependent data (approx. 0.015-0.024), suggesting data used in stock assessments may considerably underestimate fishing mortality. Additionally, our estimates of F were greater than those associated with maximum sustainable yield, suggesting a state of overfishing. This information has direct application to evaluations of stock status and for effective management of populations, and thus satellite tagging studies have potential to provide more accurate estimates of fishing mortality and survival than traditional fisheries-dependent methodology. © 2017 The Author(s).
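
    As a rough, hedged check of the reported order of magnitude, the proportion of tagged sharks harvested can be converted into an instantaneous rate with the standard exponential-decay relation F = -ln(1 - p)/t; this is an illustrative assumption, not the authors' estimator, which accounts for each tag's time at liberty.

```python
import math

# Back-of-the-envelope check of the order of magnitude reported in the abstract:
# converting a proportion of tagged animals harvested over a period into an
# instantaneous fishing mortality rate via F = -ln(1 - p) / t. This exponential-
# decay relation is used only as an illustrative assumption, not the authors'
# exact estimator.
p_harvested = 0.30      # 30% of 40 tagged sharks harvested
years = 3.0             # tracking period reported in the abstract

F = -math.log(1.0 - p_harvested) / years
print(f"F ≈ {F:.2f} per year")   # ≈ 0.12, same order as the reported 0.19-0.56
```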

  1. Wildlife Loss Estimates and Summary of Previous Mitigation Related to Hydroelectric Projects in Montana, Volume Three, Hungry Horse Project.

    Energy Technology Data Exchange (ETDEWEB)

    Casey, Daniel

    1984-10-01

    This assessment addresses the impacts to the wildlife populations and wildlife habitats due to the Hungry Horse Dam project on the South Fork of the Flathead River and previous mitigation of these losses. In order to develop and focus mitigation efforts, it was first necessary to estimate wildlife and wildlife habitat losses attributable to the construction and operation of the project. The purpose of this report was to document the best available information concerning the degree of impacts to target wildlife species. Indirect benefits to wildlife species not listed will be identified during the development of alternative mitigation measures. Wildlife species incurring positive impacts attributable to the project were identified.

  2. Air Space Proportion in Pterosaur Limb Bones Using Computed Tomography and Its Implications for Previous Estimates of Pneumaticity

    Science.gov (United States)

    Martin, Elizabeth G.; Palmer, Colin

    2014-01-01

    Air Space Proportion (ASP) is a measure of how much air is present within a bone, which allows for a quantifiable comparison of pneumaticity between specimens and species. Measured from zero to one, higher ASP means more air and less bone. Conventionally, it is estimated from measurements of the internal and external bone diameter, or by analyzing cross-sections. To date, the only pterosaur ASP study has been carried out by visual inspection of sectioned bones within matrix. Here, computed tomography (CT) scans are used to calculate ASP in a small sample of pterosaur wing bones (mainly phalanges) and to assess how the values change throughout the bone. These results show higher ASPs than previous pterosaur pneumaticity studies, and more significantly, higher ASP values in the heads of wing bones than the shaft. This suggests that pneumaticity has been underestimated previously in pterosaurs, birds, and other archosaurs when shaft cross-sections are used to estimate ASP. Furthermore, ASP in pterosaurs is higher than those found in birds and most sauropod dinosaurs, giving them among the highest ASP values of animals studied so far, supporting the view that pterosaurs were some of the most pneumatized animals to have lived. The high degree of pneumaticity found in pterosaurs is proposed to be a response to the wing bone bending stiffness requirements of flight rather than a means to reduce mass, as is often suggested. Mass reduction may be a secondary result of pneumaticity that subsequently aids flight. PMID:24817312
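
    The conventional diameter-based estimate mentioned above amounts to taking the ratio of lumen area to total cross-sectional area for an assumed circular section; a minimal sketch with made-up diameters follows.

```python
# Conventional cross-sectional estimate of Air Space Proportion (ASP): the
# fraction of the cross-section occupied by air, approximated from internal
# and external diameters assuming a circular section. The example diameters
# are made up for illustration.
def asp_from_diameters(d_inner, d_outer):
    return (d_inner / d_outer) ** 2

print(asp_from_diameters(9.0, 10.0))   # 0.81: mostly air, thin bone wall
```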

  3. Advantages in using Kalman phasor estimation in numerical differential protective relaying compared to the Fourier estimation method

    DEFF Research Database (Denmark)

    Bukh, Bjarne; Gudmundsdottir, Unnur Stella; Balle Holst, Per

    2007-01-01

    This paper demonstrates the results obtained from detailed studies of Kalman phasor estimation used in numerical differential protective relaying of a power transformer. The accuracy and expeditiousness of a current estimate in the numerical differential protection are critical for correct and not unnecessary activation of the breakers by the protecting relay, and the objective of the study was to utilize the capability of Kalman phasor estimation in a signal model representing the expected current signal from the current transformers of the power transformer. The used signal model included...

  4. Numerical estimation of aircrafts' unsteady lateral-directional stability derivatives

    Directory of Open Access Journals (Sweden)

    Maričić N.L.

    2006-01-01

    Full Text Available A technique for predicting steady and oscillatory aerodynamic loads on a general configuration has been developed. The prediction is based on the Doublet-Lattice Method, Slender Body Theory and the Method of Images. The chordwise and spanwise load distributions on lifting surfaces and longitudinal bodies (in the horizontal and vertical planes) are determined. The configuration may be composed of an assemblage of lifting surfaces (with control surfaces) and bodies (with circular cross sections and a longitudinal variation of radius). Loadings predicted by this method are used to calculate (estimate) steady and unsteady (dynamic) lateral-directional stability derivatives. A short outline of the methods used is given in [1], [2], [3], [4] and [5]. Applying the described methodology, the software DERIV was developed. The results obtained from DERIV are compared to NASTRAN examples HA21B and HA21D from [4]. In the first example (HA21B), the jet transport wing (BAH wing) is in steady roll and lateral stability derivatives are determined. In the second example (HA21D), lateral-directional stability derivatives are calculated for a forward-swept-wing (FSW) airplane in antisymmetric quasi-steady maneuvers. Acceptable agreement is achieved comparing the results from [4] and DERIV.

  5. Numerical methods of estimating the dispersion of radionuclides in atmosphere

    International Nuclear Information System (INIS)

    Vladu, Mihaela; Ghitulescu, Alina; Popescu, Gheorghe; Piciorea, Iuliana

    2007-01-01

    Full text: The paper presents the method of dispersion calculation, which can be applied for the DLE calculation. This is necessary to ensure secure performance of the Experimental Pilot Plant for Tritium and Deuterium Separation (using the technology for detritiation based upon isotope catalytic exchange between tritiated heavy water and deuterium followed by cryogenic distillation of the hydrogen isotopes). For the calculation of the dispersion of radioactive effluents in the atmosphere, at a given distance between source and receiver, the Gaussian mathematical model was used. This model is currently applied for estimating the long-term results of dispersion in case of continuous or intermittent emissions, as basic information for long-term radioprotection measures for areas of the order of kilometres from the source. We have considered intermittent or continuous emissions of intensity lower than 1% per day relative to the annual emission. It is supposed that the radioactive material released into the environment follows a Gaussian dispersion in both the horizontal and vertical planes. The local dispersion parameters could be determined directly from turbulence measurements or indirectly by determination of atmospheric stability. Weather parameters characterizing the atmospheric dispersion include: - direction of the wind relative to the source; - the speed of the wind at the height of emission; - parameters of dispersion at different distances, depending on the atmospheric turbulence which characterizes the mixing of radioactive materials in the atmosphere; - atmospheric stability range; - the height of the mixing layer; - the type and intensity of precipitation. The choice of the most adequate version of the Gaussian model depends on the relation between the height at which the effluent emission takes place, H (m), and the height up to which buildings influence the air motion, HB (m). Three zones of distinct dispersion were defined. These zones can have variable lengths

  6. Estimation of Resting Energy Expenditure: Validation of Previous and New Predictive Equations in Obese Children and Adolescents.

    Science.gov (United States)

    Acar-Tek, Nilüfer; Ağagündüz, Duygu; Çelik, Bülent; Bozbulut, Rukiye

    2017-08-01

    Accurate estimation of resting energy expenditure (REE) in children and adolescents is important to establish estimated energy requirements. The aim of the present study was to measure REE in obese children and adolescents by the indirect calorimetry method, compare these values with REE values estimated by equations, and develop the most appropriate equation for this group. One hundred and three obese children and adolescents (57 males, 46 females) between 7 and 17 years (10.6 ± 2.19 years) were recruited for the study. REE measurements of subjects were made with indirect calorimetry (COSMED, FitMatePro, Rome, Italy) and body compositions were analyzed. In females, the percentage of accurate prediction varied from 32.6 (World Health Organization [WHO]) to 43.5 (Molnar and Lazzer). The bias for equations was -0.2% (Kim), 3.7% (Molnar), and 22.6% (Derumeaux-Burel). Kim's (266 kcal/d), Schmelzle's (267 kcal/d), and Henry's (268 kcal/d) equations had the lowest root mean square errors (RMSE). The equation with the highest RMSE among female subjects was the Derumeaux-Burel equation (394 kcal/d). In males, while the Institute of Medicine (IOM) equation had the lowest accurate prediction value (12.3%), the highest values were found using Schmelzle's (42.1%), Henry's (43.9%), and Müller's (fat-free mass, FFM; 45.6%) equations. While Kim's and Müller's equations had the smallest bias (-0.6% and 9.9%, respectively), Schmelzle's equation had the smallest RMSE (331 kcal/d). The new specific equation based on FFM was generated as follows: REE = 451.722 + (23.202 * FFM). According to Bland-Altman plots, it was found that the new equations are distributed randomly in both males and females. Previously developed predictive equations mostly provided inaccurate and biased estimates of REE. However, the new predictive equations allow clinicians to estimate REE in obese children and adolescents with sufficient and acceptable accuracy.
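
    The new FFM-based equation quoted in the abstract can be applied directly; the sketch below also computes the kind of bias and RMSE statistics used in the comparison, with fabricated placeholder measurements for illustration.

```python
import numpy as np

# The new FFM-based equation reported in the abstract, plus the kind of
# agreement statistics used to compare predictions with indirect calorimetry.
# The measured values below are fabricated placeholders for illustration only.
def ree_new(ffm_kg):
    """REE (kcal/d) = 451.722 + 23.202 * fat-free mass (kg), per the abstract."""
    return 451.722 + 23.202 * ffm_kg

ffm = np.array([30.0, 35.0, 40.0, 45.0])                    # fat-free mass (kg), assumed
ree_measured = np.array([1150.0, 1280.0, 1390.0, 1500.0])   # assumed calorimetry values

pred = ree_new(ffm)
bias_pct = 100.0 * np.mean((pred - ree_measured) / ree_measured)
rmse = np.sqrt(np.mean((pred - ree_measured) ** 2))
print(f"bias = {bias_pct:.1f}%, RMSE = {rmse:.0f} kcal/d")
```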

  7. Numerical method for estimating the size of chaotic regions of phase space

    International Nuclear Information System (INIS)

    Henyey, F.S.; Pomphrey, N.

    1987-10-01

    A numerical method for estimating irregular volumes of phase space is derived. The estimate weights the irregular area on a surface of section with the average return time to the section. We illustrate the method by application to the stadium and oval billiard systems and also apply the method to the continuous Henon-Heiles system. 15 refs., 10 figs

  8. [Estimating child mortality using the previous child technique, with data from health centers and household surveys: methodological aspects].

    Science.gov (United States)

    Aguirre, A; Hill, A G

    1988-01-01

    2 trials of the previous child or preceding birth technique in Bamako, Mali, and Lima, Peru, gave very promising results for measurement of infant and early child mortality using data on survivorship of the 2 most recent births. In the Peruvian study, another technique was tested in which each woman was asked about her last 3 births. The preceding birth technique described by Brass and Macrae has rapidly been adopted as a simple means of estimating recent trends in early childhood mortality. The questions formulated and the analysis of results are direct when the mothers are visited at the time of birth or soon after. Several technical aspects of the method believed to introduce unforeseen biases have now been studied and found to be relatively unimportant. But the problems arising when the data come from a nonrepresentative fraction of the total fertile-aged population have not been resolved. The analysis based on data from 5 maternity centers including 1 hospital in Bamako, Mali, indicated some practical problems and the information obtained showed the kinds of subtle biases that can result from the effects of selection. The study in Lima tested 2 abbreviated methods for obtaining recent early childhood mortality estimates in countries with deficient vital registration. The basic idea was that a few simple questions added to household surveys on immunization or diarrheal disease control for example could produce improved child mortality estimates. The mortality estimates in Peru were based on 2 distinct sources of information in the questionnaire. All women were asked their total number of live born children and the number still alive at the time of the interview. The proportion of deaths was converted into a measure of child survival using a life table. Then each woman was asked for a brief history of the 3 most recent live births. Dates of birth and death were noted in month and year of occurrence. The interviews took only slightly longer than the basic survey

  9. Numerically stable algorithm for combining census and sample estimates with the multivariate composite estimator

    Science.gov (United States)

    R. L. Czaplewski

    2009-01-01

    The minimum variance multivariate composite estimator is a relatively simple sequential estimator for complex sampling designs (Czaplewski 2009). Such designs combine a probability sample of expensive field data with multiple censuses and/or samples of relatively inexpensive multi-sensor, multi-resolution remotely sensed data. Unfortunately, the multivariate composite...

  10. [Estimating non work-related sickness leave absences related to a previous occupational injury in Catalonia (Spain)].

    Science.gov (United States)

    Molinero-Ruiz, Emilia; Navarro, Albert; Moriña, David; Albertí-Casas, Constança; Jardí-Lliberia, Josefina; de Montserrat-Nonó, Jaume

    2015-01-01

    To estimate the frequency of non-work sickness absence (ITcc) related to previous occupational injuries with (ATB) or without (ATSB) sick leave. Prospective longitudinal study. Workers with ATB or ATSB notified to the Occupational Accident Registry of Catalonia were selected in the last term of 2009. They were followed up for six months after returning to work (ATB) or after the accident (ATSB), by sex and occupation. Official labor and health authority registries were used as information sources. An "injury-associated ITcc" was defined as a sick leave occurring in the following six months and within the same diagnosis group. The absolute and relative frequencies were calculated according to time elapsed and duration (cumulated days, measures of central tendency and dispersion), by diagnosis group or affected body area, as compared to all of Catalonia. 2.9% of ATB (n=627) had an injury-associated ITcc, with differences by diagnosis, sex and occupation; this was also the case for 2.1% of ATSB (n=496). With the same diagnosis, duration of ITcc was longer among those who had an associated injury, and with respect to all of Catalonia. Some of the under-reporting of occupational pathology corresponds to episodes initially recognized as being work-related. Duration of sickness absence depends not only on diagnosis and clinical course, but also on criteria established by the entities managing the case. This could imply that more complicated injuries are referred to the national health system, resulting in personal, legal, healthcare and economic cost consequences for all involved stakeholders. Copyright belongs to the Societat Catalana de Salut Laboral.

  11. Estimation of water diffusion coefficient into polycarbonate at different temperatures using numerical simulation

    Energy Technology Data Exchange (ETDEWEB)

    Nasirabadi, P. Shojaee; Jabbari, M.; Hattel, J. H. [Process Modelling Group, Department of Mechanical Engineering, Technical University of Denmark, Nils Koppels Allé, 2800 Kgs. Lyngby (Denmark)

    2016-06-08

    Nowadays, many electronic systems are exposed to harsh conditions of relative humidity and temperature. Mass transport properties of electronic packaging materials are needed in order to investigate the influence of moisture and temperature on reliability of electronic devices. Polycarbonate (PC) is widely used in the electronics industry. Thus, in this work the water diffusion coefficient into PC is investigated. Furthermore, numerical methods used for estimation of the diffusion coefficient and their assumptions are discussed. 1D and 3D numerical solutions are compared and based on this, it is shown how the estimated value can be different depending on the choice of dimensionality in the model.
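
    A common way to estimate such a diffusion coefficient is to fit the 1D Fickian plane-sheet solution to gravimetric uptake data; the sketch below does this with an assumed sample thickness and synthetic data, and illustrates only the principle, not the paper's 1D/3D numerical comparison.

```python
import numpy as np
from scipy.optimize import curve_fit

# Sketch of estimating a diffusion coefficient from gravimetric water-uptake
# data using the 1D Fickian plane-sheet series solution (Crank). The sample
# thickness and the synthetic data are assumptions for illustration.
L = 2.0e-3  # sample thickness in m (assumed), both faces exposed

def uptake_1d(t, D, n_terms=50):
    """Fractional mass uptake M(t)/M_inf for a plane sheet of thickness L."""
    n = np.arange(n_terms)
    k = (2 * n + 1) * np.pi
    return 1.0 - np.sum(
        (8.0 / k**2) * np.exp(-np.outer(t, (k / L) ** 2) * D), axis=1
    )

t_data = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0]) * 3600.0   # seconds
m_data = uptake_1d(t_data, 5.0e-12) + 0.01 * np.random.default_rng(1).normal(size=6)

(D_fit,), _ = curve_fit(uptake_1d, t_data, m_data, p0=[1.0e-12])
print(f"D ≈ {D_fit:.2e} m^2/s")
```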

  12. Estimation of Water Diffusion Coefficient into Polycarbonate at Different Temperatures Using Numerical Simulation

    DEFF Research Database (Denmark)

    Shojaee Nasirabadi, Parizad; Jabbaribehnam, Mirmasoud; Hattel, Jesper Henri

    2016-01-01

    Nowadays, many electronic systems are exposed to harsh conditions of relative humidity and temperature. Mass transport properties of electronic packaging materials are needed in order to investigate the influence of moisture and temperature on reliability of electronic devices. Polycarbonate (PC) is widely used in the electronics industry. Thus, in this work the water diffusion coefficient into PC is investigated. Furthermore, numerical methods used for estimation of the diffusion coefficient and their assumptions are discussed. 1D and 3D numerical solutions are compared and based on this, it is shown how the estimated value can be different depending on the choice of dimensionality in the model.

  13. Estimating local atmosphere-surface fluxes using eddy covariance and numerical Ogive optimization

    DEFF Research Database (Denmark)

    Sievers, Jakob; Papakyriakou, Tim; Larsen, Søren

    2014-01-01

    Estimating representative surface-fluxes using eddy covariance leads invariably to questions concerning inclusion or exclusion of low-frequency flux contributions. For studies where fluxes are linked to local physical parameters and up-scaled through numerical modeling efforts, low-frequency contributions…

  14. Estimation of state and material properties during heat-curing molding of composite materials using data assimilation: A numerical study

    Directory of Open Access Journals (Sweden)

    Ryosuke Matsuzaki

    2018-03-01

    Full Text Available Accurate simulations of carbon fiber-reinforced plastic (CFRP) molding are vital for the development of high-quality products. However, such simulations are challenging and previous attempts to improve the accuracy of simulations by incorporating the data acquired from mold monitoring have not been completely successful. Therefore, in the present study, we developed a method to accurately predict various CFRP thermoset molding characteristics based on data assimilation, a process that combines theoretical and experimental values. The degree of cure as well as temperature and thermal conductivity distributions during the molding process were estimated using both temperature data and numerical simulations. An initial numerical experiment demonstrated that the internal mold state could be determined solely from the surface temperature values. A subsequent numerical experiment to validate this method showed that estimations based on surface temperatures were highly accurate in the case of degree of cure and internal temperature, although predictions of thermal conductivity were more difficult. Keywords: Engineering, Materials science, Applied mathematics

  15. Estimating the effect of current, previous and never use of drugs in studies based on prescription registries

    DEFF Research Database (Denmark)

    Nielsen, Lars Hougaard; Løkkegaard, Ellen; Andreasen, Anne Helms

    2009-01-01

    PURPOSE: Many studies which investigate the effect of drugs categorize the exposure variable into never, current, and previous use of the study drug. When prescription registries are used to make this categorization, the exposure variable possibly gets misclassified since the registries do not carry any information on the time of discontinuation of treatment. In this study, we investigated the amount of misclassification of exposure (never, current, previous use) to hormone therapy (HT) when the exposure variable was based on prescription data. Furthermore, we evaluated the significance of this misclassification for analysing the risk of breast cancer. MATERIALS AND METHODS: Prescription data were obtained from the Danish Registry of Medicinal Products Statistics and we applied various methods to approximate treatment episodes. We analysed the duration of HT episodes to study the ability to identify…

  16. Estimating and localizing the algebraic and total numerical errors using flux reconstructions

    Czech Academy of Sciences Publication Activity Database

    Papež, Jan; Strakoš, Z.; Vohralík, M.

    2018-01-01

    Roč. 138, č. 3 (2018), s. 681-721 ISSN 0029-599X R&D Projects: GA ČR GA13-06684S Grant - others:GA MŠk(CZ) LL1202 Institutional support: RVO:67985807 Keywords : numerical solution of partial differential equations * finite element method * a posteriori error estimation * algebraic error * discretization error * stopping criteria * spatial distribution of the error Subject RIV: BA - General Mathematics Impact factor: 2.152, year: 2016

  17. Unconditional convergence and error estimates for bounded numerical solutions of the barotropic Navier-Stokes system

    Czech Academy of Sciences Publication Activity Database

    Feireisl, Eduard; Hošek, Radim; Maltese, D.; Novotný, A.

    2017-01-01

    Roč. 33, č. 4 (2017), s. 1208-1223 ISSN 0749-159X EU Projects: European Commission(XE) 320078 - MATHEF Institutional support: RVO:67985840 Keywords : convergence * error estimates * mixed numerical method * Navier–Stokes system Subject RIV: BA - General Mathematics OBOR OECD: Pure mathematics Impact factor: 1.079, year: 2016 http://onlinelibrary.wiley.com/doi/10.1002/num.22140/abstract

  18. Numerical

    Directory of Open Access Journals (Sweden)

    M. Boumaza

    2015-07-01

    Full Text Available Transient convection heat transfer is of fundamental interest in many industrial and environmental situations, as well as in electronic devices and security of energy systems. Transient fluid flow problems are among the more difficult to analyze and yet are very often encountered in modern day technology. The main objective of this research project is to carry out a theoretical and numerical analysis of transient convective heat transfer in vertical flows, when the thermal field is due to different kinds of variation, in time and space of some boundary conditions, such as wall temperature or wall heat flux. This is achieved by the development of a mathematical model and its resolution by suitable numerical methods, as well as performing various sensitivity analyses. These objectives are achieved through a theoretical investigation of the effects of wall and fluid axial conduction, physical properties and heat capacity of the pipe wall on the transient downward mixed convection in a circular duct experiencing a sudden change in the applied heat flux on the outside surface of a central zone.

  19. Quantity estimation based on numerical cues in the mealworm beetle (Tenebrio molitor)

    Directory of Open Access Journals (Sweden)

    Pau eCarazo

    2012-11-01

    Full Text Available In this study, we used a biologically relevant experimental procedure to ask whether mealworm beetles (Tenebrio molitor) are spontaneously capable of assessing quantities based on numerical cues. Like other insect species, mealworm beetles adjust their reproductive behaviour (i.e. investment in mate guarding) according to the perceived risk of sperm competition (i.e. the probability that a female will mate with another male). To test whether males have the ability to estimate numerosity based on numerical cues, we staged matings between virgin females and virgin males in which we varied the number of rival males (from 1 to 4) that the experimental male had access to immediately preceding mating, as a cue to sperm competition risk. Rival males were presented sequentially, and we controlled for continuous cues by ensuring that males in all treatments were exposed to the same amount of male-male contact. Males exhibited a marked increase in the time they devoted to mate guarding in response to an increase in the number of different rival males they were exposed to. Since males could not rely on continuous cues, we conclude that they kept a running tally of the number of individuals they encountered serially, which meets the requirements of the basic ordinality and cardinality principles of proto-counting. Our results thus offer good evidence of 'true' numerosity estimation or quantity estimation and, along with recent studies in honey-bees, suggest that vertebrates and invertebrates share similar core systems of non-verbal numerical representation.

  20. A different approach to estimate nonlinear regression model using numerical methods

    Science.gov (United States)

    Mahaboob, B.; Venkateswarlu, B.; Mokeshrayalu, G.; Balasiddamuni, P.

    2017-11-01

    This research paper concerns the computational methods, namely the Gauss-Newton method and gradient algorithm methods (Newton-Raphson method, Steepest Descent or Steepest Ascent algorithm method, the Method of Scoring, the Method of Quadratic Hill-Climbing), based on numerical analysis to estimate parameters of a nonlinear regression model in a very different way. Principles of matrix calculus have been used to discuss the gradient-algorithm methods. Yonathan Bard [1] discussed a comparison of gradient methods for the solution of nonlinear parameter estimation problems; however, this article discusses an analytical approach to the gradient algorithm methods in a different way. This paper describes a new iterative technique, namely the Gauss-Newton method, which differs from the iterative technique proposed by Gorden K. Smyth [2]. Hans Georg Bock et al. [10] proposed numerical methods for parameter estimation in DAEs (differential algebraic equations). Isabel Reis Dos Santos et al. [11] introduced a weighted least squares procedure for estimating the unknown parameters of a nonlinear regression metamodel. For large-scale nonsmooth convex minimization, the Hager and Zhang (HZ) conjugate gradient method and the modified HZ (MHZ) method were presented by Gonglin Yuan et al. [12].
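
    A minimal Gauss-Newton iteration of the kind discussed in the paper, applied to an illustrative exponential regression model (the model and data are assumptions), follows.

```python
import numpy as np

# Minimal Gauss-Newton iteration for a nonlinear regression model
# y = a * exp(b * x) + noise. The model and data are illustrative.
def model(x, theta):
    a, b = theta
    return a * np.exp(b * x)

def jacobian(x, theta):
    a, b = theta
    return np.column_stack([np.exp(b * x), a * x * np.exp(b * x)])

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 20)
y = model(x, (2.0, 1.5)) + 0.02 * rng.normal(size=x.size)

theta = np.array([1.0, 1.0])                 # starting values
for _ in range(10):
    r = y - model(x, theta)                  # residuals
    J = jacobian(x, theta)
    step, *_ = np.linalg.lstsq(J, r, rcond=None)
    theta = theta + step                     # Gauss-Newton update
print(theta)                                 # ≈ [2.0, 1.5]
```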

  1. Numerical estimation of the effective electrical conductivity in carbon paper diffusion media

    International Nuclear Information System (INIS)

    Zamel, Nada; Li, Xianguo; Shen, Jun

    2012-01-01

    Highlights: ► Anisotropic effective electrical conductivity of the GDL is estimated numerically. ► The electrical conductivity is a key component in understanding the structure of the GDL. ► Expressions for evaluating the electrical conductivity were proposed. ► The tortuosity factor was evaluated as 1.7 and 3.4 in the in- and through-plane directions, respectively. - Abstract: The transport of electrons through the gas diffusion layer (GDL) of polymer electrolyte membrane (PEM) fuel cells has a significant impact on the optimal design and operation of PEM fuel cells and is directly affected by the anisotropic nature of the carbon paper material. In this study, a three-dimensional reconstruction of the GDL is used to numerically estimate the directional dependent effective electrical conductivity of the layer for various porosity values. The distribution of the fibers in the through-plane direction results in high electrical resistivity; hence, decreasing the overall effective electrical conductivity in this direction. This finding is in agreement with measured experimental data. Further, using the numerical results of this study, two mathematical expressions were proposed for the calculation of the effective electrical conductivity of the carbon paper GDL. Finally, the tortuosity factor was evaluated as 1.7 and 3.4 in the in- and through-plane directions, respectively.
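
    One common way to relate the reported tortuosity factors to an effective conductivity is the porosity/tortuosity relation sketched below; the relation itself, the solid-phase conductivity and the porosity value are assumptions for illustration, not the fitted expressions proposed in the paper.

```python
# Illustrative relation sigma_eff = sigma_solid * (1 - porosity) / tau, used
# here only as an assumption to show how the reported in-plane (1.7) and
# through-plane (3.4) tortuosity factors translate into anisotropic
# conductivity; the paper derives its own expressions from the reconstructed
# fibre geometry.
def sigma_eff(sigma_solid, porosity, tau):
    return sigma_solid * (1.0 - porosity) / tau

sigma_fibre = 1.0e5      # S/m, assumed solid-phase conductivity of carbon fibre
porosity = 0.78          # typical carbon-paper GDL porosity (assumed)

print(f"in-plane:      {sigma_eff(sigma_fibre, porosity, 1.7):.0f} S/m")
print(f"through-plane: {sigma_eff(sigma_fibre, porosity, 3.4):.0f} S/m")
```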

  2. Estimating the Numerical Diapycnal Mixing in the GO5.0 Ocean Model

    Science.gov (United States)

    Megann, A.; Nurser, G.

    2014-12-01

    Constant-depth (or "z-coordinate") ocean models such as MOM4 and NEMO have become the de facto workhorse in climate applications, and have attained a mature stage in their development and are well understood. A generic shortcoming of this model type, however, is a tendency for the advection scheme to produce unphysical numerical diapycnal mixing, which in some cases may exceed the explicitly parameterised mixing based on observed physical processes, and this is likely to have effects on the long-timescale evolution of the simulated climate system. Despite this, few quantitative estimations have been made of the magnitude of the effective diapycnal diffusivity due to numerical mixing in these models. GO5.0 is the latest ocean model configuration developed jointly by the UK Met Office and the National Oceanography Centre (Megann et al, 2014), and forms part of the GC1 and GC2 climate models. It uses version 3.4 of the NEMO model, on the ORCA025 ¼° global tripolar grid. We describe various approaches to quantifying the numerical diapycnal mixing in this model, and present results from analysis of the GO5.0 model based on the isopycnal watermass analysis of Lee et al (2002) that indicate that numerical mixing does indeed form a significant component of the watermass transformation in the ocean interior.

  3. Estimation of Dynamic Friction Process of the Akatani Landslide Based on the Waveform Inversion and Numerical Simulation

    Science.gov (United States)

    Yamada, M.; Mangeney, A.; Moretti, L.; Matsushi, Y.

    2014-12-01

    Understanding physical parameters such as frictional coefficients, velocity change, and dynamic history is an important issue for assessing and managing the risks posed by deep-seated catastrophic landslides. Previously, landslide motion has been inferred qualitatively from topographic changes caused by the event, and occasionally from eyewitness reports. However, these conventional approaches are unable to evaluate source processes and dynamic parameters. In this study, we use broadband seismic recordings to trace the dynamic process of the deep-seated Akatani landslide that occurred on the Kii Peninsula, Japan, which is one of the best recorded large slope failures. Based on the previous results of waveform inversions and precise topographic surveys done before and after the event, we applied numerical simulations using the SHALTOP numerical model (Mangeney et al., 2007). This model describes homogeneous continuous granular flows on a 3D topography based on a depth-averaged thin-layer approximation. We assume a Coulomb friction law with a constant friction coefficient, i.e., the friction is independent of the sliding velocity. We varied the friction coefficients in the simulation so that the resulting force acting on the surface agrees with the single force estimated from the seismic waveform inversion. The figure shows the force history of the east-west components after band-pass filtering between 10-100 seconds. The force history of the simulation with frictional coefficient 0.27 (thin red line) agrees best with the result of the seismic waveform inversion (thick gray line). Although the amplitude is slightly different, the phases are coherent for the main three pulses. This is evidence that the point-source approximation works reasonably well for this particular event. The friction coefficient during the sliding was estimated to be 0.38 based on the seismic waveform inversion performed by the previous study and on the sliding block model (Yamada et al., 2013
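
    The sliding-block model mentioned at the end can be sketched with a constant Coulomb friction coefficient; the slope angle below is an illustrative assumption, while the friction values 0.27 and 0.38 are those quoted in the abstract.

```python
import math

# Minimal sliding-block sketch with a constant Coulomb friction coefficient,
# the kind of model mentioned in the abstract for relating basal friction to
# landslide acceleration. The slope angle is an illustrative assumption.
g = 9.81

def block_acceleration(slope_deg, mu):
    """Downslope acceleration of a rigid block: a = g (sin(theta) - mu cos(theta))."""
    th = math.radians(slope_deg)
    return g * (math.sin(th) - mu * math.cos(th))

for mu in (0.27, 0.38):
    print(f"mu = {mu}: a = {block_acceleration(30.0, mu):+.2f} m/s^2 on a 30 deg slope")
```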

  4. A novel numerical model for estimating the collapse pressure of flexible pipes

    Energy Technology Data Exchange (ETDEWEB)

    Nogueira, Victor P.P.; Antoun Netto, Theodoro [Universidade Federal do Rio de Janeiro (COPPE/UFRJ), RJ (Brazil). Coordenacao dos Programas de Pos-graduacao em Engenharia], e-mail: victor@lts.coppe.ufrj.br

    2009-07-01

    As the worldwide oil and gas industry operational environments move to ultra-deep waters, failure mechanisms in flexible pipes such as instability of the armor layers under compression and hydrostatic collapse are more likely to occur. Therefore, it is important to develop reliable numerical tools to reproduce the failure mechanisms that may occur in flexible pipes. This work presents a representative finite element model of a flexible pipe capable of reproducing its pre- and post-collapse behavior under hydrostatic pressure. The model, developed in the scope of this work, uses beam elements and includes nonlinear kinematics and material behavior influences. The dependability of the numerical results is assessed in light of experimental tests on flexible pipes with 4-inch and 8-inch nominal diameters available in the literature (Souza, 2002). The applied methodology provided coherent values for the estimation of the collapse pressures, and the results have shown that the proposed model is capable of reproducing the experimental results. (author)

  5. Is the difference between chemical and numerical estimates of baseflow meaningful?

    Science.gov (United States)

    Cartwright, Ian; Gilfedder, Ben; Hofmann, Harald

    2014-05-01

    Both chemical and numerical techniques are commonly used to calculate baseflow inputs to gaining rivers. In general the chemical methods yield lower estimates of baseflow than the numerical techniques. In part, this may be due to the techniques assuming two components (event water and baseflow) whereas there may also be multiple transient stores of water. Bank return waters, interflow, or waters stored on floodplains are delayed components that may be geochemically similar to the surface water from which they are derived; numerical techniques may record these components as baseflow whereas chemical mass balance studies are likely to aggregate them with the surface water component. This study compares baseflow estimates using chemical mass balance, local minimum methods, and recursive digital filters in the upper reaches of the Barwon River, southeast Australia. While more sophisticated techniques exist, these methods of estimating baseflow are readily applied with the available data and have been used widely elsewhere. During the early stages of high-discharge events, chemical mass balance overestimates groundwater inflows, probably due to flushing of saline water from wetlands and marshes, soils, or the unsaturated zone. Overall, however, estimates of baseflow from the local minimum and recursive digital filters are higher than those from chemical mass balance using Cl calculated from continuous electrical conductivity. Between 2001 and 2011, the baseflow contribution to the upper Barwon River calculated using chemical mass balance is between 12 and 25% of annual discharge. Recursive digital filters predict higher baseflow contributions of 19 to 52% of annual discharge. These estimates are similar to those from the local minimum method (16 to 45% of annual discharge). These differences most probably reflect how the different techniques characterise the transient water sources in this catchment. The local minimum and recursive digital filters aggregate much of the
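
    The two-component chemical mass balance referred to above separates baseflow using river discharge and the tracer concentrations of the two end members; a minimal sketch with illustrative numbers (not values from the Barwon River study) follows.

```python
# Two-component chemical mass balance used to separate baseflow from river
# discharge and tracer (e.g. Cl or EC) concentrations:
#   Q_baseflow = Q_river * (C_river - C_runoff) / (C_baseflow - C_runoff)
# The numbers below are illustrative assumptions.
def baseflow_cmb(q_river, c_river, c_runoff, c_baseflow):
    return q_river * (c_river - c_runoff) / (c_baseflow - c_runoff)

q = 10.0          # river discharge, m^3/s (assumed)
c_river = 400.0   # tracer concentration in the river (assumed)
c_runoff = 100.0  # event-water end member (assumed)
c_base = 1500.0   # groundwater end member (assumed)

print(f"{baseflow_cmb(q, c_river, c_runoff, c_base):.2f} m^3/s of baseflow")
```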

  6. Parameter Estimation for Partial Differential Equations by Collage-Based Numerical Approximation

    Directory of Open Access Journals (Sweden)

    Xiaoyan Deng

    2009-01-01

    into a minimization problem of a function of several variables after the partial differential equation is approximated by a differential dynamical system. Then numerical schemes for solving this minimization problem are proposed, including grid approximation and ant colony optimization. The proposed schemes are applied to a parameter estimation problem for the Belousov-Zhabotinskii equation, and the results show that the proposed approximation method is efficient for both linear and nonlinear partial differential equations with respect to unknown parameters. At worst, the presented method provides an excellent starting point for traditional inversion methods that must first select a good starting point.

  7. BAESNUM, a conversational computer program for the Bayesian estimation of a parameter by a numerical method

    International Nuclear Information System (INIS)

    Colombo, A.G.; Jaarsma, R.J.

    1982-01-01

    This report describes a conversational computer program which, via Bayes' theorem, numerically combines the prior distribution of a parameter with a likelihood function. Any type of prior and likelihood function can be considered. The present version of the program includes six types of prior and employs the binomial likelihood. As input, the program requires the law and parameters of the prior distribution and the sample data. As output, it gives the posterior distribution as a histogram. The use of the program for estimating the constant failure rate of an item is briefly described.
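    A minimal sketch of the kind of calculation such a program performs (a grid-based Bayes update with a binomial likelihood, the posterior returned as a discretized distribution); the beta prior, its parameters and the sample data are illustrative assumptions, not the program's six built-in prior laws.

      import numpy as np
      from scipy import stats

      def bayes_update(theta, prior_pdf, k, n):
          """Numerically combine a prior with a binomial likelihood on a grid."""
          likelihood = stats.binom.pmf(k, n, theta)
          unnorm = prior_pdf * likelihood
          return unnorm / np.trapz(unnorm, theta)    # normalized posterior density

      theta = np.linspace(1e-4, 0.2, 400)            # candidate failure probabilities
      prior = stats.beta.pdf(theta, a=1.5, b=30.0)   # assumed prior law and parameters
      post = bayes_update(theta, prior, k=2, n=50)   # sample data: 2 failures in 50 demands
      print("posterior mean: %.4f" % np.trapz(theta * post, theta))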

  8. ESTIMATION OF THE WANDA GLACIER (SOUTH SHETLANDS) SEDIMENT EROSION RATE USING NUMERICAL MODELLING

    Directory of Open Access Journals (Sweden)

    Kátia Kellem Rosa

    2013-09-01

    Full Text Available Glacial sediment yield results from glacial erosion and is influenced by several factors, including glacial retreat rate, ice flow velocity and thermal regime. This paper estimates the contemporary subglacial erosion rate and sediment yield of Wanda Glacier (King George Island, South Shetlands), a small temperate glacier that has been retreating for the last decades. We examine basal sediment evacuation mechanisms by runoff and analyze glacial erosion processes occurring during subglacial transport. The glacial erosion rate at Wanda Glacier, estimated using a numerical model that considers the sediment evacuated to outlet streams, ice flow velocity, ice thickness and glacier area, is 1.1 ton m yr-1.

  9. Numerical algorithm for rigid body position estimation using the quaternion approach

    Science.gov (United States)

    Zigic, Miodrag; Grahovac, Nenad

    2017-11-01

    This paper deals with rigid body attitude estimation on the basis of the data obtained from an inertial measurement unit mounted on the body. The aim of this work is to present the numerical algorithm, which can be easily applied to the wide class of problems concerning rigid body positioning, arising in aerospace and marine engineering, or in increasingly popular robotic systems and unmanned aerial vehicles. Following the considerations of kinematics of rigid bodies, the relations between accelerations of different points of the body are given. A rotation matrix is formed using the quaternion approach to avoid singularities. We present numerical procedures for determination of the absolute accelerations of the center of mass and of an arbitrary point of the body expressed in the inertial reference frame, as well as its attitude. An application of the algorithm to the example of a heavy symmetrical gyroscope is presented, where input data for the numerical procedure are obtained from the solution of differential equations of motion, instead of using sensor measurements.

  10. Comparison of maximum runup through analytical and numerical approaches for different fault parameters estimates

    Science.gov (United States)

    Kanoglu, U.; Wronna, M.; Baptista, M. A.; Miranda, J. M. A.

    2017-12-01

    The one-dimensional analytical runup theory, in combination with near-shore synthetic waveforms, is a promising tool for rapid tsunami early warning systems. Its application in realistic cases with complex bathymetry and an initial wave condition from inverse modelling has shown that maximum runup values can be estimated reasonably well. In this study we generate simplistic bathymetry domains that resemble realistic near-shore features and investigate the sensitivity of the analytical runup formulae to the variation of fault source parameters and near-shore bathymetric features. To do this, we systematically vary the fault plane parameters to compute the initial tsunami wave condition. Subsequently, we use the initial conditions to run the numerical tsunami model on a coupled system of four nested grids and compare the results to the analytical estimates. Varying the dip angle of the fault plane showed that the analytical estimates differ by less than 10% for angles of 5-45 degrees in a simple bathymetric domain. These results show that the use of analytical formulae for fast runup estimates is a very promising approach in a simple bathymetric domain and might be implemented in Hazard Mapping and Early Warning.

  11. Estimating the mirror seeing for a large optical telescope with a numerical method

    Science.gov (United States)

    Zhang, En-Peng; Cui, Xiang-Qun; Li, Guo-Ping; Zhang, Yong; Shi, Jian-Rong; Zhao, Yong-Heng

    2018-05-01

    It is widely accepted that mirror seeing is caused by turbulent fluctuations in the index of air refraction in the vicinity of a telescope mirror. Computational Fluid Dynamics (CFD) is a useful tool to evaluate the effects of mirror seeing. In this paper, we present a numerical method to estimate the mirror seeing for a large optical telescope (∼ 4 m) in cases of natural convection with the ANSYS ICEPAK software. We get the FWHM of the image for different inclination angles (i) of the mirror and different temperature differences (ΔT) between the mirror and ambient air. Our results show that the mirror seeing depends very weakly on i, which agrees with observational data from the Canada-France-Hawaii Telescope. The numerical model can be used to estimate mirror seeing in the case of natural convection although with some limitations. We can determine ΔT for thermal control of the primary mirror according to the simulation, empirical data and site seeing.

  12. Is 27 a Big Number? Correlational and Causal Connections among Numerical Categorization, Number Line Estimation, and Numerical Magnitude Comparison

    Science.gov (United States)

    Laski, Elida V.; Siegler, Robert S.

    2007-01-01

    This study examined the generality of the logarithmic to linear transition in children's representations of numerical magnitudes and the role of subjective categorization of numbers in the acquisition of more advanced understanding. Experiment 1 (49 girls and 41 boys, ages 5-8 years) suggested parallel transitions from kindergarten to second grade…

  13. Parameter estimation in IMEX-trigonometrically fitted methods for the numerical solution of reaction-diffusion problems

    Science.gov (United States)

    D'Ambrosio, Raffaele; Moccaldi, Martina; Paternoster, Beatrice

    2018-05-01

    In this paper, an adapted numerical scheme for reaction-diffusion problems generating periodic wavefronts is introduced. Adapted numerical methods for such evolutionary problems are specially tuned to follow prescribed qualitative behaviors of the solutions, making the numerical scheme more accurate and efficient as compared with traditional schemes already known in the literature. Adaptation through the so-called exponential fitting technique leads to methods whose coefficients depend on unknown parameters related to the dynamics and aimed to be numerically computed. Here we propose a strategy for a cheap and accurate estimation of such parameters, which consists essentially in minimizing the leading term of the local truncation error whose expression is provided in a rigorous accuracy analysis. In particular, the presented estimation technique has been applied to a numerical scheme based on combining an adapted finite difference discretization in space with an implicit-explicit time discretization. Numerical experiments confirming the effectiveness of the approach are also provided.

  14. Estimating the numerical diapycnal mixing in an eddy-permitting ocean model

    Science.gov (United States)

    Megann, Alex

    2018-01-01

    Constant-depth (or "z-coordinate") ocean models such as MOM4 and NEMO have become the de facto workhorse in climate applications; they have attained a mature stage in their development and are well understood. A generic shortcoming of this model type, however, is a tendency for the advection scheme to produce unphysical numerical diapycnal mixing, which in some cases may exceed the explicitly parameterised mixing based on observed physical processes, and this is likely to affect the long-timescale evolution of the simulated climate system. Despite this, few quantitative estimates have been made of the typical magnitude of the effective diapycnal diffusivity due to numerical mixing in these models. GO5.0 is a recent ocean model configuration developed jointly by the UK Met Office and the National Oceanography Centre. It forms the ocean component of the GC2 climate model and is closely related to the ocean component of the UKESM1 Earth System Model, the UK's contribution to the CMIP6 model intercomparison. GO5.0 uses version 3.4 of the NEMO model on the ORCA025 global tripolar grid. An approach to quantifying the numerical diapycnal mixing in this model, based on the isopycnal watermass analysis of Lee et al. (2002), is described, and the estimates thereby obtained of the effective diapycnal diffusivity in GO5.0 are compared with the values of the explicit diffusivity used by the model. It is shown that the effective mixing in this model configuration is up to an order of magnitude higher than the explicit mixing in much of the ocean interior, implying that mixing in the model below the mixed layer is largely dominated by numerical mixing. This is likely to have adverse consequences for the representation of heat uptake in climate models intended for decadal climate projections, and is highly relevant to the interpretation of the CMIP6 class of climate models, many of which use constant-depth ocean models at ¼° resolution

  15. ESTIMATION OF TURBULENT DIFFUSIVITY WITH DIRECT NUMERICAL SIMULATION OF STELLAR CONVECTION

    Energy Technology Data Exchange (ETDEWEB)

    Hotta, H.; Iida, Y.; Yokoyama, T., E-mail: hotta.h@eps.s.u-tokyo.ac.jp [Department of Earth and Planetary Science, University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-0033 (Japan)

    2012-05-20

    We investigate the value of the horizontal turbulent diffusivity η by numerical calculation of thermal convection. In this study, we introduce a new method whereby the turbulent diffusivity is estimated by monitoring the time development of a passive scalar that is initially distributed as a given Gaussian function with a spatial scale d_0. Our conclusions are as follows: (1) assuming the relation η = L_c v_rms/3, where v_rms is the root-mean-square (rms) velocity, the characteristic length L_c is restricted by the shortest of the pressure (density) scale height and the region depth. (2) The value of the turbulent diffusivity becomes greater with a larger initial distribution scale d_0. (3) The approximation of turbulent diffusion holds better when the ratio of the initial distribution scale d_0 to the characteristic length L_c is larger.
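    The monitoring-based estimation described above can be illustrated with a short sketch (not the authors' code): for a diffusion-like process an initially Gaussian passive scalar spreads so that its variance grows as sigma^2(t) = d_0^2 + 2*eta*t, and eta then follows from a linear fit of the measured variance against time; the synthetic numbers below are assumptions.

      import numpy as np

      def estimate_diffusivity(times, sigma2):
          """Fit sigma^2(t) = sigma0^2 + 2*eta*t and return eta."""
          slope, _ = np.polyfit(times, sigma2, 1)
          return 0.5 * slope

      # Synthetic monitoring of a 1-D Gaussian blob with d0 = 1.0 and eta = 0.05.
      rng = np.random.default_rng(0)
      eta_true, d0 = 0.05, 1.0
      t = np.linspace(0.0, 50.0, 26)
      sigma2 = d0**2 + 2.0 * eta_true * t + 0.01 * rng.standard_normal(t.size)
      print("estimated eta: %.3f (true %.3f)" % (estimate_diffusivity(t, sigma2), eta_true))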

  16. Joint Center Estimation Using Single-Frame Optimization: Part 1: Numerical Simulation.

    Science.gov (United States)

    Frick, Eric; Rahmatalla, Salam

    2018-04-04

    The biomechanical models used to refine and stabilize motion capture processes are almost invariably driven by joint center estimates, and any errors in joint center calculation carry over and can be compounded when calculating joint kinematics. Unfortunately, accurate determination of joint centers is a complex task, primarily due to measurements being contaminated by soft-tissue artifact (STA). This paper proposes a novel approach to joint center estimation implemented via sequential application of single-frame optimization (SFO). First, the method minimizes the variance of individual time frames’ joint center estimations via the developed variance minimization method to obtain accurate overall initial conditions. These initial conditions are used to stabilize an optimization-based linearization of human motion that determines a time-varying joint center estimation. In this manner, the complex and nonlinear behavior of human motion contaminated by STA can be captured as a continuous series of unique rigid-body realizations without requiring a complex analytical model to describe the behavior of STA. This article intends to offer proof of concept, and the presented method must be further developed before it can be reasonably applied to human motion. Numerical simulations were introduced to verify and substantiate the efficacy of the proposed methodology. When directly compared with a state-of-the-art inertial method, SFO reduced the error due to soft-tissue artifact in all cases by more than 45%. Instead of producing a single vector value to describe the joint center location during a motion capture trial as existing methods often do, the proposed method produced time-varying solutions that were highly correlated ( r > 0.82) with the true, time-varying joint center solution.

  17. Joint Center Estimation Using Single-Frame Optimization: Part 1: Numerical Simulation

    Directory of Open Access Journals (Sweden)

    Eric Frick

    2018-04-01

    Full Text Available The biomechanical models used to refine and stabilize motion capture processes are almost invariably driven by joint center estimates, and any errors in joint center calculation carry over and can be compounded when calculating joint kinematics. Unfortunately, accurate determination of joint centers is a complex task, primarily due to measurements being contaminated by soft-tissue artifact (STA. This paper proposes a novel approach to joint center estimation implemented via sequential application of single-frame optimization (SFO. First, the method minimizes the variance of individual time frames’ joint center estimations via the developed variance minimization method to obtain accurate overall initial conditions. These initial conditions are used to stabilize an optimization-based linearization of human motion that determines a time-varying joint center estimation. In this manner, the complex and nonlinear behavior of human motion contaminated by STA can be captured as a continuous series of unique rigid-body realizations without requiring a complex analytical model to describe the behavior of STA. This article intends to offer proof of concept, and the presented method must be further developed before it can be reasonably applied to human motion. Numerical simulations were introduced to verify and substantiate the efficacy of the proposed methodology. When directly compared with a state-of-the-art inertial method, SFO reduced the error due to soft-tissue artifact in all cases by more than 45%. Instead of producing a single vector value to describe the joint center location during a motion capture trial as existing methods often do, the proposed method produced time-varying solutions that were highly correlated (r > 0.82 with the true, time-varying joint center solution.

  18. Parameter estimation method that directly compares gravitational wave observations to numerical relativity

    Science.gov (United States)

    Lange, J.; O'Shaughnessy, R.; Boyle, M.; Calderón Bustillo, J.; Campanelli, M.; Chu, T.; Clark, J. A.; Demos, N.; Fong, H.; Healy, J.; Hemberger, D. A.; Hinder, I.; Jani, K.; Khamesra, B.; Kidder, L. E.; Kumar, P.; Laguna, P.; Lousto, C. O.; Lovelace, G.; Ossokine, S.; Pfeiffer, H.; Scheel, M. A.; Shoemaker, D. M.; Szilagyi, B.; Teukolsky, S.; Zlochower, Y.

    2017-11-01

    We present and assess a Bayesian method to interpret gravitational wave signals from binary black holes. Our method directly compares gravitational wave data to numerical relativity (NR) simulations. In this study, we present a detailed investigation of the systematic and statistical parameter estimation errors of this method. This procedure bypasses approximations used in semianalytical models for compact binary coalescence. In this work, we use the full posterior parameter distribution for only generic nonprecessing binaries, drawing inferences away from the set of NR simulations used, via interpolation of a single scalar quantity (the marginalized log likelihood, ln L ) evaluated by comparing data to nonprecessing binary black hole simulations. We also compare the data to generic simulations, and discuss the effectiveness of this procedure for generic sources. We specifically assess the impact of higher order modes, repeating our interpretation with both l ≤2 as well as l ≤3 harmonic modes. Using the l ≤3 higher modes, we gain more information from the signal and can better constrain the parameters of the gravitational wave signal. We assess and quantify several sources of systematic error that our procedure could introduce, including simulation resolution and duration; most are negligible. We show through examples that our method can recover the parameters for equal mass, zero spin, GW150914-like, and unequal mass, precessing spin sources. Our study of this new parameter estimation method demonstrates that we can quantify and understand the systematic and statistical error. This method allows us to use higher order modes from numerical relativity simulations to better constrain the black hole binary parameters.

  19. A simple numerical model to estimate the effect of coal selection on pulverized fuel burnout

    Energy Technology Data Exchange (ETDEWEB)

    Sun, J.K.; Hurt, R.H.; Niksa, S.; Muzio, L.; Mehta, A.; Stallings, J. [Brown University, Providence, RI (USA). Division Engineering

    2003-06-01

    The amount of unburned carbon in ash is an important performance characteristic in commercial boilers fired with pulverized coal. Unburned carbon levels are known to be sensitive to fuel selection, and there is great interest in methods of estimating the burnout propensity of coals based on proximate and ultimate analysis - the only fuel properties readily available to utility practitioners. A simple numerical model is described that is specifically designed to estimate the effects of coal selection on burnout in a way that is useful for commercial coal screening. The model is based on a highly idealized description of the combustion chamber but employs detailed descriptions of the fundamental fuel transformations. The model is validated against data from laboratory and pilot-scale combustors burning a range of international coals, and then against data obtained from full-scale units during periods of coal switching. The validated model form is then used in a series of sensitivity studies to explore the role of various individual fuel properties that influence burnout.

  20. Estimation of permeability and permeability anisotropy in horizontal wells through numerical simulation of mud filtrate invasion

    Energy Technology Data Exchange (ETDEWEB)

    Pereira, Nelson [PETROBRAS S.A., Rio de Janeiro, RJ (Brazil). Exploracao e Producao; Altman, Raphael; Rasmus, John; Oliveira, Jansen [Schlumberger Servicos de Petroleo Ltda., Rio de Janeiro, RJ (Brazil)

    2008-07-01

    This paper describes how permeability and permeability anisotropy are estimated in horizontal wells using LWD (logging-while-drilling) laterolog resistivity data. While-drilling and time-lapse (while-reaming) laterolog resistivity passes were used to capture the invasion process. Radial positions of water-based mud invasion fronts were calculated from the while-drilling and reaming resistivity data. The invasion process was then recreated by constructing forward models with a fully implicit, near-wellbore numerical simulation such that the invasion front at a given time was consistent with the position of the front predicted by resistivity inversions. The radial position of the invasion front was shown to be sensitive to formation permeability. The while-drilling environment provides a fertile scenario for investigating dynamic reservoir properties because mud cake integrity and growth are not fully developed, which means that the position of the invasion front at a particular point in time is more sensitive to formation permeability. The estimation of dynamic formation properties in horizontal wells is of particular value in marginal fields and deep-water offshore developments, where running wireline and obtaining core is not always feasible and where the accuracy of reservoir models can reduce the risk in field development decisions. (author)

  1. Numerical study of the evaporation process and parameter estimation analysis of an evaporation experiment

    Directory of Open Access Journals (Sweden)

    K. Schneider-Zapp

    2010-05-01

    Full Text Available Evaporation is an important process in soil-atmosphere interaction. The determination of hydraulic properties is one of the crucial parts in the simulation of water transport in porous media. Schneider et al. (2006) developed a new evaporation method to improve the estimation of hydraulic properties in the dry range. In this study we used numerical simulations of the experiment to study the physical dynamics in more detail, to optimise the boundary conditions and to choose the optimal combination of measurements. The physical analysis exposed, in accordance with experimental findings in the literature, two different evaporation regimes: (i) a soil-atmosphere boundary-layer-dominated regime (regime I) close to saturation, and (ii) a hydraulically dominated regime (regime II). During this second regime a drying front (the interface between the unsaturated and dry zones) with very steep gradients forms and penetrates deeper into the soil as time passes. The sensitivity analysis showed that the result is especially sensitive at the transition between the two regimes. By changing the boundary conditions it is possible to force the system to switch between the two regimes, e.g. from II back to I. Based on these findings a multistep experiment was developed. The response surfaces for all parameter combinations are flat and have a unique, localised minimum. The best parameter estimates are obtained if the evaporation flux and a potential measurement at 2 cm depth are used as target variables. Parameter estimation from simulated experiments with realistic measurement errors, using a two-stage Monte-Carlo Levenberg-Marquardt procedure and manual rejection of obvious misfits, led to acceptable results for three different soil textures.

  2. A hybrid numerical prediction scheme for solar radiation estimation in un-gauged catchments.

    Science.gov (United States)

    Shamim, M. A.; Bray, M.; Ishak, A. M.; Remesan, R.; Han, D.

    2009-09-01

    The importance of solar radiation at the earth's surface is reflected in its wide range of applications in meteorology, agricultural sciences, engineering, hydrology, crop water requirements, climatic change and energy assessment. It is quite random in nature, as it goes through different processes of assimilation and dispersion on its way to earth. Compared to other meteorological parameters, solar radiation is measured quite infrequently; for example, the worldwide ratio of stations collecting solar radiation to those collecting temperature is 1:500 (Badescu, 2008). Researchers therefore have to rely on indirect estimation techniques that include nonlinear models, artificial intelligence (e.g. neural networks), remote sensing and numerical weather prediction (NWP). This study proposes a hybrid numerical prediction scheme for solar radiation estimation in un-gauged catchments. It uses the PSU/NCAR Mesoscale Modelling system (MM5) (Grell et al., 1995) to parameterise the cloud effect on extraterrestrial radiation by dividing the atmosphere into four layers of very high (6-12 km), high (3-6 km), medium (1.5-3 km) and low (0-1.5 km) altitude. It is believed that various cloud forms exist within each of these layers. An hourly time series of upper-air pressure and relative humidity corresponding to all of these layers is determined for the Brue catchment, southwest UK, using MM5. The cloud index (CI) of each layer was then determined following Yang and Koike (2002): CI_i = [1/(p_bi - p_ti)] * \int_{p_ti}^{p_bi} max[0, (Rh - Rh_cri)/(1 - Rh_cri)] dp, where p_bi and p_ti are the air pressures at the bottom and top of each layer and Rh_cri is the critical value of relative humidity at which a certain cloud type forms. Output from a global clear-sky solar radiation model (MRM v-5) (Kambezidis and Psiloglu, 2008) is used along with meteorological datasets of temperature and precipitation and astronomical information. The analysis is aided by the
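    A small sketch of the layer cloud-index integral as reconstructed above (after Yang and Koike, 2002); the pressure levels, humidity profile and critical relative humidity are illustrative values, not data from the Brue catchment study.

      import numpy as np

      def cloud_index(p_levels, rh, rh_cri):
          """Pressure-weighted layer mean of max(0, (Rh - Rh_cri)/(1 - Rh_cri))."""
          excess = np.maximum(0.0, (rh - rh_cri) / (1.0 - rh_cri))
          p_b, p_t = p_levels.max(), p_levels.min()      # layer bottom/top pressure
          return np.trapz(excess, p_levels) / (p_t - p_b)

      p = np.array([700.0, 650.0, 600.0, 550.0])   # hPa, one mid-level layer, bottom to top
      rh = np.array([0.55, 0.72, 0.88, 0.64])      # relative humidity (0-1)
      print("layer cloud index: %.2f" % cloud_index(p, rh, rh_cri=0.60))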

  3. Numerical discretization-based estimation methods for ordinary differential equation models via penalized spline smoothing with applications in biomedical research.

    Science.gov (United States)

    Wu, Hulin; Xue, Hongqi; Kumar, Arun

    2012-06-01

    Differential equations are extensively used for modeling dynamics of physical processes in many scientific fields such as engineering, physics, and biomedical sciences. Parameter estimation of differential equation models is a challenging problem because of high computational cost and high-dimensional parameter space. In this article, we propose a novel class of methods for estimating parameters in ordinary differential equation (ODE) models, which is motivated by HIV dynamics modeling. The new methods exploit the form of numerical discretization algorithms for an ODE solver to formulate estimating equations. First, a penalized-spline approach is employed to estimate the state variables, and the estimated state variables are then plugged into a discretization formula of an ODE solver to obtain the ODE parameter estimates via a regression approach. We consider three discretization methods of different order: Euler's method, the trapezoidal rule, and the Runge-Kutta method. A higher-order numerical algorithm reduces numerical error in the approximation of the derivative, which produces a more accurate estimate, but its computational cost is higher. To balance computational cost and estimation accuracy, we demonstrate, via simulation studies, that the trapezoidal discretization-based estimate is the best and is recommended for practical use. The asymptotic properties of the proposed numerical discretization-based estimators are established. Comparisons between the proposed methods and existing methods show a clear benefit of the proposed methods in regards to the trade-off between computational cost and estimation accuracy. We apply the proposed methods to an HIV study to further illustrate the usefulness of the proposed approaches. © 2012, The International Biometric Society.
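    A compact sketch of the discretization-based idea (not the authors' implementation), using a simple exponential-decay ODE dx/dt = -theta*x in place of the HIV dynamics model: the state is first smoothed with a spline, and theta is then recovered from a regression built on the trapezoidal rule.

      import numpy as np
      from scipy.interpolate import UnivariateSpline

      rng = np.random.default_rng(0)
      theta_true, h = 0.3, 0.25
      t = np.arange(0.0, 10.0, h)
      x_noisy = np.exp(-theta_true * t) + 0.02 * rng.standard_normal(t.size)

      # Step 1: spline smoothing of the noisy state variable.
      x_hat = UnivariateSpline(t, x_noisy, s=t.size * 0.02**2)(t)

      # Step 2: trapezoidal discretization x_{i+1} - x_i = -(theta*h/2)(x_{i+1} + x_i),
      # solved for theta by least squares (regression without intercept).
      dx = np.diff(x_hat)
      design = -0.5 * h * (x_hat[1:] + x_hat[:-1])
      theta_est = np.dot(design, dx) / np.dot(design, design)
      print("estimated theta: %.3f (true %.3f)" % (theta_est, theta_true))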

  4. Wind gust estimation by combining numerical weather prediction model and statistical post-processing

    Science.gov (United States)

    Patlakas, Platon; Drakaki, Eleni; Galanis, George; Spyrou, Christos; Kallos, George

    2017-04-01

    The continuous rise of off-shore and near-shore activities, as well as the development of structures such as wind farms and various offshore platforms, requires the employment of state-of-the-art risk assessment techniques. Such analysis is used to set safety standards and can be characterized as a climatologically oriented approach. Nevertheless, reliable operational support is also needed in order to minimize cost drawbacks and human danger during the construction and operational stages, as well as during maintenance activities. One of the most important parameters for this kind of analysis is the wind speed intensity and its variability. A critical measure associated with this variability is the presence and magnitude of wind gusts, estimated at the reference level of 10 m. The latter can be attributed to different processes ranging from boundary-layer turbulence and convective activity to mountain waves and wake phenomena. The purpose of this work is the development of a wind gust forecasting methodology combining a numerical weather prediction model and a dynamical statistical tool based on Kalman filtering. To this end, the Wind Gust Estimate parameterization method was implemented within the framework of the atmospheric model SKIRON/Dust. The new modeling tool combines the atmospheric model with a statistical local adaptation methodology based on Kalman filters, and has been tested over the offshore west coastline of the United States. The main purpose is to provide a useful tool for wind analysis and prediction and for applications related to offshore wind energy (power prediction, operation and maintenance). The results have been evaluated using observational data from NOAA's buoy network and show good performance that is further improved by the local adjustment post-processing.
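    A generic sketch of the statistical post-processing idea (a scalar Kalman filter that tracks the forecast bias as a random-walk state); it is not the SKIRON/Dust implementation, and the process/observation variances and synthetic gust series are assumptions.

      import numpy as np

      def kalman_bias_correct(forecasts, observations, q=0.05, r=0.5):
          """Return bias-corrected forecasts; q and r are process/observation variances."""
          bias, p = 0.0, 1.0
          corrected = np.empty_like(forecasts, dtype=float)
          for i, (f, y) in enumerate(zip(forecasts, observations)):
              corrected[i] = f - bias          # correct with the current bias estimate
              p += q                           # predict step (random-walk bias model)
              k = p / (p + r)                  # Kalman gain
              bias += k * ((f - y) - bias)     # update with the latest innovation
              p *= (1.0 - k)
          return corrected

      rng = np.random.default_rng(1)
      truth = 10.0 + 2.0 * np.sin(np.linspace(0.0, 6.0, 100))
      fcst = truth + 1.5 + 0.4 * rng.standard_normal(100)     # biased model gusts (m/s)
      obs = truth + 0.3 * rng.standard_normal(100)            # buoy-like observations
      corr = kalman_bias_correct(fcst, obs)
      print("mean error before/after: %.2f / %.2f" % ((fcst - obs).mean(), (corr - obs).mean()))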

  5. Dynamic State Estimation for Multi-Machine Power System by Unscented Kalman Filter With Enhanced Numerical Stability

    Energy Technology Data Exchange (ETDEWEB)

    Qi, Junjian; Sun, Kai; Wang, Jianhui; Liu, Hui

    2018-03-01

    In this paper, in order to enhance the numerical stability of the unscented Kalman filter (UKF) used for power system dynamic state estimation, a new UKF with guaranteed positive semidefinite estimation error covariance (UKFGPS) is proposed and compared with five existing approaches, including UKFschol, UKF-kappa, UKFmodified, UKF-Delta Q, and the square-root UKF (SRUKF). These methods and the extended Kalman filter (EKF) are tested by performing dynamic state estimation on the WSCC 3-machine 9-bus system and the NPCC 48-machine 140-bus system. For the WSCC system, all methods obtain good estimates. However, for the NPCC system, both the EKF and the classic UKF fail. It is found that UKFschol, UKF-kappa, and UKF-Delta Q do not work well in some estimations, while UKFGPS works well in most cases. UKFmodified and SRUKF always work well, indicating their better scalability, mainly due to their enhanced numerical stability.
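    One generic way to guarantee a usable covariance before the sigma-point (Cholesky) step of a UKF is to project it onto the nearest symmetric matrix with non-negative eigenvalues; the sketch below shows that idea only and is not necessarily the UKFGPS algorithm of the paper.

      import numpy as np

      def make_psd(cov, eps=1e-9):
          """Symmetrize and clip negative eigenvalues so Cholesky cannot fail."""
          sym = 0.5 * (cov + cov.T)
          w, v = np.linalg.eigh(sym)
          return (v * np.maximum(w, eps)) @ v.T

      # A covariance that has drifted slightly indefinite through round-off.
      p_bad = np.array([[1.0, 0.999, 0.0],
                        [0.999, 1.0, 0.0],
                        [0.0, 0.0, -1e-6]])
      p_ok = make_psd(p_bad)
      np.linalg.cholesky(p_ok)                 # succeeds after the projection
      print("min eigenvalue after fix: %.1e" % np.linalg.eigvalsh(p_ok).min())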

  6. Estimating groundwater-ephemeral stream exchange in hyper-arid environments: Field experiments and numerical simulations

    Science.gov (United States)

    Wang, Ping; Pozdniakov, Sergey P.; Vasilevskiy, Peter Yu.

    2017-12-01

    Surface water infiltration from ephemeral dryland streams is particularly important in hyporheic exchange and biogeochemical processes in arid and semi-arid regions. However, streamflow transmission losses can vary significantly, partly due to spatiotemporal variations in streambed permeability. To extend our understanding of changes in streambed hydraulic properties, field investigations of streambed hydraulic conductivity were conducted in an ephemeral dryland stream in north-western China during high and low streamflow periods. Additionally, streamflow transmission losses were numerically estimated using combined stream and groundwater hydraulic head data and stream and streambed temperature data. An analysis of slug test data at two different river flow stages (one test was performed at a low river stage with clean water and the other at a high river stage with muddy water) suggested that sedimentation from fine-grained particles, i.e., physical clogging processes, likely led to a reduction in streambed hydraulic properties. To account for the effects of streambed clogging on changes in hydraulic properties, an iteratively increasing total hydraulic resistance during the slug test was considered to correct the estimation of streambed hydraulic conductivity. The stream and streambed temperature can also greatly influence the hydraulic properties of the streambed. One-dimensional coupled water and heat flux modelling with HYDRUS-1D was used to quantify the effects of seasonal changes in stream and streambed temperature on streamflow losses. During the period from 6 August 2014 to 4 June 2015, the total infiltration estimated using temperature-dependent hydraulic conductivity accounted for approximately 88% of that using temperature-independent hydraulic conductivity. Streambed clogging processes associated with fine particle settling/wash up cycles during flow events, and seasonal changes in streamflow temperature are two considerable factors that affect water

  7. Reconciling experimental and static-dynamic numerical estimations of seismic anisotropy in Alpine Fault mylonites

    Science.gov (United States)

    Adam, L.; Frehner, M.; Sauer, K. M.; Toy, V.; Guerin-Marthe, S.; Boulton, C. J.

    2017-12-01

    Quartzo-feldspathic mylonites and schists are the main contributors to seismic wave anisotropy in the vicinity of the Alpine Fault (New Zealand). We must determine how the physical properties of rocks like these influence elastic wave anisotropy if we want to unravel both the reasons for heterogeneous seismic wave propagation and the deformation processes in fault zones. To study such controls on velocity anisotropy we can: 1) experimentally measure elastic wave anisotropy on cores at in-situ conditions, or 2) estimate wave velocities by static (effective medium averaging) or dynamic (finite element) modelling based on EBSD data or photomicrographs. Here we compare all three approaches in a study of schist and mylonite samples from the Alpine Fault. Volumetric proportions of intrinsically anisotropic micas in cleavage domains and comparatively isotropic quartz+feldspar in microlithons commonly vary significantly within one sample. Our analysis examines the effects of these phases and their arrangement, and further addresses how heterogeneity influences elastic wave anisotropy. We compare P-wave seismic anisotropy estimates based on millimetre-scale ultrasonic waves under in-situ conditions with simulations that account for micrometre-scale variations in the elastic properties of constituent minerals with the MTEX toolbox and finite-element wave propagation on EBSD images. We observe that the sorts of variations in the distribution of micas and quartz+feldspar within any one of our real core samples can change the elastic wave anisotropy by 10

  8. Low cycle fatigue numerical estimation of a high pressure turbine disc for the AL-31F jet engine

    Directory of Open Access Journals (Sweden)

    Spodniak Miroslav

    2017-01-01

    Full Text Available This article describes an approximate numerical approach to estimating the low-cycle fatigue of a high-pressure turbine disc for the AL-31F turbofan jet engine. The numerical estimation is based on the finite element method carried out in the SolidWorks software. The low-cycle fatigue assessment of the high-pressure turbine disc was carried out on the basis of the dimensional, shape and material characteristics available for this particular disc. The method described here enables a relatively fast and economically feasible low-cycle fatigue assessment of the disc using commercially available software. The accuracy of the numerical low-cycle fatigue estimate depends on the accuracy of the input data required for the particular object investigated.

  9. Numerical Estimation Method for the NonStationary Thrust of Pulsejet Ejector Nozzle

    Directory of Open Access Journals (Sweden)

    A. Yu. Mikushkin

    2016-01-01

    Full Text Available The article considers a method for calculating the non-stationary thrust of a pulsejet ejector nozzle based on detonation combustion of gaseous fuel. To determine the initial distributions of the thermodynamic parameters inside the detonation tube, a rapid analysis based on x-t diagrams of the motion of the glowing combustion products was carried out. For this purpose, a section with transparent walls was connected to the outlet of the tube to register the movement of the combustion products. Based on the obtained images and on gas-dynamic and thermodynamic equations, the velocity distribution of the combustion products and their density, pressure and temperature, required for the numerical analysis, were calculated. The world literature presents data on the distribution of these parameters, but only for direct initiation of detonation at the closed end and for a chemically "frozen" gas composition. The article presents interpolation methods for parameters measured at temperatures of 2500-2800 K. The estimation of the thermodynamic parameters is based on the Chapman-Jouguet condition that the speed of the combustion products directly behind the detonation wave front, relative to the wave front, equals the local speed of sound of those products. The method of minimizing the enthalpy of the final thermodynamic state was used to calculate the equilibrium parameters, employing the software package «IVTANTHERMO», a database of thermodynamic properties of many individual substances over a wide temperature range. The integral thrust was calculated numerically over the ejector nozzle surface. We solved the Navier-Stokes equations using a second-order finite-difference Roe scheme. The combustion products were considered both as an inert mixture with "frozen" composition and as a mixture in chemical equilibrium with changing temperature. A comparison with experimental results was made. The above method can be used for rapid

  10. Contributions to the numerical modeling of concrete structures cracking with creep and estimation of the permeability

    International Nuclear Information System (INIS)

    Dufour, F.

    2007-12-01

    The industrial context of this research work is the study of the durability of the internal barriers of nuclear power plants. This paper is divided into two parts: the first concerns the crack-damage state and the second the consequences of creep on the rupture properties of concrete. In the first part, the analysis of experimental results (carried out on a compression cylinder on which the radial permeability was measured) shows that the permeability first decreases, up to a deformation of about half of that at the force peak, through re-closure of the pre-existing microcracks in the material; it then increases strongly until after the force peak through initiation, connection and opening of cracks, and finally increases less rapidly until rupture because only the opening of the macro-cracks increases. In order to simulate these phenomena, two original post-processing methods are presented for estimating the leakage from a mechanical computation based on finite elements. The first method estimates the permeability from the damage field and from a relation between permeability and damage that links the Poiseuille law to an empirical law established for weak damage. The second method is based on the deformation field, from which the position and opening of the crack are calculated; the Poiseuille relation is then applied along the crack to estimate the leakage rate. The relation between concrete creep and its mechanical characteristics is analysed in the second part, in particular the consequences of creep on the long-term mechanical properties. After presenting the experimental results, which essentially show an embrittlement of the material after creep, a qualitative analysis through a bifurcation study is proposed, followed by a discrete numerical method that recovers the same influence of visco-elasticity on the embrittlement at rupture observed experimentally. At last, the first results of

  11. Estimating biozone hydraulic conductivity in wastewater soil-infiltration systems using inverse numerical modeling.

    Science.gov (United States)

    Bumgarner, Johnathan R; McCray, John E

    2007-06-01

    During operation of an onsite wastewater treatment system, a low-permeability biozone develops at the infiltrative surface (IS) during application of wastewater to soil. Inverse numerical-model simulations were used to estimate the biozone saturated hydraulic conductivity (K(biozone)) under variably saturated conditions for 29 wastewater infiltration test cells installed in a sandy loam field soil. Test cells employed two loading rates (4 and 8 cm/day) and three IS designs: open chamber, gravel, and synthetic bundles. The ratio of K(biozone) to the saturated hydraulic conductivity of the natural soil (K(s)) was used to quantify the reduction in IS hydraulic conductivity; a smaller value of K(biozone)/K(s) reflects a greater reduction in hydraulic conductivity. The IS hydraulic conductivity was reduced by 1-3 orders of magnitude. The reduction in IS hydraulic conductivity was primarily influenced by wastewater loading rate and IS type, and not by the K(s) of the native soil. The higher loading rate yielded greater reductions in IS hydraulic conductivity than the lower loading rate for bundle and gravel cells, but the difference was not statistically significant for chamber cells. Bundle and gravel cells exhibited a greater reduction in IS hydraulic conductivity than chamber cells at the higher loading rate, while the difference between gravel and bundle systems was not statistically significant. At the lower rate, bundle cells exhibited generally lower K(biozone)/K(s) values, but not at a statistically significant level, while gravel and chamber cells were statistically similar. Gravel cells exhibited the greatest variability in measured values, which may complicate design efforts based on K(biozone) evaluations for these systems. These results suggest that chamber systems may provide for a more robust design, particularly for high or variable wastewater infiltration rates.

  12. Numerical estimation of temperature field in a laser welded butt joint made of dissimilar materials

    Directory of Open Access Journals (Sweden)

    Saternus Zbigniew

    2018-01-01

    Full Text Available The paper concerns the numerical analysis of thermal phenomena occurring in the laser beam butt welding of two different materials. The temperature distribution in the welded butt joint is obtained from numerical simulations performed in the ABAQUS program. The numerical analysis takes into account the thermophysical properties of the welded plates, which are made of two different materials. The temperature distribution in the analysed joints is obtained from numerical simulation with the Abaqus/Standard solver, which allows the geometry of the laser-welded butt joint to be determined.

  13. Numerical estimation of transport properties of cementitious materials using 3D digital images

    NARCIS (Netherlands)

    Ukrainczyk, N.; Koenders, E.A.B.; Van Breugel, K.

    2012-01-01

    A multi-scale characterisation of the transport process within the cementitious microstructure poses a great challenge in terms of modelling and schematization. In this paper a numerical method is proposed to mitigate the resolution problems in numerical methods for calculating effective transport

  14. Effect of Numerical Error on Gravity Field Estimation for GRACE and Future Gravity Missions

    Science.gov (United States)

    McCullough, Christopher; Bettadpur, Srinivas

    2015-04-01

    In recent decades, gravity field determination from low Earth orbiting satellites, such as the Gravity Recovery and Climate Experiment (GRACE), has become increasingly more effective due to the incorporation of high accuracy measurement devices. Since instrumentation quality will only increase in the near future and the gravity field determination process is computationally and numerically intensive, numerical error from the use of double precision arithmetic will eventually become a prominent error source. While using double-extended or quadruple precision arithmetic will reduce these errors, the numerical limitations of current orbit determination algorithms and processes must be accurately identified and quantified in order to adequately inform the science data processing techniques of future gravity missions. The most obvious numerical limitation in the orbit determination process is evident in the comparison of measured observables with computed values, derived from mathematical models relating the satellites' numerically integrated state to the observable. Significant error in the computed trajectory will corrupt this comparison and induce error in the least squares solution of the gravitational field. In addition, errors in the numerically computed trajectory propagate into the evaluation of the mathematical measurement model's partial derivatives. These errors amalgamate in turn with numerical error from the computation of the state transition matrix, computed using the variational equations of motion, in the least squares mapping matrix. Finally, the solution of the linearized least squares system, computed using a QR factorization, is also susceptible to numerical error. Certain interesting combinations of each of these numerical errors are examined in the framework of GRACE gravity field determination to analyze and quantify their effects on gravity field recovery.
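    The sensitivity of a least-squares solution to working precision and solver choice can be illustrated with a toy problem (purely illustrative; unrelated to actual GRACE processing): an ill-conditioned design matrix is solved with numpy's orthogonal/SVD-based lstsq and with the normal equations, in single and double precision.

      import numpy as np

      rng = np.random.default_rng(3)
      a = rng.standard_normal((200, 6))
      a[:, 5] = a[:, 0] + 1e-3 * rng.standard_normal(200)    # nearly dependent columns
      x_true = np.arange(1.0, 7.0)
      b = a @ x_true

      for precision in (np.float32, np.float64):
          a_p, b_p = a.astype(precision), b.astype(precision)
          x_lsq, *_ = np.linalg.lstsq(a_p, b_p, rcond=None)          # orthogonal/SVD solver
          x_neq = np.linalg.solve(a_p.T @ a_p, a_p.T @ b_p)          # normal equations
          print("%s  lstsq err %.1e  normal-eq err %.1e"
                % (precision.__name__, np.abs(x_lsq - x_true).max(), np.abs(x_neq - x_true).max()))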

  15. Numerical Estimation of Balanced and Falling States for Constrained Legged Systems

    Science.gov (United States)

    Mummolo, Carlotta; Mangialardi, Luigi; Kim, Joo H.

    2017-08-01

    Instability and risk of fall during standing and walking are common challenges for biped robots. While existing criteria from state-space dynamical systems approach or ground reference points are useful in some applications, complete system models and constraints have not been taken into account for prediction and indication of fall for general legged robots. In this study, a general numerical framework that estimates the balanced and falling states of legged systems is introduced. The overall approach is based on the integration of joint-space and Cartesian-space dynamics of a legged system model. The full-body constrained joint-space dynamics includes the contact forces and moments term due to current foot (or feet) support and another term due to altered contact configuration. According to the refined notions of balanced, falling, and fallen, the system parameters, physical constraints, and initial/final/boundary conditions for balancing are incorporated into constrained nonlinear optimization problems to solve for the velocity extrema (representing the maximum perturbation allowed to maintain balance without changing contacts) in the Cartesian space at each center-of-mass (COM) position within its workspace. The iterative algorithm constructs the stability boundary as a COM state-space partition between balanced and falling states. Inclusion in the resulting six-dimensional manifold is a necessary condition for a state of the given system to be balanced under the given contact configuration, while exclusion is a sufficient condition for falling. The framework is used to analyze the balance stability of example systems with various degrees of complexities. The manifold for a 1-degree-of-freedom (DOF) legged system is consistent with the experimental and simulation results in the existing studies for specific controller designs. The results for a 2-DOF system demonstrate the dependency of the COM state-space partition upon joint-space configuration (elbow-up vs

  16. Estimating EQ-5D values from the Oswestry Disability Index and numeric rating scales for back and leg pain.

    Science.gov (United States)

    Carreon, Leah Y; Bratcher, Kelly R; Das, Nandita; Nienhuis, Jacob B; Glassman, Steven D

    2014-04-15

    Cross-sectional cohort. The purpose of this study is to determine whether the EuroQOL-5D (EQ-5D) can be derived from commonly available low back disease-specific health-related quality of life measures. The Oswestry Disability Index (ODI) and numeric rating scales (0-10) for back pain (BP) and leg pain (LP) are widely used disease-specific measures in patients with lumbar degenerative disorders. Increasingly, the EQ-5D is being used as a measure of utility due to ease of administration and scoring. The EQ-5D, ODI, BP, and LP were prospectively collected in 14,544 patients seen in clinic for lumbar degenerative disorders. Pearson correlation coefficients for paired observations from multiple time points between ODI, BP, LP, and EQ-5D were determined. Regression modeling was done to compute the EQ-5D score from the ODI, BP, and LP. The mean age was 53.3 ± 16.4 years and 41% were male. Correlations between the EQ-5D and the ODI, BP, and LP were statistically significant (P < 0.0001) with correlation coefficients of -0.77, -0.50, and -0.57, respectively. The regression equation: [0.97711 + (-0.00687 × ODI) + (-0.01488 × LP) + (-0.01008 × BP)] to predict EQ-5D, had an R2 of 0.61 and a root mean square error of 0.149. The model using ODI alone had an R2 of 0.57 and a root mean square error of 0.156. The model using the individual ODI items had an R2 of 0.64 and a root mean square error of 0.143. The correlation coefficient between the observed and estimated EQ-5D score was 0.78. There was no statistically significant difference between the actual EQ-5D (0.553 ± 0.238) and the estimated EQ-5D score (0.553 ± 0.186) using the ODI, BP, and LP regression model. However, rounding off the coefficients to less than 5 decimal places produced less accurate results. Unlike previous studies showing a robust relationship between low back-specific measures and the Short Form-6D, a similar relationship was not seen between the ODI, BP, LP, and the EQ-5D. Thus, the EQ-5D cannot be
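    For reference, the reported regression can be written out directly (the authors caution that its predictive accuracy is limited and that rounding the coefficients to fewer decimal places degrades the estimates); the example inputs are hypothetical.

      def eq5d_from_odi(odi, back_pain, leg_pain):
          """Estimate EQ-5D utility from ODI (0-100) and 0-10 back/leg pain ratings."""
          return 0.97711 + (-0.00687 * odi) + (-0.01488 * leg_pain) + (-0.01008 * back_pain)

      # Hypothetical patient: ODI = 40, back pain = 6, leg pain = 5.
      print("estimated EQ-5D: %.3f" % eq5d_from_odi(40, back_pain=6, leg_pain=5))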

  17. A numerical method to estimate AC loss in superconducting coated conductors by finite element modelling

    Energy Technology Data Exchange (ETDEWEB)

    Hong, Z; Jiang, Q; Pei, R; Campbell, A M; Coombs, T A [Engineering Department, University of Cambridge, Trumpington Street, Cambridge CB2 1PZ (United Kingdom)

    2007-04-15

    A finite element method code based on the critical state model is proposed to solve the AC loss problem in YBCO coated conductors. This numerical method is based on a set of partial differential equations (PDEs) in which the magnetic field is used as the state variable. The AC loss problems have been investigated both in self-field condition and external field condition. Two numerical approaches have been introduced: the first model is configured on the cross-section plane of the YBCO tape to simulate an infinitely long superconducting tape. The second model represents the plane of the critical current flowing and is able to simulate the YBCO tape with finite length where the end effect is accounted. An AC loss measurement has been done to verify the numerical results and shows a good agreement with the numerical solution.

  18. A Numerical Approach to Estimate the Ballistic Coefficient of Space Debris from TLE Orbital Data

    Science.gov (United States)

    Narkeliunas, Jonas

    2016-01-01

    theoretical simulations, even a few continuous-mode 10 kW ground-based lasers, focused by 1.5 m telescopes with adaptive optics, were enough to prevent a significant number of debris collisions. Simulations were done by propagating all space objects in LEO one year into the future and checking whether the probability of collision was high. Different ground-based lasers were then used to divert those objects, after which the collision probabilities were re-evaluated. However, the actual accuracy of the LightForce software, developed at NASA Ames Research Center, depends on the veracity of the input parameters, one of which is the object's ballistic coefficient. This is a measure of a body's ability to overcome air resistance, which has a significant impact on debris in LEO and is thus responsible for the shape of the debris trajectory. Having exact values of the ballistic coefficient would allow significantly better collision predictions; unfortunately, we do not know the values for most objects. In this research, we worked with the part of the LightForce code that estimates the ballistic coefficient from ephemerides. The previously used method gave highly inaccurate values when compared with known objects and needed to be changed. The goal of this work was to try out a different method of estimating the ballistic coefficient and to check whether it gives noticeable improvements.

  19. Evaluation and adjustment of altimeter measurement and numerical hindcast in wave height trend estimation in China's coastal seas

    Science.gov (United States)

    Li, Shuiqing; Guan, Shoude; Hou, Yijun; Liu, Yahao; Bi, Fan

    2018-05-01

    A long-term trend of significant wave height (SWH) in China's coastal seas was examined based on three datasets derived from satellite measurements and numerical hindcasts. One set of altimeter data was obtained from GlobWave, while the two numerical hindcast datasets were obtained from the third-generation wind wave model WAVEWATCH III, forced by wind fields from the Cross-Calibrated Multi-Platform (CCMP) product and NCEP's Climate Forecast System Reanalysis (CFSR). The mean and extreme wave trends were estimated for the period 1992-2010 with respect to the annual mean and the 99th-percentile values of SWH, respectively. The altimeter wave trend estimates carry considerable uncertainty owing to the sparse sampling rate; furthermore, the extreme wave trend tends to be overestimated because the sampling rate increases over time. Numerical wave trends strongly depend on the quality of the wind fields: the CCMP waves significantly overestimate the wave trend, whereas the CFSR waves tend to underestimate it. Corresponding adjustments were applied, which effectively improved the trend estimates from the altimeter and numerical data. The adjusted results show generally increasing mean wave trends, while the extreme wave trends are more spatially varied, from decreasing trends prevailing in the South China Sea to significant increasing trends mainly in the East China Sea.
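    The trend calculation itself reduces to a least-squares fit to annual statistics; the sketch below (with synthetic daily SWH, not the GlobWave or WAVEWATCH III series) illustrates it for the annual mean and the annual 99th percentile.

      import numpy as np

      def annual_trend(years, annual_values):
          """Return the linear trend (units per year) from a least-squares fit."""
          slope, _ = np.polyfit(years, annual_values, 1)
          return slope

      rng = np.random.default_rng(2)
      years = np.arange(1992, 2011)
      daily = [1.5 + 0.004 * (y - 1992) + 0.5 * rng.weibull(2.0, 365) for y in years]
      mean_swh = np.array([d.mean() for d in daily])
      p99_swh = np.array([np.percentile(d, 99) for d in daily])
      print("mean-SWH trend:        %.4f m/yr" % annual_trend(years, mean_swh))
      print("99th-percentile trend: %.4f m/yr" % annual_trend(years, p99_swh))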

  20. Estimation of flushing time in a monsoonal estuary using observational and numerical approaches

    Digital Repository Service at National Institute of Oceanography (India)

    Manoj, N.T.

    and numerical model simulations to correlate TF with monthly mean river discharges. The power regression equation derived from FOS (numerical model) showed good statistical fit with data (r=-0.997 (-1.0)) for any given river discharge compared... was to calculate the TF in the Mandovi during three different seasons in a year. For this purpose, we adopted two approaches, first the computation of TF from FOS. The application of H2N-Model was another approach to calculate the TF in the estuary. The FWF...
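    The power-regression step mentioned above (relating the flushing time TF to river discharge Q) amounts to a straight-line fit in log-log space; the sketch below uses made-up numbers, not the Mandovi coefficients.

      import numpy as np

      def fit_power_law(discharge, flushing_time):
          """Return (a, b) for TF = a * Q**b via least squares in log space."""
          b, log_a = np.polyfit(np.log(discharge), np.log(flushing_time), 1)
          return np.exp(log_a), b

      rng = np.random.default_rng(4)
      q = np.array([50.0, 120.0, 300.0, 700.0, 1500.0])                   # discharge (m3/s)
      tf = 80.0 * q ** -0.6 * np.exp(0.05 * rng.standard_normal(q.size))  # flushing time (days)
      a, b = fit_power_law(q, tf)
      print("TF ~ %.1f * Q^%.2f" % (a, b))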

  1. NUMERICAL ESTIMATES OF ELECTRODYNAMICS PROCESSES IN THE INDUCTOR SYSTEM WITH AN ATTRACTIVE SCREEN AND A FLAT RECTANGULAR SOLENOID

    Directory of Open Access Journals (Sweden)

    E. A. Chaplygin

    2018-04-01

    Full Text Available Purpose. To carry out numerical estimates of the currents and forces in the investigated inductor system with an attractive screen (ISAS) and to determine the effectiveness of the attractive force. Methodology. The calculated relationships and graphical constructions were obtained using the initial data of the system: the induced current in the screen and the sheet metal; the distributed force of attraction (Ampère force); the repulsive force acting on the sheet metal (Lorentz force); the amplitude values of the attractive and repulsive forces; and the phase dependence of the attractive force, the repulsive force and the total resulting force. Results. The results of the calculations are presented as graphical dependencies of the electrodynamic processes in the region under the conductors of the rectangular solenoid of the inductor system with an attracting screen. Graphs of the forces and currents in the region of the dent are obtained. The paper analyses the electrodynamic processes for the whole area under the winding of the ISAS and presents how these processes develop in the region of a dent of given geometry. Originality. The considered inductor system with an attractive screen and a rectangular solenoid is improved in comparison with previously developed ISAS designs. It has a working area under the lines of parallel conductors in the cross-section of a rectangular solenoid, which allows a predetermined portion of the sheet metal to be placed anywhere within the working region. Comparison of the electrodynamic-process indicators in the considered calculation variants shows a growth of almost 1.5 times in the force indicators in the area of the assumed dent compared with similar values for the entire area under the winding of the ISAS. Practical value. The results obtained are important for practical estimates of the excited forces of attraction. With a decrease in the dent, the amplitude of the

  2. Root water extraction and limiting soil hydraulic conditions estimated by numerical simulation

    NARCIS (Netherlands)

    Jong van Lier, de Q.; Metselaar, K.; Dam, van J.C.

    2006-01-01

    Root density, soil hydraulic functions, and hydraulic head gradients play an important role in the determination of transpiration-rate-limiting soil water contents. We developed an implicit numerical root water extraction model to solve the Richards equation for the modeling of radial root water

  3. Numerical estimation of structure constants in the three-dimensional Ising conformal field theory through Markov chain uv sampler

    Science.gov (United States)

    Herdeiro, Victor

    2017-09-01

    Herdeiro and Doyon [Phys. Rev. E 94, 043322 (2016), 10.1103/PhysRevE.94.043322] introduced a numerical recipe, dubbed uv sampler, offering precise estimations of the conformal field theory (CFT) data of the planar two-dimensional (2D) critical Ising model. It made use of scale invariance emerging at the critical point in order to sample finite sublattice marginals of the infinite plane Gibbs measure of the model by producing holographic boundary distributions. The main ingredient of the Markov chain Monte Carlo sampler is the invariance under dilation. This paper presents a generalization to higher dimensions with the critical 3D Ising model. This leads to numerical estimations of a subset of the CFT data—scaling weights and structure constants—through fitting of measured correlation functions. The results are shown to agree with the recent most precise estimations from numerical bootstrap methods [Kos, Poland, Simmons-Duffin, and Vichi, J. High Energy Phys. 08 (2016) 036, 10.1007/JHEP08(2016)036].

  4. Effects of numerical dissipation and unphysical excursions on scalar-mixing estimates in large-eddy simulations

    Science.gov (United States)

    Sharan, Nek; Matheou, Georgios; Dimotakis, Paul

    2017-11-01

    Artificial numerical dissipation decreases dispersive oscillations and can play a key role in mitigating unphysical scalar excursions in large eddy simulations (LES). Its influence on scalar mixing can be assessed through the resolved-scale scalar, Z, its probability density function (PDF), variance, spectra, and the budget of the horizontally averaged equation for Z². LES of incompressible temporally evolving shear flow enabled us to study the influence of numerical dissipation on unphysical scalar excursions and mixing estimates. Flows with different mixing behavior, with both marching and non-marching scalar PDFs, are studied. Scalar fields for each flow are compared for different grid resolutions and numerical scalar-convection term schemes. As expected, increasing numerical dissipation enhances scalar mixing in the development stage of shear flow characterized by organized large-scale pairings with a non-marching PDF, but has little influence in the self-similar stage of flows with marching PDFs. Flow parameters and regimes sensitive to numerical dissipation help identify approaches to mitigate unphysical excursions while minimizing dissipation.

  5. Measured and estimated glomerular filtration rate. Numerous methods of measurements (Part I)

    Directory of Open Access Journals (Sweden)

    Jaime Pérez Loredo

    2017-04-01

    Equations applied for estimating GFR in population studies should be reconsidered, given their imperfection and the difficulty for clinicians who are not specialists in the subject to interpret the results.

  6. Joint Center Estimation Using Single-Frame Optimization: Part 1: Numerical Simulation

    OpenAIRE

    Eric Frick; Salam Rahmatalla

    2018-01-01

    The biomechanical models used to refine and stabilize motion capture processes are almost invariably driven by joint center estimates, and any errors in joint center calculation carry over and can be compounded when calculating joint kinematics. Unfortunately, accurate determination of joint centers is a complex task, primarily due to measurements being contaminated by soft-tissue artifact (STA). This paper proposes a novel approach to joint center estimation implemented via sequential applic...

  7. Numerical estimates of the evolution of quark and gluon populations inside QCD jets

    International Nuclear Information System (INIS)

    Garetto, M.

    1980-01-01

    The system of first-order differential equations for the probabilities of producing n_g gluons and n_q quarks in a single gluon or quark jet is solved numerically for a convenient choice of the parameters A, Ã, B. Relevant branching ratios as the evolution parameter Y increases are shown. The different behaviour of the distributions in the quark and in the gluon jet is discussed. (author)

  8. Numerical Simulation of Aerogasdynamics Processes in A Longwall Panel for Estimation of Spontaneous Combustion Hazards

    Science.gov (United States)

    Meshkov, Sergey; Sidorenko, Andrey

    2017-11-01

    The relevance of solving the problem of endogenous fire safety in seams liable to self-ignition is shown. The possibilities of numerical methods for studying gas-dynamic processes are considered. Methodical approaches to creating models and carrying out numerical studies of aerogasdynamic processes in longwall panels of gassy mines are analysed. Parameters of the gob for longwall mining are considered. The significant influence of the geological and mining conditions of mining operations on the distribution of air streams in longwall panels and on the effective management of gas emission is shown. An aerogasdynamic model of longwall panels for further research into the influence of ventilation parameters and gob properties is presented. The results of numerical studies, including the distribution of air streams and the concentration fields of methane and oxygen for various ventilation schemes, are given for the conditions of prospective mines of the Pechora basin and Kuzbass. Recommendations for increasing the efficiency of mining coal seams liable to self-ignition are made. Directions for further research are defined.

  9. Numerical Estimation of the Outer Bank Resistance Characteristics in an Evolving Meandering River

    Science.gov (United States)

    Wang, D.; Konsoer, K. M.; Rhoads, B. L.; Garcia, M. H.; Best, J.

    2017-12-01

    Few studies have examined the three-dimensional flow structure and its interaction with bed morphology within elongate loops of large meandering rivers. The present study uses a numerical model to simulate the flow pattern and sediment transport, especially the flow close to the outer bank, at two elongate meander loops in the Wabash River, USA. The numerical grid for the model is based on a combination of airborne LIDAR data on the floodplains and multibeam data within the river channel. A Finite Element Method (FEM) is used to solve the non-hydrostatic RANS equations using a k-epsilon turbulence closure scheme. The high-resolution topographic data allow detailed numerical simulation of flow patterns along the outer bank, and model calibration involves comparing simulated velocities to ADCP measurements at 41 cross sections near this bank. Results indicate that flow along the outer bank is strongly influenced by large resistance elements, including woody debris, large erosional scallops within the bank face, and outcropping bedrock. In general, patterns of bank migration conform to zones of high near-bank velocity and shear stress. Using the existing model, different virtual events can be simulated to explore the impacts of different resistance characteristics on patterns of flow, sediment transport, and bank erosion.

  10. Numerical estimation on balance coefficients of central difference averaging method for quench detection of the KSTAR PF coils

    International Nuclear Information System (INIS)

    Kim, Jin Sub; An, Seok Chan; Ko, Tae Kuk; Chu, Yong

    2016-01-01

    A quench detection system for the KSTAR Poloidal Field (PF) coils is indispensable for stable operation, because the normal zone overheats when a quench occurs. Recently, a new voltage-based quench detection method, a combination of Central Difference Averaging (CDA) and Mutual Inductance Compensation (MIK), which compensates mutual inductive voltage more effectively than the conventional voltage detection method, has been suggested and studied. For better cancellation of the mutual induction from adjacent coils by the CDA+MIK method in the KSTAR coil system, the balance coefficients of CDA must first be estimated and adjusted. In this paper, the balance coefficients of CDA for the KSTAR PF coils were numerically estimated. The estimated result was adopted and tested by simulation. The CDA method adopting the balance coefficients effectively eliminated the mutual inductive voltage, and it is also expected to improve the performance of the CDA+MIK method for quench detection of the KSTAR PF coils.
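
    The idea behind central difference averaging is that the inductive pickup on a coil can be cancelled by a weighted difference of the voltages across its two halves, with the balance coefficient chosen to minimise the residual inductive voltage. The sketch below shows a least-squares estimate of such a coefficient on synthetic waveforms; the actual KSTAR PF coil parameters and the MIK stage are not modelled, so this is only an illustration of the principle.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 2000)

# Synthetic mutually induced voltage driven by neighbouring coil currents
induced = 0.8 * np.sin(2 * np.pi * 3 * t) + 0.3 * np.sin(2 * np.pi * 7 * t)

# Voltages across the two coil halves: nearly equal inductive pickup
# (slightly imbalanced) plus independent measurement noise.
v_half_a = 1.00 * induced + rng.normal(0.0, 0.01, t.size)
v_half_b = 0.95 * induced + rng.normal(0.0, 0.01, t.size)

# Balance coefficient k minimising |v_half_a - k * v_half_b| in a least-squares sense
k = np.dot(v_half_a, v_half_b) / np.dot(v_half_b, v_half_b)
residual = v_half_a - k * v_half_b   # ideally only the resistive (quench) voltage remains

print(f"estimated balance coefficient: {k:.3f}")
print(f"inductive voltage rms before/after: {np.std(v_half_a):.3f} / {np.std(residual):.3f}")
```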

  11. Estimating the Cross-Shelf Export of Riverine Materials: Part 1. General Relationships From an Idealized Numerical Model

    Science.gov (United States)

    Izett, Jonathan G.; Fennel, Katja

    2018-02-01

    Rivers deliver large amounts of terrestrially derived materials (such as nutrients, sediments, and pollutants) to the coastal ocean, but a global quantification of the fate of this delivery is lacking. Nutrients can accumulate on shelves, potentially driving high levels of primary production with negative consequences like hypoxia, or be exported across the shelf to the open ocean where impacts are minimized. Global biogeochemical models cannot resolve the relatively small-scale processes governing river plume dynamics and cross-shelf export; instead, river inputs are often parameterized assuming an "all or nothing" approach. Recently, Sharples et al. (2017), https://doi.org/10.1002/2016GB005483 proposed the SP number—a dimensionless number relating the estimated size of a plume as a function of latitude to the local shelf width—as a simple estimator of cross-shelf export. We extend their work, which is solely based on theoretical and empirical scaling arguments, and address some of its limitations using a numerical model of an idealized river plume. In a large number of simulations, we test whether the SP number can accurately describe export in unforced cases and with tidal and wind forcings imposed. Our numerical experiments confirm that the SP number can be used to estimate export and enable refinement of the quantitative relationships proposed by Sharples et al. We show that, in general, external forcing has only a weak influence compared to latitude and derive empirical relationships from the results of the numerical experiments that can be used to estimate riverine freshwater export to the open ocean.
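
    As a rough illustration of the scaling argument behind an SP-like number, the sketch below compares a latitude-dependent plume length scale with the local shelf width. The plume scale is taken here to be the internal Rossby radius of deformation, which is an assumption of this example rather than the exact definition used by Sharples et al.

```python
import numpy as np

OMEGA = 7.2921e-5  # Earth's rotation rate, rad/s

def coriolis(lat_deg):
    """Coriolis parameter f = 2 * Omega * sin(latitude)."""
    return 2.0 * OMEGA * np.sin(np.radians(lat_deg))

def plume_scale(lat_deg, g_reduced=0.02, plume_depth_m=10.0):
    """Illustrative plume width: internal Rossby radius sqrt(g'h)/f, in metres."""
    return np.sqrt(g_reduced * plume_depth_m) / coriolis(lat_deg)

def sp_number(lat_deg, shelf_width_m):
    """Assumed form: ratio of plume scale to shelf width; larger values suggest
    the plume is wide relative to the shelf and more likely to export offshore."""
    return plume_scale(lat_deg) / shelf_width_m

print(f"60 deg N, 50 km shelf : SP ~ {sp_number(60.0, 50e3):.3f}")
print(f"10 deg N, 200 km shelf: SP ~ {sp_number(10.0, 200e3):.3f}")
```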

  12. A numerical integration-based yield estimation method for integrated circuits

    International Nuclear Information System (INIS)

    Liang Tao; Jia Xinzhang

    2011-01-01

    A novel integration-based yield estimation method is developed for yield optimization of integrated circuits. This method tries to integrate the joint probability density function on the acceptability region directly. To achieve this goal, the simulated performance data of unknown distribution should be converted to follow a multivariate normal distribution by using Box-Cox transformation (BCT). In order to reduce the estimation variances of the model parameters of the density function, orthogonal array-based modified Latin hypercube sampling (OA-MLHS) is presented to generate samples in the disturbance space during simulations. The principle of variance reduction of model parameters estimation through OA-MLHS together with BCT is also discussed. Two yield estimation examples, a fourth-order OTA-C filter and a three-dimensional (3D) quadratic function are used for comparison of our method with Monte Carlo based methods including Latin hypercube sampling and importance sampling under several combinations of sample sizes and yield values. Extensive simulations show that our method is superior to other methods with respect to accuracy and efficiency under all of the given cases. Therefore, our method is more suitable for parametric yield optimization. (semiconductor integrated circuits)

  13. A numerical integration-based yield estimation method for integrated circuits

    Energy Technology Data Exchange (ETDEWEB)

    Liang Tao; Jia Xinzhang, E-mail: tliang@yahoo.cn [Key Laboratory of Ministry of Education for Wide Bandgap Semiconductor Materials and Devices, School of Microelectronics, Xidian University, Xi'an 710071 (China)

    2011-04-15

    A novel integration-based yield estimation method is developed for yield optimization of integrated circuits. This method tries to integrate the joint probability density function on the acceptability region directly. To achieve this goal, the simulated performance data of unknown distribution should be converted to follow a multivariate normal distribution by using Box-Cox transformation (BCT). In order to reduce the estimation variances of the model parameters of the density function, orthogonal array-based modified Latin hypercube sampling (OA-MLHS) is presented to generate samples in the disturbance space during simulations. The principle of variance reduction of model parameters estimation through OA-MLHS together with BCT is also discussed. Two yield estimation examples, a fourth-order OTA-C filter and a three-dimensional (3D) quadratic function are used for comparison of our method with Monte Carlo based methods including Latin hypercube sampling and importance sampling under several combinations of sample sizes and yield values. Extensive simulations show that our method is superior to other methods with respect to accuracy and efficiency under all of the given cases. Therefore, our method is more suitable for parametric yield optimization. (semiconductor integrated circuits)
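
    A minimal sketch of two building blocks named in the abstract, a Box-Cox transformation toward normality and Latin hypercube sampling of the disturbance space, using standard SciPy facilities. The orthogonal-array modification, the actual circuit performance model, and the direct integration of the joint density over the acceptability region are not reproduced; the final yield here is a plain Monte Carlo count, used only to make the example self-contained.

```python
import numpy as np
from scipy import stats
from scipy.stats import qmc

rng = np.random.default_rng(2)

# Latin hypercube sample of a 3-dimensional "disturbance space" in [0, 1]^3,
# then scaled to illustrative parameter bounds.
sampler = qmc.LatinHypercube(d=3, seed=2)
unit_samples = sampler.random(n=200)
params = qmc.scale(unit_samples, l_bounds=[0.9, 0.9, 0.9], u_bounds=[1.1, 1.1, 1.1])

# Stand-in "performance" with a skewed (non-normal) distribution.
performance = np.exp(params @ np.array([1.0, 0.5, -0.3])) + rng.normal(0, 0.01, 200)

# Box-Cox transformation toward normality (requires positive data).
transformed, lam = stats.boxcox(performance)
print(f"Box-Cox lambda: {lam:.3f}")

# Yield estimate: fraction of samples inside an assumed acceptability region.
spec_low, spec_high = 2.5, 3.5
yield_est = np.mean((performance >= spec_low) & (performance <= spec_high))
print(f"estimated yield: {yield_est:.2%}")
```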

  14. Numerical Estimation of Fatigue Life of Wind Turbines due to Shadow Effect

    DEFF Research Database (Denmark)

    Thoft-Christensen, Palle; Pedersen, Ronnie; Nielsen, Søren R.K.

    2009-01-01

    The influence of tower design on damage accumulation in up-wind turbine blades during tower passage is discussed. The fatigue life of a blade is estimated for a tripod tower configuration and a standard mono-tower. The blade stresses are determined from a dynamic mechanical model with a delay...

  15. Efficient PDE based numerical estimation of credit and liquidity risk measures for realistic derivative portfolios

    NARCIS (Netherlands)

    de Graaf, C.S.L.

    2016-01-01

    In the Basel III accords in 2013, it was stated that financial institutions should charge Credit Value Adjustment (CVA) to their counterparties for (previously under-regulated) Over-The-Counter (OTC) trades. This CVA can be used to hedge a possible default of the counterparty. One important

  16. Estimating non-circular motions in barred galaxies using numerical N-body simulations

    Science.gov (United States)

    Randriamampandry, T. H.; Combes, F.; Carignan, C.; Deg, N.

    2015-12-01

    The observed velocities of the gas in barred galaxies are a combination of the azimuthally averaged circular velocity and non-circular motions, primarily caused by gas streaming along the bar. These non-circular flows must be accounted for before the observed velocities can be used in mass modelling. In this work, we examine the performance of the tilted-ring method and the DISKFIT algorithm for transforming velocity maps of barred spiral galaxies into rotation curves (RCs) using simulated data. We find that the tilted-ring method, which does not account for streaming motions, under-/overestimates the circular motions when the bar is parallel/perpendicular to the projected major axis. DISKFIT, which does include streaming motions, is limited to orientations where the bar is not aligned with either the major or minor axis of the image. Therefore, we propose a method of correcting RCs based on numerical simulations of galaxies. We correct the RC derived from the tilted-ring method based on a numerical simulation of a galaxy with similar properties and projections as the observed galaxy. Using observations of NGC 3319, which has a bar aligned with the major axis, as a test case, we show that the inferred mass models from the uncorrected and corrected RCs are significantly different. These results show the importance of correcting for the non-circular motions and demonstrate that new methods of accounting for these motions are necessary as current methods fail for specific bar alignments.

  17. ESTIMATION OF THE WANDA GLACIER (SOUTH SHETLANDS) SEDIMENT EROSION RATE USING NUMERICAL MODELLING

    OpenAIRE

    Kátia Kellem Rosa; Rosemary Vieira; Jefferson Cardia Simões

    2013-01-01

    Glacial sediment yield results from glacial erosion and is influenced by several factors, including the glacial retreat rate, ice flow velocity and thermal regime. This paper estimates the contemporary subglacial erosion rate and sediment yield of Wanda Glacier (King George Island, South Shetlands). This work also examines basal sediment evacuation mechanisms by runoff and glacial erosion processes during subglacial transport. This is a small temperate glacier that has been retreating for the l...

  18. Obtaining numerically consistent estimates from a mix of administrative data and surveys

    OpenAIRE

    de Waal, A.G.

    2016-01-01

    National statistical institutes (NSIs) fulfil an important role as providers of objective and undisputed statistical information on many different aspects of society. To this end NSIs try to construct data sets that are rich in information content and that can be used to estimate a large variety of population figures. At the same time NSIs aim to construct these rich data sets as efficiently and cost effectively as possible. This can be achieved by utilizing already available administrative d...

  19. Estimation of Rainfall Associated with Typhoons over the Ocean Using TRMM/TMI and Numerical Models

    Directory of Open Access Journals (Sweden)

    Nan-Ching Yeh

    2015-11-01

    This study quantitatively estimated the precipitation associated with typhoons in the northwestern Pacific Ocean by using a physical algorithm which included the Weather Research and Forecasting model, the Radiative Transfer for TIROS Operational Vertical Sounder model, and data from the Tropical Rainfall Measuring Mission (TRMM) Microwave Imager (TMI) and the TRMM Precipitation Radar (PR). First, a prior probability distribution function (PDF) was constructed using over three million rain-rate retrievals from the TRMM/PR data for the period 2002–2010 over the northwestern Pacific Ocean. Subsequently, brightness temperatures for 15 typhoons that occurred over the northwestern Pacific Ocean were simulated using a microwave radiative transfer model, and a conditional PDF was obtained for these typhoons. A posterior PDF was then obtained by combining the prior and conditional PDFs. Finally, the rain rate associated with a typhoon was estimated by inputting the TMI observations (attenuation indices at 10, 19 and 37 GHz) into the posterior PDF (lookup table). Results based on the rain-rate retrievals indicated that the rainband locations with the heaviest rainfall showed qualitatively similar horizontal distributions. The correlation coefficient and root-mean-square error of the rain-rate estimation were 0.63 and 4.45 mm·h−1, respectively. Furthermore, the correlation coefficient and root-mean-square error for convective rainfall were 0.78 and 7.25 mm·h−1, respectively, and those for stratiform rainfall were 0.58 and 9.60 mm·h−1, respectively. The main contribution of this study is introducing an approach to quickly and accurately estimate typhoon precipitation without the need for complex calculations.
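
    The retrieval described above is essentially Bayes' rule applied over a discretised rain-rate axis: a prior PDF from PR retrievals is combined with a conditional PDF of the observed index given the rain rate, and the posterior serves as a lookup table. A heavily simplified one-channel sketch with synthetic distributions (no radiative transfer, and a made-up forward relation) is given below.

```python
import numpy as np

# Discretised rain-rate axis (mm/h)
rain = np.linspace(0.0, 50.0, 501)

# Prior distribution of rain rate, e.g. built from a large archive of PR
# retrievals (an exponential shape is assumed here purely for illustration).
prior = np.exp(-rain / 5.0)
prior /= prior.sum()

def likelihood(tb_index, rain):
    """Assumed conditional PDF p(index | rain): Gaussian scatter around a
    hypothetical monotonic relation between rain rate and attenuation index."""
    expected_index = 0.8 * rain
    return np.exp(-0.5 * ((tb_index - expected_index) / 3.0) ** 2)

def posterior_mean_rain(tb_index):
    post = prior * likelihood(tb_index, rain)
    post /= post.sum()                      # normalised posterior (lookup table)
    return np.sum(rain * post)

for tb in (2.0, 10.0, 25.0):
    print(f"index {tb:5.1f} -> posterior-mean rain rate {posterior_mean_rain(tb):5.1f} mm/h")
```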

  20. Estimating the mass of the Local Group using machine learning applied to numerical simulations

    Science.gov (United States)

    McLeod, M.; Libeskind, N.; Lahav, O.; Hoffman, Y.

    2017-12-01

    We present a new approach to calculating the combined mass of the Milky Way (MW) and Andromeda (M31), which together account for the bulk of the mass of the Local Group (LG). We base our work on an ensemble of 30,190 halo pairs from the Small MultiDark simulation, assuming a ΛCDM (Cosmological Constant and Cold Dark Matter) cosmology. This is used in conjunction with machine learning methods (artificial neural networks, ANN) to investigate the relationship between the mass and selected parameters characterising the orbit and local environment of the binary. ANN are employed to take account of additional physics arising from interactions with larger structures or dynamical effects which are not analytically well understood. Results from the ANN are most successful when the velocity shear is provided, which demonstrates the flexibility of machine learning to model physical phenomena and readily incorporate new information. The resulting estimate for the Local Group mass, when shear information is included, is 4.9 × 10^12 M_solar, with an error of ±0.8 × 10^12 M_solar from the 68% uncertainty in observables, and an r.m.s. scatter interval of +1.7/−1.3 × 10^12 M_solar estimated from the differences between the model estimates and simulation masses for a testing sample of halo pairs. We also consider a recently reported large relative transverse velocity of M31 and the Milky Way, and produce an alternative mass estimate of 3.6 ± 0.3 (+2.1/−1.3) × 10^12 M_solar. Although the methods used predict similar values for the most likely mass of the LG, application of ANN compared to the traditional Timing Argument reduces the scatter in the log mass by approximately half when tested on samples from the simulation.
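
    A minimal sketch of the regression step, learning a halo-pair log mass from orbital features such as separation, radial velocity and a shear proxy, using scikit-learn's MLPRegressor on synthetic data. The Small MultiDark catalogue, the actual feature set and the calibration in the paper are not reproduced; both the features and the target relation below are invented for illustration.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n = 5000

# Synthetic "halo pair" features: separation (Mpc), radial velocity (km/s),
# tangential velocity (km/s), and a scalar velocity-shear proxy.
X = np.column_stack([
    rng.uniform(0.5, 1.2, n),
    rng.uniform(-150.0, -50.0, n),
    rng.uniform(0.0, 100.0, n),
    rng.normal(0.0, 1.0, n),
])

# Synthetic log10(mass) loosely tied to a timing-argument-like combination of
# separation and radial velocity, plus scatter (illustrative, not calibrated).
log_mass = 12.0 + 0.8 * (-X[:, 1] / 100.0) * X[:, 0] + 0.1 * X[:, 3] \
           + rng.normal(0.0, 0.1, n)

X_train, X_test, y_train, y_test = train_test_split(X, log_mass, random_state=0)
ann = make_pipeline(StandardScaler(),
                    MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000,
                                 random_state=0))
ann.fit(X_train, y_train)
residuals = y_test - ann.predict(X_test)
print(f"test-set R^2              : {ann.score(X_test, y_test):.3f}")
print(f"r.m.s. scatter in log mass: {residuals.std():.3f} dex")
```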

  1. Numerical estimate of fracture parameters under elastic and elastic-plastic conditions

    International Nuclear Information System (INIS)

    Soba, Alejandro; Denis, Alicia C.

    2003-01-01

    The importance of the stress intensity factor K in elastic fracture analysis is well known. In this work three methods are developed to estimate the parameter K_I, corresponding to the normal (mode I) loading mode, employing the finite element method. The elastic-plastic condition is also analyzed, where the line integral J is the relevant parameter. Two cases of interest are studied: a sample with a crack in its center and tubes with internal pressure. (author)
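
    For the centre-cracked plate case mentioned above, a common closed-form check on a finite element estimate is the standard K_I expression with a finite-width (secant) correction. A minimal sketch with assumed geometry and load follows; it is not the numerical procedure of the paper.

```python
import math

def k_i_center_crack(stress_mpa, half_crack_m, half_width_m):
    """Mode-I stress intensity factor for a centre-cracked finite-width plate,
    K_I = sigma * sqrt(pi * a) * sqrt(sec(pi * a / (2 * W))), in MPa*sqrt(m)."""
    a, w = half_crack_m, half_width_m
    correction = math.sqrt(1.0 / math.cos(math.pi * a / (2.0 * w)))
    return stress_mpa * math.sqrt(math.pi * a) * correction

# Illustrative numbers: 100 MPa remote stress, 10 mm half-crack, 50 mm half-width
print(f"K_I ~ {k_i_center_crack(100.0, 0.010, 0.050):.1f} MPa*sqrt(m)")
```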

  2. A Numerical Estimation of a RFID Reader Field and SAR inside a Blood Bag at UHF

    Directory of Open Access Journals (Sweden)

    Alessandro Fanti

    2016-11-01

    In this paper, the effects of the UHF electromagnetic fields produced by an RFID reader on a blood bag are evaluated numerically in several configurations. The simulation results, namely the field level and distribution, the specific absorption rate (SAR), and the heating time, show that exposure to a typical reader field leads to a temperature increase smaller than 0.1 °C and to a SAR smaller than 1 W/kg. As a consequence, no adverse biological effects occur during a typical UHF RFID reading cycle on a blood bag. Therefore, the blood contained in a bag traced using UHF RFID is as safe as that traced using barcodes. The proposed analysis supports the use of UHF RFID in the blood transfusion supply chain.

  3. Lagrangian methods for blood damage estimation in cardiovascular devices--How numerical implementation affects the results.

    Science.gov (United States)

    Marom, Gil; Bluestein, Danny

    2016-01-01

    This paper evaluated the influence of various numerical implementation assumptions on predicting blood damage in cardiovascular devices using Lagrangian methods with Eulerian computational fluid dynamics. The implementation assumptions that were tested included various seeding patterns, a stochastic walk model, and simplified trajectory calculations with pathlines. Post-processing implementation options that were evaluated included single-passage and repeated-passage stress accumulation and time averaging. This study demonstrated that the implementation assumptions can significantly affect the resulting stress accumulation, i.e., the blood damage model predictions. Careful consideration should be given to the use of Lagrangian models. Ultimately, the appropriate assumptions should be chosen based on the physics of the specific case, and sensitivity analyses similar to the ones presented here should be employed.

  4. Spent Fuel Ratio Estimates from Numerical Models in ALE3D

    Energy Technology Data Exchange (ETDEWEB)

    Margraf, J. D. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Dunn, T. A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2016-08-02

    Potential threat of intentional sabotage of spent nuclear fuel storage facilities is of significant importance to national security. Paramount is the study of focused energy attacks on these materials and the potential release of aerosolized hazardous particulates into the environment. Depleted uranium oxide (DUO2) is often chosen as a surrogate material for testing due to the unreasonable cost and safety demands for conducting full-scale tests with real spent nuclear fuel. To account for differences in mechanical response resulting in changes to particle distribution it is necessary to scale the DUO2 results to get a proper measure for spent fuel. This is accomplished with the spent fuel ratio (SFR), the ratio of respirable aerosol mass released due to identical damage conditions between a spent fuel and a surrogate material like depleted uranium oxide (DUO2). A very limited number of full-scale experiments have been carried out to capture this data, and the oft-questioned validity of the results typically leads to overly-conservative risk estimates. In the present work, the ALE3D hydrocode is used to simulate DUO2 and spent nuclear fuel pellets impacted by metal jets. The results demonstrate an alternative approach to estimate the respirable release fraction of fragmented nuclear fuel.

  5. Numerical estimation of ultrasonic production of hydrogen: Effect of ideal and real gas based models.

    Science.gov (United States)

    Kerboua, Kaouther; Hamdaoui, Oualid

    2018-01-01

    Based on two different assumptions regarding the equation describing the state of the gases within an acoustic cavitation bubble, this paper studies the sonochemical production of hydrogen through two numerical models treating the evolution of a chemical mechanism within a single oxygen-saturated bubble during an oscillation cycle in water. The first approach is built on an ideal gas model, while the second one is founded on the Van der Waals equation, and the main objective was to analyze the effect of the chosen state equation on the ultrasonic hydrogen production retrieved by simulation under various operating conditions. The results show that even though the second approach gives higher values of temperature, pressure and total free-radical production, the hydrogen yield does not follow the same trend. When comparing the results released by both models regarding hydrogen production, it was noticed that the ratio of the molar amount of hydrogen depends on frequency and acoustic amplitude. The use of the Van der Waals equation leads to higher quantities of hydrogen under low acoustic amplitude and high frequencies, while the ideal-gas-law-based model gains the upper hand regarding hydrogen production at low frequencies and high acoustic amplitudes. Copyright © 2017 Elsevier B.V. All rights reserved.
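
    The contrast between the two state equations can be illustrated with the classical adiabatic-collapse estimate of the peak bubble temperature, in which the Van der Waals hard-core radius reduces the compressible volume. The sketch below uses assumed radii and a fixed heat-capacity ratio; the full chemical mechanism and bubble dynamics of the paper are not reproduced.

```python
import math

GAMMA = 1.4          # ratio of specific heats assumed for the bubble content
T0 = 300.0           # ambient liquid temperature, K

def t_max_ideal(r_max, r_min):
    """Peak temperature for adiabatic compression of an ideal gas."""
    return T0 * (r_max / r_min) ** (3.0 * (GAMMA - 1.0))

def t_max_van_der_waals(r_max, r_min, h):
    """Same estimate with a Van der Waals hard-core (excluded-volume) radius h."""
    return T0 * ((r_max**3 - h**3) / (r_min**3 - h**3)) ** (GAMMA - 1.0)

# Illustrative radii (metres): maximum, minimum, and hard-core radius
r_max, r_min, h = 60e-6, 6e-6, 5e-6
print(f"ideal gas     : T_max ~ {t_max_ideal(r_max, r_min):8.0f} K")
print(f"Van der Waals : T_max ~ {t_max_van_der_waals(r_max, r_min, h):8.0f} K")
```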

  6. A numerical estimate of the small-k_T region in the BFKL pomeron

    International Nuclear Information System (INIS)

    Bartels, J.

    1995-11-01

    A computer study is performed to estimate the influence of the small-k_T region in the BFKL evolution equation. We consider the small-x region of the deep inelastic structure function F_2 and show that the magnitude of the small-k_T region depends on Q^2 and x_B. We suggest that the width of the log k_T^2 distribution in the final state may serve as an additional footprint of BFKL dynamics. For diffractive dissociation it is shown that the contribution of the infrared region is large, even for large Q^2. This contribution becomes smaller only if restrictions on the final state are imposed. (orig.)

  7. Various methods of numerical estimation of generalized stress intensity factors of bi-material notches

    Directory of Open Access Journals (Sweden)

    Klusák J.

    2009-12-01

    The study of bi-material notches is becoming a topical problem, as they can efficiently model geometrical or material discontinuities. When assessing crack initiation conditions in bi-material notches, the generalized stress intensity factors H have to be calculated. Contrary to the determination of the K-factor for a crack in an isotropic homogeneous medium, there is no procedure incorporated in standard calculation systems for ascertaining the H-factor. The calculation of these fracture parameters requires experience. Direct methods of estimating H-factors usually require choosing a length parameter that enters the calculation. On the other hand, the method combining the application of the reciprocal theorem (Ψ-integral) with FEM does not require any length parameter and is capable of extracting the near-tip information directly from the far-field deformation.

  8. Fine-root mortality rates in a temperate forest: Estimates using radiocarbon data and numerical modeling

    Energy Technology Data Exchange (ETDEWEB)

    Riley, W.J.; Gaudinski, J.B.; Torn, M.S.; Joslin, J.D.; Hanson, P.J.

    2009-09-01

    We used an inadvertent whole-ecosystem ¹⁴C label at a temperate forest in Oak Ridge, Tennessee, USA to develop a model (Radix1.0) of fine-root dynamics. Radix simulates two live-root pools, two dead-root pools, non-normally distributed root mortality turnover times, a stored carbon (C) pool, and seasonal growth and respiration patterns. We applied Radix to analyze measurements from two root size classes (< 0.5 and 0.5-2.0 mm diameter) and three soil-depth increments (O horizon, 0-15 cm and 30-60 cm). Predicted live-root turnover times were < 1 yr and 10 yr for short- and long-lived pools, respectively. Dead-root pools had decomposition turnover times of 2 yr and 10 yr. Realistic characterization of C flows through fine roots requires a model with two live fine-root populations, two dead fine-root pools, and root respiration. These are the first fine-root turnover time estimates that take into account respiration, storage, seasonal growth patterns, and non-normal turnover time distributions. The presence of a root population with decadal turnover times implies a lower amount of belowground net primary production used to grow fine-root tissue than is currently predicted by models with a single annual turnover pool.

  9. The inverse Numerical Computer Program FLUX-BOT for estimating Vertical Water Fluxes from Temperature Time-Series.

    Science.gov (United States)

    Trauth, N.; Schmidt, C.; Munz, M.

    2016-12-01

    Heat as a natural tracer to quantify water fluxes between groundwater and surface water has evolved into a standard hydrological method. Typically, time series of temperatures in the surface water and in the sediment are observed and subsequently evaluated with a vertical 1D representation of heat transport by advection and dispersion. Several analytical solutions, as well as their implementations in user-friendly software, exist to estimate water fluxes from the observed temperatures. Analytical solutions can be easily implemented, but assumptions on the boundary conditions have to be made a priori, e.g. a sinusoidal upper temperature boundary. Numerical models offer more flexibility and can handle temperature data characterized by irregular variations, such as storm-event-induced temperature changes, which cannot readily be incorporated in analytical solutions. This also reduces the effort of data preprocessing, such as extracting the diurnal temperature variation. We developed software to estimate water FLUXes Based On Temperatures (FLUX-BOT). FLUX-BOT is a numerical code written in MATLAB which is intended to calculate vertical water fluxes in saturated sediments, based on the inversion of measured temperature time series observed at multiple depths. It applies a cell-centered Crank-Nicolson implicit finite difference scheme to solve the one-dimensional heat advection-conduction equation. Besides its core inverse numerical routines, FLUX-BOT includes functions for visualizing the results and for performing uncertainty analysis. We provide applications of FLUX-BOT to generic as well as to measured temperature data to demonstrate its performance.
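
    A minimal forward-model sketch of the core numerical ingredient, a Crank-Nicolson finite-difference step for 1D heat advection-conduction in saturated sediment, written here in Python with illustrative parameter values. It is not the FLUX-BOT code itself, and the inversion and uncertainty analysis are not reproduced.

```python
import numpy as np

# Grid and assumed thermal parameters (illustrative values)
nz, dz, dt = 101, 0.01, 60.0             # 1 m domain, 1 cm cells, 60 s steps
kappa = 1e-6                              # effective thermal diffusivity, m^2/s
v_thermal = 1e-5                          # thermal front velocity from water flux, m/s

alpha = kappa * dt / dz**2
beta = v_thermal * dt / (2.0 * dz)

# Crank-Nicolson matrices A T^{n+1} = B T^n for the interior nodes
A = np.diag(np.full(nz, 1.0 + alpha)) \
    + np.diag(np.full(nz - 1, -(alpha - beta) / 2.0), 1) \
    + np.diag(np.full(nz - 1, -(alpha + beta) / 2.0), -1)
B = np.diag(np.full(nz, 1.0 - alpha)) \
    + np.diag(np.full(nz - 1, (alpha - beta) / 2.0), 1) \
    + np.diag(np.full(nz - 1, (alpha + beta) / 2.0), -1)

# Dirichlet boundaries: fix the first and last rows to the boundary values
for M in (A, B):
    M[0, :], M[-1, :] = 0.0, 0.0
    M[0, 0] = M[-1, -1] = 1.0

T = np.full(nz, 10.0)                     # initial sediment temperature, deg C
for t in np.arange(0, 24, dt / 3600.0):   # 24 hours of simulated time
    T[0] = 10.0 + 5.0 * np.sin(2 * np.pi * t / 24.0)   # diurnal surface signal
    T[-1] = 10.0                                        # constant deep temperature
    T = np.linalg.solve(A, B @ T)

print(f"temperature at 0.2 m depth after 24 h: {T[20]:.2f} deg C")
```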

  10. Electromagnetic Energy Released in the Subduction (Benioff) Zone in Weeks Previous to Earthquake Occurrence in Central Peru and the Estimation of Earthquake Magnitudes.

    Science.gov (United States)

    Heraud, J. A.; Centa, V. A.; Bleier, T.

    2017-12-01

    During the past four years, magnetometers deployed along the Peruvian coast have been providing evidence that the ULF pulses received are indeed generated at the subduction (Benioff) zone and are connected with the occurrence of earthquakes within a few kilometers of the source of such pulses. This evidence was presented at the AGU 2015 Fall Meeting, showing the results of triangulating pulses from two magnetometers located in central Peru, using data collected during a two-year period. Additional work has been done, and the method has now been extended to provide the instantaneous energy released at the stress areas on the Benioff zone during the precursory stage, before an earthquake occurs. Data collected from several events and from other parts of the country will be shown in a sequential animated form that illustrates the way energy is released in the ULF part of the electromagnetic spectrum. The analysis has been extended in time and to other geographical areas. Only pulses associated with the occurrence of earthquakes are taken into account, in an area strongly associated with subduction-zone seismic events, and several pulse parameters have been used to estimate a function relating the magnitude of the earthquake to the value of a function generated from those parameters. The results shown, including the animated data video, constitute additional work towards estimating the magnitude of an earthquake about to occur, based on electromagnetic pulses originating at the subduction zone. The method is providing clearer evidence that electromagnetic precursors in effect convey physical and useful information prior to the advent of a seismic event.

  11. When is best-worst best? A comparison of best-worst scaling, numeric estimation, and rating scales for collection of semantic norms.

    Science.gov (United States)

    Hollis, Geoff; Westbury, Chris

    2018-02-01

    Large-scale semantic norms have become both prevalent and influential in recent psycholinguistic research. However, little attention has been directed towards understanding the methodological best practices of such norm collection efforts. We compared the quality of semantic norms obtained through rating scales, numeric estimation, and a less commonly used judgment format called best-worst scaling. We found that best-worst scaling usually produces norms with higher predictive validities than other response formats, and does so requiring less data to be collected overall. We also found evidence that the various response formats may be producing qualitatively, rather than just quantitatively, different data. This raises the issue of potential response format bias, which has not been addressed by previous efforts to collect semantic norms, likely because of previous reliance on a single type of response format for a single type of semantic judgment. We have made available software for creating best-worst stimuli and scoring best-worst data. We also made available new norms for age of acquisition, valence, arousal, and concreteness collected using best-worst scaling. These norms include entries for 1,040 words, of which 1,034 are also contained in the ANEW norms (Bradley & Lang, Affective norms for English words (ANEW): Instruction manual and affective ratings (pp. 1-45). Technical report C-1, the center for research in psychophysiology, University of Florida, 1999).
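
    A minimal sketch of the simplest scoring rule for best-worst data: each item's value is the number of times it was judged best minus the number of times it was judged worst, normalised by how often it appeared. The trials below are invented; the published software also supports more elaborate scoring schemes that are not reproduced here.

```python
from collections import defaultdict

# Each trial: the items shown, plus which was judged "best" and which "worst"
# (toy concreteness judgements; the words and choices are made up for illustration).
trials = [
    {"items": ["apple", "justice", "hammer", "idea"], "best": "hammer", "worst": "idea"},
    {"items": ["apple", "freedom", "table", "idea"], "best": "table", "worst": "freedom"},
    {"items": ["justice", "freedom", "hammer", "table"], "best": "hammer", "worst": "justice"},
    {"items": ["apple", "justice", "table", "freedom"], "best": "apple", "worst": "justice"},
]

best = defaultdict(int)
worst = defaultdict(int)
shown = defaultdict(int)

for trial in trials:
    for item in trial["items"]:
        shown[item] += 1
    best[trial["best"]] += 1
    worst[trial["worst"]] += 1

# Best-worst score in [-1, 1]: higher means judged more concrete in this toy example
scores = {w: (best[w] - worst[w]) / shown[w] for w in shown}
for word, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{word:8s} {score:+.2f}")
```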

  12. METRIC CHARACTERISTICS OF VARIOUS METHODS FOR NUMERICAL DENSITY ESTIMATION IN TRANSMISSION LIGHT MICROSCOPY – A COMPUTER SIMULATION

    Directory of Open Access Journals (Sweden)

    Miroslav Kališnik

    2011-05-01

    In the introduction, the evolution of methods for the numerical density estimation of particles is briefly presented. Three pairs of methods have been analysed and compared: (1) classical methods for particle counting in thin and thick sections, (2) original and modified differential counting methods, and (3) physical and optical disector methods. Metric characteristics such as accuracy, efficiency, robustness, and feasibility of the methods have been estimated and compared. Logical, geometrical and mathematical analysis as well as computer simulations have been applied. In the computer simulations, a model of randomly distributed equal spheres with maximal contrast against the surroundings has been used. According to our computer simulation, all methods give accurate results provided that the sample is representative and sufficiently large. However, there are differences in their efficiency, robustness and feasibility. Efficiency and robustness increase with increasing slice thickness in all three pairs of methods. Robustness is superior in both differential and both disector methods compared to both classical methods. Feasibility can be judged according to the additional equipment as well as the histotechnical and counting procedures necessary for performing the individual counting methods. However, it is evident that not all practical problems can be efficiently solved with models.

  13. Experimental and numerical investigations of soil water balance at the hinterland of the Badain Jaran Desert for groundwater recharge estimation

    Science.gov (United States)

    Hou, Lizhu; Wang, Xu-Sheng; Hu, Bill X.; Shang, Jie; Wan, Li

    2016-09-01

    Quantification of groundwater recharge from precipitation in the huge sand dunes is an issue in accounting for regional water balance in the Badain Jaran Desert (BJD) where about 100 lakes exist between dunes. In this study, field observations were conducted on a sand dune near a large saline lake in the BJD to investigate soil water movement through a thick vadose zone for groundwater estimation. The hydraulic properties of the soils at the site were determined using in situ experiments and laboratory measurements. A HYDRUS-1D model was built up for simulating the coupling processes of vertical water-vapor movement and heat transport in the desert soil. The model was well calibrated and validated using the site measurements of the soil water and temperature at various depths. Then, the model was applied to simulate the vertical flow across a 3-m-depth soil during a 53-year period under variable climate conditions. The simulated flow rate at the depth is an approximate estimation of groundwater recharge from the precipitation in the desert. It was found that the annual groundwater recharge would be 11-30 mm during 1983-2012, while the annual precipitation varied from 68 to 172 mm in the same period. The recharge rates are significantly higher than those estimated from the previous studies using chemical information. The modeling results highlight the role of the local precipitation as an essential source of groundwater in the BJD.

  14. Numerical experiment to estimate the validity of negative ion diagnostic using photo-detachment combined with Langmuir probing

    Energy Technology Data Exchange (ETDEWEB)

    Oudini, N. [Laboratoire des plasmas de décharges, Centre de Développement des Technologies Avancées, Cité du 20 Aout BP 17 Baba Hassen, 16081 Algiers (Algeria); Sirse, N.; Ellingboe, A. R. [Plasma Research Laboratory, School of Physical Sciences and NCPST, Dublin City University, Dublin 9 (Ireland); Benallal, R. [Unité de Recherche Matériaux et Energies Renouvelables, BP 119, Université Abou Bekr Belkaïd, Tlemcen 13000 (Algeria); Taccogna, F. [Istituto di Metodologie Inorganiche e di Plasmi, CNR, via Amendola 122/D, 70126 Bari (Italy); Aanesland, A. [Laboratoire de Physique des Plasmas, (CNRS, Ecole Polytechnique, Sorbonne Universités, UPMC Univ Paris 06, Univ Paris-Sud), École Polytechnique, 91128 Palaiseau Cedex (France); Bendib, A. [Laboratoire d'Electronique Quantique, Faculté de Physique, USTHB, El Alia BP 32, Bab Ezzouar, 16111 Algiers (Algeria)

    2015-07-15

    This paper presents a critical assessment of the theory of the photo-detachment diagnostic method used to probe the negative ion density and electronegativity α = n₋/nₑ. In this method, a laser pulse is used to photo-detach all negative ions located within the electropositive channel (laser spot region). The negative ion density is estimated based on the assumption that the increase of the current collected by an electrostatic probe biased positively with respect to the plasma results only from the creation of photo-detached electrons; in parallel, the background electron density and temperature are considered constant during the diagnostic. The numerical experiments performed here, however, show that the background electron density and temperature increase due to the formation of an electrostatic potential barrier around the electropositive channel. The time scale of the potential barrier rise is about 2 ns, which is comparable to the time required to completely photo-detach the negative ions in the electropositive channel (∼3 ns). We find that neglecting the effect of the potential barrier on the background plasma leads to an erroneous determination of the negative ion density. Moreover, the background electron velocity distribution function within the electropositive channel is not Maxwellian, owing to the acceleration of these electrons through the electrostatic potential barrier. In this work, the validity of the photo-detachment diagnostic assumptions is questioned, and our results illustrate the weakness of these assumptions.

  15. A study for estimate of contamination source with numerical simulation method in the turbulent type clean room

    International Nuclear Information System (INIS)

    Han, Sang Mok; Hwang, Young Kyu; Kim, Dong Kwon

    2015-01-01

    Contamination in a clean room can become complicated as a result of complex manufacturing processes and indoor equipment. For this reason, detailed information about the concentration of pollutant particles in the clean room is needed to control the level of contamination economically and efficiently without disturbing the manufacturing process. The allocation method has been developed as one of the main approaches to controlling contamination in this situation. Using this method, weighting factors can be predicted based on the cleanliness measured at sampling spots and on values obtained from numerical analysis; here, the weighting factor indicates how much each contaminant source influences the concentration of pollutant in the clean room. In this paper, we propose a zoning method to accelerate the calculation when the allocation method is applied, and we apply it to the actual improvement of cleanliness in a turbulent-type clean room. As a result, we could quantitatively estimate the amount of contamination generated by the pollution sources, and experiments confirmed that it is possible to improve the cleanliness of clean rooms using these results.

  16. Quantitative precipitation estimation based on high-resolution numerical weather prediction and data assimilation with WRF – a performance test

    Directory of Open Access Journals (Sweden)

    Hans-Stefan Bauer

    2015-04-01

    Quantitative precipitation estimation and forecasting (QPE and QPF) are among the most challenging tasks in atmospheric sciences. In this work, QPE based on numerical modelling and data assimilation is investigated. Key components are the Weather Research and Forecasting (WRF) model in combination with its 3D variational assimilation scheme, applied on the convection-permitting scale with sophisticated model physics over central Europe. The system is operated in a 1-hour rapid update cycle and processes a large set of in situ observations, data from French radar systems, the European GPS network and satellite sensors. Additionally, a free forecast driven by the ECMWF operational analysis is included as a reference run representing current operational precipitation forecasting. The verification is done both qualitatively and quantitatively by comparisons of reflectivity, accumulated precipitation fields and derived verification scores for a complex synoptic situation that developed on 26 and 27 September 2012. The investigation shows that even the downscaling from ECMWF represents the synoptic situation reasonably well. However, significant improvements are seen in the results of the WRF QPE setup, especially when the French radar data are assimilated. The frontal structure is more defined and the timing of the frontal movement is improved compared with observations. Even mesoscale band-like precipitation structures on the rear side of the cold front are reproduced, as seen by radar. The improvement in performance is also confirmed by a quantitative comparison of the 24-hourly accumulated precipitation over Germany. The mean correlation of the model simulations with observations improved from 0.2 in the downscaling experiment and 0.29 in the assimilation experiment without radar data to 0.56 in the WRF QPE experiment including the assimilation of French radar data.

  17. Hunger and thirst numeric rating scales are not valid estimates for gastric content volumes: a prospective investigation in healthy children.

    Science.gov (United States)

    Buehrer, Sabin; Hanke, Ursula; Klaghofer, Richard; Fruehauf, Melanie; Weiss, Markus; Schmitz, Achim

    2014-03-01

    A rating scale for thirst and hunger was evaluated as a noninvasive, simple and commonly available tool to estimate preanesthetic gastric volume, a surrogate parameter for the risk of perioperative pulmonary aspiration, in healthy volunteer school age children. Numeric scales with scores from 0 to 10 combined with smileys to rate thirst and hunger were analyzed and compared with residual gastric volumes as measured by magnetic resonance imaging and fasting times in three settings: before and for 2 h after drinking clear fluid (group A, 7 ml/kg), before and for 4 vs 6 h after a light breakfast followed by clear fluid (7 ml/kg) after 2 vs 4 h (crossover, group B), and before and for 1 h after drinking clear fluid (crossover, group C, 7 vs 3 ml/kg). In 30 children aged 6.4-12.8 (median 9.8) years, participating on 1-5 (median two) study days, 496 sets of scores and gastric volumes were determined. Large inter- and intra-individual variations were seen at baseline and in response to fluid and food intake. Significant correlations were found between hunger and thirst ratings in all groups, with children generally being more hungry than thirsty. Correlations between scores and duration of fasting or gastric residual volumes were poor to moderate. Receiver operating characteristic (ROC) analysis revealed that thirst and hunger rating scales cannot predict gastric content. Hunger and thirst scores vary considerably inter- and intra-individually and cannot predict gastric volume, nor do they correlate with fasting times in school age children. © 2013 John Wiley & Sons Ltd.

  18. Estimating Hydraulic Resistance for Floodplain Mapping and Hydraulic Studies from High-Resolution Topography: Physical and Numerical Simulations

    Science.gov (United States)

    Minear, J. T.

    2017-12-01

    One of the primary unknown variables in hydraulic analyses is hydraulic resistance, values for which are typically set using broad assumptions or calibration, with very few methods available for independent and robust determination. A better understanding of hydraulic resistance would be highly useful for understanding floodplain processes, forecasting floods, advancing sediment transport and hydraulic coupling, and improving higher dimensional flood modeling (2D+), as well as correctly calculating flood discharges for floods that are not directly measured. The relationship of observed features to hydraulic resistance is difficult to objectively quantify in the field, partially because resistance occurs at a variety of scales (i.e. grain, unit and reach) and because individual resistance elements, such as trees, grass and sediment grains, are inherently difficult to measure. Similar to photogrammetric techniques, Terrestrial Laser Scanning (TLS, also known as Ground-based LiDAR) has shown great ability to rapidly collect high-resolution topographic datasets for geomorphic and hydrodynamic studies and could be used to objectively quantify the features that collectively create hydraulic resistance in the field. Because of its speed in data collection and remote sensing ability, TLS can be used both for pre-flood and post-flood studies that require relatively quick response in relatively dangerous settings. Using datasets collected from experimental flume runs and numerical simulations, as well as field studies of several rivers in California and post-flood rivers in Colorado, this study evaluates the use of high-resolution topography to estimate hydraulic resistance, particularly from grain-scale elements. Contrary to conventional practice, experimental laboratory runs with bed grain size held constant but with varying grain-scale protrusion create a nearly twenty-fold variation in measured hydraulic resistance. The ideal application of this high-resolution topography

  19. Numerical tools to estimate the flux of a gas across the air–water interface and assess the heterogeneity of its forcing functions

    Directory of Open Access Journals (Sweden)

    V. M. N. C. S. Vieira

    2013-03-01

    A numerical tool was developed for the estimation of gas fluxes across the air–water interface. The primary objective is to use it to estimate CO2 fluxes; nevertheless, application to other gases is easily accomplished by changing the values of the parameters related to the physical properties of the gases. User-friendly software was developed that allows a custom-made gas flux model with the preferred parameterizations to be built upon a standard kernel. These include single- or double-layer models; several numerical schemes for the effects of wind on the air-side and water-side transfer velocities; the effects of atmospheric stability, surface roughness and turbulence from current drag with the bottom; and the effects of water temperature, salinity, air temperature and pressure on solubility. An analysis was also developed which decomposes the difference between the fluxes in a reference situation and in alternative situations into its several forcing functions. This analysis relies on the Taylor expansion of the gas flux model, requiring the numerical estimation of partial derivatives by a multivariate version of the collocation polynomial. Both the flux model and the difference decomposition analysis were tested with data taken from surveys in the lagoon system of Ria Formosa, southern Portugal, in which the CO2 fluxes were estimated using the infrared gas analyzer (IRGA) and floating chamber method, whereas the CO2 concentrations were estimated using the IRGA and a degasification chamber. Observations and estimations show a remarkable fit.
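
    The bulk formula underlying such tools computes the flux as a transfer velocity times an air-water concentration difference. The sketch below does this for CO2 with a commonly used quadratic wind-speed parameterisation of the transfer velocity and a Schmidt-number correction; the coefficients and the solubility value are illustrative assumptions and not necessarily the parameterisations implemented in the published software.

```python
def schmidt_number_co2(temp_c):
    """Assumed polynomial fit for the Schmidt number of CO2 in seawater."""
    t = temp_c
    return 2116.8 - 136.25 * t + 4.7353 * t**2 - 0.092307 * t**3 + 0.0007555 * t**4

def transfer_velocity_cm_per_h(wind_ms, temp_c, a=0.251):
    """Quadratic wind-speed parameterisation k = a * U^2 * (Sc/660)^-0.5."""
    return a * wind_ms**2 * (schmidt_number_co2(temp_c) / 660.0) ** -0.5

def co2_flux_mmol_m2_d(wind_ms, temp_c, pco2_water_uatm, pco2_air_uatm, k0=0.035):
    """Flux = k * K0 * (pCO2_water - pCO2_air); K0 is an assumed CO2 solubility in
    mol/(L*atm). Positive flux means outgassing from the water to the atmosphere."""
    k_m_per_d = transfer_velocity_cm_per_h(wind_ms, temp_c) * 24.0 / 100.0
    delta_pco2_atm = (pco2_water_uatm - pco2_air_uatm) * 1e-6
    # convert K0 to mol/(m^3*atm), then mol to mmol
    return k_m_per_d * (k0 * 1000.0) * delta_pco2_atm * 1000.0

print(f"flux ~ {co2_flux_mmol_m2_d(6.0, 20.0, 550.0, 410.0):.2f} mmol CO2 m^-2 d^-1")
```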

  20. Numerical aspects of drift kinetic turbulence: Ill-posedness, regularization and a priori estimates of sub-grid-scale terms

    KAUST Repository

    Samtaney, Ravi

    2012-01-01

    We present a numerical method based on an Eulerian approach to solve the Vlasov-Poisson system for 4D drift kinetic turbulence. Our numerical approach uses a conservative formulation with high-order (fourth and higher) evaluation of the numerical fluxes coupled with a fourth-order accurate Poisson solver. The fluxes are computed using a low-dissipation high-order upwind differencing method or a tuned high-resolution finite difference method with no numerical dissipation. Numerical results are presented for the case of imposed ion temperature and density gradients. Different forms of controlled regularization to achieve a well-posed system are used to obtain convergent resolved simulations. The regularization of the equations is achieved by means of a simple collisional model, by inclusion of an ad-hoc hyperviscosity or artificial viscosity term or by implicit dissipation in upwind schemes. Comparisons between the various methods and regularizations are presented. We apply a filtering formalism to the Vlasov equation and derive sub-grid-scale (SGS) terms analogous to the Reynolds stress terms in hydrodynamic turbulence. We present a priori quantifications of these SGS terms in resolved simulations of drift-kinetic turbulence by applying a sharp filter. © 2012 IOP Publishing Ltd.

  1. Numerical aspects of drift kinetic turbulence: ill-posedness, regularization and a priori estimates of sub-grid-scale terms

    International Nuclear Information System (INIS)

    Samtaney, Ravi

    2012-01-01

    We present a numerical method based on an Eulerian approach to solve the Vlasov-Poisson system for 4D drift kinetic turbulence. Our numerical approach uses a conservative formulation with high-order (fourth and higher) evaluation of the numerical fluxes coupled with a fourth-order accurate Poisson solver. The fluxes are computed using a low-dissipation high-order upwind differencing method or a tuned high-resolution finite difference method with no numerical dissipation. Numerical results are presented for the case of imposed ion temperature and density gradients. Different forms of controlled regularization to achieve a well-posed system are used to obtain convergent resolved simulations. The regularization of the equations is achieved by means of a simple collisional model, by inclusion of an ad-hoc hyperviscosity or artificial viscosity term or by implicit dissipation in upwind schemes. Comparisons between the various methods and regularizations are presented. We apply a filtering formalism to the Vlasov equation and derive sub-grid-scale (SGS) terms analogous to the Reynolds stress terms in hydrodynamic turbulence. We present a priori quantifications of these SGS terms in resolved simulations of drift-kinetic turbulence by applying a sharp filter.

  2. Numerical estimation of phase transformations in solid state during Yb:YAG laser heating of steel sheets

    Energy Technology Data Exchange (ETDEWEB)

    Kubiak, Marcin, E-mail: kubiak@imipkm.pcz.pl; Piekarska, Wiesława; Domański, Tomasz; Saternus, Zbigniew [Institute of Mechanics and Machine Design Foundations, Częstochowa University of Technology, Dąbrowskiego 73, 42-200 Częstochowa (Poland); Stano, Sebastian [Welding Technologies Department, Welding Institute, Błogosławionego Czesława 16-18, 44-100 Gliwice (Poland)

    2015-03-10

    This work concerns the numerical modeling of heat transfer and solid-state phase transformations occurring during the Yb:YAG laser beam heating process. The temperature field is obtained by numerically solving the transient heat transfer equation with a convective term. The laser beam heat source model is developed using the Kriging interpolation method, taking experimental measurements of the Yb:YAG laser beam profile into account. Phase transformations are calculated on the basis of the Johnson-Mehl-Avrami (JMA) and Koistinen-Marburger (KM) kinetics models as well as continuous heating transformation (CHT) and continuous cooling transformation (CCT) diagrams for S355 steel. On the basis of the developed numerical algorithms, 3D computer simulations are performed in order to predict the temperature history and phase transformations in the Yb:YAG laser heating process.
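
    A minimal sketch of the two kinetic laws named in the abstract, the Johnson-Mehl-Avrami equation for diffusional transformations and the Koistinen-Marburger equation for the martensitic transformation, with illustrative coefficients; the CHT/CCT-derived parameters for S355 steel used in the paper are not reproduced.

```python
import math

def jma_fraction(time_s, k=0.05, n=2.0):
    """Johnson-Mehl-Avrami: transformed fraction X(t) = 1 - exp(-k * t^n)."""
    return 1.0 - math.exp(-k * time_s**n)

def km_martensite_fraction(temp_c, ms_c=400.0, alpha=0.011):
    """Koistinen-Marburger: X(T) = 1 - exp(-alpha * (Ms - T)) for T below Ms."""
    if temp_c >= ms_c:
        return 0.0
    return 1.0 - math.exp(-alpha * (ms_c - temp_c))

# Diffusional transformation progress at a few hold times (assumed k and n)
for t in (2.0, 5.0, 10.0):
    print(f"JMA: X({t:4.1f} s) = {jma_fraction(t):.2f}")

# Martensite fraction on rapid cooling below an assumed Ms of 400 deg C
for temp in (350.0, 250.0, 100.0):
    print(f"KM : X(T = {temp:5.1f} C) = {km_martensite_fraction(temp):.2f}")
```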

  3. Asymptotic preserving error estimates for numerical solutions of compressible Navier-Stokes equations in the low Mach number regime

    Czech Academy of Sciences Publication Activity Database

    Feireisl, Eduard; Medviďová-Lukáčová, M.; Nečasová, Šárka; Novotný, A.; She, Bangwei

    2018-01-01

    Roč. 16, č. 1 (2018), s. 150-183 ISSN 1540-3459 R&D Projects: GA ČR GA16-03230S EU Projects: European Commission(XE) 320078 - MATHEF Institutional support: RVO:67985840 Keywords : Navier-Stokes system * finite element numerical method * finite volume numerical method * asymptotic preserving schemes Subject RIV: BA - General Mathematics OBOR OECD: Pure mathematics Impact factor: 1.865, year: 2016 http://epubs.siam.org/doi/10.1137/16M1094233

  4. Error estimates for a numerical method for the compressible Navier-Stokes system on sufficiently smooth domains

    Czech Academy of Sciences Publication Activity Database

    Feireisl, Eduard; Hošek, Radim; Maltese, D.; Novotný, A.

    2017-01-01

    Roč. 51, č. 1 (2017), s. 279-319 ISSN 0764-583X EU Projects: European Commission(XE) 320078 - MATHEF Institutional support: RVO:67985840 Keywords : Navier-Stokes system * finite element numerical method * finite volume numerical method Subject RIV: BA - General Mathematics OBOR OECD: Pure mathematics Impact factor: 1.727, year: 2016 http://www.esaim-m2an.org/articles/m2an/abs/2017/01/m2an150157/m2an150157.html

  5. Combining weather radar nowcasts and numerical weather prediction models to estimate short-term quantitative precipitation and uncertainty

    DEFF Research Database (Denmark)

    Jensen, David Getreuer

    The topic of this Ph.D. thesis is short-term forecasting of precipitation for up to 6 hours, called nowcasts. The focus is on improving the precision of deterministic nowcasts and on assimilation of radar extrapolation model (REM) data into the Danish Meteorological Institute's (DMI) HIRLAM numerical weather...

  6. Estimation of thawing cryolithic area with numerical modeling in 3D geometry while exploiting underground small nuclear power plant

    Directory of Open Access Journals (Sweden)

    Melnikov N. N.

    2016-03-01

    Full Text Available The paper presents results of a 3D numerical calculation of a thermal problem related to assessing the thawing area when modules housing the reactor and steam-turbine facility of a small nuclear power plant are placed within permafrost rock. The paper discusses the influence of the thermal conductivity of the lining of large underground excavations and of the porosity of the cryolithic zone on the thawing depth and on the velocity of the thawing front in different spatial directions.

  7. Laparoscopy After Previous Laparotomy

    Directory of Open Access Journals (Sweden)

    Zulfo Godinjak

    2006-11-01

    Full Text Available Following abdominal surgery, extensive adhesions often occur and can cause difficulties during laparoscopic operations. However, previous laparotomy is not considered a contraindication for laparoscopy. The aim of this study is to show that insertion of a Veres needle in the region of the umbilicus is a safe method for creating a pneumoperitoneum for laparoscopic operations after previous laparotomy. In the last three years, we have performed 144 laparoscopic operations in patients who had previously undergone one or two laparotomies. Pathology of the digestive system, genital organs, Cesarean section or abdominal war injuries were the most common reasons for the previous laparotomy. During those operations, and during entry into the abdominal cavity, we did not experience any complications, while in 7 patients we converted to laparotomy following diagnostic laparoscopy. In all patients the Veres needle and the trocar were inserted in the umbilical region, that is, a closed laparoscopy technique was used. In none of the patients were adhesions found in the region of the umbilicus, and no abdominal organs were injured.

  8. Improving estimates of subsurface gas transport in unsaturated fractured media using experimental Xe diffusion data and numerical methods

    Science.gov (United States)

    Ortiz, J. P.; Ortega, A. D.; Harp, D. R.; Boukhalfa, H.; Stauffer, P. H.

    2017-12-01

    Gas transport in unsaturated fractured media plays an important role in a variety of applications, including detection of underground nuclear explosions, transport from volatile contaminant plumes, shallow CO2 leakage from carbon sequestration sites, and methane leaks from hydraulic fracturing operations. Gas breakthrough times are highly sensitive to uncertainties associated with a variety of hydrogeologic parameters, including rock type, fracture aperture, matrix permeability, porosity, and saturation. Furthermore, a couple of simplifying assumptions are typically employed when representing fracture flow and transport. Aqueous-phase transport is typically considered insignificant compared to gas-phase transport in unsaturated fracture flow regimes, and an assumption of instantaneous dissolution/volatilization of radionuclide gas is commonly used to reduce computational expense. We conduct this research using a twofold approach that combines laboratory gas experimentation and numerical modeling to verify and refine these simplifying assumptions in our current models of gas transport. Using a gas diffusion cell, we are able to measure air pressure transmission through fractured tuff core samples while also measuring Xe gas breakthrough with a mass spectrometer. We can thus create synthetic barometric fluctuations akin to those observed in field tests and measure the associated gas flow through the fracture and matrix pore space for varying degrees of fluid saturation. We then attempt to reproduce the experimental results using numerical models in the PFLOTRAN and FEHM codes to better understand the importance of different parameters and assumptions on gas transport. Our numerical approaches represent both single-phase gas flow with immobile water and full multi-phase transport, in order to test the validity of assuming immobile pore water. Our approaches also include the ability to simulate the reaction equilibrium kinetics of dissolution

  9. Numerical model for the thermal yield estimation of unglazed photovoltaic-thermal collectors using indoor solar simulator testing

    NARCIS (Netherlands)

    Katiyar, M.; van Balkom, M.W.; Rindt, C.C.M.; de Keizer, C.; Zondag, H.A.

    2017-01-01

    It is a common practice to test solar thermal and photovoltaic-thermal (PVT) collectors outdoors. This requires testing over several weeks to account for different weather conditions encountered throughout the year, which is costly and time consuming. The outcome of these tests is an estimation of

  10. Investigation of error estimation method of observational data and comparison method between numerical and observational results toward V and V of seismic simulation

    International Nuclear Information System (INIS)

    Suzuki, Yoshio; Kawakami, Yoshiaki; Nakajima, Norihiro

    2017-01-01

    The method to estimate errors included in observational data and the method to compare numerical results with observational results are investigated toward the verification and validation (V and V) of a seismic simulation. For the error estimation method, 144 publications from the past 5 years (2010 to 2014) in the structural engineering and earthquake engineering fields, where descriptions of acceleration data are frequent, are surveyed. As a result, it is found that processes to remove components regarded as errors from observational data are used in about 30% of those publications. Errors are caused by the resolution, the linearity, the temperature coefficient for sensitivity, the temperature coefficient for zero shift, the transverse sensitivity, the seismometer properties, aliasing, and so on. Those processes can be exploited to estimate errors individually. For the method to compare numerical results with observational results, public materials of the ASME V and V Symposium 2012-2015, their references, and the above 144 publications are surveyed. As a result, it is found that six methods have mainly been proposed in existing research. Evaluating those methods against nine criteria, their advantages and disadvantages are summarized. No method is yet well established, so it is necessary either to apply the existing methods while compensating for their disadvantages or to search for a novel method. (author)

  11. Potential impact of thermal effects during ultrasonic neurostimulation: retrospective numerical estimation of temperature elevation in seven rodent setups

    Science.gov (United States)

    Constans, Charlotte; Mateo, Philippe; Tanter, Mickaël; Aubry, Jean-François

    2018-01-01

    In the past decade, a small but growing number of groups worldwide have reported successful low-intensity focused ultrasound neurostimulation trials on rodents. Its effects range from eliciting movements to reducing anesthesia time or the duration of drug-induced seizures. The mechanisms underlying ultrasonic neuromodulation are still not fully understood. Given the low intensities used in most of the studies, a mechanical effect is more likely to be responsible for the neuromodulation effect, but a clear description of the thermal and mechanical effects is necessary to optimize clinical applications. Based on the settings of five studies, we calculated the temperature rise and thermal doses in order to evaluate their implication in the neuromodulation phenomenon. Our retrospective analysis shows temperature rises ranging from 0.002 °C to 0.8 °C in the brain for all setups except one, for which the temperature increase is estimated to be as high as 7 °C. We estimate that in the latter case the temperature rise cannot be neglected as a possible cause of neuromodulation. Simulation results were supported by temperature measurements on a mouse with two different sets of parameters. Although the calculated temperature is compatible with the absence of visible thermal lesions on the skin, it is high enough to affect brain circuits. Our study highlights the usefulness of performing thermal simulations prior to experiments in order to fully take into account not only the impact of the peak intensity but also the pulse duration and pulse repetition.
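
    A minimal sketch of a thermal dose computation, assuming the standard Sapareto-Dewey cumulative-equivalent-minutes-at-43 °C (CEM43) formulation; the temperature history and sampling interval below are illustrative and may differ from the exact dose metric used in the study.

      import numpy as np

      def cem43(temps_c, dt_s):
          """Cumulative equivalent minutes at 43 deg C (Sapareto-Dewey); temps in deg C, dt in s."""
          temps_c = np.asarray(temps_c, dtype=float)
          R = np.where(temps_c >= 43.0, 0.5, 0.25)
          return np.sum(R ** (43.0 - temps_c) * dt_s / 60.0)

      # Illustrative temperature history (deg C) sampled every 0.1 s: a sustained 0.8 deg C rise.
      history = 37.0 + 0.8 * np.ones(3000)
      print("CEM43 =", cem43(history, dt_s=0.1), "min")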

  12. Communicating global cardiovascular risk: are icon arrays better than numerical estimates in improving understanding, recall and perception of risk?

    Science.gov (United States)

    Ruiz, Jorge G; Andrade, Allen D; Garcia-Retamero, Rocio; Anam, Ramanakumar; Rodriguez, Remberto; Sharit, Joseph

    2013-12-01

    Experts recommend that adults have their global cardiovascular risk assessed. We investigated whether icon arrays increase understanding, recall, and perception of cardiovascular risk (CVR), as well as behavioral intent, compared with numerical information. Male outpatient veterans at intermediate to high cardiovascular risk participated in a randomized controlled trial of a computer tutorial presenting individualized risk. The risk message was presented in 3 formats: percentages, frequencies, and frequencies with icon arrays. We assessed understanding immediately (T1) and recall at 20 min (T2) and 2 weeks (T3) after the intervention. We assessed perceptions of importance/seriousness, intent to adhere, and self-efficacy at T1. Self-reported adherence was assessed at T3. One hundred and twenty male veterans participated. Age, education, race, health literacy and numeracy were comparable at baseline. There were no differences in understanding at T1 [p = .31] or recall at T3 [p = .10]. Accuracy was lower with frequencies plus icon arrays than with percentages or frequencies at T2 [p ≤ .001]. There were no differences in perception of seriousness and importance of heart disease, behavioral intent, self-efficacy, actual adherence or satisfaction. Icon arrays may impair short-term recall of CVR and will not necessarily result in better understanding and recall of medical risk in all patients. Published by Elsevier Ireland Ltd.

  13. A numerical method to estimate the parameters of the CEV model implied by American option prices: Evidence from NYSE

    International Nuclear Information System (INIS)

    Ballestra, Luca Vincenzo; Cecere, Liliana

    2016-01-01

    Highlights: • We develop a method to compute the parameters of the CEV model implied by American options. • This is the first procedure for calibrating the CEV model to American option prices. • The proposed approach is extensively tested on the NYSE market. • The novel method turns out to be very efficient in computing the CEV model parameters. • The CEV model provides only a marginal improvement over the lognormal model. - Abstract: We develop a highly efficient procedure to forecast the parameters of the constant elasticity of variance (CEV) model implied by American options. In particular, first of all, the American option prices predicted by the CEV model are calculated using an accurate and fast finite difference scheme. Then, the parameters of the CEV model are obtained by minimizing the distance between theoretical and empirical option prices, which yields an optimization problem that is solved using an ad-hoc numerical procedure. The proposed approach, which turns out to be very efficient from the computational standpoint, is used to test the goodness-of-fit of the CEV model in predicting the prices of American options traded on the NYSE. The results obtained reveal that the CEV model does not provide a very good agreement with real market data and yields only a marginal improvement over the more popular Black–Scholes model.

  14. Higher Order Numerical Methods and Use of Estimation Techniques to Improve Modeling of Two-Phase Flow in Pipelines and Wells

    Energy Technology Data Exchange (ETDEWEB)

    Lorentzen, Rolf Johan

    2002-04-01

    The main objective of this thesis is to develop methods which can be used to improve predictions of two-phase flow (liquid and gas) in pipelines and wells. More reliable predictions are accomplished by improvements of numerical methods and by using measured data to tune the mathematical model which describes the two-phase flow. We present a way to extend simple numerical methods to second-order spatial accuracy. These methods are implemented, tested and compared with a second-order Godunov-type scheme. In addition, a new (and faster) version of the Godunov-type scheme utilizing primitive (observable) variables is presented. We introduce a least squares method which is used to tune parameters embedded in the two-phase flow model. This method is tested using synthetically generated measurements. We also present an ensemble Kalman filter which is used to tune physical state variables and model parameters. This technique is tested on synthetically generated measurements, but also on several sets of full-scale experimental measurements. The thesis is divided into an introductory part and a part consisting of four papers. The introduction serves both as a summary of the material treated in the papers and as supplementary background material. It contains five sections, where the first gives an overview of the main topics which are addressed in the thesis. Section 2 contains a description and discussion of mathematical models for two-phase flow in pipelines. Section 3 deals with the numerical methods which are used to solve the equations arising from the two-phase flow model. The numerical scheme described in Section 3.5 is not included in the papers. This section includes results in addition to an outline of the numerical approach. Section 4 gives an introduction to estimation theory, and leads towards application of the two-phase flow model. The material in Sections 4.6 and 4.7 is not discussed in the papers, but is included in the thesis as it gives an important validation
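
    A minimal sketch of the ensemble Kalman filter idea used for joint state-parameter tuning, applied here to a toy scalar decay model rather than the two-phase flow model of the thesis; the ensemble size, noise levels and true parameter are illustrative assumptions.

      import numpy as np

      rng = np.random.default_rng(0)

      def model_step(x, theta, dt=0.1):
          """Toy dynamics standing in for the flow model: dx/dt = -theta * x."""
          return x * np.exp(-theta * dt)

      n_ens, n_steps, obs_std = 50, 40, 0.05
      true_theta, x_true = 0.8, 1.0

      # Augmented ensemble: each member carries a state x and a parameter theta.
      ens = np.column_stack([np.full(n_ens, 1.0) + 0.1 * rng.standard_normal(n_ens),
                             0.3 + 0.3 * rng.standard_normal(n_ens)])

      for _ in range(n_steps):
          x_true = model_step(x_true, true_theta)
          obs = x_true + obs_std * rng.standard_normal()
          # Forecast step for every member.
          ens[:, 0] = model_step(ens[:, 0], ens[:, 1])
          # Kalman gain from ensemble covariances (the observation operator picks the state).
          A = ens - ens.mean(axis=0)
          P = A.T @ A / (n_ens - 1)
          K = P[:, 0] / (P[0, 0] + obs_std**2)
          # Analysis step with perturbed observations.
          d = obs + obs_std * rng.standard_normal(n_ens) - ens[:, 0]
          ens += np.outer(d, K)

      print("estimated theta:", ens[:, 1].mean())   # should approach the assumed true value 0.8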

  15. Numerical simulation of temperature distribution using finite difference equations and estimation of the grain size during friction stir processing

    International Nuclear Information System (INIS)

    Arora, H.S.; Singh, H.; Dhindaw, B.K.

    2012-01-01

    Highlights: ► Magnesium alloy AE42 was friction stir processed under different cooling conditions. ► A heat flow model was developed using finite difference heat equations. ► A generalized MATLAB code was developed for solving the heat flow model. ► A regression equation for estimation of the grain size was developed. - Abstract: The present investigation is aimed at developing a heat flow model to simulate the temperature history during friction stir processing (FSP). A new approach of developing an implicit form of the finite difference heat equations, solved using a MATLAB code, was used. A magnesium-based alloy AE42 was friction stir processed (FSPed) at different FSP parameters and cooling conditions. The temperature history in the nugget zone was continuously recorded during FSP using a data acquisition system and K-type thermocouples. The developed code was validated at different FSP parameters and cooling conditions during the FSP experimentation. The temperature history at different locations in the nugget zone at different instants of time was further utilized to estimate the grain growth rate and the final average grain size of the FSPed specimen. A regression equation relating the final grain size, the maximum temperature during FSP and the cooling rate was developed. The metallurgical characterization was done using optical microscopy, SEM, and FIB-SIM analysis. The simulated temperature profiles and final average grain size were found to be in good agreement with the experimental results. The presence of fine precipitate particles generated in situ in the investigated magnesium alloy also contributed to the evolution of a fine grain structure through Zener pinning at the grain boundaries.
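
    A minimal sketch of an implicit (backward Euler) finite-difference solution of 1D transient heat conduction, written in Python rather than the MATLAB code of the study; the geometry, material data and boundary values are illustrative assumptions, not the AE42 experimental conditions.

      import numpy as np

      # Minimal 1D backward-Euler (implicit) finite-difference heat conduction sketch.
      L, nx, nt = 0.05, 51, 200          # length (m), nodes, time steps
      alpha, dt = 5e-5, 0.05             # thermal diffusivity (m^2/s), time step (s)
      dx = L / (nx - 1)
      r = alpha * dt / dx**2

      T = np.full(nx, 25.0)              # initial temperature field (deg C)
      T_left = 450.0                     # heated boundary (stirred-zone side), assumed value

      # Assemble the constant tridiagonal system (I + r*A) T_new = T_old.
      A = np.zeros((nx, nx))
      for i in range(1, nx - 1):
          A[i, i - 1], A[i, i], A[i, i + 1] = -r, 1 + 2 * r, -r
      A[0, 0] = A[-1, -1] = 1.0          # Dirichlet rows

      for _ in range(nt):
          b = T.copy()
          b[0], b[-1] = T_left, 25.0     # boundary conditions
          T = np.linalg.solve(A, b)

      print("temperature at mid-length:", T[nx // 2])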

  16. Power1D: a Python toolbox for numerical power estimates in experiments involving one-dimensional continua

    Directory of Open Access Journals (Sweden)

    Todd C. Pataky

    2017-07-01

    Full Text Available The unit of experimental measurement in a variety of scientific applications is the one-dimensional (1D) continuum: a dependent variable whose value is measured repeatedly, often at regular intervals, in time or space. A variety of software packages exist for computing continuum-level descriptive statistics and also for conducting continuum-level hypothesis testing, but very few offer power computing capabilities, where 'power' is the probability that an experiment will detect a true continuum signal given experimental noise. Moreover, no software package yet exists for arbitrary continuum-level signal/noise modeling. This paper describes a package called power1d which implements (a) two analytical 1D power solutions based on random field theory (RFT) and (b) a high-level framework for computational power analysis using arbitrary continuum-level signal/noise modeling. First, power1d's two RFT-based analytical solutions are numerically validated using its random continuum generators. Second, arbitrary signal/noise modeling is demonstrated to show how power1d can be used for flexible modeling well beyond the assumptions of the RFT-based analytical solutions. Its computational demands are non-excessive, requiring on the order of only 30 s to execute on standard desktop computers, with approximate solutions available much more rapidly. Its broad signal/noise modeling capabilities along with relatively rapid computations imply that power1d may be a useful tool for guiding experimentation involving multiple measurements of similar 1D continua, and in particular for ensuring that an adequate number of measurements is made to detect assumed continuum signals.
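
    The computational route to power described above can be illustrated with a generic Monte Carlo sketch; note that this does not use power1d's actual API, and the smoothness, sample size, signal shape and max-statistic test procedure below are assumptions made purely for illustration.

      import numpy as np
      from scipy.ndimage import gaussian_filter1d

      rng = np.random.default_rng(1)
      Q, J, fwhm, alpha = 101, 10, 20.0, 0.05      # nodes, sample size, smoothness, type I rate

      def smooth_noise(n, Q, fwhm):
          sd = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
          z = gaussian_filter1d(rng.standard_normal((n, Q)), sd, axis=1)
          return z / z.std(axis=1, keepdims=True)   # unit-variance smooth Gaussian continua

      def max_t(sample):
          """Maximum absolute one-sample t value across the continuum."""
          t = sample.mean(axis=0) / (sample.std(axis=0, ddof=1) / np.sqrt(sample.shape[0]))
          return np.abs(t).max()

      signal = 0.8 * np.exp(-0.5 * ((np.arange(Q) - 50) / 8.0) ** 2)   # assumed Gaussian pulse

      null_stats = [max_t(smooth_noise(J, Q, fwhm)) for _ in range(2000)]
      crit = np.quantile(null_stats, 1.0 - alpha)                      # simulated critical threshold

      hits = [max_t(signal + smooth_noise(J, Q, fwhm)) > crit for _ in range(2000)]
      print("estimated power:", np.mean(hits))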

  17. Duration and numerical estimation in right brain-damaged patients with and without neglect: Lack of support for a mental time line.

    Science.gov (United States)

    Masson, Nicolas; Pesenti, Mauro; Dormal, Valérie

    2016-08-01

    Previous studies have shown that left neglect patients are impaired when they have to orient their attention leftward relative to a standard in numerical comparison tasks. This finding has been accounted for by the idea that numerical magnitudes are represented along a spatial continuum oriented from left to right with small magnitudes on the left and large magnitudes on the right. Similarly, it has been proposed that duration could be represented along a mental time line that shares the properties of the number continuum. By comparing directly duration and numerosity processing, this study investigates whether or not the performance of neglect patients supports the hypothesis of a mental time line. Twenty-two right brain-damaged patients (11 with and 11 without left neglect), as well as 11 age-matched healthy controls, had to judge whether a single dot presented visually lasted shorter or longer than 500 ms and whether a sequence of flashed dots was smaller or larger than 5. Digit spans were also assessed to measure verbal working memory capacities. In duration comparison, no spatial-duration bias was found in neglect patients. Moreover, a significant correlation between verbal working memory and duration performance was observed in right brain-damaged patients, irrespective of the presence or absence of neglect. In numerical comparison, only neglect patients showed an enhanced distance effect for numerical magnitude smaller than the standard. These results do not support the hypothesis of the existence of a mental continuum oriented from left to right for duration. We discuss an alternative account to explain the duration impairment observed in right brain-damaged patients. © 2015 The British Psychological Society.

  18. Efficacy of a numerical value of a fixed-effect estimator in stochastic frontier analysis as an indicator of hospital production structure

    Directory of Open Access Journals (Sweden)

    Kawaguchi Hiroyuki

    2012-09-01

    Full Text Available Abstract Background The casemix-based payment system has been adopted in many countries, although it often needs complementary adjustment taking account of each hospital's unique production structure, such as teaching and research duties and non-profit motives. It has been challenging to numerically evaluate the impact of such structural heterogeneity on production separately from production inefficiency. The current study adopted stochastic frontier analysis and proposed a method to assess unique components of hospital production structures using a fixed-effect variable. Methods There were two stages of analysis in this study. In the first stage, we estimated the efficiency score from the hospital production function using a true fixed-effect model (TFEM) in stochastic frontier analysis. The use of a TFEM allowed us to differentiate the unobserved heterogeneity of individual hospitals as hospital-specific fixed effects. In the second stage, we regressed the obtained fixed-effect variable on structural components of the hospitals to test whether the variable was explicitly related to the characteristics and local disadvantages of the hospitals. Results In the first analysis, the estimated efficiency score was approximately 0.6. The mean value of the fixed-effect estimator was 0.784, the standard deviation was 0.137, and the range was 0.437 to 1.212. The second-stage regression confirmed that the value of the fixed effect was significantly correlated with advanced technology and local conditions of the sample hospitals. Conclusion The obtained fixed-effect estimator may reflect hospitals' unique structures of production, considering production inefficiency. The values of fixed-effect estimators can be used as evaluation tools to improve fairness in the reimbursement system for various functions of hospitals based on casemix classification.

  19. Coastal Amplification Laws for the French Tsunami Warning Center: Numerical Modeling and Fast Estimate of Tsunami Wave Heights Along the French Riviera

    Science.gov (United States)

    Gailler, A.; Hébert, H.; Schindelé, F.; Reymond, D.

    2018-04-01

    Tsunami modeling tools in the French Tsunami Warning Center's operational context rapidly provide warning levels expressed as a dimensionless variable at basin scale. A new forecast method based on coastal amplification laws has been tested to estimate the tsunami onshore height, with a focus on the French Riviera test site (Nice area). This fast prediction tool provides a coastal tsunami height distribution, calculated from the numerical simulation of the deep-ocean tsunami amplitude and a transfer function derived from Green's law. Because of a lack of tsunami observations in the western Mediterranean basin, the coastal amplification parameters are defined here against high-resolution nested-grid simulations. Preliminary results for the Nice test site, based on nine historical and synthetic sources, show good agreement with the time-consuming high-resolution modeling: the linear approximation is in general obtained within 1 min and provides estimates within a factor of two in amplitude, although resonance effects in harbors and bays are not reproduced. In Nice harbor especially, the variation in tsunami amplitude cannot really be assessed because of the magnitude range and maximum energy azimuths of the possible events to be accounted for. Nevertheless, this method is well suited for a fast first estimate of the coastal tsunami threat.
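
    A minimal sketch of the kind of coastal amplification relation mentioned above, assuming the classical Green's law (wave height scaling with water depth to the -1/4 power) with a generic site coefficient; the depths, offshore amplitude and coefficient are illustrative, not the calibrated parameters of the study.

      import numpy as np

      def greens_law_height(h_deep, depth_deep, depth_coast, coeff=1.0):
          """Green's law shoaling estimate of coastal wave height.
          `coeff` stands in for a site-specific amplification factor (assumed, not from the record)."""
          return coeff * h_deep * (depth_deep / depth_coast) ** 0.25

      # Illustrative numbers: 0.2 m offshore amplitude at 2500 m depth, forecast at 10 m depth.
      print(greens_law_height(0.2, 2500.0, 10.0))   # roughly 0.8 m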

  20. The use of a numerical mass-balance model to estimate rates of soil redistribution on uncultivated land from 137Cs measurements

    International Nuclear Information System (INIS)

    Owens, P.N.; Walling, D.E.

    1988-01-01

    A numerical mass-balance model is developed which can be used to estimate rates of soil redistribution on uncultivated land from measurements of bomb-derived 137Cs inventories. The model uses a budgeting approach, which takes account of temporal variations in atmospheric fallout of 137Cs, radioactive decay, and net gains or losses of 137Cs due to erosion and deposition processes, combined with parameters which describe internal 137Cs redistribution processes, to estimate the 137Cs content of topsoil and the 137Cs inventory at specific points, from the start of 137Cs fallout in the 1950s to the present day. The model is also able to account for potential differences in particle size composition and organic matter content between mobilised soil particles and the original soil, and the effect that these may have on 137Cs concentrations and inventories. By running the model for a range of soil erosion and deposition rates, a calibration relationship can be constructed which relates the 137Cs inventory at a sampling point to the average net soil loss or gain at that location. In addition to the magnitude and temporal distribution of the 137Cs atmospheric fallout flux, the soil redistribution rates estimated by the model are sensitive to parameters which describe the relative texture and organic matter content of the eroded or deposited material, and the ability of the soil to retain 137Cs in the upper part of the soil profile. (Copyright (c) 1988 Elsevier Science B.V., Amsterdam. All rights reserved.)
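
    A minimal sketch of the budgeting idea described above, assuming a strongly simplified annual balance (fallout input, a constant eroded fraction, radioactive decay) and ignoring the particle-size, organic-matter and vertical-redistribution corrections of the full model; the fallout record and erosion rate are illustrative.

      import numpy as np

      LAMBDA = np.log(2.0) / 30.17          # 137Cs decay constant (1/yr)

      def cs137_inventory(fallout_bq_m2, erosion_fraction):
          """Simplified annual 137Cs budget: add fallout, remove an eroded fraction, apply decay."""
          inv = 0.0
          for F in fallout_bq_m2:            # one entry per year since the 1950s
              inv = (inv + F) * (1.0 - erosion_fraction) * np.exp(-LAMBDA)
          return inv

      # Illustrative fallout record (Bq/m2 per year), peaking in the early 1960s:
      years = np.arange(1954, 2024)
      fallout = 160.0 * np.exp(-0.5 * ((years - 1963) / 3.0) ** 2)

      reference = cs137_inventory(fallout, erosion_fraction=0.0)
      eroded = cs137_inventory(fallout, erosion_fraction=0.01)
      print("inventory reduction from erosion: %.1f%%" % (100 * (1 - eroded / reference)))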

  1. Real-Time Estimation of Volcanic Ash/SO2 Cloud Height from Combined UV/IR Satellite Observations and Numerical Modeling

    Science.gov (United States)

    Vicente, Gilberto A.

    An efficient iterative method has been developed to estimate the vertical profile of SO2 and ash clouds from volcanic eruptions by comparing near-real-time satellite observations with numerical modeling outputs. The approach uses UV-based SO2 concentration and IR-based ash cloud images, the volcanic ash transport model PUFF, and wind speed, height and direction information to find the best match between the simulated and observed displays. The method is computationally fast and is being implemented for operational use at the NOAA Volcanic Ash Advisory Centers (VAACs) in Washington, DC, USA, to support the Federal Aviation Administration (FAA) effort to detect, track and measure volcanic ash cloud heights for air traffic safety and management. The presentation will show the methodology, results, statistical analysis, and the SO2 and Aerosol Index input products derived from the Ozone Monitoring Instrument (OMI) onboard the NASA EOS/Aura research satellite and from the Global Ozone Monitoring Experiment-2 (GOME-2) instrument on MetOp-A. The volcanic ash products are derived from the AVHRR instruments on NOAA POES-16, 17, 18 and 19 as well as MetOp-A. The presentation will also show how a VAAC volcanic ash analyst interacts with the system, providing initial-condition inputs such as the location and time of the volcanic eruption, followed by automatic real-time tracking of all available satellite data, subsequent activation of the iterative approach, and the data/product delivery process in numerical and graphical format for operational applications.

  2. A novel inverse numerical modeling method for the estimation of water and salt mass transfer coefficients during ultrasonic assisted-osmotic dehydration of cucumber cubes.

    Science.gov (United States)

    Kiani, Hosein; Karimi, Farzaneh; Labbafi, Mohsen; Fathi, Morteza

    2018-06-01

    The objective of this paper was to study the moisture and salt diffusivity during ultrasonic-assisted osmotic dehydration of cucumbers. Experimental measurements of moisture and salt concentration versus time were carried out, and an inverse numerical method was performed by coupling a CFD package (OpenFOAM) with a parameter estimation software (DAKOTA) to determine mass transfer coefficients. A good agreement between experimental and numerical results was observed. Mass transfer coefficients ranged from 3.5 × 10⁻⁹ to 7 × 10⁻⁹ m/s for water and from 4.8 × 10⁻⁹ to 7.4 × 10⁻⁹ m/s for salt at the different conditions (diffusion coefficients of around 3.5 × 10⁻¹² to 11.5 × 10⁻¹² m²/s for water and 5 × 10⁻¹² to 12 × 10⁻¹² m²/s for salt). Ultrasound irradiation could increase the mass transfer coefficient. The values obtained by this method were closer to the actual data. The inverse simulation method can be an accurate technique to study mass transfer phenomena during food processing. Copyright © 2018 Elsevier B.V. All rights reserved.

  3. Contribution to the prediction of cavitation erosion from numerical simulations: proposition of a two scales model to estimate the charge imposed by the fluid

    International Nuclear Information System (INIS)

    Krumenacker, Laurent

    2015-01-01

    During the life cycle of a hydraulic installation, the occurrence of cavitation can cause significant damage to material surfaces. Quantifying the cavitation intensity in different geometries can be useful for obtaining better designs of new installations, but also for improving the operation and optimizing the maintenance of existing equipment. The development of universal similarity laws from experiments is difficult because of the large number of parameters governing cavitating flows. With the increase of computational performance, numerical simulations offer the opportunity to study this phenomenon in various geometries. The main difficulty of this approach is the difference in scale between the U-RANS simulations used to calculate the cavitating flow and the bubble-collapse mechanisms held responsible for the damage to the solid. The method proposed in this thesis is based on a post-treatment of the U-RANS simulations to characterize a distribution of bubbles and to simulate their behavior at smaller spatial and temporal scales. Our first objective is to write explicitly a system of equations corresponding to the phenomena occurring locally in the two-phase flow. This work leads to the development of mixture variables taking into account the presence of non-condensable gases in the fluid. Assumptions are made so that this system, after Reynolds averaging, is equivalent to the homogeneous-approach system implemented in the unsteady cavitating flow solvers previously developed in the laboratory. The characterization of bubbles performed by this post-treatment takes into account both the surface tension and the presence of non-condensable gases. The development of a solver for the simulation of the dynamics of a bubble cloud has been started. It aims to take into account both the interactions between bubbles and non-spherical deformations with a potential method. First results of these simulations are presented and small

  4. Estimation of effective degradability in the rumen through numerical methods

    Directory of Open Access Journals (Sweden)

    Héctor Jairo Correa Cardona

    2008-12-01

    kinetic properties of ruminal degradation and passage of the potentially degradable fraction (b), which requires the use of numerical methods to solve for the time t in the expression 1 = e^(-kd*t) + e^(-kp*t); substituting this t into the expression b*e^(-kd*t) then allows EDb to be calculated. The estimation of EDb by this method yields data that are reliable and consistent with the mathematical basis of the kinetics of ruminal degradation and of the passage of the nutritional fractions.
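
    A minimal sketch of the numerical step described in the record: the time t satisfying 1 = e^(-kd*t) + e^(-kp*t) is found with a root finder and then substituted into b*e^(-kd*t); the fraction b and the rate constants kd and kp below are illustrative values only.

      import numpy as np
      from scipy.optimize import brentq

      def effective_degradability_b(b, kd, kp):
          """Solve 1 = exp(-kd*t) + exp(-kp*t) for t, then return (t, EDb = b*exp(-kd*t)),
          following the expressions quoted in the record."""
          f = lambda t: np.exp(-kd * t) + np.exp(-kp * t) - 1.0
          t_star = brentq(f, 1e-9, 1e3)     # f is ~1 near t=0 and tends to -1, so a root exists
          return t_star, b * np.exp(-kd * t_star)

      # Illustrative kinetic values (fraction b, degradation rate kd, passage rate kp, per hour):
      t_star, EDb = effective_degradability_b(b=0.55, kd=0.06, kp=0.04)
      print("t* = %.2f h, EDb = %.3f" % (t_star, EDb))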

  5. Numerical relativity

    International Nuclear Information System (INIS)

    Piran, T.

    1982-01-01

    There are many recent developments in numerical relativity, but there remain important unsolved theoretical and practical problems. The author reviews existing numerical approaches to the solution of the exact Einstein equations. A framework for classification and comparison of different numerical schemes is presented. Recent numerical codes are compared using this framework. The discussion focuses on new developments and on currently open questions, excluding a review of numerical techniques. (Auth.)

  6. Numerical analysis

    CERN Document Server

    Khabaza, I M

    1960-01-01

    Numerical Analysis is an elementary introduction to numerical analysis, its applications, limitations, and pitfalls. Methods suitable for digital computers are emphasized, but some desk computations are also described. Topics covered range from the use of digital computers in numerical work to errors in computations using desk machines, finite difference methods, and the numerical solution of ordinary differential equations. This book comprises eight chapters and begins with an overview of the importance of digital computers in numerical analysis, followed by a discussion on errors in comput

  7. Numerical relativity

    CERN Document Server

    Shibata, Masaru

    2016-01-01

    This book is composed of two parts. The first part describes the basics of numerical relativity, that is, the formulations and methods for solving Einstein's equations and the general relativistic matter field equations. This part will be helpful for beginners in numerical relativity who would like to understand its content and background. The second part focuses on applications of numerical relativity. A wide variety of scientific numerical results are introduced, focusing in particular on mergers of binary neutron stars and black holes.

  8. Interpretation of Flow Logs from Nevada Test Site Boreholes to Estimate Hydraulic conductivity Using Numerical Simulations Constrained by Single-Well Aquifer Tests

    Energy Technology Data Exchange (ETDEWEB)

    Garcia, C. Amanda; Halford, Keith J.; Laczniak, Randell J.

    2010-02-12

    Hydraulic conductivities of volcanic and carbonate lithologic units at the Nevada Test Site were estimated from flow logs and aquifer-test data. Borehole flow and drawdown were integrated and interpreted using a radial, axisymmetric flow model, AnalyzeHOLE. This integrated approach is used because complex well completions and heterogeneous aquifers and confining units produce vertical flow in the annular space and aquifers adjacent to the wellbore. AnalyzeHOLE simulates vertical flow, in addition to horizontal flow, which accounts for converging flow toward screen ends and diverging flow toward transmissive intervals. Simulated aquifers and confining units uniformly are subdivided by depth into intervals in which the hydraulic conductivity is estimated with the Parameter ESTimation (PEST) software. Between 50 and 150 hydraulic-conductivity parameters were estimated by minimizing weighted differences between simulated and measured flow and drawdown. Transmissivity estimates from single-well or multiple-well aquifer tests were used to constrain estimates of hydraulic conductivity. The distribution of hydraulic conductivity within each lithology had a minimum variance because estimates were constrained with Tikhonov regularization. AnalyzeHOLE simulated hydraulic-conductivity estimates for lithologic units across screened and cased intervals are as much as 100 times less than those estimated using proportional flow-log analyses applied across screened intervals only. Smaller estimates of hydraulic conductivity for individual lithologic units are simulated because sections of the unit behind cased intervals of the wellbore are not assumed to be impermeable, and therefore, can contribute flow to the wellbore. Simulated hydraulic-conductivity estimates vary by more than three orders of magnitude across a lithologic unit, indicating a high degree of heterogeneity in volcanic and carbonate-rock units. The higher water transmitting potential of carbonate-rock units relative

  9. Interpretation of Flow Logs from Nevada Test Site Boreholes to Estimate Hydraulic Conductivity Using Numerical Simulations Constrained by Single-Well Aquifer Tests

    Science.gov (United States)

    Garcia, C. Amanda; Halford, Keith J.; Laczniak, Randell J.

    2010-01-01

    Hydraulic conductivities of volcanic and carbonate lithologic units at the Nevada Test Site were estimated from flow logs and aquifer-test data. Borehole flow and drawdown were integrated and interpreted using a radial, axisymmetric flow model, AnalyzeHOLE. This integrated approach is used because complex well completions and heterogeneous aquifers and confining units produce vertical flow in the annular space and aquifers adjacent to the wellbore. AnalyzeHOLE simulates vertical flow, in addition to horizontal flow, which accounts for converging flow toward screen ends and diverging flow toward transmissive intervals. Simulated aquifers and confining units uniformly are subdivided by depth into intervals in which the hydraulic conductivity is estimated with the Parameter ESTimation (PEST) software. Between 50 and 150 hydraulic-conductivity parameters were estimated by minimizing weighted differences between simulated and measured flow and drawdown. Transmissivity estimates from single-well or multiple-well aquifer tests were used to constrain estimates of hydraulic conductivity. The distribution of hydraulic conductivity within each lithology had a minimum variance because estimates were constrained with Tikhonov regularization. AnalyzeHOLE simulated hydraulic-conductivity estimates for lithologic units across screened and cased intervals are as much as 100 times less than those estimated using proportional flow-log analyses applied across screened intervals only. Smaller estimates of hydraulic conductivity for individual lithologic units are simulated because sections of the unit behind cased intervals of the wellbore are not assumed to be impermeable, and therefore, can contribute flow to the wellbore. Simulated hydraulic-conductivity estimates vary by more than three orders of magnitude across a lithologic unit, indicating a high degree of heterogeneity in volcanic and carbonate-rock units. The higher water transmitting potential of carbonate-rock units relative

  10. Previously unknown species of Aspergillus.

    Science.gov (United States)

    Gautier, M; Normand, A-C; Ranque, S

    2016-08-01

    The use of multi-locus DNA sequence analysis has led to the description of previously unknown 'cryptic' Aspergillus species, whereas classical morphology-based identification of Aspergillus remains limited to the section or species-complex level. The current literature highlights two main features concerning these 'cryptic' Aspergillus species. First, the prevalence of such species in clinical samples is relatively high compared with emergent filamentous fungal taxa such as Mucorales, Scedosporium or Fusarium. Second, it is clearly important to identify these species in the clinical laboratory because of the high frequency of antifungal drug-resistant isolates of such Aspergillus species. Matrix-assisted laser desorption/ionization-time of flight mass spectrometry (MALDI-TOF MS) has recently been shown to enable the identification of filamentous fungi with an accuracy similar to that of DNA sequence-based methods. As MALDI-TOF MS is well suited to the routine clinical laboratory workflow, it facilitates the identification of these 'cryptic' Aspergillus species at the routine mycology bench. The rapid establishment of enhanced filamentous fungi identification facilities will lead to a better understanding of the epidemiology and clinical importance of these emerging Aspergillus species. Based on routine MALDI-TOF MS-based identification results, we provide original insights into the key interpretation issues of a positive Aspergillus culture from a clinical sample. Which ubiquitous species that are frequently isolated from air samples are rarely involved in human invasive disease? Can both the species and the type of biological sample indicate Aspergillus carriage, colonization or infection in a patient? Highly accurate routine filamentous fungi identification is central to enhance the understanding of these previously unknown Aspergillus species, with a vital impact on further improved patient care. Copyright © 2016 European Society of Clinical Microbiology and

  11. Numerical Development

    Science.gov (United States)

    Siegler, Robert S.; Braithwaite, David W.

    2016-01-01

    In this review, we attempt to integrate two crucial aspects of numerical development: learning the magnitudes of individual numbers and learning arithmetic. Numerical magnitude development involves gaining increasingly precise knowledge of increasing ranges and types of numbers: from non-symbolic to small symbolic numbers, from smaller to larger…

  12. Hindi Numerals.

    Science.gov (United States)

    Bright, William

    In most languages encountered by linguists, the numerals, considered as a paradigmatic set, constitute a morpho-syntactic problem of only moderate complexity. The Indo-Aryan language family of North India, however, presents a curious contrast. The relatively regular numeral system of Sanskrit, as it has developed historically into the modern…

  13. Numerical analysis

    CERN Document Server

    Rao, G Shanker

    2006-01-01

    About the Book: This book provides an introduction to Numerical Analysis for the students of Mathematics and Engineering. The book is designed in accordance with the common core syllabus of Numerical Analysis of Universities of Andhra Pradesh and also the syllabus prescribed in most of the Indian Universities. Salient features: Approximate and Numerical Solutions of Algebraic and Transcendental Equation Interpolation of Functions Numerical Differentiation and Integration and Numerical Solution of Ordinary Differential Equations The last three chapters deal with Curve Fitting, Eigen Values and Eigen Vectors of a Matrix and Regression Analysis. Each chapter is supplemented with a number of worked-out examples as well as number of problems to be solved by the students. This would help in the better understanding of the subject. Contents: Errors Solution of Algebraic and Transcendental Equations Finite Differences Interpolation with Equal Intervals Interpolation with Unequal Int...

  14. The numerical evaluation of maximum-likelihood estimates of the parameters for a mixture of normal distributions from partially identified samples

    Science.gov (United States)

    Walker, H. F.

    1976-01-01

    Likelihood equations determined by the two types of samples, which are necessary conditions for a maximum-likelihood estimate, are considered. These equations suggest certain successive-approximation iterative procedures for obtaining maximum-likelihood estimates. These are generalized steepest ascent (deflected gradient) procedures. It is shown that, with probability 1 as N_0 approaches infinity (regardless of the relative sizes of N_0 and N_i, i = 1, ..., m), these procedures converge locally to the strongly consistent maximum-likelihood estimates whenever the step size is between 0 and 2. Furthermore, the value of the step size which yields optimal local convergence rates is bounded from below by a number which always lies between 1 and 2.
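
    The record concerns generalized steepest-ascent iterations on the likelihood equations for partially identified samples; as a simpler illustration of iterative maximum-likelihood estimation for a two-component normal mixture, here is a minimal EM sketch (a related but different fixed-point iteration) on synthetic data, with all numbers assumed for illustration.

      import numpy as np

      rng = np.random.default_rng(2)

      def em_two_normals(x, iters=200):
          """Minimal EM iteration for a two-component univariate normal mixture."""
          w, mu, sd = 0.5, np.array([x.min(), x.max()]), np.array([x.std(), x.std()])
          for _ in range(iters):
              # E-step: posterior responsibility of component 1 for each observation.
              p1 = w * np.exp(-0.5 * ((x - mu[0]) / sd[0]) ** 2) / sd[0]
              p2 = (1 - w) * np.exp(-0.5 * ((x - mu[1]) / sd[1]) ** 2) / sd[1]
              r = p1 / (p1 + p2)
              # M-step: update the weight, means and standard deviations.
              w = r.mean()
              mu = np.array([np.sum(r * x) / r.sum(), np.sum((1 - r) * x) / (1 - r).sum()])
              sd = np.sqrt(np.array([np.sum(r * (x - mu[0]) ** 2) / r.sum(),
                                     np.sum((1 - r) * (x - mu[1]) ** 2) / (1 - r).sum()]))
          return w, mu, sd

      x = np.concatenate([rng.normal(-2.0, 1.0, 300), rng.normal(3.0, 0.5, 200)])
      print(em_two_normals(x))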

  15. Integrated analysis of numerical weather prediction and computational fluid dynamics for estimating cross-ventilation effects on inhaled air quality inside a factory

    Science.gov (United States)

    Murga, Alicia; Sano, Yusuke; Kawamoto, Yoichi; Ito, Kazuhide

    2017-10-01

    Mechanical and passive ventilation strategies directly impact indoor air quality. Passive ventilation, such as natural or cross ventilation, has recently become widespread owing to its ability to reduce the energy demand of buildings. To understand the effect of natural ventilation on indoor environmental quality, outdoor-indoor flow paths need to be analyzed as functions of urban atmospheric conditions, the topology of the built environment, and indoor conditions. Wind-driven natural ventilation (e.g., cross ventilation) can be calculated from the wind pressure coefficient distributions over the outdoor wall surfaces and openings of a building, allowing the study of indoor air parameters and airborne contaminant concentrations. Variations in outside parameters directly impact indoor air quality and residents' health. Numerical modeling can help address these various parameters because it allows full control of boundary conditions and sampling points. In this study, numerical weather prediction modeling was used to calculate wind profiles and distributions at the atmospheric scale, and computational fluid dynamics was used to model detailed urban and indoor flows; these were then integrated in a dynamic downscaling analysis to predict specific urban wind parameters from the atmospheric scale down to the built-environment scale. Wind velocity and contaminant concentration distributions inside a factory building were analyzed to assess the quality of the working environment using a computer-simulated person. The impact of cross-ventilation flows and their variations on the local average contaminant concentration around a factory worker, and on the inhaled contaminant dose, is then discussed.

  16. Numerical analysis

    CERN Document Server

    Scott, L Ridgway

    2011-01-01

    Computational science is fundamentally changing how technological questions are addressed. The design of aircraft, automobiles, and even racing sailboats is now done by computational simulation. The mathematical foundation of this new approach is numerical analysis, which studies algorithms for computing expressions defined with real numbers. Emphasizing the theory behind the computation, this book provides a rigorous and self-contained introduction to numerical analysis and presents the advanced mathematics that underpin industrial software, including complete details that are missing from most textbooks. Using an inquiry-based learning approach, Numerical Analysis is written in a narrative style, provides historical background, and includes many of the proofs and technical details in exercises. Students will be able to go beyond an elementary understanding of numerical simulation and develop deep insights into the foundations of the subject. They will no longer have to accept the mathematical gaps that ex...

  17. The solar UV exposure time required for vitamin D3 synthesis in the human body estimated by numerical simulation and observation in Japan

    Science.gov (United States)

    Nakajima, Hideaki; Miyauchi, Masaatsu; Hirai, Chizuko

    2013-04-01

    After the discovery of the Antarctic ozone hole, the negative effects of exposing the human body to harmful solar ultraviolet (UV) radiation became widely known. However, exposure to UV radiation also has a positive effect, namely vitamin D synthesis. Although the importance of solar UV radiation for vitamin D3 synthesis in the human body is well known, the solar exposure time required to prevent vitamin D deficiency has not been well determined. This study attempted to identify the solar exposure time required for vitamin D3 synthesis in the body by season, time of day, and geographic location (Sapporo, Tsukuba, and Naha, in Japan) using both numerical simulations and observations. According to the numerical simulation for Tsukuba at noon in July under a cloudless sky, 2.3 min of solar exposure are required to produce 5.5 μg of vitamin D3 per 600 cm² of skin. This quantity of vitamin D represents the intake recommended for an adult by the Ministry of Health, Labour and Welfare in the 2010 Japanese Dietary Reference Intakes (DRIs). In contrast, it took 49.5 min to produce the same amount of vitamin D3 at Sapporo, in the northern part of Japan, in December at noon under a cloudless sky. The necessary exposure time varied considerably with the time of day. For Tsukuba in December, 14.5 min were required at noon, but 68.7 min at 09:00 and 175.8 min at 15:00 under the same meteorological conditions. Naha receives high levels of UV radiation, allowing vitamin D3 synthesis almost throughout the year. Based on these results, we are further developing an index to quantify the UV exposure time necessary to produce the required amount of vitamin D3 from UV radiation data.

  18. Use of a numerical simulation approach to improve the estimation of air-water exchange fluxes of polycyclic aromatic hydrocarbons in a coastal zone.

    Science.gov (United States)

    Lai, I-Chien; Lee, Chon-Lin; Ko, Fung-Chi; Lin, Ju-Chieh; Huang, Hu-Ching; Shiu, Ruei-Feng

    2017-07-15

    The air-water exchange is important for determining the transport, fate, and chemical loading of polycyclic aromatic hydrocarbons (PAHs) in the atmosphere and in aquatic systems. Investigations of PAH air-water exchange are mostly based on observational data obtained using complicated field sampling processes. This study proposes a new approach to improve the estimation of long-term PAH air-water exchange fluxes by using a multivariate regression model to simulate hourly gaseous PAH concentrations. Model performance analysis and the benefits from this approach indicate its effectiveness at improving the flux estimations and at decreasing the field sampling difficulty. The proposed GIS mapping approach is useful for box model establishment and is tested for visualization of the spatiotemporal variations of air-water exchange fluxes in a coastal zone. The air-water exchange fluxes illustrated by contour maps suggest that the atmospheric PAHs might have greater impacts on offshore sites than on the coastal area in this study. Copyright © 2017 Elsevier Ltd. All rights reserved.

  19. Embedded turbulence model in numerical methods for hyperbolic conservation laws

    Science.gov (United States)

    Drikakis, D.

    2002-07-01

    The paper describes the use of numerical methods for hyperbolic conservation laws as an embedded turbulence modelling approach. Different Godunov-type schemes are utilized in computations of Burgers' turbulence and a two-dimensional mixing layer. The schemes include a total variation diminishing, characteristic-based scheme which is developed in this paper using the flux limiter approach. The embedded turbulence modelling property of the above methods is demonstrated through coarsely resolved large eddy simulations with and without subgrid scale models.
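
    As a rough illustration of the embedded-dissipation idea, the sketch below advances the 1D inviscid Burgers equation with a MUSCL reconstruction, a minmod limiter and a Rusanov (local Lax-Friedrichs) flux; this is a generic high-resolution Godunov-type scheme, not the characteristic-based flux-limited scheme developed in the paper, and the grid, CFL number and initial condition are assumptions.

      import numpy as np

      def minmod(a, b):
          return np.where(a * b > 0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

      N = 200
      x = np.linspace(0.0, 1.0, N, endpoint=False)
      dx = x[1] - x[0]
      u = np.sin(2 * np.pi * x) + 0.5          # illustrative periodic initial condition
      t, T = 0.0, 0.3

      while t < T:
          dt = min(0.4 * dx / np.max(np.abs(u) + 1e-12), T - t)
          # Limited MUSCL slopes and reconstructed states at interface i+1/2.
          s = minmod(u - np.roll(u, 1), np.roll(u, -1) - u)
          uL = u + 0.5 * s
          uR = np.roll(u - 0.5 * s, -1)
          # Rusanov (local Lax-Friedrichs) numerical flux for f(u) = u^2/2.
          a = np.maximum(np.abs(uL), np.abs(uR))
          F = 0.5 * (0.5 * uL**2 + 0.5 * uR**2) - 0.5 * a * (uR - uL)
          u = u - dt / dx * (F - np.roll(F, 1))
          t += dt

      print("after shock formation: min %.3f, max %.3f" % (u.min(), u.max()))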

  20. Numerical analysis

    CERN Document Server

    Brezinski, C

    2012-01-01

    Numerical analysis has witnessed many significant developments in the 20th century. This book brings together 16 papers dealing with historical developments, survey papers and papers on recent trends in selected areas of numerical analysis, such as: approximation and interpolation, solution of linear systems and eigenvalue problems, iterative methods, quadrature rules, and the solution of ordinary, partial and integral equations. The papers are reprinted from the 7-volume project of the Journal of Computational and Applied Mathematics.

  1. Numerical Relativity

    Science.gov (United States)

    Baker, John G.

    2009-01-01

    Recent advances in numerical relativity have fueled an explosion of progress in understanding the predictions of Einstein's theory of gravity, General Relativity, for the strong-field dynamics, the gravitational radiation waveforms, and consequently the state of the remnant produced from the merger of compact binary objects. I will review recent results from the field, focusing on mergers of two black holes.

  2. Analytical and numerical models for estimating the effect of exhaust ventilation on radon entry in houses with basements or crawl spaces

    International Nuclear Information System (INIS)

    Mowris, R.J.

    1986-08-01

    Mechanical exhaust ventilation systems are being installed in newer, energy-efficient houses and their operation can increase the indoor-outdoor pressure differences that drive soil gas and thus radon entry. This thesis presents simplified models for estimating the pressure driven flow of radon into houses with basements or crawl spaces, due to underpressures induced by indoor-outdoor temperature differences, wind, or exhaust ventilation. A two-dimensional finite difference model is presented and used to calculate the pressure field and soil gas flow rate into a basement situated in soil of uniform permeability. A simplified analytical model is compared to the finite difference model with generally very good agreement. Another simplified model is presented for houses with a crawl space. Literature on radon research is also reviewed to show why pressure driven flow of soil gas is considered to be the major source of radon entry in houses with higher-than-average indoor radon concentrations. Comparisons of measured vs. calculated indoor radon concentrations for a house with a basement showed the simplified basement model underpredicting on average by 25%. For a house with a crawl space the simplified crawl space model overpredicted by 23% when the crawl space vents are open and 48% when the crawl space vents are sealed

  3. Numerical estimates of the maximum sustainable pore pressure in anticline formations using the tensor based concept of pore pressure-stress coupling

    Directory of Open Access Journals (Sweden)

    Andreas Eckert

    2015-02-01

    Full Text Available The advanced tensor-based concept of pore pressure-stress coupling is used to provide pre-injection analytical estimates of the maximum sustainable pore pressure change, ΔPc, for fluid injection scenarios into generic anticline geometries. The heterogeneous stress distribution for different prevailing stress regimes, in combination with the Young's modulus (E) contrast between the injection layer and the cap rock and the interbedding friction coefficient, μ, may result in large spatial and directional differences in ΔPc. Unlike in horizontally layered injection scenarios, no single value characterizing the cap rock is obtained. It is observed that a higher Young's modulus in the cap rock and/or a weak mechanical coupling between layers amplifies the maximum and minimum ΔPc values in the valley and limb, respectively. These differences in ΔPc imposed by E and μ are further amplified by the different stress regimes: the more compressional the stress regime, the larger the differences between the maximum and minimum ΔPc values become. The results of this study show that, in general, compressional stress regimes yield the largest magnitudes of ΔPc and extensional stress regimes provide the lowest values of ΔPc for anticline formations. Yet this conclusion has to be considered with care when folded anticline layers are characterized by flexural slip and the friction coefficient between layers is low, i.e. μ = 0.1. For such cases of weak mechanical coupling, ΔPc magnitudes may range from 0 MPa to 27 MPa, indicating an imminent risk of fault reactivation in the cap rock.

  4. Numerical relativity

    CERN Document Server

    Nakamura, T

    1993-01-01

    In GR13 we heard many reports on recent progress as well as future plans for the detection of gravitational waves. According to these reports (see the report of the workshop on the detection of gravitational waves by Paik in this volume), it is highly probable that the sensitivity of detectors such as laser interferometers and ultra low temperature resonant bars will reach the level of h ~ 10^-21 by 1998. At this level we may expect the detection of gravitational waves from astrophysical sources such as coalescing binary neutron stars once a year or so. Therefore progress in numerical relativity is urgently required to predict the wave pattern and amplitude of the gravitational waves from realistic astrophysical sources. The time left for numerical relativists is only six years or so, although there are many difficulties in principle as well as in practice.

  5. Contribution to the asymptotic estimation of the global error of single step numerical integration methods. Application to the simulation of electric power networks; Contribution a l'estimation asymptotique de l'erreur globale des methodes d'integration numerique a un pas. Application a la simulation des reseaux electriques

    Energy Technology Data Exchange (ETDEWEB)

    Aid, R.

    1998-01-07

    This work comes from an industrial problem of validating numerical solutions of ordinary differential equations modeling power systems. This problem is solved using asymptotic estimators of the global error. Four techniques are studied: the Richardson estimator (RS), Zadunaisky's technique (ZD), integration of the variational equation (EV), and solving for the correction (SC). We clarify the relative order of SC with respect to the order of the numerical method. A new variant of ZD is proposed that uses the Modified Equation. In the case of variable step size, it is shown that, under suitable restrictions on the step-size selection hypotheses, ZD and SC remain valid. Moreover, some Runge-Kutta methods are shown to need weaker hypotheses on the step sizes to exhibit a valid order of convergence for ZD and SC. Numerical tests conclude this analysis. Industrial cases are given. Finally, an algorithm to avoid the a priori specification of the integration path for complex time differential equations is proposed. (author)
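
    As a minimal illustration of the Richardson-type estimator (RS) mentioned above, the sketch below integrates a test ODE with the classical fourth-order Runge-Kutta method on step sizes h and h/2 and uses the difference of the two solutions to estimate the global error. The test problem, interval and step counts are assumptions for illustration, not the industrial cases of the thesis.

        # Minimal sketch of a Richardson (step-halving) estimate of the global error
        # of a one-step method of order p, on the test problem y' = -y, y(0) = 1.
        import numpy as np

        def rk4(f, y0, t0, t1, n):
            """Classical 4th-order Runge-Kutta with n constant steps."""
            h, y, t = (t1 - t0) / n, y0, t0
            for _ in range(n):
                k1 = f(t, y)
                k2 = f(t + h / 2, y + h / 2 * k1)
                k3 = f(t + h / 2, y + h / 2 * k2)
                k4 = f(t + h, y + h * k3)
                y += h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
                t += h
            return y

        f = lambda t, y: -y
        p, n = 4, 50
        y_h  = rk4(f, 1.0, 0.0, 2.0, n)         # coarse solution, step h
        y_h2 = rk4(f, 1.0, 0.0, 2.0, 2 * n)     # fine solution, step h/2

        err_est_fine  = (y_h - y_h2) / (2**p - 1)   # estimated global error of the h/2 solution
        err_true_fine = y_h2 - np.exp(-2.0)         # true global error for this test problem
        print("estimated %.2e   true %.2e" % (err_est_fine, err_true_fine))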

  6. Biased calculations: Numeric anchors influence answers to math equations

    Directory of Open Access Journals (Sweden)

    Andrew R. Smith

    2011-02-01

    Full Text Available People must often perform calculations in order to produce a numeric estimate (e.g., a grocery-store shopper estimating the total price of his or her shopping cart contents). The current studies were designed to test whether estimates based on calculations are influenced by comparisons with irrelevant anchors. Previous research has demonstrated that estimates across a wide range of contexts assimilate toward anchors, but none has examined estimates based on calculations. In two studies, we had participants compare the answers to math problems with anchors. In both studies, participants' estimates assimilated toward the anchor values. This effect was moderated by time limit such that the anchoring effects were larger when the participants' ability to engage in calculations was limited by a restrictive time limit.

  7. Uncertainty estimation and global forecasting with a chemistry-transport model - application to the numerical simulation of air quality; Estimation de l'incertitude et prevision d'ensemble avec un modele de chimie transport - Application a la simulation numerique de la qualite de l'air

    Energy Technology Data Exchange (ETDEWEB)

    Mallet, V.

    2005-12-15

    The aim of this work is the evaluation of the quality of a chemistry-transport model, not by a classical comparison with observations, but by the estimation of its uncertainties due to the input data, to the model formulation and to the numerical approximations. The study of these 3 sources of uncertainty is carried out with Monte Carlo simulations, with multi-model simulations and with comparisons between numerical schemes, respectively. A high uncertainty is shown for ozone concentrations. To overcome the uncertainty-related limitations, one strategy consists in using ensemble forecasting. By combining several models (up to 48) on the basis of past observations, forecasts can be significantly improved. This work was also the occasion to develop an innovative modeling system, named Polyphemus. (J.S.)
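
    The ensemble idea sketched above can be illustrated with a least-squares combination of member forecasts against past observations. The synthetic data and the unconstrained weights below are assumptions for illustration, not the aggregation rules used with Polyphemus.

        # Minimal sketch: combine M model forecasts into one by fitting weights
        # on past observations (synthetic data; illustrative only).
        import numpy as np

        rng = np.random.default_rng(0)
        T, M = 200, 5                                   # past time steps, ensemble members
        truth = 40 + 10 * np.sin(np.linspace(0, 8, T))  # "observed" ozone-like signal
        members = truth[:, None] + rng.normal(0, [3, 5, 4, 6, 2], (T, M)) \
                  + np.array([2.0, -4.0, 1.0, 0.0, -1.0])   # biased, noisy members

        w, *_ = np.linalg.lstsq(members, truth, rcond=None)  # least-squares weights
        combined = members @ w

        rmse = lambda x: np.sqrt(np.mean((x - truth) ** 2))
        print("best single member RMSE:", min(rmse(members[:, m]) for m in range(M)))
        print("combined forecast  RMSE:", rmse(combined))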

  9. Numerical analysis

    CERN Document Server

    Jacques, Ian

    1987-01-01

    This book is primarily intended for undergraduates in mathematics, the physical sciences and engineering. It introduces students to most of the techniques forming the core component of courses in numerical analysis. The text is divided into eight chapters which are largely self-contained. However, with a subject as intricately woven as mathematics, there is inevitably some interdependence between them. The level of difficulty varies and, although emphasis is firmly placed on the methods themselves rather than their analysis, we have not hesitated to include theoretical material when we consider it to be sufficiently interesting. However, it should be possible to omit those parts that do seem daunting while still being able to follow the worked examples and to tackle the exercises accompanying each section. Familiarity with the basic results of analysis and linear algebra is assumed since these are normally taught in first courses on mathematical methods. For reference purposes a list of theorems used in the t...

  10. Cardiovascular magnetic resonance in adults with previous cardiovascular surgery.

    Science.gov (United States)

    von Knobelsdorff-Brenkenhoff, Florian; Trauzeddel, Ralf Felix; Schulz-Menger, Jeanette

    2014-03-01

    Cardiovascular magnetic resonance (CMR) is a versatile non-invasive imaging modality that serves a broad spectrum of indications in clinical cardiology and has proven evidence. Most of the numerous applications are appropriate in patients with previous cardiovascular surgery in the same manner as in non-surgical subjects. However, some specifics have to be considered. This review article is intended to provide information about the application of CMR in adults with previous cardiovascular surgery. In particular, the two main scenarios, i.e. following coronary artery bypass surgery and following heart valve surgery, are highlighted. Furthermore, several pictorial descriptions of other potential indications for CMR after cardiovascular surgery are given.

  11. Numerical analysis using Sage

    CERN Document Server

    Anastassiou, George A

    2015-01-01

    This is the first numerical analysis text to use Sage for the implementation of algorithms and can be used in a one-semester course for undergraduates in mathematics, math education, computer science/information technology, engineering, and physical sciences. The primary aim of this text is to simplify understanding of the theories and ideas from a numerical analysis/numerical methods course via a modern programming language like Sage. Aside from the presentation of fundamental theoretical notions of numerical analysis throughout the text, each chapter concludes with several exercises that are oriented to real-world application. Answers may be verified using Sage. The presented code, written in core components of Sage, is backward compatible, i.e., easily applicable to other software systems such as Mathematica®. Sage is open source software and uses Python-like syntax. Previous Python programming experience is not a requirement for the reader, though familiarity with any programming language is a p...

  12. Reasoning with Previous Decisions: Beyond the Doctrine of Precedent

    DEFF Research Database (Denmark)

    Komárek, Jan

    2013-01-01

    ... law method’, but they are no less rational and intellectually sophisticated. The reason for the rather conceited attitude of some comparatists lies in the dominance of the common law paradigm of precedent and the accompanying ‘case law method’. If we want to understand how courts and lawyers in different jurisdictions use previous judicial decisions in their argument, we need to move beyond the concept of precedent to a wider notion, which would embrace practices and theories in legal systems outside the Common law tradition. This article presents the concept of ‘reasoning with previous decisions’ as such an alternative and develops its basic models. The article first points out several shortcomings inherent in limiting the inquiry into reasoning with previous decisions by the common law paradigm (1). On the basis of numerous examples provided in section (1), I will present two basic models of reasoning...

  13. Numerical simulation of laser resonators

    International Nuclear Information System (INIS)

    Yoo, J. G.; Jeong, Y. U.; Lee, B. C.; Rhee, Y. J.; Cho, S. O.

    2004-01-01

    We developed numerical simulation packages for laser resonators on the basis of a pair of integral equations. Two numerical schemes, a matrix formalism and an iterative method, were programmed for finding numeric solutions to the pair of integral equations. The iterative method was tried by Fox and Li, but it was not applicable for high Fresnel numbers since the numerical errors involved propagate and accumulate uncontrollably. In this paper, we implement the matrix method to extend the computational limit further. A great number of case studies are carried out with various configurations of stable and unstable resonators to compute diffraction losses, phase shifts, intensity distributions and phases of the radiation fields on mirrors. Our results presented in this paper show not only a good agreement with the results previously obtained by Fox and Li, but also the legitimacy of our numerical procedures for high Fresnel numbers.
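
    A compact way to see the two schemes is to discretize the round-trip propagation integral into a matrix: the Fox-Li approach is then a power iteration on that matrix, while the matrix formalism amounts to a direct eigendecomposition. The sketch below does this for a symmetric strip resonator; the kernel normalization and the resonator parameters are textbook-style assumptions, not the configurations studied in the paper.

        # Minimal sketch: dominant mode of a plane-parallel strip resonator from the
        # discretized Fresnel kernel, via power iteration (Fox-Li style) and via a
        # direct eigendecomposition (matrix formalism). Illustrative parameters only.
        import numpy as np

        lam, L, a, N = 1e-6, 1.0, 2e-3, 400      # wavelength, mirror spacing, half-width, grid
        x = np.linspace(-a, a, N)
        dx = x[1] - x[0]
        k = 2 * np.pi / lam

        # One-transit Fresnel kernel for a strip (1-D) geometry
        K = np.sqrt(1j / (lam * L)) * np.exp(-1j * k * (x[:, None] - x[None, :])**2 / (2 * L)) * dx

        # Fox-Li iteration: repeated transits of an arbitrary start field
        u = np.ones(N, dtype=complex)
        for _ in range(300):
            u_new = K @ u
            gamma = np.vdot(u, u_new) / np.vdot(u, u)   # Rayleigh-quotient eigenvalue estimate
            u = u_new / np.abs(u_new).max()

        # Matrix formalism: dominant eigenvalue directly
        eigvals = np.linalg.eigvals(K)
        gamma_direct = eigvals[np.argmax(np.abs(eigvals))]

        print("diffraction loss per transit (iterative): %.4f" % (1 - abs(gamma)**2))
        print("diffraction loss per transit (direct)   : %.4f" % (1 - abs(gamma_direct)**2))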

  14. Preoperative screening: value of previous tests.

    Science.gov (United States)

    Macpherson, D S; Snow, R; Lofgren, R P

    1990-12-15

    To determine the frequency of tests done in the year before elective surgery that might substitute for preoperative screening tests and to determine the frequency of test results that change from a normal value to a value likely to alter perioperative management. Retrospective cohort analysis of computerized laboratory data (complete blood count, sodium, potassium, and creatinine levels, prothrombin time, and partial thromboplastin time). Urban tertiary care Veterans Affairs Hospital. Consecutive sample of 1109 patients who had elective surgery in 1988. At admission, 7549 preoperative tests were done, 47% of which duplicated tests performed in the previous year. Of 3096 previous results that were normal as defined by hospital reference range and done closest to the time of but before admission (median interval, 2 months), 13 (0.4%; 95% CI, 0.2% to 0.7%) repeat values were outside a range considered acceptable for surgery. Most of the abnormalities were predictable from the patient's history, and most were not noted in the medical record. Of 461 previous tests that were abnormal, 78 (17%; CI, 13% to 20%) repeat values at admission were outside a range considered acceptable for surgery (P less than 0.001, comparing the frequency of clinically important abnormalities in patients with normal previous results with that in patients with abnormal previous results). Physicians evaluating patients preoperatively could safely substitute the previous test results analyzed in this study for preoperative screening tests if the previous tests are normal and no obvious indication for retesting is present.

  15. Automatic electromagnetic valve for previous vacuum

    International Nuclear Information System (INIS)

    Granados, C. E.; Martin, F.

    1959-01-01

    A valve which permits the maintenance of an installation vacuum when the electric current fails is described. It also lets air into the backing (fore-vacuum) pump to prevent the oil from rising into the vacuum tubes. (Author)

  16. Past and current sediment dispersion pattern estimates through numerical modeling of wave climate: an example of the Holocene delta of the Doce River, Espírito Santo, Brazil

    Directory of Open Access Journals (Sweden)

    Abílio C.S.P. Bittencourt

    2007-06-01

    Full Text Available This paper presents a numerical modeling estimation of the sediment dispersion patterns caused by waves incident on four distinct coastline contours of the delta plain of the Doce River during the Late Holocene. For this, a wave climate model based on the construction of wave refraction diagrams, as a function of current boundary conditions, was defined and was assumed to be valid for the four coastlines. The numerical modeling was carried out on the basis of the refraction diagrams, taking into account the angle of approximation and the wave height along the coastline. The results are shown to be comparable with existing data regarding the directions of net longshore drift of sediments estimated from the integration of sediment cores, interpretation of aerial photographs and C14 dating. This fact apparently suggests that, on average, current boundary conditions appear to have remained with the same general characteristics from 5600 cal yr BP to the present. The approach used may prove useful for evaluating the sediment dispersion patterns during the Late Holocene in the Brazilian east-northeast coastal region.

  17. Moyamoya disease in a child with previous acute necrotizing encephalopathy

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Taik-Kun; Cha, Sang Hoon; Chung, Kyoo Byung; Kim, Jung Hyuck; Kim, Baek Hyun; Chung, Hwan Hoon [Department of Diagnostic Radiology, Korea University College of Medicine, Ansan Hospital, 516 Kojan-Dong, Ansan City, Kyungki-Do 425-020 (Korea); Eun, Baik-Lin [Department of Pediatrics, Korea University College of Medicine, Seoul (Korea)

    2003-09-01

    A previously healthy 24-day-old boy presented with a 2-day history of fever and had a convulsion on the day of admission. MRI showed abnormal signal in the thalami, caudate nuclei and central white matter. Acute necrotising encephalopathy was diagnosed, other causes having been excluded after biochemical and haematological analysis of blood, urine and CSF. He recovered, but with spastic quadriparesis. At the age of 28 months, he suffered sudden deterioration of consciousness and motor weakness of his right limbs. MRI was consistent with an acute cerebrovascular accident. Angiography showed bilateral middle cerebral artery stenosis or frank occlusion with numerous lenticulostriate collateral vessels consistent with moyamoya disease. (orig.)

  18. Numerical modeling of economic uncertainty

    DEFF Research Database (Denmark)

    Schjær-Jacobsen, Hans

    2007-01-01

    Representation and modeling of economic uncertainty is addressed by different modeling methods, namely stochastic variables and probabilities, interval analysis, and fuzzy numbers, in particular triple estimates. Focusing on discounted cash flow analysis, numerical results are presented, comparisons are made between alternative modeling methods, and characteristics of the methods are discussed.
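
    A minimal sketch of the triple-estimate idea applied to discounted cash flow: each yearly cash flow is given as a (pessimistic, most likely, optimistic) triple and, because the NPV is monotone in each cash flow for a fixed discount rate, the bounds of the NPV follow directly from the bounds of the inputs. The figures and the fixed discount rate are invented for illustration.

        # Minimal sketch: interval / triple-estimate NPV for a fixed discount rate.
        # Cash flow triples are (pessimistic, most likely, optimistic); values are invented.
        cash_flows = [(-1100.0, -1000.0, -950.0),   # year 0: investment
                      (200.0, 300.0, 350.0),        # year 1
                      (250.0, 350.0, 400.0),        # year 2
                      (300.0, 400.0, 500.0),        # year 3
                      (300.0, 450.0, 550.0)]        # year 4
        rate = 0.08                                  # discount rate (assumed crisp)

        def npv(flows):
            return sum(c / (1 + rate) ** t for t, c in enumerate(flows))

        # NPV is increasing in every cash flow, so the triple propagates component-wise.
        low  = npv([c[0] for c in cash_flows])
        mid  = npv([c[1] for c in cash_flows])
        high = npv([c[2] for c in cash_flows])
        print("NPV triple estimate: (%.1f, %.1f, %.1f)" % (low, mid, high))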

  19. Numerical modeling of slow shocks

    International Nuclear Information System (INIS)

    Winske, D.

    1987-01-01

    This paper reviews previous attempts and the present status of efforts to understand the structure of slow shocks by means of time dependent numerical calculations. Studies carried out using MHD or hybrid-kinetic codes have demonstrated qualitative agreement with theory. A number of unresolved issues related to hybrid simulations of the internal shock structure are discussed in some detail. 43 refs., 8 figs

  20. 77 FR 70176 - Previous Participation Certification

    Science.gov (United States)

    2012-11-23

    ... participants' previous participation in government programs and ensure that the past record is acceptable prior... information is designed to be 100 percent automated and digital submission of all data and certifications is... government programs and ensure that the past record is acceptable prior to granting approval to participate...

  1. On the Tengiz petroleum deposit previous study

    International Nuclear Information System (INIS)

    Nysangaliev, A.N.; Kuspangaliev, T.K.

    1997-01-01

    The previous study of the Tengiz petroleum deposit is described. Some considerations about the structure of the productive formation and the specific characteristic properties of the petroleum-bearing collectors are presented. Recommendations are given on their detailed study and on using experience from the exploration and development of petroleum deposits that are analogous in the most important geological and industrial parameters. (author)

  2. Subsequent pregnancy outcome after previous foetal death

    NARCIS (Netherlands)

    Nijkamp, J. W.; Korteweg, F. J.; Holm, J. P.; Timmer, A.; Erwich, J. J. H. M.; van Pampus, M. G.

    Objective: A history of foetal death is a risk factor for complications and foetal death in subsequent pregnancies as most previous risk factors remain present and an underlying cause of death may recur. The purpose of this study was to evaluate subsequent pregnancy outcome after foetal death and to

  3. BCJ numerators from reduced Pfaffian

    Energy Technology Data Exchange (ETDEWEB)

    Du, Yi-Jian [Center for Theoretical Physics, School of Physics and Technology, Wuhan University,No. 299 Bayi Road, Wuhan 430072 (China); Teng, Fei [Department of Physics and Astronomy, University of Utah,115 South 1400 East, Salt Lake City, UT 84112 (United States)

    2017-04-07

    By expanding the reduced Pfaffian in the tree level Cachazo-He-Yuan (CHY) integrands for Yang-Mills (YM) and nonlinear sigma model (NLSM), we can get the Bern-Carrasco-Johansson (BCJ) numerators in Del Duca-Dixon-Maltoni (DDM) form for an arbitrary number of particles in any spacetime dimension. In this work, we give a set of very straightforward graphic rules based on spanning trees for a direct evaluation of the BCJ numerators for YM and NLSM. Such rules can be derived from the Laplace expansion of the corresponding reduced Pfaffian. For YM, each one of the (n−2)! DDM form BCJ numerators contains exactly (n−1)! terms, corresponding to the increasing trees with respect to the color order. For NLSM, the number of nonzero numerators is at most (n−2)!−(n−3)!, less than those of several previous constructions.
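
    The (n−1)! counting statement can be made concrete with a toy enumeration: rooting the trees at one leg and letting every other vertex choose a parent with a smaller label generates exactly the increasing trees, so their number is 1·2·…·(n−1). The parent-choice encoding below is a standard combinatorial bijection adopted here purely for illustration; it is not the graphic rules of the paper.

        # Toy check that the number of increasing trees on vertices 1..n (rooted at 1,
        # labels increasing away from the root) equals (n-1)!.
        import math
        from itertools import product

        def increasing_trees(n):
            """Enumerate increasing trees as parent maps: vertex k > 1 picks a parent < k."""
            for parents in product(*[range(1, k) for k in range(2, n + 1)]):
                yield {k: parents[k - 2] for k in range(2, n + 1)}

        for n in range(2, 7):
            count = sum(1 for _ in increasing_trees(n))
            print(n, count, math.factorial(n - 1), count == math.factorial(n - 1))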

  4. Sensitivity analysis of numerical solutions for environmental fluid problems

    International Nuclear Information System (INIS)

    Tanaka, Nobuatsu; Motoyama, Yasunori

    2003-01-01

    In this study, we present a new numerical method to quantitatively analyze the error of numerical solutions by using sensitivity analysis. Once a reference case with typical parameters has been calculated with the method, no additional calculation is required to estimate the results for other numerical parameters, such as more detailed solutions. Furthermore, we can estimate the exact solution from the sensitivity analysis results and can quantitatively evaluate the reliability of the numerical solution by calculating the numerical error. (author)

  5. Response to health insurance by previously uninsured rural children.

    Science.gov (United States)

    Tilford, J M; Robbins, J M; Shema, S J; Farmer, F L

    1999-08-01

    To examine the healthcare utilization and costs of previously uninsured rural children. Four years of claims data from a school-based health insurance program located in the Mississippi Delta. All children who were not Medicaid-eligible or were uninsured were eligible for limited benefits under the program. The 1987 National Medical Expenditure Survey (NMES) was used to compare utilization of services. The study represents a natural experiment in the provision of insurance benefits to a previously uninsured population. Premiums for the claims cost were set with little or no information on expected use of services. Claims from the insurer were used to form a panel data set. Mixed model logistic and linear regressions were estimated to determine the response to insurance for several categories of health services. The use of services increased over time and approached the level of utilization in the NMES. Conditional medical expenditures also increased over time. Actuarial estimates of claims cost greatly exceeded actual claims cost. The provision of a limited medical, dental, and optical benefit package cost approximately $20-$24 per member per month in claims paid. An important uncertainty in providing health insurance to previously uninsured populations is whether a pent-up demand exists for health services. Evidence of a pent-up demand for medical services was not supported in this study of rural school-age children. States considering partnerships with private insurers to implement the State Children's Health Insurance Program could lower premium costs by assembling basic data on previously uninsured children.

  6. Subsequent childbirth after a previous traumatic birth.

    Science.gov (United States)

    Beck, Cheryl Tatano; Watson, Sue

    2010-01-01

    Nine percent of new mothers in the United States who participated in the Listening to Mothers II Postpartum Survey screened positive for meeting the Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition criteria for posttraumatic stress disorder after childbirth. Women who have had a traumatic birth experience report fewer subsequent children and a longer length of time before their second baby. Childbirth-related posttraumatic stress disorder impacts couples' physical relationship, communication, conflict, emotions, and bonding with their children. The purpose of this study was to describe the meaning of women's experiences of a subsequent childbirth after a previous traumatic birth. Phenomenology was the research design used. An international sample of 35 women participated in this Internet study. Women were asked, "Please describe in as much detail as you can remember your subsequent pregnancy, labor, and delivery following your previous traumatic birth." Colaizzi's phenomenological data analysis approach was used to analyze the stories of the 35 women. Data analysis yielded four themes: (a) riding the turbulent wave of panic during pregnancy; (b) strategizing: attempts to reclaim their body and complete the journey to motherhood; (c) bringing reverence to the birthing process and empowering women; and (d) still elusive: the longed-for healing birth experience. Subsequent childbirth after a previous birth trauma has the potential to either heal or retraumatize women. During pregnancy, women need permission and encouragement to grieve their prior traumatic births to help remove the burden of their invisible pain.

  7. Compressive Parameter Estimation for Sparse Translation-Invariant Signals Using Polar Interpolation

    DEFF Research Database (Denmark)

    Fyhn, Karsten; Duarte, Marco F.; Jensen, Søren Holdt

    2015-01-01

    We propose new compressive parameter estimation algorithms that make use of polar interpolation to improve the estimator precision. Our work extends previous approaches involving polar interpolation for compressive parameter estimation in two aspects: (i) we extend the formulation from real non...... to attain good estimation precision and keep the computational complexity low. Our numerical experiments show that the proposed algorithms outperform existing approaches that either leverage polynomial interpolation or are based on a conversion to a frequency-estimation problem followed by a super...... interpolation increases the estimation precision....

  8. Count on dopamine: influences of COMT polymorphisms on numerical cognition

    Directory of Open Access Journals (Sweden)

    Annelise eJúlio-Costa

    2013-08-01

    Full Text Available Catechol-O-methyltransferase (COMT) is an enzyme that is particularly important for the metabolism of dopamine. Functional polymorphisms of COMT have been implicated in working memory and numerical cognition. This is an exploratory study that aims at investigating associations between COMT polymorphisms, working memory and numerical cognition. Elementary school children from 2nd to 6th grades were divided into two groups according to their COMT val158met polymorphism (homozygous for the valine allele [n = 61] versus heterozygous plus methionine homozygous children, the met+ group [n = 94]). Both groups were matched for age and intelligence. Working memory was assessed through digit span and Corsi blocks. Symbolic numerical processing was assessed through transcoding and single-digit word problem tasks. Non-symbolic magnitude comparison and estimation tasks were used to assess number sense. Between-group differences were found in symbolic and non-symbolic numerical tasks, but not in working memory tasks. Children in the met+ group showed better performance in all numerical tasks while val homozygous children presented slower development of non-symbolic magnitude representations. These results suggest COMT-related dopaminergic modulation may be related not only to working memory, as found in previous studies, but also to the development of magnitude processing and magnitude representations.

  9. Numerical Optimization in Microfluidics

    DEFF Research Database (Denmark)

    Jensen, Kristian Ejlebjærg

    2017-01-01

    Numerical modelling can illuminate the working mechanism and limitations of microfluidic devices. Such insights are useful in their own right, but one can take advantage of numerical modelling in a systematic way using numerical optimization. In this chapter we will discuss when and how numerical optimization is best used.

  10. Methods of numerical relativity

    International Nuclear Information System (INIS)

    Piran, T.

    1983-01-01

    Numerical Relativity is an alternative to analytical methods for obtaining solutions for Einstein equations. Numerical methods are particularly useful for studying generation of gravitational radiation by potential strong sources. The author reviews the analytical background, the numerical analysis aspects and techniques and some of the difficulties involved in numerical relativity. (Auth.)

  11. Books average previous decade of economic misery.

    Science.gov (United States)

    Bentley, R Alexander; Acerbi, Alberto; Ormerod, Paul; Lampos, Vasileios

    2014-01-01

    For the 20th century since the Depression, we find a strong correlation between a 'literary misery index' derived from English language books and a moving average of the previous decade of the annual U.S. economic misery index, which is the sum of inflation and unemployment rates. We find a peak in the goodness of fit at 11 years for the moving average. The fit between the two misery indices holds when using different techniques to measure the literary misery index, and this fit is significantly better than other possible correlations with different emotion indices. To check the robustness of the results, we also analysed books written in the German language and obtained very similar correlations with the German economic misery index. The results suggest that millions of books published every year average the authors' shared economic experiences over the past decade.
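
    The analysis outlined above boils down to correlating one annual series with a trailing moving average of another. The sketch below shows that arithmetic on synthetic data; the 11-year window and the misery-index definition (inflation plus unemployment) are the only ingredients taken from the abstract, everything else is invented.

        # Minimal sketch: correlate a 'literary misery' series with the moving average
        # of the previous decade of an economic misery index (synthetic data only).
        import numpy as np

        rng = np.random.default_rng(1)
        years = np.arange(1930, 2001)
        econ_misery = 8 + 3 * np.sin((years - 1930) / 7.0) + rng.normal(0, 1, years.size)

        window = 11                                      # years in the trailing average
        trailing = np.convolve(econ_misery, np.ones(window) / window, mode="valid")
        trailing_years = years[window - 1:]              # year t uses years t-10 .. t

        # Synthetic 'literary misery' that tracks the trailing economic average, plus noise
        literary = 0.5 * trailing + rng.normal(0, 0.3, trailing.size)

        r = np.corrcoef(literary, trailing)[0, 1]
        print("correlation with %d-year trailing average: %.2f" % (window, r))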

  12. Books Average Previous Decade of Economic Misery

    Science.gov (United States)

    Bentley, R. Alexander; Acerbi, Alberto; Ormerod, Paul; Lampos, Vasileios

    2014-01-01

    For the 20th century since the Depression, we find a strong correlation between a ‘literary misery index’ derived from English language books and a moving average of the previous decade of the annual U.S. economic misery index, which is the sum of inflation and unemployment rates. We find a peak in the goodness of fit at 11 years for the moving average. The fit between the two misery indices holds when using different techniques to measure the literary misery index, and this fit is significantly better than other possible correlations with different emotion indices. To check the robustness of the results, we also analysed books written in German language and obtained very similar correlations with the German economic misery index. The results suggest that millions of books published every year average the authors' shared economic experiences over the past decade. PMID:24416159

  13. Underestimation of Severity of Previous Whiplash Injuries

    Science.gov (United States)

    Naqui, SZH; Lovell, SJ; Lovell, ME

    2008-01-01

    INTRODUCTION We noted a report that more significant symptoms may be expressed after second whiplash injuries by a suggested cumulative effect, including degeneration. We wondered if patients were underestimating the severity of their earlier injury. PATIENTS AND METHODS We studied recent medicolegal reports, to assess subjects with a second whiplash injury. They had been asked whether their earlier injury was worse, the same or lesser in severity. RESULTS From the study cohort, 101 patients (87%) felt that they had fully recovered from their first injury and 15 (13%) had not. Seventy-six subjects considered their first injury of lesser severity, 24 worse and 16 the same. Of the 24 that felt the violence of their first accident was worse, only 8 had worse symptoms, and 16 felt their symptoms were mainly the same or less than their symptoms from their second injury. Statistical analysis of the data revealed that the proportion of those claiming a difference who said the previous injury was lesser was 76% (95% CI 66–84%). The observed proportion with a lesser injury was considerably higher than the 50% anticipated. CONCLUSIONS We feel that subjects may underestimate the severity of an earlier injury and associated symptoms. Reasons for this may include secondary gain rather than any proposed cumulative effect. PMID:18201501

  14. [Electronic cigarettes - effects on health. Previous reports].

    Science.gov (United States)

    Napierała, Marta; Kulza, Maksymilian; Wachowiak, Anna; Jabłecka, Katarzyna; Florek, Ewa

    2014-01-01

    Electronic cigarettes (e-cigarettes) have recently become very popular on the tobacco products market. These products are considered to be potentially less harmful compared to traditional tobacco products. However, current reports indicate that the producers' statements regarding the composition of the e-liquids are not always sufficient, and consumers often do not have reliable information on the quality of the product they use. This paper contains a review of previous reports on the composition of e-cigarettes and their impact on health. Most of the observed health effects were related to symptoms of the respiratory tract, mouth, throat, neurological complications and sensory organs. Particularly hazardous effects of the e-cigarettes were: pneumonia, congestive heart failure, confusion, convulsions, hypotension, aspiration pneumonia, face second-degree burns, blindness, chest pain and rapid heartbeat. In the literature there is no information relating to passive exposure to the aerosols released during e-cigarette smoking. Furthermore, information regarding the long-term use of these products is also not available.

  15. Extensible numerical library in JAVA

    International Nuclear Information System (INIS)

    Aso, T.; Okazawa, H.; Takashimizu, N.

    2001-01-01

    The authors present the current status of the project for developing the numerical library in JAVA. The authors have presented, at a previous conference, how object-oriented techniques improve the usage and also the development of numerical libraries compared with the conventional way. The authors need many functions for data analysis which are not provided within the JAVA language, for example, good random number generators, special functions and so on. The authors' development strategy is focused on ease of implementation and on adding new features by users themselves, not only by developers. In the HPC field, there are other focused efforts to develop numerical libraries in JAVA. However, their focus is on the performance of execution, not ease of extension. Following this strategy, the authors have designed and implemented more classes for random number generators and so on

  16. Numerical models for differential problems

    CERN Document Server

    Quarteroni, Alfio

    2017-01-01

    In this text, we introduce the basic concepts for the numerical modelling of partial differential equations. We consider the classical elliptic, parabolic and hyperbolic linear equations, but also the diffusion, transport, and Navier-Stokes equations, as well as equations representing conservation laws, saddle-point problems and optimal control problems. Furthermore, we provide numerous physical examples which underline such equations. We then analyze numerical solution methods based on finite elements, finite differences, finite volumes, spectral methods and domain decomposition methods, and reduced basis methods. In particular, we discuss the algorithmic and computer implementation aspects and provide a number of easy-to-use programs. The text does not require any previous advanced mathematical knowledge of partial differential equations: the absolutely essential concepts are reported in a preliminary chapter. It is therefore suitable for students of bachelor and master courses in scientific disciplines, an...

  17. Numerical relativity and asymptotic flatness

    International Nuclear Information System (INIS)

    Deadman, E; Stewart, J M

    2009-01-01

    It is highly plausible that the region of spacetime far from an isolated gravitating body is, in some sense, asymptotically Minkowskian. However theoretical studies of the full nonlinear theory, initiated by Bondi et al (1962 Proc. R. Soc. A 269 21-51), Sachs (1962 Proc. R. Soc. A 270 103-26) and Newman and Unti (1962 J. Math. Phys. 3 891-901), rely on careful, clever, a priori choices of a chart (and tetrad) and so are not readily accessible to the numerical relativist, who chooses her/his chart on the basis of quite different grounds. This paper seeks to close this gap. Starting from data available in a typical numerical evolution, we construct a chart and tetrad which are, asymptotically, sufficiently close to the theoretical ones, so that the key concepts of the Bondi news function, Bondi mass and its rate of decrease can be estimated. In particular, these estimates can be expressed in the numerical relativist's chart as numerical relativity recipes.

  18. Uncertainty Quantification in Numerical Aerodynamics

    KAUST Repository

    Litvinenko, Alexander; Matthies, Hermann G.; Liu, Dishi; Schillings, Claudia; Schulz, Volker

    2017-01-01

    In the numerical section we compare five methods, namely quasi-Monte Carlo quadrature, polynomial chaos with coefficients determined by sparse quadrature, a gradient-enhanced version of Kriging, radial basis functions, and point collocation polynomial chaos, in their efficiency in estimating statistics of aerodynamic performance upon random perturbation of the airfoil geometry [D. Liu et al '17]. For the modeling we used the TAU code, developed at DLR, Germany.
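
    A toy version of such a comparison uses plain Monte Carlo and scrambled Sobol' (quasi-Monte Carlo) sampling to estimate the mean of a smooth "performance" function of random geometry perturbations. The function and the perturbation model are invented stand-ins, not the TAU computations of the study.

        # Toy comparison of Monte Carlo vs. quasi-Monte Carlo (Sobol') estimation of the
        # mean of a smooth function of random 'geometry' perturbations (illustrative only).
        import numpy as np
        from scipy.stats import qmc

        def performance(xi):
            """Invented smooth surrogate for an aerodynamic coefficient, xi in [0,1]^d."""
            return 1.0 + 0.1 * np.sin(2 * np.pi * xi).sum(axis=1) + 0.05 * (xi ** 2).sum(axis=1)

        d, n = 4, 1024
        rng = np.random.default_rng(0)

        mc_mean = performance(rng.random((n, d))).mean()        # plain Monte Carlo
        sobol = qmc.Sobol(d=d, scramble=True, seed=0)
        qmc_mean = performance(sobol.random(n)).mean()          # quasi-Monte Carlo

        # Reference value from a much larger random sample
        ref = performance(rng.random((200_000, d))).mean()
        print("MC error  %.4e   QMC error  %.4e" % (abs(mc_mean - ref), abs(qmc_mean - ref)))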

  19. Cuba: Multidimensional numerical integration library

    Science.gov (United States)

    Hahn, Thomas

    2016-08-01

    The Cuba library offers four independent routines for multidimensional numerical integration: Vegas, Suave, Divonne, and Cuhre. The four algorithms work by very different methods; all can integrate vector integrands and have very similar Fortran, C/C++, and Mathematica interfaces. Their invocation is very similar, making it easy to cross-check by substituting one method for another. For further safeguarding, the output is supplemented by a chi-square probability which quantifies the reliability of the error estimate.

  20. Numerical model SMODERP

    Science.gov (United States)

    Kavka, P.; Jeřábek, J.; Strouhal, L.

    2016-12-01

    The contribution presents a numerical model SMODERP that is used for calculation and prediction of surface runoff and soil erosion from agricultural land. The physically based model includes the processes of infiltration (Phillips equation), surface runoff routing (kinematic wave based equation), surface retention, surface roughness and vegetation impact on runoff. The model is being developed at the Department of Irrigation, Drainage and Landscape Engineering, Civil Engineering Faculty, CTU in Prague. A 2D version of the model was introduced in recent years. The script uses ArcGIS system tools for data preparation. The physical relations are implemented through Python scripts. The main computing part is stand-alone in numpy arrays. Flow direction is calculated by the steepest descent algorithm and by a multiple flow algorithm. Sheet flow is described by a modified kinematic wave equation. Parameters for five different soil textures were calibrated on a set of one hundred measurements performed with laboratory and field rainfall simulators. Spatially distributed models make it possible to estimate not only surface runoff but also flow in the rills. Development of the rills is based on critical shear stress and critical velocity. For modelling of the rills a specific sub-model was created. This sub-model uses the Manning formula for flow estimation. Flow in ditches and streams is also computed. Numerical stability of the model is controlled by the Courant criterion. The spatial scale is fixed. The time step is dynamic and depends on the actual discharge. The model is used in the framework of the project "Variability of Short-term Precipitation and Runoff in Small Czech Drainage Basins and its Influence on Water Resources Management". The main goal of the project is to elaborate a methodology and an online utility for deriving short-term design precipitation series, which could be utilized by a broad community of scientists, state administration as well as design planners. The methodology will account for
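
    Two of the building blocks mentioned above, Manning's formula for rill flow and a Courant-type limit on the time step, reduce to a few lines of arithmetic. The channel geometry, roughness and grid spacing below are assumed values for illustration, not SMODERP's calibrated parameters.

        # Minimal sketch: Manning velocity/discharge for a small rectangular rill and a
        # Courant-limited time step (illustrative geometry and roughness, not SMODERP values).
        import math

        n_manning = 0.03            # Manning roughness [s m^-1/3] (assumed)
        slope = 0.05                # bed slope [-]
        width, depth = 0.10, 0.02   # rill cross-section [m]
        dx = 1.0                    # grid cell size [m]
        courant_max = 0.5           # target Courant number

        area = width * depth
        wetted_perimeter = width + 2 * depth
        R = area / wetted_perimeter                                   # hydraulic radius [m]
        v = (1.0 / n_manning) * R ** (2.0 / 3.0) * math.sqrt(slope)   # Manning velocity [m/s]
        Q = v * area                                                  # discharge [m^3/s]

        dt = courant_max * dx / v                                     # Courant-limited time step [s]
        print("v = %.3f m/s, Q = %.4f m^3/s, dt <= %.2f s" % (v, Q, dt))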

  1. Numerical validation of selected computer programs in nonlinear analysis of steel frame exposed to fire

    Science.gov (United States)

    Maślak, Mariusz; Pazdanowski, Michał; Woźniczka, Piotr

    2018-01-01

    Validation of fire resistance for the same steel frame bearing structure is performed here using three different numerical models: a bar model prepared in the SAFIR environment and two 3D models, one developed within the framework of Autodesk Simulation Mechanical (ASM) and an alternative one developed in the environment of the Abaqus code. The results of the computer simulations performed are compared with the experimental results obtained previously, in a laboratory fire test, on a structure having the same characteristics and subjected to the same heating regimen. Comparison of the experimental and numerically determined displacement evolution paths for selected nodes of the considered frame during the simulated fire exposure constitutes the basic criterion applied to evaluate the validity of the numerical results obtained. The experimental and numerically determined estimates of the critical temperature specific to the considered frame and related to the limit state of bearing capacity in fire have been verified as well.

  2. Overconfidence in Interval Estimates

    Science.gov (United States)

    Soll, Jack B.; Klayman, Joshua

    2004-01-01

    Judges were asked to make numerical estimates (e.g., "In what year was the first flight of a hot air balloon?"). Judges provided high and low estimates such that they were X% sure that the correct answer lay between them. They exhibited substantial overconfidence: The correct answer fell inside their intervals much less than X% of the time. This…

  3. Transportation package design using numerical optimization

    International Nuclear Information System (INIS)

    Harding, D.C.; Witkowski, W.R.

    1991-01-01

    The purpose of this overview is twofold: first, to outline the theory and basic elements of numerical optimization; and second, to show how numerical optimization can be applied to the transportation packaging industry and used to increase efficiency and safety of radioactive and hazardous material transportation packages. A more extensive review of numerical optimization and its applications to radioactive material transportation package design was performed previously by the authors (Witkowski and Harding 1992). A proof-of-concept Type B package design is also presented as a simplified example of potential improvements achievable using numerical optimization in the design process

  4. Multi-channel PSD Estimators for Speech Dereverberation

    DEFF Research Database (Denmark)

    Kuklasinski, Adam; Doclo, Simon; Gerkmann, Timo

    2015-01-01

    densities (PSDs). We first derive closed-form expressions for the mean square error (MSE) of both PSD estimators and then show that one estimator, previously used for speech dereverberation by the authors, always yields a better MSE. Only in the case of a two-microphone array or for special spatial distributions of the interference do both estimators yield the same MSE. The theoretically derived MSE values are in good agreement with numerical simulation results and with instrumental speech quality measures in a realistic speech dereverberation task for binaural hearing aids.

  5. Theory and numerics of gravitational waves from preheating after inflation

    International Nuclear Information System (INIS)

    Dufaux, Jean-Francois; Kofman, Lev; Bergman, Amanda; Felder, Gary; Uzan, Jean-Philippe

    2007-01-01

    Preheating after inflation involves large, time-dependent field inhomogeneities, which act as a classical source of gravitational radiation. The resulting spectrum might be probed by direct detection experiments if inflation occurs at a low enough energy scale. In this paper, we develop a theory and algorithm to calculate, analytically and numerically, the spectrum of energy density in gravitational waves produced from an inhomogeneous background of stochastic scalar fields in an expanding universe. We derive some generic analytical results for the emission of gravity waves by stochastic media of random fields, which can test the validity/accuracy of numerical calculations. We contrast our method with other numerical methods in the literature, and then we apply it to preheating after chaotic inflation. In this case, we are able to check analytically our numerical results, which differ significantly from previous works. We discuss how the gravity-wave spectrum builds up with time and find that the amplitude and the frequency of its peak depend in a relatively simple way on the characteristic spatial scale amplified during preheating. We then estimate the peak frequency and amplitude of the spectrum produced in two models of preheating after hybrid inflation, which for some parameters may be relevant for gravity-wave interferometric experiments

  6. Numerical simulation of complex multi-dimensional two-phase flows in nuclear power plant coolant circuits by means of the best-estimate thermal-hydraulic code BAGIRA

    International Nuclear Information System (INIS)

    Kalinichenko, S.D.; Kroshilin, A.E.; Kroshilin, V.E.; Smirnov, A.V.

    2009-01-01

    Recent results are presented, obtained by applying the best-estimate thermal hydraulic code BAGIRA to three-dimensional modeling of complex two-phase flow dynamics inside the vessel of the horizontal steam generator PGV-1000 used in reactor units with VVER-1000. Spatial volumetric void fraction and velocity distributions are calculated and compared with available experimental data. (author)

  7. Numerical experiments to investigate the accuracy of broad-band moment magnitude, Mwp

    Science.gov (United States)

    Hara, Tatsuhiko; Nishimura, Naoki

    2011-12-01

    We perform numerical experiments to investigate the accuracy of the broad-band moment magnitude, Mwp. We conduct these experiments by measuring Mwp from synthetic seismograms and comparing the resulting values to the moment magnitudes used in the calculation of the synthetic seismograms. In the numerical experiments using point sources, we have found that there is a significant dependence of Mwp on focal mechanisms, and that depth phases have a large impact on Mwp estimates, especially for large shallow earthquakes. Numerical experiments using line sources suggest that the effects of source finiteness and rupture propagation on Mwp estimates are on the order of 0.2 magnitude units for vertical fault planes with pure dip-slip mechanisms and 45° dipping fault planes with pure dip-slip (thrust) mechanisms, but that the dependence is small for strike-slip events on a vertical fault plane. Numerical experiments for huge thrust faulting earthquakes on a fault plane with a shallow dip angle suggest that the Mwp estimates do not saturate in the moment magnitude range between 8 and 9, although they are underestimates. Our results are consistent with previous studies that compared Mwp estimates to moment magnitudes calculated from seismic moment tensors obtained by analyses of observed data.
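
    For orientation, the sketch below shows the commonly quoted broad-band P-wave recipe on which Mwp is based: the vertical P-wave displacement is integrated, scaled to a moment estimate, and converted to a magnitude. The constants (density, P velocity, average radiation pattern, distance and the magnitude relation) follow the usual textbook form and are assumptions here, as is the synthetic record; operational Mwp implementations apply further empirical corrections that are omitted, and none of this is taken from the paper itself.

        # Sketch of the broad-band P-wave moment/magnitude recipe underlying Mwp
        # (textbook-style constants; operational Mwp adds empirical corrections omitted here).
        import numpy as np

        rho, alpha = 3400.0, 7900.0        # density [kg/m^3], P velocity [m/s] (assumed)
        r = 2.0e6                          # hypocentral distance [m] (assumed)
        Fp = 0.52                          # average P radiation pattern (assumed)

        dt = 0.05                          # sample interval [s]
        t = np.arange(0, 60, dt)
        u_z = 1.0e-4 * t * np.exp(-t / 5.0)          # synthetic vertical P displacement [m]

        disp_integral = np.cumsum(u_z) * dt          # running integral of displacement
        M0 = 4 * np.pi * rho * alpha**3 * r / Fp * np.max(np.abs(disp_integral))   # [N m]
        Mw = (np.log10(M0) - 9.1) / 1.5
        print("M0 = %.2e N m  ->  Mw(p) ~ %.2f" % (M0, Mw))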

  8. Numerical methods using Matlab

    CERN Document Server

    Lindfield, George

    2012-01-01

    Numerical Methods using MATLAB, 3e, is an extensive reference offering hundreds of useful and important numerical algorithms that can be implemented into MATLAB for a graphical interpretation to help researchers analyze a particular outcome. Many worked examples are given together with exercises and solutions to illustrate how numerical methods can be used to study problems that have applications in the biosciences, chaos, optimization, engineering and science across the board.

  9. Market projections of cellulose nanomaterial-enabled products-- Part 2: Volume estimates

    Science.gov (United States)

    John Cowie; E.M. (Ted) Bilek; Theodore H. Wegner; Jo Anne Shatkin

    2014-01-01

    Nanocellulose has enormous potential to provide an important materials platform in numerous product sectors. This study builds on previous work by the same authors in which likely high-volume, low-volume, and novel applications for cellulosic nanomaterials were identified. In particular, this study creates a transparent methodology and estimates the potential annual...

  10. Twelve previously unknown phage genera are ubiquitous in global oceans.

    Science.gov (United States)

    Holmfeldt, Karin; Solonenko, Natalie; Shah, Manesh; Corrier, Kristen; Riemann, Lasse; Verberkmoes, Nathan C; Sullivan, Matthew B

    2013-07-30

    Viruses are fundamental to ecosystems ranging from oceans to humans, yet our ability to study them is bottlenecked by the lack of ecologically relevant isolates, resulting in "unknowns" dominating culture-independent surveys. Here we present genomes from 31 phages infecting multiple strains of the aquatic bacterium Cellulophaga baltica (Bacteroidetes) to provide data for an underrepresented and environmentally abundant bacterial lineage. Comparative genomics delineated 12 phage groups that (i) each represent a new genus, and (ii) represent one novel and four well-known viral families. This diversity contrasts the few well-studied marine phage systems, but parallels the diversity of phages infecting human-associated bacteria. Although all 12 Cellulophaga phages represent new genera, the podoviruses and icosahedral, nontailed ssDNA phages were exceptional, with genomes up to twice as large as those previously observed for each phage type. Structural novelty was also substantial, requiring experimental phage proteomics to identify 83% of the structural proteins. The presence of uncommon nucleotide metabolism genes in four genera likely underscores the importance of scavenging nutrient-rich molecules as previously seen for phages in marine environments. Metagenomic recruitment analyses suggest that these particular Cellulophaga phages are rare and may represent a first glimpse into the phage side of the rare biosphere. However, these analyses also revealed that these phage genera are widespread, occurring in 94% of 137 investigated metagenomes. Together, this diverse and novel collection of phages identifies a small but ubiquitous fraction of unknown marine viral diversity and provides numerous environmentally relevant phage-host systems for experimental hypothesis testing.

  11. Finger-Based Numerical Skills Link Fine Motor Skills to Numerical Development in Preschoolers.

    Science.gov (United States)

    Suggate, Sebastian; Stoeger, Heidrun; Fischer, Ursula

    2017-12-01

    Previous studies investigating the association between fine-motor skills (FMS) and mathematical skills have lacked specificity. In this study, we test whether an FMS link to numerical skills is due to the involvement of finger representations in early mathematics. We gave 81 pre-schoolers (mean age of 4 years, 9 months) a set of FMS measures and numerical tasks with and without a specific finger focus. Additionally, we used receptive vocabulary and chronological age as control measures. FMS linked more closely to finger-based than to nonfinger-based numerical skills even after accounting for the control variables. Moreover, the relationship between FMS and numerical skill was entirely mediated by finger-based numerical skills. We concluded that FMS are closely related to early numerical skill development through finger-based numerical counting that aids the acquisition of mathematical mental representations.

  12. Kidnapping Detection and Recognition in Previous Unknown Environment

    Directory of Open Access Journals (Sweden)

    Yang Tian

    2017-01-01

    Full Text Available An event of which the robot is unaware, referred to as kidnapping, makes the estimation result of localization incorrect. In a previously unknown environment, an incorrect localization result causes an incorrect mapping result in Simultaneous Localization and Mapping (SLAM) under kidnapping. In this situation, the explored area and unexplored area are divided, making kidnapping recovery difficult. To provide sufficient information on kidnapping, a framework to judge whether kidnapping has occurred and to identify the type of kidnapping with filter-based SLAM is proposed. The framework is called double kidnapping detection and recognition (DKDR) and performs two checks before and after the "update" process with different metrics in real time. To explain one of the principles of DKDR, we describe a property of filter-based SLAM that corrects the mapping result of the environment using the current observations after the "update" process. Two classical filter-based SLAM algorithms, Extended Kalman Filter (EKF) SLAM and Particle Filter (PF) SLAM, are modified to show that DKDR can be simply and widely applied in existing filter-based SLAM algorithms. Furthermore, a technique to determine the adapted thresholds of the metrics in real time without previous data is presented. Both simulated and experimental results demonstrate the validity and accuracy of the proposed method.
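
    One standard ingredient for such a check is an innovation (Mahalanobis) test around the filter update: if the normalized innovation exceeds a chi-square threshold, the current observations are inconsistent with the predicted state, which can flag a possible kidnapping. The sketch below shows only this generic gating test with invented numbers; it does not reproduce the DKDR metrics or the adaptive thresholds of the paper.

        # Generic innovation (Mahalanobis) consistency check usable before/after a filter
        # update as a kidnapping indicator. Standard gating test, not the DKDR metrics.
        import numpy as np
        from scipy.stats import chi2

        def innovation_check(z, z_pred, S, prob=0.999):
            """Return (flag, d2): flag is True when the observation is inconsistent
            with the prediction at the given chi-square confidence level."""
            nu = z - z_pred                          # innovation
            d2 = float(nu @ np.linalg.solve(S, nu))  # squared Mahalanobis distance
            threshold = chi2.ppf(prob, df=z.size)
            return d2 > threshold, d2

        # Illustrative numbers: a 2-D range-bearing style observation
        z_pred = np.array([10.0, 0.30])              # predicted observation
        S = np.diag([0.25, 0.01])                    # innovation covariance
        print(innovation_check(np.array([10.4, 0.27]), z_pred, S))   # consistent
        print(innovation_check(np.array([14.0, -0.9]), z_pred, S))   # likely kidnapped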

  13. A Polynomial Estimate of Railway Line Delay

    DEFF Research Database (Denmark)

    Cerreto, Fabrizio; Harrod, Steven; Nielsen, Otto Anker

    2017-01-01

    Railway service may be measured by the aggregate delay over a time horizon or due to an event. Timetables for railway service may dampen aggregate delay by the addition of extra process time, either supplement time or buffer time. The evaluation of these variables has previously been performed by numerical analysis with simulation. This paper proposes an analytical estimate of aggregate delay with a polynomial form. The function returns the aggregate delay of a railway line resulting from an initial, primary, delay. Analysis of the function demonstrates that there should be a balance between the two

  14. Evaluation of steel corrosion by numerical analysis

    OpenAIRE

    Kawahigashi, Tatsuo

    2017-01-01

    Recently, various non-destructive and numerical methods have been used and many cases of steel corrosion have been examined. For example, methods of evaluating corrosion through various numerical methods and of evaluating macro-cell corrosion and micro-cell corrosion using measurements have been proposed. However, there are few reports on estimating corrosion loss while distinguishing between macro-cell and micro-cell corrosion and while resembling the actual phenomenon. In this study, for distinguishin

  15. Strategy for a numerical Rock Mechanics Site Descriptive Model. Further development of the theoretical/numerical approach

    International Nuclear Information System (INIS)

    Olofsson, Isabelle; Fredriksson, Anders

    2005-05-01

    The Swedish Nuclear Fuel and Waste Management Company (SKB) is conducting Preliminary Site Investigations at two different locations in Sweden in order to study the possibility of a Deep Repository for spent fuel. Within the framework of these Site Investigations, Site Descriptive Models are produced. These products are the result of an interaction of several disciplines such as geology, hydrogeology, and meteorology. The Rock Mechanics Site Descriptive Model constitutes one of these models. Before the start of the Site Investigations a numerical method using Discrete Fracture Network (DFN) models and the 2D numerical software UDEC was developed. Numerical simulations were the tool chosen for applying the theoretical approach for characterising the mechanical rock mass properties. Some shortcomings were identified when developing the methodology. Their impacts on the modelling (in terms of time and quality assurance of results) were estimated to be so important that the improvement of the methodology with another numerical tool was investigated. The theoretical approach is still based on DFN models but the numerical software used is 3DEC. The main assets of the programme compared to UDEC are an optimised algorithm for the generation of fractures in the model and for the assignment of mechanical fracture properties. Due to some numerical constraints the test conditions were set up in order to simulate 2D plane strain tests. Numerical simulations were conducted on the same data set as used previously for the UDEC modelling in order to estimate and validate the results from the new methodology. A real 3D simulation was also conducted in order to assess the effect of the '2D' conditions in the 3DEC model. Based on the quality of the results it was decided to update the theoretical model and introduce the new methodology based on DFN models and 3DEC simulations for the establishment of the Rock Mechanics Site Descriptive Model. By separating the spatial variability into two parts, one

  16. Numerical Analysis of Partial Differential Equations

    CERN Document Server

    Lions, Jacques-Louis

    2011-01-01

    S. Albertoni: Alcuni metodi di calcolo nella teoria della diffusione dei neutroni.- I. Babuska: Optimization and numerical stability in computations.- J.H. Bramble: Error estimates in elliptic boundary value problems.- G. Capriz: The numerical approach to hydrodynamic problems.- A. Dou: Energy inequalities in an elastic cylinder.- T. Doupont: On the existence of an iterative method for the solution of elliptic difference equation with an improved work estimate.- J. Douglas, J.R. Cannon: The approximation of harmonic and parabolic functions of half-spaces from interior data.- B.E. Hubbard: Erro

  17. Numerical distance protection

    CERN Document Server

    Ziegler, Gerhard

    2011-01-01

    Distance protection provides the basis for network protection in transmission systems and meshed distribution systems. This book covers the fundamentals of distance protection and the special features of numerical technology. The emphasis is placed on the application of numerical distance relays in distribution and transmission systems.This book is aimed at students and engineers who wish to familiarise themselves with the subject of power system protection, as well as the experienced user, entering the area of numerical distance protection. Furthermore it serves as a reference guide for s

  18. Numerical problems in physics

    CERN Document Server

    Singh, Devraj

    2015-01-01

    Numerical Problems in Physics, Volume 1 is intended to serve the needs of students pursuing graduate and postgraduate courses in universities with Physics and Materials Science as subjects, including those appearing in engineering, medical, and civil services entrance examinations. KEY FEATURES: * 29 chapters on Optics, Waves & Oscillations, Electromagnetic Field Theory, Solid State Physics & Modern Physics * 540 solved numerical problems from various universities and competitive examinations * 523 multiple choice questions for quick and clear understanding of the subject matter * 567 unsolved numerical problems for grasping concepts of the various topics in Physics * 49 figures for understanding problems and concepts

  19. Stochastic goal-oriented error estimation with memory

    Science.gov (United States)

    Ackmann, Jan; Marotzke, Jochem; Korn, Peter

    2017-11-01

    We propose a stochastic dual-weighted error estimator for the viscous shallow-water equation with boundaries. For this purpose, previous work on memory-less stochastic dual-weighted error estimation is extended by incorporating memory effects. The memory is introduced by describing the local truncation error as a sum of time-correlated random variables. The random variables themselves represent the temporal fluctuations in local truncation errors and are estimated from high-resolution information at near-initial times. The resulting error estimator is evaluated experimentally in two classical ocean-type experiments, the Munk gyre and the flow around an island. In these experiments, the stochastic process is adapted locally to the respective dynamical flow regime. Our stochastic dual-weighted error estimator is shown to provide meaningful error bounds for a range of physically relevant goals. We prove, as well as show numerically, that our approach can be interpreted as a linearized stochastic-physics ensemble.
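
    As a purely illustrative sketch of the memory idea described above (not the authors' shallow-water implementation), the snippet below draws local truncation errors as time-correlated AR(1) random variables, weights them with hypothetical dual (goal) sensitivities, and accumulates them into a distribution for the goal error; all parameter values are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

n_steps = 200          # time steps of the coarse model
n_samples = 5000       # Monte Carlo samples of the error estimate

# Hypothetical dual weights: sensitivity of the goal functional to the local
# truncation error at each time step (an arbitrary decaying profile).
dual_weights = np.exp(-np.linspace(0.0, 3.0, n_steps))

# AR(1) model of the local truncation error: e_k = phi * e_{k-1} + sigma * xi_k.
# phi and sigma would in practice be estimated from high-resolution
# near-initial-time information; here they are simply assumed.
phi, sigma = 0.8, 1e-3

goal_errors = np.empty(n_samples)
for s in range(n_samples):
    e = np.empty(n_steps)
    e[0] = sigma * rng.standard_normal()
    for k in range(1, n_steps):
        e[k] = phi * e[k - 1] + sigma * rng.standard_normal()
    # Dual-weighted sum of local errors approximates the error in the goal.
    goal_errors[s] = np.dot(dual_weights, e)

print(f"estimated goal-error std: {goal_errors.std():.2e}")
print(f"95% error bound (abs):    {np.quantile(np.abs(goal_errors), 0.95):.2e}")
```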

  20. Development of design technology on thermal-hydraulic performance in tight-lattice rod bundle. III - Numerical estimation on rod bowing effect based on X-ray CT data

    International Nuclear Information System (INIS)

    Misawa, Takeharu; Ohnuki, Akira; Katsuyama, Kozo; Nagamine, Tsuyoshi; Nakamura, Yasuo; Akimoto, Hajime; Mitsutake, Toru; Misawa, Susumu

    2007-01-01

    Design studies of the Innovative Water Reactor for Flexible Fuel Cycle (FLWR) are being carried out at the Japan Atomic Energy Agency (JAEA) as one candidate for future reactors. In actual core design, it is a precondition that contact between fuel rods due to fuel rod bowing be prevented. However, the FLWR cores have nonconventional characteristics such as a hexagonal tight-lattice arrangement and high-enrichment fuel loading. Therefore, as a conservative evaluation, it is important to investigate the influence of fuel rod bowing upon the boiling transition. At the JAEA, 37-rod bundle experiments (base case test section (1.3 mm gap width), gap width effect test section (1.0 mm gap width), and rod bowing test section) were performed in order to investigate the thermal hydraulic characteristics of the tight lattice bundle. In this paper, attention is paid to the rod bowing effect test. It is suspected that the actual fuel rod positions in the rod bowing test section may differ from the design-based positions. Even a slight displacement of a fuel rod from its design-based position may change the flow area and influence the thermal hydraulic characteristics of the rod bundle. Therefore, if the critical power in the rod bundle is evaluated by an analytical approach, the analysis can be based on more correct input by using actual fuel rod position data. In this study, the rod positions in the rod bowing test section were measured using high-energy X-ray computed tomography (X-ray CT). Based on the measured rod position data, subchannel analysis with the NASCA code was performed in order to investigate the applicability of the NASCA code to BT estimation for the rod bowing test section, and the influence of displacement from the design-based rod positions upon BT estimation by the NASCA code. The predicted critical powers are in agreement with those obtained in the experiment. The analysis based on the design-based rod positions is also performed, and the result is

  1. Parameter Estimation of Partial Differential Equation Models

    KAUST Repository

    Xun, Xiaolei; Cao, Jiguo; Mallick, Bani; Maity, Arnab; Carroll, Raymond J.

    2013-01-01

    PDEs used in practice have no analytic solutions, and can only be solved with numerical methods. Currently, methods for estimating PDE parameters require repeatedly solving PDEs numerically under thousands of candidate parameter values, and thus
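
    The cost pattern described above can be illustrated with a minimal sketch, assuming a toy 1-D heat equation and synthetic data rather than any model from the paper: every evaluation of the least-squares objective requires a full numerical solve of the PDE.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def solve_heat(D, nx=51, nt=800, length=1.0, t_final=0.1):
    """Explicit finite-difference solution of u_t = D u_xx with u = 0 at both ends."""
    dx, dt = length / (nx - 1), t_final / nt
    x = np.linspace(0.0, length, nx)
    u = np.sin(np.pi * x)                 # initial condition
    r = D * dt / dx**2                    # stable for the D range searched below
    for _ in range(nt):
        u[1:-1] += r * (u[2:] - 2.0 * u[1:-1] + u[:-2])
    return u

# Synthetic "observations" generated from a true (pretend unknown) diffusivity.
rng = np.random.default_rng(1)
D_true = 0.5
data = solve_heat(D_true) + 0.01 * rng.standard_normal(51)

# Each evaluation of the misfit requires a full numerical PDE solve, which is
# exactly the computational burden the abstract refers to.
def sse(D):
    return np.sum((solve_heat(D) - data) ** 2)

res = minimize_scalar(sse, bounds=(0.1, 1.5), method="bounded")
print(f"estimated diffusivity D = {res.x:.3f} (true value {D_true})")
```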

  2. Uncertainty Quantification in Numerical Aerodynamics

    KAUST Repository

    Litvinenko, Alexander

    2017-05-16

    We consider an uncertainty quantification problem in aerodynamic simulations. We identify input uncertainties, classify them, suggest an appropriate statistical model and, finally, estimate the propagation of these uncertainties into the solution (pressure, velocity and density fields as well as the lift and drag coefficients). The deterministic problem under consideration is a compressible transonic Reynolds-averaged Navier-Stokes flow around an airfoil with random/uncertain data. Input uncertainties include: uncertain angle of attack, the Mach number, random perturbations in the airfoil geometry, mesh, shock location, turbulence model and parameters of this turbulence model. This problem requires efficient numerical/statistical methods since it is computationally expensive, especially for the uncertainties caused by random geometry variations, which involve a large number of variables. In the numerical section we compare five methods, including quasi-Monte Carlo quadrature, polynomial chaos with coefficients determined by sparse quadrature, a gradient-enhanced version of Kriging, radial basis functions and point collocation polynomial chaos, in their efficiency in estimating statistics of aerodynamic performance upon random perturbation to the airfoil geometry [D. Liu et al. '17]. For modeling we used the TAU code, developed at DLR, Germany.
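
    A heavily simplified sketch of the quasi-Monte Carlo ingredient, assuming an arbitrary analytic stand-in for the expensive CFD evaluation (it is not the TAU code) and hypothetical uncertainty ranges:

```python
import numpy as np
from scipy.stats import qmc

# Stand-in for an expensive CFD evaluation: maps uncertain inputs
# (angle of attack [deg], Mach number, geometry-perturbation amplitude)
# to a "lift coefficient". Purely hypothetical surrogate.
def lift_surrogate(alpha, mach, geom):
    return 0.11 * alpha * (1.0 + 0.5 * (mach - 0.73)) - 2.0 * geom**2

# Nominal uncertainty ranges (hypothetical).
lows  = np.array([1.0, 0.70, -0.01])
highs = np.array([3.0, 0.76,  0.01])

sampler = qmc.Sobol(d=3, scramble=True, seed=0)
u = sampler.random_base2(m=10)          # 2**10 quasi-random points in [0,1)^3
x = qmc.scale(u, lows, highs)           # map to the physical ranges

cl = lift_surrogate(x[:, 0], x[:, 1], x[:, 2])
print(f"mean C_L = {cl.mean():.4f}, std C_L = {cl.std():.4f}")
```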

  3. Development of estimation method for tephra transport and dispersal characteristics with numerical simulation technique. Part 2. A method of selecting meteorological conditions and the effects on ash deposition and concentration in air for Kanto-area

    International Nuclear Information System (INIS)

    Hattori, Yasuo; Suto, Hitoshi; Toshida, Kiyoshi; Hirakuchi, Hiromaru

    2016-01-01

    In the present study, we examine the estimation of ground deposition for a real test case, a volcanic ash hazard in the Kanto area, under various meteorological conditions by using an ash transport and deposition model, fall3d; we consider three eruptions, which correspond to stages 1 and 3 of the Hoei eruption at Mt. Fuji and the Tenmei eruption at Mt. Asama. The meteorological conditions are generated with the 53-year reanalysis meteorological dataset, CRIEPI-RCM-Era2, which has temporal and spatial resolutions of 1 hr and 5 km. The typical and extreme conditions were sampled by using a Gumbel plot and an artificial neural network technique. The ash deposition is invariably limited to the area west of the vent, even with the typical wind conditions in summer, while the isopachs of ground deposition show various distributions, which strongly depend on the meteorological conditions. This implies that a concentric circular distribution cannot be realistic. Also, a long-term eruption, such as the Hoei eruption during stage 3, yields a large deposition area due to the daily variation of wind direction, suggesting that attention to the difference between daily variation and fluctuations of wind direction is vital when evaluating volcanic ash risk. (author)
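
    A minimal sketch of the extreme-condition sampling idea, assuming synthetic annual-maximum wind speeds in place of the CRIEPI-RCM-Era2 reanalysis and omitting the neural-network step:

```python
import numpy as np
from scipy.stats import gumbel_r

rng = np.random.default_rng(2)

# Synthetic stand-in for 53 years of annual-maximum wind speed [m/s].
annual_max = 12.0 + 4.0 * rng.gumbel(size=53)

loc, scale = gumbel_r.fit(annual_max)

# Wind speed expected to be exceeded once per return period T: an "extreme"
# condition that could then be fed to the ash transport model.
for T in (10, 50, 100):
    level = gumbel_r.ppf(1.0 - 1.0 / T, loc, scale)
    print(f"{T:>3}-year return level: {level:.1f} m/s")
```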

  4. Remarks on numerical semigroups

    International Nuclear Information System (INIS)

    Torres, F.

    1995-12-01

    We extend results on Weierstrass semigroups at ramified points of double coverings of curves to any numerical semigroup whose genus is large enough. As an application we strengthen the properties concerning Weierstrass weights stated in [To]. (author). 25 refs

  5. Numerical semigroups and applications

    CERN Document Server

    Assi, Abdallah

    2016-01-01

    This work presents applications of numerical semigroups in Algebraic Geometry, Number Theory, and Coding Theory. Background on numerical semigroups is presented in the first two chapters, which introduce basic notation and fundamental concepts and irreducible numerical semigroups. The focus is in particular on free semigroups, which are irreducible; semigroups associated with planar curves are of this kind. The authors also introduce semigroups associated with irreducible meromorphic series, and show how these are used in order to present the properties of planar curves. Invariants of non-unique factorizations for numerical semigroups are also studied. These invariants are computationally accessible in this setting, and thus this monograph can be used as an introduction to Factorization Theory. Since factorizations and divisibility are strongly connected, the authors show some applications to AG Codes in the final section. The book will be of value for undergraduate students (especially those at a higher leve...

  6. Advances in Numerical Methods

    CERN Document Server

    Mastorakis, Nikos E

    2009-01-01

    Features contributions that are focused on significant aspects of current numerical methods and computational mathematics. This book carries chapters on advanced methods and various variations of known techniques that can solve difficult scientific problems efficiently.

  7. Introductory numerical analysis

    CERN Document Server

    Pettofrezzo, Anthony J

    2006-01-01

    Written for undergraduates who require a familiarity with the principles behind numerical analysis, this classical treatment encompasses finite differences, least squares theory, and harmonic analysis. Over 70 examples and 280 exercises. 1967 edition.

  8. Introduction to numerical analysis

    CERN Document Server

    Hildebrand, F B

    1987-01-01

    Well-known, respected introduction, updated to integrate concepts and procedures associated with computers. Computation, approximation, interpolation, numerical differentiation and integration, smoothing of data, other topics in lucid presentation. Includes 150 additional problems in this edition. Bibliography.

  9. Numerical analysis of bifurcations

    International Nuclear Information System (INIS)

    Guckenheimer, J.

    1996-01-01

    This paper is a brief survey of numerical methods for computing bifurcations of generic families of dynamical systems. Emphasis is placed upon algorithms that reflect the structure of the underlying mathematical theory while retaining numerical efficiency. Significant improvements in the computational analysis of dynamical systems are to be expected from more reliance on geometric insight coming from dynamical systems theory. copyright 1996 American Institute of Physics

  10. Numerical computations with GPUs

    CERN Document Server

    Kindratenko, Volodymyr

    2014-01-01

    This book brings together research on numerical methods adapted for Graphics Processing Units (GPUs). It explains recent efforts to adapt classic numerical methods, including solution of linear equations and FFT, for massively parallel GPU architectures. This volume consolidates recent research and adaptations, covering widely used methods that are at the core of many scientific and engineering computations. Each chapter is written by authors working on a specific group of methods; these leading experts provide mathematical background, parallel algorithms and implementation details leading to

  11. Accounting for Antenna in Half-Space Fresnel Coefficient Estimation

    Directory of Open Access Journals (Sweden)

    A. D'Alterio

    2012-01-01

    Full Text Available The problem of retrieving the Fresnel reflection coefficients of a half-space medium starting from measurements collected under a reflection-mode multistatic configuration is dealt with. According to our previous results, reflection coefficient estimation is cast as the inversion of a linear operator. Here, however, we take a step towards more realistic scenarios, as the role of the antennas (both transmitting and receiving) is embodied in the estimation procedure. Numerical results are presented to show the effectiveness of the method for different types of half-space media.

  12. Fast fundamental frequency estimation

    DEFF Research Database (Denmark)

    Nielsen, Jesper Kjær; Jensen, Tobias Lindstrøm; Jensen, Jesper Rindom

    2017-01-01

    Modelling signals as being periodic is common in many applications. Such periodic signals can be represented by a weighted sum of sinusoids with frequencies being an integer multiple of the fundamental frequency. Due to its widespread use, numerous methods have been proposed to estimate the funda...
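
    As a hedged illustration of the harmonic signal model mentioned above, the sketch below estimates the fundamental frequency by harmonic summation over a grid of candidates; it is a classical baseline, not necessarily the fast method of the paper.

```python
import numpy as np

fs = 8000.0                      # sample rate [Hz]
n = 1024
t = np.arange(n) / fs

# Synthetic periodic signal: fundamental at 220 Hz with three harmonics plus noise.
rng = np.random.default_rng(3)
f0_true = 220.0
x = sum(a * np.sin(2 * np.pi * k * f0_true * t)
        for k, a in enumerate([1.0, 0.6, 0.3], start=1))
x += 0.1 * rng.standard_normal(n)

def harmonic_power(x, f0, n_harm, fs):
    """Sum of squared projections of x onto the first n_harm harmonics of f0."""
    t = np.arange(len(x)) / fs
    power = 0.0
    for k in range(1, n_harm + 1):
        c = np.exp(-2j * np.pi * k * f0 * t)
        power += np.abs(np.dot(c, x)) ** 2
    return power

candidates = np.arange(80.0, 400.0, 0.5)
scores = [harmonic_power(x, f0, 3, fs) for f0 in candidates]
f0_hat = candidates[int(np.argmax(scores))]
print(f"estimated fundamental frequency: {f0_hat:.1f} Hz (true {f0_true} Hz)")
```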

  13. Numerical methods for hyperbolic differential functional problems

    Directory of Open Access Journals (Sweden)

    Roman Ciarski

    2008-01-01

    Full Text Available The paper deals with the initial boundary value problem for quasilinear first order partial differential functional systems. A general class of difference methods for the problem is constructed. Theorems on the error estimate of approximate solutions for difference functional systems are presented. The convergence results are proved by means of consistency and stability arguments. A numerical example is given.

  14. Modelling and development of estimation and control algorithms: application to a bio process; Modelisation et elaboration d`algorithmes d`estimation et de commande: application a un bioprocede

    Energy Technology Data Exchange (ETDEWEB)

    Maher, M

    1995-02-03

    Modelling, estimation and control of an alcoholic fermentation process are the purpose of this thesis. A simple mathematical model of a fermentation process is established by using experimental results obtained on the plant. This nonlinear model is used for numerical simulation, analysis and synthesis of estimation and control algorithms. The problem of nonlinear state and parameter estimation of bio-processes is studied. Two estimation techniques are developed and proposed to bypass the lack of sensors for certain physical variables. Their performances are studied by numerical simulation. One of these estimators is validated on experimental results of batch and continuous fermentations. An adaptive control law is proposed for the regulation and tracking of the substrate concentration of the plant by acting on the dilution rate. It is a nonlinear control strategy coupled with the previously validated estimator. The performance of this control law is evaluated by a real application to a continuous-flow fermentation process. (author) refs.

  15. Approximate method in estimation sensitivity responses to variations in delayed neutron energy spectra

    Energy Technology Data Exchange (ETDEWEB)

    Yoo, J.; Shin, H. S.; Song, T. Y.; Park, W. S. [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    1997-12-31

    Our previous numerical results on computing the point kinetics equations show the possibility of developing approximations to estimate sensitivity responses of a nuclear reactor. We recalculate sensitivity responses by retaining corrections to first order in the sensitivity parameter. We present a method for computing sensitivity responses of a nuclear reactor based on an approximation derived from the point kinetics equations. Exploiting this approximation, we found that the first-order approximation works to estimate variations in the time to reach peak power because of their linear dependence on a sensitivity parameter, and that there are errors in estimating the peak power in the first-order approximation for larger sensitivity parameters. To confirm the legitimacy of our approximation, these approximate results are compared with exact results obtained from our previous numerical study. 4 refs., 2 figs., 3 tabs. (Author)

  16. Approximate method in estimation sensitivity responses to variations in delayed neutron energy spectra

    Energy Technology Data Exchange (ETDEWEB)

    Yoo, J; Shin, H S; Song, T Y; Park, W S [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    1998-12-31

    Our previous numerical results on computing the point kinetics equations show the possibility of developing approximations to estimate sensitivity responses of a nuclear reactor. We recalculate sensitivity responses by retaining corrections to first order in the sensitivity parameter. We present a method for computing sensitivity responses of a nuclear reactor based on an approximation derived from the point kinetics equations. Exploiting this approximation, we found that the first-order approximation works to estimate variations in the time to reach peak power because of their linear dependence on a sensitivity parameter, and that there are errors in estimating the peak power in the first-order approximation for larger sensitivity parameters. To confirm the legitimacy of our approximation, these approximate results are compared with exact results obtained from our previous numerical study. 4 refs., 2 figs., 3 tabs. (Author)
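
    A toy illustration of the first-order idea described above, using a hypothetical one-delayed-group point-kinetics model with linear temperature feedback (all parameter values invented, not taken from these reports): the time to peak power is extrapolated linearly in the inserted reactivity and compared with direct recomputation, showing the growing error for larger parameter changes.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical one-delayed-group point-kinetics model with a linear temperature
# feedback, chosen only so that the power transient self-limits and shows a peak.
beta, lam, gen_time = 0.0065, 0.08, 1.0e-4   # delayed fraction, decay const [1/s], generation time [s]
alpha, heat = 1.0e-5, 0.2                    # feedback coefficient, heating rate (arbitrary units)

def time_to_peak(rho0, t_end=200.0):
    """Time at which the power peaks after a step reactivity insertion rho0."""
    def rhs(t, y):
        n, c, temp = y
        rho = rho0 - alpha * temp
        return [(rho - beta) / gen_time * n + lam * c,
                beta / gen_time * n - lam * c,
                heat * n]
    sol = solve_ivp(rhs, (0.0, t_end), [1.0, beta / (lam * gen_time), 0.0],
                    t_eval=np.arange(0.0, t_end, 0.02),
                    method="LSODA", rtol=1e-8, atol=1e-8)
    return sol.t[int(np.argmax(sol.y[0]))]

# First-order (linear) prediction of the time to peak power versus the
# inserted reactivity, compared with direct recomputation.
rho_ref, d_rho = 0.003, 1.0e-4
t_ref = time_to_peak(rho_ref)
slope = (time_to_peak(rho_ref + d_rho) - t_ref) / d_rho

for step in (2.0e-4, 5.0e-4, 1.0e-3):
    exact = time_to_peak(rho_ref + step)
    linear = t_ref + slope * step
    print(f"delta_rho={step:.0e}: exact t_peak={exact:6.2f} s, first-order={linear:6.2f} s")
```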

  17. Numerical Transducer Modeling

    DEFF Research Database (Denmark)

    Henriquez, Vicente Cutanda

    This thesis describes the development of a numerical model of the propagation of sound waves in fluids with viscous and thermal losses, with application to the simulation of acoustic transducers, in particular condenser microphones for measurement. The theoretical basis is presented, numerical...... manipulations are developed to satisfy the more complicated boundary conditions, and a model of a condenser microphone with a coupled membrane is developed. The model is tested against measurements of ¼ inch condenser microphones and analytical calculations. A detailed discussion of the results is given....

  18. On numerical Bessel transformation

    International Nuclear Information System (INIS)

    Sommer, B.; Zabolitzky, J.G.

    1979-01-01

    The authors present a computer program to calculate the three dimensional Fourier or Bessel transforms and definite integrals with Bessel functions. Numerical integration of systems containing Bessel functions occurs in many physical problems, e.g. electromagnetic form factor of nuclei, all transitions involving multipole expansions at high momenta. Filon's integration rule is extended to spherical Bessel functions. The numerical error is of the order of the Simpson error term of the function which has to be transformed. Thus one gets a stable integral even at large arguments of the transformed function. (Auth.)
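
    The underlying task can be sketched as follows, assuming a Gaussian radial function with a known analytic 3-D Fourier transform; plain Simpson quadrature is used here instead of the Filon-type rule of the program described above, which is precisely why the naive approach degrades for strongly oscillatory (large-argument) cases.

```python
import numpy as np
from scipy.integrate import simpson
from scipy.special import spherical_jn

# Radial function with a known 3-D Fourier transform:
# f(r) = exp(-r^2)  ->  F(q) = pi**1.5 * exp(-q^2 / 4).
f = lambda r: np.exp(-r**2)
F_exact = lambda q: np.pi**1.5 * np.exp(-q**2 / 4.0)

# Plain Simpson quadrature on a fixed radial grid. For large q the integrand
# oscillates and this naive rule loses accuracy -- the motivation for
# Filon-type rules adapted to spherical Bessel functions.
r = np.linspace(0.0, 12.0, 4001)

def bessel_transform(q):
    integrand = f(r) * spherical_jn(0, q * r) * r**2
    return 4.0 * np.pi * simpson(integrand, x=r)

for q in (0.5, 2.0, 8.0, 20.0):
    approx, exact = bessel_transform(q), F_exact(q)
    print(f"q={q:5.1f}: numeric={approx: .3e}  exact={exact: .3e}")
```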

  19. Industrial numerical analysis

    International Nuclear Information System (INIS)

    McKee, S.; Elliott, C.M.

    1986-01-01

    The application of mathematics to industrial problems involves the formulation of problems which are amenable to mathematical investigation, mathematical modelling, the solution of the mathematical problem and the interpretation of the results. There are 12 chapters describing industrial problems where mathematics and numerical analysis can be applied. These range from the numerical assessment of the flatness of engineering surfaces and plates, the design of chain links, control problems in tidal power generation and low thrust satellite trajectory optimization to mathematical models in welding. One chapter, on the ageing of stainless steels, is indexed separately. (UK)

  20. Numerical analysis targets

    International Nuclear Information System (INIS)

    Sollogoub, Pierre

    2001-01-01

    Numerical analyses are needed in different steps of the overall design process. Complex models or non-linear reactor core behaviour are important for qualification and/or comparison of results obtained. Adequate models and test should be defined. Fuel assembly, fuel row, and the complete core should be tested for seismic effects causing LOCA and flow-induced vibrations (FIV)

  1. Development of numerical concepts

    Directory of Open Access Journals (Sweden)

    Sabine Peucker

    2013-06-01

    Full Text Available The development of numerical concepts is described from infancy to preschool age. Infants a few days old exhibit an early sensitivity to numerosities. In the course of development, nonverbal mental models allow for the exact representation of small quantities as well as changes in these quantities. Subitising, the accurate recognition of small numerosities (without counting), plays an important role. It can be assumed that numerical concepts and procedures start with insights about small numerosities. Protoquantitative schemata comprise fundamental knowledge about quantities. One-to-one correspondence connects elements and numbers, and, for this reason, both quantitative and numerical knowledge. If children understand that they can determine the numerosity of a collection of elements by enumerating the elements, they have acquired the concept of cardinality. Protoquantitative knowledge becomes quantitative if it can be applied to numerosities and sequential numbers. The concepts of cardinality and part-part-whole are key to numerical development. Developmentally appropriate learning and teaching should focus on cardinality and part-part-whole concepts.

  2. Analysis of numerical methods

    CERN Document Server

    Isaacson, Eugene

    1994-01-01

    This excellent text for advanced undergraduates and graduate students covers norms, numerical solution of linear systems and matrix factoring, iterative solutions of nonlinear equations, eigenvalues and eigenvectors, polynomial approximation, and other topics. It offers a careful analysis and stresses techniques for developing new methods, plus many examples and problems. 1966 edition.

  3. Paradoxes in numerical calculations

    Czech Academy of Sciences Publication Activity Database

    Brandts, J.; Křížek, Michal; Zhang, Z.

    2016-01-01

    Roč. 26, č. 3 (2016), s. 317-330 ISSN 1210-0552 R&D Projects: GA ČR GA14-02067S Institutional support: RVO:67985840 Keywords : round-off errors * numerical instability * recurrence formulae Subject RIV: BA - General Mathematics Impact factor: 0.394, year: 2016

  4. Impact of previously disadvantaged land-users on sustainable ...

    African Journals Online (AJOL)

    Impact of previously disadvantaged land-users on sustainable agricultural ... about previously disadvantaged land users involved in communal farming systems ... of input, capital, marketing, information and land use planning, with effect on ...

  5. Repeat immigration: A previously unobserved source of heterogeneity?

    Science.gov (United States)

    Aradhya, Siddartha; Scott, Kirk; Smith, Christopher D

    2017-07-01

    Register data allow for nuanced analyses of heterogeneities between sub-groups which are not observable in other data sources. One heterogeneity for which register data is particularly useful is in identifying unique migration histories of immigrant populations, a group of interest across disciplines. Years since migration is a commonly used measure of integration in studies seeking to understand the outcomes of immigrants. This study constructs detailed migration histories to test whether misclassified migrations may mask important heterogeneities. In doing so, we identify a previously understudied group of migrants called repeat immigrants, and show that they differ systematically from permanent immigrants. In addition, we quantify the degree to which migration information is misreported in the registers. The analysis is carried out in two steps. First, we estimate income trajectories for repeat immigrants and permanent immigrants to understand the degree to which they differ. Second, we test data validity by cross-referencing migration information with changes in income to determine whether there are inconsistencies indicating misreporting. From the first part of the analysis, the results indicate that repeat immigrants systematically differ from permanent immigrants in terms of income trajectories. Furthermore, income trajectories differ based on the way in which years since migration is calculated. The second part of the analysis suggests that misreported migration events, while present, are negligible. Repeat immigrants differ in terms of income trajectories, and may differ in terms of other outcomes as well. Furthermore, this study underlines that Swedish registers provide a reliable data source to analyze groups which are unidentifiable in other data sources.

  6. Estimation of Radar Cross Section of a Target under Track

    Directory of Open Access Journals (Sweden)

    Hong Sun-Mog

    2010-01-01

    Full Text Available In allocating a radar beam for tracking a target, it is attempted to maintain the signal-to-noise ratio (SNR) of the signal returning from the illuminated target close to an optimum value for efficient track updates. An estimate of the average radar cross section (RCS) of the target is required in order to adjust the transmitted power based on the estimate such that the desired SNR can be realized. In this paper, a maximum-likelihood (ML) approach is presented for estimating the average RCS, and a numerical solution to the approach is proposed based on a generalized expectation maximization (GEM) algorithm. The estimation accuracy of the approach is compared to that of a previously reported procedure.
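
    A hedged, noise-free simplification of the estimation problem (Swerling-I-like exponentially fluctuating returns with known per-dwell gains; the paper's GEM treatment additionally handles receiver noise), showing that numerical maximization of the likelihood recovers the closed-form estimate:

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(4)

# Simplified measurement model (an assumption, not the paper's full model):
# each return power y_k is exponentially distributed (Swerling-I-like
# fluctuation) with mean sigma_avg * g_k, where g_k is a known per-dwell gain
# collecting transmit power, range loss, etc. Receiver noise is ignored.
sigma_avg_true = 3.0
g = rng.uniform(0.5, 2.0, size=200)
y = rng.exponential(sigma_avg_true * g)

def neg_log_lik(sigma_avg):
    mean = sigma_avg * g
    return np.sum(np.log(mean) + y / mean)

res = minimize_scalar(neg_log_lik, bounds=(0.1, 20.0), method="bounded")
closed_form = np.mean(y / g)        # analytic ML solution for this simple model
print(f"numerical ML estimate: {res.x:.3f}")
print(f"closed-form estimate:  {closed_form:.3f} (true value {sigma_avg_true})")
```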

  7. Numerical evidence for 'multiscalar stars'

    International Nuclear Information System (INIS)

    Hawley, Scott H.; Choptuik, Matthew W.

    2003-01-01

    We present a class of general relativistic solitonlike solutions composed of multiple minimally coupled, massive, real scalar fields which interact only through the gravitational field. We describe a two-parameter family of solutions we call ''phase-shifted boson stars'' (parametrized by central density ρ0 and phase δ), which are obtained by solving the ordinary differential equations associated with boson stars and then altering the phase between the real and imaginary parts of the field. These solutions are similar to boson stars as well as the oscillating soliton stars found by Seidel and Suen [E. Seidel and W. M. Suen, Phys. Rev. Lett. 66, 1659 (1991)]; in particular, long-time numerical evolutions suggest that phase-shifted boson stars are stable. Our results indicate that scalar solitonlike solutions are perhaps more generic than has been previously thought

  8. 22 CFR 40.91 - Certain aliens previously removed.

    Science.gov (United States)

    2010-04-01

    ... 22 Foreign Relations 1 2010-04-01 2010-04-01 false Certain aliens previously removed. 40.91... IMMIGRANTS UNDER THE IMMIGRATION AND NATIONALITY ACT, AS AMENDED Aliens Previously Removed § 40.91 Certain aliens previously removed. (a) 5-year bar. An alien who has been found inadmissible, whether as a result...

  9. Estimation of measurement variances

    International Nuclear Information System (INIS)

    Anon.

    1981-01-01

    In the previous two sessions, it was assumed that the measurement error variances were known quantities when the variances of the safeguards indices were calculated. These known quantities are actually estimates based on historical data and on data generated by the measurement program. Session 34 discusses how measurement error parameters are estimated for different situations. The various error types are considered. The purpose of the session is to enable participants to: (1) estimate systematic error variances from standards data; (2) estimate random error variances from replicate measurement data; (3) perform a simple analysis of variance to characterize the measurement error structure when biases vary over time
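
    A minimal sketch of item (2), assuming synthetic replicate data: the random-error variance is estimated by pooling the within-item sample variances, since differences among replicates of the same item reflect only measurement error.

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic replicate data: n_items items, each measured n_rep times.
# True item values vary; the random measurement error has std 0.8 (unknown).
n_items, n_rep = 20, 4
true_values = rng.uniform(95.0, 105.0, n_items)
data = true_values[:, None] + 0.8 * rng.standard_normal((n_items, n_rep))

within_var = data.var(axis=1, ddof=1)      # per-item sample variance
random_error_var = within_var.mean()       # pooled estimate of the error variance

print(f"estimated random-error variance: {random_error_var:.3f} (true 0.64)")
```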

  10. Determining root correspondence between previously and newly detected objects

    Science.gov (United States)

    Paglieroni, David W.; Beer, N Reginald

    2014-06-17

    A system that applies attribute and topology based change detection to networks of objects that were detected on previous scans of a structure, roadway, or area of interest. The attributes capture properties or characteristics of the previously detected objects, such as location, time of detection, size, elongation, orientation, etc. The topology of the network of previously detected objects is maintained in a constellation database that stores attributes of previously detected objects and implicitly captures the geometrical structure of the network. A change detection system detects change by comparing the attributes and topology of new objects detected on the latest scan to the constellation database of previously detected objects.

  11. Numerical Estimation of Information Theoretic Measures for Large Data Sets

    Science.gov (United States)

    2013-01-30


  12. The Program Module of Information Risk Numerical Estimation

    Directory of Open Access Journals (Sweden)

    E. S. Stepanova

    2011-03-01

    Full Text Available This paper presents an information-risk analysis algorithm, implemented in a program module, based on threat matrices and fuzzy cognitive maps that describe potential threats to resources.

  13. Numerical Estimation in Adults with and without Developmental Dyscalculia

    Science.gov (United States)

    Mejias, Sandrine; Gregoire, Jacques; Noel, Marie-Pascale

    2012-01-01

    It has been hypothesized that developmental dyscalculia (DD) is either due to a defect of the approximate number system (ANS) or to an impaired access between that system and symbolic numbers. Several studies have tested these two hypotheses in children with DD but none has dealt with adults who had experienced DD as children. This study aimed to…

  14. Numerical trials of HISSE

    Science.gov (United States)

    Peters, C.; Kampe, F. (Principal Investigator)

    1980-01-01

    The mathematical description and implementation of the statistical estimation procedure known as the Houston integrated spatial/spectral estimator (HISSE) are discussed. HISSE is based on a normal mixture model and is designed to take advantage of the spectral and spatial information of LANDSAT data pixels, utilizing the initial classification and clustering information provided by the AMOEBA algorithm. HISSE calculates parametric estimates of class proportions which reduce the error inherent in estimates derived from the typical classify-and-count procedures common to nonparametric clustering algorithms. It also singles out spatial groupings of pixels which are most suitable for labeling classes. These calculations are designed to aid the analyst/interpreter in labeling patches with a crop class label. Finally, HISSE's initial performance on an actual LANDSAT agricultural ground truth data set is reported.
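
    A one-dimensional, hedged illustration of the parametric idea (EM for the mixing proportion of a normal mixture versus classify-and-count); HISSE itself additionally exploits spatial structure and multiband LANDSAT spectra.

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic 1-D "spectral" values from two classes with overlapping distributions.
true_pi = 0.3
n = 5000
labels = rng.random(n) < true_pi
x = np.where(labels, rng.normal(2.0, 1.0, n), rng.normal(0.0, 1.0, n))

# Classify-and-count: threshold at the midpoint, then count labels (biased).
counted_pi = np.mean(x > 1.0)

# EM for the mixing proportion, with the component densities assumed known.
def normal_pdf(x, mu, sd):
    return np.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * np.sqrt(2.0 * np.pi))

pi = 0.5
for _ in range(200):
    num = pi * normal_pdf(x, 2.0, 1.0)
    resp = num / (num + (1.0 - pi) * normal_pdf(x, 0.0, 1.0))
    pi = resp.mean()

print(f"true proportion {true_pi:.2f}, classify-and-count {counted_pi:.2f}, EM {pi:.2f}")
```

    With overlapping classes, the classify-and-count estimate is systematically biased, while the parametric (mixture) estimate is not, which is the error reduction the abstract refers to.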

  15. Numerical model CCC

    International Nuclear Information System (INIS)

    Bodvarsson, G.S.; Lippmann, M.J.

    1980-01-01

    The computer program CCC (conduction-convection-consolidation), developed at Lawrence Berkeley Laboratory, solves numerically the heat and mass flow equations for a fully saturated medium, and computes one-dimensional consolidation of the simulated systems. The model employs the Integrated Finite Difference Method (IFDM) in discretizing the saturated medium and formulating the governing equations. The sets of equations are solved either by an iterative solution technique (old version) or an efficient sparse solver (new version). The deformation of the medium is calculated using the one-dimensional consolidation theory of Terzaghi. In this paper, the numerical code is described, validation examples given and areas of application discussed. Several example problems involving flow through fractured media are also presented

  16. Numerical ecology with R

    CERN Document Server

    Borcard, Daniel; Legendre, Pierre

    2018-01-01

    This new edition of Numerical Ecology with R guides readers through an applied exploration of the major methods of multivariate data analysis, as seen through the eyes of three ecologists. It provides a bridge between a textbook of numerical ecology and the implementation of this discipline in the R language. The book begins by examining some exploratory approaches. It proceeds logically with the construction of the key building blocks of most methods, i.e. association measures and matrices, and then submits example data to three families of approaches: clustering, ordination and canonical ordination. The last two chapters make use of these methods to explore important and contemporary issues in ecology: the analysis of spatial structures and of community diversity. The aims of methods thus range from descriptive to explanatory and predictive and encompass a wide variety of approaches that should provide readers with an extensive toolbox that can address a wide palette of questions arising in contemporary mul...

  17. Numerical simulation in astrophysics

    International Nuclear Information System (INIS)

    Miyama, Shoken

    1985-01-01

    There have been many numerical simulations of hydrodynamical problems in astrophysics, e.g. the process of star formation, supernova explosions and the formation of neutron stars, and the general relativistic collapse of a star to form a black hole. The codes are made to be suitable for computing such problems. Astrophysical hydrodynamical problems share several characteristics: self-gravity or external gravity is acting, the objects span very large or very small scales, the objects evolve over short periods or long time scales, and magnetic and/or centrifugal forces may act. In this paper, we present one method of numerical simulation which may satisfy these requirements, the so-called smoothed particle method. We introduce the method briefly and then show one application of the method to an astrophysical problem (fragmentation and collapse of a rotating isothermal cloud). (Mori, K.)

  18. Hybrid undulator numerical optimization

    Energy Technology Data Exchange (ETDEWEB)

    Hairetdinov, A.H. [Kurchatov Institute, Moscow (Russian Federation); Zukov, A.A. [Solid State Physics Institute, Chernogolovka (Russian Federation)

    1995-12-31

    3D properties of the hybrid undulator scheme are studied numerically using the PANDIRA code. It is shown that there exist two well-defined sets of undulator parameters which provide either maximum on-axis field amplitude or minimal amplitude of the higher harmonics of the basic undulator field. Thus there is a trade-off between higher field amplitude and a purely sinusoidal field. The behavior of the undulator field amplitude and harmonic structure for a large set of (undulator gap)/(undulator wavelength) values is demonstrated.

  19. Comments on numerical simulations

    International Nuclear Information System (INIS)

    Sato, T.

    1984-01-01

    The author comments on two aspects of numerical simulation. One is the philosophical question of whether reconnection is spontaneous or driven. The other is a numerical, technical matter. Frankly, the author did not want to touch on the technical matter, because it should be common sense for those who work on numerical simulation. But since many people take numerical simulation results at face value, he would like to remind the reader of the reality hidden behind them. First, he points out that the meaning of ''driven'' in driven reconnection is different from that defined by Schindler or Akasofu. The author's definition is closer to Axford's definition. In the spontaneous case, for some unpredicted reason an excess energy of the system is suddenly released at a certain point. However, this does not answer how such an unstable state, far beyond a stable limit, is realized in the magnetotail. In the driven case, there is a definite energy buildup phase starting from a stable state; namely, energy in the black box increases from a stable level subject to an external source. When the state has reached a certain point, the energy is released suddenly. The difference between driven and spontaneous is whether the cause (plasma flow) that triggers reconnection is specified, or reconnection is triggered unpredictably. Another difference is that in driven reconnection the reconnection rate is dependent on the speed of the external plasma flow, but in spontaneous reconnection the rate is dependent on the internal condition such as the resistivity

  20. Numerical simulation of plasmas

    International Nuclear Information System (INIS)

    Dnestrovskii, Y.N.; Kostomarov, D.P.

    1986-01-01

    This book contains a modern consistent and systematic presentation of numerical computer simulation of plasmas in controlled thermonuclear fusion. The authors focus on the Soviet research in mathematical modelling of Tokamak plasmas, and present kinetic hydrodynamic and transport models with special emphasis on the more recent hybrid models. Compared with the first edition (in Russian) this book has been greatly revised and updated. (orig./WL)

  1. Numerical analysis II essentials

    CERN Document Server

    REA, The Editors of; Staff of Research Education Association

    1989-01-01

    REA's Essentials provide quick and easy access to critical information in a variety of different fields, ranging from the most basic to the most advanced. As its name implies, these concise, comprehensive study guides summarize the essentials of the field covered. Essentials are helpful when preparing for exams, doing homework and will remain a lasting reference source for students, teachers, and professionals. Numerical Analysis II covers simultaneous linear systems and matrix methods, differential equations, Fourier transformations, partial differential equations, and Monte Carlo methods.

  2. Handbook of numerical analysis

    CERN Document Server

    Ciarlet, Philippe G

    Mathematical finance is a prolific scientific domain in which there exists a particular characteristic of developing both advanced theories and practical techniques simultaneously. Mathematical Modelling and Numerical Methods in Finance addresses the three most important aspects in the field: mathematical models, computational methods, and applications, and provides a solid overview of major new ideas and results in the three domains. Coverage of all aspects of quantitative finance including models, computational methods and applications. Provides an overview of new ideas an

  3. Achievement report for fiscal 1999. Research on mesh-based estimation of natural energy for Southeast Asia as represented by Myanmar (Assessment of wind power and solar energy using numerical weather model); 1999 nendo Myanmar koku wo rei ni shita Tonan Asia ni okeru shizen energy no mesh suitei ni kansuru kenkyu seika hokokusho. Suchi kisho model ni yoru furyoku taiyo energy hyoka

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2000-03-01

    As a first step toward introducing wind power and photovoltaic power systems into developing countries in Southeast Asia and diffusing them there for the exploitation of natural energy, a numerical weather model usable in Southeast Asia is developed to make up for the insufficiency of weather data in the region. A technique is developed, illustrated with the case of Myanmar, for accurately estimating natural conditions such as wind direction, wind velocity, and solar radiation over the past one-year period, for the assessment of the power to be generated using wind turbines and solar panels. The results of the observation of wind conditions indicate that wind directions are mainly northerly or westerly and that wind speeds are as weak as 1-3 m/s on average. As for total daily solar radiation from December through March, it is found to be 17-23 MJ/m{sup 2}/day, which is twice the level measured in Tokyo. A comparison between the weather observation results and a model calculation shows that the latter sufficiently reproduces the actual weather conditions. Based on the estimated values of wind conditions and solar radiation in Myanmar, the amount of power to be obtained from an assumed arrangement of wind power systems and solar panels is assessed. (NEDO)

  4. On the Hughes model and numerical aspects

    KAUST Repository

    Gomes, Diogo A.

    2017-01-05

    We study a crowd model proposed by R. Hughes in [11] and we describe a numerical approach to solve it. This model comprises a Fokker-Planck equation coupled with an eikonal equation with Dirichlet or Neumann data. First, we establish a priori estimates for the solutions. Second, we study radial solutions and identify a shock formation mechanism. Third, we illustrate the existence of congestion, the breakdown of the model, and the trend to the equilibrium. Finally, we propose a new numerical method and consider two examples.

  5. Numerical modelling of ion transport in flames

    KAUST Repository

    Han, Jie

    2015-10-20

    This paper presents a modelling framework to compute the diffusivity and mobility of ions in flames. The (n, 6, 4) interaction potential is adopted to model collisions between neutral and charged species. All required parameters in the potential are related to the polarizability of the species pair via semi-empirical formulas, which are derived using the most recently published data or best estimates. The resulting framework permits computation of the transport coefficients of any ion found in a hydrocarbon flame. The accuracy of the proposed method is evaluated by comparing its predictions with experimental data on the mobility of selected ions in single-component neutral gases. Based on this analysis, the value of a model constant available in the literature is modified in order to improve the model's predictions. The newly determined ion transport coefficients are used as part of a previously developed numerical approach to compute the distribution of charged species in a freely propagating premixed lean CH4/O2 flame. Since a significant scatter of polarizability data exists in the literature, the effects of changes in polarizability on ion transport properties and the spatial distribution of ions in flames are explored. Our analysis shows that changes in polarizability propagate with decreasing effect from binary transport coefficients to species number densities. We conclude that the chosen polarizability value has a limited effect on the ion distribution in freely propagating flames. We expect that the modelling framework proposed here will benefit future efforts in modelling the effect of external voltages on flames. Supplemental data for this article can be accessed at http://dx.doi.org/10.1080/13647830.2015.1090018. © 2015 Taylor & Francis.

  6. Numerical orbit generators of artificial earth satellites

    Science.gov (United States)

    Kugar, H. K.; Dasilva, W. C. C.

    1984-04-01

    A numerical orbit integrator is presented that contains updates and improvements relative to the ones previously used by the Departamento de Mecanica Espacial e Controle (DMC) of INPE, and that incorporates newer models resulting from the experience acquired over time. Flexibility and modularity were taken into account in order to allow future extensions and modifications. Numerical accuracy, processing speed, and memory savings, as well as usability aspects, were also considered. A user's handbook, the complete program listing, and a qualitative analysis of accuracy, processing time, and orbit perturbation effects are included as well.

  7. Revised age estimates of the Euphrosyne family

    Science.gov (United States)

    Carruba, Valerio; Masiero, Joseph R.; Cibulková, Helena; Aljbaae, Safwan; Espinoza Huaman, Mariela

    2015-08-01

    The Euphrosyne family, a high-inclination asteroid family in the outer main belt, is considered one of the most peculiar groups of asteroids. It is characterized by the steepest size frequency distribution (SFD) among families in the main belt, and it is the only family crossed near its center by the ν6 secular resonance. Previous studies have shown that the steep size frequency distribution may be the result of the dynamical evolution of the family. In this work we further explore the unique dynamical configuration of the Euphrosyne family by refining the previous age values, considering the effects of changes in the shapes of the asteroids during the YORP cycle ("stochastic YORP"), the long-term effect of close encounters of family members with (31) Euphrosyne itself, and the effect that changing key parameters of the Yarkovsky force (such as density and thermal conductivity) has on the estimate of the family age obtained using Monte Carlo methods. Numerical simulations accounting for the interaction with the local web of secular and mean-motion resonances allow us to refine previous estimates of the family age. The cratering event that formed the Euphrosyne family most likely occurred between 560 and 1160 Myr ago, and no earlier than 1400 Myr ago when we allow for larger uncertainties in the key parameters of the Yarkovsky force.

  8. Numerical computation of the transport matrix in toroidal plasma with a stochastic magnetic field

    Science.gov (United States)

    Zhu, Siqiang; Chen, Dunqiang; Dai, Zongliang; Wang, Shaojie

    2018-04-01

    A new numerical method for computing the transport matrix, based on integrating along the full orbits of guiding centers, is realized. The method is successfully applied to compute the phase-space diffusion tensor of passing electrons in a tokamak with a stochastic magnetic field. The new method also computes the Lagrangian correlation function, which can be used to evaluate the Lagrangian correlation time and the turbulence correlation length. For the case of the stochastic magnetic field, we find that the order of magnitude of the parallel correlation length can be estimated by qR0, as expected previously.

  9. Parameter Estimation

    DEFF Research Database (Denmark)

    Sales-Cruz, Mauricio; Heitzig, Martina; Cameron, Ian

    2011-01-01

    In this chapter the importance of parameter estimation in model development is illustrated through various applications related to reaction systems. In particular, rate constants in a reaction system are obtained through parameter estimation methods. These approaches often require the application...... of optimisation techniques coupled with dynamic solution of the underlying model. Linear and nonlinear approaches to parameter estimation are investigated. There is also the application of maximum likelihood principles in the estimation of parameters, as well as the use of orthogonal collocation to generate a set...... of algebraic equations as the basis for parameter estimation. These approaches are illustrated using estimations of kinetic constants from reaction system models....
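
    A minimal sketch of the first approach mentioned above (optimisation coupled with dynamic solution of the model), assuming a hypothetical first-order reaction A → B and synthetic concentration data:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(7)

# Hypothetical first-order reaction A -> B with unknown rate constant k.
def simulate(k, t_obs, cA0=1.0):
    sol = solve_ivp(lambda t, c: [-k * c[0]], (0.0, t_obs[-1]), [cA0],
                    t_eval=t_obs, rtol=1e-8)
    return sol.y[0]

t_obs = np.linspace(0.0, 5.0, 11)
k_true = 0.7
data = simulate(k_true, t_obs) + 0.02 * rng.standard_normal(t_obs.size)

# Each objective evaluation requires a dynamic (ODE) solution of the model.
def sse(k):
    return np.sum((simulate(k, t_obs) - data) ** 2)

res = minimize_scalar(sse, bounds=(0.01, 5.0), method="bounded")
print(f"estimated rate constant k = {res.x:.3f} (true {k_true})")
```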

  10. Numerical calculation of particle collection efficiency in an ...

    Indian Academy of Sciences (India)

    Theoretical and numerical research has been previously done on ESPs to predict the efficiency ... Lagrangian simulations of particle transport in wire–plate ESP were .... The collection efficiency can be defined as the ratio of the number of ...

  11. Numerical Analysis Objects

    Science.gov (United States)

    Henderson, Michael

    1997-08-01

    The Numerical Analysis Objects project (NAO) is a project in the Mathematics Department of IBM's TJ Watson Research Center. While there are plenty of numerical tools available today, it is not an easy task to combine them into a custom application. NAO is directed at the dual problems of building applications from a set of tools, and creating those tools. There are several "reuse" projects, which focus on the problems of identifying and cataloging tools. NAO is directed at the specific context of scientific computing. Because the type of tools is restricted, problems such as tools with incompatible data structures for input and output, and dissimilar interfaces to tools which solve similar problems can be addressed. The approach we've taken is to define interfaces to those objects used in numerical analysis, such as geometries, functions and operators, and to start collecting (and building) a set of tools which use these interfaces. We have written a class library (a set of abstract classes and implementations) in C++ which demonstrates the approach. Besides the classes, the class library includes "stub" routines which allow the library to be used from C or Fortran, and an interface to a Visual Programming Language. The library has been used to build a simulator for petroleum reservoirs, using a set of tools for discretizing nonlinear differential equations that we have written, and includes "wrapped" versions of packages from the Netlib repository. Documentation can be found on the Web at "http://www.research.ibm.com/nao". I will describe the objects and their interfaces, and give examples ranging from mesh generation to solving differential equations.
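
    A hedged Python sketch of the general idea of programming against abstract numerical interfaces; the actual NAO library is a C++ class library with its own class names, so everything below is illustrative only.

```python
from abc import ABC, abstractmethod

class Function(ABC):
    """Abstract interface for a scalar function of one variable."""
    @abstractmethod
    def value(self, x: float) -> float: ...

class Operator(ABC):
    """Abstract interface for something that maps one Function to another."""
    @abstractmethod
    def apply(self, f: Function) -> Function: ...

# A concrete function ...
class Polynomial(Function):
    def __init__(self, coeffs):
        self.coeffs = coeffs                  # coeffs[i] multiplies x**i
    def value(self, x):
        return sum(c * x**i for i, c in enumerate(self.coeffs))

# ... and a tool written purely against the abstract interfaces.
class CentralDifference(Operator):
    def __init__(self, h=1e-5):
        self.h = h
    def apply(self, f):
        h = self.h
        class _Derivative(Function):
            def value(self, x):
                return (f.value(x + h) - f.value(x - h)) / (2.0 * h)
        return _Derivative()

p = Polynomial([1.0, 0.0, 3.0])               # 1 + 3*x**2
dp = CentralDifference().apply(p)
print(dp.value(2.0))                          # ~12.0
```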

  12. Numerical differential protection

    CERN Document Server

    Ziegler, Gerhard

    2012-01-01

    Differential protection is a fast and selective method of protection against short-circuits. It is applied in many variants for electrical machines, transformers, busbars, and electric lines. Initially this book covers the theory and fundamentals of analog and numerical differential protection. Current transformers are treated in detail including transient behaviour, impact on protection performance, and practical dimensioning. An extended chapter is dedicated to signal transmission for line protection, in particular, modern digital communication and GPS timing. The emphasis is then pla

  13. Automatic trend estimation

    CERN Document Server

    Vamoş, Călin

    2013-01-01

    Our book introduces a method to evaluate the accuracy of trend estimation algorithms under conditions similar to those encountered in real time series processing. This method is based on Monte Carlo experiments with artificial time series numerically generated by an original algorithm. The second part of the book contains several automatic algorithms for trend estimation and time series partitioning. The source codes of the computer programs implementing these original automatic algorithms are given in the appendix and will be freely available on the web. The book contains clear statement of the conditions and the approximations under which the algorithms work, as well as the proper interpretation of their results. We illustrate the functioning of the analyzed algorithms by processing time series from astrophysics, finance, biophysics, and paleoclimatology. The numerical experiment method extensively used in our book is already in common use in computational and statistical physics.
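
    A minimal version of the numerical-experiment method described above, assuming a simple moving-average trend estimator (not one of the book's algorithms) applied to artificial series with a known trend:

```python
import numpy as np

rng = np.random.default_rng(8)

n, n_trials, window = 500, 200, 51
t = np.linspace(0.0, 1.0, n)
true_trend = 2.0 * t + np.sin(2.0 * np.pi * t)     # known, smooth trend

def moving_average(x, w):
    kernel = np.ones(w) / w
    return np.convolve(x, kernel, mode="same")

rmse = np.empty(n_trials)
for i in range(n_trials):
    series = true_trend + 0.5 * rng.standard_normal(n)
    estimate = moving_average(series, window)
    core = slice(window, n - window)               # ignore edge effects
    rmse[i] = np.sqrt(np.mean((estimate[core] - true_trend[core]) ** 2))

print(f"mean RMSE of the trend estimate over {n_trials} artificial series: {rmse.mean():.3f}")
```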

  14. 49 CFR 173.23 - Previously authorized packaging.

    Science.gov (United States)

    2010-10-01

    ... 49 Transportation 2 2010-10-01 2010-10-01 false Previously authorized packaging. 173.23 Section... REQUIREMENTS FOR SHIPMENTS AND PACKAGINGS Preparation of Hazardous Materials for Transportation § 173.23 Previously authorized packaging. (a) When the regulations specify a packaging with a specification marking...

  15. 28 CFR 10.5 - Incorporation of papers previously filed.

    Science.gov (United States)

    2010-07-01

    ... 28 Judicial Administration 1 2010-07-01 2010-07-01 false Incorporation of papers previously filed... CARRYING ON ACTIVITIES WITHIN THE UNITED STATES Registration Statement § 10.5 Incorporation of papers previously filed. Papers and documents already filed with the Attorney General pursuant to the said act and...

  16. 75 FR 76056 - FEDERAL REGISTER CITATION OF PREVIOUS ANNOUNCEMENT:

    Science.gov (United States)

    2010-12-07

    ... SECURITIES AND EXCHANGE COMMISSION Sunshine Act Meeting FEDERAL REGISTER CITATION OF PREVIOUS ANNOUNCEMENT: STATUS: Closed meeting. PLACE: 100 F Street, NE., Washington, DC. DATE AND TIME OF PREVIOUSLY ANNOUNCED MEETING: Thursday, December 9, 2010 at 2 p.m. CHANGE IN THE MEETING: Time change. The closed...

  17. Extraction of gravitational waves in numerical relativity.

    Science.gov (United States)

    Bishop, Nigel T; Rezzolla, Luciano

    2016-01-01

    A numerical-relativity calculation yields in general a solution of the Einstein equations including also a radiative part, which is in practice computed in a region of finite extent. Since gravitational radiation is properly defined only at null infinity and in an appropriate coordinate system, the accurate estimation of the emitted gravitational waves represents an old and non-trivial problem in numerical relativity. A number of methods have been developed over the years to "extract" the radiative part of the solution from a numerical simulation and these include: quadrupole formulas, gauge-invariant metric perturbations, Weyl scalars, and characteristic extraction. We review and discuss each method, in terms of both its theoretical background and its implementation. Finally, we provide a brief comparison of the various methods in terms of their inherent advantages and disadvantages.

  18. No discrimination against previous mates in a sexually cannibalistic spider

    Science.gov (United States)

    Fromhage, Lutz; Schneider, Jutta M.

    2005-09-01

    In several animal species, females discriminate against previous mates in subsequent mating decisions, increasing the potential for multiple paternity. In spiders, female choice may take the form of selective sexual cannibalism, which has been shown to bias paternity in favor of particular males. If cannibalistic attacks function to restrict a male's paternity, females may have little interest to remate with males having survived such an attack. We therefore studied the possibility of female discrimination against previous mates in sexually cannibalistic Argiope bruennichi, where females almost always attack their mate at the onset of copulation. We compared mating latency and copulation duration of males having experienced a previous copulation either with the same or with a different female, but found no evidence for discrimination against previous mates. However, males copulated significantly shorter when inserting into a used, compared to a previously unused, genital pore of the female.

  19. Implant breast reconstruction after salvage mastectomy in previously irradiated patients.

    Science.gov (United States)

    Persichetti, Paolo; Cagli, Barbara; Simone, Pierfranco; Cogliandro, Annalisa; Fortunato, Lucio; Altomare, Vittorio; Trodella, Lucio

    2009-04-01

    The most common surgical approach in the case of local tumor recurrence after quadrantectomy and radiotherapy is salvage mastectomy. Breast reconstruction is the subsequent phase of the treatment, and the plastic surgeon has to operate on previously irradiated and manipulated tissues. The medical literature suggests that breast reconstruction with tissue expanders is not a viable option, since previous radiotherapy is considered a contraindication. The purpose of this retrospective study is to evaluate the influence of previous radiotherapy on 2-stage breast reconstruction (tissue expander/implant). Only patients with analogous timing of radiation therapy and the same demolitive and reconstructive procedures were recruited. The results of this study show that, after salvage mastectomy in previously irradiated patients, implant reconstruction is still possible. Further comparative studies are, of course, advisable before drawing any firm conclusion on the possibility of performing implant reconstruction in previously irradiated patients.

  20. Confidence in Numerical Simulations

    Energy Technology Data Exchange (ETDEWEB)

    Hemez, Francois M. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2015-02-23

    This PowerPoint presentation offers a high-level discussion of uncertainty, confidence and credibility in scientific Modeling and Simulation (M&S). It begins by briefly evoking M&S trends in computational physics and engineering. The first thrust of the discussion is to emphasize that the role of M&S in decision-making is either to support reasoning by similarity or to “forecast,” that is, make predictions about the future or extrapolate to settings or environments that cannot be tested experimentally. The second thrust is to explain that M&S-aided decision-making is an exercise in uncertainty management. The three broad classes of uncertainty in computational physics and engineering are variability and randomness, numerical uncertainty and model-form uncertainty. The last part of the discussion addresses how scientists “think.” This thought process parallels the scientific method whereby a hypothesis is formulated, often accompanied by simplifying assumptions; then physical experiments and numerical simulations are performed to confirm or reject the hypothesis. “Confidence” derives not just from the levels of training and experience of analysts, but also from the rigor with which these assessments are performed, documented and peer-reviewed.

  1. Confidence in Numerical Simulations

    International Nuclear Information System (INIS)

    Hemez, Francois M.

    2015-01-01

    This PowerPoint presentation offers a high-level discussion of uncertainty, confidence and credibility in scientific Modeling and Simulation (M&S). It begins by briefly evoking M&S trends in computational physics and engineering. The first thrust of the discussion is to emphasize that the role of M&S in decision-making is either to support reasoning by similarity or to "forecast," that is, make predictions about the future or extrapolate to settings or environments that cannot be tested experimentally. The second thrust is to explain that M&S-aided decision-making is an exercise in uncertainty management. The three broad classes of uncertainty in computational physics and engineering are variability and randomness, numerical uncertainty and model-form uncertainty. The last part of the discussion addresses how scientists "think." This thought process parallels the scientific method whereby a hypothesis is formulated, often accompanied by simplifying assumptions; then physical experiments and numerical simulations are performed to confirm or reject the hypothesis. "Confidence" derives not just from the levels of training and experience of analysts, but also from the rigor with which these assessments are performed, documented and peer-reviewed.

  2. Numerical properties of staggered overlap fermions

    CERN Document Server

    de Forcrand, Philippe; Panero, Marco

    2010-01-01

    We report the results of a numerical study of staggered overlap fermions, following the construction of Adams which reduces the number of tastes from 4 to 2 without fine-tuning. We study the sensitivity of the operator to the topology of the gauge field, its locality and its robustness to fluctuations of the gauge field. We make a first estimate of the computing cost of a quark propagator calculation, and compare with Neuberger's overlap.

  3. Conservative numerical methods for solitary wave interactions

    Energy Technology Data Exchange (ETDEWEB)

    Duran, A; Lopez-Marcos, M A [Departamento de Matematica Aplicada y Computacion, Facultad de Ciencias, Universidad de Valladolid, Paseo del Prado de la Magdalena s/n, 47005 Valladolid (Spain)

    2003-07-18

    The purpose of this paper is to show the advantages of using numerical methods that preserve invariant quantities in the study of solitary wave interactions for the regularized long wave equation. It is shown that the so-called conservative methods are more appropriate for studying the phenomenon and provide a dynamic point of view that allows us to estimate the changes in the parameters of the solitary waves after the collision.
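
    The abstract does not reproduce the invariants themselves. As a minimal, hedged sketch of the underlying idea, the snippet below monitors a discrete analogue of the simplest conserved quantity of the regularized long wave equation (the integral of u) across stored snapshots of a solution; the profile, grid, and variable names are illustrative and not taken from the paper.

```python
import numpy as np

def discrete_mass(u, dx):
    """Discrete analogue of the invariant I1 = integral of u over a periodic domain."""
    return dx * np.sum(u)

# Illustrative snapshots: a solitary-wave-like profile translated without change of shape.
L, n = 100.0, 512
x = np.linspace(0.0, L, n, endpoint=False)
dx = x[1] - x[0]
snapshots = [3.0 / np.cosh(0.5 * (x - L / 2 - shift)) ** 2 for shift in (0.0, 5.0, 10.0)]

masses = [discrete_mass(u, dx) for u in snapshots]
print("I1 per snapshot:", [round(m, 6) for m in masses])
print("max drift:", max(abs(m - masses[0]) for m in masses))
```

    A conservative scheme keeps such discrete quantities essentially constant over long runs, which is what makes the post-collision solitary-wave parameters easier to estimate reliably.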

  4. Stochastic spectral Galerkin and collocation methods for PDEs with random coefficients: A numerical comparison

    KAUST Repository

    Bäck, Joakim

    2010-09-17

    Much attention has recently been devoted to the development of Stochastic Galerkin (SG) and Stochastic Collocation (SC) methods for uncertainty quantification. An open and relevant research topic is the comparison of these two methods. By introducing a suitable generalization of the classical sparse grid SC method, we are able to compare SG and SC on the same underlying multivariate polynomial space in terms of accuracy vs. computational work. The approximation spaces considered here include isotropic and anisotropic versions of Tensor Product (TP), Total Degree (TD), Hyperbolic Cross (HC) and Smolyak (SM) polynomials. Numerical results for linear elliptic SPDEs indicate a slight computational work advantage of isotropic SC over SG, with SC-SM and SG-TD being the best choices of approximation spaces for each method. Finally, numerical results corroborate the optimality of the theoretical estimate of anisotropy ratios introduced by the authors in a previous work for the construction of anisotropic approximation spaces. © 2011 Springer.
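
    As a small, hedged illustration of why the choice of polynomial space drives computational work, the snippet below counts the multi-indices in isotropic Tensor Product (TP) and Total Degree (TD) sets of a given order; the definitions are the standard ones and the code is ours, not the paper's.

```python
from itertools import product
from math import comb

def tensor_product_size(dim, w):
    """Number of multi-indices alpha with max_i alpha_i <= w."""
    return (w + 1) ** dim

def total_degree_size(dim, w):
    """Number of multi-indices alpha with sum_i alpha_i <= w."""
    count = sum(1 for a in product(range(w + 1), repeat=dim) if sum(a) <= w)
    assert count == comb(dim + w, dim)  # closed-form cardinality of the TD set
    return count

for dim in (2, 4, 8):
    w = 3
    print(f"dim={dim}, order w={w}: "
          f"|TP|={tensor_product_size(dim, w)}, |TD|={total_degree_size(dim, w)}")
```

    The rapid growth of |TP| relative to |TD| with dimension is one reason TD-, HC- and Smolyak-type spaces are attractive for both SG and SC.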

  5. Numerical Simulation of Steady Supercavitating Flows

    OpenAIRE

    Ali Jafarian; Ahmad-Reza Pishevar

    2016-01-01

    In this research, the supercavitation phenomenon in compressible liquid flows is simulated. The one-fluid method based on a new exact two-phase Riemann solver is used for modeling. The cavitation is considered as an isothermal process and an equation of state consistent with the physical behavior of water is used. High-speed flows of water over a cylinder and a projectile are simulated and the results are compared with previous numerical and experimental results. The cavitation bubble p...

  6. Essential numerical computer methods

    CERN Document Server

    Johnson, Michael L

    2010-01-01

    The use of computers and computational methods has become ubiquitous in biological and biomedical research. During the last 2 decades most basic algorithms have not changed, but what has is the huge increase in computer speed and ease of use, along with the corresponding orders-of-magnitude decrease in cost. A general perception exists that the only applications of computers and computer methods in biological and biomedical research are either basic statistical analysis or the searching of DNA sequence databases. While these are important applications, they only scratch the surface of the current and potential applications of computers and computer methods in biomedical research. The various chapters within this volume include a wide variety of applications that extend far beyond this limited perception. As part of the Reliable Lab Solutions series, Essential Numerical Computer Methods brings together chapters from volumes 210, 240, 321, 383, 384, 454, and 467 of Methods in Enzymology. These chapters provide ...

  7. Numerical relativity beyond astrophysics

    Science.gov (United States)

    Garfinkle, David

    2017-01-01

    Though the main applications of computer simulations in relativity are to astrophysical systems such as black holes and neutron stars, nonetheless there are important applications of numerical methods to the investigation of general relativity as a fundamental theory of the nature of space and time. This paper gives an overview of some of these applications. In particular we cover (i) investigations of the properties of spacetime singularities such as those that occur in the interior of black holes and in big bang cosmology. (ii) investigations of critical behavior at the threshold of black hole formation in gravitational collapse. (iii) investigations inspired by string theory, in particular analogs of black holes in more than 4 spacetime dimensions and gravitational collapse in spacetimes with a negative cosmological constant.

  8. Testability of numerical systems

    International Nuclear Information System (INIS)

    Soulas, B.

    1992-01-01

    In order to face up to the growing complexity of systems, the authors undertook to define a new approach to the qualification of systems. This approach is based on the concept of Testability which, supported by system modelization, validation and verification methods and tools, would allow an Integrated Qualification process to be applied throughout the life-span of systems. The general principles of this approach are introduced for the general case of numerical systems; in particular, this presentation points out the difference between the specification activity and the modelization and validation activity. The approach is illustrated first by the study of a global system and then by the case of a communication protocol, from the software point of view. Finally, MODEL, which supports this approach, is described. MODEL is a commercial tool providing modelization and validation techniques based on Petri Nets with a triple extension: Predicate/Transition, Timed and Stochastic Petri Nets.

  9. Numerical relativity beyond astrophysics.

    Science.gov (United States)

    Garfinkle, David

    2017-01-01

    Though the main applications of computer simulations in relativity are to astrophysical systems such as black holes and neutron stars, nonetheless there are important applications of numerical methods to the investigation of general relativity as a fundamental theory of the nature of space and time. This paper gives an overview of some of these applications. In particular we cover (i) investigations of the properties of spacetime singularities such as those that occur in the interior of black holes and in big bang cosmology. (ii) investigations of critical behavior at the threshold of black hole formation in gravitational collapse. (iii) investigations inspired by string theory, in particular analogs of black holes in more than 4 spacetime dimensions and gravitational collapse in spacetimes with a negative cosmological constant.

  10. Personality disorders in previously detained adolescent females: a prospective study

    NARCIS (Netherlands)

    Krabbendam, A.; Colins, O.F.; Doreleijers, T.A.H.; van der Molen, E.; Beekman, A.T.F.; Vermeiren, R.R.J.M.

    2015-01-01

    This longitudinal study investigated the predictive value of trauma and mental health problems for the development of antisocial personality disorder (ASPD) and borderline personality disorder (BPD) in previously detained women. The participants were 229 detained adolescent females who were assessed

  11. Payload specialist Reinhard Furrer show evidence of previous blood sampling

    Science.gov (United States)

    1985-01-01

    Payload specialist Reinhard Furrer shows evidence of previous blood sampling while Wubbo J. Ockels, Dutch payload specialist (only partially visible), extends his right arm after a sample has been taken. Both men show bruises on their arms.

  12. Choice of contraception after previous operative delivery at a family ...

    African Journals Online (AJOL)

    Choice of contraception after previous operative delivery at a family planning clinic in Northern Nigeria. Amina Mohammed‑Durosinlorun, Joel Adze, Stephen Bature, Caleb Mohammed, Matthew Taingson, Amina Abubakar, Austin Ojabo, Lydia Airede ...

  13. Previous utilization of service does not improve timely booking in ...

    African Journals Online (AJOL)

    Previous utilization of service does not improve timely booking in antenatal care: Cross sectional study ... Results: Past experience on antenatal care service utilization did not come out as a predictor for ...

  14. A previous hamstring injury affects kicking mechanics in soccer players.

    Science.gov (United States)

    Navandar, Archit; Veiga, Santiago; Torres, Gonzalo; Chorro, David; Navarro, Enrique

    2018-01-10

    Although the kicking skill is influenced by limb dominance and sex, how a previous hamstring injury affects kicking has not been studied in detail. Thus, the objective of this study was to evaluate the effect of sex and limb dominance on kicking in limbs with and without a previous hamstring injury. 45 professional players (males: n=19, previously injured players=4, age=21.16 ± 2.00 years; females: n=19, previously injured players=10, age=22.15 ± 4.50 years) performed 5 kicks each with their preferred and non-preferred limb at a target 7 m away, which were recorded with a three-dimensional motion capture system. Kinematic and kinetic variables were extracted for the backswing, leg cocking, leg acceleration and follow through phases. A shorter backswing (20.20 ± 3.49% vs 25.64 ± 4.57%), and differences in knee flexion angle (58 ± 10° vs 72 ± 14°) and hip flexion velocity (8 ± 0 rad/s vs 10 ± 2 rad/s) were observed in previously injured, non-preferred limb kicks for females. A lower peak hip linear velocity (3.50 ± 0.84 m/s vs 4.10 ± 0.45 m/s) was observed in previously injured, preferred limb kicks of females. These differences occurred in the backswing and leg-cocking phases where the hamstring muscles were the most active. A variation in the functioning of the hamstring muscles and that of the gluteus maximus and iliopsoas in the case of a previous injury could account for the differences observed in the kicking pattern. Therefore, the effects of a previous hamstring injury must be considered while designing rehabilitation programs to re-educate kicking movement.

  15. Computing the Alexander Polynomial Numerically

    DEFF Research Database (Denmark)

    Hansen, Mikael Sonne

    2006-01-01

    Explains how to construct the Alexander matrix and how this can be used to compute the Alexander polynomial numerically.
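
    The record gives no construction details. As a hedged illustration of obtaining an Alexander polynomial from a matrix, the sketch below uses the closely related Seifert-matrix presentation, Δ(t) = det(V − tVᵀ), evaluated for a standard Seifert matrix of the trefoil knot; this is not necessarily the Alexander-matrix route the record describes.

```python
import sympy as sp

t = sp.symbols("t")

# A standard Seifert matrix of the trefoil knot.
V = sp.Matrix([[-1, 1],
               [ 0, -1]])

# Alexander polynomial (up to units): det(V - t * V^T).
delta = sp.expand((V - t * V.T).det())
print(delta)  # t**2 - t + 1, the Alexander polynomial of the trefoil
```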

  16. Improved Estimates of Thermodynamic Parameters

    Science.gov (United States)

    Lawson, D. D.

    1982-01-01

    Techniques refined for estimating heat of vaporization and other parameters from molecular structure. Using parabolic equation with three adjustable parameters, heat of vaporization can be used to estimate boiling point, and vice versa. Boiling points and vapor pressures for some nonpolar liquids were estimated by improved method and compared with previously reported values. Technique for estimating thermodynamic parameters should make it easier for engineers to choose among candidate heat-exchange fluids for thermochemical cycles.
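
    The abstract does not state the parabolic relation itself, so the snippet below is only a generic, hedged sketch of the kind of three-parameter fit described: a quadratic linking boiling point to heat of vaporization, fitted to made-up data (hypothetical values, not Lawson's) and then inverted numerically to go the other way.

```python
import numpy as np

# Hypothetical (boiling point K, heat of vaporization kJ/mol) pairs -- illustrative only.
Tb = np.array([309.0, 341.0, 371.0, 399.0, 424.0])
dHvap = np.array([25.8, 28.9, 31.8, 34.4, 36.6])

# Parabolic relation with three adjustable parameters.
a, b, c = np.polyfit(Tb, dHvap, deg=2)
print(f"dHvap ~ {a:.3e}*Tb^2 + {b:.3e}*Tb + {c:.2f}")

# Inverse direction: estimate a boiling point from a measured heat of vaporization.
target = 30.0  # kJ/mol, hypothetical
roots = np.roots([a, b, c - target])
plausible = [r.real for r in roots if abs(r.imag) < 1e-9 and 200.0 < r.real < 600.0]
print("estimated Tb (K):", plausible)
```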

  17. Secondary recurrent miscarriage is associated with previous male birth.

    LENUS (Irish Health Repository)

    Ooi, Poh Veh

    2012-01-31

    Secondary recurrent miscarriage (RM) is defined as three or more consecutive pregnancy losses after delivery of a viable infant. Previous reports suggest that a firstborn male child is associated with less favourable subsequent reproductive potential, possibly due to maternal immunisation against male-specific minor histocompatibility antigens. In a retrospective cohort study of 85 cases of secondary RM we aimed to determine if secondary RM was associated with (i) gender of previous child, maternal age, or duration of miscarriage history, and (ii) increased risk of pregnancy complications. Fifty-three women (62.0%; 53/85) gave birth to a male child prior to RM compared to 32 (38.0%; 32/85) who gave birth to a female child (p=0.002). The majority (91.7%; 78/85) had uncomplicated, term deliveries and normal birth weight neonates, with one quarter of the women previously delivered by Caesarean section. All had routine RM investigations and 19.0% (16/85) had an abnormal result. Fifty-seven women conceived again and 33.3% (19/57) miscarried, but there was no significant difference in failure rates between those with a previous male or female child (13/32 vs. 6/25, p=0.2). When patients with abnormal results were excluded, or when women with only one previous child were considered, there was still no difference in these rates. A previous male birth may be associated with an increased risk of secondary RM but numbers preclude concluding whether this increases recurrence risk. The suggested association with previous male birth provides a basis for further investigations at a molecular level.

  18. Secondary recurrent miscarriage is associated with previous male birth.

    LENUS (Irish Health Repository)

    Ooi, Poh Veh

    2011-01-01

    Secondary recurrent miscarriage (RM) is defined as three or more consecutive pregnancy losses after delivery of a viable infant. Previous reports suggest that a firstborn male child is associated with less favourable subsequent reproductive potential, possibly due to maternal immunisation against male-specific minor histocompatibility antigens. In a retrospective cohort study of 85 cases of secondary RM we aimed to determine if secondary RM was associated with (i) gender of previous child, maternal age, or duration of miscarriage history, and (ii) increased risk of pregnancy complications. Fifty-three women (62.0%; 53/85) gave birth to a male child prior to RM compared to 32 (38.0%; 32/85) who gave birth to a female child (p=0.002). The majority (91.7%; 78/85) had uncomplicated, term deliveries and normal birth weight neonates, with one quarter of the women previously delivered by Caesarean section. All had routine RM investigations and 19.0% (16/85) had an abnormal result. Fifty-seven women conceived again and 33.3% (19/57) miscarried, but there was no significant difference in failure rates between those with a previous male or female child (13/32 vs. 6/25, p=0.2). When patients with abnormal results were excluded, or when women with only one previous child were considered, there was still no difference in these rates. A previous male birth may be associated with an increased risk of secondary RM but numbers preclude concluding whether this increases recurrence risk. The suggested association with previous male birth provides a basis for further investigations at a molecular level.

  19. Numerical Calculation of Transport Based on the Drift-Kinetic Equation for Plasmas in General Toroidal Magnetic Geometry: Numerical Methods

    International Nuclear Information System (INIS)

    Reynolds, J. M.; Lopez-Bruna, D.

    2009-01-01

    In this report we continue the description of a newly developed numerical method to solve the drift kinetic equation for ions and electrons in toroidal plasmas. Several numerical aspects, already outlined in a previous report [Informes Tecnicos Ciemat 1165, mayo 2009], are now treated in more detail. Aside from discussing the method in the context of other existing codes, various aspects are explained from the viewpoint of numerical methods: the way convection equations are solved, the adopted boundary conditions, the real-space meshing procedures along with new software developed to build them, and some additional questions related to parallelization and numerical integration. (Author) 16 refs

  20. Numerical Hydrodynamics and Magnetohydrodynamics in General Relativity

    Directory of Open Access Journals (Sweden)

    Font José A.

    2008-09-01

    This article presents a comprehensive overview of numerical hydrodynamics and magnetohydrodynamics (MHD) in general relativity. Some significant additions have been incorporated with respect to the previous two versions of this review (2000, 2003), most notably the coverage of general-relativistic MHD, a field in which remarkable activity and progress has occurred in the last few years. Correspondingly, the discussion of astrophysical simulations in general-relativistic hydrodynamics is enlarged to account for recent relevant advances, while those dealing with general-relativistic MHD are amply covered in this review for the first time. The basic outline of this article is nevertheless similar to its earlier versions, save for the addition of MHD-related issues throughout. Hence, different formulations of both the hydrodynamics and MHD equations are presented, with special mention of conservative and hyperbolic formulations well adapted to advanced numerical methods. A large sample of numerical approaches for solving such hyperbolic systems of equations is discussed, paying particular attention to solution procedures based on schemes exploiting the characteristic structure of the equations through linearized Riemann solvers. As previously stated, a comprehensive summary of astrophysical simulations in strong gravitational fields is also presented. These are detailed in three basic sections, namely gravitational collapse, black-hole accretion, and neutron-star evolutions; despite the boundaries, these sections may (and in fact do) overlap throughout the discussion. The material contained in these sections highlights the numerical challenges of various representative simulations. It also follows, to some extent, the chronological development of the field, concerning advances in the formulation of the gravitational field, hydrodynamics and MHD equations and the numerical methodology designed to solve them. To keep the length of this article reasonable
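
    As a minimal, generic illustration of the approximate-Riemann-solver building block mentioned above, the sketch below implements the HLL numerical flux for a 1D scalar conservation law (inviscid Burgers) with crude wave-speed bounds; it is not drawn from the review and omits everything specific to relativistic hydrodynamics or MHD.

```python
import numpy as np

def flux(u):
    """Physical flux of the inviscid Burgers equation, f(u) = u^2 / 2."""
    return 0.5 * u * u

def hll_flux(uL, uR):
    """HLL approximate-Riemann-solver flux with simple wave-speed bounds (f'(u) = u)."""
    sL, sR = min(uL, uR), max(uL, uR)
    if sL >= 0.0:
        return flux(uL)
    if sR <= 0.0:
        return flux(uR)
    return (sR * flux(uL) - sL * flux(uR) + sL * sR * (uR - uL)) / (sR - sL)

# One explicit finite-volume update on a periodic grid.
n, dx, dt = 200, 1.0 / 200, 0.002
u = np.sin(2.0 * np.pi * np.linspace(0.0, 1.0, n, endpoint=False))
F = np.array([hll_flux(u[i], u[(i + 1) % n]) for i in range(n)])  # flux at interface i+1/2
u_new = u - dt / dx * (F - np.roll(F, 1))                         # roll gives the left-interface flux
print("max |u| after one step:", np.abs(u_new).max())
```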

  1. Numerical aerodynamic simulation (NAS)

    International Nuclear Information System (INIS)

    Peterson, V.L.; Ballhaus, W.F. Jr.; Bailey, F.R.

    1984-01-01

    The Numerical Aerodynamic Simulation (NAS) Program is designed to provide a leading-edge computational capability to the aerospace community. It was recognized early in the program that, in addition to more advanced computers, the entire computational process ranging from problem formulation to publication of results needed to be improved to realize the full impact of computational aerodynamics. Therefore, the NAS Program has been structured to focus on the development of a complete system that can be upgraded periodically with minimum impact on the user and on the inventory of applications software. The implementation phase of the program is now under way. It is based upon nearly 8 yr of study and should culminate in an initial operational capability before 1986. The objective of this paper is fivefold: 1) to discuss the factors motivating the NAS program, 2) to provide a history of the activity, 3) to describe each of the elements of the processing-system network, 4) to outline the proposed allocation of time to users of the facility, and 5) to describe some of the candidate problems being considered for the first benchmark codes

  2. Erlotinib-induced rash spares previously irradiated skin

    International Nuclear Information System (INIS)

    Lips, Irene M.; Vonk, Ernest J.A.; Koster, Mariska E.Y.; Houwing, Ronald H.

    2011-01-01

    Erlotinib is an epidermal growth factor receptor inhibitor prescribed to patients with locally advanced or metastasized non-small cell lung carcinoma after failure of at least one earlier chemotherapy treatment. Approximately 75% of the patients treated with erlotinib develop acneiform skin rashes. A patient treated with erlotinib 3 months after finishing concomitant treatment with chemotherapy and radiotherapy for non-small cell lung cancer is presented. Unexpectedly, the part of the skin that had been included in his previous radiotherapy field was completely spared from the erlotinib-induced acneiform skin rash. The exact mechanism of erlotinib-induced rash sparing in previously irradiated skin is unclear. The underlying mechanism of this phenomenon needs to be explored further, because the number of patients being treated with a combination of both therapeutic modalities is increasing. The therapeutic effect of erlotinib in the area of the previously irradiated lesion should be assessed. (orig.)

  3. [Prevalence of previously diagnosed diabetes mellitus in Mexico].

    Science.gov (United States)

    Rojas-Martínez, Rosalba; Basto-Abreu, Ana; Aguilar-Salinas, Carlos A; Zárate-Rojas, Emiliano; Villalpando, Salvador; Barrientos-Gutiérrez, Tonatiuh

    2018-01-01

    To compare the prevalence of previously diagnosed diabetes in 2016 with previous national surveys and to describe treatment and its complications. Mexico's national surveys Ensa 2000, Ensanut 2006, 2012 and 2016 were used. For 2016, logistic regression models and measures of central tendency and dispersion were obtained. The prevalence of previously diagnosed diabetes in 2016 was 9.4%. The increase of 2.2% relative to 2012 was not significant and was only observed in patients older than 60 years. While preventive measures have increased, access to medical treatment and lifestyle have not changed. The treatment has been modified, with an increase in insulin use and a decrease in hypoglycaemic agents. Population aging, lack of screening actions and the increase in diabetes complications will lead to an increase in the burden of disease. Policy measures targeting primary and secondary prevention of diabetes are crucial.

  4. Order statistics & inference estimation methods

    CERN Document Server

    Balakrishnan, N

    1991-01-01

    The literature on order statistics and inference is quite extensive and covers a large number of fields, but most of it is dispersed throughout numerous publications. This volume is the consolidation of the most important results and places an emphasis on estimation. Both theoretical and computational procedures are presented to meet the needs of researchers, professionals, and students. The methods of estimation discussed are well-illustrated with numerous practical examples from both the physical and life sciences, including sociology, psychology, and electrical and chemical engineering. A co

  5. Squamous cell carcinoma arising in previously burned or irradiated skin

    International Nuclear Information System (INIS)

    Edwards, M.J.; Hirsch, R.M.; Broadwater, J.R.; Netscher, D.T.; Ames, F.C.

    1989-01-01

    Squamous cell carcinoma (SCC) arising in previously burned or irradiated skin was reviewed in 66 patients treated between 1944 and 1986. Healing of the initial injury was complicated in 70% of patients. Mean interval from initial injury to diagnosis of SCC was 37 years. The overwhelming majority of patients presented with a chronic intractable ulcer in previously injured skin. The regional relapse rate after surgical excision was very high, 58% of all patients. Predominant patterns of recurrence were in local skin and regional lymph nodes (93% of recurrences). Survival rates at 5, 10, and 20 years were 52%, 34%, and 23%, respectively. Five-year survival rates in previously burned and irradiated patients were not significantly different (53% and 50%, respectively). This review, one of the largest reported series, better defines SCC arising in previously burned or irradiated skin as a locally aggressive disease that is distinct from SCC arising in sunlight-damaged skin. An increased awareness of the significance of chronic ulceration in scar tissue may allow earlier diagnosis. Regional disease control and survival depend on surgical resection of all known disease and may require radical lymph node dissection or amputation

  6. Outcome Of Pregnancy Following A Previous Lower Segment ...

    African Journals Online (AJOL)

    Background: A previous caesarean section is an important variable that influences patient management in subsequent pregnancies. A trial of vaginal delivery in such patients is a feasible alternative to a secondary section, thus helping to reduce the caesarean section rate and its associated co-morbidities. Objective: To ...

  7. 24 CFR 1710.552 - Previously accepted state filings.

    Science.gov (United States)

    2010-04-01

    ... of Substantially Equivalent State Law § 1710.552 Previously accepted state filings. (a) Materials... and contracts or agreements contain notice of purchaser's revocation rights. In addition see § 1715.15..., unless the developer is obligated to do so in the contract. (b) If any such filing becomes inactive or...

  8. The job satisfaction of principals of previously disadvantaged schools

    African Journals Online (AJOL)

    The aim of this study was to identify influences on the job satisfaction of previously disadvantaged ..... I am still riding the cloud … I hope it lasts. .... as a way of creating a climate and culture in schools where individuals are willing to explore.

  9. Haemophilus influenzae type f meningitis in a previously healthy boy

    DEFF Research Database (Denmark)

    Ronit, Andreas; Berg, Ronan M G; Bruunsgaard, Helle

    2013-01-01

    Non-serotype b strains of Haemophilus influenzae are extremely rare causes of acute bacterial meningitis in immunocompetent individuals. We report a case of acute bacterial meningitis in a 14-year-old boy, who was previously healthy and had been immunised against H influenzae serotype b (Hib...

  10. Research Note Effects of previous cultivation on regeneration of ...

    African Journals Online (AJOL)

    We investigated the effects of previous cultivation on regeneration potential under miombo woodlands in a resettlement area, a spatial product of Zimbabwe's land reforms. We predicted that cultivation would affect population structure, regeneration, recruitment and potential grazing capacity of rangelands. Plant attributes ...

  11. Cryptococcal meningitis in a previously healthy child | Chimowa ...

    African Journals Online (AJOL)

    An 8-year-old previously healthy female presented with a 3 weeks history of headache, neck stiffness, deafness, fever and vomiting and was diagnosed with cryptococcal meningitis. She had documented hearing loss and was referred to tertiary-level care after treatment with fluconazole did not improve her neurological ...

  12. Investigation of previously derived Hyades, Coma, and M67 reddenings

    International Nuclear Information System (INIS)

    Taylor, B.J.

    1980-01-01

    New Hyades polarimetry and field star photometry have been obtained to check the Hyades reddening, which was found to be nonzero in a previous paper. The new Hyades polarimetry implies essentially zero reddening; this is also true of polarimetry published by Behr (which was incorrectly interpreted in the previous paper). Four photometric techniques which are presumed to be insensitive to blanketing are used to compare the Hyades to nearby field stars; these four techniques also yield essentially zero reddening. When all of these results are combined with others which the author has previously published and a simultaneous solution for the Hyades, Coma, and M67 reddenings is made, the results are E(B−V) = 3 ± 2 (σ) mmag, −1 ± 3 (σ) mmag, and 46 ± 6 (σ) mmag, respectively. No support for a nonzero Hyades reddening is offered by the new results. When the newly obtained reddenings for the Hyades, Coma, and M67 are compared with results from techniques given by Crawford and by users of the David Dunlap Observatory photometric system, no differences between the new and other reddenings are found which are larger than about 2 sigma. The author had previously found that the M67 main-sequence stars have about the same blanketing as that of Coma and less blanketing than the Hyades; this conclusion is essentially unchanged by the revised reddenings.

  13. Rapid fish stock depletion in previously unexploited seamounts: the ...

    African Journals Online (AJOL)

    Rapid fish stock depletion in previously unexploited seamounts: the case of Beryx splendens from the Sierra Leone Rise (Gulf of Guinea) ... A spectral analysis and red-noise spectra procedure (REDFIT) algorithm was used to identify the red-noise spectrum from the gaps in the observed time-series of catch per unit effort by ...

  14. 18 CFR 154.302 - Previously submitted material.

    Science.gov (United States)

    2010-04-01

    ... 18 Conservation of Power and Water Resources 1 2010-04-01 2010-04-01 false Previously submitted material. 154.302 Section 154.302 Conservation of Power and Water Resources FEDERAL ENERGY REGULATORY... concurrently with the rate change filing. There must be furnished to the Director, Office of Energy Market...

  15. Process cells dismantling of EUREX plant: previous activities

    International Nuclear Information System (INIS)

    Gili, M.

    1998-01-01

    In the '98-'99 period some process cells of the EUREX plant will be dismantled, in order to install the liquid waste conditioning plant 'CORA' there. This report summarizes the previous activities (plant rinsing campaigns and inactive Cell 014 dismantling) carried out in the past three years and the experience gained.

  16. The job satisfaction of principals of previously disadvantaged schools

    African Journals Online (AJOL)

    The aim of this study was to identify influences on the job satisfaction of previously disadvantaged school principals in North-West Province. Evans's theory of job satisfaction, morale and motivation was useful as a conceptual framework. A mixed-methods explanatory research design was important in discovering issues with ...

  17. Obstructive pulmonary disease in patients with previous tuberculosis ...

    African Journals Online (AJOL)

    Obstructive pulmonary disease in patients with previous tuberculosis: Pathophysiology of a community-based cohort. B.W. Allwood, R Gillespie, M Galperin-Aizenberg, M Bateman, H Olckers, L Taborda-Barata, G.L. Calligaro, Q Said-Hartley, R van Zyl-Smit, C.B. Cooper, E van Rikxoort, J Goldin, N Beyers, E.D. Bateman ...

  18. Abiraterone in metastatic prostate cancer without previous chemotherapy

    NARCIS (Netherlands)

    Ryan, Charles J.; Smith, Matthew R.; de Bono, Johann S.; Molina, Arturo; Logothetis, Christopher J.; de Souza, Paul; Fizazi, Karim; Mainwaring, Paul; Piulats, Josep M.; Ng, Siobhan; Carles, Joan; Mulders, Peter F. A.; Basch, Ethan; Small, Eric J.; Saad, Fred; Schrijvers, Dirk; van Poppel, Hendrik; Mukherjee, Som D.; Suttmann, Henrik; Gerritsen, Winald R.; Flaig, Thomas W.; George, Daniel J.; Yu, Evan Y.; Efstathiou, Eleni; Pantuck, Allan; Winquist, Eric; Higano, Celestia S.; Taplin, Mary-Ellen; Park, Youn; Kheoh, Thian; Griffin, Thomas; Scher, Howard I.; Rathkopf, Dana E.; Boyce, A.; Costello, A.; Davis, I.; Ganju, V.; Horvath, L.; Lynch, R.; Marx, G.; Parnis, F.; Shapiro, J.; Singhal, N.; Slancar, M.; van Hazel, G.; Wong, S.; Yip, D.; Carpentier, P.; Luyten, D.; de Reijke, T.

    2013-01-01

    Abiraterone acetate, an androgen biosynthesis inhibitor, improves overall survival in patients with metastatic castration-resistant prostate cancer after chemotherapy. We evaluated this agent in patients who had not received previous chemotherapy. In this double-blind study, we randomly assigned

  19. Modelling landscape-level numerical responses of predators to prey: the case of cats and rabbits.

    Directory of Open Access Journals (Sweden)

    Jennyffer Cruz

    Predator-prey systems can extend over large geographical areas but empirical modelling of predator-prey dynamics has been largely limited to localised scales. This is due partly to difficulties in estimating predator and prey abundances over large areas. Collection of data at suitably large scales has been a major problem in previous studies of European rabbits (Oryctolagus cuniculus) and their predators. This applies in Western Europe, where conserving rabbits and predators such as Iberian lynx (Lynx pardinus) is important, and in other parts of the world where rabbits are an invasive species supporting populations of introduced, and sometimes native, predators. In pastoral regions of New Zealand, rabbits are the primary prey of feral cats (Felis catus) that threaten native fauna. We estimate the seasonal numerical response of cats to fluctuations in rabbit numbers in grassland-shrubland habitat across the Otago and Mackenzie regions of the South Island of New Zealand. We use spotlight counts over 1645 km of transects to estimate rabbit and cat abundances with a novel modelling approach that accounts simultaneously for environmental stochasticity, density dependence and varying detection probability. Our model suggests that cat abundance is related consistently to rabbit abundance in spring and summer, possibly through increased rabbit numbers improving the fecundity and juvenile survival of cats. Maintaining rabbits at low abundance should therefore suppress cat numbers, relieving predation pressure on native prey. Our approach provided estimates of the abundance of cats and rabbits over a large geographical area. This was made possible by repeated sampling within each season, which allows estimation of detection probabilities. A similar approach could be applied to predator-prey systems elsewhere, and could be adapted to any method of direct observation in which there is no double-counting of individuals. Reliable estimates of numerical

  20. Statistical methods of parameter estimation for deterministically chaotic time series

    Science.gov (United States)

    Pisarenko, V. F.; Sornette, D.

    2004-03-01

    We discuss the possibility of applying some standard statistical methods (the least-square method, the maximum likelihood method, and the method of statistical moments for estimation of parameters) to deterministically chaotic low-dimensional dynamic system (the logistic map) containing an observational noise. A “segmentation fitting” maximum likelihood (ML) method is suggested to estimate the structural parameter of the logistic map along with the initial value x1 considered as an additional unknown parameter. The segmentation fitting method, called “piece-wise” ML, is similar in spirit but simpler and has smaller bias than the “multiple shooting” previously proposed. Comparisons with different previously proposed techniques on simulated numerical examples give favorable results (at least, for the investigated combinations of sample size N and noise level). Besides, unlike some suggested techniques, our method does not require the a priori knowledge of the noise variance. We also clarify the nature of the inherent difficulties in the statistical analysis of deterministically chaotic time series and the status of previously proposed Bayesian approaches. We note the trade off between the need of using a large number of data points in the ML analysis to decrease the bias (to guarantee consistency of the estimation) and the unstable nature of dynamical trajectories with exponentially fast loss of memory of the initial condition. The method of statistical moments for the estimation of the parameter of the logistic map is discussed. This method seems to be the unique method whose consistency for deterministically chaotic time series is proved so far theoretically (not only numerically).
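
    As a much-simplified, hedged sketch of the estimation problem discussed (a naive one-step least-squares fit, not the segmentation-fitting ML or moment methods of the paper), the snippet below recovers the structural parameter r of the logistic map from noise-corrupted observations.

```python
import numpy as np

rng = np.random.default_rng(0)

def logistic_series(r, x1, n):
    """Iterate x_{k+1} = r * x_k * (1 - x_k) starting from x1."""
    x = np.empty(n)
    x[0] = x1
    for k in range(1, n):
        x[k] = r * x[k - 1] * (1.0 - x[k - 1])
    return x

# Simulated data: chaotic logistic map plus observational noise.
r_true, n, sigma = 3.8, 500, 0.01
y = logistic_series(r_true, 0.3, n) + sigma * rng.standard_normal(n)

# Naive one-step least squares: minimize sum_k (y_{k+1} - r * y_k * (1 - y_k))^2 over r.
g = y[:-1] * (1.0 - y[:-1])
r_hat = np.sum(y[1:] * g) / np.sum(g ** 2)
print(f"true r = {r_true}, one-step LS estimate = {r_hat:.4f}")
```

    With larger observational noise this one-step estimate becomes noticeably biased (an errors-in-variables effect), which is the kind of difficulty the segmentation-fitting ML approach is meant to address.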

  1. The Recidivism Patterns of Previously Deported Aliens Released from a Local Jail: Are They High-Risk Offenders?

    Science.gov (United States)

    Hickman, Laura J.; Suttorp, Marika J.

    2010-01-01

    Previously deported aliens are a group about which numerous claims are made but very few facts are known. Using data on male deportable aliens released from a local jail, the study sought to test the ubiquitous claim that they pose a high risk of recidivism. Using multiple measures of recidivism and propensity score weighting to account for…

  2. Reoperative sentinel lymph node biopsy after previous mastectomy.

    Science.gov (United States)

    Karam, Amer; Stempel, Michelle; Cody, Hiram S; Port, Elisa R

    2008-10-01

    Sentinel lymph node (SLN) biopsy is the standard of care for axillary staging in breast cancer, but many clinical scenarios questioning the validity of SLN biopsy remain. Here we describe our experience with reoperative-SLN (re-SLN) biopsy after previous mastectomy. Review of the SLN database from September 1996 to December 2007 yielded 20 procedures done in the setting of previous mastectomy. SLN biopsy was performed using radioisotope with or without blue dye injection superior to the mastectomy incision, in the skin flap in all patients. In 17 of 20 patients (85%), re-SLN biopsy was performed for local or regional recurrence after mastectomy. Re-SLN biopsy was successful in 13 of 20 patients (65%) after previous mastectomy. Of the 13 patients, 2 had positive re-SLN, and completion axillary dissection was performed, with 1 having additional positive nodes. In the 11 patients with negative re-SLN, 2 patients underwent completion axillary dissection demonstrating additional negative nodes. One patient with a negative re-SLN experienced chest wall recurrence combined with axillary recurrence 11 months after re-SLN biopsy. All others remained free of local or axillary recurrence. Re-SLN biopsy was unsuccessful in 7 of 20 patients (35%). In three of seven patients, axillary dissection was performed, yielding positive nodes in two of the three. The remaining four of seven patients all had previous modified radical mastectomy, so underwent no additional axillary surgery. In this small series, re-SLN was successful after previous mastectomy, and this procedure may play some role when axillary staging is warranted after mastectomy.

  3. On Kolmogorov asymptotics of estimators of the misclassification error rate in linear discriminant analysis

    KAUST Repository

    Zollanvari, Amin

    2013-05-24

    We provide a fundamental theorem that can be used in conjunction with Kolmogorov asymptotic conditions to derive the first moments of well-known estimators of the actual error rate in linear discriminant analysis of a multivariate Gaussian model under the assumption of a common known covariance matrix. The estimators studied in this paper are plug-in and smoothed resubstitution error estimators, both of which have not been studied before under Kolmogorov asymptotic conditions. As a result of this work, we present an optimal smoothing parameter that makes the smoothed resubstitution an unbiased estimator of the true error. For the sake of completeness, we further show how to utilize the presented fundamental theorem to achieve several previously reported results, namely the first moment of the resubstitution estimator and the actual error rate. We provide numerical examples to show the accuracy of the succeeding finite sample approximations in situations where the number of dimensions is comparable or even larger than the sample size.
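
    As a hedged numerical illustration of the two quantities the paper relates, the sketch below compares the resubstitution error estimate with the actual (conditional) error of a linear discriminant built under a known identity covariance; it is a plain finite-sample simulation, not the Kolmogorov-asymptotic analysis, and no smoothing is applied.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

p, n_per_class = 10, 20
mu0, mu1 = np.zeros(p), np.full(p, 0.5)  # true class means; common covariance = identity (known)

X0 = rng.multivariate_normal(mu0, np.eye(p), size=n_per_class)
X1 = rng.multivariate_normal(mu1, np.eye(p), size=n_per_class)

# Linear discriminant with known identity covariance: assign to class 1 when w.x + b > 0.
m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
w = m1 - m0
b = -0.5 * (m0 + m1) @ w

def predict(X):
    return (X @ w + b > 0).astype(int)

# Resubstitution estimate: error rate of the trained rule on its own training data.
resub = 0.5 * (np.mean(predict(X0) == 1) + np.mean(predict(X1) == 0))

# Actual (conditional) error of this particular rule, in closed form for Gaussian classes.
s = np.sqrt(w @ w)
actual = 0.5 * (norm.cdf((mu0 @ w + b) / s) + norm.cdf(-(mu1 @ w + b) / s))
print(f"resubstitution estimate = {resub:.3f}, actual error = {actual:.3f}")
```

    The resubstitution value typically comes out optimistically low relative to the actual error; the paper's asymptotic results characterize the first moments of exactly such estimators.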

  4. On Kolmogorov asymptotics of estimators of the misclassification error rate in linear discriminant analysis

    KAUST Repository

    Zollanvari, Amin; Genton, Marc G.

    2013-01-01

    We provide a fundamental theorem that can be used in conjunction with Kolmogorov asymptotic conditions to derive the first moments of well-known estimators of the actual error rate in linear discriminant analysis of a multivariate Gaussian model under the assumption of a common known covariance matrix. The estimators studied in this paper are plug-in and smoothed resubstitution error estimators, both of which have not been studied before under Kolmogorov asymptotic conditions. As a result of this work, we present an optimal smoothing parameter that makes the smoothed resubstitution an unbiased estimator of the true error. For the sake of completeness, we further show how to utilize the presented fundamental theorem to achieve several previously reported results, namely the first moment of the resubstitution estimator and the actual error rate. We provide numerical examples to show the accuracy of the succeeding finite sample approximations in situations where the number of dimensions is comparable or even larger than the sample size.

  5. Fast focus estimation using frequency analysis in digital holography.

    Science.gov (United States)

    Oh, Seungtaik; Hwang, Chi-Young; Jeong, Il Kwon; Lee, Sung-Keun; Park, Jae-Hyeung

    2014-11-17

    A novel fast frequency-based method to estimate the focus distance of a digital hologram for a single object is proposed. The focus distance is computed by analyzing the distribution of intersections of smoothed rays. The smoothed rays are determined by the directions of energy flow, which are computed from the local spatial frequency spectrum based on the windowed Fourier transform. So our method uses only the intrinsic frequency information of the optical field on the hologram and therefore does not require any sequential numerical reconstructions or the focus detection techniques of conventional photography, both of which are essential parts of previous methods. To show the effectiveness of our method, numerical results and analysis are presented as well.

  6. Left ventricular asynergy score as an indicator of previous myocardial infarction

    International Nuclear Information System (INIS)

    Backman, C.; Jacobsson, K.A.; Linderholm, H.; Osterman, G.

    1986-01-01

    Sixty-eight patients with coronary heart disease (CHD), i.e. a history of angina of effort and/or previous 'possible infarction', were examined inter alia with ECG and cinecardioangiography. A system of scoring was designed which allowed a semiquantitative estimate of the left ventricular asynergy from cinecardioangiography - the left ventricular motion score (LVMS). The LVMS was associated with the presence of a previous myocardial infarction (MI), as indicated by the history and ECG findings. The ECG changes specific for a previous MI were associated with high LVMS values and unspecific or absent ECG changes with low LVMS values. Decision thresholds for ECG changes and asynergy in diagnosing a previous MI were evaluated by means of a ROC analysis. The accuracy of ECG in detecting a previous MI was slightly higher when asynergy indicated a 'true MI' than when the autopsy result did so in a comparable group. Therefore the accuracy of asynergy (LVMS ≥ 1) in detecting a previous MI or myocardial fibrosis in patients with CHD should be at least comparable with that of autopsy (scar > 1 cm). (orig.)

  7. Estimation of spectral kurtosis

    Science.gov (United States)

    Sutawanir

    2017-03-01

    Rolling bearings are the most important elements in rotating machinery. Bearings frequently fall out of service for various reasons: heavy loads, unsuitable lubrication, ineffective sealing. Bearing faults may cause a decrease in performance. Analysis of bearing vibration signals has attracted attention in the field of monitoring and fault diagnosis, since these signals give rich information for early detection of bearing failures. Spectral kurtosis, SK, is a parameter in the frequency domain indicating how the impulsiveness of a signal varies with frequency. Faults in rolling bearings give rise to a series of short impulse responses as the rolling elements strike faults, making SK potentially useful for determining frequency bands dominated by bearing fault signals. SK can provide a measure of the distance of the analyzed bearing from a healthy one, and it supplies information complementary to the power spectral density (psd). This paper aims to explore the estimation of spectral kurtosis using the short-time Fourier transform, known as the spectrogram. The estimation of SK is similar to the estimation of the psd; it falls in the class of model-free, plug-in estimators. Some numerical studies using simulations are discussed to support the methodology. The spectral kurtosis of some stationary signals is obtained analytically and used in the simulation study. Kurtosis in the time domain has been a popular tool for detecting non-normality, and spectral kurtosis is its extension to the frequency domain. The relationship between time-domain and frequency-domain analysis is established through the Fourier-transform pair linking the autocovariance and the power spectrum. The Fourier transform is the main tool for estimation in the frequency domain, and the power spectral density is estimated through the periodogram. In this paper, the short-time Fourier transform estimate of the spectral kurtosis is reviewed, and a bearing fault (inner ring and outer ring) is simulated. The bearing response, power spectrum, and spectral kurtosis are plotted to
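
    As a hedged sketch of the spectrogram-based estimator described above, the snippet below computes a common STFT-based spectral kurtosis estimate, SK(f) = E|X(t,f)|^4 / (E|X(t,f)|^2)^2 - 2 (the -2 being the usual convention for complex, circularly symmetric spectra), for a synthetic signal of broadband noise plus repetitive decaying bursts; the fault model is a crude stand-in, not real bearing data.

```python
import numpy as np
from scipy.signal import stft

rng = np.random.default_rng(2)

# Synthetic signal: broadband noise plus periodic decaying 4 kHz bursts (crude fault surrogate).
fs, T = 20_000, 2.0
t = np.arange(int(fs * T)) / fs
x = 0.5 * rng.standard_normal(t.size)
for t0 in np.arange(0.0, T, 0.02):  # one burst every 20 ms
    x += np.exp(-2000.0 * np.clip(t - t0, 0.0, None)) * np.sin(2.0 * np.pi * 4000.0 * t) * (t >= t0)

# Spectrogram-based spectral kurtosis estimate.
f, _, X = stft(x, fs=fs, nperseg=256, noverlap=192)
p2 = np.mean(np.abs(X) ** 2, axis=1)
p4 = np.mean(np.abs(X) ** 4, axis=1)
sk = p4 / p2 ** 2 - 2.0
print(f"frequency of maximum SK: {f[np.argmax(sk)]:.0f} Hz")  # should sit near the 4 kHz bursts
```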

  8. Analytical and Numerical Studies of Several Fluid Mechanical Problems

    Science.gov (United States)

    Kong, D. L.

    2014-03-01

    In this thesis, three parts, each with several chapters, are respectively devoted to hydrostatic, viscous, and inertial fluids theories and applications. Involved topics include planetary, biological fluid systems, and high performance computing technology. In the hydrostatics part, the classical Maclaurin spheroids theory is generalized, for the first time, to a more realistic multi-layer model, establishing geometries of both the outer surface and the interfaces. For one of its astrophysical applications, the theory explicitly predicts physical shapes of surface and core-mantle-boundary for layered terrestrial planets, which enables the studies of some gravity problems, and the direct numerical simulations of dynamo flows in rotating planetary cores. As another application of the figure theory, the zonal flow in the deep atmosphere of Jupiter is investigated for a better understanding of the Jovian gravity field. An upper bound of gravity field distortions, especially in higher-order zonal gravitational coefficients, induced by deep zonal winds is estimated firstly. The oblate spheroidal shape of an undistorted Jupiter resulting from its fast solid body rotation is fully taken into account, which marks the most significant improvement from previous approximation based Jovian wind theories. High viscosity flows, for example Stokes flows, occur in a lot of processes involving low-speed motions in fluids. Microorganism swimming is such a typical case. A fully three dimensional analytic solution of incompressible Stokes equation is derived in the exterior domain of an arbitrarily translating and rotating prolate spheroid, which models a large family of microorganisms such as cocci bacteria. The solution is then applied to the magnetotactic bacteria swimming problem, and good consistency has been found between theoretical predictions and laboratory observations of the moving patterns of such bacteria under magnetic fields. In the analysis of dynamics of planetary

  9. Adaptive Methods for Permeability Estimation and Smart Well Management

    Energy Technology Data Exchange (ETDEWEB)

    Lien, Martha Oekland

    2005-04-01

    The main focus of this thesis is on adaptive regularization methods. We consider two different applications, the inverse problem of absolute permeability estimation and the optimal control problem of estimating smart well management. Reliable estimates of absolute permeability are crucial in order to develop a mathematical description of an oil reservoir. Due to the nature of most oil reservoirs, mainly indirect measurements are available. In this work, dynamic production data from wells are considered. More specifically, we have investigated the resolution power of pressure data for permeability estimation. The inversion of production data into permeability estimates constitutes a severely ill-posed problem. Hence, regularization techniques are required. In this work, deterministic regularization based on adaptive zonation is considered, i.e. a solution approach with adaptive multiscale estimation in conjunction with level set estimation is developed for coarse scale permeability estimation. A good mathematical reservoir model is a valuable tool for future production planning. Recent developments within well technology have given us smart wells, which yield increased flexibility in reservoir management. In this work, we investigate the problem of finding the optimal smart well management by means of hierarchical regularization techniques based on multiscale parameterization and refinement indicators. The thesis is divided into two main parts, where Part I gives a theoretical background for a collection of research papers that have been written by the candidate in collaboration with others. These constitute the most important part of the thesis, and are presented in Part II. A brief outline of the thesis follows below. Numerical aspects concerning calculations of derivatives will also be discussed. Based on the introduction to regularization given in Chapter 2, methods for multiscale zonation, i.e. adaptive multiscale estimation and refinement

  10. [Fatal amnioinfusion with previous choriocarcinoma in a parturient woman].

    Science.gov (United States)

    Hrgović, Z; Bukovic, D; Mrcela, M; Hrgović, I; Siebzehnrübl, E; Karelovic, D

    2004-04-01

    The case of a 36-year-old tertipara who had developed choriocarcinoma in a previous pregnancy is described. During labour at term the patient suffered cardiac arrest, so resuscitation and a caesarean section were performed. A male newborn was delivered in good condition, but despite intensive therapy and resuscitation the parturient died with a picture of disseminated intravascular coagulopathy (DIC). At autopsy and on histology there was no sign of malignant disease, so it was not possible to link the previous choriocarcinoma with the amniotic fluid embolism. The site of the choriocarcinoma may have been a 'locus minoris resistentiae' that later resulted in a failure of placentation, although this was hard to prove. At autopsy we found pulmonary embolism with microthrombosis of the terminal circulation and punctiform bleeding in the mucosa, consistent with DIC.

  11. Challenging previous conceptions of vegetarianism and eating disorders.

    Science.gov (United States)

    Fisak, B; Peterson, R D; Tantleff-Dunn, S; Molnar, J M

    2006-12-01

    The purpose of this study was to replicate and expand upon previous research that has examined the potential association between vegetarianism and disordered eating. Limitations of previous research studies are addressed, including possible low reliability of measures of eating pathology within vegetarian samples, use of only a few dietary restraint measures, and a paucity of research examining potential differences in body image and food choice motives of vegetarians versus nonvegetarians. Two hundred and fifty-six college students completed a number of measures of eating pathology and body image, and a food choice motives questionnaire. Interestingly, no significant differences were found between vegetarians and nonvegetarians in measures of eating pathology or body image. However, significant differences in food choice motives were found. Implications for both researchers and clinicians are discussed.

  12. Previously unreported abnormalities in Wolfram Syndrome Type 2.

    Science.gov (United States)

    Akturk, Halis Kaan; Yasa, Seda

    2017-01-01

    Wolfram syndrome (WFS) is a rare autosomal recessive disease with non-autoimmune childhood onset insulin dependent diabetes and optic atrophy. WFS type 2 (WFS2) differs from WFS type 1 (WFS1) with upper intestinal ulcers, bleeding tendency and the lack of diabetes insipidus. Lifespan is short due to related comorbidities. Only a few families have been reported with this syndrome with the CISD2 mutation. Here we report two siblings with a clinical diagnosis of WFS2, previously misdiagnosed with type 1 diabetes mellitus and diabetic retinopathy-related blindness. We report possible additional clinical and laboratory findings that have not been previously reported, such as asymptomatic hypoparathyroidism, osteomalacia, growth hormone (GH) deficiency and hepatomegaly. Even though not a requirement for the diagnosis of WFS2 currently, our case series confirms hypogonadotropic hypogonadism to be also a feature of this syndrome, as reported before. © Polish Society for Pediatric Endocrinology and Diabetology.

  13. Estimation of Poisson-Dirichlet Parameters with Monotone Missing Data

    Directory of Open Access Journals (Sweden)

    Xueqin Zhou

    2017-01-01

    This article considers the estimation of the unknown numerical parameters and of the density of the base measure in a Poisson-Dirichlet process prior with grouped monotone missing data. The numerical parameters are estimated by maximum likelihood and the density function by a kernel method. A set of simulations was conducted, which shows that the estimates perform well.
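
    The abstract combines maximum-likelihood estimation of the numerical parameters with a kernel estimate of the base-measure density. As a minimal sketch of the kernel step alone (not of the Poisson-Dirichlet machinery, and with hypothetical data and bandwidth), a Gaussian kernel density estimator can be written as:

    ```python
    import numpy as np

    def gaussian_kde(samples, grid, bandwidth):
        """Plain Gaussian kernel density estimate evaluated on a grid."""
        samples = np.asarray(samples)[:, None]      # shape (n, 1)
        grid = np.asarray(grid)[None, :]            # shape (1, m)
        z = (grid - samples) / bandwidth
        kernels = np.exp(-0.5 * z**2) / np.sqrt(2.0 * np.pi)
        return kernels.mean(axis=0) / bandwidth     # average kernel at each grid point

    rng = np.random.default_rng(1)
    data = rng.normal(loc=2.0, scale=0.5, size=200)  # hypothetical observations
    x = np.linspace(0.0, 4.0, 101)
    density = gaussian_kde(data, x, bandwidth=0.2)
    print(float(np.trapz(density, x)))               # close to 1.0, a sanity check
    ```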

  14. Previous climatic alterations are caused by the sun

    International Nuclear Information System (INIS)

    Groenaas, Sigbjoern

    2003-01-01

    The article surveys the scientific results of previous research into the contribution of the sun to climatic alterations. The author concludes that there is evidence of eight cold periods after the last ice age and that the alterations were largely due to climatic effects from the sun. However, these effects cause only a fraction of the registered global warming; it is assumed that human activities contribute the rest of the greenhouse effect.

  15. Influence of previous knowledge in Torrance tests of creative thinking

    OpenAIRE

    Aranguren, María; Consejo Nacional de Investigaciones Científicas y Técnicas CONICET

    2015-01-01

    The aim of this work is to analyze the influence of study field, expertise and participation in recreational activities on Torrance Tests of Creative Thinking (TTCT, 1974) performance. Several hypotheses were postulated to explore the possible effects of previous knowledge on the TTCT verbal and TTCT figural outcomes of university students. Participants in this study included 418 students from five study fields: Psychology; Philosophy and Literature; Music; Engineering; and Journalism and Advertisin...

  16. Analysis of previous screening examinations for patients with breast cancer

    International Nuclear Information System (INIS)

    Lee, Eun Hye; Cha, Joo Hee; Han, Dae Hee; Choi, Young Ho; Hwang, Ki Tae; Ryu, Dae Sik; Kwak, Jin Ho; Moon, Woo Kyung

    2007-01-01

    We wanted to improve the quality of subsequent screening by reviewing the previous screening examinations of breast cancer patients. Twenty-four breast cancer patients who had undergone previous screening were enrolled. All 24 had mammograms and 15 patients also had sonograms. We reviewed the screening examinations retrospectively according to the BI-RADS criteria and categorized the results into false negative, true negative, true positive and occult cancers. We also categorized the causes of false negative cancers into misperception, misinterpretation and technical factors, and then analyzed the attributing factors. Review of the previous screening revealed 66.7% (16/24) false negative, 25.0% (6/24) true negative, and 8.3% (2/24) true positive cancers. False negative cancers were attributable to the mammogram in 56.3% (9/16) and to the sonogram in 43.7% (7/16). For the false negative cases, all misperceptions were related to mammograms and were attributed to dense breasts, a lesion located at the edge of the glandular tissue or the image, and findings seen on one view only. Almost all misinterpretations were related to sonograms and attributed to loose application of the final assessment. To improve the quality of breast screening, it is essential to overcome the main causes of false negative examinations, including misperception and misinterpretation. We need systematic education and strict application of the final assessment categories of BI-RADS. For effective communication among physicians, it is also necessary to properly educate them about BI-RADS.

  17. Introduction to precise numerical methods

    CERN Document Server

    Aberth, Oliver

    2007-01-01

    Precise numerical analysis may be defined as the study of computer methods for solving mathematical problems either exactly or to prescribed accuracy. This book explains how precise numerical analysis is constructed. The book also provides exercises which illustrate points from the text and references for the methods presented. All disc-based content for this title is now available on the Web. · Clearer, simpler descriptions and explanations of the various numerical methods · Two new types of numerical problems; accurately solving partial differential equations with the included software and computing line integrals in the complex plane.

  18. Contributions to reinforced concrete structures numerical simulations

    International Nuclear Information System (INIS)

    Badel, P.B.

    2001-07-01

    In order to be able to carry out simulations of reinforced concrete structures, two aspects must be addressed: the behaviour laws have to reflect the complex behaviour of concrete, and a numerical environment has to be developed in order to spare the user difficulties due to the softening nature of the behaviour. This work deals with these two subjects. After a careful assessment of two behaviour models (microplane and mesoscopic models), two damage models (the first one using a scalar variable, the other one a second-order tensorial damage variable) are proposed. These two models belong to the framework of generalized standard materials, which renders their numerical integration easy and efficient. A method of load control is developed in order to ease the convergence of the calculations. At last, simulations of industrial structures illustrate the efficiency of the method. (O.M.)

  19. A student's guide to numerical methods

    CERN Document Server

    Hutchinson, Ian H

    2015-01-01

    This concise, plain-language guide for senior undergraduates and graduate students aims to develop intuition, practical skills and an understanding of the framework of numerical methods for the physical sciences and engineering. It provides accessible self-contained explanations of mathematical principles, avoiding intimidating formal proofs. Worked examples and targeted exercises enable the student to master the realities of using numerical techniques for common needs such as solution of ordinary and partial differential equations, fitting experimental data, and simulation using particle and Monte Carlo methods. Topics are carefully selected and structured to build understanding, and illustrate key principles such as: accuracy, stability, order of convergence, iterative refinement, and computational effort estimation. Enrichment sections and in-depth footnotes form a springboard to more advanced material and provide additional background. Whether used for self-study, or as the basis of an accelerated introdu...

  20. Numerical and experimental investigations on cavitation erosion

    Science.gov (United States)

    Fortes Patella, R.; Archer, A.; Flageul, C.

    2012-11-01

    A method is proposed to predict cavitation damage from cavitating flow simulations. For this purpose, a numerical process coupling cavitating flow simulations and erosion models was developed and applied to a two-dimensional (2D) hydrofoil tested at TUD (Darmstadt University of Technology, Germany) [1] and to a NACA 65012 tested at LMH-EPFL (Lausanne Polytechnic School) [2]. Cavitation erosion tests (pitting tests) were carried out and a 3D laser profilometry was used to analyze surfaces damaged by cavitation [3]. The method allows evaluating the pit characteristics, and mainly the volume damage rates. The paper describes the developed erosion model, the technique of cavitation damage measurement and presents some comparisons between experimental results and numerical damage predictions. The extent of cavitation erosion was correctly estimated in both hydrofoil geometries. The simulated qualitative influence of flow velocity, sigma value and gas content on cavitation damage agreed well with experimental observations.

  1. Representation of Numerical and Non-Numerical Order in Children

    Science.gov (United States)

    Berteletti, Ilaria; Lucangeli, Daniela; Zorzi, Marco

    2012-01-01

    The representation of numerical and non-numerical ordered sequences was investigated in children from preschool to grade 3. The child's conception of how sequence items map onto a spatial scale was tested using the Number-to-Position task (Siegler & Opfer, 2003) and new variants of the task designed to probe the representation of the alphabet…

  2. Ferrofluids: Modeling, numerical analysis, and scientific computation

    Science.gov (United States)

    Tomas, Ignacio

    This dissertation presents some developments in the Numerical Analysis of Partial Differential Equations (PDEs) describing the behavior of ferrofluids. The most widely accepted PDE model for ferrofluids is the Micropolar model proposed by R.E. Rosensweig. The Micropolar Navier-Stokes Equations (MNSE) is a subsystem of PDEs within the Rosensweig model. Being a simplified version of the much bigger system of PDEs proposed by Rosensweig, the MNSE are a natural starting point of this thesis. The MNSE couple linear velocity u, angular velocity w, and pressure p. We propose and analyze a first-order semi-implicit fully-discrete scheme for the MNSE, which decouples the computation of the linear and angular velocities, is unconditionally stable and delivers optimal convergence rates under assumptions analogous to those used for the Navier-Stokes equations. Moving onto the much more complex Rosensweig's model, we provide a definition (approximation) for the effective magnetizing field h, and explain the assumptions behind this definition. Unlike previous definitions available in the literature, this new definition is able to accommodate the effect of external magnetic fields. Using this definition we set up the system of PDEs coupling linear velocity u, pressure p, angular velocity w, magnetization m, and magnetic potential ϕ. We show that this system is energy-stable and devise a numerical scheme that mimics the same stability property. We prove that solutions of the numerical scheme always exist and, under certain simplifying assumptions, that the discrete solutions converge. A notable outcome of the analysis of the numerical scheme for the Rosensweig's model is the choice of finite element spaces that allow the construction of an energy-stable scheme. Finally, with the lessons learned from Rosensweig's model, we develop a diffuse-interface model describing the behavior of two-phase ferrofluid flows and present an energy-stable numerical scheme for this model. For a

  3. Numerical treatment of compartment models

    International Nuclear Information System (INIS)

    Einarsson, B.

    1984-11-01

    This report describes an interactive program RADIO (Radioactive Decay Information Online) for studying the radioactive decay process, with applications to many ecological problems not necessarily involving radioactive processes. Starting with the compartment coefficients and initial values of the various compartments, the problem is solved as a system of linear ordinary differential equations. The method of solution is the direct use of matrix exponentials or the backward differences method. A program INVERS is also available for the solution of the inverse problem, that is, parameter estimation in a system of linear ordinary differential equations when the solution is available pointwise. The output can be printed on a line printer either from a result file or from the plot file, which of course can also be used to produce graphic output. The plot file is processed by the plotting program VISION or by the auxiliary printing program RADAR. Another file can be used for a later restart from the point of time where the previous computation was aborted, or from an arbitrary point of time if the relevant starting information is available. This is useful in order to avoid the manual input of a compartment matrix if it is similar to one used before. When the program RADIO is run, the user answers the questions asked by the program. The programs are written in Fortran 77 for the Digital Equipment VAX 11 with graphical presentation on a Tektronix 4010, and are available from the author. (Author)
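
    As a brief sketch of the matrix-exponential solution strategy mentioned above (and not of the RADIO/INVERS programs themselves), a linear two-compartment system can be advanced in time as follows; the rate constants are hypothetical.

    ```python
    import numpy as np
    from scipy.linalg import expm

    # Hypothetical two-compartment system dx/dt = A x:
    # compartment 1 decays at rate k1 into compartment 2, which decays at rate k2.
    k1, k2 = 0.3, 0.05                    # 1/day, illustrative values only
    A = np.array([[-k1, 0.0],
                  [ k1, -k2]])
    x0 = np.array([1.0, 0.0])             # initial inventories

    for t in (0.0, 5.0, 10.0, 30.0):      # days
        x_t = expm(A * t) @ x0            # exact solution of the linear ODE system
        print(f"t = {t:5.1f} d  compartments = {x_t}")
    ```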

  4. Numerical Asymptotic Solutions Of Differential Equations

    Science.gov (United States)

    Thurston, Gaylen A.

    1992-01-01

    Numerical algorithms are derived and compared with classical analytical methods. In the method, asymptotic expansions are replaced with integrals evaluated numerically. The resulting numerical solutions retain linear independence, the main advantage of asymptotic solutions.

  5. MCNP HPGe detector benchmark with previously validated Cyltran model.

    Science.gov (United States)

    Hau, I D; Russ, W R; Bronson, F

    2009-05-01

    An exact copy of the detector model generated for Cyltran was reproduced as an MCNP input file and the detection efficiency was calculated similarly with the methodology used in previous experimental measurements and simulation of a 280 cm(3) HPGe detector. Below 1000 keV the MCNP data correlated to the Cyltran results within 0.5% while above this energy the difference between MCNP and Cyltran increased to about 6% at 4800 keV, depending on the electron cut-off energy.

  6. HEART TRANSPLANTATION IN PATIENTS WITH PREVIOUS OPEN HEART SURGERY

    Directory of Open Access Journals (Sweden)

    R. Sh. Saitgareev

    2016-01-01

    Heart transplantation (HTx) to date remains the most effective and radical method of treatment of patients with end-stage heart failure. The deficit of donor hearts increasingly forces a resort to different long-term mechanical circulatory support systems, including as a «bridge» to the follow-up HTx. According to the ISHLT Registry, the number of recipients who had previously undergone cardiopulmonary bypass surgery increased from 40% in the period from 2004 to 2008 to 49.6% for the period from 2009 to 2015. HTx performed in such repeat patients, on the one hand, involves considerable technical difficulties and high risks; on the other hand, there is often no alternative medical intervention to HTx, and unless dictated by absolute contraindications the denial of the surgery is equivalent to 100% mortality. This review summarizes the results of a number of published studies aimed at understanding the immediate and late results of HTx in patients who previously underwent open heart surgery. The effect of resternotomy during HTx, the specific features associated with its implementation in recipients previously operated on open heart, and its effects on immediate and long-term survival are considered in this review. Results of studies analyzing the risk factors for perioperative complications in repeat recipients are also presented. Separately, HTx risks after implantation of prolonged mechanical circulatory support systems were examined. The literature does not allow us to clearly define the impact of earlier open heart surgery on the course of the perioperative period and on the prognosis of survival in recipients who underwent HTx. On the other hand, subject to a regular course of HTx and of the perioperative period, the risks in this clinical situation are justified, and the long-term prognosis of recipients who previously underwent open heart surgery is comparable to that of patients who underwent primary HTx. Studies

  7. Economic impact of feeding a phenylalanine-restricted diet to adults with previously untreated phenylketonuria.

    Science.gov (United States)

    Brown, M C; Guest, J F

    1999-02-01

    The aim of the present study was to estimate the direct healthcare cost of managing adults with previously untreated phenylketonuria (PKU) for one year before any dietary restrictions and for the first year after a phenylalanine- (PHE-) restricted diet was introduced. The resource use and corresponding costs were estimated from medical records and interviews with health care professionals experienced in caring for adults with previously untreated PKU. The mean annual cost of caring for a client being fed an unrestricted diet was estimated to be £83,996. In the first year after introducing a PHE-restricted diet, the mean annual cost was reduced by £20,647 to £63,348 as a result of a reduction in nursing time, hospitalizations, outpatient clinic visits and medications. However, the economic benefit of the diet depended on whether the clients were previously high or low users of nursing care. Nursing time was the key cost-driver, accounting for 79% of the cost of managing high users and 31% of the management cost for low users. In contrast, the acquisition cost of a PHE-restricted diet accounted for up to 6% of the cost for managing high users and 15% of the management cost for low users. Sensitivity analyses showed that introducing a PHE-restricted diet reduces the annual cost of care, provided that annual nursing time is reduced by more than 8% or more than 5% of clients respond to the diet. The clients showed fewer negative behaviours when being fed a PHE-restricted diet, which may account for the observed reduction in nursing time needed to care for these clients. In conclusion, feeding a PHE-restricted diet to adults with previously untreated PKU leads to economic benefits to the UK's National Health Service and society in general.

  8. How to Circumvent Church Numerals

    DEFF Research Database (Denmark)

    Goldberg, Mayer; Torgersen, Mads

    2002-01-01

    In this work we consider a standard numeral system in the lambda-calculus, and the elementary arithmetic and Boolean functions and predicates defined on this numeral system, and show how to construct terms that "circumvent" or "defeat" these functions: The equality predicate is satisfied when com...
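
    For readers unfamiliar with the numeral system referred to above: a Church numeral encodes n as the function that applies its argument n times. The hedged sketch below shows the standard encoding together with successor, addition and multiplication; it does not reproduce the "circumventing" terms constructed in the paper.

    ```python
    # Church numerals: n is encoded as lambda f: lambda x: f applied n times to x.
    zero = lambda f: lambda x: x
    succ = lambda n: lambda f: lambda x: f(n(f)(x))
    add  = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))
    mul  = lambda m: lambda n: lambda f: m(n(f))

    def to_int(n):
        """Decode a Church numeral by applying (+1) to 0."""
        return n(lambda k: k + 1)(0)

    one, two, three = succ(zero), succ(succ(zero)), succ(succ(succ(zero)))
    print(to_int(add(two)(three)))   # 5
    print(to_int(mul(two)(three)))   # 6
    ```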

  9. Numerical Gram-Schmidt orthonormalization

    International Nuclear Information System (INIS)

    Werneth, Charles M; Dhar, Mallika; Maung, Khin Maung; Sirola, Christopher; Norbury, John W

    2010-01-01

    A numerical Gram-Schmidt orthonormalization procedure is presented for constructing an orthonormal basis function set from a non-orthonormal set, when the number of basis functions is large. This method will provide a pedagogical illustration of the Gram-Schmidt procedure and can be presented in classes on numerical methods or computational physics.
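
    A bare-bones version of the procedure described above might look as follows (classical Gram-Schmidt with normalization and a tolerance for numerically dependent vectors); for large or nearly dependent basis sets, the modified variant or reorthogonalization is usually preferred.

    ```python
    import numpy as np

    def gram_schmidt(vectors, tol=1e-12):
        """Orthonormalize the rows of `vectors` by classical Gram-Schmidt."""
        basis = []
        for v in np.asarray(vectors, dtype=float):
            w = v.copy()
            for q in basis:
                w -= np.dot(q, v) * q      # remove the component along each previous q
            norm = np.linalg.norm(w)
            if norm > tol:                 # skip (numerically) dependent vectors
                basis.append(w / norm)
        return np.array(basis)

    V = np.array([[1.0, 1.0, 0.0],
                  [1.0, 0.0, 1.0],
                  [0.0, 1.0, 1.0]])
    Q = gram_schmidt(V)
    print(np.round(Q @ Q.T, 12))           # identity matrix confirms orthonormality
    ```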

  10. Transportation package design using numerical optimization

    International Nuclear Information System (INIS)

    Harding, D.C.; Witkowski, W.R.

    1992-01-01

    The design of structures and engineering systems has always been an iterative process whose complexity was dependent upon the boundary conditions, constraints and available analytical tools. Transportation packaging design is no exception with structural, thermal and radiation shielding constraints based on regulatory hypothetical accident conditions. Transportation packaging design is often accomplished by a group of specialists, each designing a single component based on one or more simple criteria, pooling results with the group, evaluating the "pooled" design, and then reiterating the entire process until a satisfactory design is reached. The manual iterative methods used by the designer/analyst can be summarized in the following steps: design the part, analyze the part, interpret the analysis results, modify the part, and re-analyze the part. The inefficiency of this design practice and the frequently conservative result suggests the need for a more structured design methodology, which can simultaneously consider all of the design constraints. Numerical optimization is a structured design methodology whose maturity in development has allowed it to become a primary design tool in many industries. The purpose of this overview is twofold: first, to outline the theory and basic elements of numerical optimization; and second, to show how numerical optimization can be applied to the transportation packaging industry and used to increase efficiency and safety of radioactive and hazardous material transportation packages. A more extensive review of numerical optimization and its applications to radioactive material transportation package design was performed previously by the authors (Witkowski and Harding 1992). A proof-of-concept Type B package design is also presented as a simplified example of potential improvements achievable using numerical optimization in the design process
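
    As a toy illustration of the structured methodology described above -- handling several constraints simultaneously inside one optimization loop -- rather than an actual packaging design tool, one might minimize a mass-like objective subject to stress- and shielding-like constraints; every function, variable and limit below is hypothetical.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    # Hypothetical design variables: x = [wall_thickness, shield_thickness] in cm.
    def mass(x):
        return 4.0 * x[0] + 9.0 * x[1]             # surrogate for package mass

    def stress_margin(x):                          # >= 0 when drop-test stress is acceptable
        return x[0] - 2.0 / (0.5 + x[1])

    def dose_margin(x):                            # >= 0 when shielding is adequate
        return 3.0 * x[1] + x[0] - 5.0

    result = minimize(
        mass, x0=[3.0, 3.0], method="SLSQP",
        bounds=[(0.5, 10.0), (0.5, 10.0)],
        constraints=[{"type": "ineq", "fun": stress_margin},
                     {"type": "ineq", "fun": dose_margin}],
    )
    print(result.x, result.fun)                    # constrained minimum-mass design
    ```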

  11. Wave Velocity Estimation in Heterogeneous Media

    KAUST Repository

    Asiri, Sharefa M.; Laleg-Kirati, Taous-Meriem

    2016-01-01

    In this paper, a modulating functions-based method is proposed for estimating the space-time dependent unknown velocity in the wave equation. The proposed method simplifies the identification problem into a system of linear algebraic equations. Numerical

  12. Bayesian estimation of the discrete coefficient of determination.

    Science.gov (United States)

    Chen, Ting; Braga-Neto, Ulisses M

    2016-12-01

    The discrete coefficient of determination (CoD) measures the nonlinear interaction between discrete predictor and target variables and has had far-reaching applications in Genomic Signal Processing. Previous work has addressed the inference of the discrete CoD using classical parametric and nonparametric approaches. In this paper, we introduce a Bayesian framework for the inference of the discrete CoD. We derive analytically the optimal minimum mean-square error (MMSE) CoD estimator, as well as a CoD estimator based on the Optimal Bayesian Predictor (OBP). For the latter estimator, exact expressions for its bias, variance, and root-mean-square (RMS) are given. The accuracy of both Bayesian CoD estimators with non-informative and informative priors, under fixed or random parameters, is studied via analytical and numerical approaches. We also demonstrate the application of the proposed Bayesian approach in the inference of gene regulatory networks, using gene-expression data from a previously published study on metastatic melanoma.

  13. Proteomics Analysis Reveals Previously Uncharacterized Virulence Factors in Vibrio proteolyticus

    Directory of Open Access Journals (Sweden)

    Ann Ray

    2016-07-01

    Members of the genus Vibrio include many pathogens of humans and marine animals that share genetic information via horizontal gene transfer. Hence, the Vibrio pan-genome carries the potential to establish new pathogenic strains by sharing virulence determinants, many of which have yet to be characterized. Here, we investigated the virulence properties of Vibrio proteolyticus, a Gram-negative marine bacterium previously identified as part of the Vibrio consortium isolated from diseased corals. We found that V. proteolyticus causes actin cytoskeleton rearrangements followed by cell lysis in HeLa cells in a contact-independent manner. In search of the responsible virulence factor involved, we determined the V. proteolyticus secretome. This proteomics approach revealed various putative virulence factors, including active type VI secretion systems and effectors with virulence toxin domains; however, these type VI secretion systems were not responsible for the observed cytotoxic effects. Further examination of the V. proteolyticus secretome led us to hypothesize and subsequently demonstrate that a secreted hemolysin, belonging to a previously uncharacterized clan of the leukocidin superfamily, was the toxin responsible for the V. proteolyticus-mediated cytotoxicity in both HeLa cells and macrophages. Clearly, there remains an armory of yet-to-be-discovered virulence factors in the Vibrio pan-genome that will undoubtedly provide a wealth of knowledge on how a pathogen can manipulate host cells.

  14. Incidence of Acneform Lesions in Previously Chemically Damaged Persons-2004

    Directory of Open Access Journals (Sweden)

    N Dabiri

    2008-04-01

    ABSTRACT: Introduction & Objective: Chemical gas weapons, especially nitrogen mustard, which were used in the Iraq-Iran war against Iranian troops, have several harmful effects on the skin. Some other chemical agents can also cause acneform lesions on the skin. The purpose of this study was to compare the incidence of acneform lesions in previously chemically damaged soldiers and non-chemically damaged persons. Materials & Methods: In this descriptive and analytical study, 180 chemically damaged soldiers, who had been referred to the dermatology clinic between 2000 and 2004, and forty non-chemically damaged people were chosen randomly and examined for acneform lesions. SPSS software was used for statistical analysis of the data. Results: The mean age of the experimental group was 37.5 ± 5.2 years and that of the control group was 38.7 ± 5.9 years. The mean percentage of chemical damage in cases was 31 percent and the time since the chemical damage was 15.2 ± 1.1 years. Ninety-seven cases (53.9 percent) of the subjects and 19 people (47.5 percent) of the control group had some degree of acne. No significant difference was found in incidence, degree of lesions, site of lesions or age of subjects between the two groups. No significant correlation was noted between percentage of chemical damage and incidence and degree of lesions in the case group. Conclusion: The incidence of acneform lesions among previously chemically injured people was not higher than in normal cases.

  15. Relationship of deer and moose populations to previous winters' snow

    Science.gov (United States)

    Mech, L.D.; McRoberts, R.E.; Peterson, R.O.; Page, R.E.

    1987-01-01

    (1) Linear regression was used to relate snow accumulation during single and consecutive winters with white-tailed deer (Odocoileus virginianus) fawn:doe ratios, moose (Alces alces) twinning rates and calf:cow ratios, and annual changes in deer and moose populations. Significant relationships were found between snow accumulation during individual winters and these dependent variables during the following year. However, the strongest relationships were between the dependent variables and the sums of the snow accumulations over the previous three winters. The percentage of the variability explained was 36% to 51%. (2) Significant relationships were also found between winter vulnerability of moose calves and the sum of the snow accumulations in the current, and up to seven previous, winters, with about 49% of the variability explained. (3) No relationship was found between wolf numbers and the above dependent variables. (4) These relationships imply that winter influences on maternal nutrition can accumulate for several years and that this cumulative effect strongly determines fecundity and/or calf and fawn survivability. Although wolf (Canis lupus L.) predation is the main direct mortality agent on fawns and calves, wolf density itself appears to be secondary to winter weather in influencing the deer and moose populations.
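
    As an illustration of the cumulative-predictor idea used above (with synthetic numbers, not the authors' data), the sketch below regresses a made-up fawn:doe ratio on the sum of snow accumulation over the current and two previous winters.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    snow = rng.gamma(shape=4.0, scale=30.0, size=20)   # hypothetical winter snow totals (cm)

    # Cumulative predictor: snow summed over the current and two previous winters.
    snow3 = np.array([snow[i - 2:i + 1].sum() for i in range(2, len(snow))])
    fawn_doe = 0.9 - 0.001 * snow3 + rng.normal(0.0, 0.03, snow3.size)  # synthetic response

    slope, intercept = np.polyfit(snow3, fawn_doe, 1)  # ordinary least-squares fit
    pred = intercept + slope * snow3
    r2 = 1 - np.sum((fawn_doe - pred) ** 2) / np.sum((fawn_doe - fawn_doe.mean()) ** 2)
    print(slope, r2)
    ```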

  16. [ANTITHROMBOTIC MEDICATION IN PREGNANT WOMEN WITH PREVIOUS INTRAUTERINE GROWTH RESTRICTION].

    Science.gov (United States)

    Neykova, K; Dimitrova, V; Dimitrov, R; Vakrilova, L

    2016-01-01

    To analyze pregnancy outcome in patients who were on antithrombotic medication (AM) because of a previous pregnancy with fetal intrauterine growth restriction (IUGR). The studied group (SG) included 21 pregnancies in 15 women with a history of previous IUGR. The patients were on low dose aspirin (LDA) and/or low molecular weight heparin (LMWH). Pregnancy outcome was compared to that in two further groups: 1) a primary group (PG) including the previous 15 pregnancies with IUGR of the same women; 2) a control group (CG) including 45 pregnancies of women matched for parity with the ones in the SG, with no history of IUGR and without medication. The SG, PG and CG were compared for the following: mean gestational age (g.a.) at birth, mean birth weight (BW), proportion of cases with early preeclampsia (PE), IUGR (total, moderate, and severe), intrauterine fetal death (IUFD), neonatal death (NND), admission to the NICU, and cesarean section (CS) because of chronic or acute fetal distress (FD) related to IUGR, PE or placental abruption. Student's t-test was applied to assess differences between the groups. P values < 0.05 were considered statistically significant. The differences between the SG and the PG regarding mean g.a. at delivery (33.7 and 29.8 w.g. respectively) and the proportion of babies admitted to the NICU (66.7% vs. 71.4%) were not statistically significant. The mean BW in the SG (2114.7 g) was significantly higher than in the PG (1090.8 g). In the SG compared with the PG there were significantly fewer cases of IUFD (14.3% and 53.3% respectively), early PE (9.5% vs. 46.7%), and moderate and severe IUGR (10.5% and 36.8% vs. 41.7% and 58.3%). Neonatal mortality in the SG (5.6%) was significantly lower than in the PG (57.1%). The proportion of CS for FD was not significantly different--53.3% in the SG and 57.1% in the PG. On the other hand, comparison between the SG and the CG demonstrated significantly lower g.a. at delivery in the SG (33.7 vs. 38 w.g.) and lower BW (2114 vs. 3094 g

  17. The Adriatic response to the bora forcing. A numerical study

    International Nuclear Information System (INIS)

    Rachev, N.

    2001-01-01

    This paper deals with the bora wind effect on the Adriatic Sea circulation as simulated by a 3-D numerical code (the DieCAST model). The main result of this forcing is the formation of intense upwelling along the eastern coast in agreement with previous theoretical studies and observations. Different numerical experiments are discussed for various boundary and initial conditions to evaluate their influence on both circulation and upwelling patterns

  18. A numerical simulation of pre-big bang cosmology

    CERN Document Server

    Maharana, J P; Veneziano, Gabriele

    1998-01-01

    We analyse numerically the onset of pre-big bang inflation in an inhomogeneous, spherically symmetric Universe. Adding a small dilatonic perturbation to a trivial (Milne) background, we find that suitable regions of space undergo dilaton-driven inflation and quickly become spatially flat ($\Omega \to 1$). Numerical calculations are pushed close enough to the big bang singularity to allow cross checks against previously proposed analytic asymptotic solutions.

  19. Estimation of population mean under systematic sampling

    Science.gov (United States)

    Noor-ul-amin, Muhammad; Javaid, Amjad

    2017-11-01

    In this study we propose a generalized ratio estimator under non-response for systematic random sampling. We also generate a class of estimators as special cases of the generalized estimator, using different combinations of the coefficients of correlation, kurtosis and variation. The mean square errors and mathematical conditions are also derived to prove the efficiency of the proposed estimators. A numerical illustration is included, using three populations, to support the results.
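
    The classical ratio estimator underlying the generalized version proposed above uses an auxiliary variable whose population mean is known. A minimal sketch with synthetic data (ignoring the non-response adjustment and the coefficient-based variants of the paper) is:

    ```python
    import numpy as np

    def ratio_estimate(y_sample, x_sample, x_pop_mean):
        """Classical ratio estimator of the population mean of y,
        using an auxiliary variable x with known population mean."""
        r_hat = y_sample.mean() / x_sample.mean()
        return r_hat * x_pop_mean

    rng = np.random.default_rng(3)
    N, k = 1000, 10                                # population size, sampling interval
    x_pop = rng.uniform(10.0, 50.0, size=N)        # hypothetical auxiliary variable
    y_pop = 2.0 * x_pop + rng.normal(0.0, 5.0, N)  # study variable correlated with x

    start = rng.integers(k)                        # 1-in-k systematic sample
    idx = np.arange(start, N, k)
    est = ratio_estimate(y_pop[idx], x_pop[idx], x_pop.mean())
    print(est, y_pop.mean())                       # estimate vs. true population mean
    ```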

  20. Estimation of complex permittivity using loop antenna

    DEFF Research Database (Denmark)

    Lenler-Eriksen, Hans-Rudolph; Meincke, Peter

    2004-01-01

    A method for estimating the complex permittivity of materials in the vicinity of a loop antenna is proposed. The method is based on comparing measured and numerically calculated input admittances for the loop antenna.
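
    The record above is terse, but the stated idea -- compare a measured input admittance with numerically computed admittances for candidate permittivities and keep the best match -- can be sketched as a one-dimensional least-squares search. The admittance model below is a made-up stand-in for the electromagnetic solver, and all numbers are hypothetical.

    ```python
    import numpy as np

    def modeled_admittance(eps_r):
        """Hypothetical stand-in for a numerical solver returning the loop
        antenna input admittance (siemens) for a given relative permittivity."""
        return 1e-3 * np.sqrt(eps_r) + 1j * (2e-3 * eps_r - 1e-3)

    eps_true = 4.2
    y_measured = modeled_admittance(eps_true) * 1.01      # "measurement" with 1% error

    candidates = np.linspace(1.0, 10.0, 901)
    misfit = np.abs(np.array([modeled_admittance(e) for e in candidates]) - y_measured)
    eps_estimate = candidates[np.argmin(misfit)]          # best-matching permittivity
    print(eps_estimate)
    ```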

  1. Analytical estimates of structural behavior

    CERN Document Server

    Dym, Clive L

    2012-01-01

    Explicitly reintroducing the idea of modeling to the analysis of structures, Analytical Estimates of Structural Behavior presents an integrated approach to modeling and estimating the behavior of structures. With the increasing reliance on computer-based approaches in structural analysis, it is becoming even more important for structural engineers to recognize that they are dealing with models of structures, not with the actual structures. As tempting as it is to run innumerable simulations, closed-form estimates can be effectively used to guide and check numerical results, and to confirm phys

  2. A novel numerical approach for workspace determination of parallel mechanisms

    Energy Technology Data Exchange (ETDEWEB)

    Zhou, Yiqun; Niu, Junchuan; Liu, Zhihui; Zhang, Fuliang [Shandong University, Shandong (China)

    2017-06-15

    In this paper, a novel numerical approach is proposed for workspace determination of parallel mechanisms. Compared with the classical numerical approaches, this presented approach discretizes both location and orientation of the mechanism simultaneously, not only one of the two. This technique makes the presented numerical approach applicable in determining almost all types of workspaces, while traditional numerical approaches are only applicable in determining the constant orientation workspace and orientation workspace. The presented approach and its steps to determine the inclusive orientation workspace and total orientation workspace are described in detail. A lower-mobility parallel mechanism and a six-degrees-of-freedom Stewart platform are set as examples, the workspaces of these mechanisms are estimated and visualized by the proposed numerical approach. Furthermore, the efficiency of the presented approach is discussed. The examples show that the presented approach is applicable in determining the inclusive orientation workspace and total orientation workspace of parallel mechanisms with high efficiency.

  3. Adaptive optimisation-offline cyber attack on remote state estimator

    Science.gov (United States)

    Huang, Xin; Dong, Jiuxiang

    2017-10-01

    Security issues of cyber-physical systems have received increasing attention in recent years. In this paper, deception attacks on a remote state estimator equipped with a chi-squared failure detector are considered, and it is assumed that the attacker can monitor and modify all the sensor data. A novel adaptive optimisation-offline cyber attack strategy is proposed, where, using the current and previous sensor data, the attack can yield the largest estimation error covariance while remaining undetected by the chi-squared monitor. From the attacker's perspective, the attack degrades the system performance more effectively than the existing linear deception attacks. Finally, some numerical examples are provided to demonstrate the theoretical results.

  4. EFFECTS OF DIFFERENT NUMERICAL INTERFACE METHODS ON HYDRODYNAMICS INSTABILITY

    Energy Technology Data Exchange (ETDEWEB)

    FRANCOIS, MARIANNE M. [Los Alamos National Laboratory; DENDY, EDWARD D. [Los Alamos National Laboratory; LOWRIE, ROBERT B. [Los Alamos National Laboratory; LIVESCU, DANIEL [Los Alamos National Laboratory; STEINKAMP, MICHAEL J. [Los Alamos National Laboratory

    2007-01-11

    The authors compare the effects of different numerical schemes for the advection and material interface treatments on the single-mode Rayleigh-Taylor instability, using the RAGE hydro-code. The interface growth and its surface density (interfacial area) versus time are investigated. The surface density metric proves better suited to characterizing differences in the flow than the conventional interface growth metric. They have found that Van Leer's limiter combined with no interface treatment leads to the largest surface area. Finally, to quantify the difference between the numerical methods, they have estimated the numerical viscosity in the linear regime at different scales.

  5. Some Numerical Aspects on Crowd Motion - The Hughes Model

    KAUST Repository

    Gomes, Diogo A.

    2016-01-06

    Here, we study a crowd model proposed by R. Hughes in [5] and we describe a numerical approach to solve it. This model comprises a Fokker-Planck equation coupled with an Eikonal equation with Dirichlet or Neumann data. First, we establish a priori estimates for the solution. Second, we study radial solutions and identify a shock formation mechanism. Third, we illustrate the existence of congestion, the breakdown of the model, and the trend to the equilibrium. Finally, we propose a new numerical method and consider two numerical examples.

  6. Deepwater Gulf of Mexico more profitable than previously thought

    International Nuclear Information System (INIS)

    Craig, M.J.K.; Hyde, S.T.

    1997-01-01

    Economic evaluations and recent experience show that the deepwater Gulf of Mexico (GOM) is much more profitable than previously thought. Four factors contributing to the changed viewpoint are: First, deepwater reservoirs have proved to have excellent productive capacity, distribution, and continuity when compared to correlative-age shelf deltaic sands. Second, improved technologies and lower perceived risks have lowered the cost of floating production systems (FPSs). Third, projects now get on-line quicker. Fourth, a collection of other important factors are: Reduced geologic risk and associated high success rates for deepwater GOM wells due primarily to improved seismic imaging and processing tools (3D, AVO, etc.); absence of any political risk in the deepwater GOM (common overseas, and very significant in some international areas); and positive impact of deepwater federal royalty relief. This article uses hypothetical reserve distributions and price forecasts to illustrate indicative economics of deepwater prospects. Economics of Shell Oil Co.'s three deepwater projects are also discussed

  7. Corneal perforation after conductive keratoplasty with previous refractive surgery.

    Science.gov (United States)

    Kymionis, George D; Titze, Patrik; Markomanolakis, Marinos M; Aslanides, Ioannis M; Pallikaris, Ioannis G

    2003-12-01

    A 56-year-old woman had conductive keratoplasty (CK) for residual hyperopia and astigmatism. Three years before the procedure, the patient had arcuate keratotomy, followed by laser in situ keratomileusis 2 years later for high astigmatism correction in both eyes. During CK, a corneal perforation occurred in the right eye; during the postoperative examination, an iris perforation and anterior subcapsule opacification were seen beneath the perforation site. The perforation was managed with a bandage contact lens and an antibiotic-steroid ointment; it had a negative Seidel sign by the third day. The surgery in the left eye was uneventful. Three months after the procedure, the uncorrected visual acuity was 20/32 and the best corrected visual acuity 20/20 in both eyes with a significant improvement in corneal topography. Care must be taken to prevent CK-treated spots from coinciding with areas in the corneal stroma that might have been altered by previous refractive procedures.

  8. Interference from previous distraction disrupts older adults' memory.

    Science.gov (United States)

    Biss, Renée K; Campbell, Karen L; Hasher, Lynn

    2013-07-01

    Previously relevant information can disrupt the ability of older adults to remember new information. Here, the researchers examined whether prior irrelevant information, or distraction, can also interfere with older adults' memory for new information. Younger and older adults first completed a 1-back task on pictures that were superimposed with distracting words. After a delay, participants learned picture-word paired associates and memory was tested using picture-cued recall. In 1 condition (high interference), some pairs included pictures from the 1-back task now paired with new words. In a low-interference condition, the transfer list used all new items. Older adults had substantially lower cued-recall performance in the high- compared with the low-interference condition. In contrast, younger adults' performance did not vary across conditions. These findings suggest that even never-relevant information from the past can disrupt older adults' memory for new associations.

  9. The long-term consequences of previous hyperthyroidism

    DEFF Research Database (Denmark)

    Hjelm Brandt Kristensen, Frans

    2015-01-01

    Thyroid hormones affect every cell in the human body, and the cardiovascular changes associated with increased levels of thyroid hormones are especially well described. As an example, short-term hyperthyroidism has positive chronotropic and inotropic effects on the heart, leading to a hyperdynamic vascular state. While it is biologically plausible that these changes may induce long-term consequences, the insight into morbidity as well as mortality in patients with previous hyperthyroidism is limited. The reasons for this are a combination of inadequately powered studies, varying definitions... with CVD, LD and DM both before and after the diagnosis of hyperthyroidism. Although the design used does not allow a stringent distinction between cause and effect, the findings indicate a possible direct association between hyperthyroidism and these morbidities, or vice versa.

  10. Relativistic positioning systems: Numerical simulations

    Science.gov (United States)

    Puchades Colmenero, Neus

    The position of users located on the Earth's surface or near it may be found with the classic positioning systems (CPS). Certain information broadcast by satellites of global navigation systems, as GPS and GALILEO, may be used for positioning. The CPS are based on the Newtonian formalism, although relativistic post-Newtonian corrections are done when they are necessary. This thesis contributes to the development of a different positioning approach, which is fully relativistic from the beginning. In the relativistic positioning systems (RPS), the space-time position of any user (ship, spacecraft, and so on) can be calculated with the help of four satellites, which broadcast their proper times by means of codified electromagnetic signals. In this thesis, we have simulated satellite 4-tuples of the GPS and GALILEO constellations. If a user receives the signals from four satellites simultaneously, the emission proper times read -after decoding- are the user "emission coordinates". In order to find the user "positioning coordinates", in an appropriate almost inertial reference system, there are two possibilities: (a) the explicit relation between positioning and emission coordinates (broadcast by the satellites) is analytically found or (b) numerical codes are designed to calculate the positioning coordinates from the emission ones. Method (a) is only viable in simple ideal cases, whereas (b) allows us to consider realistic situations. In this thesis, we have designed numerical codes with the essential aim of studying two appropriate RPS, which may be generalized. Sometimes, there are two real users placed in different positions, which receive the same proper times from the same satellites; then, we say that there is bifurcation, and additional data are needed to choose the real user position. In this thesis, bifurcation is studied in detail. We have analyzed in depth two RPS models; in both, it is considered that the satellites move in the Schwarzschild's space

  11. Is Previous Respiratory Disease a Risk Factor for Lung Cancer?

    Science.gov (United States)

    Denholm, Rachel; Schüz, Joachim; Straif, Kurt; Stücker, Isabelle; Jöckel, Karl-Heinz; Brenner, Darren R.; De Matteis, Sara; Boffetta, Paolo; Guida, Florence; Brüske, Irene; Wichmann, Heinz-Erich; Landi, Maria Teresa; Caporaso, Neil; Siemiatycki, Jack; Ahrens, Wolfgang; Pohlabeln, Hermann; Zaridze, David; Field, John K.; McLaughlin, John; Demers, Paul; Szeszenia-Dabrowska, Neonila; Lissowska, Jolanta; Rudnai, Peter; Fabianova, Eleonora; Dumitru, Rodica Stanescu; Bencko, Vladimir; Foretova, Lenka; Janout, Vladimir; Kendzia, Benjamin; Peters, Susan; Behrens, Thomas; Vermeulen, Roel; Brüning, Thomas; Kromhout, Hans

    2014-01-01

    Rationale: Previous respiratory diseases have been associated with increased risk of lung cancer. Respiratory conditions often co-occur and few studies have investigated multiple conditions simultaneously. Objectives: Investigate lung cancer risk associated with chronic bronchitis, emphysema, tuberculosis, pneumonia, and asthma. Methods: The SYNERGY project pooled information on previous respiratory diseases from 12,739 case subjects and 14,945 control subjects from 7 case–control studies conducted in Europe and Canada. Multivariate logistic regression models were used to investigate the relationship between individual diseases adjusting for co-occurring conditions, and patterns of respiratory disease diagnoses and lung cancer. Analyses were stratified by sex, and adjusted for age, center, ever-employed in a high-risk occupation, education, smoking status, cigarette pack-years, and time since quitting smoking. Measurements and Main Results: Chronic bronchitis and emphysema were positively associated with lung cancer, after accounting for other respiratory diseases and smoking (e.g., in men: odds ratio [OR], 1.33; 95% confidence interval [CI], 1.20–1.48 and OR, 1.50; 95% CI, 1.21–1.87, respectively). A positive relationship was observed between lung cancer and pneumonia diagnosed 2 years or less before lung cancer (OR, 3.31; 95% CI, 2.33–4.70 for men), but not longer. Co-occurrence of chronic bronchitis and emphysema and/or pneumonia had a stronger positive association with lung cancer than chronic bronchitis “only.” Asthma had an inverse association with lung cancer, the association being stronger with an asthma diagnosis 5 years or more before lung cancer compared with shorter. Conclusions: Findings from this large international case–control consortium indicate that after accounting for co-occurring respiratory diseases, chronic bronchitis and emphysema continue to have a positive association with lung cancer. PMID:25054566

  12. Urethrotomy has a much lower success rate than previously reported.

    Science.gov (United States)

    Santucci, Richard; Eisenberg, Lauren

    2010-05-01

    We evaluated the success rate of direct vision internal urethrotomy as a treatment for simple male urethral strictures. A retrospective chart review was performed on 136 patients who underwent urethrotomy from January 1994 through March 2009. The Kaplan-Meier method was used to analyze stricture-free probability after the first, second, third, fourth and fifth urethrotomy. Patients with complex strictures (36) were excluded from the study for reasons including previous urethroplasty, neophallus or previous radiation, and 24 patients were lost to followup. Data were available for 76 patients. The stricture-free rate after the first urethrotomy was 8% with a median time to recurrence of 7 months. For the second urethrotomy stricture-free rate was 6% with a median time to recurrence of 9 months. For the third urethrotomy stricture-free rate was 9% with a median time to recurrence of 3 months. For procedures 4 and 5 stricture-free rate was 0% with a median time to recurrence of 20 and 8 months, respectively. Urethrotomy is a popular treatment for male urethral strictures. However, the performance characteristics are poor. Success rates were no higher than 9% in this series for first or subsequent urethrotomy during the observation period. Most of the patients in this series will be expected to experience failure with longer followup and the expected long-term success rate from any (1 through 5) urethrotomy approach is 0%. Urethrotomy should be considered a temporizing measure until definitive curative reconstruction can be planned. 2010 American Urological Association Education and Research, Inc. Published by Elsevier Inc. All rights reserved.

  13. Typing DNA profiles from previously enhanced fingerprints using direct PCR.

    Science.gov (United States)

    Templeton, Jennifer E L; Taylor, Duncan; Handt, Oliva; Linacre, Adrian

    2017-07-01

    Fingermarks are a source of human identification both through the ridge patterns and DNA profiling. Typing nuclear STR DNA markers from previously enhanced fingermarks provides an alternative method of utilising the limited fingermark deposit that can be left behind during a criminal act. Dusting with fingerprint powders is a standard method used in classical fingermark enhancement and can affect DNA data. The ability to generate informative DNA profiles from powdered fingerprints using direct PCR swabs was investigated. Direct PCR was used as the opportunity to generate usable DNA profiles after performing any of the standard DNA extraction processes is minimal. Omitting the extraction step will, for many samples, be the key to success if there is limited sample DNA. DNA profiles were generated by direct PCR from 160 fingermarks after treatment with one of the following dactyloscopic fingerprint powders: white hadonite; silver aluminium; HiFi Volcano silk black; or black magnetic fingerprint powder. This was achieved by a combination of an optimised double-swabbing technique and swab media, omission of the extraction step to minimise loss of critical low-template DNA, and additional AmpliTaq Gold ® DNA polymerase to boost the PCR. Ninety eight out of 160 samples (61%) were considered 'up-loadable' to the Australian National Criminal Investigation DNA Database (NCIDD). The method described required a minimum of working steps, equipment and reagents, and was completed within 4h. Direct PCR allows the generation of DNA profiles from enhanced prints without the need to increase PCR cycle numbers beyond manufacturer's recommendations. Particular emphasis was placed on preventing contamination by applying strict protocols and avoiding the use of previously used fingerprint brushes. Based on this extensive survey, the data provided indicate minimal effects of any of these four powders on the chance of obtaining DNA profiles from enhanced fingermarks. Copyright © 2017

  14. Thermodynamic estimation: Ionic materials

    International Nuclear Information System (INIS)

    Glasser, Leslie

    2013-01-01

    Thermodynamics establishes equilibrium relations among thermodynamic parameters (“properties”) and delineates the effects of variation of the thermodynamic functions (typically temperature and pressure) on those parameters. However, classical thermodynamics does not provide values for the necessary thermodynamic properties, which must be established by extra-thermodynamic means such as experiment, theoretical calculation, or empirical estimation. While many values may be found in the numerous collected tables in the literature, these are necessarily incomplete because either the experimental measurements have not been made or the materials may be hypothetical. The current paper presents a number of simple and reliable estimation methods for thermodynamic properties, principally for ionic materials. The results may also be used as a check for obvious errors in published values. The estimation methods described are typically based on addition of properties of individual ions, or sums of properties of neutral ion groups (such as “double” salts, in the Simple Salt Approximation), or based upon correlations such as with formula unit volumes (Volume-Based Thermodynamics). - Graphical abstract: Thermodynamic properties of ionic materials may be readily estimated by summation of the properties of individual ions, by summation of the properties of ‘double salts’, and by correlation with formula volume. Such estimates may fill gaps in the literature, and may also be used as checks of published values. This simplicity arises from exploitation of the fact that repulsive energy terms are of short range and very similar across materials, while coulombic interactions provide a very large component of the attractive energy in ionic systems. - Highlights: • Estimation methods for thermodynamic properties of ionic materials are introduced. • Methods are based on summation of single ions, multiple salts, and correlations. • Heat capacity, entropy
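
    As a toy illustration of the single-ion additivity idea summarized above (not Glasser's actual parameter set), a property such as standard entropy can be approximated by summing per-ion contributions over a formula unit; the contribution values below are hypothetical placeholders.

    ```python
    # Hypothetical per-ion contributions to standard entropy, J/(K*mol).
    # Real additive schemes use values fitted to experimental data.
    ION_ENTROPY = {"Na+": 37.0, "K+": 45.0, "Cl-": 43.0, "SO4^2-": 84.0}

    def estimate_entropy(formula_ions):
        """Sum per-ion contributions for a formula unit given as {ion: count}."""
        return sum(ION_ENTROPY[ion] * count for ion, count in formula_ions.items())

    print(estimate_entropy({"Na+": 2, "SO4^2-": 1}))   # estimate for Na2SO4
    print(estimate_entropy({"K+": 1, "Cl-": 1}))       # estimate for KCl
    ```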

  15. Numerical modeling of fires on gas pipelines

    International Nuclear Information System (INIS)

    Zhao Yang; Jianbo Lai; Lu Liu

    2011-01-01

    When natural gas is released through a hole in a high-pressure pipeline, it disperses in the atmosphere as a jet. A jet fire will occur when the leaked gas meets an ignition source. To estimate the hazardous area, the shape and size of the fire must be known. The evolution of the jet fire in air is predicted by using a finite-volume procedure to solve the flow equations. The model is three-dimensional and elliptic, uses a compressibility-corrected version of the k-ε turbulence model, and also includes a probability density function/laminar flamelet model of the turbulent non-premixed combustion process. Radiation heat transfer is described using an adaptive version of the discrete transfer method. The model is compared, with success, against experiments on a horizontal jet fire in a wind tunnel reported in the literature. The influence of wind and jet velocity on the fire shape has been investigated, and a correlation based on the numerical results is proposed for predicting the stoichiometric flame length. - Research highlights: → We developed a model to predict the evolution of turbulent jet diffusion flames. → Measurements of temperature distributions match well with the numerical predictions. → A correlation has been proposed to predict the stoichiometric flame length. → Buoyancy effects are higher in the numerical results. → The radiative heat loss is bigger in the experimental results.

  16. Experiments and Numerical Simulations of Electrodynamic Tether

    Science.gov (United States)

    Iki, Kentaro; Kawamoto, Satomi; Takahashi, Ayaka; Ishimoto, Tomori; Yanagida, Atsushi; Toda, Susumu

    As an effective means of suppressing space debris growth, the Aerospace Research and Development Directorate of the Japan Aerospace Exploration Agency (JAXA) has been investigating an active space debris removal system that employs highly efficient electrodynamic tether (EDT) technology for orbital transfer. This study investigates tether deployment dynamics by means of on-ground experiments and numerical simulations of an electrodynamic tether system. Some key parameters used in the numerical simulations, such as the elastic modulus and damping ratio of the tether, the spring constant of the coiling of the tether, and the deployment friction, must be estimated, and various experiments were conducted to determine these values. As a result, the following values were obtained: the elastic modulus of the tether was 40 GPa, and the damping ratio of the tether was 0.02. The spring constant and the damping ratio of the tether coiling were 10^-4 N/m and 0.025, respectively. The deployment friction was 0.038ν + 0.005 N. In numerical simulations using a multiple-mass tether model, tethers with lengths of several kilometers are deployed and the attitude dynamics of satellites attached to the end of the tether and tether libration are calculated. As a result, the simulations confirmed successful deployment of a tether with a length of 500 m using the electrodynamic tether system.

  17. A Numerical Model for Trickle Bed Reactors

    Science.gov (United States)

    Propp, Richard M.; Colella, Phillip; Crutchfield, William Y.; Day, Marcus S.

    2000-12-01

    Trickle bed reactors are governed by equations of flow in porous media such as Darcy's law and the conservation of mass. Our numerical method for solving these equations is based on a total-velocity splitting, sequential formulation which leads to an implicit pressure equation and a semi-implicit mass conservation equation. We use high-resolution finite-difference methods to discretize these equations. Our solution scheme extends previous work in modeling porous media flows in two ways. First, we incorporate physical effects due to capillary pressure, a nonlinear inlet boundary condition, spatial porosity variations, and inertial effects on phase mobilities. In particular, capillary forces introduce a parabolic component into the recast evolution equation, and the inertial effects give rise to hyperbolic nonconvexity. Second, we introduce a modification of the slope-limiting algorithm to prevent our numerical method from producing spurious shocks. We present a numerical algorithm for accommodating these difficulties, show the algorithm is second-order accurate, and demonstrate its performance on a number of simplified problems relevant to trickle bed reactor modeling.

  18. Obstetric Outcomes of Mothers Previously Exposed to Sexual Violence.

    Directory of Open Access Journals (Sweden)

    Agnes Gisladottir

    Full Text Available There is a scarcity of data on the association of sexual violence and women's subsequent obstetric outcomes. Our aim was to investigate whether women exposed to sexual violence as teenagers (12-19 years of age) or adults present with different obstetric outcomes than women with no record of such violence. We linked detailed prospectively collected information on women attending a Rape Trauma Service (RTS) to the Icelandic Medical Birth Registry (IBR). Women who attended the RTS in 1993-2010 and delivered (on average 5.8 years later) at least one singleton infant in Iceland through 2012 formed our exposed cohort (n = 1068). For each exposed woman's delivery, nine deliveries by women with no RTS attendance were randomly selected from the IBR (n = 9126) matched on age, parity, and year and season of delivery. Information on smoking and Body mass index (BMI) was available for a sub-sample (n = 792 exposed and n = 1416 non-exposed women). Poisson regression models were used to estimate Relative Risks (RR) with 95% confidence intervals (CI). Compared with non-exposed women, exposed women presented with increased risks of maternal distress during labor and delivery (RR 1.68, 95% CI 1.01-2.79), prolonged first stage of labor (RR 1.40, 95% CI 1.03-1.88), antepartum bleeding (RR 1.95, 95% CI 1.22-3.07) and emergency instrumental delivery (RR 1.16, 95% CI 1.00-1.34). Slightly higher risks were seen for women assaulted as teenagers. Overall, we did not observe differences between the groups regarding the risk of elective cesarean section (RR 0.86, 95% CI 0.61-1.21), except for a reduced risk among those assaulted as teenagers (RR 0.56, 95% CI 0.34-0.93). Adjusting for maternal smoking and BMI in a sub-sample did not substantially affect point estimates. Our prospective data suggest that women with a history of sexual assault, particularly as teenagers, are at increased risks of some adverse obstetric outcomes.

  19. Obstetric Outcomes of Mothers Previously Exposed to Sexual Violence.

    Science.gov (United States)

    Gisladottir, Agnes; Luque-Fernandez, Miguel Angel; Harlow, Bernard L; Gudmundsdottir, Berglind; Jonsdottir, Eyrun; Bjarnadottir, Ragnheidur I; Hauksdottir, Arna; Aspelund, Thor; Cnattingius, Sven; Valdimarsdottir, Unnur A

    2016-01-01

    There is a scarcity of data on the association of sexual violence and women's subsequent obstetric outcomes. Our aim was to investigate whether women exposed to sexual violence as teenagers (12-19 years of age) or adults present with different obstetric outcomes than women with no record of such violence. We linked detailed prospectively collected information on women attending a Rape Trauma Service (RTS) to the Icelandic Medical Birth Registry (IBR). Women who attended the RTS in 1993-2010 and delivered (on average 5.8 years later) at least one singleton infant in Iceland through 2012 formed our exposed cohort (n = 1068). For each exposed woman's delivery, nine deliveries by women with no RTS attendance were randomly selected from the IBR (n = 9126) matched on age, parity, and year and season of delivery. Information on smoking and Body mass index (BMI) was available for a sub-sample (n = 792 exposed and n = 1416 non-exposed women). Poisson regression models were used to estimate Relative Risks (RR) with 95% confidence intervals (CI). Compared with non-exposed women, exposed women presented with increased risks of maternal distress during labor and delivery (RR 1.68, 95% CI 1.01-2.79), prolonged first stage of labor (RR 1.40, 95% CI 1.03-1.88), antepartum bleeding (RR 1.95, 95% CI 1.22-3.07) and emergency instrumental delivery (RR 1.16, 95% CI 1.00-1.34). Slightly higher risks were seen for women assaulted as teenagers. Overall, we did not observe differences between the groups regarding the risk of elective cesarean section (RR 0.86, 95% CI 0.61-1.21), except for a reduced risk among those assaulted as teenagers (RR 0.56, 95% CI 0.34-0.93). Adjusting for maternal smoking and BMI in a sub-sample did not substantially affect point estimates. Our prospective data suggest that women with a history of sexual assault, particularly as teenagers, are at increased risks of some adverse obstetric outcomes.
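    A minimal sketch of the reported analysis strategy, Poisson regression with robust standard errors to estimate relative risks for a binary outcome, is given below. It runs on hypothetical data and assumes the statsmodels package; the variable names and event rates are illustrative, not the study's.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical data: exposure indicator and a binary adverse-outcome indicator.
rng = np.random.default_rng(0)
n_exposed, n_unexposed = 1068, 9126
df = pd.DataFrame({
    "exposed": np.r_[np.ones(n_exposed), np.zeros(n_unexposed)],
    "outcome": np.r_[rng.binomial(1, 0.06, n_exposed), rng.binomial(1, 0.04, n_unexposed)],
})

# Poisson regression with robust (HC0) standard errors gives relative risks for a
# binary outcome; exponentiated coefficients are the RR estimates.
model = smf.glm("outcome ~ exposed", data=df, family=sm.families.Poisson())
fit = model.fit(cov_type="HC0")
rr = np.exp(fit.params["exposed"])
ci = np.exp(fit.conf_int().loc["exposed"])
print(f"RR = {rr:.2f}, 95% CI {ci[0]:.2f}-{ci[1]:.2f}")
```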

  20. Testing gravitational-wave searches with numerical relativity waveforms: results from the first Numerical INJection Analysis (NINJA) project

    International Nuclear Information System (INIS)

    Aylott, Benjamin; Baker, John G; Camp, Jordan; Centrella, Joan; Boggs, William D; Buonanno, Alessandra; Boyle, Michael; Buchman, Luisa T; Chu, Tony; Brady, Patrick R; Brown, Duncan A; Bruegmann, Bernd; Cadonati, Laura; Campanelli, Manuela; Faber, Joshua; Chatterji, Shourov; Christensen, Nelson; Diener, Peter; Dorband, Nils; Etienne, Zachariah B

    2009-01-01

    The Numerical INJection Analysis (NINJA) project is a collaborative effort between members of the numerical relativity and gravitational-wave data analysis communities. The purpose of NINJA is to study the sensitivity of existing gravitational-wave search algorithms using numerically generated waveforms and to foster closer collaboration between the numerical relativity and data analysis communities. We describe the results of the first NINJA analysis which focused on gravitational waveforms from binary black hole coalescence. Ten numerical relativity groups contributed numerical data which were used to generate a set of gravitational-wave signals. These signals were injected into a simulated data set, designed to mimic the response of the initial LIGO and Virgo gravitational-wave detectors. Nine groups analysed this data using search and parameter-estimation pipelines. Matched filter algorithms, un-modelled-burst searches and Bayesian parameter estimation and model-selection algorithms were applied to the data. We report the efficiency of these search methods in detecting the numerical waveforms and measuring their parameters. We describe preliminary comparisons between the different search methods and suggest improvements for future NINJA analyses.
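    As an illustration of one of the named search methods, the sketch below implements a plain frequency-domain matched filter for a toy signal in white noise; it is not any of the LIGO/Virgo pipelines used in NINJA, and the sampling rate, template and noise level are assumed example values.

```python
import numpy as np

def matched_filter_snr(data, template, psd, dt):
    """Optimal-filter SNR of `template` in `data`, both sampled at interval dt.

    Uses the standard inner product <d|h> = 4 Re sum( D(f) conj(H(f)) / S_n(f) ) df
    over positive frequencies; SNR = <d|h> / sqrt(<h|h>)."""
    n = len(data)
    df = 1.0 / (n * dt)
    D = np.fft.rfft(data) * dt
    H = np.fft.rfft(template) * dt

    def inner(a, b):
        return 4.0 * np.real(np.sum(a * np.conj(b) / psd)) * df

    return inner(D, H) / np.sqrt(inner(H, H))

# Toy example: a damped sinusoid buried in white Gaussian noise.
rng = np.random.default_rng(1)
dt, n = 1.0 / 4096, 4096
t = np.arange(n) * dt
template = np.exp(-t / 0.1) * np.sin(2 * np.pi * 150 * t)
sigma = 0.5
data = 2.0 * template + rng.normal(0.0, sigma, n)
psd = np.full(n // 2 + 1, 2.0 * sigma**2 * dt)   # one-sided PSD of white noise
print(f"SNR = {matched_filter_snr(data, template, psd, dt):.1f}")
```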

  1. Investigating physiological methods to determine previous exposure of immature insects to ionizing radiation and estimating the exposure dose

    International Nuclear Information System (INIS)

    Mansour, M.

    1998-10-01

    Effects of gamma radiation on pupation and adult emergence in mature (diapausing and non-diapausing) codling moth, Cydia pomonella L., larvae and on phenoloxidase activity in larvae killed by freezing were investigated. Results showed that a dose of 50 Gy reduced adult emergence (and pupation) significantly and 200 Gy completely prevented it. Diapausing larvae were more susceptible to irradiation than non-diapausing larvae, and female moths were more susceptible to irradiation injury than males. Phenoloxidase activity in codling moth larvae was determined spectrophotometrically by measuring the increase in optical density at 490 nm, or by observing the degree of melanization in larvae killed by freezing. Results showed that, in un-irradiated larvae, phenoloxidase activity can be detected in 7-day-old larvae and that activity continued to accumulate throughout the larval stage. This accumulation was not observed when larvae were irradiated with a minimum dose of 50 Gy during the first week of their development. However, irradiating larvae in which enzyme activity was already high (2-3 weeks old) did not remove activity but only reduced further accumulation. Larval melanization studies were in agreement with results of the phenoloxidase assay. (author)

  2. Prediction of successful trial of labour in patients with a previous caesarean section

    International Nuclear Information System (INIS)

    Shaheen, N.; Khalil, S.; Iftikhar, P.

    2014-01-01

    Objective: To determine the prediction rate of success in trial of labour after one previous caesarean section. Methods: The cross-sectional study was conducted at the Department of Obstetrics and Gynaecology, Cantonment General Hospital, Rawalpindi, from January 1, 2012 to January 31, 2013, and comprised women with one previous Caesarean section and a single live foetus at 37-41 weeks of gestation. Women with more than one Caesarean section, unknown site of uterine scar, bony pelvic deformity, placenta previa, intra-uterine growth restriction, deep transverse arrest in previous labour and non-reassuring foetal status at the time of admission were excluded. Intrapartum risk assessment included Bishop score at admission, rate of cervical dilatation and scar tenderness. SPSS 21 was used for statistical analysis. Results: Out of a total of 95 women, the trial was successful in 68 (71.6%). Estimated foetal weight and number of prior vaginal deliveries had a high predictive value for successful trial of labour after Caesarean section. Estimated foetal weight had an odds ratio of 0.46 (p<0.001), while number of prior vaginal deliveries had an odds ratio of 0.85 (p=0.010). Other factors found to be predictive of a successful trial included Bishop score at the time of admission (p<0.037) and rate of cervical dilatation in the first stage of labour (p<0.021). Conclusion: History of prior vaginal deliveries, higher Bishop score at the time of admission, rapid rate of cervical dilatation and lower estimated foetal weight were predictive of a successful trial of labour after Caesarean section. (author)

  3. Central diabetes insipidus: a previously unreported side effect of temozolomide.

    Science.gov (United States)

    Faje, Alexander T; Nachtigall, Lisa; Wexler, Deborah; Miller, Karen K; Klibanski, Anne; Makimura, Hideo

    2013-10-01

    Temozolomide (TMZ) is an alkylating agent primarily used to treat tumors of the central nervous system. We describe 2 patients with apparent TMZ-induced central diabetes insipidus. Using our institution's Research Patient Database Registry, we identified 3 additional potential cases of TMZ-induced diabetes insipidus among a group of 1545 patients treated with TMZ. A 53-year-old male with an oligoastrocytoma and a 38-year-old male with an oligodendroglioma each developed symptoms of polydipsia and polyuria approximately 2 months after the initiation of TMZ. Laboratory analyses demonstrated hypernatremia and urinary concentrating defects, consistent with the presence of diabetes insipidus, and the patients were successfully treated with desmopressin acetate. Desmopressin acetate was withdrawn after the discontinuation of TMZ, and diabetes insipidus did not recur. Magnetic resonance imaging of the pituitary and hypothalamus was unremarkable apart from the absence of a posterior pituitary bright spot in both of the cases. Anterior pituitary function tests were normal in both cases. Using the Research Patient Database Registry database, we identified the 2 index cases and 3 additional potential cases of diabetes insipidus for an estimated prevalence of 0.3% (5 cases of diabetes insipidus per 1545 patients prescribed TMZ). Central diabetes insipidus is a rare but reversible side effect of treatment with TMZ.

  4. Numerical Verification Of Equilibrium Chemistry

    International Nuclear Information System (INIS)

    Piro, Markus; Lewis, Brent; Thompson, William T.; Simunovic, Srdjan; Besmann, Theodore M.

    2010-01-01

    A numerical tool is in an advanced state of development to compute the equilibrium compositions of phases and their proportions in multi-component systems of importance to the nuclear industry. The resulting software is being conceived for direct integration into large multi-physics fuel performance codes, particularly for providing boundary conditions in heat and mass transport modules. However, any numerical errors produced in equilibrium chemistry computations will be propagated in subsequent heat and mass transport calculations, thus falsely predicting nuclear fuel behaviour. The necessity for a reliable method to numerically verify chemical equilibrium computations is emphasized by the requirement to handle the very large number of elements necessary to capture the entire fission product inventory. A simple, reliable and comprehensive numerical verification method is presented which can be invoked by any equilibrium chemistry solver for quality assurance purposes.
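    The abstract does not spell out the verification method, so the sketch below shows only one generic quality-assurance check that any equilibrium chemistry solver can apply: confirming that the computed species amounts conserve every element to within a tolerance. The stoichiometry matrix and species amounts are hypothetical.

```python
import numpy as np

def element_balance_residual(A, n, b):
    """Relative residual of the element-abundance constraints A @ n = b.

    A : (elements x species) stoichiometry matrix
    n : computed equilibrium species amounts [mol]
    b : input element abundances [mol]
    """
    return np.linalg.norm(A @ n - b) / np.linalg.norm(b)

# Toy system: species H2, O2, H2O described by the elements H and O.
A = np.array([[2.0, 0.0, 2.0],    # H atoms per species
              [0.0, 2.0, 1.0]])   # O atoms per species
b = np.array([4.0, 2.0])          # input: 2 mol H2 + 1 mol O2
n = np.array([0.0, 0.0, 2.0])     # hypothetical solver output: 2 mol H2O

res = element_balance_residual(A, n, b)
assert res < 1e-10, "equilibrium solution violates element conservation"
print(f"element-balance residual = {res:.2e}")
```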

  5. Axisymmetric Numerical Modeling of Pulse Detonation Rocket Engines

    Science.gov (United States)

    Morris, Christopher I.

    2005-01-01

    Pulse detonation rocket engines (PDREs) have generated research interest in recent years as a chemical propulsion system potentially offering improved performance and reduced complexity compared to conventional rocket engines. The detonative mode of combustion employed by these devices offers a thermodynamic advantage over the constant-pressure deflagrative combustion mode used in conventional rocket engines and gas turbines. However, while this theoretical advantage has spurred considerable interest in building PDRE devices, the unsteady blowdown process intrinsic to the PDRE has made realistic estimates of the actual propulsive performance problematic. The recent review article by Kailasanath highlights some of the progress that has been made in comparing the available experimental measurements with analytical and numerical models. In recent work by the author, a quasi-one-dimensional, finite rate chemistry CFD model was utilized to study the gasdynamics and performance characteristics of PDREs over a range of blowdown pressure ratios from 1-1000. Models of this type are computationally inexpensive, and enable first-order parametric studies of the effect of several nozzle and extension geometries on PDRE performance over a wide range of conditions. However, the quasi-one-dimensional approach is limited in that it cannot properly capture the multidimensional blast wave and flow expansion downstream of the PDRE, nor can it resolve nozzle flow separation if present. Moreover, the previous work was limited to single-pulse calculations. In this paper, an axisymmetric finite rate chemistry model is described and utilized to study these issues in greater detail. Example Mach number contour plots showing the multidimensional blast wave and nozzle exhaust plume are shown. The performance results are compared with the quasi-one-dimensional results from the previous paper. Both Euler and Navier-Stokes solutions are calculated in order to determine the effect of viscous

  6. Radon anomalies prior to earthquakes (1). Review of previous studies

    International Nuclear Information System (INIS)

    Ishikawa, Tetsuo; Tokonami, Shinji; Yasuoka, Yumi; Shinogi, Masaki; Nagahama, Hiroyuki; Omori, Yasutaka; Kawada, Yusuke

    2008-01-01

    The relationship between radon anomalies and earthquakes has been studied for more than 30 years. However, most of the studies dealt with radon in soil gas or in groundwater. Before the 1995 Hyogoken-Nanbu earthquake, an anomalous increase of atmospheric radon was observed at Kobe Pharmaceutical University. The increase was well fitted with a mathematical model related to earthquake fault dynamics. This paper reports the significance of this observation, reviewing previous studies on radon anomalies before earthquakes. Groundwater/soil radon measurements for earthquake prediction began in the 1970s in Japan as well as in foreign countries. One of the most famous studies in Japan is the groundwater radon anomaly before the 1978 Izu-Oshima-kinkai earthquake. We have recognized the significance of radon in earthquake prediction research, but recently its limitations have also been pointed out. Some researchers are looking for better precursor indicators; simultaneous measurements of radon and other gases are new trials in recent studies. In contrast to soil/groundwater radon, little attention has been paid to atmospheric radon before earthquakes. However, it might be possible to detect precursors in atmospheric radon before a large earthquake. In the next issues, we will discuss the details of the anomalous atmospheric radon data observed before the Hyogoken-Nanbu earthquake. (author)

  7. Mediastinal involvement in lymphangiomatosis: a previously unreported MRI sign

    Energy Technology Data Exchange (ETDEWEB)

    Shah, Vikas; Shah, Sachit; Barnacle, Alex; McHugh, Kieran [Great Ormond Street Hospital for Children, Department of Radiology, London (United Kingdom); Sebire, Neil J. [Great Ormond Street Hospital for Children, Department of Histopathology, London (United Kingdom); Brock, Penelope [Great Ormond Street Hospital for Children, Department of Oncology, London (United Kingdom); Harper, John I. [Great Ormond Street Hospital for Children, Department of Dermatology, London (United Kingdom)

    2011-08-15

    Multifocal lymphangiomatosis is a rare systemic disorder affecting children. Due to its rarity and wide spectrum of clinical, histological and imaging features, establishing the diagnosis of multifocal lymphangiomatosis can be challenging. The purpose of this study was to describe a new imaging sign in this disorder: paraspinal soft tissue and signal abnormality at MRI. We retrospectively reviewed the imaging, clinical and histopathological findings in a cohort of eight children with thoracic involvement from this condition. Evidence of paraspinal chest disease was identified at MRI and CT in all eight of these children. The changes comprise heterogeneous intermediate-to-high signal parallel to the thoracic vertebrae on T2-weighted sequences at MRI, with abnormal paraspinal soft tissue at CT and plain radiography. Multifocal lymphangiomatosis is a rare disorder with a broad range of clinicopathological and imaging features. MRI allows complete evaluation of disease extent without the use of ionising radiation and has allowed us to describe a previously unreported imaging sign in this disorder, namely, heterogeneous hyperintense signal in abnormal paraspinal tissue on T2-weighted images. (orig.)

  8. Cerebral Metastasis from a Previously Undiagnosed Appendiceal Adenocarcinoma

    Directory of Open Access Journals (Sweden)

    Antonio Biroli

    2012-01-01

    Full Text Available Brain metastases arise in 10%–40% of all cancer patients. Up to one third of the patients do not have a previous cancer history. We report a case of a 67-year-old male patient who presented with confusion, tremor, and apraxia. A brain MRI revealed an isolated right temporal lobe lesion. A thorax-abdomen-pelvis CT scan showed no primary lesion. The patient underwent a craniotomy with gross-total resection. Histopathology revealed an intestinal-type adenocarcinoma. A colonoscopy found no primary lesion, but a PET-CT scan showed elevated FDG uptake in an appendiceal nodule. A right hemicolectomy was performed, and the specimen showed a moderately differentiated mucinous appendiceal adenocarcinoma. Whole-brain radiotherapy was administered. A subsequent thorax-abdomen CT scan revealed multiple lung and hepatic metastases. Seven months later, the patient died of disease progression. In cases of undiagnosed primary lesions, patients present in better general condition, but overall survival does not change. Eventual identification of the primary tumor does not affect survival. PET/CT might be a helpful tool in detecting lesions of the appendiceal region. To the best of our knowledge, such a case has never been reported in the literature, and an appendiceal malignancy should be suspected in patients with brain metastasis from an undiagnosed primary tumor.

  9. Coronary collateral vessels in patients with previous myocardial infarction

    International Nuclear Information System (INIS)

    Nakatsuka, M.; Matsuda, Y.; Ozaki, M.

    1987-01-01

    To assess the degree of collateral vessels after myocardial infarction, coronary angiograms, left ventriculograms, and exercise thallium-201 myocardial scintigrams of 36 patients with previous myocardial infarction were reviewed. All 36 patients had total occlusion of the infarct-related coronary artery and no more than 70% stenosis in other coronary arteries. In 19 of 36 patients with transient reduction of thallium-201 uptake in the infarcted area during exercise (Group A), good collaterals were observed in 10 patients, intermediate collaterals in 7 patients, and poor collaterals in 2 patients. In 17 of 36 patients without transient reduction of thallium-201 uptake in the infarcted area during exercise (Group B), good collaterals were seen in 2 patients, intermediate collaterals in 7 patients, and poor collaterals in 8 patients (p < 0.025). Left ventricular contractions in the infarcted area were normal or hypokinetic in 10 patients and akinetic or dyskinetic in 9 patients in Group A. In Group B, 1 patient had hypokinetic contraction and 16 patients had akinetic or dyskinetic contraction (p < 0.005). Thus, patients with transient reduction of thallium-201 uptake in the infarcted area during exercise had well developed collaterals and preserved left ventricular contraction, compared to those in patients without transient reduction of thallium-201 uptake in the infarcted area during exercise. These results suggest that the presence of viable myocardium in the infarcted area might be related to the degree of collateral vessels.

  10. High-Grade Leiomyosarcoma Arising in a Previously Replanted Limb

    Directory of Open Access Journals (Sweden)

    Tiffany J. Pan

    2015-01-01

    Full Text Available Sarcoma development has been associated with genetics, irradiation, viral infections, and immunodeficiency. Reports of sarcomas arising in the setting of prior trauma, as in burn scars or fracture sites, are rare. We report a case of a leiomyosarcoma arising in an arm that had previously been replanted at the level of the elbow joint following traumatic amputation when the patient was eight years old. He presented twenty-four years later with a 10.8 cm mass in the replanted arm located on the volar forearm. The tumor was completely resected and pathology examination showed a high-grade, subfascial spindle cell sarcoma diagnosed as a grade 3 leiomyosarcoma with stage pT2bNxMx. The patient underwent treatment with brachytherapy, reconstruction with a free flap, and subsequently chemotherapy. To the best of our knowledge, this is the first case report of leiomyosarcoma developing in a replanted extremity. Development of leiomyosarcoma in this case could be related to revascularization, scar formation, or chronic injury after replantation. The patient remains healthy without signs of recurrence at three-year follow-up.

  11. Global functional atlas of Escherichia coli encompassing previously uncharacterized proteins.

    Directory of Open Access Journals (Sweden)

    Pingzhao Hu

    2009-04-01

    Full Text Available One-third of the 4,225 protein-coding genes of Escherichia coli K-12 remain functionally unannotated (orphans). Many map to distant clades such as Archaea, suggesting involvement in basic prokaryotic traits, whereas others appear restricted to E. coli, including pathogenic strains. To elucidate the orphans' biological roles, we performed an extensive proteomic survey using affinity-tagged E. coli strains and generated comprehensive genomic context inferences to derive a high-confidence compendium for virtually the entire proteome consisting of 5,993 putative physical interactions and 74,776 putative functional associations, most of which are novel. Clustering of the respective probabilistic networks revealed putative orphan membership in discrete multiprotein complexes and functional modules together with annotated gene products, whereas a machine-learning strategy based on network integration implicated the orphans in specific biological processes. We provide additional experimental evidence supporting orphan participation in protein synthesis, amino acid metabolism, biofilm formation, motility, and assembly of the bacterial cell envelope. This resource provides a "systems-wide" functional blueprint of a model microbe, with insights into the biological and evolutionary significance of previously uncharacterized proteins.

  12. Global functional atlas of Escherichia coli encompassing previously uncharacterized proteins.

    Science.gov (United States)

    Hu, Pingzhao; Janga, Sarath Chandra; Babu, Mohan; Díaz-Mejía, J Javier; Butland, Gareth; Yang, Wenhong; Pogoutse, Oxana; Guo, Xinghua; Phanse, Sadhna; Wong, Peter; Chandran, Shamanta; Christopoulos, Constantine; Nazarians-Armavil, Anaies; Nasseri, Negin Karimi; Musso, Gabriel; Ali, Mehrab; Nazemof, Nazila; Eroukova, Veronika; Golshani, Ashkan; Paccanaro, Alberto; Greenblatt, Jack F; Moreno-Hagelsieb, Gabriel; Emili, Andrew

    2009-04-28

    One-third of the 4,225 protein-coding genes of Escherichia coli K-12 remain functionally unannotated (orphans). Many map to distant clades such as Archaea, suggesting involvement in basic prokaryotic traits, whereas others appear restricted to E. coli, including pathogenic strains. To elucidate the orphans' biological roles, we performed an extensive proteomic survey using affinity-tagged E. coli strains and generated comprehensive genomic context inferences to derive a high-confidence compendium for virtually the entire proteome consisting of 5,993 putative physical interactions and 74,776 putative functional associations, most of which are novel. Clustering of the respective probabilistic networks revealed putative orphan membership in discrete multiprotein complexes and functional modules together with annotated gene products, whereas a machine-learning strategy based on network integration implicated the orphans in specific biological processes. We provide additional experimental evidence supporting orphan participation in protein synthesis, amino acid metabolism, biofilm formation, motility, and assembly of the bacterial cell envelope. This resource provides a "systems-wide" functional blueprint of a model microbe, with insights into the biological and evolutionary significance of previously uncharacterized proteins.

  13. Influence of Previous Knowledge in Torrance Tests of Creative Thinking

    Directory of Open Access Journals (Sweden)

    María Aranguren

    2015-07-01

    Full Text Available The aim of this work is to analyze the influence of study field, expertise and recreational activities participation on Torrance Tests of Creative Thinking (TTCT, 1974) performance. Several hypotheses were postulated to explore the possible effects of previous knowledge on TTCT verbal and TTCT figural university students' outcomes. Participants in this study included 418 students from five study fields: Psychology; Philosophy and Literature; Music; Engineering; and Journalism and Advertising (Communication Sciences). Results found in this research seem to indicate that there is no influence of study field, expertise or recreational activities participation on either of the TTCT tests. Instead, the findings seem to suggest some kind of interaction between certain skills needed to succeed in specific study fields and performance on creativity tests, such as the TTCT. These results imply that the TTCT is a useful and valid instrument to measure creativity and that some cognitive processes involved in innovative thinking can be promoted using different intervention programs in schools and universities regardless of the students' study field.

  14. Gastrointestinal tolerability with ibandronate after previous weekly bisphosphonate treatment.

    Science.gov (United States)

    Derman, Richard; Kohles, Joseph D; Babbitt, Ann

    2009-01-01

    Data from two open-label trials (PRIOR and CURRENT) of women with postmenopausal osteoporosis or osteopenia were evaluated to assess whether monthly oral and quarterly intravenous (IV) ibandronate dosing improved self-reported gastrointestinal (GI) tolerability for patients who had previously experienced GI irritation with bisphosphonate (BP) use. In PRIOR, women who had discontinued daily or weekly BP treatment due to GI intolerance received monthly oral or quarterly IV ibandronate for 12 months. The CURRENT subanalysis included women receiving weekly BP treatment who switched to monthly oral ibandronate for six months. GI symptom severity and frequency were assessed using the Osteoporosis Patient Satisfaction Questionnaire. In PRIOR, mean GI tolerability scores increased significantly at month 1 from screening for both treatment groups (oral: 79.3 versus 54.1; IV: 84.4 versus 51.0; p < 0.001 for both). Most patients reported improvement in GI symptom severity and frequency from baseline at all post-screening assessments (>90% at Month 10). In the CURRENT subanalysis >60% of patients reported improvements in heartburn or acid reflux and >70% indicated improvement in other stomach upset at month 6. Postmenopausal women with GI irritability with daily or weekly BPs experienced improvement in symptoms with extended dosing of monthly or quarterly ibandronate compared with baseline.

  15. Pertussis-associated persistent cough in previously vaccinated children.

    Science.gov (United States)

    Principi, Nicola; Litt, David; Terranova, Leonardo; Picca, Marina; Malvaso, Concetta; Vitale, Cettina; Fry, Norman K; Esposito, Susanna

    2017-11-01

    To evaluate the role of Bordetella pertussis infection, 96 otherwise healthy 7- to 17-year-old subjects who were suffering from a cough lasting from 2 to 8 weeks were prospectively recruited. At enrolment, a nasopharyngeal swab and an oral fluid sample were obtained to search for pertussis infection by the detection of B. pertussis DNA and/or an elevated titre of anti-pertussis toxin IgG. Evidence of pertussis infection was found in 18 (18.7 %; 95 % confidence interval, 11.5-28.0) cases. In 15 cases, the disease occurred despite booster administration. In two cases, pertussis was diagnosed less than 2 years after the booster injection, whereas in the other cases it was diagnosed between 2 and 9 years after the booster dose. This study used non-invasive testing to show that pertussis is one of the most important causes of long-lasting cough in school-age subjects. Moreover, the protection offered by acellular pertussis vaccines currently wanes more rapidly than previously thought.

  16. Multispecies Coevolution Particle Swarm Optimization Based on Previous Search History

    Directory of Open Access Journals (Sweden)

    Danping Wang

    2017-01-01

    Full Text Available A hybrid coevolution particle swarm optimization algorithm with a dynamic multispecies strategy based on K-means clustering and a nonrevisit strategy based on a Binary Space Partitioning fitness tree (called MCPSO-PSH) is proposed. Previous search history memorized in the Binary Space Partitioning fitness tree can effectively restrain the individuals' revisit phenomenon. The whole population is partitioned into several subspecies, and cooperative coevolution is realized by an information communication mechanism between subspecies, which can enhance the global search ability of particles and avoid premature convergence to local optima. To demonstrate the power of the method, comparisons between the proposed algorithm and state-of-the-art algorithms are grouped into two categories: 10 basic benchmark functions (10-dimensional and 30-dimensional), 10 CEC2005 benchmark functions (30-dimensional), and a real-world problem (multilevel image segmentation problems). Experimental results show that MCPSO-PSH displays a competitive performance compared to the other swarm-based or evolutionary algorithms in terms of solution accuracy and statistical tests.
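    For orientation, the sketch below shows only the core global-best particle swarm update that MCPSO-PSH builds on; the dynamic K-means multispecies partitioning and the Binary Space Partitioning non-revisit bookkeeping described in the abstract are omitted, and the swarm settings are conventional assumed values.

```python
import numpy as np

def pso(objective, dim=10, n_particles=30, iters=200, seed=0):
    """Minimal global-best PSO; only the core velocity/position update is shown."""
    rng = np.random.default_rng(seed)
    w, c1, c2 = 0.729, 1.494, 1.494          # common inertia/acceleration settings
    x = rng.uniform(-5.0, 5.0, (n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), np.apply_along_axis(objective, 1, x)
    gbest = pbest[np.argmin(pbest_val)].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        vals = np.apply_along_axis(objective, 1, x)
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest, pbest_val.min()

best, val = pso(lambda z: np.sum(z * z))     # sphere benchmark function
print(f"best value = {val:.3e}")
```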

  17. How to prevent type 2 diabetes in women with previous gestational diabetes?

    DEFF Research Database (Denmark)

    Pedersen, Anne Louise Winkler; Terkildsen Maindal, Helle; Juul, Lise

    2017-01-01

    OBJECTIVES: Women with previous gestational diabetes (GDM) have a seven times higher risk of developing type 2 diabetes (T2DM) than women without. We aimed to review the evidence of effective behavioural interventions seeking to prevent T2DM in this high-risk group. METHODS: A systematic review of RCTs in several databases in March 2016. RESULTS: No specific intervention or intervention components were found superior. The pooled effect on diabetes incidence (four trials) was estimated at -5.02 per 100 (95% CI: -9.24; -0.80). CONCLUSIONS: This study indicates that intervention is superior to no intervention in the prevention of T2DM among women with previous GDM.
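    The pooled estimate quoted above is the kind of quantity produced by inverse-variance meta-analysis. The sketch below shows a fixed-effect pooling of per-trial risk differences on hypothetical numbers; the review's actual meta-analytic model and trial data may differ.

```python
import numpy as np

# Hypothetical per-trial risk differences (per 100 women) and their standard errors.
effects = np.array([-6.0, -3.5, -8.0, -2.0])
se = np.array([3.0, 2.5, 4.0, 2.8])

weights = 1.0 / se**2                              # fixed-effect inverse-variance weights
pooled = np.sum(weights * effects) / np.sum(weights)
pooled_se = np.sqrt(1.0 / np.sum(weights))
lo, hi = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
print(f"pooled effect = {pooled:.2f} per 100 (95% CI {lo:.2f}; {hi:.2f})")
```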

  18. Numerical Investigation of Masonry Strengthened with Composites

    Directory of Open Access Journals (Sweden)

    Giancarlo Ramaglia

    2018-03-01

    Full Text Available In this work, two main fiber strengthening systems typically applied in masonry structures have been investigated: composites made of basalt and hemp fibers, coupled with an inorganic matrix. Starting from the experimental results on composites, the out-of-plane behavior of the strengthened masonry was assessed through several numerical analyses. In a first step, the ultimate behavior was assessed in terms of the P (axial load)-M (bending moment) domain (i.e., failure surface), changing several mechanical parameters. In order to assess the ductility capacity of the strengthened masonry elements, the P-M domain was estimated starting from the bending moment-curvature diagrams. Key information about the impact of several mechanical parameters on both the capacity and the ductility was considered. Furthermore, the numerical analyses allow the assessment of the efficiency of the strengthening system as the main mechanical properties change. Basalt fibers had lower efficiency when applied to weak masonry. In this case, the elastic properties of the masonry did not influence the structural behavior under a no-tension assumption for the masonry. Conversely, their impact became non-negligible, especially for higher values of the compressive strength of the masonry. The stress-strain curve used to model the composite affected the flexural strength. Natural fibers provided similar outcomes, but a first difference regards the higher mechanical compatibility of the strengthening system with the substrate. In this case, the ultimate condition is governed by the failure mode of the composite. The stress-strain curves used to model the strengthening system are crucial in the ductility estimation of the strengthened masonry. However, the behavior of the composite strongly influences the curvature ductility in the case of higher compressive strength for masonry. The numerical results discussed in this paper provide the base to develop normalized capacity models able to
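    As a rough companion to the P-M (failure surface) discussion, the sketch below evaluates the elementary rigid-plastic stress-block interaction curve for a rectangular no-tension masonry section; it is not the authors' numerical model, and the section width, thickness and design compressive strength are assumed example values.

```python
import numpy as np

def pm_domain(b, t, fd, n_points=50):
    """Failure surface (P-M pairs) for a rectangular no-tension masonry section
    with a rigid-plastic stress block: M = (N*t/2) * (1 - N / (b*t*fd))."""
    n_max = b * t * fd                       # squash load [N]
    P = np.linspace(0.0, n_max, n_points)
    M = 0.5 * P * t * (1.0 - P / n_max)
    return P, M

# Example values (assumed): 1 m wide strip, 0.4 m thick, fd = 2.5 MPa.
P, M = pm_domain(b=1.0, t=0.4, fd=2.5e6)
i = np.argmax(M)
print(f"peak bending capacity ~ {M[i]/1e3:.1f} kNm at N ~ {P[i]/1e3:.0f} kN")
```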

  19. Numerical Analysis of Multiscale Computations

    CERN Document Server

    Engquist, Björn; Tsai, Yen-Hsi R

    2012-01-01

    This book is a snapshot of current research in multiscale modeling, computations and applications. It covers fundamental mathematical theory, numerical algorithms as well as practical computational advice for analysing single and multiphysics models containing a variety of scales in time and space. Complex fluids, porous media flow and oscillatory dynamical systems are treated in some extra depth, as well as tools like analytical and numerical homogenization, and fast multipole method.

  20. Numerical calculations near spatial infinity

    International Nuclear Information System (INIS)

    Zenginoglu, Anil

    2007-01-01

    After describing in short some problems and methods regarding the smoothness of null infinity for isolated systems, I present numerical calculations in which both spatial and null infinity can be studied. The reduced conformal field equations based on the conformal Gauss gauge allow us in spherical symmetry to calculate numerically the entire Schwarzschild-Kruskal spacetime in a smooth way including spacelike, null and timelike infinity and the domain close to the singularity

  1. Numerical modelling of mine workings.

    CSIR Research Space (South Africa)

    Lightfoot, N

    1999-03-01

    Full Text Available to cover most of what is required for a practising rock mechanics engineer to be able to use any of these five programs to solve practical mining problems. The chapters on specific programs discuss their individual strengths and weaknesses and highlight ... and applications of numerical modelling in the context of the South African gold and platinum mining industries. This includes an example that utilises a number of different numerical modelling programs to solve a single problem. This particular example...

  2. On joint numerical radius II

    Czech Academy of Sciences Publication Activity Database

    Drnovšek, R.; Müller, Vladimír

    2014-01-01

    Roč. 62, č. 9 (2014), s. 1197-1204 ISSN 0308-1087 R&D Projects: GA ČR GA201/09/0473; GA AV ČR IAA100190903 Institutional support: RVO:67985840 Keywords : joint numerical range * numerical radius Subject RIV: BA - General Mathematics Impact factor: 0.738, year: 2014 http://www.tandfonline.com/doi/abs/10.1080/03081087.2013.816303

  3. High prevalence of genetic variants previously associated with LQT syndrome in new exome data

    DEFF Research Database (Denmark)

    Refsgaard, Lena; Holst, Anders G; Sadjadieh, Golnaz

    2012-01-01

    To date, hundreds of variants in 13 genes have been associated with long QT syndrome (LQTS). The prevalence of LQTS is estimated to be between 1:2000 and 1:5000. The knowledge of genetic variation in the general population has until recently been limited, but newly published data from NHLBI GO...... variants KCNH2 P347S; SCN5A: S216L, V1951L; and CAV3 T78M in the control population (n=704) revealed prevalences comparable to those of ESP. Thus, we identified a much higher prevalence of previously LQTS-associated variants than expected in exome data from population studies. Great caution regarding...

  4. Numerical Hydrodynamics in General Relativity

    Directory of Open Access Journals (Sweden)

    Font José A.

    2003-01-01

    Full Text Available The current status of numerical solutions for the equations of ideal general relativistic hydrodynamics is reviewed. With respect to an earlier version of the article, the present update provides additional information on numerical schemes, and extends the discussion of astrophysical simulations in general relativistic hydrodynamics. Different formulations of the equations are presented, with special mention of conservative and hyperbolic formulations well-adapted to advanced numerical methods. A large sample of available numerical schemes is discussed, paying particular attention to solution procedures based on schemes exploiting the characteristic structure of the equations through linearized Riemann solvers. A comprehensive summary of astrophysical simulations in strong gravitational fields is presented. These include gravitational collapse, accretion onto black holes, and hydrodynamical evolutions of neutron stars. The material contained in these sections highlights the numerical challenges of various representative simulations. It also follows, to some extent, the chronological development of the field, concerning advances on the formulation of the gravitational field and hydrodynamic equations and the numerical methodology designed to solve them.
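    As a minimal illustration of the approximate Riemann solvers discussed in the review, the sketch below evaluates the HLL flux for the non-relativistic 1-D Euler equations; relativistic hydrodynamics adds Lorentz factors and different characteristic speeds, and the adiabatic index and interface states are assumed example values.

```python
import numpy as np

GAMMA = 1.4  # ideal-gas adiabatic index (assumed for this toy example)

def flux(rho, u, p):
    """Physical flux of the 1-D Euler equations for the state (rho, u, p)."""
    E = p / (GAMMA - 1.0) + 0.5 * rho * u * u
    return np.array([rho * u, rho * u * u + p, (E + p) * u])

def hll_flux(left, right):
    """HLL approximate Riemann flux between two primitive states (rho, u, p)."""
    rho_l, u_l, p_l = left
    rho_r, u_r, p_r = right
    c_l, c_r = np.sqrt(GAMMA * p_l / rho_l), np.sqrt(GAMMA * p_r / rho_r)
    s_l = min(u_l - c_l, u_r - c_r)          # simple wave-speed estimates
    s_r = max(u_l + c_l, u_r + c_r)
    F_l, F_r = flux(*left), flux(*right)
    U_l = np.array([rho_l, rho_l * u_l, p_l / (GAMMA - 1.0) + 0.5 * rho_l * u_l**2])
    U_r = np.array([rho_r, rho_r * u_r, p_r / (GAMMA - 1.0) + 0.5 * rho_r * u_r**2])
    if s_l >= 0.0:
        return F_l
    if s_r <= 0.0:
        return F_r
    return (s_r * F_l - s_l * F_r + s_l * s_r * (U_r - U_l)) / (s_r - s_l)

# Sod shock-tube interface states.
print(hll_flux((1.0, 0.0, 1.0), (0.125, 0.0, 0.1)))
```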

  5. "My math and me": Nursing students' previous experiences in learning mathematics.

    Science.gov (United States)

    Røykenes, Kari

    2016-01-01

    In this paper, 11 narratives about former experiences in learning mathematics written by nursing students are thematically analyzed. Most students had a positive relationship with the subject in primary school, when they found mathematics fun and were able to master the subject. For some, a change occurred in the transition to lower secondary school. The reasons for this change were found in the subject (increased difficulty), the teachers (movement of teachers, numerous substitute teachers), the class environment and size (many pupils, noise), and the student him- or herself (silent and anonymous pupil). This change was also found in the transition from lower to higher secondary school. By contrast, some students had experienced changes that were positive, and their mathematics teacher was a significant factor in this positive change. The paper emphasizes the importance of previous experiences in learning mathematics for nursing students when learning about drug calculation. Copyright © 2015. Published by Elsevier Ltd.

  6. On the numerical verification of industrial codes

    International Nuclear Information System (INIS)

    Montan, Sethy Akpemado

    2013-01-01

    Numerical verification of industrial codes, such as those developed at EDF R and D, is required to estimate the precision and the quality of computed results, even more so for codes running in HPC environments where millions of instructions are performed each second. These programs usually use external libraries (MPI, BLACS, BLAS, LAPACK). In this context, it is required to have a tool that is as non-intrusive as possible, to avoid rewriting the original code. In this regard, the CADNA library, which implements Discrete Stochastic Arithmetic, appears to be a promising approach for industrial applications. In the first part of this work, we are interested in an efficient implementation of the BLAS routine DGEMM (General Matrix Multiply) implementing Discrete Stochastic Arithmetic. The implementation of a basic algorithm for matrix product using stochastic types leads to an overhead greater than 1000 for a 1024 * 1024 matrix compared to the standard version and commercial versions of xGEMM. Here, we detail different solutions to reduce this overhead and the results we have obtained. A new routine, Dgemm-CADNA, has been designed. This routine has allowed us to reduce the overhead from 1100 to 35 compared to optimized BLAS implementations (GotoBLAS). Then, we focus on the numerical verification of Telemac-2D computed results. Performing a numerical validation with the CADNA library shows that more than 30% of the numerical instabilities occurring during an execution come from the dot product function. A more accurate implementation of the dot product with compensated algorithms is presented in this work. We show that implementing these kinds of algorithms, in order to improve the accuracy of computed results, does not alter the code performance. (author)
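    The compensated dot product mentioned at the end is typically the Dot2 algorithm of Ogita, Rump and Oishi, built from error-free transformations. The pure-Python sketch below illustrates that algorithm on an ill-conditioned example; it is not the actual Telemac-2D or CADNA implementation.

```python
def two_sum(a, b):
    """Error-free transformation: a + b = s + e exactly (Knuth)."""
    s = a + b
    bb = s - a
    return s, (a - (s - bb)) + (b - bb)

def two_prod(a, b):
    """Error-free transformation: a * b = p + e exactly (Dekker/Veltkamp split)."""
    p = a * b
    factor = 134217729.0                      # 2**27 + 1 for IEEE double precision
    ah = factor * a; ah = ah - (ah - a); al = a - ah
    bh = factor * b; bh = bh - (bh - b); bl = b - bh
    return p, ((ah * bh - p) + ah * bl + al * bh) + al * bl

def dot2(x, y):
    """Compensated dot product (Ogita-Rump-Oishi Dot2): roughly twice working precision."""
    p, s = 0.0, 0.0
    for xi, yi in zip(x, y):
        h, r = two_prod(xi, yi)
        p, q = two_sum(p, h)
        s += q + r
    return p + s

# Ill-conditioned example where the naive dot product loses all significant digits.
x = [1e16, 1.0, -1e16]
y = [1.0, 1.0, 1.0]
print(sum(a * b for a, b in zip(x, y)), dot2(x, y))   # naive vs compensated (exact = 1.0)
```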

  7. Nocturnal Wakefulness as a Previously Unrecognized Risk Factor for Suicide.

    Science.gov (United States)

    Perlis, Michael L; Grandner, Michael A; Brown, Gregory K; Basner, Mathias; Chakravorty, Subhajit; Morales, Knashawn H; Gehrman, Philip R; Chaudhary, Ninad S; Thase, Michael E; Dinges, David F

    2016-06-01

    Suicide is a major public health problem and the 10th leading cause of death in the United States. The identification of modifiable risk factors is essential for reducing the prevalence of suicide. Recently, it has been shown that insomnia and nightmares significantly increase the risk for suicidal ideation, attempted suicide, and death by suicide. While both forms of sleep disturbance may independently confer risk, and potentially be modifiable risk factors, it is also possible that simply being awake at night represents a specific vulnerability for suicide. The present analysis evaluates the frequency of completed suicide per hour while taking into account the percentage of individuals awake at each hour. Archival analyses were conducted estimating the time of fatal injury using the National Violent Death Reporting System for 2003-2010 and the proportion of the American population awake per hour across the 24-hour day using the American Time Use Survey. The mean ± SD incident rate from 06:00-23:59 was 2.2% ± 0.7%, while the mean ± SD incident rate from 00:00-05:59 was 10.3% ± 4.9%. The maximum incident rate was from 02:00-02:59 (16.3%). Hour-by-hour observed values differed from those that would be expected by chance (P < .001), and when 6-hour blocks were examined, the observed frequency at night was 3.6 times higher than would be expected by chance (P < .001). Being awake at night confers greater risk for suicide than being awake at other times of the day, suggesting that disturbances of sleep or circadian neurobiology may potentiate suicide risk. © Copyright 2016 Physicians Postgraduate Press, Inc.
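    The analysis weights hourly suicide counts by the proportion of the population awake in that hour. The sketch below reproduces that observed-versus-expected computation on hypothetical numbers; only the 02:00-02:59 peak value is taken from the abstract, and the remaining hourly fractions are placeholders.

```python
import numpy as np

# Hypothetical inputs: fraction of suicides observed in each of the 24 clock hours,
# and the fraction of the population awake during that hour (e.g., from a time-use survey).
observed = np.full(24, 1.0 / 24)          # placeholder: uniform observed distribution
observed[2] = 0.163                        # the 02:00-02:59 peak reported in the abstract
observed /= observed.sum()                 # renormalise to a proper distribution

awake = np.r_[np.full(6, 0.15), np.full(18, 0.95)]   # hours 00-05 vs 06-23 (hypothetical)
expected = awake / awake.sum()             # expected share if risk simply tracked wakefulness

ratio = observed / expected                # >1 means more suicides than wakefulness predicts
for hour in (2, 14):
    print(f"{hour:02d}:00  observed/expected = {ratio[hour]:.2f}")
```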

  8. Sacrococcygeal pilonidal disease: analysis of previously proposed risk factors

    Directory of Open Access Journals (Sweden)

    Ali Harlak

    2010-01-01

    Full Text Available PURPOSE: Sacrococcygeal pilonidal disease is a source of one of the most common surgical problems among young adults. While male gender, obesity, occupations requiring sitting, deep natal clefts, excessive body hair, poor body hygiene and excessive sweating are described as the main risk factors for this disease, most of these need to be verified in a clinical trial. The present study aimed to evaluate the value and effect of these factors on pilonidal disease. METHOD: Previously proposed main risk factors were evaluated in a prospective case-control study that included 587 patients with pilonidal disease and 2,780 healthy control patients. RESULTS: Stiffness of body hair, number of baths and time spent seated per day were the three most predictive risk factors. Adjusted odds ratios were 9.23, 6.33 and 4.03, respectively (p<0.001). With an adjusted odds ratio of 1.3 (p<0.001), body mass index was another risk factor. Family history was not statistically different between the groups and there was no specific occupation associated with the disease. CONCLUSIONS: Hairy people who sit down for more than six hours a day and those who take a bath two or fewer times per week are at a 219-fold increased risk for sacrococcygeal pilonidal disease compared with those without these risk factors. For people with a great deal of hair, there is a greater need to clean the intergluteal sulcus. People who engage in work that requires sitting for long periods of time should choose more comfortable seats and should also try to stand whenever possible.

  9. Impact of Students’ Class Attendance on Recalling Previously Acquired Information

    Directory of Open Access Journals (Sweden)

    Camellia Hemyari

    2018-03-01

    Full Text Available Background: In recent years, the availability of class material, including typed lectures, the professor's PowerPoint slides, sound recordings, and even videos, has made a group of students feel that it is unnecessary to attend classes. These students usually read and memorize typed lectures within two or three days prior to the exams and usually pass the tests even with a low attendance rate. Thus, the question is how effective this learning system is and how long the lessons memorized in one night may last. Methods: A group of medical students (62 out of 106 students), whose class attendance and educational achievements in the Medical Mycology and Parasitology course had been recorded over the previous two years, was selected, and their knowledge of this course was tested by multiple choice questions (MCQ) designed based on the previous lectures. Results: Although the mean re-exam score of the students at the end of the externship was lower than the corresponding final score, a significant association was found between the scores of the students in these two exams (r=0.48, P=0.01). Moreover, a significant negative association was found between the number of absences and re-exam scores (r=-0.26, P=0.037). Conclusion: As our findings show, recall of the acquired lessons is preserved for a long period of time and is associated with the students' attendance. Many factors, including the generation effect (by taking notes) and cued recall (via slide pictures), might play a significant role in the better recall of the learned information in students with good class attendance. Keywords: STUDENT, MEMORY, LONG-TERM, RECALL, ABSENTEEISM, LEARNING

  10. Gastrointestinal tolerability with ibandronate after previous weekly bisphosphonate treatment

    Directory of Open Access Journals (Sweden)

    Richard Derman

    2009-09-01

    Full Text Available Richard Derman1, Joseph D Kohles2, Ann Babbitt3; 1Department of Obstetrics and Gynecology, Christiana Hospital, Newark, DE, USA; 2Roche, Nutley, NJ, USA; 3Greater Portland Bone and Joint Specialists, Portland, ME, USA. Abstract: Data from two open-label trials (PRIOR and CURRENT) of women with postmenopausal osteoporosis or osteopenia were evaluated to assess whether monthly oral and quarterly intravenous (IV) ibandronate dosing improved self-reported gastrointestinal (GI) tolerability for patients who had previously experienced GI irritation with bisphosphonate (BP) use. In PRIOR, women who had discontinued daily or weekly BP treatment due to GI intolerance received monthly oral or quarterly IV ibandronate for 12 months. The CURRENT subanalysis included women receiving weekly BP treatment who switched to monthly oral ibandronate for six months. GI symptom severity and frequency were assessed using the Osteoporosis Patient Satisfaction Questionnaire™. In PRIOR, mean GI tolerability scores increased significantly at month 1 from screening for both treatment groups (oral: 79.3 versus 54.1; IV: 84.4 versus 51.0; p < 0.001 for both). Most patients reported improvement in GI symptom severity and frequency from baseline at all post-screening assessments (>90% at Month 10). In the CURRENT subanalysis >60% of patients reported improvements in heartburn or acid reflux and >70% indicated improvement in other stomach upset at month 6. Postmenopausal women with GI irritability with daily or weekly BPs experienced improvement in symptoms with extended dosing of monthly or quarterly ibandronate compared with baseline. Keywords: ibandronate, osteoporosis, bisphosphonate, gastrointestinal

  11. [Population policy and women: the relevance of previous studies].

    Science.gov (United States)

    De Barbieri, M T

    1983-01-01

    participation, and the role of women in society. Moreover, the literature concerning fertility decline contains numerous statements by both those opposed to and in favor of birth control, that improving the status of women is 1 of the most effective means of reducing population growth. It can then be asked what changes in the role of women in Mexico will attend application of a fertility reduction policy. The crude birth rate declined from 44.2 in 1970 to 34.4 in 1980, with fertility falling among all age groups but especially among women over 40. The decline occurred primarily among urban nonmanual occupations. More research must be done on recent fertility change in Mexico and on related changes in the role orientations of men and women in different classes and life cycle stages, that have occurred at various stages of the population debate.

  12. Numerical investigation into the failure of a micropile retaining wall

    OpenAIRE

    Prat Catalán, Pere

    2017-01-01

    The paper presents a numerical investigation on the failure of a micropile wall that collapsed while excavating the adjacent ground. The main objectives are: to estimate the strength parameters of the ground; to perform a sensitivity analysis on the back slope height and to obtain the shape and position of the failure surface. Because of uncertainty of the original strength parameters, a simplified backanalysis using a range of cohesion/friction pairs has been used to estimate the most realis...

  13. Applicability of numerical model for seabed topography changes by tsunami flow. Analysis of formulae for sediment transport and simulations in a rectangular harbor

    International Nuclear Information System (INIS)

    Matsuyama, Masafumi

    2009-01-01

    Characteristics of formulae for bed-load transport and the pick-up rate in suspended transport are investigated in order to clarify their impact on seabed topography changes caused by tsunami flow. The impact of bed-load transport depended on the Froude number and the water surface slope. Bed-load transport causes deposition under Fr = 6/7 at the front face of the tsunami wave. The pick-up rate has a more dominant influence on seabed topography changes than bed-load transport. Two-dimensional numerical simulations with the formulae of Ikeno et al. were carried out to simulate topography changes around a harbor caused by tsunami flow in the flume. The results indicated that the numerical model is more applicable than a numerical model with previous formulae for the estimation of deposition and erosion due to topography changes. The reason is that the formula for the pick-up rate is applicable to a wide range of sand diameters, from 0.08 mm to 0.2 mm. An upper limit of the suspended sediment concentration needs to be set in the numerical model to avoid excessively large concentrations. Comparison between real-scale numerical results with 1% and 5% upper limits clearly shows that topography changes are strongly related to the upper limit value. The upper limit value is one of the dominant factors in evaluating seabed topography changes with the real-scale 2-D numerical simulations using the formulae of Ikeno et al. (author)

  14. Varieties of Quantity Estimation in Children

    Science.gov (United States)

    Sella, Francesco; Berteletti, Ilaria; Lucangeli, Daniela; Zorzi, Marco

    2015-01-01

    In the number-to-position task, with increasing age and numerical expertise, children's pattern of estimates shifts from a biased (nonlinear) to a formal (linear) mapping. This widely replicated finding concerns symbolic numbers, whereas less is known about other types of quantity estimation. In Experiment 1, Preschool, Grade 1, and Grade 3…

  15. Review of Methods and Approaches for Deriving Numeric ...

    Science.gov (United States)

    EPA will propose numeric criteria for nitrogen/phosphorus pollution to protect estuaries, coastal areas and South Florida inland flowing waters that have been designated Class I, II and III , as well as downstream protective values (DPVs) to protect estuarine and marine waters. In accordance with the formal determination and pursuant to a subsequent consent decree, these numeric criteria are being developed to translate and implement Florida’s existing narrative nutrient criterion, to protect the designated use that Florida has previously set for these waters, at Rule 62-302.530(47)(b), F.A.C. which provides that “In no case shall nutrient concentrations of a body of water be altered so as to cause an imbalance in natural populations of aquatic flora or fauna.” Under the Clean Water Act and EPA’s implementing regulations, these numeric criteria must be based on sound scientific rationale and reflect the best available scientific knowledge. EPA has previously published a series of peer reviewed technical guidance documents to develop numeric criteria to address nitrogen/phosphorus pollution in different water body types. EPA recognizes that available and reliable data sources for use in numeric criteria development vary across estuarine and coastal waters in Florida and flowing waters in South Florida. In addition, scientifically defensible approaches for numeric criteria development have different requirements that must be taken into consider

  16. Initial results of CyberKnife treatment for recurrent previously irradiated head and neck cancer

    International Nuclear Information System (INIS)

    Himei, Kengo; Katsui, Kuniaki; Yoshida, Atsushi

    2003-01-01

    The purpose of this study was to evaluate the efficacy of CyberKnife for recurrent, previously irradiated head and neck cancer. Thirty-one patients with recurrent, previously irradiated head and neck cancer treated with a CyberKnife at Okayama Kyokuto Hospital from July 1999 to March 2002 were retrospectively studied. The accumulated dose was 28-80 Gy (median 60 Gy). The interval between CyberKnife treatment and previous radiotherapy was 0.4-429.5 months (median 16.3 months). Primary lesions were nasopharynx: 7, maxillary sinus: 6, tongue: 5, ethmoid sinus: 3, and others: 1. The pathology was squamous cell carcinoma: 25, adenoid cystic carcinoma: 4, and others: 2. Symptoms were pain: 8, and nasal bleeding: 2. The prescribed dose was 15.0-40.3 Gy (median 32.3 Gy) as the marginal dose. The response rate (complete response (CR)+partial response (PR)) and local control rate (CR+PR+no change (NC)) were 74% and 94%, respectively. Regarding symptom relief, pain disappeared in 4 cases, was relieved in 4 cases, and was unchanged in 2 cases; nasal bleeding disappeared in 2 cases. Adverse effects were observed as mucositis in 5 cases and neck swelling in one case. The prognosis of recurrent, previously irradiated head and neck cancer is estimated to be poor. Our early experience shows that CyberKnife is expected to be a feasible treatment for recurrent, previously irradiated head and neck cancer, and for the reduction of adverse effects and the maintenance of useful quality of life (QOL) for patients. (author)

  17. Numerical Radius Inequalities for Finite Sums of Operators

    Directory of Open Access Journals (Sweden)

    Mirmostafaee Alireza Kamel

    2014-12-01

    In this paper, we obtain some sharp inequalities for the numerical radius of finite sums of operators. Moreover, we give some applications of our result to the estimation of the spectral radius. We also compare our results with some known results.

  18. Numerical distribution functions of fractional unit root and cointegration tests

    DEFF Research Database (Denmark)

    MacKinnon, James G.; Nielsen, Morten Ørregaard

    We calculate numerically the asymptotic distribution functions of likelihood ratio tests for fractional unit roots and cointegration rank. Because these distributions depend on a real-valued parameter, b, which must be estimated, simple tabulation is not feasible. Partly due to the presence...

  19. Advanced Numerical Model for Irradiated Concrete

    Energy Technology Data Exchange (ETDEWEB)

    Giorla, Alain B. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2015-03-01

    In this report, we establish a numerical model for concrete exposed to irradiation to address these three critical points. The model accounts for creep in the cement paste and its coupling with damage, temperature and relative humidity. The shift in failure mode with the loading rate is also properly represented. The numerical model for creep has been validated and calibrated against different experiments in the literature [Wittmann, 1970, Le Roy, 1995]. Results from a simplified model are shown to showcase the ability of numerical homogenization to simulate irradiation effects in concrete. In future work, the complete model will be applied to the analysis of the irradiation experiments of Elleuch et al. [1972] and Kelly et al. [1969]. This requires a careful examination of the experimental environmental conditions, as in both cases certain critical information is missing, including the relative humidity history. A sensitivity analysis will be conducted to provide lower and upper bounds of the concrete expansion under irradiation, and to check whether the scatter in the simulated results matches that found in the experiments. The numerical and experimental results will be compared in terms of expansion and loss of mechanical stiffness and strength. Both effects should be captured accordingly by the model to validate it. Once the model has been validated on these two experiments, it can be applied to simulate concrete from nuclear power plants. To do so, the materials used in these concretes must be characterized as well as possible. The main parameters required are the mechanical properties of each constituent in the concrete (aggregates, cement paste), namely the elastic modulus, the creep properties, the tensile and compressive strength, the thermal expansion coefficient, and the drying shrinkage. These can be either measured experimentally, estimated from the initial composition in the case of cement paste, or back-calculated from mechanical tests on concrete. If some

  20. Estimating Utility

    DEFF Research Database (Denmark)

    Arndt, Channing; Simler, Kenneth R.

    2010-01-01

    A fundamental premise of absolute poverty lines is that they represent the same level of utility through time and space. Disturbingly, a series of recent studies in middle- and low-income economies show that even carefully derived poverty lines rarely satisfy this premise. This article proposes an information-theoretic approach to estimating cost-of-basic-needs (CBN) poverty lines that are utility consistent. Applications to date illustrate that utility-consistent poverty measurements derived from the proposed approach and those derived from current CBN best practices often differ substantially, with the current approach tending to systematically overestimate (underestimate) poverty in urban (rural) zones.

  1. Numerical methods in multibody dynamics

    CERN Document Server

    Eich-Soellner, Edda

    1998-01-01

    Today computers play an important role in the development of complex mechanical systems, such as cars, railway vehicles or machines. Efficient simulation of these systems is only possible when based on methods that explore the strong link between numerics and computational mechanics. This book gives insight into modern techniques of numerical mathematics in the light of an interesting field of applications: multibody dynamics. The important interaction between modeling and solution techniques is demonstrated by using a simplified multibody model of a truck. Different versions of this mechanical model illustrate all key concepts in static and dynamic analysis as well as in parameter identification. The book focuses in particular on constrained mechanical systems. Their formulation in terms of differential-algebraic equations is the backbone of nearly all chapters. The book is written for students and teachers in numerical analysis and mechanical engineering as well as for engineers in industrial research labor...

  2. Numerical analysis of electromagnetic fields

    CERN Document Server

    Zhou Pei Bai

    1993-01-01

    Numerical methods for solving boundary value problems have developed rapidly. Knowledge of these methods is important both for engineers and scientists. There are many books published that deal with various approximate methods such as the finite element method, the boundary element method and so on. However, there is no textbook that includes all of these methods. This book is intended to fill this gap. The book is designed to be suitable for graduate students in engineering science, for senior undergraduate students as well as for scientists and engineers who are interested in electromagnetic fields. Objective Numerical calculation is the combination of mathematical methods and field theory. A great number of mathematical concepts, principles and techniques are discussed and many computational techniques are considered in dealing with practical problems. The purpose of this book is to provide students with a solid background in numerical analysis of the field problems. The book emphasizes the basic theories ...

  3. Numerical simulation of flood barriers

    Science.gov (United States)

    Srb, Pavel; Petrů, Michal; Kulhavý, Petr

    This paper deals with testing and numerical simulating of flood barriers. The Czech Republic has been hit by several very devastating floods in past years. These floods caused several dozens of causalities and property damage reached billions of Euros. The development of flood measures is very important, especially for the reduction the number of casualties and the amount of property damage. The aim of flood control measures is the detention of water outside populated areas and drainage of water from populated areas as soon as possible. For new flood barrier design it is very important to know its behaviour in case of a real flood. During the development of the barrier several standardized tests have to be carried out. Based on the results from these tests numerical simulation was compiled using Abaqus software and some analyses were carried out. Based on these numerical simulations it will be possible to predict the behaviour of barriers and thus improve their design.

  4. Numerical investigation of freak waves

    Science.gov (United States)

    Chalikov, D.

    2009-04-01

    The paper describes the results of more than 4,000 long-term (up to thousands of peak-wave periods) numerical simulations of nonlinear gravity surface waves performed for the investigation of the properties and the estimation of the statistics of extreme ('freak') waves. The method of solution of the 2-D potential wave equations based on conformal mapping is applied to the simulation of wave behaviour assigned by different initial conditions, defined by JONSWAP and Pierson-Moskowitz spectra. It is shown that nonlinear wave evolution sometimes results in the appearance of very big waves. The shape of freak waves varies within a wide range: some of them are sharp-crested, others are asymmetric, with a strong forward inclination. Some of them can be very big, but not steep enough to create dangerous conditions for vessels (but not for fixed objects). Initial generation of extreme waves can occur merely as a result of group effects, but in some cases the largest wave suddenly starts to grow. The growth is sometimes followed by a strong concentration of wave energy around a peak vertical, taking place over the course of a few peak wave periods. The process starts with an individual wave in physical space without significant exchange of energy with the surrounding waves. Sometimes, a crest-to-trough wave height can be as large as nearly three significant wave heights. On average, only one third of all freak waves come to breaking, creating extreme conditions; however, if a wave height approaches the value of three significant wave heights, all of the freak waves break. The most surprising result was the discovery that the probability of non-dimensional freak waves (normalized by significant wave height) is actually independent of the density of wave energy. It does not mean that the statistics of extreme waves do not depend on wave energy. It just proves that normalization of wave heights by significant wave height is so effective that statistics of non-dimensional extreme waves tends to be independent
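
    To make the freak-wave criterion used in such studies concrete, the sketch below generates a synthetic surface-elevation record, estimates the significant wave height as Hs = 4·std(η), and flags zero-upcrossing waves whose crest-to-trough height exceeds 2·Hs. All signal parameters are illustrative; this is not the conformal-mapping solver described above.

```python
# Synthetic 1-hour surface-elevation record sampled at 10 Hz (illustrative).
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 3600.0, 36001)
eta = sum(a * np.cos(2.0 * np.pi * f * t + rng.uniform(0.0, 2.0 * np.pi))
          for a, f in [(0.8, 0.08), (0.5, 0.10), (0.3, 0.13)])
eta = eta + 0.2 * rng.standard_normal(t.size)

hs = 4.0 * np.std(eta)                        # significant wave height (spectral estimate)

# Zero-upcrossing segmentation into individual waves
up = np.where((eta[:-1] < 0.0) & (eta[1:] >= 0.0))[0]
heights = np.array([eta[i:j].max() - eta[i:j].min() for i, j in zip(up[:-1], up[1:])])

freak = heights[heights > 2.0 * hs]           # common freak-wave criterion H > 2*Hs
print(f"Hs = {hs:.2f} m; {freak.size} of {heights.size} waves exceed 2*Hs")
```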

  5. Numeral Incorporation in Japanese Sign Language

    Science.gov (United States)

    Ktejik, Mish

    2013-01-01

    This article explores the morphological process of numeral incorporation in Japanese Sign Language. Numeral incorporation is defined and the available research on numeral incorporation in signed language is discussed. The numeral signs in Japanese Sign Language are then introduced and followed by an explanation of the numeral morphemes which are…

  6. A numerical library in Java for scientists and engineers

    CERN Document Server

    Lau, Hang T

    2003-01-01

    At last researchers have an inexpensive library of Java-based numeric procedures for use in scientific computation. The first and only book of its kind, A Numeric Library in Java for Scientists and Engineers is a translation into Java of the library NUMAL (NUMerical procedures in ALgol 60). This groundbreaking text presents procedural descriptions for linear algebra, ordinary and partial differential equations, optimization, parameter estimation, mathematical physics, and other tools that are indispensable to any dynamic research group. The book offers test programs that allow researchers to execute the examples provided; users are free to construct their own tests and apply the numeric procedures to them in order to observe a successful computation or simulate failure. The entry for each procedure is logically presented, with name, usage parameters, and Java code included. This handbook serves as a powerful research tool, enabling the performance of critical computations in Java. It stands as a cost-effi...

  7. Numerically robust geometry engine for compound solid geometries

    International Nuclear Information System (INIS)

    Vlachoudis, V.; Sinuela-Pastor, D.

    2013-01-01

    Monte Carlo programs rely heavily on fast and numerically robust solid geometry engines. However, the success of solid modeling depends on facilities for specifying and editing parameterized models through a user-friendly graphical front-end. Such a user interface has to be fast enough to be interactive for 2D and/or 3D displays, but at the same time numerically robust in order to display possible modeling errors in real time that could be critical for the simulation. The graphical user interface Flair for FLUKA currently employs such an engine, in which special emphasis has been given to being fast and numerically robust. The numerical robustness is achieved by a novel method of estimating the floating-point precision of the operations, which dynamically adapts all the decision operations accordingly. Moreover, a predictive caching mechanism ensures that logical errors in the geometry description are found online, without compromising the processing time by checking all regions. (authors)
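
    The following sketch illustrates the general idea of magnitude-aware floating-point comparisons, where the tolerance is scaled by the operands rather than fixed. It is a generic example of the principle, not Flair's actual precision-estimation code; the function `almost_equal` and its `ulps` parameter are invented for the illustration.

```python
# Magnitude-aware comparison: tolerance scales with the operands
# (requires Python 3.9+ for math.ulp). Illustrative only.
import math

def almost_equal(a: float, b: float, ulps: float = 16.0) -> bool:
    """True if a and b differ by less than `ulps` units in the last place
    at the magnitude of the larger operand."""
    scale = max(abs(a), abs(b), 1.0)
    return abs(a - b) <= ulps * math.ulp(scale)

print(almost_equal(1e9 + 1e-7, 1e9))   # True: difference is below precision at this magnitude
print(almost_equal(1e-9, 2e-9))        # False: difference is significant at this magnitude
```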

  8. Numerical precision control and GRACE

    International Nuclear Information System (INIS)

    Fujimoto, J.; Hamaguchi, N.; Ishikawa, T.; Kaneko, T.; Morita, H.; Perret-Gallix, D.; Tokura, A.; Shimizu, Y.

    2006-01-01

    The control of the numerical precision of large-scale computations like those generated by the GRACE system for automatic Feynman diagram calculations has become an intrinsic part of those packages. Recently, Hitachi Ltd. has developed in FORTRAN a new library, HMLIB, for quadruple and octuple precision arithmetic in which the number of lost bits is made available. This library has been tested with success on the 1-loop radiative correction to e+e- -> e+e- τ+τ-. It is shown that the approach followed by HMLIB provides an efficient way to track down the source of numerical significance losses and to deliver high-precision results while minimizing computing time
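
    As a rough analogue of the lost-bits diagnostic in a scripting setting, the sketch below re-evaluates a cancellation-prone expression at 50-digit precision with `mpmath` and estimates how many significant digits the double-precision result has lost. It only mirrors the idea of tracking precision loss; it has no connection to the HMLIB library itself.

```python
# Estimate digits lost to cancellation by comparing a double-precision result
# against a 50-digit mpmath reference. Expression chosen to be cancellation-prone.
import math
from mpmath import mp, mpf

def lost_digits(x: float) -> float:
    double = math.sqrt(x * x + 1.0) - x              # suffers cancellation for large x
    mp.dps = 50                                      # 50 significant digits for the reference
    reference = mp.sqrt(mpf(x) ** 2 + 1) - mpf(x)
    rel_err = abs(mpf(double) - reference) / reference
    if rel_err == 0:
        return 0.0
    return float(mp.log10(rel_err)) + 16.0           # ~16 digits available in a double

for x in (1.0, 1e4, 1e7):
    print(f"x = {x:10.1e}: roughly {lost_digits(x):5.1f} digits lost")
```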

  9. Matlab programming for numerical analysis

    CERN Document Server

    Lopez, Cesar

    2014-01-01

    MATLAB is a high-level language and environment for numerical computation, visualization, and programming. Using MATLAB, you can analyze data, develop algorithms, and create models and applications. The language, tools, and built-in math functions enable you to explore multiple approaches and reach a solution faster than with spreadsheets or traditional programming languages, such as C/C++ or Java. Programming MATLAB for Numerical Analysis introduces you to the MATLAB language with practical hands-on instructions and results, allowing you to quickly achieve your goals. You will first become

  10. Numeric invariants from multidimensional persistence

    Energy Technology Data Exchange (ETDEWEB)

    Skryzalin, Jacek [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Carlsson, Gunnar [Stanford Univ., Stanford, CA (United States)

    2017-05-19

    In this paper, we analyze the space of multidimensional persistence modules from the perspectives of algebraic geometry. We first build a moduli space of a certain subclass of easily analyzed multidimensional persistence modules, which we construct specifically to capture much of the information which can be gained by using multidimensional persistence over one-dimensional persistence. We argue that the global sections of this space provide interesting numeric invariants when evaluated against our subclass of multidimensional persistence modules. Lastly, we extend these global sections to the space of all multidimensional persistence modules and discuss how the resulting numeric invariants might be used to study data.

  11. Residents' numeric inputting error in computerized physician order entry prescription.

    Science.gov (United States)

    Wu, Xue; Wu, Changxu; Zhang, Kan; Wei, Dong

    2016-04-01

    Computerized physician order entry (CPOE) systems with embedded clinical decision support (CDS) can significantly reduce certain types of prescription error. However, prescription errors still occur. Various factors, such as the numeric inputting methods in human computer interaction (HCI), produce different error rates and types, but have received relatively little attention. This study aimed to examine the effects of numeric inputting methods and urgency levels on numeric inputting errors in prescriptions, as well as to categorize the types of errors. Thirty residents participated in four prescribing tasks in which two factors were manipulated: numeric inputting method (numeric row in the main keyboard vs. numeric keypad) and urgency level (urgent situation vs. non-urgent situation). Multiple aspects of participants' prescribing behavior were also measured in unhurried prescribing situations. The results revealed that in urgent situations, participants were prone to make mistakes when using the numeric row in the main keyboard. With control of performance in the unhurried prescribing situation, the effects of the input methods disappeared, and urgency was found to play a significant role in the generalized linear model. Most errors were either omission or substitution types, but the proportions of transposition and intrusion error types were significantly higher than in previous research. Among the numbers 3, 8, and 9, which were the less common digits used in prescriptions, the error rate was higher, which poses a great risk to patient safety. Urgency played a more important role in CPOE numeric typing errors than typing skills and typing habits. It is recommended that the numeric keypad, which had lower error rates, be used in urgent situations. An alternative design could consider increasing the sensitivity of the keys with lower frequency of occurrence and of decimals. To improve the usability of CPOE, numeric keyboard design and error detection could benefit from spatial

  12. Milky Way Past Was More Turbulent Than Previously Known

    Science.gov (United States)

    2004-04-01

    Results of 1001 observing nights shed new light on our Galaxy [1] Summary A team of astronomers from Denmark, Switzerland and Sweden [2] has achieved a major breakthrough in our understanding of the Milky Way, the galaxy in which we live. After more than 1,000 nights of observations spread over 15 years, they have determined the spatial motions of more than 14,000 solar-like stars residing in the neighbourhood of the Sun. For the first time, the changing dynamics of the Milky Way since its birth can now be studied in detail and with a stellar sample sufficiently large to allow a sound analysis. The astronomers find that our home galaxy has led a much more turbulent and chaotic life than previously assumed. PR Photo 10a/04: Distribution on the sky of the observed stars. PR Photo 10b/04: Stars in the solar neigbourhood and the Milky Way galaxy (artist's view). PR Video Clip 04/04: The motions of the observed stars during the past 250 million years. Unknown history Home is the place we know best. But not so in the Milky Way - the galaxy in which we live. Our knowledge of our nearest stellar neighbours has long been seriously incomplete and - worse - skewed by prejudice concerning their behaviour. Stars were generally selected for observation because they were thought to be "interesting" in some sense, not because they were typical. This has resulted in a biased view of the evolution of our Galaxy. The Milky Way started out just after the Big Bang as one or more diffuse blobs of gas of almost pure hydrogen and helium. With time, it assembled into the flattened spiral galaxy which we inhabit today. Meanwhile, generation after generation of stars were formed, including our Sun some 4,700 million years ago. But how did all this really happen? Was it a rapid process? Was it violent or calm? When were all the heavier elements formed? How did the Milky Way change its composition and shape with time? Answers to these and many other questions are 'hot' topics for the

  13. Application of spreadsheet to estimate infiltration parameters

    OpenAIRE

    Zakwan, Mohammad; Muzzammil, Mohammad; Alam, Javed

    2016-01-01

    Infiltration is the process of flow of water into the ground through the soil surface. Although soil water contributes a negligible fraction of the total water present on the earth's surface, it is of utmost importance for plant life. Estimation of infiltration rates is of paramount importance for the estimation of effective rainfall, groundwater recharge, and the design of irrigation systems. Numerous infiltration models are in use for the estimation of infiltration rates. The conventional graphical approach ...
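
    As one concrete alternative to the conventional graphical approach mentioned above, infiltration-model parameters can be fitted by nonlinear least squares. The sketch below fits Horton's model f(t) = fc + (f0 − fc)·exp(−kt) to illustrative observations; the data values are made up.

```python
# Fit Horton's infiltration model f(t) = fc + (f0 - fc)*exp(-k*t) to
# illustrative (made-up) observations of infiltration rate.
import numpy as np
from scipy.optimize import curve_fit

t_hr = np.array([0.0, 0.25, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0])     # time (h)
f_obs = np.array([6.0, 4.4, 3.4, 2.2, 1.7, 1.5, 1.3, 1.25])    # infiltration rate (cm/h)

def horton(t, f0, fc, k):
    return fc + (f0 - fc) * np.exp(-k * t)

(f0, fc, k), _ = curve_fit(horton, t_hr, f_obs, p0=[6.0, 1.0, 1.0])
print(f"f0 = {f0:.2f} cm/h, fc = {fc:.2f} cm/h, k = {k:.2f} 1/h")
```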

  14. New population-based exome data question the pathogenicity of some genetic variants previously associated with Marfan syndrome

    DEFF Research Database (Denmark)

    Yang, Ren-Qiang; Jabbari, Javad; Cheng, Xiao-Shu

    2014-01-01

    BACKGROUND: Marfan syndrome (MFS) is a rare autosomal dominantly inherited connective tissue disorder with an estimated prevalence of 1:5,000. More than 1000 variants have been previously reported to be associated with MFS. However, the disease-causing effect of these variants may be questionable...

  15. Precarious Rock Methodology for Seismic Hazard: Physical Testing, Numerical Modeling and Coherence Studies

    Energy Technology Data Exchange (ETDEWEB)

    Anooshehpoor, Rasool; Purvance, Matthew D.; Brune, James N.; Preston, Leiph A.; Anderson, John G.; Smith, Kenneth D.

    2006-09-29

    This report covers the following projects: Shake table tests of precarious rock methodology, field tests of precarious rocks at Yucca Mountain and comparison of the results with PSHA predictions, study of the coherence of the wave field in the ESF, and a limited survey of precarious rocks south of the proposed repository footprint. A series of shake table experiments has been carried out at the University of Nevada, Reno Large Scale Structures Laboratory. The bulk of the experiments involved scaling acceleration time histories (uniaxial forcing) from 0.1g to the point where the objects on the shake table overturned a specified number of times. The results of these experiments have been compared with numerical overturning predictions. Numerical predictions for toppling of large objects with simple contact conditions (e.g., I-beams with sharp basal edges) agree well with shake-table results. The numerical model slightly underpredicts the overturning of small rectangular blocks. It overpredicts the overturning PGA for asymmetric granite boulders with complex basal contact conditions. In general the results confirm the approximate predictions of previous studies. Field testing of several rocks at Yucca Mountain has approximately confirmed the preliminary results from previous studies, suggesting that the PSHA predictions are too high, possibly because of uncertainty in the mean of the attenuation relations. Study of the coherence of wavefields in the ESF has provided results which will be very important in the design of the canister distribution, in particular a preliminary estimate of the wavelengths at which the wavefields become incoherent. No evidence was found for extreme focusing by lens-like inhomogeneities. A limited survey for precarious rocks confirmed that they extend south of the repository, and one of these has been field tested.

  16. Low-dose computed tomography image restoration using previous normal-dose scan

    International Nuclear Information System (INIS)

    Ma, Jianhua; Huang, Jing; Feng, Qianjin; Zhang, Hua; Lu, Hongbing; Liang, Zhengrong; Chen, Wufan

    2011-01-01

    Purpose: In current computed tomography (CT) examinations, the associated x-ray radiation dose is of a significant concern to patients and operators. A simple and cost-effective means to perform the examinations is to lower the milliampere-seconds (mAs) or kVp parameter (or delivering less x-ray energy to the body) as low as reasonably achievable in data acquisition. However, lowering the mAs parameter will unavoidably increase data noise and the noise would propagate into the CT image if no adequate noise control is applied during image reconstruction. Since a normal-dose high diagnostic CT image scanned previously may be available in some clinical applications, such as CT perfusion imaging and CT angiography (CTA), this paper presents an innovative way to utilize the normal-dose scan as a priori information to induce signal restoration of the current low-dose CT image series. Methods: Unlike conventional local operations on neighboring image voxels, nonlocal means (NLM) algorithm utilizes the redundancy of information across the whole image. This paper adapts the NLM to utilize the redundancy of information in the previous normal-dose scan and further exploits ways to optimize the nonlocal weights for low-dose image restoration in the NLM framework. The resulting algorithm is called the previous normal-dose scan induced nonlocal means (ndiNLM). Because of the optimized nature of nonlocal weights calculation, the ndiNLM algorithm does not depend heavily on image registration between the current low-dose and the previous normal-dose CT scans. Furthermore, the smoothing parameter involved in the ndiNLM algorithm can be adaptively estimated based on the image noise relationship between the current low-dose and the previous normal-dose scanning protocols. Results: Qualitative and quantitative evaluations were carried out on a physical phantom as well as clinical abdominal and brain perfusion CT scans in terms of accuracy and resolution properties. The gain by the use
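
    The core idea of prior-image induced non-local means can be illustrated in a few lines: the weights come from patch similarity between the noisy image and a second (prior) image, and the restored pixel is the weighted mean of prior-image pixels. The sketch below is a generic, unoptimized illustration of that weighting step, not the authors' ndiNLM implementation; `nlm_pixel` and its parameters are invented for the example.

```python
# Generic prior-image NLM weighting for one pixel (unoptimized, illustrative).
import numpy as np

def nlm_pixel(noisy, prior, i, j, patch=3, search=7, h=0.1):
    """Restore pixel (i, j) of `noisy` as a weighted mean of `prior` pixels,
    with weights given by patch similarity between the two images."""
    p, s = patch // 2, search // 2
    ref = noisy[i - p:i + p + 1, j - p:j + p + 1]
    num = den = 0.0
    for m in range(i - s, i + s + 1):
        for n in range(j - s, j + s + 1):
            cand = prior[m - p:m + p + 1, n - p:n + p + 1]
            d2 = np.mean((ref - cand) ** 2)      # patch distance
            w = np.exp(-d2 / (h * h))            # similarity weight
            num += w * prior[m, n]
            den += w
    return num / den

rng = np.random.default_rng(1)
prior = rng.random((32, 32))                             # stands in for the normal-dose image
noisy = prior + 0.05 * rng.standard_normal((32, 32))     # stands in for the low-dose image
print(noisy[16, 16], nlm_pixel(noisy, prior, 16, 16))
```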

  17. Bayesian estimates of linkage disequilibrium

    Directory of Open Access Journals (Sweden)

    Abad-Grau María M

    2007-06-01

    Background: The maximum likelihood estimator of D' – a standard measure of linkage disequilibrium – is biased toward disequilibrium, and the bias is particularly evident in small samples and rare haplotypes. Results: This paper proposes a Bayesian estimation of D' to address this problem. The reduction of the bias is achieved by using a prior distribution on the pair-wise associations between single nucleotide polymorphisms (SNPs) that increases the likelihood of equilibrium with increasing physical distance between pairs of SNPs. We show how to compute the Bayesian estimate using a stochastic estimation based on MCMC methods, and also propose a numerical approximation to the Bayesian estimates that can be used to estimate patterns of LD in large datasets of SNPs. Conclusion: Our Bayesian estimator of D' corrects the bias toward disequilibrium that affects the maximum likelihood estimator. A consequence of this feature is a more objective view about the extent of linkage disequilibrium in the human genome, and a more realistic number of tagging SNPs to fully exploit the power of genome-wide association studies.
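
    For reference, the quantity being estimated is the standard D' point estimate computed from phased haplotype counts, as sketched below with illustrative counts (the Bayesian and MCMC machinery of the paper is not reproduced here).

```python
# Standard point estimate of D' from phased haplotype counts (illustrative counts).
n_AB, n_Ab, n_aB, n_ab = 46, 4, 14, 36
n = n_AB + n_Ab + n_aB + n_ab

p_A = (n_AB + n_Ab) / n          # frequency of allele A at SNP 1
p_B = (n_AB + n_aB) / n          # frequency of allele B at SNP 2
p_AB = n_AB / n                  # frequency of haplotype AB

D = p_AB - p_A * p_B
if D >= 0:
    d_max = min(p_A * (1 - p_B), (1 - p_A) * p_B)
else:
    d_max = min(p_A * p_B, (1 - p_A) * (1 - p_B))
print(f"D = {D:.4f}, D' = {D / d_max:.4f}")
```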

  18. Numerical investigations of gravitational collapse

    Energy Technology Data Exchange (ETDEWEB)

    Csizmadia, Peter; Racz, Istvan, E-mail: iracz@rmki.kfki.h [RMKI, Budapest, Konkoly Thege Miklos ut 29-33, H-1121 (Hungary)

    2010-03-01

    Some properties of a new framework for simulating generic 4-dimensional spherically symmetric gravitating systems are discussed. The framework can be used to investigate spacetimes that undergo complete gravitational collapse. The analytic setup is chosen to ensure that our numerical method is capable of following the time evolution everywhere, including the black hole region.

  19. Numerical solution of Boltzmann's equation

    International Nuclear Information System (INIS)

    Sod, G.A.

    1976-04-01

    The numerical solution of Boltzmann's equation is considered for a gas model consisting of rigid spheres by means of Hilbert's expansion. If only the first two terms of the expansion are retained, Boltzmann's equation reduces to the Boltzmann-Hilbert integral equation. Successive terms in the Hilbert expansion are obtained by solving the same integral equation with a different source term. The Boltzmann-Hilbert integral equation is solved by a new very fast numerical method. The success of the method rests upon the simultaneous use of four judiciously chosen expansions; Hilbert's expansion for the distribution function, another expansion of the distribution function in terms of Hermite polynomials, the expansion of the kernel in terms of the eigenvalues and eigenfunctions of the Hilbert operator, and an expansion involved in solving a system of linear equations through a singular value decomposition. The numerical method is applied to the study of the shock structure in one space dimension. Numerical results are presented for Mach numbers of 1.1 and 1.6. 94 refs, 7 tables, 1 fig

  20. Numerical experiments with neural networks

    International Nuclear Information System (INIS)

    Miranda, Enrique.

    1990-01-01

    Neural networks are highly idealized models which, in spite of their simplicity, reproduce some key features of the real brain. In this paper, they are introduced at a level adequate for an undergraduate computational physics course. Some relevant magnitudes are defined and evaluated numerically for the Hopfield model and a short term memory model. (Author)

  1. Gaps in nonsymmetric numerical semigroups

    International Nuclear Information System (INIS)

    Fel, Leonid G.; Aicardi, Francesca

    2006-12-01

    There exist two different types of gaps in the nonsymmetric numerical semigroups S(d_1, ..., d_m) finitely generated by a minimal set of positive integers {d_1, ..., d_m}. We give the generating functions for the corresponding sets of gaps. A detailed description of both gap types is given for the first nontrivial case m = 3. (author)

  2. Numerical simulation in plasma physics

    International Nuclear Information System (INIS)

    Samarskii, A.A.

    1980-01-01

    Plasma physics is not only a field for development of physical theories and mathematical models but also an object of application of the computational experiment comprising analytical and numerical methods adapted for computers. The author considers only MHD plasma physics problems. Examples treated are dissipative structures in plasma; MHD model of solar dynamo; supernova explosion simulation; and plasma compression by a liner. (Auth.)

  3. Numerical computation of MHD equilibria

    International Nuclear Information System (INIS)

    Atanasiu, C.V.

    1982-10-01

    A numerical code for a two-dimensional MHD equilibrium computation has been carried out. The code solves the Grad-Shafranov equation in its integral form, for both formulations: the free-boundary problem and the fixed boundary one. Examples of the application of the code to tokamak design are given. (author)

  4. Numerical Calabi-Yau metrics

    International Nuclear Information System (INIS)

    Douglas, Michael R.; Karp, Robert L.; Lukic, Sergio; Reinbacher, Rene

    2008-01-01

    We develop numerical methods for approximating Ricci flat metrics on Calabi-Yau hypersurfaces in projective spaces. Our approach is based on finding balanced metrics and builds on recent theoretical work by Donaldson. We illustrate our methods in detail for a one parameter family of quintics. We also suggest several ways to extend our results

  5. Numerical simulation of distorted crystal Darwin width

    International Nuclear Information System (INIS)

    Wang Li; Xu Zhongmin; Wang Naxiu

    2012-01-01

    A new numerical simulation method based on distorted crystal optical theory was used in this study to predict the optical properties (crystal Darwin width) of a direct-cooling crystal monochromator. Finite element analysis software was used to calculate the deformed displacements of the DCM crystal and to obtain the local reciprocal lattice vector of the distorted crystal. The broadening of the direct-cooling crystal Darwin width in the meridional direction was estimated at 4.12 μrad. The result agrees well with the experimental value of 5 μrad, while it was 3.89 μrad by the traditional calculation method based on the root mean square (RMS) of the slope error along the center line of the footprint. The new method provides important theoretical support for the design and processing of monochromator crystals for synchrotron radiation beamlines. (authors)

  6. Generalized shrunken type-GM estimator and its application

    International Nuclear Information System (INIS)

    Ma, C Z; Du, Y L

    2014-01-01

    The parameter estimation problem in the linear model is considered when multicollinearity and outliers exist simultaneously. A class of new robust biased estimators, the Generalized Shrunken Type-GM Estimators, together with their calculation methods, is established by combining the GM estimator with biased estimators such as the Ridge estimate, the Principal components estimate and the Liu estimate. A numerical example shows that the most attractive advantage of these new estimators is that they can not only overcome the multicollinearity of the coefficient matrix and outliers but also have the ability to control the influence of leverage points
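
    The ridge ("shrunken") component that such estimators combine with a robust GM step can be written in a few lines; the sketch below shows only that biased-estimation part under simulated multicollinearity, with the robust down-weighting of leverage points omitted.

```python
# Ridge ("shrunken") estimate under simulated multicollinearity; the robust GM
# step that the paper combines with it is omitted here.
import numpy as np

def ridge(X, y, k):
    """Ridge estimate (X'X + k*I)^(-1) X'y for shrinkage parameter k >= 0."""
    return np.linalg.solve(X.T @ X + k * np.eye(X.shape[1]), X.T @ y)

rng = np.random.default_rng(0)
x1 = rng.standard_normal(50)
x2 = x1 + 0.01 * rng.standard_normal(50)          # nearly collinear with x1
X = np.column_stack([x1, x2])
y = 2.0 * x1 + 3.0 * x2 + 0.1 * rng.standard_normal(50)

print("OLS  :", ridge(X, y, 0.0))                 # unstable when columns are collinear
print("Ridge:", ridge(X, y, 1.0))                 # shrunken, more stable
```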

  7. Generalized shrunken type-GM estimator and its application

    Science.gov (United States)

    Ma, C. Z.; Du, Y. L.

    2014-03-01

    The parameter estimation problem in the linear model is considered when multicollinearity and outliers exist simultaneously. A class of new robust biased estimators, the Generalized Shrunken Type-GM Estimators, together with their calculation methods, is established by combining the GM estimator with biased estimators such as the Ridge estimate, the Principal components estimate and the Liu estimate. A numerical example shows that the most attractive advantage of these new estimators is that they can not only overcome the multicollinearity of the coefficient matrix and outliers but also have the ability to control the influence of leverage points.

  8. Numerical modeling techniques for flood analysis

    Science.gov (United States)

    Anees, Mohd Talha; Abdullah, K.; Nawawi, M. N. M.; Ab Rahman, Nik Norulaini Nik; Piah, Abd. Rahni Mt.; Zakaria, Nor Azazi; Syakir, M. I.; Mohd. Omar, A. K.

    2016-12-01

    Topographic and climatic changes are the main causes of abrupt flooding in tropical areas, and there is a need to find out the exact causes and effects of these changes. Numerical modeling techniques play a vital role in such studies due to their use of hydrological parameters, which are strongly linked with topographic changes. In this review, some of the widely used models utilizing hydrological and river modeling parameters, and their estimation in data-sparse regions, are discussed. Shortcomings of 1D and 2D numerical models and the possible improvements over these models through 3D modeling are also discussed. It is found that the HEC-RAS and FLO 2D models are best in terms of economical and accurate flood analysis for river and floodplain modeling, respectively. Limitations of FLO 2D in floodplain modeling, mainly floodplain elevation differences and vertical roughness in grids, were found which could be improved through a 3D model. Therefore, a 3D model was found to be more suitable than 1D and 2D models in terms of vertical accuracy in grid cells. It was also found that 3D models for open channel flows have already been developed recently, but not for floodplains. Hence, it is suggested that a 3D model for floodplains should be developed by considering all the hydrological and high-resolution topographic parameter models discussed in this review, to enhance the findings on the causes and effects of flooding.

  9. beta. and. gamma. -comparative dose estimates on Enewetak Atoll

    Energy Technology Data Exchange (ETDEWEB)

    Crase, K.W.; Gudiksen, P.H.; Robison, W.L. (California Univ., Livermore (USA). Lawrence Livermore National Lab.)

    1982-05-01

    Enewetak Atoll in the Pacific was used for atmospheric testing of U.S. nuclear weapons. Beta dose and gamma-ray exposure measurements were made on two islands of the Enewetak Atoll during July-August 1976 to determine the beta and low-energy gamma contribution to the total external radiation doses to the returning Marshallese. Measurements were made at numerous locations with thermoluminescent dosimeters (TLD), pressurized ionization chambers, portable NaI detectors, and thin-window pancake GM probes. Results of the TLD measurements with and without a beta attenuator indicate that approx. 29% of the total dose rate at 1 m in air is due to the beta or low-energy gamma contribution. The contribution at any particular site, however, is reduced by vegetation. Integral 30-yr external shallow dose estimates for future inhabitants were made and compared with external dose estimates of a previous large-scale radiological survey. Integral 30-yr shallow external dose estimates are 25-50% higher than whole-body estimates. Due to the low penetrating ability of the betas or low-energy gammas, however, several remedial actions can be taken to reduce the shallow dose contribution to the total external dose.

  10. beta- and gamma-Comparative dose estimates on Eniwetok Atoll

    Energy Technology Data Exchange (ETDEWEB)

    Crase, K.W.; Gudiksen, P.H.; Robison, W.L.

    1982-05-01

    Eniwetok Atoll is one of the Pacific atolls used for atmospheric testing of U.S. nuclear weapons. Beta dose and gamma-ray exposure measurements were made on two islands of the Eniwetok Atoll during July-August 1976 to determine the beta and low energy gamma-contribution to the total external radiation doses to the returning Marshallese. Measurements were made at numerous locations with thermoluminescent dosimeters (TLD), pressurized ionization chambers, portable NaI detectors, and thin-window pancake GM probes. Results of the TLD measurements with and without a beta-attenuator indicate that approx. 29% of the total dose rate at 1 m in air is due to beta- or low energy gamma-contribution. The contribution at any particular site, however, is somewhat dependent on ground cover, since a minimal amount of vegetation will reduce it significantly from that over bare soil, but thick stands of vegetation have little effect on any further reductions. Integral 30-yr external shallow dose estimates for future inhabitants were made and compared with external dose estimates of a previous large scale radiological survey (En73). Integral 30-yr shallow external dose estimates are 25-50% higher than whole body estimates. Due to the low penetrating ability of the beta's or low energy gamma's, however, several remedial actions can be taken to reduce the shallow dose contribution to the total external dose.

  11. RELAP-7 Numerical Stabilization: Entropy Viscosity Method

    Energy Technology Data Exchange (ETDEWEB)

    R. A. Berry; M. O. Delchini; J. Ragusa

    2014-06-01

    The RELAP-7 code is the next generation nuclear reactor system safety analysis code being developed at the Idaho National Laboratory (INL). The code is based on the INL's modern scientific software development framework, MOOSE (Multi-Physics Object Oriented Simulation Environment). The overall design goal of RELAP-7 is to take advantage of the previous thirty years of advancements in computer architecture, software design, numerical integration methods, and physical models. The end result will be a reactor systems analysis capability that retains and improves upon RELAP5's capability and extends the analysis capability for all reactor system simulation scenarios. RELAP-7 utilizes a single phase and a novel seven-equation two-phase flow models as described in the RELAP-7 Theory Manual (INL/EXT-14-31366). The basic equation systems are hyperbolic, which generally require some type of stabilization (or artificial viscosity) to capture nonlinear discontinuities and to suppress advection-caused oscillations. This report documents one of the available options for this stabilization in RELAP-7 -- a new and novel approach known as the entropy viscosity method. Because the code is an ongoing development effort in which the physical sub models, numerics, and coding are evolving, so too must the specific details of the entropy viscosity stabilization method. Here the fundamentals of the method in their current state are presented.
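
    For readers unfamiliar with the entropy viscosity idea, the sketch below computes a residual-based artificial viscosity for the 1D inviscid Burgers equation following the general recipe in the literature (viscosity proportional to the local entropy residual, capped by a first-order value). The coefficients c_e and c_max and the crude update step are illustrative assumptions; this is not the RELAP-7 implementation.

```python
# Entropy-viscosity coefficient for 1D inviscid Burgers, u_t + (u^2/2)_x = 0,
# with entropy pair E = u^2/2, F = u^3/3. Coefficients are illustrative.
import numpy as np

def entropy_viscosity(u_new, u_old, dx, dt, c_e=1.0, c_max=0.5):
    E_new, E_old = 0.5 * u_new**2, 0.5 * u_old**2
    F = u_new**3 / 3.0
    residual = np.abs((E_new - E_old) / dt + np.gradient(F, dx))   # entropy residual
    norm = np.max(np.abs(E_new - np.mean(E_new))) + 1e-14
    nu_e = c_e * dx**2 * residual / norm          # residual-based viscosity
    nu_max = c_max * dx * np.abs(u_new)           # first-order (upwind-like) cap
    return np.minimum(nu_e, nu_max)

x = np.linspace(0.0, 1.0, 201)
dx, dt = x[1] - x[0], 1e-3
u_old = np.sin(2.0 * np.pi * x)
u_new = u_old - dt * u_old * np.gradient(u_old, dx)   # one crude explicit step
print("max artificial viscosity:", entropy_viscosity(u_new, u_old, dx, dt).max())
```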

  12. Numerical simulation of avascular tumor growth

    Energy Technology Data Exchange (ETDEWEB)

    Slezak, D Fernandez; Suarez, C; Soba, A; Risk, M; Marshall, G [Laboratorio de Sistemas Complejos, Departamento de Computacion, Facultad de Ciencias Exactas y Naturales, Universidad de Buenos Aires (C1428EGA) Buenos Aires (Argentina)

    2007-11-15

    A mathematical and numerical model for the description of different aspects of microtumor development is presented. The model is based on the solution of a system of partial differential equations describing avascular tumor growth. A detailed second-order numerical algorithm for solving this system is described. Parameters are swept to cover a range of feasible physiological values. While previously published works used a single set of parameter values, here we present a wide range of feasible solutions for tumor growth, covering a more realistic scenario. The model is validated by experimental data obtained with a multicellular spheroid model, a specific type of in vitro biological model which is at present considered to be optimal for the study of complex aspects of avascular microtumor physiology. Moreover, a dynamical analysis and the local behaviour of the system are presented, showing chaotic situations for particular sets of parameter values at some fixed points. Further biological experiments related to those specific points may give potentially interesting results.
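
    A minimal example of the kind of PDE system involved is a single-species reaction-diffusion (Fisher-KPP type) model advanced with an explicit finite-difference step, as sketched below. It is a generic illustration with arbitrary parameters, not the authors' multi-species model.

```python
# Explicit step for a minimal reaction-diffusion (Fisher-KPP) growth model:
# dc/dt = D*Laplacian(c) + rho*c*(1 - c), periodic boundaries, arbitrary units.
import numpy as np

D, rho = 1e-3, 1.0                   # diffusion coefficient, growth rate
nx, dx, dt = 200, 0.01, 1e-3
c = np.zeros(nx)
c[nx // 2 - 5:nx // 2 + 5] = 0.1     # small initial colony

for _ in range(5000):
    lap = (np.roll(c, 1) - 2.0 * c + np.roll(c, -1)) / dx**2
    c += dt * (D * lap + rho * c * (1.0 - c))

print(f"colony half-width ~ {0.5 * np.sum(c > 0.5) * dx:.3f} (arbitrary units)")
```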

  13. Black hole spectroscopy: Systematic errors and ringdown energy estimates

    Science.gov (United States)

    Baibhav, Vishal; Berti, Emanuele; Cardoso, Vitor; Khanna, Gaurav

    2018-02-01

    The relaxation of a distorted black hole to its final state provides important tests of general relativity within the reach of current and upcoming gravitational wave facilities. In black hole perturbation theory, this phase consists of a simple linear superposition of exponentially damped sinusoids (the quasinormal modes) and of a power-law tail. How many quasinormal modes are necessary to describe waveforms with a prescribed precision? What error do we incur by only including quasinormal modes, and not tails? What other systematic effects are present in current state-of-the-art numerical waveforms? These issues, which are basic to testing fundamental physics with distorted black holes, have hardly been addressed in the literature. We use numerical relativity waveforms and accurate evolutions within black hole perturbation theory to provide some answers. We show that (i) a determination of the fundamental l = m = 2 quasinormal frequencies and damping times to within 1% or better requires the inclusion of at least the first overtone, and preferably of the first two or three overtones; (ii) a determination of the black hole mass and spin with precision better than 1% requires the inclusion of at least two quasinormal modes for any given angular harmonic mode (ℓ, m). We also improve on previous estimates and fits for the ringdown energy radiated in the various multipoles. These results are important to quantify theoretical (as opposed to instrumental) limits in parameter estimation accuracy and tests of general relativity allowed by ringdown measurements with high signal-to-noise ratio gravitational wave detectors.
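
    The basic fitting problem described here (extracting quasinormal frequencies and damping times from a ringdown) can be illustrated by fitting a single damped sinusoid to a synthetic signal; adding overtones amounts to adding further damped-sinusoid terms. The values below are illustrative, not taken from the paper.

```python
# Fit a single damped sinusoid to a synthetic ringdown signal (illustrative values).
import numpy as np
from scipy.optimize import curve_fit

def ringdown(t, A, f, tau, phi):
    return A * np.exp(-t / tau) * np.cos(2.0 * np.pi * f * t + phi)

t = np.linspace(0.0, 0.05, 2000)                       # seconds
rng = np.random.default_rng(0)
data = ringdown(t, 1.0, 250.0, 0.004, 0.3) + 0.02 * rng.standard_normal(t.size)

popt, _ = curve_fit(ringdown, t, data, p0=[1.0, 240.0, 0.005, 0.0])
print(f"f = {popt[1]:.1f} Hz, tau = {1e3 * popt[2]:.2f} ms")
# Adding overtones means adding further A_n*exp(-t/tau_n)*cos(2*pi*f_n*t + phi_n) terms.
```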

  14. Analysis of pumping tests of partially penetrating wells in an unconfined aquifer using inverse numerical optimization

    Science.gov (United States)

    Hvilshøj, S.; Jensen, K. H.; Barlebo, H. C.; Madsen, B.

    1999-08-01

    Inverse numerical modeling was applied to analyze pumping tests of partially penetrating wells carried out in three wells established in an unconfined aquifer in Vejen, Denmark, where extensive field investigations had previously been carried out, including tracer tests, mini-slug tests, and other hydraulic tests. Drawdown data from multiple piezometers located at various horizontal and vertical distances from the pumping well were included in the optimization. Horizontal and vertical hydraulic conductivities, specific storage, and specific yield were estimated, assuming that the aquifer was either a homogeneous system with vertical anisotropy or composed of two or three layers of different hydraulic properties. In two out of three cases, a more accurate interpretation was obtained for a multi-layer model defined on the basis of lithostratigraphic information obtained from geological descriptions of sediment samples, gammalogs, and flow-meter tests. Analysis of the pumping tests resulted in values for horizontal hydraulic conductivities that are in good accordance with those obtained from slug tests and mini-slug tests. Besides the horizontal hydraulic conductivity, it is possible to determine the vertical hydraulic conductivity, specific yield, and specific storage based on a pumping test of a partially penetrating well. The study demonstrates that pumping tests of partially penetrating wells can be analyzed using inverse numerical models. The model used in the study was a finite-element flow model combined with a non-linear regression model. Such a model can accommodate more geological information and complex boundary conditions, and the parameter-estimation procedure can be formalized to obtain optimum estimates of hydraulic parameters and their standard deviations.

  15. Numerical study of thermal test of a cask of transportation for radioactive material

    International Nuclear Information System (INIS)

    Vieira, Tiago A.S.; Santos, André A.C. dos; Vidal, Guilherme A.M.; Silva Junior, Geraldo E.

    2017-01-01

    In this study, numerical simulations of a transport cask for radioactive material were carried out and the numerical results were compared with experimental results of tests performed on two different occasions. A mesh study was also carried out on the previously designed geometry of the same cask, in order to evaluate its impact on the stability of the numerical results for this type of problem. The comparison of the numerical and experimental results made it possible to evaluate the need to plan and carry out a new test in order to validate the CFD codes used in the numerical simulations

  16. Evaluation of wave runup predictions from numerical and parametric models

    Science.gov (United States)

    Stockdon, Hilary F.; Thompson, David M.; Plant, Nathaniel G.; Long, Joseph W.

    2014-01-01

    Wave runup during storms is a primary driver of coastal evolution, including shoreline and dune erosion and barrier island overwash. Runup and its components, setup and swash, can be predicted from a parameterized model that was developed by comparing runup observations to offshore wave height, wave period, and local beach slope. Because observations during extreme storms are often unavailable, a numerical model is used to simulate the storm-driven runup to compare to the parameterized model and then develop an approach to improve the accuracy of the parameterization. Numerically simulated and parameterized runup were compared to observations to evaluate model accuracies. The analysis demonstrated that setup was accurately predicted by both the parameterized model and numerical simulations. Infragravity swash heights were most accurately predicted by the parameterized model. The numerical model suffered from bias and gain errors that depended on whether a one-dimensional or two-dimensional spatial domain was used. Nonetheless, all of the predictions were significantly correlated to the observations, implying that the systematic errors can be corrected. The numerical simulations did not resolve the incident-band swash motions, as expected, and the parameterized model performed best at predicting incident-band swash heights. An assimilated prediction using a weighted average of the parameterized model and the numerical simulations resulted in a reduction in prediction error variance. Finally, the numerical simulations were extended to include storm conditions that have not been previously observed. These results indicated that the parameterized predictions of setup may need modification for extreme conditions; numerical simulations can be used to extend the validity of the parameterized predictions of infragravity swash; and numerical simulations systematically underpredict incident swash, which is relatively unimportant under extreme conditions.
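
    The assimilation step described above can be illustrated with an inverse-variance weighted average of the two predictions, as sketched below. The error variances used are placeholder assumptions, not values from the study.

```python
# Inverse-variance weighted combination of a parameterized and a numerical
# runup prediction; error variances are placeholders.
def assimilate(pred_param, var_param, pred_numeric, var_numeric):
    w_p, w_n = 1.0 / var_param, 1.0 / var_numeric
    combined = (w_p * pred_param + w_n * pred_numeric) / (w_p + w_n)
    return combined, 1.0 / (w_p + w_n)

r2_param, r2_numeric = 2.1, 1.7                  # runup estimates (m)
combined, var = assimilate(r2_param, 0.09, r2_numeric, 0.16)
print(f"assimilated runup = {combined:.2f} m (error variance {var:.3f} m^2)")
```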

  17. Comparisons of Crosswind Velocity Profile Estimates Used in Fast-Time Wake Vortex Prediction Models

    Science.gov (United States)

    Pruis, Mathew J.; Delisi, Donald P.; Ahmad, Nashat N.

    2011-01-01

    Five methods for estimating crosswind profiles used in fast-time wake vortex prediction models are compared in this study. Previous investigations have shown that temporal and spatial variations in the crosswind vertical profile have a large impact on the transport and time evolution of the trailing vortex pair. The most important crosswind parameters are the magnitude of the crosswind and the gradient in the crosswind shear. It is known that pulsed and continuous wave lidar measurements can provide good estimates of the wind profile in the vicinity of airports. In this study comparisons are made between estimates of the crosswind profiles from a priori information on the trajectory of the vortex pair as well as crosswind profiles derived from different sensors and a regional numerical weather prediction model.

  18. Estimations of actual availability

    International Nuclear Information System (INIS)

    Molan, M.; Molan, G.

    2001-01-01

    Adaptation of the working environment (social, organizational and physical) should assure a higher level of workers' availability and consequently a higher level of workers' performance. A special theoretical model describing the connections between environmental factors, human availability and performance was developed and validated. The central part of the model is the evaluation of human actual availability in the real working situation, or fitness-for-duty self-estimation. The model was tested in different working environments. On a large sample (2000 workers), standardized values and critical limits for an availability questionnaire were defined. The standardized method was used to identify the most important impacts of environmental factors. Identified problems were addressed by investments in the organization, by modification of selection and training procedures, and by humanization of the working environment. For workers with behavioural and health problems, individual consultancy was offered. The described method is a tool for the identification of impacts. In combination with behavioural analyses and mathematical analyses of connections, it offers the possibility to keep an adequate level of human availability and fitness for duty in each real working situation. The model should thus be a tool for achieving an adequate level of nuclear safety by keeping an adequate level of workers' availability and fitness for duty. For each individual worker, estimation of the level of actual fitness for duty is possible. Effects of prolonged work and additional tasks can be evaluated, and evaluations of health-status effects and ageing are possible at the individual level. (author)

  19. Numerical simulation of Higgs models

    International Nuclear Information System (INIS)

    Jaster, A.

    1995-10-01

    The SU(2) Higgs and the Schwinger model on the lattice were analysed. Numerical simulations of the SU(2) Higgs model were performed to study the finite temperature electroweak phase transition. With the help of the multicanonical method the distribution of an order parameter at the phase transition point was measured. This was used to obtain the order of the phase transition and the value of the interface tension with the histogram method. Numerical simulations were also performed at zero temperature to perform renormalization. The measured values for the Wilson loops were used to determine the static potential and from this the renormalized gauge coupling. The Schwinger model was simulated at different gauge couplings to analyse the properties of the Kaplan-Shamir fermions. The prediction that the mass parameter gets only multiplicative renormalization was tested and verified. (orig.)

  20. Numerical methods for metamaterial design

    CERN Document Server

    2013-01-01

    This book describes a relatively new approach for the design of electromagnetic metamaterials. Numerical optimization routines are combined with electromagnetic simulations to tailor the broadband optical properties of a metamaterial to have predetermined responses at predetermined wavelengths. After a review of both the major efforts within the field of metamaterials and the field of mathematical optimization, chapters covering both gradient-based and derivative-free design methods are considered. Selected topics including surrogate-based optimization, adaptive mesh search, and genetic algorithms are shown to be effective, gradient-free optimization strategies. Additionally, new techniques for representing dielectric distributions in two dimensions, including level sets, are demonstrated as effective methods for gradient-based optimization. Each chapter begins with a rigorous review of the optimization strategy used, and is followed by numerous examples that combine the strategy with either electromag...

  1. Numerical Modelling of Electrical Discharges

    International Nuclear Information System (INIS)

    Durán-Olivencia, F J; Pontiga, F; Castellanos, A

    2014-01-01

    The problem of the propagation of an electrical discharge between a spherical electrode and a plane has been solved by means of finite element methods (FEM) using a fluid approximation and assuming weak ionization and local equilibrium with the electric field. The numerical simulation of this type of problem presents the usual difficulties of convection-diffusion-reaction problems, in addition to those associated with the nonlinearities of the charged species velocities, the formation of steep gradients of the electric field and particle densities, and the coexistence of very different temporal scales. The effect of using different temporal discretizations for the numerical integration of the corresponding system of partial differential equations is investigated here. In particular, the so-called θ-methods are used, which allow implicit, semi-explicit and fully explicit schemes to be implemented in a simple way
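
    For context, the θ-method family interpolates between the schemes mentioned above. The sketch below applies it to the linear test problem du/dt = −λu, with θ = 0 (fully explicit), θ = 1/2 (semi-implicit Crank-Nicolson) and θ = 1 (fully implicit); it is a generic illustration, not the discharge solver itself.

```python
# Theta-method for du/dt = -lam*u:
# (1 + theta*dt*lam)*u_new = (1 - (1-theta)*dt*lam)*u_old
import numpy as np

def theta_step(u, lam, dt, theta):
    return u * (1.0 - (1.0 - theta) * dt * lam) / (1.0 + theta * dt * lam)

lam, dt, steps = 10.0, 0.05, 40                  # integrate to t = 2.0
for theta, name in [(0.0, "explicit Euler"), (0.5, "Crank-Nicolson"), (1.0, "implicit Euler")]:
    u = 1.0
    for _ in range(steps):
        u = theta_step(u, lam, dt, theta)
    print(f"{name:15s} u(2.0) = {u:.4e}   (exact {np.exp(-lam * 2.0):.4e})")
```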

  2. Numerical Modeling of Shoreline Undulations

    DEFF Research Database (Denmark)

    Kærgaard, Kasper Hauberg

    model has been developed which describes the longshore sediment transport along arbitrarily shaped shorelines. The numerical model is based on a spectral wave model, a depth integrated flow model, a wave-phase resolving sediment transport description and a one-line shoreline model. First the theoretical...... of the feature and underpredicts the migration speeds of the features. On the second shoreline, the shoreline model predicts undulation lengths which are longer than the observed undulations. Lastly the thesis considers field measurements of undulations of the bottom bathymetry along an otherwise straight...... length of the shoreline undulations is determined in the linear regime using a shoreline stability analysis based on the numerical model. The analysis shows that the length of the undulations in the linear regime depends on the incoming wave conditions and on the coastal profile. For larger waves...

  3. Numerical simulation of fire vortex

    Science.gov (United States)

    Barannikova, D. D.; Borzykh, V. E.; Obukhov, A. G.

    2018-05-01

    The article considers the numerical simulation of the swirling flow of air around a smoothly heated vertical cylindrical domain under the action of gravity and Coriolis forces. The complete system of Navier-Stokes equations is solved numerically with constant viscosity and thermal conductivity coefficients. Together with the proposed initial and boundary conditions, these solutions describe complex non-stationary 3D flows of a viscous, compressible, heat-conducting gas. For various instants of time during the initial stage of flow formation, calculations of all gas-dynamic parameters, that is, the density, temperature, pressure and three velocity components of the gas particles, have been carried out using an explicit finite-difference scheme. The instantaneous flow lines corresponding to the trajectories of the particles in the emerging flow have been constructed. A negative swirl direction of the air flow, arising as the vertical cylindrical domain is heated, has been identified.

  4. Numerical and Evolutionary Optimization Workshop

    CERN Document Server

    Trujillo, Leonardo; Legrand, Pierrick; Maldonado, Yazmin

    2017-01-01

    This volume comprises a selection of works presented at the Numerical and Evolutionary Optimization (NEO) workshop held in September 2015 in Tijuana, Mexico. The development of powerful search and optimization techniques is of great importance in today’s world, which requires researchers and practitioners to tackle a growing number of challenging real-world problems. In particular, there are two well-established and widely known fields that are commonly applied in this area: (i) traditional numerical optimization techniques and (ii) comparatively recent bio-inspired heuristics. Both paradigms have their unique strengths and weaknesses, allowing them to solve some challenging problems while still failing in others. The goal of the NEO workshop series is to bring together people from these and related fields to discuss, compare and merge their complementary perspectives in order to develop fast and reliable hybrid methods that maximize the strengths and minimize the weaknesses of the underlying paradigms. Throu...

  5. Numerical Tokamak Project code comparison

    International Nuclear Information System (INIS)

    Waltz, R.E.; Cohen, B.I.; Beer, M.A.

    1994-01-01

    The Numerical Tokamak Project undertook a code comparison using a set of TFTR tokamak parameters. Local radial annulus codes of both gyrokinetic and gyrofluid types were compared for both slab and toroidal case limits assuming ion temperature gradient mode turbulence in a pure plasma with adiabatic electrons. The heat diffusivities were found to be in good internal agreement within ± 50% of the group average over five codes

  6. Numerical algorithms in secondary creep

    International Nuclear Information System (INIS)

    Feijoo, R.A.; Taroco, E.

    1980-01-01

    The problem of stationary creep is presented, together with its variational formulation, when weak constraints are established that are capable of ensuring a unique solution. A second, so-called elasto-creep problem is then analysed, together with its variational formulation. It is shown that its stationary solution coincides with that of stationary creep, and the advantages of this formulation with respect to the former one are emphasized. Some numerical applications showing the efficiency of the proposed method are finally presented [pt

  7. Numerical and symbolic scientific computing

    CERN Document Server

    Langer, Ulrich

    2011-01-01

    The book presents the state of the art and results and also includes articles pointing to future developments. Most of the articles center around the theme of linear partial differential equations. Major aspects are fast solvers in elastoplasticity, symbolic analysis for boundary problems, symbolic treatment of operators, computer algebra, and finite element methods, a symbolic approach to finite difference schemes, cylindrical algebraic decomposition and local Fourier analysis, and white noise analysis for stochastic partial differential equations. Further numerical-symbolic topics range from

  8. Numerical ability predicts mortgage default.

    Science.gov (United States)

    Gerardi, Kristopher; Goette, Lorenz; Meier, Stephan

    2013-07-09

    Unprecedented levels of US subprime mortgage defaults precipitated a severe global financial crisis in late 2008, plunging much of the industrialized world into a deep recession. However, the fundamental reasons for why US mortgages defaulted at such spectacular rates remain largely unknown. This paper presents empirical evidence showing that the ability to perform basic mathematical calculations is negatively associated with the propensity to default on one's mortgage. We measure several aspects of financial literacy and cognitive ability in a survey of subprime mortgage borrowers who took out loans in 2006 and 2007, and match them to objective, detailed administrative data on mortgage characteristics and payment histories. The relationship between numerical ability and mortgage default is robust to controlling for a broad set of sociodemographic variables, and is not driven by other aspects of cognitive ability. We find no support for the hypothesis that numerical ability impacts mortgage outcomes through the choice of the mortgage contract. Rather, our results suggest that individuals with limited numerical ability default on their mortgage due to behavior unrelated to the initial choice of their mortgage.

  9. Constrained evolution in numerical relativity

    Science.gov (United States)

    Anderson, Matthew William

    The strongest potential source of gravitational radiation for current and future detectors is the merger of binary black holes. Full numerical simulation of such mergers can provide realistic signal predictions and enhance the probability of detection. Numerical simulation of the Einstein equations, however, is fraught with difficulty. Stability even in static test cases of single black holes has proven elusive. Common to unstable simulations is the growth of constraint violations. This work examines the effect of controlling the growth of constraint violations by solving the constraints periodically during a simulation, an approach called constrained evolution. The effects of constrained evolution are contrasted with the results of unconstrained evolution, evolution where the constraints are not solved during the course of a simulation. Two different formulations of the Einstein equations are examined: the standard ADM formulation and the generalized Frittelli-Reula formulation. In most cases constrained evolution vastly improves the stability of a simulation at minimal computational cost when compared with unconstrained evolution. However, in the more demanding test cases examined, constrained evolution fails to produce simulations with long-term stability in spite of producing improvements in simulation lifetime when compared with unconstrained evolution. Constrained evolution is also examined in conjunction with a wide variety of promising numerical techniques, including mesh refinement and overlapping Cartesian and spherical computational grids. Constrained evolution in boosted black hole spacetimes is investigated using overlapping grids. Constrained evolution proves to be central to the host of innovations required in carrying out such intensive simulations.

  11. 75 FR 39143 - Airworthiness Directives; Arrow Falcon Exporters, Inc. (previously Utah State University); AST...

    Science.gov (United States)

    2010-07-08

    ... (previously Precision Helicopters, LLC); Robinson Air Crane, Inc.; San Joaquin Helicopters (previously Hawkins... (Previously Hawkins & Powers Aviation); S.M. &T. Aircraft (Previously Us Helicopter Inc., UNC Helicopters, Inc...

  12. 75 FR 66009 - Airworthiness Directives; Cessna Aircraft Company (Type Certificate Previously Held by Columbia...

    Science.gov (United States)

    2010-10-27

    ... Company (Type Certificate Previously Held by Columbia Aircraft Manufacturing (Previously the Lancair... Company (Type Certificate Previously Held by Columbia Aircraft Manufacturing (Previously The Lancair...-15895. Applicability (c) This AD applies to the following Cessna Aircraft Company (type certificate...

  13. Numerical vs. turbulent diffusion in geophysical flow modelling

    International Nuclear Information System (INIS)

    D'Isidoro, M.; Maurizi, A.; Tampieri, F.

    2008-01-01

    Numerical advection schemes induce the spreading of passive tracers from localized sources. The effects of changing resolution and Courant number are investigated using the WAF advection scheme, which leads to a sub-diffusive process. The spreading rate from an instantaneous source is compared with the physical diffusion necessary to simulate unresolved turbulent motions. The time at which the physical diffusion process overpowers the numerical spreading is estimated, and is shown to reduce as the resolution increases, and to increase as the wind velocity increases.
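
    A purely illustrative scaling argument (the functional forms below are assumptions, not the WAF-specific analysis of the record): if the numerical spreading is sub-diffusive, $\sigma^2_{\mathrm{num}}(t) = A\,t^{\alpha}$ with $\alpha < 1$, while the parameterized turbulent diffusion gives $\sigma^2_{\mathrm{phys}}(t) = 2Kt$, the two contributions balance at

$$
t^{*} = \left(\frac{A}{2K}\right)^{\frac{1}{1-\alpha}},
$$

    so refining the grid (which reduces A) or increasing the turbulent diffusivity K shortens the time after which physical diffusion dominates, consistent with the behaviour described above.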

  14. Java technology for implementing efficient numerical analysis in intranet

    International Nuclear Information System (INIS)

    Song, Hee Yong; Ko, Sung Ho

    2001-01-01

    This paper introduces some useful Java technologies for utilizing the internet in numerical analysis, and suggests one architecture for performing efficient numerical analysis in the intranet by using them. The present work verifies its feasibility by implementing some parts of this architecture with two simple examples. One is based on Servlet-Applet communication, JDBC and Swing. The other adds multi-threading, file transfer and Java remote method invocation to the former. This work is intended to lay the basis for later, more advanced and practical research that will include efficiency estimates of this architecture and deal with advanced load balancing

  15. Risks of cardiovascular adverse events and death in patients with previous stroke undergoing emergency noncardiac, nonintracranial surgery

    DEFF Research Database (Denmark)

    Christiansen, Mia N.; Andersson, Charlotte; Gislason, Gunnar H.

    2017-01-01

    Background: The outcomes of emergent noncardiac, nonintracranial surgery in patients with previous stroke remain unknown. Methods: All emergency surgeries performed in Denmark (2005 to 2011) were analyzed according to the time elapsed between previous ischemic stroke and surgery. The risks of 30-day...... mortality and major adverse cardiovascular events were estimated as odds ratios (ORs) and 95% CIs using adjusted logistic regression models in a priori defined groups (reference was no previous stroke). In patients undergoing surgery immediately (within 1 to 3 days) or early after stroke (within 4 to 14...... and general anesthesia less frequent in patients with previous stroke (all P ......). Risks of major adverse cardiovascular events and mortality were high for patients with stroke less than 3 months (20.7 and 16.4% events; OR = 4.71 [95% CI, 4.18 to 5.32] and 1.65 [95% CI, 1.45 to 1.88]), and remained...

  16. Microwave Breast Imaging System Prototype with Integrated Numerical Characterization

    Directory of Open Access Journals (Sweden)

    Mark Haynes

    2012-01-01

    The increasing number of experimental microwave breast imaging systems and the need to properly model them have motivated our development of an integrated numerical characterization technique. We use Ansoft HFSS and a formalism we developed previously to numerically characterize an S-parameter-based breast imaging system and link it to an inverse scattering algorithm. We show successful reconstructions of simple test objects using synthetic and experimental data. We demonstrate the sensitivity of image reconstructions to knowledge of the background dielectric properties and show the limits of the current model.

  17. Learning linear spatial-numeric associations improves accuracy of memory for numbers

    Directory of Open Access Journals (Sweden)

    Clarissa Ann Thompson

    2016-01-01

    Memory for numbers improves with age and experience. One potential source of improvement is a logarithmic-to-linear shift in children’s representations of magnitude. To test this, kindergartners and second graders estimated the location of numbers on number lines and recalled numbers presented in vignettes (Study 1). Accuracy at number-line estimation predicted memory accuracy on a numerical recall task after controlling for the effect of age and the ability to approximately order magnitudes (mapper status). To test more directly whether linear numeric magnitude representations caused improvements in memory, half of the children were given feedback on their number-line estimates (Study 2). As expected, learning linear representations was again linked to memory for numerical information even after controlling for age and mapper status. These results suggest that linear representations of numerical magnitude may be a causal factor in the development of numeric recall accuracy.
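
    As a hedged illustration of the logarithmic-versus-linear comparison described above (the data and variable names are hypothetical; this is not the authors' analysis code), both candidate representations can be fit to a child's number-line estimates and compared by R²:

```python
import numpy as np

def r_squared(y, y_hat):
    """Coefficient of determination for a fitted model."""
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return 1.0 - ss_res / ss_tot

# Hypothetical data: presented numbers and one child's estimates on a 0-100 line.
numbers   = np.array([2, 5, 11, 18, 27, 42, 57, 71, 86, 96], dtype=float)
estimates = np.array([9, 18, 30, 38, 45, 57, 64, 74, 85, 93], dtype=float)

# Linear representation: estimate = a * number + b
a_lin, b_lin = np.polyfit(numbers, estimates, 1)
r2_lin = r_squared(estimates, a_lin * numbers + b_lin)

# Logarithmic representation: estimate = a * ln(number) + b
a_log, b_log = np.polyfit(np.log(numbers), estimates, 1)
r2_log = r_squared(estimates, a_log * np.log(numbers) + b_log)

print(f"linear R^2 = {r2_lin:.3f}, logarithmic R^2 = {r2_log:.3f}")
```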

  18. Combining four Monte Carlo estimators for radiation momentum deposition

    International Nuclear Information System (INIS)

    Hykes, Joshua M.; Urbatsch, Todd J.

    2011-01-01

    Using four distinct Monte Carlo estimators for momentum deposition - analog, absorption, collision, and track-length estimators - we compute a combined estimator. In the wide range of problems tested, the combined estimator always has a figure of merit (FOM) equal to or better than the other estimators. In some instances the FOM of the combined estimator is only a few percent higher than the FOM of the best solo estimator, the track-length estimator, while in one instance it is better by a factor of 2.5. Over the majority of configurations, the combined estimator's FOM is 10 - 20% greater than any of the solo estimators' FOM. The numerical results show that the track-length estimator is the most important term in computing the combined estimator, followed far behind by the analog estimator. The absorption and collision estimators make negligible contributions. (author)
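
    The record does not state the combination rule; one simple choice, assumed here purely for illustration, is an inverse-variance weighted average (which treats the four tallies as independent, only approximately true for tallies drawn from the same histories), with the figure of merit computed as FOM = 1/(R²T):

```python
import numpy as np

def combine_estimators(means, variances):
    """Inverse-variance weighted combination of (assumed independent) estimators.
    Returns the combined mean and its variance; the paper's combined estimator
    may use a different weighting."""
    w = 1.0 / np.asarray(variances)
    mean_c = np.sum(w * np.asarray(means)) / np.sum(w)
    var_c = 1.0 / np.sum(w)
    return mean_c, var_c

def figure_of_merit(mean, var, run_time):
    """FOM = 1 / (R^2 * T), with R the relative standard error and T the run time."""
    rel_err = np.sqrt(var) / abs(mean)
    return 1.0 / (rel_err ** 2 * run_time)

# Hypothetical momentum-deposition tallies: analog, absorption, collision, track-length.
means = [1.02, 0.98, 1.01, 1.00]
variances = [4e-3, 6e-3, 5e-3, 4e-4]
m, v = combine_estimators(means, variances)
print(m, v, figure_of_merit(m, v, run_time=120.0))
```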

  19. Online Wavelet Complementary velocity Estimator.

    Science.gov (United States)

    Righettini, Paolo; Strada, Roberto; KhademOlama, Ehsan; Valilou, Shirin

    2018-02-01

    In this paper we propose a new online Wavelet Complementary velocity Estimator (WCE) operating on position and acceleration data gathered from an electro-hydraulic servo shaking table. It is a batch-type estimator based on wavelet filter banks, which extract the high- and low-resolution content of the data. The proposed complementary estimator combines the two velocity estimates obtained from numerical differentiation of the position sensor and numerical integration of the acceleration sensor, considering a fixed moving-horizon window as input to the wavelet filter. Because wavelet filters are used, the method can be implemented as a parallel procedure. In this way the velocity is estimated numerically without the high noise of differentiators or the drifting bias of integration, and with less delay, which makes it suitable for active vibration control in high-precision mechatronic systems using Direct Velocity Feedback (DVF) methods. The method allows velocity sensing with fewer mechanically moving parts, which makes it suitable for fast miniature structures. We compare this method with Kalman and Butterworth filters in terms of stability and delay, and benchmark them by integrating the estimated velocity over a long time to recover the initial position data. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
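
    A minimal sketch of the complementary idea, assuming a simple first-order blend instead of the wavelet filter banks used in the paper (all names and parameters below are illustrative):

```python
import numpy as np

def complementary_velocity(pos, acc, dt, tau=0.05):
    """Fuse the velocity from differentiated position (trusted at low frequency)
    with the velocity from integrated acceleration (trusted at high frequency)
    using a first-order complementary filter with time constant tau.
    Simplified stand-in for the wavelet-filter-bank version of the paper."""
    alpha = tau / (tau + dt)
    v_diff = np.gradient(pos, dt)          # noisy but drift-free velocity
    v_est = np.zeros_like(pos)
    for k in range(1, len(pos)):
        # propagate with acceleration, then pull toward the differentiated velocity
        v_pred = v_est[k - 1] + acc[k] * dt
        v_est[k] = alpha * v_pred + (1.0 - alpha) * v_diff[k]
    return v_est

# Hypothetical usage with a synthetic sinusoidal table motion:
t = np.arange(0, 1, 1e-3)
pos = 0.01 * np.sin(2 * np.pi * 5 * t) + 1e-4 * np.random.randn(t.size)
acc = -0.01 * (2 * np.pi * 5) ** 2 * np.sin(2 * np.pi * 5 * t)
v = complementary_velocity(pos, acc, dt=1e-3)
```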

  20. External cephalic version among women with a previous cesarean delivery: report on 36 cases and review of the literature.

    Science.gov (United States)

    Abenhaim, Haim A; Varin, Jocelyne; Boucher, Marc

    2009-01-01

    Whether or not women with a previous cesarean section should be considered for an external cephalic version remains unclear. In our study, we sought to examine the relationship between a history of previous cesarean section and outcomes of external cephalic version for pregnancies at 36 completed weeks of gestation or more. Data on obstetrical history and on external cephalic version outcomes were obtained from the C.H.U. Sainte-Justine External Cephalic Version Database. Baseline clinical characteristics were compared among women with and without a history of previous cesarean section. We used logistic regression analysis to evaluate the effect of previous cesarean section on the success of external cephalic version while adjusting for parity, maternal body mass index, gestational age, estimated fetal weight, and amniotic fluid index. Over a 15-year period, 1425 external cephalic versions were attempted, of which 36 (2.5%) were performed on women with a previous cesarean section. Although women with a history of previous cesarean section were more likely to be older and para >2 (38.93% vs. 15.0%), there was no difference in gestational age, estimated fetal weight, or amniotic fluid index. Women with a prior cesarean section had a success rate similar to that of women without [50.0% vs. 51.6%, adjusted OR: 1.31 (0.48-3.59)]. Women with a previous cesarean section who undergo an external cephalic version have success rates similar to those of women without. Concern about procedural success in women with a previous cesarean section is unwarranted and should not deter attempting an external cephalic version.

  1. Wecpos - Wave Energy Coastal Protection Oscillating System: A Numerical Assessment

    Science.gov (United States)

    Dentale, Fabio; Pugliese Carratelli, Eugenio; Rzzo, Gianfranco; Arsie, Ivan; Davide Russo, Salvatore

    2010-05-01

    In recent years, interest in developing new technologies to produce energy with low environmental impact from renewable sources has grown exponentially all over the world. In this context, the efforts made to derive electricity from the sea (currents, waves, etc.) are of particular interest. At the moment, given the many experiments completed or still in progress, it is hardly possible to summarize all that has been obtained, but it is worth mentioning the EMEC, which catalogues the major projects in the world. Another important environmental aspect, also related to the maritime field, is the protection of coasts from sea waves. In this field too, structural and non-structural solutions that can counteract this phenomenon have been analyzed for many years, with the aim of causing the least possible damage to the environment. The studies under development by researchers at the University of Salerno address these two aspects. Considering the technologies currently available, a submerged system, WECPOS (Wave Energy Coastal Protection Oscillating System), has been designed to be located at relatively shallow depths so that it can be used simultaneously for electricity generation and coastal protection, exploiting the oscillating motion of the water particles. The single element constituting the system consists of a fixed base and three movable panels that can oscillate within a fixed angle. The waves interact with the panels, generating an alternating motion which can be exploited to produce electricity. At the same time, the constrained rotation of the panels acts as a barrier to wave propagation, triggering breaking on the downstream side of the device. The wave energy is thus dissipated, with a positive effect on coastal protection. Currently, the efficiency and effectiveness of the system (a single WECPOS module) has been studied by using numerical models. Using the FLOW-3D

  2. Evaluation of an interdisciplinary re-isolation policy for patients with previous Clostridium difficile diarrhea.

    Science.gov (United States)

    Boone, N; Eagan, J A; Gillern, P; Armstrong, D; Sepkowitz, K A

    1998-12-01

    Diarrhea caused by Clostridium difficile is increasingly recognized as a nosocomial problem. The effectiveness and cost of a new program to decrease nosocomial spread, by identifying patients scheduled for readmission who were previously positive for toxin, were evaluated. The Memorial Sloan-Kettering Cancer Center is a 410-bed comprehensive cancer center in New York City. Many patients are readmitted during their course of cancer therapy. In 1995, as a result of concern about the nosocomial spread of C difficile, we implemented a policy that all patients who were positive for C difficile toxin in the previous 6 months, with no subsequent toxin-negative stool as an outpatient, would be placed into contact isolation on readmission pending evaluation of stool specimens. Patients who were previously positive for C difficile toxin were flagged in the infection control and admitting office databases via computer. Admitting personnel contacted infection control for all readmissions to determine whether a private room was required. Between July 1, 1995, and June 30, 1996, 47 patients who were previously positive for C difficile toxin were readmitted. Before their first scheduled readmission, the specimens for 15 (32%) of these patients were negative for C difficile toxin. They were subsequently cleared as outpatients and were readmitted without isolation. Workup of the remaining 32 patients revealed that the specimens for 7 patients were positive for C difficile toxin, and 86 isolation days were used. An additional 25 patients used 107 isolation days and were either cleared after a negative specimen was obtained in-house or discharged without having an appropriate specimen sent. Four patients (9%) had recurrent C difficile after having toxin-negative stools. We estimate (because outpatient specimens were not collected) the cost incurred at $48,500 annually, including the incremental cost of hospital isolation and equipment. Our policy to control the spread of nosocomial C

  3. Previously Unrecognized Ornithuromorph Bird Diversity in the Early Cretaceous Changma Basin, Gansu Province, Northwestern China

    Science.gov (United States)

    Wang, Ya-Ming; O'Connor, Jingmai K.; Li, Da-Qing; You, Hai-Lu

    2013-01-01

    Here we report on three new species of ornithuromorph birds from the Lower Cretaceous Xiagou Formation in the Changma Basin of Gansu Province, northwestern China: Yumenornis huangi gen. et sp. nov., Changmaornis houi gen. et sp. nov., and Jiuquanornis niui gen. et sp. nov. The last of these is based on a previously published but unnamed specimen: GSGM-05-CM-021. Although incomplete, the specimens can be clearly distinguished from each other and from Gansus yumenensis Hou and Liu, 1984. Phylogenetic analysis resolves the three new taxa as basal ornithuromorphs. This study reveals previously unrecognized ornithuromorph diversity in the Changma avifauna, which is largely dominated by Gansus but with at least three other ornithuromorphs. Body mass estimates demonstrate that enantiornithines were much smaller than ornithuromorphs in the Changma avifauna. In addition, Changma enantiornithines preserve long and recurved pedal unguals, suggesting an arboreal lifestyle; in contrast, Changma ornithuromorphs tend to show terrestrial or even aquatic adaptations. Similar differences in body mass and ecology are also observed in the Jehol avifauna in northeastern China, suggesting that niche partitioning between these two clades developed early in their evolutionary history. PMID:24147058

  4. Moving Horizon Estimation and Control

    DEFF Research Database (Denmark)

    Jørgensen, John Bagterp

    successful and applied methodology beyond PID-control for control of industrial processes. The main contribution of this thesis is introduction and definition of the extended linear quadratic optimal control problem for solution of numerical problems arising in moving horizon estimation and control...... problems. Chapter 1 motivates moving horizon estimation and control as a paradigm for control of industrial processes. It introduces the extended linear quadratic control problem and discusses its central role in moving horizon estimation and control. Introduction, application and efficient solution....... It provides an algorithm for computation of the maximal output admissible set for linear model predictive control. Appendix D provides results concerning linear regression. Appendix E discuss prediction error methods for identification of linear models tailored for model predictive control....

  5. Bin mode estimation methods for Compton camera imaging

    International Nuclear Information System (INIS)

    Ikeda, S.; Odaka, H.; Uemura, M.; Takahashi, T.; Watanabe, S.; Takeda, S.

    2014-01-01

    We study the image reconstruction problem of a Compton camera which consists of semiconductor detectors. The image reconstruction is formulated as a statistical estimation problem. We employ a bin-mode estimation (BME) and extend an existing framework to a Compton camera with multiple scatterers and absorbers. Two estimation algorithms are proposed: an accelerated EM algorithm for the maximum likelihood estimation (MLE) and a modified EM algorithm for the maximum a posteriori (MAP) estimation. Numerical simulations demonstrate the potential of the proposed methods
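
    For orientation, the classical MLEM update for a linear counting model y ~ Poisson(Aλ) is shown below; this is the generic baseline only, not the bin-mode estimator or the accelerated/modified EM algorithms of the record:

```python
import numpy as np

def mlem(A, y, n_iter=50, eps=1e-12):
    """Standard MLEM iterations for y ~ Poisson(A @ lam).
    A: (n_bins, n_pixels) system matrix, y: measured counts per bin."""
    n_pix = A.shape[1]
    lam = np.ones(n_pix)                      # flat initial image
    sens = A.sum(axis=0) + eps                # sensitivity image (column sums)
    for _ in range(n_iter):
        forward = A @ lam + eps               # expected counts in each bin
        lam *= (A.T @ (y / forward)) / sens   # multiplicative EM update
    return lam
```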

  6. Current Status of the IAU Working Group for Numerical Standards of Fundamental Astronomy

    National Research Council Canada - National Science Library

    Luzum, B; Capitaine, N; Fienga, A; Folkner, W; Fukushima, T; Hilton, J; Hohenkerk, C; Krasinsky, G; Petit, G; Pitjeva, E; Soffel, M; Wallace, P

    2007-01-01

    ...) for Numerical Standards of Fundamental Astronomy. The goals of the WG are to update "IAU Current Best Estimates" conforming with IAU Resolutions, the International Earth Rotation and Reference System Service (IERS...

  7. Wave Transformation Over Reefs: Evaluation of One-Dimensional Numerical Models

    National Research Council Canada - National Science Library

    Demirbilek, Zeki; Nwogu, Okey G; Ward, Donald L; Sanchez, Alejandro

    2009-01-01

    Three one-dimensional (1D) numerical wave models are evaluated for wave transformation over reefs and estimates of wave setup, runup, and ponding levels in an island setting where the beach is fronted by fringing reef and lagoons...

  8. [A brief history of resuscitation - the influence of previous experience on modern techniques and methods].

    Science.gov (United States)

    Kucmin, Tomasz; Płowaś-Goral, Małgorzata; Nogalski, Adam

    2015-02-01

    Cardiopulmonary resuscitation (CPR) is a relatively novel branch of medical science; however, the first descriptions of mouth-to-mouth ventilation are to be found in the Bible, and the literature is full of descriptions of different resuscitation methods, from flagellation and ventilation with bellows, through hanging victims upside down and compressing the chest in order to stimulate ventilation, to rectal fumigation with tobacco smoke. The modern history of CPR starts with Kouwenhoven et al., who in 1960 published a paper on heart massage through chest compressions. Shortly after that, in 1961, Peter Safar presented a paradigm promoting opening the airway, performing rescue breaths and chest compressions. The first CPR guidelines were published in 1966. Since that time the guidelines have been modified and improved numerous times by the two leading world expert organizations, the ERC (European Resuscitation Council) and the AHA (American Heart Association), and published in a new version every 5 years; at the time of writing, the 2010 guidelines were the ones in force. In this paper the authors attempt to present the history of the development of resuscitation techniques and methods and to assess the influence of earlier lifesaving methods on present-day technologies, equipment and guidelines, which make it possible to help those women and men whose lives are in danger due to sudden cardiac arrest. © 2015 MEDPRESS.

  9. Estimating the time evolution of NMR systems via a quantum-speed-limit-like expression

    Science.gov (United States)

    Villamizar, D. V.; Duzzioni, E. I.; Leal, A. C. S.; Auccaise, R.

    2018-05-01

    Finding the solutions of the equations that describe the dynamics of a given physical system is crucial for obtaining important information about its evolution. However, by using estimation theory, it is possible to obtain, under certain limitations, some information on its dynamics. The quantum-speed-limit (QSL) theory was originally used to estimate the shortest time in which a Hamiltonian drives an initial state to a final one for a given fidelity. Using the QSL theory in a slightly different way, we are able to estimate the running time of a given quantum process. For that purpose, we impose the saturation of the Anandan-Aharonov bound in a rotating frame of reference where the state of the system travels more slowly than in the original (laboratory) frame. Through this procedure it is possible to estimate the actual evolution time in the laboratory frame of reference with good accuracy when compared to previous methods. Our method is tested successfully in predicting the time spent in the evolution of nuclear spins 1/2 and 3/2 in NMR systems. We find that the time estimated with our method is more accurate than that of previous approaches by up to four orders of magnitude. One disadvantage of our method is that we need to solve a number of transcendental equations, which increases with the system dimension and the parameter discretization used to solve such equations numerically.
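
    For context, the Anandan-Aharonov bound invoked above can be written in its standard form (quoted from the general QSL literature, not from this record), where $\overline{\Delta E}$ denotes the time-averaged energy uncertainty:

$$
\frac{2}{\hbar}\int_{0}^{T}\Delta E(t)\,\mathrm{d}t \;\ge\; 2\arccos\!\bigl(|\langle\psi(0)|\psi(T)\rangle|\bigr)
\quad\Longrightarrow\quad
T \;\ge\; \frac{\hbar\,\arccos\!\bigl(|\langle\psi(0)|\psi(T)\rangle|\bigr)}{\overline{\Delta E}}.
$$

    Saturating this bound in a suitably chosen rotating frame is what allows the running time in the laboratory frame to be estimated.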

  10. Estimation of Anaerobic Debromination Rate Constants of PBDE Pathways Using an Anaerobic Dehalogenation Model.

    Science.gov (United States)

    Karakas, Filiz; Imamoglu, Ipek

    2017-04-01

    This study aims to estimate anaerobic debromination rate constants (k_m) of PBDE pathways using previously reported laboratory soil data. The k_m values of the pathways are estimated by modifying a previously developed model, the Anaerobic Dehalogenation Model. Debromination activities published in the literature in terms of bromine substitutions, as well as specific microorganisms and their combinations, are used for the identification of pathways. The range of the estimated k_m values is between 0.0003 and 0.0241 d^-1. The median and maximum k_m values are found to be comparable to the few available biologically confirmed rate constants published in the literature. The estimated k_m values can be used as input to numerical fate and transport models for a better and more detailed investigation of the fate of individual PBDEs in contaminated sediments. Various remediation scenarios, such as monitored natural attenuation or bioremediation with bioaugmentation, can be handled in a more quantitative manner with the help of the k_m values estimated in this study.
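
    If each debromination pathway is treated as first-order, dC/dt = -k_m C (a common modeling assumption stated here for illustration, not a description of the Anaerobic Dehalogenation Model's internals), k_m can be recovered from a concentration time series by a log-linear fit:

```python
import numpy as np

def first_order_rate_constant(t_days, conc):
    """Estimate a first-order rate constant k (d^-1) from concentrations
    via ln(C) = ln(C0) - k*t. Assumes a single decaying congener."""
    slope, _ = np.polyfit(t_days, np.log(conc), 1)
    return -slope

# Hypothetical time series for one congener (days, arbitrary concentration units)
t = np.array([0, 30, 60, 120, 240], dtype=float)
c = np.array([10.0, 9.3, 8.7, 7.5, 5.7])
print(f"k_m ~ {first_order_rate_constant(t, c):.4f} d^-1")
```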

  11. Toward an enhanced Bayesian estimation framework for multiphase flow soft-sensing

    International Nuclear Information System (INIS)

    Luo, Xiaodong; Lorentzen, Rolf J; Stordal, Andreas S; Nævdal, Geir

    2014-01-01

    In this work the authors study the multiphase flow soft-sensing problem based on a previously established framework. There are three functional modules in this framework, namely, a transient well flow model that describes the response of certain physical variables in a well, for instance, temperature, velocity and pressure, to the flow rates entering and leaving the well zones; a Markov jump process that is designed to capture the potential abrupt changes in the flow rates; and an estimation method that is adopted to estimate the underlying flow rates based on the measurements from the physical sensors installed in the well. In the previous studies, the variances of the flow rates in the Markov jump process are chosen manually. To fill this gap, in the current work two automatic approaches are proposed in order to optimize the variance estimation. Through a numerical example, we show that, when the estimation framework is used in conjunction with these two proposed variance-estimation approaches, it can achieve reasonable performance in terms of matching both the measurements of the physical sensors and the true underlying flow rates. (paper)

  12. M-estimator for the 3D symmetric Helmert coordinate transformation

    Science.gov (United States)

    Chang, Guobin; Xu, Tianhe; Wang, Qianxin

    2018-01-01

    The M-estimator for the 3D symmetric Helmert coordinate transformation problem is developed. The small-angle rotation assumption is abandoned. The direction cosine matrix or the quaternion is used to represent the rotation. A 3 × 1 multiplicative error vector is defined to represent the rotation estimation error. An analytical solution can be employed to provide the initial approximation for the iteration if the outliers are not large. The iteration is carried out using the iteratively reweighted least-squares scheme. In each iteration after the first one, the measurement equation is linearized using the available parameter estimates, the reweighting matrix is constructed using the residuals obtained in the previous iteration, and then the parameter estimates with their variance-covariance matrix are calculated. The influence functions of a single pseudo-measurement on the least-squares estimator and on the M-estimator are derived to show the robustness theoretically. In the solution process, the parameters are rescaled in order to improve numerical stability. Monte Carlo experiments are conducted to check the developed method. Different cases are considered to investigate whether the assumed stochastic model is correct. The results with simulated data slightly deviating from the true model are used to show the developed method's statistical efficiency at the assumed stochastic model, its robustness against deviations from the assumed stochastic model, and the validity of the estimated variance-covariance matrix whether or not the assumed stochastic model is correct.
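
    The reweighting loop described above can be sketched generically as iteratively reweighted least squares with Huber weights; the sketch below omits the quaternion-based linearization of the Helmert model and uses illustrative names throughout:

```python
import numpy as np

def huber_weights(residuals, k=1.345):
    """Huber weight function applied to standardized residuals."""
    r = np.abs(residuals)
    w = np.ones_like(r)
    mask = r > k
    w[mask] = k / r[mask]
    return w

def irls(A, y, n_iter=20, tol=1e-10):
    """Iteratively reweighted least squares for y ~ A x with Huber weights."""
    x = np.linalg.lstsq(A, y, rcond=None)[0]             # ordinary LS start
    for _ in range(n_iter):
        res = y - A @ x
        scale = 1.4826 * np.median(np.abs(res)) + 1e-15  # robust scale (MAD)
        W = np.diag(huber_weights(res / scale))
        x_new = np.linalg.solve(A.T @ W @ A, A.T @ W @ y)
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x
```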

  13. Numerical soliton-like solutions of the potential Kadomtsev-Petviashvili equation by the decomposition method

    International Nuclear Information System (INIS)

    Kaya, Dogan; El-Sayed, Salah M.

    2003-01-01

    In this Letter we present an Adomian decomposition method (ADM for short) for obtaining numerical soliton-like solutions of the potential Kadomtsev-Petviashvili (PKP for short) equation. We prove the convergence of the ADM. We obtain exact and numerical solitary-wave solutions of the PKP equation for certain initial conditions. The ADM yields an analytic approximate solution with a fast convergence rate and high accuracy, consistent with previous works. The numerical solutions are compared with the known analytical solutions
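
    For reference, the general ADM recursion has the standard form below (the PKP-specific operators and Adomian polynomials are not reproduced here): writing the equation as Lu + Ru + Nu = g, with L an easily invertible linear operator, R the remaining linear part and N the nonlinearity,

$$
u=\sum_{n=0}^{\infty}u_n,\qquad Nu=\sum_{n=0}^{\infty}A_n,\qquad
u_0=\Phi+L^{-1}g,\qquad u_{n+1}=-L^{-1}\bigl(Ru_n+A_n\bigr),\quad n\ge 0,
$$

    where Φ collects the initial/boundary data and the A_n are the Adomian polynomials of N.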

  14. Estimation of Conditional Quantile using Neural Networks

    DEFF Research Database (Denmark)

    Kulczycki, P.; Schiøler, Henrik

    1999-01-01

    The problem of estimating conditional quantiles using neural networks is investigated here. A basic structure is developed using the methodology of kernel estimation, and a theory guaranteeing consistency under a mild set of assumptions is provided. The constructed structure constitutes a basis...... for the design of a variety of different neural networks, some of which are considered in detail. The task of estimating conditional quantiles is related to Bayes point estimation, whereby a broad range of applications within engineering, economics and management can be suggested. Numerical results illustrating...... the capabilities of the elaborated neural network are also given....
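
    A standard training criterion for a τ-quantile predictor (shown as a generic illustration; the record's kernel-based construction is not reproduced here) is the pinball loss:

```python
import numpy as np

def pinball_loss(y_true, y_pred, tau):
    """Pinball (tilted absolute) loss, whose minimizer is the conditional tau-quantile."""
    diff = y_true - y_pred
    return np.mean(np.maximum(tau * diff, (tau - 1.0) * diff))

# Hypothetical check: for the 0.9-quantile, under-prediction is penalized more heavily.
print(pinball_loss(np.array([1.0]), np.array([0.0]), tau=0.9))  # 0.9
print(pinball_loss(np.array([0.0]), np.array([1.0]), tau=0.9))  # 0.1
```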

  15. Numerical fatigue analysis of premolars restored by CAD/CAM ceramic crowns.

    Science.gov (United States)

    Homaei, Ehsan; Jin, Xiao-Zhuang; Pow, Edmond Ho Nang; Matinlinna, Jukka Pekka; Tsoi, James Kit-Hon; Farhangdoost, Khalil

    2018-04-10

    The purpose of this study was to estimate the fatigue life of premolars restored with two dental ceramics, lithium disilicate (LD) and polymer-infiltrated ceramic (PIC), using a numerical method, and to compare it with published in vitro data. A premolar restored with a full-coverage crown was digitized. The volumetric shapes of the tooth tissues and crowns were created in Mimics®. They were transferred to IA-FEMesh for mesh generation, and the model was analyzed with Abaqus. By combining the stress distribution results with the fatigue stress-life (S-N) approach, the lifetime of the restored premolars was predicted. The predicted lifetime was 1,231,318 cycles for LD with a fatigue load of 1400 N, while that for PIC was 475,063 cycles with a load of 870 N. The peak value of the maximum principal stress occurred at the contact area (LD: 172 MPa and PIC: 96 MPa) and the central fossa (LD: 100 MPa and PIC: 64 MPa) for both ceramics, which were the failure areas most often seen in the experiments. In the adhesive layer, the maximum shear stress was observed at the shoulder area (LD: 53.6 MPa and PIC: 29 MPa). The fatigue life and failure modes of the all-ceramic crowns determined by the numerical method seem to correlate well with the previous experimental study. Copyright © 2018 The Academy of Dental Materials. Published by Elsevier Inc. All rights reserved.
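
    The record does not reproduce the S-N curve used; as a hedged illustration only, with a Basquin-type relation σ_a = A·N^b and placeholder coefficients (not LD or PIC material data), a computed stress amplitude maps to a cycle count as follows:

```python
def basquin_life(stress_amplitude_mpa, A=900.0, b=-0.12):
    """Cycles to failure from a Basquin S-N curve sigma_a = A * N**b.
    A and b are placeholder coefficients, not fitted ceramic data."""
    return (stress_amplitude_mpa / A) ** (1.0 / b)

# Hypothetical use with a peak maximum principal stress from an FE model:
print(f"{basquin_life(172.0):.3e} cycles")
```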

  16. Reliability of Estimation Pile Load Capacity Methods

    Directory of Open Access Journals (Sweden)

    Yudhi Lastiasih

    2014-04-01

    For none of the numerous previous methods for predicting pile capacity is it known how accurate it is when compared with the actual ultimate capacity of piles tested to failure. The authors of the present paper have conducted such an analysis, based on 130 data sets of field loading tests. Out of these 130 data sets, only 44 could be analysed, of which 15 came from tests conducted until the piles actually reached failure. The pile prediction methods used were: Brinch Hansen's method (1963), Chin's method (1970), Decourt's extrapolation method (1999), Mazurkiewicz's method (1972), Van der Veen's method (1953), and the quadratic hyperbolic method proposed by Lastiasih et al. (2012). It was found that all the above methods were sufficiently reliable when applied to data from pile loading tests loaded until failure. However, when applied to data from pile loading tests that did not reach failure, the methods that yielded lower values of the correction factor N are recommended. Finally, the empirical method of Reese and O'Neill (1988) was found to be reliable enough to estimate the Qult of a pile foundation based on soil data only.
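
    To make one of the extrapolation methods concrete, Chin's (1970) method fits settlement/load against settlement as a straight line, s/Q = C1·s + C2, and takes Qult = 1/C1; the sketch below uses hypothetical load-test data, not values from the paper's database:

```python
import numpy as np

def chin_ultimate_capacity(settlement_mm, load_kn):
    """Chin (1970) hyperbolic extrapolation: fit s/Q = C1*s + C2, then Qult = 1/C1."""
    s = np.asarray(settlement_mm, dtype=float)
    q = np.asarray(load_kn, dtype=float)
    c1, c2 = np.polyfit(s, s / q, 1)
    return 1.0 / c1

# Hypothetical maintained-load test data (settlement in mm, load in kN)
settlement = [1.2, 2.5, 4.1, 6.3, 9.0, 12.5]
load       = [400, 700, 950, 1150, 1300, 1400]
print(f"Qult ~ {chin_ultimate_capacity(settlement, load):.0f} kN")
```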

  17. The Hindu-Arabic numerals

    CERN Document Server

    Smith, David Eugene

    1911-01-01

    The numbers that we call Arabic are so familiar throughout Europe and the Americas that it can be difficult to realize that their general acceptance in commercial transactions is a matter of only the last four centuries and they still remain unknown in parts of the world.In this volume, one of the earliest texts to trace the origin and development of our number system, two distinguished mathematicians collaborated to bring together many fragmentary narrations to produce a concise history of Hindu-Arabic numerals. Clearly and succinctly, they recount the labors of scholars who have studied the

  18. Radiation transport in numerical astrophysics

    International Nuclear Information System (INIS)

    Lund, C.M.

    1983-02-01

    In this article, we discuss some of the numerical techniques developed by Jim Wilson and co-workers for the calculation of time-dependent radiation flow. Difference equations for multifrequency transport are given for both a discrete-angle representation of radiation transport and a Fick's law-like representation. These methods have the important property that they correctly describe both the streaming and diffusion limits of transport theory in problems where the mean free path divided by characteristic distances varies from much less than one to much greater than one. They are also stable for timesteps comparable to the changes in physical variables, rather than being limited by stability requirements

  19. Odelouca Dam Construction: Numerical Analysis

    OpenAIRE

    Brito, A.; Maranha, J. R.; Caldeira, L.

    2012-01-01

    Odelouca dam is an embankment dam, with 76 m height, recently constructed in the south of Portugal. It is zoned, with a core consisting of colluvial and residual schist soil and with soil-rockfill mixtures making up the shells (weathered schist with a significant fraction of coarse-sized particles). This paper presents a numerical analysis of Odelouca Dam's construction. The material constants of the soil model used are determined from a comprehensive testing programme carried out in the C...

  20. On numerically pluricanonical cyclic coverings

    International Nuclear Information System (INIS)

    Kulikov, V S; Kharlamov, V M

    2014-01-01

    We investigate some properties of cyclic coverings f:Y→X (where X is a complex surface of general type) branched along smooth curves B⊂X that are numerically equivalent to a multiple of the canonical class of X. Our main results concern coverings of surfaces of general type with p_g = 0 and Miyaoka-Yau surfaces. In particular, such coverings provide new examples of multi-component moduli spaces of surfaces with given Chern numbers and new examples of surfaces that are not deformation equivalent to their complex conjugates