WorldWideScience

Sample records for methods produce errors

  1. Discretization vs. Rounding Error in Euler's Method

    Science.gov (United States)

    Borges, Carlos F.

    2011-01-01

    Euler's method for solving initial value problems is an excellent vehicle for observing the relationship between discretization error and rounding error in numerical computation. Reductions in stepsize, in order to decrease discretization error, necessarily increase the number of steps and so introduce additional rounding error. The problem is…
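
    The trade-off described here is easy to reproduce. Below is a minimal sketch (not from the article): Euler's method applied to y' = -y in single precision, so that shrinking the stepsize h first reduces the discretization error and then lets accumulated rounding error dominate.

    ```python
    import numpy as np

    def euler_error(h, t_end=1.0, dtype=np.float32):
        """Global error at t_end for y' = -y, y(0) = 1 (exact: exp(-t))."""
        n = int(round(t_end / h))
        y = dtype(1.0)
        hh = dtype(h)
        for _ in range(n):
            y = y + hh * (-y)      # one Euler step, all in single precision
        return abs(float(y) - np.exp(-t_end))

    for h in [1e-1, 1e-2, 1e-3, 1e-4, 1e-5]:
        print(f"h = {h:.0e}   error = {euler_error(h):.3e}")
    # Typical pattern: the error first falls roughly linearly in h
    # (discretization error), then rises again as the larger number of
    # steps accumulates more rounding error.
    ```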

  2. Methods for producing diterpenes

    DEFF Research Database (Denmark)

    2015-01-01

    The present invention discloses that by combining different diTPS enzymes of class I and class II, different diterpenes may be produced, including diterpenes not identified in nature. Surprisingly, it is revealed that a diTPS enzyme of class I of one species may be combined with a diTPS enzyme of class II from a different species, resulting in a high diversity of diterpenes which can be produced.

  3. Approximate error conjugation gradient minimization methods

    Science.gov (United States)

    Kallman, Jeffrey S

    2013-05-21

    In one embodiment, a method includes selecting a subset of rays from a set of all rays to use in an error calculation for a constrained conjugate gradient minimization problem, calculating an approximate error using the subset of rays, and calculating a minimum in a conjugate gradient direction based on the approximate error. In another embodiment, a system includes a processor for executing logic, logic for selecting a subset of rays from a set of all rays to use in an error calculation for a constrained conjugate gradient minimization problem, logic for calculating an approximate error using the subset of rays, and logic for calculating a minimum in a conjugate gradient direction based on the approximate error. In other embodiments, computer program products, methods, and systems are described capable of using approximate error in constrained conjugate gradient minimization problems.
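
    As a rough illustration of the idea (a sketch, not the patented implementation; the matrix, sizes, and names are invented), the error of a ray-based least-squares problem can be estimated from a random subset of rays and rescaled:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_rays, n_vox = 2000, 100
    A = rng.normal(size=(n_rays, n_vox))     # toy ray-projection matrix
    x_true = rng.normal(size=n_vox)
    b = A @ x_true                           # measured ray sums

    def approx_error(x, frac=0.1):
        """Estimate ||Ax - b||^2 from a random 10% subset of the rays."""
        idx = rng.choice(n_rays, size=int(frac * n_rays), replace=False)
        r = A[idx] @ x - b[idx]
        return (r @ r) * n_rays / idx.size   # rescale subset sum to the full set

    x = np.zeros(n_vox)
    print(f"full error  : {np.sum((A @ x - b) ** 2):.1f}")
    print(f"subset est. : {approx_error(x):.1f}")
    # In a scheme like the one described above, this cheap estimate stands in
    # for the full-error evaluation inside each conjugate-gradient line search.
    ```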

  4. A straightness error measurement method matched new generation GPS

    International Nuclear Information System (INIS)

    Zhang, X B; Lu, H; Jiang, X Q; Li, Z

    2005-01-01

    The axis of the non-diffracting beam produced by an axicon is very stable and can be adopted as the datum line to measure spatial straightness error over a continuous working distance, which may be short, medium or long. By combining the non-diffracting beam datum-line with an LVDT displacement detector, a new straightness error measurement method is developed. Because the non-diffracting beam datum-line corrects the straightness error gauged by the LVDT, the straightness error is reliable and the method matches the new generation GPS.

  5. A method of producing hydroxymethylfurfural

    DEFF Research Database (Denmark)

    2011-01-01

    The present invention relates to a method of producing 5-hydroxymethylfurfural by dehydration of fructose and/or glucose and/or mannose.

  6. Method of producing molybdenum-99

    Science.gov (United States)

    Pitcher, Eric John

    2013-05-28

    Method of producing molybdenum-99, comprising accelerating ions by means of an accelerator; directing the ions onto a metal target so as to generate neutrons having an energy of greater than 10 MeV; directing the neutrons through a converter material comprising technetium-99 to produce a mixture comprising molybdenum-99; and chemically extracting the molybdenum-99 from the mixture.

  7. Method for producing redox shuttles

    Science.gov (United States)

    Pupek, Krzysztof Z.; Dzwiniel, Trevor L.; Krumdick, Gregory K.

    2015-03-03

    A single step method for producing a redox shuttle having the formula 2,5-di-tert-butyl-1,4-phenylene tetraethyl bis(phosphate) is provided, the method comprising phosphorylating tert-butyl hydroquinone with a phosphate-containing reagent. Also provided is a method for producing 2,5-di-tert-butyl-1,4-phenylene tetraethyl bis(phosphate), the method comprising solubilizing tert-butyl hydroquinone and tetrabutylammonium bromide with methyltetrahydrofuran to create a mixture; heating the mixture while adding base to the mixture in an amount to turn the mixture orange; and adding diethyl chlorophosphate to the orange mixture in an amount to phosphorylate the hydroquinone.

  8. Methods of producing cesium-131

    Science.gov (United States)

    Meikrantz, David H; Snyder, John R

    2012-09-18

    Methods of producing cesium-131. The method comprises dissolving at least one non-irradiated barium source in water or a nitric acid solution to produce a barium target solution. The barium target solution is irradiated with neutron radiation to produce cesium-131, which is removed from the barium target solution. The cesium-131 is complexed with a calixarene compound to separate the cesium-131 from the barium target solution. A liquid:liquid extraction device or extraction column is used to separate the cesium-131 from the barium target solution.

  9. Methods of producing transportation fuel

    Science.gov (United States)

    Nair, Vijay [Katy, TX]; Roes, Augustinus Wilhelmus Maria [Houston, TX]; Cherrillo, Ralph Anthony [Houston, TX]; Bauldreay, Joanna M [Chester, GB]

    2011-12-27

    Systems, methods, and heaters for treating a subsurface formation are described herein. At least one method for producing transportation fuel is described herein. The method for producing transportation fuel may include providing formation fluid having a boiling range distribution between -5 °C and 350 °C from a subsurface in situ heat treatment process to a subsurface treatment facility. A liquid stream may be separated from the formation fluid. The separated liquid stream may be hydrotreated and then distilled to produce a distilled stream having a boiling range distribution between 150 °C and 350 °C. The distilled liquid stream may be combined with one or more additives to produce transportation fuel.

  10. Error Estimation and Accuracy Improvements in Nodal Transport Methods

    International Nuclear Information System (INIS)

    Zamonsky, O.M.

    2000-01-01

    The accuracy of the solutions produced by the Discrete Ordinates neutron transport nodal methods is analyzed. The new numerical methodologies obtained increase the accuracy of the analyzed schemes and give a posteriori error estimators. The accuracy improvement is obtained with new equations that make the numerical procedure free of truncation errors, and by proposing spatial reconstructions of the angular fluxes that are more accurate than those used until now. An a posteriori error estimator is rigorously obtained for one-dimensional systems that, in certain types of problems, allows the accuracy of the solutions to be quantified. From comparisons with the one-dimensional results, an a posteriori error estimator is also obtained for multidimensional systems. Local indicators, which quantify the spatial distribution of the errors, are obtained by decomposition of the mentioned estimators. This makes the proposed methodology suitable for performing adaptive calculations. Some numerical examples are presented to validate the theoretical developments and to illustrate the ranges where the proposed approximations are valid.

  11. Method of producing grouting mortar

    Energy Technology Data Exchange (ETDEWEB)

    Shelomov, I K; Alchina, S I; Dizer, E I; Gruzdeva, G A; Nikitinskii, V I; Sabirzyanov, A K

    1980-10-07

    A method of producing grouting mortar by mixing the cement with an aqueous salt solution is proposed. To increase the quality of the mortar through accelerated hardening, the mixture is prepared in two stages: in the first, 20-30% of the entire cement batch hardens, and in the second the remainder of the cement hardens; a 1-3% aqueous salt solution is used in quantities of 0.5-1 wt.% of the weight of the cement. The use of this method of producing grouting mortar helps to increase the flexural strength of the cement brick by up to 50% after two days of ageing, by comparison with the strength of cement brick produced from grouting mortar by ordinary methods utilizing identical quantities of the initial components (cement, water, chloride).

  12. Grammatical Errors Produced by English Majors: The Translation Task

    Science.gov (United States)

    Mohaghegh, Hamid; Zarandi, Fatemeh Mahmoudi; Shariati, Mohammad

    2011-01-01

    This study investigated the frequency of the grammatical errors related to the four categories of preposition, relative pronoun, article, and tense using the translation task. In addition, the frequencies of these grammatical errors in different categories and in each category were examined. The quantitative component of the study further looked…

  13. Method for producing metallic nanoparticles

    Science.gov (United States)

    Phillips, Jonathan; Perry, William L.; Kroenke, William J.

    2004-02-10

    Method for producing metallic nanoparticles. The method includes generating an aerosol of solid metallic microparticles, generating non-oxidizing plasma with a plasma hot zone at a temperature sufficiently high to vaporize the microparticles into metal vapor, and directing the aerosol into the hot zone of the plasma. The microparticles vaporize in the hot zone to metal vapor. The metal vapor is directed away from the hot zone and to the plasma afterglow where it cools and condenses to form solid metallic nanoparticles.

  14. Method for producing metallic microparticles

    Science.gov (United States)

    Phillips, Jonathan; Perry, William L.; Kroenke, William J.

    2004-06-29

    Method for producing metallic particles. The method converts metallic nanoparticles into larger, spherical metallic particles. An aerosol of solid metallic nanoparticles and a non-oxidizing plasma having a portion sufficiently hot to melt the nanoparticles are generated. The aerosol is directed into the plasma where the metallic nanoparticles melt, collide, join, and spheroidize. The molten spherical metallic particles are directed away from the plasma and enter the afterglow where they cool and solidify.

  15. Error Parsing: An alternative method of implementing social judgment theory

    OpenAIRE

    Crystal C. Hall; Daniel M. Oppenheimer

    2015-01-01

    We present a novel method of judgment analysis called Error Parsing, based upon an alternative method of implementing Social Judgment Theory (SJT). SJT and Error Parsing both posit the same three components of error in human judgment: error due to noise, error due to cue weighting, and error due to inconsistency. In that sense, the broad theory and framework are the same. However, SJT and Error Parsing were developed to answer different questions, and thus use different m...

  16. Quantifying geocode location error using GIS methods

    Directory of Open Access Journals (Sweden)

    Gardner Bennett R

    2007-04-01

    Background: The Metropolitan Atlanta Congenital Defects Program (MACDP) collects maternal address information at the time of delivery for infants and fetuses with birth defects. These addresses have been geocoded by two independent agencies: (1) the Georgia Division of Public Health Office of Health Information and Policy (OHIP) and (2) a commercial vendor. Geographic information system (GIS) methods were used to quantify uncertainty in the two sets of geocodes using orthoimagery and tax parcel datasets. Methods: We sampled 599 infants and fetuses with birth defects delivered during 1994–2002 with maternal residence in either Fulton or Gwinnett County. Tax parcel datasets were obtained from the tax assessors' offices of Fulton and Gwinnett County. High-resolution orthoimagery for these counties was acquired from the U.S. Geological Survey. For each of the 599 addresses we attempted to locate the tax parcel corresponding to the maternal address. If the tax parcel was identified, the distance and the angle between the geocode and the residence were calculated. We used simulated data to characterize the impact of geocode location error. In each county 5,000 geocodes were generated and assigned their corresponding Census 2000 tract. Each geocode was then displaced at a random angle by a random distance drawn from the distribution of observed geocode location errors. The census tract of the displaced geocode was determined. We repeated this process 5,000 times and report the percentage of geocodes that resolved into incorrect census tracts. Results: Median location error was less than 100 meters for both OHIP and commercial vendor geocodes; the distribution of angles appeared uniform. Median location error was approximately 35% larger in Gwinnett (a suburban county) relative to Fulton (a county with urban and suburban areas). Location error occasionally caused the simulated geocodes to be displaced into incorrect census tracts; the median percentage
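
    A toy version of the displacement simulation described here might look as follows (a sketch: square 1 km cells stand in for census tracts, and the lognormal error distribution is an assumption, not the study's observed distribution):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n = 5000
    cell = 1000.0                                     # "tract" cell size, meters
    pts = rng.uniform(0, 20000, size=(n, 2))          # simulated geocodes
    dist = rng.lognormal(np.log(80.0), 0.8, size=n)   # assumed location errors (m)
    ang = rng.uniform(0, 2 * np.pi, size=n)           # angles appeared uniform

    # Displace each geocode by its random angle and distance:
    moved = pts + np.column_stack((dist * np.cos(ang), dist * np.sin(ang)))
    wrong = np.any(np.floor(pts / cell) != np.floor(moved / cell), axis=1)
    print(f"{100 * wrong.mean():.1f}% of displaced geocodes fall in the wrong cell")
    ```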

  17. Method for producing carbon nanotubes

    Science.gov (United States)

    Phillips, Jonathan [Santa Fe, NM]; Perry, William L [Jemez Springs, NM]; Chen, Chun-Ku [Albuquerque, NM]

    2006-02-14

    Method for producing carbon nanotubes. Carbon nanotubes were prepared using a low power, atmospheric pressure, microwave-generated plasma torch system. After generating carbon monoxide microwave plasma, a flow of carbon monoxide was directed first through a bed of metal particles/glass beads and then along the outer surface of a ceramic tube located in the plasma. As a flow of argon was introduced into the plasma through the ceramic tube, ropes of entangled carbon nanotubes, attached to the surface of the tube, were produced. Of these, longer ropes formed on the surface portion of the tube located in the center of the plasma. Transmission electron micrographs of individual nanotubes revealed that many were single-walled.

  18. New decoding methods of interleaved burst error-correcting codes

    Science.gov (United States)

    Nakano, Y.; Kasahara, M.; Namekawa, T.

    1983-04-01

    A probabilistic method of single burst error correction, using the syndrome correlation of the subcodes which constitute the interleaved code, is presented. This method makes it possible to realize a high capability of burst error correction with less decoding delay. By generalizing this method it is possible to obtain a probabilistic method of multiple (m-fold) burst error correction. After estimating the burst error positions using the syndrome correlation of the subcodes, which are interleaved m-fold burst error detecting codes, this second method corrects erasure errors in each subcode and m-fold burst errors. The performance of these two methods is analyzed via computer simulation, and their effectiveness is demonstrated.

  19. Methods of producing luminescent images

    International Nuclear Information System (INIS)

    Broadhead, P.; Newman, G.A.

    1977-01-01

    A method is described for producing a luminescent image in a layer of a binding material in which a thermoluminescent material is dispersed. The layer is heated uniformly to a temperature of 80 to 300 °C and is exposed to luminescence-inducing radiation whilst so heated. The preferred exposing radiation is X-rays, and preferably the thermoluminescent material is insensitive to electromagnetic radiation of wavelength longer than 300 nm. Information concerning preparation of the luminescent material is given in BP 1,347,672; this material has the advantage that at elevated temperatures it shows increased sensitivity compared with room temperature. At temperatures in the range 80 to 150 °C the thermoluminescent material exhibits 'afterglow', allowing the image to persist for several seconds after the X-radiation has ceased, thus allowing the image to be retained for visual inspection in this temperature range. At higher temperatures, however, there is negligible 'afterglow'. The thermoluminescent layers so produced are particularly useful as fluoroscopic screens. The preferred method of heating the thermoluminescent material is described in BP 1,354,149. An example is given of the application of the method. (U.K.)

  20. Parietal lesions produce illusory conjunction errors in rats

    Directory of Open Access Journals (Sweden)

    Raymond Pierre Kesner

    2012-05-01

    When several different objects are presented, visual objects are perceived correctly only if their features are identified and then bound together. Illusory-conjunction errors result when an object is correctly identified but its features are combined incorrectly. The posterior parietal cortex (PPC) has been shown repeatedly to play an important role in feature binding. The present study builds on a series of recent studies that have made use of visual search paradigms to elucidate the neural system involved in feature binding. This experiment attempts to define the role the PPC plays in binding the properties of a visual object that varies on the features of color and size in rats. Rats with PPC lesions or control surgery were exposed to three blocks of 20 trials administered over a 1-week period, with each block containing ten one-feature and ten two-feature trials. The target object consisted of one color object (e.g., black and white) and one size object (e.g., short and tall). Of the ten one-feature trials, five were tailored specifically for size discrimination and five for color discrimination. In the two-feature condition, the animal was required to locate the targeted object among four objects, with two objects differing in size and two objects differing in color. The results showed a significant decrease in learning the task for the PPC lesioned rats compared to controls, especially for the two-feature condition. Based on a subsequent error analysis for color and size, the results showed a significant increase in illusory conjunction errors for the PPC lesioned rats relative to controls for color and relative to color discrimination, suggesting that the PPC may support feature binding as it relates to color. There was an increase in illusory conjunction errors for both the PPC lesioned and control animals for size, but this appeared to be due to difficulty with size discrimination.

  1. Detecting self-produced speech errors before and after articulation: An ERP investigation

    Directory of Open Access Journals (Sweden)

    Kevin Michael Trewartha

    2013-11-01

    It has been argued that speech production errors are monitored by the same neural system involved in monitoring other types of action errors. Behavioral evidence has shown that speech errors can be detected and corrected prior to articulation, yet the neural basis for such pre-articulatory speech error monitoring is poorly understood. The current study investigated speech error monitoring using a phoneme-substitution task known to elicit speech errors. Stimulus-locked event-related potential (ERP) analyses comparing correct and incorrect utterances were used to assess pre-articulatory error monitoring, and response-locked ERP analyses were used to assess post-articulatory monitoring. Our novel finding in the stimulus-locked analysis revealed that words that ultimately led to a speech error were associated with a larger P2 component at midline sites (FCz, Cz, and CPz). This early positivity may reflect the detection of an error in speech formulation, or a predictive mechanism to signal the potential for an upcoming speech error. The data also revealed that general conflict monitoring mechanisms are involved during this task, as both correct and incorrect responses elicited an anterior N2 component typically associated with conflict monitoring. The response-locked analyses corroborated previous observations that self-produced speech errors led to a fronto-central ERN. These results demonstrate that speech errors can be detected prior to articulation, and that speech error monitoring relies on a central error monitoring mechanism.

  2. Internal Error Propagation in Explicit Runge--Kutta Methods

    KAUST Repository

    Ketcheson, David I.

    2014-09-11

    In practical computation with Runge--Kutta methods, the stage equations are not satisfied exactly, due to roundoff errors, algebraic solver errors, and so forth. We show by example that propagation of such errors within a single step can have catastrophic effects for otherwise practical and well-known methods. We perform a general analysis of internal error propagation, emphasizing that it depends significantly on how the method is implemented. We show that for a fixed method, essentially any set of internal stability polynomials can be obtained by modifying the implementation details. We provide bounds on the internal error amplification constants for some classes of methods with many stages, including strong stability preserving methods and extrapolation methods. These results are used to prove error bounds in the presence of roundoff or other internal errors.
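
    A toy experiment in this spirit (a sketch, not the paper's analysis) is to inject a roundoff-sized perturbation into each stage of classical RK4 and watch how it shows up in the solution:

    ```python
    import numpy as np

    def rk4_step(f, y, h, eps=0.0, rng=None):
        """One classical RK4 step; eps adds a perturbation to every stage."""
        def perturb():
            return eps * (rng.uniform(-1, 1) if rng else 1.0)
        k1 = f(y) + perturb()
        k2 = f(y + 0.5 * h * k1) + perturb()
        k3 = f(y + 0.5 * h * k2) + perturb()
        k4 = f(y + h * k3) + perturb()
        return y + h / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

    f = lambda y: -2.0 * y
    rng = np.random.default_rng(7)
    y_clean = y_noisy = 1.0
    h, n = 0.1, 100
    for _ in range(n):
        y_clean = rk4_step(f, y_clean, h)
        y_noisy = rk4_step(f, y_noisy, h, eps=1e-12, rng=rng)
    print(f"difference after {n} steps: {abs(y_noisy - y_clean):.2e}")
    # For a well-implemented method this stays near the size of the injected
    # errors; the paper exhibits implementations where it is instead amplified.
    ```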

  3. Error response test system and method using test mask variable

    Science.gov (United States)

    Gender, Thomas K. (Inventor)

    2006-01-01

    An error response test system and method with increased functionality and improved performance is provided. The error response test system provides the ability to inject errors into the application under test to test the error response of the application under test in an automated and efficient manner. The error response system injects errors into the application through a test mask variable. The test mask variable is added to the application under test. During normal operation, the test mask variable is set to allow the application under test to operate normally. During testing, the error response test system can change the test mask variable to introduce an error into the application under test. The error response system can then monitor the application under test to determine whether the application has the correct response to the error.
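
    A minimal sketch of the test-mask idea, with invented names and a toy application (not the patented system), might look like:

    ```python
    # Mask bits select which error, if any, is injected into the application.
    TEST_MASK = 0x0          # 0: normal operation (no injected errors)

    ERR_BAD_CHECKSUM = 0x1
    ERR_TIMEOUT      = 0x2

    def read_sensor():
        value, checksum_ok = 42.0, True
        if TEST_MASK & ERR_BAD_CHECKSUM:     # injected fault path
            checksum_ok = False
        if TEST_MASK & ERR_TIMEOUT:
            raise TimeoutError("injected timeout")
        return value, checksum_ok

    def application_step():
        try:
            value, ok = read_sensor()
            return "retry" if not ok else f"ok:{value}"
        except TimeoutError:
            return "failsafe"

    # The test harness flips mask bits and checks the observed error response:
    for mask, expected in [(0x0, "ok:42.0"), (0x1, "retry"), (0x2, "failsafe")]:
        TEST_MASK = mask
        assert application_step() == expected
    print("error responses verified")
    ```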

  4. Method of producing vegetable puree

    DEFF Research Database (Denmark)

    2004-01-01

    A process for producing a vegetable puree, comprising the sequential steps of: a) crushing, chopping or slicing the vegetable into pieces of 1 to 30 mm; b) blanching the vegetable pieces at a temperature of 60 to 90°C; c) contacting the blanched vegetable pieces with a macerating enzyme activity; d) blending the macerated vegetable pieces and obtaining a puree.

  5. Equation-Method for correcting clipping errors in OFDM signals.

    Science.gov (United States)

    Bibi, Nargis; Kleerekoper, Anthony; Muhammad, Nazeer; Cheetham, Barry

    2016-01-01

    Orthogonal frequency division multiplexing (OFDM) is the digital modulation technique used by 4G and many other wireless communication systems. OFDM signals have significant amplitude fluctuations resulting in high peak to average power ratios, which can make an OFDM transmitter susceptible to non-linear distortion produced by its high power amplifiers (HPA). A simple and popular solution to this problem is to clip the peaks before an OFDM signal is applied to the HPA, but this causes in-band distortion and introduces bit-errors at the receiver. In this paper we discuss a novel technique, which we call the Equation-Method, for correcting these errors. The Equation-Method uses the Fast Fourier Transform to create a set of simultaneous equations which, when solved, return the amplitudes of the peaks before they were clipped. We show analytically and through simulations that this method can correct all clipping errors over a wide range of clipping thresholds. We show that numerical instability can be avoided and new techniques are needed to enable the receiver to differentiate between correctly and incorrectly received frequency-domain constellation symbols.
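
    The following sketch illustrates the problem being corrected rather than the Equation-Method itself: clipping an OFDM symbol's peaks distorts the received constellation (all parameters illustrative).

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    N = 64                                    # subcarriers
    bits = rng.integers(0, 2, size=(N, 2))
    symbols = (2 * bits[:, 0] - 1) + 1j * (2 * bits[:, 1] - 1)   # QPSK

    tx = np.fft.ifft(symbols)                 # OFDM modulation
    threshold = 0.7 * np.max(np.abs(tx))
    clipped = np.where(np.abs(tx) > threshold,
                       threshold * tx / np.abs(tx), tx)   # amplitude clipping

    rx = np.fft.fft(clipped)                  # receiver FFT
    evm = np.sqrt(np.mean(np.abs(rx - symbols) ** 2))
    print(f"samples clipped: {np.sum(np.abs(tx) > threshold)}, EVM: {evm:.3f}")
    # The Equation-Method uses knowledge of the clipping to set up
    # frequency-domain equations and recover the lost peak amplitudes.
    ```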

  6. Method of producing polyalkylated oligoalkylenepolyamines

    Energy Technology Data Exchange (ETDEWEB)

    Elangovan, Arumugasamy

    2014-02-25

    A method of preparing polyalkylated oligoalkylenepolyamines is provided. The method includes contacting oligoalkylenepolyamine with a reagent composition comprising (a) alkyl bromide and/or alkyl chloride; (b) a basic agent; and (c) iodide salt. The alkylation reaction may be carried out in a polar, aprotic organic solvent.

  7. Method of producing ethyl alcohol

    Energy Technology Data Exchange (ETDEWEB)

    Philliskirk, G; Yates, H J

    1978-09-13

    Ethanol was produced from whey by removing protein from the whey by ultrafiltration, concentrating the deproteinized whey by reverse osmosis to a lactose content of at least 8 g/100 mL, fermenting with Candida pseudotropicalis NCYC 744, and distilling. E.g., milk whey was deproteinized to give a permeate containing 8.3 g lactose/100 mL. After fermentation, the final lactose content was 0.1 g/100 mL and the ethanol concentration was 3.55 g/100 mL, representing a 42% conversion of lactose to ethanol.

  8. New method of classifying human errors at nuclear power plants and the analysis results of applying this method to maintenance errors at domestic plants

    International Nuclear Information System (INIS)

    Takagawa, Kenichi; Miyazaki, Takamasa; Gofuku, Akio; Iida, Hiroyasu

    2007-01-01

    Since many of the adverse events that have occurred in nuclear power plants in Japan and abroad have been related to maintenance or operation, it is necessary to plan preventive measures based on detailed analyses of human errors made by maintenance workers or operators. Therefore, before planning preventive measures, we developed a new method of analyzing human errors. Since each human error is an unsafe action caused by some misjudgement made by a person, we decided to classify them into six categories according to the stage in the judgment process at which the error was made. By further classifying each error as either omission-type or commission-type, we produced 12 categories of errors. We then divided these into the two categories of basic error tendencies and individual error tendencies, and categorized background factors into four categories: imperfect planning; imperfect facilities or tools; imperfect environment; and imperfect instructions or communication. We defined the factors in each category to make it easy to identify the factors that caused an error. Using this method, we studied the characteristics of human errors involving maintenance workers and planners, since many maintenance errors have occurred. Among the human errors made by workers (worker errors) during the implementation stage, the following three types were prevalent, together accounting for approximately 80%: commission-type 'projection errors', omission-type 'comprehension errors' and commission-type 'action errors'. The most common individual factor in worker errors was 'repetition or habit' (schema), based on the assumption of a typical situation, and half of the 'repetition or habit' (schema) cases were not influenced by any background factors. The most common background factor contributing to the individual factor was 'imperfect work environment', followed by 'insufficient knowledge'. Approximately 80% of the individual factors were 'repetition or habit' or

  9. Method for producing ceramic bodies

    International Nuclear Information System (INIS)

    Prunier, A.R. Jr.; Spangenberg, S.F.; Wijeyesekera, S.

    1992-01-01

    This patent describes a method for preparing a superconducting ceramic article. It comprises heating a powdered admixture comprising a source of yttria (Y2O3), a source of barium monoxide and a source of cupric oxide to a temperature of from about 800 degrees Centigrade to 900 degrees Centigrade, to allow the admixture to be densified under pressure to more than about 65 percent of the admixture's theoretical density but low enough to substantially preclude melting of the admixture; applying to the heated admixture isostatic pressure of between about 80,000 psi (5.5 x 10^2 MPa) and about the fracture stress of the heated admixture, for a period of time of from about 0.1 second to about ten minutes, to form a densified article with a density of more than about 65 percent of the admixture's theoretical density; and annealing the densified article in the presence of gaseous oxygen under conditions sufficient to convert the densified article to a superconducting ceramic article having a composition comprising YBa2Cu3O7-x, where 0 < x < 0.6.

  10. Internal quality control of RIA with Tonks error calculation method

    International Nuclear Information System (INIS)

    Chen Xiaodong

    1996-01-01

    According to the methodological features of RIA, an internal quality control chart using the Tonks error calculation method, suitable for RIA, is designed. The quality control chart defines the allowable error value from the normal reference range. The method is simple to perform and its results are easy to interpret at a glance. Taking the determination of T3 and T4 as an example, the calculation of the allowable error, the drawing of the quality control chart and the analysis of the results are introduced.
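
    As commonly stated, Tonks' rule sets the allowable error at one quarter of the normal reference range, expressed as a percentage of the range's mean and often capped at 10%; a sketch under that assumption (the reference range values are illustrative, not from the article):

    ```python
    def tonks_allowable_error(ref_low, ref_high, cap=10.0):
        """Allowable error in percent for an analyte with the given reference range."""
        mean = (ref_low + ref_high) / 2.0
        pct = 100.0 * (ref_high - ref_low) / (4.0 * mean)
        return min(pct, cap)

    # Example: an assumed serum T4 reference range of 60-160 nmol/L
    print(f"T4 allowable error: +/-{tonks_allowable_error(60, 160):.1f}%")
    # A control result would be flagged when its deviation from the target
    # value exceeds this percentage.
    ```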

  11. Error-finding and error-correcting methods for the start-up of the SLC

    International Nuclear Information System (INIS)

    Lee, M.J.; Clearwater, S.H.; Kleban, S.D.; Selig, L.J.

    1987-02-01

    During the commissioning of an accelerator, storage ring, or beam transfer line, one of the important tasks of an accelerator physicist is to check the first-order optics of the beam line and to look for errors in the system. Conceptually, it is important to distinguish between techniques for finding the machine errors that are the cause of the problem and techniques for correcting the beam errors that are the result of the machine errors. In this paper we will limit our presentation to certain applications of these two methods for finding or correcting beam-focus errors and beam-kick errors, which affect the profile and trajectory of the beam respectively. Many of these methods have been used successfully in the commissioning of SLC systems. In order not to waste expensive beam time, we have developed and used a beam-line simulator to test the ideas that have not been tested experimentally. To save valuable physicists' time we have further automated the beam-kick error-finding procedures by adopting methods from the field of artificial intelligence to develop a prototype expert system. Our experience with this prototype has demonstrated the usefulness of expert systems in solving accelerator control problems. The expert system is able to find the same solutions as an expert physicist, but in a more systematic fashion. The methods used in these procedures and some of the recent applications will be described in this paper.

  12. A Posteriori Error Estimation for Finite Element Methods and Iterative Linear Solvers

    Energy Technology Data Exchange (ETDEWEB)

    Melboe, Hallgeir

    2001-10-01

    This thesis addresses a posteriori error estimation for finite element methods and iterative linear solvers. Adaptive finite element methods have gained a lot of popularity over the last decades due to their ability to produce accurate results with limited computer power. In these methods a posteriori error estimates play an essential role. Not only do they give information about how large the total error is, they also indicate which parts of the computational domain should be given a more sophisticated treatment in order to reduce the error. A posteriori error estimates are traditionally aimed at estimating the global error, but more recently so called goal oriented error estimators have been shown a lot of interest. The name reflects the fact that they estimate the error in user-defined local quantities. In this thesis the main focus is on global error estimators for highly stretched grids and goal oriented error estimators for flow problems on regular grids. Numerical methods for partial differential equations, such as finite element methods and other similar techniques, typically result in a linear system of equations that needs to be solved. Usually such systems are solved using some iterative procedure which due to a finite number of iterations introduces an additional error. Most such algorithms apply the residual in the stopping criterion, whereas the control of the actual error may be rather poor. A secondary focus in this thesis is on estimating the errors that are introduced during this last part of the solution procedure. The thesis contains new theoretical results regarding the behaviour of some well known, and a few new, a posteriori error estimators for finite element methods on anisotropic grids. Further, a goal oriented strategy for the computation of forces in flow problems is devised and investigated. Finally, an approach for estimating the actual errors associated with the iterative solution of linear systems of equations is suggested. (author)
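
    As a toy illustration of the iterative-solver point (a sketch, not from the thesis): for an ill-conditioned symmetric positive definite system, the residual used in the stopping criterion can understate the actual error by orders of magnitude.

    ```python
    import numpy as np

    rng = np.random.default_rng(8)
    n = 200
    # SPD matrix with a wide eigenvalue range (ill-conditioned, illustrative):
    Q, _ = np.linalg.qr(rng.normal(size=(n, n)))
    A = Q @ np.diag(np.logspace(-6, 0, n)) @ Q.T
    x_true = rng.normal(size=n)
    b = A @ x_true

    # Plain conjugate gradient iteration:
    x = np.zeros(n); r = b.copy(); p = r.copy()
    for k in range(1, 201):
        Ap = A @ p
        alpha = (r @ r) / (p @ Ap)
        x += alpha * p
        r_new = r - alpha * Ap
        beta = (r_new @ r_new) / (r @ r)
        p, r = r_new + beta * p, r_new
        if k % 50 == 0:
            print(f"iter {k:3d}: residual {np.linalg.norm(r):.1e}  "
                  f"true error {np.linalg.norm(x - x_true):.1e}")
    # The residual norm can be far smaller than the error norm, which is why
    # estimating the actual iteration error, as the thesis does, matters.
    ```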

  13. Internal Error Propagation in Explicit Runge--Kutta Methods

    KAUST Repository

    Ketcheson, David I.; Loczi, Lajos; Parsani, Matteo

    2014-01-01

    of internal stability polynomials can be obtained by modifying the implementation details. We provide bounds on the internal error amplification constants for some classes of methods with many stages, including strong stability preserving methods

  14. Towards automatic global error control: Computable weak error expansion for the tau-leap method

    KAUST Repository

    Karlsson, Peer Jesper; Tempone, Raul

    2011-01-01

    This work develops novel error expansions with computable leading order terms for the global weak error in the tau-leap discretization of pure jump processes arising in kinetic Monte Carlo models. Accurate computable a posteriori error approximations are the basis for adaptive algorithms, a fundamental tool for numerical simulation of both deterministic and stochastic dynamical systems. These pure jump processes are simulated either by the tau-leap method, or by exact simulation, also referred to as dynamic Monte Carlo, the Gillespie Algorithm or the Stochastic Simulation Algorithm. Two types of estimates are presented: an a priori estimate for the relative error that gives a comparison between the work for the two methods depending on the propensity regime, and an a posteriori estimate with computable leading order term. © de Gruyter 2011.
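
    A minimal sketch of the two simulation approaches being compared, for a single decay reaction X -> 0 with rate c (not the paper's error estimator itself; all parameters illustrative):

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    c, x0, t_end = 1.0, 1000, 1.0

    def ssa():
        """Exact simulation (Gillespie / Stochastic Simulation Algorithm)."""
        x, t = x0, 0.0
        while x > 0:
            t += rng.exponential(1.0 / (c * x))    # time to next reaction
            if t > t_end:
                break
            x -= 1
        return x

    def tau_leap(tau=0.01):
        """Tau-leap: fire a Poisson number of reactions per time step."""
        x, t = x0, 0.0
        while t < t_end and x > 0:
            x -= rng.poisson(c * x * tau)          # firings in [t, t + tau)
            x = max(x, 0)
            t += tau
        return x

    print("E[X(1)] =", x0 * np.exp(-c * t_end))     # mean of the true process
    print("SSA     :", np.mean([ssa() for _ in range(200)]))
    print("tau-leap:", np.mean([tau_leap() for _ in range(200)]))
    # The weak error of tau-leap is the bias of such averages; the article
    # derives computable leading-order expansions for it.
    ```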

  15. THE PRACTICAL ANALYSIS OF FINITE ELEMENTS METHOD ERRORS

    Directory of Open Access Journals (Sweden)

    Natalia Bakhova

    2011-03-01

    Abstract. The most important practical questions of reliably estimating finite element method errors are considered. Rules for determining the necessary calculation accuracy are developed. Methods and ways of calculation are offered that allow the best final results to be obtained at an economical expenditure of computing work. Keywords: error, given accuracy, finite element method, Lagrangian and Hermitian elements.

  16. Error of image saturation in the structured-light method.

    Science.gov (United States)

    Qi, Zhaoshuai; Wang, Zhao; Huang, Junhui; Xing, Chao; Gao, Jianmin

    2018-01-01

    In the phase-measuring structured-light method, image saturation will induce large phase errors. Usually, by selecting proper system parameters (such as the phase-shift number, exposure time, projection intensity, etc.), the phase error can be reduced. However, due to lack of a complete theory of phase error, there is no rational principle or basis for the selection of the optimal system parameters. For this reason, the phase error due to image saturation is analyzed completely, and the effects of the two main factors, including the phase-shift number and saturation degree, on the phase error are studied in depth. In addition, the selection of optimal system parameters is discussed, including the proper range and the selection principle of the system parameters. The error analysis and the conclusion are verified by simulation and experiment results, and the conclusion can be used for optimal parameter selection in practice.
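
    The mechanism is easy to demonstrate with the standard N-step phase-shifting formula (a sketch; all parameters are illustrative, not the paper's): clipping the fringe intensities biases the recovered phase.

    ```python
    import numpy as np

    def recovered_phase(phi, n_steps=4, a=120.0, b=120.0, sat=255.0):
        """Recover phase from n_steps shifted fringes, with saturation clipping."""
        deltas = 2 * np.pi * np.arange(n_steps) / n_steps
        intensity = a + b * np.cos(phi + deltas)       # ideal fringe intensities
        intensity = np.minimum(intensity, sat)         # image saturation
        num = -np.sum(intensity * np.sin(deltas))
        den = np.sum(intensity * np.cos(deltas))
        return np.arctan2(num, den)

    phi_true = 0.5
    for sat in [255.0, 230.0, 200.0]:                  # increasing saturation degree
        err = recovered_phase(phi_true, sat=sat) - phi_true
        print(f"saturation level {sat:5.0f}: phase error {err:+.4f} rad")
    # The error grows with the saturation degree and can be reduced with more
    # phase steps, consistent with the dependence the paper analyzes.
    ```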

  17. Interval sampling methods and measurement error: a computer simulation.

    Science.gov (United States)

    Wirth, Oliver; Slaven, James; Taylor, Matthew A

    2014-01-01

    A simulation study was conducted to provide a more thorough account of measurement error associated with interval sampling methods. A computer program simulated the application of momentary time sampling, partial-interval recording, and whole-interval recording methods on target events randomly distributed across an observation period. The simulation yielded measures of error for multiple combinations of observation period, interval duration, event duration, and cumulative event duration. The simulations were conducted up to 100 times to yield measures of error variability. Although the present simulation confirmed some previously reported characteristics of interval sampling methods, it also revealed many new findings that pertain to each method's inherent strengths and weaknesses. The analysis and resulting error tables can help guide the selection of the most appropriate sampling method for observation-based behavioral assessments. © Society for the Experimental Analysis of Behavior.
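
    A compressed version of such a simulation (one observation period, illustrative parameters) shows the characteristic biases of the three methods:

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    period, interval = 600.0, 10.0                 # seconds
    t = np.arange(0.0, period, 0.01)               # fine time grid
    on = np.zeros(t.size, dtype=bool)              # is the target event occurring?
    for start in rng.uniform(0, period - 15, 20):  # 20 random events
        on |= (t >= start) & (t < start + rng.uniform(1.0, 15.0))

    n_int = int(period / interval)
    samples = on.reshape(n_int, -1)                # one row per observation interval

    mts = samples[:, -1].mean()                    # momentary: last instant only
    pir = samples.any(axis=1).mean()               # partial interval: any occurrence
    wir = samples.all(axis=1).mean()               # whole interval: entire interval
    print(f"true {on.mean():.2f}  MTS {mts:.2f}  PIR {pir:.2f}  WIR {wir:.2f}")
    # Typical outcome: PIR overestimates and WIR underestimates the true event
    # duration, while MTS is roughly unbiased; the study quantifies these errors.
    ```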

  18. Error Analysis for Fourier Methods for Option Pricing

    KAUST Repository

    Häppölä, Juho

    2016-01-01

    We provide a bound for the error committed when using a Fourier method to price European options when the underlying follows an exponential Levy dynamic. The price of the option is described by a partial integro-differential equation (PIDE).

  19. Modeling error distributions of growth curve models through Bayesian methods.

    Science.gov (United States)

    Zhang, Zhiyong

    2016-06-01

    Growth curve models are widely used in social and behavioral sciences. However, typical growth curve models often assume that the errors are normally distributed although non-normal data may be even more common than normal data. In order to avoid possible statistical inference problems in blindly assuming normality, a general Bayesian framework is proposed to flexibly model normal and non-normal data through the explicit specification of the error distributions. A simulation study shows when the distribution of the error is correctly specified, one can avoid the loss in the efficiency of standard error estimates. A real example on the analysis of mathematical ability growth data from the Early Childhood Longitudinal Study, Kindergarten Class of 1998-99 is used to show the application of the proposed methods. Instructions and code on how to conduct growth curve analysis with both normal and non-normal error distributions using the MCMC procedure of SAS are provided.

  20. Grinding Method and Error Analysis of Eccentric Shaft Parts

    Science.gov (United States)

    Wang, Zhiming; Han, Qiushi; Li, Qiguang; Peng, Baoying; Li, Weihua

    2017-12-01

    RV reducers and various mechanical transmission parts make wide use of eccentric shaft parts, and precision grinding technology for such parts is now in demand. In this paper, the model of the X-C linkage relation for eccentric shaft grinding is studied. By the inversion method, the contour curve of the wheel envelope is deduced, with the distance from the center of the eccentric circle held constant. Simulation software for eccentric shaft grinding is developed and the correctness of the model is proved. The influence of the X-axis feed error, the C-axis feed error and the wheel radius error on the grinding process is analyzed, and corresponding error calculation models are proposed. The simulation analysis is carried out to provide the basis for contour error compensation.

  1. Method and apparatus for producing microspherical particles

    International Nuclear Information System (INIS)

    Egli, W.; Bailey, W.H.; Leary, D.F.; Lansley, R.J.

    1979-01-01

    This invention relates generally to a method and apparatus for producing microspherical particles and more particularly to a method and apparatus which are particularly useful in connection with the sol-gel process for the production of nuclear fuel kernels. (U.K.)

  2. Methods of producing compounds from plant materials

    Science.gov (United States)

    Werpy, Todd A. [West Richland, WA]; Schmidt, Andrew J. [Richland, WA]; Frye, Jr., John G.; Zacher, Alan H.; Franz, James A.; Alnajjar, Mikhail S.; Neuenschwander, Gary G.; Alderson, Eric V.; Orth, Rick J.; Abbas, Charles A.; Beery, Kyle E.; Rammelsberg, Anne M.; Kim, Catherine J. [Decatur, IL]

    2010-01-26

    The invention includes methods of processing plant material by adding water to form a mixture, heating the mixture, and separating a liquid component from a solid-comprising component. At least one of the liquid component and the solid-comprising component undergoes additional processing. Processing of the solid-comprising component produces oils, and processing of the liquid component produces one or more of glycerol, ethylene glycol, lactic acid and propylene glycol. The invention includes a process of forming glycerol, ethylene glycol, lactic acid and propylene glycol from plant matter by adding water, heating and filtering the plant matter. The filtrate containing starch, starch fragments, hemicellulose and fragments of hemicellulose is treated to form linear poly-alcohols which are then cleaved to produce one or more of glycerol, ethylene glycol, lactic acid and propylene glycol. The invention also includes a method of producing free and/or complexed sterols and stanols from plant material.

  3. Methods of producing compounds from plant material

    Energy Technology Data Exchange (ETDEWEB)

    Werpy, Todd A.; Schmidt, Andrew J.; Frye, Jr., John G.; Zacher, Alan H.; Franz, James A.; Alnajjar, Mikhail S.; Neuenschwander, Gary G.; Alderson, Eric V.; Orth, Rick J.; Abbas, Charles A.; Beery, Kyle E.; Rammelsberg, Anne M.; Kim, Catherine J.

    2006-01-03

    The invention includes methods of processing plant material by adding water to form a mixture, heating the mixture, and separating a liquid component from a solid-comprising component. At least one of the liquid component and the solid-comprising component undergoes additional processing. Processing of the solid-comprising component produces oils, and processing of the liquid component produces one or more of glycerol, ethylene glycol, lactic acid and propylene glycol. The invention includes a process of forming glycerol, ethylene glycol, lactic acid and propylene glycol from plant matter by adding water, heating and filtering the plant matter. The filtrate containing starch, starch fragments, hemicellulose and fragments of hemicellulose is treated to form linear poly-alcohols which are then cleaved to produce one or more of glycerol, ethylene glycol, lactic acid and propylene glycol. The invention also includes a method of producing free and/or complexed sterols and stanols from plant material.

  4. Nonlinear error dynamics for cycled data assimilation methods

    International Nuclear Information System (INIS)

    Moodey, Alexander J F; Lawless, Amos S; Potthast, Roland W E; Van Leeuwen, Peter Jan

    2013-01-01

    We investigate the error dynamics for cycled data assimilation systems, such that the inverse problem of state determination is solved at t_k, k = 1, 2, 3, ..., with a first guess given by the state propagated via a dynamical system model M_k from time t_{k-1} to time t_k. In particular, for nonlinear dynamical systems M_k that are Lipschitz continuous with respect to their initial states, we provide deterministic estimates for the development of the error ‖e_k‖ := ‖x_k^(a) − x_k^(t)‖ between the estimated state x^(a) and the true state x^(t) over time. Clearly, observation error of size δ > 0 leads to an estimation error in every assimilation step. These errors can accumulate, if they are not (a) controlled in the reconstruction and (b) damped by the dynamical system M_k under consideration. A data assimilation method is called stable, if the error in the estimate is bounded in time by some constant C. The key task of this work is to provide estimates for the error ‖e_k‖, depending on the size δ of the observation error, the reconstruction operator R_α, the observation operator H and the Lipschitz constants K^(1) and K^(2) on the lower and higher modes of M_k controlling the damping behaviour of the dynamics. We show that systems can be stabilized by choosing α sufficiently small, but the bound C will then depend on the data error δ in the form c‖R_α‖δ with some constant c. Since ‖R_α‖ → ∞ for α → 0, the constant might be large. Numerical examples for this behaviour in the nonlinear case are provided using a (low-dimensional) Lorenz '63 system. (paper)
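
    A scalar toy model (invented for illustration, not from the paper) shows the stability mechanism: a contractive model damps the initial error, and the analysis error settles at a level set by the observation error δ.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    K, delta, alpha = 0.8, 0.1, 0.5        # model damping, obs error size, analysis weight

    def M(x):                              # Lipschitz model: |M(x) - M(y)| <= K |x - y|
        return K * x + 0.5                 # affine, with fixed point x = 2.5

    x_true, x_est = 2.5, 10.0              # large initial estimation error
    for k in range(25):
        x_true, x_fore = M(x_true), M(x_est)          # propagate truth and forecast
        y = x_true + rng.uniform(-delta, delta)       # noisy observation
        x_est = (1 - alpha) * x_fore + alpha * y      # assimilation step (nudging)
        if k % 5 == 0:
            print(f"cycle {k:2d}: |e_k| = {abs(x_est - x_true):.4f}")
    # The error contracts by (1 - alpha) * K per cycle and then fluctuates at a
    # level proportional to delta: the bounded-error behaviour analyzed above.
    ```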

  5. Method for producing uranium atomic beam source

    International Nuclear Information System (INIS)

    Krikorian, O.H.

    1976-01-01

    A method is described for producing a beam of neutral uranium atoms by vaporizing uranium from a compound UM_x heated to produce U vapor, from an M boat or from some other suitable refractory container such as a tungsten boat, where M is a metal whose vapor pressure is negligible compared with that of uranium at the vaporization temperature. The compound, for example, may be the uranium-rhenium compound URe2. An evaporation rate in excess of about 10 times that of conventional uranium beam sources is produced.

  6. Produced water treatment methods for SAGD

    Energy Technology Data Exchange (ETDEWEB)

    Minnich, K. [Veolia Water Solutions and Technologies, Mississauga, ON (Canada)

    2008-07-01

    Produced water treatment methods for steam assisted gravity drainage (SAGD) processes were presented. Lime softening is used to remove sludge before weak acid cation processes. However, the process is not reliable in cold climates, and disposal of the sludge is now posing environmental problems in Alberta. High pH MVC evaporation processes use sodium hydroxide (NaOH) additions to prevent silica scaling. However, the process produces silica wastes that are difficult to dispose of. The sorption slurry process was designed to reduce the use of caustic soda and develop a cost-effective method of disposing of evaporator concentrates. The method produces 98 per cent steam quality for SAGD injection. Silica is sorbed onto crystals in order to prevent silica scaling. The evaporator concentrate from the process is suitable for on- and off-site deep well disposal. The ceramic membrane process was designed to reduce the consumption of chemicals and improve the reliability of water treatment processes. The ion exchange desilication process uses 80 per cent less power and produces 80 per cent fewer CO2 emissions than MVC evaporators. A comparative operating cost evaluation of various electric supply configurations and produced water treatment processes was also included, as well as an analysis of produced water chemistry. tabs., figs.

  7. Methods for producing reinforced carbon nanotubes

    Science.gov (United States)

    Ren, Zhifen [Newton, MA]; Wen, Jian Guo [Newton, MA]; Lao, Jing Y [Chestnut Hill, MA]; Li, Wenzhi [Brookline, MA]

    2008-10-28

    Methods for producing reinforced carbon nanotubes having a plurality of microparticulate carbide or oxide materials formed substantially on the surface of such reinforced carbon nanotubes composite materials are disclosed. In particular, the present invention provides reinforced carbon nanotubes (CNTs) having a plurality of boron carbide nanolumps formed substantially on a surface of the reinforced CNTs that provide a reinforcing effect on CNTs, enabling their use as effective reinforcing fillers for matrix materials to give high-strength composites. The present invention also provides methods for producing such carbide reinforced CNTs.

  8. Output Error Method for Tiltrotor Unstable in Hover

    Directory of Open Access Journals (Sweden)

    Lichota Piotr

    2017-03-01

    This article investigates system identification from flight test data for a tiltrotor that is unstable in hover. The aircraft dynamics was described by a linear model defined in the Body-Fixed Coordinate System. The Output Error Method was selected in order to obtain stability and control derivatives in lateral motion. For estimating model parameters, both time- and frequency-domain formulations were applied. To improve the system identification performed in the time domain, a stabilization matrix was included for evaluating the states. In the end, estimates obtained from the various Output Error Method formulations were compared in terms of parameter accuracy and time histories. Evaluations were performed in the MATLAB R2009b environment.

  9. Methods and systems for producing syngas

    Science.gov (United States)

    Hawkes, Grant L; O'Brien, James E; Stoots, Carl M; Herring, J. Stephen; McKellar, Michael G; Wood, Richard A; Carrington, Robert A; Boardman, Richard D

    2013-02-05

    Methods and systems are provided for producing syngas utilizing heat from thermochemical conversion of a carbonaceous fuel to support decomposition of at least one of water and carbon dioxide using one or more solid-oxide electrolysis cells. Simultaneous decomposition of carbon dioxide and water or steam by one or more solid-oxide electrolysis cells may be employed to produce hydrogen and carbon monoxide. A portion of oxygen produced from at least one of water and carbon dioxide using one or more solid-oxide electrolysis cells is fed at a controlled flow rate in a gasifier or combustor to oxidize the carbonaceous fuel to control the carbon dioxide to carbon monoxide ratio produced.

  10. Method for decoupling error correction from privacy amplification

    Energy Technology Data Exchange (ETDEWEB)

    Lo, Hoi-Kwong [Department of Electrical and Computer Engineering and Department of Physics, University of Toronto, 10 King's College Road, Toronto, Ontario, M5S 3G4 (Canada)]

    2003-04-01

    In a standard quantum key distribution (QKD) scheme such as BB84, two procedures, error correction and privacy amplification, are applied to extract a final secure key from a raw key generated from quantum transmission. To simplify the study of protocols, it is commonly assumed that the two procedures can be decoupled from each other. While such a decoupling assumption may be valid for individual attacks, it is actually unproven in the context of ultimate or unconditional security, which is the Holy Grail of quantum cryptography. In particular, this means that the application of standard efficient two-way error-correction protocols like Cascade is not proven to be unconditionally secure. Here, I provide the first proof of such a decoupling principle in the context of unconditional security. The method requires Alice and Bob to share some initial secret string and use it to encrypt their communications in the error correction stage using one-time-pad encryption. Consequently, I prove the unconditional security of the interactive Cascade protocol proposed by Brassard and Salvail for error correction and modified by one-time-pad encryption of the error syndrome, followed by the random matrix protocol for privacy amplification. This is an efficient protocol in terms of both computational power and key generation rate. My proof uses the entanglement purification approach to security proofs of QKD. The proof applies to all adaptive symmetric methods for error correction, which cover all existing methods proposed for BB84. In terms of the net key generation rate, the new method is as efficient as the standard Shor-Preskill proof.
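
    The encryption step itself is just a one-time pad over the public discussion; a minimal sketch of that ingredient (toy syndrome and invented values, not the Cascade protocol):

    ```python
    import secrets

    def xor(a: bytes, b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(a, b))

    syndrome = b"\x01\x00\x01\x01"            # parity bits from error correction (toy)
    pad = secrets.token_bytes(len(syndrome))  # pre-shared secret bits, used once

    ciphertext = xor(syndrome, pad)           # Alice -> Bob over the public channel
    recovered = xor(ciphertext, pad)          # Bob decrypts with the same pad
    assert recovered == syndrome
    print("syndrome exchanged; the pad hides it from an eavesdropper")
    ```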

  11. Method for decoupling error correction from privacy amplification

    International Nuclear Information System (INIS)

    Lo, Hoi-Kwong

    2003-01-01

    In a standard quantum key distribution (QKD) scheme such as BB84, two procedures, error correction and privacy amplification, are applied to extract a final secure key from a raw key generated from quantum transmission. To simplify the study of protocols, it is commonly assumed that the two procedures can be decoupled from each other. While such a decoupling assumption may be valid for individual attacks, it is actually unproven in the context of ultimate or unconditional security, which is the Holy Grail of quantum cryptography. In particular, this means that the application of standard efficient two-way error-correction protocols like Cascade is not proven to be unconditionally secure. Here, I provide the first proof of such a decoupling principle in the context of unconditional security. The method requires Alice and Bob to share some initial secret string and use it to encrypt their communications in the error correction stage using one-time-pad encryption. Consequently, I prove the unconditional security of the interactive Cascade protocol proposed by Brassard and Salvail for error correction and modified by one-time-pad encryption of the error syndrome, followed by the random matrix protocol for privacy amplification. This is an efficient protocol in terms of both computational power and key generation rate. My proof uses the entanglement purification approach to security proofs of QKD. The proof applies to all adaptive symmetric methods for error correction, which cover all existing methods proposed for BB84. In terms of the net key generation rate, the new method is as efficient as the standard Shor-Preskill proof

  12. Method for producing substrates for superconducting layers

    DEFF Research Database (Denmark)

    2013-01-01

    There is provided a method for producing a substrate (600) suitable for supporting an elongated superconducting element, wherein, e.g., a deformation process is utilized in order to form disruptive strips in a layered solid element, and where etching is used to form undercut volumes (330, 332...

  13. Method for Producing Substrates for Superconducting Layers

    DEFF Research Database (Denmark)

    2015-01-01

    There is provided a method for producing a substrate suitable for supporting an elongated superconducting element, wherein one or more elongated strips of masking material are placed on a solid element (202) so as to form one or more exposed elongated areas being delimited on one or two sides...

  14. Analysis of possible systematic errors in the Oslo method

    International Nuclear Information System (INIS)

    Larsen, A. C.; Guttormsen, M.; Buerger, A.; Goergen, A.; Nyhus, H. T.; Rekstad, J.; Siem, S.; Toft, H. K.; Tveten, G. M.; Wikan, K.; Krticka, M.; Betak, E.; Schiller, A.; Voinov, A. V.

    2011-01-01

    In this work, we have reviewed the Oslo method, which enables the simultaneous extraction of the level density and γ-ray transmission coefficient from a set of particle-γ coincidence data. Possible errors and uncertainties have been investigated. Typical data sets from various mass regions as well as simulated data have been tested against the assumptions behind the data analysis.

  15. Method for producing small hollow spheres

    International Nuclear Information System (INIS)

    Hendricks, C.D.

    1979-01-01

    A method is described for producing small hollow spheres of glass, metal or plastic, wherein the sphere material is mixed with, or contains as part of its composition, a blowing agent which decomposes at high temperature (T > approx. 600 °C). As the temperature is quickly raised, the blowing agent decomposes and the resulting gas expands from within, thus forming a hollow sphere of controllable thickness. The hollow spheres thus produced (20 to 10^3 μm) have a variety of applications, and are particularly useful in the fabrication of targets for laser implosion, such as neutron sources, laser fusion physics studies, and laser-initiated fusion power plants.

  16. An in-situ measuring method for planar straightness error

    Science.gov (United States)

    Chen, Xi; Fu, Luhua; Yang, Tongyu; Sun, Changku; Wang, Zhong; Zhao, Yan; Liu, Changjie

    2018-01-01

    In view of some current problems in measuring the plane shape error of workpieces, an in-situ measuring method based on laser triangulation is presented in this paper. The method avoids the inefficiency of traditional methods such as the knife straightedge, as well as the time and cost requirements of a coordinate measuring machine (CMM). A laser-based measuring head is designed and installed on the spindle of a numerically controlled (NC) machine. The measuring head moves along the planned path to measure the measuring points. The spatial coordinates of the measuring points are obtained by combining the laser triangulation displacement sensor with the coordinate system of the NC machine, which makes in-situ measurement possible. The planar straightness error is evaluated using particle swarm optimization (PSO). To verify the feasibility and accuracy of the measuring method, simulation experiments were implemented with a CMM. Comparing the measurement results of the measuring head with the corresponding values obtained by a composite measuring machine verifies that the method can realize high-precision, automatic measurement of the planar straightness error of a workpiece.
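
    A minimal sketch of the PSO evaluation step (simulated data; all parameters illustrative, not the paper's implementation): find the reference line minimizing the peak-to-valley residual, whose value is the straightness error.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)
    x = np.linspace(0.0, 100.0, 50)                      # measured positions (mm)
    y = 0.002 * x + 0.003 * np.sin(x / 7.0) + rng.normal(0, 3e-4, x.size)

    def straightness(params):
        """Peak-to-valley residual about the line y = a + b*x (minimum zone)."""
        a, b = params
        r = y - (a + b * x)
        return r.max() - r.min()

    # Plain particle swarm optimization over (a, b):
    n, w, c1, c2 = 30, 0.7, 1.5, 1.5
    pos = rng.normal(0, 0.01, (n, 2)); vel = np.zeros((n, 2))
    pbest = pos.copy(); pbest_f = np.array([straightness(p) for p in pos])
    for _ in range(200):
        gbest = pbest[pbest_f.argmin()]
        vel = (w * vel + c1 * rng.random((n, 1)) * (pbest - pos)
                       + c2 * rng.random((n, 1)) * (gbest - pos))
        pos += vel
        f = np.array([straightness(p) for p in pos])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = pos[improved], f[improved]

    print(f"straightness error: {pbest_f.min() * 1000:.3f} um")
    ```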

  17. Method of producing silicon carbide articles

    International Nuclear Information System (INIS)

    Milewski, J.V.

    1985-01-01

    A method of producing articles comprising reaction-bonded silicon carbide (SiC) and graphite (and/or carbon) is given. The process converts the graphite (and/or carbon) in situ to SiC, thus providing the capability of economically obtaining articles made up wholly or partially of SiC in any size and shape in which graphite (and/or carbon) can be found or made. When the produced articles consist of an inner graphite (and/or carbon) substrate to which SiC is reaction-bonded, they are distinguished from the SiC-coated graphite articles of the prior art by a strong bond with a gradual (as opposed to sharply defined) interface which extends over a distance of mils. A method for forming SiC whisker-reinforced ceramic matrices is also given. The whisker-reinforced articles comprise SiC whiskers which substantially retain their structural integrity.

  18. Method of producing encapsulated thermonuclear fuel particles

    International Nuclear Information System (INIS)

    Smith, W.H.; Taylor, W.L.; Turner, H.L.

    1976-01-01

    A method of producing a fuel particle is disclosed, which comprises forming hollow spheroids of a material having a mass number greater than 50, immersing said spheroids under pressure and heat in a gaseous atmosphere containing an isotope such as deuterium or tritium, so as to diffuse the gas into the spheroids, and thereafter cooling said spheroids to from about 77 K to about 4 K. 4 Claims, 3 Drawing Figures.

  19. Error evaluation of inelastic response spectrum method for earthquake design

    International Nuclear Information System (INIS)

    Paz, M.; Wong, J.

    1981-01-01

    Two-story, four-story and ten-story shear-building-type frames subjected to earthquake excitation were analyzed at several levels of their yield resistance. These frames were subjected at their base to the motion recorded for the north-south component of the 1940 El Centro earthquake, and to an artificial earthquake which would produce the response spectral charts recommended for design. The frames were first subjected to 25% or 50% of the intensity level of these earthquakes. The resulting maximum relative displacement for each story of the frames was assumed to be the yield resistance for the subsequent analyses at 100% of the excitation intensity. The frames analyzed were uniform along their height, with the stiffness adjusted so as to result in a fundamental period of 0.20 seconds for the two-story frame, 0.40 seconds for the four-story frame and 1.0 second for the ten-story frame. Results of the study provided the following conclusions: (1) The percentage error in floor displacement for linear behavior was less than 10%; (2) The percentage error in floor displacement for inelastic behavior (elastoplastic) could be as high as 100%; (3) In most of the cases analyzed, the error increased with damping in the system; (4) As a general rule, the error increased as the modal yield resistance decreased; (5) The error was lower for the structures subjected to the 1940 El Centro earthquake than for the same structures subjected to an artificial earthquake generated from the response spectra for design. (orig./HP)

  20. ECOLOGICAL REGIONALIZATION METHODS OF OIL PRODUCING AREAS

    Directory of Open Access Journals (Sweden)

    Inna Ivanovna Pivovarova

    2017-01-01

    The paper analyses territory zoning methods under varying degrees of anthropogenic pollution risk and summarizes the results of a spatial analysis of oil pollution of surface water in the most developed oil-producing region of Russia. An example of GIS zoning according to the degree of environmental hazard is presented. Various cluster-analysis algorithms are considered for the isolation of homogeneous data structures. The conclusion is made on the benefits of using combined methods of analysis for assessing the homogeneity of specific environmental characteristics in selected territories.

  1. Knowledge-Based Trajectory Error Pattern Method Applied to an Active Force Control Scheme

    Directory of Open Access Journals (Sweden)

    Endra Pitowarno, Musa Mailah, Hishamuddin Jamaluddin

    2012-08-01

    The active force control (AFC) method is known as a robust control scheme that dramatically enhances the performance of a robot arm, particularly in compensating for disturbance effects. The main task of the AFC method is to estimate the inertia matrix in the feedback loop to provide the correct motor torque required to cancel out these disturbances. Several intelligent control schemes have already been introduced to enhance the estimation of the inertia matrix, such as those using neural networks, iterative learning and fuzzy logic. In this paper, we propose an alternative scheme called the Knowledge-Based Trajectory Error Pattern Method (KBTEPM) to suppress the trajectory tracking error of the AFC scheme. The knowledge is developed from the trajectory tracking error characteristic based on previous experimental results of the crude approximation method. It produces a unique, new and desirable error pattern when a trajectory command is applied. An experimental study was performed using simulation work on the AFC scheme with KBTEPM applied to a two-link planar manipulator, for which a set of rule-based algorithms is derived. A number of previous AFC schemes are also reviewed as benchmarks. The simulation results show that the AFC-KBTEPM scheme successfully reduces the trajectory tracking error significantly, even in the presence of the introduced disturbances. Keywords: active force control, estimated inertia matrix, robot arm, trajectory error pattern, knowledge-based.

  2. Review of current GPS methodologies for producing accurate time series and their error sources

    Science.gov (United States)

    He, Xiaoxing; Montillet, Jean-Philippe; Fernandes, Rui; Bos, Machiel; Yu, Kegen; Hua, Xianghong; Jiang, Weiping

    2017-05-01

    The Global Positioning System (GPS) is an important tool to observe and model geodynamic processes such as plate tectonics and post-glacial rebound. In the last three decades, GPS has seen tremendous advances in the precision of the measurements, which allow researchers to study geophysical signals through a careful analysis of daily time series of GPS receiver coordinates. However, the GPS observations contain errors, and the time series can be described as the sum of a real signal and noise. The signal itself can again be divided into station displacements due to geophysical causes and due to disturbing factors. Examples of the latter are errors in the realization and stability of the reference frame, and corrections due to ionospheric and tropospheric delays and GPS satellite orbit errors. There is an increasing demand for detecting millimeter to sub-millimeter level ground displacement signals in order to further understand regional-scale geodetic phenomena, hence requiring further improvements in the sensitivity of the GPS solutions. This paper provides a review spanning over 25 years of advances in processing strategies, error mitigation methods and noise modeling for the processing and analysis of GPS daily position time series. The processing of the observations is described step by step, mainly with three different strategies, in order to explain the weaknesses and strengths of the existing methodologies. In particular, we focus on the choice of the stochastic model in the GPS time series, which directly affects the estimation of the functional model including, for example, tectonic rates, seasonal signals and co-seismic offsets. Moreover, the geodetic community continues to develop computational methods to fully automate all phases of the analysis of GPS time series. This idea is greatly motivated by the large number of GPS receivers installed around the world for diverse applications ranging from surveying small deformations of civil engineering structures (e
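
    To make the "functional model" concrete, here is a hedged minimal sketch on synthetic data: a daily position series is fitted with an intercept, a tectonic rate, and annual plus semiannual harmonics by least squares. As the abstract stresses, the white-noise assumption used below for the formal rate uncertainty is known to be too optimistic when the real noise is colored (e.g. power-law).

        import numpy as np

        rng = np.random.default_rng(1)

        # Synthetic daily positions (mm): rate + annual/semiannual + noise
        t = np.arange(3650) / 365.25              # ~10 years, in years
        rate_true = 2.0                           # mm/yr
        y = (rate_true * t
             + 1.5 * np.sin(2 * np.pi * t) + 0.8 * np.cos(2 * np.pi * t)
             + 0.4 * np.sin(4 * np.pi * t)
             + rng.normal(0, 1.0, t.size))

        # Functional model: intercept, rate, annual + semiannual harmonics
        A = np.column_stack([np.ones_like(t), t,
                             np.sin(2 * np.pi * t), np.cos(2 * np.pi * t),
                             np.sin(4 * np.pi * t), np.cos(4 * np.pi * t)])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ coef

        # Formal rate uncertainty under the white-noise assumption
        s2 = resid @ resid / (t.size - A.shape[1])
        cov = s2 * np.linalg.inv(A.T @ A)
        print(f"rate = {coef[1]:.3f} +/- {np.sqrt(cov[1, 1]):.3f} mm/yr")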

  3. Error evaluation method for material accountancy measurement. Evaluation of random and systematic errors based on material accountancy data

    International Nuclear Information System (INIS)

    Nidaira, Kazuo

    2008-01-01

    The International Target Values (ITV) give random and systematic measurement uncertainty components as a reference for routinely achievable measurement quality in accountancy measurement. The measurement uncertainty, called error henceforth, needs to be periodically evaluated and checked against the ITV for consistency, as the error varies according to measurement methods, instruments, operators, certified reference samples, frequency of calibration, and so on. In this paper an error evaluation method is developed, focusing on (1) specifying the error calculation model clearly, (2) always obtaining positive random and systematic error variances, (3) obtaining the probability density distribution of an error variance, and (4) confirming the evaluation method by simulation. In addition, the method is demonstrated by applying it to real data. (author)
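
    As a rough illustration of separating the two components (not the author's exact procedure), suppose certified reference samples are measured repeatedly in several calibration periods, each period sharing one systematic bias; a one-way ANOVA on the residuals then splits the random and systematic variances, with the systematic estimate clipped at zero to keep it positive. All numbers are invented.

        import numpy as np

        rng = np.random.default_rng(2)

        # Simulated residuals (measured minus certified) for reference
        # samples: each calibration period shares one systematic bias and
        # every measurement adds independent random error.
        sigma_sys, sigma_rand = 0.004, 0.010      # true relative errors
        periods, reps = 12, 8
        bias = rng.normal(0, sigma_sys, periods)
        resid = bias[:, None] + rng.normal(0, sigma_rand, (periods, reps))

        # One-way ANOVA decomposition of the residuals
        pm = resid.mean(axis=1)
        ms_within = ((resid - pm[:, None]) ** 2).sum() / (periods * (reps - 1))
        ms_between = reps * ((pm - resid.mean()) ** 2).sum() / (periods - 1)

        var_rand = ms_within
        var_sys = max((ms_between - ms_within) / reps, 0.0)  # clipped positive
        print(f"random sigma     ~ {np.sqrt(var_rand):.4f} (true {sigma_rand})")
        print(f"systematic sigma ~ {np.sqrt(var_sys):.4f} (true {sigma_sys})")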

  4. Hydrogen producing method and device therefor

    International Nuclear Information System (INIS)

    Iwamura, Yasuhiro; Ito, Takehiko; Goto, Nobuo; Toyota, Ichiro; Tonegawa, Hiroshi.

    1997-01-01

    The present invention concerns a process for producing hydrogen from water by utilizing a γ/X-ray radiation source such as spent nuclear fuel. Hydrogen is formed from water by combining a scintillator, which uses the γ/X-ray source as an energy source to emit UV light, with a photocatalyst or photocatalytic electrode that decomposes water into hydrogen and oxygen under UV light. The present invention provides a method of effectively using spent fuel assemblies, which are currently unused, by converting them into hydrogen as storable chemical energy. (N.H.)

  5. Cells and methods for producing fatty alcohols

    Science.gov (United States)

    Pfleger, Brian F.; Youngquist, Tyler J.

    2017-07-18

    Recombinant cells and methods for improved yield of fatty alcohols. The recombinant cells harbor a recombinant thioesterase gene, a recombinant acyl-CoA synthetase gene, and a recombinant acyl-CoA reductase gene. In addition, a gene product from one or more of an acyl-CoA dehydrogenase gene, an enoyl-CoA hydratase gene, a 3-hydroxyacyl-CoA dehydrogenase gene, and a 3-ketoacyl-CoA thiolase gene in the recombinant cells is functionally deleted. Culturing the recombinant cells produces fatty alcohols at high yields.

  6. Method of producing granulated ceramic nuclear fuels

    International Nuclear Information System (INIS)

    Wilkinson, W.L.

    1976-01-01

    For the production of granulated ceramic nuclear fuels with as narrow a grain-size spectrum as possible, it is proposed to suspend the nuclear fuel powder in a non-aqueous solvent with a small hydrogen content (e.g. chlorinated hydrocarbons) while adding a binding agent, and then to dry it by means of radiation. Poly(butyl methacrylate) in dibutyl phthalate is proposed as the binding agent. The method is described using the example of UO2 powder in trichloroethylene. The dry granulated material is produced in one working step. (UWI) [de]

  7. Error Estimation and Accuracy Improvements in Nodal Transport Methods; Estimacion de Errores y Aumento de la Precision en Metodos Nodales de Transporte

    Energy Technology Data Exchange (ETDEWEB)

    Zamonsky, O M [Comision Nacional de Energia Atomica, Centro Atomico Bariloche (Argentina)

    2000-07-01

    The accuracy of the solutions produced by the Discrete Ordinates neutron transport nodal methods is analyzed. The new numerical methodologies obtained increase the accuracy of the analyzed schemes and give a posteriori error estimators. The accuracy improvement is obtained with new equations that make the numerical procedure free of truncation errors, and by proposing spatial reconstructions of the angular fluxes that are more accurate than those used until now. An a posteriori error estimator is rigorously obtained for one-dimensional systems that, in certain types of problems, allows quantifying the accuracy of the solutions. From comparisons with the one-dimensional results, an a posteriori error estimator is also obtained for multidimensional systems. Local indicators, which quantify the spatial distribution of the errors, are obtained by the decomposition of the mentioned estimators. This makes the proposed methodology suitable for performing adaptive calculations. Some numerical examples are presented to validate the theoretical developments and to illustrate the ranges where the proposed approximations are valid.

  8. Method of producing zeolite encapsulated nanoparticles

    DEFF Research Database (Denmark)

    2015-01-01

    The invention therefore relates to a method for producing zeolite, zeolite-like or zeotype encapsulated metal nanoparticles, the method comprising the steps of: 1) Adding one or more metal precursors to a silica or alumina source; 2) Reducing the one or more metal precursors to form metal nanoparticles on the surface of the silica or alumina source; 3) Passing a gaseous hydrocarbon, alkyl alcohol or alkyl ether over the silica or alumina supported metal nanoparticles to form a carbon template coated zeolite, zeolite-like or zeotype precursor composition; 4a) Adding a structure directing agent to the carbon template coated zeolite, zeolite-like or zeotype precursor composition thereby creating a zeolite, zeolite-like or zeotype gel composition; 4b) Crystallising the zeolite, zeolite-like or zeotype gel composition by subjecting said composition to a hydrothermal treatment; 5) Removing the carbon...

  9. Error Analysis for Fourier Methods for Option Pricing

    KAUST Repository

    Häppölä, Juho

    2016-01-06

    We provide a bound for the error committed when using a Fourier method to price European options when the underlying follows an exponential Lévy dynamic. The price of the option is described by a partial integro-differential equation (PIDE). Applying a Fourier transformation to the PIDE yields an ordinary differential equation that can be solved analytically in terms of the characteristic exponent of the Lévy process. Then, a numerical inverse Fourier transform allows us to obtain the option price. We present a novel bound for the error and use this bound to set the parameters for the numerical method. We analyze the properties of the bound for a dissipative and pure-jump example. The bound presented is independent of the asymptotic behaviour of option prices at extreme asset prices. The error bound can be decomposed into a product of terms resulting from the dynamics and the option payoff, respectively. The analysis is supplemented by numerical examples that demonstrate results comparable to, and superior to, the existing literature.
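
    As a minimal, hedged illustration of the Fourier route (not the paper's PIDE-based error bound), the sketch below prices a European call by Gil-Pelaez inversion of the characteristic function, taking geometric Brownian motion as the simplest exponential Lévy example so the result can be checked against the closed-form Black-Scholes price:

        import numpy as np
        from scipy.integrate import trapezoid
        from scipy.stats import norm

        # Illustrative model and option parameters
        S0, K, r, sigma, T = 100.0, 110.0, 0.05, 0.2, 1.0

        def phi(u):
            """Characteristic function of ln(S_T) under geometric Brownian
            motion, the simplest exponential Levy process."""
            mu = np.log(S0) + (r - 0.5 * sigma**2) * T
            return np.exp(1j * u * mu - 0.5 * sigma**2 * u**2 * T)

        # Gil-Pelaez inversion for the two exercise probabilities
        u = np.linspace(1e-8, 200.0, 20001)
        k = np.log(K)
        P2 = 0.5 + trapezoid(np.real(np.exp(-1j * u * k) * phi(u) / (1j * u)), u) / np.pi
        P1 = 0.5 + trapezoid(np.real(np.exp(-1j * u * k) * phi(u - 1j)
                                     / (1j * u * phi(-1j))), u) / np.pi
        call_fourier = S0 * P1 - K * np.exp(-r * T) * P2

        # Closed-form Black-Scholes price for comparison
        d1 = (np.log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
        d2 = d1 - sigma * np.sqrt(T)
        call_bs = S0 * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)
        print(f"Fourier: {call_fourier:.6f}   closed form: {call_bs:.6f}")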

  10. CREME96 and Related Error Rate Prediction Methods

    Science.gov (United States)

    Adams, James H., Jr.

    2012-01-01

    Predicting the rate of occurrence of single event effects (SEEs) in space requires knowledge of the radiation environment and the response of electronic devices to that environment. Several analytical models have been developed over the past 36 years to predict SEE rates. The first error rate calculations were performed by Binder, Smith and Holman. Bradford, and Pickel and Blandford in their CRIER (Cosmic-Ray-Induced-Error-Rate) analysis code, introduced the basic Rectangular ParallelePiped (RPP) method for error rate calculations. For the radiation environment at the part, both made use of the cosmic ray LET (Linear Energy Transfer) spectra calculated by Heinrich for various absorber depths. A more detailed model for the space radiation environment within spacecraft was developed by Adams and co-workers. This model, together with a reformulation of the RPP method published by Pickel and Blandford, was used to create the CREME (Cosmic Ray Effects on Micro-Electronics) code. About the same time, Shapiro wrote the CRUP (Cosmic Ray Upset Program) based on the RPP method published by Bradford. It was the first code to specifically take into account charge collection from outside the depletion region due to deformation of the electric field caused by the incident cosmic ray. Other early rate prediction methods and codes include the Single Event Figure of Merit, NOVICE, the Space Radiation code, and the effective flux method of Binder, which is the basis of the SEFA (Scott Effective Flux Approximation) model. By the early 1990s it was becoming clear that CREME and the other early models needed revision. This revision, CREME96, was completed and released as a WWW-based tool, one of the first of its kind. The revisions in CREME96 included improved environmental models and improved models for calculating single event effects. The need for a revision of CREME also stimulated the development of the CHIME (CRRES/SPACERAD Heavy Ion Model of the Environment) and MACREE (Modeling and

  11. Error Analysis of Galerkin's Method for Semilinear Equations

    Directory of Open Access Journals (Sweden)

    Tadashi Kawanago

    2012-01-01

    We establish a general existence result for Galerkin's approximate solutions of abstract semilinear equations and conduct an error analysis. Our results may be regarded as an extension of a precedent work (Schultz 1969). The derivation of our results is, however, different from the discussion in his paper and is essentially based on the convergence theorem of Newton's method and some techniques for deriving it. Some of our results may be applicable for investigating the quality of numerical verification methods for solutions of ordinary and partial differential equations.

  12. Reduction of very large reaction mechanisms using methods based on simulation error minimization

    Energy Technology Data Exchange (ETDEWEB)

    Nagy, Tibor; Turanyi, Tamas [Institute of Chemistry, Eoetvoes University (ELTE), P.O. Box 32, H-1518 Budapest (Hungary)

    2009-02-15

    A new species reduction method called the Simulation Error Minimization Connectivity Method (SEM-CM) was developed. According to the SEM-CM algorithm, a mechanism building procedure is started from the important species. Strongly connected sets of species, identified on the basis of the normalized Jacobian, are added and several consistent mechanisms are produced. The combustion model is simulated with each of these mechanisms and the mechanism causing the smallest error (i.e. deviation from the model that uses the full mechanism), considering the important species only, is selected. Then, in several steps other strongly connected sets of species are added, the size of the mechanism is gradually increased and the procedure is terminated when the error becomes smaller than the required threshold. A new method for the elimination of redundant reactions is also presented, which is called the Principal Component Analysis of Matrix F with Simulation Error Minimization (SEM-PCAF). According to this method, several reduced mechanisms are produced by using various PCAF thresholds. The reduced mechanism having the least CPU time requirement among the ones having almost the smallest error is selected. Application of SEM-CM and SEM-PCAF together provides a very efficient way to eliminate redundant species and reactions from large mechanisms. The suggested approach was tested on a mechanism containing 6874 irreversible reactions of 345 species that describes methane partial oxidation to high conversion. The aim is to accurately reproduce the concentration-time profiles of 12 major species with less than 5% error at the conditions of an industrial application. The reduced mechanism consists of 246 reactions of 47 species and its simulation is 116 times faster than using the full mechanism. The SEM-CM was found to be more effective than the classic Connectivity Method, and also than the DRG, two-stage DRG, DRGASA, basic DRGEP and extended DRGEP methods. (author)
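
    For orientation, the graph-search core shared by connectivity-type reduction methods can be sketched as follows. This is a bare DRG-style reachability search over a toy interaction matrix, without the simulation-error loop that distinguishes SEM-CM; the matrix, thresholds and important-species set are all illustrative assumptions.

        import numpy as np
        from collections import deque

        rng = np.random.default_rng(3)

        # Toy dependency matrix r[i, j]: how strongly species i depends on
        # species j (in DRG it is built from reaction rates; random here).
        n = 40
        r = rng.random((n, n)) * (rng.random((n, n)) < 0.1)
        np.fill_diagonal(r, 0.0)

        def reduce_mechanism(r, important, eps):
            """Keep every species reachable from the important set through
            dependency edges of strength >= eps (graph reachability)."""
            keep, queue = set(important), deque(important)
            while queue:
                i = queue.popleft()
                for j in np.nonzero(r[i] >= eps)[0]:
                    if int(j) not in keep:
                        keep.add(int(j))
                        queue.append(int(j))
            return sorted(keep)

        for eps in (0.1, 0.3, 0.5):
            kept = reduce_mechanism(r, [0, 1, 2], eps)
            print(f"eps={eps:.1f}: {len(kept)} of {n} species kept")

    In SEM-CM, each candidate set kept this way would additionally be simulated against the full mechanism, and the smallest-error mechanism retained; the threshold scan above merely stands in for that loop.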

  13. Statistical methods for biodosimetry in the presence of both Berkson and classical measurement error

    Science.gov (United States)

    Miller, Austin

    In radiation epidemiology, the true dose received by those exposed cannot be assessed directly. Physical dosimetry uses a deterministic function of the source term, distance and shielding to estimate dose. For the atomic bomb survivors, the physical dosimetry system is well established. The classical measurement errors plaguing the location and shielding inputs to the physical dosimetry system are well known. Adjusting for the associated biases requires an estimate for the classical measurement error variance, for which no data-driven estimate exists. In this case, an instrumental variable solution is the most viable option to overcome the classical measurement error indeterminacy. Biological indicators of dose may serve as instrumental variables. Specification of the biodosimeter dose-response model requires identification of the radiosensitivity variables, for which we develop statistical definitions and variables. More recently, researchers have recognized Berkson error in the dose estimates, introduced by averaging assumptions for many components in the physical dosimetry system. We show that Berkson error induces a bias in the instrumental variable estimate of the dose-response coefficient, and then address the estimation problem. This model is specified by developing an instrumental variable mixed measurement error likelihood function, which is then maximized using a Monte Carlo EM Algorithm. These methods produce dose estimates that incorporate information from both physical and biological indicators of dose, as well as the first instrumental variable based data-driven estimate for the classical measurement error variance.
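
    The classical-versus-Berkson distinction in the abstract can be demonstrated with a toy linear dose-response simulation, using ordinary least squares rather than the paper's instrumental-variable setting: classical error attenuates the estimated slope, while Berkson-type grouped doses leave it nearly unbiased. All numbers are illustrative.

        import numpy as np

        rng = np.random.default_rng(4)
        n, beta = 20000, 1.0                       # sample size and true slope

        x = rng.gamma(2.0, 1.0, n)                 # true doses
        y = beta * x + rng.normal(0, 0.5, n)       # linear dose-response

        def slope(w, yv):
            """Ordinary least-squares slope of yv on w."""
            wc = w - w.mean()
            return (wc @ (yv - yv.mean())) / (wc @ wc)

        # Classical error: observed dose = true dose + noise -> attenuation
        w_classical = x + rng.normal(0, 1.0, n)

        # Berkson error: subjects are assigned their dose-group mean, so
        # true dose = assigned dose + mean-zero deviation
        edges = np.quantile(x, np.linspace(0, 1, 11))
        group = np.clip(np.digitize(x, edges[1:-1]), 0, 9)
        means = np.array([x[group == g].mean() for g in range(10)])
        w_berkson = means[group]

        print(f"true slope           : {beta:.3f}")
        print(f"classical error slope: {slope(w_classical, y):.3f}  (attenuated)")
        print(f"Berkson error slope  : {slope(w_berkson, y):.3f}  (nearly unbiased)")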

  14. Error assessment in recombinant baculovirus titration: evaluation of different methods.

    Science.gov (United States)

    Roldão, António; Oliveira, Rui; Carrondo, Manuel J T; Alves, Paula M

    2009-07-01

    The success of the baculovirus/insect cells system in heterologous protein expression depends on the robustness and efficiency of the production workflow. It is essential that process parameters are controlled and include as little variability as possible. The multiplicity of infection (MOI) is the most critical factor, since irreproducible MOIs caused by inaccurate estimation of viral titers hinder batch consistency and process optimization. This lack of accuracy is related to intrinsic characteristics of the method, such as the inability to distinguish between infectious and non-infectious baculovirus. In this study, several methods for baculovirus titration were compared. The most critical issues identified were the incubation time and the cell concentration at the time of infection. These variables strongly influence the accuracy of titers and must be defined for optimal performance of the titration method. Although the standard errors of the methods varied significantly (7-36%), titers were within the same order of magnitude; thus, viral titers can be considered independent of the method of titration. A cost analysis of the baculovirus titration methods used in this study showed that the alamarBlue, real-time Q-PCR and plaque assays were the most expensive techniques; the remaining methods cost on average 75% less. Based on the cost, time and error analysis undertaken in this study, the end-point dilution assay, the microculture tetrazolium assay and the flow cytometric assay were found to be the techniques that best combine these three factors. Nevertheless, it is always recommended to confirm the accuracy of the titration either by comparison with a well-characterized baculovirus reference stock or by titration using two different methods and verification of the variability of results.

  15. Method for producing zeolites and zeotypes

    DEFF Research Database (Denmark)

    2015-01-01

    The invention relates to a method for producing zeolite, zeolite-like or zeotype particles comprising the steps of: 1) Adding one or more metal precursors to a silica or alumina source; 2) Reducing the one or more metal precursors to form metal nanoparticles on the surface of the silica or alumina source; 3) Passing a gaseous hydrocarbon, alkyl alcohol or alkyl ether over the silica or alumina supported metal nanoparticles to form a carbon template coated zeolite, zeolite-like or zeotype precursor composition; 4a) Adding a structure directing agent to the carbon template coated zeolite, zeolite-like or zeotype precursor composition thereby creating a zeolite, zeolite-like or zeotype gel composition; 4b) Crystallising the zeolite, zeolite-like or zeotype gel composition by subjecting said composition to a hydrothermal treatment; 5) Removing the carbon template and structure directing agent and isolating...

  16. Method of producing pitch (distillation residue)

    Energy Technology Data Exchange (ETDEWEB)

    Stepanenko, M.A.; Belkina, T.V.; Krysin, V.P.

    1979-08-15

    A method is proposed for producing pitch by mixing hard coal pitch with an anthracene fraction and thermally treating the mixture. The method is distinguished in that, in order to increase the quality of the pitch, the anthracene fraction is subjected to thermal treatment at 250-300° for 10-13 hours in the presence of air. This duration of heat treatment allows one to build up in the anthracene fraction up to 20-24% of material which is insoluble in toluene, without the formation of products which are insoluble in quinoline. The fraction prepared in this manner is added to the initial pitch in a ratio of 1:2 up to 1:9, and the mixture is subjected to heat treatment at a temperature of 360-380° and an air consumption of 7-9 l/(kg·h) until a pitch with a softening temperature of 85-90° is produced. As the initial raw material, a pitch was used with a softening temperature of 60°, a content of substances insoluble in quinoline of 2.0%, insoluble in toluene of 20.6%, and a coking residue of 49.2%. Example: 80 grams of anthracene fraction is added to 320 grams of pitch. The anthracene fraction is previously subjected to heat treatment at 300° for 13 hours in the presence of air supplied at 9 liters per hour. As a result of the heat treatment, the content of materials insoluble in toluene in the anthracene fraction is 24.0%, and in quinoline 0.1%. The ratio of pitch to thermally treated anthracene fraction in the mixture was 4:1. The resulting mixture was subjected to heat treatment at 360° for 1.5 hours with an air supply of 7 l/(kg·h). A pitch is produced with the following characteristics: softening temperature 88°, content of substances insoluble in toluene 32.5%, in quinoline 6.0%, coking residue 56.7%. The invention can be used in the coke-chemical and petrochemical industry.

  17. Residual-based Methods for Controlling Discretization Error in CFD

    Science.gov (United States)

    2015-08-24

    (Abstract not indexed. The record text consists of OCR fragments of a quadrature formula, equation (25), in which the integration weights are obtained from the determinant of the Jacobian J of the coordinate transformation, together with reference-list fragments: Layton, W., Lee, H.K., and Peterson, J. (2002), "A Defect-Correction Method for the Incompressible Navier-Stokes Equations," Applied Mathematics and Computation, Vol. 129, pp. 1-19; Lee, D. and Tsuei, Y.M. (1992), "A Formula for Estimation of Truncation Errors of Convective Terms in a...")

  18. Methods for producing nanoparticles using palladium salt and uses thereof

    Science.gov (United States)

    Chan, Siu-Wai; Liang, Hongying

    2015-12-01

    The disclosed subject matter is directed to a method for producing nanoparticles, as well as the nanoparticles produced by this method. In one embodiment, the nanoparticles produced by the disclosed method have a high defect density.

  19. Total error components - isolation of laboratory variation from method performance

    International Nuclear Information System (INIS)

    Bottrell, D.; Bleyler, R.; Fisk, J.; Hiatt, M.

    1992-01-01

    The consideration of total error across sampling and analytical components of environmental measurements is relatively recent. The U.S. Environmental Protection Agency (EPA), through the Contract Laboratory Program (CLP), provides complete analyses and documented reports on approximately 70,000 samples per year. The quality assurance (QA) functions of the CLP procedures provide an ideal database, the CLP Automated Results Data Base (CARD), to evaluate program performance relative to quality control (QC) criteria and to evaluate the analysis of blind samples. Repetitive analyses of blind samples within each participating laboratory provide a mechanism to separate laboratory and method performance. Isolation of error sources is necessary to identify effective options to establish performance expectations and to improve procedures. In addition, optimized method performance is necessary to identify significant effects that result from the selection among alternative procedures in the data collection process (e.g., sampling device, storage container, mode of sample transit, etc.). This information is necessary to evaluate data quality, to understand overall quality, and to provide appropriate, cost-effective information required to support a specific decision.

  20. The commission errors search and assessment (CESA) method

    Energy Technology Data Exchange (ETDEWEB)

    Reer, B.; Dang, V. N

    2007-05-15

    Errors of Commission (EOCs) refer to the performance of inappropriate actions that aggravate a situation. In Probabilistic Safety Assessment (PSA) terms, they are human failure events that result from the performance of an action. This report presents the Commission Errors Search and Assessment (CESA) method and describes the method in the form of user guidance. The purpose of the method is to identify risk-significant situations with a potential for EOCs in a predictive analysis. The main idea underlying the CESA method is to catalog the key actions that are required in the procedural response to plant events and to identify specific scenarios in which these candidate actions could erroneously appear to be required. The catalog of required actions provides a basis for a systematic search of context-action combinations. To focus the search towards risk-significant scenarios, the actions that are examined in the CESA search are prioritized according to the importance of the systems and functions that are affected by these actions. The existing PSA provides this importance information; the Risk Achievement Worth or Risk Increase Factor values indicate the systems/functions for which an EOC contribution would be more significant. In addition, the contexts, i.e. PSA scenarios, for which the EOC opportunities are reviewed are also prioritized according to their importance (top sequences or cut sets). The search through these context-action combinations results in a set of EOC situations to be examined in detail. CESA has been applied in a plant-specific pilot study, which showed the method to be feasible and effective in identifying plausible EOC opportunities. This experience, as well as the experience with other EOC analyses, showed that the quantification of EOCs remains an issue. The quantification difficulties and the outlook for their resolution conclude the report. (author)

  1. The commission errors search and assessment (CESA) method

    International Nuclear Information System (INIS)

    Reer, B.; Dang, V. N.

    2007-05-01

    Errors of Commission (EOCs) refer to the performance of inappropriate actions that aggravate a situation. In Probabilistic Safety Assessment (PSA) terms, they are human failure events that result from the performance of an action. This report presents the Commission Errors Search and Assessment (CESA) method and describes the method in the form of user guidance. The purpose of the method is to identify risk-significant situations with a potential for EOCs in a predictive analysis. The main idea underlying the CESA method is to catalog the key actions that are required in the procedural response to plant events and to identify specific scenarios in which these candidate actions could erroneously appear to be required. The catalog of required actions provides a basis for a systematic search of context-action combinations. To focus the search towards risk-significant scenarios, the actions that are examined in the CESA search are prioritized according to the importance of the systems and functions that are affected by these actions. The existing PSA provides this importance information; the Risk Achievement Worth or Risk Increase Factor values indicate the systems/functions for which an EOC contribution would be more significant. In addition, the contexts, i.e. PSA scenarios, for which the EOC opportunities are reviewed are also prioritized according to their importance (top sequences or cut sets). The search through these context-action combinations results in a set of EOC situations to be examined in detail. CESA has been applied in a plant-specific pilot study, which showed the method to be feasible and effective in identifying plausible EOC opportunities. This experience, as well as the experience with other EOC analyses, showed that the quantification of EOCs remains an issue. The quantification difficulties and the outlook for their resolution conclude the report. (author)

  2. Error Analysis of Brailled Instructional Materials Produced by Public School Personnel in Texas

    Science.gov (United States)

    Herzberg, Tina

    2010-01-01

    In this study, a detailed error analysis was performed to determine if patterns of errors existed in braille transcriptions. The most frequently occurring errors were the insertion of letters or words that were not contained in the original print material; the incorrect usage of the emphasis indicator; and the incorrect formatting of titles,…

  3. A method for producing lower olefins

    Energy Technology Data Exchange (ETDEWEB)

    Lemayev, N.V.; Grigorovich, V.A.; Isayev, V.A.; Liakumovich, A.G.; Mitrofanov, A.I.; Orekhov, A.I.; Trifonov, S.V.; Vernov, P.A.

    1983-01-01

    In the known method for producing lower olefins by pyrolysis of a hydrocarbon raw material in the presence of an initiator containing ammonia, in order to increase the output of the target products, morpholine or piperidine is additionally introduced into the initiator in an amount of 0.00001 to 0.1 percent each, based on the raw material. The added compounds may be introduced into the pyrolysis zone by dissolving them in the hydrocarbon raw material or in the water whose vapour dilutes the raw material being pyrolyzed. The increase in the outputs of the lower olefins through the use of the additives may be explained by the synergistic effect of the mixture of ammonia, morpholine and piperidine used. In benzine pyrolysis without the additives, the output of ethylene is 24.1 percent; under comparable conditions with additives of ammonia alone, morpholine alone, or piperidine alone, the outputs are 24.0, 26.2 and 25.8 percent, respectively. With the joint presence of ammonia and piperidine the output of ethylene reaches 27.2 percent, and with the addition of ammonia and morpholine it reaches 27.4 percent.

  4. Method for producing ceramic particles and agglomerates

    Science.gov (United States)

    Phillips, Jonathan; Gleiman, Seth S.; Chen, Chun-Ku

    2001-01-01

    A method for generating spherical and irregularly shaped dense particles of ceramic oxides having a controlled particle size and particle size distribution. An aerosol containing precursor particles of oxide ceramics is directed into a plasma. As the particles flow through the hot zone of the plasma, they melt, collide, and join to form larger particles. If these larger particles remain in the hot zone, they continue melting and acquire a spherical shape that is retained after they exit the hot zone, cool down, and solidify. If they exit the hot zone before melting completely, their irregular shape persists and agglomerates are produced. The size and size distribution of the dense product particles can be controlled by adjusting several parameters, the most important in the case of powder precursors appears to be the density of powder in the aerosol stream that enters the plasma hot zone. This suggests that particle collision rate is responsible for determining ultimate size of the resulting sphere or agglomerate. Other parameters, particularly the gas flow rates and the microwave power, are also adjusted to control the particle size distribution.

  5. Using the CAHR-method to derive cognitive error mechanisms

    International Nuclear Information System (INIS)

    Straeter, Oliver

    2000-01-01

    This paper describes an application of the second-generation method CAHR (Connectionism Assessment of Human Reliability; Straeter, 1997) that was developed at the Technical University of Munich and the GRS in the years 1992 to 1998. The method makes it possible to combine event analysis and assessment, and therefore to base human reliability assessment on past experience. The term 'connectionism' was coined for models of human cognition based on artificial intelligence; it describes methods that represent complex interrelations of various parameters (known from pattern recognition, expert systems, and the modeling of cognition). The paper demonstrates the application of the method to communication aspects in NPPs (Nuclear Power Plants) and gives some outlooks on further developments. The application of the method to the problem of communication failures is explained, including initial work on communication within the low-power and shutdown study for Boiling Water Reactors (BWRs), the investigation of communication failures, the importance of procedural and verbal communication for different error types, and the causes of failures in procedural and verbal communication. (S.Y.)

  6. Incremental Volumetric Remapping Method: Analysis and Error Evaluation

    International Nuclear Information System (INIS)

    Baptista, A. J.; Oliveira, M. C.; Rodrigues, D. M.; Menezes, L. F.; Alves, J. L.

    2007-01-01

    In this paper the error associated with the remapping problem is analyzed. A range of numerical results that assess the performance of three different remapping strategies, applied to FE meshes typically used in sheet metal forming simulation, are evaluated. One of the selected strategies is the previously presented Incremental Volumetric Remapping method (IVR), which was implemented in the in-house code DD3TRIM. The IVR method is founded on the premise that the state variables at all points associated with a Gauss volume of a given element are equal to the state variable quantities placed at the corresponding Gauss point. Hence, given a typical remapping procedure between a donor and a target mesh, the variables to be associated with a target Gauss volume (and point) are determined by a weighted average. The weight function is the percentage of the Gauss volume of each donor element that is located inside the target Gauss volume. The calculation of the intersecting volumes between the donor and target Gauss volumes is attained incrementally, for each target Gauss volume, by means of a discrete approach. The other two remapping strategies selected are based on the interpolation/extrapolation of variables by using the finite element shape functions or moving least squares interpolants. The performance of the three remapping strategies is addressed with two tests. The first remapping test was taken from the literature; it consists in remapping successively a rotating symmetrical mesh, throughout N increments, over an angular span of 90 deg. The second remapping error evaluation test consists of remapping an irregular element shape target mesh from a given regular element shape donor mesh and proceeding with the inverse operation. In this second test the computational effort is also measured. The results showed that the error level associated with IVR can be very low, with a stable evolution along the number of remapping procedures, when compared with the

  7. A Fast Soft Bit Error Rate Estimation Method

    Directory of Open Access Journals (Sweden)

    Ait-Idir Tarik

    2010-01-01

    We have suggested in a previous publication a method to estimate the Bit Error Rate (BER) of a digital communications system instead of using the well-known Monte Carlo (MC) simulation. This method was based on the estimation of the probability density function (pdf) of the soft observed samples. The kernel method was used for the pdf estimation. In this paper, we suggest using a Gaussian Mixture (GM) model. The Expectation-Maximisation algorithm is used to estimate the parameters of this mixture. The optimal number of Gaussians is computed by using mutual information theory. The analytical expression of the BER is then simply given by using the estimated parameters of the Gaussian Mixture. Simulation results are presented to compare the three mentioned methods: Monte Carlo, Kernel and Gaussian Mixture. We analyze the performance of the proposed BER estimator in the framework of a multiuser code division multiple access system and show that attractive performance is achieved compared with conventional MC or Kernel aided techniques. The results show that the GM method can drastically reduce the number of samples needed to estimate the BER, and hence the required simulation run-time, even at very low BER.
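
    The mechanics of the GM-based estimator can be sketched as follows, as a hedged toy: BPSK soft outputs with mildly non-Gaussian noise, and a fixed two-component mixture instead of the mutual-information model selection used in the paper. The BER is then evaluated analytically as a weighted sum of Gaussian tails below the decision threshold.

        import numpy as np
        from scipy.stats import norm
        from sklearn.mixture import GaussianMixture

        rng = np.random.default_rng(5)

        # Soft receiver samples for transmitted bit "1" (BPSK, +1) with a
        # slightly non-Gaussian noise mixture standing in for interference.
        n = 20000
        noise = np.where(rng.random(n) < 0.8,
                         rng.normal(0, 0.6, n), rng.normal(0, 1.4, n))
        samples = 1.0 + noise

        # Fit a Gaussian mixture to the soft outputs
        gm = GaussianMixture(n_components=2, random_state=0)
        gm.fit(samples.reshape(-1, 1))

        # Analytical BER: mass of the fitted pdf below the threshold 0,
        # i.e. a weighted sum of Gaussian tail probabilities.
        w = gm.weights_
        mu = gm.means_.ravel()
        sd = np.sqrt(gm.covariances_.ravel())
        ber_gm = np.sum(w * norm.cdf((0.0 - mu) / sd))

        print(f"GM-based BER: {ber_gm:.5f}   Monte Carlo: {np.mean(samples < 0):.5f}")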

  8. A method for producing liquid paraffin

    Energy Technology Data Exchange (ETDEWEB)

    Dorodnova, V.S.; Martynenko, A.G.; Pereverzev, A.N.

    1983-01-01

    In the known method for producing liquid paraffins by processing an oil fraction with crystalline carbamide in the presence of a solvent, with subsequent removal of the formed carbamide-paraffin complex from the deparaffinized product, staged washing, and decomposition of the complex with isolation of the liquid paraffins, in order to increase the output of liquid paraffin and to improve its quality, β,β'-dichlorethyl ether (chlorex) mixed with methyl ethyl ketone or methyl isobutyl ketone in a 1:1 ratio is used as the solvent; the treatment with crystalline carbamide is conducted with the addition of 180 to 260 percent of solvent relative to the raw material, and the complex is washed with the solvent in the first stage and with methyl ethyl ketone or methyl isobutyl ketone in the second stage. The crystalline carbamide for complex formation is taken at 60 to 70 percent relative to the raw material, maintaining a raw material : solvent : carbamide ratio of about 1 : 1.8-2.6 : 0.6. The temperature in the complex-formation zone is maintained at 5 to 35 degrees. The presence of β,β'-dichlorethyl ether, which has high selectivity towards aromatic hydrocarbons and resinous compounds, sharply reduces the adsorption of undesired components on the surface of the granules of the complex and the crystalline carbamide, and reduces the share of alkylaromatic hydrocarbons extracted into the complex, which leads to a substantial improvement in the quality of the obtained liquid paraffins.

  9. Benchmark test cases for evaluation of computer-based methods for detection of setup errors: realistic digitally reconstructed electronic portal images with known setup errors

    International Nuclear Information System (INIS)

    Fritsch, Daniel S.; Raghavan, Suraj; Boxwala, Aziz; Earnhart, Jon; Tracton, Gregg; Cullip, Timothy; Chaney, Edward L.

    1997-01-01

    Purpose: The purpose of this investigation was to develop methods and software for computing realistic digitally reconstructed electronic portal images with known setup errors for use as benchmark test cases for evaluation and intercomparison of computer-based methods for image matching and detecting setup errors in electronic portal images. Methods and Materials: An existing software tool for computing digitally reconstructed radiographs was modified to compute simulated megavoltage images. An interface was added to allow the user to specify which setup parameter(s) will contain computer-induced random and systematic errors in a reference beam created during virtual simulation. Other software features include options for adding random and structured noise, Gaussian blurring to simulate geometric unsharpness, histogram matching with a 'typical' electronic portal image, specifying individual preferences for the appearance of the 'gold standard' image, and specifying the number of images generated. The visible male computed tomography data set from the National Library of Medicine was used as the planning image. Results: Digitally reconstructed electronic portal images with known setup errors have been generated and used to evaluate our methods for automatic image matching and error detection. Any number of different sets of test cases can be generated to investigate setup errors involving selected setup parameters and anatomic volumes. This approach has proved to be invaluable for determination of error detection sensitivity under ideal (rigid body) conditions and for guiding further development of image matching and error detection methods. Example images have been successfully exported for similar use at other sites. Conclusions: Because absolute truth is known, digitally reconstructed electronic portal images with known setup errors are well suited for evaluation of computer-aided image matching and error detection methods. High-quality planning images, such as
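
    In the same spirit, a minimal sketch of generating a test case with a known setup error; a synthetic pattern stands in for a true digitally reconstructed portal image, and the function and parameter names are hypothetical:

        import numpy as np
        from scipy.ndimage import shift, gaussian_filter

        # Stand-in for a digitally reconstructed portal image: a bright
        # field with anatomy-like structure on a 256x256 grid.
        yy, xx = np.mgrid[0:256, 0:256]
        base = ((xx > 40) & (xx < 216) & (yy > 40) & (yy < 216)).astype(float)
        reference = gaussian_filter(base * (1.0 + 0.3 * np.sin(xx / 15.0)), 2.0)

        def make_test_case(image, sigma_px=3.0, noise=0.02, blur=1.5, seed=None):
            """Create a test image with a known random in-plane setup error.
            Returns the shifted image and the ground-truth (dy, dx) in pixels."""
            r = np.random.default_rng(seed)
            dy, dx = r.normal(0.0, sigma_px, 2)           # known setup error
            moved = shift(image, (dy, dx), order=1, mode="nearest")
            moved = gaussian_filter(moved, blur)          # geometric unsharpness
            moved = moved + r.normal(0.0, noise, moved.shape)  # detector noise
            return moved, (dy, dx)

        test, truth = make_test_case(reference, seed=42)
        print("ground-truth setup error (pixels):", np.round(truth, 2))
        # An image-matching algorithm can now be scored against `truth`.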

  10. A method for producing light olefines

    Energy Technology Data Exchange (ETDEWEB)

    Kavada, N.; Katsuno, K.

    1982-11-04

    A method is proposed for producing light olefins from MeOH in the presence of a catalyst, a crystalline silicate which includes silicon, an alkali and/or alkaline-earth metal, titanium(4+) and phosphorus(5+), whose composition is described by the formula p(0.9 ± 0.3)M2/mO · pZ4/nO2 · SiO2, where M is the alkali or alkaline-earth metal, Z is titanium(4+) or phosphorus(5+), m is the valency of the metal, n is the valency of Z, and 0 < p ≤ 0.1. A high selectivity of MeOH conversion to C2-C4 olefins is achieved in the presence of the catalyst. Silicon powder, silica gel, colloidal silicon, liquid glass or silicates of alkali metals in a ratio of SiO2 to M2O of 1 to 5 is used as the source of the first component. Hydroxides or silicates of potassium and sodium (best sodium) and nitrates or chlorides of alkaline-earth metals (best calcium) are used as the source of the second component. Water-soluble compounds of titanium(4+) (best Ti(SO4)2, TiBr4 and TiI4) and phosphorus(5+) (best H3PO4, Na3PO4) are used as the source of the third component. Heterocyclic compounds (best morpholine, oxazolidine and their derivatives), taken in a molar ratio of crystallization agent to SiO2 of 0.01 to 50 (best 0.1 to 10), are used as the crystallization agent. The catalyst is prepared by heating the mixture of the three components, water and the crystallization agent in an autoclave at 80 to 300 degrees (best 120 to 200 degrees) at atmospheric pressure for 10 to 50 hours with mixing. The crystalline product formed is cooled, decanted, washed with water, dried for several hours at a temperature of at least 100 degrees, and calcined in air for 2 to 48 hours at 300 to 700 degrees.

  11. Errors of the backextrapolation method in determination of the blood volume

    Science.gov (United States)

    Schröder, T.; Rösler, U.; Frerichs, I.; Hahn, G.; Ennker, J.; Hellige, G.

    1999-01-01

    Backextrapolation is an empirical method to calculate the central volume of distribution (for example, the blood volume). It is based on the compartment model, which assumes that after an injection the substance is distributed instantaneously in the central volume with no time delay; the occurrence of recirculation is not taken into account. The change of concentration with time of indocyanine green (ICG) was observed in an in vitro model in which the volume recirculated in 60 s and the clearance of the ICG could be varied. It was found that the higher the elimination of ICG, the higher was the error of the backextrapolation method. The theoretical consideration of Schröder et al ( Biomed. Tech. 42 (1997) 7-11) was confirmed. If the injected substance is eliminated somewhere in the body (i.e. not by radioactive decay), the backextrapolation method produces large errors.

  12. On Round-off Error for Adaptive Finite Element Methods

    KAUST Repository

    Alvarez-Aramberri, J.

    2012-06-02

    Round-off error analysis has historically been studied by analyzing the condition number of the associated matrix. By controlling the size of the condition number, it is possible to guarantee a prescribed round-off error tolerance. However, the opposite is not true, since it is possible to have a system of linear equations with an arbitrarily large condition number that still delivers a small round-off error. In this paper, we perform a round-off error analysis in the context of 1D and 2D hp-adaptive Finite Element simulations for the case of the Poisson equation. We conclude that boundary conditions play a fundamental role in the round-off error analysis, especially for the so-called 'radical meshes'. Moreover, we illustrate the importance of the right-hand side when analyzing the round-off error, which is independent of the condition number of the matrix.

  13. On Round-off Error for Adaptive Finite Element Methods

    KAUST Repository

    Alvarez-Aramberri, J.; Pardo, David; Paszynski, Maciej; Collier, Nathan; Dalcin, Lisandro; Calo, Victor M.

    2012-01-01

    Round-off error analysis has historically been studied by analyzing the condition number of the associated matrix. By controlling the size of the condition number, it is possible to guarantee a prescribed round-off error tolerance. However, the opposite is not true, since it is possible to have a system of linear equations with an arbitrarily large condition number that still delivers a small round-off error. In this paper, we perform a round-off error analysis in the context of 1D and 2D hp-adaptive Finite Element simulations for the case of the Poisson equation. We conclude that boundary conditions play a fundamental role in the round-off error analysis, especially for the so-called 'radical meshes'. Moreover, we illustrate the importance of the right-hand side when analyzing the round-off error, which is independent of the condition number of the matrix.
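
    The central observation, that a huge condition number does not by itself force a large round-off error and that the right-hand side matters, can be demonstrated in a few lines (an illustrative demo, unrelated to the paper's hp-FEM setting):

        import numpy as np

        # A diagonal system with an enormous condition number (~1e12)...
        n = 8
        d = np.logspace(0, 12, n)
        A = np.diag(d)

        # ...still solved essentially exactly: the right-hand side is
        # compatible with the scaling, so no cancellation occurs.
        x_true = np.ones(n)
        x = np.linalg.solve(A, A @ x_true)
        print("cond(A) =", np.linalg.cond(A))
        print("max error:", np.abs(x - x_true).max())

        # A modest 2x2 system with a comparable condition number and a
        # generic right-hand side shows the loss the bound warns about.
        B = np.array([[1.0, 1.0], [1.0, 1.0 + 1e-12]])
        b = np.array([2.0, 2.0 + 1e-12])      # exact solution: [1, 1]
        print("cond(B) =", np.linalg.cond(B), " computed:", np.linalg.solve(B, b))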

  14. Method of producing thermally stable uranium carbonitrides

    International Nuclear Information System (INIS)

    Ugajin, M.; Takahashi, I.

    1975-01-01

    A thermally stable uranium carbonitride can be produced by adding tungsten and/or molybdenum in the amount of 0.2 wt percent or more, preferably 0.5 wt percent or more, to a pure uranium carbonitride. (U.S.)

  15. Method of producing a peptide mixture

    DEFF Research Database (Denmark)

    2000-01-01

    The present invention relates to a method for industrial production of a peptide preparation having specific specifications by hydrolysis of a protein material, preferably based on whey. The method comprises several steps, which make it easy to control the method so as to obtain a product which, e.g. because of low mineral content, is well suited for peritoneal dialysis and parenteral feeding. The method gives a high yield.

  16. Method For Producing Mechanically Flexible Silicon Substrate

    KAUST Repository

    Hussain, Muhammad Mustafa

    2014-08-28

    A method for making a mechanically flexible silicon substrate is disclosed. In one embodiment, the method includes providing a silicon substrate. The method further includes forming a first etch stop layer in the silicon substrate and forming a second etch stop layer in the silicon substrate. The method also includes forming one or more trenches over the first etch stop layer and the second etch stop layer. The method further includes removing the silicon substrate between the first etch stop layer and the second etch stop layer.

  17. Method For Producing Mechanically Flexible Silicon Substrate

    KAUST Repository

    Hussain, Muhammad Mustafa; Rojas, Jhonathan Prieto

    2014-01-01

    A method for making a mechanically flexible silicon substrate is disclosed. In one embodiment, the method includes providing a silicon substrate. The method further includes forming a first etch stop layer in the silicon substrate and forming a second etch stop layer in the silicon substrate. The method also includes forming one or more trenches over the first etch stop layer and the second etch stop layer. The method further includes removing the silicon substrate between the first etch stop layer and the second etch stop layer.

  18. Findings from analysing and quantifying human error using current methods

    International Nuclear Information System (INIS)

    Dang, V.N.; Reer, B.

    1999-01-01

    In human reliability analysis (HRA), the scarcity of data means that, at best, judgement must be applied to transfer to the domain of the analysis what data are available for similar tasks. In particular, for the quantification of tasks involving decisions, the analyst has to choose among quantification approaches that all depend to a significant degree on expert judgement. The use of expert judgement can be made more reliable by eliciting relative judgements rather than absolute judgements. These approaches, which are based on multiple-criteria decision theory, focus on ranking the tasks to be analysed by difficulty. While these approaches remedy at least partially the poor performance of experts in the estimation of probabilities, they nevertheless require the calibration of the relative scale on which the actions are ranked in order to obtain the probabilities of interest, as sketched below. This paper presents some results from a comparison of current HRA methods performed in the framework of a study of SLIM calibration options. The HRA quantification methods THERP, HEART, and INTENT were applied to derive calibration human error probabilities for two groups of operator actions. (author)
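
    For context, the calibration step mentioned above can be sketched for SLIM, which assumes log10(HEP) = a * SLI + b; the tasks, SLI values and anchor probabilities below are entirely hypothetical:

        import numpy as np

        # Hypothetical Success Likelihood Index values from relative expert
        # judgements (higher SLI = easier task).
        sli = {"isolate leak": 0.35, "trip pump": 0.80, "switch trains": 0.55}

        # Two anchor tasks with known HEPs calibrate the SLIM relation
        #   log10(HEP) = a * SLI + b
        (s1, p1), (s2, p2) = (0.10, 1e-1), (0.90, 1e-4)
        a = (np.log10(p1) - np.log10(p2)) / (s1 - s2)
        b = np.log10(p1) - a * s1

        for task, s in sli.items():
            print(f"{task:14s} SLI={s:.2f} -> HEP ~ {10 ** (a * s + b):.1e}")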

  19. Method to produce a neutron shielding

    International Nuclear Information System (INIS)

    Merkle, H.J.

    1978-01-01

    The neutron shielding for armoured vehicles consists of preshaped plastic plates which are coated on the armoured vehicle walls by conversion of the thermoplast. Suitable plastics or thermoplasts are PVC, PVC acetate, or mixtures of these, into which more than 50% B, B 4 C, or BN is embedded. The colour of the shielding may be determined by the choice of the neutron absorber, e.g. a white colour for BN. The plates are produced using an extruder or calender. (DG) [de

  20. Method of producing spherical lithium aluminate particles

    International Nuclear Information System (INIS)

    Yang, L.; Medico, R.R.; Baugh, W.A.

    1983-01-01

    Spherical particles of lithium aluminate are formed by initially producing aluminium hydroxide spheroids, and immersing the spheroids in a lithium ion-containing solution to infuse lithium ions into the spheroids. The lithium-infused spheroids are rinsed to remove excess lithium ion from the surface, and the rinsed spheroids are soaked for a period of time in a liquid medium, dried and sintered to form lithium aluminate spherical particles. (author)

  1. Method of producing radioactive carbon powder

    International Nuclear Information System (INIS)

    Imamura, Y.

    1980-01-01

    Carbon powder, placed in a hermetically closed apparatus under vacuum together with radium ore, adsorbs radon gas emanating from the radium ore thus producing a radioactive carbonaceous material, the radioactivity of which is due to the presence of adsorbed radon. The radioactive carbon powder thus obtained has excellent therapeutical efficacy and is suitable for a variety of applications because of the mild radioactivity of radon. Radium ore permits substantially limitlessly repeated production of the radioactive carbon powder

  2. Data Analysis & Statistical Methods for Command File Errors

    Science.gov (United States)

    Meshkat, Leila; Waggoner, Bruce; Bryant, Larry

    2014-01-01

    This paper explains current work on modeling for managing the risk of command file errors. It is focused on analyzing actual data from a JPL spaceflight mission to build models for evaluating and predicting error rates as a function of several key variables. We constructed a rich dataset by considering the number of errors and the number of files radiated, including the number of commands and blocks in each file, as well as subjective estimates of workload and operational novelty. We assessed these data using different curve fitting and distribution fitting techniques, such as multiple regression analysis and maximum likelihood estimation, to see how much of the variability in the error rates can be explained by them. We also used goodness-of-fit testing strategies and principal component analysis to further assess our data. Finally, we constructed a model of expected error rates based on what these statistics identified as critical drivers of the error rate. This model allows project management to evaluate the error rate against a theoretically expected rate as well as anticipate future error rates.
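
    As a rough illustration of the kind of model the abstract describes, the sketch below fits a multiple regression of error counts on files radiated, workload, and novelty. All variable names and numbers are invented stand-ins, not the JPL dataset.

```python
import numpy as np

# Hypothetical monthly data: files radiated, mean workload score, novelty
# score, and observed command file errors. Values are illustrative only.
files = np.array([120, 95, 150, 80, 200, 130], dtype=float)
workload = np.array([3.2, 2.8, 4.1, 2.5, 4.8, 3.6])
novelty = np.array([1.0, 0.5, 2.0, 0.3, 2.5, 1.2])
errors = np.array([4, 2, 7, 1, 10, 5], dtype=float)

# Design matrix with an intercept column; fit error counts by ordinary
# least squares (the paper also explores MLE and distribution fitting).
X = np.column_stack([np.ones_like(files), files, workload, novelty])
coef, *_ = np.linalg.lstsq(X, errors, rcond=None)

predicted = X @ coef
print("coefficients:", coef)
print("residuals:", errors - predicted)
```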

  3. Nanophase materials produced by physical methods

    International Nuclear Information System (INIS)

    Noda, Shoji

    1992-01-01

    A nanophase material is mainly characterized by its component size and large interface area. Some nanophase materials are briefly described. Ion implantation and oblique vapor deposition are taken as methods to provide nanophase materials, and their features are described. These physical methods are non-equilibrium material processes, and they are demonstrated to provide unique nanophase materials with little thermodynamic restriction. (author)

  4. Radon measurements-discussion of error estimates for selected methods

    International Nuclear Information System (INIS)

    Zhukovsky, Michael; Onischenko, Alexandra; Bastrikov, Vladislav

    2010-01-01

    The main sources of uncertainties for grab sampling, short-term (charcoal canisters) and long-term (track detectors) measurements are: systematic bias of the reference equipment; random Poisson and non-Poisson errors during calibration; and random Poisson and non-Poisson errors during measurements. The origins of non-Poisson random errors during calibration differ for different kinds of instrumental measurements. The main sources of uncertainties for retrospective measurements conducted by surface-trap techniques can be divided into two groups: errors of surface ²¹⁰Pb (²¹⁰Po) activity measurements, and uncertainties of the transfer from ²¹⁰Pb surface activity in glass objects to the average radon concentration during the object's exposure. It is shown that the total measurement error of the surface-trap retrospective technique can be decreased to 35%.
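
    A minimal sketch of how such an error budget combines, assuming the listed sources are independent so their relative uncertainties add in quadrature; the component values below are illustrative, chosen only to produce a total near the quoted 35%.

```python
import numpy as np

# Illustrative relative uncertainties (1-sigma, as fractions) for a
# surface-trap retrospective radon measurement; the values are assumptions.
components = {
    "reference equipment bias": 0.10,
    "calibration (Poisson)": 0.08,
    "calibration (non-Poisson)": 0.12,
    "Pb-210 activity measurement": 0.20,
    "activity-to-concentration transfer": 0.22,
}

# For independent error sources, relative uncertainties combine in quadrature.
total = np.sqrt(sum(u**2 for u in components.values()))
print(f"combined relative uncertainty: {total:.0%}")  # ~35% with these inputs
```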

  5. Factors influencing superimposition error of 3D cephalometric landmarks by plane orientation method using 4 reference points: 4 point superimposition error regression model.

    Science.gov (United States)

    Hwang, Jae Joon; Kim, Kee-Deog; Park, Hyok; Park, Chang Seo; Jeong, Ho-Gul

    2014-01-01

    Superimposition has been used as a method to evaluate the changes of orthodontic or orthopedic treatment in the dental field. With the introduction of cone beam CT (CBCT), evaluating three-dimensional changes after treatment became possible by superimposition. Four-point plane orientation is one of the simplest ways to achieve superimposition of three-dimensional images. To find the factors influencing the superimposition error of cephalometric landmarks by the 4-point plane orientation method, and to evaluate the reproducibility of cephalometric landmarks for analyzing superimposition error, 20 patients were analyzed who had a normal skeletal and occlusal relationship and underwent CBCT for diagnosis of temporomandibular disorder. The nasion, sella turcica, basion and the midpoint between the left and right most posterior points of the lesser wing of the sphenoid bone were used to define a three-dimensional (3D) anatomical reference co-ordinate system. Another 15 reference cephalometric points were also determined three times in the same image. The reorientation error of each landmark could be explained substantially (23%) by a linear regression model consisting of 3 factors describing the position of each landmark relative to the reference axes and the locating error. The 4-point plane orientation system may produce an amount of reorientation error that varies according to the perpendicular distance between the landmark and the x-axis; the reorientation error also increases as the locating error and the shift of the reference axes viewed from each landmark increase. Therefore, in order to reduce the reorientation error, the accuracy of all landmarks, including the reference points, is important. Construction of the regression model using reference points of greater precision is required for the clinical application of this model.

  6. Method of producing radioactive technetium-99M

    International Nuclear Information System (INIS)

    Karageozian, H.L.

    1979-01-01

    A chromatographic process for producing high-purity, high-yield radioactive Technetium-99m is described. A solution containing Molybdenum-99 and Technetium-99m is placed on a chromatographic column and eluted with a neutral solvent system comprising an organic solvent and from about 0.1 to less than about 10% of water, or from about 1 to less than about 70% of a solvent selected from the group consisting of aliphatic alcohols having 1 to 6 carbon atoms. The eluted solvent system containing the Technetium-99m is then removed, leaving the Technetium-99m as a dry, particulate residue.

  7. Statistical error estimation of the Feynman-α method using the bootstrap method

    International Nuclear Information System (INIS)

    Endo, Tomohiro; Yamamoto, Akio; Yagi, Takahiro; Pyeon, Cheol Ho

    2016-01-01

    The applicability of the bootstrap method is investigated to estimate the statistical error of the Feynman-α method, which is one of the subcritical measurement techniques based on reactor noise analysis. In the Feynman-α method, the statistical error can be simply estimated from multiple measurements of reactor noise; however, this requires additional measurement time for the repeated measurements. Using a resampling technique called the 'bootstrap method', the standard deviation and confidence interval of measurement results obtained by the Feynman-α method can be estimated as the statistical error using only a single measurement of reactor noise. In order to validate our proposed technique, we carried out a passive measurement of reactor noise without any external source, i.e. with only the inherent neutron source from spontaneous fission and (α,n) reactions in nuclear fuels, at the Kyoto University Criticality Assembly. Through the actual measurement, it is confirmed that the bootstrap method is applicable for approximately estimating the statistical error of measurement results obtained by the Feynman-α method. (author)
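
    The sketch below shows the resampling idea on synthetic per-gate counts; the Feynman Y statistic and the data are simplified stand-ins for an actual reactor noise measurement.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in data: per-gate neutron counts from a single noise measurement.
# A gamma-mixed Poisson gives the over-dispersion (correlated counts) that
# the Feynman-alpha statistic Y = Var/Mean - 1 is designed to detect.
counts = rng.poisson(rng.gamma(shape=25.0, scale=2.0, size=2000))

def feynman_y(c):
    return c.var(ddof=1) / c.mean() - 1.0

# Bootstrap: resample the gates with replacement and recompute the statistic,
# estimating its spread from the single measured sequence.
boot = np.array([feynman_y(rng.choice(counts, size=counts.size, replace=True))
                 for _ in range(1000)])

print("Y =", feynman_y(counts))
print("bootstrap standard error =", boot.std(ddof=1))
print("95% confidence interval =", np.percentile(boot, [2.5, 97.5]))
```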

  8. A method for producing a hydrocarbon resin

    Energy Technology Data Exchange (ETDEWEB)

    Tsachev, A B; Andonov, K S; Igliyev, S P

    1980-11-25

    Coal tar (rock coal resin), for instance with a relative density of 1,150 to 1,190 kilograms per cubic metre, containing 8 to 10 percent naphthalene, 1.5 to 2.8 percent phenol and 6 to 15 percent substances insoluble in toluene, or its mixture with rock coal or oil resin fractions, is subjected to distillation in a pipe furnace with two evaporators and a distillation tower, with a temperature of 320 to 360 degrees in the second stage and 290 to 340 degrees in the pitch compartment. A hydrocarbon resin with a high carbon content is produced, intended especially for the production of resin-dolomite refractory materials, as well as fuel mixtures for the blast-furnace and open-hearth industry.

  9. Answering Contextually Demanding Questions: Pragmatic Errors Produced by Children with Asperger Syndrome or High-Functioning Autism

    Science.gov (United States)

    Loukusa, Soile; Leinonen, Eeva; Jussila, Katja; Mattila, Marja-Leena; Ryder, Nuala; Ebeling, Hanna; Moilanen, Irma

    2007-01-01

    This study examined irrelevant/incorrect answers produced by children with Asperger syndrome or high-functioning autism (7-9-year-olds and 10-12-year-olds) and normally developing children (7-9-year-olds). The errors produced were divided into three types: in Type 1, the child answered the original question incorrectly, in Type 2, the child gave a…

  10. The regression-calibration method for fitting generalized linear models with additive measurement error

    OpenAIRE

    James W. Hardin; Henrik Schmeidiche; Raymond J. Carroll

    2003-01-01

    This paper discusses and illustrates the method of regression calibration. This is a straightforward technique for fitting models with additive measurement error. We present this discussion in terms of generalized linear models (GLMs) following the notation defined in Hardin and Carroll (2003). Discussion will include specified measurement error, measurement error estimated by replicate error-prone proxies, and measurement error estimated by instrumental variables. The discussion focuses on s...
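
    A minimal sketch of regression calibration under the classical additive-error model, assuming the measurement error variance is known (in practice it is estimated from replicates or instruments, as the paper discusses); all data below are simulated.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500

# Simulated truth: covariate X, error-prone proxy W = X + U, outcome Y.
x = rng.normal(0.0, 1.0, n)
w = x + rng.normal(0.0, 0.5, n)          # additive measurement error, var_u = 0.25
y = 1.0 + 2.0 * x + rng.normal(0.0, 1.0, n)

# Regression calibration: replace W by E[X|W]. With normal X and U this is
# linear shrinkage toward the mean, governed by the reliability ratio.
var_u = 0.25
lam = (w.var(ddof=1) - var_u) / w.var(ddof=1)
x_hat = w.mean() + lam * (w - w.mean())

# Fit the outcome model on the calibrated covariate.
X = np.column_stack([np.ones(n), x_hat])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print("slope (naive, attenuated):", np.polyfit(w, y, 1)[0])
print("slope (regression calibration):", beta[1])
```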

  11. Improved Methods of Producing and Administering Extracellular Vesicles | Poster

    Science.gov (United States)

    An efficient method of producing purified extracellular vesicles (EVs), in conjunction with a method that blocks liver macrophages from clearing EVs from the body, has produced promising results for the use of EVs in cancer therapy.

  12. Method of producing thin cellulose nitrate film

    International Nuclear Information System (INIS)

    Lupica, S.B.

    1975-01-01

    An improved method for forming a thin nitrocellulose film of reproducible thickness is described. The film is a cellulose nitrate film, 10 to 20 microns in thickness, cast from a solution of cellulose nitrate in tetrahydrofuran, said solution containing from 7 to 15 percent, by weight, of dioctyl phthalate, said cellulose nitrate having a nitrogen content of from 10 to 13 percent

  13. Stable, fertile, high polyhydroxyalkanoate producing plants and methods of producing them

    Energy Technology Data Exchange (ETDEWEB)

    Bohmert-Tatarev, Karen; McAvoy, Susan; Peoples, Oliver P.; Snell, Kristi D.

    2015-08-04

    Transgenic plants that produce high levels of polyhydroxybutyrate (PHB) and methods of producing them are provided. In a preferred embodiment the transgenic plants are produced using plastid transformation technologies and utilize genes which are codon-optimized. Stably transformed plants able to produce greater than 10% dwt PHB in tissues are also provided.

  14. Method and apparatus for producing tomographic images

    International Nuclear Information System (INIS)

    Annis, M.

    1989-01-01

    A device useful in producing a tomographic image of a selected slice of an object to be examined is described, comprising: a source of penetrating radiation; sweep means for forming energy from the source into a pencil beam and repeatedly sweeping the pencil beam over a line in space to define a sweep plane; first means for supporting an object to be examined so that the pencil beam intersects the object along a path passing through the object and the selected slice; line collimating means for filtering radiation scattered by the object, the line collimating means having a field of view which intersects the sweep plane in a bounded line so that the line collimating means passes only radiation scattered by elementary volumes of the object lying along the bounded line, the line collimating means including a plurality of channels, each substantially planar in form, to collectively define the field of view, the channels oriented so that the pencil beam sweeps along the bounded line as a function of time; and radiation detector means responsive to radiation passed by the line collimating means.

  15. Method to produce catalytically active nanocomposite coatings

    Science.gov (United States)

    Erdemir, Ali; Eryilmaz, Osman Levent; Urgen, Mustafa; Kazmanli, Kursat

    2016-02-09

    A nanocomposite coating and method of making and using the coating. The nanocomposite coating is disposed on a base material, such as a metal or ceramic; and the nanocomposite consists essentially of a matrix of an alloy selected from the group of Cu, Ni, Pd, Pt and Re which are catalytically active for cracking of carbon bonds in oils and greases and a grain structure selected from the group of borides, carbides and nitrides.

  16. Method to produce catalytically active nanocomposite coatings

    Energy Technology Data Exchange (ETDEWEB)

    Erdemir, Ali; Eryilmaz, Osman Levent; Urgen, Mustafa; Kazmanli, Kursat

    2017-12-19

    A nanocomposite coating and method of making and using the coating. The nanocomposite coating is disposed on a base material, such as a metal or ceramic; and the nanocomposite consists essentially of a matrix of an alloy selected from the group of Cu, Ni, Pd, Pt and Re which are catalytically active for cracking of carbon bonds in oils and greases and a grain structure selected from the group of borides, carbides and nitrides.

  17. Radiation sources and methods for producing them

    International Nuclear Information System (INIS)

    Malson, H.A.; Moyer, S.E.; Honious, H.B.; Janzow, E.F.

    1979-01-01

    The radiation sources contain a substrate with an electrically conducting, non-radioactive metal surface, onto which a layer of a metal isotope of the scandium group, together with a proportion of non-radioactive binding metal, is coated by means of an electroplating method. Besides examples of β sources (¹⁴⁷Pm), γ sources (²⁴¹Am), and neutron sources (²⁵²Cf), an α-radiation source (²⁴¹Am, ²⁴⁴Cm, ²³⁸Pu) for smoke detectors is described. Extensive tables and a bibliography are given. (DG) [de

  18. An Analysis and Quantification Method of Human Errors of Soft Controls in Advanced MCRs

    International Nuclear Information System (INIS)

    Lee, Seung Jun; Kim, Jae Whan; Jang, Seung Cheol

    2011-01-01

    In this work, a method is proposed for quantifying human errors that may occur during operation execution using soft controls. Soft controls of advanced main control rooms (MCRs) have totally different features from conventional controls, and thus they may have different human error modes and occurrence probabilities. It is important to define the human error modes and to quantify the error probabilities in order to evaluate the reliability of the system and prevent errors. This work suggests a modified K-HRA method for quantifying the error probability.

  19. Methods of Run-Time Error Detection in Distributed Process Control Software

    DEFF Research Database (Denmark)

    Drejer, N.

    In this thesis, methods of run-time error detection in application software for distributed process control are designed. The error detection is based upon a monitoring approach in which application software is monitored by system software during the entire execution. The thesis includes definition of generic run-time error types, design of methods of observing application software behavior during execution, and design of methods of evaluating run-time constraints. In the definition of error types it is attempted to cover all relevant aspects of the application software behavior. Methods of observation and constraint evaluation are designed for the most interesting error types. These include: a) semantical errors in data communicated between application tasks; b) errors in the execution of application tasks; and c) errors in the timing of distributed events emitted by the application software. The design...

  20. METHOD FOR PRODUCING CEMENTED CARBIDE ARTICLES

    Science.gov (United States)

    Onstott, E.I.; Cremer, G.D.

    1959-07-14

    A method is described for making molded materials of intricate shape where the materials consist of mixtures of one or more hard metal carbides or oxides and matrix metals or binder metals thereof. In one embodiment of the invention 90% of finely comminuted tungsten carbide powder together with finely comminuted cobalt bonding agent is incorporated at 60 deg C into a slurry with methyl alcohol containing 1.5% paraffin, 3% camphor, 3.5% naphthalene, and 1.8% toluene. The compact is formed by the steps of placing the slurry in a mold at least one surface of which is porous to the fluid organic system, compacting the slurry, removing a portion of the mold from contact with the formed object and heating the formed object to remove the remaining organic matter and to sinter the compact.

  1. System and method for producing metallic iron

    Science.gov (United States)

    Englund, David J.; Schlichting, Mark; Meehan, John; Crouch, Jeremiah; Wilson, Logan

    2014-07-29

    A method of production of metallic iron nodules comprises assembling a hearth furnace having a moveable hearth comprising refractory material and having a conversion zone and a fusion zone, providing a hearth material layer comprising carbonaceous material on the refractory material, providing a layer of reducible material comprising an iron-bearing material arranged in discrete portions over at least a portion of the hearth material layer, delivering oxygen gas into the hearth furnace at a ratio of at least 0.8:1 pounds of oxygen to pounds of iron in the reducible material to heat the conversion zone to a temperature sufficient to at least partially reduce the reducible material and to heat the fusion zone to a temperature sufficient to at least partially fuse the reducible material, and heating the reducible material to form one or more metallic iron nodules and slag.

  2. Method for producing polycrystalline boron nitride

    International Nuclear Information System (INIS)

    Alexeevskii, V.P.; Bochko, A.V.; Dzhamarov, S.S.; Karpinos, D.M.; Karyuk, G.G.; Kolomiets, I.P.; Kurdyumov, A.V.; Pivovarov, M.S.; Frantsevich, I.N.; Yarosh, V.V.

    1975-01-01

    A mixture containing less than 50 percent of graphite-like boron nitride treated by a shock wave and highly defective wurtzite-like boron nitride obtained by a shock-wave method is compressed and heated at pressure and temperature values corresponding to the region of the phase diagram for boron nitride defined by the graphite-like compact modifications of boron nitride equilibrium line and the cubic wurtzite-like boron nitride equilibrium line. The resulting crystals of boron nitride exhibit a structure of wurtzite-like boron nitride or of both wurtzite-like and cubic boron nitride. The resulting material exhibits higher plasticity as compared with polycrystalline cubic boron nitride. Tools made of this compact polycrystalline material have a longer service life under impact loads in machining hardened steel and chilled iron. (U.S.)

  3. Method and apparatus for producing synthesis gas

    Science.gov (United States)

    Hemmings, John William; Bonnell, Leo; Robinson, Earl T.

    2010-03-03

    A method and apparatus for reacting a hydrocarbon containing feed stream by steam methane reforming reactions to form a synthesis gas. The hydrocarbon containing feed is reacted within a reactor having stages in which the final stage from which a synthesis gas is discharged incorporates expensive high temperature materials such as oxide dispersed strengthened metals while upstream stages operate at a lower temperature allowing the use of more conventional high temperature alloys. Each of the reactor stages incorporate reactor elements having one or more separation zones to separate oxygen from an oxygen containing feed to support combustion of a fuel within adjacent combustion zones, thereby to generate heat to support the endothermic steam methane reforming reactions.

  4. M/T method based incremental encoder velocity measurement error analysis and self-adaptive error elimination algorithm

    DEFF Research Database (Denmark)

    Chen, Yangyang; Yang, Ming; Long, Jiang

    2017-01-01

    For motor control applications, the speed-loop performance largely depends on the accuracy of the speed feedback signal. The M/T method, due to its high theoretical accuracy, is the most widely used in incremental-encoder-based speed measurement. However, the inherent encoder optical grating error…
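
    For reference, a sketch of the basic M/T speed computation, in which M1 encoder pulses and M2 reference clock pulses are counted over a gate bracketed by encoder edges; the encoder resolution and clock frequency below are assumptions.

```python
# Minimal M/T speed computation, assuming an encoder with P pulses per
# revolution and a reference clock of f_clk Hz. M1 is the number of encoder
# pulses and M2 the number of clock pulses counted between the encoder edges
# that bracket the measurement gate.
def mt_speed_rpm(m1: int, m2: int, pulses_per_rev: int, f_clk: float) -> float:
    elapsed = m2 / f_clk                 # true gate time in seconds
    revolutions = m1 / pulses_per_rev
    return 60.0 * revolutions / elapsed

# Example with illustrative numbers: 2500-line encoder, 10 MHz clock.
print(mt_speed_rpm(m1=4200, m2=1_000_000, pulses_per_rev=2500, f_clk=10e6))
```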

  5. Propagation of internal errors in explicit Runge–Kutta methods and internal stability of SSP and extrapolation methods

    KAUST Repository

    Ketcheson, David I.

    2014-04-11

    In practical computation with Runge--Kutta methods, the stage equations are not satisfied exactly, due to roundoff errors, algebraic solver errors, and so forth. We show by example that propagation of such errors within a single step can have catastrophic effects for otherwise practical and well-known methods. We perform a general analysis of internal error propagation, emphasizing that it depends significantly on how the method is implemented. We show that for a fixed method, essentially any set of internal stability polynomials can be obtained by modifying the implementation details. We provide bounds on the internal error amplification constants for some classes of methods with many stages, including strong stability preserving methods and extrapolation methods. These results are used to prove error bounds in the presence of roundoff or other internal errors.

  6. Diagnosis of Cognitive Errors by Statistical Pattern Recognition Methods.

    Science.gov (United States)

    Tatsuoka, Kikumi K.; Tatsuoka, Maurice M.

    The rule space model permits measurement of cognitive skill acquisition, diagnosis of cognitive errors, and detection of the strengths and weaknesses of knowledge possessed by individuals. Two ways to classify an individual into his or her most plausible latent state of knowledge include: (1) hypothesis testing--Bayes' decision rules for minimum…

  7. Systems and methods for producing electrical discharges in compositions

    KAUST Repository

    Cha, Min; Zhang, Xuming; Chung, Suk-Ho

    2015-01-01

    Systems and methods configured to produce electrical discharges in compositions, such as those, for example, configured to produce electrical discharges in compositions that comprise mixtures of materials, such as a mixture of a material having a high dielectric constant and a material having a low dielectric constant.

  8. A Method of Calculating Motion Error in a Linear Motion Bearing Stage

    Directory of Open Access Journals (Sweden)

    Gyungho Khim

    2015-01-01

    We report a method of calculating the motion error of a linear motion bearing stage. The transfer function method, which exploits reaction forces of individual bearings, is effective for estimating motion errors; however, it requires the rail-form errors. This is not suitable for a linear motion bearing stage because obtaining the rail-form errors is not straightforward. In the method described here, we use the straightness errors of a bearing block to calculate the reaction forces on the bearing block. The reaction forces were compared with those of the transfer function method. Parallelism errors between two rails were considered, and the motion errors of the linear motion bearing stage were measured and compared with the results of the calculations, revealing good agreement.

  9. A Method of Calculating Motion Error in a Linear Motion Bearing Stage

    Science.gov (United States)

    Khim, Gyungho; Park, Chun Hong; Oh, Jeong Seok

    2015-01-01

    We report a method of calculating the motion error of a linear motion bearing stage. The transfer function method, which exploits reaction forces of individual bearings, is effective for estimating motion errors; however, it requires the rail-form errors. This is not suitable for a linear motion bearing stage because obtaining the rail-form errors is not straightforward. In the method described here, we use the straightness errors of a bearing block to calculate the reaction forces on the bearing block. The reaction forces were compared with those of the transfer function method. Parallelism errors between two rails were considered, and the motion errors of the linear motion bearing stage were measured and compared with the results of the calculations, revealing good agreement. PMID:25705715

  10. Estimation of subcriticality of TCA using 'indirect estimation method for calculation error'

    International Nuclear Information System (INIS)

    Naito, Yoshitaka; Yamamoto, Toshihiro; Arakawa, Takuya; Sakurai, Kiyoshi

    1996-01-01

    To estimate the subcriticality of the neutron multiplication factor in a fissile system, an 'Indirect Estimation Method for Calculation Error' is proposed. This method obtains the calculational error of the neutron multiplication factor by correlating measured values with the corresponding calculated ones. The method was applied to the source multiplication and pulsed neutron experiments conducted at TCA, and the calculation error of MCNP 4A was estimated. In the source multiplication method, the deviation of the measured neutron count rate distributions from the calculated ones estimates the accuracy of the calculated k-eff. In the pulsed neutron method, the calculation errors of the prompt neutron decay constants give the accuracy of the calculated k-eff. (author)

  11. Research on Electronic Transformer Data Synchronization Based on Interpolation Methods and Their Error Analysis

    Directory of Open Access Journals (Sweden)

    Pang Fubin

    2015-09-01

    In this paper the origin of the data synchronization problem is analyzed first, and then three common interpolation methods are introduced to solve it. Allowing for the most general situation, the paper divides the interpolation error into harmonic and transient components, and the error expression of each method is derived and analyzed. Besides, the interpolation errors of the linear, quadratic and cubic methods are computed at different sampling rates, harmonic orders and transient components. Further, the interpolation accuracy and computational load of each method are compared. The research results provide theoretical guidance for selecting the interpolation method in data synchronization applications for electronic transformers.
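
    The comparison the abstract describes can be mimicked as below, resampling a fundamental-plus-harmonic waveform with linear, quadratic and cubic interpolation; the signal, sampling rate and harmonic content are illustrative assumptions.

```python
import numpy as np
from scipy.interpolate import interp1d

# 50 Hz fundamental plus a 5th harmonic, sampled at 1.6 kHz (illustrative).
f0, fs = 50.0, 1600.0
t = np.arange(0, 0.2, 1 / fs)
sig = np.sin(2 * np.pi * f0 * t) + 0.1 * np.sin(2 * np.pi * 5 * f0 * t)

# Resample at new (synchronization) instants and compare against the truth.
t_new = np.arange(t[1], t[-2], 1 / (4 * fs))
truth = np.sin(2 * np.pi * f0 * t_new) + 0.1 * np.sin(2 * np.pi * 5 * f0 * t_new)

for kind in ("linear", "quadratic", "cubic"):
    est = interp1d(t, sig, kind=kind)(t_new)
    print(kind, "max abs interpolation error:", np.abs(est - truth).max())
```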

  12. Statistical method for quality control in presence of measurement errors

    International Nuclear Information System (INIS)

    Lauer-Peccoud, M.R.

    1998-01-01

    In a quality inspection of a set of items where the measurements of the values of a quality characteristic are contaminated by random errors, wrong decisions can be taken which are damaging to quality. It is therefore important to control the risks in such a way that a final quality level is ensured. We consider that an item is defective or not according to whether the value G of its quality characteristic is larger or smaller than a given level g. We assume that, due to the limited precision of the measurement instrument, the measurement M of this characteristic is expressed by f(G) + ξ, where f is an increasing function such that the value f(g₀) is known and ξ is a random error with mean zero and given variance. First we study the problem of determining a critical measure m such that a specified quality target is reached after the classification of a lot of items, where each item is accepted or rejected depending on whether its measurement is smaller or greater than m. Then we analyse the problem of testing the global quality of a lot from the measurements for a sample of items taken from the lot. For these two kinds of problems and for different quality targets, we propose solutions, emphasizing the case where the function f is linear and the error ξ and the variable G are Gaussian. Simulation results allow the efficiency of the different control procedures considered to be appreciated, as well as their robustness with respect to deviations from the assumptions used in the theoretical derivations. (author)
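
    A Monte Carlo sketch of the misclassification risks in the Gaussian case described above (with f taken as the identity); all distribution parameters and the thresholds g and m are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

# An item is defective when G > g; we observe M = G + xi and reject when
# M > m. Parameters below are illustrative assumptions.
g, m = 1.0, 0.9
G = rng.normal(0.0, 1.0, 1_000_000)        # quality characteristic
M = G + rng.normal(0.0, 0.3, G.size)       # measurement with random error

false_accept = np.mean((G > g) & (M <= m))   # defective but accepted
false_reject = np.mean((G <= g) & (M > m))   # good but rejected
print(f"false accept rate: {false_accept:.4f}")
print(f"false reject rate: {false_reject:.4f}")
```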

  13. A low error reconstruction method for confocal holography to determine 3-dimensional properties

    Energy Technology Data Exchange (ETDEWEB)

    Jacquemin, P.B., E-mail: pbjacque@nps.edu [Mechanical Engineering, University of Victoria, EOW 548,800 Finnerty Road, Victoria, BC (Canada); Herring, R.A. [Mechanical Engineering, University of Victoria, EOW 548,800 Finnerty Road, Victoria, BC (Canada)

    2012-06-15

    A confocal holography microscope developed at the University of Victoria uniquely combines holography with a scanning confocal microscope to non-intrusively measure fluid temperatures in three-dimensions (Herring, 1997), (Abe and Iwasaki, 1999), (Jacquemin et al., 2005). The Confocal Scanning Laser Holography (CSLH) microscope was built and tested to verify the concept of 3D temperature reconstruction from scanned holograms. The CSLH microscope used a focused laser to non-intrusively probe a heated fluid specimen. The focused beam probed the specimen instead of a collimated beam in order to obtain different phase-shift data for each scan position. A collimated beam produced the same information for scanning along the optical propagation z-axis. No rotational scanning mechanisms were used in the CSLH microscope which restricted the scan angle to the cone angle of the probe beam. Limited viewing angle scanning from a single view point window produced a challenge for tomographic 3D reconstruction. The reconstruction matrices were either singular or ill-conditioned making reconstruction with significant error or impossible. Establishing boundary conditions with a particular scanning geometry resulted in a method of reconstruction with low error referred to as 'wily'. The wily reconstruction method can be applied to microscopy situations requiring 3D imaging where there is a single viewpoint window, a probe beam with high numerical aperture, and specified boundary conditions for the specimen. The issues and progress of the wily algorithm for the CSLH microscope are reported herein. -- Highlights: ► Evaluation of an optical confocal holography device to measure 3D temperature of a heated fluid. ► Processing of multiple holograms containing the cumulative refractive index through the fluid. ► Reconstruction issues due to restricting angular scanning to the numerical aperture of the

  14. A low error reconstruction method for confocal holography to determine 3-dimensional properties

    International Nuclear Information System (INIS)

    Jacquemin, P.B.; Herring, R.A.

    2012-01-01

    A confocal holography microscope developed at the University of Victoria uniquely combines holography with a scanning confocal microscope to non-intrusively measure fluid temperatures in three-dimensions (Herring, 1997), (Abe and Iwasaki, 1999), (Jacquemin et al., 2005). The Confocal Scanning Laser Holography (CSLH) microscope was built and tested to verify the concept of 3D temperature reconstruction from scanned holograms. The CSLH microscope used a focused laser to non-intrusively probe a heated fluid specimen. The focused beam probed the specimen instead of a collimated beam in order to obtain different phase-shift data for each scan position. A collimated beam produced the same information for scanning along the optical propagation z-axis. No rotational scanning mechanisms were used in the CSLH microscope which restricted the scan angle to the cone angle of the probe beam. Limited viewing angle scanning from a single view point window produced a challenge for tomographic 3D reconstruction. The reconstruction matrices were either singular or ill-conditioned making reconstruction with significant error or impossible. Establishing boundary conditions with a particular scanning geometry resulted in a method of reconstruction with low error referred to as “wily”. The wily reconstruction method can be applied to microscopy situations requiring 3D imaging where there is a single viewpoint window, a probe beam with high numerical aperture, and specified boundary conditions for the specimen. The issues and progress of the wily algorithm for the CSLH microscope are reported herein. -- Highlights: ► Evaluation of an optical confocal holography device to measure 3D temperature of a heated fluid. ► Processing of multiple holograms containing the cumulative refractive index through the fluid. ► Reconstruction issues due to restricting angular scanning to the numerical aperture of the beam. ► Minimizing tomographic reconstruction error by defining boundary

  15. Probabilistic performance estimators for computational chemistry methods: The empirical cumulative distribution function of absolute errors

    Science.gov (United States)

    Pernot, Pascal; Savin, Andreas

    2018-06-01

    Benchmarking studies in computational chemistry use reference datasets to assess the accuracy of a method through error statistics. The commonly used error statistics, such as the mean signed and mean unsigned errors, do not inform end-users on the expected amplitude of prediction errors attached to these methods. We show that, the distributions of model errors being neither normal nor zero-centered, these error statistics cannot be used to infer prediction error probabilities. To overcome this limitation, we advocate for the use of more informative statistics, based on the empirical cumulative distribution function of unsigned errors, namely, (1) the probability for a new calculation to have an absolute error below a chosen threshold and (2) the maximal amplitude of errors one can expect with a chosen high confidence level. Those statistics are also shown to be well suited for benchmarking and ranking studies. Moreover, the standard error on all benchmarking statistics depends on the size of the reference dataset. Systematic publication of these standard errors would be very helpful to assess the statistical reliability of benchmarking conclusions.
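
    A minimal sketch of the two advocated statistics computed from the empirical distribution of absolute errors; the synthetic error sample below merely imitates a skewed, non-zero-centered distribution.

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative model errors: skewed and not zero-centered, as the paper
# argues is typical for computational chemistry benchmarks.
errors = rng.normal(0.5, 1.0, 400) + rng.exponential(0.5, 400)
abs_err = np.abs(errors)

threshold = 1.0
p_below = np.mean(abs_err < threshold)   # (1) P(|error| < threshold)
q95 = np.quantile(abs_err, 0.95)         # (2) error amplitude at 95% confidence
print(f"P(|error| < {threshold}) = {p_below:.2f}")
print(f"95%-confidence error amplitude = {q95:.2f}")
```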

  16. Methods of Run-Time Error Detection in Distributed Process Control Software

    DEFF Research Database (Denmark)

    Drejer, N.

    In this thesis, methods of run-time error detection in application software for distributed process control are designed. The error detection is based upon a monitoring approach in which application software is monitored by system software during the entire execution. The thesis includes definition of generic run-time error types, design of methods of observing application software behavior during execution, and design of methods of evaluating run-time constraints. Methods of observation and constraint evaluation are designed for the most interesting error types. These include: a) semantical errors in data communicated between application tasks; b) errors in the execution of application tasks; and c) errors in the timing of distributed events emitted by the application software. The design of error detection methods includes a high-level software specification. This has the purpose of illustrating that the design can be used in practice.

  17. Band extension in digital methods of transfer function determination – signal conditioners asymmetry error corrections

    Directory of Open Access Journals (Sweden)

    Zbigniew Staroszczyk

    2014-12-01

    In the paper, a calibrating method for error correction in transfer function determination with the use of DSP is proposed. The correction limits or eliminates the influence of the transfer function input/output signal conditioners on the transfer functions estimated for the investigated object. The method exploits a frequency-domain conditioning-path descriptor found during a training observation made on a known reference object. Keywords: transfer function, band extension, error correction, phase errors.

  18. A channel-by-channel method of reducing the errors associated with peak area integration

    International Nuclear Information System (INIS)

    Luedeke, T.P.; Tripard, G.E.

    1996-01-01

    A new method of reducing the errors associated with peak area integration has been developed. This method utilizes the signal content of each channel as an estimate of the overall peak area. These individual estimates can then be weighted according to the precision with which each estimate is known, producing an overall area estimate. Experimental measurements were performed on a small peak sitting on a large background, and the results compared to those obtained from a commercial software program. Results showed a marked decrease in the spread of results around the true value (obtained by counting for a long period of time), and a reduction in the statistical uncertainty associated with the peak area. (orig.)
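
    A sketch of the channel-by-channel idea under simplifying assumptions: each channel yields its own area estimate from a known peak-shape fraction, and the estimates are combined with inverse-variance weights (background variance neglected for brevity). All numbers are illustrative.

```python
import numpy as np

# Each channel i gives an area estimate a_i = (c_i - b_i) / f_i, where c_i
# are gross counts, b_i the background and f_i the fraction of the (assumed
# known) peak shape falling in channel i.
counts = np.array([52., 210., 640., 820., 605., 230., 48.])
background = np.full_like(counts, 40.0)
shape = np.array([0.01, 0.12, 0.30, 0.38, 0.28, 0.11, 0.01])
shape = shape / shape.sum()

est = (counts - background) / shape      # per-channel area estimates
var = counts / shape**2                  # propagated Poisson variance
w = 1.0 / var                            # weight by precision of each estimate

area = np.sum(w * est) / np.sum(w)
sigma = np.sqrt(1.0 / np.sum(w))
print(f"area = {area:.0f} +/- {sigma:.0f} counts")
```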

  19. Cognitive strategies: a method to reduce diagnostic errors in ER

    Directory of Open Access Journals (Sweden)

    Carolina Prevaldi

    2009-02-01

    I wonder why sometimes we are able to rapidly recognize patterns of disease presentation, formulate a speedy diagnostic closure, and go on with a treatment plan, while at other times we proceed by studying our patient in depth in an analytic, slow and rational way of decision making. Why can decisions sometimes be intuitive, while sometimes we have to proceed in a rigorous way? What is the "background noise" and the "signal-to-noise ratio" of presenting symptoms? What is the risk in premature labeling or "closure" of a patient? When is the "cook-book" approach useful in clinical decision making? "The Emergency Department is a natural laboratory for the study of error", stated one author. Many studies have focused on the occurrence of errors in medicine, and in hospital practice, but the ED, with its unique operating characteristics, seems to be a uniquely error-prone environment. That is why it is useful to understand the underlying patterns of thinking that can lead us to misdiagnosis. General knowledge of thought processes gives the physician awareness and the ability to apply different techniques in clinical decision making and to recognize and avoid pitfalls.

  20. A method of producing a body comprising porous alpha silicon carbide and the body produced by the method

    DEFF Research Database (Denmark)

    2017-01-01

    The present invention relates to a method of producing a porous alpha-SiC-containing shaped body, and to the porous alpha-SiC-containing shaped body produced by that method. The porous alpha-SiC-containing shaped body shows a characteristic microstructure providing a high degree of mechanical stability...

  1. The Connection between Teaching Methods and Attribution Errors

    Science.gov (United States)

    Wieman, Carl; Welsh, Ashley

    2016-01-01

    We collected data at a large, very selective public university on what math and science instructors felt was the biggest barrier to their students' learning. We also determined the extent of each instructor's use of research-based effective teaching methods. Instructors using fewer effective methods were more likely to say the greatest barrier to…

  2. Linearly convergent stochastic heavy ball method for minimizing generalization error

    KAUST Repository

    Loizou, Nicolas; Richtarik, Peter

    2017-01-01

    In this work we establish the first linear convergence result for the stochastic heavy ball method. The method performs SGD steps with a fixed stepsize, amended by a heavy ball momentum term. In the analysis, we focus on minimizing the expected loss and not on finite-sum minimization, which is typically a much harder problem.

  3. Energy dependent mesh adaptivity of discontinuous isogeometric discrete ordinate methods with dual weighted residual error estimators

    Science.gov (United States)

    Owens, A. R.; Kópházi, J.; Welch, J. A.; Eaton, M. D.

    2017-04-01

    In this paper a hanging-node, discontinuous Galerkin, isogeometric discretisation of the multigroup, discrete ordinates (SN) equations is presented in which each energy group has its own mesh. The equations are discretised using Non-Uniform Rational B-Splines (NURBS), which allows the coarsest mesh to exactly represent the geometry for a wide range of engineering problems of interest; this would not be the case using straight-sided finite elements. Information is transferred between meshes via the construction of a supermesh. This is a non-trivial task for two arbitrary meshes, but is significantly simplified here by deriving every mesh from a common coarsest initial mesh. In order to take full advantage of this flexible discretisation, goal-based error estimators are derived for the multigroup, discrete ordinates equations with both fixed (extraneous) and fission sources, and these estimators are used to drive an adaptive mesh refinement (AMR) procedure. The method is applied to a variety of test cases for both fixed and fission source problems. The error estimators are found to be extremely accurate for linear NURBS discretisations, with degraded performance for quadratic discretisations owing to a reduction in relative accuracy of the "exact" adjoint solution required to calculate the estimators. Nevertheless, the method seems to produce optimal meshes in the AMR process for both linear and quadratic discretisations, and is ≈100× more accurate than uniform refinement for the same amount of computational effort for a 67 group deep penetration shielding problem.

  4. Methods of producing cermet materials and methods of utilizing same

    Science.gov (United States)

    Kong, Peter C [Idaho Falls, ID

    2008-12-30

    Methods of fabricating cermet materials having intermetallic and ceramic phases, and methods of utilizing the same, such as in filtering particulate and gaseous pollutants from internal combustion engines. The cermet material may be made from a transition metal aluminide phase and an alumina phase. The mixture may be pressed to form a green compact body and then heated in a nitrogen-containing atmosphere so as to melt aluminum particles and form the cermet. Filler materials may be added to increase the porosity or tailor the catalytic properties of the cermet material. Additionally, the cermet material may be reinforced with fibers or screens. The cermet material may also be formed so as to pass an electrical current therethrough to heat the material during use.

  5. Mixed Methods Analysis of Medical Error Event Reports: A Report from the ASIPS Collaborative

    National Research Council Canada - National Science Library

    Harris, Daniel M; Westfall, John M; Fernald, Douglas H; Duclos, Christine W; West, David R; Niebauer, Linda; Marr, Linda; Quintela, Javan; Main, Deborah S

    2005-01-01

    .... This paper presents a mixed methods approach to analyzing narrative error event reports. Mixed methods studies integrate one or more qualitative and quantitative techniques for data collection and analysis...

  6. Dynamic Error Analysis Method for Vibration Shape Reconstruction of Smart FBG Plate Structure

    Directory of Open Access Journals (Sweden)

    Hesheng Zhang

    2016-01-01

    Shape reconstruction of an aerospace plate structure is an important issue for the safe operation of aerospace vehicles. One way to achieve such reconstruction is by constructing a smart fiber Bragg grating (FBG) plate structure with discretely distributed FBG sensor arrays and using reconstruction algorithms, in which error analysis of the reconstruction algorithm is a key link. Considering that traditional error analysis methods can only deal with static data, a new dynamic-data error analysis method is proposed, based on the LMS algorithm, for shape reconstruction of smart FBG plate structures. Firstly, the smart FBG structure and the orthogonal-curved-network-based reconstruction method are introduced. Then, a dynamic error analysis model is proposed for dynamic reconstruction error analysis. Thirdly, parameter identification is performed for the proposed dynamic error analysis model based on the least mean square (LMS) algorithm. Finally, an experimental verification platform is constructed and experimental dynamic reconstruction analysis is done. Experimental results show that the dynamic characteristics of the reconstruction performance for the plate structure can be obtained accurately with the proposed dynamic error analysis method. The proposed method can also be used in other data acquisition and data processing systems as a general error analysis method.
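
    As a sketch of the LMS-based identification step, the code below adapts the weights of an assumed linear dynamic error model from streaming data; the model order, step size and signals are illustrative, not the paper's FBG setup.

```python
import numpy as np

rng = np.random.default_rng(4)

# LMS identification of an (assumed) linear dynamic error model
# d[k] = w_true . x[k] + noise, from streaming input x and measured error d.
order, mu, n = 4, 0.01, 5000
w_true = np.array([0.8, -0.4, 0.2, 0.05])   # "unknown" model to recover
x = rng.normal(size=n)

w = np.zeros(order)
for k in range(order, n):
    xk = x[k - order:k][::-1]               # most recent samples first
    d = w_true @ xk + 0.01 * rng.normal()   # measured dynamic error
    e = d - w @ xk                          # a-priori estimation error
    w += mu * e * xk                        # LMS weight update

print("identified weights:", np.round(w, 3))
```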

  7. Linearly convergent stochastic heavy ball method for minimizing generalization error

    KAUST Repository

    Loizou, Nicolas

    2017-10-30

    In this work we establish the first linear convergence result for the stochastic heavy ball method. The method performs SGD steps with a fixed stepsize, amended by a heavy ball momentum term. In the analysis, we focus on minimizing the expected loss and not on finite-sum minimization, which is typically a much harder problem. While in the analysis we constrain ourselves to quadratic loss, the overall objective is not necessarily strongly convex.
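
    A sketch of the stochastic heavy ball update on a quadratic least-squares loss; the stepsize and momentum values are illustrative choices, not the paper's tuned constants.

```python
import numpy as np

rng = np.random.default_rng(5)

# Stochastic heavy ball on a consistent least-squares problem:
#   x_{k+1} = x_k - mu * g_k + beta * (x_k - x_{k-1}),
# where g_k is a single-sample stochastic gradient.
A = rng.normal(size=(200, 10))
x_star = rng.normal(size=10)
b = A @ x_star                           # zero residual at the minimizer

mu, beta = 0.005, 0.9
x = np.zeros(10)
x_prev = np.zeros(10)
for _ in range(50_000):
    i = rng.integers(200)
    g = (A[i] @ x - b[i]) * A[i]         # gradient of 0.5 * (a_i . x - b_i)^2
    x, x_prev = x - mu * g + beta * (x - x_prev), x

print("distance to minimizer:", np.linalg.norm(x - x_star))
```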

  8. A remarkable systemic error in calibration methods of γ spectrometer used for determining activity of 238U

    International Nuclear Information System (INIS)

    Su Qiong; Cheng Jianping; Diao Lijun; Li Guiqun

    2006-01-01

    A notable systematic error, unrecognized for a long time, is pointed out. The error appears when calibration methods for determining the activity of ²³⁸U are used with a high-resolution γ-spectrometer. When the 92.6 keV γ-ray, the characteristic radiation of ²³⁸U, is used to determine the activity of ²³⁸U in natural environment samples, the disturbing radiation produced by external excitation (also called outer-sourcing X-ray radiation) is the main problem. Because the X-ray intensity changes with many indeterminate factors, it is advised that these calibration methods be abandoned. As the influence of the systematic error remains in some past research papers, the authors suggest that data from those papers be cited carefully and, if possible, re-determined. (authors)

  9. Study of on-machine error identification and compensation methods for micro machine tools

    International Nuclear Information System (INIS)

    Wang, Shih-Ming; Yu, Han-Jen; Lee, Chun-Yi; Chiu, Hung-Sheng

    2016-01-01

    Micro machining plays an important role in the manufacturing of miniature products which are made of various materials with complex 3D shapes and tight machining tolerances. To further improve the accuracy of a micro machining process without increasing the manufacturing cost of a micro machine tool, an effective machining error measurement method and a software-based compensation method are essential. To avoid introducing additional errors caused by re-installment of the workpiece, the measurement and compensation should be conducted on-machine. In addition, because the contour of a miniature workpiece machined with a micro machining process is very small, the measurement method should be non-contact. By integrating image reconstruction, camera pixel correction, coordinate transformation, an error identification algorithm, and a trajectory auto-correction method, a vision-based error measurement and compensation method was developed in this study that can inspect micro machining errors on-machine and automatically generate an error-corrected numerical control (NC) program. With the use of the Canny edge detection algorithm and camera pixel calibration, the edges of the contour of a machined workpiece were identified and used to reconstruct the actual contour of the workpiece. The actual contour was then mapped to the theoretical contour to identify the actual cutting points and compute the machining errors. With the use of a moving matching window and calculation of the similarity between the actual and theoretical contours, the errors between the actual and theoretical cutting points were calculated and used to correct the NC program. With the use of the error-corrected NC program, the accuracy of a micro machining process can be effectively improved. To prove the feasibility and effectiveness of the proposed methods, micro-milling experiments were conducted on a micro machine tool, and the results…
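
    The edge-detection step can be sketched as below with OpenCV's Canny detector on a synthetic image, converting edge pixels to metric coordinates with an assumed calibration factor; the image and scale are stand-ins for the real camera setup.

```python
import cv2
import numpy as np

# Synthetic stand-in for the camera image of a machined feature.
img = np.zeros((200, 200), dtype=np.uint8)
cv2.circle(img, (100, 100), 60, 255, -1)       # fake milled feature
img = cv2.GaussianBlur(img, (5, 5), 0)

# Canny edge detection, then convert edge pixels to millimetres using a
# calibration factor (assumed here; obtained from camera pixel calibration).
edges = cv2.Canny(img, 50, 150)
ys, xs = np.nonzero(edges)

mm_per_pixel = 0.002
contour_mm = np.column_stack([xs, ys]) * mm_per_pixel
print("edge points:", len(contour_mm))
print("first points (mm):", contour_mm[:3])
```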

  10. Detection method of nonlinearity errors by statistical signal analysis in heterodyne Michelson interferometer.

    Science.gov (United States)

    Hu, Juju; Hu, Haijiang; Ji, Yinghua

    2010-03-15

    Periodic nonlinearity, which ranges from a few nanometers to tens of nanometers, limits the use of the heterodyne interferometer in high-accuracy measurement. A novel method is studied to detect the nonlinearity errors based on electrical subdivision and statistical signal analysis in the heterodyne Michelson interferometer. With the micropositioning platform moving at uniform velocity, the method can detect the nonlinearity errors using regression analysis and jackknife estimation. Based on the analysis of the simulations, the method can estimate the influence of nonlinearity errors and other noises on dimensional measurement in the heterodyne Michelson interferometer.
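
    A sketch of the jackknife step: leave-one-out re-estimates of a nonlinearity amplitude give its standard error; the residual model and data below are illustrative, and the paper pairs this with regression analysis.

```python
import numpy as np

rng = np.random.default_rng(6)

# Illustrative phase residuals: a sinusoidal nonlinearity component plus noise.
residuals = 2.0 * np.sin(np.linspace(0, 20 * np.pi, 200)) + rng.normal(0, 0.3, 200)

def amplitude(r):
    # RMS-based amplitude estimate of a sinusoidal component: A = sqrt(2) * RMS.
    return np.sqrt(2.0) * np.std(r, ddof=1)

# Jackknife: recompute the estimate with each observation left out, then use
# the standard jackknife variance formula.
n = residuals.size
theta_full = amplitude(residuals)
theta_loo = np.array([amplitude(np.delete(residuals, i)) for i in range(n)])
se = np.sqrt((n - 1) / n * np.sum((theta_loo - theta_loo.mean()) ** 2))
print(f"amplitude = {theta_full:.3f} +/- {se:.3f}")
```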

  11. Method for producing bonded nonwoven fabrics using ionizing radiation

    International Nuclear Information System (INIS)

    Drelich, A.H.; Oney, D.G.

    1979-01-01

    A method is described for producing a resin-bonded nonwoven fabric. The preparation involves forming a fibrous web, annealing it, and compressing it to provide fiber-to-fiber contact. A polymerizable binder is applied to the fibrous web, which is then treated by ionizing radiation to produce the material. 9 figures, 3 drawings

  12. Studies on the method of producing radiographic 170Tm source

    International Nuclear Information System (INIS)

    Maeda, Sho

    1976-08-01

    A method of producing a radiographic ¹⁷⁰Tm source has been studied, including target preparation, neutron irradiation, handling of the irradiated target in the hot cell, and source capsules. On the basis of the results, practical ¹⁷⁰Tm radiographic sources (29 to 49 Ci, with pellets 3 mm in diameter and 3 mm long) were produced in a trial by neutron irradiation in the JMTR. (auth.)

  13. A platform-independent method for detecting errors in metagenomic sequencing data: DRISEE.

    Directory of Open Access Journals (Sweden)

    Kevin P Keegan

    Full Text Available We provide a novel method, DRISEE (duplicate read inferred sequencing error estimation, to assess sequencing quality (alternatively referred to as "noise" or "error" within and/or between sequencing samples. DRISEE provides positional error estimates that can be used to inform read trimming within a sample. It also provides global (whole sample error estimates that can be used to identify samples with high or varying levels of sequencing error that may confound downstream analyses, particularly in the case of studies that utilize data from multiple sequencing samples. For shotgun metagenomic data, we believe that DRISEE provides estimates of sequencing error that are more accurate and less constrained by technical limitations than existing methods that rely on reference genomes or the use of scores (e.g. Phred. Here, DRISEE is applied to (non amplicon data sets from both the 454 and Illumina platforms. The DRISEE error estimate is obtained by analyzing sets of artifactual duplicate reads (ADRs, a known by-product of both sequencing platforms. We present DRISEE as an open-source, platform-independent method to assess sequencing error in shotgun metagenomic data, and utilize it to discover previously uncharacterized error in de novo sequence data from the 454 and Illumina sequencing platforms.

  14. Host cells and methods for producing isoprenyl alkanoates

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Taek Soon; Fortman, Jeffrey L.; Keasling, Jay D.

    2015-12-01

    The invention provides for a method of producing an isoprenyl alkanoate in a genetically modified host cell. In one embodiment, the method comprises culturing a genetically modified host cell which expresses an enzyme capable of catalyzing the esterification of an isoprenol and a straight-chain fatty acid, such as an alcohol acetyltransferase (AAT), wax ester synthase/diacylglycerol acyltransferase (WS/DGAT) or lipase, under a suitable condition so that the isoprenyl alkanoate is produced.

  15. Residual-based a posteriori error estimation for multipoint flux mixed finite element methods

    KAUST Repository

    Du, Shaohong; Sun, Shuyu; Xie, Xiaoping

    2015-01-01

    A novel residual-type a posteriori error analysis technique is developed for multipoint flux mixed finite element methods for flow in porous media in two or three space dimensions. The derived a posteriori error estimator for the velocity and pressure error in L-norm consists of discretization and quadrature indicators, and is shown to be reliable and efficient. The main tools of analysis are a locally postprocessed approximation to the pressure solution of an auxiliary problem and a quadrature error estimate. Numerical experiments are presented to illustrate the competitive behavior of the estimator.

  16. Residual-based a posteriori error estimation for multipoint flux mixed finite element methods

    KAUST Repository

    Du, Shaohong

    2015-10-26

    A novel residual-type a posteriori error analysis technique is developed for multipoint flux mixed finite element methods for flow in porous media in two or three space dimensions. The derived a posteriori error estimator for the velocity and pressure error in L-norm consists of discretization and quadrature indicators, and is shown to be reliable and efficient. The main tools of analysis are a locally postprocessed approximation to the pressure solution of an auxiliary problem and a quadrature error estimate. Numerical experiments are presented to illustrate the competitive behavior of the estimator.

  17. Propagation of internal errors in explicit Runge–Kutta methods and internal stability of SSP and extrapolation methods

    KAUST Repository

    Ketcheson, David I.; Loczi, Lajos; Parsani, Matteo

    2014-01-01

    We show that for a fixed method, essentially any set of internal stability polynomials can be obtained by modifying the implementation details. We provide bounds on the internal error amplification constants for some classes of methods with many stages, including strong stability preserving methods and extrapolation methods.

  18. Fibonacci collocation method with a residual error function to solve linear Volterra integro-differential equations

    Directory of Open Access Journals (Sweden)

    Salih Yalcinbas

    2016-01-01

    In this paper, a new collocation method based on the Fibonacci polynomials is introduced to solve high-order linear Volterra integro-differential equations under the given conditions. Numerical examples are included to demonstrate the applicability and validity of the proposed method, and comparisons are made with existing results. In addition, an error estimation based on the residual functions is presented for this method. The approximate solutions are improved by using this error estimation.

  19. A new anisotropic mesh adaptation method based upon hierarchical a posteriori error estimates

    Science.gov (United States)

    Huang, Weizhang; Kamenski, Lennard; Lang, Jens

    2010-03-01

    A new anisotropic mesh adaptation strategy for finite element solution of elliptic differential equations is presented. It generates anisotropic adaptive meshes as quasi-uniform ones in some metric space, with the metric tensor being computed based on hierarchical a posteriori error estimates. A global hierarchical error estimate is employed in this study to obtain reliable directional information of the solution. Instead of solving the global error problem exactly, which is costly in general, we solve it iteratively using the symmetric Gauß-Seidel method. Numerical results show that a few GS iterations are sufficient for obtaining a reasonably good approximation to the error for use in anisotropic mesh adaptation. The new method is compared with several strategies using local error estimators or recovered Hessians. Numerical results are presented for a selection of test examples and a mathematical model for heat conduction in a thermal battery with large orthotropic jumps in the material coefficients.

  20. Methods of producing adsorption media including a metal oxide

    Science.gov (United States)

    Mann, Nicholas R; Tranter, Troy J

    2014-03-04

    Methods of producing a metal oxide are disclosed. The method comprises dissolving a metal salt in a reaction solvent to form a metal salt/reaction solvent solution. The metal salt is converted to a metal oxide and a caustic solution is added to the metal oxide/reaction solvent solution to adjust the pH of the metal oxide/reaction solvent solution to less than approximately 7.0. The metal oxide is precipitated and recovered. A method of producing adsorption media including the metal oxide is also disclosed, as is a precursor of an active component including particles of a metal oxide.

  1. A posteriori error estimator and AMR for discrete ordinates nodal transport methods

    International Nuclear Information System (INIS)

    Duo, Jose I.; Azmy, Yousry Y.; Zikatanov, Ludmil T.

    2009-01-01

    In the development of high fidelity transport solvers, optimization of the use of available computational resources and access to a tool for assessing quality of the solution are key to the success of large-scale nuclear systems' simulation. In this regard, error control provides the analyst with a confidence level in the numerical solution and enables optimization of resources through Adaptive Mesh Refinement (AMR). In this paper, we derive an a posteriori error estimator based on the nodal solution of the Arbitrarily High Order Transport Method of the Nodal type (AHOT-N). Furthermore, by making assumptions on the regularity of the solution, we represent the error estimator as a function of computable volume and element-edge residuals. The global L2 error norm is proved to be bound by the estimator. To lighten the computational load, we present a numerical approximation to the aforementioned residuals and split the global norm error estimator into local error indicators. These indicators are used to drive an AMR strategy for the spatial discretization. However, the indicators based on forward solution residuals alone do not bound the cell-wise error. The estimator and AMR strategy are tested in two problems featuring strong heterogeneity and a highly streaming transport regime with strong flux gradients. The results show that the error estimator indeed bounds the global error norms and that the error indicator follows the cell-error's spatial distribution pattern closely. The AMR strategy proves beneficial to optimize resources, primarily by reducing the number of unknowns solved for to achieve prescribed solution accuracy in global L2 error norm. Likewise, AMR achieves higher accuracy compared to uniform refinement when resolving sharp flux gradients, for the same number of unknowns.
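
    As a minimal illustration of how such local indicators drive AMR, the sketch below applies a maximum marking strategy: refine every cell whose indicator exceeds a fixed fraction of the largest one. The threshold and indicator values are illustrative, not taken from the paper.

        import numpy as np

        def mark_cells_for_refinement(eta, theta=0.5):
            """Maximum marking strategy: flag every cell whose local error
            indicator eta_K exceeds theta * max_K eta_K. 'eta' is an array
            of per-cell indicators; returns indices of cells to refine."""
            return np.flatnonzero(eta > theta * eta.max())

        eta = np.array([0.01, 0.20, 0.03, 0.18, 0.02])  # illustrative indicators
        print(mark_cells_for_refinement(eta))           # -> [1 3]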

  2. Error analysis in Fourier methods for option pricing for exponential Lévy processes

    KAUST Repository

    Crocce, Fabian; Häppölä, Juho; Kiessling, Jonas; Tempone, Raul

    2015-01-01

    We derive an error bound for utilising the discrete Fourier transform method for solving Partial Integro-Differential Equations (PIDE) that describe european option prices for exponential Lévy driven asset prices. We give sufficient conditions

  3. Systems and methods for producing electrical discharges in compositions

    KAUST Repository

    Cha, Min Suk

    2015-09-03

    Systems and methods configured to produce electrical discharges in compositions, for example in compositions that comprise mixtures of materials with high and low dielectric constants (e.g., a liquid having a high dielectric constant combined with a liquid having a low dielectric constant, a solid having a high dielectric constant combined with a liquid having a low dielectric constant, and similar compositions), and further systems and methods configured to produce materials, such as through material modification and/or material synthesis, resulting in part from producing electrical discharges in compositions.

  4. Quantitative analysis of scaling error compensation methods in dimensional X-ray computed tomography

    DEFF Research Database (Denmark)

    Müller, P.; Hiller, Jochen; Dai, Y.

    2015-01-01

    X-ray Computed Tomography (CT) has become an important technology for quality control of industrial components. As with other technologies, e.g., tactile coordinate measurements or optical measurements, CT is influenced by numerous quantities which may have negative impact on the accuracy...... errors of the manipulator system (magnification axis). This article also introduces a new compensation method for scaling errors using a database of reference scaling factors and discusses its advantages and disadvantages. In total, three methods for the correction of scaling errors – using the CT ball...

  5. A posteriori error analysis of multiscale operator decomposition methods for multiphysics models

    International Nuclear Information System (INIS)

    Estep, D; Carey, V; Tavener, S; Ginting, V; Wildey, T

    2008-01-01

    Multiphysics, multiscale models present significant challenges in computing accurate solutions and for estimating the error in information computed from numerical solutions. In this paper, we describe recent advances in extending the techniques of a posteriori error analysis to multiscale operator decomposition solution methods. While the particulars of the analysis vary considerably with the problem, several key ideas underlie a general approach being developed to treat operator decomposition multiscale methods. We explain these ideas in the context of three specific examples

  6. Estimation of Mechanical Signals in Induction Motors using the Recursive Prediction Error Method

    DEFF Research Database (Denmark)

    Børsting, H.; Knudsen, Morten; Rasmussen, Henrik

    1993-01-01

    Sensor feedback of mechanical quantities for control applications in induction motors is troublesome and relatively expensive. In this paper a recursive prediction error (RPE) method has successfully been used to estimate the angular rotor speed...

  7. The problem of assessing landmark error in geometric morphometrics: theory, methods, and modifications.

    Science.gov (United States)

    von Cramon-Taubadel, Noreen; Frazier, Brenda C; Lahr, Marta Mirazón

    2007-09-01

    Geometric morphometric methods rely on the accurate identification and quantification of landmarks on biological specimens. As in any empirical analysis, the assessment of inter- and intra-observer error is desirable. A review of methods currently being employed to assess measurement error in geometric morphometrics was conducted and three general approaches to the problem were identified. One such approach employs Generalized Procrustes Analysis to superimpose repeatedly digitized landmark configurations, thereby establishing whether repeat measures fall within an acceptable range of variation. The potential problem of this error assessment method (the "Pinocchio effect") is demonstrated and its effect on error studies discussed. An alternative approach involves employing Euclidean distances between the configuration centroid and repeat measures of a landmark to assess the relative repeatability of individual landmarks. This method is also potentially problematic as the inherent geometric properties of the specimen can result in misleading estimates of measurement error. A third approach involved the repeated digitization of landmarks with the specimen held in a constant orientation to assess individual landmark precision. This latter approach is an ideal method for assessing individual landmark precision, but is restrictive in that it does not allow for the incorporation of instrumentally defined or Type III landmarks. Hence, a revised method for assessing landmark error is proposed and described with the aid of worked empirical examples. (c) 2007 Wiley-Liss, Inc.

  8. Local blur analysis and phase error correction method for fringe projection profilometry systems.

    Science.gov (United States)

    Rao, Li; Da, Feipeng

    2018-05-20

    We introduce a flexible error correction method for fringe projection profilometry (FPP) systems in the presence of local blur phenomenon. Local blur caused by global light transport such as camera defocus, projector defocus, and subsurface scattering will cause significant systematic errors in FPP systems. Previous methods, which adopt high-frequency patterns to separate the direct and global components, fail when the global light phenomenon occurs locally. In this paper, the influence of local blur on phase quality is thoroughly analyzed, and a concise error correction method is proposed to compensate the phase errors. For defocus phenomenon, this method can be directly applied. With the aid of spatially varying point spread functions and local frontal plane assumption, experiments show that the proposed method can effectively alleviate the system errors and improve the final reconstruction accuracy in various scenes. For a subsurface scattering scenario, if the translucent object is dominated by multiple scattering, the proposed method can also be applied to correct systematic errors once the bidirectional scattering-surface reflectance distribution function of the object material is measured.

  9. Error baseline rates of five sample preparation methods used to characterize RNA virus populations.

    Directory of Open Access Journals (Sweden)

    Jeffrey R Kugelman

    Full Text Available Individual RNA viruses typically occur as populations of genomes that differ slightly from each other due to mutations introduced by the error-prone viral polymerase. Understanding the variability of RNA virus genome populations is critical for understanding virus evolution because individual mutant genomes may gain evolutionary selective advantages and give rise to dominant subpopulations, possibly even leading to the emergence of viruses resistant to medical countermeasures. Reverse transcription of virus genome populations followed by next-generation sequencing is the only available method to characterize variation for RNA viruses. However, both steps may lead to the introduction of artificial mutations, thereby skewing the data. To better understand how such errors are introduced during sample preparation, we determined and compared error baseline rates of five different sample preparation methods by analyzing in vitro transcribed Ebola virus RNA from an artificial plasmid-based system. These methods included: shotgun sequencing from plasmid DNA or in vitro transcribed RNA as a basic "no amplification" method, amplicon sequencing from the plasmid DNA or in vitro transcribed RNA as a "targeted" amplification method, sequence-independent single-primer amplification (SISPA) as a "random" amplification method, rolling circle reverse transcription sequencing (CirSeq) as an advanced "no amplification" method, and Illumina TruSeq RNA Access as a "targeted" enrichment method. The measured error frequencies indicate that RNA Access offers the best tradeoff between sensitivity and sample preparation error (1.4 × 10⁻⁵) of all compared methods.

  10. Error of the slanted edge method for measuring the modulation transfer function of imaging systems.

    Science.gov (United States)

    Xie, Xufen; Fan, Hongda; Wang, Hongyuan; Wang, Zebin; Zou, Nianyu

    2018-03-01

    The slanted edge method is a basic approach for measuring the modulation transfer function (MTF) of imaging systems; however, its measurement accuracy is limited in practice. Theoretical analysis of the slanted edge MTF measurement method performed in this paper reveals that inappropriate edge angles and random noise reduce this accuracy. The error caused by edge angles is analyzed using sampling and reconstruction theory. Furthermore, an error model combining noise and edge angles is proposed. We verify the analyses and model with respect to (i) the edge angle, (ii) a statistical analysis of the measurement error, (iii) the full width at half-maximum of a point spread function, and (iv) the error model. The experimental results verify the theoretical findings. This research can be referential for applications of the slanted edge MTF measurement method.
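
    For orientation, a bare-bones numpy sketch of the core slanted-edge computation: differentiate the edge spread function (ESF) to get the line spread function (LSF), then take the normalized FFT magnitude. A real implementation (e.g., per ISO 12233) first projects pixels along the fitted edge angle to build a supersampled ESF, which is exactly where the edge-angle errors analyzed above enter; the synthetic edge below is illustrative only.

        import numpy as np

        def mtf_from_esf(esf):
            """Estimate the MTF from a 1-D edge spread function:
            LSF = d(ESF)/dx, MTF = |FFT(LSF)| normalized at zero frequency."""
            lsf = np.gradient(esf)
            lsf = lsf * np.hanning(len(lsf))  # window to tame noise/truncation
            mtf = np.abs(np.fft.rfft(lsf))
            return mtf / mtf[0]

        x = np.linspace(-5, 5, 256)
        esf = 0.5 * (1 + np.tanh(x / 0.8))    # synthetic blurred edge
        print(mtf_from_esf(esf)[:5])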

  11. Method of producing nano-scaled inorganic platelets

    Science.gov (United States)

    Zhamu, Aruna; Jang, Bor Z.

    2012-11-13

    The present invention provides a method of exfoliating a layered material (e.g., transition metal dichalcogenide) to produce nano-scaled platelets having a thickness smaller than 100 nm, typically smaller than 10 nm. The method comprises (a) dispersing particles of a non-graphite laminar compound in a liquid medium containing therein a surfactant or dispersing agent to obtain a stable suspension or slurry; and (b) exposing the suspension or slurry to ultrasonic waves at an energy level for a sufficient length of time to produce separated nano-scaled platelets. The nano-scaled platelets are candidate reinforcement fillers for polymer nanocomposites.

  12. A New Error Analysis and Accuracy Synthesis Method for Shoe Last Machine

    Directory of Open Access Journals (Sweden)

    Bian Xiangjuan

    2014-05-01

    Full Text Available In order to improve the manufacturing precision of the shoe last machine, a new error-computing model has been put forward. First, based on the special topological structure of the shoe last machine and multi-rigid-body system theory, a spatial error-calculating model of the system was built; then, the law of error distribution in the whole workspace was discussed, and the position of maximum error of the system was found; finally, the sensitivities of the error parameters were analyzed at that position and accuracy synthesis was conducted using the Monte Carlo method. Taking the error sensitivity analysis into account, the accuracy of the main parts was allocated. Results show that the probability of the maximal volume error being less than 0.05 mm improved from 0.6592 for the old scheme to 0.7021 for the new one; the precision of the system was improved markedly, and the model can be used for the error analysis and accuracy synthesis of complex multi-branch kinematic chain systems and to improve the manufacturing precision of such systems.
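
    A minimal sketch of the accuracy-synthesis step described above: sample the error parameters from their tolerances and estimate, by Monte Carlo, the probability that the volume error stays under budget. The error model and tolerance values here are invented placeholders for the machine's actual multi-rigid-body model.

        import numpy as np

        rng = np.random.default_rng(0)

        def volume_error(dx, dy, dz):
            """Illustrative stand-in for the machine's spatial error model,
            evaluated at the worst-case position; the real model comes from
            the multi-rigid-body analysis."""
            return np.sqrt(dx**2 + dy**2 + dz**2)

        # Sample each error parameter from its tolerance (assumed normal, 3-sigma):
        n = 100_000
        dx = rng.normal(0, 0.02 / 3, n)   # mm, illustrative tolerances
        dy = rng.normal(0, 0.02 / 3, n)
        dz = rng.normal(0, 0.03 / 3, n)

        err = volume_error(dx, dy, dz)
        print("P(error < 0.05 mm) =", np.mean(err < 0.05))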

  13. The use of error and uncertainty methods in the medical laboratory.

    Science.gov (United States)

    Oosterhuis, Wytze P; Bayat, Hassan; Armbruster, David; Coskun, Abdurrahman; Freeman, Kathleen P; Kallner, Anders; Koch, David; Mackenzie, Finlay; Migliarino, Gabriel; Orth, Matthias; Sandberg, Sverre; Sylte, Marit S; Westgard, Sten; Theodorsson, Elvar

    2018-01-26

    Error methods - compared with uncertainty methods - offer simpler, more intuitive and practical procedures for calculating measurement uncertainty and conducting quality assurance in laboratory medicine. However, uncertainty methods are preferred in other fields of science as reflected by the guide to the expression of uncertainty in measurement. When laboratory results are used for supporting medical diagnoses, the total uncertainty consists only partially of analytical variation. Biological variation, pre- and postanalytical variation all need to be included. Furthermore, all components of the measuring procedure need to be taken into account. Performance specifications for diagnostic tests should include the diagnostic uncertainty of the entire testing process. Uncertainty methods may be particularly useful for this purpose but have yet to show their strength in laboratory medicine. The purpose of this paper is to elucidate the pros and cons of error and uncertainty methods as groundwork for future consensus on their use in practical performance specifications. Error and uncertainty methods are complementary when evaluating measurement data.
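
    As a minimal illustration of the uncertainty approach discussed above, the sketch below combines independent analytical, biological and preanalytical components in quadrature, GUM-style; the component values are invented for the example.

        import math

        # Combined standard uncertainty from independent components
        # (GUM-style quadrature); values are illustrative, in concentration units.
        u_analytical    = 2.0
        u_biological    = 5.0
        u_preanalytical = 1.5
        u_total = math.sqrt(u_analytical**2 + u_biological**2 + u_preanalytical**2)
        print(round(u_total, 2))   # 5.59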

  14. HUMAN RELIABILITY ANALYSIS USING THE COGNITIVE RELIABILITY AND ERROR ANALYSIS METHOD (CREAM) APPROACH

    Directory of Open Access Journals (Sweden)

    Zahirah Alifia Maulida

    2015-01-01

    Full Text Available Work accidents in the grinding and welding areas have ranked highest over the last five years at PT. X. These accidents are caused by human error, which occurs under the influence of the physical and non-physical work environment. This study uses scenarios to predict and reduce the likelihood of human error with the CREAM (Cognitive Reliability and Error Analysis Method) approach. CREAM is a human reliability analysis method used to obtain the Cognitive Failure Probability (CFP), which can be determined in two ways: the basic method and the extended method. The basic method yields only a general failure probability, whereas the extended method yields a CFP for each task. The results show that the factors influencing errors in grinding and welding work are the adequacy of the organization, the adequacy of the Man-Machine Interface (MMI) and operational support, the availability of procedures/plans, and the adequacy of training and experience. The cognitive aspect with the highest error value in grinding work is planning, with a CFP of 0.3, and in welding work it is execution, with a CFP of 0.18. To reduce the cognitive error values in grinding and welding work, the recommendations given are to provide regular training, more detailed work instructions, and tool familiarization. Keywords: CREAM (cognitive reliability and error analysis method), HRA (human reliability analysis), cognitive error

  15. Method for producing dysprosium-iron-boron alloy powder

    International Nuclear Information System (INIS)

    Camp, F.E.; Wooden, S.A.

    1989-01-01

    A method for producing a dysprosium-iron alloy adapted for use in the manufacture of rare-earth-element-containing iron-boron permanent magnets, the method including providing a particle mixture comprising dysprosium oxide, iron and calcium, compacting the particle mixture to produce a consolidated article, heating the article for a time and at a temperature sufficient to form a metallic compound comprising dysprosium and iron and to form calcium oxide, producing a particle mass of -35 mesh from the compact, washing the particle mass with water at a temperature no greater than 10°C to react the calcium and the calcium oxide to form calcium hydroxide, while preventing oxidation of the particle mass, and removing the calcium hydroxide from the particle mass

  16. A novel method for producing multiple ionization of noble gas

    International Nuclear Information System (INIS)

    Wang Li; Li Haiyang; Dai Dongxu; Bai Jiling; Lu Richang

    1997-01-01

    We introduce a novel method for producing multiple ionization of He, Ne, Ar, Kr and Xe. A nanosecond pulsed electron beam with a large number density, whose energy could be controlled, was produced by directing a focused 308 nm laser beam onto a stainless steel grid. Using this electron beam on a time-of-flight mass spectrometer, we obtained multiple ionization of the noble gases He, Ne, Ar and Xe. Time-of-flight mass spectra of these ions are given. These ions are thought to be produced by stepwise ionization of the gas atoms under electron beam impact. This method may be used as an ideal soft-ionizing point ion source in a time-of-flight mass spectrometer

  17. Methods for producing thin film charge selective transport layers

    Science.gov (United States)

    Hammond, Scott Ryan; Olson, Dana C.; van Hest, Marinus Franciscus Antonius Maria

    2018-01-02

    Methods for producing thin film charge selective transport layers are provided. In one embodiment, a method for forming a thin film charge selective transport layer comprises: providing a precursor solution comprising a metal containing reactive precursor material dissolved into a complexing solvent; depositing the precursor solution onto a surface of a substrate to form a film; and forming a charge selective transport layer on the substrate by annealing the film.

  18. Methods for identifying lipoxygenase producing microorganisms on agar plates

    NARCIS (Netherlands)

    Nyyssola, A.; Heshof, R.; Haarmann, T.; Eidner, J.; Westerholm-Parvinen, A.; Langfelder, K.; Kruus, K.; Graaff, de L.H.; Buchert, J.

    2012-01-01

    Plate assays for lipoxygenase producing microorganisms on agar plates have been developed. Both potassium iodide-starch and indamine dye formation methods were effective for detecting soybean lipoxygenase activity on agar plates. A positive result was also achieved using the beta-carotene bleaching method.

  19. Method of producing hydrogen, and rendering a contaminated biomass inert

    Science.gov (United States)

    Bingham, Dennis N [Idaho Falls, ID; Klingler, Kerry M [Idaho Falls, ID; Wilding, Bruce M [Idaho Falls, ID

    2010-02-23

    A method for rendering a contaminated biomass inert includes providing a first composition, providing a second composition, reacting the first and second compositions together to form an alkaline hydroxide, providing a contaminated biomass feedstock and reacting the alkaline hydroxide with the contaminated biomass feedstock to render the contaminated biomass feedstock inert and further producing hydrogen gas, and a byproduct that includes the first composition.

  20. Membrane with Stable Nanosized Microstructure and Method for Producing same

    DEFF Research Database (Denmark)

    2010-01-01

    The present invention provides a membrane, comprising in this order a first catalyst layer, an electronically and ionically conducting layer having a nanosized microstructure, and a second catalyst layer, characterized in that the electronically and ionically conducting layer is formed from...... an electrolyte material, a grain growth inhibitor and/or grain boundary modifier, and a method for producing same....

  1. Electroplating method for producing ultralow-mass fissionable deposits

    International Nuclear Information System (INIS)

    Ruddy, F.H.

    1989-01-01

    A method for producing ultralow-mass fissionable deposits for nuclear reactor dosimetry is described, including the steps of holding a radioactive parent until the radioactive parent reaches secular equilibrium with a daughter isotope, chemically separating the daughter from the parent, electroplating the daughter on a suitable substrate, and holding the electroplated daughter until the daughter decays to the fissionable deposit

  2. A method for analysing incidents due to human errors on nuclear installations

    International Nuclear Information System (INIS)

    Griffon, M.

    1980-01-01

    This paper deals with the development of a methodology adapted to a detailed analysis of incidents considered to be due to human errors. An identification of human errors and a search for their eventual multiple causes is then needed. They are categorized in eight classes: education and training of personnel, installation design, work organization, time and work duration, physical environment, social environment, history of the plant and performance of the operator. The method is illustrated by the analysis of a handling incident generated by multiple human errors. (author)

  3. Calculating method on human error probabilities considering influence of management and organization

    International Nuclear Information System (INIS)

    Gao Jia; Huang Xiangrui; Shen Zupei

    1996-01-01

    This paper is concerned with how management and organizational influences can be factored into the quantification of human error probabilities in risk assessments, using a three-level Influence Diagram (ID), originally a tool for constructing and representing models of decision-making trees or event trees. An analytical model of human error causation has been set up with three influence levels, introducing a method for quantification assessments of the ID which can be applied to quantifying the probabilities of human errors in risk assessments, especially to the quantification of complex event trees (systems) in engineering decision-making analysis. A numerical case study is provided to illustrate the approach

  4. Statistical analysis with measurement error or misclassification strategy, method and application

    CERN Document Server

    Yi, Grace Y

    2017-01-01

    This monograph on measurement error and misclassification covers a broad range of problems and emphasizes unique features in modeling and analyzing problems arising from medical research and epidemiological studies. Many measurement error and misclassification problems have been addressed in various fields over the years as well as with a wide spectrum of data, including event history data (such as survival data and recurrent event data), correlated data (such as longitudinal data and clustered data), multi-state event data, and data arising from case-control studies. Statistical Analysis with Measurement Error or Misclassification: Strategy, Method and Application brings together assorted methods in a single text and provides an update of recent developments for a variety of settings. Measurement error effects and strategies of handling mismeasurement for different models are closely examined in combination with applications to specific problems. Readers with diverse backgrounds and objectives can utilize th...

  5. Re-Normalization Method of Doppler Lidar Signal for Error Reduction

    Energy Technology Data Exchange (ETDEWEB)

    Park, Nakgyu; Baik, Sunghoon; Park, Seungkyu; Kim, Donglyul [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Kim, Dukhyeon [Hanbat National Univ., Daejeon (Korea, Republic of)

    2014-05-15

    In this paper, we present a re-normalization method for reducing the fluctuations of Doppler signals caused by various noise sources, mainly the frequency locking error, in a Doppler lidar system. For the Doppler lidar system, we used an injection-seeded pulsed Nd:YAG laser as the transmitter and an iodine filter as the Doppler frequency discriminator. For the Doppler frequency shift measurement, the transmission ratio using the injection-seeded laser is locked to stabilize the frequency. If the frequency locking system is not perfect, the Doppler signal has some error due to the frequency locking error. The re-normalization of the Doppler signals was performed to reduce this error using an additional laser beam to an iodine cell. We confirmed that the re-normalized Doppler signal is much more stable than the averaged Doppler signal; using our calibration method, the standard deviation was reduced to 4.838 × 10⁻³.

  6. Symmetric and Asymmetric Patterns of Attraction Errors in Producing Subject-Predicate Agreement in Hebrew: An Issue of Morphological Structure

    Science.gov (United States)

    Deutsch, Avital; Dank, Maya

    2011-01-01

    A common characteristic of subject-predicate agreement errors (usually termed attraction errors) in complex noun phrases is an asymmetrical pattern of error distribution, depending on the inflectional state of the nouns comprising the complex noun phrase. That is, attraction is most likely to occur when the head noun is the morphologically…

  7. A Multipoint Method for Detecting Genotyping Errors and Mutations in Sibling-Pair Linkage Data

    OpenAIRE

    Douglas, Julie A.; Boehnke, Michael; Lange, Kenneth

    2000-01-01

    The identification of genes contributing to complex diseases and quantitative traits requires genetic data of high fidelity, because undetected errors and mutations can profoundly affect linkage information. The recent emphasis on the use of the sibling-pair design eliminates or decreases the likelihood of detection of genotyping errors and marker mutations through apparent Mendelian incompatibilities or close double recombinants. In this article, we describe a hidden Markov method for detect...

  8. Round-off error in long-term orbital integrations using multistep methods

    Science.gov (United States)

    Quinlan, Gerald D.

    1994-01-01

    Techniques for reducing roundoff error are compared by testing them on high-order Störmer and symmetric multistep methods. The best technique for most applications is to write the equation in summed, function-evaluation form and to store the coefficients as rational numbers. A larger error reduction can be achieved by writing the equation in backward-difference form and performing some of the additions in extended precision, but this entails a larger central processing unit (CPU) cost.
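
    The "summed form" above accumulates the solution as a running sum of small increments. A closely related roundoff-reduction trick, shown here for illustration (it is not necessarily the exact scheme benchmarked in the paper), is Kahan compensated summation, which carries the low-order bits lost in each addition.

        def kahan_sum(increments):
            """Accumulate many small increments with a running compensation
            term, reducing roundoff growth relative to naive summation."""
            total, c = 0.0, 0.0
            for dx in increments:
                y = dx - c            # apply the carried correction
                t = total + y         # low-order bits of y may be lost here...
                c = (t - total) - y   # ...recover them into the compensation
                total = t
            return total

        steps = [0.1] * 10**6
        print(f"{kahan_sum(steps):.10f}")  # 100000.0000000000
        print(f"{sum(steps):.10f}")        # naive sum is off in the last digits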

  9. Performance of bias-correction methods for exposure measurement error using repeated measurements with and without missing data.

    Science.gov (United States)

    Batistatou, Evridiki; McNamee, Roseanne

    2012-12-10

    It is known that measurement error leads to bias in assessing exposure effects, which can, however, be corrected if independent replicates are available. For expensive replicates, two-stage (2S) studies that produce data 'missing by design' may be preferred over a single-stage (1S) study because, in the second stage, measurement of replicates is restricted to a sample of first-stage subjects. Motivated by an occupational study on the acute effect of carbon black exposure on respiratory morbidity, we compare the performance of several bias-correction methods for both designs in a simulation study: an instrumental variable method (EVROS IV) based on grouping strategies, which had been recommended especially when measurement error is large, the regression calibration method, and the simulation extrapolation method. For the 2S design, either the problem of 'missing' data was ignored or the 'missing' data were imputed using multiple imputations. Both in 1S and 2S designs, in the case of small or moderate measurement error, regression calibration was shown to be the preferred approach in terms of root mean square error. For 2S designs, regression calibration as implemented by Stata software is not recommended in contrast to our implementation of this method; the 'problematic' implementation of regression calibration was, however, substantially improved with the use of multiple imputations. The EVROS IV method, under a good/fairly good grouping, outperforms the regression calibration approach in both design scenarios when exposure mismeasurement is severe. Both in 1S and 2S designs with moderate or large measurement error, simulation extrapolation severely failed to correct for bias. Copyright © 2012 John Wiley & Sons, Ltd.

  10. Discontinuous Galerkin methods and a posteriori error analysis for heterogenous diffusion problems

    International Nuclear Information System (INIS)

    Stephansen, A.F.

    2007-12-01

    In this thesis we analyse a discontinuous Galerkin (DG) method and two computable a posteriori error estimators for the linear and stationary advection-diffusion-reaction equation with heterogeneous diffusion. The DG method considered, the SWIP method, is a variation of the Symmetric Interior Penalty Galerkin method. The difference is that the SWIP method uses weighted averages with weights that depend on the diffusion. The a priori analysis shows optimal convergence with respect to mesh-size and robustness with respect to heterogeneous diffusion, which is confirmed by numerical tests. Both a posteriori error estimators are of the residual type and control the energy (semi-)norm of the error. Local lower bounds are obtained showing that almost all indicators are independent of heterogeneities. The exception is for the non-conforming part of the error, which has been evaluated using the Oswald interpolator. The second error estimator is sharper in its estimate with respect to the first one, but it is slightly more costly. This estimator is based on the construction of an H(div)-conforming Raviart-Thomas-Nedelec flux using the conservativeness of DG methods. Numerical results show that both estimators can be used for mesh-adaptation. (author)

  11. Estimating misclassification error: a closer look at cross-validation based methods

    Directory of Open Access Journals (Sweden)

    Ounpraseuth Songthip

    2012-11-01

    Full Text Available Abstract Background To estimate a classifier’s error in predicting future observations, bootstrap methods have been proposed as reduced-variation alternatives to traditional cross-validation (CV) methods based on sampling without replacement. Monte Carlo (MC) simulation studies aimed at estimating the true misclassification error conditional on the training set are commonly used to compare CV methods. We conducted an MC simulation study to compare a new method of bootstrap CV (BCV) to k-fold CV for estimating classification error. Findings For the low-dimensional conditions simulated, the modest positive bias of k-fold CV contrasted sharply with the substantial negative bias of the new BCV method. This behavior was corroborated using a real-world dataset of prognostic gene-expression profiles in breast cancer patients. Our simulation results demonstrate some extreme characteristics of variance and bias that can occur due to a fault in the design of CV exercises aimed at estimating the true conditional error of a classifier, and that appear not to have been fully appreciated in previous studies. Although CV is a sound practice for estimating a classifier’s generalization error, using CV to estimate the fixed misclassification error of a trained classifier conditional on the training set is problematic. While MC simulation of this estimation exercise can correctly represent the average bias of a classifier, it will overstate the between-run variance of the bias. Conclusions We recommend k-fold CV over the new BCV method for estimating a classifier’s generalization error. The extreme negative bias of BCV is too high a price to pay for its reduced variance.
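
    A minimal scikit-learn sketch of the recommended approach, k-fold CV for estimating generalization error; the synthetic dataset and logistic-regression classifier are placeholders, not the study's models.

        from sklearn.datasets import make_classification
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score

        X, y = make_classification(n_samples=300, n_features=10, random_state=0)
        clf = LogisticRegression(max_iter=1000)

        # k-fold CV (k=10): mean accuracy across folds estimates the
        # classifier's generalization performance; 1 - mean is the error.
        scores = cross_val_score(clf, X, y, cv=10)
        print("estimated misclassification error:", 1 - scores.mean())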

  12. SCHEME (Soft Control Human error Evaluation MEthod) for advanced MCR HRA

    International Nuclear Information System (INIS)

    Jang, Inseok; Jung, Wondea; Seong, Poong Hyun

    2015-01-01

    Various HRA methods have been applied to NPP maintenance and operation, including the Technique for Human Error Rate Prediction (THERP), Korean Human Reliability Analysis (K-HRA), Human Error Assessment and Reduction Technique (HEART), A Technique for Human Event Analysis (ATHEANA), Cognitive Reliability and Error Analysis Method (CREAM), and Simplified Plant Analysis Risk Human Reliability Assessment (SPAR-H). Most of these methods were developed for the conventional type of Main Control Room (MCR). They are still used for HRA in advanced MCRs even though the operating environment of advanced MCRs in NPPs has been changed considerably by the adoption of new human-system interfaces such as computer-based soft controls. Among the many features of advanced MCRs, soft controls are an important one because operator actions in NPP advanced MCRs are performed through soft controls. Consequently, those conventional methods may not sufficiently consider the features of soft-control execution human errors. To this end, a new framework of an HRA method for evaluating soft-control execution human error is suggested, based on a soft-control task analysis and literature reviews of widely accepted human error taxonomies. In this study, the framework of an HRA method for evaluating soft-control execution human error in advanced MCRs is developed. First, the factors which an HRA method in advanced MCRs should encompass are derived based on the literature review and the soft-control task analysis. Based on the derived factors, an execution HRA framework in advanced MCRs is developed, focusing mainly on the features of soft control. Moreover, since most current HRA databases deal with operation in conventional MCRs and are not explicitly designed to deal with digital HSIs, an HRA database is developed under lab-scale simulation

  13. Microorganisms and methods for producing pyruvate, ethanol, and other compounds

    Energy Technology Data Exchange (ETDEWEB)

    Reed, Jennifer L.; Zhang, Xiaolin

    2017-12-26

    Microorganisms comprising modifications for producing pyruvate, ethanol, and other compounds. The microorganisms comprise modifications that reduce or ablate activity of one or more of pyruvate dehydrogenase, 2-oxoglutarate dehydrogenase, phosphate acetyltransferase, acetate kinase, pyruvate oxidase, lactate dehydrogenase, cytochrome terminal oxidase, succinate dehydrogenase, 6-phosphogluconate dehydrogenase, glutamate dehydrogenase, pyruvate formate lyase, pyruvate formate lyase activating enzyme, and isocitrate lyase. The microorganisms optionally comprise modifications that enhance expression or activity of pyruvate decarboxylase and alcohol dehydrogenase. The microorganisms are optionally evolved in defined media to enhance specific production of one or more compounds. Methods of producing compounds with the microorganisms are provided.

  14. Host cells and methods for producing diacid compounds

    Energy Technology Data Exchange (ETDEWEB)

    Steen, Eric J.; Fortman, Jeffrey L.; Dietrich, Jeffrey A.; Keasling, Jay D.

    2018-04-24

    The present invention provides for a method of producing one or more fatty acid derived dicarboxylic acids in a genetically modified host cell which does not naturally produce the one or more fatty acid derived dicarboxylic acids. The invention provides for the biosynthesis of dicarboxylic acids ranging in length from C3 to C26. The host cell can be further modified to increase fatty acid production or export of the desired fatty acid derived compound, and/or decrease fatty acid storage or metabolism.

  15. Improvements in or relating to methods of producing superconductors

    International Nuclear Information System (INIS)

    McDougall, I.L.

    1975-01-01

    A method is described for manufacturing a superconductor comprised of a superconducting intermetallic compound of at least two elements. The method consists of producing a composite containing at least one filament of at least one of the elements, this filament being embedded in a matrix material comprising a support material and the remainder of the elements. This material is coated with a material having a low self-diffusion coefficient and which is insoluble in the matrix material. The remainder of the elements are allowed to diffuse into the filament and react to form the intermetallic compound. Full details of the application of the method are given, together with examples. (U.K.)

  16. Correction method for the error of diamond tool's radius in ultra-precision cutting

    Science.gov (United States)

    Wang, Yi; Yu, Jing-chi

    2010-10-01

    Compensating for the error of the diamond tool's cutting edge is a bottleneck technology hindering the direct formation of high-accuracy aspheric surfaces after single-point diamond turning. Traditionally, compensation was done according to the measurement result from a profile meter, which took a long measurement time and resulted in low processing efficiency. A new compensation method is put forward in this article, in which the correction of the error of the diamond tool's cutting edge is done according to the measurement result from a digital interferometer. First, the detailed theoretical calculation related to the compensation method is deduced. Then, the effect after compensation is simulated by computer. Finally, a φ50 mm workpiece was diamond turned and then correction-turned on a Nanotech 250 machine. The tested surface achieved a high shape accuracy of PV 0.137λ and RMS 0.011λ, confirming that the new compensation method agrees with the predictive analysis and offers high accuracy and fast error convergence.

  17. The nuclear physical method for high pressure steam manifold water level gauging and its error

    International Nuclear Information System (INIS)

    Li Nianzu; Li Beicheng; Jia Shengming

    1993-10-01

    A new method, which makes no contact with the measured water level, for measuring the water level of a high-pressure steam manifold with a nuclear detection technique is introduced. This method overcomes the inherent drawbacks of previous water level gauges based on other principles. It can realize full-range, real-time monitoring of the continuous water level of a high-pressure steam manifold from boiler start-up to full load, and the actual value of the water level can be obtained. The measuring errors were analysed on site. Errors from practical operation in the Tianjin Junliangcheng Power Plant and in the laboratory are also presented

  18. Development of an analysis rule of diagnosis error for standard method of human reliability analysis

    International Nuclear Information System (INIS)

    Jeong, W. D.; Kang, D. I.; Jeong, K. S.

    2003-01-01

    This paper presents the status of development of the Korean standard method for Human Reliability Analysis (HRA) and proposes a standard procedure and rules for the evaluation of diagnosis error probability. The quality of the KSNP HRA was evaluated using the requirements of the ASME PRA standard guideline, and the design requirements for the standard HRA method were defined. The analysis procedure and rules developed so far for analyzing diagnosis error probability are suggested as a part of the standard method. A comprehensive application study was also performed to evaluate the suitability of the proposed rules

  19. A new method for weakening the combined effect of residual errors on multibeam bathymetric data

    Science.gov (United States)

    Zhao, Jianhu; Yan, Jun; Zhang, Hongmei; Zhang, Yuqing; Wang, Aixue

    2014-12-01

    Multibeam bathymetric system (MBS) has been widely applied in marine surveying for providing high-resolution seabed topography. However, some factors degrade the precision of bathymetry, including the sound velocity, the vessel attitude, the misalignment angle of the transducer and so on. Although these factors are corrected strictly in bathymetric data processing, the final bathymetric result is still affected by their residual errors. In deep water, the result usually cannot meet the requirements of high-precision seabed topography. The combined effect of these residual errors is systematic, and it is difficult to separate and weaken the effect using traditional single-error correction methods. Therefore, the paper puts forward a new method for weakening the effect of residual errors based on the frequency-spectrum characteristics of seabed topography and multibeam bathymetric data. Four steps, namely the separation of the low-frequency and high-frequency parts of the bathymetric data, the reconstruction of the trend of the actual seabed topography, the merging of the actual trend and the extracted microtopography, and the accuracy evaluation, are involved in the method. Experiment results prove that the proposed method can weaken the combined effect of residual errors on multibeam bathymetric data and efficiently improve the accuracy of the final post-processing results. We suggest that the method should be widely applied to MBS data processing in deep water.
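
    A rough sketch of the first step, separating the low-frequency part (seabed trend plus the systematic ripple left by residual errors) from the high-frequency microtopography; the Gaussian low-pass and its width are illustrative stand-ins for whatever spectral separation the paper employs.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def split_bathymetry(depth_grid, sigma=10.0):
            """Separate a gridded depth surface into a low-frequency part
            (trend plus systematic residual-error ripple) and a
            high-frequency part (microtopography) with a Gaussian low-pass."""
            low = gaussian_filter(depth_grid, sigma=sigma)
            high = depth_grid - low
            return low, high

        depths = np.random.default_rng(1).normal(-1000.0, 5.0, size=(200, 200))
        trend, micro = split_bathymetry(depths)
        print(trend.shape, micro.std())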

  20. Analysis of Statistical Methods and Errors in the Articles Published in the Korean Journal of Pain

    Science.gov (United States)

    Yim, Kyoung Hoon; Han, Kyoung Ah; Park, Soo Young

    2010-01-01

    Background Statistical analysis is essential for obtaining objective reliability in medical research. However, medical researchers do not have enough statistical knowledge to properly analyze their study data. To help understand and potentially alleviate this problem, we have analyzed the statistical methods and errors of articles published in the Korean Journal of Pain (KJP), with the intention of improving the statistical quality of the journal. Methods All the articles, except case reports and editorials, published from 2004 to 2008 in the KJP were reviewed. The types of applied statistical methods and errors in the articles were evaluated. Results One hundred and thirty-nine original articles were reviewed. Inferential statistics and descriptive statistics were used in 119 papers and 20 papers, respectively. Only 20.9% of the papers were free from statistical errors. The most commonly adopted statistical method was the t-test (21.0%) followed by the chi-square test (15.9%). Errors of omission were encountered 101 times in 70 papers. Among the errors of omission, "no statistics used even though statistical methods were required" was the most common (40.6%). The errors of commission were encountered 165 times in 86 papers, among which "parametric inference for nonparametric data" was the most common (33.9%). Conclusions We found various types of statistical errors in the articles published in the KJP. This suggests that meticulous attention should be given not only in applying statistical procedures but also in the reviewing process to improve the value of the article. PMID:20552071

  1. The systematic error of temperature noise correlation measurement method and self-calibration

    International Nuclear Information System (INIS)

    Tian Hong; Tong Yunxian

    1993-04-01

    The turbulent transport behavior of fluid noise and the effect of noise on the velocity measurement system have been studied. The systematic error of the velocity measurement system is analyzed. A theoretical calibration method is proposed, which makes time-correlation velocity measurement an absolute measurement method. The theoretical results are in good agreement with experiments

  2. Error analysis of some Galerkin - least squares methods for the elasticity equations

    International Nuclear Information System (INIS)

    Franca, L.P.; Stenberg, R.

    1989-05-01

    We consider the recent technique of stabilizing mixed finite element methods by augmenting the Galerkin formulation with least squares terms calculated separately on each element. The error analysis is performed in a unified manner yielding improved results for some methods introduced earlier. In addition, a new formulation is introduced and analyzed

  3. Error analysis and system improvements in phase-stepping methods for photoelasticity

    International Nuclear Information System (INIS)

    Wenyan Ji

    1997-11-01

    In the past, automated photoelasticity has been demonstrated to be one of the most efficient techniques for determining the complete state of stress in a 3-D component. However, the measurement accuracy, which depends on many aspects of both the theoretical foundations and the experimental procedures, has not been studied properly. The objective of this thesis is to reveal the intrinsic properties of the errors, provide methods for reducing them and finally improve the system accuracy. A general formulation for a polariscope with all the optical elements in arbitrary orientations was deduced using the method of Mueller matrices. The deduction of this formulation indicates an inherent connectivity among the optical elements and gives knowledge of the errors. In addition, this formulation also shows a common foundation among the photoelastic techniques; consequently, these techniques share many common error sources. The phase-stepping system proposed by Patterson and Wang was used as an exemplar to analyse the errors and provide the proposed improvements. This system can be divided into four parts according to their function, namely the optical system, light source, image acquisition equipment and image analysis software. All the possible error sources were investigated separately, and methods for reducing the influence of the errors and improving the system accuracy are presented. To identify the contribution of each possible error to the final system output, a model was used to simulate the errors and analyse their consequences. The contribution to the results from different error sources can therefore be estimated quantitatively, and finally the accuracy of the systems can be improved. For a conventional polariscope, the system accuracy can be as high as 99.23% for the fringe order and the error less than 5 degrees for the isoclinic angle. The PSIOS system is limited to low fringe orders. For a fringe order of less than 1.5, the accuracy is 94.60% for fringe

  4. A Method and Support Tool for the Analysis of Human Error Hazards in Digital Devices

    International Nuclear Information System (INIS)

    Lee, Yong Hee; Kim, Seon Soo; Lee, Yong Hee

    2012-01-01

    In recent years, many nuclear power plants have adopted modern digital I and C technologies, since these are expected to significantly improve both the economic efficiency and the safety of the plants. However, the introduction of an advanced main control room (MCR) is accompanied by many changes in form and features and by differences arising from the new digital devices. Many user-friendly displays and new features in digital devices are not enough to prevent human errors in nuclear power plants (NPPs). It may be an urgent matter to find the human error potentials due to digital devices, and their detailed mechanisms, so that we can consider them during the design of digital devices and their interfaces. The characteristics of digital technologies and devices offer many opportunities for interface management, and they can be integrated into a compact single workstation in an advanced MCR, such that workers can operate the plant with minimum burden under any operating condition. However, these devices may introduce new types of human errors, and thus we need a means to evaluate and prevent such errors, especially within digital devices for NPPs. This research suggests a new method named HEA-BIS (Human Error Analysis based on Interaction Segment) to confirm and detect human errors associated with digital devices. This method can be facilitated by support tools when used to ensure safety in the application of digital devices in NPPs

  5. SYSTEMATIC ERROR REDUCTION: NON-TILTED REFERENCE BEAM METHOD FOR LONG TRACE PROFILER

    International Nuclear Information System (INIS)

    QIAN, S.; QIAN, K.; HONG, Y.; SENG, L.; HO, T.; TAKACS, P.

    2007-01-01

    Systematic error in the Long Trace Profiler (LTP) has become the major error source as measurement accuracy enters the nanoradian and nanometer regime. Great efforts have been made to reduce the systematic error at a number of synchrotron radiation laboratories around the world. Generally, the LTP reference beam has to be tilted away from the optical axis in order to avoid fringe overlap between the sample and reference beams. However, a tilted reference beam will result in considerable systematic error due to optical system imperfections, which is difficult to correct. Six methods of implementing a non-tilted reference beam in the LTP are introduced: (1) application of an external precision angle device to measure and remove slide pitch error without a reference beam, (2) independent slide pitch test by use of a non-tilted reference beam, (3) non-tilted reference test combined with a tilted sample, (4) penta-prism scanning mode without reference beam correction, (5) non-tilted reference using a second optical head, and (6) alternate switching of data acquisition between the sample and reference beams. With a non-tilted reference method, the measurement accuracy can be improved significantly. Some measurement results are presented. Systematic error in the sample beam arm is not addressed in this paper and should be treated separately

  6. Filtering Methods for Error Reduction in Spacecraft Attitude Estimation Using Quaternion Star Trackers

    Science.gov (United States)

    Calhoun, Philip C.; Sedlak, Joseph E.; Superfin, Emil

    2011-01-01

    Precision attitude determination for recent and planned space missions typically includes quaternion star trackers (ST) and a three-axis inertial reference unit (IRU). Sensor selection is based on estimates of knowledge accuracy attainable from a Kalman filter (KF), which provides the optimal solution for the case of linear dynamics with measurement and process errors characterized by random Gaussian noise with white spectrum. Non-Gaussian systematic errors in quaternion STs are often quite large and have an unpredictable time-varying nature, particularly when used in non-inertial pointing applications. Two filtering methods are proposed to reduce the attitude estimation error resulting from ST systematic errors, 1) extended Kalman filter (EKF) augmented with Markov states, 2) Unscented Kalman filter (UKF) with a periodic measurement model. Realistic assessments of the attitude estimation performance gains are demonstrated with both simulation and flight telemetry data from the Lunar Reconnaissance Orbiter.
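
    A minimal sketch of the first approach: a linear Kalman filter augmented with a first-order Gauss-Markov state intended to absorb slowly varying, non-white star tracker errors. This scalar toy (illustrative dynamics, noise levels, and measurements) only shows the filter mechanics, not the flight implementation.

        import numpy as np

        # Toy 1-D filter: state = [attitude error, tracker bias], where the bias
        # is a first-order Gauss-Markov process with correlation time tau.
        dt, tau = 1.0, 100.0
        F = np.array([[1.0, 0.0],
                      [0.0, np.exp(-dt / tau)]])   # bias decays toward zero
        H = np.array([[1.0, 1.0]])                 # measurement sees state + bias
        Q = np.diag([1e-6, 1e-8])                  # process noise (illustrative)
        R = np.array([[1e-4]])                     # ST measurement noise

        x = np.zeros((2, 1))
        P = np.eye(2)
        for z in [0.01, 0.012, 0.011, 0.013]:      # illustrative measurements
            x, P = F @ x, F @ P @ F.T + Q          # predict
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
            x = x + K @ (np.array([[z]]) - H @ x)  # update
            P = (np.eye(2) - K @ H) @ P
        print(x.ravel())                           # [attitude error, estimated bias]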

  7. An error compensation method for a linear array sun sensor with a V-shaped slit

    International Nuclear Information System (INIS)

    Fan, Qiao-yun; Tan, Xiao-feng

    2015-01-01

    Existing methods of improving measurement accuracy, such as polynomial fitting and increasing pixel numbers, cannot simultaneously guarantee high precision and good miniaturization specifications of a micro sun sensor. Therefore, a novel integrated and accurate error compensation method is proposed. A mathematical error model is established according to the analysis of all the contributing factors, and the model parameters are calculated through simultaneous multi-set calibration. The numerical simulation results prove that the calibration method is unaffected by installation errors introduced by the calibration process, is capable of separating the sensor’s intrinsic and extrinsic parameters precisely, and obtains accurate and robust intrinsic parameters. In the laboratory calibration, the calibration data are generated using a two-axis rotation table and a sun simulator. The experimental results show that, owing to the proposed error compensation method, the sun sensor’s measurement accuracy is improved by a factor of 30 throughout its field of view (±60°  ×  ±60°), with an RMS error of 0.1°. (paper)

  8. Covariate measurement error correction methods in mediation analysis with failure time data.

    Science.gov (United States)

    Zhao, Shanshan; Prentice, Ross L

    2014-12-01

    Mediation analysis is important for understanding the mechanisms whereby one variable causes changes in another. Measurement error could obscure the ability of the potential mediator to explain such changes. This article focuses on developing correction methods for measurement error in the mediator with failure time outcomes. We consider a broad definition of measurement error, including technical error, and error associated with temporal variation. The underlying model with the "true" mediator is assumed to be of the Cox proportional hazards model form. The induced hazard ratio for the observed mediator no longer has a simple form independent of the baseline hazard function, due to the conditioning event. We propose a mean-variance regression calibration approach and a follow-up time regression calibration approach, to approximate the partial likelihood for the induced hazard function. Both methods demonstrate value in assessing mediation effects in simulation studies. These methods are generalized to multiple biomarkers and to both case-cohort and nested case-control sampling designs. We apply these correction methods to the Women's Health Initiative hormone therapy trials to understand the mediation effect of several serum sex hormone measures on the relationship between postmenopausal hormone therapy and breast cancer risk. © 2014, The International Biometric Society.
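
    For orientation, a toy numpy sketch of the basic regression-calibration idea in a linear (non-survival) setting: replace the error-prone measurement W by an estimate of E[X|W] before fitting. The variable names, the assumed known error variance, and the normality assumption are illustrative; the article's mean-variance and follow-up time calibration approaches for Cox models are more involved.

        import numpy as np

        rng = np.random.default_rng(0)
        n = 5000
        x = rng.normal(0, 1, n)            # true mediator (unobserved)
        w = x + rng.normal(0, 0.8, n)      # observed, with measurement error
        y = 2.0 * x + rng.normal(0, 1, n)  # outcome (linear toy, not survival)

        # The naive slope is attenuated; regression calibration rescales W by
        # the reliability ratio var(X)/var(W), here using a known error variance.
        naive = np.polyfit(w, y, 1)[0]
        lam = (w.var() - 0.8**2) / w.var()       # reliability ratio estimate
        x_hat = w.mean() + lam * (w - w.mean())  # E[X | W] under normality
        calibrated = np.polyfit(x_hat, y, 1)[0]
        print(f"naive {naive:.2f}  calibrated {calibrated:.2f}  true 2.00")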

  9. Method of producing gaseous products using a downflow reactor

    Science.gov (United States)

    Cortright, Randy D; Rozmiarek, Robert T; Hornemann, Charles C

    2014-09-16

    Reactor systems and methods are provided for the catalytic conversion of liquid feedstocks to synthesis gases and other noncondensable gaseous products. The reactor systems include a heat exchange reactor configured to allow the liquid feedstock and gas product to flow concurrently in a downflow direction. The reactor systems and methods are particularly useful for producing hydrogen and light hydrocarbons from biomass-derived oxygenated hydrocarbons using aqueous phase reforming. The generated gases may find used as a fuel source for energy generation via PEM fuel cells, solid-oxide fuel cells, internal combustion engines, or gas turbine gensets, or used in other chemical processes to produce additional products. The gaseous products may also be collected for later use or distribution.

  10. High capacity adsorption media and method of producing

    Science.gov (United States)

    Tranter, Troy J.; Mann, Nicholas R.; Todd, Terry A.; Herbst, Ronald S.

    2010-10-05

    A method of producing an adsorption medium to remove at least one constituent from a feed stream. The method comprises dissolving and/or suspending at least one metal compound in a solvent to form a metal solution, dissolving polyacrylonitrile into the metal solution to form a PAN-metal solution, and depositing the PAN-metal solution into a quenching bath to produce the adsorption medium. The at least one constituent, such as arsenic, selenium, or antimony, is removed from the feed stream by passing the feed stream through the adsorption medium. An adsorption medium having an increased metal loading and increased capacity for arresting the at least one constituent to be removed is also disclosed. The adsorption medium includes a polyacrylonitrile matrix and at least one metal hydroxide incorporated into the polyacrylonitrile matrix.

  11. Some error estimates for the lumped mass finite element method for a parabolic problem

    KAUST Repository

    Chatzipantelidis, P.

    2012-01-01

    We study the spatially semidiscrete lumped mass method for the model homogeneous heat equation with homogeneous Dirichlet boundary conditions. Improving earlier results, we show that known optimal-order smooth initial data error estimates for the standard Galerkin method carry over to the lumped mass method, whereas nonsmooth initial data estimates require special assumptions on the triangulation. We also discuss the application to time discretization by the backward Euler and Crank-Nicolson methods. © 2011 American Mathematical Society.
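
    For readers unfamiliar with mass lumping, the following sketch sets up the lumped mass P1 discretization of the 1D model heat equation and steps it with backward Euler; the mesh, time step, and initial data are arbitrary choices for illustration.

        import numpy as np

        # Lumped mass P1 FEM for u_t = u_xx on (0,1), u = 0 at both ends,
        # stepped with backward Euler (one of the schemes discussed above).
        N = 100                        # interior nodes
        h = 1.0 / (N + 1)
        x = np.linspace(h, 1 - h, N)

        # Stiffness matrix: tridiagonal (1/h) * [-1, 2, -1].
        K = (np.diag(2 * np.ones(N)) - np.diag(np.ones(N - 1), 1)
             - np.diag(np.ones(N - 1), -1)) / h
        # Lumped mass matrix: row sums of the consistent mass matrix give h*I.
        M = h * np.eye(N)

        u = np.sin(np.pi * x)          # smooth initial data
        dt, T = 1e-3, 0.1
        A = M + dt * K                 # backward Euler system matrix
        for _ in range(int(T / dt)):
            u = np.linalg.solve(A, M @ u)

        exact = np.exp(-np.pi ** 2 * T) * np.sin(np.pi * x)
        print("max error:", np.abs(u - exact).max())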

  12. HUMAN ERROR QUANTIFICATION USING PERFORMANCE SHAPING FACTORS IN THE SPAR-H METHOD

    Energy Technology Data Exchange (ETDEWEB)

    Harold S. Blackman; David I. Gertman; Ronald L. Boring

    2008-09-01

    This paper describes a cognitively based human reliability analysis (HRA) quantification technique for estimating the human error probabilities (HEPs) associated with operator and crew actions at nuclear power plants. The method described here, Standardized Plant Analysis Risk-Human Reliability Analysis (SPAR-H) method, was developed to aid in characterizing and quantifying human performance at nuclear power plants. The intent was to develop a defensible method that would consider all factors that may influence performance. In the SPAR-H approach, calculation of HEP rates is especially straightforward, starting with pre-defined nominal error rates for cognitive vs. action-oriented tasks, and incorporating performance shaping factor multipliers upon those nominal error rates.
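
    A minimal sketch of this quantification step follows. The nominal HEPs (1E-2 for diagnosis, 1E-3 for action) and the composite-PSF adjustment for three or more negative PSFs follow published SPAR-H descriptions, but the example multipliers are placeholders; the official PSF tables should be consulted for real analyses.

        # Sketch of the SPAR-H quantification step, under the assumptions above.
        def spar_h_hep(task_type, psf_multipliers):
            nhep = {"diagnosis": 1e-2, "action": 1e-3}[task_type]  # nominal HEPs
            psf = 1.0
            for m in psf_multipliers:
                psf *= m
            negative = sum(1 for m in psf_multipliers if m > 1)
            if negative >= 3:
                # Adjustment that keeps the HEP below 1 when several PSFs degrade.
                return nhep * psf / (nhep * (psf - 1) + 1)
            return min(nhep * psf, 1.0)

        print(spar_h_hep("diagnosis", [10, 2, 5]))   # stress, complexity, poor HMI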

  13. METHOD FOR PRODUCING ISOTOPIC METHANES AND PARTIALLY HALOGENATED DERIVATIVES THEREOF

    Science.gov (United States)

    Frazer, J.W.

    1959-08-18

    A method is given for producing isotopic methanes and/or partially halogenated derivatives. Lithium hydride, deuteride, or tritide is reacted with a halogenated methane, or with a halogenated methane in combination with free halogen. The process is conveniently carried out by passing a halogenated methane, preferably at low pressure or in admixture with an inert gas, through a fixed bed of finely divided lithium hydride heated initially to temperatures of 100 to 200 °C, depending upon the halogenated methane used.

  14. System and method for producing substitute natural gas from coal

    Science.gov (United States)

    Hobbs, Raymond [Avondale, AZ

    2012-08-07

    The present invention provides a system and method for producing substitute natural gas and electricity while mitigating production of any greenhouse gases. The system includes a hydrogasification reactor, to form a gas stream including natural gas and a char stream, and an oxygen burner to combust the char material to form carbon oxides. The system also includes an algae farm to convert the carbon oxides to hydrocarbon material and oxygen.

  15. Real-time GPS seismology using a single receiver: method comparison, error analysis and precision validation

    Science.gov (United States)

    Li, Xingxing

    2014-05-01

    Earthquake monitoring and early warning systems for hazard assessment and mitigation have traditionally been based on seismic instruments. However, for large seismic events, it is difficult for traditional seismic instruments to produce accurate and reliable displacements because of the saturation of broadband seismometers and problematic integration of strong-motion data. Compared with traditional seismic instruments, GPS can measure arbitrarily large dynamic displacements without saturation, making it particularly valuable in the case of large earthquakes and tsunamis. The GPS relative positioning approach is usually adopted to estimate seismic displacements, since centimeter-level accuracy can be achieved in real time by processing double-differenced carrier-phase observables. However, the relative positioning method requires a local reference station, which might itself be displaced during a large seismic event, resulting in misleading GPS analysis results. Meanwhile, the relative/network approach is time-consuming, making the simultaneous and real-time analysis of GPS data from hundreds or thousands of ground stations particularly difficult. In recent years, several single-receiver approaches for real-time GPS seismology, which can overcome the reference station problem of the relative positioning approach, have been successfully developed and applied to GPS seismology. One available method is real-time precise point positioning (PPP), which relies on precise satellite orbit and clock products. However, real-time PPP needs a long (re)convergence period, of about thirty minutes, to resolve integer phase ambiguities and achieve centimeter-level accuracy. In comparison with PPP, Colosimo et al. (2011) proposed a variometric approach that determines the change of position between two adjacent epochs, after which displacements are obtained by a single integration of the delta positions. This approach does not suffer from a convergence process, but the single integration from delta positions to displacements accumulates error over time.
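
    The sketch below illustrates, on synthetic numbers, why that last point matters: integrating epoch-to-epoch deltas recovers the displacement, but any small bias in the deltas accumulates as drift. All signal and noise levels here are invented for illustration.

        import numpy as np

        rng = np.random.default_rng(2)
        t = np.arange(0.0, 300.0)                     # 1 Hz epochs, 5 minutes
        true_disp = 0.05 * np.sin(2 * np.pi * t / 20.0)   # 5 cm oscillation (m)
        deltas = np.diff(true_disp, prepend=0.0)      # ideal epoch-wise deltas
        deltas += rng.normal(0, 2e-4, t.size) + 1e-4  # noise plus a small bias

        disp = np.cumsum(deltas)                      # single integration
        # The bias shows up as a linear drift; a common remedy is de-trending:
        detrended = disp - np.polyval(np.polyfit(t, disp, 1), t)
        print("raw final error (m):", abs(disp[-1] - true_disp[-1]))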

  16. Optimization and Experimentation of Dual-Mass MEMS Gyroscope Quadrature Error Correction Methods.

    Science.gov (United States)

    Cao, Huiliang; Li, Hongsheng; Kou, Zhiwei; Shi, Yunbo; Tang, Jun; Ma, Zongmin; Shen, Chong; Liu, Jun

    2016-01-07

    This paper focuses on an optimal quadrature error correction method for the dual-mass MEMS gyroscope, in order to reduce the long-term bias drift. It is known that the coupling stiffness and demodulation error are important elements causing bias drift. The coupling stiffness in dual-mass structures is analyzed. The experiment proves that the left and right masses' quadrature errors are different, and the quadrature correction system should be arranged independently. The process by which quadrature error arises is described, and the Charge Injecting Correction (CIC), Quadrature Force Correction (QFC) and Coupling Stiffness Correction (CSC) methods are introduced. The correction objects of these three methods are the quadrature error signal, force and the coupling stiffness, respectively. The three methods are investigated through control theory analysis, model simulation and circuit experiments, and the results support the theoretical analysis. The bias stability results based on CIC, QFC and CSC are 48 °/h, 9.9 °/h and 3.7 °/h, respectively, and this value is 38 °/h before quadrature error correction. The CSC method is proved to be the best of the three for quadrature correction, and it improves the Angle Random Walk (ARW) value, reducing it from 0.66 °/√h to 0.21 °/√h. The CSC system general test results show that it works well across the full temperature range, and the bias stabilities of the six groups' output data are 3.8 °/h, 3.6 °/h, 3.4 °/h, 3.1 °/h, 3.0 °/h and 4.2 °/h, respectively, which proves the system has excellent repeatability.
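
    Bias-stability and ARW figures like those quoted above are conventionally read off an Allan deviation curve. A minimal non-overlapping Allan deviation, assuming evenly sampled gyroscope rate data, might look like this:

        import numpy as np

        def allan_deviation(rate, fs, taus):
            out = []
            for tau in taus:
                m = int(tau * fs)                    # samples per cluster
                n = rate.size // m
                means = rate[: n * m].reshape(n, m).mean(axis=1)
                avar = 0.5 * np.mean(np.diff(means) ** 2)
                out.append(np.sqrt(avar))
            return np.array(out)

        fs = 100.0                                   # Hz (assumed sampling rate)
        rng = np.random.default_rng(3)
        gyro = rng.normal(0, 0.05, int(3600 * fs))   # white-noise-only gyro, deg/s
        print(allan_deviation(gyro, fs, [0.1, 1, 10, 100]))
        # The tau^(-1/2) slope region gives the ARW; the flat minimum of the
        # curve is the bias instability quoted in deg/h.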

  17. Optimization and Experimentation of Dual-Mass MEMS Gyroscope Quadrature Error Correction Methods

    Directory of Open Access Journals (Sweden)

    Huiliang Cao

    2016-01-01

    This paper focuses on an optimal quadrature error correction method for the dual-mass MEMS gyroscope, in order to reduce the long-term bias drift. It is known that the coupling stiffness and demodulation error are important elements causing bias drift. The coupling stiffness in dual-mass structures is analyzed. The experiment proves that the left and right masses' quadrature errors are different, and the quadrature correction system should be arranged independently. The process by which quadrature error arises is described, and the Charge Injecting Correction (CIC), Quadrature Force Correction (QFC) and Coupling Stiffness Correction (CSC) methods are introduced. The correction objects of these three methods are the quadrature error signal, force and the coupling stiffness, respectively. The three methods are investigated through control theory analysis, model simulation and circuit experiments, and the results support the theoretical analysis. The bias stability results based on CIC, QFC and CSC are 48 °/h, 9.9 °/h and 3.7 °/h, respectively, and this value is 38 °/h before quadrature error correction. The CSC method is proved to be the best of the three for quadrature correction, and it improves the Angle Random Walk (ARW) value, reducing it from 0.66 °/√h to 0.21 °/√h. The CSC system general test results show that it works well across the full temperature range, and the bias stabilities of the six groups' output data are 3.8 °/h, 3.6 °/h, 3.4 °/h, 3.1 °/h, 3.0 °/h and 4.2 °/h, respectively, which proves the system has excellent repeatability.

  18. Optimization and Experimentation of Dual-Mass MEMS Gyroscope Quadrature Error Correction Methods

    Science.gov (United States)

    Cao, Huiliang; Li, Hongsheng; Kou, Zhiwei; Shi, Yunbo; Tang, Jun; Ma, Zongmin; Shen, Chong; Liu, Jun

    2016-01-01

    This paper focuses on an optimal quadrature error correction method for the dual-mass MEMS gyroscope, in order to reduce the long-term bias drift. It is known that the coupling stiffness and demodulation error are important elements causing bias drift. The coupling stiffness in dual-mass structures is analyzed. The experiment proves that the left and right masses' quadrature errors are different, and the quadrature correction system should be arranged independently. The process by which quadrature error arises is described, and the Charge Injecting Correction (CIC), Quadrature Force Correction (QFC) and Coupling Stiffness Correction (CSC) methods are introduced. The correction objects of these three methods are the quadrature error signal, force and the coupling stiffness, respectively. The three methods are investigated through control theory analysis, model simulation and circuit experiments, and the results support the theoretical analysis. The bias stability results based on CIC, QFC and CSC are 48 °/h, 9.9 °/h and 3.7 °/h, respectively, and this value is 38 °/h before quadrature error correction. The CSC method is proved to be the best of the three for quadrature correction, and it improves the Angle Random Walk (ARW) value, reducing it from 0.66 °/√h to 0.21 °/√h. The CSC system general test results show that it works well across the full temperature range, and the bias stabilities of the six groups' output data are 3.8 °/h, 3.6 °/h, 3.4 °/h, 3.1 °/h, 3.0 °/h and 4.2 °/h, respectively, which proves the system has excellent repeatability. PMID:26751455

  19. Research on the Method of Noise Error Estimation of Atomic Clocks

    Science.gov (United States)

    Song, H. J.; Dong, S. W.; Li, W.; Zhang, J. H.; Jing, Y. J.

    2017-05-01

    Simulation methods for the different noise types of atomic clocks are given. The frequency flicker noise of an atomic clock is studied using Markov process theory. The method for estimating the maximum interval error of frequency white noise is studied using Wiener process theory. Based on the operation of 9 cesium atomic clocks in the time and frequency reference laboratory of the NTSC (National Time Service Center), the noise coefficients of the power-law spectrum model are estimated, and simulations are carried out according to the noise models. Finally, the maximum interval error estimates of the frequency white noise generated by the 9 cesium atomic clocks have been acquired.
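
    As a rough illustration of the white-frequency-noise case, the sketch below simulates white fractional frequency noise, integrates it into a time error (a Wiener process, as in the analysis above), and estimates a maximum interval error over a one-day horizon by Monte Carlo. The noise level is a placeholder, not an NTSC value.

        import numpy as np

        rng = np.random.default_rng(4)
        tau0, n, trials = 1.0, 86400, 200            # 1 s steps, one day
        q = 1e-13                                    # white FM level (placeholder)

        max_err = np.empty(trials)
        for k in range(trials):
            y = rng.normal(0, q, n)                  # white frequency noise
            xph = np.cumsum(y) * tau0                # time error: Wiener process
            max_err[k] = np.abs(xph).max()
        print("estimated maximum interval error (s):", np.quantile(max_err, 0.95))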

  20. Error baseline rates of five sample preparation methods used to characterize RNA virus populations

    Science.gov (United States)

    Kugelman, Jeffrey R.; Wiley, Michael R.; Nagle, Elyse R.; Reyes, Daniel; Pfeffer, Brad P.; Kuhn, Jens H.; Sanchez-Lockhart, Mariano; Palacios, Gustavo F.

    2017-01-01

    Individual RNA viruses typically occur as populations of genomes that differ slightly from each other due to mutations introduced by the error-prone viral polymerase. Understanding the variability of RNA virus genome populations is critical for understanding virus evolution because individual mutant genomes may gain evolutionary selective advantages and give rise to dominant subpopulations, possibly even leading to the emergence of viruses resistant to medical countermeasures. Reverse transcription of virus genome populations followed by next-generation sequencing is the only available method to characterize variation for RNA viruses. However, both steps may lead to the introduction of artificial mutations, thereby skewing the data. To better understand how such errors are introduced during sample preparation, we determined and compared error baseline rates of five different sample preparation methods by analyzing in vitro transcribed Ebola virus RNA from an artificial plasmid-based system. These methods included: shotgun sequencing from plasmid DNA or in vitro transcribed RNA as a basic “no amplification” method, amplicon sequencing from the plasmid DNA or in vitro transcribed RNA as a “targeted” amplification method, sequence-independent single-primer amplification (SISPA) as a “random” amplification method, rolling circle reverse transcription sequencing (CirSeq) as an advanced “no amplification” method, and Illumina TruSeq RNA Access as a “targeted” enrichment method. The measured error frequencies indicate that RNA Access offers the best tradeoff between sensitivity and sample preparation error (1.4 × 10⁻⁵) of all compared methods. PMID:28182717

  1. A method to deal with installation errors of wearable accelerometers for human activity recognition

    International Nuclear Information System (INIS)

    Jiang, Ming; Wang, Zhelong; Shang, Hong; Li, Hongyi; Wang, Yuechao

    2011-01-01

    Human activity recognition (HAR) by using wearable accelerometers has gained significant interest in recent years in a range of healthcare areas, including inferring metabolic energy expenditure, predicting falls, measuring gait parameters and monitoring daily activities. The implementation of HAR relies heavily on the correctness of sensor fixation. The installation errors of wearable accelerometers may dramatically decrease the accuracy of HAR. In this paper, a method is proposed to improve the robustness of HAR to the installation errors of accelerometers. The method first calculates a transformation matrix by using Gram–Schmidt orthonormalization in order to eliminate the sensor's orientation error and then employs a low-pass filter with a cut-off frequency of 10 Hz to eliminate the main effect of the sensor's misplacement. The experimental results showed that the proposed method obtained a satisfactory performance for HAR. The average accuracy rate from ten subjects was 95.1% when there were no installation errors, and was 91.9% when installation errors were involved in wearable accelerometers
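
    A compact sketch of the two correction steps described above follows. The choice of reference vectors used to build the orthonormal frame (gravity measured during a still period plus one auxiliary direction) is an assumption for illustration, as is the 4th-order Butterworth filter.

        import numpy as np
        from scipy.signal import butter, filtfilt

        def orthonormal_frame(g, a):
            e1 = g / np.linalg.norm(g)                       # gravity axis
            v = a - (a @ e1) * e1                            # Gram-Schmidt step
            e2 = v / np.linalg.norm(v)
            e3 = np.cross(e1, e2)
            return np.vstack([e1, e2, e3])                   # rows: new basis

        def correct(acc, g_ref, a_ref, fs=100.0, fc=10.0):
            # acc: (T, 3) raw accelerometer samples at fs Hz
            R = orthonormal_frame(g_ref, a_ref)
            rotated = acc @ R.T                              # undo orientation error
            b, a = butter(4, fc / (fs / 2), btype="low")
            return filtfilt(b, a, rotated, axis=0)           # damp placement effects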

  2. Decomposition and correction overlapping peaks of LIBS using an error compensation method combined with curve fitting.

    Science.gov (United States)

    Tan, Bing; Huang, Min; Zhu, Qibing; Guo, Ya; Qin, Jianwei

    2017-09-01

    The laser induced breakdown spectroscopy (LIBS) technique is an effective method to detect material composition by obtaining the plasma emission spectrum. The overlapping peaks in the spectrum are a fundamental problem in the qualitative and quantitative analysis of LIBS. Based on a curve fitting method, this paper studies an error compensation method to achieve the decomposition and correction of overlapping peaks. The vital step is that the fitting residual is fed back to the overlapping peaks and multiple curve fitting passes are performed to obtain a lower residual result. In quantitative experiments on Cu, the Cu-Fe overlapping peaks in the range of 321-327 nm obtained from the LIBS spectrum of five different concentrations of CuSO 4 ·5H 2 O solution were decomposed and corrected using the curve fitting and error compensation methods. Compared with the curve fitting method alone, the error compensation reduced the fitting residual by about 18.12-32.64% and improved the correlation by about 0.86-1.82%. Then, the calibration curve between the intensity and concentration of Cu was established. It can be seen that the error compensation method exhibits a higher linear correlation between the intensity and concentration of Cu, which can be applied to the decomposition and correction of overlapping peaks in the LIBS spectrum.
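
    One plausible reading of the residual-feedback loop is sketched below with two overlapping Gaussian peaks; the actual LIBS line shapes and the paper's exact feedback rule may differ.

        import numpy as np
        from scipy.optimize import curve_fit

        def doublet(x, a1, c1, w1, a2, c2, w2):
            return (a1 * np.exp(-((x - c1) / w1) ** 2)
                    + a2 * np.exp(-((x - c2) / w2) ** 2))

        def fit_with_compensation(x, y, p0, n_iter=5):
            # Fit, add the remaining residual back onto the target, refit.
            target = y.copy()
            popt = p0
            for _ in range(n_iter):
                popt, _ = curve_fit(doublet, x, target, p0=popt, maxfev=10000)
                residual = y - doublet(x, *popt)
                target = y + residual                # feed the residual back
            return popt, residual

        x = np.linspace(321, 327, 400)               # nm, as in the Cu-Fe example
        rng = np.random.default_rng(5)
        y = doublet(x, 1.0, 323.0, 0.4, 0.7, 324.0, 0.5) + rng.normal(0, 0.01, x.size)
        popt, res = fit_with_compensation(x, y, p0=[1, 323, 0.5, 0.5, 324, 0.5])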

  3. An Estimation of Human Error Probability of Filtered Containment Venting System Using Dynamic HRA Method

    Energy Technology Data Exchange (ETDEWEB)

    Jang, Seunghyun; Jae, Moosung [Hanyang University, Seoul (Korea, Republic of)

    2016-10-15

    The human failure events (HFEs) are considered in the development of system fault trees as well as accident sequence event trees as part of Probabilistic Safety Assessment (PSA). Several methods for analyzing human error, such as the Technique for Human Error Rate Prediction (THERP), Human Cognitive Reliability (HCR), and Standardized Plant Analysis Risk-Human Reliability Analysis (SPAR-H), are used, and new methods for human reliability analysis (HRA) are under development at this time. This paper presents a dynamic HRA method for assessing human failure events, and an estimation of the human error probability for the filtered containment venting system (FCVS) is performed. The action associated with implementation of containment venting during a station blackout sequence is used as an example. In this report, the dynamic HRA method was used to analyze the FCVS-related operator action. The distributions of the required time and the available time were developed by the MAAP code and LHS sampling. Though the numerical calculations given here are only for illustrative purposes, the dynamic HRA method can be a useful tool for human error estimation, and it can be applied to any kind of operator action, including the severe accident management strategy.
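
    The time-reliability core of such a dynamic HRA can be sketched in a few lines: sample the required and available times and take the exceedance probability as the non-response HEP. The lognormal distributions below are placeholders for the MAAP/LHS-derived distributions in the paper.

        import numpy as np

        rng = np.random.default_rng(6)
        n = 100_000
        t_required = rng.lognormal(mean=np.log(30), sigma=0.4, size=n)   # min
        t_available = rng.lognormal(mean=np.log(60), sigma=0.3, size=n)  # min
        hep = np.mean(t_required > t_available)      # P(required > available)
        print(f"non-response HEP ~ {hep:.4f}")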

  4. Errors in accident data, its types, causes and methods of rectification-analysis of the literature.

    Science.gov (United States)

    Ahmed, Ashar; Sadullah, Ahmad Farhan Mohd; Yahya, Ahmad Shukri

    2017-07-29

    Most of the decisions taken to improve road safety are based on accident data, which makes it the backbone of any country's road safety system. Errors in this data will lead to misidentification of black spots and hazardous road segments, projection of false estimates pertinent to accident and fatality rates, and detection of wrong parameters responsible for accident occurrence, thereby making the entire road safety exercise ineffective. The extent of error varies from country to country depending upon various factors. Knowing the type of error in the accident data and the factors causing it enables the application of the correct method for its rectification. Therefore there is a need for a systematic literature review that addresses the topic at a global level. This paper fills the above research gap by providing a synthesis of literature for the different types of errors found in the accident data of 46 countries across the six regions of the world. The errors are classified and discussed with respect to each type and analysed with respect to income level; an assessment of the magnitude of each type is provided, followed by the different causes that result in their occurrence, and the various methods used to address each type of error. Among high-income countries the extent of error in reporting slight, severe, non-fatal and fatal injury accidents varied between 39-82%, 16-52%, 12-84%, and 0-31% respectively. For middle-income countries the error for the same categories varied between 93-98%, 32.5-96%, 34-99% and 0.5-89.5% respectively. The only four studies available for low-income countries showed that the error in reporting non-fatal and fatal accidents varied between 69-80% and 0-61% respectively. The logistic relation of error in accident data reporting, dichotomised at 50%, indicated that as the income level of a country increases, the probability of having less error in accident data also increases. Average error in recording information related to the

  5. Missing texture reconstruction method based on error reduction algorithm using Fourier transform magnitude estimation scheme.

    Science.gov (United States)

    Ogawa, Takahiro; Haseyama, Miki

    2013-03-01

    A missing texture reconstruction method based on an error reduction (ER) algorithm, including a novel estimation scheme for Fourier transform magnitudes, is presented in this brief. In our method, the Fourier transform magnitude is estimated for a target patch including missing areas, and the missing intensities are estimated by retrieving its phase based on the ER algorithm. Specifically, by monitoring the errors to which the ER algorithm converges, known patches whose Fourier transform magnitudes are similar to that of the target patch are selected from the target image. In the second step, the Fourier transform magnitude of the target patch is estimated from those of the selected known patches and their corresponding errors. Consequently, by using the ER algorithm, we can estimate both the Fourier transform magnitudes and phases to reconstruct the missing areas.
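
    A generic error-reduction loop of this kind alternates a spatial-domain data constraint with a Fourier-magnitude constraint. In the sketch below the target magnitude is assumed to be already estimated; the paper's weighted magnitude-estimation scheme is not reproduced, and the demo simply reuses the true magnitude as an idealized stand-in.

        import numpy as np

        def er_inpaint(patch, mask, magnitude, n_iter=200):
            # mask: True where pixels are known; magnitude: target |FFT|
            est = np.where(mask, patch, patch[mask].mean())
            for _ in range(n_iter):
                F = np.fft.fft2(est)
                F = magnitude * np.exp(1j * np.angle(F))   # magnitude constraint
                est = np.real(np.fft.ifft2(F))
                est[mask] = patch[mask]                    # data constraint
            return est

        rng = np.random.default_rng(8)
        img = rng.random((32, 32))
        known = rng.random((32, 32)) > 0.2                 # 20% pixels missing
        rec = er_inpaint(img, known, np.abs(np.fft.fft2(img)))  # idealized magnitude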

  6. a Gross Error Elimination Method for Point Cloud Data Based on Kd-Tree

    Science.gov (United States)

    Kang, Q.; Huang, G.; Yang, S.

    2018-04-01

    Point cloud data has become one of the most widely used data sources in the field of remote sensing. Key steps of point cloud data pre-processing focus on gross error elimination and quality control. Owing to the volume of point cloud data, existing gross error elimination methods consume massive amounts of memory and time. This paper employs a new method that uses a Kd-tree algorithm for construction, a k-nearest-neighbor algorithm for searching, and an appropriately chosen threshold to judge whether a target point is an outlier. Experimental results show that the proposed algorithm helps to delete gross errors in point cloud data while decreasing memory consumption and improving efficiency.
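
    A compact version of that pipeline with SciPy's cKDTree might look as follows; the mean-plus-two-sigma threshold is an assumed choice, since the abstract does not specify one.

        import numpy as np
        from scipy.spatial import cKDTree

        def remove_gross_errors(points, k=8, n_sigma=2.0):
            tree = cKDTree(points)                      # build the Kd-tree
            dists, _ = tree.query(points, k=k + 1)      # first hit is the point itself
            mean_d = dists[:, 1:].mean(axis=1)          # mean k-NN distance per point
            thresh = mean_d.mean() + n_sigma * mean_d.std()
            return points[mean_d <= thresh]             # drop points beyond threshold

        pts = np.random.default_rng(7).normal(size=(10000, 3))
        clean = remove_gross_errors(pts)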

  7. The behaviour of the local error in splitting methods applied to stiff problems

    International Nuclear Information System (INIS)

    Kozlov, Roman; Kvaernoe, Anne; Owren, Brynjulf

    2004-01-01

    Splitting methods are frequently used in solving stiff differential equations and it is common to split the system of equations into a stiff and a nonstiff part. The classical theory for the local order of consistency is valid only for stepsizes which are smaller than what one would typically prefer to use in the integration. Error control and stepsize selection devices based on classical local order theory may lead to unstable error behaviour and inefficient stepsize sequences. Here, the behaviour of the local error in the Strang and Godunov splitting methods is explained by using two different tools, Lie series and singular perturbation theory. The two approaches provide an understanding of the phenomena from different points of view, but both are consistent with what is observed in numerical experiments
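
    The effect is easy to reproduce on a linear test problem: the sketch below compares the local errors of Godunov (Lie) and Strang splitting against the exact matrix exponential for a stiff/nonstiff pair, with the matrices chosen arbitrarily for illustration.

        import numpy as np
        from scipy.linalg import expm

        # Local error of Lie vs Strang splitting on u' = (A + B)u; the classical
        # local orders (2 and 3) are only seen once h is small relative to the
        # stiffness of A.
        A = np.array([[-1000.0, 0.0], [0.0, -0.5]])   # stiff part
        B = np.array([[0.0, 1.0], [-1.0, 0.0]])       # nonstiff part
        u0 = np.array([1.0, 1.0])

        for h in [1e-1, 1e-2, 1e-3, 1e-4]:
            exact = expm((A + B) * h) @ u0
            lie = expm(B * h) @ expm(A * h) @ u0
            strang = expm(A * h / 2) @ expm(B * h) @ expm(A * h / 2) @ u0
            print(f"h={h:.0e}  Lie err={np.linalg.norm(lie - exact):.2e}"
                  f"  Strang err={np.linalg.norm(strang - exact):.2e}")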

  8. A GROSS ERROR ELIMINATION METHOD FOR POINT CLOUD DATA BASED ON KD-TREE

    Directory of Open Access Journals (Sweden)

    Q. Kang

    2018-04-01

    Point cloud data has become one of the most widely used data sources in the field of remote sensing. Key steps of point cloud data pre-processing focus on gross error elimination and quality control. Owing to the volume of point cloud data, existing gross error elimination methods consume massive amounts of memory and time. This paper employs a new method that uses a Kd-tree algorithm for construction, a k-nearest-neighbor algorithm for searching, and an appropriately chosen threshold to judge whether a target point is an outlier. Experimental results show that the proposed algorithm helps to delete gross errors in point cloud data while decreasing memory consumption and improving efficiency.

  9. A numerical method for multigroup slab-geometry discrete ordinates problems with no spatial truncation error

    International Nuclear Information System (INIS)

    Barros, R.C. de; Larsen, E.W.

    1991-01-01

    A generalization of the one-group Spectral Green's Function (SGF) method is developed for multigroup, slab-geometry discrete ordinates (S N ) problems. The multigroup SGF method is free from spatial truncation errors; it generates numerical values for the cell-edge and cell-average angular fluxes that agree with the analytic solution of the multigroup S N equations. Numerical results are given to illustrate the method's accuracy.

  10. Accurate and fast methods to estimate the population mutation rate from error prone sequences

    Directory of Open Access Journals (Sweden)

    Miyamoto Michael M

    2009-08-01

    Background: The population mutation rate (θ) remains one of the most fundamental parameters in genetics, ecology, and evolutionary biology. However, its accurate estimation can be seriously compromised when working with error-prone data such as expressed sequence tags, low-coverage draft sequences, and other such unfinished products. This study is premised on the simple idea that a random sequence error due to a chance accident during data collection or recording will be distributed within a population dataset as a singleton (i.e., as a polymorphic site where one sampled sequence exhibits a unique base relative to the common nucleotide of the others). Thus, one can avoid these random errors by ignoring the singletons within a dataset. Results: This strategy is implemented under an infinite-sites model that focuses on only the internal branches of the sample genealogy, where a shared polymorphism can arise (i.e., a variable site where each alternative base is represented by at least two sequences). This approach is first used to derive independently the same new Watterson and Tajima estimators of θ, as recently reported by Achaz [1] for error-prone sequences. It is then used to modify the recent, full, maximum-likelihood model of Knudsen and Miyamoto [2], which incorporates various factors for experimental error and design with those for coalescence and mutation. These new methods are all accurate and fast according to evolutionary simulations and analyses of a real complex population dataset for the California sea hare. Conclusion: In light of these results, we recommend the use of these three new methods for the determination of θ from error-prone sequences. In particular, we advocate the new maximum-likelihood model as a starting point for the further development of more complex coalescent/mutation models that also account for experimental error and design.
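
    The singleton-excluding Watterson estimator follows directly from the expectations behind this strategy: under the infinite-sites model E[ξ_i] = θ/i, so dropping the singleton class leaves E[S − ξ₁] = θ(a_n − 1). A minimal sketch, assuming biallelic sites:

        import numpy as np

        def theta_no_singletons(sequences):
            seqs = np.asarray([list(s) for s in sequences])
            n = seqs.shape[0]
            s_shared = 0
            for col in seqs.T:
                bases, counts = np.unique(col, return_counts=True)
                if len(bases) > 1 and counts.min() >= 2:   # shared polymorphism
                    s_shared += 1
            a_n = sum(1.0 / i for i in range(1, n))        # harmonic number
            return s_shared / (a_n - 1.0)                  # E[S - xi_1] = theta*(a_n - 1)

        sample = ["ACGTAC", "ACGTAC", "ATGTAC", "ATGAAC", "ATGAAC"]
        print(theta_no_singletons(sample))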

  11. The error analysis of the determination of the activity coefficients via the isopiestic method

    International Nuclear Information System (INIS)

    Zhou Jun; Chen Qiyuan; Fang Zheng; Liang Yizeng; Liu Shijun; Zhou Yong

    2005-01-01

    Error analysis is very important to experimental design. The error analysis of the determination of activity coefficients for a binary system via the isopiestic method shows that the error sources include not only the experimental errors of the analyzed molalities and the measured osmotic coefficients, but also the deviation of the regressed values from the experimental data when a regression function is used. It also shows that accurate chemical analysis of the molality of the test solution is important, and that it is preferable to keep the error of the measured osmotic coefficients constant in all isopiestic experiments, including those on very dilute solutions. The isopiestic experiments on dilute solutions are very important, and the lowest molality should be low enough that a theoretical method can be used below it. It is necessary that isopiestic experiments be done on test solutions of less than 0.1 mol·kg⁻¹; for most electrolyte solutions, it is usually preferable to require the lowest molality to be less than 0.05 mol·kg⁻¹. Moreover, the experimental molalities of the test solutions should first be arranged by keeping the interval of the logarithms of the molalities nearly constant; second, a larger number of high molalities should be arranged, and we propose to arrange the experimental molalities greater than 1 mol·kg⁻¹ according to some kind of arithmetical progression of the intervals of the molalities. After the experiments, the error of the calculated activity coefficients of the solutes can be calculated from the actual values of the errors of the measured isopiestic molalities and the deviations of the regressed values from the experimental values with our derived equations.

  12. Method for producing nanowire-polymer composite electrodes

    Energy Technology Data Exchange (ETDEWEB)

    Pei, Qibing; Yu, Zhibin

    2017-11-21

    A method for producing flexible, nanoparticle-polymer composite electrodes is described. Conductive nanoparticles, preferably metal nanowires or nanotubes, are deposited on a smooth surface of a platform to produce a porous conductive layer. A second application of conductive nanoparticles or a mixture of nanoparticles can also be deposited to form a porous conductive layer. The conductive layer is then coated with at least one coating of monomers that is polymerized to form a conductive layer-polymer composite film. Optionally, a protective coating can be applied to the top of the composite film. In one embodiment, the monomer coating includes light transducing particles to reduce the total internal reflection of light through the composite film or pigments that absorb light at one wavelength and re-emit light at a longer wavelength. The resulting composite film has an active side that is smooth with surface height variations of 100 nm or less.

  13. Method and apparatus for producing small hollow spheres

    International Nuclear Information System (INIS)

    Hendricks, C.D.

    1979-01-01

    A method and apparatus are described for producing small hollow spheres of glass, metal or plastic, wherein the sphere material is mixed with or contains as part of the composition a blowing agent which decomposes at high temperature (T ≥ 600 °C). As the temperature is quickly raised, the blowing agent decomposes and the resulting gas expands from within, thus forming a hollow sphere of controllable thickness. The hollow spheres thus produced (20 to 10³ μm) have a variety of applications, and are particularly useful in the fabrication of targets for laser implosion, such as neutron sources, laser fusion physics studies, and laser-initiated fusion power plants.

  14. Systems and methods for producing low work function electrodes

    Science.gov (United States)

    Kippelen, Bernard; Fuentes-Hernandez, Canek; Zhou, Yinhua; Kahn, Antoine; Meyer, Jens; Shim, Jae Won; Marder, Seth R.

    2015-07-07

    According to an exemplary embodiment of the invention, systems and methods are provided for producing low work function electrodes. According to an exemplary embodiment, a method is provided for reducing a work function of an electrode. The method includes applying, to at least a portion of the electrode, a solution comprising a Lewis basic oligomer or polymer; and based at least in part on applying the solution, forming an ultra-thin layer on a surface of the electrode, wherein the ultra-thin layer reduces the work function associated with the electrode by greater than 0.5 eV. According to another exemplary embodiment of the invention, a device is provided. The device includes a semiconductor; at least one electrode disposed adjacent to the semiconductor and configured to transport electrons in or out of the semiconductor.

  15. A highly accurate finite-difference method with minimum dispersion error for solving the Helmholtz equation

    KAUST Repository

    Wu, Zedong

    2018-04-05

    Numerical simulation of the acoustic wave equation in either isotropic or anisotropic media is crucial to seismic modeling, imaging and inversion. Indeed, it represents the core computational cost of these highly advanced seismic processing methods. However, the conventional finite-difference method suffers from severe numerical dispersion errors and S-wave artifacts when solving the acoustic wave equation for anisotropic media. We propose a method to obtain the finite-difference coefficients by comparing the numerical dispersion with the exact form. We find the optimal finite-difference coefficients that share the dispersion characteristics of the exact equation with minimal dispersion error. The method is extended to solve the acoustic wave equation in transversely isotropic (TI) media without S-wave artifacts. Numerical examples show that the method is highly accurate and efficient.
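
    The coefficient-selection idea can be illustrated on the simplest case, a symmetric second-derivative stencil: instead of Taylor matching at k = 0, fit the discrete symbol to −k² over a band of wavenumbers by least squares. The bandwidth and stencil width below are arbitrary illustrative choices, not the paper's.

        import numpy as np

        # Stencil: f'' ~ (1/h^2) * (a0 f_0 + sum_m a_m (f_{+m} + f_{-m})).
        h, M = 1.0, 4                                  # grid spacing, half-width
        k = np.linspace(1e-3, 0.85 * np.pi / h, 400)   # wavenumber band to fit

        # Discrete symbol columns: a0 -> 1/h^2, a_m -> 2 cos(m k h)/h^2.
        A = np.column_stack([np.ones_like(k)] +
                            [2 * np.cos(m * k * h) for m in range(1, M + 1)]) / h ** 2
        target = -k ** 2                               # exact symbol of d^2/dx^2
        coeffs, *_ = np.linalg.lstsq(A, target, rcond=None)
        print("optimised stencil:", coeffs)            # compare with Taylor weights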

  16. Nonlinear effect of the structured light profilometry in the phase-shifting method and error correction

    International Nuclear Information System (INIS)

    Zhang Wan-Zhen; Chen Zhe-Bo; Xia Bin-Feng; Lin Bin; Cao Xiang-Qun

    2014-01-01

    Digital structured light (SL) profilometry is increasingly used in three-dimensional (3D) measurement technology. However, the nonlinearity of off-the-shelf projectors and cameras seriously reduces the measurement accuracy. In this paper, we first review the nonlinear effects of the projector-camera system in the phase-shifting structured light depth measurement method. We show that high-order harmonic wave components lead to phase error in the phase-shifting method. A practical method based on frequency-domain filtering is then proposed for nonlinear error reduction. By using this method, nonlinear calibration of the SL system is not required. Moreover, both the nonlinear effects of the projector and the camera can be effectively reduced. Simulations and experiments have verified our nonlinear correction method.
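
    For context, the standard N-step phase-shifting demodulation that this analysis starts from is shown below; with an ideal projector-camera pair the arctangent is exact, and it is the higher-order harmonics from system nonlinearity that turn into the phase ripple the paper removes by frequency-domain filtering.

        import numpy as np

        def demodulate(frames):                        # frames: (N, H, W) images
            n = frames.shape[0]
            k = np.arange(n)
            s = np.tensordot(np.sin(2 * np.pi * k / n), frames, axes=(0, 0))
            c = np.tensordot(np.cos(2 * np.pi * k / n), frames, axes=(0, 0))
            return np.arctan2(-s, c)                   # wrapped phase map

        # Using more phase steps N also suppresses low-order harmonics, a common
        # alternative when the nonlinearity cannot be calibrated or filtered.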

  17. The relative size of measurement error and attrition error in a panel survey. Comparing them with a new multi-trait multi-method model

    NARCIS (Netherlands)

    Lugtig, Peter

    2017-01-01

    This paper proposes a method to simultaneously estimate both measurement and nonresponse errors for attitudinal and behavioural questions in a longitudinal survey. The method uses a Multi-Trait Multi-Method (MTMM) approach, which is commonly used to estimate the reliability and validity of survey questions.

  18. Error Estimates for a Semidiscrete Finite Element Method for Fractional Order Parabolic Equations

    KAUST Repository

    Jin, Bangti; Lazarov, Raytcho; Zhou, Zhi

    2013-01-01

    initial data, i.e., ν ∈ H²(Ω) ∩ H¹₀(Ω) and ν ∈ L²(Ω). For the lumped mass method, the optimal L²-norm error estimate is valid only under an additional assumption on the mesh, which in two dimensions is known to be satisfied for symmetric meshes. Finally

  19. The Effect of Error in Item Parameter Estimates on the Test Response Function Method of Linking.

    Science.gov (United States)

    Kaskowitz, Gary S.; De Ayala, R. J.

    2001-01-01

    Studied the effect of item parameter estimation for computation of linking coefficients for the test response function (TRF) linking/equating method. Simulation results showed that linking was more accurate when there was less error in the parameter estimates, and that 15 or 25 common items provided better results than 5 common items under both…

  20. Relative and Absolute Error Control in a Finite-Difference Method Solution of Poisson's Equation

    Science.gov (United States)

    Prentice, J. S. C.

    2012-01-01

    An algorithm for error control (absolute and relative) in the five-point finite-difference method applied to Poisson's equation is described. The algorithm is based on discretization of the domain of the problem by means of three rectilinear grids, each of different resolution. We discuss some hardware limitations associated with the algorithm,…
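
    A simplified two-grid version of this idea (the article uses three grids) is sketched below: solve the five-point scheme on nested meshes and use the Richardson factor 4/3 of a second-order method to estimate the coarse-grid error. The Jacobi smoother and sweep count are illustrative choices.

        import numpy as np

        def solve(n, sweeps=10000):
            # Five-point Jacobi solve of -Laplace(u) = f on the unit square
            # with zero Dirichlet data; f chosen so that u = sin(pi x) sin(pi y).
            h = 1.0 / (n + 1)
            x = np.linspace(0, 1, n + 2)
            X, Y = np.meshgrid(x, x, indexing="ij")
            f = 2 * np.pi ** 2 * np.sin(np.pi * X) * np.sin(np.pi * Y)
            u = np.zeros((n + 2, n + 2))
            for _ in range(sweeps):                    # enough sweeps that the
                u[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1]  # iteration
                                        + u[1:-1, :-2] + u[1:-1, 2:]  # error is
                                        + h * h * f[1:-1, 1:-1])      # negligible
            return u

        coarse, fine = solve(31), solve(63)            # nested grids (h and h/2)
        diff = fine[::2, ::2] - coarse                 # compare at shared nodes
        print("Richardson error estimate:", np.abs(diff).max() * 4.0 / 3.0)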

  1. Digital halftoning methods for selectively partitioning error into achromatic and chromatic channels

    Science.gov (United States)

    Mulligan, Jeffrey B.

    1990-01-01

    A method is described for reducing the visibility of artifacts arising in the display of quantized color images on CRT displays. The method is based on the differential spatial sensitivity of the human visual system to chromatic and achromatic modulations. Because the visual system has the highest spatial and temporal acuity for the luminance component of an image, a technique which will reduce luminance artifacts at the expense of introducing high-frequency chromatic errors is sought. A method based on controlling the correlations between the quantization errors in the individual phosphor images is explored. The luminance component is greatest when the phosphor errors are positively correlated, and is minimized when the phosphor errors are negatively correlated. The greatest effect of the correlation is obtained when the intensity quantization step sizes of the individual phosphors have equal luminances. For the ordered dither algorithm, a version of the method can be implemented by simply inverting the matrix of thresholds for one of the color components.
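
    The ordered-dither variant mentioned at the end is easy to sketch: quantize one channel with the Bayer threshold matrix and a second channel with its inverse, so the per-pixel quantization errors are negatively correlated and their sum (a luminance proxy) is nearly constant. The 4x4 matrix and flat test patch are illustrative choices.

        import numpy as np

        bayer = np.array([[0, 8, 2, 10],
                          [12, 4, 14, 6],
                          [3, 11, 1, 9],
                          [15, 7, 13, 5]]) / 16.0

        def dither(channel, thresholds):
            h, w = channel.shape
            t = np.tile(thresholds, (h // 4 + 1, w // 4 + 1))[:h, :w]
            return (channel > t).astype(float)

        img = np.full((64, 64), 0.5)                   # flat mid-grey test patch
        r = dither(img, bayer)                         # red: normal matrix
        g = dither(img, 1.0 - bayer - 1.0 / 16.0)      # green: inverted matrix
        # r + g is nearly constant: the error moves into the chromatic channel.
        print("luminance variance:", np.var(r + g))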

  2. Addressing Phase Errors in Fat-Water Imaging Using a Mixed Magnitude/Complex Fitting Method

    Science.gov (United States)

    Hernando, D.; Hines, C. D. G.; Yu, H.; Reeder, S.B.

    2012-01-01

    Accurate, noninvasive measurements of liver fat content are needed for the early diagnosis and quantitative staging of nonalcoholic fatty liver disease. Chemical shift-based fat quantification methods acquire images at multiple echo times using a multiecho spoiled gradient echo sequence, and provide fat fraction measurements through postprocessing. However, phase errors, such as those caused by eddy currents, can adversely affect fat quantification. These phase errors are typically most significant at the first echo of the echo train, and introduce bias in complex-based fat quantification techniques. These errors can be overcome using a magnitude-based technique (where the phase of all echoes is discarded), but at the cost of significantly degraded signal-to-noise ratio, particularly for certain choices of echo time combinations. In this work, we develop a reconstruction method that overcomes these phase errors without the signal-to-noise ratio penalty incurred by magnitude fitting. This method discards the phase of the first echo (which is often corrupted) while maintaining the phase of the remaining echoes (where phase is unaltered). We test the proposed method on 104 patient liver datasets (from 52 patients, each scanned twice), where the fat fraction measurements are compared to coregistered spectroscopy measurements. We demonstrate that mixed fitting is able to provide accurate fat fraction measurements with high signal-to-noise ratio and low bias over a wide choice of echo combinations. PMID:21713978

  3. Evaluation of Two Methods for Modeling Measurement Errors When Testing Interaction Effects with Observed Composite Scores

    Science.gov (United States)

    Hsiao, Yu-Yu; Kwok, Oi-Man; Lai, Mark H. C.

    2018-01-01

    Path models with observed composites based on multiple items (e.g., mean or sum score of the items) are commonly used to test interaction effects. Under this practice, researchers generally assume that the observed composites are measured without errors. In this study, we reviewed and evaluated two alternative methods within the structural…

  4. The assessment of cognitive errors using an observer-rated method.

    Science.gov (United States)

    Drapeau, Martin

    2014-01-01

    Cognitive Errors (CEs) are a key construct in cognitive behavioral therapy (CBT). Integral to CBT is that individuals with depression process information in an overly negative or biased way, and that this bias is reflected in specific depressotypic CEs which are distinct from normal information processing. Despite the importance of this construct in CBT theory, practice, and research, few methods are available to researchers and clinicians to reliably identify CEs as they occur. In this paper, the author presents a rating system, the Cognitive Error Rating Scale, which can be used by trained observers to identify and assess the cognitive errors of patients or research participants in vivo, i.e., as they are used or reported by the patients or participants. The method is described, including some of the more important rating conventions to be considered when using the method. This paper also describes the 15 cognitive errors assessed, and the different summary scores, including valence of the CEs, that can be derived from the method.

  5. Method for producing rapid pH changes

    Science.gov (United States)

    Clark, J.H.; Campillo, A.J.; Shapiro, S.L.; Winn, K.R.

    A method of initiating a rapid pH change in a solution comprises irradiating the solution with an intense flux of electromagnetic radiation of a frequency which produces a substantial pK change in a compound in solution. To optimize the resulting pH change, the compound being irradiated in solution should have an excited-state lifetime substantially longer than the time required to establish an excited-state acid-base equilibrium in the solution. Desired pH changes can be accomplished in nanoseconds or less by means of picosecond pulses of laser radiation.

  6. Tight Error Bounds for Fourier Methods for Option Pricing for Exponential Levy Processes

    KAUST Repository

    Crocce, Fabian

    2016-01-06

    Prices of European options whose underlying asset is driven by a Lévy process are solutions to partial integro-differential equations (PIDEs) that generalise the Black-Scholes equation by incorporating a non-local integral term to account for the discontinuities in the asset price. The Lévy–Khintchine formula provides an explicit representation of the characteristic function of a Lévy process (cf. [6]): one can derive an exact expression for the Fourier transform of the solution of the relevant PIDE. The rapid rate of convergence of the trapezoid quadrature and the speedup provide efficient methods for evaluating option prices, possibly for a range of parameter configurations simultaneously. A couple of works have been devoted to the error analysis and parameter selection for these transform-based methods. In [5] several payoff functions are considered for a rather general set of models, whose characteristic function is assumed to be known. [4] presents the framework and theoretical approach for the error analysis, and establishes polynomial convergence rates for approximations of the option prices. [1] presents FT-related methods with a curved integration contour. The classical flat FT-methods have, on the other hand, been extended to option pricing problems beyond the European framework [3]. We present a methodology for studying and bounding the error committed when using FT methods to compute option prices. We also provide a systematic way of choosing the parameters of the numerical method, minimising the error bound and guaranteeing adherence to a pre-described error tolerance. We focus on exponential Lévy processes that may be either diffusive or pure-jump in type. Our contribution is to derive a tight error bound for a Fourier transform method when pricing options under risk-neutral Lévy dynamics. We present a simplified bound that separates the contributions of the payoff and of the process in an easily processed and extensible product form that

  7. A new method to produce nanoscale iron for nitrate removal

    International Nuclear Information System (INIS)

    Chen, S.-S.; Hsu, H.-D.; Li, C.-W.

    2004-01-01

    This article proposes a novel technology combining electrochemical and ultrasonic methods to produce nanoscale zero-valent iron (NZVI). With platinum placed at the cathode and the presence of a dispersion agent, 0.2 g/l cetylpyridinium chloride (CPC), a cationic surfactant, in the solution, nanoscale iron particles were successfully produced with diameters of 1-20 nm and a specific surface area of 25.4 m²/g. The produced NZVI was tested in batch experiments for nitrate removal. The results showed that the nitrate reduction was affected by pH. At low pH, nitrate showed a faster decline and more reduction in terms of g NO₃⁻-N/g NZVI. The reaction was first order, and the kinetic coefficients for the four pHs were directly related to pH with R² > 0.95. Compared with microscale zero-valent iron (45 μm, 0.183 m²/g), which converted nitrate to ammonia completely, NZVI converted nitrate to ammonia only partially, from 36.2 to 45.3% depending on pH. In the mass balance of iron species, since the dissolved iron in the solution was very low, the formation of Fe₂O₃ was recognized. Thus the reaction mechanisms can be determined.

  8. Method of producing excited states of atomic nuclei

    International Nuclear Information System (INIS)

    Morita, M.; Morita, R.

    1976-01-01

    A method is claimed of producing excited states of atomic nuclei which comprises bombarding atoms with x rays or electrons, characterized in that (1) in the atoms selected to be produced in the excited state of their nuclei, (a) the difference between the nuclear excitation energy and the difference between the binding energies of adequately selected two electron orbits is small enough to introduce the nuclear excitation by electron transition, and (b) the system of the nucleus and the electrons in the case of ionizing an orbital electron in said atoms should satisfy the spin and parity conservation laws; and (2) the energy of the bombarding x rays or electrons should be larger than the binding energy of one of the said two electron orbits which is located at shorter distance from the atomic nucleus. According to the present invention, atomic nuclei can be excited in a relatively simple manner without requiring the use of large scale apparatus, equipment and production facilities, e.g., factories. It is also possible to produce radioactive substances or separate a particular isotope with an extremely high purity from a mixture of isotopes by utilizing nuclear excitation

  9. A human error taxonomy and its application to an automatic method accident analysis

    International Nuclear Information System (INIS)

    Matthews, R.H.; Winter, P.W.

    1983-01-01

    Commentary is provided on the quantification aspects of human factors analysis in risk assessment. Methods for quantifying human error in a plant environment are discussed and their application to system quantification explored. Such a programme entails consideration of the data base and a taxonomy of factors contributing to human error. A multi-levelled approach to system quantification is proposed, each level being treated differently drawing on the advantages of different techniques within the fault/event tree framework. Management, as controller of organization, planning and procedure, is assigned a dominant role. (author)

  10. On Error Estimation in the Conjugate Gradient Method and why it Works in Finite Precision Computations

    Czech Academy of Sciences Publication Activity Database

    Strakoš, Zdeněk; Tichý, Petr

    2002-01-01

    Roč. 13, - (2002), s. 56-80 ISSN 1068-9613 R&D Projects: GA ČR GA201/02/0595 Institutional research plan: AV0Z1030915 Keywords: conjugate gradient method * Gauss quadrature * evaluation of convergence * error bounds * finite precision arithmetic * rounding errors * loss of orthogonality Subject RIV: BA - General Mathematics Impact factor: 0.565, year: 2002 http://etna.mcs.kent.edu/volumes/2001-2010/vol13/abstract.php?vol=13&pages=56-80

  11. A review of some a posteriori error estimates for adaptive finite element methods

    Czech Academy of Sciences Publication Activity Database

    Segeth, Karel

    2010-01-01

    Roč. 80, č. 8 (2010), s. 1589-1600 ISSN 0378-4754. [European Seminar on Coupled Problems. Jetřichovice, 08.06.2008-13.06.2008] R&D Projects: GA AV ČR(CZ) IAA100190803 Institutional research plan: CEZ:AV0Z10190503 Keywords : hp-adaptive finite element method * a posteriori error estimators * computational error estimates Subject RIV: BA - General Mathematics Impact factor: 0.812, year: 2010 http://www.sciencedirect.com/science/article/pii/S0378475408004230

  12. Periodic boundary conditions and the error-controlled fast multipole method

    Energy Technology Data Exchange (ETDEWEB)

    Kabadshow, Ivo

    2012-08-22

    The simulation of pairwise interactions in huge particle ensembles is a vital issue in scientific research. Especially the calculation of long-range interactions poses limitations to the system size, since these interactions scale quadratically with the number of particles. Fast summation techniques like the Fast Multipole Method (FMM) can help to reduce the complexity to O(N). This work extends the possible range of applications of the FMM to periodic systems in one, two and three dimensions with one unique approach. Together with a tight error control, this contribution enables the simulation of periodic particle systems for different applications without the need to know and tune the FMM specific parameters. The implemented error control scheme automatically optimizes the parameters to obtain an approximation for the minimal runtime for a given energy error bound.

  13. Microemulsion extrusion technique: a new method to produce lipid nanoparticles

    Energy Technology Data Exchange (ETDEWEB)

    Jesus, Marcelo Bispo de, E-mail: dejesusmb@gmail.com; Radaic, Allan [University of Campinas-UNICAMP, Department of Biochemistry, Institute of Biology (Brazil); Zuhorn, Inge S. [University of Groningen, Department of Membrane Cell Biology, University Medical Center (Netherlands); Paula, Eneida de [University of Campinas-UNICAMP, Department of Biochemistry, Institute of Biology (Brazil)

    2013-10-15

    Solid lipid nanoparticles (SLN) and nanostructured lipid carriers (NLC) have been intensively investigated for different applications, including their use as drug and gene delivery systems. Different techniques have been employed to produce lipid nanoparticles, of which high pressure homogenization is the standard technique that is adopted nowadays. Although this method has a high efficiency, does not require the use of organic solvents, and allows large-scale production, some limitations impede its application at laboratory scale: the equipment is expensive, there is a need of huge amounts of surfactants and co-surfactants during the preparation, and the operating conditions are energy intensive. Here, we present the microemulsion extrusion technique as an alternative method to prepare lipid nanoparticles. The parameters to produce lipid nanoparticles using microemulsion extrusion were established, and the lipid particles produced (SLN, NLC, and liposomes) were characterized with regard to size (from 130 to 190 nm), zeta potential, and drug (mitoxantrone) and gene (pDNA) delivery properties. In addition, the particles' in vitro co-delivery capacity (to carry mitoxantrone plus pDNA encoding the phosphatase and tensin homologue, PTEN) was tested in normal (BALB 3T3 fibroblast) and cancer (PC3 prostate and MCF-7 breast) cell lines. The results show that the microemulsion extrusion technique is fast, inexpensive, reproducible, free of organic solvents, and suitable for small volume preparations of lipid nanoparticles. Its application is particularly interesting when using rare and/or costly drugs or ingredients (e.g., cationic lipids for gene delivery or labeled lipids for nanoparticle tracking/diagnosis)

  14. Microemulsion extrusion technique: a new method to produce lipid nanoparticles

    International Nuclear Information System (INIS)

    Jesus, Marcelo Bispo de; Radaic, Allan; Zuhorn, Inge S.; Paula, Eneida de

    2013-01-01

    Solid lipid nanoparticles (SLN) and nanostructured lipid carriers (NLC) have been intensively investigated for different applications, including their use as drug and gene delivery systems. Different techniques have been employed to produce lipid nanoparticles, of which high pressure homogenization is the standard technique that is adopted nowadays. Although this method has a high efficiency, does not require the use of organic solvents, and allows large-scale production, some limitations impede its application at laboratory scale: the equipment is expensive, there is a need of huge amounts of surfactants and co-surfactants during the preparation, and the operating conditions are energy intensive. Here, we present the microemulsion extrusion technique as an alternative method to prepare lipid nanoparticles. The parameters to produce lipid nanoparticles using microemulsion extrusion were established, and the lipid particles produced (SLN, NLC, and liposomes) were characterized with regard to size (from 130 to 190 nm), zeta potential, and drug (mitoxantrone) and gene (pDNA) delivery properties. In addition, the particles’ in vitro co-delivery capacity (to carry mitoxantrone plus pDNA encoding the phosphatase and tensin homologue, PTEN) was tested in normal (BALB 3T3 fibroblast) and cancer (PC3 prostate and MCF-7 breast) cell lines. The results show that the microemulsion extrusion technique is fast, inexpensive, reproducible, free of organic solvents, and suitable for small volume preparations of lipid nanoparticles. Its application is particularly interesting when using rare and/or costly drugs or ingredients (e.g., cationic lipids for gene delivery or labeled lipids for nanoparticle tracking/diagnosis)

  15. Method of producing oxidation resistant coatings for molybdenum

    International Nuclear Information System (INIS)

    Timmons, G.A.

    1989-01-01

    A method is described for producing a molybdenum element having adherently bonded thereto a thermally self-healing plasma-sprayed coating consisting essentially of a composite of molybdenum and a refractory oxide material capable of reacting with molybdenum oxide under oxidizing conditions to form a substantially thermally stable refractory compound of molybdenum, the method comprising plasma-spraying a coating formed by the step-wise application of a plurality of interbonded plasma-sprayed layers of a composite of molybdenum/refractory oxide material produced from a particulate mixture thereof. The coating comprises a first layer of molybdenum plasma-spray bonded to the substrate of the molybdenum element, a second layer of a plasma-sprayed mixture of particulate molybdenum/refractory oxide consisting essentially of predominantly molybdenum bonded to the first layer, and succeeding layers of this mixture. The next step is heating the coated molybdenum element under oxidizing conditions to an elevated temperature sufficient to cause oxygen to diffuse into the surface of the multi-layered coating to react with dispersed molybdenum therein to form molybdenum oxide and effect healing of the coating by reaction of the molybdenum oxide with the contained refractory oxide, thereby protecting the substrate of the molybdenum element against oxidation.

  16. Optical System Error Analysis and Calibration Method of High-Accuracy Star Trackers

    Directory of Open Access Journals (Sweden)

    Zheng You

    2013-04-01

    The star tracker is a high-accuracy attitude measurement device widely used in spacecraft. Its performance depends largely on the precision of the optical system parameters. Therefore, the analysis of the optical system parameter errors and a precise calibration model are crucial to the accuracy of the star tracker. Research in this field has so far lacked a systematic and universal analysis. This paper proposes in detail an approach for the synthetic error analysis of the star tracker, without complicated theoretical derivation. This approach can determine the error propagation relationship of the star tracker, and can build an error model intuitively and systematically. The analysis results can be used as a foundation and a guide for the optical design, calibration, and compensation of the star tracker. A calibration experiment is designed and conducted. Excellent calibration results are achieved based on the calibration model. To summarize, the error analysis approach and the calibration method are proved to be adequate and precise, and could provide an important guarantee for the design, manufacture, and measurement of high-accuracy star trackers.

  17. Optical system error analysis and calibration method of high-accuracy star trackers.

    Science.gov (United States)

    Sun, Ting; Xing, Fei; You, Zheng

    2013-04-08

    The star tracker is a high-accuracy attitude measurement device widely used in spacecraft. Its performance depends largely on the precision of the optical system parameters. Therefore, the analysis of the optical system parameter errors and a precise calibration model are crucial to the accuracy of the star tracker. Research in this field has so far lacked a systematic and universal analysis. This paper presents a detailed approach for the synthetic error analysis of the star tracker that avoids complicated theoretical derivation. This approach can determine the error propagation relationship of the star tracker and can be used to build an error model intuitively and systematically. The analysis results can be used as a foundation and a guide for the optical design, calibration, and compensation of the star tracker. A calibration experiment is designed and conducted. Excellent calibration results are achieved based on the calibration model. To summarize, the error analysis approach and the calibration method are proved to be adequate and precise, and could provide an important guarantee for the design, manufacture, and measurement of high-accuracy star trackers.
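
    The two records above concern the calibration of star tracker optical system parameters. As a rough illustration of such a calibration step, the hedged sketch below fits the focal length and principal point of a simple pinhole camera model to simulated star centroids by least squares; the model, variable names, and noise level are our own assumptions, not the authors' error model.

    ```python
    # Hedged sketch: fit pinhole-model parameters (focal length f, principal
    # point px, py) to star centroids by nonlinear least squares.
    import numpy as np
    from scipy.optimize import least_squares

    def project(params, dirs):
        """Pinhole projection of unit star direction vectors (N,3) onto the focal plane."""
        f, px, py = params
        return np.column_stack((f * dirs[:, 0] / dirs[:, 2] + px,
                                f * dirs[:, 1] / dirs[:, 2] + py))

    def residuals(params, dirs, centroids):
        return (project(params, dirs) - centroids).ravel()

    # Synthetic example: true f = 50 mm, principal point offset (0.02, -0.01) mm.
    rng = np.random.default_rng(0)
    angles = rng.uniform(-0.1, 0.1, size=(100, 2))       # star off-axis angles [rad]
    dirs = np.column_stack((np.tan(angles[:, 0]), np.tan(angles[:, 1]), np.ones(100)))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    truth = np.array([50.0, 0.02, -0.01])
    centroids = project(truth, dirs) + rng.normal(0, 1e-4, (100, 2))  # centroiding noise

    fit = least_squares(residuals, x0=[49.0, 0.0, 0.0], args=(dirs, centroids))
    print(fit.x)   # approximately [50.0, 0.02, -0.01]
    ```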

  18. Improvement of spatial discretization error on the semi-analytic nodal method using the scattered source subtraction method

    International Nuclear Information System (INIS)

    Yamamoto, Akio; Tatsumi, Masahiro

    2006-01-01

    In this paper, the scattered source subtraction (SSS) method is newly proposed to improve the spatial discretization error of the semi-analytic nodal method with the flat-source approximation. In the SSS method, the scattered source is subtracted from both sides of the diffusion or transport equation so that the spatial variation of the source term becomes small. The same neutron balance equation is still used in the SSS method. Since the SSS method merely modifies the coefficients of the node coupling equations (those used to evaluate the response of partial currents), its implementation is easy. The validity of the present method is verified through test calculations carried out in PWR multi-assembly configurations. The calculation results show that the SSS method can significantly improve the spatial discretization error. Since the SSS method has no negative impact on execution time, convergence behavior or memory requirements, it will be useful for reducing the spatial discretization error of the semi-analytic nodal method with the flat-source approximation. (author)
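
    A minimal one-group illustration of the idea (our own notation, not taken from the paper): moving the scattered source to the left-hand side leaves only the smoother remaining source to be represented by the flat-source approximation,

    \[
    -D\nabla^{2}\phi + \Sigma_t\,\phi \;=\; \Sigma_s\,\phi + S
    \quad\Longrightarrow\quad
    -D\nabla^{2}\phi + \left(\Sigma_t-\Sigma_s\right)\phi \;=\; S,
    \]

    so the neutron balance is unchanged while the right-hand side varies far less across a node.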

  19. A heteroscedastic measurement error model for method comparison data with replicate measurements.

    Science.gov (United States)

    Nawarathna, Lakshika S; Choudhary, Pankaj K

    2015-03-30

    Measurement error models offer a flexible framework for modeling data collected in studies comparing methods of quantitative measurement. These models generally make two simplifying assumptions: (i) the measurements are homoscedastic, and (ii) the unobservable true values of the methods are linearly related. One or both of these assumptions may be violated in practice. In particular, the error variabilities of the methods may depend on the magnitude of measurement, or the true values may be nonlinearly related. Data with these features call for a heteroscedastic measurement error model that allows nonlinear relationships in the true values. We present such a model for the case when the measurements are replicated, discuss its fitting, and explain how to evaluate similarity of measurement methods and agreement between them, two common goals of data analysis, under this model. Model fitting involves dealing with the lack of a closed form for the likelihood function. We consider estimation methods that approximate either the likelihood or the model to yield approximate maximum likelihood estimates. The fitting methods are evaluated in a simulation study. The proposed methodology is used to analyze a cholesterol dataset. Copyright © 2015 John Wiley & Sons, Ltd.
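
    A schematic form of such a model (our own notation; the paper's exact specification may differ): with subject i, method j, and replicate k,

    \[
    Y_{ijk} \;=\; \alpha_j + \beta_j\,b_i + e_{ijk},
    \qquad
    b_i \sim \mathcal{N}\!\left(\mu_b,\sigma_b^{2}\right),
    \qquad
    e_{ijk} \sim \mathcal{N}\!\left(0,\; \sigma_j^{2}\,g\!\left(\alpha_j+\beta_j b_i\right)^{2}\right),
    \]

    where b_i is the latent true value and g is a variance function letting the error spread grow with the magnitude of measurement; taking g ≡ 1 and a linear relation recovers the usual homoscedastic model.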

  20. A Novel Error Model of Optical Systems and an On-Orbit Calibration Method for Star Sensors

    Directory of Open Access Journals (Sweden)

    Shuang Wang

    2015-12-01

    Full Text Available In order to improve the on-orbit measurement accuracy of star sensors, the effects of image-plane rotary error, image-plane tilt error and distortions of optical systems resulting from the on-orbit thermal environment were studied in this paper. Since these issues affect the precision of star image point positions, a novel measurement error model based on the traditional error model is explored. Due to the orthonormal characteristics of the image-plane rotary-tilt errors and the strong nonlinearity among these error parameters, it is difficult to calibrate all the parameters simultaneously. To overcome this difficulty, a modified two-step calibration method for the new error model, based on the Extended Kalman Filter (EKF) and the Least Squares Method (LSM), is presented. The former is used to calibrate the principal point drift, focal length error and distortions of the optical system, while the latter estimates the image-plane rotary-tilt errors. With this calibration method, the error in star image point position caused by the above effects is greatly reduced, from 15.42% to 1.389%. Finally, the simulation results demonstrate that the presented measurement error model for star sensors has higher precision. Moreover, the proposed two-step method can effectively calibrate the model error parameters, and the calibration precision of on-orbit star sensors is also improved considerably.

  1. Reliable methods for computer simulation error control and a posteriori estimates

    CERN Document Server

    Neittaanmäki, P

    2004-01-01

    Recent decades have seen very rapid success in developing numerical methods based on explicit control over approximation errors. It may be said that nowadays a new direction is forming in numerical analysis, the main goal of which is to develop methods of reliable computation. In general, a reliable numerical method must solve two basic problems: (a) generate a sequence of approximations that converges to a solution and (b) verify the accuracy of these approximations. A computer code for such a method must consist of two respective blocks: solver and checker. In this book, we are chie

  2. Suppressing carrier removal error in the Fourier transform method for interferogram analysis

    International Nuclear Information System (INIS)

    Fan, Qi; Yang, Hongru; Li, Gaoping; Zhao, Jianlin

    2010-01-01

    A new carrier removal method for interferogram analysis using the Fourier transform is presented. The proposed method can be used to suppress the carrier removal error as well as the spectral leakage error. First, the carrier frequencies are estimated from the spectral centroid of the up sidelobe of the apodized interferogram, and then the up sidelobe is shifted to the origin in the frequency domain by multiplying the original interferogram by a constructed plane reference wave. In this way, the influence of carrier frequencies that are not integer multiples of the frequency interval, and of the window function used for apodization of the interferogram, can be avoided. The simulation and experimental results show that this method is effective for phase measurement with a high accuracy from a single interferogram.
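
    A hedged 1-D sketch of the two key steps (the test signal, window choice, and band limits are our own toy assumptions; the paper works with 2-D interferograms): estimate the carrier from the spectral centroid of the apodized up sidelobe, then multiply by a constructed reference wave so the sidelobe is shifted to the origin before the phase is extracted.

    ```python
    import numpy as np

    N = 1024
    x = np.arange(N)
    phi = 2.0 * np.sin(2 * np.pi * x / N)              # test phase to recover
    f0 = 37.3 / N                                      # carrier, not an integer bin
    fringe = 1.0 + np.cos(2 * np.pi * f0 * x + phi)

    S = np.fft.fft(fringe * np.hanning(N))             # apodize, then FFT
    freqs = np.fft.fftfreq(N)
    side = freqs > 0.01                                # up sidelobe only
    fc = np.sum(freqs[side] * np.abs(S[side])) / np.sum(np.abs(S[side]))

    analytic = fringe * np.exp(-2j * np.pi * fc * x)   # shift sidelobe to origin
    spec = np.fft.fft(analytic)
    spec[np.abs(freqs) > 0.01] = 0.0                   # crude low-pass filter
    phase = np.unwrap(np.angle(np.fft.ifft(spec)))     # recovered phase, up to a
                                                       # small linear residual
    ```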

  3. A possible alternative to the error prone modified Hodge test to correctly identify the carbapenemase producing Gram-negative bacteria

    Directory of Open Access Journals (Sweden)

    S S Jeremiah

    2014-01-01

    Full Text Available Context: The modified Hodge test (MHT) is widely used as a screening test for the detection of carbapenemases in Gram-negative bacteria. This test has several pitfalls in terms of validity and interpretation. It also has a very low sensitivity in detecting the New Delhi metallo-β-lactamase (NDM). Considering the degree of dissemination of the NDM and the growing pandemic of carbapenem resistance, a more accurate alternative test is needed urgently. Aims: The study compares the performance of the MHT with the commercially available Neo-Sensitabs - Carbapenemases/Metallo-β-Lactamase (MBL) Confirmative Identification pack to find out whether the latter could be an efficient alternative to the former. Settings and Design: A total of 105 isolates of Klebsiella pneumoniae resistant to imipenem and meropenem, collected prospectively over a period of 2 years, were included in the study. Subjects and Methods: The study isolates were tested with the MHT, the Neo-Sensitabs - Carbapenemases/MBL Confirmative Identification pack, and polymerase chain reaction (PCR) for detecting the blaNDM-1 gene. Results: Among the 105 isolates, the MHT identified 100 as carbapenemase producers. Of the five isolates negative by the MHT, four were found to produce MBLs by the Neo-Sensitabs. The Neo-Sensitabs did not give any false negatives when compared against the PCR. Conclusions: The MHT can give false negative results, leading to failure in detecting carbapenemase producers. Considering the other pitfalls of the MHT as well, the Neo-Sensitabs - Carbapenemases/MBL Confirmative Identification pack could be a more efficient alternative for detecting carbapenemase production in Gram-negative bacteria.

  4. A novel method to correct for pitch and yaw patient setup errors in helical tomotherapy

    International Nuclear Information System (INIS)

    Boswell, Sarah A.; Jeraj, Robert; Ruchala, Kenneth J.; Olivera, Gustavo H.; Jaradat, Hazim A.; James, Joshua A.; Gutierrez, Alonso; Pearson, Dave; Frank, Gary; Mackie, T. Rock

    2005-01-01

    An accurate means of determining and correcting for daily patient setup errors is important to the outcome of radiotherapy. While many tools have been developed to detect setup errors, difficulty may arise in accurately adjusting the patient to account for the rotational error components. A novel, automated method to correct for rotational patient setup errors in helical tomotherapy is proposed for a treatment couch that is restricted to motion along translational axes. In tomotherapy, only a narrow superior/inferior section of the target receives dose at any instant; thus rotations in the sagittal and coronal planes may be approximately corrected for by very slow continuous couch motion in a direction perpendicular to the scanning direction. Results from proof-of-principle tests indicate that the method improves the accuracy of treatment delivery, especially for long and narrow targets. Rotational corrections about an axis perpendicular to the transverse plane continue to be implemented easily in tomotherapy by adjustment of the initial gantry angle.
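
    A hedged geometric sketch of the idea (our own toy, not the authors' implementation): for a small pitch error theta about the lateral axis, a target slice at longitudinal position z is displaced vertically by roughly (z - z0)*tan(theta). Because tomotherapy irradiates one narrow slice at a time, a slow couch drift synchronized with the scan can cancel that displacement slice by slice.

    ```python
    import numpy as np

    theta_pitch = np.deg2rad(1.5)          # measured pitch setup error (assumed)
    z0 = 0.0                               # pivot/isocenter longitudinal position, cm
    z_slices = np.linspace(-10, 10, 21)    # slice positions treated over time, cm

    # Vertical couch offset that cancels the slice displacement due to pitch:
    vertical_correction = -(z_slices - z0) * np.tan(theta_pitch)
    for z, dy in zip(z_slices, vertical_correction):
        print(f"slice z = {z:+6.2f} cm -> couch vertical offset {dy:+.3f} cm")
    ```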

  5. A TOA-AOA-Based NLOS Error Mitigation Method for Location Estimation

    Directory of Open Access Journals (Sweden)

    Tianshuang Qiu

    2007-12-01

    Full Text Available This paper proposes a geometric method to locate a mobile station (MS) in a mobile cellular network when both the range and angle measurements are corrupted by non-line-of-sight (NLOS) errors. The MS location is restricted to an enclosed region by geometric constraints derived from the temporal-spatial characteristics of the radio propagation channel. A closed-form equation relating the MS position to the time of arrival (TOA), angle of arrival (AOA), and angle spread is provided. The solution space of the equation is very large because the angle spreads are random variables in nature. A constrained objective function is constructed to further limit the MS position. A Lagrange multiplier-based solution and a numerical solution are proposed to resolve the MS position. The quality of the estimator, in terms of whether it is biased or unbiased, is discussed. The scale factors, which may be used to evaluate the NLOS propagation level, can be estimated by the proposed method, and the AOA seen at base stations may be corrected to some degree. The performance of the proposed method is compared with other hybrid location methods on different NLOS error models and two scenarios of cell layout. It is found that the proposed method deals with NLOS error effectively, and it is attractive for location estimation in cellular networks.

  6. Error analysis of motion correction method for laser scanning of moving objects

    Science.gov (United States)

    Goel, S.; Lohani, B.

    2014-05-01

    The limitation of conventional laser scanning methods is that the objects being scanned should be static. The need to scan moving objects has resulted in the development of new methods capable of generating correct 3D geometry of moving objects. Only a few methods capable of handling object motion during scanning have been reported in the literature, and each utilizes its own models or sensors. Studies on error modelling or analysis of these motion correction methods are lacking. In this paper, we develop the error budget and present the analysis of one such "motion correction" method. This method assumes the availability of position and orientation information of the moving object, which in general can be obtained by installing a POS system on board or by using tracking devices. It then uses this information along with the laser scanner data to apply a correction to the laser data, thus producing correct geometry despite the object being mobile during scanning. The major application of this method lies in the shipping industry, to scan ships either moving or parked in the sea, and to scan other objects like hot air balloons or aerostats. It is to be noted that the other "motion correction" methods described in the literature cannot be applied to the objects mentioned here, which makes the chosen method quite unique. This paper presents some interesting insights into the functioning of the "motion correction" method as well as a detailed account of the behavior and variation of the error due to different sensor components, both alone and in combination with each other. The analysis can be used to obtain insights into the optimal utilization of available components for achieving the best results.
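
    A hedged sketch of the correction step (frame conventions and names assumed by us): use the object's time-stamped pose, e.g. from an onboard POS or tracking device, to map each scan point acquired at time t from the scanner frame into the object frame, removing the apparent deformation caused by object motion during the scan.

    ```python
    import numpy as np

    def rot_z(yaw):
        c, s = np.cos(yaw), np.sin(yaw)
        return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

    def correct(points, times, pose_of_t):
        """points: (N,3) in the scanner frame; pose_of_t(t) -> (R, T), the
        object's pose (object -> scanner frame) at acquisition time t."""
        out = np.empty_like(points)
        for i, (p, t) in enumerate(zip(points, times)):
            R, T = pose_of_t(t)
            out[i] = R.T @ (p - T)      # invert the pose: scanner -> object frame
        return out

    # Toy object translating and yawing slowly while being scanned:
    pose = lambda t: (rot_z(0.01 * t), np.array([0.5 * t, 0.0, 0.0]))
    pts = np.random.default_rng(1).normal(size=(5, 3))
    times = np.linspace(0.0, 2.0, 5)
    print(correct(pts, times, pose))
    ```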

  7. Unidirectional infiltration method to produce crown for dental prosthesis application

    International Nuclear Information System (INIS)

    Pontes, F.H.D.; Taguchi, S.P.; Machado, J.P.B.; Santos, C.

    2009-01-01

    Alumina ceramics have been used in dental prostheses because alumina is inert and presents higher corrosion and shear resistance than metals, excellent aesthetics, and good mechanical resistance. In this work, an infrastructure material for application in dental crowns was produced by glass infiltration into an alumina preform. Various oxides, among them a rare-earth oxide produced from xenotime, were melted at 1450 deg C and heat-treated at 700 deg C to obtain the glass (REglass). The alumina was pre-sintered at 1100 deg C, cut and machined to a predetermined format (unidirectional indirect infiltration), and finally subjected to the infiltration test. The alumina was characterized by porosity (Hg-porosimetry and density) and microstructure (SEM). The wettability of the glass on alumina was determined as a function of temperature, and the contact angle presented a low value (θ<90 deg), showing that the glass can infiltrate the alumina spontaneously. The infiltration test was conducted at the glass melting temperature for 30, 60, 180 and 360 minutes. After infiltration, the samples were cut in longitudinal section, ground and polished, and analyzed by XRD (crystalline phases), SEM (microstructure) and EDS (composition). The REglass presents a greater infiltration height than current processes (direct infiltration) and a homogeneous microstructure, showing that it is a promising method for use by prosthetic technicians and dentists. (author)

  8. Unidirectional infiltration method to produce crown for dental prosthesis application

    Energy Technology Data Exchange (ETDEWEB)

    Pontes, F.H.D.; Taguchi, S.P. [Universidade de Sao Paulo (EEL/DEMAR/USP), Lorena, SP (Brazil). Escola de Engenharia; Borges Junior, L.A. [Centro Universitario de Volta Redonda, RJ (Brazil); Machado, J.P.B. [Instituto Nacional de Pesquisas Espaciais (INPE), Sao Jose dos Campos, SP (Brazil); Santos, C. [ProtMat Materiais Avancados, Guaratingueta, SP (Brazil)

    2009-07-01

    Alumina ceramics have been used in dental prostheses because alumina is inert and presents higher corrosion and shear resistance than metals, excellent aesthetics, and good mechanical resistance. In this work, an infrastructure material for application in dental crowns was produced by glass infiltration into an alumina preform. Various oxides, among them a rare-earth oxide produced from xenotime, were melted at 1450 deg C and heat-treated at 700 deg C to obtain the glass (REglass). The alumina was pre-sintered at 1100 deg C, cut and machined to a predetermined format (unidirectional indirect infiltration), and finally subjected to the infiltration test. The alumina was characterized by porosity (Hg-porosimetry and density) and microstructure (SEM). The wettability of the glass on alumina was determined as a function of temperature, and the contact angle presented a low value (θ<90 deg), showing that the glass can infiltrate the alumina spontaneously. The infiltration test was conducted at the glass melting temperature for 30, 60, 180 and 360 minutes. After infiltration, the samples were cut in longitudinal section, ground and polished, and analyzed by XRD (crystalline phases), SEM (microstructure) and EDS (composition). The REglass presents a greater infiltration height than current processes (direct infiltration) and a homogeneous microstructure, showing that it is a promising method for use by prosthetic technicians and dentists. (author)

  9. Method and apparatus for producing food grade carbon dioxide

    International Nuclear Information System (INIS)

    Nobles, J.E.; Swenson, L.K.

    1984-01-01

    A method is disclosed of producing food grade carbon dioxide from an impure carbon dioxide source stream containing contaminants which may include light and heavy hydrocarbons (at least C₁ to C₃) and light sulfur compounds such as hydrogen sulfide and carbonyl sulfide, as well as heavier sulfur constituents in the nature of mercaptans (RSH) and/or organic mono- and disulfides (RSR and RSSR). Nitrogen, water and/or oxygen may also be present in varying amounts in the impure feed stream. The feed gas is first rectified with liquid carbon dioxide condensed from a part of the feed stream to remove heavy hydrocarbons and heavy sulfur compounds, then passed through an absorber to effect removal of the light sulfur compounds, next subjected to an oxidizing atmosphere capable of converting all of the C₂ hydrocarbons and optionally a part of the methane to carbon oxides and water, chilled to condense the water in the remaining gas stream without formation of hydrates, liquefied for ease of handling and storage, and finally stripped to remove residual contaminants such as methane, carbon monoxide and nitrogen to produce the final food grade carbon dioxide product.

  10. Tungsten--nickel--cobalt alloy and method of producing same

    International Nuclear Information System (INIS)

    Dickinson, J.M.; Riley, R.E.

    1977-01-01

    An improved tungsten alloy having a tungsten content of approximately 95 weight percent, a nickel content of about 3 weight percent, and a balance of cobalt of about 2 weight percent is described. A method for producing this tungsten--nickel--cobalt alloy is further described and comprises coating the tungsten particles with a nickel--cobalt alloy, pressing the coated particles into a compact shape, heating the compact in hydrogen to a temperature in the range of 1400 deg C and holding at this elevated temperature for a period of about 2 hours, increasing this elevated temperature to about 1500 deg C and holding for 1 hour at this temperature, cooling to about 1200 deg C and replacing the hydrogen atmosphere with an inert argon atmosphere while maintaining this elevated temperature for a period of about 1/2 hour, and cooling the resulting alloy to room temperature in this argon atmosphere.

  11. Method for producing dustless graphite spheres from waste graphite fines

    Science.gov (United States)

    Pappano, Peter J [Oak Ridge, TN; Rogers, Michael R [Clinton, TN

    2012-05-08

    A method for producing graphite spheres from graphite fines by charging a quantity of spherical media into a rotatable cylindrical overcoater, charging a quantity of graphite fines into the overcoater thereby forming a first mixture of spherical media and graphite fines, rotating the overcoater at a speed such that the first mixture climbs the wall of the overcoater before rolling back down to the bottom thereby forming a second mixture of spherical media, graphite fines, and graphite spheres, removing the second mixture from the overcoater, sieving the second mixture to separate graphite spheres, charging the first mixture back into the overcoater, charging an additional quantity of graphite fines into the overcoater, adjusting processing parameters like overcoater dimensions, graphite fines charge, overcoater rotation speed, overcoater angle of rotation, and overcoater time of rotation, before repeating the steps until graphite fines are converted to graphite spheres.

  12. Method for producing superconducting wire and products of the same

    International Nuclear Information System (INIS)

    Marancik, W.G.; Ormand, F.T.

    1975-01-01

    A method is described for producing a composite superconducting wire including one or more strands of high-field Type II superconductor embedded in a conductive matrix of normal material. A composite body is prepared which includes a matrix in which are embedded one or more rods of a metal capable of forming a high-field Type II superconductor upon high temperature treatment. The composite body is extruded to an intermediate diameter, and then hot-drawn to a final diameter at temperatures exceeding about 100 deg C, by multiple passes through drawing dies, the composite being reduced in cross-sectional area approximately 15 to 20 percent per draw. In a preferred mode of practicing the invention, the rods comprise vanadium or niobium, with the matrix being respectively gallium--bronze or tin--bronze, and the superconductive strands being formed by high temperature diffusion of the gallium or tin into the rods subsequent to drawing.

  13. A Generalized Pivotal Quantity Approach to Analytical Method Validation Based on Total Error.

    Science.gov (United States)

    Yang, Harry; Zhang, Jianchun

    2015-01-01

    The primary purpose of method validation is to demonstrate that the method is fit for its intended use. Traditionally, an analytical method is deemed valid if its performance characteristics, such as accuracy and precision, are shown to meet prespecified acceptance criteria. However, these acceptance criteria are not directly related to the method's intended purpose, which is usually a guarantee that a high percentage of the test results of future samples will be close to their true values. Alternate "fit for purpose" acceptance criteria based on the concept of total error have been increasingly used. Such criteria allow for assessing method validity taking into account the relationship between accuracy and precision. Although several statistical test methods have been proposed in the literature to test the "fit for purpose" hypothesis, the majority of the methods are not designed to protect against the risk of accepting unsuitable methods, thus having the potential to cause uncontrolled consumer's risk. In this paper, we propose a test method based on generalized pivotal quantity inference. Through simulation studies, the performance of the method is compared to five existing approaches. The results show that both the new method and the method based on the β-content tolerance interval with a confidence level of 90%, hereafter referred to as the β-content (0.9) method, control the Type I error and thus the consumer's risk, while the other existing methods do not. It is further demonstrated that the generalized pivotal quantity method is less conservative than the β-content (0.9) method when the analytical methods are biased, whereas it is more conservative when the analytical methods are unbiased. Therefore, selection of either the generalized pivotal quantity or the β-content (0.9) method for an analytical method validation depends on the accuracy of the analytical method. It is also shown that the generalized pivotal quantity method has better asymptotic properties than all of the current
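
    For orientation, a hedged sketch of the β-content comparator mentioned above (Howe's classical approximation for the tolerance factor; the acceptance limit lam and the data are invented, and this is not the paper's generalized-pivotal-quantity test): the method passes if an interval expected to contain a high proportion of future measurement errors lies within the total-error limits.

    ```python
    import numpy as np
    from scipy.stats import norm, chi2

    def howe_k(n, content=0.90, confidence=0.90):
        """Approximate two-sided beta-content tolerance factor (Howe's method)."""
        nu = n - 1
        z = norm.ppf((1.0 + content) / 2.0)
        return z * np.sqrt(nu * (1.0 + 1.0 / n) / chi2.ppf(1.0 - confidence, nu))

    errors = np.array([1.2, -0.4, 0.8, 0.3, -0.9, 1.5, 0.1, -0.2, 0.7, 0.5])
    lam = 3.0                                 # hypothetical total-error limit
    m, s, k = errors.mean(), errors.std(ddof=1), howe_k(len(errors))
    valid = (m - k * s > -lam) and (m + k * s < lam)
    print(f"[{m - k*s:.2f}, {m + k*s:.2f}] within +/-{lam}: {valid}")
    ```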

  14. Error analysis of semidiscrete finite element methods for inhomogeneous time-fractional diffusion

    KAUST Repository

    Jin, B.; Lazarov, R.; Pasciak, J.; Zhou, Z.

    2014-01-01

    © 2014 Published by Oxford University Press on behalf of the Institute of Mathematics and its Applications. All rights reserved. We consider the initial-boundary value problem for an inhomogeneous time-fractional diffusion equation with a homogeneous Dirichlet boundary condition, a vanishing initial data and a nonsmooth right-hand side in a bounded convex polyhedral domain. We analyse two semidiscrete schemes based on the standard Galerkin and lumped mass finite element methods. Almost optimal error estimates are obtained for right-hand side data f(x, t) ∈ L^∞(0, T; H^q(Ω)), −1 < q ≤ 1, for both semidiscrete schemes. For the lumped mass method, the optimal L²(Ω)-norm error estimate requires symmetric meshes. Finally, two-dimensional numerical experiments are presented to verify our theoretical results.

  15. Error analysis of semidiscrete finite element methods for inhomogeneous time-fractional diffusion

    KAUST Repository

    Jin, B.

    2014-05-30

    © 2014 Published by Oxford University Press on behalf of the Institute of Mathematics and its Applications. All rights reserved. We consider the initial-boundary value problem for an inhomogeneous time-fractional diffusion equation with a homogeneous Dirichlet boundary condition, a vanishing initial data and a nonsmooth right-hand side in a bounded convex polyhedral domain. We analyse two semidiscrete schemes based on the standard Galerkin and lumped mass finite element methods. Almost optimal error estimates are obtained for right-hand side data f(x, t) ∈ L^∞(0, T; H^q(Ω)), −1 < q ≤ 1, for both semidiscrete schemes. For the lumped mass method, the optimal L²(Ω)-norm error estimate requires symmetric meshes. Finally, two-dimensional numerical experiments are presented to verify our theoretical results.

  16. Error Estimates for a Semidiscrete Finite Element Method for Fractional Order Parabolic Equations

    KAUST Repository

    Jin, Bangti

    2013-01-01

    We consider the initial boundary value problem for a homogeneous time-fractional diffusion equation with an initial condition ν(x) and a homogeneous Dirichlet boundary condition in a bounded convex polygonal domain Ω. We study two semidiscrete approximation schemes, i.e., the Galerkin finite element method (FEM) and the lumped mass Galerkin FEM, using piecewise linear functions. We establish error estimates that are almost optimal with respect to the data regularity, including the cases of smooth and nonsmooth initial data, i.e., ν ∈ H²(Ω) ∩ H₀¹(Ω) and ν ∈ L²(Ω). For the lumped mass method, the optimal L²-norm error estimate is valid only under an additional assumption on the mesh, which in two dimensions is known to be satisfied for symmetric meshes. Finally, we present some numerical results that give insight into the reliability of the theoretical study. © 2013 Society for Industrial and Applied Mathematics.

  17. Shifted Legendre method with residual error estimation for delay linear Fredholm integro-differential equations

    Directory of Open Access Journals (Sweden)

    Şuayip Yüzbaşı

    2017-03-01

    Full Text Available In this paper, we suggest a matrix method for obtaining approximate solutions of delay linear Fredholm integro-differential equations with constant coefficients using the shifted Legendre polynomials. The problem is considered with mixed conditions. Using the required matrix operations, the delay linear Fredholm integro-differential equation is transformed into a matrix equation. Additionally, an error analysis for the method is presented using the residual function. Illustrative examples are given to demonstrate the efficiency of the method. The results obtained in this study are compared with known results.
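
    A hedged toy illustrating the collocation/matrix mechanics (the paper treats delay integro-differential equations; here, for brevity, a plain Fredholm integral equation y(x) − ∫K(x,t)y(t)dt = f(x) is solved with a Legendre basis, Gauss quadrature, and a linear solve; K, f, and the orders are our own choices, with f chosen so the exact solution is y(x) = x).

    ```python
    import numpy as np
    from numpy.polynomial import legendre

    N = 8                                              # truncation order
    nodes, weights = legendre.leggauss(20)             # quadrature on [-1, 1]
    xc = legendre.leggauss(N + 1)[0]                   # collocation points

    K = lambda x, t: x * t
    f = lambda x: x / 3.0                              # exact solution: y(x) = x

    def P(k, x):
        """Evaluate the Legendre polynomial P_k at x."""
        c = np.zeros(k + 1); c[k] = 1.0
        return legendre.legval(x, c)

    # Build the collocation matrix A[i,k] = P_k(x_i) - integral of K(x_i,t)P_k(t):
    A = np.empty((N + 1, N + 1))
    for i, xi in enumerate(xc):
        for k in range(N + 1):
            integral = np.sum(weights * K(xi, nodes) * P(k, nodes))
            A[i, k] = P(k, xi) - integral
    coef = np.linalg.solve(A, f(xc))

    xs = np.linspace(-1, 1, 5)
    print(legendre.legval(xs, coef))                   # approximately equal to xs
    ```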

  18. Evaluating Method Engineer Performance: an error classification and preliminary empirical study

    Directory of Open Access Journals (Sweden)

    Steven Kelly

    1998-11-01

    Full Text Available We describe an approach to empirically test the use of metaCASE environments to model methods. Both diagrams and matrices have been proposed as a means for presenting the methods. These different paradigms may have their own effects on how easily and well users can model methods. We extend Batra's classification of errors in data modelling to cover metamodelling, and use it to measure the performance of a group of metamodellers using either diagrams or matrices. The tentative results from this pilot study confirm the usefulness of the classification, and show some interesting differences between the paradigms.

  19. Method of producing deuterium-oxide-enriched water

    International Nuclear Information System (INIS)

    Mandel, H.

    1976-01-01

    A method and apparatus for producing deuterium-oxide-enriched water (e.g., as a source of deuterium-rich gas mixtures) are disclosed wherein the multiplicity of individual cooling cycles of a power plant are connected in replenishment cascade so that fresh feed water with a naturally occurring level of deuterium oxide is supplied to replace the vaporization losses, sludge losses and withdrawn portion of water in a first cooling cycle, the withdrawn water being fed as the feed water to the subsequent cooling cycle or stage and serving as the sole feed-water input to the latter. At the end of the replenishment-cascade system, the withdrawn water has a high concentration of deuterium oxide and may serve as a source of water for the production of heavy water or deuterium-enriched gas by conventional methods of removing deuterium oxide or deuterium from the deuterium-oxide-enriched water. Each cooling cycle may form part of a thermal or nuclear power plant in which a turbine is driven by part of the energy and air-cooling of the water takes place in the atmosphere, e.g., in a cooling tower

  20. Methods of producing protoporphyrin IX and bacterial mutants therefor

    Science.gov (United States)

    Zhou, Jizhong; Qiu, Dongru; He, Zhili; Xie, Ming

    2016-03-01

    The presently disclosed inventive concepts are directed in certain embodiments to a method of producing protoporphyrin IX by (1) cultivating a strain of Shewanella bacteria in a culture medium under conditions suitable for growth thereof, and (2) recovering the protoporphyrin IX from the culture medium. The strain of Shewanella bacteria comprises at least one mutant hemH gene which is incapable of normal expression, thereby causing an accumulation of protoporphyrin IX. In certain embodiments of the method, the strain of Shewanella bacteria is a strain of S. loihica, and more specifically may be S. loihica PV-4. In certain embodiments, the mutant hemH gene of the strain of Shewanella bacteria may be a mutant of shew_2229 and/or of shew_1140. In other embodiments, the presently disclosed inventive concepts are directed to mutant strains of Shewanella bacteria having at least one mutant hemH gene which is incapable of normal expression, thereby causing an accumulation of protoporphyrin IX during cultivation of the bacteria. In certain embodiments the strain of Shewanella bacteria is a strain of S. loihica, and more specifically may be S. loihica PV-4. In certain embodiments, the mutant hemH gene of the strain of Shewanella bacteria may be a mutant of shew_2229 and/or shew_1140.

  1. A method for optical ground station reduce alignment error in satellite-ground quantum experiments

    Science.gov (United States)

    He, Dong; Wang, Qiang; Zhou, Jian-Wei; Song, Zhi-Jun; Zhong, Dai-Jun; Jiang, Yu; Liu, Wan-Sheng; Huang, Yong-Mei

    2018-03-01

    A satellite dedicated to quantum science experiments was developed and successfully launched from Jiuquan, China, on August 16, 2016. Two new optical ground stations (OGSs) were built to cooperate with the satellite in completing satellite-ground quantum experiments. The OGS corrects its pointing direction for satellite trajectory error using the coarse tracking system and the uplink beacon sight; the alignment accuracy between the fine tracking CCD and the uplink beacon optical axis must therefore ensure that the beacon covers the quantum satellite at all times while it passes over the OGSs. Unfortunately, when we tested the specifications of the OGSs, because the coarse tracking optical system was a commercial telescope, the position of the target in the coarse CCD changed by up to 600 μrad as the elevation angle changed. In this paper, a method to reduce the alignment error between the beacon beam and the fine tracking CCD is proposed. First, the OGS fits a curve of target position in the coarse CCD against elevation angle. Second, it fits a curve of hexapod secondary-mirror position against elevation angle. Third, while tracking the satellite, the fine tracking error is offloaded onto the real-time zero-point position of the coarse CCD computed from the first calibration curve, while the hexapod secondary-mirror positions are adjusted using the second. Finally, experimental results are presented, showing that the alignment error is less than 50 μrad.
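
    A hedged sketch of the two calibration fits (polynomial order, magnitudes, and variable names are our own assumptions): fit the coarse-CCD target position and the hexapod secondary-mirror position against elevation angle during calibration, then evaluate both curves at the current elevation while tracking.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    elev = np.linspace(10.0, 80.0, 15)                 # calibration elevations [deg]
    # Synthetic calibration measurements (shapes/magnitudes invented):
    ccd_drift = 600e-6 * np.cos(np.deg2rad(elev)) + rng.normal(0.0, 5e-6, elev.size)
    mirror_z = 0.12 * np.sin(np.deg2rad(elev)) + rng.normal(0.0, 1e-3, elev.size)

    ccd_fit = np.polynomial.Polynomial.fit(elev, ccd_drift, deg=3)
    mirror_fit = np.polynomial.Polynomial.fit(elev, mirror_z, deg=3)

    def corrections(current_elev):
        """Zero-point shift for the coarse CCD [rad] and hexapod secondary-mirror
        command [mm] at the current elevation angle."""
        return ccd_fit(current_elev), mirror_fit(current_elev)

    print(corrections(42.0))
    ```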

  2. Impact of Channel Estimation Errors on Multiuser Detection via the Replica Method

    Directory of Open Access Journals (Sweden)

    Li Husheng

    2005-01-01

    Full Text Available For practical wireless DS-CDMA systems, channel estimation is imperfect due to noise and interference. In this paper, the impact of channel estimation errors on multiuser detection (MUD is analyzed under the framework of the replica method. System performance is obtained in the large system limit for optimal MUD, linear MUD, and turbo MUD, and is validated by numerical results for finite systems.

  3. Producing transparent PLZT ceramics using different synthesis method

    International Nuclear Information System (INIS)

    Dambekalne, M.; Antonova, M.; Livinsh, M.; Kalvane, A.; Plonska, M.; Garbarz-Glos, B.

    2004-01-01

    Full text: Ceramic samples of Pb₁₋ₓLaₓ(Zr₀.₆₅Ti₀.₃₅)O₃ (x = 8, 9, 10) were prepared from powders sintered by two methods: 1) peroxohydroxopolymer (PHP), using as precursors solutions of the inorganic salts TiCl₄, ZrOCl₂·8H₂O, Pb(NO₃)₂ and La(NO₃)₃·6H₂O; 2) sol-gel, using as precursors solutions of the metal-organic salts Pb(COOCH₃)₂·3H₂O, La(COOCH₃)₃·1.5H₂O, Zr(OCH₂CH₂CH₃)₄ and Ti(OCH₂CH₂CH₃)₄. The thermal regimes for both powders were similar: synthesis at 600 deg C for 2-4 h, yielding amorphous nanopowder. Ceramic samples were produced by hot pressing at 1100-1200 deg C for 2-6 h at a pressure of 20 MPa. The optical transmittance of ceramic samples from PHP-derived powders was higher than that of sol-gel-derived ones. The transparency of poled plates with a thickness of 0.3 mm (wavelength λ = 630 nm) was 67-69% and 56-59%, respectively. This can be explained by the lack of technical support for sol-gel processing in a neutral gas atmosphere, as metal-organic precursors are extremely sensitive to air moisture. X-ray and DTA studies were carried out on the powders. Dielectric, ferroelectric and optical properties, as well as microstructure studies, were examined for the ceramic samples. The grain size of ceramics produced from PHP powders is 3-4 μm; for sol-gel ceramics it is less than 1 μm

  4. Estimation methods with ordered exposure subject to measurement error and missingness in semi-ecological design

    Directory of Open Access Journals (Sweden)

    Kim Hyang-Mi

    2012-09-01

    Full Text Available Abstract Background In epidemiological studies, it is often not possible to measure exposures of participants accurately, even if their response variable can be measured without error. When there are several groups of subjects, occupational epidemiologists employ a group-based strategy (GBS) for exposure assessment to reduce bias due to measurement errors: individuals in a group/job within the study sample are all assigned the sample mean of the exposure measurements from their group when evaluating the effect of exposure on the response. Exposure is therefore estimated on an ecological level while health outcomes are ascertained for each subject. Such a study design leads to negligible bias in risk estimates when group means are estimated from 'large' samples. However, in many cases only a small number of observations are available to estimate the group means, and this causes bias in the observed exposure-disease association. The analysis in a semi-ecological design may also involve exposure data with the majority missing and the rest observed with measurement errors, together with complete response data collected with ascertainment. Methods In workplaces, groups/jobs are naturally ordered, and this can be incorporated into the estimation procedure by constrained estimation methods together with expectation-maximization (EM) algorithms for regression models having measurement error and missing values. Four methods were compared by a simulation study: naive complete-case analysis, GBS, constrained GBS (CGBS), and constrained expectation-maximization (CEM). We illustrate the methods in an analysis of decline in lung function due to exposure to carbon black. Results The naive and GBS approaches were shown to be inadequate when the number of exposure measurements is too small to accurately estimate group means. The CEM method appears to be best among them when within each exposure group at least a 'moderate' number of individuals have their

  5. Errors in the estimation method for the rejection of vibrations in adaptive optics systems

    Science.gov (United States)

    Kania, Dariusz

    2017-06-01

    In recent years, the problem of the impact of mechanical vibrations on adaptive optics (AO) systems has received renewed attention. These vibrations are damped sinusoidal signals and have a deleterious effect on the system. One software solution to reject the vibrations is an adaptive method called AVC (Adaptive Vibration Cancellation), whose procedure has three steps: estimation of the perturbation parameters, estimation of the frequency response of the plant, and updating of the reference signal to reject/minimize the vibration. In the first step, the choice of estimation method is a very important problem. A very accurate and fast (below 10 ms) method for estimating these parameters, based on spectrum interpolation and MSD time windows, has been presented in several publications in recent years and can be used to estimate multifrequency signals. In this paper, that estimation method is used within the AVC method to increase the system performance. Several parameters affect the accuracy of the obtained results, e.g. CiR - the number of signal periods in a measurement window, N - the number of samples in the FFT procedure, H - the time window order, SNR, b - the number of ADC bits, and γ - the damping ratio of the tested signal. Systematic errors increase when N, CiR and H decrease and when γ increases. The systematic error is approximately 10^-10 Hz/Hz for N = 2048 and CiR = 0.1. This paper presents equations that can be used to estimate the maximum systematic errors for given values of H, CiR and N before the start of the estimation process.
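
    A hedged generic sketch of frequency estimation from an interpolated spectrum (the paper's own estimator uses spectrum interpolation with MSD windows; here we show plain parabolic interpolation of a windowed FFT peak, which illustrates where N, CiR and the window choice enter; all signal parameters are invented).

    ```python
    import numpy as np

    fs, N = 1000.0, 2048                     # sampling rate [Hz], FFT length
    t = np.arange(N) / fs
    f_true, decay = 87.3, 0.5                # frequency [Hz], decay constant [1/s]
    x = np.exp(-decay * t) * np.sin(2 * np.pi * f_true * t)

    X = np.abs(np.fft.rfft(x * np.hanning(N)))
    k = int(np.argmax(X))                    # coarse peak bin (CiR = N*f/fs periods)
    # Parabolic interpolation around the peak bin for a sub-bin estimate:
    delta = 0.5 * (X[k - 1] - X[k + 1]) / (X[k - 1] - 2 * X[k] + X[k + 1])
    f_est = (k + delta) * fs / N
    print(f_true, f_est)
    ```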

  6. The multi-element probabilistic collocation method (ME-PCM): Error analysis and applications

    International Nuclear Information System (INIS)

    Foo, Jasmine; Wan Xiaoliang; Karniadakis, George Em

    2008-01-01

    Stochastic spectral methods are numerical techniques for approximating solutions to partial differential equations with random parameters. In this work, we present and examine the multi-element probabilistic collocation method (ME-PCM), which is a generalized form of the probabilistic collocation method. In the ME-PCM, the parametric space is discretized and a collocation/cubature grid is prescribed on each element. Both full and sparse tensor product grids based on Gauss and Clenshaw-Curtis quadrature rules are considered. We prove analytically and observe in numerical tests that as the parameter space mesh is refined, the convergence rate of the solution depends on the quadrature rule of each element only through its degree of exactness. In addition, the L² error of the tensor product interpolant is examined and an adaptivity algorithm is provided. Numerical examples demonstrating adaptive ME-PCM are shown, including low-regularity problems and long-time integration. We test the ME-PCM on two-dimensional Navier-Stokes examples and a stochastic diffusion problem with various random input distributions and up to 50 dimensions. While the convergence rate of ME-PCM deteriorates in 50 dimensions, the error in the mean and variance is two orders of magnitude lower than the error obtained with the Monte Carlo method using only a small number of samples (e.g., 100). The computational cost of ME-PCM is found to be favorable when compared to the cost of other methods including stochastic Galerkin, Monte Carlo and quasi-random sequence methods

  7. Anodization of cast aluminium alloys produced by different casting methods

    Directory of Open Access Journals (Sweden)

    K. Labisz

    2008-08-01

    Full Text Available In this paper, the suitability of two casting methods, sand casting and high-pressure die casting, for the anodization of AlSi12 and AlSi9Cu3 aluminium cast alloys was investigated. With defined anodization parameters, such as electrolyte composition and temperature and current type and value, an anodic alumina surface layer was produced. The quality, size and properties of the anodic layer were investigated after the anodization of the chosen aluminium cast alloys. The alumina layer was examined using a light microscope; the mechanical properties were also measured, and an abrasive wear test was performed using ABR-8251 equipment. The research included analysis of the influence of chemical composition, geometry and roughness on the anodic layer obtained on the aluminium casts. The investigations also indicate areas for further research, especially toward optimizing the anodization process of aluminium casting alloys, for example to raise corrosion resistance and achieve anodic surface layers suitable for components used in aggressive environments, such as load-bearing building structures, electronic elements, and construction parts in the aviation and automotive industries.

  8. A method for the quantification of model form error associated with physical systems.

    Energy Technology Data Exchange (ETDEWEB)

    Wallen, Samuel P.; Brake, Matthew Robert

    2014-03-01

    In the process of model validation, models are often declared valid when the differences between model predictions and experimental data sets are satisfactorily small. However, little consideration is given to the effectiveness of a model using parameters that deviate slightly from those that were fitted to data, such as a higher load level. Furthermore, few means exist to compare and choose between two or more models that reproduce data equally well. These issues can be addressed by analyzing model form error, which is the error associated with the differences between the physical phenomena captured by models and that of the real system. This report presents a new quantitative method for model form error analysis and applies it to data taken from experiments on tape joint bending vibrations. Two models for the tape joint system are compared, and suggestions for future improvements to the method are given. As the available data set is too small to draw any statistical conclusions, the focus of this paper is the development of a methodology that can be applied to general problems.

  9. Recursive prediction error methods for online estimation in nonlinear state-space models

    Directory of Open Access Journals (Sweden)

    Dag Ljungquist

    1994-04-01

    Full Text Available Several recursive algorithms for online, combined state and parameter estimation in nonlinear state-space models are discussed in this paper. Well-known algorithms such as the extended Kalman filter and alternative formulations of the recursive prediction error method are included, as well as a new method based on a line-search strategy. A comparison of the algorithms illustrates that they are very similar although the differences can be important for the online tracking capabilities and robustness. Simulation experiments on a simple nonlinear process show that the performance under certain conditions can be improved by including a line-search strategy.

  10. Human reliability analysis of errors of commission: a review of methods and applications

    Energy Technology Data Exchange (ETDEWEB)

    Reer, B

    2007-06-15

    Illustrated by specific examples relevant to contemporary probabilistic safety assessment (PSA), this report presents a review of human reliability analysis (HRA) addressing post-initiator errors of commission (EOCs), i.e. inappropriate actions under abnormal operating conditions. The review addressed both methods and applications. Emerging HRA methods providing advanced features and explicit guidance suitable for PSA are: A Technique for Human Event Analysis (ATHEANA, key publications in 1998/2000), Methode d'Evaluation de la Realisation des Missions Operateur pour la Surete (MERMOS, 1998/2000), the EOC HRA method developed by the Gesellschaft fuer Anlagen- und Reaktorsicherheit (GRS, 2003), the Misdiagnosis Tree Analysis (MDTA) method (2005/2006), the Cognitive Reliability and Error Analysis Method (CREAM, 1998), and the Commission Errors Search and Assessment (CESA) method (2002/2004). As a result of a thorough investigation of various PSA/HRA applications, this report furthermore presents an overview of EOCs (termination of safety injection, shutdown of secondary cooling, etc.) referred to in predictive studies and a qualitative review of cases of EOC quantification. The main conclusions of the review of both the methods and the EOC HRA cases are: (1) the CESA search scheme, which proceeds from possible operator actions to the affected systems to scenarios, may be preferable because this scheme provides a formalized way of identifying relatively important scenarios with EOC opportunities; (2) an EOC identification guidance like CESA, which is strongly based on the procedural guidance and on important measures of systems or components affected by inappropriate actions, should however pay some attention to EOCs associated with familiar but non-procedural actions and to EOCs leading to failures of manually initiated safety functions; (3) orientations of advanced EOC quantification comprise a) modeling of multiple contexts for a given scenario, b) accounting for

  11. Error analysis in Fourier methods for option pricing for exponential Lévy processes

    KAUST Repository

    Crocce, Fabian

    2015-01-07

    We derive an error bound for utilising the discrete Fourier transform method for solving Partial Integro-Differential Equations (PIDE) that describe European option prices for exponential Lévy driven asset prices. We give sufficient conditions for the existence of an L^∞ bound that separates the dynamical contribution from the contribution arising from the type of the option in question. The bound achieved does not rely on information about the asymptotic behaviour of option prices at extreme asset values. In addition, we demonstrate improved numerical performance for select examples of practical relevance when compared to established bounding methods.
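
    The paper's own bound is not reproduced here; as a hedged numerical companion, the sketch below prices a European call by the standard damped (Carr-Madan style) Fourier integral of the characteristic function, using the simplest exponential Lévy model, Black-Scholes, whose parameter values are invented for illustration.

    ```python
    import numpy as np

    S0, K, r, sigma, T, alpha = 100.0, 95.0, 0.05, 0.2, 1.0, 1.5

    def cf(u):
        """Characteristic function of log S_T under Black-Scholes dynamics."""
        mu = np.log(S0) + (r - 0.5 * sigma**2) * T
        return np.exp(1j * u * mu - 0.5 * sigma**2 * u**2 * T)

    k = np.log(K)
    v = np.linspace(1e-6, 200.0, 20001)
    # Damped transform of the call price (Carr-Madan integrand):
    psi = np.exp(-r * T) * cf(v - (alpha + 1) * 1j) / (
        alpha**2 + alpha - v**2 + 1j * (2 * alpha + 1) * v)
    price = np.exp(-alpha * k) / np.pi * np.trapz(np.real(np.exp(-1j * v * k) * psi), v)
    print(price)   # about 13.35, the Black-Scholes value for these inputs
    ```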

  12. Human reliability analysis of errors of commission: a review of methods and applications

    International Nuclear Information System (INIS)

    Reer, B.

    2007-06-01

    Illustrated by specific examples relevant to contemporary probabilistic safety assessment (PSA), this report presents a review of human reliability analysis (HRA) addressing post initiator errors of commission (EOCs), i.e. inappropriate actions under abnormal operating conditions. The review addressed both methods and applications. Emerging HRA methods providing advanced features and explicit guidance suitable for PSA are: A Technique for Human Event Analysis (ATHEANA, key publications in 1998/2000), Methode d'Evaluation de la Realisation des Missions Operateur pour la Surete (MERMOS, 1998/2000), the EOC HRA method developed by the Gesellschaft fuer Anlagen- und Reaktorsicherheit (GRS, 2003), the Misdiagnosis Tree Analysis (MDTA) method (2005/2006), the Cognitive Reliability and Error Analysis Method (CREAM, 1998), and the Commission Errors Search and Assessment (CESA) method (2002/2004). As a result of a thorough investigation of various PSA/HRA applications, this paper furthermore presents an overview of EOCs (termination of safety injection, shutdown of secondary cooling, etc.) referred to in predictive studies and a qualitative review of cases of EOC quantification. The main conclusions of the review of both the methods and the EOC HRA cases are: (1) The CESA search scheme, which proceeds from possible operator actions to the affected systems to scenarios, may be preferable because this scheme provides a formalized way for identifying relatively important scenarios with EOC opportunities; (2) an EOC identification guidance like CESA, which is strongly based on the procedural guidance and important measures of systems or components affected by inappropriate actions, however should pay some attention to EOCs associated with familiar but non-procedural actions and EOCs leading to failures of manually initiated safety functions. (3) Orientations of advanced EOC quantification comprise a) modeling of multiple contexts for a given scenario, b) accounting for

  13. Local and accumulated truncation errors in a class of perturbative numerical methods

    International Nuclear Information System (INIS)

    Adam, G.; Adam, S.; Corciovei, A.

    1980-01-01

    The approach to the solution of the radial Schroedinger equation using piecewise perturbation theory with a step-function reference potential leads to a class of powerful numerical methods, conveniently abridged as SF-PNM(K), where K denotes the order at which the perturbation series is truncated. In the present paper, rigorous results are given for the local truncation errors, and bounds are derived for the accumulated truncation errors associated with SF-PNM(K), K = 0, 1, 2. They allow us to establish the smoothness conditions which have to be fulfilled by the potential in order to ensure safe use of SF-PNM(K), and to understand the experimentally observed behaviour of the numerical results with the step size h. (author)

  14. An Alternative Method to Compute the Bit Error Probability of Modulation Schemes Subject to Nakagami- Fading

    Directory of Open Access Journals (Sweden)

    Madeiro Francisco

    2010-01-01

    Full Text Available Abstract This paper presents an alternative method for determining exact expressions for the bit error probability (BEP) of modulation schemes subject to Nakagami-m fading. In this method, the Nakagami-m fading channel is seen as an additive noise channel whose noise is modeled as the ratio between Gaussian and Nakagami-m random variables. The method consists of using the cumulative distribution function of the resulting noise to obtain closed-form expressions for the BEP of modulation schemes subject to Nakagami-m fading. In particular, the proposed method is used to obtain closed-form expressions for the BEP of M-ary quadrature amplitude modulation (M-QAM), M-ary pulse amplitude modulation (M-PAM), and rectangular quadrature amplitude modulation (I×J-QAM) under Nakagami-m fading. The main contribution of this paper is to show that this alternative method can be used to reduce the computational complexity for detecting signals in the presence of fading.
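
    Schematically, in our own notation (the paper's symbols are assumed): for a transmitted symbol s over a flat Nakagami-m fading channel,

    \[
    y = x\,s + g,\qquad g\sim\mathcal{N}\!\left(0,\sigma^{2}\right),\quad x\sim\text{Nakagami-}m
    \;\;\Longrightarrow\;\;
    \frac{y}{x} = s + z,\qquad z = \frac{g}{x},
    \]

    so the fading channel becomes an additive noise channel and the BEP reduces to evaluating the closed-form CDF of z at the decision boundaries of the constellation.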

  15. A GPS-Based Pitot-Static Calibration Method Using Global Output-Error Optimization

    Science.gov (United States)

    Foster, John V.; Cunningham, Kevin

    2010-01-01

    Pressure-based airspeed and altitude measurements for aircraft typically require calibration of the installed system to account for pressure sensing errors such as those due to local flow field effects. In some cases, calibration is used to meet requirements such as those specified in Federal Aviation Regulation Part 25. Several methods are used for in-flight pitot-static calibration, including tower fly-by, pacer aircraft, and trailing cone methods. In the 1990s, the introduction of satellite-based positioning systems to the civilian market enabled new in-flight calibration methods based on accurate ground speed measurements provided by Global Positioning Systems (GPS). Use of GPS for airspeed calibration has many advantages such as accuracy, ease of portability (e.g. hand-held) and the flexibility of operating in airspace without the limitations of test range boundaries or ground telemetry support. The current research was motivated by the need for a rapid and statistically accurate method for in-flight calibration of pitot-static systems for remotely piloted, dynamically-scaled research aircraft. Current calibration methods were deemed not practical for this application because of confined test range size and the limited flight time available for each sortie. A method was developed that uses high data rate measurements of static and total pressure, and GPS-based ground speed measurements, to compute the pressure errors over a range of airspeed. The novel application of this approach is the use of system identification methods that rapidly compute optimal pressure error models with defined confidence intervals in near-real time. This method has been demonstrated in flight tests and has shown 2-σ bounds of approximately 0.2 kts with an order of magnitude reduction in test time over other methods. As part of this experiment, a unique database of wind measurements was acquired concurrently with the flight experiments, for the purpose of experimental validation of the

  16. Emissions Models and Other Methods to Produce Emission Inventories

    Science.gov (United States)

    An emissions inventory is a summary or forecast of the emissions produced by a group of sources in a given time period. Inventories of air pollution from mobile sources are often produced by models such as the MOtor Vehicle Emission Simulator (MOVES).

  17. Investigating a method of producing "red and dead" galaxies

    Science.gov (United States)

    Skory, Stephen

    2010-08-01

    In optical wavelengths, galaxies are observed to be either red or blue. The overall color of a galaxy is due to the distribution of the ages of its stellar population. Galaxies with currently active star formation appear blue, while those with no recent star formation at all (for longer than about a Gyr) have only old, red stars. This strong bimodality has led to the idea of star formation quenching, and to various proposed physical mechanisms. In this dissertation, I attempt to reproduce with Enzo the results of Naab et al. (2007), in which red and dead galaxies are formed through gravitational quenching, rather than with one of the more typical methods of quenching. My initial attempts are unsuccessful, and I explore the reasons why I think they failed. Then, using simpler methods better suited to Enzo + AMR, I am successful in producing a galaxy that appears to be similar in color and formation history to those in Naab et al. However, quenching is achieved using unphysically high star formation efficiencies, which is a different mechanism than Naab et al. suggest. Preliminary results of a much higher resolution, follow-on simulation of the above show some possible contradiction with the results of Naab et al. Cold gas is streaming into the galaxy to fuel starbursts, while at a similar epoch the galaxies in Naab et al. have largely already ceased forming stars. On the other hand, the results of the high resolution simulation are qualitatively similar to other works in the literature that show a somewhat different gravitational quenching mechanism than Naab et al. I also discuss my work using halo finders to analyze simulated cosmological data, and my work improving the Enzo/AMR analysis tool "yt". This includes two parallelizations of the halo finder HOP (Eisenstein and Hut, 1998), which allow analysis of very large cosmological datasets on parallel machines. The first version is "yt-HOP," which works well for datasets between about 256³ and 512³ particles

  18. Beam-pointing error compensation method of phased array radar seeker with phantom-bit technology

    Directory of Open Access Journals (Sweden)

    Qiuqiu WEN

    2017-06-01

    A phased array radar seeker (PARS) must be able to effectively decouple body motion and accurately extract the line-of-sight (LOS) rate for target missile tracking. In this study, the real-time two-channel beam pointing error (BPE) compensation method of PARS for LOS rate extraction is designed. The PARS discrete beam motion principle is analyzed, and the mathematical model of beam scanning control is established. According to the principle of the antenna element shift phase, both the antenna element shift phase law and the causes of beam-pointing error under phantom-bit conditions are analyzed, and the effect of BPE caused by phantom-bit technology (PBT) on the extraction accuracy of the LOS rate is examined. A compensation method is given, which includes coordinate transforms, beam angle margin compensation, and detector dislocation angle calculation. When the method is used, the beam angle margin in the pitch and yaw directions is calculated to reduce the effect of the missile body disturbance and to improve LOS rate extraction precision by compensating for the detector dislocation angle. The simulation results validate the proposed method.

  19. Investigation into the limitations of straightness interferometers using a multisensor-based error separation method

    Science.gov (United States)

    Weichert, Christoph; Köchert, Paul; Schötka, Eugen; Flügge, Jens; Manske, Eberhard

    2018-06-01

    The uncertainty of a straightness interferometer is independent of the component used to introduce the divergence angle between the two probing beams, and is limited by three main error sources, which are linked to each other: their resolution, the influence of refractive index gradients and the topography of the straightness reflector. To identify the configuration with minimal uncertainties under laboratory conditions, a fully fibre-coupled heterodyne interferometer was successively equipped with three different wedge prisms, resulting in three different divergence angles (4°, 8° and 20°). To separate the error sources an independent reference with a smaller reproducibility is needed. Therefore, the straightness measurement capability of the Nanometer Comparator, based on a multisensor error separation method, was improved to provide measurements with a reproducibility of 0.2 nm. The comparison results revealed that the influence of the refractive index gradients of air did not increase with interspaces between the probing beams of more than 11.3 mm. Therefore, over a movement range of 220 mm, the lowest uncertainty was achieved with the largest divergence angle. The dominant uncertainty contribution arose from the mirror topography, which was additionally determined with a Fizeau interferometer. The measured topography agreed within ±1.3 nm with the systematic deviations revealed in the straightness comparison, resulting in an uncertainty contribution of 2.6 nm for the straightness interferometer.

  20. On nonstationarity-related errors in modal combination rules of the response spectrum method

    Science.gov (United States)

    Pathak, Shashank; Gupta, Vinay K.

    2017-10-01

    Characterization of seismic hazard via (elastic) design spectra and the estimation of linear peak response of a given structure from this characterization continue to form the basis of earthquake-resistant design philosophy in various codes of practice all over the world. Since the direct use of design spectrum ordinates is a preferred option for practicing engineers, modal combination rules play a central role in the peak response estimation. Most of the available modal combination rules are however based on the assumption that nonstationarity affects the structural response alike at the modal and overall response levels. This study considers those situations where this assumption may cause significant errors in the peak response estimation, and preliminary models are proposed for estimating the extents to which nonstationarity affects the modal and total system responses, when the ground acceleration process is assumed to be a stationary process. It is shown through numerical examples in the context of the complete-quadratic-combination (CQC) method that the nonstationarity-related errors in the estimation of peak base shear may be significant when the strong-motion duration of the excitation is short compared to the period of the system and/or the response is distributed comparably in several modes. It is also shown that these errors are reduced marginally with the use of the proposed nonstationarity factor models.
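
    For reference, the standard CQC combination that the paper builds on can be written in a few lines. The sketch below uses the common equal-damping cross-modal correlation coefficient; the modal peaks, frequencies and damping ratio are invented inputs, and the paper's nonstationarity correction factors are not included.

        # CQC peak response from modal peak responses R_i:
        #   R = sqrt( sum_ij rho_ij * R_i * R_j )
        import numpy as np

        def cqc_peak(modal_peaks, omegas, zeta=0.05):
            """Complete-quadratic-combination estimate of the peak response."""
            w = np.asarray(omegas, float)
            r = w[None, :] / w[:, None]                      # frequency ratios
            rho = (8.0 * zeta**2 * (1.0 + r) * r**1.5 /
                   ((1.0 - r**2)**2 + 4.0 * zeta**2 * r * (1.0 + r)**2))
            R = np.asarray(modal_peaks, float)
            return float(np.sqrt(R @ rho @ R))

        # three modes, two of them closely spaced (strong cross-correlation)
        print(cqc_peak([1.0, 0.8, 0.3], omegas=[10.0, 11.0, 25.0]))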

  1. Pressurized water reactor monitoring. Study of detection, diagnostic and estimation methods (least error squares and filtering)

    International Nuclear Information System (INIS)

    Gillet, M.

    1986-07-01

    This thesis presents a study on the surveillance of the primary coolant circuit inventory of a pressurized water reactor. A reference model is developed in view of an automatic system ensuring detection and diagnosis in real time. The methods used for the present application are statistical tests and a method related to pattern recognition. The estimation of detected failures, difficult owing to the non-linearity of the problem, is treated by the least-squares method of the predictor or corrector type, and by filtering. It is in this framework that a new optimized method with superlinear convergence is developed, and that a segmented linearization of the model is introduced, in view of multiple filtering. [fr]

  2. Knowledge-base for the new human reliability analysis method, A Technique for Human Error Analysis (ATHEANA)

    International Nuclear Information System (INIS)

    Cooper, S.E.; Wreathall, J.; Thompson, C.M.; Drouin, M.; Bley, D.C.

    1996-01-01

    This paper describes the knowledge base for the application of the new human reliability analysis (HRA) method ''A Technique for Human Error Analysis'' (ATHEANA). Since application of ATHEANA requires the identification of previously unmodeled human failure events, especially errors of commission, and associated error-forcing contexts (i.e., combinations of plant conditions and performance shaping factors), this knowledge base is an essential aid for the HRA analyst

  3. Synthetic methods in phase equilibria: A new apparatus and error analysis of the method

    DEFF Research Database (Denmark)

    Fonseca, José; von Solms, Nicolas

    2014-01-01

    of the equipment was confirmed through several tests, including measurements along the three phase co-existence line for the system ethane + methanol, the study of the solubility of methane in water, and of carbon dioxide in water. An analysis regarding the application of the synthetic isothermal method...

  4. Error characterization methods for surface soil moisture products from remote sensing

    International Nuclear Information System (INIS)

    Doubková, M.

    2012-01-01

    To support the operational use of Synthetic Aperture Radar (SAR) earth observation systems, the European Space Agency (ESA) is developing Sentinel-1 radar satellites operating in C-band. Much like its SAR predecessors (Earth Resource Satellite, ENVISAT, and RADARSAT), Sentinel-1 will operate at a medium spatial resolution (ranging from 5 to 40 m), but with a greatly improved revisit period, especially over Europe (∼2 days). Given the planned high temporal sampling and the operational configuration, Sentinel-1 is expected to be beneficial for operational monitoring of dynamic processes in hydrology and phenology. The benefit of a C-band SAR monitoring service in hydrology has already been demonstrated within the scope of the Soil Moisture for Hydrometeorologic Applications (SHARE) project using data from the Global Mode (GM) of the Advanced Synthetic Aperture Radar (ASAR). To fully exploit the potential of the SAR soil moisture products, well-characterized errors need to be provided with the products. Understanding the errors of remotely sensed surface soil moisture (SSM) datasets is indispensable for their application in models, for the extraction of blended SSM products, as well as for their usage in the evaluation of other soil moisture datasets. This thesis has several objectives. First, it provides the basics and state-of-the-art methods for evaluating measures of SSM, including both the standard (e.g. root mean square error, correlation coefficient) and the advanced (e.g. error propagation, triple collocation) evaluation measures. A summary of applications of soil moisture datasets is presented and evaluation measures are suggested for each application according to its requirement on the dataset quality. The evaluation of the Advanced Synthetic Aperture Radar (ASAR) Global Mode (GM) SSM using the standard and advanced evaluation measures comprises the second objective of the work. To achieve the second objective, the data from the Australian Water Assessment System
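
    As a pointer to one of the advanced measures named above, the sketch below shows the covariance form of triple collocation: given three collocated datasets whose errors are mutually independent, each product's error variance follows from the sample covariances. The synthetic soil moisture data and noise levels are invented for illustration.

        import numpy as np

        def triple_collocation(x, y, z):
            """Error variances of three collocated datasets with independent errors."""
            c = np.cov(np.vstack([x, y, z]))             # 3x3 sample covariance
            ex2 = c[0, 0] - c[0, 1] * c[0, 2] / c[1, 2]
            ey2 = c[1, 1] - c[0, 1] * c[1, 2] / c[0, 2]
            ez2 = c[2, 2] - c[0, 2] * c[1, 2] / c[0, 1]
            return ex2, ey2, ez2

        rng = np.random.default_rng(0)
        truth = rng.normal(0.25, 0.08, 5000)             # synthetic soil moisture
        x = truth + rng.normal(0, 0.02, truth.size)      # e.g. SAR retrieval
        y = truth + rng.normal(0, 0.04, truth.size)      # e.g. radiometer retrieval
        z = truth + rng.normal(0, 0.03, truth.size)      # e.g. model output
        print([round(v**0.5, 3) for v in triple_collocation(x, y, z)])
        # recovered error std devs: approximately 0.02, 0.04, 0.03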

  5. Errors and discrepancies in the administration of intravenous infusions: a mixed methods multihospital observational study

    OpenAIRE

    Lyons, I.; Furniss, D.; Blandford, A.; Chumbley, G.; Iacovides, I.; Wei, L.; Cox, A.; Mayer, A.; Vos, J.; Galal-Edeen, G. H.; Schnock, K. O.; Dykes, P. C.; Bates, D. W.; Franklin, B. D.

    2018-01-01

    INTRODUCTION: Intravenous medication administration has traditionally been regarded as error prone, with high potential for harm. A recent US multisite study revealed few potentially harmful errors despite a high overall error rate. However, there is limited evidence about infusion practices in England and how they relate to prevalence and types of error. OBJECTIVES: To determine the prevalence, types and severity of errors and discrepancies in infusion administration in English hospitals, an...

  6. Beam-Based Error Identification and Correction Methods for Particle Accelerators

    CERN Document Server

    Tomas, Rogelio; Nilsson, Thomas

    2014-06-10

    Modern particle accelerators have tight tolerances on the acceptable deviation from their desired machine parameters. The control of the parameters is of crucial importance for safe machine operation and performance. This thesis focuses on beam-based methods and algorithms to identify and correct errors in particle accelerators. The optics measurements and corrections of the Large Hadron Collider (LHC), which resulted in an unprecedentedly low β-beat for a hadron collider, are described. The transverse coupling is another parameter which is of importance to control. Improvement in the reconstruction of the coupling from turn-by-turn data has resulted in a significant decrease of the measurement uncertainty. An automatic coupling correction method, which is based on the injected beam oscillations, has been successfully used in normal operation of the LHC. Furthermore, a new method to measure and correct chromatic coupling that was applied to the LHC is described. It resulted in a decrease of the chromatic coupli...

  7. Error Analysis of a Finite Element Method for the Space-Fractional Parabolic Equation

    KAUST Repository

    Jin, Bangti; Lazarov, Raytcho; Pasciak, Joseph; Zhou, Zhi

    2014-01-01

    We consider an initial boundary value problem for a one-dimensional fractional-order parabolic equation with a space fractional derivative of Riemann-Liouville type and order α ∈ (1, 2). We study a spatial semidiscrete scheme using the standard Galerkin finite element method with piecewise linear finite elements, as well as fully discrete schemes based on the backward Euler method and the Crank-Nicolson method. Error estimates in the L2(D)- and H^(α/2)(D)-norms are derived for the semidiscrete scheme, and in the L2(D)-norm for the fully discrete schemes. These estimates cover both smooth and nonsmooth initial data and are expressed directly in terms of the smoothness of the initial data. Extensive numerical results are presented to illustrate the theoretical results.
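
    A common way to check such theoretical rates numerically is to fit the slope of log(error) against log(h); the sketch below does exactly that with made-up (h, error) pairs, and is not tied to the paper's particular schemes.

        import numpy as np

        def convergence_order(h, err):
            """Empirical order p in err = C * h**p via a log-log fit."""
            p, _ = np.polyfit(np.log(h), np.log(err), 1)
            return p

        h = np.array([1/8, 1/16, 1/32, 1/64])
        err = 0.5 * h**2                      # synthetic second-order errors
        print(convergence_order(h, err))      # ~2.0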

  8. Manufacturing error sensitivity analysis and optimal design method of cable-network antenna structures

    Science.gov (United States)

    Zong, Yali; Hu, Naigang; Duan, Baoyan; Yang, Guigeng; Cao, Hongjun; Xu, Wanye

    2016-03-01

    Inevitable manufacturing errors and inconsistency between assumed and actual boundary conditions can affect the shape precision and cable tensions of a cable-network antenna, and can even result in failure of the structure in service. In this paper, an analytical method for the sensitivity of the shape precision and cable tensions with respect to the parameters carrying uncertainty was studied. Based on the sensitivity analysis, an optimal design procedure was proposed to alleviate the effects of the parameters that carry uncertainty. The validity of the calculated sensitivities is examined against those computed by a finite difference method. Comparison with a traditional design method shows that the presented design procedure can remarkably reduce the influence of the uncertainties on the antenna performance. Moreover, the results suggest that especially slender front net cables, thick tension ties, relatively slender boundary cables and a high tension level can improve the ability of cable-network antenna structures to resist the effects of the uncertainties on the antenna performance.
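
    The finite difference check mentioned above is generic and easy to reproduce; the sketch below compares an analytic gradient with central differences on a quadratic test function, which stands in for the antenna response model (an assumption for illustration).

        import numpy as np

        def fd_gradient(f, x, eps=1e-6):
            """Central-difference gradient of a scalar function f at x."""
            g = np.zeros_like(x)
            for i in range(x.size):
                d = np.zeros_like(x)
                d[i] = eps
                g[i] = (f(x + d) - f(x - d)) / (2.0 * eps)
            return g

        A = np.array([[3.0, 1.0], [1.0, 2.0]])
        f = lambda x: 0.5 * x @ A @ x         # stand-in response function
        grad_analytic = lambda x: A @ x       # exact sensitivity
        x0 = np.array([1.0, -2.0])
        print(np.max(np.abs(fd_gradient(f, x0) - grad_analytic(x0))))  # ~1e-9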

  9. Three-point method for measuring the geometric error components of linear and rotary axes based on sequential multilateration

    International Nuclear Information System (INIS)

    Zhang, Zhenjiu; Hu, Hong

    2013-01-01

    The linear and rotary axes are fundamental parts of multi-axis machine tools. The geometric error components of the axes must be measured for motion error compensation to improve the accuracy of the machine tools. In this paper, a simple method named the three-point method is proposed to measure the geometric errors of the linear and rotary axes of machine tools using a laser tracker. A sequential multilateration method, whose uncertainty is verified through simulation, is applied to measure the 3D coordinates. Three noncollinear points fixed on the stage of each axis are selected. The coordinates of these points are simultaneously measured using a laser tracker to obtain their volumetric errors by comparing these coordinates with ideal values. Numerous equations can be established using the geometric error models of each axis, and the geometric error components can be obtained by solving these equations. The validity of the proposed method is verified through a series of experiments. The results indicate that the proposed method can measure the geometric errors of the axes to compensate for the errors in multi-axis machine tools.
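
    The multilateration step itself reduces to a small nonlinear least-squares problem: recover a point from its distances to known station positions. The sketch below is a generic version with an invented station layout, not the paper's sequential procedure.

        import numpy as np
        from scipy.optimize import least_squares

        def multilaterate(stations, distances):
            """Solve for a 3D point given distances to known stations."""
            resid = lambda p: np.linalg.norm(stations - p, axis=1) - distances
            return least_squares(resid, x0=stations.mean(axis=0)).x

        stations = np.array([[0, 0, 0], [2, 0, 0], [0, 2, 0], [0, 0, 2]], float)
        target = np.array([0.7, 0.4, 1.1])
        rng = np.random.default_rng(2)
        d = np.linalg.norm(stations - target, axis=1) + rng.normal(0, 1e-4, 4)
        print(multilaterate(stations, d))     # ~[0.7, 0.4, 1.1]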

  10. A novel method for producing magnesium based hydrogen storage alloys

    International Nuclear Information System (INIS)

    Walton, A.; Matthews, J.; Barlow, R.; Almamouri, M.M.; Speight, J.D.; Harris, I.R.

    2003-01-01

    Conventional melt casting techniques for producing Mg2Ni often result in non-stoichiometric compositions due to the excess Mg which is added to the melt in order to counterbalance sublimation during processing. In this work a vapour phase process known as Low Pressure Pack Sublimation (LPPS) has been used to coat Ni substrates with Mg at 460-600 °C, producing layers of single phase Mg2Ni. Ni substrates coated to date include powder, foils and wire. Using Ni-Fe substrates it has also been demonstrated that Fe can be distributed through the Mg2Ni alloy layer, which could have a beneficial effect on the hydrogen storage characteristics. The alloy layers formed have been characterised by XRD and SEM equipped with EDX analysis. Hydrogen storage properties have been evaluated using an Intelligent Gravimetric Analyser (IGA). LPPS avoids most of the sintering of powder particles during processing which is observed in other vapour phase techniques, while producing a stoichiometric composition of Mg2Ni. It is also a simple, low cost technique for producing these alloys. (author)

  11. Method to produce sintered carriers for electrodes of galvanic elements

    Energy Technology Data Exchange (ETDEWEB)

    Jost, E M

    1978-03-24

    Carrier plates of precisely uniform thickness can be produced according to the invention by firstly thickening a solution of polyethylene oxide and (preferably) methanol by adding water and then, by adding nickel powder, obtaining an essentially homogeneous suspension of considerable viscosity. This slurry is coated on both sides of a nickel grid, dried and sintered.

  12. Microemulsion extrusion technique : a new method to produce lipid nanoparticles

    NARCIS (Netherlands)

    de Jesus, Marcelo Bispo; Radaic, Allan; Zuhorn, Inge S.; de Paula, Eneida

    2013-01-01

    Solid lipid nanoparticles (SLN) and nano-structured lipid carriers (NLC) have been intensively investigated for different applications, including their use as drug and gene delivery systems. Different techniques have been employed to produce lipid nanoparticles, of which high pressure homogenization

  13. A novel method to produce dry geopolymer cement powder

    Directory of Open Access Journals (Sweden)

    H.A. Abdel-Gawwad

    2016-04-01

    Geopolymer cement is the result of the reaction between an aluminosilicate material and a concentrated alkaline solution, producing an inorganic polymer binder. The alkali solutions are corrosive and often viscous, are not user friendly, and would be difficult to use for bulk production. This work aims to produce a one-mix geopolymer cement powder, which when mixed with water could be an alternative to Portland cement, by blending slag with a dry activator. Sodium hydroxide (SH) was dissolved in water and added to calcium carbonate (CC), then dried at 80 °C for 8 h followed by pulverization to a fixed particle size to produce the dry activator, consisting of calcium hydroxide (CH), sodium carbonate (SC) and pirssonite (P). This increases their commercial availability. The dry activator was blended with granulated blast-furnace slag (GBFS) to produce geopolymer cement powder; on addition of water, the geopolymerization process starts. The effect of the W/C and SH/CC ratios on the physico-mechanical properties of slag pastes was studied. The results showed that the optimum activator and CC content is 4% SH and 5% CC, by weight of slag, which gives the highest physico-mechanical properties of GBFS. The characterization of the activated slag pastes was carried out using TGA, DTG, IR spectroscopy and SEM techniques.

  14. Ptychographic overlap constraint errors and the limits of their numerical recovery using conjugate gradient descent methods.

    Science.gov (United States)

    Tripathi, Ashish; McNulty, Ian; Shpyrko, Oleg G

    2014-01-27

    Ptychographic coherent x-ray diffractive imaging is a form of scanning microscopy that does not require optics to image a sample. A series of scanned coherent diffraction patterns recorded from multiple overlapping illuminated regions on the sample are inverted numerically to retrieve its image. The technique recovers the phase lost in detecting the diffraction patterns by using experimentally known constraints, in this case the measured diffraction intensities and the assumed scan positions on the sample. The spatial resolution of the recovered image of the sample is limited by the angular extent over which the diffraction patterns are recorded and by how well these constraints are known. Here, we explore how reconstruction quality degrades with uncertainties in the scan positions. We show experimentally that large errors in the assumed scan positions on the sample can be numerically determined and corrected using conjugate gradient descent methods. We also explore in simulations the limits, based on the signal to noise of the diffraction patterns and the amount of overlap between adjacent scan positions, of just how large these errors can be and still be rendered tractable by this method.

  15. ERRORS MEASUREMENT OF INTERPOLATION METHODS FOR GEOID MODELS: STUDY CASE IN THE BRAZILIAN REGION

    Directory of Open Access Journals (Sweden)

    Daniel Arana

    The geoid is an equipotential surface regarded as the altimetric reference for geodetic surveys, and it therefore has several practical applications for engineers. In recent decades the geodetic community has concentrated efforts on the development of highly accurate geoid models through modern techniques. These models are supplied as regular grids from which users need to interpolate. Yet, little information is available regarding the most appropriate interpolation method for extracting information from the regular grid of a geoid model. The use of an interpolator that does not represent the geoid surface appropriately can impair the quality of the geoid undulations and consequently the height transformation. This work aims to quantify the magnitude of the error that comes from interpolating a regular mesh of a geoid model. The analysis consisted of comparing the interpolation of the MAPGEO2015 program with three interpolation methods: bilinear, cubic spline and radial basis function neural networks. As a result of the experiments, it was concluded that 2.5 cm of the 18 cm error of the MAPGEO2015 validation is caused by the use of interpolations in the 5'x5' grid.
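
    Bilinear interpolation, the simplest of the three methods compared, is sketched below for a regular geoid-undulation grid; the toy 2x2 grid and coordinates are invented, and real use would index into the full 5'x5' model grid.

        import numpy as np

        def bilinear(grid, lat0, lon0, step, lat, lon):
            """Bilinear interpolation on a regular grid (rows=lat, cols=lon)."""
            i = (lat - lat0) / step
            j = (lon - lon0) / step
            i0, j0 = int(np.floor(i)), int(np.floor(j))
            di, dj = i - i0, j - j0
            return ((1 - di) * (1 - dj) * grid[i0, j0] +
                    (1 - di) * dj * grid[i0, j0 + 1] +
                    di * (1 - dj) * grid[i0 + 1, j0] +
                    di * dj * grid[i0 + 1, j0 + 1])

        step = 5.0 / 60.0                                  # 5 arc minutes, in degrees
        grid = np.array([[-3.10, -3.05],                   # toy undulations, metres
                         [-3.00, -2.90]])
        print(bilinear(grid, lat0=-22.0, lon0=-48.0, step=step,
                       lat=-21.97, lon=-47.96))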

  16. Evaluation of roundness error using a new method based on a small displacement screw

    International Nuclear Information System (INIS)

    Nouira, Hichem; Bourdet, Pierre

    2014-01-01

    In relation to industrial needs and the progress of technology, LNE would like to improve the measurement of its primary pressure, spherical and flick standards. The spherical and flick standards are respectively used to calibrate the spindle motion error and the probe which equips commercial conventional cylindricity measuring machines. The primary pressure standards are obtained using pressure balances equipped with rotary pistons, with an uncertainty of 5 nm for a piston diameter of 10 mm. Conventional machines are not able to reach such an uncertainty level, which is why the development of a new machine is necessary. To ensure such a level of uncertainty, the stability and performance of the machine alone are not sufficient; the data processing should also be done with an accuracy better than a nanometre. In this paper, a new method based on the small displacement screw (SDS) model is proposed. A first validation of this method is proposed on a theoretical dataset published by the European Community Bureau of Reference (BCR) in report no 3327. Then, an experiment is prepared in order to validate the new method on real datasets. Specific environment conditions are taken into account and many precautions are considered. The new method is applied to analyse the least-squares circle, minimum zone circle, maximum inscribed circle and minimum circumscribed circle. The results are compared to those obtained by the reference Chebyshev best-fit method and reveal perfect agreement. The sensitivities of the SDS and Chebyshev methodologies are investigated, and it is revealed that the results remain unchanged when the value of the diameter exceeds 700 times the form error. (paper)
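
    Of the four reference circles analysed, the least-squares circle has a particularly compact linear formulation (the Kasa fit), sketched below on a synthetic three-lobed profile; the profile and centre offsets are invented, and the paper's SDS formulation is not reproduced here.

        import numpy as np

        def lsq_circle(x, y):
            """Least-squares circle: centre (a, b) and radius r."""
            A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
            (a, b, c), *_ = np.linalg.lstsq(A, x**2 + y**2, rcond=None)
            return a, b, np.sqrt(c + a**2 + b**2)

        t = np.linspace(0, 2 * np.pi, 360, endpoint=False)
        r_true = 5.0 + 1e-4 * np.cos(3 * t)        # small three-lobe form error
        x = r_true * np.cos(t) + 0.01              # centre offset in x
        y = r_true * np.sin(t) - 0.02              # centre offset in y
        a, b, r = lsq_circle(x, y)
        residual = np.hypot(x - a, y - b) - r
        print(residual.max() - residual.min())     # peak-to-valley roundness error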

  17. Using snowball sampling method with nurses to understand medication administration errors.

    Science.gov (United States)

    Sheu, Shuh-Jen; Wei, Ien-Lan; Chen, Ching-Huey; Yu, Shu; Tang, Fu-In

    2009-02-01

    We aimed to encourage nurses to release information about drug administration errors to increase understanding of error-related circumstances and to identify high-alert situations. Drug administration errors represent the majority of medication errors, but errors are underreported. Effective ways are lacking to encourage nurses to actively report errors. Snowball sampling was conducted to recruit participants. A semi-structured questionnaire was used to record types of error, hospital and nurse backgrounds, patient consequences, error discovery mechanisms and reporting rates. Eighty-five nurses participated, reporting 328 administration errors (259 actual, 69 near misses). Most errors occurred in medical-surgical wards of teaching hospitals, during day shifts, and were committed by nurses with fewer than two years of experience. Leading errors were wrong drugs and doses, each accounting for about one-third of total errors. Among 259 actual errors, 83.8% resulted in no adverse effects; among the remaining 16.2%, 6.6% had mild consequences and 9.6% had serious consequences (severe reaction, coma, death). Actual errors and near misses were discovered mainly through double-check procedures by colleagues and by the nurses responsible for the errors; reporting rates were 62.5% (162/259) vs. 50.7% (35/69), and only 3.5% (9/259) vs. 0% (0/69) were disclosed to patients and families. High-alert situations included administration of 15% KCl, insulin and Pitocin; using intravenous pumps; and implementation of cardiopulmonary resuscitation (CPR). Snowball sampling proved to be an effective way to encourage nurses to release details concerning medication errors. Using empirical data, we identified high-alert situations. Strategies for reducing drug administration errors by nurses are suggested. Survey results suggest that nurses should double-check medication administration in known high-alert situations. Nursing management can use snowball sampling to gather error details from nurses in a non

  18. Methods of refining natural oils, and methods of producing fuel compositions

    Science.gov (United States)

    Firth, Bruce E.; Kirk, Sharon E.

    2015-10-27

    A method of refining a natural oil includes: (a) providing a feedstock that includes a natural oil; (b) reacting the feedstock in the presence of a metathesis catalyst to form a metathesized product that includes olefins and esters; (c) passivating residual metathesis catalyst with an agent that comprises nitric acid; (d) separating the olefins in the metathesized product from the esters in the metathesized product; and (e) transesterifying the esters in the presence of an alcohol to form a transesterified product and/or hydrogenating the olefins to form a fully or partially saturated hydrogenated product. Methods for suppressing isomerization of olefin metathesis products produced in a metathesis reaction, and methods of producing fuel compositions are described.

  19. Removal of round off errors in the matrix exponential method for solving the heavy nuclide chain

    International Nuclear Information System (INIS)

    Lee, Hyun Chul; Noh, Jae Man; Joo, Hyung Kook

    2005-01-01

    Many nodal codes for core simulation adopt the micro-depletion procedure for depletion analysis. Unlike the macro-depletion procedure, the micro-depletion procedure uses micro cross sections and number densities of important nuclides to generate the macro cross section of a spatial calculational node. Therefore, it needs to solve the chain equations of the nuclides of interest to obtain their number densities. There are several methods, such as the matrix exponential method (MEM) and the chain linearization method (CLM), for solving the nuclide chain equations. The former solves the chain equations exactly, even when cycles arising from alpha decay exist in the chain, while the latter solves the chain only approximately when such cycles exist. The former has another advantage over the latter: many nodal codes for depletion analysis, such as MASTER, solve only hard-coded nuclide chains with the CLM, so if we want to extend the chain by adding more nuclides, we have to modify the source code. In contrast, with the MEM we can extend the chain just by modifying the input, because it is easy to implement an MEM solver for an arbitrary nuclide chain. In spite of these advantages of the MEM, many nodal codes adopt chain linearization because the MEM has a large round-off error when the flux level is very high or short-lived or strongly absorbing nuclides exist in the chain. In this paper, we propose a new technique to remove the round-off errors in the MEM, and we compare the performance of the two methods
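
    The MEM itself is one line once the chain is written as N'(t) = A N(t); the sketch below solves an invented three-nuclide chain with scipy's expm. With very short-lived nuclides (widely separated decay constants) the same call in double precision exhibits the round-off problem the paper addresses.

        import numpy as np
        from scipy.linalg import expm

        lam = np.array([1e-5, 1e-3, 1e-1])         # decay constants, 1/s (invented)
        A = np.array([[-lam[0],     0.0,     0.0],
                      [ lam[0], -lam[1],     0.0],
                      [    0.0,  lam[1], -lam[2]]])
        N0 = np.array([1e20, 0.0, 0.0])            # initial number densities
        t = 3600.0                                 # one hour
        print(expm(A * t) @ N0)                    # N(t) = expm(A t) N(0)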

  20. Survey of industry methods for producing highly reliable software

    International Nuclear Information System (INIS)

    Lawrence, J.D.; Persons, W.L.

    1994-11-01

    The Nuclear Reactor Regulation Office of the US Nuclear Regulatory Commission is charged with assessing the safety of new instrument and control designs for nuclear power plants which may use computer-based reactor protection systems. Lawrence Livermore National Laboratory has evaluated the latest techniques in software reliability for measurement, estimation, error detection, and prediction that can be used during the software life cycle as a means of risk assessment for reactor protection systems. One aspect of this task has been a survey of the software industry to collect information to help identify the design factors used to improve the reliability and safety of software. The intent was to discover what practices really work in industry and what design factors are used by industry to achieve highly reliable software. The results of the survey are documented in this report. Three companies participated in the survey: Computer Sciences Corporation, International Business Machines (Federal Systems Company), and TRW. Discussions were also held with NASA Software Engineering Lab/University of Maryland/CSC, and the AIAA Software Reliability Project

  1. Method for Producing Launch/Landing Pads and Structures Project

    Science.gov (United States)

    Mueller, Robert P. (Compiler)

    2015-01-01

    Current plans for deep space exploration include building landing/launch pads capable of withstanding the rocket blast of much larger spacecraft than those of the Apollo era. The proposed concept will develop lightweight launch and landing pad materials from in-situ materials, utilizing regolith to produce controllable porous cast metallic foam brick/tile shapes. These shapes can be used to lay a landing/launch platform, as a construction material, or as more complex parts of mechanical assemblies.

  2. Analysis of S-box in Image Encryption Using Root Mean Square Error Method

    Science.gov (United States)

    Hussain, Iqtadar; Shah, Tariq; Gondal, Muhammad Asif; Mahmood, Hasan

    2012-07-01

    The use of substitution boxes (S-boxes) in encryption applications has proven to be an effective nonlinear component in creating confusion and randomness. The S-box is evolving and many variants appear in the literature, including the advanced encryption standard (AES) S-box, affine power affine (APA) S-box, Skipjack S-box, Gray S-box, Lui J S-box, residue prime number S-box, Xyi S-box, and S8 S-box. These S-boxes have algebraic and statistical properties which distinguish them from each other in terms of encryption strength. In some circumstances, the parameters from algebraic and statistical analysis yield results which do not provide clear evidence in distinguishing an S-box for an application to a particular set of data. In image encryption applications, the use of S-boxes needs special care because the visual analysis and perception of a viewer can sometimes identify artifacts embedded in the image. In addition to the existing algebraic and statistical analyses already used for image encryption applications, we propose an application of the root mean square error technique, which further elaborates the results and enables the analyst to vividly distinguish between the performances of various S-boxes. While the use of root mean square error analysis in statistics has proven to be effective in determining the difference between original data and processed data, its use in image encryption has shown promising results in estimating the strength of the encryption method. In this paper, we show the application of root mean square error analysis to S-box image encryption. The parameters from this analysis are used in determining the strength of S-boxes.
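
    The RMSE measure itself is straightforward; the sketch below computes it between a toy plaintext image and a stand-in ciphertext (a random permutation of the pixels, not an actual S-box encryption).

        import numpy as np

        def rmse(img_a, img_b):
            """Root mean square error between two images."""
            a = img_a.astype(float)
            b = img_b.astype(float)
            return float(np.sqrt(np.mean((a - b)**2)))

        rng = np.random.default_rng(3)
        plain = rng.integers(0, 256, (64, 64), dtype=np.uint8)    # toy image
        cipher = rng.permutation(plain.ravel()).reshape(64, 64)   # stand-in ciphertext
        print(rmse(plain, cipher))   # larger RMSE = stronger visual scrambling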

  3. Pediatric Nurses' Perceptions of Medication Safety and Medication Error: A Mixed Methods Study.

    Science.gov (United States)

    Alomari, Albara; Wilson, Val; Solman, Annette; Bajorek, Beata; Tinsley, Patricia

    2017-05-30

    This study aims to outline the current workplace culture of medication practice in a pediatric medical ward. The objective is to explore the perceptions of nurses in a pediatric clinical setting as to why medication administration errors occur. As nurses have a central role in the medication process, it is essential to explore nurses' perceptions of the factors influencing the medication process. Without this understanding, it is difficult to develop effective prevention strategies aimed at reducing medication administration errors. Previous studies were limited to exploring a single, specific aspect of medication safety, and the methods used in these studies were limited to survey designs, which may lead to incomplete or inadequate information being provided. This study is phase 1 of an action research project. Data collection included direct observation of nurses during medication preparation and administration, an audit based on the medication policy and guidelines, and focus groups with nursing staff. A thematic analysis was undertaken by each author independently to analyze the observation notes and focus group transcripts. Simple descriptive statistics were used to analyze the audit data. The study was conducted in a specialized pediatric medical ward. Four key themes were identified from the combined quantitative and qualitative data: (1) understanding medication errors, (2) the busy-ness of nurses, (3) the physical environment, and (4) compliance with medication policy and practice guidelines. Workload, frequent interruptions to process, poor physical environment design, lack of preparation space, and impractical medication policies are identified as barriers to safe medication practice. Overcoming these barriers requires organizations to review medication process policies and engage nurses more in medication safety research and in designing clinical guidelines for their own practice.

  4. Alternative method to detect compounds produced by Gambierdiscus spp.

    Directory of Open Access Journals (Sweden)

    Jon Andoni Sánchez

    2014-06-01

    Ciguatoxins (CTXs) and CTX precursors are produced by several Gambierdiscus spp. These polyether toxins are associated with ciguatera fish poisoning (CFP). In addition to CTX, maitotoxins (MTXs) and gambierol are also produced by these dinoflagellates. The MTX mechanism of action is strictly Ca2+ dependent, since the toxin induces a massive cytoplasmic Ca2+ influx. CTX, however, activates voltage-dependent sodium channels, and no relation with calcium fluxes has been shown. The aim of this work was to study the effect of both toxins on cytoplasmic calcium levels in the SH-SY5Y neuroblastoma cell line by using the fluorescent probe Fura-2 AM. Two completely different calcium profiles were obtained. While MTX induces a sustained dose-dependent increase in the Fura-2 ratio, CTX produces a slight increase in the dye ratio. From the MTX results, a calibration curve of concentration versus Fura-2 ratio was obtained, from which the toxin concentration of an unknown sample can be calculated. Then, the effect of four samples from Gambierdiscus cultures was studied and different calcium profiles were obtained. A high increase in the Fura-2 ratio was observed in two samples. The calcium profile was similar to MTX, and by using the calibration curve the amount of toxin was calculated (4.9 and 1.8 nM of MTX). In the other samples, from the Fura-2 results the presence of CTX-like compounds can be established.

  5. Trench capacitor and method for producing the same

    NARCIS (Netherlands)

    2009-01-01

    A method of fabricating a trench capacitor, and a trench capacitor fabricated thereby, are disclosed. The method involves the use of a vacuum impregnation process for a sol-gel film, to facilitate effective deposition of high- permittivity materials within a trench in a semiconductor substrate, to

  6. A possible alternative to the error prone modified Hodge test to correctly identify the carbapenemase producing Gram-negative bacteria.

    Science.gov (United States)

    Jeremiah, S S; Balaji, V; Anandan, S; Sahni, R D

    2014-01-01

    The modified Hodge test (MHT) is widely used as a screening test for the detection of carbapenemases in Gram-negative bacteria. This test has several pitfalls in terms of validity and interpretation. The test also has a very low sensitivity in detecting the New Delhi metallo-β-lactamase (NDM). Considering the degree of dissemination of the NDM and the growing pandemic of carbapenem resistance, a more accurate alternative test is needed at the earliest. This study compares the performance of the MHT with the commercially available Neo-Sensitabs - Carbapenemases/Metallo-β-Lactamase (MBL) Confirmative Identification pack to find out whether the latter could be an efficient alternative to the former. A total of 105 isolates of Klebsiella pneumoniae resistant to imipenem and meropenem, collected prospectively over a period of 2 years, were included in the study. The study isolates were tested with the MHT, the Neo-Sensitabs - Carbapenemases/MBL Confirmative Identification pack, and polymerase chain reaction (PCR) for detecting the blaNDM-1 gene. Among the 105 isolates, the MHT identified 100 isolates as carbapenemase producers. Of the five isolates negative by the MHT, four were found to produce MBLs by the Neo-Sensitabs. The Neo-Sensitabs did not have any false negatives when compared against the PCR. The MHT can give false negative results, which leads to failure in detecting carbapenemase producers. Also considering the other pitfalls of the MHT, the Neo-Sensitabs - Carbapenemases/MBL Confirmative Identification pack could be a more efficient alternative for the detection of carbapenemase production in Gram-negative bacteria.

  7. Method of microbially producing metal gallate spinel nano-objects, and compositions produced thereby

    Science.gov (United States)

    Duty, Chad E.; Jellison, Jr., Gerald E.; Love, Lonnie J.; Moon, Ji Won; Phelps, Tommy J.; Ivanov, Ilia N.; Kim, Jongsu; Park, Jehong; Lauf, Robert

    2018-01-16

    A method of forming a metal gallate spinel structure that includes mixing a divalent metal-containing salt and a gallium-containing salt in solution with fermentative or thermophilic bacteria. In the process, the bacteria nucleate metal gallate spinel nano-objects from the divalent metal-containing salt and the gallium-containing salt without requiring reduction of a metal in the solution. The metal gallate spinel structures, as well as light-emitting structures in which they are incorporated, are also described.

  8. Improving Papanicolaou test quality and reducing medical errors by using Toyota production system methods.

    Science.gov (United States)

    Raab, Stephen S; Andrew-Jaja, Carey; Condel, Jennifer L; Dabbs, David J

    2006-01-01

    The objective of the study was to determine whether the Toyota production system process improves Papanicolaou test quality and patient safety. An 8-month nonconcurrent cohort study that included 464 case and 639 control women who had a Papanicolaou test was performed. Office workflow was redesigned using Toyota production system methods by introducing a 1-by-1 continuous flow process. We measured the frequency of Papanicolaou tests without a transformation zone component, follow-up and Bethesda System diagnostic frequency of atypical squamous cells of undetermined significance, and diagnostic error frequency. After the intervention, the percentage of Papanicolaou tests lacking a transformation zone component decreased from 9.9% to 4.7% (P = .001). The percentage of Papanicolaou tests with a diagnosis of atypical squamous cells of undetermined significance decreased from 7.8% to 3.9% (P = .007). The frequency of error per correlating cytologic-histologic specimen pair decreased from 9.52% to 7.84%. The introduction of the Toyota production system process resulted in improved Papanicolaou test quality.

  9. Boundary integral method to calculate the sensitivity temperature error of microstructured fibre plasmonic sensors

    International Nuclear Information System (INIS)

    Esmaeilzadeh, Hamid; Arzi, Ezatollah; Légaré, François; Hassani, Alireza

    2013-01-01

    In this paper, using the boundary integral method (BIM), we simulate the effect of temperature fluctuation on the sensitivity of microstructured optical fibre (MOF) surface plasmon resonance (SPR) sensors. The final results indicate that, as the temperature increases, the refractometry sensitivity of our sensor decreases from 1300 nm/RIU at 0 °C to 1200 nm/RIU at 50 °C, leading to ∼7.7% sensitivity reduction and a sensitivity temperature error of 0.15% °C⁻¹ for this case. These results can be used for biosensing temperature-error adjustment in MOF SPR sensors, since biomaterials detection usually happens in this temperature range. Moreover, the signal-to-noise ratio (SNR) of our sensor decreases from 0.265 at 0 °C to 0.154 at 100 °C with an average reduction rate of ∼0.42% °C⁻¹. The results suggest that at lower temperatures the sensor has a higher SNR. (paper)

  10. Comparing Measurement Error between Two Different Methods of Measurement of Various Magnitudes

    Science.gov (United States)

    Zavorsky, Gerald S.

    2010-01-01

    Measurement error is a common problem in several fields of research such as medicine, physiology, and exercise science. The standard deviation of repeated measurements on the same person is the measurement error. One way of presenting measurement error is called the repeatability, which is 2.77 multiplied by the within-subject standard deviation.…
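
    With duplicate measurements per subject, the within-subject standard deviation and the repeatability can be computed as in the sketch below; the five paired measurements are invented.

        import numpy as np

        def repeatability(m1, m2):
            """Within-subject SD and repeatability (2.77 x SD) from duplicates."""
            d = np.asarray(m1, float) - np.asarray(m2, float)
            sw = np.sqrt(np.mean(d**2) / 2.0)      # within-subject SD
            return sw, 2.77 * sw

        m1 = [12.1, 15.3, 9.8, 11.0, 14.2]         # first measurement per subject
        m2 = [12.4, 15.0, 10.1, 10.6, 14.5]        # repeated measurement
        print(repeatability(m1, m2))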

  11. Numerical method for multigroup one-dimensional SN eigenvalue problems with no spatial truncation error

    International Nuclear Information System (INIS)

    Abreu, M.P.; Filho, H.A.; Barros, R.C.

    1993-01-01

    The authors describe a new nodal method for multigroup slab-geometry discrete ordinates S N eigenvalue problems that is completely free from all spatial truncation errors. The unknowns in the method are the node-edge angular fluxes, the node-average angular fluxes, and the effective multiplication factor k eff . The numerical values obtained for these quantities are exactly those of the dominant analytic solution of the S N eigenvalue problem apart from finite arithmetic considerations. This method is based on the use of the standard balance equation and two nonstandard auxiliary equations. In the nonmultiplying regions, e.g., the reflector, we use the multigroup spectral Green's function (SGF) auxiliary equations. In the fuel regions, we use the multigroup spectral diamond (SD) auxiliary equations. The SD auxiliary equation is an extension of the conventional auxiliary equation used in the diamond difference (DD) method. This hybrid characteristic of the SD-SGF method improves both the numerical stability and the convergence rate

  12. Method for evaluation of risk due to seismic related design and construction errors based on past reactor experience

    International Nuclear Information System (INIS)

    Gonzalez Cuesta, M.; Okrent, D.

    1985-01-01

    This paper proposes a methodology for the quantification of risk due to seismic-related design and construction errors in nuclear power plants, based on information available on errors discovered in the past. For the purposes of this paper, an error is defined as any event that causes the seismic safety margins of a nuclear power plant to be smaller than implied by current regulatory requirements and common industry practice. The actual reduction in the safety margins caused by the error is called a deficiency. The method is based on a theoretical model of errors, called a deficiency logic diagram. First, an ultimate cause is present. This ultimate cause is consummated as a specific instance, called an originating error. As originating errors may occur in actions to be applied a number of times, a deficiency generation system may be involved. Quality assurance activities will hopefully identify most of these deficiencies, requesting their disposition. However, the quality assurance program is not perfect and some operating plant deficiencies may persist, causing different levels of impact on the plant logic. The paper provides a way of extrapolating information about errors discovered in plants under construction in order to assess the risk due to errors that have not been discovered.

  13. CNC LATHE MACHINE PRODUCING NC CODE BY USING DIALOG METHOD

    Directory of Open Access Journals (Sweden)

    Yakup TURGUT

    2004-03-01

    In this study, an NC code generation program utilising the Dialog Method was developed for turning centres. Initially, CNC lathe turning methods and tool path development techniques were reviewed briefly. By using geometric definition methods, the tool path was generated and a CNC part program was developed for a FANUC control unit. The developed program made the CNC part program generation process easy. The program was developed using the BASIC 6.0 programming language, while the material and cutting tool databases were supported with the help of ACCESS 7.0.
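
    A toy version of such dialog-style generation is sketched below: a few user-supplied parameters are turned into a straight multi-pass turning program in FANUC-like G code. The word layout, feeds and speeds are illustrative assumptions, not the program described in the paper.

        def turning_passes(start_dia, end_dia, length, depth_of_cut=1.0, feed=0.2):
            """Emit a simple FANUC-style straight turning cycle."""
            lines = ["G21 G95", "T0101", "G97 S1200 M03"]   # metric, feed/rev, tool, spindle
            dia = start_dia
            while dia - 2 * depth_of_cut >= end_dia:
                dia -= 2 * depth_of_cut                     # diameter removed per pass
                lines += [f"G00 X{dia:.3f} Z2.0",
                          f"G01 Z-{length:.3f} F{feed}",
                          f"G00 X{dia + 2.0:.3f} Z2.0"]
            lines.append("M30")
            return "\n".join(lines)

        print(turning_passes(start_dia=50.0, end_dia=44.0, length=60.0))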

  14. Survey on radionuclide producing using cyclotron method in Malaysia

    International Nuclear Information System (INIS)

    Mohd Fadli Mohammad Noh

    2008-01-01

    This research discusses the basic design and systems of the medical cyclotrons that Malaysia currently has, their applications in radionuclide production, and upcoming cyclotron technologies. Surveys were carried out on the cyclotron facilities at Hospital Putrajaya and the Wijaya International Medical Center (WIMC), as well as the reactor facility at the Malaysian Nuclear Agency. The sources for this research also include on-line and library searches. The information obtained was recorded, categorized, synthesized and discussed. The cyclotron systems of Hospital Putrajaya are further discussed in detail. Based on the surveys carried out, it was found that the cyclotron facilities in both Hospital Putrajaya and WIMC produce only (18F)FDG, with 18F radioactivities produced in 2007 of 16479 mCi and 92546 mCi respectively. The survey also revealed that radioisotope production at Nuclear Malaysia has ceased operation. A new radiopharmaceutical, namely CHOL, is suggested to be synthesized by both facilities as a new PET tracer. The latest developments concerning the technology of cyclotrons and other accelerators, such as lasers as future medical accelerators, the prospects of boron neutron capture and the potential of hadron therapy in Malaysia, are discussed here. Radioisotope production in Malaysia is expected to keep growing in future due to the increased use of PET techniques and the construction of more compact, easier to handle and less costly cyclotrons. (author)

  15. Errors of absolute methods of reactor neutron activation analysis caused by non-1/E epithermal neutron spectra

    International Nuclear Information System (INIS)

    Erdtmann, G.

    1993-08-01

    A sufficiently accurate characterization of the neutron flux and spectrum, i.e. the determination of the thermal flux, the flux ratio and the epithermal flux spectrum shape factor, α, is a prerequisite for all types of absolute and monostandard methods of reactor neutron activation analysis. A convenient method for these measurements is the bare triple monitor method. However, the results of this method are very imprecise, because there are high error propagation factors from the counting errors of the monitor activities. Procedures are described to calculate the errors of the flux parameters, the α-dependent cross-section ratios, and of the analytical results from the errors of the activities of the monitor isotopes. They are included in FORTRAN programs which also allow a graphical representation of the results. A great number of examples were calculated for ten different irradiation facilities in four reactors and for 28 elements. Plots of the results are presented and discussed. (orig./HP) [de]

  16. Do natural methods for fertility regulation increase the risks of genetic errors?

    Science.gov (United States)

    Serra, A

    1981-09-01

    Genetic errors of many kinds are connected with the reproductive processes and are favored by a number of largely uncontrollable, endogenous and/or exogenous factors. For a long time human beings have taken into their own hands the control of this process. The regulation of fertility is clearly a forceful request to any family and any community, were it only to lower the level of the consequences of genetic errors. In connection with this request, and in the context of the Congress for the Family of Africa and Europe (Catholic University, January 1981), one question must still be raised and possibly answered: do or can the so-called "natural methods" for the regulation of fertility increase the risks of genetic errors, with their generally dramatic effects on families and on communities? It is important to try to give, as far as possible, a scientifically based answer to this question. Fr. Haring, a moral theologian, citing scientific evidence, finds it shocking that the rhythm method, so strongly and recently endorsed again by Church authorities, should be classified among the means of "birth control" by way of spontaneous abortion, or at least by spontaneous loss of a large number of zygotes which, due to the concrete application of the rhythm method, lack the vitality necessary for survival. He goes on to state that scientific research provides overwhelming evidence that the rhythm method in its traditional form is responsible for a disproportionate waste of zygotes, a disproportionate frequency of spontaneous abortions, and defective children. Professor Hilgers, a reproductive physiologist, takes the opposite view, maintaining that the hypotheses are arbitrary and the alarm false. The strongest evidence upon which Fr. Haring bases his moral principles about the use of the natural methods of fertility regulation is a paper by Guerrero and Rojas (1975). These authors examined, retrospectively, the success of 965 pregnancies which occurred in

  17. Method and installation to produce compensators for radiotherapy

    Energy Technology Data Exchange (ETDEWEB)

    Quast, U; Krause, K

    1978-05-11

    An irregular body surface in the radiation application area in therapeutic irradiation, e.g. in the head-and-neck region, leads to deviations in dose homogeneity. To compensate for this, a laterally geometrically corrected and radially absorption-corrected compensator made of Lipowitz metal (an alloy of 50% Bi, 26.7% Pb, 13.3% Sn and 10% Cd) is used. It exhibits higher absorption than tissue-equivalent materials. In order to produce the negative form for the compensator, which is laterally reduced for divergence reasons, a device is used which scans the body section while simultaneously guiding a cutting tool over a disc of finely porous polystyrene hard foam, forming the negative shape from its surface.

  18. Method to produce furandicarboxylic acid (FDCA) from 5-hydroxymethylfurfural (HMF)

    Science.gov (United States)

    Dumesic, James A.; Motagamwala, Ali Hussain

    2017-04-11

    A process to produce furandicarboxylic acid (FDCA). The process includes the steps of reacting a C6 sugar-containing reactant in a reaction solution comprising a first organic solvent selected from the group consisting of beta-, gamma-, and delta-lactones, hydrofurans, hydropyrans, and combinations thereof, in the presence of an acid catalyst for a time and under conditions wherein at least a portion of the C6 sugar present in the reactant is converted to 5-(hydroxymethyl)furfural (HMF); oxidizing the HMF into FDCA with or without separating the HMF from the reaction solution; and extracting the FDCA by adding an aprotic organic solvent having a dipole moment of about 1.0 D or less to the reaction solution.

  19. Method for producing fluorinated diamond-like carbon films

    Science.gov (United States)

    Hakovirta, Marko J.; Nastasi, Michael A.; Lee, Deok-Hyung; He, Xiao-Ming

    2003-06-03

    Fluorinated, diamond-like carbon (F-DLC) films are produced by a pulsed, glow-discharge plasma immersion ion processing procedure. The pulsed, glow-discharge plasma was generated at a pressure of 1 Pa from an acetylene (C2H2) and hexafluoroethane (C2F6) gas mixture, and the fluorinated, diamond-like carbon films were deposited on silicon substrates. The film hardness and wear resistance were found to be strongly dependent on the fluorine content incorporated into the coatings. The hardness of the F-DLC films was found to decrease considerably when the fluorine content in the coatings reached about 20%. The contact angle of water on the F-DLC coatings was found to increase with increasing film fluorine content and to saturate at a level characteristic of polytetrafluoroethylene.

  20. Valuing urban open space using the travel-cost method and the implications of measurement error.

    Science.gov (United States)

    Hanauer, Merlin M; Reid, John

    2017-08-01

    Urbanization has placed pressure on open space within and adjacent to cities. In recent decades, a greater awareness has developed of the fact that individuals derive multiple benefits from urban open space. Given the location, there is often a high opportunity cost to preserving urban open space; thus it is important for both public and private stakeholders to justify such investments. The goals of this study are twofold. First, we use detailed surveys and precise, accessible mapping methods to demonstrate how travel-cost methods can be applied to the valuation of urban open space. Second, we assess the degree to which typical methods of estimating travel times, and thus travel costs, introduce bias into the estimates of welfare. The site we study is Taylor Mountain Regional Park, a 1100-acre space located immediately adjacent to Santa Rosa, California, which is the largest city (∼170,000 population) in Sonoma County and lies 50 miles north of San Francisco. We estimate that the average per-trip access value (consumer surplus) is $13.70. We also demonstrate that typical methods of measuring travel costs significantly understate these welfare measures. Our study provides policy-relevant results and highlights the sensitivity of urban open space travel-cost studies to bias stemming from travel-cost measurement error.
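
    A standard single-site count-data formulation of the travel-cost method is sketched below: trips are regressed on travel cost with a Poisson model, and the consumer surplus per trip is -1/beta_cost. The simulated data and the coefficient (chosen so the surplus lands near the $13.70 reported above) are illustrative assumptions, not the study's survey data or specification.

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(4)
        cost = rng.uniform(2, 60, 800)                     # round-trip travel cost, $
        trips = rng.poisson(np.exp(2.0 - 0.073 * cost))    # true beta_cost = -0.073
        X = sm.add_constant(cost)
        fit = sm.GLM(trips, X, family=sm.families.Poisson()).fit()
        beta_cost = fit.params[1]
        print("consumer surplus per trip: $%.2f" % (-1.0 / beta_cost))  # ~$13.70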

  1. A METHOD AND AN ELECTRODE PRODUCED BY INFILTRATION

    DEFF Research Database (Denmark)

    2014-01-01

    The present invention relates to electrodes having Gd- and Pr-doped cerium oxide (CGPO) backbones infiltrated with Sr-doped LaCoO3 (LSC) and a method to manufacture them. Pr ions have been introduced into a prefabricated CGO backbone by infiltrating Pr nitrate solution followed by high temperatur

  2. Radiation shielding phenolic fibers and method of producing same

    International Nuclear Information System (INIS)

    Ohtomo, K.

    1976-01-01

    A radiation shielding phenolic fiber is described comprising a filamentary phenolic polymer consisting predominantly of a sulfonic acid group-containing cured novolak resin and a metallic atom having a great radiation shielding capacity, the metallic atom being incorporated in the polymer by being chemically bound in the ionic state in the novolak resin. A method for the production of the fiber is discussed

  3. Methods for detection of environmental agents that produce congenital defects

    Energy Technology Data Exchange (ETDEWEB)

    Shepard, T.H.; Miller, J.R.; Marois, M. (eds.)

    1975-01-01

    Some topics discussed are as follows: current methods for teratogenicity testing in animals and suggestion for improvement; use of zebra fish for screening of teratogens; chemical structure and teratogenic mechanism of action; somatic cell genetics and teratogenesis; studies on mammalian embryos during organogenesis; infectious agents as teratogens; and pharmacogenetics and teratogenesis. (HLW)

  4. Medication Errors in a Swiss Cardiovascular Surgery Department: A Cross-Sectional Study Based on a Novel Medication Error Report Method

    Directory of Open Access Journals (Sweden)

    Kaspar Küng

    2013-01-01

    The purpose of this study was (1) to determine the frequency and type of medication errors (MEs), (2) to assess the number of MEs prevented by registered nurses, (3) to assess the consequences of MEs for patients, and (4) to compare the number of MEs reported by a newly developed medication error self-reporting tool to the number reported by the traditional incident reporting system. We conducted a cross-sectional study on MEs in the Cardiovascular Surgery Department of Bern University Hospital in Switzerland. Eligible registered nurses involved in the medication process were included. Data on MEs were collected using an investigator-developed medication error self-reporting tool (MESRT) that asked about the occurrence and characteristics of MEs. Registered nurses were instructed to complete a MESRT at the end of each shift even if there was no ME. All MESRTs were completed anonymously. During the one-month study period, a total of 987 MESRTs were returned. Of the 987 completed MESRTs, 288 (29%) indicated that there had been an ME. Registered nurses reported preventing 49 (5%) MEs. Overall, eight (2.8%) MEs had patient consequences. The high response rate suggests that this new method may be a very effective approach to detect, report, and describe MEs in hospitals.

  5. A Method to Optimize Geometric Errors of Machine Tool based on SNR Quality Loss Function and Correlation Analysis

    Directory of Open Access Journals (Sweden)

    Cai Ligang

    2017-01-01

    Instead of blindly improving machine tool accuracy by raising the precision of key components in the production process, a method combining the SNR quality loss function with machine tool geometric error correlation analysis is adopted to optimize the geometric errors of a five-axis machine tool. Firstly, the homogeneous transformation matrix method is used to build the five-axis machine tool geometric error model. Secondly, the SNR quality loss function is used for cost modeling. Then, the machine tool accuracy optimization objective function is established based on the correlation analysis. Finally, ISIGHT combined with MATLAB is applied to optimize each error. The results show that this method makes it reasonable and appropriate to relax the range of tolerance values, thereby reducing the manufacturing cost of machine tools.
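    To make the modeling step concrete, the sketch below chains small-angle homogeneous transformation matrices (HTMs) to propagate per-axis error motions to a tool-tip deviation. This is a minimal, generic HTM error chain; the axis ordering, error values and nominal point are hypothetical, not taken from the paper.

```python
import numpy as np

def htm(alpha, beta, gamma, dx, dy, dz):
    """Small-angle homogeneous transform: angular errors in rad, offsets in mm."""
    return np.array([[1.0,   -gamma,  beta,  dx],
                     [gamma,  1.0,   -alpha, dy],
                     [-beta,  alpha,  1.0,   dz],
                     [0.0,    0.0,    0.0,   1.0]])

# Hypothetical error motions of a three-axis (X, Y, Z) kinematic chain.
E_x = htm(1e-5, 2e-5, -1e-5, 0.003, 0.001, -0.002)
E_y = htm(-2e-5, 1e-5, 3e-5, -0.001, 0.002, 0.001)
E_z = htm(3e-5, -1e-5, 2e-5, 0.002, -0.003, 0.001)

p = np.array([100.0, 50.0, 20.0, 1.0])     # nominal tool position (mm)
p_actual = E_x @ E_y @ E_z @ p             # chained error transforms
print("volumetric error (mm):", p_actual[:3] - p[:3])
```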

  6. A method for producing a water and coal suspension

    Energy Technology Data Exchange (ETDEWEB)

    Mutase, T.; Khongo, T.; Minemura, N.; Nakai, S.; Ogura, K.; Savada, M.

    1983-08-30

    Coal dust (100 parts, with a 95 to 99 percent content of particles 7 to 150 micrometers in size) is loaded into a mixture of hydrocarbon oil (1 to 20 parts) and water (300 to 1,000 parts) and mixed for 3 to 5 minutes at a rotation frequency of 1,500 to 1,800 per minute. The agglomerates of coal dust and hydrocarbon (100 parts) produced in this manner are then mixed with water (25 to 60 parts), an anionic surfactant (0.1 to 2 parts) which has high dispersion activity, and a nonionic surfactant (0.1 to 2 parts) with an HLB value of 7 to 17 (preferably 13), to ensure a highly consistent aqueous suspension of high-quality coal characterized by high fluidity (dynamic viscosity of 0.5 to 1.4 pascal-seconds). It is preferable to use a heavy oil fraction, kerosene, residue from oil distillation or an anthracite coal resin as the hydrocarbon oil. Separation of the ash from the suspension is improved by adding the surfactants and a water-soluble inorganic salt which makes the aqueous solution alkaline (pH above 7). It is recommended that a salt of alkylbenzenesulfonic acid, a sodium salt of polyoxyethylene alkylphenolsulfonic acid, sodium lauryl sulfate, ammonium lauryl sulfate, polyoxyethylene sorbitan tristearate, polyoxyethylene lauric acid, polyoxyethylene nonylphenol ether or polyoxyethylene lauric ether be used as the surfactant.

  7. Laser readable thermoluminescent radiation dosimeters and methods for producing thereof

    International Nuclear Information System (INIS)

    Braunlich, P.F.; Tetzlaff, W.

    1989-01-01

    Thin layer thermoluminescent radiation dosimeters for use in laser readable dosimetry systems, and methods of fabricating such thin layer dosimeters are disclosed. The thin layer thermoluminescent radiation dosimeters include a thin substrate made from glass or other inorganic materials capable of withstanding high temperatures and high heating rates. A thin layer of a thermoluminescent phosphor material is heat bonded to the substrate using an inorganic binder such as glass. The dosimeters can be mounted in frames and cases for ease in handling. Methods of the invention include mixing a suitable phosphor composition and binder, both being in particulate or granular form. The mixture is then deposited onto a substrate such as by using mask printing techniques. The dosimeters are thereafter heated to fuse and bond the binder and phosphor to the substrate. 34 figs

  8. Method to produce carbon-cladded nuclear fuel particles

    International Nuclear Information System (INIS)

    Sturge, D.W.; Meaden, G.W.

    1978-01-01

    In this method, charges of fuel-element micro-spherules are given two carbon layers, the aim being uniform granulation (standard measurement). Two drums connected in series are used for this purpose. The micro-spherules coated with the first layer (phenol-formaldehyde resin coated graphite particles) leave the first drum and enter the second one. After coating with the second layer, the micro-spherules are introduced into a grain-size separator. Spherules that are too small are recycled directly into the second drum, and those that are too large are recycled into the first drum after removal of the graphite layers. The method may also be applied to metal-cladded particles to manufacture cermet fuels. (RW) [de]

  9. Method of producing catalytic material for fabricating nanostructures

    Science.gov (United States)

    Seals, Roland D.; Menchhofer, Paul A.; Howe, Jane Y.; Wang, Wei

    2018-01-30

    Methods of fabricating nano-catalysts are described. In some embodiments the nano-catalyst is formed from a powder-based substrate material and in some embodiments the nano-catalyst is formed from a solid-based substrate material. In some embodiments the substrate material may include metal, ceramic, or silicon or another metalloid. The nano-catalysts typically have metal nanoparticles disposed adjacent the surface of the substrate material. The methods typically include functionalizing the surface of the substrate material with a chelating agent, such as a chemical having dissociated carboxyl functional groups (-COO-), that provides an enhanced affinity for metal ions. The functionalized substrate surface may then be exposed to a chemical solution that contains metal ions. The metal ions are then bound to the substrate material and may then be reduced, such as by a stream of gas that includes hydrogen, to form metal nanoparticles adjacent the surface of the substrate.

  10. The treatment of commission errors in first generation human reliability analysis methods

    Energy Technology Data Exchange (ETDEWEB)

    Alvarengga, Marco Antonio Bayout; Fonseca, Renato Alves da, E-mail: bayout@cnen.gov.b, E-mail: rfonseca@cnen.gov.b [Comissao Nacional de Energia Nuclear (CNEN) Rio de Janeiro, RJ (Brazil); Melo, Paulo Fernando Frutuoso e, E-mail: frutuoso@nuclear.ufrj.b [Coordenacao dos Programas de Pos-Graduacao de Engenharia (PEN/COPPE/UFRJ), RJ (Brazil). Programa de Engenharia Nuclear

    2011-07-01

    Human errors in human reliability analysis can be classified generically as errors of omission and errors of commission. Errors of omission are related to the omission of a human action that should have been performed but does not occur. Errors of commission are related to human actions that should not be performed but in fact are performed. Both involve specific types of cognitive error mechanisms; however, errors of commission are more difficult to model because they are characterized by non-anticipated actions that are performed instead of others that are omitted (errors of omission), or that enter an operational task without being part of the normal sequence of that task. The identification of actions that are not supposed to occur depends on the operational context, which may induce or facilitate certain unsafe operator actions depending on the behavior of its parameters and variables. The survey of operational contexts and associated unsafe actions is a characteristic of second-generation models, unlike first-generation models. This paper discusses how first-generation models can treat errors of commission in the steps of detection, diagnosis, decision-making and implementation in human information processing, particularly with the use of THERP error quantification tables. (author)

  11. Twice cutting method reduces tibial cutting error in unicompartmental knee arthroplasty.

    Science.gov (United States)

    Inui, Hiroshi; Taketomi, Shuji; Yamagami, Ryota; Sanada, Takaki; Tanaka, Sakae

    2016-01-01

    Bone cutting error can be one of the causes of malalignment in unicompartmental knee arthroplasty (UKA). The amount of cutting error in total knee arthroplasty has been reported; however, none have investigated cutting error in UKA. The purpose of this study was to reveal the amount of cutting error in UKA when an open cutting guide was used, and to clarify whether cutting the tibia horizontally twice using the same cutting guide reduced the cutting errors in UKA. We measured the alignment of the tibial cutting guides, the first-cut cutting surfaces and the second-cut cutting surfaces using the navigation system in 50 UKAs. Cutting error was defined as the angular difference between the cutting guide and the cutting surface. The mean absolute first-cut cutting error was 1.9° (1.1° varus) in the coronal plane and 1.1° (0.6° anterior slope) in the sagittal plane, whereas the mean absolute second-cut cutting error was 1.1° (0.6° varus) in the coronal plane and 1.1° (0.4° anterior slope) in the sagittal plane. Cutting the tibia horizontally twice reduced the cutting errors in the coronal plane significantly. In conclusion, cutting the tibia horizontally twice using the same cutting guide reduced cutting error in the coronal plane. Copyright © 2014 Elsevier B.V. All rights reserved.

  12. Identification of Error of Commissions in the LOCA Using the CESA Method

    Energy Technology Data Exchange (ETDEWEB)

    Tukhbyet-olla, Myeruyert; Kang, Sunkoo; Kim, Jonghyun [KEPCO international nuclear graduate school, Ulsan (Korea, Republic of)

    2015-10-15

    Errors of commission (EOCs) can be defined as the performance of any inappropriate action that aggravates the situation. The primary focus in current PSA is placed on those sequences of hardware failures and/or errors of omission (EOOs) that lead to unsafe system states. Although EOCs can be treated when identified, a systematic and comprehensive treatment of EOC opportunities remains outside the scope of PSAs. However, past experience in the nuclear industry shows that EOCs have contributed to severe accidents. Some recent and emerging human reliability analysis (HRA) methods suggest approaches to identify and quantify EOCs, such as ATHEANA, MERMOS, GRS, MDTA, and CESA. The CESA method, developed by the Risk and Human Reliability Group at the Paul Scherrer Institute, is intended to identify potentially risk-significant EOCs, given an existing PSA. The main idea underlying the method is to catalog the key actions that are required in the procedural response to plant events and to identify specific scenarios in which these candidate actions could erroneously appear to be required. This paper aims at identifying EOCs in the LOCA by using the CESA method. The study is focused on the identification of EOCs, while the quantification of EOCs is out of scope. The CESA method is applied to the emergency operating procedure (EOP) of the LOCA for APR1400, and potential EOCs that may aggravate the mitigation of the LOCA are presented. Three candidate EOC events were identified using the operator action catalog and the RAW cutsets of the LOCA: inappropriate termination of the safety injection system, of the safety injection tanks, and of the containment spray system. After reviewing the top 100 accident sequences of the PSA, the study finally identified one EOC scenario and EOC path, namely, inappropriate termination of the safety injection system.

  13. Age estimation in forensic anthropology: quantification of observer error in phase versus component-based methods.

    Science.gov (United States)

    Shirley, Natalie R; Ramirez Montes, Paula Andrea

    2015-01-01

    The purpose of this study was to assess observer error in phase versus component-based scoring systems used to develop age estimation methods in forensic anthropology. A method preferred by forensic anthropologists in the AAFS was selected for this evaluation (the Suchey-Brooks method for the pubic symphysis). The Suchey-Brooks descriptions were used to develop a corresponding component-based scoring system for comparison. Several commonly used reliability statistics (kappa, weighted kappa, and the intraclass correlation coefficient) were calculated to assess observer agreement between two observers and to evaluate the efficacy of each of these statistics for this study. The linear weighted kappa was determined to be the most suitable measure of observer agreement. The results show that a component-based system offers the possibility for more objective scoring than a phase system as long as the coding possibilities for each trait do not exceed three states of expression, each with as little overlap as possible. © 2014 American Academy of Forensic Sciences.
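    For reference, the agreement statistics the study compares can be computed directly; below, invented phase scores for two observers stand in for the Suchey-Brooks data, and scikit-learn's kappa implementation is used (an ICC would need a separate routine, e.g. from the pingouin package).

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score

# Hypothetical Suchey-Brooks phase scores (1-6) from two observers on 15 specimens.
obs1 = np.array([1, 2, 2, 3, 4, 4, 5, 5, 6, 3, 2, 4, 5, 6, 1])
obs2 = np.array([1, 2, 3, 3, 4, 5, 5, 4, 6, 3, 2, 4, 6, 6, 1])

# Unweighted kappa treats a one-phase disagreement the same as a five-phase one;
# the linear weighted kappa preferred in the study penalizes by distance instead.
print("kappa          :", cohen_kappa_score(obs1, obs2))
print("weighted kappa :", cohen_kappa_score(obs1, obs2, weights="linear"))
```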

  14. Methods for producing single crystal mixed halide perovskites

    Science.gov (United States)

    Zhu, Kai; Zhao, Yixin

    2017-07-11

    An aspect of the present invention is a method that includes contacting a metal halide and a first alkylammonium halide in a solvent to form a solution and maintaining the solution at a first temperature, resulting in the formation of at least one alkylammonium halide perovskite crystal, where the metal halide includes a first halogen and a metal, the first alkylammonium halide includes the first halogen, the at least one alkylammonium halide perovskite crystal includes the metal and the first halogen, and the first temperature is above about 21 °C.

  15. Diagnostic and therapeutic capsules and method of producing

    International Nuclear Information System (INIS)

    Morcos, N.A.; Haney, T.A.; Wedeking, P.W.

    1981-01-01

    An article of manufacture comprising a pharmaceutical radioactive capsule formed essentially of a non-toxic, water-soluble material adapted to being ingested and rapidly disintegrating on contact with fluids of the gastro-intestinal tract, and having a filler material supporting a pharmaceutically useful radioactive compound absorbable from the gastro-intestinal tract, said filler material being supported by said capsule. Also, a method of filling a pharmaceutical radioactive capsule comprising providing filler material supporting a pharmaceutically useful radioactive compound and transporting said filler material carrying the pharmaceutically useful radioactive compound into the chamber of said capsule.

  16. Convolution method and CTV-to-PTV margins for finite fractions and small systematic errors

    International Nuclear Information System (INIS)

    Gordon, J J; Siebers, J V

    2007-01-01

    The van Herk margin formula (VHMF) relies on the accuracy of the convolution method (CM) to determine clinical target volume (CTV) to planning target volume (PTV) margins. This work (1) evaluates the accuracy of the CM and VHMF as a function of the number of fractions N and other parameters, and (2) proposes an alternative margin algorithm which ensures target coverage for a wider range of parameter values. Dose coverage was evaluated for a spherical target with uniform margin, using the same simplified dose model and CTV coverage criterion as were used in development of the VHMF. Systematic and random setup errors were assumed to be normally distributed with standard deviations Σ and σ. For clinically relevant combinations of σ, Σ and N, margins were determined by requiring that 90% of treatment course simulations have a CTV minimum dose greater than or equal to the static PTV minimum dose. Simulation results were compared with the VHMF and the alternative margin algorithm. The CM and VHMF were found to be accurate for parameter values satisfying the approximate criterion σ[1 − γN/25] ≲ 0.2; outside this regime they failed to account for the non-negligible dose variability associated with random setup errors. These criteria are applicable when σ ≳ σ_P, where σ_P = 0.32 cm is the standard deviation of the normal dose penumbra. (The qualitative behaviour of the CM and VHMF will remain the same, though the criteria might vary, if σ_P takes values other than 0.32 cm.) When σ ≪ σ_P, dose variability due to random setup errors becomes negligible, and the CM and VHMF are valid regardless of the values of Σ and N. When σ ≳ σ_P, consistent with the above criteria, it was found that the VHMF can underestimate margins for large σ, small Σ and small N. A potential consequence of this underestimate is that the CTV minimum dose can fall below its planned value in more than the prescribed 10% of treatments. The proposed alternative margin algorithm provides better margin…
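    For orientation, the sketch below evaluates the widely quoted van Herk margin recipe m = 2.5Σ + 0.7σ (CTV covered by the 95% isodose for 90% of patients, assuming many fractions). It is a minimal illustration of the formula the record critiques, not the paper's alternative algorithm; the input values are hypothetical.

```python
# Classic van Herk margin recipe: m = 2.5*Sigma + 0.7*sigma.
# Sigma: SD of systematic setup error; sigma: SD of random setup error (cm).
def vhmf_margin(Sigma_cm: float, sigma_cm: float) -> float:
    return 2.5 * Sigma_cm + 0.7 * sigma_cm

# The record's point: for small N and large sigma this can understate margins.
print(f"margin = {vhmf_margin(0.3, 0.3):.2f} cm")   # e.g. Sigma = sigma = 3 mm
```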

  17. Intelligent error correction method applied on an active pixel sensor based star tracker

    Science.gov (United States)

    Schmidt, Uwe

    2005-10-01

    Star trackers are opto-electronic sensors used on board satellites for autonomous inertial attitude determination. In recent years star trackers have become more and more important among the attitude and orbit control system (AOCS) sensors. High-performance star trackers are to date based on charge coupled device (CCD) optical camera heads. The active pixel sensor (APS) technology, introduced in the early 1990s, now allows the beneficial replacement of CCD detectors by APS detectors with respect to performance, reliability, power, mass and cost. The company's heritage in star tracker design started in the early 1980s with the launch of the world's first fully autonomous star tracker system, ASTRO1, to the Russian MIR space station. Jena-Optronik recently developed an active pixel sensor based autonomous star tracker, "ASTRO APS", as successor of the CCD-based star tracker product series ASTRO1, ASTRO5, ASTRO10 and ASTRO15. Key features of the APS detector technology are true xy-address random access, multiple-window readout and on-chip signal processing including analogue-to-digital conversion. These features can be used for robust star tracking at high slew rates and under adverse conditions such as stray light and solar-flare-induced single event upsets. A special algorithm has been developed to manage the typical APS detector error contributors such as fixed pattern noise (FPN), dark signal non-uniformity (DSNU) and white spots. The algorithm works fully autonomously and adapts automatically to, e.g., increasing DSNU and newly appearing white spots without ground maintenance or re-calibration. In contrast to conventional correction methods, the described algorithm does not need calibration-data memory such as full-image-sized calibration data sets. The application of the presented algorithm managing the typical APS detector error contributors is a key element in the design of star trackers for long-term satellite applications such as…
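    The record does not spell out the algorithm, so the sketch below shows one generic way such APS error contributors can be handled on the fly: a recursive dark/DSNU map update plus robust white-spot masking. The adaptation rate, thresholds and frame statistics are all assumptions for illustration, not Jena-Optronik's method.

```python
import numpy as np

rng = np.random.default_rng(0)
frame = rng.normal(20.0, 3.0, (256, 256))   # hypothetical star-free APS frame (DN)
frame[100, 120] += 400.0                    # a "white spot" (hot pixel)

dark_map = np.full_like(frame, 19.0)        # running DSNU/dark estimate
alpha = 0.05                                # adaptation rate (assumed)

# Recursive dark-map update: tracks slowly increasing DSNU without any stored
# full-frame calibration data.
dark_map = (1 - alpha) * dark_map + alpha * frame

corrected = frame - dark_map
# Flag newly appeared white spots with a robust (median/MAD) threshold.
mad = np.median(np.abs(corrected - np.median(corrected)))
white_spots = corrected > np.median(corrected) + 10 * 1.4826 * mad
corrected[white_spots] = np.median(corrected)
print("white spots flagged:", int(white_spots.sum()))
```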

  18. MO-F-BRA-04: Voxel-Based Statistical Analysis of Deformable Image Registration Error via a Finite Element Method.

    Science.gov (United States)

    Li, S; Lu, M; Kim, J; Glide-Hurst, C; Chetty, I; Zhong, H

    2012-06-01

    Purpose: Clinical implementation of adaptive treatment planning is limited by the lack of quantitative tools to assess deformable image registration errors (R-ERR). The purpose of this study was to develop a method, using finite element modeling (FEM), to estimate registration errors based on the mechanical changes resulting from them. Methods: An experimental platform to quantify the correlation between registration errors and their mechanical consequences was developed as follows: diaphragm deformation was simulated on the CT images of patients with lung cancer using FEM. The simulated displacement vector fields (F-DVF) were used to warp each CT image to generate a FEM image. B-spline based (Elastix) registrations were performed from the reference to the FEM images to generate a registration DVF (R-DVF). The F-DVF was subtracted from the R-DVF, and the magnitude of the difference vector was defined as the registration error, which is a consequence of mechanically unbalanced energy (UE), computed using in-house-developed FEM software. A nonlinear regression model based on imaging voxel data was used, and the analysis considered clustered voxel data within images. Results: The regression analysis showed that UE was significantly correlated with the registration error, the DVF, and the product of registration error and DVF, with R² = 0.73 (R = 0.854). The association was verified independently using 40 tracked landmarks. A linear function between the means of the UE values and R-DVF·R-ERR was established. The mean registration error (N = 8) was 0.9 mm; 85.4% of voxels fit this model within one standard deviation. Conclusions: An encouraging relationship between UE and registration error has been found. These experimental results suggest the feasibility of UE as a valuable tool for evaluating registration errors, thus supporting 4D and adaptive radiotherapy. The research was supported by NIH/NCI R01CA140341. © 2012 American Association of Physicists in Medicine.

  19. Thermoelectric skutterudite compositions and methods for producing the same

    Science.gov (United States)

    Ren, Zhifeng; Yang, Jian; Yan, Xiao; He, Qinyu; Chen, Gang; Hao, Qing

    2014-11-11

    Compositions related to skutterudite-based thermoelectric materials are disclosed. Such compositions can result in materials that have enhanced ZT values relative to one or more bulk materials from which the compositions are derived. Thermoelectric materials such as n-type and p-type skutterudites with high thermoelectric figures-of-merit can include materials with filler atoms and/or materials formed by compacting particles (e.g., nanoparticles) into a material with a plurality of grains, each having a portion with a skutterudite-based structure. Methods of forming thermoelectric skutterudites, which can include the use of hot-press processes to consolidate particles, are also disclosed. The particles to be consolidated can be derived from (e.g., ground from) skutterudite-based bulk materials, elemental materials, other non-skutterudite-based materials, or combinations of such materials.

  20. Corrosion and wear resistant metallic layers produced by electrochemical methods

    DEFF Research Database (Denmark)

    Christoffersen, Lasse; Maahn, Ernst Emanuel

    1999-01-01

    Corrosion and wear-corrosion properties of novel nickel alloy coatings with promising production characteristics have been compared with conventional bulk materials and hard platings. Corrosion properties in neutral and acidic environments have been investigated with electrochemical methods. Determination of polarisation resistance during 100 hours followed by stepwise anodic polarisation seems to be a promising technique to obtain steady-state data on slowly corroding coatings with transient kinetics. A slurry test enables determination of simultaneous corrosion and abrasive wear. Comparison of AISI 316, hard chromium and hardened Ni-P shows that there is no universal correlation between surface hardness and wear-corrosion loss. The possible relation between the questionable passivity of Ni-P coatings and their high wear-corrosion loss rate compared to hard chromium is discussed.

  1. Water flux in animals: analysis of potential errors in the tritiated water method

    Energy Technology Data Exchange (ETDEWEB)

    Nagy, K.A.; Costa, D.

    1979-03-01

    Laboratory studies indicate that tritiated water measurements of water flux are accurate to within -7 to +4% in mammals, but errors are larger in some reptiles. However, under conditions that can occur in field studies, errors may be much greater. Influx of environmental water vapor via lungs and skin can cause errors exceeding ±50% in some circumstances. If water flux rates in an animal vary through time, errors approach ±15% in extreme situations, but are near ±3% in more typical circumstances. Errors due to fractional evaporation of tritiated water may approach -9%. This error probably varies between species. Use of an inappropriate equation for calculating water flux from isotope data can cause errors exceeding ±100%. The following sources of error are either negligible or avoidable: use of isotope dilution space as a measure of body water volume, loss of nonaqueous tritium bound to excreta, binding of tritium with nonaqueous substances in the body, radiation toxicity effects, and small analytical errors in isotope measurements. Water flux rates measured with tritiated water should be within ±10% of actual flux rates in most situations.
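    The underlying calculation is a single-pool exponential turnover: with body-water volume W roughly constant, the fractional turnover rate k follows from the decline in tritium specific activity between two samples, and flux is k·W. A minimal sketch with hypothetical numbers:

```python
import numpy as np

# Single-pool turnover model behind the tritiated-water method (assumes a
# constant body-water volume W between captures).
W = 250.0                 # body water of a hypothetical animal, mL
C1, C2 = 1000.0, 600.0    # tritium specific activity at first and second sample
dt = 4.0                  # days between samples

k = np.log(C1 / C2) / dt  # fractional water turnover per day
water_flux = k * W        # mL/day
print(f"water flux ~ {water_flux:.1f} mL/day")
```

    The error sources listed in the record (vapor influx, time-varying flux, fractionation) all act by making k or W deviate from this idealized model.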

  2. Water flux in animals: analysis of potential errors in the tritiated water method

    International Nuclear Information System (INIS)

    Nagy, K.A.; Costa, D.

    1979-03-01

    Laboratory studies indicate that tritiated water measurements of water flux are accurate to within -7 to +4% in mammals, but errors are larger in some reptiles. However, under conditions that can occur in field studies, errors may be much greater. Influx of environmental water vapor via lungs and skin can cause errors exceeding ±50% in some circumstances. If water flux rates in an animal vary through time, errors approach ±15% in extreme situations, but are near ±3% in more typical circumstances. Errors due to fractional evaporation of tritiated water may approach -9%. This error probably varies between species. Use of an inappropriate equation for calculating water flux from isotope data can cause errors exceeding ±100%. The following sources of error are either negligible or avoidable: use of isotope dilution space as a measure of body water volume, loss of nonaqueous tritium bound to excreta, binding of tritium with nonaqueous substances in the body, radiation toxicity effects, and small analytical errors in isotope measurements. Water flux rates measured with tritiated water should be within ±10% of actual flux rates in most situations.

  3. Using a Delphi Method to Identify Human Factors Contributing to Nursing Errors.

    Science.gov (United States)

    Roth, Cheryl; Brewer, Melanie; Wieck, K Lynn

    2017-07-01

    The purpose of this study was to identify human factors associated with nursing errors. Using a Delphi technique, this study collected feedback from a panel of nurse experts (n = 25) on an initial qualitative survey questionnaire and summarized the results with feedback and confirmation. Synthesized factors regarding causes of errors were incorporated into a quantitative Likert-type scale, and the original expert panel participants were queried a second time to validate the responses. The list identified 24 items as the most common causes of nursing errors, including swamping and errors made by others that nurses are expected to recognize and fix. The responses provided a consensus top-10 errors list based on means, with heavy workload and fatigue at the top of the list. The use of the Delphi survey established consensus and developed a platform upon which future study of nursing errors can evolve as a link to future solutions. This list of human factors in nursing errors should serve to stimulate dialogue among nurses about how to prevent errors and improve outcomes. Human and system failures have been the subject of an abundance of research, yet nursing errors continue to occur. © 2016 Wiley Periodicals, Inc.

  4. Solidified ceramics of radioactive wastes and method of producing it

    International Nuclear Information System (INIS)

    Oota, Takao; Matake, Shigeru; Ooka, Kazuo.

    1980-01-01

    Purpose: To provide solidified ceramics which have low leaching of radioactive substances into water, excellent heat dissipation and heat resistance, and high mechanical strength, by mixing and sintering limited amounts of titanium and aluminum compounds with calcined radioactive wastes containing specified compounds. Method: More than 20% by weight of a titanium compound (as TiO2) and more than 5% by weight of an aluminum compound (as Al2O3) are mixed with the calcined radioactive wastes containing, expressed as oxides, 5 to 40% by weight of Na2O, 5 to 20% by weight of Fe2O3, 5 to 15% by weight of MoO3, 5 to 15% by weight of ZrO2, 2 to 10% by weight of CeO2, 2 to 10% by weight of Cs2O, 1 to 5% by weight of BaO, 1 to 5% by weight of SrO, 0.2 to 2% by weight of Rb2O, 0.2% by weight of Y2O3, 0.2 to 2% by weight of NiO, 5 to 20% by weight of rare earth metal oxides, and 0.2 to 2% by weight of Cr2O3. The mixture is molded, sintered, and solidified to ceramics which contain no Mo phase and no Na2O·MoO3, K2O·MoO3 or Cs2O·MoO3 phases or the like. (Yoshino, Y.)

  5. Reducing NIR prediction errors with nonlinear methods and large populations of intact compound feedstuffs

    International Nuclear Information System (INIS)

    Fernández-Ahumada, E; Gómez, A; Vallesquino, P; Guerrero, J E; Pérez-Marín, D; Garrido-Varo, A; Fearn, T

    2008-01-01

    According to the current demands of the authorities, the manufacturers and the consumers, control and assessment of the feed compound manufacturing process have become a key concern. Among other things, it must be assured that a given compound feed is well manufactured and labelled in terms of its ingredient composition. When near-infrared spectroscopy (NIRS) together with linear models was used for the prediction of the ingredient composition, the results were not always acceptable. Therefore, the performance of nonlinear methods has been investigated. Artificial neural networks (ANN) and least squares support vector machines (LS-SVM) have been applied to a large (N = 20,320) and heterogeneous population of non-milled feed compounds for the NIR prediction of the inclusion percentage of wheat and sunflower meal, as representatives of two different classes of ingredients. Compared to partial least squares regression, the results showed considerable reductions of standard error of prediction values for both methods and ingredients: reductions of 45% with ANN and 49% with LS-SVM for wheat, and reductions of 44% with ANN and 46% with LS-SVM for sunflower meal. These improvements, together with the ease of implementing NIRS technology in the process, make it well suited to meeting the requirements of the animal feed industry.
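    As a toy illustration of the linear-versus-nonlinear comparison, the sketch below contrasts a PLS baseline with a small neural network on synthetic "spectra" whose target has a mild nonlinearity. Dimensions, model sizes and data are stand-ins, not the paper's spectra or hyperparameters.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

# Synthetic stand-in for NIR spectra (500 wavelengths) and inclusion percentage.
rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 500))
y = 30 + X[:, :5].sum(axis=1) + 0.5 * (X[:, 5] * X[:, 6])  # mildly nonlinear target

Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
for name, model in [("PLS", PLSRegression(n_components=10)),
                    ("ANN", MLPRegressor(hidden_layer_sizes=(32,), max_iter=500,
                                         random_state=0))]:
    model.fit(Xtr, ytr)
    sep = mean_squared_error(yte, np.ravel(model.predict(Xte))) ** 0.5
    print(f"{name}: SEP = {sep:.3f}")   # standard error of prediction
```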

  6. Error Analysis and Calibration Method of a Multiple Field-of-View Navigation System.

    Science.gov (United States)

    Shi, Shuai; Zhao, Kaichun; You, Zheng; Ouyang, Chenguang; Cao, Yongkui; Wang, Zhenzhou

    2017-03-22

    The Multiple Field-of-view Navigation System (MFNS) is a spacecraft subsystem built to realize the autonomous navigation of the Spacecraft Inside Tiangong Space Station. This paper introduces the basics of the MFNS, including its architecture, mathematical model and analysis, and numerical simulation of system errors. According to the performance requirements of the MFNS, calibration of both the intrinsic and extrinsic parameters of the system is essential and pivotal. Hence, a novel method based on geometrical constraints in object space, called checkerboard-fixed post-processing calibration (CPC), is proposed to solve the problem of simultaneously obtaining the intrinsic parameters of the cameras integrated in the MFNS and the transformation between the MFNS coordinate frame and the cameras' coordinate frames. This method utilizes a two-axis turntable, and a prior alignment of the coordinates is needed. Theoretical derivation and practical operation of the CPC method are introduced. The calibration experiment results for the MFNS indicate that the extrinsic parameter accuracy of the CPC reaches 0.1° for each Euler angle and 0.6 mm for each position vector component (1σ). A navigation experiment verifies the calibration result and the performance of the MFNS. The MFNS is found to work properly, and the accuracy of the position vector components and Euler angles reaches 1.82 mm and 0.17° (1σ), respectively. The basic mechanism of the MFNS may serve as a reference for the design and analysis of multiple-camera systems. Moreover, the calibration method proposed has practical value due to its convenience of use and potential for integration into a toolkit.

  7. Stand-alone error characterisation of microwave satellite soil moisture using a Fourier method

    Science.gov (United States)

    Error characterisation of satellite-retrieved soil moisture (SM) is crucial for maximizing their utility in research and applications in hydro-meteorology and climatology. Error characteristics can provide insights for retrieval development and validation, and inform suitable strategies for data fusion.

  8. A method for local transport analysis in tokamaks with error calculation

    International Nuclear Information System (INIS)

    Hogeweij, G.M.D.; Hordosy, G.; Lopes Cardozo, N.J.

    1989-01-01

    Global transport studies have revealed that heat transport in a tokamak is anomalous, but cannot provide information about the nature of the anomaly. Therefore, local transport analysis is essential for the study of anomalous transport. However, the determination of local transport coefficients is not a trivial affair. Generally speaking, one can either directly measure the heat diffusivity, χ, by means of heat pulse propagation analysis, or deduce the profile of χ from measurements of the profiles of the temperature, T, and the power deposition. Here we are concerned only with the latter method, the local power balance analysis. For the sake of clarity, heat diffusion only is considered: ρ = −grad T / q (1), where ρ = κ⁻¹ = (nχ)⁻¹ is the heat resistivity and q is the heat flux per unit area. It is assumed that the profiles T(r) and q(r) are given with some experimental error. In practice T(r) is measured directly, e.g. from ECE spectroscopy, while q(r) is deduced from the power deposition and loss profiles. The latter cannot be measured directly and is partly determined on the basis of models; this complication will not be considered here. Since the gradient of T appears in eq. (1), noise on T can severely affect the solution ρ. This means that in general some form of smoothing must be applied, and a criterion is needed to select the optimal smoothing. Too much smoothing will wipe out the details, whereas with too little smoothing the noise will distort the reconstructed profile of ρ. Here a new method to solve eq. (1) is presented which expresses ρ(r) as a cosine series. The coefficients of this series are given as linear combinations of the Fourier coefficients of the measured T- and q-profiles. This formulation allows (1) the stable and accurate calculation of the ρ-profile, and (2) the analytical calculation of the error in this profile. (author) 5 refs., 3 figs
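    To see the role of the smoothing criterion, the sketch below evaluates eq. (1) on synthetic profiles, regularizing the noisy temperature profile by truncating its cosine (DCT) series before differentiating. Profiles, noise level and truncation order are arbitrary choices, not the paper's error-propagating formulation.

```python
import numpy as np
from scipy.fft import dct, idct

rng = np.random.default_rng(2)
r = np.linspace(0.0, 1.0, 101)
T = 2.0 * (1 - r**2) ** 1.5 + 0.01 * rng.normal(size=r.size)  # noisy T profile
q = 0.5 * r + 1e-6                     # hypothetical heat flux per unit area

# Smooth T by truncating its cosine series (DCT-I): too few terms wipe out
# detail, too many let noise corrupt the gradient.
c = dct(T, type=1, norm="ortho")
c[8:] = 0.0                            # truncation order: the smoothing choice
T_smooth = idct(c, type=1, norm="ortho")

rho = -np.gradient(T_smooth, r) / q    # eq. (1): rho = -grad T / q
```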

  9. Ceramic residue for producing cements, method for the production thereof, and cements containing same

    OpenAIRE

    Sánchez de Rojas, María Isabel; Frías, Moisés; Asensio, Eloy; Medina Martínez, César

    2014-01-01

    [EN] The invention relates to a ceramic residue produced from construction and demolition residues, used as a pozzolanic component of cements. The invention also relates to a method for producing said ceramic residues and to another method for producing cements using said residues. This type of residue is collected in recycling plants, where it is managed. This invention facilitates a potential commercial launch.

  10. On stochastic error and computational efficiency of the Markov Chain Monte Carlo method

    KAUST Repository

    Li, Jun

    2014-01-01

    In Markov Chain Monte Carlo (MCMC) simulations, thermal equilibrium quantities are estimated by ensemble averages over a sample set containing a large number of correlated samples. These samples are selected in accordance with the probability distribution function, known from the partition function of the equilibrium state. As the stochastic error of the simulation results is significant, it is desirable to understand the variance of the estimation by ensemble average, which depends on the sample size (i.e., the total number of samples in the set) and the sampling interval (i.e., the cycle number between two consecutive samples). Although large sample sizes reduce the variance, they increase the computational cost of the simulation. For a given CPU time, the sample size can be reduced greatly by increasing the sampling interval, while the corresponding increase in variance is negligible if the original sampling interval is very small. In this work, we report a few general rules that relate the variance to the sample size and the sampling interval. These results are observed and confirmed numerically. The variance rules are derived for the MCMC method but are also valid for correlated samples obtained using other Monte Carlo methods. The main contribution of this work includes the theoretical proof of these numerical observations and the set of assumptions that lead to them. © 2014 Global-Science Press.
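    The rules the record refers to can be seen numerically with a correlated chain: the variance of the ensemble average scales as var(x)·tau/N, where tau is the integrated autocorrelation time, so subsampling at an interval near tau costs little accuracy. A toy AR(1) chain stands in for MCMC output here; all numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
x = np.empty(100_000)
x[0] = 0.0
for i in range(1, x.size):            # AR(1) chain as a stand-in for MCMC output
    x[i] = 0.9 * x[i - 1] + rng.normal()

def integrated_autocorr_time(s, max_lag=200):
    s = s - s.mean()
    denom = np.dot(s, s)
    acf = [np.dot(s[:-k], s[k:]) / denom for k in range(1, max_lag)]
    return 1.0 + 2.0 * sum(acf)

tau = integrated_autocorr_time(x)
var_of_mean = x.var() * tau / x.size  # instead of var/N for independent samples
print(f"tau ~ {tau:.1f}; std of the mean ~ {var_of_mean ** 0.5:.4f}")
```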

  11. Investigating the error sources of the online state of charge estimation methods for lithium-ion batteries in electric vehicles

    Science.gov (United States)

    Zheng, Yuejiu; Ouyang, Minggao; Han, Xuebing; Lu, Languang; Li, Jianqiu

    2018-02-01

    State of charge (SOC) estimation is generally acknowledged as one of the most important functions in the battery management system for lithium-ion batteries in new energy vehicles. Though every effort is made for the various online SOC estimation methods to increase the estimation accuracy as much as possible within the limited on-chip resources, little literature discusses the error sources of those SOC estimation methods. This paper first reviews the commonly studied SOC estimation methods from a conventional classification. A novel perspective focusing on the error analysis of the SOC estimation methods is proposed. SOC estimation methods are analyzed from the viewpoints of the measured values, models, algorithms and state parameters. Subsequently, error flow charts are proposed to analyze the error sources, from signal measurement to the models and algorithms, for the online SOC estimation methods widely used in new energy vehicles. Finally, with consideration of the working conditions, the choice of more reliable and applicable SOC estimation methods is discussed, and the future development of promising online SOC estimation methods is suggested.
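    One concrete example of the error-flow view: ampere-hour (coulomb-counting) SOC estimation, where a small current-sensor bias integrates into a growing SOC error. A minimal sketch with hypothetical battery parameters:

```python
import numpy as np

# Coulomb counting: SOC(t) = SOC0 - integral(I dt) / capacity. A constant
# current-sensor bias accumulates linearly into the SOC estimate.
def soc_coulomb(i_amps, dt_s, soc0, capacity_ah, bias_a=0.0):
    charge_ah = np.cumsum((i_amps + bias_a) * dt_s) / 3600.0
    return soc0 - charge_ah / capacity_ah

t = np.arange(0, 3600, 1.0)                    # one hour, 1 s steps
current = np.full(t.size, 10.0)                # 10 A discharge (hypothetical)
soc_true = soc_coulomb(current, 1.0, 0.9, 50.0)
soc_biased = soc_coulomb(current, 1.0, 0.9, 50.0, bias_a=0.1)  # 0.1 A bias
print("SOC error after 1 h from bias:", abs(soc_true[-1] - soc_biased[-1]))
```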

  12. Investigation of error estimation method of observational data and comparison method between numerical and observational results toward V and V of seismic simulation

    International Nuclear Information System (INIS)

    Suzuki, Yoshio; Kawakami, Yoshiaki; Nakajima, Norihiro

    2017-01-01

    Methods to estimate the errors included in observational data, and methods to compare numerical results with observational results, are investigated toward the verification and validation (V and V) of seismic simulation. For the error estimation method, 144 papers from the past 5 years (2010 to 2014) in the structural engineering and earthquake engineering fields, where descriptions of acceleration data are frequent, were surveyed. As a result, it was found that processes to remove components regarded as errors from observational data are used in about 30% of those papers. Errors are caused by the resolution, the linearity, the temperature coefficient for sensitivity, the temperature coefficient for zero shift, the transverse sensitivity, the seismometer properties, aliasing, and so on. Those processes can be exploited to estimate errors individually. For the comparison method, public materials of the ASME V and V Symposium 2012-2015, their references, and the above 144 papers were surveyed. As a result, it was found that six methods have mainly been proposed in existing research. Evaluating those methods against nine criteria, their advantages and disadvantages are summarized. No method is yet well established, so it is necessary either to employ those methods while compensating for their disadvantages or to search for a novel method. (author)

  13. Information System Hazard Analysis: A Method for Identifying Technology-induced Latent Errors for Safety.

    Science.gov (United States)

    Weber, Jens H; Mason-Blakley, Fieran; Price, Morgan

    2015-01-01

    Many health information and communication technologies (ICT) are safety-critical; moreover, reports of technology-induced adverse events related to them are plentiful in the literature. Despite repeated criticism and calls to action, recent data collected by the Institute of Medicine (IOM) and other organizations do not indicate significant improvements with respect to the safety of health ICT systems. A large part of the industry still operates on a reactive "break & patch" model; the application of pro-active, systematic hazard analysis methods for engineering ICT that produce "safe by design" products is sparse. This paper applies one such method: Information System Hazard Analysis (ISHA). ISHA adapts and combines hazard analysis techniques from other safety-critical domains and customizes them for ICT. We provide an overview of the steps involved in ISHA and describe…

  14. Construction of a Mean Square Error Adaptive Euler–Maruyama Method With Applications in Multilevel Monte Carlo

    KAUST Repository

    Hoel, Hakon

    2016-06-13

    A formal mean square error (MSE) expansion is derived for Euler-Maruyama numerical solutions of stochastic differential equations (SDE). The error expansion is used to construct a pathwise, a posteriori, adaptive time-stepping Euler-Maruyama algorithm for numerical solutions of SDE, and the resulting algorithm is incorporated into a multilevel Monte Carlo (MLMC) algorithm for weak approximations of SDE. This gives an efficient MSE adaptive MLMC algorithm for handling a number of low-regularity approximation problems. In low-regularity numerical example problems, the developed adaptive MLMC algorithm is shown to outperform the uniform time-stepping MLMC algorithm by orders of magnitude, producing output whose error with high probability is bounded by TOL > 0 at the near-optimal MLMC cost rate O(TOL^-2 log(TOL)^4) that is achieved when the cost of sample generation is O(1).
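    For context, the sketch below implements plain uniform-step Euler-Maruyama for a geometric Brownian motion; the paper's contribution replaces the fixed step with a pathwise step size driven by an a posteriori MSE expansion, which is not reproduced here. Drift, volatility and step counts are arbitrary.

```python
import numpy as np

# Uniform-step Euler-Maruyama for dX = a(X) dt + b(X) dW on [0, T].
def euler_maruyama(a, b, x0, T, n_steps, rng):
    dt = T / n_steps
    x = x0
    for _ in range(n_steps):
        dw = rng.normal(scale=np.sqrt(dt))
        x = x + a(x) * dt + b(x) * dw
    return x

rng = np.random.default_rng(4)
# Geometric Brownian motion: a(x) = 0.05 x, b(x) = 0.2 x.
samples = [euler_maruyama(lambda x: 0.05 * x, lambda x: 0.2 * x,
                          1.0, 1.0, 256, rng) for _ in range(10_000)]
print("E[X_T] ~", np.mean(samples))   # exact value is exp(0.05) ~ 1.0513
```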

  15. Wavelets and triple difference as a mathematical method for filtering and mitigation of DGPS errors

    Directory of Open Access Journals (Sweden)

    Aly M. El-naggar

    2015-12-01

    Wavelet spectral techniques can separate GPS signals into sub-bands where different errors can be separated and mitigated. The main goal of this paper was the development and implementation of DGPS error mitigation techniques using triple differences and wavelets. This paper studies, analyzes and provides new techniques that help mitigate these errors in the frequency domain. The proposed technique, applied to smooth noise in GPS receiver positioning data, is based upon analysis by the wavelet transform (WT). The technique applies the wavelet as a de-noising tool to tackle the high-frequency errors in the triple-difference domain and to obtain a de-noised triple-difference signal that can be used in the positioning calculation.
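    A generic wavelet de-noising pass of the kind described (decompose, soft-threshold the detail coefficients, reconstruct) can be sketched with PyWavelets on a synthetic series standing in for a triple-difference observable; the wavelet family, level and threshold rule below are common defaults, not the paper's choices.

```python
import numpy as np
import pywt

rng = np.random.default_rng(5)
t = np.linspace(0, 1, 1024)
signal = 0.01 * np.sin(2 * np.pi * 3 * t)          # slow geometric term
noisy = signal + 0.003 * rng.normal(size=t.size)   # high-frequency receiver noise

# Decompose, soft-threshold the detail bands, reconstruct.
coeffs = pywt.wavedec(noisy, "db4", level=5)
sigma = np.median(np.abs(coeffs[-1])) / 0.6745     # robust noise estimate
thr = sigma * np.sqrt(2 * np.log(t.size))          # universal threshold
coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
denoised = pywt.waverec(coeffs, "db4")
```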

  16. Nonlinear method for including the mass uncertainty of standards and the system measurement errors in the fitting of calibration curves

    International Nuclear Information System (INIS)

    Pickles, W.L.; McClure, J.W.; Howell, R.H.

    1978-01-01

    A sophisticated nonlinear multiparameter fitting program was used to produce a best-fit calibration curve for the response of an x-ray fluorescence analyzer to uranium nitrate, freeze-dried, 0.2%-accurate, gravimetric standards. The program is based on the unconstrained minimization subroutine VA02A. The program considers the mass values of the gravimetric standards as parameters to be fit along with the normal calibration curve parameters. The fitting procedure weights the fit with the system errors and the mass errors in a consistent way. The resulting best-fit calibration curve parameters reflect the fact that the masses of the standard samples are measured quantities with a known error. Error estimates for the calibration curve parameters can be obtained from the curvature of the "chi-squared matrix" or from error relaxation techniques. It was shown that nondispersive XRFA of 0.1 to 1 mg of freeze-dried UNO3 can have an accuracy of 0.2% in 1000 s. 5 figures
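    The core idea, treating the standard masses as fit parameters weighted by their 0.2% gravimetric uncertainty alongside the instrument response errors, can be sketched with a generic errors-in-variables least-squares fit; a linear calibration curve and invented data stand in for the real response model and for VA02A.

```python
import numpy as np
from scipy.optimize import least_squares

m_nominal = np.array([0.1, 0.2, 0.4, 0.7, 1.0])     # certified masses, mg
y_meas = np.array([102., 205., 398., 690., 1010.])  # detector response (hypothetical)
sig_y = 0.01 * y_meas                               # 1% system error (assumed)
sig_m = 0.002 * m_nominal                           # 0.2% gravimetric mass error

def residuals(p):
    a, b = p[:2]            # calibration-curve parameters (linear model here)
    m = p[2:]               # standard masses, treated as fit parameters
    r_y = (y_meas - (a + b * m)) / sig_y    # response residuals, weighted
    r_m = (m - m_nominal) / sig_m           # mass residuals, weighted
    return np.concatenate([r_y, r_m])

fit = least_squares(residuals, x0=np.concatenate([[0.0, 1000.0], m_nominal]))
print("calibration parameters:", fit.x[:2])
```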

  17. CO2 production in animals: analysis of potential errors in the doubly labeled water method

    International Nuclear Information System (INIS)

    Nagy, K.A.

    1979-03-01

    Laboratory validation studies indicate that doubly labeled water (³HH¹⁸O and ²HH¹⁸O) measurements of CO2 production are accurate to within ±9% in nine species of mammals and reptiles, a bird, and an insect. However, in field studies, errors can be much larger under certain circumstances. Isotopic fractionation of labeled water can cause large errors in animals whose evaporative water loss comprises a major proportion of total water efflux. Input of CO2 across lungs and skin caused errors exceeding +80% in kangaroo rats exposed to air containing 3.4% unlabeled CO2. Analytical errors of ±1% in isotope concentrations can cause calculated rates of CO2 production to contain errors exceeding ±70% in some circumstances. These occur: (1) when little decline in isotope concentrations has occurred during the measurement period; (2) when final isotope concentrations closely approach background levels; and (3) when the rate of water flux in an animal is high relative to its rate of CO2 production. The following sources of error are probably negligible in most situations: (1) use of an inappropriate equation for calculating CO2 production, (2) variations in rates of water or CO2 flux through time, (3) use of the H2¹⁸O dilution space as a measure of body water volume, (4) exchange of ¹⁸O between water and nonaqueous compounds in animals (including excrement), (5) incomplete mixing of isotopes in the animal, and (6) input of unlabeled water via lungs and skin. Errors in field measurements of CO2 production can be reduced to acceptable levels (<10%) by appropriate selection of study subjects and recapture intervals.

  18. A Comparison of Kernel Equating and Traditional Equipercentile Equating Methods and the Parametric Bootstrap Methods for Estimating Standard Errors in Equipercentile Equating

    Science.gov (United States)

    Choi, Sae Il

    2009-01-01

    This study used simulation (a) to compare the kernel equating method to traditional equipercentile equating methods under the equivalent-groups (EG) design and the nonequivalent-groups with anchor test (NEAT) design and (b) to apply the parametric bootstrap method for estimating standard errors of equating. A two-parameter logistic item response…

  19. Instance Analysis for the Error of Three-pivot Pressure Transducer Static Balancing Method for Hydraulic Turbine Runner

    Science.gov (United States)

    Weng, Hanli; Li, Youping

    2017-04-01

    The working principle, process device and test procedure of the runner static balancing test method, by weighing with three-pivot pressure transducers, are introduced in this paper. Based on an actual instance of a V hydraulic turbine runner, the error and sensitivity of the three-pivot pressure transducer static balancing method are analysed. Suggestions for improving the accuracy and the application of the method are also proposed.

  20. An IMU-Aided Body-Shadowing Error Compensation Method for Indoor Bluetooth Positioning.

    Science.gov (United States)

    Deng, Zhongliang; Fu, Xiao; Wang, Hanhua

    2018-01-20

    Research on indoor positioning technologies has recently become a hotspot because of the huge social and economic potential of indoor location-based services (ILBS). Wireless positioning signals have a considerable attenuation in received signal strength (RSS) when transmitting through human bodies, which would cause significant ranging and positioning errors in RSS-based systems. This paper mainly focuses on the body-shadowing impairment of RSS-based ranging and positioning, and derives a mathematical expression of the relation between the body-shadowing effect and the positioning error. In addition, an inertial measurement unit-aided (IMU-aided) body-shadowing detection strategy is designed, and an error compensation model is established to mitigate the effect of body-shadowing. A Bluetooth positioning algorithm with body-shadowing error compensation (BP-BEC) is then proposed to improve both the positioning accuracy and the robustness in indoor body-shadowing environments. Experiments are conducted in two indoor test beds, and the performance of both the BP-BEC algorithm and the algorithms without body-shadowing error compensation (named no-BEC) is evaluated. The results show that the BP-BEC outperforms the no-BEC by about 60.1% and 73.6% in terms of positioning accuracy and robustness, respectively. Moreover, the execution time of the BP-BEC algorithm is also evaluated, and results show that the convergence speed of the proposed algorithm has an insignificant effect on real-time localization.

  1. An IMU-Aided Body-Shadowing Error Compensation Method for Indoor Bluetooth Positioning

    Directory of Open Access Journals (Sweden)

    Zhongliang Deng

    2018-01-01

    Research on indoor positioning technologies has recently become a hotspot because of the huge social and economic potential of indoor location-based services (ILBS). Wireless positioning signals have a considerable attenuation in received signal strength (RSS) when transmitting through human bodies, which would cause significant ranging and positioning errors in RSS-based systems. This paper mainly focuses on the body-shadowing impairment of RSS-based ranging and positioning, and derives a mathematical expression of the relation between the body-shadowing effect and the positioning error. In addition, an inertial measurement unit-aided (IMU-aided) body-shadowing detection strategy is designed, and an error compensation model is established to mitigate the effect of body-shadowing. A Bluetooth positioning algorithm with body-shadowing error compensation (BP-BEC) is then proposed to improve both the positioning accuracy and the robustness in indoor body-shadowing environments. Experiments are conducted in two indoor test beds, and the performance of both the BP-BEC algorithm and the algorithms without body-shadowing error compensation (named no-BEC) is evaluated. The results show that the BP-BEC outperforms the no-BEC by about 60.1% and 73.6% in terms of positioning accuracy and robustness, respectively. Moreover, the execution time of the BP-BEC algorithm is also evaluated, and the results show that the convergence speed of the proposed algorithm has an insignificant effect on real-time localization.
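    To illustrate the underlying geometry of the problem, the sketch below uses a log-distance path-loss model and shows how a fixed body-attenuation offset changes the inferred range. The 8 dB body loss and other parameters are assumed values; the paper's compensation model is more elaborate and IMU-triggered.

```python
# Log-distance path loss: RSS(d) = RSS(1 m) - 10 n log10(d). Body shadowing
# subtracts extra dB from the measured RSS, inflating the inferred distance
# unless it is compensated before ranging.
def rss_to_distance(rss_dbm, rss_1m=-59.0, n=2.0, shadow_db=0.0):
    return 10 ** ((rss_1m - (rss_dbm + shadow_db)) / (10 * n))

rss = -75.0                                              # measured RSS (dBm)
print("naive range (m)      :", rss_to_distance(rss))    # body loss inflates this
print("compensated range (m):", rss_to_distance(rss, shadow_db=8.0))
```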

  2. The Scientific Method, Diagnostic Bayes, and How to Detect Epistemic Errors

    Science.gov (United States)

    Vrugt, J. A.

    2015-12-01

    In the past decades, Bayesian methods have found widespread application and use in environmental systems modeling. Bayes' theorem states that the posterior probability, P(H|D̂), of a hypothesis H is proportional to the product of the prior probability, P(H), of this hypothesis and the likelihood, L(H|D̂), of the same hypothesis given the new/incoming observations, D̂. In science and engineering, H often constitutes some numerical simulation model, D = F(x,·), which summarizes, using algebraic, empirical, and differential equations, state variables and fluxes, all our theoretical and/or practical knowledge of the system of interest, and x are the d unknown parameters which are subject to inference using some data, D̂, of the observed system response. The Bayesian approach is intimately related to the scientific method and uses an iterative cycle of hypothesis formulation (model), experimentation and data collection, and theory/hypothesis refinement to elucidate the rules that govern the natural world. Unfortunately, model refinement has proven to be very difficult, in large part because of the poor diagnostic power of residual-based likelihood functions (Gupta et al., 2008). This has inspired Vrugt (2013) to advocate the use of 'likelihood-free' inference using approximate Bayesian computation (ABC). This approach uses one or more summary statistics, S(D̂), of the original data, D̂, designed ideally to be sensitive only to one particular process in the model. Any mismatch between the observed and simulated summary metrics is then easily linked to a specific model component. A recurrent issue with the application of ABC is the self-sufficiency of the summary statistics. In theory, S(·) should contain as much information as the original data itself, yet complex systems rarely admit sufficient statistics. In this article, we propose to combine the ideas of ABC and regular Bayesian inference to guarantee that no information is lost in diagnostic model…
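    A bare-bones rejection-ABC loop makes the idea concrete: draw parameters from the prior, simulate, and accept when the simulated summary statistic S falls within a tolerance eps of the observed one. Everything below (model, prior, statistic, tolerance) is a hypothetical stand-in, not the authors' diagnostic scheme.

```python
import numpy as np

rng = np.random.default_rng(6)
data = rng.normal(2.0, 1.0, 100)            # "observed" data
s_obs = data.mean()                         # summary statistic S(D)

def simulate(theta, rng):
    """Forward model F(theta): here just a Gaussian with unknown mean."""
    return rng.normal(theta, 1.0, 100).mean()

accepted = []
eps = 0.05                                  # ABC tolerance (assumed)
for _ in range(20_000):
    theta = rng.uniform(-5, 5)              # draw from the prior
    if abs(simulate(theta, rng) - s_obs) < eps:
        accepted.append(theta)

print("ABC posterior mean ~", np.mean(accepted))
```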

  3. Measurement method of activation cross-sections of reactions producing short-lived nuclei with 14 MeV neutrons

    CERN Document Server

    Kawade, K; Kasugai, Y; Shibata, M; Iida, T; Takahashi, A; Fukahori, T

    2003-01-01

    We describe a method for obtaining reliable activation cross-sections in the neutron energy range between 13.4 and 14.9 MeV for reactions producing short-lived nuclei with half-lives between 0.5 and 30 min. We paid attention to the neutron irradiation fields and the measurement of induced activities, including (1) the contribution of scattered low-energy neutrons, (2) the fluctuation of the neutron fluence rate during the irradiation, (3) the true coincidence summing effect, (4) the random coincidence summing effect, (5) the deviation of the measuring position due to finite sample thickness, (6) the self-absorption of the gamma-rays in the sample material, and (7) the interference from reactions producing the same radionuclides or ones emitting gamma-rays with the same energy of interest. The cross-sections can be obtained within a total error of 3.6% when good counting statistics are achieved, including an error of 3.0% for the standard cross-section of ²⁷Al(n,α)²⁴Na. We propose here simple methods for measuri…
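    As a reminder of the bookkeeping behind such measurements, the sketch below applies the textbook activation relation, with exponential factors for decay during irradiation, cooling and counting, to back out a cross-section from a net gamma-peak count. Every number is hypothetical, and the record's corrections (1)-(7) are not modeled.

```python
import numpy as np

# sigma = C * lambda / (eff * I_g * N * phi * (1 - e^(-lam*t_irr))
#                        * e^(-lam*t_cool) * (1 - e^(-lam*t_meas)))
lam = np.log(2) / (9.46 * 60)     # decay constant of a ~9.5 min product (1/s)
N_t = 1e22                        # target atoms
phi = 1e9                         # neutron fluence rate (1/cm^2/s)
t_irr, t_cool, t_meas = 600., 60., 600.   # s
eff, I_gamma = 0.02, 0.99         # detector efficiency, gamma emission probability
counts = 5.0e4                    # net peak counts

sigma = counts * lam / (
    eff * I_gamma * N_t * phi
    * (1 - np.exp(-lam * t_irr))     # growth during irradiation
    * np.exp(-lam * t_cool)          # decay while cooling
    * (1 - np.exp(-lam * t_meas))    # decay during counting
)
print(f"cross-section ~ {sigma * 1e24:.4f} barn")
```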

  4. Hyper-dendritic nanoporous zinc foam anodes, methods of producing the same, and methods for their use

    Science.gov (United States)

    Steingart, Daniel A.; Chamoun, Mylad; Hertzberg, Benjamin; Davies, Greg; Hsieh, Andrew G.

    2018-02-13

    Disclosed are hyper-dendritic nanoporous zinc foam electrodes, viz., anodes, methods of producing the same, and methods for their use in electrochemical cells, especially in rechargeable electrical batteries.

  5. Error analysis of isotope dilution mass spectrometry method with internal standard

    International Nuclear Information System (INIS)

    Rizhinskii, M.W.; Vitinskii, M.Y.

    1989-02-01

    The computation algorithms for the normalized isotopic ratios and element concentrations by isotope dilution mass spectrometry with an internal standard are presented. A procedure based on Monte Carlo calculation is proposed for predicting the magnitude of the errors to be expected. The estimation of systematic and random errors is carried out for the certification of uranium and plutonium reference materials, as well as for the use of those reference materials in the analysis of irradiated nuclear fuels. 4 refs, 11 figs, 2 tabs
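    The Monte Carlo idea is straightforward to sketch: draw every measured input from its assumed uncertainty distribution, push the draws through the concentration formula, and read off the spread. A generic IDMS-style ratio expression is used below; the exact formula and uncertainties in the paper depend on the certification procedure.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 100_000
# Each measured quantity drawn from its assumed (normal) uncertainty.
R_blend = rng.normal(1.20, 0.002, n)     # measured blend isotope ratio
R_spike = rng.normal(50.0, 0.10, n)      # spike isotope ratio
R_sample = rng.normal(0.0073, 1e-5, n)   # natural sample isotope ratio
c_spike = rng.normal(10.0, 0.02, n)      # spike concentration, ug/g

# Generic IDMS-style expression (mass terms omitted for brevity).
c_sample = c_spike * (R_spike - R_blend) / (R_blend - R_sample)
print(f"c = {c_sample.mean():.3f} +- {c_sample.std():.3f} ug/g")
```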

  6. Error reduction and parameter optimization of the TAPIR method for fast T1 mapping.

    Science.gov (United States)

    Zaitsev, M; Steinhoff, S; Shah, N J

    2003-06-01

    A methodology is presented for the reduction of both systematic and random errors in T1 determination using TAPIR, a Look-Locker-based fast T1 mapping technique. The relations between various sequence parameters were carefully investigated in order to develop recipes for choosing optimal sequence parameters. Theoretical predictions for the optimal flip angle were verified experimentally. Inversion pulse imperfections were identified as the main source of systematic errors in T1 determination with TAPIR. An effective remedy is demonstrated which extends the measurement protocol with a special sequence for mapping the inversion efficiency itself. Copyright 2003 Wiley-Liss, Inc.
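
    For readers unfamiliar with Look-Locker-type mapping, the apparent recovery after inversion is commonly modelled as S(t) = A - B·exp(-t/T1*), with the true T1 recovered as T1 = T1*(B/A - 1). A synthetic fitting sketch (the model and all values are generic Look-Locker assumptions, not TAPIR specifics):

        import numpy as np
        from scipy.optimize import curve_fit

        def look_locker(t, A, B, T1star):
            return A - B * np.exp(-t / T1star)

        t = np.linspace(0.05, 3.0, 40)               # sampling times [s]
        rng = np.random.default_rng(2)
        signal = look_locker(t, 1.0, 1.9, 0.8) + rng.normal(0, 0.01, t.size)

        (A, B, T1star), _ = curve_fit(look_locker, t, signal, p0=(1.0, 2.0, 1.0))
        print(f"fitted T1 = {T1star * (B / A - 1.0):.3f} s")  # Look-Locker correction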

  7. Optimal Error Estimates of Two Mixed Finite Element Methods for Parabolic Integro-Differential Equations with Nonsmooth Initial Data

    KAUST Repository

    Goswami, Deepjyoti

    2013-05-01

    In the first part of this article, a new mixed method is proposed and analyzed for parabolic integro-differential equations (PIDE) with nonsmooth initial data. Compared to the standard mixed method for PIDE, the present method does not bank on a reformulation using a resolvent operator. Based on energy arguments combined with a repeated use of an integral operator, and without using a parabolic-type duality technique, optimal L²-error estimates are derived for semidiscrete approximations when the initial condition is in L². Due to the presence of the integral term, it is further observed that a negative norm estimate plays a crucial role in our error analysis. Moreover, the proposed analysis follows the spirit of the proof techniques used in deriving optimal error estimates for finite element approximations to PIDE with smooth data and therefore unifies both theories, i.e., the one for smooth data and the one for nonsmooth data. Finally, we extend the proposed analysis to the standard mixed method for PIDE with rough initial data and provide an optimal error estimate in L², which improves upon the results available in the literature. © 2013 Springer Science+Business Media New York.
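
    A schematic form of the kind of nonsmooth-data estimate discussed above, in generic notation assumed here rather than quoted from the paper: for a semidiscrete approximation u_h with mesh size h and initial data u_0 only in L²,

        \| u(t) - u_h(t) \|_{L^2} \le C \, h^2 \, t^{-1} \, \| u_0 \|_{L^2}, \qquad t > 0,

    where the factor t^{-1} compensates for the lack of initial smoothness, whereas for smooth data u_0 ∈ H² the familiar bound \| u(t) - u_h(t) \|_{L^2} \le C h^2 \| u_0 \|_{H^2} holds uniformly down to t = 0.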

  8. Identification and Assessment of Human Errors in Postgraduate Endodontic Students of Kerman University of Medical Sciences by Using the SHERPA Method

    Directory of Open Access Journals (Sweden)

    Saman Dastaran

    2016-03-01

    Introduction: Human errors are the cause of many accidents, both industrial and medical, so finding an approach for identifying and reducing them is very important. Since no study had been done on human errors in the dental field, this study aimed to identify and assess human errors in postgraduate endodontic students of Kerman University of Medical Sciences by using the SHERPA method. Methods: This cross-sectional study was performed during the year 2014. Data were collected by task observation and by interviewing postgraduate endodontic students. Overall, 10 critical tasks, which were most likely to cause harm to patients, were determined. Next, Hierarchical Task Analysis (HTA) was conducted and human errors in each task were identified using the Systematic Human Error Reduction and Prediction Approach (SHERPA) technique worksheets. Results: After analyzing the SHERPA worksheets, 90 human errors were identified, including action errors (67.7%), checking errors (13.3%), selection errors (8.8%), retrieval errors (5.5%) and communication errors (4.4%). Thus most were action errors and the fewest were communication errors. Conclusions: The results of the study showed that the highest percentage of errors and the highest level of risk were associated with action errors; therefore, to reduce the occurrence of such errors and limit their consequences, control measures including periodic training on work procedures, provision of work checklists, development of guidelines and establishment of a systematic and standardized reporting system should be put in place. Regarding the results of this study, the control of recovery errors, with the highest percentage of undesirable risk, and action errors, with the highest frequency of errors, should be the priority of control

  9. L∞-error estimates of a finite element method for the Hamilton-Jacobi-Bellman equations

    International Nuclear Information System (INIS)

    Bouldbrachene, M.

    1994-11-01

    We study the finite element approximation for the solution of the Hamilton-Jacobi-Bellman equations involving a system of quasi-variational inequalities (QVI). We also give optimal L∞-error estimates, using the concepts of subsolutions and discrete regularity. (author). 7 refs

  10. Method of producing a carbon coated ceramic membrane and associated product

    Science.gov (United States)

    Liu, Paul K. T.; Gallaher, George R.; Wu, Jeffrey C. S.

    1993-01-01

    A method of producing a carbon coated ceramic membrane including passing a selected hydrocarbon vapor through a ceramic membrane and controlling ceramic membrane exposure temperature and ceramic membrane exposure time. The method produces a carbon coated ceramic membrane of reduced pore size and modified surface properties having increased chemical, thermal and hydrothermal stability over an uncoated ceramic membrane.

  11. IPR CURVE CALCULATING FOR A WELL PRODUCING BY INTERMITTENT GAS-LIFT METHOD

    Directory of Open Access Journals (Sweden)

    Zoran Mršić

    2009-12-01

    The master's degree thesis (Mršić Z., 2009) presents a detailed procedure for calculating the inflow performance curve for intermittent gas lift, based entirely on data measured at the surface. This article explains the detailed approach of that research and the essence of the results and observations acquired during the study. To evaluate the proposed method of calculating the average bottom-hole flowing pressure (BHFP) as the key parameter of inflow performance calculation, downhole pressure surveys were conducted in three producing wells at the Šandrovac and Bilogora oil fields: Šandrovac-75α, Bilogora-52 and Šandrovac-34. The absolute difference between measured and calculated values of average BHFP for the first two wells was Δp = 0.64 bar and Δp = 0.06 bar, while the calculated relative error was εr = 0.072 and εr = 0.0038, respectively. Due to a gas-lift valve malfunction in well Šandrovac-34, noticed during the downhole pressure survey, the calculated BHFP for that well cannot be considered valid for comparison with the measured value. The measured data also revealed the actual values of certain intermittent gas-lift parameters that are usually assumed from experience-based values or calculated using empirical equations given in the literature. A significant difference was noticed for the parameter t2, the length of the minimum-pressure period, for which the measured values ranged from 10.74 min up to 16 min, while the empirical equation gives values in the range of 1.23 min up to 1.75 min. Based on the measured values of the above-mentioned parameter, a new empirical equation has been established (the paper is published in Croatian).
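
    As a quick check of the figures quoted above, the relative error is simply the pressure deviation divided by the measured BHFP; the measured pressure below is back-calculated from the quoted Δp and εr and is our assumption, not a value stated in the abstract:

        delta_p = 0.64             # bar, |measured - calculated| for Sandrovac-75a
        measured_p = 0.64 / 0.072  # bar, implied measured average BHFP (~8.9 bar)
        eps_r = delta_p / measured_p
        print(f"relative error = {eps_r:.3f}")   # 0.072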

  12. Computational method for the astral survey and the effect of measurement errors on the closed orbit distortion

    International Nuclear Information System (INIS)

    Kamiya, Yukihide.

    1980-05-01

    A computational method has been developed for the astral survey procedure of the primary monuments, which consists in the measurement of short chords and perpendicular distances. This method can be applied to any astral polygon with chord lengths and vertical angles that differ from each other. We study the propagation of measurement errors for the KEK-PF storage ring and also examine their effect on the closed orbit distortion. (author)

  13. [Monitoring medication errors in personalised dispensing using the Sentinel Surveillance System method].

    Science.gov (United States)

    Pérez-Cebrián, M; Font-Noguera, I; Doménech-Moral, L; Bosó-Ribelles, V; Romero-Boyero, P; Poveda-Andrés, J L

    2011-01-01

    To assess the efficacy of a new quality control strategy based on daily randomised sampling and monitoring of a Sentinel Surveillance System (SSS) medication cart, in order to identify medication errors and their origin at different levels of the process. Prospective quality control study with one year of follow-up. A SSS medication cart was randomly selected once a week and double-checked before dispensing medication. Medication errors were recorded before it was taken to the relevant hospital ward. Information concerning complaints after receiving medication and 24-hour monitoring were also noted. Type and origin of error data were assessed by a Unit Dose Quality Control Group, which proposed relevant improvement measures. Thirty-four SSS carts were assessed, including 5130 medication lines and 9952 dispensed doses, corresponding to 753 patients. Ninety erroneous lines (1.8%) and 142 mistaken doses (1.4%) were identified at the Pharmacy Department. The most frequent error was dose duplication (38%) and its main cause was inappropriate management and forgetfulness (69%). Fifty medication complaints (6.6% of patients) were mainly due to new treatment at admission (52%), and 41 (0.8% of all medication lines) did not completely match the prescription (0.6% of lines) as recorded by the Pharmacy Department. Thirty-seven (4.9% of patients) medication complaints due to changes at admission and 32 matching errors (0.6% of medication lines) were recorded. The main cause was also inappropriate management and forgetfulness (24%). The simultaneous recording of incidents due to complaints and new medication coincided in 33.3% of cases. In addition, 433 (4.3%) of dispensed doses were returned to the Pharmacy Department. After the Unit Dose Quality Control Group conducted their feedback analysis, 64 improvement measures for Pharmacy Department nurses, 37 for pharmacists, and 24 for the hospital ward were introduced. The SSS programme has proven to be useful as a quality control strategy to identify Unit

  14. Solution of Large Systems of Linear Equations in the Presence of Errors. A Constructive Criticism of the Least Squares Method

    Energy Technology Data Exchange (ETDEWEB)

    Nygaard, K

    1968-09-15

    From the point of view that no mathematical method can ever minimise or alter errors already made in a physical measurement, the classical least squares method has severe limitations which make it unsuitable for the statistical analysis of many physical measurements. Based on the assumptions that the experimental errors are characteristic for each single experiment and that the errors must be properly estimated rather than minimised, a new method for solving large systems of linear equations is developed. The new method exposes the entire range of possible solutions before the decision is taken as to which of the possible solutions should be chosen as a representative one. The choice is based on physical considerations which (in two examples, curve fitting and unfolding of a spectrum) are presented in such a form that a computer is able to make the decision. A description of the computation is given. The method described is a tool for removing uncertainties that are due to conventional mathematical formulations (zero determinant, linear dependence) and are not inherent in the physical problem as such. The method is therefore especially well suited for the unfolding of spectra.

  15. Solution of Large Systems of Linear Equations in the Presence of Errors. A Constructive Criticism of the Least Squares Method

    International Nuclear Information System (INIS)

    Nygaard, K.

    1968-09-01

    From the point of view that no mathematical method can ever minimise or alter errors already made in a physical measurement, the classical least squares method has severe limitations which make it unsuitable for the statistical analysis of many physical measurements. Based on the assumptions that the experimental errors are characteristic for each single experiment and that the errors must be properly estimated rather than minimised, a new method for solving large systems of linear equations is developed. The new method exposes the entire range of possible solutions before the decision is taken as to which of the possible solutions should be chosen as a representative one. The choice is based on physical considerations which (in two examples, curve fitting and unfolding of a spectrum) are presented in such a form that a computer is able to make the decision. A description of the computation is given. The method described is a tool for removing uncertainties that are due to conventional mathematical formulations (zero determinant, linear dependence) and are not inherent in the physical problem as such. The method is therefore especially well suited for the unfolding of spectra

  16. Comparative evaluation of three cognitive error analysis methods through an application to accident management tasks in NPPs

    International Nuclear Information System (INIS)

    Jung, Won Dea; Kim, Jae Whan; Ha, Jae Joo; Yoon, Wan C.

    1999-01-01

    This study was performed to comparatively evaluate selected Human Reliability Analysis (HRA) methods which mainly focus on cognitive error analysis, and to derive the requirements of a new human error analysis (HEA) framework for Accident Management (AM) in nuclear power plants (NPPs). In order to achieve this goal, we carried out a case study of human error analysis on an AM task in NPPs. In the study we evaluated three cognitive HEA methods, HRMS, CREAM and PHECA, which were selected through a review of the seven currently available cognitive HEA methods. The task of reactor cavity flooding was chosen for the application study as one of the typical AM tasks in NPPs. From the study, we derived seven requirement items for a new HEA method for AM in NPPs. We could also evaluate the applicability of the three cognitive HEA methods to AM tasks. CREAM is considered to be more appropriate than the others for the analysis of AM tasks, whereas PHECA is regarded as less appropriate as a predictive HEA technique as well as for the analysis of AM tasks. In addition, the advantages and disadvantages of each method are described. (author)

  17. Control of Human Error and comparison Level risk after correction action With the SHERPA Method in a control Room of petrochemical industry

    Directory of Open Access Journals (Sweden)

    A. Zakerian

    2011-12-01

    Background and aims: Today, in many fields such as the nuclear, military and chemical industries, human errors may result in disaster. Accidents in different parts of the world underline this point; examples include the Chernobyl disaster (1986), the Three Mile Island accident (1979) and the Flixborough explosion (1974). Identification of human errors, especially in important and intricate systems, is therefore necessary and unavoidable for devising control methods. Methods: This research is a case study performed at the Zagross Methanol Company in Asalouye (South Pars). A walking-talking-through method with process experts and control-room operators, together with inspection of technical documents, was used for collecting the required information and completing the Systematic Human Error Reduction and Prediction Approach (SHERPA) worksheets. Results: Analysis of the SHERPA worksheets indicated risk levels of 71.25% unacceptable errors, 26.75% undesirable errors, 2% acceptable (with revision) errors and 0% acceptable errors; after corrective actions, the forecast risk levels were 0% unacceptable errors, 4.35% undesirable errors, 58.55% acceptable (with revision) errors and 37.1% acceptable errors. Conclusion: The overall finding is that this method is applicable and useful in different industries, especially chemical industries, for identifying human errors that may lead to accidents.

  18. Methods for determining and processing 3D errors and uncertainties for AFM data analysis

    Science.gov (United States)

    Klapetek, P.; Nečas, D.; Campbellová, A.; Yacoot, A.; Koenders, L.

    2011-02-01

    This paper describes the processing of three-dimensional (3D) scanning probe microscopy (SPM) data. It is shown that 3D volumetric calibration error and uncertainty data can be acquired for both metrological atomic force microscope systems and commercial SPMs. These data can be used within nearly all the standard SPM data processing algorithms to determine local values of uncertainty of the scanning system. If the error function of the scanning system is determined for the whole measurement volume of an SPM, it can be converted to yield local dimensional uncertainty values that can in turn be used for evaluation of uncertainties related to the acquired data and for further data processing applications (e.g. area, ACF, roughness) within direct or statistical measurements. These have been implemented in the software package Gwyddion.

  19. Methods for determining and processing 3D errors and uncertainties for AFM data analysis

    International Nuclear Information System (INIS)

    Klapetek, P; Campbellová, A; Nečas, D; Yacoot, A; Koenders, L

    2011-01-01

    This paper describes the processing of three-dimensional (3D) scanning probe microscopy (SPM) data. It is shown that 3D volumetric calibration error and uncertainty data can be acquired for both metrological atomic force microscope systems and commercial SPMs. These data can be used within nearly all the standard SPM data processing algorithms to determine local values of uncertainty of the scanning system. If the error function of the scanning system is determined for the whole measurement volume of an SPM, it can be converted to yield local dimensional uncertainty values that can in turn be used for evaluation of uncertainties related to the acquired data and for further data processing applications (e.g. area, ACF, roughness) within direct or statistical measurements. These have been implemented in the software package Gwyddion

  20. Error Analysis and Calibration Method of a Multiple Field-of-View Navigation System

    OpenAIRE

    Shi, Shuai; Zhao, Kaichun; You, Zheng; Ouyang, Chenguang; Cao, Yongkui; Wang, Zhenzhou

    2017-01-01

    The Multiple Field-of-view Navigation System (MFNS) is a spacecraft subsystem built to realize the autonomous navigation of the Spacecraft Inside Tiangong Space Station. This paper introduces the basics of the MFNS, including its architecture, mathematical model and analysis, and numerical simulation of system errors. According to the performance requirement of the MFNS, the calibration of both intrinsic and extrinsic parameters of the system is assumed to be essential and pivotal. Hence, a n...

  1. Error analysis of the finite element and finite volume methods for some viscoelastic fluids

    Czech Academy of Sciences Publication Activity Database

    Lukáčová-Medviďová, M.; Mizerová, H.; She, B.; Stebel, Jan

    2016-01-01

    Roč. 24, č. 2 (2016), s. 105-123 ISSN 1570-2820 R&D Projects: GA ČR(CZ) GAP201/11/1304 Institutional support: RVO:67985840 Keywords: error analysis * Oldroyd-B type models * viscoelastic fluids Subject RIV: BA - General Mathematics Impact factor: 0.405, year: 2016 http://www.degruyter.com/view/j/jnma.2016.24.issue-2/jnma-2014-0057/jnma-2014-0057.xml

  2. An Investigation of Methods for Reducing Sampling Error in Certain IRT (Item Response Theory) Procedures.

    Science.gov (United States)

    1983-08-01


  3. Methods for determining the effect of flatness deviations, eccentricity and pyramidal errors on angle measurements

    CSIR Research Space (South Africa)

    Kruger, OA

    2000-01-01

    ... on face-to-face angle measurements. The results show that flatness and eccentricity deviations have less effect on angle measurements than pyramidal errors do. Polygons and angle blocks are the most important transfer standards in the field of angle metrology. Polygons are used by national metrology institutes (NMIs) as transfer standards to industry, where they are used in conjunction with autocollimators to calibrate index tables, rotary tables and other forms of angle-measuring equipment ...

  4. A Posteriori Error Analysis and Uncertainty Quantification for Adaptive Multiscale Operator Decomposition Methods for Multiphysics Problems

    Science.gov (United States)

    2014-04-01

    ... Integral Role in Soft Tissue Mechanics, K. Troyer, D. Estep, and C. Puttlitz, Acta Biomaterialia 8 (2012), 234-244 • A posteriori analysis of multirate ..., 2013, submitted • A posteriori error estimation for the Lax-Wendroff finite difference scheme, J. B. Collins, D. Estep, and S. Tavener, Journal of ... • ... developed over nearly six decades of activity, and the major developments form a highly interconnected web. We do not attempt to review the history of ...

  5. Methods of producing alkylated hydrocarbons from an in situ heat treatment process liquid

    Science.gov (United States)

    Roes, Augustinus Wilhelmus Maria [Houston, TX; Mo, Weijian [Sugar Land, TX; Muylle, Michel Serge Marie [Houston, TX; Mandema, Remco Hugo [Houston, TX; Nair, Vijay [Katy, TX

    2009-09-01

    A method for producing alkylated hydrocarbons is disclosed. Formation fluid is produced from a subsurface in situ heat treatment process. The formation fluid is separated to produce a liquid stream and a first gas stream. The first gas stream includes olefins. The liquid stream is fractionated to produce at least a second gas stream including hydrocarbons having a carbon number of at least 3. The first gas stream and the second gas stream are introduced into an alkylation unit to produce alkylated hydrocarbons. At least a portion of the olefins in the first gas stream enhance alkylation.

  6. Qualitative Research of AZ31 Magnesium Alloy Aircraft Brackets Produced by a New Forging Method

    Directory of Open Access Journals (Sweden)

    Dziubińska A.

    2016-06-01

    The paper reports a selection of numerical and experimental results of a new closed-die forging method for producing AZ31 magnesium alloy aircraft brackets with one rib. The numerical modelling of the new forming process was performed by the finite element method. The distributions of stresses, strains, temperature and forces were examined. The numerical results confirmed that the forgings produced by the new forming method are correct. For this reason, the new forming process was verified experimentally. The experimental results showed good agreement with the numerical results. The produced forgings of AZ31 magnesium alloy aircraft brackets with one rib were then subjected to qualitative tests.

  7. Quantile Regression With Measurement Error

    KAUST Repository

    Wei, Ying

    2009-08-27

    Regression quantiles can be substantially biased when the covariates are measured with error. In this paper we propose a new method that produces consistent linear quantile estimation in the presence of covariate measurement error. The method corrects the measurement error induced bias by constructing joint estimating equations that simultaneously hold for all the quantile levels. An iterative EM-type estimation algorithm to obtain the solutions to such joint estimation equations is provided. The finite sample performance of the proposed method is investigated in a simulation study, and compared to the standard regression calibration approach. Finally, we apply our methodology to part of the National Collaborative Perinatal Project growth data, a longitudinal study with an unusual measurement error structure. © 2009 American Statistical Association.
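
    The bias that motivates the paper is easy to reproduce: a naive quantile regression on an error-prone covariate attenuates the slope toward zero. A simulation sketch (our illustration of the problem, not the authors' corrected estimator):

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(3)
        n = 5_000
        x_true = rng.normal(0, 1, n)
        y = 1.0 + 2.0 * x_true + rng.normal(0, 1, n)   # true median slope = 2
        x_obs = x_true + rng.normal(0, 0.8, n)         # classical measurement error

        df = pd.DataFrame({"y": y, "x": x_obs})
        fit = smf.quantreg("y ~ x", df).fit(q=0.5)     # naive median regression
        print(fit.params["x"])                         # well below 2: attenuation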

  8. A chord error conforming tool path B-spline fitting method for NC machining based on energy minimization and LSPIA

    Directory of Open Access Journals (Sweden)

    Shanshan He

    2015-10-01

    Piecewise linear (G01)-based tool paths generated by CAM systems lack G1 and G2 continuity. The discontinuity causes vibration and unnecessary hesitation during machining. To ensure efficient high-speed machining, a method to improve the continuity of the tool paths is required, such as B-spline fitting that approximates G01 paths with B-spline curves. Conventional B-spline fitting approaches cannot be directly used for tool path B-spline fitting, because they have shortcomings such as numerical instability, lack of a chord error constraint, and lack of assurance of a usable result. Progressive and Iterative Approximation for Least Squares (LSPIA) is an efficient method for data fitting that solves the numerical instability problem. However, it does not consider chord errors and needs more work to ensure ironclad results for commercial applications. In this paper, we use the LSPIA method incorporating an energy term (ELSPIA) to avoid the numerical instability, and lower chord errors by using a stretching energy term. We implement several algorithmic improvements, including (1) an improved technique for initial control point determination over the Dominant Point Method, (2) an algorithm that updates foot point parameters as needed, (3) analysis of the degrees of freedom of control points to insert new control points only when needed, (4) chord error refinement using a similar ELSPIA method with the above enhancements. The proposed approach can generate a shape-preserving B-spline curve. Experiments with data analysis and machining tests are presented for verification of quality and efficiency. Comparisons with other known solutions are included to evaluate the worthiness of the proposed solution.
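
    The core LSPIA iteration is compact: every control point is nudged by the basis-weighted residuals of the data points. A minimal sketch of that fitting idea (not the authors' full ELSPIA algorithm; step size, data and initialisation are our own choices):

        import numpy as np

        def bspline_basis(i, k, t, knots):
            """Cox-de Boor recursion for the B-spline basis N_{i,k}(t)."""
            if k == 0:
                return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
            left = right = 0.0
            d1 = knots[i + k] - knots[i]
            d2 = knots[i + k + 1] - knots[i + 1]
            if d1 > 0:
                left = (t - knots[i]) / d1 * bspline_basis(i, k - 1, t, knots)
            if d2 > 0:
                right = (knots[i + k + 1] - t) / d2 * bspline_basis(i + 1, k - 1, t, knots)
            return left + right

        # Data from a G01-like polyline (quarter circle), uniform parameters.
        pts = np.array([[np.cos(a), np.sin(a)] for a in np.linspace(0, np.pi / 2, 30)])
        params = np.linspace(0, 1, len(pts))
        degree, n_ctrl = 3, 8
        knots = np.concatenate([[0.0] * degree,
                                np.linspace(0, 1, n_ctrl - degree + 1),
                                [1.0] * degree])

        N = np.array([[bspline_basis(i, degree, min(t, 1 - 1e-12), knots)
                       for i in range(n_ctrl)] for t in params])
        ctrl = pts[np.linspace(0, len(pts) - 1, n_ctrl).astype(int)].astype(float)

        mu = 1.0 / N.sum(axis=0).max()     # safe step: 0 < mu < 2 / lambda_max(N^T N)
        for _ in range(200):
            residual = pts - N @ ctrl      # fitting errors at the data points
            ctrl += mu * (N.T @ residual)  # LSPIA update: basis-weighted corrections

        print("max fitting error:", np.abs(pts - N @ ctrl).max())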

  9. Analysis of a HP-refinement method for solving the neutron transport equation using two error estimators

    International Nuclear Information System (INIS)

    Fournier, D.; Le Tellier, R.; Suteau, C.; Herbin, R.

    2011-01-01

    The solution of the time-independent neutron transport equation in a deterministic way invariably consists in the successive discretization of the three variables: energy, angle and space. In the SNATCH solver used in this study, the energy and the angle are respectively discretized with a multigroup approach and the discrete ordinate method. A set of spatially coupled transport equations is obtained and solved using the Discontinuous Galerkin Finite Element Method (DGFEM). Within this method, the spatial domain is decomposed into elements and the solution is approximated by a hierarchical polynomial basis in each one. This approach is time and memory consuming when the mesh becomes fine or the basis order high. To improve the computational time and the memory footprint, adaptive algorithms are proposed. These algorithms are based on an error estimation in each cell. If the error is important in a given region, the mesh has to be refined (h-refinement) or the polynomial basis order increased (p-refinement). This paper is related to the choice between the two types of refinement. Two ways to estimate the error are compared on different benchmarks. Analyzing the differences, an hp-refinement method is proposed and tested. (author)
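
    The h-versus-p decision described above reduces to a simple rule once a per-cell error estimate and a smoothness indicator are available. A schematic sketch (class names, thresholds and the smoothness test are illustrative, not the SNATCH solver's actual logic):

        from dataclasses import dataclass, field

        @dataclass
        class Cell:
            error_estimate: float
            smoothness: float            # ~1 where the solution is locally smooth
            order: int = 1
            children: list = field(default_factory=list)

        def hp_refine(cells, global_tol):
            """Raise the basis order where the solution looks smooth (p),
            split the cell where it does not (h)."""
            for cell in cells:
                if cell.error_estimate <= global_tol / len(cells):
                    continue                         # accurate enough already
                if cell.smoothness > 0.9:
                    cell.order += 1                  # p-refinement
                else:                                # h-refinement
                    cell.children = [Cell(cell.error_estimate / 2, cell.smoothness,
                                          cell.order) for _ in range(2)]

        mesh = [Cell(1e-2, 0.95), Cell(5e-2, 0.3), Cell(1e-6, 0.8)]
        hp_refine(mesh, 1e-4)
        print([(c.order, len(c.children)) for c in mesh])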

  10. Longitudinal Cut Method Revisited: A Survey on the Main Error Sources

    OpenAIRE

    Moriconi, Alessandro; Lalli, Francesco; Di Felice, Fabio; Esposito, Pier Giorgio; Piscopia, Rodolfo

    2000-01-01

    Some of the main error sources in wave pattern resistance determination were investigated. The experimental data obtained at the Italian Ship Model Basin (longitudinal wave cuts concerned with the steady motion of the Series 60 model and a hard-chine catamaran) were analyzed. It was found that, within the range of Froude numbers tested (0.225 ≤ Fr ≤ 0.345 for the Series 60 and 0.5 ≤ Fr ≤ 1 for the catamaran) two sources of uncertainty play a significant role: (i) the p...

  11. A Posteriori Error Analysis and Uncertainty Quantification for Adaptive Multiscale Operator Decomposition Methods for Multiphysics Problems

    Science.gov (United States)

    2013-06-24

    [Garbled OCR of the report's error representations for the MFE and GFV discretizations.] ... common to both MFE and GFV, are often similar in size. As a gross measure of the effect of geometric progression and of the use of quadrature, we ... their true value, the error in the quantity of interest MFE E(e,ψ) or GFV E(e,ψ). Tables 1 and 2 show this using coarse and fine forward ...

  12. Quantitative developments in the cognitive reliability and error analysis method (CREAM) for the assessment of human performance

    International Nuclear Information System (INIS)

    Marseguerra, Marzio; Zio, Enrico; Librizzi, Massimo

    2006-01-01

    The current 'second generation' approaches in human reliability analysis focus their attention on the contextual conditions under which a given action is performed rather than on the notion of inherent human error probabilities, as was done in the earlier 'first generation' techniques. Among the 'second generation' methods, this paper considers the Cognitive Reliability and Error Analysis Method (CREAM) and proposes some developments with respect to a systematic procedure for computing probabilities of action failure. The starting point for the quantification is a previously introduced fuzzy version of the CREAM paradigm which is here further extended to include uncertainty on the qualification of the conditions under which the action is performed and to account for the fact that the effects of the common performance conditions (CPCs) on performance reliability may not all be equal. By the proposed approach, the probability of action failure is estimated by rating the performance conditions in terms of their effect on the action

  13. Methods, analysis, and the treatment of systematic errors for the electron electric dipole moment search in thorium monoxide

    Science.gov (United States)

    Baron, J.; Campbell, W. C.; DeMille, D.; Doyle, J. M.; Gabrielse, G.; Gurevich, Y. V.; Hess, P. W.; Hutzler, N. R.; Kirilov, E.; Kozyryev, I.; O'Leary, B. R.; Panda, C. D.; Parsons, M. F.; Spaun, B.; Vutha, A. C.; West, A. D.; West, E. P.; ACME Collaboration

    2017-07-01

    We recently set a new limit on the electric dipole moment of the electron (eEDM) (J. Baron et al. and ACME Collaboration 2014 Science 343 269-272), which represented an order-of-magnitude improvement on the previous limit and placed more stringent constraints on many charge-parity-violating extensions to the standard model. In this paper we discuss the measurement in detail. The experimental method and associated apparatus are described, together with the techniques used to isolate the eEDM signal. In particular, we detail the way experimental switches were used to suppress effects that can mimic the signal of interest. The methods used to search for systematic errors, and models explaining observed systematic errors, are also described. We briefly discuss possible improvements to the experiment.

  14. Developmental competence and epigenetic profile of porcine embryos produced by two different cloning methods

    DEFF Research Database (Denmark)

    Liu, Ying; Lucas-Hahn, Andrea; Petersen, Bjoern

    2017-01-01

    on conventionally produced embryos. The goal of this study was to unravel putative differences between two cloning methods, with regard to developmental competence, expression profile of a panel of developmentally important genes and epigenetic profile of porcine cloned embryos produced by either CNT or HMC, either...

  15. Error-free pathology: applying lean production methods to anatomic pathology.

    Science.gov (United States)

    Condel, Jennifer L; Sharbaugh, David T; Raab, Stephen S

    2004-12-01

    The current state of our health care system calls for dramatic changes. In their pathology department, the authors believe these changes may be accomplished by accepting the long-term commitment of applying a lean production system. The ideal state of zero pathology errors is one that should be pursued by consistently asking, "Why can't we?" The philosophy of lean production systems began in the manufacturing industry: "All we are doing is looking at the time from the moment the customer gives us an order to the point when we collect the cash. And we are reducing that time line by removing non-value added wastes". The ultimate goals in pathology and overall health care are not so different. The authors' intention is to provide the patient (customer) with the most accurate diagnostic information in a timely and efficient manner. Their lead histotechnologist recently summarized this philosophy: she indicated that she felt she could sleep better at night knowing she truly did the best job she could. Her chances of making an error (in cutting or labeling) were dramatically decreased in the one-by-one continuous flow work process compared with previous practices. By designing a system that enables employees to be successful in meeting customer demand, and by empowering the frontline staff in the development and problem solving processes, one can meet the challenges of eliminating waste and build an improved, efficient system.

  16. Methods to reduce medication errors in a clinical trial of an investigational parenteral medication

    Directory of Open Access Journals (Sweden)

    Gillian L. Fell

    2016-12-01

    There are few evidence-based guidelines to inform optimal design of complex clinical trials, such as those assessing the safety and efficacy of intravenous drugs administered daily with infusion times over many hours per day and treatment durations that may span years. This study is a retrospective review of inpatient administration deviation reports for an investigational drug that is administered daily with infusion times of 8–24 h, and variable treatment durations for each patient. We report study design modifications made in 2007–2008 aimed at minimizing deviations from an investigational drug infusion protocol approved by an institutional review board and the United States Food and Drug Administration. Modifications were specifically aimed at minimizing errors of infusion rate, incorrect dose, incorrect patient, or wrong drug administered. We found that the rate of these types of administration errors of the study drug was significantly decreased following adoption of the specific study design changes. This report provides guidance in the design of clinical trials testing the safety and efficacy of study drugs administered via intravenous infusion in an inpatient setting so as to minimize drug administration protocol deviations and optimize patient safety.

  17. BLESS 2: accurate, memory-efficient and fast error correction method.

    Science.gov (United States)

    Heo, Yun; Ramachandran, Anand; Hwu, Wen-Mei; Ma, Jian; Chen, Deming

    2016-08-01

    The most important features of error correction tools for sequencing data are accuracy, memory efficiency and fast runtime. The previous version of BLESS was highly memory-efficient and accurate, but it was too slow to handle reads from large genomes. We have developed a new version of BLESS to improve runtime and accuracy while maintaining a small memory usage. The new version, called BLESS 2, has an error correction algorithm that is more accurate than BLESS, and the algorithm has been parallelized using hybrid MPI and OpenMP programming. BLESS 2 was compared with five top-performing tools, and it was found to be the fastest when it was executed on two computing nodes using MPI, with each node containing twelve cores. Also, BLESS 2 showed at least 11% higher gain while retaining the memory efficiency of the previous version for large genomes. Freely available at https://sourceforge.net/projects/bless-ec. Contact: dchen@illinois.edu. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  18. Are adequate methods available to detect protist parasites on fresh produce?

    Science.gov (United States)

    Human parasitic protists such as Cryptosporidium, Giardia and microsporidia contaminate a variety of fresh produce worldwide. Existing detection methods lack sensitivity and specificity for most foodborne parasites. Furthermore, detection has been problematic because these parasites adhere tenacious...

  19. Some error estimates for the lumped mass finite element method for a parabolic problem

    KAUST Repository

    Chatzipantelidis, P.; Lazarov, R. D.; Thomé e, V.

    2012-01-01

    ... for the standard Galerkin method carry over to the lumped mass method, whereas nonsmooth initial data estimates require special assumptions on the triangulation. We also discuss the application to time discretization by the backward Euler and Crank-Nicolson methods.

  20. A comparative evaluation of emerging methods for errors of commission based on applications to the Davis-Besse (1985) event

    Energy Technology Data Exchange (ETDEWEB)

    Reer, B.; Dang, V.N.; Hirschberg, S. [Paul Scherrer Inst., Nuclear Energy and Safety Research Dept., CH-5232 Villigen PSI (Switzerland); Straeter, O. [Gesellschaft fur Anlagen- und Reaktorsicherheit (Germany)

    1999-12-01

    In considering the human role in accidents, the classical PSA methodology applied today focuses primarily on the omissions of actions required of the operators at specific points in the scenario models. A practical, proven methodology is not available for systematically identifying and analyzing the scenario contexts in which the operators might perform inappropriate actions that aggravate the scenario. As a result, typical PSAs do not comprehensively treat these actions, referred to as errors of commission (EOCs). This report presents the results of a joint project of the Paul Scherrer Institut (PSI, Villigen, Switzerland) and the Gesellschaft fuer Anlagen- und Reaktorsicherheit (GRS, Garching, Germany) that examined some methods recently proposed for addressing the EOC issue. Five methods were investigated: 1) ATHEANA, 2) the Borssele screening methodology, 3) CREAM, 4) CAHR, and 5) CODA. In addition to a comparison of their scope, basic assumptions, and analytical approach, the methods were each applied in the analysis of PWR Loss of Feedwater scenarios based on the 1985 Davis-Besse event, in which the operator response included actions that can be categorized as EOCs. The aim was to compare how the methods consider a concrete scenario in which EOCs have in fact been observed. These case applications show how the methods are used in practical terms and constitute a common basis for comparing the methods and the insights that they provide. The identification of the potentially significant EOCs to be analysed in the PSA is currently the central problem for their treatment. The identification or search scheme has to consider an extensive set of potential actions that the operators may take. These actions may take place instead of required actions, for example, because the operators fail to assess the plant state correctly, or they may occur even when no action is required. As a result of this broad search space, most methodologies apply multiple schemes to ...

  1. A comparative evaluation of emerging methods for errors of commission based on applications to the Davis-Besse (1985) event

    International Nuclear Information System (INIS)

    Reer, B.; Dang, V.N.; Hirschberg, S.; Straeter, O.

    1999-12-01

    In considering the human role in accidents, the classical PSA methodology applied today focuses primarily on the omissions of actions required of the operators at specific points in the scenario models. A practical, proven methodology is not available for systematically identifying and analyzing the scenario contexts in which the operators might perform inappropriate actions that aggravate the scenario. As a result, typical PSAs do not comprehensively treat these actions, referred to as errors of commission (EOCs). This report presents the results of a joint project of the Paul Scherrer Institut (PSI, Villigen, Switzerland) and the Gesellschaft fuer Anlagen- und Reaktorsicherheit (GRS, Garching, Germany) that examined some methods recently proposed for addressing the EOC issue. Five methods were investigated: 1) ATHEANA, 2) the Borssele screening methodology, 3) CREAM, 4) CAHR, and 5) CODA. In addition to a comparison of their scope, basic assumptions, and analytical approach, the methods were each applied in the analysis of PWR Loss of Feedwater scenarios based on the 1985 Davis-Besse event, in which the operator response included actions that can be categorized as EOCs. The aim was to compare how the methods consider a concrete scenario in which EOCs have in fact been observed. These case applications show how the methods are used in practical terms and constitute a common basis for comparing the methods and the insights that they provide. The identification of the potentially significant EOCs to be analysed in the PSA is currently the central problem for their treatment. The identification or search scheme has to consider an extensive set of potential actions that the operators may take. These actions may take place instead of required actions, for example, because the operators fail to assess the plant state correctly, or they may occur even when no action is required. As a result of this broad search space, most methodologies apply multiple schemes to ...

  2. The Adaptive-Clustering and Error-Correction Method for Forecasting Cyanobacteria Blooms in Lakes and Reservoirs

    Directory of Open Access Journals (Sweden)

    Xiao-zhe Bai

    2017-01-01

    Globally, cyanobacteria blooms frequently occur, and effective prediction of cyanobacteria blooms in lakes and reservoirs could constitute an essential proactive strategy for water-resource protection. However, cyanobacteria blooms are very complicated because of the internal stochastic nature of the system evolution and the external uncertainty of the observation data. In this study, an adaptive-clustering algorithm is introduced to obtain some typical operating intervals. In addition, the number of nearest neighbors used for modeling was optimized by particle swarm optimization. Finally, a fuzzy linear regression method based on error correction was used to revise the model dynamically near the operating point. We found that the combined method can characterize the evolutionary track of cyanobacteria blooms in lakes and reservoirs. The model constructed in this paper is compared to other cyanobacteria-bloom forecasting methods (e.g., phase-space reconstruction and traditional-clustering linear regression), and the average relative error and average absolute error are used to compare the accuracies of these models. The results suggest that the proposed model is superior. As such, the newly developed approach achieves more precise predictions, which can be used to prevent further deterioration of the water environment.

  3. Capacitors for Integrated Circuits Produced by Means of a Double Implantation Method

    International Nuclear Information System (INIS)

    Zukowski, P.; Partyka, J.; Wegierek, P.

    1998-01-01

    The paper presents a description of a method to produce capacitors in integrated circuits that consists in implanting weakly doped silicon with the same impurity, then subjecting it to annealing (producing the inner plate), and implanting it again with ions of neutral elements to produce the dielectric layer. Results of testing the capacitors produced in this way are also presented. A unit capacity of C_u = 4.5 nF/mm² at tan δ = 0.01 has been obtained. The authors are of the opinion that the interesting problem of discontinuous variations of dielectric losses and capacities, considered as functions of temperature, must be viewed as an open problem. (author)

  4. The boundary element method : errors and gridding for problems with hot spots

    NARCIS (Netherlands)

    Kakuba, G.

    2011-01-01

    Adaptive gridding methods are of fundamental importance both for industry and academia. As one of the computing methods, the Boundary Element Method (BEM) is used to simulate problems whose fundamental solutions are available. The method is usually characterised as constant elements BEM or linear

  5. A Theoretically Consistent Method for Minimum Mean-Square Error Estimation of Mel-Frequency Cepstral Features

    DEFF Research Database (Denmark)

    Jensen, Jesper; Tan, Zheng-Hua

    2014-01-01

    We propose a method for minimum mean-square error (MMSE) estimation of mel-frequency cepstral features for noise robust automatic speech recognition (ASR). The method is based on a minimum number of well-established statistical assumptions; no assumptions are made which are inconsistent with others....... The strength of the proposed method is that it allows MMSE estimation of mel-frequency cepstral coefficients (MFCC's), cepstral mean-subtracted MFCC's (CMS-MFCC's), velocity, and acceleration coefficients. Furthermore, the method is easily modified to take into account other compressive non-linearities than...... the logarithmic which is usually used for MFCC computation. The proposed method shows estimation performance which is identical to or better than state-of-the-art methods. It further shows comparable ASR performance, where the advantage of being able to use mel-frequency speech features based on a power non...
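
    For orientation, the standard MFCC chain the abstract refers to is: windowed power spectrum → mel filterbank → compressive non-linearity → DCT. A generic textbook implementation (not the paper's MMSE estimator), which makes explicit where a non-logarithmic compression would plug in:

        import numpy as np
        from scipy.fftpack import dct

        def hz_to_mel(f):
            return 2595.0 * np.log10(1.0 + f / 700.0)

        def mel_to_hz(m):
            return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

        def mel_filterbank(n_mels, n_bins, sr):
            """Triangular mel filters over the rFFT bins."""
            mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_mels + 2)
            bins = np.floor((n_bins - 1) * mel_to_hz(mel_pts) / (sr / 2.0)).astype(int)
            fb = np.zeros((n_mels, n_bins))
            for i in range(n_mels):
                l, c, r = bins[i], bins[i + 1], bins[i + 2]
                if c > l:
                    fb[i, l:c] = (np.arange(l, c) - l) / (c - l)
                if r > c:
                    fb[i, c:r] = (r - np.arange(c, r)) / (r - c)
            return fb

        def mfcc(frame, sr=16_000, n_mels=26, n_ceps=13, compress=np.log):
            """`compress` is the compressive non-linearity (log by default)."""
            spec = np.abs(np.fft.rfft(frame * np.hamming(len(frame)))) ** 2
            mel_energies = mel_filterbank(n_mels, spec.size, sr) @ spec
            return dct(compress(mel_energies + 1e-10), norm="ortho")[:n_ceps]

        frame = np.random.default_rng(6).normal(size=400)  # one 25 ms frame @ 16 kHz
        print(mfcc(frame)[:4])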

  6. Sound transmission analysis of plate structures using the finite element method and elementary radiator approach with radiator error index

    DEFF Research Database (Denmark)

    Jung, Jaesoon; Kook, Junghwan; Goo, Seongyeol

    2017-01-01

    combines the FEM and Elementary Radiator Approach (ERA) is proposed. The FE-ERA method analyzes the vibrational response of the plate structure excited by incident sound using FEM and then computes the transmitted acoustic pressure from the vibrating plate using ERA. In order to improve the accuracy...... and efficiency of the FE-ERA method, a novel criterion for the optimal number of elementary radiators is proposed. The criterion is based on the radiator error index that is derived to estimate the accuracy of the computation with used number of radiators. Using the proposed criterion a radiator selection method...... is presented for determining the optimum number of radiators. The presented radiator selection method and the FE-ERA method are combined to improve the computational accuracy and efficiency. Several numerical examples that have been rarely addressed in previous studies, are presented with the proposed method...

  7. Report from LHC MDs 1391 and 1483: Tests of new methods for study of nonlinear errors in the LHC experimental insertions

    CERN Document Server

    Maclean, Ewen Hamish; Fuchsberger, Kajetan; Giovannozzi, Massimo; Persson, Tobias Hakan Bjorn; Tomas Garcia, Rogelio; CERN. Geneva. ATS Department

    2017-01-01

    Nonlinear errors in experimental insertions can pose a significant challenge to the operability of low-β* colliders. Previously, such errors in the LHC have been studied via their feed-down to tune and coupling under the influence of the nominal crossing angle bumps. This method has proved useful in validating various components of the magnetic model. To understand and correct those errors where significant discrepancies exist with the magnetic model, however, will require further development of this technique, in addition to the application of novel methods. In 2016 studies were performed to test new methods for the study of the IR nonlinear errors.

  8. Evolutionary enhancement of the SLIM-MAUD method of estimating human error rates

    International Nuclear Information System (INIS)

    Zamanali, J.H.; Hubbard, F.R.; Mosleh, A.; Waller, M.A.

    1992-01-01

    The methodology described in this paper assigns plant-specific dynamic human error rates (HERs) for individual plant examinations based on procedural difficulty, on configuration features, and on the time available to perform the action. This methodology is an evolutionary improvement of the success likelihood index methodology (SLIM-MAUD) for use in systemic scenarios. It is based on the assumption that the HER in a particular situation depends on the combined effects of a comprehensive set of performance-shaping factors (PSFs) that influence the operator's ability to perform the action successfully. The PSFs relate the details of the systemic scenario in which the action must be performed according to the operator's psychological and cognitive condition

  9. Error Concealment Method Based on Motion Vector Prediction Using Particle Filters

    Directory of Open Access Journals (Sweden)

    B. Hrusovsky

    2011-09-01

    Video transmitted over an unreliable environment, such as a wireless channel or, in general, any network with an unreliable transport protocol, faces losses of video packets due to network congestion and different kinds of noise. The problem becomes more important with highly effective video codecs: visual quality degradation can propagate into subsequent frames because of the redundancy elimination used to obtain high compression ratios. Since real-time video stream transmission is limited by the transmission channel delay, it is not possible to retransmit all faulty or lost packets, so it is inevitable that these defects must be concealed. To reduce the undesirable effects of information losses, the lost data are usually estimated from the received data, which is generally known as the error concealment problem. This paper discusses packet-loss modeling in order to simulate losses during video transmission, packet-loss analysis and its impact on motion vector losses.
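
    The concealment idea lends itself to a compact sketch: track a block's motion vector with a particle filter and fall back on the prediction when the vector is lost. This is our illustration of the general mechanism, not the authors' exact filter; the motion and observation models are assumptions:

        import numpy as np

        rng = np.random.default_rng(4)
        n_particles = 500
        particles = rng.normal(0.0, 2.0, (n_particles, 2))  # candidate MVs (x, y)
        weights = np.full(n_particles, 1.0 / n_particles)

        def step(observed_mv=None, process_std=0.5, obs_std=1.0):
            """One predict/update cycle; observed_mv is None for a lost packet."""
            global particles, weights
            particles += rng.normal(0.0, process_std, particles.shape)  # predict
            if observed_mv is not None:                                 # update
                d2 = ((particles - observed_mv) ** 2).sum(axis=1)
                weights = np.exp(-0.5 * d2 / obs_std ** 2)
                weights /= weights.sum()
                idx = rng.choice(n_particles, n_particles, p=weights)   # resample
                particles = particles[idx]
                weights = np.full(n_particles, 1.0 / n_particles)
            return (particles * weights[:, None]).sum(axis=0)           # MV estimate

        for mv in [(1.0, 0.5), (1.2, 0.6), None, None]:  # None = lost motion vector
            est = step(np.array(mv) if mv is not None else None)
            print(np.round(est, 2))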

  10. Study protocol: the empirical investigation of methods to correct for measurement error in biobanks with dietary assessment

    Directory of Open Access Journals (Sweden)

    Masson Lindsey F

    2011-10-01

    Background: The Public Population Project in Genomics (P3G) is an organisation that aims to promote collaboration between researchers in the field of population-based genomics. The main objectives of P3G are to encourage collaboration between researchers and biobankers, optimize study design, promote the harmonization of information use in biobanks, and facilitate transfer of knowledge between interested parties. The importance of calibration and harmonisation of methods for environmental exposure assessment, to allow pooling of data across studies in the evaluation of gene-environment interactions, has been recognised by P3G, which has set up a methodological group on calibration with the aims of: (1) reviewing the published methodological literature on measurement error correction methods, with their assumptions and methods of implementation; (2) reviewing the evidence available from published nutritional epidemiological studies that have used a calibration approach; (3) disseminating information in the form of a comparison chart on approaches to perform calibration studies and how to obtain correction factors, in order to support research groups collaborating within the P3G network that are unfamiliar with the methods employed; (4) with application to the field of nutritional epidemiology, including gene-diet interactions, ultimately developing an inventory of the typical correction factors for various nutrients. Methods/Design: Systematic review of (a) the methodological literature on methods to correct for measurement error in epidemiological studies; and (b) studies that have been designed primarily to investigate the association between diet and disease and have also corrected for measurement error in dietary intake. Discussion: The conduct of a systematic review of the methodological literature on calibration will facilitate the evaluation of methods to correct for measurement error and the design of calibration studies for the prospective pooling of ...

  11. Methods for producing extracted and digested products from pretreated lignocellulosic biomass

    Science.gov (United States)

    Chundawat, Shishir; Sousa, Leonardo Da Costa; Cheh, Albert M.; Balan, Venkatesh; Dale, Bruce

    2017-05-16

    Methods for producing extracted and digested products from pretreated lignocellulosic biomass are provided. The methods include converting native cellulose I_β to cellulose III_I by pretreating the lignocellulosic biomass with liquid ammonia under certain conditions, and performing extracting or digesting steps on the pretreated/converted lignocellulosic biomass.

  12. High pressure low temperature hot pressing method for producing a zirconium carbide ceramic

    Science.gov (United States)

    Cockeram, Brian V.

    2017-01-10

    A method for producing monolithic Zirconium Carbide (ZrC) is described. The method includes raising a pressure applied to a ZrC powder until a final pressure of greater than 40 MPa is reached; and raising a temperature of the ZrC powder until a final temperature of less than 2200°C is reached.

  13. Evaluation of beef trim sampling methods for detection of Shiga toxin-producing Escherichia coli (STEC)

    Science.gov (United States)

    Presence of Shiga toxin-producing Escherichia coli (STEC) is a major concern in ground beef. Several methods for sampling beef trim prior to grinding are currently used in the beef industry. The purpose of this study was to determine the efficacy of the sampling methods for detecting STEC in beef ...

  14. The applicability of micro-filters produced by nuclear methods in the food industry

    International Nuclear Information System (INIS)

    Szabo, S.A.; Ember, G.

    1982-01-01

    Problems of the applicability in the food industry of micro-filters produced by nuclear methods are dealt with. Production methods of the polymeric micro-filters, their main characteristics as well as their most important application fields (breweries, dairies, alcoholic- and soft-drink plants, wine industry) are briefly reviewed. (author)

  15. Measuring nuclear-spin-dependent parity violation with molecules: Experimental methods and analysis of systematic errors

    Science.gov (United States)

    Altuntaş, Emine; Ammon, Jeffrey; Cahn, Sidney B.; DeMille, David

    2018-04-01

    Nuclear-spin-dependent parity violation (NSD-PV) effects in atoms and molecules arise from Z⁰ boson exchange between electrons and the nucleus and from the magnetic interaction between electrons and the parity-violating nuclear anapole moment. It has been proposed to study NSD-PV effects using an enhancement of the observable effect in diatomic molecules [D. DeMille et al., Phys. Rev. Lett. 100, 023003 (2008), 10.1103/PhysRevLett.100.023003]. Here we demonstrate highly sensitive measurements of this type, using the test system ¹³⁸Ba¹⁹F. We show that systematic errors associated with our technique can be suppressed to at least the level of the present statistical sensitivity. With ~170 h of data, we measure the matrix element W of the NSD-PV interaction with uncertainty δW/(2π) < 0.7 Hz for each of two configurations where W must have different signs. This sensitivity would be sufficient to measure NSD-PV effects of the size anticipated across a wide range of nuclei.

  16. Modeling coherent errors in quantum error correction

    Science.gov (United States)

    Greenbaum, Daniel; Dutton, Zachary

    2018-01-01

    Analysis of quantum error correcting codes is typically done using a stochastic, Pauli channel error model for describing the noise on physical qubits. However, it was recently found that coherent errors (systematic rotations) on physical data qubits result in both physical and logical error rates that differ significantly from those predicted by a Pauli model. Here we examine the accuracy of the Pauli approximation for noise containing coherent errors (characterized by a rotation angle ε) under the repetition code. We derive an analytic expression for the logical error channel as a function of arbitrary code distance d and concatenation level n, in the small error limit. We find that coherent physical errors result in logical errors that are partially coherent and therefore non-Pauli. However, the coherent part of the logical error is negligible at fewer than ε^{-(dn-1)} error correction cycles when the decoder is optimized for independent Pauli errors, thus providing a regime of validity for the Pauli approximation. Above this number of correction cycles, the persistent coherent logical error will cause logical failure more quickly than the Pauli model would predict, and this may need to be combated with coherent suppression methods at the physical level or larger codes.
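
    The qualitative difference is easy to see on a single qubit: a coherent rotation by ε accumulates as an amplitude over repeated uncorrected applications, while a Pauli channel of the same per-step strength accumulates as a probability. A toy comparison (our illustration, not the paper's repetition-code derivation):

        import numpy as np

        eps = 0.05              # per-step rotation angle
        n = 20                  # uncorrected applications

        # Coherent: angles add, then convert to a flip probability.
        p_coherent = np.sin(n * eps) ** 2

        # Stochastic Pauli: per-step flip probability p = sin(eps)^2 combines
        # incoherently; probability of an odd number of flips after n steps.
        p = np.sin(eps) ** 2
        p_pauli = 0.5 * (1 - (1 - 2 * p) ** n)

        print(f"coherent: {p_coherent:.4f}, Pauli model: {p_pauli:.4f}")
        # Coherent growth ~ (n*eps)^2 quickly outpaces the ~linear Pauli rate.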

  17. Methods for Estimation of Radiation Risk in Epidemiological Studies Accounting for Classical and Berkson Errors in Doses

    KAUST Repository

    Kukush, Alexander

    2011-01-16

    With a binary response Y, the dose-response model under consideration is logistic in flavor, with pr(Y=1 | D) = R(1+R)^{-1}, R = λ_0 + EAR·D, where λ_0 is the baseline incidence rate and EAR is the excess absolute risk per gray. The calculated thyroid dose of a person i is expressed as D_i^{mes} = f_i Q_i^{mes} / M_i^{mes}. Here, Q_i^{mes} is the measured content of radioiodine in the thyroid gland of person i at time t^{mes}, M_i^{mes} is the estimate of the thyroid mass, and f_i is the normalizing multiplier. The Q_i and M_i are measured with multiplicative errors V_i^Q and V_i^M, so that Q_i^{mes} = Q_i^{tr} V_i^Q (a classical measurement error model) and M_i^{tr} = M_i^{mes} V_i^M (a Berkson measurement error model). Here, Q_i^{tr} is the true content of radioactivity in the thyroid gland, and M_i^{tr} is the true value of the thyroid mass. The error in f_i is much smaller than the errors in (Q_i^{mes}, M_i^{mes}) and is ignored in the analysis. By means of Parametric Full Maximum Likelihood and Regression Calibration (under the assumption that the data set of true doses has a lognormal distribution), Nonparametric Full Maximum Likelihood, Nonparametric Regression Calibration, and a properly tuned SIMEX method, we study the influence of measurement errors in thyroid dose on the estimates of λ_0 and EAR. A simulation study is presented based on a real sample from the epidemiological studies. The doses were reconstructed in the framework of the Ukrainian-American project on the investigation of post-Chernobyl thyroid cancers in Ukraine, and the underlying subpopulation was artificially enlarged in order to increase the statistical power. The true risk parameters were given the values from earlier epidemiological studies, and the binary response was then simulated according to the dose-response model.
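    As a rough illustration of the stated error structure, the sketch below generates synthetic data under the abstract's model, with a classical multiplicative error on the measured activity Q and a Berkson multiplicative error on the thyroid mass M. All numerical values (λ_0, EAR, error variances, sample size) are arbitrary assumptions, not those of the study, and the same model is used in the duplicate record that follows.

```python
import numpy as np

# Sketch of the stated measurement-error model with assumed parameters:
# classical error on Q (Q_mes = Q_tr * V_Q), Berkson error on M
# (M_tr = M_mes * V_M), excess-absolute-risk dose response.
rng = np.random.default_rng(7)
n = 10_000
lam0, EAR = 1e-4, 0.5                 # assumed baseline rate and risk per Gy
f = 1.0                               # normalizing multiplier (error ignored)

Q_tr = rng.lognormal(0.0, 1.0, n)     # true thyroid activity
V_Q = rng.lognormal(0.0, 0.3, n)      # classical multiplicative error
Q_mes = Q_tr * V_Q

M_mes = rng.lognormal(1.0, 0.2, n)    # estimated thyroid mass
V_M = rng.lognormal(0.0, 0.3, n)      # Berkson multiplicative error
M_tr = M_mes * V_M

D_tr = f * Q_tr / M_tr                # true dose drives the risk
D_mes = f * Q_mes / M_mes             # calculated (error-prone) dose

R = lam0 + EAR * D_tr
Y = rng.random(n) < R / (1.0 + R)     # binary response, pr(Y=1|D) = R/(1+R)
print(Y.mean())                       # simulated incidence
```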

  18. Methods for estimation of radiation risk in epidemiological studies accounting for classical and Berkson errors in doses.

    Science.gov (United States)

    Kukush, Alexander; Shklyar, Sergiy; Masiuk, Sergii; Likhtarov, Illya; Kovgan, Lina; Carroll, Raymond J; Bouville, Andre

    2011-02-16

    With a binary response Y, the dose-response model under consideration is logistic in flavor, with pr(Y=1 | D) = R(1+R)^{-1}, R = λ_0 + EAR·D, where λ_0 is the baseline incidence rate and EAR is the excess absolute risk per gray. The calculated thyroid dose of a person i is expressed as D_i^{mes} = f_i Q_i^{mes} / M_i^{mes}. Here, Q_i^{mes} is the measured content of radioiodine in the thyroid gland of person i at time t^{mes}, M_i^{mes} is the estimate of the thyroid mass, and f_i is the normalizing multiplier. The Q_i and M_i are measured with multiplicative errors V_i^Q and V_i^M, so that Q_i^{mes} = Q_i^{tr} V_i^Q (a classical measurement error model) and M_i^{tr} = M_i^{mes} V_i^M (a Berkson measurement error model). Here, Q_i^{tr} is the true content of radioactivity in the thyroid gland, and M_i^{tr} is the true value of the thyroid mass. The error in f_i is much smaller than the errors in (Q_i^{mes}, M_i^{mes}) and is ignored in the analysis. By means of Parametric Full Maximum Likelihood and Regression Calibration (under the assumption that the data set of true doses has a lognormal distribution), Nonparametric Full Maximum Likelihood, Nonparametric Regression Calibration, and a properly tuned SIMEX method, we study the influence of measurement errors in thyroid dose on the estimates of λ_0 and EAR. A simulation study is presented based on a real sample from the epidemiological studies. The doses were reconstructed in the framework of the Ukrainian-American project on the investigation of post-Chernobyl thyroid cancers in Ukraine, and the underlying subpopulation was artificially enlarged in order to increase the statistical power. The true risk parameters were given the values from earlier epidemiological studies, and the binary response was then simulated according to the dose-response model.

  19. Circuit and method for comparator offset error detection and correction in ADC

    NARCIS (Netherlands)

    2017-01-01

    PROBLEM TO BE SOLVED: To provide a method for calibrating an analog-to-digital converter (ADC).SOLUTION: The method comprises: sampling an input voltage signal; comparing the sampled input voltage signal with an output signal of a feedback digital-to-analog converter (DAC) 40; determining in a

  20. On error estimation in the fourier modal method for diffractive gratings

    NARCIS (Netherlands)

    Hlod, A.; Maubach, J.M.L.

    2010-01-01

    The Fourier Modal Method (FMM, also called the Rigorous Coupled Wave Analysis, RCWA) is a numerical discretization method which is often used to calculate a scattered field from a periodic diffraction grating. For 1D periodic gratings in FMM the electromagnetic field is presented by a truncated

  1. Intra- and interobserver error of the Greulich-Pyle method as used on a Danish forensic sample

    DEFF Research Database (Denmark)

    Lynnerup, N; Belard, E; Buch-Olsen, K

    2008-01-01

    that atlas-based techniques are obsolete and ought to be replaced by other methods. Specifically, the GPA test sample consisted of American "white" children "above average in economic and educational status", leading to the question as to how comparable subjects being scored by the GPA method today...... and intraoral dental radiographs. Different methods are used depending on the maturity of the individual examined; and (3) a carpal X-ray examination, using the Greulich and Pyle Atlas (GPA) method. We present the results of intra- and interobserver tests of carpal X-rays in blind trials, and a comparison...... of the age estimations by carpal X-rays and odontological age estimation. We retrieved 159 cases from the years 2000-2002 (inclusive). The intra- and interobserver errors are overall small. We found full agreement in 126/159 cases, and this was between experienced users and novices. Overall, the mean...

  2. Comparison of Sample and Detection Quantification Methods for Salmonella Enterica from Produce

    Science.gov (United States)

    Hummerick, M. P.; Khodadad, C.; Richards, J. T.; Dixit, A.; Spencer, L. M.; Larson, B.; Parrish, C., II; Birmele, M.; Wheeler, Raymond

    2014-01-01

    The purpose of this study was to identify and optimize fast and reliable sampling and detection methods for the identification of pathogens that may be present on produce grown in small vegetable production units on the International Space Station (ISS), i.e., in a field setting. Microbiological testing is necessary before astronauts are allowed to consume produce grown on the ISS, where two vegetable production units, Lada and Veggie, are currently deployed.

  3. Galilean-invariant preconditioned central-moment lattice Boltzmann method without cubic velocity errors for efficient steady flow simulations

    Science.gov (United States)

    Hajabdollahi, Farzaneh; Premnath, Kannan N.

    2018-05-01

    Lattice Boltzmann (LB) models used for the computation of fluid flows represented by the Navier-Stokes (NS) equations on standard lattices can lead to non-Galilean-invariant (GI) viscous stress involving cubic velocity errors. This arises from the dependence of their third-order diagonal moments on the first-order moments for standard lattices, and strategies have recently been introduced to restore Galilean invariance without such errors using a modified collision operator involving corrections to either the relaxation times or the moment equilibria. Convergence acceleration in the simulation of steady flows can be achieved by solving the preconditioned NS equations, which contain a preconditioning parameter that can be used to tune the effective sound speed, and thereby alleviating the numerical stiffness. In the present paper, we present a GI formulation of the preconditioned cascaded central-moment LB method used to solve the preconditioned NS equations, which is free of cubic velocity errors on a standard lattice, for steady flows. A Chapman-Enskog analysis reveals the structure of the spurious non-GI defect terms and it is demonstrated that the anisotropy of the resulting viscous stress is dependent on the preconditioning parameter, in addition to the fluid velocity. It is shown that partial correction to eliminate the cubic velocity defects is achieved by scaling the cubic velocity terms in the off-diagonal third-order moment equilibria with the square of the preconditioning parameter. Furthermore, we develop additional corrections based on the extended moment equilibria involving gradient terms with coefficients dependent locally on the fluid velocity and the preconditioning parameter. Such parameter dependent corrections eliminate the remaining truncation errors arising from the degeneracy of the diagonal third-order moments and fully restore Galilean invariance without cubic defects for the preconditioned LB scheme on a standard lattice. Several

  4. Methods of producing free-standing semiconductors using sacrificial buffer layers and recyclable substrates

    Science.gov (United States)

    Ptak, Aaron Joseph; Lin, Yong; Norman, Andrew; Alberi, Kirstin

    2015-05-26

    A method of producing semiconductor materials, and devices that incorporate the semiconductor materials, is provided. In particular, a method is provided of producing a semiconductor material, such as a III-V semiconductor, on a spinel substrate using a sacrificial buffer layer, and devices such as photovoltaic cells that incorporate the semiconductor materials. The sacrificial buffer material and semiconductor materials may be deposited using lattice-matching epitaxy or coincident site lattice-matching epitaxy, resulting in a close degree of lattice matching between the substrate material and deposited material for a wide variety of material compositions. The sacrificial buffer layer may be dissolved using an epitaxial liftoff technique in order to separate the semiconductor device from the spinel substrate, and the spinel substrate may be reused in the subsequent fabrication of other semiconductor devices. The low-defect density semiconductor materials produced using this method result in the enhanced performance of the semiconductor devices that incorporate the semiconductor materials.

  5. A highly accurate finite-difference method with minimum dispersion error for solving the Helmholtz equation

    KAUST Repository

    Wu, Zedong; Alkhalifah, Tariq Ali

    2018-01-01

    Numerical simulation of the acoustic wave equation in either isotropic or anisotropic media is crucial to seismic modeling, imaging and inversion. Actually, it represents the core computation cost of these highly advanced seismic processing methods

  6. Residual and Backward Error Bounds in Minimum Residual Krylov Subspace Methods

    Czech Academy of Sciences Publication Activity Database

    Paige, C. C.; Strakoš, Zdeněk

    2002-01-01

    Vol. 23, No. 6 (2002), pp. 1899-1924 ISSN 1064-8275 R&D Projects: GA AV ČR IAA1030103 Institutional research plan: AV0Z1030915 Keywords: linear equations * eigenproblem * large sparse matrices * iterative solutions * Krylov subspace methods * Arnoldi method * GMRES * modified Gram-Schmidt * least squares * total least squares * singular values Subject RIV: BA - General Mathematics Impact factor: 1.291, year: 2002

  7. Error analysis of Newmark's method for the second order equation with inhomogeneous term

    International Nuclear Information System (INIS)

    Chiba, F.; Kako, T.

    2000-01-01

    For the second order time evolution equation with a general dissipation term, we introduce a recurrence relation of Newmark's method. Deriving an energy inequality from this relation, we consider the stability and the convergence criteria of Newmark's method. We treat the dissipation term under the assumption that the damping-coefficient matrix is constant in time and non-negative. However, the assumptions can be relaxed to allow the dissipation and rigidity matrices to be arbitrary symmetric matrices. (author)
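    For readers unfamiliar with the scheme, a minimal sketch of Newmark time stepping for M u'' + C u' + K u = f(t) with constant matrices follows. The choice β = 1/4, γ = 1/2 is the standard average-acceleration variant; this is a generic textbook formulation, not the specific recurrence relation analyzed in the paper.

```python
import numpy as np

def newmark(M, C, K, f, u0, v0, dt, n_steps, beta=0.25, gamma=0.5):
    """Newmark integration of M u'' + C u' + K u = f(t).

    beta=1/4, gamma=1/2 gives the unconditionally stable
    average-acceleration scheme (second-order accurate).
    """
    u, v = u0.copy(), v0.copy()
    a = np.linalg.solve(M, f(0.0) - C @ v - K @ u)   # initial acceleration
    # Effective stiffness is constant because M, C, K are constant in time
    K_eff = M / (beta * dt**2) + (gamma / (beta * dt)) * C + K
    history = [u.copy()]
    for k in range(1, n_steps + 1):
        t = k * dt
        rhs = (f(t)
               + M @ (u / (beta * dt**2) + v / (beta * dt)
                      + (1 / (2 * beta) - 1) * a)
               + C @ ((gamma / (beta * dt)) * u + (gamma / beta - 1) * v
                      + dt * (gamma / (2 * beta) - 1) * a))
        u_new = np.linalg.solve(K_eff, rhs)
        a_new = ((u_new - u) / (beta * dt**2) - v / (beta * dt)
                 - (1 / (2 * beta) - 1) * a)
        v_new = v + dt * ((1 - gamma) * a + gamma * a_new)
        u, v, a = u_new, v_new, a_new
        history.append(u.copy())
    return np.array(history)
```

    With β = 1/4 and γ = 1/2 the scheme is unconditionally stable for the constant symmetric matrices assumed here, which matches the setting of the abstract.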

  8. Evaluation and Comparison of the Processing Methods of Airborne Gravimetry Concerning the Errors Effects on Downward Continuation Results: Case Studies in Louisiana (USA) and the Tibetan Plateau (China).

    Science.gov (United States)

    Zhao, Qilong; Strykowski, Gabriel; Li, Jiancheng; Pan, Xiong; Xu, Xinyu

    2017-05-25

    Gravity data gaps in mountainous areas are nowadays often filled in with data from airborne gravity surveys. Such surveys suffer from errors caused by the airborne gravimeter sensors and by rough flight conditions, and these errors cannot be completely eliminated. The precision of the gravity disturbances generated by airborne gravimetry is around 3-5 mgal. A major obstacle in using airborne gravimetry is the error introduced by downward continuation. In order to improve the results, external high-accuracy gravity information, e.g., from surface data, can be used for high-frequency correction, while satellite information can be applied for low-frequency correction. Surface data may be used to reduce the systematic errors, while regularization methods can reduce the random errors in downward continuation. Airborne gravity surveys are sometimes conducted in mountainous areas, and the most extreme area of the world for this type of survey is the Tibetan Plateau. Since no high-accuracy surface gravity data are available for this area, the above error minimization method involving external gravity data cannot be used. We propose a semi-parametric downward continuation method in combination with regularization to suppress the systematic and random error effects in the Tibetan Plateau, i.e., without the use of external high-accuracy gravity data. We use a Louisiana airborne gravity dataset from the USA National Oceanic and Atmospheric Administration (NOAA) to demonstrate that the new method works effectively. Furthermore, for the Tibetan Plateau we show that the numerical experiment is also successfully conducted using synthetic Earth Gravitational Model 2008 (EGM08)-derived gravity data contaminated with synthetic errors. The estimated systematic errors generated by the method are close to the simulated values. In addition, we study the relationship between the downward continuation altitudes and the error effect. The

  9. Evaluation and Comparison of the Processing Methods of Airborne Gravimetry Concerning the Errors Effects on Downward Continuation Results: Case Studies in Louisiana (USA) and the Tibetan Plateau (China)

    Science.gov (United States)

    Zhao, Q.

    2017-12-01

    Gravity data gaps in mountainous areas are nowadays often filled in with data from airborne gravity surveys. Such surveys suffer from errors caused by the airborne gravimeter sensors and by rough flight conditions, and these errors cannot be completely eliminated. The precision of the gravity disturbances generated by airborne gravimetry is around 3-5 mgal. A major obstacle in using airborne gravimetry is the error introduced by downward continuation. In order to improve the results, external high-accuracy gravity information, e.g., from surface data, can be used for high-frequency correction, while satellite information can be applied for low-frequency correction. Surface data may be used to reduce the systematic errors, while regularization methods can reduce the random errors in downward continuation. Airborne gravity surveys are sometimes conducted in mountainous areas, and the most extreme area of the world for this type of survey is the Tibetan Plateau. Since no high-accuracy surface gravity data are available for this area, the above error minimization method involving external gravity data cannot be used. We propose a semi-parametric downward continuation method in combination with regularization to suppress the systematic and random error effects in the Tibetan Plateau, i.e., without the use of external high-accuracy gravity data. We use a Louisiana airborne gravity dataset from the USA National Oceanic and Atmospheric Administration (NOAA) to demonstrate that the new method works effectively. Furthermore, for the Tibetan Plateau we show that the numerical experiment is also successfully conducted using synthetic Earth Gravitational Model 2008 (EGM08)-derived gravity data contaminated with synthetic errors. The estimated systematic errors generated by the method are close to the simulated values. In addition, we study the relationship between the downward continuation altitudes and the error effect. The

  10. A Trial-and-Error Method with Autonomous Vehicle-to-Infrastructure Traffic Counts for Cordon-Based Congestion Pricing

    Directory of Open Access Journals (Sweden)

    Zhiyuan Liu

    2017-01-01

    Full Text Available This study proposes a practical trial-and-error method to solve the optimal toll design problem of cordon-based pricing, where only the traffic counts autonomously collected on the entry links of the pricing cordon are needed. With the fast development and adoption of vehicle-to-infrastructure (V2I) facilities, it is very convenient to collect these data autonomously. Two practical properties of cordon-based pricing are further considered in this article: the toll charge on each entry of a pricing cordon is identical, and the total inbound flow to a cordon should be restricted in order to maintain the traffic conditions within the cordon area. The stochastic user equilibrium (SUE) with asymmetric link travel time functions is then used to assess each feasible toll pattern. Based on a variational inequality (VI) model for the optimal toll pattern, this study proposes a theoretically convergent trial-and-error method for the addressed problem, where only traffic count data are needed. Finally, the proposed method is verified on a numerical network example.
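    The essence of a trial-and-error scheme of this kind can be sketched in a few lines: observe the inbound counts under the current toll, then nudge the uniform cordon toll up or down according to the gap from the inflow restriction. The demand curve, gain, and target below are toy assumptions; the paper's actual method updates tolls within an SUE/VI framework with step sizes chosen to guarantee convergence.

```python
import numpy as np

# Toy trial-and-error toll adjustment (assumed demand curve, not the
# paper's SUE/VI formulation): raise the toll while the observed inbound
# flow exceeds its restriction, lower it otherwise.
def inbound_flow(toll, demand=10_000, vot=2.0):
    """Hypothetical aggregate response: flow decays with toll/value-of-time."""
    return demand * np.exp(-toll / vot)

target = 6_000            # allowed total inbound flow (veh/h), assumed
toll, gain = 0.0, 1e-4    # initial toll and small fixed adjustment gain
for _ in range(200):
    counts = inbound_flow(toll)                      # "V2I" entry counts
    toll = max(0.0, toll + gain * (counts - target)) # adjust toward target
print(round(toll, 3), round(float(inbound_flow(toll)), 1))
```

    With the small fixed gain the iteration contracts toward the toll at which the inbound flow just meets the restriction; the paper instead proves convergence for suitably chosen (diminishing) step sizes.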

  11. Development of a new cause classification method considering plant ageing and human errors for adverse events which occurred in nuclear power plants and some results of its application

    International Nuclear Information System (INIS)

    Miyazaki, Takamasa

    2007-01-01

    The adverse events which occurred in nuclear power plants are analyzed to prevent similar events, and in the analysis of each event, the cause of the event is classified by a cause classification method. This paper presents a new cause classification method which is improved in several points, as follows: (1) all causes are systematically classified into three major categories, namely machine system, operation system, and plant-outside causes; (2) the causes of the operation system are classified into several management errors normally performed in a nuclear power plant; (3) the content of ageing is defined in detail for further analysis; (4) human errors are divided and defined by the error stage; (5) human errors can be related to background factors; and so on. This new method is applied to the adverse events which occurred in domestic and overseas nuclear power plants in 2005. From these results, it is clarified that operation system errors account for about 60% of all causes, of which approximately 60% are maintenance errors and about 40% are workers' human errors, and that the prevention of maintenance errors, especially workers' human errors, is crucial. (author)

  12. Technical Note: Regularization performances with the error consistency method in the case of retrieved atmospheric profiles

    Directory of Open Access Journals (Sweden)

    S. Ceccherini

    2007-01-01

    Full Text Available The retrieval of concentration vertical profiles of atmospheric constituents from spectroscopic measurements is often an ill-conditioned problem and regularization methods are frequently used to improve its stability. Recently a new method, that provides a good compromise between precision and vertical resolution, was proposed to determine analytically the value of the regularization parameter. This method is applied for the first time to real measurements with its implementation in the operational retrieval code of the satellite limb-emission measurements of the MIPAS instrument and its performances are quantitatively analyzed. The adopted regularization improves the stability of the retrieval providing smooth profiles without major degradation of the vertical resolution. In the analyzed measurements the retrieval procedure provides a vertical resolution that, in the troposphere and low stratosphere, is smaller than the vertical field of view of the instrument.
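    As background for how such retrievals are regularized, here is a generic Tikhonov sketch: the profile is obtained by minimizing a data misfit plus λ times a smoothness penalty. The kernel, grid, noise level, and fixed λ are all illustrative assumptions; the paper's contribution is precisely an analytic way to choose the regularization parameter (the error-consistency method), which is not reproduced here.

```python
import numpy as np

# Generic Tikhonov-regularized retrieval sketch (synthetic problem):
# minimize ||y - K x||^2 + lam * ||L x||^2 with a first-difference L.
rng = np.random.default_rng(5)
n = 60
z = np.linspace(0, 60, n)                     # altitude grid [km], assumed
x_true = np.exp(-((z - 25) / 8.0) ** 2)       # synthetic profile
K = np.exp(-np.abs(z[:, None] - z[None, :]) / 5.0)  # smooth forward kernel
y = K @ x_true + rng.normal(0, 0.05, n)       # noisy measurements

L = np.diff(np.eye(n), axis=0)                # first-difference operator
lam = 1.0                                     # regularization parameter (fixed here)
x_hat = np.linalg.solve(K.T @ K + lam * L.T @ L, K.T @ y)
print(np.max(np.abs(x_hat - x_true)))         # retrieval error
```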

  13. A Multipixel Time Series Analysis Method Accounting for Ground Motion, Atmospheric Noise, and Orbital Errors

    Science.gov (United States)

    Jolivet, R.; Simons, M.

    2018-02-01

    Interferometric synthetic aperture radar time series methods aim to reconstruct time-dependent ground displacements over large areas from sets of interferograms in order to detect transient, periodic, or small-amplitude deformation. Because of computational limitations, most existing methods consider each pixel independently, ignoring important spatial covariances between observations. We describe a framework to reconstruct time series of ground deformation while considering all pixels simultaneously, allowing us to account for spatial covariances, imprecise orbits, and residual atmospheric perturbations. We describe spatial covariances by an exponential decay function dependent on pixel-to-pixel distance. We approximate the impact of imprecise orbit information and residual long-wavelength atmosphere as a low-order polynomial function. Tests on synthetic data illustrate the importance of incorporating full covariances between pixels in order to avoid biased parameter reconstruction. An example of application to the northern Chilean subduction zone highlights the potential of this method.
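    To illustrate the covariance treatment, the sketch below builds the exponential pixel-to-pixel covariance C_ij = σ² exp(−d_ij/λ) and uses it in a generalized least-squares fit of a mean displacement plus a low-order ramp. The geometry, noise parameters, and design matrix are invented for the example and are not the authors' implementation.

```python
import numpy as np

# Toy illustration: exponential spatial covariance between pixels used in a
# generalized least-squares (GLS) fit, so correlated atmospheric noise is
# weighted properly instead of being treated pixel by pixel.
rng = np.random.default_rng(0)
n = 200                                    # number of pixels, assumed
xy = rng.uniform(0, 50e3, size=(n, 2))     # pixel coordinates [m]
sigma, lam = 5e-3, 10e3                    # noise std [m], e-folding length [m]
d_ij = np.linalg.norm(xy[:, None] - xy[None, :], axis=-1)
C = sigma**2 * np.exp(-d_ij / lam)         # full pixel-to-pixel covariance

# Design matrix: mean deformation + low-order polynomial ramp (orbit/atmosphere)
G = np.column_stack([np.ones(n), xy[:, 0], xy[:, 1]])
m_true = np.array([0.02, 1e-7, -5e-8])
d = G @ m_true + rng.multivariate_normal(np.zeros(n), C)

# GLS solve: m = (G^T C^-1 G)^-1 G^T C^-1 d
Ci = np.linalg.inv(C)
m_hat = np.linalg.solve(G.T @ Ci @ G, G.T @ Ci @ d)
print(m_hat)
```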

  14. The method to evaluate the position error in graphic positioning technology

    Institute of Scientific and Technical Information of China (English)

    Huiqing Lu(卢慧卿); Baoguang Wang(王宝光); Lishuang Liu(刘力双); Yabiao Li(李亚标)

    2004-01-01

    In the measurement of automobile body-in-white, positioning two-dimensional (2D) visual sensors with high precision has been widely studied. In this paper a graphic positioning method is proposed, in which a hollow tetrahedron is used as a positioning target to replace all the edges of a standard automobile body. A 2D visual sensor can be positioned by adjusting two triangles until they are superposed on the computer screen, so it is very important to evaluate the superposition precision of the two triangles. Several methods are discussed, and the least-squares method is finally adopted; it makes the adjustment easier and more intuitive while maintaining high precision.
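    Evaluating the superposition of two triangles by least squares amounts to a small rigid-registration problem. The sketch below uses the standard 2D Kabsch/Procrustes solution and reports the RMS residual as the superposition error; this is one natural reading of the least-squares evaluation, not necessarily the exact formulation used in the paper.

```python
import numpy as np

def superposition_error(tri_a, tri_b):
    """Least-squares superposition of two triangles (2D Kabsch/Procrustes).

    Finds the rotation R and translation t minimizing
    sum_i ||R a_i + t - b_i||^2 and returns the RMS residual, which serves
    as the superposition-precision figure.
    """
    a = tri_a - tri_a.mean(axis=0)
    b = tri_b - tri_b.mean(axis=0)
    H = a.T @ b
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflection
    R = Vt.T @ D @ U.T
    t = tri_b.mean(axis=0) - R @ tri_a.mean(axis=0)
    residual = tri_a @ R.T + t - tri_b
    return np.sqrt((residual**2).sum(axis=1).mean())

# Hypothetical usage: a rotated, shifted, slightly perturbed copy
tri = np.array([[0.0, 0.0], [4.0, 0.0], [1.0, 3.0]])
th = 0.3
R0 = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
moved = tri @ R0.T + np.array([0.5, -0.2])
moved[0] += np.array([0.02, -0.01])     # small superposition mismatch
print(superposition_error(tri, moved))  # small RMS residual
```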

  15. An error estimate for Tremolieres method for the discretization of parabolic variational inequalities

    International Nuclear Information System (INIS)

    Uko, L.U.

    1990-02-01

    We study a scheme for the time-discretization of parabolic variational inequalities that is often easier to use than the classical method of Rothe. We show that if the data are compatible in a certain sense, then this scheme is of order ≥1/2. (author). 10 refs

  16. A New Method to Produce Ni-Cr Ferroalloy Used for Stainless Steel Production

    Science.gov (United States)

    Chen, Pei-Xian; Chu, Shao-Jun; Zhang, Guo-Hua

    2016-08-01

    A new electrosilicothermic method is proposed in the present paper to produce Ni-Cr ferroalloy, which can be used for the production of 300 series stainless steel. In this new process, a Ni-Si ferroalloy is first produced as the intermediate alloy, and then desiliconization of the Ni-Si ferroalloy melt with chromium concentrate is carried out to generate the Ni-Cr ferroalloy. The silicon content of the Ni-Si ferroalloy produced in the submerged arc furnace should be more than 15 mass% (for the purpose of dephosphorization), in order to ensure that the phosphorus content of the subsequently produced Ni-Cr ferroalloy is less than 0.03 mass%. A high utilization ratio of Si and a high recovery ratio of Cr can be obtained after the desiliconization reaction between the Ni-Si ferroalloy and chromium concentrate in the electric arc furnace (EAF)-shaking ladle (SL) process.

  17. Irrigation water sources and irrigation application methods used by U.S. plant nursery producers

    Science.gov (United States)

    Paudel, Krishna P.; Pandit, Mahesh; Hinson, Roger

    2016-02-01

    We examine irrigation water sources and irrigation methods used by U.S. nursery plant producers using nested multinomial fractional regression models. We use data collected from the National Nursery Survey (2009) to identify effects of different firm and sales characteristics on the fraction of water sources and irrigation methods used. We find that region, sales of plant types, farm income, and farm age have significant roles in which water source is used. Given the fraction of alternative water sources used, the results indicated that computer use, annual sales, region, and the number of IPM practices adopted play an important role in the choice of irrigation method. Based on the findings from this study, the government could provide subsidies to nursery producers in water-deficit regions to adopt the drip irrigation method, use recycled water, or a combination of both. Additionally, encouraging farmers to adopt IPM may enhance the use of drip irrigation and recycled water in nursery plant production.

  18. Porous alumina scaffold produced by sol-gel combined polymeric sponge method

    Science.gov (United States)

    Hasmaliza, M.; Fazliah, M. N.; Shafinaz, R. J.

    2012-09-01

    Sol-gel is a novel method used to produce high-purity alumina on the nanometric scale. In this study, a three-dimensional porous alumina scaffold was produced using the sol-gel polymeric sponge method. Briefly, alumina sol-gel was prepared by evaporation, and polymeric sponges cut to designated sizes were immersed in the sol-gel, followed by sintering at 1250 and 1550°C. In order to study cell interaction, the porous alumina scaffold was sterilized using an autoclave prior to seeding Human Mesenchymal Stem Cells (HMSCs) on the scaffold, and cell proliferation was assessed by the alamarBlue® assay. SEM results showed that during the 21-day period, HMSCs were able to attach to the scaffold surface and the interconnecting pores while maintaining proliferation. These findings suggest the potential use of the porous alumina produced as a scaffold for implantation procedures.

  19. A novel method of producing a microcrystalline beta-sitosterol suspension in oil

    DEFF Research Database (Denmark)

    Christiansen, Leena I; Rantanen, Jukka T; von Bonsdorff, Anna K

    2002-01-01

    This paper describes a novel method of producing a microcrystalline oral suspension containing beta-sitosterol in oil for the treatment of hypercholesterolaemia. beta-Sitosterol pseudopolymorphs with different water contents were crystallized from acetone and acetone-water solutions. Structural...

  20. Optically transparent, superhydrophobic, biocompatible thin film coatings and methods for producing same

    Energy Technology Data Exchange (ETDEWEB)

    Armstrong, Beth L.; Aytug, Tolga; Paranthaman, Mariappan Parans; Simpson, John T.; Hillesheim, Daniel A.; Trammell, Neil E.

    2017-09-05

    An optically transparent, hydrophobic coating exhibiting an average contact angle of at least 100 degrees with a drop of water is disclosed. The coating can be produced using low-cost, environmentally friendly components. Methods of preparing and using the optically transparent, hydrophobic coating are also provided.

  1. Antimicrobial activity evaluation and comparison of methods of susceptibility for Klebsiella pneumoniae carbapenemase (KPC)-producing Enterobacter spp. isolates.

    Science.gov (United States)

    Rechenchoski, Daniele Zendrini; Dambrozio, Angélica Marim Lopes; Vivan, Ana Carolina Polano; Schuroff, Paulo Alfonso; Burgos, Tatiane das Neves; Pelisson, Marsileni; Perugini, Marcia Regina Eches; Vespero, Eliana Carolina

    The production of KPC (Klebsiella pneumoniae carbapenemase) is the major mechanism of resistance to carbapenem agents in enterobacteria. In this context, forty KPC-producing Enterobacter spp. clinical isolates were studied. The activity of the antimicrobial agents polymyxin B, tigecycline, ertapenem, imipenem and meropenem was evaluated, and a comparison was performed of the methodologies used to determine susceptibility: broth microdilution, Etest® (bioMérieux), the Vitek 2® automated system (bioMérieux) and disc diffusion. The minimum inhibitory concentration (MIC) was calculated for each antimicrobial, and polymyxin B showed the lowest concentrations by broth microdilution. Errors were also calculated among the techniques; tigecycline and ertapenem were the antibiotics with the largest and the smallest number of discrepancies, respectively. Moreover, the Vitek 2® automated system was the method most similar to broth microdilution. Therefore, it is important to evaluate the performance of new methods in comparison to the reference method, broth microdilution. Copyright © 2017 Sociedade Brasileira de Microbiologia. Published by Elsevier Editora Ltda. All rights reserved.

  2. Developmental Competence and Epigenetic Profile of Porcine Embryos Produced by Two Different Cloning Methods.

    Science.gov (United States)

    Liu, Ying; Lucas-Hahn, Andrea; Petersen, Bjoern; Li, Rong; Hermann, Doris; Hassel, Petra; Ziegler, Maren; Larsen, Knud; Niemann, Heiner; Callesen, Henrik

    2017-06-01

    The "Dolly"-based cloning (classical nuclear transfer [CNT]) and handmade cloning (HMC) are methods that are nowadays routinely used for somatic cloning of large domestic species. Both cloning protocols share several similarities but differ with regard to the required in vitro culture, which in turn results in different time intervals until embryo transfer. It is not yet known whether the differences between cloned embryos from the two protocols are due to the cloning methods themselves or to the in vitro culture, as some studies have shown detrimental effects of in vitro culture on conventionally produced embryos. The goal of this study was to unravel putative differences between the two cloning methods with regard to developmental competence, the expression profile of a panel of developmentally important genes, and the epigenetic profile of porcine cloned embryos produced by either CNT or HMC, either with (D5 or D6) or without (D0) in vitro culture. Embryos cloned by these two methods had a similar morphological appearance on D0, but displayed different cleavage rates and different blastocyst quality, with HMC embryos showing higher blastocyst rates (HMC vs. CNT: 35% vs. 10%, p < 0.05). Epigenetic profiles of cloned embryos were similar on D0 but differed on D6. In conclusion, both the cloning method and the in vitro culture may affect porcine embryo development and epigenetic profile. The two cloning methods essentially produce embryos of similar quality on D0 and after 5 days of in vitro culture, but thereafter both histone acetylation and gene expression differ between the two types of cloned embryos.

  3. Diffraction grating strain gauge method: error analysis and its application for the residual stress measurement in thermal barrier coatings

    Science.gov (United States)

    Yin, Yuanjie; Fan, Bozhao; He, Wei; Dai, Xianglu; Guo, Baoqiao; Xie, Huimin

    2018-03-01

    Diffraction grating strain gauge (DGSG) is an optical strain measurement method. Based on this method, a six-spot diffraction grating strain gauge (S-DGSG) system has been developed, with the advantages of high and adjustable sensitivity, compact structure, and non-contact measurement. In this study, this system is applied to residual stress measurement in thermal barrier coatings (TBCs) in combination with the hole-drilling method. During the experiment, the specimen's location must be reset accurately before and after the hole-drilling; however, it was found that the rigid body displacements introduced by the resetting process could seriously influence the measurement accuracy. In order to understand and eliminate the effects of the rigid body displacements, such as the three-dimensional (3D) rotations and the out-of-plane displacement of the grating, the measurement error of this system is systematically analyzed, and an optimized method is proposed. Moreover, a numerical experiment and a verification tensile test were conducted, and the results successfully verify the applicability of this optimized method. Finally, using this optimized method, a residual stress measurement experiment was conducted, and the results show that this method can be applied to measure the residual stress in TBCs.

  4. Comparison of prosthetic models produced by traditional and additive manufacturing methods.

    Science.gov (United States)

    Park, Jin-Young; Kim, Hae-Young; Kim, Ji-Hwan; Kim, Jae-Hong; Kim, Woong-Chul

    2015-08-01

    The purpose of this study was to verify the clinical feasibility of additive manufacturing by comparing the accuracy of four different manufacturing methods for metal copings: the conventional lost-wax technique (CLWT); a subtractive method, wax blank milling (WBM); and two additive methods, multi jet modeling (MJM) and micro-stereolithography (Micro-SLA). Thirty study models were created using an acrylic model with the maxillary upper right canine, first premolar, and first molar teeth. Based on the scan files from a non-contact blue light scanner (Identica; Medit Co. Ltd., Seoul, Korea), thirty copings were produced using the WBM, MJM, and Micro-SLA methods, respectively, and another thirty frameworks were produced using the CLWT method. To measure the marginal and internal gap, the silicone replica method was adopted, and the silicone images obtained were evaluated using a digital microscope (KH-7700; Hirox, Tokyo, Japan) at 140X magnification. Analyses were performed using two-way analysis of variance (ANOVA) and the Tukey post hoc test (α=.05). The mean marginal gaps and internal gaps showed significant differences according to tooth type (P<.05) and manufacturing method (P<.05). The gaps of all four manufacturing methods were within a clinically allowable range, and, thus, the clinical use of additive manufacturing methods is acceptable as an alternative to the traditional lost-wax technique and subtractive manufacturing.

  5. Monitoring the Error Rate of Modern Methods of Construction Based on Wood

    Science.gov (United States)

    Švajlenka, Jozef; Kozlovská, Mária

    2017-06-01

    A range of new and innovative construction systems currently under development represent modern methods of construction (MMC), which have the ambition to improve the performance parameters of buildings throughout their life cycle. Regarding the implementation of modern methods of construction in Slovakia, assembled buildings based on wood seem to be the most preferred construction system. The study presented in this paper surveyed already built and occupied wood-based family houses. The residents' attitudes to this type of building are examined in the context of the declared design and qualitative parameters of efficiency and sustainability. The methodology of the research study is based on a socio-economic survey carried out during the years 2015-2017 within the Slovak Republic. Due to the large extent of data collected through the questionnaire, only selected parts of the survey results are evaluated and discussed in the paper. The paper is aimed at evaluating the quality of the buildings as expressed by users of existing wooden buildings. The research indicates that some defects occur and can be eliminated in the subsequent production process, so production quality should be improved in future development.

  6. Method of surface error visualization using laser 3D projection technology

    Science.gov (United States)

    Guo, Lili; Li, Lijuan; Lin, Xuezhu

    2017-10-01

    In the process of manufacturing large components, for example in the aerospace, automobile and shipping industries, important molds or stamped metal plates require precise forming of the surface, which usually needs to be verified; if necessary, the surface is corrected and reprocessed. In order to make the correction of the machined surface more convenient, this paper proposes a method based on a laser 3D projection system. This method uses contour lines, in the manner of terrain contours, to show the deviation between the measured data and the theoretical mathematical model (CAD) directly on the measured surface. First, measure the machined surface to get the point cloud data and form a triangular mesh; second, through coordinate transformation, unify the point cloud data to the theoretical model and calculate the three-dimensional deviation, using color deviation bands to denote the sign (positive or negative) and size of the deviation; then, draw three-dimensional contour lines representing each deviation band, creating the projection files; finally, import the projection files into the laser projector, project the contour lines at 1:1 scale onto the machined surface in the form of laser beams, and compare the full-color 3D deviation map with the projected graph in order to locate and make quantitative corrections that meet the machining precision requirements. This displays the trend of the machined surface deviation clearly.
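    The deviation-computation step can be illustrated compactly: given a measured point cloud and a sampled CAD surface with outward normals, the signed deviation of each measured point is the projection of its offset onto the local normal, which is then binned into color/contour bands. The helper functions and data layout below are hypothetical illustrations, not the paper's code.

```python
import numpy as np
from scipy.spatial import cKDTree

def signed_deviations(measured, cad_points, cad_normals):
    """Signed point-to-surface deviation via nearest CAD sample and its normal."""
    tree = cKDTree(cad_points)
    _, idx = tree.query(measured)                 # nearest CAD sample per point
    diff = measured - cad_points[idx]
    return np.einsum('ij,ij->i', diff, cad_normals[idx])  # sign from normal

def deviation_bands(dev, band_width=0.1):
    """Assign each point to a deviation band, e.g. [-0.1, 0), [0, 0.1), ..."""
    return np.floor(dev / band_width).astype(int)

# Tiny demo with hypothetical data: a point 0.12 above a flat CAD patch
measured = np.array([[0.0, 0.0, 0.12]])
cad_pts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
cad_nrm = np.array([[0.0, 0.0, 1.0], [0.0, 0.0, 1.0]])
dev = signed_deviations(measured, cad_pts, cad_nrm)
print(dev, deviation_bands(dev))                  # [0.12] [1]
```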

  7. Enrichment of the hydrogen-producing microbial community from marine intertidal sludge by different pretreatment methods

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Hongyan [Institute of Oceanology, Chinese Academy of Sciences, 7 Nanhai Road, Shinan District, Qingdao 266071, Shandong (China); College of Marine Science and Engineering, University of Science and Technology, Tianjin 300457 (China); Graduate School, Chinese Academy of Sciences, Beijing 100039 (China); Wang, Guangce [Institute of Oceanology, Chinese Academy of Sciences, 7 Nanhai Road, Shinan District, Qingdao 266071, Shandong (China); College of Marine Science and Engineering, University of Science and Technology, Tianjin 300457 (China); Zhu, Daling; Pan, Guanghua [College of Marine Science and Engineering, University of Science and Technology, Tianjin 300457 (China)

    2009-12-15

    To determine the effects of pretreatment on hydrogen production and the hydrogen-producing microbial community, we treated the sludge from the intertidal zone of a bathing beach in Tianjin with four different pretreatment methods: acid treatment, heat-shock, base treatment, and freezing and thawing. The results showed that acid pretreatment significantly promoted hydrogen production by the sludge and provided the highest efficiency of hydrogen production among the four methods. The efficiency of hydrogen production of the acid-pretreated sludge was 0.86 ± 0.07 mol H2/mol glucose (mean ± S.E.), whereas that of the sludge treated with heat-shock, freezing and thawing, the base method, and the control was 0.41 ± 0.03 mol H2/mol glucose, 0.17 ± 0.01 mol H2/mol glucose, 0.11 ± 0.01 mol H2/mol glucose and 0.20 ± 0.04 mol H2/mol glucose, respectively. The result of denaturing gradient gel electrophoresis (DGGE) showed that the pretreatment methods altered the composition of the microbial community that accounts for hydrogen production. Acid and heat pretreatments were favorable for enriching the dominant hydrogen-producing bacteria, i.e. Clostridium sp., Enterococcus sp. and Bacillus sp. However, besides hydrogen-producing bacteria, much non-hydrogen-producing Lactobacillus sp. was also found in the sludge pretreated with the base and freezing-and-thawing methods. Therefore, based on our results, we concluded that, among the four pretreatment methods using acid, heat-shock, base, or freezing and thawing, acid pretreatment was the most effective method for promoting hydrogen production by the microbial community. (author)

  8. The flame characteristics of biogas produced through the digester method with various starters

    Science.gov (United States)

    Ketut, Caturwati Ni; Agung, Sudrajat; Mekro, Permana; Heri, Haryanto; Bachtiar

    2018-01-01

    The increasing volume of waste, especially in urban areas, is a source of problems in realizing a comfortable and healthy environment. Good handling of garbage is needed so as to provide benefits for the whole community. Processing organic waste through the bio-digester method to produce biogas as an energy source is one such effort. This research was conducted to test the flame characteristics of biogas generated from organic waste processing through digesters with various starters, such as cow dung, goat manure, and leachate obtained from the landfill at Bagendung-Cilegon. The flame height and maximum flame temperature are measured at the same biogas pressure. The measurements showed that the flame produced by the bio-digester with the leachate starter has the lowest flame height compared to the other types of biogas, and the greatest flame height is given by biogas from the digester with cow dung as a starter. The maximum flame temperature of biogas produced with leachate as a starter reaches 1027 °C. This value is 7% lower than the maximum flame temperature of biogas produced with cow dung as a starter. Cow dung was observed to be the best starter compared to goat manure and leachate; leachate was not the best starter for producing biogas with the bio-digester method, but it worked.

  9. Fatigue resistance of engine-driven rotary nickel-titanium instruments produced by new manufacturing methods.

    Science.gov (United States)

    Gambarini, Gianluca; Grande, Nicola Maria; Plotino, Gianluca; Somma, Francesco; Garala, Manish; De Luca, Massimo; Testarelli, Luca

    2008-08-01

    The aim of the present study was to investigate whether cyclic fatigue resistance is increased for nickel-titanium instruments manufactured using new processes. This was evaluated by comparing instruments produced using the twisted method (TF; SybronEndo, Orange, CA) and those using the M-wire alloy (GTX; Dentsply Tulsa Dental Specialties, Tulsa, OK) with instruments produced by a traditional NiTi grinding process (K3, SybronEndo). Tests were performed with a specific cyclic fatigue device that evaluated cycles to failure of rotary instruments inside curved artificial canals. Results indicated that size 06-25 TF instruments showed a significant increase (p < 0.05) in the mean number of cycles to failure when compared with size 06-20 GT series X instruments. The new manufacturing process produced nickel-titanium rotary files (TF) significantly more resistant to fatigue than instruments produced with the traditional NiTi grinding process. Instruments produced with M-wire (GTX) were not found to be more resistant to fatigue than instruments produced with the traditional NiTi grinding process.

  10. Standard practice for construction of a stepped block and its use to estimate errors produced by speed-of-sound measurement systems for use on solids

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    1999-01-01

    1.1 This practice provides a means for evaluating both systematic and random errors for ultrasonic speed-of-sound measurement systems which are used for evaluating material characteristics associated with residual stress and which may also be used for nondestructive measurements of the dynamic elastic moduli of materials. Important features and construction details of a reference block crucial to these error evaluations are described. This practice can be used whenever the precision and bias of sound speed values are in question. 1.2 This standard does not purport to address all of the safety concerns, if any, associated with its use. It is the responsibility of the user of this standard to establish appropriate safety and health practices and determine the applicability of regulatory limitations prior to use.

  11. Characterization of graphene oxide produced by Hummers method and its supercapacitor applications

    Energy Technology Data Exchange (ETDEWEB)

    Akgül, Ö., E-mail: omeraakgul@gmail.com; Tanrıverdi, A., E-mail: aa.kudret@hotmail.com [Kahramanmaras Sutcu Imam University, Dept. of Physics, 46100 K.Maras-Turkey (Turkey); Alver, Ü., E-mail: ualver@ktu.edu.tr [Karadeniz Technical University, Dept. of Metallurgical and Materials Eng. 61080, Trabzon-Turkey (Turkey)

    2016-03-25

    In this study, graphene oxide (GO) was produced using the Hummers method. The produced GO was investigated by X-ray diffraction (XRD), Fourier transform infrared spectroscopy (FTIR), UV-Vis spectroscopy, Raman spectroscopy and scanning electron microscopy (SEM). GO films on Ni foam were prepared by the doctor-blading technique. The electrochemical performance of the as-synthesized GO electrode was evaluated using cyclic voltammetry (CV) in 6 M KOH aqueous solution. The capacitance of the GO electrode was measured as 0.76 F/g.

  12. Characterization of graphene oxide produced by Hummers method and its supercapacitor applications

    International Nuclear Information System (INIS)

    Akgül, Ö.; Tanrıverdi, A.; Alver, Ü.

    2016-01-01

    In this study, graphene oxide (GO) was produced using the Hummers method. The produced GO was investigated by X-ray diffraction (XRD), Fourier transform infrared spectroscopy (FTIR), UV-Vis spectroscopy, Raman spectroscopy and scanning electron microscopy (SEM). GO films on Ni foam were prepared by the doctor-blading technique. The electrochemical performance of the as-synthesized GO electrode was evaluated using cyclic voltammetry (CV) in 6 M KOH aqueous solution. The capacitance of the GO electrode was measured as 0.76 F/g.

  13. Low oxygen biomass-derived pyrolysis oils and methods for producing the same

    Science.gov (United States)

    Marinangeli, Richard; Brandvold, Timothy A; Kocal, Joseph A

    2013-08-27

    Low oxygen biomass-derived pyrolysis oils and methods for producing them from carbonaceous biomass feedstock are provided. The carbonaceous biomass feedstock is pyrolyzed in the presence of a catalyst comprising base metal-based catalysts, noble metal-based catalysts, treated zeolitic catalysts, or combinations thereof to produce pyrolysis gases. During pyrolysis, the catalyst catalyzes a deoxygenation reaction whereby at least a portion of the oxygenated hydrocarbons in the pyrolysis gases are converted into hydrocarbons. The oxygen is removed as carbon oxides and water. A condensable portion (the vapors) of the pyrolysis gases is condensed to low oxygen biomass-derived pyrolysis oil.

  14. Methods for determining microcystins (peptide hepatotoxins) and microcystin-producing cyanobacteria.

    Science.gov (United States)

    Sangolkar, Lalita N; Maske, Sarika S; Chakrabarti, Tapan

    2006-11-01

    Episodes of cyanobacterial toxic blooms and fatalities to animals and humans due to cyanobacterial toxins (CBT) are known worldwide. The hepatotoxins and neurotoxins (cyanotoxins) produced by bloom-forming cyanobacteria have been the cause of human and animal health hazards and even death. Prevailing concentration of cell bound endotoxin, exotoxin and the toxin variants depend on developmental stages of the bloom and the cyanobacterial (CB) species involved. Toxic and non-toxic strains do not show any predictable morphological difference. The current instrumental, immunological and molecular methods applied for determining microcystins (peptide hepatotoxins) and microcystin-producing cyanobacteria are reviewed.

  15. Method for excluding salt and other soluble materials from produced water

    Science.gov (United States)

    Phelps, Tommy J [Knoxville, TN; Tsouris, Costas [Oak Ridge, TN; Palumbo, Anthony V [Oak Ridge, TN; Riestenberg, David E [Knoxville, TN; McCallum, Scott D [Knoxville, TN

    2009-08-04

    A method for reducing the salinity, as well as the hydrocarbon concentration of produced water to levels sufficient to meet surface water discharge standards. Pressure vessel and coflow injection technology developed at the Oak Ridge National Laboratory is used to mix produced water and a gas hydrate forming fluid to form a solid or semi-solid gas hydrate mixture. Salts and solids are excluded from the water that becomes a part of the hydrate cage. A three-step process of dissociation of the hydrate results in purified water suitable for irrigation.

  16. Error assessment of biogeochemical models by lower bound methods (NOMMA-1.0)

    Science.gov (United States)

    Sauerland, Volkmar; Löptien, Ulrike; Leonhard, Claudine; Oschlies, Andreas; Srivastav, Anand

    2018-03-01

    Biogeochemical models, capturing the major feedbacks of the pelagic ecosystem of the world ocean, are today often embedded into Earth system models which are increasingly used for decision making regarding climate policies. These models contain poorly constrained parameters (e.g., maximum phytoplankton growth rate), which are typically adjusted until the model shows reasonable behavior. Systematic approaches determine these parameters by minimizing the misfit between the model and observational data. In most common model approaches, however, the underlying functions mimicking the biogeochemical processes are nonlinear and non-convex. Thus, systematic optimization algorithms are likely to get trapped in local minima and might lead to non-optimal results. To judge the quality of an obtained parameter estimate, we propose determining a preferably large lower bound for the global optimum that is relatively easy to obtain and that will help to assess the quality of an optimum, generated by an optimization algorithm. Due to the unavoidable noise component in all observations, such a lower bound is typically larger than zero. We suggest deriving such lower bounds based on typical properties of biogeochemical models (e.g., a limited number of extremes and a bounded time derivative). We illustrate the applicability of the method with two real-world examples. The first example uses real-world observations of the Baltic Sea in a box model setup. The second example considers a three-dimensional coupled ocean circulation model in combination with satellite chlorophyll a.
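    A much simpler bound in the same spirit (not the NOMMA algorithm itself) comes from the noise floor: since every observation carries noise, the expected sum-of-squares misfit of even the true model is about n·σ², so any estimate of σ² yields a lower bound against which an optimizer's result can be judged. The sketch below estimates σ² from successive differences of a synthetic smooth-signal-plus-noise series.

```python
import numpy as np

# Noise-floor lower bound on the misfit (illustrative stand-in, assumed data):
# for y = smooth truth + iid noise, Var(y[i+1] - y[i]) ~ 2*sigma^2, so the
# successive-difference estimator gives sigma^2 and an SSE floor of n*sigma^2.
rng = np.random.default_rng(3)
t = np.linspace(0, 10, 500)
y = np.sin(t) + rng.normal(0, 0.2, t.size)      # smooth "truth" + noise

sigma2_hat = np.mean(np.diff(y) ** 2) / 2.0     # noise variance estimate
lower_bound = t.size * sigma2_hat               # expected SSE floor
print(f"estimated misfit lower bound (SSE): {lower_bound:.1f}")
```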

  17. Error assessment of biogeochemical models by lower bound methods (NOMMA-1.0

    Directory of Open Access Journals (Sweden)

    V. Sauerland

    2018-03-01

    Full Text Available Biogeochemical models, capturing the major feedbacks of the pelagic ecosystem of the world ocean, are today often embedded into Earth system models which are increasingly used for decision making regarding climate policies. These models contain poorly constrained parameters (e.g., maximum phytoplankton growth rate, which are typically adjusted until the model shows reasonable behavior. Systematic approaches determine these parameters by minimizing the misfit between the model and observational data. In most common model approaches, however, the underlying functions mimicking the biogeochemical processes are nonlinear and non-convex. Thus, systematic optimization algorithms are likely to get trapped in local minima and might lead to non-optimal results. To judge the quality of an obtained parameter estimate, we propose determining a preferably large lower bound for the global optimum that is relatively easy to obtain and that will help to assess the quality of an optimum, generated by an optimization algorithm. Due to the unavoidable noise component in all observations, such a lower bound is typically larger than zero. We suggest deriving such lower bounds based on typical properties of biogeochemical models (e.g., a limited number of extremes and a bounded time derivative. We illustrate the applicability of the method with two real-world examples. The first example uses real-world observations of the Baltic Sea in a box model setup. The second example considers a three-dimensional coupled ocean circulation model in combination with satellite chlorophyll a.

  18. Technical errors in complete mouth radiographic survey according to radiographic techniques and film holding methods

    International Nuclear Information System (INIS)

    Choi, Karp Sik; Byun, Chong Soo; Choi, Soon Chul

    1986-01-01

    The purpose of this study was to investigate the number and causes of retakes in 300 complete mouth radiographic surveys made by 75 senior dental students. According to radiographic technique and film-holding method, they were divided into 4 groups: Group I: bisecting-angle technique with the patient's fingers. Group II: bisecting-angle technique with the Rinn Snap-A-Ray device. Group III: bisecting-angle technique with the Rinn XCP instrument (short cone). Group IV: bisecting-angle technique with the Rinn XCP instrument (long cone). The most frequent cause of retakes, the tooth area most frequently retaken, and the average number of retakes per complete mouth survey were evaluated. The obtained results were as follows: Group I: incorrect film placement (47.8%), upper canine region, and 0.89. Group II: incorrect film placement (44.0%), upper canine region, and 1.12. Group III: incorrect film placement (79.2%), upper canine region, and 2.05. Group IV: incorrect film placement (67.7%), upper canine region, and 1.69.

  19. A rapid colorimetric screening method for vanillic acid and vanillin-producing bacterial strains.

    Science.gov (United States)

    Zamzuri, N A; Abd-Aziz, S; Rahim, R A; Phang, L Y; Alitheen, N B; Maeda, T

    2014-04-01

    To isolate a bacterial strain capable of biotransforming ferulic acid, a major component of lignin, into vanillin and vanillic acid using a rapid colorimetric screening method. For the production of vanillin, a natural aroma compound, we attempted to isolate a potential strain using a simple screening method based on the pH change resulting from the degradation of ferulic acid. The strain Pseudomonas sp. AZ10 UPM exhibited a significant result because of the colour change observed on the assay plate on day 1, with a high intensity of yellow colour. The biotransformation of ferulic acid into vanillic acid by the AZ10 strain provided a yield (Yp/s) and productivity (Pr) of 1.08 mg mg⁻¹ and 53.1 mg L⁻¹ h⁻¹, respectively. In fact, new investigations regarding lignin degradation revealed that the strain was not able to produce vanillin and vanillic acid directly from lignin; however, lignin partially digested by a mixed enzymatic treatment allowed the strain to produce 30.7 mg L⁻¹ and 1.94 mg L⁻¹ of vanillic acid and biovanillin, respectively. (i) The rapid colorimetric screening method allowed the isolation of a biovanillin producer using ferulic acid as the sole carbon source. (ii) Enzymatic treatment partially digested lignin, which could then be utilized by the strain to produce biovanillin and vanillic acid. To the best of our knowledge, this is the first study reporting the use of a rapid colorimetric screening method for bacterial strains producing vanillin and vanillic acid from ferulic acid. © 2013 The Society for Applied Microbiology.

  20. Method and apparatus for producing average magnetic well in a reversed field pinch

    International Nuclear Information System (INIS)

    Ohkawa, T.

    1983-01-01

    A magnetic well reversed field plasma pinch method and apparatus produces hot magnetically confined pinch plasma in a toroidal chamber having a major toroidal axis and a minor toroidal axis and a small aspect ratio, e.g. < 6. A pinch current channel within the plasma and at least one hyperbolic magnetic axis outside substantially all of the plasma form a region of average magnetic well surrounding the plasma current channel. The apparatus is operated so that reversal of the safety factor q and of the toroidal magnetic field takes place within the plasma. The well-producing plasma cross-section shape is produced by a conductive shell surrounding the shaped envelope and by coils. The shell is of copper or aluminium with non-conductive breaks, and is bonded to a thin aluminium envelope by silicone rubber. (author)

  1. Method and apparatus for producing a porosity log of a subsurface formation corrected for detector standoff

    International Nuclear Information System (INIS)

    Allen, L.S.; Mills, W.R.; Stromswold, D.C.

    1991-01-01

    This paper describes a method and apparatus for producing a porosity log of a subsurface formation corrected for detector standoff. It includes: lowering a logging tool having a neutron source and a neutron detector into the borehole; irradiating the subsurface formation with neutrons from the neutron source as the logging tool is traversed along the subsurface formation; recording die-away signals representing the die-away of nuclear radiation in the subsurface formation as detected by the neutron detector; producing intensity signals representing the variations in intensity of the die-away signals; and producing a model of the die-away of nuclear radiation in the subsurface formation having terms varying exponentially in response to borehole, formation, and background effects on the die-away of nuclear radiation as detected by the detector.

  2. A robust and rapid method of producing soluble, stable, and functional G-protein coupled receptors.

    Directory of Open Access Journals (Sweden)

    Karolina Corin

    Membrane proteins, particularly G-protein coupled receptors (GPCRs, are notoriously difficult to express. Using commercial E. coli cell-free systems with the detergent Brij-35, we could rapidly produce milligram quantities of 13 unique GPCRs. Immunoaffinity purification yielded receptors at >90% purity. Secondary structure analysis using circular dichroism indicated that the purified receptors were properly folded. Microscale thermophoresis, a novel label-free and surface-free detection technique that uses thermal gradients, showed that these receptors bound their ligands. The secondary structure and ligand-binding results from cell-free produced proteins were comparable to those expressed and purified from HEK293 cells. Our study demonstrates that cell-free protein production using commercially available kits and optimal detergents is a robust technology that can be used to produce sufficient GPCRs for biochemical, structural, and functional analyses. This robust and simple method may further stimulate others to study the structure and function of membrane proteins.

  3. A New Method to Detect and Correct the Critical Errors and Determine the Software-Reliability in Critical Software-System

    International Nuclear Information System (INIS)

    Krini, Ossmane; Börcsök, Josef

    2012-01-01

    In order to use electronic systems comprising software and hardware components in safety-related and high-safety-related applications, it is necessary to meet the marginal risk numbers required by standards and legislative provisions. Existing processes and mathematical models are used to verify the risk numbers. On the hardware side, various accepted mathematical models, processes, and methods exist to provide the required proof. To this day, however, there are no closed models or mathematical procedures known that allow for a dependable prediction of software reliability. This work presents a method that makes a prognosis of the residual number of critical errors in software. Conventional models lack this ability, and at present there are no methods that forecast critical errors. The new method shows that an estimate of the residual number of critical errors in software systems is possible by combining prediction models, the ratio of critical errors, and the total error number. Subsequently, the expected value function for critical errors at any point in time can be derived from the new solution method, provided the detection rate has been calculated using an appropriate estimation method. The presented method also makes it possible to estimate the critical failure rate. The approach is modelled on a real process and therefore describes the two essential processes: detection and correction.
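    A minimal Python sketch of the kind of estimate described here — scaling a reliability-growth prediction of residual errors by a ratio of critical errors — is given below. The Goel-Okumoto model, the test data, and the critical_ratio value are all illustrative assumptions, not the authors' actual procedure or data.

      import numpy as np
      from scipy.optimize import curve_fit

      # Goel-Okumoto mean-value function: expected cumulative errors found by time t.
      def goel_okumoto(t, a, b):
          return a * (1.0 - np.exp(-b * t))

      # Hypothetical test history: weeks of testing vs. cumulative errors detected.
      t = np.array([1, 2, 3, 4, 5, 6, 7, 8], dtype=float)
      found = np.array([12, 21, 28, 33, 37, 40, 42, 43], dtype=float)

      (a, b), _ = curve_fit(goel_okumoto, t, found, p0=(50.0, 0.3))

      critical_ratio = 0.15                            # assumed share of critical errors
      residual_total = a - goel_okumoto(t[-1], a, b)   # errors predicted to remain
      residual_critical = critical_ratio * residual_total

      print(f"predicted total errors: {a:.1f}")
      print(f"estimated residual critical errors: {residual_critical:.2f}")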

  4. Producing accurate wave propagation time histories using the global matrix method

    International Nuclear Information System (INIS)

    Obenchain, Matthew B; Cesnik, Carlos E S

    2013-01-01

    This paper presents a reliable method for producing accurate displacement time histories for wave propagation in laminated plates using the global matrix method. The existence of inward and outward propagating waves in the general solution is highlighted while examining the axisymmetric case of a circular actuator on an aluminum plate. Problems with previous attempts to isolate the outward wave for anisotropic laminates are shown. The updated method develops a correction signal that can be added to the original time history solution to cancel the inward wave and leave only the outward propagating wave. The paper demonstrates the effectiveness of the new method for circular and square actuators bonded to the surface of isotropic laminates, and these results are compared with exact solutions. Results for circular actuators on cross-ply laminates are also presented and compared with experimental results, showing the ability of the new method to successfully capture the displacement time histories for composite laminates. (paper)
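    The correction-signal idea — adding a signal that cancels the inward wave so that only the outward wave remains — can be illustrated with a toy 1D analogue in Python. This is only a sketch of the superposition argument, not the global matrix method itself; the pulse shape and constants are arbitrary.

      import numpy as np

      c, r = 1000.0, 0.5                    # wave speed (m/s), observation radius (m)
      t = np.linspace(0.0, 5e-3, 2000)      # time axis (s)
      pulse = lambda s: np.exp(-((s - 1e-3) / 2e-4) ** 2)   # Gaussian wave packet

      outward = pulse(t - r / c)            # outward-propagating component
      inward = 0.4 * pulse(t + r / c)       # spurious inward-propagating component
      total = outward + inward              # uncorrected time history

      correction = -inward                  # correction signal
      corrected = total + correction

      assert np.allclose(corrected, outward)   # only the outward wave is left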

  5. A Time- and Cost-Saving Method of Producing Rat Polyclonal Antibodies

    International Nuclear Information System (INIS)

    Wakayama, Tomohiko; Kato, Yukio; Utsumi, Rie; Tsuji, Akira; Iseki, Shoichi

    2006-01-01

    Producing antibodies usually takes more than three months. In the present study, we introduce a faster way of producing polyclonal antibodies, based on preparation of a recombinant oligopeptide as the antigen followed by immunization of rats. Using this method, we produced antisera against two mouse proteins: ERGIC-53 and c-Kit. An expression vector ligated with a pair of complementary synthetic oligodeoxyribonucleotides encoding the protein was introduced into bacteria, and the recombinant oligopeptide fused with the carrier protein glutathione-S-transferase was purified. Wistar rats were immunized by injecting the emulsified antigen subcutaneously into the hind footpads, followed by a booster injection after 2 weeks. One week after the booster, the sera were collected and examined for antibody titer by immunohistochemistry. Antisera with titers of up to 1600-fold were obtained for both antigens, and their specificity was confirmed by Western blotting. Anti-ERGIC-53 antisera recognized acinar cells in the sublingual gland, and anti-c-Kit antisera recognized spermatogenic and Leydig cells in the testis. These antisera were applicable to fluorescent double immunostaining with mouse monoclonal or rabbit polyclonal antibodies. Consequently, this method enabled us to produce specific rat polyclonal antisera suitable for immunohistochemistry in less than one month and at relatively low cost.

  6. Method of producing exfoliated graphite, flexible graphite, and nano-scaled graphene platelets

    Science.gov (United States)

    Zhamu, Aruna; Shi, Jinjun; Guo, Jiusheng; Jang, Bor Z.

    2010-11-02

    The present invention provides a method of exfoliating a layered material (e.g., graphite and graphite oxide) to produce nano-scaled platelets having a thickness smaller than 100 nm, typically smaller than 10 nm. The method comprises (a) dispersing particles of graphite, graphite oxide, or a non-graphite laminar compound in a liquid medium containing therein a surfactant or dispersing agent to obtain a stable suspension or slurry; and (b) exposing the suspension or slurry to ultrasonic waves at an energy level for a sufficient length of time to produce separated nano-scaled platelets. The nano-scaled platelets are candidate reinforcement fillers for polymer nanocomposites. Nano-scaled graphene platelets are much lower-cost alternatives to carbon nano-tubes or carbon nano-fibers.

  7. Electrochemical method for producing a biodiesel mixture comprising fatty acid alkyl esters and glycerol

    Science.gov (United States)

    Lin, YuPo J; St. Martin, Edward J

    2013-08-13

    The present invention relates to an integrated method and system for the simultaneous production of biodiesel from free fatty acids (via esterification) and from triglycerides (via transesterification) within the same reaction chamber. More specifically, one preferred embodiment of the invention relates to a method and system for the production of biodiesel using an electrodeionization stack, wherein an ion exchange resin matrix acts as a heterogeneous catalyst for simultaneous esterification and transesterification reactions between a feedstock and a lower alcohol to produce biodiesel, wherein the feedstock contains significant levels of free fatty acid. In addition, because of the use of a heterogeneous catalyst, the glycerol and biodiesel have much lower salt concentrations than raw biodiesel produced by conventional transesterification processes. The present invention makes it much easier to purify glycerol and biodiesel.

  8. A Frequency-Domain Adaptive Filter (FDAF) Prediction Error Method (PEM) Framework for Double-Talk-Robust Acoustic Echo Cancellation

    DEFF Research Database (Denmark)

    Gil-Cacho, Jose M.; van Waterschoot, Toon; Moonen, Marc

    2014-01-01

    In this paper, we propose a new framework to tackle the double-talk (DT) problem in acoustic echo cancellation (AEC). It is based on a frequency-domain adaptive filter (FDAF) implementation of the so-called prediction error method adaptive filtering using row operations (PEM-AFROW), leading to the FDAF-PEM-AFROW algorithm. We show that FDAF-PEM-AFROW is by construction related to the best linear unbiased estimate (BLUE) of the echo path. We depart from this framework to show an improvement in performance with respect to other adaptive filters minimizing the BLUE criterion, namely the PEM-based variable regularization (VR) algorithms. The FDAF-PEM-AFROW versions significantly outperform the original versions in every simulation. In terms of computational complexity, the FDAF-PEM-AFROW versions are themselves about two orders of magnitude cheaper than the original versions.
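    For readers unfamiliar with the FDAF structure underlying the algorithm, the sketch below implements a generic overlap-save frequency-domain block-LMS echo canceller in Python. It is not PEM-AFROW or its double-talk-robust extension, only the plain FDAF skeleton such methods build on; the block size and step size are arbitrary choices.

      import numpy as np

      def fdaf(x, d, B=64, mu=0.5, eps=1e-8):
          """Overlap-save frequency-domain block-LMS. x: far-end signal,
          d: microphone signal (same length); returns the error signal."""
          W = np.zeros(2 * B, dtype=complex)       # frequency-domain weights
          x_pad = np.concatenate([np.zeros(B), x])
          e_out = np.zeros(len(x))
          for k in range(len(x) // B):
              X = np.fft.fft(x_pad[k * B : k * B + 2 * B])  # previous + current block
              y = np.real(np.fft.ifft(X * W))[B:]           # valid last B samples
              e = d[k * B : (k + 1) * B] - y
              E = np.fft.fft(np.concatenate([np.zeros(B), e]))
              W += mu * np.conj(X) * E / (np.abs(X) ** 2 + eps)  # normalized update
              w = np.real(np.fft.ifft(W)); w[B:] = 0        # gradient constraint
              W = np.fft.fft(w)
              e_out[k * B : (k + 1) * B] = e
          return e_out

    Working block-wise in the frequency domain is what yields the large complexity savings over time-domain implementations noted in the record.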

  9. Ion transport membrane reactor systems and methods for producing synthesis gas

    Science.gov (United States)

    Repasky, John Michael

    2015-05-12

    Embodiments of the present invention provide cost-effective systems and methods for producing a synthesis gas product using a steam reformer system and an ion transport membrane (ITM) reactor having multiple stages, without requiring inter-stage reactant injections. Embodiments of the present invention also provide techniques for compensating for membrane performance degradation and other changes in system operating conditions that negatively affect synthesis gas production.

  10. Single-step electrochemical method for producing very sharp Au scanning tunneling microscopy tips

    International Nuclear Information System (INIS)

    Gingery, David; Buehlmann, Philippe

    2007-01-01

    A single-step electrochemical method for making sharp gold scanning tunneling microscopy tips is described. 3.0 M NaCl in 1% perchloric acid is compared to several previously reported etchants. The addition of perchloric acid to sodium chloride solutions drastically shortens etching times and is shown by transmission electron microscopy to produce very sharp tips with a mean radius of curvature of 15 nm.

  11. Metal oxide targets produced by the polymer-assisted deposition method

    Energy Technology Data Exchange (ETDEWEB)

    Garcia, Mitch A., E-mail: mitch@berkeley.ed [Department of Chemistry, Room 446 Latimer Hall, University of California Berkeley, Berkeley, CA 94720-1460 (United States); Lawrence Berkeley National Laboratory, Berkeley, CA 94720 (United States); Ali, Mazhar N.; Chang, Noel N.; Parsons-Moss, T. [Department of Chemistry, Room 446 Latimer Hall, University of California Berkeley, Berkeley, CA 94720-1460 (United States); Lawrence Berkeley National Laboratory, Berkeley, CA 94720 (United States); Ashby, Paul D. [Lawrence Berkeley National Laboratory, Berkeley, CA 94720 (United States); Gates, Jacklyn M. [Department of Chemistry, Room 446 Latimer Hall, University of California Berkeley, Berkeley, CA 94720-1460 (United States); Lawrence Berkeley National Laboratory, Berkeley, CA 94720 (United States); Stavsetra, Liv [Lawrence Berkeley National Laboratory, Berkeley, CA 94720 (United States); Gregorich, Kenneth E.; Nitsche, Heino [Department of Chemistry, Room 446 Latimer Hall, University of California Berkeley, Berkeley, CA 94720-1460 (United States); Lawrence Berkeley National Laboratory, Berkeley, CA 94720 (United States)

    2010-02-11

    The polymer-assisted deposition (PAD) method was used to create crack-free homogeneous metal oxide films for use as targets in nuclear science applications. Metal oxide films of europium, thulium, and hafnium were prepared as models for actinide oxides. Films produced by a single application of PAD were homogeneous and uniform and ranged in thickness from 30 to 320 nm. Reapplication of the PAD method (six times) with a 10% by weight hafnium(IV) solution resulted in an equally homogeneous and uniform film with a total thickness of 600 nm.

  13. A new method of producing local enhancement of buoyancy in liquid flows

    Science.gov (United States)

    Bhat, G. S.; Narasimha, R.; Arakeri, V. H.

    1989-11-01

    We describe here a novel method of generating large volumetric heating in a liquid. The method uses the principle of ohmic heating of the liquid, rendered electrically conducting by suitable additives if necessary. Electrolysis is prevented by the use of high frequency alternating voltage and chemically treated electrodes. The technique is demonstrated by producing substantial heating in an initially neutral jet of water. Simple flow visualisation studies, made by adding dye to the jet, show marked changes in the growth and development of the jet with heat addition.

  14. Ion transport by gating voltage to nanopores produced via metal-assisted chemical etching method

    Science.gov (United States)

    Van Toan, Nguyen; Inomata, Naoki; Toda, Masaya; Ono, Takahito

    2018-05-01

    In this work, we report a simple and low-cost way to create nanopores that can be employed for various applications in nanofluidics. Nano-sized Ag particles in the range of 1 to 20 nm are formed on a silicon substrate by a de-wetting method. Silicon nanopores with an average diameter of approximately 15 nm and a height of 200 μm are then successfully produced by the metal-assisted chemical etching method. In addition, electrically driven ion transport in the nanopores is demonstrated for nanofluidic applications. Ion transport through the nanopores is observed and can be controlled by applying a gating voltage to the nanopores.

  15. A Rapid and Efficient Screening Method for Antibacterial Compound-Producing Bacteria.

    Science.gov (United States)

    Hettiarachchi, Sachithra; Lee, Su-Jin; Lee, Youngdeuk; Kwon, Young-Kyung; De Zoysa, Mahanama; Moon, Song; Jo, Eunyoung; Kim, Taeho; Kang, Do-Hyung; Heo, Soo-Jin; Oh, Chulhong

    2017-08-28

    Antibacterial compounds are widely used in the treatment of human and animal diseases. The overuse of antibiotics has led to a rapid rise in the prevalence of drug-resistant bacteria, making the development of new antibacterial compounds essential. This study focused on developing a fast and easy method for identifying marine bacteria that produce antibiotic compounds. Eight randomly selected marine target bacterial species (Agrococcus terreus, Bacillus algicola, Mesoflavibacter zeaxanthinifaciens, Pseudoalteromonas flavipulchra, P. peptidolytica, P. piscicida, P. rubra, and Zunongwangia atlantica) were tested for production of antibacterial compounds against four strains of test bacteria (B. cereus, B. subtilis, Halomonas smyrnensis, and Vibrio alginolyticus). Colony picking was used as the primary screening method. Clear zones were observed around colonies of P. flavipulchra, P. peptidolytica, P. piscicida, and P. rubra tested against B. cereus, B. subtilis, and H. smyrnensis. The efficiency of colony scraping and broth culture methods for antimicrobial compound extraction was also compared using a disk diffusion assay. P. peptidolytica, P. piscicida, and P. rubra showed antagonistic activity against H. smyrnensis, B. cereus, and B. subtilis, respectively, only in the colony scraping method. Our results show that colony picking and colony scraping are effective, quick, and easy methods of screening for antibacterial compound-producing bacteria.

  16. Evaluation and comparison of the processing methods of airborne gravimetry concerning the errors effects on downward continuation results: Case studies in Louisiana (USA) and the Tibetan Plateau (China)

    DEFF Research Database (Denmark)

    Zhao, Qilong; Strykowski, Gabriel; Li, Jiancheng

    2017-01-01

    ... and the most extreme area of the world for this type of survey is the Tibetan Plateau. Since there are no high-accuracy surface gravity data available for this area, the above error-minimization method involving external gravity data cannot be used. We propose a semi-parametric downward continuation method in combination with regularization to suppress the systematic error effect and the random error effect in the Tibetan Plateau, i.e., without the use of external high-accuracy gravity data. We use a Louisiana airborne gravity dataset from the USA National Oceanic and Atmospheric Administration (NOAA) to demonstrate that the new method works effectively. Furthermore, for the Tibetan Plateau we show that the numerical experiment is also successfully conducted using synthetic Earth Gravitational Model 2008 (EGM08)-derived gravity data contaminated with synthetic errors. The estimated systematic ...
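    The role of regularization in downward continuation can be sketched with a generic spectral Tikhonov filter in Python. This is a textbook-style scheme, not the semi-parametric method proposed in the record; the geometry and the regularization parameter alpha are assumptions.

      import numpy as np

      def downward_continue(g_air, dx, h, alpha=1e-3):
          """Tikhonov-regularized downward continuation of an airborne gravity
          profile g_air sampled every dx metres at height h above the target."""
          n = len(g_air)
          k = np.abs(np.fft.fftfreq(n, d=dx)) * 2.0 * np.pi  # radial wavenumber
          G = np.fft.fft(g_air)
          A = np.exp(-k * h)                # upward-continuation operator
          # Regularized inverse: minimizes |A*x - g|^2 + alpha*|x|^2, so the
          # exp(k*h) amplification of high-wavenumber noise is damped.
          return np.real(np.fft.ifft(G * A / (A ** 2 + alpha)))

    Larger alpha suppresses the noise amplified at high wavenumbers at the cost of resolution, which is the trade-off any downward continuation scheme must manage.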

  17. Error Patterns

    NARCIS (Netherlands)

    Hoede, C.; Li, Z.

    2001-01-01

    In coding theory the problem of decoding focuses on error vectors. In the simplest situation code words are $(0,1)$-vectors, as are the received messages and the error vectors. Comparison of a received word with the code words yields a set of error vectors. In deciding on the original code word,
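    The decoding step sketched in this record — comparing the received word with each code word to obtain error vectors, then (under the standard minimum-distance rule, assumed here) choosing the code word whose error vector has the smallest Hamming weight — looks as follows for a hypothetical toy code:

      import numpy as np

      # A toy (0,1) block code; the code words are made up for illustration.
      code = np.array([[0, 0, 0, 0, 0],
                       [1, 1, 1, 0, 0],
                       [0, 0, 1, 1, 1],
                       [1, 1, 0, 1, 1]])
      received = np.array([1, 0, 1, 0, 0])

      error_vectors = code ^ received        # one error vector per code word
      weights = error_vectors.sum(axis=1)    # Hamming weights
      decoded = code[np.argmin(weights)]     # minimum-distance decision

      print("error vectors:\n", error_vectors)
      print("decoded code word:", decoded)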

  18. Locating single-point sources from arrival times containing large picking errors (LPEs): the virtual field optimization method (VFOM)

    Science.gov (United States)

    Li, Xi-Bing; Wang, Ze-Wei; Dong, Long-Jun

    2016-01-01

    Microseismic monitoring systems using local location techniques tend to be timely, automatic and stable. One basic requirement of these systems is the automatic picking of arrival times. However, arrival times generated by automated techniques always contain large picking errors (LPEs), which may make the location solution unreliable and cause the integrated system to be unstable. To overcome the LPE issue, we propose the virtual field optimization method (VFOM) for locating single-point sources. In contrast to existing approaches, the VFOM optimizes a continuous and virtually established objective function to search the space for the common intersection of the hyperboloids, which is determined by sensor pairs other than the least residual between the model-calculated and measured arrivals. The results of numerical examples and in-site blasts show that the VFOM can obtain more precise and stable solutions than traditional methods when the input data contain LPEs. Furthermore, we discuss the impact of LPEs on objective functions to determine the LPE-tolerant mechanism, velocity sensitivity and stopping criteria of the VFOM. The proposed method is also capable of locating acoustic sources using passive techniques such as passive sonar detection and acoustic emission.
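    For contrast with the VFOM, the classical pairwise formulation it improves upon can be sketched in Python: each sensor pair constrains the source to a hyperboloid via the arrival-time difference, and a least-squares residual is minimized numerically. This is the traditional approach, not the VFOM objective; the geometry, wave speed, and noise level are invented for illustration.

      import numpy as np
      from scipy.optimize import minimize

      sensors = np.array([[0, 0, 0], [100, 0, 0], [0, 100, 0],
                          [0, 0, 100], [100, 100, 50]], dtype=float)
      v = 5000.0                                           # assumed wave speed (m/s)
      true_src = np.array([40.0, 60.0, 20.0])
      t = np.linalg.norm(sensors - true_src, axis=1) / v   # synthetic arrival times
      t += np.random.default_rng(0).normal(0.0, 1e-4, len(t))  # picking errors

      def objective(p):
          d = np.linalg.norm(sensors - p, axis=1)
          res = [(d[i] - d[j]) - v * (t[i] - t[j])         # hyperboloid residuals
                 for i in range(len(sensors)) for j in range(i + 1, len(sensors))]
          return np.sum(np.square(res))

      est = minimize(objective, x0=np.array([50.0, 50.0, 50.0]), method="Nelder-Mead")
      print("located source:", est.x)

    Large picking errors inflate individual residuals in this formulation, which is exactly the instability the VFOM's continuous virtual objective is designed to tolerate.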

  19. Shielding Factor Method for producing effective cross sections: MINX/SPHINX and the CCCC interface system

    International Nuclear Information System (INIS)

    MacFarlane, R.E.; Weisbin, C.R.; Paik, N.C.

    1978-01-01

    The Shielding Factor Method (SFM) is an economical designer-oriented method for producing the coarse-group space and energy self-shielded cross sections needed for reactor-core analysis. Extensive experience with the ETOX/1DX and ENDRUN/TDOWN systems has made the SFM the method of choice for most US fast-reactor design activities. The MINX/SPHINX system was designed to expand upon the capabilities of the older SFM codes and to incorporate the new standard interfaces for fast-reactor cross sections specified by the Committee for Computer Code Coordination (CCCC). MINX is the cross-section processor. It generates multigroup cross sections, shielding factors, and group-to-group transfer matrices from ENDF/B-IV and writes them out as CCCC ISOTXS and BRKOXS files. It features detailed pointwise resonance reconstruction, accurate Doppler broadening, and an efficient treatment of anisotropic scattering. SPHINX is the space-and-energy shielding code. It uses specific mixture and geometry information together with equivalence principles to construct shielded macroscopic multigroup cross sections in as many as 240 groups. It then makes a flux calculation by diffusion or transport methods and collapses to an appropriate set of cell-averaged coarse-group effective cross sections. The integration of MINX and SPHINX with the CCCC interface system provides an efficient, accurate, and convenient system for producing effective cross sections for use in fast-reactor problems. The system has also proved useful in shielding and CTR applications. 3 figures, 4 tables
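    The shielding-factor idea at the heart of the SFM can be stated compactly: the effective cross section is the infinite-dilution value scaled by a tabulated Bondarenko factor f(sigma_0, T), interpolated at the mixture's background cross section sigma_0. The Python sketch below uses invented numbers (and holds the temperature fixed) purely for illustration; it is not MINX/SPHINX output.

      import numpy as np

      sigma_inf = 12.5                                    # infinite-dilution XS (barns)
      sigma0_grid = np.array([1e1, 1e2, 1e3, 1e4, 1e10])  # tabulated sigma_0 points
      f_grid = np.array([0.42, 0.61, 0.83, 0.95, 1.00])   # shielding factors at fixed T

      def shielded_xs(sigma0):
          # Interpolate the shielding factor log-linearly in sigma_0.
          f = np.interp(np.log10(sigma0), np.log10(sigma0_grid), f_grid)
          return f * sigma_inf

      print(shielded_xs(5e2))   # effective cross section for sigma_0 = 500 barns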

  20. Improved method of producing satisfactory sections of whole eyeball by routine histology.

    Science.gov (United States)

    Arko-Boham, Benjamin; Ahenkorah, John; Hottor, Bismarck Afedo; Dennis, Esther; Addai, Frederick Kwaku

    2014-02-01

    To overcome the loss of structural integrity when eyeball sections are prepared by wax embedding, we experimentally modified the routine histological procedure and report satisfactorily well-preserved antero-posterior sections of whole eyeballs for teaching/learning purposes. At present, histological sections of whole eyeballs are not readily available because substantial structural distortions, attributable to the variable consistency of tissue components (and their undesired differential shrinkage), result from routine processing. Notably, at the dehydration stage of processing, the soft, gel-like vitreous humor shrinks considerably relative to the tough fibrous sclera, causing collapse of the ocular globe. Additionally, the combined effects of fixation, dehydration, and embedding at 60°C render the eye lens too hard for microtome slicing at thicknesses suitable for light microscopy. We satisfactorily preserved intact antero-posterior sections of eyeballs via a routine paraffin-wax processing procedure entailing two main modifications: (i) careful needle aspiration of the vitreous humor and its replacement with molten wax prior to wax infiltration; (ii) softening of the lens in the trimmed wax block by placing a drop of concentrated liquid phenol on it for 3 h during microtomy. These variations of the routine histological method produced intact whole-eyeball sections with retinal detachment as the only structural distortion. The intact sections of the eyeball obtained compare well with those from the laborious, expensive, and 8-week-long celloidin method. Our method has wider potential usability than the costly freeze-drying method, which requires special skills and equipment (a cryotome) and does not produce whole-eyeball sections. Copyright © 2013 Wiley Periodicals, Inc.