WorldWideScience

Sample records for method validation step

  1. The method validation step of biological dosimetry accreditation process

    Roy, L.; Voisin, P.A.; Guillou, A.C.; Busset, A.; Gregoire, E.; Buard, V.; Delbos, M.; Voisin, Ph. [Institut de Radioprotection et de Surete Nucleaire, LDB, 92 - Fontenay aux Roses (France)

    2006-01-01

    One of the missions of the Laboratory of Biological Dosimetry (L.D.B.) of the Institute for Radiation and Nuclear Safety (I.R.S.N.) is to assess the radiological dose after a suspected accidental overexposure to ionising radiation, using radiation-induced changes in some biological parameters. The 'gold standard' is the yield of dicentrics observed in patients' lymphocytes, and this yield is converted into a dose using dose-effect relationships. The method is complementary to clinical and physical dosimetry and supports the medical team in charge of the patients. To obtain formal recognition of its operational activity, the laboratory decided three years ago to seek accreditation, following the recommendations of both ISO/IEC 17025 (General Requirements for the Competence of Testing and Calibration Laboratories) and ISO 19238 (Performance criteria for service laboratories performing biological dosimetry by cytogenetics). Diagnostic and risk analyses were carried out to control the whole analysis process, leading to the writing of procedural documents. Purchasing, personnel and vocational training were also included in the quality system, and audits were very helpful in improving it. One specificity of this technique is that it is not standardized; therefore, apart from quality-management aspects, several technical points needed validation. An inventory of potentially influential factors was carried out, and to estimate their real effect on the yield of dicentrics, a Plackett-Burman experimental design was conducted. The effects of seven parameters were tested: the BUdR (bromodeoxyuridine), PHA (phytohemagglutinin) and colcemid concentrations, the culture duration, the incubator temperature, the blood volume and the medium volume. The tested values were chosen according to the uncertainties with which these parameters are measured, i.e. those of the pipettes, thermometers and test tubes. None of the factors had a significant impact on the yield of dicentrics; therefore the uncertainty linked to their use was considered negligible.
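
    To make the screening concrete: a Plackett-Burman design for seven two-level factors needs only eight runs, and the main effect of each factor is the difference between the mean response at its high and low levels. A minimal Python sketch, using simulated dicentric yields rather than the laboratory's data:

      from scipy.linalg import hadamard
      import numpy as np

      # 8-run Plackett-Burman design for up to 7 two-level factors:
      # columns 1..7 of an 8x8 Hadamard matrix give the -1/+1 level settings
      H = hadamard(8)
      design = H[:, 1:]                            # 8 runs x 7 factors

      rng = np.random.default_rng(0)
      y = 0.05 + 0.002 * rng.standard_normal(8)    # simulated dicentric yields

      # main effect = mean(y at +1) minus mean(y at -1) for each factor
      effects = design.T @ y / 4.0
      for i, e in enumerate(effects, start=1):
          print(f"factor {i}: effect = {e:+.4f}")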

  2. Validation of DRAGON side-step method for Bruce-A restart Phase-B physics tests

    Shen, W.; Ngo-Trong, C.; Davis, R.S.

    2004-01-01

    The DRAGON side-step method, developed at AECL, has a number of advantages over the all-DRAGON method that was used before. It is now the qualified method for reactivity-device calculations. Although the side-step-method-generated incremental cross sections have been validated against those previously calculated with the all-DRAGON method, it is highly desirable to validate the side-step method against device-worth measurements in power reactors directly. In this paper, the DRAGON side-step method was validated by comparison with the device-calibration measurements made in Bruce-A NGS Unit 4 restart Phase-B commissioning in 2003. The validation exercise showed excellent results, with the DRAGON code overestimating the measured ZCR worth by ∼5%. A sensitivity study was also performed in this paper to assess the effect of various DRAGON modelling techniques on the incremental cross sections. The assessment shows that the refinement of meshes in 3-D and the use of the side-step method are two major reasons contributing to the improved agreement between the calculated ZCR worths and the measurements. Use of different DRAGON versions, DRAGON libraries, local-parameter core conditions, and weighting techniques for the homogenization of tube clusters inside the ZCR have a very small effect on the ZCR incremental thermal absorption cross section and ZCR reactivity worth. (author)

  3. Validation of a One-Step Method for Extracting Fatty Acids from Salmon, Chicken and Beef Samples.

    Zhang, Zhichao; Richardson, Christine E; Hennebelle, Marie; Taha, Ameer Y

    2017-10-01

    Fatty acid extraction methods are time-consuming and expensive because they involve multiple steps and copious amounts of extraction solvents. In an effort to streamline the fatty acid extraction process, this study compared the standard Folch lipid extraction method to a one-step method involving a column that selectively elutes the lipid phase. The methods were tested on raw beef, salmon, and chicken. Compared to the standard Folch method, the one-step extraction process generally yielded statistically insignificant differences in chicken and salmon fatty acid concentrations, percent composition and weight percent. Initial testing showed that beef stearic, oleic and total fatty acid concentrations were significantly lower by 9-11% with the one-step method as compared to the Folch method, but retesting on a different batch of samples showed a significant 4-8% increase in several omega-3 and omega-6 fatty acid concentrations with the one-step method relative to the Folch. Overall, the findings reflect the utility of a one-step extraction method for routine and rapid monitoring of fatty acids in chicken and salmon. Inconsistencies in beef concentrations, although minor (within 11%), may be due to matrix effects. A one-step fatty acid extraction method has broad applications for rapidly and routinely monitoring fatty acids in the food supply and formulating controlled dietary interventions. © 2017 Institute of Food Technologists®.
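
    At its core, the comparison above is a paired test of fatty acid concentrations measured on the same samples by the two extraction methods. A hedged sketch of such an analysis (the numbers below are hypothetical, not the study's data):

      import numpy as np
      from scipy import stats

      # paired concentrations (mg/g) from the same samples, two methods
      folch    = np.array([12.1, 10.8, 11.5, 12.9, 11.2, 10.5])
      one_step = np.array([11.9, 10.9, 11.2, 12.7, 11.4, 10.3])

      t, p = stats.ttest_rel(one_step, folch)      # paired t-test
      bias = np.mean(one_step - folch)
      print(f"mean bias = {bias:.2f} mg/g, t = {t:.2f}, p = {p:.3f}")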

  4. Free Modal Algebras Revisited: The Step-by-Step Method

    Bezhanishvili, N.; Ghilardi, Silvio; Jibladze, Mamuka

    2012-01-01

    We review the step-by-step method of constructing finitely generated free modal algebras. First we discuss the global step-by-step method, which works well for rank one modal logics. Next we refine the global step-by-step method to obtain the local step-by-step method, which is applicable beyond

  5. Valve cam design using numerical step-by-step method

    Vasilyev, Aleksandr; Bakhracheva, Yuliya; Kabore, Ousman; Zelenskiy, Yuriy

    2014-01-01

    This article studies the numerical step-by-step method of cam profile design. The results of the study are used for designing the valve gear of an internal combustion engine. The method makes it possible to profile cams for peak efficiency while respecting the many restrictions connected with valve gear serviceability and reliability.

  6. Step by step parallel programming method for molecular dynamics code

    Orii, Shigeo; Ohta, Toshio

    1996-07-01

    Parallel programming of a numerical simulation program for molecular dynamics was carried out with a step-by-step programming technique using the two-phase method. As a result, within a certain range of computing parameters, parallel performance is obtained by using do-loop-level parallel programming, which decomposes the calculation according to the indices of do-loops across processors, on the vector-parallel computer VPP500 and the scalar-parallel computer Paragon. It is also found that VPP500 shows parallel performance over a wider range of computing parameters. The reason is that the time cost of the program parts that cannot be reduced by do-loop-level parallel programming can be reduced to a negligible level by vectorization; after that, the time-consuming parts of the program are concentrated in fewer sections that can be accelerated by do-loop-level parallel programming. This report shows the step-by-step parallel programming method and the parallel performance of the molecular dynamics code on VPP500 and Paragon. (author)
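
    The report concerns the VPP500 and Paragon machines specifically; purely as an illustration of the idea of decomposing a force do-loop across processors, here is a hedged Python sketch with placeholder Lennard-Jones physics and hypothetical parameters:

      import numpy as np
      from multiprocessing import Pool

      def partial_forces(args):
          # forces for one contiguous slice of particle indices:
          # the "do-loop decomposition" idea, with placeholder physics
          positions, lo, hi = args
          n = len(positions)
          f = np.zeros((n, 3))
          for i in range(lo, hi):
              for j in range(n):
                  if i == j:
                      continue
                  r = positions[i] - positions[j]
                  d2 = r @ r
                  f[i] += 24.0 * (2.0 / d2**7 - 1.0 / d2**4) * r
          return f

      if __name__ == "__main__":
          rng = np.random.default_rng(1)
          pos = rng.uniform(0.0, 10.0, size=(200, 3))
          chunks = [(pos, k * 50, (k + 1) * 50) for k in range(4)]
          with Pool(4) as pool:
              forces = sum(pool.map(partial_forces, chunks))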

  7. Validation Process Methods

    Lewis, John E. [National Renewable Energy Lab. (NREL), Golden, CO (United States); English, Christine M. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Gesick, Joshua C. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Mukkamala, Saikrishna [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2018-01-04

    This report documents the validation process as applied to projects awarded through Funding Opportunity Announcements (FOAs) within the U.S. Department of Energy Bioenergy Technologies Office (DOE-BETO). It describes the procedures used to protect and verify project data, as well as the systematic framework used to evaluate and track performance metrics throughout the life of the project. This report also describes the procedures used to validate the proposed process design, cost data, analysis methodologies, and supporting documentation provided by the recipients.

  8. Development and Validation of an Automated Step Ergometer

    C. de Sousa Maria do Socorro

    2014-12-01

    Laboratory ergometers are costly and thus inaccessible to most of the population, so it is important to develop affordable devices that make evaluations such as cardiorespiratory fitness testing feasible and easier. The objective of this study was to develop and validate an Automated Step Ergometer (ASE), adjusted according to the height of the subject, for predicting VO2max through a progressive test. The development process comprised three steps: theory, prototype assembly and validation. The ASE consists of an elevating platform that raises or lowers the step as required for testing. The ASE was validated by comparing the values of predicted VO2max (equation) and direct gas analysis on the prototype and on a treadmill. For the validation process, 167 subjects with an average age of 31.24 ± 14.38 years, of both genders and different degrees of cardiorespiratory fitness, were randomized and divided by gender and training condition into untrained (n=106), active (n=24) and trained (n=37) groups. Each participant performed a progressive test in which the ASE started at the same height (20 cm) for all and then, according to the subject's height, varied up to a maximum of 45 cm. The time in each stage and the rhythm were chosen according to training condition, from lowest to highest (60-180 s; 116-160 bpm, respectively). Data were compared with Student's t-test and ANOVA; correlations were tested with Pearson's r. The value of α was set at 0.05. No differences were found between the predicted VO2max and the direct gas analysis VO2max, nor between the ASE and treadmill VO2max (p = 0.365), with a high correlation between ergometers (r = 0.974). The values for repeatability, reproducibility and reliability of the male and female groups were, respectively, 4.08 and 5.02; 0.50 and 1.11; 4.11 and 5.15. The values of internal consistency (Cronbach's alpha) among measures were all >0.90. It was verified

  9. Uncertainty propagation applied to multi-scale thermal-hydraulics coupled codes. A step towards validation

    Geffray, Clotaire Clement

    2017-03-20

    The work presented here constitutes an important step towards validating the use of coupled system thermal-hydraulics and computational fluid dynamics codes for the simulation of complex flows in liquid-metal-cooled pool-type facilities. First, a set of methods suited to uncertainty and sensitivity analysis and validation activities, with regard to the specific constraints of working with coupled and expensive-to-run codes, is proposed. Then, these methods are applied to the ATHLET - ANSYS CFX model of the TALL-3D facility. Several transients performed at the latter facility are investigated. The results are presented, discussed and compared to the experimental data. Finally, assessments of the validity of the selected methods and of the quality of the model are offered.

  10. Method validation in pharmaceutical analysis: from theory to practical optimization

    Jaqueline Kaleian Eserian

    2015-01-01

    The validation of analytical methods is required to obtain high-quality data. For the pharmaceutical industry, method validation is crucial to ensure product quality as regards both therapeutic efficacy and patient safety. The most critical step in validating a method is to establish a protocol containing well-defined procedures and criteria. A well-planned and organized protocol, such as the one proposed in this paper, results in a rapid and concise method validation procedure for quantitative high performance liquid chromatography (HPLC) analysis.
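
    One concrete protocol step is checking linearity and estimating detection limits from a calibration curve; the ICH-style estimates LOD = 3.3σ/S and LOQ = 10σ/S use the residual standard deviation σ and the slope S of the fit. A sketch with hypothetical calibration data:

      import numpy as np

      conc = np.array([1.0, 2.0, 5.0, 10.0, 20.0, 50.0])       # ug/mL (hypothetical)
      area = np.array([10.2, 19.8, 51.1, 99.5, 201.3, 498.7])  # peak areas

      slope, intercept = np.polyfit(conc, area, 1)
      pred = slope * conc + intercept
      r2 = 1.0 - np.sum((area - pred) ** 2) / np.sum((area - area.mean()) ** 2)

      sigma = np.std(area - pred, ddof=2)      # residual standard deviation
      lod = 3.3 * sigma / slope
      loq = 10.0 * sigma / slope
      print(f"R^2 = {r2:.4f}, LOD = {lod:.2f} ug/mL, LOQ = {loq:.2f} ug/mL")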

  11. M-step preconditioned conjugate gradient methods

    Adams, L.

    1983-01-01

    Preconditioned conjugate gradient methods for solving sparse symmetric positive definite systems of linear equations are described. Necessary and sufficient conditions are given for when these preconditioners can be used, and an analysis of their effectiveness is given. Efficient computer implementations of these methods are discussed, and results on the CYBER 203 and on the Finite Element Machine under construction at NASA Langley Research Center are included.
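
    An m-step preconditioner of this kind applies m sweeps of a basic iterative method in place of an exact solve. The paper analyses this class in general; the sketch below is a hedged illustration rather than the paper's exact construction, using m Jacobi sweeps inside a standard preconditioned conjugate gradient loop:

      import numpy as np

      def m_step_jacobi(A, r, m):
          # approximate z ~ A^{-1} r with m Jacobi sweeps
          d = np.diag(A)
          z = r / d
          for _ in range(m - 1):
              z = z + (r - A @ z) / d
          return z

      def pcg(A, b, m=3, tol=1e-10, maxit=500):
          x = np.zeros_like(b)
          r = b - A @ x
          z = m_step_jacobi(A, r, m)
          p, rz = z.copy(), r @ z
          for _ in range(maxit):
              Ap = A @ p
              alpha = rz / (p @ Ap)
              x += alpha * p
              r -= alpha * Ap
              if np.linalg.norm(r) < tol:
                  break
              z = m_step_jacobi(A, r, m)
              rz_new = r @ z
              p = z + (rz_new / rz) * p
              rz = rz_new
          return x

      # SPD test problem: 1-D Poisson matrix
      n = 50
      A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
      x = pcg(A, np.ones(n))
      print("residual:", np.linalg.norm(np.ones(n) - A @ x))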

  12. Midpoint Two-Steps Rule for the Square Root Method

    S.E. Uwamusi

    Aberth third-order method for finding zeros of a polynomial in interval ... KEY WORDS: Square root iteration, midpoint two-steps method, ... A new set of methods for the simultaneous determination of zeros of polynomial equations and iterative ...

  13. An improved 4-step commutation method application for matrix converter

    Guo, Yu; Guo, Yougui; Deng, Wenlang

    2014-01-01

    A novel four-step commutation method is proposed in this paper for a matrix converter cell with 3-phase input and 1-phase output, obtained from an analysis of published commutation methods for matrix converters. The first and fourth steps can be shorter than the second and third ones. The method discussed here is implemented by programming in the VHDL language. Finally, the novel method is verified by experiments.

  14. Ehrenfest's theorem and the validity of the two-step model for strong-field ionization

    Shvetsov-Shilovskiy, Nikolay; Dimitrovski, Darko; Madsen, Lars Bojer

    By comparison with the solution of the time-dependent Schrödinger equation, we explore the validity of the two-step semiclassical model for strong-field ionization in elliptically polarized laser pulses. We find that the discrepancy between the two-step model and the quantum theory correlates

  15. A Normalized Transfer Matrix Method for the Free Vibration of Stepped Beams: Comparison with Experimental and FE(3D) Methods

    Tamer Ahmed El-Sayed

    2017-01-01

    The exact solution for a multistepped Timoshenko beam is derived using a set of fundamental solutions. This set of solutions is derived so as to normalize the solution at the origin of the coordinates. The start, end, and intermediate boundary conditions involve concentrated masses and linear and rotational elastic supports. The beam start, end, and intermediate equations are assembled using the present normalized transfer matrix (NTM). The advantage of this method is that it is quicker than the standard method because the size of the complete system coefficient matrix is 4 × 4; in addition, no matrix inversion steps are required during its assembly. The validity of this method is tested by comparing the results of the current method with the literature. Then the validity of the exact stepped analysis is checked using experimental and FE(3D) methods. The experimental results for stepped beams with a single step and with two steps, for sixteen different test samples, are in excellent agreement with those of the three-dimensional finite element method FE(3D). The comparison between the NTM method and the finite element method shows that the modal percentage deviation increases when a beam step location coincides with a peak point in the mode shape, and decreases when a beam step location coincides with a straight portion of the mode shape.

  16. Validity of the Stages of Change in Steps instrument (SoC-Step) for achieving the physical activity goal of 10,000 steps per day.

    Rosenkranz, Richard R; Duncan, Mitch J; Caperchione, Cristina M; Kolt, Gregory S; Vandelanotte, Corneel; Maeder, Anthony J; Savage, Trevor N; Mummery, W Kerry

    2015-11-30

    Physical activity (PA) offers numerous benefits to health and well-being, but most adults are not sufficiently physically active to afford such benefits. The 10,000 steps campaign has been a popular and effective approach to promote PA. The Transtheoretical Model posits that individuals have varying levels of readiness for health behavior change, known as Stages of Change (Precontemplation, Contemplation, Preparation, Action, and Maintenance). Few validated assessment instruments are available for determining Stages of Change in relation to the PA goal of 10,000 steps per day. The purpose of this study was to assess the criterion-related validity of the SoC-Step, a brief 10,000 steps per day Stages of Change instrument. Participants were 504 Australian adults (176 males, 328 females, mean age = 50.8 ± 13.0 years) from the baseline sample of the Walk 2.0 randomized controlled trial. Measures included 7-day accelerometry (Actigraph GT3X), height, weight, and self-reported intention, self-efficacy, and SoC-Step: Stages of Change relative to achieving 10,000 steps per day. Kruskal-Wallis H tests with pairwise comparisons were used to determine whether participants differed by stage according to steps per day, general health, body mass index, intention, and self-efficacy to achieve 10,000 steps per day. Binary logistic regression was used to test the hypothesis that participants in Maintenance or Action stages would have greater likelihood of meeting the 10,000 steps goal, in comparison to participants in the other three stages. Consistent with study hypotheses, participants in Precontemplation had significantly lower intention scores than those in Contemplation (p = 0.003) or Preparation, and those in Maintenance or Action were more likely to meet the 10,000 steps goal (OR = 3.11; 95% CI = 1.66, 5.83) compared to those in Precontemplation, Contemplation, or Preparation. Australian New Zealand Clinical Trials Registry reference: ACTRN12611000157976; World Health Organization Universal Trial
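
    The binary logistic regression used above amounts to estimating an odds ratio for goal attainment by stage group; a hedged sketch with simulated data (not the trial's):

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(2)
      # 1 = Action/Maintenance, 0 = earlier stages (simulated)
      stage = rng.integers(0, 2, size=200)
      met_goal = rng.binomial(1, np.where(stage == 1, 0.45, 0.20))

      X = sm.add_constant(stage.astype(float))
      fit = sm.Logit(met_goal, X).fit(disp=0)
      odds_ratio = np.exp(fit.params[1])
      ci_lo, ci_hi = np.exp(fit.conf_int()[1])
      print(f"OR = {odds_ratio:.2f} (95% CI {ci_lo:.2f}-{ci_hi:.2f})")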

  17. Improved perovskite phototransistor prepared using multi-step annealing method

    Cao, Mingxuan; Zhang, Yating; Yu, Yu; Yao, Jianquan

    2018-02-01

    Organic-inorganic hybrid perovskites with good intrinsic physical properties have received substantial interest for solar cell and optoelectronic applications. However, perovskite films often suffer from low carrier mobility due to structural imperfections, including sharp grain boundaries and pinholes, which restrict their device performance and application potential. Here we demonstrate a straightforward strategy, based on a multi-step annealing process, to improve the performance of a perovskite photodetector. Annealing temperature and duration greatly affect the surface morphology and optoelectrical properties of the perovskites, which determine the device properties of the phototransistor. Perovskite films treated with the multi-step annealing method tend to be highly uniform, well crystallized and of high surface coverage, and exhibit stronger ultraviolet-visible absorption and photoluminescence spectra compared to perovskites prepared by the conventional one-step annealing process. The field-effect mobility of the photodetector prepared by the one-step direct annealing method is 0.121 (0.062) cm2 V-1 s-1 for holes (electrons), which increases to 1.01 (0.54) cm2 V-1 s-1 with the multi-step slow annealing method. Moreover, the perovskite phototransistors exhibit a fast photoresponse speed of 78 μs. This work focuses on the influence of annealing methods on the perovskite phototransistor rather than on obtaining its best parameters. These findings show that multi-step annealing is a feasible route to high-performance perovskite-based photodetectors.

  18. Strong Stability Preserving Two-step Runge–Kutta Methods

    Ketcheson, David I.; Gottlieb, Sigal; Macdonald, Colin B.

    2011-01-01

    We investigate the strong stability preserving (SSP) property of two-step Runge–Kutta (TSRK) methods. We prove that all SSP TSRK methods belong to a particularly simple subclass of TSRK methods, in which stages from the previous step are not used. We derive simple order conditions for this subclass. Whereas explicit SSP Runge–Kutta methods have order at most four, we prove that explicit SSP TSRK methods have order at most eight. We present explicit TSRK methods of up to eighth order that were found by numerical search. These methods have larger SSP coefficients than any known methods of the same order of accuracy and may be implemented in a form with relatively modest storage requirements. The usefulness of the TSRK methods is demonstrated through numerical examples, including integration of very high order weighted essentially non-oscillatory discretizations.
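
    The two-step schemes found by the authors' numerical search are not reproduced here; as a simpler member of the SSP family, the optimal three-stage, third-order SSP Runge-Kutta method of Shu and Osher is sketched below on a toy upwind advection problem:

      import numpy as np

      def ssprk3_step(f, u, t, dt):
          # optimal three-stage, third-order SSP Runge-Kutta (Shu-Osher) step
          u1 = u + dt * f(t, u)
          u2 = 0.75 * u + 0.25 * (u1 + dt * f(t + dt, u1))
          return u / 3.0 + (2.0 / 3.0) * (u2 + dt * f(t + 0.5 * dt, u2))

      # advection u_t + u_x = 0 with first-order upwind differences
      n, dt = 200, 0.004                         # dx = 1/n, so CFL = 0.8
      x = np.linspace(0.0, 1.0, n, endpoint=False)
      u = np.exp(-100.0 * (x - 0.5) ** 2)
      rhs = lambda t, v: -(v - np.roll(v, 1)) * n
      for _ in range(100):
          u = ssprk3_step(rhs, u, 0.0, dt)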

  1. Two-step Raman spectroscopy method for tumor diagnosis

    Zakharov, V. P.; Bratchenko, I. A.; Kozlov, S. V.; Moryatov, A. A.; Myakinin, O. O.; Artemyev, D. N.

    2014-05-01

    A two-step Raman spectroscopy phase method is proposed for the differential diagnosis of malignant tumors in skin and lung tissue. It includes detection of a malignant tumor within healthy tissue in the first step, followed by identification of the specific cancer type in the second step. The proposed phase method analyzes spectral intensity alterations in the 1300-1340 and 1640-1680 cm-1 Raman bands relative to the intensity of the 1450 cm-1 band in the first step, and relative differences between Raman intensities of the tumor area and of healthy skin closely adjacent to the lesion in the second step. It was tested on more than 40 ex vivo samples of lung tissue and more than 50 in vivo skin tumors. Linear discriminant analysis, quadratic discriminant analysis and support vector machines were used for tumor-type classification on the phase planes. It is shown that the two-step phase method reaches 88.9% sensitivity and 87.8% specificity for malignant melanoma diagnosis (skin cancer); 100% sensitivity and 81.5% specificity for adenocarcinoma diagnosis (lung cancer); and 90.9% sensitivity and 77.8% specificity for squamous cell carcinoma diagnosis (lung cancer).

  2. Comprehensive Approach to Verification and Validation of CFD Simulations Applied to Backward Facing Step-Application of CFD Uncertainty Analysis

    Groves, Curtis E.; Ilie, Marcel; Schallhorn, Paul A.

    2012-01-01

    There are inherent uncertainties and errors associated with using Computational Fluid Dynamics (CFD) to predict a flow field, and there is no standard method for evaluating uncertainty in the CFD community. This paper describes an approach to validate the uncertainty in using CFD. The method uses state-of-the-art uncertainty analysis applying different turbulence models, and draws conclusions on which models provide the least uncertainty and which models most accurately predict the flow over a backward facing step.

  3. Method of making stepped photographic density standards of radiographic photographs

    Borovin, I.V.; Kondina, M.A.

    1987-01-01

    In industrial radiography practice the need often arises for a prompt evaluation of the photographic density of an x-ray film. A method of making stepped photographic density standards for industrial radiography by contact printing from a negative is described. The method is intended for industrial radiation flaw detection laboratories not having specialized sensitometric equipment

  4. Practical procedure for method validation in INAA- A tutorial

    Petroni, Robson; Moreira, Edson G., E-mail: robsonpetroni@usp.br [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil)

    2015-07-01

    This paper describes the procedure employed by the Neutron Activation Laboratory at the Nuclear and Energy Research Institute (LAN, IPEN - CNEN/SP) for the validation of Instrumental Neutron Activation Analysis (INAA) methods. Following the recommendations of ISO/IEC 17025, the method performance characteristics (limit of detection, limit of quantification, trueness, repeatability, intermediate precision, reproducibility, selectivity, linearity and uncertainty budget) were outlined in an easy, fast and convenient way. The paper presents, step by step, how to calculate the required method performance characteristics in a process of method validation, together with the procedures, adopted strategies and acceptance criteria for the results - that is, how to perform a method validation in INAA. To exemplify the methodology, results are presented for the validation of the determination of Co, Cr, Fe, Rb, Se and Zn mass fractions in biological matrix samples, using an internal reference material of mussel tissue. It was concluded that the methodology applied for the validation of INAA methods is suitable, meeting all the requirements of ISO/IEC 17025 and thereby generating satisfactory results for the studies carried out at LAN, IPEN - CNEN/SP. (author)
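
    One ingredient of such a validation, the detection limit, is commonly estimated for counting measurements with Currie's formulas (decision threshold L_C = 2.33√B and detection limit L_D = 2.71 + 4.65√B for a paired blank, at the 95% confidence level). A minimal sketch with an illustrative background count:

      import math

      def currie_limits(background_counts):
          lc = 2.33 * math.sqrt(background_counts)         # decision threshold
          ld = 2.71 + 4.65 * math.sqrt(background_counts)  # detection limit
          return lc, ld

      lc, ld = currie_limits(400.0)
      print(f"L_C = {lc:.1f} counts, L_D = {ld:.1f} counts")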

  5. Considerations for the independent reaction times and step-by-step methods for radiation chemistry simulations

    Plante, Ianik; Devroye, Luc

    2017-10-01

    Ionizing radiation interacts with the water molecules of tissues mostly by ionizations and excitations, which result in the formation of the radiation track structure and the creation of radiolytic species such as H•, •OH, H2, H2O2, and e-aq. After their creation, these species diffuse and may chemically react with neighboring species and with the molecules of the medium; radiation chemistry is therefore of great importance in radiation biology. As the chemical species are not distributed homogeneously, conventional models of homogeneous reactions cannot completely describe the reaction kinetics of the particles. In practice, many simulations of radiation chemistry are done using the Independent Reaction Time (IRT) method, a very fast technique to calculate radiochemical yields that does not, however, calculate the positions of the radiolytic species as a function of time. Step-by-step (SBS) methods, which are able to provide such information, have been used only sparsely because they are computationally expensive. Recent improvements in computer performance now allow the regular use of the SBS method in radiation chemistry. The SBS and IRT methods are both based on the Green's functions of the diffusion equation (GFDE). In this paper, several sampling algorithms for the GFDE and for the IRT method are presented. We show that the IRT and SBS methods are exactly equivalent for two-particle systems for diffusion and partially diffusion-controlled reactions between non-interacting particles. We also show that the results obtained with the SBS simulation method with periodic boundary conditions are in agreement with the predictions of classical reaction kinetics theory, which is an important step towards using this method for the modelling of biochemical networks and metabolic pathways involved in oxidative stress. Finally, the first simulation results obtained with the code RITRACKS (Relativistic Ion Tracks) are presented.
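
    A hedged sketch of the SBS propagation step: each pair separation takes Gaussian displacements sampled from the free-diffusion Green's function, and a reaction is scored when the pair comes within the reaction radius. This is a crude absorbing-boundary check with illustrative parameters, not the paper's full partially diffusion-controlled treatment:

      import numpy as np

      rng = np.random.default_rng(3)

      def sbs_step(sep, D, dt):
          # free Brownian displacement of the pair-separation vectors
          return sep + np.sqrt(2.0 * D * dt) * rng.standard_normal(sep.shape)

      D_rel, R = 5.0e-9, 0.5e-9      # relative diffusion coeff. (m^2/s), reaction radius (m)
      dt, n_pairs = 1.0e-12, 10000   # 1 ps steps, 10000 independent pairs
      sep = np.zeros((n_pairs, 3))
      sep[:, 0] = 2.0e-9             # initial separation: 2 nm

      reacted = np.zeros(n_pairs, dtype=bool)
      for _ in range(1000):          # 1 ns total
          sep[~reacted] = sbs_step(sep[~reacted], D_rel, dt)
          reacted |= np.linalg.norm(sep, axis=1) < R
      print(f"reaction probability after 1 ns: {reacted.mean():.3f}")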

  6. Validation for chromatographic and electrophoretic methods

    Ribani, Marcelo; Bottoli, Carla Beatriz Grespan; Collins, Carol H.; Jardim, Isabel Cristina Sales Fontes; Melo, Lúcio Flávio Costa

    2004-01-01

    The validation of an analytical method is fundamental to implementing a quality control system in any analytical laboratory. As the separation techniques, GC, HPLC and CE, are often the principal tools used in such determinations, procedure validation is a necessity. The objective of this review is to describe the main aspects of validation in chromatographic and electrophoretic analysis, showing, in a general way, the similarities and differences between the guidelines established by the dif...

  7. Detecting free-living steps and walking bouts: validating an algorithm for macro gait analysis.

    Hickey, Aodhán; Del Din, Silvia; Rochester, Lynn; Godfrey, Alan

    2017-01-01

    Research suggests wearables and not instrumented walkways are better suited to quantify gait outcomes in clinic and free-living environments, providing a more comprehensive overview of walking due to continuous monitoring. Numerous validation studies in controlled settings exist, but few have examined the validity of wearables and associated algorithms for identifying and quantifying step counts and walking bouts in uncontrolled (free-living) environments. Studies which have examined free-living step and bout count validity found limited agreement due to variations in walking speed, changing terrain or task. Here we present a gait segmentation algorithm to define free-living step count and walking bouts from an open-source, high-resolution, accelerometer-based wearable (AX3, Axivity). Ten healthy participants (20-33 years) wore two portable gait measurement systems; a wearable accelerometer on the lower-back and a wearable body-mounted camera (GoPro HERO) on the chest, for 1 h on two separate occasions (24 h apart) during free-living activities. Step count and walking bouts were derived for both measurement systems and compared. For all participants during a total of almost 20 h of uncontrolled and unscripted free-living activity data, excellent relative (rho  ⩾  0.941) and absolute (ICC (2,1)   ⩾  0.975) agreement with no presence of bias were identified for step count compared to the camera (gold standard reference). Walking bout identification showed excellent relative (rho  ⩾  0.909) and absolute agreement (ICC (2,1)   ⩾  0.941) but demonstrated significant bias. The algorithm employed for identifying and quantifying steps and bouts from a single wearable accelerometer worn on the lower-back has been demonstrated to be valid and could be used for pragmatic gait analysis in prolonged uncontrolled free-living environments.
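
    The published algorithm is more elaborate than can be shown here; as a toy illustration of accelerometer-based step detection, peaks in the demeaned acceleration magnitude can be counted with a minimum height and spacing. The thresholds and the synthetic signal below are hypothetical:

      import numpy as np
      from scipy.signal import find_peaks

      def count_steps(acc, fs):
          mag = np.linalg.norm(acc, axis=1)
          mag -= mag.mean()
          # one step per magnitude peak; at most ~3.3 steps/s
          peaks, _ = find_peaks(mag, height=0.3, distance=int(0.3 * fs))
          return len(peaks)

      fs = 100.0                          # Hz, lower-back accelerometer
      t = np.arange(0.0, 10.0, 1.0 / fs)
      rng = np.random.default_rng(4)
      vertical = 9.81 + 0.5 * np.sin(2 * np.pi * 2.0 * t)   # ~2 steps/s gait
      acc = np.column_stack([0.05 * rng.standard_normal(t.size),
                             0.05 * rng.standard_normal(t.size),
                             vertical])
      print(count_steps(acc, fs), "steps in 10 s")          # expect about 20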

  8. Video-Recorded Validation of Wearable Step Counters under Free-living Conditions.

    Toth, Lindsay P; Park, Susan; Springer, Cary M; Feyerabend, McKenzie D; Steeves, Jeremy A; Bassett, David R

    2018-06-01

    The purpose of this study was to determine the accuracy of 14 step-counting methods under free-living conditions. Twelve adults (mean ± SD age, 35 ± 13 yr) wore a chest harness that held a GoPro camera pointed down at the feet during all waking hours for 1 d. The GoPro continuously recorded video of all steps taken throughout the day. Simultaneously, participants wore two StepWatch (SW) devices on each ankle (all programmed with different settings), one activPAL on each thigh, four devices at the waist (Fitbit Zip, Yamax Digi-Walker SW-200, New Lifestyles NL-2000, and ActiGraph GT9X (AG)), and two devices on the dominant and nondominant wrists (Fitbit Charge and AG). The GoPro videos were downloaded to a computer and researchers counted steps using a hand tally device, which served as the criterion method. The SW devices recorded between 95.3% and 102.8% of actual steps taken throughout the day (P > 0.05). Eleven step-counting methods estimated less than 100% of actual steps; the Fitbit Zip, Yamax Digi-Walker SW-200, and AG with the moving average vector magnitude algorithm on both wrists recorded 71% to 91% of steps (P > 0.05), whereas the activPAL, New Lifestyles NL-2000, and AG (without low-frequency extension (no-LFE), moving average vector magnitude) worn on the hip, and Fitbit Charge recorded 69% to 84% of steps (P < 0.05). The AG (LFE) on both wrists and the hip recorded 128% to 220% of steps (P < 0.05). Across all waking hours of 1 d, step counts differ between devices. The SW, regardless of settings, was the most accurate method of counting steps.

  9. The Method of Manufactured Universes for validating uncertainty quantification methods

    Stripling, H.F.; Adams, M.L.; McClarren, R.G.; Mallick, B.K.

    2011-01-01

    The Method of Manufactured Universes is presented as a validation framework for uncertainty quantification (UQ) methodologies and as a tool for exploring the effects of statistical and modeling assumptions embedded in these methods. The framework

  10. One step geometrical calibration method for optical coherence tomography

    Díaz, Jesús Díaz; Ortmaier, Tobias; Stritzel, Jenny; Rahlves, Maik; Reithmeier, Eduard; Roth, Bernhard; Majdani, Omid

    2016-01-01

    We present a novel one-step calibration methodology for geometrical distortion correction for optical coherence tomography (OCT). A calibration standard especially designed for OCT is introduced, which consists of an array of inverse pyramidal structures. The use of multiple landmarks situated on four different height levels on the pyramids allow performing a 3D geometrical calibration. The calibration procedure itself is based on a parametric model of the OCT beam propagation. It is validated by experimental results and enables the reduction of systematic errors by more than one order of magnitude. In future, our results can improve OCT image reconstruction and interpretation for medical applications such as real time monitoring of surgery. (paper)

  11. ASTM Validates Air Pollution Test Methods

    Chemical and Engineering News, 1973

    1973-01-01

    The American Society for Testing and Materials (ASTM) has validated six basic methods for measuring pollutants in ambient air as the first part of its Project Threshold. Aim of the project is to establish nationwide consistency in measuring pollutants; determining precision, accuracy and reproducibility of 35 standard measuring methods. (BL)

  12. Validated modified Lycopodium spore method development for ...

    Validated modified lycopodium spore method has been developed for simple and rapid quantification of herbal powdered drugs. Lycopodium spore method was performed on ingredients of Shatavaryadi churna, an ayurvedic formulation used as immunomodulator, galactagogue, aphrodisiac and rejuvenator. Estimation of ...

  13. Multiple-time-stepping generalized hybrid Monte Carlo methods

    Escribano, Bruno, E-mail: bescribano@bcamath.org [BCAM—Basque Center for Applied Mathematics, E-48009 Bilbao (Spain); Akhmatskaya, Elena [BCAM—Basque Center for Applied Mathematics, E-48009 Bilbao (Spain); IKERBASQUE, Basque Foundation for Science, E-48013 Bilbao (Spain); Reich, Sebastian [Universität Potsdam, Institut für Mathematik, D-14469 Potsdam (Germany); Azpiroz, Jon M. [Kimika Fakultatea, Euskal Herriko Unibertsitatea (UPV/EHU) and Donostia International Physics Center (DIPC), P.K. 1072, Donostia (Spain)

    2015-01-01

    Performance of the generalized shadow hybrid Monte Carlo (GSHMC) method [1], which proved to be superior in sampling efficiency over its predecessors [2–4], molecular dynamics and hybrid Monte Carlo, can be further improved by combining it with multiple-time-stepping (MTS) and mollification of slow forces. We demonstrate that these comparatively simple modifications not only lead to better performance of GSHMC itself but also allow it to beat the best-performing methods that use similar force-splitting schemes. In addition, we show that the same ideas can be successfully applied to the conventional generalized hybrid Monte Carlo method (GHMC). The resulting methods, MTS-GHMC and MTS-GSHMC, provide accurate reproduction of thermodynamic and dynamical properties, exact temperature control during simulation, and computational robustness and efficiency. MTS-GHMC uses a generalized momentum update to achieve weak stochastic stabilization of the molecular dynamics (MD) integrator. MTS-GSHMC adds the use of a shadow (modified) Hamiltonian to filter the MD trajectories in the HMC scheme. We introduce a new shadow Hamiltonian formulation adapted to force-splitting methods; the use of such Hamiltonians improves the acceptance rate of trajectories and has a strong impact on the sampling efficiency of the method. Both methods were implemented in the open-source MD package ProtoMol and were tested on a water system and a protein system. Results were compared to those obtained using a Langevin Molly (LM) method [5] on the same systems. The test results demonstrate the superiority of the new methods over LM in terms of stability, accuracy and sampling efficiency. This suggests that putting the MTS approach in the framework of hybrid Monte Carlo, and using the natural stochasticity offered by the generalized hybrid Monte Carlo, improves the stability of MTS and allows larger step sizes in the simulation of complex systems.
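
    The force-splitting core that MTS-GHMC and MTS-GSHMC build on can be illustrated by a two-level, r-RESPA-style velocity-Verlet integrator. The sketch below (toy potentials, unit masses) shows only the splitting; the momentum updates and shadow-Hamiltonian filtering of the actual methods are omitted:

      import numpy as np

      def respa_step(q, p, f_fast, f_slow, dt, n_inner):
          # slow forces kick at the outer step, fast forces at the inner step
          p = p + 0.5 * dt * f_slow(q)
          h = dt / n_inner
          for _ in range(n_inner):
              p = p + 0.5 * h * f_fast(q)
              q = q + h * p                 # unit masses
              p = p + 0.5 * h * f_fast(q)
          p = p + 0.5 * dt * f_slow(q)
          return q, p

      # toy system: stiff harmonic bond (fast) plus weak harmonic trap (slow)
      f_fast = lambda q: -100.0 * q
      f_slow = lambda q: -0.1 * q
      q, p = np.array([1.0]), np.array([0.0])
      for _ in range(1000):
          q, p = respa_step(q, p, f_fast, f_slow, 0.05, n_inner=10)
      print("energy:", 0.5 * (p @ p) + 0.5 * 100.1 * (q @ q))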

  14. Aerial robot intelligent control method based on back-stepping

    Zhou, Jian; Xue, Qian

    2018-05-01

    The aerial robot is characterized by strong nonlinearity, high coupling and parameter uncertainty; a self-adaptive back-stepping control method based on a neural network is proposed in this paper. The uncertain part of the aerial robot model is compensated online by a Cerebellar Model Articulation Controller (CMAC) neural network, and robust control terms are designed to overcome the uncertainty error of the system during online learning. At the same time, a particle swarm algorithm is used to optimize and fix parameters so as to improve the dynamic performance, and the control law is obtained by back-stepping recursion. Simulation results show that the designed control law achieves the desired attitude-tracking performance and good robustness in the presence of uncertainties and large errors in the model parameters.

  15. Validity of a Newly-Designed Rectilinear Stepping Ergometer Submaximal Exercise Test to Assess Cardiorespiratory Fitness

    Rubin Zhang, Likui Zhan, Shaoming Sun, Wei Peng, Yining Sun

    2017-01-01

    The maximum oxygen uptake (V̇O2 max), determined from graded maximal or submaximal exercise tests, is used to classify the cardiorespiratory fitness level of individuals. The purpose of this study was to examine the validity and reliability of the YMCA submaximal exercise test protocol performed on a newly-designed rectilinear stepping ergometer (RSE) that used up and down reciprocating vertical motion in place of conventional circular motion and giving precise measurement of workload, to det...

  16. Method Validation Procedure in Gamma Spectroscopy Laboratory

    El Samad, O.; Baydoun, R.

    2008-01-01

    The present work describes the methodology followed for the application of the ISO 17025 standard in the gamma spectroscopy laboratory at the Lebanese Atomic Energy Commission, including the management and technical requirements. A set of documents, written procedures and records was prepared to achieve the management part. For the technical requirements, internal method validation was applied through the estimation of trueness, repeatability, minimum detectable activity and combined uncertainty, while participation in IAEA proficiency tests assures the external method validation, especially as the gamma spectroscopy lab is a member of the ALMERA network (Analytical Laboratories for the Measurement of Environmental Radioactivity). Some of these results are presented in this paper. (author)

  17. Recursive regularization step for high-order lattice Boltzmann methods

    Coreixas, Christophe; Wissocq, Gauthier; Puigt, Guillaume; Boussuge, Jean-François; Sagaut, Pierre

    2017-09-01

    A lattice Boltzmann method (LBM) with enhanced stability and accuracy is presented for various Hermite tensor-based lattice structures. The collision operator relies on a regularization step, which is here improved through a recursive computation of nonequilibrium Hermite polynomial coefficients. In addition to the reduced computational cost of this procedure with respect to the standard one, the recursive step makes it possible to considerably enhance the stability and accuracy of the numerical scheme by properly filtering out second- (and higher-) order nonhydrodynamic contributions in under-resolved conditions. This is first shown in the isothermal case, where the simulation of the doubly periodic shear layer is performed with Reynolds numbers ranging from 10^4 to 10^6, and where a thorough analysis of the case at Re = 3 × 10^4 is conducted. In the latter, results obtained using both regularization steps are compared against the Bhatnagar-Gross-Krook LBM for standard (D2Q9) and high-order (D2V17 and D2V37) lattice structures, confirming the tremendous increase in the stability range of the proposed approach. Further comparisons on thermal and fully compressible flows, using the general extension of this procedure, are then conducted through the numerical simulation of Sod shock tubes with the D2V37 lattice. They confirm the stability increase induced by the recursive approach as compared with the standard one.

  18. Validation of the ADAMO Care Watch for step counting in older adults.

    Magistro, Daniele; Brustio, Paolo Riccardo; Ivaldi, Marco; Esliger, Dale Winfield; Zecca, Massimiliano; Rainoldi, Alberto; Boccia, Gennaro

    2018-01-01

    Accurate measurement devices are required to objectively quantify physical activity. Wearable activity monitors, such as pedometers, may serve as affordable and feasible instruments for measuring physical activity levels in older adults during their normal activities of daily living. Few of the currently available accelerometer-based step-counting devices have been shown to be accurate at slow walking speeds, so appropriate devices tailored to the slow ambulation typical of older adults are still lacking. This study aimed to assess the validity of step counting using the pedometer function of the ADAMO Care Watch, which contains an embedded algorithm for measuring physical activity in older adults. Twenty older adults aged ≥ 65 years (mean ± SD, 75 ± 7 years; range, 68-91) and 20 young adults (25 ± 5 years; range, 20-40) wore a care watch on each wrist and performed a number of randomly ordered tasks: walking at slow, normal and fast self-paced speeds; a Timed Up and Go test (TUG); a step test; and ascending/descending stairs. The criterion measure was the actual number of steps observed, counted with a manual tally counter. Absolute percentage error scores, intraclass correlation coefficients (ICC), and Bland-Altman plots were used to assess validity. The ADAMO Care Watch demonstrated high validity at slow and normal speeds (range 0.5-1.5 m/s), showing an absolute error from 1.3% to 1.9% in the older adult group and from 0.7% to 2.7% in the young adult group. The percentage error for the 30-metre walking tasks increased with faster pace in both the young adult (17%) and older adult (6%) groups. In the TUG test, there was less error in the steps recorded for older adults (1.3% to 2.2%) than for the young adults (6.6% to 7.2%). For the total sample, the ICCs for the ADAMO Care Watch for the 30-metre walking tasks at each speed and for the TUG test ranged between 0.931 and 0.985. These findings provide evidence that the ADAMO Care Watch demonstrated highly accurate
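
    The validity statistics reported above are simple to compute; a hedged sketch of the absolute percentage error and Bland-Altman bias/limits of agreement on hypothetical step counts:

      import numpy as np

      truth  = np.array([50, 120, 200, 310, 145], dtype=float)  # tally counts
      device = np.array([49, 118, 204, 305, 147], dtype=float)  # watch counts

      mape = 100.0 * np.mean(np.abs(device - truth) / truth)
      diff = device - truth
      loa = 1.96 * diff.std(ddof=1)
      print(f"MAPE = {mape:.1f}%")
      print(f"Bland-Altman bias = {diff.mean():.1f} steps, "
            f"limits of agreement = {diff.mean() - loa:.1f} to {diff.mean() + loa:.1f}")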

  19. An Improved Split-Step Wavelet Transform Method for Anomalous Radio Wave Propagation Modelling

    A. Iqbal

    2014-12-01

    Anomalous tropospheric propagation caused by the ducting phenomenon is a major problem in wireless communication, so it is important to study the behavior of radio wave propagation in tropospheric ducts. The Parabolic Wave Equation (PWE) method is considered the most reliable way to model anomalous radio wave propagation. In this work, an improved Split-Step Wavelet transform Method (SSWM) is presented to solve the PWE for the modeling of tropospheric propagation over finite and infinite conductive surfaces. A large number of numerical experiments are carried out to validate the performance of the proposed algorithm. The developed algorithm is compared with previously published techniques: the Wavelet Galerkin Method (WGM) and the Split-Step Fourier transform Method (SSFM). Very good agreement is found between the SSWM and the published techniques. It is also observed that the proposed algorithm is about 18 times faster than the WGM and provides more details of propagation effects compared to the SSFM.
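
    The proposed SSWM replaces the transform step with wavelets; for orientation only, a minimal split-step Fourier (SSFM-style) marcher for the standard parabolic equation is sketched below, with an illustrative grid, a simple earth-flattening modified index, no duct and no absorbing layer:

      import numpy as np

      n_z, dz, dx = 1024, 0.5, 50.0   # height points, height step (m), range step (m)
      k0 = 2 * np.pi / 0.3            # free-space wavenumber at ~1 GHz
      z = np.arange(n_z) * dz
      kz = 2 * np.pi * np.fft.fftfreq(n_z, d=dz)

      m = 1.0 + z / 6.371e6           # earth-flattening modified index (no duct)
      u = np.exp(-((z - 100.0) / 20.0) ** 2).astype(complex)   # Gaussian source

      for _ in range(200):            # march 200 range steps
          u = np.fft.ifft(np.exp(-1j * kz**2 * dx / (2 * k0)) * np.fft.fft(u))  # diffraction
          u *= np.exp(1j * k0 * (m - 1.0) * dx)                                 # refraction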

  1. Moving beyond Traditional Methods of Survey Validation

    Maul, Andrew

    2017-01-01

    In his focus article, "Rethinking Traditional Methods of Survey Validation," published in this issue of "Measurement: Interdisciplinary Research and Perspectives," Andrew Maul wrote that it is commonly believed that self-report, survey-based instruments can be used to measure a wide range of psychological attributes, such as…

  2. A two-step method for rapid characterization of electroosmotic flows in capillary electrophoresis.

    Zhang, Wenjing; He, Muyi; Yuan, Tao; Xu, Wei

    2017-12-01

    The measurement of electroosmotic flow (EOF) is important in a capillary electrophoresis (CE) experiment in terms of performance optimization and stability improvement. Although several methods exist, there is a pressing need to accurately characterize ultra-low electroosmotic flow rates (EOF rates), such as those in the coated capillaries used in protein separations. In this work, a new method, called the two-step method, was developed to accurately and rapidly measure EOF rates in a capillary, especially the ultra-low EOF rates of coated capillaries. In this two-step method, the EOF rates are calculated by measuring the migration time difference of a neutral marker in two consecutive experiments, in which a pressure-driven flow is introduced to accelerate the migration and the DC voltage is reversed to switch the EOF direction. Uncoated capillaries were first characterized by both this two-step method and a conventional method to confirm the validity of the new method, which was then applied in the study of coated capillaries. Results show that this new method is not only fast but also more accurate. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
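
    Assuming the pressure-driven contribution to the marker velocity is the same in both runs while the voltage-driven (EOF) contribution reverses sign with the polarity, the EOF velocity follows from the two migration times alone. A hedged arithmetic sketch (all values assumed, not taken from the paper):

      # run 1: v1 = v_pressure + v_eof;  run 2 (reversed voltage): v2 = v_pressure - v_eof
      L_eff = 0.40            # effective capillary length to the detector, m (assumed)
      t1, t2 = 480.0, 520.0   # neutral-marker migration times, s (assumed)

      v1, v2 = L_eff / t1, L_eff / t2
      v_eof = 0.5 * (v1 - v2)          # pressure contribution cancels
      print(f"EOF velocity = {v_eof * 1e6:.2f} um/s")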

  3. Validation of patient determined disease steps (PDDS) scale scores in persons with multiple sclerosis.

    Learmonth, Yvonne C; Motl, Robert W; Sandroff, Brian M; Pula, John H; Cadavid, Diego

    2013-04-25

    The Patient Determined Disease Steps (PDDS) is a promising patient-reported outcome (PRO) of disability in multiple sclerosis (MS). To date, there is limited evidence regarding the validity of PDDS scores, despite its sound conceptual development and broad inclusion in MS research. This study examined the validity of the PDDS based on (1) the association with Expanded Disability Status Scale (EDSS) scores and (2) the pattern of associations between PDDS and EDSS scores with Functional System (FS) scores as well as ambulatory and other outcomes. 96 persons with MS provided demographic/clinical information, completed the PDDS and other PROs including the Multiple Sclerosis Walking Scale-12 (MSWS-12), and underwent a neurological examination for generating FS and EDSS scores. Participants completed assessments of cognition and ambulation including the 6-minute walk (6MW), and wore an accelerometer during waking hours over seven days. There was a strong correlation between EDSS and PDDS scores (ρ = .783). PDDS and EDSS scores were strongly correlated with Pyramidal (ρ = .578 and ρ = .647, respectively) and Cerebellar (ρ = .501 and ρ = .528, respectively) FS scores as well as 6MW distance (ρ = .704 and ρ = .805, respectively), MSWS-12 scores (ρ = .801 and ρ = .729, respectively), and accelerometer steps/day (ρ = -.740 and ρ = -.717, respectively). This study provides novel evidence supporting the PDDS as a valid PRO of disability in MS.

  4. Validation of the Rotation Ratios Method

    Foss, O.A.; Klaksvik, J.; Benum, P.; Anda, S.

    2007-01-01

    Background: The rotation ratios method describes rotations between pairs of sequential pelvic radiographs. The method seems promising but has not been validated. Purpose: To validate the accuracy of the rotation ratios method. Material and Methods: Known pelvic rotations between 165 radiographs obtained from five skeletal pelvises in an experimental material were compared with the corresponding calculated rotations to describe the accuracy of the method. The results from a clinical material of 262 pelvic radiographs from 46 patients defined the ranges of rotational differences compared. Repeated analyses, both on the experimental and the clinical material, were performed using the selected reference points to describe the robustness and the repeatability of the method. Results: The reference points were easy to identify and barely influenced by pelvic rotations. The mean differences between calculated and real pelvic rotations were 0.0 deg (SD 0.6) for vertical rotations and 0.1 deg (SD 0.7) for transversal rotations in the experimental material. The intra- and interobserver repeatability of the method was good. Conclusion: The accuracy of the method was reasonably high, and the method may prove to be clinically useful

  5. One-step method for the production of nanofluids

    Kostic, Milivoje [Chicago, IL; Golubovic, Mihajlo [Chicago, IL; Hull, John R [Downers Grove, IL; Choi, Stephen U. S. [Naperville, IL

    2010-05-18

    A one-step method and system for producing nanofluids by particle-source evaporation and deposition of the evaporant into a base fluid. The base fluid (e.g., ethylene glycol) is placed in a rotating cylindrical drum having an adjustable heater-boat-evaporator and a heat exchanger-cooler apparatus. As the drum rotates, a thin liquid layer is formed on the inside surface of the drum. A heater-boat-evaporator holding an evaporant material (the particle source) is adjustably positioned near a portion of the rotating thin liquid layer; the evaporant material is heated, evaporating a portion of it, and the evaporated material is absorbed by the liquid film to form a nanofluid.

  6. Model-Based Method for Sensor Validation

    Vatan, Farrokh

    2012-01-01

    Fault detection, diagnosis, and prognosis are essential tasks in the operation of autonomous spacecraft, instruments, and in situ platforms. One of NASA's key mission requirements is robust state estimation. Sensing, using a wide range of sensors and sensor fusion approaches, plays a central role in robust state estimation, and there is a need to diagnose sensor failure as well as component failure. Sensor validation can be considered part of the larger effort of improving reliability and safety. The standard methods for solving the sensor validation problem are based on probabilistic analysis of the system, of which the method based on Bayesian networks is the most popular. Such methods can only predict the most probable faulty sensors, subject to the initial probabilities defined for the failures. The method developed in this work is based on a model-based approach and provides the faulty sensors (if any) that can be logically inferred from the model of the system and the sensor readings (observations). The method is also more suitable for systems where it is hard, or even impossible, to find the probability functions of the system. The method starts with a new mathematical description of the problem and develops a very efficient and systematic algorithm for its solution. The method builds on the concepts of analytical redundancy relations (ARRs).

  7. A Spiral Step-by-Step Educational Method for Cultivating Competent Embedded System Engineers to Meet Industry Demands

    Jing, Lei; Cheng, Zixue; Wang, Junbo; Zhou, Yinghui

    2011-01-01

    Embedded system technologies are undergoing dramatic change. Competent embedded system engineers are becoming a scarce resource in the industry. Given this, universities should revise their specialist education to meet industry demands. In this paper, a spirally tight-coupled step-by-step educational method, based on an analysis of industry…

  8. A Renormalisation Group Method. V. A Single Renormalisation Group Step

    Brydges, David C.; Slade, Gordon

    2015-05-01

    This paper is the fifth in a series devoted to the development of a rigorous renormalisation group method applicable to lattice field theories containing boson and/or fermion fields, and comprises the core of the method. In the renormalisation group method, increasingly large scales are studied in a progressive manner, with an interaction parametrised by a field polynomial which evolves with the scale under the renormalisation group map. In our context, the progressive analysis is performed via a finite-range covariance decomposition. Perturbative calculations are used to track the flow of the coupling constants of the evolving polynomial, but on their own perturbative calculations are insufficient to control error terms and to obtain mathematically rigorous results. In this paper, we define an additional non-perturbative coordinate, which together with the flow of coupling constants defines the complete evolution of the renormalisation group map. We specify conditions under which the non-perturbative coordinate is contractive under a single renormalisation group step. Our framework is essentially combinatorial, but its implementation relies on analytic results developed earlier in the series of papers. The results of this paper are applied elsewhere to analyse the critical behaviour of the 4-dimensional continuous-time weakly self-avoiding walk and of the 4-dimensional n-component |φ|^4 model. In particular, the existence of a logarithmic correction to mean-field scaling for the susceptibility can be proved for both models, together with other facts about critical exponents and critical behaviour.

  9. Spacecraft early design validation using formal methods

    Bozzano, Marco; Cimatti, Alessandro; Katoen, Joost-Pieter; Katsaros, Panagiotis; Mokos, Konstantinos; Nguyen, Viet Yen; Noll, Thomas; Postma, Bart; Roveri, Marco

    2014-01-01

    The size and complexity of software in spacecraft are increasing exponentially, and this trend complicates its validation within the context of the overall spacecraft system. Current validation methods are labor-intensive as they rely on manual analysis, review and inspection. For future space missions, we developed – with challenging requirements from the European space industry – a novel modeling language and toolset for a (semi-)automated validation approach. Our modeling language is a dialect of AADL and enables engineers to express the system, the software, and their reliability aspects. The COMPASS toolset utilizes state-of-the-art model checking techniques, both qualitative and probabilistic, for the analysis of requirements related to functional correctness, safety, dependability and performance. Several pilot projects have been performed by industry, two of them having focused on the system level of a satellite platform in development. Our efforts resulted in a significant advancement of validating spacecraft designs from several perspectives, using a single integrated system model. The associated technology readiness level increased from level 1 (basic concepts and ideas) to early level 4 (laboratory-tested).

  10. Influence of the drying method in chitosans purification step

    Fonseca, Ana C.M.; Batista, Jorge G.S.; Bettega, Antonio; Lima, Nelson B. de

    2015-01-01

    Currently, the study of extracellular biopolymer properties has gained prominence because these materials are easy to extract and purify. Chitosan has been an attractive proposition for applications in various fields such as engineering, biotechnology, medicine and pharmacology. For such applications, chitosan must be purified to obtain a more concentrated product that is free of undesirable impurities. However, morphological and physicochemical changes may occur at this stage of the process of obtaining the biopolymer. This study evaluated the influence of the drying process after purification of a commercial chitosan sample, the importance of this step, and its cost/benefit in applications requiring a high degree of purity. The drying method influenced the organoleptic properties and the main characteristics of the material. Analysis of the crystal structure by X-ray diffraction showed that the degree of crystallinity, X (%), in the purified chitosan samples was lower when compared with the unpurified sample. The degree of acetylation, DA (%), analyzed by infrared spectroscopy, showed no significant changes across the three drying methods assessed, unlike the viscosimetric molecular weight, M_v, determined by capillary viscometry. (author)

  11. Validity of activity trackers, smartphones, and phone applications to measure steps in various walking conditions.

    Höchsmann, C; Knaier, R; Eymann, J; Hintermann, J; Infanger, D; Schmidt-Trucksäss, A

    2018-02-20

    To examine the validity of popular smartphone accelerometer applications and a consumer activity wristband compared to a widely used research accelerometer, while assessing the impact of the phone's position on the accuracy of step detection. Twenty volunteers from 2 different age groups (Group A: 18-25 years, n = 10; Group B: 45-70 years, n = 10) were equipped with 3 iPhone SE smartphones (placed in pants pocket, shoulder bag, and backpack), 1 Samsung Galaxy S6 Edge (pants pocket), 1 Garmin Vivofit 2 wristband, and 2 ActiGraph wGTX+ devices (worn at wrist and hip) while walking on a treadmill (1.6, 3.2, 4.8, and 6.0 km/h) and completing a walking course. All smartphones ran 6 accelerometer applications. Video observation was used as the gold standard. Validity was evaluated by comparing each device with the gold standard using mean absolute percentage errors (MAPE). The MAPE of the iPhone SE (all positions) and the Garmin Vivofit was small across conditions, whereas the Samsung Galaxy and the hip-worn ActiGraph showed small MAPE only for treadmill walking at 4.8 and 6.0 km/h and for free walking. The wrist-worn ActiGraph showed high MAPE (17-47%) for all walking conditions. The iPhone SE and the Garmin Vivofit 2 are accurate tools for step counting in different age groups and during various walking conditions, even during slow walking. The phone's position does not impact the accuracy of step detection, which substantially improves the versatility of physical activity assessment in clinical and research settings. © 2018 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
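
    The validity metric used in this record (and in several below) is the mean absolute percentage error against a gold standard. A minimal sketch with invented per-trial counts:

```python
# MAPE of device step counts against gold-standard (video-observed) counts.

def mape(device_steps, video_steps):
    """Mean absolute percentage error (%) across trials."""
    errors = [abs(d - v) / v * 100.0 for d, v in zip(device_steps, video_steps)]
    return sum(errors) / len(errors)

video  = [512, 498, 530, 505]   # hypothetical gold-standard counts per trial
device = [505, 501, 520, 490]   # hypothetical counts reported by a tracker
print(f"MAPE = {mape(device, video):.1f}%")
```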

  12. Evaluation of lung and chest wall mechanics during anaesthesia using the PEEP-step method.

    Persson, P; Stenqvist, O; Lundin, S

    2018-04-01

    Postoperative pulmonary complications are common, and lung and chest wall mechanics differ between patients. Individualised mechanical ventilation based on measurement of transpulmonary pressures would be a step forward. A previously described method evaluates lung and chest wall mechanics from a change in PEEP (ΔPEEP) and calculation of the resulting change in end-expiratory lung volume (ΔEELV). The aim of the present study was to validate this PEEP-step method (PSM) during general anaesthesia by comparing it with the conventional method using oesophageal pressure (PES) measurements. In 24 lung-healthy subjects (BMI 18.5-32), three different sizes of PEEP steps were performed during general anaesthesia and the ΔEELVs were calculated. Transpulmonary driving pressure (ΔPL) for a tidal volume equal to each ΔEELV was measured using PES measurements and compared to ΔPEEP with limits of agreement and intraclass correlation coefficients (ICC). ΔPL calculated with both methods was compared with a Bland-Altman plot. Mean differences between ΔPEEP and ΔPL were small, and the variation in mechanical properties among the lung-healthy patients stresses the need for individualised ventilator settings based on measurements of lung and chest wall mechanics. The agreement between ΔPLs measured by the two methods during general anaesthesia supports the use of the non-invasive PSM in this patient population. NCT 02830516. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.
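
    The arithmetic implied by the PSM premise is compact: if a tidal volume equal to the measured ΔEELV produces a transpulmonary driving pressure equal to ΔPEEP, lung compliance can be taken as ΔEELV/ΔPEEP. The sketch below uses invented values and is not the authors' measurement procedure.

```python
# PEEP-step method arithmetic (illustrative values, assumed relationships):
# C_L = dEELV / dPEEP, so dPL for a tidal volume V_T is V_T / C_L,
# and for V_T = dEELV this reduces to dPEEP itself.

d_peep = 4.0      # size of the PEEP step, cmH2O
d_eelv = 320.0    # measured change in end-expiratory lung volume, mL

c_lung = d_eelv / d_peep                   # lung compliance, mL/cmH2O
for v_t in (320.0, 400.0, 480.0):
    d_pl = v_t / c_lung                    # transpulmonary driving pressure
    print(f"V_T = {v_t:.0f} mL -> dPL = {d_pl:.1f} cmH2O")
```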

  13. A Novel Motion Compensation Method for Random Stepped Frequency Radar with M-sequence

    Liao, Zhikun; Hu, Jiemin; Lu, Dawei; Zhang, Jun

    2018-01-01

    The random stepped frequency radar is a new kind of synthetic wideband radar. Research has shown that it possesses a thumbtack-like ambiguity function, which is considered ideal. This also means that only precise motion compensation can produce a correct high-resolution range profile. In this paper, we first introduce the M-sequence-coded random stepped frequency radar and briefly analyse the effect of relative motion between target and radar on range imaging, known as the defocusing problem. Then a novel motion compensation method, named complementary code cancellation, is put forward to solve this problem. Finally, simulated experiments demonstrate its validity and computational analysis shows its efficiency.
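
    The defocusing problem described above can be reproduced numerically. The sketch below is not the paper's complementary code cancellation; it only shows, for invented radar parameters and a linearly stepped waveform, how target motion smears the synthetic range profile and how a velocity-matched phase correction restores it.

```python
import numpy as np

c = 3e8
N, df, T = 64, 1e6, 1e-4                 # steps, frequency step (Hz), PRI (s)
f = 10e9 + df * np.arange(N)             # carrier frequency at each step
t = T * np.arange(N)
R0, v = 1500.0, 250.0                    # initial range (m), radial velocity (m/s)

# echo phase at step n: -4*pi*f_n*(R0 + v*t_n)/c; motion adds phase errors
echo = np.exp(-1j * 4 * np.pi * f * (R0 + v * t) / c)

profile_raw = np.abs(np.fft.ifft(echo))                 # defocused profile
comp = np.exp(+1j * 4 * np.pi * f * v * t / c)          # motion compensation
profile_comp = np.abs(np.fft.ifft(echo * comp))         # refocused profile

print(f"peak-to-mean, uncompensated: {profile_raw.max() / profile_raw.mean():.1f}")
print(f"peak-to-mean, compensated:   {profile_comp.max() / profile_comp.mean():.1f}")
```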

  14. Robustness study in SSNTD method validation: indoor radon quality

    Dias, D.C.S.; Silva, N.C.; Bonifácio, R.L.

    2017-01-01

    Quality control practices are indispensable to organizations aiming to reach analytical excellence. Method validation is an essential component of quality systems in laboratories, serving as a powerful tool for standardization and reliability of outcomes. This paper presents a study of robustness conducted over an SSNTD technique validation process, with the goal of developing indoor radon measurements at the highest level of quality. This quality parameter indicates how well a technique is able to provide reliable results in the face of unexpected variations along the measurement. In this robustness study, based on the Youden method, 7 analytical conditions pertaining to different phases of the SSNTD technique (with focus on detector etching) were selected. Based on the ideal values for each condition as reference, extreme levels regarded as high and low were prescribed to each condition. A partial factorial design of 8 unique etching procedures was defined, where each presented its own set of high and low condition values. The Youden test provided 8 indoor radon concentration results, which allowed percentage estimations that indicate the potential influence of each analytical condition on the SSNTD technique. As expected, detector etching factors such as etching solution concentration, temperature and immersion time were identified as the most critical parameters to the technique. Detector etching is a critical step in the SSNTD method – one that must be carefully designed during validation and meticulously controlled throughout the entire process. (author)
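
    The effect estimates in a Youden-style robustness test are simple to compute once the two-level design is laid out: each factor's effect is the mean result at its high level minus the mean at its low level. The 8-run matrix below is a standard Plackett-Burman design, and the results are invented, not the study's data.

```python
import numpy as np

design = np.array([          # 8 runs x 7 factors; +1 = high level, -1 = low
    [+1, +1, +1, -1, +1, -1, -1],
    [-1, +1, +1, +1, -1, +1, -1],
    [-1, -1, +1, +1, +1, -1, +1],
    [+1, -1, -1, +1, +1, +1, -1],
    [-1, +1, -1, -1, +1, +1, +1],
    [+1, -1, +1, -1, -1, +1, +1],
    [+1, +1, -1, +1, -1, -1, +1],
    [-1, -1, -1, -1, -1, -1, -1],
])
results = np.array([102.0, 98.0, 101.0, 99.0, 97.0, 103.0, 100.0, 99.0])

# mean(high) - mean(low) for each factor, in one matrix product
effects = design.T @ results / (len(results) / 2)
for i, e in enumerate(effects, start=1):
    print(f"factor {i}: effect = {e:+.2f}")
```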

  15. Robustness study in SSNTD method validation: indoor radon quality

    Dias, D.C.S.; Silva, N.C.; Bonifácio, R.L., E-mail: danilacdias@gmail.com [Comissao Nacional de Energia Nuclear (LAPOC/CNEN), Pocos de Caldas, MG (Brazil). Laboratorio de Pocos de Caldas

    2017-07-01

    Quality control practices are indispensable to organizations aiming to reach analytical excellence. Method validation is an essential component of quality systems in laboratories, serving as a powerful tool for standardization and reliability of outcomes. This paper presents a study of robustness conducted over an SSNTD technique validation process, with the goal of developing indoor radon measurements at the highest level of quality. This quality parameter indicates how well a technique is able to provide reliable results in the face of unexpected variations along the measurement. In this robustness study, based on the Youden method, 7 analytical conditions pertaining to different phases of the SSNTD technique (with focus on detector etching) were selected. Based on the ideal values for each condition as reference, extreme levels regarded as high and low were prescribed to each condition. A partial factorial design of 8 unique etching procedures was defined, where each presented its own set of high and low condition values. The Youden test provided 8 indoor radon concentration results, which allowed percentage estimations that indicate the potential influence of each analytical condition on the SSNTD technique. As expected, detector etching factors such as etching solution concentration, temperature and immersion time were identified as the most critical parameters to the technique. Detector etching is a critical step in the SSNTD method – one that must be carefully designed during validation and meticulously controlled throughout the entire process. (author)

  16. Space Suit Joint Torque Measurement Method Validation

    Valish, Dana; Eversley, Karina

    2012-01-01

    In 2009 and early 2010, a test method was developed and performed to quantify the torque required to manipulate joints in several existing operational and prototype space suits. This was done in an effort to develop joint torque requirements appropriate for a new Constellation Program space suit system. The same test method was levied on the Constellation space suit contractors to verify that their suit design met the requirements. However, because the original test was set up and conducted by a single test operator there was some question as to whether this method was repeatable enough to be considered a standard verification method for Constellation or other future development programs. In order to validate the method itself, a representative subset of the previous test was repeated, using the same information that would be available to space suit contractors, but set up and conducted by someone not familiar with the previous test. The resultant data was compared using graphical and statistical analysis; the results indicated a significant variance in values reported for a subset of the re-tested joints. Potential variables that could have affected the data were identified and a third round of testing was conducted in an attempt to eliminate and/or quantify the effects of these variables. The results of the third test effort will be used to determine whether or not the proposed joint torque methodology can be applied to future space suit development contracts.

  17. Strong Stability Preserving Explicit Linear Multistep Methods with Variable Step Size

    Hadjimichael, Yiannis; Ketcheson, David I.; Loczi, Lajos; Németh, Adrián

    2016-01-01

    Strong stability preserving (SSP) methods are designed primarily for time integration of nonlinear hyperbolic PDEs, for which the permissible SSP step size varies from one step to the next. We develop the first SSP linear multistep methods (of order two and three) with variable step size, and prove their optimality, stability, and convergence.

  18. One step linear reconstruction method for continuous wave diffuse optical tomography

    Ukhrowiyah, N.; Yasin, M.

    2017-09-01

    A one-step linear reconstruction method for continuous wave diffuse optical tomography is proposed and demonstrated for polyvinyl chloride-based material and a breast phantom. The approximation used in this method involves selecting a regularization coefficient and evaluating the difference between two states corresponding to the data acquired without and with a change in optical properties. The method is used to recover optical parameters from measured boundary data of light propagation in the object. The research is demonstrated with simulation and experimental data: a numerical object is used to produce the simulation data, while polyvinyl chloride-based material and a breast phantom sample are used to produce the experimental data. Comparisons between experimental and simulation results are conducted to validate the proposed method. The results show that the image produced by the one-step linear reconstruction method closely matches the original object. This approach provides a means of imaging that is sensitive to changes in optical properties, which may be particularly useful for functional imaging with continuous wave diffuse optical tomography in the early diagnosis of breast cancer.
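
    The one-step linear reconstruction described above amounts to a single regularized linear solve on difference data. The sketch below uses a random placeholder Jacobian; in the actual method the sensitivity matrix comes from a photon propagation (diffusion) model, and the regularization coefficient is the user-selected quantity the abstract mentions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_meas, n_vox = 32, 100
J = rng.normal(size=(n_meas, n_vox))     # placeholder sensitivity matrix

dx_true = np.zeros(n_vox)
dx_true[40:45] = 0.1                     # localized change in optical properties
dy = J @ dx_true + 0.01 * rng.normal(size=n_meas)   # data difference y1 - y0

lam = 1.0                                # regularization coefficient (user-chosen)
dx = np.linalg.solve(J.T @ J + lam * np.eye(n_vox), J.T @ dy)   # one-step solve
print("strongest recovered voxels:", np.sort(np.argsort(dx)[-5:]))
```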

  19. Comparison of validation methods for forming simulations

    Schug, Alexander; Kapphan, Gabriel; Bardl, Georg; Hinterhölzl, Roland; Drechsler, Klaus

    2018-05-01

    The forming simulation of fibre reinforced thermoplastics could reduce the development time and improve the forming results. But to take advantage of the full potential of the simulations it has to be ensured that the predictions for material behaviour are correct. For that reason, a thorough validation of the material model has to be conducted after characterising the material. Relevant aspects for the validation of the simulation are for example the outer contour, the occurrence of defects and the fibre paths. To measure these features various methods are available. Most relevant and also most difficult to measure are the emerging fibre orientations. For that reason, the focus of this study was on measuring this feature. The aim was to give an overview of the properties of different measuring systems and select the most promising systems for a comparison survey. Selected were an optical, an eddy current and a computer-assisted tomography system with the focus on measuring the fibre orientations. Different formed 3D parts made of unidirectional glass fibre and carbon fibre reinforced thermoplastics were measured. Advantages and disadvantages of the tested systems were revealed. Optical measurement systems are easy to use, but are limited to the surface plies. With an eddy current system also lower plies can be measured, but it is only suitable for carbon fibres. Using a computer-assisted tomography system all plies can be measured, but the system is limited to small parts and challenging to evaluate.

  20. Solving delay differential equations in S-ADAPT by method of steps.

    Bauer, Robert J; Mo, Gary; Krzyzanski, Wojciech

    2013-09-01

    S-ADAPT is a version of the ADAPT program that contains additional simulation and optimization abilities such as parametric population analysis. S-ADAPT utilizes LSODA to solve ordinary differential equations (ODEs), an algorithm designed for large-dimension non-stiff and stiff problems. However, S-ADAPT does not have a solver for delay differential equations (DDEs). Our objective was to implement in S-ADAPT a DDE solver using the method of steps, which allows one to solve virtually any DDE system by transforming it into an ODE system. The solver was validated for scalar linear DDEs with one delay and bolus and infusion inputs, for which explicit analytic solutions were derived. Solutions of nonlinear DDE problems coded in S-ADAPT were validated by comparing them with ones obtained by the MATLAB DDE solver dde23. The estimation of parameters was tested on MATLAB-simulated population pharmacodynamics data. The S-ADAPT solutions for DDE problems agreed with the explicit solutions, as well as with the MATLAB-produced solutions, to at least 7 significant digits. The population parameter estimates obtained using importance sampling expectation-maximization in S-ADAPT agreed with the ones used to generate the data. Published by Elsevier Ireland Ltd.
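
    The method of steps itself is easy to demonstrate outside S-ADAPT: on each interval of length equal to the delay, the delayed term is known from the previous segment, so the DDE becomes an ODE. A minimal sketch for one scalar test DDE (my example, not the paper's validation problems):

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.interpolate import interp1d

# Solve y'(t) = -y(t - tau) with history y(t) = 1 for t <= 0 by the
# method of steps: the segment [k*tau, (k+1)*tau] only needs y on the
# previous segment, which is already available.

tau, n_segments = 1.0, 5
history = lambda t: 1.0                       # y on [-tau, 0]

t_pts, y_pts = [0.0], [1.0]
for k in range(n_segments):
    t0, t1 = k * tau, (k + 1) * tau
    rhs = lambda t, y, d=history: [-float(d(t - tau))]  # delayed term is known
    sol = solve_ivp(rhs, (t0, t1), [y_pts[-1]], max_step=tau / 100)
    t_pts += list(sol.t[1:]); y_pts += list(sol.y[0][1:])
    history = interp1d(sol.t, sol.y[0])       # this segment is the next one's past

print(f"y({t_pts[-1]:.0f}) = {y_pts[-1]:.5f}")    # cf. MATLAB's dde23
```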

  1. Solving point reactor kinetic equations by time step-size adaptable numerical methods

    Liao Chaqing

    2007-01-01

    Based on an analysis of the effects of time step-size on numerical solutions, this paper shows the necessity of step-size adaptation. Based on the relationship between error and step-size, two step-size adaptation methods for solving initial value problems (IVPs) are introduced: the Two-Step Method and the Embedded Runge-Kutta Method. The point reactor kinetic equations (PRKEs) were solved by the implicit Euler method with step-sizes optimized using the Two-Step Method. It was observed that the control error has an important influence on the step-size and on the accuracy of the solutions. With suitable control errors, the solutions of the PRKEs computed by the above-mentioned method are reasonably accurate. The accuracy and usage of the MATLAB built-in ODE solvers ode23 and ode45, both of which adopt the Runge-Kutta-Fehlberg method, were also studied and discussed. (authors)
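
    The error/step-size relationship the paper exploits can be illustrated with a generic step-doubling controller (this is not the authors' exact Two-Step Method): estimate the local error from one full step versus two half steps, then rescale the step so the estimate meets the tolerance.

```python
import math

lam, tol = -50.0, 1e-8              # stiff linear test problem y' = lam*y
t, y, h, t_end = 0.0, 1.0, 1e-3, 1.0

def ie_step(y, h):                  # implicit Euler, solvable exactly here
    return y / (1.0 - h * lam)

while t < t_end:
    h = min(h, t_end - t)
    y_full = ie_step(y, h)                        # one step of size h
    y_half = ie_step(ie_step(y, h / 2), h / 2)    # two steps of size h/2
    err = abs(y_half - y_full)                    # local error estimate
    if err <= tol:
        t, y = t + h, y_half                      # accept the better value
    # implicit Euler has local error O(h^2), hence the exponent 1/2
    h *= min(5.0, max(0.2, 0.9 * (tol / max(err, 1e-16)) ** 0.5))

print(f"y({t_end}) = {y:.6e}   exact = {math.exp(lam * t_end):.6e}")
```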

  2. Strong Stability Preserving Explicit Linear Multistep Methods with Variable Step Size

    Hadjimichael, Yiannis

    2016-09-08

    Strong stability preserving (SSP) methods are designed primarily for time integration of nonlinear hyperbolic PDEs, for which the permissible SSP step size varies from one step to the next. We develop the first SSP linear multistep methods (of order two and three) with variable step size, and prove their optimality, stability, and convergence. The choice of step size for multistep SSP methods is an interesting problem because the allowable step size depends on the SSP coefficient, which in turn depends on the chosen step sizes. The description of the methods includes an optimal step-size strategy. We prove sharp upper bounds on the allowable step size for explicit SSP linear multistep methods and show the existence of methods with arbitrarily high order of accuracy. The effectiveness of the methods is demonstrated through numerical examples.

  3. Multi-time-step domain coupling method with energy control

    Mahjoubi, N.; Krenk, Steen

    2010-01-01

    … the individual time step. It is demonstrated that displacement continuity between the subdomains leads to cancellation of the interface contributions to the energy balance equation, and thus the stability and algorithmic damping properties of the original algorithms are retained. The various subdomains can be integrated using different time steps, as demonstrated by a numerical example using a refined mesh around concentrated forces. Copyright © 2010 John Wiley & Sons, Ltd.

  4. Softcopy quality ruler method: implementation and validation

    Jin, Elaine W.; Keelan, Brian W.; Chen, Junqing; Phillips, Jonathan B.; Chen, Ying

    2009-01-01

    A softcopy quality ruler method was implemented for the International Imaging Industry Association (I3A) Camera Phone Image Quality (CPIQ) Initiative. This work extends ISO 20462 Part 3 by virtue of creating reference digital images of known subjective image quality, complementing the hardcopy Standard Reference Stimuli (SRS). The softcopy ruler method was developed using images from a Canon EOS 1Ds Mark II D-SLR digital still camera (DSC) and a Kodak P880 point-and-shoot DSC. Images were viewed on an Apple 30-in Cinema Display at a viewing distance of 34 inches. Ruler images were made for 16 scenes. Thirty ruler images were generated for each scene, representing ISO 20462 Standard Quality Scale (SQS) values of approximately 2 to 31 at an increment of one just noticeable difference (JND), by adjusting the system modulation transfer function (MTF). A Matlab GUI was developed to display the ruler and test images side-by-side with a user-adjustable ruler level controlled by a slider. A validation study was performed at Kodak, Vista Point Technology, and Aptina Imaging, in which all three companies set up a similar viewing lab to run the softcopy ruler method. The results show that the three sets of data are in reasonable agreement with each other, with the differences within the range expected from observer variability. Compared to previous implementations of the quality ruler, the slider-based user interface allows approximately 2x faster assessments with 21.6% better precision.

  5. Proposal for a Five-Step Method to Elicit Expert Judgment

    Duco Veen

    2017-12-01

    Full Text Available Elicitation is a commonly used tool to extract viable information from experts: the knowledge held by the expert is extracted and a probabilistic representation of this knowledge is constructed. A promising avenue in psychological research is to incorporate experts' prior knowledge in the statistical analysis. Systematic reviews of the elicitation literature, however, suggest that it might be inappropriate to obtain distributional representations from experts directly. The literature qualifies experts' performance in estimating the elements of a distribution as unsatisfactory, so reliably specifying the essential elements of the parameters of interest in a single elicitation step seems implausible. Providing feedback within the elicitation process can enhance the quality of the elicitation, and interactive software can be used to facilitate that feedback. We therefore propose to decompose the elicitation procedure into smaller steps with adjustable outcomes. We represent the tacit knowledge of experts as a location parameter and their uncertainty concerning this knowledge by scale and shape parameters. Using a feedback procedure, experts can accept the representation of their beliefs or adjust their input. The proposed Five-Step Method consists of (1) eliciting the location parameter using the trial roulette method; (2) providing feedback on the location parameter and asking for confirmation or adjustment; (3) eliciting the scale and shape parameters; (4) providing feedback on the scale and shape parameters and asking for confirmation or adjustment; and (5) using the elicited and calibrated probability distribution in a statistical analysis and updating it with data, or computing a prior-data conflict, within a Bayesian framework. User feasibility and internal validity of the Five-Step Method are investigated in three elicitation studies.
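
    Step 1 of the proposed procedure, the trial roulette elicitation with feedback, reduces to simple arithmetic once the expert's chip allocation is recorded. The bins, chips, and normal summary below are invented for illustration.

```python
import numpy as np

bins  = np.array([10.0, 20.0, 30.0, 40.0, 50.0, 60.0])  # bin midpoints
chips = np.array([0, 2, 7, 8, 3, 0])                     # expert's allocation

w = chips / chips.sum()
location = float(w @ bins)                               # elicited mean
scale = float(np.sqrt(w @ (bins - location) ** 2))       # elicited sd

# Feedback step: show the summary back; if the expert rejects it, the
# chips are adjusted and the computation repeated. The accepted
# distribution can then serve as a prior in a Bayesian analysis.
print(f"Your beliefs imply roughly Normal(mu = {location:.1f}, sd = {scale:.1f})")
```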

  6. Reliability and convergent validity of the five-step test in people with chronic stroke.

    Ng, Shamay S M; Tse, Mimi M Y; Tam, Eric W C; Lai, Cynthia Y Y

    2018-01-10

    (i) To estimate the intra-rater, inter-rater and test-retest reliabilities of the Five-Step Test (FST), as well as the minimum detectable change in FST completion times, in people with stroke; (ii) to estimate the convergent validity of the FST with other measures of stroke-specific impairments; and (iii) to identify the best cut-off times for distinguishing the FST performance of people with stroke from that of healthy older adults. Design: a cross-sectional study. Setting: a university-based rehabilitation centre. Participants: forty-eight people with stroke and 39 healthy controls. Interventions: none. The FST was tested, along with (for the stroke survivors only) the Fugl-Meyer Lower Extremity Assessment (FMA-LE), the Berg Balance Scale (BBS), Limits of Stability (LOS) tests, and the Activities-specific Balance Confidence (ABC) scale. The FST showed excellent intra-rater (intra-class correlation coefficient; ICC = 0.866-0.905), inter-rater (ICC = 0.998), and test-retest (ICC = 0.838-0.842) reliabilities. A minimum detectable change of 9.16 s was found for the FST in people with stroke. The FST correlated significantly with the FMA-LE, BBS, and LOS results in the forward and sideways directions (r = -0.411 to -0.716), and cut-off times were identified that distinguished people with stroke from healthy older adults. The FST is a reliable, easy-to-administer clinical test for assessing stroke survivors' ability to negotiate steps and stairs.

  7. Validity of a Newly-Designed Rectilinear Stepping Ergometer Submaximal Exercise Test to Assess Cardiorespiratory Fitness.

    Zhang, Rubin; Zhan, Likui; Sun, Shaoming; Peng, Wei; Sun, Yining

    2017-09-01

    The maximum oxygen uptake (V̇O2max), determined from graded maximal or submaximal exercise tests, is used to classify the cardiorespiratory fitness level of individuals. The purpose of this study was to examine the validity and reliability of the YMCA submaximal exercise test protocol performed on a newly-designed rectilinear stepping ergometer (RSE), which uses up-and-down reciprocating vertical motion in place of conventional circular motion and gives a precise measurement of workload, to determine V̇O2max in young healthy male adults. Thirty-two young healthy male adults (age range: 20-35 years; height: 1.75 ± 0.05 m; weight: 67.5 ± 8.6 kg) first participated in a maximal-effort graded exercise test using a cycle ergometer (CE) to directly obtain measured V̇O2max. Subjects then completed the progressive multistage test on the RSE, beginning at 50 W and including additional stages of 70, 90, 110, 130, and 150 W, and the RSE YMCA submaximal test, consisting of a workload increase every 3 minutes until the termination criterion was reached. A metabolic equation was derived from the RSE multistage exercise test to predict oxygen consumption (V̇O2) from power output (W) during the submaximal exercise test: V̇O2 (mL·min⁻¹) = 12.4 × W (watts) + 3.5 mL·kg⁻¹·min⁻¹ × M + 160 mL·min⁻¹ (R² = 0.91, standard error of the estimate (SEE) = 134.8 mL·min⁻¹). A high correlation was observed between the RSE YMCA estimated V̇O2max and the CE measured V̇O2max (r = 0.87). The mean difference between estimated and measured V̇O2max was 2.5 mL·kg⁻¹·min⁻¹, with an SEE of 3.55 mL·kg⁻¹·min⁻¹. The data suggest that the RSE YMCA submaximal exercise test is valid for predicting V̇O2max in young healthy male adults. The findings show that the rectilinear stepping exercise is an effective submaximal exercise for predicting V̇O2max. The newly-designed RSE may be further developed as an alternative ergometer for assessing cardiorespiratory fitness.
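
    The prediction step behind a YMCA-style submaximal protocol is an extrapolation: compute V̇O2 at each submaximal stage from the metabolic equation, fit the heart-rate/V̇O2 line, and evaluate it at the age-predicted maximal heart rate. The heart rates below are invented; the equation is the one reported in the abstract.

```python
import numpy as np

mass, age = 67.5, 25.0                      # kg, years (illustrative subject)
watts = np.array([50.0, 70.0, 90.0])        # submaximal stages
hr    = np.array([105.0, 122.0, 139.0])     # steady-state heart rates (invented)

vo2 = 12.4 * watts + 3.5 * mass + 160.0     # mL/min, equation from the study
slope, intercept = np.polyfit(hr, vo2, 1)   # linear HR-VO2 relation
hr_max = 220.0 - age                        # age-predicted maximal heart rate
vo2max = slope * hr_max + intercept         # extrapolated VO2max, mL/min

print(f"predicted VO2max = {vo2max / mass:.1f} mL/kg/min")
```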

  8. Two step continuous method to synthesize colloidal spheroid gold nanorods.

    Chandra, S; Doran, J; McCormack, S J

    2015-12-01

    This research investigated a two-step continuous process to synthesize a colloidal suspension of spheroid gold nanorods. In the first step, the gold precursor was reduced to seed-like particles in the presence of polyvinylpyrrolidone and ascorbic acid. In the continuous second step, silver nitrate and alkaline sodium hydroxide produced Au nanoparticles of various shapes and sizes. The shape was manipulated through the weight ratio of ascorbic acid to silver nitrate by varying the silver nitrate concentration. A weight ratio of 1.35-1.75 grew spheroid gold nanorods with aspect ratios of ∼1.85 to ∼2.2, while a lower weight ratio of 0.5-1.1 formed spherical nanoparticles. The alkaline medium increased the yield of gold nanorods and reduced the reaction time at room temperature. The synthesized gold nanorods retained their shape and size in ethanol. The surface plasmon resonance was red-shifted by ∼5 nm due to the higher refractive index of ethanol compared with water. Copyright © 2015 Elsevier Inc. All rights reserved.

  9. The Technique of Changing the Drive Method of Micro Step Drive and Sensorless Drive for Hybrid Stepping Motor

    Yoneda, Makoto; Dohmeki, Hideo

    A position control system with the advantages of large torque, low vibration, and high resolution can be obtained by applying constant-current micro step drive to a hybrid stepping motor. However, losses are large, because the current is controlled uniformly regardless of the load torque. Sensorless control, as used for permanent magnet motors, is one technique by which a highly efficient position control system can be realized, but the control methods proposed so far have aimed at speed control. This paper therefore proposes switching the drive method between micro step drive and sensorless drive. The switch of drive method was verified by simulation and experiment. At no load, it was confirmed that no large speed change is produced at the moment of switching when the electrical angle is set and the integrator is reset to zero. Under load, a large speed change did arise. The proposed system can switch drive methods without producing a speed change by initializing the integrator with the estimated value. With this technique, a low-loss position control system that exploits the advantages of the hybrid stepping motor has been built.

  10. The Method of Manufactured Universes for validating uncertainty quantification methods

    Stripling, H.F.

    2011-09-01

    The Method of Manufactured Universes is presented as a validation framework for uncertainty quantification (UQ) methodologies and as a tool for exploring the effects of statistical and modeling assumptions embedded in these methods. The framework calls for a manufactured reality from which experimental data are created (possibly with experimental error), an imperfect model (with uncertain inputs) from which simulation results are created (possibly with numerical error), the application of a system for quantifying uncertainties in model predictions, and an assessment of how accurately those uncertainties are quantified. The application presented in this paper manufactures a particle-transport universe, models it using diffusion theory with uncertain material parameters, and applies both Gaussian process and Bayesian MARS algorithms to make quantitative predictions about new experiments within the manufactured reality. The results of this preliminary study indicate that, even in a simple problem, the improper application of a specific UQ method or unrealized effects of a modeling assumption may produce inaccurate predictions. We conclude that the validation framework presented in this paper is a powerful and flexible tool for the investigation and understanding of UQ methodologies. © 2011 Elsevier Ltd. All rights reserved.

  11. Toward a Unified Validation Framework in Mixed Methods Research

    Dellinger, Amy B.; Leech, Nancy L.

    2007-01-01

    The primary purpose of this article is to further discussions of validity in mixed methods research by introducing a validation framework to guide thinking about validity in this area. To justify the use of this framework, the authors discuss traditional terminology and validity criteria for quantitative and qualitative research, as well as…

  12. FDIR Strategy Validation with the B Method

    Sabatier, D.; Dellandrea, B.; Chemouil, D.

    2008-08-01

    In a formation flying satellite system, the FDIR strategy (Failure Detection, Isolation and Recovery) is paramount. When a failure occurs, satellites should be able to take appropriate reconfiguration actions to obtain the best possible results given the failure, ranging from avoiding satellite-to-satellite collision to continuing the mission without disturbance if possible. To achieve this goal, each satellite in the formation has an implemented FDIR strategy that governs how it detects failures (from tests or by deduction) and how it reacts (reconfiguration using redundant equipment, avoidance manoeuvres, etc.). The goal is to protect the satellites first and the mission as much as possible. In a project initiated by CNES, ClearSy is experimenting with the B Method to validate the FDIR strategies, developed by Thales Alenia Space, of the inter-satellite positioning and communication devices that will be used for the SIMBOL-X (2-satellite configuration) and PEGASE (3-satellite configuration) missions, and potentially for other missions afterward. These radio frequency metrology sensor devices provide satellite positioning and inter-satellite communication in formation flying. This article presents the results of this experiment.

  13. A multi-time-step noise reduction method for measuring velocity statistics from particle tracking velocimetry

    Machicoane, Nathanaël; López-Caballero, Miguel; Bourgoin, Mickael; Aliseda, Alberto; Volk, Romain

    2017-10-01

    We present a method to improve the accuracy of velocity measurements for fluid flow or particles immersed in it, based on a multi-time-step approach that allows for cancellation of noise in the velocity measurements. Improved velocity statistics, a critical element in turbulent flow measurements, can be computed from the combination of the velocity moments computed using standard particle tracking velocimetry (PTV) or particle image velocimetry (PIV) techniques for data sets that have been collected over different values of time intervals between images. This method produces Eulerian velocity fields and Lagrangian velocity statistics with much lower noise levels compared to standard PIV or PTV measurements, without the need of filtering and/or windowing. Particle displacement between two frames is computed for multiple different time-step values between frames in a canonical experiment of homogeneous isotropic turbulence. The second order velocity structure function of the flow is computed with the new method and compared to results from traditional measurement techniques in the literature. Increased accuracy is also demonstrated by comparing the dissipation rate of turbulent kinetic energy measured from this function against previously validated measurements.
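
    The noise-cancellation idea is worth spelling out: uncorrelated position noise of variance σ² inflates the apparent velocity variance by 2σ²/dt², so computing the variance at several time steps and extrapolating the linear trend in 1/dt² to zero recovers the noise-free value. A synthetic 1-D check with invented parameters:

```python
import numpy as np

rng = np.random.default_rng(1)
sigma, var_u = 0.05, 1.0             # position noise sd, true velocity variance
dts = np.array([1.0, 2.0, 3.0, 4.0])

apparent = []
for dt in dts:
    u = rng.normal(0.0, np.sqrt(var_u), 200_000)        # true velocities
    noise = rng.normal(0.0, sigma, (2, u.size))         # two position errors
    dx = u * dt + noise[1] - noise[0]                   # measured displacement
    apparent.append(np.var(dx / dt))                    # apparent variance

slope, intercept = np.polyfit(1.0 / dts**2, apparent, 1)
print(f"extrapolated velocity variance = {intercept:.4f} (true: {var_u})")
print(f"implied position noise variance = {slope / 2:.5f} (true: {sigma**2})")
```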

  14. Comparison of Model Reliabilities from Single-Step and Bivariate Blending Methods

    Taskinen, Matti; Mäntysaari, Esa; Lidauer, Martin

    2013-01-01

    Model-based reliabilities in genetic evaluation are compared between three methods: animal model BLUP, single-step BLUP, and bivariate blending after genomic BLUP. The original bivariate blending is revised in this work to better accommodate animal models. The study data were extracted from … so that model reliabilities could be calculated. Model reliabilities from the single-step and bivariate blending methods were higher than those from the animal model, due to the genomic information. Compared to the single-step method, the bivariate blending method's reliability estimates were, in general, lower. Computationally, on the other hand, the bivariate blending method was lighter than the single-step method.

  15. Validation Testing of the Nitric Acid Dissolution Step Within the K Basin Sludge Pretreatment Process

    AJ Schmidt; CH Delegard; KL Silvers; PR Bredt; CD Carlson; EW Hoppe; JC Hayes; DE Rinehart; SR Gano; BM Thornton

    1999-03-24

    The work described in this report involved comprehensive bench-scale testing of nitric acid (HNO3) dissolution of actual sludge materials from the Hanford K East (KE) Basin to confirm the baseline chemical pretreatment process. In addition, process monitoring and material balance information was collected to support the development and refinement of process flow diagrams. The testing was performed by Pacific Northwest National Laboratory (PNNL) for the US Department of Energy's Office of Spent Fuel Stabilization (EM-67) and Numatec Hanford Corporation (NHC) to assist in the development of the K Basin Sludge Pretreatment Process. The baseline chemical pretreatment process for K Basin sludge is nitric acid dissolution of all particulate material passing a 1/4-in. screen. The acid-insoluble fraction (residual solids) will be stabilized (possibly by chemical leaching/rinsing and grouting), packaged, and transferred to the Hanford Environmental Restoration Disposal Facility (ERDF). The liquid fraction is to be diluted with depleted uranium for uranium criticality safety and iron nitrate for plutonium criticality safety, and neutralized with sodium hydroxide. The liquid fraction and associated precipitates are to be stored in the Hanford Tank Waste Remediation Systems (TWRS) pending vitrification. It is expected that most of the polychlorinated biphenyls (PCBs), associated with some K Basin sludges, will remain with the residual solids for ultimate disposal to ERDF. Filtration and precipitation during the neutralization step will further remove trace quantities of PCBs within the liquid fraction. The purpose of the work discussed in this report was to examine the dissolution behavior of actual KE Basin sludge materials at baseline flowsheet conditions and validate the dissolution process step through bench-scale testing. The progress of the dissolution was evaluated by measuring the solution electrical conductivity and concentrations of key species in the

  16. Validation Testing of the Nitric Acid Dissolution Step Within the K Basin Sludge Pretreatment Process

    AJ Schmidt; CH Delegard; KL Silvers; PR Bredt; CD Carlson; EW Hoppe; JC Hayes; DE Rinehart; SR Gano; BM Thornton

    1999-01-01

    The work described in this report involved comprehensive bench-scale testing of nitric acid (HNO3) dissolution of actual sludge materials from the Hanford K East (KE) Basin to confirm the baseline chemical pretreatment process. In addition, process monitoring and material balance information was collected to support the development and refinement of process flow diagrams. The testing was performed by Pacific Northwest National Laboratory (PNNL) for the US Department of Energy's Office of Spent Fuel Stabilization (EM-67) and Numatec Hanford Corporation (NHC) to assist in the development of the K Basin Sludge Pretreatment Process. The baseline chemical pretreatment process for K Basin sludge is nitric acid dissolution of all particulate material passing a 1/4-in. screen. The acid-insoluble fraction (residual solids) will be stabilized (possibly by chemical leaching/rinsing and grouting), packaged, and transferred to the Hanford Environmental Restoration Disposal Facility (ERDF). The liquid fraction is to be diluted with depleted uranium for uranium criticality safety and iron nitrate for plutonium criticality safety, and neutralized with sodium hydroxide. The liquid fraction and associated precipitates are to be stored in the Hanford Tank Waste Remediation Systems (TWRS) pending vitrification. It is expected that most of the polychlorinated biphenyls (PCBs), associated with some K Basin sludges, will remain with the residual solids for ultimate disposal to ERDF. Filtration and precipitation during the neutralization step will further remove trace quantities of PCBs within the liquid fraction. The purpose of the work discussed in this report was to examine the dissolution behavior of actual KE Basin sludge materials at baseline flowsheet conditions and validate the dissolution process step through bench-scale testing. The progress of the dissolution was evaluated by measuring the solution electrical conductivity and concentrations of key species in the dissolver

  17. Three-Step Predictor-Corrector of Exponential Fitting Method for Nonlinear Schroedinger Equations

    Tang Chen; Zhang Fang; Yan Haiqing; Luo Tao; Chen Zhanqing

    2005-01-01

    We develop the three-step explicit and implicit schemes of exponential fitting methods. We use the three-step explicit exponential fitting scheme to predict an approximation, then use the three-step implicit exponential fitting scheme to correct this prediction. This combination is called the three-step predictor-corrector of exponential fitting method. The three-step predictor-corrector of exponential fitting method is applied to numerically compute the coupled nonlinear Schroedinger equation and the nonlinear Schroedinger equation with varying coefficients. The numerical results show that the scheme is highly accurate.
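
    The predict-then-correct structure is independent of the particular coefficients. The sketch below uses standard three-step Adams-Bashforth (predictor) and Adams-Moulton (corrector) coefficients on an oscillatory test equation, in place of the exponential-fitting coefficients derived in the paper.

```python
import numpy as np

f = lambda t, y: 1j * y                 # y' = i*y, Schroedinger-like oscillation
h, N = 0.01, 1000
t = h * np.arange(N + 1)
y = np.empty(N + 1, dtype=complex)
y[:3] = np.exp(1j * t[:3])              # exact starting values for a 3-step method

for n in range(2, N):
    fn, fn1, fn2 = f(t[n], y[n]), f(t[n-1], y[n-1]), f(t[n-2], y[n-2])
    y_pred = y[n] + h / 12 * (23*fn - 16*fn1 + 5*fn2)            # explicit predictor
    y[n+1] = y[n] + h / 12 * (5*f(t[n+1], y_pred) + 8*fn - fn1)  # implicit corrector

print("max error vs exact:", np.abs(y - np.exp(1j * t)).max())
```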

  18. Validation of the baking process as a kill-step for controlling Salmonella in muffins.

    Channaiah, Lakshmikantha H; Michael, Minto; Acuff, Jennifer C; Phebus, Randall K; Thippareddi, Harshavardhan; Olewnik, Maureen; Milliken, George

    2017-06-05

    This research investigates the potential risk of Salmonella in muffins when contamination is introduced via flour, the main ingredient. Flour was inoculated with a 3-strain cocktail of Salmonella serovars (Newport, Typhimurium, and Senftenberg) and re-dried to achieve a target concentration of ~8 log CFU/g. The inoculated flour was then used to prepare muffin batter following a standard commercial recipe. The survival of Salmonella during and after baking at 190.6 °C for 21 min was analyzed by plating samples on selective and injury-recovery media at regular intervals. The thermal inactivation parameters (D- and z-values) of the 3-strain Salmonella cocktail were determined. A ≥5 log CFU/g reduction in the Salmonella population was demonstrated by 17 min of baking, and a 6.1 log CFU/g reduction by 21 min of baking. The D-values of the Salmonella cocktail in muffin batter were 62.2 ± 3.0, 40.1 ± 0.9 and 16.5 ± 1.7 min at 55, 58 and 61 °C, respectively, and the z-value was 10.4 ± 0.6 °C. The water activity (aw) of the muffin crumb (0.928) after baking and 30 min of cooling was similar to that of the pre-baked muffin batter, whereas the aw of the muffin crust decreased to 0.700. This study validates a typical commercial muffin baking process utilizing an oven temperature of 190.6 °C for at least 17 min as an effective kill-step in reducing a Salmonella population by ≥5 log CFU/g. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
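
    With the reported D- and z-values, the expected log reduction over any temperature history follows from integrating dt/D(T), with D(T) = D_ref * 10^((T_ref - T)/z). The crumb temperature profile below is assumed for illustration; it is not measured data from the study.

```python
import numpy as np

D_ref, T_ref, z = 16.5, 61.0, 10.4      # min, deg C, deg C (from the abstract)

t = np.linspace(0.0, 21.0, 500)         # baking time, min
T = 78.0 - 53.0 * np.exp(-t / 7.0)      # assumed crumb temperature history, deg C

D = D_ref * 10.0 ** ((T_ref - T) / z)   # instantaneous D-value, min
rate = 1.0 / D                          # log reductions per minute
log_red = float(np.sum((rate[1:] + rate[:-1]) / 2.0 * np.diff(t)))  # trapezoid rule
print(f"predicted log reduction = {log_red:.1f} log CFU/g")
```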

  19. Validity of Garmin Vívofit and Polar Loop for measuring daily step counts in free-living conditions in adults

    Adam Šimůnek

    2016-09-01

    Full Text Available Background: Wrist activity trackers (WATs) are becoming popular and widely used for the monitoring of physical activity. However, the validity of many WATs in measuring steps remains unknown. Objective: To determine the validity of the following WATs: Garmin Vívofit (Vívofit) and Polar Loop (Loop), by comparing them with well-validated devices, the Yamax Digiwalker SW-701 pedometer (Yamax) and the hip-mounted ActiGraph GT3X+ accelerometer (ActiGraph), in healthy adults. Methods: In free-living conditions, adult volunteers (N = 20) aged 25 to 52 years wore the two WATs (Vívofit and Loop) together with the Yamax and ActiGraph simultaneously over a 7-day period. The validity of the Vívofit and Loop was assessed by comparing each device with the Yamax and ActiGraph, using a paired-samples t-test, mean absolute percentage errors, intraclass correlation coefficients (ICC) and Bland-Altman plots. Results: The differences between average steps per day were significant for all devices, except the difference between Vívofit and Yamax (p = .06; d = 0.2). Compared with Yamax and ActiGraph, the mean absolute percentage errors of the Vívofit were -4.0% and 12.5%, respectively. For the Loop the mean absolute percentage error was 8.9% compared with Yamax and 28.0% compared with ActiGraph. The Vívofit showed a very strong correlation with both Yamax and ActiGraph (ICC = .89). The Loop showed a very strong correlation with Yamax (ICC = .89) and a strong correlation with ActiGraph (ICC = .70). Conclusions: The Vívofit showed higher validity than the Loop in measuring daily step counts in free-living conditions. The Loop appears to overestimate the daily number of steps in individuals who take more steps during a day.

  20. Analytical difficulties facing today's regulatory laboratories: issues in method validation.

    MacNeil, James D

    2012-08-01

    The challenges facing analytical laboratories today are not unlike those faced in the past, although both the degree of complexity and the rate of change have increased. Challenges such as development and maintenance of expertise, maintenance and updating of equipment, and the introduction of new test methods have always been familiar themes for analytical laboratories, but international guidelines for laboratories involved in the import and export testing of food require management of such changes in a context which includes quality assurance, accreditation, and method validation considerations. Decisions as to when a change in a method requires re-validation, or on the design of a validation scheme for a complex multi-residue method, require a well-considered strategy based on current knowledge of international guidance documents and regulatory requirements, as well as the laboratory's quality system requirements. Validation demonstrates that a method is 'fit for purpose', so the requirement for validation should be assessed in terms of the intended use of a method and, in the case of change or modification of a method, whether that change or modification may affect a previously validated performance characteristic. In general, method validation involves method scope, calibration-related parameters, method precision, and recovery. Any method change which may affect method scope or any performance parameters will require re-validation. Some typical situations involving change in methods are discussed and a decision process proposed for selection of appropriate validation measures. © 2012 John Wiley & Sons, Ltd.

  1. The Screening Test for Emotional Problems-Parent Report (STEP-P): Studies of Reliability and Validity

    Erford, Bradley T.; Alsamadi, Silvana C.

    2012-01-01

    Score reliability and validity of parent responses concerning their 10- to 17-year-old students were analyzed using the Screening Test for Emotional Problems-Parent Report (STEP-P), which assesses a variety of emotional problems classified under the Individuals with Disabilities Education Improvement Act. Score reliability, convergent, and…

  2. The Screening Test for Emotional Problems--Teacher-Report Version (Step-T): Studies of Reliability and Validity

    Erford, Bradley T.; Butler, Caitlin; Peacock, Elizabeth

    2015-01-01

    The Screening Test for Emotional Problems-Teacher Version (STEP-T) was designed to identify students aged 7-17 years with wide-ranging emotional disturbances. Coefficients alpha and test-retest reliability were adequate for all subscales except Anxiety. The hypothesized five-factor model fit the data very well and external aspects of validity were…

  3. Method of forming catalyst layer by single step infiltration

    Gerdes, Kirk; Lee, Shiwoo; Dowd, Regis

    2018-05-01

    Provided herein is a method for electrocatalyst infiltration of a porous substrate, of particular use for preparation of a cathode for a solid oxide fuel cell. The method generally comprises preparing an electrocatalyst infiltrate solution comprising an electrocatalyst, surfactant, chelating agent, and a solvent; pretreating a porous mixed ionic-electric conductive substrate; and applying the electrocatalyst infiltration solution to the porous mixed ionic-electric conductive substrate.

  4. Development and Validation of a Dissolution Test Method for ...

    Purpose: To develop and validate a dissolution test method for the dissolution release of artemether and lumefantrine from tablets. Methods: A single dissolution method for evaluating the in vitro release of artemether and lumefantrine from tablets was developed and validated. The method comprised a dissolution medium of ...

  5. Development of interface between MCNP-FISPACT-MCNP (IPR-MFM) based on rigorous two step method

    Shaw, A.K.; Swami, H.L.; Danani, C.

    2015-01-01

    In this work we present the development of an interface tool between MCNP and FISPACT (MFM), based on the Rigorous Two Step method, for shutdown dose rate (SDDR) calculation. The MFM links the MCNP radiation transport code and the FISPACT inventory code through a suitable coupling scheme with three steps. In the first step it picks the neutron spectrum and total flux from the MCNP output file, to be used as input parameters for FISPACT. In the second step it prepares the FISPACT input files using the irradiation history, neutron flux and neutron spectrum, and then executes them. The third step extracts the decay gammas from the FISPACT output file, prepares an MCNP input file for decay gamma transport, executes it, and estimates the SDDR. The MFM methodology and flow scheme are described in detail. The programming language PYTHON was chosen for the development of the coupling scheme. A complete MCNP-FISPACT-MCNP loop has been developed to handle simplified geometrical problems. For validation of the MFM interface, a manual cross-check was performed, showing good agreement. The MFM interface has also been validated against the existing MCNP-D1S method for a simple geometry with a 14 MeV cylindrical neutron source. (author)
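
    Since the authors chose Python, the three-step loop can be laid out as a short driver script. Everything below is a skeleton under stated assumptions: the command lines, file names, and parsers are placeholders, because the actual MCNP and FISPACT I/O formats are not reproduced in the abstract.

```python
import subprocess

def run(cmd):
    """Execute an external code; binary names here are placeholders."""
    subprocess.run(cmd, shell=True, check=True)

def parse_mcnp_flux(tally_file):
    """Step 1 (stub): extract total flux and group spectrum from an MCNP
    tally output, in FISPACT's group structure."""
    raise NotImplementedError("format-specific parsing goes here")

def write_fispact_input(path, flux, spectrum, irradiation_history):
    """Step 2 (stub): build the FISPACT input deck from the transport
    data and the irradiation/cooling history."""
    raise NotImplementedError

def parse_fispact_gammas(output_file):
    """Step 3 (stub): extract the group-wise decay-gamma source from the
    FISPACT output, to become the source of the decay-gamma MCNP deck."""
    raise NotImplementedError

# One pass of the MCNP -> FISPACT -> MCNP loop:
run("mcnp6 i=neutron.inp o=neutron.out")          # neutron transport
flux, spectrum = parse_mcnp_flux("neutron.out")
write_fispact_input("activation.i", flux, spectrum,
                    irradiation_history=[("1 y", "on"), ("1 d", "off")])
run("fispact activation")                          # activation / inventory
gammas = parse_fispact_gammas("activation.out")
# ...write `gammas` into gamma.inp as the decay photon source, then:
run("mcnp6 i=gamma.inp o=gamma.out")               # decay-gamma transport -> SDDR
```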

  6. Using a Three-Step Method in a Calculus Class: Extending the Worked Example

    Miller, David

    2010-01-01

    This article discusses a three-step method that was used in a college calculus course. The three-step method was developed to help students understand the course material and transition to be more independent learners. In addition, the method helped students to transfer concepts from short-term to long-term memory while lowering cognitive load.…

  7. MIDPOINT TWO-STEPS RULE FOR THE SQUARE ROOT METHOD

    DR S.E UWAMUSI

  8. Methods in Professional Training: Indoctrination from Step One.

    Powell, Marjorie

    A preliminary classification of methods used during first-year law courses to develop a sense of professional identification among students is presented. Professors' images of lawyers conveyed to students are described based on faculty comments. In addition, informal student interviews were conducted to determine their awareness of this…

  9. Content Validity of National Post Marriage Educational Program Using Mixed Methods

    MOHAJER RAHBARI, Masoumeh; SHARIATI, Mohammad; KERAMAT, Afsaneh; YUNESIAN, Masoud; ESLAMI, Mohammad; MOUSAVI, Seyed Abbas; MONTAZERI, Ali

    2015-01-01

    Background: Although the content validity of programs is usually assessed with qualitative methods, this study used both qualitative and quantitative methods to validate the content of the post-marriage training program provided for newly married couples. Content validation is a preliminary step in obtaining the authorization required to install the program in the country's health care system. Methods: This mixed-methods content validation study was carried out in four steps, with three expert panels. Altogether 24 expert panelists were involved in the qualitative and quantitative panels: 6 in the first (item development) panel; 12 in the item reduction panel, 4 of whom also served on the first panel; and 10 executive experts in the last panel, organized to evaluate the psychometric properties (CVR, CVI and face validity) of 57 educational objectives. Results: The raw content of the post-marriage program had been written by professional experts of the Ministry of Health; through the qualitative expert panel, the content was further developed by generating 3 new topics and refining one existing topic and its respective content. In the second panel, six more objectives were deleted, three for falling outside the agreement cut-off point and three by the experts' consensus. In the quantitative assessment, the validity of all items was above 0.8 and their content validity indices (0.8-1) were fully appropriate. Conclusion: This study provides good evidence for the validation and accreditation of the national post-marriage program planned for newly married couples in the health centers of the country in the near future. PMID:26056672

  10. The Value of Qualitative Methods in Social Validity Research

    Leko, Melinda M.

    2014-01-01

    One quality indicator of intervention research is the extent to which the intervention has a high degree of social validity, or practicality. In this study, I drew on Wolf's framework for social validity and used qualitative methods to ascertain five middle schoolteachers' perceptions of the social validity of System 44®--a phonics-based reading…

  11. Comparing Multi-Step IMAC and Multi-Step TiO2 Methods for Phosphopeptide Enrichment

    Yue, Xiaoshan; Schunter, Alissa; Hummon, Amanda B.

    2016-01-01

    Phosphopeptide enrichment from complicated peptide mixtures is an essential step in mass spectrometry-based phosphoproteomic studies to reduce sample complexity and ionization suppression effects. Typical methods for enriching phosphopeptides include immobilized metal affinity chromatography (IMAC) or titanium dioxide (TiO2) beads, which have selective affinity for phosphopeptides. In this study, the IMAC enrichment method was compared with the TiO2 enrichment method, using a multi-step enrichment strategy from whole cell lysate, to evaluate their abilities to enrich different types of phosphopeptides. The peptide-to-bead ratios were optimized for both IMAC and TiO2 beads. Both IMAC and TiO2 enrichments were performed for three rounds to enable the maximum extraction of phosphopeptides from the whole cell lysates. The phosphopeptides unique to IMAC enrichment, unique to TiO2 enrichment, and identified with both IMAC and TiO2 enrichment were analyzed for their characteristics. Both IMAC and TiO2 enriched similar amounts of phosphopeptides with comparable enrichment efficiency. However, phosphopeptides unique to IMAC enrichment showed a higher percentage of multi-phosphopeptides, as well as a higher percentage of longer, basic, and hydrophilic phosphopeptides. The IMAC and TiO2 procedures also clearly enriched phosphopeptides with different motifs. Finally, further enriching with two rounds of TiO2 from the supernatant after IMAC enrichment, or with two rounds of IMAC from the supernatant after TiO2 enrichment, does not fully recover the phosphopeptides that are not identified by the corresponding multi-step enrichment. PMID:26237447

  12. A simple method for validation and verification of pipettes mounted on automated liquid handlers

    Stangegaard, Michael; Hansen, Anders Johannes; Frøslev, Tobias Guldberg

    We have implemented a simple method for validation and verification of the performance of pipettes mounted on automated liquid handlers, as necessary for laboratories accredited under ISO 17025. An 8-step serial dilution of Orange G was prepared in quadruplicate in a flat-bottom 96-well microtiter plate … In conclusion, we have set up a simple solution for the continuous validation of automated liquid handlers used for accredited work. The method is cheap, simple and easy to use for aqueous solutions, but requires a spectrophotometer that can read microtiter plates.

  13. Initial Steps in Creating a Developmentally Valid Tool for Observing/Assessing Rope Jumping

    Roberton, Mary Ann; Thompson, Gregory; Langendorfer, Stephen J.

    2017-01-01

    Background: Valid motor development sequences show the various behaviors that children display as they progress toward competence in specific motor skills. Teachers can use these sequences to observe informally or formally assess their students. While longitudinal study is ultimately required to validate developmental sequences, there are earlier,…

  14. Validity of using tri-axial accelerometers to measure human movement - Part II: Step counts at a wide range of gait velocities.

    Fortune, Emma; Lugade, Vipul; Morrow, Melissa; Kaufman, Kenton

    2014-06-01

    A subject-specific step counting method with a high accuracy level at all walking speeds is needed to assess the functional level of impaired patients. The study aim was to validate step counts and cadence calculations from acceleration data by comparison to video data during dynamic activity. Custom-built activity monitors, each containing one tri-axial accelerometer, were placed on the ankles, thigh, and waist of 11 healthy adults. ICC values were greater than 0.98 for video inter-rater reliability of all step counts. The activity monitoring system (AMS) algorithm demonstrated a median (interquartile range; IQR) agreement of 92% (8%) with visual observations during walking/jogging trials at gait velocities ranging from 0.1 to 4.8 m/s, while Fitbits (ankle and waist) and a Nike Fuelband (wrist) demonstrated agreements of 92% (36%), 93% (22%), and 33% (35%), respectively. The algorithm results demonstrated high median (IQR) step detection sensitivity (95% (2%)), positive predictive value (PPV) (99% (1%)), and agreement (97% (3%)) during a laboratory-based simulated free-living protocol. The algorithm also showed high median (IQR) sensitivity, PPV, and agreement identifying walking steps (91% (5%), 98% (4%), and 96% (5%)), jogging steps (97% (6%), 100% (1%), and 95% (6%)), and less than 3% mean error in cadence calculations.
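
    A minimal sketch of how agreement statistics of this kind can be computed from one step-count trial. The definitions used are the conventional ones and are assumed here rather than quoted from the paper; the counts are hypothetical.

```python
# Minimal sketch of the step-count agreement statistics above, using the
# conventional definitions (assumed here, not quoted from the paper):
# sensitivity = TP / actual, PPV = TP / detected,
# agreement = 100 * (1 - |detected - actual| / actual).

def step_metrics(true_positives: int, detected: int, actual: int) -> dict:
    return {"sensitivity_%": 100.0 * true_positives / actual,
            "ppv_%": 100.0 * true_positives / detected,
            "agreement_%": 100.0 * (1.0 - abs(detected - actual) / actual)}

if __name__ == "__main__":
    # Hypothetical trial: 500 video-counted steps, 495 detected, 490 correct.
    print(step_metrics(true_positives=490, detected=495, actual=500))
```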

  15. Development and validation of a spectroscopic method for the ...

    Development and validation of a spectroscopic method for the simultaneous analysis of … advanced analytical methods such as high-pressure liquid chromatography …

  16. Increased efficacy for in-house validation of real-time PCR GMO detection methods.

    Scholtens, I M J; Kok, E J; Hougs, L; Molenaar, B; Thissen, J T N M; van der Voet, H

    2010-03-01

    To improve the efficacy of the in-house validation of GMO detection methods (DNA isolation and real-time PCR, polymerase chain reaction), a study was performed to gain insight into the contribution of the different steps of the GMO detection method to the repeatability and in-house reproducibility. In the present study, 19 methods for (GM) soy, maize, canola and potato were validated in-house, of which 14 on the basis of an 8-day validation scheme using eight different samples and five on the basis of a more concise validation protocol. In this way, data were obtained with respect to the detection limit, accuracy and precision. Also, decision limits were calculated for declaring non-conformance (>0.9%) with 95% reliability. In order to estimate the contribution of the different steps in the GMO analysis to the total variation, variance components were estimated using REML (residual maximum likelihood method). From these components, relative standard deviations for repeatability and reproducibility (RSD(r) and RSD(R)) were calculated. The results showed that not only the PCR reaction but also the factors 'DNA isolation' and 'PCR day' are important contributors to the total variance and should therefore be included in the in-house validation. It is proposed to use a statistical model to estimate these factors from a large dataset of initial validations so that, for similar GMO methods in the future, only the PCR step needs to be validated. The resulting data are discussed in the light of agreed European criteria for qualified GMO detection methods.
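
    The step from estimated variance components to repeatability and reproducibility RSDs can be sketched as below. The component names and values are illustrative assumptions; the REML estimation itself, which the study performs on validation data, is not reproduced here.

```python
# Minimal sketch of turning variance components (estimated with REML in the
# paper) into repeatability and reproducibility RSDs. Component names and
# values are illustrative assumptions, not the paper's estimates.
import math

def rsds(components: dict, mean: float, within_run: set) -> tuple:
    """RSD(r) from within-run components only; RSD(R) from all components."""
    var_r = sum(v for k, v in components.items() if k in within_run)
    var_R = sum(components.values())
    return (100 * math.sqrt(var_r) / mean, 100 * math.sqrt(var_R) / mean)

if __name__ == "__main__":
    # Hypothetical variance components for a GM content measurement (% scale).
    comps = {"pcr": 0.0004, "dna_isolation": 0.0009, "pcr_day": 0.0002}
    print(rsds(comps, mean=0.9, within_run={"pcr"}))
```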

  17. Validation of method in instrumental NAA for food products sample

    Alfian; Siti Suprapti; Setyo Purwanto

    2010-01-01

    NAA is a testing method that has not been standardized. To confirm that the method is valid, it must be validated with various standard reference materials. In this work, the validation was carried out for food product samples using NIST SRM 1567a (wheat flour) and NIST SRM 1568a (rice flour). The results show that the method passes the tests of accuracy and precision for nine elements (Al, K, Mg, Mn, Na, Ca, Fe, Se and Zn) in SRM 1567a and eight elements (Al, K, Mg, Mn, Na, Ca, Se and Zn) in SRM 1568a. It can be concluded that this method is able to give valid results in the determination of elements in food product samples. (author)

  18. Validated High Performance Liquid Chromatography Method for ...

    Purpose: To develop a simple, rapid and sensitive high performance liquid chromatography (HPLC) method for the determination of cefadroxil monohydrate in human plasma. Methods: A Shimadzu HPLC with LC solution software was used with a Waters Spherisorb C18 (5 μm, 150 mm × 4.5 mm) column. The mobile phase …

  19. Improved Full-Newton Step O(nL) Infeasible Interior-Point Method for Linear Optimization

    Gu, G.; Mansouri, H.; Zangiabadi, M.; Bai, Y.Q.; Roos, C.

    2009-01-01

    We present several improvements of the full-Newton step infeasible interior-point method for linear optimization introduced by Roos (SIAM J. Optim. 16(4):1110–1136, 2006). Each main step of the method consists of a feasibility step and several centering steps. We use a more natural feasibility step,

  20. Criterion-Validity of Commercially Available Physical Activity Tracker to Estimate Step Count, Covered Distance and Energy Expenditure during Sports Conditions

    Yvonne Wahl

    2017-09-01

    Background: In the past years, there has been increasing development of physical activity trackers (wearables). For recreational users, testing of these devices under walking or light jogging conditions might be sufficient. For (elite) athletes, however, scientific trustworthiness needs to be given for a broad spectrum of velocities or even fast changes in velocity, reflecting the demands of the sport. Therefore, the aim was to evaluate the validity of eleven wearables for monitoring step count, covered distance and energy expenditure (EE) under laboratory conditions with different constant and varying velocities. Methods: Twenty healthy sport students (10 men, 10 women) performed a running protocol consisting of four 5 min stages of different constant velocities (4.3, 7.2, 10.1, 13.0 km·h−1), a 5 min period of intermittent velocity, and a 2.4 km outdoor run (10.1 km·h−1) while wearing eleven different wearables (Bodymedia Sensewear, Beurer AS 80, Polar Loop, Garmin Vivofit, Garmin Vivosmart, Garmin Vivoactive, Garmin Forerunner 920XT, Fitbit Charge, Fitbit Charge HR, Xiaomi MiBand, Withings Pulse Ox). Step count, covered distance, and EE were evaluated by comparing each wearable with a criterion method (Optogait system and manual counting for step count, treadmill for covered distance, and indirect calorimetry for EE). Results: All wearables, except Bodymedia Sensewear, Polar Loop, and Beurer AS80, revealed good validity (small MAPE, good ICC) for all constant and varying velocities for monitoring step count. For covered distance, all wearables showed a very low ICC (<0.1) and high MAPE (up to 50%), revealing no good validity. The measurement of EE was acceptable for the Garmin, Fitbit and Withings wearables (small to moderate MAPE), while Bodymedia Sensewear, Polar Loop, and Beurer AS80 showed a high MAPE of up to 56% for all test conditions. Conclusion: In our study, most wearables provide an acceptable level of validity for step counts at different…
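
    A minimal sketch of the two agreement statistics reported above. MAPE is the standard mean absolute percentage error; the ICC shown is the simple one-way, single-measure form, which is an assumption for illustration since the abstract does not specify the ICC model. All counts are hypothetical.

```python
# Minimal sketch of MAPE and a one-way, single-measure ICC (assumed ICC
# form; the abstract does not specify the model). Counts are hypothetical.
import numpy as np

def mape(criterion: np.ndarray, device: np.ndarray) -> float:
    return float(np.mean(np.abs(device - criterion) / criterion) * 100)

def icc_oneway(ratings: np.ndarray) -> float:
    """One-way random, single-measure ICC; rows = trials, cols = methods."""
    n, k = ratings.shape
    msb = k * np.sum((ratings.mean(axis=1) - ratings.mean()) ** 2) / (n - 1)
    msw = np.sum((ratings - ratings.mean(axis=1, keepdims=True)) ** 2) / (n * (k - 1))
    return float((msb - msw) / (msb + (k - 1) * msw))

if __name__ == "__main__":
    criterion = np.array([500.0, 620.0, 710.0, 450.0])  # criterion step counts
    wearable = np.array([490.0, 640.0, 700.0, 430.0])   # device step counts
    print(mape(criterion, wearable),
          icc_oneway(np.column_stack([criterion, wearable])))
```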

  1. PhD study of reliability and validity: One step closer to a standardized music therapy assessment model

    Jacobsen, Stine Lindahl

    The paper will present a PhD study concerning the reliability and validity of the music therapy assessment model “Assessment of Parenting Competences” (APC) in the area of families with emotionally neglected children. The study had a multiple-strategy design with a philosophical base of critical realism and pragmatism. The fixed design of the study was a between- and within-groups design testing the APC's reliability and validity. The two groups were parents of neglected children and parents of non-neglected children. The flexible design had a multiple case study strategy specifically…

  2. Detection of protein concentrations using a pH-step titration method

    Kruise, J.; Kruise, J.; Eijkel, Jan C.T.; Bergveld, Piet

    1997-01-01

    A stimulus-response method based on the application of a pH step is proposed for the detection of protein immobilized in a membrane on top of an ion-sensitive field-effect transistor (ISFET). The ISFET response to a step-wise change in pH, applied at the interface between the membrane and the…

  3. Human Factors methods concerning integrated validation of nuclear power plant control rooms; Metodutveckling foer integrerad validering

    Oskarsson, Per-Anders; Johansson, Bjoern J.E.; Gonzalez, Natalia (Swedish Defence Research Agency, Information Systems, Linkoeping (Sweden))

    2010-02-15

    The frame of reference for this work was existing recommendations and instructions from the NPP area, experiences from the review of the Turbic Validation, and experiences from system validations performed at the Swedish Armed Forces, e.g. concerning military control rooms and fighter pilots. These enterprises are characterized by complex systems in extreme environments, often with high risks, where human error can lead to serious consequences. A focus group was performed with representatives responsible for Human Factors issues from all Swedish NPPs. The questions discussed were, among other things, for whom an integrated validation (IV) is performed and its purpose, what should be included in an IV, the comparison with baseline measures, the design process, the role of SSM, which methods of measurement should be used, and how the methods are affected by changes in the control room. The report raises various questions concerning the validation process. Supplementary methods of measurement for integrated validation are discussed, e.g. dynamic, psychophysiological, and qualitative methods for the identification of problems. Supplementary methods for statistical analysis are presented. The study points out a number of deficiencies in the validation process, e.g. the need for common guidelines for validation and design, criteria for different types of measurements, clarification of the role of SSM, and recommendations on the responsibility of external participants in the validation process. The authors propose 12 measures to address the identified problems.

  4. Validated high performance liquid chromatographic (HPLC) method ...

    2010-02-22

    A specific and accurate high performance liquid chromatographic method for the determination of ZER in micro-volumes … traditional medicine as a cure for swelling, sores, loss of appetite and … Receptor Activator for Nuclear Factor κB Ligand … suitable for preclinical pharmacokinetic studies.

  5. Validation of qualitative microbiological test methods

    IJzerman-Boon, Pieta C.; van den Heuvel, Edwin R.

    2015-01-01

    This paper considers a statistical model for the detection mechanism of qualitative microbiological test methods with a parameter for the detection proportion (the probability to detect a single organism) and a parameter for the false positive rate. It is demonstrated that the detection proportion…

  6. Validation Method of a Telecommunications Blackout Attack

    Amado, Joao; Nunes, Paulo

    2005-01-01

    ..., and to obtain the maximum disruptive effect on the services. The proposed method uses a top-down approach, starting at the service level and ending at the different network elements that can ultimately be identified as the targets for the attack.

  7. Comprehension of Multiple Documents with Conflicting Information: A Two-Step Model of Validation

    Richter, Tobias; Maier, Johanna

    2017-01-01

    In this article, we examine the cognitive processes that are involved when readers comprehend conflicting information in multiple texts. Starting from the notion of routine validation during comprehension, we argue that readers' prior beliefs may lead to a biased processing of conflicting information and a one-sided mental model of controversial…

  8. Valid methods: the quality assurance of test method development, validation, approval, and transfer for veterinary testing laboratories.

    Wiegers, Ann L

    2003-07-01

    Third-party accreditation is a valuable tool to demonstrate a laboratory's competence to conduct testing. Accreditation, internationally and in the United States, has been discussed previously. However, accreditation is only one part of establishing data credibility. A validated test method is the first component of a valid measurement system. Validation is defined as confirmation by examination and the provision of objective evidence that the particular requirements for a specific intended use are fulfilled. The international and national standard ISO/IEC 17025 recognizes the importance of validated methods and requires that laboratory-developed methods or methods adopted by the laboratory be appropriate for the intended use. Validated methods are therefore required and their use agreed to by the client (i.e., end users of the test results such as veterinarians, animal health programs, and owners). ISO/IEC 17025 also requires that the introduction of methods developed by the laboratory for its own use be a planned activity conducted by qualified personnel with adequate resources. This article discusses considerations and recommendations for the conduct of veterinary diagnostic test method development, validation, evaluation, approval, and transfer to the user laboratory in the ISO/IEC 17025 environment. These recommendations are based on those of nationally and internationally accepted standards and guidelines, as well as those of reputable and experienced technical bodies. They are also based on the author's experience in the evaluation of method development and transfer projects, validation data, and the implementation of quality management systems in the area of method development.

  9. A two-step Hilbert transform method for 2D image reconstruction

    Noo, Frederic; Clackdoyle, Rolf; Pack, Jed D

    2004-01-01

    The paper describes a new accurate two-dimensional (2D) image reconstruction method consisting of two steps. In the first step, the backprojected image is formed after taking the derivative of the parallel projection data. In the second step, a Hilbert filtering is applied along certain lines in the differentiated backprojection (DBP) image. Formulae for performing the DBP step in fan-beam geometry are also presented. The advantage of this two-step Hilbert transform approach is that in certain situations, regions of interest (ROIs) can be reconstructed from truncated projection data. Simulation results are presented that illustrate very similar reconstructed image quality using the new method compared to standard filtered backprojection, and that show the capability to correctly handle truncated projections. In particular, a simulation is presented of a wide patient whose projections are truncated laterally yet for which highly accurate ROI reconstruction is obtained
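
    The second step, Hilbert filtering along lines of the differentiated-backprojection image, can be illustrated with an FFT-based discrete Hilbert transform, as sketched below. This shows the filtering operation only: the DBP computation, the line geometry, and the finite inverse Hilbert transform with its weighting are omitted, so this is a sketch of the principle rather than the authors' reconstruction code.

```python
# Minimal sketch of Hilbert filtering along a 1D line profile via the FFT
# multiplier -i*sgn(w). Illustrates the principle only; not the authors'
# reconstruction code.
import numpy as np

def hilbert_filter(profile: np.ndarray) -> np.ndarray:
    """Discrete Hilbert transform of one 1D line profile."""
    freqs = np.fft.fftfreq(profile.size)
    return np.fft.ifft(np.fft.fft(profile) * (-1j) * np.sign(freqs)).real

if __name__ == "__main__":
    # Sanity check: the Hilbert transform of cos(t) is sin(t).
    t = np.linspace(0, 2 * np.pi, 256, endpoint=False)
    print(np.allclose(hilbert_filter(np.cos(t)), np.sin(t), atol=1e-10))
```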

  10. Lagrangian fractional step method for the incompressible Navier--Stokes equations on a periodic domain

    Boergers, C.; Peskin, C.S.

    1987-01-01

    In the Lagrangian fractional step method introduced in this paper, the fluid velocity and pressure are defined on a collection of N fluid markers. At each time step, these markers are used to generate a Voronoi diagram, and this diagram is used to construct finite-difference operators corresponding to the divergence, gradient, and Laplacian. The splitting of the Navier--Stokes equations leads to discrete Helmholtz and Poisson problems, which we solve using a two-grid method. The nonlinear convection terms are modeled simply by the displacement of the fluid markers. We have implemented this method on a periodic domain in the plane. We describe an efficient algorithm for the numerical construction of periodic Voronoi diagrams, and we report on numerical results which indicate that the fractional step method is convergent of first order. The overall work per time step is proportional to N log N.
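
    One generic way to construct a periodic Voronoi diagram is to tile the fluid markers into the eight neighboring copies of the periodic domain and read off the cells of the central copy, as sketched below. This brute-force construction is an assumption for illustration; it is not the efficient algorithm described in the paper.

```python
# Brute-force periodic Voronoi diagram on the unit square: tile the markers
# into the 8 neighboring domain copies and keep the cells of the originals.
# Illustrative sketch only, not the paper's efficient algorithm.
import numpy as np
from scipy.spatial import Voronoi

def periodic_voronoi(points: np.ndarray) -> Voronoi:
    """points: (N, 2) array of marker positions with coordinates in [0, 1)."""
    shifts = [(dx, dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)
              if (dx, dy) != (0, 0)]
    tiled = np.concatenate([points] + [points + np.array(s) for s in shifts])
    # The cells of the first N sites are the periodic cells of the markers.
    return Voronoi(tiled)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    markers = rng.random((50, 2))
    vor = periodic_voronoi(markers)
    print(len(vor.point_region))  # 450 sites = 50 markers x 9 copies
```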

  11. Pesticide applicators questionnaire content validation: A fuzzy delphi method.

    Manakandan, S K; Rosnah, I; Mohd Ridhuan, J; Priya, R

    2017-08-01

    The most crucial step in constructing a survey questionnaire is deciding on the appropriate items for a construct. Retaining irrelevant items and removing important items will certainly mislead the direction of a particular study. This article demonstrates the Fuzzy Delphi method as a scientific analysis technique to consolidate consensus agreement within a panel of experts on each item's appropriateness. The method reduces the ambiguity, diversity, and discrepancy of opinions among the experts and hence enhances the quality of the selected items. The main purpose of this study was to obtain the experts' consensus on the suitability of the preselected items in the questionnaire. The panel consisted of sixteen experts from the Occupational and Environmental Health Unit of the Ministry of Health, the Vector-borne Disease Control Unit of the Ministry of Health, and the Occupational and Safety Health Units of both public and private universities. A set of questionnaires related to noise and chemical exposure was compiled based on a literature search. There were six constructs with 60 items in total: three constructs for knowledge, attitude, and practice of noise exposure and three for knowledge, attitude, and practice of chemical exposure. The validation process replicated a recent Fuzzy Delphi method using the concept of Triangular Fuzzy Numbers and a defuzzification process. A 100% response rate was obtained from all sixteen experts, with average Likert scores of four to five. After the FDM analysis, the first prerequisite was fulfilled with a threshold value (d) ≤ 0.2, hence all six constructs were accepted. For the second prerequisite, three items (21%) from the noise-attitude construct and four items (40%) from the chemical-practice construct had expert consensus lower than 75%, amounting to about 12% of the total items in the questionnaire. The third prerequisite was used to rank the items within the constructs by calculating the average…
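
    The three prerequisites mentioned above can be sketched with a common Fuzzy Delphi formulation, assumed here since the abstract does not give the exact formulas: Likert responses are mapped to triangular fuzzy numbers (TFNs), the threshold d is each expert's distance from the group-average TFN, and items are ranked by the defuzzified score (l + m + u)/3. The Likert-to-TFN mapping and the ratings below are hypothetical.

```python
# Minimal sketch of a common Fuzzy Delphi evaluation (assumed formulation;
# the abstract does not give the exact formulas).
import math

# Hypothetical 5-point Likert-to-TFN mapping; scales vary across studies.
TFN = {1: (0.0, 0.0, 0.2), 2: (0.0, 0.2, 0.4), 3: (0.2, 0.4, 0.6),
       4: (0.4, 0.6, 0.8), 5: (0.6, 0.8, 1.0)}

def evaluate_item(ratings):
    tfns = [TFN[r] for r in ratings]
    avg = tuple(sum(t[i] for t in tfns) / len(tfns) for i in range(3))
    d = [math.sqrt(sum((t[i] - avg[i]) ** 2 for i in range(3)) / 3)
         for t in tfns]
    return {"d_mean": sum(d) / len(d),              # prerequisite 1: <= 0.2
            "consensus_%": 100 * sum(x <= 0.2 for x in d) / len(d),  # prerequisite 2: >= 75
            "defuzzified": sum(avg) / 3}            # prerequisite 3: ranking

if __name__ == "__main__":
    # Sixteen hypothetical expert ratings for one item.
    print(evaluate_item([5, 4, 5, 4, 4, 5, 4, 5, 4, 4, 5, 4, 4, 5, 4, 5]))
```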

  12. Development of the Modified Four Square Step Test and its reliability and validity in people with stroke.

    Roos, Margaret A; Reisman, Darcy S; Hicks, Gregory; Rose, William; Rudolph, Katherine S

    2016-01-01

    Adults with stroke have difficulty avoiding obstacles when walking, especially when a time constraint is imposed. The Four Square Step Test (FSST) evaluates dynamic balance by requiring individuals to step over canes in multiple directions while being timed, but many people with stroke are unable to complete it. The purposes of this study were to (1) modify the FSST by replacing the canes with tape so that more persons with stroke could successfully complete the test and (2) examine the reliability and validity of the modified version. Fifty-five subjects completed the Modified FSST (mFSST) by stepping over tape in all four directions while being timed. The mFSST resulted in significantly greater numbers of subjects completing the test than the FSST (39/55 [71%] and 33/55 [60%], respectively) (p < 0.04). The test-retest, intrarater, and interrater reliability of the mFSST were excellent (intraclass correlation coefficient ranges: 0.81-0.99). Construct and concurrent validity of the mFSST were also established. The minimal detectable change was 6.73 s. The mFSST, an ideal measure of dynamic balance, can identify progress in people with stroke in varied settings and can be completed by a wide range of people with stroke in approximately 5 min with the use of minimal equipment (tape, stop watch).
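
    A minimal sketch of how a minimal detectable change such as the 6.73 s above is typically derived from test-retest reliability. The formulas (SEM = SD·√(1 − ICC); MDC95 = 1.96·√2·SEM) are the standard ones and are assumed here; the inputs are illustrative, not the study's data.

```python
# Minimal sketch of deriving a minimal detectable change (MDC) from
# test-retest reliability. Standard formulas, assumed here rather than
# quoted from the paper.
import math

def mdc95(sd_baseline: float, icc: float) -> float:
    sem = sd_baseline * math.sqrt(1.0 - icc)  # standard error of measurement
    return 1.96 * math.sqrt(2.0) * sem        # 95% confidence, two test occasions

if __name__ == "__main__":
    # Illustrative inputs only (not the study's data).
    print(round(mdc95(sd_baseline=11.1, icc=0.95), 2))
```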

  13. [Validation of a triage scale: first step in patient admission and in emergency service models].

    Legrand, A; Thys, F; Vermeiren, E; Touwaide, M; D'Hoore, W; Hubin, V; Reynaert, M S

    2003-03-01

    At present, most emergency departments handle a multitude of varied demands in a single location and with the same team of nursing staff, with direct consequences for waiting times and for the handling of problems of differing urgency. Our department is examining other organizational models based on triage by time and by orientation. In a prospective study of 679 patients, we validated a triage tool inspired by the ICEM model (International Cooperation of Emergency Medicine) that allows patients to receive, while they wait, information and guidance appropriate to their particular medical problem, based on the resources available. The validation of this tool was carried out in terms of both its usability and its reliability. It appears that, with the type of triage proposed, there is a theoretical reserve of waiting time for patients whose urgency is relative, which could be put to better use in the handling of more critical cases.

  14. A preliminary investigation of PSA validation methods

    Unwin, S.D. [Science Applications International Corp. (United States)]

    1995-09-01

    This document has been prepared to support the initial phase of the Atomic Energy Control Board's program to review and evaluate Probabilistic Safety Assessment (PSA) studies conducted by nuclear generating station designers and licensees. The document provides (1) a review of current and prospective applications of PSA technology in the Canadian nuclear power industry; (2) an assessment of existing practices and techniques for the review of risk and hazard identification studies in the international nuclear power sector and other technological sectors; and (3) a proposed analytical framework in which to develop systematic techniques for the scrutiny and evaluation of a PSA model. These frameworks are based on consideration of the mathematical structure of a PSA model and are intended to facilitate the development of methods to evaluate a model relative to intended end-uses. (author). 34 refs., 10 tabs., 3 figs.

  16. A simple method for validation and verification of pipettes mounted on automated liquid handlers

    Stangegaard, Michael; Hansen, Anders Johannes; Frøslev, Tobias G

    2011-01-01

    We have implemented a simple, inexpensive, and fast procedure for validation and verification of the performance of pipettes mounted on automated liquid handlers (ALHs) as necessary for laboratories accredited under ISO 17025. A six- or seven-step serial dilution of Orange G was prepared in quadruplicates… are freely available. In conclusion, we have set up a simple, inexpensive, and fast solution for the continuous validation of ALHs used for accredited work according to the ISO 17025 standard. The method is easy to use for aqueous solutions but requires a spectrophotometer that can read microtiter plates.

  17. HVS-based quantization steps for validation of digital cinema extended bitrates

    Larabi, M.-C.; Pellegrin, P.; Anciaux, G.; Devaux, F.-O.; Tulet, O.; Macq, B.; Fernandez, C.

    2009-02-01

    In Digital Cinema, video compression must be as transparent as possible to provide the best image quality to the audience. The goal of compression is to simplify the transport, storage, distribution and projection of films. For all those tasks, equipment needs to be developed. It is thus mandatory to reduce the complexity of the equipment by imposing limitations in the specifications. In this sense, the DCI has fixed the maximum bitrate for a compressed stream at 250 Mbps, independently of the input format (4K/24fps, 2K/48fps or 2K/24fps). This parameter is discussed in this paper because it is not consistent to double or quadruple the input rate without increasing the output rate. The work presented in this paper is intended to define quantization steps ensuring visually lossless compression. Two steps are followed: first, the effect of each subband is evaluated separately, and then the scaling ratio is found. The obtained results show that it is necessary to increase the bitrate limit for cinema material in order to achieve visually lossless quality.

  18. Some generalized two-step block hybrid Numerov method for solving ...

    Nwokem et al.

    This paper proposes a class of generalized two-step Numerov methods, a block hybrid type, for the direct solution of general second order ordinary differential equations. Both the main method and additional methods were derived via interpolation and collocation procedures. The basic properties of zero…

  19. The Child-care Food and Activity Practices Questionnaire (CFAPQ): development and first validation steps.

    Gubbels, Jessica S; Sleddens, Ester Fc; Raaijmakers, Lieke Ch; Gies, Judith M; Kremers, Stef Pj

    2016-08-01

    To develop and validate a questionnaire to measure food-related and activity-related practices of child-care staff, based on existing, validated parenting practices questionnaires. A selection of items from the Comprehensive Feeding Practices Questionnaire (CFPQ) and the Preschooler Physical Activity Parenting Practices (PPAPP) questionnaire was made to include items most suitable for the child-care setting. The converted questionnaire was pre-tested among child-care staff during cognitive interviews and pilot-tested among a larger sample of child-care staff. Factor analyses with Varimax rotation and internal consistencies were used to examine the scales. Spearman correlations, t tests and ANOVA were used to examine associations between the scales and staff's background characteristics (e.g. years of experience, gender). Child-care centres in the Netherlands. The qualitative pre-test included ten child-care staff members. The quantitative pilot test included 178 child-care staff members. The new questionnaire, the Child-care Food and Activity Practices Questionnaire (CFAPQ), consists of sixty-three items (forty food-related and twenty-three activity-related items), divided over twelve scales (seven food-related and five activity-related scales). The CFAPQ scales are to a large extent similar to the original CFPQ and PPAPP scales. The CFAPQ scales show sufficient internal consistency with Cronbach's α ranging between 0.53 and 0.96, and average corrected item-total correlations within acceptable ranges (0.30-0.89). Several of the scales were significantly associated with child-care staff's background characteristics. Scale psychometrics of the CFAPQ indicate it is a valid questionnaire that assesses child-care staff's practices related to both food and activities.
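
    A minimal sketch of the internal-consistency statistic used to evaluate the CFAPQ scales. The formula is the standard Cronbach's alpha; the simulated response matrix is hypothetical (rows = respondents, columns = items of one scale).

```python
# Minimal sketch of Cronbach's alpha, the internal-consistency statistic
# used for the CFAPQ scales. Standard formula; responses are simulated.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances / total_variance)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    trait = rng.normal(size=(178, 1))                 # one underlying trait
    responses = trait + rng.normal(scale=0.8, size=(178, 5))
    print(round(cronbach_alpha(responses), 2))
```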

  20. Steps toward validity in active living research: research design that limits accusations of physical determinism.

    Riggs, William

    2014-03-01

    "Active living research" has been accused of being overly "physically deterministic" and this article argues that urban planners must continue to evolve research and address biases in this area. The article first provides background on how researchers have dealt with the relationship between the built environment and health over years. This leads to a presentation of how active living research might be described as overly deterministic. The article then offers lessons for researchers planning to embark in active-living studies as to how they might increase validity and minimize criticism of physical determinism. © 2013 Published by Elsevier Ltd.

  1. Method validation for strobilurin fungicides in cereals and fruit

    Christensen, Hanne Bjerre; Granby, Kit

    2001-01-01

    Strobilurins are a new class of fungicides that are active against a broad spectrum of fungi. In the present work a GC method for analysis of strobilurin fungicides was validated. The method was based on extraction with ethyl acetate/cyclohexane, clean-up by gel permeation chromatography (GPC) and determination of the content by gas chromatography (GC) with electron capture (EC-), nitrogen/phosphorous (NP-), and mass spectrometric (MS-) detection. Three strobilurins, azoxystrobin, kresoxim-methyl and trifloxystrobin, were validated on three matrices: wheat, apple and grapes. The validation was based…

  2. Development and validation of a local time stepping-based PaSR solver for combustion and radiation modeling

    Pang, Kar Mun; Ivarsson, Anders; Haider, Sajjad

    2013-01-01

    In the current work, a local time stepping (LTS) solver for the modeling of combustion, radiative heat transfer and soot formation is developed and validated. This is achieved using an open source computational fluid dynamics code, OpenFOAM. Akin to the solver provided in the default assembly… library in the edcSimpleFoam solver, which was introduced during the 6th OpenFOAM workshop, is modified and coupled with the current solver. One of the main amendments made is the integration of the soot radiation submodel, since this is significant in rich flames where soot particles are formed. The new solver…

  3. Comparison of Stepped Care Delivery Against a Single, Empirically Validated Cognitive-Behavioral Therapy Program for Youth With Anxiety: A Randomized Clinical Trial.

    Rapee, Ronald M; Lyneham, Heidi J; Wuthrich, Viviana; Chatterton, Mary Lou; Hudson, Jennifer L; Kangas, Maria; Mihalopoulos, Cathrine

    2017-10-01

    Stepped care is embraced as an ideal model of service delivery but is minimally evaluated. The aim of this study was to evaluate the efficacy of cognitive-behavioral therapy (CBT) for child anxiety delivered via a stepped-care framework compared against a single, empirically validated program. A total of 281 youth with anxiety disorders (6-17 years of age) were randomly allocated to receive either empirically validated treatment or stepped care involving the following: (1) low intensity; (2) standard CBT; and (3) individually tailored treatment. Therapist qualifications increased at each step. Interventions did not differ significantly on any outcome measures. Total therapist time per child was significantly shorter to deliver stepped care (774 minutes) compared with best practice (897 minutes). Within stepped care, the first 2 steps returned the strongest treatment gains. Stepped care and a single empirically validated program for youth with anxiety produced similar efficacy, but stepped care required slightly less therapist time. Restricting stepped care to only steps 1 and 2 would have led to considerable time saving with modest loss in efficacy. Clinical trial registration information-A Randomised Controlled Trial of Standard Care Versus Stepped Care for Children and Adolescents With Anxiety Disorders; http://anzctr.org.au/; ACTRN12612000351819.

  4. Assessment of a recombinant androgen receptor binding assay: initial steps towards validation.

    Freyberger, Alexius; Weimer, Marc; Tran, Hoai-Son; Ahr, Hans-Jürgen

    2010-08-01

    Despite more than a decade of research in the field of endocrine-active compounds with affinity for the androgen receptor (AR), no validated recombinant AR binding assay is yet available, although recombinant AR can be obtained from several sources. With funding from the European Union (EU)-sponsored 6th framework project, ReProTect, we developed a model protocol for such an assay based on a simple AR binding assay recently developed at our institution. Important features of the protocol were the use of a rat recombinant fusion protein to thioredoxin, containing both the hinge region and ligand binding domain (LBD) of the rat AR (which is identical to the human AR-LBD), and performance in a 96-well plate format. Besides two reference compounds [dihydrotestosterone (DHT), androstenedione], ten test compounds with different affinities for the AR [levonorgestrel, progesterone, prochloraz, 17alpha-methyltestosterone, flutamide, norethynodrel, o,p'-DDT, dibutylphthalate, vinclozolin, linuron] were used to explore the performance of the assay. At least three independent experiments per compound were performed. The AR binding properties of reference and test compounds were well detected; in terms of the relative ranking of binding affinities, there was good agreement with published data obtained from experiments using recombinant AR preparations. Irrespective of the chemical nature of the compound, individual IC(50) values for a given compound varied by no more than a factor of 2.6. Our data demonstrate that the assay reliably ranked compounds with strong, weak, and no/marginal affinity for the AR with high accuracy. It avoids the manipulation and use of animals, as a recombinant protein is used, and thus contributes to the 3R concept. On the whole, this assay is a promising candidate for further validation.

  5. International Harmonization and Cooperation in the Validation of Alternative Methods.

    Barroso, João; Ahn, Il Young; Caldeira, Cristiane; Carmichael, Paul L; Casey, Warren; Coecke, Sandra; Curren, Rodger; Desprez, Bertrand; Eskes, Chantra; Griesinger, Claudius; Guo, Jiabin; Hill, Erin; Roi, Annett Janusch; Kojima, Hajime; Li, Jin; Lim, Chae Hyung; Moura, Wlamir; Nishikawa, Akiyoshi; Park, HyeKyung; Peng, Shuangqing; Presgrave, Octavio; Singer, Tim; Sohn, Soo Jung; Westmoreland, Carl; Whelan, Maurice; Yang, Xingfen; Yang, Ying; Zuang, Valérie

    The development and validation of scientific alternatives to animal testing is important not only from an ethical perspective (implementation of the 3Rs), but also to improve safety assessment decision making with the use of mechanistic information of higher relevance to humans. To be effective in these efforts, it is however imperative that validation centres, industry, regulatory bodies, academia and other interested parties ensure strong international cooperation, cross-sector collaboration and intense communication in the design, execution, and peer review of validation studies. Such an approach is critical to achieve harmonized and more transparent approaches to method validation, peer review and recommendation, which will ultimately expedite the international acceptance of valid alternative methods or strategies by regulatory authorities and their implementation and use by stakeholders. It also allows greater efficiency and effectiveness to be achieved by avoiding duplication of effort and leveraging limited resources. In view of achieving these goals, the International Cooperation on Alternative Test Methods (ICATM) was established in 2009 by validation centres from Europe, the USA, Canada and Japan. ICATM was later joined by Korea in 2011 and currently also counts Brazil and China as observers. This chapter describes the existing differences across world regions and the major efforts carried out to achieve consistent international cooperation and harmonization in the validation and adoption of alternative approaches to animal testing.

  6. Validation of pesticide multi-residue analysis method on cucumber

    2011-01-01

    In this study we aimed to validate a multi-pesticide residue analysis method on cucumber. Before real sample injection, a system suitability test was performed on the gas chromatograph (GC). For this purpose, a sensitive pesticide mixture was used for GC-NPD, and performance parameters such as the number of effective theoretical plates, resolution factor, asymmetry, tailing and selectivity were estimated. The system was found suitable for calibration and sample injection. Samples were fortified at levels of 0.02, 0.2, 0.8 and 1 mg/kg with a mixture of dichlorvos, malathion and chlorpyrifos. In the fortification step, 14C-carbaryl was also added to the homogenized analytical portions so that 14C-labelled pesticides could be used to determine extraction efficiency. Then the basic analytical steps, such as ethyl acetate extraction, filtration, evaporation and cleanup, were performed. The GPC calibration using 14C-carbaryl and the fortification mixture (dichlorvos, malathion and chlorpyrifos) showed that the pesticide fraction eluted from the column between the 8-23 ml fractions. The recoveries of 14C-carbaryl after the extraction and cleanup steps were 92.63-111.73% and 74.83-102.22%, respectively. The stability of pesticides during analysis is an important factor. In this study, a stability test was performed including the matrix effect. Our calculations and t-test results showed that the above-mentioned pesticides were not stable during sample processing in our laboratory conditions, and it was found that sample comminution with dry ice may improve stability. In the other part of the study, 14C-chlorpyrifos was used to determine the homogeneity of analytical portions taken from laboratory samples. Use of 14C-labelled pesticides allows quick quantification of the analyte, even without clean-up. The analytical results show that after sample processing with a Waring blender, the analytical portions were homogeneous. Sample processing uncertainty depending on quantity of…

  7. Stability analysis and time-step limits for a Monte Carlo Compton-scattering method

    Densmore, Jeffery D.; Warsa, James S.; Lowrie, Robert B.

    2010-01-01

    A Monte Carlo method for simulating Compton scattering in high energy density applications has been presented that models the photon-electron collision kinematics exactly [E. Canfield, W.M. Howard, E.P. Liang, Inverse Comptonization by one-dimensional relativistic electrons, Astrophys. J. 323 (1987) 565]. However, implementing this technique typically requires an explicit evaluation of the material temperature, which can lead to unstable and oscillatory solutions. In this paper, we perform a stability analysis of this Monte Carlo method and develop two time-step limits that avoid undesirable behavior. The first time-step limit prevents instabilities, while the second, more restrictive time-step limit avoids both instabilities and nonphysical oscillations. With a set of numerical examples, we demonstrate the efficacy of these time-step limits.

  8. Synthesis and characterization of copper nanofluid by a novel one-step method

    Kumar, S. Ananda; Meenakshi, K. Shree; Narashimhan, B.R.V.; Srikanth, S.; Arthanareeswaran, G.

    2009-01-01

    This paper presents a novel one-step method for the preparation of stable, non-agglomerated copper nanofluids by reducing copper sulphate pentahydrate with sodium hypophosphite as the reducing agent in ethylene glycol as the base fluid by means of conventional heating. This is an in situ, one-step method which gives a high yield of product in a short time. The characterization of the nanofluid is done by particle size analyzer, X-ray diffraction topography, UV-vis analysis and Fourier transform infrared spectroscopy (FT-IR), followed by a study of the thermal conductivity of the nanofluid by the transient hot wire method.

  9. Validation of methods for the determination of radium in waters and soil

    Decaillon, J.-G.; Bickel, M.; Hill, C.; Altzitzoglou, T.

    2004-01-01

    This article describes the advantages and disadvantages of several analytical methods used to prepare the alpha-particle source. As a result of this study, a new method combining commercial extraction and ion chromatography prior to a final co-precipitation step is proposed. This method has been applied and validated on several matrices (soil, waters) in the framework of international intercomparisons. The integration of this method in a global procedure to analyze actinoids and radium from a single solution (or digested soil) is also described

  10. Rotor cascade shape optimization with unsteady passing wakes using implicit dual time stepping method

    Lee, Eun Seok

    2000-10-01

    An improved aerodynamic performance of a turbine cascade shape can be achieved through an understanding of the flow-field associated with the stator-rotor interaction. In this research, an axial gas turbine airfoil cascade shape is optimized for improved aerodynamic performance by using an unsteady Navier-Stokes solver and a parallel genetic algorithm. The objective of the research is twofold: (1) to develop a computational fluid dynamics code having faster convergence rate and unsteady flow simulation capabilities, and (2) to optimize a turbine airfoil cascade shape with unsteady passing wakes for improved aerodynamic performance. The computer code solves the Reynolds-averaged Navier-Stokes equations. It is based on the explicit, finite difference, Runge-Kutta time marching scheme and the Diagonalized Alternating Direction Implicit (DADI) scheme, with the Baldwin-Lomax algebraic and k-epsilon turbulence modeling. Improvements in the code focused on the cascade shape design capability, convergence acceleration and unsteady formulation. First, the inverse shape design method was implemented in the code to provide the design capability, where a surface transpiration concept was employed as an inverse technique to modify the geometry satisfying the user-specified pressure distribution on the airfoil surface. Second, an approximation storage multigrid method was implemented as an acceleration technique. Third, the preconditioning method was adopted to speed up the convergence rate in solving low Mach number flows. Finally, the implicit dual time stepping method was incorporated in order to simulate unsteady flow-fields. For the unsteady code validation, Stokes's second problem and the Poiseuille flow were chosen, and the computed results were compared with analytic solutions. To test the code's ability to capture natural unsteady flow phenomena, vortex shedding past a cylinder and the shock oscillation over a bicircular airfoil were simulated and compared with…

  11. Testing and Validation of Computational Methods for Mass Spectrometry.

    Gatto, Laurent; Hansen, Kasper D; Hoopmann, Michael R; Hermjakob, Henning; Kohlbacher, Oliver; Beyer, Andreas

    2016-03-04

    High-throughput methods based on mass spectrometry (proteomics, metabolomics, lipidomics, etc.) produce a wealth of data that cannot be analyzed without computational methods. The impact of the choice of method on the overall result of a biological study is often underappreciated, but different methods can result in very different biological findings. It is thus essential to evaluate and compare the correctness and relative performance of computational methods. The volume of the data as well as the complexity of the algorithms render unbiased comparisons challenging. This paper discusses some problems and challenges in testing and validation of computational methods. We discuss the different types of data (simulated and experimental validation data) as well as different metrics to compare methods. We also introduce a new public repository for mass spectrometric reference data sets (http://compms.org/RefData) that contains a collection of publicly available data sets for performance evaluation for a wide range of different methods.

  12. A two-step method for developing a control rod program for boiling water reactors

    Taner, M.S.; Levine, S.H.; Hsiao, M.Y.

    1992-01-01

    This paper reports on a two-step method established for the generation of a long-term control rod program for boiling water reactors (BWRs). The new method assumes a time-variant target power distribution in core depletion. In the new method, BWR control rod programming is divided into two steps. In step 1, a sequence of optimal, exposure-dependent Haling power distribution profiles is generated, utilizing the spectral shift concept. In step 2, a set of exposure-dependent control rod patterns is developed by using the Haling profiles generated at step 1 as a target. The new method is implemented in a computer program named OCTOPUS. The optimization procedure of OCTOPUS is based on the method of approximation programming, in which the SIMULATE-E code is used to determine the nucleonics characteristics of the reactor core state. In a test, the new method gained in cycle length over a time-invariant target Haling power distribution case because of a moderate application of spectral shift. No thermal limits of the core were violated. The gain in cycle length could be increased further by broadening the extent of the spectral shift.

  13. First steps towards a validation of the new burnup and depletion code TNT

    Herber, S.C.; Allelein, H.J. [RWTH Aachen (Germany). Inst. for Reactor Safety and Reactor Technology; Research Center Juelich (Germany). Inst. for Energy and Climate Research - Nuclear Waste Disposal and Reactor Safety (IEK-6); Friege, N. [RWTH Aachen (Germany). Inst. for Reactor Safety and Reactor Technology; Kasselmann, S. [Research Center Juelich (Germany). Inst. for Energy and Climate Research - Nuclear Waste Disposal and Reactor Safety (IEK-6)

    2012-11-01

    In the frame of the fusion of the core design calculation capabilities, represented by V.S.O.P., and the accident calculation capabilities, represented by MGT(-3D), the successor of the TINTE code, difficulties were observed in defining an interface between a program backbone and the ORIGEN or ORIGENJUEL codes. Estimating the effort of refactoring the ORIGEN code versus writing a new burnup code from scratch led to the decision that it would be more efficient to write a new code, which could benefit from existing programming and software engineering tools on the computer code side and which can use the latest knowledge of nuclear reactions, e.g. consider all documented reaction channels. Therefore a new code with an object-oriented approach was developed at IEK-6. Object-oriented programming is currently state of the art and generally provides improved extensibility and maintainability. The new code was named TNT, which stands for Topological Nuclide Transformation, since the code makes use of the real topology of the nuclear reactions. Here we present first validation results from code-to-code benchmarks against ORIGEN V2.2 and FISPACT2005, with analytical results also used for the comparison whenever possible. The two reference codes were chosen due to their high reputation in the fields of fission reactor analysis (ORIGEN) and fusion facilities (FISPACT). (orig.)

  14. Q-Step methods for Newton-Jacobi operator equation

    The paper considers the Newton-Jacobi operator equation for the solution of nonlinear systems of equations. Special attention is paid to the computational part of this method, with particular reference to the q-step methods.

  15. A Three Step Explicit Method for Direct Solution of Third Order ...

    This study produces a three-step discrete linear multistep method for the direct solution of third order initial value problems of ordinary differential equations of the form y''' = f(x, y, y', y''). A Taylor series expansion technique was adopted in the development of the method. The differential system from the basis polynomial function to…

  16. Measuring perceptions related to e-cigarettes: Important principles and next steps to enhance study validity.

    Gibson, Laura A; Creamer, MeLisa R; Breland, Alison B; Giachello, Aida Luz; Kaufman, Annette; Kong, Grace; Pechacek, Terry F; Pepper, Jessica K; Soule, Eric K; Halpern-Felsher, Bonnie

    2018-04-01

    Measuring perceptions associated with e-cigarette use can provide valuable information to help explain why youth and adults initiate and continue to use e-cigarettes. However, given the complexity of e-cigarette devices and their continuing evolution, measures of perceptions of this product have varied greatly. Our goal, as members of the working group on e-cigarette measurement within the Tobacco Centers of Regulatory Science (TCORS) network, is to provide guidance to researchers developing surveys concerning e-cigarette perceptions. We surveyed the 14 TCORS sites and received and reviewed 371 e-cigarette perception items from seven sites. We categorized the items based on types of perceptions asked, and identified measurement approaches that could enhance data validity and approaches that researchers may consider avoiding. The committee provides suggestions in four areas: (1) perceptions of benefits, (2) harm perceptions, (3) addiction perceptions, and (4) perceptions of social norms. Across these 4 areas, the most appropriate way to assess e-cigarette perceptions depends largely on study aims. The type and number of items used to examine e-cigarette perceptions will also vary depending on respondents' e-cigarette experience (i.e., user vs. non-user), level of experience (e.g., experimental vs. established), type of e-cigarette device (e.g., cig-a-like, mod), and age. Continuous formative work is critical to adequately capture perceptions in response to the rapidly changing e-cigarette landscape. Most important, it is imperative to consider the unique perceptual aspects of e-cigarettes, building on the conventional cigarette literature as appropriate, but not relying on existing conventional cigarette perception items without adjustment.

  17. Development and Validation of Improved Method for Fingerprint ...

    Purpose: To develop and validate an improved method by capillary zone electrophoresis with photodiode array detection for the fingerprint analysis of Ligusticum chuanxiong Hort. (Rhizoma Chuanxiong). Methods: The optimum high performance capillary electrophoresis (HPCE) conditions were 30 mM borax containing 5 ...

  18. Development and Validation of a Bioanalytical Method for Direct ...

    Purpose: To develop and validate a user-friendly spiked plasma method for the extraction of diclofenac potassium that reduces the number of treatments of the plasma sample, in order to minimize human error. Method: Instead of the solvent evaporation technique, the spiked plasma sample was modified with H2SO4 and NaCl, …

  19. Development and Validation of Analytical Method for Losartan ...

    Purpose: To develop a new spectrophotometric method for the analysis of losartan potassium in pharmaceutical formulations by making its complex with …

  20. Triangulation, Respondent Validation, and Democratic Participation in Mixed Methods Research

    Torrance, Harry

    2012-01-01

    Over the past 10 years or so the "Field" of "Mixed Methods Research" (MMR) has increasingly been exerting itself as something separate, novel, and significant, with some advocates claiming paradigmatic status. Triangulation is an important component of mixed methods designs. Triangulation has its origins in attempts to validate research findings…

  1. Coupling of Spinosad Fermentation and Separation Process via Two-Step Macroporous Resin Adsorption Method.

    Zhao, Fanglong; Zhang, Chuanbo; Yin, Jing; Shen, Yueqi; Lu, Wenyu

    2015-08-01

    In this paper, a two-step resin adsorption technology was investigated for spinosad production and separation, as follows: in the first step, resin was added to the fermentor early in the cultivation period to decrease the product concentration in the broth as it formed; in the second step, resin was added after fermentation to adsorb and extract the spinosad. Based on this, a two-step macroporous resin adsorption-membrane separation process for spinosad fermentation, separation, and purification was established. The spinosad concentration in a 5-L fermentor increased by 14.45% after adding 50 g/L of macroporous resin at the beginning of fermentation. The established two-step macroporous resin adsorption-membrane separation process achieved 95.43% purity and 87% yield for spinosad, both higher than the 93.23% and 79.15% obtained by conventional crystallization of spinosad from the aqueous phase. The two-step macroporous resin adsorption method has not only coupled spinosad fermentation and separation but also increased spinosad productivity. In addition, the two-step macroporous resin adsorption-membrane separation process performs better in spinosad yield and purity.

  2. Validation of Land Cover Products Using Reliability Evaluation Methods

    Wenzhong Shi

    2015-06-01

    Validation of land cover products is a fundamental task prior to data applications. Current validation schemes and methods are, however, suited only for assessing classification accuracy and disregard the reliability of land cover products. The reliability evaluation of land cover products should be undertaken to provide reliable land cover information. In addition, the lack of high-quality reference data often constrains validation and affects the reliability results of land cover products. This study proposes a validation schema to evaluate the reliability of land cover products, including two methods, namely, result reliability evaluation and process reliability evaluation. Result reliability evaluation computes the reliability of land cover products using seven reliability indicators. Process reliability evaluation analyzes the reliability propagation in the data production process to obtain the reliability of land cover products. Fuzzy fault tree analysis is introduced and improved in the reliability analysis of a data production process. Research results show that the proposed reliability evaluation scheme is reasonable and can be applied to validate land cover products. Through the analysis of the seven indicators of result reliability evaluation, more information on land cover can be obtained for strategic decision-making and planning, compared with traditional accuracy assessment methods. Process reliability evaluation without the need for reference data can facilitate the validation and reflect the change trends of reliabilities to some extent.

  3. Necessary steps in factor analysis: Enhancing validation studies of educational instruments. The PHEEM applied to clerks as an example

    Schonrock-Adema, Johanna; Heijne-Penninga, Marjolein; van Hell, Elisabeth A.; Cohen-Schotanus, Janke

    2009-01-01

    Background: The validation of educational instruments, in particular the employment of factor analysis, can be improved in many instances. Aims: To demonstrate the superiority of a sophisticated method of factor analysis, implying an integration of recommendations described in the factor analysis…

  4. 76 FR 28664 - Method 301-Field Validation of Pollutant Measurement Methods From Various Waste Media

    2011-05-18

    … d_m = the mean of the paired sample differences; n = total number of paired samples. 7.4.2 t Test … being compared to a validated test method as part of the Method 301 validation and an audit sample for … tighten the acceptance criteria for the precision of candidate alternative test methods. One commenter …
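
    The quantities in the snippet above feed the usual paired t statistic, sketched below. This is the generic paired t test, assumed here for illustration; Method 301 prescribes additional bias-correction and precision criteria that are not shown.

```python
# Minimal sketch of the paired comparison at the heart of a Method 301-style
# field validation: a t statistic from the mean of the paired differences
# between a candidate and a validated method. Generic paired t test only;
# Method 301's bias-correction and precision criteria are omitted.
import math

def paired_t(candidate: list, validated: list) -> float:
    diffs = [c - v for c, v in zip(candidate, validated)]
    n = len(diffs)
    d_m = sum(diffs) / n                             # mean paired difference
    s = math.sqrt(sum((d - d_m) ** 2 for d in diffs) / (n - 1))
    return d_m / (s / math.sqrt(n))                  # compare to t(n-1)

if __name__ == "__main__":
    cand = [10.2, 9.8, 10.5, 10.1, 9.9, 10.4]   # hypothetical concentrations
    valid = [10.0, 9.9, 10.3, 10.0, 10.1, 10.2]
    print(round(paired_t(cand, valid), 3))
```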

  5. Using a Two-Step Method to Measure Transgender Identity in Latin America/the Caribbean, Portugal, and Spain

    Reisner, Sari L.; Biello, Katie; Rosenberger, Joshua G.; Austin, S. Bryn; Haneuse, Sebastien; Perez-Brumer, Amaya; Novak, David S.; Mimiaga, Matthew J.

    2014-01-01

    Few comparative data are available internationally to examine health differences by transgender identity. A barrier to monitoring the health and well-being of transgender people is the lack of inclusion of measures to assess natal sex/gender identity status in surveys. Data were from a cross-sectional anonymous online survey of members (n > 36,000) of a sexual networking website targeting men who have sex with men in Spanish- and Portuguese-speaking countries/territories in Latin America/the Caribbean, Portugal, and Spain. Natal sex/gender identity status was assessed using a two-step method (Step 1: assigned birth sex; Step 2: current gender identity). Male-to-female (MTF) and female-to-male (FTM) participants were compared to non-transgender males in age-adjusted regression models on socioeconomic status (SES) (education, income, sex work), masculine gender conformity, psychological health and well-being (lifetime suicidality, past-week depressive distress, positive self-worth, general self-rated health, gender-related stressors), and sexual health (HIV infection, past-year STIs, past-3-month unprotected anal or vaginal sex). The two-step method identified 190 transgender participants (0.54%; 158 MTF, 32 FTM). Of the 12 health-related variables, six showed significant differences between the three groups: SES, masculine gender conformity, lifetime suicidality, depressive distress, positive self-worth, and past-year genital herpes. A two-step approach is recommended for health surveillance efforts to assess natal sex/gender identity status. Cognitive testing to formally validate the assigned birth sex and current gender identity survey items in Spanish and Portuguese is encouraged. PMID:25030120
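
    As a rough illustration of the two-step logic, the Python sketch below derives a natal sex/gender identity category from the two survey items; the answer options and category labels are assumptions for illustration, not the survey's actual wording.

        def classify_natal_sex_gender(assigned_birth_sex: str, current_gender: str) -> str:
            """Two-step derivation: Step 1 assigned birth sex, Step 2 current gender."""
            birth = assigned_birth_sex.strip().lower()
            gender = current_gender.strip().lower()
            if birth == "male" and gender in ("woman", "transgender"):
                return "MTF"
            if birth == "female" and gender in ("man", "transgender"):
                return "FTM"
            return f"non-transgender {birth}"

        print(classify_natal_sex_gender("Male", "Woman"))    # -> MTF
        print(classify_natal_sex_gender("Female", "Woman"))  # -> non-transgender female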

  6. Validation of NAA Method for Urban Particulate Matter

    Woro Yatu Niken Syahfitri; Muhayatun; Diah Dwiana Lestiani; Natalia Adventini

    2009-01-01

    Nuclear analytical techniques have been applied in many countries for the determination of environmental pollutants. NAA (neutron activation analysis) is a nuclear analytical technique with low detection limits, high specificity, and high precision and accuracy for the large majority of naturally occurring elements; it is capable of non-destructive and simultaneous multi-element determination and can handle small sample sizes (< 1 mg). To ensure the quality and reliability of the method, validation needs to be performed. A standard reference material, SRM NIST 1648 Urban Particulate Matter, was used to validate the NAA method, with accuracy and precision tests as validation parameters. The material was validated for 18 elements: Ti, I, V, Br, Mn, Na, K, Cl, Cu, Al, As, Fe, Co, Zn, Ag, La, Cr, and Sm. The results showed that the percent relative standard deviation of the measured elemental concentrations ranged from 2 to 14.8% for most of the elements analyzed, whereas HorRat values were in the range 0.3-1.3. Accuracy test results showed that the relative bias ranged from -11.1 to 3.6%. Based on the validation results, it can be stated that the NAA method is reliable for the characterization of particulate matter and other similar sample matrices to support air quality monitoring. (author)
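
    The two validation parameters reported above reduce to simple statistics; a small Python sketch, with invented replicate data rather than the study's measurements:

        import numpy as np

        def validate_element(measured, certified):
            """Precision as %RSD of replicates, accuracy as relative bias vs. the
            SRM certified value (numbers below are illustrative only)."""
            mean = measured.mean()
            rsd = 100.0 * measured.std(ddof=1) / mean          # precision, %
            rel_bias = 100.0 * (mean - certified) / certified  # accuracy, %
            return rel_bias, rsd

        zn_replicates = np.array([4680.0, 4820.0, 4750.0, 4710.0])  # mg/kg, made up
        bias, rsd = validate_element(zn_replicates, certified=4760.0)
        print(f"relative bias {bias:+.1f}%, RSD {rsd:.1f}%")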

  7. Validation of calculational methods for nuclear criticality safety - approved 1975

    Anon.

    1977-01-01

    The American National Standard for Nuclear Criticality Safety in Operations with Fissionable Materials Outside Reactors, N16.1-1975, states in 4.2.5: In the absence of directly applicable experimental measurements, the limits may be derived from calculations made by a method shown to be valid by comparison with experimental data, provided sufficient allowances are made for uncertainties in the data and in the calculations. There are many methods of calculation which vary widely in basis and form. Each has its place in the broad spectrum of problems encountered in the nuclear criticality safety field; however, the general procedure to be followed in establishing validity is common to all. The standard states the requirements for establishing the validity and area(s) of applicability of any calculational method used in assessing nuclear criticality safety

  8. An analytical method for the calculation of static characteristics of linear step motors for control rod drives in nuclear reactors

    Khan, S.H.; Ivanov, A.A.

    1995-01-01

    An analytical method for calculating the static characteristics of linear dc step motors (LSM) is described. These multiphase passive-armature motors are now being developed for control rod drives (CRD) in large nuclear reactors. The static characteristic of such an LSM is defined by the variation of electromagnetic force with armature displacement, and it determines motor performance in both standing and dynamic modes of operation. The proposed analytical technique for calculating this characteristic is based on the permeance analysis method applied to the phase magnetic circuits of the LSM. The reluctances of the various parts of the phase magnetic circuit are calculated analytically by assuming probable flux paths and by taking into account the complex nature of the magnetic field distribution within them. For given armature positions, stator and armature iron saturation is taken into account by an efficient iterative algorithm with fast convergence. The method is validated by comparing theoretical results with experimental ones, which shows satisfactory agreement for small stator currents and weak iron saturation.
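
    The saturation iteration can be sketched compactly. The Python below collapses the phase circuit to one equivalent reluctance with an invented mu_r(B) curve and finds the flux by bisection on the monotone circuit equation; the paper's model instead iterates over many branch reluctances and repeats the solution across armature positions to build the force characteristic.

        import numpy as np

        MU0 = 4e-7 * np.pi
        N_TURNS, CURRENT = 200, 1.0      # winding turns, phase current (A)
        AREA, LENGTH = 4e-4, 0.08        # core cross-section (m^2), path length (m)

        def mu_r(B):                     # made-up saturation curve, knee near 1.2 T
            return 1.0 + 4000.0 / (1.0 + (abs(B) / 1.2) ** 8)

        def residual(phi):               # phi * R(phi) - MMF, monotone in phi
            B = phi / AREA
            reluctance = LENGTH / (MU0 * mu_r(B) * AREA)
            return phi * reluctance - N_TURNS * CURRENT

        lo, hi = 1e-9, 1.0               # bracket the root, then bisect
        for _ in range(60):
            mid = 0.5 * (lo + hi)
            lo, hi = (mid, hi) if residual(mid) < 0 else (lo, mid)
        phi = 0.5 * (lo + hi)
        print(f"flux {phi * 1e3:.3f} mWb, flux density {phi / AREA:.2f} T")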

  9. Stability of one-step methods in transient nonlinear heat conduction

    Hughes, J.R.

    1977-01-01

    The purpose of the present work is to ascertain practical stability conditions for one-step methods commonly used in transient nonlinear heat conduction analyses. The class of problems considered is governed by a temporally continuous, spatially discrete system involving the capacity matrix C, conductivity matrix K, heat supply vector, temperature vector and time differentiation. In the linear case, in which K and C are constant, the stability behavior of one-step methods is well known. In this paper the concepts of stability appropriate to the nonlinear problem are thoroughly discussed. They of course reduce to the usual stability criterion for the linear, constant-coefficient case. However, for nonlinear problems there are differences, and these ideas are of key importance in obtaining practical stability conditions. Of particular importance is a recent result which indicates that, in a sense, the trapezoidal and midpoint families are equivalent. Thus, stability results for one family may be translated into results for the other. The main results obtained are summarized as follows. The stability behavior of the explicit Euler method in the nonlinear regime is analogous to that for linear problems; in particular, an a priori step size restriction may be determined for each time step. The precise time step restriction on implicit conditionally stable members of the trapezoidal and midpoint families is shown not to be determinable a priori. Of considerable practical significance, unconditionally stable members of the trapezoidal and midpoint families are identified.
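
    For the linear, constant-coefficient case mentioned above, the explicit Euler restriction can be computed directly from the generalized eigenproblem K v = lambda C v: the method is stable for dt <= 2 / lambda_max. A small sketch with made-up matrices:

        import numpy as np
        from scipy.linalg import eigh

        # Toy symmetric positive definite capacity and conductivity matrices,
        # standing in for the spatially discrete system C du/dt + K u = f.
        C = np.diag([2.0, 1.0, 1.5])
        K = np.array([[ 4.0, -1.0,  0.0],
                      [-1.0,  3.0, -1.0],
                      [ 0.0, -1.0,  2.0]])

        lam = eigh(K, C, eigvals_only=True)   # generalized eigenvalues
        dt_crit = 2.0 / lam.max()             # explicit Euler stability bound
        print(f"critical explicit Euler step: {dt_crit:.4f}")

        # In the nonlinear regime the paper's point is that this bound can be
        # re-evaluated a priori at each step from the current C and K.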

  10. Design of a Two-Step Calibration Method of Kinematic Parameters for Serial Robots

    WANG, Wei; WANG, Lei; YUN, Chao

    2017-03-01

    Serial robots are used to handle workpieces with large dimensions, and calibrating kinematic parameters is one of the most efficient ways to upgrade their accuracy. Many models have been set up to investigate how many kinematic parameters can be identified to meet the minimal principle, but the base frame and the kinematic parameters are usually calibrated together in a single step, without distinction. A two-step method of calibrating kinematic parameters is proposed to improve the accuracy of the robot's base frame and kinematic parameters. The forward kinematics, described with respect to the measuring coordinate frame, are established based on the product-of-exponentials (POE) formula. In the first step the robot's base coordinate frame is calibrated using the unit quaternion form. The errors of both the robot's reference configuration and the base coordinate frame's pose are equivalently transformed to the zero-position errors of the robot's joints. The simplified model of the robot's positioning error is established in explicit second-order expressions. The identification model is then solved by the least squares method, requiring measured position coordinates only. The complete subtask of calibrating the robot's 39 kinematic parameters is finished in the second step. A group of calibration experiments shows that the proposed two-step calibration method improves the average absolute accuracy of industrial robots to 0.23 mm. This paper shows that the robot's base frame should be calibrated before its kinematic parameters in order to upgrade absolute positioning accuracy.
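
    The second-step identification is, at its core, a linear least-squares problem. The sketch below, with a random stand-in for the error Jacobian rather than one built from the POE model, recovers 39 parameter errors from position residuals only:

        import numpy as np

        rng = np.random.default_rng(0)
        n_poses, n_params = 60, 39
        J = rng.normal(size=(3 * n_poses, n_params))        # stacked 3D position rows
        dp_true = 1e-3 * rng.normal(size=n_params)          # "true" parameter errors
        dx = J @ dp_true + 1e-5 * rng.normal(size=3 * n_poses)  # measured residuals

        dp_hat, *_ = np.linalg.lstsq(J, dx, rcond=None)     # least-squares identification
        print("max identification error:", np.abs(dp_hat - dp_true).max())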

  11. A simple three step method for selective placement of organic groups in mesoporous silica thin films

    Franceschini, Esteban A. [Gerencia Química, Centro Atómico Constituyentes, Comisión Nacional de Energía Atómica, Av. Gral Paz 1499 (B1650KNA) San Martín, Buenos Aires (Argentina); Llave, Ezequiel de la; Williams, Federico J. [Departamento de Química Inorgánica, Analítica y Química Física and INQUIMAE-CONICET, Facultad de Ciencias Exactas y Naturales, Universidad de Buenos Aires, Ciudad Universitaria, Pabellón II, C1428EHA Buenos Aires (Argentina); Soler-Illia, Galo J.A.A., E-mail: galo.soler.illia@gmail.com [Departamento de Química Inorgánica, Analítica y Química Física and INQUIMAE-CONICET, Facultad de Ciencias Exactas y Naturales, Universidad de Buenos Aires, Ciudad Universitaria, Pabellón II, C1428EHA Buenos Aires (Argentina); Instituto de Nanosistemas, Universidad Nacional de General San Martín, 25 de Mayo y Francia (1650) San Martín, Buenos Aires (Argentina)

    2016-02-01

    Selective functionalization of mesoporous silica thin films was achieved using a three-step method. The first step consists of functionalizing the outer surface, followed by washing off the structuring agent (second step), leaving the inner surface of the pores free to be functionalized in the third step. This reproducible method permits anchoring a volatile silane group on the outer film surface and a second type of silane group on the inner surface of the pores. As a proof of concept, we modified the outer surface of a mesoporous silica film with trimethylsilane (–Si–(CH3)3) groups and the inner pore surface with propylamino (–Si–(CH2)3–NH2) groups. The obtained silica films were characterized by Environmental Ellipsometric Porosimetry (EEP), EDS, XPS, contact angle and electron microscopy. The selectively functionalized (SF) silica shows an amount of surface amino functions 4.3 times lower than the one-step functionalized (OSF) silica samples. The method presented here can be extended to a combination of silane chlorides and alkoxides as functional groups, opening up a new route toward the synthesis of multifunctional mesoporous thin films with precisely localized organic functions. - Highlights: • Selective functionalization of mesoporous silica thin films was achieved using a three-step method. • A volatile silane group is anchored by evaporation on the outer film surface. • A second silane is deposited on the inner surface of the pores by post-grafting. • Contact angle, EDS and XPS measurements show different proportions of amino groups on the two surfaces. • This method can be extended to a combination of silane chloride and alkoxide functional groups.

  12. Multi-step polynomial regression method to model and forecast malaria incidence.

    Chandrajit Chatterjee

    Malaria is one of the most severe problems faced by the world even today. Understanding the causative factors such as age, sex, social factors and environmental variability, as well as the underlying transmission dynamics of the disease, is important for epidemiological research on malaria and its eradication. Thus, development of a suitable modeling approach and methodology, based on the available data on the incidence of the disease and other related factors, is of utmost importance. In this study, we developed a simple non-linear regression methodology for modeling and forecasting malaria incidence in Chennai city, India, and predicted future disease incidence with a high confidence level. We considered three types of data to develop the regression methodology: a longer time series of Slide Positivity Rates (SPR) of malaria; a smaller time series (deaths due to Plasmodium vivax) of one year; and spatial data (zonal distribution of P. vivax deaths) for the city, along with climatic factors, population and previous incidence of the disease. We performed variable selection by a simple correlation study, identified the initial relationships between variables through non-linear curve fitting, and used multi-step methods for inducting variables into the non-linear regression analysis, along with Gauss-Markov models and ANOVA for testing the predictions, validity and construction of confidence intervals. The results demonstrate the applicability of our method to different types of data and the autoregressive nature of the forecasting, and show high prediction power for both SPR and P. vivax deaths, where the one-lag SPR values play an influential role and prove useful for better prediction. Different climatic factors are identified as playing a crucial role in shaping the disease curve. Further, disease incidence at the zonal level and the effect of causative factors on different zonal clusters indicate the pattern of malaria prevalence in the city.
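
    A toy version of the one-lag polynomial autoregression on SPR, with a synthetic series in place of the Chennai data, might look as follows in Python:

        import numpy as np

        rng = np.random.default_rng(1)
        spr = [5.0]                                   # synthetic SPR series
        for _ in range(99):
            nxt = 0.7 * spr[-1] + 0.01 * spr[-1] ** 2 + 1.5 + rng.normal(0.0, 0.2)
            spr.append(max(nxt, 0.0))
        spr = np.array(spr)

        x, y = spr[:-1], spr[1:]                      # one-lag pairs
        coeffs = np.polyfit(x, y, deg=2)              # quadratic in SPR(t-1)
        forecast = np.polyval(coeffs, spr[-1])        # one-step-ahead forecast
        print(f"next-period SPR forecast: {forecast:.2f}")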

  13. Development, reliability, and validity testing of Toddler NutriSTEP: a nutrition risk screening questionnaire for children 18-35 months of age.

    Randall Simpson, Janis; Gumbley, Jillian; Whyte, Kylie; Lac, Jane; Morra, Crystal; Rysdale, Lee; Turfryer, Mary; McGibbon, Kim; Beyers, Joanne; Keller, Heather

    2015-09-01

    Nutrition is vital for optimal growth and development of young children. Nutrition risk screening can facilitate early intervention when followed by nutritional assessment and treatment. NutriSTEP (Nutrition Screening Tool for Every Preschooler) is a valid and reliable nutrition risk screening questionnaire for preschoolers (aged 3-5 years). A need was identified for a similar questionnaire for toddlers (aged 18-35 months). The purpose was to develop a reliable and valid Toddler NutriSTEP. Toddler NutriSTEP was developed in 4 phases. Content and face validity were determined with a literature review, parent focus groups (n = 6; 48 participants), and experts (n = 13) (phase A). A draft questionnaire was refined with key intercept interviews of 107 parents/caregivers (phase B). Test-retest reliability (phase C), based on intra-class correlations (ICC), Kappa (κ) statistics, and Wilcoxon tests, was assessed with 133 parents/caregivers. Criterion validity (phase D) was assessed using Receiver Operating Characteristic (ROC) curves by comparing scores on the Toddler NutriSTEP to a comprehensive nutritional assessment of 200 toddlers with a registered dietitian (RD). The Toddler NutriSTEP was reliable between 2 administrations (ICC = 0.951, F = 20.53). Scores on the Toddler NutriSTEP and the dietitian's nutritional assessment were correlated (r = 0.67). The Toddler NutriSTEP questionnaire is both reliable and valid for screening for nutritional risk in toddlers.

  14. Methods for growth of relatively large step-free SiC crystal surfaces

    Neudeck, Philip G. (Inventor); Powell, J. Anthony (Inventor)

    2002-01-01

    A method for growing arrays of large-area, device-size films of step-free (i.e., atomically flat) SiC surfaces for semiconductor electronic device applications is disclosed. This method utilizes a lateral growth process that better overcomes the effects of extended defects in the seed crystal substrate that limited the step-free area achievable by prior art processes. The step-free SiC surface is particularly suited for the heteroepitaxial growth of 3C (cubic) SiC, AlN, and GaN films used for the fabrication of both surface-sensitive devices (i.e., surface channel field effect transistors such as HEMTs and MOSFETs) as well as high-electric-field devices (pn diodes and other solid-state power switching devices) that are sensitive to extended crystal defects.

  15. The validity and reliability of the four square step test in different adult populations: a systematic review.

    Moore, Martha; Barker, Karen

    2017-09-11

    The four square step test (FSST) was first validated in healthy older adults to provide a measure of dynamic standing balance and mobility. The FSST has since been used in a variety of patient populations. The purpose of this systematic review is to determine the validity and reliability of the FSST in these different adult patient populations. The literature search was conducted to identify all studies that measured the validity and reliability of the FSST. Six electronic databases were searched, including AMED, CINAHL, MEDLINE, PEDro, Web of Science and Google Scholar. Grey literature was also searched for any documents relevant to the review. Two independent reviewers carried out study selection and quality assessment. Methodological quality was assessed using the QUADAS-2 tool, a validated tool for the quality assessment of diagnostic accuracy studies, and the COSMIN four-point checklist, which contains standards for evaluating reliability studies on the measurement properties of health instruments. Fifteen studies were reviewed, covering community-dwelling older adults and people with Parkinson's disease, Huntington's disease, multiple sclerosis, vestibular disorders, stroke, unilateral transtibial amputation, knee pain and hip osteoarthritis. Three of the studies were of moderate methodological quality, scoring low in risk of bias and applicability for all domains of the QUADAS-2 tool. Three studies scored "fair" on the COSMIN four-point checklist for the reliability components. The concurrent validity of the FSST was measured in nine of the studies, with moderate to strong correlations being found. Excellent intraclass correlation coefficients were found between physiotherapists carrying out the tests (ICC = .99), with good to excellent test-retest reliability shown in nine of the studies (ICC = .73-.98). The FSST may be an effective and valid tool for measuring dynamic balance and a participant's falls risk. It has been shown to have strong ...

  16. A novel enterovirus and parechovirus multiplex one-step real-time PCR-validation and clinical experience

    Nielsen, A. C. Y.; Bottiger, B.; Midgley, S. E.

    2013-01-01

    As the number of new enteroviruses and human parechoviruses seems ever growing, the necessity for updated diagnostics is relevant. We have updated an enterovirus assay and combined it with a previously published assay for human parechovirus, resulting in a multiplex one-step RT-PCR assay. The multiplex assay was validated by analysing the sensitivity and specificity of the assay compared to the respective monoplex assays, and a good concordance was found. Furthermore, the enterovirus assay was able to detect 42 reference strains from all 4 species, and an additional 9 genotypes during panel testing and routine usage. During 15 months of routine use, from October 2008 to December 2009, we received and analysed 2187 samples (stool samples, cerebrospinal fluids, blood samples, respiratory samples and autopsy samples) from 1546 patients, and detected enteroviruses and parechoviruses ...

  17. Validation of battery-alternator model against experimental data - a first step towards developing a future power supply system

    Boulos, A.M.; Burnham, K.J.; Mahtani, J.L. [Coventry University (United Kingdom). Control Theory and Applications Centre; Pacaud, C. [Jaguar Cars Ltd., Coventry (United Kingdom). Engineering Centre

    2004-01-01

    The electric power system of a modern vehicle has to supply enough electrical energy to drive numerous electrical and electronic systems and components. The electric power system of a vehicle consists of two major components: an alternator and a battery. A detailed understanding of the characteristics of the electric power system, electrical load demands and the operating environment, such as road conditions and vehicle laden weight, is required when the capacities of the generator and the battery are to be determined for a vehicle. In this study, a battery-alternator system has been developed and simulated in MATLAB/Simulink, and data obtained from vehicle tests have been used as a basis for validating the models. This is considered to be a necessary first step in the design and development of a new 42 V power supply system. (author)
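
    A deliberately crude charge-balance model in the same spirit (forward Euler on battery state of charge) is sketched below; every parameter is invented and the model is far simpler than a MATLAB/Simulink battery-alternator system, let alone the 42 V design discussed above.

        import numpy as np

        CAP_AH, R_INT = 60.0, 0.01            # battery capacity (Ah), resistance (ohm)
        dt, t_end = 1.0, 600.0                # time step and horizon (s)
        soc = 0.7                             # initial state of charge

        def ocv(soc):                         # crude open-circuit voltage map (12 V)
            return 11.8 + 1.2 * soc

        i_batt = 0.0
        for t in np.arange(0.0, t_end, dt):
            i_alt = 70.0 if t > 60.0 else 0.0     # alternator output after start-up
            i_load = 45.0                         # electrical loads (A)
            i_batt = i_alt - i_load               # positive current charges battery
            soc = min(max(soc + i_batt * dt / (CAP_AH * 3600.0), 0.0), 1.0)

        print(f"final SOC {soc:.3f}, terminal voltage {ocv(soc) + i_batt * R_INT:.2f} V")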

  18. Perturbed Strong Stability Preserving Time-Stepping Methods For Hyperbolic PDEs

    Hadjimichael, Yiannis

    2017-09-30

    A plethora of physical phenomena are modelled by hyperbolic partial differential equations, for which the exact solution is usually not known. Numerical methods are employed to approximate the solution to hyperbolic problems; however, in many cases it is difficult to satisfy certain physical properties while maintaining high order of accuracy. In this thesis, we develop high-order time-stepping methods that are capable of maintaining stability constraints of the solution, when coupled with suitable spatial discretizations. Such methods are called strong stability preserving (SSP) time integrators, and we mainly focus on perturbed methods that use both upwind- and downwind-biased spatial discretizations. Firstly, we introduce a new family of third-order implicit Runge–Kutta methods with arbitrarily large SSP coefficient. We investigate the stability and accuracy of these methods and we show that they perform well on hyperbolic problems with large CFL numbers. Moreover, we extend the analysis of SSP linear multistep methods to semi-discretized problems for which different terms on the right-hand side of the initial value problem satisfy different forward Euler (or circle) conditions. Optimal perturbed and additive monotonicity-preserving linear multistep methods are studied in the context of such problems. Optimal perturbed methods attain augmented monotonicity-preserving step sizes when the different forward Euler conditions are taken into account. On the other hand, we show that optimal SSP additive methods achieve a monotonicity-preserving step-size restriction no better than that of the corresponding non-additive SSP linear multistep methods. Furthermore, we develop the first SSP linear multistep methods of order two and three with variable step size, and study their optimality. We describe an optimal step-size strategy and demonstrate the effectiveness of these methods on various one- and multi-dimensional problems. Finally, we establish necessary conditions
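
    As a fixed reference point for the SSP idea, the classical third-order SSP Runge-Kutta method (Shu-Osher form) is easy to state in code; the thesis itself goes beyond this to perturbed, implicit and multistep variants.

        import numpy as np

        def ssprk3_step(f, u, dt):
            """One step of the classical SSPRK(3,3) method in Shu-Osher form:
            a convex combination of forward Euler steps."""
            u1 = u + dt * f(u)
            u2 = 0.75 * u + 0.25 * (u1 + dt * f(u1))
            return u / 3.0 + (2.0 / 3.0) * (u2 + dt * f(u2))

        # Example: linear advection u_t + u_x = 0 with periodic upwind differences.
        N = 200
        dx = 1.0 / N
        x = np.arange(N) * dx
        u = np.exp(-100.0 * (x - 0.5) ** 2)
        f = lambda u: -(u - np.roll(u, 1)) / dx
        dt = 0.5 * dx                          # within the SSP step-size bound
        for _ in range(100):
            u = ssprk3_step(f, u, dt)
        print("max u after 100 steps:", u.max())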

  19. Studying Hospitalizations and Mortality in the Netherlands: Feasible and Valid Using Two-Step Medical Record Linkage with Nationwide Registers.

    Elske Sieswerda

    In the Netherlands, the postal code is needed to study hospitalizations of individuals in the nationwide hospitalization register. Studying hospitalizations longitudinally becomes troublesome if individuals change address. We aimed to report on the feasibility and validity of a two-step medical record linkage approach to examine longitudinal trends in hospitalizations and mortality in a study cohort. First, we linked a study cohort of 1564 survivors of childhood cancer with the Municipal Personal Records Database (GBA), which has postal code history and mortality data available. Within GBA, we sampled a reference population matched on year of birth, gender and calendar year. Second, we extracted hospitalizations from the Hospital Discharge Register (LMR) with a date of discharge during unique follow-up (based on date of birth, gender and postal code in GBA). We calculated the agreement of death and of being hospitalized in survivors according to the registers and to available cohort data. We retrieved 1477 (94%) survivors from GBA. Median percentages of unique/potential follow-up were 87% (survivors) and 83% (reference persons). Characteristics of survivors and reference persons contributing to unique follow-up were comparable. Agreement of hospitalization during unique follow-up was 94% and agreement of death was 98%. In the absence of unique identifiers in the Dutch hospitalization register, it is feasible and valid to study hospitalizations and mortality of individuals longitudinally using a two-step medical record linkage approach. Cohort studies in the Netherlands thus have the opportunity to study mortality and hospitalization rates over time. These outcomes provide insight into the burden of clinical events and healthcare use in studies on patients at risk of long-term morbidities.
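
    Schematically, the two-step linkage is a pair of joins; the pandas sketch below uses invented column names and one-row toy tables only to show the shape of the approach (cohort to GBA on person identifiers, then GBA to LMR on date of birth, gender and postal code).

        import pandas as pd

        cohort = pd.DataFrame({"dob": ["1980-01-02"], "gender": ["F"],
                               "person_key": ["x1"]})
        gba = pd.DataFrame({"dob": ["1980-01-02"], "gender": ["F"],
                            "person_key": ["x1"], "postal_code": ["1234AB"]})
        lmr = pd.DataFrame({"dob": ["1980-01-02"], "gender": ["F"],
                            "postal_code": ["1234AB"], "discharge": ["2005-06-01"]})

        step1 = cohort.merge(gba, on=["dob", "gender", "person_key"])    # cohort -> GBA
        step2 = step1.merge(lmr, on=["dob", "gender", "postal_code"])    # GBA -> LMR
        print(step2[["dob", "postal_code", "discharge"]])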

  20. Accurate step-FMCW ultrasound ranging and comparison with pulse-echo signaling methods

    Natarajan, Shyam; Singh, Rahul S.; Lee, Michael; Cox, Brian P.; Culjat, Martin O.; Grundfest, Warren S.; Lee, Hua

    2010-03-01

    This paper presents a method for high-frequency ultrasound ranging based on stepped frequency-modulated continuous waves (FMCW), potentially capable of producing a higher signal-to-noise ratio (SNR) than traditional pulse-echo signaling. In current ultrasound systems, the use of higher frequencies (10-20 MHz) to enhance resolution lowers signal quality due to frequency-dependent attenuation. The proposed ultrasound signaling format, step-FMCW, is well known in the radar community, and features lower peak power, wider dynamic range, lower noise figure and simpler electronics in comparison to pulse-echo systems. In pulse-echo ultrasound ranging, distances are calculated using the transit times between a pulse and its subsequent echoes. In step-FMCW ultrasonic ranging, the phase and magnitude differences at stepped frequencies are used to sample the frequency domain; by taking the inverse Fourier transform, a comprehensive range profile is recovered that has increased immunity to noise over conventional ranging methods. Step-FMCW and pulse-echo waveforms were created using custom-built hardware consisting of an arbitrary waveform generator and a dual-channel superheterodyne receiver, providing high SNR and, in turn, accuracy in detection.
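
    The frequency-domain sampling and inverse-transform step can be demonstrated in a few lines of Python with invented parameters (a single point reflector, noiseless channel):

        import numpy as np

        C = 1540.0                        # speed of sound in tissue (m/s)
        N, DF = 128, 10e3                 # number of frequency steps, step size (Hz)
        f = 10e6 + np.arange(N) * DF      # stepped carrier frequencies
        r_true = 0.040                    # reflector range (m)

        h = np.exp(-1j * 2 * np.pi * f * (2 * r_true / C))  # phase at each step
        profile = np.abs(np.fft.ifft(h))                    # range profile
        r_axis = np.arange(N) * C / (2 * N * DF)            # bin spacing c/(2*N*df)
        print(f"estimated range: {r_axis[profile.argmax()] * 1e3:.1f} mm")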

  1. Rapid expansion method (REM) for time-stepping in reverse time migration (RTM)

    Pestana, Reynam C.; Stoffa, Paul L.

    2009-01-01

    ... an analytical approximation for the Bessel function, where we assume that the time step is sufficiently small. From this derivation we find that if we consider only the first two Chebyshev polynomial terms in the rapid expansion method, we can obtain the second ...

  2. The method of quick satellite aiming with 3-Steps on the mobile satellite station

    Sheng Liang

    2017-02-01

    The study analyses and summarizes the technology of satellite aiming during real-time broadcast of mobile video. Based on practical exercises and users' requirements, we propose a quick three-step satellite-aiming method that suits field conditions and standardizes operation, improving efficiency and quality of service.

  3. Ion-step method for surface potential sensing of silicon nanowires

    Chen, S.; van Nieuwkasteele, Jan William; van den Berg, Albert; Eijkel, Jan C.T.

    2016-01-01

    This paper presents a novel stimulus-response method for surface potential sensing of silicon nanowire (Si NW) field-effect transistors. When an "ion-step" from low to high ionic strength is applied as a stimulus to the gate oxide surface, an increase of the double layer capacitance is expected.

  4. The Fractional Step Method Applied to Simulations of Natural Convective Flows

    Westra, Douglas G.; Heinrich, Juan C.; Saxon, Jeff (Technical Monitor)

    2002-01-01

    This paper describes research done to apply the Fractional Step Method to finite-element simulations of natural convective flows in pure liquids, permeable media, and in a directionally solidified metal alloy casting. The Fractional Step Method has commonly been applied to high Reynolds number flow simulations, but is less common for low Reynolds number flows, such as natural convection in liquids and in permeable media. The Fractional Step Method offers increased speed and reduced memory requirements by allowing non-coupled solution of the pressure and the velocity components. The Fractional Step Method has particular benefits for predicting flows in a directionally solidified alloy, since other methods presently employed are not very efficient. Previously, the most suitable method for predicting flows in a directionally solidified binary alloy was the penalty method, which requires direct matrix solvers due to the penalty term. The Fractional Step Method allows iterative solution of the finite element stiffness matrices, thereby allowing more efficient solution of the matrices. The Fractional Step Method also lends itself to parallel processing, since the velocity component stiffness matrices can be built and solved independently of each other. The finite-element simulations of a directionally solidified casting are used to predict macrosegregation in directionally solidified castings. In particular, the finite-element simulations predict the existence of 'channels' within the processing mushy zone and subsequently 'freckles' within the fully processed solid, which are known to result from macrosegregation, or what is often referred to as thermo-solutal convection. These freckles cause material property non-uniformities in directionally solidified castings; therefore many of these castings are scrapped. The phenomenon of natural convection in an alloy undergoing directional solidification, or thermo-solutal convection, will be explained.
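
    A bare-bones fractional step (projection) update on a doubly periodic grid can be written with FFTs: advance a tentative velocity, solve a pressure Poisson equation from its divergence, then correct. The sketch keeps only viscosity in the tentative step; advection and the buoyancy coupling essential to the convection problems above are omitted.

        import numpy as np

        N, L, nu, dt = 64, 1.0, 1e-3, 1e-3
        k = 2.0 * np.pi * np.fft.fftfreq(N, d=L / N)
        KX, KY = np.meshgrid(k, k, indexing="ij")
        K2 = KX ** 2 + KY ** 2
        K2[0, 0] = 1.0                                # avoid dividing the mean mode

        rng = np.random.default_rng(2)
        u, v = rng.normal(size=(N, N)), rng.normal(size=(N, N))

        for _ in range(10):
            uh, vh = np.fft.fft2(u), np.fft.fft2(v)
            uh *= 1.0 - nu * dt * K2                  # 1) tentative velocity (diffusion)
            vh *= 1.0 - nu * dt * K2
            div = 1j * KX * uh + 1j * KY * vh         # 2) pressure Poisson:
            ph = -div / (dt * K2)                     #    lap(p) = div(u*)/dt
            uh -= dt * 1j * KX * ph                   # 3) correction: u = u* - dt grad(p)
            vh -= dt * 1j * KY * ph
            u, v = np.fft.ifft2(uh).real, np.fft.ifft2(vh).real

        div = np.fft.ifft2(1j * KX * np.fft.fft2(u) + 1j * KY * np.fft.fft2(v)).real
        print("max divergence after projection:", np.abs(div).max())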

  5. Development and Validation of a Liquid Chromatographic Method ...

    A liquid chromatographic method for the simultaneous determination of six human immunodeficiency virus (HIV) protease inhibitors, indinavir, saquinavir, ritonavir, amprenavir, nelfinavir and lopinavir, was developed and validated. Optimal separation was achieved on a PLRP-S 100 Å, 250 x 4.6 mm I.D. column maintained ...

  6. Validated RP-HPLC Method for Quantification of Phenolic ...

    Purpose: To evaluate the total phenolic content and antioxidant potential of the methanol extracts of aerial parts and roots of Thymus sipyleus Boiss and also to determine some phenolic compounds using a newly developed and validated reversed phase high performance liquid chromatography (RP-HPLC) method.

  7. Comparison of the performances and validation of three methods for ...

    SARAH

    2014-02-28

  8. Pressurised liquid extraction of flavonoids in onions. Method development and validation

    Søltoft, Malene; Christensen, J.H.; Nielsen, J.

    2009-01-01

    A rapid and reliable analytical method for quantification of flavonoids in onions was developed and validated. Five extraction methods were tested on freeze-dried onions, and subsequently high performance liquid chromatography (HPLC) with UV detection was used for quantification of seven flavonoids ... extraction methods. However, PLE was the preferred extraction method because it can be highly automated, uses only small amounts of solvents, provides the cleanest extracts, and allows the extraction of light- and oxygen-sensitive flavonoids to be carried out in an inert atmosphere protected from light. ... The PLE method showed good selectivity, precision (RSDs = 3.1-11%) and recovery of the extractable flavonoids (98-99%). The method also appeared to be a multi-method, i.e. generally applicable to, e.g., phenolic acids in potatoes and carrots.

  9. Development and validation of analytical methods for dietary supplements

    Sullivan, Darryl; Crowley, Richard

    2006-01-01

    The expanding use of innovative botanical ingredients in dietary supplements and foods has resulted in a flurry of research aimed at the development and validation of analytical methods for accurate measurement of active ingredients. The pressing need for these methods is being met through an expansive collaborative initiative involving industry, government, and analytical organizations. This effort has resulted in the validation of several important assays as well as important advances in the method engineering procedures, which have improved the efficiency of the process. The initiative has also allowed researchers to hurdle many of the barricades that have hindered accurate analysis, such as the lack of reference standards and comparative data. As the availability of nutraceutical products continues to increase, these methods will provide consumers and regulators with the scientific information needed to assure safety and dependable labeling.

  10. Validation of a residue method to determine pesticide residues in cucumber by using nuclear techniques

    Baysoyu, D.; Tiryaki, O.; Secer, E.; Aydin, G.

    2009-01-01

    In this study, a multi-residue method using ethyl acetate for extraction and gel permeation chromatography for clean-up was validated to determine chlorpyrifos, malathion and dichlorvos in cucumber by gas chromatography. For this purpose, homogenized cucumber samples were fortified with pesticides at 0.02, 0.2, 0.8 and 1 mg/kg levels. The efficiency and repeatability of the method in the extraction and cleanup steps were assessed using 14C-carbaryl by the radioisotope tracer technique. 14C-carbaryl recoveries after the extraction and cleanup steps were between 92.63-111.73% with a repeatability of 4.85% (CV) and 74.83-102.22% with a repeatability of 7.19% (CV), respectively. The homogeneity of analytical samples and the stability of pesticides during homogenization were determined using the radiotracer technique and chromatographic methods, respectively.

  11. Study of CdTe quantum dots grown using a two-step annealing method

    Sharma, Kriti; Pandey, Praveen K.; Nagpal, Swati; Bhatnagar, P. K.; Mathur, P. C.

    2006-02-01

    High size dispersion, large average quantum dot radius and low volume ratio have been major hurdles in the development of quantum dot based devices. In the present paper, we have grown CdTe quantum dots in a borosilicate glass matrix using a two-step annealing method. Results of optical characterization and a theoretical model of the absorption spectra show that quantum dots grown using two-step annealing have a lower average radius, less size dispersion, a higher volume ratio and a higher decrease in bulk free energy compared to quantum dots grown conventionally.

  12. Comparison of the Screening Tests for Gestational Diabetes Mellitus between "One-Step" and "Two-Step" Methods among Thai Pregnant Women.

    Luewan, Suchaya; Bootchaingam, Phenphan; Tongsong, Theera

    2018-01-01

    To compare the prevalence and pregnancy outcomes of GDM between those screened by the "one-step" (75 g GTT) and "two-step" (100 g GTT) methods. A prospective study was conducted on singleton pregnancies at low or average risk of GDM. All were screened between 24 and 28 weeks, using the one-step or two-step method based on patients' preference. The primary outcome was prevalence of GDM, and secondary outcomes included birthweight, gestational age, rates of preterm birth, small/large-for-gestational age, low Apgar scores, cesarean section, and pregnancy-induced hypertension. A total of 648 women were screened: 278 in the one-step group and 370 in the two-step group. The prevalence of GDM was significantly higher in the one-step group: 32.0% versus 10.3%. Baseline characteristics and pregnancy outcomes in both groups were comparable. However, mean birthweight was significantly higher among pregnancies with GDM diagnosed by the two-step approach (3204 ± 555 versus 3009 ± 666 g; p = 0.022). Likewise, the rate of large-for-date infants tended to be higher in the two-step group, but the difference was not significant. The one-step approach is associated with a very high prevalence of GDM in the Thai population, without clear evidence of better outcomes. Thus, this approach may not be appropriate for screening in a busy antenatal care clinic like our setting or other centers in developing countries.

  13. The Validity of Dimensional Regularization Method on Fractal Spacetime

    Yong Tao

    2013-01-01

    Svozil developed a regularization method for quantum field theory on fractal spacetime (1987). Such a method can be applied to the low-order perturbative renormalization of quantum electrodynamics, but depends on a conjectural integral formula on non-integer-dimensional topological spaces. The main purpose of this paper is to construct a fractal measure so as to guarantee the validity of the conjectural integral formula.

  14. A simple two-step method to fabricate highly transparent ITO/polymer nanocomposite films

    Liu, Haitao; Zeng, Xiaofei; Kong, Xiangrong; Bian, Shuguang; Chen, Jianfeng

    2012-01-01

    Highlights: ► A simple two-step method without a further surface modification step was employed. ► ITO nanoparticles were easily dispersed uniformly in the polymer matrix. ► The ITO/polymer nanocomposite film had high transparency and UV/IR blocking properties. - Abstract: Transparent functional indium tin oxide (ITO)/polymer nanocomposite films were fabricated via a simple two-step approach. Firstly, functional monodisperse ITO nanoparticles were synthesized via a facile nonaqueous solvothermal method using a bifunctional chemical agent (N-methyl-pyrrolidone, NMP) as the reaction solvent and surface modifier. Secondly, the ITO/acrylic polyurethane (PUA) nanocomposite films were fabricated by a simple sol-solution mixing method without any further surface modification step as is often employed traditionally. Flower-like ITO nanoclusters about 45 nm in diameter were monodispersed in ethyl acetate, and each nanocluster was assembled from nearly spherical nanoparticles with a primary size of 7–9 nm in diameter. The ITO nanoclusters exhibited excellent dispersibility in the PUA polymer matrix, retaining their original size without further agglomeration. When the loading of ITO nanoclusters reached 5 wt%, the transparent functional nanocomposite film featured a transparency of more than 85% in the visible light region (at 550 nm), while cutting off about 50% of near-infrared radiation at 1500 nm and blocking about 45% of UV at 350 nm. It is a potential candidate for transparent functional coating applications.

  15. A Dimensionality Reduction-Based Multi-Step Clustering Method for Robust Vessel Trajectory Analysis

    Huanhuan Li

    2017-08-01

    The Shipboard Automatic Identification System (AIS) is crucial for navigation safety and maritime surveillance, and data mining and pattern analysis of AIS information have attracted considerable attention in terms of both basic research and practical applications. Clustering of spatio-temporal AIS trajectories can be used to identify abnormal patterns and mine customary route data for transportation safety; the capacities of navigation safety and maritime traffic monitoring could thus be enhanced correspondingly. However, trajectory clustering is often sensitive to undesirable outliers and is essentially more complex than traditional point clustering. To overcome this limitation, a multi-step trajectory clustering method is proposed in this paper for robust AIS trajectory clustering. In particular, Dynamic Time Warping (DTW), a similarity measurement method, is introduced in the first step to measure the distances between different trajectories. The calculated distances, inversely proportional to the similarities, constitute a distance matrix in the second step. Furthermore, as a widely used dimensionality reduction method, Principal Component Analysis (PCA) is exploited to decompose the obtained distance matrix. In particular, the top k principal components with above 95% accumulative contribution rate are extracted by PCA, and the number of centers k is chosen. The k centers are found by the improved automatic center selection algorithm. In the last step, the improved center clustering algorithm with k clusters is implemented on the distance matrix to achieve the final AIS trajectory clustering results. In order to improve the accuracy of the proposed multi-step clustering algorithm, an automatic algorithm for choosing the k clusters is developed according to the similarity distance. Numerous experiments on realistic AIS trajectory datasets in the bridge area waterway and Mississippi River have been implemented to compare our ...

  16. A Dimensionality Reduction-Based Multi-Step Clustering Method for Robust Vessel Trajectory Analysis.

    Li, Huanhuan; Liu, Jingxian; Liu, Ryan Wen; Xiong, Naixue; Wu, Kefeng; Kim, Tai-Hoon

    2017-08-04

    The Shipboard Automatic Identification System (AIS) is crucial for navigation safety and maritime surveillance, and data mining and pattern analysis of AIS information have attracted considerable attention in terms of both basic research and practical applications. Clustering of spatio-temporal AIS trajectories can be used to identify abnormal patterns and mine customary route data for transportation safety; the capacities of navigation safety and maritime traffic monitoring could thus be enhanced correspondingly. However, trajectory clustering is often sensitive to undesirable outliers and is essentially more complex than traditional point clustering. To overcome this limitation, a multi-step trajectory clustering method is proposed in this paper for robust AIS trajectory clustering. In particular, Dynamic Time Warping (DTW), a similarity measurement method, is introduced in the first step to measure the distances between different trajectories. The calculated distances, inversely proportional to the similarities, constitute a distance matrix in the second step. Furthermore, as a widely used dimensionality reduction method, Principal Component Analysis (PCA) is exploited to decompose the obtained distance matrix. In particular, the top k principal components with above 95% accumulative contribution rate are extracted by PCA, and the number of centers k is chosen. The k centers are found by the improved automatic center selection algorithm. In the last step, the improved center clustering algorithm with k clusters is implemented on the distance matrix to achieve the final AIS trajectory clustering results. In order to improve the accuracy of the proposed multi-step clustering algorithm, an automatic algorithm for choosing the k clusters is developed according to the similarity distance. Numerous experiments on realistic AIS trajectory datasets in the bridge area waterway and Mississippi River have been implemented to compare our proposed method with ...
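
    The pipeline described in both records above (DTW distances, distance matrix, PCA, clustering) can be prototyped compactly; the sketch uses random walks as stand-in trajectories and plain k-means in place of the paper's improved center selection algorithm.

        import numpy as np
        from sklearn.cluster import KMeans

        def dtw(a, b):
            """Dynamic Time Warping distance between two 2D point sequences."""
            n, m = len(a), len(b)
            D = np.full((n + 1, m + 1), np.inf)
            D[0, 0] = 0.0
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    cost = np.linalg.norm(a[i - 1] - b[j - 1])
                    D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
            return D[n, m]

        rng = np.random.default_rng(3)
        trajs = [np.cumsum(rng.normal(size=(50, 2)), axis=0) for _ in range(20)]

        n = len(trajs)                         # steps 1-2: pairwise DTW distance matrix
        dist = np.zeros((n, n))
        for i in range(n):
            for j in range(i + 1, n):
                dist[i, j] = dist[j, i] = dtw(trajs[i], trajs[j])

        centered = dist - dist.mean(axis=0)    # step 3: PCA, keep 95% of the energy
        U, S, _ = np.linalg.svd(centered, full_matrices=False)
        k = int(np.searchsorted(np.cumsum(S ** 2) / np.sum(S ** 2), 0.95)) + 1
        features = U[:, :k] * S[:k]

        labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(features)
        print(labels)                          # step 4: cluster in the reduced space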

  17. Analytical models approximating individual processes: a validation method.

    Favier, C; Degallier, N; Menkès, C E

    2010-12-01

    Upscaling population models from fine to coarse resolutions, in space, time and/or level of description, allows the derivation of fast and tractable models based on a thorough knowledge of individual processes. The validity of such approximations is generally tested only on a limited range of parameter sets. A more general validation test, over a range of parameters, is proposed; this would estimate the error induced by the approximation, using the original model's stochastic variability as a reference. The method is illustrated by three examples from the field of epidemics transmitted by vectors that bite in a temporally cyclical pattern: to estimate whether an approximation over- or under-fits the original model; to invalidate an approximation; and to rank possible approximations by quality. As a result, the application of the validation method to this field emphasizes the need to account for the vectors' biology in epidemic prediction models and to validate these against finer-scale models.

  18. Numerical Validation of the Delaunay Normalization and the Krylov-Bogoliubov-Mitropolsky Method

    David Ortigosa

    2014-01-01

    A scalable second-order analytical orbit propagator programme based on modern and classical perturbation methods is being developed. As a first step in the validation and verification of part of our orbit propagator programme, we only consider the perturbation produced by zonal harmonic coefficients in the Earth's gravity potential, so that it is possible to analyze the behaviour of the mathematical expressions involved in Delaunay normalization and the Krylov-Bogoliubov-Mitropolsky method in depth and determine their limits.

  19. Improved Full-Newton Step O(nL) Infeasible Interior-Point Method for Linear Optimization

    Gu, G.; Mansouri, H.; Zangiabadi, M.; Bai, Y.Q.; Roos, C.

    2009-01-01

    We present several improvements of the full-Newton step infeasible interior-point method for linear optimization introduced by Roos (SIAM J. Optim. 16(4):1110–1136, 2006). Each main step of the method consists of a feasibility step and several centering steps. We use a more natural feasibility step, which targets the μ+-center of the next pair of perturbed problems. As for the centering steps, we apply a sharper quadratic convergence result, which leads to a slightly wider neighborhood for th...

  20. Three-step interferometric method with blind phase shifts by use of interframe correlation between interferograms

    Muravsky, Leonid I.; Kmet', Arkady B.; Stasyshyn, Ihor V.; Voronyak, Taras I.; Bobitski, Yaroslav V.

    2018-06-01

    A new three-step interferometric method with blind phase shifts for retrieving phase maps (PMs) of smooth and low-roughness engineering surfaces is proposed. The two unknown phase shifts are evaluated using the interframe correlation between interferograms. The method consists of two stages. The first stage records three interferograms of a test object and processes them, including calculation of the unknown phase shifts and retrieval of a coarse PM. The second stage first separates the high-frequency and low-frequency PMs and then produces a fine PM consisting of areal surface roughness and waviness PMs; the areal surface roughness and waviness PMs are extracted using a linear low-pass filter. Computer simulation, and experiments retrieving a gauge block surface area together with its areal surface roughness and waviness, have confirmed the reliability of the proposed three-step method.
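
    For comparison, the textbook three-step phase retrieval with known shifts (0, pi/2, pi) is shown below; the method above differs precisely in that it first estimates two unknown ("blind") shifts from the interframe correlation.

        import numpy as np

        x = np.linspace(0.0, 1.0, 256)
        phase_true = 2.0 * np.pi * np.sin(2.0 * np.pi * x)   # synthetic phase map
        A, B = 1.0, 0.5                                      # background, modulation
        I1 = A + B * np.cos(phase_true)                      # shift 0
        I2 = A + B * np.cos(phase_true + np.pi / 2)          # shift pi/2
        I3 = A + B * np.cos(phase_true + np.pi)              # shift pi

        # (I1 - I3)/2 = B cos(phi) and (I1 + I3)/2 - I2 = B sin(phi):
        phase = np.arctan2((I1 + I3) / 2 - I2, (I1 - I3) / 2)
        err = np.angle(np.exp(1j * (phase - phase_true)))    # wrapped residual
        print("max |wrapped error|:", np.abs(err).max())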

  1. The large discretization step method for time-dependent partial differential equations

    Haras, Zigo; Taasan, Shlomo

    1995-01-01

    A new method for the acceleration of linear and nonlinear time dependent calculations is presented. It is based on the Large Discretization Step (LDS) approximation, defined in this work, which employs an extended system of low accuracy schemes to approximate a high accuracy discrete approximation to a time dependent differential operator. Error bounds on such approximations are derived. These approximations are efficiently implemented in the LDS methods for linear and nonlinear hyperbolic equations, presented here. In these algorithms the high and low accuracy schemes are interpreted as the same discretization of a time dependent operator on fine and coarse grids, respectively. Thus, a system of correction terms and corresponding equations are derived and solved on the coarse grid to yield the fine grid accuracy. These terms are initialized by visiting the fine grid once in many coarse grid time steps. The resulting methods are very general, simple to implement and may be used to accelerate many existing time marching schemes.

  2. LiLEDDA: A Six-Step Forum-Based Netnographic Research Method for Nursing Science

    MARTIN SALZMANN-ERIKSON

    2012-01-01

    Internet research methods in nursing science are less developed than in other sciences. We choose to present an approach to conducting nursing research on an internet-based forum. This paper presents LiLEDDA, a six-step forum-based netnographic research method for nursing science. The steps consist of: 1. Literature review and identification of the research question(s); 2. Locating the field(s) online; 3. Ethical considerations; 4. Data gathering; 5. Data analysis and interpretation; and 6. Abstractions and trustworthiness. Traditional research approaches are limiting when studying non-normative and non-mainstream life-worlds and their cultures. We argue that it is timely to develop more up-to-date research methods and study designs applicable to nursing science that reflect social developments and human living conditions that tend to be increasingly online-based.

  3. A Method of MPPT Control Based on Power Variable Step-size in Photovoltaic Converter System

    Xu Hui-xiang

    2016-01-01

    Given the disadvantages of traditional variable step-size MPPT algorithms, a power-based variable step-size tracking method is proposed that combines the advantages of the constant-voltage and perturb-and-observe (P&O) methods [1-3]. The control strategy mitigates the voltage fluctuation caused by the perturb-and-observe method while retaining the advantages of the constant-voltage method and simplifying the circuit topology. Following the theoretical derivation, the output power of the photovoltaic modules is used to adjust the duty cycle of the main switch, achieving stable maximum power output, effectively reducing energy loss, and improving inversion efficiency [3,4]. Experimental tests on a prototype, together with the resulting MPPT curve, confirm the theoretical derivation.
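
    The core loop can be sketched as follows, with an invented panel curve and gains (a real controller would perturb a measured duty cycle, not an analytic P-V function):

        def pv_power(v):
            """Toy P-V curve peaking near 17 V; numbers are illustrative only."""
            return max(v * 8.0 * (1.0 - (v / 21.0) ** 12), 0.0)

        v, dv = 12.0, 0.5                      # start voltage (V) and initial step
        p_prev = pv_power(v)
        for _ in range(100):
            v += dv
            p = pv_power(v)
            dp = p - p_prev
            sign = 1.0 if dv >= 0 else -1.0
            if dp < 0:
                sign = -sign                   # P&O rule: reverse on a power drop
            dv = sign * min(max(0.5 * abs(dp), 0.01), 0.5)  # step scales with |dP|
            p_prev = p
        print(f"operating point near {v:.1f} V, {pv_power(v):.1f} W")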

  4. Two-step nuclear reactions: The Surrogate Method, the Trojan Horse Method and their common foundations

    Hussein, Mahir S. [DCTA, Instituto Tecnologico de Aeronautica, Sao Jose dos Campos, SP (Brazil); Universidade de Sao Paulo, Instituto de Estudos Avancados, C. P. 72012, Sao Paulo, SP (Brazil); Universidade de Sao Paulo, Instituto de Fisica, C. P. 66318, Sao Paulo, SP (Brazil)

    2017-05-15

    In this Letter I argue that the Surrogate Method, used to extract the fast neutron capture cross section on actinide target nuclei, which has important practical application for the next generation of breeder reactors, and the Trojan Horse Method employed to extract reactions of importance to nuclear astrophysics, have a common foundation, the Inclusive Non-Elastic Breakup (INEB) Theory. Whereas the Surrogate Method relies on the premise that the extracted neutron cross section in a (d, p) reaction is predominantly a compound-nucleus one, the Trojan Horse Method assumes a predominantly direct process for the secondary reaction induced by the surrogate fragment. In general, both methods contain both direct and compound contributions, and I show how these seemingly distinct methods are in fact the same but at different energies and different kinematic regions. The unifying theory is the rather well developed INEB theory. (orig.)

  6. Stability of one-step methods in transient nonlinear heat conduction

    Hughes, J.R.

    1977-01-01

    The purpose of the present work is to ascertain practical stability conditions for one-step methods commonly used in transient nonlinear heat conduction analyses. In this paper the concepts of stability appropriate to the nonlinear problem are thoroughly discussed. They of course reduce to the usual stability criterion for the linear, constant-coefficient case. However, for nonlinear problems there are differences, and these ideas are of key importance in obtaining practical stability conditions. Of particular importance is a recent result which indicates that, in a sense, the trapezoidal and midpoint families are equivalent. Thus, stability results for one family may be translated into results for the other. The main results obtained are: the stability behaviour of the explicit Euler method in the nonlinear regime is analogous to that for linear problems; in particular, an a priori step size restriction may be determined for each time step. The precise time step restriction on implicit conditionally stable members of the trapezoidal and midpoint families is shown not to be determinable a priori. Of considerable practical significance, unconditionally stable members of the trapezoidal and midpoint families are identified. All notions of stability employed are motivated and defined, and their interpretations in practical computing are indicated. (Auth.)

  7. Study on evaluation method for image quality of radiograph by step plate, (2)

    Terada, Yukihiro; Hirayama, Kazuo; Katoh, Mitsuaki.

    1992-01-01

    Recently, penetrameter sensitivity has been used not only for the evaluation of radiographic image quality but also as a control method for examination conditions. However, it is necessary to obtain parametric data for radiation quality in order to use it for the second purpose. The quantitative factor of radiation quality is determined by the absorption coefficient and the ratio of scattered to transmitted radiation reaching the X-ray film. When the X-ray equipment used for the radiographic examination changes, these data must be measured in each case, which is a drawback for controlling examination conditions based on parametric data. As shown theoretically in the first report, the image quality value of a step plate, defined as the density difference divided by the film contrast and the step plate thickness, is useful for obtaining the value of the radiation quality factor. This report deals with an experimental investigation measuring it with the step plate. The results show that the value of the radiation quality factor calculated from the parametric data corresponded well with the image quality value measured by the step plate. A convenient method to measure the value of the radiation quality factor has therefore been established for controlling examination conditions in radiographic examination. (author)

  8. A single-step method for rapid extraction of total lipids from green microalgae.

    Martin Axelsson

    Microalgae produce a wide range of lipid compounds of potential commercial interest. Total lipid extraction performed by conventional extraction methods, relying on the chloroform-methanol solvent system, is too laborious and time consuming for screening large numbers of samples. In this study, three previous extraction methods devised by Folch et al. (1957), Bligh and Dyer (1959) and Selstam and Öquist (1985) were compared, and a faster single-step procedure was developed for extraction of total lipids from green microalgae. In the single-step procedure, 8 ml of a 2:1 chloroform-methanol (v/v) mixture was added to fresh or frozen microalgal paste or pulverized dry algal biomass contained in a glass centrifuge tube. The biomass was manually suspended by vigorously shaking the tube for a few seconds and 2 ml of a 0.73% NaCl water solution was added. Phase separation was facilitated by 2 min of centrifugation at 350 g and the lower phase was recovered for analysis. An uncharacterized microalgal polyculture and the green microalgae Scenedesmus dimorphus, Selenastrum minutum, and Chlorella protothecoides were subjected to the different extraction methods and various techniques of biomass homogenization. The less labour-intensive single-step procedure presented here allowed simultaneous recovery of total lipid extracts from multiple samples of green microalgae, with quantitative yields and fatty acid profiles comparable to those of the previous methods. While the single-step procedure is highly correlated in lipid extractability (r² = 0.985) with the previous method of Folch et al. (1957), it allowed at least five times higher sample throughput.

  9. Influence of application methods of one-step self-etching adhesives on microtensile bond strength

    Chul-Kyu Choi; Sung-Ae Son; Jin-Hee Ha; Bock Hur; Hyeon-Cheol Kim; Yong-Hun Kwon; Jeong-Kil Park

    2011-01-01

    Objectives The purpose of this study was to evaluate the effect of various application methods of one-step self-etch adhesives on microtensile resin-dentin bond strength. Materials and Methods Thirty-six extracted human molars were used. The teeth were assigned randomly to twelve groups (n = 15), according to the three different adhesive systems (Clearfil Tri-S Bond, Adper Prompt L-Pop, G-Bond) and application methods. The adhesive systems were applied on the dentin as follows: 1) T...

  10. Canine distemper virus detection by different methods of One-Step RT-qPCR

    Claudia de Camargo Tozato

    2016-01-01

    Full Text Available ABSTRACT: Three commercial kits of One-Step RT-qPCR were evaluated for the molecular diagnosis of Canine Distemper Virus. Using the kit that showed the best performance, two systems of real-time RT-PCR (RT-qPCR) assays were tested and compared for analytical sensitivity to Canine Distemper Virus RNA detection: a One-Step RT-qPCR (system A) and a One-Step RT-qPCR combined with NESTED-qPCR (system B). Limits of detection for both systems were determined using a serial dilution of Canine Distemper Virus synthetic RNA or a positive urine sample. In addition, the same urine sample was tested with prior centrifugation or ultracentrifugation. Commercial One-Step RT-qPCR kits detected canine distemper virus RNA in 10 (100%) of the urine samples from symptomatic animals tested. The One-Step RT-qPCR kit that showed the best results was used to evaluate the analytical sensitivity of the A and B systems. The limit of detection using synthetic RNA was 11 RNA copies µL-1 for system A and 110 RNA copies µL-1 for the first round of system B. The second round of the NESTED-qPCR for system B had a limit of detection of 11 copies µL-1. The relationship between Ct values and RNA concentration was linear. The RNA extracted from the urine dilutions was detected at dilutions of 10-3 and 10-2 by systems A and B, respectively. Urine centrifugation increased the analytical sensitivity of the test and proved to be useful for routine diagnostics. The One-Step RT-qPCR is a fast, sensitive and specific method for canine distemper routine diagnosis and research projects that require sensitive and quantitative methodology.

  11. Single-Camera-Based Method for Step Length Symmetry Measurement in Unconstrained Elderly Home Monitoring.

    Cai, Xi; Han, Guang; Song, Xin; Wang, Jinkuan

    2017-11-01

    Single-camera-based gait monitoring is unobtrusive, inexpensive, and easy to use for monitoring the daily gait of seniors in their homes. However, most studies require subjects to walk perpendicularly to the camera's optical axis or along specified routes, which limits its application in elderly home monitoring. To build unconstrained monitoring environments, we propose a method to measure the step length symmetry ratio (a useful gait parameter representing gait symmetry without a significant relationship with age) from unconstrained straight walking using a single camera, without strict restrictions on walking directions or routes. According to projective geometry theory, we first develop a calculation formula of the step length ratio for the case of unconstrained straight-line walking. Then, to adapt to general cases, we propose to modify noncollinear footprints, and accordingly provide a general procedure for step length ratio extraction from unconstrained straight walking. Our method achieves a mean absolute percentage error (MAPE) of 1.9547% for 15 subjects' normal and abnormal side-view gaits, and also obtains satisfactory MAPEs for non-side-view gaits (2.4026% for 45°-view gaits and 3.9721% for 30°-view gaits). The performance is much better than a well-established monocular gait measurement system suitable only for side-view gaits, which has a MAPE of 3.5538%. Independently of walking directions, our method can accurately estimate step length ratios from unconstrained straight walking. This demonstrates that our method is applicable to elders' daily gait monitoring, providing valuable information for elderly health care, such as abnormal gait recognition, fall risk assessment, etc.

  12. Acceleration of step and linear discontinuous schemes for the method of characteristics in DRAGON5

    Alain Hébert

    2017-09-01

    Full Text Available The applicability of the algebraic collapsing acceleration (ACA) technique to the method of characteristics (MOC) in cases with scattering anisotropy and/or linear sources was investigated. Previously, the ACA was proven successful in cases with isotropic scattering and uniform (step) sources. A presentation is first made of the MOC implementation available in the DRAGON5 code. Two categories of schemes are available for integrating the propagation equations: (1) the first category is based on exact integration and leads to the classical step characteristics (SC) and linear discontinuous characteristics (LDC) schemes, and (2) the second category leads to diamond differencing schemes of various orders in space. We focus on the acceleration of these MOC schemes using a combination of the generalized minimal residual [GMRES(m)] method preconditioned with the ACA technique. Numerical results are provided for a two-dimensional (2D) eight-symmetry pressurized water reactor (PWR) assembly mockup in the context of the DRAGON5 code.
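
    As an illustration of the linear-solver pattern named above, the sketch below runs SciPy's restarted GMRES(m) with a simple Jacobi preconditioner standing in for the ACA preconditioner; the test matrix is synthetic and none of this uses DRAGON5 itself:

      # Minimal sketch of a preconditioned restarted GMRES solve (SciPy).
      # The diagonal (Jacobi) preconditioner merely stands in for ACA;
      # it is not the ACA algorithm.
      import numpy as np
      from scipy.sparse import diags, random as sprandom
      from scipy.sparse.linalg import gmres, LinearOperator

      n = 500
      A = sprandom(n, n, density=0.01, format="csr") + diags(np.full(n, 10.0))  # diagonally dominant test matrix
      b = np.ones(n)

      d_inv = 1.0 / A.diagonal()
      M = LinearOperator((n, n), matvec=lambda x: d_inv * x)  # Jacobi preconditioner

      x, info = gmres(A, b, M=M, restart=30)  # GMRES(m) with m = 30
      print("converged" if info == 0 else f"gmres returned info={info}",
            "residual:", np.linalg.norm(A @ x - b))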

  13. Research on the range side lobe suppression method for modulated stepped frequency radar signals

    Liu, Yinkai; Shan, Tao; Feng, Yuan

    2018-05-01

    The magnitude of the time-domain range sidelobes of modulated stepped-frequency radar affects the imaging quality of inverse synthetic aperture radar (ISAR). In this paper, the cause of high sidelobes in modulated stepped-frequency radar imaging is first analyzed in a real environment. Then, chaos particle swarm optimization (CPSO) is used to select the amplitude and phase compensation factors according to the minimum-sidelobe criterion. Finally, the compensated one-dimensional range images are obtained. Experimental results show that the amplitude-phase compensation method based on the CPSO algorithm can effectively reduce the sidelobe peak value of one-dimensional range images, outperforming common sidelobe suppression methods and avoiding the coverage of weak scattering points by strong scattering points due to high sidelobes.

  14. Investigating the Effectiveness of Teaching Methods Based on a Four-Step Constructivist Strategy

    Çalik, Muammer; Ayas, Alipaşa; Coll, Richard K.

    2010-02-01

    This paper reports on an investigation of the effectiveness of an intervention using several different methods for teaching solution chemistry. The teaching strategy comprised a four-step approach derived from a constructivist view of learning. A sample consisting of 44 students (18 boys and 26 girls) was selected purposively from two different Grade 9 classes in the city of Trabzon, Turkey. Data collection employed a purpose-designed `solution chemistry concept test', consisting of 17 items, with the quantitative data from the survey supported by qualitative interview data. The findings suggest that using different methods embedded within the four-step constructivist-based teaching strategy enables students to refute some alternative conceptions, but does not completely eliminate student alternative conceptions for solution chemistry.

  15. Steps to standardization and validation of hippocampal volumetry as a biomarker in clinical trials and diagnostic criteria for Alzheimer’s disease

    Jack, Clifford R; Barkhof, Frederik; Bernstein, Matt A; Cantillon, Marc; Cole, Patricia E; DeCarli, Charles; Dubois, Bruno; Duchesne, Simon; Fox, Nick C; Frisoni, Giovanni B; Hampel, Harald; Hill, Derek LG; Johnson, Keith; Mangin, Jean-François; Scheltens, Philip; Schwarz, Adam J; Sperling, Reisa; Suhy, Joyce; Thompson, Paul M; Weiner, Michael; Foster, Norman L

    2012-01-01

    Background The promise of Alzheimer’s disease (AD) biomarkers has led to their incorporation in new diagnostic criteria and in therapeutic trials; however, significant barriers exist to widespread use. Chief among these is the lack of internationally accepted standards for quantitative metrics. Hippocampal volumetry is the most widely studied quantitative magnetic resonance imaging (MRI) measure in AD and thus represents the most rational target for an initial effort at standardization. Methods and Results The authors of this position paper propose a path toward this goal. The steps include: 1) Establish and empower an oversight board to manage and assess the effort, 2) Adopt the standardized definition of anatomic hippocampal boundaries on MRI arising from the EADC-ADNI hippocampal harmonization effort as a Reference Standard, 3) Establish a scientifically appropriate, publicly available Reference Standard Dataset based on manual delineation of the hippocampus in an appropriate sample of subjects (ADNI), and 4) Define minimum technical and prognostic performance metrics for validation of new measurement techniques using the Reference Standard Dataset as a benchmark. Conclusions Although manual delineation of the hippocampus is the best available reference standard, practical application of hippocampal volumetry will require automated methods. Our intent is to establish a mechanism for credentialing automated software applications to achieve internationally recognized accuracy and prognostic performance standards that lead to the systematic evaluation and then widespread acceptance and use of hippocampal volumetry. The standardization and assay validation process outlined for hippocampal volumetry is envisioned as a template that could be applied to other imaging biomarkers. PMID:21784356
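
    The abstract calls for accuracy metrics against a Reference Standard Dataset without fixing a specific one; a common choice for comparing an automated segmentation with a manual reference is the Dice overlap, sketched below (illustrative only, not a metric the paper mandates):

      # Dice overlap between an automated hippocampal segmentation and a
      # manual reference mask: Dice = 2|A ∩ B| / (|A| + |B|).
      import numpy as np

      def dice(auto_mask, ref_mask):
          a, r = np.asarray(auto_mask, bool), np.asarray(ref_mask, bool)
          denom = a.sum() + r.sum()
          return 2.0 * np.logical_and(a, r).sum() / denom if denom else 1.0

      rng = np.random.default_rng(0)
      ref = rng.random((32, 32, 32)) > 0.7   # toy 3D reference mask
      auto = ref.copy()
      auto[:2] = ~auto[:2]                   # perturb a slab to mimic segmentation error
      print(f"Dice = {dice(auto, ref):.3f}")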

  16. Methods for Geometric Data Validation of 3D City Models

    Wagner, D.; Alam, N.; Wewetzer, M.; Pries, M.; Coors, V.

    2015-12-01

    Geometric quality of 3D city models is crucial for data analysis and simulation tasks, which are part of modern applications of the data (e.g. potential heating energy consumption of city quarters, solar potential, etc.). Geometric quality in these contexts is, however, a different concept than it is for 2D maps. In the latter case, aspects such as positional or temporal accuracy and correctness represent typical quality metrics of the data. They are defined in ISO 19157 and should be mentioned as part of the metadata. 3D data has a far wider range of aspects which influence its quality, and the idea of quality itself is application dependent. Thus, concepts for the definition of quality are needed, including methods to validate these definitions. Quality in this sense means internal validation and detection of inconsistent or wrong geometry according to a predefined set of rules. A useful starting point would be to have correct geometry in accordance with ISO 19107. A valid solid should consist of planar faces which touch their neighbours exclusively in defined corner points and edges. No gaps between them are allowed, and the whole feature must be 2-manifold. In this paper, we present methods to validate common geometric requirements for building geometry. Different checks based on several algorithms have been implemented to validate a set of rules derived from the solid definition mentioned above (e.g. water tightness of the solid or planarity of its polygons), as they were developed for the software tool CityDoctor. The method of each check is specified, with a special focus on the discussion of tolerance values where they are necessary. The checks include polygon-level checks to validate the correctness of each polygon, i.e. closeness of the bounding linear ring and planarity. On the solid level, which is only validated if the polygons have passed validation, correct polygon orientation is checked, after self-intersections outside of defined corner points and edges
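
    Two of the polygon-level checks described above (closure of the bounding linear ring, planarity within a tolerance) can be sketched in a few lines; this is a simplified illustration with assumed numpy input, not the CityDoctor implementation:

      # Ring closure and planarity checks for a polygon given as an
      # (n, 3) array of vertices whose last point repeats the first.
      import numpy as np

      def ring_is_closed(ring, tol=1e-9):
          # a bounding linear ring must end where it starts
          return np.linalg.norm(ring[0] - ring[-1]) <= tol

      def ring_is_planar(ring, tol=1e-6):
          # all vertices must lie within `tol` of a best-fit plane (via SVD)
          pts = ring - ring.mean(axis=0)
          _, _, vt = np.linalg.svd(pts)
          normal = vt[-1]                 # direction of least variance
          return bool(np.all(np.abs(pts @ normal) <= tol))

      ring = np.array([[0, 0, 0], [4, 0, 0], [4, 3, 0], [0, 3, 0], [0, 0, 0]], float)
      print(ring_is_closed(ring), ring_is_planar(ring))   # True True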

  17. Numerical sensitivity computation for discontinuous gradient-only optimization problems using the complex-step method

    Wilke, DN

    2012-07-01

    Full Text Available problems that utilise remeshing (i.e. the mesh topology is allowed to change) between design updates. Here, changes in mesh topology result in abrupt changes in the discretization error of the computed response. These abrupt changes in turn manifest... in shape optimization but may be present whenever (partial) differential equations are approximated numerically with non-constant discretization methods, e.g. remeshing of spatial domains or automatic time stepping in temporal domains. Keywords: Complex...
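
    The complex-step approximation itself is compact: f'(x) ≈ Im f(x + ih)/h, which avoids the subtractive cancellation of finite differences and therefore tolerates extremely small steps. A minimal sketch using a standard test function (illustrative, not the paper's shape-optimization code):

      # Complex-step derivative f'(x) ≈ Im(f(x + ih)) / h. Because no
      # subtraction of nearly equal numbers occurs, h can be tiny (1e-20)
      # without loss of precision, unlike finite differences.
      import numpy as np

      def complex_step_derivative(f, x, h=1e-20):
          return np.imag(f(x + 1j * h)) / h

      f = lambda x: np.exp(x) / np.sqrt(np.sin(x) ** 3 + np.cos(x) ** 3)
      x0 = 1.5
      cs = complex_step_derivative(f, x0)
      fd = (f(x0 + 1e-6) - f(x0 - 1e-6)) / 2e-6   # central difference, for comparison
      print(cs, fd)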

  18. Solution of the Schrodinger equation in one dimension by a simple method for a simple step potential

    Ertik, H.

    2005-01-01

    The transmission and reflection coefficients for the simple step barrier potential were calculated by a simple method. Their values were entirely different from those often encountered in the literature. In particular, in the case where the total energy is equal to the barrier potential, a value of 0.20 was obtained for the reflection coefficient, whereas it is zero in the literature. This may be considered as an interesting point
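
    For context, the conventional textbook result against which such values are compared can be computed directly: for E > V0 (units with hbar = m = 1), R = ((k1 - k2)/(k1 + k2))² and T = 4·k1·k2/(k1 + k2)². A small sketch of this standard calculation (not the paper's method):

      # Standard 1D step-potential reflection/transmission for E > V0,
      # in units where hbar = m = 1; R + T = 1 by flux conservation.
      import numpy as np

      def step_R_T(E, V0):
          k1 = np.sqrt(2 * E)           # wavenumber before the step
          k2 = np.sqrt(2 * (E - V0))    # wavenumber past the step (E > V0)
          R = ((k1 - k2) / (k1 + k2)) ** 2
          T = 4 * k1 * k2 / (k1 + k2) ** 2
          return R, T

      R, T = step_R_T(E=2.0, V0=1.0)
      print(f"R = {R:.4f}, T = {T:.4f}, R + T = {R + T:.4f}")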

  19. Single-step electrochemical method for producing very sharp Au scanning tunneling microscopy tips

    Gingery, David; Buehlmann, Philippe

    2007-01-01

    A single-step electrochemical method for making sharp gold scanning tunneling microscopy tips is described. 3.0 M NaCl in 1% perchloric acid is compared to several previously reported etchants. The addition of perchloric acid to sodium chloride solutions drastically shortens etching times and is shown by transmission electron microscopy to produce very sharp tips with a mean radius of curvature of 15 nm

  20. Shielding design method for LMFBR validation on the Phenix reactor

    Cabrillat, J.C.; Crouzet, J.; Misrakis, J.; Salvatores, M.; Rado, V.; Palmiotti, G.

    1983-05-01

    Shielding design methods developed at CEA for shielding calculations find a global validation by means of measurements on the Phenix power reactor (250 MWe). In particular, the secondary sodium activation of a pool-type LMFBR such as Super Phenix (1200 MWe), which is subject to strict safety limitations, is well calculated by the adapted scheme, i.e. a two-dimensional transport calculation of the shielding coupled to a Monte Carlo calculation of the secondary sodium activation

  1. Validating the JobFit system functional assessment method

    Jenny Legge; Robin Burgess-Limerick

    2007-05-15

    Workplace injuries cost the Australian coal mining industry and its communities $410 million a year. This ACARP study aims to address this problem by developing a safe, reliable and valid pre-employment functional assessment tool. All JobFit System Pre-Employment Functional Assessments (PEFAs) consist of a musculoskeletal screen, balance test, aerobic fitness test and job-specific postural tolerances and material handling tasks. The results of each component are compared to the applicant's job demands and an overall PEFA score between 1 and 4 is given, with 1 being the best score. The reliability study and validity study were conducted concurrently. The reliability study examined test-retest, intra-tester and inter-tester reliability of the JobFit System Functional Assessment Method. Overall, good to excellent reliability was found, which was sufficient for comparison with injury data to determine the validity of the assessment. The overall assessment score and material handling tasks had the greatest reliability. The validity study compared the assessment results of 336 records from a Queensland underground and open cut coal mine with their injury records. A predictive relationship was found between PEFA score and the risk of a back/trunk/shoulder injury from manual handling. An association was also found between a PEFA score of 1 and increased length of employment. Lower aerobic fitness test results had an inverse relationship with injury rates. The study found that underground workers, regardless of PEFA score, were more likely to have an injury when compared to other departments. No relationship was found between age and risk of injury. These results confirm the validity of the JobFit System Functional Assessment method.

  2. Combined Effects of Numerical Method Type and Time Step on Water Stressed Actual Crop ET

    B. Ghahraman

    2016-02-01

    Full Text Available Introduction: Actual crop evapotranspiration (ETa) is important in hydrologic modeling and irrigation water management issues. Actual ET depends on an estimation of a water stress index and the average soil water in the crop root zone, and so depends on the chosen numerical method and the adopted time step. During periods with no rainfall and/or irrigation, actual ET can be computed analytically or by using different numerical methods. Overall, there are many factors that influence actual evapotranspiration: crop potential evapotranspiration, available root zone water content, time step, crop sensitivity, and soil. In this paper different numerical methods are compared for different soil textures and different crop sensitivities. Materials and Methods: During a specific time step with no rainfall or irrigation, the change in soil water content equals evapotranspiration, ET. In this approach, however, deep percolation is generally ignored due to a deep water table and negligible unsaturated hydraulic conductivity below the rooting depth. This differential equation may be solved analytically or numerically considering different algorithms. We adopted four different numerical methods (explicit, implicit, and modified Euler, the midpoint method, and the 3rd-order Heun method) to approximate the differential equation. Three general soil types (sand, silt, and clay) and three different crop types (sensitive, moderate, and resistant) under the Nishaboor plain were used. The standard soil fraction depletion (corresponding to ETc = 5 mm d-1), pstd, below which the crop faces water stress, was adopted for crop sensitivity. Three values of pstd were considered in this study to cover the common crops in the area, including winter wheat and barley, cotton, alfalfa, sugar beet, and saffron, among others. Based on this parameter, three classes of crop sensitivity were considered: sensitive crops with pstd=0.2, moderate crops with pstd=0.5, and resistive crops with pstd=0
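
    The kind of comparison described can be sketched on the root-zone depletion equation dW/dt = -Ks(W)·ETc, integrated with explicit Euler and with a second-order Heun scheme (the paper also uses a third-order Heun variant); all parameter values below are illustrative assumptions, not the study's data:

      # Compare explicit Euler and 2nd-order Heun on a soil-water
      # depletion ODE where the stress coefficient Ks drops below 1
      # once the readily available water is used up.
      TAW, pstd, ETc = 100.0, 0.5, 5.0   # total available water (mm), depletion fraction, mm/day

      def dWdt(W):
          Ks = min(1.0, W / ((1.0 - pstd) * TAW))   # linear stress reduction below threshold
          return -Ks * ETc

      def euler(W0, dt, n):
          W = W0
          for _ in range(n):
              W += dt * dWdt(W)
          return W

      def heun(W0, dt, n):               # predictor-corrector, 2nd order
          W = W0
          for _ in range(n):
              k1 = dWdt(W)
              k2 = dWdt(W + dt * k1)
              W += dt * 0.5 * (k1 + k2)
          return W

      for dt in (5.0, 1.0, 0.1):
          n = int(30 / dt)               # 30-day dry-down
          print(f"dt={dt:>4}: Euler={euler(TAW, dt, n):7.3f}  Heun={heun(TAW, dt, n):7.3f}")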

  3. Used-habitat calibration plots: A new procedure for validating species distribution, resource selection, and step-selection models

    Fieberg, John R.; Forester, James D.; Street, Garrett M.; Johnson, Douglas H.; ArchMiller, Althea A.; Matthiopoulos, Jason

    2018-01-01

    “Species distribution modeling” was recently ranked as one of the top five “research fronts” in ecology and the environmental sciences by ISI's Essential Science Indicators (Renner and Warton 2013), reflecting the importance of predicting how species distributions will respond to anthropogenic change. Unfortunately, species distribution models (SDMs) often perform poorly when applied to novel environments. Compounding this problem is the shortage of methods for evaluating SDMs (hence, we may be getting our predictions wrong and not even know it). Traditional methods for validating SDMs quantify a model's ability to classify locations as used or unused. Instead, we propose to focus on how well SDMs can predict the characteristics of used locations. This subtle shift in viewpoint leads to a more natural and informative evaluation and validation of models across the entire spectrum of SDMs. Through a series of examples, we show how simple graphical methods can help with three fundamental challenges of habitat modeling: identifying missing covariates, non-linearity, and multicollinearity. Identifying habitat characteristics that are not well-predicted by the model can provide insights into variables affecting the distribution of species, suggest appropriate model modifications, and ultimately improve the reliability and generality of conservation and management recommendations.
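
    A rough sketch of the underlying idea, under an assumed fitted resource-selection score w(x) = exp(xβ): the model-predicted distribution of a habitat covariate at used points is the availability distribution re-weighted by w, which can then be compared against the covariate values at the observed used points (synthetic data, not the authors' code):

      # Compare the model-predicted covariate distribution at used points
      # with the observed one; systematic departure flags a missing
      # covariate, non-linearity, or multicollinearity.
      import numpy as np

      rng = np.random.default_rng(1)
      avail = rng.normal(0.0, 1.0, size=(5000, 1))     # covariate at available points
      beta = np.array([0.8])                           # assumed fitted coefficient
      w = np.exp(avail @ beta)
      pred_used = avail[rng.choice(len(avail), 1000, p=w / w.sum())]

      used_obs = rng.normal(0.8, 1.0, size=(1000, 1))  # stand-in for field observations

      qs = [0.1, 0.25, 0.5, 0.75, 0.9]
      print(np.quantile(pred_used, qs).round(2))
      print(np.quantile(used_obs, qs).round(2))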

  4. High-performance liquid chromatography method validation for determination of tetracycline residues in poultry meat

    Vikas Gupta

    2014-01-01

    Full Text Available Background: In this study, a method for the determination of tetracycline (TC) residues in poultry with the help of the high-performance liquid chromatography technique was validated. Materials and Methods: The principal step involved ultrasonic-assisted extraction of TCs from poultry samples with 2 ml of 20% trichloroacetic acid and phosphate buffer (pH 4), which gave a clearer supernatant and high recovery, followed by centrifugation and purification using 0.22 μm filter paper. Results: The validity study of the method revealed that all obtained calibration curves showed good linearity (r2 > 0.999) over the range of 40-4500 ng. Sensitivity was found to be 1.54 and 1.80 ng for oxytetracycline (OTC) and TC, respectively. Accuracy was in the range of 87.94-96.20% and 72.40-79.84% for meat. Precision was lower than 10% in all cases, indicating that the method can be used as a validated method. The limit of detection was found to be 4.8 and 5.10 ng for OTC and TC, respectively. The corresponding values of the limit of quantitation were 11 and 12 ng. Conclusion: The method reliably identifies and quantifies the selected TC and OTC in reconstituted poultry meat in the low and sub-nanogram range and can be applied in any laboratory.
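
    Figures such as the LOD and LOQ above are commonly derived from the calibration line via LOD = 3.3σ/S and LOQ = 10σ/S (ICH-style guidance), with σ the standard deviation of the regression residuals and S the slope. A generic sketch with synthetic numbers, not the paper's data:

      # Calibration linearity plus LOD/LOQ from regression residuals.
      import numpy as np

      conc = np.array([40, 200, 500, 1000, 2500, 4500], float)   # ng
      resp = 1.2 * conc + np.array([3, -5, 8, -6, 10, -4])        # detector response

      slope, intercept = np.polyfit(conc, resp, 1)
      resid = resp - (slope * conc + intercept)
      sigma = resid.std(ddof=2)                                   # SD of regression residuals

      r = np.corrcoef(conc, resp)[0, 1]
      print(f"r^2 = {r**2:.5f}")
      print(f"LOD = {3.3 * sigma / slope:.2f} ng, LOQ = {10 * sigma / slope:.2f} ng")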

  5. A novel enterovirus and parechovirus multiplex one-step real-time PCR-validation and clinical experience.

    Nielsen, Alex Christian Yde; Böttiger, Blenda; Midgley, Sofie Elisabeth; Nielsen, Lars Peter

    2013-11-01

    As the number of new enteroviruses and human parechoviruses seems ever-growing, the necessity for updated diagnostics is relevant. We have updated an enterovirus assay and combined it with a previously published assay for human parechovirus, resulting in a multiplex one-step RT-PCR assay. The multiplex assay was validated by analysing its sensitivity and specificity compared to the respective monoplex assays, and good concordance was found. Furthermore, the enterovirus assay was able to detect 42 reference strains from all 4 species, and an additional 9 genotypes during panel testing and routine usage. During 15 months of routine use, from October 2008 to December 2009, we received and analysed 2187 samples (stool samples, cerebrospinal fluids, blood samples, respiratory samples and autopsy samples) from 1546 patients, and detected enteroviruses and parechoviruses in 171 (8%) and 66 (3%) of the samples, respectively. 180 of the positive samples could be genotyped by PCR and sequencing, and the most common genotypes found were human parechovirus type 3, echovirus 9, enterovirus 71, Coxsackievirus A16, and echovirus 25. During 2009 in Denmark, both enterovirus and human parechovirus type 3 had a similar seasonal pattern, with a peak during the summer and autumn. Human parechovirus type 3 was almost invariably found in children less than 4 months of age. In conclusion, a multiplex assay was developed allowing simultaneous detection of 2 viruses which can cause similar clinical symptoms. Copyright © 2013 Elsevier B.V. All rights reserved.
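
    The analytical-sensitivity characterisation described (limit of detection, linear Ct-concentration relationship) is typically summarised by a standard curve; the generic sketch below fits Ct against log10(copies) and reports slope, amplification efficiency and linearity, using synthetic values rather than the paper's data:

      # qPCR standard curve: an ideal assay has slope ≈ -3.32 per decade,
      # corresponding to ~100% amplification efficiency.
      import numpy as np

      copies = np.array([1e6, 1e5, 1e4, 1e3, 1e2, 11])      # synthetic dilution series
      ct = np.array([15.1, 18.4, 21.8, 25.2, 28.7, 32.0])   # synthetic Ct values

      x = np.log10(copies)
      slope, intercept = np.polyfit(x, ct, 1)
      efficiency = 10 ** (-1.0 / slope) - 1.0
      r2 = np.corrcoef(x, ct)[0, 1] ** 2

      print(f"slope = {slope:.2f}, efficiency = {efficiency:.1%}, r^2 = {r2:.4f}")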

  6. Stepwise Procedure for Development and Validation of a Multipesticide Method

    Ambrus, A. [Hungarian Food Safety Office, Budapest (Hungary)

    2009-07-15

    The stepwise procedure for the development and validation of so-called multi-pesticide methods is described. Principles, preliminary actions, criteria for the selection of chromatographic separation and detection, and performance verification of multi-pesticide methods are outlined. Long-term repeatability and reproducibility, as well as the necessity of documenting laboratory work, are also highlighted. Appendix I hereof describes in detail the calculation of calibration parameters, whereas Appendix II focuses on the calculation of the significance of differences between concentrations obtained on two different separation columns. (author)

  7. Imaginary Time Step Method to Solve the Dirac Equation with Nonlocal Potential

    Zhang Ying; Liang Haozhao; Meng Jie

    2009-01-01

    The imaginary time step (ITS) method is applied to solve the Dirac equation with nonlocal potentials in coordinate space. Taking the nucleus 12C as an example, even with nonlocal potentials, the direct ITS evolution for the Dirac equation still meets the disaster of the Dirac sea. However, following the recipe in our former investigation, the disaster can be avoided by the ITS evolution for the corresponding Schroedinger-like equation without localization, which gives convergent results exactly the same as those obtained iteratively by the shooting method with localized effective potentials.

  8. Developing and validating a nutrition knowledge questionnaire: key methods and considerations.

    Trakman, Gina Louise; Forsyth, Adrienne; Hoye, Russell; Belski, Regina

    2017-10-01

    To outline key statistical considerations and detailed methodologies for the development and evaluation of a valid and reliable nutrition knowledge questionnaire. Literature on questionnaire development in a range of fields was reviewed and a set of evidence-based guidelines specific to the creation of a nutrition knowledge questionnaire have been developed. The recommendations describe key qualitative methods and statistical considerations, and include relevant examples from previous papers and existing nutrition knowledge questionnaires. Where details have been omitted for the sake of brevity, the reader has been directed to suitable references. We recommend an eight-step methodology for nutrition knowledge questionnaire development as follows: (i) definition of the construct and development of a test plan; (ii) generation of the item pool; (iii) choice of the scoring system and response format; (iv) assessment of content validity; (v) assessment of face validity; (vi) purification of the scale using item analysis, including item characteristics, difficulty and discrimination; (vii) evaluation of the scale including its factor structure and internal reliability, or Rasch analysis, including assessment of dimensionality and internal reliability; and (viii) gathering of data to re-examine the questionnaire's properties, assess temporal stability and confirm construct validity. Several of these methods have previously been overlooked. The measurement of nutrition knowledge is an important consideration for individuals working in the nutrition field. Improved methods in the development of nutrition knowledge questionnaires, such as the use of factor analysis or Rasch analysis, will enable more confidence in reported measures of nutrition knowledge.
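
    Step (vi), item analysis, can be illustrated with classical test-theory statistics: item difficulty as the proportion answering correctly, and discrimination as the corrected item-total correlation. A generic sketch on synthetic 0/1-scored responses (the flagging cutoffs are common conventions, not the authors' prescriptions):

      # Item difficulty and corrected item-total discrimination for a
      # 0/1-scored knowledge test (200 respondents x 10 items, synthetic).
      import numpy as np

      rng = np.random.default_rng(2)
      scores = (rng.random((200, 10)) < np.linspace(0.3, 0.9, 10)).astype(int)

      difficulty = scores.mean(axis=0)
      total = scores.sum(axis=1)
      discrimination = np.array([
          np.corrcoef(scores[:, j], total - scores[:, j])[0, 1]   # exclude item from its own total
          for j in range(scores.shape[1])
      ])

      for j, (p, d) in enumerate(zip(difficulty, discrimination)):
          flag = "review" if p < 0.2 or p > 0.9 or d < 0.2 else "ok"
          print(f"item {j + 1}: difficulty={p:.2f} discrimination={d:.2f} [{flag}]")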

  9. OWL-based reasoning methods for validating archetypes.

    Menárguez-Tortosa, Marcos; Fernández-Breis, Jesualdo Tomás

    2013-04-01

    Some modern Electronic Healthcare Record (EHR) architectures and standards are based on the dual model-based architecture, which defines two conceptual levels: reference model and archetype model. Such architectures represent EHR domain knowledge by means of archetypes, which are considered by many researchers to play a fundamental role in the achievement of semantic interoperability in healthcare. Consequently, formal methods for validating archetypes are necessary. In recent years, there has been an increasing interest in exploring how semantic web technologies in general, and ontologies in particular, can facilitate the representation and management of archetypes, including binding to terminologies, but no solution based on such technologies has been provided to date to validate archetypes. Our approach represents archetypes by means of OWL ontologies. This permits combining the two levels of the dual model-based architecture in one modeling framework which can also integrate terminologies available in OWL format. The validation method consists of reasoning on those ontologies to find modeling errors in archetypes: incorrect restrictions over the reference model, non-conformant archetype specializations and inconsistent terminological bindings. The archetypes available in the repositories supported by the openEHR Foundation and the NHS Connecting for Health Program, which are the two largest publicly available ones, have been analyzed with our validation method. For this purpose, we have implemented a software tool called Archeck. Our results show that around 1/5 of archetype specializations contain modeling errors, the most common mistakes being related to coded terms and terminological bindings. The analysis of each repository reveals that different patterns of errors are found in both repositories. This result reinforces the need for making serious efforts to improve archetype design processes. Copyright © 2012 Elsevier Inc. All rights reserved.
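
    A minimal sketch of the kind of OWL-based consistency check described, using the owlready2 library; the ontology file name is hypothetical, this is a generic description-logic check rather than the Archeck tool itself, and the bundled HermiT reasoner requires a Java runtime:

      # Load an OWL rendering of an archetype and check satisfiability;
      # unsatisfiable classes indicate modeling errors such as incorrect
      # restrictions over the reference model.
      from owlready2 import get_ontology, sync_reasoner, default_world

      onto = get_ontology("file://archetype_blood_pressure.owl").load()  # hypothetical path
      with onto:
          sync_reasoner()  # classify and check consistency with HermiT

      bad = list(default_world.inconsistent_classes())
      if bad:
          print("Modeling errors: unsatisfiable classes found:", bad)
      else:
          print("Archetype ontology is consistent.")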

  10. Current lipid extraction methods are significantly enhanced adding a water treatment step in Chlorella protothecoides.

    Ren, Xiaojie; Zhao, Xinhe; Turcotte, François; Deschênes, Jean-Sébastien; Tremblay, Réjean; Jolicoeur, Mario

    2017-02-11

    Microalgae have the potential to rapidly accumulate lipids of high interest for the food, cosmetics, pharmaceutical and energy (e.g. biodiesel) industries. However, current lipid extraction methods show efficiency limitations and, until now, extraction protocols have not been fully optimized for specific lipid compounds. The present study thus presents a novel lipid extraction method, consisting of the addition of a water treatment of the biomass between the two solvent extraction stages of current extraction methods. The resulting modified method not only enhances lipid extraction efficiency, but also yields a higher triacylglycerols (TAG) ratio, which is highly desirable for biodiesel production. Modification of four existing methods using acetone, chloroform/methanol (Chl/Met), chloroform/methanol/H2O (Chl/Met/H2O) and dichloromethane/methanol (Dic/Met) showed respective lipid extraction yield enhancements of 72.3, 35.8, 60.3 and 60.9%. The modified acetone method resulted in the highest extraction yield, with 68.9 ± 0.2% DW total lipids. Extraction of TAG was particularly improved by the water treatment, especially for the Chl/Met/H2O and Dic/Met methods. The acetone method with the water treatment led to the highest extraction level of TAG, with 73.7 ± 7.3 µg/mg DW, which is 130.8 ± 10.6% higher than the maximum value obtained with the four classical methods (31.9 ± 4.6 µg/mg DW). Interestingly, the water treatment preferentially improved the extraction of intracellular fractions, i.e. TAG, sterols, and free fatty acids, compared to the lipid fractions of the cell membranes, which are constituted of phospholipids (PL), acetone mobile polar lipids and hydrocarbons. Finally, from the 32 fatty acids analyzed for both the neutral lipid (NL) and polar lipid (PL) fractions, it is clear that the water treatment greatly improves the NL-to-PL ratio for the four standard methods assessed. Water treatment of biomass after the first solvent extraction step

  11. Two-step calibration method for multi-algorithm score-based face recognition systems by minimizing discrimination loss

    Susyanto, N.; Veldhuis, R.N.J.; Spreeuwers, L.J.; Klaassen, C.A.J.; Fierrez, J.; Li, S.Z.; Ross, A.; Veldhuis, R.; Alonso-Fernandez, F.; Bigun, J.

    2016-01-01

    We propose a new method for combining multi-algorithm score-based face recognition systems, which we call the two-step calibration method. Typically, algorithms for face recognition systems produce dependent scores. The two-step method is based on parametric copulas to handle this dependence. Its

  12. Survey and assessment of conventional software verification and validation methods

    Miller, L.A.; Groundwater, E.; Mirsky, S.M.

    1993-04-01

    By means of a literature survey, a comprehensive set of methods was identified for the verification and validation of conventional software. The 134 methods so identified were classified according to their appropriateness for various phases of a developmental lifecycle: requirements, design, and implementation; the last category was subdivided into two, static testing and dynamic testing methods. The methods were then characterized in terms of eight rating factors, four concerning the ease of use of the methods and four concerning the methods' power to detect defects. Based on these factors, two measures were developed to permit quantitative comparisons among methods, a Cost-Benefit Metric and an Effectiveness Metric. The Effectiveness Metric was further refined to provide three different estimates for each method, depending on three classes of needed stringency of V&V (determined by ratings of a system's complexity and required integrity). Methods were then rank-ordered for each of the three classes in terms of their overall cost-benefits and effectiveness. The applicability of each method was then assessed for the four identified components of knowledge-based and expert systems, as well as the system as a whole

  13. Dependability validation by means of fault injection: method, implementation, application

    Arlat, Jean

    1990-01-01

    This dissertation presents theoretical and practical results concerning the use of fault injection as a means for testing fault tolerance in the framework of the experimental dependability validation of computer systems. The dissertation first presents the state of the art of published work on fault injection, encompassing both hardware (fault simulation, physical fault injection) and software (mutation testing) issues. Next, the major attributes of fault injection (faults and their activation, experimental readouts and measures) are characterized taking into account: i) the abstraction levels used to represent the system during the various phases of its development (analytical, empirical and physical models), and ii) the validation objectives (verification and evaluation). An evaluation method is subsequently proposed that combines the analytical modeling approaches (Monte Carlo simulations, closed-form expressions, Markov chains) used for the representation of the fault occurrence process and the experimental fault injection approaches (fault simulation and physical injection) characterizing the error processing and fault treatment provided by the fault tolerance mechanisms. An experimental tool - MESSALINE - is then defined and presented. This tool enables physical faults to be injected in a hardware and software prototype of the system to be validated. Finally, the application of MESSALINE for testing two fault-tolerant systems possessing very dissimilar features and the utilization of the experimental results obtained - both as design feedback and for dependability measures evaluation - are used to illustrate the relevance of the method. (author) [fr]

  14. The Step Method - Battling Identity Theft Using E-Retailers' Websites

    Schulze, Marion; Shah, Mahmood H.

    Identity theft is the fastest growing crime in the 21st century. This paper investigates firstly what well-known e-commerce organizations are communicating on their websites to address this issue. For this purpose we analyze secondary data (literature and websites of ten organizations). Secondly we investigate good practice in this area and recommend practical steps. The key findings are that some organizations only publish minimum security information to comply with legal requirements. Others inform consumers on how they actively try to prevent identity theft, how consumers can protect themselves, and about supporting actions when identity theft related fraud actually happens. From these findings we developed the Support - Trust - Empowerment - Prevention (STEP) method. It is aimed at helping to prevent identity theft and at dealing with the consequences when it occurs. It can help organizations gain and keep consumers' trust, which is so essential for e-retailers in a climate of rising fraud.

  15. VALUE - Validating and Integrating Downscaling Methods for Climate Change Research

    Maraun, Douglas; Widmann, Martin; Benestad, Rasmus; Kotlarski, Sven; Huth, Radan; Hertig, Elke; Wibig, Joanna; Gutierrez, Jose

    2013-04-01

    Our understanding of global climate change is mainly based on General Circulation Models (GCMs) with a relatively coarse resolution. Since climate change impacts are mainly experienced on regional scales, high-resolution climate change scenarios need to be derived from GCM simulations by downscaling. Several projects have been carried out over the last years to validate the performance of statistical and dynamical downscaling, yet several aspects have not been systematically addressed: variability on sub-daily, decadal and longer time-scales, extreme events, spatial variability and inter-variable relationships. Different downscaling approaches such as dynamical downscaling, statistical downscaling and bias correction approaches have not been systematically compared. Furthermore, collaboration between different communities, in particular regional climate modellers, statistical downscalers and statisticians, has been limited. To address these gaps, the EU Cooperation in Science and Technology (COST) action VALUE (www.value-cost.eu) has been brought to life. VALUE is a research network with participants from currently 23 European countries, running from 2012 to 2015. Its main aim is to systematically validate and develop downscaling methods for climate change research in order to improve regional climate change scenarios for use in climate impact studies. Inspired by the co-design idea of the international research initiative "Future Earth", stakeholders of climate change information have been involved in the definition of the research questions to be addressed and are actively participating in the network. The key idea of VALUE is to identify the relevant weather and climate characteristics required as input for a wide range of impact models and to define an open framework to systematically validate these characteristics. Based on a range of benchmark data sets, in principle every downscaling method can be validated and compared with competing methods. The results of

  16. Validation of ultraviolet method to determine serum phosphorus level

    Garcia Borges, Lisandra; Perez Prieto, Teresa Maria; Valdes Diez, Lilliam

    2009-01-01

    Validation of a spectrophotometric method applicable in clinical labs was proposed for the analytical assessment of serum phosphates using a UV-phosphorus kit of domestic production from the 'Carlos J. Finlay' Biologics Production Enterprise (Havana, Cuba). The analysis method was based on the reaction of phosphorus with ammonium molybdate at acid pH, creating a complex measurable at 340 nm. Specificity and precision were measured considering the method's robustness, linearity, accuracy and sensitivity. The analytical method was linear up to 4.8 mmol/L and precise (CV < 3%; r > 0.999) over the concentration interval of clinical interest, where there was no interference from the matrix. The detection limit was 0.037 mmol/L and the quantitation limit 0.13 mmol/L; both were satisfactory for the product's use

  17. Validity of the CT to attenuation coefficient map conversion methods

    Faghihi, R.; Ahangari Shahdehi, R.; Fazilat Moadeli, M.

    2004-01-01

    The most important commercialized methods of attenuation correction in SPECT are based on an attenuation coefficient map from a transmission imaging method. The transmission imaging system can be a linear radionuclide source or an X-ray CT system. The image from the transmission imaging system is not useful unless the CT numbers are replaced with the attenuation coefficients at the SPECT energy. In this paper we evaluate the validity and estimate the error of the most used methods for this transformation. The final result shows that the methods which use a linear or multi-linear curve accept an error in their estimation. The value of mA is not important, but the patient thickness is very important and can introduce an error of more than 10 percent in the final result
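
    The (multi-)linear curves in question map CT numbers to linear attenuation coefficients at the SPECT energy; a common bilinear form scales by water below 0 HU and by a bone-like slope above. The sketch below uses approximate literature values at 140 keV, which would need to be calibrated for a specific scanner and energy:

      # Bilinear HU-to-mu conversion: soft-tissue segment below 0 HU,
      # bone-like segment above. Coefficients are approximate.
      import numpy as np

      MU_WATER = 0.154   # cm^-1, ~water at 140 keV (approximate)
      MU_BONE = 0.284    # cm^-1, ~cortical bone at 140 keV (approximate)

      def hu_to_mu(hu):
          hu = np.asarray(hu, float)
          soft = MU_WATER * (1.0 + hu / 1000.0)                   # air..water segment
          bone = MU_WATER + hu * (MU_BONE - MU_WATER) / 1000.0    # water..bone segment
          return np.where(hu <= 0.0, soft, bone)

      print(hu_to_mu([-1000, 0, 500, 1000]).round(3))  # air, water, mixed, bone-like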

  18. Methods and practices for verification and validation of programmable systems

    Heimbuerger, H.; Haapanen, P.; Pulkkinen, U.

    1993-01-01

    Programmable systems deviate in their properties and behaviour from conventional non-programmable systems to such an extent that their verification and validation for safety-critical applications require new methods and practices. The safety assessment cannot be based on conventional probabilistic methods due to the difficulties in quantifying the reliability of the software and hardware. The reliability estimate of the system must be based on qualitative arguments linked to a conservative claim limit. Due to the uncertainty of the quantitative reliability estimate, other means must be used to gain more assurance about the system's safety. Methods and practices based on research done by VTT for STUK are discussed in the paper, as well as the methods applicable in the reliability analysis of software-based safety functions. The most essential concepts and models of quantitative reliability analysis are described. The application of software models in probabilistic safety analysis (PSA) is evaluated. (author). 18 refs

  19. A new heat transfer analysis in machining based on two steps of 3D finite element modelling and experimental validation

    Haddag, B.; Kagnaya, T.; Nouari, M.; Cutard, T.

    2013-01-01

    Modelling machining operations allows estimating cutting parameters which are difficult to obtain experimentally, in particular quantities characterizing the tool-workpiece interface. Temperature is one of these quantities, and since it has an impact on tool wear, its estimation is important. This study deals with a new modelling strategy, based on two calculation steps, for the analysis of the heat transfer into the cutting tool. Unlike classical methods, which consider only the cutting tool and apply an approximate heat flux at the cutting face estimated from experimental data (e.g. measured cutting force, cutting power), the proposed approach consists of two successive 3D finite element calculations and is fully independent of experimental measurements; only the definition of the behaviour of the tool-workpiece couple is necessary. The first is a 3D thermomechanical modelling of the chip formation process, which allows estimating the cutting forces, chip morphology and its flow direction. The second calculation is a 3D thermal modelling of the heat diffusion into the cutting tool, using an adequate thermal loading (applied uniform or non-uniform heat flux). This loading is estimated using quantities obtained from the first calculation step, such as the contact pressure, sliding velocity distributions and contact area. Comparisons between experimental data and the first calculation on the one hand, and between temperatures measured with embedded thermocouples and the second calculation on the other, show good agreement in terms of cutting forces, chip morphology and cutting temperature.

  20. Avoid the tsunami of the Dirac sea in the imaginary time step method

    Zhang, Ying; Liang, Haozhao; Meng, Jie

    2010-01-01

    The discrete single-particle spectra in both the Fermi and Dirac seas have been calculated by the imaginary time step (ITS) method for the Schroedinger-like equation after avoiding the "tsunami" of the Dirac sea, i.e. the diving behavior of the single-particle levels into the Dirac sea in the direct application of the ITS method to the Dirac equation. It is found that, by the transform from the Dirac equation to the Schroedinger-like equation, the single-particle spectra, which extend from positive to negative infinity, can be separately obtained by ITS evolution in either the Fermi sea or the Dirac sea. Results identical to those of the conventional shooting method have been obtained via ITS evolution of the equivalent Schroedinger-like equation, which demonstrates the feasibility, practicality and reliability of the present algorithm and dispels doubts about the ITS method in relativistic systems. (author)

  1. Two-step extraction method for lead isotope fractionation to reveal anthropogenic lead pollution.

    Katahira, Kenshi; Moriwaki, Hiroshi; Kamura, Kazuo; Yamazaki, Hideo

    2018-05-28

    This study developed a two-step extraction method which elutes the Pb adsorbed on the surface of sediments into a first solution by aqua regia and extracts the Pb absorbed inside particles into a second solution by a mixed acid of nitric acid, hydrofluoric acid and hydrogen peroxide solution. We applied the method to sediments in an enclosed water area and found that the isotope ratios of Pb in the second solution represented those of natural origin. This advantage of the method makes it possible to distinguish Pb of natural origin from that of anthropogenic sources on the basis of the isotope ratios. The results showed that the method was useful for discussing Pb sources, and that the anthropogenic Pb in the sediment samples analysed was mainly derived from China because of transboundary air pollution.

  2. The Validation of NAA Method Used as Test Method in Serpong NAA Laboratory

    Rina-Mulyaningsih, Th.

    2004-01-01

    The Validation of the NAA Method Used as a Test Method in the Serpong NAA Laboratory. The NAA method is a non-standard testing method. A testing laboratory shall validate the methods it uses to ensure and confirm that they are suitable for the application. The validation of the NAA methods has been done with the parameters of accuracy, precision, repeatability and selectivity. The NIST 1573a Tomato Leaves, NIES 10C Rice flour unpolished and standard elements were used in this testing program. The results of testing with NIST 1573a showed that the elements Na, Zn, Al and Mn met the acceptance criteria for accuracy and precision, whereas Co did not. The results of testing with NIES 10C showed that the Na and Zn elements met the acceptance criteria for accuracy and precision, but the Mn element did not. The results of the selectivity test showed that the quantifiable value is between 0.1 and 2.5 μg, depending on the element. (author)

  3. Validation method training: nurses' experiences and ratings of work climate.

    Söderlund, Mona; Norberg, Astrid; Hansebo, Görel

    2014-03-01

    Training nursing staff in communication skills can impact on the quality of care for residents with dementia and contributes to nurses' job satisfaction. Changing attitudes and practices takes time and energy and can affect the entire nursing staff, not just the nurses directly involved in a training programme. Therefore, it seems important to study nurses' experiences of a training programme and any influence of the programme on work climate among the entire nursing staff. To explore nurses' experiences of a 1-year validation method training programme conducted in a nursing home for residents with dementia and to describe ratings of work climate before and after the programme. A mixed-methods approach. Twelve nurses participated in the training and were interviewed afterwards. These individual interviews were tape-recorded and transcribed, then analysed using qualitative content analysis. The Creative Climate Questionnaire was administered before (n = 53) and after (n = 56) the programme to the entire nursing staff in the participating nursing home wards and analysed with descriptive statistics. Analysis of the interviews resulted in four categories: being under extra strain, sharing experiences, improving confidence in care situations and feeling uncertain about continuing the validation method. The results of the questionnaire on work climate showed higher mean values in the assessment after the programme had ended. The training strengthened the participating nurses in caring for residents with dementia, but posed an extra strain on them. These nurses also described an extra strain on the entire nursing staff that was not reflected in the results from the questionnaire. The work climate at the nursing home wards might have made it easier to conduct this extensive training programme. Training in the validation method could develop nurses' communication skills and improve their handling of complex care situations. © 2013 Blackwell Publishing Ltd.

  4. Influence of application methods of one-step self-etching adhesives on microtensile bond strength

    Chul-Kyu Choi

    2011-05-01

    Full Text Available Objectives The purpose of this study was to evaluate the effect of various application methods of one-step self-etch adhesives on microtensile resin-dentin bond strength. Materials and Methods Thirty-six extracted human molars were used. The teeth were assigned randomly to twelve groups (n = 15), according to the three different adhesive systems (Clearfil Tri-S Bond, Adper Prompt L-Pop, G-Bond) and application methods. The adhesive systems were applied on the dentin as follows: 1) single coating, 2) double coating, 3) manual agitation, 4) ultrasonic agitation. Following the adhesive application, light-cured composite resin was built up. The restored teeth were stored in distilled water at room temperature for 24 hours, and 15 specimens were prepared per group. Then the microtensile bond strength was measured and the failure mode was examined. Results Manual agitation and ultrasonic agitation of the adhesive significantly increased the microtensile bond strength compared with single coating and double coating. Double coating significantly increased the microtensile bond strength compared with single coating, and there was no significant difference between the manual agitation and ultrasonic agitation groups. There were significant differences in microtensile bond strength among the adhesives, and Clearfil Tri-S Bond showed the highest bond strength. Conclusions For one-step self-etching adhesives, there were significant differences according to application methods and type of adhesive. Regardless of the material, manual or ultrasonic agitation of the adhesive showed significantly higher microtensile bond strength.

  5. Single Laboratory Validated Method for Determination of Cylindrospermopsin and Anatoxin-a in Ambient Water by Liquid Chromatography/ Tandem Mass Spectrometry (LC/MS/MS)

    This product is an LC/MS/MS single laboratory validated method for the determination of cylindrospermopsin and anatoxin-a in ambient waters. The product contains step-by-step instructions for sample preparation, analyses, preservation, sample holding time and QC protocols to ensu...

  6. A three-step Maximum-A-Posterior probability method for InSAR data inversion of coseismic rupture with application to four recent large earthquakes in Asia

    Sun, J.; Shen, Z.; Burgmann, R.; Liang, F.

    2012-12-01

    We develop a three-step Maximum-A-Posterior probability (MAP) method for coseismic rupture inversion, which aims at maximizing the a posteriori probability density function (PDF) of elastic solutions of earthquake rupture. The method originates from the Fully Bayesian Inversion (FBI) and the Mixed linear-nonlinear Bayesian inversion (MBI) methods, shares the same a posteriori PDF with them and keeps most of their merits, while overcoming their convergence difficulty when large numbers of low-quality data are used, and greatly improving the convergence rate using optimization procedures. A highly efficient global optimization algorithm, Adaptive Simulated Annealing (ASA), is used to search for the maximum posterior probability in the first step. The non-slip parameters are determined by the global optimization method, and the slip parameters are inverted for using the least squares method, initially without positivity constraints and then damped to a physically reasonable range. This first-step MAP inversion brings the inversion close to the 'true' solution quickly and jumps over local maximum regions in the high-dimensional parameter space. The second-step inversion approaches the 'true' solution further, with positivity constraints subsequently applied on the slip parameters using the Monte Carlo Inversion (MCI) technique, with all parameters obtained from step one as the initial solution. Then the slip artifacts are eliminated from the slip models in the third-step MAP inversion, with the fault geometry parameters fixed. We first used a designed model with a 45-degree dipping angle and oblique slip, and corresponding synthetic InSAR data sets, to validate the efficiency and accuracy of the method. We then applied the method to four recent large earthquakes in Asia, namely the 2010 Yushu, China earthquake, the 2011 Burma earthquake, the 2011 New Zealand earthquake and the 2008 Qinghai, China earthquake, and compared our results with those from other groups. Our results show the effectiveness of
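
    The two-stage structure (global optimization over nonlinear geometry parameters, linear least squares over slip) can be sketched schematically; here SciPy's dual_annealing stands in for the ASA algorithm, non-negative least squares enforces slip positivity throughout, and the forward model is a toy stand-in rather than an elastic dislocation model:

      # For each candidate geometry, solve the linear slip subproblem and
      # return its residual norm; anneal over the geometry parameters.
      import numpy as np
      from scipy.optimize import dual_annealing, nnls

      rng = np.random.default_rng(3)
      d = rng.normal(0.0, 0.1, 50) + 1.0            # toy "InSAR" data vector

      def greens(dip, strike):
          # toy Green's function matrix depending on nonlinear geometry params
          t = np.linspace(0, 1, 50)
          return np.column_stack([np.sin(dip * t), np.cos(strike * t), t])

      def misfit(geom):
          G = greens(*geom)
          slip, rnorm = nnls(G, d)                  # positivity-constrained slip
          return rnorm

      res = dual_annealing(misfit, bounds=[(0.1, np.pi), (0.1, np.pi)], seed=3)
      best_slip, _ = nnls(greens(*res.x), d)
      print("geometry:", res.x.round(3), "slip:", best_slip.round(3))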

  7. A systematic methodology for creep master curve construction using the stepped isostress method (SSM): a numerical assessment

    Miranda Guedes, Rui

    2018-02-01

    Long-term creep of viscoelastic materials is experimentally inferred through accelerating techniques based on the time-temperature superposition principle (TTSP) or the time-stress superposition principle (TSSP). According to these principles, a given property measured for short times at a higher temperature or higher stress level remains the same as that obtained for longer times at a lower temperature or lower stress level, except that the curves are shifted parallel to the horizontal axis, matching a master curve. These procedures enable the construction of creep master curves from short-term experimental tests. The Stepped Isostress Method (SSM) is an evolution of the classical TSSP method. The SSM technique achieves a greater reduction in the number of test specimens required to obtain the master curve, since only one specimen is necessary. The classical approach, using creep tests, demands at least one specimen per stress level to produce the set of creep curves upon which TSSP is applied to obtain the master curve. This work proposes an analytical method to process the SSM raw data. The method is validated using numerical simulations to reproduce the SSM tests based on two different viscoelastic models. One model represents the viscoelastic behavior of a graphite/epoxy laminate and the other represents an adhesive based on epoxy resin.
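
    The horizontal-shift idea behind TSSP/SSM master-curve construction can be sketched by finding the log-time shift that best overlaps an accelerated (higher-stress) creep segment onto the reference segment; the data below are a synthetic power-law compliance, and real SSM processing must additionally handle the step transients:

      # Scan candidate log10-time shifts and keep the one that minimizes
      # the mismatch on the overlapping region of the two segments.
      import numpy as np

      def creep(t, scale):                 # toy compliance curve, power-law in time
          return 0.5 * (scale * t) ** 0.2

      log_t = np.linspace(0, 2, 40)        # decades of time
      ref = creep(10 ** log_t, 1.0)        # reference stress level
      accel = creep(10 ** log_t, 30.0)     # higher stress == "accelerated" response

      def mismatch(s):
          shifted = np.interp(log_t, log_t + s, accel, left=np.nan, right=np.nan)
          m = ~np.isnan(shifted)
          return np.mean((shifted[m] - ref[m]) ** 2) if m.any() else np.inf

      shifts = np.linspace(0.0, 3.0, 301)
      best = shifts[np.argmin([mismatch(s) for s in shifts])]
      print(f"log10 shift factor ≈ {best:.2f} (exact for this toy: {np.log10(30):.2f})")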

  8. Method validation and stability study of quercetin in topical emulsions

    Rúbia Casagrande

    2009-01-01

    Full Text Available This study validated a high performance liquid chromatography (HPLC) method for the quantitative evaluation of quercetin in topical emulsions. The method was linear within the 0.05-200 μg/mL range with a correlation coefficient of 0.9997, and without interference in the quercetin peak. The detection and quantitation limits were 18 and 29 ng/mL, respectively. The intra- and inter-assay precisions presented R.S.D. values lower than 2%. Averages of 93% and 94% of quercetin were recovered for the non-ionic and anionic emulsions, respectively. The raw material and the anionic emulsion, but not the non-ionic emulsion, were stable under all storage conditions for one year. The method reported is a fast and reliable HPLC technique useful for quercetin determination in topical emulsions.

  9. Validation of the Three-Step Strategic Approach for Improving Urban Water Management and Water Resource Quality Improvement

    Alberto Galvis

    2018-02-01

    Full Text Available The impact on water resources caused by municipal wastewater discharges has become a critical and ever-growing environmental and public health concern. So far, interventions have been positioned largely ‘at the end of the pipe’, via the introduction of high-tech and innovative wastewater treatment technologies. This approach is incomplete, inefficient and expensive, and will not be able to address the rapidly growing global wastewater challenge. In order to address this problem efficiently, it is important to adopt an integrated approach such as the three-step strategic approach (3-SSA), consisting of (1) minimization and prevention, (2) treatment for reuse and (3) stimulated natural self-purification. In this study, the 3-SSA was validated by applying it to the Upper Cauca river basin in Colombia and comparing it to a conventional strategy. The pollutant load removed was 64,805 kg/d of Biochemical Oxygen Demand (BOD5) (46%) for the conventional strategy and 69,402 kg/d BOD5 (50%) for the unconventional strategy. Cost-benefit analysis results clearly favoured the 3-SSA (unconventional strategy): NPV for the conventional strategy = −276,318 × 10³ Euros, and NPV for the unconventional strategy (3-SSA) = +338,266 × 10³ Euros. The application of the 3-SSA resulted in avoided costs for initial investments and operation and maintenance (O&M), especially for groundwater wells and associated pumps for sugar cane irrigation. Furthermore, costs were avoided by the optimization of wastewater treatment plants (WWTPs), tariffs and the replacement of fertilizers.

  10. Method validation in plasma source optical emission spectroscopy (ICP-OES) - From samples to results

    Pilon, Fabien; Vielle, Karine; Birolleau, Jean-Claude; Vigneau, Olivier; Labet, Alexandre; Arnal, Nadege; Adam, Christelle; Camilleri, Virginie; Amiel, Jeanine; Granier, Guy; Faure, Joel; Arnaud, Regine; Beres, Andre; Blanchard, Jean-Marc; Boyer-Deslys, Valerie; Broudic, Veronique; Marques, Caroline; Augeray, Celine; Bellefleur, Alexandre; Bienvenu, Philippe; Delteil, Nicole; Boulet, Beatrice; Bourgarit, David; Brennetot, Rene; Fichet, Pascal; Celier, Magali; Chevillotte, Rene; Klelifa, Aline; Fuchs, Gilbert; Le Coq, Gilles; Mermet, Jean-Michel

    2017-01-01

    Even though ICP-OES (Inductively Coupled Plasma - Optical Emission Spectroscopy) is now a routine analysis technique, the requirements placed on measuring processes demand complete control and mastery of the operating process and of the associated quality management system. The aim of this collective book is to guide the analyst through the entire method validation procedure and to help guarantee control of its different steps: administrative and physical management of samples in the laboratory, preparation and treatment of the samples before measurement, qualification and monitoring of the apparatus, instrument setting and calibration strategy, and exploitation of results in terms of accuracy, reliability and data covariance (with the practical determination of the accuracy profile). The most recent terminology is used in the book, and numerous examples and illustrations are given to aid understanding and to support the preparation of method validation documents.
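
    The 'accuracy profile' mentioned above compares, at each concentration level, a β-expectation tolerance interval of the relative error with predefined acceptance limits. A sketch under assumed replicate data and an assumed ±10% acceptance limit:

```python
import numpy as np
from scipy import stats

# Sketch of an accuracy profile: at each concentration level, a
# beta-expectation tolerance interval  bias +/- t * s * sqrt(1 + 1/n)
# is compared with acceptance limits. Replicates and limits are demo values.

beta = 0.95
accept = 10.0                                     # acceptance limit, %

levels = {1.0: [0.98, 1.03, 1.01, 0.99, 1.02],    # measured vs nominal
          10.0: [10.4, 9.8, 10.1, 10.2, 9.9]}

for nominal, reps in levels.items():
    rel = 100.0 * (np.asarray(reps) - nominal) / nominal   # relative error, %
    n = len(rel)
    k = stats.t.ppf(1 - (1 - beta) / 2, n - 1) * np.sqrt(1 + 1 / n)
    lo, hi = rel.mean() - k * rel.std(ddof=1), rel.mean() + k * rel.std(ddof=1)
    ok = (lo > -accept) and (hi < accept)
    print(f"{nominal:5.1f}: tolerance interval [{lo:+.1f}%, {hi:+.1f}%] "
          f"{'within' if ok else 'outside'} +/-{accept:.0f}%")
```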

  11. Statistical methods for mechanistic model validation: Salt Repository Project

    Eggett, D.L.

    1988-07-01

    As part of the Department of Energy's Salt Repository Program, Pacific Northwest Laboratory (PNL) is studying the emplacement of nuclear waste containers in a salt repository. One objective of the SRP is to develop an overall waste package component model which adequately describes phenomena such as container corrosion, waste form leaching and spent fuel degradation that are possible in the salt repository environment. The form of this model will be proposed, based on scientific principles and relevant salt repository conditions with supporting data. The model will be used to predict the future characteristics of the near-field environment. This involves several different submodels, such as the amount of time it takes a brine solution to contact a canister in the repository, how long it takes a canister to corrode and expose its contents to the brine, and the leach rate of the contents of the canister. These submodels are often tested in a laboratory and should be statistically validated (in this context, to validate means to demonstrate that the model adequately describes the data) before they can be incorporated into the waste package component model. This report describes statistical methods for validating these models. 13 refs., 1 fig., 3 tabs

  13. Simulation Methods and Validation Criteria for Modeling Cardiac Ventricular Electrophysiology.

    Krishnamoorthi, Shankarjee; Perotti, Luigi E; Borgstrom, Nils P; Ajijola, Olujimi A; Frid, Anna; Ponnaluri, Aditya V; Weiss, James N; Qu, Zhilin; Klug, William S; Ennis, Daniel B; Garfinkel, Alan

    2014-01-01

    We describe a sequence of methods to produce a partial differential equation model of the electrical activation of the ventricles. In our framework, we incorporate the anatomy and cardiac microstructure obtained from magnetic resonance imaging and diffusion tensor imaging of a New Zealand White rabbit, the Purkinje structure and the Purkinje-muscle junctions, and an electrophysiologically accurate model of the ventricular myocytes and tissue, which includes transmural and apex-to-base gradients of action potential characteristics. We solve the electrophysiology governing equations using the finite element method and compute both a 6-lead precordial electrocardiogram (ECG) and the activation wavefronts over time. We are particularly concerned with the validation of the various methods used in our model and, in this regard, propose a series of validation criteria that we consider essential. These include producing a physiologically accurate ECG, a correct ventricular activation sequence, and the inducibility of ventricular fibrillation. Among other components, we conclude that a Purkinje geometry with a high density of Purkinje muscle junctions covering the right and left ventricular endocardial surfaces as well as transmural and apex-to-base gradients in action potential characteristics are necessary to produce ECGs and time activation plots that agree with physiological observations.
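
    The full model above couples 3-D finite elements with detailed myocyte kinetics; as a much smaller stand-in, the sketch below advances a 1-D monodomain cable with FitzHugh-Nagumo kinetics by explicit finite differences and records activation times along the fiber. All parameters are generic demo values, not the paper's:

```python
import numpy as np

# Toy 1-D monodomain cable with FitzHugh-Nagumo kinetics: a stimulus at one
# end launches an activation wavefront whose arrival time is recorded at
# each node. Explicit finite differences, no-flux boundaries, demo units.

L, N = 10.0, 200
dx, dt, steps = L / N, 0.01, 12000
D, a, eps, beta = 0.1, 0.13, 0.01, 0.5

v = np.zeros(N); w = np.zeros(N)
v[:5] = 1.0                                   # stimulus at one end
activation = np.full(N, np.nan)

for n in range(steps):
    lap = np.zeros(N)
    lap[1:-1] = (v[2:] - 2 * v[1:-1] + v[:-2]) / dx**2
    lap[0] = 2 * (v[1] - v[0]) / dx**2        # no-flux boundaries
    lap[-1] = 2 * (v[-2] - v[-1]) / dx**2
    dv = D * lap + v * (1 - v) * (v - a) - w
    dw = eps * (beta * v - w)
    v += dt * dv
    w += dt * dw
    newly = np.isnan(activation) & (v > 0.5)  # record activation times
    activation[newly] = n * dt

print(f"wavefront reaches far end at t = {activation[-1]:.1f} (a.u.)")
```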

  14. On the limitations of fixed-step-size adaptive methods with response confidence.

    Hsu, Yung-Fong; Chin, Ching-Lan

    2014-05-01

    The family of (non-parametric, fixed-step-size) adaptive methods, also known as 'up-down' or 'staircase' methods, has been used extensively in psychophysical studies for threshold estimation. Extensions of adaptive methods to non-binary responses have also been proposed, an example being the three-category weighted up-down (WUD) method (Kaernbach, 2001) and its four-category extension (Klein, 2001). Such an extension, however, is somewhat restricted, and in this paper we discuss its limitations. To facilitate the discussion, we characterize the extension of WUD by an algorithm that incorporates response confidence into a family of adaptive methods. This algorithm can also be applied to two other adaptive methods, namely Derman's up-down method and the biased-coin design, which are suitable for estimating any threshold quantile. We then discuss, through simulations of the above three methods, the limitations of the algorithm. To illustrate, we conduct a small-scale experiment using the extended WUD under different response confidence formats to evaluate the consistency of threshold estimation. © 2013 The British Psychological Society.
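
    For readers unfamiliar with fixed-step-size staircases, the sketch below simulates the classical binary-response case (a 1-up/2-down rule converging on the 70.7% point); the logistic observer and all parameter values are assumptions for the demo, not the paper's procedures:

```python
import numpy as np

# Simulation of a fixed-step-size 1-up/2-down staircase: two consecutive
# correct responses step the level down, one incorrect steps it up, so the
# run converges where p_correct^2 = 0.5, i.e. the 70.7% point. The logistic
# observer model is an assumption for the demo.
rng = np.random.default_rng(0)

def p_correct(x, threshold=0.0, slope=2.0):
    return 1.0 / (1.0 + np.exp(-slope * (x - threshold)))

level, step = 2.0, 0.25
consecutive_correct, reversals, last_dir = 0, [], 0
for _ in range(200):
    correct = rng.random() < p_correct(level)
    if correct:
        consecutive_correct += 1
        if consecutive_correct == 2:              # 2 correct -> step down
            consecutive_correct, direction = 0, -1
        else:
            continue
    else:
        consecutive_correct, direction = 0, +1    # 1 wrong -> step up
    if last_dir and direction != last_dir:
        reversals.append(level)                   # reversal: record level
    last_dir = direction
    level += direction * step

print(f"threshold estimate ~ {np.mean(reversals[4:]):.2f} "
      f"(true 70.7% point ~ {np.log(1/(1/0.707 - 1))/2:.2f})")
```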

  15. Shutdown Dose Rate Analysis Using the Multi-Step CADIS Method

    Ibrahim, Ahmad M.; Peplow, Douglas E.; Peterson, Joshua L.; Grove, Robert E.

    2015-01-01

    The Multi-Step Consistent Adjoint Driven Importance Sampling (MS-CADIS) hybrid Monte Carlo (MC)/deterministic radiation transport method was proposed to speed up the shutdown dose rate (SDDR) neutron MC calculation using an importance function that represents the neutron importance to the final SDDR. This work applied the MS-CADIS method to the ITER SDDR benchmark problem. The MS-CADIS method was also used to calculate the SDDR uncertainty resulting from uncertainties in the MC neutron calculation, and to determine the degree of undersampling in SDDR calculations caused by the limited ability of the MC method to tally detailed spatial and energy distributions. The analysis of the ITER benchmark problem compared the efficiency of the MS-CADIS method to the traditional approach of using global MC variance reduction techniques to speed up the SDDR neutron MC calculation. Compared to the standard Forward-Weighted CADIS (FW-CADIS) method, the MS-CADIS method increased the efficiency of the SDDR neutron MC calculation by 69%. The MS-CADIS method also increased the fraction of nonzero scoring mesh tally elements in the space-energy regions of high importance to the final SDDR.

  16. Laboratory diagnostic methods, system of quality and validation

    Ašanin Ružica

    2005-01-01

    It is known that laboratory investigations secure safe and reliable results that provide a final confirmation of the quality of work. Ideas, planning, knowledge, skills, experience and environment, along with good laboratory practice, quality control and quality assurance, make the area of biological investigations very complex. In recent years, quality control, including the control of work in the laboratory, has been based on international standards and is applied at that level. The implementation of widely recognized international standards, such as International Standard ISO/IEC 17025 (1), and of the quality system series ISO/IEC 9000 (2), has become the imperative on the grounds of which laboratories obtain a formal, visible and appropriate quality system. The diagnostic methods used must consistently yield results that identify the animal as positive or negative, and the precise status of the animal is determined with a predefined degree of statistical significance. Methods applied to a selected population reduce the risk of obtaining falsely positive or falsely negative results. Preconditions for this are well-conceived and documented methods, the use of appropriate reagents, and professional, skilled staff. This process also requires the consistent implementation of rigorous experimental plans, epidemiological and statistical data and estimates, with constant monitoring of the validity of the applied methods. Such an approach is necessary in order to cut down the number of misconceptions and accidental mistakes for the reference population of animals on which the validity of a method is tested. Once a valid method is included in daily routine investigations, constant monitoring for internal quality control is necessary in order to adequately evaluate its reproducibility and reliability. Consequently, it is necessary at least twice yearly to conduct

  17. Development of a three dimensional circulation model based on fractional step method

    Mazen Abualtayef

    2010-03-01

    A numerical model was developed for simulating three-dimensional multilayer hydrodynamics and thermodynamics in domains with irregular bottom topography. The model was designed for examining the interactions between flow and topography. It is based on the three-dimensional Navier-Stokes equations and is solved using the fractional step method, which combines the finite difference method in the horizontal plane with the finite element method in the vertical plane. The numerical techniques are described, and the model test and application are presented. In the application of the model to the northern part of the Ariake Sea, the hydrodynamic and thermodynamic fields were predicted, and the numerically predicted amplitudes and phase angles agreed well with the field observations.
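
    The essence of the fractional step method is to split each time step into sub-steps that are integrated sequentially. The sketch below applies this splitting to a 1-D advection-diffusion problem (upwind advection, then explicit diffusion); it illustrates the splitting idea only, not the paper's 3-D solver:

```python
import numpy as np

# Fractional-step (operator-splitting) time integration for 1-D
# advection-diffusion: each time step is split into an advection sub-step
# (first-order upwind) followed by a diffusion sub-step (explicit central
# differences). Periodic domain; all parameters are demo values.

N, dx, dt = 200, 1.0 / 200, 1e-4
u, nu = 1.0, 0.005                         # advection speed, diffusivity
c = np.exp(-((np.linspace(0, 1, N) - 0.3) / 0.05) ** 2)  # initial blob

for n in range(2000):
    # sub-step 1: advection (upwind, periodic)
    c = c - u * dt / dx * (c - np.roll(c, 1))
    # sub-step 2: diffusion (explicit central differences)
    c = c + nu * dt / dx**2 * (np.roll(c, -1) - 2 * c + np.roll(c, 1))

print(f"peak moved to x = {np.argmax(c)/N:.2f} "
      f"(expected ~ {0.3 + u*2000*dt:.2f})")
```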

  18. Rapid expansion method (REM) for time‐stepping in reverse time migration (RTM)

    Pestana, Reynam C.

    2009-01-01

    We show that the wave equation solution using a conventional finite-difference scheme, commonly derived via the Taylor series approach, can be derived directly from the rapid expansion method (REM). After some mathematical manipulation we consider an analytical approximation for the Bessel function, where we assume that the time step is sufficiently small. From this derivation we find that if we consider only the first two Chebyshev polynomial terms in the rapid expansion method, we obtain the second-order time finite-difference scheme that is frequently used in conventional finite-difference implementations. We then show that if we use more terms from the REM we obtain a more accurate time integration of the wave field. Consequently, we demonstrate that the REM is more accurate than the usual finite-difference schemes, and that it provides a wave equation solution which allows us to march in large time steps without numerical dispersion and is numerically stable. We illustrate the method with post- and pre-stack migration results.
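
    The claim can be checked numerically with the Chebyshev-Bessel identity underlying the REM propagator, cos(z·x) = J0(z) + 2·Σ_k (-1)^k J_{2k}(z) T_{2k}(x) for |x| ≤ 1 (for small z, the two-term truncation reduces to the second-order scheme 1 - (z·x)²/2). The values below are demo choices:

```python
import numpy as np
from scipy.special import jv

# Numerical check of the Chebyshev-Bessel expansion of the cosine
# propagator: a handful of terms beyond the two that reproduce the
# second-order finite-difference scheme makes cos(z*x) accurate even for
# large z, i.e. large time steps. T_{2k}(x) = cos(2k * arccos(x)).

z = 20.0                                   # plays the role of R*dt
x = np.linspace(-1, 1, 5)                  # normalized eigenvalue L/R

def rem_cos(z, x, nterms):
    acc = jv(0, z) * np.ones_like(x)
    for k in range(1, nterms):
        acc += 2.0 * (-1) ** k * jv(2 * k, z) * np.cos(2 * k * np.arccos(x))
    return acc

for nterms in (5, 10, 15, 20):
    err = np.max(np.abs(rem_cos(z, x, nterms) - np.cos(z * x)))
    print(f"{nterms:2d} Chebyshev terms: max error {err:.2e}")
```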

  19. A novel single-step, multipoint calibration method for instrumented Lab-on-Chip systems

    Pfreundt, Andrea; Patou, François; Zulfiqar, Azeem

    2014-01-01

    for instrument-based PoC blood biomarker analysis systems. Motivated by the complexity of associating high-accuracy biosensing using silicon nanowire field effect transistors with ease of use for the PoC system user, we propose a novel one-step, multipoint calibration method for LoC-based systems. Our approach...... specifically addresses the important interfaces between a novel microfluidic unit to integrate the sensor array and a mobile-device hardware accessory. A multi-point calibration curve is obtained by generating a defined set of reference concentrations from a single input. By consecutively splitting the flow...
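
    The sketch below illustrates the single-input, multipoint idea in its simplest form: consecutive 1:1 flow splits turn one reference concentration into a calibration ladder. The split ratio and the logarithmic sensor law are assumptions for the demo, not the device's actual design:

```python
import numpy as np

# Single-input multipoint calibration sketch: consecutive 1:1 splits of one
# reference solution generate a ladder of known concentrations, to which a
# calibration curve is fitted. Sensor model and split ratio are assumptions.

c0, n_points = 100.0, 6                       # input conc. and ladder size
concentrations = c0 / 2.0 ** np.arange(n_points)

def sensor_response(c, a=0.35, b=2.0):        # hypothetical transduction law
    return a * np.log10(c) + b

readings = sensor_response(concentrations)
slope, intercept = np.polyfit(np.log10(concentrations), readings, 1)
print("ladder:", np.round(concentrations, 2))
print(f"calibration: response = {slope:.2f}*log10(c) + {intercept:.2f}")
```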

  20. s-Step Krylov Subspace Methods as Bottom Solvers for Geometric Multigrid

    Williams, Samuel [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Lijewski, Mike [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Almgren, Ann [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Straalen, Brian Van [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Carson, Erin [Univ. of California, Berkeley, CA (United States); Knight, Nicholas [Univ. of California, Berkeley, CA (United States); Demmel, James [Univ. of California, Berkeley, CA (United States)

    2014-08-14

    Geometric multigrid solvers within adaptive mesh refinement (AMR) applications often reach a point where further coarsening of the grid becomes impractical as individual subdomain sizes approach unity. At this point the most common solution is to use a bottom solver, such as BiCGStab, to reduce the residual by a fixed factor at the coarsest level. Each iteration of BiCGStab requires multiple global reductions (MPI collectives). As the number of BiCGStab iterations required for convergence grows with problem size, and the time for each collective operation increases with machine scale, bottom solves in large-scale applications can constitute a significant fraction of the overall multigrid solve time. In this paper, we implement, evaluate, and optimize a communication-avoiding s-step formulation of BiCGStab (CABiCGStab for short) as a high-performance, distributed-memory bottom solver for geometric multigrid. This is the first time s-step Krylov subspace methods have been leveraged to improve multigrid bottom solver performance. We use a synthetic benchmark for detailed analysis and integrate the best implementation into BoxLib in order to evaluate the benefit of an s-step Krylov subspace method on the multigrid solves found in the applications LMC and Nyx on up to 32,768 cores of the Cray XE6 at NERSC. Overall, we see bottom solver improvements of up to 4.2x on synthetic problems and up to 2.7x in real applications. This results in as much as a 1.5x improvement in overall solver performance in real applications.
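
    For reference, a plain (non-communication-avoiding) BiCGStab is sketched below on a 1-D Poisson matrix standing in for a coarse-grid bottom solve; the two dot products marked in the loop are the per-iteration global reductions that the paper's s-step reformulation amortizes over s steps:

```python
import numpy as np

# Classical BiCGStab (van der Vorst). In a distributed-memory setting each
# dot product is an MPI_Allreduce; CABiCGStab restructures the recurrences
# so several iterations share one round of reductions.

def bicgstab(A, b, tol=1e-8, maxiter=500):
    x = np.zeros_like(b)
    r = b - A @ x
    r_hat, p, v = r.copy(), np.zeros_like(b), np.zeros_like(b)
    rho = alpha = omega = 1.0
    for _ in range(maxiter):
        rho_new = r_hat @ r                      # global reduction #1
        beta = (rho_new / rho) * (alpha / omega)
        p = r + beta * (p - omega * v)
        v = A @ p
        alpha = rho_new / (r_hat @ v)            # global reduction #2
        s = r - alpha * v
        t = A @ s
        omega = (t @ s) / (t @ t)
        x += alpha * p + omega * s
        r = s - omega * t
        rho = rho_new
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
    return x

# 1-D Poisson "coarse grid" as a stand-in bottom-solve problem
n = 64
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x = bicgstab(A, b)
print("residual:", np.linalg.norm(b - A @ x))
```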

  1. Fabrication of titanium removable dental prosthesis frameworks with a 2-step investment coating method.

    Koike, Mari; Hummel, Susan K; Ball, John D; Okabe, Toru

    2012-06-01

    Although pure titanium is known to have good biocompatibility, a titanium alloy with better strength is needed for fabricating clinically acceptable partial removable dental prosthesis (RDP) frameworks. The mechanical properties of an experimental Ti-5Al-5Cu alloy cast with a 2-step investment technique were examined for RDP framework applications. Patterns for the various property tests and denture frameworks for a preliminary trial casting were invested with a 2-step coating method using 2 types of mold materials: a less reactive spinel compound (Al2O3·MgO) and a less expensive SiO2-based material. The yield and tensile strength (n=5), modulus of elasticity (n=5), elongation (n=5), and hardness (n=8) of the cast Ti-5Al-5Cu alloy were determined. The external appearance and internal porosities of the preliminary trial castings of denture frameworks (n=2) were examined with a conventional dental radiographic unit. Cast Ti-6Al-4V alloy and commercially pure titanium (CP Ti) served as controls. The data for the mechanical properties were statistically analyzed with 1-way ANOVA (α=.05). The yield strength of the cast Ti-5Al-5Cu alloy was 851 MPa and the hardness was 356 HV. These properties were comparable to those of the cast Ti-6Al-4V and higher than those of CP Ti (P<.05). One of the Ti-5Al-5Cu frameworks was found to have been incompletely cast. The cast biocompatible experimental Ti-5Al-5Cu alloy exhibited high strength when cast with the 2-step coating method. With a dedicated study to determine the effect of sprue design on the quality of castings, biocompatible Ti-5Al-5Cu RDP frameworks for a clinical trial can be produced. Copyright © 2012 The Editorial Council of the Journal of Prosthetic Dentistry. Published by Mosby, Inc. All rights reserved.
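
    A minimal sketch of the 1-way ANOVA comparison (α=.05) used above; the replicate values are invented, with only the group means echoing the reported figures:

```python
import numpy as np
from scipy.stats import f_oneway

# One-way ANOVA across the three cast materials. Yield-strength replicates
# below are demo data, not the study's measurements.

ti_5al_5cu = [851, 842, 860, 848, 855]      # MPa, n=5 (demo)
ti_6al_4v  = [870, 858, 845, 866, 852]
cp_ti      = [480, 495, 470, 488, 476]

f_stat, p_value = f_oneway(ti_5al_5cu, ti_6al_4v, cp_ti)
print(f"F = {f_stat:.1f}, p = {p_value:.2e}",
      "-> reject H0 at alpha=.05" if p_value < 0.05 else "")
```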

  2. A novel two-step method for screening shade tolerant mutant plants via dwarfism

    Wei Li

    2016-10-01

    When subjected to shade, plants undergo rapid shoot elongation, which often makes them more prone to disease and mechanical damage. Shade-tolerant plants can be difficult to breed; however, they offer a substantial benefit over other varieties in low-light areas. Although perennial ryegrass (Lolium perenne L.) is a popular turfgrass species because of its good appearance and fast establishment, the plant normally does not perform well under shade conditions. It has been reported that, in turfgrass, induced dwarfism can enhance shade tolerance. Here we describe a two-step procedure for isolating shade-tolerant mutants of perennial ryegrass: first screening for dominant dwarf mutants, and then screening the dwarf plants for shade tolerance. The two-step screening process can be carried out efficiently, with limited space, at early seedling stages, which enables quick and efficient isolation of shade-tolerant mutants and thus facilitates the development of new shade-tolerant turfgrass cultivars. Using this method, we isolated 136 dwarf mutants from 300,000 mutagenized seeds, with 65 being shade tolerant (0.022%). When screening directly for shade tolerance, we recovered only four mutants from a population of 150,000 mutagenized seeds (0.003%). One shade-tolerant mutant, shadow-1, was characterized in detail. In addition to dwarfism, shadow-1 and its sexual progeny displayed high degrees of tolerance to both natural and artificial shade. We showed that the endogenous gibberellin (GA) content in shadow-1 was higher than in wild-type controls, and that shadow-1 was also partially GA-insensitive. Our novel, simple and effective two-step screening method should be applicable to breeding shade-tolerant cultivars of turfgrasses, ground covers, and other economically important crop plants that can be grown under canopies of existing vegetation to increase productivity per unit area of land.

  3. Validation study of core analysis methods for full MOX BWR

    2013-01-01

    JNES has been developing a technical database to be used in reviewing the validation of LWR core analysis methods on the following occasions: (1) confirming the core safety parameters from the initial core (one-third MOX core) through a full MOX core in the Oma Nuclear Power Plant, which is under construction; (2) licensing high-burnup MOX cores in the future; and (3) reviewing topical reports on core analysis codes for safety design and evaluation. Based on the technical database, JNES will issue a guide for reviewing the core analysis methods used for safety design and evaluation of LWRs. The database will also be used for validating and improving the core analysis codes developed by JNES. JNES has progressed with the following projects: (1) improving the Doppler reactivity analysis model in the Monte Carlo calculation code MVP; (2) a sensitivity study of nuclear cross-section data on reactivity calculations of experimental cores composed of UO2 and MOX fuel rods; (3) analysis of isotopic composition data for UO2 and MOX fuels; and (4) the guide for reviewing the core analysis codes, among others. (author)

  5. A Simple Three-Step Method for Design and Affinity Testing of New Antisense Peptides: An Example of Erythropoietin

    Nikola Štambuk

    2014-05-01

    Antisense peptide technology is a valuable tool for deriving new biologically active molecules and performing peptide-receptor modulation. It is based on the fact that peptides specified by complementary (antisense) nucleotide sequences often bind to each other with higher specificity and efficacy. We tested the validity of this concept on the example of human erythropoietin, a well-characterized and pharmacologically relevant hematopoietic growth factor. The purpose of the work was to present and test a simple and efficient three-step procedure for the design of an antisense peptide targeting the receptor-binding site of human erythropoietin. First, we selected the carboxyl-terminal receptor-binding region of the molecule (epitope) as a template for antisense peptide modeling; second, we designed the antisense peptide using mRNA transcription of the epitope sequence in the 3'→5' direction and computational screening of potential paratope structures with BLAST; third, we evaluated sense-antisense (epitope-paratope) peptide binding and affinity by means of fluorescence spectroscopy and microscale thermophoresis. Both methods showed similar Kd values of 850 and 816 µM, respectively. The advantages of the methods are fast screening with a small quantity of sample, and measurements within a range of physicochemical parameters resembling physiological conditions. Antisense peptides targeting specific erythropoietin regions could be used for the development of new immunochemical methods. Selected antisense peptides with optimal affinity are potential lead compounds for the development of novel diagnostic substances, biopharmaceuticals and vaccines.

  6. An extended validation of the last generation of particle finite element method for free surface flows

    Gimenez, Juan M.; González, Leo M.

    2015-03-01

    In this paper, a new generation of the particle method known as the Particle Finite Element Method (PFEM), which combines convective particle movement with a fixed-mesh resolution, is applied to free surface flows. This interesting variant, previously described in the literature as PFEM-2, is able to use larger time steps than other similar numerical tools, which implies shorter computational times while maintaining the accuracy of the computation. PFEM-2 has already been extended to free surface problems, and the main topic of this paper is a deeper validation of this methodology over a wider range of flows. To accomplish this task, improved versions of discontinuous and continuous enriched basis functions for the pressure field have been developed to capture the free surface dynamics without artificial diffusion or undesired numerical effects when different density ratios are involved. A collection of problems has been carefully selected such that a wide variety of Froude numbers, density ratios and dissipation-dominated cases are reported, with the intention of presenting a general methodology that is not restricted to a particular range of parameters and is capable of using large time steps. The results of the free-surface problems solved, which include the Rayleigh-Taylor instability, sloshing problems, viscous standing waves and the dam break problem, are compared to well-validated numerical alternatives or experimental measurements, obtaining accurate approximations for such complex flows.

  7. Validation of single-sample doubly labeled water method

    Webster, M.D.; Weathers, W.W.

    1989-01-01

    We have experimentally validated a single-sample variant of the doubly labeled water method for measuring metabolic rate and water turnover in a very small passerine bird, the verdin (Auriparus flaviceps). We measured CO2 production using the Haldane gravimetric technique and compared these values with estimates derived from isotopic data. Doubly labeled water results based on the one-sample calculations differed from Haldane values by less than 0.5% on average (range -8.3 to 11.2%, n = 9). Water flux computed by the single-sample method differed by -1.5% on average from results for the same birds based on the standard two-sample technique (range -13.7 to 2.0%, n = 9).

  8. Accelerated solvent extraction method with one-step clean-up for hydrocarbons in soil

    Nurul Huda Mamat Ghani; Norashikin Sain; Rozita Osman; Zuraidah Abdullah Munir

    2007-01-01

    The application of accelerated solvent extraction (ASE) using hexane, combined with neutral silica gel and sulfuric acid/silica gel (SA/SG) to remove impurities prior to analysis by gas chromatography with flame ionization detection (GC-FID), was studied. The efficiency of extraction was evaluated using three hydrocarbons (dodecane, tetradecane and pentadecane) spiked into a soil sample. The effect of the ASE operating conditions (extraction temperature, extraction pressure, static time) was evaluated, and the optimized conditions obtained from the study were an extraction temperature of 160 °C and an extraction pressure of 2000 psi with a 5-minute static extraction time. The developed ASE method with one-step clean-up was applied to the extraction of hydrocarbons from spiked soil, and the amount extracted was comparable to ASE extraction without a clean-up step, with the advantage of a cleaner extract with reduced interferences. With the developed method, extraction and clean-up of hydrocarbons in soil can therefore be achieved rapidly and efficiently with reduced solvent usage. (author)

  9. Novel two-step method to form silk fibroin fibrous hydrogel

    Ming, Jinfa; Li, Mengmeng; Han, Yuhui; Chen, Ying; Li, Han; Zuo, Baoqi; Pan, Fukui

    2016-01-01

    Hydrogels prepared from silk fibroin solution have been studied; however, mimicking the nanofibrous structures of the extracellular matrix when fabricating biomaterials remains a challenge. Here, a novel two-step method was applied to prepare fibrous hydrogels using regenerated silk fibroin solution containing nanofibrils in a range of tens to hundreds of nanometers. When gelation of the silk solution occurred, it showed a top-down type of gel formation within 30 min. After gelation, the silk fibroin fibrous hydrogels exhibited a nanofiber network morphology with β-sheet structure. Moreover, the compressive stress and modulus of the fibrous hydrogels formed from 2.0 wt.% solutions were 31.9 ± 2.6 kPa and 2.8 ± 0.8 kPa, respectively. In addition, the fibrous hydrogels supported BMSC attachment and proliferation over 12 days. This study provides important insight into the in vitro processing of silk fibroin into useful new materials. - Highlights: • SF fibrous hydrogel was prepared by a novel two-step method. • SF solution containing nanofibrils in a range of tens to hundreds of nanometers was prepared. • The gelation process was of the top-down type, completing within minutes. • SF fibrous hydrogels exhibited nanofiber network morphology with β-sheet structure. • The fibrous hydrogels had compressive stresses superior to those of porous hydrogels.

  10. Validation and further development of a novel thermal analysis method

    Mathews, E.H.; Shuttleworth, A.G.; Rousseau, P.G. [Pretoria Univ. (South Africa). Dept. of Mechanical Engineering]

    1994-12-31

    The design of thermally and energy efficient buildings requires, inter alia, investigation of the passive performance, natural ventilation, mechanical ventilation, and structural and evaporative cooling of the building. Only when these fail to achieve the desired thermal comfort should mechanical cooling systems be considered. Few computer programs can investigate all these comfort regulating measures at the design stage. The QUICK design program can simulate these options with the exception of mechanical cooling. In this paper, QUICK's applicability is extended to include the analysis of basic air-conditioning systems. Since the design of these systems is based on indoor loads, it was necessary to validate QUICK's load predictions before extending it. This article addresses validation in general and proposes a procedure to establish the efficiency of a program's load predictions. This proposed procedure is used to compare load predictions by the ASHRAE, CIBSE, CARRIER, CHEETAH, BSIMAC and QUICK methods for 46 case studies involving 36 buildings in various climatic conditions. Although significant differences between the results of the various methods were observed, it is concluded that QUICK can be used with the same confidence as the other methods. It was further shown that load prediction programs usually under-estimate the effect of building mass and therefore over-estimate the peak loads. The details of the 46 case studies are available to other researchers for further verification purposes. With the confidence gained in its load predictions, QUICK was extended to include air-conditioning system analysis. The program was then applied to different case studies. It is shown that system size and energy usage can be reduced by more than 60% by using a combination of passive and mechanical cooling systems as well as different control strategies. (author)

  11. Pre-validation methods for developing a patient reported outcome instrument

    Castillo Mayret M

    2011-08-01

    Background: Measures that reflect patients' assessment of their health are of increasing importance as outcome measures in randomised controlled trials. The methodological approach used in the pre-validation development of new instruments (item generation, item reduction and question formatting) should be robust and transparent. The totality of the content of existing PRO instruments for a specific condition provides a valuable resource (pool of items) that can be utilised to develop new instruments. Such 'top down' approaches are common, but the explicit pre-validation methods are often poorly reported. This paper presents a systematic and generalisable 5-step pre-validation PRO instrument methodology. Methods: The method is illustrated using the example of the Aberdeen Glaucoma Questionnaire (AGQ). The five steps are: (1) generation of a pool of items; (2) item de-duplication (three phases); (3) item reduction (two phases); (4) assessment of the remaining items' content coverage against a pre-existing theoretical framework appropriate to the objectives of the instrument and the target population (e.g. the ICF); and (5) qualitative exploration of the target population's views of the new instrument and the items it contains. Results: The AGQ item pool contained 725 items. The three de-duplication phases removed 91, 225 and 48 items, respectively. The two item reduction phases discarded 70 and 208 items, respectively. The draft AGQ contained 83 items with good content coverage. The qualitative exploration ('think aloud' study) resulted in the removal of a further 15 items and refinement of the wording of others. The resulting draft AGQ contained 68 items. Conclusions: This study presents a novel methodology for developing a PRO instrument, based on three sources: literature reporting what is important to patients; a theoretically coherent framework; and patients' experience of completing the instrument. By systematically accounting for all items dropped

  12. One-Step Method for Preparation of Magnetic Nanoparticles Coated with Chitosan

    Karla M. Gregorio-Jauregui

    2012-01-01

    The one-step preparation of magnetic nanoparticles coated with chitosan by the coprecipitation method, in the presence of different chitosan concentrations, is reported here. The formation of superparamagnetic nanoparticles was confirmed by X-ray diffraction and magnetic measurements. Scanning transmission electron microscopy revealed spheroidal nanoparticles of around 10-11 nm average diameter. Characterization of the products by Fourier transform infrared spectroscopy demonstrated that composite chitosan-magnetic nanoparticles were obtained. The chitosan content of the nanocomposites was estimated by thermogravimetric analysis. The nanocomposites were tested for Pb2+ removal from a PbCl2 aqueous solution, showing a removal efficacy of up to 53.6%. This work provides a simple method for obtaining chitosan-coated nanoparticles, which could be useful for the removal of heavy metal ions from water.

  13. Characteristic analysis of laser isotope separation process by two-step photodissociation method

    Okamoto, Tsuyoshi; Suzuki, Atsuyuki; Kiyose, Ryohei

    1981-01-01

    A large number of laser isotope separation experiments have been actively performed in many countries. In this paper, the selective two-step photodissociation method is chosen, and the simultaneous nonlinear differential equations that express the separation process are solved directly by computer. Predicted separation factors are investigated in relation to the incident pulse energy and the concentration of the desired molecules. Furthermore, the concept of separative work is used to evaluate the separation results for this method. It is shown from an example of numerical calculation that a very large separation factor can be obtained if the concentration of the desired molecules is lowered, and that closely synchronized laser pulses are not always required in operation for the photodissociation of the molecules. (author)
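
    A rate-equation sketch of selective two-step photodissociation is given below; the three-level model and every rate constant are invented for illustration (the paper's actual equations are not reproduced here). Selectivity enters through the isotope-dependent first-step excitation rate:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy kinetics of selective two-step photodissociation: level 0 is pumped
# to level 1 by the first (isotopically selective) laser, and level 1 is
# dissociated by the second laser; level 1 can also relax back. All rate
# constants are demo assumptions.

def rates(t, y, k1, k2, k_relax):
    n0, n1, n_diss = y
    return [-k1 * n0 + k_relax * n1,
            k1 * n0 - (k_relax + k2) * n1,
            k2 * n1]

k2, k_relax, t_end = 5.0, 1.0, 2.0
frac = {}
for label, k1 in [("desired", 10.0), ("undesired", 0.1)]:
    sol = solve_ivp(rates, (0, t_end), [1.0, 0.0, 0.0],
                    args=(k1, k2, k_relax), rtol=1e-8)
    frac[label] = sol.y[2, -1]                 # dissociated fraction

alpha = (frac["desired"] / (1 - frac["desired"])) / \
        (frac["undesired"] / (1 - frac["undesired"]))
print(f"dissociated fractions: {frac}, separation factor ~ {alpha:.0f}")
```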

  14. Fuzzy decision-making: a new method in model selection via various validity criteria

    Shakouri Ganjavi, H.; Nikravesh, K.

    2001-01-01

    Modeling is considered the first step in scientific investigation. Several alternative models may be candidates for expressing a phenomenon, and scientists use various criteria to select one model from among the competing models. Based on the solution of a fuzzy decision-making problem, this paper proposes a new method of model selection. The method enables the scientist to apply all desired validity criteria systematically, by defining a proper possibility distribution function for each criterion. Finally, minimization of a utility function composed of the possibility distribution functions determines the best selection. The method is illustrated through a modeling example for the Average Daily Time Duration of Electrical Energy Consumption in Iran.
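
    A minimal sketch of the idea follows: each criterion is given a possibility (membership) function, the memberships are aggregated per model, and the best-scoring model is selected. The triangular memberships, criteria and candidate scores are all invented, and the min-aggregation shown is a simple variant, not the paper's actual utility function:

```python
import numpy as np

# Fuzzy model selection sketch: per-criterion possibility functions are
# evaluated for each candidate model and combined conservatively (min);
# the model with the highest aggregate possibility wins. Demo values only.

def possibility(value, best, tol):
    """Triangular membership: 1 at 'best', 0 beyond 'best +/- tol'."""
    return max(0.0, 1.0 - abs(value - best) / tol)

# criterion: (ideal value, tolerance); per-model measured criteria values
criteria = {"rmse": (0.0, 2.0), "aic": (100.0, 60.0), "runs_test_p": (1.0, 1.0)}
models = {"ARX":   {"rmse": 0.9, "aic": 120.0, "runs_test_p": 0.40},
          "ARMAX": {"rmse": 0.7, "aic": 135.0, "runs_test_p": 0.70},
          "NN":    {"rmse": 0.5, "aic": 180.0, "runs_test_p": 0.15}}

for name, scores in models.items():
    mus = [possibility(scores[c], *criteria[c]) for c in criteria]
    aggregate = min(mus)                     # conservative min-aggregation
    print(f"{name:6s} memberships {np.round(mus, 2)} "
          f"-> aggregate possibility {aggregate:.2f}")
```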

  15. A two-step method for fast and reliable EUV mask metrology

    Helfenstein, Patrick; Mochi, Iacopo; Rajendran, Rajeev; Yoshitake, Shusuke; Ekinci, Yasin

    2017-03-01

    One of the major obstacles to the implementation of extreme ultraviolet lithography for upcoming technology nodes in the semiconductor industry remains the realization of fast and reliable detection methods for patterned mask defects. We are developing a reflective EUV mask-scanning lensless imaging tool (RESCAN), installed at the Swiss Light Source synchrotron at the Paul Scherrer Institut. Our system is based on a two-step defect inspection method. In the first step, a low-resolution defect map is generated by die-to-die comparison of the diffraction patterns from areas with programmed defects to those from areas that are known to be defect-free on our test sample. At a later stage, a die-to-database comparison will be implemented, in which the measured diffraction patterns will be compared to those calculated directly from the mask layout. This Scattering Scanning Contrast Microscopy technique operates purely in the Fourier domain without the need to obtain the aerial image and, given a sufficient signal-to-noise ratio, finds defects in a fast and reliable way, albeit with a location accuracy limited by the spot size of the incident illumination. Having thus identified rough locations for the defects, a fine scan is carried out in the vicinity of these locations. Since our source delivers coherent illumination, we can use an iterative phase-retrieval method to reconstruct the aerial image of the scanned area with, in principle, diffraction-limited resolution, without the need for an objective lens. Here, we focus on the aerial image reconstruction technique and give a few examples to illustrate the capability of the method.
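
    A toy version of the first (die-to-die) step is sketched below: the far-field diffraction intensities of a test area and a known-good area are compared, and a difference above threshold flags a defect without locating it precisely. The synthetic patterns and threshold are stand-ins for real EUV scattering data:

```python
import numpy as np

# Die-to-die comparison sketch: compare diffraction (Fourier) intensities
# of a test die against a reference die. A programmed defect changes the
# scattered intensity, flagging its presence (not its exact location).

rng = np.random.default_rng(1)
good = (rng.random((128, 128)) > 0.5).astype(float)   # reference die pattern
test = good.copy()
test[60:64, 60:64] = 1.0                              # programmed defect

def diffraction_intensity(pattern):
    return np.abs(np.fft.fftshift(np.fft.fft2(pattern))) ** 2

diff = np.abs(diffraction_intensity(test) - diffraction_intensity(good))
signal = diff.sum() / diffraction_intensity(good).sum()
print(f"relative diffraction-signal change: {signal:.2e} "
      f"-> {'defect flagged' if signal > 1e-4 else 'clean'}")
```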

  16. The a3 problem solving report: a 10-step scientific method to execute performance improvements in an academic research vivarium.

    Bassuk, James A; Washington, Ida M

    2013-01-01

    The purpose of this study was to illustrate the application of A3 Problem Solving Reports of the Toyota Production System to our research vivarium through the methodology of Continuous Performance Improvement, a lean approach to healthcare management at Seattle Children's (Hospital, Research Institute, Foundation). The Report format is described within the perspective of a 10-step scientific method designed to realize measurable improvements of Issues identified by the Report's Author, Sponsor and Coach. The 10-step method (Issue, Background, Current Condition, Goal, Root Cause, Target Condition, Countermeasures, Implementation Plan, Test, and Follow-up) was shown to align with Shewhart's Plan-Do-Check-Act process improvement cycle in a manner that allowed for quantitative analysis of the Countermeasure's outcomes and of Testing results. During fiscal year 2012, 9 A3 Problem Solving Reports were completed in the vivarium under the teaching and coaching system implemented by the Research Institute. Two of the 9 reports are described herein. Report #1 addressed the issue of the vivarium's veterinarian not being able to provide input into sick animal cases during the work day, while report #7 tackled the lack of a standard in keeping track of weekend/holiday animal health inspections. In each Report, a measurable Goal that established the basis for improvement recognition was present. A Five Whys analysis identified the Root Cause for Report #1 as historical work patterns that existed before the veterinarian was hired on and that modern electronic communication tools had not been implemented. The same analysis identified the Root Cause for Report #7 as the vivarium had never standardized the process for weekend/holiday checks. Successful outcomes for both Reports were obtained and validated by robust audit plans. The collective data indicate that vivarium staff acquired a disciplined way of reporting on, as well as solving, problems in a manner consistent with high

  18. The Finite-Surface Method for incompressible flow: a step beyond staggered grid

    Hokpunna, Arpiruk; Misaka, Takashi; Obayashi, Shigeru

    2017-11-01

    We present a newly developed higher-order finite-surface method for the incompressible Navier-Stokes equations (NSE). This method defines the velocities as surface-averaged values on the surfaces of the pressure cells. Consequently, mass conservation on the pressure cells becomes an exact equation. The only terms left to approximate are the momentum equation and the pressure at the new time step. Under certain conditions, the exact mass conservation enables an explicit n-th-order accurate NSE solver to be used with a pressure treatment that is two or four orders less accurate, without losing the apparent convergence rate. This feature is not possible with finite volume or finite difference methods. We use Fourier analysis with a model spectrum to determine these conditions and find that the admissible range covers standard boundary layer flows. The formal convergence and the performance of the proposed scheme are compared with those of a sixth-order finite volume method. Finally, the accuracy and performance of the method are evaluated in turbulent channel flows. This work is partially funded by a research collaboration with IFS, Tohoku University, and by the ASEAN+3 funding scheme of CMUIC, Chiang Mai University.

  19. Two-Step Injection Method for Collecting Digital Evidence in Digital Forensics

    Nana Rachmana Syambas

    2015-01-01

    In digital forensic investigations, investigators take digital evidence from computers, laptops or other electronic goods. There are many complications when a suspect or related person does not want to cooperate or has removed digital evidence. A lot of research has been done with the goal of retrieving data from flash memory or other digital storage media from which the content has been deleted. Unfortunately, such methods cannot guarantee that all data will be recovered; most data can only be recovered partially, and sometimes imperfectly, so that some or all files cannot be opened. This paper proposes the development of a new method for the retrieval of digital evidence called the Two-Step Injection method (TSI). It focuses on preventing the loss of digital evidence through the deletion of data by suspects or other parties. The advantage of this method is that the system works in secret and can be combined with other digital evidence applications that already exist, so that the accuracy and completeness of the resulting digital evidence can be improved. An experiment to test the effectiveness of the method was set up. The developed TSI system worked properly and had a 100% success rate.

  20. Validation of internal dosimetry protocols based on stochastic method

    Mendes, Bruno M.; Fonseca, Telma C.F.; Almeida, Iassudara G.; Trindade, Bruno M.; Campos, Tarcisio P.R.

    2015-01-01

    Computational phantoms adapted to Monte Carlo codes have been applied successfully in radiation dosimetry. The NRI research group has been developing Internal Dosimetry Protocols (IDPs) addressing distinct methodologies, software and computational human simulators to perform internal dosimetry, especially for new radiopharmaceuticals. Validation of the IDPs is critical to ensure the reliability of simulation results, and intercomparison of data from the literature with those produced by our IDPs is a suitable validation method. The aim of this study was to validate the IDPs following such an intercomparison procedure. The Golem phantom was reconfigured to run on MCNP5. The specific absorbed fractions (SAF) for photons at 30, 100 and 1000 keV were simulated following the IDPs and compared with the reference values (RV) published by Zankl and Petoussi-Henss, 1998. The average difference between the simulated SAF and the RV was 2.3%. The largest SAF differences were found in situations involving low-energy photons at 30 keV. The adrenals and the thyroid, i.e. the lowest-mass organs, had the highest SAF discrepancies from the RV, at 7.2% and 3.8%, respectively. The statistical differences between the SAF obtained with our IDPs and the reference values were considered acceptable at 30, 100 and 1000 keV. We believe that the main reason for the discrepancies found in the lower-mass organs was our source definition methodology. Improvements to the spatial distribution of the source in the voxels may provide outputs more consistent with the reference values for lower-mass organs. (author)

  2. Oxcarbazepine: validation and application of an analytical method

    Paula Cristina Rezende Enéas

    2010-06-01

    Oxcarbazepine (OXC) is an important anticonvulsant and mood-stabilizing drug. A pharmacopoeial monograph for OXC is not yet available, and the development and validation of a new analytical method for the quantification of this drug is therefore essential. In the present study, a UV spectrophotometric method for the determination of OXC was developed. The various parameters, such as linearity, precision, accuracy and specificity, were studied according to the International Conference on Harmonization guidelines. Batches of 150 mg OXC capsules were prepared and analyzed using the validated UV method. The formulations were also evaluated for parameters including drug-excipient compatibility, flowability, uniformity of weight, disintegration time, assay, uniformity of content and the amount of drug dissolved during the first hour.

  3. Dose Rate Experiment at JET for Benchmarking the Calculation Direct One Step Method

    Angelone, M.; Petrizzi, L.; Pillon, M.; Villari, R.; Popovichev, S.

    2006-01-01

    Neutrons produced by D-D and D-T plasmas induce the activation of tokamak materials and components. The development of reliable methods to assess dose rates is a key issue for maintaining and operating nuclear machines, in normal and off-normal conditions. In the frame of the EFDA Fusion Technology work programme, a computational tool based on the MCNP Monte Carlo code has been developed to predict the dose rate after shutdown: it is called the Direct One Step Method (D1S). The D1S is an innovative approach in which the decay gammas are coupled to the neutrons as in the prompt case and are transported in one single step in the same run. Benchmarking this new tool with experimental data taken in a complex geometry like that of a tokamak is a fundamental step in testing the reliability of the D1S method. A dedicated benchmark experiment was proposed for the 2005-2006 experimental campaign of JET. Two irradiation positions were selected for the benchmark: one inner position inside the vessel, not far from the plasma, called the 2 upper irradiation end (IE2), where the neutron fluence is relatively high; and a second position just outside a vertical port, in an external position (EX), where the neutron flux is lower and the dose rate to be measured is not far from the residual background. Passive detectors are used for the in-vessel measurements: high-sensitivity thermoluminescent dosimeters (TLDs), type GR-200A (natural LiF), which ensure measurements down to environmental dose levels. An active detector of Geiger-Muller (GM) type is used for the out-of-vessel dose rate measurement. Before use, the detectors were calibrated in a secondary gamma-ray standard facility (Cs-137 and Co-60) in terms of air kerma. The background measurement was carried out in the period July-September 2005 in the outside position EX using the GM tube, and in September 2005 inside the vacuum vessel using TLD detectors located in the 2 upper irradiation end IE2. In the present work

  4. Simulation of the two-fluid model on incompressible flow with Fractional Step method for both resolved and unresolved scale interfaces

    Hou, Xiaofei; Rigola, Joaquim; Lehmkuhl, Oriol; Oliva, Assensi

    2015-01-01

    Highlights: • Two-phase flow with a free surface is solved by means of the two-fluid model (TFM). • The Fractional Step method and a finite volume technique are used to solve the TFM. • The conservative Level Set method reduces the interface-sharpening diffusion problem. • Cases including high density ratios and high viscosities validate the models. - Abstract: In the present paper, the Fractional Step method usually used in single-fluid flow is extended and applied to the resolution of the two-fluid model using a finite volume discretization. The use of a projection-method resolution, instead of the usual pressure-correction method for multi-fluid flow, successfully avoids iterative processes. On the other hand, the main weakness of the two-fluid model used for simulations of free surface flows, namely the numerical diffusion of the interface, is also addressed by means of the conservative Level Set method (interface sharpening) (Strubelj et al., 2009). Moreover, the proposed algorithm has made it possible to present different free-surface cases, with or without Level Set implementation, even on coarse meshes and over a wide range of density ratios. The numerical results presented, which are numerically verified, experimentally validated and converged at high density ratios, show the capability and reliability of this resolution method for both mixed and unmixed flows.
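
    The core of a fractional-step (projection) scheme is to advance a provisional velocity without the pressure and then project it onto the divergence-free space through a pressure Poisson equation. The single-phase, periodic-domain sketch below shows one such step with a spectral Poisson solve; all parameters are demo values, and the two-fluid and Level Set machinery of the paper is omitted:

```python
import numpy as np

# One projection (fractional-step) time step for incompressible flow on a
# periodic 2-D grid: explicit predictor without pressure, then a spectral
# pressure Poisson solve and velocity correction. Demo parameters only.

N, dt, nu = 64, 1e-3, 1e-2
k = np.fft.fftfreq(N, 1.0 / N) * 2 * np.pi
KX, KY = np.meshgrid(k, k, indexing="ij")
K2 = KX**2 + KY**2
K2_safe = K2.copy(); K2_safe[0, 0] = 1.0        # for the Poisson division only

rng = np.random.default_rng(2)
u, v = rng.standard_normal((2, N, N)) * 0.01    # toy initial velocity

def ddx(f, kk):                                 # spectral derivative
    return np.real(np.fft.ifft2(1j * kk * np.fft.fft2(f)))

lap = lambda f: np.real(np.fft.ifft2(-K2 * np.fft.fft2(f)))

# predictor: explicit advection + diffusion, no pressure
u_star = u + dt * (-u * ddx(u, KX) - v * ddx(u, KY) + nu * lap(u))
v_star = v + dt * (-u * ddx(v, KX) - v * ddx(v, KY) + nu * lap(v))

# projection: solve  lap(p) = div(u*)/dt,  then  u = u* - dt*grad(p)
div_hat = 1j * KX * np.fft.fft2(u_star) + 1j * KY * np.fft.fft2(v_star)
p_hat = -div_hat / (dt * K2_safe)
u_new = u_star - dt * np.real(np.fft.ifft2(1j * KX * p_hat))
v_new = v_star - dt * np.real(np.fft.ifft2(1j * KY * p_hat))

div = ddx(u_new, KX) + ddx(v_new, KY)
print(f"max divergence after projection: {np.abs(div).max():.2e}")
```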

  5. Single step fabrication method of fullerene/TiO2 composite photocatalyst for hydrogen production

    Kum, Jong Min; Cho, Sung Oh

    2011-01-01

    Hydrogen is one of the most promising alternative energy sources. Fossil fuel, the most widely used energy source, has two defects: one is CO2 emission, causing global warming; the other is exhaustion. Hydrogen, by contrast, emits no CO2 and can be produced by splitting water, a renewable and easily obtainable source. However, about 95% of hydrogen is currently derived from fossil fuel, which limits its merits: hydrogen from fossil fuel is not a renewable energy. To maximize the merits of hydrogen, renewability and zero CO2 emission, unconventional hydrogen production methods that do not use fossil fuel are required. Photocatalytic water-splitting, which uses the hole/electron pairs of a semiconductor, is one such unconventional method and is a promising way to produce clean and renewable hydrogen from solar energy. TiO2 is the semiconductor material most widely used as a photocatalyst; it shows high photocatalytic reactivity and stability in water. However, its wide band gap absorbs only UV light, which makes up only about 5% of sunlight. To enhance the visible-light response, composites with fullerene-based materials have been investigated. Methano-fullerene carboxylic acid (FCA) is one of these fullerene-based materials. We fabricated an FCA/TiO2 composite using a UV-assisted single-step method. The method not only simplified the fabrication procedure but also enhanced the hydrogen production rate.

  6. Distribution of photon strength in nuclei by a method of two-step cascades

    Becvar, F.; Cejnar, P.; Kopecky, J.

    1990-01-01

    The applicability of sum-coincidence measurements of two-step cascade γ-ray spectra to the determination of photon strength functions at intermediate γ-ray energies (3 or 4 MeV) is discussed. An experiment based on thermal neutron capture in Nd was undertaken at the Brookhaven National Laboratory High Flux Beam Reactor to test this method. To understand the role of various uncertainties in similar experiments, a series of model calculations was performed. We present an analysis of our experimental data which demonstrates the high sensitivity of the method to the E1 and M1 photon strength functions. Our experimental data are in sharp contradiction to those expected from an E1 photon strength distributed according to the classical Lorentzian form with an energy-invariant damping width. An alternative distribution of Kadmenskij et al., which violates Brink's hypothesis, is strongly preferred. 13 refs., 5 figs
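
    For reference, the classical Lorentzian E1 photon strength function with an energy-invariant damping width, which the measured data contradict, can be sketched as below; the giant-dipole-resonance parameters are illustrative placeholders, not values for Nd.

```python
# Sketch of the classical Lorentzian E1 photon strength function discussed in
# the abstract; the GDR parameters below are illustrative, not those of Nd.
import numpy as np

# 1/(3 pi^2 (hbar c)^2) with hbar*c = 197.327 MeV fm and 1 fm^2 = 10 mb,
# i.e. ~8.674e-8 mb^-1 MeV^-2
CONST = 1.0 / (3.0 * np.pi**2 * 197.327**2 * 10.0)

def f_E1_lorentzian(e_gamma, e0=15.0, gamma0=5.0, sigma0=300.0):
    """Classical Lorentzian with an energy-invariant damping width.
    e_gamma : gamma-ray energy (MeV)
    e0, gamma0, sigma0 : GDR centroid (MeV), width (MeV), peak cross-section (mb)
    Returns the strength in MeV^-3."""
    num = sigma0 * e_gamma * gamma0**2
    den = (e_gamma**2 - e0**2)**2 + (e_gamma * gamma0)**2
    return CONST * num / den

for e in (3.0, 4.0, 15.0):
    print(f"f_E1({e} MeV) = {f_E1_lorentzian(e):.3e} MeV^-3")
```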

  7. Fast Measurement of Methanol Concentration in Ionic Liquids by Potential Step Method

    Michael L. Hainstock

    2015-01-01

    Full Text Available The development of direct methanol fuel cells requires attention to the electrolyte. A good electrolyte should not only be ionically conductive but also crossover resistant. Ionic liquids could be a promising electrolyte for fuel cells. Monitoring methanol is critical in several locations in a direct methanol fuel cell. Conductivity can be used to monitor the methanol content in ionic liquids. The conductivity of 1-butyl-3-methylimidazolium tetrafluoroborate has a linear relationship with the methanol concentration. However, the conductivity is significantly affected by the moisture or water content in the ionic liquid. In contrast, a potential step can be used for sensing methanol in ionic liquids. This method is not affected by the water content. The sampling current at a properly selected sampling time is proportional to the concentration of methanol in 1-butyl-3-methylimidazolium tetrafluoroborate. The linearity still holds even when 2.4 M water is present in the ionic liquid.
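
    A hedged sketch of the sampled-current idea: for a diffusion-limited potential step, the Cottrell relation makes the current at a fixed sampling time proportional to the bulk concentration, which is the basis of the linear calibration described above. The electron count, electrode area and diffusion coefficient below are assumed values, not those of the paper.

```python
# Sketch of the sampled-current idea behind the potential-step method: for a
# diffusion-limited step the Cottrell equation gives
#   i(t) = n F A C sqrt(D / (pi t)),
# so the current sampled at a fixed time is linear in concentration.
import numpy as np

F = 96485.0                              # Faraday constant, C/mol

def cottrell_current(conc_M, t_s, n=6, area_cm2=0.07, D_cm2_s=2e-7):
    """Sampled current (A) at time t_s for bulk concentration conc_M (mol/L).
    n, area and D are illustrative assumptions for methanol in an ionic liquid."""
    c = conc_M * 1e-3                    # mol/L -> mol/cm^3
    return n * F * area_cm2 * c * np.sqrt(D_cm2_s / (np.pi * t_s))

# Linear calibration: sample the current 0.5 s after the step for several
# methanol concentrations and fit i = k * C.
concs = np.array([0.1, 0.5, 1.0, 2.0, 4.0])       # mol/L
i_sampled = cottrell_current(concs, t_s=0.5)
k = np.polyfit(concs, i_sampled, 1)[0]
print(f"sensitivity k = {k:.3e} A per mol/L")
```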

  8. Bidisperse silica nanoparticles close-packed monolayer on silicon substrate by three step spin method

    Khanna, Sakshum; Marathey, Priyanka; Utsav, Chaliawala, Harsh; Mukhopadhyay, Indrajit

    2018-05-01

    We present studies on the structural properties of a monolayer of bidisperse silica (SiO2) nanoparticles (BDS) on a silicon (Si-100) substrate using the spin coating technique. The bidisperse silica nanoparticles were synthesised by a modified sol-gel process. Nanoparticles on a substrate generally assemble in non-close-packed or close-packed monolayer (CPM) form. The CPM form is obtained by depositing the colloidal suspension onto the silicon substrate using complex techniques. Here we report an effective method for forming a monolayer of bidisperse silica nanoparticles by a three-step spin coating technique. The samples were prepared by mixing monodisperse solutions of two particle sizes, 40 and 100 nm in diameter. The bidisperse silica nanoparticles self-assembled on the silicon substrate, forming a close-packed monolayer film. Scanning electron microscope images of the bidisperse films revealed the in-depth structure of the film. The maximum surface coverage obtained was around 70-80%.

  9. Method for Pre-Conditioning a Measured Surface Height Map for Model Validation

    Sidick, Erkin

    2012-01-01

    This software allows one to up-sample or down-sample a measured surface map for model validation, not only without introducing any re-sampling errors, but also eliminating the existing measurement noise and measurement errors. Because the re-sampling of a surface map is accomplished based on the analytical expressions of Zernike polynomials and a power spectral density model, such re-sampling does not introduce the aliasing and interpolation errors produced by conventional interpolation and FFT-based (fast-Fourier-transform-based) spatial-filtering methods. Also, this new method automatically eliminates the measurement noise and other measurement errors such as artificial discontinuities. The development cycle of an optical system, such as a space telescope, includes, but is not limited to, the following two steps: (1) deriving requirements or specs on the optical quality of individual optics before they are fabricated, through optical modeling and simulations, and (2) validating the optical model using the measured surface height maps after all optics are fabricated. There are a number of computational issues related to model validation, one of which is the "pre-conditioning" or pre-processing of the measured surface maps before using them in a model validation software tool. This software addresses the following issues: (1) up- or down-sampling a measured surface map to match the gridded data format of a model validation tool, and (2) eliminating the surface measurement noise or measurement errors such that the resulting surface height map is continuous or smoothly varying. So far, the preferred method for re-sampling a surface map has been two-dimensional interpolation. The main problem with this method is that the same pixel can take different values depending on the interpolation scheme chosen, such as the "nearest," "linear," "cubic," and "spline" fitting in Matlab. The conventional, FFT-based spatial filtering method used to
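
    The Zernike part of the re-sampling idea can be sketched as follows: fit analytic Zernike terms to the measured points and re-evaluate the fitted expansion on the new grid, which avoids interpolation artifacts and smooths measurement noise. The basis is truncated to a few low-order terms, the PSD model is omitted, and all data are synthetic.

```python
# Minimal sketch of re-sampling a measured surface map by fitting analytic
# Zernike terms and re-evaluating them on a new grid (the PSD part of the
# method is omitted). The Zernike set and the noise level are illustrative.
import numpy as np

def zernike_basis(x, y):
    """A few low-order Zernike terms on the unit disk (piston, tilts,
    defocus, astigmatisms), stacked as columns of a design matrix."""
    r2 = x**2 + y**2
    return np.column_stack([np.ones_like(x), x, y,
                            2*r2 - 1, x**2 - y**2, 2*x*y])

def fit_and_resample(xm, ym, zm, xn, yn):
    """Least-squares Zernike fit on measured points, evaluation on a new grid."""
    coeffs, *_ = np.linalg.lstsq(zernike_basis(xm, ym), zm, rcond=None)
    return zernike_basis(xn, yn) @ coeffs

# Synthetic "measured" map: defocus + astigmatism + measurement noise.
rng = np.random.default_rng(0)
g = np.linspace(-1, 1, 33)
Xm, Ym = [a.ravel() for a in np.meshgrid(g, g)]
mask = Xm**2 + Ym**2 <= 1.0
Xm, Ym = Xm[mask], Ym[mask]
Zm = (0.8*(2*(Xm**2 + Ym**2) - 1) + 0.3*(Xm**2 - Ym**2)
      + 0.02*rng.standard_normal(Xm.size))

# Up-sample onto a finer grid: the analytic evaluation introduces no aliasing
# and the least-squares fit smooths out the measurement noise.
gf = np.linspace(-1, 1, 129)
Xn, Yn = [a.ravel() for a in np.meshgrid(gf, gf)]
Zn = fit_and_resample(Xm, Ym, Zm, Xn, Yn)
print("resampled map points:", Zn.size)
```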

  10. Validation of a method for radionuclide activity optimize in SPECT

    Perez Diaz, M.; Diaz Rizo, O.; Lopez Diaz, A.; Estevez Aparicio, E.; Roque Diaz, R.

    2007-01-01

    A discriminant method for optimizing the activity administered in NM studies is validated by comparison with ROC curves. The method is tested in 21 SPECT studies performed with a cardiac phantom. Three different cold lesions (L1, L2 and L3) were placed in the myocardium wall for each SPECT study. Three activities (84 MBq, 37 MBq or 18.5 MBq) of Tc-99m diluted in water were used as background. Linear discriminant analysis was used to select the parameters that characterize image quality (background-to-lesion (B/L) and signal-to-noise (S/N) ratios). Two clusters with different image quality (p=0.021) were obtained from the selected variables. The first one involved the studies performed with 37 MBq and 84 MBq, and the second one included the studies with 18.5 MBq. The ratios B/L1, B/L2 and B/L3 are the parameters capable of constructing the discriminant function, with 100% of cases correctly classified into the clusters. The value of 37 MBq is the lowest tested activity for which good results for the B/Li variables were obtained, without significant differences from the results with 84 MBq (p>0.05). The result is consistent with the applied ROC analysis. A correlation between both methods of r=0.890 was obtained. (Author) 26 refs
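
    A minimal sketch of the discriminant step, assuming synthetic stand-in values for the B/L and S/N ratios rather than the paper's measurements; scikit-learn's linear discriminant analysis is used here for convenience.

```python
# Sketch of the linear discriminant idea: classify SPECT studies into image-
# quality clusters from background-to-lesion (B/L) and signal-to-noise (S/N)
# ratios. The numbers below are synthetic stand-ins, not the paper's data.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(1)
# Cluster 0: studies at 37-84 MBq (better ratios); cluster 1: 18.5 MBq.
X_good = rng.normal([4.0, 12.0], [0.4, 1.5], size=(14, 2))   # (B/L, S/N)
X_poor = rng.normal([2.5, 7.0], [0.4, 1.5], size=(7, 2))
X = np.vstack([X_good, X_poor])
y = np.array([0]*14 + [1]*7)

lda = LinearDiscriminantAnalysis()
lda.fit(X, y)
print("fraction correctly classified:", lda.score(X, y))
print("discriminant weights (B/L, S/N):", lda.coef_[0])
```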

  11. Testing and Validation of the Dynamic Inertia Measurement Method

    Chin, Alexander W.; Herrera, Claudia Y.; Spivey, Natalie D.; Fladung, William A.; Cloutier, David

    2015-01-01

    The Dynamic Inertia Measurement (DIM) method uses a ground vibration test setup to determine the mass properties of an object using information from frequency response functions. Most conventional mass properties testing involves using spin tables or pendulum-based swing tests, which for large aerospace vehicles becomes increasingly difficult and time-consuming, and therefore expensive, to perform. The DIM method has been validated on small test articles but has not been successfully proven on large aerospace vehicles. In response, the National Aeronautics and Space Administration Armstrong Flight Research Center (Edwards, California) conducted mass properties testing on an "iron bird" test article that is comparable in mass and scale to a fighter-type aircraft. The simple two-I-beam design of the "iron bird" was selected to ensure accurate analytical mass properties. Traditional swing testing was also performed to compare the level of effort, amount of resources, and quality of data with the DIM method. The DIM test showed favorable results for the center of gravity and moments of inertia; however, the products of inertia showed disagreement with analytical predictions.

  12. Sharp Penalty Term and Time Step Bounds for the Interior Penalty Discontinuous Galerkin Method for Linear Hyperbolic Problems

    Geevers, Sjoerd; van der Vegt, J.J.W.

    2017-01-01

    We present sharp and sufficient bounds for the interior penalty term and time step size to ensure stability of the symmetric interior penalty discontinuous Galerkin (SIPDG) method combined with an explicit time-stepping scheme. These conditions hold for generic meshes, including unstructured

  13. Method for Determining Volumetric Efficiency and Its Experimental Validation

    Ambrozik Andrzej

    2017-12-01

    Full Text Available Modern means of transport are basically powered by piston internal combustion engines. Increasingly rigorous demands are placed on IC engines in order to minimise their detrimental impact on the natural environment. That stimulates the development of research on piston internal combustion engines, involving experimental and theoretical investigations carried out using computer technologies. While being filled, the cylinder is considered to be an open thermodynamic system in which non-stationary processes occur. To calculate the thermodynamic parameters of the engine operating cycle, based on the comparison of cycles, it is necessary to know the mean constant value of cylinder pressure throughout this process. Because of the character of the in-cylinder pressure pattern and the difficulty of determining the pressure experimentally, a novel method for the determination of this quantity is presented in this paper. In the new approach, an iteration method is used. The method developed for determining the volumetric efficiency employs the following equations: the law of conservation of the amount of substance, the first law of thermodynamics for an open system, dependences of the cylinder volume on the crankshaft rotation angle, and the state equation. The results of calculations performed with this method were validated by experimental investigations carried out for a selected engine at the engine test bench. Satisfactory agreement between the computational and experimental values of the volumetric efficiency was obtained. The method for determining the volumetric efficiency presented in this paper can be used to investigate the processes taking place in the cylinder of an IC engine.
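
    A toy sketch of the iterative idea, assuming a crude wall-heating closure in place of the paper's full first-law balance: the trapped mass from the state equation and the charge temperature are updated in turn until the volumetric efficiency converges. All numerical values are illustrative.

```python
# Toy sketch of the fixed-point iteration: volumetric efficiency eta_v couples
# the trapped mass (state equation) to the charge temperature (energy balance),
# so it can be solved iteratively. All numbers are illustrative assumptions.
R_AIR = 287.0          # specific gas constant of air, J/(kg K)

def volumetric_efficiency(p_cyl=0.95e5, p_amb=1.013e5, t_amb=293.0,
                          v_disp=4.5e-4, dT_wall=25.0, tol=1e-8):
    m_ideal = p_amb * v_disp / (R_AIR * t_amb)   # charge mass at ambient state
    t_charge = t_amb                              # initial guess
    eta_prev = 0.0
    for _ in range(100):
        m_trapped = p_cyl * v_disp / (R_AIR * t_charge)   # state equation
        eta = m_trapped / m_ideal
        # crude energy-balance closure: wall heating raises the charge
        # temperature with the amount of fresh charge (NOT the paper's model)
        t_charge = t_amb + dT_wall * eta
        if abs(eta - eta_prev) < tol:
            break
        eta_prev = eta
    return eta

print(f"volumetric efficiency ~ {volumetric_efficiency():.3f}")
```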

  14. Pyrosequencing™ : A one-step method for high resolution HLA typing

    Marincola Francesco M

    2003-11-01

    Full Text Available Abstract While the use of high-resolution molecular typing in routine matching of human leukocyte antigens (HLA) is expected to improve unrelated donor selection and transplant outcome, the genetic complexity of HLA still makes the current methodology limited and laborious. Pyrosequencing™ is a gel-free, sequencing-by-synthesis method. In a Pyrosequencing reaction, nucleotide incorporation proceeds sequentially along each DNA template in a given nucleotide dispensation order (NDO) that is programmed into a pyrosequencer. Here we describe the design of an NDO that generates a pyrogram unique for any given allele or combination of alleles. We present examples of unique pyrograms generated from each of two heterozygous HLA templates, which would otherwise remain cis/trans ambiguous using the standard sequencing-based typing (SBT) method. In addition, we display representative data that demonstrate long reads and linear signal generation. These features are prerequisites for high-resolution typing and automated data analysis. In conclusion, Pyrosequencing is a one-step method for high-resolution DNA typing.
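
    The relationship between a dispensation order and the resulting pyrogram can be sketched as below: each dispensed base that matches the next template position(s) incorporates and contributes a light signal proportional to the run length. The sequences and NDO are invented for illustration, not real HLA templates.

```python
# Sketch of how a nucleotide dispensation order (NDO) produces a pyrogram:
# each dispensed base that matches the next template position(s) incorporates,
# emitting light proportional to the number of bases added; homopolymer runs
# therefore add linearly. Sequences and the NDO are invented.
def pyrogram(template, ndo_cycle, n_dispensations=20):
    signal, pos = [], 0
    seq = template.upper()
    for i in range(n_dispensations):
        base = ndo_cycle[i % len(ndo_cycle)]
        run = 0
        while pos < len(seq) and seq[pos] == base:
            run += 1
            pos += 1
        signal.append(run)
    return signal

# Two hypothetical alleles give distinct pyrograms under the same NDO.
allele_1 = "GATTACAGG"
allele_2 = "GATCACAGG"
ndo = "GATC"
print("allele 1:", pyrogram(allele_1, ndo))
print("allele 2:", pyrogram(allele_2, ndo))
# A heterozygous template gives roughly the average of the allele signals.
het = [(a + b) / 2 for a, b in zip(pyrogram(allele_1, ndo),
                                   pyrogram(allele_2, ndo))]
print("heterozygote:", het)
```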

  15. Seismic data two-step recovery approach combining sparsity-promoting and hyperbolic Radon transform methods

    Wang, Hanchuang; Chen, Shengchang; Ren, Haoran; Liang, Donghui; Zhou, Huamin; She, Deping

    2015-01-01

    In current research on seismic data recovery problems, the sparsity-promoting method usually produces an insufficient recovery result at the locations of null traces. The HRT (hyperbolic Radon transform) method can be applied to seismic data recovery problems with approximately hyperbolic events. Influenced by deviations of the hyperbolic characteristics between real and ideal travel-time curves, some spurious events are usually introduced, and the recovery of intermediate- and far-offset traces is worse than that of near-offset traces. Sparsity-promoting recovery depends primarily on the sparsity of the seismic data in the sparse transform domain (i.e. on the local waveform characteristics), whereas HRT recovery is severely affected by the global characteristics of the seismic events. Motivated by this observation, a two-step recovery approach combining the sparsity-promoting and time-invariant HRT methods is proposed, which is based on both the local and global characteristics of the seismic data. Two implementation strategies are presented in detail, and the selection criteria for the relevant strategies are also discussed. Numerical examples with synthetic and real data verify that the new approach can achieve a better recovery effect by simultaneously overcoming the shortcomings of sparsity-promoting recovery and HRT recovery. (paper)
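
    A minimal sketch of the time-invariant HRT building block, implemented as the adjoint (stacking) operator that sums energy along hyperbolae t = sqrt(tau^2 + x^2/v^2); the gather, grids and the single synthetic event are illustrative, and the sparsity-promoting step is not shown.

```python
# Sketch of the adjoint (stacking) step of a time-invariant hyperbolic Radon
# transform: energy along t = sqrt(tau^2 + x^2/v^2) is summed into (tau, v)
# space. Grid sizes and the synthetic gather are illustrative.
import numpy as np

def hyperbolic_radon_adjoint(data, t, x, taus, vels):
    nt, dt = len(t), t[1] - t[0]
    m = np.zeros((len(taus), len(vels)))
    for i, tau in enumerate(taus):
        for j, v in enumerate(vels):
            th = np.sqrt(tau**2 + (x / v)**2)       # hyperbolic travel time
            it = np.rint((th - t[0]) / dt).astype(int)
            ok = (it >= 0) & (it < nt)
            m[i, j] = data[it[ok], np.nonzero(ok)[0]].sum()
    return m

# Synthetic gather with one hyperbolic event (tau0 = 0.4 s, v0 = 2000 m/s).
t = np.arange(0, 1.0, 0.004)
x = np.arange(0, 1600.0, 100.0)
data = np.zeros((len(t), len(x)))
it0 = np.rint(np.sqrt(0.4**2 + (x / 2000.0)**2) / 0.004).astype(int)
data[it0, np.arange(len(x))] = 1.0

m = hyperbolic_radon_adjoint(data, t, x,
                             np.arange(0.2, 0.8, 0.02),
                             np.arange(1500.0, 2600.0, 100.0))
i, j = np.unravel_index(m.argmax(), m.shape)
print("peak at tau ~", 0.2 + 0.02 * i, "s, v ~", 1500.0 + 100.0 * j, "m/s")
```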

  16. Convergence and Stability of the Split-Step θ-Milstein Method for Stochastic Delay Hopfield Neural Networks

    Qian Guo

    2013-01-01

    Full Text Available A new splitting method designed for the numerical solution of stochastic delay Hopfield neural networks is introduced and analysed. Under Lipschitz and linear growth conditions, this split-step θ-Milstein method is proved to have strong convergence of order 1 in the mean-square sense, which is higher than that of the existing split-step θ-method. Furthermore, the mean-square stability of the proposed method is investigated. Numerical experiments and comparisons with existing methods illustrate the computational efficiency of our method.
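
    A sketch of one split-step θ-Milstein update, reduced to a scalar linear SDE so the implicit drift stage can be solved in closed form; the delay terms of the Hopfield setting are omitted, and the coefficients are chosen only to illustrate mean-square stable behaviour.

```python
# Sketch of the split-step theta-Milstein scheme for the scalar test equation
# dX = a*X dt + b*X dW (the paper's delay terms are omitted so the implicit
# drift stage has a closed-form solution). Coefficients are illustrative.
import numpy as np

def split_step_theta_milstein(x0, a, b, theta, dt, n_steps, rng):
    x = x0
    for _ in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt))
        # Stage 1 (implicit drift): x* = x + dt*(theta*a*x* + (1-theta)*a*x)
        x_star = x * (1.0 + dt * (1.0 - theta) * a) / (1.0 - dt * theta * a)
        # Stage 2 (Milstein diffusion): includes the 0.5*g*g'*(dW^2 - dt) term
        x = x_star + b * x_star * dw + 0.5 * b**2 * x_star * (dw**2 - dt)
    return x

rng = np.random.default_rng(42)
paths = [split_step_theta_milstein(1.0, a=-2.0, b=0.5, theta=0.5,
                                   dt=0.01, n_steps=200, rng=rng)
         for _ in range(2000)]
# 2a + b^2 < 0 here, so E[X^2] should decay (mean-square stability).
print("mean-square value at T=2:", np.mean(np.square(paths)))
```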

  17. Teaching Analytical Method Transfer through Developing and Validating Then Transferring Dissolution Testing Methods for Pharmaceuticals

    Kimaru, Irene; Koether, Marina; Chichester, Kimberly; Eaton, Lafayette

    2017-01-01

    Analytical method transfer (AMT) and dissolution testing are important topics required in industry that should be taught in analytical chemistry courses. Undergraduate students in senior level analytical chemistry laboratory courses at Kennesaw State University (KSU) and St. John Fisher College (SJFC) participated in development, validation, and…

  18. Validation of a pretreatment delivery quality assurance method for the CyberKnife Synchrony system

    Mastella, E., E-mail: edoardo.mastella@cnao.it [Medical Physics Unit, CNAO Foundation—National Centre for Oncological Hadron Therapy, Pavia I-27100, Italy and Medical Physics Unit, IEO—European Institute of Oncology, Milan I-20141 (Italy); Vigorito, S.; Rondi, E.; Cattani, F. [Medical Physics Unit, IEO—European Institute of Oncology, Milan I-20141 (Italy); Piperno, G.; Ferrari, A.; Strata, E.; Rozza, D. [Department of Radiation Oncology, IEO—European Institute of Oncology, Milan I-20141 (Italy); Jereczek-Fossa, B. A. [Department of Radiation Oncology, IEO—European Institute of Oncology, Milan I-20141, Italy and Department of Oncology and Hematology Oncology, University of Milan, Milan I-20122 (Italy)

    2016-08-15

    Purpose: To evaluate the geometric and dosimetric accuracies of the CyberKnife Synchrony respiratory tracking system (RTS) and to validate a method for pretreatment patient-specific delivery quality assurance (DQA). Methods: An EasyCube phantom was mounted on the ExacTrac gating phantom, which can move along the superior–inferior (SI) axis of a patient to simulate a moving target. The authors compared dynamic and static measurements. For each case, a Gafchromic EBT3 film was positioned between two slabs of the EasyCube, while a PinPoint ionization chamber was placed in the appropriate space. There were three steps to their evaluation: (1) the field size, the penumbra, and the symmetry of six secondary collimators were measured along the two main orthogonal axes. Dynamic measurements with deliberately simulated errors were also taken. (2) The delivered dose distributions (from step 1) were compared with the planned ones, using the gamma analysis method. The local gamma passing rates were evaluated using three acceptance criteria: 3% local dose difference (LDD)/3 mm, 2%LDD/2 mm, and 3%LDD/1 mm. (3) The DQA plans for six clinical patients were irradiated in different dynamic conditions, to give a total of 19 cases. The measured and planned dose distributions were evaluated with the same gamma-index criteria used in step 2 and the measured chamber doses were compared with the planned mean doses in the sensitive volume of the chamber. Results: (1) A very slight enlargement of the field size and of the penumbra was observed in the SI direction (on average <1 mm), in line with the overall average CyberKnife system error for tracking treatments. (2) Comparison between the planned and the correctly delivered dose distributions confirmed the dosimetric accuracy of the RTS for simple plans. The multicriteria gamma analysis was able to detect the simulated errors, proving the robustness of their method of analysis. (3) All of the DQA clinical plans passed the tests, both in
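
    A minimal 1D sketch of the gamma-index analysis used for the comparisons above: a measured point passes if some reference point satisfies the combined dose-difference/distance-to-agreement criterion. The profiles here are synthetic, and the 3%/1 mm criterion mirrors one of the paper's settings.

```python
# Minimal 1D sketch of the gamma-index analysis used for DQA comparisons:
# each measured point passes if some reference point satisfies the combined
# dose-difference / distance-to-agreement criterion. Profiles are synthetic.
import numpy as np

def gamma_pass_rate(x, d_ref, d_meas, dd=0.03, dta=1.0, local=True):
    """dd: dose tolerance (fraction); dta: distance tolerance (mm);
    local: normalise the dose difference to the local reference dose."""
    norm = np.maximum(d_ref, 1e-9) if local else d_ref.max()
    passed = 0
    for xi, dm in zip(x, d_meas):
        g2 = ((xi - x) / dta)**2 + ((dm - d_ref) / (dd * norm))**2
        passed += g2.min() <= 1.0
    return passed / len(x)

x = np.arange(0.0, 50.0, 0.5)                      # positions in mm
d_ref = np.exp(-((x - 25.0) / 8.0)**2)             # reference dose profile
d_meas = np.exp(-((x - 25.3) / 8.0)**2) * 1.01     # shifted, rescaled copy

print(f"3%/1 mm local gamma pass rate: {gamma_pass_rate(x, d_ref, d_meas):.1%}")
```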

  19. Validation of an electrophoretic method to detect albuminuria in cats.

    Ferlizza, Enea; Dondi, Francesco; Andreani, Giulia; Bucci, Diego; Archer, Joy; Isani, Gloria

    2017-08-01

    Objectives The aims of this study were to validate a semi-automated high-resolution electrophoretic technique to quantify urinary albumin in healthy and diseased cats, and to evaluate its diagnostic performance in cases of proteinuria and renal disease. Methods Urine samples were collected from 88 cats (healthy; chronic kidney disease [CKD]; lower urinary tract disease [LUTD]; non-urinary tract diseases [OTHER]). Urine samples were routinely analysed and high-resolution electrophoresis (HRE) was performed. Within-assay and between-assay variability, linearity, accuracy, recovery and the lowest detectable and quantifiable bands were calculated. Receiver operating characteristic (ROC) analysis was also performed. Results All coefficients of variation were within acceptable limits. HRE allowed the visualisation of a faint band of albumin and a diffuse band between the alpha and beta zones in healthy cats, while profiles from diseased cats were variable. Albumin (mg/dl) and the urine albumin:creatinine ratio (UAC) were significantly different between healthy and diseased cats. Conclusions and relevance HRE is an accurate and precise method that could be used to measure albuminuria in cats. UAC was useful to correctly classify proteinuria and to discriminate between healthy and diseased cats. HRE might also provide additional information on urine proteins, with a profile of all proteins (albumin and globulins), to aid clinicians in the diagnosis of diseases characterised by proteinuria.

  20. New validated method for piracetam HPLC determination in human plasma.

    Curticapean, Augustin; Imre, Silvia

    2007-01-10

    A new method for the HPLC determination of piracetam in human plasma was developed and validated by a new approach. The simple determination by UV detection was performed on the supernatant obtained from plasma after protein precipitation with perchloric acid. The chromatographic separation of piracetam under gradient elution was achieved at room temperature with an RP-18 LiChroSpher 100 column and an aqueous mobile phase containing acetonitrile and methanol. The quantitative determination of piracetam was performed at 200 nm with a lower limit of quantification LLQ = 2 μg/ml. For this limit, the calculated values of the coefficient of variation and of the difference between the mean and the nominal concentration are CV% = 9.7 and bias% = 0.9 for the intra-day assay, and CV% = 19.1 and bias% = -7.45 for the between-days assay. For precision, the range was CV% = 1.8-11.6 in the intra-day and between-days assays, and for accuracy, the range was bias% = 2.3-14.9 in the intra-day and between-days assays. In addition, the stability of piracetam under different conditions was verified. Piracetam proved to be stable in plasma for 4 weeks at -20 °C and for 36 h at 20 °C in the supernatant after protein precipitation. The new proposed method was used for a bioequivalence study of two medicines containing 800 mg piracetam.
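
    The precision and accuracy figures quoted above follow from simple replicate statistics; a sketch, assuming invented replicate values at the LLQ level:

```python
# Sketch of the precision/accuracy statistics quoted in such validations:
# CV% is the relative standard deviation of replicate results and bias% the
# relative deviation of their mean from the nominal concentration.
import numpy as np

def cv_and_bias(replicates, nominal):
    replicates = np.asarray(replicates, dtype=float)
    cv = 100.0 * replicates.std(ddof=1) / replicates.mean()
    bias = 100.0 * (replicates.mean() - nominal) / nominal
    return cv, bias

# e.g. invented intra-day replicates at the LLQ level of 2 ug/ml
cv, bias = cv_and_bias([2.1, 1.9, 2.2, 1.8, 2.1], nominal=2.0)
print(f"CV% = {cv:.1f}, bias% = {bias:.1f}")
```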

  1. Comparison between time-step-integration and probabilistic methods in seismic analysis of a linear structure

    Schneeberger, B.; Breuleux, R.

    1977-01-01

    Assuming that earthquake ground motion is a stationary time function, the seismic analysis of a linear structure can be done by probabilistic methods using the 'power spectral density function' (PSD), instead of applying the more traditional time-step integration using earthquake time histories (TH). A given structure was analysed by both the PSD and TH methods, computing and comparing 'floor response spectra'. The analysis using TH was performed for two different TH and different frequency intervals for the 'floor response spectra'. The analysis using PSD first produced PSD functions of the responses of the floors, and these were then converted into 'floor response spectra'. Plots of the resulting 'floor response spectra' show: (1) The agreement of the TH and PSD results is quite close. (2) The curves produced by PSD are much smoother than those produced by TH and mostly form an envelope of the latter. (3) The curves produced by TH are quite jagged, with the location and magnitude of the peaks depending on the choice of frequencies at which the 'floor response spectra' were evaluated and on the choice of TH. (Auth.)

  2. Production and characterization of carbon nano colloid via one-step electrochemical method

    Kim, Doohyun; Hwang, Yujin; Cheong, Seong Ir; Lee, Jae Keun [Pusan National University, Department of Mechanical Engineering (Korea, Republic of); Hong, Daeseung; Moon, Seongyong [N-BARO TECH CO., LTD, Institute of SamchangTsinghua Nano Application (Korea, Republic of); Lee, Jung Eun [Pusan National University, Industrial Liaison Innovation Cluster (Korea, Republic of); Kim, Soo H., E-mail: sookim@pusan.ac.k [Pusan National University, Department of Nanosystem and Nanoprocess Engineering (Korea, Republic of)

    2008-10-15

    We present a one-step electrochemical method to produce water-based stable carbon nano colloid (CNC) without adding any surfactants at room temperature. The physical, chemical, and thermal properties of the CNC prepared were characterized by using various techniques, such as a particle size analyzer, zeta potential meter, TEM, XRD, FT-IR, turbidity meter, viscometer, and the transient hot-wire method. The average primary size of the suspended spherical nanoparticles in the CNC was found to be ∼15 nm in diameter. The thermal conductivity of the CNC compared with that of water was observed to increase by up to ∼14% at a CNC concentration of ∼4.2 wt%. The CNC prepared in this study was considerably stable over a period of 600 h. With the assistance of FT-IR spectroscopy analysis, we confirmed the presence of carboxyl groups (i.e., O-H stretching (3,458 cm⁻¹) and C=O stretching (1,712 cm⁻¹)) formed in the outer atomic layer of the carbon nanoparticles, which (i) made the carbon particles hydrophilic and (ii) prevented aggregation among primary nanoparticles by increasing the magnitude of the zeta potential over the long period.

  3. Production and characterization of carbon nano colloid via one-step electrochemical method

    Kim, Doohyun; Hwang, Yujin; Cheong, Seong Ir; Lee, Jae Keun; Hong, Daeseung; Moon, Seongyong; Lee, Jung Eun; Kim, Soo H.

    2008-01-01

    We present a one-step electrochemical method to produce water-based stable carbon nano colloid (CNC) without adding any surfactants at room temperature. The physical, chemical, and thermal properties of the CNC prepared were characterized by using various techniques, such as a particle size analyzer, zeta potential meter, TEM, XRD, FT-IR, turbidity meter, viscometer, and the transient hot-wire method. The average primary size of the suspended spherical nanoparticles in the CNC was found to be ∼15 nm in diameter. The thermal conductivity of the CNC compared with that of water was observed to increase by up to ∼14% at a CNC concentration of ∼4.2 wt%. The CNC prepared in this study was considerably stable over a period of 600 h. With the assistance of FT-IR spectroscopy analysis, we confirmed the presence of carboxyl groups (i.e., O-H stretching (3,458 cm⁻¹) and C=O stretching (1,712 cm⁻¹)) formed in the outer atomic layer of the carbon nanoparticles, which (i) made the carbon particles hydrophilic and (ii) prevented aggregation among primary nanoparticles by increasing the magnitude of the zeta potential over the long period.

  4. Chiroplasmonic magnetic gold nanocomposites produced by one-step aqueous method using κ-carrageenan.

    Lesnichaya, Marina V; Sukhov, Boris G; Aleksandrova, Galina P; Gasilova, Ekaterina R; Vakul'skaya, Tamara I; Khutsishvili, Spartak S; Sapozhnikov, Anatoliy N; Klimenkov, Igor V; Trofimov, Boris A

    2017-11-01

    Novel water-soluble chiroplasmonic nanobiocomposites with directly varied gold content were synthesized by a one-step redox method in water using the biocompatible polysaccharide κ-carrageenan (an industrial product from algae) as both the reducing agent and the stabilizing matrix. The influence of the reactant ratio, temperature, and pH on the reaction was studied and the optimal reaction parameters were found. The structure and properties of the composite nanomaterials were examined in the solid state and in aqueous solution by using complementary physical-chemical methods: X-ray diffraction analysis, transmission electron microscopy, electron paramagnetic resonance spectroscopy, atomic absorption and optical spectroscopy, polarimetry (including optical rotatory dispersion, with registration of the Cotton effect induced by the chiral polysaccharide matrix on the plasmonic chromophore of the gold nanoparticles), and dynamic and static light scattering. The new promising multi-purpose nanocomposites demonstrate a combination of chiroplasmonic and magnetic properties, imparted both by the nanoparticles and by the radical-enriched chiral polysaccharide matrix. Copyright © 2017 Elsevier Ltd. All rights reserved.

  5. Validation of quantitative method for azoxystrobin residues in green beans and peas.

    Abdelraheem, Ehab M H; Hassan, Sayed M; Arief, Mohamed M H; Mohammad, Somaia G

    2015-09-01

    This study presents a method validation for the extraction and quantitative analysis of azoxystrobin residues in green beans and peas using HPLC-UV, with the results confirmed by GC-MS. The method involved initial extraction with acetonitrile after the addition of salts (magnesium sulfate and sodium chloride), followed by a cleanup step with activated neutral carbon. The validation parameters linearity, matrix effect, LOQ, specificity, trueness and repeatability precision were attained. The spiking levels for the trueness and precision experiments were 0.1, 0.5 and 3 mg/kg. For HPLC-UV analysis, mean recoveries ranged from 83.69% to 91.58% and from 81.99% to 107.85% for green beans and peas, respectively. For GC-MS analysis, mean recoveries ranged from 76.29% to 94.56% and from 80.77% to 100.91% for green beans and peas, respectively. According to these results, the method has proven to be efficient for the extraction and determination of azoxystrobin residues in green beans and peas. Copyright © 2015 Elsevier Ltd. All rights reserved.

  6. A Validated Method for the Quality Control of Andrographis paniculata Preparations.

    Karioti, Anastasia; Timoteo, Patricia; Bergonzi, Maria Camilla; Bilia, Anna Rita

    2017-10-01

    Andrographis paniculata is a herbal drug of Asian traditional medicine largely employed for the treatment of several diseases. Recently, it has been introduced in Europe for the prophylactic and symptomatic treatment of the common cold and as an ingredient of dietary supplements. The active principles are diterpenes, with andrographolide as the main representative. In the present study, an analytical protocol was developed for the determination of the main constituents in the herb and preparations of A. paniculata. Three different extraction protocols (methanol extraction using a modified Soxhlet procedure, maceration under ultrasonication, and decoction) were tested. Ultrasonication achieved the highest content of analytes. HPLC conditions were optimized in terms of solvent mixtures, time course, and temperature. A reversed-phase C18 column eluted with a gradient system consisting of acetonitrile and acidified water, including an isocratic step, at 30 °C was used. The HPLC method was validated for linearity, limits of quantitation and detection, repeatability, precision, and accuracy. The overall method was validated for precision and accuracy over at least three different concentration levels. The relative standard deviation was less than 1.13%, whereas recovery was between 95.50% and 97.19%. The method also proved to be suitable for the determination of a large number of commercial samples and was proposed to the European Pharmacopoeia for the quality control of Andrographidis herba.

  7. Validated method for the analysis of goji berry, a rich source of zeaxanthin dipalmitate.

    Karioti, Anastasia; Bergonzi, Maria Camilla; Vincieri, Franco F; Bilia, Anna Rita

    2014-12-31

    In the present study, an HPLC-DAD method was developed for the determination of the main carotenoid, zeaxanthin dipalmitate, in the fruits of Lycium barbarum. The aim was to develop and optimize an extraction protocol allowing fast, exhaustive, and repeatable extraction, suitable for the labile carotenoid content. Use of liquid N2 allowed grinding of the fruit. A step of ultrasonication with water efficiently removed the polysaccharides and enabled the exhaustive extraction of the carotenoids with hexane/acetone 50:50. The assay was fast and simple and permitted the quality control of a large number of commercial samples, including fruits, juices, and a jam. The HPLC method was validated according to ICH guidelines and satisfied the requirements. Finally, the overall method was validated for precision (RSD ranging between 3.81% and 4.13%) and accuracy at three concentration levels. The recovery was between 94% and 107%, with RSD values <2%, within the acceptable limits, especially if the difficulty of the matrix is taken into consideration.

  8. Validation of Multilevel Constructs: Validation Methods and Empirical Findings for the EDI

    Forer, Barry; Zumbo, Bruno D.

    2011-01-01

    The purposes of this paper are to highlight the foundations of multilevel construct validation, describe two methodological approaches and associated analytic techniques, and then apply these approaches and techniques to the multilevel construct validation of a widely-used school readiness measure called the Early Development Instrument (EDI;…

  9. Interlaboratory validation of an improved U.S. Food and Drug Administration method for detection of Cyclospora cayetanensis in produce using TaqMan real-time PCR

    A collaborative validation study was performed to evaluate the performance of a new U.S. Food and Drug Administration method developed for detection of the protozoan parasite, Cyclospora cayetanensis, on cilantro and raspberries. The method includes a sample preparation step in which oocysts are re...

  10. When Educational Material Is Delivered: A Mixed Methods Content Validation Study of the Information Assessment Method.

    Badran, Hani; Pluye, Pierre; Grad, Roland

    2017-03-14

    The Information Assessment Method (IAM) allows clinicians to report the cognitive impact, clinical relevance, intention to use, and expected patient health benefits associated with clinical information received by email. More than 15,000 Canadian physicians and pharmacists use the IAM in continuing education programs. In addition, information providers can use IAM ratings and feedback comments from clinicians to improve their products. Our general objective was to validate the IAM questionnaire for the delivery of educational material (ecological and logical content validity). Our specific objectives were to measure the relevance and evaluate the representativeness of IAM items for assessing information received by email. A 3-part mixed methods study was conducted (convergent design). In part 1 (quantitative longitudinal study), the relevance of IAM items was measured. Participants were 5596 physician members of the Canadian Medical Association who used the IAM. A total of 234,196 ratings were collected in 2012. The relevance of IAM items with respect to their main construct was calculated using descriptive statistics (relevance ratio R). In part 2 (qualitative descriptive study), the representativeness of IAM items was evaluated. A total of 15 family physicians completed semistructured face-to-face interviews. For each construct, we evaluated the representativeness of IAM items using a deductive-inductive thematic qualitative data analysis. In part 3 (mixing quantitative and qualitative parts), results from quantitative and qualitative analyses were reviewed, juxtaposed in a table, discussed with experts, and integrated. Thus, our final results are derived from the views of users (ecological content validation) and experts (logical content validation). Of the 23 IAM items, 21 were validated for content, while 2 were removed. In part 1 (quantitative results), 21 items were deemed relevant, while 2 items were deemed not relevant (R=4.86% [N=234,196] and R=3.04% [n

  11. Process analysis and modeling of a single-step lutein extraction method for wet microalgae.

    Gong, Mengyue; Wang, Yuruihan; Bassi, Amarjeet

    2017-11-01

    Lutein is a commercial carotenoid with potential health benefits. Microalgae are an alternative source for lutein production in comparison to conventional approaches using marigold flowers. In this study, a process analysis of a single-step simultaneous extraction, saponification, and primary purification process for free lutein production from wet microalgal biomass was carried out. The feasibility of binary solvent mixtures for wet biomass extraction was successfully demonstrated, and the extraction kinetics of lutein from the chloroplasts of microalgae were evaluated for the first time. The effects of the type of organic solvent, solvent polarity, cell disruption method, and alkali and solvent usage on lutein yields were examined. A mathematical model based on Fick's second law of diffusion was applied to fit the experimental data, and the mass transfer coefficients were used to estimate the extraction rates. The extraction rate was found to be more strongly related to the alkali-to-solvent ratio than to the alkali-to-biomass ratio. The best conditions for extraction efficiency were found to be pre-treatment with ultrasonication at a 0.5 s working cycle per second, reaction for 0.5 h at a solvent-to-biomass ratio of 0.27 L/g, and 1:3 ether/ethanol (v/v) with 1.25 g KOH/L. The entire process can be completed within 1 h and yields over 8 mg/g lutein, which is more economical for scale-up.
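
    A sketch of fitting extraction kinetics with a first-term slab solution of Fick's second law; the data points, particle half-thickness and initial guesses are invented for illustration and are not the paper's measurements.

```python
# Sketch of fitting extraction kinetics with the first-term solution of
# Fick's second law for a slab: M(t)/M_inf = 1 - (8/pi^2) exp(-k t), with
# k = pi^2 D / (4 L^2). Data, half-thickness and guesses are invented.
import numpy as np
from scipy.optimize import curve_fit

L = 5e-6                                 # particle half-thickness (m), assumed

def fick_yield(t, m_inf, k):
    return m_inf * (1.0 - (8.0 / np.pi**2) * np.exp(-k * t))

t_s = np.array([5, 10, 20, 30, 45, 60], dtype=float) * 60.0   # time, s
y_mg_g = np.array([3.1, 5.0, 6.9, 7.6, 8.0, 8.1])             # lutein, mg/g

(m_inf, k), _ = curve_fit(fick_yield, t_s, y_mg_g, p0=(8.0, 2e-3))
D = 4.0 * L**2 * k / np.pi**2            # back out an effective diffusivity
print(f"fitted plateau = {m_inf:.2f} mg/g, effective D = {D:.2e} m^2/s")
```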

  12. One-step synthesis of silver nanoparticles at the air-water interface using different methods

    Liu Hongguo; Xiao Fei; Wang Changwei; Lee, Yong-Ill; Xue Qingbin; Chen Xiao; Qian Dongjin; Hao Jingcheng; Jiang Jianzhuang

    2008-01-01

    Silver nanoparticles were synthesized in a one-step process at the air-AgNO3 aqueous solution interface under Langmuir monolayers of 5,10,15,20-tetra-4-oxy(2-stearic acid) phenyl porphyrin (TSPP) at room temperature by using different methods including UV-light irradiation, ambient light irradiation, and formaldehyde gas reduction. It was found that parallel aligned one-dimensional (1D) chains composed of discrete silver nanoparticles with sizes of 3-5 nm were formed under UV-light irradiation for a short time, while large areas of uniform spherical silver nanoparticles were formed under natural daylight illumination for several days or by formaldehyde gas treatment for several hours. The average size of the spherical nanoparticles ranges from 6.88 ± 0.46 to 11.10 ± 1.47 nm, depending on the experimental conditions. The 1D chains formed under UV-light irradiation result from the templating effect of parallel aligned linear supramolecular arrays formed by TSPP at the air-water interface, and from rapid nucleation and growth of the nanoparticles. The formation of the uniform silver nanoparticles under daylight illumination or by formaldehyde gas treatment, however, should be ascribed to a kinetically controlled growth process of the nanoparticles

  13. A method for the statistical interpretation of friction ridge skin impression evidence: Method development and validation.

    Swofford, H J; Koertner, A J; Zemp, F; Ausdemore, M; Liu, A; Salyards, M J

    2018-04-03

    The forensic fingerprint community has faced increasing amounts of criticism by scientific and legal commentators, challenging the validity and reliability of fingerprint evidence due to the lack of an empirically demonstrable basis to evaluate and report the strength of the evidence in a given case. This paper presents a method, developed as a stand-alone software application, FRStat, which provides a statistical assessment of the strength of fingerprint evidence. The performance was evaluated using a variety of mated and non-mated datasets. The results show strong performance characteristics, often with values supporting specificity rates greater than 99%. This method provides fingerprint experts the capability to demonstrate the validity and reliability of fingerprint evidence in a given case and report the findings in a more transparent and standardized fashion with clearly defined criteria for conclusions and known error rate information thereby responding to concerns raised by the scientific and legal communities. Published by Elsevier B.V.

  14. A new four-step hierarchy method for combined assessment of groundwater quality and pollution.

    Zhu, Henghua; Ren, Xiaohua; Liu, Zhizheng

    2017-12-28

    A new four-step hierarchy method was constructed and applied to evaluate the groundwater quality and pollution of the Dagujia River Basin. The assessment index system is divided into four types: field test indices, common inorganic chemical indices, inorganic toxicology indices, and trace organic indices. Background values of the common inorganic chemical indices and inorganic toxicology indices were estimated with the cumulative-probability curve method, and the results showed that the background values of Mg²⁺ (51.1 mg L⁻¹), total hardness (TH) (509.4 mg L⁻¹), and NO₃⁻ (182.4 mg L⁻¹) are all higher than the corresponding grade III values of the Quality Standard for Groundwater, indicating that they are poor indicators; they were therefore not included in the groundwater quality assessment. The quality assessment results showed that the field test indices were mainly classified as grade II, accounting for 60.87% of the wells sampled. The common inorganic chemical and inorganic toxicology indices were both mostly in the range of grade III, whereas the trace organic indices were predominantly classified as grade I. The variabilities and excess ratios of the indices were also calculated and evaluated. Spatial distributions showed that groundwater with poor quality indices was mainly located in the northeast of the basin, which corresponds well with seawater intrusion. Additionally, the pollution assessment revealed that groundwater in well 44 was classified as "moderately polluted," wells 5 and 8 were "lightly polluted," and the other wells were classified as "unpolluted."
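
    The cumulative-probability curve idea can be sketched as follows: ranked concentrations are regressed against normal quantiles over the presumed background population, and the background value is read off by extrapolating that line to a high percentile. The data, the 70% break point and the 97.5th percentile are illustrative choices, not the paper's.

```python
# Sketch of estimating a hydrochemical background value from a cumulative-
# probability curve: concentrations are ranked, plotted against normal
# quantiles, and the low-concentration linear segment (the assumed background
# population) is extrapolated to a high percentile. Data are synthetic.
import numpy as np
from scipy import stats

conc = np.array([18, 22, 25, 27, 30, 33, 35, 38, 41, 44,
                 47, 52, 58, 95, 140, 210, 330, 480])       # mg/L, synthetic

log_c = np.log10(np.sort(conc))
prob = (np.arange(conc.size) + 0.5) / conc.size             # plotting positions
z = stats.norm.ppf(prob)                                    # normal quantiles

# assume the lower ~70% of samples form the background population
n_bg = int(0.7 * conc.size)
slope, intercept, *_ = stats.linregress(z[:n_bg], log_c[:n_bg])

# background value: extrapolate the background line to the 97.5th percentile
background = 10 ** (intercept + slope * stats.norm.ppf(0.975))
print(f"estimated background value ~ {background:.0f} mg/L")
```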

  15. A clinical decision support system algorithm for intravenous to oral antibiotic switch therapy: validity, clinical relevance and usefulness in a three-step evaluation study.

    Akhloufi, H; Hulscher, M; van der Hoeven, C P; Prins, J M; van der Sijs, H; Melles, D C; Verbon, A

    2018-04-26

    To evaluate a clinical decision support system (CDSS) based on consensus-based intravenous to oral switch criteria, which identifies intravenous to oral switch candidates. A three-step evaluation study of a stand-alone CDSS with electronic health record interoperability was performed at the Erasmus University Medical Centre in the Netherlands. In the first step, we performed a technical validation. In the second step, we determined the sensitivity, specificity, negative predictive value and positive predictive value in a retrospective cohort of all hospitalized adult patients starting at least one therapeutic antibacterial drug between 1 and 16 May 2013. ICU, paediatric and psychiatric wards were excluded. In the last step, the clinical relevance and usefulness were prospectively assessed by reports to infectious disease specialists. An alert was considered clinically relevant if antibiotics could be discontinued or switched to oral therapy at the time of the alert. In the first step, one technical error was found. The second step yielded a positive predictive value of 76.6% and a negative predictive value of 99.1%. The third step showed that alerts were clinically relevant in 53.5% of patients. For 43.4%, it had already been decided to discontinue or switch the intravenous antibiotics by the treating physician. In 10.1%, the alert resulted in advice to change antibiotic policy and was considered useful. This prospective cohort study shows that the alerts were clinically relevant in >50% (n = 449) and useful in 10% (n = 85). The CDSS needs to be evaluated in hospitals with varying activity of infectious disease consultancy services, as this probably influences usefulness.
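
    The step-2 metrics follow directly from a confusion matrix; a sketch with invented counts chosen only to reproduce the reported PPV of 76.6% and NPV of 99.1%:

```python
# Sketch of the retrospective-validation metrics (step 2): sensitivity,
# specificity, PPV and NPV from alert/reference counts. The confusion-matrix
# numbers are invented to match the reported PPV (76.6%) and NPV (99.1%);
# the true sensitivity and specificity are not stated in the abstract.
def diagnostic_metrics(tp, fp, fn, tn):
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),   # fraction of alerts that were true candidates
        "npv": tn / (tn + fn),   # fraction of non-alerts correctly left alone
    }

for name, value in diagnostic_metrics(tp=72, fp=22, fn=4, tn=440).items():
    print(f"{name}: {value:.1%}")
```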

  16. A Three-Step Spatial-Temporal Clustering Method for Human Activity Pattern Analysis

    Huang, W.; Li, S.; Xu, S.

    2016-06-01

    How people move in cities and what they do in various locations at different times form human activity patterns. Human activity patterns play a key role in urban planning, traffic forecasting, public health and safety, emergency response, friend recommendation, and so on. Therefore, scholars from different fields, such as social science, geography, transportation, physics and computer science, have made great efforts in modelling and analysing human activity patterns or human mobility patterns. One of the essential tasks in such studies is to find the locations or places where individuals stay to perform some kind of activity before further activity pattern analysis. In the era of Big Data, the emergence of social media along with wearable devices enables human activity data to be collected more easily and efficiently. Furthermore, the dimensionality of the accessible human activity data has been extended from two or three dimensions (space, or space-time) to four dimensions (space, time and semantics). More specifically, not only the location and time at which people stay are collected, but also what people "say" at a location at a given time can be obtained. The characteristics of these datasets shed new light on the analysis of human mobility, and new methodologies should accordingly be developed to handle them. Traditional methods such as neural networks, statistics and clustering have been applied to study human activity patterns using geosocial media data. Among them, clustering methods have been widely used to analyse spatiotemporal patterns. However, to the best of our knowledge, few clustering algorithms have been specifically developed for handling datasets that contain spatial, temporal and semantic aspects all together. In this work, we propose a three-step human activity clustering method based on space, time and semantics to fill this gap. One-year Twitter data, posted in Toronto, Canada, is used to test the clustering-based method. The results show that the
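
    One way to sketch clustering on space, time and semantics together is to feed a weighted combination of the three normalised distances to a density-based clusterer, as below; the records, weights and use of DBSCAN are illustrative assumptions, not the paper's actual three-step procedure.

```python
# Sketch of clustering activity records on space, time and semantics together:
# a combined distance (weighted sum of normalised spatial, temporal and
# semantic distances) feeds a density-based clusterer. Records, weights and
# the choice of DBSCAN are invented; the paper's procedure may differ.
import numpy as np
from sklearn.cluster import DBSCAN

# columns: x (km), y (km), hour of day, two-dimensional semantic topic vector
records = np.array([
    [0.1, 0.2,  8.0, 1, 0], [0.2, 0.1,  8.5, 1, 0], [0.15, 0.25, 9.0, 1, 0],
    [5.0, 5.1, 19.0, 0, 1], [5.2, 5.0, 19.5, 0, 1], [5.1,  5.2, 20.0, 0, 1],
])
xy, hour, topic = records[:, :2], records[:, 2], records[:, 3:]

def pairwise(u, scale):                   # normalised pairwise distance matrix
    d = np.linalg.norm(u[:, None, :] - u[None, :, :], axis=-1)
    return d / scale

D = (0.4 * pairwise(xy, scale=1.0)             # spatial term, km
     + 0.3 * pairwise(hour[:, None], scale=2.0)  # temporal term, hours
     + 0.3 * pairwise(topic, scale=1.0))       # semantic term, topic vectors

labels = DBSCAN(eps=1.0, min_samples=2, metric="precomputed").fit_predict(D)
print("cluster labels:", labels)               # two activity clusters expected
```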

  17. A THREE-STEP SPATIAL-TEMPORAL-SEMANTIC CLUSTERING METHOD FOR HUMAN ACTIVITY PATTERN ANALYSIS

    W. Huang

    2016-06-01

    Full Text Available How people move in cities and what they do in various locations at different times form human activity patterns. Human activity patterns play a key role in urban planning, traffic forecasting, public health and safety, emergency response, friend recommendation, and so on. Therefore, scholars from different fields, such as social science, geography, transportation, physics and computer science, have made great efforts in modelling and analysing human activity patterns or human mobility patterns. One of the essential tasks in such studies is to find the locations or places where individuals stay to perform some kind of activity before further activity pattern analysis. In the era of Big Data, the emergence of social media along with wearable devices enables human activity data to be collected more easily and efficiently. Furthermore, the dimensionality of the accessible human activity data has been extended from two or three dimensions (space, or space-time) to four dimensions (space, time and semantics). More specifically, not only the location and time at which people stay are collected, but also what people "say" at a location at a given time can be obtained. The characteristics of these datasets shed new light on the analysis of human mobility, and new methodologies should accordingly be developed to handle them. Traditional methods such as neural networks, statistics and clustering have been applied to study human activity patterns using geosocial media data. Among them, clustering methods have been widely used to analyse spatiotemporal patterns. However, to the best of our knowledge, few clustering algorithms have been specifically developed for handling datasets that contain spatial, temporal and semantic aspects all together. In this work, we propose a three-step human activity clustering method based on space, time and semantics to fill this gap. One-year Twitter data, posted in Toronto, Canada, is used to test the clustering-based method. The

  18. Development and validation of a simple method for the extraction of human skin melanocytes.

    Wang, Yinjuan; Tissot, Marion; Rolin, Gwenaël; Muret, Patrice; Robin, Sophie; Berthon, Jean-Yves; He, Li; Humbert, Philippe; Viennet, Céline

    2018-03-21

    Primary melanocytes in culture are useful models for studying epidermal pigmentation and the efficacy of melanogenic compounds, and for developing advanced therapy medicinal products. Cell extraction is an inevitable and critical step in the establishment of cell cultures. Many enzymatic methods for extracting and growing cells derived from human skin, such as melanocytes, are described in the literature. They are usually based on two enzymatic steps, Trypsin in combination with Dispase, in order to separate the dermis from the epidermis and subsequently provide a suspension of epidermal cells. The objective of this work was to develop and validate an extraction method for human skin melanocytes that is simple, effective, applicable to smaller skin samples, and avoids animal-derived reagents. The TrypLE™ product was tested on very limited human skin samples, equivalent to multiple 3-mm punch biopsies, and was compared to the Trypsin/Dispase enzymes. The functionality of the extracted cells was evaluated by analysis of viability, morphology and melanin production. In comparison with the Trypsin/Dispase incubation method, the main advantages of the TrypLE™ incubation method were the easier separation between dermis and epidermis and the higher yield of melanocytes after extraction. Both protocols preserved the morphological and biological characteristics of the melanocytes. The minimum size of skin sample that allowed the extraction of functional cells was 6 × 3-mm punch biopsies (i.e., 42 mm²) whatever the method used. In conclusion, this new procedure based on TrypLE™ incubation would be suitable for the establishment of optimal primary melanocyte cultures for clinical applications and research.

  19. Validation and application of a high-order spectral difference method for flow-induced noise simulation

    Parsani, Matteo

    2011-09-01

    The main goal of this paper is to develop an efficient numerical algorithm to compute the far-field noise radiated by an unsteady flow field from bodies in arbitrary motion. The method computes the turbulent flow field in the near field using a high-order spectral difference method coupled with a large-eddy simulation approach. The unsteady equations are advanced in time using a second-order backward difference formula scheme. The nonlinear algebraic system arising from the time discretization is solved with the nonlinear lower-upper symmetric Gauss-Seidel algorithm. In the second step, the method calculates the far-field sound pressure based on the acoustic source information provided by the first-step simulation. This step is based on the Ffowcs Williams-Hawkings approach, which provides noise contributions for monopole, dipole and quadrupole acoustic sources. This paper focuses on the validation and assessment of this hybrid approach using different test cases: a laminar flow over a two-dimensional (2D) open cavity at Re = 1.5 × 10³ and M = 0.15, and a laminar flow past a 2D square cylinder at Re = 200 and M = 0.5. In order to show the application of the numerical method to industrial cases and to assess its capability for sound field simulation, a three-dimensional turbulent flow in a muffler at Re = 4.665 × 10⁴ and M = 0.05 was chosen as a third test case. The flow results show good agreement with numerical and experimental reference solutions. Comparison of the computed noise results with those of reference solutions also shows that the numerical approach predicts noise accurately. © 2011 IMACS.

  20. Reliability and concurrent validity of an alternative method of lateral ...

    1 University of Northern Iowa, Division of Athletic Training, 003C Human Performance Center, Cedar ... concurrent validity of the fingertip-to-floor distance test (FFD) ... in these protocols are spinal and extremity range of motion, pelvic control ...

  1. Alternative validation practice of an automated faulting measurement method.

    2010-03-08

    A number of states have adopted profiler-based systems to automatically measure faulting in jointed concrete pavements. However, little published work exists which documents the validation process used for such automated faulting systems. This p...

  2. Adaptive step-size algorithm for Fourier beam-propagation method with absorbing boundary layer of auto-determined width.

    Learn, R; Feigenbaum, E

    2016-06-01

    Two algorithms that enhance the utility of the absorbing boundary layer are presented, mainly in the framework of the Fourier beam-propagation method. One is an automated boundary layer width selector that chooses a near-optimal boundary size based on the initial beam shape. The second algorithm adjusts the propagation step sizes based on the beam shape at the beginning of each step in order to reduce aliasing artifacts.
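
    For context, a baseline 1D Fourier beam-propagation step with a fixed-width absorbing boundary layer and a constant step size is sketched below; the two algorithms described above would replace the hard-coded taper width and step size with automated choices. All parameters are illustrative.

```python
# Sketch of a 1D Fourier beam-propagation step with an absorbing boundary
# layer; the boundary width is fixed and the step size constant here, i.e.
# this is only the baseline that the paper's adaptive algorithms improve on.
import numpy as np

N, width, dz, lam = 512, 40.0e-6, 5e-6, 1.0e-6    # grid, window, step, wavelength
x = np.linspace(-width/2, width/2, N, endpoint=False)
k0 = 2*np.pi / lam
kx = 2*np.pi * np.fft.fftfreq(N, d=x[1] - x[0])

# absorbing boundary: smooth amplitude taper over the outer 15% of the window
edge = 0.15 * width / 2
taper = np.ones(N)
m = np.abs(x) > (width/2 - edge)
taper[m] = np.cos(0.5*np.pi*(np.abs(x[m]) - (width/2 - edge)) / edge)**2

field = np.exp(-(x / 4e-6)**2).astype(complex)    # Gaussian input beam
for _ in range(200):
    # free-space propagation step in the Fourier domain (paraxial phase)
    field = np.fft.ifft(np.fft.fft(field) * np.exp(-1j * kx**2 * dz / (2*k0)))
    field *= taper                                # absorb light at the edges

print("remaining power fraction:",
      float(np.sum(np.abs(field)**2) / np.sum(np.exp(-2*(x/4e-6)**2))))
```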

  3. Step responses of a torsional system with multiple clearances: Study of vibro-impact phenomenon using experimental and computational methods

    Oruganti, Pradeep Sharma; Krak, Michael D.; Singh, Rajendra

    2018-01-01

    Recently, Krak and Singh (2017) proposed a scientific experiment that examined vibro-impacts in a torsional system under a step-down excitation and provided preliminary measurements and limited non-linear model studies. A major goal of this article is to extend that prior work, with a focus on the examination of vibro-impact phenomena observed in the step responses of a torsional system with one, two or three controlled clearances. First, new measurements are made at several locations with a higher sampling frequency. Measured angular accelerations are examined in both the time and time-frequency domains. Minimal-order non-linear models of the experiment are successfully constructed using piecewise-linear stiffness and Coulomb friction elements; eight cases of the generic system are examined, though only three are experimentally studied. Measured and predicted responses for the single and dual clearance configurations exhibit double-sided impacts, and time-varying periods suggest softening trends under the step-down torque. The non-linear models are experimentally validated by comparing results with the new measurements and with those previously reported. Several metrics are utilized to quantify and compare the measured and predicted responses (including peak-to-peak accelerations). Eigensolutions and step responses of the corresponding linearized models are utilized to better understand the nature of the non-linear dynamic system. Finally, the effect of the step amplitude on the non-linear responses is examined for several configurations, and hardening trends are observed in the torsional system with three clearances.

  4. Method validation to determine total alpha and beta emitters in water samples using LSC

    Al-Masri, M. S.; Nashawati, A.; Al-akel, B.; Saaid, S.

    2006-06-01

    In this work, a method was validated to determine gross alpha and beta emitters in water samples using a liquid scintillation counter. 200 ml of water from each sample were evaporated to 20 ml, and 8 ml of the concentrate were mixed with 12 ml of a suitable cocktail and measured on a Wallac Winspectral 1414 liquid scintillation counter. The lower detection limit (LDL) of this method was 0.33 DPM for total alpha emitters and 1.3 DPM for total beta emitters; the reproducibility limit was ±2.32 DPM and ±1.41 DPM for total alpha and beta emitters, respectively, and the repeatability limit was ±2.19 DPM and ±1.11 DPM for total alpha and beta emitters, respectively. The method is easy and fast because of the simple preparation steps and the large number of samples that can be measured at the same time. In addition, many real samples and standard samples were analysed by the method and showed accurate results, so it was concluded that the method can be used with various water samples. (author)

  5. Determination of methylmercury in marine sediment samples: Method validation and occurrence data

    Carrasco, Luis; Vassileva, Emilia

    2015-01-01

    Highlights: • A method for MeHg determination at trace level in marine sediments is completely validated. • Validation is performed according to ISO-17025 and Eurachem guidelines. • The extraction efficiency of four sample preparation procedures is evaluated. • The uncertainty budget is used as a tool for evaluation of main uncertainty contributors. • Comparison with independent methods yields good agreement within stated uncertainty. - Abstract: The determination of methylmercury (MeHg) in sediment samples is a difficult task due to the extremely low MeHg/THg (total mercury) ratio and species interconversion. Here, we present the method validation of a cost-effective fit-for-purpose analytical procedure for the measurement of MeHg in sediments, which is based on aqueous phase ethylation, followed by purge and trap and hyphenated gas chromatography–pyrolysis–atomic fluorescence spectrometry (GC–Py–AFS) separation and detection. Four different extraction techniques, namely acid and alkaline leaching followed by solvent extraction and evaporation, microwave-assisted extraction with 2-mercaptoethanol, and acid leaching, solvent extraction and back extraction into sodium thiosulfate, were examined regarding their potential to selectively extract MeHg from the estuarine sediment IAEA-405 certified reference material (CRM). The procedure based on acid leaching with HNO₃/CuSO₄, solvent extraction and back extraction into Na₂S₂O₃ yielded the highest extraction recovery, i.e., 94 ± 3%, and offered the possibility to perform the extraction of a large number of samples in a short time, by eliminating the evaporation step. The artifact formation of MeHg was evaluated by high performance liquid chromatography coupled to inductively coupled plasma mass spectrometry (HPLC–ICP–MS), using isotopically enriched Me²⁰¹Hg and ²⁰²Hg, and it was found to be nonexistent. A full validation approach in line with ISO 17025 and Eurachem guidelines was followed

  7. Helping Remedial Readers Master the Reading Vocabulary through a Seven Step Method.

    Aaron, Robert L.

    1981-01-01

    An outline of seven important steps for teaching vocabulary development includes components of language development, visual memory, visual-auditory perception, speeded recall, spelling, reading the word in a sentence, and word comprehension in written context. (JN)

  8. Error analysis and system improvements in phase-stepping methods for photoelasticity

    Wenyan Ji

    1997-11-01

    In the past, automated photoelasticity has been demonstrated to be one of the most efficient techniques for determining the complete state of stress in a 3-D component. However, the measurement accuracy, which depends on many aspects of both the theoretical foundations and the experimental procedures, has not been studied properly. The objective of this thesis is to reveal the intrinsic properties of the errors, provide methods for reducing them and finally improve the system accuracy. A general formulation for a polariscope with all the optical elements in an arbitrary orientation was deduced using the method of Mueller matrices. The deduction of this formulation indicates an inherent connectivity among the optical elements and provides insight into the errors. In addition, this formulation also shows a common foundation among the photoelastic techniques; consequently, these techniques share many common error sources. The phase-stepping system proposed by Patterson and Wang was used as an exemplar to analyse the errors and provide the proposed improvements. This system can be divided into four parts according to function, namely the optical system, light source, image acquisition equipment and image analysis software. All the possible error sources were investigated separately, and methods for reducing the influence of the errors and improving the system accuracy are presented. To identify the contribution of each possible error to the final system output, a model was used to simulate the errors and analyse their consequences. The contribution to the results from different error sources can therefore be estimated quantitatively, and the accuracy of the systems can finally be improved. For a conventional polariscope, the system accuracy can be as high as 99.23% for the fringe order, with an error of less than 5 degrees for the isoclinic angle. The PSIOS system is limited to low fringe orders. For a fringe order of less than 1.5, the accuracy is 94.60% for fringe

  9. Nonlinear Stability and Convergence of Two-Step Runge-Kutta Methods for Volterra Delay Integro-Differential Equations

    Haiyan Yuan

    2013-01-01

    This paper introduces the stability and convergence of two-step Runge-Kutta methods with compound quadrature formula for solving nonlinear Volterra delay integro-differential equations. First, the definitions of (k,l)-algebraic stability and asymptotic stability are introduced; the asymptotic stability of a (k,l)-algebraically stable two-step Runge-Kutta method is then proved. Furthermore, it is proved that if a two-step Runge-Kutta method is algebraically stable and diagonally stable and its generalized stage order is p, then the method with compound quadrature formula is D-convergent of order at least min{p,ν}, where ν depends on the compound quadrature formula.

  10. Simple One-Step Method to Synthesize Polypyrrole-Indigo Carmine-Silver Nanocomposite

    Loguercio, Lara Fernandes; Demingos, Pedro; Manica, Luiza de Mattos; Griep, Jordana Borges; Santos, Marcos José Leite; Ferreira, Jacqueline

    2016-01-01

    A nanocomposite of indigo carmine-doped polypyrrole/silver nanoparticles was obtained by a one-step electrochemical process. The nanocomposite was characterized by scanning electron microscopy, infrared spectroscopy, ultraviolet-visible-near infrared spectroscopy, and cyclic voltammetry. The simple one-step process allowed the growth of silver nanoparticles during the polymerization of polypyrrole, resulting in films with electrochromic behavior and improved electroactivity. In addition, polypyrrole chains in the nanocomposite were found to present longer conjugation length than pristine polypyrrole films.

  11. A mixed methods inquiry into the validity of data

    Vaarst Mette

    2008-07-01

    Background: Research in herd health management solely using a quantitative approach may present major challenges to the interpretation of the results, because the humans involved may have responded to their observations based on previous experiences and own beliefs. This challenge can be met through increased awareness and dialogue between researchers and farmers or other stakeholders about the background for data collection related to management and changes in management. By integrating quantitative and qualitative research methods in a mixed-methods research approach, researchers will improve their understanding of this potential bias of the observed data and farms, which will enable them to obtain more useful results of quantitative analyses. Case description: An example is used to illustrate the potential of combining quantitative and qualitative approaches to herd-health-related data analyses. The example is based on two studies on bovine metritis. The first study was a quantitative observational study of risk factors for metritis in Danish dairy cows based on data from the Danish Cattle Database. The other study was a semi-structured interview study involving 20 practicing veterinarians with the aim to gain insight into veterinarians' decision making when collecting and processing data related to metritis. Discussion and evaluation: The relations between risk factors and metritis in the first project supported the findings in several other quantitative observational studies; however, the herd incidence risk was highly skewed. There may be simple practical reasons for this, e.g. underreporting and differences in the veterinarians' decision making. Additionally, the interviews in the second project identified several problems with correctness and validity of data regarding the occurrence of metritis because of differences regarding case definitions and thresholds for treatments between veterinarians. Conclusion: Studies where

  12. Validation of the WHO-5 as a first-step screening instrument for depression in adults with diabetes

    Halliday, Jennifer A; Hendrieckx, Christel; Busija, Lucy

    2017-01-01

    …the sensitivity and specificity of the WHO-5 as a depression screening instrument, comparing two commonly used WHO-5 cut-off values (≤7 and […]). […] reliability (α=0.90) and convergent validity with the PHQ-9 (r=-0.[…]). Sensitivity/specificity was 0.44/0.96 for the ≤7 cut-off, and 0.79/0.79 for the […] cut-off, with similar findings by diabetes type and treatment. CONCLUSIONS: These findings support use of a WHO-5 cut-point of […] symptoms). Analyses were conducted for the full sample, and separately by diabetes type and treatment (type 1, non-insulin-treated type 2, and insulin-treated type 2 diabetes). Construct (convergent and factorial) validity and reliability of the WHO-5 were examined. ROC analyses were used to examine…

  13. Validation of a Step Detection Algorithm during Straight Walking and Turning in Patients with Parkinson’s Disease and Older Adults Using an Inertial Measurement Unit at the Lower Back

    Minh H. Pham

    2017-09-01

    Introduction: Inertial measurement units (IMUs) positioned on various body locations allow detailed gait analysis even under unconstrained conditions. From a medical perspective, the assessment of vulnerable populations is of particular relevance, especially in the daily-life environment. Gait analysis algorithms need thorough validation, as many chronic diseases show specific and even unique gait patterns. The aim of this study was therefore to validate an acceleration-based step detection algorithm for patients with Parkinson's disease (PD) and older adults in both a lab-based and home-like environment. Methods: In this prospective observational study, data were captured from a single 6-degrees-of-freedom IMU (APDM; 3DOF accelerometer and 3DOF gyroscope) worn on the lower back. Detection of heel strike (HS) and toe off (TO) on a treadmill was validated against an optoelectronic system (Vicon) (11 PD patients and 12 older adults). A second independent validation study in the home-like environment was performed against video observation (20 PD patients and 12 older adults) and included step counting during turning and non-turning, defined with a previously published algorithm. Results: A continuous wavelet transform (cwt)-based algorithm was developed for step detection with very high agreement with the optoelectronic system. HS detection in PD patients/older adults, respectively, reached 99/99% accuracy. Similar results were obtained for TO (99/100%). In HS detection, Bland–Altman plots showed a mean difference of 0.002 s [95% confidence interval (CI) −0.09 to 0.10] between the algorithm and the optoelectronic system. The Bland–Altman plot for TO detection showed mean differences of 0.00 s (95% CI −0.12 to 0.12). In the home-like assessment, the algorithm for detection of occurrence of steps during turning reached 90% (PD patients)/90% (older adults) sensitivity, 83/88% specificity, and 88/89% accuracy. The detection of steps during non-turning phases
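
    The abstract describes a continuous-wavelet-transform-based step detector but does not give its wavelet, scales or thresholds. The sketch below illustrates the general idea at a single CWT scale: smooth the vertical acceleration with a Ricker wavelet and pick peaks separated by a minimum step time. The scale and the 0.3 s refractory period are assumptions, not the published values.

        # Illustrative single-scale cwt-style step detector (not the validated
        # algorithm from the study). acc_vertical is lower-back vertical
        # acceleration, fs the sampling frequency in Hz.
        import numpy as np
        from scipy.signal import find_peaks

        def ricker(points, a):
            # Ricker ("Mexican hat") wavelet, the kernel of one CWT scale
            t = np.arange(points) - (points - 1) / 2.0
            amp = 2 / (np.sqrt(3 * a) * np.pi ** 0.25)
            return amp * (1 - (t / a) ** 2) * np.exp(-t ** 2 / (2 * a ** 2))

        def detect_steps(acc_vertical, fs, scale_s=0.1, min_step_s=0.3):
            sig = acc_vertical - np.mean(acc_vertical)        # remove gravity offset
            a = scale_s * fs                                  # wavelet scale in samples
            kernel = ricker(int(10 * a) | 1, a)               # odd-length kernel
            smooth = np.convolve(sig, kernel, mode="same")    # one CWT scale
            peaks, _ = find_peaks(smooth, distance=int(min_step_s * fs))
            return peaks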

  14. Method for making a single-step etch mask for 3D monolithic nanostructures

    Grishina, D A; Harteveld, C A M; Vos, W L; Woldering, L A

    2015-01-01

    Current nanostructure fabrication by etching is usually limited to planar structures as they are defined by a planar mask. The realization of three-dimensional (3D) nanostructures by etching requires technologies beyond planar masks. We present a method for fabricating a 3D mask that allows one to etch three-dimensional monolithic nanostructures using only CMOS-compatible processes. The mask is written in a hard-mask layer that is deposited on two adjacent inclined surfaces of a Si wafer. By projecting in a single step two different 2D patterns within one 3D mask on the two inclined surfaces, the mutual alignment between the patterns is ensured. Thereby after the mask pattern is defined, the etching of deep pores in two oblique directions yields a three-dimensional structure in Si. As a proof of concept we demonstrate 3D mask fabrication for three-dimensional diamond-like photonic band gap crystals in silicon. The fabricated crystals reveal a broad stop gap in optical reflectivity measurements. We propose how 3D nanostructures with five different Bravais lattices can be realized, namely cubic, tetragonal, orthorhombic, monoclinic and hexagonal, and demonstrate a mask for a 3D hexagonal crystal. We also demonstrate the mask for a diamond-structure crystal with a 3D array of cavities. In general, the 2D patterns on the different surfaces can be completely independently structured and still be in perfect mutual alignment. Indeed, we observe an alignment accuracy of better than 3.0 nm between the 2D mask patterns on the inclined surfaces, which permits one to etch well-defined monolithic 3D nanostructures. (paper)

  15. Porous plasmonic nanocomposites for SERS substrates fabricated by two-step laser method

    Koleva, M.E., E-mail: mihaela_ek@yahoo.com [Institute of Electronics, Bulgarian Academy of Sciences, 72 Tsarigradsko Chaussee blvd., Sofia 1784 (Bulgaria); International Center for Materials Nanoarchitectonics, National Institute for Materials Science, 1-1 Namiki, Tsukuba, 305-0044 (Japan); Nedyalkov, N.N.; Atanasov, P.A. [Institute of Electronics, Bulgarian Academy of Sciences, 72 Tsarigradsko Chaussee blvd., Sofia 1784 (Bulgaria); Gerlach, J.W.; Hirsch, D.; Prager, A.; Rauschenbach, B. [Leibniz Institute of Surface Modification (IOM), Permoserstrasse 15, D-04318 Leipzig (Germany); Fukata, N.; Jevasuwan, W. [International Center for Materials Nanoarchitectonics, National Institute for Materials Science, 1-1 Namiki, Tsukuba, 305-0044 (Japan)

    2016-04-25

    This research is focused on the investigation of coupled plasmonic/metal–semiconductor nanomaterials. A two-step laser-assisted method is demonstrated for the formation of plasmonic Ag nanoparticles (NPs) distributed in porous metal–oxide semiconductors. A mosaic Ag-ZnO target is used for laser ablation and, subsequently, laser annealing of the deposited layer is applied. The plasmon resonance properties of the nanostructures produced are confirmed by optical transmission spectroscopy. The wurtzite structure of ZnO is formed with tilted c-axis orientation and, correspondingly, a mixed Raman mode appears at 580 cm⁻¹. The oxygen pressure applied during the deposition process has an impact on the morphology and thickness of the porous nanostructures, but not on the size and size distribution of the AgNPs. The porous nanocomposites exhibited potential for SERS applications, most pronounced for the oxygen-deficient sample grown at lower oxygen pressure. The observed considerable SERS enhancement of R6G molecules on AgNP/ZnO can be attributed to the ZnO-to-molecule charge transfer contribution, enhanced by the additional electrons passed from the local surface plasmon resonance (LSPR) of the AgNPs to the ZnO through the conduction band. - Highlights: • Porous AgNPs/ZnO composites are obtained by laser deposition and laser annealing. • Morphology and properties depend on growth oxygen pressure. • The emergence of a mixed-symmetry Raman mode at 580 cm⁻¹ is registered. • The AgNPs/ZnO porous nanocomposites are suitable for SERS-active substrates. • The charge transfer enhanced by LSPR contributes to the SERS effect.

  16. The Theory and Practice of the Six-Step Method in EFL and Its Transferability to Engineering Programmes

    Ntombela, Berrington X. S.

    2013-01-01

    This paper outlines the theory of the six-step method developed by personnel in the Language and Learning department at Caledonian College of Engineering, Oman. The paper further illustrates the application of this method in teaching Project, Listening, Reading, Writing, and Speaking & Debate at Foundation level. The assumption in applying the…

  17. Chemometric approach for development, optimization, and validation of different chromatographic methods for separation of opium alkaloids.

    Acevska, J; Stefkov, G; Petkovska, R; Kulevanova, S; Dimitrovska, A

    2012-05-01

    The excessive and continuously growing interest in the simultaneous determination of poppy alkaloids necessitates the development and optimization of convenient high-throughput methods for the assessment of the qualitative and quantitative profile of alkaloids in poppy straw. Systematic optimization of two chromatographic methods (gas chromatography (GC)/flame ionization detector (FID)/mass spectrometry (MS) and reversed-phase (RP)-high-performance liquid chromatography (HPLC)/diode array detector (DAD)) for the separation of alkaloids from Papaver somniferum L. (Papaveraceae) was carried out. The effects of various conditions on the predefined chromatographic descriptors were investigated using chemometrics. A full factorial linear design of experiments was used for determining the relationship between chromatographic conditions and the retention behavior of the analytes. Central composite circumscribed design was utilized for the final method optimization. By conducting the optimization of the methods in a very rational manner, a great deal of excessive and unproductive laboratory research work was avoided. The developed chromatographic methods were validated and compared with respect to resolving power, sensitivity, accuracy, speed, cost, ecological aspects, and compatibility with the poppy straw extraction procedure. The separation of the opium alkaloids using the GC/FID/MS method was achieved within 10 min, avoiding any derivatization step. This method has a stronger resolving power, shorter analysis time and better cost-effectiveness than the RP-HPLC/DAD method, and is in line with the "green" trend in analysis. The RP-HPLC/DAD method, on the other hand, displayed better sensitivity for all tested alkaloids. The proposed methods provide both fast screening and an accurate content assessment of the six alkaloids in the poppy samples obtained from the selection program of Papaver strains.
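
    The abstract mentions a full factorial design for screening chromatographic conditions. As a generic illustration of how such a design is enumerated (the factor names and levels below are invented, not the paper's), consider:

        # Two-level full factorial design for three hypothetical chromatographic
        # factors; names and levels are illustrative assumptions only.
        from itertools import product

        factors = {
            "column_temp_C": (20, 40),
            "flow_mL_min": (0.8, 1.2),
            "organic_pct": (20, 40),
        }
        runs = [dict(zip(factors, levels)) for levels in product(*factors.values())]
        for i, run in enumerate(runs, 1):
            print(i, run)   # 2**3 = 8 experimental runs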

  18. A three operator split-step method covering a larger set of non-linear partial differential equations

    Zia, Haider

    2017-06-01

    This paper describes an updated exponential, Fourier-based split-step method that can be applied to a greater class of partial differential equations than previous methods would allow. These equations arise in physics and engineering, a notable example being the generalized derivative non-linear Schrödinger equation that arises in non-linear optics with self-steepening terms. These differential equations feature terms that were previously inaccessible to model accurately with low computational resources. The new method maintains third-order error even with these additional terms and models the equation in all three spatial dimensions and time. The class of non-linear differential equations to which this method applies is shown. The method is fully derived, and its implementation in the split-step architecture is shown. This paper lays the mathematical groundwork for an upcoming paper employing this method in white-light generation simulations in bulk material.
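
    For readers unfamiliar with the split-step architecture the paper extends, the sketch below shows the classic two-operator (Strang) split-step Fourier method for the 1-D nonlinear Schrödinger equation i·u_t = −u_xx + g|u|²u; the paper's three-operator method generalizes this idea. The grid, step count and nonlinearity strength are illustrative assumptions.

        # Classic two-operator split-step Fourier method for the 1-D NLS equation
        # i u_t = -u_xx + g |u|^2 u (Strang splitting); illustrative background,
        # not the paper's three-operator scheme.
        import numpy as np

        def split_step_nls(u0, dx, dt, steps, g=1.0):
            n = u0.size
            k = 2 * np.pi * np.fft.fftfreq(n, d=dx)          # spectral wavenumbers
            linear_half = np.exp(-1j * k ** 2 * dt / 2)      # half step of dispersion
            u = u0.astype(complex)
            for _ in range(steps):
                u = np.fft.ifft(linear_half * np.fft.fft(u)) # linear half step
                u *= np.exp(-1j * g * np.abs(u) ** 2 * dt)   # full nonlinear step
                u = np.fft.ifft(linear_half * np.fft.fft(u)) # linear half step
            return u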

  19. Lipoprotein metabolism in familial hypercholesterolemia: Serial assessment using a one-step ultracentrifugation method

    Hayato Tada

    2015-04-01

    Objectives: It is well known that familial hypercholesterolemia (FH) is a common inherited disorder that can markedly elevate the level of plasma LDL cholesterol. However, little data exist regarding the clinical impact of the plasma triglyceride (TG)-rich lipoprotein fraction, including VLDL and IDL, in FH. Thus, we assessed the hypothesis that mutations in the LDL receptor modulate lipoprotein metabolism beyond the LDL fraction. Design and methods: We investigated plasma lipoproteins with a one-step ultracentrifugation method in 146 controls (mean age = 61.4±17.1 yr, mean LDL cholesterol = 92.7±61.2 mg/dl), 213 heterozygous mutation-determined FH subjects (mean age = 46.0±18.0 yr, mean LDL cholesterol = 225.1±61.2 mg/dl), and 16 homozygous/compound heterozygous mutation-determined FH subjects (mean age = 26.9±17.1 yr, mean LDL cholesterol = 428.6±86.1 mg/dl). In addition, we evaluated the cholesterol/TG ratio in each lipoprotein fraction separated by ultracentrifugation. Results: In addition to total cholesterol and LDL cholesterol levels, VLDL cholesterol (19.5±10.4, 25.2±19.3 and 29.5±21.4 mg/dl, respectively) and IDL cholesterol (8.3±3.7, 16.8±11.5 and 40.0±37.3 mg/dl, respectively) exhibited a tri-modal distribution according to LDL receptor mutation status. Moreover, the cholesterol/TG ratios of each lipoprotein fraction increased significantly in the heterozygous FH and homozygous/compound heterozygous FH groups compared with controls, suggesting that the abnormality in the LDL receptor modulates the quality as well as the quantity of each lipoprotein fraction. Conclusions: Our results indicate that cholesterol in TG-rich lipoproteins, including VLDL and IDL, is significantly higher in FH subjects, revealing a tri-modal distribution according to the number of LDL receptor mutations. Keywords: LDL cholesterol, familial hypercholesterolemia, ultracentrifugation, lipoprotein

  20. Evaluating teaching methods: validation of an evaluation tool for hydrodissection and phacoemulsification portions of cataract surgery.

    Smith, Ronald J; McCannel, Colin A; Gordon, Lynn K; Hollander, David A; Giaconi, JoAnn A; Stelzner, Sadiqa K; Devgan, Uday; Bartlett, John; Mondino, Bartly J

    2014-09-01

    To develop and assess the validity of an evaluation tool to quantitatively assess the hydrodissection and phacoemulsification portions of cataract surgery performed by residents. Case series. Jules Stein Eye Institute, Olive View-UCLA Medical Center, and Veterans Administration Medical Center, Los Angeles, California, USA. The UCLA ophthalmology faculty members were surveyed and the literature was reviewed to develop a grading tool consisting of 15 questions to evaluate surgical technique, including questions from the Global Rating Assessment of Skills in Intraocular Surgery and from the International Council of Ophthalmology's Ophthalmology Surgical Competency Assessment Rubric. Video clips of the hydrodissection and phacoemulsification portions of cataract surgery performed by 1 postgraduate year 2 (PGY2) resident, 1 PGY3 resident, 2 PGY4 residents, and an advanced surgeon were independently graded in a masked fashion by an 8-member faculty panel. Eleven of the 15 questions had a significant association with surgical experience level (P < .05), including instrument handling, flow of operation, and nucleus rotation. Nucleus cracking also had low variability. Less directly visible tasks, especially 3-dimensional tasks, had wider interobserver variability. Surgical performance can be validly measured using an evaluation tool. Improved videography and studies to identify the best questions for evaluating each step of cataract surgery may help ophthalmic educators more precisely measure training outcomes for improving teaching interventions. No author has a financial or proprietary interest in any material or method mentioned. Published by Elsevier Inc.

  1. A new multi-step technique with differential transform method for analytical solution of some nonlinear variable delay differential equations.

    Benhammouda, Brahim; Vazquez-Leal, Hector

    2016-01-01

    This work presents an analytical solution of some nonlinear delay differential equations (DDEs) with variable delays. Such DDEs are difficult to treat numerically and cannot be solved by existing general-purpose codes. A new method of steps combined with the differential transform method (DTM) is proposed as a powerful tool to solve these DDEs. This method reduces the DDEs to ordinary differential equations that are then solved by the DTM. Furthermore, we show that the solutions can be improved by the Laplace-Padé resummation method. Two examples are presented to show the efficiency of the proposed technique. The main advantage of this technique is that it possesses a simple procedure based on a few straightforward steps and can be combined with analytical methods other than the DTM, such as the homotopy perturbation method.
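
    The classical method of steps that the paper combines with the DTM reduces a DDE, interval by interval, to an ODE in which the delayed term is already known. The toy below shows that idea for the constant-delay equation u'(t) = −u(t−1) with history u(t) = 1 on [−1, 0], integrated with forward Euler; it is a didactic stand-in, not the paper's variable-delay DTM scheme.

        # Method-of-steps idea for u'(t) = -u(t - 1): on each unit interval the
        # delayed value is known from earlier, so the DDE becomes an ODE.
        # Forward Euler is used for simplicity; all values are illustrative.
        import numpy as np

        tau, h, T = 1.0, 0.001, 5.0
        n_hist = int(tau / h)
        t = np.arange(-tau, T + h, h)
        u = np.ones_like(t)                       # history: u(t) = 1 on [-1, 0]
        for i in range(n_hist, t.size - 1):
            u[i + 1] = u[i] - h * u[i - n_hist]   # u'(t) = -u(t - tau)
        print(u[-1])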

  2. Modified Pressure-Correction Projection Methods: Open Boundary and Variable Time Stepping

    Bonito, Andrea; Guermond, Jean-Luc; Lee, Sanghyun

    2014-10-31

    In this paper, we design and study two modifications of the first order standard pressure increment projection scheme for the Stokes system. The first scheme improves the existing schemes in the case of open boundary condition by modifying the pressure increment boundary condition, thereby minimizing the pressure boundary layer and recovering the optimal first order decay. The second scheme allows for variable time stepping. It turns out that the straightforward modification to variable time stepping leads to unstable schemes. The proposed scheme is not only stable but also exhibits the optimal first order decay. Numerical computations illustrating the theoretical estimates are provided for both new schemes.

  5. Standardization of a two-step real-time polymerase chain reaction based method for species-specific detection of medically important Aspergillus species.

    Das, P; Pandey, P; Harishankar, A; Chandy, M; Bhattacharya, S; Chakrabarti, A

    2017-01-01

    Standardization of Aspergillus polymerase chain reaction (PCR) poses two technical challenges: (a) standardization of DNA extraction, and (b) optimization of the PCR against various medically important Aspergillus species. Many cases of aspergillosis go undiagnosed because of the relative insensitivity of conventional diagnostic methods such as microscopy, culture or antigen detection. The present study is an attempt to standardize a real-time PCR assay for rapid, sensitive and specific detection of Aspergillus DNA in EDTA whole blood. Three nucleic acid extraction protocols were compared, and a two-step real-time PCR assay was developed and validated following the recommendations of the European Aspergillus PCR Initiative in our setup. In the first PCR step (pan-Aspergillus PCR), the target was the 28S rDNA gene, whereas in the second step (species-specific PCR) the targets were the beta-tubulin gene (for Aspergillus fumigatus, Aspergillus flavus and Aspergillus terreus) and the calmodulin gene (for Aspergillus niger). Species-specific identification of four medically important Aspergillus species, namely A. fumigatus, A. flavus, A. niger and A. terreus, was achieved by this PCR. Specificity of the PCR was tested against 34 different DNA sources, including bacteria, viruses, yeasts, other Aspergillus species, other fungal species and human DNA, and had no false-positive reactions. The analytical sensitivity of the PCR was found to be 10² CFU/ml. The present protocol of two-step real-time PCR assays for genus- and species-specific identification of commonly isolated species in whole blood for diagnosis of invasive Aspergillus infections offers a rapid, sensitive and specific assay option and requires clinical validation at multiple centers.

  6. A multi-step dealloying method to produce nanoporous gold with no volume change and minimal cracking

    Sun Ye [Department of Chemical and Materials Engineering, University of Kentucky, 177 F. Paul Anderson Tower, Lexington, KY 40506 (United States); Balk, T. John [Department of Chemical and Materials Engineering, University of Kentucky, 177 F. Paul Anderson Tower, Lexington, KY 40506 (United States)], E-mail: balk@engr.uky.edu

    2008-05-15

    We report a simple two-step dealloying method for producing bulk nanoporous gold with no volume change and no significant cracking. The galvanostatic dealloying method used here appears superior to potentiostatic methods for fabricating millimeter-scale samples. Care must be taken when imaging the nanoscale, interconnected sponge-like structure with a focused ion beam, as even brief exposure caused immediate and extensive cracking of nanoporous gold, as well as ligament coarsening at the surface.

  7. A validated RP-HPLC method for the determination of Irinotecan hydrochloride residues for cleaning validation in production area

    Sunil Reddy

    2013-03-01

    Introduction: Cleaning validation is an integral part of current good manufacturing practices in the pharmaceutical industry. The main purpose of cleaning validation is to prove the effectiveness and consistency of cleaning of given pharmaceutical production equipment, to prevent cross-contamination and adulteration of the drug product with other active ingredients. Objective: A rapid, sensitive and specific reverse-phase HPLC method was developed and validated for the quantitative determination of irinotecan hydrochloride in cleaning validation swab samples. Method: The method was validated using a Waters Symmetry Shield RP-18 (250 mm × 4.6 mm, 5 µm) column with an isocratic mobile phase containing a mixture of 0.02 M potassium dihydrogen orthophosphate (pH adjusted to 3.5 with orthophosphoric acid), methanol and acetonitrile (60:20:20, v/v/v). The flow rate of the mobile phase was 1.0 mL/min, with a column temperature of 25°C and detection wavelength at 220 nm. The sample injection volume was 100 µL. Results: The calibration curve was linear over a concentration range from 0.024 to 0.143 µg/mL with a correlation coefficient of 0.997. The intra-day and inter-day precision, expressed as relative standard deviation, were below 3.2%. The recoveries obtained from stainless steel, PCGI, epoxy, glass and decron cloth surfaces were more than 85%, and there was no interference from the cotton swab. The detection limit (DL) and quantitation limit (QL) were 0.008 and 0.023 µg mL⁻¹, respectively. Conclusion: The developed method was validated with respect to specificity, linearity, limit of detection and quantification, accuracy, precision and solution stability. The overall procedure can be used as part of a cleaning validation program in the pharmaceutical manufacture of irinotecan hydrochloride.
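
    The abstract reports detection and quantitation limits from calibration data. A common (ICH-style) way to obtain such limits from a calibration line is LOD = 3.3·s/S and LOQ = 10·s/S, with s the residual standard deviation and S the slope; the sketch below applies these formulas to invented data and is not a reconstruction of the paper's actual calculation.

        # Hedged sketch: LOD/LOQ from a linear calibration, LOD = 3.3*s/S,
        # LOQ = 10*s/S. Concentrations and responses are invented.
        import numpy as np

        conc = np.array([0.025, 0.05, 0.1, 0.5, 1.0, 5.0, 12.5])    # ug/mL (illustrative)
        resp = np.array([1.1, 2.2, 4.3, 21.5, 43.0, 216.0, 540.0])  # peak areas (illustrative)

        slope, intercept = np.polyfit(conc, resp, 1)
        resid = resp - (slope * conc + intercept)
        s = np.sqrt(np.sum(resid ** 2) / (conc.size - 2))  # residual std deviation

        print(f"LOD = {3.3 * s / slope:.3f} ug/mL, LOQ = {10 * s / slope:.3f} ug/mL")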

  8. Qualification test of few group constants generated from an MC method by the two-step neutronics analysis system McCARD/MASTER

    Park, Ho Jin; Shim, Hyung Jin; Joo, Han Gyu; Kim, Chang Hyo

    2011-01-01

    The purpose of this paper is to examine the qualification of few-group constants estimated by the Seoul National University Monte Carlo particle transport analysis code McCARD in terms of core neutronics analyses, and thus to validate the McCARD method as a few-group constant generator. Two-step core neutronics analyses are conducted for a mini and a realistic PWR by the McCARD/MASTER code system, in which McCARD is used as an MC group constant generation code and MASTER as a diffusion core analysis code. The two-step calculations of the effective multiplication factors and assembly power distributions of the two PWR cores by McCARD/MASTER are compared with reference McCARD calculations. By showing excellent agreement between McCARD/MASTER and the reference MC core neutronics analyses for the two PWRs, it is concluded that the MC method implemented in McCARD can generate few-group constants which are well qualified for high-accuracy two-step core neutronics calculations. (author)

  9. Variable order one-step methods for initial value problems I ...

    A class of variable order one-step integrators is proposed for Initial Value Problems (IVPs) in Ordinary Differential Equations (ODEs). It is based on a rational interpolant. Journal of the Nigerian Association of Mathematical Physics Vol. 10 2006: pp. 91-96 ...

  10. The Mixing of Methods: a three-step process for improving rigour in impact evaluations

    Ton, G.

    2012-01-01

    This article describes a systematic process that is helpful in improving impact evaluation assignments, within restricted budgets and timelines. It involves three steps: a rethink of the key questions of the evaluation to develop more relevant, specific questions; a way of designing a mix of

  11. A novel two-step method for screening shade tolerant mutant plants via dwarfism

    When subjected to shade, plants undergo rapid shoot elongation, which often makes them more prone to disease and mechanical damage. It has been reported that, in turfgrass, induced dwarfism can enhance shade tolerance. Here, we describe a two-step procedure for isolating shade tolerant mutants of ...

  12. Methods for Assessing Item, Step, and Threshold Invariance in Polytomous Items Following the Partial Credit Model

    Penfield, Randall D.; Myers, Nicholas D.; Wolfe, Edward W.

    2008-01-01

    Measurement invariance in the partial credit model (PCM) can be conceptualized in several different but compatible ways. In this article the authors distinguish between three forms of measurement invariance in the PCM: step invariance, item invariance, and threshold invariance. Approaches for modeling these three forms of invariance are proposed,…

  13. Photon strength function in the Hf-181 nucleus by method of two-step cascade

    Le Hong Khiem

    2003-01-01

    The applicability of sum-coincidence measurements of two-step cascade gamma-ray spectra for determining the Photon Strength Function (PSF) of Hf-181, populated via the Hf-180(n,2γ)Hf-181 reaction, is presented. Up to 80% of the intensity of the primary gamma-ray transitions over a wide energy range has been deduced and compared to model calculations. (author)

  14. Specification and Preliminary Validation of IAT (Integrated Analysis Techniques) Methods: Executive Summary.

    1985-03-01

    …conceptual framework, and preliminary validation of IAT concepts. Planned work for FY85, including more extensive validation, is also described. The approach: 1) Identify needs & requirements for IAT. 2) Develop IAT conceptual framework. 3) Validate IAT methods. 4) Develop applications materials. To…

  15. Validity in Mixed Methods Research in Education: The Application of Habermas' Critical Theory

    Long, Haiying

    2017-01-01

    Mixed methods approach has developed into the third methodological movement in educational research. Validity in mixed methods research as an important issue, however, has not been examined as extensively as that of quantitative and qualitative research. Additionally, the previous discussions of validity in mixed methods research focus on research…

  16. JRC Guidelines for 1 - Selecting and/or validating analytical methods for cosmetics 2 - Recommending standardization steps of analytical methods

    VINCENT Ursula

    2015-01-01

    The analysis of cosmetics constitutes a challenge mainly due to the large variety of ingredients and formulations, and to the complexity of cosmetic products, in particular due to huge matrix variability. In 2009, the European Commission issued a Regulation (Regulation (EC) N° 1223/2009 of the European Parliament and of the Council) establishing the requisites for cosmetic products and the responsibilities of the stakeholders. While the manufacturers are responsible for ensuring the safety of t...

  17. Development and validation of a new fallout transport method using variable spectral winds

    Hopkins, A.T.

    1984-01-01

    A new method was developed to incorporate variable winds into fallout transport calculations. The method uses spectral coefficients derived by the National Meteorological Center. Wind vector components are computed from the coefficients along the trajectories of falling particles. Spectral winds are used in the two-step method to compute dose rate on the ground, downwind of a nuclear cloud. First, the hotline is located by computing trajectories of particles from an initial, stabilized cloud, through spectral winds to the ground; the line connecting the particle landing points is the hotline. Second, the dose rate on and around the hotline is computed by analytically smearing the falling cloud's activity along the ground. The feasibility of using spectral winds for fallout particle transport was validated by computing Mount St. Helens ashfall locations and comparing the calculations to fallout data. In addition, an ashfall equation was derived for computing volcanic ash mass per unit area on the ground. Ashfall data and the ashfall equation were used to back-calculate an aggregated particle size distribution for the Mount St. Helens eruption cloud
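
    The hotline step described above amounts to integrating each particle's horizontal drift through the wind field as it falls. The sketch below shows such a trajectory step with a constant settling speed and a made-up height-dependent wind profile standing in for winds reconstructed from spectral coefficients; it is illustrative only, not the report's model.

        # Illustrative particle-trajectory step for hotline location: drift a
        # particle released at height z0 through a wind profile wind(z) while it
        # falls at settling speed w_s. The wind profile below is invented.
        def landing_point(z0, w_s, wind, dt=10.0):
            x = y = 0.0
            z = z0
            while z > 0.0:
                u, v = wind(z)        # horizontal wind components at height z
                x += u * dt
                y += v * dt
                z -= w_s * dt         # constant settling speed
            return x, y

        # 10 km release, 1 m/s settling speed, simple sheared wind
        print(landing_point(1e4, 1.0, lambda z: (10 + z / 1e3, 2.0)))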

  18. A validated bioanalytical HPLC method for pharmacokinetic evaluation of 2-deoxyglucose in human plasma.

    Gounder, Murugesan K; Lin, Hongxia; Stein, Mark; Goodin, Susan; Bertino, Joseph R; Kong, Ah-Ng Tony; DiPaola, Robert S

    2012-05-01

    2-Deoxyglucose (2-DG), an analog of glucose, is widely used to interfere with glycolysis in tumor cells and studied as a therapeutic approach in clinical trials. To evaluate the pharmacokinetics of 2-DG, we describe the development and validation of a sensitive HPLC fluorescent method for the quantitation of 2-DG in plasma. Plasma samples were deproteinized with methanol and the supernatant was dried at 45°C. The residues were dissolved in methanolic sodium acetate-boric acid solution. 2-DG and other monosaccharides were derivatized to 2-aminobenzoic acid derivatives in a single step in the presence of sodium cyanoborohydride at 80°C for 45 min. The analytes were separated on a YMC ODS C₁₈ reversed-phase column using gradient elution. The excitation and emission wavelengths were set at 360 and 425 nm. The 2-DG calibration curves were linear over the range of 0.63-300 µg/mL with a limit of detection of 0.5 µg/mL. The assay provided satisfactory intra-day and inter-day precision with RSD less than 9.8%, and the accuracy ranged from 86.8 to 110.0%. The HPLC method is reproducible and suitable for the quantitation of 2-DG in plasma. The method was successfully applied to characterize the pharmacokinetics profile of 2-DG in patients with advanced solid tumors. Copyright © 2011 John Wiley & Sons, Ltd.

  19. Statistical Analysis Methods for Physics Models Verification and Validation

    De Luca, Silvia

    2017-01-01

    The validation and verification process is a fundamental step for any software like Geant4 and GeantV, which aim to perform data simulation using physics models and Monte Carlo techniques. As experimental physicists, we face the problem of comparing the results obtained from simulations with what the experiments actually observed. One way to address this problem is to perform a consistency test. Within the Geant group, we developed a compact C++ library which will be added to the automated validation process on the Geant Validation Portal
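
    As an illustration of the kind of consistency test mentioned, the sketch below computes a two-sample chi-square statistic between a simulated and a measured histogram, using the common scaling for unequal total counts. The bin counts are invented; this is not the library described in the abstract, and it is written in Python rather than C++ for compactness.

        # Two-sample chi-square consistency test between histograms with
        # (possibly) unequal total counts; bin contents are illustrative.
        import numpy as np
        from scipy.stats import chi2

        sim = np.array([98, 203, 401, 305, 110])   # simulated bin counts
        exp = np.array([110, 190, 420, 280, 95])   # measured bin counts

        k1 = np.sqrt(exp.sum() / sim.sum())
        k2 = np.sqrt(sim.sum() / exp.sum())
        stat = np.sum((k1 * sim - k2 * exp) ** 2 / (sim + exp))
        p = chi2.sf(stat, df=sim.size - 1)
        print(f"chi2 = {stat:.2f}, p = {p:.3f}")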

  20. A step by step selection method for the location and the size of a waste-to-energy facility targeting the maximum output energy and minimization of gate fee.

    Kyriakis, Efstathios; Psomopoulos, Constantinos; Kokkotis, Panagiotis; Bourtsalas, Athanasios; Themelis, Nikolaos

    2017-06-23

    This study develops an algorithm that provides a step-by-step selection method for the location and the size of a waste-to-energy facility, targeting the maximum output energy while also considering what is in many cases the basic obstacle, the gate fee. Various parameters were identified and evaluated in order to formulate the proposed decision-making method in the form of an algorithm. The principal simulation input is the amount of municipal solid waste (MSW) available for incineration, which, along with its net calorific value, is the most important factor for the feasibility of the plant. Moreover, the research focuses both on the parameters that could increase the energy production and on those that affect the R1 energy efficiency factor. Estimation of the final gate fee is achieved through an economic analysis of the entire project, investigating both the expenses and the revenues that are expected according to the selected site and the outputs of the facility. At this point, a number of common revenue methods were included in the algorithm. The developed algorithm has been validated using three case studies in Greece: Athens, Thessaloniki, and Central Greece, where the cities of Larisa and Volos have been selected for the application of the proposed decision-making tool. These case studies were selected based on a previous publication by two of the authors, in which these areas were examined. Results reveal that the development of a «solid» methodological approach to selecting the site and the size of a waste-to-energy (WtE) facility is feasible. However, the maximization of the energy efficiency factor R1 requires high utilization factors, while the minimization of the final gate fee requires a high R1 and high metals recovery from the bottom ash, as well as economic exploitation of recovered raw materials, if any.
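
    For reference, the R1 energy-efficiency factor mentioned above is commonly stated (EU Waste Framework Directive, Annex II) as R1 = (Ep − (Ef + Ei)) / (0.97 × (Ew + Ef)). The sketch below evaluates it with invented annual energy figures; treat both the formula transcription and the numbers as assumptions to verify against the Directive.

        # Hedged sketch of the commonly quoted R1 formula; all inputs invented.
        def r1_factor(Ep, Ef, Ew, Ei):
            """Ep: annual energy produced (electricity weighted 2.6, heat 1.1);
            Ef: energy input from fuels contributing to steam production;
            Ew: energy contained in the treated waste;
            Ei: imported energy excluding Ew and Ef."""
            return (Ep - (Ef + Ei)) / (0.97 * (Ew + Ef))

        print(r1_factor(Ep=900_000, Ef=20_000, Ew=1_200_000, Ei=15_000))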

  1. A stabilized Runge–Kutta–Legendre method for explicit super-time-stepping of parabolic and mixed equations

    Meyer, Chad D.; Balsara, Dinshaw S.; Aslam, Tariq D.

    2014-01-01

    Parabolic partial differential equations appear in several physical problems, including problems that have a dominant hyperbolic part coupled to a sub-dominant parabolic component. Explicit methods for their solution are easy to implement but have very restrictive time step constraints. Implicit solution methods can be unconditionally stable but have the disadvantage of being computationally costly or difficult to implement. Super-time-stepping methods for treating parabolic terms in mixed-type partial differential equations occupy an intermediate position. In such methods each superstep takes “s” explicit Runge–Kutta-like time-steps to advance the parabolic terms by a time-step that is s² times larger than a single explicit time-step. The expanded stability is usually obtained by mapping the short recursion relation of the explicit Runge–Kutta scheme to the recursion relation of some well-known, stable polynomial. Prior work has built temporally first- and second-order accurate super-time-stepping methods around the recursion relation associated with Chebyshev polynomials. Since their stability is based on the boundedness of the Chebyshev polynomials, these methods have been called RKC1 and RKC2. In this work we build temporally first- and second-order accurate super-time-stepping methods around the recursion relation associated with Legendre polynomials. We call these methods RKL1 and RKL2. The RKL1 method is first-order accurate in time; the RKL2 method is second-order accurate in time. We verify that the newly-designed RKL1 and RKL2 schemes have a very desirable monotonicity preserving property for one-dimensional problems – a solution that is monotone at the beginning of a time step retains that property at the end of that time step. It is shown that RKL1 and RKL2 methods are stable for all values of the diffusion coefficient up to the maximum value. We call this a convex monotonicity preserving property and show by examples that it is very useful
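
    Based on the Legendre recursion described in the abstract, a minimal RKL1 superstep for the 1-D heat equation u_t = D·u_xx can be sketched as below, with μ_j = (2j−1)/j, ν_j = −(j−1)/j and μ̃_j = μ_j·2/(s²+s); the grid, stage count and initial condition are illustrative assumptions, and the coefficients should be checked against the paper before any real use.

        # Hedged sketch of one first-order RKL1 superstep for u_t = D u_xx.
        import numpy as np

        def rkl1_step(u, s, dt, L):
            """Advance u by one superstep using s Legendre stages."""
            w = 2.0 / (s * s + s)                     # mu~_1
            y_prev, y = u, u + w * dt * L(u)          # Y_0 and Y_1
            for j in range(2, s + 1):
                mu, nu = (2 * j - 1) / j, -(j - 1) / j
                y, y_prev = mu * y + nu * y_prev + mu * w * dt * L(y), y
            return y

        D, dx = 1.0, 1.0 / 128
        def lap(u):                                    # periodic 1-D Laplacian
            return D * (np.roll(u, 1) - 2 * u + np.roll(u, -1)) / dx ** 2

        x = np.linspace(0.0, 1.0, 128)
        u = np.exp(-((x - 0.5) / 0.05) ** 2)           # illustrative initial pulse
        s = 8
        dt = (dx ** 2 / (2 * D)) * (s * s + s) / 2     # superstep from explicit limit
        u = rkl1_step(u, s, dt, lap)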

  2. A new method for assessing content validity in model-based creation and iteration of eHealth interventions.

    Kassam-Adams, Nancy; Marsac, Meghan L; Kohser, Kristen L; Kenardy, Justin A; March, Sonja; Winston, Flaura K

    2015-04-15

    The advent of eHealth interventions to address psychological concerns and health behaviors has created new opportunities, including the ability to optimize the effectiveness of intervention activities and then deliver these activities consistently to a large number of individuals in need. Given that eHealth interventions grounded in a well-delineated theoretical model for change are more likely to be effective and that eHealth interventions can be costly to develop, assuring the match of final intervention content and activities to the underlying model is a key step. We propose to apply the concept of "content validity" as a crucial checkpoint to evaluate the extent to which proposed intervention activities in an eHealth intervention program are valid (eg, relevant and likely to be effective) for the specific mechanism of change that each is intended to target and the intended target population for the intervention. The aims of this paper are to define content validity as it applies to model-based eHealth intervention development, to present a feasible method for assessing content validity in this context, and to describe the implementation of this new method during the development of a Web-based intervention for children. We designed a practical 5-step method for assessing content validity in eHealth interventions that includes defining key intervention targets, delineating intervention activity-target pairings, identifying experts and using a survey tool to gather expert ratings of the relevance of each activity to its intended target, its likely effectiveness in achieving the intended target, and its appropriateness with a specific intended audience, and then using quantitative and qualitative results to identify intervention activities that may need modification. We applied this method during our development of the Coping Coach Web-based intervention for school-age children. In the evaluation of Coping Coach content validity, 15 experts from five countries

  3. Combining Different Conceptual Change Methods within Four-Step Constructivist Teaching Model: A Sample Teaching of Series and Parallel Circuits

    Ipek, Hava; Calik, Muammer

    2008-01-01

    Based on students' alternative conceptions of the topics "electric circuits", "electric charge flows within an electric circuit", "how the brightness of bulbs and the resistance changes in series and parallel circuits", the current study aims to present a combination of different conceptual change methods within a four-step constructivist teaching…

  4. Validation and Application of the Survey of Teaching Beliefs and Practices for Undergraduates (STEP-U): Identifying Factors Associated with Valuing Important Workplace Skills among Biology Students

    Marbach-Ad, Gili; Rietschel, Carly; Thompson, Katerina V.

    2016-01-01

    We present a novel assessment tool for measuring biology students’ values and experiences across their undergraduate degree program. Our Survey of Teaching Beliefs and Practices for Undergraduates (STEP-U) assesses the extent to which students value skills needed for the workplace (e.g., ability to work in groups) and their experiences with teaching practices purported to promote such skills (e.g., group work). The survey was validated through factor analyses in a large sample of biology seniors (n = 1389) and through response process analyses (five interviewees). The STEP-U skills items were characterized by two underlying factors: retention (e.g., memorization) and transfer (e.g., knowledge application). Multiple linear regression models were used to examine relationships between classroom experiences, values, and student characteristics (e.g., gender, cumulative grade point average [GPA], and research experience). Student demographic and experiential factors predicted the extent to which students valued particular skills. Students with lower GPAs valued retention skills more than those with higher GPAs. Students with research experience placed greater value on scientific writing and interdisciplinary understanding. Greater experience with specific teaching practices was associated with valuing the corresponding skills more highly. The STEP-U can provide feedback vital for designing curricula that better prepare students for their intended postgraduate careers. PMID:27856547

  5. Validity of the Demirjian method for dental age estimation for ...

    2015-02-04

    Dental age was calculated using Demirjian's method. Chronologic age was […] in order to avoid examiner bias at the time of data collection. […] age using the Demirjian method for different age groups and the total sample.

  6. System Identification Methods for Aircraft Flight Control Development and Validation

    1995-10-01

    System-identification methods compose a mathematical model, or series of models, from measurements of inputs and outputs of dynamic systems. This paper discusses the use of frequency-domain system-identification methods for the development and …

  7. Methods and validity of dietary assessments in four Scandinavian populations

    Bingham, S; Wiggins, H S; Englyst, H

    1982-01-01

    and duplicate collections of all food eaten, was validated by chemical analysis of the duplicates, by measuring 24-hour urine and fecal nitrogen excretion, and by comparing the constituents of the urine samples collected during the survey with similar collections 1-2 weeks later. There was good agreement between estimates of fat and protein intake obtained by food-table calculations of the 4-day weighed record and the chemically analyzed duplicates. Urinary plus fecal nitrogen excretion was equal to estimated nitrogen intake during the survey, and no discernible changes in urinary output occurred after…

  8. Validation of physics model of FARE-Tool by comparison of RFSP flux simulations with measurements at discrete refuelling steps

    Shad, M.A.

    1995-01-01

    In CANDU 6 the on-power refuelling is done in the direction of the coolant flow through the channel, flow being in alternate direction in adjacent channels. The channel flow pushes the fuel string (a total of 20 bundles with the 8-bundle fuelling scheme) towards the downstream Fuelling Machine (FM), obviating the need for the upstream FM to insert the rams into the active core to push the fuel. In some channels of the outer core region, however, the coolant flow is low and the hydraulic drag is not enough to push the fuel string. In these channels a Flow Assist Ram Extension (FARE) tool is used during refuelling to augment the flow and hence enhance the hydraulic drag required to push the fuel string. The FARE-Tool is a strong neutron absorber and its use results in a severe local flux depression. During the refuelling process, when the FARE-Tool travels in the immediate vicinity of a Reactor Regulating System (RRS) detector, the spatial control system causes a short-term zone-fill reduction in an attempt to maintain the reference zone powers. This short-term zone drain may in turn result in a single channel ROP trip of the SDS1 or SDS2 reactor protection system. Therefore, the control room operator is always interested in minimizing the occurrence of this category of trip for obvious economic reasons. The physics modelling of the FARE-Tool used in the 2-neutron-energy-group 3-dimensional diffusion code RFSP is validated by comparing the simulated response of the in-core detectors with the measured response of the same detectors during refuelling. These measurements were recorded during a specially scheduled refuelling of channel B09 at the Point Lepreau Generating Station on 1993 July 6 at 3591 Equivalent Full Power Days (EFPDs). 3 refs., 10 tabs., 1 fig

  9. Application of the thermal step method to space charge measurements in inhomogeneous solid insulating structures: A theoretical approach

    Cernomorcenco, Andrei; Notingher, Petru Jr.

    2008-01-01

    The thermal step method is a nondestructive technique for determining electric charge distribution across solid insulating structures. It consists in measuring and analyzing a transient capacitive current due to the redistribution of influence charges when the sample is crossed by a thermal wave. This work concerns the application of the technique to inhomogeneous insulating structures. A general equation of the thermal step current appearing in such a sample is established. It is shown that this expression is close to the one corresponding to a homogeneous sample and allows using similar techniques for calculating electric field and charge distribution

  10. A long-term validation of the modernised DC-ARC-OES solid-sample method.

    Flórián, K; Hassler, J; Förster, O

    2001-12-01

    The validation procedure based on the ISO 17025 standard has been used to study and illustrate both the long-term stability of the calibration process of the DC-ARC solid-sample spectrometric method and the main validation criteria of the method. In the calculation of the validation characteristics depending on linearity (calibration), the fulfilment of prerequisite criteria such as normality and homoscedasticity was also checked. In order to decide whether there are any trends in the time-variation of the analytical signal, the Neumann trend test was also applied and evaluated. Finally, a comparison with similar validation data of the ETV-ICP-OES method was carried out.

  11. Validation of an HPLC–UV method for the determination of digoxin residues on the surface of manufacturing equipment

    ZORAN B. TODOROVIĆ

    2009-09-01

    In the pharmaceutical industry, an important step consists in the removal of possible drug residues from the equipment and areas involved. The cleaning procedures must be validated, and methods to determine trace amounts of drugs therefore have to be considered with special attention. An HPLC–UV method for the determination of digoxin residues on stainless steel surfaces was developed and validated in order to control a cleaning procedure. Cotton swabs moistened with methanol were used to remove any residues of drugs from stainless steel surfaces, giving recoveries of 85.9, 85.2 and 78.7% for three concentration levels. The precision of the results, reported as the relative standard deviation (RSD), was below 6.3%. The method was validated over a concentration range of 0.05–12.5 µg mL⁻¹. Low quantities of drug residues were determined by HPLC–UV using a Symmetry C18 column (150 × 4.6 mm, 5 µm) at 20 °C with an acetonitrile–water (28:72, v/v) mobile phase at a flow rate of 1.1 mL min⁻¹ and an injection volume of 100 µL, with detection at 220 nm. A simple, selective and sensitive HPLC–UV assay for the determination of digoxin residues on stainless steel was developed, validated and applied.

  12. Validation of an analytical method for determining halothane in urine as an instrument for evaluating occupational exposure

    Gonzalez Chamorro, Rita Maria; Jaime Novas, Arelis; Diaz Padron, Heliodora

    2010-01-01

    Occupational exposure to harmful substances may cause significant changes in the normal physiology of the organism when adequate safety measures are not taken in time in a workplace where the risk may be present. Among the chemical risks that may affect workers' health are the inhalable anesthetic agents. With the objective of taking the first steps toward the introduction of an epidemiological surveillance system for this personnel, an analytical method for determining this anesthetic in urine was validated under the instrumental conditions available in our laboratory. To carry out this validation, the following parameters were taken into account: specificity, linearity, precision, accuracy, detection limit and quantification limit; the uncertainty of the method was also calculated. In the validation procedure it was found that the technique is specific and precise; the detection limit was 0.118 µg/L and the quantification limit 0.354 µg/L. The global uncertainty was 0.243, and the expanded uncertainty 0.486. The validated method, together with the subsequent introduction of biological exposure limits, will serve as an auxiliary means of diagnosis allowing periodic control of personnel exposure.

  13. Rapid one-step selection method for generating nucleic acid aptamers: development of a DNA aptamer against α-bungarotoxin.

    Lasse H Lauridsen

    BACKGROUND: Nucleic acid-based therapeutic approaches have gained significant interest in recent years for the development of therapeutics against many diseases. Recently, research on aptamers led to the marketing of Macugen®, an inhibitor of vascular endothelial growth factor (VEGF) for the treatment of age-related macular degeneration (AMD). Aptamer technology may prove useful as a therapeutic alternative against an array of human maladies. Considering the increasing global interest in aptamer technology, which rivals antibody-mediated therapeutic approaches, a simplified selection technique, possibly in one step, is required for developing aptamers in a limited time period. PRINCIPAL FINDINGS: Herein, we present a simple one-step selection of DNA aptamers against α-bungarotoxin. A toxin-immobilized glass coverslip was subjected to nucleic acid pool binding and extensive washing, followed by PCR enrichment of the selected aptamers. One round of selection successfully identified a DNA aptamer sequence with a binding affinity of 7.58 µM. CONCLUSION: We have demonstrated a one-step method for rapid production of nucleic acid aptamers. Although the reported binding affinity is in the low micromolar range, we believe that this could be further improved by using larger targets, increasing the stringency of selection and combining a capillary electrophoresis separation prior to the one-step selection. Furthermore, the method presented here is a user-friendly, cheap and easy way of deriving an aptamer, unlike the time-consuming conventional SELEX-based approach. The most important application of this method is that chemically modified nucleic acid libraries can also be used for aptamer selection, as it requires only one enzymatic step. The method could equally be suitable for developing RNA aptamers.

  14. Content validity of methods to assess malnutrition in cancer patients: a systematic review

    Sealy, Martine; Nijholt, Willemke; Stuiver, M.M.; van der Berg, M.M.; Ottery, Faith D.; van der Schans, Cees; Roodenburg, Jan L N; Jager-Wittenaar, Harriët

    Rationale: Inadequate operationalisation of the multidimensional concept of malnutrition may result in inadequate evaluation of nutritional status. In this review we aimed to assess content validity of methods

  15. Critical Values for Lawshe's Content Validity Ratio: Revisiting the Original Methods of Calculation

    Ayre, Colin; Scally, Andrew John

    2014-01-01

    The content validity ratio originally proposed by Lawshe is widely used to quantify content validity and yet methods used to calculate the original critical values were never reported. Methods for original calculation of critical values are suggested along with tables of exact binomial probabilities.
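
    The exact-binomial calculation referred to can be sketched as follows: under the null hypothesis each panelist rates an item "essential" with probability 0.5, and the critical CVR follows from the smallest significant count of "essential" ratings. A short Python sketch (scipy assumed available):

```python
from scipy.stats import binom

def cvr_critical(n_panel, alpha=0.05):
    """Smallest Lawshe CVR significant at level alpha.

    Under H0 each of the n_panel experts independently rates the item
    'essential' with probability 0.5; find the minimum count n_e with
    P(X >= n_e) <= alpha (one-tailed, exact binomial) and convert it
    to a CVR value via CVR = (n_e - N/2) / (N/2).
    """
    for n_e in range(n_panel + 1):
        if binom.sf(n_e - 1, n_panel, 0.5) <= alpha:  # P(X >= n_e)
            return (n_e - n_panel / 2) / (n_panel / 2)
    return None

# E.g. a panel of 8 requires 7 'essential' ratings, giving CVR = 0.75.
for n in (5, 8, 10, 15, 20, 40):
    print(n, round(cvr_critical(n), 3))
```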

  16. Comparison on genomic predictions using GBLUP models and two single-step blending methods with different relationship matrices in the Nordic Holstein population

    Gao, Hongding; Christensen, Ole Fredslund; Madsen, Per

    2012-01-01

    Background: A single-step blending approach allows genomic prediction using information on genotyped and non-genotyped animals simultaneously. However, the combined relationship matrix in a single-step method may need to be adjusted because marker-based and pedigree-based relationship matrices may not be on the same scale. The same may apply when a GBLUP model includes both genomic breeding values and residual polygenic effects. The objective of this study was to compare single-step blending methods and GBLUP methods, with and without adjustment of the genomic relationship matrix, for genomic prediction of 16 traits in the Nordic Holstein population. The methods compared were: 1) a simple GBLUP method, 2) a GBLUP method with a polygenic effect, 3) an adjusted GBLUP method with a polygenic effect, 4) a single-step blending method, and 5) an adjusted single-step blending method. In the adjusted GBLUP and single-step methods, the genomic relationship matrix was adjusted...
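
    One common form of such a scale adjustment (not necessarily the exact parameterisation used in the study) rescales G as G* = a + bG so that its mean diagonal and mean overall elements match those of the pedigree relationship matrix A22 for the genotyped animals. A minimal sketch:

```python
import numpy as np

def adjust_g(G, A22):
    """Rescale a genomic relationship matrix G as G* = a + b*G so that
    the means of its diagonal and of all its elements match those of
    the pedigree-based matrix A22 for the genotyped animals.

    One common adjustment for the scale difference mentioned above;
    the study may have used a different parameterisation.
    """
    # Two linear equations in (a, b):
    #   a + b * mean(diag(G)) = mean(diag(A22))
    #   a + b * mean(G)       = mean(A22)
    lhs = np.array([[1.0, np.mean(np.diag(G))],
                    [1.0, np.mean(G)]])
    rhs = np.array([np.mean(np.diag(A22)), np.mean(A22)])
    a, b = np.linalg.solve(lhs, rhs)
    return a + b * G
```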

  17. A VALIDATED STABILITY INDICATED RP-HPLC METHOD FOR DUTASTERIDE

    D. Pavan Kumar; Naga Jhansi; G. Srinivasa Rao; Kirti Kumar Jain

    2018-01-01

    A simple, stability-indicating, isocratic, reversed-phase high-performance liquid chromatographic (RP-LC) related-substances method was developed for dutasteride API. This method separates the impurities that co-elute in the pharmacopoeial method. Successful separation of degradation impurities and synthetic impurities was achieved on a YMC Triart phenyl column. Chromatographic separation was carried out on a YMC Triart phenyl (150 × 4.6 mm, 3.0 µm) column using 0.01 M potassium dihydrogen phosphate ...

  18. Mercury speciation analysis in seafood by species-specific isotope dilution: method validation and occurrence data

    Clemens, Stephanie; Guerin, Thierry [Agence Nationale de Securite Sanitaire de l'Alimentation, Laboratoire de Securite des Aliments de Maisons-Alfort, Unite des Contaminants Inorganiques et Mineraux de l'Environnement, ANSES, Maisons-Alfort (France)]; Monperrus, Mathilde; Donard, Olivier F.X.; Amouroux, David [IPREM UMR 5254 CNRS - Universite de Pau et des Pays de l'Adour, Laboratoire de Chimie Analytique Bio-Inorganique et Environnement, Institut des Sciences Analytiques et de Physico-chimie pour l'Environnement et les Materiaux, Pau Cedex (France)]

    2011-11-15

    Methylmercury (MeHg) and total mercury (THg) in seafood were determined using species-specific isotope dilution analysis and gas chromatography combined with inductively coupled plasma mass spectrometry. Sample preparation methods (extraction and derivatization steps) were evaluated on certified reference materials using isotopically enriched Hg species. Solid-liquid extraction, derivatization by propylation and automated agitation gave excellent accuracy and precision. Satisfactory figures of merit for the selected method were obtained in terms of limit of quantification (1.2 µg Hg kg-1 for MeHg and 1.4 µg Hg kg-1 for THg), repeatability (1.3-1.7%), intermediate precision (1.5% for MeHg and 2.2% for THg) and trueness (bias less than 7%). By means of a recent strategy based on accuracy profiles (β-expectation tolerance intervals), the selected method was successfully validated in the range of approximately 0.15-5.1 mg kg-1 for MeHg and 0.27-5.2 mg kg-1 for THg. The probability β was set to 95% and the acceptability limits to ±15%. The method was then applied to 62 seafood samples representative of consumption in the French population. The MeHg concentrations were generally low (1.9-588 µg kg-1), and the percentage of MeHg varied from 28% to 98% in shellfish and from 84% to 97% in fish. For all real samples tested, methylation and demethylation reactions were not significant, except in one oyster sample. The method presented here could be used for monitoring food contamination by MeHg and inorganic Hg in order to more accurately assess human exposure. (orig.)
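
    For orientation, the core of (non-speciated) isotope dilution is a single mass-balance equation; a minimal sketch with hypothetical numbers follows. The species-specific double-spike treatment used in the paper additionally corrects for methylation and demethylation and is not reproduced here:

```python
def idms_concentration(c_spike, m_spike, m_sample,
                       r_spike, r_sample, r_blend,
                       ab_spike, ab_sample):
    """Simplified single isotope-dilution equation.

    r_* are isotope amount ratios n(reference isotope)/n(spike isotope)
    in the enriched spike, the natural sample and the measured blend;
    ab_* are the spike-isotope abundances in spike and sample.
    """
    return (c_spike * (m_spike / m_sample) * (ab_spike / ab_sample)
            * (r_blend - r_spike) / (r_sample - r_blend))

# Illustrative numbers only (ug/g, g): a 201Hg-enriched spike diluted
# into a natural-abundance sample, blend ratio measured by GC-ICP-MS.
print(idms_concentration(c_spike=1.0, m_spike=0.1, m_sample=5.0,
                         r_spike=0.02, r_sample=2.27, r_blend=0.8,
                         ab_spike=0.96, ab_sample=0.132))
```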

  19. A Rapid, Simple, and Validated RP-HPLC Method for Quantitative Analysis of Levofloxacin in Human Plasma

    Dion Notario

    2017-04-01

    To conduct a bioequivalence study for a copy product of levofloxacin (LEV), a simple and validated analytical method was needed, but previously developed methods were too complicated. For this reason, a simple and rapid high-performance liquid chromatography method was developed and validated for LEV quantification in human plasma. Chromatographic separation was performed under isocratic elution on a Luna Phenomenex® C18 (150 × 4.6 mm, 5 µm) column. The mobile phase comprised acetonitrile, methanol, and 25 mM phosphate buffer adjusted to pH 3.0 (13:7:80, v/v/v), pumped at a flow rate of 1.5 mL/min. Detection was performed with a UV detector at a wavelength of 280 nm. Samples were prepared by adding acetonitrile followed by centrifugation to precipitate plasma proteins, then successively by evaporation and reconstitution steps. The optimized method meets the requirements of the validation parameters, which included linearity (r = 0.995), sensitivity (LLOQ and LOD of 1.77 and 0.57 µg/mL, respectively), accuracy (%error ≤ 12% above the LLOQ and ≤ 20% at the LLOQ), precision (RSD ≤ 9%), and robustness over the range 1.77-28.83 µg/mL. The method can therefore be used for routine analysis of LEV in human plasma as well as in bioequivalence studies of LEV.

  20. Development and Validation of a Stability-Indicating LC-UV Method ...

    Keywords: Ketotifen, Cetirizine, Stability indicating method, Stressed conditions, Validation. Tropical ... in biological fluids [13] are also reported. A stability-indicating HPLC method is reported for ketotifen where the drug is ... paracetamol, cetirizine.

  1. Large-area gold nanohole arrays fabricated by one-step method for surface plasmon resonance biochemical sensing.

    Qi, Huijie; Niu, Lihong; Zhang, Jie; Chen, Jian; Wang, Shujie; Yang, Jingjing; Guo, Siyi; Lawson, Tom; Shi, Bingyang; Song, Chunpeng

    2018-04-01

    Surface plasmon resonance (SPR) nanosensors based on metallic nanohole arrays have been widely reported to detect binding interactions in biological specimens. A simple and effective method for constructing nanoscale arrays is essential for the development of SPR nanosensors. In this work, we report a one-step method to fabricate nanohole arrays by thermal nanoimprinting in the matrix of an IPS (intermediate polymer stamp). No additional etching process or supporting substrate is required. The preparation process is simple, time-saving and compatible with roll-to-roll processing, potentially allowing mass production. Moreover, the nanohole arrays were integrated into a detection platform as SPR sensors to investigate different types of biological binding interactions. The results demonstrate that our one-step method can be used to efficiently fabricate large-area, uniform nanohole arrays for biochemical sensing.

  2. Validation of two ribosomal RNA removal methods for microbial metatranscriptomics

    He, Shaomei; Wurtzel, Omri; Singh, Kanwar; Froula, Jeff L; Yilmaz, Suzan; Tringe, Susannah G; Wang, Zhong; Chen, Feng; Lindquist, Erika A; Sorek, Rotem; Hugenholtz, Philip

    2010-10-01

    The predominance of rRNAs in the transcriptome is a major technical challenge in sequence-based analysis of cDNAs from microbial isolates and communities. Several approaches have been applied to deplete rRNAs from (meta)transcriptomes, but no systematic investigation of potential biases introduced by any of these approaches has been reported. Here we validated the effectiveness and fidelity of the two most commonly used approaches, subtractive hybridization and exonuclease digestion, as well as combinations of these treatments, on two synthetic five-microorganism metatranscriptomes using massively parallel sequencing. We found that the effectiveness of rRNA removal was a function of community composition and RNA integrity for these treatments. Subtractive hybridization alone introduced the least bias in relative transcript abundance, whereas exonuclease digestion and in particular the combined treatments greatly compromised mRNA abundance fidelity. Illumina sequencing itself can also compromise quantitative data analysis by introducing a G+C bias between runs.

  3. Method for validating radiobiological samples using a linear accelerator

    Brengues, Muriel; Liu, David; Korn, Ronald; Zenhausern, Frederic

    2014-01-01

    There is an immediate need for rapid triage of the population in case of a large-scale exposure to ionizing radiation. Knowing the dose absorbed by the body will allow clinicians to administer medical treatment for the best chance of recovery for the victim. In addition, today's radiotherapy treatment could benefit from additional information regarding the patient's sensitivity to radiation before starting the treatment. As of today, there is no system in place to respond to this demand. This paper describes specific procedures to mimic the effects of human exposure to ionizing radiation, creating the tools for optimization of administered radiation dosimetry for radiotherapy and/or for estimating the doses of radiation received accidentally during a radiation event that could pose a danger to the public. In order to obtain irradiated biological samples to study ionizing radiation absorbed by the body, we performed ex vivo irradiation of human blood samples using the linear accelerator (LINAC). The LINAC was implemented and calibrated for irradiating human whole blood samples. To test the calibration, a 2 Gy test run was successfully performed on a tube filled with water, with an accuracy of 3% in dose distribution. To validate the technique, blood samples were irradiated ex vivo and the results were analyzed using a gene expression assay to follow the effect of the ionizing irradiation by characterizing dose-responsive biomarkers from radiobiological assays. The response of 5 genes was monitored, showing increased expression with the dose of radiation received. Blood samples treated with the LINAC can provide effective irradiated blood samples suitable for molecular profiling to validate radiobiological measurements via gene-expression-based biodosimetry tools. (orig.)

  4. Method for validating radiobiological samples using a linear accelerator.

    Brengues, Muriel; Liu, David; Korn, Ronald; Zenhausern, Frederic

    2014-04-29

    There is an immediate need for rapid triage of the population in case of a large-scale exposure to ionizing radiation. Knowing the dose absorbed by the body will allow clinicians to administer medical treatment for the best chance of recovery for the victim. In addition, today's radiotherapy treatment could benefit from additional information regarding the patient's sensitivity to radiation before starting the treatment. As of today, there is no system in place to respond to this demand. This paper describes specific procedures to mimic the effects of human exposure to ionizing radiation, creating the tools for optimization of administered radiation dosimetry for radiotherapy and/or for estimating the doses of radiation received accidentally during a radiation event that could pose a danger to the public. In order to obtain irradiated biological samples to study ionizing radiation absorbed by the body, we performed ex vivo irradiation of human blood samples using the linear accelerator (LINAC). The LINAC was implemented and calibrated for irradiating human whole blood samples. To test the calibration, a 2 Gy test run was successfully performed on a tube filled with water, with an accuracy of 3% in dose distribution. To validate the technique, blood samples were irradiated ex vivo and the results were analyzed using a gene expression assay to follow the effect of the ionizing irradiation by characterizing dose-responsive biomarkers from radiobiological assays. The response of 5 genes was monitored, showing increased expression with the dose of radiation received. Blood samples treated with the LINAC can provide effective irradiated blood samples suitable for molecular profiling to validate radiobiological measurements via gene-expression-based biodosimetry tools.

  5. Optimization of One-Step In Situ Transesterification Method for Accurate Quantification of EPA in Nannochloropsis gaditana

    Yuting Tang

    2016-11-01

    Microalgae are a valuable source of lipid feedstocks for biodiesel and of valuable omega-3 fatty acids. Nannochloropsis gaditana has emerged as a promising producer of eicosapentaenoic acid (EPA) due to its fast growth rate and high EPA content. In the present study, the fatty acid profile of Nannochloropsis gaditana was found to be naturally high in EPA and devoid of docosahexaenoic acid (DHA), thereby providing an opportunity to maximize the efficacy of EPA production. Using an optimized one-step in situ transesterification method (methanol:biomass = 90 mL/g; HCl 5% by vol.; 70 °C; 1.5 h), the maximum fatty acid methyl ester (FAME) yield of Nannochloropsis gaditana cultivated under rich conditions was quantified as 10.04% ± 0.08% by weight, with EPA yields as high as 4.02% ± 0.17% based on dry biomass. The total FAME and EPA yields were 1.58- and 1.23-fold higher, respectively, than those obtained using the conventional two-step method (solvent system: methanol and chloroform). This one-step in situ method provides a fast and simple way to measure FAME yields and could serve as a promising method for generating eicosapentaenoic acid methyl ester from microalgae.

  6. Two-step method for creating a gastric tube during laparoscopic-thoracoscopic Ivor-Lewis esophagectomy.

    Liu, Yu; Li, Ji-Jia; Zu, Peng; Liu, Hong-Xu; Yu, Zhan-Wu; Ren, Yi

    2017-12-07

    To introduce a two-step method for creating a gastric tube during laparoscopic-thoracoscopic Ivor-Lewis esophagectomy and assess its clinical application. One hundred and twenty-two patients with middle or lower esophageal cancer who underwent laparoscopic-thoracoscopic Ivor-Lewis esophagectomy at Liaoning Cancer Hospital and Institute from March 2014 to March 2016 were included in this study and divided into two groups based on the procedure used for creating a gastric tube. One group used a two-step method for creating the gastric tube, and the other group used the conventional method. The two groups were compared regarding operating time, surgical complications, and the number of stapler cartridges used. The mean operating time was significantly shorter in the two-step method group than in the conventional method group [238 (179-293) min vs 272 (189-347) min, P < 0.05]. The two-step method for creating a gastric tube during laparoscopic-thoracoscopic Ivor-Lewis esophagectomy has the advantages of simple operation, minimal damage to the tubular stomach, and reduced use of stapler cartridges.

  7. One-step leapfrog ADI-FDTD method for simulating electromagnetic wave propagation in general dispersive media.

    Wang, Xiang-Hua; Yin, Wen-Yan; Chen, Zhi Zhang David

    2013-09-09

    The one-step leapfrog alternating-direction-implicit finite-difference time-domain (ADI-FDTD) method is reformulated for simulating general electrically dispersive media. It models material dispersive properties with equivalent polarization currents. These currents are solved with the auxiliary differential equation (ADE) approach and then incorporated into the one-step leapfrog ADI-FDTD method. The final equations are presented in a form similar to that of the conventional FDTD method but with a second-order perturbation. The adapted method is then applied to characterize (a) electromagnetic wave propagation in a rectangular waveguide loaded with a magnetized plasma slab, (b) the transmission coefficient of a plane wave normally incident on a monolayer graphene sheet biased by a magnetostatic field, and (c) surface plasmon polariton (SPP) propagation along a monolayer graphene sheet biased by an electrostatic field. The numerical results verify the stability, accuracy and computational efficiency of the proposed one-step leapfrog ADI-FDTD algorithm in comparison with analytical results and results obtained with other methods.
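
    As an illustration of the auxiliary-differential-equation idea underlying the polarization-current treatment described above, here is a minimal conventional explicit 1D FDTD sketch for a Drude medium. This is not the authors' one-step leapfrog ADI scheme (which is implicit and unconditionally stable); all parameters are arbitrary:

```python
import numpy as np

# Minimal explicit 1D FDTD with an auxiliary differential equation (ADE)
# for a Drude medium: dJ/dt + gamma*J = eps0*wp^2*E.
c0, eps0, mu0 = 3e8, 8.854e-12, 4e-7 * np.pi
nx, dx = 400, 1e-3
dt = 0.5 * dx / c0                      # explicit Courant limit
wp, gamma = 2 * np.pi * 10e9, 1e10      # hypothetical Drude parameters

E = np.zeros(nx)
H = np.zeros(nx - 1)
J = np.zeros(nx)
drude = np.arange(nx) > 200             # dispersive half of the grid
ka = (1 - gamma * dt / 2) / (1 + gamma * dt / 2)
kb = eps0 * wp**2 * dt / (1 + gamma * dt / 2)

for n in range(2000):
    H += dt / (mu0 * dx) * np.diff(E)               # Faraday update
    curlH = np.zeros(nx)
    curlH[1:-1] = np.diff(H) / dx
    E[1:-1] += dt / eps0 * (curlH[1:-1] - J[1:-1])  # Ampere with J
    J[drude] = ka * J[drude] + kb * E[drude]        # ADE current update
    E[50] += np.exp(-((n - 60) / 20.0) ** 2)        # soft Gaussian source
```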

  8. Development and validation of a spectroscopic method for the ...

    Tropical Journal of Pharmaceutical Research ... Purpose: To develop a new analytical method for the quantitative analysis of miconazole ... a simple, reliable and robust method for the characterization of a mixture of the drugs in a dosage form.

  9. Validated method for the detection and quantitation of synthetic ...

    These methods were applied to postmortem cases from the Johannesburg Forensic Pathology Services Medicolegal Laboratory (FPS-MLL) to assess the prevalence of these synthetic cannabinoids amongst the local postmortem population. Urine samples were extracted utilizing a solid phase extraction (SPE) method, ...

  10. Validation, verification and comparison: Adopting new methods in ...

    2005-07-03

    Jul 3, 2005 ... chemical analyses can be assumed to be homogeneously distributed. When introduced ... For water microbiology this has been resolved with the publication of ... A validation exercise can result in a laboratory adopting the method. If, however, the new ... For methods used for environmental samples, a range of ...

  11. Testing and Validation of the Dynamic Inertia Measurement Method

    Chin, Alexander; Herrera, Claudia; Spivey, Natalie; Fladung, William; Cloutier, David

    2015-01-01

    This presentation describes the DIM method and how it measures the inertia properties of an object by analyzing the frequency response functions measured during a ground vibration test (GVT). The DIM method has been in development at the University of Cincinnati and has shown success on a variety of small scale test articles. The NASA AFRC version was modified for larger applications.

  12. Two-step design method for highly compact three-dimensional freeform optical system for LED surface light source.

    Mao, Xianglong; Li, Hongtao; Han, Yanjun; Luo, Yi

    2014-10-20

    Designing an illumination system for a surface light source with a strict compactness requirement is quite challenging, especially in the general three-dimensional (3D) case. In accordance with the two key features of an expected illumination distribution, i.e., a well-controlled boundary and a precise illumination pattern, a two-step design method is proposed in this paper for highly compact 3D freeform illumination systems. In the first step, a target shape scaling strategy is combined with an iterative feedback modification algorithm to generate an optimized freeform optical system with a well-controlled boundary of the target distribution. In the second step, a set of selected radii of the system obtained in the first step is optimized to further improve the illumination quality within the target region. The method is quite flexible and effective for designing highly compact optical systems, with almost no restriction on the shape of the desired target field. As examples, three highly compact freeform lenses with a ratio of lens center height h to maximum source dimension D of ≤ 2.5:1 are designed for LED surface light sources to form a uniform illumination distribution on a rectangular, a cross-shaped and a complex cross-pierced target plane, respectively. A high light control efficiency of η > 0.7 as well as a low relative standard illumination deviation of RSD < 0.07 is obtained simultaneously for all three design examples.

  13. A Four-Step Block Hybrid Adams-Moulton Methods For The Solution ...

    This paper examines application of the Adams-Moulton method and proposes a modified self-starting continuous formula, called the hybrid Adams-Moulton method, for the case k = 4. It allows evaluation at both grid and off-grid points to obtain the discrete schemes used in the block methods. The order, error constant and ...

  14. On some properties of the block linear multi-step methods | Chollom ...

    The convergence, stability and order of block linear multistep methods have in the past been determined based on individual members of the block. In this paper, methods are proposed to examine the properties of the entire block. Some block linear multistep methods have been considered, and their convergence, stability and ...

  15. Third-order-accurate numerical methods for efficient, large time-step solutions of mixed linear and nonlinear problems

    Cobb, J.W.

    1995-02-01

    There is an increasing need for more accurate numerical methods for large-scale nonlinear magneto-fluid turbulence calculations. These methods should not only increase the current state of the art in terms of accuracy, but should also continue to optimize other desired properties such as simplicity, minimized computation, minimized memory requirements, and robust stability. This includes the ability to stably solve stiff problems with long time-steps. This work discusses a general methodology for deriving higher-order numerical methods. It also discusses how the selection of various choices can affect the desired properties. The explicit discussion focuses on third-order Runge-Kutta methods, including general solutions and five examples. The study investigates the linear numerical analysis of these methods, including their accuracy, general stability, and stiff stability. Additional appendices discuss linear multistep methods, discuss directions for further work, and exhibit numerical analysis results for some other commonly used lower-order methods.
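
    As a concrete instance of the family discussed, the classical third-order Runge-Kutta scheme can be written and checked in a few lines; the convergence test below is illustrative only:

```python
import numpy as np

def rk3_step(f, t, y, h):
    """One step of the classical third-order Runge-Kutta scheme; the
    paper derives a family of such methods, of which this is just one
    representative."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h, y - h * k1 + 2 * h * k2)
    return y + h / 6 * (k1 + 4 * k2 + k3)

# Convergence check on y' = -y, y(0) = 1: halving h cuts the error ~8x,
# as expected for a third-order method.
f = lambda t, y: -y
for h in (0.1, 0.05):
    y, t = 1.0, 0.0
    while t < 1.0 - 1e-12:
        y = rk3_step(f, t, y, h)
        t += h
    print(h, abs(y - np.exp(-1)))
```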

  16. The full size validation of remanent life assessment methods

    Hepworth, J.K.; Williams, J.A.

    1988-03-01

    A range of possible life assessment techniques for the remanent life appraisal of creeping structures is available in the published literature. However, due to the safety implications, the true conservatism of such methods cannot be assessed on operating plant. Consequently, the CEGB set up a four-vessel programme in the Pressure Vessel Test Facility at the Marchwood Engineering Laboratories of the CEGB to underwrite and quantify the accuracy of these methods. The application of two non-destructive methods, namely strain monitoring and hardness measurement, to the data generated during about 12,000 hours of testing is examined. The current state of development of these methods is reviewed. Finally, the future CEGB programme relating to these vessels is discussed. (author)

  17. A Validated Method for the Detection and Quantitation of Synthetic ...

    NICOLAAS

    An LC-HRMS (liquid chromatography coupled with high-resolution mass spectrometry) method for the ... its ease of availability from head shops (shops selling predomi... cannabinoids in whole blood in plastic containers with several common ...

  18. Validation of Standing Wave Liner Impedance Measurement Method, Phase I

    National Aeronautics and Space Administration — Hersh Acoustical Engineering, Inc. proposes to establish the feasibility and practicality of using the Standing Wave Method (SWM) to measure the impedance of...

  19. Validation of EIA sampling methods - bacterial and biochemical analysis

    Sheelu, G.; LokaBharathi, P.A.; Nair, S.; Raghukumar, C.; Mohandass, C.

    ... to temporal factors. A paired t-test between pre- and post-disturbance samples suggested that the above methods of sampling and variables like TC, protein and TOC could be used for monitoring disturbance ...

  20. Development and Validation of Improved Method for Fingerprint ...

    Methods: The optimum high-performance capillary electrophoresis (HPCE) ... organic solvent, and were analyzed using HPLC ... quantified to 200 mL with water and centrifuged at ... for the analysis of flavonoids in selected Thai plants by ...

  1. Validation of some FM-based fitness for purpose methods

    Broekhoven, M.J.G. [Ministry of Social Affairs, The Hague (Netherlands)]

    1988-12-31

    The reliability of several FM-based fitness-for-purpose methods has been investigated on a number of objects for which accurate fracture data were available from experiments or from practice, viz. 23 wide plates of 30 mm thickness (surface and through-thickness cracks, cracks at holes, with and without welds), 45 pipeline sections with cracks, pressure vessels and a T-joint. The methods applied mainly comprise ASME XI, PD 6493 and R6. This contribution reviews the results. (author). 11 refs.

  2. A mixed methods inquiry into the validity of data

    Kristensen, Erling Lundager; Nielsen, Dorthe B; Jensen, Laila N

    2008-01-01

    ... increased awareness and dialogue between researchers and farmers or other stakeholders about the background for data collection related to management and changes in management. By integrating quantitative and qualitative research methods in a mixed methods research approach, the researchers will improve ... greatly by adding a qualitative perspective to the quantitative approach, as illustrated and discussed in this article. The combined approach requires, besides skills and interdisciplinary collaboration, also openness, reflection and scepticism from the involved scientists, but the benefits may be extended ...

  3. Statistical Methods and Sampling Design for Estimating Step Trends in Surface-Water Quality

    Hirsch, Robert M.

    1988-01-01

    This paper addresses two components of the problem of estimating the magnitude of step trends in surface water quality. The first is finding a robust estimator appropriate to the data characteristics expected in water-quality time series. The J. L. Hodges-E. L. Lehmann class of estimators is found to be robust in comparison to other nonparametric and moment-based estimators. A seasonal Hodges-Lehmann estimator is developed and shown to have desirable properties. Second, the effectiveness of various sampling strategies is examined using Monte Carlo simulation coupled with application of this estimator. The simulation is based on a large set of total phosphorus data from the Potomac River. To assure that the simulated records have realistic properties, the data are modeled in a multiplicative fashion incorporating flow, hysteresis, seasonal, and noise components. The results demonstrate the importance of balancing the length of the two sampling periods and balancing the number of data values between the two periods.
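
    The seasonal Hodges-Lehmann step-trend estimator described above reduces to the median of all between-period pairwise differences formed within each season. A minimal sketch on made-up data:

```python
import numpy as np

def seasonal_hodges_lehmann(pre, post):
    """Seasonal Hodges-Lehmann estimate of a step trend: the median of
    all between-period pairwise differences formed within each season.

    `pre` and `post` map season labels (e.g. month names) to arrays of
    concentrations observed before and after the intervention.
    """
    diffs = []
    for season in pre:
        a, b = np.asarray(pre[season]), np.asarray(post[season])
        diffs.extend((b[:, None] - a[None, :]).ravel())  # all pairs
    return np.median(diffs)

# Hypothetical total-phosphorus data for two seasons:
pre = {"winter": [0.12, 0.15, 0.11], "summer": [0.30, 0.28]}
post = {"winter": [0.09, 0.10, 0.08], "summer": [0.22, 0.25]}
print(seasonal_hodges_lehmann(pre, post))  # negative => downward step
```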

  4. Fibrillar polyaniline/diatomite composite synthesized by one-step in situ polymerization method

    Li Xingwei; Li Xiaoxuan; Wang Gengchao

    2005-01-01

    A fibrillar polyaniline/diatomite composite was prepared by one-step in situ polymerization of aniline in a dispersed system of diatomite, and was characterized via Fourier-transform infrared spectra (FT-IR), UV-vis-NIR spectra, wide-angle X-ray diffraction (WXRD), thermogravimetric analysis (TGA) and transmission electron microscopy (TEM), as well as conductivity measurements. The morphology of the composite is that of uniform nanofibers with diameters of about 50-80 nm. The conductivity of a polyaniline/diatomite composite containing 28% polyaniline is 0.29 S cm-1 at 25 °C, and its thermal degradation temperature reaches 493 °C in air. The composite has potential commercial applications as a filler for electromagnetic shielding materials and conductive coatings.

  5. The measurement of unsaturated hydraulic conductivity from one-step outflow method

    Lee, S. H.; Hwang, J. H.; Lee, J. M.; Kim, C. R.

    2003-01-01

    One of the most important aspects of constructing a radioactive waste repository is safety. The fundamental function of the repository is to isolate completely and permanently the radioactive wastes disposed of in it. Nevertheless, nuclides may be released from the repository, under either normal or abnormal conditions, for various causes. The hydraulic conductivity is related to the transport of nuclides in soil; however, research on the hydraulic characteristics of unsaturated soil is at present insufficient. A fast and easy procedure for estimating unsaturated flow parameters is presented. The estimation is based on direct measurement of the retention characteristics combined with inverse estimation of the hydraulic conductivity characteristics from a one-step outflow experiment.

  6. Validation of Alternative In Vitro Methods to Animal Testing: Concepts, Challenges, Processes and Tools.

    Griesinger, Claudius; Desprez, Bertrand; Coecke, Sandra; Casey, Warren; Zuang, Valérie

    This chapter explores the concepts, processes, tools and challenges relating to the validation of alternative methods for toxicity and safety testing. In general terms, validation is the process of assessing the appropriateness and usefulness of a tool for its intended purpose. Validation is routinely used in various contexts in science, technology, the manufacturing and services sectors. It serves to assess the fitness-for-purpose of devices, systems, software up to entire methodologies. In the area of toxicity testing, validation plays an indispensable role: "alternative approaches" are increasingly replacing animal models as predictive tools and it needs to be demonstrated that these novel methods are fit for purpose. Alternative approaches include in vitro test methods, non-testing approaches such as predictive computer models up to entire testing and assessment strategies composed of method suites, data sources and decision-aiding tools. Data generated with alternative approaches are ultimately used for decision-making on public health and the protection of the environment. It is therefore essential that the underlying methods and methodologies are thoroughly characterised, assessed and transparently documented through validation studies involving impartial actors. Importantly, validation serves as a filter to ensure that only test methods able to produce data that help to address legislative requirements (e.g. EU's REACH legislation) are accepted as official testing tools and, owing to the globalisation of markets, recognised on international level (e.g. through inclusion in OECD test guidelines). Since validation creates a credible and transparent evidence base on test methods, it provides a quality stamp, supporting companies developing and marketing alternative methods and creating considerable business opportunities. Validation of alternative methods is conducted through scientific studies assessing two key hypotheses, reliability and relevance of the

  7. A guideline for the validation of likelihood ratio methods used for forensic evidence evaluation.

    Meuwly, Didier; Ramos, Daniel; Haraksim, Rudolf

    2017-07-01

    This Guideline proposes a protocol for the validation of forensic evaluation methods at the source level, using the Likelihood Ratio framework as defined within the Bayes' inference model. In the context of the inference of identity of source, the Likelihood Ratio is used to evaluate the strength of the evidence for a trace specimen, e.g. a fingermark, and a reference specimen, e.g. a fingerprint, to originate from common or different sources. Some theoretical aspects of probabilities necessary for this Guideline were discussed prior to its elaboration, which started after a workshop of forensic researchers and practitioners involved in this topic. In the workshop, the following questions were addressed: "which aspects of a forensic evaluation scenario need to be validated?", "what is the role of the LR as part of a decision process?" and "how to deal with uncertainty in the LR calculation?". The question "what to validate?" focuses on the validation methods and criteria, and "how to validate?" deals with the implementation of the validation protocol. Answers to these questions were deemed necessary with several objectives. First, concepts typical of validation standards [1], such as performance characteristics, performance metrics and validation criteria, will be adapted or applied by analogy to the LR framework. Second, a validation strategy will be defined. Third, validation methods will be described. Finally, a validation protocol and an example of a validation report will be proposed, which can be applied to the forensic fields developing and validating LR methods for the evaluation of the strength of evidence at source level under the following propositions.

  8. Assessment of critical steps of a GC/MS based indirect analytical method for the determination of fatty acid esters of monochloropropanediols (MCPDEs) and of glycidol (GEs).

    Zelinkova, Zuzana; Giri, Anupam; Wenzl, Thomas

    2017-07-01

    Fatty acid esters of 2- and 3-chloropropanediol (MCPDEs) and fatty acid esters of glycidol (GEs) are commonly monitored in edible fats and oils. A recommendation issued by the European Commission emphasizes the need to generate data on the occurrence of these substances in a broad range of different foods. So far, analytical methods for the determination of MCPDEs and GEs are fully validated only for oils, fats and margarine. This manuscript presents the assessment of critical steps in the AOCS Cd 29a-13 method for the simultaneous determination of MCPDEs and GEs in the fat phase obtained from bakery and potato products, smoked and fried fish and meat, and other cereal products. The trueness of the method is affected by the additional formation of 3-MBPD esters from monoacylglycerols (MAGs), which are frequently present in food. The overestimation of GE contents for some samples was confirmed by comparison with results obtained by an independent analytical method (direct analysis of GEs by HPLC-MS/MS). An additional sample pre-treatment by SPE was introduced to remove MAGs from the fat prior to the GE conversion, while the overall method sensitivity was not significantly affected. The trueness of the determination of GEs by the modified analytical procedure was confirmed by comparison with a direct analysis of GEs. The potential impact on the accuracy of results of the final sample preparation step of the analytical procedure, the derivatization of the free forms of MCPD and MBPD with phenylboronic acid (PBA), was evaluated as well. Different commercial batches of PBA showed differences in solubility in a non-polar organic solvent. The PBA derivatization in organic solvent did not affect the precision and trueness of the method, due to the isotopic standard dilution. However, method sensitivity might be significantly compromised.

  9. Flight critical system design guidelines and validation methods

    Holt, H. M.; Lupton, A. O.; Holden, D. G.

    1984-01-01

    Efforts being expended at NASA-Langley to define a validation methodology, techniques for comparing advanced systems concepts, and design guidelines for characterizing fault-tolerant digital avionics are described, with an emphasis on the capabilities of AIRLAB, an environmentally controlled laboratory. AIRLAB has VAX 11/750 and 11/780 computers with an aggregate of 22 Mb memory and over 650 Mb storage, interconnected at 256 kbaud. An additional computer is programmed to emulate digital devices. Ongoing work is easily accessed at user stations by either chronological or keyword indexing. The CARE III program aids in analyzing the capabilities of test systems to recover from faults. An additional code, the semi-Markov unreliability program (SURE), generates upper and lower reliability bounds. The AIRLAB facility is mainly dedicated to research on designs of digital flight-critical systems, which must have acceptable reliability before incorporation into aircraft control systems. The digital systems would be too costly to submit to a full battery of flight tests and must be initially examined with the AIRLAB simulation capabilities.

  10. Development and content validation of the information assessment method for patients and consumers.

    Pluye, Pierre; Granikov, Vera; Bartlett, Gillian; Grad, Roland M; Tang, David L; Johnson-Lafleur, Janique; Shulha, Michael; Barbosa Galvão, Maria Cristiane; Ricarte, Ivan Lm; Stephenson, Randolph; Shohet, Linda; Hutsul, Jo-Anne; Repchinsky, Carol A; Rosenberg, Ellen; Burnand, Bernard; Légaré, France; Dunikowski, Lynn; Murray, Susan; Boruff, Jill; Frati, Francesca; Kloda, Lorie; Macaulay, Ann; Lagarde, François; Doray, Geneviève

    2014-02-18

    Online consumer health information addresses health problems, self-care, disease prevention, and health care services and is intended for the general public. Using this information, people can improve their knowledge, participation in health decision-making, and health. However, there are no comprehensive instruments to evaluate the value of health information from a consumer perspective. We collaborated with information providers to develop and validate the Information Assessment Method for all (IAM4all) that can be used to collect feedback from information consumers (including patients), and to enable a two-way knowledge translation between information providers and consumers. Content validation steps were followed to develop the IAM4all questionnaire. The first version was based on a theoretical framework from information science, a critical literature review and prior work. Then, 16 laypersons were interviewed on their experience with online health information and specifically their impression of the IAM4all questionnaire. Based on the summaries and interpretations of interviews, questionnaire items were revised, added, and excluded, thus creating the second version of the questionnaire. Subsequently, a panel of 12 information specialists and 8 health researchers participated in an online survey to rate each questionnaire item for relevance, clarity, representativeness, and specificity. The result of this expert panel contributed to the third, current, version of the questionnaire. The current version of the IAM4all questionnaire is structured by four levels of outcomes of information seeking/receiving: situational relevance, cognitive impact, information use, and health benefits. Following the interviews and the expert panel survey, 9 questionnaire items were confirmed as relevant, clear, representative, and specific. To improve readability and accessibility for users with a lower level of literacy, 19 items were reworded and all inconsistencies in using a

  11. Development and validity of a method for the evaluation of printed education material.

    Castro MS

    2007-06-01

    Objectives: To develop and study the validity of an instrument for the evaluation of printed education materials (PEMs); to evaluate the use of acceptability indices; and to identify possible influences of professional aspects. Methods: An instrument for PEM evaluation was developed in three steps: domain identification, item generation and instrument design. An easy-to-read PEM was developed for the education of patients with systemic hypertension and its treatment with hydrochlorothiazide. Construct validity was measured based on previously established errors purposely introduced into the PEM, which served as extreme groups. An acceptability index was applied taking into account the rate of professionals who approved each item. Participants were 10 physicians (9 men) and 5 nurses (all women). Results: Many professionals identified the intentional errors of a crude character. Few participants identified errors that needed more careful evaluation, and no one detected the intentional error that required literature analysis. Physicians considered 95.8% of the items of the PEM acceptable, and nurses 29.2%. The differences in scoring were statistically significant for 27% of the items. In the overall evaluation, 66.6% of the items were considered acceptable. The analysis of each item revealed a behavioral pattern for each professional group. Conclusions: The use of instruments for the evaluation of printed education materials is required and may improve the quality of the PEMs available for patients. The acceptability indices are not always totally correct or representative of high-quality information. The professional experience, practice pattern, and perhaps the gender of the reviewers may influence their evaluation. An analysis of the PEMs by professionals in communication and drug information, and by patients, should be carried out to improve the quality of the proposed material.

  12. Method validation for chemical composition determination by electron microprobe with wavelength dispersive spectrometer

    Herrera-Basurto, R.; Mercader-Trejo, F.; Muñoz-Madrigal, N.; Juárez-García, J. M.; Rodriguez-López, A.; Manzano-Ramírez, A.

    2016-07-01

    The main goal of method validation is to demonstrate that the method is suitable for its intended purpose. One of the advantages of analytical method validation is the level of confidence it provides in the measurement results reported for a specific objective. Elemental composition determination by wavelength dispersive spectrometer (WDS) microanalysis has been used in extremely wide-ranging areas, mainly in the field of materials science and for impurity determinations in geological, biological and food samples. However, little information is reported about the validation of the applied methods. Herein, results of the in-house method validation for elemental composition determination by WDS are shown. SRM 482, a binary Cu-Au alloy of different compositions, was used during the validation protocol, following the recommendations for method validation proposed by Eurachem. This paper can be taken as a reference for the evaluation of the validation parameters most frequently requested for accreditation under the requirements of the ISO/IEC 17025 standard: selectivity, limit of detection, linear interval, sensitivity, precision, trueness and uncertainty. A model for uncertainty estimation was proposed, including systematic and random errors. In addition, parameters evaluated during the validation process were also considered as part of the uncertainty model.

  13. High-throughput screening assay used in pharmacognosy: Selection, optimization and validation of methods of enzymatic inhibition by UV-visible spectrophotometry

    Graciela Granados-Guzmán

    2014-02-01

    In research laboratories devoted to organic synthesis or the extraction of natural products, many products that may potentially exhibit some biological activity are obtained every day. It is therefore necessary to have in vitro assays that provide reliable information for further evaluation in in vivo systems. From this point of view, the use of high-throughput screening assays has intensified in recent years. Such assays should be optimized and validated to give accurate and precise, i.e. reliable, results. The present review addresses the steps needed to develop and validate bioanalytical methods, emphasizing UV-visible spectrophotometry as the detection system. It focuses particularly on the selection of the method, optimization to determine the best experimental conditions, validation, application of the optimized and validated method to real samples, and finally maintenance and possible transfer of the method to a new laboratory.

  14. Variable Step Integration Coupled with the Method of Characteristics Solution for Water-Hammer Analysis, A Case Study

    Turpin, Jason B.

    2004-01-01

    One-dimensional water-hammer modeling involves the solution of two coupled non-linear hyperbolic partial differential equations (PDEs). These equations result from applying the principles of conservation of mass and momentum to flow through a pipe, usually with the assumption that the speed at which pressure waves propagate through the pipe is constant. In order to solve these equations for the quantities of interest (i.e. pressures and flow rates), they must first be converted to a system of ordinary differential equations (ODEs), either by approximating the spatial derivative terms with numerical techniques or by using the Method of Characteristics (MOC). The MOC approach is ideal in that no numerical approximation errors are introduced in converting the original system of PDEs into an equivalent system of ODEs. Unfortunately, the resulting system of ODEs is bound by a time-step constraint, so that when integrating the equations the solution can only be obtained at fixed time intervals. If the fluid system to be modeled also contains dynamic components (i.e. components that are best modeled by a system of ODEs), it may be necessary to take extremely small time steps during certain points of the simulation in order to achieve stability and/or accuracy in the solution. Coupled together, the fixed time-step constraint imposed by the MOC and the occasional need for extremely small time steps to obtain stability and/or accuracy can greatly increase simulation run times. As one solution to this problem, a method for combining variable step integration (VSI) algorithms with the MOC was developed for modeling water-hammer in systems with highly dynamic components. A case study is presented in which reverse flow through a dual-flapper check valve introduces a water-hammer event. The predicted pressure responses upstream of the check valve are compared with test data.
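
    One way to realize the coupling described, sketched here under assumed parameters and with a hypothetical valve model standing in for the dynamic component: the MOC fixes its own time step from the Courant condition, while a variable-step integrator advances the stiff component across each fixed MOC interval:

```python
import numpy as np
from scipy.integrate import solve_ivp

a, dx = 1200.0, 10.0        # assumed wave speed (m/s) and reach length (m)
dt_moc = dx / a             # fixed MOC time step (Courant condition)

def valve_dynamics(t, y):
    """Hypothetical stiff ODE standing in for a dynamic component
    (e.g. a flapper valve) coupled to the pipe model."""
    return -500.0 * (y - np.sin(50.0 * t))

y = np.array([0.0])
for n in range(100):
    # ... MOC characteristic update of pipe pressures/flows goes here ...
    # Advance the dynamic component across the fixed MOC interval with a
    # variable-step integrator, which takes tiny internal steps only
    # when the dynamics demand it:
    sol = solve_ivp(valve_dynamics, (n * dt_moc, (n + 1) * dt_moc), y,
                    method="LSODA", rtol=1e-6, atol=1e-9)
    y = sol.y[:, -1]
```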

  15. Likelihood ratio data to report the validation of a forensic fingerprint evaluation method

    Daniel Ramos

    2017-02-01

    The data to which the authors refer throughout this article are likelihood ratios (LRs) computed from the comparison of 5-12 minutiae fingermarks with fingerprints. These LR data are used for the validation of a likelihood ratio (LR) method in forensic evidence evaluation. They present a necessary asset for conducting validation experiments when validating LR methods used in forensic evidence evaluation and for setting up validation reports. These data can also be used as a baseline for comparing fingermark evidence in the same minutiae configuration as presented in D. Meuwly, D. Ramos, R. Haraksim [1], although the reader should keep in mind that different feature extraction algorithms and different AFIS systems may produce different LR values. Moreover, these data may serve as a reproducibility exercise, in order to train the generation of validation reports for forensic methods, according to [1]. Alongside the data, a justification and motivation for the use of the methods is given. These methods calculate LRs from the fingerprint/mark data and are subject to a validation procedure. The choice of using real forensic fingerprints in the validation and simulated data in the development is described and justified. Validation criteria are set for the purpose of validating the LR methods, which are used to calculate the LR values from the data and produce the validation report. For privacy and data protection reasons, the original fingerprint/mark images cannot be shared, but these images do not constitute the core data for the validation, contrary to the LRs, which are shared.
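
    Validation experiments of this kind typically summarize LR performance with metrics such as the log-likelihood-ratio cost (Cllr). The sketch below shows the standard formula applied to made-up LR values, not to the published dataset:

```python
import numpy as np

def cllr(lr_same, lr_diff):
    """Log-likelihood-ratio cost: a common performance metric when
    validating LR methods (lower is better; 1.0 = uninformative).

    lr_same: LRs from same-source comparisons,
    lr_diff: LRs from different-source comparisons.
    """
    lr_same, lr_diff = np.asarray(lr_same), np.asarray(lr_diff)
    return 0.5 * (np.mean(np.log2(1 + 1 / lr_same))
                  + np.mean(np.log2(1 + lr_diff)))

# Illustrative values only: large LRs for same-source pairs and small
# LRs for different-source pairs give a low (good) Cllr.
print(cllr([20.0, 8.0, 150.0], [0.05, 0.4, 0.01]))
```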

  16. Validation of Analysis Method of pesticides in fresh tomatoes by Gas Chromatography associated to a liquid scintillation counting

    Dhib, Ahlem

    2011-01-01

    Pesticides are nowadays considered toxic to human health, and the maximum residue levels (MRLs) in foodstuffs are more and more strict. Selective analytical techniques are therefore necessary for their identification and quantification. The aim of this study is to set up a multi-residue method for the determination of pesticides in tomatoes by gas chromatography with a µECD detector (GC/µECD) associated with liquid scintillation counting. A global analytical protocol, consisting of a QuEChERS version of the extraction step followed by a purification step of the resulting extract on a polymeric sorbent, was set up. The 14C-chlorpyrifos used as an internal standard proved excellent for controlling the different steps of the sample preparation. The optimized method is specific and selective, with recoveries averaging more than 70 per cent, and is repeatable and reproducible. Although some other criteria regarding validation need to be checked before its use in routine analysis, the potential of the method has been demonstrated.

  17. An M-step preconditioned conjugate gradient method for parallel computation

    Adams, L.

    1983-01-01

    This paper describes a preconditioned conjugate gradient method that can be effectively implemented on both vector machines and parallel arrays to solve sparse symmetric and positive definite systems of linear equations. The implementation on the CYBER 203/205 and on the Finite Element Machine is discussed and results obtained using the method on these machines are given.
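
    The idea can be sketched as follows: the preconditioning solve inside conjugate gradients is approximated by m Jacobi sweeps, i.e. a truncated Neumann series that is easy to vectorize. A minimal Python sketch of this construction (not the paper's machine-specific implementation):

```python
import numpy as np

def mstep_jacobi_pcg(A, b, m=3, tol=1e-8, maxiter=500):
    """Conjugate gradients with an m-step Jacobi preconditioner: the
    preconditioning solve is approximated by m Jacobi sweeps, i.e. a
    truncated Neumann series in (I - D^{-1} A).
    """
    D_inv = 1.0 / np.diag(A)

    def precond(r):
        z = D_inv * r                    # first Jacobi sweep (z0 = 0)
        for _ in range(m - 1):
            z = z + D_inv * (r - A @ z)  # further sweeps
        return z

    x = np.zeros_like(b)
    r = b - A @ x
    z = precond(r)
    p = z.copy()
    rz = r @ z
    for _ in range(maxiter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
        z = precond(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# Example on a small SPD system (1D Laplacian):
n = 50
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x = mstep_jacobi_pcg(A, b, m=3)
print(np.linalg.norm(A @ x - b))
```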

  18. The Language Teaching Methods Scale: Reliability and Validity Studies

    Okmen, Burcu; Kilic, Abdurrahman

    2016-01-01

    The aim of this research is to develop a scale to determine the language teaching methods used by English teachers. The research sample consisted of 300 English teachers who taught at Duzce University and in primary schools, secondary schools and high schools in the Provincial Management of National Education in the city of Duzce in 2013-2014…

  19. A method to determine validity and reliability of activity sensors

    Boerema, Simone Theresa; Hermens, Hermanus J.

    2013-01-01

    METHOD: Four sensors were securely fastened to a mechanical oscillator (Vibration Exciter, type 4809, Brüel & Kjær) and moved at various frequencies (6.67 Hz; 13.45 Hz; 19.88 Hz) within the range of human physical activity. For each of the three sensor axes, the sensors were simultaneously moved for

  20. Reliability and Validity of the Research Methods Skills Assessment

    Smith, Tamarah; Smith, Samantha

    2018-01-01

    The Research Methods Skills Assessment (RMSA) was created to measure psychology majors' statistics knowledge and skills. The American Psychological Association's Guidelines for the Undergraduate Major in Psychology (APA, 2007, 2013) served as a framework for development. Results from a Rasch analysis with data from n = 330 undergraduates showed…

  1. Validity of the Demirjian method for dental age estimation for ...

    2015-02-04

    Feb 4, 2015 ... Conclusions: It is appropriate to use the Demirjian method in southern Turkish children; however, a revision is needed in some ... Departments of Pediatric Dentistry and Orthodontics, Faculty of Dentistry, University of Akdeniz, Antalya, Turkey ... agenesis excluded from the study because dental anomalies ...

  2. Reliability and validity of the AutoCAD software method in lumbar lordosis measurement.

    Letafatkar, Amir; Amirsasan, Ramin; Abdolvahabi, Zahra; Hadadnezhad, Malihe

    2011-12-01

    The aim of this study was to determine the reliability and validity of the AutoCAD software method in lumbar lordosis measurement. Fifty healthy volunteers with a mean age of 23 ± 1.80 years were enrolled. A lumbar lateral radiograph was taken of all participants, and the lordosis was measured according to the Cobb method. Afterward, the lumbar lordosis degree was measured via the AutoCAD software and flexible ruler methods. The study comprised two parts: intratester and intertester evaluations of reliability, as well as assessment of the validity of the flexible ruler and software methods. Based on the intraclass correlation coefficient, AutoCAD's reliability and validity in measuring lumbar lordosis were 0.984 and 0.962, respectively. AutoCAD was shown to be a reliable and valid method for measuring lordosis. It is suggested that this method may replace those that are costly and involve health risks, such as radiography, in evaluating lumbar lordosis.
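
    The geometric core of such a measurement is the Cobb-style angle between two digitized endplate lines; a minimal sketch with hypothetical coordinates (not data from the study):

```python
import numpy as np

def cobb_angle(p1, p2, q1, q2):
    """Cobb-style angle (degrees) between two endplate lines, each
    given by two digitized (x, y) points, as one would click them in
    AutoCAD on a lateral radiograph.  Illustrative geometry only.
    """
    u = np.subtract(p2, p1)
    v = np.subtract(q2, q1)
    cosang = abs(u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

# Hypothetical endplate points digitized on a radiograph:
print(cobb_angle((0, 0), (40, 6), (0, 50), (40, 36)))
```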

  3. Impurities in biogas - validation of analytical methods for siloxanes; Foeroreningar i biogas - validering av analysmetodik foer siloxaner

    Arrhenius, Karine; Magnusson, Bertil; Sahlin, Eskil [SP Technical Research Institute of Sweden, Boraas (Sweden)]

    2011-11-15

    Biogas produced from digesters or landfills contains impurities which can be harmful to components that come into contact with the biogas during its utilization. Among these, the siloxanes are often mentioned. During combustion, siloxanes are converted to silicon dioxide, which accumulates on the heated surfaces of combustion equipment. Silicon dioxide is a solid compound that remains in the engine and causes damage. Consequently, it is necessary to develop methods for the accurate determination of these compounds in biogases. In the first part of this report, a method for the analysis of siloxanes in biogases was validated. Sampling was performed directly at the plant by drawing a small volume of biogas onto an adsorbent tube over a short period of time. These tubes were subsequently sent to the laboratory for analysis. The purpose of method validation is to demonstrate that the established method is fit for purpose, meaning that the method, as used by the laboratory generating the data, will provide data that meet a set of criteria concerning precision and accuracy. Finally, the uncertainty of the method was calculated. In the second part of this report, the validated method was applied to real samples collected at wastewater treatment plants, co-digestion plants and plants digesting other wastes (agricultural waste). Results are presented at the end of this report. As expected, the biogases from wastewater treatment plants contained considerably higher concentrations of siloxanes than biogases from co-digestion plants and plants digesting agricultural wastes. The concentration of siloxanes in upgraded biogas was low regardless of which feedstock was digested and which upgrading technique was used.

  4. Scientific and interdisciplinary method as support for the restoration project. The balustrade steps of Villa Cerami

    Giulia Sanfilippo

    2015-07-01

    In this work, an interdisciplinary study of the weathering forms of the Villa Cerami balustrade was carried out with the aim of identifying their type and causes and planning conservation measures. The studied balustrade adorns and protects the steps of the Villa Cerami garden, a suggestive example of an 18th-century 'urban villa' located in the very core of Baroque Catania. Sadly, these stunning steps, whose magnificence and placement characterise the outdoor environment of the building, at present suffer from severe degradation, and the decorative details adorning the balusters are affected by irreversible damage. The causes of this ongoing degradation process are material features, humidity, pollution and the wear caused by the activities performed in the building, which since 1957 has housed the Faculty of Law of the University of Catania. In this study, three balusters affected by the main weathering forms recognised in the entire balustrade (biological colonization, black crust and granular disintegration) were selected. The lithological type and the weathering forms were defined on the basis of an in situ investigation, using, respectively, the comparison of materials to identify the calcarenite type, and the Italian norm UNI 11182 along with the Fitzner formalism to classify the degradation forms. A 3D survey of the selected balusters was performed with a Leica Geosystems HDS300 time-of-flight laser scanner with the aim of better defining the volume and total surface of the material parts affected by erosion. The surfaces affected by black crust were obtained by means of an image-modelling technique. The data were used to calculate the damage indices through the equations proposed by Fitzner and the crushing strength limit. The potential of this interdisciplinary approach (architects, engineers and geologists) is shown, with the aim of applying it to the restoration of the entire monument

  5. Validation and Application of the Survey of Teaching Beliefs and Practices for Undergraduates (STEP-U): Identifying Factors Associated with Valuing Important Workplace Skills among Biology Students.

    Marbach-Ad, Gili; Rietschel, Carly; Thompson, Katerina V

    2016-01-01

    We present a novel assessment tool for measuring biology students' values and experiences across their undergraduate degree program. Our Survey of Teaching Beliefs and Practices for Undergraduates (STEP-U) assesses the extent to which students value skills needed for the workplace (e.g., ability to work in groups) and their experiences with teaching practices purported to promote such skills (e.g., group work). The survey was validated through factor analyses in a large sample of biology seniors (n = 1389) and through response process analyses (five interviewees). The STEP-U skills items were characterized by two underlying factors: retention (e.g., memorization) and transfer (e.g., knowledge application). Multiple linear regression models were used to examine relationships between classroom experiences, values, and student characteristics (e.g., gender, cumulative grade point average [GPA], and research experience). Student demographic and experiential factors predicted the extent to which students valued particular skills. Students with lower GPAs valued retention skills more than those with higher GPAs. Students with research experience placed greater value on scientific writing and interdisciplinary understanding. Greater experience with specific teaching practices was associated with valuing the corresponding skills more highly. The STEP-U can provide feedback vital for designing curricula that better prepare students for their intended postgraduate careers.

  6. A GPU-accelerated semi-implicit fractional-step method for numerical solutions of incompressible Navier-Stokes equations

    Ha, Sanghyun; Park, Junshin; You, Donghyun

    2018-01-01

    Utility of the computational power of Graphics Processing Units (GPUs) is elaborated for solutions of incompressible Navier-Stokes equations which are integrated using a semi-implicit fractional-step method. The Alternating Direction Implicit (ADI) and the Fourier-transform-based direct solution methods used in the semi-implicit fractional-step method take advantage of multiple tridiagonal matrices whose inversion is known as the major bottleneck for acceleration on a typical multi-core machine. A novel implementation of the semi-implicit fractional-step method designed for GPU acceleration of the incompressible Navier-Stokes equations is presented. Aspects of the programming model of Compute Unified Device Architecture (CUDA) that are critical to the bandwidth-bound nature of the present method are discussed in detail. A data layout for efficient use of CUDA libraries is proposed for acceleration of tridiagonal matrix inversion and fast Fourier transform. OpenMP is employed for concurrent collection of turbulence statistics on a CPU while the Navier-Stokes equations are computed on a GPU. Performance of the present method using CUDA is assessed by comparing the speed of solving three tridiagonal matrices using ADI with the speed of solving one heptadiagonal matrix using a conjugate gradient method. An overall speedup of 20 times is achieved using a Tesla K40 GPU in comparison with a single-core Xeon E5-2660 v3 CPU in simulations of turbulent boundary-layer flow over a flat plate conducted on over 134 million grid points. Enhanced performance of 48 times speedup is reached for the same problem using a Tesla P100 GPU.
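
    The tridiagonal inversions named above are the classic acceleration bottleneck. As a point of reference, the sketch below shows the serial Thomas algorithm for one such system, here applied to a 1D implicit diffusion step; a GPU implementation such as the paper's would batch many of these systems (e.g., through cuSPARSE's batched tridiagonal routines), so this is only a minimal illustration of the recurrence being accelerated.

    ```python
    # Sketch: the Thomas algorithm for a single tridiagonal system, the
    # kernel that ADI-based fractional-step solvers invert many times per
    # time step. Problem sizes and coefficients are illustrative.
    import numpy as np

    def thomas(a, b, c, d):
        """Solve T x = d, with sub-diagonal a, diagonal b, super-diagonal c."""
        n = len(d)
        cp, dp = np.empty(n), np.empty(n)
        cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
        for i in range(1, n):                      # forward elimination
            m = b[i] - a[i] * cp[i - 1]
            cp[i] = c[i] / m
            dp[i] = (d[i] - a[i] * dp[i - 1]) / m
        x = np.empty(n)
        x[-1] = dp[-1]
        for i in range(n - 2, -1, -1):             # back substitution
            x[i] = dp[i] - cp[i] * x[i + 1]
        return x

    # 1D implicit diffusion step: (I - r*Lap) u_new = u_old, r = nu*dt/dx^2
    n, r = 8, 0.5
    a = np.full(n, -r); a[0] = 0.0
    c = np.full(n, -r); c[-1] = 0.0
    b = np.full(n, 1.0 + 2.0 * r)
    u_old = np.linspace(0.0, 1.0, n)
    print(thomas(a, b, c, u_old))
    ```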

  7. In situ synthesis carbonated hydroxyapatite layers on enamel slices with acidic amino acids by a novel two-step method

    Wu, Xiaoguang; Zhao, Xu; Li, Yi; Yang, Tao; Yan, Xiujuan; Wang, Ke

    2015-01-01

    In situ fabrication of carbonated hydroxyapatite (CHA) remineralization layer on an enamel slice was completed in a novel, biomimetic two-step method. First, a CaCO3 layer was synthesized on the surface of demineralized enamel using an acidic amino acid (aspartic acid or glutamate acid) as a soft template. Second, at the same concentration of the acidic amino acid, rod-like carbonated hydroxyapatite was produced with the CaCO3 layer as a sacrificial template and a reactant. The morphology, crystallinity and other physicochemical properties of the crystals were characterized using field emission scanning electron microscopy (FESEM), Fourier transform infrared spectrometry (FTIR), X-ray diffraction (XRD) and energy-dispersive X-ray analysis (EDAX), respectively. Acidic amino acid could promote the uniform deposition of hydroxyapatite with rod-like crystals via absorption of phosphate and carbonate ions from the reaction solution. Moreover, compared with hydroxyapatite crystals coated on the enamel when synthesized by a one-step method, the CaCO3 coating that was synthesized in the first step acted as an active bridge layer and sacrificial template. It played a vital role in orienting the artificial coating layer through the template effect. The results show that the rod-like carbonated hydroxyapatite crystals grow into bundles, which are similar in size and appearance to prisms in human enamel, when using the two-step method with either aspartic acid or acidic glutamate (20.00 mmol/L). - Graphical abstract: FESEM images of enamel slices etched for 60 s and repaired by the two-step method with Glu concentration of 20.00 mmol/L. (A) The boundary (dotted line) of the repaired areas (b) and unrepaired areas (a). (Some selected areas of etched enamel slices were coated with a nail polish before the reaction, which was removed by acetone after the reaction); (B) high magnification image of Ga, (C) high magnification image of Gb. In situ fabrication of carbonated

  8. In situ synthesis carbonated hydroxyapatite layers on enamel slices with acidic amino acids by a novel two-step method

    Wu, Xiaoguang [Department of Pediatric Dentistry, The Hospital of Stomatology, Jilin University, Changchun 130021 (China); Zhao, Xu [College of Chemistry, Jilin University, Changchun 130021 (China); Li, Yi, E-mail: lyi99@jlu.edu.cn [Department of Pediatric Dentistry, The Hospital of Stomatology, Jilin University, Changchun 130021 (China); Yang, Tao [Department of Stomatology, Children's Hospital of Changchun, 130051 (China); Yan, Xiujuan; Wang, Ke [Department of Pediatric Dentistry, The Hospital of Stomatology, Jilin University, Changchun 130021 (China)

    2015-09-01

    In situ fabrication of carbonated hydroxyapatite (CHA) remineralization layer on an enamel slice was completed in a novel, biomimetic two-step method. First, a CaCO3 layer was synthesized on the surface of demineralized enamel using an acidic amino acid (aspartic acid or glutamate acid) as a soft template. Second, at the same concentration of the acidic amino acid, rod-like carbonated hydroxyapatite was produced with the CaCO3 layer as a sacrificial template and a reactant. The morphology, crystallinity and other physicochemical properties of the crystals were characterized using field emission scanning electron microscopy (FESEM), Fourier transform infrared spectrometry (FTIR), X-ray diffraction (XRD) and energy-dispersive X-ray analysis (EDAX), respectively. Acidic amino acid could promote the uniform deposition of hydroxyapatite with rod-like crystals via absorption of phosphate and carbonate ions from the reaction solution. Moreover, compared with hydroxyapatite crystals coated on the enamel when synthesized by a one-step method, the CaCO3 coating that was synthesized in the first step acted as an active bridge layer and sacrificial template. It played a vital role in orienting the artificial coating layer through the template effect. The results show that the rod-like carbonated hydroxyapatite crystals grow into bundles, which are similar in size and appearance to prisms in human enamel, when using the two-step method with either aspartic acid or acidic glutamate (20.00 mmol/L). - Graphical abstract: FESEM images of enamel slices etched for 60 s and repaired by the two-step method with Glu concentration of 20.00 mmol/L. (A) The boundary (dotted line) of the repaired areas (b) and unrepaired areas (a). (Some selected areas of etched enamel slices were coated with a nail polish before the reaction, which was removed by acetone after the reaction); (B) high magnification image of Ga, (C) high magnification image of Gb. In situ fabrication of

  9. Human Factors methods concerning integrated validation of nuclear power plant control rooms

    Oskarsson, Per-Anders; Johansson, Bjoern J.E.; Gonzalez, Natalia

    2010-02-01

    The frame of reference for this work was existing recommendations and instructions from the NPP area, experiences from the review of the Turbic Validation, and experiences from system validations performed at the Swedish Armed Forces, e.g. concerning military control rooms and fighter pilots. These enterprises are characterized by complex systems in extreme environments, often with high risks, where human error can lead to serious consequences. A focus group was held with representatives responsible for Human Factors issues from all Swedish NPPs. The questions discussed included, among other things, for whom an integrated validation (IV) is performed and its purpose, what should be included in an IV, the comparison with baseline measures, the design process, the role of SSM, which methods of measurement should be used, and how the methods are affected by changes in the control room. The report raises various questions concerning the validation process for discussion. Supplementary methods of measurement for integrated validation are discussed, e.g. dynamic, psychophysiological, and qualitative methods for the identification of problems. Supplementary methods for statistical analysis are presented. The study points out a number of deficiencies in the validation process, e.g. the need for common guidelines for validation and design, criteria for different types of measurements, clarification of the role of SSM, and recommendations for the responsibility of external participants in the validation process. The authors propose 12 measures for addressing the identified problems

  10. Reliability and validity of non-radiographic methods of thoracic kyphosis measurement: a systematic review.

    Barrett, Eva; McCreesh, Karen; Lewis, Jeremy

    2014-02-01

    A wide array of instruments is available for non-invasive thoracic kyphosis measurement. Guidelines for selecting outcome measures for use in clinical and research practice recommend that properties such as validity and reliability are considered. This systematic review reports on the reliability and validity of non-invasive methods for measuring thoracic kyphosis. A systematic search of 11 electronic databases located studies assessing the reliability and/or validity of non-invasive thoracic kyphosis measurement techniques. Two independent reviewers used a critical appraisal tool to assess the quality of retrieved studies. Data were extracted by the primary reviewer. The results were synthesized qualitatively using a level-of-evidence approach. 27 studies satisfied the eligibility criteria and were included in the review. Reliability, validity, and both reliability and validity were investigated by sixteen, two and nine studies, respectively. 17/27 studies were deemed to be of high quality. In total, 15 methods of thoracic kyphosis measurement were evaluated in the retrieved studies. All investigated methods showed high (ICC ≥ .7) to very high (ICC ≥ .9) levels of reliability. The validity of the methods ranged from low to very high. The strongest level of evidence for reliability exists in support of the Debrunner kyphometer, Spinal Mouse and Flexicurve index, and for validity supports the arcometer and Flexicurve index. Further reliability and validity studies are required to strengthen the level of evidence for the remaining methods of measurement; this should be addressed by future research.

  11. On the Diffusion Coefficient of Two-step Method for LWR analysis

    Lee, Deokjung; Choi, Sooyoung; Smith, Kord S.

    2015-01-01

    The few-group constants, including diffusion coefficients, are generated from assembly calculation results. Once the assembly calculation is done, the cross sections (XSs) are spatially homogenized, and a critical spectrum calculation is performed in order to take into account the neutron leakage of the lattice. The diffusion coefficient is also generated through the critical spectrum calculation. Three different methods of critical spectrum calculation are considered in this paper: the B1 method, the P1 method, and the fundamental mode (FM) calculation method. The diffusion coefficients can also be affected by the transport approximation used for the transport XS in the assembly lattice calculation, which accounts for anisotropic scattering effects. The outflow transport approximation and the inflow transport approximation are investigated in this paper. The accuracy of the few-group data, especially the diffusion coefficients, has been studied in order to optimize the combination of transport correction method and critical spectrum calculation method, using the UNIST lattice physics code STREAM. The combination of the inflow transport approximation and the FM critical spectrum calculation is shown to provide the highest accuracy in LWR core calculations, yielding the smallest errors in terms of assembly power distribution
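
    For orientation, the sketch below shows how a few-group diffusion coefficient follows from the transport cross section under the outflow transport approximation, D_g = 1/(3 Σ_tr,g) with Σ_tr,g = Σ_t,g − μ̄_g Σ_s,g. The inflow approximation favored by the paper instead weights the anisotropic scattering term with the group current and is not reproduced here; all cross-section values are invented.

    ```python
    # Sketch: two-group diffusion coefficients from the outflow transport
    # approximation. Values are hypothetical, not from the paper.
    import numpy as np

    sigma_t = np.array([0.222, 0.833])   # total XS per group (1/cm), made up
    sigma_s = np.array([0.190, 0.750])   # scattering XS per group (1/cm)
    mubar   = np.array([0.27, 0.05])     # average scattering cosine per group

    sigma_tr = sigma_t - mubar * sigma_s   # outflow transport XS
    D = 1.0 / (3.0 * sigma_tr)             # diffusion coefficients (cm)
    print(D)
    ```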

  12. Validity of a Manual Soft Tissue Profile Prediction Method Following Mandibular Setback Osteotomy

    Kolokitha, Olga-Elpis

    2007-01-01

    Objectives The aim of this study was to determine the validity of a manual cephalometric method used for predicting the post-operative soft tissue profiles of patients who underwent mandibular setback surgery and compare it to a computerized cephalometric prediction method (Dentofacial Planner). Lateral cephalograms of 18 adults with mandibular prognathism taken at the end of pre-surgical orthodontics and approximately one year after surgery were used. Methods To test the validity of the manu...

  13. [Steps to transform a necessity into a validated and useful screening tool for early detection of developmental problems in Mexican children].

    Rizzoli-Córdoba, Antonio; Delgado-Ginebra, Ismael

    A screening test is an instrument whose primary function is to identify individuals with a probable disease among an apparently healthy population, establishing risk or suspicion of a disease. Caution must be taken when using a screening tool in order to avoid unrealistic measurements that delay an intervention for those who may benefit from it. Before introducing a screening test into clinical practice, it is necessary to certify that it has certain characteristics that make it useful. This certification process is called validation. The main objective of this paper is to describe the different steps that must be taken, from the identification of a need for early detection through the generation of a validated and reliable screening tool, using as an example the process followed for the modified version of the Child Development Evaluation Test (CDE or Prueba EDI) in Mexico.

  14. Arsenic fractionation in agricultural soil using an automated three-step sequential extraction method coupled to hydride generation-atomic fluorescence spectrometry

    Rosas-Castor, J.M. [Universidad Autónoma de Nuevo León, UANL, Facultad de Ciencias Químicas, Cd. Universitaria, San Nicolás de los Garza, Nuevo León, C.P. 66451 Nuevo León (Mexico); Group of Analytical Chemistry, Automation and Environment, University of Balearic Islands, Cra. Valldemossa km 7.5, 07122 Palma de Mallorca (Spain); Portugal, L.; Ferrer, L. [Group of Analytical Chemistry, Automation and Environment, University of Balearic Islands, Cra. Valldemossa km 7.5, 07122 Palma de Mallorca (Spain); Guzmán-Mar, J.L.; Hernández-Ramírez, A. [Universidad Autónoma de Nuevo León, UANL, Facultad de Ciencias Químicas, Cd. Universitaria, San Nicolás de los Garza, Nuevo León, C.P. 66451 Nuevo León (Mexico); Cerdà, V. [Group of Analytical Chemistry, Automation and Environment, University of Balearic Islands, Cra. Valldemossa km 7.5, 07122 Palma de Mallorca (Spain); Hinojosa-Reyes, L., E-mail: laura.hinojosary@uanl.edu.mx [Universidad Autónoma de Nuevo León, UANL, Facultad de Ciencias Químicas, Cd. Universitaria, San Nicolás de los Garza, Nuevo León, C.P. 66451 Nuevo León (Mexico)

    2015-05-18

    Highlights: • A fully automated flow-based modified-BCR extraction method was developed to evaluate the extractable As of soil. • The MSFIA–HG-AFS system included a UV photo-oxidation step for organic species degradation. • The accuracy and precision of the proposed method were found satisfactory. • The analysis time can be reduced up to eight times by using the proposed flow-based BCR method. • The labile As (F1 + F2) was <50% of total As in soil samples from As-contaminated mining zones. - Abstract: A fully automated modified three-step BCR flow-through sequential extraction method was developed for the fractionation of the arsenic (As) content from agricultural soil based on a multi-syringe flow injection analysis (MSFIA) system coupled to hydride generation-atomic fluorescence spectrometry (HG-AFS). Critical parameters that affect the performance of the automated system were optimized by exploiting a multivariate approach using a Doehlert design. The validation of the flow-based modified-BCR method was carried out by comparison with the conventional BCR method. Thus, the total As content was determined in the following three fractions: fraction 1 (F1), the acid-soluble or interchangeable fraction; fraction 2 (F2), the reducible fraction; and fraction 3 (F3), the oxidizable fraction. The limits of detection (LOD) were 4.0, 3.4, and 23.6 μg L⁻¹ for F1, F2, and F3, respectively. A wide working concentration range was obtained for the analysis of each fraction, i.e., 0.013–0.800, 0.011–0.900 and 0.079–1.400 mg L⁻¹ for F1, F2, and F3, respectively. The precision of the automated MSFIA–HG-AFS system, expressed as the relative standard deviation (RSD), was evaluated for a 200 μg L⁻¹ As standard solution, and RSD values between 5 and 8% were achieved for the three BCR fractions. The new modified three-step BCR flow-based sequential extraction method was satisfactorily applied for arsenic fractionation in real agricultural

  15. Experimental validation of a method characterizing bow tie filters in CT scanners using a real-time dose probe

    McKenney, Sarah E.; Nosratieh, Anita; Gelskey, Dale; Yang Kai; Huang Shinying; Chen Lin; Boone, John M.

    2011-01-01

    Purpose: Beam-shaping or "bow tie" (BT) filters are used to spatially modulate the x-ray beam in a CT scanner, but the conventional method of step-and-shoot measurement to characterize a beam's profile is tedious and time-consuming. The theory for characterization of bow tie relative attenuation (COBRA) method, which relies on a real-time dosimeter to address the issues of conventional measurement techniques, was previously demonstrated using computer simulations. In this study, the feasibility of the COBRA theory is further validated experimentally through the employment of a prototype real-time radiation meter and a known BT filter. Methods: The COBRA method consisted of four basic steps: (1) The probe was placed at the edge of a scanner's field of view; (2) a real-time signal train was collected as the scanner's gantry rotated with the x-ray beam on; (3) the signal train, without a BT filter, was modeled using peak values measured in the signal train of step 2; and (4) the relative attenuation of the BT filter was estimated from the filtered and unfiltered data sets. The prototype probe was first verified to have an isotropic and linear response to incident x-rays. The COBRA method was then tested on a dedicated breast CT scanner with a custom-designed BT filter and compared to the conventional step-and-shoot characterization of the BT filter. Using basis decomposition of dual-energy signal data, the thickness of the filter was estimated and compared to the BT filter's manufacturing specifications. The COBRA method was also demonstrated with a clinical whole-body CT scanner using the body BT filter. The relative attenuation was calculated at four discrete x-ray tube potentials and used to estimate the thickness of the BT filter. Results: The prototype probe was found to have a linear and isotropic response to x-rays. The relative attenuation produced from the COBRA method fell within the error of the relative attenuation measured with the step-and-shoot method.
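
    Step (4) above reduces to a Beer-Lambert ratio of the filtered signal train to the modeled unfiltered one. A minimal sketch, with synthetic signal trains standing in for the real-time probe data:

    ```python
    # Sketch: relative attenuation of a bow-tie filter from filtered and
    # modeled unfiltered signals. Angles and profiles are hypothetical.
    import numpy as np

    def relative_attenuation(s_filtered, s_unfiltered):
        """mu*t(theta), up to the unknown effective attenuation coefficient."""
        return -np.log(s_filtered / s_unfiltered)

    theta = np.linspace(-0.4, 0.4, 9)              # fan angle (rad)
    s_open = np.full_like(theta, 1.0)              # modeled, no filter
    s_bt = np.exp(-1.5 * theta ** 2 / 0.16)        # synthetic bow-tie profile
    print(relative_attenuation(s_bt, s_open))
    ```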

  16. TESTING METHODS FOR MECHANICALLY IMPROVED SOILS: RELIABILITY AND VALIDITY

    Ana Petkovšek

    2017-10-01

    A possibility of in situ mechanical improvement for reducing the liquefaction potential of silty sands was investigated using three different techniques: Vibratory Roller Compaction, Rapid Impact Compaction (RIC) and Soil Mixing. Material properties at all test sites were investigated before and after improvement with laboratory and in situ tests (CPT, SDMT, DPSH B, static and dynamic load plate tests, geohydraulic tests). Correlation between the results obtained by the different test methods gave inconclusive answers.

  17. A Virtual Upgrade Validation Method for Software-Reliant Systems

    2012-06-01

    This report is the first in a series of three reports developed by the SEI for the ASSIP on behalf of the Army Program Executive Office Aviation (PEO AVN). The work consists of the development of the VUV method, the subject of this report.

  18. The development and validation of control rod calculation methods

    Rowlands, J.L.; Sweet, D.W.; Franklin, B.M.

    1979-01-01

    Fission rate distributions have been measured in the zero power critical facility ZEBRA for a series of eight different arrays of boron carbide control rods. Diffusion theory calculations have been compared with these measurements. The normalised fission rates differ between the different arrays by up to about 30% in some regions, and these differences are well predicted by the calculations. A development has been made to a method used to produce homogenised cross sections for lattice regions containing control rods. Calculations show that the method also reproduces the reaction rate within the rod and the fission rate dip at the surface of the rod in satisfactory agreement with the more accurate calculations which represent the fine structure of the rod. A comparison between diffusion theory and transport theory calculations of control rod reactivity worths in the CDFR shows that, for the standard design method, the finite mesh approximation and the difference between diffusion theory and transport theory (the transport correction) tend to cancel, resulting in corrections to the standard mesh diffusion theory calculations of about ±2% or less. This result applies for mesh-centred finite difference diffusion theory codes and for the arrays of natural boron carbide control rods for which the calculations were made. Improvements have also been made to the effective diffusion coefficients used in diffusion theory calculations for control rod followers, and these give satisfactory agreement with transport theory calculations. (U.K.)

  19. One-Step Direct Return Method For Mohr-Coulomb Plasticity

    Clausen, Johan; Damkilde, Lars; Andersen, Lars

    2004-01-01

    A new return method for the Mohr-Coulomb yield criterion is presented. The idea is to transform the problem into the principal directions and thereby achieve very simple formulas for calculating the return stresses.
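
    A minimal sketch of the idea: diagonalize the stress tensor and, if the Mohr-Coulomb criterion is violated, project the principal stresses back onto the yield plane. For brevity, the return below uses the identity metric instead of the elastic stiffness matrix and ignores the apex and edge (corner) returns that a complete implementation handles, so it illustrates the principal-space transformation rather than the authors' full algorithm.

    ```python
    # Sketch: principal-stress return to the Mohr-Coulomb yield plane.
    # Tension-positive convention; c = cohesion, phi = friction angle.
    import numpy as np

    def mc_return(sigma, c, phi):
        """sigma: 3x3 stress tensor; returns a stress state with f <= 0."""
        vals, vecs = np.linalg.eigh(sigma)       # principal stresses, ascending
        s3, s2, s1 = vals                        # s1 = largest principal stress
        k = (1 + np.sin(phi)) / (1 - np.sin(phi))
        f = k * s1 - s3 - 2 * c * np.sqrt(k)     # Mohr-Coulomb criterion in s1, s3
        if f <= 0.0:
            return sigma                         # elastic: no return needed
        grad = np.array([-1.0, 0.0, k])          # df/d(s3, s2, s1), constant
        vals = vals - f * grad / grad.dot(grad)  # closest-point return (identity metric)
        return vecs @ np.diag(vals) @ vecs.T     # back to the original frame

    sig = np.diag([-10.0, -40.0, -90.0])         # trial stress state, kPa
    print(mc_return(sig, c=5.0, phi=np.radians(30)))
    ```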

  20. An extended step characteristic method for solving the transport equation in general geometries

    DeHart, M.D.; Pevey, R.E.; Parish, T.A.

    1994-01-01

    A method for applying the discrete ordinates method to solve the Boltzmann transport equation on arbitrary two-dimensional meshes has been developed. The finite difference approach normally used to approximate spatial derivatives in extrapolating angular fluxes across a cell is replaced by direct solution of the characteristic form of the transport equation for each discrete direction. Thus, computational cells are not restricted to the geometrical shape of a mesh element characteristic of a given coordinate system. However, in terms of the treatment of energy and angular dependencies, this method resembles traditional discrete ordinates techniques. By using the method developed here, a general two-dimensional space can be approximated by an irregular mesh comprised of arbitrary polygons. Results for a number of test problems have been compared with solutions obtained from traditional methods, with good agreement. Comparisons include benchmarks against analytical results for problems with simple geometry, as well as numerical results obtained from traditional discrete ordinates methods by applying the ANISN and TWOTRAN-II computer programs
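
    The building block of such characteristic methods is the exact solution of the transport equation along one ray segment through a cell with a flat source. A minimal sketch with made-up data:

    ```python
    # Sketch: characteristic solution along a segment of length s through a
    # cell with total cross section sig_t and flat source q.
    import numpy as np

    def sweep_segment(psi_in, sig_t, q, s):
        """Outgoing angular flux and segment-average flux for one characteristic."""
        tau = sig_t * s                                  # optical thickness
        att = np.exp(-tau)
        psi_out = psi_in * att + (q / sig_t) * (1.0 - att)
        psi_avg = (psi_in - psi_out) / tau + q / sig_t   # exact segment average
        return psi_out, psi_avg

    print(sweep_segment(psi_in=1.0, sig_t=0.5, q=0.2, s=2.0))
    ```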

  1. In situ synthesis carbonated hydroxyapatite layers on enamel slices with acidic amino acids by a novel two-step method.

    Wu, Xiaoguang; Zhao, Xu; Li, Yi; Yang, Tao; Yan, Xiujuan; Wang, Ke

    2015-09-01

    In situ fabrication of carbonated hydroxyapatite (CHA) remineralization layer on an enamel slice was completed in a novel, biomimetic two-step method. First, a CaCO3 layer was synthesized on the surface of demineralized enamel using an acidic amino acid (aspartic acid or glutamate acid) as a soft template. Second, at the same concentration of the acidic amino acid, rod-like carbonated hydroxyapatite was produced with the CaCO3 layer as a sacrificial template and a reactant. The morphology, crystallinity and other physicochemical properties of the crystals were characterized using field emission scanning electron microscopy (FESEM), Fourier transform infrared spectrometry (FTIR), X-ray diffraction (XRD) and energy-dispersive X-ray analysis (EDAX), respectively. Acidic amino acid could promote the uniform deposition of hydroxyapatite with rod-like crystals via absorption of phosphate and carbonate ions from the reaction solution. Moreover, compared with hydroxyapatite crystals coated on the enamel when synthesized by a one-step method, the CaCO3 coating that was synthesized in the first step acted as an active bridge layer and sacrificial template. It played a vital role in orienting the artificial coating layer through the template effect. The results show that the rod-like carbonated hydroxyapatite crystals grow into bundles, which are similar in size and appearance to prisms in human enamel, when using the two-step method with either aspartic acid or acidic glutamate (20.00 mmol/L).

  2. Validation of the method for investigation of radiopharmaceuticals for in vitro use

    Vranjes, S.; Jovanovic, M.; Orlic, M.; Lazic, E. (e-mail address of corresponding author: sanjav@vin.bg.ac.yu)

    2005-01-01

    The aim of this study was to validate an analytical method for the determination of the total radioactivity and radioactive concentration of 125I-triiodothyronine, a radiopharmaceutical for in vitro use. The analytical parameters of this method (selectivity, accuracy, linearity and range) were determined. The values obtained for all parameters are reasonable for analytical methods; therefore, this method can be used for further investigation. (author)

  3. Comparison of 10 single and stepped methods to identify frail older persons in primary care: diagnostic and prognostic accuracy.

    Sutorius, Fleur L; Hoogendijk, Emiel O; Prins, Bernard A H; van Hout, Hein P J

    2016-08-03

    Many instruments have been developed to identify frail older adults in primary care. A direct comparison of the accuracy and prevalence of identification methods is rare, and most studies ignore the stepped selection typically employed in routine care practice. It is also unclear whether the various methods select persons with different characteristics. We aimed to estimate the accuracy of 10 single and stepped methods to identify frailty in older adults and to predict adverse health outcomes. In addition, the methods were compared on the prevalence of the frail persons they identified and on the characteristics of the persons identified. The Groningen Frailty Indicator (GFI), the PRISMA-7, polypharmacy, the clinical judgment of the general practitioner (GP), the self-rated health of the older adult, the Edmonton Frail Scale (EFS), the Identification Seniors At Risk Primary Care (ISAR PC), the Frailty Index (FI), the InterRAI screener and gait speed were compared against three measures: two reference standards (the clinical judgment of a multidisciplinary expert panel and Fried's frailty criteria) and 6-year mortality or long-term care admission. Data were used from the Dutch Identification of Frail Elderly Study, consisting of 102 people aged 65 and over from a primary care practice in Amsterdam. Frail older adults were oversampled. The accuracy of each instrument and of several stepped strategies was estimated by calculating the area under the ROC curve. Prevalence rates of frailty ranged from 14.8 to 52.9 %. The accuracy for recommended cut-off values ranged from poor (AUC = 0.556, ISAR-PC) to good (AUC = 0.865, gait speed). PRISMA-7 performed best against the two reference standards; the GP's judgment predicted adversities best. Stepped strategies resulted in lower prevalence rates and accuracy. Persons selected by the different instruments varied greatly in age, IADL dependency, receiving homecare and mood. We found huge differences between methods to identify frail persons in prevalence
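
    The accuracy metric used above, the area under the ROC curve, can be computed directly as a Mann-Whitney rank statistic. A small sketch with invented frailty scores and reference-standard labels:

    ```python
    # Sketch: AUC of a frailty score against a binary reference standard,
    # computed as the probability that a frail person outscores a non-frail
    # one (ties counted half). All data are made up.
    import numpy as np

    def auc(scores, labels):
        pos = scores[labels == 1]
        neg = scores[labels == 0]
        greater = (pos[:, None] > neg[None, :]).sum()
        ties = (pos[:, None] == neg[None, :]).sum()
        return (greater + 0.5 * ties) / (len(pos) * len(neg))

    scores = np.array([3, 7, 2, 9, 5, 8, 1, 6], dtype=float)  # e.g., GFI points
    labels = np.array([0, 1, 0, 1, 0, 1, 0, 1])               # panel judgment
    print(f"AUC = {auc(scores, labels):.3f}")
    ```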

  4. Detection of Heterogeneous Small Inclusions by a Multi-Step MUSIC Method

    Solimene, Raffaele; Dell'Aversano, Angela; Leone, Giovanni

    2014-05-01

    In this contribution the problem of detecting and localizing scatterers with small (in terms of wavelength) cross sections by collecting their scattered field is addressed. The problem is dealt with for a two-dimensional and scalar configuration where the background is a two-layered cylindrical medium. In more detail, while the scattered field data are taken in the outermost layer, the inclusions are embedded within the inner layer. Moreover, the case of heterogeneous inclusions (i.e., having different scattering coefficients) is addressed. As a pertinent applicative context we identify the problem of diagnosing concrete pillars in order to detect and locate rebars, ducts and other small inhomogeneities that can populate the interior of the pillar. The nature of the inclusions influences the scattering coefficients. For example, the field scattered by rebars is stronger than the one due to ducts. Accordingly, it is expected that the more weakly scattering inclusions can be difficult to detect, as their scattered fields tend to be overwhelmed by those of the strong scatterers. In order to circumvent this problem, a multi-step MUltiple SIgnal Classification (MUSIC) detection algorithm is adopted in this contribution [1]. In particular, the first stage aims at detecting the rebars. Once the rebars have been detected, their positions are exploited to update the Green's function and to subtract the scattered field due to their presence. The procedure is repeated until all the inclusions are detected. The analysis is conducted by numerical experiments for a multi-view/multi-static single-frequency configuration, and the synthetic data are generated by an FDTD forward solver. Acknowledgement: This work benefited from networking activities carried out within the EU funded COST Action TU1208 "Civil Engineering Applications of Ground Penetrating Radar." [1] R. Solimene, A. Dell'Aversano and G. Leone, "MUSIC algorithms for rebar detection," J. of Geophysics and Engineering, vol. 10, pp. 1
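
    For illustration, the sketch below sets up one MUSIC stage: build a multistatic response matrix, split signal and noise subspaces with an SVD, and evaluate the pseudospectrum. The free-space steering vector and array geometry are simplifications; the multi-step scheme described above would additionally update the Green's function and subtract the strong scatterers' field before repeating.

    ```python
    # Sketch: single-frequency MUSIC pseudospectrum for point scatterers.
    # Geometry, wavelength and target positions are all hypothetical.
    import numpy as np

    rng = np.random.default_rng(0)
    n_rx, wavelength = 16, 0.1
    k0 = 2 * np.pi / wavelength
    rx = np.stack([np.linspace(-0.5, 0.5, n_rx), np.zeros(n_rx)], axis=1)

    def g(r):
        """Hypothetical free-space steering vector for a point at r."""
        d = np.linalg.norm(rx - r, axis=1)
        return np.exp(1j * k0 * d) / np.sqrt(d)

    targets = [np.array([0.1, 0.4]), np.array([-0.2, 0.6])]
    K = sum(np.outer(g(t), g(t)) for t in targets)   # Born-type data model
    K += 1e-3 * rng.standard_normal(K.shape)         # measurement noise

    U, s, _ = np.linalg.svd(K)
    noise = U[:, len(targets):]                      # noise subspace

    def pseudospectrum(r):
        v = g(r) / np.linalg.norm(g(r))
        return 1.0 / np.linalg.norm(noise.conj().T @ v) ** 2

    # Large at a true target, small elsewhere.
    print(pseudospectrum(targets[0]), pseudospectrum(np.array([0.3, 0.8])))
    ```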

  5. Applying the Mixed Methods Instrument Development and Construct Validation Process: the Transformative Experience Questionnaire

    Koskey, Kristin L. K.; Sondergeld, Toni A.; Stewart, Victoria C.; Pugh, Kevin J.

    2018-01-01

    Onwuegbuzie and colleagues proposed the Instrument Development and Construct Validation (IDCV) process as a mixed methods framework for creating and validating measures. Examples applying IDCV are lacking. We provide an illustrative case integrating the Rasch model and cognitive interviews applied to the development of the Transformative…

  6. Evaluating the Social Validity of the Early Start Denver Model: A Convergent Mixed Methods Study

    Ogilvie, Emily; McCrudden, Matthew T.

    2017-01-01

    An intervention has social validity to the extent that it is socially acceptable to participants and stakeholders. This pilot convergent mixed methods study evaluated parents' perceptions of the social validity of the Early Start Denver Model (ESDM), a naturalistic behavioral intervention for children with autism. It focused on whether the parents…

  7. Rationale and methods of the European Food Consumption Validation (EFCOVAL) Project

    Boer, de E.J.; Slimani, N.; Boeing, H.; Feinberg, M.; Leclerq, C.; Trolle, E.; Amiano, P.; Andersen, L.F.; Freisling, H.; Geelen, A.; Harttig, U.; Huybrechts, I.; Kaic-Rak, A.; Lafay, L.; Lillegaard, I.T.L.; Ruprich, J.; Vries, de J.H.M.; Ocke, M.C.

    2011-01-01

    Background/Objectives: The overall objective of the European Food Consumption Validation (EFCOVAL) Project was to further develop and validate a trans-European food consumption method to be used for the evaluation of the intake of foods, nutrients and potentially hazardous chemicals within the

  8. Validation of Likelihood Ratio Methods Used for Forensic Evidence Evaluation: Application in Forensic Fingerprints

    Haraksim, Rudolf

    2014-01-01

    In this chapter the Likelihood Ratio (LR) inference model will be introduced, the theoretical aspects of probabilities will be discussed and the validation framework for LR methods used for forensic evidence evaluation will be presented. Prior to introducing the validation framework, following

  9. A Framework for Mixing Methods in Quantitative Measurement Development, Validation, and Revision: A Case Study

    Luyt, Russell

    2012-01-01

    A framework for quantitative measurement development, validation, and revision that incorporates both qualitative and quantitative methods is introduced. It extends and adapts Adcock and Collier's work, and thus, facilitates understanding of quantitative measurement development, validation, and revision as an integrated and cyclical set of…

  10. Experimental Validation for Hot Stamping Process by Using Taguchi Method

    Fawzi Zamri, Mohd; Lim, Syh Kai; Razlan Yusoff, Ahmad

    2016-02-01

    Demand for reduced gas emissions, energy saving and safer vehicles has driven the development of Ultra High Strength Steel (UHSS) material. To strengthen a UHSS material such as boron steel, it needs to undergo a hot stamping process with heating at a certain temperature and time. In this paper, the Taguchi method is applied to determine the appropriate parameters of thickness, heating temperature and heating time to achieve the optimum strength of boron steel. The experiment is conducted using a flat, square hot stamping tool with a tensile dog-bone specimen as the blank product. The tensile strength and hardness are then measured as responses. The results showed that lower thickness and higher heating temperature and heating time give higher strength and hardness in the final product. In conclusion, boron steel blanks are able to achieve up to 1200 MPa tensile strength and 650 HV hardness.
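
    As a sketch of the Taguchi bookkeeping involved, the following computes "larger-is-better" signal-to-noise ratios over a hypothetical L4 orthogonal array and averages them per factor level; the strength values are illustrative, not the paper's data.

    ```python
    # Sketch: Taguchi larger-is-better S/N ratios and main effects for a
    # hypothetical L4(2^3) design over thickness, temperature and time.
    import numpy as np

    # Each row is one run; columns are factor levels (0/1).
    L4 = np.array([[0, 0, 0], [0, 1, 1], [1, 0, 1], [1, 1, 0]])
    strength = np.array([[980, 1010], [1190, 1170],
                         [1150, 1160], [1210, 1230]])   # two replicates, MPa

    # Larger-is-better S/N: -10*log10(mean(1/y^2)) per run.
    sn = -10 * np.log10(np.mean(1.0 / strength**2, axis=1))

    for j, name in enumerate(["thickness", "temperature", "time"]):
        lo = sn[L4[:, j] == 0].mean()
        hi = sn[L4[:, j] == 1].mean()
        print(f"{name}: S/N level-0 = {lo:.2f} dB, level-1 = {hi:.2f} dB")
    ```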

  11. Validation of a spectrophotometric method for quantification of carboxyhemoglobin.

    Luchini, Paulo D; Leyton, Jaime F; Strombech, Maria de Lourdes C; Ponce, Julio C; Jesus, Maria das Graças S; Leyton, Vilma

    2009-10-01

    The measurement of carboxyhemoglobin (COHb) levels in blood is a valuable procedure to confirm exposure to carbon monoxide (CO) for either forensic or occupational purposes. A previously described method, using spectrophotometric readings at 420 and 432 nm after reduction of oxyhemoglobin (O2Hb) and methemoglobin with sodium hydrosulfite solution, leads to an exponential curve. This curve, used with pre-established factors, serves well for low concentrations (1-7%) or for high concentrations (>20%), but very rarely for both. The authors have observed that small variations in the previously described factors F1, F2, and F3, obtained from readings for 100% COHb and 100% O2Hb, translate into significant changes in COHb% results, and they propose that these factors be determined every time COHb is measured, by reading CO- and O2-saturated samples. This practice leads to an increase in accuracy and precision.
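
    A hedged sketch of the proposed practice: treat the reduced sample's absorbances at 420 and 432 nm as a linear mix of same-day reference readings for 100% COHb and fully reduced hemoglobin, and solve for the COHb fraction by least squares. The exact factor definitions (F1, F2, F3) of the original method are not reproduced here, and all absorbance values are invented.

    ```python
    # Sketch: two-wavelength Beer-Lambert estimate of COHb%, with reference
    # spectra re-measured at every run as the paper recommends.
    import numpy as np

    A_cohb = np.array([1.32, 0.61])   # reference absorbances, 100% COHb (made up)
    A_hb   = np.array([0.89, 1.05])   # reference absorbances, 0% COHb (made up)
    A_samp = np.array([0.93, 1.00])   # sample after reduction (made up)

    # A_samp ~ x*A_cohb + (1-x)*A_hb  ->  (A_cohb - A_hb)*x = A_samp - A_hb
    diff = A_cohb - A_hb
    x = diff @ (A_samp - A_hb) / (diff @ diff)   # least-squares mixing fraction
    print(f"COHb = {100 * x:.1f}%")
    ```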

  12. VALIDATION OF THE ASSR TEST THROUGH COMPLEMENTARY AUDIOLOGICAL METHODS

    C. Mârtu

    2016-04-01

    Introduction: Auditory Steady-State Response (ASSR) is an objective method for determining the auditory threshold, applicable and necessary especially in children. The test is extremely important for recommending cochlear implants in children. The aim of the study was to compare pure-tone audiometry responses and auditory steady-state thresholds. Materials and method: The study was performed on a group including both patients with normal hearing and patients with hearing loss. The main inclusion criteria accepted only patients with a normal otomicroscopic aspect and a normal tympanogram, capable of responding to pure-tone audiometry, and with air conduction thresholds between 0 and 80 dB NHL. Patients with suppurative otic processes or ear malformations were excluded. The research protocol was followed, the tests being performed in soundproofed rooms, starting with pure-tone audiometry followed, after a pause, by ASSR determinations at frequencies of 0.5, 1, 2 and 4 kHz. The audiological instruments were provided by a single manufacturer. ASSR was recorded at least twice for both borderline intensities, namely the one defining the auditory threshold and the first no-response intensity. The recorded responses were stored in a database and further processed in Excel. Discussion: The differences observed between pure-tone audiometry and ASSR thresholds are important at 500 Hz and insignificant at the other frequencies. Regarding the PTA-ASSR relation, whatever the main characteristic between the PTA and ASSR thresholds in one ear, the profile of the gap maintains the same shape in the opposite ear. Conclusions: ASSR is a reliable objective test, though attention must be paid to low frequencies, where some differences may occur.

  13. A GPU-accelerated semi-implicit fractional step method for numerical solutions of incompressible Navier-Stokes equations

    Ha, Sanghyun; Park, Junshin; You, Donghyun

    2017-11-01

    Utility of the computational power of modern Graphics Processing Units (GPUs) is elaborated for solutions of incompressible Navier-Stokes equations which are integrated using a semi-implicit fractional-step method. Due to its serial and bandwidth-bound nature, the present choice of numerical methods is considered a good candidate for evaluating the potential of GPUs for solving Navier-Stokes equations using non-explicit time integration. An efficient algorithm is presented for GPU acceleration of the Alternating Direction Implicit (ADI) and Fourier-transform-based direct solution methods used in the semi-implicit fractional-step method. OpenMP is employed for concurrent collection of turbulence statistics on a CPU while the Navier-Stokes equations are computed on a GPU. Extension to multiple NVIDIA GPUs is implemented using NVLink, supported by the Pascal architecture. Performance of the present method is assessed on multiple Tesla P100 GPUs and compared with a single-core Xeon E5-2650 v4 CPU in simulations of boundary-layer flow over a flat plate. Supported by the National Research Foundation of Korea (NRF) Grant funded by the Korea government (Ministry of Science, ICT and Future Planning NRF-2016R1E1A2A01939553, NRF-2014R1A2A1A11049599, and Ministry of Trade, Industry and Energy 201611101000230).

  14. Two-step reconstruction method using global optimization and conjugate gradient for ultrasound-guided diffuse optical tomography.

    Tavakoli, Behnoosh; Zhu, Quing

    2013-01-01

    Ultrasound-guided diffuse optical tomography (DOT) is a promising method for characterizing malignant and benign lesions in the female breast. We introduce a new two-step algorithm for DOT inversion in which the optical parameters are estimated with a global optimization method, the genetic algorithm. The estimate is then used as the initial guess for the conjugate gradient (CG) optimization method to obtain the absorption and scattering distributions simultaneously. Simulations and phantom experiments have shown that the maximum absorption and reduced scattering coefficients are reconstructed with less than 10% and 25% errors, respectively. This is in contrast with the CG method alone, which generates about 20% error for the absorption coefficient and does not accurately recover the scattering distribution. A new measure of scattering contrast has been introduced to characterize benign and malignant breast lesions. The results of 16 clinical cases reconstructed with the two-step method demonstrate that, on average, the absorption coefficient and scattering contrast of malignant lesions are about 1.8 and 3.32 times higher than those of the benign cases, respectively.
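
    The two-step structure can be sketched on a toy objective: a small genetic-style global search supplies the starting point, and conjugate gradient refines it. The real method inverts a diffusion forward model for absorption and scattering maps; here the "unknowns" are just two coefficients and the misfit function is invented.

    ```python
    # Sketch: global (evolutionary) search followed by CG refinement.
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(1)
    true = np.array([0.06, 8.0])                      # mu_a, mu_s' (made up)

    def misfit(p):
        """Toy data-misfit with shallow local minima via a cosine ripple."""
        return np.sum((p - true) ** 2) + 0.05 * np.cos(40.0 * p[0])

    # Step 1: crude evolutionary search (selection + Gaussian mutation).
    pop = rng.uniform([0.0, 1.0], [0.2, 15.0], size=(40, 2))
    for _ in range(30):
        fit = np.apply_along_axis(misfit, 1, pop)
        parents = pop[np.argsort(fit)[:10]]           # keep the 10 best
        children = parents[rng.integers(0, 10, 30)] \
                   + rng.normal(0, [0.01, 0.5], (30, 2))
        pop = np.vstack([parents, children])
    x0 = pop[np.argmin(np.apply_along_axis(misfit, 1, pop))]

    # Step 2: conjugate-gradient refinement from the evolutionary estimate.
    res = minimize(misfit, x0, method="CG")
    print(x0, res.x)
    ```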

  15. Method to make a single-step etch mask for 3D monolithic nanostructures

    Grishina, Diana; Harteveld, Cornelis A.M.; Woldering, L.A.; Vos, Willem L.

    2015-01-01

    Current nanostructure fabrication by etching is usually limited to planar structures as they are defined by a planar mask. The realization of three-dimensional (3D) nanostructures by etching requires technologies beyond planar masks. We present a method for fabricating a 3D mask that allows one to

  16. The next step in coastal numerical models: spectral/hp element methods?

    Eskilsson, Claes; Engsig-Karup, Allan Peter; Sherwin, Spencer J.

    2005-01-01

    In this paper we outline the application of spectral/hp element methods for modelling nonlinear and dispersive waves. We present one- and two-dimensional test cases for the shallow water equations and Boussinesq-type equations, including highly dispersive Boussinesq-type equations.

  17. A Ten-Step Design Method for Simulation Games in Logistics Management

    Fumarola, M.; Van Staalduinen, J.P.; Verbraeck, A.

    2011-01-01

    Simulation games have often been found useful as a method of inquiry for gaining insight into complex system behavior and as aids for design, engineering simulation and visualization, and education. Designing a simulation game is the result of creative thinking and planning, but often not the result of a

  18. Stepping beyond the paradigm wars: pluralist methods for research in learning technology

    Chris Jones

    2011-02-01

    This paper outlines a problem we have found in our own practice when developing new researchers at post-graduate level. When students begin research training and practice, they are often confused between different levels of thinking when faced with methods, methodologies and research paradigms. We argue that this confusion arises from the way research methods are taught, embedded and embodied in educational systems. We set out new ways of thinking about levels of research in the field of learning technology. We argue for a problem-driven/pragmatic approach to research and consider the range of methods that can be applied as diverse lenses to particular research problems. The problem of developing a coherent approach to research and research methods is not confined to research in learning technology, because it is arguably a problem for all educational research and one that also affects an even wider range of disciplinary and interdisciplinary subject areas. For the purposes of this paper we discuss the problem in relation to research in learning technologies and make a distinction between developmental and basic research that we think is particularly relevant in this field. The paradigms of research adopted have real consequences for the ways research problems are conceived and articulated, and for the ways in which research is conducted. This has become an even more pressing concern in the challenging funding climate that researchers now face. We argue that there is not a simple one-to-one relationship between levels, and most particularly that there usually is not a direct association of particular methods with either a philosophical outlook or a paradigm of research. We conclude by recommending a pluralist approach to thinking about research problems, and we illustrate this with the suggestion that we should encourage researchers to think in terms of counterpositives. If the researcher suggests one way of doing research in an

  19. [Use of THP-1 for allergens identification method validation].

    Zhao, Xuezheng; Jia, Qiang; Zhang, Jun; Li, Xue; Zhang, Yanshu; Dai, Yufei

    2014-05-01

    The aim was to establish an in vitro test method for evaluating sensitization using THP-1 cells, based on changes in cytokine expression, in order to provide more reliable markers for the identification of sensitizers. Monocyte-like THP-1 cells were induced and differentiated into THP-1 macrophages with PMA (0.1 microg/ml). Changes in cytokine expression were evaluated at different time points after the cells were treated with five known allergens, 2,4-dinitrochlorobenzene (DNCB), nickel sulfate (NiSO4), phenylenediamine (PPDA), potassium dichromate (K2Cr2O7) and toluene diisocyanate (TDI), and two non-allergens, sodium dodecyl sulfate (SDS) and isopropanol (IPA), at various concentrations. IL-6 and TNF-alpha production was measured by ELISA. The secretion of IL-1beta and IL-8 was analyzed by Cytometric Bead Array (CBA). The secretion of IL-6, TNF-alpha, IL-1beta and IL-8 was highest when THP-1 cells were exposed to NiSO4, DNCB and K2Cr2O7 for 6 h, and to PPDA and TDI for 12 h. At the optimum time points and optimal concentrations, IL-6 production was approximately 40, 25, 20, 50 and 50 times that of the control group for the five chemical allergens NiSO4, DNCB, K2Cr2O7, PPDA and TDI, respectively. TNF-alpha expression was 20, 12, 20, 8 and 5 times that of the control group, respectively. IL-1beta secretion was 30, 60, 25, 30 and 45 times that of the control group, respectively. IL-8 production was approximately 15, 12, 15, 12 and 7 times that of the control group, respectively. Both non-allergens SDS and IPA significantly induced IL-6 secretion in a dose-dependent manner; however, SDS caused higher production levels, approximately 20 times that of the control. Therefore, IL-6 may not be a reliable marker for the identification of allergens. TNF-alpha, IL-1beta and IL-8 expression did not change significantly after exposure to the two non-allergens. The test method using THP-1 cells by detecting the production of cytokines (TNF-alpha, IL-1beta and

  20. Validation needs of seismic probabilistic risk assessment (PRA) methods applied to nuclear power plants

    Kot, C.A.; Srinivasan, M.G.; Hsieh, B.J.

    1985-01-01

    An effort to validate seismic PRA methods is in progress. The work concentrates on the validation of plant response and fragility estimates through the use of test data and information from actual earthquake experience. Validation needs have been identified in the areas of soil-structure interaction, structural response and capacity, and equipment fragility. Of particular concern is the adequacy of linear methodology to predict nonlinear behavior. While many questions can be resolved through the judicious use of dynamic test data, other aspects can only be validated by means of input and response measurements during actual earthquakes. A number of past, ongoing, and planned testing programs which can provide useful validation data have been identified, and validation approaches for specific problems are being formulated

  1. “Imagine if… stepping into someone’s shoes” as a research method

    Katrine Heggstad

    2018-03-01

    This article investigates and discusses how a fictional framework can be used as a method for research within the field of drama, and how this approach helps researchers understand the situated moments being researched. The fictional framework is part of a larger research project which focuses on drama with people living with dementia. The article combines artistic practice-based research with a sensory as well as an auto-ethnographic approach, as the practitioners' family relations (as mother and daughter) are explored and reflected upon as integral to the method. The article contributes to knowledge production regarding fictional frames in research and reflects on their significance for work with people experiencing dementia.

  2. Exploiting Superconvergence in Discontinuous Galerkin Methods for Improved Time-Stepping and Visualization

    2016-09-08

    This report covers work on the Smoothness-Increasing Accuracy-Conserving (SIAC) filter for discontinuous Galerkin methods, presented in part at the International Conference on Spectral and Higher Order Methods (ICOSAHOM), Rio de Janeiro, Brazil. The main results include: 1) analysis of the SIAC filter when applied to nonuniform meshes; 2) theoretical and numerical demonstration of the 2k+1 order accuracy of the SIAC filter; 3) modification of the filter to reduce the kernel footprint; and 4) establishing the theoretical viability of the SIAC filter for nonlinear scalar hyperbolic conservation laws.

  3. Predicting respiratory motion signals for image-guided radiotherapy using multi-step linear methods (MULIN)

    Ernst, Floris; Schweikard, Achim

    2008-01-01

    Forecasting of respiration motion in image-guided radiotherapy requires algorithms that can accurately and efficiently predict target location. Improved methods for respiratory motion forecasting were developed and tested. MULIN, a new family of prediction algorithms based on linear expansions of the prediction error, was developed and tested. Computer-generated data with a prediction horizon of 150 ms was used for testing in simulation experiments. MULIN was compared to Least Mean Squares-based predictors (LMS; normalized LMS, nLMS; wavelet-based multiscale autoregression, wLMS) and a multi-frequency Extended Kalman Filter (EKF) approach. The in vivo performance of the algorithms was tested on data sets of patients who underwent radiotherapy. The new MULIN methods are highly competitive, outperforming the LMS and the EKF prediction algorithms in real-world settings and performing similarly to optimized nLMS and wLMS prediction algorithms. On simulated, periodic data the MULIN algorithms are outperformed only by the EKF approach due to its inherent advantage in predicting periodic signals. In the presence of noise, the MULIN methods significantly outperform all other algorithms. The MULIN family of algorithms is a feasible tool for the prediction of respiratory motion, performing as well as or better than conventional algorithms while requiring significantly lower computational complexity. The MULIN algorithms are of special importance wherever high-speed prediction is required. (orig.)
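
    One of the baselines named above, the normalized LMS predictor, is compact enough to sketch: predict the breathing signal a fixed horizon ahead from the last few samples, adapting the weights on each newly observable prediction error. The synthetic trace and all parameters below are illustrative only.

    ```python
    # Sketch: normalized LMS prediction of a respiratory trace.
    import numpy as np

    def nlms_predict(y, order=8, horizon=6, mu=0.5, eps=1e-6):
        """Predict y[t+horizon] from the last `order` samples, adapting online."""
        w = np.zeros(order)
        y_hat = np.zeros_like(y)
        for t in range(order + horizon, len(y) - horizon):
            x_old = y[t - horizon - order:t - horizon]   # regressor used `horizon` ago
            e = y[t] - w @ x_old                         # that prediction's error, known now
            w += mu * e * x_old / (eps + x_old @ x_old)  # normalized LMS update
            x_now = y[t - order:t]
            y_hat[t + horizon] = w @ x_now               # predict `horizon` steps ahead
        return y_hat

    # Synthetic breathing trace sampled at 40 Hz; 6 samples = 150 ms horizon.
    t = np.arange(0.0, 30.0, 0.025)
    rng = np.random.default_rng(2)
    breath = np.sin(2 * np.pi * 0.25 * t) + 0.05 * rng.standard_normal(len(t))
    pred = nlms_predict(breath)
    rmse = np.sqrt(np.mean((pred[400:] - breath[400:]) ** 2))
    print(f"RMSE at 150 ms horizon: {rmse:.3f}")
    ```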

  4. Predicting respiratory motion signals for image-guided radiotherapy using multi-step linear methods (MULIN)

    Ernst, Floris; Schweikard, Achim [University of Luebeck, Institute for Robotics and Cognitive Systems, Luebeck (Germany)

    2008-06-15

    Forecasting of respiration motion in image-guided radiotherapy requires algorithms that can accurately and efficiently predict target location. Improved methods for respiratory motion forecasting were developed and tested. MULIN, a new family of prediction algorithms based on linear expansions of the prediction error, was developed and tested. Computer-generated data with a prediction horizon of 150 ms was used for testing in simulation experiments. MULIN was compared to Least Mean Squares-based predictors (LMS; normalized LMS, nLMS; wavelet-based multiscale autoregression, wLMS) and a multi-frequency Extended Kalman Filter (EKF) approach. The in vivo performance of the algorithms was tested on data sets of patients who underwent radiotherapy. The new MULIN methods are highly competitive, outperforming the LMS and the EKF prediction algorithms in real-world settings and performing similarly to optimized nLMS and wLMS prediction algorithms. On simulated, periodic data the MULIN algorithms are outperformed only by the EKF approach due to its inherent advantage in predicting periodic signals. In the presence of noise, the MULIN methods significantly outperform all other algorithms. The MULIN family of algorithms is a feasible tool for the prediction of respiratory motion, performing as well as or better than conventional algorithms while requiring significantly lower computational complexity. The MULIN algorithms are of special importance wherever high-speed prediction is required. (orig.)

  5. Content validity across methods of malnutrition assessment in patients with cancer is limited

    Sealy, Martine J.; Nijholt, Willemke; Stuiver, Martijn M.; van der Berg, Marit M.; Roodenburg, Jan L. N.; Schans, van der Cees P.; Ottery, Faith D.; Jager-Wittenaar, Harriet

    Objective: To identify malnutrition assessment methods in cancer patients and assess their content validity based on internationally accepted definitions for malnutrition. Study Design and Setting: Systematic review of studies in cancer patients that operationalized malnutrition as a variable,

  6. Content validity across methods of malnutrition assessment in patients with cancer is limited

    Sealy, Martine; Nijholt, Willemke; Stuiver, M.M.; van der Berg, M.M.; Roodenburg, Jan; Ottery, Faith D.; van der Schans, Cees; Jager, Harriët

    2016-01-01

    Objective To identify malnutrition assessment methods in cancer patients and assess their content validity based on internationally accepted definitions for malnutrition. Study Design and Setting Systematic review of studies in cancer patients that operationalized malnutrition as a variable,

  7. Validation and application of an high-order spectral difference method for flow induced noise simulation

    Parsani, Matteo; Ghorbaniasl, Ghader; Lacor, C.

    2011-01-01

    ... The method is based on the Ffowcs Williams-Hawkings approach, which provides noise contributions for monopole, dipole and quadrupole acoustic sources. This paper focuses on the validation and assessment of this hybrid approach using different test cases.

  8. An Engineering Method of Civil Jet Requirements Validation Based on Requirements Project Principle

    Wang, Yue; Gao, Dan; Mao, Xuming

    2018-03-01

    A method of requirements validation is developed and defined to meet the needs of civil jet requirements validation in product development. Based on the requirements project principle, this method does not affect the conventional design elements and can effectively connect requirements with design. It realizes the modern civil jet development concept that "requirement is the origin, design is the basis". So far, the method has been successfully applied in civil jet aircraft development in China. Taking takeoff field length as an example, the validation process and validation method for the requirements are introduced in detail, with the hope of providing experience for other civil jet product designs.

  9. Development and Validation of a RP-HPLC Method for the ...

    Development and Validation of a RP-HPLC Method for the Simultaneous Determination of Rifampicin and a Flavonoid Glycoside - A Novel ... range, accuracy, precision, limit of detection, limit of quantification, robustness and specificity.

  10. Probability of Detection (POD) as a statistical model for the validation of qualitative methods.

    Wehling, Paul; LaBudde, Robert A; Brunelle, Sharon L; Nelson, Maria T

    2011-01-01

    A statistical model is presented for use in validation of qualitative methods. This model, termed Probability of Detection (POD), harmonizes the statistical concepts and parameters between quantitative and qualitative method validation. POD characterizes method response with respect to concentration as a continuous variable. The POD model provides a tool for graphical representation of response curves for qualitative methods. In addition, the model allows comparisons between candidate and reference methods, and provides calculations of repeatability, reproducibility, and laboratory effects from collaborative study data. Single laboratory study and collaborative study examples are given.
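
    The POD model treats the probability of a positive result as a smooth function of concentration; a log-dose logistic curve is one common choice (an illustrative assumption here, not a form mandated by the model's authors). A minimal fitting sketch with hypothetical qualitative-test data:

        import numpy as np
        from scipy.optimize import curve_fit

        # Hypothetical validation data: analyte concentration and the number of
        # positive results out of n replicate qualitative tests at each level.
        conc = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
        n_pos = np.array([2, 5, 9, 11, 12])
        n_rep = np.array([12, 12, 12, 12, 12])

        def pod_curve(c, c50, slope):
            """Log-dose logistic POD: 1 / (1 + exp(-slope*(ln c - ln c50)))."""
            return 1.0 / (1.0 + np.exp(-slope * (np.log(c) - np.log(c50))))

        # Least-squares fit of the POD curve to the observed positive fractions.
        (c50, slope), _ = curve_fit(pod_curve, conc, n_pos / n_rep, p0=[2.0, 1.0])
        print(f"concentration with POD = 0.5: {c50:.2f}, slope: {slope:.2f}")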

  11. A Generalized Pivotal Quantity Approach to Analytical Method Validation Based on Total Error.

    Yang, Harry; Zhang, Jianchun

    2015-01-01

    The primary purpose of method validation is to demonstrate that the method is fit for its intended use. Traditionally, an analytical method is deemed valid if its performance characteristics such as accuracy and precision are shown to meet prespecified acceptance criteria. However, these acceptance criteria are not directly related to the method's intended purpose, which is usually a guarantee that a high percentage of the test results of future samples will be close to their true values. Alternate "fit for purpose" acceptance criteria based on the concept of total error have been increasingly used. Such criteria allow for assessing method validity, taking into account the relationship between accuracy and precision. Although several statistical test methods have been proposed in the literature to test the "fit for purpose" hypothesis, the majority of the methods are not designed to protect against the risk of accepting unsuitable methods, thus having the potential to cause uncontrolled consumer's risk. In this paper, we propose a test method based on generalized pivotal quantity inference. Through simulation studies, the performance of the method is compared to five existing approaches. The results show that both the new method and the method based on the β-content tolerance interval with a confidence level of 90%, hereafter referred to as the β-content (0.9) method, control Type I error and thus consumer's risk, while the other existing methods do not. It is further demonstrated that the generalized pivotal quantity method is less conservative than the β-content (0.9) method when the analytical methods are biased, whereas it is more conservative when the analytical methods are unbiased. Therefore, selection of either the generalized pivotal quantity or β-content (0.9) method for an analytical method validation depends on the accuracy of the analytical method. It is also shown that the generalized pivotal quantity method has better asymptotic properties than all of the current ...
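
    As a concrete illustration of a total-error acceptance test, the β-content tolerance interval approach mentioned above checks whether an interval expected to contain at least β of future results lies entirely within the acceptance limits. A minimal sketch, assuming normally distributed results, hypothetical ±15% acceptance limits, and Howe's approximation for the tolerance factor:

        import numpy as np
        from scipy import stats

        # Hypothetical recovery results (%) from a validation run.
        results = np.array([98.2, 101.5, 99.8, 102.1, 97.6, 100.4, 99.1, 101.0])
        n, mean, sd = len(results), results.mean(), results.std(ddof=1)

        beta, conf = 0.90, 0.90           # beta-content (0.9) at 90% confidence
        acceptance = (85.0, 115.0)        # assumed +/-15% acceptance limits

        # Two-sided beta-content tolerance factor (Howe's approximation).
        z_beta = stats.norm.ppf((1 + beta) / 2)
        chi2 = stats.chi2.ppf(1 - conf, n - 1)
        k = z_beta * np.sqrt((n - 1) * (1 + 1 / n) / chi2)

        lower, upper = mean - k * sd, mean + k * sd
        valid = acceptance[0] <= lower and upper <= acceptance[1]
        print(f"tolerance interval [{lower:.1f}, {upper:.1f}] -> valid: {valid}")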

  12. Synthesis of large CZTSe nanoparticles through a two-step hot-injection method

    Engberg, Sara Lena Josefin; Li, Zhenggang; Lek, Jun Yan

    2015-01-01

    Grain boundaries in Cu2ZnSn(SxSe1-x)4 (CZTSSe) thin films act as defects that reduce the mobility of the charge carriers. Hence, one way to improve the performance of these thin-film solar cells is to increase the grain size in the films. Most of the synthesis methods published so far for CZTSSe colloidal...... molecules, solvents and precursors, and by controlling the initial monomer concentration. Additionally, we show how our new synthesis route can be utilized to achieve targeted ratios of CZTS and CZTSe nanoparticles to be used for mixed-phase CZTSSe thin films....

  13. A three-step calibration method for tri-axial field sensors in a 3D magnetic digital compass

    Zhu, Xiaoning; Zhao, Ta; Zhou, Zhijian; Cheng, Defu

    2017-01-01

    In a 3D magnetic compass, it is important to calibrate the tri-axial magnetometers and accelerometers so that the compass provides accurate heading and attitude information. Previous researchers have used two methods to calibrate these two field sensors separately: the classic independent ellipsoid-fitting method and the independent dot-product-invariant method. Both methods are easy to use, and no highly accurate external equipment is required. However, self-calibration with ellipsoid fitting has the disadvantage that it introduces an ambiguous orthogonal matrix, and the dot-product-invariant method requires pre-calibrated internal field sensors, which may be unavailable in many cases. In this paper, we introduce and unify an error model of the two tri-axial field sensors. Accordingly, the orthogonal matrix arising from ellipsoid fitting is mathematically proved to be the combination of two sources: the mounting misalignment and the rotation misalignment. Moreover, a new method, which we call the optimal resultant vector, is proposed to further calibrate multi-sensor systems on the basis of the ellipsoid-fitting and dot-product-invariant methods, establishing a new three-step calibration method. The superiority of the proposed method over state-of-the-art approaches was demonstrated by simulations and a 3D compass experiment. (paper)
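
    For context, the ellipsoid-fitting step maps raw tri-axial readings, which lie on an ellipsoid because of offset and scale errors, back onto a sphere. A simplified axis-aligned least-squares sketch (illustrative only; the paper's full three-step method additionally resolves the orthogonal-matrix ambiguity discussed above):

        import numpy as np

        def ellipsoid_calibrate(m):
            """Simplified axis-aligned ellipsoid fit for tri-axial readings.

            Solves a*x^2 + b*y^2 + c*z^2 + 2d*x + 2e*y + 2f*z = 1 by least
            squares, then recovers per-axis offsets (hard iron) and scale
            factors. Cross-axis terms are omitted for brevity.
            """
            x, y, z = m[:, 0], m[:, 1], m[:, 2]
            D = np.column_stack([x**2, y**2, z**2, 2*x, 2*y, 2*z])
            p, *_ = np.linalg.lstsq(D, np.ones(len(m)), rcond=None)
            a, b, c, d, e, f = p
            offset = np.array([-d / a, -e / b, -f / c])
            gauge = 1 + d**2 / a + e**2 / b + f**2 / c
            scale = np.sqrt(np.array([a, b, c]) / gauge)
            return offset, scale

        # Usage: corrected = (raw - offset) * scale maps readings onto the
        # unit sphere, up to the orthogonal ambiguity the paper resolves.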

  14. Comparing the Validity of Non-Invasive Methods in Measuring Thoracic Kyphosis and Lumbar Lordosis

    Mohammad Yousefi

    2012-04-01

    Background: the purpose of this article is to study the validity of each of the non-invasive methods (flexible ruler, spinal mouse, and image processing) against X-ray radiography (the reference method) and to compare them with each other. Materials and Methods: to evaluate the validity of each of these non-invasive methods, the thoracic kyphosis and lumbar lordosis angles of 20 students of Birjand University (age: 26±2 years, weight: 72±2.5 kg, height: 169±5.5 cm) were measured with four methods: flexible ruler, spinal mouse, image processing, and X-ray. Results: the results indicated that the validity coefficients of the flexible ruler, spinal mouse, and image processing methods in measuring the thoracic kyphosis and lumbar lordosis angles were 0.81, 0.87, 0.73 and 0.76, 0.83, 0.89, respectively (p>0.05). Given this agreement with the gold-standard X-ray method, the three non-invasive methods can be considered adequately valid. In addition, one-way analysis of variance indicated a meaningful relationship between the three methods of measuring thoracic kyphosis and lumbar lordosis, and according to Tukey's test, image processing is the most precise method. Conclusion: this method could therefore be used along with other non-invasive methods as a valid measuring method.

  15. A Multiscale Finite Element Model Validation Method of Composite Cable-Stayed Bridge Based on Structural Health Monitoring System

    Rumian Zhong

    2015-01-01

    A two-step response surface method for multiscale finite element model (FEM) updating and validation is presented with respect to the Guanhe Bridge, a composite cable-stayed bridge on National Highway G15 in China. Firstly, the state equations of both the multiscale and single-scale FEM are established based on the basic equations of structural dynamics to update the multiscale coupling parameters and structural parameters. Secondly, based on the measured data from the structural health monitoring (SHM) system, a Monte Carlo simulation is employed to analyze uncertainty quantification and transmission, where the uncertainties of the multiscale FEM and the measured data are considered. The results indicate that the relative errors between the calculated and measured frequencies are less than 2%, and the overlap ratio indexes of each modal frequency are larger than 80%. These results demonstrate that the proposed method can be applied to validate the multiscale FEM, and that the validated FEM reflects the current condition of the real bridge; thus it can be used as the basis for bridge health monitoring, damage prognosis (DP), and safety prognosis (SP).
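
    The Monte Carlo step can be sketched generically: sample the uncertain model parameters from assumed distributions, propagate them through the frequency model, and compare the result with the measured frequencies. Everything below, the toy frequency model and all numbers, is a hypothetical stand-in for the bridge FEM:

        import numpy as np

        rng = np.random.default_rng(42)

        def model_frequency(E, rho):
            """Toy stand-in for the bridge FEM: frequency ~ sqrt(E/rho)."""
            return 5.7e-4 * np.sqrt(E / rho)

        # Hypothetical SHM-measured first modal frequency samples (Hz).
        measured = rng.normal(2.10, 0.02, size=500)

        # Monte Carlo propagation of assumed parameter uncertainty.
        E = rng.normal(3.45e10, 0.05e10, size=5000)     # elastic modulus (Pa)
        rho = rng.normal(2500.0, 25.0, size=5000)       # density (kg/m^3)
        calculated = model_frequency(E, rho)

        rel_err = abs(calculated.mean() - measured.mean()) / measured.mean()
        print(f"relative frequency error: {100 * rel_err:.2f}%")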

  16. Two Validated HPLC Methods for the Quantification of Alizarin and other Anthraquinones in Rubia tinctorum Cultivars

    Derksen, G.C.H.; Lelyveld, G.P.; Beek, van T.A.; Capelle, A.; Groot, de Æ.

    2004-01-01

    Direct and indirect HPLC-UV methods for the quantitative determination of anthraquinones in dried madder root have been developed, validated and compared. In the direct method, madder root was extracted twice with refluxing ethanol-water. This method allowed the determination of the two major native

  17. Practical Method for engineering Erbium-doped fiber lasers from step-like pulse excitations

    Causado-Buelvas, J D; Gomez-Cardona, N D; Torres, P

    2011-01-01

    A simple method, known as 'easy points', has been applied to the characterization of Erbium-doped fibers, aiming at the engineering of fiber lasers. Using low-optical-power flattop pulse excitations, it has been possible to determine both the attenuation coefficients and the intrinsic saturation powers of doped single-mode fibers at 980 and 1550 nm. Laser systems have been designed for which the optimal fiber length and output power have been determined as a function of the input power. Ring and linear laser cavities have been set up, and the characteristics of the laser output have been obtained and compared with the theoretical predictions based on the 'easy points' parameters.

  18. A three-step reconstruction method for fluorescence molecular tomography based on compressive sensing

    Zhu, Yansong; Jha, Abhinav K.; Dreyer, Jakob K.

    2017-01-01

    Fluorescence molecular tomography (FMT) is a promising tool for real time in vivo quantification of neurotransmission (NT) as we pursue in our BRAIN initiative effort. However, the acquired image data are noisy and the reconstruction problem is ill-posed. Further, while spatial sparsity of the NT...... matrix coherence. The resultant image data are input to a homotopy-based reconstruction strategy that exploits sparsity via ℓ1 regularization. The reconstructed image is then input to a maximum-likelihood expectation maximization (MLEM) algorithm that retains the sparseness of the input estimate...... and improves upon the quantitation by accurate Poisson noise modeling. The proposed reconstruction method was evaluated in a three-dimensional simulated setup with fluorescent sources in a cuboidal scattering medium with optical properties simulating human brain cortex (reduced scattering coefficient: 9.2 cm-1...
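
    For orientation, the final MLEM stage iterates a multiplicative update that preserves non-negativity, and zeros, of the estimate, which is why it can retain the sparseness of the ℓ1-regularized input mentioned above. A generic Poisson MLEM sketch with a hypothetical system matrix A and measured counts y:

        import numpy as np

        def mlem(A, y, x0, n_iter=50, eps=1e-12):
            """Maximum-likelihood expectation maximization for Poisson data.

            Multiplicative update: x <- x / (A^T 1) * A^T (y / (A x)).
            Zeros in the initial estimate stay zero, preserving sparsity.
            """
            x = x0.copy()
            sens = A.T @ np.ones(A.shape[0])    # sensitivity image, A^T 1
            for _ in range(n_iter):
                ratio = y / (A @ x + eps)       # measured / predicted counts
                x *= (A.T @ ratio) / (sens + eps)
            return x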

  19. Determination of Modafinil in Tablet Formulation Using Three New Validated Spectrophotometric Methods

    Basniwal, P.K.; Jain, D.; Basniwal, P.K.

    2014-01-01

    In this study, three new UV spectrophotometric methods, viz. the linear regression equation (LRE), standard absorptivity (SA) and first-order derivative (FOD) methods, were developed and validated for the determination of modafinil in tablet form. Beer-Lambert's law was obeyed in the linear range of 10-50 μg/mL, and all the methods were validated for linearity, accuracy, precision and robustness. These methods were successfully applied for the assay of modafinil drug content in tablets in the ranges of 100.20-100.42%, 100.11-100.58% and 100.25-100.34%, respectively, with acceptable standard deviation (less than two) for all the methods. The validated spectrophotometric methods may be successfully applied for assay, dissolution studies, bio-equivalence studies as well as routine analysis in pharmaceutical industries. (author)

  20. Low Temperature Two-Steps Sintering (LTTSS) - an innovative method for consolidating porous UO2 pellets

    Sanjay Kumar, D.; Ananthasivan, K.; Senapati, Abhiram; Venkata Krishnan, R.

    2015-01-01

    Metallic uranium and its alloys are an important fuel for fast reactors. Presently, metallic uranium is prepared using an expensive fluoro-metallothermic process. Recent reports suggest that the metal oxide could be reduced to metal using a novel electrochemical de-oxidation method, which could serve as an attractive alternative to the expensive metallothermic process. In view of this, a research programme is being pursued in our Centre to develop optimum process parameters for the efficient scaled-up preparation of metallic uranium. One of the important process parameters is the size, nature and distribution of porosity in the urania pellet. Essentially, the ceramic form of the urania should encompass interconnected porosity that allows percolation of melts into the UO2. However, the matrix density of the pellet should be high to ensure that it possesses good handling strength and is electrically conducting. Hence, the preparation of high-matrix-density porous UO2 pellets was required. In this study, we report the preparation of porous UO2 pellets possessing a very high matrix density by using the citrate gel-combustion method. The 'as-prepared' powders were consolidated at various compaction pressures, and the pellets were sintered in 8 mol% Ar+H2 gas at a flow rate of 250 mL/min at 1073 K for 30 min, followed by soaking at 1473 K for 4 h, with a heating rate of 5 K min-1, in a molybdenum furnace. X-ray diffraction studies revealed that these pellets contained UO2. The morphological analysis of the sintered pellets was carried out using a Scanning Electron Microscope (M/s. Philips model XL 30, Netherlands). All these pellets were gold coated

  1. Rapid, single-step most-probable-number method for enumerating fecal coliforms in effluents from sewage treatment plants

    Munoz, E. F.; Silverman, M. P.

    1979-01-01

    A single-step most-probable-number method for determining the number of fecal coliform bacteria present in sewage treatment plant effluents is discussed. A single growth medium based on that of Reasoner et al. (1976), consisting of 5.0 g proteose peptone, 3.0 g yeast extract, 10.0 g lactose, 7.5 g NaCl, 0.2 g sodium lauryl sulfate, and 0.1 g sodium desoxycholate per liter, is used. The pH is adjusted to 6.5, and samples are incubated at 44.5 deg C. Bacterial growth is detected either by measuring the increase with time in the electrical impedance ratio between the inoculated sample vial and an uninoculated reference vial, or by visual examination for turbidity. Results obtained by the single-step method for chlorinated and unchlorinated effluent samples are in excellent agreement with those obtained by the standard method. It is suggested that in automated treatment plants, impedance ratio data could be automatically matched by computer programs with the appropriate dilution factors and most-probable-number tables already in the computer memory, with the corresponding result displayed as fecal coliforms per 100 ml of effluent.
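
    The most-probable-number estimate itself is the maximum-likelihood solution for the organism density given the pattern of positive tubes across dilutions. A minimal root-finding sketch (generic MPN mathematics, not the paper's impedance instrumentation):

        import numpy as np
        from scipy.optimize import brentq

        def mpn(volumes, n_tubes, n_positive):
            """Maximum-likelihood most-probable-number estimate (per mL)."""
            v = np.asarray(volumes, dtype=float)   # mL inoculated per tube
            n = np.asarray(n_tubes, dtype=float)
            p = np.asarray(n_positive, dtype=float)

            def score(lam):  # derivative of the Poisson log-likelihood
                return (np.sum(p * v * np.exp(-lam * v) / (1 - np.exp(-lam * v)))
                        - np.sum((n - p) * v))

            return brentq(score, 1e-9, 1e6)

        # Classic 3-dilution, 5-tube series: 10, 1 and 0.1 mL inocula with
        # 5/5, 3/5 and 1/5 tubes positive.
        print(f"MPN/mL: {mpn([10, 1, 0.1], [5, 5, 5], [5, 3, 1]):.3f}")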

  2. Antenna characteristics and air-ground interface deembedding methods for stepped-frequency ground-penetrating radar measurements

    Karlsen, Brian; Larsen, Jan; Jakobsen, Kaj Bjarne

    2000-01-01

    Results from field tests using a Stepped-Frequency Ground Penetrating Radar (SF-GPR), together with promising antenna and air-ground deembedding methods for an SF-GPR, are presented. A monostatic S-band rectangular waveguide antenna was used in the field tests. The advantages of the SF-GPR, e.g., amplitude...... and phase information in the SF-GPR signal, are used to deembed the characteristics of the antenna. We propose a new air-to-ground interface deembedding technique based on Principal Component Analysis which enables enhancement of the SF-GPR signal from buried objects, e.g., anti-personnel landmines...
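
    One common way to realize PCA-based interface deembedding, offered here as an illustrative sketch rather than the paper's exact procedure, is to note that the air-ground reflection is nearly identical across traces, so it dominates the leading principal component(s) of the B-scan and can be subtracted:

        import numpy as np

        def pca_deembed(bscan, n_components=1):
            """Suppress the dominant air-ground reflection in an SF-GPR B-scan.

            bscan: array of shape (n_traces, n_samples). The interface return
            is nearly identical across traces, so it dominates the leading
            principal component(s); removing that subspace enhances returns
            from buried objects.
            """
            centered = bscan - bscan.mean(axis=0)
            _, _, vt = np.linalg.svd(centered, full_matrices=False)
            clutter = centered @ vt[:n_components].T @ vt[:n_components]
            return centered - clutter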

  3. The six-spot-step test - a new method for monitoring walking ability in patients with chronic inflammatory polyneuropathy

    Kreutzfeldt, Melissa; Jensen, Henrik B; Ravnborg, Mads

    2017-01-01

    OBJECTIVE: To evaluate whether the Six-Spot-Step-Test (SSST) is more suitable for monitoring walking ability in patients with chronic inflammatory polyneuropathy than the Timed-25-Foot-Walking test (T25FW). METHOD: In the SSST, participants have to walk as quickly as possible across a field...... of effect size, standardized response means and relative efficiency. Both ambulation tests correlated moderately to PGIC. CONCLUSION: The SSST may be superior to the T25FW in terms of dynamic range, floor effect and responsiveness which makes the SSST a possible alternative for monitoring walking ability...

  4. Cavity digital control testing system by Simulink step operation method for TESLA linear accelerator and free electron laser

    Czarski, Tomasz; Romaniuk, Ryszard S.; Pozniak, Krzysztof T.; Simrock, Stefan

    2004-07-01

    The cavity control system for the TESLA -- TeV-Energy Superconducting Linear Accelerator -- project is introduced in this paper. FPGA -- Field Programmable Gate Array -- technology has been implemented for the digital controller stabilizing the cavity field gradient. The cavity SIMULINK model has been applied to test the hardware controller. The step operation method has been developed for testing the FPGA device coupled to the SIMULINK model of the analog real plant. The FPGA signal processing has been verified against the required algorithm of the reference MATLAB controller. Some experimental results are presented for different cavity operational conditions.

  5. Empirical Performance of Cross-Validation With Oracle Methods in a Genomics Context.

    Martinez, Josue G; Carroll, Raymond J; Müller, Samuel; Sampson, Joshua N; Chatterjee, Nilanjan

    2011-11-01

    When employing model selection methods with oracle properties such as the smoothly clipped absolute deviation (SCAD) and the Adaptive Lasso, it is typical to estimate the smoothing parameter by m-fold cross-validation, for example, m = 10. In problems where the true regression function is sparse and the signals large, such cross-validation typically works well. However, in regression modeling of genomic studies involving Single Nucleotide Polymorphisms (SNP), the true regression functions, while thought to be sparse, do not have large signals. We demonstrate empirically that in such problems, the number of selected variables using SCAD and the Adaptive Lasso, with 10-fold cross-validation, is a random variable that has considerable and surprising variation. Similar remarks apply to non-oracle methods such as the Lasso. Our study strongly questions the suitability of performing only a single run of m-fold cross-validation with any oracle method, and not just the SCAD and Adaptive Lasso.
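
    The reported variability is easy to reproduce in miniature. The sketch below uses the (non-oracle) Lasso via scikit-learn, since SCAD is not in that library, on hypothetical SNP-like data with many weak signals, and records how the number of selected variables changes across independent runs of 10-fold cross-validation:

        import numpy as np
        from sklearn.linear_model import LassoCV

        rng = np.random.default_rng(0)
        # Hypothetical SNP-like design: sparse truth with weak signals.
        n, p = 200, 500
        X = rng.standard_normal((n, p))
        beta = np.zeros(p)
        beta[:10] = 0.2                    # 10 weak true signals
        y = X @ beta + rng.standard_normal(n)

        counts = []
        for run in range(20):              # 20 independent 10-fold CV runs
            idx = rng.permutation(n)       # reshuffle the folds each run
            model = LassoCV(cv=10).fit(X[idx], y[idx])
            counts.append(int(np.sum(model.coef_ != 0)))

        print("selected-variable counts across CV runs:", counts)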

  6. A step forward in the study of the electroerosion by optical methods

    Aparicio, R.; Gale, M. F. Ruiz; Hogert, E. N.; Landau, M. R.; Gaggioli, y. N. G.

    2003-05-01

    This work develops two theoretical models of surfaces to explain the behavior of the light scattered by samples that undergo some alteration. In the first model, the mean intensity scattered by the sample is evaluated, analyzing the different curves obtained as a function of the eroded/total surface ratio. The theoretical results are compared with those obtained experimentally. It can be seen that there exists a strong relation between the electroerosion level and the light scattered by the sample. The second model analyzes a surface with random changes in its roughness. A translucent surface whose roughness changes in a controlled way is studied. Then, the variation of the correlation coefficient as a function of the roughness variation is determined by the transmission speckle correlation method. The experimental values obtained are compared with those given by this model. In summary, it is shown that the first- and second-order statistical properties of the light transmitted or reflected by a sample with a variable topography can be used as parameters to analyze these morphologic changes.

  7. Surgiplanner: a new method for one step oral rehabilitation of severe atrophic maxilla.

    Busato, A; Vismara, V; Grecchi, F; Grecchi, E; Lauritano, D

    2017-01-01

    The implant-prosthetic rehabilitation of edentulous upper jaws has always been complex for surgeons and dentists. The lack of bone in both the vertical and horizontal dimensions does not allow the correct insertion of dental implants. In addition, patients with edentulous upper and lower arches have a loss of vertical dimension of the face and an aged expression. Many surgical techniques have been proposed to increase the bone volume, height and thickness, such as the Le Fort I osteotomy, bone grafts and the placement of dental implants. Planning these surgical procedures is difficult, because it is not possible to reproduce the movements of osteotomized bone segments in the three planes of space. This article describes the treatment of the severely atrophic maxilla with a new approach using a new instrument named "Surgiplanner". Surgiplanner is a method that, using only a computerized axial tomography (CAT) scan, makes it possible to obtain a totally predetermined therapeutic result, from both an aesthetic and a functional point of view, in surgery of severely resorbed jaws. Surgiplanner allows repositioning of segments of the patient's facial skeleton in a predetermined and controlled way for the best implant-supported oral rehabilitation.

  8. A one-step method for modelling longitudinal data with differential equations.

    Hu, Yueqin; Treinen, Raymond

    2018-04-06

    Differential equation models are frequently used to describe non-linear trajectories of longitudinal data. This study proposes a new approach to estimate the parameters in differential equation models. Instead of estimating derivatives from the observed data first and then fitting a differential equation to the derivatives, our new approach directly fits the analytic solution of a differential equation to the observed data, and therefore simplifies the procedure and avoids bias from derivative estimations. A simulation study indicates that the analytic solutions of differential equations (ASDE) approach obtains unbiased estimates of parameters and their standard errors. Compared with other approaches that estimate derivatives first, ASDE has smaller standard error, larger statistical power and accurate Type I error. Although ASDE obtains biased estimation when the system has sudden phase change, the bias is not serious and a solution is also provided to solve the phase problem. The ASDE method is illustrated and applied to a two-week study on consumers' shopping behaviour after a sale promotion, and to a set of public data tracking participants' grammatical facial expression in sign language. R codes for ASDE, recommendations for sample size and starting values are provided. Limitations and several possible expansions of ASDE are also discussed. © 2018 The British Psychological Society.
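
    The core idea, fitting the analytic solution of a differential equation directly to observations instead of estimating derivatives first, can be illustrated with a hypothetical exponential-approach model dx/dt = a(b - x), whose analytic solution is x(t) = b + (x0 - b)e^(-at):

        import numpy as np
        from scipy.optimize import curve_fit

        def solution(t, a, b, x0):
            """Analytic solution of dx/dt = a*(b - x) with x(0) = x0."""
            return b + (x0 - b) * np.exp(-a * t)

        # Hypothetical longitudinal data: noisy approach toward an asymptote.
        t = np.linspace(0, 14, 15)
        rng = np.random.default_rng(1)
        x_obs = solution(t, 0.4, 10.0, 2.0) + rng.normal(0, 0.3, t.size)

        # Fit the analytic solution directly to the data (no derivatives).
        (a, b, x0), cov = curve_fit(solution, t, x_obs, p0=[0.1, 8.0, 1.0])
        se = np.sqrt(np.diag(cov))
        print(f"a={a:.3f}+/-{se[0]:.3f}, b={b:.3f}+/-{se[1]:.3f}, "
              f"x0={x0:.3f}+/-{se[2]:.3f}")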

  9. Workshop on acceleration of the validation and regulatory acceptance of alternative methods and implementation of testing strategies

    Piersma, A. H.; Burgdorf, T.; Louekari, K.

    2018-01-01

    concerning the regulatory acceptance and implementation of alternative test methods and testing strategies, with the aim to develop feasible solutions. Classical validation of alternative methods usually involves one to one comparison with the gold standard animal study. This approach suffers from...... the reductionist nature of an alternative test as compared to the animal study as well as from the animal study being considered as the gold standard. Modern approaches combine individual alternatives into testing strategies, for which integrated and defined approaches are emerging at OECD. Furthermore, progress......-focused hazard and risk assessment of chemicals requires an open mind towards stepping away from the animal study as the gold standard and defining human biologically based regulatory requirements for human hazard and risk assessment....

  10. Fuzzy-logic based strategy for validation of multiplex methods: example with qualitative GMO assays.

    Bellocchi, Gianni; Bertholet, Vincent; Hamels, Sandrine; Moens, W; Remacle, José; Van den Eede, Guy

    2010-02-01

    This paper illustrates the advantages that a fuzzy-based aggregation method could bring to the validation of a multiplex method for GMO detection (DualChip GMO kit, Eppendorf). Guidelines for the validation of chemical, biochemical, pharmaceutical and genetic methods have been developed, and ad hoc validation statistics are available and routinely used for in-house and inter-laboratory testing and decision-making. Fuzzy logic allows summarising the information obtained by independent validation statistics into one synthetic indicator of overall method performance. The microarray technology, introduced for simultaneous identification of multiple GMOs, poses specific validation issues (patterns of performance for a variety of GMOs at different concentrations). A fuzzy-based indicator for overall evaluation is illustrated in this paper and applied to validation data for different genetically modified elements. Remarks were drawn on the analytical results. The fuzzy-logic based rules were shown to be applicable to improve interpretation of results and facilitate overall evaluation of the multiplex method.
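
    The aggregation idea can be sketched generically: map each validation statistic onto a [0, 1] favourability score via simple membership functions, then combine the scores into one synthetic indicator. A plain mean is used below for brevity; the paper's actual rule base and statistics are more elaborate, and all numbers here are hypothetical:

        import numpy as np

        def membership(x, good, bad):
            """Linear favourability: 1 at/beyond 'good', 0 at/beyond 'bad'."""
            if good < bad:   # smaller is better (e.g. a false-positive rate)
                return float(np.clip((bad - x) / (bad - good), 0.0, 1.0))
            return float(np.clip((x - bad) / (good - bad), 0.0, 1.0))

        # Hypothetical validation statistics for one GM element:
        # (observed value, best value, worst acceptable value).
        stats_ = {
            "sensitivity":    (0.97, 1.00, 0.90),
            "specificity":    (0.93, 1.00, 0.85),
            "false_pos_rate": (0.04, 0.00, 0.10),
        }

        scores = {k: membership(v, g, b) for k, (v, g, b) in stats_.items()}
        overall = np.mean(list(scores.values()))   # synthetic indicator
        print(scores, f"overall = {overall:.2f}")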

  11. Validation of Cyanoacrylate Method for Collection of Stratum Corneum in Human Skin for Lipid Analysis

    Jungersted, JM; Hellgren, Lars; Drachmann, Tue

    2010-01-01

    Background and Objective: Lipids in the stratum corneum (SC) are of major importance for the skin barrier function. Many different methods have been used for the collection of SC for the analysis of SC lipids. The objective of the present study was to validate the cyanoacrylate method...

  12. Microscale validation of 4-aminoantipyrine test method for quantifying phenolic compounds in microbial culture

    Justiz Mendoza, Ibrahin; Aguilera Rodriguez, Isabel; Perez Portuondo, Irasema

    2014-01-01

    Validation of test methods at microscale is currently of great importance due to the economic and environmental advantages involved, and constitutes a prerequisite for the performance of services and the quality assurance of the results provided to customers. This paper addresses the microscale validation of the 4-aminoantipyrine spectrophotometric method for the quantification of phenolic compounds in culture medium. The parameters linearity, precision, regression, accuracy, detection limit, quantification limit and robustness were evaluated, in addition to a comparison test with a non-standardized method for determining polyphenols (Folin-Ciocalteu). The results showed that both methods are feasible for determining phenols

  13. Food demand in Brazil: an application of Shonkwiler & Yen Two-Step estimation method

    Alexandre Bragança Coelho

    2010-03-01

    The objective of the analysis is to estimate a demand system including eighteen food products using data from a Brazilian Household Budget Survey carried out in 2002 and 2003 (POF 2002/2003). The functional form used was the Quadratic Almost Ideal Demand System (QUAIDS). Estimation employs the Shonkwiler and Yen method to account for zero consumption. Results showed that the purchase probabilities of staple foods were negatively related to monthly family income, while meat, milk and other products showed a positive relation. Regional, educational and urbanization variables were also important in the first-stage estimation. While some of the goods had negative income coefficients, none were inferior, and six of the eighteen were luxuries based on the second-stage estimates.

  14. Validation of a liquid chromatography ultraviolet method for determination of herbicide diuron and its metabolites in soil samples

    ANA LUCIA S.M. FELICIO

    2016-01-01

    Diuron is one of the most widely used herbicides worldwide; it can undergo degradation producing three primary metabolites: 3,4-dichlorophenylurea, 3-(3,4-dichlorophenyl)-1-methylurea, and 3,4-dichloroaniline. Since the persistence of diuron and its by-products in ecosystems involves a risk of toxicity to the environment and human health, a reliable quantitative method for simultaneous monitoring of these compounds is required. Hence, a simple method without a preconcentration step was validated for the quantitation of diuron and its main metabolites by high performance liquid chromatography with ultraviolet detection. Separation was achieved in less than 11 minutes using a C18 column, a mobile phase composed of acetonitrile and water (45:55, v/v) at 0.86 mL min-1, and detection at 254 nm. The validated method, using solid-liquid extraction followed by an isocratic chromatographic elution, proved to be specific, precise and linear (R2 > 0.99), with recoveries above 90%. The method was successfully applied to quantify diuron and its by-products in soil samples collected in a sugarcane cultivation area, focusing on environmental control.

  15. Method development and validation of potent pyrimidine derivative by UV-VIS spectrophotometer.

    Chaudhary, Anshu; Singh, Anoop; Verma, Prabhakar Kumar

    2014-12-01

    A rapid and sensitive ultraviolet-visible (UV-VIS) spectroscopic method was developed for the estimation of the pyrimidine derivative 6-bromo-3-(6-(2,6-dichlorophenyl)-2-(morpholinomethylamino)pyrimidin-4-yl)-2H-chromen-2-one (BT10M) in bulk form. The pyrimidine derivative was monitored at 275 nm with UV detection, and there is no interference from diluents at 275 nm. The method was found to be linear in the range of 50 to 150 μg/ml. The accuracy and precision were determined and validated statistically. The method was validated according to guidelines. The results showed that the proposed method is suitable for the accurate, precise, and rapid determination of the pyrimidine derivative.

  16. Comparison between stochastic and machine learning methods for hydrological multi-step ahead forecasting: All forecasts are wrong!

    Papacharalampous, Georgia; Tyralis, Hristos; Koutsoyiannis, Demetris

    2017-04-01

    Machine learning (ML) is considered to be a promising approach to hydrological process forecasting. We conduct a comparison between several stochastic and ML point estimation methods by performing large-scale computational experiments based on simulations. The purpose is to provide generalized results, while the respective comparisons in the literature are usually based on case studies. The stochastic methods used include simple methods and models from the frequently used families of Autoregressive Moving Average (ARMA), Autoregressive Fractionally Integrated Moving Average (ARFIMA) and Exponential Smoothing models. The ML methods used are Random Forests (RF), Support Vector Machines (SVM) and Neural Networks (NN). The comparison refers to the multi-step ahead forecasting properties of the methods. A total of 20 methods are used, among which 9 are ML methods. Twelve simulation experiments are performed, each of which uses 2 000 simulated time series of 310 observations. The time series are simulated using stochastic processes from the families of ARMA and ARFIMA models. Each time series is split into a fitting set (first 300 observations) and a testing set (last 10 observations). The comparative assessment of the methods is based on 18 metrics that quantify the methods' performance according to several criteria related to the accurate forecasting of the testing set, the capturing of its variation, and the correlation between the testing and forecasted values. The most important outcome of this study is that there is no uniformly better or worse method. However, there are methods that are regularly better or worse than others with respect to specific metrics. It appears that, although a general ranking of the methods is not possible, their classification based on their similar or contrasting performance in the various metrics is possible to some extent. Another important conclusion is that more sophisticated methods do not necessarily provide better forecasts.
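
    A miniature version of such an experiment, one simulated AR(1) series, one stochastic method and one ML method rather than the study's 20 methods and 2 000 series, might look as follows (statsmodels and scikit-learn assumed available):

        import numpy as np
        from sklearn.ensemble import RandomForestRegressor
        from statsmodels.tsa.arima.model import ARIMA

        rng = np.random.default_rng(0)
        # One simulated AR(1) series: 300 points to fit, last 10 for testing.
        n, horizon = 310, 10
        x = np.zeros(n)
        for t in range(1, n):
            x[t] = 0.7 * x[t - 1] + rng.standard_normal()
        train, test = x[:-horizon], x[-horizon:]

        # Stochastic benchmark: ARMA multi-step forecast.
        arma_fc = ARIMA(train, order=(1, 0, 0)).fit().forecast(horizon)

        # ML benchmark: recursive one-step RF forecasts from lagged values.
        lags = 5
        X = np.column_stack([train[i:len(train) - lags + i] for i in range(lags)])
        rf = RandomForestRegressor(n_estimators=200, random_state=0)
        rf.fit(X, train[lags:])
        hist, rf_fc = list(train[-lags:]), []
        for _ in range(horizon):
            pred = rf.predict(np.array(hist[-lags:]).reshape(1, -1))[0]
            rf_fc.append(pred)
            hist.append(pred)

        for name, fc in (("ARMA", arma_fc), ("RF", np.array(rf_fc))):
            print(name, "RMSE:", np.sqrt(np.mean((fc - test) ** 2)))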

  17. The Three-Step Test-Interview (TSTI: An observation-based method for pretesting self-completion questionnaires

    Tony Hak

    2008-12-01

    The Three-Step Test-Interview (TSTI) is a method for pretesting a self-completion questionnaire by first observing actual instances of interaction between the instrument and respondents (the response process) before exploring the reasons for this behavior. The TSTI consists of the following three steps: 1. (respondent-driven) observation of response behavior; 2. (interviewer-driven) follow-up probing aimed at remedying gaps in observational data; 3. (interviewer-driven) debriefing aimed at eliciting experiences and opinions. We describe the aims and the techniques of these three steps, and then discuss pilot studies in which we tested the feasibility and the productivity of the TSTI by applying it to three rather different types of questionnaires. In the first study, the quality of a set of questions about alcohol consumption was assessed. The TSTI proved to be productive in identifying problems that resulted from a mismatch between the 'theory' underlying the questions on the one hand, and features of a respondent's actual behavior and biography on the other. In the second pilot study, Dutch and Norwegian versions of an attitude scale, the 20-item Illegal Aliens Scale, were tested. The TSTI appeared to be productive in identifying problems that resulted from different 'response strategies'. In the third pilot, a two-year longitudinal study, the TSTI appeared to be an effective method for documenting processes of 'response shift' in repeated measurements of health-related Quality of Life (QoL).

  18. [Data validation methods and discussion on Chinese materia medica resource survey].

    Zhang, Yue; Ma, Wei-Feng; Zhang, Xiao-Bo; Zhu, Shou-Dong; Guo, Lan-Ping; Wang, Xing-Xing

    2013-07-01

    Since the beginning of the fourth national survey of Chinese materia medica resources, 22 provinces have conducted pilots. The survey teams have reported immense amounts of data, which places very high demands on the construction of the database system. In order to ensure quality, it is necessary to check and validate the data in the database system. Data validation is an important method to ensure the validity, integrity and accuracy of census data. This paper comprehensively introduces the data validation system of the fourth national survey of Chinese materia medica resources database system, and further improves the design ideas and programs of data validation. The purpose of this study is to help the survey work proceed smoothly.

  19. Methods of validating the Advanced Diagnosis and Warning system for aircraft ICing Environments (ADWICE)

    Rosczyk, S.; Hauf, T.; Leifeld, C.

    2003-04-01

    In-flight icing is one of the most hazardous problems in aviation; it was determined to be a contributing factor in more than 800 incidents worldwide. Although the meteorological factors of airframe icing are becoming more and more transparent, they first have to be integrated into the Federal Aviation Administration's (FAA) certification rules. Therefore, the best way to enhance aviation safety is to know the areas of dangerous icing conditions in order to avoid flying in them. For this reason, the German Weather Service (DWD), the Institute for Atmospheric Physics at the German Aerospace Centre (DLR) and the Institute of Meteorology and Climatology (ImuK) of the University of Hanover started developing ADWICE - the Advanced Diagnosis and Warning system for aircraft ICing Environments - in 1998. The algorithm is based on the DWD Local Model (LM) forecast of temperature and humidity, fused with radar and synop data and, coming soon, satellite data. It gives an hourly nowcast of icing severity and type - divided into four categories: freezing rain, convective, stratiform and general - for the central European area. A first validation of ADWICE took place in 1999 with observational data from an in-flight icing campaign during EURICE in 1997. The current validation draws on a broader database. As a first step, the output from ADWICE is compared to observations from pilots (PIREPs) to obtain statistics on the probability of detecting icing and no-icing conditions within the last icing seasons. This method has yielded good results for the American Integrated Icing Diagnostic Algorithm (IIDA). A problem, though, is the small number of PIREPs from Europe in comparison to the US, so a temporary campaign of pilots (including Lufthansa and Aerolloyd) collecting cloud and icing information every few miles is intended to remedy this situation. Another source of data is the measurements of the Falcon - a DLR research aircraft carrying an icing sensor. In addition to that

  20. Application and validation of superior spectrophotometric methods for simultaneous determination of ternary mixture used for hypertension management

    Mohamed, Heba M.; Lamie, Nesrine T.

    2016-02-01

    Telmisartan (TL), hydrochlorothiazide (HZ) and amlodipine besylate (AM) are co-formulated together for hypertension management. Three smart, specific and precise spectrophotometric methods were applied and validated for the simultaneous determination of the three cited drugs. Method A is ratio isoabsorptive point and ratio difference in subtracted spectra (RIDSS), which is based on dividing the ternary mixture spectrum of the studied drugs by the spectrum of AM to get the division spectrum, from which the concentration of AM can be obtained by measuring the amplitude values in the plateau region at 360 nm. The amplitude value of the plateau region is then subtracted from the division spectrum, and the HZ concentration is obtained by measuring the difference in amplitude values at 278.5 and 306 nm (corresponding to zero difference for TL), while the total concentration of HZ and TL in the mixture is measured at their isoabsorptive point in the division spectrum at 278.5 nm (Aiso). The TL concentration is then obtained by subtraction. Method B is double divisor ratio spectra derivative spectrophotometry (RS-DS), and method C is mean centering of ratio spectra (MCR). The proposed methods did not require any initial separation steps prior to the analysis of the three drugs. A comparative study was done between the three methods regarding their simplicity, sensitivity and limitations. Specificity was investigated by analyzing synthetic mixtures containing different ratios of the three studied drugs and their tablet dosage form. Statistical comparison of the obtained results with those found by the official methods was done; differences were non-significant in regard to accuracy and precision. The three methods were validated in accordance with ICH guidelines and can be used in quality control laboratories for TL, HZ and AM.

  1. Session-RPE Method for Training Load Monitoring: Validity, Ecological Usefulness, and Influencing Factors

    Monoem Haddad

    2017-11-01

    Purpose: The aim of this review is to (1) retrieve all data validating the session rating of perceived exertion (RPE) method using various criteria, (2) highlight the rationale of this method and its ecological usefulness, and (3) describe factors that can alter RPE and that users of this method should take into consideration. Method: Search engines such as SPORTDiscus, PubMed, and Google Scholar were consulted for English-language studies between 2001 and 2016 on the validity and usefulness of the session-RPE method. Studies were considered for further analysis when they used the session-RPE method proposed by Foster et al. in 2001. Participants were athletes of any gender, age, or level of competition. Studies in languages other than English were excluded from the analysis of the validity and reliability of the session-RPE method. Other studies were examined to explain the rationale of the session-RPE method and the origin of RPE. Results: A total of 950 studies cited the Foster et al. study that proposed the session-RPE method; 36 studies have examined the validity and reliability of this method using the modified CR-10 scale. Conclusion: These studies confirmed the validity, good reliability and internal consistency of the session-RPE method in several sports and physical activities, with men and women of different age categories (children, adolescents, and adults) and various expertise levels. This method could be used as a stand-alone method for training load (TL) monitoring, though some recommend combining it with other physiological parameters such as heart rate.
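
    By way of illustration, the session-RPE training load is the CR-10 rating reported after the session multiplied by the session duration in minutes; weekly load, monotony and strain follow directly from Foster's definitions. A minimal sketch with hypothetical data:

        import statistics

        # Hypothetical week: (session duration in minutes, CR-10 session RPE).
        week = [(60, 5), (45, 7), (90, 4), (0, 0), (60, 6), (30, 8), (0, 0)]

        daily_load = [minutes * rpe for minutes, rpe in week]  # arbitrary units
        weekly_load = sum(daily_load)
        monotony = statistics.mean(daily_load) / statistics.stdev(daily_load)
        strain = weekly_load * monotony

        print(f"weekly load = {weekly_load} AU, monotony = {monotony:.2f}, "
              f"strain = {strain:.0f}")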

  2. Validation of the Welch Allyn SureBP (inflation) and StepBP (deflation) algorithms by AAMI standard testing and BHS data analysis.

    Alpert, Bruce S

    2011-04-01

    We evaluated two new Welch Allyn automated blood pressure (BP) algorithms. The first, SureBP, estimates BP during cuff inflation; the second, StepBP, does so during deflation. We followed the American National Standards Institute/Association for the Advancement of Medical Instrumentation SP10:2006 standard for testing and data analysis. The data were also analyzed using the British Hypertension Society analysis strategy. We tested children, adolescents, and adults. The requirements of the American National Standards Institute/Association for the Advancement of Medical Instrumentation SP10:2006 standard were fulfilled with respect to BP levels, arm sizes, and ages. Association for the Advancement of Medical Instrumentation SP10 Method 1 data analysis was used. The mean±standard deviation for the device readings compared with auscultation by paired, trained, blinded observers in the SureBP mode were -2.14±7.44 mmHg for systolic BP (SBP) and -0.55±5.98 mmHg for diastolic BP (DBP). In the StepBP mode, the differences were -3.61±6.30 mmHg for SBP and -2.03±5.30 mmHg for DBP. Both algorithms achieved an A grade for both SBP and DBP by British Hypertension Society analysis. The SureBP inflation-based algorithm will be available in many new-generation Welch Allyn monitors. Its use will reduce the time it takes to estimate BP in critical patient care circumstances. The device will not need to inflate to excessive suprasystolic BPs to obtain the SBP values. Deflation is rapid once SBP has been determined, thus reducing the total time of cuff inflation and reducing patient discomfort. If the SureBP fails to obtain a BP value, the StepBP algorithm is activated to estimate BP by traditional deflation methodology.
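
    The SP10 Method 1 criterion applied above requires the device-observer differences to have a mean within ±5 mmHg and a standard deviation of no more than 8 mmHg. A sketch of that check with hypothetical paired readings:

        import numpy as np

        # Hypothetical paired SBP readings (mmHg): device vs. auscultation.
        device = np.array([122, 118, 135, 141, 109, 127, 150, 115])
        reference = np.array([124, 121, 136, 144, 110, 131, 152, 118])

        diff = device - reference
        mean, sd = diff.mean(), diff.std(ddof=1)

        # SP10 Method 1: mean difference within +/-5 mmHg and SD <= 8 mmHg.
        passes = abs(mean) <= 5.0 and sd <= 8.0
        print(f"mean = {mean:.2f} mmHg, SD = {sd:.2f} mmHg, pass: {passes}")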

  3. Method of neptunium recovery into the product stream of the Purex second codecontamination step for LWR fuel reprocessing

    Tsuboya, T; Nemoto, S; Hoshino, T; Segawa, T [Power Reactor and Nuclear Fuel Development Corp., Tokyo (Japan)

    1973-04-01

    The neptunium behavior in the second codecontamination step of the Purex process of the Power Reactor and Nuclear Fuel Development Corporation was experimentally studied, and the conditions for discharging neptunium into the product stream were examined. An improved nitrous acid method was applied to the second codecontamination step. Nitrous acid (NaNO2) was supplied to the first stage of the extraction section at a feed rate of 7.5 mM/hr, and hydrazine (hydrazine nitrate) was supplied to some stages near the feed point at a feed rate of 1.6 mM/hr, using laboratory-scale mixer-settlers having 6 ml of mixing volume and 17 ml of settling volume. The neptunium extraction behavior was analyzed with the code NEPTUN-I, which simulates the neptunium concentration profile, and with the code NEPTUN-II, which calculates Np(V) and Np(VI) concentrations. Batch experiments were performed to explain the reduction reaction of Np(VI) in the organic phase. After shaking the aqueous solution containing Np(VI) in 3 M nitric acid with various volume ratios of TBP, the phases were separated and the neptunium concentration was determined. In conclusion, the improved nitrous acid method was effective for discharging neptunium into the product stream when the flow ratio of the organic phase to the aqueous phase was increased to about three times.

  4. One-step method for the fabrication of superhydrophobic surface on magnesium alloy and its corrosion protection, antifouling performance

    Zhao, Lin; Liu, Qi; Gao, Rui; Wang, Jun; Yang, Wanlu; Liu, Lianhe

    2014-01-01

    Highlights: •A myristic acid-iron superhydrophobic surface was formed on AZ31. •Two procedures for building a superhydrophobic surface were simplified to one step. •The superhydrophobic surface shows good anticorrosion and antifouling properties. •We report a new approach for superhydrophobic surface protection on AZ31. -- Abstract: Inspired by the lotus leaf, various methods to fabricate artificial superhydrophobic surfaces have been developed. Our purpose is to create a simple, one-step and environment-friendly method to construct a superhydrophobic surface on a magnesium alloy substrate. The substrate was immersed in a solution containing ferric chloride (FeCl3·6H2O), deionized water, tetradecanoic acid (CH3(CH2)12COOH) and ethanol. Scanning electron microscopy (SEM), X-ray photoelectron spectroscopy (XPS) and Fourier transform infrared (FT-IR) spectroscopy were employed to characterize the substrate surface. The obtained surface showed a micron-scale rough structure, a high contact angle (CA) of 165° ± 2°, and desirable corrosion protection and antifouling properties

  5. New 'one-step' method for the simultaneous synthesis and anchoring of organic monolith inside COC microchip channels

    Ladner, Yoann; Cretier, Gerard; Dugas, Vincent; Randon, Jerome; Faure, Karine; Bruchet, Anthony

    2012-01-01

    A new method for monolith synthesis and anchoring inside cyclic olefin copolymer (COC) microchannels in a single step is proposed. It is shown that type I photo-initiators, typically used in a polymerization mixture to generate free radicals during monolith synthesis, can simultaneously act as type II photo-initiators and react with the plastic surface through hydrogen abstraction. This mechanism is used to 'photo-graft' poly(ethylene glycol) methacrylate (PEGMA) onto COC surfaces. Contact angle measurements were used to observe the changes in surface hydrophilicity with increasing initiator concentration and irradiation duration. The ability of type I photo-initiators to synthesize and anchor a monolith inside COC microchannels in a single step was proved through SEM observations. Different concentrations of photo-initiators were tested. Finally, electro-chromatographic separations of polycyclic aromatic hydrocarbons were performed to illustrate the beneficial effect of anchoring on chromatographic performance. The versatility of the method was demonstrated with two widely used photo-initiators: benzoin methyl ether (BME) and azobisisobutyronitrile (AIBN). (authors)

  6. Development and validation of a dissolution method using HPLC for diclofenac potassium in oral suspension

    Alexandre Machado Rubim

    2014-04-01

    The present study describes the development and validation of an in vitro dissolution method for evaluating the release of diclofenac potassium in oral suspension. The dissolution test was developed and validated according to international guidelines. Parameters such as linearity, specificity, precision and accuracy were evaluated, as well as the influence of rotation speed and surfactant concentration in the medium. After selecting the best conditions, the method was validated using apparatus 2 (paddle), a 50-rpm rotation speed, and 900 mL of water with 0.3% sodium lauryl sulfate (SLS) as dissolution medium at 37.0 ± 0.5°C. Samples were analyzed using an HPLC-UV (PDA) method. The results obtained were satisfactory for the parameters evaluated. The method developed may be useful in routine quality control for pharmaceutical industries that produce oral suspensions containing diclofenac potassium.

  7. Validation of innovative technologies and strategies for regulatory safety assessment methods: challenges and opportunities.

    Stokes, William S; Wind, Marilyn

    2010-01-01

    Advances in science and innovative technologies are providing new opportunities to develop test methods and strategies that may improve safety assessments and reduce animal use for safety testing. These include high throughput screening and other approaches that can rapidly measure or predict various molecular, genetic, and cellular perturbations caused by test substances. Integrated testing and decision strategies that consider multiple types of information and data are also being developed. Prior to their use for regulatory decision-making, new methods and strategies must undergo appropriate validation studies to determine the extent that their use can provide equivalent or improved protection compared to existing methods and to determine the extent that reproducible results can be obtained in different laboratories. Comprehensive and optimal validation study designs are expected to expedite the validation and regulatory acceptance of new test methods and strategies that will support improved safety assessments and reduced animal use for regulatory testing.

  8. ReSOLV: Applying Cryptocurrency Blockchain Methods to Enable Global Cross-Platform Software License Validation

    Alan Litchfield

    2018-05-01

    This paper presents a method for a decentralised peer-to-peer software license validation system using cryptocurrency blockchain technology to reduce software piracy and to provide a mechanism for software developers to protect copyrighted works. Protecting software copyright has been an issue since the late 1970s, and software license validation has been a primary method employed in an attempt to minimise software piracy and protect software copyright. The method described creates an ecosystem in which the rights and privileges of participants are observed.

  9. Method development and validation of liquid chromatography-tandem/mass spectrometry for aldosterone in human plasma: Application to drug interaction study of atorvastatin and olmesartan combination

    Rakesh Das

    2014-01-01

    In the present investigation, a simple and sensitive liquid chromatography-tandem mass spectrometry (LC/MS/MS) method was developed for the quantification of aldosterone (ALD), a hormone involved in blood pressure regulation, in human plasma. The developed method was validated and then applied to human subjects to study the drug interaction of atorvastatin (ATSV) and olmesartan (OLM) on levels of ALD. ALD in plasma was extracted by liquid-liquid extraction with 5 mL dichloromethane/ethyl ether (60/40, v/v). The chromatographic separation of ALD was carried out on an Xterra RP C18 column (150 mm × 4.6 mm, 3.5 μm) at 30°C with a four-step gradient program composed of methanol and water. Step 1 started with 35% methanol for the first 1 min, which changed linearly to 90% over the next 1.5 min in Step 2. Step 3 lasted for the next 2 min with 90% methanol. The method concluded with Step 4, returning to the initial methanol concentration, i.e., 35%, giving a total method run time of 17.5 min. The flow rate was 0.25 mL/min throughout. The developed method was validated for specificity, accuracy, precision, stability, linearity, sensitivity, and recovery. The method was linear and found to be acceptable over the range of 50-800 ng/mL. The method was successfully applied to the drug interaction study of ATSV + OLM combination treatment versus OLM treatment alone on blood pressure, by quantifying changes in levels of ALD in hypertensive patients. The study revealed that levels of ALD were significantly higher under the ATSV + OLM treatment condition than under OLM as a single treatment. This reflects the reason for the low effectiveness of ATSV + OLM in combination, instead of the expected synergistic activity.

  10. The Diabetes Evaluation Framework for Innovative National Evaluations (DEFINE): Construct and Content Validation Using a Modified Delphi Method.

    Paquette-Warren, Jann; Tyler, Marie; Fournie, Meghan; Harris, Stewart B

    2017-06-01

    In order to scale up successful innovations, more evidence is needed to evaluate programs that attempt to address the rising prevalence of diabetes and the associated burdens on patients and the healthcare system. This study aimed to assess the construct and content validity of the Diabetes Evaluation Framework for Innovative National Evaluations (DEFINE), a tool developed to guide evaluation design and implementation with built-in knowledge translation principles. A modified Delphi method, including 3 individual rounds (questionnaires with 7-point agreement/importance Likert scales and/or open-ended questions) and 1 group round (open discussion), was conducted. Twelve experts in diabetes, research, knowledge translation, evaluation and policy from Canada (Ontario, Quebec and British Columbia) and Australia participated. The quantitative consensus criterion was an interquartile range of ≤1. Qualitative data were analyzed thematically and confirmed by participants. An importance scale was used to determine a priority multi-level indicator set: items rated very or extremely important by 80% or more of the experts were reviewed in the final group round to build the final set. Participants reached consensus on the content and construct validity of DEFINE, including its title, overall goal, 5-step evaluation approach, medical and non-medical determinants of health schematics, full list of indicators and associated measurement tools, priority multi-level indicator set, and next steps in DEFINE's development. Validated by experts, DEFINE has the right theoretical components to comprehensively evaluate diabetes prevention and management programs and to support the acquisition of evidence that could influence the knowledge translation of innovations to reduce the burden of diabetes.
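
    The quantitative consensus rule used here, an interquartile range of ≤1 on a 7-point Likert scale, plus the 80% very/extremely-important threshold for the priority indicator set, is straightforward to operationalize. A sketch with hypothetical ratings (treating ratings of 6-7 as very/extremely important, an assumption):

        import numpy as np

        # Hypothetical 7-point Likert ratings from 12 experts for two items.
        ratings = {
            "overall_goal": [6, 7, 6, 6, 7, 6, 5, 6, 7, 6, 6, 7],
            "indicator_x":  [3, 7, 5, 6, 4, 7, 2, 6, 5, 7, 4, 6],
        }

        for item, r in ratings.items():
            r = np.array(r)
            iqr = np.percentile(r, 75) - np.percentile(r, 25)
            consensus = iqr <= 1                 # Delphi consensus criterion
            pct_high = 100 * np.mean(r >= 6)     # very/extremely important
            print(f"{item}: IQR = {iqr:.1f}, consensus = {consensus}, "
                  f"high importance = {pct_high:.0f}%")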

  11. Experimental validation of calculation methods for structures with shock non-linearity

    Brochard, D.; Buland, P.

    1987-01-01

    For the seismic analysis of non-linear structures, numerical methods have been developed that need to be validated against experimental results. The aim of this paper is to present the design method of a test program whose results will be used for this purpose. Some applications to nuclear components illustrate this presentation [fr

  12. Examination of packaging materials in bakery products: a validated method for detection and quantification

    Raamsdonk, van L.W.D.; Pinckaers, V.G.Z.; Vliege, J.J.M.; Egmond, van H.J.

    2012-01-01

    Methods for the detection and quantification of packaging materials are necessary to enforce the prohibition of these materials under Regulation (EC) 767/2009. A method has been developed and validated at RIKILT for bakery products, including sweet bread and raisin bread. This choice

  13. Validation of a Novel 3-Dimensional Sonographic Method for Assessing Gastric Accommodation in Healthy Adults

    Buisman, Wijnand J; van Herwaarden-Lindeboom, MYA; Mauritz, Femke A; El Ouamari, Mourad; Hausken, Trygve; Olafsdottir, Edda J; van der Zee, David C; Gilja, Odd Helge

    OBJECTIVES: A novel automated 3-dimensional (3D) sonographic method has been developed for measuring gastric volumes. This study aimed to validate and assess the reliability of this novel 3D sonographic method compared to the reference standard in 3D gastric sonography: freehand magneto-based 3D

  14. Validity of a structured method of selecting abstracts for a plastic surgical scientific meeting

    van der Steen, LPE; Hage, JJ; Kon, M; Monstrey, SJ

    In 1999, the European Association of Plastic Surgeons accepted a structured method to assess and select the abstracts that are submitted for its yearly scientific meeting. The two criteria used to evaluate whether such a selection method is accurate were reliability and validity. The authors

  15. Application of EU guidelines for the validation of screening methods for veterinary drugs

    Stolker, A.A.M.

    2012-01-01

    Commission Decision (CD) 2002/657/EC describes detailed rules for method validation within the framework of residue monitoring programmes. The approach described in this CD is criteria-based. For (qualitative) screening methods, the most important criterion is that the CCβ has to be below any

  16. Validity of the remote food photography method against doubly labeled water among minority preschoolers

    The aim of this study was to determine the validity of energy intake (EI) estimations made using the remote food photography method (RFPM) compared to the doubly labeled water (DLW) method in minority preschool children in a free-living environment. Seven days of food intake and spot urine samples...

  17. Cross-Validation of Survival Bump Hunting by Recursive Peeling Methods.

    Dazard, Jean-Eudes; Choe, Michael; LeBlanc, Michael; Rao, J Sunil

    2014-08-01

    We introduce a survival/risk bump hunting framework to build a bump hunting model with a possibly censored time-to-event response and to validate model estimates. First, we describe the use of adequate survival peeling criteria to build a survival/risk bump hunting model based on recursive peeling methods. Our method, called "Patient Recursive Survival Peeling", is a rule-induction method that makes use of specific peeling criteria such as hazard ratio or log-rank statistics. Second, to validate our model estimates and improve survival prediction accuracy, we describe a resampling-based validation technique specifically designed for the joint task of decision rule making by recursive peeling (i.e., decision-box) and survival estimation. This alternative technique, called "combined" cross-validation, combines test samples over the cross-validation loops, a design allowing for bump hunting by recursive peeling in a survival setting. We provide empirical results showing the importance of cross-validation and replication.
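
    A hedged Python sketch of the "combined" cross-validation idea described above: rather than scoring each fold separately, the held-out samples are pooled across folds so that survival can be estimated once on the combined test set. Here fit_rule and in_box are hypothetical placeholders for the peeling and box-membership steps, not the authors' implementation.

    import numpy as np
    from sklearn.model_selection import KFold

    def combined_cv(X, time, event, fit_rule, in_box, n_splits=5, seed=0):
        pooled = []  # (time, event, in-box flag) for every held-out sample
        for train, test in KFold(n_splits, shuffle=True, random_state=seed).split(X):
            rule = fit_rule(X[train], time[train], event[train])  # peel on the training fold
            flags = in_box(rule, X[test])                         # apply the box to the test fold
            pooled.append(np.column_stack([time[test], event[test], flags]))
        return np.vstack(pooled)  # one combined test set for survival estimation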

  18. Validation parameters of instrumental method for determination of total bacterial count in milk

    Nataša Mikulec

    2004-10-01

    The method of flow cytometry, a rapid, instrumental, routine microbiological method, is used for determination of the total bacterial count in milk. The results of flow cytometry are expressed as individual bacterial cell counts. Problems regarding the interpretation of total bacterial count results can be avoided by transforming the results of the flow cytometry method onto the scale of the reference method (HRN ISO 6610:2001). The method of flow cytometry, like any analytical method, requires validation and verification according to the HRN EN ISO/IEC 17025:2000 standard. This paper describes the validation parameters accuracy, precision, specificity, range, robustness and measurement uncertainty for the flow cytometry method.
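
    One common way to put flow cytometry counts on the scale of a plate-count reference method is a log10-log10 linear calibration; the Python sketch below illustrates the idea with made-up calibration data. The actual conversion used against HRN ISO 6610:2001 is not reproduced in the abstract, so the data and fitted line here are assumptions.

    import numpy as np

    ibc = np.array([2.0e4, 8.0e4, 3.0e5, 1.2e6, 5.0e6])  # cells/mL (flow cytometry)
    cfu = np.array([9.0e3, 3.5e4, 1.4e5, 6.0e5, 2.2e6])  # CFU/mL (reference method)

    # Fit log10(CFU) = intercept + slope * log10(IBC) on the calibration pairs.
    slope, intercept = np.polyfit(np.log10(ibc), np.log10(cfu), 1)

    def to_reference_scale(ibc_count):
        """Map an individual bacterial cell count onto the reference (CFU) scale."""
        return 10 ** (intercept + slope * np.log10(ibc_count))

    print(round(to_reference_scale(1.0e5)))  # estimated CFU/mL for 1e5 cells/mL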

  19. A novel single-step synthesis of N-doped TiO2 via a sonochemical method

    Wang, Xi-Kui; Wang, Chen; Guo, Wei-Lin; Wang, Jin-Gang

    2011-01-01

    Graphical abstract: The N-doped anatase TiO2 nanoparticles were synthesized by a sonochemical method. The as-prepared sample was characterized by XRD, TEM, XPS and UV-Vis DRS. The photocatalytic activity of the photocatalyst was evaluated by the photodegradation of the azo dye direct sky blue 5B. Highlights: → A novel single-step sonochemical synthesis method for the preparation of anatase N-doped TiO2 nanocrystals at low temperature has been developed. → The as-prepared sample was characterized by XRD, TEM, XPS and UV-Vis DRS. → The photodegradation of the azo dye direct sky blue 5B showed that the N-doped TiO2 catalyst has high visible-light photocatalytic activity. -- Abstract: A novel single-step synthetic method for the preparation of anatase N-doped TiO2 nanocrystals at low temperature has been developed. The N-doped anatase TiO2 nanoparticles were synthesized by sonication of a solution of titanium tetraisopropoxide and urea in water and isopropyl alcohol at 80 °C for 150 min. The as-prepared sample was characterized by X-ray diffraction, transmission electron microscopy, X-ray photoelectron spectroscopy and UV-vis absorption spectroscopy. The product structure depends on the reaction temperature and reaction time. The photocatalytic activity of the as-prepared photocatalyst was evaluated via the photodegradation of the azo dye direct sky blue 5B. The results show that the N-doped TiO2 nanocrystals prepared via sonication exhibit excellent photocatalytic activity under UV light and simulated sunlight.

  20. A new fourth-order Fourier-Bessel split-step method for the extended nonlinear Schroedinger equation

    Nash, Patrick L.

    2008-01-01

    Fourier split-step techniques are often used to compute soliton-like numerical solutions of the nonlinear Schroedinger equation. Here, a new fourth-order implementation of the Fourier split-step algorithm is described for problems possessing azimuthal symmetry in 3+1 dimensions. This implementation is based, in part, on a finite difference approximation Δ⊥^FDA of the transverse Laplacian (1/r) ∂/∂r (r ∂/∂r) that possesses an associated exact unitary representation of e^((i/2)λ Δ⊥^FDA). The matrix elements of this unitary matrix are given by special functions known as the associated Bessel functions; hence the attribute Fourier-Bessel for the method. The Fourier-Bessel algorithm is shown to be unitary and unconditionally stable. The Fourier-Bessel algorithm is employed to simulate the propagation of a periodic series of short laser pulses through a nonlinear medium. This numerical simulation calculates waveform intensity profiles in a sequence of planes that are transverse to the general propagation direction and labeled by the cylindrical coordinate z. These profiles exhibit a series of isolated pulses that are offset from the time origin by characteristic times, and provide evidence for a physical effect that may be loosely termed normal mode condensation. Normal mode condensation is consistent with experimentally observed pulse filamentation into a packet of short bursts, which may occur as a result of short, intense irradiation of a medium.
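
    For orientation, the Python sketch below shows the classic second-order (Strang) split-step Fourier scheme for the 1-D cubic Schroedinger equation i u_t = -u_xx + g|u|^2 u; the paper's contribution is a fourth-order variant whose transverse step uses associated Bessel functions, which is not reproduced here. Grid and parameters are illustrative.

    import numpy as np

    def split_step_nls(u0, dt, steps, dx, g=-2.0):
        """Strang splitting: half linear step, full nonlinear step, half linear step."""
        k = 2 * np.pi * np.fft.fftfreq(u0.size, d=dx)  # spectral wavenumbers
        half_linear = np.exp(-1j * k**2 * dt / 2)      # exact half step of i u_t = -u_xx
        u = u0.astype(complex)
        for _ in range(steps):
            u = np.fft.ifft(half_linear * np.fft.fft(u))
            u = u * np.exp(-1j * g * np.abs(u)**2 * dt)  # exact nonlinear phase rotation
            u = np.fft.ifft(half_linear * np.fft.fft(u))
        return u

    x = np.linspace(-20, 20, 1024, endpoint=False)
    u0 = 1.0 / np.cosh(x)  # with g = -2, sech(x) is the exact soliton of this normalization
    u = split_step_nls(u0, dt=1e-3, steps=1000, dx=x[1] - x[0])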