WorldWideScience

Sample records for sequential application method

  1. Coupling surfactants with permanganate for DNAPL removal : coinjection or sequential application as delivery methods

    Energy Technology Data Exchange (ETDEWEB)

    Dugan, P.J. [Carus Corp., Peru, IL (United States); Siegrist, R.L. [Colorado School of Mines, Golden, CO (United States); Crimi, M.L. [Clarkson Univ., Potsdam, NY (United States)

    2010-07-01

    This PowerPoint presentation described a study conducted to test the effectiveness of surfactant-enhanced permanganate for the remediation of dense nonaqueous phase liquids (DNAPL). When DNAPL enters the environment, it can pollute millions of gallons of ground water and create huge dissolved plumes that act as long-term sources of contamination. Surfactants were used to enhance the solubilization and mobilization of DNAPL during the remediation process. In situ chemical oxidation (ISCO) was then used to deliver oxidants into the sub-surface to destroy organic contaminants in the soil and ground water. Experimental 2-D flow-through cell studies of 72 surfactants were conducted with permanganate to evaluate delivery methods and identify co-solvents compatible with the surfactant process. Delivery methods included co-injection and sequential application. Four surfactants were found to be compatible with the permanganate. A 90 percent DNAPL remediation rate was achieved using relatively low surfactant and oxidant concentrations. tabs., figs.

  2. Trial Sequential Methods for Meta-Analysis

    Science.gov (United States)

    Kulinskaya, Elena; Wood, John

    2014-01-01

    Statistical methods for sequential meta-analysis have applications also for the design of new trials. Existing methods are based on group sequential methods developed for single trials and start with the calculation of a required information size. This works satisfactorily within the framework of fixed effects meta-analysis, but conceptual…

  3. Evaluation Using Sequential Trials Methods.

    Science.gov (United States)

    Cohen, Mark E.; Ralls, Stephen A.

    1986-01-01

    Although dental school faculty as well as practitioners are interested in evaluating products and procedures used in clinical practice, research design and statistical analysis can sometimes pose problems. Sequential trials methods provide an analytical structure that is both easy to use and statistically valid. (Author/MLW)

  4. Arsenic Mobility and Availability in Sediments by Application of BCR Sequential Extractions Method

    International Nuclear Information System (INIS)

    Larios, R.; Fernandez, R.; Rucandio, M. I.

    2011-01-01

    Arsenic is a metalloid found in nature, both naturally and due to anthropogenic activities. Among the latter, mining works are an important source of arsenic release to the environment. Asturias is a region where important mercury mines were exploited, and in them arsenic occurs in paragenesis with mercury minerals. The toxicity and mobility of this element depend on the chemical species in which it is found. Fractionation studies are required to analyze the mobility of this metalloid in soils and sediments. Among them, the scheme proposed by the Community Bureau of Reference (BCR) is one of the most widely employed. This method attempts to divide up, by operationally defined stages, the amount of this element associated with carbonates (fraction 1), iron and manganese oxyhydroxides (fraction 2), organic matter and sulphides (fraction 3), and finally the residual fraction associated with primary and secondary minerals, that is, from the most labile fractions to the most refractory ones. Fractionation of arsenic in sediments from two mines in Asturias, La Soterrana and Los Rueldos, was studied. Sediments from La Soterrana showed high levels of arsenic in the non-residual phases, indicating that the majority of the arsenic has an anthropogenic origin. By contrast, in sediments from Los Rueldos most of the arsenic is concentrated in the residual phase, indicating that this element remains bound to very refractory primary minerals, as is also demonstrated by the strong correlation between arsenic fractionation and the fractionation of elements present in refractory minerals, such as iron, aluminum and titanium. (Author) 51 refs.
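
    The BCR scheme described above is, computationally, a bookkeeping exercise over four operationally defined pools; the short sketch below (with hypothetical concentrations, not data from this record) tabulates the fraction shares and the non-residual, potentially mobile proportion used to flag anthropogenic enrichment.

```python
# Minimal sketch (hypothetical numbers, not data from this record): summarizing a
# BCR sequential extraction for arsenic by computing each operationally defined
# fraction's share and the non-residual, potentially mobile proportion.

fractions_mg_kg = {
    "F1 acid-soluble / carbonates": 12.0,
    "F2 Fe/Mn oxyhydroxides": 85.0,
    "F3 organic matter / sulphides": 40.0,
    "Residual": 23.0,
}

total = sum(fractions_mg_kg.values())
for name, conc in fractions_mg_kg.items():
    print(f"{name:30s} {conc:7.1f} mg/kg  ({100 * conc / total:5.1f} %)")

mobile = total - fractions_mg_kg["Residual"]
print(f"Non-residual (potentially mobile) As: {100 * mobile / total:.1f} %")
```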

  5. Fast sequential Monte Carlo methods for counting and optimization

    CERN Document Server

    Rubinstein, Reuven Y; Vaisman, Radislav

    2013-01-01

    A comprehensive account of the theory and application of Monte Carlo methods Based on years of research in efficient Monte Carlo methods for estimation of rare-event probabilities, counting problems, and combinatorial optimization, Fast Sequential Monte Carlo Methods for Counting and Optimization is a complete illustration of fast sequential Monte Carlo techniques. The book provides an accessible overview of current work in the field of Monte Carlo methods, specifically sequential Monte Carlo techniques, for solving abstract counting and optimization problems. Written by authorities in the

  6. A framework for sequential multiblock component methods

    NARCIS (Netherlands)

    Smilde, A.K.; Westerhuis, J.A.; Jong, S.de

    2003-01-01

    Multiblock or multiset methods are starting to be used in chemistry and biology to study complex data sets. In chemometrics, sequential multiblock methods are popular; that is, methods that calculate one component at a time and use deflation for finding the next component. In this paper a framework

  7. Multi-volatile method for aroma analysis using sequential dynamic headspace sampling with an application to brewed coffee.

    Science.gov (United States)

    Ochiai, Nobuo; Tsunokawa, Jun; Sasamoto, Kikuo; Hoffmann, Andreas

    2014-12-05

    A novel multi-volatile method (MVM) using sequential dynamic headspace (DHS) sampling for analysis of aroma compounds in aqueous samples was developed. The MVM consists of three different DHS method parameter sets, including the choice of the replaceable adsorbent trap. The first DHS sampling at 25 °C using a carbon-based adsorbent trap targets very volatile solutes with high vapor pressure (>20 kPa). The second DHS sampling at 25 °C using the same type of carbon-based adsorbent trap targets volatile solutes with moderate vapor pressure (1-20 kPa). The third DHS sampling using a Tenax TA trap at 80 °C targets solutes with low vapor pressure (<1 kPa). The method showed good linearity (r(2)>0.9910) and high sensitivity (limit of detection: 1.0-7.5 ng mL(-1)) even with MS scan mode. The feasibility and benefit of the method were demonstrated with analysis of a wide variety of aroma compounds in brewed coffee. Ten potent aroma compounds from top-note to base-note (acetaldehyde, 2,3-butanedione, 4-ethyl guaiacol, furaneol, guaiacol, 3-methyl butanal, 2,3-pentanedione, 2,3,5-trimethyl pyrazine, vanillin, and 4-vinyl guaiacol) could be identified together with an additional 72 aroma compounds. Thirty compounds including 9 potent aroma compounds were quantified in the range of 74-4300 ng mL(-1) (RSD<10%, n=5). Copyright © 2014 The Authors. Published by Elsevier B.V. All rights reserved.
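
    The vapor-pressure-based routing that defines the three DHS parameter sets can be expressed as a tiny dispatch function; the compound names and vapor pressures in the sketch below are illustrative placeholders, not values from the paper.

```python
# Sketch of the MVM partitioning logic described above: solutes are routed to one
# of three DHS parameter sets according to vapor pressure. The compounds and
# vapor pressures below are illustrative placeholders, not values from the paper.

def dhs_step(vapor_pressure_kpa: float) -> str:
    """Return the DHS sampling step a solute would target."""
    if vapor_pressure_kpa > 20:
        return "DHS-1: 25 degC, carbon-based trap (very volatile)"
    elif vapor_pressure_kpa >= 1:
        return "DHS-2: 25 degC, carbon-based trap (volatile)"
    else:
        return "DHS-3: 80 degC, Tenax TA trap (low volatility)"

for name, vp in [("acetaldehyde", 120.0), ("2,3-butanedione", 7.6), ("guaiacol", 0.013)]:
    print(f"{name:18s} {vp:8.3f} kPa -> {dhs_step(vp)}")
```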

  8. A Practical Framework for Evaluating Health Services Management Educational Program: The Application of The Mixed-Method Sequential Explanatory Design

    Directory of Open Access Journals (Sweden)

    Bazrafshan Azam

    2015-07-01

    Full Text Available Introduction: Health services managers are responsible for improving efficiency and quality in delivering healthcare services. In this regard, Health Services Management (HSM) programs have been widely established to provide health providers with skilled, professional managers to address those needs. It is therefore important to ascertain the quality of these programs. The purpose of this study was to synthesize and develop a framework to evaluate the quality of the Health Services Management (HSM) program at Kerman University of Medical Sciences. Methods: This study followed a mixed-method sequential explanatory approach in which data were collected through a CIPP survey and semi-structured interviews. In phase 1, participants included 10 faculty members, 64 students and 90 alumni. In phase 2, in-depth semi-structured interviews with purposeful sampling were conducted with 27 participants to better understand their perceptions of the HSM program. All interviews were audio-taped and transcribed verbatim. NVivo N8 was used to analyze the qualitative data and extract the themes. Results: The data analysis revealed both positive and negative attitudes toward the HSM program. According to the CIPP survey, program objectives (74%), curriculum content (59.5%) and graduate skills (79%) were the major sources of dissatisfaction. However, most respondents (n=48) reported that the classes are well equipped and learning resources are well prepared (n=41). Most respondents (n=41) reported that the students are actively involved in classroom activities. The majority of respondents (n=43) pointed out that the instructors implemented appropriate teaching strategies. Qualitative analysis of the interviews revealed that regular community needs assessment, content revision and directing attention to graduate skills and expertise are the key solutions for improving the program’s quality. Conclusion: This study revealed to what extent the HSM program objectives is being

  9. General Methods for Analysis of Sequential “n-step” Kinetic Mechanisms: Application to Single Turnover Kinetics of Helicase-Catalyzed DNA Unwinding

    Science.gov (United States)

    Lucius, Aaron L.; Maluf, Nasib K.; Fischer, Christopher J.; Lohman, Timothy M.

    2003-01-01

    Helicase-catalyzed DNA unwinding is often studied using “all or none” assays that detect only the final product of fully unwound DNA. Even using these assays, quantitative analysis of DNA unwinding time courses for DNA duplexes of different lengths, L, using “n-step” sequential mechanisms, can reveal information about the number of intermediates in the unwinding reaction and the “kinetic step size”, m, defined as the average number of basepairs unwound between two successive rate limiting steps in the unwinding cycle. Simultaneous nonlinear least-squares analysis using “n-step” sequential mechanisms has previously been limited by an inability to float the number of “unwinding steps”, n, and m, in the fitting algorithm. Here we discuss the behavior of single turnover DNA unwinding time courses and describe novel methods for nonlinear least-squares analysis that overcome these problems. Analytic expressions for the time courses, fss(t), when obtainable, can be written using gamma and incomplete gamma functions. When analytic expressions are not obtainable, the numerical solution of the inverse Laplace transform can be used to obtain fss(t). Both methods allow n and m to be continuous fitting parameters. These approaches are generally applicable to enzymes that translocate along a lattice or require repetition of a series of steps before product formation. PMID:14507688

  10. General methods for analysis of sequential "n-step" kinetic mechanisms: application to single turnover kinetics of helicase-catalyzed DNA unwinding.

    Science.gov (United States)

    Lucius, Aaron L; Maluf, Nasib K; Fischer, Christopher J; Lohman, Timothy M

    2003-10-01

    Helicase-catalyzed DNA unwinding is often studied using "all or none" assays that detect only the final product of fully unwound DNA. Even using these assays, quantitative analysis of DNA unwinding time courses for DNA duplexes of different lengths, L, using "n-step" sequential mechanisms, can reveal information about the number of intermediates in the unwinding reaction and the "kinetic step size", m, defined as the average number of basepairs unwound between two successive rate limiting steps in the unwinding cycle. Simultaneous nonlinear least-squares analysis using "n-step" sequential mechanisms has previously been limited by an inability to float the number of "unwinding steps", n, and m, in the fitting algorithm. Here we discuss the behavior of single turnover DNA unwinding time courses and describe novel methods for nonlinear least-squares analysis that overcome these problems. Analytic expressions for the time courses, f(ss)(t), when obtainable, can be written using gamma and incomplete gamma functions. When analytic expressions are not obtainable, the numerical solution of the inverse Laplace transform can be used to obtain f(ss)(t). Both methods allow n and m to be continuous fitting parameters. These approaches are generally applicable to enzymes that translocate along a lattice or require repetition of a series of steps before product formation.
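
    For readers who want to see the gamma-function form in action, the sketch below (illustrative, not code from the paper) fits the all-or-none "n-step" time course f_ss(t) = A·P(n, kt), where P is the regularized lower incomplete gamma function and n is allowed to be a continuous fit parameter; the duplex length used to convert n into a kinetic step size m = L/n is an assumed value.

```python
# Hedged sketch of the "n-step" all-or-none time course discussed above: for n
# identical sequential steps of rate k, the fraction of fully unwound duplex is an
# Erlang/Gamma CDF, f_ss(t) = A * P(n, k*t). The synthetic data are illustrative.

import numpy as np
from scipy.special import gammainc
from scipy.optimize import curve_fit

def n_step_time_course(t, amplitude, k, n):
    # gammainc(a, x) is the regularized lower incomplete gamma function P(a, x)
    return amplitude * gammainc(n, k * t)

rng = np.random.default_rng(0)
t = np.linspace(0, 60, 80)                            # time points, s
truth = n_step_time_course(t, 0.8, 0.25, 4.2)         # "true" illustrative parameters
data = truth + rng.normal(scale=0.02, size=t.size)    # noisy synthetic time course

popt, _ = curve_fit(n_step_time_course, t, data, p0=[1.0, 0.1, 3.0],
                    bounds=(0, np.inf))               # keep A, k, n positive
amplitude, k, n = popt
L_bp = 25                                             # hypothetical duplex length, bp
print(f"A = {amplitude:.2f}, k = {k:.3f} 1/s, n = {n:.2f}, kinetic step size m = {L_bp / n:.1f} bp")
```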

  11. Methods for sequential resonance assignment in solid, uniformly 13C, 15N labelled peptides: Quantification and application to antamanide

    International Nuclear Information System (INIS)

    Detken, Andreas; Hardy, Edme H.; Ernst, Matthias; Kainosho, Masatsune; Kawakami, Toru; Aimoto, Saburo; Meier, Beat H.

    2001-01-01

    The application of adiabatic polarization-transfer experiments to resonance assignment in solid, uniformly 13C-15N-labelled polypeptides is demonstrated for the cyclic decapeptide antamanide. A homonuclear correlation experiment employing the DREAM sequence for adiabatic dipolar transfer yields a complete assignment of the Cα and aliphatic side-chain 13C resonances to amino acid types. The same information can be obtained from a TOBSY experiment using the recently introduced P9 1 12 TOBSY sequence, which employs the J couplings as a transfer mechanism. A comparison of the two methods is presented. Except for some aromatic phenylalanine resonances, a complete sequence-specific assignment of the 13C and 15N resonances in antamanide is achieved by a series of selective or broadband adiabatic triple-resonance experiments. Heteronuclear transfer by adiabatic-passage Hartmann-Hahn cross polarization is combined with adiabatic homonuclear transfer by the DREAM and rotational-resonance tickling sequences into two- and three-dimensional experiments. The performance of these experiments is evaluated quantitatively

  12. Further comments on the sequential probability ratio testing methods

    Energy Technology Data Exchange (ETDEWEB)

    Kulacsy, K. [Hungarian Academy of Sciences, Budapest (Hungary). Central Research Inst. for Physics

    1997-05-23

    The Bayesian method for belief updating proposed in Racz (1996) is examined. The interpretation of the belief function introduced therein is found, and the method is compared to the classical binary Sequential Probability Ratio Testing method (SPRT). (author).

  13. A sequential mixed methods research approach to investigating HIV ...

    African Journals Online (AJOL)

    2016-09-03

    Sep 3, 2016 ... Sequential mixed methods research is an effective approach for ... show the effectiveness of the research method. ... qualitative data before quantitative datasets ..... whereby both types of data are collected simultaneously.

  14. Comparing two Poisson populations sequentially: an application

    International Nuclear Information System (INIS)

    Halteman, E.J.

    1986-01-01

    Rocky Flats Plant in Golden, Colorado monitors each of its employees for radiation exposure. Excess exposure is detected by comparing the means of two Poisson populations. A sequential probability ratio test (SPRT) is proposed as a replacement for the fixed-sample normal approximation test. A uniformly most efficient SPRT exists; however, logistics suggest using a truncated SPRT. The truncated SPRT is evaluated in detail and shown to offer large potential savings in the average time employees spend in the monitoring process
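
    As a hedged illustration of the idea in this record (simplified to a single Poisson rate rather than a two-population comparison, and with an arbitrary truncation rule), the sketch below accumulates the Poisson log-likelihood ratio against Wald's boundaries and stops early when either boundary is crossed; all rates, error levels and the truncation point are illustrative.

```python
# Hedged sketch of a truncated Wald SPRT: H0: lambda = lam0 (background) versus
# H1: lambda = lam1 (excess exposure), with counts arriving one monitoring period
# at a time. All numbers are illustrative, not taken from the record.

import math
import numpy as np

def truncated_sprt_poisson(counts, lam0, lam1, alpha=0.05, beta=0.05, n_max=30):
    upper = math.log((1 - beta) / alpha)      # accept H1 when the LLR crosses this
    lower = math.log(beta / (1 - alpha))      # accept H0 when the LLR crosses this
    llr, log_ratio = 0.0, math.log(lam1 / lam0)
    for i, x in enumerate(counts[:n_max], start=1):
        llr += x * log_ratio - (lam1 - lam0)  # Poisson log-likelihood ratio term
        if llr >= upper:
            return "accept H1 (excess exposure)", i
        if llr <= lower:
            return "accept H0 (background)", i
    # truncation rule (illustrative): fall back to the sign of the accumulated evidence
    return ("lean H1" if llr > 0 else "lean H0"), min(len(counts), n_max)

rng = np.random.default_rng(1)
decision, n_used = truncated_sprt_poisson(rng.poisson(2.0, size=50), lam0=2.0, lam1=4.0)
print(decision, "after", n_used, "monitoring periods")
```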

  15. Developing a Self-Report-Based Sequential Analysis Method for Educational Technology Systems: A Process-Based Usability Evaluation

    Science.gov (United States)

    Lin, Yi-Chun; Hsieh, Ya-Hui; Hou, Huei-Tse

    2015-01-01

    The development of a usability evaluation method for educational systems or applications, called the self-report-based sequential analysis, is described herein. The method aims to extend the current practice by proposing self-report-based sequential analysis as a new usability method, which integrates the advantages of self-report in survey…

  16. A sequential mixed methods research approach to investigating HIV ...

    African Journals Online (AJOL)

    Sequential mixed methods research is an effective approach for investigating complex problems, but it has not been extensively used in construction management research. In South Africa, the HIV/AIDS pandemic has seen construction management taking on a vital responsibility since the government called upon the ...

  17. Standardized method for reproducing the sequential X-rays flap

    International Nuclear Information System (INIS)

    Brenes, Alejandra; Molina, Katherine; Gudino, Sylvia

    2009-01-01

    A method is validated to standardize the taking, developing and analysis of bite-wing radiographs acquired sequentially, in order to compare and evaluate detectable changes in the evolution of interproximal lesions through time. A radiographic positioner (XCP®) is modified by means of a rigid acrylic guide to achieve proper positioning of the X-ray equipment cone relative to the XCP® ring and its reorientation during the sequential radiographic process. Sixteen subjects aged 4 to 40 years were studied, for a total of 32 registries. Two radiographs of the same block of teeth were taken sequentially for each subject, with a minimum interval of 30 minutes between them, before the placement of the radiographic attachment. The images were digitized with a Super Cam® scanner and imported into software, and the measurements along the X and Y axes of both radiographs were compared. The intraclass correlation index (ICI) showed that the proposed method gives statistically consistent measurements (mm) along the X and Y axes for both sequential series of radiographs (p=0.01). The measures of central tendency and dispersion showed that the most frequent difference between the two measurements was zero (mode 0.000; S = 0.083 and 0.109) and that the probability of occurrence of different values is lower than expected. (author) [es

  18. Breaking from binaries - using a sequential mixed methods design.

    Science.gov (United States)

    Larkin, Patricia Mary; Begley, Cecily Marion; Devane, Declan

    2014-03-01

    To outline the traditional worldviews of healthcare research and discuss the benefits and challenges of using mixed methods approaches in contributing to the development of nursing and midwifery knowledge. There has been much debate about the contribution of mixed methods research to nursing and midwifery knowledge in recent years. A sequential exploratory design is used as an exemplar of a mixed methods approach. The study discussed used a combination of focus-group interviews and a quantitative instrument to obtain a fuller understanding of women's experiences of childbirth. In the mixed methods study example, qualitative data were analysed using thematic analysis and quantitative data using regression analysis. Polarised debates about the veracity, philosophical integrity and motivation for conducting mixed methods research have largely abated. A mixed methods approach can contribute to a deeper, more contextual understanding of a variety of subjects and experiences; as a result, it furthers knowledge that can be used in clinical practice. The purpose of the research study should be the main instigator when choosing from an array of mixed methods research designs. Mixed methods research offers a variety of models that can augment investigative capabilities and provide richer data than can a discrete method alone. This paper offers an example of an exploratory, sequential approach to investigating women's childbirth experiences. A clear framework for the conduct and integration of the different phases of the mixed methods research process is provided. This approach can be used by practitioners and policy makers to improve practice.

  19. A working-set framework for sequential convex approximation methods

    DEFF Research Database (Denmark)

    Stolpe, Mathias

    2008-01-01

    We present an active-set algorithmic framework intended as an extension to existing implementations of sequential convex approximation methods for solving nonlinear inequality constrained programs. The framework is independent of the choice of approximations and the stabilization technique used to guarantee global convergence of the method. The algorithm works directly on the nonlinear constraints in the convex sub-problems and solves a sequence of relaxations of the current sub-problem. The algorithm terminates with the optimal solution to the sub-problem after solving a finite number of relaxations.

  20. Comments on the sequential probability ratio testing methods

    Energy Technology Data Exchange (ETDEWEB)

    Racz, A. [Hungarian Academy of Sciences, Budapest (Hungary). Central Research Inst. for Physics

    1996-07-01

    In this paper the classical sequential probability ratio testing method (SPRT) is reconsidered. Every individual boundary crossing event of the SPRT is regarded as a new piece of evidence about the problem under hypothesis testing. The Bayes method is applied for belief updating, i.e. for integrating these individual decisions. The procedure is recommended for use when the user (1) would like to be informed about the tested hypothesis continuously and (2) would like to reach the final conclusion with a high confidence level. (Author).

  1. Measurement of sequential change of regional ventilation by new developed Kr-81m method in asthmatics

    International Nuclear Information System (INIS)

    Shimada, Takao; Narita, Hiroto; Ishida, Hirohide; Terashima, Yoichi; Hirasawa, Korenori; Mori, Yutaka; Kawakami, Kenji

    1991-01-01

    Fazio has reported that the distribution of Kr-81m obtained by the continuous inhalation method indicates the distribution of ventilation. To estimate sequential changes in ventilation with the continuous Kr-81m inhalation method, it is necessary to keep the concentration of Kr-81m constant; however, this requirement is frequently ignored. For this reason, we have developed a new method to maintain a constant concentration of Kr-81m and compared its reliability with that of the conventional method. The results of a phantom study showed that the concentration of Kr-81m is kept constant and that sequential changes in ventilation can be estimated only by our new method. On applying this method in asthmatics, we discovered regions where ventilation was reduced by inhalation of a bronchodilator. (author)

  2. Sequential optimization and reliability assessment method for metal forming processes

    International Nuclear Information System (INIS)

    Sahai, Atul; Schramm, Uwe; Buranathiti, Thaweepat; Chen Wei; Cao Jian; Xia, Cedric Z.

    2004-01-01

    Uncertainty is inevitable in any design process. The uncertainty could be due to variations in the geometry of the part, in material properties, or due to lack of knowledge about the phenomena being modeled itself. Deterministic design optimization does not take uncertainty into account, and worst-case assumptions lead to vastly over-conservative designs. Probabilistic design, such as reliability-based design and robust design, offers tools for making robust and reliable decisions under the presence of uncertainty in the design process. Probabilistic design optimization often involves a double-loop procedure of optimization and iterative probabilistic assessment, which results in a high computational demand. This demand can be reduced by replacing computationally intensive simulation models with less costly surrogate models and by employing the Sequential Optimization and Reliability Assessment (SORA) method. The SORA method uses a single-loop strategy with a series of cycles of deterministic optimization and reliability assessment; the deterministic optimization and the reliability assessment are decoupled in each cycle. This leads to quick improvement of the design from one cycle to the next and an increase in computational efficiency. This paper demonstrates the effectiveness of the SORA method when applied to designing a sheet metal flanging process. Surrogate models are used as less costly approximations to the computationally expensive Finite Element simulations
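
    The decoupled cycle structure of SORA can be sketched on a small toy problem (not the sheet metal flanging application); the limit-state function, standard deviations and target reliability index below are illustrative assumptions, and a FORM-style inverse reliability assessment stands in for the surrogate-based assessment used in the paper.

```python
# Hedged sketch of a SORA-style decoupled loop on a toy problem. Each cycle runs
# (1) a deterministic optimization with the probabilistic constraint shifted by
# the vector found in the previous reliability assessment, then (2) an inverse
# most-probable-point (MPP) reliability assessment at the new design.

import numpy as np
from scipy.optimize import minimize

sigma = np.array([0.3, 0.3])       # std. dev. of the two normal random variables
beta_t = 3.0                       # target reliability index (illustrative)

def g(x):                          # limit-state function, feasible when g >= 0
    return x[0] ** 2 * x[1] / 20.0 - 1.0

def inverse_mpp(d):
    """Point on the beta_t-sphere (in standard-normal space) that minimizes g."""
    cons = {"type": "eq", "fun": lambda u: u @ u - beta_t ** 2}
    res = minimize(lambda u: g(d + sigma * u), x0=np.array([-1.0, -1.0]),
                   constraints=[cons], method="SLSQP")
    return d + sigma * res.x       # most probable point in physical space

d = np.array([5.0, 5.0])           # initial design
shift = np.zeros(2)
for cycle in range(10):
    # (1) deterministic optimization with the shifted constraint g(d - shift) >= 0
    res = minimize(lambda d_: d_[0] + d_[1], x0=d,
                   constraints=[{"type": "ineq", "fun": lambda d_: g(d_ - shift)}],
                   bounds=[(0.1, 10.0), (0.1, 10.0)], method="SLSQP")
    d_new = res.x
    # (2) reliability assessment: locate the MPP and update the shifting vector
    x_mpp = inverse_mpp(d_new)
    shift = d_new - x_mpp
    if np.linalg.norm(d_new - d) < 1e-4 and g(x_mpp) >= -1e-4:
        d = d_new
        break
    d = d_new

print("design:", np.round(d, 3), "  g at MPP:", round(g(inverse_mpp(d)), 4))
```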

  3. The biological effects of quadripolar radiofrequency sequential application: a human experimental study.

    Science.gov (United States)

    Nicoletti, Giovanni; Cornaglia, Antonia Icaro; Faga, Angela; Scevola, Silvia

    2014-10-01

    An experimental study was conducted to assess the effectiveness and safety of an innovative quadripolar variable electrode configuration radiofrequency device, with objective measurements in ex vivo and in vivo human experimental models. Nonablative radiofrequency applications are well-established anti-ageing procedures for cosmetic skin tightening. The study was performed in two steps: ex vivo and in vivo assessments. In the ex vivo assessments the radiofrequency applications were performed on human full-thickness skin and subcutaneous tissue specimens harvested during surgery for body contouring. In the in vivo assessments the applications were performed on two volunteer patients scheduled for body contouring surgery at the end of the study. The assessment methods were clinical examination and medical photography, temperature measurement with thermal imaging scans, and light microscopy histological examination. The ex vivo assessments allowed for identification of the effective safety range for human application. The in vivo assessments allowed for demonstration of the biological effects of sequential radiofrequency applications. After a course of radiofrequency applications, the collagen fibers underwent an immediate heat-induced rearrangement and were partially denatured and progressively metabolized by the macrophages. An overall thickening and spatial rearrangement was observed in both the collagen and elastic fibers, the latter displaying a juvenile reticular pattern. A late onset of macrophage activation after sequential radiofrequency applications was also observed. Our data confirm the effectiveness of sequential radiofrequency applications in attenuating skin wrinkles through an overall skin tightening.

  4. A Sequential Quadratically Constrained Quadratic Programming Method of Feasible Directions

    International Nuclear Information System (INIS)

    Jian Jinbao; Hu Qingjie; Tang Chunming; Zheng Haiyan

    2007-01-01

    In this paper, a sequential quadratically constrained quadratic programming method of feasible directions is proposed for optimization problems with nonlinear inequality constraints. At each iteration of the proposed algorithm, a feasible direction of descent is obtained by solving only one subproblem, which consists of a convex quadratic objective function and simple quadratic inequality constraints without the second derivatives of the functions of the discussed problems; such a subproblem can be formulated as a second-order cone program, which can be solved by interior point methods. To overcome the Maratos effect, an efficient higher-order correction direction is obtained by only one explicit computation formula. The algorithm is proved to be globally convergent and superlinearly convergent under some mild conditions without strict complementarity. Finally, some preliminary numerical results are reported

  5. Sequential Probability Ratio Testing with Power Projective Base Method Improves Decision-Making for BCI

    Science.gov (United States)

    Liu, Rong

    2017-01-01

    Obtaining a fast and reliable decision is an important issue in brain-computer interfaces (BCI), particularly in practical real-time applications such as wheelchair or neuroprosthetic control. In this study, the EEG signals were first analyzed with a power projective base method. We then applied a decision-making model, sequential probability ratio testing (SPRT), for single-trial classification of motor imagery movement events. The unique strength of this proposed classification method lies in its accumulative process, which increases the discriminative power as more and more evidence is observed over time. The properties of the method were illustrated on thirteen subjects' recordings from three datasets. Results showed that our proposed power projective method outperformed two benchmark methods for every subject. Moreover, with the sequential classifier, the accuracies across subjects were significantly higher than with nonsequential ones. The average maximum accuracy of the SPRT method was 84.1%, as compared with 82.3% accuracy for the sequential Bayesian (SB) method. The proposed SPRT method provides an explicit relationship between stopping time, thresholds, and error, which is important for balancing the time-accuracy trade-off. These results suggest SPRT would be useful in speeding up decision-making while trading off errors in BCI. PMID:29348781
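
    For context, the textbook Wald approximations give an explicit threshold-error relationship of the kind the abstract refers to; the relations below are the standard ones rather than quantities quoted from the paper.

```latex
% Wald's standard boundary approximations for a binary SPRT, relating the
% thresholds to the target error rates (alpha: false positive, beta: false negative).
\[
  A \approx \frac{1-\beta}{\alpha}, \qquad
  B \approx \frac{\beta}{1-\alpha}, \qquad
  \text{continue sampling while } B < \Lambda_n = \prod_{i=1}^{n}
  \frac{p(x_i \mid H_1)}{p(x_i \mid H_0)} < A .
\]
```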

  6. Development of sequential analytical method for the determination of U-238, U-234, Th-232, Th-230, Th-228, Ra-226 and Ra-228 and its application in mineral waters

    International Nuclear Information System (INIS)

    Costa Lauria, D. da.

    1986-01-01

    A sequential analytical method for the determination of U-238, U-234, Th-232, Th-230, Th-228, Ra-226 and Ra-228 in environmental samples is studied and applied to the analysis of mineral waters. Thorium isotopes are coprecipitated with lanthanum fluoride before counting in an alpha spectrometer; the uranium isotopes are determined by alpha spectrometry following extraction with TOPO onto a polymeric membrane. Radium-226 is determined with the radon emanation technique. (M.J.C.) [pt

  7. A novel method for the sequential removal and separation of multiple heavy metals from wastewater.

    Science.gov (United States)

    Fang, Li; Li, Liang; Qu, Zan; Xu, Haomiao; Xu, Jianfang; Yan, Naiqiang

    2018-01-15

    A novel method was developed and applied for the treatment of simulated wastewater containing multiple heavy metals. A sorbent of ZnS nanocrystals (NCs) was synthesized and showed extraordinary performance for the removal of Hg2+, Cu2+, Pb2+ and Cd2+. The removal efficiencies of Hg2+, Cu2+, Pb2+ and Cd2+ were 99.9%, 99.9%, 90.8% and 66.3%, respectively. Meanwhile, it was determined that the solubility product (Ksp) of heavy metal sulfides was closely related to the adsorption selectivity of the various heavy metals on the sorbent. The removal efficiency of Hg2+ was higher than that of Cd2+, while the Ksp of HgS was lower than that of CdS. This indicated that preferential adsorption of heavy metals occurred when the Ksp of the heavy metal sulfide was lower. In addition, the differences in the Ksp of heavy metal sulfides allowed for the exchange of heavy metals, indicating the potential application for the sequential removal and separation of heavy metals from wastewater. According to the cumulative adsorption experimental results, multiple heavy metals were sequentially adsorbed and separated from the simulated wastewater in the order of the Ksp of their sulfides. This method holds the promise of sequentially removing and separating multiple heavy metals from wastewater. Copyright © 2017 Elsevier B.V. All rights reserved.
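
    A small sketch of the selectivity argument (with approximate literature Ksp values, included only for illustration) sorts the four cations by the solubility product of their sulfides to predict the removal priority.

```python
# Illustrative sketch of the selectivity argument above: metals whose sulfides have
# a lower solubility product (Ksp) are captured first by the ZnS sorbent. The Ksp
# values below are approximate literature figures, for illustration only.

approx_ksp = {"Hg2+ (HgS)": 2e-53, "Cu2+ (CuS)": 6e-37,
              "Pb2+ (PbS)": 3e-28, "Cd2+ (CdS)": 8e-27}

for ion, ksp in sorted(approx_ksp.items(), key=lambda kv: kv[1]):
    print(f"{ion:12s} Ksp ~ {ksp:.0e}")
# Expected removal/exchange priority: Hg2+ > Cu2+ > Pb2+ > Cd2+, matching the
# order of removal efficiencies reported in the abstract.
```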

  8. Sequential specification of time-aware stream processing applications

    NARCIS (Netherlands)

    Geuns, S.J.; Hausmans, J.P.H.M.; Bekooij, Marco Jan Gerrit

    Automatic parallelization of Nested Loop Programs (NLPs) is an attractive method to create embedded real-time stream processing applications for multi-core systems. However, the description and parallelization of applications with a time-dependent functional behavior have not been considered in NLPs.

  9. Comparative study of six sequential spectrophotometric methods for quantification and separation of ribavirin, sofosbuvir and daclatasvir: An application on Laboratory prepared mixture, pharmaceutical preparations, spiked human urine, spiked human plasma, and dissolution test.

    Science.gov (United States)

    Hassan, Wafaa S; Elmasry, Manal S; Elsayed, Heba M; Zidan, Dalia W

    2018-05-18

    In accordance with International Conference on Harmonisation of Technical Requirements for Registration of Pharmaceuticals for Human Use (ICH) guidelines, six novel, simple and precise sequential spectrophotometric methods were developed and validated for the simultaneous analysis of Ribavirin (RIB), Sofosbuvir (SOF), and Daclatasvir (DAC) in their mixture without prior separation steps. These drugs are co-administered for the treatment of hepatitis C virus (HCV) infection. HCV is the cause of hepatitis C and of some cancers, such as liver cancer (hepatocellular carcinoma) and lymphomas, in humans. The techniques consist of several sequential steps using zero-order, ratio and/or derivative spectra. DAC was first determined by direct spectrophotometry at 313.7 nm without any interference from the other two drugs, while RIB and SOF can be determined after ratio subtraction through five methods: the ratio difference spectrophotometric method, the successive derivative ratio method, constant center, the isoabsorptive method at 238.8 nm, and mean centering of the ratio spectra (MCR) at 224 nm and 258 nm for RIB and SOF, respectively. The calibration curves are linear over the concentration ranges of 6-42, 10-70 and 4-16 μg/mL for RIB, SOF, and DAC, respectively. The methods were successfully applied to commercial pharmaceutical preparations of the drugs, spiked human urine, and spiked human plasma. They are very simple methods developed for the simultaneous determination of binary and ternary mixtures, and they enhance the signal-to-noise ratio. The methods have been successfully applied to the simultaneous analysis of RIB, SOF, and DAC in laboratory prepared mixtures. The obtained results are statistically compared with those obtained by the official or reported methods, showing no significant difference with respect to accuracy and precision at p = 0.05. Copyright © 2018 Elsevier B.V. All rights reserved.
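
    As a hedged illustration of the ratio-subtraction step on which these sequential methods build, the numpy sketch below works on synthetic Gaussian "spectra" rather than real RIB/SOF/DAC data; the wavelengths, bandwidths and concentrations are invented for the example.

```python
# Hedged sketch of ratio subtraction for a binary mixture: divide the mixture
# spectrum by the divisor spectrum of component Y, read the constant plateau in a
# region where component X does not absorb, subtract it, and multiply back to
# recover the contribution of X. All spectra below are synthetic.

import numpy as np

wl = np.linspace(200, 400, 1001)                        # wavelength axis, nm
def band(center, width):                                # synthetic absorption band
    return np.exp(-0.5 * ((wl - center) / width) ** 2)

spec_x = band(250, 12)                                  # analyte of interest
spec_y = band(320, 20)                                  # interfering component
mixture = 0.7 * spec_x + 1.4 * spec_y                   # "unknown" mixture spectrum
divisor = spec_y                                        # normalized divisor of Y

eps = 1e-6
ratio = mixture / (divisor + eps)                       # = 0.7*X/Y + 1.4
plateau = ratio[(wl > 340) & (wl < 400)].mean()         # region where X ~ 0
recovered_x = (ratio - plateau) * (divisor + eps)       # ~ 0.7 * spec_x

print("recovered amplitude at 250 nm:", recovered_x[np.argmin(np.abs(wl - 250))])
```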

  10. Sequential method for the assessment of innovations in computer assisted industrial processes

    International Nuclear Information System (INIS)

    Suarez Antola R.

    1995-01-01

    A sequential method for the assessment of innovations in industrial processes is proposed, using suitable combinations of mathematical modelling and numerical simulation of dynamics. Some advantages and limitations of the proposed method are discussed. tabs

  11. Simultaneous determination of two active components of pharmaceutical preparations by sequential injection method using heteropoly complexes

    Directory of Open Access Journals (Sweden)

    Mohammed Khair E. A. Al-Shwaiyat

    2014-12-01

    Full Text Available New approach has been proposed for the simultaneous determination of two reducing agents based on the dependence of their reaction rate with 18-molybdo-2-phosphate heteropoly complex on pH. The method was automated using the manifold typical for the sequential analysis method. Ascorbic acid and rutin were determined by successive injection of two samples acidified to different pH. The linear range for rutin determination was 0.6-20 mg/L and the detection limit was 0.2 mg/L (l = 1 cm. The determination of rutin was possible in the presence of up to a 20-fold excess of ascorbic acid. The method was successfully applied to the determination of ascorbic acid and rutin in ascorutin tablets. The applicability of the proposed method for the determination of total polyphenol content in natural plant samples was shown.

  12. A generally applicable sequential alkaline phosphatase immunohistochemical double staining

    NARCIS (Netherlands)

    van der Loos, Chris M.; Teeling, Peter

    2008-01-01

    A universal type of sequential double alkaline phosphatase immunohistochemical staining is described that can be used for formalin-fixed, paraffin-embedded and cryostat tissue sections from human and mouse origin. It consists of two alkaline phosphatase detection systems including enzymatic

  13. Sequential sampling: a novel method in farm animal welfare assessment.

    Science.gov (United States)

    Heath, C A E; Main, D C J; Mullan, S; Haskell, M J; Browne, W J

    2016-02-01

    Lameness in dairy cows is an important welfare issue. As part of a welfare assessment, herd level lameness prevalence can be estimated from scoring a sample of animals, where higher levels of accuracy are associated with larger sample sizes. As the financial cost is related to the number of cows sampled, smaller samples are preferred. Sequential sampling schemes have been used for informing decision making in clinical trials. Sequential sampling involves taking samples in stages, where sampling can stop early depending on the estimated lameness prevalence. When welfare assessment is used for a pass/fail decision, a similar approach could be applied to reduce the overall sample size. The sampling schemes proposed here apply the principles of sequential sampling within a diagnostic testing framework. This study develops three sequential sampling schemes of increasing complexity to classify 80 fully assessed UK dairy farms, each with known lameness prevalence. Using the Welfare Quality herd-size-based sampling scheme, the first 'basic' scheme involves two sampling events. At the first sampling event half the Welfare Quality sample size is drawn, and then depending on the outcome, sampling either stops or is continued and the same number of animals is sampled again. In the second 'cautious' scheme, an adaptation is made to ensure that correctly classifying a farm as 'bad' is done with greater certainty. The third scheme is the only scheme to go beyond lameness as a binary measure and investigates the potential for increasing accuracy by incorporating the number of severely lame cows into the decision. The three schemes are evaluated with respect to accuracy and average sample size by running 100 000 simulations for each scheme, and a comparison is made with the fixed size Welfare Quality herd-size-based sampling scheme. All three schemes performed almost as well as the fixed size scheme but with much smaller average sample sizes. For the third scheme, an overall
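
    A hedged Monte Carlo sketch of the 'basic' two-stage scheme is given below; the sample size, pass/fail threshold and the stage-1 stopping margin are simplified, illustrative choices rather than the values used in the study.

```python
# Hedged simulation of a two-stage sequential sampling scheme for a pass/fail
# lameness assessment: score half the sample, stop early if the estimate is
# clearly away from the threshold, otherwise score the second half as well.
# All numbers are illustrative, not taken from the study.

import numpy as np

rng = np.random.default_rng(2)

def two_stage_assessment(true_prev, n_full=60, threshold=0.20, margin=0.08):
    n_half = n_full // 2
    stage1 = rng.random(n_half) < true_prev             # lame / not lame
    p1 = stage1.mean()
    if abs(p1 - threshold) > margin:                     # confident: stop early
        return (p1 > threshold), n_half
    stage2 = rng.random(n_half) < true_prev
    p_all = np.concatenate([stage1, stage2]).mean()
    return (p_all > threshold), n_full

fails, samples = [], []
for _ in range(100_000):
    fail, n_used = two_stage_assessment(true_prev=0.30)
    fails.append(fail)
    samples.append(n_used)
print(f"farms classified 'bad': {np.mean(fails):.3f}, average sample size: {np.mean(samples):.1f}")
```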

  14. [Using sequential indicator simulation method to define risk areas of soil heavy metals in farmland.

    Science.gov (United States)

    Yang, Hao; Song, Ying Qiang; Hu, Yue Ming; Chen, Fei Xiang; Zhang, Rui

    2018-05-01

    The heavy metals in soil have serious impacts on safety, the ecological environment and human health due to their toxicity and accumulation. It is necessary to efficiently identify the risk areas of heavy metals in farmland soil, which is of great significance for environmental protection, pollution warning and farmland risk control. We collected 204 samples and analyzed the contents of seven heavy metals (Cu, Zn, Pb, Cd, Cr, As, Hg) in Zengcheng District of Guangzhou, China. In order to overcome the problems of the data, including the influence of abnormal values, the skewed distribution, and the smoothing effect of traditional kriging methods, we used the sequential indicator simulation method (SISIM) to define the spatial distribution of heavy metals, and combined it with the Hakanson index method to identify potential ecological risk areas of heavy metals in farmland. The results showed that: (1) With similar accuracy of spatial prediction of soil heavy metals, the SISIM reproduced local detail better than ordinary kriging in a small-scale area. Compared to indicator kriging, the SISIM had a lower error rate (4.9%-17.1%) in the uncertainty evaluation of heavy-metal risk identification. The SISIM showed less smoothing and was more suitable for spatial uncertainty assessment of soil heavy metals and risk identification. (2) There was no pollution in Zengcheng's farmland. Moderate potential ecological risk was found in the southern part of the study area due to enterprise production, human activities, and river sediments. This study combined sequential indicator simulation with the Hakanson risk index method and effectively overcame the outlier information loss and smoothing effect of the traditional kriging method. It provides a new way to identify soil heavy metal risk areas of farmland under uneven sampling.

  15. Statistical analysis of dose heterogeneity in circulating blood: Implications for sequential methods of total body irradiation

    International Nuclear Information System (INIS)

    Molloy, Janelle A.

    2010-01-01

    Purpose: Improvements in delivery techniques for total body irradiation (TBI) using Tomotherapy and intensity modulated radiation therapy have been proven feasible. Despite the promise of improved dose conformality, the application of these "sequential" techniques has been hampered by concerns over dose heterogeneity to circulating blood. The present study was conducted to provide quantitative evidence regarding the potential clinical impact of this heterogeneity. Methods: Blood perfusion was modeled analytically as possessing linear, sinusoidal motion in the craniocaudal dimension. The average perfusion period for human circulation was estimated to be approximately 78 s. Sequential treatment delivery was modeled as a Gaussian-shaped dose cloud with a 10 cm length that traversed a 183 cm patient length at a uniform speed. Total dose to circulating blood voxels was calculated via numerical integration and normalized to 2 Gy per fraction. Dose statistics and equivalent uniform dose (EUD) were calculated for relevant treatment times, radiobiological parameters, blood perfusion rates, and fractionation schemes. The model was then refined to account for random dispersion superimposed onto the underlying periodic blood flow. Finally, a fully stochastic model was developed using binomial and trinomial probability distributions. These models allowed for the analysis of nonlinear sequential treatment modalities and treatment designs that incorporate deliberate organ sparing. Results: The dose received by individual blood voxels exhibited asymmetric behavior that depended on the coherence among the blood velocity, circulation phase, and the spatiotemporal characteristics of the irradiation beam. Heterogeneity increased with the perfusion period and decreased with the treatment time. Notwithstanding, heterogeneity was less than ±10% for perfusion periods less than 150 s. The EUD was compromised for radiosensitive cells, long perfusion periods, and short treatment times
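
    The analytic model lends itself to a short numerical sketch; the one below integrates dose to sinusoidally moving blood voxels under a sweeping Gaussian dose cloud, with the motion amplitude, the treatment time and the reading of the "10 cm length" as an FWHM all being assumptions of this sketch rather than statements from the paper.

```python
# Hedged numerical sketch of the analytic model: blood voxels oscillate along the
# craniocaudal axis (period ~78 s) while a Gaussian dose cloud sweeps a 183 cm
# patient at uniform speed; the dose to each voxel is integrated over the
# treatment and normalized to a mean of 2 Gy. Parameters are illustrative.

import numpy as np

L_cm, period, t_tx = 183.0, 78.0, 600.0   # patient length (cm), perfusion period (s), treatment time (s)
sigma = 10.0 / 2.355                      # Gaussian sigma from a 10 cm FWHM (assumption)
n_vox, n_t = 2000, 2000
t = np.linspace(0.0, t_tx, n_t)

rng = np.random.default_rng(3)
phase = rng.uniform(0.0, 2 * np.pi, size=(n_vox, 1))
z_blood = (L_cm / 2) * (1 + np.sin(2 * np.pi * t / period + phase))  # voxel position, cm
z_beam = L_cm * t / t_tx                                             # beam centre sweeps the patient

rate = np.exp(-0.5 * ((z_blood - z_beam) / sigma) ** 2)              # relative dose rate per voxel
dose = np.trapz(rate, t, axis=1)
dose *= 2.0 / dose.mean()                                            # normalize mean dose to 2 Gy

print(f"mean {dose.mean():.2f} Gy, SD {dose.std():.2f} Gy, range {dose.min():.2f}-{dose.max():.2f} Gy")
```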

  16. Double tracer autoradiographic method for sequential evaluation of regional cerebral perfusion

    International Nuclear Information System (INIS)

    Matsuda, H.; Tsuji, S.; Oba, H.; Kinuya, K.; Terada, H.; Sumiya, H.; Shiba, K.; Mori, H.; Hisada, K.; Maeda, T.

    1989-01-01

    A new double tracer autoradiographic method for the sequential evaluation of altered regional cerebral perfusion in the same animal is presented. This method is based on the sequential injection of two tracers, 99mTc-hexamethylpropyleneamine oxime and N-isopropyl-(125I)p-iodoamphetamine. This method is validated in the assessment of brovincamine effects on regional cerebral perfusion in an experimental model of chronic brain ischemia in the rat. The drug enhanced perfusion recovery in low-flow areas, selectively in surrounding areas of infarction. The results suggest that this technique is of potential use in the study of neuropharmacological effects applied during the experiment

  17. Optimization of sequential decisions by least squares Monte Carlo method

    DEFF Research Database (Denmark)

    Nishijima, Kazuyoshi; Anders, Annett

    Examples include climate change adaptation measures and the evacuation of people and assets in the face of an emerging natural hazard event. Focusing on the last example, an efficient solution scheme is proposed by Anders and Nishijima (2011). The proposed solution scheme is based on the least squares Monte Carlo method, which is proposed by Longstaff and Schwartz (2001) for pricing of American options. The present paper formulates the decision problem in a more general manner and explains how the solution scheme proposed by Anders and Nishijima (2011) is implemented for the optimization of the formulated decision problem.

  18. A Robust Real Time Direction-of-Arrival Estimation Method for Sequential Movement Events of Vehicles.

    Science.gov (United States)

    Liu, Huawei; Li, Baoqing; Yuan, Xiaobing; Zhou, Qianwei; Huang, Jingchang

    2018-03-27

    Parameter estimation for sequential movement events of vehicles faces the challenges of noise interference and the demands of portable implementation. In this paper, we propose a robust direction-of-arrival (DOA) estimation method for the sequential movement events of vehicles based on a small Micro-Electro-Mechanical System (MEMS) microphone array system. Inspired by the incoherent signal-subspace method (ISM), the method proposed in this work employs multiple sub-bands, which are selected from the wideband signals with high magnitude-squared coherence, to track moving vehicles in the presence of wind noise. The field test results demonstrate that the proposed method performs better in estimating the DOA of a moving vehicle, even in the case of severe wind interference, than the narrowband multiple signal classification (MUSIC) method, the sub-band DOA estimation method, and the classical two-sided correlation transformation (TCT) method.
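
    The sub-band selection step can be sketched with synthetic two-channel data: frequency bins with high magnitude-squared coherence are kept and would then be handed to a narrowband DOA estimator (e.g. MUSIC) in an ISM-style scheme; the signals, inter-channel delay and coherence threshold below are illustrative assumptions.

```python
# Hedged sketch of coherence-based sub-band selection: bins where the two
# microphone channels are highly coherent (dominated by the vehicle signal rather
# than uncorrelated wind noise) are retained for narrowband DOA processing.

import numpy as np
from scipy.signal import coherence

fs = 8000                                   # sample rate, Hz (illustrative)
t = np.arange(0, 2.0, 1 / fs)
tone = np.sin(2 * np.pi * 180 * t) + 0.5 * np.sin(2 * np.pi * 420 * t)   # "vehicle" harmonics
rng = np.random.default_rng(4)
mic1 = tone + 0.8 * rng.standard_normal(t.size)                # wind noise, uncorrelated
mic2 = np.roll(tone, 3) + 0.8 * rng.standard_normal(t.size)    # small inter-channel delay

f, msc = coherence(mic1, mic2, fs=fs, nperseg=1024)            # magnitude-squared coherence
selected = f[msc > 0.7]                                        # keep high-coherence sub-bands
print("sub-bands (Hz) retained for narrowband DOA (e.g. MUSIC):", np.round(selected, 1))
```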

  19. Physics-based, Bayesian sequential detection method and system for radioactive contraband

    Science.gov (United States)

    Candy, James V; Axelrod, Michael C; Breitfeller, Eric F; Chambers, David H; Guidry, Brian L; Manatt, Douglas R; Meyer, Alan W; Sale, Kenneth E

    2014-03-18

    A distributed sequential method and system for detecting and identifying radioactive contraband from highly uncertain (noisy), low-count radionuclide measurements, i.e. an event mode sequence (EMS), using a statistical approach based on Bayesian inference and physics-model-based signal processing, with the radionuclide represented as a decomposition into monoenergetic sources. For a given photon event of the EMS, the appropriate monoenergy processing channel is determined using a confidence-interval, condition-based discriminator for the energy amplitude and interarrival time, and parameter estimates are used to update a measured probability density function estimate for a target radionuclide. A sequential likelihood ratio test is then used to determine one of two threshold conditions signifying that the EMS is either identified as the target radionuclide or not; if not, the process is repeated for the next sequential photon event of the EMS until one of the two threshold conditions is satisfied.

  20. Double-label autoradiographic deoxyglucose method for sequential measurement of regional cerebral glucose utilization

    Energy Technology Data Exchange (ETDEWEB)

    Redies, C; Diksic, M; Evans, A C; Gjedde, A; Yamamoto, Y L

    1987-08-01

    A new double-label autoradiographic glucose analog method for the sequential measurement of altered regional cerebral metabolic rates for glucose in the same animal is presented. This method is based on the sequential injection of two boluses of glucose tracer labeled with two different isotopes (short-lived 18F and long-lived 3H, respectively). An operational equation is derived which allows the determination of glucose utilization for the time period before the injection of the second tracer; this equation corrects for accumulation and loss of the first tracer from the metabolic pool occurring after the injection of the second tracer. An error analysis of this operational equation is performed. The double-label deoxyglucose method is validated in the primary somatosensory ("barrel") cortex of the anesthetized rat. Two different rows of whiskers were stimulated sequentially in each rat; the two periods of stimulation were each preceded by an injection of glucose tracer. After decapitation, dried brain slices were first exposed, in direct contact, to standard X-ray film and then to uncoated, "tritium-sensitive" film. Results show that the double-label deoxyglucose method proposed in this paper allows the quantification and complete separation of glucose utilization patterns elicited by two different stimulations sequentially applied in the same animal.

  1. Radio controlled detonators and sequential real time blast applications

    Energy Technology Data Exchange (ETDEWEB)

    Bernard, T.; Laboz, J.M. [Delta Caps International, Nice (France)

    1995-12-31

    Among the numerous technical evolutions in the blasting environment, the authors describe below the concept of the electronic detonator sequenced by radio waves, along with its numerous applications. Three major technologies are used in the initiation environment: fuse-initiated detonators; electric detonators; and non-electric detonators. The last two technologies are available in multiple variants. Two major innovations are going to substantially change the way traditional detonators operate: pyrotechnic delays are replaced by electronic delays (greater accuracy), and triggering orders, previously passed through a cable, are now transmitted by radio waves (making real-time delay patterns possible). Such a new product provides all the features offered by current detonators, but also allows mastering specific cases that were difficult to control with current technology, such as vibration control, underground blasting, and building demolition.

  2. Application of full-face round by the sequential blasting machine in tunnel excavation

    Energy Technology Data Exchange (ETDEWEB)

    Cho, Y.D.; Park, B.K.; Lee, S.E.; Lim, H.U.

    1995-12-31

    Many methods and techniques have been developed to reduce ground vibrations, among them the adoption of electric millisecond detonators with a sequential blasting machine and the improvement of initiating systems with an adequate number of delay intervals. To reduce the level of ground vibration in tunnel excavation, a sequential blasting machine (S.B.M.) with decisecond detonators was adopted. A total of 134 blasts were recorded at various sites and the results were analyzed. The blast-to-structure distances ranged from 20.3 to 42.0 m, and charge weights varied from 0.25 to 0.75 kg per delay. It is shown that sequential blasting in tunnel excavation is very effective in controlling ground vibration.

  3. Ensemble forecasting using sequential aggregation for photovoltaic power applications

    International Nuclear Information System (INIS)

    Thorey, Jean

    2017-01-01

    Our main objective is to improve the quality of photovoltaic power forecasts derived from weather forecasts. Such forecasts are imperfect due to meteorological uncertainties and statistical modeling inaccuracies in the conversion of weather forecasts to power forecasts. First we gather several weather forecasts, then we generate multiple photovoltaic power forecasts, and finally we build linear combinations of the power forecasts. The minimization of the Continuous Ranked Probability Score (CRPS) allows us to statistically calibrate the combination of these forecasts and provides probabilistic forecasts in the form of a weighted empirical distribution function. We investigate the CRPS bias in this context and several properties of scoring rules which can be seen as a sum of quantile-weighted losses or a sum of threshold-weighted losses. The minimization procedure is achieved with online learning techniques. Such techniques come with theoretical guarantees of robustness on the predictive power of the combination of the forecasts; essentially no assumptions are needed for the theoretical guarantees to hold. The proposed methods are applied to the forecast of solar radiation using satellite data, and to the forecast of photovoltaic power based on high-resolution weather forecasts and standard ensembles of forecasts. (author) [fr
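
    A minimal sketch of sequential aggregation into a weighted empirical distribution is shown below, using exponentiated-gradient updates on the CRPS of the weighted ensemble; the member biases, learning rate and data are illustrative and not taken from the thesis.

```python
# Hedged sketch of online aggregation of K ensemble members into a weighted
# empirical distribution. For a weighted empirical CDF with support points x_k,
# CRPS(w; y) = sum_k w_k|x_k - y| - 0.5 sum_{k,l} w_k w_l |x_k - x_l|, and the
# weights are updated with an exponentiated-gradient step after each observation.

import numpy as np

rng = np.random.default_rng(5)
K, T, eta = 5, 500, 0.5
bias = np.linspace(-2.0, 2.0, K)                  # each member has its own bias
w = np.full(K, 1.0 / K)

crps_total = 0.0
for _ in range(T):
    y = rng.normal()                              # observed PV power (toy units)
    x = y + bias + 0.3 * rng.standard_normal(K)   # member forecasts
    dist = np.abs(x[:, None] - x[None, :])
    crps = w @ np.abs(x - y) - 0.5 * w @ dist @ w
    crps_total += crps
    grad = np.abs(x - y) - dist @ w               # gradient of the CRPS w.r.t. w
    w = w * np.exp(-eta * grad)
    w /= w.sum()                                  # exponentiated-gradient update

print("average CRPS:", crps_total / T, " final weights:", np.round(w, 3))
```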

  4. A Generalizable Top-Down Nanostructuring Method of Bulk Oxides: Sequential Oxygen-Nitrogen Exchange Reaction.

    Science.gov (United States)

    Lee, Lanlee; Kang, Byungwuk; Han, Suyoung; Kim, Hee-Eun; Lee, Moo Dong; Bang, Jin Ho

    2018-05-27

    A thermal reaction route that induces grain fracture instead of grain growth is devised and developed as a top-down approach to prepare nanostructured oxides from bulk solids. This novel synthesis approach, referred to as the sequential oxygen-nitrogen exchange (SONE) reaction, exploits the reversible anion exchange between oxygen and nitrogen in oxides that is driven by a simple two-step thermal treatment in ammonia and air. Internal stress, developed through significant structural rearrangement via the formation of (oxy)nitride and through the creation of oxygen vacancies and their subsequent combination into nanopores, transforms bulk solid oxides into nanostructured oxides. The SONE reaction is applicable to most transition metal oxides, and when the produced nanostructured materials are utilized in a lithium-ion battery, they are superior to their bulk counterparts and even comparable to those produced by conventional bottom-up approaches. Given its simplicity and scalability, this synthesis method could open a new avenue to the development of high-performance nanostructured electrode materials that can meet the industrial demand for cost-effective mass production. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  5. Application of Multi-Hypothesis Sequential Monte Carlo for Breakup Analysis

    Science.gov (United States)

    Faber, W. R.; Zaidi, W.; Hussein, I. I.; Roscoe, C. W. T.; Wilkins, M. P.; Schumacher, P. W., Jr.

    As more objects are launched into space, the potential for breakup events and space object collisions is ever increasing. These events create large clouds of debris that are extremely hazardous to space operations. Providing timely, accurate, and statistically meaningful Space Situational Awareness (SSA) data is crucial in order to protect assets and operations in space. The space object tracking problem, in general, is nonlinear in both state dynamics and observations, making it ill-suited to linear filtering techniques such as the Kalman filter. Additionally, given the multi-object, multi-scenario nature of the problem, space situational awareness requires multi-hypothesis tracking and management that is combinatorially challenging in nature. In practice, it is often seen that assumptions of underlying linearity and/or Gaussianity are used to provide tractable solutions to the multiple space object tracking problem. However, these assumptions are, at times, detrimental to tracking data and provide statistically inconsistent solutions. This paper details a tractable solution to the multiple space object tracking problem applicable to space object breakup events. Within this solution, simplifying assumptions of the underlying probability density function are relaxed and heuristic methods for hypothesis management are avoided. This is done by implementing Sequential Monte Carlo (SMC) methods for both nonlinear filtering and hypothesis management. The goal of this paper is to detail the solution and use it as a platform to discuss computational limitations that hinder proper analysis of large breakup events.

  6. Application of Box-Wilson experimental design method for 2,4-dinitrotoluene treatment in a sequential anaerobic migrating blanket reactor (AMBR)/aerobic completely stirred tank reactor (CSTR) system

    International Nuclear Information System (INIS)

    Kuscu, Ozlem Selcuk; Sponza, Delia Teresa

    2011-01-01

    A sequential aerobic completely stirred tank reactor (CSTR) following the anaerobic migrating blanket reactor (AMBR) was used to treat a synthetic wastewater containing 2,4-dinitrotoluene (2,4-DNT). A Box-Wilson statistical experiment design was used to determine the effects of 2,4-DNT and the hydraulic retention times (HRTs) on 2,4-DNT and COD removal efficiencies in the AMBR reactor. The 2,4-DNT concentrations in the feed (0-280 mg/L) and the HRT (0.5-10 days) were considered as the independent variables while the 2,4-DNT and chemical oxygen demand (COD) removal efficiencies, total and methane gas productions, methane gas percentage, pH, total volatile fatty acid (TVFA) and total volatile fatty acid/bicarbonate alkalinity (TVFA/Bic.Alk.) ratio were considered as the objective functions in the Box-Wilson statistical experiment design in the AMBR. The predicted data for the parameters given above were determined from the response functions by regression analysis of the experimental data and exhibited excellent agreement with the experimental results. The optimum HRT which gave the maximum COD (97.00%) and 2,4-DNT removal (99.90%) efficiencies was between 5 and 10 days at influent 2,4-DNT concentrations 1-280 mg/L in the AMBR. The aerobic CSTR was used for removals of residual COD remaining from the AMBR, and for metabolites of 2,4-DNT. The maximum COD removal efficiency was 99% at an HRT of 1.89 days at a 2,4-DNT concentration of 239 mg/L in the aerobic CSTR. It was found that 280 mg/L 2,4-DNT transformed to 2,4-diaminotoluene (2,4-DAT) via 2-amino-4-nitrotoluene (2-A-4-NT) and 4-amino-2-nitrotoluene (4-A-2-NT) in the AMBR. The maximum 2,4-DAT removal was 82% at an HRT of 8.61 days in the aerobic CSTR. The maximum total COD and 2,4-DNT removal efficiencies were 99.00% and 99.99%, respectively, at an influent 2,4-DNT concentration of 239 mg/L and at 1.89 days of HRT in the sequential AMBR/CSTR.

  7. Application of Box-Wilson experimental design method for 2,4-dinitrotoluene treatment in a sequential anaerobic migrating blanket reactor (AMBR)/aerobic completely stirred tank reactor (CSTR) system

    Energy Technology Data Exchange (ETDEWEB)

    Kuscu, Ozlem Selcuk, E-mail: oselcuk@mmf.sdu.edu.tr [Department of Environmental Engineering, Engineering and Architecture Faculty, Sueleyman Demirel University, Cuenuer Campus, 32260 Isparta (Turkey); Sponza, Delia Teresa [Dokuz Eyluel University, Engineering Faculty, Environmental Engineering Department, Buca Kaynaklar campus, Izmir (Turkey)

    2011-03-15

    A sequential aerobic completely stirred tank reactor (CSTR) following the anaerobic migrating blanket reactor (AMBR) was used to treat a synthetic wastewater containing 2,4-dinitrotoluene (2,4-DNT). A Box-Wilson statistical experiment design was used to determine the effects of 2,4-DNT and the hydraulic retention times (HRTs) on 2,4-DNT and COD removal efficiencies in the AMBR reactor. The 2,4-DNT concentrations in the feed (0-280 mg/L) and the HRT (0.5-10 days) were considered as the independent variables while the 2,4-DNT and chemical oxygen demand (COD) removal efficiencies, total and methane gas productions, methane gas percentage, pH, total volatile fatty acid (TVFA) and total volatile fatty acid/bicarbonate alkalinity (TVFA/Bic.Alk.) ratio were considered as the objective functions in the Box-Wilson statistical experiment design in the AMBR. The predicted data for the parameters given above were determined from the response functions by regression analysis of the experimental data and exhibited excellent agreement with the experimental results. The optimum HRT which gave the maximum COD (97.00%) and 2,4-DNT removal (99.90%) efficiencies was between 5 and 10 days at influent 2,4-DNT concentrations 1-280 mg/L in the AMBR. The aerobic CSTR was used for removals of residual COD remaining from the AMBR, and for metabolites of 2,4-DNT. The maximum COD removal efficiency was 99% at an HRT of 1.89 days at a 2,4-DNT concentration of 239 mg/L in the aerobic CSTR. It was found that 280 mg/L 2,4-DNT transformed to 2,4-diaminotoluene (2,4-DAT) via 2-amino-4-nitrotoluene (2-A-4-NT) and 4-amino-2-nitrotoluene (4-A-2-NT) in the AMBR. The maximum 2,4-DAT removal was 82% at an HRT of 8.61 days in the aerobic CSTR. The maximum total COD and 2,4-DNT removal efficiencies were 99.00% and 99.99%, respectively, at an influent 2,4-DNT concentration of 239 mg/L and at 1.89 days of HRT in the sequential AMBR/CSTR.

  8. Application of Box-Wilson experimental design method for 2,4-dinitrotoluene treatment in a sequential anaerobic migrating blanket reactor (AMBR)/aerobic completely stirred tank reactor (CSTR) system.

    Science.gov (United States)

    Kuşçu, Özlem Selçuk; Sponza, Delia Teresa

    2011-03-15

    A sequential aerobic completely stirred tank reactor (CSTR) following the anaerobic migrating blanket reactor (AMBR) was used to treat a synthetic wastewater containing 2,4-dinitrotoluene (2,4-DNT). A Box-Wilson statistical experiment design was used to determine the effects of 2,4-DNT and the hydraulic retention times (HRTs) on 2,4-DNT and COD removal efficiencies in the AMBR reactor. The 2,4-DNT concentrations in the feed (0-280 mg/L) and the HRT (0.5-10 days) were considered as the independent variables while the 2,4-DNT and chemical oxygen demand (COD) removal efficiencies, total and methane gas productions, methane gas percentage, pH, total volatile fatty acid (TVFA) and total volatile fatty acid/bicarbonate alkalinity (TVFA/Bic.Alk.) ratio were considered as the objective functions in the Box-Wilson statistical experiment design in the AMBR. The predicted data for the parameters given above were determined from the response functions by regression analysis of the experimental data and exhibited excellent agreement with the experimental results. The optimum HRT which gave the maximum COD (97.00%) and 2,4-DNT removal (99.90%) efficiencies was between 5 and 10 days at influent 2,4-DNT concentrations 1-280 mg/L in the AMBR. The aerobic CSTR was used for removals of residual COD remaining from the AMBR, and for metabolites of 2,4-DNT. The maximum COD removal efficiency was 99% at an HRT of 1.89 days at a 2,4-DNT concentration of 239 mg/L in the aerobic CSTR. It was found that 280 mg/L 2,4-DNT transformed to 2,4-diaminotoluene (2,4-DAT) via 2-amino-4-nitrotoluene (2-A-4-NT) and 4-amino-2-nitrotoluene (4-A-2-NT) in the AMBR. The maximum 2,4-DAT removal was 82% at an HRT of 8.61 days in the aerobic CSTR. The maximum total COD and 2,4-DNT removal efficiencies were 99.00% and 99.99%, respectively, at an influent 2,4-DNT concentration of 239 mg/L and at 1.89 days of HRT in the sequential AMBR/CSTR. Copyright © 2011 Elsevier B.V. All rights reserved.
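
    The three records above (the same study indexed by different databases) rely on a Box-Wilson design feeding a second-order response-surface regression. As a hedged sketch of that kind of model only, the snippet below fits a full quadratic surface to invented design points and responses; none of the numbers are the study's measurements.

      import numpy as np

      # Hypothetical design points: (2,4-DNT feed in mg/L, HRT in days) -> observed COD removal (%)
      X_raw = np.array([[40, 2], [40, 8], [240, 2], [240, 8], [140, 5], [0, 5], [280, 5], [140, 0.5], [140, 10]])
      y = np.array([88, 95, 80, 93, 91, 96, 85, 70, 97])      # placeholder responses

      dnt, hrt = X_raw[:, 0], X_raw[:, 1]
      # Full quadratic model: y = b0 + b1*x1 + b2*x2 + b3*x1^2 + b4*x2^2 + b5*x1*x2
      A = np.column_stack([np.ones_like(dnt), dnt, hrt, dnt**2, hrt**2, dnt * hrt])
      coef, *_ = np.linalg.lstsq(A, y, rcond=None)

      def predict(d, h):
          return coef @ np.array([1.0, d, h, d**2, h**2, d * h])

      print(predict(239, 5.0))   # predicted removal at an intermediate operating point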

  9. A Sequential Optimization Sampling Method for Metamodels with Radial Basis Functions

    Science.gov (United States)

    Pan, Guang; Ye, Pengcheng; Yang, Zhidong

    2014-01-01

    Metamodels have been widely used in engineering design to facilitate analysis and optimization of complex systems that involve computationally expensive simulation programs. The accuracy of metamodels is strongly affected by the sampling methods. In this paper, a new sequential optimization sampling method is proposed. Based on the new sampling method, metamodels can be constructed repeatedly through the addition of sampling points, namely, the extremum points of the metamodel and the minimum points of a density function. In this way, progressively more accurate metamodels are constructed. The validity and effectiveness of the proposed sampling method are examined by studying typical numerical examples. PMID:25133206
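
    As a hedged sketch of the underlying machinery (not the paper's algorithm), the snippet below builds a Gaussian radial basis function metamodel from scratch and refines it by adding the point where the current metamodel is smallest; the paper's actual infill criterion, which also uses metamodel extrema and a density function, is richer than this.

      import numpy as np

      def fit_rbf(X, y, eps=2.0):
          """Gaussian RBF interpolant through samples X (n, d) with responses y (n,)."""
          r2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
          w = np.linalg.solve(np.exp(-eps * r2) + 1e-10 * np.eye(len(X)), y)
          return lambda Q: np.exp(-eps * ((Q[:, None, :] - X[None, :, :]) ** 2).sum(-1)) @ w

      expensive = lambda x: np.sin(3 * x[:, 0]) + 0.5 * x[:, 0] ** 2   # stand-in for a costly simulation
      X = np.linspace(-2.0, 2.0, 5)[:, None]                           # initial design
      for _ in range(5):                                               # sequential infill loop
          model = fit_rbf(X, expensive(X))
          grid = np.linspace(-2.0, 2.0, 201)[:, None]
          X = np.vstack([X, grid[np.argmin(model(grid))]])             # add the metamodel's minimiser
      print(X.round(3).ravel())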

  10. Introducing sequential managed aquifer recharge technology (SMART) - From laboratory to full-scale application.

    Science.gov (United States)

    Regnery, Julia; Wing, Alexandre D; Kautz, Jessica; Drewes, Jörg E

    2016-07-01

    Previous lab-scale studies demonstrated that stimulating the indigenous soil microbial community of groundwater recharge systems by manipulating the availability of biodegradable organic carbon (BDOC) and establishing sequential redox conditions in the subsurface resulted in enhanced removal of compounds with redox-dependent removal behavior such as trace organic chemicals. The aim of this study is to advance this concept from laboratory to full-scale application by introducing sequential managed aquifer recharge technology (SMART). To validate the concept of SMART, a full-scale managed aquifer recharge (MAR) facility in Colorado was studied for three years that featured the proposed sequential configuration: A short riverbank filtration passage followed by subsequent re-aeration and artificial recharge and recovery. Our findings demonstrate that sequential subsurface treatment zones characterized by carbon-rich (>3 mg/L BDOC) to carbon-depleted (≤1 mg/L BDOC) and predominant oxic redox conditions can be established at full-scale MAR facilities adopting the SMART concept. The sequential configuration resulted in substantially improved trace organic chemical removal (i.e. higher biodegradation rate coefficients) for moderately biodegradable compounds compared to conventional MAR systems with extended travel times in an anoxic aquifer. Furthermore, sorption batch experiments with clay materials dispersed in the subsurface implied that sorptive processes might also play a role in the attenuation and retardation of chlorinated flame retardants during MAR. Hence, understanding key factors controlling trace organic chemical removal performance during SMART allows for systems to be engineered for optimal efficiency, resulting in improved removal of constituents at shorter subsurface travel times and a potentially reduced physical footprint of MAR installations. Copyright © 2016 Elsevier Ltd. All rights reserved.

  11. Performance Analysis of Video Transmission Using Sequential Distortion Minimization Method for Digital Video Broadcasting Terrestrial

    Directory of Open Access Journals (Sweden)

    Novita Astin

    2016-12-01

    Full Text Available This paper presents the transmission of a Digital Video Broadcasting system with streaming video at 640x480 resolution for different IQ rates and modulations. Distortion often occurs during video transmission, so the received video has poor quality. Key frame selection algorithms are flexible with respect to changes in the video, but these methods omit the temporal information of the video sequence. To minimize the distortion between the original and received video, we added a sequential distortion minimization algorithm. Its aim was to create a new video, corrected sequentially, that is better than the received video without significant loss of content with respect to the original. The reliability of the video transmission was assessed using a constellation diagram, with the best result at an IQ rate of 2 MHz and 8-QAM modulation. Video transmission was also evaluated with and without SEDIM (Sequential Distortion Minimization Method). The experimental results showed that the average PSNR (Peak Signal to Noise Ratio) of the video transmission using SEDIM increased from 19.855 dB to 48.386 dB, and the average SSIM (Structural Similarity) increased by 10.49%. The experimental results and comparisons show that the proposed method achieves good performance. A USRP board was used as the RF front end at 2.2 GHz.
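
    The PSNR figures quoted above are straightforward to reproduce in principle; the following sketch is illustrative only, using synthetic frames rather than the paper's broadcast data, and shows how PSNR between an original and a received frame is computed.

      import numpy as np

      def psnr(original, received, max_val=255.0):
          """Peak Signal-to-Noise Ratio in dB between two equally sized frames."""
          mse = np.mean((original.astype(np.float64) - received.astype(np.float64)) ** 2)
          return float("inf") if mse == 0 else 10.0 * np.log10(max_val ** 2 / mse)

      rng = np.random.default_rng(1)
      frame = rng.integers(0, 256, size=(480, 640), dtype=np.uint8)                 # synthetic 640x480 frame
      received = np.clip(frame + rng.normal(0, 5, frame.shape), 0, 255).astype(np.uint8)
      print(round(psnr(frame, received), 2), "dB")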

  12. A Comparison of Sequential and GPU Implementations of Iterative Methods to Compute Reachability Probabilities

    Directory of Open Access Journals (Sweden)

    Elise Cormie-Bowins

    2012-10-01

    Full Text Available We consider the problem of computing reachability probabilities: given a Markov chain, an initial state of the Markov chain, and a set of goal states of the Markov chain, what is the probability of reaching any of the goal states from the initial state? This problem can be reduced to solving a linear equation Ax = b for x, where A is a matrix and b is a vector. We consider two iterative methods to solve the linear equation: the Jacobi method and the biconjugate gradient stabilized (BiCGStab method. For both methods, a sequential and a parallel version have been implemented. The parallel versions have been implemented on the compute unified device architecture (CUDA so that they can be run on a NVIDIA graphics processing unit (GPU. From our experiments we conclude that as the size of the matrix increases, the CUDA implementations outperform the sequential implementations. Furthermore, the BiCGStab method performs better than the Jacobi method for dense matrices, whereas the Jacobi method does better for sparse ones. Since the reachability probabilities problem plays a key role in probabilistic model checking, we also compared the implementations for matrices obtained from a probabilistic model checker. Our experiments support the conjecture by Bosnacki et al. that the Jacobi method is superior to Krylov subspace methods, a class to which the BiCGStab method belongs, for probabilistic model checking.
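
    To make the comparison above concrete, here is a minimal sequential (CPU) Jacobi solver applied to a toy reachability problem; the matrix, probabilities and stopping rule are illustrative and not taken from the paper, whose CUDA versions parallelize exactly this kind of update.

      import numpy as np

      def jacobi(A, b, tol=1e-10, max_iter=10_000):
          """Solve A x = b with the Jacobi iteration; A should be, e.g., strictly diagonally dominant."""
          D = np.diag(A)
          R = A - np.diagflat(D)
          x = np.zeros_like(b, dtype=float)
          for _ in range(max_iter):
              x_new = (b - R @ x) / D
              if np.linalg.norm(x_new - x, ord=np.inf) < tol:
                  return x_new
              x = x_new
          return x

      # Reachability-style example: x = P x + c for a tiny absorbing Markov chain, rewritten as (I - P) x = c.
      P = np.array([[0.0, 0.5], [0.2, 0.0]])   # transition probabilities among transient states
      c = np.array([0.5, 0.3])                 # one-step probabilities of hitting the goal set
      x = jacobi(np.eye(2) - P, c)
      print(x)                                 # probability of ever reaching the goal set from each state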

  13. Sequential plasma activation methods for hydrophilic direct bonding at sub-200 °C

    Science.gov (United States)

    He, Ran; Yamauchi, Akira; Suga, Tadatomo

    2018-02-01

    We present our newly developed sequential plasma activation methods for hydrophilic direct bonding of silica glasses and thermally grown SiO2 films. N2 plasma was employed to introduce a metastable oxynitride layer on wafer surfaces for the improvement of bond energy. By using either O2-plasma/N2-plasma/N-radical or N2-plasma/N-radical sequential activation, the quartz-quartz bond energy was increased from 2.7 J/m2 to close to the quartz bulk fracture energy that was estimated to be around 9.0 J/m2 after post-bonding annealing at 200 °C. The silicon bulklike bond energy between thermal SiO2 films was also obtained. We suggest that the improvement is attributable to surface modification such as N-related defect formation and asperity softening by the N2 plasma surface treatment.

  14. Person Recognition Method using Sequential Walking Footprints via Overlapped Foot Shape and Center-Of-Pressure Trajectory

    Directory of Open Access Journals (Sweden)

    Jin-Woo Jung

    2013-08-01

    Full Text Available One emerging biometric identification method is the use of the human footprint. However, previous research faced limitations resulting from the spatial resolution of the sensors. One possible way to overcome this limitation is to use additional information, such as the dynamic walking information contained in sequential walking footprints. In this study, we suggest a new person recognition scheme based on both the overlapped foot shape and the COP (Center Of Pressure) trajectory during one-step walking. We show the usefulness of the suggested method, obtaining a 98.6% recognition rate in our experiment with eleven people. In addition, we show an application of the suggested method, an automatic door-opening system for an intelligent residential space.

  15. Fast-responding liquid crystal light-valve technology for color-sequential display applications

    Science.gov (United States)

    Janssen, Peter J.; Konovalov, Victor A.; Muravski, Anatoli A.; Yakovenko, Sergei Y.

    1996-04-01

    A color sequential projection system has some distinct advantages over conventional systems which make it uniquely suitable for consumer TV as well as high performance professional applications such as computer monitors and electronic cinema. A fast-responding light valve is, clearly, essential for a well-performing system. The response speed of transmissive LC light valves has been marginal thus far for good color rendition. Recently, the Sevchenko Institute has made some very fast reflective LC cells which were evaluated at Philips Labs. These devices showed sub-millisecond large-signal response times, even at room temperature, and produced good color in a projector emulation testbed. In our presentation we describe our highly efficient color sequential projector and demonstrate its operation on video tape. Next we discuss light-valve requirements and reflective light-valve test results.

  16. A sequential fuzzy diagnosis method for rotating machinery using ant colony optimization and possibility theory

    Energy Technology Data Exchange (ETDEWEB)

    Sun, Hao; Ping, Xueliang; Cao, Yi; Lie, Ke [Jiangnan University, Wuxi (China); Chen, Peng [Mie University, Mie (Japan); Wang, Huaqing [Beijing University, Beijing (China)

    2014-04-15

    This study proposes a novel intelligent fault diagnosis method for rotating machinery using ant colony optimization (ACO) and possibility theory. The non-dimensional symptom parameters (NSPs) in the frequency domain are defined to reflect the features of the vibration signals measured in each state. A sensitive evaluation method for selecting good symptom parameters using principal component analysis (PCA) is proposed for detecting and distinguishing faults in rotating machinery. By using the ACO clustering algorithm, the synthesizing symptom parameters (SSP) for condition diagnosis are obtained. A fuzzy diagnosis method using sequential inference and possibility theory is also proposed, by which the conditions of the machinery can be identified sequentially. Lastly, the proposed method is compared with a conventional neural network (NN) method. Practical examples of diagnosis for V-belt driving equipment used in a centrifugal fan are provided to verify the effectiveness of the proposed method. The results verify that the faults that often occur in V-belt driving equipment, such as a pulley defect state, a belt defect state and a belt looseness state, are effectively identified by the proposed method, while these faults are difficult to detect using a conventional NN.

  17. Sequential Application of Soil Vapor Extraction and Bioremediation Processes for the Remediation of Ethylbenzene-Contaminated Soils

    DEFF Research Database (Denmark)

    Soares, António Carlos Alves; Pinho, Maria Teresa; Albergaria, José Tomás

    2012-01-01

    Soil vapor extraction (SVE) is an efficient, well-known and widely applied soil remediation technology. However, under certain conditions it cannot achieve the defined cleanup goals, requiring further treatment, for example, through bioremediation (BR). The sequential application of these technologies...

  18. Comparison of ERBS orbit determination accuracy using batch least-squares and sequential methods

    Science.gov (United States)

    Oza, D. H.; Jones, T. L.; Fabien, S. M.; Mistretta, G. D.; Hart, R. C.; Doll, C. E.

    1991-01-01

    The Flight Dynamics Div. (FDD) at NASA-Goddard commissioned a study to develop the Real Time Orbit Determination/Enhanced (RTOD/E) system as a prototype system for sequential orbit determination of spacecraft on a DOS based personal computer (PC). An overview is presented of RTOD/E capabilities and the results are presented of a study to compare the orbit determination accuracy for a Tracking and Data Relay Satellite System (TDRSS) user spacecraft obtained using RTOD/E on a PC with the accuracy of an established batch least squares system, the Goddard Trajectory Determination System (GTDS), operating on a mainframe computer. RTOD/E was used to perform sequential orbit determination for the Earth Radiation Budget Satellite (ERBS), and the Goddard Trajectory Determination System (GTDS) was used to perform the batch least squares orbit determination. The estimated ERBS ephemerides were obtained for the Aug. 16 to 22, 1989, timeframe, during which intensive TDRSS tracking data for ERBS were available. Independent assessments were made to examine the consistencies of results obtained by the batch and sequential methods. Comparisons were made between the forward filtered RTOD/E orbit solutions and definitive GTDS orbit solutions for ERBS; the solution differences were less than 40 meters after the filter had reached steady state.

  19. Comparison of ERBS orbit determination accuracy using batch least-squares and sequential methods

    Science.gov (United States)

    Oza, D. H.; Jones, T. L.; Fabien, S. M.; Mistretta, G. D.; Hart, R. C.; Doll, C. E.

    1991-10-01

    The Flight Dynamics Div. (FDD) at NASA-Goddard commissioned a study to develop the Real Time Orbit Determination/Enhanced (RTOD/E) system as a prototype system for sequential orbit determination of spacecraft on a DOS based personal computer (PC). An overview is presented of RTOD/E capabilities and the results are presented of a study to compare the orbit determination accuracy for a Tracking and Data Relay Satellite System (TDRSS) user spacecraft obtained using RTOD/E on a PC with the accuracy of an established batch least squares system, the Goddard Trajectory Determination System (GTDS), operating on a mainframe computer. RTOD/E was used to perform sequential orbit determination for the Earth Radiation Budget Satellite (ERBS), and the Goddard Trajectory Determination System (GTDS) was used to perform the batch least squares orbit determination. The estimated ERBS ephemerides were obtained for the Aug. 16 to 22, 1989, timeframe, during which intensive TDRSS tracking data for ERBS were available. Independent assessments were made to examine the consistencies of results obtained by the batch and sequential methods. Comparisons were made between the forward filtered RTOD/E orbit solutions and definitive GTDS orbit solutions for ERBS; the solution differences were less than 40 meters after the filter had reached steady state.
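
    The two records above compare batch least-squares and sequential (filtered) estimation. As a hedged, drastically simplified illustration of why the two approaches should agree once the filter reaches steady state, the sketch below processes the same linear toy observations both ways; real orbit determination is nonlinear and uses dynamic models far beyond this.

      import numpy as np

      rng = np.random.default_rng(2)
      # Toy linear observation model y_i = H_i x + noise (a stand-in for tracking observations).
      x_true = np.array([7000.0, 7.5])                 # e.g. a position/velocity-like pair
      H = rng.normal(size=(50, 2))
      y = H @ x_true + rng.normal(scale=0.1, size=50)

      # Batch least squares over all observations at once
      x_batch, *_ = np.linalg.lstsq(H, y, rcond=None)

      # Sequential (recursive least squares) processing, one observation at a time
      P = np.eye(2) * 1e6                              # large initial covariance (weak prior)
      x_seq = np.zeros(2)
      for Hi, yi in zip(H, y):
          S = Hi @ P @ Hi + 0.1 ** 2                   # innovation variance
          K = P @ Hi / S                               # gain
          x_seq = x_seq + K * (yi - Hi @ x_seq)
          P = P - np.outer(K, Hi @ P)
      print(np.abs(x_batch - x_seq).max())             # tiny difference, dominated by the weak prior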

  20. Sequential and Simultaneous Applications of UV and Chlorine for Adenovirus Inactivation.

    Science.gov (United States)

    Rattanakul, Surapong; Oguma, Kumiko; Takizawa, Satoshi

    2015-09-01

    Adenoviruses are water-borne human pathogens with high resistance to UV disinfection. A combination of UV treatment and chlorination could be an effective approach to deal with adenoviruses. In this study, human adenovirus 5 (HAdV-5) was challenged in a bench-scale experiment by separate applications of UV or chlorine and by combined applications of UV and chlorine in either a sequential or simultaneous manner. The treated samples were then propagated in human lung carcinoma epithelial cells to quantify the log inactivation of HAdV-5. When the processes were separate, a fluence of 100 mJ/cm2 and a CT value of 0.02 mg min/L were required to achieve 2 log inactivation of HAdV-5 by UV disinfection and chlorination, respectively. Interestingly, synergistic effects on the HAdV-5 inactivation rates were found in the sequential process of chlorine followed by UV (Cl2-UV) (p < 0.05), but not in the simultaneous application of UV/Cl2. This implies that a pretreatment with chlorine may increase the sensitivity of the virus to the subsequent UV disinfection. In conclusion, this study suggests that the combined application of UV and chlorine could be an effective measure against adenoviruses as a multi-barrier approach in water disinfection.

  1. Radial basis function neural networks with sequential learning MRAN and its applications

    CERN Document Server

    Sundararajan, N; Wei Lu Ying

    1999-01-01

    This book presents in detail the newly developed sequential learning algorithm for radial basis function neural networks, which realizes a minimal network. This algorithm, created by the authors, is referred to as Minimal Resource Allocation Networks (MRAN). The book describes the application of MRAN in different areas, including pattern recognition, time series prediction, system identification, control, communication and signal processing. Benchmark problems from these areas have been studied, and MRAN is compared with other algorithms. In order to make the book self-contained, a review of t

  2. An Improved Sequential Initiation Method for Multitarget Track in Clutter with Large Noise Measurement

    Directory of Open Access Journals (Sweden)

    Daxiong Ji

    2014-01-01

    Full Text Available This paper proposes an improved sequential method for underwater multi-target track initiation in clutter, estimating the initial position of each trajectory. The underwater environment is complex and changeable, and the sonar data are far from ideal. When the detection distance is large, the error in the measured data is also large. In addition, clutter has a severe effect on track initiation, so it is hard to initialize a track and estimate its initial position. In the new track initiation scheme, a new track is declared when at least six of ten points meet the requirements, and the initial states of the parameters are estimated by the linear least squares method. Compared to conventional track initiation methods, our method considers not only the kinematic information of the targets but also the error of the sonar sensors as an important element. Computer simulations confirm that the method performs well.

  3. Sequential Fuzzy Diagnosis Method for Motor Roller Bearing in Variable Operating Conditions Based on Vibration Analysis

    Directory of Open Access Journals (Sweden)

    Yi Cao

    2013-06-01

    Full Text Available A novel intelligent fault diagnosis method for motor roller bearings which operate under unsteady rotating speed and load is proposed in this paper. The pseudo Wigner-Ville distribution (PWVD) and the relative crossing information (RCI) methods are used for extracting the feature spectra from the non-stationary vibration signal measured for condition diagnosis. The RCI is used to automatically extract the feature spectrum from the time-frequency distribution of the vibration signal. The extracted feature spectrum is instantaneous, and not correlated with the rotation speed and load. By using the ant colony optimization (ACO) clustering algorithm, the synthesizing symptom parameters (SSP) for condition diagnosis are obtained. The experimental results show that the diagnostic sensitivity of the SSP is higher than that of the original symptom parameters (SP), and the SSP can sensitively reflect the characteristics of the feature spectrum for precise condition diagnosis. Finally, a fuzzy diagnosis method based on sequential inference and possibility theory is also proposed, by which the conditions of the machine can be identified sequentially as well.

  4. The sequential application of macroalgal biosorbents for the bioremediation of a complex industrial effluent.

    Directory of Open Access Journals (Sweden)

    Joel T Kidgell

    Full Text Available Fe-treated biochar and raw biochar produced from macroalgae are effective biosorbents of metalloids and metals, respectively. However, the treatment of complex effluents that contain both metalloid and metal contaminants presents a challenging scenario. We test a multiple-biosorbent approach to bioremediation using Fe-biochar and biochar to remediate both metalloids and metals from the effluent from a coal-fired power station. First, a model was derived from published data for this effluent to predict the biosorption of 21 elements by Fe-biochar and biochar. The modelled outputs were then used to design biosorption experiments using Fe-biochar and biochar, both simultaneously and in sequence, to treat effluent containing multiple contaminants in excess of water quality criteria. The waste water was produced during ash disposal at an Australian coal-fired power station. The application of Fe-biochar and biochar, either simultaneously or sequentially, resulted in a more comprehensive remediation of metalloids and metals compared to either biosorbent used individually. The most effective treatment was the sequential use of Fe-biochar to remove metalloids from the waste water, followed by biochar to remove metals. Al, Cd, Cr, Cu, Mn, Ni, Pb, Zn were reduced to the lowest concentration following the sequential application of the two biosorbents, and their final concentrations were predicted by the model. Overall, 17 of the 21 elements measured were remediated to, or below, the concentrations that were predicted by the model. Both metalloids and metals can be remediated from complex effluent using biosorbents with different characteristics but derived from a single feedstock. Furthermore, the extent of remediation can be predicted for similar effluents using additive models.

  5. The sequential application of macroalgal biosorbents for the bioremediation of a complex industrial effluent.

    Science.gov (United States)

    Kidgell, Joel T; de Nys, Rocky; Paul, Nicholas A; Roberts, David A

    2014-01-01

    Fe-treated biochar and raw biochar produced from macroalgae are effective biosorbents of metalloids and metals, respectively. However, the treatment of complex effluents that contain both metalloid and metal contaminants presents a challenging scenario. We test a multiple-biosorbent approach to bioremediation using Fe-biochar and biochar to remediate both metalloids and metals from the effluent from a coal-fired power station. First, a model was derived from published data for this effluent to predict the biosorption of 21 elements by Fe-biochar and biochar. The modelled outputs were then used to design biosorption experiments using Fe-biochar and biochar, both simultaneously and in sequence, to treat effluent containing multiple contaminants in excess of water quality criteria. The waste water was produced during ash disposal at an Australian coal-fired power station. The application of Fe-biochar and biochar, either simultaneously or sequentially, resulted in a more comprehensive remediation of metalloids and metals compared to either biosorbent used individually. The most effective treatment was the sequential use of Fe-biochar to remove metalloids from the waste water, followed by biochar to remove metals. Al, Cd, Cr, Cu, Mn, Ni, Pb, Zn were reduced to the lowest concentration following the sequential application of the two biosorbents, and their final concentrations were predicted by the model. Overall, 17 of the 21 elements measured were remediated to, or below, the concentrations that were predicted by the model. Both metalloids and metals can be remediated from complex effluent using biosorbents with different characteristics but derived from a single feedstock. Furthermore, the extent of remediation can be predicted for similar effluents using additive models.

  6. Two approaches for sequential extraction of radionuclides in soils: batch and column methods

    International Nuclear Information System (INIS)

    Vidal, M.; Rauret, G.

    1993-01-01

    A three-step sequential extraction designed by the Community Bureau of Reference (BCR) is applied to two types of soil (sandy and sandy-loam) which had been previously contaminated with a radionuclide aerosol containing 134Cs, 85Sr and 110mAg. This scheme is applied using both batch and column methods. The radionuclide distribution obtained with this scheme depends both on the method and on the soil type. Compared with the batch method, column extraction proved inadvisable. Kinetic aspects seem to be important, especially in the first and third fractions. The radionuclide distribution shows that radiostrontium has high mobility, radiocaesium is highly retained by clay minerals, whereas Fe/Mn oxides and organic matter have an important role in radiosilver retention. (Author)

  7. A fast and efficient method for sequential cone-beam tomography

    International Nuclear Information System (INIS)

    Koehler, Th.; Proksa, R.; Grass, M.

    2001-01-01

    Sequential cone-beam tomography is a method that uses data of two or more parallel circular trajectories of a cone-beam scanner to reconstruct the object function. We propose a condition for the data acquisition that ensures that all object points between two successive circles are irradiated over an angular span of the x-ray source position of exactly 360 deg. in total as seen along the rotation axis. A fast and efficient approximative reconstruction method for the proposed acquisition is presented which uses data from exactly 360 deg. for every object point. It is based on the Tent-FDK method which was recently developed for single circular cone-beam CT. The measurement geometry does not provide sufficient data for exact reconstruction but it is shown that the proposed reconstruction method provides satisfying image quality for small cone angles

  8. Statistical analysis of dose heterogeneity in circulating blood: implications for sequential methods of total body irradiation.

    Science.gov (United States)

    Molloy, Janelle A

    2010-11-01

    Improvements in delivery techniques for total body irradiation (TBI) using Tomotherapy and intensity modulated radiation therapy have been proven feasible. Despite the promise of improved dose conformality, the application of these "sequential" techniques has been hampered by concerns over dose heterogeneity to circulating blood. The present study was conducted to provide quantitative evidence regarding the potential clinical impact of this heterogeneity. Blood perfusion was modeled analytically as possessing linear, sinusoidal motion in the craniocaudal dimension. The average perfusion period for human circulation was estimated to be approximately 78 s. Sequential treatment delivery was modeled as a Gaussian-shaped dose cloud with a 10 cm length that traversed a 183 cm patient length at a uniform speed. Total dose to circulating blood voxels was calculated via numerical integration and normalized to 2 Gy per fraction. Dose statistics and equivalent uniform dose (EUD) were calculated for relevant treatment times, radiobiological parameters, blood perfusion rates, and fractionation schemes. The model was then refined to account for random dispersion superimposed onto the underlying periodic blood flow. Finally, a fully stochastic model was developed using binomial and trinomial probability distributions. These models allowed for the analysis of nonlinear sequential treatment modalities and treatment designs that incorporate deliberate organ sparing. The dose received by individual blood voxels exhibited asymmetric behavior that depended on the coherence among the blood velocity, circulation phase, and the spatiotemporal characteristics of the irradiation beam. Heterogeneity increased with the perfusion period and decreased with the treatment time. Notwithstanding, heterogeneity was less than +/- 10% for perfusion periods less than 150 s. The EUD was compromised for radiosensitive cells, long perfusion periods, and short treatment times. However, the EUD was
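
    As a hedged numerical illustration of the analytical model described above (sinusoidally circulating blood under a translating Gaussian dose cloud), the sketch below integrates the dose-rate history of a single blood voxel; the geometry, beam model and all numbers are placeholders, not the study's parameters.

      import numpy as np

      # Illustrative numbers only: a 183 cm patient length swept by a Gaussian "dose cloud" of
      # ~10 cm length during a 600 s delivery; blood voxels oscillate along the craniocaudal
      # axis with a 78 s perfusion period.
      L, T, sigma, period = 183.0, 600.0, 10.0 / 2.355, 78.0     # FWHM 10 cm -> sigma
      t = np.linspace(0.0, T, 60_001)
      dt = t[1] - t[0]
      beam_pos = L * t / T                                        # beam sweeps head to foot

      def blood_dose(z0, amp=30.0, phase=0.0):
          """Relative dose accumulated by a voxel oscillating about mean position z0 (cm)."""
          z = z0 + amp * np.sin(2 * np.pi * t / period + phase)
          rate = np.exp(-0.5 * ((z - beam_pos) / sigma) ** 2)     # Gaussian dose-rate profile
          return rate.sum() * dt                                  # simple Riemann-sum integration

      doses = np.array([blood_dose(90.0, phase=p) for p in np.linspace(0, 2 * np.pi, 36, endpoint=False)])
      doses /= doses.mean()                                       # normalise to the mean
      print(round(doses.min(), 3), round(doses.max(), 3))         # spread caused by phase coherence

    Sweeping the initial phase, as in the last lines, reproduces the qualitative point of the abstract: the dose a voxel receives depends on the coherence between its circulation phase and the moving beam.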

  9. Arsenic Mobility and Availability in Sediments by Application of BCR Sequential Extractions Method; Movilidad y Disponibilidad de Arsenico en Sedimentos Mediante la Aplicacion del Metodo de Extracciones Secuenciales BCR

    Energy Technology Data Exchange (ETDEWEB)

    Larios, R.; Fernandez, R.; Rucandio, M. I.

    2011-05-13

    Arsenic is a metalloid found in nature, both naturally and due to anthropogenic activities. Among them, mining works are an important source of arsenic release to the environment. Asturias is a region where important mercury mines were exploited, and in them arsenic occurs in para genesis with mercury minerals. The toxicity and mobility of this element depends on the chemical species it is found. Fractionation studies are required to analyze the mobility of this metalloid in soils and sediments. Among them, the proposed by the Bureau Community of Reference (BCR) is one of the most employed. This method attempts to divide up, by operationally defined stages, the amount of this element associated with carbonates (fraction 1), iron and manganese oxy hydroxides (fraction 2), organic matter and sulphides (fraction 3), and finally as the amount associated residual fraction to primary and secondary minerals, that is, from the most labile fractions to the most refractory ones. Fractionation of arsenic in sediments from two mines in Asturias were studied, La Soterrana and Los Rueldos. Sediments from La Soterrana showed high levels of arsenic in the non-residual phases, indicating that the majority of arsenic has an anthropogenic origin. By contrast, in sediments from Los Rueldos most of the arsenic is concentrated in the residual phase, indicating that this element remains bound to very refractory primary minerals, as is also demonstrated by the strong correlation of arsenic fractionation and the fractionation of elements present in refractory minerals, such as iron, aluminum and titanium. (Author) 51 refs.

  10. Anomaly Detection in Gas Turbine Fuel Systems Using a Sequential Symbolic Method

    Directory of Open Access Journals (Sweden)

    Fei Li

    2017-05-01

    Full Text Available Anomaly detection plays a significant role in helping gas turbines run reliably and economically. Considering collective anomalies in the data and the need for both sensitivity and robustness in the anomaly detection model, a sequential symbolic anomaly detection method is proposed and applied to the gas turbine fuel system. A structural finite state machine is used to evaluate the posterior probabilities of observing symbolic sequences and the most probable state sequences underlying them. Hence an estimation-based model and a decoding-based model are used to identify anomalies in two different ways. Experimental results indicate that both models perform well overall, but the estimation-based model is more robust, whereas the decoding-based model is more accurate, particularly within a certain range of sequence lengths. Therefore, the proposed method can complement existing symbolic dynamic analysis-based anomaly detection methods, especially in the gas turbine domain.
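
    The record does not include code; as a hedged stand-in for the symbolic sequence scoring it describes (a plain first-order Markov chain rather than the paper's structural finite state machine), the sketch below learns transition probabilities from nominal symbol sequences and flags sequences whose per-transition log-likelihood is unusually low.

      import numpy as np

      def fit_markov(sequences, n_symbols):
          """Estimate first-order transition probabilities from symbolised training sequences."""
          counts = np.ones((n_symbols, n_symbols))                 # Laplace smoothing
          for seq in sequences:
              for a, b in zip(seq[:-1], seq[1:]):
                  counts[a, b] += 1
          return counts / counts.sum(axis=1, keepdims=True)

      def avg_log_likelihood(seq, P):
          """Per-transition log-likelihood of a symbol sequence; low values flag anomalies."""
          return np.mean([np.log(P[a, b]) for a, b in zip(seq[:-1], seq[1:])])

      # Toy use: train on "healthy" symbol sequences, then score a new one against a threshold.
      healthy = [[0, 1, 2, 1, 0, 1, 2, 1] * 5, [0, 1, 1, 2, 1, 0, 1, 2] * 5]
      P = fit_markov(healthy, n_symbols=3)
      print(avg_log_likelihood([0, 1, 2, 1, 0, 1, 2, 1], P))       # close to training behaviour
      print(avg_log_likelihood([2, 2, 0, 0, 2, 2, 0, 0], P))       # unusual transitions -> lower score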

  11. Sequential growth factor application in bone marrow stromal cell ligament engineering.

    Science.gov (United States)

    Moreau, Jodie E; Chen, Jingsong; Horan, Rebecca L; Kaplan, David L; Altman, Gregory H

    2005-01-01

    In vitro bone marrow stromal cell (BMSC) growth may be enhanced through culture medium supplementation, mimicking the biochemical environment in which cells optimally proliferate and differentiate. We hypothesize that the sequential administration of growth factors to first proliferate and then differentiate BMSCs cultured on silk fiber matrices will support the enhanced development of ligament tissue in vitro. Confluent second passage (P2) BMSCs obtained from purified bone marrow aspirates were seeded on RGD-modified silk matrices. Seeded matrices were divided into three groups for 5 days of static culture, with medium supplement of basic fibroblast growth factor (B) (1 ng/mL), epidermal growth factor (E; 1 ng/mL), or growth factor-free control (C). After day 5, medium supplementation was changed to transforming growth factor-beta1 (T; 5 ng/mL) or C for an additional 9 days of culture. Real-time RT-PCR, SEM, MTT, histology, and ELISA for collagen type I of all sample groups were performed. Results indicated that BT supported the greatest cell ingrowth after 14 days of culture in addition to the greatest cumulative collagen type I expression measured by ELISA. Sequential growth factor application promoted significant increases in collagen type I transcript expression from day 5 of culture to day 14, for five of six groups tested. All T-supplemented samples surpassed their respective control samples in both cell ingrowth and collagen deposition. All samples supported spindle-shaped, fibroblast cell morphology, aligning with the direction of silk fibers. These findings indicate significant in vitro ligament development after only 14 days of culture when using a sequential growth factor approach.

  12. Testing sequential extraction methods for the analysis of multiple stable isotope systems from a bone sample

    Science.gov (United States)

    Sahlstedt, Elina; Arppe, Laura

    2017-04-01

    The stable isotope composition of bones, analysed either from the mineral phase (hydroxyapatite) or from the organic phase (mainly collagen), carries important climatological and ecological information and is therefore widely used in paleontological and archaeological research. For the analysis of the stable isotope compositions, both phases, hydroxyapatite and collagen, have their more or less well-established separation and analytical techniques. Recent developments in IRMS and wet chemical extraction methods have facilitated the analysis of very small bone fractions (500 μg or less of starting material) for the phosphate (PO4(3-)) oxygen isotope composition. However, the uniqueness and (pre-)historical value of each archaeological and paleontological find leaves precious little material available for stable isotope analyses, encouraging further development of microanalytical methods. Here we present the first results in developing extraction methods for combining collagen C and N isotope analyses with PO4(3-) O isotope analyses from a single bone sample fraction. We tested a sequential extraction starting with dilute acid demineralization and collection of both the collagen and PO4(3-) fractions, followed by a further purification step with H2O2 (PO4(3-) fraction). First results show that bone sample separates as small as 2 mg may be analysed for their δ15N, δ13C and δ18O(PO4) values. The method may be incorporated into the detailed investigation of sequentially developing skeletal material such as teeth, potentially allowing for the investigation of interannual variability in climatological/environmental signals or of the early life history of an individual.

  13. Mercury and trace element fractionation in Almaden soils by application of different sequential extraction procedures

    Energy Technology Data Exchange (ETDEWEB)

    Sanchez, D.M.; Quejido, A.J.; Fernandez, M.; Hernandez, C.; Schmid, T.; Millan, R.; Gonzalez, M.; Aldea, M.; Martin, R.; Morante, R. [CIEMAT, Madrid (Spain)

    2005-04-01

    A comparative evaluation of the mercury distribution in a soil sample from Almaden (Spain) has been performed by applying three different sequential extraction procedures, namely, modified BCR (three steps in sequence), Di Giulio-Ryan (four steps in sequence), and a specific SEP developed at CIEMAT (six steps in sequence). There were important differences in the mercury extraction results obtained by the three procedures according to the reagents applied and the sequence of their application. These findings highlight the difficulty of setting a universal SEP to obtain information on metal fractions of different mobility for any soil sample, as well as the requirement for knowledge about the mineralogical and chemical characteristics of the samples. The specific six-step CIEMAT sequential extraction procedure was applied to a soil profile (Ap, Ah, Bt1, and Bt2 horizons). The distribution of mercury and major, minor, and trace elements in the different fractions was determined. The results indicate that mercury is mainly released with 6 M HCl. The strong association of mercury with crystalline iron oxyhydroxides, present in all the horizons of the profile, and/or the solubility of some mercury compounds in such acid can explain this fact. Smaller amounts of mercury are found in the fraction assigned to oxidizable matter and in the final insoluble residue (cinnabar). (orig.)

  14. a method of gravity and seismic sequential inversion and its GPU implementation

    Science.gov (United States)

    Liu, G.; Meng, X.

    2011-12-01

    In this abstract, we introduce a sequential gravity and seismic inversion method to invert for density and velocity together. For the gravity inversion we use an iterative method based on a correlation imaging algorithm; for the seismic inversion we use full waveform inversion. The link between density and velocity is an empirical formula, the Gardner equation; for large volumes of data we use the GPU to accelerate the computation. The gravity inversion method is iterative: first we compute the correlation imaging of the observed gravity anomaly, which takes values between -1 and +1, and multiply it by a small density increment to form the initial density model. We then compute a forward result from this initial model, calculate the correlation imaging of the misfit between the observed and forward data, multiply it by a small density increment and add it to the model, and repeat this procedure until a final inversion density model is obtained. For the seismic inversion we use a method based on the linearity of the acoustic wave equation written in the frequency domain; starting from an initial velocity model, a good velocity result can be obtained. In the sequential inversion of gravity and seismic data, a formula is needed to convert between density and velocity; in our method we use the Gardner equation. Driven by the insatiable market demand for real-time, high-definition 3D images, the programmable NVIDIA Graphics Processing Unit (GPU) has been developed as a co-processor of the CPU for high-performance computing. The Compute Unified Device Architecture (CUDA) is a parallel programming model and software environment provided by NVIDIA, designed to overcome the challenge of traditional general-purpose GPU programming while maintaining a low learning curve for programmers familiar with standard programming languages such as C. In our inversion processing
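
    The density-velocity link mentioned above is the empirical Gardner relation. A brief sketch with commonly quoted (but here only illustrative) coefficients shows how velocity and density models would be converted into one another between the two inversion stages; the study may calibrate its own coefficients.

      import numpy as np

      # Gardner's relation rho = a * Vp^b, with Vp in m/s and rho in g/cm^3; typical sedimentary
      # coefficients are used as placeholders.
      A, B = 0.31, 0.25

      def velocity_to_density(vp):
          return A * np.asarray(vp) ** B

      def density_to_velocity(rho):
          return (np.asarray(rho) / A) ** (1.0 / B)

      vp = np.array([2000.0, 3000.0, 4500.0])
      rho = velocity_to_density(vp)
      print(rho.round(3))                          # roughly 2.07, 2.29, 2.54 g/cm^3
      print(density_to_velocity(rho).round(1))     # recovers the input velocities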

  15. A Sequential Kriging reliability analysis method with characteristics of adaptive sampling regions and parallelizability

    International Nuclear Information System (INIS)

    Wen, Zhixun; Pei, Haiqing; Liu, Hai; Yue, Zhufeng

    2016-01-01

    The sequential Kriging reliability analysis (SKRA) method has been developed in recent years for nonlinear implicit response functions which are expensive to evaluate. This type of method includes EGRA: the efficient reliability analysis method, and AK-MCS: the active learning reliability method combining Kriging model and Monte Carlo simulation. The purpose of this paper is to improve SKRA by adaptive sampling regions and parallelizability. The adaptive sampling regions strategy is proposed to avoid selecting samples in regions where the probability density is so low that the accuracy of these regions has negligible effects on the results. The size of the sampling regions is adapted according to the failure probability calculated by last iteration. Two parallel strategies are introduced and compared, aimed at selecting multiple sample points at a time. The improvement is verified through several troublesome examples. - Highlights: • The ISKRA method improves the efficiency of SKRA. • Adaptive sampling regions strategy reduces the number of needed samples. • The two parallel strategies reduce the number of needed iterations. • The accuracy of the optimal value impacts the number of samples significantly.

  16. Alternative perspectives of safety in home delivered health care: a sequential exploratory mixed method study.

    Science.gov (United States)

    Jones, Sarahjane

    2016-10-01

    The aim of this study was to discover and describe how patients, carers and case management nurses define safety and to compare it with the traditional risk-reduction and harm-avoidance definition of safety. Care services are increasingly being delivered in the home for patients with complex long-term conditions. However, the concept of safety remains largely unexplored. A sequential, exploratory mixed method design was used. A qualitative case study of the case management programme in the English National Health Service was conducted during 2012. Thirteen interviews were conducted with patients (n = 9) and carers (n = 6), and three focus groups with nurses (n = 17) from three community care providers. The qualitative element explored the definition of safety. Data were subjected to framework analysis and themes were identified by participant group. Sequentially, a cross-sectional survey was conducted during 2013 in a fourth community care provider (patient n = 35, carer n = 19, nurse n = 26) as a form of triangulation. Patients and carers describe safety differently from case management nurses, choosing to focus on meeting needs. They use more positive language and recognize the role they have in safety in home-delivered health care. In comparison, case management nurses described safety similarly to the definitions found in the literature. However, when offered the patient and carer definition of safety, they preferentially selected this definition over their own or the literature definition. Patients and carers offer an alternative perspective on patient safety in home-delivered health care that identifies their role in ensuring safety and is more closely aligned with the empowerment philosophy of case management. © 2016 John Wiley & Sons Ltd.

  17. Metabolic routes along digestive system of licorice: multicomponent sequential metabolism method in rat.

    Science.gov (United States)

    Zhang, Lei; Zhao, Haiyu; Liu, Yang; Dong, Honghuan; Lv, Beiran; Fang, Min; Zhao, Huihui

    2016-06-01

    This study was conducted to establish the multicomponent sequential metabolism (MSM) method based on a comparative analysis along the digestive system following oral administration of licorice (Glycyrrhiza uralensis Fisch., Leguminosae), a traditional Chinese medicine widely used for harmonizing other ingredients in a formula. The licorice water extract (LWE) dissolved in Krebs-Ringer buffer solution (1 g/mL) was used to carry out the experiments, and the comparative analysis was performed using HPLC and LC-MS/MS methods. In vitro incubation, in situ closed-loop and in vivo blood sampling were used to measure the LWE metabolic profile along the digestive system. The incubation experiment showed that the LWE was basically stable in digestive juice. A comparative analysis then presented the metabolic profile of each prototype and its corresponding metabolites. The liver was the major metabolic organ for LWE, and metabolism by the intestinal flora and gut wall was also an important part of the process. The MSM method was practical and could be a potential method to describe the metabolic routes of multiple components before absorption into the systemic blood stream. Copyright © 2015 John Wiley & Sons, Ltd.

  18. Enhanced Silver Nanoparticle Chemiluminescence Method for the Determination of Gemifloxacin Mesylate using Sequential Injection Analysis

    International Nuclear Information System (INIS)

    Alarfaj, N.A.; Aly, F.A.; Tamimi, A.A.

    2013-01-01

    A sequential injection analysis (SIA) method with chemiluminescence detection has been proposed for the determination of the antibiotic gemifloxacin mesylate (GFX). The developed method is based on the enhancement effect of silver nanoparticles (Ag NPs) on the chemiluminescence (CL) signal of the luminol-potassium ferricyanide reaction in alkaline medium. The introduction of gemifloxacin into this system produced a significant decrease in the CL intensity in the presence of Ag NPs. The optimum conditions for CL emission were investigated. A linear relationship between the decrease in CL intensity and concentration was obtained in the range 0.01-1000 ng/mL (r = 0.9997), with a detection limit of 2.0 pg/mL and a quantification limit of 0.01 pg/mL. The relative standard deviation was 1.3%. The proposed method was employed for the determination of gemifloxacin in the bulk drug, in its pharmaceutical dosage forms and in biological fluids such as human serum and urine. The interference of some common additive compounds such as glucose, lactose, starch, talc and magnesium stearate was investigated, and no interference was found from these excipients. The obtained SIA results were statistically compared with those obtained from a reported method and did not show any significant difference at the 95% confidence level. (author)

  19. Feature Extraction in Sequential Multimedia Images: with Applications in Satellite Images and On-line Videos

    Science.gov (United States)

    Liang, Yu-Li

    Multimedia data is increasingly important in scientific discovery and people's daily lives. The content of massive multimedia is often diverse and noisy, and motion between frames is sometimes crucial in analyzing those data. Among all formats, still images and videos are the most commonly used. Images are compact in size but do not contain motion information. Videos record motion but are sometimes too big to be analyzed. Sequential images, which are sets of continuous images with a low frame rate, stand out because they are smaller than videos and still maintain motion information. This thesis investigates features in different types of noisy sequential images and proposes solutions that intelligently combine multiple features to successfully retrieve visual information from on-line videos and cloudy satellite images. The first task is detecting supraglacial lakes on the ice sheet in sequential satellite images. The dynamics of supraglacial lakes on the Greenland ice sheet deeply affect glacier movement, which is directly related to sea level rise and global environmental change. Detecting lakes on the ice suffers from diverse image quality and unexpected clouds. A new method is proposed to efficiently extract prominent lake candidates with irregular shapes and heterogeneous backgrounds, including in cloudy images. The proposed system fully automates the lake-tracking procedure with high accuracy. We further cooperated with geoscientists to examine the tracked lakes, which led to new scientific findings. The second task is detecting obscene content in on-line video chat services, such as Chatroulette, that randomly match pairs of users in video chat sessions. A big problem encountered in such systems is the presence of flashers and obscene content. Because of the variety of obscene content and the unstable quality of video captured by home web cameras, detecting misbehaving users is a highly challenging task. We propose SafeVchat, which is the first solution that achieves satisfactory

  20. Sensing of chlorpheniramine in pharmaceutical applications by sequential injector coupled with potentiometer

    Directory of Open Access Journals (Sweden)

    Tawfik A. Saleh

    2011-11-01

    Full Text Available This paper reports on the development of a system consisting of a portable sequential injector coupled with a potentiometric unit for the sensing of chlorpheniramine (CPA), based on the reaction of CPA with potassium permanganate in acidic media. Various experimental conditions affecting the potential intensity were studied and incorporated into the procedure. Under the optimum conditions, a linear relationship between the CPA concentration and peak area was obtained for the concentration range of 0.1–50 ppm. The method shows good recovery, with a relative standard deviation (RSD) below 3%. The detection limit was 0.05 ppm. The developed method was successfully applied for the determination of CPA in pure form and in pharmaceutical dosage forms. The results obtained using the method are in accord with those of the British Pharmacopoeia method. In addition to its accuracy and precision, the method has the advantages of being simple, inexpensive and rapid. Keywords: Sensing, Flow injection, Chlorpheniramine, Potentiometry

  1. About a sequential method for non destructive testing of structures by mechanical vibrations

    International Nuclear Information System (INIS)

    Suarez Antola, R.

    2001-01-01

    The presence and growth of cracks, voids or fields of pores under applied forces or environmental actions can produce a meaningful lowering of the proper frequencies of the normal modes of mechanical vibration of structures. A quite general expression for the square of a mode's proper frequency, as a functional of the displacement field, density field and elastic moduli fields, is used as a starting point. The effect of defects on frequency is modeled as equivalent changes in the density and elastic moduli fields, introducing the concept of a region of influence for each defect. An approximate expression is obtained which relates the relative lowering of the square of a mode's proper frequency to the position, size, shape and orientation of defects in the mode's displacement field. Some simple examples of structural elements with cracks or fields of pores are considered. The connection with linear elastic fracture mechanics is briefly exemplified. A sequential method is proposed for non-destructive testing of structures using mechanical vibrations combined with properly chosen local non-destructive testing methods.
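
    The "quite general expression" referred to above is essentially a Rayleigh quotient. A hedged reconstruction (notation assumed here, not taken from the paper) for a linearly elastic body with mode shape u_n, strain field ε_ij, stiffness tensor C_ijkl and density ρ, together with the first-order effect of a defect confined to its region of influence V_d, reads:

      \omega_n^{2} =
        \frac{\int_{V} \varepsilon_{ij}(\mathbf{u}_n)\, C_{ijkl}\, \varepsilon_{kl}(\mathbf{u}_n)\, \mathrm{d}V}
             {\int_{V} \rho\, \mathbf{u}_n \cdot \mathbf{u}_n\, \mathrm{d}V},
      \qquad
      \frac{\Delta(\omega_n^{2})}{\omega_n^{2}} \approx
        \frac{\int_{V_d} \varepsilon_{ij}\, \Delta C_{ijkl}\, \varepsilon_{kl}\, \mathrm{d}V}
             {\int_{V} \varepsilon_{ij}\, C_{ijkl}\, \varepsilon_{kl}\, \mathrm{d}V}
      - \frac{\int_{V_d} \Delta\rho\, \mathbf{u}_n \cdot \mathbf{u}_n\, \mathrm{d}V}
             {\int_{V} \rho\, \mathbf{u}_n \cdot \mathbf{u}_n\, \mathrm{d}V}.

    A crack or porous zone lowers the local stiffness (ΔC < 0 within V_d), so the first term is negative and the squared frequency drops, with the size of the drop weighted by the mode's strain energy density at the defect location; this is the dependence on defect position, size and orientation that the abstract describes.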

  2. Improving multiple-point-based a priori models for inverse problems by combining Sequential Simulation with the Frequency Matching Method

    DEFF Research Database (Denmark)

    Cordua, Knud Skou; Hansen, Thomas Mejer; Lange, Katrine

    In order to move beyond simplified covariance based a priori models, which are typically used for inverse problems, more complex multiple-point-based a priori models have to be considered. By means of marginal probability distributions ‘learned’ from a training image, sequential simulation has proven to be an efficient way of obtaining multiple realizations that honor the same multiple-point statistics as the training image. The frequency matching method provides an alternative way of formulating multiple-point-based a priori models. In this strategy the pattern frequency distributions (i.e. marginals) of the training image and a subsurface model are matched in order to obtain a solution with the same multiple-point statistics as the training image. Sequential Gibbs sampling is a simulation strategy that provides an efficient way of applying sequential simulation based algorithms as a priori...

  3. A method for multiple sequential analyses of macrophage functions using a small single cell sample

    Directory of Open Access Journals (Sweden)

    F.R.F. Nascimento

    2003-09-01

    Full Text Available Microbial pathogens such as bacillus Calmette-Guérin (BCG) induce the activation of macrophages. Activated macrophages can be characterized by the increased production of reactive oxygen and nitrogen metabolites, generated via NADPH oxidase and inducible nitric oxide synthase, respectively, and by the increased expression of major histocompatibility complex class II molecules (MHC II). Multiple microassays have been developed to measure these parameters. Usually each assay requires 2-5 x 10^5 cells per well. In some experimental conditions the number of cells is the limiting factor for the phenotypic characterization of macrophages. Here we describe a method whereby this limitation can be circumvented. Using a single 96-well microassay and a very small number of peritoneal cells obtained from C3H/HePas mice, containing as little as <=2 x 10^5 macrophages per well, we determined sequentially the oxidative burst (H2O2), nitric oxide production and MHC II (IAk) expression of BCG-activated macrophages. More specifically, with 100 µl of cell suspension it was possible to quantify H2O2 release and nitric oxide production after 1 and 48 h, respectively, and IAk expression after 48 h of cell culture. In addition, this microassay is easy to perform, highly reproducible and more economical.

  4. Sequential multi-nuclide emission rate estimation method based on gamma dose rate measurement for nuclear emergency management

    International Nuclear Information System (INIS)

    Zhang, Xiaole; Raskob, Wolfgang; Landman, Claudia; Trybushnyi, Dmytro; Li, Yu

    2017-01-01

    Highlights: • Sequentially reconstruct multi-nuclide emission using gamma dose rate measurements. • Incorporate a priori ratio of nuclides into the background error covariance matrix. • Sequentially augment and update the estimation and the background error covariance. • Suppress the generation of negative estimations for the sequential method. • Evaluate the new method with twin experiments based on the JRODOS system. - Abstract: In case of a nuclear accident, the source term is typically not known but extremely important for the assessment of the consequences to the affected population. Therefore the assessment of the potential source term is of uppermost importance for emergency response. A fully sequential method, derived from a regularized weighted least square problem, is proposed to reconstruct the emission and composition of a multiple-nuclide release using gamma dose rate measurement. The a priori nuclide ratios are incorporated into the background error covariance (BEC) matrix, which is dynamically augmented and sequentially updated. The negative estimations in the mathematical algorithm are suppressed by utilizing artificial zero-observations (with large uncertainties) to simultaneously update the state vector and BEC. The method is evaluated by twin experiments based on the JRodos system. The results indicate that the new method successfully reconstructs the emission and its uncertainties. Accurate a priori ratio accelerates the analysis process, which obtains satisfactory results with only limited number of measurements, otherwise it needs more measurements to generate reasonable estimations. The suppression of negative estimation effectively improves the performance, especially for the situation with poor a priori information, where it is more prone to the generation of negative values.
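
    A minimal Python sketch of the two ingredients this abstract describes: a Kalman-style sequential weighted least-squares update of the emission-rate state together with its background error covariance (BEC), and the suppression of negative estimates via artificial zero-observations. This is an illustration only, not the authors' implementation; the state layout, the dose-rate sensitivity matrix H and the pseudo-observation uncertainty are assumptions, and the dynamic augmentation of the state with new release intervals is omitted.

      import numpy as np

      def sequential_update(x, P, H, y, R):
          """One sequential weighted least-squares (Kalman-style) update.
          x: emission-rate estimate, P: background error covariance (BEC),
          H: dose-rate sensitivity matrix, y: gamma dose-rate measurements,
          R: measurement error covariance."""
          S = H @ P @ H.T + R                       # innovation covariance
          K = P @ H.T @ np.linalg.inv(S)            # gain
          x = x + K @ (y - H @ x)                   # updated estimate
          P = (np.eye(len(x)) - K @ H) @ P          # updated BEC
          return x, P

      def suppress_negatives(x, P, sigma=1e3):
          """Pseudo-observe zero (with a large uncertainty) for each negative
          component, so the state and the BEC are corrected consistently."""
          for i in np.where(x < 0)[0]:
              h = np.zeros((1, len(x)))
              h[0, i] = 1.0
              x, P = sequential_update(x, P, h, np.zeros(1), np.array([[sigma**2]]))
          return x, P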

  5. Sequential multi-nuclide emission rate estimation method based on gamma dose rate measurement for nuclear emergency management

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Xiaole, E-mail: zhangxiaole10@outlook.com [Institute for Nuclear and Energy Technologies, Karlsruhe Institute of Technology, Karlsruhe, D-76021 (Germany); Institute of Public Safety Research, Department of Engineering Physics, Tsinghua University, Beijing, 100084 (China); Raskob, Wolfgang; Landman, Claudia; Trybushnyi, Dmytro; Li, Yu [Institute for Nuclear and Energy Technologies, Karlsruhe Institute of Technology, Karlsruhe, D-76021 (Germany)

    2017-03-05

    Highlights: • Sequentially reconstruct multi-nuclide emission using gamma dose rate measurements. • Incorporate a priori ratio of nuclides into the background error covariance matrix. • Sequentially augment and update the estimation and the background error covariance. • Suppress the generation of negative estimations for the sequential method. • Evaluate the new method with twin experiments based on the JRODOS system. - Abstract: In case of a nuclear accident, the source term is typically not known but extremely important for the assessment of the consequences to the affected population. Therefore the assessment of the potential source term is of uppermost importance for emergency response. A fully sequential method, derived from a regularized weighted least square problem, is proposed to reconstruct the emission and composition of a multiple-nuclide release using gamma dose rate measurement. The a priori nuclide ratios are incorporated into the background error covariance (BEC) matrix, which is dynamically augmented and sequentially updated. The negative estimations in the mathematical algorithm are suppressed by utilizing artificial zero-observations (with large uncertainties) to simultaneously update the state vector and BEC. The method is evaluated by twin experiments based on the JRodos system. The results indicate that the new method successfully reconstructs the emission and its uncertainties. Accurate a priori ratio accelerates the analysis process, which obtains satisfactory results with only limited number of measurements, otherwise it needs more measurements to generate reasonable estimations. The suppression of negative estimation effectively improves the performance, especially for the situation with poor a priori information, where it is more prone to the generation of negative values.

  6. Reduction of Salmonella on chicken meat and chicken skin by combined or sequential application of lytic bacteriophage with chemical antimicrobials.

    Science.gov (United States)

    Sukumaran, Anuraj T; Nannapaneni, Rama; Kiess, Aaron; Sharma, Chander Shekhar

    2015-08-17

    The effectiveness of the recently approved Salmonella lytic bacteriophage preparation (SalmoFresh™) in reducing Salmonella in vitro and on chicken breast fillets was examined in combination with lauric arginate (LAE) or cetylpyridinium chloride (CPC). In another experiment, a sequential spray application of this bacteriophage (phage) solution on Salmonella inoculated chicken skin after a 20 s dip in chemical antimicrobials (LAE, CPC, peracetic acid, or chlorine) was also examined in reducing Salmonella counts on chicken skin. The application of phage in combination with CPC or LAE reduced S. Typhimurium, S. Heidelberg, and S. Enteritidis up to 5 log units in vitro at 4 °C. On chicken breast fillets, phage in combination with CPC or LAE resulted in significant (p<0.05) reductions of Salmonella ranging from 0.5 to 1.3 log CFU/g as compared to control up to 7 days of refrigerated storage. When phage was applied sequentially with chemical antimicrobials, all the treatments resulted in significant reductions of Salmonella. The application of chlorine (30 ppm) and PAA (400 ppm) followed by phage spray (10^9 PFU/ml) resulted in the highest Salmonella reductions of 1.6-1.7 and 2.2-2.5 log CFU/cm^2, respectively. In conclusion, the surface applications of phage in combination with LAE or CPC significantly reduced Salmonella counts on chicken breast fillets. However, higher reductions in Salmonella counts were achieved on chicken skin by the sequential application of chemical antimicrobials followed by phage spray. The sequential application of chlorine, PAA, and phage can provide additional hurdles to reduce Salmonella on fresh poultry carcasses or cut up parts. Copyright © 2015 Elsevier B.V. All rights reserved.

  7. Sequential Specification of Time-aware Stream Processing Applications (Extended Abstract)

    NARCIS (Netherlands)

    Geuns, S.J.; Hausmans, J.P.H.M.; Bekooij, Marco Jan Gerrit

    2012-01-01

    Automatic parallelization of Nested Loop Programs (NLPs) is an attractive method to create embedded real-time stream processing applications for multi-core systems. However, the description and parallelization of applications with a time-dependent functional behavior have not been considered in NLPs.

  8. [Sequential degradation of p-cresol by photochemical and biological methods].

    Science.gov (United States)

    Karetnikova, E A; Chaĭkovskaia, O N; Sokolova, I V; Nikitina, L I

    2008-01-01

    Sequential photo- and biodegradation of p-cresol was studied using a mercury lamp, as well as KrCl and XeCl excilamps. Preirradiation of p-cresol at a concentration of 10^-4 M did not affect the rate of its subsequent biodegradation. An increase in the concentration of p-cresol to 10^-3 M and in the duration of preliminary UV irradiation inhibited subsequent biodegradation. Biodegradation of p-cresol was accompanied by the formation of a product with a fluorescence maximum at 365 nm (λex 280 nm), and photodegradation yielded a compound fluorescing at 400 nm (λex 330 nm). Sequential UV and biodegradation led to the appearance of bands in the fluorescence spectra that were ascribed to p-cresol and its photolysis products. It was shown that sequential use of biological and photochemical degradation results in degradation of not only the initial toxicant but also the metabolites formed during its biodegradation.

  9. Haemodialysis work environment contributors to job satisfaction and stress: a sequential mixed methods study.

    Science.gov (United States)

    Hayes, Bronwyn; Bonner, Ann; Douglas, Clint

    2015-01-01

    Haemodialysis nurses form long-term relationships with patients in a technologically complex work environment. Previous studies have highlighted that haemodialysis nurses face stressors related to the nature of their work and also their work environments, leading to reported high levels of burnout. Using Kanter's (1997) Structural Empowerment Theory as a guiding framework, the aim of this study was to explore the factors contributing to satisfaction with the work environment, job satisfaction, job stress and burnout in haemodialysis nurses. Using a sequential mixed-methods design, the first phase involved an on-line survey comprising demographic and work characteristics, the Brisbane Practice Environment Measure (B-PEM), the Index of Work Satisfaction (IWS), the Nursing Stress Scale (NSS) and the Maslach Burnout Inventory (MBI). The second phase involved conducting eight semi-structured interviews, with the data thematically analyzed. Of the 417 nurses surveyed, the majority were female (90.9%), aged over 41 years (74.3%), and 47.4% had worked in haemodialysis for more than 10 years. Overall the work environment was perceived positively and there was a moderate level of job satisfaction. However, levels of stress and emotional exhaustion (burnout) were high. Two themes, ability to care and feeling successful as a nurse, provided clarity to the level of job satisfaction found in phase 1, while two further themes, patients as quasi-family and intense working teams, explained why working as a haemodialysis nurse was both satisfying and stressful. Nurse managers can use these results to identify issues being experienced by haemodialysis nurses working in the unit they are supervising.

  10. Implementing Quality Criteria in Designing and Conducting a Sequential Quan [right arrow] Qual Mixed Methods Study of Student Engagement with Learning Applied Research Methods Online

    Science.gov (United States)

    Ivankova, Nataliya V.

    2014-01-01

    In spite of recent methodological developments related to quality assurance in mixed methods research, practical examples of how to implement quality criteria in designing and conducting sequential QUAN [right arrow] QUAL mixed methods studies to ensure the process is systematic and rigorous remain scarce. This article discusses a three-step…

  11. On the equivalence of optimality criterion and sequential approximate optimization methods in the classical layout problem

    NARCIS (Netherlands)

    Groenwold, A.A.; Etman, L.F.P.

    2008-01-01

    We study the classical topology optimization problem, in which minimum compliance is sought, subject to linear constraints. Using a dual statement, we propose two separable and strictly convex subproblems for use in sequential approximate optimization (SAO) algorithms. Respectively, the subproblems

  12. Full-Color LCD Microdisplay System Based on OLED Backlight Unit and Field-Sequential Color Driving Method

    Directory of Open Access Journals (Sweden)

    Sungho Woo

    2011-01-01

    Full Text Available We developed a single-panel LCD microdisplay system using a field-sequential color (FSC) driving method and an organic light-emitting diode (OLED) as a backlight unit (BLU). The 0.76′′ OLED BLU with red, green, and blue (RGB) colors was fabricated by a conventional UV photolithography patterning process and by vacuum deposition of small molecule organic layers. The field-sequential driving frequency was set to 255 Hz to allow each of the RGB colors to be generated without color mixing at the given display frame rate. A prototype FSC LCD microdisplay system consisting of a 0.7′′ LCD microdisplay panel and the 0.76′′ OLED BLU successfully exhibited color display and moving picture images using the FSC driving method.

  13. A sequential adaptation technique and its application to the Mark 12 IFF system

    Science.gov (United States)

    Bailey, John S.; Mallett, John D.; Sheppard, Duane J.; Warner, F. Neal; Adams, Robert

    1986-07-01

    Sequential adaptation uses only two sets of receivers, correlators, and A/D converters which are time multiplexed to effect spatial adaptation in a system with (N) adaptive degrees of freedom. This technique can substantially reduce the hardware cost over what is realizable in a parallel architecture. A three channel L-band version of the sequential adapter was built and tested for use with the MARK XII IFF (identify friend or foe) system. In this system the sequentially determined adaptive weights were obtained digitally but implemented at RF. As a result, many of the post RF hardware induced sources of error that normally limit cancellation, such as receiver mismatch, are removed by the feedback property. The result is a system that can yield high levels of cancellation and be readily retrofitted to currently fielded equipment.

  14. Sequential analysis of materials balances. Application to a prospective reprocessing facility

    International Nuclear Information System (INIS)

    Picard, R.

    1986-01-01

    This paper discusses near-real-time accounting in the context of the prospective DWK reprocessing plant. Sensitivity of a standard sequential testing procedure, applied to unfalsified operator data only, is examined with respect to a variety of loss scenarios. It is seen that large inventories preclude high-probability detection of certain protracted losses of material. In Sec. 2, a rough error propagation for the MBA of interest is outlined. Mathematical development for the analysis is given in Sec. 3, and generic aspects of sequential testing are reviewed in Sec. 4. In Sec. 5, results from a simulation to quantify performance of the accounting system are presented

  15. Secretome data from Trichoderma reesei and Aspergillus niger cultivated in submerged and sequential fermentation methods

    Directory of Open Access Journals (Sweden)

    Camila Florencio

    2016-09-01

    Full Text Available The cultivation procedure and the fungal strain applied for enzyme production may influence the levels and profile of the proteins produced. The proteomic analysis data presented here provide critical information to compare proteins secreted by Trichoderma reesei and Aspergillus niger when cultivated through submerged and sequential fermentation processes, using steam-exploded sugarcane bagasse as inducer for enzyme production. The proteins were organized according to the families described in the CAZy database as cellulases, hemicellulases, proteases/peptidases, cell-wall proteins, lipases, others (catalase, esterase, etc.), glycoside hydrolase families, predicted and hypothetical proteins. Further detailed analysis of this data is provided in “Secretome analysis of Trichoderma reesei and Aspergillus niger cultivated by submerged and sequential fermentation process: enzyme production for sugarcane bagasse hydrolysis”, C. Florencio, F.M. Cunha, A.C. Badino, C.S. Farinas, E. Ximenes, M.R. Ladisch (2016) [1]. Keywords: Trichoderma reesei, Aspergillus niger, Enzyme Production, Secretome

  16. Sequential method for the assessment of innovations in computer assisted industrial processes; Metodo secuencial para evaluacion de innovaciones en procesos industriales asistido por computadora

    Energy Technology Data Exchange (ETDEWEB)

    Suarez Antola, R [Universidad Catolica del Uruguay, Montevideo (Uruguay); Artucio, G [Ministerio de Industria Energia y Mineria. Direccion Nacional de Tecnologia Nuclear, Montevideo (Uruguay)

    1995-08-01

    A sequential method for the assessment of innovations in industrial processes is proposed, using suitable combinations of mathematical modelling and numerical simulation of dynamics. Some advantages and limitations of the proposed method are discussed. tabs.

  17. Simulation based sequential Monte Carlo methods for discretely observed Markov processes

    OpenAIRE

    Neal, Peter

    2014-01-01

    Parameter estimation for discretely observed Markov processes is a challenging problem. However, simulation of Markov processes is straightforward using the Gillespie algorithm. We exploit this ease of simulation to develop an effective sequential Monte Carlo (SMC) algorithm for obtaining samples from the posterior distribution of the parameters. In particular, we introduce two key innovations, coupled simulations, which allow us to study multiple parameter values on the basis of a single sim...
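
    As background for readers, the Gillespie algorithm that the abstract exploits for simulating Markov jump processes can be written in a few lines; the birth-death reaction network and parameter values below are illustrative assumptions, not taken from the paper, and the full SMC machinery is omitted.

      import numpy as np

      def gillespie_birth_death(birth, death, x0, t_max, seed=0):
          """Exact stochastic simulation of a birth-death Markov jump process."""
          rng = np.random.default_rng(seed)
          t, x = 0.0, x0
          times, states = [t], [x]
          while t < t_max:
              rates = np.array([birth, death * x])   # propensities of the two events
              total = rates.sum()
              if total == 0.0:
                  break
              t += rng.exponential(1.0 / total)      # waiting time to the next event
              x += 1 if rng.random() < rates[0] / total else -1
              times.append(t)
              states.append(x)
          return times, states

      # One trajectory of the kind an SMC sampler would generate repeatedly.
      ts, xs = gillespie_birth_death(birth=2.0, death=0.1, x0=5, t_max=50.0)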

  18. Online sequential condition prediction method of natural circulation systems based on EOS-ELM and phase space reconstruction

    International Nuclear Information System (INIS)

    Chen, Hanying; Gao, Puzhen; Tan, Sichao; Tang, Jiguo; Yuan, Hongsheng

    2017-01-01

    Highlights: •An online condition prediction method for natural circulation systems in NPP was proposed based on EOS-ELM. •The proposed online prediction method was validated using experimental data. •The training speed of the proposed method is significantly fast. •The proposed method can achieve good accuracy in a wide parameter range. -- Abstract: Natural circulation design is widely used in the passive safety systems of advanced nuclear power reactors. Irregular and chaotic flow oscillations are often observed in boiling natural circulation systems, so it is difficult for operators to monitor and predict the condition of these systems. An online condition forecasting method for natural circulation systems is proposed in this study as an assisting technique for plant operators. The proposed prediction approach was developed based on the Ensemble of Online Sequential Extreme Learning Machine (EOS-ELM) and phase space reconstruction. Online Sequential Extreme Learning Machine (OS-ELM) is an online sequential learning neural network algorithm and EOS-ELM is its ensemble version. The proposed condition prediction method can be initiated by a small chunk of monitoring data and it can be updated by newly arrived data at very fast speed during the online prediction. Simulation experiments were conducted on the data of two natural circulation loops to validate the performance of the proposed method. The simulation results show that the proposed prediction model can successfully recognize different types of flow oscillations and accurately forecast the trend of monitored plant variables. The influence of the number of hidden nodes and neural network inputs on prediction performance was studied and the proposed model can achieve good accuracy in a wide parameter range. Moreover, the comparison results show that the proposed condition prediction method has much faster online learning speed and better prediction accuracy than a conventional neural network model.
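
    For readers unfamiliar with OS-ELM, the core of the online sequential update is a recursive least-squares correction of the output weights as each new chunk of data arrives. The Python sketch below is a generic single-output illustration under assumed array shapes and a random sigmoid hidden layer; it is not the code used in the paper, and the ensemble (EOS-ELM) and phase space reconstruction steps are omitted.

      import numpy as np

      class OSELM:
          """Minimal online sequential extreme learning machine (single output)."""
          def __init__(self, n_in, n_hidden, seed=0):
              rng = np.random.default_rng(seed)
              self.W = rng.normal(size=(n_in, n_hidden))   # fixed random input weights
              self.b = rng.normal(size=n_hidden)           # fixed random biases
              self.beta = None                             # output weights (learned)
              self.P = None                                # inverse correlation matrix

          def _h(self, X):
              return 1.0 / (1.0 + np.exp(-(X @ self.W + self.b)))   # hidden layer

          def init_fit(self, X0, y0):
              """Initialize from a small chunk (rows >= number of hidden nodes)."""
              H = self._h(X0)
              self.P = np.linalg.inv(H.T @ H)
              self.beta = self.P @ H.T @ y0

          def partial_fit(self, X, y):
              """Update beta and P from a newly arrived chunk (no retraining)."""
              H = self._h(X)
              S = np.eye(len(X)) + H @ self.P @ H.T
              self.P -= self.P @ H.T @ np.linalg.solve(S, H @ self.P)
              self.beta += self.P @ H.T @ (y - H @ self.beta)

          def predict(self, X):
              return self._h(X) @ self.beta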

  19. A comparison of sequential and information-based methods for determining the co-integration rank in heteroskedastic VAR MODELS

    DEFF Research Database (Denmark)

    Cavaliere, Giuseppe; Angelis, Luca De; Rahbek, Anders

    2015-01-01

    In this article, we investigate the behaviour of a number of methods for estimating the co-integration rank in VAR systems characterized by heteroskedastic innovation processes. In particular, we compare the efficacy of the most widely used information criteria, such as Akaike Information Criterion....... The relative finite-sample properties of the different methods are investigated by means of a Monte Carlo simulation study. For the simulation DGPs considered in the analysis, we find that the BIC-based procedure and the bootstrap sequential test procedure deliver the best overall performance in terms......-based method to over-estimate the co-integration rank in relatively small sample sizes....

  20. Sequential injection analysis for automation of the Winkler methodology, with real-time SIMPLEX optimization and shipboard application

    Energy Technology Data Exchange (ETDEWEB)

    Horstkotte, Burkhard; Tovar Sanchez, Antonio; Duarte, Carlos M. [Department of Global Change Research, IMEDEA (CSIC-UIB) Institut Mediterrani d' Estudis Avancats, Miquel Marques 21, 07190 Esporles (Spain); Cerda, Victor, E-mail: Victor.Cerda@uib.es [University of the Balearic Islands, Department of Chemistry Carreterra de Valldemossa km 7.5, 07011 Palma de Mallorca (Spain)

    2010-01-25

    A multipurpose analyzer system based on sequential injection analysis (SIA) for the determination of dissolved oxygen (DO) in seawater is presented. Three operation modes were established and successfully applied onboard during a research cruise in the Southern Ocean: 1st, in-line execution of the entire Winkler method including precipitation of manganese (II) hydroxide, fixation of DO, precipitate dissolution by confluent acidification, and spectrophotometric quantification of the generated iodine/tri-iodide (I2/I3^-); 2nd, spectrophotometric quantification of I2/I3^- in samples prepared according to the classical Winkler protocol; and 3rd, accurate batch-wise titration of I2/I3^- with thiosulfate using one syringe pump of the analyzer as an automatic burette. In the first mode, the zone stacking principle was applied to achieve high dispersion of the reagent solutions in the sample zone. Spectrophotometric detection was done at the isosbestic wavelength of I2/I3^- at 466 nm. Highly reduced consumption of reagents and sample compared to the classical Winkler protocol, linear response up to 16 mg L^-1 DO, and an injection frequency of 30 per hour were achieved. It is noteworthy that for the offline protocol, sample metering and quantification with a potentiometric titrator lasts in general over 5 min without counting sample fixation, incubation, and glassware cleaning. The modified SIMPLEX methodology was used for the simultaneous optimization of four volumetric and two chemical variables. Vertex calculation and consequent application, including in-line preparation of one reagent, was carried out in real-time using the software AutoAnalysis. The analytical system featured high signal stability, robustness, and a repeatability of 3% RSD (1st mode) and 0.8% (2nd mode) during shipboard application.

  1. Sequential injection analysis for automation of the Winkler methodology, with real-time SIMPLEX optimization and shipboard application

    International Nuclear Information System (INIS)

    Horstkotte, Burkhard; Tovar Sanchez, Antonio; Duarte, Carlos M.; Cerda, Victor

    2010-01-01

    A multipurpose analyzer system based on sequential injection analysis (SIA) for the determination of dissolved oxygen (DO) in seawater is presented. Three operation modes were established and successfully applied onboard during a research cruise in the Southern Ocean: 1st, in-line execution of the entire Winkler method including precipitation of manganese (II) hydroxide, fixation of DO, precipitate dissolution by confluent acidification, and spectrophotometric quantification of the generated iodine/tri-iodide (I2/I3^-); 2nd, spectrophotometric quantification of I2/I3^- in samples prepared according to the classical Winkler protocol; and 3rd, accurate batch-wise titration of I2/I3^- with thiosulfate using one syringe pump of the analyzer as an automatic burette. In the first mode, the zone stacking principle was applied to achieve high dispersion of the reagent solutions in the sample zone. Spectrophotometric detection was done at the isosbestic wavelength of I2/I3^- at 466 nm. Highly reduced consumption of reagents and sample compared to the classical Winkler protocol, linear response up to 16 mg L^-1 DO, and an injection frequency of 30 per hour were achieved. It is noteworthy that for the offline protocol, sample metering and quantification with a potentiometric titrator lasts in general over 5 min without counting sample fixation, incubation, and glassware cleaning. The modified SIMPLEX methodology was used for the simultaneous optimization of four volumetric and two chemical variables. Vertex calculation and consequent application, including in-line preparation of one reagent, was carried out in real-time using the software AutoAnalysis. The analytical system featured high signal stability, robustness, and a repeatability of 3% RSD (1st mode) and 0.8% (2nd mode) during shipboard application.

  2. Volatile profile characterisation of Chilean sparkling wines produced by traditional and Charmat methods via sequential stir bar sorptive extraction.

    Science.gov (United States)

    Ubeda, C; Callejón, R M; Troncoso, A M; Peña-Neira, A; Morales, M L

    2016-09-15

    The volatile compositions of Charmat and traditional Chilean sparkling wines were studied for the first time. For this purpose, EG-Silicone and PDMS polymeric phases were compared and, afterwards, the most adequate was selected. The best extraction method turned out to be a sequential extraction in the headspace and by immersion using two PDMS twisters. A total of 130 compounds were determined. In traditional Chilean sparkling wines, ethyl esters were significantly higher, while acetic esters and ketones were predominant in the Charmat wines. PCA and LDA confirmed the differences in the volatile profiles between the production methods (traditional vs. Charmat). Copyright © 2016 Elsevier Ltd. All rights reserved.

  3. Green chemistry: solvent- and metal-free Prins cyclization. Application to sequential reactions.

    Science.gov (United States)

    Clarisse, Damien; Pelotier, Béatrice; Piva, Olivier; Fache, Fabienne

    2012-01-04

    Prins cyclization between a homoallylic alcohol and an aldehyde, promoted by a trimethylsilyl halide, afforded 4-halo-tetrahydropyrans in good to excellent yields. Thanks to the absence of solvent and metal, the THPs thus obtained were used without purification in several further reactions, in a sequential way, affording in particular new indole derivatives. This journal is © The Royal Society of Chemistry 2012

  4. The Biological Effects of Quadripolar Radiofrequency Sequential Application: A Human Experimental Study

    OpenAIRE

    Nicoletti, Giovanni; Cornaglia, Antonia Icaro; Faga, Angela; Scevola, Silvia

    2014-01-01

    Objective: An experimental study was conducted to assess the effectiveness and safety of an innovative quadripolar variable electrode configuration radiofrequency device with objective measurements in an ex vivo and in vivo human experimental model. Background data: Nonablative radiofrequency applications are well-established anti-ageing procedures for cosmetic skin tightening. Methods: The study was performed in two steps: ex vivo and in vivo assessments. In the ex vivo assessments the radio...

  5. Sequential injection titration method using second-order signals: determination of acidity in plant oils and biodiesel samples.

    Science.gov (United States)

    del Río, Vanessa; Larrechi, M Soledad; Callao, M Pilar

    2010-06-15

    A new concept of flow titration is proposed and demonstrated for the determination of total acidity in plant oils and biodiesel. We use sequential injection analysis (SIA) with a diode array spectrophotometric detector linked to chemometric tools such as multivariate curve resolution-alternating least squares (MCR-ALS). This system is based on the evolution of the basic species of an acid-base indicator, alizarine, when it comes into contact with a sample that contains free fatty acids. The gradual pH change in the reactor coil due to diffusion and reaction phenomena allows the sequential appearance of both species of the indicator in the detector coil, recording a data matrix for each sample. The SIA-MCR-ALS method helps to reduce the amounts of sample and reagents and the time consumed. Each determination consumes 0.413 ml of sample, 0.250 ml of indicator and 3 ml of carrier (ethanol) and generates 3.333 ml of waste. The frequency of analysis is high (12 samples h^-1 including all steps, i.e., cleaning, preparing and analysing). The reagents employed are in common laboratory use and it is not necessary to use reagents of exactly known concentration. The method was applied to determine acidity in plant oil and biodiesel samples. Results obtained by the proposed method compare well with those obtained by the official European Community method, which is time consuming and uses large amounts of organic solvents.
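
    As a rough illustration of the MCR-ALS step mentioned above (a generic sketch, not the authors' implementation; the data matrix layout and the simple non-negativity clipping are assumptions), the bilinear model D = C S^T is resolved by alternating least squares:

      import numpy as np

      def mcr_als(D, S0, n_iter=100):
          """Resolve D (samples/time x wavelengths) into concentration profiles C
          and pure spectra S under a non-negativity constraint."""
          S = S0.copy()                                  # initial spectral estimates
          for _ in range(n_iter):
              C = D @ S @ np.linalg.inv(S.T @ S)         # least-squares C for fixed S
              C = np.clip(C, 0.0, None)                  # non-negativity
              S = D.T @ C @ np.linalg.inv(C.T @ C)       # least-squares S for fixed C
              S = np.clip(S, 0.0, None)
          return C, S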

  6. Sequential probability ratio controllers for safeguards radiation monitors

    International Nuclear Information System (INIS)

    Fehlau, P.E.; Coop, K.L.; Nixon, K.V.

    1984-01-01

    Sequential hypothesis tests applied to nuclear safeguards accounting methods make the methods more sensitive to detecting diversion. The sequential tests also improve transient signal detection in safeguards radiation monitors. This paper describes three microprocessor control units with sequential probability-ratio tests for detecting transient increases in radiation intensity. The control units are designed for three specific applications: low-intensity monitoring with Poisson probability ratios, higher intensity gamma-ray monitoring where fixed counting intervals are shortened by sequential testing, and monitoring moving traffic where the sequential technique responds to variable-duration signals. The fixed-interval controller shortens a customary 50-s monitoring time to an average of 18 s, making the monitoring delay less bothersome. The controller for monitoring moving vehicles benefits from the sequential technique by maintaining more than half its sensitivity when the normal passage speed doubles
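
    The sequential probability-ratio logic used by such controllers can be illustrated with a textbook Wald test for an increase in a Poisson count rate; the rates, interval length and error probabilities below are assumed for illustration and do not describe the actual monitor firmware.

      import math

      def poisson_sprt(counts, dt, rate_bg, rate_alarm, alpha=0.001, beta=0.1):
          """Wald sequential probability ratio test on counts per interval dt.
          Returns 'alarm', 'clear', or 'continue' after consuming the stream."""
          upper = math.log((1 - beta) / alpha)    # decide for the alarm hypothesis
          lower = math.log(beta / (1 - alpha))    # decide for background only
          llr = 0.0
          for k in counts:
              llr += k * math.log(rate_alarm / rate_bg) - (rate_alarm - rate_bg) * dt
              if llr >= upper:
                  return "alarm"
              if llr <= lower:
                  return "clear"
          return "continue"

      # Example: 0.1 s intervals, 50 cps background versus a 100 cps alarm hypothesis.
      decision = poisson_sprt([9, 12, 14, 15], dt=0.1, rate_bg=50.0, rate_alarm=100.0)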

  7. Sequential Injection Method for Rapid and Simultaneous Determination of 236U, 237Np, and Pu Isotopes in Seawater

    DEFF Research Database (Denmark)

    Qiao, Jixin; Hou, Xiaolin; Steier, Peter

    2013-01-01

    An automated analytical method implemented in a novel dual-column tandem sequential injection (SI) system was developed for simultaneous determination of 236U, 237Np, 239Pu, and 240Pu in seawater samples. A combination of TEVA and UTEVA extraction chromatography was exploited to separate and purify...... target analytes, whereupon plutonium and neptunium were simultaneously isolated and purified on TEVA, while uranium was collected on UTEVA. The separation behavior of U, Np, and Pu on TEVA–UTEVA columns was investigated in detail in order to achieve high chemical yields and complete purification...

  8. Method for sequentially processing a multi-level interconnect circuit in a vacuum chamber

    Science.gov (United States)

    Routh, D. E.; Sharma, G. C. (Inventor)

    1984-01-01

    An apparatus is disclosed which includes a vacuum system having a vacuum chamber in which wafers are processed on rotating turntables. The vacuum chamber is provided with an RF sputtering system and a dc magnetron sputtering system. A gas inlet introduces various gases to the vacuum chamber and creates various gas plasmas during the sputtering steps. The rotating turntables ensure that the respective wafers are present under the sputtering guns for an average amount of time such that consistency in sputtering and deposition is achieved. By continuous and sequential processing of the wafers in a common vacuum chamber without removal, the adverse effects of exposure to atmospheric conditions are eliminated, providing higher quality circuit contacts and functional devices.

  9. Sequential Power-Dependence Theory

    NARCIS (Netherlands)

    Buskens, Vincent; Rijt, Arnout van de

    2008-01-01

    Existing methods for predicting resource divisions in laboratory exchange networks do not take into account the sequential nature of the experimental setting. We extend network exchange theory by considering sequential exchange. We prove that Sequential Power-Dependence Theory—unlike

  10. Double-radionuclide autoradiographic method using N-isopropyl-iodoamphetamine for sequential measurements of local cerebral blood flow

    International Nuclear Information System (INIS)

    Obrenovitch, T.P.; Clayton, C.B.; Strong, A.J.

    1987-01-01

    A double-radionuclide autoradiographic method has been assessed for sequential determinations of local CBF (LCBF). It is based on two successive intravascular injections of N-isopropyl-p-iodoamphetamine (IMP) labelled with different radionuclides, whose concentrations can later be differentiated in the same tissue section using double-radionuclide autoradiography. Previous studies suggested that the distribution of IMP, up to 30 min after its administration, still represents LCBFs. Our data indicate that, provided the tracer is injected directly into the left ventricle, there is little back diffusion from normal brain to blood under physiological conditions for at least 35 min following the tracer injection and an injection of unlabelled IMP, in a dose larger than that used for blood flow determination, does not displace any labelled IMP previously taken up by the brain, nor does it displace any labelled IMP previously accumulated in the lung that would lead to secondary brain uptake. On the basis of these results, we conclude that sequential autoradiographic determinations of LCBF using IMP labelled with different radionuclides is possible. This is a promising experimental method for the simultaneous investigation of changes in LCBF in several CNS structures

  11. Comparison of /sup 32/P therapy and sequential hemibody irradiation (HBI) for bony metastases as methods of whole body irradiation

    Energy Technology Data Exchange (ETDEWEB)

    Aziz, H.; Choi, K.; Sohn, C.; Yaes, R.; Rotman, M.

    1986-06-01

    We report a retrospective study of 15 patients with prostate carcinoma and diffuse bone metastases treated with sodium 32P for palliation of pain at Downstate Medical Center and Kings County Hospital from 1973 to 1978. The response rates, duration of response, and toxicities are compared with those of other series of patients treated with 32P and with sequential hemibody irradiation. The response rates and duration of response are similar with both modalities, ranging from 58 to 95% with a duration of 3.3 to 6 months with 32P and from 75 to 86% with a median duration of 5.5 months with hemibody irradiation. There are significant differences in the patterns of response and in the toxicities of the two treatment methods. Both methods cause significant bone marrow depression. Acute radiation syndrome, radiation pneumonitis, and alopecia are seen with sequential hemibody irradiation and not with 32P, but their incidence can be reduced by careful treatment planning. Hemibody irradiation can provide pain relief within 24 to 48 h, while 32P may produce an initial exacerbation of pain. Lower hemibody irradiation alone is less toxic than either upper hemibody irradiation or 32P treatment.

  12. Sparse Classification - Methods & Applications

    DEFF Research Database (Denmark)

    Einarsson, Gudmundur

    for analysing such data carry the potential to revolutionize tasks such as medical diagnostics where often decisions need to be based on only a few high-dimensional observations. This explosion in data dimensionality has sparked the development of novel statistical methods. In contrast, classical statistics...

  13. Kriging : Methods and Applications

    NARCIS (Netherlands)

    Kleijnen, J.P.C.

    2017-01-01

    In this chapter we present Kriging— also known as a Gaussian process (GP) model— which is a mathematical interpolation method. To select the input combinations to be simulated, we use Latin hypercube sampling (LHS); we allow uniform and non-uniform distributions of the simulation inputs. Besides

  14. Sequential assignment of proline-rich regions in proteins: Application to modular binding domain complexes

    International Nuclear Information System (INIS)

    Kanelis, Voula; Donaldson, Logan; Muhandiram, D.R.; Rotin, Daniela; Forman-Kay, Julie D.; Kay, Lewis E.

    2000-01-01

    Many protein-protein interactions involve amino acid sequences containing proline-rich motifs and even poly-proline stretches. The lack of amide protons in such regions complicates assignment, since 1HN-based triple-resonance assignment strategies cannot be employed. Two such systems that we are currently studying include an SH2 domain from the protein Crk with a region containing 9 prolines in a 14 amino acid sequence, as well as a WW domain that interacts with a proline-rich target. A modified version of the HACAN pulse scheme, originally described by Bax and co-workers [Wang et al. (1995) J. Biomol. NMR, 5, 376-382], and an experiment which correlates the intra-residue 1Hα, 13Cα/13Cβ chemical shifts with the 15N shift of the subsequent residue are presented and applied to the two systems listed above, allowing sequential assignment of the molecules.

  15. Single, simultaneous and sequential applications of ultrasonic frequencies for the elimination of ibuprofen in water.

    Science.gov (United States)

    Ziylan-Yavas, Asu; Ince, Nilsun H

    2018-01-01

    The study is about the assessment of single and multi-frequency operations for the overall degradation of a widely consumed analgesic pharmaceutical, ibuprofen (IBP). The selected frequencies were in the range of 20-1130 kHz, with emissions coming from probes, baths and piezo-electric transducers attached to plate-type devices. Multi-frequency operations were applied either simultaneously as "duals", or sequentially at fixed time intervals; and the total reaction time in all operations was 30 min. The work also covers evaluation of the effect of zero-valent iron (ZVI) on the efficiency of the degradation process and the performance of the reaction systems. It was found that low-frequency probe-type devices, especially at 20 kHz, were ineffective when applied singly and without ZVI, and relatively more effective in combined-frequency operations in the presence of ZVI. The power efficiencies of the reactors and/or reaction systems showed that the 20-kHz probe was considerably more energy intensive than all others, and was therefore not used in multi-frequency operations. The most efficient reactor in terms of power consumption was the bath (200 kHz), which however provided insufficient mineralization of the test chemical. The highest percentage of TOC decay (37%) was obtained in a dual-frequency operation (40/572 kHz) with ZVI, in which the energy consumption was neither low nor exceptionally high. A sequential operation (40+200 kHz) in that respect was more efficient, because it required much less energy for a similar TOC decay performance (30%). In general, the degradation of IBP increased with increased power consumption, which in turn reduced the sonochemical yield. The study also showed that advanced Fenton reactions with ZVI were faster in the presence of ultrasound, and the metal was very effective in improving the performance of low-frequency operations. Copyright © 2017 Elsevier B.V. All rights reserved.

  16. Direct methods and residue type specific isotope labeling in NMR structure determination and model-driven sequential assignment

    International Nuclear Information System (INIS)

    Schedlbauer, Andreas; Auer, Renate; Ledolter, Karin; Tollinger, Martin; Kloiber, Karin; Lichtenecker, Roman; Ruedisser, Simon; Hommel, Ulrich; Schmid, Walther; Konrat, Robert; Kontaxis, Georg

    2008-01-01

    Direct methods in NMR based structure determination start from an unassigned ensemble of unconnected gaseous hydrogen atoms. Under favorable conditions they can produce low resolution structures of proteins. Usually a prohibitively large number of NOEs is required to solve a protein structure ab initio, but even with a much smaller set of distance restraints low resolution models can be obtained which resemble a protein fold. One problem is that at such low resolution and in the absence of a force field it is impossible to distinguish the correct protein fold from its mirror image. In a hybrid approach these ambiguous models have the potential to aid in the process of sequential backbone chemical shift assignment when 13Cβ and 13C' shifts are not available for sensitivity reasons. Regardless of the overall fold they enhance the information content of the NOE spectra. These, combined with residue specific labeling and minimal triple-resonance data using 13Cα connectivity, can provide almost complete sequential assignment. Strategies for residue type specific labeling with customized isotope labeling patterns are of great advantage in this context. Furthermore, this approach is to some extent error-tolerant with respect to data incompleteness, limited precision of the peak picking, and structural errors caused by misassignment of NOEs.

  17. Application of sequential extraction analysis to electrokinetic remediation of cadmium, nickel and zinc from contaminated soils

    International Nuclear Information System (INIS)

    Giannis, Apostolos; Pentari, Despina; Wang, Jing-Yuan; Gidarakos, Evangelos

    2010-01-01

    An enhanced electrokinetic process for the removal of cadmium (Cd), nickel (Ni) and zinc (Zn) from contaminated soils was performed. The efficiency of the chelating agents nitrilotriacetic acid (NTA), diethylenetriaminepentaacetic acid (DTPA) and diaminocyclohexanetetraacetic acid (DCyTA) was examined under a constant potential gradient (1.23 V/cm). The results showed that the chelates were effective in desorbing metals at high pH, with metal-chelate anion complexes migrating towards the anode. At low pH, metals existing as dissolved cations migrated towards the cathode. In such conflicting directions, the metals accumulated in the middle of the cell. Speciation of the metals during the electrokinetic experiments was performed to provide an understanding of the distribution of the Cd, Ni and Zn. The results of sequential extraction analysis revealed that the forms of the metals could be altered from one fraction to another due to the variation of physico-chemical conditions throughout the cell, such as pH, redox potential and the chemistry of the electrolyte solution during the electrokinetic treatment. It was found that the binding forms of the metals were changed from types that are difficult to extract to more easily extractable types.

  18. Application of sequential extraction analysis to electrokinetic remediation of cadmium, nickel and zinc from contaminated soils

    Energy Technology Data Exchange (ETDEWEB)

    Giannis, Apostolos, E-mail: apostolos.giannis@enveng.tuc.gr [Department of Environmental Engineering, Technical University of Crete, Politechnioupolis, Chania 73100 (Greece); Pentari, Despina [Department of Mineral Resources Engineering, Technical University of Crete, Politechnioupolis, Chania 73100 (Greece); Wang, Jing-Yuan [Residues and Resource Reclamation Centre (R3C), Nanyang Technological University, 50 Nanyang Avenue, 639798 Singapore (Singapore); Gidarakos, Evangelos, E-mail: gidarako@mred.tuc.gr [Department of Environmental Engineering, Technical University of Crete, Politechnioupolis, Chania 73100 (Greece)

    2010-12-15

    An enhanced electrokinetic process for the removal of cadmium (Cd), nickel (Ni) and zinc (Zn) from contaminated soils was performed. The efficiency of the chelating agents nitrilotriacetic acid (NTA), diethylenetriaminepentaacetic acid (DTPA) and diaminocyclohexanetetraacetic acid (DCyTA) was examined under a constant potential gradient (1.23 V/cm). The results showed that the chelates were effective in desorbing metals at high pH, with metal-chelate anion complexes migrating towards the anode. At low pH, metals existing as dissolved cations migrated towards the cathode. In such conflicting directions, the metals accumulated in the middle of the cell. Speciation of the metals during the electrokinetic experiments was performed to provide an understanding of the distribution of the Cd, Ni and Zn. The results of sequential extraction analysis revealed that the forms of the metals could be altered from one fraction to another due to the variation of physico-chemical conditions throughout the cell, such as pH, redox potential and the chemistry of the electrolyte solution during the electrokinetic treatment. It was found that the binding forms of the metals were changed from types that are difficult to extract to more easily extractable types.

  19. Variable complexity online sequential extreme learning machine, with applications to streamflow prediction

    Science.gov (United States)

    Lima, Aranildo R.; Hsieh, William W.; Cannon, Alex J.

    2017-12-01

    In situations where new data arrive continually, online learning algorithms are computationally much less costly than batch learning ones in maintaining the model up-to-date. The extreme learning machine (ELM), a single hidden layer artificial neural network with random weights in the hidden layer, is solved by linear least squares, and has an online learning version, the online sequential ELM (OSELM). As more data become available during online learning, information on the longer time scale becomes available, so ideally the model complexity should be allowed to change, but the number of hidden nodes (HN) remains fixed in OSELM. A variable complexity VC-OSELM algorithm is proposed to dynamically add or remove HN in the OSELM, allowing the model complexity to vary automatically as online learning proceeds. The performance of VC-OSELM was compared with OSELM in daily streamflow predictions at two hydrological stations in British Columbia, Canada, with VC-OSELM significantly outperforming OSELM in mean absolute error, root mean squared error and Nash-Sutcliffe efficiency at both stations.

  20. Environmentally adaptive processing for shallow ocean applications: A sequential Bayesian approach.

    Science.gov (United States)

    Candy, J V

    2015-09-01

    The shallow ocean is a changing environment primarily due to temperature variations in its upper layers directly affecting sound propagation throughout. The need to develop processors capable of tracking these changes implies a stochastic as well as an environmentally adaptive design. Bayesian techniques have evolved to enable a class of processors capable of performing in such an uncertain, nonstationary (varying statistics), non-Gaussian, variable shallow ocean environment. A solution to this problem is addressed by developing a sequential Bayesian processor capable of providing a joint solution to the modal function tracking and environmental adaptivity problem. Here, the focus is on the development of both a particle filter and an unscented Kalman filter capable of providing reasonable performance for this problem. These processors are applied to hydrophone measurements obtained from a vertical array. The adaptivity problem is attacked by allowing the modal coefficients and/or wavenumbers to be jointly estimated from the noisy measurement data along with tracking of the modal functions while simultaneously enhancing the noisy pressure-field measurements.
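
    To make the sequential Bayesian idea concrete, a bootstrap (SIR) particle filter for tracking a latent state from noisy measurements can be written compactly; the scalar random-walk state-space model and noise levels below are illustrative assumptions standing in for the modal-coefficient/wavenumber state of the paper, and the unscented Kalman filter variant is not shown.

      import numpy as np

      def bootstrap_particle_filter(ys, n_particles=500, q=0.1, r=0.5, seed=1):
          """SIR filter for x_k = x_{k-1} + N(0, q^2), y_k = x_k + N(0, r^2)."""
          rng = np.random.default_rng(seed)
          particles = rng.normal(0.0, 1.0, n_particles)                 # prior draws
          estimates = []
          for y in ys:
              particles = particles + rng.normal(0.0, q, n_particles)   # propagate
              weights = np.exp(-0.5 * ((y - particles) / r) ** 2)       # likelihood
              weights /= weights.sum()
              estimates.append(np.sum(weights * particles))             # posterior mean
              idx = rng.choice(n_particles, n_particles, p=weights)     # resample
              particles = particles[idx]
          return np.array(estimates)

      # Example: track a slowly drifting quantity from noisy observations.
      truth = np.cumsum(np.full(100, 0.05))
      obs = truth + np.random.default_rng(2).normal(0.0, 0.5, 100)
      xhat = bootstrap_particle_filter(obs)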

  1. Design and Synthesis of a Novel Class of Flavonoid Derivatives via Sequential Phosphorylation and its Application for Greener Nanoparticle Synthesis

    Science.gov (United States)

    Osonga, Francis Juma

    Flavonoids exhibit arrays of biological effects that are beneficial to humans, including anti-viral, anti-oxidative, anti-inflammatory and anti-carcinogenic effects. However, these applications have been hindered by their poor stability and solubility in common solvents. Consequently, there is significant interest in the modification of flavonoids to improve their solubility. This poor solubility is also believed to be responsible for its permeability and bioavailability. Hence the central goal of this work is to design synthetic strategies for the sequential protection of the -OH groups in order to produce phosphorylated quercetin and apigenin derivatives. This work is divided into two parts: the first part presents the design, synthesis, and characterization of novel flavonoid derivatives via global and sequential phosphorylation. The second part focuses on the application of the synthesized derivatives for greener nanoparticle synthesis. This work shows for the first time that sequential phosphorylation of Quercetin is feasible through the design of 4 new derivatives namely: 5,4'-O-Quercetin Diphosphate (QDPI), 4'-O-phosphate Quercetin (4'-QPI), 5,4'-Quercetin Diphosphate (5,4'-QDP) and monophosphate 4-QP. The synthesis of 4'-QP and 5, 4'-QDP was successful with 85% and 60.5% yields respectively. In addition, the progress towards the total synthesis of apigenin phosphate derivatives (7, 4'-ADP and 7-AP) is presented. The synthesized derivatives were characterized using 1H, 13C, and 31P NMR. The phosphorylated derivatives were subsequently explored as reducing agents for sustainable synthesis of gold, silver and copper nanoparticles. We have successfully demonstrated the photochemical synthesis of gold nanoplates of sizes ranging from 10 - 200 nm using water soluble QDP in the presence of sunlight. This work contributes immensely in promoting the ideals of green nanosynthesis by (i) eliminating the use of organic solvents in the nanosynthesis, (ii) exploiting the

  2. Application of Forward Osmosis Membrane in a Sequential Batch Reactor for Water Reuse

    KAUST Repository

    Li, Qingyu

    2011-07-01

    Forward osmosis (FO) is a novel membrane process that potentially can be used as an energy-saving alternative to conventional membrane processes. The objective of this study is to investigate the performance of a FO membrane to draw water from wastewater using seawater as draw solution. A study on a novel osmotic sequential batch reactor (OsSBR) was explored. In this system, a plate and frame FO cell including two flat-sheet FO membranes was submerged in a bioreactor treating the wastewater. We found it feasible to treat the wastewater by the OsSBR process. The DOC removal rate was 98.55%. Total nitrogen removal was 62.4% with nitrate, nitrite and ammonium removals of 58.4%, 96.2% and 88.4% respectively. Phosphate removal was almost 100%. In this OsSBR system, the 15-hour average flux for a virgin membrane with air scouring is 3.103 LMH. After operation of 3 months, the average flux of a fouled membrane is 2.390 LMH with air scouring (23% flux decline). Air scouring can help to remove the loose foulants on the active layer, thus helping to maintain the flux. Cleaning of the FO membrane fouled in the active layer was probably not effective under the conditions of immersing the membrane in the bioreactor. LC-OCD results show that the FO membrane has a very good performance in rejecting biopolymers, humics and building blocks, but a limited ability in rejecting low molecular weight neutrals.

  3. Three-point method for measuring the geometric error components of linear and rotary axes based on sequential multilateration

    International Nuclear Information System (INIS)

    Zhang, Zhenjiu; Hu, Hong

    2013-01-01

    The linear and rotary axes are fundamental parts of multi-axis machine tools. The geometric error components of the axes must be measured for motion error compensation to improve the accuracy of the machine tools. In this paper, a simple method named the three point method is proposed to measure the geometric error of the linear and rotary axes of the machine tools using a laser tracker. A sequential multilateration method, where uncertainty is verified through simulation, is applied to measure the 3D coordinates. Three noncollinear points fixed on the stage of each axis are selected. The coordinates of these points are simultaneously measured using a laser tracker to obtain their volumetric errors by comparing these coordinates with ideal values. Numerous equations can be established using the geometric error models of each axis. The geometric error components can be obtained by solving these equations. The validity of the proposed method is verified through a series of experiments. The results indicate that the proposed method can measure the geometric error of the axes to compensate for the errors in multi-axis machine tools.
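
    A minimal sketch of the multilateration step behind the three point method: given distances from one target point to several known tracker stations, the 3D coordinates follow from a linearized least-squares solution. The station layout and noise-free distances below are assumptions made for illustration, not the experimental setup of the paper.

      import numpy as np

      def multilaterate(stations, dists):
          """Solve for a 3D point from >= 4 station positions and measured ranges
          by subtracting the first range equation from the others."""
          s0, d0 = stations[0], dists[0]
          A = 2.0 * (stations[1:] - s0)
          b = d0**2 - dists[1:]**2 + np.sum(stations[1:]**2, axis=1) - np.sum(s0**2)
          point, *_ = np.linalg.lstsq(A, b, rcond=None)
          return point

      stations = np.array([[0.0, 0.0, 0.0], [2.0, 0.0, 0.0],
                           [0.0, 2.0, 0.0], [0.0, 0.0, 2.0]])
      target = np.array([0.7, 0.4, 1.1])
      dists = np.linalg.norm(stations - target, axis=1)
      print(multilaterate(stations, dists))   # recovers the target coordinates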

  4. Presentation of a method for the sequential analysis of incidents - NPP safety

    Energy Technology Data Exchange (ETDEWEB)

    Delage, M; Giroux, C; Quentin, P

    1989-04-01

    This paper presents a method which is designed to assist in the analysis of safety and is based on the graphic representation of the occurrence of incidents significant for safety in 900-MWe PWR units. The graphs obtained are linked together to produce a general tree of events. With this tool, and on the basis of operating experience, it is then possible to imagine complex incident scenarios, to evaluate the potential consequences of a particular incident, or to seek out the causes which could lead to a given event. Interactions between systems or common mode faults can also be evidenced with this method.

  5. Sequential Spectrophotometric Method for the Simultaneous Determination of Amlodipine, Valsartan, and Hydrochlorothiazide in Coformulated Tablets

    Directory of Open Access Journals (Sweden)

    Hany W. Darwish

    2013-01-01

    Full Text Available A new, simple and specific spectrophotometric method was developed and validated in accordance with ICH guidelines for the simultaneous estimation of Amlodipine (AML), Valsartan (VAL), and Hydrochlorothiazide (HCT) in their ternary mixture. In this method three techniques were used, namely, direct spectrophotometry, ratio subtraction, and isoabsorptive point. Amlodipine (AML) was first determined by direct spectrophotometry and then ratio subtraction was applied to remove the AML spectrum from the mixture spectrum. Hydrochlorothiazide (HCT) could then be determined directly without interference from Valsartan (VAL), which could be determined using the isoabsorptive point theory. The calibration curve is linear over the concentration ranges of 4–32, 4–44 and 6–20 μg/mL for AML, VAL, and HCT, respectively. This method was tested by analyzing synthetic mixtures of the above drugs and was successfully applied to commercial pharmaceutical preparations of the drugs, where the standard deviation is <2 in the assay of raw materials and tablets. The method was validated according to the ICH guidelines and accuracy, precision, repeatability, and robustness were found to be within the acceptable limits.

  6. Analyzing Gender Differences in Black Faculty Marginalization through a Sequential Mixed-Methods Design

    Science.gov (United States)

    Griffin, Kimberly A.; Bennett, Jessica C.; Harris, Jessica

    2011-01-01

    In this article, the authors demonstrate how researchers can integrate qualitative and quantitative methods to gain a deeper understanding of the prevalence and nature of cultural taxation among black professors. In doing so, they show how the impact of cultural taxation on the experiences of black faculty in the academy is best captured using…

  7. Voltammetric behaviour at gold electrodes immersed in the BCR sequential extraction scheme media: Application of underpotential deposition-stripping voltammetry to determination of copper in soil extracts

    Energy Technology Data Exchange (ETDEWEB)

    Beni, Valerio; Newton, Hazel V.; Arrigan, Damien W.M.; Hill, Martin; Lane, William A.; Mathewson, Alan

    2004-01-30

    The development of mercury-free electroanalytical systems for in-field analysis of pollutants requires a foundation on the electrochemical behaviour of the chosen electrode material in the target sample matrices. In this work, the behaviour of gold working electrodes in the media employed in the BCR sequential extraction protocol, for the fractionation of metals in solid environmental matrices, is reported. All three of the BCR sequential extraction media are redox active, on the basis of acidity and oxygen content as well as the inherent reducing or oxidising nature of some of the reagents employed: 0.11 M acetic acid, 0.1 M hydroxylammonium chloride (adjusted to pH 2) and 1 M ammonium acetate (adjusted to pH 2) with added trace hydrogen peroxide. The available potential ranges together with the demonstrated detection of target metals in these media are presented. Stripping voltammetry of copper or lead in the BCR extract media solutions reveals a multi-peak behaviour due to the stripping of both bulk metal and underpotential metal deposits. A procedure based on underpotential deposition-stripping voltammetry (UPD-SV) was evaluated for application to determination of copper in 0.11 M acetic acid soil extracts. A preliminary screening step in which different deposition times are applied to the sample enables a deposition time commensurate with UPD-SV to be selected so that no bulk deposition or stripping occurs, thus simplifying the shape and features of the resulting voltammograms. Choice of a suitable deposition time is then followed by standard additions calibration. The method was validated by the analysis of a number of BCR 0.11 M acetic acid soil extracts. Good agreement was obtained between the UPD-SV method and atomic spectroscopic results.

  8. Voltammetric behaviour at gold electrodes immersed in the BCR sequential extraction scheme media Application of underpotential deposition-stripping voltammetry to determination of copper in soil extracts

    International Nuclear Information System (INIS)

    Beni, Valerio; Newton, Hazel V.; Arrigan, Damien W.M.; Hill, Martin; Lane, William A.; Mathewson, Alan

    2004-01-01

    The development of mercury-free electroanalytical systems for in-field analysis of pollutants requires a foundation on the electrochemical behaviour of the chosen electrode material in the target sample matrices. In this work, the behaviour of gold working electrodes in the media employed in the BCR sequential extraction protocol, for the fractionation of metals in solid environmental matrices, is reported. All three of the BCR sequential extraction media are redox active, on the basis of acidity and oxygen content as well as the inherent reducing or oxidising nature of some of the reagents employed: 0.11 M acetic acid, 0.1 M hydroxylammonium chloride (adjusted to pH 2) and 1 M ammonium acetate (adjusted to pH 2) with added trace hydrogen peroxide. The available potential ranges together with the demonstrated detection of target metals in these media are presented. Stripping voltammetry of copper or lead in the BCR extract media solutions reveals a multi-peak behaviour due to the stripping of both bulk metal and underpotential metal deposits. A procedure based on underpotential deposition-stripping voltammetry (UPD-SV) was evaluated for application to the determination of copper in 0.11 M acetic acid soil extracts. A preliminary screening step, in which different deposition times are applied to the sample, enables a deposition time commensurate with UPD-SV to be selected so that no bulk deposition or stripping occurs, thus simplifying the shape and features of the resulting voltammograms. Choice of a suitable deposition time is then followed by standard-addition calibration. The method was validated by the analysis of a number of BCR 0.11 M acetic acid soil extracts. Good agreement was obtained between the UPD-SV method and atomic spectroscopic results.

  9. Using qualitative repertory grid interviews to gather shared perspectives in a sequential mixed methods research design

    OpenAIRE

    Rojon, C; Saunders, M.N.K.; McDowall, Almuth

    2016-01-01

    In this chapter, we consider a specific example of applying mixed methods designs combining both qualitative and quantitative data collection and analysis approaches, giving particular attention to issues including reliability and validity. Human resource management (HRM) researchers, like others setting out to examine a novel or insufficiently defined research topic, frequently favour qualitative approaches to gather data during initial stages, to facilitate an in-depth exploration of indivi...

  10. Sequential application of non-pharmacological interventions reduces the severity of labour pain, delays use of pharmacological analgesia, and improves some obstetric outcomes: a randomised trial

    Directory of Open Access Journals (Sweden)

    Rubneide Barreto Silva Gallo

    2018-01-01

    Trial registration: NCT01389128. [Gallo RBS, Santana LS, Marcolin AC, Duarte G, Quintana SM (2018) Sequential application of non-pharmacological interventions reduces the severity of labour pain, delays use of pharmacological analgesia, and improves some obstetric outcomes: a randomised trial. Journal of Physiotherapy 64: 33–40]

  11. Variability of bronchial measurements obtained by sequential CT using two computer-based methods

    International Nuclear Information System (INIS)

    Brillet, Pierre-Yves; Fetita, Catalin I.; Mitrea, Mihai; Preteux, Francoise; Capderou, Andre; Dreuil, Serge; Simon, Jean-Marc; Grenier, Philippe A.

    2009-01-01

    This study aimed to evaluate the variability of lumen (LA) and wall area (WA) measurements obtained on two successive MDCT acquisitions using energy-driven contour estimation (EDCE) and full width at half maximum (FWHM) approaches. Both methods were applied to a database of segmental and subsegmental bronchi with LA > 4 mm², containing 42 bronchial segments of 10 successive slices that best matched on each acquisition. For both methods, the 95% confidence interval between repeated MDCT was between -1.59 and 1.5 mm² for LA, and -3.31 and 2.96 mm² for WA. The values of the coefficient of measurement variation (CV10, i.e., the percentage ratio of the standard deviation obtained from the 10 successive slices to their mean value) were strongly correlated between repeated MDCT data acquisitions (r > 0.72). EDCE yielded lower LA values for small-lumen bronchi and lower WA values for thin-walled bronchi; no systematic EDCE underestimation or overestimation was observed for thicker-walled bronchi. In conclusion, variability between CT examinations and assessment techniques may impair measurements. Therefore, new parameters such as CV10 need to be investigated to study bronchial remodeling. Finally, EDCE and FWHM are not interchangeable in longitudinal studies. (orig.)
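
    Since CV10 is defined in the abstract as the percentage ratio of the standard deviation over the 10 successive slices to their mean, it reduces to a one-line computation; the sketch below uses hypothetical lumen-area values, and the choice of the sample (ddof=1) standard deviation is an assumption.

```python
import numpy as np

def cv10(slice_values):
    """Coefficient of measurement variation over the 10 successive slices:
    100 * (sample standard deviation) / mean."""
    v = np.asarray(slice_values, dtype=float)
    return 100.0 * v.std(ddof=1) / v.mean()

# hypothetical lumen areas (mm^2) of one bronchial segment over 10 slices
print(cv10([8.1, 8.3, 7.9, 8.0, 8.4, 8.2, 7.8, 8.1, 8.0, 8.2]))
```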

  12. Iodine speciation in coastal and inland bathing waters and seaweeds extracts using a sequential injection standard addition flow-batch method.

    Science.gov (United States)

    Santos, Inês C; Mesquita, Raquel B R; Bordalo, Adriano A; Rangel, António O S S

    2015-02-01

    The present work describes the development of a sequential injection standard addition method for iodine speciation in bathing waters and seaweed extracts without prior sample treatment. Iodine speciation was obtained by assessing the iodide and iodate content, the two inorganic forms of iodine in waters. For the determination of iodide, an iodide ion-selective electrode (ISE) was used. The indirect determination of iodate was based on the spectrophotometric determination of nitrite (Griess reaction). For the iodate measurement, a mixing chamber was employed (flow-batch approach) to exploit its inherently efficient mixing, essential for the indirect determination of iodate. The application of the standard addition method enabled detection limits of 0.14 µM for iodide and 0.02 µM for iodate, together with the direct introduction of the target water samples, coastal and inland bathing waters. The results obtained were in agreement with those obtained by ICP-MS and a colorimetric reference procedure. Recovery tests also confirmed the accuracy of the developed method, which was effectively applied to bathing waters and seaweed extracts. Copyright © 2014 Elsevier B.V. All rights reserved.
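
    The standard-addition step can be illustrated with a short least-squares extrapolation; the sketch below is generic (not the authors' flow code), assumes a signal that is linear in concentration (appropriate for the spectrophotometric iodate channel; the ISE channel would need the Nernstian form instead), and uses made-up addition levels and absorbances.

```python
import numpy as np

def standard_addition(added_conc, signal):
    """Classical linear standard addition: fit signal = m*added + b and
    extrapolate to zero signal, so the sample concentration is b/m
    (in the units of added_conc)."""
    m, b = np.polyfit(np.asarray(added_conc, float), np.asarray(signal, float), 1)
    return b / m

# hypothetical iodate additions (uM) and the corresponding Griess absorbances
print(standard_addition([0.0, 0.1, 0.2, 0.4], [0.052, 0.101, 0.149, 0.247]))
```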

  13. A Vector Flow Imaging Method for Portable Ultrasound Using Synthetic Aperture Sequential Beamforming

    DEFF Research Database (Denmark)

    di Ianni, Tommaso; Villagómez Hoyos, Carlos Armando; Ewertsen, Caroline

    2017-01-01

    for the velocity estimation along the lateral and axial directions using a phase-shift estimator. The performance of the method was investigated with constant flow measurements in a flow rig system using the SARUS scanner and a 4.1-MHz linear array. A sequence was designed with interleaved B-mode and flow......, and the standard deviation (SD) was between 6% and 9.6%. The axial bias was lower than 1% with an SD around 2%. The mean estimated angles were 66.70° ± 2.86°, 72.65° ± 2.48°, and 89.13° ± 0.79° for the three cases. A proof-of-concept demonstration of the real-time processing and wireless transmission was tested...

  14. Sequential Banking.

    OpenAIRE

    Bizer, David S; DeMarzo, Peter M

    1992-01-01

    The authors study environments in which agents may borrow sequentially from more than one lender. Although debt is prioritized, additional lending imposes an externality on prior debt because, with moral hazard, the probability of repayment of prior loans decreases. Equilibrium interest rates are higher than they would be if borrowers could commit to borrow from at most one bank. Even though the loan terms are less favorable than they would be under commitment, the indebtedness of borrowers i...

  15. Combinatorial methods with computer applications

    CERN Document Server

    Gross, Jonathan L

    2007-01-01

    Combinatorial Methods with Computer Applications provides in-depth coverage of recurrences, generating functions, partitions, and permutations, along with some of the most interesting graph and network topics, design constructions, and finite geometries. Requiring only a foundation in discrete mathematics, it can serve as the textbook in a combinatorial methods course or in a combined graph theory and combinatorics course.After an introduction to combinatorics, the book explores six systematic approaches within a comprehensive framework: sequences, solving recurrences, evaluating summation exp

  16. Sequential C-Si Bond Formations from Diphenylsilane: Application to Silanediol Peptide Isostere Precursors

    DEFF Research Database (Denmark)

    Nielsen, Lone; Skrydstrup, Troels

    2008-01-01

    and the first new carbon-silicon bond. The next step is the reduction of this hydridosilane with lithium metal providing a silyl lithium reagent, which undergoes a highly diastereoselective addition to an optically active tert-butanesulfinimine, thus generating the second C-Si bond. This method allows...

  17. Transformer inrush current reduction through sequential energization for wind farm applications

    Energy Technology Data Exchange (ETDEWEB)

    Abdulsalam, S.; Xu, W. [Alberta Univ., Edmonton, AB (Canada)

    2008-07-01

    Wind power is considered one of the fastest-growing technologies in the power industry. The electrical configuration of a wind farm consists of long spans of medium voltage collector feeders. Each wind generator is connected to the collector circuit/feeder through either a pad-mounted oil-filled or a nacelle-mounted dry-type transformer. All collector feeders connect to a single collector substation where the connection to the high-voltage transmission system is established through a step-up transformer. With a large number of wind generators per feeder, a large inrush current will flow during simultaneous transformer energization, which can cause a high voltage sag at the point of common coupling (PCC). Wind farms are generally located in unpopulated remote areas where access to a strong network connection is not feasible, and it is common to have the PCC at a relatively weak location on the sub-transmission/distribution network. In order to meet interconnection standards requirements, the amount of voltage sag due to the energization of a number of transformers needs to be evaluated. This paper presented an effective solution to the mitigation of inrush currents and associated voltage sag for wind farm applications. The paper presented a diagram of a typical configuration of a wind farm electrical distribution system and described the analytical methodologies for the evaluation of inrush current levels, together with simulation results. Simplified analysis and sizing criteria for the associated neutral resistor were presented. It was concluded that the scheme could significantly reduce inrush current levels when a large number of transformers are simultaneously energized. The presented application eliminates the need to sectionalize feeders, thereby simplifying the energization process. 6 refs., 5 figs.

  18. Non-Pilot-Aided Sequential Monte Carlo Method to Joint Signal, Phase Noise, and Frequency Offset Estimation in Multicarrier Systems

    Directory of Open Access Journals (Sweden)

    Christelle Garnier

    2008-05-01

    Full Text Available We address the problem of phase noise (PHN and carrier frequency offset (CFO mitigation in multicarrier receivers. In multicarrier systems, phase distortions cause two effects: the common phase error (CPE and the intercarrier interference (ICI which severely degrade the accuracy of the symbol detection stage. Here, we propose a non-pilot-aided scheme to jointly estimate PHN, CFO, and multicarrier signal in time domain. Unlike existing methods, non-pilot-based estimation is performed without any decision-directed scheme. Our approach to the problem is based on Bayesian estimation using sequential Monte Carlo filtering commonly referred to as particle filtering. The particle filter is efficiently implemented by combining the principles of the Rao-Blackwellization technique and an approximate optimal importance function for phase distortion sampling. Moreover, in order to fully benefit from time-domain processing, we propose a multicarrier signal model which includes the redundancy information induced by the cyclic prefix, thus leading to a significant performance improvement. Simulation results are provided in terms of bit error rate (BER and mean square error (MSE to illustrate the efficiency and the robustness of the proposed algorithm.
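
    A minimal bootstrap particle filter conveys the sequential Monte Carlo idea used here, under strong simplifications: the paper jointly estimates the multicarrier signal, PHN and CFO without pilots using Rao-Blackwellization and an approximate optimal importance function, whereas the sketch below only tracks a Wiener phase-noise process with the transmitted symbols assumed known; all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def pf_phase_track(y, s, n_particles=500, sigma_phi=0.05, sigma_n=0.1):
    """Bootstrap particle filter tracking a Wiener phase-noise process:
    theta_k = theta_{k-1} + w_k,   y_k = s_k * exp(j*theta_k) + noise.
    Returns the filtered phase estimate at each sample."""
    theta = rng.uniform(-np.pi, np.pi, n_particles)                   # initial particles
    est = np.empty(len(y))
    for k, (yk, sk) in enumerate(zip(y, s)):
        theta = theta + sigma_phi * rng.standard_normal(n_particles)  # propagate
        resid = np.abs(yk - sk * np.exp(1j * theta)) ** 2             # measurement residual
        w = np.exp(-resid / (2 * sigma_n ** 2)) + 1e-300              # Gaussian likelihood
        w /= w.sum()
        est[k] = np.angle(np.sum(w * np.exp(1j * theta)))             # weighted circular mean
        theta = theta[rng.choice(n_particles, n_particles, p=w)]      # resample
    return est
```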

  19. The facilitators and barriers to nurses' participation in continuing education programs: a mixed method explanatory sequential study.

    Science.gov (United States)

    Shahhosseini, Zohreh; Hamzehgardeshi, Zeinab

    2014-11-30

    Since several factors affect nurses' participation in Continuing Education, and that nurses' Continuing Education affects patients' and community health status, it is essential to know facilitators and barriers of participation in Continuing Education programs and plan accordingly. This mixed approach study aimed to investigate the facilitators and barriers of nurses' participation, to explore nurses' perception of the most common facilitators and barriers. An explanatory sequential mixed methods design with follow up explanations variant were used, and it involved collecting quantitative data (361 nurses) first and then explaining the quantitative results with in-depth interviews during a qualitative study. The results showed that the mean score of facilitators to nurses' participation in Continuing Education was significantly higher than the mean score of barriers (61.99 ± 10.85 versus 51.17 ± 12.83; pEducation was related to "Update my knowledge". By reviewing the handwritings in qualitative phase, two main levels of updating information and professional skills were extracted as the most common facilitators and lack of support as the most common barrier to nurses' participation in continuing education program. According to important role Continuing Education on professional skills, nurse managers should facilitate the nurse' participation in the Continues Education.

  20. Application of sequential and orthogonalised-partial least squares (SO-PLS) regression to predict sensory properties of Cabernet Sauvignon wines from grape chemical composition.

    Science.gov (United States)

    Niimi, Jun; Tomic, Oliver; Næs, Tormod; Jeffery, David W; Bastian, Susan E P; Boss, Paul K

    2018-08-01

    The current study determined the applicability of sequential and orthogonalised-partial least squares (SO-PLS) regression to relate Cabernet Sauvignon grape chemical composition to the sensory perception of the corresponding wines. Grape samples (n = 25) were harvested at a similar maturity and vinified identically in 2013. Twelve measures using various (bio)chemical methods were made on grapes. Wines were evaluated using descriptive analysis with a trained panel (n = 10) for sensory profiling. Data were analysed globally using SO-PLS for the entire sensory profiles (SO-PLS2), as well as for single sensory attributes (SO-PLS1). SO-PLS1 models gave higher validated explained variances than SO-PLS2. SO-PLS provided a structured approach to selecting the predictor chemical data sets that contributed most to the correlation with important sensory attributes. This new approach presents great potential for application in other explorative metabolomics studies of food and beverages to address factors such as quality and regional influences. Copyright © 2018 Elsevier Ltd. All rights reserved.
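
    A minimal two-block SO-PLS sketch is shown below, assuming scikit-learn is available: Y is regressed on the first block by PLS, the second block is orthogonalised against the first-block scores, and the Y residuals are regressed on the orthogonalised block. The component numbers and the orthogonalisation via training scores are assumptions; the published method also selects component numbers by cross-validation and can chain further blocks.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def so_pls_fit(X1, X2, Y, n_comp1=3, n_comp2=3):
    """Two-block SO-PLS: regress Y on X1 by PLS, orthogonalise X2 against
    the X1 scores, then regress the Y residuals on the orthogonalised X2."""
    Y = np.asarray(Y, float).reshape(len(X1), -1)
    pls1 = PLSRegression(n_components=n_comp1).fit(X1, Y)
    T1 = pls1.transform(X1)                        # scores of the first block
    x2_mean = X2.mean(axis=0)
    B12 = np.linalg.pinv(T1) @ (X2 - x2_mean)      # T1 -> X2 regression
    X2_orth = (X2 - x2_mean) - T1 @ B12            # X2 with X1 information removed
    pls2 = PLSRegression(n_components=n_comp2).fit(X2_orth, Y - pls1.predict(X1))
    return {"pls1": pls1, "pls2": pls2, "B12": B12, "x2_mean": x2_mean}

def so_pls_predict(model, X1new, X2new):
    t1 = model["pls1"].transform(X1new)
    x2_orth = (X2new - model["x2_mean"]) - t1 @ model["B12"]
    return model["pls1"].predict(X1new) + model["pls2"].predict(x2_orth)
```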

  1. Sequential assimilation of volcanic monitoring data to quantify eruption potential: Application to Kerinci volcano

    Science.gov (United States)

    Zhan, Yan; Gregg, Patricia M.; Chaussard, Estelle; Aoki, Yosuke

    2017-12-01

    Quantifying the eruption potential of a restless volcano requires the ability to model parameters such as overpressure and calculate the host rock stress state as the system evolves. A critical challenge is developing a model-data fusion framework to take advantage of observational data and provide updates of the volcanic system through time. The Ensemble Kalman Filter (EnKF) uses a Monte Carlo approach to assimilate volcanic monitoring data and update models of volcanic unrest, providing time-varying estimates of overpressure and stress. Although the EnKF has been proven effective for forecasting volcanic deformation using synthetic InSAR and GPS data, until now it has not been applied to assimilate data from an active volcanic system. In this investigation, the EnKF is used to provide a “hindcast” of the 2009 explosive eruption of Kerinci volcano, Indonesia. A two-source analytical model is used to simulate the surface deformation of Kerinci volcano observed by InSAR time-series data and to predict the system evolution. A deep, deflating dike-like source reproduces the subsiding signal on the flanks of the volcano, and a shallow spherical McTigue source reproduces the central uplift. EnKF-predicted parameters are used in finite element models to calculate the host-rock stress state prior to the 2009 eruption. Mohr-Coulomb failure models reveal that the shallow magma reservoir is trending towards tensile failure prior to 2009, which may be the catalyst for the 2009 eruption. Our results illustrate that the EnKF shows significant promise for future applications to forecasting the eruption potential of restless volcanoes and hindcasting the triggering mechanisms of observed eruptions.
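
    The analysis step of a stochastic EnKF can be written compactly; the sketch below is a generic ensemble update (not the authors' code), where obs_operator stands for a forward model such as the dike-plus-McTigue source mapping parameter vectors to InSAR line-of-sight displacements, and all inputs are hypothetical.

```python
import numpy as np

def enkf_update(ensemble, obs, obs_operator, obs_cov, rng=np.random.default_rng(1)):
    """Stochastic EnKF analysis step: update an ensemble of source-parameter
    vectors with perturbed observations (e.g. InSAR displacements)."""
    X = np.asarray(ensemble, float)                  # shape (n_ens, n_params)
    HX = np.array([obs_operator(x) for x in X])      # predicted observations
    A, HA = X - X.mean(0), HX - HX.mean(0)
    n = X.shape[0]
    P_xy = A.T @ HA / (n - 1)                        # parameter-observation covariance
    P_yy = HA.T @ HA / (n - 1) + obs_cov             # innovation covariance
    K = P_xy @ np.linalg.inv(P_yy)                   # Kalman gain
    obs_pert = rng.multivariate_normal(obs, obs_cov, size=n)  # perturbed observations
    return X + (obs_pert - HX) @ K.T                 # analysis ensemble
```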

  2. Sequential Assimilation of Volcanic Monitoring Data to Quantify Eruption Potential: Application to Kerinci Volcano, Sumatra

    Directory of Open Access Journals (Sweden)

    Yan Zhan

    2017-12-01

    Full Text Available Quantifying the eruption potential of a restless volcano requires the ability to model parameters such as overpressure and calculate the host rock stress state as the system evolves. A critical challenge is developing a model-data fusion framework to take advantage of observational data and provide updates of the volcanic system through time. The Ensemble Kalman Filter (EnKF) uses a Monte Carlo approach to assimilate volcanic monitoring data and update models of volcanic unrest, providing time-varying estimates of overpressure and stress. Although the EnKF has been proven effective for forecasting volcanic deformation using synthetic InSAR and GPS data, until now it has not been applied to assimilate data from an active volcanic system. In this investigation, the EnKF is used to provide a “hindcast” of the 2009 explosive eruption of Kerinci volcano, Indonesia. A two-source analytical model is used to simulate the surface deformation of Kerinci volcano observed by InSAR time-series data and to predict the system evolution. A deep, deflating dike-like source reproduces the subsiding signal on the flanks of the volcano, and a shallow spherical McTigue source reproduces the central uplift. EnKF-predicted parameters are used in finite element models to calculate the host-rock stress state prior to the 2009 eruption. Mohr-Coulomb failure models reveal that the host rock around the shallow magma reservoir is trending toward tensile failure prior to 2009, which may be the catalyst for the 2009 eruption. Our results illustrate that the EnKF shows significant promise for future applications to forecasting the eruption potential of restless volcanoes and hindcasting the triggering mechanisms of observed eruptions.

  3. Informetrics theory, methods and applications

    CERN Document Server

    Qiu, Junping; Yang, Siluo; Dong, Ke

    2017-01-01

    This book provides an accessible introduction to the history, theory and techniques of informetrics. Divided into 14 chapters, it develops the content of informetrics in terms of theory, methods and applications; systematically analyzes the six basic laws and the theoretical basis of informetrics; and presents quantitative analysis methods such as citation analysis and computer-aided analysis. It also discusses applications in information resource management, information and library science, science of science, scientific evaluation and forecasting. Lastly, it describes a new development in informetrics: webometrics. Providing a comprehensive overview of the complex issues in today's environment, this book is a valuable resource for all researchers, students and practitioners in library and information science.

  4. Exploring selection and recruitment processes for newly qualified nurses: a sequential-explanatory mixed-method study.

    Science.gov (United States)

    Newton, Paul; Chandler, Val; Morris-Thomson, Trish; Sayer, Jane; Burke, Linda

    2015-01-01

    To map current selection and recruitment processes for newly qualified nurses and to explore the advantages and limitations of current selection and recruitment processes. The need to improve current selection and recruitment practices for newly qualified nurses is highlighted in health policy internationally. A cross-sectional, sequential-explanatory mixed-method design with four components was used: (1) a literature review of selection and recruitment of newly qualified nurses; (2) a literature review of a public sector profession's selection and recruitment processes; (3) a survey mapping existing selection and recruitment processes for newly qualified nurses; and (4) a qualitative study of recruiters' selection and recruitment processes. Literature searches on the selection and recruitment of newly qualified candidates in teaching and nursing (2005-2013) were conducted. Cross-sectional, mixed-method data were collected, using a survey instrument, from thirty-one (n = 31) individuals in health providers in London who had responsibility for the selection and recruitment of newly qualified nurses. Of the providers who took part, six (n = 6) were purposively selected to be interviewed qualitatively. Issues of supply and demand in the workforce, rather than selection and recruitment tools, predominated in the literature reviews. Examples of tools to measure values, attitudes and skills were found in the nursing literature. The mapping exercise found that providers used many selection and recruitment tools; some providers combined tools to streamline the process and assure the quality of candidates. Most providers had processes which addressed the issue of quality in the selection and recruitment of newly qualified nurses. The 'assessment centre model', which providers were adopting, allowed for multiple levels of assessment and streamlined recruitment. There is a need to validate the efficacy of the selection tools. © 2014 John Wiley & Sons Ltd.

  5. Multiple and sequential data acquisition method: an improved method for fragmentation and detection of cross-linked peptides on a hybrid linear trap quadrupole Orbitrap Velos mass spectrometer.

    Science.gov (United States)

    Rudashevskaya, Elena L; Breitwieser, Florian P; Huber, Marie L; Colinge, Jacques; Müller, André C; Bennett, Keiryn L

    2013-02-05

    The identification and validation of cross-linked peptides by mass spectrometry remains a daunting challenge for protein-protein cross-linking approaches when investigating protein interactions. This includes the fragmentation of cross-linked peptides in the mass spectrometer per se and, following database searching, the matching of the molecular masses of the fragment ions to the correct cross-linked peptides. The hybrid linear trap quadrupole (LTQ) Orbitrap Velos combines the speed of the tandem mass spectrometry (MS/MS) duty cycle with high mass accuracy, and these features were utilized in the current study to substantially improve the confidence in the identification of cross-linked peptides. An MS/MS method termed multiple and sequential data acquisition method (MSDAM) was developed. Preliminary optimization of the MS/MS settings was performed with a synthetic peptide (TP1) cross-linked with bis[sulfosuccinimidyl] suberate (BS(3)). On the basis of these results, MSDAM was created and assessed on the BS(3)-cross-linked bovine serum albumin (BSA) homodimer. MSDAM applies a series of multiple sequential fragmentation events with a range of different normalized collision energies (NCE) to the same precursor ion. The combination of a series of NCE enabled a considerable improvement in the quality of the fragmentation spectra for cross-linked peptides, and ultimately aided in the identification of the sequences of the cross-linked peptides. Concurrently, MSDAM provides confirmatory evidence from the formation of reporter ion fragments, which reduces the false positive rate of incorrectly assigned cross-linked peptides.

  6. Sequential Optimization Methods for Augmentation of Marine Enzymes Production in Solid-State Fermentation: l-Glutaminase Production a Case Study.

    Science.gov (United States)

    Sathish, T; Uppuluri, K B; Veera Bramha Chari, P; Kezia, D

    There is an increasing worldwide market for l-glutaminase due to its relevant industrial applications. Salt-tolerant l-glutaminases play a vital role in enhancing the flavor of different types of foods such as soya sauce and tofu. This chapter presents economically viable l-glutaminase production in solid-state fermentation (SSF) by Aspergillus flavus MTCC 9972 as a case study. The enzyme production was improved following a three-step optimization process. Initially, a mixture design (MD) (augmented simplex lattice design) was employed to optimize the solid substrate mixture; a 59:41 mixture of wheat bran and Bengal gram husk gave higher amounts of l-glutaminase. Glucose and l-glutamine were screened as the best additional carbon and nitrogen sources for l-glutaminase production with the help of a Plackett-Burman design (PBD). l-Glutamine also acted as a nitrogen source as well as an inducer for the secretion of l-glutaminase from A. flavus MTCC 9972. In the final step of optimization, various environmental and nutritive parameters such as pH, temperature, moisture content, inoculum concentration, and glucose and l-glutamine levels were optimized through the use of hybrid feed-forward neural networks (FFNNs) and a genetic algorithm (GA). Through the sequential optimization methods MD-PBD-FFNN-GA, l-glutaminase production in SSF could be improved 2.7-fold (453 to 1690 U/g). © 2016 Elsevier Inc. All rights reserved.

  7. The sequentially discounting autoregressive (SDAR) method for on-line automatic seismic event detecting on long term observation

    Science.gov (United States)

    Wang, L.; Toshioka, T.; Nakajima, T.; Narita, A.; Xue, Z.

    2017-12-01

    In recent years, more and more Carbon Capture and Storage (CCS) studies focus on seismicity monitoring. For the safety management of geological CO2 storage at Tomakomai, Hokkaido, Japan, an Advanced Traffic Light System (ATLS) combining different seismic messages (magnitudes, phases, distributions, etc.) is proposed for injection control. The primary task for ATLS is seismic event detection in a long-term sustained time-series record. The time-varying signal-to-noise ratio (SNR) of a long-term record and the uneven energy distribution of seismic event waveforms increase the difficulty of automatic seismic detection; in this work, an improved probabilistic autoregressive (AR) method for automatic seismic event detection is applied. This algorithm, called sequentially discounting AR learning (SDAR), can identify effective seismic events in the time series through change point detection (CPD) on the seismic record. In this method, an anomalous signal (seismic event) can be treated as a change point in the time series (seismic record): the statistical model of the signal in the neighborhood of the event point changes because of the seismic event occurrence. This means the SDAR aims to find the statistical irregularities of the record through CPD. There are three advantages of SDAR. 1. Anti-noise ability: the SDAR does not use waveform attributes (such as amplitude, energy, polarization) for signal detection, so it is an appropriate technique for low-SNR data. 2. Real-time estimation: when new data appear in the record, the probability distribution models can be automatically updated by SDAR for on-line processing. 3. Discounting property: the SDAR introduces a discounting parameter to decrease the influence of present statistics on future data, which makes SDAR a robust algorithm for non-stationary signal processing. With these three advantages, the SDAR method can handle the non-stationary time-varying long
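
    The sequentially discounting AR idea can be sketched with a first-order model: the AR statistics are updated with an exponential forgetting factor, and each new sample is scored by its negative log-likelihood so that change points (candidate events) stand out. This is a simplified single-order illustration, not the authors' implementation, and the discounting factor is a hypothetical value.

```python
import numpy as np

def sdar_scores(x, r=0.02):
    """Score each sample of a record under a sequentially discounted AR(1)
    model (discounting factor r), so old data are gradually forgotten.
    High scores flag candidate change points (seismic events)."""
    x = np.asarray(x, float)
    mu, c0, c1, var = 0.0, 1.0, 0.0, 1.0
    scores = np.zeros_like(x)
    for t in range(1, len(x)):
        mu = (1 - r) * mu + r * x[t]                            # discounted mean
        c0 = (1 - r) * c0 + r * (x[t - 1] - mu) ** 2            # discounted lag-0 moment
        c1 = (1 - r) * c1 + r * (x[t - 1] - mu) * (x[t] - mu)   # discounted lag-1 moment
        a = c1 / (c0 + 1e-12)                                   # AR(1) coefficient
        err = x[t] - (mu + a * (x[t - 1] - mu))                 # one-step prediction error
        var = (1 - r) * var + r * err ** 2                      # discounted residual variance
        scores[t] = 0.5 * (np.log(2 * np.pi * var) + err ** 2 / var)
    return scores
```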

  8. Sequentially-crosslinked bioactive hydrogels as nano-patterned substrates with customizable stiffness and degradation for corneal tissue engineering applications.

    Science.gov (United States)

    Rizwan, Muhammad; Peh, Gary S L; Ang, Heng-Pei; Lwin, Nyein Chan; Adnan, Khadijah; Mehta, Jodhbir S; Tan, Wui Siew; Yim, Evelyn K F

    2017-03-01

    Naturally bioactive hydrogels like gelatin provide favorable properties for tissue engineering but lack sufficient mechanical strength for use as implantable tissue-engineering substrates. Complex fabrication or multi-component additives can improve material strength, but often compromise other properties. Studies have shown gelatin methacrylate (GelMA) to be a bioactive hydrogel with diverse tissue growth applications. We hypothesize that, with suitable material modifications, GelMA could be employed for growth and implantation of tissue-engineered human corneal endothelial cell (HCEC) monolayers. Tissue-engineered HCEC monolayers could potentially be used to treat corneal blindness due to corneal endothelium dysfunction. Here, we exploited a sequential hybrid (physical followed by UV) crosslinking to create an improved material, named GelMA+, with an over 8-fold increase in mechanical strength compared to regular GelMA. The presence of physical associations increased the subsequent UV-crosslinking efficiency, resulting in robust materials able to withstand loading onto a standard endothelium insertion surgical device. Favorable biodegradation kinetics were also measured in vitro and in vivo. We achieved hydrogel patterning with nano-scale resolution by use of oxygen-impermeable stamps that overcome the limitations of PDMS-based molding processes. Primary HCEC monolayers grown on GelMA+ carriers patterned with pillars of optimal dimensions demonstrated improved zonula occludens-1 expression, higher cell density and cell-size homogeneity, which are indications of functionally superior transplantable monolayers. The hybrid crosslinking and fabrication approach offers potential utility for the development of implantable tissue-engineered cell-carrier constructs with enhanced bio-functional properties. Copyright © 2017 Elsevier Ltd. All rights reserved.

  9. Small Molecule Sequential Dual-Targeting Theragnostic Strategy (SMSDTTS): from Preclinical Experiments towards Possible Clinical Anticancer Applications.

    Science.gov (United States)

    Li, Junjie; Oyen, Raymond; Verbruggen, Alfons; Ni, Yicheng

    2013-01-01

    Hitting the evasive tumor cells proves challenging in targeted cancer therapies. A general and unconventional anticancer approach namely small molecule sequential dual-targeting theragnostic strategy (SMSDTTS) has recently been introduced with the aims to target and debulk the tumor mass, wipe out the residual tumor cells, and meanwhile enable cancer detectability. This dual targeting approach works in two steps for systemic delivery of two naturally derived drugs. First, an anti-tubulin vascular disrupting agent, e.g., combretastatin A4 phosphate (CA4P), is injected to selectively cut off tumor blood supply and to cause massive necrosis, which nevertheless always leaves peripheral tumor residues. Secondly, a necrosis-avid radiopharmaceutical, namely (131)I-hypericin ((131)I-Hyp), is administered the next day, which accumulates in intratumoral necrosis and irradiates the residual cancer cells with beta particles. Theoretically, this complementary targeted approach may biologically and radioactively ablate solid tumors and reduce the risk of local recurrence, remote metastases, and thus cancer mortality. Meanwhile, the emitted gamma rays facilitate radio-scintigraphy to detect tumors and follow up the therapy, hence a simultaneous theragnostic approach. SMSDTTS has now shown promise from multicenter animal experiments and may demonstrate unique anticancer efficacy in upcoming preliminary clinical trials. In this short review article, information about the two involved agents, the rationale of SMSDTTS, its preclinical antitumor efficacy, multifocal targetability, simultaneous theragnostic property, and toxicities of the dose regimens are summarized. Meanwhile, possible drawbacks, practical challenges and future improvement with SMSDTTS are discussed, which hopefully may help to push forward this strategy from preclinical experiments towards possible clinical applications.

  10. Biometrics Theory, Methods, and Applications

    CERN Document Server

    Boulgouris, N V; Micheli-Tzanakou, Evangelia

    2009-01-01

    An in-depth examination of the cutting edge of biometrics. This book fills a gap in the literature by detailing the recent advances and emerging theories, methods, and applications of biometric systems in a variety of infrastructures. Edited by a panel of experts, it provides comprehensive coverage of: multilinear discriminant analysis for biometric signal recognition; biometric identity authentication techniques based on neural networks; multimodal biometrics and design of classifiers for biometric fusion; feature selection and facial aging modeling for face recognition; geometrical and

  11. Distance sampling methods and applications

    CERN Document Server

    Buckland, S T; Marques, T A; Oedekoven, C S

    2015-01-01

    In this book, the authors cover the basic methods and advances within distance sampling that are most valuable to practitioners and in ecology more broadly. This is the fourth book dedicated to distance sampling. In the decade since the last book published, there have been a number of new developments. The intervening years have also shown which advances are of most use. This self-contained book covers topics from the previous publications, while also including recent developments in method, software and application. Distance sampling refers to a suite of methods, including line and point transect sampling, in which animal density or abundance is estimated from a sample of distances to detected individuals. The book illustrates these methods through case studies; data sets and computer code are supplied to readers through the book’s accompanying website.  Some of the case studies use the software Distance, while others use R code. The book is in three parts.  The first part addresses basic methods, the ...

  12. Exploring Liquid Sequential Injection Chromatography to Teach Fundamentals of Separation Methods: A Very Fast Analytical Chemistry Experiment

    Science.gov (United States)

    Penteado, Jose C.; Masini, Jorge Cesar

    2011-01-01

    Influence of the solvent strength determined by the addition of a mobile-phase organic modifier and pH on chromatographic separation of sorbic acid and vanillin has been investigated by the relatively new technique, liquid sequential injection chromatography (SIC). This technique uses reversed-phase monolithic stationary phase to execute fast…

  13. A novel approach to severe acute pancreatitis in sequential liver-kidney transplantation: the first report on the application of VAC therapy.

    Science.gov (United States)

    Zanus, Giacomo; Boetto, Riccardo; D'Amico, Francesco; Gringeri, Enrico; Vitale, Alessandro; Carraro, Amedeo; Bassi, Domenico; Scopelliti, Michele; Bonsignore, Pasquale; Burra, Patrizia; Angeli, Paolo; Feltracco, Paolo; Cillo, Umberto

    2011-03-01

    This work is the first report of vacuum-assisted closure (VAC) therapy applied as a life-saving surgical treatment for severe acute pancreatitis occurring in a sequential liver- and kidney-transplanted patient who had percutaneous biliary drainage for obstructive "late-onset" jaundice. Surgical exploration with necrosectomy and sequential laparotomies was performed because of increasing intra-abdominal pressure with hemodynamic instability and intra-abdominal multidrug-resistant sepsis, with increasingly difficult abdominal closure. Repeated laparotomies with VAC therapy (applying a continuous negative abdominal pressure) enabled a progressive, successful abdominal decompression, with the clearance of infection and definitive abdominal wound closure. The application of a negative pressure is a novel approach to severe abdominal sepsis and laparostomy management with a view to preventing compartment syndrome and fatal sepsis, and it can lead to complete abdominal wound closure. © 2010 The Authors. Transplant International © 2010 European Society for Organ Transplantation.

  14. Sequential Events in the Irreversible Thermal Denaturation of Human Brain-Type Creatine Kinase by Spectroscopic Methods

    Directory of Open Access Journals (Sweden)

    Yan-Song Gao

    2010-06-01

    Full Text Available The non-cooperative or sequential events which occur during protein thermal denaturation are closely correlated with protein folding, stability, and physiological functions. In this research, the sequential events of human brain-type creatine kinase (hBBCK) thermal denaturation were studied by differential scanning calorimetry (DSC), CD, and intrinsic fluorescence spectroscopy. DSC experiments revealed that the thermal denaturation of hBBCK was calorimetrically irreversible. The existence of several endothermic peaks suggested that the denaturation involved stepwise conformational changes, which were further verified by the discrepancy in the transition curves obtained from various spectroscopic probes. During heating, the disruption of the active site structure occurred prior to the secondary and tertiary structural changes. The thermal unfolding and aggregation of hBBCK was found to occur through sequential events. This is quite different from that of muscle-type CK (MMCK). The results herein suggest that BBCK and MMCK undergo quite dissimilar thermal unfolding pathways, although they are highly conserved in the primary and tertiary structures. A minor difference in structure might endow the isoenzymes with dissimilar local structural stabilities, which further contribute to isoenzyme-specific thermal stabilities.

  15. Sequential application of ligand and structure based modeling approaches to index chemicals for their hH4R antagonism.

    Directory of Open Access Journals (Sweden)

    Matteo Pappalardo

    Full Text Available The human histamine H4 receptor (hH4R), a member of the G-protein coupled receptors (GPCR) family, is an increasingly attractive drug target. It plays a key role in many cell pathways and many hH4R ligands are studied for the treatment of several inflammatory, allergic and autoimmune disorders, as well as for analgesic activity. Due to the challenging difficulties in the experimental elucidation of hH4R structure, virtual screening campaigns are normally run on homology based models. However, a wealth of information about the chemical properties of GPCR ligands has also accumulated over the last few years and an appropriate combination of these ligand-based knowledge with structure-based molecular modeling studies emerges as a promising strategy for computer-assisted drug design. Here, two chemoinformatics techniques, the Intelligent Learning Engine (ILE) and Iterative Stochastic Elimination (ISE) approach, were used to index chemicals for their hH4R bioactivity. An application of the prediction model on external test set composed of more than 160 hH4R antagonists picked from the chEMBL database gave enrichment factor of 16.4. A virtual high throughput screening on ZINC database was carried out, picking ∼ 4000 chemicals highly indexed as H4R antagonists' candidates. Next, a series of 3D models of hH4R were generated by molecular modeling and molecular dynamics simulations performed in fully atomistic lipid membranes. The efficacy of the hH4R 3D models in discrimination between actives and non-actives were checked and the 3D model with the best performance was chosen for further docking studies performed on the focused library. The output of these docking studies was a consensus library of 11 highly active scored drug candidates. Our findings suggest that a sequential combination of ligand-based chemoinformatics approaches with structure-based ones has the potential to improve the success rate in discovering new biologically active GPCR drugs and

  16. Sequential application of ligand and structure based modeling approaches to index chemicals for their hH4R antagonism.

    Science.gov (United States)

    Pappalardo, Matteo; Shachaf, Nir; Basile, Livia; Milardi, Danilo; Zeidan, Mouhammed; Raiyn, Jamal; Guccione, Salvatore; Rayan, Anwar

    2014-01-01

    The human histamine H4 receptor (hH4R), a member of the G-protein coupled receptors (GPCR) family, is an increasingly attractive drug target. It plays a key role in many cell pathways and many hH4R ligands are studied for the treatment of several inflammatory, allergic and autoimmune disorders, as well as for analgesic activity. Due to the challenging difficulties in the experimental elucidation of hH4R structure, virtual screening campaigns are normally run on homology based models. However, a wealth of information about the chemical properties of GPCR ligands has also accumulated over the last few years and an appropriate combination of these ligand-based knowledge with structure-based molecular modeling studies emerges as a promising strategy for computer-assisted drug design. Here, two chemoinformatics techniques, the Intelligent Learning Engine (ILE) and Iterative Stochastic Elimination (ISE) approach, were used to index chemicals for their hH4R bioactivity. An application of the prediction model on external test set composed of more than 160 hH4R antagonists picked from the chEMBL database gave enrichment factor of 16.4. A virtual high throughput screening on ZINC database was carried out, picking ∼ 4000 chemicals highly indexed as H4R antagonists' candidates. Next, a series of 3D models of hH4R were generated by molecular modeling and molecular dynamics simulations performed in fully atomistic lipid membranes. The efficacy of the hH4R 3D models in discrimination between actives and non-actives were checked and the 3D model with the best performance was chosen for further docking studies performed on the focused library. The output of these docking studies was a consensus library of 11 highly active scored drug candidates. Our findings suggest that a sequential combination of ligand-based chemoinformatics approaches with structure-based ones has the potential to improve the success rate in discovering new biologically active GPCR drugs and increase the

  17. Sequential Probability Ratio Tests: Conservative and Robust

    NARCIS (Netherlands)

    Kleijnen, J.P.C.; Shi, Wen

    2017-01-01

    In practice, most computers generate simulation outputs sequentially, so it is attractive to analyze these outputs through sequential statistical methods such as sequential probability ratio tests (SPRTs). We investigate several SPRTs for choosing between two hypothesized values for the mean output
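
    The classical building block behind such tests is Wald's SPRT; a generic sketch for choosing between two hypothesized Gaussian means with known standard deviation is given below (this is the textbook test with Wald's boundary approximations, not the conservative or robust variants investigated in the paper).

```python
import numpy as np

def sprt_gaussian_mean(samples, mu0, mu1, sigma, alpha=0.05, beta=0.05):
    """Wald SPRT for H0: mean = mu0 versus H1: mean = mu1, assuming Gaussian
    output with known standard deviation sigma.  Returns the decision and the
    number of observations consumed."""
    lower = np.log(beta / (1 - alpha))      # accept-H0 boundary
    upper = np.log((1 - beta) / alpha)      # accept-H1 boundary
    llr, n = 0.0, 0
    for n, x in enumerate(samples, start=1):
        llr += (mu1 - mu0) * (x - 0.5 * (mu0 + mu1)) / sigma ** 2  # log-LR increment
        if llr <= lower:
            return "accept H0", n
        if llr >= upper:
            return "accept H1", n
    return "undecided", n
```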

  18. A one-sided sequential test

    Energy Technology Data Exchange (ETDEWEB)

    Racz, A.; Lux, I. [Hungarian Academy of Sciences, Budapest (Hungary). Atomic Energy Research Inst.

    1996-04-16

    The applicability of the classical sequential probability ratio testing (SPRT) for early failure detection problems is limited by the fact that there is an extra time delay between the occurrence of the failure and its first recognition. Chien and Adams developed a method to minimize this time for the case when the problem can be formulated as testing the mean value of a Gaussian signal. In our paper we propose a procedure that can be applied to both mean and variance testing and that minimizes the time delay. The method is based on a special parametrization of the classical SPRT. The one-sided sequential tests (OSST) can reproduce the results of the Chien-Adams test when applied to mean values. (author).

  19. Rapid method for the sequential measurement of isotopes of Am and Pu in liquid matrices by alpha spectrometry

    International Nuclear Information System (INIS)

    Mantero Cabrera, J.

    2014-01-01

    In radiological emergencies, a fast response from the laboratories in quantifying certain radionuclides is necessary in order to support decision making. In these cases, the response time is the key parameter to consider (without neglecting the quality of the measurement). This work presents a method for aqueous matrices that generates Pu and Am isotope sources in a single day of work, to be measured subsequently by alpha spectrometry. The developed methodology has been validated through its application to reference samples and by taking part in intercomparison exercises, with satisfactory results in both cases. In this way, we confirm the validity of this fast method, which allows us to generate, within 24 hours of the sample arriving at our laboratory, a measurement with a Minimum Detectable Activity (MDA) of about 0.004 Bq/L for Pu and Am isotopes in liquid matrices.

  20. Development of a sequential injection-square wave voltammetry method for determination of paraquat in water samples employing the hanging mercury drop electrode.

    Science.gov (United States)

    dos Santos, Luciana B O; Infante, Carlos M C; Masini, Jorge C

    2010-03-01

    This work describes the development and optimization of a sequential injection method to automate the determination of paraquat by square-wave voltammetry employing a hanging mercury drop electrode. Automation by sequential injection enhanced the sampling throughput, improving the sensitivity and precision of the measurements as a consequence of the highly reproducible and efficient conditions of mass transport of the analyte toward the electrode surface. For instance, 212 analyses can be made per hour if the sample/standard solution is prepared off-line and the sequential injection system is used just to inject the solution towards the flow cell. In-line sample conditioning reduces the sampling frequency to 44 h⁻¹. Experiments were performed in 0.10 M NaCl, which was the carrier solution, using a frequency of 200 Hz, a pulse height of 25 mV, a potential step of 2 mV, and a flow rate of 100 µL s⁻¹. For a concentration range between 0.010 and 0.25 mg L⁻¹, the current (iₚ, µA) read at the potential corresponding to the peak maximum fitted the following linear equation with the paraquat concentration (mg L⁻¹): iₚ = (-20.5 ± 0.3)·C(paraquat) - (0.02 ± 0.03). The limits of detection and quantification were 2.0 and 7.0 µg L⁻¹, respectively. The accuracy of the method was evaluated by recovery studies using spiked water samples that were also analyzed by molecular absorption spectrophotometry after reduction of paraquat with sodium dithionite in an alkaline medium. No evidence of statistically significant differences between the two methods was observed at the 95% confidence level.
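
    The reported calibration line can be inverted to convert a measured peak current into a paraquat concentration; the sketch below hard-codes the fitted coefficients and the detection/quantification limits quoted in the abstract, while the example current and the status labels are purely illustrative.

```python
def paraquat_concentration(peak_current_uA, slope=-20.5, intercept=-0.02):
    """Invert the reported calibration  i_p = slope*C + intercept
    (i_p in uA, C in mg/L) and classify the result against the reported
    LOD (2.0 ug/L = 0.002 mg/L) and LOQ (7.0 ug/L = 0.007 mg/L)."""
    c = (peak_current_uA - intercept) / slope
    if c < 0.002:
        return c, "below detection limit"
    if c < 0.007:
        return c, "detected, below quantification limit"
    return c, "quantifiable"

print(paraquat_concentration(-2.5))  # hypothetical peak current of -2.5 uA
```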

  1. Applying Sparse Machine Learning Methods to Twitter: Analysis of the 2012 Change in Pap Smear Guidelines. A Sequential Mixed-Methods Study.

    Science.gov (United States)

    Lyles, Courtney Rees; Godbehere, Andrew; Le, Gem; El Ghaoui, Laurent; Sarkar, Urmimala

    2016-06-10

    It is difficult to synthesize the vast amount of textual data available from social media websites. Capturing real-world discussions via social media could provide insights into individuals' opinions and the decision-making process. We conducted a sequential mixed-methods study to determine the utility of sparse machine learning techniques in summarizing Twitter dialogues. We chose a narrowly defined topic for this approach: cervical cancer discussions over a 6-month time period surrounding a change in Pap smear screening guidelines. We applied statistical methodologies, known as sparse machine learning algorithms, to summarize Twitter messages about cervical cancer before and after the 2012 change in Pap smear screening guidelines by the US Preventive Services Task Force (USPSTF). All messages containing the search terms "cervical cancer," "Pap smear," and "Pap test" were analyzed during: (1) January 1-March 13, 2012, and (2) March 14-June 30, 2012. Topic modeling was used to discern the most common topics from each time period and to determine the singular value criterion for each topic. The results for the top 10 relevant topics were then qualitatively coded to determine the efficiency of the clustering method in grouping distinct ideas, and how the discussion differed before vs. after the change in guidelines. This machine learning method was effective in grouping the relevant discussion topics about cervical cancer during the respective time periods (~20% overall irrelevant content in both time periods). Qualitative analysis determined that a significant portion of the top discussion topics in the second time period directly reflected the USPSTF guideline change (eg, "New Screening Guidelines for Cervical Cancer"), and many topics in both time periods addressed basic screening promotion and education (eg, "It is Cervical Cancer Awareness Month! Click the link to see where you can receive a free or low cost Pap test."). It was demonstrated that machine learning
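
    The overall workflow, comparing the dominant topics extracted from two time windows, can be sketched with off-the-shelf tools; the study used the authors' own sparse machine-learning formulations, so the TF-IDF + NMF decomposition below is only a stand-in, and the corpus variable names are hypothetical.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import NMF

def top_topics(tweets, n_topics=10, n_words=8):
    """Summarise a tweet corpus into topics and return the top words of each.
    TF-IDF + NMF is used here purely as a stand-in for the sparse
    machine-learning algorithms applied in the study."""
    vec = TfidfVectorizer(stop_words="english", max_features=5000)
    X = vec.fit_transform(tweets)
    nmf = NMF(n_components=n_topics, random_state=0).fit(X)
    words = vec.get_feature_names_out()
    return [[words[i] for i in comp.argsort()[::-1][:n_words]]
            for comp in nmf.components_]

# hypothetical corpora split at the guideline-change date
# topics_before = top_topics(tweets_jan1_to_mar13)
# topics_after  = top_topics(tweets_mar14_to_jun30)
```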

  2. Sequential coating upconversion NaYF{sub 4}:Yb,Tm nanocrystals with SiO{sub 2} and ZnO layers for NIR-driven photocatalytic and antibacterial applications

    Energy Technology Data Exchange (ETDEWEB)

    Tou, Meijie; Luo, Zhenguo; Bai, Song; Liu, Fangying; Chai, Qunxia; Li, Sheng; Li, Zhengquan, E-mail: zqli@zjnu.edu.cn

    2017-01-01

    ZnO is one of the most promising materials for both photocatalytic and antibacterial applications, but its wide bandgap requires excitation by UV light, which limits its applications under visible and NIR bands. Herein, we demonstrate a facile approach to synthesize core-shell-shell hybrid nanoparticles consisting of hexagonal NaYF₄:Yb,Tm, amorphous SiO₂ and wurtzite ZnO. The upconversion nanocrystals are used as the core seeds and sequentially coated with an insulating shell and a semiconductor layer. Such hybrid nanoparticles can efficiently utilize NIR light through the upconverting process, and display notable photocatalytic performance and antibacterial activity under NIR irradiation. The developed NaYF₄:Yb,Tm@SiO₂@ZnO nanoparticles are characterized with TEM, XRD, EDS, XPS and PL spectra, and their working mechanism is also elucidated. - Highlights: • Core-shell NaYF₄:Yb,Tm@SiO₂@TiO₂ NPs were synthesized via a sequential coating method. • Hybrid NaYF₄:Yb,Tm@SiO₂@TiO₂ NPs show NIR-light enhanced photocatalytic activity. • NIR-driven antibacterial performance has been realized with NaYF₄:Yb,Tm@SiO₂@TiO₂ NPs. • Working mechanism of the hybrid photocatalysts as antibacterial agents was proposed.

  3. Asymptotic Expansions - Methods and Applications

    International Nuclear Information System (INIS)

    Harlander, R.

    1999-01-01

    Different viewpoints on the asymptotic expansion of Feynman diagrams are reviewed. The relations between the field theoretic and diagrammatic approaches are sketched. The focus is on problems with large masses or large external momenta. Several recent applications also for other limiting cases are touched upon. Finally, the pros and cons of the different approaches are briefly discussed. (author)

  4. Development of a flow method for the determination of phosphate in estuarine and freshwaters-Comparison of flow cells in spectrophotometric sequential injection analysis

    International Nuclear Information System (INIS)

    Mesquita, Raquel B.R.; Ferreira, M. Teresa S.O.B.; Toth, Ildiko V.; Bordalo, Adriano A.; McKelvie, Ian D.; Rangel, Antonio O.S.S.

    2011-01-01

    Highlights: → Sequential injection determination of phosphate in estuarine and freshwaters. → Alternative spectrophotometric flow cells are compared. → Minimization of schlieren effect was assessed. → Proposed method can cope with wide salinity ranges. → Multi-reflective cell shows clear advantages. - Abstract: A sequential injection system with a dual analytical line was developed and applied in the comparison of two different detection systems, viz. a conventional spectrophotometer with a commercial flow cell, and a multi-reflective flow cell coupled with a photometric detector, under the same experimental conditions. The study was based on the spectrophotometric determination of phosphate using the molybdenum-blue chemistry. The two alternative flow cells were compared in terms of their response to variation of sample salinity, susceptibility to interferences and to refractive index changes. The developed method was applied to the determination of phosphate in natural waters (estuarine, river, well and ground waters). The achieved detection limit (0.007 μM PO₄³⁻) is consistent with the requirement of the target water samples, and a wide quantification range (0.024-9.5 μM) was achieved using both detection systems.

  5. Development of a flow method for the determination of phosphate in estuarine and freshwaters-Comparison of flow cells in spectrophotometric sequential injection analysis

    Energy Technology Data Exchange (ETDEWEB)

    Mesquita, Raquel B.R. [CBQF/Escola Superior de Biotecnologia, Universidade Catolica Portuguesa, R. Dr. Antonio Bernardino de Almeida, 4200-072 Porto (Portugal); Laboratory of Hydrobiology, Institute of Biomedical Sciences Abel Salazar (ICBAS) and Institute of Marine Research (CIIMAR), Universidade do Porto, Lg. Abel Salazar 2, 4099-003 Porto (Portugal); Ferreira, M. Teresa S.O.B. [CBQF/Escola Superior de Biotecnologia, Universidade Catolica Portuguesa, R. Dr. Antonio Bernardino de Almeida, 4200-072 Porto (Portugal); Toth, Ildiko V. [REQUIMTE, Departamento de Quimica, Faculdade de Farmacia, Universidade de Porto, Rua Anibal Cunha, 164, 4050-047 Porto (Portugal); Bordalo, Adriano A. [Laboratory of Hydrobiology, Institute of Biomedical Sciences Abel Salazar (ICBAS) and Institute of Marine Research (CIIMAR), Universidade do Porto, Lg. Abel Salazar 2, 4099-003 Porto (Portugal); McKelvie, Ian D. [School of Chemistry, University of Melbourne, Victoria 3010 (Australia); Rangel, Antonio O.S.S., E-mail: aorangel@esb.ucp.pt [CBQF/Escola Superior de Biotecnologia, Universidade Catolica Portuguesa, R. Dr. Antonio Bernardino de Almeida, 4200-072 Porto (Portugal)

    2011-09-02

    Highlights: → Sequential injection determination of phosphate in estuarine and freshwaters. → Alternative spectrophotometric flow cells are compared. → Minimization of schlieren effect was assessed. → Proposed method can cope with wide salinity ranges. → Multi-reflective cell shows clear advantages. - Abstract: A sequential injection system with a dual analytical line was developed and applied in the comparison of two different detection systems, viz. a conventional spectrophotometer with a commercial flow cell, and a multi-reflective flow cell coupled with a photometric detector, under the same experimental conditions. The study was based on the spectrophotometric determination of phosphate using the molybdenum-blue chemistry. The two alternative flow cells were compared in terms of their response to variation of sample salinity, susceptibility to interferences and to refractive index changes. The developed method was applied to the determination of phosphate in natural waters (estuarine, river, well and ground waters). The achieved detection limit (0.007 μM PO₄³⁻) is consistent with the requirement of the target water samples, and a wide quantification range (0.024-9.5 μM) was achieved using both detection systems.

  6. A sequential and fast method for low levels of 226Ra, 228Ra, 210Pb and 210Po in mine and uranium processing plant effluents

    International Nuclear Information System (INIS)

    Taddei, M.H.T.; Taddei, J.F.A.C.

    2005-01-01

    Due to their biological risk and long half-lives, the radionuclides 228 Ra, 226 Ra, 210 Pb and 210 Po should be monitored frequently to check for any environmental contamination around mines and uranium plants. Currently, the methods used for the determination of these radionuclides take about thirty days to reach radioactive equilibrium of the 210 Pb and 226 Ra daughters. The evaluation, in monitoring programs, of effluent discharges and of leakage from deposits to water bodies requires quick answers in order to implement corrective measures; fast determination methods must therefore be implemented. This work presents a fast and sequential method to determine 226 Ra, 228 Ra, 210 Pb and 210 Po accurately and sensitively, in three days, in water and effluent samples

  7. The application of sequential 99mTc-methylene diphosphonate scintigraphy to evaluate bone healing in non-human primates

    International Nuclear Information System (INIS)

    Dormehl, I.C.; Mennen, U.; Goosen, D.J.

    1982-01-01

    The study was performed to devise and assess a sensitive, non-invasive method for investigating the healing process of long bones in non-human primates. A specific clinical application in mind is the early detection of non-healing or delayed healing of fractures in the aged. Important for accurate evaluation is consistency in the localisation of the fracture site and of the region of healthy bone in each scintiscan throughout the entire study. The present report concerns a technique which seems to be successful for this purpose and is found useful for clinical application. Four adult chacma baboons (Papio ursinus) were used in the experiment. All four animals were clinically and radiographically normal. They were housed indoors in environmentally controlled rooms for the duration of the experiment, and they were fed a balanced commercial diet with water freely available. Eight forearms, i.e. a total of 16 radius and ulna bones, were osteotomized with a Gigli saw to create simple, standard and controlled fractures

  8. Evaluation of the mobility and pollution index of selected essential/toxic metals in paddy soil by sequential extraction method.

    Science.gov (United States)

    Hasan, Maria; Kausar, Dilshad; Akhter, Gulraiz; Shah, Munir H

    2018-01-01

    Comparative distribution and mobility of selected essential and toxic metals in paddy soil from district Sargodha, Pakistan, was evaluated by the modified Community Bureau of Reference (mBCR) sequential extraction procedure. Most of the soil samples were slightly alkaline, while the soil texture was predominantly silty loam. The metal contents were quantified in the exchangeable, reducible, oxidisable and residual fractions of the soil by flame atomic absorption spectrophotometry, and the metal data were subjected to statistical analyses in order to evaluate the mutual relationships among the metals in each fraction. Among the metals, Ca, Sr and Mn were found to be more mobile in the soil. A number of significant correlations between different metal pairs were noted in various fractions. Contamination factor, geoaccumulation index and enrichment factor revealed extremely severe enrichment/contamination for Cd; moderate to significant enrichment/contamination for Ni, Zn, Co and Pb; while Cr, Sr, Cu and Mn revealed minimal to moderate contamination and accumulation in the soil. Multivariate cluster analysis showed significant anthropogenic intrusions of the metals in various fractions. Copyright © 2017 Elsevier Inc. All rights reserved.

  9. Trace metal distribution in the Arosa estuary (N.W. Spain): The application of a recently developed sequential extraction procedure for metal partitioning

    International Nuclear Information System (INIS)

    Santamaria-Fernandez, Rebeca; Cave, Mark R.; Hill, Steve J.

    2006-01-01

    A study of the trace metal distribution in sediment samples from the Galician coast (Spain) has been performed. A multielement extraction method optimised via experimental design was employed. The method uses centrifugation to pass the extractant solution, at varying pH, through the sediment sample. The sequential leaches were collected and analysed by ICP-AES. Chemometric approaches were utilised to identify the composition of the physico-chemical components in order to characterise the sample. The samples collected at different sites could be classified according to their differences in metal bio-availability, and important information regarding element distribution within the physico-chemical components is given. The method has proved to be a quick and reliable way to evaluate sediment samples for environmental geochemistry analysis. In addition, this approach has potential as a fast screening method for the bio-availability of metals in the environment

  10. Statistical methods and computer applications

    CERN Document Server

    Arora, PN

    2009-01-01

    Some of the exclusive features of the book are: every concept has been explained with the help of solved examples; working rules showing the various steps for the application of formulae have also been given; the diagrams and graphs have been neatly and correctly drawn in such a way that the students gain a complete understanding of the problem simply by looking at them; efforts have been made to make the subject thoroughly exhaustive, and nothing important has been omitted; answers to all the problems have been thoroughly checked. It is a user-friendly book containing many solved problems and

  11. Data mining methods and applications

    CERN Document Server

    Lawrence, Kenneth D; Klimberg, Ronald K

    2007-01-01

    With today's information explosion, many organizations are now able to access a wealth of valuable data. Unfortunately, most of these organizations find they are ill-equipped to organize this information, let alone put it to work for them. Gain a competitive advantage: employ data mining in research and forecasting; build models with data management tools and methodology optimization; gain sophisticated breakdowns and complex analysis through multivariate, evolutionary, and neural net methods; learn how to classify data and maintain quality. Transform data into business acumen. Data Mining Methods and

  12. Development of a rapid method for the sequential extraction and subsequent quantification of fatty acids and sugars from avocado mesocarp tissue.

    Science.gov (United States)

    Meyer, Marjolaine D; Terry, Leon A

    2008-08-27

    Methods devised for oil extraction from avocado (Persea americana Mill.) mesocarp (e.g., Soxhlet) are usually lengthy and require operation at high temperature. Moreover, methods for extracting sugars from avocado tissue (e.g., 80% ethanol, v/v) do not allow for lipids to be easily measured from the same sample. This study describes a new simple method that enabled sequential extraction and subsequent quantification of both fatty acids and sugars from the same avocado mesocarp tissue sample. Freeze-dried mesocarp samples of avocado cv. Hass fruit of different ripening stages were extracted by homogenization with hexane and the oil extracts quantified for fatty acid composition by GC. The resulting filter residues were readily usable for sugar extraction with methanol (62.5%, v/v). For comparison, oil was also extracted using the standard Soxhlet technique and the resulting thimble residue extracted for sugars as before. An additional experiment was carried out whereby filter residues were also extracted using ethanol. Average oil yield using the Soxhlet technique was significantly (P < 0.05) higher than that obtained by homogenization with hexane, although the difference remained very slight, and fatty acid profiles of the oil extracts following both methods were very similar. Oil recovery improved with increasing ripeness of the fruit with minor differences observed in the fatty acid composition during postharvest ripening. After lipid removal, methanolic extraction was superior in recovering sucrose and perseitol as compared to 80% ethanol (v/v), whereas mannoheptulose recovery was not affected by solvent used. The method presented has the benefits of shorter extraction time, lower extraction temperature, and reduced amount of solvent and can be used for sequential extraction of fatty acids and sugars from the same sample.

  13. Influence of layer compositions and annealing conditions on complete formation of ternary PdAgCu alloys prepared by sequential electroless and electroplating methods

    Energy Technology Data Exchange (ETDEWEB)

    Sumrunronnasak, Sarocha [Graduate Program of Petrochemistry and Polymer Science, Faculty of Science, Chulalongkorn University, Bangkok 10330 (Thailand); Tantayanon, Supawan, E-mail: supawan.t@chula.ac.th [Green Chemistry Research Laboratory, Department of Chemistry, Faculty of Science, Chulalongkorn University, Bangkok 10330 (Thailand); Kiatgamolchai, Somchai [Department of Physics, Faculty of Science, Chulalongkorn University, Bangkok 10330 (Thailand)

    2017-01-01

    PdAgCu ternary alloy membranes were synthesized by sequential electroless plating of Pd followed by electroplating of Ag and Cu onto a stainless steel substrate. The composition of the composite was varied by changing the deposition times. The fabricated layers were annealed at temperatures between 500 and 600 °C for 20–60 h. Energy-dispersive X-ray spectroscopy (EDX) and X-ray diffraction (XRD) were employed to investigate the element distribution in the membrane, which provided insight into the membrane alloying process. Complete formation of the alloy could be obtained when the Pd content was greater than a critical value of 60 wt%, and the Ag and Cu contents were in the ranges of 18–30 wt% and 2–13 wt%, respectively. The deposition times of Ag and Cu were found to affect the completion of alloy formation. Excess deposited Cu in particular tended to segregate on the surface of the membrane. - Highlights: • Ternary PdAgCu alloy membranes were successfully prepared by the sequential electroless and electroplating methods. • The average Pd composition required to form the alloy was found to be at least approximately 60 wt%. • The alloy region was achieved for Pd 60–73 wt%, Cu 18–30 wt% and Ag 2–13 wt%. • A suitable annealing temperature was in the range of 500–600 °C for an adequate treatment time (20–60 h).

  14. Electrodeposition: Principles, Applications and Methods

    International Nuclear Information System (INIS)

    Nur Ubaidah Saidin; Ying, K.K.; Khuan, N.I.

    2011-01-01

    The electrodeposition technique has been around for a very long time. It is a process of coating a thin layer of one metal on top of a different metal to modify its surface properties, by donating electrons to the ions in a solution. This bottom-up fabrication technique is versatile and can be applied to a wide range of potential applications. Electrodeposition has gained popularity in recent years due to its capability for fabricating one-dimensional nanostructures such as nanorods, nanowires and nanotubes. In this paper, we present an overview of the fabrication and characterization of high-aspect-ratio nanostructures prepared using the nano-electrochemical deposition system set up in our laboratory. (author)

  15. Sequential logic analysis and synthesis

    CERN Document Server

    Cavanagh, Joseph

    2007-01-01

    Until now, there was no single resource for actual digital system design. Using both basic and advanced concepts, Sequential Logic: Analysis and Synthesis offers a thorough exposition of the analysis and synthesis of both synchronous and asynchronous sequential machines. With 25 years of experience in designing computing equipment, the author stresses the practical design of state machines. He clearly delineates each step of the structured and rigorous design principles that can be applied to practical applications. The book begins by reviewing the analysis of combinatorial logic and Boolean a
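    The synchronous sequential machines discussed above can be illustrated with a small, self-contained sketch. The Python code below simulates a Moore machine from an explicit state-transition table; the states, inputs and detected pattern are invented for illustration and are not taken from the book.

        # Minimal Moore machine simulator; states, inputs and output table are invented.
        TRANSITIONS = {            # (current state, input bit) -> next state
            ("IDLE", 0): "IDLE",
            ("IDLE", 1): "SEEN1",
            ("SEEN1", 0): "IDLE",
            ("SEEN1", 1): "SEEN11",
            ("SEEN11", 0): "IDLE",
            ("SEEN11", 1): "SEEN11",
        }
        OUTPUT = {"IDLE": 0, "SEEN1": 0, "SEEN11": 1}   # Moore output depends on state only

        def run(inputs, state="IDLE"):
            outputs = []
            for bit in inputs:
                state = TRANSITIONS[(state, bit)]
                outputs.append(OUTPUT[state])
            return outputs

        print(run([1, 1, 0, 1, 1, 1]))   # output is 1 whenever at least two consecutive 1s have been seen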

  16. Applications of the reduction method

    International Nuclear Information System (INIS)

    Zimmermann, W.

    1987-01-01

    A renormalizable model of quantum field theory involving several independent coupling parameters λ 0 , ..., λ n and a normalization mass K is considered. If the model involves massive particles, a formulation of the renormalization group should be used in which the β-functions are independent of the masses. The aim of the reduction method is to reduce the model to a description in terms of a single coupling parameter. Although the reduction method does not work for the gauge couplings, it leads to reasonable mass constraints if applied to the Yukawa and the Higgs couplings. The underlying idea is that - whatever the fundamental interaction is going to be - eventually there is only one coupling which determines all parameters of the standard model. However, one should be skeptical about numerical results in the standard model, since the standard model is only an effective theory, its β-functions are only approximate, and changes in their lowest-order coefficients may have large effects on the reduction solutions

  17. Applications of mixed-methods methodology in clinical pharmacy research.

    Science.gov (United States)

    Hadi, Muhammad Abdul; Closs, S José

    2016-06-01

    Introduction: Mixed-methods methodology, as the name suggests, refers to the mixing of elements of both qualitative and quantitative methodologies in a single study. In the past decade, mixed-methods methodology has gained popularity among healthcare researchers as it promises to bring together the strengths of both qualitative and quantitative approaches. Methodology: A number of mixed-methods designs are available in the literature, and the four most commonly used designs in healthcare research are: the convergent parallel design, the embedded design, the exploratory design, and the explanatory design. Each has its own unique advantages, challenges and procedures, and selection of a particular design should be guided by the research question. Guidance on designing, conducting and reporting mixed-methods research is available in the literature, so it is advisable to adhere to this to ensure methodological rigour. When to use: The approach is best suited when the research questions require triangulating findings from different methodologies to explain a single phenomenon; clarifying the results of one method using another method; informing the design of one method based on the findings of another method; development of a scale/questionnaire; or answering different research questions within a single study. Two case studies have been presented to illustrate possible applications of mixed-methods methodology. Limitations: Possessing the necessary knowledge and skills to undertake qualitative and quantitative data collection, analysis, interpretation and integration remains the biggest challenge for researchers conducting mixed-methods studies. Sequential study designs are often time consuming, being in two (or more) phases, whereas concurrent study designs may require more than one data collector to collect both qualitative and quantitative data at the same time.

  18. Development and Application of a Stepwise Assessment Process for Rational Redesign of Sequential Skills-Based Courses.

    Science.gov (United States)

    Gallimore, Casey E; Porter, Andrea L; Barnett, Susanne G

    2016-10-25

    Objective. To develop and apply a stepwise process to assess achievement of course learning objectives related to advanced pharmacy practice experiences (APPEs) preparedness and inform redesign of sequential skills-based courses. Design. Four steps comprised the assessment and redesign process: (1) identify skills critical for APPE preparedness; (2) utilize focus groups and course evaluations to determine student competence in skill performance; (3) apply course mapping to identify course deficits contributing to suboptimal skill performance; and (4) initiate course redesign to target exposed deficits. Assessment. Focus group participants perceived students were least prepared for skills within the Accreditation Council for Pharmacy Education's pre-APPE core domains of Identification and Assessment of Drug-related Problems and General Communication Abilities. Course mapping identified gaps in instruction, performance, and assessment of skills within the aforementioned domains. Conclusions. A stepwise process that identified strengths and weaknesses of a course was used to facilitate structured course redesign. Strengths of the process included input and corroboration from both preceptors and students. Limitations included feedback from a small number of pharmacy preceptors and increased workload on course coordinators.

  19. Differential equations methods and applications

    CERN Document Server

    Said-Houari, Belkacem

    2015-01-01

    This book presents a variety of techniques for solving ordinary differential equations analytically and features a wealth of examples. Focusing on the modeling of real-world phenomena, it begins with a basic introduction to differential equations, followed by linear and nonlinear first order equations and a detailed treatment of the second order linear equations. After presenting solution methods for the Laplace transform and power series, it lastly presents systems of equations and offers an introduction to stability theory. To help readers practice the theory covered, two types of exercises are provided: those that illustrate the general theory, and others designed to expand on the text material. Detailed solutions to all the exercises are included. The book is excellently suited for use as a textbook for an undergraduate class (of all disciplines) in ordinary differential equations.

  20. Application of sequential fragmentation/transport theory to deposits of 1723 and 1963-65 eruptions of Volcan Irazu, Costa Rica: positive dispersion case and fractal model

    International Nuclear Information System (INIS)

    Brenes, Jose; Alvarado, Guillermo E.

    2013-01-01

    The theory of Fragmentation and Sequential Transport (FST) was applied to the granulometric analyses of the deposits from the 1723 and 1963-65 eruptions of Volcan Irazu. An appreciable number of cases of positive dispersion was found, which is associated in the literature with aggregation processes. A new fractal dimension defined in this research is shown to be the product of secondary fragmentation. The new dimension is applied in the analyses of the 1723 and 1963-65 eruptions. A fractal model of volcanic activity is formulated for the first time, incorporating the Hurst coefficient and the exponent of the power law. Values of dissidence near zero have been indicators of an effusive process, such as lava pools. The results derived from the model agreed with field observations. (author) [es

  1. Arsenic fractionation in agricultural soil using an automated three-step sequential extraction method coupled to hydride generation-atomic fluorescence spectrometry

    Energy Technology Data Exchange (ETDEWEB)

    Rosas-Castor, J.M. [Universidad Autónoma de Nuevo León, UANL, Facultad de Ciencias Químicas, Cd. Universitaria, San Nicolás de los Garza, Nuevo León, C.P. 66451 Nuevo León (Mexico); Group of Analytical Chemistry, Automation and Environment, University of Balearic Islands, Cra. Valldemossa km 7.5, 07122 Palma de Mallorca (Spain); Portugal, L.; Ferrer, L. [Group of Analytical Chemistry, Automation and Environment, University of Balearic Islands, Cra. Valldemossa km 7.5, 07122 Palma de Mallorca (Spain); Guzmán-Mar, J.L.; Hernández-Ramírez, A. [Universidad Autónoma de Nuevo León, UANL, Facultad de Ciencias Químicas, Cd. Universitaria, San Nicolás de los Garza, Nuevo León, C.P. 66451 Nuevo León (Mexico); Cerdà, V. [Group of Analytical Chemistry, Automation and Environment, University of Balearic Islands, Cra. Valldemossa km 7.5, 07122 Palma de Mallorca (Spain); Hinojosa-Reyes, L., E-mail: laura.hinojosary@uanl.edu.mx [Universidad Autónoma de Nuevo León, UANL, Facultad de Ciencias Químicas, Cd. Universitaria, San Nicolás de los Garza, Nuevo León, C.P. 66451 Nuevo León (Mexico)

    2015-05-18

    Highlights: • A fully automated flow-based modified-BCR extraction method was developed to evaluate the extractable As of soil. • The MSFIA–HG-AFS system included a UV photo-oxidation step for organic species degradation. • The accuracy and precision of the proposed method were found satisfactory. • The analysis time can be reduced by up to a factor of eight using the proposed flow-based BCR method. • The labile As (F1 + F2) was <50% of total As in soil samples from As-contaminated mining zones. - Abstract: A fully automated modified three-step BCR flow-through sequential extraction method was developed for the fractionation of the arsenic (As) content of agricultural soil, based on a multi-syringe flow injection analysis (MSFIA) system coupled to hydride generation-atomic fluorescence spectrometry (HG-AFS). Critical parameters that affect the performance of the automated system were optimized by exploiting a multivariate approach using a Doehlert design. The validation of the flow-based modified-BCR method was carried out by comparison with the conventional BCR method. Thus, the total As content was determined in the following three fractions: fraction 1 (F1), the acid-soluble or interchangeable fraction; fraction 2 (F2), the reducible fraction; and fraction 3 (F3), the oxidizable fraction. The limits of detection (LOD) were 4.0, 3.4, and 23.6 μg L⁻¹ for F1, F2, and F3, respectively. A wide working concentration range was obtained for the analysis of each fraction, i.e., 0.013–0.800, 0.011–0.900 and 0.079–1.400 mg L⁻¹ for F1, F2, and F3, respectively. The precision of the automated MSFIA–HG-AFS system, expressed as the relative standard deviation (RSD), was evaluated for a 200 μg L⁻¹ As standard solution, and RSD values between 5 and 8% were achieved for the three BCR fractions. The new modified three-step BCR flow-based sequential extraction method was satisfactorily applied for arsenic fractionation in real agricultural

  2. The SIESTA method; developments and applicability

    International Nuclear Information System (INIS)

    Artacho, Emilio; Anglada, E; Dieguez, O; Gale, J D; Garcia, A; Junquera, J; Martin, R M; Ordejon, P; Pruneda, J M; Sanchez-Portal, D; Soler, J M

    2008-01-01

    Recent developments in and around the SIESTA method of first-principles simulation of condensed matter are described and reviewed, with emphasis on (i) the applicability of the method for large and varied systems, (ii) efficient basis sets for the standards of accuracy of density-functional methods, (iii) new implementations, and (iv) extensions beyond ground-state calculations

  3. Finite element method - theory and applications

    International Nuclear Information System (INIS)

    Baset, S.

    1992-01-01

    This paper summarizes the mathematical basis of the finite element method. Attention is drawn to the natural development of the method from an engineering analysis tool into a general numerical analysis tool. A particular application to the stress analysis of rubber materials is presented. Special advantages and issues associated with the method are mentioned. (author). 4 refs., 3 figs

  4. Optimization of the analysis by means of liquid chromatography of metabolites of the Uncaria Tomentosa plant (cat's claw) using the sequential simplex method

    International Nuclear Information System (INIS)

    Romero Blanco, Eric

    2005-01-01

    A new method was developed for the liquid chromatographic analysis of the metabolites present in extracts of the root bark of Uncaria tomentosa (cat's claw), applying the sequential simplex technique to determine the values of the chromatographic variables, i.e. flow rate, temperature and mobile-phase composition, that optimize the elution time and the resolution of the chromatographic separation. The chromatographic analysis was performed in isocratic mode using a C12 (-urea) column of 15 cm length and 4.6 mm diameter and a UV detector. The values of the chromatographic variables that optimized the separation turned out to be: a flow rate of 1.80 mL/min, a temperature of 27.5 °C and a mobile-phase composition of 22:78 (methanol:buffer). (Author) [es
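    The sequential simplex technique mentioned above is a derivative-free search that repeatedly replaces the worst vertex of a simplex of operating conditions. As a hedged illustration, the Python sketch below runs the closely related Nelder-Mead simplex algorithm from SciPy on an invented response surface; the variable names, starting point and objective are stand-ins and do not reproduce the chromatographic system of this work.

        # Illustrative simplex-type optimization of an invented "chromatographic response".
        import numpy as np
        from scipy.optimize import minimize

        def response(x):
            flow, temp, organic = x
            # Invented smooth surface with its optimum near (1.8 mL/min, 27.5 C, 22 % organic)
            return (flow - 1.8) ** 2 + 0.01 * (temp - 27.5) ** 2 + 0.005 * (organic - 22.0) ** 2

        x0 = np.array([1.0, 35.0, 40.0])      # starting vertex of the simplex
        res = minimize(response, x0, method="Nelder-Mead",
                       options={"xatol": 1e-3, "fatol": 1e-8})
        print(res.x)                          # approaches [1.8, 27.5, 22.0]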

  5. Synthetic Aperture Sequential Beamforming

    DEFF Research Database (Denmark)

    Kortbek, Jacob; Jensen, Jørgen Arendt; Gammelmark, Kim Løkke

    2008-01-01

    A synthetic aperture focusing (SAF) technique denoted Synthetic Aperture Sequential Beamforming (SASB), suitable for 2D and 3D imaging, is presented. The technique differs from prior art of SAF in the sense that SAF is performed on pre-beamformed data rather than on channel data. The objective is to improve and obtain a more range-independent lateral resolution compared to conventional dynamic receive focusing (DRF) without compromising frame rate. SASB is a two-stage procedure using two separate beamformers. First, a set of B-mode image lines obtained using a single focal point in both transmit and receive is stored. The second stage applies the focused image lines from the first stage as input data. The SASB method has been investigated using simulations in Field II and by off-line processing of data acquired with a commercial scanner. The performance of SASB with a static image object is compared with DRF...
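    As a rough illustration of the first SASB stage, the sketch below applies a fixed-focus delay-and-sum to one line of simulated channel data; the array geometry, sampling rate and delay convention are simplified assumptions made for illustration, and this is not the authors' implementation.

        # Simplified fixed-focus delay-and-sum for one image line (illustrative only).
        import numpy as np

        def fixed_focus_line(rf, elem_x, z_focus, c=1540.0, fs=40e6):
            """rf: (n_samples, n_elements) channel data; elem_x: element x-positions in metres."""
            n_samples, n_elem = rf.shape
            t = np.arange(n_samples) / fs                        # fast-time axis in seconds
            line = np.zeros(n_samples)
            for i in range(n_elem):
                # extra path from the focal point to element i, relative to the on-axis element
                delay = (np.sqrt(z_focus**2 + elem_x[i]**2) - z_focus) / c
                line += np.interp(t + delay, t, rf[:, i], left=0.0, right=0.0)
            return line / n_elem

        elem_x = (np.arange(64) - 31.5) * 0.3e-3                 # 64 elements, 0.3 mm pitch (invented)
        rf = np.random.randn(2000, 64)                           # placeholder channel data
        print(fixed_focus_line(rf, elem_x, z_focus=0.03).shape)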

  6. GNSS remote sensing theory, methods and applications

    CERN Document Server

    Jin, Shuanggen; Xie, Feiqin

    2014-01-01

    This book presents the theory and methods of GNSS remote sensing as well as its applications in the atmosphere, oceans, land and hydrology. It contains detailed theory and study cases to help the reader put the material into practice.

  7. Exploring the response process of culturally differing survey respondents with a response style: A sequential mixed-methods study

    NARCIS (Netherlands)

    Morren, M.H.; Gelissen, J.P.T.M.; Vermunt, J.K.

    2013-01-01

    This article presents a mixed-methods approach that integrates quantitative and qualitative methods to analyze why the four largest minorities in the Netherlands (Turks, Moroccans, Antilleans, and Surinamese) respond differently to items treating cultural topics. First, we conducted latent class

  8. Advanced computational electromagnetic methods and applications

    CERN Document Server

    Li, Wenxing; Elsherbeni, Atef; Rahmat-Samii, Yahya

    2015-01-01

    This new resource covers the latest developments in computational electromagnetic methods, with emphasis on cutting-edge applications. This book is designed to extend the existing literature to the latest developments in computational electromagnetic methods, which are of interest to readers in both academic and industrial areas. The topics include advanced techniques in MoM, FEM and FDTD, the spectral domain method, GPU and Phi hardware acceleration, metamaterials, frequency- and time-domain integral equations, and statistical methods in bio-electromagnetics.

  9. Power system state estimation using an iteratively reweighted least squares method for sequential L1-regression

    Energy Technology Data Exchange (ETDEWEB)

    Jabr, R.A. [Electrical, Computer and Communication Engineering Department, Notre Dame University, P.O. Box 72, Zouk Mikhael, Zouk Mosbeh (Lebanon)

    2006-02-15

    This paper presents an implementation of the least absolute value (LAV) power system state estimator based on obtaining a sequence of solutions to the L1-regression problem using an iteratively reweighted least squares (IRLS_L1) method. The proposed implementation avoids reformulating the regression problem into standard linear programming (LP) form and consequently does not require the use of common methods of LP, such as those based on the simplex method or interior-point methods. It is shown that the IRLS_L1 method is equivalent to solving a sequence of linear weighted least squares (LS) problems. Thus, its implementation presents little additional effort since the sparse LS solver is common to existing LS state estimators. Studies on the termination criteria of the IRLS_L1 method have been carried out to determine a procedure for which the proposed estimator is more computationally efficient than a previously proposed non-linear iteratively reweighted least squares (IRLS) estimator. Indeed, it is revealed that the proposed method is a generalization of the previously reported IRLS estimator, but is based on more rigorous theory. (author)
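    The central idea of the abstract above, obtaining the least-absolute-value (L1) solution as the limit of a sequence of weighted least-squares problems, can be sketched in a few lines of Python. The toy regression problem, damping constant and iteration limits below are invented for illustration and do not reproduce the paper's power-system measurement model.

        # Illustrative IRLS for L1 regression: repeatedly solve weighted least squares
        # with weights 1/|residual| (damped near zero residuals).
        import numpy as np

        def irls_l1(A, b, iters=50, eps=1e-6):
            x = np.linalg.lstsq(A, b, rcond=None)[0]        # ordinary least-squares start
            for _ in range(iters):
                r = b - A @ x
                w = 1.0 / np.maximum(np.abs(r), eps)
                x_new = np.linalg.solve(A.T @ (w[:, None] * A), A.T @ (w * b))
                if np.linalg.norm(x_new - x) < 1e-10:
                    break
                x = x_new
            return x

        rng = np.random.default_rng(0)
        A = rng.normal(size=(100, 3))
        x_true = np.array([1.0, -2.0, 0.5])
        b = A @ x_true + 0.01 * rng.normal(size=100)
        b[::10] += 5.0                                      # gross errors an LAV estimator should reject
        print(irls_l1(A, b))                                # close to x_true despite the bad data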

  10. Sequential determination of important ecotoxic radionuclides in nuclear waste samples

    International Nuclear Information System (INIS)

    Bilohuscin, J.

    2016-01-01

    In the dissertation thesis we focused on the development and optimization of a sequential determination method for the radionuclides 93 Zr, 94 Nb, 99 Tc and 126 Sn, employing the extraction chromatography sorbents TEVA (R) Resin and Anion Exchange Resin, supplied by Eichrom Industries. Prior to testing the sequential separation of these proposed radionuclides from radioactive waste samples, a unique sequential procedure for the separation of 90 Sr, 239 Pu and 241 Am from urine matrices was tried, using molecular recognition sorbents of the AnaLig (R) series and the extraction chromatography sorbent DGA (R) Resin. In these experiments, four different sorbents were used sequentially for the separation, including the PreFilter Resin sorbent, which removes interfering organic materials present in raw urine. After positive results were obtained with this sequential procedure, experiments followed on 126 Sn separation using the TEVA (R) Resin and Anion Exchange Resin sorbents. Radiochemical recoveries obtained from samples of radioactive evaporator concentrates and sludge showed high separation efficiency, while the activities of 126 Sn were below the minimum detectable activities (MDA). The activity of 126 Sn was determined after ingrowth of the daughter nuclide 126m Sb on an HPGe gamma detector, with minimal contamination by gamma-interfering radionuclides and decontamination factors (D f ) higher than 1400 for 60 Co and 47000 for 137 Cs. Based on the experience and results of these separation procedures, a complete method for the sequential separation of 93 Zr, 94 Nb, 99 Tc and 126 Sn was proposed, which included optimization steps similar to those used in previous parts of the dissertation work. Application of the sequential separation method with the TEVA (R) Resin and Anion Exchange Resin sorbents to real samples of radioactive wastes provided satisfactory results and an economical, time-sparing, efficient method. (author)

  11. The Obstacles for the Teaching of 8th Grade TR History of Revolution and Kemalism Course According to the Constructivist Approach (An Example of Exploratory Sequential Mixed Method Design)

    Science.gov (United States)

    Karademir, Yavuz; Demir, Selcuk Besir

    2015-01-01

    The aim of this study is to ascertain the problems social studies teachers face in the teaching of topics covered in the 8th grade TRHRK Course. The study was conducted in line with the explanatory sequential mixed-methods design, which is one of the mixed research methods. The study involves three phases. In the first step, the exploratory process…

  12. Comparing a recursive digital filter with the moving-average and sequential probability-ratio detection methods for SNM portal monitors

    International Nuclear Information System (INIS)

    Fehlau, P.E.

    1993-01-01

    The author compared a recursive digital filter, proposed as a detection method for French special nuclear material monitors, with the author's detection methods, which employ a moving-average scaler or a sequential probability-ratio test. Each of the nine test subjects repeatedly carried a test source through a walk-through portal monitor that had the same nuisance-alarm rate with each method. He found that the average detection probability for the test source is also the same for each method. However, the recursive digital filter may have one drawback: its exponentially decreasing response to past radiation intensity prolongs the impact of any interference from radiation sources or radiation-producing machinery. He also examined the influence of each test subject on the monitor's operation by measuring individual attenuation factors for background and source radiation, then ranked the subjects' attenuation factors against their individual probabilities of detecting the test source. The one inconsistent ranking was probably caused by that subject's unusually long stride when passing through the portal
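    The two count-rate statistics contrasted above, a moving-average scaler and a recursive filter whose memory of past counts decays exponentially, can be compared with a short numerical sketch. The background rate, source amplitude, window length and smoothing constant below are invented and are not the monitor settings used in the study.

        # Moving-average versus exponentially weighted (recursive) count-rate estimates.
        # All numbers are invented for illustration.
        import numpy as np

        rng = np.random.default_rng(1)
        counts = rng.poisson(20, size=200).astype(float)    # background counts per interval
        counts[100:105] += 30                               # a source passing the portal

        def moving_average(c, window=8):
            return np.convolve(c, np.ones(window) / window, mode="same")

        def recursive_filter(c, alpha=0.2):
            est = np.zeros_like(c)
            est[0] = c[0]
            for k in range(1, len(c)):
                # exponentially decreasing weight on past intervals
                est[k] = alpha * c[k] + (1.0 - alpha) * est[k - 1]
            return est

        print(moving_average(counts)[98:108].round(1))
        print(recursive_filter(counts)[98:108].round(1))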

  13. Risk assessment theory, methods, and applications

    CERN Document Server

    Rausand, Marvin

    2011-01-01

    With its balanced coverage of theory and applications along with standards and regulations, Risk Assessment: Theory, Methods, and Applications serves as a comprehensive introduction to the topic. The book serves as a practical guide to current risk analysis and risk assessment, emphasizing the possibility of sudden, major accidents across various areas of practice from machinery and manufacturing processes to nuclear power plants and transportation systems. The author applies a uniform framework to the discussion of each method, setting forth clear objectives and descriptions, while also shedding light on applications, essential resources, and advantages and disadvantages. Following an introduction that provides an overview of risk assessment, the book is organized into two sections that outline key theory, methods, and applications. * Introduction to Risk Assessment defines key concepts and details the steps of a thorough risk assessment along with the necessary quantitative risk measures. Chapters outline...

  14. Application of nuclear gamma methods in mining

    International Nuclear Information System (INIS)

    Simon, L.; Bosak, J.

    1980-01-01

    A brief review is presented of basic physical characteristics of laboratory, field and operating gamma methods, of their classifications and principles. The measuring instrumentation used and the current state of applications of nuclear gamma methods in coal and ore mining and related branches are described in detail. Principles and practical recommendations are given for safety at work when handling gamma sources. (B.S.)

  15. Applicabilities of ship emission reduction methods

    Energy Technology Data Exchange (ETDEWEB)

    Guleryuz, Adem [ARGEMAN Research Group, Marine Division (Turkey)], email: ademg@argeman.org; Kilic, Alper [Istanbul Technical University, Maritime Faculty, Marine Engineering Department (Turkey)], email: enviromarineacademic@yahoo.com

    2011-07-01

    Ships, with their high consumption of fossil fuels to power their engines, are significant air polluters. Emission reduction methods therefore need to be implemented, and the aim of this paper is to assess the advantages and disadvantages of each emission reduction method. The benefits of the different methods are compared, together with their disadvantages and requirements, to determine the applicability of such solutions. The methods studied herein are direct water injection, humid air motor, sea water scrubbing, diesel particulate filter, selective catalytic reduction, design of engine components, exhaust gas recirculation and engine replacement. The results of the study showed that the usefulness of each emission reduction method depends on the particular case and that an evaluation should be carried out for each ship. This study pointed out that methods to reduce ship emissions are available but that their applicability depends on each case.

  16. Behavioural Sequential Analysis of Using an Instant Response Application to Enhance Peer Interactions in a Flipped Classroom

    Science.gov (United States)

    Hsu, Ting-Chia

    2018-01-01

    To stimulate classroom interactions, this study employed two different smartphone application modes, providing an additional instant interaction channel in a flipped classroom teaching fundamental computer science concepts. One instant interaction mode provided the students (N = 36) with anonymous feedback in chronological time sequence, while the…

  17. Full coverage of perovskite layer onto ZnO nanorods via a modified sequential two-step deposition method for efficiency enhancement in perovskite solar cells

    Science.gov (United States)

    Ruankham, Pipat; Wongratanaphisan, Duangmanee; Gardchareon, Atcharawon; Phadungdhitidhada, Surachet; Choopun, Supab; Sagawa, Takashi

    2017-07-01

    Full coverage of the perovskite layer onto ZnO nanorod substrates, with few pinholes, is crucial for achieving high-efficiency perovskite solar cells. In this work, a two-step sequential deposition method is modified to achieve appropriate properties of the perovskite (MAPbI3) film. Surface treatments of the perovskite layer and its precursor have been systematically performed and their morphologies investigated. Pre-wetting the lead iodide (PbI2) and letting it dry before reacting with methylammonium iodide (MAI) provides better coverage of the perovskite film on the ZnO nanorod substrate than that obtained without any treatment. An additional MAI deposition followed by a toluene drop-casting technique on the perovskite film is also found to increase the coverage and enhance the transformation of PbI2 to MAPbI3. These lead to a longer charge carrier lifetime, resulting in an enhanced power conversion efficiency (PCE) from 1.21% to 3.05%. The modified method could also be applied to a complex ZnO nanorods/TiO2 nanoparticles substrate, for which an enhancement in PCE to 3.41% is observed. These results imply that the introduced method provides a simple way to obtain full coverage and better transformation to the MAPbI3 phase for enhancing the performance of perovskite solar cells.

  18. How Well Can We Detect Lineage-Specific Diversification-Rate Shifts? A Simulation Study of Sequential AIC Methods.

    Science.gov (United States)

    May, Michael R; Moore, Brian R

    2016-11-01

    Evolutionary biologists have long been fascinated by the extreme differences in species numbers across branches of the Tree of Life. This has motivated the development of statistical methods for detecting shifts in the rate of lineage diversification across the branches of phylogenetic trees. One of the most frequently used methods, MEDUSA, explores a set of diversification-rate models, where each model assigns branches of the phylogeny to a set of diversification-rate categories. Each model is first fit to the data, and the Akaike information criterion (AIC) is then used to identify the optimal diversification model. Surprisingly, the statistical behavior of this popular method is uncharacterized, which is a concern in light of: (1) the poor performance of the AIC as a means of choosing among models in other phylogenetic contexts; (2) the ad hoc algorithm used to visit diversification models; and (3) errors that we reveal in the likelihood function used to fit diversification models to the phylogenetic data. Here, we perform an extensive simulation study demonstrating that MEDUSA (1) has a high false-discovery rate (on average, spurious diversification-rate shifts are identified [Formula: see text] of the time), and (2) provides biased estimates of diversification-rate parameters. Understanding the statistical behavior of MEDUSA is critical both to empirical researchers, in order to clarify whether these methods can make reliable inferences from empirical datasets, and to theoretical biologists, in order to clarify the specific problems that need to be solved in order to develop more reliable approaches for detecting shifts in the rate of lineage diversification. [Akaike information criterion; extinction; lineage-specific diversification rates; phylogenetic model selection; speciation.]. © The Author(s) 2016. Published by Oxford University Press, on behalf of the Society of Systematic Biologists.
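    The model-choice step discussed above can be illustrated generically: for each candidate model compute AIC = 2k - 2 ln L from its maximized log-likelihood and parameter count, then keep the model with the smallest value. The sketch below does this for invented numbers; it is not MEDUSA's likelihood or its stepwise search.

        # Generic AIC comparison; log-likelihoods and parameter counts are invented.
        candidates = {
            # model name: (maximized log-likelihood, number of free parameters)
            "single rate":     (-1250.4, 2),
            "one rate shift":  (-1243.9, 5),
            "two rate shifts": (-1242.7, 8),
        }

        def aic(log_lik, k):
            return 2 * k - 2 * log_lik

        scores = {name: aic(ll, k) for name, (ll, k) in candidates.items()}
        best = min(scores, key=scores.get)
        for name in sorted(scores, key=scores.get):
            print(f"{name:15s} AIC = {scores[name]:7.1f}   dAIC = {scores[name] - scores[best]:5.1f}")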

  19. Applicability of optical scanner method for fine root dynamics

    Science.gov (United States)

    Kume, Tomonori; Ohashi, Mizue; Makita, Naoki; Khoon Kho, Lip; Katayama, Ayumi; Matsumoto, Kazuho; Ikeno, Hidetoshi

    2016-04-01

    Fine root dynamics is one of the important components in forest carbon cycling, as ~60% of tree photosynthetic production can be allocated to root growth and metabolic activities. Various techniques have been developed for monitoring fine root biomass, production and mortality in order to understand carbon pools and fluxes resulting from fine root dynamics. The minirhizotron method is now a widely used technique, in which a transparent tube is inserted into the soil and researchers count the increase and decrease of roots along the tube using images taken by a minirhizotron camera or minirhizotron video camera inside the tube. This method allows us to observe root behavior directly without destruction, but it has several weaknesses, e.g., the difficulty of scaling up the results to stand level because of the small observation windows. Also, most of the image analysis is performed manually, which may yield insufficiently quantitative and objective data. Recently, the scanner method has been proposed, which can produce much larger (A4-size) images at lower cost than the minirhizotron methods. However, laborious and time-consuming image analysis still limits the applicability of this method. In this study, therefore, we aimed to develop a new protocol for scanner image analysis to extract root behavior in soil. We evaluated the applicability of this method in two ways: 1) the impact of different observers, including root-study professionals, semi-professionals and non-professionals, on the detected results of root dynamics such as abundance, growth, and decomposition, and 2) the impact of window size on the results using a random-sampling exercise. We applied our new protocol to analyze temporal changes of root behavior from sequential scanner images derived from a Bornean tropical forest. The results detected by the six observers showed considerable concordance in temporal changes in the abundance and the growth of fine roots but less in the decomposition. We also examined

  20. Multi-agent sequential hypothesis testing

    KAUST Repository

    Kim, Kwang-Ki K.

    2014-12-15

    This paper considers multi-agent sequential hypothesis testing and presents a framework for strategic learning in sequential games with explicit consideration of both temporal and spatial coordination. The associated Bayes risk functions explicitly incorporate costs of taking private/public measurements, costs of time-difference and disagreement in actions of agents, and costs of false declaration/choices in the sequential hypothesis testing. The corresponding sequential decision processes have well-defined value functions with respect to (a) the belief states for the case of conditionally independent private noisy measurements that are also assumed to be independent and identically distributed over time, and (b) the information states for the case of correlated private noisy measurements. A sequential investment game of strategic coordination and delay is also discussed as an application of the proposed strategic learning rules.
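    For a single agent, the classical building block of sequential hypothesis testing is Wald's sequential probability ratio test: accumulate the log-likelihood ratio of the observations and stop as soon as it crosses an upper or lower threshold. The sketch below implements that single-agent case for two Gaussian hypotheses; the thresholds and noise level are illustrative values, and the measurement and coordination costs of the multi-agent framework above are not modeled.

        # Single-agent SPRT for H0: mean 0 versus H1: mean 1, unit-variance Gaussian noise.
        # Thresholds roughly correspond to 1% error probabilities (illustrative values).
        import math
        import random

        def sprt(samples, mu0=0.0, mu1=1.0, sigma=1.0,
                 upper=math.log(99.0), lower=-math.log(99.0)):
            llr = 0.0
            for n, x in enumerate(samples, start=1):
                # log-likelihood ratio increment for one Gaussian observation
                llr += ((x - mu0) ** 2 - (x - mu1) ** 2) / (2.0 * sigma ** 2)
                if llr >= upper:
                    return "accept H1", n
                if llr <= lower:
                    return "accept H0", n
            return "undecided", len(samples)

        random.seed(0)
        data = [random.gauss(1.0, 1.0) for _ in range(200)]   # data drawn from H1
        print(sprt(data))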

  1. Helium leak testing methods in nuclear applications

    International Nuclear Information System (INIS)

    Ahmad, Anis

    2004-01-01

    The helium mass-spectrometer leak test is the most sensitive leak test method. It gives very reliable and sensitive test results. In the last few years, the application of helium leak testing has gained more importance due to increased public awareness of safety and of the environmental pollution caused by a number of growing chemical and other such industries. Helium leak testing is carried out and specified in most critical applications, such as the nuclear, space, chemical and petrochemical industries

  2. Analysis of mixed data methods & applications

    CERN Document Server

    de Leon, Alexander R

    2013-01-01

    A comprehensive source on mixed data analysis, Analysis of Mixed Data: Methods & Applications summarizes the fundamental developments in the field. Case studies are used extensively throughout the book to illustrate interesting applications from economics, medicine and health, marketing, and genetics. Carefully edited for smooth readability and seamless transitions between chapters. All chapters follow a common structure, with an introduction and a concluding summary, and include illustrative examples from real-life case studies in developmental toxicolog

  3. Developing management capacity building package to district health manager in northwest of Iran: A sequential mixed method study.

    Science.gov (United States)

    Tabrizi, Jafar Sadegh; Gholipour, Kamal; Farahbakhsh, Mostafa; Jahanbin, Hasan; Karamuz, Majid

    2016-11-01

    To assess district health managers' educational needs and develop management training programmes. This mixed-methods study was carried out between August 2014 and August 2015 in Tabriz, Iran. Four focus group discussion sessions and three semi-structured face-to-face interviews were conducted among district health managers and experts of a health centre. In addition, 52 questionnaires were completed to weight and finalise the management education modules and courses. Interviews and focus group discussions were tape-recorded, transcribed and analysed using the content analysis method. Data were analysed using SPSS 17. There were 52 participants, of whom 40 (78.8%) were men and 12 (21.2%) were women. All of the subjects (100%) took part in the quantitative phase, while 25 (48.08%) participated in the qualitative phase. In the qualitative phase, 11 (44%) participants were heads of units/departments in the provincial health centre and 14 (56%) were district health managers. In the quantitative phase, 30 (57.7%) participants were district health managers and 8 (28.8%) were heads of units/departments. Moreover, 33 (63.4%) participants had a medical education background. The job experience of 3 (5.8%) participants in their current position was below five years. The district health management training programme consisted of 10 modules with 53 educational topics. The normalised scores out of a total of 100 were 75.51 for rules and ethics, 71.19 for health information management, 69.27 for management and leadership, 68.08 for district management, 67.58 for human resources and organisational creativity, 66.6 for quality improvement, 62.37 for health resources management, 61.87 for planning and evaluation, 59.15 for research in the health system, and 53.15 for community participation. Considering district health managers' qualifications in health and medicine, they had not been trained in basic management. Almost all the management and leadership courses were prioritised as most necessary.

  4. Sequential Dip-spin Coating Method: Full Infiltration of MAPbI3-xClx into Mesoporous TiO2 for Stable Hybrid Perovskite Solar Cells

    KAUST Repository

    Kim, Woochul

    2017-05-31

    Organic-inorganic hybrid perovskite solar cells (PSCs) have reached a power conversion efficiency of 22.1% in a short period (∼7 years), a level that took decades to reach in silicon-based solar cells. The high power conversion efficiency and simple fabrication process make perovskite solar cells potential future power generators, once the lack of long-term stability is overcome, for which the deposition of void-free and pore-filled perovskite films on mesoporous TiO2 layers is the key pursuit. In this research, we developed a sequential dip-spin coating method in which the perovskite solution can easily infiltrate the pores within the TiO2 nanoparticulate layer, and the resultant film has large crystalline grains without voids between them. As a result, a higher short-circuit current is achieved owing to the large interfacial area of TiO2/perovskite, along with an enhanced power conversion efficiency compared to the conventional spin coating method. The as-made pore-filled and void-free perovskite film avoids intrinsic moisture and air and can effectively protect against the diffusion of degradation factors into the perovskite film, which is advantageous for the long-term stability of PSCs.

  5. Extending the applicability of multigrid methods

    International Nuclear Information System (INIS)

    Brannick, J; Brezina, M; Falgout, R; Manteuffel, T; McCormick, S; Ruge, J; Sheehan, B; Xu, J; Zikatanov, L

    2006-01-01

    Multigrid methods are ideal for solving the increasingly large-scale problems that arise in numerical simulations of physical phenomena because of their potential for computational costs and memory requirements that scale linearly with the degrees of freedom. Unfortunately, they have historically been limited in their applicability to elliptic-type problems and by the need for special handling in their implementation. In this paper, we present an overview of several recent theoretical and algorithmic advances made by the TOPS multigrid partners and their collaborators in extending the applicability of multigrid methods. Specific examples that are presented include quantum chromodynamics, radiation transport, and electromagnetics

  6. Particle methods: An introduction with applications

    Directory of Open Access Journals (Sweden)

    Moral Piere Del

    2014-01-01

    Interacting particle methods are increasingly used to sample from complex high-dimensional distributions. They have found a wide range of applications in applied probability, Bayesian statistics and information engineering. Understanding these new Monte Carlo simulation tools rigorously leads to fascinating mathematics related to Feynman-Kac path integral theory and its interacting particle interpretations. In these lecture notes, we provide a pedagogical introduction to the stochastic modeling and the theoretical analysis of these particle algorithms. We also illustrate these methods through several applications including random walk confinements, particle absorption models, nonlinear filtering, stochastic optimization, combinatorial counting and directed polymer models.
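    A minimal concrete instance of the interacting particle methods described above is the bootstrap particle filter: propagate a cloud of samples through the state model, weight each sample by the observation likelihood, and resample. The sketch below applies it to an invented one-dimensional random-walk model and only illustrates the generic mechanism, not any particular application from the lecture notes.

        # Bootstrap particle filter for a 1-D random walk observed in Gaussian noise.
        # Model and parameters are invented for illustration.
        import numpy as np

        rng = np.random.default_rng(2)
        T, N = 50, 1000                                    # time steps, particles
        x_true = np.cumsum(rng.normal(0.0, 1.0, T))        # hidden state trajectory
        y = x_true + rng.normal(0.0, 0.5, T)               # noisy observations

        particles = rng.normal(0.0, 1.0, N)
        estimates = []
        for t in range(T):
            particles = particles + rng.normal(0.0, 1.0, N)             # propagate
            weights = np.exp(-0.5 * ((y[t] - particles) / 0.5) ** 2)    # likelihood weights
            weights /= weights.sum()
            particles = particles[rng.choice(N, size=N, p=weights)]     # resample
            estimates.append(particles.mean())

        print(np.round(estimates[-5:], 2))
        print(np.round(x_true[-5:], 2))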

  7. Realization of Personalized Services for Intelligent Residential Space based on User Identification Method using Sequential Walking Footprints

    Directory of Open Access Journals (Sweden)

    Jin-Woo Jung

    2005-04-01

    A new human-friendly assistive home environment, Intelligent Sweet Home (ISH), was developed at KAIST, Korea, for testing advanced concepts for the independent living of the elderly and the physically handicapped. The concept of ISH is to consider the home itself as an intelligent robot. ISH continuously checks the intention or health status of the resident, and can therefore actively perform the most appropriate services for the resident's lifestyle on the basis of the detected intention or emergency information. However, when there are more than two residents, ISH cannot take the residents' characteristics or tastes into account unless it can identify who each resident is. To realize a personalized service system in an intelligent residential space like ISH, we address a human-friendly user identification method for the ubiquitous computing environment, focused in particular on dynamic human footprint recognition. We then present some case studies of personalized services carried out by the Human-friendly Welfare Robot System research center, KAIST.

  8. An Efficient System Based On Closed Sequential Patterns for Web Recommendations

    OpenAIRE

    Utpala Niranjan; R.B.V. Subramanyam; V-Khana

    2010-01-01

    Sequential pattern mining, since its introduction, has received considerable attention among researchers, with broad applications. Sequential pattern algorithms generally face problems when mining long sequential patterns or when using a very low support threshold. One possible solution to such problems is to mine closed sequential patterns, which are a condensed representation of sequential patterns. Recently, several researchers have utilized sequential pattern discovery for d...
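    As background for the abstract above: the support of a sequential pattern is the number of data sequences that contain it as an order-preserving (not necessarily contiguous) subsequence, and a pattern is closed if no super-pattern has the same support. The brute-force sketch below only counts support for two-item candidates over an invented sequence database; it is not the recommendation algorithm proposed in the paper.

        # Brute-force support counting for short sequential patterns (illustrative only).
        database = [
            ["a", "b", "c", "d"],
            ["a", "c", "b", "c"],
            ["b", "a", "c", "d"],
            ["a", "b", "d"],
        ]

        def contains(sequence, pattern):
            """True if pattern is a subsequence of sequence (order kept, gaps allowed)."""
            it = iter(sequence)
            return all(item in it for item in pattern)

        def support(pattern):
            return sum(contains(seq, pattern) for seq in database)

        alphabet = sorted({item for seq in database for item in seq})
        candidates = [(x, y) for x in alphabet for y in alphabet if x != y]
        frequent = {p: support(p) for p in candidates if support(p) >= 2}
        print(frequent)   # e.g. ('a', 'c') is supported by three of the four sequences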

  9. Method development for the determination of arsenic by sequential injection/anodic stripping voltammetry using long-lasting gold-modified screen-printed carbon electrode.

    Science.gov (United States)

    Punrat, Eakkasit; Chuanuwatanakul, Suchada; Kaneta, Takashi; Motomizu, Shoji; Chailapakul, Orawon

    2013-11-15

    An automated method has been developed for determining the concentration of inorganic arsenic. The technique uses sequential injection/anodic stripping voltammetry with a long-lasting gold-modified screen-printed carbon electrode. The long-lasting gold electrode was electrochemically deposited onto a screen-printed carbon electrode at a potential of -0.5 V vs. Ag/AgCl in a supporting electrolyte solution of 1M hydrochloric acid. Under optimal conditions and the applied potentials, the electrode demonstrated that it can be used for a long time without a renewal process. The linear range for the determination of arsenic(III) was 1-100 μg L(-1), and the limit of detection (LOD) in standard solutions was as low as 0.03 μg L(-1) for a deposition time of 120 s and sample volume of 1 mL. This method was used to determine the concentration of arsenic(III) in water samples with satisfactory results. The LOD in real samples was found to be 0.5 μg L(-1). In addition, speciation between arsenic(III) and arsenic(V) has been achieved with the proposed method using deposition potentials of -0.5 V and -1.5 V for the determination of the arsenic(III) concentration and the total arsenic concentration, respectively; the results were acceptable. The proposed method is an automated system that offers a less expensive alternative for determining trace amounts of inorganic arsenic. © 2013 Elsevier B.V. All rights reserved.

  10. Engineering applications of heuristic multilevel optimization methods

    Science.gov (United States)

    Barthelemy, Jean-Francois M.

    1989-01-01

    Some engineering applications of heuristic multilevel optimization methods are presented and the discussion focuses on the dependency matrix that indicates the relationship between problem functions and variables. Coordination of the subproblem optimizations is shown to be typically achieved through the use of exact or approximate sensitivity analysis. Areas for further development are identified.

  11. Preconditioning of iterative methods - theory and applications

    Czech Academy of Sciences Publication Activity Database

    Axelsson, Owe; Blaheta, Radim; Neytcheva, M.; Pultarová, I.

    2015-01-01

    Vol. 22, No. 6 (2015), pp. 901-902. ISSN 1070-5325. Institutional support: RVO:68145535. Keywords: preconditioning; iterative methods; applications. Subject RIV: BA - General Mathematics. Impact factor: 1.431, year: 2015. http://onlinelibrary.wiley.com/doi/10.1002/nla.2016/epdf

  12. Orbital angular momentum transfer and spin dealignment mechanisms in the deep inelastic collisions Ar+Bi and Ni+Pb using the sequential fission method

    International Nuclear Information System (INIS)

    Steckmeyer, J.C.

    1984-10-01

    Angular momentum transfer and spin dealignment mechanisms have been studied in the deep inelastic collisions Ar+Bi and Ni+Pb using the sequential fission method. This experimental technique consists in measuring the angular distribution of the fission fragments of a heavy nucleus in coincidence with the reaction partner, and leads to a complete determination of the heavy-nucleus spin distribution. High spin values are transferred to the heavy nucleus in the interaction, indicating that the dinuclear system has reached the rigid-rotation limit. A theoretical model, taking into account the excitation of surface vibrations of the nuclei and the nucleon transfer between the two partners, is able to reproduce the high spin values measured in our experiments. The spin fluctuations are important, with values of the order of 15 to 20 ħ units. These fluctuations increase with the charge transfer from the projectile to the target and with the total kinetic energy loss. The spin dealignment mechanisms act mainly in a plane approximately perpendicular to the heavy recoil direction in the laboratory system. These results are well described by a dynamical transport model based on the stochastic exchange of individual nucleons between the two nuclei during the interaction. The origin of the dealignment mechanisms in the spin transfer processes is thus related to the statistical nature of the nucleon exchange. However, other mechanisms can contribute to the spin dealignment, such as surface vibrations and nuclear deformations, as well as their relative orientations [fr

  13. Effects of a self-management education program on self-efficacy in patients with COPD: a mixed-methods sequential explanatory designed study

    Science.gov (United States)

    Ng, Wai I; Smith, Graeme Drummond

    2017-01-01

    Background Self-management education programs (SMEPs) are potentially effective in the symptomatic management of COPD. Little is presently known about the effectiveness of these programs in Chinese COPD patients. The objective of this study was to evaluate the effectiveness of a specifically designed SMEP on levels of self-efficacy in Chinese patients with COPD. Materials and methods Based on the Medical Research Council framework for evaluating complex interventions, an exploratory-phase randomized controlled trial was employed to examine the effects of an SMEP. Self-efficacy was the primary outcome, measured with the COPD Self-efficacy Scale at baseline and 6 months after the program. Qualitative data were sequentially collected from these patients via three focus groups to supplement the quantitative findings. Results The experimental group displayed significant improvement in their general self-efficacy (Z =−2.44, P=0.015) and specifically in confronting 1) physical exertion (Z =−2.57, P=0.01) and 2) weather/environment effects (Z =−2.63, P<…) […] Chinese patients with COPD. Further attention should be given to cultural considerations when developing this type of intervention in Chinese populations with COPD and other chronic diseases. PMID:28790816

  14. Sequential application of NaHCO3, CaCl2 and Candida oleophila (isolate 13L) affects significantly Penicillium expansum growth and the infection degree in apples.

    Science.gov (United States)

    Molinu, M G; Pani, G; Venditti, T; Dore, A; Ladu, G; D'Hallewin, G

    2011-01-01

    The employment of biocontrol agents to restrain postharvest pathogens is an encouraging approach, although their efficacy and consistency are still below those of synthetic pesticides. To date, the 'integrated control strategy' seems to be the most promising way to overcome this gap. Here, we report the feasibility of controlling postharvest decay caused by Penicillium expansum in apples by a 2 min, single or sequential, immersion in water with an antagonistic yeast (Candida oleophila, isolate '13L'), 2% NaHCO3 (SBC) or 1% CaCl2. The treatments were carried out on apples cv. 'Miali', either un-wounded, wounded or wound-pathogen inoculated, and then stored at 2 degrees C for 30 d followed by a 6 d simulated marketing period at 20 degrees C, or alternatively stored only for 7 d at 20 degrees C. As a general rule, the best results were attained when CaCl2 was applied with the yeast or when preceded by the SBC treatment. When wounding and inoculation took place 24 h before the treatment, the latter application sequence of the two salts was three times more effective than the treatment with the antagonist alone, and one time more effective when they took place 24 h after the treatment. Interestingly, apples immersed in the 2% SBC solution alone had the highest percentage of decay during storage and when inoculated before moving to the simulated marketing period at 20 degrees C.

  15. Phi-value analysis of a linear, sequential reaction mechanism: theory and application to ion channel gating.

    Science.gov (United States)

    Zhou, Yu; Pearson, John E; Auerbach, Anthony

    2005-12-01

    We derive the analytical form of a rate-equilibrium free-energy relationship (with slope Phi) for a bounded, linear chain of coupled reactions having arbitrary connecting rate constants. The results confirm previous simulation studies showing that Phi-values reflect the position of the perturbed reaction within the chain, with reactions occurring earlier in the sequence producing higher Phi-values than those occurring later in the sequence. The derivation includes an expression for the transmission coefficients of the overall reaction based on the rate constants of an arbitrary, discrete, finite Markov chain. The results indicate that experimental Phi-values can be used to calculate the relative heights of the energy barriers between intermediate states of the chain but provide no information about the energies of the wells along the reaction path. Application of the equations to the case of diliganded acetylcholine receptor channel gating suggests that the transition-state ensemble for this reaction is nearly flat. Although this mechanism accounts for many of the basic features of diliganded and unliganded acetylcholine receptor channel gating, the experimental rate-equilibrium free-energy relationships appear to be more linear than those predicted by the theory.
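
    The position dependence of Phi described above can be explored numerically. The following sketch is an illustration rather than the authors' derivation: it models a bounded linear chain as a birth-death Markov process, approximates the overall forward rate by the inverse mean first-passage time from the first to the last state, perturbs the forward rate constant of one step, and estimates Phi as the ratio of the change in ln(rate) to the change in ln(equilibrium constant). All rate constants are invented.

      import numpy as np

      def forward_rate(kf, kb):
          """Approximate the overall forward rate of a linear chain 1 <-> 2 <-> ... <-> n
          as 1 / (mean first-passage time from state 1 to an absorbing state n)."""
          n = len(kf) + 1                          # number of states
          A = np.zeros((n - 1, n - 1))             # transient states 1..n-1
          b = -np.ones(n - 1)
          for i in range(n - 1):
              A[i, i] -= kf[i]                     # leave state i+1 forwards
              if i + 1 < n - 1:
                  A[i, i + 1] += kf[i]
              if i > 0:
                  A[i, i] -= kb[i - 1]             # leave state i+1 backwards
                  A[i, i - 1] += kb[i - 1]
          T = np.linalg.solve(A, b)                # mean first-passage times
          return 1.0 / T[0]

      def phi_value(kf, kb, step, factor=1.05):
          """Estimate Phi = d ln(rate) / d ln(K) for a small perturbation of one step."""
          k0 = forward_rate(kf, kb)
          K0 = np.prod(np.array(kf) / np.array(kb))
          kf2 = list(kf)
          kf2[step] *= factor                      # perturb the chosen step's forward rate
          k1 = forward_rate(kf2, kb)
          K1 = np.prod(np.array(kf2) / np.array(kb))
          return (np.log(k1) - np.log(k0)) / (np.log(K1) - np.log(K0))

      # Invented 4-step chain; Phi reflects where the perturbed step sits in the sequence.
      kf = [50.0, 20.0, 30.0, 10.0]
      kb = [5.0, 10.0, 15.0, 2.0]
      print([round(phi_value(kf, kb, s), 3) for s in range(4)])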

  16. Photogrammetric methods of measurement in industrial applications

    International Nuclear Information System (INIS)

    Godding, R.; Groene, A.; Heinrich, G.; Schneider, C.T.

    1993-01-01

    Methods for 3D measurement are required for very varied applications in the industrial field, including tasks of quality assurance and plant monitoring, among others. It should be possible to apply the process flexibly; it should require interruptions of production that are as short as possible and should meet the required accuracies. These requirements can be met by photogrammetric methods of measurement. The article introduces these methods and shows their capabilities with various selected examples (e.g. the replacement of large components in a pressurized water reactor, and aircraft measurements). (orig./DG) [de

  17. The early identification of risk factors on the pathway to school dropout in the SIODO study: a sequential mixed-methods study

    Directory of Open Access Journals (Sweden)

    Theunissen Marie-José

    2012-11-01

    Full Text Available Abstract Background School dropout is a persisting problem with major socioeconomic consequences. Although poor health probably contributes to pathways leading to school dropout and health is likely negatively affected by dropout, these issues are relatively absent on the public health agenda. This emphasises the importance of integrative research aimed at identifying children at risk for school dropout at an early stage, discovering how socioeconomic status and gender affect health-related pathways that lead to dropout and developing a prevention tool that can be used in public health services for youth. Methods/design The SIODO study is a sequential mixed-methods study. A case–control study will be conducted among 18 to 24 year olds in the south of the Netherlands (n = 580). Data are currently being collected from compulsory education departments at municipalities (dropout data), regional public health services (developmental data from birth onwards), and an additional questionnaire has been sent to participants (e.g. personality data). Advanced analyses, including cluster and factor analyses, will be used to identify children at risk at an early stage. Using the quantitative data, we have planned individual interviews with participants and focus groups with important stakeholders such as parents, teachers and public health professionals. A thematic content analysis will be used to analyse the qualitative data. Discussion The SIODO study will use a life-course perspective, the ICF-CY model to group the determinants and a mixed-methods design. In this respect, the SIODO study is innovative because it both broadens and deepens the study of health-related determinants of school dropout. It examines how these determinants contribute to socioeconomic and gender differences in health and contributes to the development of a tool that can be used in public health practice to tackle the problem of school dropout at its roots.

  18. The metabolic network of Clostridium acetobutylicum: Comparison of the approximate Bayesian computation via sequential Monte Carlo (ABC-SMC) and profile likelihood estimation (PLE) methods for determinability analysis.

    Science.gov (United States)

    Thorn, Graeme J; King, John R

    2016-01-01

    The Gram-positive bacterium Clostridium acetobutylicum is an anaerobic endospore-forming species which produces acetone, butanol and ethanol via the acetone-butanol (AB) fermentation process, leading to biofuels including butanol. In previous work we looked to estimate the parameters in an ordinary differential equation model of the glucose metabolism network using data from pH-controlled continuous culture experiments. Here we combine two approaches, namely the approximate Bayesian computation via an existing sequential Monte Carlo (ABC-SMC) method (to compute credible intervals for the parameters), and the profile likelihood estimation (PLE) (to improve the calculation of confidence intervals for the same parameters), the parameters in both cases being derived from experimental data from forward shift experiments. We also apply the ABC-SMC method to investigate which of the models introduced previously (one non-sporulation and four sporulation models) have the greatest strength of evidence. We find that the joint approximate posterior distribution of the parameters determines the same parameters as previously, including all of the basal and increased enzyme production rates and enzyme reaction activity parameters, as well as the Michaelis-Menten kinetic parameters for glucose ingestion, while other parameters are not as well-determined, particularly those connected with the internal metabolites acetyl-CoA, acetoacetyl-CoA and butyryl-CoA. We also find that the approximate posterior is strongly non-Gaussian, indicating that our previous assumption of elliptical contours of the distribution is not valid, which has the effect of reducing the numbers of pairs of parameters that are (linearly) correlated with each other. Calculations of confidence intervals using the PLE method back this up. Finally, we find that all five of our models are equally likely, given the data available at present. Copyright © 2015 Elsevier Inc. All rights reserved.
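
    As a generic illustration of the ABC-SMC scheme named above (a toy example, not the authors' metabolic-network model), the sketch below infers a single rate constant of an exponential-decay model from noisy synthetic data, using a shrinking tolerance schedule, a Gaussian perturbation kernel and importance weights. The model, prior and tolerances are all invented.

      import numpy as np

      rng = np.random.default_rng(0)

      # Toy "experiment": exponential decay x(t) = 5 * exp(-k t) with an invented true rate.
      t_obs = np.linspace(0, 10, 20)
      true_k = 0.35
      data = 5.0 * np.exp(-true_k * t_obs) + rng.normal(0, 0.1, t_obs.size)

      def simulate(k):
          return 5.0 * np.exp(-k * t_obs)

      def distance(sim):
          return np.sqrt(np.mean((sim - data) ** 2))

      def prior_sample():
          return rng.uniform(0.01, 2.0)

      def prior_pdf(k):
          return 1.0 / (2.0 - 0.01) if 0.01 <= k <= 2.0 else 0.0

      n_particles = 200
      eps_schedule = [1.0, 0.5, 0.25, 0.15]        # shrinking tolerances
      particles = np.array([prior_sample() for _ in range(n_particles)])
      weights = np.full(n_particles, 1.0 / n_particles)

      for gen, eps in enumerate(eps_schedule):
          sigma = 2.0 * np.sqrt(np.cov(particles, aweights=weights)) + 1e-6
          new_particles, new_weights = [], []
          while len(new_particles) < n_particles:
              if gen == 0:
                  cand = prior_sample()            # first generation: sample the prior
              else:                                # later generations: resample and perturb
                  cand = particles[rng.choice(n_particles, p=weights)] + rng.normal(0, sigma)
              if prior_pdf(cand) == 0 or distance(simulate(cand)) > eps:
                  continue
              if gen == 0:
                  w = 1.0
              else:                                # weight: prior / sum_j w_j K(cand | theta_j)
                  kern = np.exp(-0.5 * ((cand - particles) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
                  w = prior_pdf(cand) / np.sum(weights * kern)
              new_particles.append(cand)
              new_weights.append(w)
          particles = np.array(new_particles)
          weights = np.array(new_weights)
          weights /= weights.sum()

      print(f"approximate posterior mean k = {np.average(particles, weights=weights):.3f} (true {true_k})")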

  19. Application of reliability methods in Ontario Hydro

    International Nuclear Information System (INIS)

    Jeppesen, R.; Ravishankar, T.J.

    1985-01-01

    Ontario Hydro has established a reliability program in support of its substantial nuclear program. Application of the reliability program to achieve both production and safety goals is described. The value of such a reliability program is evident in the record of Ontario Hydro's operating nuclear stations. The factors which have contributed to the success of the reliability program are identified as: line management's commitment to reliability; selective and judicious application of reliability methods; establishing performance goals and monitoring in-service performance; and collection, distribution, review and utilization of performance information to facilitate cost-effective achievement of goals and improvements. (orig.)

  20. Structural equation modeling methods and applications

    CERN Document Server

    Wang, Jichuan

    2012-01-01

    A reference guide for applications of SEM using Mplus. Structural Equation Modeling: Applications Using Mplus is intended as both a teaching resource and a reference guide. Written in non-mathematical terms, this book focuses on the conceptual and practical aspects of Structural Equation Modeling (SEM). Basic concepts and examples of various SEM models are demonstrated along with recently developed advanced methods, such as mixture modeling and model-based power analysis and sample size estimation for SEM. The statistical modeling program, Mplus, is also featured and provides researchers with a …

  1. Application of cybernetic methods in physics

    Energy Technology Data Exchange (ETDEWEB)

    Fradkov, Aleksandr L [Institute of Problems of Mechanical Engineering, Russian Academy of Sciences, St.-Petersburg (Russian Federation)

    2005-02-28

    Basic aspects of the subject and methodology for a new and rapidly developing area of research that has emerged at the intersection of physics and control theory (cybernetics) and emphasizes the application of cybernetic methods to the study of physical systems are reviewed. Speed-gradient and Hamiltonian solutions for energy control problems in conservative and dissipative systems are presented. Application examples such as the Kapitza pendulum, controlled overcoming of a potential barrier, and controlling coupled oscillators and molecular systems are presented. A speed-gradient approach to modeling the dynamics of physical systems is discussed. (reviews of topical problems)

  2. Application of cybernetic methods in physics

    International Nuclear Information System (INIS)

    Fradkov, Aleksandr L

    2005-01-01

    Basic aspects of the subject and methodology for a new and rapidly developing area of research that has emerged at the intersection of physics and control theory (cybernetics) and emphasizes the application of cybernetic methods to the study of physical systems are reviewed. Speed-gradient and Hamiltonian solutions for energy control problems in conservative and dissipative systems are presented. Application examples such as the Kapitza pendulum, controlled overcoming of a potential barrier, and controlling coupled oscillators and molecular systems are presented. A speed-gradient approach to modeling the dynamics of physical systems is discussed. (reviews of topical problems)

  3. Efficacy of premixed versus sequential administration of ...

    African Journals Online (AJOL)

    sequential administration in separate syringes on block characteristics, haemodynamic parameters, side effect profile and postoperative analgesic requirement. Trial design: This was a prospective, randomised clinical study. Method: Sixty orthopaedic patients scheduled for elective lower limb surgery under spinal ...

  4. A non-homogeneous dynamic Bayesian network with sequentially coupled interaction parameters for applications in systems and synthetic biology.

    Science.gov (United States)

    Grzegorczyk, Marco; Husmeier, Dirk

    2012-07-12

    An important and challenging problem in systems biology is the inference of gene regulatory networks from short non-stationary time series of transcriptional profiles. A popular approach that has been widely applied to this end is based on dynamic Bayesian networks (DBNs), although traditional homogeneous DBNs fail to model the non-stationarity and time-varying nature of the gene regulatory processes. Various authors have therefore recently proposed combining DBNs with multiple changepoint processes to obtain time varying dynamic Bayesian networks (TV-DBNs). However, TV-DBNs are not without problems. Gene expression time series are typically short, which leaves the model over-flexible, leading to over-fitting or inflated inference uncertainty. In the present paper, we introduce a Bayesian regularization scheme that addresses this difficulty. Our approach is based on the rationale that changes in gene regulatory processes appear gradually during an organism's life cycle or in response to a changing environment, and we have integrated this notion in the prior distribution of the TV-DBN parameters. We have extensively tested our regularized TV-DBN model on synthetic data, in which we have simulated short non-homogeneous time series produced from a system subject to gradual change. We have then applied our method to real-world gene expression time series, measured during the life cycle of Drosophila melanogaster, under artificially generated constant light condition in Arabidopsis thaliana, and from a synthetically designed strain of Saccharomyces cerevisiae exposed to a changing environment.
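
    The coupling prior sketched below illustrates the regularization idea in a much-reduced form: segment-specific regression coefficients are shrunk towards the coefficients of the preceding segment rather than towards zero. It is a toy MAP calculation with an invented segmentation, data set and coupling strength, not the authors' full TV-DBN inference scheme.

      import numpy as np

      rng = np.random.default_rng(0)

      def coupled_segment_fit(X_segs, y_segs, lam=5.0, noise_var=0.1):
          """MAP estimates of per-segment coefficients w_h with prior w_h ~ N(w_{h-1}, I/lam).

          For each segment h, in temporal order, this solves
              w_h = argmin ||y_h - X_h w||^2 / noise_var + lam * ||w - w_{h-1}||^2,
          i.e. ridge regression centred on the previous segment's estimate.
          """
          d = X_segs[0].shape[1]
          w_prev = np.zeros(d)                      # first segment shrunk towards zero
          estimates = []
          for X, y in zip(X_segs, y_segs):
              A = X.T @ X / noise_var + lam * np.eye(d)
              b = X.T @ y / noise_var + lam * w_prev
              w_prev = np.linalg.solve(A, b)
              estimates.append(w_prev)
          return estimates

      # Invented data: three time segments whose true coefficients drift gradually.
      true_w = [np.array([1.0, 0.0]), np.array([0.8, 0.3]), np.array([0.6, 0.6])]
      X_segs = [rng.normal(size=(15, 2)) for _ in true_w]
      y_segs = [X @ w + rng.normal(0, 0.3, len(X)) for X, w in zip(X_segs, true_w)]

      for h, w in enumerate(coupled_segment_fit(X_segs, y_segs)):
          print(f"segment {h}: {np.round(w, 2)}")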

  5. Finite Element Methods and Their Applications

    CERN Document Server

    Chen, Zhangxin

    2005-01-01

    This book serves as a text for one- or two-semester courses for upper-level undergraduates and beginning graduate students and as a professional reference for people who want to solve partial differential equations (PDEs) using finite element methods. The author has attempted to introduce every concept in the simplest possible setting and maintain a level of treatment that is as rigorous as possible without being unnecessarily abstract. Quite a lot of attention is given to discontinuous finite elements, characteristic finite elements, and to the applications in fluid and solid mechanics including applications to porous media flow, and applications to semiconductor modeling. An extensive set of exercises and references in each chapter are provided.

  6. Sequential charged particle reaction

    International Nuclear Information System (INIS)

    Hori, Jun-ichi; Ochiai, Kentaro; Sato, Satoshi; Yamauchi, Michinori; Nishitani, Takeo

    2004-01-01

    The effective cross sections for producing sequential reaction products in F82H, pure vanadium and LiF with 14.9-MeV neutrons were obtained and compared with estimated ones. Since the sequential reactions depend on the behavior of the secondary charged particles, the effective cross sections correspond to the target nuclei and the material composition. The effective cross sections were also estimated using the EAF libraries and compared with the experimental ones; there were large discrepancies between the estimated and experimental values. Additionally, we showed the contribution of the sequential reactions to the induced activity and dose rate in the boundary region with water. From the present study, it has been clarified that sequential reactions are of great importance for evaluating the dose rates around the surface of cooling pipes and the activated corrosion products. (author)

  7. Computational methods for industrial radiation measurement applications

    International Nuclear Information System (INIS)

    Gardner, R.P.; Guo, P.; Ao, Q.

    1996-01-01

    Computational methods have been used with considerable success to complement radiation measurements in solving a wide range of industrial problems. The almost exponential growth of computer capability and applications in the last few years leads to a "black box" mentality for radiation measurement applications. If a black box is defined as any radiation measurement device that is capable of measuring the parameters of interest when a wide range of operating and sample conditions may occur, then the development of computational methods for industrial radiation measurement applications should now be focused on the black box approach and the deduction of properties of interest from the response with acceptable accuracy and reasonable efficiency. Nowadays, increasingly better understanding of radiation physical processes, more accurate and complete fundamental physical data, and more advanced modeling and software/hardware techniques have made it possible to make giant strides in that direction with new ideas implemented with computer software. The Center for Engineering Applications of Radioisotopes (CEAR) at North Carolina State University has been working on a variety of projects in the area of radiation analyzers and gauges for accomplishing this for quite some time, and they are discussed here with emphasis on current accomplishments

  8. Low temperature plasma technology methods and applications

    CERN Document Server

    Chu, Paul K

    2013-01-01

    Written by a team of pioneering scientists from around the world, Low Temperature Plasma Technology: Methods and Applications brings together recent technological advances and research in the rapidly growing field of low temperature plasmas. The book provides a comprehensive overview of related phenomena such as plasma bullets, plasma penetration into biofilms, discharge-mode transition of atmospheric pressure plasmas, and self-organization of microdischarges. It describes relevant technology and diagnostics, including nanosecond pulsed discharge, cavity ringdown spectroscopy, and laser-induced …

  9. Microautoradiographic methods and their applications in biology

    International Nuclear Information System (INIS)

    Benes, L.

    1978-01-01

    A survey of microautoradiographic methods and of their application in biology is given. The current state of biological microautoradiography is shown, focusing on the efficiency of the techniques and on special problems arising in autoradiographic investigations in biology. Four more or less independent fields of autoradiography are considered. In describing autoradiographic techniques, two methodological tasks are emphasized: the further development of labelling techniques in all metabolic studies, and the instrumentation and automation of autoradiograph evaluation. (author)

  10. Keller-box method and its application

    CERN Document Server

    Prasad, Kerehalli V

    2014-01-01

    Most of the problems arising in science and engineering are nonlinear. They are inherently difficult to solve. Traditional analytical approximations are valid only for weakly nonlinear problems, and often break down for problems with strong nonlinearity. This book presents the current theoretical developments and applications of the Keller-box method to nonlinear problems. The first half of the book addresses basic concepts to understand the theoretical framework for the method. In the second half of the book, the authors give a number of examples of coupled nonlinear problems that have been solved

  11. Harmony Search Method: Theory and Applications

    Directory of Open Access Journals (Sweden)

    X. Z. Gao

    2015-01-01

    Full Text Available The Harmony Search (HS) method is an emerging metaheuristic optimization algorithm, which has been employed to cope with numerous challenging tasks during the past decade. In this paper, the essential theory and applications of the HS algorithm are first described and reviewed. Several typical variants of the original HS are next briefly explained. As an example of a case study, a modified HS method inspired by the idea of Pareto-dominance-based ranking is also presented. It is further applied to handle a practical wind generator optimal design problem.

  12. Semiautomatic methods for segmentation of the proliferative tumour volume on sequential FLT PET/CT images in head and neck carcinomas and their relation to clinical outcome

    Energy Technology Data Exchange (ETDEWEB)

    Arens, Anne I.J.; Grootjans, Willem; Oyen, Wim J.G.; Visser, Eric P. [Radboud University Medical Center, Department of Nuclear Medicine, P.O. Box 9101, Nijmegen (Netherlands); Troost, Esther G.C. [Radboud University Medical Center, Department of Radiation Oncology, Nijmegen (Netherlands); Maastricht University Medical Centre, MAASTRO clinic, GROW School for Oncology and Developmental Biology, Maastricht (Netherlands); Hoeben, Bianca A.W.; Bussink, Johan; Kaanders, Johannes H.A.M. [Radboud University Medical Center, Department of Radiation Oncology, Nijmegen (Netherlands); Lee, John A.; Gregoire, Vincent [St-Luc University Hospital, Department of Radiation Oncology, Universite Catholique de Louvain, Brussels (Belgium); Hatt, Mathieu; Visvikis, Dimitris [Laboratoire de Traitement de l' Information Medicale (LaTIM), INSERM UMR1101, Brest (France)

    2014-05-15

    Radiotherapy of head and neck cancer induces changes in tumour cell proliferation during treatment, which can be depicted by the PET tracer 18F-fluorothymidine (FLT). In this study, three advanced semiautomatic PET segmentation methods for delineation of the proliferative tumour volume (PV) before and during (chemo)radiotherapy were compared and related to clinical outcome. The study group comprised 46 patients with 48 squamous cell carcinomas of the head and neck, treated with accelerated (chemo)radiotherapy, who underwent FLT PET/CT prior to treatment and in the 2nd and 4th week of therapy. Primary gross tumour volumes were visually delineated on CT images (GTV-CT). PVs were visually determined on all PET scans (PV-VIS). The following semiautomatic segmentation methods were applied to the sequential PET scans: background-subtracted relative-threshold level (PV-RTL), a gradient-based method using the watershed transform algorithm and hierarchical clustering analysis (PV-W&C), and a fuzzy locally adaptive Bayesian algorithm (PV-FLAB). Pretreatment PV-VIS correlated best with PV-FLAB and GTV-CT. Correlations with PV-RTL and PV-W&C were weaker although statistically significant. During treatment, PV-VIS, PV-W&C and PV-FLAB decreased significantly over time, with the steepest decline for PV-FLAB. Among these advanced segmentation methods, PV-FLAB was the most robust in segmenting volumes in the third scan (67 % of tumours as compared to 40 % for PV-W&C and 27 % for PV-RTL). A decrease in PV-FLAB above the median between the pretreatment scan and the scan obtained in the 4th week was associated with better disease-free survival (4 years 90 % versus 53 %). In patients with head and neck cancer, FLAB proved to be the best performing method for segmentation of the PV on repeat FLT PET/CT scans during (chemo)radiotherapy. This may …

  13. Monte Carlo Tree Search for Continuous and Stochastic Sequential Decision Making Problems

    International Nuclear Information System (INIS)

    Couetoux, Adrien

    2013-01-01

    In this thesis, I studied sequential decision making problems, with a focus on the unit commitment problem. Traditionally solved by dynamic programming methods, this problem is still a challenge, due to its high dimension and to the sacrifices made on the accuracy of the model to apply state-of-the-art methods. I investigated the applicability of Monte Carlo Tree Search methods to this problem, and to other single-player, stochastic and continuous sequential decision making problems. In doing so, I obtained a consistent and anytime algorithm that can easily be combined with existing strong heuristic solvers. (author)
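
    A minimal UCT-style Monte Carlo Tree Search sketch for a generic stochastic sequential decision problem is given below. It is an illustration, not the thesis algorithm: the environment interface (step and rollout), the toy environment, the discrete action set and all parameters are assumptions, and each action child stores a single sampled successor state, whereas proper stochastic variants add chance nodes or progressive widening.

      import math
      import random

      class Node:
          def __init__(self, state):
              self.state = state
              self.children = {}                   # action -> Node
              self.visits = 0
              self.value = 0.0                     # running mean of sampled returns

      def uct_search(root_state, env, actions, n_iter=500, c=1.4, max_depth=20):
          """env.step(state, action) -> (next_state, reward, done); env.rollout(state) -> value."""
          root = Node(root_state)
          for _ in range(n_iter):
              node, path, total, done = root, [root], 0.0, False
              for _ in range(max_depth):                       # selection / expansion
                  if done:
                      break
                  untried = [a for a in actions if a not in node.children]
                  if untried:
                      action = random.choice(untried)
                  else:                                        # UCB1 selection
                      action = max(actions, key=lambda a: node.children[a].value
                                   + c * math.sqrt(math.log(node.visits + 1)
                                                   / (node.children[a].visits + 1e-9)))
                  next_state, reward, done = env.step(node.state, action)
                  total += reward
                  if action not in node.children:              # expand one new child, then stop
                      node.children[action] = Node(next_state)
                      node = node.children[action]
                      path.append(node)
                      break
                  node = node.children[action]
                  path.append(node)
              if not done:
                  total += env.rollout(node.state)             # cheap estimate of the remaining return
              for n in path:                                   # backpropagation of the return
                  n.visits += 1
                  n.value += (total - n.value) / n.visits
          return max(root.children, key=lambda a: root.children[a].visits)

      class ToyEnv:
          """Tiny stochastic chain: 'right' moves towards the goal state 5, 'stay' does not."""
          def step(self, state, action):
              nxt = state + (1 if action == "right" else 0)
              reward = random.gauss(1.0 if action == "right" else 0.1, 0.1)
              return nxt, reward, nxt >= 5
          def rollout(self, state):
              return float(5 - min(state, 5))                  # crude heuristic value

      print(uct_search(0, ToyEnv(), ["right", "stay"]))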

  14. C-quence: a tool for analyzing qualitative sequential data.

    Science.gov (United States)

    Duncan, Starkey; Collier, Nicholson T

    2002-02-01

    C-quence is a software application that matches sequential patterns of qualitative data specified by the user and calculates the rate of occurrence of these patterns in a data set. Although it was designed to facilitate analyses of face-to-face interaction, it is applicable to any data set involving categorical data and sequential information. C-quence queries are constructed using a graphical user interface. The program does not limit the complexity of the sequential patterns specified by the user.
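
    A toy sketch of the kind of query such a tool answers (not C-quence's actual engine, query language or file format): count how often a user-specified sequential pattern of categorical codes occurs in a coded interaction stream, either as a contiguous run or as a gappy subsequence, and report its rate per event. The event codes are invented.

      from typing import List

      def count_pattern(events: List[str], pattern: List[str], contiguous: bool = True) -> int:
          """Count occurrences of a sequential pattern of categorical codes."""
          if contiguous:                                   # pattern as an uninterrupted run
              return sum(events[i:i + len(pattern)] == pattern
                         for i in range(len(events) - len(pattern) + 1))
          count, j = 0, 0                                  # pattern as a (gappy) subsequence
          for e in events:
              if e == pattern[j]:
                  j += 1
                  if j == len(pattern):
                      count, j = count + 1, 0
          return count

      # Invented coded face-to-face interaction stream (speaker turns / gestures).
      stream = ["A_speaks", "B_nods", "A_speaks", "B_speaks", "A_nods", "B_speaks"]
      query = ["A_speaks", "B_speaks"]
      hits = count_pattern(stream, query, contiguous=True)
      print(f"{hits} occurrence(s); rate = {hits / len(stream):.2f} per event")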

  15. Automatic synthesis of sequential control schemes

    International Nuclear Information System (INIS)

    Klein, I.

    1993-01-01

    Of all hard- and software developed for industrial control purposes, the majority is devoted to sequential, or binary valued, control and only a minor part to classical linear control. Typically, the sequential parts of the controller are invoked during startup and shut-down to bring the system into its normal operating region and into some safe standby region, respectively. Despite its importance, fairly little theoretical research has been devoted to this area, and sequential control programs are therefore still created manually without much theoretical support to obtain a systematic approach. We propose a method to create sequential control programs automatically. The main idea is to spend some effort off-line modelling the plant, and from this model generate the control strategy, that is, the plan. The plant is modelled using action structures, thereby concentrating on the actions instead of the states of the plant. In general the planning problem shows exponential complexity in the number of state variables. However, by focusing on the actions, we can identify problem classes as well as algorithms such that the planning complexity is reduced to polynomial complexity. We prove that these algorithms are sound, i.e., the generated solution will solve the stated problem, and complete, i.e., if the algorithms fail, then no solution exists. The algorithms generate a plan as a set of actions and a partial order on this set specifying the execution order. The generated plan is proven to be minimal and maximally parallel. For a larger class of problems we propose a method to split the original problem into a number of simple problems that can each be solved using one of the presented algorithms. It is also shown how a plan can be translated into a GRAFCET chart, and to illustrate these ideas we have implemented a planning tool, i.e., a system that is able to automatically create control schemes. Such a tool can of course also be used on-line if it is fast enough. This …

  16. Perceptions and experiences of a gender gap at a Canadian research institute and potential strategies to mitigate this gap: a sequential mixed-methods study.

    Science.gov (United States)

    Mascarenhas, Alekhya; Moore, Julia E; Tricco, Andrea C; Hamid, Jemila; Daly, Caitlin; Bain, Julie; Jassemi, Sabrina; Kiran, Tara; Baxter, Nancy; Straus, Sharon E

    2017-01-01

    The gender gap in academia is long-standing. Failure to ensure that our academic faculty reflect our student pool and national population deprives Canada of talent. We explored the gender distribution and perceptions of the gender gap at a Canadian university-affiliated, hospital-based research institute. We completed a sequential mixed-methods study. In phase 1, we used the research institute's registry of scientists (1999-2014) and estimated overall prevalence of a gender gap and the gap with respect to job description (e.g., associate v. full-time) and research discipline. In phase 2, we conducted qualitative interviews to provide context for phase 1 data. Both purposive and snowball sampling were used for recruitment. The institute included 30.1% ( n = 62) women and 69.9% ( n = 144) men, indicating a 39.8% gender gap. Most full-time scientists (60.3%, n = 70) were clinicians; there were 54.2% more male than female clinician scientists. Ninety-five percent of basic scientists were men, indicating a 90.5% gap. Seven key themes emerged from 21 interviews, including perceived impact of the gender gap, factors perceived to influence the gap, recruitment trends, presence of institutional support, mentorship and suggestions to mitigate the gap. Several factors were postulated to contribute to the gender gap, including unconscious bias in hiring. A substantial gender gap exists within this research institute. Participants identified strategies to address this gap, such as establishing transparent search processes, providing opportunities for informal networking and mentorship of female scientists and establishing institutional support for work-life balance.

  17. Sequential application of Fenton and ozone-based oxidation process for the abatement of Ni-EDTA containing nickel plating effluents.

    Science.gov (United States)

    Zhao, Zilong; Liu, Zekun; Wang, Hongjie; Dong, Wenyi; Wang, Wei

    2018-07-01

    Treatment of Ni-EDTA in industrial nickel plating effluents was investigated by the integrated application of Fenton and ozone-based oxidation processes. In determining the integration sequence it was found that Fenton oxidation presented a higher apparent kinetic rate constant for Ni-EDTA oxidation and a higher capacity for contamination load than the ozone-based oxidation process; the latter, however, was favourable for guaranteeing further mineralization of organic substances, especially at low concentration. A serial-connection mode of the two oxidation processes was appraised; Fenton effluent treated by hydroxide precipitation and filtration negatively affected the overall performance of the sequential system, as evidenced by the removal efficiencies of Ni2+ and TOC dropping from 99.8% to 98.7% and from 74.8% to 66.6%, respectively. By comparison, the O3/Fe2+ oxidation process proved more effective than the other processes (e.g. O3-Fe2+, O3/H2O2/Fe2+, O3/H2O2-Fe2+), and the final effluent Ni2+ concentration could satisfy the discharge standard (<… for the Fenton reaction, initial influent pH of 3.0, O3 dosage of 252 mg L-1, Fe2+ of 150 mg L-1, and reaction time of 30 min for the O3/Fe2+ oxidation). Furthermore, a pilot-scale test was carried out to study the practical treatability of real nickel plating effluent, revealing effective removal of some other co-existing contaminants, to which the Fenton reaction contributed most, with percentages ranging from 72.41% to 93.76%. The economic cost advantage makes it a promising alternative to continuous Fenton oxidation. Copyright © 2018 Elsevier Ltd. All rights reserved.

  18. Application of Formal Methods in Software Engineering

    Directory of Open Access Journals (Sweden)

    Adriana Morales

    2011-12-01

    Full Text Available The purpose of this research work is to examine: (1) why formal methods are necessary for software systems today, (2) high-integrity systems through the C-by-C (Correctness-by-Construction) methodology, and (3) an affordable methodology for applying formal methods in software engineering. The research process included reviews of the literature on the Internet, in publications and in presentations at events. Among the research results it was found that: (1) the dependence of nations, companies and people on software systems is increasing, (2) there is growing demand for software engineering to increase social trust in software systems, (3) methodologies exist, such as C-by-C, that can provide that level of trust, (4) formal methods constitute a principle of computer science that can be applied in software engineering to achieve reliable processes in software development, (5) software users have the responsibility to demand reliable software products, and (6) software engineers have the responsibility to develop reliable software products. Furthermore, it is concluded that: (1) more research is needed to identify and analyse other methodologies and tools that provide processes for applying formal software engineering methods, (2) formal methods provide an unprecedented ability to increase trust in the correctness of software products, and (3) with the development of new methodologies and tools, cost is no longer a disadvantage for the application of formal methods.

  19. Big data analytics methods and applications

    CERN Document Server

    Rao, BLS; Rao, SB

    2016-01-01

    This book has a collection of articles written by Big Data experts to describe some of the cutting-edge methods and applications from their respective areas of interest, and provides the reader with a detailed overview of the field of Big Data Analytics as it is practiced today. The chapters cover technical aspects of key areas that generate and use Big Data such as management and finance; medicine and healthcare; genome, cytome and microbiome; graphs and networks; Internet of Things; Big Data standards; bench-marking of systems; and others. In addition to different applications, key algorithmic approaches such as graph partitioning, clustering and finite mixture modelling of high-dimensional data are also covered. The varied collection of themes in this volume introduces the reader to the richness of the emerging field of Big Data Analytics.

  20. Nested partitions method, theory and applications

    CERN Document Server

    Shi, Leyuan

    2009-01-01

    There is increasing need to solve large-scale complex optimization problems in a wide variety of science and engineering applications, including designing telecommunication networks for multimedia transmission, planning and scheduling problems in manufacturing and military operations, or designing nanoscale devices and systems. Advances in technology and information systems have made such optimization problems more and more complicated in terms of size and uncertainty. Nested Partitions Method, Theory and Applications provides a cutting-edge research tool to use for large-scale, complex systems optimization. The Nested Partitions (NP) framework is an innovative mix of traditional optimization methodology and probabilistic assumptions. An important feature of the NP framework is that it combines many well-known optimization techniques, including dynamic programming, mixed integer programming, genetic algorithms and tabu search, while also integrating many problem-specific local search heuristics. The book uses...

  1. Restricted Kalman Filtering Theory, Methods, and Application

    CERN Document Server

    Pizzinga, Adrian

    2012-01-01

    In statistics, the Kalman filter is a mathematical method whose purpose is to use a series of measurements observed over time, containing random variations and other inaccuracies, and produce estimates that tend to be closer to the true unknown values than those that would be based on a single measurement alone. This Brief offers developments on Kalman filtering subject to general linear constraints. There are essentially three types of contributions: new proofs for results already established; new results within the subject; and applications in investment analysis and macroeconomics, where th…
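
    The essential step of Kalman filtering subject to linear equality constraints can be shown with a short sketch: a standard predict/update cycle followed by projection of the estimate onto the constraint D x = d. This is a generic textbook-style formulation, not the book's specific developments, and the system matrices, measurements and constraint below are invented.

      import numpy as np

      def kalman_step(x, P, z, F, H, Q, R):
          """One predict/update cycle of a standard linear Kalman filter."""
          x_pred = F @ x
          P_pred = F @ P @ F.T + Q
          S = H @ P_pred @ H.T + R
          K = P_pred @ H.T @ np.linalg.inv(S)
          x_upd = x_pred + K @ (z - H @ x_pred)
          P_upd = (np.eye(len(x)) - K @ H) @ P_pred
          return x_upd, P_upd

      def constrain(x, P, D, d):
          """Project the estimate onto the linear constraint D x = d."""
          W = P @ D.T @ np.linalg.inv(D @ P @ D.T)
          x_c = x - W @ (D @ x - d)
          P_c = P - W @ D @ P
          return x_c, P_c

      # Invented 2-state example whose components are constrained to sum to 1.
      F = np.eye(2); H = np.eye(2)
      Q = 0.01 * np.eye(2); R = 0.1 * np.eye(2)
      D = np.array([[1.0, 1.0]]); d = np.array([1.0])

      x, P = np.array([0.5, 0.5]), np.eye(2)
      for z in [np.array([0.62, 0.30]), np.array([0.55, 0.48])]:
          x, P = kalman_step(x, P, z, F, H, Q, R)
          x, P = constrain(x, P, D, d)
      print(x, x.sum())   # the constrained estimate's components sum to 1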

  2. Exergy method technical and ecological applications

    CERN Document Server

    Szargut, J

    2005-01-01

    The exergy method makes it possible to detect and quantify the possibilities of improving thermal and chemical processes and systems. The introduction of the concept "thermo-ecological cost" (cumulative consumption of non-renewable natural exergy resources) generated large application possibilities of exergy in ecology. This book contains a short presentation on the basic principles of exergy analysis and discusses new achievements in the field over the last 15 years. One of the most important issues considered by the distinguished author is the economy of non-renewable natural exergy.

  3. Accurately controlled sequential self-folding structures by polystyrene film

    Science.gov (United States)

    Deng, Dongping; Yang, Yang; Chen, Yong; Lan, Xing; Tice, Jesse

    2017-08-01

    Four-dimensional (4D) printing overcomes traditional fabrication limitations by designing heterogeneous materials that enable the printed structures to evolve over time (the fourth dimension) under external stimuli. Here, we present a simple 4D printing approach to self-folding structures that can be sequentially and accurately folded. When heated above their glass transition temperature, pre-strained polystyrene films shrink along the XY plane. In our process, silver ink traces printed on the film provide the heat stimulus by conducting current to trigger the self-folding behavior. The parameters affecting the folding process are studied and discussed. Sequential folding and accurately controlled folding angles are achieved by using printed ink traces and an angle-lock design. Theoretical analyses are done to guide the design of the folding processes. Programmable structures such as a lock and a three-dimensional antenna are fabricated to test the feasibility and potential applications of this method. These self-folding structures change their shapes after fabrication under controlled stimuli (electric current) and have potential applications in the fields of electronics, consumer devices, and robotics. Our design and fabrication method provides an easy way, using silver ink printed on polystyrene films, to 4D print self-folding structures for electrically induced sequential folding with angular control.

  4. Intelligent numerical methods applications to fractional calculus

    CERN Document Server

    Anastassiou, George A

    2016-01-01

    In this monograph the authors present Newton-type, Newton-like and other numerical methods, which involve fractional derivatives and fractional integral operators, studied for the first time in the literature. All serve the purpose of numerically solving equations whose associated functions may be non-differentiable in the ordinary sense, among other things extending classical Newton method theory, which requires ordinary differentiability of the function. Chapters are self-contained and can be read independently, and several advanced courses can be taught out of this book. An extensive list of references is given per chapter. The book’s results are expected to find applications in many areas of applied mathematics, stochastics, computer science and engineering. As such this monograph is suitable for researchers, graduate students, and seminars on the above subjects, and belongs in all science and engineering libraries.

  5. Sequential stochastic optimization

    CERN Document Server

    Cairoli, Renzo

    1996-01-01

    Sequential Stochastic Optimization provides mathematicians and applied researchers with a well-developed framework in which stochastic optimization problems can be formulated and solved. Offering much material that is either new or has never before appeared in book form, it lucidly presents a unified theory of optimal stopping and optimal sequential control of stochastic processes. This book has been carefully organized so that little prior knowledge of the subject is assumed; its only prerequisites are a standard graduate course in probability theory and some familiarity with discrete-parameter …

  6. Novel applications of fast neutron interrogation methods

    International Nuclear Information System (INIS)

    Gozani, Tsahi

    1994-01-01

    The development of non-intrusive inspection methods for contraband consisting primarily of carbon, nitrogen, oxygen, and hydrogen requires the use of fast neutrons. While most elements can be sufficiently well detected by the thermal neutron capture process, some important ones, e.g., carbon and in particular oxygen, cannot be detected by this process. Fortunately, fast neutrons, with energies above the threshold for inelastic scattering, stimulate relatively strong and specific gamma ray lines from these elements. The main lines are: 6.13 for O, 4.43 for C, and 5.11, 2.31 and 1.64 MeV for N. Accelerator-generated neutrons in the energy range of 7 to 15 MeV are being considered as interrogating radiations in a variety of non-intrusive inspection systems for contraband, from explosives to drugs and from coal to smuggled, dutiable goods. In some applications, mostly for inspection of small items such as luggage, the decision process involves rudimentary imaging, akin to emission tomography, to obtain the localized concentration of various elements. This technique is called FNA - Fast Neutron Analysis. While this approach offers improvements over the TNA (Thermal Neutron Analysis), it is not applicable to large objects such as shipping containers and trucks. For these challenging applications, a collimated beam of neutrons is rastered along the height of the moving object. In addition, the neutrons are generated in very narrow nanosecond pulses. The point of their interaction inside the object is determined by the time of flight (TOF) method, that is, measuring the time elapsed from the neutron generation to the time of detection of the stimulated gamma rays. This technique, called PFNA (Pulsed Fast Neutron Analysis), thus directly provides the elemental, and by inference, the chemical composition of the material at every volume element (voxel) of the object. The various neutron-based techniques are briefly described below. (orig.)

  7. Application of Cocktail method in vegetation classification

    Directory of Open Access Journals (Sweden)

    Hamed Asadi

    2016-09-01

    Full Text Available This study assesses the application of the Cocktail method to the classification of large vegetation databases. For this purpose, a Buxus hyrcana dataset consisting of 442 relevés with 89 species was used together with modified TWINSPAN. To run the Cocktail method, a preliminary classification was first done by modified TWINSPAN, and by performing phi (fidelity) analysis on the resulting groups, the five species with the highest fidelity values were selected. Sociological species groups were then formed by examining the co-occurrence of these 5 species with the other species in the database. Twenty-one plant communities belonging to 6 variants, 17 subassociations, 11 associations, 4 alliances, 1 order and 1 class were recognized by assigning 379 relevés to the sociological species groups using logical formulas. The 63 relevés that could not be assigned to any sociological species group by the logical formulas were assigned, using the FPFI index, to the group with the highest index value. Given the 91% agreement between the Braun-Blanquet and Cocktail classifications, we suggest the Cocktail method to vegetation scientists as an efficient alternative to the Braun-Blanquet method for classifying large vegetation databases.
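
    The species-group selection described above rests on the phi (fidelity) coefficient. The sketch below applies the standard presence/absence form of that coefficient to an invented toy table; it is not the study's dataset or software.

      import numpy as np

      def phi_fidelity(presence, in_group):
          """Phi coefficient of association between a species and a relevé group.

          presence: boolean array, species present in each relevé
          in_group: boolean array, relevé belongs to the target group
          """
          N = len(presence)
          n = presence.sum()                  # total occurrences of the species
          Np = in_group.sum()                 # size of the target group
          np_ = (presence & in_group).sum()   # occurrences inside the group
          denom = np.sqrt(n * Np * (N - n) * (N - Np))
          return (N * np_ - n * Np) / denom if denom else 0.0

      # Invented toy table: 8 relevés, 2 of them forming the candidate group.
      group = np.array([True, True, False, False, False, False, False, False])
      species_a = np.array([True, True, False, False, True, False, False, False])
      species_b = np.array([False, True, True, True, False, True, True, False])

      print(f"phi(species_a) = {phi_fidelity(species_a, group):.2f}")   # high fidelity
      print(f"phi(species_b) = {phi_fidelity(species_b, group):.2f}")   # low / negative fidelity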

  8. The effect of sequential dual-gas testing on laser-induced breakdown spectroscopy-based discrimination: Application to brass samples and bacterial strains

    International Nuclear Information System (INIS)

    Rehse, Steven J.; Mohaidat, Qassem I.

    2009-01-01

    Four Cu-Zn brass alloys with different stoichiometries and compositions have been analyzed by laser-induced breakdown spectroscopy (LIBS) using nanosecond laser pulses. The intensities of 15 emission lines of copper, zinc, lead, carbon, and aluminum (as well as the environmental contaminants sodium and calcium) were normalized and analyzed with a discriminant function analysis (DFA) to rapidly categorize the samples by alloy. The alloys were tested sequentially in two different noble gases (argon and helium) to enhance discrimination between them. When emission intensities from samples tested sequentially in both gases were combined to form a single 30-spectral line 'fingerprint' of the alloy, an overall 100% correct identification was achieved. This was a modest improvement over using emission intensities acquired in argon gas alone. A similar study was performed to demonstrate an enhanced discrimination between two strains of Escherichia coli (a Gram-negative bacterium) and a Gram-positive bacterium. When emission intensities from bacteria sequentially ablated in two different gas environments were combined, the DFA achieved a 100% categorization accuracy. This result showed the benefit of sequentially testing highly similar samples in two different ambient gases to enhance discrimination between the samples.
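
    The discrimination step can be pictured with a generic linear discriminant analysis on invented, normalised line intensities (not the authors' data, nor the full 15-line set): intensities recorded in argon and in helium are concatenated into one feature vector per sample before classification, mirroring the combined dual-gas fingerprint.

      import numpy as np
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

      rng = np.random.default_rng(1)

      def fake_spectrum(center, n_lines=3):
          """Invented normalised emission-line intensities around an alloy-specific centre."""
          return center + rng.normal(0, 0.02, n_lines)

      # Four invented brass alloys; each sample = intensities in Ar + intensities in He.
      centers_ar = [np.array([0.50, 0.30, 0.20]) + 0.05 * i for i in range(4)]
      centers_he = [np.array([0.45, 0.35, 0.20]) + 0.04 * i for i in range(4)]

      X, y = [], []
      for alloy in range(4):
          for _ in range(20):
              X.append(np.concatenate([fake_spectrum(centers_ar[alloy]),
                                       fake_spectrum(centers_he[alloy])]))
              y.append(alloy)
      X, y = np.array(X), np.array(y)

      lda = LinearDiscriminantAnalysis()
      lda.fit(X[::2], y[::2])                       # train on half of the shots
      print("classification accuracy:", lda.score(X[1::2], y[1::2]))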

  9. Manganese Fractionation Using a Sequential Extraction Method to Evaluate Welders' Shielded Metal Arc Welding Exposures During Construction Projects in Oil Refineries.

    Science.gov (United States)

    Hanley, Kevin W; Andrews, Ronnee; Bertke, Steven; Ashley, Kevin

    2015-01-01

    The National Institute for Occupational Safety and Health has conducted an occupational exposure assessment study of manganese (Mn) in welding fume of construction workers rebuilding tanks, piping, and process equipment at two oil refineries. The objective of this study was to evaluate exposures to different Mn fractions using a sequential extraction procedure. Seventy-two worker-days were monitored for either total or respirable Mn during stick welding and associated activities both within and outside of confined spaces. The samples were analyzed using an experimental method to separate different Mn fractions by valence state based on selective chemical solubility. The full-shift total particulate Mn time-weighted average (TWA) breathing zone concentrations ranged from 0.013 to 29 for soluble Mn in a mild ammonium acetate solution; from 0.26 to 250 for Mn(0,2+) in acetic acid; from non-detectable (ND) to 350 for Mn(3+,4+) in hydroxylamine-hydrochloride; and from ND to 39 micrograms per cubic meter (μg/m3) for insoluble Mn fractions in hydrochloric and nitric acid. The summation of all Mn fractions in total particulate TWA ranged from 0.52 to 470 μg/m3. The respirable particulate Mn TWA concentrations ranged from 0.20 to 28 for soluble Mn; from 1.4 to 270 for Mn(0,2+); from 0.49 to 150 for Mn(3+,4+); from ND to 100 for insoluble Mn; and from 2.0 to 490 μg/m3 for Mn (sum of fractions). For all jobs combined, total particulate TWA GM concentrations of the Mn(sum) were 99 (GSD = 3.35) and 8.7 (GSD = 3.54) μg/m3 for workers inside and outside of confined spaces, respectively; respirable Mn also showed much higher levels for welders within confined spaces. Regardless of particle size and confined space work status, the Mn(0,2+) fraction was the most abundant, followed by the Mn(3+,4+) fraction, typically >50% and ∼30-40% of Mn(sum), respectively. Eighteen welders' exposures exceeded the ACGIH Threshold Limit Value for total Mn (100 μg/m3) and 25 exceeded the recently adopted respirable …

  10. Optimisation of beryllium-7 gamma analysis following BCR sequential extraction

    International Nuclear Information System (INIS)

    Taylor, A.; Blake, W.H.; Keith-Roach, M.J.

    2012-01-01

    Graphical abstract: Showing the decrease in analytical uncertainty using the optimal (combined preconcentrated sample extract) method; nv (no value) where extract activities were <…. […] 7Be geochemical behaviour is required to support tracer studies. ► Sequential extraction with natural 7Be returns high analytical uncertainties. ► Preconcentrating extracts from a large sample mass improved analytical uncertainty. ► This optimised method can be readily employed in studies using low activity samples. - Abstract: The application of cosmogenic 7Be as a sediment tracer at the catchment scale requires an understanding of its geochemical associations in soil to underpin the assumption of irreversible adsorption. Sequential extractions offer a readily accessible means of determining the associations of 7Be with operationally defined soil phases. However, the subdivision of the low activity concentrations of fallout 7Be in soils into geochemical fractions can introduce high gamma-counting uncertainties. Extending analysis time significantly is not always an option for batches of samples, owing to the on-going decay of 7Be (t1/2 = 53.3 days). Here, three different methods of preparing and quantifying 7Be extracted using the optimised BCR three-step scheme have been evaluated and compared with a focus on reducing analytical uncertainties. The optimal method involved carrying out the BCR extraction in triplicate, sub-sampling each set of triplicates for stable Be analysis before combining each set and coprecipitating the 7Be with metal oxyhydroxides to produce a thin source for gamma analysis. This method was applied to BCR extractions of natural 7Be in four agricultural soils. The approach gave good counting statistics from a 24 h analysis period (∼10% (2σ) where extract activity >40% of total activity) and generated statistically useful sequential extraction profiles. Total recoveries of 7Be fell between 84 and 112%. The stable Be data demonstrated that the …

  11. Research on parallel algorithm for sequential pattern mining

    Science.gov (United States)

    Zhou, Lijuan; Qin, Bai; Wang, Yu; Hao, Zhongxiao

    2008-03-01

    Sequential pattern mining is the mining of frequent sequences, related to time or other orders, from a sequence database. Its initial motivation was to discover regularities in customer purchasing over time by finding frequent sequences. In recent years, sequential pattern mining has become an important direction of data mining, and its application field is no longer confined to business databases, extending to new data sources such as the Web and advanced scientific fields such as DNA analysis. The data used in sequential pattern mining have the following characteristics: massive volume and distributed storage. Most existing sequential pattern mining algorithms have not considered these characteristics together. Building on these traits and combining them with parallel theory, this paper puts forward a new distributed parallel algorithm, SPP (Sequential Pattern Parallel). The algorithm abides by the principle of pattern reduction and utilizes the divide-and-conquer strategy for parallelization. The first parallel task is to construct frequent item sets applying frequent-set concepts and search-space partition theory, and the second task is to construct frequent sequences using a depth-first search at each processor. The algorithm only needs to access the database twice and does not generate candidate sequences, which reduces access time and improves mining efficiency. Based on a random data generation procedure and different information structures, this paper simulates the SPP algorithm in a concrete parallel environment and implements the AprioriAll algorithm for comparison. The experiments demonstrate that, compared with AprioriAll, the SPP algorithm has an excellent speedup factor and efficiency.
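
    The support-counting operation that underlies such miners can be sketched as follows; this is a serial toy illustration of subsequence support for ordered item pairs, not the SPP algorithm or its parallel partitioning, and the purchase sequences are invented.

      from itertools import permutations
      from collections import Counter

      def contains(sequence, pattern):
          """True if `pattern` occurs in `sequence` as an order-preserving subsequence."""
          it = iter(sequence)
          return all(item in it for item in pattern)

      def frequent_2_sequences(database, min_support):
          """Count ordered 2-item patterns and keep those meeting min_support."""
          counts = Counter()
          for seq in database:
              items = list(dict.fromkeys(seq))            # distinct items, first-seen order
              for pat in permutations(items, 2):
                  if contains(seq, pat):
                      counts[pat] += 1                    # support: one count per sequence
          return {pat: c for pat, c in counts.items() if c >= min_support}

      # Invented customer purchase sequences.
      db = [["bread", "milk", "beer"],
            ["bread", "beer", "diapers"],
            ["milk", "bread", "beer"],
            ["bread", "diapers"]]

      print(frequent_2_sequences(db, min_support=2))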

  12. Partial differential equations methods, applications and theories

    CERN Document Server

    Hattori, Harumi

    2013-01-01

    This volume is an introductory level textbook for partial differential equations (PDE's) and suitable for a one-semester undergraduate level or two-semester graduate level course in PDE's or applied mathematics. Chapters One to Five are organized according to the equations and the basic PDE's are introduced in an easy to understand manner. They include the first-order equations and the three fundamental second-order equations, i.e. the heat, wave and Laplace equations. Through these equations we learn the types of problems, how we pose the problems, and the methods of solutions such as the separation of variables and the method of characteristics. The modeling aspects are explained as well. The methods introduced in earlier chapters are developed further in Chapters Six to Twelve. They include the Fourier series, the Fourier and the Laplace transforms, and the Green's functions. The equations in higher dimensions are also discussed in detail. This volume is application-oriented and rich in examples. Going thr...

  13. Application of radiotracer methods in streamflow measurements

    International Nuclear Information System (INIS)

    Dincer, T.

    1967-01-01

    An attempt is made to evaluate methods using radiotracers in streamflow measurements. The basic principles of the tracer method are explained and background information given. Radiotracers used in stream discharge measurements are discussed and measurements made by different research workers are described. Problems such as adsorption of the tracer and the mixing length are discussed and the potential use of radioisotopes as tracers in routine stream-gauging work is evaluated. It is concluded that, at the present stage of development, radiotracer methods do not seem to be ready for routine use in stream-gauging work, and can only be used in some special cases. For gamma-emitting radioisotopes there are problems related to safety, transport and injection which should be solved. Tritium, though a very attractive tracer in some respects, has the disadvantages of having a relatively long half-life and of disturbing the natural tritium levels in the region. Finally, an attempt is made to define the objectives of the research in the field of application of radioisotopes in hydrometry. (author)

  14. Sequential memory: Binding dynamics

    Science.gov (United States)

    Afraimovich, Valentin; Gong, Xue; Rabinovich, Mikhail

    2015-10-01

    Temporal order memories are critical for everyday animal and human functioning. Experiments and our own experience show that the binding or association of various features of an event together and the maintaining of multimodality events in sequential order are the key components of any sequential memory (episodic, semantic, working, etc.). We study the robustness of binding sequential dynamics based on our previously introduced model in the form of generalized Lotka-Volterra equations. In the phase space of the model, there exists a multi-dimensional binding heteroclinic network consisting of saddle equilibrium points and heteroclinic trajectories joining them. We prove here the robustness of the binding sequential dynamics, i.e., the feasibility phenomenon for coupled heteroclinic networks: for each collection of successive heteroclinic trajectories inside the unified networks, there is an open set of initial points such that the trajectory going through each of them follows the prescribed collection, staying in a small neighborhood of it. We also show that the symbolic complexity function of the system restricted to this neighborhood is a polynomial of degree L - 1, where L is the number of modalities.
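
    A minimal numerical sketch of sequential (winnerless-competition) dynamics in generalized Lotka-Volterra form is given below. The growth rates and the asymmetric inhibition matrix are chosen by hand so that each transient winner hands over to the next; they are illustrative values, not the parameters of the cited model.

      import numpy as np
      from scipy.integrate import solve_ivp

      # Generalized Lotka-Volterra: dx_i/dt = x_i * (sigma_i - sum_j rho_ij * x_j).
      sigma = np.array([1.0, 1.2, 1.1, 1.3])           # invented growth rates
      # Each transient winner inhibits its successor weakly (0.5) and the others
      # strongly (1.5), giving saddle-to-saddle switching 1 -> 2 -> 3 -> 4 -> 1.
      rho = np.array([[1.0, 1.5, 1.5, 0.5],
                      [0.5, 1.0, 1.5, 1.5],
                      [1.5, 0.5, 1.0, 1.5],
                      [1.5, 1.5, 0.5, 1.0]])

      def glv(t, x):
          return x * (sigma - rho @ x)

      x0 = np.array([0.9, 0.05, 0.03, 0.02])           # start near the first saddle
      sol = solve_ivp(glv, (0, 120), x0, dense_output=True, rtol=1e-8, atol=1e-10)

      t = np.linspace(0, 120, 2000)
      winners = np.argmax(sol.sol(t), axis=0)
      order = [int(w) for i, w in enumerate(winners) if i == 0 or w != winners[i - 1]]
      print("order of transiently dominant states:", order)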

  15. Sequential Dependencies in Driving

    Science.gov (United States)

    Doshi, Anup; Tran, Cuong; Wilder, Matthew H.; Mozer, Michael C.; Trivedi, Mohan M.

    2012-01-01

    The effect of recent experience on current behavior has been studied extensively in simple laboratory tasks. We explore the nature of sequential effects in the more naturalistic setting of automobile driving. Driving is a safety-critical task in which delayed response times may have severe consequences. Using a realistic driving simulator, we find…

  16. Mining compressing sequential problems

    NARCIS (Netherlands)

    Hoang, T.L.; Mörchen, F.; Fradkin, D.; Calders, T.G.K.

    2012-01-01

    Compression based pattern mining has been successfully applied to many data mining tasks. We propose an approach based on the minimum description length principle to extract sequential patterns that compress a database of sequences well. We show that mining compressing patterns is NP-Hard and

  17. Complex networks principles, methods and applications

    CERN Document Server

    Latora, Vito; Russo, Giovanni

    2017-01-01

    Networks constitute the backbone of complex systems, from the human brain to computer communications, transport infrastructures to online social systems and metabolic reactions to financial markets. Characterising their structure improves our understanding of the physical, biological, economic and social phenomena that shape our world. Rigorous and thorough, this textbook presents a detailed overview of the new theory and methods of network science. Covering algorithms for graph exploration, node ranking and network generation, among others, the book allows students to experiment with network models and real-world data sets, providing them with a deep understanding of the basics of network theory and its practical applications. Systems of growing complexity are examined in detail, challenging students to increase their level of skill. An engaging presentation of the important principles of network science makes this the perfect reference for researchers and undergraduate and graduate students in physics, ...

  18. Innovative Methods and Applications in Mucoadhesion Research

    DEFF Research Database (Denmark)

    Mackie, Alan; Goycoolea, Francisco M.; Menchicchi, Bianca

    2017-01-01

    The present review is aimed at elucidating relatively new aspects of mucoadhesion/mucus interaction and related phenomena that emerged from a Mucoadhesion workshop held in Munster on 2–3 September 2015 as a satellite event of the ICCC 13th—EUCHIS 12th. After a brief outline of the new issues, the focus is on mucus description, purification, and mucus/mucin characterization, all steps that are pivotal to the understanding of mucus-related phenomena and to the choice of the correct mucosal model for in vitro and ex vivo experiments; alternative bio/mucomimetic materials are also presented. Then a selection of preparative techniques and testing methods is described (at the molecular as well as the micro and macro scale) that may support the pharmaceutical development of mucus-interactive systems and assist formulators in the scale-up and industrialization steps. Recent applications of mucoadhesive systems...

  19. Nuclear methods: applications to Earth sciences

    International Nuclear Information System (INIS)

    Segovia, N.

    1994-01-01

    The phenomenon of radioactivity was discovered almost 100 years ago, in 1896, and opened new perspectives in many disciplines, including the Earth sciences. The initial work in this field, during the first quarter of the century, established that the radioactive decay series of the long-lived isotopes Uranium-238, Uranium-235 and Thorium-232 contain radioactive isotopes of several elements that are physically and chemically different. The chemical differentiation of the Earth during its evolution has concentrated most of the radioactive materials in the crust. The application of the radioactive disequilibria that occur as a consequence of these chemical and physical differences has evolved quickly, and the uses of natural radioactive isotopes fall under two major headings: geologic clocks and tracers. The applications cover a wide spectrum of geological, oceanographic, volcanic, hydrological, paleoclimatic and archaeological problems. In this paper, a description of the radioactive phenomenon is presented, together with the chemical and physical properties of the natural radioactive elements and the measurement methods; finally, some examples of their uses in chronology and as radioactive tracers are presented, with emphasis on results obtained in Mexico. (Author)

  20. Comparison of methods for intestinal histamine application

    DEFF Research Database (Denmark)

    Vind, S; Søondergaard, I; Poulsen, L K

    1991-01-01

    The study was conducted to investigate whether introduction of histamine in enterosoluble capsules produced the same amount of urinary histamine metabolites as that found after application of histamine through a duodeno-jejunal tube. Secondly, to examine whether a histamine-restrictive or a fast ...... conclude that oral administration of enterosoluble capsules is an easy and appropriate method for intestinal histamine challenge. Fast and histamine-restrictive diets are not necessary, but subjects should record unexpected responses in a food and symptom diary.......The study was conducted to investigate whether introduction of histamine in enterosoluble capsules produced the same amount of urinary histamine metabolites as that found after application of histamine through a duodeno-jejunal tube. Secondly, to examine whether a histamine-restrictive or a fast...... all other intervals did not differ significantly between the two challenge regimens. Fast (water only) and histamine-restrictive diet versus non-restrictive diet did not affect the urinary MIAA. MIAA was significantly higher overall during the first 24 h after challenge than in any other fraction. We...

  1. Environmental applications of manometric respirometric methods

    Energy Technology Data Exchange (ETDEWEB)

    Roppola, K.

    2009-07-01

    decomposition rates was studied in order to evaluate the applicable peat types that can be used in landfill structures. Only minor (BOD/ThOD < 0.4%) biodegradation was observed with compaction peat samples, and the stable state, in which biodegradation stopped, was achieved during a two month period. The manometric respirometric method was also applied for the biodegradation studies in which the effect of the modification of soil properties on biodegradation rates of bio-oils was tested. Modified properties were the nutrient content and the pH of the soil. Fertiliser addition and pH adjustment increased both the BOD/ThOD% values of the model substances and the precision of the measurement. The manometric respirometric method was proved to be an advanced method for simulating biodegradation processes in soil and water media. (orig.)

  2. CSM research: Methods and application studies

    Science.gov (United States)

    Knight, Norman F., Jr.

    1989-01-01

    Computational mechanics is that discipline of applied science and engineering devoted to the study of physical phenomena by means of computational methods based on mathematical modeling and simulation, utilizing digital computers. The discipline combines theoretical and applied mechanics, approximation theory, numerical analysis, and computer science. Computational mechanics has had a major impact on engineering analysis and design. When applied to structural mechanics, the discipline is referred to herein as computational structural mechanics. Complex structures being considered by NASA for the 1990's include composite primary aircraft structures and the space station. These structures will be much more difficult to analyze than today's structures and necessitate a major upgrade in computerized structural analysis technology. NASA has initiated a research activity in structural analysis called Computational Structural Mechanics (CSM). The broad objective of the CSM activity is to develop advanced structural analysis technology that will exploit modern and emerging computers, such as those with vector and/or parallel processing capabilities. Here, the current research directions for the Methods and Application Studies Team of the Langley CSM activity are described.

  3. Nanoscale thermal transport: Theoretical method and application

    Science.gov (United States)

    Zeng, Yu-Jia; Liu, Yue-Yang; Zhou, Wu-Xing; Chen, Ke-Qiu

    2018-03-01

    With the size reduction of nanoscale electronic devices, the heat generated per unit area in integrated circuits increases exponentially, and consequently thermal management in these devices is a very important issue. In addition, the heat generated by electronic devices mostly diffuses to the air as waste heat, which also makes thermoelectric energy conversion an important issue nowadays. In recent years, the thermal transport properties of nanoscale systems have attracted increasing attention in both experiments and theoretical calculations. In this review, we discuss various theoretical simulation methods for investigating thermal transport properties and take a glance at several interesting thermal transport phenomena in nanoscale systems. Our emphasis lies on the advantages and limitations of each computational method, and on applications of nanoscale thermal transport and thermoelectric properties. Project supported by the National Key Research and Development Program of China (Grant No. 2017YFB0701602) and the National Natural Science Foundation of China (Grant No. 11674092).

  4. Methods of geodiversity assessment and theirs application

    Science.gov (United States)

    Zwoliński, Zbigniew; Najwer, Alicja; Giardino, Marco

    2016-04-01

    The concept of geodiversity has rapidly gained the approval of scientists around the world (Wiedenbein 1993, Sharples 1993, Kiernan 1995, 1996, Dixon 1996, Eberhard 1997, Kostrzewski 1998, 2011, Gray 2004, 2008, 2013, Zwoliński 2004, Serrano, Ruiz-Flano 2007, Gordon et al. 2012). However, recognition of the problem is still at an early stage, and the concept is in effect not explicitly understood and defined (Najwer, Zwoliński 2014). Nevertheless, despite widespread use of the concept, little progress has been made in its assessment and mapping. Only over the last decade or so have methods for geodiversity assessment and its visualisation begun to be investigated, although many have acknowledged the importance of geodiversity evaluation (Kozłowski 2004, Gray 2004, Reynard, Panizza 2005, Zouros 2007, Pereira et al. 2007, Hjort et al. 2015). Hitherto, only a few authors have addressed this kind of methodological issue. Geodiversity maps are created for a variety of purposes and their methods are therefore quite manifold. The literature contains examples of geodiversity maps applied for geotourism purposes, based mainly on geological diversity, in order to indicate the scale of an area's tourist attractiveness (Zwoliński 2010, Serrano and Gonzalez Trueba 2011, Zwoliński and Stachowiak 2012). In some studies, geodiversity maps were created and applied to investigate spatial or genetic relationships with the richness of particular natural environmental components (Burnett et al. 1998, Silva 2004, Jačková, Romportl 2008, Hjort et al. 2012, 2015, Mazurek et al. 2015, Najwer et al. 2014). There are also a few examples of geodiversity assessment for geoconservation and the efficient management and planning of natural protected areas (Serrano and Gonzalez Trueba 2011, Pellitero et al. 2011, 2014, Jaskulska et al. 2013, Melelli 2014, Martinez-Grana et al. 2015). The most popular method of assessing the diversity of abiotic components of the natural

  5. Effects of application methods and species of wood on color ...

    African Journals Online (AJOL)

    PRECIOUS

    2009-11-02

    … methods. Key words: Waterborne varnishes, application methods, wood materials, color change. … rate in open air conditions (Anderson et al., 1991). … for topcoat application and they were held for drying for 3 weeks. Finally …

  6. Sequential decoding of intramuscular EMG signals via estimation of a Markov model.

    Science.gov (United States)

    Monsifrot, Jonathan; Le Carpentier, Eric; Aoustin, Yannick; Farina, Dario

    2014-09-01

    This paper addresses the sequential decoding of intramuscular single-channel electromyographic (EMG) signals to extract the activity of individual motor neurons. A hidden Markov model is derived from the physiological generation of the EMG signal. The EMG signal is described as a sum of several action potential (wavelet) trains embedded in noise. For each train, the time interval between wavelets is modeled by a process whose parameters are linked to the muscular activity. The parameters of this process are estimated sequentially by a Bayes filter, along with the firing instants. The method was tested on simulated signals and an experimental one, for which the rates of detection and classification of action potentials were above 95% with respect to the reference decomposition. The method works sequentially in time, and is the first to address the problem of intramuscular EMG decomposition online. It has potential applications for man-machine interfacing based on motor neuron activities.
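
    The core of the method described above is a sequential Bayesian update of a hidden Markov model of the firing process. The sketch below implements only the generic forward filter for a two-state discrete HMM with Gaussian observations, as an illustration of that kind of online update; the states, transition matrix and emission parameters are invented for the example and do not reproduce the paper's wavelet-train model or its joint estimation of firing parameters.

```python
import numpy as np

# Generic discrete-state HMM forward filter (sequential Bayes update).
# States: 0 = "no action potential", 1 = "action potential present".
# Transition matrix A and Gaussian emission parameters are assumptions.
A = np.array([[0.95, 0.05],
              [0.60, 0.40]])
means, stds = np.array([0.0, 3.0]), np.array([1.0, 1.0])

def gaussian_pdf(x, mu, sd):
    return np.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))

def forward_filter(obs, prior=np.array([0.99, 0.01])):
    """Return P(state_t | obs_1..t) for each sample, computed online."""
    belief = prior.copy()
    history = []
    for y in obs:
        belief = A.T @ belief                    # predict step
        belief = belief * gaussian_pdf(y, means, stds)   # update with likelihood
        belief /= belief.sum()                   # normalise
        history.append(belief.copy())
    return np.array(history)

rng = np.random.default_rng(0)
signal = rng.normal(0.0, 1.0, 200)
signal[50], signal[120] = 3.5, 2.8               # two simulated "wavelets"
posterior = forward_filter(signal)
print("samples flagged as spikes:", np.flatnonzero(posterior[:, 1] > 0.5))
```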

  7. Design Considerations for mHealth Programs Targeting Smokers Not Yet Ready to Quit: Results of a Sequential Mixed-Methods Study.

    Science.gov (United States)

    McClure, Jennifer B; Heffner, Jaimee; Hohl, Sarah; Klasnja, Predrag; Catz, Sheryl L

    2017-03-10

    Mobile health (mHealth) smoking cessation programs are typically designed for smokers who are ready to quit smoking. In contrast, most smokers want to quit someday but are not yet ready to quit. If mHealth apps were designed for these smokers, they could potentially encourage and assist more people to quit smoking. No prior studies have specifically examined the design considerations of mHealth apps targeting smokers who are not yet ready to quit. To inform the user-centered design of mHealth apps for smokers who were not yet ready to quit by assessing (1) whether these smokers were interested in using mHealth tools to change their smoking behavior; (2) their preferred features, functionality, and content of mHealth programs addressing smoking; and (3) considerations for marketing or distributing these programs to promote their uptake. We conducted a sequential exploratory, mixed-methods study. Qualitative interviews (phase 1, n=15) were completed with a demographically diverse group of smokers who were smartphone owners and wanted to quit smoking someday, but not yet. Findings informed a Web-based survey of smokers from across the United States (phase 2, n=116). Data were collected from April to September, 2016. Findings confirmed that although smokers not yet ready to quit are not actively seeking treatment or using cessation apps, most would be interested in using these programs to help them reduce or change their smoking behavior. Among phase 2 survey respondents, the app features, functions, and content rated most highly were (1) security of personal information; (2) the ability to track smoking, spending, and savings; (3) content that adaptively changes with one's needs; (4) the ability to request support as needed; (5) the ability to earn and redeem awards for program use; (6) guidance on how to quit smoking; and (7) content specifically addressing management of nicotine withdrawal, stress, depression, and anxiety. Results generally did not vary by stage of

  8. Forced Sequence Sequential Decoding

    DEFF Research Database (Denmark)

    Jensen, Ole Riis

    In this thesis we describe a new concatenated decoding scheme based on iterations between an inner sequentially decoded convolutional code of rate R=1/4 and memory M=23, and block interleaved outer Reed-Solomon codes with non-uniform profile. With this scheme decoding with good performance is possible as low as Eb/No=0.6 dB, which is about 1.7 dB below the signal-to-noise ratio that marks the cut-off rate for the convolutional code. This is possible since the iteration process provides the sequential decoders with side information that allows a smaller average load and minimizes the probability of computational overflow. Analytical results for the probability that the first Reed-Solomon word is decoded after C computations are presented. This is supported by simulation results that are also extended to other parameters...

  9. Modelling sequentially scored item responses

    NARCIS (Netherlands)

    Akkermans, W.

    2000-01-01

    The sequential model can be used to describe the variable resulting from a sequential scoring process. In this paper two more item response models are investigated with respect to their suitability for sequential scoring: the partial credit model and the graded response model. The investigation is

  10. Forced Sequence Sequential Decoding

    DEFF Research Database (Denmark)

    Jensen, Ole Riis; Paaske, Erik

    1998-01-01

    We describe a new concatenated decoding scheme based on iterations between an inner sequentially decoded convolutional code of rate R=1/4 and memory M=23, and block interleaved outer Reed-Solomon (RS) codes with nonuniform profile. With this scheme decoding with good performance is possible as low as Eb/N0=0.6 dB, which is about 1.25 dB below the signal-to-noise ratio (SNR) that marks the cutoff rate for the full system. Accounting for about 0.45 dB due to the outer codes, sequential decoding takes place at about 1.7 dB below the SNR cutoff rate for the convolutional code. This is possible since the iteration process provides the sequential decoders with side information that allows a smaller average load and minimizes the probability of computational overflow. Analytical results for the probability that the first RS word is decoded after C computations are presented. These results are supported...

  11. Methods and applications of analytical perturbation theory

    International Nuclear Information System (INIS)

    Kirchgraber, U.; Stiefel, E.

    1978-01-01

    This monograph on perturbation theory is based on various courses and lectures held by the authors at the ETH, Zurich and at the University of Texas, Austin. Its principal intention is to inform application-minded mathematicians, physicists and engineers about recent developments in this field. The reader is not assumed to have mathematical knowledge beyond what is presented in standard courses on analysis and linear algebra. Chapter I treats the transformations of systems of differential equations and the integration of perturbed systems in a formal way. These tools are applied in Chapter II to celestial mechanics and to the theory of tops and gyroscopic motion. Chapter III is devoted to the discussion of Hamiltonian systems of differential equations and exposes the algebraic aspects of perturbation theory showing also the necessary modifications of the theory in case of singularities. The last chapter gives the mathematical justification for the methods developed in the previous chapters and investigates important questions such as error estimations for the solutions and asymptotic stability. Each chapter ends with useful comments and an extensive reference to the original literature. (HJ) [de

  12. Antimicrobial applications of nanotechnology: methods and literature

    Directory of Open Access Journals (Sweden)

    Seil JT

    2012-06-01

    Full Text Available Justin T Seil, Thomas J Webster, Laboratory for Nanomedicine Research, School of Engineering, Brown University, Providence, RI, USA. Abstract: The need for novel antibiotics comes from the relatively high incidence of bacterial infection and the growing resistance of bacteria to conventional antibiotics. Consequently, new methods for reducing bacteria activity (and associated infections) are badly needed. Nanotechnology, the use of materials with dimensions on the atomic or molecular scale, has become increasingly utilized for medical applications and is of great interest as an approach to killing or reducing the activity of numerous microorganisms. While some natural antibacterial materials, such as zinc and silver, possess greater antibacterial properties as particle size is reduced into the nanometer regime (due to the increased surface to volume ratio of a given mass of particles), the physical structure of a nanoparticle itself and the way in which it interacts with and penetrates into bacteria appears to also provide unique bactericidal mechanisms. A variety of techniques to evaluate bacteria viability, each with unique advantages and disadvantages, has been established and must be understood in order to determine the effectiveness of nanoparticles (diameter ≤100 nm) as antimicrobial agents. In addition to addressing those techniques, a review of select literature and a summary of bacteriostatic and bactericidal mechanisms are covered in this manuscript. Keywords: nanomaterial, nanoparticle, nanotechnology, bacteria, antibacterial, biofilm

  13. Antimicrobial applications of nanotechnology: methods and literature.

    Science.gov (United States)

    Seil, Justin T; Webster, Thomas J

    2012-01-01

    The need for novel antibiotics comes from the relatively high incidence of bacterial infection and the growing resistance of bacteria to conventional antibiotics. Consequently, new methods for reducing bacteria activity (and associated infections) are badly needed. Nanotechnology, the use of materials with dimensions on the atomic or molecular scale, has become increasingly utilized for medical applications and is of great interest as an approach to killing or reducing the activity of numerous microorganisms. While some natural antibacterial materials, such as zinc and silver, possess greater antibacterial properties as particle size is reduced into the nanometer regime (due to the increased surface to volume ratio of a given mass of particles), the physical structure of a nanoparticle itself and the way in which it interacts with and penetrates into bacteria appears to also provide unique bactericidal mechanisms. A variety of techniques to evaluate bacteria viability, each with unique advantages and disadvantages, has been established and must be understood in order to determine the effectiveness of nanoparticles (diameter ≤ 100 nm) as antimicrobial agents. In addition to addressing those techniques, a review of select literature and a summary of bacteriostatic and bactericidal mechanisms are covered in this manuscript.

  14. Survey of Instant Messaging Applications Encryption Methods

    OpenAIRE

    Kabakuş, Abdullah; Kara, Resul

    2015-01-01

    Instant messaging applications have already taken the place of traditional Short Messaging Service (SMS) and Multimedia Messaging Service (MMS) due to their popularity and the ease of use they provide. Users of instant messaging applications are able to send text and audio messages, as well as different types of attachments such as photos, videos and contact information, to their contacts in real time. Because instant messaging applications use the internet instead of Short Message Service Technical Reali...

  15. Application and analysis of retroperitoneal laparoscopic partial nephrectomy with sequential segmental renal artery clamping for patients with multiple renal tumor: initial experience.

    Science.gov (United States)

    Zhu, Jundong; Jiang, Fan; Li, Pu; Shao, Pengfei; Liang, Chao; Xu, Aiming; Miao, Chenkui; Qin, Chao; Wang, Zengjun; Yin, Changjun

    2017-09-11

    To explore the feasibility and safety of retroperitoneal laparoscopic partial nephrectomy with sequential segmental renal artery clamping for patients with multiple renal tumors who have a solitary kidney or contralateral kidney insufficiency. Nine patients who underwent retroperitoneal laparoscopic partial nephrectomy with sequential segmental renal artery clamping between October 2010 and January 2017 were retrospectively analyzed. Clinical materials and parameters during and after the operation were summarized. Nineteen tumors were resected in nine patients and all operations were successful. The operation time ranged from 100 to 180 min (125 min); the clamping time of the segmental renal artery was 10-30 min (23 min); blood loss during the operation was 120-330 ml (190 ml); hospital stay after the operation was 3-6 d (5 d). There were no complications during the perioperative period, and the pathology diagnosis after surgery showed that the 19 tumors comprised 13 renal clear cell carcinomas, two papillary carcinomas and four perivascular epithelioid cell tumors, all with negative margins. All patients were followed up for 3-60 months, and no local recurrence or metastasis was detected. At the 3-month post-operative follow-up, the mean serum creatinine was 148.6 ± 28.1 μmol/L (p = 0.107), an increase of 3.0 μmol/L from the preoperative baseline. For patients with multiple renal tumors and a solitary kidney or contralateral kidney insufficiency, retroperitoneal laparoscopic partial nephrectomy with sequential segmental renal artery clamping is feasible and safe; it minimized warm ischemia injury to the kidney and preserved renal function effectively.

  16. Optimisation of beryllium-7 gamma analysis following BCR sequential extraction

    Energy Technology Data Exchange (ETDEWEB)

    Taylor, A. [Plymouth University, School of Geography, Earth and Environmental Sciences, 8 Kirkby Place, Plymouth PL4 8AA (United Kingdom); Blake, W.H., E-mail: wblake@plymouth.ac.uk [Plymouth University, School of Geography, Earth and Environmental Sciences, 8 Kirkby Place, Plymouth PL4 8AA (United Kingdom); Keith-Roach, M.J. [Plymouth University, School of Geography, Earth and Environmental Sciences, 8 Kirkby Place, Plymouth PL4 8AA (United Kingdom); Kemakta Konsult, Stockholm (Sweden)

    2012-03-30

    Graphical abstract: showing the decrease in analytical uncertainty using the optimal (combined preconcentrated sample extract) method; nv (no value) where extract activities were … Highlights: • Sequential extraction with natural ⁷Be returns high analytical uncertainties. • Preconcentrating extracts from a large sample mass improved analytical uncertainty. • This optimised method can be readily employed in studies using low activity samples. Abstract: The application of cosmogenic ⁷Be as a sediment tracer at the catchment-scale requires an understanding of its geochemical associations in soil to underpin the assumption of irreversible adsorption. Sequential extractions offer a readily accessible means of determining the associations of ⁷Be with operationally defined soil phases. However, the subdivision of the low activity concentrations of fallout ⁷Be in soils into geochemical fractions can introduce high gamma counting uncertainties. Extending analysis time significantly is not always an option for batches of samples, owing to the on-going decay of ⁷Be (t1/2 = 53.3 days). Here, three different methods of preparing and quantifying ⁷Be extracted using the optimised BCR three-step scheme have been evaluated and compared with a focus on reducing analytical uncertainties. The optimal method involved carrying out the BCR extraction in triplicate, sub-sampling each set of triplicates for stable Be analysis before combining each set and coprecipitating the ⁷Be with metal oxyhydroxides to produce a thin source for gamma analysis. This method was applied to BCR extractions of natural ⁷Be in four agricultural soils. The approach gave good counting statistics from a 24 h analysis period (≈10% (2
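
    The improvement reported above comes largely from pooling the triplicate extracts (three times the sample mass) before gamma counting. Under simple Poisson counting statistics the relative uncertainty of a net peak area N scales as 1/√N, so the sketch below, using a purely assumed count rate, illustrates how pooling reduces the 2-sigma relative uncertainty for a fixed 24 h counting period; it is not a reconstruction of the authors' spectra or detector efficiencies.

```python
import math

# Illustrative Poisson counting-statistics comparison (not measured data).
count_rate_single = 0.005      # assumed net 7Be count rate (counts/s) for one extract
live_time = 24 * 3600          # 24 h gamma analysis period (s)

def rel_uncertainty_2sigma(rate, t):
    """2-sigma relative uncertainty of a Poisson net peak area N = rate * t."""
    n = rate * t
    return 2.0 * math.sqrt(n) / n   # 2*sqrt(N)/N = 2/sqrt(N)

single = rel_uncertainty_2sigma(count_rate_single, live_time)
pooled = rel_uncertainty_2sigma(3 * count_rate_single, live_time)  # triplicates combined

print(f"single extract : {single:.1%} (2 sigma)")
print(f"pooled extracts: {pooled:.1%} (2 sigma)")  # improves by a factor of sqrt(3)
```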

  17. A Data-Driven Method for Selecting Optimal Models Based on Graphical Visualisation of Differences in Sequentially Fitted ROC Model Parameters

    Directory of Open Access Journals (Sweden)

    K S Mwitondi

    2013-05-01

    Full Text Available Differences in modelling techniques and model performance assessments typically impinge on the quality of knowledge extraction from data. We propose an algorithm for determining optimal patterns in data by separately training and testing three decision tree models in the Pima Indians Diabetes and the Bupa Liver Disorders datasets. Model performance is assessed using ROC curves and the Youden Index. Moving differences between sequential fitted parameters are then extracted, and their respective probability density estimations are used to track their variability using an iterative graphical data visualisation technique developed for this purpose. Our results show that the proposed strategy separates the groups more robustly than the plain ROC/Youden approach, eliminates obscurity, and minimizes over-fitting. Further, the algorithm can easily be understood by non-specialists and demonstrates multi-disciplinary compliance.
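
    Model assessment in the approach above relies on ROC curves and the Youden Index. As a small illustration of those two ingredients, the sketch below computes an empirical ROC curve and Youden's J = max(TPR − FPR) from scored predictions using plain NumPy; the synthetic scores and labels are placeholders, not the Pima Indians Diabetes or Bupa Liver Disorders data, and the moving-difference tracking step is not reproduced.

```python
import numpy as np

def roc_curve(scores, labels):
    """Empirical ROC: sweep every observed score as a threshold (labels are 0/1)."""
    thresholds = np.unique(scores)[::-1]
    pos, neg = labels == 1, labels == 0
    tpr = np.array([(scores[pos] >= t).mean() for t in thresholds])
    fpr = np.array([(scores[neg] >= t).mean() for t in thresholds])
    return fpr, tpr, thresholds

def youden_index(fpr, tpr, thresholds):
    """Youden's J statistic and the threshold that attains it."""
    j = tpr - fpr
    best = int(np.argmax(j))
    return j[best], thresholds[best]

# Synthetic scores standing in for fitted decision-tree outputs.
rng = np.random.default_rng(1)
labels = rng.integers(0, 2, 300)
scores = rng.normal(labels.astype(float), 1.0)   # higher scores for class 1

fpr, tpr, thr = roc_curve(scores, labels)
j, t_star = youden_index(fpr, tpr, thr)
print(f"Youden J = {j:.3f} at threshold {t_star:.3f}")
```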

  18. Methods, safety, and early clinical outcomes of dose escalation using simultaneous integrated and sequential boosts in patients with locally advanced gynecologic malignancies.

    Science.gov (United States)

    Boyle, John; Craciunescu, Oana; Steffey, Beverly; Cai, Jing; Chino, Junzo

    2014-11-01

    To evaluate the safety of dose escalated radiotherapy using a simultaneous integrated boost technique in patients with locally advanced gynecological malignancies. Thirty-nine women with locally advanced gynecological malignancies were treated with intensity modulated radiation therapy utilizing a simultaneous integrated boost (SIB) technique for gross disease in the para-aortic and/or pelvic nodal basins, sidewall extension, or residual primary disease. Women were treated to 45 Gy in 1.8 Gy fractions to elective nodal regions. Gross disease was simultaneously treated to 55 Gy in 2.2 Gy fractions (n=44 sites). An additional sequential boost of 10 Gy in 2 Gy fractions was delivered if deemed appropriate (n=29 sites). Acute and late toxicity, local control in the treated volumes (LC), overall survival (OS), and distant metastases (DM) were assessed. All were treated with a SIB to a dose of 55 Gy. Twenty-four patients were subsequently treated with a sequential boost to a median dose of 65 Gy. Median follow-up was 18 months. Rates of acute > grade 2 gastrointestinal (GI), genitourinary (GU), and hematologic (heme) toxicities were 2.5%, 0%, and 30%, respectively. There were no grade 4 acute toxicities. At one year, grade 1-2 late GI toxicities were 24.5%. There were no grade 3 or 4 late GI toxicities. Rates of grade 1-2 late GU toxicities were 12.7%. There were no grade 3 or 4 late GU toxicities. Dose escalated radiotherapy using a SIB results in acceptable rates of acute toxicity. Copyright © 2014 Elsevier Inc. All rights reserved.

  19. Characterization of a sequential pipeline approach to automatic tissue segmentation from brain MR Images

    International Nuclear Information System (INIS)

    Hou, Zujun; Huang, Su

    2008-01-01

    Quantitative analysis of gray matter and white matter in brain magnetic resonance imaging (MRI) is valuable for neuroradiology and clinical practice. Submission of large collections of MRI scans to pipeline processing is increasingly important. We characterized this process and suggest several improvements. To investigate tissue segmentation from brain MR images through a sequential approach, a pipeline that consecutively executes denoising, skull/scalp removal, intensity inhomogeneity correction and intensity-based classification was developed. The denoising phase employs a 3D-extension of the Bayes-Shrink method. The inhomogeneity is corrected by an improvement of Dawant et al.'s method with automatic generation of reference points. The N3 method has also been evaluated. Subsequently the brain tissue is segmented into cerebrospinal fluid, gray matter and white matter by a generalized Otsu thresholding technique. Intensive comparisons with other sequential or iterative methods have been carried out using simulated and real images. The sequential approach, with judicious algorithm selection at each stage, is not only advantageous in speed but can also attain segmentation at least as accurate as iterative methods under a variety of noise or inhomogeneity levels. A sequential approach to tissue segmentation, which consecutively executes wavelet shrinkage denoising, scalp/skull removal, inhomogeneity correction and intensity-based classification, was developed to automatically segment the brain tissue into CSF, GM and WM from brain MR images. This approach is advantageous in several common applications, compared with other pipeline methods. (orig.)
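
    The last stage of the pipeline described above is an intensity-based classification by a generalized Otsu thresholding technique. As an illustration of that stage only, the sketch below implements the classical two-class Otsu threshold on an intensity histogram; the generalized multi-class version used to separate CSF, GM and WM, and the preceding denoising and inhomogeneity-correction steps, are not reproduced, and the bimodal test data are synthetic.

```python
import numpy as np

def otsu_threshold(image, bins=256):
    """Classical two-class Otsu threshold maximising between-class variance."""
    hist, edges = np.histogram(image.ravel(), bins=bins)
    p = hist.astype(float) / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])

    w0 = np.cumsum(p)                     # class-0 probability up to each bin
    w1 = 1.0 - w0
    m0 = np.cumsum(p * centers) / np.where(w0 > 0, w0, np.nan)
    m_total = (p * centers).sum()
    m1 = (m_total - np.cumsum(p * centers)) / np.where(w1 > 0, w1, np.nan)

    between_var = w0 * w1 * (m0 - m1) ** 2
    return centers[np.nanargmax(between_var)]

# Synthetic bimodal "image" standing in for a brain MR slice.
rng = np.random.default_rng(0)
img = np.concatenate([rng.normal(40, 8, 50_000), rng.normal(120, 12, 50_000)])
t = otsu_threshold(img)
print(f"Otsu threshold ~ {t:.1f}")        # expected to fall between the two modes
```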

  20. Sequential decay of Reggeons

    International Nuclear Information System (INIS)

    Yoshida, Toshihiro

    1981-01-01

    Probabilities of meson production in the sequential decay of Reggeons, which are formed from the projectile and the target in the hadron-hadron to Reggeon-Reggeon processes, are investigated. It is assumed that pair creation of heavy quarks and simultaneous creation of two antiquark-quark pairs are negligible. The leading-order terms with respect to the ratio of creation probabilities of anti s s to anti u u (anti d d) are calculated. The production cross sections in the target fragmentation region are given in terms of probabilities in the initial decay of the Reggeons and an effect of many-particle production. (author)

  1. Development and application of an on-line sequential injection system for the separation of artificial and natural radionuclides in environmental samples

    International Nuclear Information System (INIS)

    Kim, C.-K.; Sansone, U.; Martin, P.; Kim, C.-S.

    2007-02-01

    The Chemistry Unit of the Physics, Chemistry and Instrumentation Laboratory in the IAEA's Seibersdorf Laboratory in Austria has the programmatic responsibility to provide assistance to Member State laboratories in maintaining and improving the reliability of analytical measurement results, both in trace element and radionuclide determinations. This is accomplished through the provision of reference materials of terrestrial origin, validated analytical procedures, training in the implementation of internal quality control, and through the evaluation of measurement performance by organization of worldwide and regional interlaboratory comparison exercises. In this framework an on-line sequential injection (SI) system was developed, which can be widely used for the separation and preconcentration of target analytes from diverse environmental samples. The system enables the separation time to be shortened by maintaining a constant flow rate of solution and by avoiding clogging or bubbling in a chromatographic column. The SI system was successfully applied to the separation of Pu in IAEA reference material (IAEA Soil-6) and to the sequential separation of 210 Po and 210 Pb in phosphogypsum candidate reference material. The replicate analysis results for Pu in IAEA reference material (Soil-6) obtained with the SI system are in good agreement with the recommended value, within a standard deviation of 5%. The SI system enabled a halving of the time required for the separation of radionuclides

  2. Progress in spatial analysis methods and applications

    CERN Document Server

    Páez, Antonio; Buliung, Ron N; Dall'erba, Sandy

    2010-01-01

    This book brings together developments in spatial analysis techniques, including spatial statistics, econometrics, and spatial visualization, and applications to fields such as regional studies, transportation and land use, population and health.

  3. Stability by Liapunov's direct methods with applications

    CERN Document Server

    Salle, Joseph La

    1961-01-01

    In this book, we study theoretical and practical aspects of computing methods for mathematical modelling of nonlinear systems. A number of computing techniques are considered, such as methods of operator approximation with any given accuracy; operator interpolation techniques including a non-Lagrange interpolation; methods of system representation subject to constraints associated with concepts of causality, memory and stationarity; methods of system representation with an accuracy that is the best within a given class of models; methods of covariance matrix estimation; methods for low-rank mat

  4. Radionuclide methods application in cardiac studies

    International Nuclear Information System (INIS)

    Kotina, E.D.; Ploskikh, V.A.; Babin, A.V.

    2013-01-01

    Radionuclide methods are among the most modern methods for the functional diagnosis of cardiovascular diseases and require the use of mathematical methods for processing and analysing the data obtained during the investigation. The study is carried out by means of single-photon emission computed tomography (SPECT). Mathematical methods and software for SPECT data processing are developed. This software allows physiologically meaningful indicators to be defined for cardiac studies.

  5. Augmented reality implementation methods in mainstream applications

    Directory of Open Access Journals (Sweden)

    David Procházka

    2011-01-01

    Full Text Available Augmented reality has become a useful tool in many areas, from space exploration to military applications. Although the underlying theoretical principles have been well known for almost a decade, augmented reality has been used almost exclusively in high-budget solutions with special hardware. However, in the last few years we have seen the rising popularity of many projects focused on deploying augmented reality on different mobile devices. Our article is aimed at developers who are considering the development of an augmented reality application for the mainstream market. Such developers will be forced to keep the application price, and therefore also the development price, at a reasonable level. Using an existing image processing software library could bring a significant cut in development costs. The theoretical part of the article presents an overview of the structure of an augmented reality application. Further, an approach for selecting an appropriate library, as well as a review of the existing software libraries focused on this area, is described. The last part of the article outlines our implementation of key parts of an augmented reality application using the OpenCV library.

  6. STABILIZED SEQUENTIAL QUADRATIC PROGRAMMING: A SURVEY

    Directory of Open Access Journals (Sweden)

    Damián Fernández

    2014-12-01

    Full Text Available We review the motivation for, the current state-of-the-art in convergence results, and some open questions concerning the stabilized version of the sequential quadratic programming algorithm for constrained optimization. We also discuss the tools required for its local convergence analysis, globalization challenges, and extensions of the method to the more general variational problems.
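
    For orientation, the sketch below runs the plain local SQP iteration that the stabilized variant builds on: each step solves the KKT system of a quadratic subproblem for an equality-constrained toy problem. The objective, constraint and starting point are assumptions chosen for illustration, and none of the stabilization, globalization or degeneracy handling discussed in the survey is included.

```python
import numpy as np

# Toy problem (an assumption for illustration):
#   minimize  f(x) = exp(x0) + x1**2
#   subject to c(x) = x0**2 + x1**2 - 1 = 0
def f_grad(x):   return np.array([np.exp(x[0]), 2 * x[1]])
def f_hess(x):   return np.diag([np.exp(x[0]), 2.0])
def c_val(x):    return np.array([x[0] ** 2 + x[1] ** 2 - 1.0])
def c_jac(x):    return np.array([[2 * x[0], 2 * x[1]]])

def basic_sqp(x, lam, iters=20, tol=1e-10):
    """Plain local SQP: each step solves the KKT system of a QP subproblem."""
    for _ in range(iters):
        g, A, c = f_grad(x), c_jac(x), c_val(x)
        H = f_hess(x) + lam[0] * np.diag([2.0, 2.0])   # Hessian of the Lagrangian
        kkt = np.block([[H, A.T], [A, np.zeros((1, 1))]])
        rhs = -np.concatenate([g + A.T @ lam, c])
        step = np.linalg.solve(kkt, rhs)
        x, lam = x + step[:2], lam + step[2:]
        if np.linalg.norm(step) < tol:
            break
    return x, lam

# Starting point chosen near the constrained minimiser at (-1, 0).
x_opt, lam_opt = basic_sqp(np.array([-0.8, 0.3]), np.array([0.1]))
print("x* ≈", x_opt, " multiplier ≈", lam_opt)
```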

  7. Terminating Sequential Delphi Survey Data Collection

    Science.gov (United States)

    Kalaian, Sema A.; Kasim, Rafa M.

    2012-01-01

    The Delphi survey technique is an iterative mail or electronic (e-mail or web-based) survey method used to obtain agreement or consensus among a group of experts in a specific field on a particular issue through a well-designed and systematic multiple sequential rounds of survey administrations. Each of the multiple rounds of the Delphi survey…

  8. Eyewitness decisions in simultaneous and sequential lineups: a dual-process signal detection theory analysis.

    Science.gov (United States)

    Meissner, Christian A; Tredoux, Colin G; Parker, Janat F; MacLin, Otto H

    2005-07-01

    Many eyewitness researchers have argued for the application of a sequential alternative to the traditional simultaneous lineup, given its role in decreasing false identifications of innocent suspects (sequential superiority effect). However, Ebbesen and Flowe (2002) have recently noted that sequential lineups may merely bring about a shift in response criterion, having no effect on discrimination accuracy. We explored this claim, using a method that allows signal detection theory measures to be collected from eyewitnesses. In three experiments, lineup type was factorially combined with conditions expected to influence response criterion and/or discrimination accuracy. Results were consistent with signal detection theory predictions, including that of a conservative criterion shift with the sequential presentation of lineups. In a fourth experiment, we explored the phenomenological basis for the criterion shift, using the remember-know-guess procedure. In accord with previous research, the criterion shift in sequential lineups was associated with a reduction in familiarity-based responding. It is proposed that the relative similarity between lineup members may create a context in which fluency-based processing is facilitated to a greater extent when lineup members are presented simultaneously.
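
    In its simplest equal-variance Gaussian form, the signal detection analysis referred to above reduces to computing a discrimination index d' and a response criterion c from hit and false-alarm rates. The sketch below shows that textbook computation for made-up simultaneous and sequential lineup rates chosen to mimic a criterion shift with little change in discrimination; the numbers are illustrative and are not the article's data.

```python
from scipy.stats import norm

def sdt_measures(hit_rate, fa_rate):
    """Equal-variance Gaussian SDT: d' = z(H) - z(F), criterion c = -(z(H) + z(F)) / 2."""
    z_h, z_f = norm.ppf(hit_rate), norm.ppf(fa_rate)
    return z_h - z_f, -0.5 * (z_h + z_f)

# Illustrative identification rates (assumptions, not the experiments' results):
# hits = correct identifications of the culprit, FAs = picks of an innocent suspect.
for label, hits, fas in [("simultaneous", 0.60, 0.25), ("sequential", 0.42, 0.12)]:
    d_prime, criterion = sdt_measures(hits, fas)
    print(f"{label:12s}  d' = {d_prime:.2f}   c = {criterion:+.2f}")
```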

  9. Application of machine learning methods in bioinformatics

    Science.gov (United States)

    Yang, Haoyu; An, Zheng; Zhou, Haotian; Hou, Yawen

    2018-05-01

    Faced with the development of bioinformatics, high-throughput genomic technologies have enabled biology to enter the era of big data [1]. Bioinformatics is an interdisciplinary field that includes the acquisition, management, analysis, interpretation and application of biological information, and it derives from the Human Genome Project. The field of machine learning, which aims to develop computer algorithms that improve with experience, holds promise to enable computers to assist humans in the analysis of large, complex data sets [2]. This paper analyzes and compares various machine learning algorithms and their applications in bioinformatics.

  10. [Application of Delphi method in traditional Chinese medicine clinical research].

    Science.gov (United States)

    Bi, Ying-fei; Mao, Jing-yuan

    2012-03-01

    In recent years, the Delphi method has been widely applied in traditional Chinese medicine (TCM) clinical research. This article analyzed the current application of the Delphi method in TCM clinical research and discussed some problems in the choice of evaluation method, the classification of observation indexes and the selection of survey items. On the basis of its present application, the author analyzed the method with respect to questionnaire design, selection of experts, evaluation of observation indexes and selection of survey items. Furthermore, the author summarized the steps for applying the Delphi method in TCM clinical research.

  11. Accounting for Heterogeneous Returns in Sequential Schooling Decisions

    NARCIS (Netherlands)

    Zamarro, G.

    2006-01-01

    This paper presents a method for estimating returns to schooling that takes into account that returns may be heterogeneous among agents and that educational decisions are made sequentially. A sequential decision model is interesting because it explicitly considers that the level of education of each

  12. Numerical methods for differential equations and applications

    International Nuclear Information System (INIS)

    Ixaru, L.G.

    1984-01-01

    This book is addressed to persons who, without being professionals in applied mathematics, are often faced with the problem of numerically solving differential equations. In each of the first three chapters a definite class of methods is discussed for the solution of the initial value problem for ordinary differential equations: multistep methods; one-step methods; and piecewise perturbation methods. The fourth chapter is mainly focussed on the boundary value problems for linear second-order equations, with a section devoted to the Schroedinger equation. In the fifth chapter the eigenvalue problem for the radial Schroedinger equation is solved in several ways, with computer programs included. (Auth.)
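
    As a concrete instance of the one-step methods treated in the second chapter, the sketch below implements the classical fourth-order Runge-Kutta step for an initial value problem y' = f(t, y); the test equation, step size and interval are illustrative choices, not examples taken from the book.

```python
import numpy as np

def rk4_step(f, t, y, h):
    """One classical fourth-order Runge-Kutta step for y' = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h * k1 / 2)
    k3 = f(t + h / 2, y + h * k2 / 2)
    k4 = f(t + h, y + h * k3)
    return y + h * (k1 + 2 * k2 + 2 * k3 + k4) / 6

# Test problem (an assumption): y' = -2*t*y, y(0) = 1, exact solution exp(-t^2).
f = lambda t, y: -2.0 * t * y
t, y, h = 0.0, 1.0, 0.05
while t < 1.0 - 1e-12:
    y = rk4_step(f, t, y, h)
    t += h

print(f"RK4: y(1) ≈ {y:.8f},  exact: {np.exp(-1.0):.8f}")
```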

  13. Chlorinated Cyanurates: Method Interferences and Application Implications

    Science.gov (United States)

    Experiments were conducted to investigate method interferences, residual stability, regulated DBP formation, and a water chemistry model associated with the use of Dichlor & Trichlor in drinking water.

  14. Dynamical Systems Method and Applications Theoretical Developments and Numerical Examples

    CERN Document Server

    Ramm, Alexander G

    2012-01-01

    Demonstrates the application of DSM to solve a broad range of operator equations. The dynamical systems method (DSM) is a powerful computational method for solving operator equations. With this book as their guide, readers will master the application of DSM to solve a variety of linear and nonlinear problems as well as ill-posed and well-posed problems. The authors offer a clear, step-by-step, systematic development of DSM that enables readers to grasp the method's underlying logic and its numerous applications. Dynamical Systems Method and Applications begins with a general introduction and

  15. Immediate Sequential Bilateral Cataract Surgery

    DEFF Research Database (Denmark)

    Kessel, Line; Andresen, Jens; Erngaard, Ditte

    2015-01-01

    The aim of the present systematic review was to examine the benefits and harms associated with immediate sequential bilateral cataract surgery (ISBCS) with specific emphasis on the rate of complications, postoperative anisometropia, and subjective visual function in order to formulate evidence-based national Danish guidelines for cataract surgery. A systematic literature review in PubMed, Embase, and Cochrane central databases identified three randomized controlled trials that compared outcome in patients randomized to ISBCS or bilateral cataract surgery on two different dates. Meta-analyses were performed using the Cochrane Review Manager software. The quality of the evidence was assessed using the GRADE method (Grading of Recommendation, Assessment, Development, and Evaluation). We did not find any difference in the risk of complications or visual outcome in patients randomized to ISBCS or surgery...

  16. Random and cooperative sequential adsorption

    Science.gov (United States)

    Evans, J. W.

    1993-10-01

    Irreversible random sequential adsorption (RSA) on lattices, and continuum "car parking" analogues, have long received attention as models for reactions on polymer chains, chemisorption on single-crystal surfaces, adsorption in colloidal systems, and solid state transformations. Cooperative generalizations of these models (CSA) are sometimes more appropriate, and can exhibit richer kinetics and spatial structure, e.g., autocatalysis and clustering. The distribution of filled or transformed sites in RSA and CSA is not described by an equilibrium Gibbs measure. This is the case even for the saturation "jammed" state of models where the lattice or space cannot fill completely. However exact analysis is often possible in one dimension, and a variety of powerful analytic methods have been developed for higher dimensional models. Here we review the detailed understanding of asymptotic kinetics, spatial correlations, percolative structure, etc., which is emerging for these far-from-equilibrium processes.
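
    A minimal computational illustration of the irreversible RSA kinetics reviewed above is random sequential adsorption of dimers on a one-dimensional lattice, for which the jamming coverage is known exactly (1 − e⁻² ≈ 0.8647). The sketch below simulates that process until jamming; the lattice size and random seed are arbitrary choices, and cooperative (CSA) effects are not modelled.

```python
import numpy as np

def dimer_rsa_1d(n_sites=100_000, seed=0):
    """Random sequential adsorption of dimers on a 1D lattice until jamming."""
    rng = np.random.default_rng(seed)
    occupied = np.zeros(n_sites, dtype=bool)
    # Visit candidate left-edge positions in a uniformly random order; a dimer
    # sticks only if both of its sites are still empty (irreversible, no diffusion).
    for i in rng.permutation(n_sites - 1):
        if not occupied[i] and not occupied[i + 1]:
            occupied[i] = occupied[i + 1] = True
    return occupied.mean()

coverage = dimer_rsa_1d()
print(f"jamming coverage ≈ {coverage:.4f}  (exact 1D dimer result: {1 - np.exp(-2):.4f})")
```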

  17. Scenistic Methods for Training: Applications and Practice

    Science.gov (United States)

    Lyons, Paul R.

    2011-01-01

    Purpose: This paper aims to complement an earlier article (2010) in "Journal of European Industrial Training" in which the description and theory bases of scenistic methods were presented. This paper also offers a description of scenistic methods and information on theory bases. However, the main thrust of this paper is to describe, give suggested…

  18. Development and Application of Kinetic Spectrophotometric Method ...

    African Journals Online (AJOL)

    Purpose: To develop an improved kinetic-spectrophotometric procedure for the determination of metronidazole (MNZ) in pharmaceutical formulations. Methods: The method is based on oxidation reaction of MNZ by hydrogen peroxide in the presence of Fe(II) ions at pH 4.5 (acetate buffer). The reaction was monitored ...

  19. Development and Application of Kinetic Spectrophotometric Method ...

    African Journals Online (AJOL)

    Methods: The method is based on the oxidation reaction of MNZ by hydrogen peroxide … optimum operating conditions for reagent concentrations and temperature were … 1-yl) ethanol] is an amebicide, antiprotozoal and … The dependence of reaction rate on concentration of …

  20. Sensitivity Analysis in Sequential Decision Models.

    Science.gov (United States)

    Chen, Qiushi; Ayer, Turgay; Chhatwal, Jagpreet

    2017-02-01

    Sequential decision problems are frequently encountered in medical decision making, which are commonly solved using Markov decision processes (MDPs). Modeling guidelines recommend conducting sensitivity analyses in decision-analytic models to assess the robustness of the model results against the uncertainty in model parameters. However, standard methods of conducting sensitivity analyses cannot be directly applied to sequential decision problems because this would require evaluating all possible decision sequences, typically in the order of trillions, which is not practically feasible. As a result, most MDP-based modeling studies do not examine confidence in their recommended policies. In this study, we provide an approach to estimate uncertainty and confidence in the results of sequential decision models. First, we provide a probabilistic univariate method to identify the most sensitive parameters in MDPs. Second, we present a probabilistic multivariate approach to estimate the overall confidence in the recommended optimal policy considering joint uncertainty in the model parameters. We provide a graphical representation, which we call a policy acceptability curve, to summarize the confidence in the optimal policy by incorporating stakeholders' willingness to accept the base case policy. For a cost-effectiveness analysis, we provide an approach to construct a cost-effectiveness acceptability frontier, which shows the most cost-effective policy as well as the confidence in that for a given willingness to pay threshold. We demonstrate our approach using a simple MDP case study. We developed a method to conduct sensitivity analysis in sequential decision models, which could increase the credibility of these models among stakeholders.
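
    Operationally, the multivariate approach described above amounts to sampling the uncertain parameters repeatedly, re-solving the MDP each time, and recording how often the base-case optimal policy remains optimal. The sketch below does exactly that for a deliberately tiny two-state, two-action MDP solved by value iteration; the transition probabilities, rewards and uncertainty distributions are invented for illustration and are unrelated to the article's case study or its cost-effectiveness extension.

```python
import numpy as np

N_STATES, N_ACTIONS, GAMMA = 2, 2, 0.95

def solve_mdp(P, R, tol=1e-8):
    """Value iteration; returns the optimal deterministic policy (one action per state)."""
    V = np.zeros(N_STATES)
    while True:
        Q = R + GAMMA * P @ V            # Q[s, a] = R[s, a] + gamma * sum_s' P[s, a, s'] V[s']
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            return Q.argmax(axis=1)
        V = V_new

def sample_model(rng):
    """Draw one parameter set from assumed (illustrative) uncertainty distributions."""
    p0 = rng.beta(8, 2, size=(N_STATES, N_ACTIONS))      # uncertain P(go to state 0 | s, a)
    P = np.stack([p0, 1 - p0], axis=-1)                  # P[s, a, s']
    R = np.array([[1.0, 0.8], [0.2, 0.4]]) + rng.normal(0, 0.1, (N_STATES, N_ACTIONS))
    return P, R

rng = np.random.default_rng(0)
base_policy = solve_mdp(*sample_model(np.random.default_rng(42)))   # stand-in base case
agree = sum(np.array_equal(solve_mdp(*sample_model(rng)), base_policy)
            for _ in range(1000))
print(f"base-case policy optimal in {agree / 10:.1f}% of sampled parameter sets")
```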

  1. Sequential strand displacement beacon for detection of DNA coverage on functionalized gold nanoparticles.

    Science.gov (United States)

    Paliwoda, Rebecca E; Li, Feng; Reid, Michael S; Lin, Yanwen; Le, X Chris

    2014-06-17

    Functionalizing nanomaterials for diverse analytical, biomedical, and therapeutic applications requires determination of surface coverage (or density) of DNA on nanomaterials. We describe a sequential strand displacement beacon assay that is able to quantify specific DNA sequences conjugated or coconjugated onto gold nanoparticles (AuNPs). Unlike the conventional fluorescence assay that requires the target DNA to be fluorescently labeled, the sequential strand displacement beacon method is able to quantify multiple unlabeled DNA oligonucleotides using a single (universal) strand displacement beacon. This unique feature is achieved by introducing two short unlabeled DNA probes for each specific DNA sequence and by performing sequential DNA strand displacement reactions. Varying the relative amounts of the specific DNA sequences and spacing DNA sequences during their coconjugation onto AuNPs results in different densities of the specific DNA on AuNP, ranging from 90 to 230 DNA molecules per AuNP. Results obtained from our sequential strand displacement beacon assay are consistent with those obtained from the conventional fluorescence assays. However, labeling of DNA with some fluorescent dyes, e.g., tetramethylrhodamine, alters DNA density on AuNP. The strand displacement strategy overcomes this problem by obviating direct labeling of the target DNA. This method has broad potential to facilitate more efficient design and characterization of novel multifunctional materials for diverse applications.

  2. Advanced photon counting applications, methods, instrumentation

    CERN Document Server

    Kapusta, Peter; Erdmann, Rainer

    2015-01-01

    This volume focuses on Time-Correlated Single Photon Counting (TCSPC), a powerful tool allowing luminescence lifetime measurements to be made with high temporal resolution, even on single molecules. Combining spectrum and lifetime provides a "fingerprint" for identifying such molecules in the presence of a background. Used together with confocal detection, this permits single-molecule spectroscopy and microscopy in addition to ensemble measurements, opening up an enormous range of hot life science applications such as fluorescence lifetime imaging (FLIM) and measurement of Förster Resonant Energy Transfer (FRET) for the investigation of protein folding and interaction. Several technology-related chapters present both the basics and current state-of-the-art, in particular of TCSPC electronics, photon detectors and lasers. The remaining chapters cover a broad range of applications and methodologies for experiments and data analysis, including the life sciences, defect centers in diamonds, super-resolution micr...

  3. Advanced scientific computational methods and their applications to nuclear technologies. (3) Introduction of continuum simulation methods and their applications (3)

    International Nuclear Information System (INIS)

    Satake, Shin-ichi; Kunugi, Tomoaki

    2006-01-01

    Scientific computational methods have advanced remarkably with the progress of nuclear development. They have played the role of a weft connecting the various realms of nuclear engineering, and an introductory course on advanced scientific computational methods and their applications to nuclear technologies has therefore been prepared in serial form. This is the third issue, presenting the introduction of continuum simulation methods and their applications. Spectral methods and multi-interface calculation methods in fluid dynamics are reviewed. (T. Tanaka)

  4. Retrieval of sea surface velocities using sequential ocean colour monitor (OCM) data

    Digital Repository Service at National Institute of Oceanography (India)

    Prasad, J.S.; Rajawat, A.S.; Pradhan, Y.; Chauhan, O.S.; Nayak, S.R.

    velocities has been developed. The method is based on matching suspended sediment dispersion patterns in two sequential, time-lapsed images. The pattern matching is performed on an atmospherically corrected and geo-referenced sequential pair of images by Maximum...
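
    Pattern matching between two time-lapsed images of this kind is commonly implemented by locating the peak of the cross-correlation between a template patch from the first image and a search window in the second, and converting the best-match offset into a velocity using the time separation and pixel size. The sketch below shows that generic idea on synthetic arrays; the patch sizes, the assumed pixel size and time lapse, and the correlation-based matcher itself are illustrative assumptions rather than the authors' OCM processing chain.

```python
import numpy as np
from scipy.signal import correlate2d

def patch_displacement(patch1, window2):
    """Offset (dy, dx) at which patch1 best matches inside window2 (correlation peak)."""
    p = patch1 - patch1.mean()
    w = window2 - window2.mean()
    corr = correlate2d(w, p, mode="valid")
    return np.unravel_index(np.argmax(corr), corr.shape)

# Synthetic "sediment pattern" shifted by a known amount between two acquisitions.
rng = np.random.default_rng(0)
scene = rng.normal(size=(80, 80))
patch = scene[30:46, 30:46]                       # 16x16 template from image 1
window = np.roll(np.roll(scene, 3, axis=0), 5, axis=1)[22:54, 22:54]  # image 2 search window

dy, dx = patch_displacement(patch, window)
# Convert the best-match offset into a displacement relative to the template's
# original position inside the search window, then into a speed.
pixel_size_m, dt_s = 360.0, 86_400.0              # assumed pixel size and 1-day time lapse
disp_y, disp_x = dy - 8, dx - 8                   # template started 8 px inside the window
speed = np.hypot(disp_y, disp_x) * pixel_size_m / dt_s
print(f"estimated shift: ({disp_y}, {disp_x}) px,  speed ≈ {speed:.3f} m/s")
```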

  5. Application of geophysical methods for fracture characterization

    International Nuclear Information System (INIS)

    Lee, K.H.; Majer, E.L.; McEvilly, T.V.; California Univ., Berkeley, CA; Morrison, H.F.; California Univ., Berkeley, CA

    1990-01-01

    One of the most crucial needs in the design and implementation of an underground waste isolation facility is a reliable method for the detection and characterization of fractures in zones away from boreholes or subsurface workings. Geophysical methods may represent a solution to this problem. If fractures represent anomalies in the elastic properties or conductive properties of the rocks, then the seismic and electrical techniques may be useful in detecting and characterizing fracture properties. 7 refs., 3 figs

  6. Classical and sequential limit analysis revisited

    Science.gov (United States)

    Leblond, Jean-Baptiste; Kondo, Djimédo; Morin, Léo; Remmal, Almahdi

    2018-04-01

    Classical limit analysis applies to ideal plastic materials, and within a linearized geometrical framework implying small displacements and strains. Sequential limit analysis was proposed as a heuristic extension to materials exhibiting strain hardening, and within a fully general geometrical framework involving large displacements and strains. The purpose of this paper is to study and clearly state the precise conditions permitting such an extension. This is done by comparing the evolution equations of the full elastic-plastic problem, the equations of classical limit analysis, and those of sequential limit analysis. The main conclusion is that, whereas classical limit analysis applies to materials exhibiting elasticity - in the absence of hardening and within a linearized geometrical framework -, sequential limit analysis, to be applicable, strictly prohibits the presence of elasticity - although it tolerates strain hardening and large displacements and strains. For a given mechanical situation, the relevance of sequential limit analysis therefore essentially depends upon the importance of the elastic-plastic coupling in the specific case considered.

  7. Multivariate Analyses and Evaluation of Heavy Metals by Chemometric BCR Sequential Extraction Method in Surface Sediments from Lingdingyang Bay, South China

    Directory of Open Access Journals (Sweden)

    Linglong Cao

    2015-04-01

    Full Text Available Sediments in estuary areas are recognized as the ultimate reservoirs for numerous contaminants, e.g., toxic metals. Multivariate analyses by chemometric evaluation were performed to classify metal ions (Cu, Zn, As, Cr, Pb, Ni and Cd) in superficial sediments from Lingdingyang Bay and to determine whether or not there were potential contamination risks based on the BCR sequential extraction scheme. The results revealed that Cd was mainly in acid-soluble form, with an average of 75.99% of its total content, and thus of high potential availability, indicating significant anthropogenic sources, while Cr, As and Ni were enriched in the residual fraction, which can be considered the safest form with respect to the environment. According to the proportion of secondary to primary phases (KRSP), Cd had the highest bioavailable fraction and represented high or very high risk, followed by Pb and Cu with medium risks in most of the samples. The combined evaluation of the Pollution Load Index (PLI) and the mean Effect Range Median Quotient (mERM-Q) highlighted that the area of greatest potential environmental risk was in the northwest of Lingdingyang Bay. Almost all of the sediments had a 21% probability of toxicity. Additionally, Principal Component Analysis (PCA) revealed that the survey region was significantly affected by two main sources of anthropogenic contributions: PC1 showed increased loadings of variables in the acid-soluble and reducible fractions, consistent with input from industrial wastes (such as manufacturing, metallurgy and the chemical industry) and domestic sewage; PC2 was characterized by increased loadings of variables in the residual fraction, which could be attributed to leaching and weathering of parent rocks. The results obtained demonstrate the need for appropriate remediation measures to alleviate the pollution problem caused by the accumulation of potentially risky metals. Therefore, it is of crucial significance to implement the targeted
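
    As a minimal illustration of the fraction-based indicators mentioned above (not taken from the article itself), the sketch below computes BCR fraction percentages and the ratio of secondary to primary phases from hypothetical concentrations for a single metal; the numbers and the KRSP naming convention are assumptions.

```python
# Illustrative sketch (not from the paper): computing BCR fraction percentages
# and the ratio of secondary to primary phases (KRSP) from hypothetical
# extraction data for one metal in one sediment sample.

def bcr_summary(f1, f2, f3, residual):
    """f1: acid-soluble, f2: reducible, f3: oxidisable, residual: residual fraction
    (all in mg/kg). Returns the percentage of each fraction and KRSP."""
    total = f1 + f2 + f3 + residual
    percentages = {
        "acid-soluble %": 100 * f1 / total,
        "reducible %": 100 * f2 / total,
        "oxidisable %": 100 * f3 / total,
        "residual %": 100 * residual / total,
    }
    # Secondary phases = sum of the three non-residual fractions;
    # primary phase = residual fraction.
    k_rsp = (f1 + f2 + f3) / residual
    return percentages, k_rsp

# Hypothetical Cd concentrations (mg/kg) in the four BCR fractions.
percentages, k_rsp = bcr_summary(f1=0.76, f2=0.15, f3=0.04, residual=0.05)
print(percentages)
print("KRSP =", round(k_rsp, 2))  # a large ratio indicates a high bioavailable share
```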

  8. Application of fuzzy methods in tunnelling

    Directory of Open Access Journals (Sweden)

    Ľudmila Tréfová

    2011-12-01

    Full Text Available Full-face tunnelling machines were used for the tunnel construction in Slovakia for boring of the exploratory galleries of highway tunnels Branisko and Višňové-Dubná skala. A monitoring system of boring process parameters was installed on the tunnelling machines and the acquired outcomes were processed by several theoretical approaches. Method IKONA was developed for the determination of changes in the rock mass strength characteristics in the line of the exploratory gallery. Individual geological sections were evaluated by descriptive statistics and the TBM performance was evaluated by the fuzzy method. The paper informs on the procedure of the design of fuzzy models and their verification.

  9. Silver nanoparticles: Synthesis methods, bio-applications and properties.

    Science.gov (United States)

    Abbasi, Elham; Milani, Morteza; Fekri Aval, Sedigheh; Kouhi, Mohammad; Akbarzadeh, Abolfazl; Tayefi Nasrabadi, Hamid; Nikasa, Parisa; Joo, San Woo; Hanifehpour, Younes; Nejati-Koshki, Kazem; Samiei, Mohammad

    2016-01-01

    The size of silver nanoparticles opens up a wide range of new applications in various fields of industry. Synthesis of noble metal nanoparticles for applications such as catalysis, electronics, optics, environmental science and biotechnology is an area of constant interest. The two main routes to silver nanoparticles are physical and chemical methods; the problem with these methods is the absorption of toxic substances onto the particles, a limitation that green synthesis approaches overcome. This article summarizes exclusively scalable techniques and focuses on their strengths and, respectively, limitations with respect to biomedical applicability and regulatory requirements concerning silver nanoparticles.

  10. Application of numerical analysis methods to thermoluminescence dosimetry

    International Nuclear Information System (INIS)

    Gomez Ros, J. M.; Delgado, A.

    1989-01-01

    This report presents the application of numerical methods to thermoluminescence dosimetry (TLD), showing the advantages obtained over conventional evaluation systems. Different configurations of the analysis method are presented to operate in specific dosimetric applications of TLD, such as environmental monitoring and mailed dosimetry systems for quality assurance in radiotherapy facilities. (Author) 10 refs

  11. Industrial applications of neutron physics methods

    International Nuclear Information System (INIS)

    Gozani, T.

    1994-01-01

    Three areas where nuclear-based techniques have significant applications are briefly described. These are: nuclear material control and non-proliferation, on-line elemental analysis of coal and minerals, and non-intrusive detection of explosives and other contraband. The underlying nuclear physics and the role of reactor physics methods are highlighted. (author). 5 refs., 10 figs., 5 tabs

  12. Molecular Combing of DNA: Methods and Applications

    DEFF Research Database (Denmark)

    Nazari, Zeniab Esmail; Gurevich, Leonid

    2013-01-01

    studies to nanoelectronics. While molecular combing has been applied in a variety of DNA-related studies, no comprehensive review has been published on different combing methods proposed so far. In this review, the underlying mechanisms of molecular combing of DNA are described followed by discussion...

  13. Application of Turchin's method of statistical regularization

    Science.gov (United States)

    Zelenyi, Mikhail; Poliakova, Mariia; Nozik, Alexander; Khudyakov, Alexey

    2018-04-01

    During analysis of experimental data, one usually needs to restore a signal after it has been convoluted with some kind of apparatus function. According to Hadamard's definition this problem is ill-posed and requires regularization to provide sensible results. In this article we describe an implementation of Turchin's method of statistical regularization based on the Bayesian approach to the regularization strategy.

  14. Synthesis of sequential control algorithms for pneumatic drives controlled by monostable valves

    Directory of Open Access Journals (Sweden)

    Ł. Dworzak

    2009-07-01

    Full Text Available Application of the Grafpol method [1] for synthesising sequential control algorithms for pneumatic drives controlled by monostable valves is presented. The developed principles simplify the MTS method of programming production processes with respect to the memory realisation [2]. Thanks to this, the time needed to synthesise the schematic equation can be significantly reduced in comparison with the network transformation method [3]. The designed schematic equation provides a basis for writing an application program for a PLC using any language defined in IEC 61131-3.

  15. Asymmetric synthesis II more methods and applications

    CERN Document Server

    Christmann, Mathias

    2012-01-01

    After the overwhelming success of 'Asymmetric Synthesis - The Essentials', narrating the colorful history of asymmetric synthesis, this is the second edition with the latest subjects and authors. While the aim of the first edition was mainly to honor the achievements of the pioneers in asymmetric syntheses, the aim of this new edition is to bring the current developments, especially from younger colleagues, to the attention of students. The format of the book remained unchanged, i.e. short conceptual overviews by young leaders in their field, including a short biography of the authors. The growing multidisciplinary research within chemistry is reflected in the selection of topics, including metal catalysis, organocatalysis, physical organic chemistry, analytical chemistry, and its applications in total synthesis. The prospective reader of this book is a graduate or undergraduate student of advanced organic chemistry as well as the industrial chemist who wants to get a brief update on the current developments in th...

  16. Ensemble Machine Learning Methods and Applications

    CERN Document Server

    Ma, Yunqian

    2012-01-01

    It is common wisdom that gathering a variety of views and inputs improves the process of decision making, and, indeed, underpins a democratic society. Dubbed “ensemble learning” by researchers in computational intelligence and machine learning, it is known to improve a decision system’s robustness and accuracy. Now, fresh developments are allowing researchers to unleash the power of ensemble learning in an increasing range of real-world applications. Ensemble learning algorithms such as “boosting” and “random forest” facilitate solutions to key computational issues such as face detection and are now being applied in areas as diverse as object tracking and bioinformatics. Responding to a shortage of literature dedicated to the topic, this volume offers comprehensive coverage of state-of-the-art ensemble learning techniques, including various contributions from researchers in leading industrial research labs. At once a solid theoretical study and a practical guide, the volume is a windfall for r...

  17. Multilevel sequential Monte Carlo samplers

    KAUST Repository

    Beskos, Alexandros; Jasra, Ajay; Law, Kody; Tempone, Raul; Zhou, Yan

    2016-01-01

    In this article we consider the approximation of expectations w.r.t. probability distributions associated to the solution of partial differential equations (PDEs); this scenario appears routinely in Bayesian inverse problems. In practice, one often has to solve the associated PDE numerically, using, for instance, finite element methods which depend on the step-size level h_L. In addition, the expectation cannot be computed analytically and one often resorts to Monte Carlo methods. In the context of this problem, it is known that the introduction of the multilevel Monte Carlo (MLMC) method can reduce the amount of computational effort to estimate expectations, for a given level of error. This is achieved via a telescoping identity associated to a Monte Carlo approximation of a sequence of probability distributions with discretization levels ∞ > h_0 > h_1 > ⋯ > h_L. In many practical problems of interest, one cannot achieve an i.i.d. sampling of the associated sequence and a sequential Monte Carlo (SMC) version of the MLMC method is introduced to deal with this problem. It is shown that under appropriate assumptions, the attractive property of a reduction of the amount of computational effort to estimate expectations, for a given level of error, can be maintained within the SMC context, that is, relative to exact sampling and Monte Carlo for the distribution at the finest level h_L. The approach is numerically illustrated on a Bayesian inverse problem. © 2016 Elsevier B.V.
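
    To make the telescoping identity concrete, the following minimal sketch applies plain multilevel Monte Carlo (not the SMC extension of the paper) to a toy stochastic differential equation; the equation, the number of levels and the sample sizes are illustrative assumptions.

```python
# Minimal sketch of the multilevel Monte Carlo telescoping estimator for a toy
# problem: estimate E[X_1] where the level-l approximation uses an Euler scheme
# with step size h_l = 2**-l for dX = -X dt + dW on [0, 1], X_0 = 1.
# It illustrates E[f_L] = E[f_0] + sum_l E[f_l - f_{l-1}], not the sequential
# Monte Carlo extension discussed in the paper.
import numpy as np

rng = np.random.default_rng(0)

def euler_endpoint(level, brownian_increments):
    """Euler scheme on a common Brownian path, coarsened to step h_l = 2**-level."""
    n_fine = brownian_increments.size            # increments at the finest resolution
    stride = n_fine // 2**level                  # how many fine increments per coarse step
    dw = brownian_increments.reshape(-1, stride).sum(axis=1)
    h = 1.0 / 2**level
    x = 1.0
    for inc in dw:
        x = x + (-x) * h + inc
    return x

L = 4                       # finest level
samples_per_level = 2000
estimate = 0.0
for level in range(L + 1):
    corrections = []
    for _ in range(samples_per_level):
        dw = rng.normal(0.0, np.sqrt(1.0 / 2**L), size=2**L)   # one Brownian path
        fine = euler_endpoint(level, dw)
        coarse = euler_endpoint(level - 1, dw) if level > 0 else 0.0
        corrections.append(fine - coarse)
    estimate += np.mean(corrections)

print("MLMC estimate of E[X_1]:", estimate, "(exact mean of the SDE: e^-1 ≈ 0.368)")
```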

  18. Multilevel sequential Monte Carlo samplers

    KAUST Repository

    Beskos, Alexandros

    2016-08-29

    In this article we consider the approximation of expectations w.r.t. probability distributions associated to the solution of partial differential equations (PDEs); this scenario appears routinely in Bayesian inverse problems. In practice, one often has to solve the associated PDE numerically, using, for instance, finite element methods which depend on the step-size level h_L. In addition, the expectation cannot be computed analytically and one often resorts to Monte Carlo methods. In the context of this problem, it is known that the introduction of the multilevel Monte Carlo (MLMC) method can reduce the amount of computational effort to estimate expectations, for a given level of error. This is achieved via a telescoping identity associated to a Monte Carlo approximation of a sequence of probability distributions with discretization levels ∞ > h_0 > h_1 > ⋯ > h_L. In many practical problems of interest, one cannot achieve an i.i.d. sampling of the associated sequence and a sequential Monte Carlo (SMC) version of the MLMC method is introduced to deal with this problem. It is shown that under appropriate assumptions, the attractive property of a reduction of the amount of computational effort to estimate expectations, for a given level of error, can be maintained within the SMC context, that is, relative to exact sampling and Monte Carlo for the distribution at the finest level h_L. The approach is numerically illustrated on a Bayesian inverse problem. © 2016 Elsevier B.V.

  19. Application of Software Safety Analysis Methods

    International Nuclear Information System (INIS)

    Park, G. Y.; Hur, S.; Cheon, S. W.; Kim, D. H.; Lee, D. Y.; Kwon, K. C.; Lee, S. J.; Koo, Y. H.

    2009-01-01

    A fully digitalized reactor protection system, which is called the IDiPS-RPS, was developed through the KNICS project. The IDiPS-RPS has four redundant and separated channels. Each channel is mainly composed of a group of bistable processors which redundantly compare process variables with their corresponding setpoints and a group of coincidence processors that generate a final trip signal when a trip condition is satisfied. Each channel also contains a test processor called the ATIP and a display and command processor called the COM. All the functions were implemented in software. During the development of the safety software, various software safety analysis methods were applied, in parallel to the verification and validation (V and V) activities, along the software development life cycle. The software safety analysis methods employed were the software hazard and operability (Software HAZOP) study, the software fault tree analysis (Software FTA), and the software failure modes and effects analysis (Software FMEA)

  20. Rhenium-osmium geochemistry: method and applications

    International Nuclear Information System (INIS)

    Luck, J.M.

    1982-03-01

    Experimental methods for the chemical separation and isotopic analysis of rhenium-osmium are described. Accurate determinations are obtained for quantities of around 10^-6 to 10^-7 g. Development as a geochemical tracer is examined. Study of rhenium-osmium in meteorites allows the determination of solar system chronology and the age of the galaxy. Rhenium-osmium chronology in meteorites is improved and osmium isotopes are used as petrogenetic and geological tracers. Molybdenites are studied through 187Re-187Os dating [fr]

  1. Sequential Monte Carlo with Highly Informative Observations

    OpenAIRE

    Del Moral, Pierre; Murray, Lawrence M.

    2014-01-01

    We propose sequential Monte Carlo (SMC) methods for sampling the posterior distribution of state-space models under highly informative observation regimes, a situation in which standard SMC methods can perform poorly. A special case is simulating bridges between given initial and final values. The basic idea is to introduce a schedule of intermediate weighting and resampling times between observation times, which guide particles towards the final state. This can always be done for continuous-...
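
    The following sketch illustrates the flavour of guiding particles towards a highly informative terminal value with a schedule of intermediate weighting and resampling steps; it is not the authors' algorithm, and the Gaussian random walk, the lookahead weights and the noise levels are all assumptions.

```python
# Hedged sketch (not the paper's method): a Gaussian random walk is guided towards
# a highly informative terminal observation y_T by intermediate "lookahead" weights
# g_k(x) = N(y_T; x, tau^2 + sigma^2 (T - k)), so each intermediate target already
# anticipates the final observation; incremental weights are g_k(x_k)/g_{k-1}(x_{k-1}).
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

n_particles, T = 500, 20
sigma, tau = 1.0, 0.1          # random-walk step s.d. and terminal observation noise
y_T = 5.0                      # highly informative terminal observation

def lookahead_logpdf(x, k):
    """log g_k(x): terminal observation density marginalised over the remaining steps."""
    var = tau**2 + sigma**2 * (T - k)
    return norm.logpdf(y_T, loc=x, scale=np.sqrt(var))

particles = np.zeros(n_particles)
prev_log_g = lookahead_logpdf(particles, 0)
for k in range(1, T + 1):
    particles = particles + rng.normal(0.0, sigma, n_particles)   # propagate
    log_g = lookahead_logpdf(particles, k)
    log_w = log_g - prev_log_g                                    # incremental weight
    w = np.exp(log_w - log_w.max())
    w /= w.sum()
    idx = rng.choice(n_particles, size=n_particles, p=w)          # resample
    particles, prev_log_g = particles[idx], log_g[idx]

print("terminal particle mean:", particles.mean(), "observation:", y_T)
```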

  2. Multivariate methods in nuclear waste remediation: Needs and applications

    International Nuclear Information System (INIS)

    Pulsipher, B.A.

    1992-05-01

    The United States Department of Energy (DOE) has developed a strategy for nuclear waste remediation and environmental restoration at several major sites across the country. Nuclear and hazardous wastes are found in underground storage tanks, containment drums, soils, and facilities. Due to the many possible contaminants and complexities of sampling and analysis, multivariate methods are directly applicable. However, effective application of multivariate methods will require greater ability to communicate methods and results to a non-statistician community. Moreover, more flexible multivariate methods may be required to accommodate inherent sampling and analysis limitations. This paper outlines multivariate applications in the context of select DOE environmental restoration activities and identifies several perceived needs

  3. Applications of the Monte Carlo method in radiation protection

    International Nuclear Information System (INIS)

    Kulkarni, R.N.; Prasad, M.A.

    1999-01-01

    This paper gives a brief introduction to the application of the Monte Carlo method in radiation protection; an exhaustive review has not been attempted. The special advantages of the Monte Carlo method are first brought out. The fundamentals of the Monte Carlo method are then explained briefly, with special reference to two applications in radiation protection. Some current applications are briefly reported at the end as examples: medical radiation physics, microdosimetry, calculations of thermoluminescence intensity and probabilistic safety analysis. The limitations of the Monte Carlo method are also mentioned in passing. (author)

  4. Visual Appearance-Based Unmanned Vehicle Sequential Localization

    Directory of Open Access Journals (Sweden)

    Wei Liu

    2013-01-01

    Full Text Available Localization is of vital importance for an unmanned vehicle to drive on the road. Most of the existing algorithms are based on laser range finders, inertial equipment, artificial landmarks, distributed sensors or global positioning system (GPS) information. Currently, localization with vision information is of most concern; however, vision-based localization techniques are still unavailable for practical applications. In this paper, we present a vision-based sequential probability localization method. This method uses the surface information of the roadside to locate the vehicle, especially in situations where GPS information is unavailable. It is composed of two steps: first, in a recording stage, we construct a ground truth map with the appearance of the roadside environment; then, in an on-line stage, we use a sequential matching approach to localize the vehicle. In the experiment, we use two independent cameras to observe the environment, one oriented to the left and the other to the right. SIFT features and Daisy features are used to represent the visual appearance of the environment. The experimental results show that the proposed method can locate the vehicle in a complicated, large environment with high reliability.

  5. Fuzzy multiple attribute decision making methods and applications

    CERN Document Server

    Chen, Shu-Jen

    1992-01-01

    This monograph is intended for an advanced undergraduate or graduate course as well as for researchers, who want a compilation of developments in this rapidly growing field of operations research. This is a sequel to our previous works: "Multiple Objective Decision Making--Methods and Applications: A state-of-the-Art Survey" (No.164 of the Lecture Notes); "Multiple Attribute Decision Making--Methods and Applications: A State-of-the-Art Survey" (No.186 of the Lecture Notes); and "Group Decision Making under Multiple Criteria--Methods and Applications" (No.281 of the Lecture Notes). In this monograph, the literature on methods of fuzzy Multiple Attribute Decision Making (MADM) has been reviewed thoroughly and critically, and classified systematically. This study provides readers with a capsule look into the existing methods, their characteristics, and applicability to the analysis of fuzzy MADM problems. The basic concepts and algorithms from the classical MADM methods have been used in the development of the f...

  6. Analytical chromatography. Methods, instrumentation and applications

    International Nuclear Information System (INIS)

    Yashin, Ya I; Yashin, A Ya

    2006-01-01

    The state-of-the-art and the prospects in the development of main methods of analytical chromatography, viz., gas, high performance liquid and ion chromatographic techniques, are characterised. Achievements of the past 10-15 years in the theory and general methodology of chromatography and also in the development of new sorbents, columns and chromatographic instruments are outlined. The use of chromatography in the environmental control, biology, medicine, pharmaceutics, and also for monitoring the quality of foodstuffs and products of chemical, petrochemical and gas industries, etc. is considered.

  7. A simplified method to recover urinary vesicles for clinical applications, and sample banking.

    Science.gov (United States)

    Musante, Luca; Tataruch, Dorota; Gu, Dongfeng; Benito-Martin, Alberto; Calzaferri, Giulio; Aherne, Sinead; Holthofer, Harry

    2014-12-23

    Urinary extracellular vesicles provide a novel source of valuable biomarkers for kidney and urogenital diseases. Current isolation protocols include laborious, sequential centrifugation steps, which hamper their widespread research and clinical use. Furthermore, when large individual urine sample volumes or sizable target cohorts are to be processed (e.g. for biobanking), storage capacity is an additional problem. Thus, alternative methods are necessary to overcome such limitations. We have developed a practical vesicle isolation technique that yields easily manageable sample volumes in an exceptionally cost-efficient way, to facilitate their full utilization in less privileged environments and maximize the benefit of biobanking. Urinary vesicles were isolated by hydrostatic dialysis with minimal interference from soluble proteins or vesicle loss. Large volumes of urine were concentrated up to 1/100 of the original volume, and the dialysis step allowed equalization of the urine's physico-chemical characteristics. The vesicle fractions were found suitable for all applications, including RNA analysis. In yield, our hydrostatic filtration dialysis system outperforms the conventional ultracentrifugation-based methods, and the labour-intensive and potentially hazardous ultracentrifugation steps are eliminated. Likewise, the need for trained laboratory personnel and heavy initial investment is avoided. Thus, our method qualifies as a method for laboratories working with urinary vesicles and for biobanking.

  8. Application of DCI to the lipid method

    International Nuclear Information System (INIS)

    Raffi, J.; Lesgards, G.; Pouliquen, I.; Giamarchi, P.; Fakirian, A.

    1996-01-01

    At the end of the sixties, a cleavage point on the triglycerides was proposed which can produce alkanes and alkenes with one or two carbons less, as well as aldehydes and free fatty acids. The first results of work on pork were extended to chicken and poultry meats. The methodology involved extraction of the lipid fraction followed by vacuum distillation and analysis by gas chromatography (GC). Other extraction and fractionation procedures, more appropriate for the routine examination of large numbers of samples, have been investigated by the ADMIT and BCR groups. In the present study, the radio-induced volatile compounds were analysed with a DI200 chromatograph used with a head-space system, also called the DCI system (Desorption, Concentration, Injection). The main advantage of the method is that it avoids the Soxhlet extraction of the lipid fraction from the foodstuffs. Several products were studied: oils, poultry meat and avocado pear. It appears that the DCI is a good and fast method provided that the temperature of the oven is controlled, which is not the case with the commercial apparatus used. (author)

  9. Application of DCI to the lipid method

    Energy Technology Data Exchange (ETDEWEB)

    Raffi, J.; Lesgards, G.; Pouliquen, I.; Giamarchi, P.; Fakirian, A. [Laboratoire de Recherche sur la Qualite des Aliments, Marseille (France)

    1996-12-31

    At the end of the sixties, a cleavage point on the triglycerides was proposed which can produce alkanes and alkenes with one or two carbons less, as well as aldehydes and free fatty acids. The first results of work on pork were extended to chicken and poultry meats. The methodology involved extraction of the lipid fraction followed by vacuum distillation and analysis by gas chromatography (GC). Other extraction and fractionation procedures, more appropriate for the routine examination of large numbers of samples, have been investigated by the ADMIT and BCR groups. In the present study, the radio-induced volatile compounds were analysed with a DI200 chromatograph used with a head-space system, also called the DCI system (Desorption, Concentration, Injection). The main advantage of the method is that it avoids the Soxhlet extraction of the lipid fraction from the foodstuffs. Several products were studied: oils, poultry meat and avocado pear. It appears that the DCI is a good and fast method provided that the temperature of the oven is controlled, which is not the case with the commercial apparatus used. (author).

  10. Adaptive sequential controller

    Energy Technology Data Exchange (ETDEWEB)

    El-Sharkawi, Mohamed A. (Renton, WA); Xing, Jian (Seattle, WA); Butler, Nicholas G. (Newberg, OR); Rodriguez, Alonso (Pasadena, CA)

    1994-01-01

    An adaptive sequential controller (50/50') for controlling a circuit breaker (52) or other switching device to substantially eliminate transients on a distribution line caused by closing and opening the circuit breaker. The device adaptively compensates for changes in the response time of the circuit breaker due to aging and environmental effects. A potential transformer (70) provides a reference signal corresponding to the zero crossing of the voltage waveform, and a phase shift comparator circuit (96) compares the reference signal to the time at which any transient was produced when the circuit breaker closed, producing a signal indicative of the adaptive adjustment that should be made. Similarly, in controlling the opening of the circuit breaker, a current transformer (88) provides a reference signal that is compared against the time at which any transient is detected when the circuit breaker last opened. An adaptive adjustment circuit (102) produces a compensation time that is appropriately modified to account for changes in the circuit breaker response, including the effect of ambient conditions and aging. When next opened or closed, the circuit breaker is activated at an appropriately compensated time, so that it closes when the voltage crosses zero and opens when the current crosses zero, minimizing any transients on the distribution line. Phase angle can be used to control the opening of the circuit breaker relative to the reference signal provided by the potential transformer.

  11. Adaptive sequential controller

    Science.gov (United States)

    El-Sharkawi, Mohamed A.; Xing, Jian; Butler, Nicholas G.; Rodriguez, Alonso

    1994-01-01

    An adaptive sequential controller (50/50') for controlling a circuit breaker (52) or other switching device to substantially eliminate transients on a distribution line caused by closing and opening the circuit breaker. The device adaptively compensates for changes in the response time of the circuit breaker due to aging and environmental effects. A potential transformer (70) provides a reference signal corresponding to the zero crossing of the voltage waveform, and a phase shift comparator circuit (96) compares the reference signal to the time at which any transient was produced when the circuit breaker closed, producing a signal indicative of the adaptive adjustment that should be made. Similarly, in controlling the opening of the circuit breaker, a current transformer (88) provides a reference signal that is compared against the time at which any transient is detected when the circuit breaker last opened. An adaptive adjustment circuit (102) produces a compensation time that is appropriately modified to account for changes in the circuit breaker response, including the effect of ambient conditions and aging. When next opened or closed, the circuit breaker is activated at an appropriately compensated time, so that it closes when the voltage crosses zero and opens when the current crosses zero, minimizing any transients on the distribution line. Phase angle can be used to control the opening of the circuit breaker relative to the reference signal provided by the potential transformer.
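
    As a toy illustration of the adaptive timing idea (not the patented device), the sketch below commands a simulated breaker ahead of the voltage zero crossing by a compensation time and updates that compensation from the measured timing error of each operation; all timing constants and the update gain are invented.

```python
# Toy sketch of the adaptive compensation idea described above (not the patent):
# the breaker is commanded 'compensation' seconds before the zero crossing (t = 0),
# and the compensation is adjusted after each operation from the measured error
# between the actual closing instant and the zero crossing.
import random

true_response_time = 0.050      # s, actual (unknown, slowly drifting) breaker delay
compensation = 0.040            # s, controller's current estimate of that delay
gain = 0.5                      # fraction of the measured error applied per operation

for operation in range(1, 11):
    command_time = -compensation
    actual_close = command_time + true_response_time + random.gauss(0.0, 0.001)
    timing_error = actual_close - 0.0          # offset from the zero crossing
    compensation += gain * timing_error        # adaptive adjustment
    true_response_time += 0.0002               # drift due to aging / environment
    print(f"op {operation:2d}: error = {timing_error*1000:+.2f} ms, "
          f"compensation = {compensation*1000:.2f} ms")
```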

  12. Application of a path sensitizing method on automated generation of test specifications for control software

    International Nuclear Information System (INIS)

    Morimoto, Yuuichi; Fukuda, Mitsuko

    1995-01-01

    An automated generation method for test specifications has been developed for sequential control software in plant control equipment. Sequential control software can be represented as sequential circuits. The control software implemented in a control equipment is designed from these circuit diagrams. In logic tests of VLSI's, path sensitizing methods are widely used to generate test specifications. But the method generates test specifications at a single time only, and can not be directly applied to sequential control software. The basic idea of the proposed method is as follows. Specifications of each logic operator in the diagrams are defined in the software design process. Therefore, test specifications of each operator in the control software can be determined from these specifications, and validity of software can be judged by inspecting all of the operators in the logic circuit diagrams. Candidates for sensitized paths, on which test data for each operator propagates, can be generated by the path sensitizing method. To confirm feasibility of the method, it was experimentally applied to control software in digital control equipment. The program could generate test specifications exactly, and feasibility of the method was confirmed. (orig.) (3 refs., 7 figs.)

  13. XRSW method, its application and development

    Energy Technology Data Exchange (ETDEWEB)

    Zheludeva, S I; Kovalchuk, M V [Russian Academy of Sciences, Institute of Crystallography, Moscow (Russian Federation)

    1996-09-01

    X-Ray Standing Waves (XRSW) may be obtained under dynamical diffraction in periodic structures or under total external reflection (TR) conditions in a stratified medium. As the incident angle varies, the XRSW nodes and antinodes move in the direction perpendicular to the reflecting planes, leading to a drastic variation of the photoelectric interaction of the X-rays with matter and resulting in specific angular dependencies of secondary radiation yields (photoelectrons, fluorescence, internal photoeffect, photoluminescence, Compton and thermal diffuse scattering). The structural information - the position of the investigated atoms in the direction of XRSW movement (coherent position) and the distribution of atoms about this position (coherent fraction) - is obtained with an accuracy of several percent of the XRSW period D. The objects under investigation are: semiconductor surface layers, heterostructures, multicomponent crystals, interfaces and adsorbed layers. Moreover, developments of the XRSW method allow the structural, geometrical and optical parameters of ultrathin films (crystalline and disordered, organic and inorganic) and of nanostructures based on them to be obtained.

  14. Proportional representation apportionment methods and their applications

    CERN Document Server

    Pukelsheim, Friedrich

    2017-01-01

    The book offers an in-depth study of the translation of vote counts into seat numbers in proportional representation systems  – an approach guided by practical needs. It also provides plenty of empirical instances illustrating the results. It analyzes in detail the 2014 elections to the European Parliament in the 28 member states, as well as the 2009 and 2013 elections to the German Bundestag. This second edition is a complete revision and expanded version of the first edition published in 2014, and many empirical election results that serve as examples have been updated. Further, a final chapter has been added assembling biographical sketches and authoritative quotes from individuals who pioneered the development of apportionment methodology. The mathematical exposition and the interrelations with political science and constitutional jurisprudence make this an apt resource for interdisciplinary courses and seminars on electoral systems and apportionment methods.

  15. Methods and applications of HPLC-AMS

    International Nuclear Information System (INIS)

    Buchholz, Bruce A.; Dueker, Stephen R.; Lin, Yumei; Clifford, Andrew J.; Vogel, John S.

    2000-01-01

    Pharmacokinetics of physiologic doses of nutrients, pesticides, and herbicides can easily be traced in humans using a 14C-labeled compound. Basic kinetics can be monitored in blood or urine by measuring the elevation in the 14C content above the control predose tissue and converting to equivalents of the parent compound. High performance liquid chromatography (HPLC) is an excellent method for the chemical separation of complex mixtures whose profiles afford estimation of biochemical pathways of metabolism. Compounds elute from the HPLC systems with characteristic retention times and can be collected in fractions that can then be graphitized for AMS measurement. Unknowns are tentatively identified by co-elution with known standards and chemical tests that reveal functional groupings. Metabolites are quantified with the 14C signal. Thoroughly accounting for the carbon inventory in the LC solvents, ion-pairing agents, samples, and carriers adds some complexity to the analysis. In most cases the total carbon inventory is dominated by carrier. Baseline background and stability need to be carefully monitored. Limits of quantitation near 10 amol of 14C per HPLC fraction are typically achieved. Baselines are maintained by limiting injected 14C activity <0.17 Bq (4.5 pCi) on the HPLC column

  16. Formal Methods Applications in Air Transportation

    Science.gov (United States)

    Farley, Todd

    2009-01-01

    The U.S. air transportation system is the most productive in the world, moving far more people and goods than any other. It is also the safest system in the world, thanks in part to its venerable air traffic control system. But as demand for air travel continues to grow, the air traffic control system's aging infrastructure and labor-intensive procedures are impinging on its ability to keep pace with demand. And that impinges on the growth of our economy. Air traffic control modernization has long held the promise of a more efficient air transportation system. Part of NASA's current mission is to develop advanced automation and operational concepts that will expand the capacity of our national airspace system while still maintaining its excellent record for safety. It is a challenging mission, as efforts to modernize have, for decades, been hamstrung by the inability to assure safety to the satisfaction of system operators, system regulators, and/or the traveling public. In this talk, we'll provide a brief history of air traffic control, focusing on the tension between efficiency and safety assurance, and the promise of formal methods going forward.

  17. Clustering Methods Application for Customer Segmentation to Manage Advertisement Campaign

    OpenAIRE

    Maciej Kutera; Mirosława Lasek

    2010-01-01

    Clustering methods have become such advanced and elaborate algorithms for the analysis of large data collections that they are now counted among data mining methods. They form an ever larger group of methods, evolving quickly and finding more and more varied applications. In the article, our research concerning the usefulness of clustering methods in customer segmentation to manage advertisement campaigns is presented. We introduce results obtained by using four sel...
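
    The four methods examined in the article are not named in the abstract, so the following sketch uses a generic k-means segmentation on hypothetical recency/frequency/monetary features purely as an illustration of clustering-based customer segmentation.

```python
# Minimal illustration of clustering-based customer segmentation (generic k-means,
# not necessarily one of the four methods examined in the article); the features
# are hypothetical recency/frequency/monetary values.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Hypothetical customers: columns = recency (days), frequency (orders), monetary (spend).
customers = np.column_stack([
    rng.integers(1, 365, 200),
    rng.poisson(5, 200),
    rng.gamma(2.0, 50.0, 200),
])

features = StandardScaler().fit_transform(customers)
segments = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(features)

for label in np.unique(segments):
    group = customers[segments == label]
    print(f"segment {label}: n={len(group)}, mean spend={group[:, 2].mean():.0f}")
```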

  18. Quantum Inequalities and Sequential Measurements

    International Nuclear Information System (INIS)

    Candelpergher, B.; Grandouz, T.; Rubinx, J.L.

    2011-01-01

    In this article, the peculiar context of sequential measurements is chosen in order to analyze the quantum specificity in the two most famous examples of Heisenberg and Bell inequalities: Results are found at some interesting variance with customary textbook materials, where the context of initial state re-initialization is described. A key-point of the analysis is the possibility of defining Joint Probability Distributions for sequential random variables associated to quantum operators. Within the sequential context, it is shown that Joint Probability Distributions can be defined in situations where not all of the quantum operators (corresponding to random variables) do commute two by two. (authors)

  19. Comment on: "Cell Therapy for Heart Disease: Trial Sequential Analyses of Two Cochrane Reviews"

    DEFF Research Database (Denmark)

    Castellini, Greta; Nielsen, Emil Eik; Gluud, Christian

    2017-01-01

    Trial Sequential Analysis is a frequentist method to help researchers control the risks of random errors in meta-analyses (1). Fisher and colleagues used Trial Sequential Analysis on cell therapy for heart diseases (2). The present article discusses the usefulness of Trial Sequential Analysis and...

  20. An anomaly detection and isolation scheme with instance-based learning and sequential analysis

    International Nuclear Information System (INIS)

    Yoo, T. S.; Garcia, H. E.

    2006-01-01

    This paper presents an online anomaly detection and isolation (FDI) technique using an instance-based learning method combined with a sequential change detection and isolation algorithm. The proposed method uses kernel density estimation techniques to build statistical models of the given empirical data (the null hypothesis). The null hypothesis is associated with a set of alternative hypotheses modeling the abnormalities of the system. The decision procedure involves a sequential change detection and isolation algorithm. Notably, the proposed method enjoys asymptotic optimality, as the applied change detection and isolation algorithm is optimal in minimizing the worst mean detection/isolation delay for a given mean time before a false alarm or a false isolation. The applicability and performance of this methodology are illustrated with a redundant sensor data set. (authors)
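
    A much simplified one-dimensional sketch of this kind of scheme is given below: a kernel density estimate of normal-operation data acts as the null model, a single shifted KDE stands in for one alternative hypothesis, and a CUSUM-type statistic on the log-likelihood ratio performs the sequential detection; the multi-hypothesis isolation logic of the paper is omitted and all data are synthetic.

```python
# Simplified 1-D sketch: a kernel density estimate of normal-operation data is the
# null model, a shifted KDE stands in for one alternative (anomalous) hypothesis,
# and a CUSUM-type statistic on the log-likelihood ratio does the sequential
# change detection.  The multi-hypothesis isolation step of the paper is omitted.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(2)

train = rng.normal(0.0, 1.0, 1000)          # empirical normal-operation data
null_model = gaussian_kde(train)
alt_model = gaussian_kde(train + 2.0)       # hypothesised anomalous regime (mean shift)

threshold = 10.0                            # detection threshold for the CUSUM statistic
cusum = 0.0
stream = np.concatenate([rng.normal(0.0, 1.0, 200),     # normal operation
                         rng.normal(2.0, 1.0, 50)])     # anomaly from sample 200 on
for t, x in enumerate(stream):
    llr = np.log(alt_model(x)[0] + 1e-300) - np.log(null_model(x)[0] + 1e-300)
    cusum = max(0.0, cusum + llr)           # CUSUM recursion
    if cusum > threshold:
        print(f"anomaly declared at sample {t}")
        break
```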

  1. Studies on the methods of inorganic nutrient application in coconut

    International Nuclear Information System (INIS)

    Dwivedi, R.S.; Ray, P.K.; Ninan, S.

    1981-01-01

    Using carrier-free 32P, tagged single superphosphate and 86Rb, the efficiency of different plant injection and soil placement techniques for fertilizer application was examined. In the plant injection techniques the radioactivity was fed to the palms through growing root tips, cut ends of roots, stem injection and leaf axils. Application of radioactivity through the cut ends of roots was the most efficient, since 32P was detected in 10 m tall palms four hours after application. With stem, leaf axil and growing root tip injection, 32P was detected after 8, 12 and 18 h, respectively. Of the four methods of soil application, the quickest recovery of 32P in the palms, after 7 days of placement, was obtained with the hole method. The 32P activity in the palms from the circular trench, strip and basin methods was recorded after 8, 8 and 11 days of application, respectively. The accumulation of 86Rb was significantly higher than that of 32P. With the plant injection technique the accumulation of activity was found to be significantly higher than with the soil placement methods; the rate of radioactivity absorption was 10 to 60 times faster in the former technique than in the latter. Application of radioactivity through the cut ends of roots and the circular trench method were found to be better and may be recommended for nutrient application in coconut. (orig.)

  2. Multichannel, sequential or combined X-ray spectrometry

    International Nuclear Information System (INIS)

    Florestan, J.

    1979-01-01

    X-ray spectrometer qualities and defects are evaluated for the sequential and multichannel categories. The multichannel X-ray spectrometer has the advantage of time coherency and its results can be more reproducible; on the other hand, some spatial incoherency limits low-percentage and trace applications, especially when backgrounds are very variable. In this last case, the sequential X-ray spectrometer again proves very useful [fr]

  3. Sequential bayes estimation algorithm with cubic splines on uniform meshes

    International Nuclear Information System (INIS)

    Hossfeld, F.; Mika, K.; Plesser-Walk, E.

    1975-11-01

    After outlining the principles of some recent developments in parameter estimation, a sequential numerical algorithm for generalized curve-fitting applications is presented, combining results from statistical estimation concepts and spline analysis. Due to its recursive nature, the algorithm can be used most efficiently in online experimentation. Using computer-simulated and experimental data, the efficiency and the flexibility of this sequential estimation procedure are extensively demonstrated. (orig.) [de]
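
    A present-day analogue of the recursive estimation idea is sketched below: a Gaussian (conjugate) posterior over the coefficients of a cubic spline-type basis is updated one observation at a time. The truncated-power basis, the prior and the noise level are assumptions of the sketch, not the algorithm of the report.

```python
# Minimal sketch of sequential Bayesian curve fitting on a fixed cubic spline-type
# basis: the coefficient posterior (Gaussian, known noise variance) is updated
# recursively one observation at a time.
import numpy as np

def cubic_basis(x, knots):
    """Cubic truncated-power basis: 1, x, x^2, x^3, (x - k)_+^3 for each knot."""
    cols = [np.ones_like(x), x, x**2, x**3]
    cols += [np.clip(x - k, 0.0, None) ** 3 for k in knots]
    return np.column_stack(cols)

rng = np.random.default_rng(3)
knots = np.linspace(0.2, 0.8, 4)
noise_var = 0.05**2

# Prior on coefficients: zero mean, broad covariance.
n_coef = 4 + len(knots)
mean = np.zeros(n_coef)
cov = 100.0 * np.eye(n_coef)

# Observations arrive one at a time (sequential / online setting).
for _ in range(200):
    x = rng.uniform(0.0, 1.0)
    y = np.sin(2 * np.pi * x) + rng.normal(0.0, 0.05)
    phi = cubic_basis(np.array([x]), knots)[0]          # basis row for this point
    # Conjugate Gaussian update (Kalman-filter form for a static parameter).
    s = phi @ cov @ phi + noise_var
    gain = cov @ phi / s
    mean = mean + gain * (y - phi @ mean)
    cov = cov - np.outer(gain, phi @ cov)

grid = np.linspace(0.0, 1.0, 5)
print("posterior-mean fit:", cubic_basis(grid, knots) @ mean)
print("true values:       ", np.sin(2 * np.pi * grid))
```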

  4. Aplicações seqüenciais de flumicloracpentil para o controle de Euphorbia heterophylla na cultura da soja = Sequential application of flumicloracpentil for Euphorbia heterophylla control in soybeans

    Directory of Open Access Journals (Sweden)

    Rubem Silvério de Oliveira Jr

    2006-01-01

    Full Text Available Wild poinsettia (Euphorbia heterophylla) stands out as the main weed in soybean fields in the state of Paraná, not only because of its dissemination, but also because of its natural difficulty of control and the occurrence of resistant biotypes. In view of these problems, field and greenhouse trials were carried out to evaluate the efficacy of post-emergence sequential applications of flumicloracpentil to control this weed. Sequential applications of flumicloracpentil were more effective than the recommended rates in a single application, providing at least 86% control 24 days after the second application of the sequential treatments. All the evaluated combinations of flumicloracpentil rates provided adequate control of E. heterophylla, provided they were applied at the two-true-leaf stage. Comparing the greenhouse and field results, it was observed that, under field conditions, the shading imposed by the soybean canopy helps to improve the final level of weed control.

  5. Framework for sequential approximate optimization

    NARCIS (Netherlands)

    Jacobs, J.H.; Etman, L.F.P.; Keulen, van F.; Rooda, J.E.

    2004-01-01

    An object-oriented framework for Sequential Approximate Optimization (SAO) is proposed. The framework aims to provide an open environment for the specification and implementation of SAO strategies. The framework is based on the Python programming language and contains a toolbox of Python
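
    The framework itself is not reproduced here; as a generic illustration of a sequential approximate optimization loop in Python, the sketch below samples an "expensive" function, fits a simple quadratic surrogate, minimises it inside a trust region and then moves or shrinks the region. All update rules, the test function and the sample sizes are assumptions.

```python
# Generic sketch of a sequential approximate optimization loop (not the framework
# from the paper): sample the expensive function, fit a diagonal quadratic
# surrogate around the current iterate, minimise it in a trust region, and
# accept the step or shrink the region.
import numpy as np
from scipy.optimize import minimize

def expensive(x):                      # stand-in for a time-consuming simulation
    return (x[0] - 1.0) ** 2 + 10.0 * (x[1] + 0.5) ** 2 + 0.1 * np.sin(5 * x[0])

def fit_quadratic(xs, ys):
    """Least-squares fit of y ≈ c0 + c1*x1 + c2*x2 + c3*x1^2 + c4*x2^2 (2-D case)."""
    A = np.column_stack([np.ones(len(xs)), xs[:, 0], xs[:, 1], xs[:, 0]**2, xs[:, 1]**2])
    coef, *_ = np.linalg.lstsq(A, ys, rcond=None)
    return coef

x, radius = np.array([3.0, 2.0]), 1.0
for it in range(10):
    # Sample the expensive function around the current iterate.
    samples = x + radius * np.random.default_rng(it).uniform(-1, 1, size=(15, 2))
    values = np.array([expensive(s) for s in samples])
    c = fit_quadratic(samples, values)

    surrogate = lambda z: c[0] + c[1]*z[0] + c[2]*z[1] + c[3]*z[0]**2 + c[4]*z[1]**2
    res = minimize(surrogate, x, bounds=[(x[0] - radius, x[0] + radius),
                                         (x[1] - radius, x[1] + radius)])
    # Accept the step if the true function improved, otherwise shrink the region.
    if expensive(res.x) < expensive(x):
        x = res.x
    else:
        radius *= 0.5
print("approximate minimiser:", x)
```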

  6. Inverse operator theory method and its applications in nonlinear physics

    International Nuclear Information System (INIS)

    Fang Jinqing

    1993-01-01

    The inverse operator theory method, which has been developed by G. Adomian in recent years, and its applications in nonlinear physics are described systematically. The method can serve as a unified and effective procedure for the solution of nonlinear and/or stochastic continuous dynamical systems without the usual restrictive assumptions. We have realized it by means of mathematical mechanization. It will have a profound impact on the modelling of problems in physics, mathematics, engineering, economics, biology, and so on. Some typical examples of its application are given and reviewed

  7. Bayesian non- and semi-parametric methods and applications

    CERN Document Server

    Rossi, Peter

    2014-01-01

    This book reviews and develops Bayesian non-parametric and semi-parametric methods for applications in microeconometrics and quantitative marketing. Most econometric models used in microeconomics and marketing applications involve arbitrary distributional assumptions. As more data becomes available, a natural desire to provide methods that relax these assumptions arises. Peter Rossi advocates a Bayesian approach in which specific distributional assumptions are replaced with more flexible distributions based on mixtures of normals. The Bayesian approach can use either a large but fixed number

  8. Sequentially pulsed traveling wave accelerator

    Science.gov (United States)

    Caporaso, George J [Livermore, CA; Nelson, Scott D [Patterson, CA; Poole, Brian R [Tracy, CA

    2009-08-18

    A sequentially pulsed traveling wave compact accelerator having two or more pulse forming lines each with a switch for producing a short acceleration pulse along a short length of a beam tube, and a trigger mechanism for sequentially triggering the switches so that a traveling axial electric field is produced along the beam tube in synchronism with an axially traversing pulsed beam of charged particles to serially impart energy to the particle beam.

  9. A Modified Homogeneous Balance Method and Its Applications

    International Nuclear Information System (INIS)

    Liu Chunping

    2011-01-01

    A modified homogeneous balance method is proposed by improving some key steps in the homogeneous balance method. Bilinear equations of some nonlinear evolution equations are derived by using the modified homogeneous balance method. Generalized Boussinesq equation, KP equation, and mKdV equation are chosen as examples to illustrate our method. This approach is also applicable to a large variety of nonlinear evolution equations. (general)

  10. Large-scale sequential quadratic programming algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Eldersveld, S.K.

    1992-09-01

    The problem addressed is the general nonlinear programming problem: finding a local minimizer for a nonlinear function subject to a mixture of nonlinear equality and inequality constraints. The methods studied are in the class of sequential quadratic programming (SQP) algorithms, which have previously proved successful for problems of moderate size. Our goal is to devise an SQP algorithm that is applicable to large-scale optimization problems, using sparse data structures and storing less curvature information but maintaining the property of superlinear convergence. The main features are: 1. The use of a quasi-Newton approximation to the reduced Hessian of the Lagrangian function. Only an estimate of the reduced Hessian matrix is required by our algorithm. The impact of not having available the full Hessian approximation is studied and alternative estimates are constructed. 2. The use of a transformation matrix Q. This allows the QP gradient to be computed easily when only the reduced Hessian approximation is maintained. 3. The use of a reduced-gradient form of the basis for the null space of the working set. This choice of basis is more practical than an orthogonal null-space basis for large-scale problems. The continuity condition for this choice is proven. 4. The use of incomplete solutions of quadratic programming subproblems. Certain iterates generated by an active-set method for the QP subproblem are used in place of the QP minimizer to define the search direction for the nonlinear problem. An implementation of the new algorithm has been obtained by modifying the code MINOS. Results and comparisons with MINOS and NPSOL are given for the new algorithm on a set of 92 test problems.
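
    The large-scale algorithm described above is not reproduced here; as a small illustration of the sequential quadratic programming class of methods, the sketch below applies SciPy's SLSQP solver to a toy problem with one equality and one inequality constraint.

```python
# Small illustration of the SQP class of methods (not the paper's large-scale
# algorithm): SciPy's SLSQP solver on a toy constrained problem.
import numpy as np
from scipy.optimize import minimize

objective = lambda x: (x[0] - 1.0) ** 2 + (x[1] - 2.5) ** 2
constraints = [
    {"type": "eq",   "fun": lambda x: x[0] + x[1] - 3.0},   # x0 + x1 = 3
    {"type": "ineq", "fun": lambda x: x[0] - 0.5},          # x0 >= 0.5
]

result = minimize(objective, x0=np.array([0.0, 0.0]), method="SLSQP",
                  constraints=constraints)
print(result.x, result.fun)   # minimiser on the line x0 + x1 = 3
```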

  11. Application of machine learning methods for traffic signs recognition

    Science.gov (United States)

    Filatov, D. V.; Ignatev, K. V.; Deviatkin, A. V.; Serykh, E. V.

    2018-02-01

    This paper focuses on solving a relevant and pressing safety issue on intercity roads. Two approaches were considered for solving the problem of traffic signs recognition; the approaches involved neural networks to analyze images obtained from a camera in the real-time mode. The first approach is based on a sequential image processing. At the initial stage, with the help of color filters and morphological operations (dilatation and erosion), the area containing the traffic sign is located on the image, then the selected and scaled fragment of the image is analyzed using a feedforward neural network to determine the meaning of the found traffic sign. Learning of the neural network in this approach is carried out using a backpropagation method. The second approach involves convolution neural networks at both stages, i.e. when searching and selecting the area of the image containing the traffic sign, and when determining its meaning. Learning of the neural network in the second approach is carried out using the intersection over union function and a loss function. For neural networks to learn and the proposed algorithms to be tested, a series of videos from a dash cam were used that were shot under various weather and illumination conditions. As a result, the proposed approaches for traffic signs recognition were analyzed and compared by key indicators such as recognition rate percentage and the complexity of neural networks’ learning process.
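
    The sketch below covers only the first stage of the first approach (candidate sign localisation by colour filtering and morphological operations) using OpenCV; the HSV thresholds, the minimum region size and the image path are placeholder assumptions, and the classification networks are not included.

```python
# Sketch of the candidate-localisation stage only: colour filtering plus a
# morphological closing (dilation followed by erosion); OpenCV 4.x API assumed.
# The thresholds target red-rimmed signs and are placeholders, as is "frame.jpg".
import cv2
import numpy as np

frame = cv2.imread("frame.jpg")                      # placeholder path
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)

# Red hue wraps around 0 in HSV, so two ranges are combined (placeholder thresholds).
mask = cv2.inRange(hsv, (0, 80, 60), (10, 255, 255)) | \
       cv2.inRange(hsv, (170, 80, 60), (180, 255, 255))

kernel = np.ones((5, 5), np.uint8)
mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)   # consolidate blobs

contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
for contour in contours:
    x, y, w, h = cv2.boundingRect(contour)
    if w * h > 400:                                   # discard tiny regions
        candidate = frame[y:y + h, x:x + w]           # crop to feed a classifier
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
```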

  12. A Bayesian Optimal Design for Sequential Accelerated Degradation Testing

    Directory of Open Access Journals (Sweden)

    Xiaoyang Li

    2017-07-01

    Full Text Available When optimizing an accelerated degradation testing (ADT plan, the initial values of unknown model parameters must be pre-specified. However, it is usually difficult to obtain the exact values, since many uncertainties are embedded in these parameters. Bayesian ADT optimal design was presented to address this problem by using prior distributions to capture these uncertainties. Nevertheless, when the difference between a prior distribution and actual situation is large, the existing Bayesian optimal design might cause some over-testing or under-testing issues. For example, the implemented ADT following the optimal ADT plan consumes too much testing resources or few accelerated degradation data are obtained during the ADT. To overcome these obstacles, a Bayesian sequential step-down-stress ADT design is proposed in this article. During the sequential ADT, the test under the highest stress level is firstly conducted based on the initial prior information to quickly generate degradation data. Then, the data collected under higher stress levels are employed to construct the prior distributions for the test design under lower stress levels by using the Bayesian inference. In the process of optimization, the inverse Gaussian (IG process is assumed to describe the degradation paths, and the Bayesian D-optimality is selected as the optimal objective. A case study on an electrical connector’s ADT plan is provided to illustrate the application of the proposed Bayesian sequential ADT design method. Compared with the results from a typical static Bayesian ADT plan, the proposed design could guarantee more stable and precise estimations of different reliability measures.

  13. Reconstruction of a ring applicator using CT imaging: impact of the reconstruction method and applicator orientation

    International Nuclear Information System (INIS)

    Hellebust, Taran Paulsen; Tanderup, Kari; Bergstrand, Eva Stabell; Knutsen, Bjoern Helge; Roeislien, Jo; Olsen, Dag Rune

    2007-01-01

    The purpose of this study is to investigate whether the method of applicator reconstruction and/or the applicator orientation influence the dose calculation to points around the applicator for brachytherapy of cervical cancer with CT-based treatment planning. A phantom, containing a fixed ring applicator set and six lead pellets representing dose points, was used. The phantom was CT scanned with the ring applicator at four different angles related to the image plane. In each scan the applicator was reconstructed by three methods: (1) direct reconstruction in each image (DR) (2) reconstruction in multiplanar reconstructed images (MPR) and (3) library plans, using pre-defined applicator geometry (LIB). The doses to the lead pellets were calculated. The relative standard deviation (SD) for all reconstruction methods was less than 3.7% in the dose points. The relative SD for the LIB method was significantly lower (p < 0.05) than for the DR and MPR methods for all but two points. All applicator orientations had similar dose calculation reproducibility. Using library plans for applicator reconstruction gives the most reproducible dose calculation. However, with restrictive guidelines for applicator reconstruction the uncertainties for all methods are low compared to other factors influencing the accuracy of brachytherapy

  14. Sequential designs for sensitivity analysis of functional inputs in computer experiments

    International Nuclear Information System (INIS)

    Fruth, J.; Roustant, O.; Kuhnt, S.

    2015-01-01

    Computer experiments are nowadays commonly used to analyze industrial processes aiming at achieving a wanted outcome. Sensitivity analysis plays an important role in exploring the actual impact of adjustable parameters on the response variable. In this work we focus on sensitivity analysis of a scalar-valued output of a time-consuming computer code depending on scalar and functional input parameters. We investigate a sequential methodology, based on piecewise constant functions and sequential bifurcation, which is both economical and fully interpretable. The new approach is applied to a sheet metal forming problem in three sequential steps, resulting in new insights into the behavior of the forming process over time. - Highlights: • Sensitivity analysis method for functional and scalar inputs is presented. • We focus on the discovery of most influential parts of the functional domain. • We investigate economical sequential methodology based on piecewise constant functions. • Normalized sensitivity indices are introduced and investigated theoretically. • Successful application to sheet metal forming on two functional inputs
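
    A generic sequential bifurcation screening loop for scalar factors is sketched below (the paper's extension to functional inputs via piecewise constant functions is not reproduced); the additive test model, the assumption of non-negative effects and the threshold are all illustrative choices.

```python
# Generic sequential bifurcation screening sketch (scalar factors only).  Assumes
# an additive model with non-negative effects: a group's aggregate effect is
# measured by switching the whole group from low to high, and influential groups
# are split recursively until single influential factors are isolated.
def model(x):
    """Stand-in for an expensive simulator: additive effects, most of them zero."""
    effects = [0.0, 0.0, 4.0, 0.0, 0.0, 0.1, 0.0, 2.5]
    return sum(e * xi for e, xi in zip(effects, x))

n_factors, threshold = 8, 0.5
calls = 0

def group_effect(group):
    """Effect of switching every factor in 'group' from 0 to 1, others held at 0."""
    global calls
    x_high = [1.0 if i in group else 0.0 for i in range(n_factors)]
    x_low = [0.0] * n_factors
    calls += 2
    return model(x_high) - model(x_low)

def bifurcate(group):
    if group_effect(group) <= threshold:
        return []                                    # whole group negligible
    if len(group) == 1:
        return list(group)                           # influential single factor
    mid = len(group) // 2
    return bifurcate(group[:mid]) + bifurcate(group[mid:])

important = bifurcate(list(range(n_factors)))
print("influential factors:", important, "model calls:", calls)
```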

  15. A process for application of ATHEANA - a new HRA method

    International Nuclear Information System (INIS)

    Parry, G.W.; Bley, D.C.; Cooper, S.E.

    1996-01-01

    This paper describes the analytical process for the application of ATHEANA, a new approach to the performance of human reliability analysis as part of a PRA. This new method, unlike existing methods, is based upon an understanding of the reasons why people make errors, and was developed primarily to address the analysis of errors of commission

  16. Application of SBRA Method in Mechanics of Continetal Plates

    Directory of Open Access Journals (Sweden)

    Ivo WANDROL

    2012-06-01

    Full Text Available This paper shows the application of the probabilistic SBRA method to a model of the behaviour of the Earth's lithosphere. The method extends our initial work, in which we created a geomechanical model of the lithosphere. The basic idea concerns the generation of thermoelastic waves due to thermal expansion of the rock mass and the ratcheting mechanisms.

  17. Application of Lyapunov's Second Method in the Stability Analysis of ...

    African Journals Online (AJOL)

    In this paper, Lyapunov's method for determining the stability of non-linear systems under dynamic states is presented. The paper highlights a practical application of the method to investigate the stability of crude oil/natural gas separation process. Mathematical state models for the separation process, used in the ...

  18. Mathematical methods and supercomputing in nuclear applications. Proceedings. Vol. 2

    International Nuclear Information System (INIS)

    Kuesters, H.; Stein, E.; Werner, W.

    1993-04-01

    All papers of the two volumes are separately indexed in the data base. Main topics are: progress in advanced numerical techniques, fluid mechanics, on-line systems, artificial intelligence applications, nodal methods, reactor kinetics, reactor design, supercomputer architecture, probabilistic estimation of risk assessment, methods in transport theory, advances in Monte Carlo techniques, and man-machine interface. (orig.)

  19. Mathematical methods and supercomputing in nuclear applications. Proceedings. Vol. 1

    International Nuclear Information System (INIS)

    Kuesters, H.; Stein, E.; Werner, W.

    1993-04-01

    All papers of the two volumes are separately indexed in the data base. Main topics are: progress in advanced numerical techniques, fluid mechanics, on-line systems, artificial intelligence applications, nodal methods, reactor kinetics, reactor design, supercomputer architecture, probabilistic estimation of risk assessment, methods in transport theory, advances in Monte Carlo techniques, and man-machine interface. (orig.)

  20. Sequential bidding in day-ahead auctions for spot energy and power systems reserve

    International Nuclear Information System (INIS)

    Swider, Derk J.

    2005-01-01

    In this paper a novel approach for sequential bidding on day-ahead auction markets for spot energy and power systems reserve is presented. For the spot market a relatively simple method is considered, as a competitive market is assumed. For the reserve market one bidder is assumed to behave strategically and the behavior of the competitors is summarized in a probability distribution of the market price. This results in a method for sequential bidding, where the bidding prices and capacities on the spot and reserve markets are calculated by maximizing a stochastic non-linear objective function of expected profit. An exemplary application shows that the trading sequence leads to increasing bidding capacities and prices in reverse order of the markets' rank. Hence, the consideration of a defined trading sequence greatly influences the mathematical representation of the optimal bidding behavior under price uncertainty in day-ahead auctions for spot energy and power systems reserve. (Author)

  1. The thin layer activation method and its applications in industry

    International Nuclear Information System (INIS)

    1997-01-01

    The thin layer activation (TLA) method is one of the most effective and precise methods for the measurement and monitoring of corrosion (erosion) and wear in industry and is used for on-line remote measurement of wear and corrosion rate of central parts in machines or processing vessels under real operating conditions. This document is a comprehensive manual on TLA method in its applications for monitoring wear and corrosion in industry. It describes the theory and presents case studies on TLA method applications in industry. In addition, in annexes are given tables of nuclear data relating to TLA (decay characteristics, depth distribution of reaction products, activation data for charged-particle nuclear reactions), references from INIS database on TLA and a detailed production of the application of TLA for wear measurement of superhard turning tools

  2. The thin layer activation method and its applications in industry

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-01-01

    The thin layer activation (TLA) method is one of the most effective and precise methods for the measurement and monitoring of corrosion (erosion) and wear in industry and is used for on-line remote measurement of wear and corrosion rate of central parts in machines or processing vessels under real operating conditions. This document is a comprehensive manual on TLA method in its applications for monitoring wear and corrosion in industry. It describes the theory and presents case studies on TLA method applications in industry. In addition, in annexes are given tables of nuclear data relating to TLA (decay characteristics, depth distribution of reaction products, activation data for charged-particle nuclear reactions), references from INIS database on TLA and a detailed production of the application of TLA for wear measurement of superhard turning tools.

  3. Location Prediction Based on Transition Probability Matrices Constructing from Sequential Rules for Spatial-Temporal K-Anonymity Dataset

    Science.gov (United States)

    Liu, Zhao; Zhu, Yunhong; Wu, Chenxue

    2016-01-01

    Spatial-temporal k-anonymity has become a mainstream approach among techniques for protection of users’ privacy in location-based services (LBS) applications, and has been applied to several variants such as LBS snapshot queries and continuous queries. Analyzing large-scale spatial-temporal anonymity sets may benefit several LBS applications. In this paper, we propose two location prediction methods based on transition probability matrices constructed from sequential rules for spatial-temporal k-anonymity datasets. First, we define single-step sequential rules mined from sequential spatial-temporal k-anonymity datasets generated from continuous LBS queries for multiple users. We then construct transition probability matrices from mined single-step sequential rules, and normalize the transition probabilities in the transition matrices. Next, we regard a mobility model for an LBS requester as a stationary stochastic process and compute the n-step transition probability matrices by raising the normalized transition probability matrices to the power n. Furthermore, we propose two location prediction methods: rough prediction and accurate prediction. The former yields the probabilities of arriving at target locations along simple paths that include only current locations, target locations and transition steps. By iteratively combining the probabilities for simple paths with n steps and the probabilities for detailed paths with n-1 steps, the latter method calculates transition probabilities for detailed paths with n steps from current locations to target locations. Finally, we conduct extensive experiments, and the correctness and flexibility of our proposed algorithm are verified. PMID:27508502
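
    To make the n-step prediction step concrete, here is a minimal sketch of the "raise the normalized transition matrix to the power n" idea described above; the transition counts are invented and the code is an illustration, not the authors' implementation.

    import numpy as np

    # Hypothetical single-step transition counts mined from anonymized
    # location sequences (rows: current location, columns: next location).
    counts = np.array([[2, 8, 0],
                       [1, 3, 6],
                       [5, 0, 5]], dtype=float)

    # Row-normalize the counts into a single-step transition matrix P.
    P = counts / counts.sum(axis=1, keepdims=True)

    # n-step transition probabilities of a stationary Markov chain: P**n.
    def n_step(P, n):
        return np.linalg.matrix_power(P, n)

    # "Rough" prediction: probability of being at each location after 3 steps,
    # given that the requester is currently at location 0.
    current = 0
    print(n_step(P, 3)[current])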

  4. Effect of sumatriptan on cerebral blood flow during migraine headache. Measurement by sequential SPECT used 99mTc-ECD background subtraction method

    International Nuclear Information System (INIS)

    Ueda, Takashi; Torihara, Yoshito; Tsuneyoshi, Noritaka; Ikeda, Yoshitomo

    2001-01-01

    The present study was designed to examine the effect of sumatriptan on regional cerebral blood flow (CBF) during migraine headache. Nine cases were examined by the 99m Tc-ECD background subtraction method for the absolute measurement of regional CBF before and after sumatriptan injection. rCBF, except in the occipital and perioccipital lobes, was increased by 10-20% during migraine headache, and significant decreases were observed after sumatriptan injection. Two of the nine cases showed transiently increased systemic blood pressure and cardiac pulse rate; however, migraine headache improved in all cases after injection of sumatriptan. (author)

  5. Effect of sumatriptan on cerebral blood flow during migraine headache. Measurement by sequential SPECT used {sup 99m}Tc-ECD background subtraction method

    Energy Technology Data Exchange (ETDEWEB)

    Ueda, Takashi; Torihara, Yoshito; Tsuneyoshi, Noritaka; Ikeda, Yoshitomo [Miyazaki Social Insurance Hospital (Japan)

    2001-07-01

    The present study was designed to examine the effect of sumatriptan on regional cerebral blood flow (CBF) during migraine headache. Nine cases were examined by the {sup 99m}Tc-ECD background subtraction method for the absolute measurement of regional CBF before and after sumatriptan injection. rCBF, except in the occipital and perioccipital lobes, was increased by 10-20% during migraine headache, and significant decreases were observed after sumatriptan injection. Two of the nine cases showed transiently increased systemic blood pressure and cardiac pulse rate; however, migraine headache improved in all cases after injection of sumatriptan. (author)

  6. Forensic linguistics: Applications of forensic linguistics methods to anonymous letters

    OpenAIRE

    NOVÁKOVÁ, Veronika

    2011-01-01

    The title of my bachelor thesis is "Forensic linguistics: Applications of forensic linguistics methods to anonymous letters". Forensic linguistics is a young and not very well-known branch of applied linguistics. This thesis aims to introduce forensic linguistics and its methods. It has two parts: theory and practice. The theoretical part describes forensic linguistics in general, its two basic aspects utilized in forensic science, and the respective methods. The practical part t...

  7. Advanced scientific computational methods and their applications of nuclear technologies. (1) Overview of scientific computational methods, introduction of continuum simulation methods and their applications (1)

    International Nuclear Information System (INIS)

    Oka, Yoshiaki; Okuda, Hiroshi

    2006-01-01

    Scientific computational methods have advanced remarkably with the progress of nuclear development. They have played the role of a weft connecting the various realms of nuclear engineering, and an introductory course on advanced scientific computational methods and their applications to nuclear technologies has therefore been prepared in serial form. This first issue gives an overview of the methods and an introduction to continuum simulation methods. The finite element method is also reviewed as one of their applications. (T. Tanaka)

  8. Prospective Mathematics Teachers' Opinions about Mathematical Modeling Method and Applicability of This Method

    Science.gov (United States)

    Akgün, Levent

    2015-01-01

    The aim of this study is to identify prospective secondary mathematics teachers' opinions about the mathematical modeling method and the applicability of this method in high schools. The case study design, which is among the qualitative research methods, was used in the study. The study was conducted with six prospective secondary mathematics…

  9. Advanced scientific computational methods and their applications to nuclear technologies. (4) Overview of scientific computational methods, introduction of continuum simulation methods and their applications (4)

    International Nuclear Information System (INIS)

    Sekimura, Naoto; Okita, Taira

    2006-01-01

    Scientific computational methods have advanced remarkably with the progress of nuclear development. They have played the role of a weft connecting the various realms of nuclear engineering, and an introductory course on advanced scientific computational methods and their applications to nuclear technologies has therefore been prepared in serial form. This fourth issue gives an overview of scientific computational methods with an introduction to continuum simulation methods and their applications. Simulation methods for physical radiation effects on materials are reviewed, covering processes such as the binary collision approximation, molecular dynamics, the kinetic Monte Carlo method, the reaction rate method and dislocation dynamics. (T. Tanaka)

  10. A Survey of Multi-Objective Sequential Decision-Making

    OpenAIRE

    Roijers, D.M.; Vamplew, P.; Whiteson, S.; Dazeley, R.

    2013-01-01

    Sequential decision-making problems with multiple objectives arise naturally in practice and pose unique challenges for research in decision-theoretic planning and learning, which has largely focused on single-objective settings. This article surveys algorithms designed for sequential decision-making problems with multiple objectives. Though there is a growing body of literature on this subject, little of it makes explicit under what circumstances special methods are needed to solve multi-obj...

  11. Remarks on sequential designs in risk assessment

    International Nuclear Information System (INIS)

    Seidenfeld, T.

    1982-01-01

    The special merits of sequential designs are reviewed in light of particular challenges that attend risk assessment for human populations. The kinds of ''statistical inference'' are distinguished, and the design problem pursued is the clash between the Neyman-Pearson and Bayesian programs of sequential design. The value of sequential designs is discussed, and Neyman-Pearson versus Bayesian sequential designs are probed in particular. Finally, caveats concerning sequential designs are considered, especially in relation to utilitarianism.
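
    The record contrasts the Neyman-Pearson and Bayesian programs without giving formulas. As a canonical concrete example of a frequentist sequential design, a sketch of Wald's sequential probability ratio test for a Bernoulli event rate is shown below; the hypotheses, error rates and data are hypothetical and unrelated to the paper.

    import numpy as np

    def sprt_bernoulli(data, p0=0.05, p1=0.15, alpha=0.05, beta=0.10):
        """Wald's sequential probability ratio test for H0: p = p0 vs H1: p = p1.
        Returns the decision and the number of observations actually used."""
        upper = np.log((1 - beta) / alpha)      # accept H1 above this bound
        lower = np.log(beta / (1 - alpha))      # accept H0 below this bound
        llr = 0.0
        for n, x in enumerate(data, start=1):
            llr += x * np.log(p1 / p0) + (1 - x) * np.log((1 - p1) / (1 - p0))
            if llr >= upper:
                return "accept H1", n
            if llr <= lower:
                return "accept H0", n
        return "continue sampling", len(data)

    rng = np.random.default_rng(1)
    print(sprt_bernoulli(rng.binomial(1, 0.15, size=500)))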

  12. Application of Multi-Analyte Methods for Pesticide Formulations

    Energy Technology Data Exchange (ETDEWEB)

    Lantos, J.; Virtics, I. [Plant Protection & Soil Conservation Service of Szabolcs-Szatmár-Bereg County, Nyíregyháza (Hungary)

    2009-07-15

    The application of multi-analyte methods for pesticide formulations by GC analysis is discussed. HPLC was used to determine active ingredients. HPLC elution sequences were related to individual n-octanol/water partition coefficients. Real laboratory data are presented and evaluated with regard to validation requirements. The retention time data of pesticides on different HPLC columns under gradient and isocratic conditions are compared to illustrate the applicability of the methodologies. (author)

  13. Application of geo-information science methods in ecotourism exploitation

    Science.gov (United States)

    Dong, Suocheng; Hou, Xiaoli

    2004-11-01

    The application of geo-information science methods in ecotourism development is discussed in this article. Since the 1990s, geo-information science methods, which take the 3S technologies (Geographic Information System, Global Positioning System, and Remote Sensing) as core techniques, have played an important role in resource reconnaissance, data management, environmental monitoring, and regional planning. Geo-information science methods can easily analyze and convert geographic spatial data, and the application of 3S methods supports sustainable development in tourism. Various tasks are involved in the development of ecotourism, such as reconnaissance of ecotourism resources, drawing of tourism maps, handling of mass data, tourism information inquiry, employee management, and quality management of products. The utilization of geo-information methods in ecotourism can make development more efficient by promoting the sustainable development of tourism and the protection of the eco-environment.

  14. Investigations on application of multigrid method to MHD equilibrium analysis

    International Nuclear Information System (INIS)

    Ikuno, Soichiro

    2000-01-01

    The potential of applying the multigrid method to MHD equilibrium analysis is investigated. A nonlinear eigenvalue problem often appears when MHD equilibria are determined by solving the Grad-Shafranov equation numerically. After linearization of the equation, the problem is solved by an iterative method. Although the red-black SOR method or the Gauss-Seidel method is often used for the solution of the linearized equation, solving the problem this way takes much CPU time. The multigrid method is compared with the SOR method for the Poisson problem. The results of the computations show that the CPU time required for the multigrid method is about 1000 times smaller than that for the SOR method. (author)
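
    The record reports only the resulting CPU-time ratio. For readers unfamiliar with the idea, a minimal two-grid sketch for the 1D Poisson problem is shown below as an illustration (the Grad-Shafranov setting of the paper is 2D and nonlinear); the grid size, smoothing counts and right-hand side are arbitrary.

    import numpy as np

    def gauss_seidel(u, f, h, sweeps):
        """Gauss-Seidel sweeps for -u'' = f with u = 0 at both ends."""
        for _ in range(sweeps):
            for i in range(1, u.size - 1):
                u[i] = 0.5 * (h * h * f[i] + u[i - 1] + u[i + 1])
        return u

    def residual(u, f, h):
        r = np.zeros_like(u)
        r[1:-1] = f[1:-1] - (2.0 * u[1:-1] - u[:-2] - u[2:]) / h**2
        return r

    def restrict(r):
        """Full-weighting restriction; coarse node j sits on fine node 2j."""
        rc = np.zeros(r.size // 2 + 1)
        rc[1:-1] = 0.25 * (r[1:-2:2] + 2.0 * r[2:-1:2] + r[3::2])
        return rc

    def prolong(ec, n_fine):
        """Linear interpolation from the coarse grid back to the fine grid."""
        e = np.zeros(n_fine)
        e[::2] = ec                           # coincident nodes
        e[1:-1:2] = 0.5 * (ec[:-1] + ec[1:])  # midpoints
        return e

    def two_grid_cycle(u, f, h, pre=3, post=3):
        u = gauss_seidel(u, f, h, pre)                 # pre-smoothing
        rc = restrict(residual(u, f, h))               # coarse-grid residual
        nc, H = rc.size - 2, 2.0 * h
        Ac = (np.diag(2.0 * np.ones(nc)) - np.diag(np.ones(nc - 1), 1)
              - np.diag(np.ones(nc - 1), -1)) / H**2
        ec = np.zeros_like(rc)
        ec[1:-1] = np.linalg.solve(Ac, rc[1:-1])       # exact coarse solve
        u += prolong(ec, u.size)                       # coarse-grid correction
        return gauss_seidel(u, f, h, post)             # post-smoothing

    N = 64
    h = 1.0 / N
    x = np.linspace(0.0, 1.0, N + 1)
    f = np.pi**2 * np.sin(np.pi * x)                   # exact solution sin(pi*x)
    u = np.zeros(N + 1)
    for cycle in range(8):
        u = two_grid_cycle(u, f, h)
        print(cycle, np.max(np.abs(residual(u, f, h))))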

  15. The application of mixed methods designs to trauma research.

    Science.gov (United States)

    Creswell, John W; Zhang, Wanqing

    2009-12-01

    Despite the use of quantitative and qualitative data in trauma research and therapy, mixed methods studies in this field have not been analyzed to help researchers designing investigations. This discussion begins by reviewing four core characteristics of mixed methods research in the social and human sciences. Combining these characteristics, the authors focus on four select mixed methods designs that are applicable in trauma research. These designs are defined and their essential elements noted. Applying these designs to trauma research, a search was conducted to locate mixed methods trauma studies. From this search, one sample study was selected, and its characteristics of mixed methods procedures noted. Finally, drawing on other mixed methods designs available, several follow-up mixed methods studies were described for this sample study, enabling trauma researchers to view design options for applying mixed methods research in trauma investigations.

  16. Development and application of advanced methods for electronic structure calculations

    DEFF Research Database (Denmark)

    Schmidt, Per Simmendefeldt

    This thesis relates to improvements and applications of beyond-DFT methods for electronic structure calculations that are applied in computational material science. The improvements are of both technical and principal character. The well-known GW approximation is optimized for accurate calculations of electronic excitations in two-dimensional materials by exploiting exact limits of the screened Coulomb potential. This approach reduces the computational time by an order of magnitude, enabling large scale applications. The GW method is further improved by including so-called vertex corrections. This turns... For this reason, part of this thesis relates to developing and applying a new method for constructing so-called norm-conserving PAW setups that are applicable to GW calculations, by using a genetic algorithm. The effect of applying the new setups significantly affects the absolute band positions, both for bulk...

  17. Prony's method application for BWR instabilities characterization

    Energy Technology Data Exchange (ETDEWEB)

    Castillo, Rogelio, E-mail: rogelio.castillo@inin.gob.mx [Instituto Nacional de Investigaciones Nucleares, Carretera México-Toluca s/n, La Marquesa, Ocoyoacac, Estado de México 52750 (Mexico); Ramírez, J. Ramón, E-mail: ramon.ramirez@inin.gob.mx [Instituto Nacional de Investigaciones Nucleares, Carretera México-Toluca s/n, La Marquesa, Ocoyoacac, Estado de México 52750 (Mexico); Alonso, Gustavo, E-mail: gustavo.alonso@inin.gob.mx [Instituto Nacional de Investigaciones Nucleares, Carretera México-Toluca s/n, La Marquesa, Ocoyoacac, Estado de México 52750 (Mexico); Instituto Politecnico Nacional, Unidad Profesional Adolfo Lopez Mateos, Ed. 9, Lindavista, D.F. 07300 (Mexico); Ortiz-Villafuerte, Javier, E-mail: javier.ortiz@inin.gob.mx [Instituto Nacional de Investigaciones Nucleares, Carretera México-Toluca s/n, La Marquesa, Ocoyoacac, Estado de México 52750 (Mexico)

    2015-04-01

    Highlights: • Prony's method application for BWR instability events. • Several BWR instability benchmark are assessed using this method. • DR and frequency are obtained and a new parameter is proposed to eliminate false signals. • Adequate characterization of in-phase and out-of-phase events is obtained. • The Prony's method application is validated. - Abstract: Several methods have been developed for the analysis of reactor power signals during BWR power oscillations. Among them is the Prony's method, its application provides the DR and the frequency of oscillations. In this paper another characteristic of the method is proposed to determine the type of oscillations that can occur, in-phase or out-of-phase. Prony's method decomposes a given signal in all the frequencies that it contains, therefore the DR of the fundamental mode and the first harmonic are obtained. To determine the more dominant pole of the system a normalized amplitude W of the system is calculated, which depends on the amplitude and the damping coefficient. With this term, it can be analyzed which type of oscillations is present, if W of the fundamental mode frequency is the greater, the type of oscillations is in-phase, if W of the first harmonic frequency is the greater, the type of oscillations is out-of-phase. The method is applied to several stability benchmarks to assess its validity. Results show the applicability of the method as an alternative analysis method to determine the type of oscillations occurred.

  18. Prony's method application for BWR instabilities characterization

    International Nuclear Information System (INIS)

    Castillo, Rogelio; Ramírez, J. Ramón; Alonso, Gustavo; Ortiz-Villafuerte, Javier

    2015-01-01

    Highlights: • Prony's method application for BWR instability events. • Several BWR instability benchmarks are assessed using this method. • DR and frequency are obtained and a new parameter is proposed to eliminate false signals. • Adequate characterization of in-phase and out-of-phase events is obtained. • The Prony's method application is validated. - Abstract: Several methods have been developed for the analysis of reactor power signals during BWR power oscillations. Among them is Prony's method, whose application provides the DR and the frequency of the oscillations. In this paper another characteristic of the method is proposed to determine the type of oscillations that can occur, in-phase or out-of-phase. Prony's method decomposes a given signal into all the frequencies that it contains, so the DR of the fundamental mode and of the first harmonic are obtained. To determine the dominant pole of the system, a normalized amplitude W is calculated, which depends on the amplitude and the damping coefficient. With this term, the type of oscillation present can be identified: if W of the fundamental mode frequency is the greater, the oscillations are in-phase; if W of the first harmonic frequency is the greater, the oscillations are out-of-phase. The method is applied to several stability benchmarks to assess its validity. Results show the applicability of the method as an alternative analysis method to determine the type of oscillations that occurred.
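
    The record explains that Prony's method decomposes the signal into damped exponentials whose poles give the DR and frequency. As an illustration only (not the authors' code), a bare-bones Prony fit via linear prediction might look like this; the synthetic signal, sampling step and model order are made up.

    import numpy as np

    def prony(y, dt, p):
        """Classical Prony fit of order p: y[k] ~ sum_i A_i * z_i**k.
        Returns damping coefficients (1/s), frequencies (Hz) and amplitudes."""
        N = len(y)
        # Linear-prediction step: y[k] = a1*y[k-1] + ... + ap*y[k-p]
        A = np.column_stack([y[p - j - 1:N - j - 1] for j in range(p)])
        a = np.linalg.lstsq(A, y[p:N], rcond=None)[0]
        # Roots of the characteristic polynomial give the signal poles.
        z = np.roots(np.concatenate(([1.0], -a)))
        sigma = np.log(np.abs(z)) / dt                 # damping coefficients
        freq = np.angle(z) / (2.0 * np.pi * dt)        # frequencies
        # Amplitudes from a Vandermonde least-squares fit.
        V = np.vander(z, N, increasing=True).T
        amp = np.linalg.lstsq(V, y.astype(complex), rcond=None)[0]
        return sigma, freq, amp

    # Synthetic BWR-like power signal: a damped 0.5 Hz mode plus noise.
    dt = 0.05
    t = np.arange(0, 40, dt)
    y = np.exp(-0.05 * t) * np.cos(2 * np.pi * 0.5 * t) + 0.01 * np.random.randn(t.size)

    sigma, freq, amp = prony(y, dt, p=6)
    k = np.argmax(np.abs(amp) * (freq > 0))            # dominant positive-frequency pole
    # Decay ratio of that mode: DR = exp(sigma * T) with period T = 1/freq.
    print("freq [Hz]:", freq[k], " DR:", np.exp(sigma[k] / freq[k]))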

  19. A reliability evaluation method for NPP safety DCS application software

    International Nuclear Information System (INIS)

    Li Yunjian; Zhang Lei; Liu Yuan

    2014-01-01

    In the field of nuclear power plant (NPP) digital I&C applications, reliability evaluation for safety DCS application software is a key obstacle to be removed. In order to quantitatively evaluate the reliability of NPP safety DCS application software, this paper proposes a reliability evaluation method based on the V&V defect density characteristics of every stage of the software development life cycle, by which the operating reliability level of the software can be predicted before its delivery. This helps to improve the reliability of NPP safety-important software. (authors)

  20. The pursuit of balance in sequential randomized trials

    Directory of Open Access Journals (Sweden)

    Raymond P. Guiteras

    2016-06-01

    In many randomized trials, subjects enter the sample sequentially. Because the covariates for all units are not known in advance, standard methods of stratification do not apply. We describe and assess the method of DA-optimal sequential allocation (Atkinson, 1982) for balancing stratification covariates across treatment arms. We provide simulation evidence that the method can provide substantial improvements in precision over commonly employed alternatives. We also describe our experience implementing the method in a field trial of a clean water and handwashing intervention in Dhaka, Bangladesh, the first time the method has been used. We provide advice and software for future researchers.
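
    The record describes DA-optimal sequential allocation only at a high level. The following is a loose, simplified sketch of the underlying idea (tentatively assign each arriving subject to each arm, prefer the arm that gives the smaller variance of the estimated treatment effect, and use a biased coin to retain randomness); it is not Atkinson's exact rule, and the covariates, bias probability and ridge term are invented.

    import numpy as np

    def allocate_sequentially(covariates, bias=0.8, rng=None):
        """Simplified covariate-balancing sequential allocation for two arms."""
        rng = rng or np.random.default_rng()
        rows, assignments = [], []
        for x in covariates:
            variances = []
            for arm in (-1.0, +1.0):                          # +/-1 treatment coding
                X = np.array(rows + [[1.0, arm, *x]])
                XtX = X.T @ X + 1e-3 * np.eye(X.shape[1])     # ridge for stability
                variances.append(np.linalg.inv(XtX)[1, 1])    # var of effect coefficient
            better = -1.0 if variances[0] < variances[1] else +1.0
            arm = better if rng.random() < bias else -better  # biased coin
            rows.append([1.0, arm, *x])
            assignments.append(int(arm > 0))
        return np.array(assignments)

    rng = np.random.default_rng(0)
    covs = rng.normal(size=(40, 2))                 # two baseline covariates
    arms = allocate_sequentially(covs, rng=rng)
    print(arms.sum(), "of", len(arms), "subjects assigned to treatment")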

  1. Applications of the rotating orientation XRD method to oriented materials

    International Nuclear Information System (INIS)

    Guo Zhenqi; Li Fei; Jin Li; Bai Yu

    2009-01-01

    The rotating orientation x-ray diffraction (RO-XRD) method, based on conventional XRD instruments with a modification of the sample stage, was introduced to investigate the orientation-related issues of oriented materials. In this paper, we show its applications, including the determination of single crystal orientation, assistance in crystal cutting and evaluation of crystal quality. The interpretation of scanning patterns by RO-XRD on polycrystals with large grains, bulk material with several grains and oriented thin films is also presented. These results will hopefully expand the applications of the RO-XRD method and also benefit conventional XRD techniques. (fast track communication)

  2. Pass-transistor asynchronous sequential circuits

    Science.gov (United States)

    Whitaker, Sterling R.; Maki, Gary K.

    1989-01-01

    Design methods for asynchronous sequential pass-transistor circuits, which result in circuits that are hazard- and critical-race-free and which have added degrees of freedom for the input signals, are discussed. The design procedures are straightforward and easy to implement. Two single-transition-time state assignment methods are presented, and hardware bounds for each are established. A surprising result is that the hardware realization for each next-state variable and output variable is identical for a given flow table. Thus, a state machine with N states and M outputs can be constructed using a single layout replicated N + M times.

  3. Mining Emerging Sequential Patterns for Activity Recognition in Body Sensor Networks

    DEFF Research Database (Denmark)

    Gu, Tao; Wang, Liang; Chen, Hanhua

    2010-01-01

    Body Sensor Networks offer many applications in healthcare, well-being and entertainment. One of the emerging applications is recognizing activities of daily living. In this paper, we introduce a novel knowledge pattern named Emerging Sequential Pattern (ESP), a sequential pattern that discovers significant class differences, to recognize both simple (i.e., sequential) and complex (i.e., interleaved and concurrent) activities. Based on ESPs, we build our complex activity models directly upon the sequential model to recognize both activity types. We conduct comprehensive empirical studies to evaluate...
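
    The record defines an ESP as a sequential pattern whose support differs markedly between activity classes. A toy sketch of computing subsequence support and the resulting growth rate is given below; the event sequences and the pattern are invented.

    def contains(sequence, pattern):
        """True if `pattern` occurs in `sequence` as an ordered (non-contiguous) subsequence."""
        it = iter(sequence)
        return all(event in it for event in pattern)

    def support(sequences, pattern):
        return sum(contains(s, pattern) for s in sequences) / len(sequences)

    # Hypothetical sensor-event sequences for two activity classes.
    making_tea = [["kettle", "cup", "teabag", "milk"],
                  ["cup", "kettle", "teabag", "sit"],
                  ["kettle", "teabag", "cup"]]
    making_coffee = [["kettle", "cup", "coffee", "milk"],
                     ["cup", "coffee", "kettle"]]

    pattern = ["kettle", "teabag"]
    s1 = support(making_tea, pattern)
    s2 = support(making_coffee, pattern)
    growth = s1 / s2 if s2 else float("inf")   # large growth rate -> emerging pattern
    print(f"support(tea)={s1:.2f} support(coffee)={s2:.2f} growth rate={growth}")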

  4. Sequential versus simultaneous market delineation

    DEFF Research Database (Denmark)

    Haldrup, Niels; Møllgaard, Peter; Kastberg Nielsen, Claus

    2005-01-01

    Delineation of the relevant market forms a pivotal part of most antitrust cases. The standard approach is sequential. First the product market is delineated, then the geographical market is defined. Demand and supply substitution in both the product dimension and the geographical dimension... and geographical markets. Using a unique data set for prices of Norwegian and Scottish salmon, we propose a methodology for simultaneous market delineation and we demonstrate that compared to a sequential approach conclusions will be reversed. JEL: C3, K21, L41, Q22. Keywords: Relevant market, econometric delineation.

  5. 3rd Workshop on "Combinations of Intelligent Methods and Applications"

    CERN Document Server

    Palade, Vasile

    2013-01-01

    The combination of different intelligent methods is a very active research area in Artificial Intelligence (AI). The aim is to create integrated or hybrid methods that benefit from each of their components. The 3rd Workshop on “Combinations of Intelligent Methods and Applications” (CIMA 2012) was intended to become a forum for exchanging experience and ideas among researchers and practitioners who are dealing with combining intelligent methods either based on first principles or in the context of specific applications. CIMA 2012 was held in conjunction with the 22nd European Conference on Artificial Intelligence (ECAI 2012). This volume includes revised versions of the papers presented at CIMA 2012.

  6. Quantum statistical Monte Carlo methods and applications to spin systems

    International Nuclear Information System (INIS)

    Suzuki, M.

    1986-01-01

    A short review is given concerning the quantum statistical Monte Carlo method based on the equivalence theorem that d-dimensional quantum systems are mapped onto (d+1)-dimensional classical systems. The convergence property of this approximate transformation is discussed in detail. Some applications of this general approach to quantum spin systems are reviewed. A new Monte Carlo method, ''thermo field Monte Carlo method,'' is presented, which is an extension of the projection Monte Carlo method at zero temperature to that at finite temperatures.

  7. Application of Electrical Resistivity Method (ERM) in Groundwater Exploration

    Science.gov (United States)

    Izzaty Riwayat, Akhtar; Nazri, Mohd Ariff Ahmad; Hazreek Zainal Abidin, Mohd

    2018-04-01

    Geophysical methods, traditionally the domain of geophysicists, have become among the most popular methods applied by engineers in civil engineering fields. The Electrical Resistivity Method (ERM) is a geophysical tool that offers a very attractive technique for subsurface profile characterization over large areas. An applicable alternative technique in groundwater exploration such as ERM, which complements existing conventional methods, may produce comprehensive and convincing output and is effective in terms of cost, time, data coverage and sustainability. ERM has been applied in various ways in groundwater exploration. Over the years, conventional methods such as excavation and test boring have been the tools used to obtain information on earth layers, especially during site investigation. There are several problems with the conventional techniques, as they provide information only at the actual drilling points. This review paper was carried out to present the application of ERM in groundwater exploration. Results from ERM can provide additional information to the respective experts for problem solving, such as information on groundwater pollution, leachate, and underground sources of water supply.
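
    As a small numerical aside, the quantity ERM surveys actually measure is the apparent resistivity; for a Wenner array it is rho_a = 2*pi*a*(V/I). The spacings and readings below are invented, and the sketch is illustrative only.

    import numpy as np

    def wenner_apparent_resistivity(a_m, voltage_v, current_a):
        """Apparent resistivity (ohm-m) for a Wenner array with electrode spacing a."""
        return 2.0 * np.pi * a_m * voltage_v / current_a

    # Hypothetical sounding: spacing increased while recording V/I (ohm) at 1 A.
    spacings = np.array([1.0, 2.0, 4.0, 8.0, 16.0])        # m
    resistance = np.array([12.0, 7.5, 4.1, 2.2, 1.3])      # V/I in ohm
    rho_a = wenner_apparent_resistivity(spacings, resistance, 1.0)
    for a, rho in zip(spacings, rho_a):
        print(f"a = {a:5.1f} m  ->  rho_a = {rho:6.1f} ohm-m")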

  8. Minimal Residual Disease Assessment in Lymphoma: Methods and Applications.

    Science.gov (United States)

    Herrera, Alex F; Armand, Philippe

    2017-12-01

    Standard methods for disease response assessment in patients with lymphoma, including positron emission tomography and computed tomography scans, are imperfect. In other hematologic malignancies, particularly leukemias, the ability to detect minimal residual disease (MRD) is increasingly influencing treatment paradigms. However, in many subtypes of lymphoma, the application of MRD assessment techniques, like flow cytometry or polymerase chain reaction-based methods, has been challenging because of the absence of readily detected circulating disease or canonic chromosomal translocations. Newer MRD detection methods that use next-generation sequencing have yielded promising results in a number of lymphoma subtypes, fueling the hope that MRD detection may soon be applicable in clinical practice for most patients with lymphoma. MRD assessment can provide real-time information about tumor burden and response to therapy, noninvasive genomic profiling, and monitoring of clonal dynamics, allowing for many possible applications that could significantly affect the care of patients with lymphoma. Further validation of MRD assessment methods, including the incorporation of MRD assessment into clinical trials in patients with lymphoma, will be critical to determine how best to deploy MRD testing in routine practice and whether MRD assessment can ultimately bring us closer to the goal of personalized lymphoma care. In this review article, we describe the methods available for detecting MRD in patients with lymphoma and their relative advantages and disadvantages. We discuss preliminary results supporting the potential applications for MRD testing in the care of patients with lymphoma and strategies for including MRD assessment in lymphoma clinical trials.

  9. Formal methods for industrial critical systems a survey of applications

    CERN Document Server

    Margaria-Steffen, Tiziana

    2012-01-01

    "Today, formal methods are widely recognized as an essential step in the design process of industrial safety-critical systems. In its more general definition, the term formal methods encompasses all notations having a precise mathematical semantics, together with their associated analysis methods, that allow description and reasoning about the behavior of a system in a formal manner.Growing out of more than a decade of award-winning collaborative work within the European Research Consortium for Informatics and Mathematics, Formal Methods for Industrial Critical Systems: A Survey of Applications presents a number of mainstream formal methods currently used for designing industrial critical systems, with a focus on model checking. The purpose of the book is threefold: to reduce the effort required to learn formal methods, which has been a major drawback for their industrial dissemination; to help designers to adopt the formal methods which are most appropriate for their systems; and to offer a panel of state-of...

  10. Logic-based aggregation methods for ranking student applicants

    Directory of Open Access Journals (Sweden)

    Milošević Pavle

    2017-01-01

    In this paper, we present logic-based aggregation models used for ranking student applicants and we compare them with a number of existing aggregation methods, each more complex than the previous one. The proposed models aim to include dependencies in the data using Logical Aggregation (LA). LA is an aggregation method based on interpolative Boolean algebra (IBA), a consistent multi-valued realization of Boolean algebra. This technique is used for a Boolean-consistent aggregation of attributes that are logically dependent. The comparison is performed in the case of student applicants for master programs at the University of Belgrade. We show that LA has some advantages over the other presented aggregation methods. The software realization of all applied aggregation methods is also provided. This paper may be of interest not only for student ranking, but also for similar problems of ranking people, e.g. employees, team members, etc.

  11. Nanosilicon properties, synthesis, applications, methods of analysis and control

    CERN Document Server

    Ischenko, Anatoly A; Aslalnov, Leonid A

    2015-01-01

    Nanosilicon: Properties, Synthesis, Applications, Methods of Analysis and Control examines the latest developments on the physics and chemistry of nanosilicon. The book focuses on methods for producing nanosilicon, its electronic and optical properties, research methods to characterize its spectral and structural properties, and its possible applications. The first part of the book covers the basic properties of semiconductors, including causes of the size dependence of the properties, structural and electronic properties, and physical characteristics of the various forms of silicon. It presents theoretical and experimental research results as well as examples of porous silicon and quantum dots. The second part discusses the synthesis of nanosilicon, modification of the surface of nanoparticles, and properties of the resulting particles. The authors give special attention to the photoluminescence of silicon nanoparticles. The third part describes methods used for studying and controlling the structure and pro...

  12. Optimization algorithms and applications

    CERN Document Server

    Arora, Rajesh Kumar

    2015-01-01

    Choose the correct solution method for your optimization problem. Optimization: Algorithms and Applications presents a variety of solution techniques for optimization problems, emphasizing concepts rather than rigorous mathematical details and proofs. The book covers both gradient and stochastic methods as solution techniques for unconstrained and constrained optimization problems. It discusses the conjugate gradient method, Broyden-Fletcher-Goldfarb-Shanno algorithm, Powell method, penalty function, augmented Lagrange multiplier method, sequential quadratic programming, method of feasible direc
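
    Since the record lists sequential quadratic programming among the covered techniques, a minimal constrained-optimization example using SciPy's SLSQP solver (an SQP-type method) is sketched below; the toy objective and constraints are not taken from the book.

    import numpy as np
    from scipy.optimize import minimize

    # Minimize a quadratic subject to one equality and one inequality constraint.
    objective = lambda x: (x[0] - 1.0) ** 2 + (x[1] - 2.5) ** 2
    constraints = [
        {"type": "eq",   "fun": lambda x: x[0] + x[1] - 3.0},   # x0 + x1 = 3
        {"type": "ineq", "fun": lambda x: x[0] - 0.5},          # x0 >= 0.5
    ]
    result = minimize(objective, x0=np.array([0.0, 0.0]),
                      method="SLSQP", constraints=constraints)
    print(result.x, result.fun)    # expected optimum near (0.75, 2.25)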

  13. Overview of INAA method and its application in Malaysia

    International Nuclear Information System (INIS)

    Yavar, A.R.; Sarmani, S.B.; Khalafi, H.; Abdul Khalik Wood; Khoo, K.S.

    2011-01-01

    The present work describes the development of nuclear technology in Malaysia and highlights the applications that have been developed by using the instrumental neutron activation analysis (INAA) method. In addition, the present study provides a comprehensive review of INAA for the calculation of neutron flux parameters and the concentration of elements. INAA is a powerful method of sample analysis which identifies and quantifies the elements present in a sample. It is a working instrument with the advantages of experimental simplicity, high accuracy, excellent flexibility with respect to irradiation and counting conditions, and suitability for computerization. In INAA, the sample is irradiated and measured directly. In practice, INAA is based on an absolute, a relative or a single-comparator standardisation method. INAA has been developed since 1982, when the TRIGA Mark II reactor of Malaysia was commissioned. The absolute method has been less utilised, the relative method has been used since 1982, and the k 0 -INAA method, derived from the single-comparator standardization method, has been developed since 1996 in Malaysia. The relative method, because of its advantages such as high accuracy and ease of use, has the widest application in Malaysia. Currently, local universities and the Malaysian Nuclear Agency (MNA) research reactor use the INAA method in Malaysia. (Author)

  14. Overview of INAA Method and Its Application in Malaysia

    International Nuclear Information System (INIS)

    Yavar, A.R.; Sarmani, S.B.; Khalafi, H.; Wood, A.K.; Khoo, K.S.

    2015-01-01

    The present work describes the development of nuclear technology in Malaysia and highlights the applications that have been developed by using the instrumental neutron activation analysis (INAA) method. In addition, the present study provides a comprehensive review of INAA for the calculation of neutron flux parameters and the concentration of elements. INAA is a powerful method of sample analysis which identifies and quantifies the elements present in a sample. It is a working instrument with the advantages of experimental simplicity, high accuracy, excellent flexibility with respect to irradiation and counting conditions, and suitability for computerization. In INAA, the sample is irradiated and measured directly. In practice, INAA is based on an absolute, a relative or a single-comparator standardisation method. INAA has been developed since 1982, when the TRIGA MARK II reactor of Malaysia was commissioned. The absolute method has been less utilised, the relative method has been used since 1982, and the k_0-INAA method, derived from the single-comparator standardization method, has been developed since 1996 in Malaysia. The relative method, because of its advantages such as high accuracy and ease of use, has the widest application in Malaysia. Currently, local universities and the Malaysian Nuclear Agency (MNA) research reactor use the INAA method in Malaysia. (author)
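
    As an illustration of the relative (comparator) method mentioned above, the sketch below computes a concentration from the ratio of decay-corrected specific count rates of a sample and a co-irradiated standard. It assumes identical irradiation, counting geometry and counting times, and all numbers (a hypothetical Na-24 measurement) are invented.

    import numpy as np

    def relative_inaa(conc_std, counts_smp, counts_std, mass_smp, mass_std,
                      half_life_s, decay_smp_s, decay_std_s):
        """Relative INAA: concentration from the ratio of decay-corrected
        specific count rates, assuming identical irradiation and counting."""
        lam = np.log(2.0) / half_life_s
        spec_smp = counts_smp / (mass_smp * np.exp(-lam * decay_smp_s))
        spec_std = counts_std / (mass_std * np.exp(-lam * decay_std_s))
        return conc_std * spec_smp / spec_std

    # Hypothetical peak areas for the 1368 keV line of Na-24 (T1/2 ~ 15 h).
    print(relative_inaa(conc_std=100.0,        # microgram/g of Na in the standard
                        counts_smp=52000, counts_std=48000,
                        mass_smp=0.20, mass_std=0.15,
                        half_life_s=15.0 * 3600,
                        decay_smp_s=2.0 * 3600, decay_std_s=3.0 * 3600))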

  15. Clustering Methods Application for Customer Segmentation to Manage Advertisement Campaign

    Directory of Open Access Journals (Sweden)

    Maciej Kutera

    2010-10-01

    Clustering methods have recently become such advanced algorithms for the analysis of large data collections that they are already counted among data mining methods. They form an ever larger group of methods, evolving quickly and finding more and more varied applications. In this article, our research concerning the usefulness of clustering methods for customer segmentation to manage an advertising campaign is presented. We introduce results obtained by using four selected methods, chosen because their characteristics suggested their applicability to our purposes. One of the analyzed methods, k-means clustering with randomly selected initial cluster seeds, gave very good results in customer segmentation to manage an advertising campaign, and these results are presented in detail in the article. In contrast, one of the methods (hierarchical average linkage) was found useless in customer segmentation. Further investigation of the benefits of clustering methods in customer segmentation to manage an advertising campaign is worth continuing, particularly since finding solutions in this field can give measurable profits for marketing activity.
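
    As an illustration of the kind of k-means segmentation discussed above (with randomly selected initial seeds, as in the article), a minimal sketch on an invented customer table could look like this.

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.preprocessing import StandardScaler

    # Hypothetical customer table: [yearly spend, visits per month, basket size].
    rng = np.random.default_rng(42)
    customers = np.vstack([
        rng.normal([200,  1, 2], [50,  0.5, 1], size=(100, 3)),   # occasional buyers
        rng.normal([1500, 6, 5], [300, 2,   2], size=(100, 3)),   # regular buyers
        rng.normal([5000, 12, 9], [800, 3,  3], size=(100, 3)),   # heavy buyers
    ])

    features = StandardScaler().fit_transform(customers)
    kmeans = KMeans(n_clusters=3, init="random", n_init=10, random_state=0).fit(features)

    for label in range(3):
        segment = customers[kmeans.labels_ == label]
        print(f"segment {label}: {len(segment):3d} customers, "
              f"mean yearly spend {segment[:, 0].mean():7.1f}")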

  16. The J-Matrix Method Developments and Applications

    CERN Document Server

    Alhaidari, Abdulaziz D; Heller, Eric J; Abdelmonem, Mohamed S

    2008-01-01

    This volume aims to provide the fundamental knowledge to appreciate the advantages of the J-matrix method and to encourage its use and further development. The J-matrix method is an algebraic method of quantum scattering with substantial success in atomic and nuclear physics. The accuracy and convergence property of the method compares favourably with other successful scattering calculation methods. Despite its thirty-year long history new applications are being found for the J-matrix method. This book gives a brief account of the recent developments and some selected applications of the method in atomic and nuclear physics. New findings are reported in which experimental results are compared to theoretical calculations. Modifications, improvements and extensions of the method are discussed using the language of the J-matrix. The volume starts with a Foreword by the two co-founders of the method, E.J. Heller and H.A. Yamani and it contains contributions from 24 prominent international researchers.

  17. Numerical methods in image processing for applications in jewellery industry

    OpenAIRE

    Petrla, Martin

    2016-01-01

    The presented thesis deals with a problem from the field of image processing for application in multiple scanning of jewellery stones. The aim is to develop a method for preprocessing and subsequent mathematical registration of images in order to increase the effectiveness and reliability of the output quality control. For these purposes the thesis summarizes the mathematical definition of a digital image as well as the theoretical basis of image registration. It proposes a method adjusting every single image ...

  18. Molecular methods for typing of Helicobacter pylori and their applications

    DEFF Research Database (Denmark)

    Colding, H; Hartzen, S H; Roshanisefat, H

    1999-01-01

    ...e.g. the urease genes. Furthermore, reproducibility, discriminatory power, ease of performance and interpretation, cost and toxic procedures of each method are assessed. To date no direct comparison of all the molecular typing methods described has been performed in the same study with the same H. pylori strains.... However, PCR analysis of the urease gene directly on suspensions of H. pylori or gastric biopsy material seems to be useful for routine use and applicable in specific epidemiological situations....

  19. Monte Carlo methods and applications in nuclear physics

    International Nuclear Information System (INIS)

    Carlson, J.

    1990-01-01

    Monte Carlo methods for studying few- and many-body quantum systems are introduced, with special emphasis given to their applications in nuclear physics. Variational and Green's function Monte Carlo methods are presented in some detail. The status of calculations of light nuclei is reviewed, including discussions of the three-nucleon-interaction, charge and magnetic form factors, the coulomb sum rule, and studies of low-energy radiative transitions. 58 refs., 12 figs

  20. Monte Carlo methods and applications in nuclear physics

    Energy Technology Data Exchange (ETDEWEB)

    Carlson, J.

    1990-01-01

    Monte Carlo methods for studying few- and many-body quantum systems are introduced, with special emphasis given to their applications in nuclear physics. Variational and Green's function Monte Carlo methods are presented in some detail. The status of calculations of light nuclei is reviewed, including discussions of the three-nucleon-interaction, charge and magnetic form factors, the coulomb sum rule, and studies of low-energy radiative transitions. 58 refs., 12 figs.
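
    The record is a review and gives no code; as a textbook illustration of the variational Monte Carlo idea it mentions, the sketch below samples |psi|^2 with Metropolis moves for a 1D harmonic oscillator and averages the local energy (this toy problem is unrelated to the nuclear calculations reviewed, and all parameters are arbitrary).

    import numpy as np

    def local_energy(x, alpha):
        """Local energy of the trial wave function psi = exp(-alpha * x**2)
        for the 1D harmonic oscillator (hbar = m = omega = 1)."""
        return alpha + x * x * (0.5 - 2.0 * alpha * alpha)

    def vmc_energy(alpha, n_steps=200_000, step=1.0, seed=0):
        """Metropolis sampling of |psi|^2 and averaging of the local energy."""
        rng = np.random.default_rng(seed)
        x, energies = 0.0, []
        for _ in range(n_steps):
            trial = x + step * rng.uniform(-1.0, 1.0)
            # Acceptance ratio |psi(trial)|^2 / |psi(x)|^2
            if rng.random() < np.exp(-2.0 * alpha * (trial**2 - x**2)):
                x = trial
            energies.append(local_energy(x, alpha))
        return np.mean(energies[n_steps // 10:])        # discard burn-in

    # The variational minimum sits at alpha = 0.5, where <E> = 0.5 exactly.
    for alpha in (0.3, 0.5, 0.7):
        print(f"alpha = {alpha:.1f}  ->  <E> ~ {vmc_energy(alpha):.4f}")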

  1. Methods for production of aluminium powders and their application fields

    Energy Technology Data Exchange (ETDEWEB)

    Gopienko, V.G.; Kiselev, V.P.; Zobnina, N.S. (Vsesoyuznyj Nauchno-Issledovatel' skij i Proektnyj Inst. Alyuminievoj, magnievoj i ehlektrodnoj promyshlennosti (USSR))

    1984-12-01

    Different types of powder products made of aluminium and its alloys (powder, fine powders, granules and pastes) as well as their basic physicochemical properties are briefly characterized. The principal methods for aluminium powder production are outlined: physicochemical methods, of which melt spraying by compressed gas is the most developed, and physico-mechanical ones. The main application spheres for powder products of aluminium and its alloys are briefly reported.

  2. Methods for production of aluminium powders and their application fields

    International Nuclear Information System (INIS)

    Gopienko, V.G.; Kiselev, V.P.; Zobnina, N.S.

    1984-01-01

    Different types of powder products made of aluminium and its alloys (powder, fine powders, granules and pastes) as well as their basic physicochemical properties are briefly characterized. The principal methods for aluminium powder production are outlined: physicochemical methods, of which melt spraying by compressed gas is the most developed, and physico-mechanical ones. The main application spheres for powder products of aluminium and its alloys are briefly reported.

  3. Validation of an analytical method based on the high-resolution continuum source flame atomic absorption spectrometry for the fast-sequential determination of several hazardous/priority hazardous metals in soil.

    Science.gov (United States)

    Frentiu, Tiberiu; Ponta, Michaela; Hategan, Raluca

    2013-03-01

    The aim of this paper was the validation of a new analytical method based on high-resolution continuum source flame atomic absorption spectrometry for the fast-sequential determination of several hazardous/priority hazardous metals (Ag, Cd, Co, Cr, Cu, Ni, Pb and Zn) in soil after microwave assisted digestion in aqua regia. Determinations were performed on the ContrAA 300 (Analytik Jena) air-acetylene flame spectrometer equipped with a xenon short-arc lamp as a continuum radiation source for all elements, a double monochromator consisting of a prism pre-monochromator and an echelle grating monochromator, and a charge coupled device as detector. For validation a method-performance study was conducted involving the establishment of the analytical performance of the new method (limits of detection and quantification, precision and accuracy). Moreover, the Bland and Altman statistical method was used in analyzing the agreement between the proposed assay and inductively coupled plasma optical emission spectrometry as a standardized method for the multielemental determination in soil. The limits of detection in soil sample (3σ criterion) in the high-resolution continuum source flame atomic absorption spectrometry method were (mg/kg): 0.18 (Ag), 0.14 (Cd), 0.36 (Co), 0.25 (Cr), 0.09 (Cu), 1.0 (Ni), 1.4 (Pb) and 0.18 (Zn), close to those in inductively coupled plasma optical emission spectrometry: 0.12 (Ag), 0.05 (Cd), 0.15 (Co), 1.4 (Cr), 0.15 (Cu), 2.5 (Ni), 2.5 (Pb) and 0.04 (Zn). Accuracy was checked by analyzing 4 certified reference materials and a good agreement for the 95% confidence interval was found in both methods, with recoveries in the range of 94-106% in atomic absorption and 97-103% in optical emission. Repeatability found by analyzing real soil samples was in the range 1.6-5.2% in atomic absorption, similar to that of 1.9-6.1% in optical emission spectrometry. The Bland and Altman method showed no statistically significant difference between the two spectrometric
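
    Two of the calculations mentioned above are easy to make concrete: the 3-sigma detection limit and the Bland-Altman limits of agreement. The sketch below uses invented blank readings, an invented calibration slope and invented paired Cd results; it is illustrative only and not the authors' data.

    import numpy as np

    def lod_3sigma(blank_signals, slope):
        """Detection limit by the 3-sigma criterion: 3 * s_blank / sensitivity."""
        return 3.0 * np.std(blank_signals, ddof=1) / slope

    def bland_altman_limits(method_a, method_b):
        """Mean difference (bias) and 95% limits of agreement between two methods."""
        diff = np.asarray(method_a) - np.asarray(method_b)
        bias = diff.mean()
        spread = 1.96 * diff.std(ddof=1)
        return bias, bias - spread, bias + spread

    # Hypothetical data: 10 blank absorbances with a calibration slope (L/mg),
    # plus Cd results (mg/kg) for the same soils by the two techniques.
    blanks = np.array([0.0021, 0.0018, 0.0025, 0.0020, 0.0017,
                       0.0023, 0.0019, 0.0022, 0.0024, 0.0018])
    print("LOD =", lod_3sigma(blanks, slope=0.015), "mg/L")

    faas = np.array([0.52, 1.10, 0.75, 2.30, 0.98, 1.65])
    icp  = np.array([0.49, 1.15, 0.73, 2.41, 1.02, 1.60])
    print("bias, lower LoA, upper LoA =", bland_altman_limits(faas, icp))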

  4. The methods and applications of optimization of radiation protection

    International Nuclear Information System (INIS)

    Liu Hua

    2007-01-01

    Optimization is the most important principle in radiation protection. The present article briefly presents the concept of and up-to-date progress in optimization of protection, introduces some methods used in current optimization analysis, and presents various applications of optimization of protection. The author emphasizes that optimization of protection is a forward-looking iterative process aimed at preventing exposures before they occur. (author)

  5. A modified sliding spectral method and its application to COSMIC ...

    Indian Academy of Sciences (India)

    A window length of 300 samples is supposed to provide a reasonable resolution. In a spherically symmetric atmosphere, the refractive index n as a function of tangent radius r0 can be computed from the bending angle α as follows.
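
    The sentence above is cut off before the expression itself; for reference, the standard Abel-inversion relation used in radio occultation retrievals (which may differ in notation or detail from the authors' exact formula) can be written in LaTeX as

    \ln n(a_1) \;=\; \frac{1}{\pi}\int_{a_1}^{\infty}\frac{\alpha(a)}{\sqrt{a^{2}-a_{1}^{2}}}\,\mathrm{d}a,
    \qquad r_0 \;=\; \frac{a_1}{n(a_1)},

    where a = n(r) r is the impact parameter and a_1 its value at the tangent point.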

  6. Pseudo-harmonics method: an application to thermal reactors

    International Nuclear Information System (INIS)

    Silva, F.C. da; Rotenberg, S.; Thome Filho, Z.D.

    1985-10-01

    Several applications of the pseudo-harmonics method are presented, aiming to calculate the neutron flux and the perturbed eigenvalue of a nuclear reactor, such as a PWR with three enrichment regions like the Angra-1 reactor. In the reference reactor, perturbations of several types, both global and local, were simulated. The results were compared with those from the direct calculation. (E.G.) [pt

  7. Hybrid Particle-Continuum Numerical Methods for Aerospace Applications

    Science.gov (United States)

    2011-01-01

    Many applications of MEMS/NEMS devices include micro-turbines [3, 4] and micro-sensors for chemical concentrations or gas flow properties [5, 6, 7]...

  8. 40 CFR 425.03 - Sulfide analytical methods and applicability.

    Science.gov (United States)

    2010-07-01

    Title 40, Protection of Environment (2010-07-01 edition), § 425.03: Sulfide analytical methods and applicability. Environmental Protection Agency (continued), Effluent Guidelines and Standards, Leather Tanning and Finishing Point Source Category, General Provisions...

  9. Application of New Variational Homotopy Perturbation Method For ...

    African Journals Online (AJOL)

    This paper discusses the application of the New Variational Homotopy Perturbation Method (NVHPM) for solving integro-differential equations. The advantage of the new scheme is that it does not require discretization, linearization or any restrictive assumption of any form before it is applied. Several test problems are ...

  10. Cattle slurry on grassland - application methods and nitrogen use efficiency

    NARCIS (Netherlands)

    Lalor, S.T.J.

    2014-01-01

    Cattle slurry represents a significant resource on grassland-based farming systems. The objective of this thesis was to investigate and devise cattle slurry application methods and strategies that can be implemented on grassland farms to improve the efficiency with which nitrogen (N) in

  11. APPLICATION OF CHEMICAL METHODS TO THE SOLID WASTE MANAGEMENT

    Directory of Open Access Journals (Sweden)

    C. P. Bulimaga

    2008-12-01

    The present article is a synthesis of the application of chemical methods to the development of hazardous waste management technologies. Some technologies are offered for the neutralization of waste containing hexacyanoferrates, galvanic wastes and vanadium-containing wastes, which are collected at thermoelectric power plants.

  12. Application of laplace transform method in heavy ion reaction research

    International Nuclear Information System (INIS)

    Wang Jinchuan; Xi Hongfei; Guo Zhongyan; Zhan Wenlong; Zhu Yongtai; Zhou Jianqun; Liu Guanhua

    1993-01-01

    The Laplace transform method (LTM) is applied to investigate the effects of different spectroscopy amplifier parameters on the identification of the light charged particles (LCP) emitted from the 12 C (46.7 MeV/u) + 58 Ni reaction. The significance of the application of LTM in heavy-ion experimental nuclear physics is also discussed.

  13. TRIZ method application for improving the special vehicles maintenance

    OpenAIRE

    Petrović Saša; Lozanović-Šajić Jasmina; Knežević Tijana; Pavlović Jovan; Ivanov Goran

    2014-01-01

    TRIZ methodology provides an opportunity for improving the classical engineering approach based on personal knowledge and experience. This paper presents the application of TRIZ methods for improving the maintenance of vehicles on which special equipment is installed. A specific problem is the maintenance of periscopes with a heating system. Protective glass panels with a heating system are rectangular glass elements. Their purpose is to provide mechanical protection ...

  14. Effects of application methods and species of wood on color ...

    African Journals Online (AJOL)

    In this study, the color effects on wood materials of coloring with different application methods (brush, roller sponge and spray gun) and waterborne varnishes were investigated according to ASTM D 2244. For this purpose, experimental samples of Scots pine (Pinus sylvestris L.), oriental beech (Fagus orientalis L.) and ...

  15. Hepatobiliary sequential scintiscanning

    Energy Technology Data Exchange (ETDEWEB)

    Germann, G.; Hottenrott, C.; Maul, F.D.

    1985-01-04

    Duodeno-gastric reflux was evaluated by functional hepato-biliary scintigraphy in 33 patients following gastric surgery. In 16 of 26 patients with gastric resection a reflux was found. The Roux-en-Y reconstruction and the retrocolic Billroth II resection with Braun's anastomosis showed the lowest incidence of reflux. Functional scintigraphy permits an objective diagnosis of reflux without provocation by diagnostic manipulations. The high accuracy in evaluating reflux recommends scintigraphy as an optimal method for postoperative reflux control.

  16. Separation by sequential chromatography of americium, plutonium and neptunium elements: application to the study of trans-uranian elements migration in a European lacustrine system

    International Nuclear Information System (INIS)

    Michel, H.

    1999-01-01

    The nuclear tests carried out in the atmosphere in the Sixties and the accidents, in particular that at the Chernobyl power station in 1986, caused the dispersion of a significant quantity of transuranic elements and fission products. The study of a lake system such as Blelham Tarn in Great Britain, presented in this thesis, can provide interesting answers to problems of environmental management. The determination of the radionuclides in sediment cores made it possible not only to establish the history of the depositions, and consequently the origin of the radionuclides, but also to evaluate the various transfers that took place according to the parameters of the site and the properties of the elements. The transuranic elements studied are plutonium 238 and 239-240, americium 241 and neptunium 237. As alpha-emitting radionuclides, their determination requires complex radiochemical separations. A method was worked out to separate the three radioelements successively using the same chromatographic column. Cesium 137 is the fission product studied; it is determined by direct gamma spectrometry. Lead 210, a natural radionuclide whose atmospheric flux can be assumed constant, makes it possible to obtain a chronology of the various events. The detailed vertical study of sediment cores showed that the accumulation mode of the studied elements is the same and that the dating methods converge. Cesium, more mobile than the transuranic elements in the atmosphere, was detected in both the 1963 and 1986 fallout, whereas transuranic activity appears only in the 1963 fallout. The activity of the 1963 cesium fallout is of the same order of magnitude as that of 1986. The calculation of the diffusion coefficients of the elements in the sediments shows an increased migration of cesium compared to the transuranic elements. An inventory over the whole lake made it possible to note that the atmospheric fallout constitute the

  17. Attack Trees with Sequential Conjunction

    NARCIS (Netherlands)

    Jhawar, Ravi; Kordy, Barbara; Mauw, Sjouke; Radomirović, Sasa; Trujillo-Rasua, Rolando

    2015-01-01

    We provide the first formal foundation of SAND attack trees, which are a popular extension of the well-known attack trees. The SAND attack tree formalism increases the expressivity of attack trees by introducing the sequential conjunctive operator SAND. This operator enables the modeling of

  18. On-line dynamic fractionation and automatic determination of inorganic phosphorus in environmental solid substrates exploiting sequential injection microcolumn extraction and flow injection analysis

    DEFF Research Database (Denmark)

    Buanuam, Janya; Miró, Manuel; Hansen, Elo Harald

    2006-01-01

    Sequential injection microcolumn extraction (SI-MCE), based on the implementation of a soil-containing microcartridge as an external reactor in a sequential injection network, is proposed for the first time for dynamic fractionation of macronutrients in environmental solids, as exemplified by the partitioning of inorganic phosphorus in agricultural soils. The on-line fractionation method capitalises on the accurate metering and sequential exposure of the various extractants to the solid sample by application of programmable flow, precisely coordinated by a syringe pump. Three different soil phase associations for phosphorus, that is, exchangeable, Al- and Fe-bound, and Ca-bound fractions, were elucidated by accommodating in the flow manifold the three steps of the Hieltjes-Lijklema (HL) scheme, involving the use of 1.0 M NH4Cl, 0.1 M NaOH and 0.5 M HCl, respectively, as sequential leaching reagents...

  19. What is the method in applying formal methods to PLC applications?

    NARCIS (Netherlands)

    Mader, Angelika H.; Engel, S.; Wupper, Hanno; Kowalewski, S.; Zaytoon, J.

    2000-01-01

    The question we investigate is how to obtain PLC applications with confidence in their proper functioning. In particular, we are interested in the contribution that formal methods can provide for their development. Our maxim is that the place of a particular formal method in the total picture of system

  20. Sequential simulation approach to modeling of multi-seam coal deposits with an application to the assessment of a Louisiana lignite

    Science.gov (United States)

    Olea, Ricardo A.; Luppens, James A.

    2012-01-01

    There are multiple ways to characterize uncertainty in the assessment of coal resources, but not all of them are equally satisfactory. Increasingly, the tendency is toward borrowing from the statistical tools developed in the last 50 years for the quantitative assessment of other mineral commodities. Here, we briefly review the most recent of such methods and formulate a procedure for the systematic assessment of multi-seam coal deposits taking into account several geological factors, such as fluctuations in thickness, erosion, oxidation, and bed boundaries. A lignite deposit explored in three stages is used for validating models based on comparing a first set of drill holes against data from infill and development drilling. Results were fully consistent with reality, providing a variety of maps, histograms, and scatterplots characterizing the deposit and associated uncertainty in the assessments. The geostatistical approach was particularly informative in providing a probability distribution modeling deposit wide uncertainty about total resources and a cumulative distribution of coal tonnage as a function of local uncertainty.

  1. Speciation fingerprints of binary mixtures by the optimized sequential two-phase separation

    International Nuclear Information System (INIS)

    Macasek, F.

    1995-01-01

    The analysis of separation methods suitable for chemical speciation of radionuclides and metals, and the advantages of the sequential (double) distribution technique, are discussed. The equilibria are relatively easy to control and the method makes it possible to minimize adjustment of the matrix composition, and therefore it also minimizes the disturbance of the original (native) state of the elements. The technique may consist of repeated solvent extraction of the sample, or of replicate equilibration with a sorbent. The common condition of applicability is a linear separation isotherm of the species, which is usually a reasonable assumption at trace concentrations. The equations used for simultaneous fitting are written in general form. 1 tab., 1 fig., 2 refs
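
    As a small numerical illustration of the linear-isotherm condition noted above, the sketch below (not from the paper; the distribution ratio, phase volumes and number of equilibrations are illustrative values) shows why replicate equilibrations remove a predictable fraction of a trace analyte:

        # Minimal illustration: at trace concentrations each equilibration removes a
        # constant fraction, so n sequential extractions leave (1 + D*Vorg/Vaq)**-n of
        # the analyte in the aqueous phase. All parameter values are illustrative.
        def fraction_remaining(D, v_org, v_aq, n):
            return (1.0 / (1.0 + D * v_org / v_aq)) ** n

        print(fraction_remaining(D=5.0, v_org=10.0, v_aq=10.0, n=2))  # ~0.028 left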

  2. Comparison of Sequential and Variational Data Assimilation

    Science.gov (United States)

    Alvarado Montero, Rodolfo; Schwanenberg, Dirk; Weerts, Albrecht

    2017-04-01

    Data assimilation is a valuable tool to improve model state estimates by combining measured observations with model simulations. It has recently gained significant attention due to its potential for using remote sensing products to improve operational hydrological forecasts and for reanalysis purposes. This has been supported by the application of sequential techniques such as the Ensemble Kalman Filter, which require no additional features within the modeling process, i.e. they can use arbitrary black-box models. Alternatively, variational techniques rely on optimization algorithms to minimize a pre-defined objective function. This function describes the trade-off between the amount of noise introduced into the system and the mismatch between simulated and observed variables. While sequential techniques have been commonly applied to hydrological processes, variational techniques are seldom used. We believe this is mainly attributable to the required computation of first-order sensitivities by algorithmic differentiation techniques and related model enhancements, but also to a lack of comparison between the two techniques. We contribute to filling this gap and present results from the assimilation of streamflow data in two basins located in Germany and Canada. The assimilation introduces noise into precipitation and temperature to produce better initial estimates of an HBV model. The results are computed for a hindcast period and assessed using lead-time performance metrics. The study concludes with a discussion of the main features of each technique and their advantages/disadvantages in hydrological applications.
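
    For readers unfamiliar with the variational formulation described above, the following minimal sketch (not the authors' implementation; the function names and weights are illustrative assumptions) shows the kind of objective such a scheme minimizes: a penalty on the noise added to the forcings balanced against the mismatch between simulated and observed streamflow.

        # Minimal sketch of a variational data-assimilation trade-off; all names and
        # weights are illustrative assumptions, not the authors' scheme.
        import numpy as np

        def variational_cost(noise, simulated, observed, w_noise=1.0, w_misfit=1.0):
            """Objective a variational assimilation scheme would minimise."""
            penalty = w_noise * np.sum(noise ** 2)                   # cost of perturbing inputs
            misfit = w_misfit * np.sum((simulated - observed) ** 2)  # data mismatch
            return penalty + misfit

        obs = np.array([3.1, 2.9, 3.4])
        # No noise, poor fit ...
        print(variational_cost(np.zeros(3), np.array([2.0, 2.0, 2.0]), obs))
        # ... versus a small perturbation that improves the fit.
        print(variational_cost(np.array([0.2, 0.1, 0.3]), np.array([3.0, 2.8, 3.5]), obs))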

  3. Application of autoradiography methods for solving problems of microelectronics

    International Nuclear Information System (INIS)

    Frejer, K.; Trojtler, Kh.-Kh.; Birkgol'ts, V.

    1979-01-01

    Methods of contact autoradiography with silver-halide emulsions, and of autoradiography based on the interaction of neutrons with solid track detectors, are successfully used for determining lateral and longitudinal distributions of matter in the basic semiconductor material as well as during its preparation. The possibilities for application and the performance parameters of some autoradiographic methods, in terms of detection sensitivity and local resolution, are considered using the basic material, silicon, as an example. Special attention was paid to the investigation of element combinations, for example boron/phosphorus, as well as to methods for correlating solid track and silver-halide autoradiograms [ru]

  4. Application of the IPEBS method to dynamic contingency analysis

    Energy Technology Data Exchange (ETDEWEB)

    Martins, A C.B. [FURNAS, Rio de Janeiro, RJ (Brazil); Pedroso, A S [Centro de Pesquisas de Energia Eletrica (CEPEL), Rio de Janeiro, RJ (Brazil)

    1994-12-31

    Dynamic contingency analysis is certainly a demanding task in the context of dynamic performance evaluation. This paper presents the results of a test for checking the contingency screening capability of the IPEBS method. A Brazilian 1100-bus, 112-generator system was used in the test; the ranking of the contingencies based on critical clearing times obtained with IPEBS was compared with the ranking derived from detailed time-domain simulation. The results of this comparison encourage us to recommend the use of the method in industry applications, on a complementary basis to the current method of time-domain simulation. (author) 5 refs., 1 fig., 2 tabs.

  5. Statistical disclosure control for microdata methods and applications in R

    CERN Document Server

    Templ, Matthias

    2017-01-01

    This book on statistical disclosure control presents the theory, applications and software implementation of the traditional approach to (micro)data anonymization, including data perturbation methods, disclosure risk, data utility, information loss and methods for simulating synthetic data. Introducing readers to the R packages sdcMicro and simPop, the book also features numerous examples and exercises with solutions, as well as case studies with real-world data, accompanied by the underlying R code to allow readers to reproduce all results. The demand for and volume of data from surveys, registers or other sources containing sensitive information on persons or enterprises have increased significantly over the last several years. At the same time, privacy protection principles and regulations have imposed restrictions on the access and use of individual data. Proper and secure microdata dissemination calls for the application of statistical disclosure control methods to the data before release. This book is in...

  6. A new ore reserve estimation method, Yang Chizhong filtering and inferential measurement method, and its application

    International Nuclear Information System (INIS)

    Wu Jingqin.

    1989-01-01

    Yang Chizhong filtering and inferential measurement method is a new method used for variable statistics of ore deposits. In order to apply this theory to estimating uranium ore reserves under the circumstances of regular or irregular prospecting grids, small ore bodies, few sampling points, and complex occurrence, the author has used this method to estimate the ore reserves in five ore bodies of two deposits and achieved satisfactory results. It is demonstrated that, compared with the traditional block measurement method, this method is simple and clear in formulation, convenient in application, rapid in calculation, accurate in results and less expensive, and that it brings high economic benefits. The procedure and experience in the application of this method and the preliminary evaluation of its results are mainly described

  7. Application Profile Matching Method for Employees Online Recruitment

    Science.gov (United States)

    Sunarti; Rangga, Rahmadian Y.; Marlim, Yulvia Nora

    2017-12-01

    Employees are one of the determining factors in a company's success. Thus, reliable human resources are needed to support the survival of the company. This research takes as a case study PT. Asuransi Bina Dana Arta, Tbk Pekanbaru Branch. The employee recruitment system at PT. Asuransi Bina Dana Arta, Tbk Pekanbaru Branch still uses a manual system based on application letter files, so it takes a long time to determine which applications are accepted and which are rejected. Therefore a system or application is needed that allows the company to determine easily which applicants are accepted or rejected. Profile Matching is a competency assessment process carried out by comparing the written, psychological and interview test scores of one applicant with those of the others. PT. Asuransi Bina Dana Arta, Tbk Pekanbaru branch set the percentage for calculating the NCF (Core Factor Value) at 60% and the NSF (Secondary Factor Value) at 40%, and set the percentage for the total value of the written test at 40%, the total value of the psychological test at 30%, and the total value of the interview at 30%. The final result of this study is a ranking of the applicants based on their final scores: the greater the final score an applicant obtains, the greater the chance of the applicant occupying a position or vacancy. The online recruitment application using the profile matching method can help the employee selection process and support quick employee acceptance decisions. The system can be viewed by directors or owners anywhere because it is online, and it can be used by other company branches.
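
    The weighting scheme quoted in the record translates directly into a small scoring routine. The sketch below is only an illustration: the gap-to-score conversion of Profile Matching is not detailed in the record, so the core and secondary factor values are taken as given, and the applicant data are made up.

        # Weights taken from the record: 60/40 within each test, 40/30/30 across tests.
        def test_total(ncf, nsf):
            return 0.6 * ncf + 0.4 * nsf          # core factor 60 %, secondary factor 40 %

        def final_score(written, psycho, interview):
            return 0.4 * written + 0.3 * psycho + 0.3 * interview

        applicants = {
            "A": final_score(test_total(4.5, 4.0), test_total(4.0, 3.5), test_total(4.2, 4.0)),
            "B": final_score(test_total(4.0, 4.5), test_total(4.4, 4.1), test_total(3.9, 4.3)),
        }
        # Larger final score -> better chance of filling the vacancy.
        print(sorted(applicants.items(), key=lambda kv: kv[1], reverse=True))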

  8. Applications and Preparation Methods of Copper Chromite Catalysts: A Review

    Directory of Open Access Journals (Sweden)

    Ram Prasad

    2011-11-01

    In this review article various applications and preparation methods of copper chromite catalysts are discussed. It is concluded that copper chromite is a versatile catalyst which not only catalyses numerous processes of commercial importance and of national programs related to defence and space research, but also finds application in the most pressing problem worldwide, i.e. environmental pollution control. Several other very useful applications of copper chromite catalysts are in the production of clean energy, drugs and agrochemicals, etc. About 15 preparation methods are discussed, which gives a clear idea of the dependence of catalytic activity and selectivity on the way the catalyst is prepared. In view of the globally increasing interest in copper chromite catalysis, a re-examination of the important applications of such catalysts and their preparation methods is thus timely. This review paper encloses 369 references, including a well-conceived tabulation of the newer state of the art. Copyright © 2011 by BCREC UNDIP. All rights reserved. (Received: 19th March 2011, Revised: 3rd May 2011, Accepted: 23rd May 2011) [How to Cite: R. Prasad and P. Singh (2011). Applications and Preparation Methods of Copper Chromite Catalysts: A Review. Bulletin of Chemical Reaction Engineering & Catalysis, 6(2): 63-113. doi:10.9767/bcrec.6.2.829.63-113]

  9. Solid state nuclear track detection principles, methods and applications

    CERN Document Server

    Durrani, S A; ter Haar, D

    1987-01-01

    Solid State Nuclear Track Detection: Principles, Methods and Applications is the second book written by the authors, after Nuclear Tracks in Solids: Principles and Applications. The book is meant as an introduction to the subject of solid state nuclear track detection. The text covers the interactions of charged particles with matter; the nature of the charged-particle track; the methodology and geometry of track etching; thermal fading of latent damage trails on tracks; the use of dielectric track recorders in particle identification; radiation dosimetry; and solid state nuclear track detection...

  10. Automatic Hypocenter Determination Method in JMA Catalog and its Application

    Science.gov (United States)

    Tamaribuchi, K.

    2017-12-01

    The number of detectable earthquakes around Japan has increased with the development of the high-sensitivity seismic observation network. After the 2011 Tohoku-oki earthquake, the number of detectable earthquakes increased dramatically due to its aftershocks and induced earthquakes. This enormous number of earthquakes made it impossible to determine all the hypocenters manually. The Japan Meteorological Agency (JMA), which produces the earthquake catalog in Japan, has developed a new automatic hypocenter determination method and started its operation on April 1, 2016. This method (named the PF method, for Phase combination Forward search method) can determine the hypocenters of earthquakes that occur simultaneously by searching for the optimal combination of P- and S-wave arrival times and the maximum amplitudes using a Bayesian estimation technique. In the 2016 Kumamoto earthquake sequence, about 70,000 aftershocks were detected automatically during the period from April 14 to the end of May, and the method contributed to real-time monitoring of the seismic activity. Furthermore, this method can also be applied to Earthquake Early Warning (EEW). The application of this method to EEW is called the IPF method, and it has been used as the hypocenter determination method of the EEW system at JMA since December 2016. By developing this method further, it is possible to contribute not only to speeding up catalog production, but also to improving the reliability of early warnings.

  11. Reconstruction of a ring applicator using CT imaging: impact of the reconstruction method and applicator orientation

    DEFF Research Database (Denmark)

    Hellebust, Taran Paulsen; Tanderup, Kari; Bergstrand, Eva Stabell

    2007-01-01

    in multiplanar reconstructed images (MPR) and (3) library plans, using pre-defined applicator geometry (LIB). The doses to the lead pellets were calculated. The relative standard deviation (SD) for all reconstruction methods was less than 3.7% in the dose points. The relative SD for the LIB method...

  12. A mathematical programming approach for sequential clustering of dynamic networks

    Science.gov (United States)

    Silva, Jonathan C.; Bennett, Laura; Papageorgiou, Lazaros G.; Tsoka, Sophia

    2016-02-01

    A common analysis performed on dynamic networks is community structure detection, a challenging problem that aims to track the temporal evolution of network modules. An emerging area in this field is evolutionary clustering, where the community structure of a network snapshot is identified by taking into account both its current state and previous time points. Based on this concept, we have developed a mixed integer non-linear programming (MINLP) model, SeqMod, that sequentially clusters each snapshot of a dynamic network. The modularity metric is used to determine the quality of the community structure of the current snapshot, and the historical cost is accounted for by optimising the number of node pairs co-clustered at the previous time point that remain so in the current snapshot partition. Our method is tested on social networks of interactions among high school students, college students and members of the Brazilian Congress. We show that, for an adequate parameter setting, our algorithm detects the classes to which these students belong more accurately than partitioning each time step individually or partitioning the aggregated snapshots. Our method also detects drastic discontinuities in interaction patterns across network snapshots. Finally, we present comparative results with similar community detection methods for time-dependent networks from the literature. Overall, we illustrate the applicability of mathematical programming as a flexible, adaptable and systematic approach for these community detection problems. Contribution to the Topical Issue "Temporal Network Theory and Applications", edited by Petter Holme.
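
    A minimal sketch of the trade-off described above, snapshot modularity plus a reward for node pairs that remain co-clustered across consecutive snapshots, is given below. It only scores candidate partitions; the paper optimises this with a MINLP solver (SeqMod), which is not reproduced here, and the weight alpha and toy graph are assumptions.

        # Scores a candidate partition of the current snapshot; not the SeqMod model.
        from itertools import combinations

        def modularity(adj, labels):
            """Newman modularity of a labelled undirected graph from its adjacency matrix."""
            two_m = sum(map(sum, adj))
            deg = [sum(row) for row in adj]
            q = sum(adj[i][j] - deg[i] * deg[j] / two_m
                    for i in range(len(adj)) for j in range(len(adj))
                    if labels[i] == labels[j])
            return q / two_m

        def sequential_score(adj_now, labels_now, labels_prev, alpha=0.1):
            """Snapshot quality plus a reward for previously co-clustered pairs kept together."""
            kept = sum(1 for i, j in combinations(range(len(adj_now)), 2)
                       if labels_prev[i] == labels_prev[j] and labels_now[i] == labels_now[j])
            return modularity(adj_now, labels_now) + alpha * kept

        # Toy example: a 4-node path graph whose previous partition was {0,1} / {2,3}.
        adj = [[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]]
        print(sequential_score(adj, [0, 0, 1, 1], [0, 0, 1, 1]))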

  13. Application of the selected physical methods in biological research

    Directory of Open Access Journals (Sweden)

    Jaromír Tlačbaba

    2013-01-01

    This paper deals with the application of acoustic emission (AE), a non-destructive method that currently has extensive applications. This method is used for measuring internal defects of materials. AE has a high potential in further research and development to extend its application to the field of process engineering. In this respect, acoustic emission monitoring is most elaborate under laboratory conditions with regard to external stimuli. The aim of the project is to apply acoustic emission to record the activity of bees in different seasons. The mission is to apply a new perspective on the behavior of colonies by means of acoustic emission, which captures sound propagation in the material. Vibration is an integral part of communication in the community. Sensing colonies with the support of this method is used to understand the biological behavior of colonies in response to stimuli, clutches, colony development, etc. Simulated conditions supported by an acoustic emission monitoring system illustrate colony activity. The collected information will be used to present a comprehensive view of the life cycle and behavior of honey bees (Apis mellifera). The use of information about the activities of bees gives a comprehensive perspective on the use of acoustic emission in the field of biological research.

  14. Applicability of transfer tensor method for open quantum system dynamics.

    Science.gov (United States)

    Gelzinis, Andrius; Rybakovas, Edvardas; Valkunas, Leonas

    2017-12-21

    Accurate simulation of open quantum system dynamics is a long-standing issue in the field of chemical physics. Exact methods exist but are costly, while perturbative methods are limited in their applicability. Recently a new black-box type method, called the transfer tensor method (TTM), was proposed [J. Cerrillo and J. Cao, Phys. Rev. Lett. 112, 110401 (2014)]. It allows one to accurately simulate long-time dynamics with the numerical cost of solving a time-convolution master equation, provided many initial system evolution trajectories are obtained from some exact method beforehand. The possible time savings thus strongly depend on the ratio of the total versus initial evolution lengths. In this work, we investigate the parameter regimes where an application of TTM would be most beneficial in terms of computational time. We identify several promising parameter regimes. Although some of them correspond to cases where perturbative theories could be expected to perform well, we find that the accuracy of such approaches depends on system parameters in a more complex way than is commonly thought. We propose that the TTM should be applied whenever system evolution is expected to be long and the accuracy of perturbative methods cannot be ensured, or in cases when the system under consideration does not correspond to any single perturbative regime.
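
    The transfer-tensor construction referenced above admits a compact numerical sketch. In the example below (an illustration, not the authors' code), superoperators act on vectorised density matrices, and the short-time maps that take rho_0 to rho_k are assumed to come from some exact reference method.

        # Short-time maps E_k (rho_k = E_k rho_0) are assumed given; plain matrices here.
        import numpy as np

        def transfer_tensors(E):
            """T_1 = E_1,  T_n = E_n - sum_{m=1}^{n-1} T_m E_{n-m}."""
            T = []
            for i, Ei in enumerate(E):
                Ti = Ei.copy()
                for j in range(i):
                    Ti -= T[j] @ E[i - 1 - j]
                T.append(Ti)
            return T

        def propagate(rho0, E, n_steps):
            """Exact maps inside the learning window, transfer tensors beyond it."""
            T = transfer_tensors(E)
            states = [rho0] + [Ek @ rho0 for Ek in E]
            for n in range(len(states), n_steps + 1):
                states.append(sum(T[k] @ states[n - 1 - k] for k in range(len(T))))
            return states

        # Sanity check: for Markovian maps E_k = M**k the tensors reduce to T_1 = M and
        # T_k = 0 for k > 1, so the propagation reproduces M**n rho_0.
        M = np.diag([1.0, 0.5])
        E = [np.linalg.matrix_power(M, k) for k in range(1, 4)]
        rho0 = np.array([1.0, 1.0])
        print(np.allclose(propagate(rho0, E, 6)[6],
                          np.linalg.matrix_power(M, 6) @ rho0))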

  15. Application of blended learning in teaching statistical methods

    Directory of Open Access Journals (Sweden)

    Barbara Dębska

    2012-12-01

    The paper presents the application of a hybrid method (blended learning - linking traditional education with on-line education) to teach selected problems of mathematical statistics. This includes teaching the application of mathematical statistics to the evaluation of laboratory experimental results. An on-line statistics course was developed to form an integral part of the module ‘methods of statistical evaluation of experimental results’. The course complies with the principles outlined in the Polish National Framework of Qualifications with respect to the scope of knowledge, skills and competencies that students should have acquired at course completion. The paper presents the structure of the course and the educational content provided through multimedia lessons made accessible on the Moodle platform. Following courses which used the traditional method of teaching and courses which used the hybrid method, students' test results were compared and discussed to evaluate the effectiveness of the hybrid method of teaching compared to the traditional method.

  16. A new dynamic HRA method and its application

    International Nuclear Information System (INIS)

    Je, Moo Sung; Park, Chang Kyoo

    1995-01-01

    This paper presents a new dynamic HRA (Human Reliability Analysis) method and its application to quantifying the human error probabilities in implementing an accident management action. For comparison of current HRA methods with the new method, the characteristics of THERP, HCR, and SLIM-MAUD, which are the most frequently used methods in PSAs, are discussed. The action associated with the implementation of cavity flooding during a station blackout sequence is considered for the application. This method is based on the concept of a quantified correlation between performance requirement and performance achievement. The MAAP 3.0B code and the Latin Hypercube sampling technique are used to determine the uncertainty of the performance achievement parameter. Meanwhile, the value of the performance requirement parameter is obtained from interviews. Based on the stochastic distributions obtained, human error probabilities are calculated with respect to various means and variances of the timings. It is shown that this method is very flexible in that it can be applied to any kind of operator action, including actions associated with the implementation of accident management strategies. 1 fig., 3 tabs., 17 refs. (Author)

  17. Spectral/hp element methods: Recent developments, applications, and perspectives

    Science.gov (United States)

    Xu, Hui; Cantwell, Chris D.; Monteserin, Carlos; Eskilsson, Claes; Engsig-Karup, Allan P.; Sherwin, Spencer J.

    2018-02-01

    The spectral/hp element method combines the geometric flexibility of the classical h-type finite element technique with the desirable numerical properties of spectral methods, employing high-degree piecewise polynomial basis functions on coarse finite element-type meshes. The spatial approximation is based upon orthogonal polynomials, such as Legendre or Chebyshev polynomials, modified to accommodate a C0-continuous expansion. Computationally and theoretically, by increasing the polynomial order p, high-precision solutions and fast convergence can be obtained and, in particular, under certain regularity assumptions an exponential reduction in the approximation error between numerical and exact solutions can be achieved. This method has now been applied in many simulation studies of both fundamental and practical engineering flows. This paper briefly describes the formulation of the spectral/hp element method and provides an overview of its application to computational fluid dynamics. In particular, it focuses on the use of the spectral/hp element method in transitional flows and ocean engineering. Finally, some of the major challenges to be overcome in order to use the spectral/hp element method in more complex science and engineering applications are discussed.
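
    A quick way to see the p-convergence claim, outside the spectral/hp element machinery itself, is to fit a smooth function with Legendre polynomials of increasing degree and watch the maximum error fall; the snippet below is such an illustration (a least-squares fit rather than a Galerkin projection, with an arbitrary smooth target function).

        # Legendre fits of increasing degree p on the reference element [-1, 1].
        import numpy as np

        x = np.linspace(-1.0, 1.0, 2001)
        f = np.exp(np.sin(np.pi * x))          # arbitrary smooth target
        for p in (2, 4, 8, 16):
            coeffs = np.polynomial.legendre.legfit(x, f, deg=p)
            err = np.max(np.abs(np.polynomial.legendre.legval(x, coeffs) - f))
            print(f"p = {p:2d}, max error = {err:.2e}")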

  18. Application of thin layer activation method to industrial use

    International Nuclear Information System (INIS)

    Yamamoto, Masago; Hatakeyama, Noriko

    1996-01-01

    A thin layer activation method was reviewed for non-destructive, rapid, precise and real-time measurement of wear and corrosion. The review covered wear measurement, the principle of the method, actual measurements, applications, and laws and regulations. The method activates only the material surface with accelerated ions such as p, d and He ions produced by a cyclotron, Van de Graaff machine or other accelerators, and uses the produced radioisotopes as a tracer. It is widely used in the tribology field and is more useful than the earlier reactor-based method, since the latter activates the whole material. Application of the method reportedly saved 80% of the cost and 90% of the time in the wear measurement of automobile parts such as engines and transmissions. In practice, the activated material is incorporated into the part to be run, and the radioactivity is measured externally or in suitably collected wear particles. The activation thickness is generally in the range of 10-200 μm and the resulting radioactivity 0.2-2 MBq. In most cases in Japan, the method falls under the law concerning prevention of radiation hazards due to radioisotopes, etc. (K.H.)

  19. Multi-agent sequential hypothesis testing

    KAUST Repository

    Kim, Kwang-Ki K.; Shamma, Jeff S.

    2014-01-01

    incorporate costs of taking private/public measurements, costs of time-difference and disagreement in actions of agents, and costs of false declaration/choices in the sequential hypothesis testing. The corresponding sequential decision processes have well

  20. Advantages and applicability of commonly used homogenisation methods for climate data

    Science.gov (United States)

    Ribeiro, Sara; Caineta, Júlio; Henriques, Roberto; Soares, Amílcar; Costa, Ana Cristina

    2014-05-01

    Homogenisation of climate data is a very relevant subject, since these data are required as input in a wide range of studies, such as atmospheric modelling, weather forecasting, climate change monitoring, or hydrological and environmental projects. Often, climate data series include non-natural irregularities which have to be detected and removed prior to their use, otherwise they would generate biased and erroneous results. Relocation of weather stations or changes in the measuring instruments are amongst the most relevant causes of these inhomogeneities. Depending on the climate variable, its temporal resolution and spatial continuity, homogenisation methods can be more or less effective. For example, due to its natural variability, precipitation is identified as a very challenging variable to homogenise. During the last two decades, numerous methods have been proposed to homogenise climate data. In order to compare, evaluate and develop those methods, the European project COST Action ES0601, Advances in homogenisation methods of climate series: an integrated approach (HOME), was launched in 2008. Existing homogenisation methods were improved based on the benchmark exercise issued by this project. A recent approach based on Direct Sequential Simulation (DSS), not yet evaluated by the benchmark exercise, is also presented as an innovative methodology for homogenising climate data series. DSS has already proved to be a successful geostatistical method in environmental and hydrological studies, and it provides promising results for the homogenisation of climate data. Since DSS is a geostatistical stochastic approach, it accounts for the joint spatial and temporal dependence between observations, as well as the relative importance of stations both in terms of distance and correlation. This work presents a chronological review of the most commonly used homogenisation methods for climate data and available software packages. A short description and classification is

  1. Efeito da interação do nicosulfuron e chlorpyrifos sobre o banco de sementes e os atributos microbianos do solo Effect of sequential nicosulfuron and chlorpyrifos application on seed bank and soil microbial characteristics

    Directory of Open Access Journals (Sweden)

    Taciane Almeida de Oliveira

    2009-06-01

    During the period of weed competition and fall armyworm incidence in the corn crop, herbicides and insecticides such as nicosulfuron and chlorpyrifos need to be applied within short time intervals. The aim of this study was to evaluate the effect of sequential applications of nicosulfuron and chlorpyrifos on the emergence of seedlings from the seed bank in the soil, the basal CO2 emission rate, and the microbial biomass carbon (MBC) of the soil. Sequential applications of nicosulfuron (doses from 0 to 64 g ha-1) with or without chlorpyrifos (0 and 240 g ha-1) were performed. At 20, 40 and 60 days after application (DAA) of the products, the species of all seedlings that emerged from the seed bank were identified, and the frequency, density and abundance were estimated, as well as the importance value (IV). At 60 DAA the CO2 emission rate and MBC were also determined, and based on the relationship between the accumulated CO2 and total soil MBC the metabolic coefficient (qCO2) was estimated. The application of nicosulfuron at rates over 20 g ha-1 severely affected the seedling dry weight and the number of species. In the presence of the herbicide, the species with the highest IV were Boerhavia diffusa and Commelina benghalensis. There was a decrease in the basal soil respiration rate with increasing nicosulfuron doses, both in the presence and in the absence of the insecticide chlorpyrifos. There was a linear decrease in MBC in all cases regardless of the chlorpyrifos application, although the reduction was 4.5 times greater in soil that received the combined application of the insecticide and nicosulfuron. The qCO2 confirmed the negative effect of the application of insecticide and herbicide. It was concluded that the application of chlorpyrifos + nicosulfuron causes a negative impact on the seed bank in the soil and on the soil microbial activity.

  2. Application of Canonical Effective Methods to Background-Independent Theories

    Science.gov (United States)

    Buyukcam, Umut

    Effective formalisms play an important role in analyzing phenomena above some given length scale when complete theories are not accessible. In diverse exotic but physically important cases, the usual path-integral techniques used in a standard Quantum Field Theory approach seldom serve as adequate tools. This thesis presents a new effective method for quantum systems, called the Canonical Effective Method, which has particularly wide applicability in background-independent theories, as in the case of gravitational phenomena. The central purpose of this work is to employ these techniques to obtain semi-classical dynamics from canonical quantum gravity theories. An application to non-associative quantum mechanics is developed and testable results are obtained. Types of non-associative algebras relevant for magnetic-monopole systems are discussed. Possible modifications of the hypersurface deformation algebra and the emergence of effective space-times are presented.

  3. Principles of Vibrational Spectroscopic Methods and their Application to Bioanalysis

    DEFF Research Database (Denmark)

    Moore, David S.; Jepsen, Peter Uhd; Volka, Karel

    2014-01-01

    imaging, fiber optic probes for in vivo and in vitro analysis, and methods to obtain depth profile information. The issue of fluorescence interference is considered from the perspectives of excitation wavelength selection and data treatment. Methods to optimize the signal-to-noise ratio with minimized excitation laser irradiance, to avoid sample damage, are also discussed. This chapter then reviews applications of Raman spectroscopy to bioanalysis. Areas discussed include pathology, cytopathology, single-cell analysis, in vivo and in vitro tissue characterization, chemical composition of cell components (such as the conformation of DNA and proteins), vibrations of inter- and intramolecular hydrogen bonds in solid-state materials, as well as picosecond dynamics in liquid solutions. This chapter reviews modern instrumentation and techniques for THz spectroscopy, with emphasis on applications in bioanalysis.

  4. Quantal density functional theory II. Approximation methods and applications

    International Nuclear Information System (INIS)

    Sahni, Viraht

    2010-01-01

    This book is on approximation methods and applications of Quantal Density Functional Theory (QDFT), a new local effective-potential-energy theory of electronic structure. What distinguishes the theory from traditional density functional theory is that the electron correlations due to the Pauli exclusion principle, Coulomb repulsion, and the correlation contribution to the kinetic energy -- the Correlation-Kinetic effects -- are separately and explicitly defined. As such it is possible to study each property of interest as a function of the different electron correlations. Approximation methods based on the incorporation of different electron correlations, as well as a many-body perturbation theory within the context of QDFT, are developed. The applications are to the few-electron inhomogeneous electron gas systems in atoms and molecules, as well as to the many-electron inhomogeneity at metallic surfaces. (orig.)

  5. Application of DNA-based methods in forensic entomology.

    Science.gov (United States)

    Wells, Jeffrey D; Stevens, Jamie R

    2008-01-01

    A forensic entomological investigation can benefit from a variety of widely practiced molecular genotyping methods. The most commonly used is DNA-based specimen identification. Other applications include the identification of insect gut contents and the characterization of the population genetic structure of a forensically important insect species. The proper application of these procedures demands that the analyst be technically expert. However, one must also be aware of the extensive list of standards and expectations that many legal systems have developed for forensic DNA analysis. We summarize the DNA techniques that are currently used in, or have been proposed for, forensic entomology and review established genetic analyses from other scientific fields that address questions similar to those in forensic entomology. We describe how accepted standards for forensic DNA practice and method validation are likely to apply to insect evidence used in a death or other forensic entomological investigation.

  6. Fuzzy multiple objective decision making methods and applications

    CERN Document Server

    Lai, Young-Jou

    1994-01-01

    In the last 25 years, the fuzzy set theory has been applied in many disciplines such as operations research, management science, control theory, artificial intelligence/expert system, etc. In this volume, methods and applications of crisp, fuzzy and possibilistic multiple objective decision making are first systematically and thoroughly reviewed and classified. This state-of-the-art survey provides readers with a capsule look into the existing methods, and their characteristics and applicability to analysis of fuzzy and possibilistic programming problems. To realize practical fuzzy modelling, it presents solutions for real-world problems including production/manufacturing, location, logistics, environment management, banking/finance, personnel, marketing, accounting, agriculture economics and data analysis. This book is a guided tour through the literature in the rapidly growing fields of operations research and decision making and includes the most up-to-date bibliographical listing of literature on the topi...

  7. Gastrin radioimmunoassay. Description and application of a novel method

    International Nuclear Information System (INIS)

    Nemeth, J.; Jakab, B.; Schweibert, I.; Szolcsanyi, J.; Oroszi, G.; Szilvassy, Z.

    2002-01-01

    The development and application of a novel gastrin radioimmunoassay (RIA) are described. 125I-labeling of non-sulphated human gastrin-17 (nshG-17) was performed by the iodogen method, and the mono-iodinated hormone, used as the RIA tracer, was separated by reversed-phase high performance liquid chromatography (HPLC). Serum gastrin levels were measured in rats in response to intravenous application of isoproterenol, a non-selective beta receptor agonist, and phenylephrine, a selective alpha-1 receptor agonist, using a newly developed method specific for the C-terminal part of the hormone. Isoproterenol at clinically relevant doses elicited a significant increase in serum gastrin concentration in a dose-dependent fashion, whereas phenylephrine was without effect. (author)

  8. Characterization of Developer Application Methods Used in Fluorescent Penetrant Inspection

    Science.gov (United States)

    Brasche, L. J. H.; Lopez, R.; Eisenmann, D.

    2006-03-01

    Fluorescent penetrant inspection (FPI) is the most widely used inspection method for aviation components, seeing use in production as well as in in-service inspection applications. FPI is a multiple-step process requiring attention to the process parameters of each step in order to enable a successful inspection. A multiyear program is underway to evaluate the most important factors affecting the performance of FPI, to determine whether existing industry specifications adequately address control of the process parameters, and to provide the needed engineering data to the public domain. The final step prior to the inspection is the application of developer, with typical aviation inspections involving the use of dry powder (form d), usually applied using either a pressure wand or a dust storm chamber. Results from several typical dust storm chambers and wand applications have shown less than optimal performance. Measurements of indication brightness and recording of the UVA image, and in some cases formal probability of detection (POD) studies, were used to assess the developer application methods. Key conclusions and initial recommendations are provided.

  9. A STUDY OF TEXT MINING METHODS, APPLICATIONS, AND TECHNIQUES

    OpenAIRE

    R. Rajamani*1 & S. Saranya2

    2017-01-01

    Data mining is used to extract useful information from large amounts of data. It is used to implement and solve different types of research problems. The research-related areas in data mining are text mining, web mining, image mining, sequential pattern mining, spatial mining, medical mining, multimedia mining, structure mining and graph mining. Text mining, also referred to as text data mining, is also called knowledge discovery in text (KDT) or intelligent text analysis. T...

  10. Successful applications of montessori methods with children at risk for learning disabilities.

    Science.gov (United States)

    Pickering, J S

    1992-12-01

    The critical elements in the Montessori philosophy are respect for the child, individualization of the program to that child, and the fostering of independence. With her research background, Maria Montessori devised a multisensory developmental method and designed materials which isolate each concept the teacher presents to the child. In presenting these materials the teacher observes the concept and skill development level of the child, ascertaining areas of strength and weakness and matching the next presentation to the child's level of development. Using small sequential steps, the teacher works to ameliorate weakness and guide the student to maximize his strengths. These presentations, usually initiated by the child, enhance cognitive growth using a process which integrates his physical, social, and emotional development. The curriculum contains four major content areas: Practical Life; Sensorial; Oral and Written Language; and Mathematics. Geography, History, Science, Art, Music, Literature, and Motor Skills are also included. In all of these the Montessori presentations build from the simple to the complex, from the concrete to the abstract, and from percept to concept. Vocabulary and language usage are integral to each presentation. The procedures introduced through these presentations are designed to enhance attention, increase self-discipline and self-direction, and to promote order, organization, and the development of a work cycle. At-risk children benefit from the structure, the procedures, and the curriculum. Applications of this method require more teacher selection of materials and direct teaching, particularly of language and math symbols and their manipulations. This early childhood intervention provides an individualized program which allows the at-risk child a successful experience at the preschool level. The program includes a strong conceptual preparation for later academic learning and it promotes the development of a healthy self-concept.

  11. Application of Stochastic Sensitivity Analysis to Integrated Force Method

    Directory of Open Access Journals (Sweden)

    X. F. Wei

    2012-01-01

    As a new formulation in structural analysis, the Integrated Force Method has been successfully applied to many structures in civil, mechanical, and aerospace engineering due to its accurate estimates of forces. It is now being further extended to the probabilistic domain. For the assessment of uncertainty effects in system optimization and identification, the probabilistic sensitivity analysis of IFM was further investigated in this study. A stochastic sensitivity analysis formulation of the Integrated Force Method was developed using the perturbation method. Numerical examples are presented to illustrate its application. Its efficiency and accuracy were also substantiated with direct Monte Carlo simulations and the reliability-based sensitivity method. The numerical algorithm was shown to be readily adaptable to the existing program, since the models of stochastic finite elements and stochastic design sensitivity are almost identical.

  12. Application of the taguchi method in change management

    Directory of Open Access Journals (Sweden)

    Kata Ivić

    2011-07-01

    Application of the Taguchi method results in efficient optimization of performance, quality and price; fast and accurate gathering of technical information; design and production of highly reliable products and processes at low cost; and development of flexible technologies for designing a whole group of high-quality associated products. All this significantly reduces the duration of research, development and delivery. The most frequent use of the Taguchi method is to improve existing products and production processes and to reduce the need for experiments. The Taguchi method is a system of quality engineering which puts more emphasis on reducing production costs and on the efficient use of engineering strategies than on the use of advanced statistical methods.

  13. Application of distinct element method of toppling failure of slope

    International Nuclear Information System (INIS)

    Ishida, Tsuyoshi; Hibino, Satoshi; Kitahara, Yoshihiro; Ito, Hiroshi

    1984-01-01

    The authors pointed out in a previous report that DEM (Distinct Element Method) seems to be a very helpful numerical method for examining the stability of fissured rock slopes, in which toppling failure could occur during earthquakes. In this report, the applicability of DEM to such rock slopes is examined through the following comparisons between theoretical results and DEM results, referring to Voegele's work (1982): (1) stability of one block on a slope; (2) failure of a rock block column composed of 10 rectangular blocks of the same size; (3) cable force required to make a slope stable. Through the above three comparisons, DEM appears to give reasonable results. Considering that these problems cannot readily be treated by other numerical methods such as FEM, DEM seems to be a very useful method for the analysis of fissured rock slopes. (author)

  14. Speaker Linking and Applications using Non-Parametric Hashing Methods

    Science.gov (United States)

    2016-09-08

    Douglas Sturim and William M. Campbell, MIT Lincoln Laboratory, Lexington, MA. ... with many approaches [1, 2]. For this paper, we focus on using i-vectors [2], but the methods apply to any embedding. For the task of speaker QBE and

  15. Statistical methods for longitudinal data with agricultural applications

    DEFF Research Database (Denmark)

    Anantharama Ankinakatte, Smitha

    The PhD study focuses on modeling two kinds of longitudinal data arising in agricultural applications: continuous time series data and discrete longitudinal data. Firstly, two statistical methods, neural networks and generalized additive models, are applied to predict mastitis using multivariate ... algorithm. This was found to compare favourably with the algorithm implemented in the well-known Beagle software. Finally, an R package to apply APFA models, developed as part of the PhD project, is described...

  16. Applications of Monte Carlo method in Medical Physics

    International Nuclear Information System (INIS)

    Diez Rios, A.; Labajos, M.

    1989-01-01

    The basic ideas of Monte Carlo techniques are presented. Random numbers and their generation by congruential methods, which underlie Monte Carlo calculations, are discussed. Monte Carlo techniques for solving integrals are then considered. The evaluation of a simple one-dimensional integral with a known answer by means of two different Monte Carlo approaches is discussed. The basic principles of simulating photon histories on a computer and of reducing variance, and the current applications in Medical Physics, are commented on. (Author)
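
    The kind of one-dimensional test integral mentioned above is easy to reproduce; the following sketch (an illustration, not the authors' code) estimates an integral with a known answer by plain Monte Carlo sampling.

        # Plain Monte Carlo estimate of a 1-D integral; integrand and sample size are
        # illustrative choices.
        import random

        def mc_integral(f, a, b, n=100_000):
            """Approximate the integral of f over [a, b] as (b - a) times the mean of f."""
            total = sum(f(random.uniform(a, b)) for _ in range(n))
            return (b - a) * total / n

        # The integral of x**2 over [0, 1] is exactly 1/3.
        print(mc_integral(lambda x: x * x, 0.0, 1.0))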

  17. Cross-relaxation imaging: methods, challenges and applications

    International Nuclear Information System (INIS)

    Stikov, Nikola

    2010-01-01

    An overview of quantitative magnetization transfer (qMT) is given, with a focus on cross-relaxation imaging (CRI) as a fast method for quantifying the proportion of protons bound to complex macromolecules in tissue. The procedure for generating CRI maps is outlined, showing examples in the human brain and knee, and discussing the caveats and challenges in generating precise and accurate CRI maps. Finally, several applications of CRI for imaging tissue microstructure are presented. (Author)

  18. Development of medical application methods using radiation. Radionuclide therapy

    International Nuclear Information System (INIS)

    Choi, Chang Woon; Lim, S. M.; Kim, E.H.; Woo, K. S.; Chung, W. S.; Lim, S. J.; Choi, T. H.; Hong, S. W.; Chung, H. Y.; No, W. C.; Oh, B. H.; Hong, H. J.

    1999-04-01

    In this project, we studied the following subjects: 1. development of monoclonal antibodies and radiopharmaceuticals, 2. clinical applications of radionuclide therapy, 3. radioimmunoguided surgery, 4. prevention of restenosis with intracoronary radiation. The results can be applied to the following objectives: 1) radionuclide therapy will be applied in clinical practice to treat cancer patients or other diseases in multi-center trials. 2) The newly developed monoclonal antibodies and biomolecules can be used in biology, chemistry or other basic life science research. 3) The new methods for the analysis of therapeutic effects, such as dosimetry, and the quantitative analysis methods for radioactivity, can be applied in basic research, such as radiation oncology and radiation biology

  19. Handbook of Partial Least Squares Concepts, Methods and Applications

    CERN Document Server

    Vinzi, Vincenzo Esposito; Henseler, Jörg

    2010-01-01

    This handbook provides a comprehensive overview of Partial Least Squares (PLS) methods with specific reference to their use in marketing and with a discussion of the directions of current research and perspectives. It covers the broad area of PLS methods, from regression to structural equation modeling applications, software and interpretation of results. The handbook serves both as an introduction for those without prior knowledge of PLS and as a comprehensive reference for researchers and practitioners interested in the most recent advances in PLS methodology.

  20. Advanced FDTD methods parallelization, acceleration, and engineering applications

    CERN Document Server

    Yu, Wenhua

    2011-01-01

    The finite-difference time-domain (FDTD) method has revolutionized antenna design and electromagnetics engineering. Here's a cutting-edge book that focuses on the performance optimization and engineering applications of FDTD simulation systems. Covering the latest developments in this area, this unique resource offers you expert advice on the FDTD method, hardware platforms, and network systems. Moreover, the book offers guidance in distinguishing between the many different electromagnetics software packages on the market today. You also find a complete chapter dedicated to large multi-scale pro

  1. Application of acoustic radiosity methods to noise propagation within buildings

    Science.gov (United States)

    Muehleisen, Ralph T.; Beamer, C. Walter

    2005-09-01

    The prediction of sound pressure levels in rooms from transmitted sound is a difficult problem. The sound energy in the source room incident on the common wall must be accurately predicted. In the receiving room, the propagation of sound from the planar wall source must also be accurately predicted. The radiosity method naturally computes the spatial distribution of sound energy incident on a wall and also naturally predicts the propagation of sound from a planar area source. In this paper, the application of the radiosity method to sound transmission problems is introduced and explained.

  2. Development of medical application methods using radiation. Radionuclide therapy

    Energy Technology Data Exchange (ETDEWEB)

    Choi, Chang Woon; Lim, S. M.; Kim, E.H.; Woo, K. S.; Chung, W. S.; Lim, S. J.; Choi, T. H.; Hong, S. W.; Chung, H. Y.; No, W. C. [Korea Atomic Energy Research Institute. Korea Cancer Center Hospital, Seoul, (Korea, Republic of); Oh, B. H. [Seoul National University. Hospital, Seoul (Korea, Republic of); Hong, H. J. [Antibody Engineering Research Unit, Taejon (Korea, Republic of)

    1999-04-01

    In this project, we studied the following subjects: 1. development of monoclonal antibodies and radiopharmaceuticals, 2. clinical applications of radionuclide therapy, 3. radioimmunoguided surgery, 4. prevention of restenosis with intracoronary radiation. The results can be applied to the following objectives: (1) radionuclide therapy will be applied in clinical practice to treat cancer patients or other diseases in multi-center trials. (2) The newly developed monoclonal antibodies and biomolecules can be used in biology, chemistry or other basic life science research. (3) The new methods for the analysis of therapeutic effects, such as dosimetry, and the quantitative analysis methods for radioactivity, can be applied in basic research, such as radiation oncology and radiation biology.

  3. Robustness of the Sequential Lineup Advantage

    Science.gov (United States)

    Gronlund, Scott D.; Carlson, Curt A.; Dailey, Sarah B.; Goodsell, Charles A.

    2009-01-01

    A growing movement in the United States and around the world involves promoting the advantages of conducting an eyewitness lineup in a sequential manner. We conducted a large study (N = 2,529) that included 24 comparisons of sequential versus simultaneous lineups. A liberal statistical criterion revealed only 2 significant sequential lineup…

  4. Applications of the discrete element method in mechanical engineering

    International Nuclear Information System (INIS)

    Fleissner, Florian; Gaugele, Timo; Eberhard, Peter

    2007-01-01

    Compared to other fields of engineering, the Discrete Element Method (DEM) is not yet a well-known method in mechanical engineering. Nevertheless, there is a variety of simulation problems where the method has obvious advantages due to its meshless nature. For problems where several free bodies can collide and break after having been largely deformed, the DEM is the method of choice. Neighborhood search and collision detection between bodies, as well as the separation of large solids into smaller particles, are naturally incorporated in the method. The main DEM algorithm consists of a relatively simple loop that basically contains the three substeps contact detection, force computation and integration. However, there exists a large variety of algorithms for these substeps from which the optimal method for a given problem can be composed. In this contribution, we describe the dynamics of particle systems together with appropriate numerical integration schemes and give an overview of different types of particle interactions that can be composed to adapt the method to a given simulation problem. Surface triangulations are used to model complicated, non-convex bodies in contact with particle systems. The capabilities of the method are finally demonstrated by means of application examples
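
    The three-substep loop named above can be written down compactly. The sketch below is a deliberately simplified illustration (one-dimensional spheres, a linear-spring contact law, explicit integration; all parameters are made up), not the production DEM described in the paper.

        # One DEM time step: contact detection, force computation, integration.
        import numpy as np

        def dem_step(x, v, r, m, k=1.0e4, dt=1.0e-4):
            f = np.zeros_like(x)
            n = len(x)
            # 1) contact detection: a pair is in contact if its gap is negative
            for i in range(n):
                for j in range(i + 1, n):
                    gap = abs(x[j] - x[i]) - (r[i] + r[j])
                    if gap < 0.0:
                        # 2) force computation: repulsive spring acting on the overlap
                        direction = np.sign(x[j] - x[i])
                        f[i] -= k * (-gap) * direction
                        f[j] += k * (-gap) * direction
            # 3) integration: semi-implicit (symplectic) Euler update
            v = v + dt * f / m
            x = x + dt * v
            return x, v

        # Two approaching particles that collide and bounce apart.
        x, v = np.array([0.0, 1.0]), np.array([1.0, -1.0])
        r, m = np.array([0.1, 0.1]), np.array([1.0, 1.0])
        for _ in range(10_000):
            x, v = dem_step(x, v, r, m)
        print(x, v)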

  5. Application of an efficient Bayesian discretization method to biomedical data

    Directory of Open Access Journals (Sweden)

    Gopalakrishnan Vanathi

    2011-07-01

    Background: Several data mining methods require data that are discrete, and other methods often perform better with discrete data. We introduce an efficient Bayesian discretization (EBD) method for optimal discretization of variables that runs efficiently on high-dimensional biomedical datasets. The EBD method consists of two components, namely a Bayesian score to evaluate discretizations and a dynamic programming search procedure to efficiently search the space of possible discretizations. We compared the performance of EBD to Fayyad and Irani's (FI) discretization method, which is commonly used for discretization. Results: On 24 biomedical datasets obtained from high-throughput transcriptomic and proteomic studies, the classification performances of the C4.5 classifier and the naïve Bayes classifier were statistically significantly better when the predictor variables were discretized using EBD rather than FI. EBD was statistically significantly more stable to the variability of the datasets than FI. However, EBD was less robust, though not statistically significantly so, than FI, and produced slightly more complex discretizations than FI. Conclusions: On a range of biomedical datasets, a Bayesian discretization method (EBD) yielded better classification performance and stability, but was less robust than the widely used FI discretization method. The EBD discretization method is easy to implement, permits the incorporation of prior knowledge and belief, and is sufficiently fast for application to high-dimensional data.

  6. Mechanomyographic Parameter Extraction Methods: An Appraisal for Clinical Applications

    Directory of Open Access Journals (Sweden)

    Morufu Olusola Ibitoye

    2014-12-01

    Full Text Available The research conducted in the last three decades has collectively demonstrated that skeletal muscle performance can be alternatively assessed by mechanomyographic (MMG) signal parameters. Indices of muscle performance, including but not limited to force, power, work and endurance, and the related physiological processes underlying muscle activity during contraction, have been evaluated in light of these signal features. Because MMG is a non-stationary signal that reflects several distinctive patterns of muscle action, the examples collected from the literature support its reliability for the analysis of muscles under voluntary and stimulus-evoked contractions. An appraisal of standard practice, including the measurement theories behind the methods used to extract parameters from the signal, is vital to applying the signal in experimental and clinical practice, especially in areas where electromyograms are contraindicated or of limited use. As we highlight the underpinning technical guidelines and the domains where each method is well suited, the limitations of the methods are also presented to position the state of the art in MMG parameter extraction, thus providing a theoretical framework for improving current practice and widening the opportunity for new insights and discoveries. Since this signal modality has not been widely deployed, due partly to the limited information extractable from the signals compared with other classical techniques for assessing muscle performance, this survey is particularly relevant to the projected future of MMG applications in musculoskeletal assessment and in the real-time detection of muscle activity.

  7. Methods for compressible multiphase flows and their applications

    Science.gov (United States)

    Kim, H.; Choe, Y.; Kim, H.; Min, D.; Kim, C.

    2018-06-01

    This paper presents an efficient and robust numerical framework to deal with multiphase real-fluid flows and their broad spectrum of engineering applications. A homogeneous mixture model incorporated with a real-fluid equation of state and a phase change model is considered to calculate complex multiphase problems. As robust and accurate numerical methods to handle multiphase shocks and phase interfaces over a wide range of flow speeds, the AUSMPW+_N and RoeM_N schemes with a system preconditioning method are presented. These methods are assessed by extensive validation problems with various types of equation of state and phase change models. Representative realistic multiphase phenomena, including the flow inside a thermal vapor compressor, pressurization in a cryogenic tank, and unsteady cavitating flow around a wedge, are then investigated as application problems. With appropriate physical modeling followed by robust and accurate numerical treatments, compressible multiphase flow physics such as phase changes, shock discontinuities, and their interactions are well captured, confirming the suitability of the proposed numerical framework to wide engineering applications.

  8. Random sequential adsorption of cubes

    Science.gov (United States)

    Cieśla, Michał; Kubala, Piotr

    2018-01-01

    Random packings built of cubes are studied numerically using a random sequential adsorption algorithm. To compare the obtained results with previous reports, three different models of cube orientation sampling were used. Also, three different cube-cube intersection algorithms were tested to find the most efficient one. The study focuses on the mean saturated packing fraction as well as the kinetics of packing growth. Microstructural properties of the packings were analyzed using the density autocorrelation function.
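
    A stripped-down random sequential adsorption sketch in two dimensions, with axis-aligned squares standing in for the oriented cubes of the study; the domain, square size and trial budget are illustrative choices, not the paper's setup.

    ```python
    # Random sequential adsorption (RSA) of axis-aligned squares on a unit square:
    # propose random positions and accept only those that do not overlap any
    # previously placed square; stop after a fixed budget of consecutive failures.
    import numpy as np

    rng = np.random.default_rng(0)
    side = 0.05                     # square edge length (illustrative)
    placed = []                     # centres of accepted squares
    max_failures = 20000
    failures = 0

    def overlaps(c, others, side):
        """Axis-aligned squares overlap iff both centre offsets are below the edge length."""
        for o in others:
            if abs(c[0] - o[0]) < side and abs(c[1] - o[1]) < side:
                return True
        return False

    while failures < max_failures:
        c = rng.uniform(side / 2, 1 - side / 2, size=2)   # keep squares inside the domain
        if overlaps(c, placed, side):
            failures += 1
        else:
            placed.append(c)
            failures = 0            # reset the counter after each successful insertion

    packing_fraction = len(placed) * side**2
    print(f"{len(placed)} squares placed, packing fraction ~ {packing_fraction:.3f}")
    ```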

  9. Simultaneous optimization of sequential IMRT plans

    International Nuclear Information System (INIS)

    Popple, Richard A.; Prellop, Perri B.; Spencer, Sharon A.; Santos, Jennifer F. de los; Duan, Jun; Fiveash, John B.; Brezovich, Ivan A.

    2005-01-01

    plans was equivalent to the independently optimized plans actually used for treatment. Tolerance doses of the critical structures were respected for the plan sum; however, the dose to critical structures for the individual initial and boost plans was different between the simultaneously optimized and the independently optimized plans. In conclusion, we have demonstrated a method for optimization of initial and boost plans that treat volume reductions using the same dose per fraction. The method is efficient, as it avoids the iterative approach necessitated by currently available TPSs, and is generalizable to more than two treatment phases. Comparison with clinical plans developed independently suggests that current manual techniques for planning sequential treatments may be suboptimal

  10. [Optimized application of nested PCR method for detection of malaria].

    Science.gov (United States)

    Yao-Guang, Z; Li, J; Zhen-Yu, W; Li, C

    2017-04-28

    Objective To optimize the application of the nested PCR method for the detection of malaria according to working practice, so as to improve the efficiency of malaria detection. Methods A PCR premix, internal primers for further amplification, and newly designed primers targeting the two Plasmodium ovale subspecies were employed to optimize the reaction system, the reaction conditions and the P. ovale-specific primers on the basis of routine nested PCR. The specificity and sensitivity of the optimized method were then analyzed. Positive blood samples and clinical malaria samples were tested by the routine nested PCR and the optimized method in parallel, and the results were compared. Results The optimized method showed good specificity, and its sensitivity reached the pg to fg level. When the two methods were applied to the same positive malarial blood samples, the PCR products showed no significant difference, but non-specific amplification was markedly reduced, the detection rate of the P. ovale subspecies improved, and the overall specificity increased with the optimized method. For 111 malarial blood samples, the sensitivity and specificity of the routine nested PCR were 94.57% and 86.96%, respectively, while those of the optimized method were both 93.48%; the difference in sensitivity was not statistically significant (P > 0.05), but the difference in specificity was statistically significant (P < 0.05). Conclusions The optimized nested PCR improves specificity without reducing sensitivity relative to the routine nested PCR, and it also saves cost and increases the efficiency of malaria detection by requiring fewer experimental steps.

  11. Application of the maximum entropy method to profile analysis

    International Nuclear Information System (INIS)

    Armstrong, N.; Kalceff, W.; Cline, J.P.

    1999-01-01

    Full text: A maximum entropy (MaxEnt) method for analysing crystallite size- and strain-induced x-ray profile broadening is presented. This method treats the problems of determining the specimen profile, crystallite size distribution, and strain distribution in a general way by considering them as inverse problems. A common difficulty faced by many experimenters is their inability to determine a well-conditioned solution of the integral equation, which preserves the positivity of the profile or distribution. We show that the MaxEnt method overcomes this problem, while also enabling a priori information, in the form of a model, to be introduced into it. Additionally, we demonstrate that the method is fully quantitative, in that uncertainties in the solution profile or solution distribution can be determined and used in subsequent calculations, including mean particle sizes and rms strain. An outline of the MaxEnt method is presented for the specific problems of determining the specimen profile and crystallite or strain distributions for the correspondingly broadened profiles. This approach offers an alternative to standard methods such as those of Williamson-Hall and Warren-Averbach. An application of the MaxEnt method is demonstrated in the analysis of alumina size-broadened diffraction data (from NIST, Gaithersburg). It is used to determine the specimen profile and column-length distribution of the scattering domains. Finally, these results are compared with the corresponding Williamson-Hall and Warren-Averbach analyses. Copyright (1999) Australian X-ray Analytical Association Inc
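
    A toy illustration of treating broadening as an inverse problem with an entropy-regularized, positivity-preserving fit; the smearing kernel, noise level, entropy form and regularization weight are illustrative assumptions and do not reproduce the MaxEnt formulation of the paper.

    ```python
    # Maximum-entropy style reconstruction of a non-negative distribution f from
    # smeared, noisy data d = A @ f + noise, by minimizing chi-squared minus a
    # scaled entropy term.  Kernel, data and weights are toy assumptions.
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(0)
    n = 40
    x = np.linspace(-1, 1, n)
    f_true = np.exp(-((x - 0.2) / 0.15) ** 2)              # "true" distribution (toy)
    A = np.exp(-((x[:, None] - x[None, :]) / 0.1) ** 2)    # smearing kernel (toy broadening)
    sigma = 0.01
    d = A @ f_true + sigma * rng.standard_normal(n)        # observed, broadened profile

    def objective(f, alpha=1.0):
        f = np.maximum(f, 1e-12)                           # keep the entropy defined
        chi2 = np.sum((A @ f - d) ** 2) / sigma**2
        entropy = -np.sum(f * np.log(f))                   # simple Shannon entropy
        return chi2 - alpha * entropy

    res = minimize(objective, x0=np.full(n, 0.5),
                   bounds=[(0, None)] * n, method="L-BFGS-B")
    f_maxent = res.x                                       # positive, regularized solution
    print(np.round(f_maxent[:5], 3))
    ```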

  12. The application of statistical methods to assess economic assets

    Directory of Open Access Journals (Sweden)

    D. V. Dianov

    2017-01-01

    Full Text Available The article is devoted to the valuation of machinery, equipment and special-purpose equipment, to methodological aspects of using standards for the assessment of buildings and structures in current prices, to the valuation of residential and specialized houses and office premises, to the assessment and reassessment of active and inactive military assets, and to the application of statistical methods for obtaining the relevant cost estimates. The objective of the article is to consider the possible application of statistical tools in the valuation of the assets that compose the core group of elements of national wealth, the fixed assets. Capital tangible assets constitute the material basis for the creation of new value, products and non-financial services. The gain accumulated in tangible assets of a capital nature is part of the gross domestic product, and its volume and share in GDP indicate the scope of reproductive processes in the country. Based on the methodological materials of the state statistics bodies of the Russian Federation and on the theory of statistics, which describes such methods of statistical analysis as indices, averages and regression, a methodical approach is structured for applying statistical tools to obtain value estimates of property, plant and equipment with significant accumulated depreciation. Until now, the use of statistical methodology in the practice of economic assessment of assets has been only fragmentary. This applies both to federal legislation (Federal Law No. 135 «On valuation activities in the Russian Federation» of 16.07.1998, as amended on 05.07.2016) and to the methodological documents and regulations governing valuation activities, in particular the valuation standards. A particular problem is the use of the digital database of Rosstat (Federal State Statistics Service), as for specific fixed assets the comparison should be carried

  13. Application of optical non-invasive methods in skin physiology

    Science.gov (United States)

    Lademann, J.; Patzelt, A.; Darvin, M.; Richter, H.; Antoniou, C.; Sterry, W.; Koch, S.

    2008-05-01

    In the present paper the application of optical non-invasive methods in dermatology and cosmetology is discussed. Laser scanning microscopy (LSM) and optical coherent tomography (OCT) are the most promising methods for this application. Using these methods, the analysis of different skin parameters like dryness and oiliness of the skin, the barrier function and the structure of furrows and wrinkles are discussed. Additionally the homogeneity of distribution of topically applied creams, as well as their penetration into the skin were investigated. It is shown that these methods are highly valuable in dermatology for diagnostic and therapy control and for basic research, for instance in the field of structure analysis of hair follicles and sweat glands. The vertical images of the tissue produced by OCT can be easily compared with histological sections. Unfortunately, the resolution of the OCT technique is not high enough to carry out measurements on a cellular level, as is possible by LSM. LSM has the advantage that it can be used for the investigation of penetration and storage processes of topically applied substances, if these substances have fluorescent properties or if they are fluorescent-labelled.

  14. Acoustic methods for cavitation mapping in biomedical applications

    Science.gov (United States)

    Wan, M.; Xu, S.; Ding, T.; Hu, H.; Liu, R.; Bai, C.; Lu, S.

    2015-12-01

    In recent years, cavitation is increasingly utilized in a wide range of applications in biomedical field. Monitoring the spatial-temporal evolution of cavitation bubbles is of great significance for efficiency and safety in biomedical applications. In this paper, several acoustic methods for cavitation mapping proposed or modified on the basis of existing work will be presented. The proposed novel ultrasound line-by-line/plane-by-plane method can depict cavitation bubbles distribution with high spatial and temporal resolution and may be developed as a potential standard 2D/3D cavitation field mapping method. The modified ultrafast active cavitation mapping based upon plane wave transmission and reception as well as bubble wavelet and pulse inversion technique can apparently enhance the cavitation to tissue ratio in tissue and further assist in monitoring the cavitation mediated therapy with good spatial and temporal resolution. The methods presented in this paper will be a foundation to promote the research and development of cavitation imaging in non-transparent medium.

  15. Application of optical non-invasive methods in skin physiology

    International Nuclear Information System (INIS)

    Lademann, J; Patzelt, A; Darvin, M; Richter, H; Sterry, W; Antoniou, C; Koch, S

    2008-01-01

    In the present paper the application of optical non-invasive methods in dermatology and cosmetology is discussed. Laser scanning microscopy (LSM) and optical coherent tomography (OCT) are the most promising methods for this application. Using these methods, the analysis of different skin parameters like dryness and oiliness of the skin, the barrier function and the structure of furrows and wrinkles are discussed. Additionally the homogeneity of distribution of topically applied creams, as well as their penetration into the skin were investigated. It is shown that these methods are highly valuable in dermatology for diagnostic and therapy control and for basic research, for instance in the field of structure analysis of hair follicles and sweat glands. The vertical images of the tissue produced by OCT can be easily compared with histological sections. Unfortunately, the resolution of the OCT technique is not high enough to carry out measurements on a cellular level, as is possible by LSM. LSM has the advantage that it can be used for the investigation of penetration and storage processes of topically applied substances, if these substances have fluorescent properties or if they are fluorescent-labelled

  16. Lexical decoder for continuous speech recognition: sequential neural network approach

    International Nuclear Information System (INIS)

    Iooss, Christine

    1991-01-01

    The work presented in this dissertation concerns the study of a connectionist architecture for treating sequential inputs. In this context, the model proposed by J.L. Elman, a recurrent multilayer network, is used. Its abilities and its limits are evaluated. Modifications are made in order to treat erroneous or noisy sequential inputs and to classify patterns. The application context of this study is the realisation of a lexical decoder for analytical multi-speaker continuous speech recognition. Lexical decoding is performed from lattices of phonemes obtained after an acoustic-phonetic decoding stage relying on a K-nearest-neighbours search technique. Tests are done on sentences formed from a lexicon of 20 words. The results obtained show the ability of the proposed connectionist model to take sequentiality into account at the input level, to memorize the context and to treat noisy or erroneous inputs. (author) [fr]
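
    A minimal sketch of an Elman-style recurrent network forward pass, in which a context (hidden-state) vector is fed back at each time step; the layer sizes, toy input sequence and output nonlinearity are illustrative assumptions, not details taken from the dissertation.

    ```python
    # Elman-style recurrent network: the hidden state acts as a context layer
    # that is fed back together with the next input symbol.
    import numpy as np

    rng = np.random.default_rng(1)
    n_in, n_hidden, n_out = 10, 16, 5      # e.g. phoneme codes in, word classes out (assumed sizes)
    W_xh = rng.normal(0, 0.1, (n_hidden, n_in))
    W_hh = rng.normal(0, 0.1, (n_hidden, n_hidden))
    W_hy = rng.normal(0, 0.1, (n_out, n_hidden))
    b_h = np.zeros(n_hidden)
    b_y = np.zeros(n_out)

    def softmax(z):
        e = np.exp(z - z.max())
        return e / e.sum()

    def forward(sequence):
        """Run a sequence of one-hot encoded symbols through the network."""
        h = np.zeros(n_hidden)                       # context starts empty
        outputs = []
        for x in sequence:
            h = np.tanh(W_xh @ x + W_hh @ h + b_h)   # new state depends on the previous one
            outputs.append(softmax(W_hy @ h + b_y))  # class scores at this time step
        return outputs

    # Toy input: a sequence of four one-hot symbols.
    seq = [np.eye(n_in)[i] for i in (2, 7, 7, 3)]
    probs = forward(seq)
    print(probs[-1])   # class distribution after seeing the whole sequence
    ```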

  17. Computing Sequential Equilibria for Two-Player Games

    DEFF Research Database (Denmark)

    Miltersen, Peter Bro; Sørensen, Troels Bjerre

    2006-01-01

    Koller, Megiddo and von Stengel showed how to efficiently compute minimax strategies for two-player extensive-form zero-sum games with imperfect information but perfect recall, using linear programming and avoiding conversion to normal form. Koller and Pfeffer pointed out that the strategies obtained by the algorithm are not necessarily sequentially rational and that this deficiency is often problematic for practical applications. We show how to remove this deficiency by modifying the linear programs constructed by Koller, Megiddo and von Stengel so that pairs of strategies forming a sequential equilibrium are computed. In particular, we show that a sequential equilibrium for a two-player zero-sum game with imperfect information but perfect recall can be found in polynomial time. In addition, the equilibrium we find is normal-form perfect. Our technique generalizes to general-sum games...

  18. Computing sequential equilibria for two-player games

    DEFF Research Database (Denmark)

    Miltersen, Peter Bro

    2006-01-01

    Koller, Megiddo and von Stengel showed how to efficiently compute minimax strategies for two-player extensive-form zero-sum games with imperfect information but perfect recall, using linear programming and avoiding conversion to normal form. Their algorithm has been used by AI researchers for constructing prescriptive strategies for concrete, often fairly large games. Koller and Pfeffer pointed out that the strategies obtained by the algorithm are not necessarily sequentially rational and that this deficiency is often problematic for practical applications. We show how to remove this deficiency by modifying the linear programs constructed by Koller, Megiddo and von Stengel so that pairs of strategies forming a sequential equilibrium are computed. In particular, we show that a sequential equilibrium for a two-player zero-sum game with imperfect information but perfect recall can be found in polynomial...

  19. Total System Performance Assessment-License Application Methods and Approach

    Energy Technology Data Exchange (ETDEWEB)

    J. McNeish

    2002-09-13

    ''Total System Performance Assessment-License Application (TSPA-LA) Methods and Approach'' provides the top-level method and approach for conducting the TSPA-LA model development and analyses. The method and approach is responsive to the criteria set forth in Total System Performance Assessment Integration (TSPAI) Key Technical Issue (KTI) agreements, the ''Yucca Mountain Review Plan'' (CNWRA 2002 [158449]), and 10 CFR Part 63. This introductory section provides an overview of the TSPA-LA, the projected TSPA-LA documentation structure, and the goals of the document. It also provides a brief discussion of the regulatory framework, the approach to risk management of the development and analysis of the model, and the overall organization of the document. The section closes with some important conventions that are utilized in this document.

  20. Delayless acceleration measurement method for motion control applications

    Energy Technology Data Exchange (ETDEWEB)

    Vaeliviita, S.; Ovaska, S.J. [Helsinki University of Technology, Otaniemi (Finland). Institute of Intelligent Power Electronics

    1997-12-31

    Delayless and accurate sensing of angular acceleration can improve the performance of motion control in motor drives. Acceleration control is, however, seldom implemented in practical drive systems due to prohibitively high costs or unsatisfactory results of most acceleration measurement methods. In this paper we propose an efficient and accurate acceleration measurement method based on direct differentiation of the corresponding velocity signal. Polynomial predictive filtering is used to smooth the resulting noisy signal without delay. This type of prediction is justified by noticing that a low-degree polynomial can usually be fitted into the primary acceleration curve. No additional hardware is required to implement the procedure if the velocity signal is already available. The performance of the acceleration measurement method is evaluated by applying it to a demanding motion control application. (orig.) 12 refs.
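
    A rough sketch in the spirit of the method above: fit a low-degree polynomial to the most recent velocity samples and differentiate it at the current instant, so the noise is smoothed without the group delay of a conventional low-pass filter. The paper differentiates first and then applies polynomial predictive filtering; here the two steps are folded into a single sliding-window fit, and the window length, polynomial degree and sampling period are illustrative assumptions.

    ```python
    # Delayless acceleration estimate: least-squares fit of a low-degree polynomial
    # to a sliding window of velocity samples, differentiated at the newest sample.
    import numpy as np

    def acceleration_estimate(velocity_window, dt, degree=2):
        """velocity_window: most recent samples, oldest first; dt: sampling period."""
        n = len(velocity_window)
        t = np.arange(n) * dt                      # time axis of the window
        coeffs = np.polyfit(t, velocity_window, degree)
        dcoeffs = np.polyder(coeffs)               # derivative polynomial
        return np.polyval(dcoeffs, t[-1])          # slope at the current instant

    # Example: noisy velocity ramp sampled at 1 kHz; true acceleration is 5 rad/s^2.
    dt = 1e-3
    t = np.arange(0, 0.02, dt)
    velocity = 5.0 * t + 0.001 * np.random.default_rng(0).standard_normal(t.size)
    print(acceleration_estimate(velocity, dt))     # close to 5.0
    ```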

  1. Total System Performance Assessment - License Application Methods and Approach

    International Nuclear Information System (INIS)

    McNeish, J.

    2003-01-01

    ''Total System Performance Assessment-License Application (TSPA-LA) Methods and Approach'' provides the top-level method and approach for conducting the TSPA-LA model development and analyses. The method and approach is responsive to the criteria set forth in Total System Performance Assessment Integration (TSPAI) Key Technical Issues (KTIs) identified in agreements with the U.S. Nuclear Regulatory Commission, the ''Yucca Mountain Review Plan'' (YMRP), ''Final Report'' (NRC 2003 [163274]), and the NRC final rule 10 CFR Part 63 (NRC 2002 [156605]). This introductory section provides an overview of the TSPA-LA, the projected TSPA-LA documentation structure, and the goals of the document. It also provides a brief discussion of the regulatory framework, the approach to risk management of the development and analysis of the model, and the overall organization of the document. The section closes with some important conventions that are used in this document

  2. Application of multi-block methods in cement production

    DEFF Research Database (Denmark)

    Svinning, K.; Høskuldsson, Agnar

    2008-01-01

    Compressive strength at 1 day of Portland cement as a function of the microstructure of cement was statistically modelled by application of a multi-block regression method. The observation X-matrix was partitioned into four blocks, the first block representing the mineralogy, the second the particle size distribution and the two last blocks the superficial microstructure analysed by differential thermogravimetric analysis. The multi-block method is used to identify the role of each part. The score vectors of each block can be analysed separately or together with score vectors of other blocks. Stepwise regression is used to find the minimum number of variables of each block. The multi-block method proved useful in determining the modelling strength of each data block and finding the minimum number of variables within each data block.
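
    A simplified stand-in for the block-wise modelling idea: extract score vectors per data block and regress the response on the concatenated scores. The block-wise PCA, the synthetic data and the plain least-squares step are assumptions for illustration, not the multi-block regression actually used in the paper.

    ```python
    # Generic multi-block sketch: per-block score vectors via PCA, then a
    # regression of the response on the concatenated block scores.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(0)
    n_samples = 40
    blocks = {                                     # synthetic placeholders for four blocks
        "mineralogy": rng.normal(size=(n_samples, 6)),
        "particle_size": rng.normal(size=(n_samples, 8)),
        "microstructure_1": rng.normal(size=(n_samples, 5)),
        "microstructure_2": rng.normal(size=(n_samples, 5)),
    }
    strength = rng.normal(size=n_samples)          # 1-day compressive strength (synthetic)

    scores = []
    for name, X in blocks.items():
        t = PCA(n_components=2).fit_transform(X)   # block score vectors
        scores.append(t)
    T = np.hstack(scores)                          # concatenated block scores

    model = LinearRegression().fit(T, strength)
    print("R^2 on the synthetic data:", model.score(T, strength))
    ```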

  3. Evolutionary Computation Methods and their applications in Statistics

    Directory of Open Access Journals (Sweden)

    Francesco Battaglia

    2013-05-01

    Full Text Available A brief discussion of the genesis of evolutionary computation methods, their relationship to artificial intelligence, and the contribution of genetics and Darwin’s theory of natural evolution is provided. Then, the main evolutionary computation methods are illustrated: evolution strategies, genetic algorithms, estimation of distribution algorithms, differential evolution, and a brief description of some evolutionary behavior methods such as ant colony and particle swarm optimization. We also discuss the role of the genetic algorithm for multivariate probability distribution random generation, rather than as a function optimizer. Finally, some relevant applications of genetic algorithm to statistical problems are reviewed: selection of variables in regression, time series model building, outlier identification, cluster analysis, design of experiments.
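
    For readers unfamiliar with the methods listed above, here is a bare-bones genetic algorithm used as a function optimizer; the fitness function, population size and mutation rate are arbitrary illustrative choices and do not correspond to any of the statistical applications reviewed in the article.

    ```python
    # Bare-bones genetic algorithm: tournament selection, one-point crossover and
    # Gaussian mutation on a real-valued chromosome, maximizing a toy fitness.
    import numpy as np

    rng = np.random.default_rng(0)

    def fitness(x):
        return -np.sum((x - 0.5) ** 2)      # toy objective: peak at x = 0.5

    pop_size, n_genes, n_gen, mut_rate = 30, 5, 100, 0.1
    pop = rng.uniform(0, 1, size=(pop_size, n_genes))

    for _ in range(n_gen):
        fit = np.array([fitness(ind) for ind in pop])

        def select():
            # Tournament selection: keep the better of two random individuals.
            i, j = rng.integers(pop_size, size=2)
            return pop[i] if fit[i] > fit[j] else pop[j]

        children = []
        for _ in range(pop_size):
            a, b = select(), select()
            cut = rng.integers(1, n_genes)            # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            mask = rng.random(n_genes) < mut_rate     # mutation
            child[mask] += rng.normal(0, 0.1, mask.sum())
            children.append(child)
        pop = np.array(children)

    best = pop[np.argmax([fitness(ind) for ind in pop])]
    print(best)      # should be close to [0.5, 0.5, 0.5, 0.5, 0.5]
    ```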

  4. Validation of the dual-table autoradiographic method to quantify two sequential rCBFs in a single SPET session with N-isopropyl-[123I]p-iodoamphetamine

    International Nuclear Information System (INIS)

    Nishizawa, Sadahiko; Iida, Hidehiro; Tsuchida, Tatsuro; Ito, Harumi; Konishi, Junji; Yonekura, Yoshiharu

    2003-01-01

    We evaluated an autoradiographic (ARG) method to calculate regional cerebral blood flow (rCBF) sequentially before and after an acetazolamide (ACZ) challenge in a single session of single-photon emission tomography (SPET) with two injections of N-isopropyl-[123I]p-iodoamphetamine (IMP). The method uses a table look-up method with a fixed distribution volume (Vd) and a standard input function of IMP. To calculate rCBF after an ACZ challenge, two look-up tables (a dual-table) are used to reflect the effect of radioactivity in the brain from the first dose of IMP. We performed simulation studies to evaluate errors attributable to (a) a change in rCBF induced by an ACZ challenge during the scan and (b) a fixed Vd value that might be different from an individual one, along with the effect of (c) scan length. Thirty-three patients were studied by dynamic SPET with two injections of IMP and frequent arterial blood sampling, and the data were analysed using the dual-table ARG method. Twenty-four of the 33 patients received an injection of ACZ 10 min before the second dose of IMP. We generated a standard input function by averaging individual input functions. The optimal method to calibrate a standard input function was determined so that the SD of differences between rCBF calculated by using a calibrated standard input function (F_SIF) and that calculated by using an individual input function (F_IIF) was minimised. Reliability of the method was evaluated by comparing F_SIF with gold standard rCBF (F_REF) obtained by two-compartment model analysis of dynamic SPET data and an individual input function with a non-linear least squares fitting method. Errors caused by (a) were less than 4% for a first rCBF ranging between 20 and 60 ml 100 g^-1 min^-1 and an rCBF change of between -25% and 50%. Errors caused by (b) were relatively large compared with those caused by (a), and were affected by (c) with an increasing error in a longer scan. In the patient study with a proposed

  5. Native Frames: Disentangling Sequential from Concerted Three-Body Fragmentation

    Science.gov (United States)

    Rajput, Jyoti; Severt, T.; Berry, Ben; Jochim, Bethany; Feizollah, Peyman; Kaderiya, Balram; Zohrabi, M.; Ablikim, U.; Ziaee, Farzaneh; Raju P., Kanaka; Rolles, D.; Rudenko, A.; Carnes, K. D.; Esry, B. D.; Ben-Itzhak, I.

    2018-03-01

    A key question concerning the three-body fragmentation of polyatomic molecules is the distinction of sequential and concerted mechanisms, i.e., the stepwise or simultaneous cleavage of bonds. Using laser-driven fragmentation of OCS into O+ + C+ + S+ and employing coincidence momentum imaging, we demonstrate a novel method that enables the clear separation of sequential and concerted breakup. The separation is accomplished by analyzing the three-body fragmentation in the native frame associated with each step and taking advantage of the rotation of the intermediate molecular fragment, CO^2+ or CS^2+, before its unimolecular dissociation. This native-frame method works for any projectile (electrons, ions, or photons), provides details on each step of the sequential breakup, and enables the retrieval of the relevant spectra for sequential and concerted breakup separately. Specifically, this allows the determination of the branching ratio of all these processes in OCS^3+ breakup. Moreover, we find that the first step of sequential breakup is tightly aligned along the laser polarization and identify the likely electronic states of the intermediate dication that undergo unimolecular dissociation in the second step. Finally, the separated concerted breakup spectra show clearly that the central carbon atom is preferentially ejected perpendicular to the laser field.

  6. Studies and applications of neutron radiography with film methods

    International Nuclear Information System (INIS)

    Ikeda, Yasushi

    1989-01-01

    Neutron radiography has been studied with film methods and applied to some industrial applications. The film methods include not only conventional silver-halide emulsion films, such as industrial, medical or soft X-ray films, but also track-etch films and films for indirect methods. The characteristics of the film methods are analyzed and investigated using various image converters, such as gadolinium metal foils and evaporated films, or scintillation converters such as NE426. The sensitivities and MTFs for various sets of films and converters have been obtained, which gives a chart of the correlation between the appropriate exposure and resolving power for them. From the chart, one can select proper sets for the purpose and given conditions of neutron radiography facilities. The film methods have been applied to inspect very fine cracks in thick steel blocks and plates. They have also been applied to observe nuclear fuel pellets and irradiated nuclear fuel pins. Furthermore, the film method has been used for neutron computed tomography. Very fine Eu particles in TiO pellets, whose diameters are nearly 300 microns, can be reconstructed by the neutron CT. The fine neutron CT will be useful for the inspection of Pu particles in mixed-oxide nuclear fuel pellets for future advanced nuclear reactors. (author)

  7. Sequential Logic Model Deciphers Dynamic Transcriptional Control of Gene Expressions

    Science.gov (United States)

    Yeo, Zhen Xuan; Wong, Sum Thai; Arjunan, Satya Nanda Vel; Piras, Vincent; Tomita, Masaru; Selvarajoo, Kumar; Giuliani, Alessandro; Tsuchiya, Masa

    2007-01-01

    insight. The demonstration of the efficacy of this approach in endo16 is a promising step for further application of the proposed method. PMID:17712424

  8. Sequential logic model deciphers dynamic transcriptional control of gene expressions.

    Directory of Open Access Journals (Sweden)

    Zhen Xuan Yeo

    providing rich biological insight. The demonstration of the efficacy of this approach in endo16 is a promising step for further application of the proposed method.

  9. Frontiers of biostatistical methods and applications in clinical oncology

    CERN Document Server

    Crowley, John

    2017-01-01

    This book presents the state of the art of biostatistical methods and their applications in clinical oncology. Many methodologies established today in biostatistics have been brought about through its applications to the design and analysis of oncology clinical studies. This field of oncology, now in the midst of evolution owing to rapid advances in biotechnologies and cancer genomics, is becoming one of the most promising disease fields in the shift toward personalized medicine. Modern developments of diagnosis and therapeutics of cancer have also been continuously fueled by recent progress in establishing the infrastructure for conducting more complex, large-scale clinical trials and observational studies. The field of cancer clinical studies therefore will continue to provide many new statistical challenges that warrant further progress in the methodology and practice of biostatistics. This book provides a systematic coverage of various stages of cancer clinical studies. Topics from modern cancer clinical ...

  10. Application of the kernel method to the inverse geosounding problem.

    Science.gov (United States)

    Hidalgo, Hugo; Sosa León, Sonia; Gómez-Treviño, Enrique

    2003-01-01

    Determining the layered structure of the earth demands the solution of a variety of inverse problems; in the case of electromagnetic soundings at low induction numbers, the problem is linear, for the measurements may be represented as a linear functional of the electrical conductivity distribution. In this paper, an application of the support vector (SV) regression technique to the inversion of electromagnetic data is presented. We take advantage of the regularizing properties of the SV learning algorithm and use it as a modeling technique with synthetic and field data. The SV method presents better recovery of synthetic models than Tikhonov's regularization. As the SV formulation is solved in the space of the data, which has a small dimension in this application, a smaller problem than that considered with Tikhonov's regularization is produced. For field data, the SV formulation develops models similar to those obtained via linear programming techniques, but with the added characteristic of robustness.
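
    As a small illustration of the support vector regression technique used in the paper, here is a generic scikit-learn SVR fit on synthetic one-dimensional data; the kernel, its parameters and the toy data are assumptions and do not reproduce the electromagnetic sounding formulation.

    ```python
    # Generic support-vector regression example with scikit-learn.
    import numpy as np
    from sklearn.svm import SVR

    rng = np.random.default_rng(0)
    x = np.sort(rng.uniform(0, 5, 80))[:, None]              # synthetic "sounding" positions
    y = np.sin(x).ravel() + 0.1 * rng.standard_normal(80)    # noisy synthetic response

    model = SVR(kernel="rbf", C=10.0, epsilon=0.05)          # parameter choices are illustrative
    model.fit(x, y)

    x_test = np.linspace(0, 5, 5)[:, None]
    print(model.predict(x_test))                             # smooth, regularized estimate
    ```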

  11. Learning in Non-Stationary Environments Methods and Applications

    CERN Document Server

    Lughofer, Edwin

    2012-01-01

    Recent decades have seen rapid advances in automatization processes, supported by modern machines and computers. The result is significant increases in system complexity and state changes, information sources, the need for faster data handling and the integration of environmental influences. Intelligent systems, equipped with a taxonomy of data-driven system identification and machine learning algorithms, can handle these problems partially. Conventional learning algorithms in a batch off-line setting fail whenever dynamic changes of the process appear due to non-stationary environments and external influences.   Learning in Non-Stationary Environments: Methods and Applications offers a wide-ranging, comprehensive review of recent developments and important methodologies in the field. The coverage focuses on dynamic learning in unsupervised problems, dynamic learning in supervised classification and dynamic learning in supervised regression problems. A later section is dedicated to applications in which dyna...

  12. Construction of crystal structure prototype database: methods and applications

    International Nuclear Information System (INIS)

    Su, Chuanxun; Lv, Jian; Wang, Hui; Wang, Yanchao; Ma, Yanming; Li, Quan; Zhang, Lijun

    2017-01-01

    Crystal structure prototype data have become a useful source of information for materials discovery in the fields of crystallography, chemistry, physics, and materials science. This work reports the development of a robust and efficient method for assessing the similarity of structures on the basis of their interatomic distances. Using this method, we proposed a simple and unambiguous definition of crystal structure prototype based on hierarchical clustering theory, and constructed the crystal structure prototype database (CSPD) by filtering the known crystallographic structures in a database. With a similar method, a structure prototype analysis package (SPAP) program was developed to remove similar structures in CALYPSO prediction results and extract predicted low-energy structures for a separate theoretical structure database. A series of statistics describing the distribution of crystal structure prototypes in the CSPD was compiled to provide important insight for structure prediction and high-throughput calculations. Illustrative examples of the application of the proposed database are given, including the generation of initial structures for structure prediction and determination of the prototype structure in databases. These examples demonstrate the CSPD to be a generally applicable and useful tool for materials discovery. (paper)
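
    A toy sketch of the general idea of grouping structures by an interatomic-distance fingerprint followed by hierarchical clustering; the fingerprint (sorted pairwise distances of a fixed-size cluster of atoms) and the distance threshold are simplifications assumed here, not the similarity measure actually defined in the paper.

    ```python
    # Toy structure-similarity sketch: fingerprint = sorted pairwise interatomic
    # distances; similar fingerprints are grouped with hierarchical clustering.
    import numpy as np
    from scipy.spatial.distance import pdist
    from scipy.cluster.hierarchy import linkage, fcluster

    def fingerprint(coords):
        """Sorted list of all pairwise distances in a small atomic cluster."""
        return np.sort(pdist(coords))

    rng = np.random.default_rng(0)
    base = rng.uniform(0, 3, size=(4, 3))                  # a 4-atom "structure"
    structures = [
        base,
        base + rng.normal(0, 0.01, base.shape),            # nearly identical copy
        rng.uniform(0, 3, size=(4, 3)),                    # unrelated structure
    ]

    fps = np.array([fingerprint(s) for s in structures])
    Z = linkage(fps, method="average")                     # hierarchical clustering
    labels = fcluster(Z, t=0.1, criterion="distance")      # threshold is illustrative
    print(labels)   # the first two structures should fall in the same cluster
    ```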

  13. Multi-level decision making models, methods and applications

    CERN Document Server

    Zhang, Guangquan; Gao, Ya

    2015-01-01

    This monograph presents new developments in multi-level decision-making theory, technique and method in both modeling and solution issues. It especially presents how a decision support system can support managers in reaching a solution to a multi-level decision problem in practice. This monograph combines decision theories, methods, algorithms and applications effectively. It discusses in detail the models and solution algorithms of each issue of bi-level and tri-level decision-making, such as multi-leaders, multi-followers, multi-objectives, rule-set-based, and fuzzy parameters. Potential readers include organizational managers and practicing professionals, who can use the methods and software provided to solve their real decision problems; PhD students and researchers in the areas of bi-level and multi-level decision-making and decision support systems; students at an advanced undergraduate, master’s level in information systems, business administration, or the application of computer science.  

  14. Construction of crystal structure prototype database: methods and applications.

    Science.gov (United States)

    Su, Chuanxun; Lv, Jian; Li, Quan; Wang, Hui; Zhang, Lijun; Wang, Yanchao; Ma, Yanming

    2017-04-26

    Crystal structure prototype data have become a useful source of information for materials discovery in the fields of crystallography, chemistry, physics, and materials science. This work reports the development of a robust and efficient method for assessing the similarity of structures on the basis of their interatomic distances. Using this method, we proposed a simple and unambiguous definition of crystal structure prototype based on hierarchical clustering theory, and constructed the crystal structure prototype database (CSPD) by filtering the known crystallographic structures in a database. With a similar method, a structure prototype analysis package (SPAP) program was developed to remove similar structures in CALYPSO prediction results and extract predicted low-energy structures for a separate theoretical structure database. A series of statistics describing the distribution of crystal structure prototypes in the CSPD was compiled to provide important insight for structure prediction and high-throughput calculations. Illustrative examples of the application of the proposed database are given, including the generation of initial structures for structure prediction and determination of the prototype structure in databases. These examples demonstrate the CSPD to be a generally applicable and useful tool for materials discovery.

  15. Rationalization of thermal injury quantification methods: application to skin burns.

    Science.gov (United States)

    Viglianti, Benjamin L; Dewhirst, Mark W; Abraham, John P; Gorman, John M; Sparrow, Eph M

    2014-08-01

    Classification of thermal injury is typically accomplished either through the use of an equivalent dosimetry method (equivalent minutes at 43 °C, CEM43 °C) or through a thermal-injury-damage metric (the Arrhenius method). For lower-temperature levels, the equivalent dosimetry approach is typically employed while higher-temperature applications are most often categorized by injury-damage calculations. The two methods derive from common thermodynamic/physical chemistry origins. To facilitate the development of the interrelationships between the two metrics, application is made to the case of skin burns. This thermal insult has been quantified by numerical simulation, and the extracted time-temperature results served for the evaluation of the respective characterizations. The simulations were performed for skin-surface exposure temperatures ranging from 60 to 90 °C, where each surface temperature was held constant for durations extending from 10 to 110 s. It was demonstrated that values of CEM43 at the basal layer of the skin were highly correlated with the depth of injury calculated from a thermal injury integral. Local values of CEM43 were connected to the local cell survival rate, and a correlating equation was developed relating CEM43 with the decrease in cell survival from 90% to 10%. Finally, it was shown that the cell survival/CEM43 relationship for the cases investigated here most closely aligns with isothermal exposure of tissue to temperatures of ~50 °C. Copyright © 2013 Elsevier Ltd and ISBI. All rights reserved.
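
    The equivalent-dose metric mentioned above is conventionally computed as CEM43 = sum over time of dt * R^(43 - T), with R commonly taken as 0.5 at or above 43 °C and 0.25 below it; the short sketch below applies that formula to a sampled temperature history, with the sample values chosen only for illustration.

    ```python
    # Cumulative equivalent minutes at 43 °C (CEM43) for a sampled temperature
    # history, using the commonly quoted R values (0.5 at or above 43 °C,
    # 0.25 below it).
    def cem43(temperatures_c, dt_min):
        """temperatures_c: temperature samples in °C; dt_min: sample spacing in minutes."""
        total = 0.0
        for t in temperatures_c:
            r = 0.5 if t >= 43.0 else 0.25
            total += dt_min * r ** (43.0 - t)
        return total

    # Example: 2 minutes at 45 °C followed by 2 minutes at 41 °C.
    history = [45.0] * 4 + [41.0] * 4          # sampled every 0.5 min
    print(cem43(history, 0.5))                  # the 2 min at 45 °C alone contribute 8 CEM43
    ```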

  16. Advances in product family and product platform design methods & applications

    CERN Document Server

    Jiao, Jianxin; Siddique, Zahed; Hölttä-Otto, Katja

    2014-01-01

    Advances in Product Family and Product Platform Design: Methods & Applications highlights recent advances that have been made to support product family and product platform design and successful applications in industry. This book provides not only motivation for product family and product platform design—the “why” and “when” of platforming—but also methods and tools to support the design and development of families of products based on shared platforms—the “what”, “how”, and “where” of platforming. It begins with an overview of recent product family design research to introduce readers to the breadth of the topic and progresses to more detailed topics and design theory to help designers, engineers, and project managers plan, architect, and implement platform-based product development strategies in their companies. This book also: Presents state-of-the-art methods and tools for product family and product platform design Adopts an integrated, systems view on product family and pro...

  17. CACTI: free, open-source software for the sequential coding of behavioral interactions.

    Science.gov (United States)

    Glynn, Lisa H; Hallgren, Kevin A; Houck, Jon M; Moyers, Theresa B

    2012-01-01

    The sequential analysis of client and clinician speech in psychotherapy sessions can help to identify and characterize potential mechanisms of treatment and behavior change. Previous studies required coding systems that were time-consuming, expensive, and error-prone. Existing software can be expensive and inflexible, and furthermore, no single package allows for pre-parsing, sequential coding, and assignment of global ratings. We developed a free, open-source, and adaptable program to meet these needs: The CASAA Application for Coding Treatment Interactions (CACTI). Without transcripts, CACTI facilitates the real-time sequential coding of behavioral interactions using WAV-format audio files. Most elements of the interface are user-modifiable through a simple XML file, and can be further adapted using Java through the terms of the GNU Public License. Coding with this software yields interrater reliabilities comparable to previous methods, but at greatly reduced time and expense. CACTI is a flexible research tool that can simplify psychotherapy process research, and has the potential to contribute to the improvement of treatment content and delivery.

  18. DYNAMIC ANALYSIS OF THE BULK TRITIUM SHIPPING PACKAGE SUBJECTED TO CLOSURE TORQUES AND SEQUENTIAL IMPACTS

    International Nuclear Information System (INIS)

    Wu, T; Paul Blanton, P; Kurt Eberl, K

    2007-01-01

    This paper presents a finite-element technique to simulate the structural responses and to evaluate the cumulative damage of a radioactive material packaging requiring bolt closure-tightening torque and subjected to the scenarios of the Hypothetical Accident Conditions (HAC) defined in the Code of Federal Regulations Title 10 part 71 (10CFR71). Existing finite-element methods for modeling closure stresses from bolt pre-load are not readily adaptable to dynamic analyses. The HAC events are required to occur sequentially per 10CFR71 and thus the evaluation of the cumulative damage is desirable. Generally, each HAC event is analyzed separately and the cumulative damage is partially addressed by superposition. This results in relying on additional physical testing to comply with 10CFR71 requirements for assessment of cumulative damage. The proposed technique utilizes the combination of kinematic constraints, rigid-body motions and structural deformations to overcome some of the difficulties encountered in modeling the effect of cumulative damage. This methodology provides improved numerical solutions in compliance with the 10CFR71 requirements for sequential HAC tests. Analyses were performed for the Bulk Tritium Shipping Package (BTSP) designed by Savannah River National Laboratory to demonstrate the applications of the technique. The methodology proposed simulates the closure bolt torque preload followed by the sequential HAC events, the 30-foot drop and the 30-foot dynamic crush. The analytical results will be compared to the package test data

  19. A path-level exact parallelization strategy for sequential simulation

    Science.gov (United States)

    Peredo, Oscar F.; Baeza, Daniel; Ortiz, Julián M.; Herrero, José R.

    2018-01-01

    Sequential Simulation is a well-known method in geostatistical modelling. Following the Bayesian approach for the simulation of conditionally dependent random events, the Sequential Indicator Simulation (SIS) method draws simulated values for K categories (categorical case) or classes defined by K different thresholds (continuous case). Similarly, the Sequential Gaussian Simulation (SGS) method draws simulated values from a multivariate Gaussian field. In this work, a path-level approach to parallelize the SIS and SGS methods is presented. A first stage of re-arrangement of the simulation path is performed, followed by a second stage of parallel simulation of non-conflicting nodes. A key advantage of the proposed parallelization method is that it generates realizations identical to those of the original non-parallelized methods. Case studies are presented using two sequential simulation codes from GSLIB: SISIM and SGSIM. Execution time and speedup results are shown for large-scale domains, with many categories and maximum kriging neighbours in each case, achieving high speedup in the best scenarios using 16 threads of execution on a single machine.
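
    A simplified sketch of the path-level idea: re-arrange a random simulation path into batches of mutually distant (non-conflicting) nodes, which could then be simulated concurrently. The conflict criterion used here, a minimum separation distance standing in for the kriging neighbourhood, and the grid itself are illustrative assumptions, not the exact strategy of the paper.

    ```python
    # Path-level batching sketch: greedily split a random simulation path into
    # batches whose nodes are pairwise farther apart than a separation radius,
    # so nodes within one batch do not share a search neighbourhood.
    import numpy as np

    rng = np.random.default_rng(0)
    nodes = np.array([(i, j) for i in range(20) for j in range(20)], dtype=float)
    path = rng.permutation(len(nodes))          # random simulation path
    radius = 4.0                                # stand-in for the kriging neighbourhood

    batches = []
    for idx in path:
        placed = False
        for batch in batches:
            # Conflict if the node is within `radius` of any node already in the batch.
            if np.all(np.linalg.norm(nodes[batch] - nodes[idx], axis=1) > radius):
                batch.append(idx)
                placed = True
                break
        if not placed:
            batches.append([idx])

    print(len(batches), "batches; first batch holds", len(batches[0]), "non-conflicting nodes")
    ```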

  20. Selected asymptotic methods with applications to electromagnetics and antennas

    CERN Document Server

    Fikioris, George; Bakas, Odysseas N

    2013-01-01

    This book describes and illustrates the application of several asymptotic methods that have proved useful in the authors' research in electromagnetics and antennas. We first define asymptotic approximations and expansions and explain these concepts in detail. We then develop certain prerequisites from complex analysis such as power series, multivalued functions (including the concepts of branch points and branch cuts), and the all-important gamma function. Of particular importance is the idea of analytic continuation (of functions of a single complex variable); our discussions here include som