WorldWideScience

Sample records for variate analysis procedures

  1. Spinal Surgeon Variation in Single-Level Cervical Fusion Procedures: A Cost and Hospital Resource Utilization Analysis.

    Hijji, Fady Y; Massel, Dustin H; Mayo, Benjamin C; Narain, Ankur S; Long, William W; Modi, Krishna D; Burke, Rory M; Canar, Jeff; Singh, Kern

    2017-07-01

    Retrospective analysis. To compare perioperative costs and outcomes of patients undergoing single-level anterior cervical discectomy and fusion (ACDF) at both a service (orthopedic vs. neurosurgical) and individual surgeon level. Hospital systems are experiencing significant pressure to increase the value of care by reducing costs while maintaining or improving patient-centered outcomes. Few studies have examined the cost-effectiveness of cervical arthrodesis at a service level. A retrospective review of patients who underwent a primary 1-level ACDF by eight surgeons (four orthopedic and four neurosurgical) at a single academic institution between 2013 and 2015 was performed. Patients were identified by Diagnosis-Related Group and procedural codes. Patients with International Classification of Diseases, Ninth Revision codes for degenerative cervical pathology were included. Patients were excluded if they exhibited preoperative diagnoses or postoperative social work issues affecting their length of stay. Comparisons of preoperative demographics were performed using Student t tests and chi-squared analysis. Perioperative outcomes and costs for hospital services were compared using multivariate regression adjusted for preoperative characteristics. A total of 137 patients diagnosed with cervical degeneration underwent single-level ACDF; 44 and 93 were performed by orthopedic surgeons and neurosurgeons, respectively. There was no difference in patient demographics. ACDF procedures performed by orthopedic surgeons demonstrated shorter operative times (89.1 ± 25.5 vs. 96.0 ± 25.5 min; P = 0.002) and higher laboratory costs (Δ+$6.53 ± $5.52 USD; P = 0.041). There were significant differences in operative time (P = 0.014) and labor costs (P = 0.034) between individual surgeons. There was no difference in total costs between specialties or individual surgeons. Surgical subspecialty training does not significantly affect the total costs of single-level cervical fusion procedures.
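
    The service-level cost comparison described above (a regression on total cost adjusted for preoperative characteristics) can be outlined as follows. This is an illustrative sketch only, not the authors' analysis; the file and column names (acdf_cases.csv, total_cost, service, age, sex, bmi, asa_score) are hypothetical.

    ```python
    # Minimal sketch: compare per-case cost between surgical services while adjusting
    # for preoperative covariates, in the spirit of the multivariate regression above.
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("acdf_cases.csv")  # hypothetical per-patient dataset

    # OLS with a service indicator plus preoperative covariates; the coefficient on
    # service captures the adjusted cost difference between specialties.
    model = smf.ols("total_cost ~ C(service) + age + C(sex) + bmi + C(asa_score)",
                    data=df).fit()
    print(model.summary())
    ```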

  2. Procedural facade variations from a single layout

    Bao, Fan

    2013-02-19

    We introduce a framework to generate many variations of a facade design that look similar to a given facade layout. Starting from an input image, the facade is hierarchically segmented and labeled with a collection of manual and automatic tools. The user can then model constraints that should be maintained in any variation of the input facade design. Subsequently, facade variations are generated for different facade sizes, where multiple variations can be produced for a certain size. Computing such new facade variations has many unique challenges, and we propose a new algorithm based on interleaving heuristic search and quadratic programming. In contrast to most previous work, we focus on the generation of new design variations and not on the automatic analysis of the input's structure. Adding a modeling step with the user in the loop ensures that our results routinely are of high quality. © 2013 ACM.

  3. Procedural facade variations from a single layout

    Bao, Fan; Schwarz, Michael; Wonka, Peter

    2013-01-01

    We introduce a framework to generate many variations of a facade design that look similar to a given facade layout. Starting from an input image, the facade is hierarchically segmented and labeled with a collection of manual and automatic tools. The user can then model constraints that should be maintained in any variation of the input facade design. Subsequently, facade variations are generated for different facade sizes, where multiple variations can be produced for a certain size. Computing such new facade variations has many unique challenges, and we propose a new algorithm based on interleaving heuristic search and quadratic programming. In contrast to most previous work, we focus on the generation of new design variations and not on the automatic analysis of the input's structure. Adding a modeling step with the user in the loop ensures that our results routinely are of high quality. © 2013 ACM.

  4. A comparison of vowel normalization procedures for language variation research

    Adank, Patti; Smits, Roel; van Hout, Roeland

    2004-11-01

    An evaluation of vowel normalization procedures for the purpose of studying language variation is presented. The procedures were compared on how effectively they (a) preserve phonemic information, (b) preserve information about the talker's regional background (or sociolinguistic information), and (c) minimize anatomical/physiological variation in acoustic representations of vowels. Recordings were made for 80 female talkers and 80 male talkers of Dutch. These talkers were stratified according to their gender and regional background. The normalization procedures were applied to measurements of the fundamental frequency and the first three formant frequencies for a large set of vowel tokens. The normalization procedures were evaluated through statistical pattern analysis. The results show that normalization procedures that use information across multiple vowels ("vowel-extrinsic" information) to normalize a single vowel token performed better than those that include only information contained in the vowel token itself ("vowel-intrinsic" information). Furthermore, the results show that normalization procedures that operate on individual formants performed better than those that use information across multiple formants (e.g., "formant-extrinsic" F2-F1).
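
    A standard member of the best-performing family identified above (vowel-extrinsic, formant-intrinsic normalization) is Lobanov's z-score transform, which rescales each formant by the talker's own mean and standard deviation. The sketch below is illustrative, not the paper's implementation; the file and column names are assumed.

    ```python
    # Lobanov z-score normalization: for each talker, each formant is centred and scaled
    # by that talker's own mean and standard deviation across all vowel tokens
    # (vowel-extrinsic information, applied to one formant at a time).
    import pandas as pd

    df = pd.read_csv("vowel_tokens.csv")   # hypothetical columns: talker, vowel, F1, F2, F3

    for formant in ["F1", "F2", "F3"]:
        grouped = df.groupby("talker")[formant]
        df[formant + "_norm"] = (df[formant] - grouped.transform("mean")) / grouped.transform("std")

    print(df.head())
    ```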

  5. Variational method for inverting the Kohn-Sham procedure

    Kadantsev, Eugene S.; Stott, M.J.

    2004-01-01

    A procedure based on a variational principle is developed for determining the local Kohn-Sham (KS) potential corresponding to a given ground-state electron density. This procedure is applied to calculate the exchange-correlation part of the effective KS potential for the neon atom and the methane molecule.

  6. Variation in radiation doses in paediatric cardiac catheterisation procedures

    Al-Haj, A. N.; Lobriguito, A. M.; Rafeh, W.

    2008-01-01

    Paediatric cardiac catheterisation involves diagnostic and therapeutic procedures that range from simple to complex and can subject paediatric patients to varying radiation doses. The study aims to determine the variation in entrance doses in patients in terms of dose-area product (DAP) values and to investigate methods for optimising radiation protection. A total of 190 paediatric patients belonging to age groups 0, 1, 5 and 10 y who underwent diagnostic and six selected therapeutic procedures at King Faisal Specialist Hospital and Research Centre, Riyadh (Saudi Arabia) were included in the study. Therapeutic procedures include coarctation (COA), patent ductus arteriosus (PDA), radiofrequency ablation, pulmonary, embolisation and septostomy. Fluoroscopy and cine radiography were used in all procedures. Patient demography (weight, age, gender and height), radiographic technique factors, fluoroscopy and cine time, frame rate, and DAP values were taken from patient records. Effective doses for each procedure were estimated from the DAP values. The mean DAP per procedure was analysed for correlation with patient equivalent cylindrical diameter, weight, fluoroscopy time and number of frames. Factors influencing the variation in doses were investigated. Initial results show that PDA occlusion has the highest mean DAP value of 23.21 Gy·cm², while the diagnostic and septostomy procedures have the lowest values of 7.77 and 6.95 Gy·cm², respectively. (authors)
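
    Estimating effective dose from a DAP reading, as done in the study above, amounts to multiplying the DAP by an age- and procedure-dependent conversion coefficient. The sketch below only illustrates that arithmetic; the coefficient values are placeholders, not the ones used by the authors.

    ```python
    # Illustrative only: converting a dose-area product (DAP) reading to an effective-dose
    # estimate with an age-dependent conversion coefficient. The coefficients below are
    # hypothetical placeholders, NOT values from the paper.
    DAP_GY_CM2 = 23.21                     # mean DAP for a PDA occlusion, from the abstract
    CONVERSION_MSV_PER_GY_CM2 = {          # hypothetical age-group coefficients
        "0 y": 3.7, "1 y": 2.6, "5 y": 1.5, "10 y": 0.9,
    }

    for age, k in CONVERSION_MSV_PER_GY_CM2.items():
        print(f"age {age}: effective dose ~ {DAP_GY_CM2 * k:.1f} mSv")
    ```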

  7. Biomass Compositional Analysis Laboratory Procedures | Bioenergy | NREL

    NREL develops laboratory analytical procedures (LAPs) for standard biomass analysis. These procedures help scientists and analysts understand more about the chemical composition of raw biomass.

  8. Safety analysis procedures for PHWR

    Min, Byung Joo; Kim, Hyoung Tae; Yoo, Kun Joong

    2004-03-01

    The methodology of safety analyses for CANDU reactors in Canada, a vendor country, uses a combination of best-estimate physical models and conservative input parameters so as to minimize the uncertainty of the plant behavior predictions. By using conservative input parameters, the results of the safety analyses are assured to meet the regulatory requirements such as the public dose, the integrity of fuel and fuel channel, the integrity of containment and reactor structures, etc. However, there are no comprehensive and systematic procedures for safety analyses of CANDU reactors in Korea. In this regard, the development of safety analysis procedures for CANDU reactors is being conducted not only to establish the safety analysis system, but also to enhance the quality assurance of the safety assessment. In the first phase of this study, the general procedures of the deterministic safety analyses are developed. The general safety procedures cover the specification of the initiating event, selection of the methodology and accident sequences, computer codes, safety analysis procedures, verification of errors and uncertainties, etc. Finally, these general safety analysis procedures are applied to the Large Break Loss Of Coolant Accident (LBLOCA) in the Final Safety Analysis Report (FSAR) for Wolsong units 2, 3, 4.

  9. Standardization and optimization of core sampling procedure for carbon isotope analysis in eucalyptus and variation in carbon isotope ratios across species and growth conditions

    Raju, M

    2011-11-01

    Full Text Available. [Only figure/table residue remains of this record: variation in Δ13C across five Eucalyptus species (E. camal..., E. urophylla, E. grandis, E. pellita, E. globulus), with a significant species effect (P < 0...).]

  10. Procedures monitoring and MAAP analysis

    May, R.S.

    1991-01-01

    Numerous studies of severe accidents in light water reactors have shown that operator response can play a crucial role in the predicted outcomes of dominant accident scenarios. MAAP provides the capability to specify certain operator actions as input data. However, making reasonable assumptions about the nature and timing of operator response requires substantial knowledge about plant practices and procedures and what they imply for the event being analyzed. The appearance of knowledge-based software technology in the mid-1980s provided a natural format for representing and maintaining procedures as IF-THEN rules. The boiling water reactor (BWR) Emergency Operating Procedures Tracking System (EOPTS) was composed of a rule base of procedures and a dedicated inference engine (problem-solver). Based on the general approach and experience of EOPTS, the authors have developed a prototype procedures monitoring system that reads MAAP transient output files and evaluates the EOP messages and instructions that would be implied during each transient time interval. The prototype system was built using the NEXPERT OBJECT expert system development system, running on a 386-class personal computer with 4 MB of memory. The limited-scope prototype includes a reduced set of BWR6 EOPs evaluated on a coarse time interval, a simple text-based user interface, and a summary-report generator. The prototype, which is limited to batch-mode analysis of MAAP output, is intended to demonstrate the concept and aid in the design of a production system, which will involve a direct link to MAAP and interactive capabilities.

  11. Probabilistic safety analysis procedures guide

    Papazoglou, I.A.; Bari, R.A.; Buslik, A.J.

    1984-01-01

    A procedures guide for the performance of probabilistic safety assessment has been prepared for interim use in the Nuclear Regulatory Commission programs. The probabilistic safety assessment studies performed are intended to produce probabilistic predictive models that can be used and extended by the utilities and by NRC to sharpen the focus of inquiries into a range of issues affecting reactor safety. This guide addresses the determination of the probability (per year) of core damage resulting from accident initiators internal to the plant and from loss of offsite electric power. The scope includes analyses of problem-solving (cognitive) human errors, a determination of importance of the various core damage accident sequences, and an explicit treatment and display of uncertainties for the key accident sequences. Ultimately, the guide will be augmented to include the plant-specific analysis of in-plant processes (i.e., containment performance) and the risk associated with external accident initiators, as consensus is developed regarding suitable methodologies in these areas. This guide provides the structure of a probabilistic safety study to be performed, and indicates what products of the study are essential for regulatory decision making. Methodology is treated in the guide only to the extent necessary to indicate the range of methods which is acceptable; ample reference is given to alternative methodologies which may be utilized in the performance of the study.
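
    The guide's central quantity, a per-year core damage probability built up from internal initiators, is in essence a sum over accident sequences of the initiating-event frequency times the conditional branch failure probabilities. A minimal point-estimate sketch with invented numbers (not the guide's methodology in full):

    ```python
    # Point-estimate accident-sequence quantification: each core-damage sequence frequency is
    # the initiating-event frequency (per year) multiplied by the conditional failure
    # probabilities along its event-tree branch; the core damage frequency is their sum.
    sequences = [
        # (initiating-event frequency /yr, [branch failure probabilities])
        (1.0e-2, [1.0e-3, 5.0e-2]),   # e.g. loss of offsite power -> EDG fails -> recovery fails
        (3.0e-4, [2.0e-2]),           # e.g. small LOCA -> injection fails
    ]

    cdf = 0.0
    for freq, branch_probs in sequences:
        seq_freq = freq
        for p in branch_probs:
            seq_freq *= p
        cdf += seq_freq

    print(f"point-estimate core damage frequency ~ {cdf:.2e} per year")
    ```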

  12. Construction of A Trial Function In The Variational Procedure of ...

    A form of variational method for calculating the ground state energy of a quantum mechanical system is considered. The method is based on a systematic construction of a trial variational function at each step of the calculation of the ground state energy. The construction involves introducing more variational parameters to ...

  13. "They Have to Adapt to Learn": Surgeons' Perspectives on the Role of Procedural Variation in Surgical Education.

    Apramian, Tavis; Cristancho, Sayra; Watling, Chris; Ott, Michael; Lingard, Lorelei

    2016-01-01

    Clinical research increasingly acknowledges the existence of significant procedural variation in surgical practice. This study explored surgeons' perspectives regarding the influence of intersurgeon procedural variation on the teaching and learning of surgical residents. This qualitative study used a grounded theory-based analysis of observational and interview data. Observational data were collected in 3 tertiary care teaching hospitals in Ontario, Canada. Semistructured interviews explored potential procedural variations arising during the observations and prompts from an iteratively refined guide. Ongoing data analysis refined the theoretical framework and informed data collection strategies, as prescribed by the iterative nature of grounded theory research. Our sample included 99 hours of observation across 45 cases with 14 surgeons. Semistructured, audio-recorded interviews (n = 14) occurred immediately following observational periods. Surgeons endorsed the use of intersurgeon procedural variations to teach residents about adapting to the complexity of surgical practice and the norms of surgical culture. Surgeons suggested that residents' efforts to identify thresholds of principle and preference are crucial to professional development. Principles that emerged from the study included the following: (1) knowing what comes next, (2) choosing the right plane, (3) handling tissue appropriately, (4) recognizing the abnormal, and (5) making safe progress. Surgeons suggested that learning to follow these principles while maintaining key aspects of surgical culture, like autonomy and individuality, is an important social process in surgical education. Acknowledging intersurgeon variation has important implications for curriculum development and workplace-based assessment in surgical education. Adapting to intersurgeon procedural variations may foster versatility in surgical residents. However, the existence of procedural variations and their active use in surgeons

  14. Human Reliability Analysis For Computerized Procedures

    Boring, Ronald L.; Gertman, David I.; Le Blanc, Katya

    2011-01-01

    This paper provides a characterization of human reliability analysis (HRA) issues for computerized procedures in nuclear power plant control rooms. It is beyond the scope of this paper to propose a new HRA approach or to recommend specific methods or refinements to those methods. Rather, this paper provides a review of HRA as applied to traditional paper-based procedures, followed by a discussion of what specific factors should additionally be considered in HRAs for computerized procedures. Performance shaping factors and failure modes unique to computerized procedures are highlighted. Since there is no definitive guide to HRA for paper-based procedures, this paper also serves to clarify the existing guidance on paper-based procedures before delving into the unique aspects of computerized procedures.

  15. Adaptation and innovation: a grounded theory study of procedural variation in the academic surgical workplace.

    Apramian, Tavis; Watling, Christopher; Lingard, Lorelei; Cristancho, Sayra

    2015-10-01

    Surgical research struggles to describe the relationship between procedural variations in daily practice and traditional conceptualizations of evidence. The problem has resisted simple solutions, in part, because we lack a solid understanding of how surgeons conceptualize and interact around variation, adaptation, innovation, and evidence in daily practice. This grounded theory study aims to describe the social processes that influence how procedural variation is conceptualized in the surgical workplace. Using the constructivist grounded theory methodology, semi-structured interviews with surgeons (n = 19) from four North American academic centres were collected and analysed. Purposive sampling targeted surgeons with experiential knowledge of the role of variations in the workplace. Theoretical sampling was conducted until a theoretical framework representing key processes was conceptually saturated. Surgical procedural variation was influenced by three key processes. Seeking improvement was shaped by having unsolved procedural problems, adapting in the moment, and pursuing personal opportunities. Orienting self and others to variations consisted of sharing stories of variations with others, taking stock of how a variation promoted personal interests, and placing trust in peers. Acting under cultural and material conditions was characterized by being wary, positioning personal image, showing the logic of a variation, and making use of academic resources to do so. Our findings include social processes that influence how adaptations are incubated in surgical practice and mature into innovations. This study offers a language for conceptualizing the sociocultural influences on procedural variations in surgery. Interventions to change how surgeons interact with variations on a day-to-day basis should consider these social processes in their design. © 2015 John Wiley & Sons, Ltd.

  16. Fixed point theory, variational analysis, and optimization

    Al-Mezel, Saleh Abdullah R; Ansari, Qamrul Hasan

    2015-01-01

    ""There is a real need for this book. It is useful for people who work in areas of nonlinear analysis, optimization theory, variational inequalities, and mathematical economics.""-Nan-Jing Huang, Sichuan University, Chengdu, People's Republic of China

  17. The Effect of Geographic Units of Analysis on Measuring Geographic Variation in Medical Services Utilization

    Agnus M. Kim

    2016-07-01

    Full Text Available Objectives: We aimed to evaluate the effect of geographic units of analysis on measuring geographic variation in medical services utilization. For this purpose, we compared geographic variations in the rates of eight major procedures in administrative units (districts and new areal units organized based on the actual health care use of the population in Korea. Methods: To compare geographic variation in geographic units of analysis, we calculated the age–sex standardized rates of eight major procedures (coronary artery bypass graft surgery, percutaneous transluminal coronary angioplasty, surgery after hip fracture, knee-replacement surgery, caesarean section, hysterectomy, computed tomography scan, and magnetic resonance imaging scan from the National Health Insurance database in Korea for the 2013 period. Using the coefficient of variation, the extremal quotient, and the systematic component of variation, we measured geographic variation for these eight procedures in districts and new areal units. Results: Compared with districts, new areal units showed a reduction in geographic variation. Extremal quotients and inter-decile ratios for the eight procedures were lower in new areal units. While the coefficient of variation was lower for most procedures in new areal units, the pattern of change of the systematic component of variation between districts and new areal units differed among procedures. Conclusions: Geographic variation in medical service utilization could vary according to the geographic unit of analysis. To determine how geographic characteristics such as population size and number of geographic units affect geographic variation, further studies are needed.
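
    The first two variation statistics used in the study above, the coefficient of variation and the extremal quotient, can be computed directly from area-level standardized rates; the systematic component of variation additionally needs observed and expected counts per area and is omitted here. A minimal sketch with hypothetical rates (not the study's data):

    ```python
    # Two of the geographic-variation statistics named above, computed over age-sex
    # standardized procedure rates for a set of geographic units.
    import numpy as np

    rates = np.array([4.1, 5.3, 2.8, 6.0, 3.9, 7.2, 4.7])   # hypothetical rates per 1,000

    cv = rates.std(ddof=1) / rates.mean()   # coefficient of variation
    eq = rates.max() / rates.min()          # extremal quotient (highest / lowest area rate)

    print(f"CV = {cv:.3f}, EQ = {eq:.2f}")
    ```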

  18. Bio-Oil Analysis Laboratory Procedures | Bioenergy | NREL

    NREL develops laboratory analytical procedures (LAPs) for the analysis of raw and upgraded pyrolysis bio-oils. These standard procedures have been validated and allow for reliable bio-oil analysis.

  19. Simplified procedures for fast reactor fuel cycle and sensitivity analysis

    Badruzzaman, A.

    1979-01-01

    The Continuous Slowing Down-Integral Transport Theory has been extended to perform criticality calculations in a fast reactor core-blanket system, achieving excellent prediction of the spectrum and the eigenvalue. The integral transport parameters did not need recalculation with source iteration and were found to be relatively constant with exposure. Fuel cycle parameters were accurately predicted when these were not varied, thus reducing a principal potential penalty of the Integral Transport approach, where considerable effort may be required to calculate transport parameters in more complicated geometries. The small variation of the spectrum in the central core region, and its weak dependence on exposure for this region, the core-blanket interface and the blanket region, led to the extension and development of inexpensive simplified procedures to complement exact methods. These procedures gave accurate predictions of the key fuel cycle parameters such as cost and their sensitivity to variation in spectrum-averaged and multigroup cross sections. They also predicted the implications of design variation on these parameters very well. The accuracy of these procedures and their use in analyzing a wide variety of sensitivities demonstrate the potential utility of survey calculations in fast reactor analysis and fuel management

  20. Safeguards Network Analysis Procedure (SNAP): overview

    Chapman, L.D; Engi, D.

    1979-08-01

    Nuclear safeguards systems provide physical protection and control of nuclear materials. The Safeguards Network Analysis Procedure (SNAP) provides a convenient and standard analysis methodology for the evaluation of physical protection system effectiveness. This is achieved through a standard set of symbols which characterize the various elements of safeguards systems and an analysis program to execute simulation models built using the SNAP symbology. The outputs provided by the SNAP simulation program supplements the safeguards analyst's evaluative capabilities and supports the evaluation of existing sites as well as alternative design possibilities. This paper describes the SNAP modeling technique and provides an example illustrating its use

  1. Building America House Performance Analysis Procedures

    Hendron, R.; Farrar-Nagy, S.; Anderson, R.; Judkoff, R.

    2001-10-29

    As the Building America Program has grown to include a large and diverse cross section of the home building industry, accurate and consistent analysis techniques have become more important to help all program partners as they perform design tradeoffs and calculate energy savings for prototype houses built as part of the program. This document illustrates some of the analysis concepts proven effective and reliable for analyzing the transient energy usage of advanced energy systems as well as entire houses. The analysis procedure described here provides a starting point for calculating energy savings of a prototype house relative to two base cases: builder standard practice and regional standard practice. It also describes building simulation analysis used to calculate annual energy savings based on side-by-side short-term field testing of a prototype house.

  2. Thermoluminescence sensitivity variations in LiF PTFE dosemeters incurred by improper handling procedures

    Mason, E W; MacKinlay, Alistair F; Saunders, D

    1975-01-01

    A systematic study of some anomalous darkening effects and thermoluminescence sensitivity variations observed in LiF:PTFE thermoluminescent dosemeters is described. Various likely causes of such effects have been investigated. The manufacturer's recommended cleaning procedures have been found to be inadequate and, in some cases, have been found to actually promote discolouration of the dosemeters. Recommendations are given for the successful use of LiF:PTFE thermoluminescent dosemeters in personal dosimetry.

  3. Big Data Analysis of Human Genome Variations

    Gojobori, Takashi

    2016-01-25

    Since the human genome draft sequence was made public for the first time in 2000, genomic analyses have been intensively extended to the population level. The following three international projects are good examples of large-scale studies of human genome variations: 1) HapMap Data (1,417 individuals) (http://hapmap.ncbi.nlm.nih.gov/downloads/genotypes/2010-08_phaseII+III/forward/), 2) HGDP (Human Genome Diversity Project) Data (940 individuals) (http://www.hagsc.org/hgdp/files.html), 3) 1000 Genomes Data (2,504 individuals) http://ftp.1000genomes.ebi.ac.uk/vol1/ftp/release/20130502/. If we can integrate all three data sets into a single volume of data, we should be able to conduct a more detailed analysis of human genome variations for a total of 4,861 individuals (= 1,417 + 940 + 2,504). In fact, we successfully integrated these three data sets by use of information on the reference human genome sequence, and we conducted the big data analysis. In particular, we constructed a phylogenetic tree of about 5,000 human individuals at the genome level. As a result, we were able to identify clusters of ethnic groups, with detectable admixture, that could not be identified by an analysis of each of the three data sets alone. Here, we report the outcome of this big data analysis and discuss the evolutionary significance of human genomic variations. Note that the present study was conducted in collaboration with Katsuhiko Mineta and Kosuke Goto at KAUST.
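
    A genome-level clustering of individuals of the kind described above reduces, at its simplest, to computing pairwise genetic distances from a merged genotype matrix and building a tree from them. The sketch below only illustrates that idea on toy data; it is not the study's actual pipeline, which must handle thousands of genomes.

    ```python
    # Toy distance-based clustering of individuals from a genotype matrix.
    import numpy as np
    from scipy.spatial.distance import pdist
    from scipy.cluster.hierarchy import linkage

    rng = np.random.default_rng(0)
    genotypes = rng.integers(0, 3, size=(20, 500))   # 20 individuals x 500 SNPs (0/1/2 alleles)

    dist = pdist(genotypes, metric="hamming")        # pairwise genetic dissimilarity
    tree = linkage(dist, method="average")           # UPGMA-style hierarchical clustering

    print(tree[:5])                                  # first merges of the hierarchical tree
    ```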

  4. Standard Procedure for Grid Interaction Analysis

    Svensson, Bertil; Lindahl, Sture; Karlsson, Daniel; Joensson, Jonas; Heyman, Fredrik

    2015-01-01

    Grid events, simultaneously affecting all safety related auxiliary systems in a nuclear power plant, are critical and must be carefully addressed in the design, upgrading and operational processes. Up to now, the connecting grid has often been treated as either fully available or totally unavailable, and too little attention has been paid to specifying grid performance criteria. This paper deals with standard procedures for grid interaction analysis, to derive tools and criteria to handle grid events challenging the safety systems of the plant. Critical external power system events are investigated and characterised, with respect to severity and rate of occurrence. These critical events are then grouped with respect to impact on the safety systems, when a disturbance propagates into the plant. It is then important to make sure that 1) the impact of the disturbance will never reach any critical system, 2) the impact of the disturbance will be eliminated before it hurts any critical system, or 3) the critical systems will be proven to be designed in such a way that they can withstand the impact of the disturbance, and the associated control and protection systems can withstand voltage and frequency transients associated with the disturbances. A number of representative disturbance profiles, reflecting connecting grid conditions, are therefore derived, to be used for equipment testing. (authors)

  5. Variations in the usage and composition of a radial cocktail during radial access coronary angiography procedures.

    Pate, G

    2011-10-01

    A survey was conducted of medication administered during radial artery cannulation for coronary angiography in 2009 in Ireland; responses were obtained for 15 of 20 centres, in 5 of which no radial access procedures were undertaken. All 10 (100%) centres which provided data used heparin and one or more anti-spasmodics; verapamil in 9 (90%), nitrate in 1 (10%), both in 2 (20%). There were significant variations in the doses used. Further work needs to be done to determine the optimum cocktail to prevent radial artery injury following coronary angiography.

  6. SVAMP: Sequence variation analysis, maps and phylogeny

    Naeem, Raeece

    2014-04-03

    Summary: SVAMP is a stand-alone desktop application to visualize genomic variants (in variant call format) in the context of geographical metadata. Users of SVAMP are able to generate phylogenetic trees and perform principal coordinate analysis in real time from variant call format (VCF) and associated metadata files. Allele frequency map, geographical map of isolates, Tajima's D metric, single nucleotide polymorphism density, GC and variation density are also available for visualization in real time. We demonstrate the utility of SVAMP in tracking a methicillin-resistant Staphylococcus aureus outbreak from published next-generation sequencing data across 15 countries. We also demonstrate the scalability and accuracy of our software on 245 Plasmodium falciparum malaria isolates from three continents. Availability and implementation: The Qt/C++ software code, binaries, user manual and example datasets are available at http://cbrc.kaust.edu.sa/svamp. © The Author 2014.
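
    One of the real-time analyses SVAMP offers, principal coordinate analysis, is classical multidimensional scaling of a pairwise distance matrix. The following is an independent minimal implementation for illustration, not SVAMP's code.

    ```python
    # Classical MDS (principal coordinate analysis): embed points so that Euclidean
    # distances between the coordinates approximate the given distance matrix D.
    import numpy as np

    def pcoa(D, n_axes=2):
        n = D.shape[0]
        J = np.eye(n) - np.ones((n, n)) / n          # centring matrix
        B = -0.5 * J @ (D ** 2) @ J                   # double-centred squared distances
        eigval, eigvec = np.linalg.eigh(B)
        order = np.argsort(eigval)[::-1]              # largest eigenvalues first
        eigval, eigvec = eigval[order], eigvec[:, order]
        return eigvec[:, :n_axes] * np.sqrt(np.maximum(eigval[:n_axes], 0))

    D = np.array([[0., 2., 6.], [2., 0., 5.], [6., 5., 0.]])   # toy pairwise distance matrix
    print(pcoa(D))
    ```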

  7. Orthogonal projections and bootstrap resampling procedures in the study of infraspecific variation

    Luiza Carla Duarte

    1998-12-01

    Full Text Available The effect of an increase in quantitative continuous characters resulting from indeterminate growth upon the analysis of population differentiation was investigated using, as an example, a set of continuous characters measured as distance variables in 10 populations of a rodent species. The data before and after correction for allometric size effects using orthogonal projections were analyzed with a parametric bootstrap resampling procedure applied to canonical variate analysis. The variance component of the distance measures attributable to indeterminate growth within the populations was found to be substantial, although the ordination of the populations was not affected, as evidenced by the relative and absolute positions of the centroids. The covariance pattern of the distance variables used to infer the nature of the morphological differences was strongly influenced by indeterminate growth. The uncorrected data produced a misleading picture of morphological differentiation by indicating that groups of populations differed in size. However, the data corrected for allometric effects clearly demonstrated that populations differed morphologically both in size and shape. These results are discussed in terms of the analysis of morphological differentiation among populations and the definition of infraspecific geographic units.
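
    One common way to implement the size correction described above is a Burnaby-style orthogonal projection: the data are projected onto the subspace orthogonal to an estimated allometric size (growth) vector before ordination. The sketch below illustrates that general technique under the assumption that the first principal component serves as the size vector; it is not the authors' exact procedure.

    ```python
    # Burnaby-style size correction: project log-transformed distance variables onto the
    # complement of an estimated allometric size vector before canonical variate analysis.
    import numpy as np

    rng = np.random.default_rng(1)
    X = np.log(rng.uniform(5.0, 30.0, size=(50, 6)))      # 50 specimens x 6 distance variables

    # Estimate the size/growth vector as the first principal component of the pooled data.
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    g = Vt[0][:, None]                                     # column vector, shape (6, 1)

    P = np.eye(X.shape[1]) - g @ g.T / (g.T @ g)           # projector orthogonal to g
    X_size_free = Xc @ P                                    # size-corrected data for CVA etc.
    print(X_size_free.shape)
    ```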

  8. Summative Mass Analysis of Algal Biomass - Integration of Analytical Procedures: Laboratory Analytical Procedure (LAP)

    Laurens, Lieve M. L.

    2016-01-13

    This procedure guides the integration of laboratory analytical procedures to measure algal biomass constituents in an unambiguous manner and ultimately achieve mass balance closure for algal biomass samples. Many of these methods build on years of research in algal biomass analysis.

  9. Solid-phase extraction procedures in systematic toxicological analysis

    Franke, J.P.; de Zeeuw, R.A

    1998-01-01

    In systematic toxicological analysis (STA) the substance(s) present is (are) not known at the start of the analysis. In such an undirected search the extraction procedure cannot be directed to a given substance but must be a general procedure where a compromise must be reached in that the substances

  10. Analysis procedure for americium in environmental samples

    Holloway, R.W.; Hayes, D.W.

    1982-01-01

    Several methods for the analysis of 241Am in environmental samples were evaluated and a preferred method was selected. This method was modified and used to determine the 241Am content in sediments, biota, and water. The advantages and limitations of the method are discussed. The method is also suitable for 244Cm analysis.

  11. Analysis of half diallel mating designs I: a practical analysis procedure for ANOVA approximation.

    G.R. Johnson; J.N. King

    1998-01-01

    Procedures to analyze half-diallel mating designs using the SAS statistical package are presented. The procedure requires two runs of PROC VARCOMP and results in estimates of additive and non-additive genetic variation. The procedures described can be modified to work on most statistical software packages which can compute variance component estimates. The...

  12. Building America Performance Analysis Procedures: Revision 1

    None

    2004-06-01

    To measure progress toward multi-year research goals, cost and performance trade-offs are evaluated through a series of controlled field and laboratory experiments supported by energy analysis techniques using test data to calibrate simulation models.

  13. Analysis of fetal dose in CT procedures

    Ortiz Torres, A.; Plazas, M. C.

    2006-01-01

    The protection and care of the unborn child are a central responsibility in diagnostic radiology. Today, in radiodiagnostic examinations, and particularly in computed tomography procedures, the fetus can absorb a considerable dose of radiation, and questions frequently arise as to whether these doses carry a risk of malformation or whether interruption of the pregnancy is warranted. In most cases, a treatment with ionizing radiation that is beneficial for the mother is only indirectly beneficial for the fetus, which is exposed to a risk. The possibility that a fetus or a small child develops a radiation-induced cancer can be up to three times that of the general population; hence the importance of analysing the effects of prenatal irradiation and the main factors to consider when estimating the magnitude of the risk of in utero exposures, in the different circumstances in which these can occur in computed tomography procedures. (Author)

  14. Shakedown analysis by finite element incremental procedures

    Borkowski, A.; Kleiber, M.

    1979-01-01

    It is a common occurrence in many practical problems that external loads are variable and the exact time-dependent history of loading is unknown. Instead, the load is characterized by a given loading domain: a convex polyhedron in the n-dimensional space of load parameters. The problem is then to check whether or not a structure shakes down to a variable loading as defined above, i.e., responds elastically after a few elasto-plastic cycles. Such a check can be performed by an incremental procedure. One should reproduce incrementally a simple cyclic process which consists of proportional load paths that connect the origin of the load space with the corners of the loading domain. It was proved that if a structure shakes down to such a loading history then it is able to adapt itself to an arbitrary load path contained in the loading domain. The main advantage of such an approach is the possibility to use existing incremental finite-element computer codes. (orig.)

  15. Video content analysis of surgical procedures.

    Loukas, Constantinos

    2018-02-01

    In addition to its therapeutic benefits, minimally invasive surgery offers the potential for video recording of the operation. The videos may be archived and used later for reasons such as cognitive training, skills assessment, and workflow analysis. Methods from the major field of video content analysis and representation are increasingly applied in the surgical domain. In this paper, we review recent developments and analyze future directions in the field of content-based video analysis of surgical operations. The reviewed literature was obtained from PubMed and Google Scholar searches on combinations of the following keywords: 'surgery', 'video', 'phase', 'task', 'skills', 'event', 'shot', 'analysis', 'retrieval', 'detection', 'classification', and 'recognition'. The collected articles were categorized and reviewed based on the technical goal sought, type of surgery performed, and structure of the operation. A total of 81 articles were included. The publication activity is constantly increasing; more than 50% of these articles were published in the last 3 years. Significant research has been performed for video task detection and retrieval in eye surgery. In endoscopic surgery, the research activity is more diverse: gesture/task classification, skills assessment, tool type recognition, shot/event detection and retrieval. Recent works employ deep neural networks for phase and tool recognition as well as shot detection. Content-based video analysis of surgical operations is a rapidly expanding field. Several future prospects for research exist including, inter alia, shot boundary detection, keyframe extraction, video summarization, pattern discovery, and video annotation. The development of publicly available benchmark datasets to evaluate and compare task-specific algorithms is essential.

  16. Analysis and optimization of blood-testing procedures.

    Bar-Lev, S.K.; Boxma, O.J.; Perry, D.; Vastazos, L.P.

    2017-01-01

    This paper is devoted to the performance analysis and optimization of blood testing procedures. We present a queueing model of two queues in series, representing the two stages of a blood-testing procedure. Service (testing) in stage 1 is performed in batches, whereas it is done individually in
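
    The batch stage of such a blood-testing procedure trades pool size against retesting, a trade-off captured in its simplest form by classical Dorfman pooling. The sketch below illustrates that simpler calculation, not the authors' two-stage queueing model.

    ```python
    # Expected number of tests per person under classical Dorfman pooling: test a pool of
    # size k; if it is positive, retest each member individually. Minimising over k shows
    # why batching the first stage of a blood-testing procedure can pay off at low prevalence.
    def tests_per_person(k, p):
        return 1.0 / k + 1.0 - (1.0 - p) ** k

    prevalence = 0.01
    best_k = min(range(2, 51), key=lambda k: tests_per_person(k, prevalence))
    print(best_k, round(tests_per_person(best_k, prevalence), 3))   # ~pools of 11, ~0.2 tests/person
    ```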

  17. System analysis procedures for conducting PSA of nuclear power plants

    Lee, Yoon Hwan; Jeong, Won Dae; Kim, Tae Un; Kim, Kil You; Han, Sang Hoon; Chang, Seung Chul; Sung, Tae Yong; Yang, Jun Eon; Kang, Dae Il; Park, Jin Hee; Hwang, Mi Jeong; Jin, Young Ho.

    1997-03-01

    This document, the Probabilistic Safety Assessment (PSA) procedures guide for system analysis, is intended to provide guidelines for analyzing the target systems consistently and technically in the performance of PSA for nuclear power plants (NPPs). The guide has been prepared in accordance with the procedures and techniques for fault tree analysis (FTA) used in system analysis. Normally the main objective of system analysis is to assess the reliability of the systems modeled in the event tree analysis (ETA). A variety of analytical techniques can be used for system analysis; however, the FTA method is used in this procedures guide. FTA is the method used for representing the failure logic of plant systems deductively using AND, OR or NOT gates. The fault tree should reflect all possible failure modes that may contribute to the system unavailability. This should include contributions due to the mechanical failures of the components, Common Cause Failures (CCFs), human errors and outages for testing and maintenance. After the construction of the fault tree is completed, the system unavailability is calculated with the CUT module of KIRAP, and the qualitative and quantitative analysis is performed through the process stated above. As mentioned above, the procedures for system analysis are based on PSA procedures and methods which have been applied to the safety assessments of NPPs under construction in the country. Accordingly, the method of FTA stated in this procedures guide will be applicable to PSA for the NPPs to be constructed in the future. (author). 6 tabs., 11 figs., 7 refs
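
    At its core, fault tree quantification combines basic-event probabilities through OR and AND gates (assuming independence in the simplest case). A minimal sketch with invented numbers; real PSA tools such as the KIRAP CUT module mentioned above instead work on minimal cut sets of large trees.

    ```python
    # Minimal fault-tree quantification: basic-event probabilities combined through OR and
    # AND gates, assuming independent events. Illustrative only.
    def gate_or(probs):
        q = 1.0
        for p in probs:
            q *= (1.0 - p)
        return 1.0 - q

    def gate_and(probs):
        q = 1.0
        for p in probs:
            q *= p
        return q

    pump_fails   = gate_or([1.0e-3, 5.0e-4])              # hardware failure OR maintenance outage
    train_a      = gate_or([pump_fails, 1.0e-3])           # pump OR its valve fails
    train_b      = gate_or([pump_fails, 1.0e-3])           # identical train (common cause ignored here)
    system_fails = gate_and([train_a, train_b])             # both redundant trains must fail

    print(f"system unavailability ~ {system_fails:.2e}")
    ```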

  18. System analysis procedures for conducting PSA of nuclear power plants

    Lee, Yoon Hwan; Jeong, Won Dae; Kim, Tae Un; Kim, Kil You; Han, Sang Hoon; Chang, Seung Chul; Sung, Tae Yong; Yang, Jun Eon; Kang, Dae Il; Park, Jin Hee; Hwang, Mi Jeong; Jin, Young Ho

    1997-03-01

    This document, the Probabilistic Safety Assessment (PSA) procedures guide for system analysis, is intended to provide guidelines for analyzing the target systems consistently and technically in the performance of PSA for nuclear power plants (NPPs). The guide has been prepared in accordance with the procedures and techniques for fault tree analysis (FTA) used in system analysis. Normally the main objective of system analysis is to assess the reliability of the systems modeled in the event tree analysis (ETA). A variety of analytical techniques can be used for system analysis; however, the FTA method is used in this procedures guide. FTA is the method used for representing the failure logic of plant systems deductively using AND, OR or NOT gates. The fault tree should reflect all possible failure modes that may contribute to the system unavailability. This should include contributions due to the mechanical failures of the components, Common Cause Failures (CCFs), human errors and outages for testing and maintenance. After the construction of the fault tree is completed, the system unavailability is calculated with the CUT module of KIRAP, and the qualitative and quantitative analysis is performed through the process stated above. As mentioned above, the procedures for system analysis are based on PSA procedures and methods which have been applied to the safety assessments of NPPs under construction in the country. Accordingly, the method of FTA stated in this procedures guide will be applicable to PSA for the NPPs to be constructed in the future. (author). 6 tabs., 11 figs., 7 refs.

  19. Convergence analysis of variational and non-variational multigrid algorithms for the Laplace-Beltrami operator

    Bonito, Andrea; Pasciak, Joseph E.

    2012-01-01

    We design and analyze variational and non-variational multigrid algorithms for the Laplace-Beltrami operator on a smooth and closed surface. In both cases, a uniform convergence for the V-cycle algorithm is obtained provided the surface geometry is captured well enough by the coarsest grid. The main argument hinges on a perturbation analysis from an auxiliary variational algorithm defined directly on the smooth surface. In addition, the vanishing mean value constraint is imposed on each level, thereby avoiding singular quadratic forms without adding additional computational cost.

  20. Quantification procedures in micro X-ray fluorescence analysis

    Kanngiesser, Birgit

    2003-01-01

    For quantification in micro X-ray fluorescence analysis, standard-free quantification procedures have become especially important. An introduction to the basic concepts of these quantification procedures is given, followed by a short survey of the procedures which are available now and what kind of experimental situations and analytical problems are addressed. The last point is extended by the description of our own development of the fundamental parameter method, which renders the inclusion of nonparallel beam geometries possible. Finally, open problems for the quantification procedures are discussed.

  1. Operating procedures: Fusion Experiments Analysis Facility

    Lerche, R.A.; Carey, R.W.

    1984-03-20

    The Fusion Experiments Analysis Facility (FEAF) is a computer facility based on a DEC VAX 11/780 computer. It became operational in late 1982. At that time two manuals were written to aid users and staff in their interactions with the facility. This manual is designed as a reference to assist the FEAF staff in carrying out their responsibilities. It is meant to supplement equipment and software manuals supplied by the vendors. Also this manual provides the FEAF staff with a set of consistent, written guidelines for the daily operation of the facility.

  2. Building America Performance Analysis Procedures: Revision 1

    Hendron, R.; Anderson, R.; Judkoff, R.; Christensen, C.; Eastment, M.; Norton, P.; Reeves, P.; Hancock, E.

    2004-06-01

    To measure progress toward multi-year Building America research goals, cost and performance trade-offs are evaluated through a series of controlled field and laboratory experiments supported by energy analysis techniques that use test data to "calibrate" energy simulation models. This report summarizes the guidelines for reporting such analytical results using the Building America Research Benchmark (Version 3.1) in studies that also include consideration of current Regional and Builder Standard Practice. Version 3.1 of the Benchmark is generally consistent with the 1999 Home Energy Rating System (HERS) Reference Home, with additions that allow evaluation of all home energy uses.

  3. Operating procedures: Fusion Experiments Analysis Facility

    Lerche, R.A.; Carey, R.W.

    1984-01-01

    The Fusion Experiments Analysis Facility (FEAF) is a computer facility based on a DEC VAX 11/780 computer. It became operational in late 1982. At that time two manuals were written to aid users and staff in their interactions with the facility. This manual is designed as a reference to assist the FEAF staff in carrying out their responsibilities. It is meant to supplement equipment and software manuals supplied by the vendors. Also this manual provides the FEAF staff with a set of consistent, written guidelines for the daily operation of the facility

  4. Dosimetric impact of the variation of the prostate volume and shape between pretreatment planning and treatment procedure

    Beaulieu, Luc; Aubin, Sylviane; Taschereau, Richard; Pouliot, Jean; Vigneault, Eric

    2002-01-01

    Purpose: The goal of this study is to evaluate the dosimetric impact on a pretreatment planning of prostatic volume and shape variations occurring between the moment of the volume study (preplanning) and just before a transperineal permanent seed implant procedure. Such variations could be an obvious source of misplacement of the seeds relative to the prostate gland and organs at risk. Other sources of dosimetric uncertainties, such as misplacement due to the procedure itself or edema, are eliminated by looking at these variations before the implant procedure. Methods and Materials: For 35 clinical cases, prostate contours were taken at preplanning time as well as in the operating room (OR) minutes before the procedure. Comparison of shape and volume between the two sets was made. The impact on V100 was evaluated by placing the seeds in their planned positions in the new volume (clinical situation) and also by performing a new plan with the second set of contours to simulate an intraoperative approach. Results: The volume taken in the OR remained unchanged compared to the pretreatment planning volume in only 37% of the cases. While on average the dose coverage loss from pretreatment planning due to a combination of variations of volume and shape was small at 5.7%, a V100 degradation of up to 20.9% was observed in extreme cases. Even in cases in which no changes in volume were observed, changes in shape occurred and strongly affected implant dosimetry. Conclusions: Variations of volume and shape between pretreatment planning and the implant procedure can have a strong impact on the dosimetry if the planning and the implant procedure are not performed on the same day. This is an argument in favor of performing implant dosimetry in the OR
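
    The coverage metric used above, V100, is the fraction of the target volume receiving at least 100% of the prescription dose. The sketch below shows its computation from a dose grid and a contour mask; the grid, mask and prescription level are hypothetical, not the study's data.

    ```python
    # Illustrative computation of V100: the percentage of the prostate volume receiving at
    # least the prescription dose, evaluated over the voxels inside the contoured target.
    import numpy as np

    rng = np.random.default_rng(2)
    dose = rng.uniform(60.0, 200.0, size=(40, 40, 40))     # hypothetical dose grid (Gy)
    prostate_mask = np.zeros_like(dose, dtype=bool)
    prostate_mask[10:30, 10:30, 10:30] = True              # hypothetical contoured target

    prescription = 145.0                                    # Gy; assumed I-125 monotherapy level
    v100 = (dose[prostate_mask] >= prescription).mean() * 100.0
    print(f"V100 = {v100:.1f}% of the prostate volume")
    ```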

  5. Photographic paper X-ray procedure - a simple technique for the visualisation of osseous norm variations and malformations

    Markert, K.; Wirth, I.; Reinhold-Richter, L.

    1983-01-01

    On the basis of osseous norm variations and malformations, a simple X-ray procedure by means of photographic paper which can be applied in every institute of pathology is demonstrated. The quality of the photographs permits the assessment of skeletal changes which are of diagnostic importance. (author)

  6. Current Human Reliability Analysis Methods Applied to Computerized Procedures

    Ronald L. Boring

    2012-06-01

    Computerized procedures (CPs) are an emerging technology within nuclear power plant control rooms. While CPs have been implemented internationally in advanced control rooms, to date no US nuclear power plant has implemented CPs in its main control room (Fink et al., 2009). Yet, CPs are a reality of new plant builds and are an area of considerable interest to existing plants, which see advantages in terms of enhanced ease of use and easier records management by omitting the need for updating hardcopy procedures. The overall intent of this paper is to provide a characterization of human reliability analysis (HRA) issues for computerized procedures. It is beyond the scope of this document to propose a new HRA approach or to recommend specific methods or refinements to those methods. Rather, this paper serves as a review of current HRA as it may be used for the analysis and review of computerized procedures.

  7. SVAMP: Sequence variation analysis, maps and phylogeny

    Naeem, Raeece; Hidayah, Lailatul; Preston, Mark D.; Clark, Taane G.; Pain, Arnab

    2014-01-01

    Summary: SVAMP is a stand-alone desktop application to visualize genomic variants (in variant call format) in the context of geographical metadata. Users of SVAMP are able to generate phylogenetic trees and perform principal coordinate analysis

  8. Procedure for the analysis of americium in complex matrices

    Knab, D.

    1978-02-01

    A radioanalytical procedure for the analysis of americium in complex matrices has been developed. Clean separations of americium can be obtained from up to 100 g of sample ash, regardless of the starting material. The ability to analyze large masses of material provides the increased sensitivity necessary to detect americium in many environmental samples. The procedure adequately decontaminates from rare earth elements and natural radioactive nuclides that interfere with the alpha spectrometric measurements

  9. ORNL-PWR BDHT analysis procedure: an overview

    Cliff, S.B.

    1978-01-01

    The key computer programs currently used by the analysis procedure of the ORNL-PWR Blowdown Heat Transfer Separate Effects Program are overviewed with particular emphasis placed on their interrelationships. The major modeling and calculational programs, COBRA, ORINC, ORTCAL, PINSIM, and various versions of RELAP4, are summarized and placed into the perspective of the procedure. The supportive programs, REDPLT, ORCPLT, BDHTPLOT, OXREPT, and OTOCI, and their uses are described

  10. Analysis of Price Variation and Market Integration of Prosopis ...

    Analysis of Price Variation and Market Integration of Prosopis Africana (guill. ... select five markets based on the presence of traders selling the commodity in the markets ... T-test result showed that Prosopis africana seed trade is profitable and ...

  11. Accident Sequence Evaluation Program: Human reliability analysis procedure

    Swain, A.D.

    1987-02-01

    This document presents a shortened version of the procedure, models, and data for human reliability analysis (HRA) which are presented in the Handbook of Human Reliability Analysis With emphasis on Nuclear Power Plant Applications (NUREG/CR-1278, August 1983). This shortened version was prepared and tried out as part of the Accident Sequence Evaluation Program (ASEP) funded by the US Nuclear Regulatory Commission and managed by Sandia National Laboratories. The intent of this new HRA procedure, called the ''ASEP HRA Procedure,'' is to enable systems analysts, with minimal support from experts in human reliability analysis, to make estimates of human error probabilities and other human performance characteristics which are sufficiently accurate for many probabilistic risk assessments. The ASEP HRA Procedure consists of a Pre-Accident Screening HRA, a Pre-Accident Nominal HRA, a Post-Accident Screening HRA, and a Post-Accident Nominal HRA. The procedure in this document includes changes made after tryout and evaluation of the procedure in four nuclear power plants by four different systems analysts and related personnel, including human reliability specialists. The changes consist of some additional explanatory material (including examples), and more detailed definitions of some of the terms. 42 refs.

  12. Accident Sequence Evaluation Program: Human reliability analysis procedure

    Swain, A.D.

    1987-02-01

    This document presents a shortened version of the procedure, models, and data for human reliability analysis (HRA) which are presented in the Handbook of Human Reliability Analysis With emphasis on Nuclear Power Plant Applications (NUREG/CR-1278, August 1983). This shortened version was prepared and tried out as part of the Accident Sequence Evaluation Program (ASEP) funded by the US Nuclear Regulatory Commission and managed by Sandia National Laboratories. The intent of this new HRA procedure, called the ''ASEP HRA Procedure,'' is to enable systems analysts, with minimal support from experts in human reliability analysis, to make estimates of human error probabilities and other human performance characteristics which are sufficiently accurate for many probabilistic risk assessments. The ASEP HRA Procedure consists of a Pre-Accident Screening HRA, a Pre-Accident Nominal HRA, a Post-Accident Screening HRA, and a Post-Accident Nominal HRA. The procedure in this document includes changes made after tryout and evaluation of the procedure in four nuclear power plants by four different systems analysts and related personnel, including human reliability specialists. The changes consist of some additional explanatory material (including examples), and more detailed definitions of some of the terms. 42 refs

  13. Convergence analysis of variational and non-variational multigrid algorithms for the Laplace-Beltrami operator

    Bonito, Andrea

    2012-09-01

    We design and analyze variational and non-variational multigrid algorithms for the Laplace-Beltrami operator on a smooth and closed surface. In both cases, a uniform convergence for the V -cycle algorithm is obtained provided the surface geometry is captured well enough by the coarsest grid. The main argument hinges on a perturbation analysis from an auxiliary variational algorithm defined directly on the smooth surface. In addition, the vanishing mean value constraint is imposed on each level, thereby avoiding singular quadratic forms without adding additional computational cost. Numerical results supporting our analysis are reported. In particular, the algorithms perform well even when applied to surfaces with a large aspect ratio. © 2011 American Mathematical Society.

  14. Method and procedure of fatigue analysis for nuclear equipment

    Wen Jing; Fang Yonggang; Lu Yan; Zhang Yue; Sun Zaozhan; Zou Mingzhong

    2014-01-01

    As an example, a fatigue analysis of the upper head of the pressurizer in one NPP was carried out using ANSYS, a finite element analysis software package. According to the RCC-M code, only two kinds of typical temperature and pressure transients were considered in the fatigue analysis. Meanwhile, the influence of earthquake loading was taken into account. The method and procedure of fatigue analysis for nuclear safety equipment are described in detail. This paper provides a reference for fatigue analysis and assessment of nuclear safety grade equipment and piping. (authors)
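
    A design-code fatigue evaluation of this kind typically ends with a cumulative usage factor: the sum over transient types of expected cycles divided by allowable cycles at the corresponding alternating stress, which must remain below 1.0. A minimal sketch with invented cycle counts, not the values from this analysis:

    ```python
    # Final step of a code-type fatigue evaluation (Miner's rule): cumulative usage factor
    # = sum over transient types of (expected cycles) / (allowable cycles from the design
    # fatigue curve at the computed alternating stress). Numbers below are illustrative only.
    transients = [
        # (name, expected cycles over plant life, allowable cycles at that stress amplitude)
        ("heatup/cooldown",          200, 5.0e3),
        ("pressure test",             40, 2.0e4),
        ("design-basis earthquake",   10, 1.0e3),
    ]

    usage = sum(n / n_allow for _, n, n_allow in transients)
    print(f"cumulative fatigue usage factor = {usage:.3f} (acceptance criterion: <= 1.0)")
    ```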

  15. ORNL: PWR-BDHT analysis procedure, a preliminary overview

    Cliff, S.B.

    1978-01-01

    The computer programs currently used in the analysis of the ORNL-PWR Blowdown Heat Transfer Separate-Effects Program are overviewed. The current linkages and relationships among the programs are given along with general comments about the future directions of some of these programs. The overview is strictly from the computer science point of view with only minimal information concerning the engineering aspects of the analysis procedure

  16. Cost analysis of robotic versus laparoscopic general surgery procedures.

    Higgins, Rana M; Frelich, Matthew J; Bosler, Matthew E; Gould, Jon C

    2017-01-01

    Robotic surgical systems have been used at a rapidly increasing rate in general surgery. Many of these procedures have been performed laparoscopically for years. In a surgical encounter, a significant portion of the total costs is associated with consumable supplies. Our hospital system has invested in a software program that can track the costs of consumable surgical supplies. We sought to determine the differences in cost of consumables between elective laparoscopic and robotic procedures for our health care organization. De-identified procedural cost and equipment utilization data were collected from the Surgical Profitability Compass Procedure Cost Manager System (The Advisory Board Company, Washington, DC) for our health care system for laparoscopic and robotic cholecystectomy, fundoplication, and inguinal hernia between the years 2013 and 2015. Outcomes were length of stay, case duration, and supply cost. Statistical analysis was performed using a t-test for continuous variables, and statistical significance was defined as p < 0.05. Supply costs were significantly higher for the robotic procedures. Length of stay did not differ for fundoplication or cholecystectomy. Length of stay was greater for robotic inguinal hernia repair. Case duration was similar for cholecystectomy (84.3 robotic and 75.5 min laparoscopic, p = 0.08), but significantly longer for robotic fundoplication (197.2 robotic and 162.1 min laparoscopic, p = 0.01) and inguinal hernia repair (124.0 robotic and 84.4 min laparoscopic, p < 0.01). We found a significantly increased cost of general surgery procedures for our health care system when cases commonly performed laparoscopically are instead performed robotically. Our analysis is limited by the fact that we only included costs associated with consumable surgical supplies. The initial acquisition cost (over $1 million for a robotic surgical system), depreciation, and service contract for the robotic and laparoscopic systems were not included in this analysis.
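
    A hedged illustration of the statistical step described above (two-sample comparison of a continuous outcome such as supply cost, with significance at p < 0.05): the sketch below uses SciPy's Welch t-test on invented cost arrays, not the study's data, and the Welch variant is an assumption where the abstract only says "t-test".

```python
# Hedged sketch: Welch's two-sample t-test comparing consumable supply costs
# for laparoscopic vs. robotic cases, in the spirit of the abstract above.
# The cost values below are hypothetical placeholders, not study data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
lap_costs = rng.normal(1500, 300, size=120)   # hypothetical laparoscopic supply costs ($)
rob_costs = rng.normal(2300, 400, size=110)   # hypothetical robotic supply costs ($)

t_stat, p_value = stats.ttest_ind(lap_costs, rob_costs, equal_var=False)
print(f"mean laparoscopic = {lap_costs.mean():.0f}, mean robotic = {rob_costs.mean():.0f}")
print(f"t = {t_stat:.2f}, p = {p_value:.4g}, significant at 0.05: {p_value < 0.05}")
```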

  17. A variational centroid density procedure for the calculation of transmission coefficients for asymmetric barriers at low temperature

    Messina, M.; Schenter, G.K.; Garrett, B.C.

    1995-01-01

    The low temperature behavior of the centroid density method of Voth, Chandler, and Miller (VCM) [J. Chem. Phys. 91, 7749 (1989)] is investigated for tunneling through a one-dimensional barrier. We find that the bottleneck for a quantum activated process as defined by VCM does not correspond to the classical bottleneck for the case of an asymmetric barrier. If the centroid density is constrained to be at the classical bottleneck for an asymmetric barrier, the centroid density method can give transmission coefficients that are too large by as much as five orders of magnitude. We follow a variational procedure, as suggested by VCM, whereby the best transmission coefficient is found by varying the position of the centroid until the minimum value for this transmission coefficient is obtained. This is a procedure that is readily generalizable to multidimensional systems. We present calculations on several test systems which show that this variational procedure greatly enhances the accuracy of the centroid density method compared to when the centroid is constrained to be at the barrier top. Furthermore, the relation of this procedure to the low temperature periodic orbit or ''instanton'' approach is discussed. copyright 1995 American Institute of Physics

  18. On the analysis of line profile variations: A statistical approach

    McCandliss, S.R.

    1988-01-01

    This study is concerned with the empirical characterization of the line profile variations (LPV) which occur in many Of and Wolf-Rayet stars. The goal of the analysis is to gain insight into the physical mechanisms producing the variations. The analytic approach uses a statistical method to quantify the significance of the LPV and to identify those regions in the line profile which are undergoing statistically significant variations. Line positions and flux variations are then measured and subjected to temporal and correlative analysis. Previous studies of LPV have for the most part been restricted to observations of a single line. Important information concerning the range and amplitude of the physical mechanisms involved can be obtained by simultaneously observing spectral features formed over a range of depths in the extended mass-losing atmospheres of massive, luminous stars. Time series of a Wolf-Rayet star and two Of stars with nearly complete spectral coverage from 3940 angstrom to 6610 angstrom and with a spectral resolution of R = 10,000 are analyzed here. These three stars exhibit a wide range of both spectral and temporal line profile variations. The HeII Pickering lines of HD 191765 show a monotonic increase in the peak rms variation amplitude with lines formed at progressively larger radii in the Wolf-Rayet star wind. Two time scales of variation have been identified in this star: a less than one day variation associated with small scale flickering in the peaks of the line profiles and a greater than one day variation associated with large scale asymmetric changes in the overall line profile shapes. However, no convincing periodic phenomena are evident at those periods which are well sampled in this time series.
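
    One plausible reading of the "statistical method to quantify the significance of the LPV" is a per-wavelength test of the temporal variance against the expected noise level. The sketch below follows that reading on simulated spectra with a hypothetical noise sigma; it is not the author's exact procedure.

```python
# Hedged sketch: flag wavelength bins whose temporal variance across a spectral
# time series significantly exceeds the photon-noise expectation (chi-square test).
# Spectra and noise level are simulated; one plausible reading of the abstract only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_spectra, n_wave = 40, 200
noise_sigma = 0.01                      # hypothetical per-pixel noise (fractional)

base = 1.0 - 0.3 * np.exp(-0.5 * ((np.arange(n_wave) - 100) / 15.0) ** 2)
spectra = base + noise_sigma * rng.standard_normal((n_spectra, n_wave))
spectra[:, 90:110] += 0.03 * np.sin(np.linspace(0, 6 * np.pi, n_spectra))[:, None]  # injected LPV

var_ratio = spectra.var(axis=0, ddof=1) / noise_sigma**2          # observed/expected variance
# Under pure noise, (n-1)*var_ratio follows a chi-square law with n-1 degrees of freedom.
p_vals = stats.chi2.sf((n_spectra - 1) * var_ratio, df=n_spectra - 1)
significant = p_vals < 0.01
print("statistically significant variation in bins:", np.flatnonzero(significant))
```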

  19. Flood risk analysis procedure for nuclear power plants

    Wagner, D.P.

    1982-01-01

    This paper describes a methodology and procedure for determining the impact of floods on nuclear power plant risk. The procedures are based on techniques of fault tree and event tree analysis and use the logic of these techniques to determine the effects of a flood on system failure probability and accident sequence occurrence frequency. The methodology can be applied independently or as an add-on analysis for an existing risk assessment. Each stage of the analysis yields useful results such as the critical flood level, failure flood level, and the flood's contribution to accident sequence occurrence frequency. The results of applications show the effects of floods on the risk from nuclear power plants analyzed in the Reactor Safety Study
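
    As a toy illustration of the fault-tree/event-tree logic described above, the sketch below lets basic-event probabilities switch to certain failure once the flood level exceeds a component's (invented) fragility threshold, propagates them through AND/OR gates, and sweeps the flood level to expose the critical and failure flood levels. The system structure and all numbers are invented, not taken from the paper.

```python
# Hedged sketch of flood-dependent fault-tree logic: component failure
# probabilities depend on flood level, propagate through simple AND/OR gates,
# and modify an accident-sequence frequency. All numbers are invented.
def flooded(prob_dry, fail_level, flood_level):
    """Basic-event probability: random failure when dry, certain failure when flooded."""
    return 1.0 if flood_level >= fail_level else prob_dry

def or_gate(*p):   # at least one input fails (independence assumed)
    q = 1.0
    for pi in p:
        q *= (1.0 - pi)
    return 1.0 - q

def and_gate(*p):  # all inputs fail (independence assumed)
    q = 1.0
    for pi in p:
        q *= pi
    return q

INITIATOR_FREQ = 1e-2   # hypothetical initiating-event frequency (per year)

def sequence_frequency(flood_level):
    pump_a = flooded(1e-3, fail_level=2.0, flood_level=flood_level)   # fails if flood >= 2.0 m
    pump_b = flooded(1e-3, fail_level=2.5, flood_level=flood_level)   # fails if flood >= 2.5 m
    dg     = flooded(5e-3, fail_level=1.5, flood_level=flood_level)   # diesel generator
    cooling_fails = or_gate(and_gate(pump_a, pump_b), dg)
    return INITIATOR_FREQ * cooling_fails

for level in [0.0, 1.0, 1.5, 2.0, 2.5, 3.0]:
    print(f"flood level {level:.1f} m -> sequence frequency {sequence_frequency(level):.2e}/yr")
```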

  20. Sample preparation procedure for PIXE elemental analysis on soft tissues

    Kubica, B.; Kwiatek, W.M.; Dutkiewicz, E.M.; Lekka, M.

    1997-01-01

    Trace element analysis is one of the most important fields in analytical chemistry. There are several instrumental techniques which are applied for the determination of microscopic elemental content. The PIXE (Proton Induced X-ray Emission) technique is one of the nuclear techniques commonly applied for this purpose due to its multielemental analysis capability. The aim of this study was to establish the optimal conditions for the target preparation procedure. In this paper two different approaches to the topic are presented and discussed in detail. The first approach was the traditional pellet technique and the second was a mineralization procedure. For the analysis, soft tissue such as liver was used. Some results are also presented on water samples. (author)

  1. Methodology for dimensional variation analysis of ITER integrated systems

    Fuentes, F. Javier; Trouvé, Vincent; Cordier, Jean-Jacques; Reich, Jens

    2016-01-01

    Highlights: • Tokamak dimensional management methodology, based on 3D variation analysis, is presented. • Dimensional Variation Model implementation workflow is described. • Methodology phases are described in detail. The application of this methodology to the tolerance analysis of ITER Vacuum Vessel is presented. • Dimensional studies are a valuable tool for the assessment of Tokamak PCR (Project Change Requests), DR (Deviation Requests) and NCR (Non-Conformance Reports). - Abstract: The ITER machine consists of a large number of complex systems highly integrated, with critical functional requirements and reduced design clearances to minimize the impact in cost and performances. Tolerances and assembly accuracies in critical areas could have a serious impact in the final performances, compromising the machine assembly and plasma operation. The management of tolerances allocated to part manufacture and assembly processes, as well as the control of potential deviations and early mitigation of non-compliances with the technical requirements, is a critical activity on the project life cycle. A 3D tolerance simulation analysis of ITER Tokamak machine has been developed based on 3DCS dedicated software. This integrated dimensional variation model is representative of Tokamak manufacturing functional tolerances and assembly processes, predicting accurate values for the amount of variation on critical areas. This paper describes the detailed methodology to implement and update the Tokamak Dimensional Variation Model. The model is managed at system level. The methodology phases are illustrated by its application to the Vacuum Vessel (VV), considering the status of maturity of VV dimensional variation model. The following topics are described in this paper: • Model description and constraints. • Model implementation workflow. • Management of input and output data. • Statistical analysis and risk assessment. The management of the integration studies based on

  2. Methodology for dimensional variation analysis of ITER integrated systems

    Fuentes, F. Javier, E-mail: FranciscoJavier.Fuentes@iter.org [ITER Organization, Route de Vinon-sur-Verdon—CS 90046, 13067 St Paul-lez-Durance (France); Trouvé, Vincent [Assystem Engineering & Operation Services, rue J-M Jacquard CS 60117, 84120 Pertuis (France); Cordier, Jean-Jacques; Reich, Jens [ITER Organization, Route de Vinon-sur-Verdon—CS 90046, 13067 St Paul-lez-Durance (France)

    2016-11-01

    Highlights: • Tokamak dimensional management methodology, based on 3D variation analysis, is presented. • Dimensional Variation Model implementation workflow is described. • Methodology phases are described in detail. The application of this methodology to the tolerance analysis of ITER Vacuum Vessel is presented. • Dimensional studies are a valuable tool for the assessment of Tokamak PCR (Project Change Requests), DR (Deviation Requests) and NCR (Non-Conformance Reports). - Abstract: The ITER machine consists of a large number of complex systems highly integrated, with critical functional requirements and reduced design clearances to minimize the impact in cost and performances. Tolerances and assembly accuracies in critical areas could have a serious impact in the final performances, compromising the machine assembly and plasma operation. The management of tolerances allocated to part manufacture and assembly processes, as well as the control of potential deviations and early mitigation of non-compliances with the technical requirements, is a critical activity on the project life cycle. A 3D tolerance simulation analysis of ITER Tokamak machine has been developed based on 3DCS dedicated software. This integrated dimensional variation model is representative of Tokamak manufacturing functional tolerances and assembly processes, predicting accurate values for the amount of variation on critical areas. This paper describes the detailed methodology to implement and update the Tokamak Dimensional Variation Model. The model is managed at system level. The methodology phases are illustrated by its application to the Vacuum Vessel (VV), considering the status of maturity of VV dimensional variation model. The following topics are described in this paper: • Model description and constraints. • Model implementation workflow. • Management of input and output data. • Statistical analysis and risk assessment. The management of the integration studies based on

  3. Comparative analysis of face recognition techniques with illumination variation

    Jondhale, K C; Waghmare, L M

    2010-01-01

    Illumination variation is one of the major challenges in face recognition. To deal with this problem, this paper presents a comparative analysis of three different techniques. First, the DCT is employed to compensate for illumination variations in the logarithm domain. Since illumination variation lies mainly in the low frequency band, an appropriate number of DCT coefficients are truncated to reduce the variations under different lighting conditions. The nearest neighbor classifier based on Euclidean distance is employed for classification. Second, the performance of PCA is checked on the normalized images. PCA is a technique used to reduce multidimensional data sets to a lower dimension for analysis. Third, LDA-based methods give satisfactory results under controlled lighting conditions, but their performance under large illumination variation is not satisfactory, so the performance of LDA is also checked on the normalized images. Experimental results on the Yale B and ORL databases show that the proposed approach of applying PCA and LDA to the normalized dataset improves the performance significantly for face images with large illumination variations.
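
    The first technique (suppressing low-frequency DCT coefficients of the log-image) can be sketched as follows. The masking rule used here (zeroing coefficients with u + v below a cutoff while keeping the DC term) and the cutoff value are illustrative assumptions, not necessarily the choices made in the paper.

```python
# Hedged sketch of DCT-based illumination normalization in the log domain:
# low-frequency DCT coefficients (except DC) are zeroed to suppress slowly
# varying illumination. Masking rule and cutoff are illustrative assumptions.
import numpy as np
from scipy.fft import dctn, idctn

def normalize_illumination(image, n_low=8):
    """image: 2-D grayscale array; n_low: cutoff on u+v defining 'low frequency'."""
    log_img = np.log1p(image.astype(float))
    coeffs = dctn(log_img, norm="ortho")
    u, v = np.indices(coeffs.shape)
    low_freq = (u + v) < n_low
    low_freq[0, 0] = False            # keep the DC term (mean log intensity)
    coeffs[low_freq] = 0.0
    return np.expm1(idctn(coeffs, norm="ortho"))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    face = rng.uniform(50, 200, size=(64, 64))          # stand-in for a face image
    ramp = np.linspace(0.3, 1.7, 64)[None, :]           # synthetic illumination gradient
    corrected = normalize_illumination(face * ramp)
    print("column-mean spread before:", float(np.ptp((face * ramp).mean(axis=0))))
    print("column-mean spread after: ", float(np.ptp(corrected.mean(axis=0))))
```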

  4. Ontogenic variation and effect of collection procedure on leaf biomechanical properties of Mediterranean seagrass

    de los Santos, C.B.; Vicencio-Rammsy, B.; Lepoint, G.; Remy, F.; Bouma, T.J.; Gobert, S.

    2016-01-01

    Leaf mechanical traits are important to understand how aquatic plants fracture and deform when subjected to abiotic (currents or waves) or biotic (herbivory attack) mechanical forces. The likely occurrence of variation during leaf ontogeny in these traits may thus have implications for hydrodynamic

  5. Computational modelling of flow and tip variations of aortic cannulae in cardiopulmonary bypass procedure

    Thomas, Siti A.; Empaling, Shirly; Darlis, Nofrizalidris; Osman, Kahar; Dillon, Jeswant; Taib, Ishkrizat; Khudzari, Ahmad Zahran Md

    2017-09-01

    Aortic cannulation has been the gold standard for maintaining cardiovascular function during open heart surgery while the patient is connected to the heart-lung machine. These cannulae produce a high velocity outflow which may have adverse effects on the patient's condition, especially a sandblasting effect on the aorta wall and blood cell damage. This paper reports a novel design that was able to decrease the high velocity outflow. Three design factors were investigated: the cannula type, the flow rate, and the cannula tip design, resulting in 12 variations. The cannula types used were the spiral flow inducing cannula and the standard cannula. The flow rates were varied from three to five litres per minute (lpm). The parameters evaluated for each cannula variation included the maximum velocity within the aorta, the pressure drop, the wall shear stress (WSS) area exceeding 15 Pa, and the impinging velocity on the aorta wall. Based on the results, spiral flow inducing cannulae are proposed as a better alternative due to their ability to reduce outflow velocity. Meanwhile, the pressure drop of all variations is less than the limit of 100 mmHg, although the standard cannulae yielded better results. All cannulae show low readings of wall shear stress, which decreases the possibility of atherogenesis formation. In conclusion, as far as velocity is concerned, spiral flow is better than standard flow across all cannula variations.

  6. Variational analysis of regular mappings theory and applications

    Ioffe, Alexander D

    2017-01-01

    This monograph offers the first systematic account of (metric) regularity theory in variational analysis. It presents new developments alongside classical results and demonstrates the power of the theory through applications to various problems in analysis and optimization theory. The origins of metric regularity theory can be traced back to a series of fundamental ideas and results of nonlinear functional analysis and global analysis centered around problems of existence and stability of solutions of nonlinear equations. In variational analysis, regularity theory goes far beyond the classical setting and is also concerned with non-differentiable and multi-valued operators. The present volume explores all basic aspects of the theory, from the most general problems for mappings between metric spaces to those connected with fairly concrete and important classes of operators acting in Banach and finite dimensional spaces. Written by a leading expert in the field, the book covers new and powerful techniques, whic...

  7. Some Effects of Procedural Variations on Choice Responding in Concurrent Chains

    Moore, J.

    2009-01-01

    The present research used pigeons in a three-key operant chamber and varied procedural features pertaining to both initial and terminal links of concurrent chains. The initial links randomly alternated on the side keys during a session, while the terminal links always appeared on the center key. Both equal and unequal initial-link schedules were…

  8. CARVEDILOL POPULATION PHARMACOKINETIC ANALYSIS – APPLIED VALIDATION PROCEDURE

    Aleksandra Catić-Đorđević

    2013-09-01

    Carvedilol is a nonselective beta blocker/alpha-1 blocker which is used for the treatment of essential hypertension, chronic stable angina, unstable angina and ischemic left ventricular dysfunction. The aim of this study was to describe carvedilol population pharmacokinetic (PK) analysis as well as the validation of the analytical procedure, which is an important step in this approach. In contemporary clinical practice, population PK analysis is often more important than the standard PK approach in setting up a mathematical model that describes the PK parameters. It also includes variables of particular importance for the drug's pharmacokinetics, such as sex, body mass, dosage, pharmaceutical form, pathophysiological state, disease associated with the organism, or the presence of a specific polymorphism in the isoenzyme important for biotransformation of the drug. One of the most frequently used approaches in population PK analysis is nonlinear mixed-effects modeling (NONMEM). The analytical method used in the data collection period is of great importance for the implementation of a population PK analysis of carvedilol in order to obtain reliable data that can be useful in clinical practice. High performance liquid chromatography (HPLC) analysis of carvedilol is used to confirm the identity of the drug, provide quantitative results, and monitor the efficacy of the therapy. Analytical procedures used in other studies could not be fully implemented in our research, so it was necessary to perform certain modifications and to validate the method with the aim of using the obtained results for a population pharmacokinetic analysis. The validation process is the logical terminal phase of analytical procedure development and ensures the applicability of the procedure itself. The goal of validation is to ensure the consistency of the method and the accuracy of results, and to confirm the selection of the analytical method for a given sample.

  9. The utilisation of thermal analysis to optimise radiocarbon dating procedures

    Brandova, D.; Keller, W.A.; Maciejewski, M.

    1999-01-01

    Thermal analysis combined with mass spectrometry was applied to radiocarbon dating procedures (age determination of carbon-containing samples). Experiments carried out under an oxygen atmosphere were used to determine carbon content and combustion range of soil and wood samples. Composition of the shell sample and its decomposition were investigated. The quantification of CO2 formed by the oxidation of carbon was done by the application of pulse thermal analysis. Experiments carried out under an inert atmosphere determined the combustion range of coal with CuO as an oxygen source. To eliminate a possible source of contamination in the radiocarbon dating procedures, the adsorption of CO2 by CuO was investigated. (author)

  10. Structural analysis and optimization procedure of the TFTR device substructure

    Driesen, G.

    1975-10-01

    A structural evaluation of the TFTR device substructure is performed in order to verify the feasibility of the proposed design concept as well as to establish a design optimization procedure for minimizing the material and fabrication cost of the substructure members. A preliminary evaluation of the seismic capability is also presented. The design concept on which the analysis is based is consistent with that described in the Conceptual Design Status Briefing report dated June 18, 1975

  11. Introduction and application of the multiscale coefficient of variation analysis.

    Abney, Drew H; Kello, Christopher T; Balasubramaniam, Ramesh

    2017-10-01

    Quantifying how patterns of behavior relate across multiple levels of measurement typically requires long time series for reliable parameter estimation. We describe a novel analysis that estimates patterns of variability across multiple scales of analysis suitable for time series of short duration. The multiscale coefficient of variation (MSCV) measures the distance between local coefficient of variation estimates within particular time windows and the overall coefficient of variation across all time samples. We first describe the MSCV analysis and provide an example analytical protocol with corresponding MATLAB implementation and code. Next, we present a simulation study testing the new analysis using time series generated by ARFIMA models that span white noise, short-term and long-term correlations. The MSCV analysis was observed to be sensitive to specific parameters of ARFIMA models varying in the type of temporal structure and time series length. We then apply the MSCV analysis to short time series of speech phrases and musical themes to show commonalities in multiscale structure. The simulation and application studies provide evidence that the MSCV analysis can discriminate between time series varying in multiscale structure and length.
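
    A minimal reading of the MSCV computation (local coefficients of variation in windows of a given size, compared against the overall CV, repeated over several window sizes) could look like the sketch below. The distance measure and non-overlapping windows are simplifying assumptions; the published MATLAB protocol may differ in detail.

```python
# Hedged sketch of a multiscale coefficient of variation (MSCV) computation:
# for each window size, compare local CV estimates against the global CV.
# Mean absolute difference and non-overlapping windows are simplifying assumptions.
import numpy as np

def cv(x):
    return np.std(x, ddof=1) / np.mean(x)

def mscv(series, window_sizes):
    series = np.asarray(series, dtype=float)
    global_cv = cv(series)
    out = {}
    for w in window_sizes:
        n_win = len(series) // w
        if n_win < 2:
            continue
        local_cvs = [cv(series[i * w:(i + 1) * w]) for i in range(n_win)]
        out[w] = float(np.mean(np.abs(np.array(local_cvs) - global_cv)))
    return global_cv, out

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    # hypothetical short series of inter-event intervals (e.g. phrase durations)
    intervals = rng.gamma(shape=4.0, scale=0.25, size=120)
    g, scales = mscv(intervals, window_sizes=[5, 10, 20, 40])
    print(f"overall CV = {g:.3f}")
    for w, d in scales.items():
        print(f"window size {w:3d}: MSCV distance = {d:.3f}")
```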

  12. Variational analysis and generalized differentiation I basic theory

    Mordukhovich, Boris S

    2006-01-01

    Contains a study of the basic concepts and principles of variational analysis and generalized differentiation in both finite-dimensional and infinite-dimensional spaces. This title presents many applications to problems in optimization, equilibria, stability and sensitivity, control theory, economics, mechanics, and more.

  13. A note on the implementation of variational procedures involving Green's operators

    Blakemore, M; Evans, G A; Hyslop, J

    1975-12-01

    The numerical implementation of a minimum energy principle for atoms and molecules involving integral operators is discussed. It is demonstrated that the use of unscaled trial functions may be as convenient computationally as the earlier procedures involving scaled trial functions and that the revised prescription has advantages, mainly in ease of interpretation and comparison of results. By way of illustration, some earlier calculations on the hydrogen ion are reconsidered and compared. (auth)

  14. Analysis of Elementary School students’ algebraic perceptions and procedures

    Sandra Mara Marasini

    2012-12-01

    Full Text Available This study aims to verify how students in elementary school see themselves in relation to mathematics and, at the same time, analyze the procedures used to solve algebraic tasks. These students in the 8th year of elementary school, and first and third years of high school, from two State schools in Passo Fundo/RS, answered a questionnaire about their own perceptions of the mathematics lessons, the subject mathematics and algebraic content. The analysis was based mainly on authors from the athematical education and the historic-cultural psychology areas. It was verifi ed that even among students who claimed to be happy with the idea of having mathematicsclasses several presented learning diffi culties regarding algebraic contents, revealed by the procedures employed. It was concluded that it is necessary to design proposals with didactic sequences, mathematically and pedagogically based, which can effi cientlyoptimize the appropriation of meaning from the concepts approached and their application in different situations.

  15. The analysis, by a fusion procedure and X-ray-fluorescence spectrometry, of silicates and slags

    Jacobs, J.J.; Balaes, A.M.E.

    1980-01-01

    A glass-disc fusion method is described for the analysis, by X-ray-fluorescence spectrometry, of slags and silicate materials. The data are corrected for detector dead time and short-term instrumental drift. Corrections are made for matrix variations by use of the Lachance-Traill mathematical model, and the results are processed on a mini-computer, an iterative procedure being used to solve the simultaneous equations. As the alpha-correction coefficients of the Lachance-Traill model are not truly constant, a modified version of the model is proposed.
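
    The Lachance-Traill correction mentioned above amounts to solving C_i = R_i(1 + Σ_j α_ij C_j) for the concentrations C_i from measured relative intensities R_i, typically by iteration. A plain fixed-point sketch with invented intensities and alpha coefficients (not values from this paper):

```python
# Hedged sketch of the iterative Lachance-Traill correction:
#   C_i = R_i * (1 + sum_j alpha_ij * C_j),  with alpha_ii = 0 here,
# solved by fixed-point iteration starting from C_i = R_i.
# The relative intensities R and alpha coefficients below are invented.
import numpy as np

def lachance_traill(R, alpha, tol=1e-8, max_iter=100):
    R = np.asarray(R, dtype=float)
    alpha = np.asarray(alpha, dtype=float)
    C = R.copy()
    for _ in range(max_iter):
        C_new = R * (1.0 + alpha @ C)
        if np.max(np.abs(C_new - C)) < tol:
            return C_new
        C = C_new
    return C

if __name__ == "__main__":
    R = [0.30, 0.25, 0.10]                       # measured relative intensities (invented)
    alpha = [[0.00, 0.12, -0.05],
             [0.08, 0.00,  0.10],
             [-0.03, 0.15, 0.00]]
    C = lachance_traill(R, alpha)
    print("corrected concentrations (mass fractions):", np.round(C, 4))
```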

  16. Elemental hair analysis: A review of procedures and applications

    Pozebon, D.; Scheffler, G.L.; Dressler, V.L.

    2017-01-01

    Although exogenous contamination and unreliable reference values have limited the utility of scalp hair as a biomarker of chemical elements exposure, its use in toxicological, clinical, environmental and forensic investigations is growing and becoming more extensive. Therefore, hair elemental analysis is reviewed in the current manuscript which spans articles published in the last 10 years. It starts with a general discussion of history, morphology and possible techniques for elemental analysis, where inductively coupled plasma-mass spectrometry (ICP-MS) is clearly highlighted since this technique is leading quantitative ultra-trace elemental analysis. Emphasis over sampling, quality assurance, washing procedures and sample decomposition is given with detailed protocols compiled in tables as well as the utility of hair to identify human gender, age, diseases, healthy conditions, nutrition status and contamination sites. Isotope ratio information, chemical speciation analysis and analyte preconcentration are also considered for hair. Finally, the potential of laser ablation ICP-MS (LA-ICP-MS) to provide spatial resolution and time-track the monitoring of elements in hair strands instead of conventional bulk analysis is spotlighted as a real future trend in the field. - Highlights: • Elemental analysis of hair is critically reviewed, with focus on ICP-MS employment. • Standards protocols of hair washing and sample decomposition are compiled. • The usefulness of elemental and/or isotopic analysis of hair is demonstrated. • The potential of LA-ICP-MS for elemental time tracking in hair is highlighted.

  17. A simplified procedure of linear regression in a preliminary analysis

    Silvia Facchinetti

    2013-05-01

    The analysis of a large statistical data-set can be led by the study of a particularly interesting variable Y – regressed – and an explicative variable X, chosen among the remaining variables, conjointly observed. The study gives a simplified procedure to obtain the functional link of the variables y = y(x) by a partition of the data-set into m subsets, in which the observations are synthesized by location indices (mean or median) of X and Y. Polynomial models for y(x) of order r are considered to verify the characteristics of the given procedure; in particular we assume r = 1 and 2. The distributions of the parameter estimators are obtained by simulation, when the fitting is done for m = r + 1. Comparisons of the results, in terms of distribution and efficiency, are made with the results obtained by the ordinary least squares method. The study also gives some considerations on the consistency of the estimated parameters obtained by the given procedure.
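
    The procedure described (split the data into m = r + 1 groups, summarize each group by a location index of X and Y, and pass a degree-r polynomial through the m summary points) can be sketched as below. Grouping by the order of X and the simulated data are assumptions where the abstract leaves details open.

```python
# Hedged sketch of the simplified procedure: sort by X, split into m = r + 1
# groups, summarize each group by a location index (mean or median), and fit a
# degree-r polynomial through the m summary points. Data are simulated.
import numpy as np

def grouped_polynomial_fit(x, y, r=1, statistic=np.mean):
    m = r + 1
    order = np.argsort(x)
    x_groups = np.array_split(np.asarray(x)[order], m)
    y_groups = np.array_split(np.asarray(y)[order], m)
    xs = np.array([statistic(g) for g in x_groups])
    ys = np.array([statistic(g) for g in y_groups])
    return np.polyfit(xs, ys, deg=r)        # exactly determined: m points, degree r

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    x = rng.uniform(0, 10, 500)
    y = 2.0 + 1.5 * x + rng.normal(0, 1.0, 500)          # simulated linear relation
    print("grouped-means fit     :", grouped_polynomial_fit(x, y, r=1))
    print("ordinary least squares:", np.polyfit(x, y, deg=1))
```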

  18. Reduction procedures for accurate analysis of MSX surveillance experiment data

    Gaposchkin, E. Mike; Lane, Mark T.; Abbot, Rick I.

    1994-01-01

    Technical challenges of the Midcourse Space Experiment (MSX) science instruments require careful characterization and calibration of these sensors for analysis of surveillance experiment data. Procedures for reduction of Resident Space Object (RSO) detections will be presented which include refinement and calibration of the metric and radiometric (and photometric) data and calculation of a precise MSX ephemeris. Examples will be given which support the reduction, and these are taken from ground-test data similar in characteristics to the MSX sensors and from the IRAS satellite RSO detections. Examples to demonstrate the calculation of a precise ephemeris will be provided from satellites in similar orbits which are equipped with S-band transponders.

  19. Automated procedure for performing computer security risk analysis

    Smith, S.T.; Lim, J.J.

    1984-05-01

    Computers, the invisible backbone of nuclear safeguards, monitor and control plant operations and support many materials accounting systems. Our automated procedure to assess computer security effectiveness differs from traditional risk analysis methods. The system is modeled as an interactive questionnaire, fully automated on a portable microcomputer. A set of modular event trees links the questionnaire to the risk assessment. Qualitative scores are obtained for target vulnerability, and qualitative impact measures are evaluated for a spectrum of threat-target pairs. These are then combined by a linguistic algebra to provide an accurate and meaningful risk measure. 12 references, 7 figures

  20. Review and Application of Ship Collision and Grounding Analysis Procedures

    Pedersen, Preben Terndrup

    2010-01-01

    It is the purpose of the paper to present a review of prediction and analysis tools for collision and grounding analyses and to outline a probabilistic procedure for which these tools can be used by the maritime industry to develop performance based rules to reduce the risk associated with human, environmental and economic costs of collision and grounding events. The main goal of collision and grounding research should be to identify the most economic risk control options associated with prevention and mitigation of collision and grounding events.

  1. User's operating procedures. Volume 2: Scout project financial analysis program

    Harris, C. G.; Haris, D. K.

    1985-01-01

    A review is presented of the user's operating procedures for the Scout Project Automatic Data system, called SPADS. SPADS is the result of the past seven years of software development on a Prime mini-computer located at the Scout Project Office, NASA Langley Research Center, Hampton, Virginia. SPADS was developed as a single entry, multiple cross-reference data management and information retrieval system for the automation of Project office tasks, including engineering, financial, managerial, and clerical support. This volume, two (2) of three (3), provides the instructions to operate the Scout Project Financial Analysis program in data retrieval and file maintenance via the user friendly menu drivers.

  2. Some new mathematical methods for variational objective analysis

    Wahba, Grace; Johnson, Donald R.

    1994-01-01

    Numerous results were obtained relevant to remote sensing, variational objective analysis, and data assimilation. A list of publications relevant in whole or in part is attached. The principal investigator gave many invited lectures, disseminating the results to the meteorological community as well as the statistical community. A list of invited lectures at meetings is attached, as well as a list of departmental colloquia at various universities and institutes.

  3. Optimized variational analysis scheme of single Doppler radar wind data

    Sasaki, Yoshi K.; Allen, Steve; Mizuno, Koki; Whitehead, Victor; Wilk, Kenneth E.

    1989-01-01

    A computer scheme for extracting singularities has been developed and applied to single Doppler radar wind data. The scheme is planned for use in real-time wind and singularity analysis and forecasting. The method, known as Doppler Operational Variational Extraction of Singularities is outlined, focusing on the principle of local symmetry. Results are presented from the application of the scheme to a storm-generated gust front in Oklahoma on May 28, 1987.

  4. Variation in markup of general surgical procedures by hospital market concentration.

    Cerullo, Marcelo; Chen, Sophia Y; Dillhoff, Mary; Schmidt, Carl R; Canner, Joseph K; Pawlik, Timothy M

    2018-04-01

    Increasing hospital market concentration (with concomitantly decreasing hospital market competition) may be associated with rising hospital prices. Hospital markup - the relative increase in price over costs - has been associated with greater hospital market concentration. Patients undergoing a cardiothoracic or gastrointestinal procedure in the 2008-2011 Nationwide Inpatient Sample (NIS) were identified and linked to Hospital Market Structure Files. The association between market concentration, hospital markup and hospital for-profit status was assessed using mixed-effects log-linear models. A weighted total of 1,181,936 patients were identified. In highly concentrated markets, private for-profit status was associated with an 80.8% higher markup compared to public/private not-for-profit status (95%CI: +69.5% - +96.9%; p markup compared to public/private not-for-profit status in unconcentrated markets (95%CI: +45.4% - +81.1%; p markup. Government and private not-for-profit hospitals employed lower markups in more concentrated markets, whereas private for-profit hospitals employed higher markups in more concentrated markets. Copyright © 2017 Elsevier Inc. All rights reserved.

  5. Green mathematics: Benefits of including biological variation in your data analysis

    Tijskens, L.M.M.; Schouten, R.E.; Unuk, T.; Simcic, M.

    2015-01-01

    Biological variation is omnipresent in nature. It contains useful information that is neglected by the usually applied statistical procedures. To extract this information special procedures have to be applied. Biological variation is seen in properties (e.g. size, colour, firmness), but the

  6. Shrunken head (tsantsa): a complete forensic analysis procedure.

    Charlier, P; Huynh-Charlier, I; Brun, L; Hervé, C; de la Grandmaison, G Lorin

    2012-10-10

    Based on the analysis of shrunken heads referred to our forensic laboratory for anthropological expertise, and data from both anthropological and medical literature, we propose a complete forensic procedure for the analysis of such pieces. A list of 14 original morphological criteria has been developed, based on the global aspect, color, physical deformation, anatomical details, and eventual associated material (wood, vegetal fibers, sand, charcoals, etc.). Such criteria have been tested on a control sample of 20 tsantsa (i.e. shrunken heads from the Jivaro or Shuar tribes of South America). Further complementary analyses are described such as CT-scan and microscopic examination. Such expertise is more and more asked to forensic anthropologists and practitioners in a context of global repatriation of human artifacts to native communities. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  7. Pertinent anatomy and analysis for midface volumizing procedures.

    Surek, Christopher C; Beut, Javier; Stephens, Robert; Jelks, Glenn; Lamb, Jerome

    2015-05-01

    The study was conducted to construct an anatomically inspired midfacial analysis facilitating safe, accurate, and dynamic nonsurgical rejuvenation. Emphasis is placed on determining injection target areas and adverse event zones. Twelve hemifacial fresh cadavers were dissected in a layered fashion. Dimensional measurements between the midfacial fat compartments, prezygomatic space, mimetic muscles, and neurovascular bundles were used to develop a topographic analysis for clinical injections. A longitudinal line from the base of the alar crease to the medial edge of the levator anguli oris muscle (1.9 cm), lateral edge of the levator anguli oris muscle (2.6 cm), and zygomaticus major muscle (4.6 cm) partitions the cheek into two aesthetic regions. A six-step facial analysis outlines three target zones and two adverse event zones and triangulates the point of maximum cheek projection. The lower adverse event zone yields an anatomical explanation to inadvertent jowling during anterior cheek injection. The upper adverse event zone localizes the palpebral branch of the infraorbital artery. The medial malar target area isolates quadrants for anterior cheek projection and tear trough effacement. The middle malar target area addresses lid-cheek blending and superficial compartment turgor. The lateral malar target area highlights lateral cheek projection and locates the prezygomatic space. This stepwise analysis illustrates target areas and adverse event zones to achieve midfacial support, contour, and profile in the repose position and simultaneous molding of a natural shape during animation. This reproducible method can be used both procedurally and in record-keeping for midface volumizing procedures.

  8. Comparative analysis of diagnostic accuracy of different brain biopsy procedures.

    Jain, Deepali; Sharma, Mehar Chand; Sarkar, Chitra; Gupta, Deepak; Singh, Manmohan; Mahapatra, A K

    2006-12-01

    Image-guided procedures such as computed tomography (CT) guided, neuronavigator-guided and ultrasound-guided methods can assist neurosurgeons in localizing intraparenchymal lesions of the brain. However, despite improvements in imaging techniques, an accurate diagnosis of an intrinsic lesion requires tissue sampling and histological verification. The present study was carried out to examine the reliability of the diagnoses made on tumor samples obtained via different stereotactic and ultrasound-guided brain biopsy procedures. A retrospective analysis was conducted of all brain biopsies (frame-based and frameless stereotactic and ultrasound-guided) performed in a single tertiary care neurosciences center between 1995 and 2005. The overall diagnostic accuracy achieved on histopathology and its correlation with the type of biopsy technique were evaluated. A total of 130 cases were included, consisting of 82 males and 48 females. Age ranged from 4 to 75 years (mean age 39.5 years). Twenty per cent (27 patients) were in the pediatric age group, while 12% (16 patients) were 60 years of age or older. A definitive histological diagnosis was established in 109 cases (diagnostic yield 80.2%), which encompassed 101 neoplastic and eight nonneoplastic lesions. Frame-based stereotactic, frameless stereotactic and ultrasound-guided biopsies were done in 95, 15 and 20 patients, respectively. Although the numbers of cases were small, there was a trend toward better yield with frameless image-guided stereotactic biopsy, which gave the maximum diagnostic yield, i.e., 87% (13/15), in comparison to conventional frame-based CT-guided stereotactic biopsy and ultrasound-guided biopsy. Overall, a trend of higher diagnostic yield was seen in cases with frameless image-guided stereotactic biopsy. Thus, this small series confirms that frameless neuronavigator-guided stereotactic procedures represent the lesion sufficiently to allow a histopathologic diagnosis.

  9. MO-F-304-01: Applying PDSA to a Practical Clinical Problem: Variation in Procedural Doses Among Interventional Radiologists

    Jones, A. [UT MD Anderson Cancer Center (United States)

    2015-06-15

    With the current Maintenance of Certification (MOC) requirement for a practice quality improvement (PQI) project every 3 years, we all (well, most of us) find ourselves searching for projects that are both manageable and likely to have a positive impact on our clinical practice. In this session we will walk through the use of the Plan, Do, Study, Act (PDSA) cycle to address a common finding in practices where fluoroscopically guided interventions (FGI) are performed: variation in procedural doses among physicians. We will also examine strategies to secure physician support using carrots, not sticks. Learning Objectives: Review the PDSA cycle and its application to PQI. Discuss strategies for action in the example presented. Examine strategies for successful group PQI projects. A. Kyle Jones: Owner, FluoroSafety Joseph R. Steele: Consultant, FluoroSafety.

  10. MO-F-304-01: Applying PDSA to a Practical Clinical Problem: Variation in Procedural Doses Among Interventional Radiologists

    Jones, A.

    2015-01-01

    With the current Maintenance of Certification (MOC) requirement for a practice quality improvement (PQI) project every 3 years, we all (well, most of us) find ourselves searching for projects that are both manageable and likely to have a positive impact on our clinical practice. In this session we will walk through the use of the Plan, Do, Study, Act (PDSA) cycle to address a common finding in practices where fluoroscopically guided interventions (FGI) are performed: variation in procedural doses among physicians. We will also examine strategies to secure physician support using carrots, not sticks. Learning Objectives: Review the PDSA cycle and its application to PQI. Discuss strategies for action in the example presented. Examine strategies for successful group PQI projects. A. Kyle Jones: Owner, FluoroSafety Joseph R. Steele: Consultant, FluoroSafety

  11. Three dimensional analysis of cosmic ray intensity variation

    Yasue, Shin-ichi; Mori, Satoru; Nagashima, Kazuo.

    1974-01-01

    A three dimensional analysis of cosmic ray anisotropy and its time variation was performed. This paper describes the analysis of the Forbush decrease of January 1968, investigated by comparing the direction of the magnetic field in interplanetary space with the direction of the reference axis for cosmic ray anisotropy. A new anisotropy becomes dominant at the time of a Forbush decrease because the calm-state anisotropy of cosmic rays is wiped out. Such anisotropy produces intensity variations in neutron monitors on the ground. The characteristic parameters of the three dimensional anisotropy can be determined from the theoretical values and the observed intensity. The analyzed data were taken over 6 days, from Jan. 25 to Jan. 30, 1968, at Deep River. A decrease of intensity at Deep River was seen for several hours from 11 o'clock (UT), Jan. 26, just before the Forbush decrease; this may be due to the loss cone. The Forbush decrease began at 19 o'clock, Jan. 26, and the main phase continued to 5 o'clock the next morning. The spectrum of variation was P^(-0.5). The time variations of the magnetic field in interplanetary space and of the reference axis of cosmic ray anisotropy are shown for 15 hours. The average directions of both are almost coincident. The spatial distribution of cosmic rays near the earth may be expressed by the superposition of an axially symmetric distribution along a reference axis and its push-out in the direction of 12 o'clock. It is considered that the direction of the magnetic force line and the velocity of the solar wind correspond to the direction of the reference axis and the magnitude of the anisotropy in the 12 o'clock direction, respectively. (Kato, T.)

  12. National Variation in Urethroplasty Cost and Predictors of Extreme Cost: A Cost Analysis With Policy Implications.

    Harris, Catherine R; Osterberg, E Charles; Sanford, Thomas; Alwaal, Amjad; Gaither, Thomas W; McAninch, Jack W; McCulloch, Charles E; Breyer, Benjamin N

    2016-08-01

    To determine which factors are associated with higher costs of urethroplasty procedure and whether these factors have been increasing over time. Identification of determinants of extreme costs may help reduce cost while maintaining quality. We conducted a retrospective analysis using the 2001-2010 Healthcare Cost and Utilization Project-Nationwide Inpatient Sample (HCUP-NIS). The HCUP-NIS captures hospital charges which we converted to cost using the HCUP cost-to-charge ratio. Log cost linear regression with sensitivity analysis was used to determine variables associated with increased costs. Extreme cost was defined as the top 20th percentile of expenditure, analyzed with logistic regression, and expressed as odds ratios (OR). A total of 2298 urethroplasties were recorded in NIS over the study period. The median (interquartile range) calculated cost was $7321 ($5677-$10,000). Patients with multiple comorbid conditions were associated with extreme costs [OR 1.56, 95% confidence interval (CI) 1.19-2.04, P = .02] compared with patients with no comorbid disease. Inpatient complications raised the odds of extreme costs (OR 3.2, CI 2.14-4.75, P costs (OR 1.78, 95% CI 1.2-2.64, P = .005). Variations in patient age, race, hospital region, bed size, teaching status, payor type, and volume of urethroplasty cases were not associated with extremes of cost. Cost variation for perioperative inpatient urethroplasty procedures is dependent on preoperative patient comorbidities, postoperative complications, and surgical complexity related to graft usage. Procedural cost and cost variation are critical for understanding which aspects of care have the greatest impact on cost. Copyright © 2016 Elsevier Inc. All rights reserved.
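
    The "extreme cost" analysis described above (top 20th percentile of expenditure as a binary outcome, logistic regression reported as odds ratios) can be sketched with statsmodels; the predictors and simulated data below are hypothetical stand-ins, not HCUP-NIS variables.

```python
# Hedged sketch of the extreme-cost analysis: flag the top 20% of costs,
# fit a logistic regression, and report odds ratios with 95% CIs.
# The simulated predictors are hypothetical stand-ins for the HCUP-NIS variables.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 2000
df = pd.DataFrame({
    "comorbidities": rng.poisson(1.0, n),          # count of comorbid conditions
    "complication": rng.binomial(1, 0.15, n),      # inpatient complication flag
    "graft": rng.binomial(1, 0.4, n),              # graft used during repair
})
log_cost = (8.9 + 0.10 * df["comorbidities"] + 0.45 * df["complication"]
            + 0.20 * df["graft"] + rng.normal(0, 0.35, n))
df["cost"] = np.exp(log_cost)
df["extreme"] = (df["cost"] >= df["cost"].quantile(0.80)).astype(int)

X = sm.add_constant(df[["comorbidities", "complication", "graft"]])
fit = sm.Logit(df["extreme"], X).fit(disp=0)
odds_ratios = np.exp(pd.concat([fit.params, fit.conf_int()], axis=1))
odds_ratios.columns = ["OR", "2.5%", "97.5%"]
print(odds_ratios.round(2))
```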

  13. Manual of Standard Operating Procedures for Veterinary Drug Residue Analysis

    2016-01-01

    Laboratories are crucial to national veterinary drug residue monitoring programmes. However, one of the main challenges laboratories encounter is obtaining access to relevant methods of analysis. Thus, in addition to training, providing technical advice and transferring technology, the Joint FAO/IAEA Division of Nuclear Techniques in Food and Agriculture has resolved to develop clear and practical manuals to support Member State laboratories. The Coordinated Research Project (CRP) on Development of Radiometric and Allied Analytical Methods to Strengthen Residue Control Programs for Antibiotic and Anthelmintic Veterinary Drug Residues has developed a number of analytical methods as standard operating procedures (SOPs), which are now compiled here. This publication contains SOPs on chromatographic and spectrometric techniques, as well as radioimmunoassay and associated screening techniques, for various anthelmintic and antimicrobial veterinary drug residue analysis. Some analytical method validation protocols are also included. The publication is primarily aimed at food and environmental safety laboratories involved in testing veterinary drug residues, including under organized national residue monitoring programmes. It is expected to enhance laboratory capacity building and competence through the use of radiometric and complementary tools and techniques. The publication is also relevant for applied research on residues of veterinary drugs in food and environmental samples

  14. Seasonal Variations in the Risk of Reoperation for Surgical Site Infection Following Elective Spinal Fusion Surgery: A Retrospective Study Using the Japanese Diagnosis Procedure Combination Database.

    Ohya, Junichi; Chikuda, Hirotaka; Oichi, Takeshi; Kato, So; Matsui, Hiroki; Horiguchi, Hiromasa; Tanaka, Sakae; Yasunaga, Hideo

    2017-07-15

    A retrospective study of data abstracted from the Diagnosis Procedure Combination (DPC) database, a national representative database in Japan. The aim of this study was to examine seasonal variations in the risk of reoperation for surgical site infection (SSI) following spinal fusion surgery. Although higher rates of infection in the summer than in other seasons were thought to be caused by increasing inexperience of new staff, high temperature, and high humidity, no studies have examined seasonal variations in the risk of SSI following spinal fusion surgery in the country where medical staff rotation timing is not in summer season. In Japan, medical staff rotation starts in April. We retrospectively extracted the data of patients who were admitted between July 2010 and March 2013 from the DPC database. Patients were included if they were aged 20 years or older and underwent elective spinal fusion surgery. The primary outcome was reoperation for SSI during hospitalization. We performed multivariate analysis to clarify the risk factors of primary outcome with adjustment for patient background characteristics. We identified 47,252 eligible patients (23,659 male, 23,593 female). The mean age of the patients was 65.4 years (range, 20-101 yrs). Overall, reoperation for SSI occurred in 0.93% of the patients during hospitalization. The risk of reoperation for SSI was significantly higher in April (vs. February; odds ratio, 1.93; 95% confidence interval, 1.09-3.43, P = 0.03) as well as other known risk factors. In subgroup analysis with stratification for type of hospital, month of surgery was identified as an independent risk factor of reoperation for SSI among cases in an academic hospital, although there was no seasonal variation among those in a nonacademic hospital. This study showed that month of surgery is a risk factor of reoperation for SSI following elective spinal fusion surgery, nevertheless, in the country where medical staff rotation timing is not in

  15. Analysis of Δ14C variations in atmosphere

    Simon, J.; Sivo, A.; Richtarikova, M.; Holy, K.; Polaskova, A.; Bulko, M.; Hola, O.

    2005-01-01

    The Δ14C in the atmosphere has been measured and studied at two localities in Slovakia. The analysis confirmed the existence of annual variations of Δ14C with an attenuating amplitude and a decreasing mean value. It is logical and physically correct to describe the Δ14C time dependence by the equation y = A·e^(-at) + B·e^(-bt)·cos(ω1t + φ). The coefficients A, a, B, b and φ are listed in the table for both localities. The observed variations of Δ14C have a maximum in summer and a minimum in winter. This is probably caused by the higher demand for heat supply in the winter season, which is directly connected with fossil CO2 emissions and a more intensive Suess effect. The summer maximum could be explained by the combination of a lower CO2 emission rate and stronger turbulent transport of stratospheric 14C into the troposphere. Using Fourier harmonic analysis, the amplitude spectra of the average annual variations were plotted. The result shows that the variations have a high degree of symmetry. Furthermore, the obtained basic frequency ω1 = 2π/12 [month^-1] proves that cyclic processes with a period of T = 12 [months] have a major influence on the 14C amount in the troposphere. The presence of some higher-order harmonics is significant, but their physical interpretation is not yet clear. In addition to the main frequency, 2ω1 and 3ω1 are present in the Bratislava data-set and 4ω1 in the Zlkovce data-set. The long-term average of Δ14C in Zlkovce during the years 1995-2004 is higher by about 6.6‰ than in Bratislava. This represents unique evidence that local CO2 pollution affects the 14C activity. A correlation at the level R^2 = 0.43 was found between the Bratislava and Zlkovce atmospheric Δ14C data. (authors)
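
    The quoted model y = A·e^(-at) + B·e^(-bt)·cos(ω1t + φ), with ω1 = 2π/12 per month, can be fitted to a monthly Δ14C series by nonlinear least squares. The sketch below does this on simulated data with rough starting values; it is not the Bratislava/Zlkovce series and the "true" parameters are invented.

```python
# Hedged sketch: fit y(t) = A*exp(-a*t) + B*exp(-b*t)*cos(omega1*t + phi),
# omega1 = 2*pi/12 per month, to a monthly Delta-14C series by nonlinear least
# squares. The data and parameter values are simulated placeholders.
import numpy as np
from scipy.optimize import curve_fit

OMEGA1 = 2.0 * np.pi / 12.0   # [1/month], annual cycle

def model(t, A, a, B, b, phi):
    return A * np.exp(-a * t) + B * np.exp(-b * t) * np.cos(OMEGA1 * t + phi)

rng = np.random.default_rng(5)
t = np.arange(0, 120)                               # 10 years of monthly samples
true = model(t, 120.0, 0.004, 12.0, 0.002, 0.8)     # invented "true" parameters (per mille)
y = true + rng.normal(0, 2.0, t.size)               # measurement noise

p0 = [y[0], 0.01, (y.max() - y.min()) / 2, 0.01, 0.0]   # rough starting guesses
popt, pcov = curve_fit(model, t, y, p0=p0)
perr = np.sqrt(np.diag(pcov))
for name, val, err in zip("A a B b phi".split(), popt, perr):
    print(f"{name:3s} = {val:8.4f} +/- {err:.4f}")
```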

  16. Seasonal variation of the impact of a stressful procedure on open field behaviour and blood corticosterone in laboratory mice.

    Meyer, L; Caston, J; Mensah-Nyagan, A G

    2006-02-28

    Behavioural and hormonal seasonal changes are well documented in various vertebrate species living in their natural environment but circannual variations that may occur in laboratory animals reared in standard conditions are poorly investigated. This study shows that, in laboratory mice, the effects of stress on behavioural inhibition, investigatory behaviour and blood concentration of corticosterone are seasonally dependent. No consistency was observed between the reactivity of biological structures controlling the hormonal response to stress and the behavioural activities investigated at every period of the year. During the spring time, stress, which elicited a decrease of investigatory behaviour (estimated by the walking time in an open field), increased behavioural inhibition (estimated by the percentage of walking in the central area of the open field) as well as the blood corticosterone concentration in laboratory mice. In autumn, stress had no significant effect on behaviour despite the great hormonal concentration increase. The results reveal that, at certain period of the year, a stressful procedure is unable to affect behavioural parameters in laboratory mice which were maintained in constant 12-h dark/12-h light cycle. The report constitutes a novel piece of information suggesting a potential role of the endogenous biological clock in the modulation of stress response in mammals.

  17. Novel variational approach for analysis of photonic crystal slabs

    Aram, Mohammad Hasan; Khorasani, Sina

    2015-01-01

    We propose a new method, based on variational principle, for the analysis of photonic crystal (PC) slabs. Most of the methods used today treat PC slabs as three-dimensional (3D) crystal, and this makes these methods very time and/or memory consuming. In our proposed method, we use the Bloch theorem to expand the field on infinite plane waves, whose amplitudes depend on the component perpendicular to the slab surface. By approximating these amplitudes with appropriate functions, we can find modes of PC slabs almost as fast as we can find modes of two-dimensional crystals. In addition to this advantage, we can also calculate radiation modes with this method, which is not feasible with the 3D plane wave expansion method. (paper)

  18. Batch variation between branchial cell cultures: An analysis of variance

    Hansen, Heinz Johs. Max; Grosell, M.; Kristensen, L.

    2003-01-01

    We present in detail how a statistical analysis of variance (ANOVA) is used to sort out the effect of an unexpected batch-to-batch variation between cell cultures. Two separate cultures of rainbow trout branchial cells were grown on permeable filter supports ("inserts"). They were supposed...... and introducing the observed difference between batches as one of the factors in an expanded three-dimensional ANOVA, we were able to overcome an otherwise crucial lack of sufficiently reproducible duplicate values. We could thereby show that the effect of changing the apical medium was much more marked when...... the radioactive lipid precursors were added on the apical, rather than on the basolateral, side. The insert cell cultures were obviously polarized. We argue that it is not reasonable to reject troublesome experimental results, when we do not know a priori that something went wrong. The ANOVA is a very useful...
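
    A three-factor ANOVA with culture batch included as one of the factors, in the spirit of the analysis described above, can be sketched with statsmodels. The factor names and simulated incorporation values are placeholders, not the original measurements.

```python
# Hedged sketch of a three-factor ANOVA with culture batch included as a factor,
# in the spirit of the analysis described above. Factor names and simulated
# incorporation values are placeholders, not the original measurements.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(6)
levels = [(batch, side, medium)
          for batch in ("batch1", "batch2")
          for side in ("apical", "basolateral")
          for medium in ("control", "changed")]
rows = []
for batch, side, medium in levels:
    for _ in range(4):                                   # 4 replicate inserts per cell
        effect = (0.5 * (batch == "batch2")
                  + 0.8 * (side == "apical") * (medium == "changed"))
        rows.append({"batch": batch, "side": side, "medium": medium,
                     "incorporation": 10.0 + effect + rng.normal(0, 0.3)})
df = pd.DataFrame(rows)

fit = smf.ols("incorporation ~ C(batch) * C(side) * C(medium)", data=df).fit()
print(anova_lm(fit, typ=2))
```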

  19. Sequence length variation, indel costs, and congruence in sensitivity analysis

    Aagesen, Lone; Petersen, Gitte; Seberg, Ole

    2005-01-01

    The behavior of two topological and four character-based congruence measures was explored using different indel treatments in three empirical data sets, each with different alignment difficulties. The analyses were done using direct optimization within a sensitivity analysis framework in which...... the cost of indels was varied. Indels were treated either as a fifth character state, or strings of contiguous gaps were considered single events by using linear affine gap cost. Congruence consistently improved when indels were treated as single events, but no congruence measure appeared as the obviously...... preferable one. However, when combining enough data, all congruence measures clearly tended to select the same alignment cost set as the optimal one. Disagreement among congruence measures was mostly caused by a dominant fragment or a data partition that included all or most of the length variation...

  20. Radiation and environmental data analysis computer (REDAC) hardware, software and analysis procedures

    Hendricks, T.J.

    1985-01-01

    The REDAC was conceived originally as a tape verifier for the Radiation and Environmental Data Acquisition Recorder (REDAR). From that simple beginning in 1971, the REDAC has evolved into a family of systems used for complete analysis of data obtained by the REDAR and other acquisition systems. Portable or mobile REDACs are deployed to support checkout and analysis tasks in the field. Laboratory systems are additionally used for software development, physics investigations, data base management and graphics. System configurations range from man-portable systems to a large laboratory-based system which supports time-shared analysis and development tasks. Custom operating software allows the analyst to process data either interactively or by batch procedures. Analysis packages are provided for numerous necessary functions. All these analysis procedures can be performed even on the smallest man-portable REDAC. Examples of the multi-isotope stripping and radiation isopleth mapping are presented. Techniques utilized for these operations are also presented

  1. A finite volume procedure for fluid flow, heat transfer and solid-body stress analysis

    Jagad, P. I.; Puranik, B. P.; Date, A. W.

    2018-01-01

    A unified cell-centered unstructured mesh finite volume procedure is presented for fluid flow, heat transfer and solid-body stress analysis. An in-house procedure (A. W. Date, Solution of Transport Equations on Unstructured Meshes with Cell

  2. Procedural-support music therapy in the healthcare setting: a cost-effectiveness analysis.

    DeLoach Walworth, Darcy

    2005-08-01

    This comparative analysis examined the cost-effectiveness of music therapy as a procedural support in the pediatric healthcare setting. Many healthcare organizations are actively attempting to reduce the amount of sedation for pediatric patients undergoing various procedures. Patients receiving music therapy-assisted computerized tomography scans (n = 57), echocardiograms (n = 92), and other procedures (n = 17) were included in the analysis. Results of music therapy-assisted procedures indicate successful elimination of patient sedation, reduction in procedural times, and decrease in the number of staff members present for procedures. Implications for nurses and music therapists in the healthcare setting are discussed.

  3. Analysis of decision procedures for a sequence of inventory periods

    Avenhaus, R.

    1982-07-01

    Optimal test procedures for a sequence of inventory periods will be discussed. Starting with a game theoretical description of the conflict situation between the plant operator and the inspector, the objectives of the inspector as well as the general decision theoretical problem will be formulated. In the first part the objective of 'secure' detection will be emphasized which means that only at the end of the reference time a decision is taken by the inspector. In the second part the objective of 'timely' detection will be emphasized which will lead to sequential test procedures. At the end of the paper all procedures will be summarized, and in view of the multitude of procedures available at the moment some comments about future work will be given. (orig./HP) [de

  4. A Nationwide Analysis of Cost Variation for Autologous Free Flap Breast Reconstruction.

    Billig, Jessica I; Lu, Yiwen; Momoh, Adeyiza O; Chung, Kevin C

    2017-11-01

    Cost variation among hospitals has been demonstrated for surgical procedures. Uncovering these differences has helped guide measures taken to reduce health care spending. To date, the fiscal consequence of hospital variation for autologous free flap breast reconstruction is unknown. To investigate factors that influence cost variation for autologous free flap breast reconstruction. A secondary cross-sectional analysis was performed using the Healthcare Cost and Utilization Project National Inpatient Sample database from 2008 to 2010. The dates of analysis were September 2016 to February 2017. The setting was a stratified sample of all US community hospitals. Participants were female patients who were diagnosed as having breast cancer or were at high risk for breast cancer and underwent autologous free flap breast reconstruction. Variables of interest included demographic data, hospital characteristics, length of stay, complications (surgical and systemic), and inpatient cost. The study used univariate and generalized linear mixed models to examine associations between patient and hospital characteristics and cost. A total of 3302 patients were included in the study, with a median age of 50 years (interquartile range, 44-57 years). The mean cost for autologous free flap breast reconstruction was $22 677 (interquartile range, $14 907-$33 391). Flap reconstructions performed at high-volume hospitals were significantly more costly than those performed at low-volume hospitals ($24 360 vs $18 918, P Logistic regression demonstrated that hospital volume correlated with increased cost (Exp[β], 1.06; 95% CI, 1.02-1.11; P = .003). Fewer surgical complications (16.4% [169 of 1029] vs 23.7% [278 of 1174], P cost variation among patients undergoing autologous free flap breast reconstruction. Experience, as measured by a hospital's volume, provides quality health care with fewer complications but is more costly. Longer length of stay contributed to regional

  5. Solution Tree Problem Solving Procedure for Engineering Analysis ...

    Illustrations are provided in the thermofluid engineering area to showcase the procedure's applications. This approach has proved to be a veritable tool for enhancing the problem-solving and computer algorithmic skills of engineering students, eliciting their curiosity, active participation and appreciation of the taught course.

  6. A Comparative Analysis of the Procedure Employed in Item ...

    Zimbabwe Journal of Educational Research ... and psychological scales designed to measure constructs in education and social sciences were purposively selected for the study based on accessibility and availability of validation information. The instruments used for the study were scaling procedures used in 27 published ...

  7. Analysis of emergency response procedures and air traffic accidents ...

    Incessant air transport accidents have been a source of concern to stakeholders and aviation experts in Nigeria, yet the response and process has not been adequately appraised. This study attempts an evaluation of the emergency response procedures in the aviation industry with particular focus on Murtala Muhammed ...

  8. Analysis of Idiom Variation in the Framework of Linguistic Subjectivity

    Liu, Zhengyuan

    2012-01-01

    Idiom variation is a ubiquitous linguistic phenomenon which has raised a lot of research questions. The past approach was either formal or functional. Both of them did not pay much attention to cognitive factors of language users. By putting idiom variation in the framework of linguistic subjectivity, we have offered a new perspective in the…

  9. Multidimensional analysis of Drosophila wing variation in Evolution ...

    2008-12-23

    ...the different components of phenotypic variation of a complex trait: the wing [Multidimensional analysis of Drosophila wing variation in Evolution Canyon. J. Genet. 87, 407–419]. ...identify the effect of slope on wing shape (figure 2c).

  10. Variational formulation based analysis on growth of yield front in ...

    The present study investigates the growth of the elastic-plastic front in rotating solid disks of non-uniform thickness having exponential and parabolic geometry variation. The problem is solved through an extension of a variational method in the elastoplastic regime. The formulation is based on the von Mises yield criterion and linear ...

  11. Variational Bayesian Learning for Wavelet Independent Component Analysis

    Roussos, E.; Roberts, S.; Daubechies, I.

    2005-11-01

    In an exploratory approach to data analysis, it is often useful to consider the observations as generated from a set of latent generators or "sources" via a generally unknown mapping. For the noisy overcomplete case, where we have more sources than observations, the problem becomes extremely ill-posed. Solutions to such inverse problems can, in many cases, be achieved by incorporating prior knowledge about the problem, captured in the form of constraints. This setting is a natural candidate for the application of the Bayesian methodology, allowing us to incorporate "soft" constraints in a natural manner. The work described in this paper is mainly driven by problems in functional magnetic resonance imaging of the brain, for the neuro-scientific goal of extracting relevant "maps" from the data. This can be stated as a `blind' source separation problem. Recent experiments in the field of neuroscience show that these maps are sparse, in some appropriate sense. The separation problem can be solved by independent component analysis (ICA), viewed as a technique for seeking sparse components, assuming appropriate distributions for the sources. We derive a hybrid wavelet-ICA model, transforming the signals into a domain where the modeling assumption of sparsity of the coefficients with respect to a dictionary is natural. We follow a graphical modeling formalism, viewing ICA as a probabilistic generative model. We use hierarchical source and mixing models and apply Bayesian inference to the problem. This allows us to perform model selection in order to infer the complexity of the representation, as well as automatic denoising. Since exact inference and learning in such a model is intractable, we follow a variational Bayesian mean-field approach in the conjugate-exponential family of distributions, for efficient unsupervised learning in multi-dimensional settings. The performance of the proposed algorithm is demonstrated on some representative experiments.

  12. Is There Variation in Procedural Utilization for Lumbar Spine Disorders Between a Fee-for-Service and Salaried Healthcare System?

    Schoenfeld, Andrew J; Makanji, Heeren; Jiang, Wei; Koehlmoos, Tracey; Bono, Christopher M; Haider, Adil H

    2017-12-01

    Whether compensation for professional services drives the use of those services is an important question that has not been answered in a robust manner. Specifically, there is a growing concern that spine care practitioners may preferentially choose more costly or invasive procedures in a fee-for-service system, irrespective of the underlying lumbar disorder being treated. (1) Were proportions of interbody fusions higher in the fee-for-service setting as opposed to the salaried Department of Defense setting? (2) Were the odds of interbody fusion increased in a fee-for-service setting after controlling for indications for surgery? Patients surgically treated for lumbar disc herniation, spinal stenosis, and spondylolisthesis (2006-2014) were identified. Patients were divided into two groups based on whether the surgery was performed in the fee-for-service setting (beneficiaries receive care at a civilian facility with expenses covered by TRICARE insurance) or at a Department of Defense facility (direct care). There were 28,344 patients in the entire study, 21,290 treated in fee-for-service and 7054 treated in Department of Defense facilities. Differences in the rates of fusion-based procedures, discectomy, and decompression between both healthcare settings were assessed using multinomial logistic regression to adjust for differences in case-mix and surgical indication. TRICARE beneficiaries treated for lumbar spinal disorders in the fee-for-service setting had higher odds of receiving interbody fusions (fee-for-service: 7267 of 21,290 [34%], direct care: 1539 of 7054 [22%], odds ratio [OR]: 1.25 [95% confidence interval 1.20-1.30], p fee-for-service setting irrespective of the underlying diagnosis. These results speak to the existence of provider inducement within the field of spine surgery. This reality portends poor performance for surgical practices and hospitals in Accountable Care Organizations and bundled payment programs in which provider inducement is allowed

  13. Analysis of assistance procedures to normal birth in primiparous

    Joe Luiz Vieira Garcia Novo

    2016-04-01

    Introduction: Current medical technologies in childbirth care have increased maternal and fetal benefits, yet numerous unnecessary procedures persist. The purpose of normal childbirth care is to have healthy women and newborns, using a minimum of safe interventions. Objective: To analyze the assistance given to normal delivery in a secondary-care maternity unit. Methodology: A total of 100 primiparous mothers who had a vaginal delivery were included, and the care practices used were categorized: (1) according to the WHO classification for assistance to normal childbirth: effective, harmful, used with caution and used inappropriately; (2) by associating calculations with the Bologna Index parameters: presence of a birth partner, partograph, no stimulation of labor, delivery in a non-supine position, and mother-newborn skin-to-skin contact. Results: Birth partners (85%), correctly filled partographs (62%), mother-newborn skin-to-skin contact (36%), use of oxytocin (87%), use of parenteral nutrition during labor (86%) and at delivery (74%), episiotomy (94%) and uterine fundal pressure in the expulsion stage (58%). The overall average value of the Bologna Index of the mothers analyzed was 1.95. Conclusions: Some effective procedures recommended by WHO were adopted (presence of a birth partner), some effective and mandatory practices were not complied with (partograph completely filled), potentially harmful or ineffective procedures were used (oxytocin in labor/post-partum), as well as inadequate procedures (uterine fundal pressure during the expulsion stage, use of forceps and episiotomy). The maternity's care model did not offer excellent natural-birth procedures to its primiparous mothers (BI = 1.95).

  14. Analysis of the thermal profiles and the charcoal gravimetric yield in three variations of rectangular brick

    Oliveira, Rogerio Lima Mota de; Alves Junior, Edson; Mulina, Bruno Henrique Oliveira; Borges, Valerio Luiz; Carvalho, Solidonio Rodrigues de [Federal University of Uberlandia - UFU, MG (Brazil). School of Mechanical Engineering - FEMEC], e-mails: rogerio@mecanica.ufu.br, edson@mec.ufu.br, vlborges@mecanica.ufu.br, srcarvalho@mecanica.ufu.br

    2010-07-01

    Charcoal plays a major role in the Brazilian economic scenario. The procedure for obtaining charcoal consists of the carbonization of wood at specific temperatures in kilns. This ancient process has incorporated few technologies, and the kilns used for it mostly lack any control instruments, so the process depends on the skill of its operators. However, in recent decades several studies have been developed to improve the practice as well as the equipment involved in and controlling the stages of charcoal production. In this sense, this work proposes the analysis of the thermal profiles and the gravimetric yield in three variations of a rectangular brick kiln called RAC220: traditional (without any type of instrumentation), instrumented with thermal sensors (RTD PT100) and adapted with a gasifier. The goal is to correlate temperature, gravimetric yield and quality of the produced charcoal. Proximate analyses were performed to determine the fixed carbon, volatile matter and ash contents of the charcoal. Through such measurement procedures, together with statistical analysis, the aim is to identify an important tool to reduce the time of charcoal production, minimize losses and increase the thermal efficiency of the production process. (author)

  15. Inverse Transient Analysis for Classification of Wall Thickness Variations in Pipelines

    Jeffrey Tuck

    2013-12-01

    Analysis of transient fluid pressure signals has been investigated as an alternative method of fault detection in pipeline systems and has shown promise in both laboratory and field trials. The advantage of the method is that it can potentially provide a fast and cost effective means of locating faults such as leaks, blockages and pipeline wall degradation within a pipeline while the system remains fully operational. The only requirement is that high speed pressure sensors are placed in contact with the fluid. Further development of the method requires detailed numerical models and enhanced understanding of transient flow within a pipeline where variations in pipeline condition and geometry occur. One such variation commonly encountered is the degradation or thinning of pipe walls, which can increase the susceptibility of a pipeline to leak development. This paper aims to improve transient-based fault detection methods by investigating how changes in pipe wall thickness will affect the transient behaviour of a system; this is done through the analysis of laboratory experiments. The laboratory experiments are carried out on a stainless steel pipeline of constant outside diameter, into which a pipe section of variable wall thickness is inserted. In order to detect the location and severity of these changes in wall conditions within the laboratory system an inverse transient analysis procedure is employed which considers independent variations in wavespeed and diameter. Inverse transient analyses are carried out using a genetic algorithm optimisation routine to match the response from a one-dimensional method of characteristics transient model to the experimental time domain pressure responses. The accuracy of the detection technique is evaluated and benefits associated with various simplifying assumptions and simulation run times are investigated. It is found that for the case investigated, changes in the wavespeed and nominal diameter of the
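    A highly simplified sketch of the inverse-fitting idea follows; the forward model here is only a placeholder (the authors use a one-dimensional method-of-characteristics transient solver), and the parameter bounds, file name and optimiser choice are assumptions rather than details from the paper.

```python
# Sketch: fit wavespeed and internal diameter of a suspect pipe section by
# minimising the misfit between a measured pressure trace and a forward model,
# using a population-based (genetic-algorithm-like) optimiser.
import numpy as np
from scipy.optimize import differential_evolution

t = np.linspace(0.0, 2.0, 2001)           # time axis [s]
measured = np.load("pressure_trace.npy")  # hypothetical measured head trace (same length as t)

def forward_model(wavespeed, diameter, t):
    # Placeholder for a method-of-characteristics transient simulation.
    return np.exp(-t / wavespeed) * np.cos(2 * np.pi * t / diameter)

def misfit(params):
    a, d = params
    return np.sum((forward_model(a, d, t) - measured) ** 2)

result = differential_evolution(misfit, bounds=[(900.0, 1400.0),   # wavespeed [m/s]
                                                (0.010, 0.030)])   # diameter [m]
print(result.x, result.fun)
```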

  16. Inverse Transient Analysis for Classification of Wall Thickness Variations in Pipelines

    Tuck, Jeffrey; Lee, Pedro

    2013-01-01

    Analysis of transient fluid pressure signals has been investigated as an alternative method of fault detection in pipeline systems and has shown promise in both laboratory and field trials. The advantage of the method is that it can potentially provide a fast and cost effective means of locating faults such as leaks, blockages and pipeline wall degradation within a pipeline while the system remains fully operational. The only requirement is that high speed pressure sensors are placed in contact with the fluid. Further development of the method requires detailed numerical models and enhanced understanding of transient flow within a pipeline where variations in pipeline condition and geometry occur. One such variation commonly encountered is the degradation or thinning of pipe walls, which can increase the susceptibility of a pipeline to leak development. This paper aims to improve transient-based fault detection methods by investigating how changes in pipe wall thickness will affect the transient behaviour of a system; this is done through the analysis of laboratory experiments. The laboratory experiments are carried out on a stainless steel pipeline of constant outside diameter, into which a pipe section of variable wall thickness is inserted. In order to detect the location and severity of these changes in wall conditions within the laboratory system an inverse transient analysis procedure is employed which considers independent variations in wavespeed and diameter. Inverse transient analyses are carried out using a genetic algorithm optimisation routine to match the response from a one-dimensional method of characteristics transient model to the experimental time domain pressure responses. The accuracy of the detection technique is evaluated and benefits associated with various simplifying assumptions and simulation run times are investigated. It is found that for the case investigated, changes in the wavespeed and nominal diameter of the pipeline are both important

  17. A variation of the housing unit method for estimating the age and gender distribution of small, rural areas: A case study of the local expert procedure

    Carlson, J.F.; Roe, L.K.; Williams, C.A.; Swanson, D.A.

    1993-01-01

    This paper describes the methodologies used in the development of a demographic data base established in support of the Yucca Mountain Site Characterization Project Radiological Monitoring Plan (RadMP). It also examines the suitability of a survey-based procedure for estimating population in small, rural areas. The procedure is a variation of the Housing Unit Method. It employs local experts enlisted to provide information about the demographic characteristics of households randomly selected from residential-unit sample frames developed from utility records. The procedure is nonintrusive and less costly than traditional survey data collection efforts. Because the procedure is based on random sampling, confidence intervals can be constructed around the population estimated by the technique. The results of a case study are provided in which the total population, and the age and gender composition of the population, are estimated for three unincorporated communities in rural, southern Nevada
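    The core arithmetic of a housing-unit estimate of this kind can be sketched as below; the numbers, variable names and occupancy rate are invented for illustration, and the actual procedure stratifies by community and weights the local experts' responses.

```python
# Sketch: estimate total population as (housing units) x (occupancy rate) x
# (mean persons per occupied unit), with a confidence interval driven by the
# sampling variability of the persons-per-household responses.
import numpy as np

housing_units = 412                                        # from utility records (hypothetical)
persons = np.array([2, 3, 1, 4, 2, 2, 5, 3, 1, 2, 3, 2])   # sampled occupied units (hypothetical)
occupancy_rate = 0.93                                      # hypothetical

mean_pph = persons.mean()
se_pph = persons.std(ddof=1) / np.sqrt(len(persons))

estimate = housing_units * occupancy_rate * mean_pph
ci_half_width = 1.96 * housing_units * occupancy_rate * se_pph
print(f"population ~ {estimate:.0f} +/- {ci_half_width:.0f}")
```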

  18. Comparative analysis of diagnostic accuracy of different brain biopsy procedures

    Jain Deepali; Sharma Mehar; Sarkar Chitra; Gupta Deepak; Singh Manmohan; Mahapatra A

    2006-01-01

    Background: Image-guided procedures such as computed tomography (CT) guided, neuronavigator-guided and ultrasound-guided methods can assist neurosurgeons in localizing the intraparenchymal lesion of the brain. However, despite improvements in the imaging techniques, an accurate diagnosis of intrinsic lesion requires tissue sampling and histological verification. Aims: The present study was carried out to examine the reliability of the diagnoses made on tumor sample obtained via different s...

  19. RAPD analysis of colchicine induced variation of the Dendrobium ...

    STORAGESEVER

    2009-04-20

    ...species of the genus Dendrobium, and 13 orchids across genera. ...to detect variations at species level and among somaclonal variants in this study. ...alternative for colchicine in in vitro chromosome doubling of Lilium.

  20. A Quantitative Review of Functional Analysis Procedures in Public School Settings

    Solnick, Mark D.; Ardoin, Scott P.

    2010-01-01

    Functional behavioral assessments can consist of indirect, descriptive and experimental procedures, such as a functional analysis. Although the research contains numerous examples demonstrating the effectiveness of functional analysis procedures, experimental conditions are often difficult to implement in classroom settings and analog conditions…

  1. Analysis of substructural variation in families of enzymatic proteins with applications to protein function prediction

    Fofanov Viacheslav Y

    2010-05-01

    , statistically rigorous procedure for incorporating structural variation data into protein function prediction pipelines. Our work provides an unbiased, automated assessment of the structural variability of identified binding site substructures among protein structure families and a technique for exploring the relation of substructural variation to protein function. As available proteomic data continues to expand, the techniques proposed will be indispensable for the large-scale analysis and interpretation of structural data.

  2. Empirical analysis of skin friction under variations of temperature

    Parra Alvarez, A. R. de la; Groot Viana, M. de

    2014-01-01

    In geotechnical soil characterization, the strength parameters cohesion (c) and internal friction angle (Φ) have traditionally been measured without taking temperature into account, although temperature is a very important issue in energy geostructures. The present document analyzes the variation of these parameters at the soil-concrete interface at different temperatures. A traditional shear strength case with a forced plane of failure was used. Several tests were carried out to determine the variation of skin friction in granular and cohesive soils with temperature. (Author)

  3. Vector optimization set-valued and variational analysis

    Chen, Guang-ya; Yang, Xiaogi

    2005-01-01

    This book is devoted to vector or multiple criteria approaches in optimization. Topics covered include: vector optimization, vector variational inequalities, vector variational principles, vector minmax inequalities and vector equilibrium problems. In particular, problems with variable ordering relations and set-valued mappings are treated. The nonlinear scalarization method is extensively used throughout the book to deal with various vector-related problems. The results presented are original and should be interesting to researchers and graduates in applied mathematics and operations research

  4. Comparative analysis of proteome and transcriptome variation in mouse.

    Anatole Ghazalpour

    2011-06-01

    The relationships between the levels of transcripts and the levels of the proteins they encode have not been examined comprehensively in mammals, although previous work in plants and yeast suggests a surprisingly modest correlation. We have examined this issue using a genetic approach in which natural variations were used to perturb both transcript levels and protein levels among inbred strains of mice. We quantified over 5,000 peptides and over 22,000 transcripts in livers of 97 inbred and recombinant inbred strains and focused on the 7,185 most heritable transcripts and 486 most reliable proteins. The transcript levels were quantified by microarray analysis in three replicates and the proteins were quantified by liquid chromatography-mass spectrometry using an 18O-reference-based isotope labeling approach. We show that the levels of transcripts and proteins correlate significantly for only about half of the genes tested, with an average correlation of 0.27, and the correlations of transcripts and proteins varied depending on the cellular location and biological function of the gene. We examined technical and biological factors that could contribute to the modest correlation. For example, differential splicing clearly affects the analyses for certain genes; but, based on deep sequencing, this does not substantially contribute to the overall estimate of the correlation. We also employed genome-wide association analyses to map loci controlling both transcript and protein levels. Surprisingly, little overlap was observed between the protein- and transcript-mapped loci. We have typed numerous clinically relevant traits among the strains, including adiposity, lipoprotein levels, and tissue parameters. Using correlation analysis, we found that a low number of clinical trait relationships are preserved between the protein and mRNA gene products and that the majority of such relationships are specific to either the protein levels or transcript levels.

  5. Hospital Variation in Cesarean Delivery: A Multilevel Analysis.

    Vecino-Ortiz, Andres I; Bardey, David; Castano-Yepes, Ramon

    2015-12-01

    To assess the issue of hospital variations in Colombia and to contribute to the methodology on health care variations by using a model that clusters the variance between hospitals while accounting for individual-level reimbursement rates and objective health-status variables. We used data on all births (N = 11,954) taking place in a contributory-regimen insurer network in Colombia during 2007. A multilevel logistic regression model was used to account for the share of unexplained variance between hospitals. In addition, an alternative variance decomposition specification was further carried out to measure the proportion of such unexplained variance due to the region effect. Hospitals account for 20% of the variation in performing cesarean sections, whereas region explains only one-third of such variance. Variables accounting for preferences on the demand side as well as reimbursement rates are found to predict the probability of performing cesarean sections. Hospital variations explain large variances within a single-payer's network. Because this insurer company is highly regarded in terms of performance and finance, these results might provide a lower bound for the scale of hospital variation in the Colombian health care market. Such lower bound provides guidance on the relevance of this issue for Colombia. Some factors such as demand-side preferences and physician reimbursement rates increase variations in health care even within a single-payer network. This is a source of inefficiencies, threatening the quality of health care and financial sustainability. The proposed methodology should be considered in further research on health care variations. Copyright © 2015 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
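    One common way to express the "share of unexplained variance between hospitals" in such a multilevel logistic model is the variance partition coefficient under the latent-variable formulation; a minimal sketch follows, where the hospital-level variance is an assumed illustrative value rather than a figure reported in the study.

```python
# Sketch: variance partition coefficient (intra-class correlation) for a
# two-level logistic model, using the standard latent-variable residual
# variance of pi^2 / 3 for the individual level.
import math

sigma2_hospital = 0.82          # assumed between-hospital variance (illustrative)
sigma2_individual = math.pi ** 2 / 3

vpc = sigma2_hospital / (sigma2_hospital + sigma2_individual)
print(f"share of variance attributable to hospitals: {vpc:.2%}")
```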

  6. Sample handling and chemical procedures for efficacious trace analysis of urine by neutron activation analysis

    Blotcky, A.J.; Rack, E.P.; Roman, F.R.

    1988-01-01

    Important for the determination of trace elements, ions, or compounds in urine by chemical neutron activation analysis is the optimization of the sample handling, preirradiation chemistry, and radioassay procedures necessary for viable analysis. Each element, because of its natural abundance in the earth's crust and, hence, its potential for reagent and environmental contamination, requires specific procedures for storage, handling, and preirradiation chemistry. Radioassay techniques for radionuclides vary depending on their half-lives and decay characteristics. Described in this paper are optimized procedures for aluminum and selenium. While 28Al (T1/2 = 2.24 min) and 77mSe (T1/2 = 17.4 s) have short half-lives, their gamma-ray spectra are quite different. Aluminum-28 decays by a 1779-keV gamma and 77mSe by a 162-keV gamma. Unlike selenium, aluminum is a ubiquitous element in the environment, requiring special handling to minimize contamination in all phases of its analytical determination
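    For short-lived activation products such as the two isotopes above, counting results are decay-corrected with the standard relations (general radioactivity formulas, not expressions quoted from this paper):

$$
A(t)=A_{0}\,e^{-\lambda t},\qquad \lambda=\frac{\ln 2}{T_{1/2}},
$$

    so, for example, a 28Al activity counted one half-life (2.24 min) after the end of irradiation has already dropped to half of its initial value, which is why rapid sample transfer and counting are essential for these radioassays.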

  7. Assessing temporal variations in connectivity through suspended sediment hysteresis analysis

    Sherriff, Sophie; Rowan, John; Fenton, Owen; Jordan, Phil; Melland, Alice; Mellander, Per-Erik; hUallacháin, Daire Ó.

    2016-04-01

    Connectivity provides a valuable concept for understanding catchment-scale sediment dynamics. In intensive agricultural catchments, land management through tillage, high livestock densities and extensive land drainage practices significantly change hydromorphological behaviour and alter sediment supply and downstream delivery. Analysis of suspended sediment-discharge hysteresis has offered insights into sediment dynamics but typically on a limited selection of events. Greater availability of continuous high-resolution discharge and turbidity data and qualitative hysteresis metrics enables assessment of sediment dynamics during more events and over time. This paper assesses the utility of this approach to explore seasonal variations in connectivity. Data were collected from three small (c. 10 km2) intensive agricultural catchments in Ireland with contrasting morphologies, soil types, land use patterns and management practices, and are broadly defined as low-permeability supporting grassland, moderate-permeability supporting arable and high-permeability supporting arable. Suspended sediment concentration (using calibrated turbidity measurements) and discharge data were collected at 10-min resolution from each catchment outlet and precipitation data were collected from a weather station within each catchment. Event databases (67-90 events per catchment) collated information on sediment export metrics, hysteresis category (e.g., clockwise, anti-clockwise, no hysteresis), numeric hysteresis index, and potential hydro-meteorological controls on sediment transport including precipitation amount, duration, intensity, stream flow and antecedent soil moisture and rainfall. Statistical analysis of potential controls on sediment export was undertaken using Pearson's correlation coefficient on separate hysteresis categories in each catchment. Sediment hysteresis fluctuations through time were subsequently assessed using the hysteresis index. Results showed the numeric
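    Many hysteresis indices reduce each event to a single number comparing rising-limb and falling-limb concentrations at matched discharges; the sketch below illustrates that idea in a simplified form and is not necessarily the specific index used by the authors.

```python
# Sketch: a simple event hysteresis index comparing suspended sediment
# concentration on the rising vs. falling limb at matched relative discharges.
# Positive values indicate clockwise hysteresis (higher C on the rising limb).
# Assumes a single-peaked event with roughly monotonic rising and falling limbs.
import numpy as np

def hysteresis_index(q, c):
    """q, c: discharge and concentration series for one event (same length)."""
    i_peak = int(np.argmax(q))
    q_norm = np.linspace(0.1, 0.9, 9) * (q.max() - q.min()) + q.min()
    c_rise = np.interp(q_norm, q[:i_peak + 1], c[:i_peak + 1])
    c_fall = np.interp(q_norm, q[i_peak:][::-1], c[i_peak:][::-1])
    return float(np.mean((c_rise - c_fall) / np.maximum(c_rise, c_fall)))
```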

  8. Principal component analysis to evaluate the spatial variation of ...

    Discretisation of the particle sizes is highlighted as both a challenge and an opportunity and it is recommended that it be used as a tuning parameter in gauging kaolin variations across samples and in validating new predictive modeling applications. Successful applications will depend on how clay and data scientists keep ...

  9. Multidimensional analysis of Drosophila wing variation in Evolution ...

    In this study, using Drosophila melanogaster isofemale lines derived from wild flies collected on both slopes of the canyon, we investigated the effect of developmental temperature upon the different components of phenotypic variation of a complex trait: the wing. Combining geometric and traditional morphometrics, we find ...

  10. Analysis of spin and gauge models with variational methods

    Dagotto, E.; Masperi, L.; Moreo, A.; Della Selva, A.; Fiore, R.

    1985-01-01

    Since independent-site (link) or independent-link (plaquette) variational states enhance the order or the disorder, respectively, in the treatment of spin (gauge) models, we prove that mixed states are able to improve the critical coupling while giving the qualitatively correct behavior of the relevant parameters

  11. Validity of covariance models for the analysis of geographical variation

    Guillot, Gilles; Schilling, Rene L.; Porcu, Emilio

    2014-01-01

    1. Due to the availability of large molecular data-sets, covariance models are increasingly used to describe the structure of genetic variation as an alternative to more heavily parametrised biological models. 2. We focus here on a class of parametric covariance models that received sustained att...

  12. A variational analysis for large deflection of skew plates under ...

    In the present paper, the static behaviour of thin isotropic skew plates under uniformly distributed load is analyzed with the geometric nonlinearity of the model properly handled. A variational method based on total potential energy has been implemented through assumed displacement field. The computational work has ...

  13. A scenario-based procedure for seismic risk analysis

    Kluegel, J.-U.; Mualchin, L.; Panza, G.F.

    2006-12-01

    A new methodology for seismic risk analysis based on probabilistic interpretation of deterministic or scenario-based hazard analysis, in full compliance with the likelihood principle and therefore meeting the requirements of modern risk analysis, has been developed. The proposed methodology can easily be adjusted to deliver its output in a format required for safety analysts and civil engineers. The scenario-based approach allows the incorporation of all available information collected in a geological, seismotectonic and geotechnical database of the site of interest as well as advanced physical modelling techniques to provide a reliable and robust deterministic design basis for civil infrastructures. The robustness of this approach is of special importance for critical infrastructures. At the same time a scenario-based seismic hazard analysis allows the development of the required input for probabilistic risk assessment (PRA) as required by safety analysts and insurance companies. The scenario-based approach removes the ambiguity in the results of probabilistic seismic hazard analysis (PSHA) which relies on the projections of Gutenberg-Richter (G-R) equation. The problems in the validity of G-R projections, because of incomplete to total absence of data for making the projections, are still unresolved. Consequently, the information from G-R must not be used in decisions for design of critical structures or critical elements in a structure. The scenario-based methodology is strictly based on observable facts and data and complemented by physical modelling techniques, which can be submitted to a formalised validation process. By means of sensitivity analysis, knowledge gaps related to lack of data can be dealt with easily, due to the limited amount of scenarios to be investigated. The proposed seismic risk analysis can be used with confidence for planning, insurance and engineering applications. (author)

  14. PROCEDURE FOR ANALYSIS AND EVALUATION OF MARKET POSITION PRODUCTION ORGANIZATION

    A. N. Polozova

    2014-01-01

    Summary. Methodical procedures are proposed for the economic monitoring of the market position of an industrial organization, particularly one engaged in food production, comprising five elements: a «components of business processes» matrix, a «materiality – efficiency» matrix, a «materiality – relevance» matrix, a matrix of facilitating and hindering factors, and a matrix of operation scenarios. Components are substantiated for assessing the strengths and weaknesses of an organization's business activities, characterizing the state of the internal business environment along the elements production, organization, personnel, finance and marketing. The advantages of the «materiality – relevance» matrix are described; it consists of two materiality levels, high and low, and three relevance directions: «no change», «gains importance in the future» and «loses importance in the future». The contents of the «scenarios of the organization's functioning» matrix are presented, involving 6 attribute levels, 10 classes of scenarios and 19 activities, including optimistic and pessimistic ones. Primary classes of scenarios are evaluated, characterized by the properties «development», «dynamic equilibrium», «quality improvement», «competitiveness», «favorable realization of opportunities» and «competition resistance».

  15. Analysis of generalized Schwarz alternating procedure for domain decomposition

    Engquist, B.; Zhao, Hongkai [Univ. of California, Los Angeles, CA (United States)

    1996-12-31

    The Schwarz alternating method (SAM) is the theoretical basis for domain decomposition, which itself is a powerful tool both for parallel computation and for computing in complicated domains. The convergence rate of the classical SAM is very sensitive to the overlap size between subdomains, which is not desirable for most applications. We propose a generalized SAM procedure which is an extension of the modified SAM proposed by P.-L. Lions. Instead of using only Dirichlet data at the artificial boundary between subdomains, we take a convex combination of u and ∂u/∂n, i.e. ∂u/∂n + Λu, where Λ is some "positive" operator. Convergence of the modified SAM without overlapping in a quite general setting has been proven by P.-L. Lions using delicate energy estimates. Important questions remain for the generalized SAM. (1) What is the most essential mechanism for convergence without overlapping? (2) Given the partial differential equation, what is the best choice for the positive operator Λ? (3) In the overlapping case, is the generalized SAM superior to the classical SAM? (4) What is the convergence rate and what does it depend on? (5) Numerically, can we obtain an easy-to-implement operator Λ such that the convergence is independent of the mesh size? To analyze the convergence of the generalized SAM we focus, for simplicity, on the Poisson equation for two typical geometries in the two-subdomain case.
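    For orientation, a minimal sketch of the classical overlapping alternating Schwarz iteration on a 1D Poisson problem is given below; it exchanges Dirichlet data at the artificial boundaries, whereas the generalized variant discussed above would instead exchange the Robin combination ∂u/∂n + Λu. Grid sizes, interface positions and iteration count are arbitrary choices for the sketch.

```python
# Sketch: classical overlapping alternating Schwarz for -u'' = f on (0,1),
# u(0) = u(1) = 0, exchanging Dirichlet data at the artificial boundaries.
import numpy as np

n = 101                              # global grid points
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
f = np.ones(n)                       # right-hand side f(x) = 1

ia, ib = 40, 60                      # interface indices (overlap region x[ia]..x[ib])
u = np.zeros(n)

def solve_subdomain(lo, hi, left_val, right_val):
    """Solve -u'' = f on x[lo..hi] with Dirichlet values at both ends."""
    m = hi - lo - 1                  # number of interior unknowns
    A = (np.diag(2.0 * np.ones(m)) - np.diag(np.ones(m - 1), 1)
         - np.diag(np.ones(m - 1), -1)) / h**2
    b = f[lo + 1:hi].copy()
    b[0] += left_val / h**2
    b[-1] += right_val / h**2
    return np.linalg.solve(A, b)

for _ in range(30):                  # Schwarz sweeps
    u[1:ib] = solve_subdomain(0, ib, 0.0, u[ib])               # subdomain 1: [0, x[ib]]
    u[ia + 1:n - 1] = solve_subdomain(ia, n - 1, u[ia], 0.0)   # subdomain 2: [x[ia], 1]

exact = 0.5 * x * (1.0 - x)          # analytic solution for f = 1
print("max error:", np.abs(u - exact).max())
```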

  16. Procedure for conducting a human-reliability analysis for nuclear power plants. Final report

    Bell, B.J.; Swain, A.D.

    1983-05-01

    This document describes in detail a procedure to be followed in conducting a human reliability analysis as part of a probabilistic risk assessment when such an analysis is performed according to the methods described in NUREG/CR-1278, Handbook for Human Reliability Analysis with Emphasis on Nuclear Power Plant Applications. An overview of the procedure describing the major elements of a human reliability analysis is presented along with a detailed description of each element and an example of an actual analysis. An appendix consists of some sample human reliability analysis problems for further study

  17. Failure mode and effects analysis: an empirical comparison of failure mode scoring procedures.

    Ashley, Laura; Armitage, Gerry

    2010-12-01

    To empirically compare 2 different commonly used failure mode and effects analysis (FMEA) scoring procedures with respect to their resultant failure mode scores and prioritization: a mathematical procedure, where scores are assigned independently by FMEA team members and averaged, and a consensus procedure, where scores are agreed on by the FMEA team via discussion. A multidisciplinary team undertook a Healthcare FMEA of chemotherapy administration. This included mapping the chemotherapy process, identifying and scoring failure modes (potential errors) for each process step, and generating remedial strategies to counteract them. Failure modes were scored using both an independent mathematical procedure and a team consensus procedure. Almost three-fifths of the 30 failure modes generated were scored differently by the 2 procedures, and for just more than one-third of cases, the score discrepancy was substantial. Using the Healthcare FMEA prioritization cutoff score, almost twice as many failure modes were prioritized by the consensus procedure than by the mathematical procedure. This is the first study to empirically demonstrate that different FMEA scoring procedures can score and prioritize failure modes differently. It found considerable variability in individual team members' opinions on scores, which highlights the subjective and qualitative nature of failure mode scoring. A consensus scoring procedure may be most appropriate for FMEA as it allows variability in individuals' scores and rationales to become apparent and to be discussed and resolved by the team. It may also yield team learning and communication benefits unlikely to result from a mathematical procedure.
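    The difference between the two scoring procedures compared above can be illustrated with a small sketch; the scores are invented, and a generic severity x occurrence x detectability product stands in for the hazard score actually used in Healthcare FMEA.

```python
# Sketch: risk priority numbers for one failure mode under the two procedures.
# Mathematical procedure: each member scores independently, scores are averaged.
# Consensus procedure: the team agrees on a single score per dimension.
import numpy as np

# Independent scores from four team members: (severity, occurrence, detectability)
member_scores = np.array([[8, 3, 4],
                          [6, 2, 5],
                          [9, 4, 3],
                          [7, 3, 4]])

mathematical_rpn = np.prod(member_scores.mean(axis=0))   # average, then multiply

consensus = np.array([8, 4, 4])                           # scores agreed in discussion
consensus_rpn = np.prod(consensus)

print(f"mathematical RPN = {mathematical_rpn:.1f}, consensus RPN = {consensus_rpn}")
```

    Because the product of averages need not equal the agreed product, the two procedures can rank the same failure mode on opposite sides of a prioritization cutoff, which is the discrepancy the study documents empirically.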

  18. Analysis of Relational Communication in Dyads: New Measurement Procedures.

    Rogers, L. Edna; Farace, Richard

    Relational communication refers to the control or dominance aspects of message exchange in dyads--distinguishing it from the report or referential aspects of communication. In relational communicational analysis, messages as transactions are emphasized; major theoretical concepts which emerge are symmetry, transitoriness, and complementarity of…

  19. A Bayesian multidimensional scaling procedure for the spatial analysis of revealed choice data

    DeSarbo, WS; Kim, Y; Fong, D

    1999-01-01

    We present a new Bayesian formulation of a vector multidimensional scaling procedure for the spatial analysis of binary choice data. The Gibbs sampler is gainfully employed to estimate the posterior distribution of the specified scalar products, bilinear model parameters. The computational procedure

  20. Responding to Self-Harm: A Documentary Analysis of Agency Policy and Procedure

    Paul, Sally; Hill, Malcolm

    2013-01-01

    This paper reports on the findings of a documentary analysis of policies and procedures relating to self-harm from a range of organisations working with young people in the UK. It identifies the extent to which policies and/or procedures relating to self-harm are available for service providers and offers a wider understanding of the concepts of…

  1. Genetic analysis of variation in human meiotic recombination.

    Reshmi Chowdhury

    2009-09-01

    The number of recombination events per meiosis varies extensively among individuals. This recombination phenotype differs between female and male, and also among individuals of each gender. In this study, we used high-density SNP genotypes of over 2,300 individuals and their offspring in two datasets to characterize the recombination landscape and to map the genetic variants that contribute to variation in recombination phenotypes. We found six genetic loci that are associated with recombination phenotypes. Two of these (RNF212 and an inversion on chromosome 17q21.31) were previously reported in the Icelandic population, and this is the first replication in any other population. For the four newly identified loci (KIAA1462, PDZK1, UGCG, NUB1), results from expression studies provide support for their roles in meiosis. Each of the variants that we identified explains only a small fraction of the individual variation in recombination. Notably, we found different sequence variants associated with female and male recombination phenotypes, suggesting that they are regulated by different genes. Characterization of genetic variants that influence natural variation in meiotic recombination will lead to a better understanding of normal meiotic events as well as of non-disjunction, the primary cause of pregnancy loss.

  2. Fast analysis procedure of radiochemical coordinat uptake for methotrexate

    Caston, J.D.; Kamen, B.A.

    1976-01-01

    In this invention, a radiochemical assay is described to determine the concentration of methotrexate or its analytical equivalents in a biological medium. The uptake of the labelled compound is related radio-isotopically to known concentrations of the unlabelled compound in a first system containing a predetermined amount of the labelled compound and a predetermined amount of the unlabelled compound. In a second system, identical to the first except that the sample of the biological medium to be analyzed takes the place of the unlabelled compound, the amount of labelled compound taken up is determined radio-isotopically. The concentration of the compound in the sample is then determined by correlating the labelled-compound uptake measured in the second system with the relation established in the first system. The radio-isotopic relations and determinations may be made by direct and sequential analytical techniques [fr]

  3. Development of the quantification procedures for in situ XRF analysis

    Kump, P.; Necemer, M.; Rupnik, P.

    2005-01-01

    For in situ XRF applications, two excitation systems (radioisotope and tube excited) and an X ray spectrometer based on an Si-PIN detector were assembled and used. The radioisotope excitation system with an Am-241 source was assembled into a prototype of a compact XRF analyser PEDUZO-01, which is also applicable in field work. The existing quantification software QAES (quantitative analysis of environmental samples) was assessed to be adequate also in field work. This QAES software was also integrated into a new software attached to the developed XRF analyser PEDUZO-01, which includes spectrum acquisition, spectrum analysis and quantification and runs in the LABVIEW environment. In a process of assessment of the Si-PIN based X ray spectrometers and QAES quantification software in field work, a comparison was made with the results obtained by the standard Si(Li) based spectrometer. The results of this study prove that the use of this spectrometer is adequate for field work. This work was accepted for publication in X ray Spectrometry. Application of a simple sample preparation of solid samples was studied in view of the analytical results obtained. It has been established that under definite conditions the results are not very different from the ones obtained by the homogenized sample pressed into the pellet. The influence of particle size and mineralogical effects on quantitative results was studied. A simple sample preparation kit was proposed. Sample preparation for the analysis of water samples by precipitation with APDC and aerosol analysis using a dichotomous sampler were also adapted and used in the field work. An adequate sample preparation kit was proposed. (author)

  4. A comparative examination of sample treatment procedures for ICAP-AES analysis of biological tissue

    De Boer, J. L. M.; Maessen, F. J. M. J.

    The objective of this study was to contribute to the evaluation of existing sample preparation procedures for ICAP-AES analysis of biological material. Performance characteristics were established for current digestion procedures comprising extraction, solubilization, pressure digestion, and wet and dry ashing methods. Apart from accuracy and precision, a number of criteria of special interest for analytical practice were applied. SRM bovine liver served as the test sample; in this material six elements were determined simultaneously. Results showed that every procedure has its defects and advantages. Hence, unambiguous recommendation of standard digestion procedures can be made only when taking into account the specific analytical problem.

  5. A comparison of various procedures in photon activation analysis with the same irradiation setup

    Sun, Z.J. [Chemical Sciences and Engineering Division, Argonne National Laboratory, 9700 S. Cass Ave., Argonne, IL 60439 (United States); Wells, D. [Physics Department, South Dakota School of Mines and Technology, 501 E. Saint Joseph St., Rapid City, SD 57701 (United States); Segebade, C. [Idaho Accelerator Center, Idaho State University, 921 S. 8th Ave., Pocatello, ID 83209 (United States); Quigley, K.; Chemerisov, S. [Chemical Sciences and Engineering Division, Argonne National Laboratory, 9700 S. Cass Ave., Argonne, IL 60439 (United States)

    2014-11-15

    A sample of known elemental concentrations was activated in the bremsstrahlung photon beam created by a pulsed electron LINAC. Several photon activation analysis procedures, including those applied with/without reference materials and with/without a photon flux monitor, were carried out to compare their precision and accuracy in practice. Experimental results indicate that (1) relative procedures usually produce better outcomes, even though the absolute measurement is straightforward and eliminates the need for reference materials; and (2) among relative procedures, the method with an internal flux monitor yields the highest quality analytical results. The pros and cons of each procedure are also discussed in the article.
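    The relative procedures referred to above are usually written as a comparator expression of the following general form (a textbook-style relation with a flux-monitor correction, not a formula quoted from this article):

$$
c_{\mathrm{sample}}=c_{\mathrm{ref}}\;
\frac{A_{\mathrm{sample}}/m_{\mathrm{sample}}}{A_{\mathrm{ref}}/m_{\mathrm{ref}}}\;
\frac{\Phi_{\mathrm{ref}}}{\Phi_{\mathrm{sample}}},
$$

    where A is the decay-corrected photopeak count rate, m the mass, and Φ the photon fluence seen by each item as recorded by the flux monitor; when sample and reference are irradiated together the flux ratio is close to unity, which helps explain why the internal-monitor variant performs best.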

  6. Processes and Procedures for Application of CFD to Nuclear Reactor Safety Analysis

    Richard W. Johnson; Richard R. Schultz; Patrick J. Roache; Ismail B. Celik; William D. Pointer; Yassin A. Hassan

    2006-01-01

    Traditionally, nuclear reactor safety analysis has been performed using systems analysis codes such as RELAP5, which was developed at the INL. However, goals established by the Generation IV program, especially the desire to increase efficiency, have led to an increase in operating temperatures for the reactors. This increase pushes reactor materials to operate towards their upper temperature limits relative to structural integrity. Because there will be some finite variation of the power density in the reactor core, there will be a potential for local hot spots to occur in the reactor vessel. Hence, it has become apparent that detailed analysis will be required to ensure that local "hot spots" do not exceed safety limits. It is generally accepted that computational fluid dynamics (CFD) codes are intrinsically capable of simulating fluid dynamics and heat transport locally because they are based on "first principles". Indeed, CFD analysis has reached a fairly mature level of development, including at the commercial level. However, CFD experts are aware that even though commercial codes are capable of simulating local fluid and thermal physics, great care must be taken in their application to avoid errors caused by such things as inappropriate grid meshing, low-order discretization schemes, lack of iterative convergence and inaccurate time-stepping. Just as important is the choice of a turbulence model for turbulent flow simulation. Turbulence models represent the effects of turbulent transport of mass, momentum and energy, but are not necessarily applicable for wide ranges of flow types. Therefore, there is a well-recognized need to establish practices and procedures for the proper application of CFD to simulate flow physics accurately and to establish the level of uncertainty of such computations. The present document represents contributions of CFD experts on what the basic practices, procedures and guidelines should be to aid CFD analysts to obtain accurate estimates

  7. Fast Edge Detection and Segmentation of Terrestrial Laser Scans Through Normal Variation Analysis

    Che, E.; Olsen, M. J.

    2017-09-01

    Terrestrial Laser Scanning (TLS) utilizes light detection and ranging (lidar) to effectively and efficiently acquire point cloud data for a wide variety of applications. Segmentation is a common procedure of post-processing to group the point cloud into a number of clusters to simplify the data for the sequential modelling and analysis needed for most applications. This paper presents a novel method to rapidly segment TLS data based on edge detection and region growing. First, by computing the projected incidence angles and performing the normal variation analysis, the silhouette edges and intersection edges are separated from the smooth surfaces. Then a modified region growing algorithm groups the points lying on the same smooth surface. The proposed method efficiently exploits the gridded scan pattern utilized during acquisition of TLS data from most sensors and takes advantage of parallel programming to process approximately 1 million points per second. Moreover, the proposed segmentation does not require estimation of the normal at each point, which limits the errors in normal estimation propagating to segmentation. Both an indoor and outdoor scene are used for an experiment to demonstrate and discuss the effectiveness and robustness of the proposed segmentation method.
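    A much-reduced sketch of the edge test on a gridded scan is given below; the threshold and neighbourhood are arbitrary choices, the function is illustrative rather than the authors' algorithm, and the full method additionally uses projected incidence angles and a modified region-growing step.

```python
# Sketch: detect candidate edge points on a gridded TLS scan by checking how
# much the direction to the left neighbour deviates from the direction to the
# right neighbour -- a crude stand-in for the paper's normal-variation test,
# formulated so that no per-point normal needs to be estimated.
import numpy as np

def edge_mask(points, angle_thresh_deg=20.0):
    """points: (rows, cols, 3) Cartesian coordinates on the scanner grid."""
    v_left = points[:, :-2, :] - points[:, 1:-1, :]
    v_right = points[:, 2:, :] - points[:, 1:-1, :]
    cosang = np.sum(v_left * v_right, axis=2) / (
        np.linalg.norm(v_left, axis=2) * np.linalg.norm(v_right, axis=2) + 1e-12)
    angle = np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))
    mask = np.zeros(points.shape[:2], dtype=bool)
    mask[:, 1:-1] = np.abs(angle - 180.0) > angle_thresh_deg   # smooth surface ~ 180 deg
    return mask
```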

  8. Using plant procedures as the basis for conducting a job and task analysis

    Haynes, F.H.; Ruth, B.W.

    1985-01-01

    Plant procedures were selected, by Northeast Utilities (NU), as the basis for conducting Job and Task Analyses (JTA). The resultant JTA was used to design procedure based simulator training programs for Millstone 1, 2, and Connecticut Yankee. The task listings were both plant specific and exhibited excellent correlation to INPO's generic PWR and BWR task analyses. Using the procedures based method enabled us to perform the JTA using plant and training staff. This proved cost effective in terms of both time and money. Learning objectives developed from the JTA were easily justified and correlated directly to job performance within the context of the plant procedures. In addition, the analysis generated a comprehensive review of plant procedures and, conversely, the plant's normal procedure revision process generated an automatic trigger for updating the task data

  9. Quality assurance procedures for the analysis of TRU waste samples

    Glasgow, D.C.; Giaquinto, J.M.; Robinson, L.

    1995-01-01

    The Waste Isolation Pilot Plant (WIPP) project was undertaken in response to the growing need for a national repository for transuranic (TRU) waste. Guidelines for WIPP specify that any waste item to be interred must be fully characterized and analyzed to determine the presence of chemical compounds designated hazardous and certain toxic elements. The Transuranic Waste Characterization Program (TWCP) was launched to develop analysis and quality guidelines, certify laboratories, and oversee the actual waste characterizations at the laboratories. ORNL is participating in the waste characterization phase and brings to bear a variety of analytical techniques including ICP-AES, cold vapor atomic absorption, and instrumental neutron activation analysis (INAA) to collectively determine arsenic, cadmium, barium, chromium, mercury, selenium, silver, and other elements. All of the analytical techniques involved participate in a cooperative effort to meet the project objectives. One important component of any good quality assurance program is determining when an alternate method is more suitable for a given analytical problem. By bringing to bear a whole arsenal of analytical techniques working toward common objectives, few analytical problems prove to be insurmountable. INAA and ICP-AES form a powerful pair when functioning in this cooperative manner. This paper will provide details of the quality assurance protocols, typical results from quality control samples for both INAA and ICP-AES, and detail the method cooperation schemes used

  10. Procedures for uncertainty and sensitivity analysis in repository performance assessment

    Poern, K.; Aakerlund, O.

    1985-10-01

    The objective of the project was mainly a literature study of available methods for the treatment of parameter uncertainty propagation and sensitivity aspects in complete models such as those concerning geologic disposal of radioactive waste. The study, which has run parallel with the development of a code package (PROPER) for computer-assisted analysis of function, also aims at the choice of accurate, cost-effective methods for uncertainty and sensitivity analysis. Such a choice depends on several factors like the number of input parameters, the capacity of the model and the computer resources required to use the model. Two basic approaches are addressed in the report. In one of these the model of interest is directly simulated by an efficient sampling technique to generate an output distribution. Applying the other basic method the model is replaced by an approximating analytical response surface, which is then used in the sampling phase or in moment matching to generate the output distribution. Both approaches are illustrated by simple examples in the report. (author)
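    The first of the two basic approaches, direct simulation by sampling, can be sketched in a few lines; the model below is a toy function standing in for a repository performance model, and the input distributions are invented for illustration.

```python
# Sketch: propagate parameter uncertainty through a model by simple Monte
# Carlo sampling and summarise the resulting output distribution.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Toy stand-in for a performance model: dose = source / (retardation * dilution)
source = rng.lognormal(mean=0.0, sigma=0.5, size=n)
retardation = rng.uniform(5.0, 50.0, size=n)
dilution = rng.lognormal(mean=2.0, sigma=0.3, size=n)

dose = source / (retardation * dilution)

print("median:", np.median(dose), "95th percentile:", np.percentile(dose, 95))
```

    The second approach would instead fit a cheap response surface (for example, a low-order polynomial in the inputs) to a modest number of model runs and then sample or moment-match that surrogate.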

  11. Geographical variation in dementia: systematic review with meta-analysis

    Russ, Tom C; Batty, G David; Hearnshaw, Gena F; Fenton, Candida; Starr, John M

    2012-01-01

    Background Geographical variation in dementia prevalence and incidence may indicate important socio-environmental contributions to dementia aetiology. However, previous comparisons have been hampered by combining studies with different methodologies. This review systematically collates and synthesizes studies examining geographical variation in the prevalence and incidence of dementia based on comparisons of studies using identical methodologies. Methods Papers were identified by a comprehensive electronic search of relevant databases, scrutinising the reference sections of identified publications, contacting experts in the field and re-examining papers already known to us. Identified articles were independently reviewed against inclusion/exclusion criteria and considered according to geographical scale. Rural/urban comparisons were meta-analysed. Results Twelve thousand five hundred and eighty records were reviewed and 51 articles were included. Dementia prevalence and incidence varies at a number of scales from the national down to small areas, including some evidence of an effect of rural living [prevalence odds ratio (OR) = 1.11, 90% confidence interval (CI) 0.79–1.57; incidence OR = 1.20, 90% CI 0.84–1.71]. However, this association of rurality was stronger for Alzheimer disease, particularly when early life rural living was captured (prevalence OR = 2.22, 90% CI 1.19–4.16; incidence OR = 1.64, 90% CI 1.08–2.50). Conclusions There is evidence of geographical variation in rates of dementia in affluent countries at a variety of geographical scales. Rural living is associated with an increased risk of Alzheimer disease, and there is a suggestion that early life rural living further increases this risk. However, the fact that few studies have been conducted in resource-poor countries limits conclusions. PMID:22798662
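
    For readers unfamiliar with how such pooled odds ratios are obtained, the following is a minimal sketch of DerSimonian-Laird random-effects pooling of study-level odds ratios; the ORs and standard errors are invented and are not the review's data.

      # Random-effects (DerSimonian-Laird) pooling of odds ratios; values are hypothetical.
      import numpy as np

      log_or = np.log(np.array([1.3, 0.9, 1.5, 1.1]))   # study-level odds ratios (invented)
      se = np.array([0.20, 0.25, 0.30, 0.15])           # standard errors of log(OR) (invented)

      w_fixed = 1.0 / se**2
      q = np.sum(w_fixed * (log_or - np.average(log_or, weights=w_fixed))**2)
      df = len(log_or) - 1
      c = np.sum(w_fixed) - np.sum(w_fixed**2) / np.sum(w_fixed)
      tau2 = max(0.0, (q - df) / c)                      # between-study variance

      w_rand = 1.0 / (se**2 + tau2)
      pooled = np.average(log_or, weights=w_rand)
      se_pooled = np.sqrt(1.0 / np.sum(w_rand))
      z90 = 1.645                                        # 90% CI, matching the review's intervals
      print(f"pooled OR = {np.exp(pooled):.2f} "
            f"(90% CI {np.exp(pooled - z90*se_pooled):.2f}-{np.exp(pooled + z90*se_pooled):.2f})")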

  12. Systems genetics analysis of pharmacogenomics variation during antidepressant treatment

    Madsen, Majbritt Busk; Kogelman, L J A; Kadarmideen, H N

    2016-01-01

    Selective serotonin reuptake inhibitors (SSRIs) are the most widely used antidepressants, but the efficacy of the treatment varies significantly among individuals. It is believed that complex genetic mechanisms play a part in this variation. We have used a network based approach to unravel the in...... genes involved in calcium homeostasis. In conclusion, we suggest a difference in genetic interaction networks between initial and subsequent SSRI response.The Pharmacogenomics Journal advance online publication, 18 October 2016; doi:10.1038/tpj.2016.68....

  13. Variational methods for crystalline microstructure analysis and computation

    Dolzmann, Georg

    2003-01-01

    Phase transformations in solids typically lead to surprising mechanical behaviour with far reaching technological applications. The mathematical modeling of these transformations in the late 80s initiated a new field of research in applied mathematics, often referred to as mathematical materials science, with deep connections to the calculus of variations and the theory of partial differential equations. This volume gives a brief introduction to the essential physical background, in particular for shape memory alloys and a special class of polymers (nematic elastomers). Then the underlying mathematical concepts are presented with a strong emphasis on the importance of quasiconvex hulls of sets for experiments, analytical approaches, and numerical simulations.

  14. Establishment of analysis procedure for control rod reactivity worth

    Song, Hoon; Kim, Young Il; Kim, Sang Ji; Kim, Young In

    2001-03-01

    For the calculation of control rod reactivity worth in hexagonal assemblies, which are generally used in fast reactors, we have investigated the calculation method, the problems arising during calculation, the required level of calculation detail, and enhancements to the calculation modelling. We assessed the applicability of the method through comparison and analysis of results obtained with the effective cross-section generation system TRANSX/TWODANT and the neutron flux calculation system, the diffusion theory code DIF-3D, both of which belong to the K-CORE system. On this basis we determined the basic calculation method and identified the present calculation problems arising when the method is applied within the K-CORE system, as well as items for future improvement.

  16. Variation in plasma calcium analysis in primary care in Sweden - a multilevel analysis

    Eggertsen Robert

    2010-05-01

    Abstract Background Primary hyperparathyroidism (pHPT) is a common disease that often remains undetected and causes severe disturbance, especially in postmenopausal women. Therefore, national recommendations promoting early pHPT detection by plasma calcium (P-Ca) analysis have been issued in Sweden. In this study we aimed to investigate variation in P-Ca analysis between physicians and health care centres (HCCs) in primary care in the county of Skaraborg, Sweden. Methods In this cross-sectional study of patients' records during 2005 we analysed records from 154 629 patients attending 457 physicians at 24 HCCs. We used multilevel logistic regression analysis (MLRA) and adjusted for patient, physician and HCC characteristics. Differences were expressed as the median odds ratio (MOR). Results There was substantial variation in the number of P-Ca analyses between both HCCs (MORHCC 1.65 [1.44-2.07]) and physicians (MORphysician 1.95 [1.85-2.08]). The odds of a P-Ca analysis were lower for male patients (OR 0.80 [0.77-0.83]) and increased with the number of diagnoses (OR 25.8 [23.5-28.5]). The sex of the physician had no influence on P-Ca test ordering (OR 0.93 [0.78-1.09]). Physicians in training ordered the most P-Ca analyses (OR 1.69 [1.35-2.24]) and locums the fewest (OR 0.73 [0.57-0.94]). More of the variance was attributed to the physician level than to the HCC level. Differences in patient mix did not explain this variance between physicians. Theoretically, if a patient were able to change both GP and HCC, the odds of a P-Ca analysis would in median increase by 2.45. Including characteristics of the patients, physicians and HCCs in the MLRA model did not explain the variance. Conclusions The physician level was more important than the HCC level for the variation in P-Ca analysis, but further exploration of unidentified contextual factors is crucial for future monitoring of practice variation.
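
    A brief note on the MOR statistic used above: it can be computed from the variance of the cluster-level random intercepts of a multilevel logistic model as MOR = exp(sqrt(2 * variance) * Phi^-1(0.75)). The sketch below uses hypothetical variances chosen only so that the outputs are of the same order as the reported MORs; they are not the study's estimates.

      # Median odds ratio from a cluster-level random-intercept variance (log-odds scale).
      from math import exp, sqrt
      from scipy.stats import norm

      def median_odds_ratio(random_intercept_variance: float) -> float:
          """Translate a cluster-level variance on the log-odds scale into an MOR."""
          return exp(sqrt(2.0 * random_intercept_variance) * norm.ppf(0.75))

      # Hypothetical variances for the physician and HCC levels (not the study's estimates)
      print(f"MOR physician = {median_odds_ratio(0.49):.2f}")
      print(f"MOR HCC       = {median_odds_ratio(0.27):.2f}")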

  17. ANALYSIS THE DIURNAL VARIATIONS ON SELECTED PHYSICAL AND PHYSIOLOGICAL PARAMETERS

    A. MAHABOOBJAN

    2010-12-01

    The purpose of the study was to analyze diurnal variations in selected physical and physiological parameters, namely speed, explosive power, resting heart rate and breath holding time, among college students. To achieve the purpose of this study, a total of twenty players (n=20) from Government Arts College, Salem were selected as subjects. To study the diurnal variation of the players in the selected physiological and performance variables, the data were collected four times a day, at four-hour intervals from 6.00 to 18.00 hours, and the time of day was treated as a categorical variable. One-way repeated measures analysis of variance (ANOVA) was used to analyze the data. If the obtained F-ratio was significant, Scheffé's post-hoc test was used to find out the significant difference, if any, among the paired means. The level of significance was fixed at the .05 level. It was concluded that both the physical and physiological parameters differed significantly with reference to the change of temperature during the day.
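
    As an illustration of the analysis design described above (not the study's data), a one-way repeated-measures ANOVA with time of day as the within-subject factor can be set up as follows; the subject values are simulated.

      # Repeated-measures ANOVA sketch with simulated resting heart rate data.
      import numpy as np
      import pandas as pd
      from statsmodels.stats.anova import AnovaRM

      rng = np.random.default_rng(0)
      subjects = np.repeat(np.arange(20), 4)                       # 20 players, 4 times of day
      time_of_day = np.tile(["06:00", "10:00", "14:00", "18:00"], 20)
      # Hypothetical resting heart rate with a small diurnal effect added at 14:00
      hr = rng.normal(70, 5, size=80) + (time_of_day == "14:00") * 3

      df = pd.DataFrame({"subject": subjects, "time": time_of_day, "hr": hr})
      res = AnovaRM(df, depvar="hr", subject="subject", within=["time"]).fit()
      print(res)   # the F-ratio here corresponds to the repeated-measures F-test in the study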

  18. Statistical analysis of geomagnetic field variations during solar eclipses

    Kim, Jung-Hee; Chang, Heon-Young

    2018-04-01

    We investigate the geomagnetic field variations recorded by INTERMAGNET geomagnetic observatories while the Moon's umbra or penumbra passed over them during a solar eclipse event. Though it is generally considered that the geomagnetic field can be modulated during solar eclipses, the effect of the solar eclipse on the observed geomagnetic field has proved too subtle to detect easily. Instead of exploring the geomagnetic field as a case study, we analyze 207 geomagnetic records acquired by 100 geomagnetic observatories during 39 solar eclipses occurring from 1991 to 2016. As a result of examining the pattern of the geomagnetic field variation on average, we confirm that the effect can be seen over an interval of 180 min centered at the time of maximum eclipse at the site of a geomagnetic observatory. That is, we demonstrate an increase in the Y component of the geomagnetic field and decreases in the X component and the total strength of the geomagnetic field. We also find that the effect can be overwhelmed, depending more sensitively on the level of daily geomagnetic activity than on the level of solar activity and/or the phase of the solar cycle. We demonstrate this by dividing the whole data set into subsets based on parameters of the geomagnetic field, solar activity, and solar eclipses. It is suggested, therefore, that evidence of the solar eclipse effect can be revealed even at solar maximum, as long as the day of the solar eclipse is magnetically quiet.

  19. Local Analysis Approach for Short Wavelength Geopotential Variations

    Bender, P. L.

    2009-12-01

    The value of global spherical harmonic analyses for determining 15 day to 30 day changes in the Earth's gravity field has been demonstrated extensively using data from the GRACE mission and previous missions. However, additional useful information appears to be obtainable from local analyses of the data. A number of such analyses have been carried out by various groups. In the energy approximation, the changes in the height of the satellite altitude geopotential can be determined from the post-fit changes in the satellite separation during individual one-revolution arcs of data from a GRACE-type pair of satellites in a given orbit. For a particular region, it is assumed that short wavelength spatial variations for the arcs crossing that region during a time T of interest would be used to determine corrections to the spherical harmonic results. The main issue in considering higher measurement accuracy in future missions is how much improvement in spatial resolution can be achieved. For this, the shortest wavelengths that can be determined are the most important. And, while the longer wavelength variations are affected by mass distribution changes over much of the globe, the shorter wavelength ones hopefully will be determined mainly by more local changes in the mass distribution. Future missions are expected to have much higher accuracy for measuring changes in the satellite separation than GRACE. However, how large an improvement in the derived results in hydrology will be achieved is still very much a matter of study, particularly because of the effects of uncertainty in the time variations in the atmospheric and oceanic mass distributions. To be specific, it will be assumed that improving the spatial resolution in continental regions away from the coastlines is the objective, and that the satellite altitude is in the range of roughly 290 to 360 km made possible for long missions by drag-free operation. The advantages of putting together the short wavelength

  20. Oxygen isotope analysis of plant water without extraction procedure

    Gan, K.S.; Wong, S.C.; Farquhar, G.D.; Yong, J.W.H.

    2001-01-01

    Isotopic analyses of plant water (mainly xylem, phloem and leaf water) are gaining importance as the isotopic signals reflect plant-environment interactions, affect the oxygen isotopic composition of atmospheric O2 and CO2, and are eventually incorporated into plant organic matter. Conventionally, such isotopic measurements require a time-consuming process of isolating the plant water by azeotropic distillation or vacuum extraction, which does not complement the speed of isotope analysis provided by continuous-flow IRMS (Isotope-Ratio Mass Spectrometry), especially when large data sets are needed for statistical calculations in biological studies. Further, a substantial amount of plant material is needed for water extraction, and leaf samples would invariably include unenriched water from the fine veins. To measure sub-microlitre amounts of leaf mesophyll water, a new approach is undertaken in which a small disc of fresh leaf is cut using a specially designed leaf punch and pyrolysed directly in an IRMS. By comparing with results from pyrolysis of the dry matter of the same leaf, the 18O content of leaf water can be determined without extraction from fresh leaves. This method is validated using a range of cellulose-water mixtures to simulate the constituents of fresh leaf. Cotton leaf water δ18O obtained from both methods, fresh leaf pyrolysis and azeotropic distillation, will be compared. The pyrolysis technique provides a robust approach to measuring the isotopic content of water or any volatile present in a homogeneous solution or solid hydrous substance.

  1. Integrating population variation and protein structural analysis to improve clinical interpretation of missense variation: application to the WD40 domain.

    Laskowski, Roman A; Tyagi, Nidhi; Johnson, Diana; Joss, Shelagh; Kinning, Esther; McWilliam, Catherine; Splitt, Miranda; Thornton, Janet M; Firth, Helen V; Wright, Caroline F

    2016-03-01

    We present a generic, multidisciplinary approach for improving our understanding of novel missense variants in recently discovered disease genes exhibiting genetic heterogeneity, by combining clinical and population genetics with protein structural analysis. Using six new de novo missense diagnoses in TBL1XR1 from the Deciphering Developmental Disorders study, together with population variation data, we show that the β-propeller structure of the ubiquitous WD40 domain provides a convincing way to discriminate between pathogenic and benign variation. Children with likely pathogenic mutations in this gene have severely delayed language development, often accompanied by intellectual disability, autism, dysmorphology and gastrointestinal problems. Amino acids affected by likely pathogenic missense mutations are either crucial for the stability of the fold, forming part of a highly conserved symmetrically repeating hydrogen-bonded tetrad, or located at the top face of the β-propeller, where 'hotspot' residues affect the binding of β-catenin to the TBLR1 protein. In contrast, those altered by population variation are significantly less likely to be spatially clustered towards the top face or to be at buried or highly conserved residues. This result is useful not only for interpreting benign and pathogenic missense variants in this gene, but also in other WD40 domains, many of which are associated with disease. © The Author 2016. Published by Oxford University Press.

  2. An analysis of tolerance levels in IMRT quality assurance procedures

    Basran, Parminder S.; Woo, Milton K.

    2008-01-01

    Increased use of intensity modulated radiation therapy (IMRT) has resulted in increased efforts in patient quality assurance (QA). Software and detector systems intended to streamline the IMRT quality assurance process often report metrics, such as percent discrepancies between measured and computed doses, which can be compared to benchmark or threshold values. The purpose of this work is to examine the relationships between two different types of IMRT QA processes in order to define, or refine, appropriate tolerance values. For 115 IMRT plans delivered in a 3 month period, we examine the discrepancies between (a) the treatment planning system (TPS) and results from a commercial independent monitor unit (MU) calculation program; (b) the TPS and results from a commercial diode-array measurement system; and (c) the independent MU calculation and the diode-array measurements. Statistical tests were performed to assess significance in the IMRT QA results for different disease sites and machine models. There is no evidence that the average total dose discrepancy in the monitor unit calculation depends on the disease site. Second, the discrepancies in the two IMRT QA methods are independent: there is no evidence that a better (or worse) monitor unit validation result is related to a better (or worse) diode-array measurement result. Third, there is marginal benefit in repeating the independent MU calculation with a more suitable dose point if the initial IMRT QA failed a certain tolerance. Based on these findings, the authors arrive at acceptable tolerances based on disease site and IMRT QA method. Specifically, monitor unit validations are expected to have a total dose discrepancy of 3% overall, and 5% per beam, independent of disease site. Diode array measurements are expected to have a total absolute dose discrepancy of 3% overall, and 3% per beam, independent of disease site. The percent of pixels exceeding a 3% and 3 mm threshold in a gamma analysis should be
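
    The gamma analysis mentioned in the closing sentence combines a dose-difference criterion (e.g. 3%) with a distance-to-agreement criterion (e.g. 3 mm). The following toy sketch evaluates a 1-D gamma index on invented dose profiles; clinical QA software performs the same calculation on 2-D or 3-D dose grids.

      # Toy 1-D gamma analysis (3% / 3 mm, global normalisation); profiles are invented.
      import numpy as np

      def gamma_1d(x, dose_ref, dose_eval, dose_tol=0.03, dist_tol=3.0):
          """Return the gamma index at each reference point."""
          d_norm = dose_tol * dose_ref.max()
          gammas = np.empty_like(dose_ref)
          for i, (xr, dr) in enumerate(zip(x, dose_ref)):
              dist2 = ((x - xr) / dist_tol) ** 2
              dose2 = ((dose_eval - dr) / d_norm) ** 2
              gammas[i] = np.sqrt(np.min(dist2 + dose2))
          return gammas

      x = np.linspace(0, 100, 201)                        # positions in mm
      ref = 100 * np.exp(-((x - 50) / 20) ** 2)           # reference (planned) profile, toy shape
      meas = 100 * np.exp(-((x - 51) / 20) ** 2) * 1.02   # measured profile, shifted and scaled

      g = gamma_1d(x, ref, meas)
      print(f"gamma pass rate (gamma <= 1): {np.mean(g <= 1.0) * 100:.1f}%")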

  3. Variational Bayesian Causal Connectivity Analysis for fMRI

    Martin eLuessi

    2014-05-01

    The ability to accurately estimate effective connectivity among brain regions from neuroimaging data could help answer many open questions in neuroscience. We propose a method which uses causality to obtain a measure of effective connectivity from fMRI data. The method uses a vector autoregressive model for the latent variables describing neuronal activity, in combination with a linear observation model based on a convolution with a hemodynamic response function. Due to the employed modeling, it is possible to efficiently estimate all latent variables of the model using a variational Bayesian inference algorithm. The computational efficiency of the method enables us to apply it to large scale problems with high sampling rates and several hundred regions of interest. We use a comprehensive empirical evaluation with synthetic and real fMRI data to evaluate the performance of our method under various conditions.
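
    As a simplified stand-in for the model described above (ordinary least squares rather than variational Bayes, and without the hemodynamic convolution), the sketch below fits a first-order vector autoregressive model to simulated region time series and reads the coefficient matrix as a directed connectivity estimate.

      # Simplified VAR(1) connectivity sketch on simulated data; not the paper's algorithm.
      import numpy as np

      rng = np.random.default_rng(3)
      T, R = 500, 4                         # time points, regions of interest
      A_true = np.array([[0.5, 0.0, 0.0, 0.0],
                         [0.4, 0.3, 0.0, 0.0],
                         [0.0, 0.0, 0.6, 0.0],
                         [0.0, 0.0, 0.3, 0.2]])
      x = np.zeros((T, R))
      for t in range(1, T):
          x[t] = x[t - 1] @ A_true.T + rng.normal(0, 1, R)

      # OLS estimate of the VAR(1) coefficient matrix: x_t ~ A x_{t-1}
      X, Y = x[:-1], x[1:]
      A_hat = np.linalg.lstsq(X, Y, rcond=None)[0].T
      print(np.round(A_hat, 2))             # non-zero entries suggest directed influence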

  4. High resolution analysis of temporal variation of airborne radionuclides

    Komura, K.; Yamaguchi, Y.; Manikandan, M.; Murata, Y.; Iida, T.; Moriizumi, J.

    2004-01-01

    As one application of ultra low-background gamma spectrometry, we measured the temporal variation of airborne radionuclides at intervals of one to a few hours in extreme cases. Airborne radionuclides were collected on a quartz fiber filter paper at the Low Level Radioactivity Laboratory (LLRL), Kanazawa University, in Tatsunokuchi (since Nov. 2002), on Hegra Island, located 50 km from the Noto peninsula (since Apr. 2003), to investigate the influence of the Asian continent, and on the Shishiku plateau at 640 m above sea level (since Sep. 2003) to examine vertical differences. Pb-210, Pb-212 and Be-7 were measured nondestructively by ultra low background Ge detectors in the Ogoya Underground Laboratory (270 m water equivalent). The concentration of Rn-222 was monitored at 1 hour intervals, and wind direction and speed were recorded at 10 min intervals (2 min on Hegra Is.) as supporting data for the analyses. In the regular monitoring, sampling was made at 1-2 day intervals (LLRL and Shishiku) or 1 week intervals (Hegra) to follow daily and seasonal variations and the similarities or differences between sampling locations. When drastic meteorological changes occurred, such as the passage of a front or typhoon, the appearance of an inversion layer, or snowfall, short samplings at 1-2 hour intervals were conducted to find the correlation with meteorological factors at a single point or at two points simultaneously. As a result, the concentrations of Pb-210, Po-210, Pb-212 and Be-7 were found to vary very quickly within a short time, due mainly to horizontal or vertical mixing of air masses. (authors)

  5. Anatomical Variations in the Sinoatrial Nodal Artery: A Meta-Analysis and Clinical Considerations.

    Jens Vikse

    The sinoatrial nodal artery (SANa) is a highly variable vessel which supplies blood to the sinoatrial node (SAN). Due to its variability and susceptibility to iatrogenic injury, our study aimed to assess the anatomy of the SANa and determine the prevalence of its anatomical variations. An extensive search of major electronic databases was performed to identify all articles reporting anatomical data on the SANa. No lower date limit or language restrictions were applied. Anatomical data regarding the artery were extracted and pooled into a meta-analysis. Sixty-six studies (n = 21455 hearts) were included in the meta-analysis. The SANa usually arose as a single vessel, with a pooled prevalence of 95.5% (95%CI: 93.6-96.9). Duplication and triplication of the artery were also observed, with pooled prevalences of 4.3% (95%CI: 2.8-6.0) and 0.3% (95%CI: 0-0.7), respectively. The most common origin of the SANa was from the right coronary artery (RCA), found in 68.0% (95%CI: 55.6-68.9) of cases, followed by origin from the left circumflex artery and origin from the left coronary artery, with pooled prevalences of 22.1% (95%CI: 15.0-26.2) and 2.7% (95%CI: 0.7-5.2), respectively. A retrocaval course of the SANa was the most common course of the artery, with a pooled prevalence of 47.1% (95%CI: 36.0-55.5). The pooled prevalence of an S-shaped SANa was 7.6% (95%CI: 2.9-14.1). The SANa is most commonly reported as a single vessel, originating from the RCA and taking a retrocaval course to reach the SAN. Knowledge of high risk anatomical variants of the SANa, such as an S-shaped artery, must be taken into account by surgeons to prevent iatrogenic injuries. Specifically, interventional or cardiosurgical procedures, such as the Cox maze procedure for atrial fibrillation, open heart surgeries through the right atrium, or intraoperative cross-clamping or dissection procedures during mitral valve surgery using the septal approach, can all potentiate the risk for injury in the setting of high

  6. Analysis of DNA methylation variation in sibling tobacco ( Nicotiana ...

    Amplified fragment length polymorphism (AFLP) and methylation-sensitive amplification polymorphism (MSAP) analysis were used to investigate the genome of two sibling tobacco cultivars, Yunyan85 and Yunyan87, their parent K326 and the other tobacco cultivar NC89. AFLP analysis indicated that, the genome primary ...

  7. Variational formulation based analysis on growth of yield front in ...

    user

    The analysis of rotating disk behavior has been of great interest to many ... strain hardening using Tresca's yield condition and its associated flow rule ...... Determination of Stresses in Gas-Turbine Disks Subjected to Plastic Flow and Creep.

  8. Analysis of Daily Setup Variation With Tomotherapy Megavoltage Computed Tomography

    Zhou Jining; Uhl, Barry; Dewit, Kelly; Young, Mark; Taylor, Brian; Fei Dingyu; Lo, Y-C

    2010-01-01

    The purpose of this study was to evaluate different setup uncertainties for various anatomic sites with TomoTherapy® pretreatment megavoltage computed tomography (MVCT) and to provide optimal margin guidelines for these anatomic sites. Ninety-two patients with tumors in head and neck (HN), brain, lung, abdominal, or prostate regions were included in the study. MVCT was used to verify patient position and tumor target localization before each treatment. With the anatomy registration tool, MVCT provided real-time tumor shift coordinates relative to the positions where the simulation CT was performed. Thermoplastic facemasks were used for HN and brain treatments. Vac-Lok™ cushions were used to immobilize the lower extremities up to the thighs for prostate patients. No respiration suppression was administered for lung and abdomen patients. The interfractional setup variations were recorded and corrected before treatment. The mean interfractional setup error was the smallest for HN among the 5 sites analyzed. The average 3D displacement in lateral, longitudinal, and vertical directions for the 5 sites ranged from 2.2-7.7 mm for HN and lung, respectively. The largest movement in the lung was 2.0 cm in the longitudinal direction, with a mean error of 6.0 mm and standard deviation of 4.8 mm. The mean interfractional rotation variation was small and ranged from 0.2-0.5°, with the standard deviation ranging from 0.7-0.9°. Internal organ displacement was also investigated with a posttreatment MVCT scan for HN, lung, abdomen, and prostate patients. The maximum 3D intrafractional displacement across all sites was less than 4.5 mm. The interfractional systematic errors and random errors were analyzed and the suggested margins for HN, brain, prostate, abdomen, and lung in the lateral, longitudinal, and vertical directions were between 4.2 and 8.2 mm, 5.0 mm and 12.0 mm, and 1.5 mm and 6.8 mm, respectively. We suggest that TomoTherapy® pretreatment

  9. Analysis of daily setup variation with tomotherapy megavoltage computed tomography.

    Zhou, Jining; Uhl, Barry; Dewit, Kelly; Young, Mark; Taylor, Brian; Fei, Ding-Yu; Lo, Yeh-Chi

    2010-01-01

    The purpose of this study was to evaluate different setup uncertainties for various anatomic sites with TomoTherapy pretreatment megavoltage computed tomography (MVCT) and to provide optimal margin guidelines for these anatomic sites. Ninety-two patients with tumors in head and neck (HN), brain, lung, abdominal, or prostate regions were included in the study. MVCT was used to verify patient position and tumor target localization before each treatment. With the anatomy registration tool, MVCT provided real-time tumor shift coordinates relative to the positions where the simulation CT was performed. Thermoplastic facemasks were used for HN and brain treatments. Vac-Lok cushions were used to immobilize the lower extremities up to the thighs for prostate patients. No respiration suppression was administered for lung and abdomen patients. The interfractional setup variations were recorded and corrected before treatment. The mean interfractional setup error was the smallest for HN among the 5 sites analyzed. The average 3D displacement in lateral, longitudinal, and vertical directions for the 5 sites ranged from 2.2-7.7 mm for HN and lung, respectively. The largest movement in the lung was 2.0 cm in the longitudinal direction, with a mean error of 6.0 mm and standard deviation of 4.8 mm. The mean interfractional rotation variation was small and ranged from 0.2-0.5 degrees, with the standard deviation ranging from 0.7-0.9 degrees. Internal organ displacement was also investigated with a posttreatment MVCT scan for HN, lung, abdomen, and prostate patients. The maximum 3D intrafractional displacement across all sites was less than 4.5 mm. The interfractional systematic errors and random errors were analyzed and the suggested margins for HN, brain, prostate, abdomen, and lung in the lateral, longitudinal, and vertical directions were between 4.2 and 8.2 mm, 5.0 mm and 12.0 mm, and 1.5 mm and 6.8 mm, respectively. We suggest that TomoTherapy pretreatment MVCT can be used to

  10. Analysis on Precipitation Variation in Anyang and Nanyang in Recent 57 Years

    2011-01-01

    [Objective] The aim was to analyze precipitation variation in Anyang and Nanyang over the last 57 years. [Method] Based on the data of annual precipitation in Anyang and Nanyang from 1953 to 2009, the changes in precipitation in Anyang and Nanyang were compared by means of mathematical statistics, regression analysis and wavelet analysis. [Result] In the last 57 years, annual precipitation in Anyang and Nanyang showed a decreasing trend, especially in Anyang, where the decrease was obvious; in terms of seasonal variation, average ...

  11. CONSIDERATIONS FOR THE TREATMENT OF COMPUTERIZED PROCEDURES IN HUMAN RELIABILITY ANALYSIS

    Ronald L. Boring; David I. Gertman

    2012-07-01

    Computerized procedures (CPs) are an emerging technology within nuclear power plant control rooms. While CPs have been implemented internationally in advanced control rooms, to date no US nuclear power plant has implemented CPs in its main control room. Yet, CPs are a reality of new plant builds and are an area of considerable interest to existing plants, which see advantages in terms of easier records management by omitting the need for updating hardcopy procedures. The overall intent of this paper is to provide a characterization of human reliability analysis (HRA) issues for computerized procedures. It is beyond the scope of this document to propose a new HRA approach or to recommend specific methods or refinements to those methods. Rather, this paper serves as a review of current HRA as it may be used for the analysis and review of computerized procedures.

  12. HUMAN RELIABILITY ANALYSIS FOR COMPUTERIZED PROCEDURES, PART TWO: APPLICABILITY OF CURRENT METHODS

    Ronald L. Boring; David I. Gertman

    2012-10-01

    Computerized procedures (CPs) are an emerging technology within nuclear power plant control rooms. While CPs have been implemented internationally in advanced control rooms, to date no U.S. nuclear power plant has implemented CPs in its main control room. Yet, CPs are a reality of new plant builds and are an area of considerable interest to existing plants, which see advantages in terms of easier records management by omitting the need for updating hardcopy procedures. The overall intent of this paper is to provide a characterization of human reliability analysis (HRA) issues for computerized procedures. It is beyond the scope of this document to propose a new HRA approach or to recommend specific methods or refinements to those methods. Rather, this paper serves as a review of current HRA as it may be used for the analysis and review of computerized procedures.

  13. Application of human error theory in case analysis of wrong procedures.

    Duthie, Elizabeth A

    2010-06-01

    The goal of this study was to contribute to the emerging body of literature about the role of human behaviors and cognitive processes in the commission of wrong procedures. Case analysis of 5 wrong procedures in operative and nonoperative settings using James Reason's human error theory was performed. The case analysis showed that cognitive underspecification, cognitive flips, automode processing, and skill-based errors were contributory to wrong procedures. Wrong-site procedures accounted for the preponderance of the cases. Front-line supervisory staff used corrective actions that focused on the performance of the individual without taking into account cognitive factors. System fixes using human cognition concepts have a greater chance of achieving sustainable safety outcomes than those that are based on the traditional approach of counseling, education, and disciplinary action for staff.

  14. Qualitative content analysis in nursing research: concepts, procedures and measures to achieve trustworthiness.

    Graneheim, U H; Lundman, B

    2004-02-01

    Qualitative content analysis as described in published literature shows conflicting opinions and unsolved issues regarding meaning and use of concepts, procedures and interpretation. This paper provides an overview of important concepts (manifest and latent content, unit of analysis, meaning unit, condensation, abstraction, content area, code, category and theme) related to qualitative content analysis; illustrates the use of concepts related to the research procedure; and proposes measures to achieve trustworthiness (credibility, dependability and transferability) throughout the steps of the research procedure. Interpretation in qualitative content analysis is discussed in light of Watzlawick et al.'s [Pragmatics of Human Communication. A Study of Interactional Patterns, Pathologies and Paradoxes. W.W. Norton & Company, New York, London] theory of communication.

  15. Procedures for conducting common cause failure analysis in probabilistic safety assessment

    1992-05-01

    The principal objective of this report is to supplement the procedure developed in Mosleh et al. (1988, 1989) by providing more explicit guidance for a practical approach to common cause failure (CCF) analysis. A detailed CCF analysis following that procedure would be very labour intensive and time consuming. This document identifies a number of options for performing the more labour intensive parts of the analysis in an attempt to achieve a balance between the need for detail, the purpose of the analysis and the resources available. The document is intended to be compatible with the Agency's Procedures for Conducting Probabilistic Safety Assessments for Nuclear Power Plants (IAEA, 1992), but can be regarded as a stand-alone report to be used in conjunction with NUREG/CR-4780 (Mosleh et al., 1988, 1989) to provide additional detail and discussion of key technical issues.
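
    As a hedged illustration of the kind of parametric CCF quantification that such procedures feed into (the beta-factor model used here is a generic textbook choice, not necessarily the report's recommended method), consider a two-train redundant system; all numbers are invented.

      # Beta-factor CCF illustration for a 2-train redundant system; values are invented.
      def two_train_unavailability(q_total: float, beta: float) -> dict:
          """Per-demand failure probability of a 2-train redundant system with CCF.

          q_total : total per-demand failure probability of one train
          beta    : fraction of failures assumed to be common cause
          """
          q_ind = (1.0 - beta) * q_total       # independent part
          q_ccf = beta * q_total               # common-cause part (fails both trains at once)
          q_system = q_ind ** 2 + q_ccf        # rare-event approximation
          return {"q_ind": q_ind, "q_ccf": q_ccf, "q_system": q_system}

      res = two_train_unavailability(q_total=1e-3, beta=0.1)
      print(res)   # the common-cause term dominates: ~1e-4 versus ~8e-7 for the independent part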

  16. Experience with conventional inelastic analysis procedures in very high temperature applications

    Mallett, R.H.; Thompson, J.M.; Swindeman, R.W.

    1991-01-01

    Conventional incremental plasticity and creep analysis procedures for inelastic analysis are applied to hot flue gas cleanup system components. These flue gas systems operate at temperatures where plasticity and creep are very much intertwined, while the two phenomena are treated separately in the conventional inelastic analysis procedure. Data for RA333 material are represented in forms appropriate for the conventional inelastic analysis procedures. Behavior is predicted for typical operating cycles. Creep-fatigue damage is estimated based upon usage fractions. Excessive creep damage is predicted; the major contributions occur during high stress short term intervals caused by rapid temperature changes. These results are presented and discussed, together with their interpretation in terms of creep-fatigue damage for very high temperature applications.
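
    The usage-fraction bookkeeping mentioned above is a linear damage summation over fatigue cycles and creep hold times. The sketch below shows the arithmetic with invented duty-cycle numbers; it is not the paper's RA333 assessment.

      # Linear creep-fatigue damage summation (usage fractions); duty-cycle numbers are invented.
      def creep_fatigue_damage(cycles, creep_intervals):
          """cycles: list of (applied cycles n, allowable cycles N) pairs.
          creep_intervals: list of (time at condition t, allowable rupture time t_r) pairs.
          Returns (fatigue damage, creep damage, total)."""
          d_fatigue = sum(n / N for n, N in cycles)
          d_creep = sum(t / t_r for t, t_r in creep_intervals)
          return d_fatigue, d_creep, d_fatigue + d_creep

      # Hypothetical duty: slow cycles plus short high-stress intervals during rapid temperature changes
      cycles = [(500, 20_000), (50, 2_000)]
      creep = [(30_000, 200_000), (200, 1_000)]   # the short high-stress interval dominates creep damage
      d_f, d_c, total = creep_fatigue_damage(cycles, creep)
      print(f"fatigue usage = {d_f:.3f}, creep usage = {d_c:.3f}, total = {total:.3f}")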

  17. Procedure for statistical analysis of one-parameter discrepant experimental data

    Badikov, Sergey A.; Chechev, Valery P.

    2012-01-01

    A new, Mandel–Paule-type procedure for statistical processing of one-parameter discrepant experimental data is described. The procedure enables one to estimate the contribution of unrecognized experimental errors to the total experimental uncertainty as well as to include it in the analysis. A definition of discrepant experimental data for an arbitrary number of measurements is introduced as an accompanying result. In the case of negligible unrecognized experimental errors, the procedure simply reduces to the calculation of the weighted average and its internal uncertainty. The procedure was applied to the statistical analysis of half-life experimental data; mean half-lives for 20 actinides were calculated and the results were compared to the ENSDF and DDEP evaluations. On the whole, the calculated half-lives are consistent with the ENSDF and DDEP evaluations. However, the uncertainties calculated in this work essentially exceed the ENSDF and DDEP evaluations for discrepant experimental data. This effect can be explained by adequately taking into account unrecognized experimental errors. - Highlights: ► A new statistical procedure for processing one-parameter discrepant experimental data has been presented. ► The procedure estimates the contribution of unrecognized errors to the total experimental uncertainty. ► The procedure was applied to discrepant half-life experimental data. ► Results of the calculations are compared to the ENSDF and DDEP evaluations.
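
    A minimal sketch of a Mandel-Paule-type estimate is given below: the between-experiment (unrecognized) variance y is increased until the weighted residual sum of squares equals its expected value N-1, and the weighted mean and its uncertainty are then computed with weights 1/(u_i^2 + y). The half-life values and uncertainties are invented, not the paper's data.

      # Mandel-Paule-type estimate of unrecognized (between-experiment) variance; data invented.
      import numpy as np

      def mandel_paule(x, u, tol=1e-10, max_iter=1000):
          """Return (weighted mean, its uncertainty, between-experiment variance y)."""
          y = 0.0
          for _ in range(max_iter):
              w = 1.0 / (u**2 + y)
              mean = np.sum(w * x) / np.sum(w)
              f = np.sum(w * (x - mean)**2) - (len(x) - 1)   # Mandel-Paule equation
              if f <= 0 and y == 0.0:
                  break                                      # data consistent: no extra variance needed
              # Newton-like update on y (f is decreasing in y)
              df = -np.sum(w**2 * (x - mean)**2)
              y_new = max(0.0, y - f / df)
              if abs(y_new - y) < tol:
                  y = y_new
                  break
              y = y_new
          w = 1.0 / (u**2 + y)
          mean = np.sum(w * x) / np.sum(w)
          return mean, np.sqrt(1.0 / np.sum(w)), y

      x = np.array([432.6, 432.1, 433.9, 431.2])   # hypothetical half-life results (years)
      u = np.array([0.4, 0.5, 0.6, 0.3])           # reported (recognized) uncertainties
      mean, unc, y = mandel_paule(x, u)
      print(f"weighted mean = {mean:.2f} +/- {unc:.2f}, unrecognized variance = {y:.3f}")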

  18. Analysis of longitudinal variations in North Pacific alkalinity

    Fry, C.; Tyrrell, T.; Achterberg, E. P.

    2016-02-01

    Carbon measurements in the ocean lack the coverage of physical measurements, so approximate alkalinity is predicted where data are unavailable. Surface alkalinity in the North Pacific is poorly characterised by predictive algorithms. Understanding the processes affecting alkalinity in this area can improve the equations. We investigated the causes of regional variations in alkalinity using GLODAPv2. We tested different hypotheses for the causes of three longitudinal phenomena in surface ocean values of Alk*, a tracer of calcium carbonate cycling. These phenomena are: (a) an increase in Alk* from east to west at 50°N, (b) an increase in Alk* from west to east at 30°N, and (c) the lack of a strong increase in Alk* from west to east in the equatorial upwelling area. We found that the most likely cause of higher Alk* on the western side of the subpolar North Pacific (at 50°N) is that denser isopycnals with higher Alk* lie at shallower depths on the western side than on the eastern side. At 30°N, the main cause of higher Alk* on the eastern side of the basin is upwelling along the continental shelf of southwestern North America. Along the equator, our analyses suggest that the absence of a strong east-west trend is because the more intense upwelling on the eastern side of the basin does not, under normal conditions, lead to strong elevation of Alk*. However, surface Alk* is more strongly elevated in the eastern Equatorial Pacific during negative phases of the El Niño-Southern Oscillation, probably because the upwelled water comes from greater depth at these times.

  19. Analysis of the intersexual variation in Thalassophryne maculosa fish venoms.

    Lopes-Ferreira, Mônica; Sosa-Rosales, Ines; Bruni, Fernanda M; Ramos, Anderson D; Vieira Portaro, Fernanda Calheta; Conceição, Katia; Lima, Carla

    2016-06-01

    Gender-related variation in the molecular composition of venoms and secretions has been described for some animal species, and there is some evidence that differences in the toxin profile between males and females may be related to different physiopathological effects caused by envenomation by either gender. In order to investigate whether this same phenomenon occurs in the toadfish Thalassophryne maculosa, we have compared some biological and biochemical properties of female and male venoms. Twenty females and males were collected in deep waters of the La Restinga lagoon (Venezuela) and, after the protein concentration was assessed, the induction of toxic activities in mice and the biochemical properties were analyzed. Protein content is higher in males than in females, which may be associated with the larger size and weight of the male body. In vivo studies showed that mice injected with male venoms presented higher nociception when compared to those injected with female venoms, and both venoms induced migration of macrophages into the paw of mice. On the other hand, mice injected with female venoms had more paw edema and extravasation of Evans blue in the peritoneal cavity than mice injected with male venoms. We observed that the female venoms had a greater capacity for necrosis induction when compared with male venoms. The female samples present a higher proteolytic activity than the male venom when gelatin, casein and FRET substrates were used. Evaluation of the venoms of females and males by SDS-PAGE and chromatographic profiling showed that at least three components (present in two peaks) are only present in males. Although the severity of the lesion, characterized by necrosis development, is related to poisoning by female specimens, the presence of toxins exclusive to the male venoms could be associated with their greater capacity for nociception induction. Copyright © 2016 Elsevier Ltd. All rights reserved.

  20. Copy number variations in affective disorders and meta-analysis

    Olsen, Line; Hansen, Thomas; Djurovic, Srdjan

    2011-01-01

    in a combined analysis of three case-control samples from Denmark, Norway and Iceland. A total of 1897 cases (n=1223 unipolar and n=463 bipolar) and 11 231 controls were analyzed for CNVs at the 10 genomic loci, but we found no combined association between these CNVs and affective disorders....

  1. analysis of pressure variation of fluid in bounded circular reservoirs

    user

    analysis of the analysed finite element, imposing the boundary conditions and finally, getting the results that ... in reservoir engineering applications [2–7]. ... THEORY. The law of conservation of mass, Darcy's law and the equation of state has been combined to obtain the ..... fields in laser-two-layer solids weak interactions.

  2. Genetic variation and DNA markers in forensic analysis

    SAM

    2014-07-30

    Jul 30, 2014 ... Author(s) agree that this article remain permanently open access under the terms of the Creative Commons Attribution License. 4.0 International ... (mtDNA) is today a routine method of analysis of biological ... A promising approach in this context seems to be .... 1985; Armour et al., 1996). ...... management.

  3. Analysis of genetic variation in Erianthus arundinaceum by random ...

    STORAGESEVER

    2008-10-06

    Oct 6, 2008 ... MATERIALS AND METHODS. Fifty-one E. arundinaceum accessions were used in the RAPD analysis. Figure 1. Plant materials planted in the sugarcane germ- plasm garden of Yunnan Agricultural University (YAU). Name and origin of the accessions are shown in Table 1. DNA was extracted from leaves ...

  4. Elliptic Fourier Analysis of body shape variation of Hippocampus spp. (seahorse in Danajon Bank, Philippines

    S. R. M. Tabugo-Rico

    2017-12-01

    Seahorses inhabit various ecosystems and hence have become flagship species of the marine environment. The Philippines, as a hot spot of biodiversity in Asia, holds a number of species of seahorses. This serves as an exploratory study describing body shape variation in selected common seahorse species, Hippocampus comes, Hippocampus histrix, Hippocampus spinosissimus and Hippocampus kuda, from Danajon Bank using Elliptic Fourier Analysis. The method was used to test whether significant yet subtle differences in body shape variation can be species-specific and habitat-influenced, and whether they provide evidence of sexual dimorphism. It is hypothesized that phenotypic divergence may provide evidence for genetic differentiation or mere adaptations to habitat variation. Results show considerable differences in the body shapes of the five populations based on canonical variate analysis (CVA) and multivariate analysis of variance (MANOVA), with significant p values. Populations were found to be distinct from each other, suggesting that body shape variation is species-specific, habitat-influenced, and provides evidence for sexual dimorphism. Results of discriminant analysis show further support for species-specific traits and sexual dimorphism. This study shows the application of geometric morphometrics, specifically elliptic Fourier analysis, in describing subtle body shape variation in selected Hippocampus species.
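
    For readers unfamiliar with the descriptors behind such an analysis, the sketch below computes elliptic Fourier coefficients for a closed outline following the standard Kuhl and Giardina formulation; the outline is a toy shape, not a digitized seahorse, and the study's actual processing pipeline may differ.

      # Elliptic Fourier descriptors of a closed outline (Kuhl & Giardina formulation).
      import numpy as np

      def elliptic_fourier_coeffs(contour: np.ndarray, order: int = 10) -> np.ndarray:
          """contour: (K, 2) array of x,y points of a closed outline. Returns (order, 4) coefficients."""
          d = np.diff(np.vstack([contour, contour[:1]]), axis=0)   # close the outline
          dt = np.hypot(d[:, 0], d[:, 1])
          t = np.concatenate([[0.0], np.cumsum(dt)])
          T = t[-1]
          coeffs = np.zeros((order, 4))
          for n in range(1, order + 1):
              c = T / (2 * n**2 * np.pi**2)
              phi = 2 * np.pi * n * t / T
              d_cos = np.cos(phi[1:]) - np.cos(phi[:-1])
              d_sin = np.sin(phi[1:]) - np.sin(phi[:-1])
              coeffs[n - 1] = [
                  c * np.sum(d[:, 0] / dt * d_cos),   # a_n
                  c * np.sum(d[:, 0] / dt * d_sin),   # b_n
                  c * np.sum(d[:, 1] / dt * d_cos),   # c_n
                  c * np.sum(d[:, 1] / dt * d_sin),   # d_n
              ]
          return coeffs

      # Toy outline: an ellipse with a bump, just to exercise the function
      theta = np.linspace(0, 2 * np.pi, 200, endpoint=False)
      outline = np.column_stack([np.cos(theta), 0.5 * np.sin(theta) + 0.1 * np.sin(3 * theta)])
      print(elliptic_fourier_coeffs(outline, order=4))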

  5. Variations analysis of the Society's preference structure regarding environmental issues

    Angel S, Enrique; Zambrano B, Ana Maria

    2005-01-01

    Society's preference structure regarding environmental issues is understood as the relative importance that society gives to the various topics that collectively make up the environmental agenda. Based on the hypothesis that the behaviour and definition of this structure vary with time, proposals are presented for the concepts and a working plan that allow a dynamic analysis of the structure. A method is described for gathering information based on the systematic reading of a nationwide newspaper over a period of time. A comparison is made between the resulting structure and several aspects such as environmental legislation, government plans, and environmental summits and milestones.

  6. Design analysis of vertical wind turbine with airfoil variation

    Maulana, Muhammad Ilham; Qaedy, T. Masykur Al; Nawawi, Muhammad

    2016-03-01

    With an ever-increasing electrical energy crisis occurring in Banda Aceh, it is important to investigate alternative methods of generating power from sources other than fossil fuels. In fact, one of the biggest sources of energy in Aceh is wind energy. It can be harnessed not only by big corporations but also by individuals using Vertical Axis Wind Turbines (VAWT). This paper presents a three-dimensional CFD analysis of the influence of airfoil design on the performance of a Darrieus-type vertical-axis wind turbine (VAWT). The main objective of this paper is to develop an airfoil design for a NACA 63-series vertical axis wind turbine, for an average wind velocity of 2.5 m/s. To utilize both lift and drag forces, several airfoil designs are analyzed using a commercial computational fluid dynamics solver, Fluent. Simulations are performed for these airfoils at angles of attack of -12°, -8°, -4°, 0°, 4°, 8°, and 12°. The analysis showed that, among the NACA 63-series airfoils, the most significant enhancement in lift coefficient occurred for NACA 63-412.

  7. Implicit functions and solution mappings a view from variational analysis

    Dontchev, Asen L

    2014-01-01

    The implicit function theorem is one of the most important theorems in analysis and its many variants are basic tools in partial differential equations and numerical analysis. This second edition of Implicit Functions and Solution Mappings presents an updated and more complete picture of the field by including solutions of problems that have been solved since the first edition was published, and places old and new results in a broader perspective. The purpose of this self-contained work is to provide a reference on the topic and to provide a unified collection of a number of results which are currently scattered throughout the literature. Updates to this edition include new sections in almost all chapters, new exercises and examples, updated commentaries to chapters and an enlarged index and references section. From reviews of the first edition: “The book commences with a helpful context-setting preface followed by six chapters. Each chapter starts with a useful preamble and concludes with a careful and ins...

  8. Procedures for multielement analysis using high-flux fast-neutron activation

    Williams, R.E.; Hopke, P.K.; Meyer, R.A.

    1981-06-01

    Improvements have been made in the rabbit system used for multi-element fast-neutron activation analysis at the Lawrence Livermore National Laboratory Rotating Target Neutron Source, RTNS-I. Procedures have been developed for the analysis of 20 to 25 elements in samples with an inorganic matrix and 10 to 15 elements in biological samples, without the need for prohibitively expensive, long irradiations. Results are presented for the analysis of fly ash, orchard leaves, and bovine liver

  9. Reliability assessment of a manual-based procedure towards learning curve modeling and fmea analysis

    Gustavo Rech

    2013-03-01

    Separation procedures in drug Distribution Centers (DCs) are manual activities prone to failures such as shipping swapped, expired or broken drugs to the customer. Two interventions seem promising for improving the reliability of the separation procedure: (i) selection and allocation of appropriate operators to the procedure, and (ii) analysis of potential failure modes incurred by the selected operators. This article integrates Learning Curves (LC) and FMEA (Failure Mode and Effect Analysis) with the aim of reducing the occurrence of failures in the manual separation of a drug DC. LC parameters enable the generation of an index to identify the operators recommended to perform the procedures. The FMEA is then applied to the separation procedure carried out by the selected operators in order to identify failure modes. It also deploys the traditional FMEA severity index into two sub-indexes, related to financial issues and damage to the company's image, in order to characterize failure severity. When applied to a drug DC, the proposed method significantly reduced the frequency and severity of failures in the separation procedure.
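
    A hedged sketch of the two ingredients: a Wright-type learning curve (t = t1 * x**b) fitted by log-log regression to rank operators, and an FMEA-style risk priority number in which the severity index is split into financial and image sub-indexes, mirroring the article's idea. All numbers, and the way the two severity sub-indexes are combined, are assumptions for illustration.

      # Learning-curve fit plus a two-severity FMEA risk number; all values are invented.
      import numpy as np

      def fit_learning_curve(repetitions, times):
          """Return (t1, b): time for the first unit and the learning exponent."""
          b, log_t1 = np.polyfit(np.log(repetitions), np.log(times), 1)
          return np.exp(log_t1), b

      reps = np.arange(1, 11)
      times_op_a = 120 * reps ** -0.25 + np.random.default_rng(1).normal(0, 2, 10)
      t1, b = fit_learning_curve(reps, times_op_a)
      print(f"operator A: t1 = {t1:.1f} s, learning exponent b = {b:.3f}")

      def risk_priority(occurrence, detection, severity_financial, severity_image):
          """FMEA-style priority number with the severity index split into two sub-indexes
          (averaging the sub-indexes is an assumption, not the article's formula)."""
          return occurrence * detection * (severity_financial + severity_image) / 2

      print("RPN for 'expired drug shipped':",
            risk_priority(occurrence=4, detection=6, severity_financial=7, severity_image=9))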

  10. Formalizing the Austrian Procedure Catalogue: A 4-step methodological analysis approach.

    Neururer, Sabrina Barbara; Lasierra, Nelia; Peiffer, Karl Peter; Fensel, Dieter

    2016-04-01

    Due to the lack of an internationally accepted and adopted standard for coding health interventions, Austria has established its own country-specific procedure classification system - the Austrian Procedure Catalogue (APC). Even though the APC is an elaborate coding standard for medical procedures, it has shortcomings that limit its usability. In order to enhance usability and usefulness, especially for research purposes and e-health applications, we developed an ontologized version of the APC. In this paper we present a novel four-step approach for the ontology engineering process, which enables accurate extraction of relevant concepts for medical ontologies from written text. The proposed approach for formalizing the APC consists of the following four steps: (1) comparative pre-analysis, (2) definition analysis, (3) typological analysis, and (4) ontology implementation. The first step contained a comparison of the APC to other well-established or elaborate health intervention coding systems in order to identify strengths and weaknesses of the APC. In the second step, a list of definitions of medical terminology used in the APC was obtained. This list of definitions was used as input for Step 3, in which we identified the most important concepts to describe medical procedures using the qualitative typological analysis approach. The definition analysis as well as the typological analysis are well-known and effective methods used in social sciences, but not commonly employed in the computer science or ontology engineering domain. Finally, this list of concepts was used in Step 4 to formalize the APC. The pre-analysis highlighted the major shortcomings of the APC, such as the lack of formal definition, leading to implicitly available, but not directly accessible information (hidden data), or the poor procedural type classification. After performing the definition and subsequent typological analyses, we were able to identify the following main characteristics of

  11. Power analysis for multivariate and repeated measures designs: a flexible approach using the SPSS MANOVA procedure.

    D'Amico, E J; Neilands, T B; Zambarano, R

    2001-11-01

    Although power analysis is an important component in the planning and implementation of research designs, it is often ignored. Computer programs for performing power analysis are available, but most have limitations, particularly for complex multivariate designs. An SPSS procedure is presented that can be used for calculating power for univariate, multivariate, and repeated measures models with and without time-varying and time-constant covariates. Three examples provide a framework for calculating power via this method: an ANCOVA, a MANOVA, and a repeated measures ANOVA with two or more groups. The benefits and limitations of this procedure are discussed.
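
    The underlying calculation such a procedure performs is a noncentral-F power computation. The sketch below is not the SPSS MANOVA syntax itself but the generic one-way ANOVA case, with a hypothetical effect size and group size.

      # Power for a one-way ANOVA via the noncentral F distribution; inputs are hypothetical.
      from scipy.stats import f as f_dist, ncf

      def anova_power(effect_size_f: float, n_per_group: int, k_groups: int, alpha: float = 0.05) -> float:
          n_total = n_per_group * k_groups
          df1, df2 = k_groups - 1, n_total - k_groups
          nc = effect_size_f**2 * n_total                 # noncentrality parameter
          f_crit = f_dist.ppf(1 - alpha, df1, df2)
          return 1 - ncf.cdf(f_crit, df1, df2, nc)

      print(f"power = {anova_power(effect_size_f=0.25, n_per_group=30, k_groups=3):.3f}")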

  12. Phoenix – A model-based Human Reliability Analysis methodology: Qualitative Analysis Procedure

    Ekanem, Nsimah J.; Mosleh, Ali; Shen, Song-Hua

    2016-01-01

    Phoenix method is an attempt to address various issues in the field of Human Reliability Analysis (HRA). Built on a cognitive human response model, Phoenix incorporates strong elements of current HRA good practices, leverages lessons learned from empirical studies, and takes advantage of the best features of existing and emerging HRA methods. Its original framework was introduced in previous publications. This paper reports on the completed methodology, summarizing the steps and techniques of its qualitative analysis phase. The methodology introduces the “Crew Response Tree” which provides a structure for capturing the context associated with Human Failure Events (HFEs), including errors of omission and commission. It also uses a team-centered version of the Information, Decision and Action cognitive model and “macro-cognitive” abstractions of crew behavior, as well as relevant findings from cognitive psychology literature and operating experience, to identify potential causes of failures and influencing factors during procedure-driven and knowledge-supported crew-plant interactions. The result is the set of identified HFEs and likely scenarios leading to each. The methodology itself is generic in the sense that it is compatible with various quantification methods, and can be adapted for use across different environments including nuclear, oil and gas, aerospace, aviation, and healthcare. - Highlights: • Produces a detailed, consistent, traceable, reproducible and properly documented HRA. • Uses “Crew Response Tree” to capture context associated with Human Failure Events. • Models dependencies between Human Failure Events and influencing factors. • Provides a human performance model for relating context to performance. • Provides a framework for relating Crew Failure Modes to its influencing factors.

  13. Effectiveness of internet-based affect induction procedures: A systematic review and meta-analysis.

    Ferrer, Rebecca A; Grenen, Emily G; Taber, Jennifer M

    2015-12-01

    Procedures used to induce affect in a laboratory are effective and well-validated. Given recent methodological and technological advances in Internet research, it is important to determine whether affect can be effectively induced using Internet methodology. We conducted a meta-analysis and systematic review of prior research that has used Internet-based affect induction procedures, and examined potential moderators of the effectiveness of affect induction procedures. Twenty-six studies were included in final analyses, with 89 independent effect sizes. Affect induction procedures effectively induced general positive affect, general negative affect, fear, disgust, anger, sadness, and guilt, but did not significantly induce happiness. Contamination of other nontarget affect did not appear to be a major concern. Video inductions resulted in greater effect sizes. Overall, results indicate that affect can be effectively induced in Internet studies, suggesting an important venue for the acceleration of affective science. (PsycINFO Database Record (c) 2015 APA, all rights reserved).

  14. Pretreatment procedures applied to samples to be analysed by neutron activation analysis at CDTN/CNEN

    Francisco, Dovenir; Menezes, Maria Angela de Barros Correia

    2009-01-01

    The neutron activation technique, using several methods, has been applied to 80% of the analytical demand of the Division for Reactor and Analytical Techniques at CDTN/CNEN, Belo Horizonte, Minas Gerais. This scenario emphasizes the responsibility of the Laboratory to provide and assure the quality of the measurements. Therefore, this paper describes the experimental procedures adopted at CDTN/CNEN in order to standardize the conditions of analysis and to avoid contamination by elements present everywhere. Some of the procedures are based on methods described in the literature; others are based on many years of experience preparing samples from many kinds of matrices. The procedures described are related to geological materials (soil, sediment, rock, gems, clay, archaeological ceramics and ore), biological materials (hair, fish, plants, food), water, etc. Analytical results for sediment samples are shown as an example, pointing out the efficiency of the experimental procedure. (author)

  15. Analysis of interspecies physicochemical variation of grain legume seeds

    Rybiński, Wojciech; Rusinek, Robert; Szot, Bogusław; Bocianowski, Jan; Starzycki, Michał

    2014-10-01

    The paper presents an attempt to assess the reaction of seeds to mechanical loads taking into account their geometry expressed as seed thickness and 1000 seed weight. The initial material comprised 33 genotypes of grain legume plants and included cultivars registered in the country and breeding lines that are subject to pre-registration trials. The analysis of variance revealed significant diversity of the cultivars and lines of the species studied in terms of each of the analysed traits. The highest 1000-seed weight was obtained for white lupine and pea, the lowest for Andean lupine. The highest maximum deformation and energy were obtained for white lupine seeds and the lowest for pea seeds, while for the maximum force and modulus the lowest values were determined for narrow-leafed lupine and pea. The highest protein contents were obtained for Andean and yellow lupine, and the highest fat contents for Andean and white lupine. Linoleic and oleic acids accounted for as much as 70% or more of the fatty acid profile. Against the background of all the species, white lupine seeds are distinguished by a high content of oleic acid and the lowest content of linoleic acid, while yellow lupine showed the inverse ratio of the two acids.
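
    The analysis of variance mentioned above is not detailed in the record. The sketch below is a minimal illustration of a one-way ANOVA comparing a trait (here, hypothetical 1000-seed weights) across genotype groups using SciPy; the numbers are invented, not the paper's measurements.

```python
from scipy import stats

# Hypothetical 1000-seed weights (g) for three grain legume genotypes
white_lupine  = [310, 325, 298, 340, 315]
pea           = [245, 260, 251, 238, 255]
andean_lupine = [182, 190, 176, 201, 188]

# One-way analysis of variance across the three groups
f_stat, p_value = stats.f_oneway(white_lupine, pea, andean_lupine)
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_value:.4g}")
```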

  16. Genetic variation analysis of the Bali street dog using microsatellites

    Wilton Alan N

    2005-02-01

    Background: Approximately 800,000 primarily feral dogs live on the small island of Bali. To analyze the genetic diversity in this population, forty samples were collected at random from dogs in the Denpasar, Bali region and tested using 31 polymorphic microsatellites. Australian dingoes and 28 American Kennel Club breeds were compared to the Bali Street Dog (BSD) for allelic diversity, heterozygosities, F-statistics, GST estimates, Nei's DA distance and phylogenetic relationships. Results: The BSD proved to be the most heterogeneous, exhibiting 239 of the 366 total alleles observed across all groups and breeds, and had an observed heterozygosity of 0.692. Thirteen private alleles were observed in the BSD, with an additional three alleles observed only in the BSD and the Australian dingo. The BSD was related most closely to the Chow Chow with an FST of 0.088 and also, with high bootstrap support, to the Australian dingo and Akita in the phylogenetic analysis. Conclusions: This preliminary study into the diversity and relationship of the BSD to other domestic and feral dog populations shows the BSD to be highly heterogeneous and related to populations of East Asian origin. These results indicate that a viable and diverse population of dogs existed on the island of Bali prior to its geographic isolation approximately 12,000 years ago and has been little influenced by domesticated European dogs since that time.
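
    The record reports heterozygosities, F-statistics and GST estimates without giving formulas. Under the usual definitions (observed heterozygosity as the fraction of heterozygous genotypes, gene diversity as 1 minus the sum of squared allele frequencies, and Nei's GST), a minimal single-locus sketch might look like the following; the genotypes are invented, not the study's microsatellite data.

```python
from collections import Counter

def observed_heterozygosity(genotypes):
    """Fraction of individuals carrying two different alleles at a locus."""
    return sum(a != b for a, b in genotypes) / len(genotypes)

def expected_heterozygosity(genotypes):
    """Nei's gene diversity: 1 - sum of squared allele frequencies."""
    alleles = [a for g in genotypes for a in g]
    n = len(alleles)
    freqs = [c / n for c in Counter(alleles).values()]
    return 1.0 - sum(f * f for f in freqs)

def gst(populations):
    """Nei's GST = (HT - HS) / HT for one locus across populations."""
    hs = sum(expected_heterozygosity(p) for p in populations) / len(populations)
    pooled = [g for p in populations for g in p]
    ht = expected_heterozygosity(pooled)
    return (ht - hs) / ht

# Hypothetical single-locus microsatellite genotypes (allele sizes) for two groups
bali_dogs = [(150, 154), (150, 158), (154, 158), (158, 158), (150, 162)]
dingoes   = [(150, 150), (150, 154), (154, 154), (150, 150), (154, 154)]

print("Ho (Bali street dogs):", observed_heterozygosity(bali_dogs))
print("GST between groups:", round(gst([bali_dogs, dingoes]), 3))
```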

  17. Comprehensive analysis of NuMA variation in breast cancer

    Aittomäki Kristiina

    2008-03-01

    Background: A recent genome-wide case-control association study identified the NuMA region on 11q13 as a candidate locus for breast cancer susceptibility. Specifically, the variant Ala794Gly was suggested to be associated with increased risk of breast cancer. Methods: In order to evaluate the NuMA gene for breast cancer susceptibility, we have here screened the entire coding region and exon-intron boundaries of NuMA in 92 familial breast cancer patients and constructed haplotypes of the identified variants. Five missense variants were further screened in 341 breast cancer cases with a positive family history and 368 controls. We examined the frequency of Ala794Gly in an extensive series of familial (n = 910) and unselected (n = 884) breast cancer cases and controls (n = 906), with a high power to detect the suggested breast cancer risk. We also tested if the variant is associated with histopathologic features of breast tumors. Results: Screening of NuMA resulted in identification of 11 exonic variants and 12 variants in introns or untranslated regions. Five missense variants that were further screened in breast cancer cases with a positive family history and controls were each carried on a unique haplotype. None of the variants, or the haplotypes represented by them, was associated with breast cancer risk, although due to low power in this analysis, very low risk alleles may go unrecognized. The NuMA Ala794Gly showed no difference in frequency in the unselected breast cancer case series or familial case series compared to control cases. Furthermore, Ala794Gly did not show any significant association with histopathologic characteristics of the tumors, though Ala794Gly was slightly more frequent among unselected cases with lymph node involvement. Conclusion: Our results do not support the role of NuMA variants as breast cancer susceptibility alleles.

  18. Comprehensive analysis of NuMA variation in breast cancer

    Kilpivaara, Outi; Rantanen, Matias; Tamminen, Anitta; Aittomäki, Kristiina; Blomqvist, Carl; Nevanlinna, Heli

    2008-01-01

    A recent genome-wide case-control association study identified the NuMA region on 11q13 as a candidate locus for breast cancer susceptibility. Specifically, the variant Ala794Gly was suggested to be associated with increased risk of breast cancer. In order to evaluate the NuMA gene for breast cancer susceptibility, we have here screened the entire coding region and exon-intron boundaries of NuMA in 92 familial breast cancer patients and constructed haplotypes of the identified variants. Five missense variants were further screened in 341 breast cancer cases with a positive family history and 368 controls. We examined the frequency of Ala794Gly in an extensive series of familial (n = 910) and unselected (n = 884) breast cancer cases and controls (n = 906), with a high power to detect the suggested breast cancer risk. We also tested if the variant is associated with histopathologic features of breast tumors. Screening of NuMA resulted in identification of 11 exonic variants and 12 variants in introns or untranslated regions. Five missense variants that were further screened in breast cancer cases with a positive family history and controls were each carried on a unique haplotype. None of the variants, or the haplotypes represented by them, was associated with breast cancer risk, although due to low power in this analysis, very low risk alleles may go unrecognized. The NuMA Ala794Gly showed no difference in frequency in the unselected breast cancer case series or familial case series compared to control cases. Furthermore, Ala794Gly did not show any significant association with histopathologic characteristics of the tumors, though Ala794Gly was slightly more frequent among unselected cases with lymph node involvement. Our results do not support the role of NuMA variants as breast cancer susceptibility alleles.

  19. Partial differential equations with variable exponents: variational methods and qualitative analysis

    Radulescu, Vicentiu D

    2015-01-01

    Partial Differential Equations with Variable Exponents: Variational Methods and Qualitative Analysis provides researchers and graduate students with a thorough introduction to the theory of nonlinear partial differential equations (PDEs) with a variable exponent, particularly those of elliptic type. The book presents the most important variational methods for elliptic PDEs described by nonhomogeneous differential operators and containing one or more power-type nonlinearities with a variable exponent. The authors give a systematic treatment of the basic mathematical theory and constructive methods.

  20. Analysis of indel variations in the human disease-associated genes ...

    Keywords: insertion–deletion variations; haematological disease; tumours; human genetics.

  1. A cross-sectional analysis of variation in charges and prices across California for percutaneous coronary intervention.

    Renee Y Hsia

    Though past studies have shown wide variation in aggregate hospital price indices and specific procedures, few have documented or explained such variation for distinct and common episodes of care. We sought to examine the variability in charges for percutaneous coronary intervention (PCI) with a drug-eluting stent and without major complications (MS-DRG-247), and to determine whether hospital and market characteristics influenced these charges. We conducted a cross-sectional analysis of adults admitted to California hospitals in 2011 for MS-DRG-247 using patient discharge data from the California Office of Statewide Health Planning and Development. We used a two-part linear regression model to first estimate hospital-specific charges adjusted for patient characteristics, and then examine whether the between-hospital variation in those estimated charges was explained by hospital and market characteristics. Adjusted charges for the average California patient admitted for uncomplicated PCI ranged from $22,047 to $165,386 (median: $88,350) depending on which hospital the patient visited. Hospitals in areas with the highest cost of living, those in rural areas, and those with more Medicare patients had higher charges, while government-owned hospitals charged less. Overall, our model explained 43% of the variation in adjusted charges. Estimated discounted prices paid by private insurers ranged from $3,421 to $80,903 (median: $28,571). Charges and estimated discounted prices vary widely between hospitals for the average California patient undergoing PCI without major complications, a common and relatively homogeneous episode of care. Though observable hospital characteristics account for some of this variation, the majority remains unexplained.
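
    The record's two-part linear regression is not specified beyond the description above. The sketch below illustrates one plausible reading on synthetic data: an OLS model of log charges with hospital fixed effects, followed by a regression of the estimated hospital effects on a hospital characteristic. The variable names, data, and model form are assumptions for illustration, not the authors' specification.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# --- Synthetic discharge-level data (stand-in for the California PCI discharges) ---
n_hosp, n_pat = 20, 2000
hosp_effect = rng.normal(0.0, 0.4, n_hosp)            # true log-scale hospital effects
hosp_rural = rng.integers(0, 2, n_hosp)                # hypothetical hospital characteristic
hospital = rng.integers(0, n_hosp, n_pat)
age = rng.normal(65, 10, n_pat)
log_charge = 11.0 + 0.01 * (age - 65) + hosp_effect[hospital] + rng.normal(0, 0.3, n_pat)
df = pd.DataFrame({"log_charge": log_charge, "age": age, "hospital": hospital})

# --- Part 1: patient-level model with hospital fixed effects ---
part1 = smf.ols("log_charge ~ age + C(hospital)", data=df).fit()

# Recover each hospital's adjusted (log) charge relative to the reference hospital
adj = np.zeros(n_hosp)
for h in range(1, n_hosp):
    adj[h] = part1.params[f"C(hospital)[T.{h}]"]

# --- Part 2: explain between-hospital variation with hospital characteristics ---
hosp_df = pd.DataFrame({"adj_log_charge": adj, "rural": hosp_rural})
part2 = smf.ols("adj_log_charge ~ rural", data=hosp_df).fit()
print(part2.params)
print("share of between-hospital variation explained:", round(part2.rsquared, 2))
```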

  2. A finite volume procedure for fluid flow, heat transfer and solid-body stress analysis

    Jagad, P. I.

    2018-04-12

    A unified cell-centered unstructured mesh finite volume procedure is presented for fluid flow, heat transfer and solid-body stress analysis. An in-house procedure (A. W. Date, Solution of Transport Equations on Unstructured Meshes with Cell-Centered Colocated Variables. Part I: Discretization, International Journal of Heat and Mass Transfer, vol. 48 (6), 1117-1127, 2005) is extended to include the solid-body stress analysis. The transport terms for a cell-face are evaluated in a structured grid-like manner. The Cartesian gradients at the center of each cell-face are evaluated using the coordinate transformation relations. The accuracy of the procedure is demonstrated by solving several benchmark problems involving different boundary conditions, source terms, and types of loading.

  3. An automated sensitivity analysis procedure for the performance assessment of nuclear waste isolation systems

    Pin, F.G.; Worley, B.A.; Oblow, E.M.; Wright, R.Q.; Harper, W.V.

    1986-01-01

    To support an effort in making large-scale sensitivity analyses feasible, cost efficient and quantitatively complete, the authors have developed an automated procedure making use of computer calculus. The procedure, called GRESS (GRadient Enhanced Software System), is embodied in a precompiler that can process Fortran computer codes and add derivative-taking capabilities to the normal calculation scheme. In this paper, the automated GRESS procedure is described and applied to the code UCB-NE-10.2, which simulates the migration through a sorption medium of the radionuclide members of a decay chain. The sensitivity calculations for a sample problem are verified using comparison with analytical and perturbation analysis results. Conclusions are drawn relative to the applicability of GRESS for more general large-scale sensitivity studies, and the role of such techniques in an overall sensitivity and uncertainty analysis program is discussed
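
    GRESS itself is a Fortran precompiler and is not reproduced here. The derivative-augmentation idea it relies on (carrying a derivative alongside every value through the calculation) can be sketched with forward-mode automatic differentiation using dual numbers; the toy decay model below is invented for illustration and is not the UCB-NE-10.2 code.

```python
import math

class Dual:
    """Minimal forward-mode automatic differentiation: value plus derivative."""
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der
    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.der + other.der)
    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val * other.val,
                    self.der * other.val + self.val * other.der)
    def exp(self):
        e = math.exp(self.val)
        return Dual(e, e * self.der)

def decayed_activity(lam, t):
    """Toy model: activity remaining after time t with decay constant lam."""
    return (lam * t * -1.0).exp() if isinstance(lam, Dual) else math.exp(-lam * t)

lam = Dual(0.05, 1.0)            # seed derivative d(lam)/d(lam) = 1
out = decayed_activity(lam, t=10.0)
print("value:", out.val, " d(output)/d(lam):", out.der)
```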

  4. Experimental verification for standard analysis procedure of 241Am in food

    Liu Qingfen; Zhu Hongda; Liu Shutian; Pan Jingshun; Yang Dating

    2005-01-01

    Objective: A brief experimental verification of 'determination of 241Am in food' is described. Methods: The overall recovery, the MDL of the method and a decontamination experiment were determined following the standard analysis procedure. Results: The overall recovery is 76.26 ± 4.1%. The MDL is 3.4 × 10⁻⁵ Bq/g ash, and the decontamination factor is higher than 10³ for Po, 10² for U, Th and Pu, and 60 for 237Np. Conclusion: The results showed that the overall recovery is high and reliable, and the MDL of the method is adequate for checking 241Am against the limit values in foods. The decontamination factors obtained with the recommended procedure are adequate for the analysis of 241Am in food examination. The verification results of the procedure are satisfactory, as shown using a 243Am spike and a 241Am standard reference material. (authors)

  5. Variation in rates of breast cancer surgery: A national analysis based on French Hospital Episode Statistics.

    Rococo, E; Mazouni, C; Or, Z; Mobillion, V; Koon Sun Pat, M; Bonastre, J

    2016-01-01

    Minimum volume thresholds were introduced in France in 2008 to improve the quality of cancer care. We investigated whether/how the quality of treatment decisions in breast cancer surgery had evolved before and after this policy was implemented. We used Hospital Episode Statistics for all women having undergone breast conserving surgery (BCS) or mastectomy in France in 2005 and 2012. Three surgical procedures considered as better treatment options were analyzed: BCS, immediate breast reconstruction (IBR) and sentinel lymph node biopsy (SLNB). We studied the mean rates and variation according to the hospital profile and volume. Between 2005 and 2012, the volume of breast cancer surgery increased by 11% whereas one third of the hospitals no longer performed this type of surgery. In 2012, the mean rate of BCS was 74% and similar in all hospitals whatever the volume. Conversely, IBR and SLNB rates were much higher in cancer centers (CC) and regional teaching hospitals (RTH) [IBR: 19% and 14% versus 8% on average; SLNB: 61% and 47% versus 39% on average]; the greater the hospital volume, the higher the IBR and SLNB rates (p < 0.0001). Overall, whatever the surgical procedure considered, inter-hospital variation in rates declined substantially in CC and RTH. We identified considerable variation in IBR and SLNB rates between French hospitals. Although more complex and less standardized than BCS, most clinical guidelines recommended these procedures. This apparent heterogeneity suggests unequal access to high-quality procedures for women with breast cancer. Copyright © 2015 Elsevier Ltd. All rights reserved.

  6. Methodologies for uncertainty analysis in the level 2 PSA and their implementation procedures

    Ahn, Kwang Il; Yang, Joon Eun; Kim, Dong Ha

    2002-04-01

    The main purpose of this report is to present standardized methodologies for uncertainty analysis in the Level 2 Probabilistic Safety Assessment (PSA) and their implementation procedures, based on results obtained through a critical review of the existing methodologies for the analysis of uncertainties employed in the Level 2 PSA, especially the Accident Progression Event Tree (APET). The uncertainties employed in the Level 2 PSA are quantitative expressions of the overall knowledge of the analysts and experts participating in the probabilistic quantification of phenomenological accident progressions ranging from core melt to containment failure; their numerical values are directly related to the degree of confidence that the analyst has that a given phenomenological event or accident process will or will not occur, that is, to the analyst's subjective probabilities of occurrence. The results obtained from the Level 2 PSA uncertainty analysis become an essential contributor to the plant risk, in addition to the Level 1 PSA and Level 3 PSA uncertainties. The uncertainty analysis methodologies and their implementation procedures presented in this report were prepared based on the following criterion: 'the uncertainty quantification process must be logical, scrutable, complete, consistent and in an appropriate level of detail, as mandated by the Level 2 PSA objectives'. For the aforementioned purpose, this report deals mainly with (1) a summary of general and Level 2 PSA-specific uncertainty analysis methodologies, (2) selection of phenomenological branch events for uncertainty analysis in the APET and a methodology for quantification of APET uncertainty inputs and its implementation procedure, (3) statistical propagation of uncertainty inputs through the APET and its implementation procedure, and (4) a formal procedure for quantification of APET uncertainties and source term categories (STCs) through the Level 2 PSA quantification codes.

  7. A Procedure for the Computerized Analysis of Cleft Palate Speech Transcription

    Fitzsimons, David A.; Jones, David L.; Barton, Belinda; North, Kathryn N.

    2012-01-01

    The phonetic symbols used by speech-language pathologists to transcribe speech contain underlying hexadecimal values used by computers to correctly display and process transcription data. This study aimed to develop a procedure to utilise these values as the basis for subsequent computerized analysis of cleft palate speech. A computer keyboard…
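
    The record's idea of working from the hexadecimal values underlying transcription symbols can be illustrated with a short, hypothetical sketch: each character of a phonetic transcription is mapped to its Unicode code point in hexadecimal, which can then be tallied or compared across transcriptions. The transcriptions below are invented examples, not the study's data.

```python
from collections import Counter

def to_hex_codes(transcription):
    """Map each character of a phonetic transcription to its Unicode code point (hex)."""
    return [f"U+{ord(ch):04X}" for ch in transcription]

# Hypothetical narrow transcriptions (not drawn from the study's data)
transcriptions = ["pʰæt", "mæ̃n", "bʌʔə"]

for t in transcriptions:
    print(t, "->", to_hex_codes(t))

# Frequency of individual symbols across the sample, keyed by hex value
counts = Counter(code for t in transcriptions for code in to_hex_codes(t))
print(counts.most_common(5))
```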

  8. Computer-Aided Diagnosis of Solid Breast Lesions Using an Ultrasonic Multi-Feature Analysis Procedure

    2011-01-01

    Breast cancer affects one of every eight women, kills one of every 29 women in the United States, and is the leading...

  9. A Numerical Procedure for Analysis of W/R Contact Using Explicit Finite Element Methods

    Ma, Y.; Markine, V.L.

    2015-01-01

    Since no effective experimental approaches have been proposed to date for assessing wheel and rail (W/R) contact performance, numerical computational analysis is known as an alternative for approximately simulating the W/R interaction. In this paper, one numerical procedure is proposed on the basis of

  10. Development of residual stress analysis procedure for fitness-for-service assessment of welded structure

    Kim, Jong Sung; Jin, Tae Eun; Dong, P.; Prager, M.

    2003-01-01

    In this study, a state-of-the-art review of existing residual stress analysis techniques and representative solutions is presented in order to develop the residual stress analysis procedure for Fitness-For-Service (FFS) assessment of welded structures. Critical issues associated with existing residual stress solutions and their treatments in performing FFS are discussed. It should be recognized that detailed residual stress evolution is an extremely complicated phenomenon that typically involves material-specific thermomechanical/metallurgical response, welding process physics, and structural interactions within a component being welded. As a result, computational procedures can vary significantly from highly complicated numerical techniques intended only to elucidate a small part of the process physics to cost-effective procedures that are deemed adequate for capturing some of the important features in a final residual stress distribution. The residual stress analysis procedure for FFS purposes belongs to the latter category. With this in mind, both residual stress analysis techniques and their adequacy for FFS are assessed based on both literature data and analyses performed in this investigation.

  11. Methodology for LOCA analysis and its qualification procedures for PWR reload licensing

    Serrano, M.A.B.

    1986-01-01

    The methodology for LOCA analysis developed by FURNAS and its qualification procedure for PWR reload licensing are presented. Digital computer codes developed by NRC and published collectively as the WREM package were modified to obtain versions that comply with each requirement of the Brazilian Licensing Criteria. This methodology is applied to the Angra-1 base case to conclude the qualification process. (Author) [pt

  12. Interdisciplinary analysis procedures in the modeling and control of large space-based structures

    Cooper, Paul A.; Stockwell, Alan E.; Kim, Zeen C.

    1987-01-01

    The paper describes a computer software system called the Integrated Multidisciplinary Analysis Tool, IMAT, that has been developed at NASA Langley Research Center. IMAT provides researchers and analysts with an efficient capability to analyze satellite control systems influenced by structural dynamics. Using a menu-driven interactive executive program, IMAT links a relational database to commercial structural and controls analysis codes. The paper describes the procedures followed to analyze a complex satellite structure and control system. The codes used to accomplish the analysis are described, and an example is provided of an application of IMAT to the analysis of a reference space station subject to a rectangular pulse loading at its docking port.

  13. Multicriteria decision analysis in ranking of analytical procedures for aldrin determination in water.

    Tobiszewski, Marek; Orłowski, Aleksander

    2015-03-27

    The study presents the possibility of applying multi-criteria decision analysis (MCDA) when choosing analytical procedures with low environmental impact. A type of MCDA, the Preference Ranking Organization Method for Enrichment Evaluations (PROMETHEE), was chosen as a versatile tool that meets all the requirements of analytical chemists as decision makers. Twenty-five analytical procedures for aldrin determination in water samples (as an example) were selected as input alternatives to the MCDA analysis. Nine different criteria describing the alternatives were chosen from different groups: metrological, economic and, most importantly, environmental impact. The weights for each criterion were obtained from questionnaires that were sent to experts, giving three different scenarios for the MCDA results. The results of the analysis show that PROMETHEE is a very promising tool for choosing analytical procedures with respect to their greenness. The rankings for all three scenarios placed solid-phase microextraction- and liquid-phase microextraction-based procedures high, while liquid-liquid extraction-, solid-phase extraction- and stir bar sorptive extraction-based procedures were placed low in the ranking. The results show that although some of the experts do not intentionally choose green analytical chemistry procedures, their MCDA choice is in accordance with green chemistry principles. The PROMETHEE ranking results were compared with more widely accepted green analytical chemistry tools, NEMI and Eco-Scale. As PROMETHEE involved more factors than NEMI, the assessment results were only weakly correlated. In contrast, the results of the Eco-Scale assessment were well correlated, as both methodologies involved similar assessment criteria. Copyright © 2015 Elsevier B.V. All rights reserved.
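
    The PROMETHEE calculations behind the ranking are not given in the record. The sketch below implements a generic PROMETHEE II net-flow ranking with the simple "usual" preference function; the procedures, criteria values and weights are invented placeholders, and the paper's actual preference functions and data may differ.

```python
import numpy as np

def promethee_ii(matrix, weights, maximize):
    """PROMETHEE II net outranking flows with the 'usual' preference function."""
    X = np.asarray(matrix, dtype=float)
    n = X.shape[0]
    X = np.where(maximize, X, -X)               # orient all criteria so larger is better
    pi = np.zeros((n, n))                       # aggregated preference of a over b
    for a in range(n):
        for b in range(n):
            if a != b:
                d = X[a] - X[b]
                pi[a, b] = np.sum(weights * (d > 0))
    phi_plus = pi.sum(axis=1) / (n - 1)         # positive (leaving) flow
    phi_minus = pi.sum(axis=0) / (n - 1)        # negative (entering) flow
    return phi_plus - phi_minus                 # net flow: rank from high to low

# Hypothetical procedures scored on LOD (lower better), cost (lower better),
# solvent volume (lower better) and recovery (higher better)
procedures = ["SPME-GC-MS", "LLE-GC-MS", "SPE-GC-MS", "LPME-GC-MS"]
matrix = [[0.01, 5, 0.1, 85],
          [0.05, 3, 50.0, 90],
          [0.02, 8, 10.0, 88],
          [0.01, 4, 0.5, 80]]
weights = np.array([0.3, 0.2, 0.4, 0.1])
maximize = np.array([False, False, False, True])

net = promethee_ii(matrix, weights, maximize)
for name, phi in sorted(zip(procedures, net), key=lambda x: -x[1]):
    print(f"{name:12s} net flow = {phi:+.3f}")
```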

  14. Towards Structural Analysis of Audio Recordings in the Presence of Musical Variations

    Müller Meinard

    2007-01-01

    One major goal of structural analysis of an audio recording is to automatically extract the repetitive structure or, more generally, the musical form of the underlying piece of music. Recent approaches to this problem work well for music where the repetitions largely agree with respect to instrumentation and tempo, as is typically the case for popular music. For other classes of music such as Western classical music, however, musically similar audio segments may exhibit significant variations in parameters such as dynamics, timbre, execution of note groups, modulation, articulation, and tempo progression. In this paper, we propose a robust and efficient algorithm for audio structure analysis, which allows musically similar segments to be identified even in the presence of large variations in these parameters. To account for such variations, our main idea is to incorporate invariance at various levels simultaneously: we design a new type of statistical features to absorb microvariations, introduce an enhanced local distance measure to account for local variations, and describe a new strategy for structure extraction that can cope with the global variations. Our experimental results with classical and popular music show that our algorithm performs successfully even in the presence of significant musical variations.

  15. An expert system-based aid for analysis of Emergency Operating Procedures in NPPs

    Jakubowski, Z.; Beraha, D.

    1996-01-01

    Emergency Operating Procedures (EOPs) in general, and accident management (AM) in particular, have played a significant part in the safety philosophy of NPPs for many years. A better methodology for the development and validation of EOPs is desired. A prototype of an Emergency Operating Procedures Analysis System (EOPAS), which has been developed at GRS, is presented in the paper. The hardware configuration and software organisation of the system are briefly reviewed. The main components of the system, such as the knowledge base of the expert system and the engineering simulator, are described. (author)

  16. A P-value model for theoretical power analysis and its applications in multiple testing procedures

    Fengqing Zhang

    2016-10-01

    Background: Power analysis is a critical aspect of the design of experiments to detect an effect of a given size. When multiple hypotheses are tested simultaneously, multiplicity adjustments to p-values should be taken into account in power analysis. There are a limited number of studies on power analysis in multiple testing procedures. For some methods, the theoretical analysis is difficult and extensive numerical simulations are often needed, while other methods oversimplify the information under the alternative hypothesis. To this end, this paper aims to develop a new statistical model for power analysis in multiple testing procedures. Methods: We propose a step-function-based p-value model under the alternative hypothesis, which is simple enough to perform power analysis without simulations, but not so simple as to lose the information from the alternative hypothesis. The first step is to transform distributions of different test statistics (e.g., t, chi-square or F) to distributions of corresponding p-values. We then use a step function to approximate each of the p-value distributions by matching the mean and variance. Lastly, the step-function-based p-value model can be used for theoretical power analysis. Results: The proposed model is applied to problems in multiple testing procedures. We first show how the most powerful critical constants can be chosen using the step-function-based p-value model. Our model is then applied to the field of multiple testing procedures to explain the assumption of monotonicity of the critical constants. Lastly, we apply our model to a behavioral weight loss and maintenance study to select the optimal critical constants. Conclusions: The proposed model is easy to implement and preserves the information from the alternative hypothesis.

  17. Quantitative measurement of phase variation amplitude of ultrasonic diffraction grating based on diffraction spectral analysis

    Pan, Meiyan, E-mail: yphantomohive@gmail.com; Zeng, Yingzhi; Huang, Zuohua, E-mail: zuohuah@163.com [Laboratory of Quantum Engineering and Quantum Materials, School of Physics and Telecommunication Engineering, South China Normal University, Guangzhou, Guangdong 510006 (China)

    2014-09-15

    A new method based on diffraction spectral analysis is proposed for the quantitative measurement of the phase variation amplitude of an ultrasonic diffraction grating. For a traveling wave, the phase variation amplitude of the grating depends on the intensity of the zeroth- and first-order diffraction waves. By contrast, for a standing wave, this amplitude depends on the intensity of the zeroth-, first-, and second-order diffraction waves. The proposed method is verified experimentally. The measured phase variation amplitude ranges from 0 to 2π, with a relative error of approximately 5%. A nearly linear relation exists between the phase variation amplitude and driving voltage. Our proposed method can also be applied to ordinary sinusoidal phase grating.
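
    The record does not state the relation between the diffraction-order intensities and the phase amplitude. Assuming the common Raman-Nath description of a traveling-wave sinusoidal phase grating, where the nth-order intensity is proportional to the square of the Bessel function J_n(m), the phase variation amplitude m can be recovered from the measured first-to-zeroth order intensity ratio as sketched below; this is an assumption-based illustration, not the paper's derivation.

```python
from scipy.special import j0, j1
from scipy.optimize import brentq

def phase_amplitude_from_ratio(i1_over_i0):
    """Invert I1/I0 = J1(m)^2 / J0(m)^2 for the phase amplitude m (Raman-Nath regime).

    Valid for m below the first zero of J0 (about 2.405), where the ratio is monotonic.
    """
    f = lambda m: (j1(m) / j0(m)) ** 2 - i1_over_i0
    return brentq(f, 1e-9, 2.40)

# Hypothetical measured intensity ratio of first- to zeroth-order diffraction
ratio = 0.35
m = phase_amplitude_from_ratio(ratio)
print(f"estimated phase variation amplitude: {m:.3f} rad")
```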

  18. Experimental design technique applied to the validation of an instrumental Neutron Activation Analysis procedure

    Santos, Uanda Paula de M. dos; Moreira, Edson Gonçalves

    2017-01-01

    In this study, optimization of procedures and standardization of the Instrumental Neutron Activation Analysis (INAA) method were carried out for the determination of the elements bromine, chlorine, magnesium, manganese, potassium, sodium and vanadium in biological matrix materials using short irradiations at a pneumatic system. 2ᵏ experimental designs were applied for evaluation of the individual contribution of selected variables of the analytical procedure to the final mass fraction result. The chosen experimental designs were the 2³ and the 2⁴, depending on the radionuclide half-life. Different certified reference materials and multi-element comparators were analyzed considering the following variables: sample decay time, irradiation time, counting time and sample distance to detector. Comparator concentration, sample mass and irradiation time were maintained constant in this procedure. By means of the statistical analysis and theoretical and experimental considerations, the optimized experimental conditions were determined for the analytical methods that will be adopted for the validation procedure of INAA methods in the Neutron Activation Analysis Laboratory (LAN) of the Research Reactor Center (CRPq) at the Nuclear and Energy Research Institute (IPEN/CNEN-SP). Optimized conditions were estimated based on the results of z-score tests, main effects, interaction effects and better irradiation conditions. (author)

  19. Experimental design technique applied to the validation of an instrumental Neutron Activation Analysis procedure

    Santos, Uanda Paula de M. dos; Moreira, Edson Gonçalves, E-mail: uandapaula@gmail.com, E-mail: emoreira@ipen.br [Instituto de Pesquisas Energéticas e Nucleares (IPEN/CNEN-SP), São Paulo, SP (Brazil)

    2017-07-01

    In this study, optimization of procedures and standardization of the Instrumental Neutron Activation Analysis (INAA) method were carried out for the determination of the elements bromine, chlorine, magnesium, manganese, potassium, sodium and vanadium in biological matrix materials using short irradiations at a pneumatic system. 2ᵏ experimental designs were applied for evaluation of the individual contribution of selected variables of the analytical procedure to the final mass fraction result. The chosen experimental designs were the 2³ and the 2⁴, depending on the radionuclide half life. Different certified reference materials and multi-element comparators were analyzed considering the following variables: sample decay time, irradiation time, counting time and sample distance to detector. Comparator concentration, sample mass and irradiation time were maintained constant in this procedure. By means of the statistical analysis and theoretical and experimental considerations, the optimized experimental conditions were determined for the analytical methods that will be adopted for the validation procedure of INAA methods in the Neutron Activation Analysis Laboratory (LAN) of the Research Reactor Center (CRPq) at the Nuclear and Energy Research Institute (IPEN/CNEN-SP). Optimized conditions were estimated based on the results of z-score tests, main effect, interaction effects and better irradiation conditions. (author)
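
    Neither record spells out how the factorial effects are computed. With the usual -1/+1 coding of a 2³ full factorial design, main effects and interaction effects are contrasts divided by half the number of runs, as in the minimal sketch below; the factor labels and responses are invented for illustration, not the laboratory's data.

```python
import numpy as np
from itertools import product

# 2^3 full factorial design, coded -1/+1 for three hypothetical factors:
# A = sample decay time, B = counting time, C = sample-to-detector distance
runs = np.array(list(product([-1, 1], repeat=3)))          # 8 runs x 3 factors
A, B, C = runs.T

# Hypothetical measured mass fractions (one response per run)
y = np.array([10.2, 10.8, 9.9, 11.5, 10.1, 10.9, 10.0, 11.7])

# Effect = (mean response at +1) - (mean response at -1) = contrast / (N/2)
columns = {"A": A, "B": B, "C": C,
           "AB": A * B, "AC": A * C, "BC": B * C, "ABC": A * B * C}
for name, col in columns.items():
    effect = col @ y / (len(y) / 2)
    print(f"effect {name:3s} = {effect:+.3f}")
```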

  20. A finite element based substructuring procedure for design analysis of large smart structural systems

    Ashwin, U; Raja, S; Dwarakanathan, D

    2009-01-01

    A substructuring based design analysis procedure is presented for large smart structural system using the Craig–Bampton method. The smart structural system is distinctively characterized as an active substructure, modelled as a design problem, and a passive substructure, idealized as an analysis problem. Furthermore, a novel thought has been applied by introducing the electro–elastic coupling into the reduction scheme to solve the global structural control problem in a local domain. As an illustration, a smart composite box beam with surface bonded actuators/sensors is considered, and results of the local to global control analysis are presented to show the potential use of the developed procedure. The present numerical scheme is useful for optimally designing the active substructures to study their locations, coupled structure–actuator interaction and provide a solution to the global design of large smart structural systems

  1. Crude oil price analysis and forecasting based on variational mode decomposition and independent component analysis

    E, Jianwei; Bao, Yanling; Ye, Jimin

    2017-10-01

    As one of the most vital energy resources in the world, crude oil plays a significant role in the international economic market. The fluctuation of the crude oil price has attracted academic and commercial attention. Many methods exist for forecasting the trend of the crude oil price; however, traditional models fail to predict it accurately. Based on this, a hybrid method is proposed in this paper, which combines variational mode decomposition (VMD), independent component analysis (ICA) and the autoregressive integrated moving average (ARIMA), called VMD-ICA-ARIMA. The purpose of this study is to analyze the factors influencing the crude oil price and to predict the future crude oil price. The major steps are as follows: firstly, by applying the VMD model to the original signal (the crude oil price), the mode functions are decomposed adaptively. Secondly, independent components are separated by ICA, and how the independent components affect the crude oil price is analyzed. Finally, the crude oil price is forecast by the ARIMA model; the forecast trend demonstrates that the crude oil price declines periodically. Compared with the benchmark ARIMA and EEMD-ICA-ARIMA, VMD-ICA-ARIMA can forecast the crude oil price more accurately.
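
    As a rough illustration of the hybrid pipeline described above, the sketch below decomposes a synthetic price series into crude "modes" (a simple moving-average split standing in for VMD, which is not implemented here), separates independent components with scikit-learn's FastICA, and produces an ARIMA forecast with statsmodels. The data and model orders are invented; this is not the paper's implementation.

```python
import numpy as np
import pandas as pd
from sklearn.decomposition import FastICA
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(1)

# Synthetic stand-in for a monthly crude oil price series
t = np.arange(240)
price = 60 + 10 * np.sin(2 * np.pi * t / 48) + 0.05 * t + rng.normal(0, 2, t.size)
series = pd.Series(price)

# Step 1 (placeholder for VMD): split the series into crude "modes" using
# moving averages of different widths plus the residual
slow = series.rolling(24, min_periods=1, center=True).mean()
medium = series.rolling(6, min_periods=1, center=True).mean() - slow
fast = series - slow - medium
modes = np.column_stack([slow, medium, fast])

# Step 2: separate statistically independent components from the modes
ica = FastICA(n_components=3, random_state=0)
components = ica.fit_transform(modes)          # columns are independent components
print("component variances:", components.var(axis=0).round(4))

# Step 3: fit ARIMA on the original series and forecast ahead
model = ARIMA(series, order=(1, 1, 1)).fit()
forecast = model.forecast(steps=12)
print(forecast.tail(3).round(2))
```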

  2. Stability Analysis and Variational Integrator for Real-Time Formation Based on Potential Field

    Shengqing Yang

    2014-01-01

    This paper investigates a framework for real-time formation of autonomous vehicles using a potential field and a variational integrator. Real-time formation requires vehicles to have coordinated motion and efficient computation. Interactions described by a potential field can meet the former requirement, which results in a nonlinear system. Stability analysis of such a nonlinear system is difficult. Our methodology for stability analysis is discussed in the error dynamic system. Transformation of coordinates from the inertial frame to the body frame helps the stability analysis focus on the structure instead of particular coordinates. Then, the Jacobian of the reduced system can be calculated. It can be proved that the formation is stable at the equilibrium point of the error dynamic system under the effect of a damping force. For computational efficiency, a variational integrator is introduced; it is equivalent to solving algebraic equations. The forced Euler-Lagrange equation in discrete form is used to construct a forced variational integrator for vehicles in a potential field and obstacle environment. By applying the forced variational integrator to the computation of the vehicles' motion, real-time formation of vehicles in an obstacle environment can be implemented. An algorithm based on the forced variational integrator is designed for a leader-follower formation.
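
    The paper's forced variational integrator is built from the discrete forced Euler-Lagrange equations; that forcing is omitted in the minimal sketch below, which applies the Stormer-Verlet scheme (the classic variational integrator for separable Lagrangians) to a single vehicle attracted by a quadratic potential toward a hypothetical formation slot. All parameters are invented for illustration.

```python
import numpy as np

def grad_potential(q, target, k=1.0):
    """Gradient of a quadratic attraction potential V(q) = k/2 * |q - target|^2."""
    return k * (q - target)

def stormer_verlet(q0, p0, target, h=0.05, steps=400, mass=1.0):
    """Stormer-Verlet scheme: a variational integrator for L = m/2 |qdot|^2 - V(q)."""
    q, p = np.array(q0, float), np.array(p0, float)
    traj = [q.copy()]
    for _ in range(steps):
        p_half = p - 0.5 * h * grad_potential(q, target)   # half kick
        q = q + h * p_half / mass                          # drift
        p = p_half - 0.5 * h * grad_potential(q, target)   # half kick
        traj.append(q.copy())
    return np.array(traj)

# One vehicle attracted toward a hypothetical formation slot at (5, 0)
trajectory = stormer_verlet(q0=[0.0, 2.0], p0=[0.0, 0.0], target=np.array([5.0, 0.0]))
print("final position:", trajectory[-1].round(3))
```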

  3. Procedure of trace element analysis in oyster tissues by using X-ray fluorescence

    Vo Thi Tuong Hanh; Dinh Thi Bich Lieu; Dinh Thien Lam and Nguyen Manh Hung

    2004-01-01

    A procedure for the analysis of trace elements such as Ca, Mn, Fe, Zn, Cu and Pb in molluscs (oyster tissues) was established using X-ray fluorescence techniques. The procedure was investigated from sample collection, drying and ashing ratio through to the analytical techniques, using Cd-109, a Si(Li) detector and the MCAPLUS peak processing program. The procedure is based on direct comparison with the certified concentrations of the international standard reference material SRM 1566b Oyster Tissue of the National Institute of Standards and Technology, Department of Commerce, United States of America, for Ca, Mn, Fe, Zn and Cu, and on the standard addition method for Pb. The accuracy of the standard addition method was estimated with CRM 281 Rye Grass of the Community Bureau of Reference (BCR), European Commission. The results for 10 samples collected from several markets in Hanoi are shown. (author)

  4. Analysis and Comprehensive Analytical Modeling of Statistical Variations in Subthreshold MOSFET's High Frequency Characteristics

    Rawid Banchuin

    2014-01-01

    In this research, an analysis of statistical variations in the subthreshold MOSFET's high-frequency characteristics, defined in terms of gate capacitance and transition frequency, is presented, and comprehensive analytical models of such variations in terms of their variances are proposed. Major imperfections in the physical-level properties, including random dopant fluctuation and the effects of variations in the MOSFET manufacturing process, have been taken into account in the proposed analysis and modeling. The up-to-date comprehensive analytical model of statistical variation in MOSFET parameters has been used as the basis of the analysis and modeling. The resulting models have been found to be both analytic and comprehensive, as they are precise mathematical expressions in terms of the physical-level variables of the MOSFET. Furthermore, they have been verified at the nanometer level using 65 nm level BSIM4-based benchmarks and have been found to be very accurate, with average errors smaller than 5%. Hence, the performed analysis yields models that are a potential mathematical tool for the statistical and variability-aware analysis and design of subthreshold MOSFET-based VHF circuits, systems and applications.

  5. Policy analysis of authorisation procedures for wind energy deployment in Spain

    Iglesias, Guillermo; Rio, Pablo del; Dopico, Jesus Angel

    2011-01-01

    The aim of this paper is to analyse the administrative procedures for the granting of authorisations for the siting of wind farms in Spain, currently the competency of regional authorities. The analysis reveals some commonalities and differences between the procedures across regions. Furthermore, some aspects regarding these procedures have raised the concern of different stakeholders, including the central government and wind energy investors. A conflict between the interests of the central and regional governments can be observed. Lack of coordination between the different administrative levels and the 'more is better mentality' of regional authorities have led to a significant growth of wind energy requests for the (national) feed-in tariff. In turn, investors have complained about the discretionary nature and non-transparency of those procedures and the lack of homogeneity across regions. This is likely to result in delays, uncertainty for investors and higher transaction costs. Although there has been a trend towards a model which involves the use of multicriteria bidding procedures with more explicit, objective and precise criteria regarding project selection, the aforementioned problems suggest the need to improve coordination between the different administrative levels. - Highlights: • A conflict between the interests of the central and regional governments in the granting of administrative procedures can be observed. • Lack of coordination between different administrative levels has led to a significant growth of wind energy requests for the (national) feed-in tariff. • The resulting increase in the total costs of wind energy promotion has been a major concern for national policy-makers. • In turn, investors have complained about the discretionary nature and non-transparency of those procedures and the lack of homogeneity across regions. • Those problems suggest the need to improve coordination between the different administrative levels.

  6. Analysis of dynamic Cournot learning models for generation companies based on conjectural variations and forward expectation

    Gutierrez-Alcaraz, G.; Tovar-Hernandez, Jose H.; Moreno-Goytia, Edgar L.

    2009-01-01

    Electricity spot markets generally operate on an hourly basis; under this condition, GENCOs can closely observe their competitors' market behavior. For this purpose, a detailed dynamic model is one of the tools used by GENCOs to understand the behavioral variations of competitors over time. The ability required to rapidly adjust one's own decision-making creates a need for new learning procedures and models. Conjectural variations (CV) have been proposed as a learning approach. In this paper, a model based on forward expectations (FE) is proposed as a learning approach, and through illustrative examples it is shown that the market equilibria found by the CV model are also obtained by the FE model. (author)
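
    The record does not give the market model. A common textbook setting for conjectural-variation learning is a linear inverse demand with constant marginal costs, where each firm's first-order condition includes its conjecture about rivals' output responses; the sketch below iterates these best responses with invented parameters, so it is an illustration of the CV idea rather than the authors' model.

```python
def cournot_cv(a=100.0, b=1.0, costs=(10.0, 15.0), cv=(0.0, 0.0),
               iters=200, tol=1e-10):
    """Iterate best responses under conjectural variations.

    Inverse demand P = a - b*Q; firm i's first-order condition with conjecture
    v_i = d(q_j)/d(q_i) gives q_i = (a - c_i - b*Q_others) / (b * (2 + v_i)).
    cv = (0, 0) reproduces the standard Cournot-Nash outcome.
    """
    q = [1.0] * len(costs)
    for _ in range(iters):
        q_new = []
        for i, (c, v) in enumerate(zip(costs, cv)):
            q_others = sum(q) - q[i]
            q_new.append(max(0.0, (a - c - b * q_others) / (b * (2.0 + v))))
        if max(abs(x - y) for x, y in zip(q, q_new)) < tol:
            q = q_new
            break
        q = q_new
    price = a - b * sum(q)
    return q, price

quantities, price = cournot_cv()
print("zero conjectures (Cournot):", [round(x, 2) for x in quantities], "price:", round(price, 2))

quantities, price = cournot_cv(cv=(1.0, 1.0))   # collusive-like matching conjectures
print("matching conjectures:", [round(x, 2) for x in quantities], "price:", round(price, 2))
```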

  7. Copy number variation analysis of matched ovarian primary tumors and peritoneal metastasis.

    Joel A Malek

    Ovarian cancer is the most deadly gynecological cancer. The high rate of mortality is due to the large tumor burden with extensive metastatic lesions of the abdominal cavity. Despite initial chemosensitivity and improved surgical procedures, abdominal recurrence remains an issue and results in patients' poor prognosis. Transcriptomic and genetic studies have revealed significant genome pathologies in the primary tumors and yielded important information regarding carcinogenesis. There are, however, few studies on genetic alterations and their consequences in peritoneal metastatic tumors when compared to their matched ovarian primary tumors. We used high-density SNP arrays to investigate copy number variations in matched primary and metastatic ovarian cancer from 9 patients. Here we show that copy number variations acquired by ovarian tumors are significantly different between matched primary and metastatic tumors and these are likely due to different functional requirements. We show that these copy number variations clearly differentially affect specific pathways including the JAK/STAT and cytokine signaling pathways. While many have shown complex involvement of cytokines in the ovarian cancer environment we provide evidence that ovarian tumors have specific copy number variation differences in many of these genes.

  8. Estimating the Cost of Neurosurgical Procedures in a Low-Income Setting: An Observational Economic Analysis.

    Abdelgadir, Jihad; Tran, Tu; Muhindo, Alex; Obiga, Doomwin; Mukasa, John; Ssenyonjo, Hussein; Muhumza, Michael; Kiryabwire, Joel; Haglund, Michael M; Sloan, Frank A

    2017-05-01

    There are no data on cost of neurosurgery in low-income and middle-income countries. The objective of this study was to estimate the cost of neurosurgical procedures in a low-resource setting to better inform resource allocation and health sector planning. In this observational economic analysis, microcosting was used to estimate the direct and indirect costs of neurosurgical procedures at Mulago National Referral Hospital (Kampala, Uganda). During the study period, October 2014 to September 2015, 1440 charts were reviewed. Of these patients, 434 had surgery, whereas the other 1006 were treated nonsurgically. Thirteen types of procedures were performed at the hospital. The estimated mean cost of a neurosurgical procedure was $542.14 (standard deviation [SD], $253.62). The mean cost of different procedures ranged from $291 (SD, $101) for burr hole evacuations to $1,221 (SD, $473) for excision of brain tumors. For most surgeries, overhead costs represented the largest proportion of the total cost (29%-41%). This is the first study using primary data to determine the cost of neurosurgery in a low-resource setting. Operating theater capacity is likely the binding constraint on operative volume, and thus, investing in operating theaters should achieve a higher level of efficiency. Findings from this study could be used by stakeholders and policy makers for resource allocation and to perform economic analyses to establish the value of neurosurgery in achieving global health goals. Copyright © 2017 Elsevier Inc. All rights reserved.

  9. Cost analysis of radiological interventional procedures and reimbursement within a clinic

    Strotzer, M.; Voelk, M.; Lenhart, M.; Fruend, R.; Feuerbach, S.

    2002-01-01

    Purpose: Analysis of costs for vascular radiological interventions on a per patient basis and comparison with reimbursement based on GOÄ (Gebührenordnung für Ärzte) and DKG-NT (Deutsche Krankenhausgesellschaft-Nebenkostentarif). Material and Methods: The ten procedures most frequently performed within 12 months were evaluated. Personnel costs were derived from precise costs per hour and estimated procedure time for each intervention. Costs for medical devices were included. Reimbursement based on GOÄ was calculated using the official conversion factor of 0.114 DM for each specific relative value unit and a multiplication factor of 1.0. The corresponding conversion factor for DKG-NT, determined by the DKG, was 0.168 DM. Results: A total of 832 interventional procedures were included. Marked differences between calculated costs and reimbursement rates were found. Regarding the ten most frequently performed procedures, there was a deficit of 1.06 million DM according to GOÄ data (factor 1.0) and 0.787 million DM according to DKG-NT. The percentage of reimbursement was only 34.2% (GOÄ; factor 1.0) and 51.3% (DKG-NT), respectively. Conclusion: Reimbursement of radiological interventional procedures based on GOÄ and DKG-NT data is of limited value for economic controlling purposes within a hospital. (orig.) [de

  10. Integrative analysis of RNA, translation, and protein levels reveals distinct regulatory variation across humans.

    Cenik, Can; Cenik, Elif Sarinay; Byeon, Gun W; Grubert, Fabian; Candille, Sophie I; Spacek, Damek; Alsallakh, Bilal; Tilgner, Hagen; Araya, Carlos L; Tang, Hua; Ricci, Emiliano; Snyder, Michael P

    2015-11-01

    Elucidating the consequences of genetic differences between humans is essential for understanding phenotypic diversity and personalized medicine. Although variation in RNA levels, transcription factor binding, and chromatin have been explored, little is known about global variation in translation and its genetic determinants. We used ribosome profiling, RNA sequencing, and mass spectrometry to perform an integrated analysis in lymphoblastoid cell lines from a diverse group of individuals. We find significant differences in RNA, translation, and protein levels suggesting diverse mechanisms of personalized gene expression control. Combined analysis of RNA expression and ribosome occupancy improves the identification of individual protein level differences. Finally, we identify genetic differences that specifically modulate ribosome occupancy--many of these differences lie close to start codons and upstream ORFs. Our results reveal a new level of gene expression variation among humans and indicate that genetic variants can cause changes in protein levels through effects on translation. © 2015 Cenik et al.; Published by Cold Spring Harbor Laboratory Press.

  11. COMPARATIVE ANALYSIS OF QUANTUM EFFECTS IN NANOSCALE MULTIGATE MOSFETS USING VARIATIONAL APPROACH

    V. PALANICHAMY

    2015-02-01

    In this work, the performance of multiple-gate SOI MOSFETs is analysed using a variational approach including quantum effects. An analytical model is derived to account for the quantum effects at the silicon (Si)/silicon dioxide (SiO2) interface. A general procedure is used for calculating the quantum inversion charge density. Using this inversion charge density, the drain current is obtained. Our model results are compared with the simulation results and show very good agreement. Our results highlight that the cylindrical surrounding-gate MOSFET is a good candidate for obtaining a high drain current compared with the other two devices.

  12. Human factor analysis related to new symptom based procedures used by control room crews during treatment of emergency states

    Holy, J.

    1999-01-01

    New symptom-based emergency procedures have been developed for the Dukovany Nuclear Power Plant in the Czech Republic. As one step in the process of verification and validation of the procedures, a specific effort was devoted to a detailed analysis of the procedures from the human factors and human reliability point of view. The course and results of the analysis are discussed in this article. Although the analyzed procedures have been developed for one specific plant of the WWER-440/213 type, most of the presented results may be valid for many other procedures recently developed for the semi-automatic control of technological units operated under a measurable level of risk. (author)

  13. A systematic review and meta-analysis of variations in branching patterns of the adult aortic arch.

    Popieluszko, Patrick; Henry, Brandon Michael; Sanna, Beatrice; Hsieh, Wan Chin; Saganiak, Karolina; Pękala, Przemysław A; Walocha, Jerzy A; Tomaszewski, Krzysztof A

    2018-07-01

    The aortic arch (AA) is the main conduit of the left side of the heart, providing a blood supply to the head, neck, and upper limbs. As it travels through the thorax, the pattern in which it gives off the branches to supply these structures can vary. Variations of these branching patterns have been studied; however, a study providing a comprehensive incidence of these variations has not yet been conducted. The objective of this study was to perform a meta-analysis of all the studies that report prevalence data on AA variants and to provide incidence data on the most common variants. A systematic search of online databases including PubMed, Embase, Scopus, ScienceDirect, Web of Science, SciELO, BIOSIS, and CNKI was performed for literature describing incidence of AA variations in adults. Studies including prevalence data on adult patients or cadavers were collected and their data analyzed. A total of 51 articles were included (N = 23,882 arches). Seven of the most common variants were analyzed. The most common variants found included the classic branching pattern, defined as a brachiocephalic trunk, a left common carotid, and a left subclavian artery (80.9%); the bovine arch variant (13.6%); and the left vertebral artery variant (2.8%). Compared by geographic data, bovine arch variants were noted to have a prevalence as high as 26.8% in African populations. Although patients who have an AA variant are often asymptomatic, they compose a significant portion of the population of patients and pose a greater risk of hemorrhage and ischemia during surgery in the thorax. Because of the possibility of encountering such variants, it is prudent for surgeons to consider potential variations in planning procedures, especially of an endovascular nature, in the thorax. Copyright © 2017 Society for Vascular Surgery. Published by Elsevier Inc. All rights reserved.

  14. An analysis of technical aspects of the arthroscopic Bankart procedure as performed in the United States.

    Burks, Robert T; Presson, Angela P; Weng, Hsin-Yi

    2014-10-01

    The purpose of this study was to investigate the intersurgeon variation in technical aspects of performing an arthroscopic Bankart repair. A unique approach with experienced equipment representatives from 3 different arthroscopic companies was used. Experienced representatives were identified by DePuy Mitek, Smith & Nephew, and Arthrex and filled out questionnaires on how their surgeons performed arthroscopic Bankart procedures. This was performed in a blinded fashion, with no knowledge on our part of the identities of the specific surgeons or representatives. A video on different aspects of the procedure was observed by each representative before filling out the questionnaire to help standardize responses. Data were collected using REDCap (Research Electronic Data Capture). Observations were classified as infrequent (0% to 30% of representatives reporting the observation), sometimes (31% to 70% reporting), and often (greater than 70% reporting). Seventy-six percent of representatives had 6 or more years of arthroscopic experience. Forty-three percent of representatives reported that their surgeons use 3 portals for the procedure often. Forty-four percent reported that viewing was performed exclusively from the posterior portal while the surgeon was performing the repair. Seventy-three percent reported that the Hill-Sachs lesion was observed often, and 61% reported that the posterior labrum was evaluated often before the repair. Only 25% of representatives reported that the Bankart lesion was extensively released and mobilized often. Thirty-three percent reported 3 anchors as being used often. Seventy-five percent reported biocomposite anchors as being used often. Single-loaded anchors were reported as being used often by 47%. Eighty-one percent reported that sutures were placed in a simple fashion. Eighty-three percent reported the use of any posterior sutures or anchors for additional plication as infrequent. There is significant

  15. A novel procedure on next generation sequencing data analysis using text mining algorithm.

    Zhao, Weizhong; Chen, James J; Perkins, Roger; Wang, Yuping; Liu, Zhichao; Hong, Huixiao; Tong, Weida; Zou, Wen

    2016-05-13

    Next-generation sequencing (NGS) technologies have provided researchers with vast possibilities in various biological and biomedical research areas. Efficient data mining strategies are in high demand for large-scale comparative and evolutionary studies to be performed on the large amounts of data derived from NGS projects. Topic modeling is an active research field in machine learning and has been mainly used as an analytical tool to structure large textual corpora for data mining. We report a novel procedure to analyse NGS data using topic modeling. It consists of four major steps: NGS data retrieval, preprocessing, topic modeling, and data mining using Latent Dirichlet Allocation (LDA) topic outputs. The NGS data set of the Salmonella enterica strains was used as a case study to show the workflow of this procedure. The perplexity measurement of the topic numbers and the convergence efficiencies of Gibbs sampling were calculated and discussed for achieving the best result from the proposed procedure. The topics output by the LDA algorithm could be treated as features of Salmonella strains to accurately describe the genetic diversity of the fliC gene in various serotypes. The results of a two-way hierarchical clustering and data matrix analysis on LDA-derived matrices successfully classified Salmonella serotypes based on the NGS data. The implementation of topic modeling in the NGS data analysis procedure provides a new way to elucidate genetic information from NGS data, and to identify gene-phenotype relationships and biomarkers, especially in the era of biological and medical big data.
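
    The exact pipeline (data retrieval, preprocessing, Gibbs-sampled LDA, perplexity tuning) is not reproduced here. As a minimal stand-in, the sketch below tokenizes a few toy sequences into overlapping k-mers and fits scikit-learn's LatentDirichletAllocation, so that the per-sequence topic weights play the role of the LDA-derived features described in the record; the sequences and parameters are invented.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

def kmer_doc(seq, k=4):
    """Turn a nucleotide sequence into a 'document' of overlapping k-mer words."""
    return " ".join(seq[i:i + k] for i in range(len(seq) - k + 1))

# Toy stand-ins for fliC gene sequences from different serotypes (not real data)
sequences = [
    "ATGGCACAAGTCATTAATACCAACAGCCTCTCG",
    "ATGGCACAAGTTATTAATACCAACAGTCTCTCG",
    "ATGGCTCAGGTGATTAACACCAATAGCCTGTCG",
    "ATGGCTCAGGTCATTAACACGAATAGCCTGTCT",
]
docs = [kmer_doc(s) for s in sequences]

# Bag-of-k-mers matrix, then LDA topics as 'features' describing each sequence
vectorizer = CountVectorizer(lowercase=False)
X = vectorizer.fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(X)              # rows: sequences, cols: topic weights

for i, weights in enumerate(doc_topics):
    print(f"sequence {i}: topic weights {weights.round(2)}")
```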

  16. Forensic analysis of Salvia divinorum using multivariate statistical procedures. Part I: discrimination from related Salvia species.

    Willard, Melissa A Bodnar; McGuffin, Victoria L; Smith, Ruth Waddell

    2012-01-01

    Salvia divinorum is a hallucinogenic herb that is internationally regulated. In this study, salvinorin A, the active compound in S. divinorum, was extracted from S. divinorum plant leaves using a 5-min extraction with dichloromethane. Four additional Salvia species (Salvia officinalis, Salvia guaranitica, Salvia splendens, and Salvia nemorosa) were extracted using this procedure, and all extracts were analyzed by gas chromatography-mass spectrometry. Differentiation of S. divinorum from other Salvia species was successful based on visual assessment of the resulting chromatograms. To provide a more objective comparison, the total ion chromatograms (TICs) were subjected to principal components analysis (PCA). Prior to PCA, the TICs were subjected to a series of data pretreatment procedures to minimize non-chemical sources of variance in the data set. Successful discrimination of S. divinorum from the other four Salvia species was possible based on visual assessment of the PCA scores plot. To provide a numerical assessment of the discrimination, a series of statistical procedures such as Euclidean distance measurement, hierarchical cluster analysis, Student's t tests, Wilcoxon rank-sum tests, and Pearson product moment correlation were also applied to the PCA scores. The statistical procedures were then compared to determine the advantages and disadvantages for forensic applications.
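
    The sketch below is a simplified stand-in for the paper's workflow: synthetic total ion chromatograms are generated for two hypothetical groups, given a basic pretreatment (total-area normalization and mean centering), and projected onto principal components with scikit-learn. The pretreatment steps and data are assumptions for illustration and do not reproduce the paper's exact procedure.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)

def synthetic_tic(peak_positions, n_points=500):
    """Build a synthetic total ion chromatogram as a sum of Gaussian peaks plus noise."""
    x = np.arange(n_points)
    tic = sum(np.exp(-0.5 * ((x - p) / 5.0) ** 2) for p in peak_positions)
    return tic + rng.normal(0, 0.01, n_points)

# Two hypothetical species: group A extracts share a distinguishing peak near 300
group_a = np.array([synthetic_tic([120, 300, 410]) for _ in range(6)])
group_b = np.array([synthetic_tic([120, 240, 410]) for _ in range(6)])
tics = np.vstack([group_a, group_b])

# Pretreatment: total-area normalization, then mean centering
tics = tics / tics.sum(axis=1, keepdims=True)
tics = tics - tics.mean(axis=0)

scores = PCA(n_components=2).fit_transform(tics)
print("PC1 scores, group A:", scores[:6, 0].round(3))
print("PC1 scores, group B:", scores[6:, 0].round(3))
```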

  17. Development of a 3-dimensional flow analysis procedure for axial pump impellers

    Kim, Min Hwan; Kim, Jong In; Park, Jin Seok; Huh, Houng Huh; Chang, Moon Hee

    1999-06-01

    A fluid dynamic analysis procedure was developed using the three-dimensional solid model of an axial pump impeller which was theoretically designed using I-DEAS CAD/CAM/CAE software. The CFD software FLUENT was used in the flow field analysis. The steady-state flow regime in the MCP impeller and diffuser was simulated using the developed procedure. The results of calculation were analyzed to confirm whether the design requirements were properly implemented in the impeller model. The validity of the developed procedure was demonstrated by comparing the calculation results with the experimental data available. The pump performance at the design point could be effectively predicted using the developed procedure. The computed velocity distributions have shown a good agreement with the experimental data except for the regions near the wall. The computed head, however, was over-predicted compared with the experiment. The design period and cost required for the development of an axial pump impeller can be significantly reduced by applying the proposed methodology. (author). 7 refs., 2 tabs

  18. Cultural Variations across Academic Genres: A Generic Analysis of Intertextuality in Master's Theses Introductions

    Ketabi, Saeed; Rahavard, Shaahin

    2013-01-01

    Genre analysis of texts has always been significant. The current study aimed at investigating intertextuality considering cultural variations and differences in students' discourse communities. Social studies, philosophy, and biology were chosen as the representatives of social sciences, humanities and sciences. Tehran University, one of the most…

  19. An innovative procedure of genome-wide association analysis fits studies on germplasm population and plant breeding.

    He, Jianbo; Meng, Shan; Zhao, Tuanjie; Xing, Guangnan; Yang, Shouping; Li, Yan; Guan, Rongzhan; Lu, Jiangjie; Wang, Yufeng; Xia, Qiuju; Yang, Bing; Gai, Junyi

    2017-11-01

    The innovative RTM-GWAS procedure provides a relatively thorough detection of QTL and their multiple alleles for germplasm population characterization, gene network identification, and genomic selection strategy innovation in plant breeding. The previous genome-wide association studies (GWAS) have been concentrated on finding a handful of major quantitative trait loci (QTL), but plant breeders are interested in revealing the whole-genome QTL-allele constitution in breeding materials/germplasm (in which tremendous historical allelic variation has been accumulated) for genome-wide improvement. To match this requirement, two innovations were suggested for GWAS: first grouping tightly linked sequential SNPs into linkage disequilibrium blocks (SNPLDBs) to form markers with multi-allelic haplotypes, and second utilizing two-stage association analysis for QTL identification, where the markers were preselected by single-locus model followed by multi-locus multi-allele model stepwise regression. Our proposed GWAS procedure is characterized as a novel restricted two-stage multi-locus multi-allele GWAS (RTM-GWAS, https://github.com/njau-sri/rtm-gwas ). The Chinese soybean germplasm population (CSGP) composed of 1024 accessions with 36,952 SNPLDBs (generated from 145,558 SNPs, with reduced linkage disequilibrium decay distance) was used to demonstrate the power and efficiency of RTM-GWAS. Using the CSGP marker information, simulation studies demonstrated that RTM-GWAS achieved the highest QTL detection power and efficiency compared with the previous procedures, especially under large sample size and high trait heritability conditions. A relatively thorough detection of QTL with their multiple alleles was achieved by RTM-GWAS compared with the linear mixed model method on 100-seed weight in CSGP. A QTL-allele matrix (402 alleles of 139 QTL × 1024 accessions) was established as a compact form of the population genetic constitution. The 100-seed weight QTL-allele matrix was
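    RTM-GWAS itself is distributed at the GitHub address given above and is not reproduced here. Purely to illustrate the two-stage idea described in the abstract (single-locus preselection followed by stepwise multi-locus regression), a toy sketch on simulated biallelic markers might look like the following; the genotype coding, p-value cutoff and stopping rule are illustrative assumptions, not the published algorithm:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, m = 200, 500                       # accessions, markers (toy sizes)
G = rng.integers(0, 3, size=(n, m))   # hypothetical 0/1/2 genotype codes
beta = np.zeros(m); beta[[10, 50, 200]] = [0.8, -0.6, 0.5]
y = G @ beta + rng.normal(0, 1.0, n)  # simulated trait

# Stage 1: single-locus tests; preselect markers below a liberal p-value cutoff.
pvals = np.array([stats.linregress(G[:, j], y).pvalue for j in range(m)])
candidates = list(np.where(pvals < 0.01)[0])

# Stage 2: naive forward stepwise multi-locus regression over the candidates.
selected = []
current_rss = np.sum((y - y.mean()) ** 2)
while candidates:
    rss_list = []
    for j in candidates:
        X = np.column_stack([np.ones(n)] + [G[:, k] for k in selected + [j]])
        rss = np.sum((y - X @ np.linalg.lstsq(X, y, rcond=None)[0]) ** 2)
        rss_list.append((rss, j))
    best_rss, best_j = min(rss_list)
    if best_rss > 0.99 * current_rss:   # stop when the gain falls below a crude 1% threshold
        break
    selected.append(best_j)
    candidates.remove(best_j)
    current_rss = best_rss

print("selected markers:", sorted(selected))
```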

  20. Theoretical analysis about early detection of hepatocellular carcinoma by medical imaging procedure

    Odano, Ikuo; Hinata, Hiroshi; Hara, Keiji; Sakai, Kunio [Niigata Univ. (Japan). School of Medicine]

    1983-04-01

    It is well known that patients with chronic hepatitis and liver cirrhosis are frequently accompanied by hepatocellular carcinoma (hepatoma). They are called the high-risk group for hepatoma. In order to detect a small hepatoma, it is reasonable for us to perform screening examinations on these high-risk patients. The optimal screening interval, however, has not been established. In this report, a theoretical analysis was made to estimate the optimal screening interval for imaging procedures such as ultrasonography, x-ray computed tomography and scintigraphy. By the analysis of eight cases, the mean doubling time of hepatoma was estimated at about four months (73 - 143 days). If we want to detect a hepatoma not greater than 3.0 cm in diameter, a medical screening procedure combining ultrasonography and scintigraphy should be performed approximately once every nine months.
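    The link between doubling time, detectable size and screening interval can be made explicit under the usual assumption of exponential (constant volume-doubling-time) growth. The sketch below reproduces this arithmetic with the abstract's four-month doubling time, while the smallest reliably detectable diameters are illustrative assumptions rather than values from the paper:

```python
# Growth-time arithmetic under exponential (constant volume-doubling) growth.
# The tumour volume doubles every DT days, so a spherical tumour of diameter d0
# reaches diameter d1 after  3 * log2(d1 / d0)  volume doublings.
import math

def growth_time_days(d0_cm: float, d1_cm: float, doubling_time_days: float) -> float:
    """Time for a spherical tumour to grow from diameter d0 to d1."""
    n_doublings = 3.0 * math.log2(d1_cm / d0_cm)
    return n_doublings * doubling_time_days

doubling_time = 120.0   # ~4 months, the mean value reported in the abstract
target = 3.0            # cm, the detection goal stated in the abstract
for d0 in (1.0, 1.5, 2.0):   # hypothetical smallest reliably detectable diameters
    t = growth_time_days(d0, target, doubling_time)
    print(f"from {d0:.1f} cm to {target:.1f} cm: {t:.0f} days (~{t/30:.1f} months)")
```

    The resulting screening interval clearly depends on the assumed detection threshold, which is why the paper's nine-month recommendation follows from its own detectability assumptions rather than from the doubling time alone.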

  1. Supplement to procedures, analysis, and comparison of groundwater velocity measurement methods for unconfined aquifers

    Zinkl, R.J.; Kearl, P.M.

    1988-09-01

    This report is a supplement to Procedures, Analysis, and Comparison of Groundwater Velocity Measurement Methods for Unconfined Aquifers and provides computer program descriptions, type curves, and calculations for the analysis of field data in determining groundwater velocity in unconfined aquifers. The computer programs analyze bail or slug tests, pumping tests, Geoflo Meter data, and borehole dilution data. Appendix A is a description of the code, instructions for using the code, an example data file, and the calculated results to allow checking the code after installation on the user's computer. Calculations, development of formulas, and correction factors for the various programs are presented in Appendices B through F. Appendix G provides a procedure for calculating transmissivity and specific yield for pumping tests performed in unconfined aquifers
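    None of the original programs are reproduced here. As one example of the kind of calculation such programs perform, a sketch of a slug-test analysis using the classical Hvorslev method (the well geometry and head data below are hypothetical, and the original code may use different solutions) might look like:

```python
import math
import numpy as np

def hvorslev_K(r_casing, R_screen, L_screen, t, h, h0):
    """Hydraulic conductivity from a slug test via the classical Hvorslev method.

    Fits ln(h/h0) versus time to obtain the basic time lag T0 (time for the head
    to recover to 37% of the initial displacement), then
        K = r_c**2 * ln(L_e / R) / (2 * L_e * T0).
    """
    slope, _ = np.polyfit(t, np.log(h / h0), 1)      # ln(h/h0) = -t / T0
    T0 = -1.0 / slope
    return r_casing**2 * math.log(L_screen / R_screen) / (2.0 * L_screen * T0)

# Hypothetical field data (metres, seconds):
t = np.array([0.0, 30.0, 60.0, 120.0, 240.0])
h = np.array([1.00, 0.78, 0.61, 0.37, 0.14])          # residual head displacement
K = hvorslev_K(r_casing=0.05, R_screen=0.05, L_screen=1.0, t=t, h=h, h0=1.0)
print("K =", K, "m/s")
```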

  2. An Information Processing Analysis of the Function of Conceptual Understanding in the Learning of Arithmetic Procedures

    1988-08-01

    by Gelman and co-workers with respect to counting (Gelman & Gallistel, 1978; Gelman & Meck, 1983, 1987; Gelman, Mack, & Merkin, 1986; Greeno, Riley...Gelman, 1984). Gelman and Gallistel (1978) formulated a set of principles that determine the correct procedure for counting. The three most important...Gelman & Gallistel, 1978). Greeno, Riley, and Gelman (1984) and Smith, Greeno, and Vitolo (in press) have proposed a theoretical analysis that shows how

  3. Analysis of the acceptance procedure and quality control of a virtual simulation system

    Gonzalez Ruiz, C.; Pedrero de Aristizabal, D.; Jimenez Rojas, R.; Garcia Hernandez, M. J.; Ruiz Galan, G.; Ayala Lazaro, R.; Garcia Marcos, R.

    2011-01-01

    Acceptance testing was carried out, determining the reference state, followed by commissioning and implementation of a quality control protocol for a virtual simulation system consisting of an image acquisition unit of computerized tomography (CT), an independent external laser locator and a simulation module associated with the existing treatment planning system for clinical dosimetry in radiotherapy. This paper summarizes the path followed in this process, together with the established procedure for periodic monitoring and an analysis of the results obtained in the two years of clinical use and control.

  4. A highly rationalized procedure of neutron activation analysis for routine applications in dairy science

    Heine, K.; Wiechen, A.

    1976-01-01

    A rational procedure was developed for multi-element analysis by neutron activation for applications in dairy science. The preparation of samples prior to irradiation consists of drying, weighing, and welding into quartz ampoules. The neutron flux to which samples are exposed during reactor irradiation is determined by the mono-comparator technique, for which the Co content of a commercial aluminium foil was chosen as the flux monitor. The constancy of the Co content and the uncomplicated handling of the foil essentially simplify the determination of the flux. The samples are irradiated for 72 h, dissolved in HNO3/H2SO4 and measured in the liquid state after waiting times of 1-2, 4 and 8 weeks using a Ge(Li) detector and a 4,096-channel spectrometer. The procedure was confirmed by investigations of the biological KALE standard and by participation in intercomparisons of biological substances of the Analytical Quality Control Service of the IAEA for the analysis of the elements Na, Ca, Cr, Fe, Co, Zn, Se, Rb, and Cs. Thus a procedure suitable for routine multi-element analysis of biological samples was developed by optimizing and rationalizing all analytical steps. (orig./MG) [de

  5. Safety of Running Two Rooms: A Systematic Review and Meta-Analysis of Overlapping Neurosurgical Procedures.

    Self, D Mitchell; Ilyas, Adeel; Stetler, William R

    2018-04-27

    Overlapping surgery, a long-standing practice within academic neurosurgery centers nationwide, has recently come under scrutiny from the government and media as potentially harmful to patients. Therefore, the objective of this systematic review and meta-analysis is to determine the safety of overlapping neurosurgical procedures. The authors performed a systematic review and meta-analysis in accordance with PRISMA guidelines. A review of PubMed and Medline databases was undertaken with the search phrase "overlapping surgery AND neurosurgery AND outcomes." Data regarding patient demographics, type of neurosurgical procedure, and outcomes and complications were extracted from each study. The principal summary measure was the odds ratio (OR) of the association of overlapping versus non-overlapping surgery with outcomes. The literature search yielded a total of 36 studies, of which 5 studies met inclusion criteria and were included in this study. These studies included a total of 25,764 patients undergoing neurosurgical procedures. Overlapping surgery was associated with an increased likelihood of being discharged home (OR = 1.32; 95% CI 1.20 to 1.44; P < 0.001) and a reduced 30-day unexpected return to the operating room (OR = 0.79; 95% CI 0.72 to 0.87; P < 0.001). Overlapping surgery did not significantly affect the OR of length of surgery, 30-day mortality, or 30-day readmission. Overlapping neurosurgical procedures were not associated with worse patient outcomes. Additional prospective studies are needed to further assess the safety of overlapping procedures. Copyright © 2018. Published by Elsevier Inc.

  6. Variational analysis and aerospace engineering mathematical challenges for the aerospace of the future

    Mohammadi, Bijan; Pironneau, Olivier; Cipolla, Vittorio

    2016-01-01

    This book presents papers arising from the extensive discussions that took place at the ‘Variational Analysis and Aerospace Engineering’ workshop held at the Ettore Majorana Foundation and Centre for Scientific Culture in 2015. Contributions to this volume focus on advanced mathematical methods in aerospace engineering and industrial engineering such as computational fluid dynamics methods, optimization methods in aerodynamics, optimum controls, dynamic systems, the theory of structures, space missions, flight mechanics, control theory, algebraic geometry for CAD applications, and variational methods and applications. Advanced graduate students, researchers, and professionals in mathematics and engineering will find this volume useful as it illustrates current collaborative research projects in applied mathematics and aerospace engineering.

  7. Completely automated modal analysis procedure based on the combination of different OMA methods

    Ripamonti, Francesco; Bussini, Alberto; Resta, Ferruccio

    2018-03-01

    In this work a completely automated output-only Modal Analysis procedure is presented and its benefits are discussed. Based on the merging of different Operational Modal Analysis methods and a statistical approach, the identification process has been improved, becoming more robust and yielding only the physical natural frequencies, damping ratios and mode shapes of the system. The effect of temperature can be taken into account as well, leading to the creation of a better tool for automated Structural Health Monitoring. The algorithm has been developed and tested on a numerical model of a scaled three-story steel building present in the laboratories of Politecnico di Milano.

  8. Multi-response permutation procedure as an alternative to the analysis of variance: an SPSS implementation.

    Cai, Li

    2006-02-01

    A permutation test typically requires fewer assumptions than does a comparable parametric counterpart. The multi-response permutation procedure (MRPP) is a class of multivariate permutation tests of group difference useful for the analysis of experimental data. However, psychologists seldom make use of the MRPP in data analysis, in part because the MRPP is not implemented in popular statistical packages that psychologists use. A set of SPSS macros implementing the MRPP test is provided in this article. The use of the macros is illustrated by analyzing example data sets.
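    The SPSS macros themselves are not shown in the abstract. As an illustration of the core MRPP computation they implement, the weighted mean within-group distance compared against its permutation distribution, a sketch in Python with hypothetical two-group data might look like the following (the group weighting n_g/N is one of several conventions):

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

def mrpp(X, groups, n_perm=5000, seed=0):
    """Multi-response permutation procedure: observed delta and permutation p-value."""
    rng = np.random.default_rng(seed)
    D = squareform(pdist(X))                     # pairwise Euclidean distances
    labels = np.asarray(groups)
    n = len(labels)

    def delta(lab):
        # weighted average of mean within-group pairwise distances (weights n_g / n)
        val = 0.0
        for g in np.unique(lab):
            idx = np.where(lab == g)[0]
            d_g = D[np.ix_(idx, idx)][np.triu_indices(len(idx), k=1)]
            val += (len(idx) / n) * d_g.mean()
        return val

    observed = delta(labels)
    perms = np.array([delta(rng.permutation(labels)) for _ in range(n_perm)])
    p_value = (np.sum(perms <= observed) + 1) / (n_perm + 1)
    return observed, p_value

# Hypothetical data: two experimental groups measured on two response variables.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 1.0, (15, 2)), rng.normal(0.8, 1.0, (15, 2))])
groups = ["A"] * 15 + ["B"] * 15
print(mrpp(X, groups))
```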

  9. Sampling, storage and sample preparation procedures for X ray fluorescence analysis of environmental materials

    1997-06-01

    The X-ray fluorescence (XRF) method is one of the most commonly used nuclear analytical techniques because of its multielement and non-destructive character, speed, economy and ease of operation. From the point of view of quality assurance practices, sampling and sample preparation procedures are the most crucial steps in all analytical techniques (including X-ray fluorescence) applied to the analysis of heterogeneous materials. This technical document covers recent modes of the X-ray fluorescence method and recent developments in sample preparation techniques for the analysis of environmental materials. Refs, figs, tabs

  10. Procedure proposed for performance of a probabilistic safety analysis for the event of ''Airplane crash''

    Hoffmann, H.H.

    1998-01-01

    A procedures guide for a probabilistic safety analysis of the external event 'Airplane crash' has been prepared. The method is based on analyses done within the framework of PSAs for German NPPs as well as on international documents. Both crashes of military airplanes and commercial airplanes contribute to the plant risk. For the determination of the plant-related crash rate, the air traffic is divided into three categories: the landing and takeoff phase, the air-lane and waiting-loop traffic, and the free air traffic; the airplanes are divided into different types and weight classes. (orig./GL) [de

  11. Random analysis of bearing capacity of square footing using the LAS procedure

    Kawa, Marek; Puła, Wojciech; Suska, Michał

    2016-09-01

    In the present paper, a three-dimensional problem of the bearing capacity of a square footing on a random soil medium is analyzed. The random fields of the strength parameters c and φ are generated using the LAS procedure (Local Average Subdivision, Fenton and Vanmarcke 1990). The procedure was re-implemented by the authors in the Mathematica environment in order to combine it with a commercial program. Since the procedure is still being tested, the random field has been assumed to be one-dimensional: the strength properties of the soil are random in the vertical direction only. Individual realizations of the bearing-capacity boundary problem, with the strength parameters of the medium defined by the above procedure, are solved using the FLAC3D software. The analysis is performed for two qualitatively different cases, namely for purely cohesive and cohesive-frictional soils. For the latter case the friction angle and cohesion have been assumed to be independent random variables. For these two cases the random square footing bearing capacity results have been obtained for a range of fluctuation scales from 0.5 m to 10 m. Each time 1000 Monte Carlo realizations have been performed. The obtained results allow not only the mean and variance but also the probability density function to be estimated. An example of the application of this function to a reliability calculation is presented in the final part of the paper.
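    The FLAC3D boundary-value solutions obviously cannot be reproduced here. Purely to illustrate the Monte Carlo layer of the procedure, the sketch below uses a closed-form bearing-capacity formula as a stand-in for the numerical model and simple random sampling as a stand-in for the LAS fields; all parameter values are assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n_sim = 1000                                   # number of Monte Carlo realizations

# Hypothetical strength-parameter statistics (not values from the paper).
c   = rng.lognormal(mean=np.log(20.0), sigma=0.3, size=n_sim)        # cohesion, kPa
phi = np.radians(rng.normal(loc=25.0, scale=3.0, size=n_sim))        # friction angle, rad

# Closed-form stand-in for the numerical model: Terzaghi-type bearing capacity
# of a square surface footing (B = 1 m, unit weight 18 kN/m^3).
B, gamma = 1.0, 18.0
Nq = np.exp(np.pi * np.tan(phi)) * np.tan(np.pi / 4 + phi / 2) ** 2
Nc = (Nq - 1.0) / np.tan(phi)
Ng = 2.0 * (Nq + 1.0) * np.tan(phi)
qu = 1.3 * c * Nc + 0.4 * gamma * B * Ng       # shape factors for a square footing

print("mean =", qu.mean(), "kPa; std =", qu.std())
hist, edges = np.histogram(qu, bins=30, density=True)   # empirical pdf estimate
```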

  12. Risk analysis procedure for post-wildfire natural hazards in British Columbia

    Jordan, Peter

    2010-05-01

    Following a severe wildfire season in 2003, and several subsequent damaging debris flow and flood events, the British Columbia Forest Service developed a procedure for analysing risks to public safety and infrastructure from such events. At the same time, the Forest Service undertook a research program to determine the extent of post-wildfire hazards, and examine the hydrologic and geomorphic processes contributing to the hazards. The risk analysis procedure follows the Canadian Standards Association decision-making framework for risk management (which in turn is based on international standards). This has several steps: identification of risk, risk analysis and estimation, evaluation of risk tolerability, developing control or mitigation strategies, and acting on these strategies. The Forest Service procedure deals only with the first two steps. The results are passed on to authorities such as the Provincial Emergency Program and local government, who are responsible for evaluating risks, warning residents, and applying mitigation strategies if appropriate. The objective of the procedure is to identify and analyse risks to public safety and infrastructure. The procedure is loosely based on the BAER (burned area emergency response) program in the USA, with some important differences. Our procedure focuses on identifying risks and warning affected parties, not on mitigation activities such as broadcast erosion control measures. Partly this is due to limited staff and financial resources. Also, our procedure is not multi-agency, but is limited to wildfires on provincial forest land; in British Columbia about 95% of forest land is in the publicly-owned provincial forest. Each fire season, wildfires are screened by size and proximity to values at risk such as populated areas. For selected fires, when the fire is largely contained, the procedure begins with an aerial reconnaissance of the fire, and photography with a hand-held camera, which can be used to make a

  13. Analysis of genetic variation and potential applications in genome-scale metabolic modeling

    Cardoso, Joao; Andersen, Mikael Rørdam; Herrgard, Markus

    2015-01-01

    Genetic variation is the motor of evolution and allows organisms to overcome the environmental challenges they encounter. It can be both beneficial and harmful in the process of engineering cell factories for the production of proteins and chemicals. Throughout the history of biotechnology, there have been efforts to exploit genetic variation in our favor to create strains with favorable phenotypes. Genetic variation can either be present in natural populations or it can be artificially created by mutagenesis and selection or adaptive laboratory evolution. On the other hand, unintended genetic ... scale and resolution by re-sequencing thousands of strains systematically. In this article, we review challenges in the integration and analysis of large-scale re-sequencing data, present an extensive overview of bioinformatics methods for predicting the effects of genetic variants on protein function ...

  14. Variation in practice: an analysis of Scottish Surgical Profiles ENT data.

    Yeo, J C L; Ah-See, K W; Mackenzie, K

    2013-02-01

    Variation in otolaryngology intervention rates is reported in the Scottish Surgical Profiles Project. Tonsillectomy is one of the selected key indicator procedures. The variation in practice was discussed nationally at the Scottish Otolaryngology Society summer meetings in 2009 and 2010. NHS Grampian had a significantly higher tonsillectomy rate compared with other Scottish NHS boards. To determine the accuracy of NHS Grampian data reported by the Information Service Division (ISD) and to record the appropriateness of listing of patients for tonsillectomy with reference to the Scottish Intercollegiate Guidelines Network (SIGN). Retrospective review of case notes and surgical records of patients who had undergone tonsillectomy between March 2007 and March 2008 in NHS Grampian. Between March 2007 and March 2008, 509 tonsillectomy cases were performed in NHS Grampian. This corresponded to the data received from ISD. 87% of tonsillectomies performed were compliant with SIGN guidelines. The Scottish otolaryngology clinicians have found the reporting of the intervention rates stimulating and challenging. Discussion of the surgical profile project regularly at national specialty meetings resulted in a preliminary detailed targeted audit of those who were persistent outliers for tonsillectomy. This refuted the presumed reasons for this variation, namely inaccurate figures from ISD and inappropriate listings by clinicians.

  15. Analysis of interfraction and intrafraction variation during tangential breast irradiation with an electronic portal imaging device

    Smith, Ryan P.; Bloch, Peter; Harris, Eleanor E.; McDonough, James; Sarkar, Abhirup; Kassaee, Alireza; Avery, Steven; Solin, Lawrence J.

    2005-01-01

    Purpose: To evaluate the daily setup variation and the anatomic movement of the heart and lungs during breast irradiation with tangential photon beams, as measured with an electronic portal imaging device. Methods and materials: Analysis of 1,709 portal images determined changes in the radiation field during a treatment course in 8 patients. Values obtained for every image included the central lung distance (CLD) and the area of lung and heart within the irradiated field. The data from these measurements were used to evaluate variation in setup between treatment days and motion due to respiration and/or patient movement during treatment delivery. Results: The effect of respiratory motion and movement during treatment was minimal: the maximum range in CLD for any patient on any day was 0.25 cm. The variation caused by day-to-day setup was greater, with CLD values for patients ranging from 0.59 cm to 2.94 cm. Similar findings were found for heart and lung areas. Conclusions: There is very little change in CLD and corresponding lung and heart area during individual radiation treatment fractions in breast tangential fields, compared with a relatively greater amount of variation that occurs between days.
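    The abstract does not restate the exact statistics used; a common way to summarize such portal-imaging measurements is to separate the inter-patient (systematic) and day-to-day (random) components of the CLD, sketched here with hypothetical numbers:

```python
import numpy as np

# Hypothetical daily CLD measurements (cm): rows = patients, columns = fractions.
cld = np.array([
    [1.9, 2.1, 2.3, 1.8, 2.0],
    [1.2, 1.5, 1.1, 1.4, 1.3],
    [2.6, 2.9, 2.4, 2.8, 2.7],
])

patient_means = cld.mean(axis=1)
patient_sds   = cld.std(axis=1, ddof=1)

group_mean       = patient_means.mean()               # overall mean CLD
systematic_sigma = patient_means.std(ddof=1)          # inter-patient (systematic) spread
random_sigma     = np.sqrt(np.mean(patient_sds**2))   # day-to-day (random) component

print(group_mean, systematic_sigma, random_sigma)
```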

  16. AFLP analysis of Cynodon dactylon (L.) Pers. var. dactylon genetic variation.

    Wu, Y Q; Taliaferro, C M; Bai, G H; Anderson, M P

    2004-08-01

    Cynodon dactylon (L.) Pers. var. dactylon (common bermudagrass) is geographically widely distributed between about lat 45 degrees N and lat 45 degrees S, penetrating to about lat 53 degrees N in Europe. The extensive variation of morphological and adaptive characteristics of the taxon is substantially documented, but information is lacking on DNA molecular variation in geographically disparate forms. Accordingly, this study was conducted to assess molecular genetic variation and genetic relatedness among 28 C. dactylon var. dactylon accessions originating from 11 countries on 4 continents (Africa, Asia, Australia, and Europe). A fluorescence-labeled amplified fragment length polymorphism (AFLP) DNA profiling method was used to detect the genetic diversity and relatedness. On the basis of 443 polymorphic AFLP fragments from 8 primer combinations, the accessions were grouped into clusters and subclusters associating with their geographic origins. Genetic similarity coefficients (SC) for the 28 accessions ranged from 0.53 to 0.98. Accessions originating from Africa, Australia, Asia, and Europe formed major groupings as indicated by cluster and principal coordinate analysis. Accessions from Australia and Asia, though separately clustered, were relatively closely related and most distantly related to accessions of European origin. African accessions formed two distant clusters and had the greatest variation in genetic relatedness relative to accessions from other geographic regions. Sampling the full extent of genetic variation in C. dactylon var. dactylon would require extensive germplasm collection in the major geographic regions of its distributional range.
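    As an illustration of how such similarity coefficients and clusterings are typically computed from a binary AFLP fragment matrix (the Dice coefficient and UPGMA linkage below are common choices, assumed here rather than taken from the paper), a short sketch could be:

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage, dendrogram

# Hypothetical 0/1 AFLP fragment matrix: rows = accessions, columns = polymorphic peaks.
rng = np.random.default_rng(0)
aflp = rng.integers(0, 2, size=(28, 443)).astype(bool)

# Dice similarity coefficient SC = 2a / (2a + b + c), obtained from the Dice distance.
dist = pdist(aflp, metric="dice")
similarity = 1.0 - dist
print("SC range:", similarity.min(), "to", similarity.max())

# UPGMA clustering of accessions, as commonly used for AFLP relatedness studies.
tree = linkage(dist, method="average")
order = dendrogram(tree, no_plot=True)["leaves"]   # cluster order without plotting
print(order)
```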

  17. Analysis of Geomagnetic Field Variations during Total Solar Eclipses Using INTERMAGNET Data

    KIM, J. H.; Chang, H. Y.

    2017-12-01

    We investigate variations of the geomagnetic field observed by INTERMAGNET geomagnetic observatories over which the totality path passed during a solar eclipse. We compare results acquired by 6 geomagnetic observatories during 4 total solar eclipses (11 August 1999, 1 August 2008, 11 July 2010, and 20 March 2015) in terms of geomagnetic and solar ecliptic parameters. These are the only total solar eclipses during which the umbra of the moon swept over an INTERMAGNET geomagnetic observatory while variations of the geomagnetic field were simultaneously recorded. We have confirmed previous findings that an increase in BY and decreases in BX, BZ and F are conspicuous. Interestingly, we have noted that variations of the geomagnetic field components observed during the total solar eclipse at Isla de Pascua Mataveri (Easter Island) in Chile (IPM) in the southern hemisphere show, on the contrary, a distinct decrease in BY and increases in BX and BZ. We have found, however, that the variations of BX, BY, BZ and F observed at Hornsund in Norway (HRN) seem to be dominated by other geomagnetic activity. In addition, we have attempted to detect signatures of influence on the temporal behavior of the geomagnetic field variation during the solar eclipse by employing the wavelet analysis technique. Finally, we conclude by pointing out that despite apparent success a more sophisticated and reliable algorithm is required before implementation in order to make quantitative comparisons.

  18. Variation compensation and analysis on diaphragm curvature analysis for emphysema quantification on whole lung CT scans

    Keller, Brad M.; Reeves, Anthony P.; Barr, R. Graham; Yankelevitz, David F.; Henschke, Claudia I.

    2010-03-01

    CT scans allow for the quantitative evaluation of the anatomical bases of emphysema. Recently, a non-density-based geometric measurement of lung diaphragm curvature has been proposed as a method for the quantification of emphysema from CT. This work analyzes the variability of diaphragm curvature and evaluates the effectiveness of a compensation methodology for the reduction of this variability, as compared to the emphysema index. Using a dataset of 43 scan pairs with less than a 100-day time interval between scans, we find that diaphragm curvature had a trend towards lower overall variability than the emphysema index (95% CI: -9.7 to +14.7 vs. -15.8 to +12.0), and that the variation of both measures was reduced after compensation. We conclude that the variation of the new measure can be considered comparable to the established measure and that the compensation can successfully reduce the apparent variation of quantitative measures.

  19. Cath lab costs in patients undergoing percutaneous coronary angioplasty - detailed analysis of consecutive procedures.

    Dziki, Beata; Miechowicz, Izabela; Iwachów, Piotr; Kuzemczak, Michał; Kałmucki, Piotr; Szyszka, Andrzej; Baszko, Artur; Siminiak, Tomasz

    2017-01-01

    Costs of percutaneous coronary interventions (PCI) have an important impact on health care expenditures. Despite the present stress upon the cost-effectiveness issues in medicine, few comprehensive data exist on costs and resource use in different clinical settings. To assess catheterisation laboratory costs related to use of drugs and single-use devices in patients undergoing PCI due to coronary artery disease. Retrospective analysis of 1500 consecutive PCIs (radial approach, n = 1103; femoral approach, n = 397) performed due to ST segment elevation myocardial infarction (STEMI; n = 345) and non ST-segment elevation myocardial infarction (NSTEMI; n = 426) as well as unstable angina (UA; n = 489) and stable angina (SA; n = 241) was undertaken. Comparative cost analysis was performed and shown in local currency units (PLN). The cath lab costs were higher in STEMI (4295.01 ± 2384.54PLN, p costs were positively correlated with X-ray dose, fluoroscopy, and total procedure times. Patients' age negatively correlated with cath lab costs in STEMI/NSTEMI patients. Cath lab costs were higher in STEMI patients compared to other groups. In STEMI/NSTEMI they were lower in older patients. In all analysed groups costs were related to the level of procedural difficulty. In female patients, the costs of PCI performed via radial approach were higher compared to femoral approach. Despite younger age, male patients underwent more expensive procedures.

  20. Effect of music in endoscopy procedures: systematic review and meta-analysis of randomized controlled trials.

    Wang, Man Cai; Zhang, Ling Yi; Zhang, Yu Long; Zhang, Ya Wu; Xu, Xiao Dong; Zhang, You Cheng

    2014-10-01

    Endoscopies are common clinical examinations that are somewhat painful and even cause fear and anxiety for patients. We performed this systematic review and meta-analysis of randomized controlled trials to determine the effect of music on patients undergoing various endoscopic procedures. We searched the Cochrane Library, Issue 6, 2013, PubMed, and EMBASE databases up to July 2013. Randomized controlled trials comparing endoscopies, with and without the use of music, were included. Two authors independently abstracted data and assessed risk of bias. Subgroup analyses were performed to examine the impact of music on different types of endoscopic procedures. Twenty-one randomized controlled trials involving 2,134 patients were included. The overall effect of music on patients undergoing a variety of endoscopic procedures significantly improved pain score (weighted mean difference [WMD] = -1.53, 95% confidence interval [CI] [-2.53, -0.53]), anxiety (WMD = -6.04, 95% CI [-9.61, -2.48]), heart rate (P = 0.01), arterial pressure (P music group, compared with the control group. Furthermore, music had little effect for patients undergoing colposcopy and bronchoscopy in the subanalysis. Our meta-analysis suggested that music may offer benefits for patients undergoing endoscopy, except in colposcopy and bronchoscopy. Wiley Periodicals, Inc.

  1. Secondary Analysis of Audio Data. Technical Procedures for Virtual Anonymization and Pseudonymization

    Henning Pätzold

    2005-01-01

    Full Text Available Qualitative material presented as audio data requires a greater degree of protection of anonymity than, for example, textual data. Apart from the verbal content, it carries paraverbal aspects including voice characteristics, thus making it easier to identify the speaker. This complicates secondary analysis or reanalysis conducted by researchers who were not involved in the data collection. Difficulties increase if the chances are high that the researcher and the interviewee come into contact, for example through a meeting. This paper describes the technical procedures that are used to modify the sound of the audio source in a way that reduces the possibility of recognition (i.e. to a level similar to that of a carefully written transcript). A discussion of the technical possibilities of this procedure along with an exploration of the boundaries of anonymization is presented. URN: urn:nbn:de:0114-fqs0501249
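    The paper's actual toolchain is not named in this excerpt. As one simple illustration of the kind of voice modification involved, a pitch shift applied to a recording with the librosa library (file names and shift amount are placeholders) might look like:

```python
# Minimal sketch of one anonymization step: shifting the pitch of an interview
# recording so that the speaker's voice is harder to recognize. File names are placeholders.
import librosa
import soundfile as sf

y, sr = librosa.load("interview_raw.wav", sr=None)            # keep original sample rate
y_shifted = librosa.effects.pitch_shift(y, sr=sr, n_steps=3)  # shift up by 3 semitones
sf.write("interview_anonymized.wav", y_shifted, sr)
```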

  2. Cost analysis of procedures related to the management of renal artery stenosis from various perspectives

    Helvoort-Postulart, Debby van; Dirksen, Carmen D.; Kessels, Alfons G.H.; Kroon, Abraham A.; Leeuw, Peter W. de; Nelemans, Patricia J.; Engelshoven, Jos M.A. van; Myriam Hunink, M.G.

    2006-01-01

    To determine the costs associated with the diagnostic work-up and percutaneous revascularization of renal artery stenosis from various perspectives. A prospective multicenter comparative study was conducted between 1998 and 2001. A total of 402 hypertensive patients with suspected renal artery stenosis were included. The costs of computed tomography angiography (CTA), magnetic resonance angiography (MRA), digital subtraction angiography (DSA), and percutaneous revascularization were assessed. From the societal perspective, DSA was the most costly (EUR 1,721) and CTA the least costly diagnostic technique (EUR 424). CTA was the least costly imaging procedure irrespective of the perspective used. The societal costs associated with percutaneous renal artery revascularization ranged from EUR 2,680 to EUR 6,172. Overall the radiology department incurred the largest proportion of the total societal costs. For the management of renal artery stenosis, performing the analysis from different perspectives leads to the same conclusion concerning the least costly diagnostic imaging and revascularization procedure. (orig.)

  3. Simplified Procedure For The Free Vibration Analysis Of Rectangular Plate Structures With Holes And Stiffeners

    Cho Dae Seung

    2015-04-01

    Full Text Available Thin and thick plates, plates with holes, stiffened panels and stiffened panels with holes are primary structural members in almost all fields of engineering: civil, mechanical, aerospace, naval, ocean etc. In this paper, a simple and efficient procedure for the free vibration analysis of such elements is presented. It is based on the assumed mode method and can handle different plate thickness, various shapes and sizes of holes, different framing sizes and types as well as different combinations of boundary conditions. Natural frequencies and modes are determined by solving an eigenvalue problem of a multi-degree-of-freedom system matrix equation derived by using Lagrange’s equations. Mindlin theory is applied for a plate and Timoshenko beam theory for stiffeners. The applicability of the method in the design procedure is illustrated with several numerical examples obtained by the in-house developed code VAPS. Very good agreement with standard commercial finite element software is achieved.
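    The in-house code VAPS is not shown; the core numerical step described, solving the eigenvalue problem of the multi-degree-of-freedom matrix equation obtained from Lagrange's equations, amounts to a generalized eigenproblem, sketched here for a toy 3-DOF system with invented mass and stiffness matrices:

```python
import numpy as np
from scipy.linalg import eigh

# Toy mass and stiffness matrices for a 3-DOF system (illustrative values only);
# in the assumed-mode method these come from the chosen admissible functions.
M = np.diag([2.0, 1.5, 1.0])
K = np.array([[ 400., -200.,    0.],
              [-200.,  350., -150.],
              [   0., -150.,  150.]])

# Generalized eigenvalue problem K x = omega^2 M x.
eigvals, eigvecs = eigh(K, M)
omega = np.sqrt(eigvals)                 # natural circular frequencies (rad/s)
freqs_hz = omega / (2 * np.pi)
print("natural frequencies (Hz):", freqs_hz)
# Columns of `eigvecs` are the corresponding mode shapes.
```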

  4. Procedure for conducting probabilistic safety assessment: level 1 full power internal event analysis

    Jung, Won Dae; Lee, Y. H.; Hwang, M. J. [and others]

    2003-07-01

    This report provides guidance on conducting a Level I PSA for internal events in NPPs, based on the method and procedure used in the PSA for the design of the Korea Standard Nuclear Plants (KSNPs). A Level I PSA delineates the accident sequences leading to core damage and estimates their frequencies. It has been directly used for assessing and modifying system safety and reliability as a key and basic part of PSA. Also, Level I PSA provides insights into design weaknesses and into ways of preventing core damage, which in most cases is the precursor to major accidents. Level I PSA has therefore been used as the essential technical basis for risk-informed applications in NPPs. The report consists of six major procedural steps for Level I PSA: familiarization with the plant, initiating event analysis, event tree analysis, system fault tree analysis, reliability data analysis, and accident sequence quantification. The report is intended to assist technical persons performing Level I PSA for NPPs. A particular aim is to promote a standardized framework, terminology and form of documentation for PSAs. On the other hand, this report would be useful for managers or regulatory persons involved in risk-informed regulation, and also for conducting PSAs for other industries.
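    As a toy illustration of the final step listed above (accident sequence quantification), the sketch below combines an initiating-event frequency with system failure probabilities along simplified event-tree sequences; all numbers and the sequence logic are invented for illustration:

```python
# Toy illustration of accident-sequence quantification (values are invented):
# an initiating-event frequency combined with system unavailabilities from
# fault trees, summed over the sequences that lead to core damage.
initiating_event_freq = 1.0e-2          # per reactor-year (hypothetical)

p_fail = {                               # system unavailabilities from fault trees
    "reactor_trip": 3.0e-5,
    "aux_feedwater": 2.0e-3,
    "feed_and_bleed": 1.0e-2,
}

# Simplified event tree: core damage if auxiliary feedwater AND feed-and-bleed both
# fail, or if the reactor fails to trip.
seq1 = initiating_event_freq * p_fail["aux_feedwater"] * p_fail["feed_and_bleed"]
seq2 = initiating_event_freq * p_fail["reactor_trip"]

core_damage_freq = seq1 + seq2
print(f"CDF ≈ {core_damage_freq:.2e} per reactor-year")
```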

  5. Contribution of the ergonomic analysis to the improvement of the design of operating procedures in nuclear power plants

    Dien, Y.; Montmayeul, R.

    1992-11-01

    The design of operating procedures for continuous processes is much too often based on implicit assumptions both concerning the operators and the operating conditions that must be dealt with. The merit of the ergonomic approach to the design of procedures is to take account of the way the various operators actually use operating procedures. The actual use is determined from the analysis of on-site operation (normal and incident operating conditions) and the analysis of full-scale simulator tests (incident operating conditions). The introduction of the ergonomic approach into procedure design results in new design principles being proposed.

  6. Uncertainty Analysis of A Flood Risk Mapping Procedure Applied In Urban Areas

    Krause, J.; Uhrich, S.; Bormann, H.; Diekkrüger, B.

    In the framework of the IRMA-Sponge program the presented study was part of the joint research project FRHYMAP (flood risk and hydrological mapping). A simple conceptual flooding model (FLOODMAP) has been developed to simulate flooded areas besides rivers within cities. FLOODMAP requires a minimum of input data (digital elevation model (DEM), river line, water level plain) and parameters and calculates the flood extent as well as the spatial distribution of flood depths. Of course the simulated model results are affected by errors and uncertainties. Possible sources of uncertainties are the model structure, model parameters and input data. Thus, after the model validation (comparison of the simulated to the observed flood extent, taken from airborne pictures), the uncertainty of the essential input data set (digital elevation model) was analysed. Monte Carlo simulations were performed to assess the effect of uncertainties concerning the statistics of DEM quality and to derive flooding probabilities from the set of simulations. The questions concerning the minimum resolution of a DEM required for flood simulation and concerning the best aggregation procedure for a given DEM were answered by comparing the results obtained using all available standard GIS aggregation procedures. Seven different aggregation procedures were applied to high resolution DEMs (1-2 m) in three cities (Bonn, Cologne, Luxembourg). Based on this analysis the effect of 'uncertain' DEM data was estimated and compared with other sources of uncertainties. Especially socio-economic information and monetary transfer functions required for a damage risk analysis show a high uncertainty. Therefore this study helps to analyse the weak points of the flood risk and damage risk assessment procedure.
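    As a sketch of the Monte Carlo treatment of DEM uncertainty described above (ignoring FLOODMAP's other components such as flow connectivity), per-cell flooding probabilities can be derived by repeatedly perturbing the DEM with an assumed vertical-error model; all values below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 100 x 100 cell DEM (metres) and a flat water-level plane, mimicking
# FLOODMAP's minimal input set; the vertical-error statistics are assumptions,
# not values from the study.
dem = np.cumsum(rng.normal(0, 0.05, size=(100, 100)), axis=0) + 50.0
water_level = 51.0
dem_sigma = 0.5                          # assumed DEM vertical error (m)

n_sim = 500
flood_count = np.zeros_like(dem)
for _ in range(n_sim):
    perturbed = dem + rng.normal(0.0, dem_sigma, size=dem.shape)
    flood_count += (perturbed < water_level)

flood_probability = flood_count / n_sim  # per-cell probability of being flooded
print("cells flooded with p > 0.5:", int((flood_probability > 0.5).sum()))
```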

  7. Variation and diversity in Homo erectus: a 3D geometric morphometric analysis of the temporal bone.

    Terhune, Claire E; Kimbel, William H; Lockwood, Charles A

    2007-07-01

    Although the level of taxonomic diversity within the fossil hominin species Homo erectus (sensu lato) is continually debated, there have been relatively few studies aiming to quantify the morphology of this species. Instead, most researchers have relied on qualitative descriptions or the evaluation of nonmetric characters, which in many cases display continuous variation. Also, only a few studies have used quantitative data to formally test hypotheses regarding the taxonomic composition of the "erectus" hypodigm. Despite these previous analyses, however, and perhaps in part due to these varied approaches for assessing variation within specimens typically referred to H. erectus (sensu lato) and the general lack of rigorous statistical testing of how variation within this taxon is partitioned, there is currently little consensus regarding whether this group is a single species, or whether it should instead be split into separate temporal or geographically delimited taxa. In order to evaluate possible explanations for variation within H. erectus, we tested the general hypothesis that variation within the temporal bone morphology of H. erectus is consistent with that of a single species, using great apes and humans as comparative taxa. Eighteen three-dimensional (3D) landmarks of the temporal bone were digitized on a total of 520 extant and fossil hominid crania. Landmarks were registered by Generalized Procrustes Analysis, and Procrustes distances were calculated for comparisons of individuals within and between the extant taxa. Distances between fossil specimens and between a priori groupings of fossils were then compared to the distances calculated within the extant taxa to assess the variation within the H. erectus sample relative to that of known species, subspecies, and populations. Results of these analyses indicate that shape variation within the entire H. erectus sample is generally higher than extant hominid intraspecific variation, and putative H. ergaster
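    The study's Generalized Procrustes Analysis of the full sample is not reproduced here; as a minimal illustration of how a Procrustes distance between two landmark configurations is obtained, the sketch below superimposes two hypothetical 18-landmark configurations with SciPy:

```python
import numpy as np
from scipy.spatial import procrustes

# Hypothetical 18-landmark temporal-bone configurations (x, y, z) for two crania.
rng = np.random.default_rng(0)
crania_a = rng.normal(size=(18, 3))
crania_b = crania_a + rng.normal(scale=0.05, size=(18, 3))   # a slightly different shape

# Ordinary Procrustes superimposition of one configuration onto the other;
# `disparity` is the sum of squared landmark differences after standardization.
_, _, disparity = procrustes(crania_a, crania_b)
procrustes_distance = np.sqrt(disparity)
print(procrustes_distance)
# In the study, pairwise distances of this kind (from a Generalized Procrustes fit of
# the whole sample) are compared within and between extant taxa and fossil groupings.
```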

  8. PGen: large-scale genomic variations analysis workflow and browser in SoyKB.

    Liu, Yang; Khan, Saad M; Wang, Juexin; Rynge, Mats; Zhang, Yuanxun; Zeng, Shuai; Chen, Shiyuan; Maldonado Dos Santos, Joao V; Valliyodan, Babu; Calyam, Prasad P; Merchant, Nirav; Nguyen, Henry T; Xu, Dong; Joshi, Trupti

    2016-10-06

    With the advances in next-generation sequencing (NGS) technology and significant reductions in sequencing costs, it is now possible to sequence large collections of germplasm in crops for detecting genome-scale genetic variations and to apply the knowledge towards improvements in traits. To efficiently facilitate large-scale NGS resequencing data analysis of genomic variations, we have developed "PGen", an integrated and optimized workflow using the Extreme Science and Engineering Discovery Environment (XSEDE) high-performance computing (HPC) virtual system, iPlant cloud data storage resources and Pegasus workflow management system (Pegasus-WMS). The workflow allows users to identify single nucleotide polymorphisms (SNPs) and insertion-deletions (indels), perform SNP annotations and conduct copy number variation analyses on multiple resequencing datasets in a user-friendly and seamless way. We have developed both a Linux version in GitHub ( https://github.com/pegasus-isi/PGen-GenomicVariations-Workflow ) and a web-based implementation of the PGen workflow integrated within the Soybean Knowledge Base (SoyKB), ( http://soykb.org/Pegasus/index.php ). Using PGen, we identified 10,218,140 single-nucleotide polymorphisms (SNPs) and 1,398,982 indels from analysis of 106 soybean lines sequenced at 15X coverage. 297,245 non-synonymous SNPs and 3330 copy number variation (CNV) regions were identified from this analysis. SNPs identified using PGen from additional soybean resequencing projects adding to 500+ soybean germplasm lines in total have been integrated. These SNPs are being utilized for trait improvement using genotype to phenotype prediction approaches developed in-house. In order to browse and access NGS data easily, we have also developed an NGS resequencing data browser ( http://soykb.org/NGS_Resequence/NGS_index.php ) within SoyKB to provide easy access to SNP and downstream analysis results for soybean researchers. PGen workflow has been optimized for the most

  9. Advances in variational and hemivariational inequalities theory, numerical analysis, and applications

    Migórski, Stanisław; Sofonea, Mircea

    2015-01-01

    Highlighting recent advances in variational and hemivariational inequalities with an emphasis on theory, numerical analysis and applications, this volume serves as an indispensable resource to graduate students and researchers interested in the latest results from recognized scholars in this relatively young and rapidly-growing field. Particularly, readers will find that the volume’s results and analysis present valuable insights into the fields of pure and applied mathematics, as well as civil, aeronautical, and mechanical engineering. Researchers and students will find new results on well posedness to stationary and evolutionary inequalities and their rigorous proofs. In addition to results on modeling and abstract problems, the book contains new results on the numerical methods for variational and hemivariational inequalities. Finally, the applications presented illustrate the use of these results in the study of miscellaneous mathematical models which describe the contact between deformable bodies and a...

  10. Functional analysis and applied optimization in Banach spaces applications to non-convex variational models

    Botelho, Fabio

    2014-01-01

    This book introduces the basic concepts of real and functional analysis. It presents the fundamentals of the calculus of variations, convex analysis, duality, and optimization that are necessary to develop applications to physics and engineering problems. The book includes introductory and advanced concepts in measure and integration, as well as an introduction to Sobolev spaces. The problems presented are nonlinear, with non-convex variational formulation. Notably, the primal global minima may not be attained in some situations, in which cases the solution of the dual problem corresponds to an appropriate weak cluster point of minimizing sequences for the primal one. Indeed, the dual approach more readily facilitates numerical computations for some of the selected models. While intended primarily for applied mathematicians, the text will also be of interest to engineers, physicists, and other researchers in related fields.

  11. Chronic subdural hematoma: a systematic review and meta-analysis of surgical procedures.

    Liu, Weiming; Bakker, Nicolaas A; Groen, Rob J M

    2014-09-01

    In this paper the authors systematically evaluate the results of different surgical procedures for chronic subdural hematoma (CSDH). The MEDLINE, Embase, Cochrane Central Register of Controlled Trials, and other databases were scrutinized according to the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analysis) statement, after which only randomized controlled trials (RCTs) and quasi-RCTs were included. At least 2 different neurosurgical procedures in the management of chronic subdural hematoma (CSDH) had to be evaluated. Included studies were assessed for the risk of bias. Recurrence rates, complications, and outcome including mortality were taken as outcome measures. Statistical heterogeneity in each meta-analysis was assessed using the T(2) (tau-squared), I(2), and chi-square tests. The DerSimonian-Laird method was used to calculate the summary estimates using the fixed-effect model in meta-analysis. Of the 297 studies identified, 19 RCTs were included. Of them, 7 studies evaluated the use of postoperative drainage, of which the meta-analysis showed a pooled OR of 0.36 (95% CI 0.21-0.60; p < 0.001) in favor of drainage. Four studies compared twist drill and bur hole procedures. No significant differences between the 2 methods were present, but heterogeneity was considered to be significant. Three studies directly compared the use of irrigation before drainage. A fixed-effects meta-analysis showed a pooled OR of 0.49 (95% CI 0.21-1.14; p = 0.10) in favor of irrigation. Two studies evaluated postoperative posture. The available data did not reveal a significant advantage in favor of the postoperative supine posture. Regarding positioning of the catheter used for drainage, it was shown that a frontal catheter led to a better outcome. One study compared duration of drainage, showing that 48 hours of drainage was as effective as 96 hours of drainage. Postoperative drainage has the advantage of reducing recurrence without increasing complications
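    As an illustration of the pooling step behind such summary estimates, the sketch below computes a fixed-effect (inverse-variance) pooled odds ratio from invented per-study counts; it is a generic meta-analysis calculation, not the authors' exact DerSimonian-Laird implementation:

```python
import numpy as np

# Hypothetical per-study recurrence counts for drainage vs. no-drainage arms:
# (events_treat, n_treat, events_ctrl, n_ctrl); values are illustrative only.
studies = [
    (10, 108, 24, 105),
    ( 9,  92, 20,  96),
    (12, 150, 27, 146),
]

log_or, weights = [], []
for a, n1, c, n2 in studies:
    b, d = n1 - a, n2 - c
    lo = np.log((a * d) / (b * c))                       # log odds ratio
    var = 1.0 / a + 1.0 / b + 1.0 / c + 1.0 / d          # its variance
    log_or.append(lo)
    weights.append(1.0 / var)                            # inverse-variance weight

log_or, weights = np.array(log_or), np.array(weights)
pooled = np.sum(weights * log_or) / np.sum(weights)      # fixed-effect pooled log OR
se = 1.0 / np.sqrt(np.sum(weights))
ci = np.exp([pooled - 1.96 * se, pooled + 1.96 * se])
print("pooled OR =", np.exp(pooled), "95% CI =", ci)
```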

  12. Meningococcal genetic variation mechanisms viewed through comparative analysis of serogroup C strain FAM18.

    Stephen D Bentley

    2007-02-01

    Full Text Available The bacterium Neisseria meningitidis is commonly found harmlessly colonising the mucosal surfaces of the human nasopharynx. Occasionally strains can invade host tissues causing septicaemia and meningitis, making the bacterium a major cause of morbidity and mortality in both the developed and developing world. The species is known to be diverse in many ways, as a product of its natural transformability and of a range of recombination and mutation-based systems. Previous work on pathogenic Neisseria has identified several mechanisms for the generation of diversity of surface structures, including phase variation based on slippage-like mechanisms and sequence conversion of expressed genes using information from silent loci. Comparison of the genome sequences of two N. meningitidis strains, serogroup B MC58 and serogroup A Z2491, suggested further mechanisms of variation, including C-terminal exchange in specific genes and enhanced localised recombination and variation related to repeat arrays. We have sequenced the genome of N. meningitidis strain FAM18, a representative of the ST-11/ET-37 complex, providing the first genome sequence for the disease-causing serogroup C meningococci; it has 1,976 predicted genes, of which 60 do not have orthologues in the previously sequenced serogroup A or B strains. Through genome comparison with Z2491 and MC58 we have further characterised specific mechanisms of genetic variation in N. meningitidis, describing specialised loci for generation of cell surface protein variants and measuring the association between noncoding repeat arrays and sequence variation in flanking genes. Here we provide a detailed view of novel genetic diversification mechanisms in N. meningitidis. Our analysis provides evidence for the hypothesis that the noncoding repeat arrays in neisserial genomes (neisserial intergenic mosaic elements) provide a crucial mechanism for the generation of surface antigen variants. Such variation will have an

  13. AFLP and MS-AFLP analysis of the variation within saffron crocus (Crocus sativus L.) germplasm.

    Matteo Busconi

    Full Text Available The presence and extent of genetic variation in saffron crocus are still debated, as testified by several contradictory articles providing contrasting results about whether or not the species is monomorphic. Remarkably, phenotypic variations have been frequently observed in the field; such variations are usually unstable and can change from one growing season to another. Considering that gene expression can be influenced both by genetic and epigenetic changes, epigenetics could be a plausible cause of the alternative phenotypes. In order to obtain new insights into this issue, we carried out a molecular marker analysis of 112 accessions from the World Saffron and Crocus Collection. The accessions were grown for at least three years in the same open field conditions. The same samples were analysed using Amplified Fragment Length Polymorphism (AFLP) and Methyl-Sensitive AFLP (MS-AFLP) in order to search for variation at the genetic (DNA sequence) and epigenetic (cytosine methylation) level. While the genetic variability was low (4.23% polymorphic peaks and twelve (12) effective different genotypes), the methyl-sensitive analysis showed the presence of high epigenetic variability (33.57% polymorphic peaks and twenty-eight (28) different effective epigenotypes). The pattern obtained by Factorial Correspondence Analysis of AFLP and, in particular, of MS-AFLP data was consistent with the geographical provenance of the accessions. Very interestingly, by focusing on the Spanish accessions, it was observed that the distribution of the accessions in the Factorial Correspondence Analysis is not random but tends to reflect the geographical origin. Two clearly defined clusters grouping accessions from the West (Toledo and Ciudad Real) and accessions from the East (Cuenca and Teruel) were clearly recognised.

  14. Statistical analysis of activation and reaction energies with quasi-variational coupled-cluster theory

    Black, Joshua A.; Knowles, Peter J.

    2018-06-01

    The performance of quasi-variational coupled-cluster (QV) theory applied to the calculation of activation and reaction energies has been investigated. A statistical analysis of results obtained for six different sets of reactions has been carried out, and the results have been compared to those from standard single-reference methods. In general, the QV methods lead to increased activation energies and larger absolute reaction energies compared to those obtained with traditional coupled-cluster theory.
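    The statistical analysis itself is straightforward to reproduce in outline: given reference and computed activation or reaction energies, the usual summary statistics (mean signed error, mean absolute deviation, root-mean-square deviation, maximum error) are tabulated per method. The sketch below uses invented numbers purely for illustration:

```python
import numpy as np

# Hypothetical activation energies (kcal/mol): reference values and two methods' results.
reference = np.array([10.2, 15.7,  8.4, 22.1, 13.3,  6.9])
method_1  = np.array([10.5, 15.1,  8.9, 21.4, 13.8,  7.3])
method_2  = np.array([10.4, 15.9,  8.6, 22.5, 13.1,  7.0])

def error_stats(calc, ref):
    err = calc - ref
    return {"MSE": err.mean(),                 # mean signed error
            "MAD": np.abs(err).mean(),         # mean absolute deviation
            "RMSD": np.sqrt((err**2).mean()),  # root-mean-square deviation
            "MAX": np.abs(err).max()}          # maximum absolute error

for name, vals in [("standard CC-like", method_1), ("QV-like", method_2)]:
    print(name, error_stats(vals, reference))
```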

  15. Cell-based land use screening procedure for regional siting analysis

    Jalbert, J.S.; Dobson, J.E.

    1976-01-01

    An energy facility site-screening methodology which permits the land resource planner to identify candidate siting areas was developed. Through the use of spatial analysis procedures and computer graphics, a selection of candidate areas is obtained. Specific sites then may be selected from among candidate areas for environmental impact analysis. The computerized methodology utilizes a cell-based geographic information system for specifying the suitability of candidate areas for an energy facility. The criteria to be considered may be specified by the user and weighted in terms of importance. Three primary computer programs have been developed. These programs produce thematic maps, proximity calculations, and suitability calculations. Programs are written so as to be transferrable to regional planning or regulatory agencies to assist in rational and comprehensive power plant site identification and analysis
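    As a toy illustration of the cell-based suitability calculation described above (the layer names, weights and threshold are invented, and the real system also produces thematic maps and proximity calculations), a weighted overlay on a grid might look like:

```python
import numpy as np

# Hypothetical cell-based layers on a common grid (values already scored 0-1,
# where 1 = most suitable); layer names and weights are illustrative only.
rng = np.random.default_rng(0)
shape = (200, 200)
layers = {
    "slope":            rng.random(shape),
    "water_proximity":  rng.random(shape),
    "population_dist":  rng.random(shape),
    "protected_areas":  rng.integers(0, 2, size=shape).astype(float),  # 0 = excluded
}
weights = {"slope": 0.3, "water_proximity": 0.3, "population_dist": 0.4}

# Weighted overlay of the scored layers, then masking of exclusion areas.
suitability = sum(weights[k] * layers[k] for k in weights)
suitability *= layers["protected_areas"]

# Candidate siting areas: cells above a chosen suitability threshold.
candidates = suitability > 0.7
print("candidate cells:", int(candidates.sum()))
```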

  16. General Staining and Segmentation Procedures for High Content Imaging and Analysis.

    Chambers, Kevin M; Mandavilli, Bhaskar S; Dolman, Nick J; Janes, Michael S

    2018-01-01

    Automated quantitative fluorescence microscopy, also known as high content imaging (HCI), is a rapidly growing analytical approach in cell biology. Because automated image analysis relies heavily on robust demarcation of cells and subcellular regions, reliable methods for labeling cells are a critical component of the HCI workflow. Labeling of cells for image segmentation is typically performed with fluorescent probes that bind DNA for nuclear-based cell demarcation or with those which react with proteins for image analysis based on whole cell staining. These reagents, along with instrument and software settings, play an important role in the successful segmentation of cells in a population for automated and quantitative image analysis. In this chapter, we describe standard procedures for labeling and image segmentation in both live and fixed cell samples. The chapter will also provide troubleshooting guidelines for some of the common problems associated with these aspects of HCI.
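    As a minimal sketch of nuclear-based segmentation of the kind described here, using the scikit-image library (the file name, Otsu threshold and size filter are illustrative choices, not the chapter's protocol):

```python
# Minimal sketch of nuclear-based image segmentation for HCI with scikit-image;
# the file name, thresholding method and size filter are illustrative assumptions.
from skimage import io, filters, measure, morphology

nuclei = io.imread("dapi_channel.tif")                 # nuclear stain image (e.g. DAPI/Hoechst)
mask = nuclei > filters.threshold_otsu(nuclei)         # global Otsu threshold
mask = morphology.remove_small_objects(mask, min_size=50)

labels = measure.label(mask)                           # one label per segmented nucleus
props = measure.regionprops(labels, intensity_image=nuclei)
print("nuclei found:", labels.max())
for p in props[:5]:
    print(p.label, p.area, p.mean_intensity)
```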

  17. Sensitivity analysis in oxidation ditch modelling: the effect of variations in stoichiometric, kinetic and operating parameters on the performance indices

    Abusam, A.A.A.; Keesman, K.J.; Straten, van G.; Spanjers, H.; Meinema, K.

    2001-01-01

    This paper demonstrates the application of the factorial sensitivity analysis methodology in studying the influence of variations in stoichiometric, kinetic and operating parameters on the performance indices of an oxidation ditch simulation model (benchmark). Factorial sensitivity analysis

  18. Anatomical Variations of the Vertebral Artery in the Upper Cervical Spine: Clinical Relevance for Procedures Targeting the C1/C2 and C2/C3 Joints.

    Elgueta, Maria Francisca; Ortiz Jimenez, Johanna; Wang, Nina Nan; Pérez Lara, Almudena; Chankowsky, Jeffrey; Charghi, Roshanak; Tran, De Q; Finlayson, Roderick J

    2018-05-01

    Accidental breach of the vertebral artery (VA) during the performance of cervical pain blocks can result in significant morbidity. Whereas anatomical variations have been described for the foraminal (V2) segment of the VA, those involving its V3 portion (between the C2 transverse process and dura) have not been investigated and may be of importance for procedures targeting the third occipital nerve or the lateral atlantoaxial joint. Five hundred computed tomography angiograms of the neck performed in patients older than 50 years for the management of cerebrovascular accident or cervical trauma (between January 2010 and May 2016) were retrospectively and independently reviewed by 2 neuroradiologists. Courses of the VA in relation to the lateral aspect of the C2/C3 joint and the posterior surface of the C1/C2 joint were examined. For the latter, any medial encroachment of the VA (or one of its branches) was noted. The presence of a VA loop between C1 and C2 and its distance from the upper border of the superior articular process (SAP) of C3 were also recorded. If the VA loop coursed posteriorly, its position in relation to 6 fields found on the lateral aspects of the articular pillars of C2 and C3 was tabulated. At the C1/C2 level, the VA coursed medially over the lateral quarter of the dorsal joint surface in 1% of subjects (0.6% and 0.4% on the left and right sides, respectively; P = 0.998). A VA loop originating between C1 and C2 was found to travel posteroinferiorly over the anterolateral aspect of the inferior articular pillar of C2 in 55.5% of patients on the left and 41.9% on the right side (P < 0.001), as well as over the SAP of C3 in 0.4% of subjects. When present in the quadrant immediately cephalad to the C3 SAP, VA loops coursed within 2.0 ± 1.5 and 3.3 ± 2.5 mm on the left and right sides, respectively, of its superior aspect (P < 0.001). The VA commonly travels adjacent to areas targeted by third occipital nerve procedures and more rarely over the

  19. Factors Associated with Variations in Population HIV Prevalence across West Africa: Findings from an Ecological Analysis

    Prudden, Holly J.; Beattie, Tara S.; Bobrova, Natalia; Panovska-Griffiths, Jasmina; Mukandavire, Zindoga; Gorgens, Marelize; Wilson, David; Watts, Charlotte H.

    2015-01-01

    Background: Population HIV prevalence across West Africa varies substantially. We assess the national epidemiological and behavioural factors associated with this. Methods: National, urban and rural data on HIV prevalence; the percentage of younger (15–24) and older (25–49) women and men reporting multiple (2+) partners in the past year; HIV prevalence among female sex workers (FSWs) and men who have bought sex in the past year (clients); and ART coverage were compiled for 13 countries. An ecological analysis using linear regression assessed which factors are associated with national variations in population female and male HIV prevalence, and with each other. Findings: National population HIV prevalence varies between 0.4–2.9% for men and 0.4–5.6% for women. ART coverage ranges from 6–23%. National variations in HIV prevalence are not shown to be associated with variations in HIV prevalence among FSWs or clients. Instead they are associated with variations in the percentage of younger and older males and females reporting multiple partners. HIV prevalence is weakly negatively associated with ART coverage, implying that increased survival is not the cause of variations in HIV prevalence. FSW and younger female HIV prevalence are associated with client population sizes, especially of older men. Younger female HIV prevalence is strongly associated with older male and female HIV prevalence. Interpretation: In West Africa, population HIV prevalence is not significantly higher in countries with high FSW HIV prevalence. Our analysis suggests higher prevalence occurs where more men buy sex, and where a higher percentage of younger women, and older men and women, have multiple partnerships. If a sexual network between clients and young females exists, clients may potentially bridge infection to younger females. HIV prevention should focus both on commercial sex and on transmission between clients and younger females with multiple partners. PMID:26698854

  20. Factors Associated with Variations in Population HIV Prevalence across West Africa: Findings from an Ecological Analysis.

    Holly J Prudden

    Full Text Available Population HIV prevalence across West Africa varies substantially. We assess the national epidemiological and behavioural factors associated with this. National, urban and rural data on HIV prevalence; the percentage of younger (15-24) and older (25-49) women and men reporting multiple (2+) partners in the past year; HIV prevalence among female sex workers (FSWs) and men who have bought sex in the past year (clients); and ART coverage were compiled for 13 countries. An ecological analysis using linear regression assessed which factors are associated with national variations in population female and male HIV prevalence, and with each other. National population HIV prevalence varies between 0.4-2.9% for men and 0.4-5.6% for women. ART coverage ranges from 6-23%. National variations in HIV prevalence are not shown to be associated with variations in HIV prevalence among FSWs or clients. Instead they are associated with variations in the percentage of younger and older males and females reporting multiple partners. HIV prevalence is weakly negatively associated with ART coverage, implying that increased survival is not the cause of variations in HIV prevalence. FSW and younger female HIV prevalence are associated with client population sizes, especially of older men. Younger female HIV prevalence is strongly associated with older male and female HIV prevalence. In West Africa, population HIV prevalence is not significantly higher in countries with high FSW HIV prevalence. Our analysis suggests higher prevalence occurs where more men buy sex, and where a higher percentage of younger women, and older men and women, have multiple partnerships. If a sexual network between clients and young females exists, clients may potentially bridge infection to younger females. HIV prevention should focus both on commercial sex and on transmission between clients and younger females with multiple partners.
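
    The ecological analysis described above is, at its core, a country-level linear regression. The sketch below illustrates that step only; the arrays are made-up placeholders, not the study's data.

    ```python
    # Sketch of an ecological (country-level) linear regression, assuming one
    # observation per country: national HIV prevalence and the share of men
    # reporting multiple partners.  Values are illustrative placeholders.
    import numpy as np

    pct_multiple_partners = np.array([5., 8., 12., 15., 20., 25., 30.])   # % of men
    hiv_prevalence        = np.array([0.4, 0.6, 0.9, 1.2, 1.8, 2.3, 2.9])  # % in women

    X = np.column_stack([np.ones_like(pct_multiple_partners), pct_multiple_partners])
    beta, *_ = np.linalg.lstsq(X, hiv_prevalence, rcond=None)
    print(f"intercept = {beta[0]:.3f}, slope = {beta[1]:.3f} per percentage point")
    ```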

  1. Genome size variation among and within Camellia species by using flow cytometric analysis.

    Hui Huang

    Full Text Available BACKGROUND: The genus Camellia, belonging to the family Theaceae, is an economically important group of flowering plants. Frequent interspecific hybridization together with polyploidization has made them taxonomically "difficult taxa". DNA content is often used to measure genome size variation and has greatly advanced our understanding of plant evolution and genome variation. The goals of this study were to investigate patterns of interspecific and intraspecific variation in DNA content and further explore genome size evolution in a phylogenetic context for the genus. METHODOLOGY/PRINCIPAL FINDINGS: The DNA amount in the genus was determined by propidium iodide flow cytometry for a total of 139 individual plants representing almost all sections of the two subgenera, Camellia and Thea. An improved WPB buffer was shown to be suitable for Camellia species; it was able to counteract the negative effects of secondary metabolites and generated high-quality results with low coefficient of variation values (CV <5%). Our results showed that different tissues (flowers, leaves and buds) and cytosolic compounds had trivial effects on the estimation of DNA amount. The DNA content of C. sinensis var. assamica was estimated by flow cytometry to be 1C = 3.01 pg, which corresponds to a genome size of about 2940 Mb. CONCLUSION: Intraspecific and interspecific variations were observed in the genus Camellia, and, as expected, the latter was larger than the former. Our study suggests a directional trend of increasing genome size in the genus Camellia, probably owing to frequent polyploidization events.
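
    The quoted genome size follows from the reported 1C value using the commonly cited conversion of roughly 978 Mbp per pg of DNA (the conversion factor is stated here as an assumption for illustration):

    ```python
    # Conversion of DNA content (pg) to genome size (Mbp), assuming the
    # commonly cited factor of ~978 Mbp per pg.
    MBP_PER_PG = 978

    c_value_pg = 3.01                    # 1C value reported for C. sinensis var. assamica
    genome_size_mbp = c_value_pg * MBP_PER_PG
    print(f"{genome_size_mbp:.0f} Mbp")  # ~2944 Mbp, consistent with the quoted ~2940 Mb
    ```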

  2. SU-E-T-139: Automated Daily EPID Exit Dose Analysis Uncovers Treatment Variations

    Olch, A [University of Southern California, Los Angeles, CA (United States)

    2015-06-15

    Purpose: To evaluate a fully automated EPID exit dose system for its ability to detect daily treatment deviations including patient setup, delivery, and anatomy changes. Methods: PerFRACTION (Sun Nuclear Corporation) is a software system that uses integrated EPID images acquired during patient treatment, automatically pulled from the Aria database and analyzed based on user-defined comparisons. It was used to monitor 20 plans comprising a total of 859 fields for 18 patients, over 251 fractions. Nine VMAT, 5 IMRT, and 6 3D plans were monitored. A gamma analysis was performed for each field within a plan, comparing the first fraction against each of the other fractions in each treatment course. A 2% dose difference, 1 mm distance-to-agreement, and 10% dose threshold were used. These tight tolerances were chosen to achieve a high sensitivity to treatment variations. A field passed if 93% of the pixels had a gamma of 1 or less. Results: Twenty-nine percent of the fields failed. The average plan passing rate was 92.5%. The average passing rate for 3D plans (84%) was lower than for VMAT or IMRT plans (average 96.2%). When fields failed, investigation revealed changes in patient anatomy or setup variations, often also leading to variations in transmission through immobilization devices. Conclusion: PerFRACTION is a fully automated system for detecting daily changes in dose transmission through the patient that requires no effort other than deploying the imager panel during treatment. A surprising number of fields failed the analysis, attributable to important treatment variations that would otherwise not be appreciated. Further study of inter-fraction treatment variations is possible and warranted. Sun Nuclear Corporation provided a license to the software described.
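
    For readers unfamiliar with the gamma criterion quoted above (2%/1 mm, 10% low-dose threshold, 93% pass rate), the following is a brute-force illustrative sketch of a global 2-D gamma analysis. It is not the PerFRACTION algorithm, and the function and parameter names are assumptions.

    ```python
    # Simplified global 2-D gamma analysis (2%/1 mm, 10% low-dose threshold).
    # `dose_ref` and `dose_eval` are dose images on the same grid;
    # `pixel_mm` is the pixel spacing.  Brute-force sketch only.
    import numpy as np

    def gamma_pass_rate(dose_ref, dose_eval, pixel_mm=0.5,
                        dd=0.02, dta_mm=1.0, threshold=0.10):
        norm = dose_ref.max()
        dd_abs = dd * norm                             # global dose-difference criterion
        search = int(np.ceil(2 * dta_mm / pixel_mm))   # search radius in pixels
        rows, cols = dose_ref.shape
        gammas = []
        for i in range(rows):
            for j in range(cols):
                if dose_ref[i, j] < threshold * norm:
                    continue                           # below low-dose threshold
                best = np.inf
                for di in range(-search, search + 1):
                    for dj in range(-search, search + 1):
                        ii, jj = i + di, j + dj
                        if not (0 <= ii < rows and 0 <= jj < cols):
                            continue
                        dist2 = ((di * pixel_mm) ** 2 + (dj * pixel_mm) ** 2) / dta_mm ** 2
                        dose2 = (dose_eval[ii, jj] - dose_ref[i, j]) ** 2 / dd_abs ** 2
                        best = min(best, dist2 + dose2)
                gammas.append(np.sqrt(best))
        gammas = np.asarray(gammas)
        return 100.0 * np.mean(gammas <= 1.0)          # percent of evaluated pixels passing

    # A field would be flagged if the returned pass rate falls below the 93% criterion.
    ```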

  3. Meta-analysis of the technical performance of an imaging procedure: guidelines and statistical methodology.

    Huang, Erich P; Wang, Xiao-Feng; Choudhury, Kingshuk Roy; McShane, Lisa M; Gönen, Mithat; Ye, Jingjing; Buckler, Andrew J; Kinahan, Paul E; Reeves, Anthony P; Jackson, Edward F; Guimaraes, Alexander R; Zahlmann, Gudrun

    2015-02-01

    Medical imaging serves many roles in patient care and the drug approval process, including assessing treatment response and guiding treatment decisions. These roles often involve a quantitative imaging biomarker, an objectively measured characteristic of the underlying anatomic structure or biochemical process derived from medical images. Before a quantitative imaging biomarker is accepted for use in such roles, the imaging procedure to acquire it must undergo evaluation of its technical performance, which entails assessment of performance metrics such as repeatability and reproducibility of the quantitative imaging biomarker. Ideally, this evaluation will involve quantitative summaries of results from multiple studies to overcome limitations due to the typically small sample sizes of technical performance studies and/or to include a broader range of clinical settings and patient populations. This paper is a review of meta-analysis procedures for such an evaluation, including identification of suitable studies, statistical methodology to evaluate and summarize the performance metrics, and complete and transparent reporting of the results. This review addresses challenges typical of meta-analyses of technical performance, particularly small study sizes, which often cause violations of assumptions underlying standard meta-analysis techniques. Alternative approaches to address these difficulties are also presented; simulation studies indicate that they outperform standard techniques when some studies are small. The meta-analysis procedures presented are also applied to actual [18F]-fluorodeoxyglucose positron emission tomography (FDG-PET) test-retest repeatability data for illustrative purposes. © The Author(s) 2014.
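
    A typical building block of such a meta-analysis is random-effects pooling of a per-study repeatability metric. The sketch below uses the DerSimonian-Laird estimator with placeholder numbers; it illustrates the general technique, not the specific methodology recommended in the paper.

    ```python
    # Illustrative DerSimonian-Laird random-effects pooling of a repeatability
    # metric (e.g. log within-subject coefficient of variation) across studies.
    # Estimates and standard errors are placeholders, not FDG-PET data.
    import numpy as np

    y  = np.array([0.10, 0.14, 0.08, 0.20])   # per-study estimates
    se = np.array([0.03, 0.05, 0.02, 0.06])   # per-study standard errors

    w_fixed = 1.0 / se**2
    y_fixed = np.sum(w_fixed * y) / np.sum(w_fixed)
    Q = np.sum(w_fixed * (y - y_fixed) ** 2)              # Cochran's Q
    df = len(y) - 1
    C = np.sum(w_fixed) - np.sum(w_fixed**2) / np.sum(w_fixed)
    tau2 = max(0.0, (Q - df) / C)                         # between-study variance

    w_re = 1.0 / (se**2 + tau2)
    y_re = np.sum(w_re * y) / np.sum(w_re)
    se_re = np.sqrt(1.0 / np.sum(w_re))
    print(f"pooled estimate = {y_re:.3f} ± {1.96 * se_re:.3f} (tau^2 = {tau2:.4f})")
    ```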

  4. U.F.F.A.: A numerical procedure for fatigue analysis according to ASME code

    Bellettato, W.; Ticozzi, C.; Zucchini, C.

    1981-01-01

    A new procedure is developed which employs some existing methodologies and introduces some new concepts. The computer code UFFA implements the resulting procedure. The first part of this paper describes the methodology used for the usage factor calculation, the second part gives a general description of the code, and the third part shows some examples and their respective results. We assume elastic behaviour of the materials and do not consider the effect of the order in which the loads are applied. Moreover, we assume the hypothesis of cumulative damage is valid, that is, we apply Miner's rule. One of the problems in the fatigue analysis of nuclear components is that the load histories contain a high number of operational cycles for which no succession in time can be specified. Therefore, the concept of a 'level' (or steady working status) was introduced, by which the load conditions can be approximated in a realistic way. As regards multiaxial cases, it can be shown that it is neither correct nor conservative to analyse the three stress differences separately and then take the maximum of the three computed usage factors as the component usage factor. Indeed, since the stresses act on the structure at the same time, the three stress-difference curves must be analysed simultaneously. The computer code can also handle the case of shear stresses (varying principal stress directions) through the ASME 'normalization' procedure. The results of the UFFA program, compared with the results of other programs currently in use, meet expectations. (orig./HP)
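
    The cumulative damage hypothesis referred to above is Miner's rule, which can be written as:

    ```latex
    % Miner's rule of linear cumulative damage: the usage factor U sums, over
    % all load levels i, the ratio of applied cycles n_i to the allowable
    % cycles N_i read from the design fatigue curve.
    U \;=\; \sum_{i} \frac{n_i}{N_i} \;\le\; 1
    ```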

  5. Use of safety analysis in the site confirmation procedure in the case of a hard rock repository

    Peltonen, E.K.

    1984-02-01

    The role of safety analysis in the confirmation procedure for a candidate radioactive waste disposal site is discussed. Items dealt with include the principal reasons for and practical goals of the use of safety analysis, the methodology of safety analysis and assessment, and the usefulness and adequacy of present safety analysis. Safety analysis is a tool which enables one to estimate quantitatively the possible radiological impacts of the disposal. The results can be compared with the criteria and conclusions drawn about suitability. Because of its systems-analytical nature, safety analysis is an effective method for revealing which factors of the disposal system are most important and which site characteristics are most critical within the lumped parameters often provided by experimental site investigation methods. Furthermore, it gives information on the accuracy needed for different site properties. This can be used to judge whether the quality and quantity of the measurements for characterization are sufficient, as well as to guide further site investigations. A more practical discussion of the applicability of safety analysis is presented through an example concerning the assessment of a Finnish candidate site for a low- and intermediate-level radioactive waste repository. (author)

  6. A procedure for partitioning bulk sediments into distinct grain-size fractions for geochemical analysis

    Barbanti, A.; Bothner, Michael H.

    1993-01-01

    A method to separate sediments into discrete size fractions for geochemical analysis has been tested. The procedures were chosen to minimize the destruction or formation of aggregates and involved gentle sieving and settling of wet samples. Freeze-drying and sonication pretreatments, known to influence aggregates, were used for comparison. Freeze-drying was found to increase the silt/clay ratio by an average of 180 percent compared to analysis of a wet sample that had been wet sieved only. Sonication of a wet sample decreased the silt/clay ratio by 51 percent. The concentrations of metals and organic carbon in the separated fractions changed depending on the pretreatment procedures in a manner consistent with the hypothesis that aggregates consist of fine-grained organic- and metal-rich particles. The coarse silt fraction of a freeze-dried sample contained 20–44 percent higher concentrations of Zn, Cu, and organic carbon than the coarse silt fraction of the wet sample. Sonication resulted in concentrations of these analytes that were 18–33 percent lower in the coarse silt fraction than found in the wet sample. Sonication increased the concentration of lead in the clay fraction by an average of 40 percent compared to an unsonicated sample. Understanding the magnitude of change caused by different analysis protocols is an aid in designing future studies that seek to interpret the spatial distribution of contaminated sediments and their transport mechanisms.

  7. Legitimization Arguments for Procedural Reforms: a semio-linguistic analysis of statement of reasons from the Civil Procedure Code of 1939 and of the draft bill of the New Civil Procedure Code of 2010.

    Matheus Guarino Sant’Anna Lima de Almeida

    2016-08-01

    Full Text Available This research analyzes the legitimization arguments that were used in the reform of Brazilian procedural codes, by comparing the texts of the statements of reasons of the Civil Procedure Code of 1939 and the draft bill of the New Civil Procedure Code. We consider these codes as milestones: the Civil Procedure Code of 1939 was the first one with a national scope; the draft bill of the New Civil Procedure Code was the first one produced during a democratic period. Our goal is to search for similarities and contrasts between the legitimization arguments used in each historical and political period, asking whether they were merely arguments to bestow legitimacy on such reforms. We use the methodological tools of the semio-linguistic analysis of speech developed by Patrick Charaudeau in his analyses of political speech in order to elucidate how the uses of language and elements of meaning in the construction of the speeches provide justification for the concept of procedure, in both 1939 and 2010. As a result, we conclude that the processes of drafting the CPC of 1939 and the New CPC, even though they are very distant in terms of political and historical context, are very close in their rhetorical construction and in their attempt to find justification and adherence. On balance, some of the differences depend on the vocabulary used when the codes were developed, their justification and the need for change.

  8. Drawing up of a procedure for vanadium determination in mussels using the neutron activation analysis method

    Seo, Daniele; Vasconcellos, Marina B.A.; Saiki, Mitiko; Catharino, Marilia G.M.; Moreira, Edson G.; Sousa, Eduinetty C.P.M. de; Pereira, Camilo D.S.

    2009-01-01

    This work establishes a procedure for obtaining reliable results in the determination of vanadium in mussels by neutron activation analysis (NAA), with a view to later application in biomonitoring of coastal pollution, particularly near petroleum terminals. To evaluate the quality of the results, the certified reference material NIST SRM 1566b Oyster Tissue was analysed. The precision of the results was also evaluated using replicate mussel samples collected on the coast of northern Sao Paulo state, Brazil. The NAA procedure consisted of irradiating 200 mg of sample together with a synthetic vanadium standard for 8 s under a thermal neutron flux of 6.6 x 10^12 n cm^-2 s^-1 at pneumatic station 4 of the IEA-R1 nuclear reactor of IPEN-CNEN/SP. After a 3 min decay, the gamma activities of the sample and the standard were measured using a hyperpure Ge semiconductor detector connected to a gamma-ray multichannel analyser. Vanadium was determined by measuring the gamma activity of 52V through its 1434.08 keV peak (half-life 3.75 min). The V concentrations were calculated by the comparative method. The results obtained indicate the viability of the NAA procedure established for the determination of vanadium in mussels.
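
    The comparative (relative) method mentioned above ratios the sample activity to that of the co-irradiated standard. A generic form of this relation, with symbols defined here for illustration and assuming identical irradiation and counting geometry, is:

    ```latex
    % Comparative (relative) NAA: the vanadium concentration in the sample is
    % obtained from the ratio of decay-corrected 52V peak activities of sample
    % and standard, scaled by the known vanadium mass in the standard and the
    % sample mass.  A_sample, A_std: measured peak activities; lambda: 52V
    % decay constant; t_d: decay times; m_std^V: vanadium mass in the standard;
    % m_sample: sample mass.
    C_{\mathrm{sample}}
      \;=\;
      \frac{A_{\mathrm{sample}}\, e^{\lambda t_{d,\mathrm{sample}}}}
           {A_{\mathrm{std}}\,    e^{\lambda t_{d,\mathrm{std}}}}
      \cdot \frac{m_{\mathrm{std}}^{\mathrm{V}}}{m_{\mathrm{sample}}}
    ```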

  9. Analysis and application of ratcheting evaluation procedure of Japanese high temperature design code DDS

    Lee, H. Y.; Kim, J. B.; Lee, J. H.

    2002-01-01

    In this study, the evaluation procedure of the Japanese DDS code, recently developed to assess the progressive inelastic deformation occurring under repeated secondary stresses, was analyzed, and the evaluation results according to DDS were compared with those of the thermal ratchet structural test carried out by KAERI in order to analyze the conservativeness of the code. The existing high temperature codes, US ASME-NH and French RCC-MR, provide ratcheting procedures only for load cases of cyclic secondary stresses under primary stresses, so they are not well suited to the actual ratcheting problem that can occur under cyclic secondary membrane stresses due to the movement of the hot free surface in a pool-type LMR. DDS explicitly provides an analysis procedure for ratcheting due to moving thermal gradients near the hot free surface. A comparison study was carried out between the results given by the DDS design code and by the structural test to investigate the conservativeness of the DDS code; it showed that the evaluation results by DDS were in good agreement with those of the structural test.

  10. New analysis procedure for fast and reliable size measurement of nanoparticles from atomic force microscopy images

    Boyd, Robert D.; Cuenat, Alexandre

    2011-01-01

    Accurate size measurement during nanoparticle production is essential for the continuing innovation, quality and safety of nano-enabled products. Size measurement by analysing a number of separate particles individually has particular advantages over ensemble methods. In the latter case nanoparticles have to be well dispersed in a fluid, and changes that may occur during analysis, such as agglomeration and degradation, will not be detected, which could lead to misleading results. Atomic force microscopy (AFM) allows imaging of particles both in air and in liquid; however, the strong interactions between the probe and the particle cause broadening of the lateral dimensions in the final image. In this paper a new procedure to measure the size of spherical nanoparticles from AFM images via vertical height measurement is described. This procedure quickly analyses hundreds of particles simultaneously and reproduces the measurements obtained from electron microscopy (EM). Nanoparticle samples that were difficult, if not impossible, to analyse with EM were successfully measured using this method. The combination of this procedure with the use of a metrological AFM moves closer to truly traceable measurements of nanoparticle dispersions.
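
    A minimal sketch of the height-based measurement idea is given below: level the background, label regions protruding above it, and take each region's maximum relief as the particle height. The function, thresholds and synthetic image are illustrative assumptions, not the published procedure.

    ```python
    # Sketch of nanoparticle height measurement from an AFM topography image.
    # Array names and thresholds are assumptions; a real procedure would
    # typically fit and subtract a background plane first.
    import numpy as np
    from scipy import ndimage

    def particle_heights(topography, background_percentile=20, min_height_nm=2.0):
        # Estimate and remove the substrate level (flat-background assumption).
        background = np.percentile(topography, background_percentile)
        relief = topography - background

        # Label connected regions that protrude above the minimum height.
        mask = relief > min_height_nm
        labels, n = ndimage.label(mask)

        # Particle height = maximum relief within each labelled region.
        heights = ndimage.maximum(relief, labels=labels, index=np.arange(1, n + 1))
        return np.asarray(heights)

    # Example with synthetic data: heights (not lateral widths) estimate diameter.
    rng = np.random.default_rng(0)
    image = rng.normal(0.0, 0.3, (256, 256))   # nm of substrate roughness
    image[100:110, 100:110] += 20.0            # a fake ~20 nm particle
    print(particle_heights(image))
    ```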

  11. Procedure for the record, calculation and analysis of costs at the Post Company of Cuba.

    María Luisa Lara Zayas

    2012-12-01

    Full Text Available The Cuban enterprise sector is immersed in important changes, which lead to a new economic model that requires increasing labour productivity and improving economic efficiency by means of the rational use of material, financial and human resources. In the present work a procedure based on the application of cost techniques is proposed for the recording, calculation and analysis of activity costs at the Post Company of Cuba in Sancti Spiritus, with the objective of obtaining greater efficiency from the rational use of resources.

  12. Criteria and procedures for validating biomathematical models of human performance and fatigue : procedures for analysis of work schedules.

    2013-01-01

    Each railroad covered by 49 CFR 228.407 must perform an analysis of the work schedules of its train employees who are engaged in commuter or intercity rail passenger transportation and identify those schedules that, if worked by such a train employee...

  13. Transcriptome analysis of the sea cucumber (Apostichopus japonicus) with variation in individual growth.

    Gao, Lei; He, Chongbo; Bao, Xiangbo; Tian, Meilin; Ma, Zhen

    2017-01-01

    The sea cucumber (Apostichopus japonicus) is an economically important aquaculture species in China. However, serious individual growth variation often causes financial losses to farmers, and the underlying genetic mechanisms are poorly understood. In the present study, an extensive transcriptome-level analysis of individual growth variation in the sea cucumber was carried out. A total of 118946 unigenes were assembled from 255861 transcripts, with an N50 of 1700. Of all unigenes, about 23% were identified with at least one significant match to known databases. Across all four pairwise comparisons, 1840 genes were found to be differentially expressed. Global hypometabolism was found to occur in the slow-growing population, on the basis of which the hypothesis was raised that the growth retardation seen in individual growth variation of the sea cucumber is a type of dormancy used to withstand adverse circumstances. In addition, pathways such as ECM-receptor interaction and focal adhesion, involved in the maintenance of cell and tissue structure and communication, were enriched. Further, 76645 SSRs, 765242 SNPs and 146886 ins-dels were detected in the current study, providing an extensive set of data for future studies of genetic mapping and selective breeding. In summary, these results provide deep insight into the molecular basis of individual growth variation in marine invertebrates and will be valuable for understanding the physiological differences of the growth process.

  14. Transcriptome analysis of the sea cucumber (Apostichopus japonicus) with variation in individual growth.

    Lei Gao

    Full Text Available The sea cucumber (Apostichopus japonicus) is an economically important aquaculture species in China. However, serious individual growth variation often causes financial losses to farmers, and the underlying genetic mechanisms are poorly understood. In the present study, an extensive transcriptome-level analysis of individual growth variation in the sea cucumber was carried out. A total of 118946 unigenes were assembled from 255861 transcripts, with an N50 of 1700. Of all unigenes, about 23% were identified with at least one significant match to known databases. Across all four pairwise comparisons, 1840 genes were found to be differentially expressed. Global hypometabolism was found to occur in the slow-growing population, on the basis of which the hypothesis was raised that the growth retardation seen in individual growth variation of the sea cucumber is a type of dormancy used to withstand adverse circumstances. In addition, pathways such as ECM-receptor interaction and focal adhesion, involved in the maintenance of cell and tissue structure and communication, were enriched. Further, 76645 SSRs, 765242 SNPs and 146886 ins-dels were detected in the current study, providing an extensive set of data for future studies of genetic mapping and selective breeding. In summary, these results provide deep insight into the molecular basis of individual growth variation in marine invertebrates and will be valuable for understanding the physiological differences of the growth process.

  15. National Variation in Urethroplasty Cost and Predictors of Extreme Cost: A Cost Analysis with Policy Implications

    Harris, Catherine R.; Osterberg, E. Charles; Sanford, Thomas; Alwaal, Amjad; Gaither, Thomas W.; McAninch, Jack W.; McCulloch, Charles E.; Breyer, Benjamin N.

    2016-01-01

    To determine which factors are associated with higher costs of the urethroplasty procedure and whether these factors have been increasing over time. Identification of determinants of extreme costs may help reduce cost while maintaining quality. We conducted a retrospective analysis using the 2001-2010 Healthcare Cost and Utilization Project-Nationwide Inpatient Sample (HCUP-NIS). The HCUP-NIS captures hospital charges, which we converted to cost using the HCUP cost-to-charge ratio. Log cost linear ...

  16. Comparative analysis of lockout programs and procedures applied to industrial machines

    Chinniah, Y.; Champoux, M.; Burlet-Vienney, D.; Daigle, R. [Institut de recherche Robert-Sauve en sante et en securite du travail, Montreal, PQ (Canada)

    2008-09-15

    In 2005, approximately 20 workers in Quebec were killed by dangerous machines. Approximately 13,000 accidents in the province were linked to the use of machines. The resulting cost associated with these accidents was estimated at $70 million to the Quebec Occupational Health and Safety Commission (CSST) in compensation and salary replacement. According to article 185 of the Quebec Occupational Health and Safety Regulation (RSST), workers intervening in hazardous zones of machines and processes during maintenance, repairs, and unjamming activities must apply lockout procedures. Lockout is defined as the placement of a lock or tag on an energy-isolating device in accordance with an established procedure, indicating that the energy-isolating device is not to be operated until removal of the lock or tag in accordance with an established procedure. This report presented a comparative analysis of lockout programs and procedures applied to industrial machines. The study attempted to answer several questions regarding the concept of lockout and its definition in the literature; the differences in legal lockout requirements among provinces and countries; different standards on lockout; the contents of lockout programs as described by different documents; and the compliance of lockout programs in a sample of Quebec industries with the Canadian standard on lockout, CSA Z460-05 (2005). The report discussed the research objectives, methodology, and results of the study. It was concluded that the concept of lockout has different meanings or definitions in the literature, especially in regulations. However, the definitions of lockout found in standards have certain similarities. 50 refs., 52 tabs., 2 appendices.

  17. Electrically evoked compound action potentials artefact rejection by independent component analysis: procedure automation.

    Akhoun, Idrick; McKay, Colette; El-Deredy, Wael

    2015-01-15

    Independent components analysis (ICA) successfully separated electrically evoked compound action potentials (ECAPs) from the stimulation artefact and noise (ECAP-ICA, Akhoun et al., 2013). This paper shows how to automate the ECAP-ICA artefact cancellation process. Raw ECAPs without artefact rejection were consecutively recorded for each stimulation condition from at least 8 intra-cochlear electrodes. First, amplifier-saturated recordings were discarded, and the data from different stimulus conditions (different current levels) were concatenated temporally. The key aspect of the automation procedure was the sequential deductive source categorisation after ICA was applied with a restriction to 4 sources. The stereotypical aspect of the 4 sources enables their automatic classification as two artefact components, a noise component and the sought ECAP, based on theoretical and empirical considerations. The automatic procedure was tested using 8 cochlear implant (CI) users and one to four stimulus electrodes. The artefact and noise sources were successively identified and discarded, leaving the ECAP as the remaining source. The automated ECAP-ICA procedure successfully extracted the correct ECAPs, compared to the standard clinical forward-masking paradigm, in 22 out of 26 cases. ECAP-ICA does not require extracting the ECAP from a combination of distinct buffers, as is the case with regular methods. It is an alternative that does not have the possible bias of traditional artefact rejections such as alternate-polarity or forward-masking paradigms. The ECAP-ICA procedure bears clinical relevance, for example as the artefact rejection sub-module of automated ECAP-threshold detection techniques, which are common features of CI clinical fitting software. Copyright © 2014. Published by Elsevier B.V.
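
    The separation step itself can be sketched with a generic ICA implementation restricted to four sources, as below. This is only an illustration of the decomposition; the automatic source classification described in the record is not reproduced, and the data are placeholders.

    ```python
    # Sketch of the ECAP-ICA separation step: concatenate recordings from
    # several intra-cochlear electrodes / current levels and unmix them into
    # 4 sources (two artefact components, one noise component, one ECAP).
    # Channel data below are placeholders, not real recordings.
    import numpy as np
    from sklearn.decomposition import FastICA

    rng = np.random.default_rng(0)
    recordings = rng.normal(size=(2000, 8))        # (n_samples, n_channels) placeholder

    ica = FastICA(n_components=4, random_state=0)  # restriction to 4 sources
    sources = ica.fit_transform(recordings)        # shape (n_samples, 4)

    # In the published procedure each source would next be classified (artefact,
    # noise or ECAP) from its stereotypical shape; here we only list them.
    for k in range(sources.shape[1]):
        print(f"source {k}: peak-to-peak amplitude = {np.ptp(sources[:, k]):.3f}")
    ```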

  18. A limited assessment of the ASEP human reliability analysis procedure using simulator examination results

    Gore, B.R.; Dukelow, J.S. Jr.; Mitts, T.M.; Nicholson, W.L.

    1995-10-01

    This report presents a limited assessment of the conservatism of the Accident Sequence Evaluation Program (ASEP) human reliability analysis (HRA) procedure described in NUREG/CR-4772. In particular, the ASEP post-accident, post-diagnosis, nominal HRA procedure is assessed within the context of an individual's performance of critical tasks on the simulator portion of requalification examinations administered to nuclear power plant operators. An assessment of the degree to which operator performance during simulator examinations is an accurate reflection of operator performance during actual accident conditions was outside the scope of work for this project; therefore, no direct inference can be made from this report about such performance. The data for this study are derived from simulator examination reports from the NRC requalification examination cycle. A total of 4071 critical tasks were identified, of which 45 had been failed. The ASEP procedure was used to estimate human error probability (HEP) values for critical tasks, and the HEP results were compared with the failure rates observed in the examinations. The ASEP procedure was applied by PNL operator license examiners who supplemented the limited information in the examination reports with expert judgment based upon their extensive simulator examination experience. ASEP analyses were performed for a sample of 162 critical tasks selected randomly from the 4071, and the results were used to characterize the entire population. ASEP analyses were also performed for all of the 45 failed critical tasks. Two tests were performed to assess the bias of the ASEP HEPs compared with the data from the requalification examinations. The first compared the average of the ASEP HEP values with the fraction of the population actually failed and found a statistically significant factor-of-two bias on average

  19. PGDP [Paducah Gaseous Diffusion Plant]-UF6 handling, sampling, analysis and associated QC/QA and safety related procedures

    Harris, R.L.

    1987-01-01

    This document is a compilation of Paducah Gaseous Diffusion Plant procedures on UF6 handling, sampling, and analysis, along with associated QC/QA and safety related procedures. It was assembled for transmission by the US Department of Energy to the Korean Advanced Energy Institute as a part of the US-Korea technical exchange program.

  20. Development of SRC-I product analysis. Volume 3. Documentation of procedures

    Schweighardt, F.K.; Kingsley, I.S.; Cooper, F.E.; Kamzelski, A.Z.; Parees, D.M.

    1983-09-01

    This section documents the BASIC computer program written to simulate Wilsonville's GC-simulated distillation (GCSD) results at APCI-CRSD Trexlertown. The GC conditions used at APCI for the Wilsonville GCSD analysis of coal-derived liquid samples were described in the SRC-I Quarterly Technical Report, April-June 1981. The approach used to simulate the Wilsonville GCSD results is also from an SRC-I Quarterly Technical Report and is reproduced in Appendix VII-A. The BASIC computer program is described in the attached Appendix VII-B. Analysis of gases produced during coal liquefaction generates key information needed to determine product yields for material balance and process control. Gas samples from the coal process development unit (CPDU) and tubing bombs are the primary samples analyzed. A Carle gas chromatographic system was used to analyze coal liquefaction gas samples. A BASIC computer program was written to convert the gas chromatographic peak area results into mole percent results. ICRC has employed several analytical workup procedures to determine the amount of distillate, oils, asphaltenes, preasphaltenes, and residue in SRC-I process streams. The ASE procedure was developed using Conoco's liquid column fractionation (LC/F) method as a model. In developing the ASE procedure, ICRC was able to eliminate distillation, and therefore quantify the oils fraction in one extraction step. ASE results were shown to be reproducible within ±2 wt%, and to yield acceptable material balances. Finally, the ASE method proved to be the least affected by sample composition.

  1. NDVI-Based analysis on the influence of human activities on vegetation variation on Hainan Island

    Luo, Hongxia; Dai, Shengpei; Xie, Zhenghui; Fang, Jihua

    2018-02-01

    Using the Moderate Resolution Imaging Spectroradiometer normalized difference vegetation index (NDVI) dataset, we analyzed the variation in NDVI values and the influence of human activities on vegetation on Hainan Island during 2001-2015. We investigated the roles of human activities in vegetation variation, particularly from 2002, when the Grain-for-Green program was implemented on Hainan Island. Trend analysis, a linear regression model and residual analysis were used to analyze the data. The results of the study showed that (1) the vegetation on Hainan Island showed a general upward trend, with a linear growth rate of 0.0025/10y (p …) … human activities. (3) In general, human activities played a positive role in the vegetation increase on Hainan Island, and the residual NDVI trend of this region showed positive outcomes for vegetation variation after the implementation of ecological engineering projects. However, it also indicated a growing risk of vegetation degradation in the coastal region of Hainan Island as a result of rapid urbanization and land reclamation.
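
    The residual analysis referred to above is commonly implemented by regressing NDVI on climate drivers and examining the trend of the residuals. The sketch below illustrates that idea with placeholder data and assumed variable names; it is not the authors' model.

    ```python
    # Sketch of a residual-trend analysis: regress annual NDVI on climate
    # drivers, then test whether the residuals (the part not explained by
    # climate) trend over time.  All arrays are illustrative placeholders.
    import numpy as np

    years  = np.arange(2001, 2016)
    ndvi   = 0.60 + 0.002 * (years - 2001) + np.random.default_rng(1).normal(0, 0.01, years.size)
    temp   = np.random.default_rng(2).normal(24.5, 0.5, years.size)    # deg C
    precip = np.random.default_rng(3).normal(1800, 200, years.size)    # mm

    # Climate-only model: NDVI ~ temperature + precipitation.
    X = np.column_stack([np.ones_like(temp), temp, precip])
    beta, *_ = np.linalg.lstsq(X, ndvi, rcond=None)
    residual = ndvi - X @ beta

    # A trend in the residuals is attributed (with caution) to human activities.
    slope, intercept = np.polyfit(years, residual, 1)
    print(f"residual NDVI trend: {slope * 10:.4f} per decade")
    ```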

  2. Forest anisotropy assessment by means of spatial variations analysis of PolSAR backscattering

    A. V. Dmitriev

    2017-06-01

    Full Text Available The ability to synthesize the polarization response of earth covers at any desired combination of transmit and receive antenna polarizations is a significant advantage of polarimetric radar. It permits better identification of dominant scattering mechanisms, especially when analyzing polarization signatures. These signatures depict in more detail the physical information contained in target backscattering in various polarization bases. However, polarization signatures cannot reveal spatial variations of the radar backscattering caused by volume heterogeneity of a target. This paper proposes a new approach for estimating volume target heterogeneity from polarimetric synthetic aperture radar (PolSAR) images. The approach is based on the analysis of a novel type of polarization signature, which we call the fractal polarization signature (FPS). This signature is the result of polarization synthesis of the initial fully polarimetric data and subsequent fractal analysis of the synthesized images. It is displayed as a 3D plot and can be produced for each point in an image. It is shown that FPS describes backscattering variations, or image roughness, at different states of polarization. Fully polarimetric data from SIR-C and ALOS PALSAR on ascending/descending orbits were used for testing the proposed approach. An azimuthal dependence of the radar backscattering variations was discovered when analyzing backscattering from a pine forest. It correlates with the results of a field survey of the branch distribution of the trees.

  3. Measurement of isotope abundance variations in nature by gravimetric spiking isotope dilution analysis (GS-IDA).

    Chew, Gina; Walczyk, Thomas

    2013-04-02

    Subtle variations in the isotopic composition of elements carry unique information about physical and chemical processes in nature and are now exploited widely in diverse areas of research. Reliable measurement of natural isotope abundance variations is among the biggest challenges in inorganic mass spectrometry as they are highly sensitive to methodological bias. For decades, double spiking of the sample with a mix of two stable isotopes has been considered the reference technique for measuring such variations both by multicollector-inductively coupled plasma mass spectrometry (MC-ICPMS) and multicollector-thermal ionization mass spectrometry (MC-TIMS). However, this technique can only be applied to elements having at least four stable isotopes. Here we present a novel approach that requires measurement of three isotope signals only and which is more robust than the conventional double spiking technique. This became possible by gravimetric mixing of the sample with an isotopic spike in different proportions and by applying principles of isotope dilution for data analysis (GS-IDA). The potential and principle use of the technique is demonstrated for Mg in human urine using MC-TIMS for isotopic analysis. Mg is an element inaccessible to double spiking methods as it consists of three stable isotopes only and shows great potential for metabolically induced isotope effects waiting to be explored.
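
    The classical isotope dilution relation underlying the GS-IDA approach can be written as follows; the symbols are defined here for illustration, and the expression is the generic two-isotope mixing equation, not the full GS-IDA data-analysis scheme.

    ```latex
    % A sample containing n_x mol of the element (isotope abundances a_x, b_x
    % for isotopes A and B) is mixed with n_s mol of spike (abundances a_s,
    % b_s); the measured A/B ratio of the blend, R_m, then yields the unknown
    % amount n_x.
    R_m \;=\; \frac{n_x a_x + n_s a_s}{\,n_x b_x + n_s b_s\,}
    \qquad\Longrightarrow\qquad
    n_x \;=\; n_s\,\frac{a_s - R_m\, b_s}{\,R_m\, b_x - a_x\,}
    % In GS-IDA the blend is prepared gravimetrically in several proportions,
    % so n_s is known from weighed masses and the result can be checked for
    % consistency across mixing ratios.
    ```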

  4. [Genetic variation analysis of canine parvovirus VP2 gene in China].

    Yi, Li; Cheng, Shi-Peng; Yan, Xi-Jun; Wang, Jian-Ke; Luo, Bin

    2009-11-01

    To characterize the molecular biology, phylogenetic relationships and current prevalence of Canine parvovirus (CPV), faecal samples from pet dogs with acute enteritis in the cities of Beijing, Wuhan, and Nanjing were collected between 2006 and 2008 and tested for CPV by PCR and other assays. PCR-RFLP analysis showed no CPV-to-FPV (MEV) variation in any of the samples. The complete ORFs of the VP2 genes were obtained by PCR from 15 clinical CPVs and 2 CPV vaccine strains. All amplicons were cloned and sequenced. Analysis of the VP2 sequences showed that the clinical CPVs all belong to the CPV-2a subtype and, by amino acid comparison, could be classified into a new cluster containing a Tyr-->Ile (324) mutation. The 2 CPV vaccine strains belong to the CPV-2 subtype, and both show scattered variation in the amino acid residues of the VP2 protein. A phylogenetic tree constructed from the CPV VP2 sequences showed that these 15 clinical CPV strains are more closely related to the Korean strain K001 than to earlier CPV-2a isolates from other countries, indicating that canine parvovirus genetic variation is associated to some degree with location and time. This survey of the CPV capsid protein VP2 gene provides useful information for the identification of CPV types and for understanding their genetic relationships.

  5. Sedation for pediatric radiological procedures: analysis of potential causes of sedation failure and paradoxical reactions

    Karian, V.E.; Burrows, P.E.; Connor, L. [Dept. of Radiology, Children's Hospital, Boston, MA (United States); Zurakowski, D. [Dept. of Biostatistics, Children's Hospital, Boston, MA (United States); Mason, K.P. [Dept. of Anesthesiology, Children's Hospital, Boston, MA (United States)

    1999-11-01

    Background. Sedation for diagnostic imaging and interventional radiologic procedures in pediatrics has greatly increased over the past decade. With appropriate patient selection and monitoring, serious adverse effects are infrequent, but failure to sedate and paradoxical reactions do occur. Objective. The purpose of this study was to determine, among patients undergoing sedation for radiologic procedures, the incidence of sedation failure and paradoxical reaction to pentobarbital and to identify potentially correctable causes. Materials and methods. Records of 1665 patients who were sedated in the radiology department from 1 November 1997 to 1 July 1998 were reviewed. Patients failing sedation or experiencing paradoxical reaction were compared with respect to sex, age group, diagnosis, scan type, time of day, NPO status, use of IV contrast and type of sedation agent using the Fisher exact test, Pearson chi-square, analysis of variance (ANOVA), the Student t-test, and logistic regression. Results. Data analysis revealed a sedation failure rate of 1 % and paradoxical reaction rate of 1.2 %. Stepwise multiple logistic regression revealed that the only significant independent multivariate predictor of failure was the need for the administration of a combination of pentobarbital, fentanyl, and midazolam IV. Conclusion. The low rate of sedation failure and paradoxical reactions to pentobarbital was near optimal and probably cannot be improved with the currently available sedatives. (orig.)

  6. Sedation for pediatric radiological procedures: analysis of potential causes of sedation failure and paradoxical reactions

    Karian, V.E.; Burrows, P.E.; Connor, L.; Zurakowski, D.; Mason, K.P.

    1999-01-01

    Background. Sedation for diagnostic imaging and interventional radiologic procedures in pediatrics has greatly increased over the past decade. With appropriate patient selection and monitoring, serious adverse effects are infrequent, but failure to sedate and paradoxical reactions do occur. Objective. The purpose of this study was to determine, among patients undergoing sedation for radiologic procedures, the incidence of sedation failure and paradoxical reaction to pentobarbital and to identify potentially correctable causes. Materials and methods. Records of 1665 patients who were sedated in the radiology department from 1 November 1997 to 1 July 1998 were reviewed. Patients failing sedation or experiencing paradoxical reaction were compared with respect to sex, age group, diagnosis, scan type, time of day, NPO status, use of IV contrast and type of sedation agent using the Fisher exact test, Pearson chi-square, analysis of variance (ANOVA), the Student t-test, and logistic regression. Results. Data analysis revealed a sedation failure rate of 1 % and paradoxical reaction rate of 1.2 %. Stepwise multiple logistic regression revealed that the only significant independent multivariate predictor of failure was the need for the administration of a combination of pentobarbital, fentanyl, and midazolam IV. Conclusion. The low rate of sedation failure and paradoxical reactions to pentobarbital was near optimal and probably cannot be improved with the currently available sedatives. (orig.)

  7. Optimization of procedures for mercury-203 instrumental neutron activation analysis in human urine

    Blotcky, A J; Claassen, J P [Nebraska Univ., Omaha, NE (United States). Medical Center; Fung, Y K [Nebraska Univ., Lincoln, NE (United States). Dept. of Chemistry; Meade, A G; Rack, E P [Nebraska Univ., Lincoln, NE (United States)

    1995-08-01

    Mercury, a known neurotoxin, has been implicated in the etiology and pathogenesis of disease states such as Alzheimer's and Parkinson's diseases. There is concern that exposure to mercury vapor released from dental amalgam restorations is a potential health hazard. Measurement of mercury concentrations in blood or urine may be useful in the diagnosis of mercury poisoning and in assessing the extent of exposure. The determination of mercury can be complicated by losses via volatilization and/or permeation through containers; this study therefore describes the optimization of pre-neutron activation analysis procedures such as sampling, selection of irradiation and counting vials, and acid digestion in order to minimize such losses. In the optimized procedure 20 mL of urine was spiked with three different concentrations of mercury, digested with concentrated nitric acid, and placed in polypropylene vials for irradiation and counting. Analysis was performed by subtracting the Se-75 photopeak contribution to the 279 keV Hg-203 photopeak and applying the method of standard additions. Urinary mercury concentrations in normal human subjects were determined to be of the order of 10 ng/mL. (author). 22 refs., 1 fig., 5 tabs.

  8. Optimization of procedures for mercury-203 instrumental neutron activation analysis in human urine

    Blotcky, A.J.; Claassen, J.P.

    1995-01-01

    Mercury, a known neurotoxin, has been implicated in the etiology and pathogenesis of disease states such as Alzheimer's and Parkinson's diseases. There is concern that exposure to mercury vapor released from dental amalgam restorations is a potential health hazard. Measurement of mercury concentrations in blood or urine may be useful in the diagnosis of mercury poisoning and in assessing the extent of exposure. The determination of mercury can be complicated by losses via volatilization and/or permeation through containers; this study therefore describes the optimization of pre-neutron activation analysis procedures such as sampling, selection of irradiation and counting vials, and acid digestion in order to minimize such losses. In the optimized procedure 20 mL of urine was spiked with three different concentrations of mercury, digested with concentrated nitric acid, and placed in polypropylene vials for irradiation and counting. Analysis was performed by subtracting the Se-75 photopeak contribution to the 279 keV Hg-203 photopeak and applying the method of standard additions. Urinary mercury concentrations in normal human subjects were determined to be of the order of 10 ng/mL. (author). 22 refs., 1 fig., 5 tabs
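
    The method of standard additions used in this procedure amounts to regressing the measured signal on the added analyte mass and extrapolating to the x-axis intercept. A minimal sketch with made-up numbers:

    ```python
    # Sketch of the method of standard additions: fit signal vs added Hg mass
    # and take the magnitude of the x-intercept as the native analyte amount.
    # The counts and spike amounts below are illustrative, not study data.
    import numpy as np

    added_ng = np.array([0.0, 100.0, 200.0, 400.0])    # Hg added to the 20 mL urine aliquot
    signal   = np.array([150.0, 280.0, 400.0, 660.0])   # net 279 keV peak counts

    slope, intercept = np.polyfit(added_ng, signal, 1)
    native_ng = intercept / slope                        # |x-intercept| = Hg present in aliquot
    conc_ng_per_ml = native_ng / 20.0                    # 20 mL urine aliquot
    print(f"native Hg ≈ {native_ng:.0f} ng, i.e. {conc_ng_per_ml:.1f} ng/mL")
    ```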

  9. Neutron activation analysis with k0-standardisation: general formalism and procedure

    Pomme, S.; Hardeman, F. [Centre de l'Etude de l'Energie Nucleaire, Mol (Belgium); Robouch, P.; Etxebarria, N.; Arana, G. [European Commission, Joint Research Centre, Institute for Reference Materials and Measurements, Geel (Belgium)

    1997-09-01

    Instrumental neutron activation analysis (INAA) with k0-standardisation is a powerful tool for multi-element analysis at a broad range of trace element concentrations. An overview is given of the basic principles, fundamental equations, and general procedure of this method. Different aspects of the description of the neutron activation reaction rate are discussed, applying the Hogdahl convention. A general activation-decay formula is derived and its application to INAA is demonstrated. Relevant k0-definitions for different activation decay schemes are summarised and upgraded to cases of extremely high fluxes. The main standardisation techniques for INAA are discussed, emphasizing the k0-standardisation. Some general aspects of the basic equipment and its calibration are discussed, such as the characterisation of the neutron field and the tuning of the spectrometry part. A method for the prediction and optimisation of the analytical performance of INAA is presented.

  10. Neutron activation analysis with k0-standardisation : general formalism and procedure

    Pomme, S.; Hardeman, F.; Robouch, P.; Etxebarria, N.; Arana, G.

    1997-09-01

    Instrumental neutron activation analysis (INAA) with k0-standardisation is a powerful tool for multi-element analysis at a broad range of trace element concentrations. An overview is given of the basic principles, fundamental equations, and general procedure of this method. Different aspects of the description of the neutron activation reaction rate are discussed, applying the Hogdahl convention. A general activation-decay formula is derived and its application to INAA is demonstrated. Relevant k0-definitions for different activation decay schemes are summarised and upgraded to cases of extremely high fluxes. The main standardisation techniques for INAA are discussed, emphasizing the k0-standardisation. Some general aspects of the basic equipment and its calibration are discussed, such as the characterisation of the neutron field and the tuning of the spectrometry part. A method for the prediction and optimisation of the analytical performance of INAA is presented.
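
    The general activation-decay formula referred to in this record has, in its simplest single-step form, the following structure (the full k0 formalism additionally involves detector efficiency, the flux parameters f and alpha, and the tabulated k0 factors):

    ```latex
    % Single-step activation-decay expression.  N: number of target nuclei,
    % sigma_eff: effective cross-section, phi: neutron flux, lambda: decay
    % constant, t_irr, t_d, t_m: irradiation, decay and measurement times.
    % C converts the activity at the start of counting into the average count
    % rate over the measurement time t_m.
    A(t_d) \;=\; N\,\sigma_{\mathrm{eff}}\,\varphi\,
                 \underbrace{\bigl(1 - e^{-\lambda t_{\mathrm{irr}}}\bigr)}_{S\ \text{(saturation)}}\,
                 \underbrace{e^{-\lambda t_d}}_{D\ \text{(decay)}},
    \qquad
    C \;=\; \frac{1 - e^{-\lambda t_m}}{\lambda t_m}
    ```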

  11. Headspace solid-phase microextraction procedures for gas chromatographic analysis of biological fluids and materials.

    Mills, G A; Walker, V

    2000-12-01

    Solid-phase microextraction (SPME) is a new solventless sample preparation technique that is finding wide usage. This review provides updated information on headspace SPME with gas chromatographic separation for the extraction and measurement of volatile and semivolatile analytes in biological fluids and materials. Firstly the background to the technique is given in terms of apparatus, fibres used, extraction conditions and derivatisation procedures. Then the different matrices, urine, blood, faeces, breast milk, hair, breath and saliva are considered separately. For each, methods appropriate for the analysis of drugs and metabolites, solvents and chemicals, anaesthetics, pesticides, organometallics and endogenous compounds are reviewed and the main experimental conditions outlined with specific examples. Then finally, the future potential of SPME for the analysis of biological samples in terms of the development of new devices and fibre chemistries and its coupling with high-performance liquid chromatography is discussed.

  12. "Life history space": a multivariate analysis of life history variation in extant and extinct Malagasy lemurs.

    Catlett, Kierstin K; Schwartz, Gary T; Godfrey, Laurie R; Jungers, William L

    2010-07-01

    Studies of primate life history variation are constrained by the fact that all large-bodied extant primates are haplorhines. However, large-bodied strepsirrhines recently existed. If we can extract life history information from their skeletons, these species can contribute to our understanding of primate life history variation. This is particularly important in light of new critiques of the classic "fast-slow continuum" as a descriptor of variation in life history profiles across mammals in general. We use established dental histological methods to estimate gestation length and age at weaning for five extinct lemur species. On the basis of these estimates, we reconstruct minimum interbirth intervals and maximum reproductive rates. We utilize principal components analysis to create a multivariate "life history space" that captures the relationships among reproductive parameters and brain and body size in extinct and extant lemurs. Our data show that, whereas large-bodied extinct lemurs can be described as "slow" in some fashion, they also varied greatly in their life history profiles. Those with relatively large brains also weaned their offspring late and had long interbirth intervals. These were not the largest of extinct lemurs. Thus, we distinguish size-related life history variation from variation that linked more strongly to ecological factors. Because all lemur species larger than 10 kg, regardless of life history profile, succumbed to extinction after humans arrived in Madagascar, we argue that large body size increased the probability of extinction independently of reproductive rate. We also provide some evidence that, among lemurs, brain size predicts reproductive rate better than body size. (c) 2010 Wiley-Liss, Inc.
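
    The multivariate "life history space" described above is constructed with a standard principal components analysis of standardized trait data. The sketch below illustrates that construction only; the trait matrix is a made-up placeholder, not the study's data.

    ```python
    # Sketch of building a "life history space" with PCA: standardize a matrix
    # of life-history and size variables (rows = taxa) and project it onto the
    # first two principal components.  Numbers below are illustrative.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.preprocessing import StandardScaler

    # columns: body mass (kg), brain mass (g), weaning age (mo), interbirth interval (mo)
    traits = np.array([
        [ 2.2,  23.0,  5.0, 12.0],
        [ 6.5,  40.0,  8.0, 18.0],
        [10.0,  85.0, 24.0, 36.0],
        [45.0, 110.0, 30.0, 48.0],
        [85.0, 150.0, 36.0, 60.0],
    ])

    scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(traits))
    for taxon, (pc1, pc2) in zip("ABCDE", scores):
        print(f"taxon {taxon}: PC1 = {pc1:+.2f}, PC2 = {pc2:+.2f}")
    ```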

  13. Systematic documentation and analysis of human genetic variation using the microattribution approach

    Giardine, Belinda; Borg, Joseph; Higgs, Douglas R.; Peterson, Kenneth R.; Maglott, Donna; Basak, A. Nazli; Clark, Barnaby; Faustino, Paula; Felice, Alex E.; Francina, Alain; Gallivan, Monica V. E.; Georgitsi, Marianthi; Gibbons, Richard J.; Giordano, Piero C.; Harteveld, Cornelis L.; Joly, Philippe; Kanavakis, Emmanuel; Kollia, Panagoula; Menzel, Stephan; Miller, Webb; Moradkhani, Kamran; Old, John; Papachatzopoulou, Adamantia; Papadakis, Manoussos N.; Papadopoulos, Petros; Pavlovic, Sonja; Philipsen, Sjaak; Radmilovic, Milena; Riemer, Cathy; Schrijver, Iris; Stojiljkovic, Maja; Thein, Swee Lay; Traeger-Synodinos, Jan; Tully, Ray; Wada, Takahito; Waye, John; Wiemann, Claudia; Zukic, Branka; Chui, David H. K.; Wajcman, Henri; Hardison, Ross C.; Patrinos, George P.

    2013-01-01

    We developed a series of interrelated locus-specific databases to store all published and unpublished genetic variation related to these disorders, and then implemented microattribution to encourage submission of unpublished observations of genetic variation to these public repositories 1. A total of 1,941 unique genetic variants in 37 genes, encoding globins (HBA2, HBA1, HBG2, HBG1, HBD, HBB) and other erythroid proteins (ALOX5AP, AQP9, ARG2, ASS1, ATRX, BCL11A, CNTNAP2, CSNK2A1, EPAS1, ERCC2, FLT1, GATA1, GPM6B, HAO2, HBS1L, KDR, KL, KLF1, MAP2K1, MAP3K5, MAP3K7, MYB, NOS1, NOS2, NOS3, NOX3, NUP133, PDE7B, SMAD3, SMAD6, and TOX) are currently documented in these databases with reciprocal attribution of microcitations to data contributors. Our project provides the first example of implementing microattribution to incentivise submission of all known genetic variation in a defined system. It has demonstrably increased the reporting of human variants and now provides a comprehensive online resource for systematically describing human genetic variation in the globin genes and other genes contributing to hemoglobinopathies and thalassemias. The large repository of previously reported data, together with more recent data, acquired by microattribution, demonstrates how the comprehensive documentation of human variation will provide key insights into normal biological processes and how these are perturbed in human genetic disease. Using the microattribution process set out here, datasets which took decades to accumulate for the globin genes could be assembled rapidly for other genes and disease systems. The principles established here for the globin gene system will serve as a model for other systems and the analysis of other common and/or complex human genetic diseases. PMID:21423179

  14. Variations in inpatient pediatric anesthesia in California from 2000 to 2009: a caseload and geographic analysis.

    Mudumbai, Seshadri C; Honkanen, Anita; Chan, Jia; Schmitt, Susan; Saynina, Olga; Hackel, Alvin; Gregory, George; Phibbs, Ciaran S; Wise, Paul H

    2014-12-01

    Regional referral systems are considered important for children hospitalized for surgery, but there is little information on existing systems. To examine geographic variations in anesthetic caseloads in California for surgical inpatients ≤6 years and to evaluate the feasibility of regionalizing anesthetic care. We reviewed California's unmasked patient discharge database between 2000 and 2009 to determine surgical procedures, dates, and inpatient anesthetic caseloads. Hospitals were classified as urban or rural and were further stratified as low, intermediate, high, and very high volume. We reviewed 257,541 anesthetic cases from 402 hospitals. Seventeen California Children's Services (CCS) hospitals conducted about two-thirds of all inpatient anesthetics; 385 non-CCS hospitals accounted for the rest. Urban hospitals comprised 82% of low- and intermediate-volume centers (n = 297) and 100% of the high- and very high-volume centers (n = 41). Ninety percent (n = 361) of hospitals performed relatively few inpatient anesthetics annually. Although lower-risk procedures such as appendectomies were the most frequent in urban low- and intermediate-volume hospitals, fairly complex neurosurgical and general surgeries were also performed. The median distance from urban lower-volume hospitals to the nearest high- or very high-volume center was 12 miles. Up to 98% (n = 40,316) of inpatient anesthetics at low- or intermediate-volume centers could have been transferred to higher-volume centers within 25 miles of the smaller centers. Many urban California hospitals maintained low annual inpatient anesthetic caseloads for children ≤6 years while conducting potentially complex procedures. Further efforts are necessary to define the scope of pediatric anesthetic care at urban low- and intermediate-volume hospitals in California. © 2014 John Wiley & Sons Ltd.

  15. Simulation and analysis of data for enhancing low cycle fatigue test procedures

    Sarajaervi, U.; Cronvall, O. [VTT Technical Research Centre of Finland (Finland)

    2006-04-15

    The simulation and analysis of data for enhancing low cycle fatigue test procedures is discussed in this report. The analysed materials are an austenitic stainless piping steel and an austenitic weld material. This project continues the work performed in 2003 and 2004. The fatigue test data treatment application developed within the project in 2004 for the preparation of the fatigue data has been developed further. Also, more fatigue test data have been analysed with the application than in 2004. In addition, numerical fatigue simulations were performed with the FEM code ABAQUS. With the fatigue test data treatment application one can, for example, calculate cycle-by-cycle characteristic values such as the elastic range and form the set of cyclic parameter values needed as part of the ABAQUS analysis input files. The hardening properties of metals were modelled with both isotropic and kinematic hardening models. The further development of the application included trimming of the analysed data, and consequently trimming of the resulting hardening parameters. The need for the trimming arose from the fact that the analysed fatigue test data present some scatter caused by the limited accuracy of the test equipment and the sampling rate. The hardening parameters obtained from the application analysis results were used in the subsequent ABAQUS analyses, and the fatigue test data were then compared with the ABAQUS simulation results. After finding a procedure to trim the result data to get smooth curves for cyclic hardening, hardening and softening could be reproduced in the ABAQUS analysis with reasonable accuracy. The modelling of the fatigue-induced initiation and growth of cracks was not considered in this study. On the other hand, a considerable part of the fatigue life of nuclear power plant (NPP) piping components is spent in the phase preceding the initiation and growth of cracks. (au)
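
    As a purely illustrative sketch of the "trimming" step described above (not the VTT application itself; the moving-average window and the synthetic data are assumptions), scatter in per-cycle stress amplitudes might be smoothed before hardening parameters are fitted:

```python
import numpy as np

def smooth_cyclic_hardening(stress_range_per_cycle, window=11):
    """Moving-average smoothing of a noisy per-cycle stress-range history.

    Test-equipment noise and the finite sampling rate scatter the per-cycle
    amplitudes, so the curve is smoothed before hardening parameters are
    fitted. The window length is an assumption, not a value from the report.
    """
    s = np.asarray(stress_range_per_cycle, dtype=float)
    kernel = np.ones(window) / window
    # mode="same" keeps one value per cycle; the few edge values are biased
    # low by the implicit zero padding and would normally be discarded.
    return np.convolve(s, kernel, mode="same")

# Synthetic example: a hardening-then-softening trend plus measurement noise.
cycles = np.arange(1, 501)
trend = 400.0 + 60.0 * (1.0 - np.exp(-cycles / 50.0)) - 0.05 * cycles
noisy = trend + np.random.default_rng(0).normal(scale=5.0, size=cycles.size)
smoothed = smooth_cyclic_hardening(noisy)
```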

  16. Simulation and analysis of data for enhancing low cycle fatigue test procedures

    Sarajaervi, U.; Cronvall, O.

    2006-04-01

    The simulation and analysis of data for enhancing low cycle fatigue test procedures is discussed in this report. The analysed materials are an austenitic stainless piping steel and an austenitic weld material. This project continues the work performed in 2003 and 2004. The fatigue test data treatment application developed within the project in 2004 for the preparation of the fatigue data has been developed further. Also, more fatigue test data have been analysed with the application than in 2004. In addition, numerical fatigue simulations were performed with the FEM code ABAQUS. With the fatigue test data treatment application one can, for example, calculate cycle-by-cycle characteristic values such as the elastic range and form the set of cyclic parameter values needed as part of the ABAQUS analysis input files. The hardening properties of metals were modelled with both isotropic and kinematic hardening models. The further development of the application included trimming of the analysed data, and consequently trimming of the resulting hardening parameters. The need for the trimming arose from the fact that the analysed fatigue test data present some scatter caused by the limited accuracy of the test equipment and the sampling rate. The hardening parameters obtained from the application analysis results were used in the subsequent ABAQUS analyses, and the fatigue test data were then compared with the ABAQUS simulation results. After finding a procedure to trim the result data to get smooth curves for cyclic hardening, hardening and softening could be reproduced in the ABAQUS analysis with reasonable accuracy. The modelling of the fatigue-induced initiation and growth of cracks was not considered in this study. On the other hand, a considerable part of the fatigue life of nuclear power plant (NPP) piping components is spent in the phase preceding the initiation and growth of cracks. (au)

  17. Evidence of increment of efficiency of the Mexican Stock Market through the analysis of its variations

    Coronel-Brizio, H. F.; Hernández-Montoya, A. R.; Huerta-Quintanilla, R.; Rodríguez-Achach, M.

    2007-07-01

    It is well known that there exist statistical and structural differences between the stock markets of developed and emerging countries. In this work, and in order to find out if the efficiency of the Mexican Stock Market has been changing over time, we have performed and compared several analyses of the variations of the Mexican Stock Market index (IPC) and the Dow Jones Industrial Average index (DJIA) for different periods of their historical daily data. We have analyzed the returns autocorrelation function (ACF) and used detrended fluctuation analysis (DFA) to study returns variations. We also analyze the volatility, mean value and standard deviation of both markets and compare their evolution. We conclude from the overall results of these studies that they show compelling evidence of an increase in the efficiency of the Mexican Stock Market over time. The data samples analyzed here correspond to daily values of the IPC and DJIA for the period 10/30/1978-02/28/2006.
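
    A minimal sketch of the two diagnostics named in this record, the returns autocorrelation function and detrended fluctuation analysis, assuming only a plain array of daily index values (this is not the authors' code):

```python
import numpy as np

def returns_acf(prices, max_lag=20):
    """Sample autocorrelation of log-returns up to max_lag."""
    r = np.diff(np.log(np.asarray(prices, dtype=float)))
    r = r - r.mean()
    denom = np.sum(r * r)
    return np.array([np.sum(r[lag:] * r[:-lag]) / denom for lag in range(1, max_lag + 1)])

def dfa_exponent(returns, scales=(8, 16, 32, 64, 128)):
    """DFA-1 scaling exponent of the integrated (cumulative) return series."""
    x = np.cumsum(np.asarray(returns, dtype=float) - np.mean(returns))
    fluct = []
    for s in scales:
        f2 = []
        for i in range(len(x) // s):
            seg = x[i * s:(i + 1) * s]
            t = np.arange(s)
            coeffs = np.polyfit(t, seg, 1)   # linear detrending per window
            f2.append(np.mean((seg - np.polyval(coeffs, t)) ** 2))
        fluct.append(np.sqrt(np.mean(f2)))
    # Slope of log F(s) vs log s; ~0.5 for uncorrelated (efficient-market-like) returns.
    return np.polyfit(np.log(scales), np.log(fluct), 1)[0]

# Synthetic stand-in for a daily index series.
rng = np.random.default_rng(1)
prices = 1000.0 * np.exp(np.cumsum(rng.normal(0.0, 0.01, size=4000)))
acf = returns_acf(prices)
alpha = dfa_exponent(np.diff(np.log(prices)))
```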

  18. Study of Seasonal Variation in Groundwater Quality of Sagar City (India by Principal Component Analysis

    Hemant Pathak

    2011-01-01

    Groundwater is one of the major sources of drinking water in Sagar city (India). In this study, 15 sampling stations were selected for investigations of 14 chemical parameters. The work was carried out during different months of the pre-monsoon, monsoon and post-monsoon seasons from June 2009 to June 2010. Multivariate statistics such as principal component and cluster analysis were applied to the datasets to investigate seasonal variations in groundwater quality. Principal axis factoring was used to observe the mode of association of parameters and their interrelationships for evaluating water quality. Average values of BOD, COD, ammonia and iron were high during the entire study period. Elevated values of BOD and ammonia in the monsoon, slightly higher BOD in the post-monsoon, and elevated BOD, ammonia and iron in the pre-monsoon period reflected a temporal effect on groundwater. Results of principal component analysis showed that all the parameters contribute equally and significantly to groundwater quality variations. Factor 1 and factor 2 analysis revealed that the DO value deteriorates due to organic load (BOD/ammonia) in different seasons. Hierarchical cluster analysis grouped the 15 stations into four clusters in the monsoon, five clusters in the post-monsoon and five clusters in the pre-monsoon with similar water quality features. The clustered groups in the monsoon, post-monsoon and pre-monsoon each contained one station exhibiting significant spatial variation in physicochemical composition, with anthropogenic nitrogenous species occurring as fallout from modernization activities. The study indicated that the groundwater is sufficiently well oxygenated and nutrient-rich in the study area.
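
    A minimal sketch of the principal component step described above, assuming a hypothetical stations-by-parameters matrix in place of the real Sagar dataset:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Hypothetical matrix: 15 sampling stations x 14 water-quality parameters
# (pH, BOD, COD, ammonia, iron, ...); the values here are random placeholders.
X = np.random.default_rng(2).random((15, 14))

X_std = StandardScaler().fit_transform(X)   # z-score each parameter
pca = PCA()
scores = pca.fit_transform(X_std)

print(pca.explained_variance_ratio_[:3])    # variance captured by PC1-PC3
print(pca.components_[0])                   # loadings of the 14 parameters on PC1
```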

  19. New non-cognitive procedures for medical applicant selection: a qualitative analysis in one school.

    Katz, Sara; Vinker, Shlomo

    2014-11-07

    Recent data have called into question the reliability and predictive validity of standard admission procedures to medical schools. Eliciting non-cognitive attributes of medical school applicants using qualitative tools and methods has thus become a major challenge. 299 applicants aged 18-25 formed the research group. A set of six research tools was developed in addition to the two existing ones. These included: a portfolio task, an intuitive task, a cognitive task, a personal task, an open self-efficacy questionnaire and field-notes. The criteria-based methodology design used constant comparative analysis and grounded theory techniques to produce a personal attributes profile per participant, scored on a 5-point holistic rubric. Qualitative validity of data gathering was checked by comparing the profiles elicited from the existing interview against the profiles elicited from the other tools, and by comparing two profiles of each of the applicants who handed in two portfolio tasks. Qualitative validity of data analysis was checked by comparing researcher results with those of an external rater (n = 10). Differences between aggregated profile groups were checked by the non-parametric Wilcoxon signed ranks test and by the Spearman rank order correlation test. All subjects gave written informed consent to their participation. Privacy was protected by using code numbers. A concept map of 12 personal attributes emerged, the core constructs of which were motivation, sociability and cognition. A personal profile was elicited. Inter-rater agreement was 83.3%. Differences between groups by aggregated profiles were found significant (p < .05, p < .01, p < .001). A random sample of sixth year students (n = 12) underwent the same admission procedure as the research group. Rank order was different, and arrogance was a new construct elicited in the sixth year group. This study suggests a broadening of the methodology for selecting medical school applicants. This methodology

  20. Missing data treatments matter: an analysis of multiple imputation for anterior cervical discectomy and fusion procedures.

    Ondeck, Nathaniel T; Fu, Michael C; Skrip, Laura A; McLynn, Ryan P; Cui, Jonathan J; Basques, Bryce A; Albert, Todd J; Grauer, Jonathan N

    2018-04-09

    The presence of missing data is a limitation of large datasets, including the National Surgical Quality Improvement Program (NSQIP). In addressing this issue, most studies use complete case analysis, which excludes cases with missing data, thus potentially introducing selection bias. Multiple imputation, a statistically rigorous approach that approximates missing data and preserves sample size, may be an improvement over complete case analysis. The present study aims to evaluate the impact of using multiple imputation in comparison with complete case analysis for assessing the associations between preoperative laboratory values and adverse outcomes following anterior cervical discectomy and fusion (ACDF) procedures. This is a retrospective review of prospectively collected data. Patients undergoing one-level ACDF were identified in NSQIP 2012-2015. Perioperative adverse outcome variables assessed included the occurrence of any adverse event, severe adverse events, and hospital readmission. Missing preoperative albumin and hematocrit values were handled using complete case analysis and multiple imputation. These preoperative laboratory levels were then tested for associations with 30-day postoperative outcomes using logistic regression. A total of 11,999 patients were included. Of this cohort, 63.5% of patients had missing preoperative albumin and 9.9% had missing preoperative hematocrit. When using complete case analysis, only 4,311 patients were studied. The removed patients were significantly younger, healthier, of a common body mass index, and male. Logistic regression analysis failed to identify either preoperative hypoalbuminemia or preoperative anemia as significantly associated with adverse outcomes. When employing multiple imputation, all 11,999 patients were included. Preoperative hypoalbuminemia was significantly associated with the occurrence of any adverse event and severe adverse events. Preoperative anemia was significantly associated with the
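
    The contrast between complete case analysis and imputation can be sketched on a hypothetical mini-dataset (this is not the NSQIP extract); a full multiple-imputation workflow would repeat the imputation several times and pool the estimates with Rubin's rules:

```python
import numpy as np
import pandas as pd
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.linear_model import LogisticRegression

# Invented preoperative labs (with missing values) and a binary adverse-event flag.
df = pd.DataFrame({
    "albumin":           [3.9, np.nan, 2.8, 4.1, np.nan, 3.5, 4.4, 3.0],
    "hematocrit":        [41.0, 38.5, np.nan, 44.2, 36.0, 39.9, 42.1, 34.0],
    "any_adverse_event": [0, 0, 1, 0, 1, 0, 0, 1],
})

# Complete case analysis: rows with any missing lab value are simply dropped.
cc = df.dropna()
model_cc = LogisticRegression().fit(cc[["albumin", "hematocrit"]], cc["any_adverse_event"])

# One round of stochastic imputation: labs are filled in and all rows are kept.
labs_imp = IterativeImputer(sample_posterior=True, random_state=0).fit_transform(
    df[["albumin", "hematocrit"]])
model_mi = LogisticRegression().fit(labs_imp, df["any_adverse_event"])
```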

  1. A “Cookbook” Cost Analysis Procedure for Medical Information Systems*

    Torrance, Janice L.; Torrance, George W.; Covvey, H. Dominic

    1983-01-01

    A costing procedure for medical information systems is described. The procedure incorporates state-of-the-art costing methods in an easy to follow “cookbook” format. Application of the procedure consists of filling out a series of Mac-Tor EZ-Cost forms. The procedure and forms have been field tested by application to a cardiovascular database system. This article describes the major features of the costing procedure. The forms and other details are available upon request.

  2. A Simulation Model for Tensile Fracture Procedure Analysis of Graphite Material based on Damage Evolution

    Zhao Erqiang; Ma Shaopeng; Wang Hongtao

    2014-01-01

    Graphite is generally easily damaged by widely distributed micro-cracks when subjected to load. For numerical analysis of structures made of graphite, the influence of material degradation in damaged areas needs to be considered. In this paper, an axial tension test method is proposed to obtain the dynamic damage evolution rule of the material. Using the degradation rule (variation of the elastic modulus), a finite element model is then constructed to analyze the tensile fracture process of an L-shaped graphite specimen. An axial tension test of graphite is performed to obtain the stress-strain curve. Based on the variation of the measured curve, the damage evolution rule of the material is fitted. A simulation model based on the above measurements is then implemented in ABAQUS via a user subroutine. Using this simulation model, the tensile failure process of an L-shaped graphite specimen with a fillet is simulated. The calculated and experimental fracture loads are in good agreement. The damage simulation model based on the stress-strain curve of the axial tensile test can be used in other tensile fracture analyses. (author)
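
    A generic continuum-damage sketch of how a damage variable might be extracted from a measured stress-strain curve via the drop in secant modulus (the function name, the elastic-limit threshold and the synthetic curve are illustrative, not the authors' user subroutine):

```python
import numpy as np

def damage_from_stress_strain(strain, stress, elastic_limit_strain):
    """Scalar damage estimate D(eps) = 1 - E_sec(eps) / E0.

    E0 is fitted on the initial (assumed linear) part of the measured curve;
    beyond the elastic limit the drop in secant modulus is attributed to damage.
    """
    strain = np.asarray(strain, dtype=float)
    stress = np.asarray(stress, dtype=float)
    elastic = strain <= elastic_limit_strain
    E0 = np.polyfit(strain[elastic], stress[elastic], 1)[0]   # initial modulus
    with np.errstate(divide="ignore", invalid="ignore"):
        secant = np.where(strain > 0, stress / strain, E0)
    return np.clip(1.0 - secant / E0, 0.0, 1.0)

# Synthetic softening curve standing in for the measured axial tension test.
strain = np.linspace(0.0, 0.004, 50)
stress = 1.1e4 * strain * (1.0 - 60.0 * strain)
damage = damage_from_stress_strain(strain, stress, elastic_limit_strain=5e-4)
```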

  3. Cost-consequence analysis of different active flowable hemostatic matrices in cardiac surgical procedures.

    Makhija, D; Rock, M; Xiong, Y; Epstein, J D; Arnold, M R; Lattouf, O M; Calcaterra, D

    2017-06-01

    A recent retrospective comparative effectiveness study found that use of the FLOSEAL Hemostatic Matrix in cardiac surgery was associated with significantly lower risks of complications, blood transfusions, surgical revisions, and shorter length of surgery than use of SURGIFLO Hemostatic Matrix. These outcome improvements in cardiac surgery procedures may translate to economic savings for hospitals and payers. The objective of this study was to estimate the cost-consequence of two flowable hemostatic matrices (FLOSEAL or SURGIFLO) in cardiac surgeries for US hospitals. A cost-consequence model was constructed using clinical outcomes from a previously published retrospective comparative effectiveness study of FLOSEAL vs SURGIFLO in adult cardiac surgeries. The model accounted for the reported differences between these products in length of surgery, rates of major and minor complications, surgical revisions, and blood product transfusions. Costs were derived from the Healthcare Cost and Utilization Project's National Inpatient Sample (NIS) 2012 database and converted to 2015 US dollars. Savings were modeled for a hospital performing 245 cardiac surgeries annually, which was identified as the average for hospitals in the NIS dataset. One-way sensitivity analysis and probabilistic sensitivity analysis were performed to test model robustness. The results suggest that if FLOSEAL is utilized in a hospital that performs 245 mixed cardiac surgery procedures annually, 11 major complications, 31 minor complications, nine surgical revisions, 79 blood product transfusions, and 260.3 h of cumulative operating time could be avoided. These improved outcomes correspond to a net annualized saving of $1,532,896. Cost savings remained consistent between $1.3m and $1.8m and between $911k and $2.4m, even after accounting for the uncertainty around clinical and cost inputs, in a one-way and probabilistic sensitivity analysis, respectively. Outcome differences associated with FLOSEAL vs SURGIFLO
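
    The arithmetic of such a cost-consequence model reduces to avoided events multiplied by unit costs; in the toy sketch below the avoided-event counts follow the abstract, while every unit cost is an invented placeholder:

```python
# Avoided events per year for a 245-procedure hospital (from the abstract above).
avoided = {"major_complication": 11, "minor_complication": 31,
           "surgical_revision": 9, "transfusion": 79}

# Unit costs are placeholders, NOT the NIS-derived values used in the study.
unit_cost = {"major_complication": 50_000, "minor_complication": 8_000,
             "surgical_revision": 30_000, "transfusion": 1_200}
or_hours_avoided = 260.3
or_cost_per_hour = 2_000   # also a placeholder

annual_saving = sum(avoided[k] * unit_cost[k] for k in avoided) \
    + or_hours_avoided * or_cost_per_hour
print(f"illustrative annual saving: ${annual_saving:,.0f}")
```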

  4. Clinical Outcomes of Root Reimplantation and Bentall Procedure: Propensity Score Matching Analysis.

    Lee, Heemoon; Cho, Yang Hyun; Sung, Kiick; Kim, Wook Sung; Park, Kay-Hyun; Jeong, Dong Seop; Park, Pyo Won; Lee, Young Tak

    2018-03-26

    This study aimed to evaluate the clinical outcomes of aortic root replacement (ARR) surgery: root reimplantation as valve-sparing root replacement (VSR) and the Bentall procedure. We retrospectively reviewed 216 patients who underwent ARR between 1995 and 2013 at Samsung Medical Center. Patients were divided into two groups, depending on the procedure they underwent: Bentall (n = 134) and VSR (n = 82). The mean follow-up duration was 100.9±56.4 months. There were 2 early deaths in the Bentall group and none in the VSR group (p = 0.53). Early morbidities were not different between the groups. Overall mortality was significantly lower in the VSR group (HR = 0.12, p = 0.04). Despite the higher reoperation rate in the VSR group (p = 0.03), major adverse valve-related events (MAVRE) did not differ between the groups (p = 0.28). Bleeding events were significantly higher in the Bentall group during follow-up (10 in the Bentall group, 0 in the VSR group, p = 0.04). There were 6 thromboembolic events, all in the Bentall group (p = 0.11). We performed a propensity score matching analysis comparing the groups (134 Bentall vs 43 VSR). The matched analysis gave similar results, i.e. HR = 0.17 and p = 0.10 for overall mortality and HR = 1.01 and p = 0.99 for MAVRE. Although there was only marginal significance in the propensity matched analysis, it is plausible to anticipate a survival benefit with VSR during long-term follow-up. Despite a higher reoperation rate for aortic valves, VSR can be a viable option in patients who decline life-long anticoagulation, especially the young or patients in whom anticoagulation is contraindicated. Copyright © 2018. Published by Elsevier Inc.
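
    A minimal sketch of propensity score matching of the kind reported above, using a logistic propensity model and greedy 1:1 nearest-neighbour matching (the covariates, caliper and group labels are invented; this is not the authors' analysis):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def greedy_match(ps_treated, ps_control, caliper=0.05):
    """Greedy 1:1 nearest-neighbour matching on the propensity score."""
    used, pairs = set(), []
    for i, p in enumerate(ps_treated):
        dist = np.abs(ps_control - p)
        dist[list(used)] = np.inf            # each control is used at most once
        j = int(np.argmin(dist))
        if dist[j] <= caliper:
            pairs.append((i, j))
            used.add(j)
    return pairs

# Invented covariates (e.g. age, ejection fraction, root diameter) and labels
# (1 = valve-sparing reimplantation, 0 = Bentall) for 216 patients.
rng = np.random.default_rng(3)
X = rng.normal(size=(216, 3))
group = rng.integers(0, 2, size=216)

ps = LogisticRegression().fit(X, group).predict_proba(X)[:, 1]
pairs = greedy_match(ps[group == 1], ps[group == 0])
```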

  5. Application of a statistical thermal design procedure to evaluate the PWR DNBR safety analysis limits

    Robeyns, J.; Parmentier, F.; Peeters, G.

    2001-01-01

    In the framework of safety analysis for the Belgian nuclear power plants and for the reload compatibility studies, Tractebel Energy Engineering (TEE) has developed, to define a 95/95 DNBR criterion, a statistical thermal design method based on the analytical full statistical approach: the Statistical Thermal Design Procedure (STDP). In that methodology, each DNBR value in the core assemblies is calculated with an adapted CHF (Critical Heat Flux) correlation implemented in the sub-channel code Cobra for core thermal hydraulic analysis. The uncertainties of the correlation are represented by the statistical parameters calculated from an experimental database. The main objective of a sub-channel analysis is to prove that in all class 1 and class 2 situations, the minimum DNBR (Departure from Nucleate Boiling Ratio) remains higher than the Safety Analysis Limit (SAL). The SAL value is calculated from the Statistical Design Limit (SDL) value adjusted with some penalties and deterministic factors. The search of a realistic value for the SDL is the objective of the statistical thermal design methods. In this report, we apply a full statistical approach to define the DNBR criterion or SDL (Statistical Design Limit) with the strict observance of the design criteria defined in the Standard Review Plan. The same statistical approach is used to define the expected number of rods experiencing DNB. (author)
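
    For contrast with the analytical STDP described above, one common non-parametric way to construct a 95/95 limit is the Wilks order-statistic approach; a hedged sketch (the simulated minimum-DNBR values are invented) is:

```python
import numpy as np

def wilks_sample_size(coverage=0.95, confidence=0.95):
    """Smallest N for which the sample minimum is a one-sided
    (coverage, confidence) tolerance bound: 1 - coverage**N >= confidence."""
    n = 1
    while 1.0 - coverage ** n < confidence:
        n += 1
    return n                                   # 59 for the 95/95 case

# Propagate correlation/plant uncertainties by Monte Carlo and take the
# smallest simulated minimum DNBR as the 95/95 lower bound.
n = wilks_sample_size()
rng = np.random.default_rng(4)
min_dnbr_samples = rng.normal(loc=1.45, scale=0.08, size=n)   # placeholder values
dnbr_95_95 = min_dnbr_samples.min()
```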

  6. Application procedures and analysis examples of the SIE ASME-NH program

    Kim, Seok Hoon; Koo, G. H.; Kim, J. B.

    2010-12-01

    In this report, the design rules of the ASME-NH Code were briefly summarized, the application procedures of the SIE ASME-NH program were analysed, and analysis examples were described. The SIE ASME-NH program was developed according to the ASME Code Section III Subsection NH rules to perform the primary stress limit, accumulated inelastic strain limit and creep-fatigue damage evaluations in the structural design of nuclear power plants operating at temperatures above the creep range under normal operating conditions. In the analysis examples, the benchmark problem for the high temperature reactor vessel, which was discussed in the SIE ASME-NH user's seminar, was described. Also, the preliminary structural analysis of an Advanced Burner Test Reactor internal structure was described. Considering the load combinations of the various cycle types arising from significant operating conditions, the integrity of a reactor internal structure was reviewed according to the stress and strain limits of the ASME-NH rules, and the analysis and evaluation results were summarized.

  7. Relationship between climatic variables and the variation in bulk tank milk composition using canonical correlation analysis.

    Stürmer, Morgana; Busanello, Marcos; Velho, João Pedro; Heck, Vanessa Isabel; Haygert-Velho, Ione Maria Pereira

    2018-06-04

    A number of studies have addressed the relations between climatic variables and milk composition, but these works used univariate statistical approaches. In our study, we used a multivariate approach (canonical correlation) to study the impact of climatic variables on milk composition, price, and monthly milk production at a dairy farm using bulk tank milk data. Data on milk composition, price, and monthly milk production were obtained from a dairy company that purchased the milk from the farm, while climatic variable data were obtained from the National Institute of Meteorology (INMET). The data are from January 2014 to December 2016. Univariate correlation analysis and canonical correlation analysis were performed. Few correlations between the climatic variables and milk composition were found using a univariate approach. However, using canonical correlation analysis, we found a strong and significant correlation (rc = 0.95, p-value = 0.0029). Lactose, ambient temperature measures (mean, minimum, and maximum), and temperature-humidity index (THI) were found to be the most important variables for the canonical correlation. Our study indicated that 10.2% of the variation in milk composition, pricing, and monthly milk production can be explained by climatic variables. Ambient temperature variables, together with THI, seem to have the most influence on variation in milk composition.
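
    A minimal sketch of the canonical correlation step, assuming hypothetical monthly climate and milk blocks in place of the farm's bulk-tank data:

```python
import numpy as np
from sklearn.cross_decomposition import CCA

# 36 months of invented data: a climate block and a milk/production block.
rng = np.random.default_rng(5)
climate = rng.normal(size=(36, 4))   # e.g. mean/min/max temperature and THI
milk = rng.normal(size=(36, 5))      # e.g. fat, protein, lactose, price, volume

cca = CCA(n_components=1)
U, V = cca.fit_transform(climate, milk)
r_c = np.corrcoef(U[:, 0], V[:, 0])[0, 1]   # first canonical correlation
print(r_c)
```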

  8. A genome-wide analysis of putative functional and exonic variation associated with extremely high intelligence.

    Spain, S L; Pedroso, I; Kadeva, N; Miller, M B; Iacono, W G; McGue, M; Stergiakouli, E; Davey Smith, G; Putallaz, M; Lubinski, D; Meaburn, E L; Plomin, R; Simpson, M A

    2016-08-01

    Although individual differences in intelligence (general cognitive ability) are highly heritable, molecular genetic analyses to date have had limited success in identifying specific loci responsible for its heritability. This study is the first to investigate exome variation in individuals of extremely high intelligence. Under the quantitative genetic model, sampling from the high extreme of the distribution should provide increased power to detect associations. We therefore performed a case-control association analysis with 1409 individuals drawn from the top 0.0003 (IQ >170) of the population distribution of intelligence and 3253 unselected population-based controls. Our analysis focused on putative functional exonic variants assayed on the Illumina HumanExome BeadChip. We did not observe any individual protein-altering variants that are reproducibly associated with extremely high intelligence and within the entire distribution of intelligence. Moreover, no significant associations were found for multiple rare alleles within individual genes. However, analyses using genome-wide similarity between unrelated individuals (genome-wide complex trait analysis) indicate that the genotyped functional protein-altering variation yields a heritability estimate of 17.4% (s.e. 1.7%) based on a liability model. In addition, investigation of nominally significant associations revealed fewer rare alleles associated with extremely high intelligence than would be expected under the null hypothesis. This observation is consistent with the hypothesis that rare functional alleles are more frequently detrimental than beneficial to intelligence.

  9. Quasi-static Cycle Performance Analysis of Micro Modular Reactor for Heat Sink Temperature Variation

    Cho, Seong Kuk; Lee, Jekyoung; Ahn, Yoonhan; Lee, Jeong Ik [KAIST, Daejeon (Korea, Republic of); Cha, Jae Eun [KAERI, Daejeon (Korea, Republic of)

    2015-10-15

    A Supercritical CO2 (S-CO2) cycle has potential for high thermal efficiency in the moderate turbine inlet temperature range (450-750 °C) and achieving compact system size because of small specific volume and simple cycle layouts. Owing to the small specific volume of S-CO2 and the development of heat exchanger technology, it can accomplish complete modularization of the system. The previous works focused on the cycle performance analysis for the design point only. However, the heat sink temperature can be changed depending on the ambient atmosphere condition, i.e. weather, seasonal change. This can influence the compressor inlet temperature, which alters the cycle operating condition overall. To reflect the heat sink temperature variation, a quasi-static analysis code for a simple recuperated S-CO2 Brayton cycle has been developed by the KAIST research team. Thus, cycle performance analysis is carried out with a compressor inlet temperature variation in this research. In the case of a dry air-cooling system, the ambient temperature of the local surroundings can affect the compressor inlet temperature. As the compressor inlet temperature increases, thermal efficiency and generated electricity decrease. As further work, the experiment on the S-CO2 integral test loop will be performed to validate in-house codes, such as KAIST-TMD and the quasi-static code.

  10. Analysis of Long-Term Temperature Variations in the Human Body.

    Dakappa, Pradeepa Hoskeri; Mahabala, Chakrapani

    2015-01-01

    Body temperature is a continuous physiological variable. In normal healthy adults, oral temperature is estimated to vary between 36.1°C and 37.2°C. Fever is a complex host response to many external and internal agents and is a potential contributor to many clinical conditions. Despite temperature being one of the foremost vital signs, its variations during many pathological conditions have yet to be examined in detail using mathematical techniques. Classical fever patterns based on recordings obtained every 8-12 h have been developed. However, such patterns do not provide meaningful information in diagnosing diseases. Because fever is a host response, it is likely that there could be a unique response to specific etiologies. Continuous long-term temperature monitoring and pattern analysis using specific analytical methods developed in engineering and physics could aid in revealing unique fever responses of hosts in different clinical conditions. Furthermore, such analysis can potentially be used as a novel diagnostic tool and to study the effect of pharmaceutical agents and other therapeutic protocols. Thus, the goal of our article is to present a comprehensive review of the recent relevant literature and analyze the current state of research regarding temperature variations in the human body.

  11. Analysis of temporal variation in human masticatory cycles during gum chewing.

    Crane, Elizabeth A; Rothman, Edward D; Childers, David; Gerstner, Geoffrey E

    2013-10-01

    The study investigated modulation of fast and slow opening (FO, SO) and closing (FC, SC) chewing cycle phases using gum-chewing sequences in humans. Twenty-two healthy adult subjects participated by chewing gum for at least 20s on the right side and at least 20s on the left side while jaw movements were tracked with a 3D motion analysis system. Jaw movement data were digitized, and chewing cycle phases were identified and analysed for all chewing cycles in a complete sequence. All four chewing cycle phase durations were more variant than total cycle durations, a result found in other non-human primates. Significant negative correlations existed between the opening phases, SO and FO, and between the closing phases, SC and FC; however, there was less consistency in terms of which phases were negatively correlated both between subjects, and between chewing sides within subjects, compared with results reported in other species. The coordination of intra-cycle phases appears to be flexible and to follow complex rules during gum-chewing in humans. Alternatively, the observed intra-cycle phase relationships could simply reflect: (1) variation in jaw kinematics due to variation in how gum was handled by the tongue on a chew-by-chew basis in our experimental design or (2) by variation due to data sampling noise and/or how phases were defined and identified. Copyright © 2013 Elsevier Ltd. All rights reserved.

  12. System Requirements Analysis for a Computer-based Procedure in a Research Reactor Facility

    Park, Jaek Wan; Jang, Gwi Sook; Seo, Sang Moon; Shin, Sung Ki [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2014-10-15

    A computer-based procedure (CBP) system can address many of the routine problems related to human error in the use of conventional, hard-copy operating procedures. An operation support system is also required in a research reactor. A well-made CBP can address the staffing issues of a research reactor and reduce human errors by minimizing the operator's routine tasks. A CBP for a research reactor has not been proposed yet. Also, CBPs developed for nuclear power plants have powerful and varied technical functions to cover complicated plant operation situations. However, many of the functions may not be required for a research reactor. Thus, it is not reasonable to apply such a CBP to a research reactor directly. Also, customizing the CBP is not cost-effective. Therefore, a compact CBP should be developed for a research reactor. This paper introduces the high-level requirements derived by the system requirements analysis activity as the first stage of system implementation. Operation support tools are under consideration for application to research reactors. In particular, with the full digitalization of the main control room, the application of a computer-based procedure system has been required as part of the man-machine interface system because it has an impact on operator staffing and human errors in a research reactor. To establish computer-based procedure system requirements for a research reactor, this paper addresses international standards and previous practices at nuclear power plants.

  13. Interpretation of engine cycle-to-cycle variation by chaotic time series analysis

    Daw, C.S.; Kahl, W.K.

    1990-01-01

    In this paper we summarize preliminary results from applying a new mathematical technique -- chaotic time series analysis (CTSA) -- to cylinder pressure data from a spark-ignition (SI) four-stroke engine fueled with both methanol and iso-octane. Our objective is to look for the presence of "deterministic chaos" dynamics in peak pressure variations and to investigate the potential usefulness of CTSA as a diagnostic tool. Our results suggest that sequential peak cylinder pressures exhibit some characteristic features of deterministic chaos and that CTSA can extract previously unrecognized information from such data. 18 refs., 11 figs., 2 tabs.
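
    A basic CTSA-style diagnostic for such data is the return map of consecutive peak pressures; a sketch on made-up values (not the authors' analysis) is:

```python
import numpy as np

def return_map(peak_pressures):
    """Pairs (P_i, P_{i+1}) of consecutive peak cylinder pressures.

    Structured, non-random patterns in a plot of these pairs hint at
    low-dimensional deterministic dynamics, whereas a featureless cloud
    suggests purely stochastic cycle-to-cycle variation.
    """
    p = np.asarray(peak_pressures, dtype=float)
    return np.column_stack((p[:-1], p[1:]))

# Made-up series standing in for measured per-cycle peak pressures (bar).
peaks = 30.0 + np.random.default_rng(6).normal(scale=2.0, size=500)
pairs = return_map(peaks)
```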

  14. Analysis of the average daily radon variations in the soil air

    Holy, K.; Matos, M.; Boehm, R.; Stanys, T.; Polaskova, A.; Hola, O.

    1998-01-01

    In this contribution, the search for a relation between the daily variations of the radon concentration and the regular daily oscillations of the atmospheric pressure is presented. The deviation of the radon activity concentration in the soil air from the average daily value reaches only a few percent. For the dry summer months the average daily course of the radon activity concentration can be described by the obtained equation. The analysis of the average daily courses could give information concerning the depth of the gas-permeable soil layer. This soil parameter is determined only with difficulty by other methods.

  15. ANALYSIS ON THE VARIATION OF MEDIAL ROTATION VALUES ACCORDING TO THE POSITION OF THE HUMERAL DIAPHYSIS.

    Miyazaki, Alberto Naoki; Fregoneze, Marcelo; Santos, Pedro Doneux; da Silva, Luciana Andrade; do Val Sella, Guilherme; Cohen, Carina; Busin Giora, Taís Stedile; Checchia, Sergio Luiz; Raia, Fabio; Pekelman, Hélio; Cymrot, Raquel

    2012-01-01

    To analyze the validity of measurements of medial rotation (MR) of the shoulder, using vertebral levels, according to the variation in the position of the humeral diaphysis, and to test the bi-goniometer as a new measuring instrument. 140 shoulders (70 patients) were prospectively evaluated in cases presenting unilateral shoulder MR limitation. The vertebral level was evaluated by means of a visual scale and was correlated with the angle obtained according to the position of the humeral diaphysis, using the bi-goniometer developed with the Department of Mechanical Engineering of Mackenzie University. The maximum vertebral level reached through MR on the unaffected side ranged from T3 to T12, and on the affected side, from T6 to the trochanter. Repositioning of the affected limb in MR according to the angular values on the normal side showed that 57.13% of the patients reached lower levels, between the sacrum, gluteus and trochanter. From analysis on the maximum vertebral level attained and the variation between the affected angle x (frontal plane: abduction and MR of the shoulder) and the unaffected angle x in MR, we observed that the greater the angle of the diaphyseal axis was, the lower the variation in the vertebral level attained was. From evaluating the linear correlation between the variables of difference in maximum vertebral level reached and variation in the affected angle y (extension and abduction of the shoulder) and the unaffected angle y in MR, we observed that there was no well-established linear relationship between these variables. Measurement of MR using vertebral levels does not correspond to the real values, since it varies according to the positioning of the humeral diaphysis.

  16. Meta-Analysis of Mitochondrial DNA Variation in the Iberian Peninsula.

    Ruth Barral-Arca

    The Iberian Peninsula has been the focus of attention of numerous studies dealing with mitochondrial DNA (mtDNA) variation, most of them targeting the control region segment. In the present study we sequenced the control region of 3,024 Spanish individuals from areas where available data were still limited. We also compiled mtDNA haplotypes from the literature involving 4,588 sequences and 28 population groups or small regions. We meta-analyzed all these data in order to shed further light on patterns of geographic variation, taking advantage of the large sample size and geographic coverage, in contrast with the atomized sampling strategy of previous work. The results indicate that the main mtDNA haplogroups show primarily clinal geographic patterns across the Iberian geography, roughly along a North-South axis. Haplogroup HV0 (where haplogroup U is nested) is more prevalent in the Franco Cantabrian region, in good agreement with previous findings that identified this area as a climate refuge during the Last Glacial Maximum (LGM), prior to a subsequent demographic re-expansion towards Central Europe and the Mediterranean. Typical sub-Saharan and North African lineages are slightly more prevalent in South Iberia, although at low frequencies; this pattern has been shaped mainly by the transatlantic slave trade and the Arab invasion of the Iberian Peninsula. The results also indicate that summary statistics that aim to measure molecular variation, or AMOVA, have limited sensitivity to detect population substructure, in contrast to patterns revealed by phylogeographic analysis. Overall, the results suggest that mtDNA variation in Iberia is substantially stratified. These patterns might be relevant in biomedical studies given that stratification is a common cause of false positives in case-control mtDNA association studies, and should be also considered when weighting the DNA evidence in forensic casework, which is strongly dependent on haplotype

  17. Meta-Analysis of Mitochondrial DNA Variation in the Iberian Peninsula.

    Barral-Arca, Ruth; Pischedda, Sara; Gómez-Carballa, Alberto; Pastoriza, Ana; Mosquera-Miguel, Ana; López-Soto, Manuel; Martinón-Torres, Federico; Álvarez-Iglesias, Vanesa; Salas, Antonio

    2016-01-01

    The Iberian Peninsula has been the focus of attention of numerous studies dealing with mitochondrial DNA (mtDNA) variation, most of them targeting the control region segment. In the present study we sequenced the control region of 3,024 Spanish individuals from areas where available data were still limited. We also compiled mtDNA haplotypes from the literature involving 4,588 sequences and 28 population groups or small regions. We meta-analyzed all these data in order to shed further light on patterns of geographic variation, taking advantage of the large sample size and geographic coverage, in contrast with the atomized sampling strategy of previous work. The results indicate that the main mtDNA haplogroups show primarily clinal geographic patterns across the Iberian geography, roughly along a North-South axis. Haplogroup HV0 (where haplogroup U is nested) is more prevalent in the Franco Cantabrian region, in good agreement with previous findings that identified this area as a climate refuge during the Last Glacial Maximum (LGM), prior to a subsequent demographic re-expansion towards Central Europe and the Mediterranean. Typical sub-Saharan and North African lineages are slightly more prevalent in South Iberia, although at low frequencies; this pattern has been shaped mainly by the transatlantic slave trade and the Arab invasion of the Iberian Peninsula. The results also indicate that summary statistics that aim to measure molecular variation, or AMOVA, have limited sensitivity to detect population substructure, in contrast to patterns revealed by phylogeographic analysis. Overall, the results suggest that mtDNA variation in Iberia is substantially stratified. These patterns might be relevant in biomedical studies given that stratification is a common cause of false positives in case-control mtDNA association studies, and should be also considered when weighting the DNA evidence in forensic casework, which is strongly dependent on haplotype frequencies.

  18. Assessment of soil/structure interaction analysis procedures for nuclear power plant structures

    Young, G.A.; Wei, B.C.

    1977-01-01

    The paper presents an assessment of two state-of-the-art soil/structure interaction analysis procedures that are frequently used to provide seismic analyses of nuclear power plant structures. The advantages of large three-dimensional, elastic, discrete mass models and two-dimensional finite element models are compared. The discrete mass models can provide three-dimensional response capability with economical computer costs but only fair soil/structure interaction representation. The two-dimensional finite element models provide good soil/structure interaction representation, but cannot provide out-of-plane response. Three-dimensional finite element models would provide the most informative and complete analyses. For this model, computer costs would be much greater, but modeling costs would be approximately the same as those required for three-dimensional discrete mass models

  19. A Procedure for 3-D Contact Stress Analysis of Spiral Bevel Gears

    Kumar, A.; Bibel, G.

    1994-01-01

    Contact stress distribution of spiral bevel gears using nonlinear finite element static analysis is presented. Procedures have been developed to solve the nonlinear equations that identify the gear and pinion surface coordinates based on the kinematics of the cutting process and orientate the pinion and the gear in space to mesh with each other. Contact is simulated by connecting GAP elements along the intersection of a line from each pinion point (parallel to the normal at the contact point) with the gear surface. A three dimensional model with four gear teeth and three pinion teeth is used to determine the contact stresses at two different contact positions in a spiral bevel gearset. A summary of the elliptical contact stress distribution is given. This information will be helpful to helicopter and aircraft transmission designers who need to minimize weight of the transmission and maximize reliability.

  20. Fracture analysis procedure for cast austenitic stainless steel pipe with an axial crack

    Kamaya, Masayuki

    2012-01-01

    Since the ductility of cast austenitic stainless steel pipes decreases due to thermal aging embrittlement after long term operation, not only plastic collapse failure but also unstable ductile crack propagation (elastic-plastic failure) should be taken into account for the structural integrity assessment of cracked pipes. In the fitness-for-service code of the Japan Society of Mechanical Engineers (JSME), Z-factor is used to incorporate the reduction in failure load due to elastic-plastic failure. However, the JSME code does not provide the Z-factor for axial cracks. In this study, Z-factor for axial cracks in aged cast austenitic stainless steel pipes was derived. Then, a comparison was made for the elastic-plastic failure load obtained from different analysis procedures. It was shown that the obtained Z-factor could derive reasonable elastic-plastic failure loads, although the failure loads were more conservative than those obtained by the two-parameter method. (author)

  1. A Numerical Procedure for Model Identifiability Analysis Applied to Enzyme Kinetics

    Van Daele, Timothy; Van Hoey, Stijn; Gernaey, Krist

    2015-01-01

    The proper calibration of models describing enzyme kinetics can be quite challenging. In the literature, different procedures are available to calibrate these enzymatic models in an efficient way. However, in most cases the model structure is already decided on prior to the actual calibration...... and Pronzato (1997) and which can be easily set up for any type of model. In this paper the proposed approach is applied to the forward reaction rate of the enzyme kinetics proposed by Shin and Kim(1998). Structural identifiability analysis showed that no local structural model problems were occurring......) identifiability problems. By using the presented approach it is possible to detect potential identifiability problems and avoid pointless calibration (and experimental!) effort....
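
    A hedged sketch of a local, sensitivity-based identifiability check for the Michaelis-Menten forward rate (the parameter values, substrate range and step size are illustrative, not taken from the cited work):

```python
import numpy as np

def mm_rate(s, vmax, km):
    """Michaelis-Menten forward rate v = Vmax * S / (Km + S)."""
    return vmax * s / (km + s)

def sensitivity_matrix(s_values, vmax, km, rel_step=1e-4):
    """Finite-difference local sensitivities dv/dVmax and dv/dKm.

    Near-perfect correlation between the two columns signals a practical
    identifiability problem for the chosen experimental design.
    """
    base = mm_rate(s_values, vmax, km)
    d_vmax = (mm_rate(s_values, vmax * (1 + rel_step), km) - base) / (vmax * rel_step)
    d_km = (mm_rate(s_values, vmax, km * (1 + rel_step)) - base) / (km * rel_step)
    return np.column_stack((d_vmax, d_km))

s = np.linspace(0.1, 10.0, 25)                 # assumed substrate concentrations
S = sensitivity_matrix(s, vmax=2.5, km=1.2)    # assumed parameter values
collinearity = np.corrcoef(S[:, 0], S[:, 1])[0, 1]
```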

  2. Fundamental quantification procedure for total reflection X-ray fluorescence spectra analysis and elements determination

    Wegrzynek, D.; Holynska, B.

    1997-01-01

    A method for the determination of the concentrations of elements in particulate-like samples measured in total reflection geometry is proposed. In the proposed method the fundamental parameters are utilized for calculating the sensitivities of elements and an internal standard is used to account for the unknown mass per unit area of a sample and geometrical constant of the spectrometer. The modification of the primary excitation spectrum on its way to a sample has been taken into consideration. The concentrations of the elements to be determined are calculated simultaneously with the spectra deconvolution procedure. In the process of quantitative analysis the intensities of all X-ray peaks corresponding to K and L-series lines present in the analyzed spectrum are taken into account. (Author)
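
    The internal-standard quantification underlying such a procedure can be sketched as follows; the variable names and example numbers are illustrative only:

```python
def txrf_concentration(net_counts_i, sensitivity_i,
                       net_counts_is, sensitivity_is, conc_is):
    """Internal-standard quantification commonly used in TXRF:

        C_i = C_IS * (N_i / S_i) / (N_IS / S_IS)

    N are net peak intensities and S the (fundamental-parameter) sensitivities;
    the internal standard cancels the unknown sample mass per unit area and
    the geometrical constant of the spectrometer.
    """
    return conc_is * (net_counts_i / sensitivity_i) / (net_counts_is / sensitivity_is)

# Example: Ga added as internal standard at 1.0 mg/L (all numbers invented).
c_fe = txrf_concentration(net_counts_i=15400, sensitivity_i=2.1e3,
                          net_counts_is=8200, sensitivity_is=1.9e3, conc_is=1.0)
```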

  3. The significance of the probabilistic safety analysis (PSA) in administrative procedures under nuclear law

    Berg, H.P.

    1994-01-01

    The probabilistic safety analysis (PSA) is a useful tool for safety relevant evaluation of nuclear power plant designed on the basis of deterministic specifications. The PSA yields data identifying reliable or less reliable systems, or frequent or less frequent failure modes to be taken into account for safety engineering. Performance of a PSA in administrative procedures under nuclear law, e.g. licensing, is an obligation laid down in a footnote to criterion 1.1 of the BMI safety criteria catalogue, which has been in force unaltered since 1977. The paper explains the application and achievements of PSA in the phase of reactor development concerned with the conceptual design basis and design features, using as an example the novel PWR. (orig./HP) [de

  4. Report on nuclear industry quality assurance procedures for safety analysis computer code development and use

    Sheron, B.W.; Rosztoczy, Z.R.

    1980-08-01

    As a result of a request from Commissioner V. Gilinsky to investigate in detail the causes of an error discovered in a vendor Emergency Core Cooling System (ECCS) computer code in March, 1978, the staff undertook an extensive investigation of the vendor quality assurance practices applied to safety analysis computer code development and use. This investigation included inspections of code development and use practices of the four major Light Water Reactor Nuclear Steam Supply System vendors and a major reload fuel supplier. The conclusion reached by the staff as a result of the investigation is that vendor practices for code development and use are basically sound. A number of areas were identified, however, where improvements to existing vendor procedures should be made. In addition, the investigation also addressed the quality assurance (QA) review and inspection process for computer codes and identified areas for improvement

  5. Task Analysis of Emergency Operating Procedures for Generating Quantitative HRA Data

    Kim, Yochan; Park, Jinkyun; Kim, Seunghwan; Choi, Sun Yeong; Jung, Wondea; Jang, Inseok [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2015-10-15

    In this paper, the analysis results of the emergency task in the procedures (EOPs; emergency operating procedures) that can be observed from the simulator data are introduced. The task type, component type, system type, and additional information related with the performance of the operators were described. In addition, a prospective application of the analyzed information to HEP quantification process was discussed. In the probabilistic safety analysis (PSA) field, various human reliability analyses (HRAs) have been performed to produce estimates of human error probabilities (HEPs) for significant tasks in complex socio-technical systems. To this end, Many HRA methods have provided basic or nominal HEPs for typical tasks and the quantitative relations describing how a certain performance context or performance shaping factors (PSFs) affects the HEPs. In the HRA community, however, the necessity of appropriate and sufficient human performance data has been recently indicated. This is because a wide range of quantitative estimates in the previous HRA methods are not supported by solid empirical bases. Hence, there have been attempts to collect HRA supporting data. For example, KAERI has started to collect information on both unsafe acts of operators and the relevant PSFs. A characteristic of the database that is being developed at KAERI is that human errors and related PSF surrogates that can be objectively observable are collected from full-scope simulator experiences. In this environment, to produce concretely grounded bases of the HEPs, the traits or attributes of tasks where significant human errors can be observed should be definitely determined. The determined traits should be applicable to compare the HEPs on the traits with the data in previous HRA methods or databases. In this study, task characteristics in a Westinghouse type of EOPs were analyzed with the defining task, component, and system taxonomies.

  6. [Delirium in stroke patients : Critical analysis of statistical procedures for the identification of risk factors].

    Nydahl, P; Margraf, N G; Ewers, A

    2017-04-01

    Delirium is a relevant complication following an acute stroke. It is a multifactor occurrence with numerous interacting risk factors that alternately influence each other. The risk factors of delirium in stroke patients are often based on limited clinical studies. The statistical procedures and clinical relevance of delirium related risk factors in adult stroke patients should therefore be questioned. This secondary analysis includes clinically relevant studies that give evidence for the clinical relevance and statistical significance of delirium-associated risk factors in stroke patients. The quality of the reporting of regression analyses was assessed using Ottenbacher's quality criteria. The delirium-associated risk factors identified were examined with regard to statistical significance using the Bonferroni method of multiple testing for forming incorrect positive hypotheses. This was followed by a literature-based discussion on clinical relevance. Nine clinical studies were included. None of the studies fulfilled all the prerequisites and assumptions given for the reporting of regression analyses according to Ottenbacher. Of the 108 delirium-associated risk factors, a total of 48 (44.4%) were significant, whereby a total of 28 (58.3%) were false positive after Bonferroni correction. Following a literature-based discussion on clinical relevance, the assumption of statistical significance and clinical relevance could be found for only four risk factors (dementia or cognitive impairment, total anterior infarct, severe infarct and infections). The statistical procedures used in the existing literature are questionable, as are their results. A post-hoc analysis and critical appraisal reduced the number of possible delirium-associated risk factors to just a few clinically relevant factors.
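
    The Bonferroni step referred to above is straightforward to reproduce; a sketch with invented p-values (108 tests, as in the analysis) is:

```python
import random

def bonferroni_keep(p_values, alpha=0.05):
    """Flag p-values that remain significant after Bonferroni correction.

    Each raw p-value is compared against alpha / m, where m is the number of
    tested risk factors (equivalent to multiplying p by m and capping at 1).
    """
    m = len(p_values)
    return [p <= alpha / m for p in p_values]

random.seed(7)
p_vals = [random.random() ** 3 for _ in range(108)]   # invented raw p-values
still_significant = sum(bonferroni_keep(p_vals))
```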

  7. Neutron activation analysis characterization procedures for fish consumed at São Paulo City

    Tappiz, Bruno; Moreira, Edson G.

    2017-01-01

    The characterization of edible tissues of fish consumed by humans is very important for the determination of several toxic and potentially toxic elements, ensuring food safety. The Instrumental Neutron Activation Analysis (INAA) comparative method allows the determination of several of these elements, as well as others, for example of nutritional character. This study is part of the International Atomic Energy Agency (IAEA) technical cooperation project of Latin America and Caribbean countries to ensure the quality of food and the biomonitoring of contaminants in molluscs and fishes. Ten specimens of 5 of the most consumed fish in São Paulo city were analyzed: white mouth croaker (Micropogonias Furnieri), smooth weakfish (Cynoscion learchus), common snook (Centropomus undecimalis), Brazilian sardine (Sardinella brasiliensis) and bluefish (Pomatomus Saltatrix). The complete procedures for analysis are reported, including purchase at the largest warehouse in Latin America, transport to the laboratory, storage, freeze-drying, milling, weighing and other preparations of the subsamples, as well as the short irradiation parameters for the determination of Br, Cl, K, Mn and Na. Results obtained for macro and microelements are presented and are in agreement with analyses of oyster tissue and mussel tissue certified reference materials under the same irradiation conditions, with z-score values ranging from -3.0 to 2.2. (author)
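
    One common way to express such agreement with certified reference values is a zeta-type score that combines the laboratory and reference uncertainties; the exact score formulation used in the record may differ, and the numbers below are invented:

```python
def zeta_score(measured, u_measured, certified, u_certified):
    """Zeta score against a certified reference value:

        zeta = (x_lab - x_ref) / sqrt(u_lab**2 + u_ref**2)

    |zeta| <= 2 is usually read as satisfactory, 2-3 as questionable,
    and > 3 as unsatisfactory.
    """
    return (measured - certified) / (u_measured ** 2 + u_certified ** 2) ** 0.5

# Hypothetical Mn result (mg/kg) in a mussel-tissue reference material.
z = zeta_score(measured=54.2, u_measured=2.1, certified=52.0, u_certified=1.8)
```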

  8. A single extraction and HPLC procedure for simultaneous analysis of phytosterols, tocopherols and lutein in soybeans.

    Slavin, Margaret; Yu, Liangli Lucy

    2012-12-15

    A saponification/extraction procedure and high performance liquid chromatography (HPLC) analysis method were developed and validated for simultaneous analysis of phytosterols, tocopherols and lutein (a carotenoid) in soybeans. Separation was achieved on a phenyl column with a ternary, isocratic solvent system of acetonitrile, methanol and water (48:22.5:29.5, v/v/v). Evaporative light scattering detection (ELSD) was used to quantify β-sitosterol, stigmasterol, campesterol, and α-, δ- and γ-tocopherols, while lutein was quantified with visible light absorption at 450 nm. Peak identification was verified by retention times and spikes with external standards. Standard curves were constructed (R(2)>0.99) to allow for sample quantification. Recovery of the saponification and extraction was demonstrated via analysis of spiked samples. Also, the accuracy of results of four soybeans using the described saponification and HPLC analytical method was validated against existing methods. This method offers a more efficient alternative to individual methods for quantifying lutein, tocopherols and sterols in soybeans. Copyright © 2012 Elsevier Ltd. All rights reserved.

  9. Neutron activation analysis characterization procedures for fish consumed at São Paulo City

    Tappiz, Bruno; Moreira, Edson G., E-mail: brunotappiz2@gmail.com, E-mail: emoreira@ipen.br [Instituto de Pesquisas Energéticas e Nucleares (IPEN/CNEN-SP), São Paulo, SP (Brazil)

    2017-07-01

    The characterization of edible tissues of fish consumed by humans is very important for the determination of several toxic and potentially toxic elements, ensuring food safety. The Instrumental Neutron Activation Analysis (INAA) comparative method allows the determination of several of these elements, as well as others, for example of nutritional character. This study is part of the International Atomic Energy Agency (IAEA) technical cooperation project of Latin America and Caribbean countries to ensure the quality of food and the biomonitoring of contaminants in molluscs and fishes. Ten specimens of 5 of the most consumed fish in São Paulo city were analyzed: white mouth croaker (Micropogonias Furnieri), smooth weakfish (Cynoscion learchus), common snook (Centropomus undecimalis), Brazilian sardine (Sardinella brasiliensis) and bluefish (Pomatomus Saltatrix). The complete procedures for analysis are reported, including purchase at the largest warehouse in Latin America, transport to the laboratory, storage, freeze-drying, milling, weighing and other preparations of the subsamples, as well as the short irradiation parameters for the determination of Br, Cl, K, Mn and Na. Results obtained for macro and microelements are presented and are in agreement with analyses of oyster tissue and mussel tissue certified reference materials under the same irradiation conditions, with z-score values ranging from -3.0 to 2.2. (author)

  10. A DNA fingerprinting procedure for ultra high-throughput genetic analysis of insects.

    Schlipalius, D I; Waldron, J; Carroll, B J; Collins, P J; Ebert, P R

    2001-12-01

    Existing procedures for the generation of polymorphic DNA markers are not optimal for insect studies in which the organisms are often tiny and background molecular information is often non-existent. We have used a new high throughput DNA marker generation protocol called randomly amplified DNA fingerprints (RAF) to analyse the genetic variability in three separate strains of the stored grain pest, Rhyzopertha dominica. This protocol is quick, robust and reliable even though it requires minimal sample preparation, minute amounts of DNA and no prior molecular analysis of the organism. Arbitrarily selected oligonucleotide primers routinely produced approximately 50 scoreable polymorphic DNA markers, between individuals of three independent field isolates of R. dominica. Multivariate cluster analysis using forty-nine arbitrarily selected polymorphisms generated from a single primer reliably separated individuals into three clades corresponding to their geographical origin. The resulting clades were quite distinct, with an average genetic difference of 37.5 +/- 6.0% between clades and of 21.0 +/- 7.1% between individuals within clades. As a prelude to future gene mapping efforts, we have also assessed the performance of RAF under conditions commonly used in gene mapping. In this analysis, fingerprints from pooled DNA samples accurately and reproducibly reflected RAF profiles obtained from individual DNA samples that had been combined to create the bulked samples.

  11. Transit Timing Variation analysis with Kepler light curves of KOI 227 and Kepler 93b

    Dulz, Shannon; Reed, Mike

    2017-01-01

    By searching for transit signals in approximately 150,000 stars, NASA’s Kepler Space telescope found thousands of exoplanets over its primary mission from 2009 to 2013 (Tenenbaum et al. 2014, ApJS, 211, 6). Yet, a detailed follow-up examination of Kepler light curves may contribute more evidence on system dynamics and planetary atmospheres of these objects. Kepler’s continuous observing of these systems over the mission duration produced light curves of sufficient duration to allow for the search for transit timing variations. Transit timing variations over the course of many orbits may indicate a precessing orbit or the existence of a non-transiting third body such as another exoplanet. Flux contributions of the planet just prior to secondary eclipse may provide a measurement of bond albedo from the day-side of the transiting planet. Any asymmetries of the transit shape may indicate thermal asymmetries which can measure upper atmosphere motion of the planet. These two factors can constrain atmospheric models of close orbiting exoplanets. We first establish our procedure with the well-documented TTV system, KOI 227 (Nesvorny et al. 2014, ApJ, 790, 31). Using the test case of KOI 227, we analyze Kepler-93b for TTVs and day-side flux contributions. Kepler-93b is likely a rocky planet with R = 1.50 ± 0.03 Earth Radii and M = 2.59 ± 2.0 Earth Masses (Marcy et al. 2014, ApJS, 210, 20). This research is funded by a NASA EPSCoR grant.
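
    Transit timing variations are usually quantified as observed-minus-calculated (O-C) residuals against a linear ephemeris; a sketch on synthetic mid-transit times (the period and epoch values are invented, not Kepler-93b's) is:

```python
import numpy as np

def o_minus_c(transit_times, epochs):
    """Observed-minus-calculated (O-C) transit-timing residuals.

    A linear ephemeris T(E) = T0 + P * E is fitted to the measured mid-transit
    times; coherent structure in the residuals is the classic TTV signature of
    a perturbing body or a precessing orbit.
    """
    epochs = np.asarray(epochs, dtype=float)
    times = np.asarray(transit_times, dtype=float)
    period, t0 = np.polyfit(epochs, times, 1)
    return times - (t0 + period * epochs)

# Synthetic mid-transit times (days) for 50 consecutive epochs.
epochs = np.arange(50)
times = 131.58 + 4.7267 * epochs + np.random.default_rng(8).normal(scale=2e-4, size=50)
residuals = o_minus_c(times, epochs)
```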

  12. Challenges from variation across regions in cost effectiveness analysis in multi-regional clinical trials

    Yunbo Chu

    2016-10-01

    Full Text Available Economic evaluation in the form of cost-effectiveness analysis has become a popular means to inform decisions in healthcare. With multi-regional clinical trials in a global development program becoming a new venue for drug efficacy testing in recent decades, questions about methods for cost-effectiveness analysis in the multi-regional clinical trial setting have also emerged. This paper addresses some challenges arising from variation across regions in cost-effectiveness analysis in multi-regional clinical trials. Several discussion points are raised for further attention, and a multi-regional clinical trial example is presented to illustrate the implications for industrial application. The general message is a call for in-depth discussion among all stakeholders to reach agreement on good practice for cost-effectiveness analysis in multi-regional clinical trials. Meanwhile, we recommend additionally considering cost-effectiveness analysis results based on clinical evidence from a homogeneous subpopulation, as a sensitivity or scenario analysis, where data availability permits.

  13. Anatomical variations of hepatic arterial system, coeliac trunk and renal arteries: an analysis with multidetector CT angiography.

    Ugurel, M S; Battal, B; Bozlar, U; Nural, M S; Tasar, M; Ors, F; Saglam, M; Karademir, I

    2010-08-01

    The purpose of our investigation was to determine the anatomical variations in the coeliac trunk-hepatic arterial system and the renal arteries in patients who underwent multidetector CT (MDCT) angiography of the abdominal aorta for various reasons. A total of 100 patients were analysed retrospectively. The coeliac trunk, hepatic arterial system and renal arteries were analysed individually and anatomical variations were recorded. Statistical analysis of the relationship between hepatocoeliac variations and renal artery variations was performed using a chi-squared test. There was a coeliac trunk trifurcation in 89% and a bifurcation in 8% of the cases. The coeliac trunk was absent in 1%, a hepatosplenomesenteric trunk was seen in 1% and a splenomesenteric trunk was present in 1%. Hepatic artery variation was present in 48% of patients. Coeliac trunk and/or hepatic arterial variation was present in 23 (39.7%) of the 58 patients with normal renal arteries, and in 27 (64.3%) of the 42 patients with accessory renal arteries. There was a statistically significant correlation between renal artery variations and coeliac trunk-hepatic arterial system variations (p = 0.015). MDCT angiography permits a correct and detailed evaluation of hepatic and renal vascular anatomy. The prevalence of variations in the coeliac trunk and/or hepatic arteries is increased in people with accessory renal arteries. For that reason, when undertaking angiographic examinations directed towards any single organ, the possibility of variations in the vascular structure of other organs should be kept in mind.

  14. Pan-Genome Analysis Links the Hereditary Variation of Leptospirillum ferriphilum With Its Evolutionary Adaptation

    Xian Zhang

    2018-03-01

    Full Text Available Niche adaptation has long been recognized to drive intra-species differentiation and speciation, yet knowledge about its relationship with the hereditary variation of microbial genomes is relatively limited. Using the Leptospirillum ferriphilum species as a case study, we present a detailed analysis of the genomic features of five recognized strains. Genome-to-genome distance calculation preliminarily determined the roles of spatial distance and environmental heterogeneity that potentially contribute to intra-species variation within the L. ferriphilum species at the genome level. Mathematical models were further constructed to extrapolate the expansion of L. ferriphilum genomes (an ‘open’ pan-genome), indicating the emergence of novel genes as new genomes are sequenced. The identification of diverse mobile genetic elements (MGEs), such as transposases, integrases, and phage-associated genes, revealed the prevalence of horizontal gene transfer events, an important evolutionary mechanism that provides avenues for the recruitment of novel functionalities and, further, for the genetic divergence of microbial genomes. Comprehensive analysis also demonstrated that genome reduction by gene loss in a broad sense might contribute to the observed diversification. We thus inferred a plausible explanation for this observation: community-dependent adaptation that potentially economizes the limited resources of the entire community. Given that the introduction of new genes is accompanied by a parallel abandonment of some others, our results provide snapshots of the biological fitness cost of environmental adaptation within the L. ferriphilum genomes. In short, our genome-wide analyses bridge the relation between the genetic variation of L. ferriphilum and its evolutionary adaptation.
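
    The abstract does not state which mathematical model was fitted to describe the 'open' pan-genome, so the sketch below is only illustrative: it fits Heaps' law, a common choice for extrapolating pan-genome growth, to hypothetical gene counts.

```python
# Illustrative sketch only: the abstract does not specify the model used, so here
# we fit Heaps' law, a common choice for extrapolating pan-genome size.
# P(N) = kappa * N**gamma; gamma > 0 suggests an "open" pan-genome.
import numpy as np
from scipy.optimize import curve_fit

genomes = np.array([1, 2, 3, 4, 5])                   # number of genomes added
pan_size = np.array([2400, 2850, 3150, 3380, 3560])   # hypothetical pan-genome gene counts

def heaps(n, kappa, gamma):
    return kappa * n ** gamma

(kappa, gamma), _ = curve_fit(heaps, genomes, pan_size, p0=(2000.0, 0.3))
print(f"kappa = {kappa:.0f}, gamma = {gamma:.2f}  (gamma > 0 => open pan-genome)")
```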

  15. [Analysis of genomic copy number variations in two sisters with primary amenorrhea and hyperandrogenism].

    Zhang, Yanliang; Xu, Qiuyue; Cai, Xuemei; Li, Yixun; Song, Guibo; Wang, Juan; Zhang, Rongchen; Dai, Yong; Duan, Yong

    2015-12-01

    To analyze genomic copy number variations (CNVs) in two sisters with primary amenorrhea and hyperandrogenism, G-banding was performed for karyotype analysis. The whole genomes of the two sisters were scanned and analyzed by array-based comparative genomic hybridization (array-CGH). The results were confirmed with real-time quantitative PCR (RT-qPCR). No abnormality was found by conventional G-banded chromosome analysis. Array-CGH identified 11 identical CNVs in the sisters which, however, overlapped with CNVs reported in the Database of Genomic Variants (http://projects.tcag.ca/variation/); they are therefore likely to be benign. In addition, an ~8.44 Mb 9p11.1-p13.1 duplication (38,561,587-47,002,387 bp, hg18) and an ~80.9 kb 4q13.2 deletion (70,183,990-70,264,889 bp, hg18) were detected in the elder and younger sister, respectively. The relationship between these CNVs and primary amenorrhea and hyperandrogenism remains uncertain. RT-qPCR results were in accordance with those of array-CGH. Two CNVs were detected in the two sisters by array-CGH; further studies are needed to clarify their correlation with primary amenorrhea and hyperandrogenism.

  16. Some analysis on the diurnal variation of rainfall over the Atlantic Ocean

    Gill, T.; Perng, S.; Hughes, A.

    1981-01-01

    Data collected from the GARP Atlantic Tropical Experiment (GATE) were examined. The data were collected from 10,000 grid points arranged as a 100 x 100 array; each grid cell covered a 4 square km area. The amount of rainfall was measured every 15 minutes during the experiment periods using C-band radars. Two types of analyses were performed on the data: an analysis of diurnal variation at each grid point based on the rainfall averages at noon and at midnight, and a time series analysis at selected grid points based on the hourly rainfall averages. Since there is no known distribution model which best describes the rainfall amount, nonparametric methods were used to examine the diurnal variation. The Kolmogorov-Smirnov test was used to test whether the rainfall at noon and at midnight have the same statistical distribution. The Wilcoxon signed-rank test was used to test whether the noon rainfall is heavier than, equal to, or lighter than the midnight rainfall. These tests were done on each of the 10,000 grid points at which data are available.
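
    The two nonparametric tests used above are available in standard statistical libraries. The sketch below applies both tests at a single grid point; the rainfall values are synthetic placeholders, not the GATE data.

```python
# Minimal sketch (assumptions: synthetic rainfall values) of the two nonparametric
# tests described above, applied at a single grid point.
import numpy as np
from scipy.stats import ks_2samp, wilcoxon

rng = np.random.default_rng(1)
noon = rng.gamma(shape=2.0, scale=1.5, size=60)       # hypothetical noon rainfall (mm)
midnight = rng.gamma(shape=2.0, scale=1.2, size=60)   # hypothetical midnight rainfall (mm)

# Kolmogorov-Smirnov: do noon and midnight rainfall share the same distribution?
ks_stat, ks_p = ks_2samp(noon, midnight)

# Wilcoxon signed-rank on paired noon/midnight values: is one systematically heavier?
w_stat, w_p = wilcoxon(noon, midnight)

print(f"KS: D = {ks_stat:.3f}, p = {ks_p:.3f}")
print(f"Wilcoxon: W = {w_stat:.1f}, p = {w_p:.3f}")
```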

  17. Genesis of theory and analysis of practice of applying the analytical procedures in auditing

    Сурніна, К. С.

    2012-01-01

    The article investigates how the concept of "analytical procedures" in auditing has been defined by different researchers, and sets out the author's own view of the necessity of wide use of analytical procedures in audit. A classification of analytical procedures is presented, taking into account the specificity of the auditing process as a whole.

  18. High procedural fairness heightens the effect of outcome favorability on self-evaluations : An attributional analysis

    Brockner, J.; Heuer, L.; Magner, N.; Folger, R.; Umphress, E.; Bos, K. van den; Vermunt, Riël; Magner, M.; Siegel, P.

    2003-01-01

    Previous research has shown that outcome favorability and procedural fairness often interact to influence employees' work attitudes and behaviors. Moreover, the form of the interaction effect depends upon the dependent variable. Relative to when procedural fairness is low, high procedural fairness:

  19. Melting temperature and enthalpy variations of phase change materials (PCMs): a differential scanning calorimetry (DSC) analysis

    Sun, Xiaoqin; Lee, Kyoung Ok; Medina, Mario A.; Chu, Youhong; Li, Chuanchang

    2018-06-01

    Differential scanning calorimetry (DSC) analysis is a standard thermal analysis technique used to determine the phase transition temperature, enthalpy, heat of fusion, specific heat and activation energy of phase change materials (PCMs). To determine the appropriate heating rate and sample mass, various DSC measurements were carried out using two kinds of PCMs, namely n-octadecane paraffin and calcium chloride hexahydrate. The variations in phase transition temperature, enthalpy, heat of fusion, specific heat and activation energy were observed within the applicable ranges of heating rate and sample mass. It was found that the phase transition temperature range increased with increasing heating rate and sample mass, while the heat of fusion varied without any established pattern. The specific heat decreased with increasing heating rate and sample mass. For accuracy purposes, it is recommended that for PCMs with high thermal conductivity (e.g. hydrated salts) the focus be on heating rate rather than sample mass.

  20. EFFECTS OF PARAMETRIC VARIATIONS ON SEISMIC ANALYSIS METHODS FOR NON-CLASSICALLY DAMPED COUPLED SYSTEMS

    XU, J.; DEGRASSI, G.

    2000-01-01

    A comprehensive benchmark program was developed by Brookhaven National Laboratory (BNL) to perform an evaluation of state-of-the-art methods and computer programs for performing seismic analyses of coupled systems with non-classical damping. The program, which was sponsored by the US Nuclear Regulatory Commission (NRC), was designed to address various aspects of application and limitations of these state-of-the-art analysis methods to typical coupled nuclear power plant (NPP) structures with non-classical damping, and was carried out through analyses of a set of representative benchmark problems. One objective was to examine the applicability of various analysis methods to problems with different dynamic characteristics unique to coupled systems. The examination was performed using parametric variations for three simple benchmark models. This paper presents the comparisons and evaluation of the program participants' results to the BNL exact solutions for the applicable ranges of modeling dynamic characteristic parameters

  1. Genetic variation analysis and relationships among environmental strains of Scedosporium apiospermum sensu stricto in Bangkok, Thailand.

    Thanwa Wongsuk

    Full Text Available The Scedosporium apiospermum species complex comprises emerging filamentous fungi that have been isolated from the environment. It can cause a wide range of infections in both immunocompetent and immunocompromised individuals. We aimed to study the genetic variation and relationships among 48 strains of S. apiospermum sensu stricto isolated from soil in Bangkok, Thailand. For PCR, sequencing and phylogenetic analysis, we used the following genes: actin; calmodulin exons 3 and 4; the second largest subunit of RNA polymerase II; β-tubulin exons 2-4; manganese superoxide dismutase; the internal transcribed spacer; transcription elongation factor 1α; and β-tubulin exons 5 and 6. The present study is the first phylogenetic analysis of relationships among S. apiospermum sensu stricto strains in Thailand and South-east Asia. The results provide useful information for future epidemiological studies and may be correlated with clinical manifestations.

  2. Developing Integrated Remote Sensing and Geographical Information Sciences Procedures to Assess Impacts of Climate Variations on Spatio-Temporal Distribution of Mangroves

    Qaisar, Maha

    2016-07-01

    Pakistan's imperiled mangrove treasures require protection from devastating anthropogenic activities, which can only be achieved through identification and management of this habitat. The primary objective of this study is to identify the potential habitat of mangroves along the coastline of Pakistan with the help of Remote Sensing (RS) and Geographical Information System (GIS) techniques. Once the mangroves were identified, mangrove species were separated through Object Based Image Analysis (OBIA), which gave the area of mangrove and non-mangrove sites. Sea Surface Temperature, Sea Surface Salinity and chlorophyll-a, along with altimetry data, were then used to assess the impact of climatic variations on the spatio-temporal distribution of mangroves. Because mangroves provide economic, ecological and biological indications of coastal change and sea level rise, they offer a strong platform for assessing the climatic variations that are negatively affecting the mangrove ecosystem. The results indicate that mangroves are present all along the coastline, showing that Pakistan is rich in these diverse ecosystems. Given its important geostrategic position, Pakistan can also benefit from its vast mangroves and other coastal resources such as coral reefs and fish varieties. Moreover, coastal zone management through involvement of the local community and establishment of Marine Protected Areas (MPAs) is needed to avoid deforestation of mangroves, which would be damaging for fish populations since mangroves provide habitat to various marine animals. The established relationships among SST, SSS, chlorophyll-a and altimetry data helped identify suitable sites for mangroves, but intensifying climatic impacts have distorted these relationships, with adverse effects on the growth and distribution of mangroves. The study area was the Karachi coast, Pakistan. The total area of Karachi is about 70

  3. Sensitivity analysis and parameter estimation for distributed hydrological modeling: potential of variational methods

    W. Castaings

    2009-04-01

    Full Text Available Variational methods are widely used for the analysis and control of computationally intensive spatially distributed systems. In particular, the adjoint state method enables a very efficient calculation of the derivatives of an objective function (response function to be analysed or cost function to be optimised) with respect to model inputs.

    In this contribution, it is shown that the potential of variational methods for distributed catchment scale hydrology should be considered. A distributed flash flood model, coupling kinematic wave overland flow and Green-Ampt infiltration, is applied to a small catchment of the Thoré basin and used as a relatively simple (synthetic observations) but didactic application case.

    It is shown that forward and adjoint sensitivity analysis provide a local but extensive insight on the relation between the assigned model parameters and the simulated hydrological response. Spatially distributed parameter sensitivities can be obtained for a very modest calculation effort (~6 times the computing time of a single model run) and the singular value decomposition (SVD) of the Jacobian matrix provides an interesting perspective for the analysis of the rainfall-runoff relation.

    For the estimation of model parameters, adjoint-based derivatives were found exceedingly efficient in driving a bound-constrained quasi-Newton algorithm. The reference parameter set is retrieved independently of the optimization initial condition when the very common dimension reduction strategy (i.e. scalar multipliers) is adopted.

    Furthermore, the sensitivity analysis results suggest that most of the variability in this high-dimensional parameter space can be captured with a few orthogonal directions. A parametrization based on the SVD leading singular vectors was found very promising but should be combined with another regularization strategy in order to prevent overfitting.
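
    The SVD-based dimensionality argument in the last paragraph can be illustrated with a toy example: build a Jacobian of model outputs with respect to distributed parameters (here by finite differences rather than the adjoint) and inspect how many singular directions capture most of the response variability. Everything below, including the toy model, is an assumption for illustration, not the hydrological model discussed above.

```python
# Illustrative sketch (not the authors' adjoint code): a finite-difference Jacobian
# of a toy model response with respect to a distributed parameter vector, followed
# by an SVD to see how many orthogonal parameter directions dominate the response.
import numpy as np

def model(params):
    # Hypothetical stand-in for a simulated hydrograph (10 time steps)
    t = np.linspace(0.0, 1.0, 10)
    return np.outer(t, params).sum(axis=1) + np.sin(2.0 * np.pi * t) * params.mean()

p0 = np.full(20, 0.5)                   # 20 distributed parameters
eps = 1e-6
J = np.empty((10, p0.size))
for j in range(p0.size):                # forward-difference sensitivity, column by column
    dp = p0.copy()
    dp[j] += eps
    J[:, j] = (model(dp) - model(p0)) / eps

U, s, Vt = np.linalg.svd(J, full_matrices=False)
explained = np.cumsum(s**2) / np.sum(s**2)
print("singular values:", np.round(s, 3))
print("directions needed for 99% of variability:", int(np.searchsorted(explained, 0.99)) + 1)
```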

  4. Evaluation of Different Normalization and Analysis Procedures for Illumina Gene Expression Microarray Data Involving Small Changes

    Johnstone, Daniel M.; Riveros, Carlos; Heidari, Moones; Graham, Ross M.; Trinder, Debbie; Berretta, Regina; Olynyk, John K.; Scott, Rodney J.; Moscato, Pablo; Milward, Elizabeth A.

    2013-01-01

    While Illumina microarrays can be used successfully for detecting small gene expression changes due to their high degree of technical replicability, there is little information on how different normalization and differential expression analysis strategies affect outcomes. To evaluate this, we assessed concordance across gene lists generated by applying different combinations of normalization strategy and analytical approach to two Illumina datasets with modest expression changes. In addition to using traditional statistical approaches, we also tested an approach based on combinatorial optimization. We found that the choice of both normalization strategy and analytical approach considerably affected outcomes, in some cases leading to substantial differences in gene lists and subsequent pathway analysis results. Our findings suggest that important biological phenomena may be overlooked when there is a routine practice of using only one approach to investigate all microarray datasets. Analytical artefacts of this kind are likely to be especially relevant for datasets involving small fold changes, where inherent technical variation—if not adequately minimized by effective normalization—may overshadow true biological variation. This report provides some basic guidelines for optimizing outcomes when working with Illumina datasets involving small expression changes. PMID:27605185
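
    As one concrete example of the normalization strategies being compared, the sketch below implements plain quantile normalization on a synthetic expression matrix; it illustrates the general technique only and is not the code used in the study.

```python
# Minimal sketch of one of the normalization strategies compared above (quantile
# normalization), applied to a small synthetic expression matrix (genes x samples).
# This is an illustration, not the code used in the study.
import numpy as np

rng = np.random.default_rng(2)
expr = rng.lognormal(mean=5.0, sigma=1.0, size=(1000, 6))    # hypothetical intensities

ranks = expr.argsort(axis=0).argsort(axis=0)        # rank of each gene within its sample
mean_by_rank = np.sort(expr, axis=0).mean(axis=1)   # reference: mean of sorted columns
normalized = mean_by_rank[ranks]                    # replace each value by the reference value at its rank

# After normalization every sample shares the same empirical distribution
print(np.allclose(np.sort(normalized, axis=0), mean_by_rank[:, None] * np.ones((1, 6))))
```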

  5. An analysis of contingency statements in a DRO procedure: A case report.

    Gerow, Stephanie; Rispoli, Mandy; Boles, Margot B; Neely, Leslie C

    2015-06-01

    To examine latency to criterion for reduction of challenging behaviour with and without stating a contingency statement immediately prior to a DRO procedure. An ABAC design in which A was baseline, B was used to evaluate the efficacy of a DRO procedure, and C was used to evaluate the efficacy of a DRO procedure with a contingency statement. The DRO with the contingency statement intervention was associated with a shorter latency to behaviour change than the DRO procedure without the contingency statement. These preliminary findings from this case study highlight the importance of examining the efficiency of behaviour change procedures. Directions for future research are provided.

  6. Laboratory manual on sample preparation procedures for x-ray micro-analysis

    1997-01-01

    X-ray micro fluorescence is a non-destructive and sensitive method for studying the microscopic distribution of different elements in almost all kinds of samples. Since the beginning of this century, x-rays and electrons have been used for the analysis of many different kinds of material. Techniques which rely on electrons are mainly developed for microscopic studies, and are used in conventional Electron Microscopy (EM) or Scanning Electron Microscopy (SEM), while x-rays are widely used for chemical analysis at the microscopic level. The first chemical analysis by fluorescence spectroscopy using small x-ray beams was conducted in 1928 by Glockner and Schreiber. Since then much work has been devoted to developing different types of optical systems for focusing an x-ray beam, but the efficiency of these systems is still inferior to the conventional electron optical systems. However, even with a poor optical efficiency, the x-ray microbeam has many advantages compared with electron or proton induced x-ray emission methods. These include: The analyses are non-destructive, losses of mass are negligible, and due to the low thermal loading of x-rays, materials which may be thermally degraded can be analysed; Samples can be analysed in air, and no vacuum is required, therefore specimens with volatile components such as water in biological samples, can be imaged at normal pressure and temperature; No charging occurs during analysis and therefore coating of the sample with a conductive layer is not necessary; With these advantages, simpler sample preparation procedures including mounting and preservation can be used

  7. Prediction of ppm level electrical failure by using physical variation analysis

    Hou, Hsin-Ming; Kung, Ji-Fu; Hsu, Y.-B.; Yamazaki, Y.; Maruyama, Kotaro; Toyoshima, Yuya; Chen, Chu-en

    2016-03-01

    their spatial correlation distance. For local variations (LV) there is no correlation, whereas for global variations (GV) the correlation distance is very large [7]-[9]. This is the first time the validity of the spatial distribution has been certified from the affordable bias-contour big-data infrastructure, with statistical techniques then applied to identify the variation sources. GV arise from systematic issues, which can be compensated by an adaptive LT condition or OPC correction, whereas LV arise from random issues, considered intrinsic problems such as structure, material and tool capability. In this study, we find that for the advanced technology node, SRAM contact CD local variation (LV) dominates the total variation, at about 70%. It often plays a significant role in in-line, real-time capture of the WP-DPMO contribution to product yield loss; the wafer edge shows the worst loss within the wafer distribution and raises serious reliability concerns. The major root cause of variation is the photoresist-material-induced burr defect (LV); the second is a GV-enhanced wafer-edge short opportunity, which is attributed to three factors. The first factor is the deliberate enlargement of the wafer-edge CD for yield improvement, as shown in Fig. 10. The second factor is overlay/AA shifts due to tool capability in dealing with incoming-wafer warpage and the optical-periphery layout-dependent working pitch, as shown in Fig. 9(1). The last factor is the wafer-edge burr enhanced by the larger photoresist (PR) spin centrifugal force at the wafer edge. After implementing KPIs such as the GV-related AA/CD indexes shown in Figs. 9(1) and 10, respectively, and the LV-related burr index shown in Fig. 11, we can construct a parts-per-million (PPM) level short-probability model via multivariable regression, canonical correlation analysis and logistic transformation. The model provides prediction of PPM-level electrical failure by using in-line real time physical

  8. Variation in cancer surgical outcomes associated with physician and nurse staffing: a retrospective observational study using the Japanese Diagnosis Procedure Combination Database

    Yasunaga Hideo

    2012-05-01

    Full Text Available Abstract Background Little is known about the effects of professional staffing on cancer surgical outcomes. The present study aimed to investigate the association between cancer surgical outcomes and physician/nurse staffing in relation to hospital volume. Methods We analyzed 131,394 patients undergoing lung lobectomy, esophagectomy, gastrectomy, colorectal surgery, hepatectomy or pancreatectomy for cancer between July and December, 2007–2008, using the Japanese Diagnosis Procedure Combination database linked to the Survey of Medical Institutions data. The physician-to-bed ratio (PBR) and nurse-to-bed ratio (NBR) were determined for each hospital. Hospital volume was categorized into low, medium and high for each of six cancer surgeries. Failure to rescue (FTR) was defined as the proportion of in-hospital deaths among those with postoperative complications. Multi-level logistic regression analysis was performed to examine the association between physician/nurse staffing and FTR, adjusting for patient characteristics and hospital volume. Results Overall in-hospital mortality was 1.8%, the postoperative complication rate was 15.2%, and the FTR rate was 11.9%. After adjustment for hospital volume, the FTR rate in the group with high PBR (≥19.7 physicians per 100 beds) and high NBR (≥77.0 nurses per 100 beds) was significantly lower than that in the group with low PBR. Conclusions Well-staffed hospitals confer a benefit for cancer surgical patients regarding reduced FTR, irrespective of hospital volume. These results suggest that consolidation of surgical centers linked with migration of medical professionals may improve the quality of cancer surgical management.

  9. Development of an optimized procedure bridging design and structural analysis codes for the automatized design of the SMART

    Kim, Tae Wan; Park, Keun Bae; Choi, Suhn; Kim, Kang Soo; Jeong, Kyeong Hoon; Lee, Gyu Mahn

    1998-09-01

    In this report, an optimized design and analysis procedure is established for application to the SMART (System-integrated Modular Advanced ReacTor) development. The aim of the optimized procedure is to minimize time consumption and engineering effort by streamlining the design and feedback interactions. To achieve this goal, the data and information generated during design development should be transferred directly to the analysis program with minimum manual operation. Verification of the design concept requires considerable effort, since communication between design and analysis involves a time-consuming stage of converting input information. In this report, an optimized procedure bridging the design and analysis stages is established utilizing IDEAS, ABAQUS and ANSYS. (author). 3 refs., 2 tabs., 5 figs

  10. Super-delta: a new differential gene expression analysis procedure with robust data normalization.

    Liu, Yuhang; Zhang, Jinfeng; Qiu, Xing

    2017-12-21

    Normalization is an important data preparation step in gene expression analyses, designed to remove various systematic noise. Sample variance is greatly reduced after normalization, hence the power of subsequent statistical analyses is likely to increase. On the other hand, variance reduction is made possible by borrowing information across all genes, including differentially expressed genes (DEGs) and outliers, which will inevitably introduce some bias. This bias typically inflates type I error and can reduce statistical power in certain situations. In this study we propose a new differential expression analysis pipeline, dubbed super-delta, that consists of a multivariate extension of global normalization and a modified t-test. A robust procedure is designed to minimize the bias introduced by DEGs in the normalization step. The modified t-test is derived based on asymptotic theory for hypothesis testing that suitably pairs with the proposed robust normalization. We first compared super-delta with four commonly used normalization methods: global, median-IQR, quantile, and cyclic loess normalization in simulation studies. Super-delta was shown to have better statistical power with tighter control of the type I error rate than its competitors. In many cases, the performance of super-delta is close to that of an oracle test in which datasets without technical noise were used. We then applied all methods to a collection of gene expression datasets on breast cancer patients who received neoadjuvant chemotherapy. While there is a substantial overlap of the DEGs identified by all of them, super-delta was able to identify comparatively more DEGs than its competitors. Downstream gene set enrichment analysis confirmed that all these methods selected largely consistent pathways. Detailed investigations of the relatively small differences showed that pathways identified by super-delta have better connections to breast cancer than the other methods. As a new pipeline, super
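
    A greatly simplified sketch of the general idea is given below: a robust (median-based) global normalization per sample followed by ordinary gene-wise t-tests. The actual super-delta procedure uses a multivariate, gene-pairing normalization and a modified t-statistic that are not reproduced here, and all data in the sketch are synthetic.

```python
# Highly simplified, assumed sketch of the general idea only: a robust (median-based)
# global normalization per sample followed by gene-wise two-sample t-tests. The actual
# super-delta procedure pairs genes and uses a modified t-statistic, which is not
# reproduced here.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(3)
expr = rng.normal(8.0, 1.0, size=(500, 12))      # log2 expression, 500 genes x 12 samples
groups = np.array([0] * 6 + [1] * 6)             # two treatment groups

# Robust global normalization: subtract each sample's median expression
normalized = expr - np.median(expr, axis=0, keepdims=True)

# Gene-wise differential expression between the two groups
t, p = ttest_ind(normalized[:, groups == 0], normalized[:, groups == 1], axis=1)
print("genes with p < 0.01:", int((p < 0.01).sum()))
```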

  11. Manual of Standard Operating Procedures for Veterinary Drug Residue Analysis (Spanish Edition)

    2017-01-01

    Laboratories are crucial to national veterinary drug residue monitoring programmes. However, one of the main challenges laboratories encounter is obtaining access to relevant methods of analysis. Thus, in addition to training, providing technical advice and transferring technology, the Joint FAO/IAEA Division of Nuclear Techniques in Food and Agriculture has resolved to develop clear and practical manuals to support Member State laboratories. The Coordinated Research Project (CRP) on Development of Radiometric and Allied Analytical Methods to Strengthen Residue Control Programs for Antibiotic and Anthelmintic Veterinary Drug Residues has developed a number of analytical methods as standard operating procedures (SOPs), which are now compiled here. This publication contains SOPs on chromatographic and spectrometric techniques, as well as radioimmunoassay and associated screening techniques, for various anthelmintic and antimicrobial veterinary drug residue analysis. Some analytical method validation protocols are also included. The publication is primarily aimed at food and environmental safety laboratories involved in testing veterinary drug residues, including under organized national residue monitoring programmes. It is expected to enhance laboratory capacity building and competence through the use of radiometric and complementary tools and techniques. The publication is also relevant for applied research on residues of veterinary drugs in food and environmental samples

  12. Manual of Standard Operating Procedures for Veterinary Drug Residue Analysis (French Edition)

    2017-01-01

    Laboratories are crucial to national veterinary drug residue monitoring programmes. However, one of the main challenges laboratories encounter is obtaining access to relevant methods of analysis. Thus, in addition to training, providing technical advice and transferring technology, the Joint FAO/IAEA Division of Nuclear Techniques in Food and Agriculture has resolved to develop clear and practical manuals to support Member State laboratories. The Coordinated Research Project (CRP) on Development of Radiometric and Allied Analytical Methods to Strengthen Residue Control Programs for Antibiotic and Anthelmintic Veterinary Drug Residues has developed a number of analytical methods as standard operating procedures (SOPs), which are now compiled here. This publication contains SOPs on chromatographic and spectrometric techniques, as well as radioimmunoassay and associated screening techniques, for various anthelmintic and antimicrobial veterinary drug residue analysis. Some analytical method validation protocols are also included. The publication is primarily aimed at food and environmental safety laboratories involved in testing veterinary drug residues, including under organized national residue monitoring programmes. It is expected to enhance laboratory capacity building and competence through the use of radiometric and complementary tools and techniques. The publication is also relevant for applied research on residues of veterinary drugs in food and environmental samples

  13. A cost and time analysis of laryngology procedures in the endoscopy suite versus the operating room.

    Hillel, Alexander T; Ochsner, Matthew C; Johns, Michael M; Klein, Adam M

    2016-06-01

    To assess the costs, charges, reimbursement, and efficiency of performing awake laryngology procedures in an endoscopy suite (ES) compared with like procedures performed in the operating room (OR), a retrospective review of billing records was conducted. Costs, charges, and reimbursements for the hospital, surgeon, and anesthesiologist were compared between ES injection laryngoplasty and laser excision procedures and matched case controls in the OR. Time spent in 1) the preoperative unit, 2) the operating or endoscopy suite, and 3) the recovery unit was compared between OR and ES procedures. Hospital expenses were significantly less for ES procedures when compared to OR procedures. Reimbursement was similar for ES and OR injection laryngoplasty, though greater for OR laser excisions. Net balance (reimbursement minus expenses) was greater for ES procedures. A predictive model of payer costs over a 3-year period showed similar costs for ES and OR laser procedures and reduced costs for ES compared to OR injection laryngoplasty. Times spent preoperatively and during the procedure were significantly less for ES procedures. For individual laryngology procedures, the ES reduces time and costs compared to the OR, increasing otolaryngologist and hospital efficiency. This reveals cost and time savings of ES injection laryngoplasty, which occurs at a similar frequency as OR injection laryngoplasty. Given the increased frequency of ES laser procedures, total costs are similar for ES and OR laser excision of papilloma, which usually requires repeated procedures. When regulated office space is unavailable, endoscopy rooms represent an alternative setting for unsedated laryngology procedures. Laryngoscope, 126:1385-1389, 2016. © 2015 The American Laryngological, Rhinological and Otological Society, Inc.

  14. A Meta-Analysis of Bilateral Essure® Procedural Placement Success Rates on First Attempt.

    Frietze, Gabriel; Leyser-Whalen, Ophra; Rahman, Mahbubur; Rouhani, Mahta; Berenson, Abbey B

    2015-12-01

    Background: The Essure ® (Bayer HealthCare Pharmaceuticals, Leverkusen, Germany) female sterilization procedure entails using a hysteroscope to guide a microinsert into the Fallopian tube openings. Failed placement can lead to patient dissatisfaction, repeat procedures, unintended or ectopic pregnancy, perforation of internal organs, or need for subsequent medical interventions. Additional interventions increase women's health risks, and costs for patients and the health care industry. Demonstrated successful placement rates are 63%-100%. To date, there have not been any systematic analyses of variables associated with placement rates. Objectives: The aims of this review were: (1) to estimate the average rate of successful bilateral Essure microinsert placement on first attempt; and (2) to identify variables associated with successful placement. Materials and Methods: A meta-analysis was conducted on 64 published studies and 19 variables. Following Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines, all published studies between November 2001 and February 2015 that reported variables associated with successful bilateral Essure placement rates were reviewed. The studies were retrieved from PubMed and Google Scholar, and by using the "snowball" method. Results: The weighted average rate of successful bilateral microinsert placement on first attempt was 92% (0.92 [95% confidence interval: 0.904-0.931]). Variables associated with successful placements were: (1) newer device models; (2) higher body mass index; and (3) a higher percent of patients who received local anesthesia. Conclusions: The data gathered for this review indicate that the highest bilateral success rates may be obtained by utilizing the newest Essure device model with local anesthesia in heavier patients. More standardized data reporting in published Essure studies is recommended. (J GYNECOL SURG 31:308).

  15. Formal analysis and evaluation of the back-off procedure in IEEE802.11P VANET

    Jin, Li; Zhang, Guoan; Zhu, Xiaojun

    2017-07-01

    The back-off procedure is one of the medium access control technologies in the 802.11p communication protocol. It plays an important role in avoiding message collisions and allocating channel resources. Formal methods are effective approaches for studying the performance of communication systems. In this paper, we establish a discrete time model for the back-off procedure. We use Markov Decision Processes (MDPs) to model the non-deterministic and probabilistic behaviors of the procedure, and use the probabilistic computation tree logic (PCTL) language to express different properties, which ensure that the discrete time model performs its basic functionality. Based on the model and PCTL specifications, we study the effect of the contention window length on the number of senders in the neighborhood of given receivers, and on the station's expected cost required by the back-off procedure to successfully send packets. Variation of the window length may increase or decrease the maximum probability of correct transmissions within a time contention unit. We propose to use the PRISM model checker to describe our proposed back-off procedure for the IEEE 802.11p protocol in vehicular networks, and define different probabilistic property formulas to automatically verify the model and derive numerical results. The obtained results are helpful for justifying the values of the time contention unit.
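
    The role of the contention window length can be illustrated independently of the MDP/PCTL machinery with a toy Monte Carlo sketch: several stations draw back-off counters from a window of a given size, and a slot succeeds only when a single station holds the smallest counter. This is not the PRISM model and does not use the exact IEEE 802.11p EDCA parameters.

```python
# Toy Monte Carlo sketch (not the PRISM/MDP model, and not exact IEEE 802.11p EDCA
# parameters): several stations draw back-off counters uniformly from a contention
# window; a slot is successful only if exactly one station picks the smallest counter.
import random

def success_probability(n_senders, cw, trials=100_000):
    wins = 0
    for _ in range(trials):
        counters = [random.randrange(cw) for _ in range(n_senders)]
        if counters.count(min(counters)) == 1:   # unique minimum => no collision
            wins += 1
    return wins / trials

for cw in (8, 16, 32, 64):
    print(cw, round(success_probability(n_senders=10, cw=cw), 3))
```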

  16. Procedure-related risk of miscarriage following amniocentesis and chorionic villus sampling: a systematic review and meta-analysis.

    Akolekar, R; Beta, J; Picciarelli, G; Ogilvie, C; D'Antonio, F

    2015-01-01

    To estimate procedure-related risks of miscarriage following amniocentesis and chorionic villus sampling (CVS) based on a systematic review of the literature and a meta-analysis. A search of MEDLINE, EMBASE, CINAHL and The Cochrane Library (2000-2014) was performed to review relevant citations reporting procedure-related complications of amniocentesis and CVS. Only studies reporting data on more than 1000 procedures were included in this review to minimize the effect of bias from smaller studies. Heterogeneity between studies was estimated using Cochran's Q, the I² statistic and the Egger bias test. Meta-analysis of proportions was used to derive weighted pooled estimates for the risk of miscarriage before 24 weeks' gestation. Incidence-rate difference meta-analysis was used to estimate pooled procedure-related risks. The weighted pooled risks of miscarriage following invasive procedures were estimated from analysis of controlled studies including 324 losses in 42 716 women who underwent amniocentesis and 207 losses in 8899 women who underwent CVS. The risk of miscarriage prior to 24 weeks in women who underwent amniocentesis and CVS was 0.81% (95% CI, 0.58-1.08%) and 2.18% (95% CI, 1.61-2.82%), respectively. The background rates of miscarriage in women from the control group that did not undergo any procedures were 0.67% (95% CI, 0.46-0.91%) for amniocentesis and 1.79% (95% CI, 0.61-3.58%) for CVS. The weighted pooled procedure-related risks of miscarriage for amniocentesis and CVS were 0.11% (95% CI, -0.04 to 0.26%) and 0.22% (95% CI, -0.71 to 1.16%), respectively. The procedure-related risks of miscarriage following amniocentesis and CVS are much lower than are currently quoted. Copyright © 2014 ISUOG. Published by John Wiley & Sons Ltd.
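
    As an illustration of pooling proportions across studies, the sketch below performs a simple fixed-effect, inverse-variance pooling of miscarriage risks; it is not the exact incidence-rate difference model used in the review, and the study counts are made-up placeholders.

```python
# Illustrative sketch only (not the exact meta-analytic model in the review):
# fixed-effect, inverse-variance pooling of miscarriage proportions across studies.
# The study counts below are made-up placeholders.
import numpy as np

events = np.array([12, 40, 25, 8])       # hypothetical losses per study
totals = np.array([1500, 5200, 3100, 900])

p = events / totals
var = p * (1 - p) / totals               # binomial variance of each proportion
w = 1.0 / var                            # inverse-variance weights

pooled = np.sum(w * p) / np.sum(w)
se = np.sqrt(1.0 / np.sum(w))
print(f"pooled risk = {pooled:.4f} (95% CI {pooled - 1.96*se:.4f} to {pooled + 1.96*se:.4f})")
```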

  17. Development of three-dimensional shoulder kinematic and electromyographic exposure variation analysis methodology in violin musicians.

    Reynolds, Jonathan F; Leduc, Robert E; Kahnert, Emily K; Ludewig, Paula M

    2014-01-01

    A total of 11 male and 19 female violinists performed 30-second, randomly ordered slow and fast musical repertoire while right-shoulder three-dimensional kinematic data and upper trapezius and serratus anterior surface electromyography (EMG) data were recorded; these were summarised using exposure variation analysis (EVA), a bivariate distribution of work time spent across categories of signal amplitude and of continuous duration spent within an amplitude category. Sixty-two per cent of intraclass correlation coefficients [1,1] for all kinematic and EMG variables exceeded 0.75, and 40% of standard error of measurement results were below 5%, confirming EVA reliability. When fast repertoire was played, increases in odds ratios in short-duration cells were seen in 23 of 24 possible instances, and decreases in longer-duration cells were seen in 17 instances across all EVA arrays using multinomial logistic regression with random effects, confirming a shift towards shorter durations. A reliable technique to assess right shoulder kinematic and EMG exposure in violinists was identified. A reliable method of measuring right shoulder motion and muscle activity exposure variation in violinists was developed, which can be used to assess ergonomic risk in other occupations. Recently developed statistical methods enabled differentiation between fast and slow performance of standardised musical repertoire.
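
    An EVA array of the kind described above can be built by segmenting a signal into runs of constant amplitude category and accumulating time in an amplitude-by-duration table. The sketch below uses a synthetic rectified EMG trace and assumed category boundaries; it is not the authors' software.

```python
# Minimal sketch (assumed, not the authors' software) of an exposure variation
# analysis (EVA) array: the signal is segmented into runs of constant amplitude
# category, and time is accumulated in an (amplitude class x run-duration class) table.
import numpy as np

rng = np.random.default_rng(4)
fs = 100.0                                      # sampling rate (Hz), hypothetical
emg = np.abs(rng.normal(0.0, 0.3, size=6000))   # 60 s of rectified EMG (arbitrary units)

amp_edges = np.array([0.0, 0.1, 0.3, 0.6, np.inf])        # amplitude categories
dur_edges = np.array([0.0, 0.5, 1.0, 3.0, np.inf])        # run-duration categories (s)

amp_class = np.digitize(emg, amp_edges) - 1
eva = np.zeros((len(amp_edges) - 1, len(dur_edges) - 1))

start = 0
for i in range(1, len(amp_class) + 1):
    if i == len(amp_class) or amp_class[i] != amp_class[start]:
        run_time = (i - start) / fs                        # seconds spent in this run
        d = np.digitize(run_time, dur_edges) - 1
        eva[amp_class[start], d] += run_time
        start = i

print(np.round(100.0 * eva / eva.sum(), 1))                # percent of total time per cell
```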

  18. Analysis of Pressure Variations in a Low-Pressure Nickel-Hydrogen Battery - Part 1.

    Purushothaman, B K; Wainright, J S

    2012-05-15

    A low pressure nickel-hydrogen battery using either a metal hydride or gaseous hydrogen for H2 storage has been developed for use in implantable neuroprosthetic devices. In this paper, pressure variations inside the cell for the gaseous hydrogen version are analyzed and correlated with the oxygen evolution side reaction at the end of charging, the recombination of oxygen with hydrogen during charging and a subsequent rest period, and the self-discharge of the nickel electrode. About 70% of the recombination occurred simultaneously with oxygen evolution during charging and the remaining oxygen recombined with hydrogen during the 1st hour after charging. Self-discharge of the cell varies linearly with hydrogen pressure at a given state of charge and increased with increasing battery charge levels. The coulometric efficiency calculated based on analysis of the pressure-time data agreed well with the efficiency calculated based on the current-time data. Pressure variations in the battery are simulated accurately to predict coulometric efficiency and the state of charge of the cell, factors of extreme importance for a battery intended for implantation within the human body.

  19. Analysis of Pressure Variations in a Low-Pressure Nickel-Hydrogen Battery – Part 1

    Purushothaman, B. K.; Wainright, J. S.

    2012-01-01

    A low pressure nickel-hydrogen battery using either a metal hydride or gaseous hydrogen for H2 storage has been developed for use in implantable neuroprosthetic devices. In this paper, pressure variations inside the cell for the gaseous hydrogen version are analyzed and correlated with oxygen evolution side reaction at the end of charging, the recombination of oxygen with hydrogen during charging and a subsequent rest period, and the self-discharge of the nickel electrode. About 70% of the recombination occurred simultaneously with oxygen evolution during charging and the remaining oxygen recombined with hydrogen during the 1st hour after charging. Self-discharge of the cell varies linearly with hydrogen pressure at a given state of charge and increased with increasing battery charge levels. The coulometric efficiency calculated based on analysis of the pressure-time data agreed well with the efficiency calculated based on the current-time data. Pressure variations in the battery are simulated accurately to predict coulometric efficiency and the state of charge of the cell, factors of extreme importance for a battery intended for implantation within the human body. PMID:22423175

  20. Moveout analysis of wide-azimuth data in the presence of lateral velocity variation

    Takanashi, Mamoru

    2012-05-01

    Moveout analysis of wide-azimuth reflection data seldom takes into account lateral velocity variations on the scale of spreadlength. However, velocity lenses (such as channels and reefs) in the overburden can cause significant, laterally varying errors in the moveout parameters and distortions in data interpretation. Here, we present an analytic expression for the normal-moveout (NMO) ellipse in stratified media with lateral velocity variation. The contribution of lateral heterogeneity (LH) is controlled by the second derivatives of the interval vertical traveltime with respect to the horizontal coordinates, along with the depth and thickness of the LH layer. This equation provides a quick estimate of the influence of velocity lenses and can be used to substantially mitigate the lens-induced distortions in the effective and interval NMO ellipses. To account for velocity lenses in nonhyperbolic moveout inversion of wide-azimuth data, we propose a prestack correction algorithm that involves computation of the lens-induced traveltime distortion for each recorded trace. The overburden is assumed to be composed of horizontal layers (one of which contains the lens), but the target interval can be laterally heterogeneous with dipping or curved interfaces. Synthetic tests for horizontally layered models confirm that our algorithm accurately removes lens-related azimuthally varying traveltime shifts and errors in the moveout parameters. The developed methods should increase the robustness of seismic processing of wide-azimuth surveys, especially those acquired for fracture-characterization purposes. © 2012 Society of Exploration Geophysicists.

  1. Social variations in fetal growth in a Russian setting: an analysis of medical records.

    Grjibovski, Andrej M; Bygren, Lars O; Svartbo, Boo; Magnus, Per

    2003-10-01

    The study examines variations in fetal growth by maternal social circumstances in a Russian town. All pregnant women registered at the antenatal clinics in 1999 in Severodvinsk (north-west Russia) and their live born infants comprised the study base (n=1399). Multivariate linear regression analysis was applied to quantify the effect of socio-demographic factors on birthweight and the ponderal index (PI). A clear gradient of birthweight in relation to mothers' education was revealed. Babies of the most educated mothers were 207 g (95% CI, 55, 358) heavier than babies of mothers with basic education. The average weight of those born to mothers with secondary and vocational levels of education was 172 g (95% CI, 91, 253) and 83 g (95% CI, 9, 163) lower, respectively, compared with infants born to mothers with a university level of education, after adjustment for age, parity, pre-pregnancy weight, marital status, maternal occupation, length of gestation, and sex of the baby. Maternal education also influenced the PI. Further studies should focus on the mechanisms underlying the association between maternal education and fetal growth. To ensure that all parts of society benefit equally from economic and social reforms, social variations in pregnancy outcomes should be monitored during the time of transition.

  2. Extrapolating cosmic ray variations and impacts on life: Morlet wavelet analysis

    Zarrouk, N.; Bennaceur, R.

    2009-07-01

    Exposure to cosmic rays may have both a direct and an indirect effect on Earth's organisms. The radiation may lead to higher rates of genetic mutations in organisms, or interfere with their ability to repair DNA damage, potentially leading to diseases such as cancer. Increased cloud cover, which may cool the planet by blocking out more of the Sun's rays, is also associated with cosmic rays. They also interact with molecules in the atmosphere to create nitrogen oxide, a gas that eats away at our planet's ozone layer, which protects us from the Sun's harmful ultraviolet rays. On the ground, humans are protected from cosmic particles by the planet's atmosphere. In this paper we present estimated results of wavelet analysis of solar modulation and cosmic ray data, incorporated into a time-dependent description of cosmic ray variation. Since solar activity can be described as a non-linear chaotic dynamic system, methods such as neural networks and wavelet methods should be very suitable analytical tools. We have therefore computed our results using Morlet wavelets. Many authors have used wavelet techniques for studying solar activity. Here we have analysed and reconstructed cosmic ray variation, and we have better depicted periods or harmonics other than the 11-year solar modulation cycles.
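
    A Morlet-wavelet decomposition of a long cosmic-ray series can be sketched with a hand-rolled continuous wavelet transform, as below; the series, scales and normalisation are assumptions for illustration rather than the analysis actually performed above.

```python
# Minimal sketch (assumptions: synthetic data, hand-rolled transform rather than a
# wavelet library) of a continuous wavelet transform with a Morlet mother wavelet,
# as used above to pick out periodicities such as the ~11-year solar cycle.
import numpy as np

def morlet_cwt(signal, scales, dt, w0=6.0):
    """Complex Morlet CWT; rows = scales, columns = time."""
    t = (np.arange(len(signal)) - len(signal) / 2) * dt
    out = np.empty((len(scales), len(signal)), dtype=complex)
    for i, s in enumerate(scales):
        wavelet = np.exp(1j * w0 * t / s) * np.exp(-(t / s) ** 2 / 2.0)
        wavelet /= np.sqrt(s)                         # crude energy normalisation
        out[i] = np.convolve(signal, np.conj(wavelet)[::-1], mode="same") * dt
    return out

# Hypothetical monthly cosmic-ray series with an 11-year modulation plus noise
dt = 1.0 / 12.0                                        # years per sample
t = np.arange(0.0, 60.0, dt)
series = np.sin(2.0 * np.pi * t / 11.0) + 0.3 * np.random.default_rng(5).normal(size=t.size)

scales = np.linspace(0.5, 20.0, 40)
power = np.abs(morlet_cwt(series, scales, dt)) ** 2
print("scale of maximum mean power (years):", scales[power.mean(axis=1).argmax()])
```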

  3. Moveout analysis of wide-azimuth data in the presence of lateral velocity variation

    Takanashi, Mamoru; Tsvankin, Ilya

    2012-01-01

    Moveout analysis of wide-azimuth reflection data seldom takes into account lateral velocity variations on the scale of spreadlength. However, velocity lenses (such as channels and reefs) in the overburden can cause significant, laterally varying errors in the moveout parameters and distortions in data interpretation. Here, we present an analytic expression for the normal-moveout (NMO) ellipse in stratified media with lateral velocity variation. The contribution of lateral heterogeneity (LH) is controlled by the second derivatives of the interval vertical traveltime with respect to the horizontal coordinates, along with the depth and thickness of the LH layer. This equation provides a quick estimate of the influence of velocity lenses and can be used to substantially mitigate the lens-induced distortions in the effective and interval NMO ellipses. To account for velocity lenses in nonhyperbolic moveout inversion of wide-azimuth data, we propose a prestack correction algorithm that involves computation of the lens-induced traveltime distortion for each recorded trace. The overburden is assumed to be composed of horizontal layers (one of which contains the lens), but the target interval can be laterally heterogeneous with dipping or curved interfaces. Synthetic tests for horizontally layered models confirm that our algorithm accurately removes lens-related azimuthally varying traveltime shifts and errors in the moveout parameters. The developed methods should increase the robustness of seismic processing of wide-azimuth surveys, especially those acquired for fracture-characterization purposes. © 2012 Society of Exploration Geophysicists.

  4. Morphological variation and phylogenetic analysis of the dinoflagellate Gymnodinium aureolum from a tributary of Chesapeake Bay.

    Tang, Ying Zhong; Egerton, Todd A; Kong, Lesheng; Marshall, Harold G

    2008-01-01

    Cultures of four strains of the dinoflagellate Gymnodinium aureolum (Hulburt) G. Hansen were established from the Elizabeth River, a tidal tributary of the Chesapeake Bay, USA. Light microscopy, scanning electron microscopy, nuclear-encoded large sub-unit rDNA sequencing, and culturing observations were conducted to further characterize this species. Observations of morphology included: a multiple structured apical groove; a peduncle located between the emerging points of the two flagella; pentagonal and hexagonal vesicles on the amphiesma; production and germination of resting cysts; variation in the location of the nucleus within the center of the cell; a longitudinal ventral concavity; and considerable variation in cell width/length and overall cell size. A fish bioassay using juvenile sheepshead minnows detected no ichthyotoxicity from any of the strains over a 48-h period. Molecular analysis confirmed the dinoflagellate was conspecific with G. aureolum strains from around the world, and formed a cluster along with several other Gymnodinium species. Morphological evidence suggests that further research is necessary to examine the relationship between G. aureolum and a possibly closely related species Gymnodinium maguelonnense.

  5. Multilocus analysis of nucleotide variation and speciation in three closely related Populus (Salicaceae) species.

    Du, Shuhui; Wang, Zhaoshan; Ingvarsson, Pär K; Wang, Dongsheng; Wang, Junhui; Wu, Zhiqiang; Tembrock, Luke R; Zhang, Jianguo

    2015-10-01

    Historical tectonism and climate oscillations can isolate and contract the geographical distributions of many plant species, and they are even known to trigger species divergence and ultimately speciation. Here, we estimated the nucleotide variation and speciation in three closely related Populus species, Populus tremuloides, P. tremula and P. davidiana, distributed in North America and Eurasia. We analysed the sequence variation in six single-copy nuclear loci and three chloroplast (cpDNA) fragments in 497 individuals sampled from 33 populations of these three species across their geographic distributions. These three Populus species harboured relatively high levels of nucleotide diversity and showed high levels of nucleotide differentiation. Phylogenetic analysis revealed that P. tremuloides diverged earlier than the other two species. The cpDNA haplotype network result clearly illustrated the dispersal route from North America to eastern Asia and then into Europe. Molecular dating results confirmed that the divergence of these three species coincided with the sundering of the Bering land bridge in the late Miocene and a rapid uplift of the Qinghai-Tibetan Plateau around the Miocene/Pliocene boundary. Vicariance-driven successful allopatric speciation resulting from historical tectonism and climate oscillations most likely played roles in the formation of the disjunct distributions and divergence of these three Populus species. © 2015 John Wiley & Sons Ltd.

  6. Spatio-temporal variation analysis of hydrochemical characteristics in the Luanhe River Basin, China.

    Xie, Ying; Li, Xuyong; Wang, Huiliang; Li, Wenzan

    2013-01-01

    The analysis of river pollution and assessment of spatial and temporal variation in hydrochemistry are essential to river water pollution control in the context of rapid economic growth and growing pollution threats in China. In this study, we focused on the hydrochemical characteristics of the Luanhe River Basin (China) and the evaluation of 12 hydrochemical variables obtained from 32 monitoring stations during 2001-2010. In each study year, the streams were monitored in three hydrological periods (April, August, and October) to observe differences in the impacts of agricultural activity and rainfall pattern. Multivariate statistical methods were applied to the data set, and the river water hydrochemical characteristics were assessed using the water quality identification index (WQIIM). The results showed that the parameters contributed variably to water quality status in different months, except for ammonia nitrogen (NH4-N) and total nitrogen (TN), which were the most important parameters contributing to water quality variations in all three periods. Results of the WQIIM revealed that 18 sites were classified as 'meeting standard' while the other 14 sites were classified as 'not meeting standard', with most of the seriously polluted sites located in urban areas, mainly due to discharge of wastewater from domestic and industrial sources. Sites with low pollution levels were located primarily in smaller tributaries, whereas sites of medium and high pollution levels were in the main river channel and the larger tributaries. Our findings provide valuable information and guidance for water pollution control and water resource management in the Luanhe River Basin.

  7. SULT1A1 copy number variation: ethnic distribution analysis in an Indian population.

    Almal, Suhani; Padh, Harish

    2017-11-01

    Cytosolic sulfotransferases (SULTs) are phase II detoxification enzymes involved in the metabolism of numerous xenobiotics, drugs and endogenous compounds. Interindividual variation in sulfonation capacity is important for determining an individual's response to xenobiotics. SNPs in SULTs, mainly SULT1A1, have been associated with cancer risk and also with response to therapeutic agents. Copy number variation (CNV) in SULT1A1 is found to be correlated with altered enzyme activity. This short report primarily focuses on CNV in SULT1A1 and its distribution among different ethnic populations around the globe. The frequency distribution of SULT1A1 copy number (CN) in 157 healthy Indian individuals was assessed using a fluorescence-based quantitative PCR assay. A range of 1 to >4 copies, with SULT1A1 CN = 2 (64.9%) the most frequent, was observed in our (Indian) population. Upon comparative analysis of the frequency distribution of SULT1A1 CN among diverse population groups, a statistically significant difference was observed between Indians (our data) and the African-American (AA) (p = 0.0001) and South African (Tswana) populations. The distribution of CNV in the Indian population was found to be similar to that in European-derived American and Japanese populations. CNV of SULT1A1 varies significantly among world populations and may be one of the determinants of health and disease.

  8. Paleosecular variation analysis of high-latitude paleomagnetic data from the volcanic island of Jan Mayen

    Cromwell, G.; Tauxe, L.; Staudigel, H.; Pedersen, L. R.; Constable, C.; Pedersen, R.; Duncan, R. A.; Staudigel, P.

    2009-12-01

    Recent investigation of high-latitude paleomagnetic data from the Erebus Volcanic Province (EVP), Antarctica shows a departure from magnetic dipole predictions for paleointensity data for the period 0-5 Ma. The average EVP paleointensity (31.5 +/- 2.4 μT) is equivalent to low-latitude measurements (1) or approximately half the strength predicted for a dipole at high-latitude. Also, paleosecular variation models (e.g., 2,3) predict dispersions of directions that are much lower than the high latitude observations. Observed low intensity values may be the result of reduced convective flow inside the tangent cylinder of the Earth’s core or insufficient temporal sampling (1). More high-latitude paleomagnetic data are necessary in order to investigate the cause of the depressed intensity values and to provide better geographic and temporal resolution for future statistical paleosecular variation models. To address this, we carried out two field seasons, one in Spitzbergen (79°N, 14°E) and one on the young volcanic island of Jan Mayen (71°N, 8°W). The latter sampling effort was guided by age analyses of samples obtained by P. Imsland (unpublished and 4). We will present new paleodirectional and paleointensity data from a total of 25 paleomagnetic sites. These data enhance the temporal resolution of global paleomagnetic data and allow for a more complete evaluation of the time-averaged magnetic field from 0-5 Ma. We will present a new analysis of paleosecular variation based on our new data, in combination with other recently published data sets. (1) Lawrence, K.P., L.Tauxe, H. Staudigel, C.G. Constable, A. Koppers, W. MacIntosh, C.L. Johnson, Paleomagnetic field properties at high southern latitude. Geochemistry Geophysics Geosystems 10 (2009). (2) McElhinny, M.W., P.L. McFadden, Paleosecular variation over the past 5 Myr based on a new generalized database. Geophysics Journal International 131 (1997), 240-252. (3) Tauxe, L., Kent, D.V., A simplified statistical

  9. Increasing spelling achievement: an analysis of treatment procedures utilizing an alternating treatments design.

    Ollendick, T H; Matson, J L; Esveldt-Dawson, K; Shapiro, E S

    1980-01-01

    Two studies which examine the effectiveness of spelling remediation procedures are reported. In both studies, an alternating treatment design was employed. In the first study, positive practice overcorrection plus positive reinforcement was compared to positive practice alone and a no-remediation control condition. In the second study, positive practice plus positive reinforcement was compared to a traditional corrective procedure plus positive reinforcement and a traditional procedure when u...

  11. Analysis of Rare, Exonic Variation amongst Subjects with Autism Spectrum Disorders and Population Controls

    Liu, Li; Sabo, Aniko; Neale, Benjamin M.; Nagaswamy, Uma; Stevens, Christine; Lim, Elaine; Bodea, Corneliu A.; Muzny, Donna; Reid, Jeffrey G.; Banks, Eric; Coon, Hillary; DePristo, Mark; Dinh, Huyen; Fennel, Tim; Flannick, Jason; Gabriel, Stacey; Garimella, Kiran; Gross, Shannon; Hawes, Alicia; Lewis, Lora; Makarov, Vladimir; Maguire, Jared; Newsham, Irene; Poplin, Ryan; Ripke, Stephan; Shakir, Khalid; Samocha, Kaitlin E.; Wu, Yuanqing; Boerwinkle, Eric; Buxbaum, Joseph D.; Cook, Edwin H.; Devlin, Bernie; Schellenberg, Gerard D.; Sutcliffe, James S.; Daly, Mark J.; Gibbs, Richard A.; Roeder, Kathryn

    2013-01-01

    We report on results from whole-exome sequencing (WES) of 1,039 subjects diagnosed with autism spectrum disorders (ASD) and 870 controls selected from the NIMH repository to be of similar ancestry to cases. The WES data came from two centers using different methods to produce sequence and to call variants from it. Therefore, an initial goal was to ensure the distribution of rare variation was similar for data from different centers. This proved straightforward by filtering called variants by fraction of missing data, read depth, and balance of alternative to reference reads. Results were evaluated using seven samples sequenced at both centers and by results from the association study. Next we addressed how the data and/or results from the centers should be combined. Gene-based analyses of association was an obvious choice, but should statistics for association be combined across centers (meta-analysis) or should data be combined and then analyzed (mega-analysis)? Because of the nature of many gene-based tests, we showed by theory and simulations that mega-analysis has better power than meta-analysis. Finally, before analyzing the data for association, we explored the impact of population structure on rare variant analysis in these data. Like other recent studies, we found evidence that population structure can confound case-control studies by the clustering of rare variants in ancestry space; yet, unlike some recent studies, for these data we found that principal component-based analyses were sufficient to control for ancestry and produce test statistics with appropriate distributions. After using a variety of gene-based tests and both meta- and mega-analysis, we found no new risk genes for ASD in this sample. Our results suggest that standard gene-based tests will require much larger samples of cases and controls before being effective for gene discovery, even for a disorder like ASD. PMID:23593035
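
    The mega- versus meta-analysis point above can be illustrated with a toy simulation. The sketch below is not the paper's code; carrier frequencies, sample sizes, and the choice of Fisher's exact test are invented for illustration. It pools rare-variant carrier counts across two hypothetical centers for a single test (mega) and compares that with combining per-center p-values (meta); with sparse counts the pooled test typically retains more power, consistent with the conclusion above.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n_cases, n_controls = 500, 450          # per center (hypothetical)
    p_ctrl, p_case = 0.01, 0.025            # rare-variant carrier probabilities (hypothetical)
    alpha, n_sim = 0.05, 2000
    hits_mega = hits_meta = 0

    for _ in range(n_sim):
        per_center = []
        for _ in range(2):
            a = rng.binomial(n_cases, p_case)      # case carriers
            b = rng.binomial(n_controls, p_ctrl)   # control carriers
            per_center.append((a, b))
        # mega-analysis: pool counts across centers, one 2x2 Fisher exact test
        A = sum(a for a, _ in per_center)
        B = sum(b for _, b in per_center)
        p_mega = stats.fisher_exact([[A, 2 * n_cases - A], [B, 2 * n_controls - B]])[1]
        # meta-analysis: per-center Fisher tests combined by Fisher's method
        pvals = [stats.fisher_exact([[a, n_cases - a], [b, n_controls - b]])[1]
                 for a, b in per_center]
        p_meta = stats.combine_pvalues(pvals, method="fisher")[1]
        hits_mega += p_mega < alpha
        hits_meta += p_meta < alpha

    print(f"power (mega) ~ {hits_mega / n_sim:.2f}, power (meta) ~ {hits_meta / n_sim:.2f}")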

  12. Evaluation of some procedures relevant to the determination of trace elemental components in biological materials by destructive neutron activation analysis

    Berry, D.L.

    1979-01-01

    The development of a simplified procedure for the analysis of biological materials by destructive neutron activation analysis (DNAA) is described. The sample manipulations preceding gamma ray assay were investigated as five specific stages of processing: (1) pre-irradiation treatment; (2) sample irradiation; (3) removal of the organic matrix; (4) removal of interfering radioactivities; and (5) concentration and separation of analyte activities. Each stage was evaluated with respect to susceptibility to sample contamination, loss of trace elemental components, and compatibility with other operations in the overall DNAA procedures. A complete DNAA procedure was proposed and evaluated for the analysis of standard bovine liver and blood samples. The DNAA system was effective for the determination of As, Cu, Fe, Hg, Mo, Rb, Sb, Se, and Zn without yield determinations and with a minimum turn-around time of approximately 3 days

  14. SECOND-ORDER VARIATIONAL ANALYSIS IN CONIC PROGRAMMING WITH APPLICATIONS TO OPTIMALITY AND STABILITY

    Mordukhovich, B. S.; Outrata, Jiří; Ramírez, H. C.

    2015-01-01

    Roč. 25, č. 1 (2015), s. 76-101 ISSN 1052-6234 R&D Projects: GA ČR(CZ) GAP201/12/0671 Grant - others: Australian Research Council(AU) DP-110102011; USA National Science Foundation(US) DMS-1007132; Australian Research Council(AU) DP-12092508; Portuguese Foundation of Science and Technologies(PT) MAT/11109; FONDECYT Project(CL) 1110888; Universidad de Chile(CL) BASAL Project Centro de Modelamiento Matematico Institutional support: RVO:67985556 Keywords: variational analysis * second-order theory * conic programming * generalized differentiation * optimality conditions * isolated calmness * tilt stability Subject RIV: BA - General Mathematics Impact factor: 2.659, year: 2015 http://library.utia.cas.cz/separaty/2015/MTR/outrata-0439413.pdf

  15. Statistical intensity variation analysis for rapid volumetric imaging of capillary network flux.

    Lee, Jonghwan; Jiang, James Y; Wu, Weicheng; Lesage, Frederic; Boas, David A

    2014-04-01

    We present a novel optical coherence tomography (OCT)-based technique for rapid volumetric imaging of red blood cell (RBC) flux in capillary networks. Previously we reported that OCT can capture individual RBC passage within a capillary, where the OCT intensity signal at a voxel fluctuates when an RBC passes the voxel. Based on this finding, we defined a metric of statistical intensity variation (SIV) and validated that the mean SIV is proportional to the RBC flux [RBC/s] through simulations and measurements. From rapidly scanned volume data, we used Hessian matrix analysis to vectorize a segment path of each capillary and estimate its flux from the mean of the SIVs gathered along the path. Repeating this process led to a 3D flux map of the capillary network. The present technique enabled us to trace the RBC flux changes over hundreds of capillaries with a temporal resolution of ~1 s during functional activation.
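
    A rough sketch of the voxel-wise idea described above, approximating the SIV metric by the temporal variance of the OCT intensity at each voxel; the exact SIV definition, scan protocol, and flux calibration follow the cited paper, and the data here are simulated:

    import numpy as np

    rng = np.random.default_rng(1)
    # intensity[t, z, x, y]: repeated frames of a small simulated volume
    intensity = rng.normal(loc=1.0, scale=0.05, size=(50, 16, 32, 32))

    # temporal variance of intensity at each voxel, used as a proxy for SIV;
    # averaging it along a vectorized capillary path would give a flux estimate
    siv_map = intensity.var(axis=0)
    print(siv_map.shape, float(siv_map.mean()))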

  16. Numerical Analysis of Through Transmission Pulsed Eddy Current Testing and Effects of Pulse Width Variation

    Shin, Young Kil; Choi, Dong Myung

    2007-01-01

    Using numerical analysis methods, through-transmission pulsed eddy current (PEC) testing is modeled, and PEC signal responses due to varying material conductivity, permeability, thickness, lift-off and pulse width are investigated. The results show that the peak amplitude of the PEC signal is reduced, and the time to reach the peak amplitude increases, as the material conductivity, permeability, and specimen thickness increase. They also indicate that the pulse width needs to be shorter when evaluating the material conductivity and plate thickness using the peak amplitude, whereas when the pulse width is long, the peak time is more useful. Other results related to lift-off variation are reported as well.

  17. Variational analysis for simulating free-surface flows in a porous medium

    Shabbir Ahmed

    2003-01-01

    is used to obtain a discrete form of equations for a two-dimensional domain. The matrix characteristics and the stability criteria have been investigated to develop a stable numerical algorithm for solving the governing equation. A computer programme has been written to solve a symmetric positive definite system obtained from the variational finite element analysis. The system of equations is solved using the conjugate gradient method. The solution generates time-varying hydraulic heads in the subsurface. The interfacing free surface between the unsaturated and saturated zones in the variably saturated domain is located, based on the computed hydraulic heads. Example problems are investigated. The finite element solutions are compared with the exact solutions for the example problems. The numerical characteristics of the finite element solution method are also investigated using the example problems.
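
    The final solution step, a conjugate-gradient solve of a symmetric positive definite system, can be sketched as follows; the matrix below is a generic stand-in, not the finite element system assembled in the paper:

    import numpy as np
    from scipy.sparse import diags
    from scipy.sparse.linalg import cg

    n = 100
    # 1D Laplacian-like SPD matrix as a placeholder for the assembled FE matrix
    A = diags([-1, 2, -1], offsets=[-1, 0, 1], shape=(n, n), format="csr")
    b = np.ones(n)

    x, info = cg(A, b)        # conjugate-gradient iteration
    print("cg exit code:", info, " residual norm:", np.linalg.norm(A @ x - b))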

  18. Is there much variation in variation? Revisiting statistics of small area variation in health services research

    Ibáñez Berta

    2009-04-01

    Background: The importance of Small Area Variation Analysis for policy-making contrasts with the scarcity of work on the validity of the statistics used in these studies. Our study aims at (1) determining whether variation in utilization rates between health areas is higher than would be expected by chance, (2) estimating the statistical power of the variation statistics, and (3) evaluating the ability of different statistics to compare the variability among different procedures regardless of their rates. Methods: Parametric bootstrap techniques were used to derive the empirical distribution for each statistic under the hypothesis of homogeneity across areas. Non-parametric procedures were used to analyze the empirical distribution of the observed statistics and compare the results in six situations (low/medium/high utilization rates and low/high variability). A small-scale simulation study was conducted to assess the capacity of each statistic to discriminate between different scenarios with different degrees of variation. Results: Bootstrap techniques proved to be good at quantifying the difference between the null hypothesis and the variation observed in each situation, and at constructing reliable tests and confidence intervals for each of the variation statistics analyzed. Despite the good performance of the Systematic Component of Variation (SCV), the Empirical Bayes (EB) statistic shows better behaviour under the null hypothesis, is able to detect variability when present, is not influenced by the procedure rate, and is best able to discriminate between different degrees of heterogeneity. Conclusion: The EB statistic seems to be a good alternative to more conventional statistics used in small-area variation analysis in health services research because of its robustness.
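
    The parametric-bootstrap construction described in the Methods can be sketched as follows; the statistic uses one common formulation of the SCV (McPherson et al.), and the expected and observed area counts are hypothetical:

    import numpy as np

    rng = np.random.default_rng(2)
    expected = np.array([40., 55., 30., 80., 25., 60.])   # E_i under homogeneity (hypothetical)
    observed = np.array([52., 41., 35., 96., 18., 57.])   # O_i actually seen (hypothetical)

    def scv(obs, exp):
        # systematic component of variation, one common formulation (x100)
        return 100.0 * np.mean((obs - exp) ** 2 / exp ** 2 - 1.0 / exp)

    obs_stat = scv(observed, expected)
    # empirical null distribution: simulate counts under homogeneity (Poisson around E_i)
    null = np.array([scv(rng.poisson(expected), expected) for _ in range(10000)])
    p_value = np.mean(null >= obs_stat)
    print(f"SCV = {obs_stat:.2f}, bootstrap p = {p_value:.3f}")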

  19. Application of Archimedean copulas to the analysis of drought decadal variation in China

    Zuo, Dongdong; Feng, Guolin; Zhang, Zengping; Hou, Wei

    2017-12-01

    Based on daily precipitation data collected from 1171 stations in China during 1961-2015, the monthly standardized precipitation index was derived and used to extract two major drought characteristics, drought duration and severity. Next, a bivariate joint model was established based on the marginal distributions of the two variables and Archimedean copula functions. The joint probability and return period were calculated to analyze the drought characteristics and decadal variation. According to the fit analysis, the Gumbel-Hougaard copula provided the best fit to the observed data. Based on four drought duration classifications and four severity classifications, the drought events were divided into 16 drought types according to the different combinations of duration and severity classifications, and the probability and return period were analyzed for different drought types. The results showed that the occurrence probability of the six most common drought types together accounted for 76% of the total probability of all types. Moreover, due to their greater variation, two drought types were particularly notable, i.e., the drought types where D ≥ 6 and S ≥ 2. Analyzing the joint probability in different decades indicated that the location of the drought center had a distinctive stage feature, which cycled from north to northeast to southwest during 1961-2015. However, southwest, north, and northeast China had a higher drought risk. In addition, the drought situation in southwest China should be noted because the joint probability values, return period, and the analysis of trends in the drought duration and severity all indicated a considerable risk in recent years.
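
    As a sketch of the copula-based calculation described above, the joint exceedance probability and the "and"-type return period under a Gumbel-Hougaard copula can be computed as follows; the marginal CDF values, the dependence parameter, and the mean drought inter-arrival time are all illustrative assumptions, not values from the study:

    import math

    def gumbel_copula(u, v, theta):
        # Gumbel-Hougaard copula C(u, v) = exp(-[(-ln u)^theta + (-ln v)^theta]^(1/theta))
        return math.exp(-(((-math.log(u)) ** theta + (-math.log(v)) ** theta) ** (1.0 / theta)))

    theta = 2.5          # dependence parameter (illustrative)
    Fd, Fs = 0.90, 0.85  # marginal CDF values at a chosen duration and severity (illustrative)
    mu = 0.5             # mean inter-arrival time of drought events in years (illustrative)

    # P(D >= d and S >= s) via inclusion-exclusion with the copula
    p_and = 1.0 - Fd - Fs + gumbel_copula(Fd, Fs, theta)
    return_period = mu / p_and
    print(f"joint exceedance prob = {p_and:.4f}, return period ~ {return_period:.1f} years")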

  20. Segmental Quantitative MR Imaging analysis of diurnal variation of water content in the lumbar intervertebral discs

    Zhu, Ting Ting; Ai, Tao; Zhang, Wei; Li, Tao; Li, Xiao Ming

    2015-01-01

    To investigate the changes in water content in the lumbar intervertebral discs by quantitative T2 MR imaging in the morning after bed rest and evening after a diurnal load. Twenty healthy volunteers were separately examined in the morning after bed rest and in the evening after finishing daily work. T2-mapping images were obtained and analyzed. An equally-sized rectangular region of interest (ROI) was manually placed in both, the anterior and the posterior annulus fibrosus (AF), in the outermost 20% of the disc. Three ROIs were placed in the space defined as the nucleus pulposus (NP). Repeated-measures analysis of variance and paired 2-tailed t tests were used for statistical analysis, with p < 0.05 as significantly different. T2 values significantly decreased from morning to evening, in the NP (anterior NP = -13.9 ms; central NP = -17.0 ms; posterior NP = -13.3 ms; all p < 0.001). Meanwhile T2 values significantly increased in the anterior AF (+2.9 ms; p = 0.025) and the posterior AF (+5.9 ms; p < 0.001). T2 values in the posterior AF showed the largest degree of variation among the 5 ROIs, but there was no statistical significance (p = 0.414). Discs with initially low T2 values in the center NP showed a smaller degree of variation in the anterior NP and in the central NP, than in discs with initially high T2 values in the center NP (10.0% vs. 16.1%, p = 0.037; 6.4% vs. 16.1%, p = 0.006, respectively). Segmental quantitative T2 MRI provides valuable insights into physiological aspects of normal discs.

  1. Analysis of conformational variations of the cricoid cartilages in Thoroughbred horses using computed tomography.

    Dahlberg, J A; Valdes-Martinez, A; Boston, R C; Parente, E J

    2011-03-01

    Loss of arytenoid abduction is a common post operative complication of laryngoplasty without a definitive cause. It has been a clinical impression during laryngoplasty surgery that there is great conformational variability along the caudal edge of the Thoroughbred cricoid cartilage that could impact post operative retention of suture position. A change in suture position would probably lead to some loss of abduction. Defining any structural variability of the cricoid would be an initial step in determining whether this variability could impact on the retention of suture position. Anatomical variations in the larynx of Thoroughbred horses may be detected and measured using objective analysis and computed tomography. Larynges were harvested from 15 mature Thoroughbred horses. Helical CT scans were performed on each specimen. Three independent observers performed a series of measurements on 2D and 3D reconstruction images using digital software. Measurements included the lateral cricoid angle, the caudal cricoid prominences, the distance to the cricoid slope, the angle of the cricoarytenoid joints (CAJ), the cricoid thickness and the suture angle. Mean, standard deviation, coefficient of variation and linear regression analysis were performed among all observers and all measurements. Notable conformational differences were evident on the 3D reconstructions. The highest degree of variability was found in 3 measurements: the distance to the lateral cricoid slope, the lateral cricoid angle and the cricoid thickness. A larger left CAJ angle directly and significantly correlated with a larger suture angle. There are notable conformational differences among cricoid specimens in the Thoroughbred larynx. The morphometric differences identified may impact on optimal prosthesis placement and long-term retention. Since a larger lateral cricoid angle may facilitate abduction loss secondary to a displaced and loosened suture, alternative techniques for suture placement may be of

  2. Systematic documentation and analysis of human genetic variation in hemoglobinopathies using the microattribution approach

    B. Giardine (Belinda); J. Borg (Joseph); D.R. Higgs (Douglas); K.R. Peterson (Kenneth R.); J.N.J. Philipsen (Sjaak); D. Maglott (Donna); B.K. Singleton (Belinda K.); D.J. Anstee (David J.); A.N. Basak (Nazli); B.H. Clark (Bruce); F.C. Costa (Flavia C.); P. Faustino (Paula); H. Fedosyuk (Halyna); A.E. Felice (Alex); A. Francina (Alain); R. Galanello (Renzo); M.V.E. Gallivan (Monica V. E.); M. Georgitsi (Marianthi); R.J. Gibbons (Richard J.); P.C. Giordano (Piero Carlo); C.L. Harteveld (Cornelis); J.D. Hoyer (James D.); M. Jarvis (Martin); P. Joly (Philippe); E. Kanavakis (Emmanuel); P. Kollia (Panagoula); S. Menzel (Stephan); W.G. Miller (William); K. Moradkhani (Kamran); J. Old (John); A. Papachatzpoulou (Adamantia); M.N. Papadakis (Manoussos); P. Papadopoulos (Petros); S. Pavlovic (Sonja); L. Perseu (Lucia); M. Radmilovic (Milena); C. Riemer (Cathy); S. Satta (Stefania); I.A. Schrijver (Ingrid); M. Stojiljkovic (Maja); S.L. Thein; J. Traeger-Synodinos (Joanne); R. Tully (Ray); T. Wada (Takahito); J.S. Waye (John); C. Wiemann (Claudia); B. Zukic (Branka); D.H.K. Chui (David H. K.); H. Wajcman (Henri); R. Hardison (Ross); G.P. Patrinos (George)

    2011-01-01

    We developed a series of interrelated locus-specific databases to store all published and unpublished genetic variation related to hemoglobinopathies and thalassemia, and implemented microattribution to encourage submission of unpublished observations of genetic variation to these public databases.

  3. Semi-automated uranium analysis by a modified Davies-Gray procedure

    Swanson, G.C.

    1977-01-01

    To rapidly and reliably determine uranium in fuel materials a semi-automated implementation of the Davies-Gray uranium titration was developed. The Davies-Gray method is essentially a three step procedure. First uranium is reduced quantitatively from +6 valence to +4 valence by excess of iron (II) in strong phosphoric acid in the absence of nitrite. Prior to the uranium reduction nitrite is destroyed by addition of sulfamic acid. In the second step iron (II) is selectively oxidized to iron (III) by nitric acid in the presence of Mo (VI) catalyst. Finally after dilution to reduce phosphate concentration, the uranium is titrated to U (VI) by standard dichromate. The original sluggish colorimetric endpoint determination used by Davies and Gray is seldom used since New Brunswick Laboratory discovered that addition of vanadium (IV) just prior to titration sufficiently improves reaction rate to allow a potentiometric endpoint determination. One of the advantages of the Davies-Gray uranium titration is that it is quite specific for uranium, most common impurity elements do not interfere with the analysis, and specifically high levels of Pu, Th, and Fe are tolerated

  4. A simplified calculation procedure for mass isotopomer distribution analysis (MIDA) based on multiple linear regression.

    Fernández-Fernández, Mario; Rodríguez-González, Pablo; García Alonso, J Ignacio

    2016-10-01

    We have developed a novel, rapid and easy calculation procedure for Mass Isotopomer Distribution Analysis based on multiple linear regression which allows the simultaneous calculation of the precursor pool enrichment and the fraction of newly synthesized labelled proteins (fractional synthesis) using linear algebra. To test this approach, we used the peptide RGGGLK as a model tryptic peptide containing three subunits of glycine. We selected glycine labelled with two 13C atoms (13C2-glycine) as the labelled amino acid to demonstrate that spectral overlap is not a problem in the proposed methodology. The developed methodology was tested first in vitro by changing the precursor pool enrichment from 10 to 40% 13C2-glycine. Secondly, a simulated in vivo synthesis of proteins was designed by combining the natural abundance RGGGLK peptide and 10 or 20% 13C2-glycine at 1:1, 1:3 and 3:1 ratios. Precursor pool enrichments and fractional synthesis values were calculated with satisfactory precision and accuracy using a simple spreadsheet. This novel approach can provide a relatively rapid and easy means to measure protein turnover based on stable isotope tracers. Copyright © 2016 John Wiley & Sons, Ltd.
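
    A heavily simplified sketch of the underlying mixture idea follows: the measured isotopomer pattern is modelled as a linear combination of the pre-existing (natural abundance) pattern and the pattern of newly synthesized peptide. All spectra below are invented, and the pattern of the labelled peptide is treated as known for a given enrichment, whereas the published procedure also solves for the precursor pool enrichment itself.

    import numpy as np

    natural = np.array([0.80, 0.15, 0.04, 0.01])    # M+0..M+3, pre-existing peptide (invented)
    labelled = np.array([0.30, 0.25, 0.35, 0.10])   # pattern at the assumed enrichment (invented)
    measured = 0.7 * natural + 0.3 * labelled       # a synthetic "observation"

    # least-squares estimate of the two mixture fractions
    X = np.column_stack([natural, labelled])
    coef, *_ = np.linalg.lstsq(X, measured, rcond=None)
    fractional_synthesis = coef[1] / coef.sum()
    print(f"estimated fractional synthesis ~ {fractional_synthesis:.2f}")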

  5. Error Analysis Of Students Working About Word Problem Of Linear Program With NEA Procedure

    Santoso, D. A.; Farid, A.; Ulum, B.

    2017-06-01

    Evaluation and assessment are an important part of learning. In the evaluation of learning, written tests are still commonly used, but they are usually not followed up by further evaluation: the process stops at grading rather than examining the working process and the errors made by students. If a student shows a recurring pattern of errors, remedial actions can be focused on the fault and on why it happens. The NEA procedure provides a way for educators to evaluate student progress more comprehensively. In this study, students' mistakes in solving word problems on linear programming were analyzed. The mistakes students most often made occurred in the modelling (transformation) phase and in process skills, accounting for 20% and 15% of errors, respectively. According to the observations, these errors most commonly arise from a lack of precision in modelling and from haste in calculation. Through this error analysis, educators are expected to be able to choose the right approach to address these errors in subsequent lessons.

  6. A Sensitive Photometric Procedure for Cobalt Determination in Water Employing a Compact Multicommuted Flow Analysis System.

    da Silva Magalhães, Ticiane; Reis, Boaventura F

    2017-09-01

    In this work, a multicommuted flow analysis procedure is proposed for the spectrophotometric determination of cobalt in fresh water, employing an instrument setup of downsized dimension and improved cost-effectiveness. The method is based on the catalytic effect of Co(II) on the Tiron oxidation by hydrogen peroxide in alkaline medium, forming a complex that absorbs radiation at 425 nm. The photometric detection was accomplished using a homemade light-emitting-diode (LED)-based photometer designed to use a flow cell with an optical path-length of 100 mm to improve sensitivity. After selecting adequate values for the flow system variables, adherence to the Beer-Lambert-Bouguer law was observed for standard solution concentrations in the range of 0.13-1.5 µg L⁻¹ Co(II). Other useful features were achieved, including a relative standard deviation of 2.0% (n = 11) for a sample with 0.49 µg L⁻¹ Co(II), a detection limit of 0.06 µg L⁻¹ Co(II) (n = 20), an analytical frequency of 42 sample determinations per hour, and waste generation of 1.5 mL per determination.
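
    For context, the calibration and detection-limit arithmetic behind such figures of merit can be sketched as below; the absorbance readings and blank replicates are invented, and the 3-sigma/slope convention is only one common way to estimate a detection limit, not necessarily the one used in the paper:

    import numpy as np

    conc = np.array([0.13, 0.25, 0.50, 0.75, 1.00, 1.50])            # ug/L Co(II) standards
    absorbance = np.array([0.021, 0.040, 0.079, 0.118, 0.159, 0.238]) # invented readings
    blanks = np.array([0.002, 0.007, 0.001, 0.005, 0.003])            # replicate blank readings

    slope, intercept = np.polyfit(conc, absorbance, 1)   # linear Beer-Lambert calibration
    lod = 3 * blanks.std(ddof=1) / slope                 # 3*sigma(blank)/slope convention
    print(f"slope = {slope:.4f} AU per ug/L, LOD ~ {lod:.3f} ug/L")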

  7. Reliable and Efficient Procedure for Steady-State Analysis of Nonautonomous and Autonomous Systems

    J. Dobes

    2012-04-01

    The majority of contemporary design tools still do not contain steady-state algorithms, especially for the autonomous systems. This is mainly caused by insufficient accuracy of the algorithm for numerical integration, but also by unreliable steady-state algorithms themselves. Therefore, in the paper, a very stable and efficient procedure for the numerical integration of nonlinear differential-algebraic systems is defined first. Afterwards, two improved methods are defined for finding the steady state, which use this integration algorithm in their iteration loops. The first is based on the idea of extrapolation, and the second utilizes nonstandard time-domain sensitivity analysis. The two steady-state algorithms are compared by analyses of a rectifier and a C-class amplifier, and the extrapolation algorithm is primarily selected as a more reliable alternative. Finally, the method based on the extrapolation naturally cooperating with the algorithm for solving the differential-algebraic systems is thoroughly tested on various electronic circuits: Van der Pol and Colpitts oscillators, fragment of a large bipolar logical circuit, feedback and distributed microwave oscillators, and power amplifier. The results confirm that the extrapolation method is faster than a classical plain numerical integration, especially for larger circuits with complicated transients.

  8. Hemodynamic outcomes of the Ross procedure versus other aortic valve replacement: a systematic review and meta-analysis.

    Um, Kevin J; McCLURE, Graham R; Belley-Cote, Emilie P; Gupta, Saurabh; Bouhout, Ismail; Lortie, Hugo; Alraddadi, Hatim; Alsagheir, Ali; Bossard, Matthias; McINTYRE, William F; Lengyel, Alexandra; Eikelboom, John W; Ouzounian, Maral; Chu, Michael W; Parry, Dominic; El-Hamamsy, Ismail; Whitlock, Richard P

    2018-01-09

    Life expectancy in young adults undergoing mechanical or bioprosthetic aortic valve replacement (AVR) may be reduced by up to 20 years compared to age-matched controls. The Ross procedure is a durable, anticoagulation-sparing alternative. We performed a systematic review and meta-analysis to compare the valve hemodynamics of the Ross procedure versus other AVR. We searched Cochrane CENTRAL, MEDLINE and EMBASE from inception to February 2017 for randomized controlled trials (RCTs) and observational studies (n≥10 Ross). Independently and in duplicate, we performed title and abstract screening, full-text eligibility assessment, and data collection. We evaluated the risk of bias with the Cochrane and CLARITY tools, and the quality of evidence with the GRADE framework. We identified 2 RCTs and 13 observational studies that met eligibility criteria (n=1,412). In observational studies, the Ross procedure was associated with a lower mean aortic gradient at discharge (MD -9 mmHg, 95% CI [-13, -5]). The Ross procedure was also associated with a lower mean gradient at latest follow-up (MD -15 mmHg, 95% CI [-32, 2], p=0.08, I2=99%). The mean pulmonic gradient for the Ross procedure was 18.0 mmHg (95% CI [16, 20]). Overall, the Ross procedure was associated with better aortic valve hemodynamics. Future studies should evaluate the impact of the Ross procedure on exercise capacity and quality of life.
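
    The pooled mean differences quoted above come from inverse-variance meta-analysis; a minimal DerSimonian-Laird random-effects sketch, with invented per-study mean differences and standard errors rather than the review's data, is:

    import numpy as np

    y  = np.array([-8.0, -11.0, -6.5, -12.0])   # mean difference per study, mmHg (invented)
    se = np.array([ 2.0,   3.0,  2.5,   4.0])   # standard errors (invented)
    v = se ** 2

    w_fixed = 1.0 / v
    y_fe = np.sum(w_fixed * y) / np.sum(w_fixed)
    Q = np.sum(w_fixed * (y - y_fe) ** 2)
    k = len(y)
    # DerSimonian-Laird between-study variance estimate
    tau2 = max(0.0, (Q - (k - 1)) / (np.sum(w_fixed) - np.sum(w_fixed ** 2) / np.sum(w_fixed)))

    w = 1.0 / (v + tau2)
    pooled = np.sum(w * y) / np.sum(w)
    se_pooled = np.sqrt(1.0 / np.sum(w))
    ci = (pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled)
    i2 = max(0.0, (Q - (k - 1)) / Q) * 100
    print(f"MD = {pooled:.1f} mmHg, 95% CI [{ci[0]:.1f}, {ci[1]:.1f}], I2 = {i2:.0f}%")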

  9. Probabilistic safety analysis procedures guide, Sections 8-12. Volume 2, Rev. 1

    McCann, M.; Reed, J.; Ruger, C.; Shiu, K.; Teichmann, T.; Unione, A.; Youngblood, R.

    1985-08-01

    A procedures guide for the performance of probabilistic safety assessment has been prepared for interim use in the Nuclear Regulatory Commission programs. It will be revised as comments are received, and as experience is gained from its use. The probabilistic safety assessment studies performed are intended to produce probabilistic predictive models that can be used and extended by the utilities and by NRC to sharpen the focus of inquiries into a range of issues affecting reactor safety. The first volume of the guide describes the determination of the probability (per year) of core damage resulting from accident initiators internal to the plant (i.e., intrinsic to plant operation) and from loss of off-site electric power. The scope includes human reliability analysis, a determination of the importance of various core damage accident sequences, and an explicit treatment and display of uncertainties for key accident sequences. This second volume deals with the treatment of the so-called external events including seismic disturbances, fires, floods, etc. Ultimately, the guide will be augmented to include the plant-specific analysis of in-plant processes (i.e., containment performance). This guide provides the structure of a probabilistic safety study to be performed, and indicates what products of the study are valuable for regulatory decision making. For internal events, methodology is treated in the guide only to the extent necessary to indicate the range of methods which is acceptable; ample reference is given to alternative methodologies which may be utilized in the performance of the study. For external events, more explicit guidance is given

  10. Cost-benefit analysis model: A tool for area-wide fruit fly management. Procedures manual

    Enkerlin, W.; Mumford, J.; Leach, A.

    2007-03-01

    The Generic Fruit Fly Cost-Benefit Analysis Model assists in economic decision making associated with area-wide fruit fly control options. The FRUIT FLY COST-BENEFIT ANALYSIS PROGRAM (available on 1 CD-ROM from the Joint FAO/IAEA Programme of Nuclear Techniques in Food and Agriculture) is an Excel 2000 Windows based program, for which all standard Windows and Excel conventions apply. The Model is user friendly and thus largely self-explanatory. Nevertheless, it includes a procedures manual that has been prepared to guide the user, and thus should be used together with the software. Please note that the table presenting the pest management options in the Introductory Page of the model is controlled by spin buttons and click boxes. These controls are linked to macros that hide non relevant tables and boxes. N.B. it is important that the medium level of security is selected from the Tools menu of Excel, to do this go to Tools|Macros|Security| and select Medium. When the file is opened a form will appear containing three buttons, click on the middle button, 'Enable Macros', so that the macros may be used. Ideally the model should be used as a support tool by working groups aiming at assessing the economic returns of different fruit fly control options (suppression, eradication, containment and prevention). The working group should include professionals in agriculture with experience in area-wide implementation of integrated pest management programmes, an economist or at least someone with basic knowledge in economics, and if relevant, an entomologist with some background in the application of the sterile insect technique (SIT)

  11. Probabilistic safety analysis procedures guide. Sections 1-7 and appendices. Volume 1, Revision 1

    Bari, R.A.; Buslik, A.J.; Cho, N.Z.

    1985-08-01

    A procedures guide for the performance of probabilistic safety assessment has been prepared for interim use in the Nuclear Regulatory Commission programs. It will be revised as comments are received, and as experience is gained from its use. The probabilistic safety assessment studies performed are intended to produce probabilistic predictive models that can be used and extended by the utilities and by NRC to sharpen the focus of inquiries into a range of issues affecting reactor safety. This first volume of the guide describes the determination of the probability (per year) of core damage resulting from accident initiators internal to the plant (i.e., intrinsic to plant operation) and from loss of off-site electric power. The scope includes human reliability analysis, a determination of the importance of various core damage accident sequences, and an explicit treatment and display of uncertainties for key accident sequences. The second volume deals with the treatment of the so-called external events including seismic disturbances, fires, floods, etc. Ultimately, the guide will be augmented to include the plant-specific analysis of in-plant processes (i.e., containment performance). This guide provides the structure of a probabilistic safety study to be performed, and indicates what products of the study are valuable for regulatory decision making. For internal events, methodology is treated in the guide only to the extent necessary to indicate the range of methods which is acceptable; ample reference is given to alternative methodologies which may be utilized in the performance of the study. For external events, more explicit guidance is given

  12. Role of regression analysis and variation of rheological data in calculation of pressure drop for sludge pipelines.

    Farno, E; Coventry, K; Slatter, P; Eshtiaghi, N

    2018-06-15

    Sludge pumps in wastewater treatment plants are often oversized due to uncertainty in the calculation of pressure drop. This issue costs industry millions of dollars in purchasing and operating oversized pumps. Besides cost, the higher electricity consumption is associated with extra CO2 emissions and a substantial environmental impact. Calculation of pressure drop via current pipe flow theory requires fitting a model to flow-curve data, which depends on the regression analysis used and also varies with the natural variation of rheological data. This study investigates the impact of the variation of rheological data and of the regression analysis on the variation of pressure drop calculated via current pipe flow theories. The results compare the variation of the calculated pressure drop between different models and regression methods and indicate the suitability of each method. Copyright © 2018 Elsevier Ltd. All rights reserved.
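
    As one concrete example of how a fitted rheological model feeds into the pressure-drop calculation, the sketch below fits a simple power-law model to flow-curve data and evaluates the laminar pipe pressure drop; sludges are often better described by Bingham or Herschel-Bulkley models, which is exactly the model and regression dependence the study examines. All numbers are invented.

    import numpy as np

    shear_rate = np.array([10., 20., 50., 100., 200.])      # 1/s (invented flow curve)
    shear_stress = np.array([8.1, 11.0, 16.8, 22.5, 30.9])  # Pa

    # power-law fit tau = K * gamma_dot**n via log-log linear regression
    n, logK = np.polyfit(np.log(shear_rate), np.log(shear_stress), 1)
    K = np.exp(logK)                                         # consistency index, Pa.s^n

    D, L, V = 0.10, 500.0, 1.2                               # pipe diameter m, length m, mean velocity m/s (invented)
    # laminar power-law wall shear stress (Rabinowitsch-Mooney form) and pressure drop
    tau_wall = K * ((3 * n + 1) / (4 * n) * 8 * V / D) ** n
    dP = 4 * L / D * tau_wall
    print(f"n = {n:.2f}, K = {K:.2f} Pa.s^n, dP ~ {dP / 1000:.0f} kPa")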

  13. SITE-94. Discrete-feature modelling of the Aespoe site: 4. Source data and detailed analysis procedures

    Geier, J E [Golder Associates AB, Uppsala (Sweden)

    1996-12-01

    Specific procedures and source data are described for the construction and application of discrete-feature hydrological models for the vicinity of Aespoe. Documentation is given for all major phases of the work, including: Statistical analyses to develop and validate discrete-fracture network models, Preliminary evaluation, construction, and calibration of the site-scale model based on the SITE-94 structural model of Aespoe, Simulation of multiple realizations of the integrated model, and variations, to predict groundwater flow, and Evaluation of near-field and far-field parameters for performance assessment calculations. Procedures are documented in terms of the computer batch files and executable scripts that were used to perform the main steps in these analyses, to provide for traceability of results that are used in the SITE-94 performance assessment calculations. 43 refs.

  15. Identifying the factors influencing practice variation in thrombosis medicine: A qualitative content analysis of published practice-pattern surveys.

    Skeith, Leslie; Gonsalves, Carol

    2017-11-01

    Practice variation, the differences in clinical management between physicians, is one reason why patient outcomes may differ. Identifying factors that contribute to practice variation in areas of clinical uncertainty or equipoise may have implications for understanding and improving patient care. To discern what factors may influence practice variation, we completed a qualitative content analysis of all practice-pattern surveys in thrombosis medicine in the last 10years. Out of 2117 articles screened using a systematic search strategy, 33 practice-pattern surveys met eligibility criteria. Themes were identified using constant comparative analysis of qualitative data. Practice variation was noted in all 33 practice-pattern surveys. Contributing factors to variation included lack of available evidence, lack of clear and specific guideline recommendations, past experience, patient context, institutional culture and the perceived risk and benefit of a particular treatment. Additional themes highlight the value placed on expertise in challenging clinical scenarios, the complexity of practice variation and the value placed on minimizing practice variation. Copyright © 2017 Elsevier Ltd. All rights reserved.

  16. A New Variable Weighting and Selection Procedure for K-Means Cluster Analysis

    Steinley, Douglas; Brusco, Michael J.

    2008-01-01

    A variance-to-range ratio variable weighting procedure is proposed. We show how this weighting method is theoretically grounded in the inherent variability found in data exhibiting cluster structure. In addition, a variable selection procedure is proposed to operate in conjunction with the variable weighting technique. The performances of these…
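
    A rough sketch of the weighting idea, assuming for illustration that each variable's weight is its variance divided by its squared range (equivalently, the variance after range standardization); the published construction and the accompanying variable-selection step are more involved:

    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(3)
    # synthetic data with two clusters in four variables
    X = np.vstack([rng.normal(0, 1, (50, 4)), rng.normal(3, 1, (50, 4))])

    span = X.max(axis=0) - X.min(axis=0)
    weights = X.var(axis=0, ddof=1) / span ** 2     # variance-to-range style weights (illustrative form)
    X_weighted = X * np.sqrt(weights)               # apply the weights inside the Euclidean distance

    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X_weighted)
    print(np.bincount(labels))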

  17. An analysis of marketing authorisation applications via the mutual recognition and decentralised procedures in Europe

    Ebbers, Hans C; Langedijk, Joris; Bouvy, Jacoline C

    2015-01-01

    the frequency of licensing failure prior to CMDh referrals. RESULTS: During the study period, 10392 MRP/DCP procedures were finalized. Three hundred seventy-seven (3.6%) resulted in a referral procedure, of which 70 (19%) resulted in licensing failure, defined as refusal or withdrawal of the application...

  18. Procedure for hazards analysis of plutonium gloveboxes used in analytical chemistry operations

    Delvin, W.L.

    1977-06-01

    A procedure is presented to identify and assess hazards associated with gloveboxes used for analytical chemistry operations involving plutonium. This procedure is based upon analytic tree methodology and it has been adapted from the US Energy Research and Development Administration's safety program, the Management Oversight and Risk Tree

  19. Emergency procedures

    Abd Nasir Ibrahim; Azali Muhammad; Ab Razak Hamzah; Abd Aziz Mohamed; Mohammad Pauzi Ismail

    2004-01-01

    The following subjects are discussed - Emergency Procedures: emergency equipment, emergency procedures; emergency procedure involving X-Ray equipment; emergency procedure involving radioactive sources

  20. Zinc injection implementation process at EDF: risk analysis, chemical specifications and operating procedures

    Tigeras, A.; Stutzmann, A.; Bremnes, O.; Claeys, M.; Ranchoux, G.; Segura, J.C.; Errera, J.; Bonne, S.

    2010-01-01

    's strategy and the different measures adopted by EDF to provide the necessary tools to the French units: zinc injection procedures, risk analysis, chemistry-radiochemistry surveillance programs, and chemical specifications. This work can be useful for other utilities, assisting them in optimizing and/or implementing zinc injection under the most suitable conditions, which would help to obtain the expected results in current and future reactors. (author)

  1. Comparative cost analysis -- computed tomography vs. alternative diagnostic procedures, 1977-1980

    Gempel, P.A.; Harris, G.H.; Evans, R.G.

    1977-12-01

    In comparing the total national cost of utilizing computed tomography (CT) for medically indicated diagnoses with that of conventional x-ray, ultrasonography, nuclear medicine, and exploratory surgery, this investigation concludes that there was little, if any, added net cost from CT use in 1977, nor will there be in 1980. Computed tomography, generally recognized as a reliable and useful diagnostic modality, has the potential to reduce net costs provided that an optimal number of units can be made available to physicians and patients to achieve projected reductions in alternative procedures. This study examines the actual cost impact of CT on both cranial and body diagnostic procedures. For abdominal and mediastinal disorders, CT scanning is just beginning to emerge as a diagnostic modality. As such, clinical experience is somewhat limited and the authors assume that no significant reduction in conventional procedures took place in 1977. It is estimated that the approximately 375,000 CT body procedures performed in 1977 represent only a 5 percent cost increase over use of other diagnostic modalities. It is projected that 2,400,000 CT body procedures will be performed in 1980 and, depending on assumptions used, total body diagnostic costs will increase only slightly or be reduced. Thirty-one tables appear throughout the text presenting cost data broken down by type of diagnostic procedure and by projection year. Appendixes present technical cost components for diagnostic procedures, the comparative efficacy of CT as revealed in abstracts of published literature, selected medical diagnoses, and references.

  2. Application of statistical process control and process capability analysis procedures in orbiter processing activities at the Kennedy Space Center

    Safford, Robert R.; Jackson, Andrew E.; Swart, William W.; Barth, Timothy S.

    1994-01-01

    Successful ground processing at KSC requires that flight hardware and ground support equipment conform to specifications at tens of thousands of checkpoints. Knowledge of conformance is an essential requirement for launch. That knowledge of conformance at every requisite point does not, however, enable identification of past problems with equipment, or potential problem areas. This paper describes how the introduction of Statistical Process Control and Process Capability Analysis identification procedures into existing shuttle processing procedures can enable identification of potential problem areas and candidates for improvements to increase processing performance measures. Results of a case study describing application of the analysis procedures to Thermal Protection System processing are used to illustrate the benefits of the approaches described in the paper.
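
    For reference, the basic process-capability arithmetic underlying such an analysis (Cp and Cpk against specification limits) can be sketched as follows; the measurements and limits are hypothetical, not Thermal Protection System processing data:

    import numpy as np

    x = np.array([10.02, 9.98, 10.05, 9.97, 10.01, 10.03, 9.99, 10.00, 10.04, 9.96])  # hypothetical measurements
    lsl, usl = 9.85, 10.15                      # hypothetical specification limits

    mu, sigma = x.mean(), x.std(ddof=1)
    cp  = (usl - lsl) / (6 * sigma)             # potential capability
    cpk = min(usl - mu, mu - lsl) / (3 * sigma) # capability accounting for centering
    print(f"Cp = {cp:.2f}, Cpk = {cpk:.2f}")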

  3. The analysis of APOL1 genetic variation and haplotype diversity provided by 1000 Genomes project.

    Peng, Ting; Wang, Li; Li, Guisen

    2017-08-11

    APOL1 gene variants have been shown to be associated with an increased risk of multiple diseases, particularly in African Americans, but not in Caucasians or Asians. In this study, we explored the single nucleotide polymorphism (SNP) and haplotype diversity of the APOL1 gene in the different populations represented in the 1000 Genomes project. Variants of the APOL1 gene in the 1000 Genomes Project were obtained, and SNPs located in the regulatory or coding regions were selected for genetic variation analysis. A total of 2504 individuals from 26 populations were classified into four groups: African, European, Asian and admixed populations. Tag SNPs were selected to evaluate haplotype diversity in the four groups using the HaploStats software. The APOL1 gene is surrounded by some of the most polymorphic genes in the human genome, and variation in APOL1 was common, with up to 613 SNPs reported in the 1000 Genomes Project, 99 of which (16.2%) had MAF ≥ 1%. There were 79 SNPs in the URR and 92 SNPs in the 3'UTR; 12 SNPs in the URR and 24 SNPs in the 3'UTR were considered common variants with MAF ≥ 1%. Notably, the URR-1 haplotype presented lower frequencies in European populations, while the other three haplotypes showed the opposite pattern; the 3'UTR presented several high-frequency variant sites in a short segment, and the differences of its haplotypes among populations were significant (P < 0.01), with UTR-1 and UTR-5 much more frequent in African populations and UTR-2, UTR-3 and UTR-4 much less frequent. In the APOL1 coding region, the two higher-frequency G1 SNPs pulled down the frequency of haplotype H-1 when all populations were pooled, and the two G1 mutations widened the diversity among the four populations (P1 = 3.33E-4 vs P2 = 3.61E-30). The distributions of APOL1 gene variants and haplotypes were significantly different among the populations, in both the regulatory and coding regions.

  4. Sources of variation in under-5 mortality across sub-Saharan Africa: a spatial analysis.

    Burke, Marshall; Heft-Neal, Sam; Bendavid, Eran

    2016-12-01

    Detailed spatial understanding of levels and trends in under-5 mortality is needed to improve the targeting of interventions to the areas of highest need, and to understand the sources of variation in mortality. To improve this understanding, we analysed local-level information on child mortality across sub-Saharan Africa between 1980-2010. We used data from 82 Demographic and Health Surveys in 28 sub-Saharan African countries, including the location and timing of 3·24 million childbirths and 393 685 deaths, to develop high-resolution spatial maps of under-5 mortality in the 1980s, 1990s, and 2000s. These estimates were at a resolution of 0·1 degree latitude by 0·1 degree longitude (roughly 10 km × 10 km). We then analysed this spatial information to distinguish within-country versus between-country sources of variation in mortality, to examine the extent to which declines in mortality have been accompanied by convergence in the distribution of mortality, and to study localised drivers of mortality differences, including temperature, malaria burden, and conflict. In our sample of sub-Saharan African countries from the 1980s to the 2000s, within-country differences in under-5 mortality accounted for 74-78% of overall variation in under-5 mortality across space and over time. Mortality differed significantly across only 8-15% of country borders, supporting the role of local, rather than national, factors in driving mortality patterns. We found that by the end of the study period, 23% of the eligible children in the study countries continue to live in mortality hotspots: areas where, if current trends continue, the Sustainable Development Goals mortality targets will not be met. In multivariate analysis, within-country mortality levels at each pixel were significantly related to local temperature, malaria burden, and recent history of conflict. Our findings suggest that sub-national determinants explain a greater portion of under-5 mortality than do country-level factors.

  5. Seasonal variation in hemodialysis initiation: A single-center retrospective analysis.

    Yujiro Maeoka

    The number of new dialysis patients has been increasing worldwide, particularly among elderly individuals. However, information on seasonal variation in hemodialysis initiation in recent decades is lacking, and the seasonal distribution of patients' conditions immediately prior to starting dialysis remains unclear. Having this information could help in developing a modifiable approach to improving pre-dialysis care. We retrospectively investigated the records of 297 patients who initiated hemodialysis at Hiroshima Prefectural Hospital from January 1st, 2009 to December 31st, 2013. Seasonal differences were assessed by χ2 or Kruskal-Wallis tests. Multiple comparison analysis was performed with the Steel test. The overall number of patients starting dialysis was greatest in winter (n = 85, 28.6%), followed by spring (n = 74, 24.9%), summer (n = 70, 23.6%), and autumn (n = 68, 22.9%), though the differences were not significant. However, there was a significant winter peak in dialysis initiation among patients aged ≥65 years, but not in those aged <65 years. Fluid overload assessed by clinicians was the most common uremic symptom among all patients, but a winter peak was only detected in patients aged ≥65 years. The body weight gain ratio showed a similar trend to fluid overload assessed by clinicians. Pulmonary edema was most pronounced in winter among patients aged ≥65 years compared with other seasons. The incidences of infection were modestly increased in summer and winter, but the differences were not statistically significant. Cardiac complications were similar in all seasons. This study demonstrated the existence of seasonal variation in dialysis initiation, with a winter peak among patients aged ≥65 years. The winter increment in dialysis initiation was mainly attributable to increased fluid overload. These findings suggest that elderly individuals should be monitored particularly closely during the winter.

  6. A geometric morphometric analysis of hominin lower molars: Evolutionary implications and overview of postcanine dental variation.

    Gómez-Robles, Aida; Bermúdez de Castro, José María; Martinón-Torres, María; Prado-Simón, Leyre; Arsuaga, Juan Luis

    2015-05-01

    Lower molars have been extensively studied in the context of hominin evolution using classic and geometric morphometric analyses, 2D and 3D approaches, evaluations of the external (outer enamel surface) and internal anatomy (dentine, pulp chamber, and radicular canals), and studies of the crown and root variation. In this study, we present a 2D geometric morphometric analysis of the crown anatomy of lower first, second, and third molars of a broad sample of hominins, including Pliocene and Lower, Middle, and Upper Pleistocene species coming from Africa, Asia, and Europe. We show that shape variability increases from first to second and third molars. While first molars tend to retain a relatively stable 5-cusped conformation throughout the hominin fossil record, second and third molars show marked distal reductions in later Homo species. This trend to distal reduction is similar to that observed in previous studies of premolars and upper second and third molars, and points to a correlated reduction of distal areas across the whole postcanine dentition. Results on lower molar variation, as well as on other postcanine teeth, show certain trends in European Pleistocene populations from the Atapuerca sites. Middle Pleistocene hominins from Sima de los Huesos show Neanderthal affinities and strong dental reduction, especially in the most distal molars. The degree of dental reduction in this population is stronger than that observed in classic Neanderthals. Homo antecessor hominins from Gran Dolina-TD6 have primitive lower teeth that contrast with their more derived upper teeth. The evolutionary implications of these dental affinities are discussed in light of recent paleogenetic studies. Copyright © 2015 Elsevier Ltd. All rights reserved.

  7. Analysis and evaluation of atom level composition variation and property change due to materials irradiation damage

    Furuya, Kazuo; Takeguchi, Masaki; Mitsuishi, Kazuki; Song Menhi; Saito, Tetsuya

    2000-01-01

    This study aims at a general elucidation of atom-level composition variation due to radiation damage, and of the property changes that accompany it, with emphasis on the application of in-situ analysis and evaluation apparatus to material radiation damage. Electron energy loss spectroscopy (EELS), which is powerful for composition analysis of microscopic regions, was used to measure the crystal-structure change of Xe precipitates after heat treatment, in parallel with high-resolution observation. As a result, precipitates less than 30 nm in diameter were observed in a specimen inserted at 473 K. Crystalline Xe precipitates began to change at 623 K and were completely in the gaseous state at 773 K. In an energy-filtered image (EFI) using the Xe low-loss signal, the distribution of crystalline and non-crystalline Xe was observed. In EELS, peaks at 14.6 and 15.3 eV were observed at 300 and 773 K, respectively, which were attributed to plasmons, and the difference in peak position at the core loss was thought to reflect the difference in Xe state between 300 and 773 K. (G.K.)

  8. Meta-analysis of the performance variation in broilers experimentally challenged by Eimeria spp.

    Kipper, Marcos; Andretta, Ines; Lehnen, Cheila Roberta; Lovatto, Paulo Alberto; Monteiro, Silvia Gonzalez

    2013-09-01

    A meta-analysis was carried out to (1) study the relation of the variation in feed intake and weight gain in broilers infected with Eimeria acervulina, Eimeria maxima, Eimeria tenella, or a Pool of Eimeria species, and (2) to identify and to quantify the effects involved in the infection. A database of articles addressing experimental infection with Coccidia in broilers was developed. Publications were required to present results of animal performance (weight gain, feed intake, and feed conversion ratio). The database comprised 69 publications, totalling around 44 thousand animals. The meta-analysis followed three sequential analyses: graphical, correlation, and variance-covariance. The feed intake of the groups challenged by E. acervulina and E. tenella did not differ (P>0.05) from that of the control group. However, the feed intake in groups challenged by E. maxima and Pool showed an increase of 8% and 5% (PEimeria species, animal age, sex, and genetic line. In general, the age effect was larger than the challenge effect, showing that age at challenge is important in determining the impact of Eimeria infection. Copyright © 2013 Elsevier B.V. All rights reserved.

  9. Pedestrian-Vehicle Accidents Reconstruction with PC-Crash®: Sensibility Analysis of Factors Variation

    Martinez Gala, F.

    2016-07-01

    This paper describes the main findings of a study performed by INSIA-UPM on improving the reconstruction of real-world vehicle-pedestrian accidents using PC-Crash® software, aimed at developing a software tool for estimating the variability of the collision speed caused by the lack of real values for some parameters required during the reconstruction task. The methodology was based on a sensitivity analysis of factor variation. A total of 9 factors were analyzed with the objective of identifying which ones were significant. Four of them (pedestrian height, collision angle, hood height and pedestrian-road friction coefficient) were significant and were included in a full factorial experiment with the collision speed as an additional factor in order to obtain a regression model with up to third-level interactions. Two different factorial experiments with the same structure were performed because of pedestrian gender differences. The tool was created as a collision speed predictor based on the regression models obtained, using the 4 significant factors and the projection distance measured or estimated at the accident site. The tool has been used in the analysis of real-world reconstructed accidents that occurred in the city of Madrid (Spain). The results have been adequate in most cases, with less than 10% deviation between the predicted speed and the speed estimated in the reconstructions. (Author)

  10. Analysis of evacuation procedure after the accident of the Fukushima Daiichi Nuclear Power Plant

    Murayama, T.; Iizuka, F.; El-Asaad, H. [Tokyo Inst. of Tech., Tokyo (Japan)

    2014-07-01

    After the Great East Japan Earthquake of March 2011 struck the coast of Eastern Japan, evacuation procedures were undermined due to the unexpected magnitude and severity of the disaster. Also, communications between local and national government were weakened, leading to a disconnect between society and government. Consequently, this left the affected people without sufficient information or updates regarding evacuation procedures. This paper will concentrate on evacuation procedures led by locating residents with the help of media outlets (local newspapers and news reports). Analyzing movements of evacuees will help improve the evacuation method both for local residents and government bodies. (author)

  11. Analysis of evacuation procedure after the accident of the Fukushima Daiichi Nuclear Power Plant

    Murayama, T.; Iizuka, F.; El-Asaad, H.

    2014-01-01

    After the Great East Japan Earthquake of March 2011 struck the coast of Eastern Japan, evacuation procedures were undermined due to the unexpected magnitude and severity of the disaster. Also, communications between local and national government were weakened, leading to a disconnect between society and government. Consequently, this left the affected people without sufficient information or updates regarding evacuation procedures. This paper will concentrate on evacuation procedures led by locating residents with the help of media outlets (local newspapers and news reports). Analyzing movements of evacuees will help improve the evacuation method both for local residents and government bodies. (author)

  12. Root coverage procedures improve patient aesthetics. A systematic review and Bayesian network meta-analysis.

    Cairo, Francesco; Pagliaro, Umberto; Buti, Jacopo; Baccini, Michela; Graziani, Filippo; Tonelli, Paolo; Pagavino, Gabriella; Tonetti, Maurizio S

    2016-11-01

    The aim of this study was to perform a systematic review (SR) of randomized controlled trials (RCTs) to explore whether periodontal plastic surgery procedures for the treatment of single and multiple gingival recessions (Rec) may improve aesthetics at patient and professional levels. In order to combine evidence from direct and indirect comparisons by different trials, a Bayesian network meta-analysis (BNM) was planned. A literature search on PubMed, Cochrane libraries, EMBASE, and hand-searched journals until January 2016 was conducted to identify RCTs presenting aesthetic outcomes after root coverage using standardized evaluations at patient and professional level. A total of 16 RCTs were selected in the SR; three RCTs presenting professional aesthetic evaluation with Root coverage Aesthetic Score (RES) and three showing final self-perception using the Visual Analogue Scale (VAS Est) could be included in a BNM model. Coronally Advanced Flap plus Connective Tissue Graft (CAF + CTG) and CAF + Acellular Dermal Matrix (ADM) and Autologous Fibroblasts (AF) were associated with the best RES outcomes (best probability = 24% and 64%, respectively), while CAF + CTG and CAF + CTG + Enamel matrix Derivatives (EMD) obtained the highest values of VAS Est score (best probability = 44% and 26%, respectively). Periodontal Plastic Surgery (PPS) techniques applying grafts underneath CAF, with or without the addition of EMD, are associated with improved aesthetics assessed by final patient perception and by RES as a professional evaluation system. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  13. An alternative method for noise analysis using pixel variance as part of quality control procedures on digital mammography systems.

    Bouwman, R.; Young, K.; Lazzari, B.; Ravaglia, V.; Broeders, M.J.M.; Engen, R. van

    2009-01-01

    According to the European Guidelines for quality assured breast cancer screening and diagnosis, noise analysis is one of the measurements that needs to be performed as part of quality control procedures on digital mammography systems. However, the method recommended in the European Guidelines does

  14. 78 FR 37463 - Expedited Approval of Alternative Test Procedures for the Analysis of Contaminants Under the Safe...

    2013-06-21

    ... Approval of Alternative Test Procedures for the Analysis of Contaminants Under the Safe Drinking Water Act... page 32570, with the table entitled ``ALTERNATIVE TESTING METHODS FOR CONTAMINANTS LISTED AT 40 CFR 141... Contaminants Listed at 40 CFR 141.25(a) SM 21st Edition SM 22nd Edition Contaminant Methodology \\1\\ \\28\\ ASTM...

  15. 75 FR 32295 - Expedited Approval of Alternative Test Procedures for the Analysis of Contaminants Under the Safe...

    2010-06-08

    ... Approval of Alternative Test Procedures for the Analysis of Contaminants Under the Safe Drinking Water Act... methods for use in measuring the levels of contaminants in drinking water and determining compliance with... required to measure contaminants in drinking water samples. In addition, EPA Regions as well as States and...

  16. Household waste compositional analysis variation from insular communities in the framework of waste prevention strategy plans

    Zorpas, Antonis A.; Lasaridi, Katia; Voukkali, Irene; Loizia, Pantelitsa; Chroni, Christina

    2015-01-01

    Highlights: • Waste framework directive has set clear waste prevention procedures. • Household Compositional analysis. • Waste management plans. • Zero waste approach. • Waste generation. - Abstract: Waste management planning requires reliable data regarding waste generation, the factors affecting waste generation, and forecasts of waste quantities based on facts. In order to decrease the environmental impacts of waste management, the choice of prevention plan as well as the treatment method must be based on the features of the waste that are produced in a specific area. Factors such as culture, economic development, climate, and energy sources have an impact on waste composition; composition influences how frequently waste needs to be collected and how it is disposed of. The research question was to discover the main barriers concerning the compositional analysis in Insular Communities under warm climate conditions, and the findings from this study enabled the main contents of a waste management plan to be established. These included advice to residents on waste minimisation, liaison with stakeholders and the expansion of kerbside recycling schemes.

  17. Household waste compositional analysis variation from insular communities in the framework of waste prevention strategy plans

    Zorpas, Antonis A., E-mail: antonis.zorpas@ouc.ac.cy [Cyprus Open University, Faculty of Pure and Applied Science, Environmental Conservation and Management, P.O. Box 12794, 2252 Latsia, Nicosia (Cyprus); Lasaridi, Katia, E-mail: klasaridi@hua.gr [Harokopio University, Department of Geography, 70 El. Venizelou, 176 71 Athens, Kallithea (Greece); Voukkali, Irene [Institute of Environmental Technology and Sustainable Development, ENVITECH LTD, Department of Research and Development, P.O. Box 34073, 5309 (Cyprus); Loizia, Pantelitsa, E-mail: irenevoukkali@envitech.org [Institute of Environmental Technology and Sustainable Development, ENVITECH LTD, Department of Research and Development, P.O. Box 34073, 5309 (Cyprus); Chroni, Christina [Harokopio University, Department of Geography, 70 El. Venizelou, 176 71 Athens, Kallithea (Greece)

    2015-04-15

    Highlights: • Waste framework directive has set clear waste prevention procedures. • Household Compositional analysis. • Waste management plans. • Zero waste approach. • Waste generation. - Abstract: Waste management planning requires reliable data regarding waste generation, the factors affecting waste generation, and forecasts of waste quantities based on facts. In order to decrease the environmental impacts of waste management, the choice of prevention plan as well as the treatment method must be based on the features of the waste that are produced in a specific area. Factors such as culture, economic development, climate, and energy sources have an impact on waste composition; composition influences how frequently waste needs to be collected and how it is disposed of. The research question was to discover the main barriers concerning the compositional analysis in Insular Communities under warm climate conditions, and the findings from this study enabled the main contents of a waste management plan to be established. These included advice to residents on waste minimisation, liaison with stakeholders and the expansion of kerbside recycling schemes.

  18. Chronic subdural hematoma : a systematic review and meta-analysis of surgical procedures

    Liu, Weiming; Bakker, Nicolaas A.; Groen, Rob J. M.

    Object. In this paper the authors systematically evaluate the results of different surgical procedures for chronic subdural hematoma (CSDH). Methods. The MEDLINE, Embase, Cochrane Central Register of Controlled Trials, and other databases were scrutinized according to the PRISMA (Preferred Reporting

  19. Health economic analysis of laparoscopic lavage versus Hartmann's procedure for diverticulitis in the randomized DILALA trial

    Gehrman, J.; Angenete, E; Björholt, I.

    2016-01-01

    Background: Open surgery with resection and colostomy (Hartmann's procedure) has been the standard treatment for perforated diverticulitis with purulent peritonitis. In recent years laparoscopic lavage has emerged as an alternative, with potential benefits for patients with purulent peritonitis...

  20. Patient safety during procedural sedation using capnography monitoring : A systematic review and meta-analysis

    Saunders, Rhodri; Struys, Michel M. R. F.; Pollock, Richard F.; Mestek, Michael; Lightdale, Jenifer R.

    2017-01-01

    Objective: To evaluate the effect of capnography monitoring on sedation-related adverse events during procedural sedation and analgesia (PSA) administered for ambulatory surgery relative to visual assessment and pulse oximetry alone. Design and setting: Systematic literature review and random

  1. Analysis of four dental alloys following torch/centrifugal and induction/ vacuum-pressure casting procedures.

    Thompson, Geoffrey A; Luo, Qing; Hefti, Arthur

    2013-12-01

    Previous studies have shown casting methodology to influence the as-cast properties of dental casting alloys. It is important to consider clinically important mechanical properties so that the influence of casting can be clarified. The purpose of this study was to evaluate how torch/centrifugal and induction/vacuum-pressure casting machines may affect the castability, microhardness, chemical composition, and microstructure of 2 high noble, 1 noble, and 1 base metal dental casting alloys. Two commonly used methods for casting were selected for comparison: torch/centrifugal casting and inductively heated/vacuum-pressure casting. One hundred and twenty castability patterns were fabricated and divided into 8 groups. Four groups were torch/centrifugally cast in Olympia (O), Jelenko O (JO), Genesis II (G), and Liberty (L) alloys. Similarly, 4 groups were cast in O, JO, G, and L by an induction/vacuum-pressure casting machine. Each specimen was evaluated for casting completeness to determine a castability value, while porosity was determined by standard x-ray techniques. Each group was metallographically prepared for further evaluation that included chemical composition, Vickers microhardness, and grain analysis of microstructure. Two-way ANOVA was used to determine significant differences among the main effects. Statistically significant effects were examined further with the Tukey HSD procedure for multiple comparisons. Data obtained from the castability experiments were non-normal and the variances were unequal. They were analyzed statistically with the Kruskal-Wallis rank sum test. Significant results were further investigated statistically with the Steel-Dwass method for multiple comparisons (α=.05). The alloy type had a significant effect on surface microhardness (Pcasting did not affect the microhardness of the test specimen (P=.465). Similarly, the interaction between the alloy and casting technique was not significant (P=.119). A high
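    A minimal sketch of the two-way ANOVA and Tukey HSD follow-up this record describes, written with statsmodels on a hypothetical microhardness data set; the column names and values are assumptions, not the study's data.

```python
# Illustrative two-way ANOVA (alloy x casting technique) with a Tukey HSD follow-up,
# mirroring the design described above. Data and column names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(1)
rows = [{"alloy": a, "technique": t, "hardness": rng.normal(200 + 20 * i, 8)}
        for i, a in enumerate(["O", "JO", "G", "L"])
        for t in ("torch", "induction")
        for _ in range(15)]
df = pd.DataFrame(rows)

model = smf.ols("hardness ~ C(alloy) * C(technique)", data=df).fit()
print(anova_lm(model, typ=2))                       # main effects and interaction

# Multiple comparisons among alloys, collapsing over casting technique.
print(pairwise_tukeyhsd(df["hardness"], df["alloy"]))
```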

  2. From Laser Scanning to Finite Element Analysis of Complex Buildings by Using a Semi-Automatic Procedure.

    Castellazzi, Giovanni; D'Altri, Antonio Maria; Bitelli, Gabriele; Selvaggi, Ilenia; Lambertini, Alessandro

    2015-07-28

    In this paper, a new semi-automatic procedure to transform three-dimensional point clouds of complex objects to three-dimensional finite element models is presented and validated. The procedure conceives of the point cloud as a stacking of point sections. The complexity of the clouds is arbitrary, since the procedure is designed for terrestrial laser scanner surveys applied to buildings with irregular geometry, such as historical buildings. The procedure aims at solving the problems connected to the generation of finite element models of these complex structures by constructing a finely discretized geometry in a reduced amount of time, ready to be used for structural analysis. If the starting clouds represent the inner and outer surfaces of the structure, the resulting finite element model will accurately capture the whole three-dimensional structure, producing a complex solid made of voxel elements. A comparison analysis with a CAD-based model is carried out on a historical building damaged by a seismic event. The results indicate that the proposed procedure is effective and obtains comparable models in a shorter time, with an increased level of automation.

  3. Complications Following Common Inpatient Urological Procedures: Temporal Trend Analysis from 2000 to 2010.

    Meyer, Christian P; Hollis, Michael; Cole, Alexander P; Hanske, Julian; O'Leary, James; Gupta, Soham; Löppenberg, Björn; Zavaski, Mike E; Sun, Maxine; Sammon, Jesse D; Kibel, Adam S; Fisch, Margit; Chun, Felix K H; Trinh, Quoc-Dien

    2016-04-01

    Measuring procedure-specific complication-rate trends allows for benchmarking and improvement in quality of care but must be done in a standardized fashion. Using the Nationwide Inpatient Sample, we identified all instances of eight common inpatient urologic procedures performed in the United States between 2000 and 2010. This yielded 327,218 cases including both oncologic and benign diseases. Complications were identified by International Classification of Diseases, Ninth Revision codes. Each complication was cross-referenced to the procedure code and graded according to the standardized Clavien system. The Mann-Whitney and chi-square tests were used to assess the statistical significance of medians and proportions, respectively. We assessed temporal variability in the rates of overall complications (Clavien grade 1-4), length of hospital stay, and in-hospital mortality using the estimated annual percent change (EAPC) linear regression methodology. We observed an overall reduction in length of stay (EAPC: -1.59; ptrends showed a significant increase in complications for inpatient ureterorenoscopy (EAPC: 5.53; ptrends of urologic procedures and their complications. A significant shift toward sicker patients and more complex procedures in the inpatient setting was found, but this did not result in higher mortality. These results are indicators of the high quality of care for urologic procedures in the inpatient setting. Copyright © 2015 European Association of Urology. Published by Elsevier B.V. All rights reserved.
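    The estimated annual percent change (EAPC) methodology cited above is usually computed by regressing log rates on calendar year; a minimal sketch with hypothetical yearly complication rates follows.

```python
# Estimated annual percent change (EAPC) from a log-linear regression of rate on year.
# The yearly rates below are hypothetical; only the method mirrors the abstract.
import numpy as np

years = np.arange(2000, 2011)
rates = np.array([12.0, 11.8, 11.5, 11.6, 11.2, 11.0, 10.9, 10.7, 10.6, 10.4, 10.3])  # % of cases

slope, intercept = np.polyfit(years, np.log(rates), 1)
eapc = 100.0 * (np.exp(slope) - 1.0)
print(f"EAPC = {eapc:.2f}% per year")
```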

  4. The Annual Trends between Neurointerventional and Neurosurgical Procedures in Korea: Analysis using HIRA Data from 2010 to 2016.

    Suh, Sang Hyun

    2017-09-01

    The purpose of this study was to evaluate the annual trend for several neurosurgical and neurointerventional procedures (NIPs) in Korea between 2010 and 2016 by using medical claim data. All data during the recent 7 years were collected from Bigdata Hub provided by the Health Insurance Review & Assessment Service (HIRA). Using several codes for medical practice, we estimated the number of in-patients for each practice and calculated the annual variation of each procedure. These data were compared between each NIP and the equivalent surgical procedure and analyzed according to geographic distribution and hospital grade. During the past 7 years, the average growth rate of all stroke in-patients was 2.05%, and the number of in-patients with ischemic stroke was 5.2 times higher than that of patients with hemorrhagic stroke. While clipping accounted for 53.7% (5,209/9,700) of total patients in 2010, 61.9% of patients (9,146/14,781) underwent the coiling procedure in 2016. Mechanical thrombectomy (MT) for acute ischemic stroke has grown annually by an average of 35.9% during the past 3 years. Stenting and angioplasty for the carotid artery and intracranial arteries showed relatively stable growth during the past 7 years. In particular, carotid artery stenting is increasing rapidly in secondary general hospitals, where more than 50% of these procedures were performed after 2014. In this study, we found that most NIPs have increased annually over the past 7 years and that NIPs are rapidly replacing conventional surgical maneuvers in Korea.

  5. Linking dynamic phenotyping with metabolite analysis to study natural variation in drought responses of Brachypodium distachyon

    Lorraine H.C. Fisher

    2016-11-01

    Full Text Available Drought is an important environmental stress limiting the productivity of major crops worldwide. Understanding drought tolerance and possible mechanisms for improving drought resistance is therefore a prerequisite for developing drought-tolerant crops that produce significant yields with reduced amounts of water. Brachypodium distachyon (Brachypodium) is a key model species for cereals, forage grasses and energy grasses. In this study, initial screening of a Brachypodium germplasm collection consisting of 138 different ecotypes exposed to progressive drought highlighted the natural variation in morphology, biomass accumulation and responses to drought stress. A core set of ten ecotypes, classified as being either tolerant, susceptible or intermediate in response to drought stress, were exposed to mild or severe drought stress (15% and 0% soil water content, respectively), and phenomic parameters linked to growth and colour changes were assessed. When exposed to severe drought stress, phenotypic data and metabolite profiling combined with multivariate analysis revealed a remarkable consistency in separating the selected ecotypes into their different pre-defined drought tolerance groups. Increases in several metabolites, including the phytohormones jasmonic acid and salicylic acid, and TCA-cycle intermediates, were positively correlated with biomass yield and with reduced yellow pixel counts, suggestive of delayed senescence, both key target traits for crop improvement to drought stress. While metabolite analysis also separated ecotypes into the distinct tolerance groupings after exposure to mild drought stress, similar analysis of the phenotypic data failed to do so, confirming the value of metabolomics to investigate early responses to drought stress. The results highlight the potential of combining the analyses of phenotypic and metabolic responses to identify key mechanisms and markers associated with drought tolerance in both the Brachypodium

  6. A structure-activity analysis of the variation in oxime efficacy against nerve agents

    Maxwell, Donald M.; Koplovitz, Irwin; Worek, Franz; Sweeney, Richard E.

    2008-01-01

    A structure-activity analysis was used to evaluate the variation in oxime efficacy of 2-PAM, obidoxime, HI-6 and ICD585 against nerve agents. In vivo oxime protection and in vitro oxime reactivation were used as indicators of oxime efficacy against VX, sarin, VR and cyclosarin. Analysis of in vivo oxime protection was conducted with oxime protective ratios (PR) from guinea pigs receiving oxime and atropine therapy after sc administration of nerve agent. Analysis of in vitro reactivation was conducted with second-order rate constants (k_r2) for oxime reactivation of agent-inhibited acetylcholinesterase (AChE) from guinea pig erythrocytes. In vivo oxime PR and in vitro k_r2 decreased as the volume of the alkylmethylphosphonate moiety of nerve agents increased from VX to cyclosarin. This effect was greater with 2-PAM and obidoxime (> 14-fold decrease in PR) than with HI-6 and ICD585 ( r2 as the volume of the agent moiety conjugated to AChE increased was consistent with a steric hindrance mechanism. Linear regression of log (PR-1) against log (k_r2 · [oxime dose]) produced two offset parallel regression lines that delineated a significant difference between the coupling of oxime reactivation and oxime protection for HI-6 and ICD585 compared to 2-PAM and obidoxime. HI-6 and ICD585 appeared to be 6.8-fold more effective than 2-PAM and obidoxime at coupling oxime reactivation to oxime protection, which suggested that the isonicotinamide group that is common to both of these oximes, but absent from 2-PAM and obidoxime, is important for oxime efficacy
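    A minimal sketch of the regression step described above, fitting log(PR - 1) against log(k_r2 · [oxime dose]) by ordinary least squares; all numbers are placeholders, not values from the study.

```python
# Ordinary least-squares fit of log(PR - 1) against log(k_r2 * oxime dose), as described above.
# All numeric values are illustrative placeholders, not data from the study.
import numpy as np

pr = np.array([2.5, 4.0, 8.0, 15.0, 30.0])       # protective ratios
k_r2 = np.array([5.0, 12.0, 40.0, 90.0, 200.0])  # reactivation rate constants (arbitrary units)
dose = 0.1                                        # oxime dose term (arbitrary units)

x = np.log10(k_r2 * dose)
y = np.log10(pr - 1.0)
slope, intercept = np.polyfit(x, y, 1)
print(f"log(PR-1) = {slope:.2f} * log(k_r2*dose) + {intercept:.2f}")
```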

  7. Development of Procedures for the Analysis of Components of Dumped Chemical Weapons and Their Principal Transformation Products in Sea Water

    Saveleva, E. I.; Koryagina, N. L.; Radilov, A. S.; Khlebnikova, N. S.; Khrustaleva, V. S.

    2007-01-01

    A package of chemical analytical procedures was developed for the detection of products indicative of the presence of dumped chemical weapons in the Baltic Sea. The principal requirements imposed upon the procedures were the following: high sensitivity, reliable identification of target compounds, wide range of components covered by survey analysis, and lack of interferences from sea salts. Thiodiglycol, a product of hydrolysis of sulfur mustard reportedly always detected in the sites of dumping of chemical weapons in the Baltic Sea, was considered the principal marker. We developed a high-sensitivity procedure for the determination of thiodiglycol in sea water, involving evaporation of samples to dryness in a vacuum concentrator, followed by tert-butyldimethylsilylation of the residue and GC-MS analysis in the SIM mode with meta-fluorobenzoic acid as internal reference. The detection limit of thiodiglycol was 0.001 mg/l, and the procedure throughput was up to 30 samples per day. The same procedure, but with BSTFA as derivatizing agent instead of MTBSTFA, was used for preparing samples for survey analysis of nonvolatile components. In this case, full mass spectra were measured in the GC-MS analysis. The use of BSTFA was motivated by the fact that trimethylsilyl derivatives are much more widely represented in electronic mass spectral databases. The identification of sulfur mustard, volatile transformation products of sulfur mustard and lewisite, as well as chloroacetophenone in sea water was performed by means of GC-MS in combination with SPME. The survey GC-MS analysis was focused on the identification of volatile and nonvolatile toxic chemicals whose mass spectra are included in the OPCW database (3219 toxic chemicals, precursors, and transformation products) with the use of AMDIS software (version 2.62). Using 2 GC-MS instruments, we could perform the survey analysis for volatile and nonvolatile components of up to 20 samples per day. Thus, the package of three procedures

  8. Variation of the Korotkoff Stethoscope Sounds During Blood Pressure Measurement: Analysis Using a Convolutional Neural Network.

    Pan, Fan; He, Peiyu; Liu, Chengyu; Li, Taiyong; Murray, Alan; Zheng, Dingchang

    2017-11-01

    Korotkoff sounds are known to change their characteristics during blood pressure (BP) measurement, resulting in some uncertainties for systolic and diastolic pressure (SBP and DBP) determinations. The aim of this study was to assess the variation of Korotkoff sounds during BP measurement by examining all stethoscope sounds associated with each heartbeat from above systole to below diastole during linear cuff deflation. Three repeat BP measurements were taken from 140 healthy subjects (age 21 to 73 years; 62 female and 78 male) by a trained observer, giving 420 measurements. During the BP measurements, the cuff pressure and stethoscope signals were simultaneously recorded digitally to a computer for subsequent analysis. Heartbeats were identified from the oscillometric cuff pressure pulses. The presence of each beat was used to create a time window (1 s, 2000 samples) centered on the oscillometric pulse peak for extracting beat-by-beat stethoscope sounds. A time-frequency two-dimensional matrix was obtained for the stethoscope sounds associated with each beat, and all beats between the manually determined SBPs and DBPs were labeled as "Korotkoff." A convolutional neural network was then used to analyze consistency in sound patterns that were associated with Korotkoff sounds. A 10-fold cross-validation strategy was applied to the stethoscope sounds from all 140 subjects, with the data from ten groups of 14 subjects being analyzed separately, allowing consistency to be evaluated between groups. Next, within-subject variation of the Korotkoff sounds analyzed from the three repeats was quantified, separately for each stethoscope sound beat. There was consistency between folds with no significant differences between groups of 14 subjects (P = 0.09 to P = 0.62). Our results showed that 80.7% beats at SBP and 69.5% at DBP were analyzed as Korotkoff sounds, with significant differences between adjacent beats at systole (13.1%, P = 0.001) and diastole (17.4%, P < 0
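    The beat-centred time-frequency matrices this record describes can be sketched with SciPy as below; the stethoscope signal, sampling rate and pulse-peak locations are synthetic placeholders rather than study data.

```python
# Sketch of building one time-frequency matrix per heartbeat: cut a 1-s window centred on
# each oscillometric pulse peak and compute its spectrogram. Signal and peaks are synthetic.
import numpy as np
from scipy.signal import spectrogram

fs = 2000                                                   # Hz, so 1 s = 2000 samples
rng = np.random.default_rng(2)
stetho = rng.normal(0.0, 1.0, 60 * fs)                      # placeholder stethoscope recording
peak_idx = np.arange(2 * fs, stetho.size - 2 * fs, fs)      # placeholder beat locations (~1/s)

half = fs // 2
windows = [stetho[i - half:i + half] for i in peak_idx]     # beat-centred 1-s segments

# One 2-D time-frequency matrix per beat, e.g. as input features for a CNN classifier.
tf_matrices = [spectrogram(w, fs=fs, nperseg=256, noverlap=128)[2] for w in windows]
print(len(tf_matrices), tf_matrices[0].shape)
```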

  9. An analysis of cross-sectional variations in total household energy requirements in India using micro survey data

    Pachauri, Shonali

    2004-01-01

    Using micro level household survey data from India, we analyse the variation in the pattern and quantum of household energy requirements, both direct and indirect, and the factors causing such variation. An econometric analysis using household survey data from India for the year 1993-1994 reveals that household socio-economic, demographic, geographic, family and dwelling attributes influence the total household energy requirements. There are also large variations in the pattern of energy requirements across households belonging to different expenditure classes. Results from the econometric estimation show that total household expenditure or income level is the most important explanatory variable causing variation in energy requirements across households. In addition, the size of the household dwelling and the age of the head of the household are related to higher household energy requirements. In contrast, the number of members in the household and literacy of the head are associated with lower household energy requirements

  10. An analysis of cross-sectional variations in total household energy requirements in India using micro survey data

    Pachauri, Shonali E-mail: shonali.pachauri@cepe.mavt.ethz.ch

    2004-10-01

    Using micro level household survey data from India, we analyse the variation in the pattern and quantum of household energy requirements, both direct and indirect, and the factors causing such variation. An econometric analysis using household survey data from India for the year 1993-1994 reveals that household socio-economic, demographic, geographic, family and dwelling attributes influence the total household energy requirements. There are also large variations in the pattern of energy requirements across households belonging to different expenditure classes. Results from the econometric estimation show that total household expenditure or income level is the most important explanatory variable causing variation in energy requirements across households. In addition, the size of the household dwelling and the age of the head of the household are related to higher household energy requirements. In contrast, the number of members in the household and literacy of the head are associated with lower household energy requirements.

  11. Morphology of the Iceland Basin Excursion from a spherical harmonics analysis and an iterative Bayesian inversion procedure of sedimentary records

    Lanci, Luca; Kissel, Catherine; Leonhardt, Roman; Laj, Carlo

    2008-08-01

    Based on 5 published marine high-resolution sedimentary records of the Iceland Basin Excursion [IBE; Channell, J.E.T., Hodell, D.A., Lehman, B., 1997. Relative geomagnetic paleointensity and δ18O at ODP Site 983/Gardar Drift, North Atlantic since 350 ka. Earth Planet. Sci. Lett. 153, 103-118; Laj, C., Kissel, C., Roberts, A., 2006. Geomagnetic field behavior during the Iceland Basin and Laschamp geomagnetic excursions: a simple transitional field geometry? Geochem. Geophys. Geosystems. 7, Q03004, doi:10.1029/2005GC001122] dated around 186-190 kyr, we present models of the excursional geomagnetic field at the Earth's surface using two different approaches. First a spherical harmonics analysis is performed after synchronization of the records using their paleointensity profiles. Second, we have used an iterative Bayesian inversion procedure, calibrated using the single volcanic data available so far. Both modeling approaches suffer from imperfections of the paleomagnetic signals and mostly from the still poor geographical distribution of detailed records, presently available only from the North Atlantic and the West Pacific. For these reasons, our modeling results should only be regarded as preliminary models of the geomagnetic field during the IBE, susceptible to improvements when including results from future paleomagnetic studies. Nevertheless, both approaches show distinct similarities and are stable against moderate variations of modeling parameters. The general picture is that of a dipole field undergoing a strong reduction, but remaining higher than the non-dipole field all through the excursional process, except for a very short interval of time corresponding to the dipole minimum at the center of the excursion. On the other hand, some differences exist between the results of the two models with each other and with the real data when the virtual geomagnetic pole (VGP) paths are considered. The non-dipole field does not appear to undergo very significant

  12. Cost-effectiveness analysis comparing the essure tubal sterilization procedure and laparoscopic tubal sterilization.

    Thiel, John A; Carson, George D

    2008-07-01

    To analyze the financial implications of establishing a hysteroscopic sterilization program using the Essure micro-insert tubal sterilization system in an ambulatory clinic. A retrospective cohort study (Canadian Task Force classification Type II-2), in an ambulatory women's health clinic in a tertiary hospital, of 108 women undergoing Essure coil insertion between 2005 and 2006, and 104 women undergoing laparoscopic tubal sterilization for permanent sterilization between 2001 and 2004. The Essure procedures used a 4 mm single channel operative hysteroscope and conscious sedation (fentanyl and midazolam); the laparoscopic tubal sterilizations were completed under general anaesthesia with a 7 mm laparoscope and either bipolar cautery or Filshie clips. Costs associated with the procedure, follow-up, and management of any complications (including nursing, hospital charges, equipment, and disposables) were tabulated. The Essure coils were successfully placed on the first attempt in 103 of 108 women (95%). Three patients required a second attempt to complete placement and two patients required laparoscopic tubal sterilization after an unsuccessful Essure procedure. All 104 laparoscopic tubal sterilizations were completed on the first attempt with no complications reported. The total cost for the 108 Essure procedures, including follow-up evaluation, was $138,996 or $1287 per case. The total cost associated with the 104 laparoscopic tubal sterilization procedures was $148,227 or $1398 per case. The incremental cost-effectiveness ratio was $111. The Essure procedure in an ambulatory setting resulted in a statistically significant cost saving of $111 per sterilization procedure. Carrying out the Essure procedure in an ambulatory setting frees space in the operating room for other types of cases, improving access to care for more patients.
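    As a trivial worked example of the cost comparison above: if effectiveness is taken as one completed sterilization per case, the incremental cost-effectiveness ratio reduces to the difference in reported cost per case.

```python
# Per-case cost comparison using the figures reported above. With effectiveness taken as one
# completed sterilization per case, the incremental ratio reduces to the cost difference per case.
essure_cost_per_case = 1287.0        # USD
laparoscopic_cost_per_case = 1398.0  # USD

saving_per_case = laparoscopic_cost_per_case - essure_cost_per_case
print(f"Cost saving per case with Essure: {saving_per_case:.0f} USD")   # 111 USD
```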

  13. Standardization of clinical enzyme analysis using frozen human serum pools with values assigned by the International Federation of Clinical Chemistry and Laboratory Medicine reference measurement procedures.

    Tong, Qing; Chen, Baorong; Zhang, Rui; Zuo, Chang

    Variation in clinical enzyme analysis, particularly across different measuring systems and laboratories, represents a critical but long-lasting problem in diagnosis. Calibrators with traceability and commutability are urgently needed to harmonize analysis in laboratory medicine. Fresh frozen human serum pools were assigned values for alanine aminotransferase (ALT), aspartate aminotransferase (AST), gamma-glutamyltransferase (GGT), creatine kinase (CK) and lactate dehydrogenase (LDH) by six laboratories with established International Federation of Clinical Chemistry and Laboratory Medicine reference measurement procedures. These serum pools were then used across 76 laboratories as a calibrator in the analysis of five enzymes. Bias and imprecision in the measurement of the five enzymes tested were significantly reduced by using the value-assigned serum in analytical systems with open and single-point calibration. The median (interquartile range) of the relative biases of ALT, AST, GGT, CK and LDH were 2.0% (0.6-3.4%), 0.8% (-0.8-2.3%), 1.0% (-0.5-2.0%), 0.2% (-0.3-1.0%) and 0.2% (-0.9-1.1%), respectively. Before calibration, the interlaboratory coefficients of variation (CVs) in the analysis of patient serum samples were 8.0-8.2%, 7.3-8.5%, 8.1-8.7%, 5.1-5.9% and 5.8-6.4% for ALT, AST, GGT, CK and LDH, respectively; after calibration, the CVs decreased to 2.7-3.3%, 3.0-3.6%, 1.6-2.1%, 1.8-1.9% and 3.3-3.5%, respectively. The results suggest that the use of fresh frozen serum pools significantly improved the comparability of test results in analytical systems with open and single-point calibration.
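    A minimal sketch of the relative-bias and interlaboratory CV calculations referred to above, using hypothetical laboratory results against an assigned reference value.

```python
# Relative bias against an assigned value and interlaboratory CV for one enzyme.
# Laboratory results and the assigned value are hypothetical.
import numpy as np

assigned_value = 100.0                                              # U/L, value assigned to the serum pool
lab_results = np.array([98.5, 101.2, 103.0, 99.4, 102.1, 100.8])    # one result per laboratory

relative_bias = 100.0 * (lab_results - assigned_value) / assigned_value
cv_percent = 100.0 * lab_results.std(ddof=1) / lab_results.mean()

print("relative bias (%):", np.round(relative_bias, 1))
print(f"interlaboratory CV: {cv_percent:.1f}%")
```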

  14. Analysis of blended fuel properties and cycle-to-cycle variation in a diesel engine with a diethyl ether additive

    Ali, Obed M.; Mamat, Rizalman; Masjuki, H.H.; Abdullah, Abdul Adam

    2016-01-01

    Highlights: • Viability of diethyl ether additive to improve palm biodiesel–diesel blend. • Numerical analysis of engine cyclic variation at different additive ratios. • Physicochemical properties of the blends improved with diethyl ether additive. • Blended fuel heating value is significantly affected. • Blended fuel with 4% diethyl ether shows comparable engine cyclic variation to diesel. - Abstract: In this study, the effect of adding small portions of a diethyl ether additive to biodiesel–diesel blended fuel (B30) was investigated. This study includes an evaluation of the fuel properties and a combustion analysis, specifically, an analysis of the cyclic variations in diesel engines. The amount of additive used with B30 was 2%, 4%, 6% and 8% (by volume). The experimental engine test was conducted at 2500 rpm, which produces maximum torque, and the in-cylinder pressure data were collected over 200 consecutive engine cycles for each test. The indicated mean effective pressure time series was analyzed using the coefficient of variation and the wavelet analysis method. The property tests show a slight improvement in density and acid value, and a significant decrease in the viscosity (by 26.5%), pour point (by 4 °C) and cloud point (by 3 °C) of the blended fuel with an 8% additive ratio, compared with blended fuel without additive. However, the heating value is reduced by approximately 4% as the additive ratio increases to 8%. From the wavelet power spectrum, it is observed that intermediate and long-term periodicities appear in diesel fuel, while short-period oscillations become intermittently visible in pure blended fuel. The coefficient of variation for B30 was the lowest and increased as the additive ratios increased, which agrees with the wavelet analysis results. Furthermore, the spectral power increased with an increase in the additive ratio, indicating that the additive has a noticeable effect on increasing the
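    The coefficient of variation of indicated mean effective pressure used in this record is the standard deviation of the cycle-resolved IMEP divided by its mean; a short sketch over a synthetic 200-cycle series follows.

```python
# Coefficient of variation (COV) of indicated mean effective pressure (IMEP) over engine cycles.
# The IMEP series is synthetic; in the study it comes from 200 measured cycles per test.
import numpy as np

rng = np.random.default_rng(3)
imep = rng.normal(7.5, 0.15, 200)     # bar, one value per consecutive engine cycle

cov_imep = 100.0 * imep.std(ddof=1) / imep.mean()
print(f"COV of IMEP = {cov_imep:.2f}%")
```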

  15. Contemporary analysis of the intraoperative and perioperative complications of neurosurgical procedures performed in the sitting position.

    Himes, Benjamin T; Mallory, Grant W; Abcejo, Arnoley S; Pasternak, Jeffrey; Atkinson, John L D; Meyer, Fredric B; Marsh, W Richard; Link, Michael J; Clarke, Michelle J; Perkins, William; Van Gompel, Jamie J

    2017-07-01

    OBJECTIVE Historically, performing neurosurgery with the patient in the sitting position offered advantages such as improved visualization and gravity-assisted retraction. However, this position fell out of favor at many centers due to the perceived risk of venous air embolism (VAE) and other position-related complications. Some neurosurgical centers continue to perform sitting-position cases in select patients, often using modern monitoring techniques that may improve procedural safety. Therefore, this paper reports the risks associated with neurosurgical procedures performed in the sitting position in a modern series. METHODS The authors reviewed the anesthesia records for instances of clinically significant VAE and other complications for all neurosurgical procedures performed in the sitting position between January 1, 2000, and October 8, 2013. In addition, a prospectively maintained morbidity and mortality log of these procedures was reviewed for instances of subdural or intracerebral hemorrhage, tension pneumocephalus, and quadriplegia. Both overall and specific complication rates were calculated in relation to the specific type of procedure. RESULTS In a series of 1792 procedures, the overall complication rate related to the sitting position was 1.45%, which included clinically significant VAE, tension pneumocephalus, and subdural hemorrhage. The rate of any detected VAE was 4.7%, but the rate of VAE requiring clinical intervention was 1.06%. The risk of clinically significant VAE was highest in patients undergoing suboccipital craniotomy/craniectomy with a rate of 2.7% and an odds ratio (OR) of 2.8 relative to deep brain stimulator cases (95% confidence interval [CI] 1.2-70, p = 0.04). Sitting cervical spine cases had a comparatively lower complication rate of 0.7% and an OR of 0.28 as compared with all cranial procedures (95% CI 0.12-0.67, p < 0.01). Sitting cervical cases were further subdivided into extradural and intradural procedures. The rate of

  16. [Analysis of variation of monoterpene glycosides and polyhydroxy compounds in paeoniae radix alba during preliminary processing].

    Xu, Yuan; Liu, Pei; Yan, Hui; Qian, Da-Wei; Duan, Jin-Ao

    2014-05-01

    To investigate the variation of monoterpene glycosides and polyhydroxy compounds in Paeoniae Radix Alba dried by different processing methods. The crude drugs were processed sequentially: washed, the head, tail and fine roots removed, and dried. The samples were divided into eight groups according to whether they were peeled and decocted or not. Each group was dried at 35, 45, 60, 80, 100 or 120 degrees C, sun-dried or shade-dried. An HPLC-PDA method was adopted to determine the content of monoterpene glycoside compounds (paeoniflorin, albiflorin, oxypaeoniflorin and benzoylpaeoniflorin), polyhydroxy compounds (catechin and gallic acid) and benzoic acid. Chromatographic conditions: Phecad C18 column (250 mm x 4.6 mm, 5 microm). A principal component analysis (PCA) method was subsequently used to process the data. The retained content of the seven constituents decreased in the peeled crude drug, and after cooking, monoterpene glycosides and polyhydroxy compounds increased while benzoic acid decreased. It was believed that the relevant enzymes were inactivated during cooking, so that drying temperature showed little influence on the biotransformation. The contents of effective ingredients in Paeoniae Radix Alba are influenced by the drying process. The preferable method appears to be cooking the crude drug before peeling and drying. For processing convenience, it is suggested that the drug be peeled and sliced before being dried.
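    A minimal sketch of the principal component analysis step described above, applied to a hypothetical samples-by-constituents content matrix (constituent names follow the abstract; the values are invented).

```python
# PCA of constituent contents across differently processed samples (values are synthetic).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

constituents = ["paeoniflorin", "albiflorin", "oxypaeoniflorin",
                "benzoylpaeoniflorin", "catechin", "gallic acid", "benzoic acid"]
rng = np.random.default_rng(4)
contents = rng.normal(1.0, 0.2, size=(16, len(constituents)))   # 16 samples x 7 constituents

scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(contents))
print(scores.shape)   # (16, 2): each sample's coordinates on the first two principal components
```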

  17. Mutation scanning analysis of genetic variation within and among Echinococcus species: implications and future prospects.

    Jabbar, Abdul; Gasser, Robin B

    2013-07-01

    Adult tapeworms of the genus Echinococcus (family Taeniidae) occur in the small intestines of carnivorous definitive hosts and are transmitted to particular intermediate mammalian hosts, in which they develop as fluid-filled larvae (cysts) in internal organs (usually lung and liver), causing the disease echinococcosis. Echinococcus species are of major medical importance and also cause losses to the meat and livestock industries, mainly due to the condemnation of infected offal. Decisions regarding the treatment and control of echinococcosis rely on the accurate identification of species and population variants (strains). Conventional, phenetic methods for specific identification have some significant limitations. Despite advances in the development of molecular tools, there has been limited application of mutation scanning methods to species of Echinococcus. Here, we briefly review key genetic markers used for the identification of Echinococcus species and techniques for the analysis of genetic variation within and among populations, and the diagnosis of echinococcosis. We also discuss the benefits of utilizing mutation scanning approaches to elucidate the population genetics and epidemiology of Echinococcus species. These benefits are likely to become more evident following the complete characterization of the genomes of E. granulosus and E. multilocularis.

  18. The variation of root exudates from the hyperaccumulator Sedum alfredii under cadmium stress: metabonomics analysis.

    Qing Luo

    Full Text Available Hydroponic experiments were conducted to investigate the variation of root exudates from the hyperaccumulator Sedum alfredii under the stress of cadmium (Cd). S. alfredii was cultured for 4 days in the nutrient solution spiked with CdCl2 at concentrations of 0, 5, 10, 40, and 400 µM Cd after the pre-culture. The root exudates were collected and analyzed by GC-MS, and 62 compounds were identified. Of these compounds, the orthogonal partial least-squares discrimination analysis (OPLS-DA) showed that there was a distinct difference among the root exudates under different Cd treatments, and 20 compounds responsible for this difference were identified. Changing tendencies in the relative content of these 20 compounds under the different Cd treatments were analyzed. These results indicated that trehalose, erythritol, naphthalene, d-pinitol and n-octacosane might be closely related to Cd stabilization, phosphoric acid, tetradecanoic acid, oxalic acid, threonic acid and glycine could be attributed to Cd mobilization, and mannitol, oleic acid, 3-hydroxybutanoic acid, fructose, octacosanol and ribitol could be related to coping with the Cd stress.

  19. Regional Variations of Credits Obtained by LEED 2009 Certified Green Buildings—A Country Level Analysis

    Peng Wu

    2017-12-01

    Full Text Available Leadership in Energy and Environmental Design (LEED) is one of the most widely recognized green building rating systems. With more than 20% of the projects certified in non-United States (US) countries, LEED's global impact has been increasing and it is critically important for developers and regulatory authorities to understand LEED's performance at the country level to facilitate global implementation. This study therefore aims to investigate the credit achievement pattern of LEED 2009, which is one of the well-developed versions of LEED, by using 4021 certified projects in the US, China, Turkey, and Brazil. The results show that significant differences can be identified in most rating categories, including sustainable sites, water efficiency, energy and atmosphere, indoor environmental quality, and innovation in design. Using a post hoc analysis, country-specific credit allocation patterns are also identified to help developers to understand existing country-specific green building practices. In addition, it is also found that there is unbalanced achievement of regional priority credits. The study offers a useful reference and benchmark for international developers and contractors to understand the regional variations of LEED 2009 and for regulatory authorities, such as the U.S. Green Building Council, to improve the rating system, especially in designing regional priority credits.

  20. Genomic analysis of natural selection and phenotypic variation in high-altitude mongolians.

    Jinchuan Xing

    Full Text Available Deedu (DU) Mongolians, who migrated from the Mongolian steppes to the Qinghai-Tibetan Plateau approximately 500 years ago, are challenged by environmental conditions similar to native Tibetan highlanders. Identification of adaptive genetic factors in this population could provide insight into coordinated physiological responses to this environment. Here we examine genomic and phenotypic variation in this unique population and present the first complete analysis of a Mongolian whole-genome sequence. High-density SNP array data demonstrate that DU Mongolians share genetic ancestry with other Mongolian as well as Tibetan populations, specifically in genomic regions related to adaptation to high altitude. Several selection candidate genes identified in DU Mongolians are shared with other Asian groups (e.g., EDAR), neighboring Tibetan populations (including the high-altitude candidates EPAS1, PKLR, and CYP2E1), as well as genes previously hypothesized to be associated with metabolic adaptation (e.g., PPARG). Hemoglobin concentration, a trait associated with high-altitude adaptation in Tibetans, is at an intermediate level in DU Mongolians compared to Tibetans and Han Chinese at comparable altitude. Whole-genome sequence from a DU Mongolian (Tianjiao1) shows that about 2% of the genomic variants, including more than 300 protein-coding changes, are specific to this individual. Our analyses of DU Mongolians and the first Mongolian genome provide valuable insight into genetic adaptation to extreme environments.

  1. Strategies for integrated analysis of genetic, epigenetic and gene expression variation in cancer: addressing the challenges

    Louise Bruun Thingholm

    2016-02-01

    Full Text Available The development and progression of cancer, a collection of diseases with complex genetic architectures, is facilitated by the interplay of multiple etiological factors. This complexity challenges the traditional single-platform study design and calls for an integrated approach to data analysis. However, integration of heterogeneous measurements of biological variation is a non-trivial exercise due to the diversity of the human genome and the variety of output data formats and genome coverage obtained from the commonly used molecular platforms. This review article will provide an introduction to integration strategies used for analyzing genetic risk factors for cancer. We critically examine the ability of these strategies to handle the complexity of the human genome and also accommodate information about the biological and functional interactions between the elements that have been measured – making the assessment of disease risk against a composite genomic factor possible. The focus of this review is to provide an overview and introduction to the main strategies and to discuss where there is a need for further development.

  2. Extra-binomial variation approach for analysis of pooled DNA sequencing data

    Wallace, Chris

    2012-01-01

    Motivation: The invention of next-generation sequencing technology has made it possible to study the rare variants that are more likely to pinpoint causal disease genes. To make such experiments financially viable, DNA samples from several subjects are often pooled before sequencing. This induces large between-pool variation which, together with other sources of experimental error, creates over-dispersed data. Statistical analysis of pooled sequencing data needs to appropriately model this additional variance to avoid inflating the false-positive rate. Results: We propose a new statistical method based on an extra-binomial model to address the over-dispersion and apply it to pooled case-control data. We demonstrate that our model provides a better fit to the data than either a standard binomial model or a traditional extra-binomial model proposed by Williams and can analyse both rare and common variants with lower or more variable pool depths compared to the other methods. Availability: Package ‘extraBinomial’ is on http://cran.r-project.org/ Contact: chris.wallace@cimr.cam.ac.uk Supplementary information: Supplementary data are available at Bioinformatics Online. PMID:22976083
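    A crude illustration of the over-dispersion that the extra-binomial model addresses: compare the spread of per-pool allele counts with the binomial expectation. This moment-based check is an assumption-laden stand-in, not the authors' model or the 'extraBinomial' package.

```python
# Moment-based over-dispersion check for pooled allele counts at one variant site.
# Counts are hypothetical, and this is only a diagnostic, not the paper's extra-binomial model.
import numpy as np
from scipy import stats

alt_counts = np.array([12, 30, 8, 25, 40, 15])       # alternative-allele reads per pool
depths = np.array([200, 210, 190, 205, 220, 195])    # total reads per pool

p_hat = alt_counts.sum() / depths.sum()
expected = depths * p_hat
dispersion = np.sum((alt_counts - expected) ** 2 / (expected * (1.0 - p_hat)))
dof = len(depths) - 1
p_value = stats.chi2.sf(dispersion, dof)
print(f"dispersion = {dispersion:.1f} (df = {dof}), p = {p_value:.3g}")
# dispersion much larger than df indicates extra-binomial (between-pool) variation.
```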

  3. Evaluation of stroke volume variation obtained by arterial pulse contour analysis to predict fluid responsiveness intraoperatively.

    Lahner, D; Kabon, B; Marschalek, C; Chiari, A; Pestel, G; Kaider, A; Fleischmann, E; Hetz, H

    2009-09-01

    Fluid management guided by oesophageal Doppler monitor has been reported to improve perioperative outcome. Stroke volume variation (SVV) is considered a reliable clinical predictor of fluid responsiveness. Consequently, the aim of the present trial was to evaluate the accuracy of SVV determined by arterial pulse contour (APCO) analysis, using the FloTrac/Vigileo system, to predict fluid responsiveness as measured by the oesophageal Doppler. Patients undergoing major abdominal surgery received intraoperative fluid management guided by oesophageal Doppler monitoring. Fluid boluses of 250 ml each were administered in case of a decrease in corrected flow time (FTc) to 10%. The ability of SVV to predict fluid responsiveness was assessed by calculation of the area under the receiver operating characteristic (ROC) curve. Twenty patients received 67 fluid boluses. Fifty-two of the 67 fluid boluses administered resulted in fluid responsiveness. SVV achieved an area under the ROC curve of 0.512 [confidence interval (CI) 0.32-0.70]. A cut-off point for fluid responsiveness was found for SVV ≥8.5% (sensitivity: 77%; specificity: 43%; positive predictive value: 84%; and negative predictive value: 33%). This prospective, interventional observer-blinded study demonstrates that SVV obtained by APCO, using the FloTrac/Vigileo system, is not a reliable predictor of fluid responsiveness in the setting of major abdominal surgery.
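    A minimal sketch of the ROC analysis used above (area under the curve plus a cut-off), with synthetic SVV values and responder labels standing in for the study data.

```python
# ROC analysis of SVV as a predictor of fluid responsiveness (synthetic data in place of the study's).
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(5)
responder = rng.integers(0, 2, 67)                    # 1 = bolus increased stroke volume
svv = np.where(responder == 1,
               rng.normal(10.0, 3.0, 67),             # SVV (%) in responders
               rng.normal(9.0, 3.0, 67))              # SVV (%) in non-responders

auc = roc_auc_score(responder, svv)
fpr, tpr, thresholds = roc_curve(responder, svv)
best = np.argmax(tpr - fpr)                           # Youden-index cut-off
print(f"AUC = {auc:.2f}, cut-off: SVV >= {thresholds[best]:.1f}%")
```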

  4. An assessment of surface emissivity variation effects on plasma uniformity analysis using IR cameras

    Greenhalgh, Abigail; Showers, Melissa; Biewer, Theodore

    2017-10-01

    The Prototype-Material Plasma Exposure eXperiment (Proto-MPEX) is a linear plasma device operating at Oak Ridge National Laboratory (ORNL). Its purpose is to test plasma source and heating concepts for the planned Material Plasma Exposure eXperiment (MPEX), which has the mission of testing plasma-material interactions under fusion reactor conditions. In this device, material targets will be exposed to high heat fluxes (>10 MW/m2). To characterize the heat fluxes to the target, an IR thermography system is used, recording video at up to 432 frames per second. The data are analyzed to determine the surface temperature on the target in specific regions of interest. The IR analysis has indicated a low level of plasma uniformity; the plasma often deposits more heat to the edge of the plate than the center. An essential parameter for IR temperature calculation is the surface emissivity of the plate (stainless steel). A study has been performed to characterize the variation in the surface emissivity of the plate as its temperature changes and its surface finish is modified by plasma exposure.

  5. Determining Time Variation of Cable Tension Forces in Suspended Bridges Using Time-Frequency Analysis

    Gannon Stromquist-LeVoir

    2018-01-01

    Full Text Available A feasibility study was conducted to develop a novel method to determine the temporal changes of tensile forces in bridge suspender cables using time-frequency analysis of ambient vibration measurements. An analytical model of the suspender cables was developed to evaluate the power spectral density (PSD) function of a cable with consideration of cable flexural stiffness. A discrete-time, short-time Fourier transform (STFT) was utilized to analyze the recorded acceleration histories in both time and frequency domains. A mathematical convolution of the analytical PSD function and time-frequency data was completed to evaluate changes in cable tension force over time. The method was implemented using acceleration measurements collected from an in-service steel arch bridge with a suspended deck to calculate the temporal variation in cable forces from the vibration measurements. The observations served as proof of concept that the proposed method may be used for cable fatigue life calculations and bridge weigh-in-motion studies.
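    A sketch of the time-frequency step this record describes: a short-time Fourier transform of a synthetic acceleration record, tracking the dominant frequency per time slice and converting it to tension with the taut-string relation (which neglects the flexural stiffness the authors account for); the cable properties and signal are placeholders.

```python
# STFT of a synthetic acceleration record: track the dominant frequency in each time slice and
# convert it to tension with the taut-string relation T = 4*m*L^2*f1^2 (flexural stiffness neglected).
# The signal, mass per unit length m and cable length L are hypothetical placeholders.
import numpy as np
from scipy.signal import stft

fs = 200.0                                                 # Hz sampling rate
t = np.arange(0, 120, 1 / fs)
f1_true = 2.5 + 0.05 * np.sin(2 * np.pi * t / 60)          # slowly drifting fundamental (Hz)
phase = 2 * np.pi * np.cumsum(f1_true) / fs
accel = np.sin(phase) + 0.1 * np.random.default_rng(6).normal(size=t.size)

freqs, times, Z = stft(accel, fs=fs, nperseg=2048)
dominant = freqs[np.argmax(np.abs(Z), axis=0)]             # dominant frequency per time slice (Hz)

m, L = 50.0, 20.0                                          # kg/m and m (assumed cable properties)
tension = 4.0 * m * L**2 * dominant**2                     # N, taut-string approximation
print(tension[:5])
```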

  6. International variation in adherence to referral guidelines for suspected cancer: a secondary analysis of survey data.

    Nicholson, Brian D; Mant, David; Neal, Richard D; Hart, Nigel; Hamilton, Willie; Shinkins, Bethany; Rubin, Greg; Rose, Peter W

    2016-02-01

    Variation in cancer survival persists between comparable nations and appears to be due, in part, to primary care practitioners (PCPs) having different thresholds for acting definitively in response to cancer-related symptoms. To explore whether cancer guidelines, and adherence to them, differ between jurisdictions and impact on PCPs' propensity to take definitive action on cancer-related symptoms. A secondary analysis of survey data from six countries (10 jurisdictions) participating in the International Cancer Benchmarking Partnership. PCPs' responses to five clinical vignettes presenting symptoms and signs of lung (n = 2), colorectal (n = 2), and ovarian cancer (n = 1) were compared with investigation and referral recommendations in cancer guidelines. Nine jurisdictions had guidelines covering the two colorectal vignettes. For the lung vignettes, although eight jurisdictions had guidelines for the first, the second was covered by a Swedish guideline alone. Only the UK and Denmark had an ovarian cancer guideline. Survey responses of 2795 PCPs (crude response rate: 12%) were analysed. Guideline adherence ranged from 20-82%. UK adherence was lower than in other jurisdictions for the lung vignette covered by the guidance (47% versus 58%; P<0.01) but similar (45% versus 46%) or higher (67% versus 38%; P<0.01) for the two colorectal vignettes. PCPs took definitive action least often when a guideline recommended a non-definitive action or made no recommendation. UK PCPs adhered to recommendations for definitive action less than their counterparts (P<0.01). There was no association between jurisdictional guideline adherence and 1-year survival. Cancer guideline content is variable between similarly developed nations, and poor guideline adherence does not explain differential survival. Guidelines that fail to cover high-risk presentations or that recommend non-definitive action may reduce definitive diagnostic action. © British Journal of General Practice 2016.

  7. Simple estimation procedures for regression analysis of interval-censored failure time data under the proportional hazards model.

    Sun, Jianguo; Feng, Yanqin; Zhao, Hui

    2015-01-01

    Interval-censored failure time data occur in many fields including epidemiological and medical studies as well as financial and sociological studies, and many authors have investigated their analysis (Sun, The statistical analysis of interval-censored failure time data, 2006; Zhang, Stat Modeling 9:321-343, 2009). In particular, a number of procedures have been developed for regression analysis of interval-censored data arising from the proportional hazards model (Finkelstein, Biometrics 42:845-854, 1986; Huang, Ann Stat 24:540-568, 1996; Pan, Biometrics 56:199-203, 2000). For most of these procedures, however, one drawback is that they involve estimation of both regression parameters and baseline cumulative hazard function. In this paper, we propose two simple estimation approaches that do not need estimation of the baseline cumulative hazard function. The asymptotic properties of the resulting estimates are given, and an extensive simulation study is conducted and indicates that they work well for practical situations.
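
    For readers unfamiliar with the setting, the standard proportional hazards model and the likelihood contribution of an interval-censored observation can be written as below; the notation is chosen here for illustration and follows common usage rather than the paper itself.

        \lambda(t \mid Z) = \lambda_0(t)\, e^{\beta' Z}, \qquad
        S(t \mid Z) = \exp\{-\Lambda_0(t)\, e^{\beta' Z}\}, \qquad
        L_i(\beta, \Lambda_0) = S(L_i \mid Z_i) - S(R_i \mid Z_i),

    where the failure time T_i is only known to lie in the observed interval (L_i, R_i]; the dependence of this likelihood on the baseline cumulative hazard \Lambda_0 is exactly what the proposed estimation approaches avoid.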

  8. Pragmatic evaluation of the Toyota Production System (TPS) analysis procedure for problem solving with entry-level nurses

    Lukasz Maciej Mazur

    2008-12-01

    Full Text Available Medication errors occurring in hospitals are a growing national concern. These medication errors and their related costs (or wastes) are seen as major factors leading to increased patient safety risks and increased waste in the hospital setting. This article presents a study in which sixteen entry-level nurses utilized a Toyota Production System (TPS) analysis procedure to solve medication delivery problems at one community hospital. The objective of this research was to study and evaluate the TPS analysis procedure for problem solving with entry-level nurses. Personal journals, focus group discussions, and a survey study were used to collect data about entry-level nurses’ perceptions of using the TPS problem solving approach to study medication delivery. A regression analysis was used to identify characteristics that enhance problem solving efforts. In addition, propositions for effective problem solving by entry-level nurses to aid in the reduction of medication errors in healthcare delivery settings are offered.

  9. Evaluation of six sample preparation procedures for qualitative and quantitative proteomics analysis of milk fat globule membrane.

    Yang, Yongxin; Anderson, Elizabeth; Zhang, Sheng

    2018-04-12

    Proteomic analysis of membrane proteins is challenged by protein solubility and detergent incompatibility with MS analysis. No single protocol can comprehensively characterize the proteome of a membrane fraction. Here, we used cow milk fat globule membrane (MFGM) proteome analysis to assess six sample preparation procedures, including one in-gel and five in-solution digestion approaches, prior to LC-MS/MS analysis. The largest numbers of MFGM proteins were identified by the suspension trapping (S-Trap) and filter-aided sample preparation (FASP) methods, followed by acetone precipitation without clean-up of tryptic peptides. The highest average protein coverage was achieved by the Chloroform/MeOH, in-gel and S-Trap methods. The most distinct proteins were identified by the FASP method, followed by S-Trap. Analyses by Venn diagram, principal component analysis, hierarchical clustering and the abundance ranking of quantified proteins highlight differences in the MFGM fraction across all sample preparation procedures. These results reveal the biased protein/peptide losses that occurred in each protocol. In this study, we found several novel proteins that were not observed previously by in-depth proteomic characterization of the MFGM fraction in milk. Thus, a combination of multiple procedures with orthogonal sample preparation properties was demonstrated to improve the protein sequence coverage and expression level accuracy of membrane samples. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
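
    The sketch below illustrates the kind of principal-component and hierarchical-clustering comparison mentioned above on a protein-abundance matrix (rows = protocol replicates, columns = proteins); the matrix, the replicate layout and the linkage settings are placeholder assumptions, not the study's data or pipeline.

        # PCA and hierarchical clustering of a (simulated) protein-abundance matrix.
        import numpy as np
        from sklearn.decomposition import PCA
        from scipy.cluster.hierarchy import linkage

        rng = np.random.default_rng(0)
        abundances = rng.lognormal(mean=2.0, sigma=1.0, size=(18, 500))  # 6 protocols x 3 replicates
        log_ab = np.log2(abundances + 1.0)

        scores = PCA(n_components=2).fit_transform(log_ab)               # protocol separation in 2 PCs
        tree = linkage(log_ab, method="average", metric="correlation")   # replicate clustering
        print(scores[:3])
        # scipy.cluster.hierarchy.dendrogram(tree) would draw the tree when plotting is available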

  10. Analysis of the energetic metabolism in cyclic Bedouin goats (Capra hircus): Nychthemeral and seasonal variations of some haematochemical parameters in relation with body and ambient temperatures.

    Malek, Mouna; Amirat, Zaina; Khammar, Farida; Khaldoun, Mounira

    2016-08-01

    Several studies have examined changes in some haematochemical parameters as a function of the different physiological status (cyclic, pregnant and lactating) of goats, but no relevant literature has exhaustively investigated these variations from anestrous to estrous stages in cyclic goats. In this paper, we report nychthemeral and seasonal variations in ambient and body temperatures, and in some haematochemical parameters (glycemia, cholesterolemia, triglyceridemia, creatininemia and uremia) measured during summer, winter and spring, in seven (7) experimental cyclic female Bedouin goats (Capra hircus) living in the Béni-Abbès region (Algerian Sahara desert). Cosinor rhythmometry procedure was used to determine the rhythmic parameters of ambient temperature and haematochemical parameters. To determine the effect of time of day on the rhythmicity of the studied parameters, as well as their seasonality, repeated measure analysis of variance (ANOVA) was applied. The results showed that in spite of the nychthemeral profile presented by the ambient temperature for each season, the body temperature remained in a narrow range, thus indicating a successful thermoregulation. The rhythmometry analysis showed a circadian rhythmicity of ambient temperature and haematochemical parameters with diurnal acrophases. A statistically significant effect of the time of day was shown on all studied haematochemical parameters, except on creatininemia. It was also found that only uremia, cholesterolemia and triglyceridemia followed the seasonal sexual activity of the studied ruminant. This study demonstrated the good physiological adaptation developed by this breed in response to the harsh climatic conditions of its natural environment. Copyright © 2016 Elsevier Ltd. All rights reserved.
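
    For reference, a minimal single-cosinor fit (24 h period) can be carried out by linear least squares as sketched below, yielding the mesor, amplitude and acrophase used in rhythmometry; the time points and measurements are placeholders, not the goat data, and the published analysis may have used a dedicated cosinor package.

        # Single-cosinor fit: y(t) = M + A*cos(2*pi*t/24 + phi), fitted via its linear form.
        import numpy as np

        period = 24.0
        t = np.array([0, 3, 6, 9, 12, 15, 18, 21], dtype=float)   # clock hours (placeholder)
        y = np.array([5.1, 5.0, 5.6, 6.2, 6.5, 6.1, 5.6, 5.2])    # e.g. glycemia values (placeholder)

        X = np.column_stack([np.ones_like(t),
                             np.cos(2 * np.pi * t / period),
                             np.sin(2 * np.pi * t / period)])
        mesor, beta, gamma = np.linalg.lstsq(X, y, rcond=None)[0]
        amplitude = np.hypot(beta, gamma)
        acrophase = np.arctan2(-gamma, beta)                       # radians; peak time = -acrophase*period/(2*pi)
        print(mesor, amplitude, acrophase)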

  11. Developmental stage of strongyle eggs affects the outcome variations of real-time PCR analysis

    Andersen, Ulla Vestergaard; Haakansson, I. T.; Roust, Tina

    2013-01-01

    extent developmental stages can affect the variation of diagnostic test results. This study investigated the influence of developmental stages of strongyle eggs on the variation of real-time polymerase chain reaction (PCR) results. Mixed-species strongyle eggs were obtained from the faeces of a naturally...

  12. Natural variation and QTL analysis for cationic mineral content in seeds of Arabidopsis thaliana.

    Vreugdenhil, D.; Aarts, M.G.M.; Koornneef, M.; Nelissen, H.J.M.; Ernst, W.H.O.

    2004-01-01

    Naturally occurring genetic variation for contents of cationic minerals in seeds of Arabidopsis thaliana was studied by screening a series of accessions (ecotypes) for Ca, Fe, K, Mg, Mn, Na, Zn, and for total contents of P. Variation was observed for all minerals and correlations between contents of

  13. Genetic analysis of mitochondrial DNA control region variations in four tribes of Khyber Pakhtunkhwa, Pakistan.

    Bhatti, Shahzad; Aslamkhan, M; Abbas, Sana; Attimonelli, Marcella; Aydin, Hikmet Hakan; de Souza, Erica Martinha Silva

    2017-09-01

    Due to its geostrategic position at the crossroads of Asia, Pakistan has played a pivotal role in successive human migratory events, both prehistoric and historic. This human movement became possible through an ancient overland network of trails called "The Silk Route" linking Asia Minor, the Middle East, China, Central Asia and Southeast Asia. This study analyzed complete mitochondrial control region sequences from 100 individuals of four major Pashtun tribes, namely the Bangash, Khattak, Mahsud and Orakzai, in the province of Khyber Pakhtunkhwa, Pakistan. All Pashtun tribes revealed high genetic diversity, comparable to that of other Central Asian, Southeast Asian and European populations. The configuration of genetic variation and heterogeneity was further revealed through multidimensional scaling, principal component analysis and phylogenetic analysis. The results showed that the Pashtun are a composite mosaic of West Eurasian ancestry of numerous geographic origins. They received substantial gene flow during different invasive movements and show a strong element of Western provenance. The most common haplogroups reported in this study are the South Asian haplogroups M (28%) and R (8%), whereas West Asian haplogroups are present at high frequencies (67%) and are widespread overall: HV (15%), U (17%), H (9%), J (8%), K (8%), W (4%), N (3%) and T (3%). Moreover, we examined the previously unexplored genetic connection between Ashkenazi Jews and the Pashtun. The presence of the specific haplotypes J1b (4%) and K1a1b1a (5%) pointed to a genetic link with Jewish populations in the Khattak tribe. This was a result of an ancient genetic influx in the early Neolithic period that led to the formation of a diverse genetic substratum in present-day Pashtun.

  14. Variation in worldwide incidence of amyotrophic lateral sclerosis: a meta-analysis.

    Marin, Benoît; Boumédiene, Farid; Logroscino, Giancarlo; Couratier, Philippe; Babron, Marie-Claude; Leutenegger, Anne Louise; Copetti, Massimilano; Preux, Pierre-Marie; Beghi, Ettore

    2017-02-01

    To assess the worldwide variation in amyotrophic lateral sclerosis (ALS) incidence, we performed a systematic review and meta-analysis of population-based data published to date. We reviewed Medline and Embase up to June 2015 and included all population-based studies of newly diagnosed ALS cases using multiple sources for case ascertainment. ALS crude and standardized incidence (standardized on age and sex to the US 2010 population) were calculated. Random-effects meta-analysis and meta-regression were performed using the subcontinent as the main study-level covariate. Sources of heterogeneity related to the characteristics of the study population and the study methodology were investigated. Among 3216 records, 44 studies were selected, covering 45 geographical areas in 11 subcontinents. A total of 13 146 ALS cases and 825 million person-years of follow-up (PYFU) were considered. The overall pooled worldwide crude ALS incidence was 1.75 (1.55-1.96)/100 000 PYFU, and 1.68 (1.50-1.85)/100 000 PYFU after standardization. Heterogeneity was identified in ALS standardized incidence between North Europe [1.89 (1.46-2.32)/100 000 PYFU] and East Asia [0.83 (0.42-1.24)/100 000 PYFU, China and Japan, P = 0.001] or South Asia [0.73 (0.58-0.89)/100 000 PYFU, Iran, P = 0.02]. Conversely, homogeneous rates have been reported in populations from Europe, North America and New Zealand [pooled ALS standardized incidence of 1.81 (1.66-1.97)/100 000 PYFU for those areas]. This review confirms a heterogeneous worldwide distribution of ALS and sets the scene for a collaborative study involving a wide international consortium to investigate the link between ancestry, environment and ALS incidence. © The Author 2016. Published by Oxford University Press on behalf of the International Epidemiological Association
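
    As one common way to pool such study-level estimates, the sketch below applies DerSimonian-Laird random-effects weighting to hypothetical incidence values; the rates, standard errors and the choice of this particular estimator are assumptions for illustration, and the published meta-analysis may differ in detail.

        # DerSimonian-Laird random-effects pooling of study-level incidence estimates.
        import numpy as np

        rate = np.array([1.9, 2.1, 0.8, 0.7, 1.6])   # per 100 000 PYFU (hypothetical)
        se = np.array([0.20, 0.25, 0.15, 0.10, 0.30])

        w = 1.0 / se**2                               # fixed-effect (inverse-variance) weights
        fixed = np.sum(w * rate) / np.sum(w)
        Q = np.sum(w * (rate - fixed) ** 2)           # Cochran's Q heterogeneity statistic
        k = rate.size
        tau2 = max(0.0, (Q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))

        w_star = 1.0 / (se**2 + tau2)                 # random-effects weights
        pooled = np.sum(w_star * rate) / np.sum(w_star)
        ci = pooled + np.array([-1.96, 1.96]) / np.sqrt(np.sum(w_star))
        print(pooled, ci, tau2)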

  15. Quantitative analysis of structural variations in corpus callosum in adults with multiple system atrophy (MSA)

    Bhattacharya, Debanjali; Sinha, Neelam; Saini, Jitender

    2017-03-01

    Multiple system atrophy (MSA) is a rare, incurable, progressive neurodegenerative disorder that affects the nervous system and movement and poses a considerable diagnostic challenge to medical researchers. Since the corpus callosum (CC) is the largest white matter structure in the brain, enabling inter-hemispheric communication, quantification of callosal atrophy may provide vital information at the earliest possible stages. The main objective is to identify differences in CC structure in this disease, based on quantitative analysis of the pattern of callosal atrophy. We report results of quantification of structural changes in regional anatomical thickness, area and length of the CC in patient groups with MSA with respect to healthy controls. The method involves isolating and parcellating the mid-sagittal CC into 100 segments along its length and measuring the width of each segment. It also measures areas within the geometrically defined five callosal compartments of the well-known Witelson and Hofer-Frahm schemes. For quantification, statistical tests are performed on these different callosal measurements. From the statistical analysis, it is concluded that, compared with healthy controls, width is reduced drastically throughout the CC for the MSA group, and the changes in area and length are also significant for MSA. The study was further extended to check whether any significant difference in thickness is found between the two variants of MSA, the Parkinsonian MSA and Cerebellar MSA groups, using the same methodology. However, for the area and length of these two MSA subgroups, no substantial difference was obtained. The study was performed on twenty subjects in each of the control and MSA groups, who had T1-weighted MRI.

  16. Molecular Karyotyping and Exome Analysis of Salt-Tolerant Rice Mutant from Somaclonal Variation

    Thanikarn Udomchalothorn

    2014-11-01

    Full Text Available LPT123-TC171 is a salt-tolerant (ST) and drought-tolerant (DT) rice line that was selected from somaclonal variation of the original Leuang Pratew 123 (LPT123) rice cultivar. The objective of this study was to identify the changes in the rice genome that possibly lead to ST and/or DT characteristics. The genomes of LPT123 and LPT123-TC171 were comparatively studied at four levels: whole chromosomes, chromosome structure (including telomeres), transposable elements, and DNA sequence changes, using next-generation sequencing analysis. Compared with LPT123, the LPT123-TC171 line displayed no changes in ploidy level but had a significant deficiency of chromosome ends (telomeres). The functional genome analysis revealed new aspects of the genome response to the in vitro cultivation condition, where exome sequencing revealed the molecular spectrum and pattern of changes in the somaclonal variant compared with the parental LPT123 cultivar. Mutation detection was performed, and the degree of mutation was evaluated to estimate the impact of mutagenesis on protein functions. Mutations within known genes responding to both drought and salt stress were detected at 493 positions, while mutations within genes responding only to salt stress were found at 100 positions. The possible functions of the mutated genes contributing to salt or drought tolerance are discussed. It was concluded that the ST and DT characteristics in the somaclonal variant line resulted from base changes in the salt- and drought-responsive genes rather than from changes in chromosome structure or large duplications or deletions in specific regions of the genome.

  17. Procedural learning is impaired in dyslexia: Evidence from a meta-analysis of serial reaction time studies

    Lum, Jarrad A.G.; Ullman, Michael T.; Conti-Ramsden, Gina

    2013-01-01

    A number of studies have investigated procedural learning in dyslexia using serial reaction time (SRT) tasks. Overall, the results have been mixed, with evidence of both impaired and intact learning reported. We undertook a systematic search of studies that examined procedural learning using SRT tasks, and synthesized the data using meta-analysis. A total of 14 studies were identified, representing data from 314 individuals with dyslexia and 317 typically developing control participants. The results indicate that, on average, individuals with dyslexia have worse procedural learning abilities than controls, as indexed by sequence learning on the SRT task. The average weighted standardized mean difference (the effect size) was found to be 0.449 (CI95: .204, .693) and was statistically significant, consistent with impaired procedural learning in dyslexia. PMID:23920029

  18. Radiologically assisted percutaneous gastrostomy or enterostomy. A retrospective analysis of 90 procedures

    Mildenberger, P.; Oberholzer, K.; Kauczor, H.U.; Dueber, C.; Kurz, S.; Schild, H.; Thelen, M.

    1996-01-01

    To analyse retrospectively the indications, technique and results of radiologically assisted gastrostomies or enterostomies. Radiologically assisted gastrostomies or enterostomies were performed in 90 patients, mostly with high-grade obstructions due to head and neck tumours or oesophageal tumours. 19/90 patients had had previous resection of the oesophagus or gastric operations. A catheter was successfully introduced in all patients. In 11/90 patients this was followed by a two-stage procedure. In 16/90 patients puncture was performed under CT control because of some anatomical peculiarity. Serious complications occurred in 9/90 patients, but surgery was necessary in only one. There were no deaths resulting from the procedure, but mortality after 30 days was 6.7%. Radiologically assisted gastrostomies or enterostomies are a reliable and effective form of treatment even amongst difficult patients. The results and complications are comparable to those of endoscopic procedures. (orig.)

  19. Damage detection and quantification using mode curvature variation on framed structures: analysis of the preliminary results

    Iacovino, Chiara; Ditommaso, Rocco; Auletta, Gianluca; Ponzo, Felice C.

    2017-04-01

    Continuous monitoring based on vibrational identification methods is increasingly employed to evaluate the state of health of existing buildings after strong-motion earthquakes. Different damage identification methods are based on variations of damage indices defined in terms of modal parameters (eigenfrequencies, mode shapes, and modal damping) and/or non-modal parameters. Most simplified methods for structural health monitoring and damage detection are based on evaluating the evolution of the dynamic characteristics associated with the fundamental mode of vibration of a monitored structure. The aim of this work is the upgrade of an existing method for damage localization on framed structures during a moderate/destructive earthquake. The existing version of the method is based on comparing the geometric characteristics (with particular reference to the mode curvature) exhibited by the structure, related to the fundamental mode of vibration, before and during an earthquake. The approach uses a nonlinear filter, the band-variable filter, based on the Stockwell transform and able to extract the nonlinear response of each mode of vibration. The new version of the method makes it possible to quantify damage on the monitored structure by linking the mode curvature variation with the maximum inter-story drift. This paper shows the preliminary results obtained from several simulations on nonlinear numerical models of reinforced concrete framed structures, designed for gravity loads only, without and with infill panels. Furthermore, a correlation between maximum mode curvature difference and maximum inter-story drift has been defined for the different numerical models in order to quantify the structural damage. Acknowledgements This study was partially funded by the Italian Department of Civil Protection within the project DPC-RELUIS 2016 - RS4 ''Seismic observatory of structures and health monitoring'' and by the
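
    To make the central quantity concrete, the sketch below computes mode-shape curvature by finite differences and forms a simple curvature-change index between two states; the mode shapes, storey height and the bare difference index are illustrative assumptions, not the band-variable-filter method itself.

        # Mode-shape curvature by finite differences and a simple curvature-change index.
        import numpy as np

        storey_height = 3.0                                            # m, assumed uniform
        phi = np.array([0.00, 0.18, 0.39, 0.62, 0.83, 1.00])           # fundamental mode shape (reference state)
        phi_damaged = np.array([0.00, 0.20, 0.45, 0.66, 0.85, 1.00])   # identified during the event

        curvature = np.gradient(np.gradient(phi, storey_height), storey_height)
        curv_damaged = np.gradient(np.gradient(phi_damaged, storey_height), storey_height)
        damage_index = np.abs(curv_damaged - curvature)                # larger change suggests local damage
        print(np.argmax(damage_index))                                 # storey with the largest curvature change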

  20. [Genetic variation and differentiation in striped field mouse Apodemus agrarius inferred from RAPD-PCR analysis].

    Atopkin, D M; Bogdanov, A S; Chelomina, G N

    2007-06-01

    Genetic variation and differentiation of the trans-Palearctic species Apodemus agrarius (striped field mouse), whose range consists of two large isolates, European-Siberian and Far Eastern-Chinese, were examined using RAPD-PCR analysis. Material from both parts of the range was examined (41 individuals of A. agrarius from 18 localities in Russia, Ukraine, Moldova, and Kazakhstan); the Far Eastern part was represented by samples from the Amur region, Khabarovsk krai, and Primorye (Russia). Differences in the frequencies of polymorphic RAPD loci were found between the European-Siberian and the Far Eastern population groups of the striped field mouse. No "fixed" differences between them in RAPD spectra were found, and none of the statistical methods used made it possible to distinguish with absolute certainty animals from the two parts of the range. Thus, genetic isolation of the European-Siberian and the Far Eastern population groups of A. agrarius is not strict. These results support the hypothesis of recent dispersal of the striped field mouse from the East to the West Palearctic (during the Holocene climatic optimum, 7000 to 4500 years ago) and subsequent disjunction of the species range (not earlier than 4000-4500 years ago). The Far Eastern population group is more polymorphic than the European-Siberian one, and genetic heterogeneity is more uniformly distributed within it. This is probably explained both by historical events during the species' dispersal in the past and by different environmental conditions in different parts of its range. The Far Eastern population group inhabits the area close to the distribution center of A. agrarius. It is likely that this group preserved the genetic variation of the formerly integral ancestral form, while some amount of genetic polymorphism could have been lost during the species' colonization of the Siberian and European areas. To date, the settlement density and population number in general are higher than within the European

  1. Influence of the derivatization procedure on the results of the gaschromatographic fatty acid analysis of human milk and infant formulae.

    Kohn, G; van der Ploeg, P; Möbius, M; Sawatzki, G

    1996-09-01

    Many different analytical procedures for fatty acid analysis of infant formulae and human milk have been described. The objective was to study possible pitfalls in the use of different acid-catalyzed procedures compared with a base-catalyzed procedure based on sodium methoxide in methanol. The influence of the different methods on the relative fatty acid composition (wt% of total fatty acids) and the total fatty acid recovery rate (expressed as % of total lipids) was studied in two experimental LCP-containing formulae and a human milk sample. MeOH/HCl procedures were found to result in incomplete transesterification of triglycerides if an additional nonpolar solvent such as toluene or hexane is not added and a water-free preparation is not guaranteed. In infant formulae, the low transesterification of triglycerides (up to only 37%) could result in a 100% overestimation of the relative amount of LCP if these fatty acids derive primarily from phospholipids. This is the case in infant formulae containing egg lipids as raw materials. In formulae containing fish oils and in human milk, the incomplete esterification results in incorrect absolute amounts of fatty acids but has no remarkable effect on the relative fatty acid distribution. This is because in these samples LCP are primarily bound to triglycerides. Furthermore, in formulae based on butterfat, the derivatization procedure should be designed so that losses of short-chain fatty acids during evaporation steps are avoided. The procedure based on sodium methoxide was found to result in a satisfactory (about 90%) conversion of formula lipids and reliable contents of all individual fatty acids. Because human milk may contain a high amount of free fatty acids, which are not methylated by sodium methoxide, caution is expressed about the use of this reagent for fatty acid analysis of mother's milk. It is concluded that accurate fatty acid analysis of infant formulae and human milk requires a careful

  2. Describing shell shape variations and sexual dimorphism of Golden Apple Snail, Pomacea caniculata (Lamarck, 1822), using geometric morphometric analysis

    C.C. Cabuga

    2017-09-01

    Full Text Available Pomacea caniculata, or Golden Apple Snail (GAS), is a rice pest in the Philippines and elsewhere in Asia. Geographic location also contributes to its increasing populations, making it invasive in freshwater habitats and rice field areas. This study was conducted to describe shell shape variation and sexual dimorphism among populations of P. caniculata. A total of 180 individuals were randomly collected from three lakes in Esperanza, Agusan del Sur (Lake Dakong Napo, Lake Oro, and Lake Cebulan), with 60 samples (30 males and 30 females) from each lake. To determine variation and sexual dimorphism in shell shape, landmark coordinates were subjected to relative warp analysis, and the resulting data were analysed with Multivariate Analysis of Variance (MANOVA), Principal Component Analysis (PCA) and Canonical Variate Analysis (CVA). The results show statistically significant differences (P<0.05) between males and females in the dorsal and ventral/apertural portions of the shell. Spire height, body size, and shell opening shape also show significant variation between the sexes. These phenotypic distinctions could be associated with geographic isolation, predation and the nutrient component of the gastropods. The results demonstrate the value of geometric morphometric methods in describing sexual dimorphism in the shell shape of P. caniculata.
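
    The landmark workflow behind relative warp analysis can be sketched as a Procrustes superimposition followed by a PCA of the aligned coordinates, as below; the random landmark configurations, the single-reference alignment (rather than iterative generalized Procrustes analysis) and the library calls are simplifying assumptions, not the authors' software.

        # Procrustes alignment of landmark configurations followed by PCA ("relative warps").
        import numpy as np
        from scipy.spatial import procrustes
        from sklearn.decomposition import PCA

        rng = np.random.default_rng(1)
        n_specimens, n_landmarks = 30, 12
        shapes = rng.normal(size=(n_specimens, n_landmarks, 2))   # placeholder 2-D landmark sets

        reference = shapes[0]
        aligned = np.empty_like(shapes)
        for i, s in enumerate(shapes):
            _, aligned[i], _ = procrustes(reference, s)           # translate/scale/rotate onto the reference

        flat = aligned.reshape(n_specimens, -1)
        warps = PCA(n_components=2).fit_transform(flat)           # shape axes used for downstream tests
        print(warps[:3])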

  3. Finite element procedures for coupled linear analysis of heat transfer, fluid and solid mechanics

    Sutjahjo, Edhi; Chamis, Christos C.

    1993-01-01

    Coupled finite element formulations for fluid mechanics, heat transfer, and solid mechanics are derived from the conservation laws for energy, mass, and momentum. To model the physics of interactions among the participating disciplines, the linearized equations are coupled by combining domain and boundary coupling procedures. An iterative numerical solution strategy is presented to solve the equations, with partitioned temporal discretization implemented.

  4. A qualitative analysis of the determinants in the choice of a French journal reviewing procedures

    Morge, Ludovic

    2015-12-01

    Between 1993 and 2010, two French journals (Aster and Didaskalia), coming from different backgrounds but belonging to the same institution, published papers on research in science and technology education. The merging of these journals made it necessary to compare the different reviewing procedures used by each. This merger occurred at a time when research is becoming increasingly international, which partly determines some of the choices of reviewing procedure. For a francophone international journal to survive, it needs to take this internationalization into account in a reasoned manner. The author of this article, editor-in-chief of RDST (Recherches en Didactique des Sciences et des Technologies), the journal resulting from the merger, analyses the social, cultural and pragmatic determinants that shaped the choices made in the reviewing procedures. This paper describes how this diversity of factors led to abandoning the idea of a standard reviewing procedure that would be valid for all journals.

  5. Rule of law and participation: a normative analysis of internationalised rulemaking as composite procedures

    Mendes, J.M.

    2013-01-01

    Procedural standards of participation have the capacity to structure and constrain the exercise of authority. Focusing on the way decisions are formed, this paper argues that the depletion of such standards in processes of reception of trans- and international decisions within the EU potentially

  6. A smart Monte Carlo procedure for production costing and uncertainty analysis

    Parker, C.; Stremel, J.

    1996-01-01

    Electric utilities using chronological production costing models to decide whether to buy or sell power over the next week or next few weeks need to determine potential profits or losses under a number of uncertainties. A large amount of money can be at stake--often $100,000 a day or more--and one party to the sale must always take on the risk. In the case of fixed price ($/MWh) contracts, the seller accepts the risk. In the case of cost-plus contracts, the buyer must accept the risk. So, modeling uncertainty and understanding the risk accurately can improve the competitive edge of the user. This paper investigates an efficient procedure for representing risks and costs from capacity outages. Typically, production costing models use an algorithm based on some form of random number generator to select resources as available or on outage. These algorithms allow experiments to be repeated and gains and losses to be observed in a short time. The authors perform several experiments to examine the capability of three unit outage selection methods and measure their results. Specifically, a brute force Monte Carlo procedure, a Monte Carlo procedure with Latin Hypercube sampling, and a Smart Monte Carlo procedure with cost stratification and directed sampling are examined
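
    To illustrate the contrast between the first two of those sampling schemes, the sketch below draws unit-availability states with plain Monte Carlo and with Latin Hypercube sampling from scipy; the units, forced outage rates, capacities and the simple availability rule are hypothetical and stand in for the production-costing model.

        # Plain Monte Carlo versus Latin Hypercube draws of unit outage states.
        import numpy as np
        from scipy.stats import qmc

        forced_outage_rate = np.array([0.05, 0.08, 0.12, 0.10])    # one per generating unit (assumed)
        capacity = np.array([400.0, 300.0, 200.0, 150.0])          # MW (assumed)
        n_draws = 1000

        def available_capacity(u):
            """Total available capacity given uniform draws u in [0, 1) per unit."""
            available = u >= forced_outage_rate                    # unit on outage when its draw < FOR
            return (available * capacity).sum(axis=1)

        mc = available_capacity(np.random.default_rng(42).random((n_draws, 4)))      # brute force
        lhs = available_capacity(qmc.LatinHypercube(d=4, seed=42).random(n_draws))   # stratified
        print(mc.mean(), mc.std(), lhs.mean(), lhs.std())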

  7. An analysis of the procedural components of supported employment programs associated with employment outcomes.

    McDonnell, J; Nofs, D; Hardman, M; Chambless, C

    1989-01-01

    This study examined the relation between the procedural components of supported employment programs and employment outcomes for 120 individuals with disabilities. These individuals were involved in supported employment programs established through the Utah Supported Employment Project. The results suggest that successful implementation of supported employment services led to ongoing employment of study participants in community work sites, increased wages, and ongoing opportunities for worker...

  8. Privatising Consumer Complaints : An analysis of a patient’s complaints procedure in the Netherlands

    de Vries, U.M.R.T.

    2006-01-01

    This article analyzes a type of ADR seeking to resolve complaints of consumers and in particular of hospital patients. In doing so, it aims to establish how this type of procedure differs from “normal” litigation and why it proves to be successful. It aims to show that its success is not only due to

  9. Genetic contributions to variation in general cognitive function: a meta-analysis of genome-wide association studies in the CHARGE consortium (N=53 949)

    Davies, G; Armstrong, N; Bis, J C; Bressler, J; Chouraki, V; Giddaluru, S; Hofer, E; Ibrahim-Verbaas, C A; Kirin, M; Lahti, J; van der Lee, S J; Le Hellard, S; Liu, T; Marioni, R E; Oldmeadow, C; Postmus, I; Smith, A V; Smith, J A; Thalamuthu, A; Thomson, R; Vitart, V; Wang, J; Yu, L; Zgaga, L; Zhao, W; Boxall, R; Harris, S E; Hill, W D; Liewald, D C; Luciano, M; Adams, H; Ames, D; Amin, N; Amouyel, P; Assareh, A A; Au, R; Becker, J T; Beiser, A; Berr, C; Bertram, L; Boerwinkle, E; Buckley, B M; Campbell, H; Corley, J; De Jager, P L; Dufouil, C; Eriksson, J G; Espeseth, T; Faul, J D; Ford, I; Scotland, Generation; Gottesman, R F; Griswold, M E; Gudnason, V; Harris, T B; Heiss, G; Hofman, A; Holliday, E G; Huffman, J; Kardia, S L R; Kochan, N; Knopman, D S; Kwok, J B; Lambert, J-C; Lee, T; Li, G; Li, S-C; Loitfelder, M; Lopez, O L; Lundervold, A J; Lundqvist, A; Mather, K A; Mirza, S S; Nyberg, L; Oostra, B A; Palotie, A; Papenberg, G; Pattie, A; Petrovic, K; Polasek, O; Psaty, B M; Redmond, P; Reppermund, S; Rotter, J I; Schmidt, H; Schuur, M; Schofield, P W; Scott, R J; Steen, V M; Stott, D J; van Swieten, J C; Taylor, K D; Trollor, J; Trompet, S; Uitterlinden, A G; Weinstein, G; Widen, E; Windham, B G; Jukema, J W; Wright, A F; Wright, M J; Yang, Q; Amieva, H; Attia, J R; Bennett, D A; Brodaty, H; de Craen, A J M; Hayward, C; Ikram, M A; Lindenberger, U; Nilsson, L-G; Porteous, D J; Räikkönen, K; Reinvang, I; Rudan, I; Sachdev, P S; Schmidt, R; Schofield, P R; Srikanth, V; Starr, J M; Turner, S T; Weir, D R; Wilson, J F; van Duijn, C; Launer, L; Fitzpatrick, A L; Seshadri, S; Mosley, T H; Deary, I J

    2015-01-01

    General cognitive function is substantially heritable across the human life course from adolescence to old age. We investigated the genetic contribution to variation in this important, health- and well-being-related trait in middle-aged and older adults. We conducted a meta-analysis of genome-wide association studies of 31 cohorts (N=53 949) in which the participants had undertaken multiple, diverse cognitive tests. A general cognitive function phenotype was tested for, and created in each cohort by principal component analysis. We report 13 genome-wide significant single-nucleotide polymorphism (SNP) associations in three genomic regions, 6q16.1, 14q12 and 19q13.32 (best SNP and closest gene, respectively: rs10457441, P=3.93 × 10−9, MIR2113; rs17522122, P=2.55 × 10−8, AKAP6; rs10119, P=5.67 × 10−9, APOE/TOMM40). We report one gene-based significant association with the HMGN1 gene located on chromosome 21 (P=1 × 10−6). These genes have previously been associated with neuropsychiatric phenotypes. Meta-analysis results are consistent with a polygenic model of inheritance. To estimate SNP-based heritability, the genome-wide complex trait analysis procedure was applied to two large cohorts, the Atherosclerosis Risk in Communities Study (N=6617) and the Health and Retirement Study (N=5976). The proportion of phenotypic variation accounted for by all genotyped common SNPs was 29% (s.e.=5%) and 28% (s.e.=7%), respectively. Using polygenic prediction analysis, ~1.2% of the variance in general cognitive function was predicted in the Generation Scotland cohort (N=5487; P=1.5 × 10−17). In hypothesis-driven tests, there was significant association between general cognitive function and four genes previously associated with Alzheimer's disease: TOMM40, APOE, ABCG1 and MEF2C. PMID:25644384
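
    The within-cohort phenotype construction described above (a general cognitive function score derived from several test scores) can be sketched as the first principal component of standardized tests, as below; the simulated scores, the number of tests and the scikit-learn calls are assumptions for illustration only.

        # First principal component of standardized cognitive test scores as a "general" phenotype.
        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(7)
        g = rng.normal(size=2000)                                            # latent ability (simulated)
        tests = np.column_stack([0.7 * g + rng.normal(scale=0.7, size=2000) for _ in range(4)])

        scores_z = StandardScaler().fit_transform(tests)
        pca = PCA(n_components=1).fit(scores_z)
        general_cognition = pca.transform(scores_z)[:, 0]                    # phenotype for association tests
        print(pca.explained_variance_ratio_[0])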

  10. Systematic review and meta-analysis of enterocolitis after one-stage transanal pull-through procedure for Hirschsprung's disease.

    Ruttenstock, Elke

    2012-02-01

    PURPOSE: The transanal one-stage pull-through procedure (TERPT) has gained worldwide popularity over open and laparoscopic-assisted one-stage techniques in children with Hirschsprung's disease (HD). It offers the advantages of avoiding laparotomy, laparoscopy, scars, abdominal contamination, and adhesions. However, enterocolitis associated with Hirschsprung's disease (HAEC) still remains a potentially life-threatening complication after pull-through operation. The reported incidence of HAEC ranges from 4.6 to 54%. This meta-analysis was designed to evaluate the postoperative incidence of HAEC following the TERPT procedure. METHODS: A meta-analysis of cases of TERPT reported between 1998 and 2009 was performed. Detailed information was recorded regarding intraoperative details and postoperative complications, with particular emphasis on the incidence of HAEC. Diagnosis of HAEC in an HD patient was based on the clinical presentation of diarrhoea, abdominal distension, and fever. RESULTS: Of the 54 published articles worldwide, 27 articles, including 899 patients, were identified as reporting entirely the TERPT procedure. Postoperative HAEC occurred in 92 patients (10.2%). Recurrent episodes of HAEC were reported in 18 patients (2%). Conservative treatment of HAEC was successful in 75 patients (81.5%), whereas in 17 patients (18.5%) surgical treatment was needed. CONCLUSIONS: This systematic review reveals that TERPT is a safe and less invasive procedure with a low incidence of postoperative HAEC.

  11. Quantitative analysis of crystalline pharmaceuticals in tablets by pattern-fitting procedure using X-ray diffraction pattern.

    Takehira, Rieko; Momose, Yasunori; Yamamura, Shigeo

    2010-10-15

    A pattern-fitting procedure using X-ray diffraction patterns was applied to the quantitative analysis of a binary system of crystalline pharmaceuticals in tablets. Orthorhombic crystals of isoniazid (INH) and mannitol (MAN) were used for the analysis. Tablets were prepared under various compression pressures using a direct compression method with various compositions of INH and MAN. Assuming that the X-ray diffraction pattern of the INH-MAN system consists of the diffraction intensities from the respective crystals, the observed diffraction intensities were fitted to an analytic expression based on X-ray diffraction theory and separated into two intensities from the INH and MAN crystals by a nonlinear least-squares procedure. After separation, the INH contents were determined using the optimized normalization constants for INH and MAN. A correction parameter including all the factors that are beyond experimental control was required for quantitative analysis without a calibration curve. The pattern-fitting procedure made it possible to determine crystalline phases in the range of 10-90% (w/w) INH content. Further, certain characteristics of the crystals in the tablets, such as preferred orientation, crystallite size, and lattice disorder, were determined simultaneously. This method can be adopted to analyze compounds whose crystal structures are known. It is a potentially powerful tool for the quantitative phase analysis and characterization of crystals in tablets and powders using X-ray diffraction patterns. Copyright 2010 Elsevier B.V. All rights reserved.
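
    The core idea of the pattern-fitting procedure can be sketched as modelling the observed pattern as a weighted sum of the two pure-phase patterns plus background and estimating the weights by nonlinear least squares, as below; the Gaussian stand-in patterns, the background term and the scipy call are illustrative assumptions, not the paper's full diffraction-theory expression (which also refines preferred orientation, crystallite size and lattice disorder).

        # Two-phase pattern fitting: observed = s_inh*INH + s_man*MAN + background.
        import numpy as np
        from scipy.optimize import curve_fit

        two_theta = np.linspace(5, 40, 1400)
        def peak(c, w): return np.exp(-0.5 * ((two_theta - c) / w) ** 2)

        inh_pattern = peak(12.0, 0.15) + 0.6 * peak(19.4, 0.18)    # stand-in reference patterns
        man_pattern = peak(10.5, 0.15) + 0.8 * peak(23.4, 0.20)

        def model(_, s_inh, s_man, bkg):
            return s_inh * inh_pattern + s_man * man_pattern + bkg

        observed = model(None, 0.7, 0.3, 0.02) \
            + 0.01 * np.random.default_rng(3).normal(size=two_theta.size)
        (s_inh, s_man, bkg), _ = curve_fit(model, two_theta, observed, p0=[0.5, 0.5, 0.0])
        print(s_inh / (s_inh + s_man))                             # INH scale fraction before normalization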

  12. Research procedure and criteria for analysis and choice of variants for construction of national radioactive waste repository

    Vachev, B.

    1993-01-01

    General principles, overarching objectives and future priorities of the basic radioactive waste management strategy are considered. The research procedure is based on a systems approach and analysis, decision-making theory, and the basic objectives and principles of constructing the national repository. Main criteria and some basic notions (such as the radioactive waste environment and radioactive waste barriers - input and output) are introduced. Six environment elements are identified: surroundings and natural environment, economic, scientific and technical-technological, socio-psychological, legal, and institutional-political. Flow charts of the hierarchical structure of the research procedure, decision-making levels, and direct and back feeds are presented, and a scenario analysis is proposed as one of the tools for reflecting uncertainty. The hierarchical structure of the high-level waste repository construction scenarios and variants tree (8 levels) is defined. The methodology and methods for the analysis, screening and choice of variants are considered. A 7-group system of criteria and constraints for the analysis, screening and choice of variants is formulated. One implementation of the proposed methodology and procedure is the technological choice for radioactive waste conditioning and the solution of a preliminary site selection problem. 4 figs., 25 refs. (author)

  13. Analysis of interfractional variations in pancreatic position based on four-dimensional computed tomography

    Shiinoki, Takehiro; Itoh, Akio; Shibuya, Keiko; Nakamura, Mitsuhiro; Nakamura, Akira; Matsuo, Yukinori; Sawada, Akira; Mizowaki, Takashi; Hiraoka, Masahiro

    2010-01-01

    The purpose of this study was to assess inter-fractional variations in pancreatic position using four-dimensional computed tomography (4D-CT) and to identify a suitable respiratory phase for breath-holding. The variations in respiratory motion range during the treatment course and the inter-fractional variations in pancreatic position were not negligible; however, our study suggested that breath-holding at end-exhalation, combined with coaching techniques, may be one of the non-invasive approaches for achieving higher positional reproducibility of pancreatic tumors. (author)

  14. Investigation of Deterioration Behavior of Hysteretic Loops in Nonlinear Static Procedure Analysis of Concrete Structures with Shear Walls

    Ghodrati Amiri, G.; Amidi, S.; Khorasani, M.

    2008-01-01

    In recent years, seismic rehabilitation of structures has developed and the design viewpoint has shifted from providing sufficient strength to ensuring adequate structural performance (performance-based design). Nonlinear static procedure (NSP) analysis, or pushover analysis, is a relatively new method chosen for its speed and simplicity of calculation. The 'Seismic Rehabilitation Code for Existing Buildings' and FEMA 356 adopt this method. The result of this analysis is a target displacement, which forms the basis of the performance assessment and rehabilitation procedure for the structure. Accurate estimation of this displacement improves the applicability of pushover analysis. At present, nonlinear dynamic analysis (NDP) is the only method that applies seismic ground motions directly; however, because it is time-consuming, costly, and more difficult than other methods, it is used less widely than NSP. The coefficient C2 (stiffness and strength degradation coefficient) is used in NSP to determine the target displacement and corrects for errors introduced by neglecting stiffness and strength degradation in the hysteretic loops. In this study, three concrete frames with shear walls were analysed under several acceleration records scaled according to FEMA 273 and FEMA 356. These structures were designed to the Iranian 2800 standard (ver. 3). Finally, after pushover analysis and comparison of the results with dynamic analysis, the calculated C2 values were compared with the values given in the rehabilitation codes.
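
    For context, the FEMA 356 coefficient-method target displacement, in which C2 appears, is commonly written as below; the notation here is supplied for illustration and should be read against the code's own definitions.

        \delta_t = C_0\, C_1\, C_2\, C_3\, S_a\, \frac{T_e^{2}}{4\pi^{2}}\, g

    where T_e is the effective fundamental period, S_a is the spectral acceleration at that period, and C_2 specifically accounts for stiffness and strength degradation of the hysteretic loops.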

  15. Whole-genome copy number variation analysis in anophthalmia and microphthalmia.

    Schilter, K F; Reis, L M; Schneider, A; Bardakjian, T M; Abdul-Rahman, O; Kozel, B A; Zimmerman, H H; Broeckel, U; Semina, E V

    2013-11-01

    Anophthalmia/microphthalmia (A/M) represent severe developmental ocular malformations. Currently, mutations in known genes explain less than 40% of A/M cases. We performed whole-genome copy number variation analysis in 60 patients affected with isolated or syndromic A/M. Pathogenic deletions of 3q26 (SOX2) were identified in four independent patients with syndromic microphthalmia. Other variants of interest included regions with a known role in human disease (likely pathogenic) as well as novel rearrangements (uncertain significance). A 2.2-Mb duplication of 3q29 in a patient with non-syndromic anophthalmia and an 877-kb duplication of 11p13 (PAX6) and a 1.4-Mb deletion of 17q11.2 (NF1) in two independent probands with syndromic microphthalmia and other ocular defects were identified; while ocular anomalies have been previously associated with 3q29 duplications, PAX6 duplications, and NF1 mutations in some cases, the ocular phenotypes observed here are more severe than previously reported. Three novel regions of possible interest included a 2q14.2 duplication which cosegregated with microphthalmia/microcornea and congenital cataracts in one family, and 2q21 and 15q26 duplications in two additional cases; each of these regions contains genes that are active during vertebrate ocular development. Overall, this study identified causative copy number mutations and regions with a possible role in ocular disease in 17% of A/M cases. © 2013 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  16. Systematic Prioritization and Integrative Analysis of Copy Number Variations in Schizophrenia Reveal Key Schizophrenia Susceptibility Genes

    Luo, Xiongjian; Huang, Liang; Han, Leng; Luo, Zhenwu; Hu, Fang; Tieu, Roger; Gan, Lin

    2014-01-01

    Schizophrenia is a common mental disorder with high heritability and strong genetic heterogeneity. The common disease-common variants hypothesis predicts that schizophrenia is attributable in part to common genetic variants. However, recent studies have clearly demonstrated that copy number variations (CNVs) also play pivotal roles in schizophrenia susceptibility and explain a proportion of the missing heritability. Though numerous CNVs have been identified, many of the regions affected by CNVs show poor overlap among different studies, and it is not known whether the genes disrupted by CNVs contribute to the risk of schizophrenia. By using cumulative scoring, we systematically prioritized the genes affected by CNVs in schizophrenia. We identified 8 top genes that are frequently disrupted by CNVs, including NRXN1, CHRNA7, BCL9, CYFIP1, GJA8, NDE1, SNAP29, and GJA5. Integration of genes affected by CNVs with known schizophrenia susceptibility genes (from previous genetic linkage and association studies) reveals that many genes disrupted by CNVs are also associated with schizophrenia. Further protein-protein interaction (PPI) analysis indicates that protein products of genes affected by CNVs frequently interact with known schizophrenia-associated proteins. Finally, systematic integration of CNV prioritization data with genetic association and PPI data identifies key schizophrenia candidate genes. Our results provide a global overview of genes impacted by CNVs in schizophrenia and reveal a densely interconnected molecular network of de novo CNVs in schizophrenia. Though the prioritized top genes represent promising schizophrenia risk genes, further work with different prioritization methods and independent samples is needed to confirm these findings. Nevertheless, the identified key candidate genes may have important roles in the pathogenesis of schizophrenia, and further functional characterization of these genes may provide pivotal targets for future therapeutics and

  17. International variation in adherence to referral guidelines for suspected cancer: a secondary analysis of survey data

    Nicholson, Brian D; Mant, David; Neal, Richard D; Hart, Nigel; Hamilton, Willie; Shinkins, Bethany; Rubin, Greg; Rose, Peter W

    2016-01-01

    Background Variation in cancer survival persists between comparable nations and appears to be due, in part, to primary care practitioners (PCPs) having different thresholds for acting definitively in response to cancer-related symptoms. Aim To explore whether cancer guidelines, and adherence to them, differ between jurisdictions and impact on PCPs’ propensity to take definitive action on cancer-related symptoms. Design and setting A secondary analysis of survey data from six countries (10 jurisdictions) participating in the International Cancer Benchmarking Partnership. Method PCPs’ responses to five clinical vignettes presenting symptoms and signs of lung (n = 2), colorectal (n = 2), and ovarian cancer (n = 1) were compared with investigation and referral recommendations in cancer guidelines. Results Nine jurisdictions had guidelines covering the two colorectal vignettes. For the lung vignettes, although eight jurisdictions had guidelines for the first, the second was covered by a Swedish guideline alone. Only the UK and Denmark had an ovarian cancer guideline. Survey responses of 2795 PCPs (crude response rate: 12%) were analysed. Guideline adherence ranged from 20–82%. UK adherence was lower than other jurisdictions for the lung vignette covered by the guidance (47% versus 58%; P <0.01) but similar (45% versus 46%) or higher (67% versus 38%; P <0.01) for the two colorectal vignettes. PCPs took definitive action least often when a guideline recommended a non-definitive action or made no recommendation. UK PCPs adhered to recommendations for definitive action less than their counterparts (P <0.01). There was no association between jurisdictional guideline adherence and 1-year survival. Conclusion Cancer guideline content is variable between similarly developed nations and poor guideline adherence does not explain differential survival. Guidelines that fail to cover high-risk presentations or that recommend non-definitive action may reduce definitive diagnostic action.

  18. Evaluation of solution procedures for material and/or geometrically nonlinear structural analysis by the direct stiffness method.

    Stricklin, J. A.; Haisler, W. E.; Von Riesemann, W. A.

    1972-01-01

    This paper presents an assessment of the solution procedures available for the analysis of inelastic and/or large-deflection structural behavior. A literature survey is given which summarizes the contributions of other researchers to the analysis of structural problems exhibiting material nonlinearities and combined geometric-material nonlinearities. Attention is focused on evaluating the available computation and solution techniques. Each of the solution techniques is developed from a common equation of equilibrium in terms of pseudo forces. The solution procedures are applied to circular plates and shells of revolution in an attempt to compare and evaluate each with respect to computational accuracy, economy, and efficiency. Based on the numerical studies, observations and comments are made with regard to the accuracy and economy of each solution technique.

  19. Crop physiological analysis of seed quality variation in common bean (Phaseolus vulgaris L.)

    Muasya, R.M.

    2001-01-01

    Keywords : Physiological maturity, harvest maturity, earliness, common bean, Phaseolus vulgaris L., morphological markers, variation, moisture content, dry weight, viability, vigour, electrical conductivity, tetrazolium, seed lot, seed

  20. Analysis of the Diurnal Variation of the Global Electric Circuit Obtained From Different Numerical Models

    Jánský, Jaroslav; Lucas, Greg M.; Kalb, Christina; Bayona, Victor; Peterson, Michael J.; Deierling, Wiebke; Flyer, Natasha; Pasko, Victor P.

    2017-12-01

    This work analyzes different current source and conductivity parameterizations and their influence on the diurnal variation of the global electric circuit (GEC). The diurnal variations of the current source parameterizations, obtained using electric field and conductivity measurements from plane overflights combined with global Tropical Rainfall Measuring Mission satellite data, show generally good agreement with the measured diurnal variation of the electric field at Vostok, Antarctica, where reference experimental measurements are performed. An approach employing 85 GHz passive microwave observations to infer currents within the GEC is compared and shows the best agreement in amplitude and phase with the experimental measurements. To study the influence of conductivity, GEC models solving the continuity equation in 3-D are used to calculate atmospheric resistance using yearly averaged conductivity obtained from the global circulation model Community Earth System Model (CESM). Then, using a current source parameterization combining mean currents and global counts of electrified clouds, if the exponential conductivity is replaced by the conductivity from CESM, the peak-to-peak diurnal variation of the ionospheric potential of the GEC decreases from 24% to 20%. The main reason for the change is the presence of clouds, while the effects of 222Rn ionization, aerosols, and topography are less pronounced. The simulated peak-to-peak diurnal variation of the electric field at Vostok is increased from 15% to 18% from the diurnal variation of the global current in the GEC if conductivity from CESM is used.