WorldWideScience

Sample records for factor analysis procedures

  1. [Graphical procedures for assessing person-fit in item factor analysis].

    Science.gov (United States)

    Ferrando Piera, Pere Joan; Morales Vives, Fàbia

    2010-05-01

    Flagging individuals who did not answer consistently can be very useful in certain applied domains, especially in clinical and personnel selection settings, because identifying inconsistent patterns prevents erroneous interpretations of test scores. Two graphical procedures based on linear factor analysis are proposed in this paper. Once a pattern has been flagged as inconsistent, they allow the possible causes of low intra-individual consistency to be assessed. Moreover, these procedures allow us to identify the items that have contributed the most to the inconsistency. The procedures are illustrated with empirical examples from personality measurement. Lastly, implications of the results for the construction of personality measures are discussed.
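
    The abstract does not spell out the computations, but the residual logic that person-fit procedures of this family build on can be sketched in a few lines of Python. All loadings, unique variances, and scores below are hypothetical, and the code illustrates the general idea rather than the authors' method:

        import numpy as np

        # Standardized item residuals for one response pattern under a
        # previously fitted one-factor linear model (illustrative only).
        def person_fit_residuals(x, loadings, uniquenesses):
            lam = np.asarray(loadings, dtype=float)
            psi = np.asarray(uniquenesses, dtype=float)
            theta = lam @ x / (lam @ lam)   # least-squares factor score
            resid = x - lam * theta         # observed minus model-implied scores
            return resid / np.sqrt(psi)     # standardize by unique std. deviation

        x = np.array([0.9, -1.4, 1.1, 0.8])       # one person's centered item scores
        lam = np.array([0.80, 0.70, 0.60, 0.75])  # hypothetical loadings
        psi = np.array([0.36, 0.51, 0.64, 0.44])  # hypothetical unique variances

        z = person_fit_residuals(x, lam, psi)
        print(z)                      # a large |z| singles out the offending items
        print(float(np.mean(z**2)))   # a simple overall inconsistency index

    Plotting such residuals item by item is one way a graphical procedure can show which items drive a flagged pattern.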

  2. Using sequential analysis procedures to rank the influencing factors of public work's quality

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    In order to improve the efficiency of public work project management, screening and controlling the factors that influence the quality of a public work project is essential. This study synthesized 91 factors in 9 categories related to quality management of public works in Taiwan using a sequential analysis procedure. According to the Borda values of the influencing factors obtained from a first-stage questionnaire, the numbers of primary factors selected by the responsible entities and the design-supervisory entities were 44 and 45, respectively. A Fuzzy Analytic Hierarchy Process (FAHP) was used to prioritize and rank these factors. The top five factors ranked by the responsible entities were (1) introduction of earned value analysis, (2) working efficiency, (3) environmental laws and regulations, (4) price-index fluctuation, and (5) on-site safety management. The top five factors ranked by the design-supervisory entities were (1) man power, (2) laws and regulations, (3) price-index fluctuation, (4) traffic conditions, and (5) faulty design.
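
    The first-stage screening can be illustrated with a plain Borda count, which awards each factor points according to its rank position across respondents. The rankings below are invented, and the FAHP stage that follows in the study is not reproduced:

        from collections import defaultdict

        # Borda-count screening over hypothetical questionnaire rankings
        # (best-ranked factor listed first).
        rankings = [
            ["man power", "laws and regulations", "price-index fluctuation", "traffic conditions"],
            ["laws and regulations", "man power", "traffic conditions", "price-index fluctuation"],
            ["man power", "price-index fluctuation", "laws and regulations", "traffic conditions"],
        ]

        borda = defaultdict(int)
        for ranking in rankings:
            n = len(ranking)
            for position, factor in enumerate(ranking):
                borda[factor] += n - 1 - position   # top rank earns n-1 points

        for factor, score in sorted(borda.items(), key=lambda kv: -kv[1]):
            print(f"{score:3d}  {factor}")
        # Factors whose Borda value falls below a chosen cutoff are screened
        # out before the FAHP prioritization stage.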

  3. Human Factors Process Task Analysis Liquid Oxygen Pump Acceptance Test Procedure for the Advanced Technology Development Center

    Science.gov (United States)

    Diorio, Kimberly A.

    2002-01-01

    A process task analysis effort was undertaken by Dynacs Inc. commencing in June 2002 under contract from NASA YA-D6. Funding was provided through NASA's Ames Research Center (ARC), Code M/HQ, and Industrial Engineering and Safety (IES). The John F. Kennedy Space Center (KSC) Engineering Development Contract (EDC) Task Order was 5SMA768. The scope of the effort was to conduct a Human Factors Process Failure Modes and Effects Analysis (HF PFMEA) of a hazardous activity and provide recommendations to eliminate or reduce the effects of errors caused by human factors. The Liquid Oxygen (LOX) Pump Acceptance Test Procedure (ATP) was selected for this analysis. The HF PFMEA table (see appendix A) provides an analysis of six major categories evaluated for this study. These categories include Personnel Certification, Test Procedure Format, Test Procedure Safety Controls, Test Article Data, Instrumentation, and Voice Communication. For each specific requirement listed in appendix A, the following topics were addressed: Requirement, Potential Human Error, Performance-Shaping Factors, Potential Effects of the Error, Barriers and Controls, Risk Priority Numbers, and Recommended Actions. This report summarizes findings and gives recommendations as determined by the data contained in appendix A. It also includes a discussion of technology barriers and challenges to performing task analyses, as well as lessons learned. The HF PFMEA table in appendix A recommends the use of accepted and required safety criteria in order to reduce the risk of human error. The items with the highest risk priority numbers should receive the greatest amount of consideration. Implementation of the recommendations will result in a safer operation for all personnel.

  5. NASA trend analysis procedures

    Science.gov (United States)

    1993-01-01

    This publication is primarily intended for use by NASA personnel engaged in managing or implementing trend analysis programs. 'Trend analysis' refers to the observation of current activity in the context of the past in order to infer the expected level of future activity. NASA trend analysis is divided into 5 categories: problem, performance, supportability, programmatic, and reliability. Problem trend analysis uncovers multiple occurrences of historical hardware or software problems or failures in order to focus future corrective action. Performance trend analysis observes changing levels of real-time or historical flight vehicle performance parameters such as temperatures, pressures, and flow rates as compared to specification or 'safe' limits. Supportability trend analysis assesses the adequacy of the spaceflight logistics system; example indicators are repair turnaround time and parts stockage levels. Programmatic trend analysis uses quantitative indicators to evaluate the 'health' of NASA programs of all types. Finally, reliability trend analysis attempts to evaluate the growth of system reliability based on a decreasing rate of occurrence of hardware problems over time. Procedures for conducting all five types of trend analysis are provided in this publication, prepared through the joint efforts of the NASA Trend Analysis Working Group.
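
    In its simplest form, a trend of the first category is just a least-squares line fitted to historical problem counts and extrapolated one period ahead. The Python sketch below uses invented counts and is only a toy illustration of the idea, not a NASA procedure:

        import numpy as np

        # Hypothetical monthly problem-report counts for one subsystem.
        months = np.arange(12)
        counts = np.array([9, 7, 8, 6, 7, 5, 6, 4, 5, 4, 3, 4])

        slope, intercept = np.polyfit(months, counts, 1)  # least-squares trend line
        forecast = slope * 12 + intercept                 # expected level next month
        print(f"trend {slope:+.2f} problems/month, forecast {forecast:.1f}")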

  6. Settlements around pumping wells: Analysis of influential factors and a simple calculation procedure

    Science.gov (United States)

    Pujades, Estanislao; De Simone, Silvia; Carrera, Jesus; Vázquez-Suñé, Enric; Jurado, Anna

    2017-05-01

    Estimated and measured settlements caused by pumping rarely agree. Several reasons could explain this mismatch, including the influence of layering, the mechanical parameters used in the predictions, or the relationship between settlements and drawdown. We analyze the influence of these issues by investigating the mechanical response of pumped elastic porous media under different conditions. A radially symmetric conceptual model is considered and several hydro-mechanical simulations are performed, varying the boundary conditions, the size of the modeled domain, and the presence or absence of an overlying layer. The simplicity of the considered problem allows us to compare our results with existing analytical solutions, to identify the role of each variable in pumping settlements, and to generalize the results. The most relevant results are as follows: (1) settlements are proportional to drawdown only outside a circle of radius equal to 0.7 times the thickness of the pumped porous medium; inside, they are virtually constant, which leads to two simple procedures for computing pumping settlements. (2) Poorly conductive layers located above (or below) a pumped porous medium (with higher hydraulic conductivity) reduce and smooth settlements. (3) Boundary constraints affect the local specific storage coefficient and the resulting displacements. (4) The specific storage coefficient evaluated by interpreting pumping tests with the Cooper and Jacob (1946) method leads to overestimation of the actual Young's modulus of the soil. The main conclusion is that settlements are less differential than expected near pumping wells. Still, they must always be evaluated by acknowledging the nature of layering and the boundary constraints, and by carefully selecting the mechanical parameters of the soil.
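
    Result (1) suggests a simple calculation along the following lines: estimate drawdown, take settlement as elastic compaction of the pumped layer (specific storage times thickness times drawdown), and hold it constant inside r = 0.7b. The Python sketch below uses the Cooper-Jacob drawdown approximation and invented parameter values; it is one possible reading of such a procedure, not the authors' implementation:

        import numpy as np

        def settlement(r, Q=0.01, T=1e-3, Ss=1e-4, b=50.0, t=1e6):
            """Rough settlement estimate (m) at radius r (m) from the well."""
            r_eff = np.maximum(r, 0.7 * b)   # settlement ~ constant inside 0.7*b
            # Cooper-Jacob drawdown for a confined layer with storativity S = Ss*b
            s = (Q / (4 * np.pi * T)) * np.log(2.25 * T * t / (Ss * b * r_eff**2))
            return Ss * b * s                # elastic compaction of the pumped layer

        for r in (10.0, 35.0, 100.0, 300.0):
            print(f"r = {r:6.1f} m -> settlement ~ {settlement(r) * 1000:.0f} mm")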

  7. New alternating direction procedures in finite element analysis based upon EBE approximate factorizations. [element-by-element

    Science.gov (United States)

    Hughes, T. J. R.; Winget, J.; Levit, I.; Tezduyar, T. E.

    1983-01-01

    Element-by-element approximate factorization procedures are proposed for solving the large finite element equation systems which arise in computational mechanics. A variety of techniques are compared on problems of structural mechanics, heat conduction and fluid mechanics. The results obtained suggest considerable potential for the methods described.

  8. Human Factors Process Task Analysis: Liquid Oxygen Pump Acceptance Test Procedure at the Advanced Technology Development Center

    Science.gov (United States)

    Diorio, Kimberly A.; Voska, Ned (Technical Monitor)

    2002-01-01

    This viewgraph presentation provides information on Human Factors Process Failure Modes and Effects Analysis (HF PFMEA). HF PFMEA includes the following 10 steps: describe mission; define system; identify human-machine interfaces; list human actions; identify potential errors; identify factors that affect error; determine likelihood of error; determine potential effects of errors; evaluate risk; generate solutions (manage error). The presentation also describes how this analysis was applied to a liquid oxygen pump acceptance test.

  9. HUMAN RELIABILITY ANALYSIS FOR COMPUTERIZED PROCEDURES

    Energy Technology Data Exchange (ETDEWEB)

    Ronald L. Boring; David I. Gertman; Katya Le Blanc

    2011-09-01

    This paper provides a characterization of human reliability analysis (HRA) issues for computerized procedures in nuclear power plant control rooms. It is beyond the scope of this paper to propose a new HRA approach or to recommend specific methods or refinements to those methods. Rather, this paper provides a review of HRA as applied to traditional paper-based procedures, followed by a discussion of what specific factors should additionally be considered in HRAs for computerized procedures. Performance shaping factors and failure modes unique to computerized procedures are highlighted. Since there is no definitive guide to HRA for paper-based procedures, this paper also serves to clarify the existing guidance on paper-based procedures before delving into the unique aspects of computerized procedures.

  10. Multivariate analysis of risk factors for the persistence of high-grade squamous intraepithelial lesions following loop electrosurgical excision procedure.

    Science.gov (United States)

    Dos Santos Melli, Patrícia P; Duarte, Geraldo; Quintana, Silvana M

    2016-05-01

    To evaluate risk factors related to the persistence of high-grade squamous intraepithelial lesions (HSILs) following loop electrosurgical excision procedure (LEEP). The present prospective, observational study evaluated a convenience sample of participants with HSILs who were treated using LEEP between January 7, 2003 and December 30, 2011. Participants were evaluated 6 months and 1 year after treatment. Potential risk factors included in multivariate analyses were HIV co-infection, involved margins, multicentric lesions, smoking, and use of hormonal contraception. The present study enrolled 307 participants. At 1 year, 250 (81.4%) participants were free from lesions, 30 (9.8%) had low-grade squamous intraepithelial lesions, 26 (8.5%) had persistent HSILs, and 1 (0.3%) had developed invasive carcinoma. The risk of lesions persisting at 1 year after LEEP was increased by HIV infection (P=0.003), involved margins (P=0.05), and smoking (P=0.02). The presence of multicentric lesions (P=0.73) and the use of hormonal contraception (P=0.99) did not increase the risk of lesion persistence. The risk of HSIL persistence was increased by the presence of involved margins (relative risk 3.25; 95% confidence interval 1.55-6.80; P=0.001). The presence of involved margins was the only variable that increased the risk of HSIL persistence after LEEP, increasing the risk of patients requiring further treatment.

  11. Safety analysis procedures for PHWR

    Energy Technology Data Exchange (ETDEWEB)

    Min, Byung Joo; Kim, Hyoung Tae; Yoo, Kun Joong

    2004-03-01

    The methodology of safety analysis for CANDU reactors in Canada, the vendor country, uses a combination of best-estimate physical models and conservative input parameters so as to minimize the uncertainty of the plant behavior predictions. Because conservative input parameters are used, the results of the safety analyses are shown to meet regulatory requirements such as the public dose and the integrity of the fuel, fuel channels, containment, and reactor structures. However, there are no comprehensive and systematic safety analysis procedures for CANDU reactors in Korea. In this regard, safety analysis procedures for CANDU reactors are being developed, not only to establish a safety analysis system but also to enhance the quality assurance of the safety assessment. In the first phase of this study, general procedures for deterministic safety analysis were developed. They cover the specification of the initiating event; the selection of the methodology, accident sequences, and computer codes; the safety analysis procedures themselves; and the verification of errors and uncertainties. Finally, these general procedures were applied to the Large Break Loss of Coolant Accident (LBLOCA) in the Final Safety Analysis Report (FSAR) for Wolsong units 2, 3, and 4.

  12. Factor analysis

    CERN Document Server

    Gorsuch, Richard L

    2013-01-01

    Comprehensive and comprehensible, this classic covers the basic and advanced topics essential for using factor analysis as a scientific tool in psychology, education, sociology, and related areas. Emphasizing the usefulness of the techniques, it presents sufficient mathematical background for understanding and sufficient discussion of applications for effective use. This includes not only theory but also the empirical evaluations of the importance of mathematical distinctions for applied scientific analysis.

  13. Bayesian Exploratory Factor Analysis

    DEFF Research Database (Denmark)

    Conti, Gabriella; Frühwirth-Schnatter, Sylvia; Heckman, James J.;

    2014-01-01

    This paper develops and applies a Bayesian approach to Exploratory Factor Analysis that improves on ad hoc classical approaches. Our framework relies on dedicated factor models and simultaneously determines the number of factors, the allocation of each measurement to a unique factor, and the corresponding factor loadings. Classical identification criteria are applied and integrated into our Bayesian procedure to generate models that are stable and clearly interpretable. A Monte Carlo study confirms the validity of the approach. The method is used to produce interpretable low dimensional aggregates...

  15. Risk factors for unplanned readmission within 30 days after pediatric neurosurgery: a nationwide analysis of 9799 procedures from the American College of Surgeons National Surgical Quality Improvement Program.

    Science.gov (United States)

    Sherrod, Brandon A; Johnston, James M; Rocque, Brandon G

    2016-09-01

    OBJECTIVE Hospital readmission rate is increasingly used as a quality outcome measure after surgery. The purpose of this study was to establish, using a national database, the baseline readmission rates and risk factors for patient readmission after pediatric neurosurgical procedures. METHODS The American College of Surgeons National Surgical Quality Improvement Program-Pediatric database was queried for pediatric patients treated by a neurosurgeon between 2012 and 2013. Procedures were categorized by Current Procedural Terminology (CPT) code. Patient demographics, comorbidities, preoperative laboratory values, operative variables, and postoperative complications were analyzed via univariate and multivariate techniques to find associations with unplanned readmissions within 30 days of the primary procedure. RESULTS A total of 9799 cases met the inclusion criteria, 1098 (11.2%) of which had an unplanned readmission within 30 days. Readmission occurred 14.0 ± 7.7 days postoperatively (mean ± standard deviation). The 4 procedures with the highest unplanned readmission rates were CSF shunt revision (17.3%; CPT codes 62225 and 62230), repair of myelomeningocele > 5 cm in diameter (15.4%), CSF shunt creation (14.1%), and craniectomy for infratentorial tumor excision (13.9%). The lowest unplanned readmission rates were for spine (6.5%), craniotomy for craniosynostosis (2.1%), and skin lesion (1.0%) procedures. On multivariate regression analysis, the odds of readmission were greatest in patients experiencing postoperative surgical site infection (SSI; deep, organ/space, and superficial SSI and wound disruption: OR > 12 and p < 0.05), hospitalization length > 10 days (OR 1.411, p = 0.010), oxygen supplementation (OR 1.645, p = 0.010), nutritional support (OR 1.403, p = 0.009), seizure disorder (OR 1.250, p = 0.021), and longer operative time (per hour increase, OR 1.059, p = 0.029). CONCLUSIONS This study may aid in identifying patients at risk for unplanned readmission following pediatric neurosurgery.

  16. Factors affecting the design of instrument flight procedures

    Directory of Open Access Journals (Sweden)

    Ivan FERENCZ

    2008-01-01

    The article highlights factors that might affect the design of instrument flight procedures. An Ishikawa diagram is used to distribute the individual factors into classes: People, Methods, Regulations, Tools, Data, and Environment.

  17. Factors affecting the design of instrument flight procedures

    OpenAIRE

    Ivan FERENCZ; František JÚN; Dušan KEVICKÝ

    2008-01-01

    The article highlights factors that might affect the design of instrument flight procedures. An Ishikawa diagram is used to distribute the individual factors into classes: People, Methods, Regulations, Tools, Data, and Environment.

  18. Factors affecting anxiety-fear of surgical procedures in dentistry ...

    African Journals Online (AJOL)

    Factors affecting anxiety-fear of surgical procedures in dentistry. ... the questions concerning previous dental experience, education level, and previous ... structure and gender are significant factors affecting dental anxiety and fear.

  19. Foundations of factor analysis

    CERN Document Server

    Mulaik, Stanley A

    2009-01-01

    Introduction: Factor Analysis and Structural Theories; Brief History of Factor Analysis as a Linear Model; Example of Factor Analysis. Mathematical Foundations for Factor Analysis: Introduction; Scalar Algebra; Vectors; Matrix Algebra; Determinants; Treatment of Variables as Vectors; Maxima and Minima of Functions. Composite Variables and Linear Transformations: Introduction; Composite Variables; Unweighted Composite Variables; Differentially Weighted Composites; Matrix Equations; Multi...

  20. Deriving directions through procedural task analysis.

    Science.gov (United States)

    Yuen, H K; D'Amico, M

    1998-01-01

    Task analysis is one of the essential components of activity analysis. Procedural task analysis involves breaking down an activity into a sequence of steps. Directions are the sequence of steps resulting from the task analysis (i.e., the product of the task analysis). Directions become a guide for caregivers or trainers to use in teaching clients a specific skill. However, occupational therapy students often have difficulty writing directions that are clear enough for caregivers or trainers to carry out. Books on activity analysis provide only examples of directions, without giving guidelines on how to perform the writing process. The purposes of this paper are to describe the process of procedural task analysis and to provide a guideline for writing steps of directions.

  1. An analysis of aircrew procedural compliance

    Science.gov (United States)

    Schofield, J. E.; Giffin, W. C.

    1981-01-01

    This research examines the relationships between aircrew compliance with procedures and operator errors. The data for this analysis were generated by reexamination of a 1976 experiment in full-mission simulation conducted by Dr. H. P. Ruffell Smith (1979) for the NASA Ames Research Center. The character of individual operators, the chemistry of crew composition, and complex aspects of the operational environment affected procedural compliance by crew members. Associations between enumerated operator errors and several objective indicators of crew coordination were investigated. The correspondence between high operator error counts and infrequent compliance with specific crew coordination requirements was most notable when copilots were accountable for control of flight parameters.

  2. A new general pressure-analysis procedure for slug tests

    Energy Technology Data Exchange (ETDEWEB)

    Peres, A.M.M.; Onur, M.; Reynolds, A.C. (Univ. of Tulsa, OK (United States))

    1993-12-01

    A new analysis procedure for determining formation flow capacity and skin factor from slug-test data is presented. The procedure arises from exact deconvolution equations that convert measured slug-test pressure data into equivalent pressure and pressure-derivative responses that would be obtained if the well were produced at a constant surface flow rate. The converted data then can be analyzed by use of existing wellbore-storage and skin type curves for the particular reservoir or well model represented by the field data. For cases where the slug test is short, the authors show that flow rate convolution can be incorporated to improve analysis reliability. The analysis procedures do not require direct knowledge of the sandface flow rate.

  3. ANALYSIS OF PROGNOSTIC FACTORS FOR STAGE I RECTAL CANCER

    Institute of Scientific and Technical Information of China (English)

    武爱文; 顾晋; 薛钟麒; 王怡; 徐光炜

    2001-01-01

    Objective: To explore death-related factors in patients with stage I rectal cancer. Methods: Eighty-nine patients with stage I rectal cancer treated between 1985 and 2000 were retrospectively studied for prognostic factors. Age, gender, tumor size, circumferential occupation, gross type, pathological type, depth of tumor invasion, surgical procedure, adjuvant chemotherapy, and postoperative complications were entered into a Cox multivariate analysis (forward procedure) using SPSS software (version 10.0). Results: Multivariate analysis demonstrated that muscular invasion was an independent negative prognostic factor for stage I rectal cancer patients (P=0.003). Conclusion: Muscular invasion is a negative prognostic factor for stage I rectal cancer patients.

  4. Factor Analysis and AIC.

    Science.gov (United States)

    Akaike, Hirotugu

    1987-01-01

    The Akaike Information Criterion (AIC) was introduced to extend the method of maximum likelihood to the multimodel situation. Use of the AIC in factor analysis is interesting when it is viewed as the choice of a Bayesian model; thus, wider applications of AIC are possible. (Author/GDC)
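
    The criterion itself is compact: AIC = -2 log L + 2k, where k is the number of free parameters, and the candidate model minimizing it is preferred. A short Python illustration with hypothetical log-likelihoods for factor models of increasing size:

        # AIC comparison of competing factor models (log-likelihoods invented).
        def aic(log_likelihood, n_free_params):
            return -2.0 * log_likelihood + 2.0 * n_free_params

        candidates = {1: (-1520.4, 20), 2: (-1488.9, 29), 3: (-1485.2, 37)}
        for k, (ll, p) in candidates.items():
            print(f"{k} factors: AIC = {aic(ll, p):.1f}")
        # Here the 2-factor model wins: the small fit gain from a 3rd factor
        # does not justify 8 extra parameters.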

  5. Risk factors for postoperative urinary tract infection following midurethral sling procedures.

    Science.gov (United States)

    Doganay, Melike; Cavkaytar, Sabri; Kokanali, Mahmut Kuntay; Ozer, Irfan; Aksakal, Orhan Seyfi; Erkaya, Salim

    2017-04-01

    To identify potential risk factors for urinary tract infections following midurethral sling procedures. In this retrospective study, 556 women who underwent a midurethral sling procedure for stress urinary incontinence over a four-year period were reviewed. Of the study population, 280 women underwent TVT procedures and 276 women underwent TOT procedures. Patients were evaluated at 4-8 weeks postoperatively and were investigated for the occurrence of a urinary tract infection. Patients who experienced urinary tract infection were defined as cases, and patients who did not were defined as controls. All data were collected from medical records. A multivariate logistic regression model was used to identify the risk factors for urinary tract infection. Of 556 women, 58 (10.4%) were defined as cases and 498 (89.6%) as controls. The mean age of women in cases (57.8±12.9 years) was significantly greater than in controls (51.8±11.2 years) (p<0.05). Preoperative urinary tract infection, concomitant vaginal hysterectomy and cystocele repair, the TVT procedure, and postoperative postvoiding residual bladder volume ≥100 ml were more common in cases than in controls. In the multivariate regression analysis model, presence of preoperative urinary tract infection [OR (95% CI)=0.1 (0.1-0.7); p=0.013], TVT procedure [OR (95% CI)=8.4 (3.1-22.3); p=0.000], and postoperative postvoiding residual bladder volume ≥100 ml [OR (95% CI)=4.6 (1.1-19.2); p=0.036] were significant independent risk factors for urinary tract infection following midurethral slings. CONCLUSION: Urinary tract infection after midurethral sling procedures is a relatively common complication. The presence of preoperative urinary tract infection, the TVT procedure, and postoperative postvoiding residual bladder volume ≥100 ml may increase the risk of this complication. Identification of these factors could help surgeons minimize this complication by developing effective strategies.

  6. Calculation of conversion factors for effective dose for various interventional radiology procedures

    Energy Technology Data Exchange (ETDEWEB)

    Compagnone, Gaetano; Giampalma, Emanuela; Domenichelli, Sara; Renzulli, Matteo; Golfieri, Rita [Medical Physics and Radiology Departments, S. Orsola-Malpighi University Hospital, Via Massarenti 9, 40138 Bologna (Italy)]

    2012-05-15

    Purpose: To provide dose-area-product (DAP) to effective dose (E) conversion factors for complete interventional procedures, based on in-the-field clinical measurements of DAP values and using tabulated E/DAP conversion factors for single projections available from the literature. Methods: Nine types of interventional procedures were performed on 84 patients with two angiographic systems. Different calibration curves (with and without patient table attenuation) were calculated for each DAP meter. Clinical and dosimetric parameters were recorded in the field for each projection and for all patients, and a conversion factor linking DAP and effective dose was derived for each complete procedure, making use of published Monte Carlo conversion factors for single static projections. Results: Fluoroscopy time and DAP values for the lowest-dose procedure (biliary drainage) were approximately 3-fold and 13-fold lower, respectively, than those for the highest-dose examination (transjugular intrahepatic portosystemic shunt, TIPS). Median E/DAP conversion factors from 0.12 (abdominal percutaneous transluminal angioplasty) to 0.25 (nephrostomy) mSv Gy⁻¹ cm⁻² were obtained, and good correlations between E and DAP were found for all procedures, with R² coefficients ranging from 0.80 (abdominal angiography) to 0.99 (biliary stent insertion, nephrostomy, and TIPS). The DAP values obtained in this study showed general consistency with the values provided in the literature, and median E values ranged from 4.0 mSv (biliary drainage) to 49.6 mSv (TIPS). Conclusions: Values of E/DAP conversion factors were derived for each procedure from a comprehensive analysis of projection and dosimetric data; they provide a good evaluation of the stochastic effects. These results can be obtained by means of close cooperation between the different interventional professionals involved in patient care and dose optimization.

  7. Summative Mass Analysis of Algal Biomass - Integration of Analytical Procedures: Laboratory Analytical Procedure (LAP)

    Energy Technology Data Exchange (ETDEWEB)

    Laurens, Lieve M. L.

    2016-01-13

    This procedure guides the integration of laboratory analytical procedures to measure algal biomass constituents in an unambiguous manner and ultimately achieve mass balance closure for algal biomass samples. Many of these methods build on years of research in algal biomass analysis.

  8. Factors Affecting Anxiety-Fear of Surgical Procedures in Dentistry

    African Journals Online (AJOL)

    2017-05-16

    May 16, 2017 ... Aim: To compare dental anxiety and fear during procedures performed under local anesthesia ... of the procedure trigger psychosomatic diseases in the ... Nigerian Journal of Clinical Practice, Volume 20, Issue 4, April 2017.

  9. Solid-phase extraction procedures in systematic toxicological analysis

    NARCIS (Netherlands)

    Franke, J.P.; de Zeeuw, R.A

    1998-01-01

    In systematic toxicological analysis (STA), the substance(s) present is (are) not known at the start of the analysis. In such an undirected search the extraction procedure cannot be directed to a given substance but must be a general procedure where a compromise must be reached in that the substances...

  10. Differential item functioning analysis by applying multiple comparison procedures.

    Science.gov (United States)

    Eusebi, Paolo; Kreiner, Svend

    2015-01-01

    Analysis within a Rasch measurement framework aims at the development of valid and objective test scores. One requirement of both validity and objectivity is that items do not show evidence of differential item functioning (DIF). A number of procedures exist for the assessment of DIF, including those based on analysis of contingency tables by Mantel-Haenszel tests and partial gamma coefficients. The aim of this paper is to illustrate Multiple Comparison Procedures (MCP) for analysis of DIF relative to a variable defining a very large number of groups, with an unclear ordering with respect to the DIF effect. We propose a single-step procedure controlling the false discovery rate for DIF detection. The procedure applies to both dichotomous and polytomous items. In addition to providing evidence against a hypothesis of no DIF, the procedure also provides information on the subsets of groups that are homogeneous with respect to the DIF effect. A stepwise MCP procedure for this purpose is also introduced.
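
    The paper's own procedure is not reproduced here, but the false-discovery-rate logic it relies on can be illustrated with the standard Benjamini-Hochberg step-up rule applied to hypothetical per-group DIF p-values:

        import numpy as np

        def benjamini_hochberg(pvals, q=0.05):
            """Boolean mask of hypotheses rejected at FDR level q."""
            p = np.asarray(pvals, dtype=float)
            m = p.size
            order = np.argsort(p)
            thresholds = q * np.arange(1, m + 1) / m
            below = p[order] <= thresholds
            k = np.max(np.nonzero(below)[0]) + 1 if below.any() else 0
            reject = np.zeros(m, dtype=bool)
            reject[order[:k]] = True
            return reject

        pvals = [0.001, 0.008, 0.039, 0.041, 0.12, 0.49, 0.74]  # one per group contrast
        print(benjamini_hochberg(pvals))  # True marks contrasts flagged as DIF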

  11. Stepwise Variable Selection in Factor Analysis.

    Science.gov (United States)

    Kano, Yutaka; Harada, Akira

    2000-01-01

    Takes several goodness-of-fit statistics as measures of variable selection and develops backward elimination and forward selection procedures in exploratory factor analysis. A newly developed variable selection program, SEFA, can print several fit measures for a current model and models obtained by removing an internal variable or adding an…

  12. Fundamental procedures of geographic information analysis

    Science.gov (United States)

    Berry, J. K.; Tomlin, C. D.

    1981-01-01

    Analytical procedures common to most computer-oriented geographic information systems are composed of fundamental map processing operations. A conceptual framework for such procedures is developed and basic operations common to a broad range of applications are described. Among the major classes of primitive operations identified are those associated with: reclassifying map categories as a function of the initial classification, the shape, the position, or the size of the spatial configuration associated with each category; overlaying maps on a point-by-point, a category-wide, or a map-wide basis; measuring distance; establishing visual or optimal path connectivity; and characterizing cartographic neighborhoods based on the thematic or spatial attributes of the data values within each neighborhood. By organizing such operations in a coherent manner, the basis for a generalized cartographic modeling structure can be developed which accommodates a variety of needs in a common, flexible and intuitive manner. The use of each is limited only by the general thematic and spatial nature of the data to which it is applied.
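
    Two of the primitive operations named above, reclassification and point-by-point overlay, reduce to elementwise array operations. A toy Python/NumPy sketch on invented categorical rasters:

        import numpy as np

        landuse = np.array([[1, 1, 2],
                            [2, 3, 3],
                            [1, 2, 3]])   # 1=forest, 2=crop, 3=urban (hypothetical)
        slope_ok = np.array([[1, 0, 1],
                             [1, 1, 0],
                             [0, 1, 1]])  # 1 = slope below some threshold

        # Reclassify: collapse land-use categories into developable (1) or not (0).
        developable = np.where(np.isin(landuse, [2]), 1, 0)

        # Overlay point-by-point: suitable only where both conditions hold.
        suitable = developable & slope_ok
        print(suitable)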

  13. Standardized Procedure Content And Data Structure Based On Human Factors Requirements For Computer-Based Procedures

    Energy Technology Data Exchange (ETDEWEB)

    Bly, Aaron; Oxstrand, Johanna; Le Blanc, Katya L

    2015-02-01

    Most activities that involve human interaction with systems in a nuclear power plant are guided by procedures. Traditionally, the use of procedures has been a paper-based process that supports safe operation of the nuclear power industry. However, the nuclear industry is constantly trying to find ways to decrease the human error rate, especially the human errors associated with procedure use. Advances in digital technology make computer-based procedures (CBPs) a valid option that provides further enhancement of safety by improving human performance related to procedure use. The transition from paper-based procedures (PBPs) to CBPs creates a need for a computer-based procedure system (CBPS). A CBPS needs to have the ability to perform logical operations in order to adjust to the inputs received from either users or real time data from plant status databases. Without the ability for logical operations the procedure is just an electronic copy of the paper-based procedure. In order to provide the CBPS with the information it needs to display the procedure steps to the user, special care is needed in the format used to deliver all data and instructions to create the steps. The procedure should be broken down into basic elements and formatted in a standard method for the CBPS. One way to build the underlying data architecture is to use an Extensible Markup Language (XML) schema, which utilizes basic elements to build each step in the smart procedure. The attributes of each step will determine the type of functionality that the system will generate for that step. The CBPS will provide the context for the step to deliver referential information, request a decision, or accept input from the user. The XML schema needs to provide all data necessary for the system to accurately perform each step without the need for the procedure writer to reprogram the CBPS. The research team at the Idaho National Laboratory has developed a prototype CBPS for field workers as well as the
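
    A hypothetical Python fragment conveys the flavor of such a step element; every element and attribute name below is invented for illustration and is not the Idaho National Laboratory schema:

        import xml.etree.ElementTree as ET

        # Build one procedure step whose attributes tell the CBPS what
        # functionality to generate (names are hypothetical, not INL's schema).
        step = ET.Element("step", {
            "id": "4.2",
            "type": "decision",   # e.g. instruction / decision / data-entry
            "prompt": "Is pump discharge pressure within limits?",
        })
        ET.SubElement(step, "reference", {"document": "P&ID-123"})

        print(ET.tostring(step, encoding="unicode"))
        # A CBPS reading type="decision" would render yes/no branching for
        # this step rather than a simple check-off, with no reprogramming.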

  14. Building America Performance Analysis Procedures: Revision 1

    Energy Technology Data Exchange (ETDEWEB)

    None

    2004-06-01

    To measure progress toward multi-year research goals, cost and performance trade-offs are evaluated through a series of controlled field and laboratory experiments supported by energy analysis techniques using test data to calibrate simulation models.

  15. Navigating the "liberation procedure": a qualitative study of motivating and hesitating factors among people with multiple sclerosis.

    Science.gov (United States)

    Ploughman, Michelle; Harris, Chelsea; Hogan, Stephen H; Murray, Cynthia; Murdoch, Michelle; Austin, Mark W; Stefanelli, Mark

    2014-01-01

    The debate within the multiple sclerosis (MS) community initiated by the chronic cerebrospinal venous insufficiency (CCSVI) hypothesis and the subsequent liberation procedure placed some people with MS at odds with health care professionals and researchers. This study explored decision making regarding the controversial liberation procedure among people with MS. Fifteen people with MS (procedure, n=7; no procedure, n=8) participated in audiotaped semistructured interviews exploring their thoughts and experiences related to the liberation procedure. Data were transcribed and analyzed using an iterative, consensus-based, thematic content-analysis approach. Participants described an imbalance of motivating factors affirming the procedure compared to hesitating factors that provoked the participant to pause or reconsider when deciding to undergo the procedure. Collegial conversational relationships with trusted sources (eg, MS nurse, neurologist) and ability to critically analyze the CCSVI hypothesis were key hesitating factors. Fundraising, family enthusiasm, and the ease of navigation provided by medical tourism companies helped eliminate barriers to the procedure. Knowledge of factors that helped to popularize the liberation procedure in Canada may inform shared decision making concerning this and future controversies in MS.

  16. Umbilical Hernia Repair: Analysis After 934 Procedures.

    Science.gov (United States)

    Porrero, José L; Cano-Valderrama, Oscar; Marcos, Alberto; Bonachia, Oscar; Ramos, Beatriz; Alcaide, Benito; Villar, Sol; Sánchez-Cabezudo, Carlos; Quirós, Esther; Alonso, María T; Castillo, María J

    2015-09-01

    There is a lack of consensus about the surgical management of umbilical hernias. The aim of this study is to analyze the medium-term results of 934 umbilical hernia repairs. In this study, 934 patients with an umbilical hernia underwent surgery between 2004 and 2010, 599 (64.1%) of which were evaluated at least one year after the surgery. Complications, recurrence, and the reoperation rate were analyzed. Complications were observed in 5.7 per cent of the patients. With a mean follow-up time of 35.5 months, recurrence and reoperation rates were 3.8 per cent and 4.7 per cent, respectively. A higher percentage of female patients (60.9% vs 29%, P = 0.001) and a longer follow-up time (47.4 vs 35 months, P = 0.037) were observed in patients who developed a recurrence. No significant differences were observed between complications and the reoperation rate in patients who underwent Ventralex® preperitoneal mesh reinforcement and suture repair; however, a trend toward a higher recurrence rate was observed in patients with suture repair (6.5% vs 3.2%, P = 0.082). Suture repair had lower recurrence and reoperation rates in patients with umbilical hernias less than 1 cm. Suture repair is an appropriate procedure for small umbilical hernias; however, for larger umbilical hernias, mesh reinforcement should be considered.

  17. Analysis and Evaluation of Current Library Procedures

    Science.gov (United States)

    Heinritz, Fred J.

    1973-01-01

    There is no shortage of appropriate techniques for the analysis of library operations. Many of those described here require no particular mathematical background, and are essentially extensions of common sense. Strengths and weaknesses of various techniques were called to attention, and wider use by librarians of certain ones was encouraged. (37…

  18. Video Instrumentation And Procedures For Data Analysis

    Science.gov (United States)

    Keller, Patrick N.

    1982-02-01

    Video systems can be configured to measure position, size, attitude, brightness, and color of objects including objects in high speed events. The measurements may be time correlated or images from several sources (perhaps widely separated) may be correlated directly by image splitting techniques. The composition and specifications of the video system will vary considerably depending on the parameters measured and the accuracy desired. The basis of making the above measurements, using video, are presented in a format to guide practitioners in applying video as a measuring tool. Topics include relative vs. absolute measurements, scales and references, data insertion and retrieval, human factors, and video digitization.

  19. F-111C Flight Data Reduction and Analysis Procedures

    Science.gov (United States)

    1990-12-01

    Flight Mechanics Report 187, F-111C Flight Data Reduction and Analysis Procedures, by M. I. Cooper, J. S. Drobik, and C. A. Martin, Aeronautical Research Laboratory, Melbourne, Victoria.

  20. Factorial invariance in multilevel confirmatory factor analysis.

    Science.gov (United States)

    Ryu, Ehri

    2014-02-01

    This paper presents a procedure to test factorial invariance in multilevel confirmatory factor analysis. When the group membership is at level 2, multilevel factorial invariance can be tested by a simple extension of the standard procedure. However, level-1 group membership raises problems that cannot be appropriately handled by the standard procedure, because the dependency between members of different level-1 groups is not appropriately taken into account. The procedure presented in this article provides a solution to this problem. This paper also shows Muthén's maximum likelihood (MUML) estimation for testing multilevel factorial invariance across level-1 groups as a viable alternative to maximum likelihood estimation. Testing multilevel factorial invariance across level-2 groups and across level-1 groups is illustrated using empirical examples. SAS macro and Mplus syntax are provided.

  1. A General Factor-Analytic Procedure for Assessing Response Bias in Questionnaire Measures

    Science.gov (United States)

    Ferrando, Pere J.; Lorenzo-Seva, Urbano; Chico, Eliseo

    2009-01-01

    This article proposes procedures for simultaneously assessing and controlling acquiescence and social desirability in questionnaire items. The procedures are based on a semi-restricted factor-analytic tridimensional model, and can be used with binary, graded-response, or more continuous items. We discuss procedures for fitting the model (item…

  2. Nonparametric inference procedures for multistate life table analysis.

    Science.gov (United States)

    Dow, M M

    1985-01-01

    Recent generalizations of the classical single-state life table procedures to the multistate case provide the means to analyze simultaneously the mobility and mortality experience of one or more cohorts. This paper examines fairly general nonparametric combinatorial matrix procedures, known as quadratic assignment, as a technique for analyzing the various transitional patterns commonly generated by cohorts over the life course. To some degree, the output from a multistate life table analysis suggests inference procedures. In his discussion of multistate life table construction features, the author focuses on the matrix formulation of the problem. He then presents several examples of the proposed nonparametric procedures. Data for the mobility and life-expectancies-at-birth matrices come from the 458-member Cayo Santiago rhesus monkey colony. The author's matrix combinatorial approach to hypothesis testing may prove to be a useful inferential strategy in several multidimensional demographic areas.

  3. A two-step procedure of fractal analysis

    Science.gov (United States)

    Dedovich, T. G.; Tokarev, M. V.

    2016-03-01

    A two-step procedure for the analysis of different-type fractals is proposed for the PaC and SePaC methods. An advantage of the two-step procedures of the PaC and SePaC methods over the basic and modified PaC and SePaC methods is shown. Results of comparative analysis of the unified data set using different approaches (the BC method and two-step procedures of the PaC and SePaC methods) are given. It is shown that the two-step procedure of the SePaC method is most efficient in reconstructing the overall data set.
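
    Of the approaches compared, the box-counting (BC) method is the simplest to state: count the boxes N(n) occupied by the set at grid resolution n and take the slope of log N against log n. A minimal Python sketch on stand-in uniform points (the PaC and SePaC procedures are not reproduced):

        import numpy as np

        rng = np.random.default_rng(0)
        pts = rng.random((2000, 2))   # stand-in data set in the unit square

        sizes = [2, 4, 8, 16]         # grid resolutions (boxes per side)
        counts = [len(np.unique((pts * n).astype(int), axis=0)) for n in sizes]

        slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
        print(f"box-counting dimension ~ {slope:.2f}")   # ~2.0 for uniform points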

  4. Current Human Reliability Analysis Methods Applied to Computerized Procedures

    Energy Technology Data Exchange (ETDEWEB)

    Ronald L. Boring

    2012-06-01

    Computerized procedures (CPs) are an emerging technology within nuclear power plant control rooms. While CPs have been implemented internationally in advanced control rooms, to date no US nuclear power plant has implemented CPs in its main control room (Fink et al., 2009). Yet, CPs are a reality of new plant builds and are an area of considerable interest to existing plants, which see advantages in terms of enhanced ease of use and easier records management by omitting the need for updating hardcopy procedures. The overall intent of this paper is to provide a characterization of human reliability analysis (HRA) issues for computerized procedures. It is beyond the scope of this document to propose a new HRA approach or to recommend specific methods or refinements to those methods. Rather, this paper serves as a review of current HRA as it may be used for the analysis and review of computerized procedures.

  5. Goals Analysis Procedure Guidelines for Applying the Goals Analysis Process

    Science.gov (United States)

    Motley, Albert E., III

    2000-01-01

    One of the key elements to successful project management is the establishment of the "right set of requirements", requirements that reflect the true customer needs and are consistent with the strategic goals and objectives of the participating organizations. A viable set of requirements implies that each individual requirement is a necessary element in satisfying the stated goals and that the entire set of requirements, taken as a whole, is sufficient to satisfy the stated goals. Unfortunately, it is the author's experience that during project formulation phases, many of the Systems Engineering customers do not conduct a rigorous analysis of the goals and objectives that drive the system requirements. As a result, the Systems Engineer is often provided with requirements that are vague, incomplete, and internally inconsistent. To complicate matters, most systems development methodologies assume that the customer provides unambiguous, comprehensive and concise requirements. This paper describes the specific steps of a Goals Analysis process applied by Systems Engineers at the NASA Langley Research Center during the formulation of requirements for research projects. The objective of Goals Analysis is to identify and explore all of the influencing factors that ultimately drive the system's requirements.

  6. Human factors evaluation of teletherapy: Human-system interfaces and procedures. Volume 3

    Energy Technology Data Exchange (ETDEWEB)

    Kaye, R.D.; Henriksen, K.; Jones, R. [Hughes Training, Inc., Falls Church, VA (United States); Morisseau, D.S.; Serig, D.I. [Nuclear Regulatory Commission, Washington, DC (United States). Div. of Systems Technology

    1995-07-01

    A series of human factors evaluations was undertaken to better understand the contributing factors to human error in the teletherapy environment. Teletherapy is a multidisciplinary methodology for treating cancerous tissue through selective exposure to an external beam of ionizing radiation. The principal sources of radiation are a radioactive isotope, typically cobalt-60 (Co-60), or a linear accelerator device capable of producing very high energy x-ray and electron beams. A team of human factors specialists conducted site visits to radiation oncology departments at community hospitals, university centers, and free-standing clinics. In addition, a panel of radiation oncologists, medical physicists, and radiation technologists served as subject matter experts. A function and task analysis was initially performed to guide subsequent evaluations in the areas of user-system interfaces, procedures, training and qualifications, and organizational policies and practices. The present report focuses on an evaluation of the human-system interfaces in relation to the treatment machines and supporting equipment (e.g., simulators, treatment planning computers, control consoles, patient charts) found in the teletherapy environment. The report also evaluates operating, maintenance and emergency procedures and practices involved in teletherapy. The evaluations are based on the function and task analysis and established human engineering guidelines, where applicable.

  7. Accident Sequence Evaluation Program: Human reliability analysis procedure

    Energy Technology Data Exchange (ETDEWEB)

    Swain, A.D.

    1987-02-01

    This document presents a shortened version of the procedure, models, and data for human reliability analysis (HRA) which are presented in the Handbook of Human Reliability Analysis with Emphasis on Nuclear Power Plant Applications (NUREG/CR-1278, August 1983). This shortened version was prepared and tried out as part of the Accident Sequence Evaluation Program (ASEP) funded by the US Nuclear Regulatory Commission and managed by Sandia National Laboratories. The intent of this new HRA procedure, called the "ASEP HRA Procedure," is to enable systems analysts, with minimal support from experts in human reliability analysis, to make estimates of human error probabilities and other human performance characteristics which are sufficiently accurate for many probabilistic risk assessments. The ASEP HRA Procedure consists of a Pre-Accident Screening HRA, a Pre-Accident Nominal HRA, a Post-Accident Screening HRA, and a Post-Accident Nominal HRA. The procedure in this document includes changes made after tryout and evaluation of the procedure in four nuclear power plants by four different systems analysts and related personnel, including human reliability specialists. The changes consist of some additional explanatory material (including examples) and more detailed definitions of some of the terms. 42 refs.

  8. A procedure to estimate proximate analysis of mixed organic wastes.

    Science.gov (United States)

    Zaher, U; Buffiere, P; Steyer, J P; Chen, S

    2009-04-01

    In waste materials, proximate analysis, which measures the total concentration of carbohydrate, protein, and lipid contents of solid wastes, is challenging as a result of the heterogeneous and solid nature of the wastes. This paper presents a new procedure that was developed to estimate this complex chemical composition of the waste using conventional practical measurements, such as chemical oxygen demand (COD) and total organic carbon. The procedure is based on a mass balance of the macronutrient elements (carbon, hydrogen, nitrogen, oxygen, and phosphorus [CHNOP]) (i.e., elemental continuity), in addition to the balances of COD and charge intensity that are applied in mathematical modeling of biological processes. Knowing the composition of such a complex substrate is crucial for studying solid waste anaerobic degradation. The procedure was formulated to generate the detailed input required for the International Water Association (London, United Kingdom) Anaerobic Digestion Model number 1 (IWA-ADM1). The complex particulate composition estimated by the procedure was validated with several types of food wastes and animal manures. To make proximate analysis feasible for validation, the wastes were classified into 19 types to allow accurate extraction and proximate analysis. The estimated carbohydrate, protein, lipid, and inert concentrations were highly correlated with the proximate analysis; correlation coefficients were 0.94, 0.88, 0.99, and 0.96, respectively. For most of the wastes, carbohydrate was the highest fraction and was estimated accurately by the procedure over an extended range with high linearity. For wastes that are rich in protein and fiber, the procedure was even more consistent compared with the proximate analysis. The new procedure can be used for waste characterization in solid waste treatment design and optimization.
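
    The elemental-balance idea reduces to a small linear system: each measured element is a weighted sum of the component fractions. The Python sketch below uses generic textbook elemental compositions rather than the paper's calibrated ones, and the O, P, COD, and charge balances of the full procedure are omitted:

        import numpy as np

        #              carb    protein  lipid    (g of element per g of component;
        A = np.array([[0.444,  0.53,    0.76],   #  carbon      generic values,
                      [0.062,  0.07,    0.12],   #  hydrogen    not the paper's)
                      [0.000,  0.16,    0.00]])  #  nitrogen

        measured = np.array([0.43, 0.060, 0.040])  # hypothetical C, H, N (g/g dry)

        fractions, *_ = np.linalg.lstsq(A, measured, rcond=None)
        for name, f in zip(("carbohydrate", "protein", "lipid"), fractions):
            print(f"{name}: {f:.2f} g/g")   # remainder would be ash and inerts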

  9. Factor analysis of multivariate data

    Digital Repository Service at National Institute of Oceanography (India)

    Fernandes, A.A.; Mahadevan, R.

    A brief introduction to factor analysis is presented. A FORTRAN program, which can perform Q-mode and R-mode factor analysis and the singular value decomposition of a given data matrix, is presented in Appendix B. This computer program uses...

  10. Factor Analysis of Intern Effectiveness

    Science.gov (United States)

    Womack, Sid T.; Hannah, Shellie Louise; Bell, Columbus David

    2012-01-01

    Four factors in teaching intern effectiveness, as measured by a Praxis III-similar instrument, were found among observational data of teaching interns during the 2010 spring semester. Those factors were lesson planning, teacher/student reflection, fairness & safe environment, and professionalism/efficacy. This factor analysis was as much of a…

  11. Factor analysis and missing data

    NARCIS (Netherlands)

    Kamakura, WA; Wedel, M

    2000-01-01

    The authors study the estimation of factor models and the imputation of missing data and propose an approach that provides direct estimates of factor weights without the replacement of missing data with imputed values. First, the approach is useful in applications of factor analysis in the presence

  12. Development of numerical procedures for analysis of complex structures

    Science.gov (United States)

    Gupta, K. K.

    1984-01-01

    The paper is concerned with the development of novel numerical procedures for static, stability, free vibration, and dynamic response analysis of large, complex practical structures. Details of the numerical algorithms evolved for dynamic analysis of both non-rotating and rotating structures, as well as finite dynamic elements, are presented in the paper. Furthermore, the article provides a description of a general-purpose computer program, STARS, specifically developed for efficient analysis of complex practical structures.

  13. Risk factors for laboratory-confirmed bloodstream infection in neonates undergoing surgical procedures

    Directory of Open Access Journals (Sweden)

    Roberta Maia de Castro Romanelli

    Background: Healthcare-associated infections constitute an important problem in neonatal units, and invasive devices are frequently involved. However, studies on risk factors in newborns who undergo surgical procedures are scarce. Objective: To identify risk factors for laboratory-confirmed bloodstream infection in neonates undergoing surgical procedures. Methods: This case-control study was conducted from January 2008 to May 2011 in a referral center. Cases were 21 newborns who underwent surgery and presented a first episode of laboratory-confirmed bloodstream infection. Controls were 42 newborns who underwent surgical procedures without notification of laboratory-confirmed bloodstream infection in the study period. Information was obtained from the database of the Hospital Infection Control Committee, whose notifications of infections and related clinical data are routinely collected by trained professionals following the recommendations of the Agência Nacional de Vigilância Sanitária, and was analyzed with the Statistical Package for the Social Sciences. Results: During the study period, 1141 patients were admitted to the neonatal unit and 582 healthcare-associated infections were reported (incidence density of 25.75 healthcare-associated infections/patient-days). In the comparative analysis, a higher proportion of laboratory-confirmed bloodstream infection was observed in preterm infants undergoing surgery (p = 0.03), and use of non-invasive ventilation was a protective factor (p = 0.048). Statistically significant differences were also observed for duration of mechanical ventilation (p = 0.004), duration of non-invasive ventilation (p = 0.04), and duration of parenteral nutrition (p = 0.003). In multivariate analysis, duration of parenteral nutrition remained significantly associated with laboratory-confirmed bloodstream infection (p = 0.041). Conclusions: Shortening time on parenteral nutrition whenever possible and a preference for non-invasive ventilation in neonates...

  14. Activated sludge morphology characterization through an image analysis procedure

    Directory of Open Access Journals (Sweden)

    Y. G. Perez

    2006-09-01

    Full Text Available This work deals with the development of a digital image analysis procedure to characterize microbial flocs obtained in three different WWTPs: a bench-scale sequencing batch reactor (SBR) for biological phenol and nitrogen removal, a municipal treatment unit (Ilha do Governador, Rio de Janeiro, Brazil), and an industrial wastewater treatment plant (Ciba, Estrada do Colégio, Rio de Janeiro, Brazil). The procedure yields morphological parameters such as equivalent diameter, compactness, roundness, and porosity, as well as the fractal dimension. The procedure was validated and led to the identification of the major relationships between the analysed morphological parameters. A minimum of 300 flocs should be included in the image analysis, and a significant influence of the sample dilution step on the mean floc size was verified. The porosity parameter correlated positively with the fractal dimension of the microbial aggregates, indicating that highly porous flocs are very irregular.
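
    For readers who want to reproduce this kind of floc characterization, the following is a minimal sketch using scikit-image; the synthetic binary image and the particular roundness and solidity formulas are assumptions for illustration, not the authors' exact pipeline.

```python
# Sketch of floc-morphology measurement on a thresholded (binary) image.
import numpy as np
from skimage import measure

# Synthetic binary image standing in for a thresholded floc micrograph.
img = np.zeros((200, 200), dtype=bool)
rr, cc = np.ogrid[:200, :200]
img[(rr - 100) ** 2 + (cc - 100) ** 2 < 40 ** 2] = True  # one round "floc"

labels = measure.label(img)
for region in measure.regionprops(labels):
    eq_diam = np.sqrt(4 * region.area / np.pi)   # diameter of equal-area circle
    roundness = 4 * np.pi * region.area / region.perimeter ** 2
    solidity = region.solidity                   # a compactness-like index
    print(f"Deq={eq_diam:.1f} px  roundness={roundness:.2f}  "
          f"solidity={solidity:.2f}")
```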

  15. Risk Factors of Voiding Dysfunction and Patient Satisfaction After Tension-free Vaginal Tape Procedure

    OpenAIRE

    2005-01-01

    This study was undertaken to identify risk factors for postoperative voiding dysfunction and factors having impact on patient global satisfaction after a tension-free vaginal tape (TVT) procedure. Two hundred and eighty-five women who underwent the TVT procedure for stress urinary incontinence were analyzed to identify risk factors predictive of voiding dysfunction. Postoperative voiding dysfunction was defined as a peak urinary flow rate (PFR) 30% of bladder capacity (incomplete emptying, n=...

  16. Compositional Analysis Procedures for Selected Elastomers Used in Sonar Transducers

    Science.gov (United States)

    1987-03-16

    in ASTM D297, Method 54.* Permitted deviations from this procedure include the use of boric acid solution as the trapping medium with subsequent… ratio, 3,788. *D297-81, "Standard Methods for Rubber Products—Chemical Analysis," Annual Book of ASTM Standards, Part 37. **D3533-76, "Standard

  17. Review and Application of Ship Collision and Grounding Analysis Procedures

    DEFF Research Database (Denmark)

    Pedersen, Preben Terndrup

    2010-01-01

    It is the purpose of the paper to present a review of prediction and analysis tools for collision and grounding analyses and to outline a probabilistic procedure for which these tools can be used by the maritime industry to develop performance based rules to reduce the risk associated with human...

  18. Environmental Quality Information Analysis Center (EQIAC) operating procedures handbook

    Energy Technology Data Exchange (ETDEWEB)

    Walsh, T.E. (Florida Univ., Gainesville, FL (United States)); Das, S. (Oak Ridge National Lab., TN (United States))

    1992-08-01

    The Operating Procedures Handbook of the Environmental Quality Information Analysis Center (EQIAC) is intended to be kept current as EQIAC develops and evolves. Its purpose is to provide a comprehensive guide to the mission, infrastructure, functions, and operational procedures of EQIAC. The handbook is a training tool for new personnel and a reference manual for existing personnel. It will be distributed throughout EQIAC and maintained in binders containing current dated editions of the individual sections, and will be revised at least annually to reflect the current structure and operational procedures of EQIAC. The EQIAC provides information on environmental issues such as compliance, restoration, and environmental monitoring to the Air Force and DOD contractors.

  19. Roughness Analysis on Composite Materials (Microfilled, Nanofilled and Silorane) After Different Finishing and Polishing Procedures.

    Science.gov (United States)

    Pettini, Francesco; Corsalini, Massimo; Savino, Maria Grazia; Stefanachi, Gianluca; Venere, Daniela Di; Pappalettere, Carmine; Monno, Giuseppe; Boccaccio, Antonio

    2015-01-01

    The finishing and polishing of composite materials affect the restoration lifespan. The market offers a variety of finishing and polishing procedures, and the choice among them is conditioned by different factors, such as the resulting surface roughness. In the present study, 156 samples were prepared with three composite materials (microfilled, nanofilled, and silorane) and treated with different finishing and polishing procedures. Profilometric analyses were carried out on the samples' surfaces, and the measured roughness values were subjected to statistical analysis. A complete factorial plan was drawn up, and two-way analysis of variance (ANOVA) was carried out to investigate whether the following factors affect the roughness values: (i) material; (ii) polishing/finishing procedure. A Tukey post-hoc test was also conducted to evaluate any statistically significant differences between the material/procedure combinations. The results show that the tested materials do not affect the resulting surface quality, but roughness values depend on the finishing/polishing procedure adopted. The procedures that involve (a) finishing with medium Sof-Lex discs and (b) finishing with two tungsten carbide multi-blade milling cutters, Q series and UF series, are those that yield the lowest roughness values.
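
    The two-way ANOVA plus Tukey post-hoc workflow described above can be sketched as follows with statsmodels; the roughness data, factor levels, and effect sizes are simulated, not the study's measurements.

```python
# Sketch of a two-way ANOVA with a Tukey post-hoc test on simulated
# surface-roughness (Ra) data; all values are illustrative.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(1)
df = pd.DataFrame({
    "material": np.repeat(["microfilled", "nanofilled", "silorane"], 20),
    "procedure": np.tile(np.repeat(["discs", "milling"], 10), 3),
})
df["Ra"] = (rng.normal(0.3, 0.05, len(df))
            + np.where(df["procedure"] == "milling", -0.05, 0.0))

model = ols("Ra ~ C(material) * C(procedure)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))       # two-way ANOVA table

# Tukey post-hoc across the material/procedure combinations.
df["combo"] = df["material"] + "/" + df["procedure"]
print(pairwise_tukeyhsd(df["Ra"], df["combo"]))
```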

  20. Jet Engine hot parts IR Analysis Procedure (J-EIRP)

    Science.gov (United States)

    Baumeister, Joseph F.

    1993-01-01

    A thermal radiation analysis method called Jet Engine IR Analysis Procedure (J-EIRP) was developed to evaluate jet engine cavity hot parts source radiation. The objectives behind J-EIRP were to achieve the greatest accuracy in model representation and solution, while minimizing computer resources and computational time. The computer programs that comprise J-EIRP were selected on the basis of their performance, accuracy, and flexibility to solve both simple and complex problems. These programs were intended for use on a personal computer, but include the ability to solve large problems on a mainframe or supercomputer. J-EIRP also provides the user with a tool for developing thermal design experience and engineering judgment through analysis experimentation, while using minimal computer resources. A sample jet engine cavity analysis demonstrates the procedure and capabilities within J-EIRP, and is compared to a simplified method for approximating cavity radiation. The goal is to introduce the terminology and solution process used in J-EIRP and to provide insight into the radiation heat transfer principles used in this procedure.

  1. First course in factor analysis

    CERN Document Server

    Comrey, Andrew L

    2013-01-01

    The goal of this book is to foster a basic understanding of factor analytic techniques so that readers can use them in their own research and critically evaluate their use by other researchers. Both the underlying theory and correct application are emphasized. The theory is presented through the mathematical basis of the most common factor analytic models and several methods used in factor analysis. On the application side, considerable attention is given to the extraction problem, the rotation problem, and the interpretation of factor analytic results. Hence, readers are given a background of

  2. Impact of clinical and procedural factors upon C reactive protein dynamics following transcatheter aortic valve implantation

    Institute of Scientific and Technical Information of China (English)

    Sayan Sen; Iqbal S Malik; Antonio Colombo; Ghada W Mikhail

    2016-01-01

    AIM: To determine the effect of procedural and clinical factors upon C-reactive protein (CRP) dynamics following transcatheter aortic valve implantation (TAVI). METHODS: Two hundred and eight consecutive patients who underwent transfemoral TAVI at two hospitals (Imperial College Healthcare NHS Trust, Hammersmith Hospital, London, United Kingdom, and San Raffaele Scientific Institute, Milan, Italy) were included. Daily venous plasma CRP levels were measured for up to 7 d following the procedure (or up to discharge). Procedural factors and 30-d safety outcomes according to the Valve Academic Research Consortium 2 definition were collected. RESULTS: Following TAVI, CRP increased significantly, reaching a peak on day 3 of 87.6 ± 5.5 mg/dL, P < 0.001. Patients who developed clinical signs and symptoms of sepsis had significantly increased levels of CRP (P < 0.001). The presence of diabetes mellitus was associated with a significantly higher peak CRP level at day 3 (78.4 ± 3.2 vs 92.2 ± 4.4, P < 0.001). There was no difference in peak CRP release following balloon-expandable or self-expandable TAVI implantation (94.8 ± 9.1 vs 81.9 ± 6.9, P = 0.34) or if post-dilatation was required (86.9 ± 6.3 vs 96.6 ± 5.3, P = 0.42); however, when pre-TAVI balloon aortic valvuloplasty was performed, this resulted in a significant increase in the peak CRP (110.1 ± 8.9 vs 51.6 ± 3.7, P < 0.001). The development of a major vascular complication did result in a significantly increased maximal CRP release (153.7 ± 11.9 vs 83.3 ± 7.4, P = 0.02), and there was a trend toward a higher peak CRP following major/life-threatening bleeding (113.2 ± 9.3 vs 82.7 ± 7.5, P = 0.12), although this did not reach statistical significance. CRP was not found to be a predictor of 30-d mortality on univariate analysis. CONCLUSION: Careful attention should be paid to baseline clinical characteristics and procedural factors when interpreting CRP following TAVI to determine patients' future management.

  3. Hair decontamination procedure prior to multi-class pesticide analysis.

    Science.gov (United States)

    Duca, Radu-Corneliu; Hardy, Emilie; Salquèbre, Guillaume; Appenzeller, Brice M R

    2014-06-01

    Although increasing interest is being observed in hair analysis for the biomonitoring of human exposure to pesticides, some limitations still have to be addressed for optimum use of this matrix in that specific context. One main issue concerns the need to differentiate chemicals biologically incorporated into hair from those externally deposited on the hair surface from contaminated air or dust. The present study focuses on the development of a washing procedure for the decontamination of hair before analysis of pesticides from different chemical classes. For this purpose, three different procedures of artificial contamination (with silica, cellulose, and aqueous solution) were used to simulate pesticide deposition on the hair surface. Several washing solvents (four organic: acetone, dichloromethane, methanol, acetonitrile; and four aqueous: water, phosphate buffer, shampoo, sodium dodecylsulfate) were evaluated for their capacity to remove artificially deposited pesticides from the hair surface. The most effective washing solvents were sodium dodecylsulfate and methanol among the aqueous and organic solvents, respectively. Moreover, after a first washing with sodium dodecylsulfate or methanol, the majority of externally deposited pesticides were removed and a steady state was reached, since significantly lower amounts were removed by additional second and third washings. Finally, the effectiveness of a decontamination procedure comprising washing with sodium dodecylsulfate and methanol was successfully demonstrated. In parallel, it was determined that the final procedure did not affect the chemicals biologically incorporated, as hair strands naturally containing pesticides were used. Such a procedure appears to remove in one step the fraction of chemicals located on the hair surface and does not require repeated washing steps.

  4. COMPARISON OF PROCEDURES FOR IMMEDIATE CHEMICAL ANALYSIS OF CHARCOAL

    Directory of Open Access Journals (Sweden)

    Artur Queiroz Lana

    2016-04-01

    Full Text Available ABSTRACT Climate change, the quest for sustainability, and strong environmental pressure for alternatives to traditional fossil fuels have increased interest in the search for, and use of, renewable energy sources. Among them stands out charcoal from renewable forests, widely used as a thermal reductant in the steel industry in place of coke from mineral coal. This study aimed to compare different operating procedures for the immediate (proximate) chemical analysis of charcoal. Seven procedures were compared, spanning those performed by Brazilian companies and laboratories, the test described by NBR 8112, and one performed with a thermogravimetric analyzer (TGA) using the parameters of NBR 8112. There were significant differences in the volatile matter content and, consequently, in the fixed carbon content. The differences between the procedures and NBR 8112 were caused by excess burning time, a sample mass above or below the standard, or an inappropriate container used for burning. It was observed that the TGA determination of volatile content must be carried out with a burning time of 2 minutes to obtain results similar to those of the NBR 8112 standard. Moreover, the ash content values were statistically identical, and particle size did not influence the differences between means.

  5. First outbreak with MRSA in a Danish neonatal intensive care unit: risk factors and control procedures.

    Directory of Open Access Journals (Sweden)

    Benedicte Grenness Utke Ramsing

    Full Text Available INTRODUCTION: The purpose of the study was to describe demographic and clinical characteristics and outbreak handling of a large methicillin-resistant Staphylococcus aureus (MRSA) outbreak in a neonatal intensive care unit (NICU) in Denmark, June 25th to August 8th, 2008, and to identify risk factors for MRSA transmission. METHODS: Data were collected retrospectively from medical records and the Danish Neobase database. All MRSA isolates obtained from neonates, relatives, and NICU health care workers (HCWs), as well as environmental cultures, were typed. RESULTS: During the 46-day outbreak period, 102 neonates were admitted to the two neonatal wards. Ninety-nine neonates were subsequently sampled, and 32 neonates (32%) from 25 families were colonized with MRSA (spa-type t127, SCCmec V, PVL-negative). Thirteen family members from 11 of those families (44%) and two of 161 HCWs (1%) were colonized with the same MRSA. No one was infected. Five environmental cultures were MRSA-positive. In a multiple logistic regression analysis, nasal Continuous Positive Airway Pressure (nCPAP) treatment (p = 0.006) and Caesarean section (p = 0.016) were independent risk factors for MRSA acquisition, whereas days of exposure to MRSA was a risk factor in the unadjusted analysis (p = 0.04). CONCLUSIONS: MRSA transmission occurs with high frequency in the NICU during hospitalization with unidentified MRSA neonates. Caesarean section and nCPAP treatment were identified as risk factors for MRSA colonization. The MRSA outbreak was controlled through infection control procedures.
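
    A multiple logistic regression of the kind reported above can be sketched as follows; the variable names, simulated data, and effect sizes are assumptions, not the study's dataset.

```python
# Sketch: adjusted odds ratios for binary risk factors via logistic
# regression (statsmodels); data are simulated for illustration.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 99
df = pd.DataFrame({
    "ncpap": rng.integers(0, 2, n),           # nCPAP treatment (0/1)
    "csection": rng.integers(0, 2, n),        # Caesarean section (0/1)
    "exposure_days": rng.integers(1, 30, n),  # days exposed to MRSA
})
# Simulated outcome with positive effects for nCPAP and Caesarean section.
logit = 1.2 * df["ncpap"] + 0.9 * df["csection"] - 2.0
df["mrsa"] = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X = sm.add_constant(df[["ncpap", "csection", "exposure_days"]])
fit = sm.Logit(df["mrsa"], X).fit(disp=0)
print(fit.summary2().tables[1])               # coefficients and p-values
print(np.exp(fit.params))                     # adjusted odds ratios
```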

  6. Analysis of Elementary School students’ algebraic perceptions and procedures

    Directory of Open Access Journals (Sweden)

    Sandra Mara Marasini

    2012-12-01

    Full Text Available This study aims to verify how students in elementary school see themselves in relation to mathematics and, at the same time, to analyze the procedures they use to solve algebraic tasks. The students, in the 8th year of elementary school and the first and third years of high school at two state schools in Passo Fundo/RS, answered a questionnaire about their own perceptions of the mathematics lessons, the subject of mathematics, and algebraic content. The analysis was based mainly on authors from the mathematics education and historical-cultural psychology areas. It was verified that even among students who claimed to be happy with the idea of having mathematics classes, several presented learning difficulties regarding algebraic content, revealed by the procedures they employed. It was concluded that it is necessary to design proposals with didactic sequences, mathematically and pedagogically grounded, which can efficiently optimize the appropriation of meaning of the concepts approached and their application in different situations.

  7. Transforming Rubrics Using Factor Analysis

    Science.gov (United States)

    Baryla, Ed; Shelley, Gary; Trainor, William

    2012-01-01

    Student learning and program effectiveness is often assessed using rubrics. While much time and effort may go into their creation, it is equally important to assess how effective and efficient the rubrics actually are in terms of measuring competencies over a number of criteria. This study demonstrates the use of common factor analysis to identify…

  8. EXPLORATORY FACTOR ANALYSIS (EFA) IN CONSUMER BEHAVIOR AND MARKETING RESEARCH

    Directory of Open Access Journals (Sweden)

    Marcos Pascual Soler

    2012-06-01

    Full Text Available Exploratory Factor Analysis (EFA) is one of the most widely used statistical procedures in social research. The main objective of this work is to describe the most common practices used by researchers in the consumer behavior and marketing area. Using a literature review methodology, the EFA practices reported in five consumer behavior and marketing journals (2000-2010) were analyzed. The choices made by the researchers concerning the factor model, retention criteria, rotation, interpretation of factors, and other issues relevant to factor analysis were then examined. The results suggest that researchers routinely conduct analyses using questionable methods. Suggestions for improving the use of factor analysis and the reporting of results are presented, and a checklist (Exploratory Factor Analysis Checklist, EFAC) is provided to help editors, reviewers, and authors improve the reporting of exploratory factor analyses.
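
    As a concrete counterpoint to the questionable practices the review flags, the sketch below shows one defensible EFA workflow (an explicit retention criterion and varimax rotation) on simulated data; the Kaiser criterion used here is only one of several retention rules such a checklist would consider.

```python
# Sketch: EFA with an explicit retention check and varimax rotation.
# Data are simulated with a known two-factor structure.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(3)
latent = rng.normal(size=(300, 2))
loadings_true = rng.normal(size=(2, 8))
X = latent @ loadings_true + rng.normal(scale=0.5, size=(300, 8))

# Kaiser criterion (eigenvalues of the correlation matrix > 1) as a
# quick retention check; parallel analysis would be a stronger choice.
eigvals = np.linalg.eigvalsh(np.corrcoef(X, rowvar=False))
n_factors = int((eigvals > 1.0).sum())

fa = FactorAnalysis(n_components=n_factors, rotation="varimax").fit(X)
print(f"retained {n_factors} factors")
print(np.round(fa.components_.T, 2))          # rotated loading matrix
```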

  9. A simplified procedure of linear regression in a preliminary analysis

    Directory of Open Access Journals (Sweden)

    Silvia Facchinetti

    2013-05-01

    Full Text Available The analysis of a large statistical data-set can be guided by the study of a particularly interesting variable Y (the regressand) and an explicative variable X, chosen among the remaining variables, conjointly observed. The study gives a simplified procedure for obtaining the functional link between the variables, y = y(x), by partitioning the data-set into m subsets in which the observations are synthesized by location indices (mean or median) of X and Y. Polynomial models for y(x) of order r are considered to verify the characteristics of the given procedure; in particular, we assume r = 1 and 2. The distributions of the parameter estimators are obtained by simulation when the fitting is done for m = r + 1. Comparisons of the results, in terms of distribution and efficiency, are made with the results obtained by ordinary least squares. The study also gives some considerations on the consistency of the estimated parameters obtained by the given procedure.
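
    The partition-based procedure described above can be sketched as follows for the linear case r = 1: split the data into m = r + 1 subsets ordered on X, summarize each subset by its medians, and fit the polynomial exactly through those location points. The data here are simulated.

```python
# Sketch of the partition-based fit: m = r + 1 median points determine
# an order-r polynomial exactly. Compared against ordinary least squares.
import numpy as np

rng = np.random.default_rng(4)
x = np.sort(rng.uniform(0, 10, 300))
y = 2.0 + 1.5 * x + rng.normal(scale=1.0, size=x.size)

r = 1                                        # polynomial order
m = r + 1                                    # number of subsets
xs = np.array_split(x, m)                    # x is sorted, so subsets
ys = np.array_split(y, m)                    # partition the X range
px = np.array([np.median(s) for s in xs])    # location indices of X
py = np.array([np.median(s) for s in ys])    # location indices of Y

coef = np.polyfit(px, py, r)                 # exact fit through m points
print("partition estimate (slope, intercept):", np.round(coef, 3))
print("OLS for comparison:", np.round(np.polyfit(x, y, r), 3))
```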

  10. User's operating procedures. Volume 2: Scout project financial analysis program

    Science.gov (United States)

    Harris, C. G.; Haris, D. K.

    1985-01-01

    A review is presented of the user's operating procedures for the Scout Project Automatic Data System, called SPADS. SPADS is the result of the past seven years of software development on a Prime mini-computer located at the Scout Project Office, NASA Langley Research Center, Hampton, Virginia. SPADS was developed as a single-entry, multiple cross-reference data management and information retrieval system for the automation of Project office tasks, including engineering, financial, managerial, and clerical support. This volume, the second of three, provides the instructions to operate the Scout Project Financial Analysis program in data retrieval and file maintenance via the user-friendly menu drivers.

  11. Cell-based land use screening procedure for regional siting analysis. [Utilizing spatial analysis procedures and computer graphics

    Energy Technology Data Exchange (ETDEWEB)

    Jalbert, J.S.; Dobson, J.E.

    1976-10-03

    An energy-facility site-screening methodology which permits the land resource planner to identify candidate siting areas was developed. Through the use of spatial analysis procedures and computer graphics, a selection of candidate areas is obtained. Specific sites may then be selected from among the candidate areas for environmental impact analysis. The computerized methodology utilizes a cell-based geographic information system for specifying the suitability of candidate areas for an energy facility. The criteria to be considered may be specified by the user and weighted in terms of importance. Three primary computer programs have been developed; they produce thematic maps, proximity calculations, and suitability calculations. The programs are written so as to be transferable to regional planning or regulatory agencies to assist in rational and comprehensive power plant site identification and analysis.
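
    A cell-based suitability calculation in the spirit of this methodology might look like the sketch below; the layers, weights, and screening threshold are invented for illustration and this is not the ORNL software described in the record.

```python
# Sketch: weighted suitability overlay on a cell grid, with a threshold
# to screen candidate siting areas. All layers and weights are invented.
import numpy as np

rng = np.random.default_rng(5)
shape = (50, 50)                              # the cell grid
slope = rng.random(shape)                     # 0 = flat, 1 = steep
water_dist = rng.random(shape)                # normalized proximity to water
popn = rng.random(shape)                      # normalized population density

# User-specified weights; higher suitability is better.
weights = {"slope": 0.4, "water": 0.35, "popn": 0.25}
suitability = (weights["slope"] * (1 - slope)      # prefer flat terrain
               + weights["water"] * water_dist     # prefer water access
               + weights["popn"] * (1 - popn))     # avoid dense areas

candidates = suitability > 0.7                # screening threshold
print(f"{candidates.sum()} candidate cells of {candidates.size}")
```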

  12. Pharmacopoeial procedure for the determination of tylosin factors by high-performance liquid chromatography.

    Science.gov (United States)

    Fish, B J; Carr, G P

    1986-02-26

    A method is described for the determination of the factors in tylosin base and tylosin tartrate as raw materials and in dosage forms. The reversed-phase chromatographic system is compared with other similar systems in terms of selectivity towards the major tylosin factors and an aldol condensation degradation product observed in tylosin injection. Experimental conditions affecting the separation of the components are discussed, together with procedures to demonstrate system validity. It is considered that the methods developed provide appropriate procedures for inclusion in pharmacopoeial monographs for tylosin, tylosin tartrate, tylosin premix, tylosin soluble powder, tylosin tablets and tylosin injection.

  13. A receptor model for urban aerosols based on oblique factor analysis

    DEFF Research Database (Denmark)

    Keiding, Kristian; Sørensen, Morten S.; Pind, Niels

    1987-01-01

    A procedure is outlined for the construction of receptor models of urban aerosols, based on factor analysis. The advantage of the procedure is that the covariation of source impacts is included in the construction of the models. The results are compared with results obtained by other receptor-modelling procedures. It was found that procedures based on correlating sources were physically sound as well as in mutual agreement. Procedures based on non-correlating sources were found to generate physically obscure models.

  14. Synfuel program analysis. Volume I. Procedures-capabilities

    Energy Technology Data Exchange (ETDEWEB)

    Muddiman, J. B.; Whelan, J. W.

    1980-07-01

    This is the first of two volumes describing the analytic procedures and resulting capabilities developed by Resource Applications (RA) for examining the economic viability, public costs, and national benefits of alternative synfuel projects and integrated programs. This volume is intended for Department of Energy (DOE) and Synthetic Fuel Corporation (SFC) program management personnel and includes a general description of the costing, venture, and portfolio models with enough detail for the reader to be able to specify cases and interpret outputs. It also contains an explicit description (with examples) of the types of results which can be obtained when applied to: the analysis of individual projects; the analysis of input uncertainty, i.e., risk; and the analysis of portfolios of such projects, including varying technology mixes and buildup schedules. In all cases, the objective is to obtain, on the one hand, comparative measures of private investment requirements and expected returns (under differing public policies) as they affect the private decision to proceed, and, on the other, public costs and national benefits as they affect public decisions to participate (in what form, in what areas, and to what extent).

  15. Perioperative outcomes for pediatric neurosurgical procedures: analysis of the National Surgical Quality Improvement Program-Pediatrics.

    Science.gov (United States)

    Kuo, Benjamin J; Vissoci, Joao Ricardo N; Egger, Joseph R; Smith, Emily R; Grant, Gerald A; Haglund, Michael M; Rice, Henry E

    2017-03-01

    OBJECTIVE Existing studies have shown a high overall rate of adverse events (AEs) following pediatric neurosurgical procedures. However, little is known regarding the morbidity of specific procedures or the association with risk factors to help guide quality improvement (QI) initiatives. The goal of this study was to describe the 30-day mortality and AE rates for pediatric neurosurgical procedures by using the American College of Surgeons (ACS) National Surgical Quality Improvement Program-Pediatrics (NSQIP-Peds) database platform. METHODS Data on 9996 pediatric neurosurgical patients were acquired from the 2012-2014 NSQIP-Peds participant user file. Neurosurgical cases were analyzed by the NSQIP-Peds targeted procedure categories, including craniotomy/craniectomy, defect repair, laminectomy, shunts, and implants. The primary outcome measure was 30-day mortality, with secondary outcomes including individual AEs, composite morbidity (all AEs excluding mortality and unplanned reoperation), surgical-site infection, and unplanned reoperation. Univariate analysis was performed between individual AEs and patient characteristics using Fisher's exact test. Associations between individual AEs and continuous variables (duration from admission to operation, work relative value unit, and operation time) were examined using Student's t-test. Patient characteristics and continuous variables associated with any AE by univariate analysis were used to develop category-specific multivariable models through backward stepwise logistic regression. RESULTS The authors analyzed 3383 craniotomy/craniectomy, 242 defect repair, 1811 laminectomy, and 4560 shunt and implant cases and found a composite overall morbidity of 30.2%, 38.8%, 10.2%, and 10.7%, respectively. Unplanned reoperation rates were highest for defect repair (29.8%). The mortality rate ranged from 0.1% to 1.2%. Preoperative ventilator dependence was a significant predictor of any AE for all procedure groups, whereas

  16. Development of a draft of human factors safety review procedures for the Korean next generation reactor

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Jung Woon; Moon, B. S.; Park, J. C.; Lee, Y. H.; Oh, I. S.; Lee, H. C. [Korea Atomic Energy Research Institute, Taejeon (Korea)

    2000-02-01

    In this study, a draft of human factors engineering (HFE) safety review procedures (SRP) was developed for the safety review of KNGR based on HFE Safety and Regulatory Requirements and Guidelines (SRRG). This draft includes acceptance criteria, review procedure, and evaluation findings for the areas of review including HFE Program Management, Human Factors Analyses, Human Factors Design, and HFE Verification and Validation, based on Section 15.1 'Human Factors Engineering Design Process' and 15.2 'Control Room Human Factors Engineering' of KNGR Specific Safety Requirements and Chapter 15 'Human Factors Engineering' of KNGR Safety Regulatory Guides. For the effective review, human factors concerns or issues related to advanced HSI design that have been reported so far should be extensively examined. In this study, a total of 384 human factors issues related to the advanced HSI design were collected through our review of a total of 145 documents. A summary of each issue was described and the issues were identified by specific features of HSI design. These results were implemented into a database system. 8 refs., 2 figs. (Author)

  17. Conditional Versus Unconditional Procedures for Sample-Free Item Analysis

    Science.gov (United States)

    Wright, Benjamin D.; Douglas, Graham A.

    1977-01-01

    Two procedures for Rasch, sample-free item calibration are reviewed and compared for accuracy. The theoretically ideal "conditional" procedure is impractical for more than fifteen items. The more practical but biased "unconditional" procedure is discussed in detail. (Author/JKS)

  18. Scenes for Social Information Processing in Adolescence: Item and factor analytic procedures for psychometric appraisal.

    Science.gov (United States)

    Vagos, Paula; Rijo, Daniel; Santos, Isabel M

    2016-04-01

    Relatively little is known about measures used to investigate the validity and applications of social information processing theory. The Scenes for Social Information Processing in Adolescence includes items built using a participatory approach to evaluate the attribution of intent, emotion intensity, response evaluation, and response decision steps of social information processing. We evaluated a sample of 802 Portuguese adolescents (61.5% female; mean age = 16.44 years old) using this instrument. Item analysis and exploratory and confirmatory factor analytic procedures were used for psychometric examination. Two measures for attribution of intent were produced, including hostile and neutral; along with 3 emotion measures, focused on negative emotional states; 8 response evaluation measures; and 4 response decision measures, including prosocial and impaired social behavior. All of these measures achieved good internal consistency values and fit indicators. Boys seemed to favor and choose overt and relational aggression behaviors more often; girls conveyed higher levels of neutral attribution, sadness, and assertiveness and passiveness. The Scenes for Social Information Processing in Adolescence achieved adequate psychometric results and seems a valuable alternative for evaluating social information processing, even if it is essential to continue investigation into its internal and external validity.

  19. Factors affecting exposure level for medical staff during orthopedic procedures under fluoroscopic control

    Directory of Open Access Journals (Sweden)

    Maria A. Staniszewska

    2017-02-01

    Full Text Available Background: Extended control of staff exposure in interventional radiology has been legally required over the last few years. Exposure is determined by a number of factors, including the type of procedure, technical conditions, and methodology. In orthopedic procedures, fluoroscopy is used to control surgical reconstructions. The influence of particular factors on the registered doses received by the members of medical teams performing osteosynthesis for limb fractures is presented in this paper. Material and Methods: Doses received by individual interventional team members performing specific functions (operator, assisting physicians, and scrub nurse) during a series of procedures were measured. Each person was equipped with 4 dosimetric tools containing thermoluminescent dosimeters to measure the equivalent doses for the eyes, hand skin, and the neck (outside the shield) and to evaluate effective doses. The investigations were performed in operating theatres of 3 hospitals in Łódź. Results: The equivalent doses per procedure for the eyes and hand skin of the operator were 0.029–0.073 mSv and 0.366–1.604 mSv, respectively. Significantly higher doses were noted during procedures of intramedullary osteosynthesis, especially for the operator. The average age and body mass index (BMI) of patients treated in the monitored hospitals did not differ statistically. Conclusions: Based on the dosimetric measurements, the following conclusions can be drawn: in orthopedic procedures of interventional radiology (IR), the exposure of the staff is mostly determined by the type of procedure, more precisely by its complexity, and by the optimized use of the X-ray unit, including pulsed fluoroscopy. The operator is the most exposed person in the interventional team. Med Pr 2017;68(1):75–83

  20. Procedural justice versus risk factors for offending: predicting recidivism in youth.

    Science.gov (United States)

    Penner, Erika K; Viljoen, Jodi L; Douglas, Kevin S; Roesch, Ronald

    2014-06-01

    Theories of procedural justice suggest that individuals who experience respectful and fair legal decision-making procedures are more likely to believe in the legitimacy of the law and, in turn, are less likely to reoffend. However, few studies have examined these relationships in youth. To begin to fill this gap in the literature, in the current study, the authors studied 92 youth (67 male, 25 female) on probation regarding their perceptions of procedural justice and legitimacy, and then monitored their offending over the subsequent 6 months. Results indicated that perceptions of procedural justice predicted self-reported offending at 3 months but not at 6 months, and that youths' beliefs about the legitimacy of the law did not mediate this relationship. Furthermore, procedural justice continued to account for unique variance in self-reported offending over and above the predictive power of well-established risk factors for offending (i.e., peer delinquency, substance abuse, psychopathy, and age at first contact with the law). Theoretically, the current study provides evidence that models of procedural justice developed for adults are only partially replicated in a sample of youth; practically, this research suggests that by treating adolescents in a fair and just manner, justice professionals may be able to reduce the likelihood that adolescents will reoffend, at least in the short term.

  1. Sociodemographic Predictors of Breast Reconstruction Procedure Choice: Analysis of the Mastectomy Reconstruction Outcomes Consortium Study Cohort

    Directory of Open Access Journals (Sweden)

    Tiffany N. S. Ballard

    2015-01-01

    Full Text Available Background. To promote patient-centered care, it is important to understand the impact of sociodemographic factors on procedure choice for women undergoing postmastectomy breast reconstruction. In this context, we analyzed the effects of these variables on the reconstructive method chosen. Methods. Women undergoing postmastectomy breast reconstruction were recruited for the prospective Mastectomy Reconstruction Outcomes Consortium Study. Procedure types were divided into tissue expander-implant/direct-to-implant and abdominally based flap reconstructions. Adjusted odds ratios were calculated from logistic regression. Results. The analysis included 2,203 women with current or previous breast cancer and 202 women undergoing prophylactic mastectomy. Compared with women <40 years old with current or previous breast cancer, those 40 to 59 were significantly more likely to undergo an abdominally based flap. Women working or attending school full-time were more likely to receive an autologous procedure than those working part-time or volunteering. Women undergoing prophylactic mastectomy who were ≥50 years were more likely to undergo an abdominal flap compared to those <40. Conclusions. Our results indicate that sociodemographic factors affect the reconstructive procedure received. As we move forward into a new era of patient-centered care, providing tailored treatment options to reconstruction patients will likely lead to higher satisfaction and better outcomes for those we serve.

  2. Estimation procedures affect the center of pressure frequency analysis.

    Science.gov (United States)

    Vieira, T M M; Oliveira, L F; Nadal, J

    2009-07-01

    Even though frequency analysis of body sway is widely applied in clinical studies, the lack of standardized procedures concerning power spectrum estimation may provide unreliable descriptors. Stabilometric tests were applied to 35 subjects (20-51 years, 54-95 kg, 1.6-1.9 m) and the power spectral density function was estimated for the anterior-posterior center of pressure time series. The median frequency was compared between power spectra estimated according to signal partitioning, sampling rate, test duration, and detrending methods. The median frequency reliability for different test durations was assessed using the intraclass correlation coefficient. When increasing the number of segments, shortening test duration, or applying linear detrending, the median frequency values increased significantly, by up to 137%. Even the shortest test duration provided reliable estimates as observed with the intraclass coefficient (0.74-0.89 confidence interval for a single 20-s test). Clinical assessment of balance may benefit from a standardized protocol for center of pressure spectral analysis that provides an adequate relationship between resolution and variance. An algorithm to estimate the center of pressure power density spectrum is also proposed.

  3. Estimation procedures affect the center of pressure frequency analysis

    Directory of Open Access Journals (Sweden)

    T.M.M. Vieira

    2009-07-01

    Full Text Available Even though frequency analysis of body sway is widely applied in clinical studies, the lack of standardized procedures concerning power spectrum estimation may provide unreliable descriptors. Stabilometric tests were applied to 35 subjects (20-51 years, 54-95 kg, 1.6-1.9 m) and the power spectral density function was estimated for the anterior-posterior center of pressure time series. The median frequency was compared between power spectra estimated according to signal partitioning, sampling rate, test duration, and detrending methods. The median frequency reliability for different test durations was assessed using the intraclass correlation coefficient. When increasing the number of segments, shortening test duration, or applying linear detrending, the median frequency values increased significantly, by up to 137%. Even the shortest test duration provided reliable estimates as observed with the intraclass coefficient (0.74-0.89 confidence interval for a single 20-s test). Clinical assessment of balance may benefit from a standardized protocol for center of pressure spectral analysis that provides an adequate relationship between resolution and variance. An algorithm to estimate the center of pressure power density spectrum is also proposed.
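
    The median-frequency descriptor discussed in this record (and the preceding one) can be computed roughly as follows with SciPy's Welch estimator; the simulated sway signal, sampling rate, and single-segment choice are assumptions, not the authors' published algorithm.

```python
# Sketch: median frequency of a center-of-pressure-like signal from its
# Welch power spectral density. Signal and parameters are illustrative.
import numpy as np
from scipy.signal import welch

fs = 100.0                                   # assumed sampling rate, Hz
rng = np.random.default_rng(6)
t = np.arange(0, 20.0, 1 / fs)               # a single 20-s "test"
cop = np.cumsum(rng.normal(size=t.size))     # sway-like random-walk signal

# One segment over the whole record: the study reports that splitting the
# signal into more (shorter) segments biases the median frequency upward.
f, pxx = welch(cop - cop.mean(), fs=fs, nperseg=cop.size)

cum = np.cumsum(pxx)
median_freq = f[np.searchsorted(cum, 0.5 * cum[-1])]
print(f"median frequency: {median_freq:.3f} Hz")
```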

  4. Guide to IDAP, Version 2: an interactive decision analysis procedure

    Energy Technology Data Exchange (ETDEWEB)

    Jusko, M.J.; Whitfield, R.G.

    1980-11-01

    This document is intended to serve as both a programmer's and a user's guide to the current version of IDAP, and to prompt interested individuals into making suggestions for its future development. The majority of the sections pertain to the main IDA program rather than to the IDAIN procedure. A brief discussion of the theory of decision analysis is presented, covering the aspects relevant to IDAP. A complete list and description of the commands used in the IDAP program is provided, including three complete examples; this section may be considered a user's guide to IDAP. The programmer's guide discusses the various technical aspects of the programs and may be skipped by users not involved with programming IDAP. A list of the error messages generated by IDAP is presented. As the program is developed, error handling and messages will improve.

  5. An easy guide to factor analysis

    CERN Document Server

    Kline, Paul

    2014-01-01

    Factor analysis is a statistical technique widely used in psychology and the social sciences. With the advent of powerful computers, factor analysis and other multivariate methods are now available to many more people. An Easy Guide to Factor Analysis presents and explains factor analysis as clearly and simply as possible. The author, Paul Kline, carefully defines all statistical terms and demonstrates step-by-step how to work out a simple example of principal components analysis and rotation. He further explains other methods of factor analysis, including confirmatory and path analysis, a

  6. Optimization of an effective extraction procedure for the analysis of microcystins in soils and lake sediments

    Energy Technology Data Exchange (ETDEWEB)

    Chen Wei [State Key Laboratory of Freshwater Ecology and Biotechnology, Institute of Hydrobiology, Chinese Academy of Sciences, Wuhan 430072 (China); Graduate School of Chinese Academy of Sciences, Beijing 100049 (China); Li Lin [State Key Laboratory of Freshwater Ecology and Biotechnology, Institute of Hydrobiology, Chinese Academy of Sciences, Wuhan 430072 (China); Gan Nanqin [State Key Laboratory of Freshwater Ecology and Biotechnology, Institute of Hydrobiology, Chinese Academy of Sciences, Wuhan 430072 (China); Song Lirong [State Key Laboratory of Freshwater Ecology and Biotechnology, Institute of Hydrobiology, Chinese Academy of Sciences, Wuhan 430072 (China)]. E-mail: lrsong@ihb.ac.cn

    2006-09-15

    Microcystin analysis in sediments and soils is considered very difficult due to low extraction recovery. This is the primary factor limiting our understanding of the fate of the toxins at the interface between water and sediment, both in aquatic ecosystems and in soils. In the present study, a wide range of extraction solvents was evaluated over a wide range of pH values, extraction approaches, and equilibration times in order to optimize an effective extraction procedure for the analysis of microcystins in soils and lake sediments. The number of extractions required and the acids in the extraction solutions were also studied. In this procedure, EDTA-sodium pyrophosphate solution was selected as the extraction solvent based on a study of the adsorption mechanism. The optimized procedure proved to be highly efficient and achieved over 90% recovery. Finally, the developed procedure was applied to field soil and sediment samples collected from Chinese lakes during bloom seasons, and microcystins were determined in six of ten samples. - Efficiency of extraction of microcystins from soil and sediment was greatly increased.

  7. PROCEDURE FOR ANALYSIS AND EVALUATION OF MARKET POSITION PRODUCTION ORGANIZATION

    Directory of Open Access Journals (Sweden)

    A. N. Polozova

    2014-01-01

    Full Text Available Summary: Methodical procedures are proposed for economic monitoring of the market position of a production organization, particularly one engaged in food production. The procedure comprises 5 elements: a matrix of business-process components; a «materiality – efficiency» matrix; a «materiality – relevance» matrix; a matrix of enabling and hindering factors; and a matrix of operating scenarios. The components for assessing the strengths and weaknesses of an organization's business activities, which characterize the state of the internal business environment, are substantiated across the elements: production, organization, personnel, finance, and marketing. The advantages of the «materiality – relevance» matrix are shown; it consists of 2 materiality levels (high and low) and 3 relevance directions («no change», «gaining importance in the future», «losing importance in the future»). The contents of the «scenarios of the organization's functioning» matrix are presented, involving 6 attribute levels, 10 classes of scenarios, and 19 activities, including optimistic and pessimistic ones. An evaluation is given of the primary classes of scenarios, characterized by the properties of «development», «dynamic equilibrium», «quality improvement», «competitiveness», «favorable realization of opportunities», and «resistance to competition».

  8. Modifiable factors to decrease the cost of robotic-assisted procedures.

    Science.gov (United States)

    Nayeemuddin, Mohammed; Daley, Susan C; Ellsworth, Pamela

    2013-10-01

    In 2000, the US Food and Drug Administration approved the da Vinci Surgical System® for use in the United States. Since that time, the number of surgical robotic systems throughout the United States has continued to grow. The costs for using the system include the initial purchase ($1 million to $2.3 million) plus annual maintenance fees ($100,000 to $150,000) and the cost of limited-use or disposable instruments. Increasing the number of procedures that are performed using the robotic system can decrease the per-procedure costs. Two modifiable factors that contribute to increasing the annual caseload are increasing the number of surgeons capable of using the system and having a properly educated perioperative nursing team. An educated surgical team decreases turnover time, facilitates proper flow of each surgical procedure, and is able to actively and passively solve intraoperative problems.

  9. Computing the surveillance error grid analysis: procedure and examples.

    Science.gov (United States)

    Kovatchev, Boris P; Wakeman, Christian A; Breton, Marc D; Kost, Gerald J; Louie, Richard F; Tran, Nam K; Klonoff, David C

    2014-07-01

    The surveillance error grid (SEG) analysis is a tool for the analysis and visualization of blood glucose monitoring (BGM) errors, based on the opinions of 206 diabetes clinicians who rated 4 distinct treatment scenarios. Resulting from this large-scale inquiry is a matrix of 337,561 risk ratings, 1 for each pair of (reference, BGM) readings ranging from 20 to 580 mg/dl. The computation of the SEG is therefore complex and in need of automation. The SEG software introduced in this article automates the task of assigning a degree of risk to each data point for a set of measured and reference blood glucose values so that the data can be distributed into 8 risk zones. The software's 2 main purposes are to (1) distribute a set of BGM data into 8 risk zones ranging from none to extreme and (2) present the data in a color-coded display to promote visualization. Besides aggregating the data into 8 zones corresponding to levels of risk, the SEG computes the number and percentage of data pairs in each zone and the number/percentage of data pairs above/below the diagonal line in each zone, which are associated with BGM errors creating risks for hypo- or hyperglycemia, respectively. To illustrate the action of the SEG software, we first present computer-simulated data stratified along error levels defined by ISO 15197:2013; this allows the SEG to be linked to this established standard. Further illustration of the SEG procedure is done with a series of previously published data, which reflect the performance of BGM devices and test strips under various environmental conditions. We conclude that the SEG software is a useful addition to the SEG analysis presented in this journal, developed to assess the magnitude of clinical risk from analytically inaccurate data in a variety of high-impact situations such as intensive care and disaster settings.
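
    The zone-counting step can be sketched as follows; note that the risk surface below is a crude stand-in for the published SEG matrix of 337,561 clinician ratings, which this sketch does not reproduce.

```python
# Sketch: tally (reference, monitor) pairs into 8 risk zones using a
# hypothetical risk matrix indexed by readings in mg/dl. The stand-in
# surface simply grows with relative error; it is NOT the published SEG.
import numpy as np

lo, hi = 20, 580
grid = np.arange(lo, hi + 1)
ref_g, bgm_g = np.meshgrid(grid, grid, indexing="ij")
risk_matrix = np.clip(4.0 * np.abs(bgm_g - ref_g) / ref_g, 0, 4)

zone_edges = np.linspace(0, 4, 9)            # 8 zones: none .. extreme
rng = np.random.default_rng(7)
ref = rng.integers(lo, hi, 500)              # simulated reference values
bgm = np.clip(ref + rng.normal(0, 15, 500).astype(int), lo, hi)

risks = risk_matrix[ref - lo, bgm - lo]      # look up each pair's risk
zones = np.digitize(risks, zone_edges[1:-1]) # zone index 0..7
counts = np.bincount(zones, minlength=8)
print(dict(enumerate(counts)))               # data pairs per risk zone
```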

  10. Pricing of common cosmetic surgery procedures: local economic factors trump supply and demand.

    Science.gov (United States)

    Richardson, Clare; Mattison, Gennaya; Workman, Adrienne; Gupta, Subhas

    2015-02-01

    The pricing of cosmetic surgery procedures has long been thought to coincide with laws of basic economics, including the model of supply and demand. However, the highly variable prices of these procedures indicate that additional economic contributors are probable. The authors sought to reassess the fit of cosmetic surgery costs to the model of supply and demand and to determine the driving forces behind the pricing of cosmetic surgery procedures. Ten plastic surgery practices were randomly selected from each of 15 US cities of various population sizes. Average prices of breast augmentation, mastopexy, abdominoplasty, blepharoplasty, and rhytidectomy in each city were compared with economic and demographic statistics. The average price of cosmetic surgery procedures correlated substantially with population size (r = 0.767), cost-of-living index (r = 0.784), cost to own real estate (r = 0.714), and cost to rent real estate (r = 0.695) across the 15 US cities. Cosmetic surgery pricing also was found to correlate (albeit weakly) with household income (r = 0.436) and per capita income (r = 0.576). Virtually no correlations existed between pricing and the density of plastic surgeons (r = 0.185) or the average age of residents (r = 0.076). Results of this study demonstrate a correlation between costs of cosmetic surgery procedures and local economic factors. Cosmetic surgery pricing cannot be completely explained by the supply-and-demand model because no association was found between procedure cost and the density of plastic surgeons. © 2015 The American Society for Aesthetic Plastic Surgery, Inc. Reprints and permission: journals.permissions@oup.com.

  11. Human Factors Analysis in Software Engineering

    Institute of Scientific and Technical Information of China (English)

    Xu Ren-zuo; Ma Ruo-feng; Liu Li-na; Xiong Zhong-wei

    2004-01-01

    The general human factors analysis examines human functions, effects, and influence in a system. In a narrower sense, it analyzes human influence upon the reliability of a system; this includes traditional human reliability analysis, human error analysis, man-machine interface analysis, human character analysis, and others. Whether a software development project in software engineering succeeds is largely determined by human factors. In this paper, we discuss what human factors encompass, demonstrate the importance of human factors analysis for software engineering by citing some instances, and finally take a preliminary look at the mentality that a practitioner in software engineering should possess.

  12. A factor analysis to detect factors influencing building national brand

    Directory of Open Access Journals (Sweden)

    Naser Azad

    Full Text Available Developing a national brand is one of the most important issues in building a brand. In this study, we present a factor analysis to detect the most important factors in building a national brand. The proposed study uses factor analysis to extract the most influential factors, with a sample drawn from two major automakers in Iran, Iran Khodro and Saipa. The questionnaire was designed on a Likert scale and distributed among 235 experts. Cronbach's alpha is calculated as 0.84, which is well above the minimum desirable limit of 0.70. The implementation of factor analysis yields six factors, including “cultural image of customers”, “exciting characteristics”, “competitive pricing strategies”, “perception image”, and “previous perceptions”.
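
    For reference, Cronbach's alpha as cited above is computed from the item variances and the variance of the total score. The sketch below applies the standard formula to simulated Likert responses; it will not reproduce the study's 0.84.

```python
# Sketch: Cronbach's alpha = k/(k-1) * (1 - sum(item variances) / var(total))
# on simulated 5-point Likert data for 235 respondents and 10 items.
import numpy as np

rng = np.random.default_rng(8)
base = rng.integers(1, 6, size=(235, 1))                 # shared "trait"
items = np.clip(base + rng.integers(-1, 2, size=(235, 10)), 1, 5)

k = items.shape[1]
item_vars = items.var(axis=0, ddof=1).sum()
total_var = items.sum(axis=1).var(ddof=1)
alpha = k / (k - 1) * (1 - item_vars / total_var)
print(f"Cronbach's alpha = {alpha:.2f}")
```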

  13. Comparison of various procedures for progressive collapse analysis of cable-stayed bridges

    Institute of Scientific and Technical Information of China (English)

    Jian-guo CAI; Yi-xiang XU; Li-ping ZHUANG; Jian FENG; Jin ZHANG

    2012-01-01

    The alternate path (AP) method is the most widely used method for progressive collapse analysis, and its application to frame structures has been well proved. However, the application of the AP method to other structures, especially cable-stayed structures, needs further development. Four analytical procedures, i.e., linear static, nonlinear static, linear dynamic, and nonlinear dynamic, were first improved by taking the initial state into account. Then a cable-stayed structure was studied using the four improved methods. Furthermore, the losses of one cable and of two cables were discussed. The results show that for static and dynamic analyses of cable-stayed bridges, there is a large difference between the results obtained from simulations starting with either a deformed or a non-deformed configuration at the time of cable loss. The static results are conservative in the vicinity of the ruptured cable, but they cannot capture the dynamic effect of the cable loss in areas farther away from the lost cable. Moreover, a dynamic amplification factor of 2.0 is found to be a good estimate for static analysis procedures, since linear static and linear dynamic procedures yield approximately the same maximum vertical deflection. The results of the comprehensive evaluation of cable failure show that the tendency toward progressive failure of cable-stayed bridges decreases when the location of the failed cables is closer to the pylon.

  14. Measurement Bias Detection through Factor Analysis

    Science.gov (United States)

    Barendse, M. T.; Oort, F. J.; Werner, C. S.; Ligtvoet, R.; Schermelleh-Engel, K.

    2012-01-01

    Measurement bias is defined as a violation of measurement invariance, which can be investigated through multigroup factor analysis (MGFA), by testing across-group differences in intercepts (uniform bias) and factor loadings (nonuniform bias). Restricted factor analysis (RFA) can also be used to detect measurement bias. To also enable nonuniform…

  15. An improved modal pushover analysis procedure for estimating seismic demands of structures

    Institute of Scientific and Technical Information of China (English)

    Mao Jianmeng; Zhai Changhai; Xie Lili

    2008-01-01

    The pushover analysis (POA) procedure is difficult to apply to high-rise buildings, as it cannot account for the contributions of higher modes. To overcome this limitation, a modal pushover analysis (MPA) procedure was proposed by Chopra et al. (2001). However, invariable lateral force distributions are still adopted in the MPA. In this paper, an improved MPA procedure is presented to estimate the seismic demands of structures, considering the redistribution of inertia forces after the structure yields. This improved procedure is verified with numerical examples of 5-, 9- and 22-story buildings. It is concluded that the improved MPA procedure is more accurate than either the POA procedure or the MPA procedure. In addition, the proposed procedure avoids a large computational effort by adopting a two-phase lateral force distribution.

  16. Radiographic parameter analysis on modified sauvé-kapandji procedure.

    Science.gov (United States)

    Ota, Norikazu; Nakamura, Toshiyasu; Iwamoto, Takuji; Sato, Kazuki; Toyama, Yoshiaki

    2013-02-01

    Purpose The Sauvé-Kapandji (S-K) procedure is now an established treatment option for symptomatic distal radioulnar joint (DRUJ) dysfunction. However, for patients with poor bone quality (frequently as a result of advanced-stage rheumatoid arthritis [RA]), the conventional S-K procedure is difficult to perform without reducing the radioulnar diameter of the wrist, which may result in a loss of grip strength and pain over the proximal ulnar stump. The purpose of this study was to review the radiographic outcomes of patients who underwent a modified S-K procedure that involves rotating the resected ulnar segment 90 degrees and using it to bridge the gap between the sigmoid notch and the ulnar head. Methods The modified S-K procedure was performed in 29 wrists of 23 patients. Twenty-one patients had severe RA, while two had malunited radius fractures. The mean follow-up period was 43 months (range, 23 to 95). The radiographic evaluation included a measurement of the radioulnar width, the pseudarthrosis gap between the proximal and distal ulnar stump, the radioulnar distance, and the ulnar translation of the carpus. Results The radioulnar width of the wrist, pseudarthrosis gap, and radioulnar distance were well maintained throughout the period. A postoperative loss in the radioulnar width of the wrists appeared to correlate with a postoperative additional ulnar translocation of the carpus. Conclusion Narrowing of the radioulnar width of the wrist is a potential cause of progressive ulnar translocation of the carpus. The modified technique for the S-K procedure maintains the distal ulna in the proper position and provides sufficient ulnar support for the carpus. It is a useful reconstruction procedure in patients with severe RA with poor bone quality.

  17. Patient and procedural factors associated with an increased risk of harm or death in the first 4,000 incidents reported to webAIRS.

    Science.gov (United States)

    Gibbs, N M; Culwick, M D; Merry, A F

    2017-03-01

    This report describes an analysis of patient and procedural factors associated with a higher proportion of harm or death versus no harm in the first 4,000 incidents reported to webAIRS. The report is supplementary to a previous cross-sectional report on the first 4,000 incidents reported to webAIRS. The aim of this analysis was to identify potential patient or procedural factors that are more common in incidents resulting in harm or death than in incidents with more benign outcomes. There was a >50% higher proportion of harm (versus no harm) for incidents in which the patient's body mass index (BMI) was incidents in post-anaesthesia care units and non-theatre procedural areas, and for incidents under the main category of cardiovascular or neurological. The proportion of incidents associated with death was also higher (risk ratio >1.5) for BMI incidents in non-theatre procedural areas, and incidents under the main category of cardiovascular or neurological. In addition, the proportion of incidents associated with death was higher for incidents in which the patient's age was >80 years, the American Society of Anesthesiologists physical status was 4 or 5, incidents involving non-elective procedures, and incidents occurring after hours (1800 to 0800 hours). When faced with incidents with these potential risk factors, anaesthetists should consider earlier interventions and request assistance at an earlier stage. Educational strategies on incident prevention and management should place even further emphasis on scenarios involving these factors.
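
    The risk-ratio comparison underlying these findings reduces to the proportion of harm among incidents with a given factor divided by the proportion among incidents without it; the counts below are invented for illustration.

```python
# Sketch: risk ratio of harm for incidents with vs without a factor.
# The counts are invented, not webAIRS data.
harm_with, total_with = 45, 200        # incidents with the factor
harm_without, total_without = 30, 300  # incidents without it

rr = (harm_with / total_with) / (harm_without / total_without)
print(f"risk ratio = {rr:.2f}")        # a value > 1.5 would flag the factor
```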

  18. Rates and risk factors of unplanned 30-day readmission following general and thoracic pediatric surgical procedures.

    Science.gov (United States)

    Polites, Stephanie F; Potter, Donald D; Glasgow, Amy E; Klinkner, Denise B; Moir, Christopher R; Ishitani, Michael B; Habermann, Elizabeth B

    2017-08-01

    Postoperative unplanned readmissions are costly and decrease patient satisfaction; however, little is known about this complication in pediatric surgery. The purpose of this study was to determine rates and predictors of unplanned readmission in a multi-institutional cohort of pediatric surgical patients. Unplanned 30-day readmissions following general and thoracic surgical procedures in children were identified from the National Surgical Quality Improvement Program-Pediatric. Time-dependent rates of readmission per 30 person-days were determined to account for varied postoperative length of stay (pLOS). Patients were randomly divided into 70% derivation and 30% validation cohorts, which were used for the creation and validation of a risk model for readmission. Readmission occurred in 1948 (3.6%) of 54,870 children, for a rate of 4.3% per 30 person-days. Adjusted predictors of readmission included hepatobiliary procedures, increased wound class, operative duration, complications, and pLOS. The predictive model discriminated well in the derivation and validation cohorts (AUROC 0.710 and 0.701) with good calibration between observed and expected readmission events in both cohorts (p>.05). Unplanned readmission occurs less frequently in pediatric surgery than described in adults, calling into question its use as a quality indicator in this population. Factors that predict readmission, including type of procedure, complications, and pLOS, can be used to identify at-risk children and develop prevention strategies. Level of evidence: III. Copyright © 2017 Elsevier Inc. All rights reserved.
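
    The modeling step the abstract describes (a 70/30 derivation/validation split, a logistic risk model, AUROC in each cohort) can be sketched as follows. This is an illustrative reconstruction on simulated data, not the authors' code; all predictor names, coefficients, and sample sizes below are hypothetical.

```python
# Illustrative sketch: derive a readmission risk model on a 70% cohort and
# validate it on the remaining 30%, reporting AUROC in each cohort.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 54_870
# Hypothetical predictors: wound class, operative duration, pLOS, complication.
X = np.column_stack([
    rng.integers(1, 5, n),        # wound class 1-4
    rng.gamma(2.0, 1.0, n),       # operative duration (hours)
    rng.poisson(3, n),            # postoperative length of stay (days)
    rng.binomial(1, 0.1, n),      # any postoperative complication
]).astype(float)
logit = -4.2 + 0.2 * X[:, 0] + 0.1 * X[:, 1] + 0.08 * X[:, 2] + 1.0 * X[:, 3]
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))  # a few percent event rate

X_dev, X_val, y_dev, y_val = train_test_split(X, y, test_size=0.30, random_state=1)
model = LogisticRegression(max_iter=1000).fit(X_dev, y_dev)
for name, X_, y_ in [("derivation", X_dev, y_dev), ("validation", X_val, y_val)]:
    auroc = roc_auc_score(y_, model.predict_proba(X_)[:, 1])
    print(f"{name} AUROC: {auroc:.3f}")
```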

  19. Analysis of assistance procedures to normal birth in primiparous

    Directory of Open Access Journals (Sweden)

    Joe Luiz Vieira Garcia Novo

    2016-04-01

    Introduction: Although current medical technologies in childbirth care have increased maternal and fetal benefits, numerous unnecessary procedures persist. The purpose of normal childbirth care is to have healthy women and newborns, using a minimum of safe interventions. Objective: To analyze the assistance to normal delivery in a secondary care maternity. Methodology: A total of 100 primiparous mothers who had vaginal delivery were included, and the care practices used were categorized: (1) according to the WHO classification for assistance to normal childbirth: effective, harmful, used with caution, and used inappropriately; (2) associating calculations with the Bologna Index parameters: presence of a birth partner, partograph, no stimulation of labor, delivery in a non-supine position, and mother-newborn skin-to-skin contact. Results: Birth partners (85%), correctly filled partographs (62%), mother-newborn skin-to-skin contact (36%), use of oxytocin (87%), use of parenteral nutrition during labor (86%) and at delivery (74%), episiotomy (94%), and uterine fundal pressure in the expulsion stage (58%). The overall average value of the Bologna Index for the mothers analyzed was 1.95. Conclusions: Some effective procedures recommended by WHO were complied with (presence of a birth partner), some effective and mandatory practices were not (a completely filled partograph), potentially harmful or ineffective procedures were used (oxytocin in labor/post-partum), and inadequate procedures were performed (uterine fundal pressure during the expulsion stage, use of forceps, and episiotomy). The maternity's care model did not offer excellent natural birth procedures to its primiparous mothers (BI = 1.95).
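
    A minimal sketch of how a Bologna Index cohort mean could be computed, assuming the usual scoring of one point per delivery for each of the five practices named above; the scoring rule and the sample records below are assumptions for illustration, not data from the study.

```python
# Assumed scoring: one point per delivery for each of the five practices,
# so each birth scores 0-5; the cohort mean is the reported index value.
PRACTICES = ("birth_partner", "partograph", "no_labor_stimulation",
             "non_supine_position", "skin_to_skin")

def bologna_index(delivery: dict) -> int:
    """Count how many of the five practices were met for one delivery."""
    return sum(int(delivery[p]) for p in PRACTICES)

# Two made-up delivery records, for illustration only.
cohort = [
    {"birth_partner": 1, "partograph": 1, "no_labor_stimulation": 0,
     "non_supine_position": 0, "skin_to_skin": 0},
    {"birth_partner": 1, "partograph": 0, "no_labor_stimulation": 0,
     "non_supine_position": 0, "skin_to_skin": 1},
]
mean_bi = sum(bologna_index(d) for d in cohort) / len(cohort)
print(mean_bi)  # the study reports a cohort mean of 1.95
```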

  20. Text mining factor analysis (TFA) in green tea patent data

    Science.gov (United States)

    Rahmawati, Sela; Suprijadi, Jadi; Zulhanif

    2017-03-01

    Factor analysis has become one of the most widely used multivariate statistical procedures in applied research across a multitude of domains. There are two main types of analysis based on factor analysis: Exploratory Factor Analysis (EFA) and Confirmatory Factor Analysis (CFA). Both EFA and CFA aim to model the relationships among a group of indicators and a latent variable, but they differ fundamentally in the a priori restrictions placed on the factor model. This method is applied to patent data from the green tea technology sector to trace the development of green tea technology worldwide. Patent analysis is useful in identifying future technological trends in a specific field of technology. The patent database was obtained from the European Patent Organization (EPO). In this paper, the CFA model is applied to nominal data obtained from a presence-absence matrix; the CFA for nominal data is based on the tetrachoric correlation matrix. Meanwhile, the EFA model is applied to titles from the dominant technology sector, which are first pre-processed using text mining.
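
    The nominal-data step described above (a tetrachoric matrix built from a presence-absence matrix, then factored) can be sketched as follows. This uses the classical cos-pi approximation to the tetrachoric correlation rather than a full maximum-likelihood estimate, and the patent-term matrix is simulated; it is a sketch of the technique, not the paper's code.

```python
# Build an approximate tetrachoric correlation matrix from a binary
# presence/absence matrix, then extract factor loadings from it.
import numpy as np

def tetrachoric_approx(x: np.ndarray, y: np.ndarray) -> float:
    """Cos-pi approximation of the tetrachoric correlation of two 0/1 vectors."""
    a = np.sum((x == 1) & (y == 1)) + 0.5  # continuity correction
    b = np.sum((x == 1) & (y == 0)) + 0.5
    c = np.sum((x == 0) & (y == 1)) + 0.5
    d = np.sum((x == 0) & (y == 0)) + 0.5
    return np.cos(np.pi / (1.0 + np.sqrt((a * d) / (b * c))))

rng = np.random.default_rng(7)
pa = rng.binomial(1, 0.3, size=(200, 6))   # 200 patents x 6 technology terms

p = pa.shape[1]
R = np.eye(p)
for i in range(p):
    for j in range(i + 1, p):
        R[i, j] = R[j, i] = tetrachoric_approx(pa[:, i], pa[:, j])

# Principal-axis style factoring: leading eigenvectors of R as loadings.
vals, vecs = np.linalg.eigh(R)
loadings = vecs[:, -2:] * np.sqrt(vals[-2:])  # two-factor solution
print(np.round(loadings, 2))
```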

  1. An outcomes analysis of 2142 breast reduction procedures.

    Science.gov (United States)

    Manahan, Michele Ann; Buretta, Kate J; Chang, David; Mithani, Suhail K; Mallalieu, Jesse; Shermak, Michele A

    2015-03-01

    Breast reduction alleviates macromastia symptoms and facilitates symmetrical breast reconstruction after cancer treatment. We investigated a large series of consecutive breast reductions to study important factors that impact outcomes. An institutional review board-approved, retrospective review of all breast reductions from 1999 to 2009 in a single institution was performed using the medical record for demographics, medical history, physical examination, intraoperative data, and postoperative complications. Multivariate statistical analysis was performed using Stata. P ≤ 0.05 defined significance. Seventeen surgeons performed 2152 consecutive breast reductions on 1148 patients using inferior pedicle/Wise pattern (56.4%), medial pedicle/Wise pattern (16.8%), superior pedicle/nipple graft/Wise pattern (15.1%), superior pedicle/vertical pattern (11.6%), and liposuction (0.1%) techniques. Complications included discernible scars (14.5%), nonsurgical wounds (13.5%), fat necrosis (8.2%), infection (7.3%), wounds requiring negative pressure wound therapy or reoperation (1.4%), and seroma (1.2%). Reoperation rates were 6.7% for scars, 1.4% for fat necrosis, and 1% for wounds. Body mass index greater than or equal to 35 kg/m² increased the risk of infections [odds ratio (OR), 2.3, P = 0.000], seromas (OR, 2.9, P = 0.03), fat necrosis (OR, 2.0, P = 0.002), and minor wounds (OR, 1.7, P = 0.001). Cardiac disease increased reoperation for scar (OR, 3.0, P = 0.04) and fat necrosis (OR, 5.3, P = 0.03). Tobacco use increased the infection rate (OR, 2.1, P = 0.008). Secondary surgery increased seromas (OR, 12.0, P = 0.001). Previous hysterectomy/oophorectomy increased the risk of wound reoperations (OR, 3.4, P = 0.02), and exogenous hormone supplementation trended toward decreasing infections (OR, 0.5, P = 0.08). χ² analysis revealed a 7.8% infection risk without exogenous hormone versus a 3.8% risk with hormone supplementation (P = 0.02). Morbid obesity, tobacco, cardiac history, and
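
    The hormone-supplementation comparison above is a standard chi-square test on a 2x2 table. A sketch with hypothetical counts chosen to match the reported rates (7.8% vs. 3.8%); the denominators are invented for illustration.

```python
# Chi-square test of independence on a 2x2 contingency table.
from scipy.stats import chi2_contingency

# rows: no hormone / hormone; cols: infection / no infection
table = [[78, 922],   # 7.8% of a hypothetical 1000 breasts
         [19, 481]]   # 3.8% of a hypothetical 500 breasts
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.2f}, p={p:.4f}")
```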

  2. Optimal thermographic procedures for moisture analysis in building materials

    Science.gov (United States)

    Rosina, Elisabetta; Ludwig, Nicola

    1999-09-01

    The presence of moisture in building materials causes damage second only to structural damage. Non-destructive testing (NDT) methods are successfully applied to map moisture distribution, localize the source of water, and determine microclimatic conditions. IR thermography has the advantage of being non-destructive while allowing large surfaces to be investigated, and the measurements can be repeated over time to monitor rising damp. Nevertheless, the investigation of moisture in walls is one of the less reliable applications of IR thermography in cultural heritage preservation. The temperature of damp areas can be colder than dry ones, because of surface evaporation, or warmer, because of the higher thermal inertia of the water content relative to the building materials. The apparent discrepancies between the two results are due to the different microclimatic conditions during scanning. The aim of the paper is to describe optimal procedures for obtaining reliable maps of moisture in building materials under different environmental and microclimatic conditions. Another goal is the description of the related energetic phenomena, which cause the temperature discontinuities that thermography detects. Active and passive procedures are presented and compared. Case studies show some examples of the application of these procedures.

  3. Statistical inference of Minimum Rank Factor Analysis

    NARCIS (Netherlands)

    Shapiro, A; Ten Berge, JMF

    2002-01-01

    For any given number of factors, Minimum Rank Factor Analysis yields optimal communalities for an observed covariance matrix in the sense that the unexplained common variance with that number of factors is minimized, subject to the constraint that both the diagonal matrix of unique variances and the reduced covariance matrix are positive semidefinite.

  5. A limited assessment of the ASEP human reliability analysis procedure using simulator examination results

    Energy Technology Data Exchange (ETDEWEB)

    Gore, B.R.; Dukelow, J.S. Jr.; Mitts, T.M.; Nicholson, W.L. [Pacific Northwest Lab., Richland, WA (United States)

    1995-10-01

    This report presents a limited assessment of the conservatism of the Accident Sequence Evaluation Program (ASEP) human reliability analysis (HRA) procedure described in NUREG/CR-4772. In particular, the ASEP post-accident, post-diagnosis, nominal HRA procedure is assessed within the context of an individual's performance of critical tasks on the simulator portion of requalification examinations administered to nuclear power plant operators. An assessment of the degree to which operator performance during simulator examinations accurately reflects operator performance during actual accident conditions was outside the scope of work for this project; therefore, no direct inference can be made from this report about such performance. The data for this study are derived from simulator examination reports from the NRC requalification examination cycle. A total of 4071 critical tasks were identified, of which 45 had been failed. The ASEP procedure was used to estimate human error probability (HEP) values for critical tasks, and the HEP results were compared with the failure rates observed in the examinations. The ASEP procedure was applied by PNL operator license examiners who supplemented the limited information in the examination reports with expert judgment based upon their extensive simulator examination experience. ASEP analyses were performed for a sample of 162 critical tasks selected randomly from the 4071, and the results were used to characterize the entire population. ASEP analyses were also performed for all of the 45 failed critical tasks. Two tests were performed to assess the bias of the ASEP HEPs compared with the data from the requalification examinations. The first compared the average of the ASEP HEP values with the fraction of the population actually failed and found a statistically significant factor-of-two bias on average.

  6. An Analysis of a Heuristic Procedure to Evaluate Tail (In)dependence

    Directory of Open Access Journals (Sweden)

    Marta Ferreira

    2014-01-01

    Measuring tail dependence is an important issue in many applied sciences in order to quantify the risk of simultaneous extreme events. A usual measure is given by the tail dependence coefficient. The characteristics of events behave quite differently as they become more extreme, depending on whether we are in the class of asymptotic dependence or the class of asymptotic independence. The literature has emphasized the asymptotically dependent class, but wrongly inferring tail dependence results in the overestimation of extreme value dependence and, consequently, of the risk. In this paper we analyze this issue through simulation based on a heuristic procedure.
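
    A common empirical version of the tail dependence coefficient underlying such heuristics is lambda(u) = P(X > q_u(X) | Y > q_u(Y)) evaluated as u approaches 1. The sketch below (not the paper's procedure) shows the overestimation pitfall on Gaussian-copula data, which is asymptotically independent yet yields positive estimates at any finite level.

```python
# Empirical upper tail dependence coefficient at level u.
import numpy as np

def empirical_tdc(x: np.ndarray, y: np.ndarray, u: float) -> float:
    qx, qy = np.quantile(x, u), np.quantile(y, u)
    joint = np.mean((x > qx) & (y > qy))
    return joint / (1.0 - u)

rng = np.random.default_rng(1)
# Bivariate Gaussian data: asymptotically independent, yet the naive
# finite-level estimate is positive and only slowly decays toward zero.
z = rng.multivariate_normal([0, 0], [[1, 0.8], [0.8, 1]], size=100_000)
for u in (0.90, 0.99, 0.999):
    print(u, round(empirical_tdc(z[:, 0], z[:, 1], u), 3))
```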

  7. An Equilibrium Analysis of Knaster’s Fair Division Procedure

    Directory of Open Access Journals (Sweden)

    Matt Van Essen

    2013-01-01

    In an incomplete information setting, we analyze the sealed bid auction proposed by Knaster (cf. Steinhaus, 1948). This procedure was designed to efficiently and fairly allocate multiple indivisible items when participants report their valuations truthfully. In equilibrium, players do not follow truthful bidding strategies. We find that, ex post, the equilibrium allocation is still efficient but may not be fair. However, on average, participants receive the same outcome they would have received if everyone had reported truthfully; i.e., the mechanism is ex ante fair.
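
    For reference, Knaster's procedure itself is easy to state in code. A sketch for a single indivisible item under truthful bids (the paper's equilibrium analysis concerns what happens when bids are not truthful):

```python
# Knaster's sealed-bid fair division for one indivisible item.
def knaster(bids: list[float]):
    n = len(bids)
    winner = max(range(n), key=lambda i: bids[i])
    # Each player is entitled to 1/n of her own valuation.
    fair_share = [b / n for b in bids]
    # Winner pays her valuation minus her fair share into a common pot.
    pot = bids[winner] - fair_share[winner]
    # Losers withdraw their fair shares; the surplus is split equally.
    surplus = pot - sum(fair_share[i] for i in range(n) if i != winner)
    bonus = surplus / n
    transfers = [fair_share[i] + bonus for i in range(n)]
    transfers[winner] = -(pot - bonus)  # net payment made by the winner
    return winner, transfers

winner, transfers = knaster([100.0, 60.0, 40.0])
print(winner, [round(t, 2) for t in transfers])
# Winner 0 keeps the item and pays 55.56; players 1 and 2 receive
# 31.11 and 24.44, so everyone ends up above their 1/n fair share.
```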

  8. Finite element procedure for stress amplification factor recovering in a representative volume of composite materials

    Directory of Open Access Journals (Sweden)

    Paulo Cesar Plaisant Junior

    2011-09-01

    Finite element models are proposed for the micromechanical analysis of a representative volume of composite materials. A detailed description of the meshes, boundary conditions, and loadings is presented. An illustrative application evaluates stress amplification factors within a representative volume of a unidirectional carbon fiber composite plate. The results are discussed and compared with previous numerical findings.

  9. Procedure for the Analysis of Repeatability and Reproducibility in Manufacturing Process

    Directory of Open Access Journals (Sweden)

    Gonzalo González Rey

    2015-12-01

    A procedure for the analysis of repeatability and reproducibility (R&R) conditions in a manufacturing system is presented. The analysis of repeatability and reproducibility is based on the measurement of dimensions of a manufactured part. The procedure follows the average and range method, which is widely accepted in measurement system analysis. The procedure and the results derived from the R&R analysis show that the average and range method can also be used to study the stability of manufacturing systems.
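
    A sketch of the average and range computation for a gauge R&R study. The K1/K2 constants are the usual AIAG values for two trials and two appraisers, and the measurements are simulated; this illustrates the method in general, not the paper's specific worked example.

```python
# Average and range (X-bar/R) gauge R&R: repeatability (EV),
# reproducibility (AV), and combined gauge variation (GRR).
import numpy as np

# data[appraiser, part, trial]: hypothetical measurements of 5 parts.
rng = np.random.default_rng(3)
true_part = rng.normal(10.0, 0.5, size=5)
data = true_part[None, :, None] + rng.normal(0, 0.05, size=(2, 5, 2))

K1, K2 = 0.8862, 0.7071                 # AIAG constants: 2 trials, 2 appraisers
n_appraisers, n_parts, n_trials = data.shape

# Repeatability (equipment variation): mean within-cell range across trials.
R_bar = np.mean(data.max(axis=2) - data.min(axis=2))
EV = R_bar * K1

# Reproducibility (appraiser variation): range of the appraiser means,
# corrected for the equipment variation contained in those means.
x_diff = data.mean(axis=(1, 2)).max() - data.mean(axis=(1, 2)).min()
AV = np.sqrt(max((x_diff * K2) ** 2 - EV**2 / (n_parts * n_trials), 0.0))

GRR = np.hypot(EV, AV)
print(f"EV={EV:.4f}  AV={AV:.4f}  GRR={GRR:.4f}")
```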

  10. Exploratory factor analysis and reliability analysis with missing data: A simple method for SPSS users

    Directory of Open Access Journals (Sweden)

    Bruce Weaver

    2014-09-01

    Missing data is a frequent problem for researchers conducting exploratory factor analysis (EFA) or reliability analysis. The SPSS FACTOR procedure allows users to select listwise deletion, pairwise deletion, or mean substitution as a method for dealing with missing data. The shortcomings of these methods are well known. Graham (2009) argues that a much better way to deal with missing data in this context is to use a matrix of expectation maximization (EM) covariances (or correlations) as input for the analysis. SPSS users who have the Missing Values Analysis add-on module can obtain vectors of EM means and standard deviations plus EM correlation and covariance matrices via the MVA procedure. But unfortunately, MVA has no /MATRIX subcommand, and therefore cannot write the EM correlations directly to a matrix dataset of the type needed as input to the FACTOR and RELIABILITY procedures. We describe two macros that (in conjunction with an intervening MVA command) carry out the data management steps needed to create two matrix datasets, one containing EM correlations and the other EM covariances. Either of those matrix datasets can then be used as input to the FACTOR procedure, and the EM correlations can also be used as input to RELIABILITY. We provide an example that illustrates the use of the two macros to generate the matrix datasets and how to use those datasets as input to the FACTOR and RELIABILITY procedures. We hope that this simple method for handling missing data will prove useful to both students and researchers who are conducting EFA or reliability analysis.
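
    A rough Python analogue of this workflow (not the SPSS macros): complete the data, form the correlation matrix, and factor it. Note that scikit-learn's IterativeImputer is chained-equations imputation, an EM-like but not identical substitute for the exact EM step SPSS MVA performs.

```python
# Estimate a correlation matrix under missing data, then factor-analyze it.
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

rng = np.random.default_rng(5)
cov = np.full((4, 4), 0.5) + 0.5 * np.eye(4)     # unit variances, r = .5
X = rng.multivariate_normal(np.zeros(4), cov, size=300)
X[rng.random(X.shape) < 0.1] = np.nan            # 10% missing at random

X_imp = IterativeImputer(random_state=0).fit_transform(X)
R = np.corrcoef(X_imp, rowvar=False)             # input matrix for EFA

# One-factor principal-axis solution from the completed correlation matrix.
vals, vecs = np.linalg.eigh(R)
loadings = vecs[:, -1] * np.sqrt(vals[-1])
print(np.round(loadings, 2))
```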

  11. A model for analysing factors which may influence quality management procedures in higher education

    Directory of Open Access Journals (Sweden)

    Cătălin MAICAN

    2015-12-01

    In all universities, the Office for Quality Assurance defines the procedure for assessing the performance of the teaching staff, with a view to establishing students' perception of the teachers' activity in terms of the quality of the teaching process, the relationship with the students, and the assistance provided for learning. The present paper aims at creating a combined model for evaluation based on data mining statistical methods: starting from the evaluations teachers gave their students and using cluster analysis and discriminant analysis, we identified the subjects that produced significant differences between students' grades; these subjects were subsequently evaluated by the students. The results of these analyses allowed the formulation of measures for enhancing the quality of the evaluation process.

  12. Kernel Factor Analysis Algorithm with Varimax

    Institute of Scientific and Technical Information of China (English)

    Xia Guoen; Jin Weidong; Zhang Gexiang

    2006-01-01

    Kernel factor analysis (KFA) with varimax rotation was proposed, using a Mercer kernel function to map the data in the original space to a high-dimensional feature space, and was compared with kernel principal component analysis (KPCA). The results show that the best error rate in handwritten digit recognition achieved by kernel factor analysis with varimax (4.2%) was superior to that of KPCA (4.4%). KFA with varimax thus recognized handwritten digits more accurately.
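
    Kernel factor analysis with varimax is not available in common libraries, but the shared first step, the kernel mapping into feature space, can be illustrated with scikit-learn's KernelPCA on digit data. This is a stand-in for the paper's method, with arbitrary hyperparameters, shown only to make the kernel-mapping idea concrete.

```python
# Kernel mapping + linear analysis in feature space, evaluated on digits.
from sklearn.datasets import load_digits
from sklearn.decomposition import KernelPCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

X, y = load_digits(return_X_y=True)
pipe = make_pipeline(
    KernelPCA(n_components=30, kernel="rbf", gamma=1e-3),
    LogisticRegression(max_iter=2000),
)
acc = cross_val_score(pipe, X, y, cv=3).mean()
print(f"accuracy {acc:.3f} (error rate {1 - acc:.3f})")
```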

  13. RHDM procedure for analysis of the potential specific risk due to a rockfall hazard

    Directory of Open Access Journals (Sweden)

    Blažo Đurović

    2005-06-01

    The theoretical basis and practical legislation (the Water Law and regulation acts) will in future allow the determination and classification of endangered territorial zones subject to various natural hazards, among them rock collapse and rockfall as forms of mass movement hazard. Interdisciplinary risk analysis, assessment, and management of natural hazards are factors of harmonious spatial development. Risk analysis in particular is an essential part of preventive mitigation actions and forms the basis for evaluating spatial plans, programs, and policies. In accordance with the basic principles of risk analysis, the Rockfall Hazard Determination Method (RHDM) for estimating the degree of potential specific risk due to a rockfall hazard along roadways and in the hinterland is introduced. The method is derived from the Rockfall Hazard Rating System (RHRS) and adjusted to a holistic concept of the risk analysis procedure. The outcomes of a phenomenon simulation with a computer programme for rock mass movement analysis at the local scale are included, as well as newly introduced climate and seismic condition criteria, making this method more adequate for the specific geologic conditions in Slovenia.

  14. Multiple factor analysis by example using R

    CERN Document Server

    Pagès, Jérôme

    2014-01-01

    Multiple factor analysis (MFA) enables users to analyze tables of individuals and variables in which the variables are structured into quantitative, qualitative, or mixed groups. Written by the co-developer of this methodology, Multiple Factor Analysis by Example Using R brings together the theoretical and methodological aspects of MFA. It also includes examples of applications and details of how to implement MFA using an R package (FactoMineR).The first two chapters cover the basic factorial analysis methods of principal component analysis (PCA) and multiple correspondence analysis (MCA). The

  15. A Primer on Bootstrap Factor Analysis as Applied to Health Studies Research

    Science.gov (United States)

    Lu, Wenhua; Miao, Jingang; McKyer, E. Lisako J.

    2014-01-01

    Objectives: To demonstrate how the bootstrap method could be conducted in exploratory factor analysis (EFA) with a syntax written in SPSS. Methods: The data obtained from the Texas Childhood Obesity Prevention Policy Evaluation project (T-COPPE project) were used for illustration. A 5-step procedure to conduct bootstrap factor analysis (BFA) was…

  16. Risk analysis using fuzzy set theory of the accidental exposure of medical staff during brachytherapy procedures.

    Science.gov (United States)

    Castiglia, F; Giardina, M; Tomarchio, E

    2010-03-01

    Using fuzzy set theory, this paper presents results from risk analyses that explore potential exposure of medical operators working in a high dose rate brachytherapy irradiation plant. In these analyses, the HEART methodology, a first generation method for human reliability analysis, has been employed to evaluate the probability of human error. This technique has been modified on the basis of fuzzy set concepts to take into account, more directly, the uncertainties of the so-called error-promoting factors on which the method is based. Moreover, with regard to some identified accident scenarios, fuzzy potential dose was also evaluated to estimate the relevant risk. The results also provide some recommendations for procedures and safety equipment to reduce the occurrence of radiological exposure accidents.

  17. Procedure for conducting a human-reliability analysis for nuclear power plants. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Bell, B.J.; Swain, A.D.

    1983-05-01

    This document describes in detail a procedure to be followed in conducting a human reliability analysis as part of a probabilistic risk assessment when such an analysis is performed according to the methods described in NUREG/CR-1278, Handbook for Human Reliability Analysis with Emphasis on Nuclear Power Plant Applications. An overview of the procedure describing the major elements of a human reliability analysis is presented along with a detailed description of each element and an example of an actual analysis. An appendix consists of some sample human reliability analysis problems for further study.

  18. A Bayesian multidimensional scaling procedure for the spatial analysis of revealed choice data

    NARCIS (Netherlands)

    DeSarbo, WS; Kim, Y; Fong, D

    1999-01-01

    We present a new Bayesian formulation of a vector multidimensional scaling procedure for the spatial analysis of binary choice data. The Gibbs sampler is gainfully employed to estimate the posterior distribution of the specified scalar products, bilinear model parameters. The computational procedure

  19. A HERMENEUTIC ANALYSIS OF THE NEW CIVIL PROCEDURE CODE ADVANCES

    Directory of Open Access Journals (Sweden)

    Lenio Luiz Streck

    2016-07-01

    I have never been at odds with CPC/15. Everything I wrote criticized procedural instrumentalism and its side effects, up to the point when the Rapporteur, Deputy Paulo Teixeira, courageously took up the thesis that there was something more to be addressed in the Project. This "more" concerned philosophical paradigms and the need to control judicial decisions. In any case, I believe that some guiding principles of the new code can be drawn from the project and its complexity, such as the need to maintain the coherence and integrity of case law (including precedents) and the prohibition of free judicial convincement, which implies a more limited role for the judge and the need to adopt the intersubjectivist paradigm; that is, the subjectivity of the judge should be suspended and controlled by the structuring intersubjectivity of law. This is the core of the new "system". Without understanding it, we run the risk of staging a reverse revolution. Gnosiological reasoning still seated in the objectivist or subjectivist paradigm (or its voluntarist vulgates) can quickly cause the downfall of a good idea.

  20. Analysis of generalized Schwarz alternating procedure for domain decomposition

    Energy Technology Data Exchange (ETDEWEB)

    Engquist, B.; Zhao, Hongkai [Univ. of California, Los Angeles, CA (United States)

    1996-12-31

    The Schwarz alternating method (SAM) is the theoretical basis for domain decomposition, which itself is a powerful tool both for parallel computation and for computing in complicated domains. The convergence rate of the classical SAM is very sensitive to the size of the overlap between subdomains, which is not desirable for most applications. We propose a generalized SAM procedure which is an extension of the modified SAM proposed by P.-L. Lions. Instead of using only Dirichlet data at the artificial boundary between subdomains, we take a convex combination of u and ∂u/∂n, i.e., ∂u/∂n + Λu, where Λ is some "positive" operator. Convergence of the modified SAM without overlapping in a quite general setting has been proven by P.-L. Lions using delicate energy estimates. Important questions remain for the generalized SAM. (1) What is the most essential mechanism for convergence without overlapping? (2) Given the partial differential equation, what is the best choice for the positive operator Λ? (3) In the overlapping case, is the generalized SAM superior to the classical SAM? (4) What is the convergence rate and what does it depend on? (5) Numerically, can we obtain an easy-to-implement operator Λ such that convergence is independent of the mesh size? To analyze the convergence of the generalized SAM we focus, for simplicity, on the Poisson equation for two typical geometries in the two-subdomain case.
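
    A toy version of the classical SAM (Dirichlet transmission data, the baseline the generalization modifies) for -u'' = 1 on (0, 1) with two overlapping subdomains; grid sizes and overlap are arbitrary choices for illustration.

```python
# Classical overlapping Schwarz alternating method for -u'' = 1 on (0,1),
# u(0) = u(1) = 0, exact solution u(x) = x(1-x)/2.
import numpy as np

N = 101
x = np.linspace(0.0, 1.0, N)
h = x[1] - x[0]
f = np.ones(N)
u = np.zeros(N)

def solve_dirichlet(i0, i1, left, right):
    """Solve -u'' = f on interior grid points i0+1..i1-1 with boundary data."""
    m = i1 - i0 - 1
    A = (np.diag(2.0 * np.ones(m)) - np.diag(np.ones(m - 1), 1)
         - np.diag(np.ones(m - 1), -1)) / h**2
    b = f[i0 + 1:i1].copy()
    b[0] += left / h**2
    b[-1] += right / h**2
    return np.linalg.solve(A, b)

a, b_ = 60, 40   # subdomain 1: [0, x[60]]; subdomain 2: [x[40], 1]; they overlap
for it in range(50):
    # Alternate sweeps, each using the other's latest trace as Dirichlet data.
    u[1:a] = solve_dirichlet(0, a, 0.0, u[a])
    u[b_ + 1:N - 1] = solve_dirichlet(b_, N - 1, u[b_], 0.0)

exact = 0.5 * x * (1.0 - x)
print("max error:", np.abs(u - exact).max())
```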

  1. PROC LCA: A SAS Procedure for Latent Class Analysis

    Science.gov (United States)

    Lanza, Stephanie T.; Collins, Linda M.; Lemmon, David R.; Schafer, Joseph L.

    2007-01-01

    Latent class analysis (LCA) is a statistical method used to identify a set of discrete, mutually exclusive latent classes of individuals based on their responses to a set of observed categorical variables. In multiple-group LCA, both the measurement part and structural part of the model can vary across groups, and measurement invariance across…

  3. A Suggested Stress Analysis Procedure For Nozzle To Head Shell Element Model – A Case Study

    Directory of Open Access Journals (Sweden)

    Sanket S. Chaudhari

    2012-08-01

    Stress analysis of pressure vessels has always been a serious and critical analysis. The paper performs a standard procedure of pressure vessel analysis and validation based on previous papers. It also identifies the most critical part and how it affects the entire structure. Relevant ASME norms (ASME, 2004, ASME Boiler and Pressure Vessel Code, Section VIII, Division 2, American Society of Mechanical Engineers, New York) are cited to explain the analysis procedure. The WRC (Welding Research Council) methodology is explained to validate the finite element analysis work.

  4. MARK I EXPERIMENTAL CORPUS AND DESCRIPTOR SET FOR THE STATISTICAL ASSOCIATION PROCEDURES FOR MESSAGE CONTENT ANALYSIS

    Science.gov (United States)

    This supplement to TDR-63-159 describes the corpus initially used for experiments with the Statistical Association Procedures for Message Content Analysis program. The ASTIA TABs served as the source from which the initial experimental corpus was drawn.

  5. Theoretical Analysis and Simulation of Jacking Procedure of Pantadome System

    Institute of Scientific and Technical Information of China (English)

    WANG Xiaodun; SHI Yongjiu; WANG Yuanqing; Kawaguchi Mamoru

    2005-01-01

    In order to establish the principle of the Pantadome lifting process and lay a theoretical foundation for practical applications, the core idea of the Pantadome was introduced, which is to make a structure become a mechanism by temporarily removing some members during the process of construction. An abstract motion model was built. By determining the change of the coordinates of the hinge joints and of each point of the structure, a simulative analysis of the mechanical motion of the Pantadome was realized. A general program simulating the lifting process of the Pantadome was then developed in the AutoCAD environment using the AutoLISP language. By completing the theoretical analysis of the lifting process, a three-dimensional simulation of the lifting process of the Pantadome was realized, and it has been successfully applied to bidding work in practical engineering.

  6. Pragmatic evaluation of the Toyota Production System (TPS analysis procedure for problem solving with entry-level nurses

    Directory of Open Access Journals (Sweden)

    Lukasz Maciej Mazur

    2008-12-01

    Medication errors occurring in hospitals are a growing national concern. These medication errors and their related costs (or wastes) are seen as major factors leading to increased patient safety risks and increased waste in the hospital setting. This article presents a study in which sixteen entry-level nurses utilized a Toyota Production System (TPS) analysis procedure to solve medication delivery problems at one community hospital. The objective of this research was to study and evaluate the TPS analysis procedure for problem solving with entry-level nurses. Personal journals, focus group discussions, and a survey study were used to collect data about entry-level nurses' perceptions of using the TPS problem solving approach to study medication delivery. A regression analysis was used to identify characteristics that enhance problem solving efforts. In addition, propositions for effective problem solving by entry-level nurses to aid in the reduction of medication errors in healthcare delivery settings are offered.

  7. Unconditionally stable concurrent procedures for transient finite-element analysis

    Science.gov (United States)

    Ortiz, Michael; Nour-Omid, Bahram

    1989-01-01

    A family of algorithms is outlined which appears to be particularly well-suited for implementation in a parallel environment, because for any partition of the mesh each subdomain in the partition can be processed over a time step simultaneously and independently of the rest. The method eliminates the need for assembling and factorizing large global arrays while retaining the unconditional stability properties of the algorithms used at the local level. To appraise the proposed methodology critically, two limiting cases were considered: element-by-element mesh partitions and coarse mesh partitions. It was concluded that while the proposed methodology can be useful on sequential machines, its main promise lies in parallel computation. It should also be emphasized that extensions of the method to nonlinear problems are possible.

  8. Transport Simulation Model Calibration with Two-Step Cluster Analysis Procedure

    Directory of Open Access Journals (Sweden)

    Zenina Nadezda

    2015-12-01

    The calibration results of a transport simulation model depend on the selected parameters and their values. The aim of the present paper is to calibrate a transport simulation model by a two-step cluster analysis procedure to improve the reliability of the simulation model results. Two global parameters have been considered: headway and simulation step. Normal, uniform, and exponential headway generation models have been considered for the headway. Applying the two-step cluster analysis procedure to calibration reduced the time needed to select the simulation step and the headway generation model values.

  9. DoD Cost Analysis Guidance and Procedures

    Science.gov (United States)

    1992-12-01

    Tables: 2-1, Cost Analysis Improvement Group (CAIG) Timetable; 2-11, Defense Acquisition Program Life-Cycle Cost. 1.1.3 System Configuration. This section identifies the equipment (hardware and software) work breakdown structure (WBS) for the system and its relationship to other systems. Government-furnished commercial off-the-shelf (COTS) software should be addressed in the discussion. Where Government-furnished equipment or property is

  10. Kernel parameter dependence in spatial factor analysis

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg

    2010-01-01

    Kernel methods map the data in the original space into a feature space via the kernel function and then perform a linear analysis in that space. In this paper we apply a kernel version of maximum autocorrelation factor (MAF) analysis [7, 8] to irregularly sampled stream sediment geochemistry data from South Greenland and illustrate the dependence of the results on the kernel parameter.

  11. Multistructure Statistical Model Applied To Factor Analysis

    Science.gov (United States)

    Bentler, Peter M.

    1976-01-01

    A general statistical model for the multivariate analysis of mean and covariance structures is described. Matrix calculus is used to develop the statistical aspects of one new special case in detail. This special case separates the confounding of principal components and factor analysis. (DEP)

  12. A procedure for seiche analysis with Bayesian information criterion

    Science.gov (United States)

    Aichi, Masaatsu

    2016-04-01

    A seiche is a standing wave in an enclosed or semi-enclosed water body. Its amplitude changes irregularly in time due to weather conditions and other factors, so extracting the seiche signal is not easy with the usual methods for time series analysis, such as the fast Fourier transform (FFT). In this study, a new method for time series analysis with a Bayesian information criterion was developed to decompose seiche, tide, long-term trend, and residual components from the time series data of tide stations. The method was developed based on the maximum marginal likelihood estimation of tide amplitudes, seiche amplitude, and trend components. The seiche amplitude and trend components were assumed to change gradually, with second derivatives in time close to zero. These assumptions were incorporated as prior distributions, and the variances of the prior distributions were estimated by minimizing the Akaike-Bayes information criterion (ABIC). The frequency of the seiche was determined by Newton's method with an initial guess from the FFT. The accuracy of the proposed method was checked by analyzing synthetic time series data composed of known components; the reproducibility of the original components was quite good. The proposed method was also applied to actual time series data of sea level observed by a tide station and to the strain of coastal rock masses observed by a fiber Bragg grating sensor in Aburatsubo Bay, Japan. The seiche in the bay and the response of the rock masses were successfully extracted.
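
    A simplified sketch of the pipeline described above: an FFT peak supplies the initial seiche frequency, and a least-squares fit separates trend, tide-like, and seiche harmonics. The full method instead refines the frequency with Newton's method and selects smoothness priors by ABIC; the frequencies and amplitudes below are synthetic.

```python
# Detect a seiche frequency from an FFT peak, then fit trend + tide + seiche.
import numpy as np

rng = np.random.default_rng(2)
dt = 60.0                                    # 1-minute sampling, in seconds
t = np.arange(0, 3 * 86400, dt)              # three days of record
f_seiche = 1 / 1800.0                        # 30-minute seiche period
signal = (0.10 * np.sin(2 * np.pi * f_seiche * t)     # seiche
          + 0.50 * np.sin(2 * np.pi * t / 44712.0)    # semidiurnal tide-like
          + 1e-6 * t                                   # slow trend
          + 0.02 * rng.standard_normal(t.size))

# FFT initial guess for the seiche frequency (ignore low tidal frequencies).
freqs = np.fft.rfftfreq(t.size, d=dt)
spec = np.abs(np.fft.rfft(signal - signal.mean()))
band = freqs > 1 / 7200.0                    # look above the 2-hour period
f_hat = freqs[band][np.argmax(spec[band])]

# Least squares: constant, trend, tide harmonics, seiche harmonics at f_hat.
A = np.column_stack([
    np.ones_like(t), t,
    np.sin(2 * np.pi * t / 44712.0), np.cos(2 * np.pi * t / 44712.0),
    np.sin(2 * np.pi * f_hat * t), np.cos(2 * np.pi * f_hat * t),
])
coef, *_ = np.linalg.lstsq(A, signal, rcond=None)
amp = np.hypot(coef[4], coef[5])
print(f"detected period {1/f_hat:.0f} s, seiche amplitude {amp:.3f} m")
```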

  13. Sensitivity analysis of standardization procedures in drought indices to varied input data selections

    Science.gov (United States)

    Liu, Yi; Ren, Liliang; Hong, Yang; Zhu, Ye; Yang, Xiaoli; Yuan, Fei; Jiang, Shanhu

    2016-07-01

    Reasonable input data selection is of great significance for the accurate computation of drought indices. In this study, a comprehensive comparison is conducted of the sensitivity of two commonly used standardization procedures (SPs) in drought indices to input datasets, namely the probability distribution based SP and the self-calibrating Palmer SP. The standardized Palmer drought index (SPDI) and the self-calibrating Palmer drought severity index (SC-PDSI) are selected as representatives of the two SPs, respectively. Using meteorological observations (1961-2012) in the Yellow River basin, 23 sub-datasets with a length of 30 years are first generated with the moving window method. Then the whole time series and the 23 sub-datasets are used to compute the two indices separately, comparing their spatiotemporal differences as well as their performance in capturing drought areas. Finally, a systematic investigation in terms of changing climatic conditions and the varied parameters in each SP is conducted. Results show that SPDI is less sensitive to data selection than SC-PDSI. SPDI series derived from different datasets are highly correlated and consistent in drought area characterization. Sensitivity analysis shows that among the three parameters of the generalized extreme value (GEV) distribution, SPDI is most sensitive to changes in the scale parameter, followed by the location and shape parameters. For SC-PDSI, its inconsistent behavior among different datasets is primarily induced by the self-calibrated duration factors (p and q). In addition, it is found that the introduction of the self-calibrating procedure for the duration factors further aggravates the dependence of the drought index on input datasets compared with the original empirical algorithm that Palmer used, making SC-PDSI more sensitive to variations in the data sample. This study clearly demonstrates the impacts of dataset selection on the sensitivity of drought index computation, which has significant implications for the proper usage of drought

  14. Epilepsy Surgery: Factors That Affect Patient Decision-Making in Choosing or Deferring a Procedure

    Directory of Open Access Journals (Sweden)

    Christopher Todd Anderson

    2013-01-01

    Surgical resection for well-selected patients with refractory epilepsy provides seizure freedom approximately two-thirds of the time. Despite this, many good candidates for surgery, after a presurgical workup, ultimately do not consent to a procedure. The reasons why patients decline potentially effective surgery are not completely understood. We explored the sociocultural, medical, personal, and psychological differences between candidates who chose (n = 23) and those who declined surgical intervention (n = 9). We created a novel questionnaire addressing a range of possible factors important in patient decision making. We found that patients who declined surgery were less bothered by their epilepsy (despite comparable severity), more anxious about surgery, and less likely to listen to their doctors (and others), and had more comorbid psychiatric disease. Patients who chose surgery were more embarrassed by their seizures, more interested in being “seizure-free”, and less anxious about specific aspects of surgery. Patient attitudes, beliefs, and anxiety serve as barriers to ideal care. These results can provide opportunities for education, treatment, and intervention. Additionally, patients who fit the profile of someone likely to defer surgery may not be appropriate for risky and expensive presurgical testing.

  15. Epilepsy surgery: factors that affect patient decision-making in choosing or deferring a procedure.

    Science.gov (United States)

    Anderson, Christopher Todd; Noble, Eva; Mani, Ram; Lawler, Kathy; Pollard, John R

    2013-01-01

    Surgical resection for well-selected patients with refractory epilepsy provides seizure freedom approximately two-thirds of the time. Despite this, many good candidates for surgery, after a presurgical workup, ultimately do not consent to a procedure. The reasons why patients decline potentially effective surgery are not completely understood. We explored the sociocultural, medical, personal, and psychological differences between candidates who chose (n = 23) and those who declined surgical intervention (n = 9). We created a novel questionnaire addressing a range of possible factors important in patient decision making. We found that patients who declined surgery were less bothered by their epilepsy (despite comparable severity), more anxious about surgery, and less likely to listen to their doctors (and others), and had more comorbid psychiatric disease. Patients who chose surgery were more embarrassed by their seizures, more interested in being "seizure-free", and less anxious about specific aspects of surgery. Patient attitudes, beliefs, and anxiety serve as barriers to ideal care. These results can provide opportunities for education, treatment, and intervention. Additionally, patients who fit the profile of someone likely to defer surgery may not be appropriate for risky and expensive presurgical testing.

  17. Task Analysis of Emergency Operating Procedures for Generating Quantitative HRA Data

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Yochan; Park, Jinkyun; Kim, Seunghwan; Choi, Sun Yeong; Jung, Wondea; Jang, Inseok [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2015-10-15

    In this paper, the analysis results for emergency tasks in emergency operating procedures (EOPs), as observed from simulator data, are introduced. The task type, component type, system type, and additional information related to operator performance are described. In addition, a prospective application of the analyzed information to the HEP quantification process is discussed. In the probabilistic safety analysis (PSA) field, various human reliability analyses (HRAs) have been performed to produce estimates of human error probabilities (HEPs) for significant tasks in complex socio-technical systems. To this end, many HRA methods have provided basic or nominal HEPs for typical tasks together with quantitative relations describing how a certain performance context or performance shaping factors (PSFs) affect the HEPs. In the HRA community, however, the need for appropriate and sufficient human performance data has recently been emphasized, because a wide range of quantitative estimates in previous HRA methods are not supported by solid empirical bases. Hence, there have been attempts to collect HRA supporting data. For example, KAERI has started to collect information on both unsafe acts of operators and the relevant PSFs. A characteristic of the database being developed at KAERI is that human errors and related PSF surrogates that are objectively observable are collected from full-scope simulator experiences. In this environment, to produce concretely grounded bases for the HEPs, the traits or attributes of tasks where significant human errors can be observed should be clearly determined, and the determined traits should allow the HEPs to be compared with the data in previous HRA methods or databases. In this study, task characteristics in a Westinghouse type of EOPs were analyzed by defining task, component, and system taxonomies.

  18. Patients' perceptions of palliative surgical procedures: a qualitative analysis.

    Science.gov (United States)

    Hamilton, Trevor D; Selby, Debbie; Tsang, Melanie E; Kim, Audrey; Wright, Frances C

    2017-08-01

    Patients with incurable malignancies can require surgical intervention. We prospectively evaluated patients treated with palliative surgery to qualitatively assess peri-operative outcomes. Eligible patients were assessed at a tertiary care cancer center. Demographic information and peri-operative morbidity and mortality were collected. Semi-structured qualitative interviews were conducted pre-operatively and post-operatively (1 month). Qualitative evaluation was performed using content analysis and an inductive approach. Twenty-eight patients were approached and 20 consented to interview. Data saturation was achieved after 14 patients. Median patient age was 58 years, and 56% of patients were female. Peri-operative morbidity and mortality were 44% and 22%, respectively. "No other option" was a dominant pre-operative theme (14 of 18). Other pre-operative themes included a "poor understanding of prognosis and the role of surgery in the overall treatment plan". Post-operative themes included a "perceived benefit from surgery" and "satisfaction with decision-making", notwithstanding significant complications. Improved understanding of prognosis and the role of surgery was described post-operatively. Despite limited options and a poor understanding of prognosis, many patients perceived benefit from palliative surgery. However, peri-operative mortality was substantial. A robust and thorough patient-centered discussion about individual goals for surgery should be undertaken by the surgeon, patient, and family prior to embarking on a palliative operation.

  19. Performance analysis of new word weighting procedures for opinion mining

    Institute of Scientific and Technical Information of China (English)

    G. R. BRINDHA; P. SWAMINATHAN; B. SANTHI

    2016-01-01

    The proliferation of forums and blogs leads to challenges and opportunities for processing large amounts of information. The information shared on various topics often contains opinionated words which are qualitative in nature. These qualitative words need statistical computations to convert them into useful quantitative data, and this data should be processed properly since it expresses opinions. Each of these opinion-bearing words differs based on the significant meaning it conveys. To process the linguistic meaning of words into data and to enhance opinion mining analysis, we propose a novel weighting scheme, referred to as inferred word weighting (IWW). IWW is computed based on the significance of the word in the document (SWD) and the significance of the word in the expression (SWE). The proposed weighting methods give an analytic view and provide appropriate weights to the words compared to existing methods. In addition to the new weighting methods, the performance of text classification is also checked with stop-words included. Generally, stop-words are removed in text processing. When this new concept of including stop-words is applied to the proposed and existing weighting methods, two facts are observed: (1) classification performance is enhanced; (2) the outcome difference between inclusion and exclusion of stop-words is smaller with the proposed methods and larger with existing methods. The inferences provided by these observations are discussed. Experimental results on benchmark data sets show the potential enhancement in terms of classification accuracy.

  20. Replica Analysis for Portfolio Optimization with Single-Factor Model

    Science.gov (United States)

    Shinzato, Takashi

    2017-06-01

    In this paper, we use replica analysis to investigate the influence of correlation among the return rates of assets on the solution of the portfolio optimization problem. We consider the behavior of the optimal solution when the return rates are described by a single-factor model and compare the findings obtained with correlated return rates to those obtained with independent return rates. We then analytically assess the increase in the investment risk when correlation is included. Furthermore, we compare our approach with analytical procedures from operations research for minimizing the investment risk.
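
    A Monte Carlo counterpart (not replica analysis) of the comparison just described: simulate return rates from a single-factor model and from an independent model, then compare the variance of an equal-weight portfolio. All parameter values are illustrative assumptions.

```python
# Single-factor return rates r_i = beta_i * f + eps_i versus independent
# returns, and the resulting equal-weight portfolio variance.
import numpy as np

rng = np.random.default_rng(4)
N, T = 200, 5000                      # assets, periods
beta = rng.normal(1.0, 0.3, N)

common = rng.standard_normal(T)       # the single common factor f
eps = rng.standard_normal((T, N))
r_factor = common[:, None] * beta[None, :] + eps       # correlated returns
r_indep = rng.standard_normal((T, N))                  # independent benchmark

w = np.full(N, 1.0 / N)               # equal-weight portfolio
for name, r in [("single-factor", r_factor), ("independent", r_indep)]:
    risk = w @ np.cov(r, rowvar=False) @ w
    print(f"{name:>13}: portfolio variance {risk:.4f}")
# The common factor keeps portfolio variance from diversifying away,
# which is the increase in investment risk the paper quantifies.
```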

  1. Analysis of Economic Factors Affecting Stock Market

    OpenAIRE

    Xie, Linyin

    2010-01-01

    This dissertation concentrates on the analysis of economic factors affecting the Chinese stock market by examining the relationship between the stock market index and economic factors. Six economic variables are examined: industrial production, money supply 1, money supply 2, exchange rate, long-term government bond yield, and real estate total value. The stock market comprises fixed-interest stocks and equity shares; in this dissertation, the stock market is restricted to the equity market. The stock price in thi...

  2. HUMAN RELIABILITY ANALYSIS FOR COMPUTERIZED PROCEDURES, PART TWO: APPLICABILITY OF CURRENT METHODS

    Energy Technology Data Exchange (ETDEWEB)

    Ronald L. Boring; David I. Gertman

    2012-10-01

    Computerized procedures (CPs) are an emerging technology within nuclear power plant control rooms. While CPs have been implemented internationally in advanced control rooms, to date no U.S. nuclear power plant has implemented CPs in its main control room. Yet, CPs are a reality of new plant builds and are an area of considerable interest to existing plants, which see advantages in terms of easier records management by omitting the need for updating hardcopy procedures. The overall intent of this paper is to provide a characterization of human reliability analysis (HRA) issues for computerized procedures. It is beyond the scope of this document to propose a new HRA approach or to recommend specific methods or refinements to those methods. Rather, this paper serves as a review of current HRA as it may be used for the analysis and review of computerized procedures.

  3. CONSIDERATIONS FOR THE TREATMENT OF COMPUTERIZED PROCEDURES IN HUMAN RELIABILITY ANALYSIS

    Energy Technology Data Exchange (ETDEWEB)

    Ronald L. Boring; David I. Gertman

    2012-07-01

    Computerized procedures (CPs) are an emerging technology within nuclear power plant control rooms. While CPs have been implemented internationally in advanced control rooms, to date no US nuclear power plant has implemented CPs in its main control room. Yet, CPs are a reality of new plant builds and are an area of considerable interest to existing plants, which see advantages in terms of easier records management by omitting the need for updating hardcopy procedures. The overall intent of this paper is to provide a characterization of human reliability analysis (HRA) issues for computerized procedures. It is beyond the scope of this document to propose a new HRA approach or to recommend specific methods or refinements to those methods. Rather, this paper serves as a review of current HRA as it may be used for the analysis and review of computerized procedures.

  4. Qualitative content analysis in nursing research: concepts, procedures and measures to achieve trustworthiness.

    Science.gov (United States)

    Graneheim, U H; Lundman, B

    2004-02-01

    Qualitative content analysis as described in published literature shows conflicting opinions and unsolved issues regarding meaning and use of concepts, procedures and interpretation. This paper provides an overview of important concepts (manifest and latent content, unit of analysis, meaning unit, condensation, abstraction, content area, code, category and theme) related to qualitative content analysis; illustrates the use of concepts related to the research procedure; and proposes measures to achieve trustworthiness (credibility, dependability and transferability) throughout the steps of the research procedure. Interpretation in qualitative content analysis is discussed in light of Watzlawick et al.'s [Pragmatics of Human Communication. A Study of Interactional Patterns, Pathologies and Paradoxes. W.W. Norton & Company, New York, London] theory of communication.

  5. Multiple Group Testing Procedures for Analysis of High-Dimensional Genomic Data

    Science.gov (United States)

    Ko, Hyoseok; Kim, Kipoong

    2016-01-01

    In genetic association studies with high-dimensional genomic data, multiple group testing procedures are often required in order to identify disease/trait-related genes or genetic regions, where multiple genetic sites or variants are located within the same gene or genetic region. However, statistical testing procedures based on individual tests suffer from multiple testing issues such as the control of family-wise error rate and dependent tests. Moreover, detecting only a few genes associated with a phenotype outcome among tens of thousands of genes is of main interest in genetic association studies. For this reason, regularization procedures, in which a phenotype outcome is regressed on all genomic markers and the regression coefficients are then estimated based on a penalized likelihood, have been considered a good alternative approach to the analysis of high-dimensional genomic data. However, the selection performance of regularization procedures has rarely been compared with that of statistical group testing procedures. In this article, we performed extensive simulation studies in which commonly used group testing procedures such as principal component analysis, Hotelling's T2 test, and the permutation test are compared with the group lasso (least absolute shrinkage and selection operator) in terms of true positive selection. We also applied all methods considered in the simulation studies to identify genes associated with ovarian cancer from over 20,000 genetic sites generated from the Illumina Infinium HumanMethylation27K BeadChip. We found a large discrepancy between the genes selected by the multiple group testing procedures and those selected by the group lasso.
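
    One of the group tests in this comparison, the two-sample Hotelling's T2 for all sites in a gene at once, can be sketched directly; the data below are simulated stand-ins for methylation values, not the study's data.

```python
# Two-sample Hotelling's T^2 test for a multi-site gene (cases vs controls).
import numpy as np
from scipy.stats import f as f_dist

def hotelling_t2(X1: np.ndarray, X2: np.ndarray):
    n1, p = X1.shape
    n2 = X2.shape[0]
    d = X1.mean(axis=0) - X2.mean(axis=0)
    # Pooled within-group covariance.
    S = ((n1 - 1) * np.cov(X1, rowvar=False)
         + (n2 - 1) * np.cov(X2, rowvar=False)) / (n1 + n2 - 2)
    t2 = (n1 * n2) / (n1 + n2) * d @ np.linalg.solve(S, d)
    # Exact F reference distribution for T^2 under normality.
    F = (n1 + n2 - p - 1) / (p * (n1 + n2 - 2)) * t2
    pval = f_dist.sf(F, p, n1 + n2 - p - 1)
    return t2, pval

rng = np.random.default_rng(6)
cases = rng.normal(0.2, 1.0, size=(60, 5))      # 5 sites within one gene
controls = rng.normal(0.0, 1.0, size=(60, 5))
print(hotelling_t2(cases, controls))
```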

  6. 7 CFR 201.51a - Special procedures for purity analysis.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 3 2010-01-01 2010-01-01 false Special procedures for purity analysis. 201.51a... MARKETING SERVICE (Standards, Inspections, Marketing Practices), DEPARTMENT OF AGRICULTURE (CONTINUED) FEDERAL SEED ACT FEDERAL SEED ACT REGULATIONS Purity Analysis in the Administration of the Act §...

  7. Statistical testing procedure for the interaction effects of several controllable factors in two-valued input-output systems

    OpenAIRE

    Aoki, Satoshi; Miyakawa, Masami

    2007-01-01

    Suppose several two-valued input-output systems are designed by setting the levels of several controllable factors. For this situation, the Taguchi method proposes assigning the controllable factors to an orthogonal array and using an ANOVA model for the standardized SN ratio, which is a natural measure for evaluating the performance of each input-output system. Though this procedure is simple and useful in application, the result can be unreliable when the estimated standard errors of the...

  8. What Is Rotating in Exploratory Factor Analysis?

    Science.gov (United States)

    Osborne, Jason W.

    2015-01-01

    Exploratory factor analysis (EFA) is one of the most commonly reported quantitative methods in the social sciences, yet much of the detail regarding what happens during an EFA remains unclear. The goal of this brief technical note is to explore what "rotation" is, what exactly is rotating, and why we use rotation when performing…
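
    For concreteness, the varimax criterion usually optimized during rotation can be implemented in a few lines. The sketch below is Kaiser's SVD-based algorithm written from the textbook definition (without the row normalization some packages apply); the sample loading matrix is invented.

```python
# Varimax rotation of a factor loading matrix.
import numpy as np

def varimax(L: np.ndarray, tol: float = 1e-8, max_iter: int = 500) -> np.ndarray:
    """Rotate loadings L (p x k) to maximize the varimax criterion."""
    p, k = L.shape
    R = np.eye(k)
    crit_old = 0.0
    for _ in range(max_iter):
        LR = L @ R
        # Gradient of the varimax criterion; the SVD yields the next rotation.
        G = L.T @ (LR**3 - LR @ np.diag(np.mean(LR**2, axis=0)))
        u, s, vt = np.linalg.svd(G)
        R = u @ vt
        if s.sum() < crit_old * (1 + tol):
            break
        crit_old = s.sum()
    return L @ R

L = np.array([[0.7, 0.3], [0.8, 0.2], [0.3, 0.7], [0.2, 0.8]])
print(np.round(varimax(L), 2))  # loadings concentrate on one factor each
```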

  9. Multilevel exploratory factor analysis of discrete data

    NARCIS (Netherlands)

    Barendse, M.T.; Oort, F.J.; Jak, S.; Timmerman, M.E.

    2013-01-01

    Exploratory factor analysis (EFA) can be used to determine the dimensionality of a set of items. When data come from clustered subjects, such as pupils within schools or children within families, the hierarchical structure of the data should be taken into account. Standard multilevel EFA is only sui…

  10. An SPSS R-Menu for Ordinal Factor Analysis

    Directory of Open Access Journals (Sweden)

    Mario Basto

    2012-01-01

    Full Text Available Exploratory factor analysis is a widely used statistical technique in the social sciences. It attempts to identify underlying factors that explain the pattern of correlations within a set of observed variables. A statistical software package is needed to perform the calculations. However, there are some limitations with popular statistical software packages, like SPSS. The R programming language is a free software package for statistical and graphical computing. It offers many packages written by contributors from all over the world and programming resources that allow it to overcome the dialog limitations of SPSS. This paper offers an SPSS dialog written in the R programming language with the help of some packages, so that researchers with little or no knowledge of programming, or those who are accustomed to making their calculations based on statistical dialogs, have more options when applying factor analysis to their data and hence can adopt a better approach when dealing with ordinal, Likert-type data.

  11. ANALYSIS OF THE FACTORS AFFECTING THE AVERAGE WORK PRODUCTIVITY

    Directory of Open Access Journals (Sweden)

    Carmen BOGHEAN

    2013-12-01

    Full Text Available Productivity in agriculture most relevantly and concisely expresses the economic efficiency of using the factors of production. Labour productivity is affected by a considerable number of variables (including the system of relationships and interdependences between factors), which differ in each economic sector, giving rise to a series of technical, economic and organizational idiosyncrasies. The purpose of this paper is to analyse the factors underlying the average work productivity in agriculture, forestry and fishing. The analysis takes into account data on the economically active population and the gross added value in agriculture, forestry and fishing in Romania during 2008-2011. The breakdown of average work productivity by the factors affecting it is carried out by means of the u-substitution method.

  12. Role of laser irradiation in direct pulp capping procedures: a systematic review and meta-analysis.

    Science.gov (United States)

    Javed, Fawad; Kellesarian, Sergio Varela; Abduljabbar, Tariq; Gholamiazizi, Elham; Feng, Changyong; Aldosary, Khaled; Vohra, Fahim; Romanos, Georgios E

    2017-02-01

    A variety of materials are available to treat exposed dental pulp by direct pulp capping. The healing response of the pulp is crucial to form a dentin bridge and seal off the exposed pulp. Studies have used lasers to stimulate the exposed pulp to form tertiary dentin. The aim of the present systematic review and meta-analysis was to evaluate the evidence on the effects of laser irradiation as an adjunctive therapy to stimulate healing after pulp exposure. A systematic literature search was conducted up to April 2016. A structured search using the keywords "Direct pulp capping," "Lasers," "Calcium hydroxide pulp capping," and "Resin pulp capping" was performed. Initially, 34 potentially relevant articles were identified. After removal of duplicates and screening by title, abstract, and full text when necessary, nine studies were included. Studies were assessed for bias and data were synthesized using a random-effects meta-analysis model. Six studies were clinical, and three were preclinical animal trials; the follow-up period ranged from 2 weeks to 54 months. More than two thirds of the included studies showed that laser therapy used as an adjunct for direct pulp capping was more effective in maintaining pulp vitality than conventional therapy alone. Meta-analysis showed that the success rate in the laser treatment group was significantly higher than in the control group (log odds ratio = 1.737; 95% confidence interval, 1.304-2.171). Laser treatment of exposed pulps can improve the outcome of direct pulp capping procedures; a number of confounding factors may have influenced the outcomes of the included studies.
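
    The pooled estimate quoted above (a log odds ratio with a 95% confidence interval) is the typical output of a random-effects model. Below is a minimal sketch of DerSimonian-Laird random-effects pooling of log odds ratios in Python; the per-study values are hypothetical placeholders, not the review's data.

```python
# Minimal sketch of DerSimonian-Laird random-effects pooling of log odds
# ratios. Study values are hypothetical placeholders.
import numpy as np

log_or = np.array([1.2, 2.0, 1.5, 1.9])      # per-study log odds ratios
var = np.array([0.30, 0.45, 0.25, 0.50])     # per-study variances

w = 1.0 / var                                 # fixed-effect weights
theta_fe = np.sum(w * log_or) / np.sum(w)
q = np.sum(w * (log_or - theta_fe) ** 2)      # Cochran's Q
df = len(log_or) - 1
c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (q - df) / c)                 # between-study variance

w_re = 1.0 / (var + tau2)                     # random-effects weights
theta_re = np.sum(w_re * log_or) / np.sum(w_re)
se = np.sqrt(1.0 / np.sum(w_re))
print(f"pooled log OR = {theta_re:.3f}, 95% CI = "
      f"({theta_re - 1.96 * se:.3f}, {theta_re + 1.96 * se:.3f})")
```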

  13. Reliability assessment of a manual-based procedure towards learning curve modeling and FMEA analysis

    Directory of Open Access Journals (Sweden)

    Gustavo Rech

    2013-03-01

    Full Text Available Separation procedures in drug distribution centers (DCs) are manual-based activities prone to failures such as shipping the wrong, expired or broken drugs to the customer. Two interventions seem promising for improving the reliability of the separation procedure: (i) selection and allocation of appropriate operators to the procedure, and (ii) analysis of the potential failure modes incurred by the selected operators. This article integrates learning curves (LC) and FMEA (Failure Mode and Effect Analysis) with the aim of reducing the occurrence of failures in the manual separation of a drug DC. LC parameters enable the generation of an index to identify the operators best suited to perform the procedures. FMEA is then applied to the separation procedure carried out by the selected operators in order to identify failure modes. The traditional FMEA severity index is also deployed into two sub-indexes, related to financial losses and to damage to the company's image, in order to characterize failure severity. When applied to a drug DC, the proposed method significantly reduced the frequency and severity of failures in the separation procedure.
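
    To make the severity-splitting idea concrete, here is a minimal sketch of a risk priority number (RPN) computation in which the severity score is the average of financial and image sub-indexes; the failure modes, scores, and the choice of averaging are hypothetical illustrations, not the paper's calibrated values.

```python
# Minimal sketch of an FMEA RPN with severity split into financial and
# image sub-indexes. All failure modes and scores are hypothetical.
from dataclasses import dataclass

@dataclass
class FailureMode:
    name: str
    severity_financial: int   # 1-10
    severity_image: int       # 1-10
    occurrence: int           # 1-10
    detection: int            # 1-10 (10 = hardest to detect)

    def rpn(self) -> float:
        severity = (self.severity_financial + self.severity_image) / 2
        return severity * self.occurrence * self.detection

modes = [
    FailureMode("expired drug shipped", 7, 9, 3, 6),
    FailureMode("wrong drug shipped",   8, 8, 4, 5),
    FailureMode("broken drug shipped",  5, 6, 5, 4),
]
for m in sorted(modes, key=lambda m: m.rpn(), reverse=True):
    print(f"{m.name}: RPN = {m.rpn():.0f}")
```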

  14. A reliable procedure for the analysis of multiexponential transients that arise in deep level transient spectroscopy

    Energy Technology Data Exchange (ETDEWEB)

    Hanine, M. [Laboratoire Electronique Microtechnologie et Instrumentation (LEMI), University of Rouen, 76821 Mont Saint Aignan (France)]. E-mail: Mounir.Hanine@univ-rouen.fr; Masmoudi, M. [Laboratoire Electronique Microtechnologie et Instrumentation (LEMI), University of Rouen, 76821 Mont Saint Aignan (France); Marcon, J. [Laboratoire Electronique Microtechnologie et Instrumentation (LEMI), University of Rouen, 76821 Mont Saint Aignan (France)

    2004-12-15

    This paper details a reliable procedure that allows a fine as well as robust analysis of deep defects in semiconductors. In this procedure, where capacitance transients are considered multiexponential and corrupted with Gaussian noise, our new method of analysis, Levenberg-Marquardt deep level transient spectroscopy (LM-DLTS), is associated with two other high-resolution techniques: the Matrix Pencil method, which provides an approximation of the exponential components contained in the capacitance transients, and Prony's method, recently revised by Osborne, which sets the initial parameters.
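
    As a rough illustration of the fitting step, the sketch below uses SciPy's curve_fit (which uses the Levenberg-Marquardt algorithm for unconstrained problems) on a simulated two-exponential transient; the amplitudes, time constants and noise level are hypothetical, and the initial parameters are guessed rather than seeded by Matrix Pencil or Prony estimates as in the paper.

```python
# Minimal sketch: Levenberg-Marquardt fit of a noisy two-exponential
# capacitance transient. All parameter values are hypothetical.
import numpy as np
from scipy.optimize import curve_fit

def transient(t, a1, tau1, a2, tau2, c):
    return a1 * np.exp(-t / tau1) + a2 * np.exp(-t / tau2) + c

rng = np.random.default_rng(1)
t = np.linspace(0.0, 10.0, 500)
y = transient(t, 1.0, 0.5, 0.4, 3.0, 0.1) + rng.normal(0.0, 0.01, t.size)

# Initial parameters matter for multiexponential fits; here we simply guess.
p0 = [0.8, 0.3, 0.5, 2.0, 0.0]
popt, pcov = curve_fit(transient, t, y, p0=p0, method="lm")
print(np.round(popt, 3))
```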

  15. Power analysis for multivariate and repeated measures designs: a flexible approach using the SPSS MANOVA procedure.

    Science.gov (United States)

    D'Amico, E J; Neilands, T B; Zambarano, R

    2001-11-01

    Although power analysis is an important component in the planning and implementation of research designs, it is often ignored. Computer programs for performing power analysis are available, but most have limitations, particularly for complex multivariate designs. An SPSS procedure is presented that can be used for calculating power for univariate, multivariate, and repeated measures models with and without time-varying and time-constant covariates. Three examples provide a framework for calculating power via this method: an ANCOVA, a MANOVA, and a repeated measures ANOVA with two or more groups. The benefits and limitations of this procedure are discussed.
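
    For readers outside SPSS, a comparable calculation can be done in Python with statsmodels; the sketch below computes the power of a one-way ANOVA and the sample size needed for 80% power, with a hypothetical effect size and design (it reproduces the kind of result the SPSS MANOVA procedure gives, not the procedure itself).

```python
# Minimal sketch of ANOVA power calculations with statsmodels.
# Effect size, alpha, and design are hypothetical.
from statsmodels.stats.power import FTestAnovaPower

analysis = FTestAnovaPower()
# Power of a one-way ANOVA: 3 groups, Cohen's f = 0.25, alpha = .05, N = 120.
power = analysis.power(effect_size=0.25, nobs=120, alpha=0.05, k_groups=3)
print(f"power = {power:.3f}")

# Total sample size needed to reach power = .80.
n = analysis.solve_power(effect_size=0.25, alpha=0.05, power=0.80, k_groups=3)
print(f"required N = {n:.1f}")
```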

  16. Confirmatory factor analysis of the Sport Organizational Effectiveness Scale.

    Science.gov (United States)

    Karteroliotis, Konstantinos; Papadimitriou, Dimitra

    2004-08-01

    The purpose of this study was to examine the factorial validity of the 5-factor model of sport organizational effectiveness developed by Papadimitriou and Taylor. This questionnaire has 33 items which assess five composite effectiveness dimensions pertinent to the operation of sport organizations: calibre of the board and external liaisons, interest in athletes, internal procedures, long-term planning, and sport science support. The multiple constituency approach was used as a theoretical framework for developing this scale. Data were obtained by questionnaire from respondents affiliated with 20 Greek national sport organizations. Analysis indicated that the 5-factor model of effectiveness is workable for assessing the organizational performance of nonprofit sport organizations. The application of the multiple constituency approach in studying sport organizational effectiveness was also suggested.

  17. Assessing the effect of data pretreatment procedures for principal components analysis of chromatographic data.

    Science.gov (United States)

    McIlroy, John W; Smith, Ruth Waddell; McGuffin, Victoria L

    2015-12-01

    Following publication of the National Academy of Sciences report "Strengthening Forensic Science in the United States: A Path Forward", there has been increasing interest in the application of multivariate statistical procedures for the evaluation of forensic evidence. However, prior to statistical analysis, variance from sources other than the sample must be minimized through application of data pretreatment procedures. This is necessary to ensure that subsequent statistical analysis of the data provides meaningful results. The purpose of this work was to evaluate the effect of pretreatment procedures on multivariate statistical analysis of chromatographic data obtained for a reference set of diesel fuels. Diesel was selected due to its chemical complexity and forensic relevance, both for fire debris and environmental forensic applications. Principal components analysis (PCA) was applied to the untreated chromatograms to assess association of replicates and discrimination among the different diesel samples. The chromatograms were then pretreated by sequentially applying the following procedures: background correction, smoothing, retention-time alignment, and normalization. The effect of each procedure on association and discrimination was evaluated based on the association of replicates in the PCA scores plot. For these data, background correction and smoothing offered minimal improvement, whereas alignment and normalization offered the greatest improvement in the association of replicates and discrimination among highly similar samples. Further, prior to pretreatment, the first principal component accounted for only non-sample sources of variance. Following pretreatment, these sources were minimized and the first principal component accounted for significant chemical differences among the diesel samples. These results highlight the need for pretreatment procedures and provide a metric to assess the effect of pretreatment on subsequent multivariate statistical analysis.
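
    A minimal sketch of this pretreatment-then-PCA workflow is shown below on simulated chromatograms: crude polynomial background correction, Savitzky-Golay smoothing and total-area normalization before PCA. Retention-time alignment is omitted for brevity, and the data are synthetic, not the diesel reference set.

```python
# Minimal sketch of chromatogram pretreatment followed by PCA.
# The chromatogram matrix is simulated, not the paper's data.
import numpy as np
from scipy.signal import savgol_filter
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)
n_samples, n_points = 12, 2000
chroms = rng.normal(0.0, 0.05, (n_samples, n_points))     # detector noise
chroms += np.linspace(0.0, 1.0, n_points)                 # drifting baseline
for i in range(n_samples):
    peak_center = 800 + 20 * (i % 3)                      # 3 "fuel types"
    chroms[i] += 5.0 * np.exp(-0.5 * ((np.arange(n_points) - peak_center) / 15) ** 2)

# 1. Crude background correction: subtract a low-order polynomial baseline.
x = np.arange(n_points)
baseline = np.array([np.polyval(np.polyfit(x, c, 3), x) for c in chroms])
corrected = chroms - baseline
# 2. Smoothing with a Savitzky-Golay filter.
smoothed = savgol_filter(corrected, window_length=21, polyorder=3, axis=1)
# 3. Total-area normalization.
normalized = smoothed / np.abs(smoothed).sum(axis=1, keepdims=True)

scores = PCA(n_components=2).fit_transform(normalized)
print(scores.round(3))
```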

  18. Complications of cerebral angiography: a prospective analysis of 2,924 consecutive procedures

    Energy Technology Data Exchange (ETDEWEB)

    Dawkins, A.A.; Evans, A.L.; Wattam, J.; Romanowski, C.A.J.; Connolly, D.J.A.; Hodgson, T.J.; Coley, S.C. [Royal Hallamshire Hospital, Department of Radiology, Sheffield (United Kingdom)

    2007-09-15

    Cerebral angiography is an invasive procedure associated with a small, but definite risk of neurological morbidity. In this study we sought to establish the nature and rate of complications at our institution among a large prospective cohort of consecutive patients. Also, the data were analysed in an attempt to identify risk factors for complications associated with catheter angiography. Data were prospectively collected for a consecutive cohort of patients undergoing diagnostic cerebral angiography between January 2001 and May 2006. A total of 2,924 diagnostic cerebral angiography procedures were performed during this period. The following data were recorded for each procedure: date of procedure, patient age and sex, clinical indication, referring specialty, referral status (routine/emergency), operator, angiographic findings, and the nature of any clinical complication or asymptomatic adverse event (arterial dissection). Clinical complications occurred in 23 (0.79%) of the angiographic procedures: 12 (0.41%) significant puncture-site haematomas, 10 (0.34%) transient neurological events, and 1 nonfatal reaction to contrast agent. There were no permanent neurological complications. Asymptomatic technical complications occurred in 13 (0.44%) of the angiographic procedures: 3 groin dissections and 10 dissections of the cervical vessels. No patient with a neck dissection suffered an immediate or delayed stroke. Emergency procedures (P = 0.0004) and angiography procedures performed for intracerebral haemorrhage (P = 0.02) and subarachnoid haemorrhage (P = 0.04) were associated with an increased risk of complications. Neurological complications following cerebral angiography are rare (0.34%), but must be minimized by careful case selection and the prudent use of alternative noninvasive angiographic techniques, particularly in the acute setting. The low complication rate in this series was largely due to the favourable case mix. (orig.)

  19. Testing the number of required dimensions in exploratory factor analysis

    Directory of Open Access Journals (Sweden)

    Achim, André

    2017-01-01

    Full Text Available While maximum likelihood exploratory factor analysis (EFA) provides a statistical test that k dimensions are sufficient to account for the observed correlations among a set of variables, determining the required number of factors in least-squares based EFA has essentially relied on heuristic procedures. Two methods, Revised Parallel Analysis (R-PA) and Comparison Data (CD), were recently proposed that generate surrogate data based on an increasing number of principal axis factors in order to compare their sequence of eigenvalues with that from the data. The latter should be unremarkable among the former if enough dimensions are included. While CD looks for a balance between efficiency and parsimony, R-PA strictly tests that k dimensions are sufficient by ranking the next eigenvalue, i.e., that at rank k+1, of the actual data among those from the surrogate data. Importing two features of CD into R-PA defines four variants that are here collectively termed Next Eigenvalue Sufficiency Tests (NESTs). Simulations implementing 144 sets of parameters, including correlated factors and the presence of a doublet factor, show that all four NESTs largely outperform CD, standard Parallel Analysis, the Minimum Average Partial method and even the maximum likelihood approach in identifying the correct number of common factors. The recommended, most successful NEST variant is also the only one that never overestimates the correct number of dimensions beyond its nominal α level. This variant is made available as R and MATLAB code as well as a complement incorporated in a Microsoft Excel file.
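
    The following is a minimal sketch of the shared idea behind these procedures: judge whether k dimensions suffice by ranking the (k+1)-th observed eigenvalue among the (k+1)-th eigenvalues of surrogate data. It uses plain random-data surrogates, so it illustrates the family (PA, R-PA, NEST) rather than the authors' exact NEST algorithm; the simulated two-factor data are hypothetical.

```python
# Minimal sketch of a parallel-analysis-style sufficiency test at rank k+1.
# Not the authors' exact NEST algorithm; surrogates are plain random data.
import numpy as np

def next_eigenvalue_test(X, k, n_surrogates=200, alpha=0.05, seed=0):
    rng = np.random.default_rng(seed)
    n, p = X.shape
    observed = np.sort(np.linalg.eigvalsh(np.corrcoef(X, rowvar=False)))[::-1]
    surrogate = np.empty(n_surrogates)
    for s in range(n_surrogates):
        R = np.corrcoef(rng.normal(size=(n, p)), rowvar=False)
        surrogate[s] = np.sort(np.linalg.eigvalsh(R))[::-1][k]
    # k dimensions are "sufficient" if the (k+1)-th observed eigenvalue is
    # unremarkable among the surrogate eigenvalues at the same rank.
    threshold = np.quantile(surrogate, 1 - alpha)
    return observed[k] <= threshold

rng = np.random.default_rng(3)
F = rng.normal(size=(300, 2))                     # 2 latent factors
X = F @ rng.normal(size=(2, 8)) + rng.normal(scale=0.8, size=(300, 8))
for k in range(4):
    print(k, "sufficient" if next_eigenvalue_test(X, k) else "insufficient")
```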

  20. An extension of 2D Janbu's generalized procedure of slices for 3D slope stability analysis I: Basic theory

    Institute of Scientific and Technical Information of China (English)

    ZHANG; Junfeng; QI; Tao; LI; Zhengguo

    2005-01-01

    Based on 2D Janbu's generalized procedure of slices (GPS), a new three-dimensional slope stability analysis method has been developed, in which all forces acting on the discretized blocks in static equilibrium are taken into account in all three directions. In this method, the potential sliding mass is divided into rigid blocks and each block is analyzed separately using both geometric relations and static equilibrium formulations. By introducing force boundary conditions, the stability problem becomes statically determinate. The proposed method can be applied to analyze the stability of slopes with various types of potential sliding surfaces, complicated geological boundaries and stratifications, water pressure, and earthquake loading. The method can also be helpful in determining the individual factor of safety and the local potential sliding direction for each block. As an extension of 2D Janbu's method, the present method has both the advantages and disadvantages of Janbu's generalized procedure of slices.
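
    The slice methods in this family share a fixed-point iteration for the factor of safety. As a compact 2D illustration, the sketch below implements Bishop's simplified circular-arc method, a sibling of Janbu's procedure rather than the 3D extension developed in the paper; the slice geometry and soil properties are hypothetical.

```python
# Minimal sketch of the factor-of-safety iteration in Bishop's simplified
# 2D method of slices. Geometry and soil parameters are hypothetical.
import numpy as np

def bishop_fos(W, alpha, b, c, phi, u, tol=1e-8, max_iter=100):
    """W: slice weights, alpha: base inclinations (rad), b: slice widths,
    c: cohesion, phi: friction angle (rad), u: pore pressures at the base."""
    fs = 1.0
    for _ in range(max_iter):
        m_alpha = np.cos(alpha) + np.sin(alpha) * np.tan(phi) / fs
        resisting = np.sum((c * b + (W - u * b) * np.tan(phi)) / m_alpha)
        driving = np.sum(W * np.sin(alpha))
        fs_new = resisting / driving
        if abs(fs_new - fs) < tol:
            return fs_new
        fs = fs_new
    return fs

n = 10
alpha = np.linspace(-0.2, 0.9, n)         # base angles of the slices (rad)
W = np.full(n, 120.0)                     # slice weights, kN per metre
print(round(bishop_fos(W, alpha, b=np.full(n, 2.0), c=10.0,
                       phi=np.radians(30), u=np.zeros(n)), 3))
```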

  1. Nominal Performance Biosphere Dose Conversion Factor Analysis

    Energy Technology Data Exchange (ETDEWEB)

    M.A. Wasiolek

    2005-04-28

    This analysis report is one of the technical reports containing documentation of the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), a biosphere model supporting the Total System Performance Assessment (TSPA) for the license application (LA) for the Yucca Mountain repository. This analysis report describes the development of biosphere dose conversion factors (BDCFs) for the groundwater exposure scenario, and the development of conversion factors for assessing compliance with the groundwater protection standards. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling and provides an understanding of how this analysis report contributes to biosphere modeling. This report is one of two reports that develop BDCFs, which are input parameters for the TSPA-LA model. The "Biosphere Model Report" (BSC 2004 [DIRS 169460]) describes in detail the ERMYN conceptual model and mathematical model. The input parameter reports, shown to the right of the "Biosphere Model Report" in Figure 1-1, contain a detailed description of the model input parameters, their development, and the relationship between the parameters and specific features, events, and processes (FEPs). This report describes biosphere model calculations and their output, the BDCFs, for the groundwater exposure scenario. This analysis receives direct input from the outputs of the "Biosphere Model Report" (BSC 2004 [DIRS 169460]) and the five analyses that develop parameter values for the biosphere model (BSC 2005 [DIRS 172827]; BSC 2004 [DIRS 169672]; BSC 2004 [DIRS 169673]; BSC 2004 [DIRS 169458]; BSC 2004 [DIRS 169459]). The results of this report are further analyzed in the "Biosphere Dose Conversion Factor Importance and Sensitivity Analysis".

  2. Compliance & dexterity: factors to consider in home care and maintenance procedures / Adherencia e destreza: factores a considerar en programas preventivos

    OpenAIRE

    Victoria Criado; Andrew Tawse-Smith

    2007-01-01

    Mechanical plaque control appears to be the primary means of controlling supragingival dental plaque build-up. Although daily oral hygiene practices and periodic professional care are considered the basis for any program aimed at the prevention and treatment of oral diseases, these procedures are technically demanding, time consuming and can be affected by the compliance and manual dexterity of the patient. Individual skills and acquired behavior patterns determine the effectiveness of a preventi…

  3. Risk factors for surgical site infection following nonshunt pediatric neurosurgery: a review of 9296 procedures from a national database and comparison with a single-center experience.

    Science.gov (United States)

    Sherrod, Brandon A; Arynchyna, Anastasia A; Johnston, James M; Rozzelle, Curtis J; Blount, Jeffrey P; Oakes, W Jerry; Rocque, Brandon G

    2017-04-01

    OBJECTIVE Surgical site infection (SSI) following CSF shunt operations has been well studied, yet risk factors for nonshunt pediatric neurosurgery are less well understood. The purpose of this study was to determine SSI rates and risk factors following nonshunt pediatric neurosurgery using a nationwide patient cohort and an institutional data set specifically for better understanding SSI. METHODS The authors reviewed the American College of Surgeons National Surgical Quality Improvement Program-Pediatric (ACS NSQIP-P) database for the years 2012-2014, including all neurosurgical procedures performed on pediatric patients except CSF shunts and hematoma evacuations. SSI included deep (intracranial abscesses, meningitis, osteomyelitis, and ventriculitis) and superficial wound infections. The authors performed univariate analyses of SSI association with procedure, demographic, comorbidity, operative, and hospital variables, with subsequent multivariate logistic regression analysis to determine independent risk factors for SSI within 30 days of the index procedure. A similar analysis was performed using a detailed institutional infection database from Children's of Alabama (COA). RESULTS A total of 9296 nonshunt procedures were identified in NSQIP-P with an overall 30-day SSI rate of 2.7%. The 30-day SSI rate in the COA institutional database was similar (3.3% of 1103 procedures, p = 0.325). Postoperative time to SSI in NSQIP-P and COA was 14.6 ± 6.8 days and 14.8 ± 7.3 days, respectively (mean ± SD). Myelomeningocele (4.3% in NSQIP-P, 6.3% in COA), spine (3.5%, 4.9%), and epilepsy (3.4%, 3.1%) procedure categories had the highest SSI rates by procedure category in both NSQIP-P and COA. Independent SSI risk factors in NSQIP-P included postoperative pneumonia (OR 4.761, 95% CI 1.269-17.857, p = 0.021), immune disease/immunosuppressant use (OR 3.671, 95% CI 1.371-9.827, p = 0.010), cerebral palsy (OR 2.835, 95% CI 1.463-5.494, p = 0.002), emergency operation (OR 1

  4. Risk factors for surgical site infection following nonshunt pediatric neurosurgery: a review of 9296 procedures from a national database and comparison with a single-center experience

    Science.gov (United States)

    Sherrod, Brandon A.; Arynchyna, Anastasia A.; Johnston, James M.; Rozzelle, Curtis J.; Blount, Jeffrey P.; Oakes, W. Jerry; Rocque, Brandon G.

    2017-01-01

    Objective Surgical site infection (SSI) following CSF shunt operations has been well studied, yet risk factors for nonshunt pediatric neurosurgery are less well understood. The purpose of this study was to determine SSI rates and risk factors following nonshunt pediatric neurosurgery using a nationwide patient cohort and an institutional dataset specifically for better understanding SSI. Methods The authors reviewed the American College of Surgeons National Surgical Quality Improvement Program Pediatric (ACS NSQIP-P) database for the years 2012–2014, including all neurosurgical procedures performed on pediatric patients except CSF shunts and hematoma evacuations. SSI included deep (intracranial abscesses, meningitis, osteomyelitis, and ventriculitis) and superficial wound infections. The authors performed univariate analyses of SSI association with procedure, demographic, comorbidity, operative, and hospital variables, with subsequent multivariate logistic regression analysis to determine independent risk factors for SSI within 30 days of the index procedure. A similar analysis was performed using a detailed institutional infection database from Children’s Hospital of Alabama (COA). Results A total of 9296 nonshunt procedures were identified in NSQIP-P with an overall 30-day SSI rate of 2.7%. The 30-day SSI rate in the COA institutional database was similar (3.3% of 1103 procedures, p = 0.325). Postoperative time to SSI in NSQIP-P and COA was 14.6 ± 6.8 days and 14.8 ± 7.3 days, respectively (mean ± SD). Myelomeningocele (4.3% in NSQIP-P, 6.3% in COA), spine (3.5%, 4.9%), and epilepsy (3.4%, 3.1%) procedure categories had the highest SSI rates by procedure category in both NSQIP-P and COA. Independent SSI risk factors in NSQIP-P included postoperative pneumonia (OR 4.761, 95% CI 1.269–17.857, p = 0.021), immune disease/immunosuppressant use (OR 3.671, 95% CI 1.371–9.827, p = 0.010), cerebral palsy (OR 2.835, 95% CI 1.463–5.494, p = 0.002), emergency
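
    Odds ratios with confidence intervals like those reported here are the standard output of a multivariate logistic regression. The sketch below shows the computation with statsmodels on simulated data; the predictors stand in for risk factors such as postoperative pneumonia and are not the study's data.

```python
# Minimal sketch of deriving adjusted odds ratios (OR, 95% CI) from a
# multivariate logistic regression. Data are simulated, not the study's.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 5000
df = pd.DataFrame({
    "pneumonia": rng.binomial(1, 0.03, n),
    "immune_disease": rng.binomial(1, 0.05, n),
    "emergency": rng.binomial(1, 0.10, n),
})
logit_p = (-3.5 + 1.5 * df["pneumonia"] + 1.3 * df["immune_disease"]
           + 0.5 * df["emergency"])
df["ssi"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

exog = sm.add_constant(df[["pneumonia", "immune_disease", "emergency"]])
result = sm.Logit(df["ssi"], exog).fit(disp=0)
odds_ratios = pd.DataFrame({
    "OR": np.exp(result.params),
    "CI low": np.exp(result.conf_int()[0]),
    "CI high": np.exp(result.conf_int()[1]),
})
print(odds_ratios.round(3))
```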

  5. A kernel version of spatial factor analysis

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg

    2009-01-01

    … Schölkopf et al. introduce kernel PCA. Shawe-Taylor and Cristianini is an excellent reference for kernel methods in general. Bishop and Press et al. describe kernel methods among many other subjects. Nielsen and Canty use kernel PCA to detect change in univariate airborne digital camera images. … The kernel version of PCA handles nonlinearities by implicitly transforming data into a high (even infinite) dimensional feature space via the kernel function and then performing a linear analysis in that space. In this paper we shall apply kernel versions of PCA, maximum autocorrelation factor (MAF) analysis …

  6. Development of the HELIOS/MASTER 2-Step Procedure for the Prismatic VHTR Physics Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Kang Seog; Cho, Jin Young; Noh, Jae Man

    2007-05-15

    A new physics analysis procedure has been developed for a prismatic very high temperature gas-cooled reactor based on a conventional two-step procedure for the PWR physics analysis. The HELIOS and MASTER codes were employed to generate the coarse group cross sections through a transport lattice calculation, and to perform the 3-dimensional core physics analysis by a nodal diffusion calculation, respectively. Physics analysis of the prismatic VHTRs involves particular modeling issues such as a double heterogeneity of the coated fuel particles, a neutron streaming in the coolant channels, a strong core-reflector interaction, and large spectrum shifts due to changes of the surrounding environment and state parameters. Double heterogeneity effect was considered by using a recently developed reactivity equivalent physical transformation method. Neutron streaming effect was quantified through 3-dimensional Monte Carlo transport calculations by using the MCNP code. Strong core-reflector interaction could be handled by applying an equivalence theory to the generation of the reflector cross sections. The effects of a spectrum shift could be covered by optimizing the coarse energy group structure. A two-step analysis procedure was established for the prismatic VHTR physics analysis by combining all the methodologies described above. The applicability of our code system was tested against core benchmark problems. The results of these benchmark tests show that our code system is very accurate and practical for a prismatic VHTR physics analysis.

  7. Impact of aerobic exercise capacity and procedure-related factors in lung cancer surgery.

    Science.gov (United States)

    Licker, M; Schnyder, J-M; Frey, J-G; Diaper, J; Cartier, V; Inan, C; Robert, J; Bridevaux, P-O; Tschopp, J-M

    2011-05-01

    Over the past decades, major progress in patient selection, surgical techniques and anaesthetic management has largely contributed to improved outcome in lung cancer surgery. The purpose of this study was to identify predictors of post-operative cardiopulmonary morbidity in patients with a reduced forced expiratory volume in 1 s who underwent cardiopulmonary exercise testing (CPET). In this observational study, 210 consecutive patients with lung cancer underwent CPET with completed data over a 9-yr period (2001-2009). Cardiopulmonary complications occurred in 46 (22%) patients, including four (1.9%) deaths. On logistic regression analysis, peak oxygen uptake (peak V'(O₂)) and anaesthesia duration were independent risk factors for both cardiovascular and pulmonary complications; age and the extent of lung resection were additional predictors of cardiovascular complications, whereas tidal volume during one-lung ventilation was a predictor of pulmonary complications. Compared with patients with peak V'(O₂) >17 mL·kg⁻¹·min⁻¹, those with a lower peak V'(O₂) were at higher risk, suggesting that exercise training can improve post-operative outcome.

  8. What Is Rotating in Exploratory Factor Analysis?

    Directory of Open Access Journals (Sweden)

    Jason W. Osborne

    2015-01-01

    Full Text Available Exploratory factor analysis (EFA) is one of the most commonly reported quantitative methodologies in the social sciences, yet much of the detail regarding what happens during an EFA remains unclear. The goal of this brief technical note is to explore what "rotation" is, what exactly is rotating, and why we use rotation when performing EFAs. Some commentary about the relative utility and desirability of different rotation methods concludes the narrative.
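
    A minimal sketch of what rotation does is shown below using scikit-learn's FactorAnalysis with and without varimax rotation on simulated two-factor data; the loadings and sample size are hypothetical.

```python
# Minimal sketch: EFA loadings before and after varimax rotation.
# Simulated two-factor data; values are illustrative only.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(5)
n = 1000
factors = rng.normal(size=(n, 2))
loadings = np.array([[0.9, 0.0], [0.8, 0.1], [0.7, 0.2],
                     [0.1, 0.8], [0.0, 0.9], [0.2, 0.7]])
X = factors @ loadings.T + rng.normal(scale=0.4, size=(n, 6))

unrotated = FactorAnalysis(n_components=2).fit(X)
rotated = FactorAnalysis(n_components=2, rotation="varimax").fit(X)

# Rotation does not change model fit; it re-expresses the same factor
# space so each variable loads strongly on as few factors as possible.
print("unrotated loadings:\n", unrotated.components_.T.round(2))
print("varimax loadings:\n", rotated.components_.T.round(2))
```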

  9. Procedural and Conceptual Difficulties with Slope: An Analysis of Students' Mistakes on Routine Tasks

    Science.gov (United States)

    Cho, Peter; Nagle, Courtney

    2017-01-01

    This study extends past research on students' understanding of slope by analyzing college students' mistakes on routine tasks involving slope. We conduct both quantitative and qualitative analysis of students' mistakes on common slope tasks to extract information regarding procedural proficiencies and conceptual underpinnings required in order for…

  10. A Procedure for the Computerized Analysis of Cleft Palate Speech Transcription

    Science.gov (United States)

    Fitzsimons, David A.; Jones, David L.; Barton, Belinda; North, Kathryn N.

    2012-01-01

    The phonetic symbols used by speech-language pathologists to transcribe speech contain underlying hexadecimal values used by computers to correctly display and process transcription data. This study aimed to develop a procedure to utilise these values as the basis for subsequent computerized analysis of cleft palate speech. A computer keyboard…

  12. Isolating the Effects of Training Using Simple Regression Analysis: An Example of the Procedure.

    Science.gov (United States)

    Waugh, C. Keith

    This paper provides a case example of simple regression analysis, a forecasting procedure used to isolate the effects of training from an identified extraneous variable. This case example focuses on results of a three-day sales training program to improve bank loan officers' knowledge, skill-level, and attitude regarding solicitation and sale of…
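
    The underlying logic can be sketched in a few lines: fit a trend to the pre-training results, extrapolate it past the training date, and attribute the gap between actual and forecast performance to the training. The monthly sales figures below are hypothetical.

```python
# Minimal sketch of isolating a training effect by regression forecasting.
# Monthly sales data are hypothetical.
import numpy as np

months = np.arange(12)                       # months 0-8 pre, 9-11 post
sales = np.array([50, 52, 53, 55, 56, 58, 59, 61, 62,   # pre-training trend
                  70, 73, 75], dtype=float)             # post-training

pre = slice(0, 9)
slope, intercept = np.polyfit(months[pre], sales[pre], 1)
forecast = intercept + slope * months        # what the trend alone predicts

effect = sales[9:] - forecast[9:]            # training effect, trend removed
print("estimated training effect per month:", effect.round(1))
```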

  13. Working with stories in nursing research: procedures used in narrative analysis.

    Science.gov (United States)

    Kelly, Teresa; Howie, Linsey

    2007-04-01

    This paper describes the procedures undertaken in a qualitative study that used nurses' stories to examine the influence of Gestalt therapy training on the professional practice of psychiatric nurses. The paper places narrative research methodologies within a nursing context before introducing narrative inquiry, specifically narrative analysis methodology. Procedures used in the study are subsequently described in sufficient detail to serve as a guide for novice researchers interested in undertaking a narrative analysis study. An exemplar of a storied outcome is provided to evidence the product of the narrative analysis research process. The paper concludes with reflections on the importance of articulating the process of narrative analysis as a means of developing interest and competence in narrative research, and using nurses' stories as a means of exploring, understanding, and communicating nursing practice.

  14. A Beginner’s Guide to Factor Analysis: Focusing on Exploratory Factor Analysis

    Directory of Open Access Journals (Sweden)

    An Gie Yong

    2013-10-01

    Full Text Available The following paper discusses exploratory factor analysis and gives an overview of the statistical technique and how it is used in various research designs and applications. A basic outline of how the technique works and its criteria, including its main assumptions are discussed as well as when it should be used. Mathematical theories are explored to enlighten students on how exploratory factor analysis works, an example of how to run an exploratory factor analysis on SPSS is given, and finally a section on how to write up the results is provided. This will allow readers to develop a better understanding of when to employ factor analysis and how to interpret the tables and graphs in the output.

  15. Towards the Procedure Automation of Full Stochastic Spectral Based Fatigue Analysis

    Directory of Open Access Journals (Sweden)

    Khurram Shehzad

    2013-05-01

    Full Text Available Fatigue is one of the most significant failure modes for marine structures such as ships and offshore platforms. Among the numerous methods for fatigue life estimation, the spectral method is considered the most reliable one due to its ability to cater for different sea states as well as their probabilities of occurrence. However, the spectral-based simulation procedure itself is quite complex and numerically intensive owing to various critical technical details. The present research study focuses on the application and automation of the spectral-based fatigue analysis procedure for ship structures using ANSYS software with the 3D linear seakeeping code AQWA. ANSYS Parametric Design Language (APDL) macros are created and subsequently implemented to automate the workflow of the simulation process by reducing the time spent on non-value-added repetitive activities. A MATLAB program based on the direct calculation procedure of spectral fatigue is developed to calculate total fatigue damage. The automation procedure is employed to predict the fatigue life of a ship structural detail using wave scatter data for the North Atlantic and worldwide trade. The current work provides a system for efficient implementation of the stochastic spectral fatigue analysis procedure for ship structures.
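
    One core step in such a procedure is converting stress spectra into fatigue damage. The sketch below implements the closed-form narrow-band (Rayleigh) damage estimate summed over a wave scatter diagram with Miner's rule; the spectral moments, S-N parameters and sea-state probabilities are hypothetical, and production spectral fatigue procedures add wide-band corrections.

```python
# Minimal sketch of narrow-band spectral fatigue damage with Miner's rule.
# All numerical inputs are hypothetical.
import numpy as np
from math import gamma, pi

def narrowband_damage(m0, m2, p, T, m=3.0, A=1.0e13):
    """m0, m2: stress spectral moments per sea state; p: sea-state
    probabilities; T: design life in seconds; S-N curve N = A * S**-m."""
    nu0 = np.sqrt(m2 / m0) / (2 * pi)              # zero-upcrossing rates
    # E[S^m] for Rayleigh-distributed stress ranges S = 2 * amplitude.
    e_sm = (2 * np.sqrt(2 * m0)) ** m * gamma(1 + m / 2)
    return T * np.sum(p * nu0 * e_sm) / A

m0 = np.array([4.0e2, 9.0e2, 2.5e3])               # (MPa^2) per sea state
m2 = np.array([1.6e1, 2.8e1, 6.0e1])
p = np.array([0.5, 0.35, 0.15])                    # scatter-diagram fractions
T = 25 * 365.25 * 24 * 3600                        # 25-year design life, s

D = narrowband_damage(m0, m2, p, T)
print(f"Miner damage D = {D:.2f} (D > 1 means predicted fatigue failure)")
```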

  16. Early neurosurgical procedures enhance survival in blunt head injury: propensity score analysis.

    Science.gov (United States)

    Hedges, Jerris R; Newgard, Craig D; Veum-Stone, Judith; Selden, Nathan R; Adams, Annette L; Diggs, Brian S; Arthur, Melanie; Mullins, Richard J

    2009-08-01

    Studies of trauma systems have identified traumatic brain injury as a frequent cause of death or disability. Due to the heterogeneity of patient presentations, practice variations, and the potential for secondary brain injury, the importance of early neurosurgical procedures to survival remains controversial. Traditional observational outcome studies have been biased because injury severity and clinical prognosis are associated with the use of such interventions. We used propensity analysis to investigate the clinical efficacy of early neurosurgical procedures in patients with traumatic brain injury. We analyzed a retrospectively identified cohort of 518 consecutive patients (ages 18-65 years) with blunt traumatic brain injury (head Abbreviated Injury Scale score ≥ 3) presenting to the emergency department of a Level 1 trauma center. The propensity for a neurosurgical procedure (i.e., craniotomy or ventriculostomy) in the first 24 h was determined (based upon demographics, clinical presentation, head computed tomography scan findings, intracranial pressure monitor use, and injury severity). Multivariate logistic regression models for survival were developed using both the propensity for a neurosurgical procedure and the actual performance of the procedure. The odds of in-hospital death were substantially lower in those patients who received an early neurosurgical procedure (odds ratio [OR] 0.15; 95% confidence interval [CI] 0.05-0.41). The mortality benefit of early neurosurgical intervention persisted after exclusion of patients who died within the first 24 h (OR 0.13; 95% CI 0.04-0.48). Analysis of observational data after adjustment using the propensity score for a neurosurgical procedure in the first 24 h supports the association of early neurosurgical intervention with patient survival in the setting of significant blunt traumatic brain injury. Transfer of at-risk head-injured patients to facilities with high-level neurosurgical capabilities seems warranted.
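
    A minimal sketch of the propensity-score logic is given below: model the probability of receiving early surgery from baseline covariates, then include both the propensity score and the actual treatment in the outcome model. The data are simulated with hypothetical covariates, not the trauma cohort.

```python
# Minimal sketch of propensity-score adjustment with statsmodels.
# Simulated data; covariates and coefficients are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(6)
n = 2000
df = pd.DataFrame({
    "age": rng.uniform(18, 65, n),
    "gcs": rng.integers(3, 16, n),            # Glasgow Coma Scale
    "midline_shift": rng.binomial(1, 0.3, n),
})
# Sicker patients are more likely to get surgery and more likely to die.
ps_logit = -1.0 - 0.15 * (df["gcs"] - 9) + 1.2 * df["midline_shift"]
df["surgery"] = rng.binomial(1, 1 / (1 + np.exp(-ps_logit)))
death_logit = (-2.0 - 0.25 * (df["gcs"] - 9) + 1.0 * df["midline_shift"]
               - 0.8 * df["surgery"])
df["death"] = rng.binomial(1, 1 / (1 + np.exp(-death_logit)))

covs = sm.add_constant(df[["age", "gcs", "midline_shift"]])
df["propensity"] = sm.Logit(df["surgery"], covs).fit(disp=0).predict(covs)

outcome = sm.Logit(df["death"],
                   sm.add_constant(df[["surgery", "propensity"]])).fit(disp=0)
print("adjusted OR for early surgery:",
      round(np.exp(outcome.params["surgery"]), 2))
```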

  17. THE INFLUENCE OF THE CHOSEN SOCIO-DEMOGRAPHIC FACTORS ON THE QUALITY OF LIFE IN WOMEN AFTER GYNAECOLOGICAL SURGICAL PROCEDURES

    Directory of Open Access Journals (Sweden)

    Beata Karakiewicz

    2010-09-01

    Full Text Available Background: The aim of this study was to assess how selected socio-demographic factors affect the quality of life of patients after gynaecological surgical procedures. Materials and Methods: Research was conducted in 2007 among 250 women operated on in the Department of Reproduction and Gynaecology of the Pomeranian Medical University in Szczecin. In this survey-based study, we used a standardized quality of life questionnaire, the Women's Health Questionnaire (WHQ), developed by Dr Myra Hunter at London University. Results: The most numerous patients were those with sleep disorders (38.8%); 37.6% of the surveyed complained of troublesome menstrual symptoms, and 26.8% of respondents had disturbing somatic symptoms, short memory and problems with concentration. The lowest percentage of women (12.4%) felt anxiety and fear associated with the past gynaecological surgical procedure. Conclusions: 1. General satisfaction and good disposition are declared by the majority of patients after gynaecological surgical procedures. 2. Age, education, having a partner, place of residence, and the number of children are factors which have a significant effect on the quality of life of women after gynaecological procedures.

  18. Factor analysis identifies subgroups of constipation

    Institute of Scientific and Technical Information of China (English)

    Philip G Dinning; Mike Jones; Linda Hunt; Sergio E Fuentealba; Jamshid Kalanter; Denis W King; David Z Lubowski; Nicholas J Talley; Ian J Cook

    2011-01-01

    AIM: To determine whether distinct symptom groupings exist in a constipated population and whether such groupings might correlate with quantifiable pathophysiological measures of colonic dysfunction. METHODS: One hundred and ninety-one patients presenting to a gastroenterology clinic with constipation and 32 constipated patients responding to a newspaper advertisement completed a 53-item, wide-ranging self-report questionnaire. One hundred of these patients had colonic transit measured scintigraphically. Factor analysis determined whether constipation-related symptoms grouped into distinct aspects of symptomatology. Cluster analysis was used to determine whether individual patients naturally group into distinct subtypes. RESULTS: Cluster analysis yielded a four-cluster solution, with the presence or absence of pain and laxative unresponsiveness providing the main descriptors. Among all clusters there was a considerable proportion of patients with demonstrably delayed colon transit, positive irritable bowel syndrome criteria and regular stool frequency. The majority of patients with these characteristics also reported regular laxative use. CONCLUSION: Factor analysis identified four constipation subgroups, based on severity and laxative unresponsiveness, in a constipated population. However, clear stratification into clinically identifiable groups remains imprecise.

  19. Current good manufacturing practices, quality control procedures, quality factors, notification requirements, and records and reports, for infant formula. Final rule.

    Science.gov (United States)

    2014-06-10

    The Food and Drug Administration (FDA or we) is issuing a final rule that adopts, with some modifications, the interim final rule (IFR) entitled "Current Good Manufacturing Practices, Quality Control Procedures, Quality Factors, Notification Requirements, and Records and Reports, for Infant Formula" (February 10, 2014). This final rule affirms the IFR's changes to FDA's regulations and provides additional modifications and clarifications. The final rule also responds to certain comments submitted in response to the request for comments in the IFR.

  20. SU-D-209-05: Sensitivity of the Diagnostic Radiological Index of Protection (DRIP) to Procedural Factors in Fluoroscopy

    Energy Technology Data Exchange (ETDEWEB)

    Jones, A [UT MD Anderson Cancer Center, Houston, TX (United States); Pasciak, A [University of Tennessee Medical Center, Knoxville, TN (United States); Wagner, L [UT Medical School, Houston, TX (United States)

    2016-06-15

    Purpose: To evaluate the sensitivity of the Diagnostic Radiological Index of Protection (DRIP) to procedural factors in fluoroscopy in an effort to determine an appropriate set of scatter-mimicking primary beams (SMPB) to be used in measuring the DRIP. Methods: A series of clinical and factorial Monte Carlo simulations were conducted to determine the shape of the scattered X-ray spectra incident on the operator in different clinical fluoroscopy scenarios. Two clinical evaluations studied the sensitivity of the scattered spectrum to gantry angle and patient size while technical factors were varied according to measured automatic dose rate control (ADRC) data. Factorial evaluations studied the sensitivity of the scattered spectrum to gantry angle, field of view, patient size and beam quality for constant technical factors. Average energy was the figure of merit used to condense fluence in each energy bin to a single numerical index. Results: Beam quality had the strongest influence on the scattered spectrum in fluoroscopy. Many procedural factors affected the scattered spectrum indirectly through their effects on primary beam quality through ADRC, e.g., gantry angle and patient size. Lateral C-arm rotation, common in interventional cardiology, increased the energy of the scattered spectrum, regardless of the direction of rotation. The effect of patient size on scattered radiation depended on ADRC characteristics, patient size, and procedure type. Conclusion: The scattered spectrum striking the operator in fluoroscopy, and therefore the DRIP, is most strongly influenced by primary beam quality, particularly kV. Use cases for protective garments should be classified by typical procedural primary beam qualities, which are governed by the ADRC according to the impacts of patient size, anatomical location, and gantry angle. These results will help determine an appropriate set of SMPB to be used for measuring the DRIP.

  1. A kernel version of spatial factor analysis

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg

    2009-01-01

    … of PCA and related techniques. An interesting dilemma in reduction of dimensionality of data is the desire to obtain simplicity for better understanding, visualization and interpretation of the data on the one hand, and the desire to retain sufficient detail for adequate representation on the other hand. … Based on work by Pearson in 1901, Hotelling in 1933 introduced principal component analysis (PCA). PCA is often used for general feature generation and linear orthogonalization or compression by dimensionality reduction of correlated multivariate data; see Jolliffe for a comprehensive description. … The kernel version of PCA handles nonlinearities by implicitly transforming data into a high (even infinite) dimensional feature space via the kernel function and then performing a linear analysis in that space. In this paper we shall apply kernel versions of PCA, maximum autocorrelation factor (MAF) analysis …
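
    A minimal sketch of the kernel PCA building block is shown below using scikit-learn on toy nonlinear data (two concentric circles), where linear PCA fails but an RBF kernel separates the classes along the first component; the data and kernel parameter are hypothetical.

```python
# Minimal sketch comparing linear PCA with kernel PCA on nonlinear toy data.
import numpy as np
from sklearn.datasets import make_circles
from sklearn.decomposition import KernelPCA, PCA

X, y = make_circles(n_samples=400, factor=0.3, noise=0.05, random_state=0)

linear_scores = PCA(n_components=2).fit_transform(X)
kernel_scores = KernelPCA(n_components=2, kernel="rbf",
                          gamma=10.0).fit_transform(X)

# Linear PCA cannot separate the circles; in the RBF kernel feature space
# the first component does.
for name, s in [("linear", linear_scores), ("kernel", kernel_scores)]:
    gap = abs(s[y == 0, 0].mean() - s[y == 1, 0].mean())
    print(f"{name} PCA: class separation along PC1 = {gap:.3f}")
```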

  2. Nominal Performance Biosphere Dose Conversion Factor Analysis

    Energy Technology Data Exchange (ETDEWEB)

    M. Wasiolek

    2004-09-08

    This analysis report is one of the technical reports containing documentation of the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), a biosphere model supporting the Total System Performance Assessment (TSPA) for the license application (LA) for the Yucca Mountain repository. This analysis report describes the development of biosphere dose conversion factors (BDCFs) for the groundwater exposure scenario, and the development of conversion factors for assessing compliance with the groundwater protection standard. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling and provides an understanding of how this analysis report contributes to biosphere modeling. This report is one of two reports that develop biosphere BDCFs, which are input parameters for the TSPA-LA model. The "Biosphere Model Report" (BSC 2004 [DIRS 169460]) describes in detail the ERMYN conceptual model and mathematical model. The input parameter reports, shown to the right of the "Biosphere Model Report" in Figure 1-1, contain a detailed description of the model input parameters, their development, and the relationship between the parameters and specific features, events, and processes (FEPs). This report describes biosphere model calculations and their output, the BDCFs, for the groundwater exposure scenario. The objectives of this analysis are to develop BDCFs for the groundwater exposure scenario for the three climate states considered in the TSPA-LA, as well as conversion factors for evaluating compliance with the groundwater protection standard. The BDCFs will be used in performance assessment for calculating all-pathway annual doses for a given concentration of radionuclides in groundwater. The conversion factors will be used for calculating gross alpha particle activity.

  3. Data and analysis procedures for improved aerial applications mission performance. [agricultural aircraft wing geometry

    Science.gov (United States)

    Holmes, B. J.; Morris, D. K.; Razak, K.

    1979-01-01

    An analysis procedure is given and cases analyzed for the effects of wing geometry on lateral transport of a variety of agricultural particles released in the wake of an agricultural airplane. The cases analyzed simulate the release of particles from a fuselage centerline-mounted dry material spreader; however, the procedure applies to particles released anywhere along the wing span. Consideration is given to the effects of taper ratio, aspect ratio, wing loading, and deflected flaps. It is noted that significant lateral transport of large particles can be achieved using high-lift devices positioned to create a strong vortex near the location of particle release.

  4. Monte Carlo Analysis of Airport Throughput and Traffic Delays Using Self Separation Procedures

    Science.gov (United States)

    Consiglio, Maria C.; Sturdy, James L.

    2006-01-01

    This paper presents the results of three simulation studies of throughput and delay times of arrival and departure operations performed at non-towered, non-radar airports using self-separation procedures. The studies were conducted as part of the validation process of the Small Aircraft Transportation System Higher Volume Operations (SATS HVO) concept and include an analysis of the predicted airport capacity under different traffic conditions and system constraints with increasing levels of demand. Results show that SATS HVO procedures can dramatically increase capacity at non-towered, non-radar airports and that the concept offers the potential for increasing the capacity of the overall air transportation system.

  5. A P-value model for theoretical power analysis and its applications in multiple testing procedures

    Directory of Open Access Journals (Sweden)

    Fengqing Zhang

    2016-10-01

    Full Text Available Background: Power analysis is a critical aspect of the design of experiments to detect an effect of a given size. When multiple hypotheses are tested simultaneously, multiplicity adjustments to p-values should be taken into account in power analysis. There are a limited number of studies on power analysis in multiple testing procedures. For some methods, the theoretical analysis is difficult and extensive numerical simulations are often needed, while other methods oversimplify the information under the alternative hypothesis. To this end, this paper aims to develop a new statistical model for power analysis in multiple testing procedures. Methods: We propose a step-function-based p-value model under the alternative hypothesis, which is simple enough to perform power analysis without simulations, but not so simple as to lose the information from the alternative hypothesis. The first step is to transform distributions of different test statistics (e.g., t, chi-square or F) to distributions of corresponding p-values. We then use a step function to approximate each of the p-value distributions by matching the mean and variance. Lastly, the step-function-based p-value model can be used for theoretical power analysis. Results: The proposed model is applied to problems in multiple testing procedures. We first show how the most powerful critical constants can be chosen using the step-function-based p-value model. Our model is then applied to the field of multiple testing procedures to explain the assumption of monotonicity of the critical constants. Lastly, we apply our model to a behavioral weight loss and maintenance study to select the optimal critical constants. Conclusions: The proposed model is easy to implement and preserves the information from the alternative hypothesis.
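
    As a point of reference for what such a model approximates, the sketch below computes the exact power of a one-sided z-test under Bonferroni correction directly from the p-value distribution under the alternative; the effect size and numbers of tests are hypothetical, and this is not the authors' step-function model itself.

```python
# Minimal sketch: exact power of a Bonferroni-corrected one-sided z-test,
# derived from the p-value distribution under the alternative.
import numpy as np
from scipy import stats

def bonferroni_power(delta, n, m, alpha=0.05):
    """delta: standardized effect; n: sample size; m: number of tests.
    Under H1 the z statistic ~ N(delta * sqrt(n), 1), so
    P(p <= alpha/m) = 1 - Phi(z_{1-alpha/m} - delta * sqrt(n))."""
    crit = stats.norm.ppf(1 - alpha / m)
    return stats.norm.sf(crit - delta * np.sqrt(n))

for m in (1, 10, 100):
    print(m, round(float(bonferroni_power(delta=0.3, n=100, m=m)), 3))
```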

  6. Exploratory factor analysis in Rehabilitation Psychology: a content analysis.

    Science.gov (United States)

    Roberson, Richard B; Elliott, Timothy R; Chang, Jessica E; Hill, Jessica N

    2014-11-01

    Our objective was to examine the use and quality of exploratory factor analysis (EFA) in articles published in Rehabilitation Psychology. Trained raters examined 66 separate exploratory factor analyses in 47 articles published between 1999 and April 2014. The raters recorded the aim of the EFAs, the distributional statistics, sample size, factor retention method(s), extraction and rotation method(s), and whether the pattern coefficients, structure coefficients, and the matrix of association were reported. The primary use of the EFAs was scale development, but the most widely used extraction and rotation method was principal component analysis with varimax rotation. When determining how many factors to retain, multiple methods (e.g., scree plot, parallel analysis) were used most often. Many articles did not report enough information to allow for the duplication of their results. EFA relies on authors' choices (e.g., factor retention rules, extraction and rotation methods), and few articles adhered to all of the best practices. The current findings are compared to other empirical investigations into the use of EFA in published research. Recommendations for improving EFA reporting practices in rehabilitation psychology research are provided.

  7. A case study of view-factor rectification procedures for diffuse-gray radiation enclosure computations

    Science.gov (United States)

    Taylor, Robert P.; Luck, Rogelio

    1995-01-01

    The view factors which are used in diffuse-gray radiation enclosure calculations are often computed by approximate numerical integrations. These approximately calculated view factors will usually not satisfy the important physical constraints of reciprocity and closure. In this paper several view-factor rectification algorithms are reviewed and a rectification algorithm based on a least-squares numerical filtering scheme is proposed with both weighted and unweighted classes. A Monte-Carlo investigation is undertaken to study the propagation of view-factor and surface-area uncertainties into the heat transfer results of the diffuse-gray enclosure calculations. It is found that the weighted least-squares algorithm is vastly superior to the other rectification schemes for the reduction of the heat-flux sensitivities to view-factor uncertainties. In a sample problem, which has proven to be very sensitive to uncertainties in view factor, the heat transfer calculations with weighted least-squares rectified view factors are very good with an original view-factor matrix computed to only one-digit accuracy. All of the algorithms had roughly equivalent effects on the reduction in sensitivity to area uncertainty in this case study.
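
    A minimal sketch of the least-squares idea is given below: find the view-factor matrix closest to the approximate one that satisfies reciprocity and closure, here by solving a single penalized linear least-squares problem. The three-surface enclosure and penalty weight are hypothetical, and the published algorithms differ in their weighting schemes.

```python
# Minimal sketch of view-factor rectification by penalized linear least
# squares. Enclosure geometry and the penalty weight are hypothetical.
import numpy as np

def rectify(F0, A, w_constraint=1e6):
    n = F0.shape[0]
    rows, rhs = [], []
    # 1. Stay close to the approximately computed view factors.
    rows.append(np.eye(n * n)); rhs.append(F0.ravel())
    # 2. Reciprocity: A_i F_ij - A_j F_ji = 0 for i < j.
    for i in range(n):
        for j in range(i + 1, n):
            r = np.zeros(n * n)
            r[i * n + j] = A[i]
            r[j * n + i] = -A[j]
            rows.append(w_constraint * r[None, :]); rhs.append([0.0])
    # 3. Closure: sum_j F_ij = 1 for each row i.
    for i in range(n):
        r = np.zeros(n * n)
        r[i * n : (i + 1) * n] = 1.0
        rows.append(w_constraint * r[None, :]); rhs.append([1.0 * w_constraint])
    M = np.vstack(rows)
    b = np.concatenate([np.atleast_1d(x) for x in rhs])
    f, *_ = np.linalg.lstsq(M, b, rcond=None)
    return f.reshape(n, n)

A = np.array([1.0, 2.0, 1.5])                  # surface areas
F0 = np.array([[0.00, 0.62, 0.41],             # one-digit-accurate factors
               [0.33, 0.00, 0.68],
               [0.28, 0.91, 0.00]])
F = rectify(F0, A)
print(F.round(4), F.sum(axis=1).round(4))
```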

  8. Orthogonal Higher Order Structure of the WISC-IV Spanish Using Hierarchical Exploratory Factor Analytic Procedures

    Science.gov (United States)

    McGill, Ryan J.; Canivez, Gary L.

    2016-01-01

    As recommended by Carroll, the present study examined the factor structure of the Wechsler Intelligence Scale for Children-Fourth Edition Spanish (WISC-IV Spanish) normative sample using higher order exploratory factor analytic techniques not included in the WISC-IV Spanish Technical Manual. Results indicated that the WISC-IV Spanish subtests were…

  10. Policy analysis of authorisation procedures for wind energy deployment in Spain

    Energy Technology Data Exchange (ETDEWEB)

    Iglesias, Guillermo, E-mail: gwig@udc.es [Universidad de A Coruna, Facultad de Ciencias Economicas y Empresariales, Campus de Elvina, s/n. A Coruna 15071 (Spain); Rio, Pablo del, E-mail: pablo.delrio@cchs.csic.es [Consejo Superior de Investigaciones Cientificas (CSIC), C/Albasanz, 26-28, Madrid 28037 (Spain); Dopico, Jesus Angel, E-mail: jesu1971@udc.es [Universidad de A Coruna, Facultad de Ciencias Economicas y Empresariales, Campus de Elvina, s/n. A Coruna 15071 (Spain)

    2011-07-15

    The aim of this paper is to analyse the administrative procedures for the granting of authorisations for the siting of wind farms in Spain, currently the competency of the regional authorities. The analysis reveals some commonalities and differences between the procedures across regions. Furthermore, some aspects of these procedures have raised the concern of different stakeholders, including the central government and wind energy investors. A conflict between the interests of the central and regional governments can be observed. Lack of coordination between the different administrative levels and the 'more is better' mentality of regional authorities have led to a significant growth of wind energy requests for the (national) feed-in tariff. In turn, investors have complained about the discretionary and non-transparent nature of those procedures and the lack of homogeneity across regions. This is likely to result in delays, uncertainty for investors and higher transaction costs. Although there has been a trend towards a model which involves the use of multicriteria bidding procedures with more explicit, objective and precise criteria for project selection, the aforementioned problems suggest the need to improve coordination between the different administrative levels. Highlights: A conflict between the interests of the central and regional governments in the granting of administrative authorisations can be observed. Lack of coordination between the different administrative levels has led to a significant growth of wind energy requests for the (national) feed-in tariff. The resulting increase in the total costs of wind energy promotion has been a major concern for national policy-makers. Investors have complained about the discretionary and non-transparent nature of those procedures and the lack of homogeneity across regions. These problems suggest the need to improve coordination between the different administrative levels.

  11. An Introduction To Multi-Battery Factor Analysis: Overcoming Method Artefacts.

    Directory of Open Access Journals (Sweden)

    Gavin T L Brown

    2007-05-01

    Examination of participants' responses to factor or scale scores provides useful insights, but analysis of such scores from multiple measures or batteries is sometimes confounded by methodological artefacts. This paper provides a short primer on the use of multi-trait, multi-method (MTMM) correlational analysis and multi-battery factor analysis (MBFA). The principles of both procedures are outlined and a case study is provided from the author's research into 233 teachers' responses to 22 scale scores drawn from five batteries. The batteries were independently developed measures of teachers' thinking about the nature and purpose of assessment, teaching, learning, curriculum, and teacher efficacy. Detailed procedures for using Cudeck's (1982) MBFACT software are provided. Both MTMM and MBFA analyses identified an appropriate common trait across the five batteries, whereas joint factor analysis of the 22 scale scores confounded the common trait with a battery or method artefact. When researchers make use of multiple measures, they ought to take into account the impact of method artefacts when analyzing scale scores from multiple batteries. The multi-battery factor analysis procedure and MBFACT software provide a robust procedure for exploring how scales inter-relate.

  12. [Morbidity, mortality and analysis of prognostic factors for colorectal cancer].

    Science.gov (United States)

    Clauer, U; Schäfer, J; Roder, J

    2015-06-01

    This study analyzed morbidity, mortality and prognostic factors for patient survival in a single-center collective of patients with colorectal cancer and a high follow-up rate. A total of 698 consecutive patients were included in this study. Data were collected prospectively. Descriptive and survival analyses as well as Cox regression analyses were performed to identify factors for morbidity, mortality and prognostic factors for survival. At presentation 78.8 % of the colon cancer patients and 83.5 % of rectal cancer patients showed symptomatic disease and 6.5 % of patients underwent an emergency procedure. Mortality was 3.6 %, morbidity was 42.7 % and 4.3 % of patients developed an anastomotic leakage requiring reoperation. In spite of the regular application of a fast-track program, 10 % of patients had a prolonged duration of bowel paralysis. In patients with colon cancer there were no differences between overall survival (OAS) and disease-free survival, whereas there was a significant difference in patients with rectal cancer. The mean survival of all patients was 65.39 ± 1.722 months. The ASA score, cardiovascular disease, number of metastatic lymph nodes, lymph node ratio, residual tumor and general or surgery-associated complications were strong independent factors influencing OAS. A Cox analysis revealed age at diagnosis and microscopic residual tumor (TNM R1) as highly significant factors influencing OAS. Other significant factors influencing OAS were the development of general or surgery-associated complications and the presence of cardiovascular disease. Cardiovascular disease leads to a higher morbidity rate, whereas age, International Union Against Cancer (UICC) stage, R-status, lymphatic spread and occurrence of complications are important prognostic factors for survival.

  13. Analysis of Ultra Linguistic Factors in Interpretation

    Institute of Scientific and Technical Information of China (English)

    姚嘉

    2015-01-01

    The quality of interpretation is a dynamic conception involving many variables, such as the participants, the situation, working conditions, and cultures. Therefore, in interpretation, static elements such as traditional grammar and particular linguistic rules cannot be counted as the only criteria of quality. That is, there are many other non-language elements, ultra-linguistic factors, that play an important role in interpretation. Ultra-linguistic factors go beyond the bounds of traditional grammar and parole, and reveal the facts in an indirect way. This paper gives a brief analysis of ultra-linguistic elements in interpretation in order to achieve better results in interpretation practice.

  14. Estimating the Cost of Neurosurgical Procedures in a Low-Income Setting: An Observational Economic Analysis.

    Science.gov (United States)

    Abdelgadir, Jihad; Tran, Tu; Muhindo, Alex; Obiga, Doomwin; Mukasa, John; Ssenyonjo, Hussein; Muhumza, Michael; Kiryabwire, Joel; Haglund, Michael M; Sloan, Frank A

    2017-05-01

    There are no data on cost of neurosurgery in low-income and middle-income countries. The objective of this study was to estimate the cost of neurosurgical procedures in a low-resource setting to better inform resource allocation and health sector planning. In this observational economic analysis, microcosting was used to estimate the direct and indirect costs of neurosurgical procedures at Mulago National Referral Hospital (Kampala, Uganda). During the study period, October 2014 to September 2015, 1440 charts were reviewed. Of these patients, 434 had surgery, whereas the other 1006 were treated nonsurgically. Thirteen types of procedures were performed at the hospital. The estimated mean cost of a neurosurgical procedure was $542.14 (standard deviation [SD], $253.62). The mean cost of different procedures ranged from $291 (SD, $101) for burr hole evacuations to $1,221 (SD, $473) for excision of brain tumors. For most surgeries, overhead costs represented the largest proportion of the total cost (29%-41%). This is the first study using primary data to determine the cost of neurosurgery in a low-resource setting. Operating theater capacity is likely the binding constraint on operative volume, and thus, investing in operating theaters should achieve a higher level of efficiency. Findings from this study could be used by stakeholders and policy makers for resource allocation and to perform economic analyses to establish the value of neurosurgery in achieving global health goals. Copyright © 2017 Elsevier Inc. All rights reserved.

  15. Improving Your Exploratory Factor Analysis for Ordinal Data: A Demonstration Using FACTOR

    Directory of Open Access Journals (Sweden)

    James Baglin

    2014-06-01

    Exploratory factor analysis (EFA) methods are used extensively in the field of assessment and evaluation. Due to EFA's widespread use, common methods and practices have come under close scrutiny. A substantial body of literature has been compiled highlighting problems with many of the methods and practices used in EFA, and, in response, many guidelines have been proposed with the aim to improve application. Unfortunately, implementing recommended EFA practices has been restricted by the range of options available in commercial statistical packages and, perhaps, by an absence of clear, practical 'how-to' demonstrations. Consequently, this article describes the application of methods recommended to get the most out of your EFA. The article focuses on dealing with the common situation of analysing ordinal data as derived from Likert-type scales. These methods are demonstrated using the free, stand-alone, easy-to-use and powerful EFA package FACTOR (http://psico.fcep.urv.es/utilitats/factor/; Lorenzo-Seva & Ferrando, 2006). The demonstration applies the recommended techniques using an accompanying dataset, based on the Big 5 personality test. The outcomes obtained by the EFA using the recommended procedures through FACTOR are compared to the default techniques currently available in SPSS.
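
    The FACTOR program itself is not scriptable from this record, but the recommended workflow (adequacy checks, extraction suited to ordinal data, oblique rotation) can be sketched in Python with the factor_analyzer package. This is a minimal illustration on synthetic Likert-type responses, not the article's demonstration: FACTOR's polychoric correlations and parallel analysis are not reproduced here, and all variable names are hypothetical.

    import numpy as np
    import pandas as pd
    from factor_analyzer import FactorAnalyzer
    from factor_analyzer.factor_analyzer import calculate_kmo

    rng = np.random.default_rng(0)
    # Hypothetical dataset: 300 respondents x 10 five-point Likert items
    # generated from two latent factors.
    latent = rng.normal(size=(300, 2))
    loadings = rng.uniform(0.5, 0.9, size=(2, 10))
    raw = latent @ loadings + rng.normal(scale=0.7, size=(300, 10))
    items = pd.DataFrame(np.clip(np.round(raw + 3), 1, 5),
                         columns=[f"q{i}" for i in range(1, 11)])

    kmo_per_item, kmo_total = calculate_kmo(items)  # sampling adequacy
    print(f"KMO = {kmo_total:.2f}")                 # > 0.6 usually acceptable

    # Minres extraction with an oblique rotation, as commonly recommended
    # over the PCA-plus-varimax defaults criticized in the EFA literature.
    efa = FactorAnalyzer(n_factors=2, method="minres", rotation="oblimin")
    efa.fit(items)
    print(pd.DataFrame(efa.loadings_, index=items.columns).round(2))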

  16. Factor Rotation and Standard Errors in Exploratory Factor Analysis

    Science.gov (United States)

    Zhang, Guangjian; Preacher, Kristopher J.

    2015-01-01

    In this article, we report a surprising phenomenon: Oblique CF-varimax and oblique CF-quartimax rotation produced similar point estimates for rotated factor loadings and factor correlations but different standard error estimates in an empirical example. Influences of factor rotation on asymptotic standard errors are investigated using a numerical…

  17. Model correction factor method for system analysis

    DEFF Research Database (Denmark)

    Ditlevsen, Ove Dalager; Johannesen, Johannes M.

    2000-01-01

    The Model Correction Factor Method (MCFM) is an intelligent response surface method based on simplified modeling. MCFM is aimed at reliability analysis in cases where the limit state is defined by an elaborate model. Herein it is demonstrated that the method is applicable for elaborate limit state surfaces on which several locally most central points exist without there being a simple geometric definition of the corresponding failure modes, such as is the case for collapse mechanisms in rigid plastic hinge models for frame structures. Taking as simplified idealized model a model of similarity with the elaborate model… surface than existing in the idealized model…

  18. A novel procedure on next generation sequencing data analysis using text mining algorithm.

    Science.gov (United States)

    Zhao, Weizhong; Chen, James J; Perkins, Roger; Wang, Yuping; Liu, Zhichao; Hong, Huixiao; Tong, Weida; Zou, Wen

    2016-05-13

    Next-generation sequencing (NGS) technologies have provided researchers with vast possibilities in various biological and biomedical research areas. Efficient data mining strategies are in high demand for large scale comparative and evolutionary studies to be performed on the large amounts of data derived from NGS projects. Topic modeling is an active research field in machine learning and has been mainly used as an analytical tool to structure large textual corpora for data mining. We report a novel procedure to analyse NGS data using topic modeling. It consists of four major steps: NGS data retrieval, preprocessing, topic modeling, and data mining using Latent Dirichlet Allocation (LDA) topic outputs. An NGS data set of Salmonella enterica strains was used as a case study to show the workflow of this procedure. The perplexity measurement of the topic numbers and the convergence efficiencies of Gibbs sampling were calculated and discussed for achieving the best result from the proposed procedure. The output topics of the LDA algorithm could be treated as features of Salmonella strains to accurately describe the genetic diversity of the fliC gene in various serotypes. The results of a two-way hierarchical clustering and data matrix analysis on LDA-derived matrices successfully classified Salmonella serotypes based on the NGS data. The implementation of topic modeling in the NGS data analysis procedure provides a new way to elucidate genetic information from NGS data and to identify gene-phenotype relationships and biomarkers, especially in the era of biological and medical big data.
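
    The perplexity-based choice of topic number described above can be sketched with scikit-learn's LDA implementation. In this hedged example, reads are tokenized into k-mer "words" as an illustrative stand-in for the paper's preprocessing; the sequences, k-mer length and topic counts are all hypothetical.

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.decomposition import LatentDirichletAllocation
    from sklearn.model_selection import train_test_split

    def kmers(seq, k=4):
        # Turn a sequence into overlapping k-mer "words" for topic modeling.
        return " ".join(seq[i:i + k] for i in range(len(seq) - k + 1))

    # Hypothetical fragments standing in for NGS reads.
    seqs = ["ATGCGTACGTTAGC", "ATGCGTACGTTGGC",
            "TTAGGCATCGATCG", "TTAGGCATCGGTCG"] * 25
    X = CountVectorizer().fit_transform([kmers(s) for s in seqs])
    X_train, X_test = train_test_split(X, random_state=0)

    for n_topics in (2, 4, 8):
        lda = LatentDirichletAllocation(n_components=n_topics, random_state=0)
        lda.fit(X_train)
        # Lower held-out perplexity suggests a better-supported topic count.
        print(n_topics, round(lda.perplexity(X_test), 1))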

  19. Forensic analysis of Salvia divinorum using multivariate statistical procedures. Part I: discrimination from related Salvia species.

    Science.gov (United States)

    Willard, Melissa A Bodnar; McGuffin, Victoria L; Smith, Ruth Waddell

    2012-01-01

    Salvia divinorum is a hallucinogenic herb that is internationally regulated. In this study, salvinorin A, the active compound in S. divinorum, was extracted from S. divinorum plant leaves using a 5-min extraction with dichloromethane. Four additional Salvia species (Salvia officinalis, Salvia guaranitica, Salvia splendens, and Salvia nemorosa) were extracted using this procedure, and all extracts were analyzed by gas chromatography-mass spectrometry. Differentiation of S. divinorum from other Salvia species was successful based on visual assessment of the resulting chromatograms. To provide a more objective comparison, the total ion chromatograms (TICs) were subjected to principal components analysis (PCA). Prior to PCA, the TICs were subjected to a series of data pretreatment procedures to minimize non-chemical sources of variance in the data set. Successful discrimination of S. divinorum from the other four Salvia species was possible based on visual assessment of the PCA scores plot. To provide a numerical assessment of the discrimination, a series of statistical procedures such as Euclidean distance measurement, hierarchical cluster analysis, Student's t tests, Wilcoxon rank-sum tests, and Pearson product moment correlation were also applied to the PCA scores. The statistical procedures were then compared to determine the advantages and disadvantages for forensic applications.
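
    The pretreatment-plus-PCA comparison can be illustrated compactly: normalize each total ion chromatogram, project with PCA, and measure distances between species in score space. The chromatograms below are synthetic stand-ins, and the normalization shown is only one of several possible pretreatments.

    import numpy as np
    from sklearn.decomposition import PCA
    from scipy.spatial.distance import cdist

    rng = np.random.default_rng(1)
    # 5 species x 6 replicate chromatograms, 500 retention-time points each.
    base = rng.random((5, 500))
    tics = np.repeat(base, 6, axis=0) + rng.normal(scale=0.05, size=(30, 500))
    labels = np.repeat(np.arange(5), 6)

    # Pretreatment: total-area normalization to damp non-chemical variance.
    tics /= tics.sum(axis=1, keepdims=True)

    scores = PCA(n_components=2).fit_transform(tics)
    centroids = np.array([scores[labels == g].mean(axis=0) for g in range(5)])
    print(np.round(cdist(centroids, centroids), 3))  # inter-species separation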

  20. The effects of post-persulfate-digestion procedures on total phosphorus analysis in water.

    Science.gov (United States)

    Zhou, Meifang; Struve, David M

    2004-11-01

    There are differences between EPA Method 365 and the APHA-AWWA-WEF Standard Method 4500 with respect to the post-digestion treatment procedures for persulfate-digested water. The effects on total phosphorus analysis of different post-digestion treatment procedures, such as neutralization and reacidification, and shaking/settling, were investigated in this study using total phosphorus measurements of water samples from the Everglades Round Robin (ERR) study, and the results were compared with those reported in the ERR study. The effects of insoluble particles, or of phosphorus adsorption/precipitation on/with Al and Fe hydroxides, in the different post-digestion treatment procedures adequately accounted for the differences between the most probable value and the higher or lower total phosphorus measurements reported in the ERR study. Based on the results of this investigation we recommend that a clearly defined set of digestion and post-digestion treatment procedures be adopted as the standard for total phosphorus analysis using the ascorbic acid method.

  1. Energy dispersive X-ray diffraction to identify explosive substances: Spectra analysis procedure optimization

    Energy Technology Data Exchange (ETDEWEB)

    Crespy, C., E-mail: charles.crespy@insa-lyon.f [CNDRI-Insa Lyon, Universite de Lyon, F-69621, Villeurbanne cedex (France); Duvauchelle, P., E-mail: philippe.duvauchelle@insa-lyon.f [CNDRI-Insa Lyon, Universite de Lyon, F-69621, Villeurbanne cedex (France); Kaftandjian, V.; Soulez, F. [CNDRI-Insa Lyon, Universite de Lyon, F-69621, Villeurbanne cedex (France); Ponard, P. [Thales Components and Subsystems, 2 rue Marcel Dassault 78491, Velizy cedex (France)

    2010-11-21

    To detect the presence of explosives in packages, automated systems are required. Energy dispersive X-ray diffraction (EDXRD) represents a powerful non-invasive tool providing information on the atomic structure of samples. In this paper, EDXRD is investigated as a suitable technique for explosive detection and identification. To this end, a database has been constructed, containing measured X-ray diffraction spectra of several explosives and common materials. To quantify the influence of spectral resolution, the measurements were repeated with two detectors of different spectral resolution. Using our database, some standard spectrum analysis procedures generally used for this application have been implemented. From the results, it is possible to draw conclusions on the robustness and the limits of each analysis procedure. The aim of this work is to define a robust and efficient sequence of EDXRD spectrum analysis to discriminate explosive substances. Since the explosive substances studied are crystalline, the first step consists in using characteristics of the spectrum to estimate a crystallinity criterion, which allows a large part of common materials to be ruled out. The second step is a more detailed analysis: a similarity criterion and the locations of major peaks are used to differentiate explosives from crystalline common materials. The influence of spectral resolution on detection is also examined.
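
    A hedged sketch of the second analysis step, assuming Gaussian-shaped diffraction peaks: a cosine similarity criterion between a measured spectrum and a database entry, plus a check that the major peaks fall at the same energies. All spectra, peak positions and thresholds here are invented for illustration.

    import numpy as np
    from scipy.signal import find_peaks

    energy = np.linspace(20, 120, 1000)            # keV grid

    def spectrum(peaks):
        # Build a synthetic EDXRD spectrum from (center, height) pairs.
        s = np.zeros_like(energy)
        for center, height in peaks:
            s += height * np.exp(-((energy - center) / 1.5) ** 2)
        return s

    measured = spectrum([(35, 1.0), (52, 0.6), (78, 0.3)])
    reference = spectrum([(35, 0.9), (52, 0.7), (78, 0.25)])

    sim = measured @ reference / (np.linalg.norm(measured) * np.linalg.norm(reference))
    pm, _ = find_peaks(measured, prominence=0.1)
    pr, _ = find_peaks(reference, prominence=0.1)
    match = len(pm) == len(pr) and np.allclose(energy[pm], energy[pr], atol=1.0)
    print(f"similarity = {sim:.3f}, major peaks match: {match}")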

  2. Scalable group level probabilistic sparse factor analysis

    DEFF Research Database (Denmark)

    Hinrich, Jesper Løve; Nielsen, Søren Føns Vind; Riis, Nicolai Andre Brogaard

    2017-01-01

    Many data-driven approaches exist to extract neural representations of functional magnetic resonance imaging (fMRI) data, but most of them lack a proper probabilistic formulation. We propose a scalable group level probabilistic sparse factor analysis (psFA) allowing spatially sparse maps, component pruning using automatic relevance determination (ARD) and subject specific heteroscedastic spatial noise modeling. For task-based and resting state fMRI, we show that the sparsity constraint gives rise to components similar to those obtained by group independent component analysis. The noise modeling shows that noise is reduced in areas typically associated with activation by the experimental design. The psFA model identifies sparse components and the probabilistic setting provides a natural way to handle parameter uncertainties. The variational Bayesian framework easily extends to more complex…

  3. Disruptive Event Biosphere Dose Conversion Factor Analysis

    Energy Technology Data Exchange (ETDEWEB)

    M. Wasiolek

    2004-09-08

    This analysis report is one of the technical reports containing documentation of the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), a biosphere model supporting the total system performance assessment (TSPA) for the license application (LA) for the Yucca Mountain repository. This analysis report describes the development of biosphere dose conversion factors (BDCFs) for the volcanic ash exposure scenario, and the development of dose factors for calculating inhalation dose during volcanic eruption. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling and provides an understanding of how this analysis report contributes to biosphere modeling. This report is one of two reports that develop biosphere BDCFs, which are input parameters for the TSPA model. The "Biosphere Model Report" (BSC 2004 [DIRS 169460]) describes in detail the ERMYN conceptual model and mathematical model. The input parameter reports, shown to the right of the Biosphere Model Report in Figure 1-1, contain detailed descriptions of the model input parameters, their development and the relationship between the parameters and specific features, events and processes (FEPs). This report describes biosphere model calculations and their output, the BDCFs, for the volcanic ash exposure scenario. This analysis receives direct input from the outputs of the "Biosphere Model Report" (BSC 2004 [DIRS 169460]) and from the five analyses that develop parameter values for the biosphere model (BSC 2004 [DIRS 169671]; BSC 2004 [DIRS 169672]; BSC 2004 [DIRS 169673]; BSC 2004 [DIRS 169458]; and BSC 2004 [DIRS 169459]). The results of this report are further analyzed in the "Biosphere Dose Conversion Factor Importance and Sensitivity Analysis". The objective of this…

  4. Growth inhibitory factors in bovine faeces impairs detection of Salmonella Dublin by conventional culture procedure

    DEFF Research Database (Denmark)

    Baggesen, Dorte Lau; Nielsen, L.R.; Sørensen, Gitte

    2007-01-01

    Aims: To analyse the relative importance of different biological and technical factors on the analytical sensitivity of conventional culture methods for detection of Salmonella Dublin in cattle faeces. Methods and Results: Faeces samples collected from six adult bovines from different salmonella-… …by focusing on the strain variations and the ecology of the faecal sample. Detailed investigation of the faecal flora (pathogens and normal flora) and the interaction with chemical factors may result in the development of an improved method for detection of S. Dublin.

  5. Compliance & dexterity, factors to consider in home care and maintenance procedures

    Directory of Open Access Journals (Sweden)

    Victoria Criado

    2007-01-01

    Mechanical plaque control appears to be the primary means of controlling supragingival dental plaque build-up. Although daily oral hygiene practices and periodic professional care are considered the basis for any program aimed at the prevention and treatment of oral diseases, these procedures are technically demanding, time consuming and can be affected by the compliance and manual dexterity of the patient. Individual skills and acquired behavior patterns determine the effectiveness of a preventive program and oral hygiene practice. Successful preventive programs and home care procedures clearly depend on the interaction and commitment between the dental professional and the patient. Identifying the capacity of the individual to comply with professional recommendations and evaluating the dexterity of the patient in removing supragingival dental plaque will permit the implementation of an adequate preventive program and can help in the selection of the adjunctive antimicrobial agents and devices needed for an effective oral care routine.

  6. A qualitative analysis of the determinants in the choice of a French journal reviewing procedures

    Science.gov (United States)

    Morge, Ludovic

    2015-12-01

    Between 1993 and 2010, two French journals (Aster and Didaskalia), coming from different backgrounds but belonging to the same institution, published papers on research in science and technology education. The merging of these journals made it necessary to compare the different reviewing procedures used by each. This merging occurred at a time when research is becoming increasingly international, which partly determines some of the reviewing procedure choices. For a francophone international journal to survive, it needs to take this internationalization into account in a reasoned manner. The author of this article, as chief editor of RDST (Recherches en Didactique des Sciences et des Technologies), the journal resulting from the merger, analyses the social, cultural and pragmatic determinants which shaped the choices made in reviewing procedures. This paper describes how this diversity of factors led to dropping the idea of a standard reviewing procedure that would be valid for all journals.

  7. Detection of cow milk in donkey milk by chemometric procedures on triacylglycerol stereospecific analysis results.

    Science.gov (United States)

    Cossignani, Lina; Blasi, Francesca; Bosi, Ancilla; D'Arco, Gilda; Maurelli, Silvia; Simonetti, Maria Stella; Damiani, Pietro

    2011-08-01

    Stereospecific analysis is an important tool for the characterization of the lipid fraction of food matrices, including milk samples. The results of a chemical-enzymatic-chromatographic analytical method were elaborated by chemometric procedures such as linear discriminant analysis (LDA) and artificial neural networks (ANN). Based on the total composition and intrapositional fatty acid distribution in the triacylglycerol (TAG) backbone, the obtained results were able to characterize pure milk samples and milk mixtures with 1, 3 and 5% cow milk added to donkey milk. The resulting score was very satisfactory: totally correct classification was obtained when the TAG stereospecific results of all the considered milk mixtures (donkey-cow) were elaborated by the LDA and ANN chemometric procedures.
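
    The LDA step can be sketched with scikit-learn on synthetic stand-ins for the stereospecific TAG features; the feature dimension, class sizes and separation below are hypothetical, not the paper's data.

    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(2)
    pure = rng.normal(0.0, 1.0, size=(40, 12))   # donkey milk profiles
    mixed = rng.normal(0.8, 1.0, size=(40, 12))  # with cow milk added
    X = np.vstack([pure, mixed])
    y = np.array([0] * 40 + [1] * 40)

    # Cross-validated proportion of correctly classified samples.
    print(cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=5).mean())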

  8. Wave Concept Iterative Procedure Analysis of Patch Antennas on Nanostructured Ceramic Substrates

    Directory of Open Access Journals (Sweden)

    Valdemir Silva Neto

    2014-02-01

    The wave concept iterative procedure (WCIP) is proposed to analyze rectangular and circular patch antennas on nanostructured ceramic substrates. The principles of WCIP are described and its advantages are emphasized. The analysis of microstrip antennas on double-layered substrates is performed in the space and spectral domains. In addition, the Fast Fourier Transform (FFT) is used to improve the efficiency of the method. WCIP simulation results are compared with those from HFSS software, and good agreement is observed.

  9. Automated 3D Analysis of Pre-Procedural MDCT to Predict Annulus Plane Angulation and C-Arm Positioning Benefit on Procedural Outcome in Patients Referred for TAVR

    NARCIS (Netherlands)

    Samim, Mariam; Stella, Pieter R.; Agostoni, Pierfrancesco; Kluin, Jolanda; Ramjankhan, Faiez; Budde, Ricardo P. J.; Sieswerda, Gertjan; Algeri, Emanuela; van Belle, Camille; Elkalioubie, Ahmed; Juthier, Francis; Belkacemi, Anouar; Bertrand, Michel E.; Doevendans, Pieter A.; Van Belle, Eric

    2013-01-01

    OBJECTIVES The aim of this study was to determine whether pre-procedural analysis of multidetector row computed tomography (MDCT) scans could accurately predict the "line of perpendicularity" (LP) of the aortic annulus and corresponding C-arm angulations required for prosthesis delivery and impact t

  10. Consequences of Decontamination Procedures in Forensic Hair Analysis Using Metal-Assisted Secondary Ion Mass Spectrometry Analysis.

    Science.gov (United States)

    Cuypers, Eva; Flinders, Bryn; Boone, Carolien M; Bosman, Ingrid J; Lusthof, Klaas J; Van Asten, Arian C; Tytgat, Jan; Heeren, Ron M A

    2016-03-15

    Today, hair testing is considered to be the standard method for the detection of chronic drug abuse. Nevertheless, the differentiation between systemic exposure and external contamination remains a major challenge in the forensic interpretation of hair analysis. Nowadays, it is still impossible to directly show the difference between external contamination and use-related incorporation. Although the effects of washing procedures on the distribution of (incorporated) drugs in hair remain unknown, these decontamination procedures prior to hair analysis are considered to be indispensable in order to exclude external contamination. However, insights into the effect of decontamination protocols on levels and distribution of drugs incorporated in hair are essential to draw the correct forensic conclusions from hair analysis; we studied the consequences of these procedures on the spatial distribution of cocaine in hair using imaging mass spectrometry. Additionally, using metal-assisted secondary ion mass spectrometry, we are the first to directly show the difference between cocaine-contaminated and user hair without any prior washing procedure.

  11. Meta-analysis of the technical performance of an imaging procedure: Guidelines and statistical methodology

    Science.gov (United States)

    Huang, Erich P; Wang, Xiao-Feng; Choudhury, Kingshuk Roy; McShane, Lisa M; Gönen, Mithat; Ye, Jingjing; Buckler, Andrew J; Kinahan, Paul E; Reeves, Anthony P; Jackson, Edward F; Guimaraes, Alexander R; Zahlmann, Gudrun

    2017-01-01

    Medical imaging serves many roles in patient care and the drug approval process, including assessing treatment response and guiding treatment decisions. These roles often involve a quantitative imaging biomarker, an objectively measured characteristic of the underlying anatomic structure or biochemical process derived from medical images. Before a quantitative imaging biomarker is accepted for use in such roles, the imaging procedure to acquire it must undergo evaluation of its technical performance, which entails assessment of performance metrics such as repeatability and reproducibility of the quantitative imaging biomarker. Ideally, this evaluation will involve quantitative summaries of results from multiple studies to overcome limitations due to the typically small sample sizes of technical performance studies and/or to include a broader range of clinical settings and patient populations. This paper is a review of meta-analysis procedures for such an evaluation, including identification of suitable studies, statistical methodology to evaluate and summarize the performance metrics, and complete and transparent reporting of the results. This review addresses challenges typical of meta-analyses of technical performance, particularly small study sizes, which often causes violations of assumptions underlying standard meta-analysis techniques. Alternative approaches to address these difficulties are also presented; simulation studies indicate that they outperform standard techniques when some studies are small. The meta-analysis procedures presented are also applied to actual [18F]-fluorodeoxyglucose positron emission tomography (FDG-PET) test–retest repeatability data for illustrative purposes. PMID:24872353

  12. Meta-analysis of the technical performance of an imaging procedure: guidelines and statistical methodology.

    Science.gov (United States)

    Huang, Erich P; Wang, Xiao-Feng; Choudhury, Kingshuk Roy; McShane, Lisa M; Gönen, Mithat; Ye, Jingjing; Buckler, Andrew J; Kinahan, Paul E; Reeves, Anthony P; Jackson, Edward F; Guimaraes, Alexander R; Zahlmann, Gudrun

    2015-02-01

    Medical imaging serves many roles in patient care and the drug approval process, including assessing treatment response and guiding treatment decisions. These roles often involve a quantitative imaging biomarker, an objectively measured characteristic of the underlying anatomic structure or biochemical process derived from medical images. Before a quantitative imaging biomarker is accepted for use in such roles, the imaging procedure to acquire it must undergo evaluation of its technical performance, which entails assessment of performance metrics such as repeatability and reproducibility of the quantitative imaging biomarker. Ideally, this evaluation will involve quantitative summaries of results from multiple studies to overcome limitations due to the typically small sample sizes of technical performance studies and/or to include a broader range of clinical settings and patient populations. This paper is a review of meta-analysis procedures for such an evaluation, including identification of suitable studies, statistical methodology to evaluate and summarize the performance metrics, and complete and transparent reporting of the results. This review addresses challenges typical of meta-analyses of technical performance, particularly small study sizes, which often causes violations of assumptions underlying standard meta-analysis techniques. Alternative approaches to address these difficulties are also presented; simulation studies indicate that they outperform standard techniques when some studies are small. The meta-analysis procedures presented are also applied to actual [18F]-fluorodeoxyglucose positron emission tomography (FDG-PET) test-retest repeatability data for illustrative purposes.
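
    One of the repeatability metrics such a meta-analysis pools can be computed directly from test-retest data. A worked example with synthetic SUV-like values in place of actual FDG-PET measurements, using the common definition RC = 2.77 x within-subject SD:

    import numpy as np

    rng = np.random.default_rng(3)
    truth = rng.uniform(2, 10, size=20)               # 20 subjects
    test = truth + rng.normal(scale=0.4, size=20)     # scan 1
    retest = truth + rng.normal(scale=0.4, size=20)   # scan 2

    d = test - retest
    wsd = np.sqrt(np.mean(d ** 2) / 2)                # within-subject SD
    rc = 2.77 * wsd                                   # 95% repeatability coefficient
    print(f"wSD = {wsd:.2f}, RC = {rc:.2f}")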

  13. Attitude Exploration Using Factor Analysis Technique

    Directory of Open Access Journals (Sweden)

    Monika Raghuvanshi

    2016-12-01

    Attitude is a psychological variable that contains positive or negative evaluations of people or an environment. The growing generation possesses learning skills, so if a positive attitude is inculcated at the right age, it might become habitual. Students in the age group 14-20 years from the city of Bikaner, India, are the target population for this study. An inventory of 30 Likert-type scale statements was prepared in order to measure attitude towards the environment and matters related to conservation. The primary data were collected through a structured questionnaire, using a cluster sampling technique, and analyzed using the IBM SPSS 23 statistical tool. Factor analysis is used to reduce 30 variables to a smaller number of more identifiable groups of variables. Results show that students "need more regulation and voluntary participation to protect the environment", "need conservation of water and electricity", "are concerned about undue wastage of water", "need visible actions to protect the environment", "need strengthening of the public transport system", "are a little ignorant about the consequences of global warming", "want prevention of water pollution by industries", "need to change personal habits to protect the environment", and "don't have firsthand experience of global warming". The analysis revealed that the nine factors obtained could explain about 58.5% of the variance in the attitude of secondary school students towards the environment in the city of Bikaner, India. The remaining 41.5% of the variance is attributed to other elements not explained by this analysis. A global campaign for improvement in attitudes about environmental issues and their utility in daily life may boost positive youth attitudes, with potential worldwide impact. A cross-disciplinary approach may be developed by teaching alongside other related disciplines such as science, economics, and social studies.

  14. Development and optimization of the procedure of gas- chromatographic elemental analysis of high-carbon solid fossil fuels

    Energy Technology Data Exchange (ETDEWEB)

    Platonov, V.V.; Shvykin, A.Y.; Proskuryakov, V.A.; Podshibyakin, S.I.; Chilachava, K.B.; Khmarin, E.M.; Solov' ev, A.S. [Tolstoy Tula State Pedagogical University, Tula (Russian Federation)

    2002-07-01

    A procedure was developed for gas-chromatographic elemental analysis of coals. The conditions for exhaustive oxidation of weighed microportions of the coals were optimized. The procedure for calculating the results of the analysis was modified with the aim of improving its reproducibility.

  15. Factorized molecular wave functions: Analysis of the nuclear factor

    Energy Technology Data Exchange (ETDEWEB)

    Lefebvre, R., E-mail: roland.lefebvre@u-psud.fr [Institut des Sciences Moléculaires d’ Orsay, Bâtiment 350, UMR8214, CNRS- Université. Paris-Sud, 91405 Orsay, France and Sorbonne Universités, UPMC Univ Paris 06, UFR925, F-75005 Paris (France)

    2015-06-07

    The exact factorization of molecular wave functions leads to nuclear factors which should be nodeless functions. We reconsider the case of vibrational perturbations in a diatomic species, a situation usually treated by combining Born-Oppenheimer products. It was shown [R. Lefebvre, J. Chem. Phys. 142, 074106 (2015)] that it is possible to derive, from the solutions of coupled equations, the form of the factorized function. By increasing artificially the interstate coupling in the usual approach, the adiabatic regime can be reached, whereby the wave function can be reduced to a single product. The nuclear factor of this product is determined by the lowest of the two potentials obtained by diagonalization of the potential matrix. By comparison with the nuclear wave function of the factorized scheme, it is shown that by a simple rectification, an agreement is obtained between the modified nodeless function and that of the adiabatic scheme.

  16. Risk Factors for Complications after Peripheral Vascular Surgery in 3,202 Patient Procedures

    DEFF Research Database (Denmark)

    Kehlet, Mette; Jensen, Leif Panduro; Schroeder, Torben V.

    2016-01-01

    Background: Complications after open vascular surgery are a major health challenge for the healthcare system and the patients. Infrainguinal vascular surgery is often perceived as less risky than aortic surgery, and the aim of this study was to identify which risk factors correlated with postoperative…

  17. Correspondence factor analysis of steroid libraries.

    Science.gov (United States)

    Ojasoo, T; Raynaud, J P; Doré, J C

    1995-06-01

    The receptor binding of a library of 187 steroids to five steroid hormone receptors (estrogen, progestin, androgen, mineralocorticoid, and glucocorticoid) has been analyzed by correspondence factor analysis (CFA) in order to illustrate how the method could be used to derive structure-activity relationships from much larger libraries. CFA is a cartographic multivariate technique that provides objective distribution maps of the data after reduction and filtering of redundant information and noise. The key to the analysis of very complex data tables is the formation of barycenters (steroids with one or more common structural fragments) that can be introduced as mathematical models into the CFA analyses. This is possible in CFA because the method uses chi-square metrics and is based on the distributional equivalence of the rows and columns of the transformed data matrix. We have thus demonstrated, in purely objective statistical terms, the general conclusions on the specificity of various functional and other groups derived from prior analyses by expert intuition and reasoning. A finer analysis was made of a series of A-ring phenols showing the high degree of glucocorticoid receptor and progesterone receptor binding that can be generated by certain C-11 substitutions despite the presence of the phenolic A-ring characteristic of estrogen receptor-specific binding.
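
    The core computation behind correspondence analysis, the technique underlying CFA, is an SVD of the standardized residuals of a contingency-style table under chi-square metrics. A compact NumPy version on a hypothetical steroids-by-receptors count table:

    import numpy as np

    N = np.array([[30, 5, 2], [4, 25, 6], [3, 7, 28], [10, 10, 10]], float)
    P = N / N.sum()
    r = P.sum(axis=1)                  # row masses
    c = P.sum(axis=0)                  # column masses

    # Standardized residuals; this is where the chi-square metric enters.
    S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))
    U, sing, Vt = np.linalg.svd(S, full_matrices=False)

    row_coords = (U * sing) / np.sqrt(r)[:, None]    # principal row coordinates
    col_coords = (Vt.T * sing) / np.sqrt(c)[:, None]
    inertia = sing ** 2
    print(np.round(inertia / inertia.sum(), 3))      # share of inertia per axis
    print(np.round(row_coords[:, 0], 3), np.round(col_coords[:, 0], 3))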

  18. Multi-response permutation procedure as an alternative to the analysis of variance: an SPSS implementation.

    Science.gov (United States)

    Cai, Li

    2006-02-01

    A permutation test typically requires fewer assumptions than does a comparable parametric counterpart. The multi-response permutation procedure (MRPP) is a class of multivariate permutation tests of group difference useful for the analysis of experimental data. However, psychologists seldom make use of the MRPP in data analysis, in part because the MRPP is not implemented in popular statistical packages that psychologists use. A set of SPSS macros implementing the MRPP test is provided in this article. The use of the macros is illustrated by analyzing example data sets.
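
    Independently of the SPSS macros, the MRPP statistic is simple enough to sketch in NumPy: delta is the group-size-weighted mean of the average within-group pairwise distances, and its p-value comes from permuting the group labels. Data and permutation count below are illustrative.

    import numpy as np
    from scipy.spatial.distance import pdist, squareform

    def mrpp(X, labels, n_perm=2000, seed=0):
        D = squareform(pdist(X))
        n = len(labels)

        def delta(lab):
            d = 0.0
            for g in np.unique(lab):
                idx = np.where(lab == g)[0]
                within = D[np.ix_(idx, idx)][np.triu_indices(len(idx), 1)]
                d += len(idx) / n * within.mean()
            return d

        obs = delta(labels)
        rng = np.random.default_rng(seed)
        perm = np.array([delta(rng.permutation(labels)) for _ in range(n_perm)])
        # Small delta means tight groups; p is the fraction at least as small.
        return obs, (np.sum(perm <= obs) + 1) / (n_perm + 1)

    rng = np.random.default_rng(4)
    X = np.vstack([rng.normal(0, 1, (15, 3)), rng.normal(1, 1, (15, 3))])
    print(mrpp(X, np.repeat([0, 1], 15)))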

  19. Patent Network Analysis and Quadratic Assignment Procedures to Identify the Convergence of Robot Technologies

    Science.gov (United States)

    Lee, Woo Jin; Lee, Won Kyung

    2016-01-01

    Because of the remarkable developments in robotics in recent years, technological convergence has been active in this area. We focused on finding patterns of convergence within robot technology using network analysis of patents in both the USPTO and KIPO. To identify the variables that affect convergence, we used quadratic assignment procedures (QAP). From our analysis, we observed the patent network ecology related to convergence and found technologies that have great potential to converge with other robotics technologies. The results of our study are expected to contribute to setting up convergence-based R&D policies for robotics, which can lead to new innovation. PMID:27764196
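
    The QAP step can be sketched as a permutation test on matrices: correlate the upper triangles of two inter-technology matrices, then rebuild the null distribution by permuting node order in one of them. The matrices below are synthetic; the paper's patent-derived matrices are not reproduced.

    import numpy as np

    def qap_corr(A, B, n_perm=2000, seed=0):
        iu = np.triu_indices_from(A, k=1)
        obs = np.corrcoef(A[iu], B[iu])[0, 1]
        rng = np.random.default_rng(seed)
        hits = 0
        for _ in range(n_perm):
            p = rng.permutation(A.shape[0])
            Ap = A[np.ix_(p, p)]       # permute rows and columns together
            if abs(np.corrcoef(Ap[iu], B[iu])[0, 1]) >= abs(obs):
                hits += 1
        return obs, (hits + 1) / (n_perm + 1)

    rng = np.random.default_rng(5)
    base = rng.random((12, 12))
    A = (base + base.T) / 2
    np.fill_diagonal(A, 0)
    B = (A + rng.normal(scale=0.1, size=A.shape))
    B = (B + B.T) / 2
    np.fill_diagonal(B, 0)
    print(qap_corr(A, B))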

  20. Financial incentives for lumbar surgery: a critical analysis of physician reimbursement for decompression and fusion procedures.

    Science.gov (United States)

    Whang, Peter G; Lim, Moe R; Sasso, Rick C; Skelton, Alta; Brown, Zoe B; Greg Anderson, David; Albert, Todd J; Hilibrand, Alan S; Vaccaro, Alexander R

    2008-08-01

    Retrospective case-control study/economic analysis. To determine the treatment times required for isolated lumbar decompressions and for combined decompression and instrumented fusion procedures, and to compare the relative reimbursements for each type of operation as a function of time expenditure by the surgeon. Under current Medicare fee schedules, the payment for a fusion procedure is higher than that for an isolated decompression. It has recently been suggested in the lay press that the greater reimbursement for a lumbar arthrodesis may inappropriately influence the manner in which surgeons elect to treat lumbar degenerative conditions, resulting in what they believe to be a substantial number of unnecessary spinal fusions. A consecutive series of 50 single-level decompression cases performed by a single surgeon were retrospectively analyzed and compared with an equivalent cohort of subjects who underwent single-level decompression and instrumented posterolateral fusion with autogenous iliac crest bone grafting. The operative reports, office charts, and billing records were reviewed to determine the total clinical time invested by the surgeon and the Medicare reimbursement for each surgery. Relative to the corresponding values of the decompression group, combined decompression and fusion procedures were associated with a longer mean surgical time (134.6 min vs. 47.3 min). When reimbursement was considered relative to the total time invested by the surgeon, the findings did not indicate an undue financial incentive to recommend a combined decompression and instrumented fusion procedure over an isolated decompression to patients with symptomatic lumbar degeneration, especially when considering the greater time, effort, and risk characteristic of this more complex operation.

  1. Effect of music on procedure time and sedation during colonoscopy: A meta-analysis

    Institute of Scientific and Technical Information of China (English)

    Wilson WS Tam; Eliza LY Wong; Sheila F Twinn

    2008-01-01

    AIM: To integrate results from different studies in examining the effectiveness of music in reducing the procedure time and the amount of sedation used during colonoscopy. METHODS: An electronic search of various databases was performed to identify related articles. Study quality was evaluated by the Jadad scale. The random-effects model was used to pool the effect from individual trials and the Cochran Q-statistic was used to determine heterogeneity. Egger's regression was used to detect publication bias. RESULTS: Eight studies with 722 subjects were included in this meta-analysis. The combined mean difference in the time taken for the colonoscopy procedure between the music and control groups was -2.84 (95% CI: -5.61 to -0.08), implying a shorter time for the music group. The combined mean difference for the use of sedation was -0.46 (95% CI: -0.91 to -0.01), showing a significant reduction in the use of sedation in the music group. Heterogeneity was observed in both analyses but no publication bias was detected. CONCLUSION: Listening to music is effective in reducing procedure time and the amount of sedation used during colonoscopy and should be promoted.
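
    The pooling step can be reproduced in miniature with a DerSimonian-Laird random-effects model, a standard choice when heterogeneity is present. The per-study mean differences and standard errors below are illustrative placeholders, not the eight trials analysed above.

    import numpy as np

    md = np.array([-3.1, -1.8, -4.0, -2.2])  # study mean differences (min)
    se = np.array([1.2, 0.9, 1.6, 1.1])      # study standard errors

    w = 1 / se**2                            # fixed-effect weights
    q = np.sum(w * (md - np.sum(w * md) / w.sum()) ** 2)            # Cochran's Q
    df = len(md) - 1
    tau2 = max(0.0, (q - df) / (w.sum() - np.sum(w**2) / w.sum()))  # DL tau^2

    w_re = 1 / (se**2 + tau2)                # random-effects weights
    pooled = np.sum(w_re * md) / w_re.sum()
    se_p = np.sqrt(1 / w_re.sum())
    print(f"pooled MD = {pooled:.2f} "
          f"(95% CI {pooled - 1.96 * se_p:.2f} to {pooled + 1.96 * se_p:.2f})")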

  2. Risk analysis procedure for post-wildfire natural hazards in British Columbia

    Science.gov (United States)

    Jordan, Peter

    2010-05-01

    Following a severe wildfire season in 2003, and several subsequent damaging debris flow and flood events, the British Columbia Forest Service developed a procedure for analysing risks to public safety and infrastructure from such events. At the same time, the Forest Service undertook a research program to determine the extent of post-wildfire hazards, and examine the hydrologic and geomorphic processes contributing to the hazards. The risk analysis procedure follows the Canadian Standards Association decision-making framework for risk management (which in turn is based on international standards). This has several steps: identification of risk, risk analysis and estimation, evaluation of risk tolerability, developing control or mitigation strategies, and acting on these strategies. The Forest Service procedure deals only with the first two steps. The results are passed on to authorities such as the Provincial Emergency Program and local government, who are responsible for evaluating risks, warning residents, and applying mitigation strategies if appropriate. The objective of the procedure is to identify and analyse risks to public safety and infrastructure. The procedure is loosely based on the BAER (burned area emergency response) program in the USA, with some important differences. Our procedure focuses on identifying risks and warning affected parties, not on mitigation activities such as broadcast erosion control measures. Partly this is due to limited staff and financial resources. Also, our procedure is not multi-agency, but is limited to wildfires on provincial forest land; in British Columbia about 95% of forest land is in the publicly-owned provincial forest. Each fire season, wildfires are screened by size and proximity to values at risk such as populated areas. For selected fires, when the fire is largely contained, the procedure begins with an aerial reconnaissance of the fire, and photography with a hand-held camera, which can be used to make a

  3. Paget-Schroetter syndrome after a dental procedure in a patient with factor V Leiden (R506Q) heterozygosity.

    Science.gov (United States)

    Sharma, Prabin

    2017-04-01

    Paget-Schroetter syndrome, or effort thrombosis, is characterized by spontaneous thrombosis of the upper extremity venous system, commonly seen in young, healthy patients after repetitive use of the upper extremities. It is rarely associated with coagulopathy, and thus a hypercoagulability work-up is not usually part of the investigation. We present the first case of a young woman diagnosed with left upper extremity effort thrombosis following a dental procedure. Interestingly, she was also noted to be heterozygous for the factor V Leiden mutation.

  4. Procedural Factors That Affect Psychophysical Measures of Spatial Selectivity in Cochlear Implant Users

    Directory of Open Access Journals (Sweden)

    Stefano Cosentino

    2015-09-01

    Behavioral measures of spatial selectivity in cochlear implants are important both for guiding the programming of individual users' implants and for the evaluation of different stimulation methods. However, the methods used are subject to a number of confounding factors that can contaminate estimates of spatial selectivity. These factors include off-site listening, charge interactions between masker and probe pulses in interleaved masking paradigms, and confusion effects in forward masking. We review the effects of these confounds and discuss methods for minimizing them. We describe one such method in which the level of a 125-pps masker is adjusted so as to mask a 125-pps probe, and where the masker and probe pulses are temporally interleaved. Five experiments describe the method and evaluate the potential roles of the different potential confounding factors. No evidence was obtained for off-site listening of the type observed in acoustic hearing. The choice of the masking paradigm was shown to alter the measured spatial selectivity. For short gaps between masker and probe pulses, both facilitation and refractory mechanisms had an effect on masking; this finding should inform the choice of stimulation rate in interleaved masking experiments. No evidence for confusion effects in forward masking was revealed. It is concluded that the proposed method avoids many potential confounds but that the choice of method should depend on the research question under investigation.

  5. High-dose-rate Intracavitary Radiotherapy in the Management of Cervical Intraepithelial Neoplasia 3 and Carcinoma In Situ Presenting With Poor Histologic Factors After Undergoing Excisional Procedures

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Yong Bae, E-mail: ybkim3@yuhs.ac [Department of Radiation Oncology, Severance Hospital, Yonsei University, College of Medicine, Seoul (Korea, Republic of); Kim, Young Tae [Department of Obstetrics and Gynecology, Severance Hospital, Yonsei University, College of Medicine, Seoul (Korea, Republic of); Cho, Nam Hoon [Department of Pathology, Severance Hospital, Yonsei University, College of Medicine, Seoul (Korea, Republic of); Koom, Woong Sub [Department of Radiation Oncology, Severance Hospital, Yonsei University, College of Medicine, Seoul (Korea, Republic of); Kim, Sunghoon; Kim, Sang Wun; Nam, Eun Ji [Department of Obstetrics and Gynecology, Severance Hospital, Yonsei University, College of Medicine, Seoul (Korea, Republic of); Kim, Gwi Eon [Department of Radiation Oncology, Severance Hospital, Yonsei University, College of Medicine, Seoul (Korea, Republic of)

    2012-09-01

    Purpose: To assess the effectiveness of high-dose-rate intracavitary radiotherapy (HDR-ICR) in patients with cervical intraepithelial neoplasia 3 (CIN 3) and carcinoma in situ (CIS) presenting with poor histologic factors for predicting residual disease after undergoing diagnostic excisional procedures. Methods and Materials: This study was a retrospective analysis of 166 patients with CIN 3 (n=15) and CIS (n=151) between October 1986 and December 2005. They were diagnosed by conization (n=158) and punch biopsy (n=8). Pathologic analysis showed 135 cases of endocervical gland involvement (81.4%), 74 cases of positive resection margins (44.5%), and 52 cases of malignant cells on endocervical curettage (31.3%). All patients were treated with HDR-ICR using Co-60 or Ir-192 at a cancer center. The dose was prescribed at point A located 2 cm superior to the external os and 2 cm lateral to the axis of the tandem for intact uterus. Results: Median age was 61 years (range, 29-77). The median total dose of HDR-ICR was 30 Gy/6 fractions (range, 30-52). At follow-up (median, 152 months), 2 patients developed recurrent diseases: 1 CIN 2 and 1 invasive carcinoma. One hundred and forty patients survived and 26 patients died, owing to nonmalignant intercurrent disease. Rectal bleeding occurred in one patient; however, this symptom subsided with conservative management. Conclusions: Our data showed HDR-ICR is an effective modality for CIN 3 and CIS patients presenting with poor histologic factors after excisional procedures. HDR-ICR should be considered as a definitive treatment in CIN 3 and CIS patients with possible residual disease after undergoing excisional procedures.

  6. Physiological Factors Analysis in Unpressurized Aircraft Cabins

    Science.gov (United States)

    Patrao, Luis; Zorro, Sara; Silva, Jorge

    2016-11-01

    Amateur and sports flight is an activity with growing numbers worldwide. However, the main cause of flight incidents and accidents is increasingly pilot error, for a number of reasons. Fatigue, sleep issues and hypoxia, among many others, are some that can be avoided, or, at least, mitigated. This article describes the analysis of psychological and physiological parameters during flight in unpressurized aircraft cabins. It relates cerebral oximetry and heart rate with altitude, as well as with flight phase. The study of those parameters might give clues on which variations represent a warning sign to the pilot, thus preventing incidents and accidents due to human factors. Results show that both cerebral oximetry and heart rate change along the flight and altitude in the alert pilot. The impaired pilot might not reveal these variations and, if this is detected, he can be warned in time.

  7. In-hospital mortality analysis in patients with proximal femoral fracture operatively treated by hip arthroplasty procedure

    Directory of Open Access Journals (Sweden)

    Starčević Srdjan

    2016-01-01

    Background/Aim. Hip fracture remains the leading cause of death from trauma among the elderly population and is a great burden to national health services. In-hospital death analysis is important to evaluate risk factors, make appropriate patient selection and adequately treat infections in patients to be operated on. The aim of this study was to analyze in-hospital mortality in proximal femoral fracture patients operatively treated with a hip arthroplasty procedure. Methods. We followed 622 consecutive patients and collected data on age, gender, the presence of infection preoperatively and postoperatively, American Society of Anesthesiologists (ASA) score, diabetes mellitus and the type of surgical procedure. Postoperative infections included pneumonia, urinary tract infections, surgical site infections and sepsis. Results. We found a statistically significant influence of preoperative and postoperative infection on in-hospital mortality, with a relative risk for lethal outcome of 4.53 (95% CI: 1.44-14.22) for patients with preoperative infection and 7.5 (95% CI: 1.90-29.48) for patients with postoperative infection. We did not confirm a statistically significant influence of age, gender, ASA score, diabetes mellitus or the type of surgical procedure on the mortality rate. Conclusion. Adequate preoperative selection, risk evaluation and adequate treatment of infections are of key importance for lowering the risk of death in patients operated on due to proximal femoral fracture and treated by hip arthroplasty procedures. Special attention should be paid to the presence of preoperative and postoperative infections, given the associated risk of increased in-hospital mortality.
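
    For reference, a relative risk and its confidence interval of the kind reported above come from a standard 2x2-table calculation. A worked example with hypothetical counts (not the study's data):

    import math

    a, b = 6, 44     # deaths / survivors among patients with infection
    c, d = 12, 560   # deaths / survivors among patients without

    rr = (a / (a + b)) / (c / (c + d))
    se_log = math.sqrt(1/a - 1/(a + b) + 1/c - 1/(c + d))
    lo = math.exp(math.log(rr) - 1.96 * se_log)
    hi = math.exp(math.log(rr) + 1.96 * se_log)
    print(f"RR = {rr:.2f} (95% CI {lo:.2f}-{hi:.2f})")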

  8. Efficient Procedures of Sensitivity Analysis for Structural Vibration Systems with Repeated Frequencies

    Directory of Open Access Journals (Sweden)

    Shijia Zhao

    2013-01-01

    Derivatives of eigenvectors with respect to structural parameters play an important role in structural design, identification, and optimization. In particular, the calculation of eigenvector sensitivity is considered when the eigenvalues are repeated. A relaxation factor embedded in the combined approximations (CA) method makes it effective for computing the structural response at various modified designs. The proposed method becomes feasible once the difficulty posed by the irreversibility of the characteristic matrix is overcome. Numerical examples show that the computational procedure is easy to implement and that the method presented in this paper is efficient for general linear damped vibration systems with repeated frequencies.

  9. A Procedure for Modeling Structural Component/Attachment Failure Using Transient Finite Element Analysis

    Science.gov (United States)

    Lovejoy, Andrew E.; Jegley, Dawn C. (Technical Monitor)

    2007-01-01

    Structures often comprise smaller substructures that are connected to each other or attached to the ground by a set of finite connections. Under static loading one or more of these connections may exceed allowable limits and be deemed to fail. Of particular interest is the structural response when a connection is severed (failed) while the structure is under static load. A transient failure analysis procedure was developed by which it is possible to examine the dynamic effects that result from introducing a discrete failure while a structure is under static load. The failure is introduced by replacing a connection load history by a time-dependent load set that removes the connection load at the time of failure. The subsequent transient response is examined to determine the importance of the dynamic effects by comparing the structural response with the appropriate allowables. Additionally, this procedure utilizes a standard finite element transient analysis that is readily available in most commercial software, permitting the study of dynamic failures without the need to purchase software specifically for this purpose. The procedure is developed and explained, demonstrated on a simple cantilever box example, and finally demonstrated on a real-world example, the American Airlines Flight 587 (AA587) vertical tail plane (VTP).
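
    The essence of the procedure (replace a connection's load history with one that ramps the load out at the failure time, then integrate the transient) can be shown on a one-degree-of-freedom toy model. All parameters below are illustrative, and scipy.integrate.solve_ivp stands in for the commercial transient solver mentioned above.

    import numpy as np
    from scipy.integrate import solve_ivp

    m, c, k, F = 1.0, 10.0, 4.0e4, 1.0e3
    R0 = 0.5 * F                 # static reaction carried by the connection
    t_fail, ramp = 0.05, 0.002   # failure time and removal window (s)

    def reaction(t):
        # Connection load history: static value, ramped to zero after t_fail.
        return R0 * float(np.clip(1.0 - (t - t_fail) / ramp, 0.0, 1.0))

    def rhs(t, y):
        x, v = y
        return [v, (F - reaction(t) - k * x - c * v) / m]

    x_static = (F - R0) / k      # equilibrium before the failure
    sol = solve_ivp(rhs, (0.0, 0.3), [x_static, 0.0], max_step=1e-4)

    x_final = F / k              # new static equilibrium after failure
    # The peak transient response, not the final static value, is what gets
    # compared with the allowables.
    print(f"dynamic overshoot factor = {sol.y[0].max() / x_final:.2f}")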

  10. Instability analysis procedure for 3-level multi-bearing rotor-foundation systems

    Science.gov (United States)

    Zhou, S.; Rieger, N. F.

    1985-01-01

    A procedure for the instability analysis of three-level multispan rotor systems is described. This procedure is based on a distributed mass elastic representation of the rotor system in several eight-coefficient bearings. Each bearing is supported from an elastic foundation on damped, elastic pedestals. The foundation is represented as a general distributed mass elastic structure on discrete supports, which may have different stiffness and damping properties in the horizontal and vertical directions. This system model is suited to studies of instability threshold conditions for multirotor turbomachines on either massive or flexible foundations. The instability condition is found by obtaining the eigenvalues of the system determinant, which is obtained by the transfer matrix method from the three-level system model. The stability determinant is solved for the lowest rotational speed at which the system damping becomes zero in the complex eigenvalue, and for the whirl frequency corresponding to the natural frequency of the unstable mode. An efficient algorithm for achieving this is described. Application of this procedure to a rigid rotor in two damped-elastic bearings and flexible supports is described. A second example discusses a flexible rotor with four damped-elastic bearings. The third case compares the stability of a six-bearing 300 MW turbine generator unit, using two different bearing types. These applications validate the computer program and various aspects of the analysis.

  11. Practical Considerations for Using Exploratory Factor Analysis in Educational Research

    Science.gov (United States)

    Beavers, Amy S.; Lounsbury, John W.; Richards, Jennifer K.; Huck, Schuyler W.; Skolits, Gary J.; Esquivel, Shelley L.

    2013-01-01

    The uses and methodology of factor analysis are widely debated and discussed, especially the issues of rotational use, methods of confirmatory factor analysis, and adequate sample size. The variety of perspectives and often conflicting opinions can lead to confusion among researchers about best practices for using factor analysis. The focus of the…

  12. Simplified Procedure For The Free Vibration Analysis Of Rectangular Plate Structures With Holes And Stiffeners

    Directory of Open Access Journals (Sweden)

    Cho Dae Seung

    2015-04-01

    Full Text Available Thin and thick plates, plates with holes, stiffened panels and stiffened panels with holes are primary structural members in almost all fields of engineering: civil, mechanical, aerospace, naval, ocean, etc. In this paper, a simple and efficient procedure for the free vibration analysis of such elements is presented. It is based on the assumed mode method and can handle different plate thicknesses, various shapes and sizes of holes, different framing sizes and types, as well as different combinations of boundary conditions. Natural frequencies and modes are determined by solving an eigenvalue problem of a multi-degree-of-freedom system matrix equation derived by using Lagrange’s equations. Mindlin theory is applied for the plate and Timoshenko beam theory for the stiffeners. The applicability of the method in the design procedure is illustrated with several numerical examples obtained by the in-house developed code VAPS. Very good agreement with standard commercial finite element software is achieved.
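
    The final step of the assumed mode method, solving the matrix eigenvalue problem produced by Lagrange's equations, can be illustrated in a few lines. The sketch below substitutes a simply supported Euler-Bernoulli beam with sine assumed modes for the Mindlin plate and Timoshenko stiffener energies used in the paper; the properties and mode count are illustrative.

    import numpy as np
    from scipy.linalg import eigh

    L, EI, rhoA = 1.0, 1.0, 1.0      # assumed beam properties
    N = 8                            # number of assumed modes
    x = np.linspace(0.0, L, 2001)

    # Mass and stiffness matrices from the kinetic / strain energy integrals.
    M = np.zeros((N, N)); K = np.zeros((N, N))
    phi = [np.sin((i + 1) * np.pi * x / L) for i in range(N)]
    d2phi = [-((i + 1) * np.pi / L) ** 2 * p for i, p in enumerate(phi)]
    for i in range(N):
        for j in range(N):
            M[i, j] = rhoA * np.trapz(phi[i] * phi[j], x)
            K[i, j] = EI * np.trapz(d2phi[i] * d2phi[j], x)

    w2, modes = eigh(K, M)           # eigenvalues are squared natural frequencies
    print(np.sqrt(w2[:3]))           # ~ (n*pi)^2 * sqrt(EI/rhoA) for n = 1, 2, 3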

  13. Procedure for conducting probabilistic safety assessment: level 1 full power internal event analysis

    Energy Technology Data Exchange (ETDEWEB)

    Jung, Won Dae; Lee, Y. H.; Hwang, M. J. [and others

    2003-07-01

    This report provides guidance on conducting a Level 1 PSA for internal events in NPPs, based on the method and procedure used in the PSA for the design of the Korea Standard Nuclear Plants (KSNPs). The aim of a Level 1 PSA is to delineate the accident sequences leading to core damage and to estimate their frequencies. It has been directly used for assessing and modifying system safety and reliability as a key and basic part of PSA. Level 1 PSA also provides insights into design weaknesses and into ways of preventing core damage, which in most cases is the precursor to major accidents. Level 1 PSA has therefore been used as the essential technical basis for risk-informed applications in NPPs. The report describes six major procedural steps for Level 1 PSA: plant familiarization, initiating event analysis, event tree analysis, system fault tree analysis, reliability data analysis, and accident sequence quantification. The report is intended to assist technical staff performing Level 1 PSA for NPPs. A particular aim is to promote a standardized framework, terminology, and form of documentation for PSAs. The report should also be useful for managers and regulators involved in risk-informed regulation, and for conducting PSA in other industries.
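
    As a toy illustration of the accident sequence quantification step (the event names, probabilities, and cut sets below are invented placeholders, not values from the report), the sketch compares the rare-event approximation with the min-cut-set upper bound.

    import math

    basic_events = {"IE": 1e-2, "PUMP_A": 3e-3, "PUMP_B": 3e-3, "DG": 5e-2, "OP": 1e-3}
    cut_sets = [["IE", "PUMP_A", "PUMP_B"], ["IE", "DG", "OP"]]

    p_cs = [math.prod(basic_events[e] for e in cs) for cs in cut_sets]
    rare_event = sum(p_cs)                          # sum of cut set probabilities
    mcub = 1.0 - math.prod(1.0 - p for p in p_cs)   # min-cut-set upper bound
    print(f"rare-event: {rare_event:.3e}  MCUB: {mcub:.3e}")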

  14. Unintentionally retained foreign bodies after surgical procedures. Analysis of 4547 cases

    Directory of Open Access Journals (Sweden)

    Dário Vianna Birolini

    Full Text Available Objective: this study aims to explore the experience of Brazilian surgeons with unintentionally retained foreign bodies (RFB) after surgical procedures. Methods: a questionnaire was sent to surgeons by electronic mail between March and July 2012. The questions analyzed their experience with foreign bodies (FB), the types of foreign bodies, clinical manifestations, diagnoses, risk factors and legal implications. Results: in the 2872 eligible questionnaires, 43% of the surgeons asserted that they had already left an FB and 73% had removed an FB on one or more occasions, totaling 4547 cases. Of these foreign bodies, 90% were textiles, 78% were discovered in the first year and 14% remained asymptomatic. Among doctors less than five years after graduation, 36% had already left an FB. The surgical procedures most frequently mentioned were elective (57%) and routine (85%) ones. Emergency surgery (26%), lack of counting (25%) and inadequate working conditions (12.5%) contributed to the occurrence. In 46% of the cases patients were alerted about the FB, and 26% of them sued the doctors or the institution. Conclusions: challenging medical situations, omission of security protocols and inadequate working conditions contributed to RFB. However, RFB occur mostly in routine procedures such as cesarean section or cholecystectomy, and at the beginning of the professional career, highlighting, particularly in the poorest countries, the need for primary prevention. Textiles predominated, causing clinical repercussions, and were diagnosed in the first postoperative months. Surgeons were sued in 11.3% of the RFB cases.

  15. Uncertainty Analysis of A Flood Risk Mapping Procedure Applied In Urban Areas

    Science.gov (United States)

    Krause, J.; Uhrich, S.; Bormann, H.; Diekkrüger, B.

    In the framework of the IRMA-Sponge program, the presented study was part of the joint research project FRHYMAP (flood risk and hydrological mapping). A simple conceptual flooding model (FLOODMAP) has been developed to simulate flooded areas beside rivers within cities. FLOODMAP requires a minimum of input data (digital elevation model (DEM), river line, water level plain) and parameters, and calculates the flood extent as well as the spatial distribution of flood depths. Of course, the simulated model results are affected by errors and uncertainties. Possible sources of uncertainty are the model structure, model parameters and input data. Thus, after the model validation (comparison of the simulated flood extent to the observed extent, taken from airborne pictures), the uncertainty of the essential input data set (the digital elevation model) was analysed. Monte Carlo simulations were performed to assess the effect of uncertainties in the DEM quality statistics and to derive flooding probabilities from the set of simulations. The questions concerning the minimum DEM resolution required for flood simulation and the best aggregation procedure for a given DEM were answered by comparing the results obtained using all available standard GIS aggregation procedures. Seven different aggregation procedures were applied to high resolution DEMs (1-2 m) in three cities (Bonn, Cologne, Luxembourg). Based on this analysis, the effect of 'uncertain' DEM data was estimated and compared with other sources of uncertainty. Especially socio-economic information and the monetary transfer functions required for a damage risk analysis show a high uncertainty. Therefore this study helps to analyse the weak points of the flood risk and damage risk assessment procedure.
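
    The Monte Carlo treatment of DEM uncertainty can be sketched compactly. In the fragment below, the grid size, water level, and error statistics are illustrative assumptions: each realization perturbs the DEM with its assumed vertical error, and the ensemble of flood masks yields per-cell flooding probabilities.

    import numpy as np

    rng = np.random.default_rng(0)
    dem = rng.uniform(48.0, 55.0, size=(100, 100))   # stand-in for a real DEM [m]
    water_level = 50.0                               # simulated water level plain [m]
    sigma_dem = 0.5                                  # assumed DEM vertical error [m]
    n_runs = 500

    flood_count = np.zeros_like(dem)
    for _ in range(n_runs):
        noisy = dem + rng.normal(0.0, sigma_dem, size=dem.shape)
        flood_count += noisy < water_level           # flooded where ground < water

    flood_prob = flood_count / n_runs                # per-cell flooding probability
    print(flood_prob.mean())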

  16. Analysis of boutique arrays: a universal method for the selection of the optimal data normalization procedure.

    Science.gov (United States)

    Uszczyńska, Barbara; Zyprych-Walczak, Joanna; Handschuh, Luiza; Szabelska, Alicja; Kaźmierczak, Maciej; Woronowicz, Wiesława; Kozłowski, Piotr; Sikorski, Michał M; Komarnicki, Mieczysław; Siatkowski, Idzi; Figlerowicz, Marek

    2013-09-01

    DNA microarrays, which are among the most popular genomic tools, are widely applied in biology and medicine. Boutique arrays, which are small, spotted, dedicated microarrays, constitute an inexpensive alternative to whole-genome screening methods. The data extracted from each microarray-based experiment must be transformed and processed prior to further analysis to eliminate any technical bias. The normalization of the data is the most crucial step of microarray data pre-processing and this process must be carefully considered as it has a profound effect on the results of the analysis. Several normalization algorithms have been developed and implemented in data analysis software packages. However, most of these methods were designed for whole-genome analysis. In this study, we tested 13 normalization strategies (ten for double-channel data and three for single-channel data) available on R Bioconductor and compared their effectiveness in the normalization of four boutique array datasets. The results revealed that boutique arrays can be successfully normalized using standard methods, but not every method is suitable for each dataset. We also suggest a universal seven-step workflow that can be applied for the selection of the optimal normalization procedure for any boutique array dataset. The described workflow enables the evaluation of the investigated normalization methods based on the bias and variance values for the control probes, a differential expression analysis and a receiver operating characteristic curve analysis. The analysis of each component results in a separate ranking of the normalization methods. A combination of the ranks obtained from all the normalization procedures facilitates the selection of the most appropriate normalization method for the studied dataset and determines which methods can be used interchangeably.
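
    The rank-combination idea behind the proposed workflow can be sketched as follows; the method names and scores are placeholders, not results from the study. Each normalization candidate is ranked on control-probe bias, control-probe variance, and ROC AUC from a differential-expression call, and the per-criterion ranks are averaged.

    import numpy as np
    from scipy.stats import rankdata

    methods = ["loess", "quantile", "vsn"]
    bias     = np.array([0.10, 0.05, 0.08])   # |mean log-ratio| of control probes
    variance = np.array([0.20, 0.25, 0.15])   # variance of control probes
    auc      = np.array([0.85, 0.90, 0.88])   # ROC AUC (higher is better)

    # Lower bias/variance is better; higher AUC is better, hence the sign flip.
    ranks = (rankdata(bias) + rankdata(variance) + rankdata(-auc)) / 3.0
    best = methods[int(np.argmin(ranks))]
    print(dict(zip(methods, ranks)), "->", best)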

  17. Confirmatory factor analysis of the Child Oral Health Impact Profile (Korean version).

    Science.gov (United States)

    Cho, Young Il; Lee, Soonmook; Patton, Lauren L; Kim, Hae-Young

    2016-04-01

    Empirical support for the factor structure of the Child Oral Health Impact Profile (COHIP) has not been fully established. The purposes of this study were to evaluate the factor structure of the Korean version of the COHIP (COHIP-K) empirically using confirmatory factor analysis (CFA) based on the theoretical framework and then to assess whether any of the factors in the structure could be grouped into a simpler single second-order factor. Data were collected through self-reported COHIP-K responses from a representative community sample of 2,236 Korean children, 8-15 yr of age. Because a large inter-factor correlation of 0.92 was estimated in the original five-factor structure, the two strongly correlated factors were combined into one factor, resulting in a four-factor structure. The revised four-factor model showed a reasonable fit with appropriate inter-factor correlations. Additionally, the second-order model with four sub-factors was reasonable with sufficient fit and showed equal fit to the revised four-factor model. A cross-validation procedure confirmed the appropriateness of the findings. Our analysis empirically supported a four-factor structure of COHIP-K, a summarized second-order model, and the use of an integrated summary COHIP score.

  18. Old and new ideas for data screening and assumption testing for exploratory and confirmatory factor analysis

    Directory of Open Access Journals (Sweden)

    David B. Flora

    2012-03-01

    Full Text Available We provide a basic review of the data screening and assumption testing issues relevant to exploratory and confirmatory factor analysis along with practical advice for conducting analyses that are sensitive to these concerns. Historically, factor analysis was developed for explaining the relationships among many continuous test scores, which led to the expression of the common factor model as a multivariate linear regression model with observed, continuous variables serving as dependent variables and unobserved factors as the independent, explanatory variables. Thus, we begin our paper with a review of the assumptions for the common factor model and data screening issues as they pertain to the factor analysis of continuous observed variables. In particular, we describe how principles from regression diagnostics also apply to factor analysis. Next, because modern applications of factor analysis frequently involve the analysis of the individual items from a single test or questionnaire, an important focus of this paper is the factor analysis of items. Although the traditional linear factor model is well-suited to the analysis of continuously distributed variables, commonly used item types, including Likert-type items, almost always produce dichotomous or ordered categorical variables. We describe how relationships among such items are often not well described by product-moment correlations, which has clear ramifications for the traditional linear factor analysis. An alternative, non-linear factor analysis using polychoric correlations has become more readily available to applied researchers and thus more popular. Consequently, we also review the assumptions and data-screening issues involved in this method. Throughout the paper, we demonstrate these procedures using an historic data set of nine cognitive ability variables.

  19. Old and new ideas for data screening and assumption testing for exploratory and confirmatory factor analysis.

    Science.gov (United States)

    Flora, David B; Labrish, Cathy; Chalmers, R Philip

    2012-01-01

    We provide a basic review of the data screening and assumption testing issues relevant to exploratory and confirmatory factor analysis along with practical advice for conducting analyses that are sensitive to these concerns. Historically, factor analysis was developed for explaining the relationships among many continuous test scores, which led to the expression of the common factor model as a multivariate linear regression model with observed, continuous variables serving as dependent variables, and unobserved factors as the independent, explanatory variables. Thus, we begin our paper with a review of the assumptions for the common factor model and data screening issues as they pertain to the factor analysis of continuous observed variables. In particular, we describe how principles from regression diagnostics also apply to factor analysis. Next, because modern applications of factor analysis frequently involve the analysis of the individual items from a single test or questionnaire, an important focus of this paper is the factor analysis of items. Although the traditional linear factor model is well-suited to the analysis of continuously distributed variables, commonly used item types, including Likert-type items, almost always produce dichotomous or ordered categorical variables. We describe how relationships among such items are often not well described by product-moment correlations, which has clear ramifications for the traditional linear factor analysis. An alternative, non-linear factor analysis using polychoric correlations has become more readily available to applied researchers and thus more popular. Consequently, we also review the assumptions and data-screening issues involved in this method. Throughout the paper, we demonstrate these procedures using an historic data set of nine cognitive ability variables.

  20. Processes and Procedures for Application of CFD to Nuclear Reactor Safety Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Richard W. Johnson; Richard R. Schultz; Patrick J. Roache; Ismail B. Celik; William D. Pointer; Yassin A. Hassan

    2006-09-01

    Traditionally, nuclear reactor safety analysis has been performed using systems analysis codes such as RELAP5, which was developed at the INL. However, goals established by the Generation IV program, especially the desire to increase efficiency, have led to an increase in operating temperatures for the reactors. This increase pushes reactor materials to operate towards their upper temperature limits with respect to structural integrity. Because there will be some finite variation of the power density in the reactor core, there is a potential for local hot spots to occur in the reactor vessel. Hence, it has become apparent that detailed analysis will be required to ensure that local ‘hot spots’ do not exceed safety limits. It is generally accepted that computational fluid dynamics (CFD) codes are intrinsically capable of simulating fluid dynamics and heat transport locally because they are based on ‘first principles.’ Indeed, CFD analysis has reached a fairly mature level of development, including at the commercial level. However, CFD experts are aware that even though commercial codes are capable of simulating local fluid and thermal physics, great care must be taken in their application to avoid errors caused by such things as inappropriate grid meshing, low-order discretization schemes, lack of iterative convergence and inaccurate time-stepping. Just as important is the choice of a turbulence model for turbulent flow simulation. Turbulence models approximate the effects of turbulent transport of mass, momentum and energy, but are not necessarily applicable over wide ranges of flow types. Therefore, there is a well-recognized need to establish practices and procedures for the proper application of CFD to simulate flow physics accurately and establish the level of uncertainty of such computations. The present document represents contributions of CFD experts on what the basic practices, procedures and guidelines should be to aid CFD analysts to obtain accurate
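
    One concrete practice such guidelines typically prescribe is solution verification by Richardson extrapolation. The sketch below computes the observed order of accuracy and Roache's grid convergence index (GCI) from solutions on three systematically refined grids; the numerical values are illustrative.

    import math

    f1, f2, f3 = 0.9713, 0.9704, 0.9675   # fine, medium, coarse grid solutions
    r = 2.0                               # constant grid refinement ratio
    Fs = 1.25                             # safety factor for three-grid studies

    p = math.log((f3 - f2) / (f2 - f1)) / math.log(r)   # observed order of accuracy
    e21 = abs((f2 - f1) / f1)                           # relative error, fine pair
    gci_fine = Fs * e21 / (r**p - 1.0)                  # GCI on the fine grid
    print(f"observed order p = {p:.2f}, GCI(fine) = {100 * gci_fine:.3f}%")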

  1. Investigating the Hierarchical Factor Structure of the Fifth Edition of the 16PF: An Application of the Schmid-Leiman Orthogonalization Procedure.

    Science.gov (United States)

    Chernyshenko, Oleksandr S.; Stark, Stephen; Chan, Kim Yin

    2001-01-01

    Studied the unidimensionality of the 16 noncognitive scales of the Sixteen Personality Factor Questionnaire (16PF) and the hierarchical factor structure of the inventory. Results using the Schmid Leiman orthogonalization procedure (J. Schmid and J. Leiman, 1957) showed that the noncognitive multi-item composites could be factored into 16…
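
    The Schmid-Leiman transformation itself is a short matrix computation. In the sketch below, the loadings are illustrative, not 16PF estimates: given first-order item loadings F and second-order loadings g of those factors on a general factor, the orthogonalized solution has general-factor loadings F g and residualized group-factor loadings F diag(sqrt(1 - g^2)).

    import numpy as np

    F = np.array([[0.7, 0.0], [0.6, 0.0], [0.0, 0.8], [0.0, 0.5]])  # items x 2 factors
    g = np.array([0.6, 0.4])          # second-order loadings of the 2 factors

    general = F @ g                               # loadings on the general factor
    group = F @ np.diag(np.sqrt(1.0 - g**2))      # orthogonalized group loadings
    print(np.column_stack([general, group]))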

  2. Energy Consumption Analysis Procedure for Robotic Applications in different task motion

    Science.gov (United States)

    Ahmed, Iman; Aris, Ishak b.; Hamiruce Marhaban, Mohammad; Juraiza Ishak, Asnor

    2015-11-01

    This work proposes an energy analysis method for a humanoid robot, covering the energy chain from simple motion tasks to complex ones. The research developed a procedure suitable for the analysis, saving and modelling of energy consumption, not only in this type of robot but also in most robots that are based on electrical power as an energy source. The method was validated by accurate integration, using Matlab software, of the power consumption curve to calculate the energy of individual and multiple servo motors. This study can therefore be considered a procedure for energy analysis that utilizes the capabilities of laboratory instruments to measure the energy parameters. We performed various task motions with different angular speeds to find the speed limits in terms of robot stability and control strategy. A battery capacity investigation was carried out for several types of batteries to extract the power modelling equation and the energy density parameter for each battery type. Matlab software was used to implement the algorithm and to evaluate experimentally the amount of energy, which is represented by the area under the power curve. This provides a robust estimate of the required energy in different task motions, to be considered in energy saving (i.e., motion planning and real-time scheduling).
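
    The core energy computation, the area under a sampled power curve plus a battery-capacity conversion, can be sketched in a few lines; the power trace and pack capacity below are illustrative assumptions, not measured robot data.

    import numpy as np

    t = np.linspace(0.0, 10.0, 1001)                 # time [s]
    power = 5.0 + 2.0 * np.abs(np.sin(2 * t))        # sampled servo power draw [W]

    energy_J = np.trapz(power, t)                    # task energy = area under curve [J]
    battery_Wh = 20.0                                # assumed pack capacity [Wh]
    runtime_s = battery_Wh * 3600.0 / power.mean()   # rough runtime at the mean draw
    print(f"task energy: {energy_J:.1f} J, est. runtime: {runtime_s / 60:.1f} min")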

  3. Cardiac arrests in patients undergoing gastrointestinal endoscopy: A retrospective analysis of 73,029 procedures

    Directory of Open Access Journals (Sweden)

    Basavana Goudra

    2015-01-01

    Full Text Available Background/Aims: Airway difficulties leading to cardiac arrest are frequently encountered during propofol sedation in patients undergoing gastrointestinal (GI) endoscopy. With a noticeable increase in the use of propofol for endoscopic sedation, we decided to examine the incidence and outcome of cardiac arrests in patients undergoing GI endoscopy with sedation. Patients and Methods: In this retrospective study, cardiac arrest data obtained from the clinical quality improvement and local registry over 5 years were analyzed. The information on patients who sustained cardiac arrest attributable to sedation was studied in detail. The analysis included a comparison of cardiac arrests due to all causes until discharge (or death) versus cardiac arrests and deaths occurring during the procedure and in the recovery area. Results: The incidence of cardiac arrest and death (all causes, until discharge) was 6.07 and 4.28 per 10,000 in patients sedated with propofol, compared with 0.67 and 0.44 for non–propofol-based sedation. The incidence of cardiac arrest during and immediately after the procedure (recovery area) for all endoscopies was 3.92 per 10,000, of which 72% were airway management related. About 90.0% of all peri-procedural cardiac arrests occurred in patients who received propofol. Conclusions: The incidence of cardiac arrest and death is about 10 times higher in patients receiving propofol-based sedation compared with those receiving midazolam–fentanyl sedation. More than two thirds of these events occur during EGD and ERCP.

  4. A Procedure for the supercritical fluid extraction of coal samples, with subsequent analysis of extracted hydrocarbons

    Science.gov (United States)

    Kolak, Jonathan J.

    2006-01-01

    Introduction: This report provides a detailed, step-by-step procedure for conducting extractions with supercritical carbon dioxide (CO2) using the ISCO SFX220 supercritical fluid extraction system. Protocols for the subsequent separation and analysis of extracted hydrocarbons are also included in this report. These procedures were developed under the auspices of the project 'Assessment of Geologic Reservoirs for Carbon Dioxide Sequestration' (see http://pubs.usgs.gov/fs/fs026-03/fs026-03.pdf) to investigate possible environmental ramifications associated with CO2 storage (sequestration) in geologic reservoirs, such as deep (~1 km below land surface) coal beds. Supercritical CO2 has been used previously to extract contaminants from geologic matrices. Pressure-temperature conditions within deep coal beds may render CO2 supercritical. In this context, the ability of supercritical CO2 to extract contaminants from geologic materials may serve to mobilize noxious compounds from coal, possibly complicating storage efforts. There currently exists little information on the physicochemical interactions between supercritical CO2 and coal in this setting. The procedures described herein were developed to improve the understanding of these interactions and provide insight into the fate of CO2 and contaminants during simulated CO2 injections.

  5. A procedure for automated analysis of brief pumping tests of domestic wells.

    Science.gov (United States)

    Klusman, Kate

    2004-01-01

    A new computer program has been developed to automate analysis of brief single-well pumping tests. Adapted from a procedure developed by Picking (1994) that does not require measurement of the pumping rate, this new program is menu-driven and eliminates one significant source of imprecision in Picking's original method, namely, selection of "well function of u" values by interpolation in a lookup table. This new program has been applied to tests of 25 domestic wells penetrating bedrock, each pumped for <2 min.
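
    The improvement over lookup-table interpolation can be illustrated directly: the Theis "well function of u" is the exponential integral E1, which is available as a special function. The parameter values in the sketch are illustrative (consistent units assumed).

    import numpy as np
    from scipy.special import exp1

    def theis_drawdown(Q, T, S, r, t):
        """Drawdown s = Q / (4*pi*T) * W(u), with u = r^2 * S / (4*T*t)."""
        u = r**2 * S / (4.0 * T * t)
        return Q / (4.0 * np.pi * T) * exp1(u)   # W(u) = exp1(u), no table needed

    print(theis_drawdown(Q=100.0, T=50.0, S=1e-4, r=10.0, t=np.array([0.01, 0.1, 1.0])))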

  6. Hierarchical Direct Time Integration Method and Adaptive Procedure for Dynamic Analysis

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    A new hierarchical direct time integration method for structural dynamic analysis is developed by using Taylor series expansions in each time step. Very accurate results can be obtained by increasing the order of the Taylor series. Furthermore, the local error can be estimated by simply comparing the solutions obtained by the proposed method with higher-order solutions. This local estimate is then used to develop an adaptive order-control technique. Numerical examples are given to illustrate the performance of the present method and its adaptive procedure.
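
    A minimal version of the scheme, assuming an undamped single-degree-of-freedom oscillator (not the paper's examples), is sketched below: each step sums a Taylor series whose derivatives follow a simple recurrence, the local error is estimated from the next Taylor term, and the order is raised when the estimate exceeds a tolerance.

    import math

    def taylor_step(x, v, w, h, P):
        # Derivatives of x'' = -w^2 x follow the recurrence d^{n+2}x = -w^2 d^n x.
        d = [x, v]
        for n in range(2, P + 2):
            d.append(-w * w * d[n - 2])
        x_new = sum(d[n] * h**n / math.factorial(n) for n in range(P + 1))
        v_new = sum(d[n + 1] * h**n / math.factorial(n) for n in range(P + 1))
        err = abs(d[P + 1]) * h ** (P + 1) / math.factorial(P + 1)  # next Taylor term
        return x_new, v_new, err

    w, h, tol = 2.0 * math.pi, 0.05, 1e-10
    x, v, P = 1.0, 0.0, 4
    for _ in range(200):
        x_new, v_new, err = taylor_step(x, v, w, h, P)
        if err > tol and P < 20:
            P += 1                     # simple adaptive order control
            continue
        x, v = x_new, v_new
    print(P, x)                        # x stays close to cos(w*t)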

  7. Procedure for the record, calculation and analysis of costs at the Post Company of Cuba.

    Directory of Open Access Journals (Sweden)

    María Luisa Lara Zayas

    2012-12-01

    Full Text Available The Cuban enterprise sector is immersed in important changes leading to a new economic model, which requires increasing labor productivity and improving economic efficiency by means of the rational use of material, financial and human resources. The present work proposes a procedure, based on the application of cost techniques, for the recording, calculation and analysis of activity costs at the Post Company of Cuba in Sancti Spiritus, with the objective of obtaining greater efficiency from the rational use of resources.

  8. An Effective Procedure for the RAPD Analysis of Hemp Cannabis sativa L.

    Institute of Scientific and Technical Information of China (English)

    Song Shujuan; Robert C. Clarke; Shao Hong

    2001-01-01

    China has a great resource of Cannabis. Research on the taxonomy and morphology of Chinese Cannabis has been carried out, but so far no molecular genetic research has been published. Random amplified polymorphic DNA (RAPD) is a suitable technique for molecular genetic research on Cannabis. In this experiment, using Cannabis herbarium specimens as a source of genetic material, the relevant conditions of the polymerase chain reaction (PCR) (i.e., concentration gradients of Mg2+, dNTPs and Taq DNA polymerase, annealing temperature, annealing time and number of reaction cycles) were examined separately. An effective procedure for the RAPD analysis of Cannabis was obtained.

  9. Radiological management of patients with urinary obstruction following urinary diversion procedures: technical factors, complications, long-term management and outcome. Experience with 378 procedures.

    LENUS (Irish Health Repository)

    Maher, M M

    2012-02-03

    We aimed to assess the management, by interventional radiology techniques, of patients with urinary diversion procedures (UD) complicated by urinary obstruction (UO). A 12-year electronic database of interventional cases was searched for urinary access in patients with UD. Patients' records were assessed for aetiology of obstruction, indication for procedure, types of interventional radiology, complications and outcome. Management issues included frequency of visits for catheter care, type of catheter placement and technical problems associated with catheter maintenance. Three hundred and seventy-eight procedures were carried out in 25 patients (mean age 70 years; male:female ratio 13:12). Indications for UD were malignancy (n = 22) and neuropathic bladder (n = 3). UD included ileal conduits (n = 17), cutaneous ureterostomy (n = 3 (2 patients)) and sigmoid colon urinary conduit (n = 6). In most patients, catheters were initially placed antegradely through a nephrostomy tract, but subsequent access was through the UD. Twenty of 25 patients had unilateral stents whereas 5 had bilateral stents (8-10-Fr pigtail catheters, 20-45 cm in length). The mean number of procedures, including catheter changes, was 15 ± 4 per patient, and 331 of 378 procedures (87%) were carried out on an outpatient basis. Since catheter placement, 11 patients required hospital admission on 22 occasions for catheter-related complications. Ureteric strictures in patients with UD can be successfully managed by interventional radiology.

  10. Patient Dose During Carotid Artery Stenting With Embolic-Protection Devices: Evaluation With Radiochromic Films and Related Diagnostic Reference Levels According to Factors Influencing the Procedure

    Energy Technology Data Exchange (ETDEWEB)

    D'Ercole, Loredana, E-mail: l.dercole@smatteo.pv.it [Fondazione IRCCS Policlinico San Matteo, Department of Medical Physics (Italy); Quaretti, Pietro; Cionfoli, Nicola [Fondazione IRCCS Policlinico San Matteo, Department of Radiology (Italy); Klersy, Catherine [Fondazione IRCCS Policlinico San Matteo, Biometry and Clinical Epidemiology Service, Research Department (Italy); Bocchiola, Milena [Fondazione IRCCS Policlinico San Matteo, Department of Medical Physics (Italy); Rodolico, Giuseppe; Azzaretti, Andrea [Fondazione IRCCS Policlinico San Matteo, Department of Radiology (Italy); Lisciandro, Francesco [Fondazione IRCCS Policlinico San Matteo, Department of Medical Physics (Italy); Cascella, Tommaso; Zappoli Thyrion, Federico [Fondazione IRCCS Policlinico San Matteo, Department of Radiology (Italy)

    2013-04-15

    To measure the maximum entrance skin dose (MESD) on patients undergoing carotid artery stenting (CAS) using embolic-protection devices (EPDs), to analyze the dependence of dose and exposure parameters on anatomical, clinical and technical factors affecting the procedure complexity, to obtain some local diagnostic reference levels (DRLs), and to evaluate whether exceeding the DRLs is related to procedure complexity. MESDs were evaluated with radiochromic films in 31 patients (mean age 72 ± 7 years). Five of 33 (15%) procedures used a proximal EPD, and 28 of 33 (85%) procedures used a distal EPD. Local DRLs were derived from the recorded exposure parameters in 93 patients (65 men and 28 women, mean age 73 ± 9 years) undergoing 96 CAS procedures with proximal (33%) or distal (67%) EPD. Four bilateral lesions were included. MESD values (mean 0.96 ± 0.42 Gy) were <2 Gy without relevant dependence on procedure complexity. Local DRL values for kerma area product (KAP), fluoroscopy time (FT) and number of frames (NFR) were 269 Gy·cm², 28 minutes and 251, respectively. Only simultaneous bilateral treatment was associated with KAP (odds ratio [OR] 10.14, 95% confidence interval [CI] 1-102.7, p < 0.05) and NFR overexposures (OR 10.8, 95% CI 1.1-109.5, p < 0.05). Type I aortic arch decreased the risk of FT overexposure (OR 0.4, 95% CI 0.1-0.9, p = 0.042), and stenosis ≥90% increased the risk of NFR overexposure (OR 2.8, 95% CI 1.1-7.4, p = 0.040). At multivariable analysis, stenosis ≥90% (OR 2.8, 95% CI 1.1-7.4, p = 0.040) and bilateral treatment (OR 10.8, 95% CI 1.1-109.5, p = 0.027) were associated with overexposure for two or more parameters. Skin doses are not problematic in CAS with EPD because these procedures rarely lead to doses >2 Gy.

  11. Housing price forecastability: A factor analysis

    DEFF Research Database (Denmark)

    Bork, Lasse; Møller, Stig Vinther

    of the model stays high at longer horizons. The estimated factors are strongly statistically significant according to a bootstrap resampling method which takes into account that the factors are estimated regressors. The simple three-factor model also contains substantial out-of-sample predictive power...

  12. A Second Generation Nonlinear Factor Analysis.

    Science.gov (United States)

    Etezadi-Amoli, Jamshid; McDonald, Roderick P.

    1983-01-01

    Nonlinear common factor models with polynomial regression functions, including interaction terms, are fitted by simultaneously estimating the factor loadings and common factor scores, using maximum likelihood and least squares methods. A Monte Carlo study gives support to a conjecture about the form of the distribution of the likelihood ratio…

  13. Procedure for implementation of temperature-dependent mechanical property capability in the Engineering Analysis Language (EAL) system

    Science.gov (United States)

    Glass, David E.; Robinson, James C.

    1990-01-01

    A procedure is presented to allow the use of temperature dependent mechanical properties in the Engineering Analysis Language (EAL) System for solid structural elements. This is accomplished by including a modular runstream in the main EAL runstream. The procedure is applicable for models with multiple materials and with anisotropic properties, and can easily be incorporated into an existing EAL runstream. The procedure (which is applicable for EAL elastic solid elements) is described in detail, followed by a description of the validation of the routine. A listing of the EAL runstream used to validate the procedure is included in the Appendix.

  14. Pairwise Comparison Procedures for One-Way Analysis of Variance Designs. Research Report.

    Science.gov (United States)

    Zwick, Rebecca

    Research in the behavioral and health sciences frequently involves the application of one-factor analysis of variance models. The goal may be to compare several independent groups of subjects on a quantitative dependent variable or to compare measurements made on a single group of subjects on different occasions or under different conditions. In…

  15. Analysis of the Impact of Transparency, Corruption, Openness in Competition and Tender Procedures on Public Procurement in the Czech Republic

    Directory of Open Access Journals (Sweden)

    František Ochrana

    2014-01-01

    Full Text Available This study analyses the impact of transparency and openness to competition in public procurement in the Czech Republic. The problems of the Czech procurement market have been demonstrated on the analysis of a sample of contracts awarded by local government entities. From among a set of factors influencing the efficiency of public procurement, we closely analyse transparency, resilience against corruption, openness, effective administrative award procedure, and formulation of appropriate evaluation criteria for selecting the most suitable bid. Some assumptions were confirmed, including a positive effect of open procedures on the level of competition on the supply side as well as the dominant use of price criteria only. The latter case is probably often caused by low skills of workers at the contracting entities, as well as the lack of resources in public budgets. However, we have to reject the persistent legend of “undershooting” tender prices and subsequently increasing the final prices of public contracts. Increases of final prices are very limited. Based on the results of the analyses presented, we argue that the main problem of the Czech public procurement market lies in a rather low competence of administrators who are not able to use non-price criteria more often.

  16. Sedation for pediatric radiological procedures: analysis of potential causes of sedation failure and paradoxical reactions

    Energy Technology Data Exchange (ETDEWEB)

    Karian, V.E.; Burrows, P.E.; Connor, L. [Dept. of Radiology, Children's Hospital, Boston, MA (United States); Zurakowski, D. [Dept. of Biostatistics, Children's Hospital, Boston, MA (United States); Mason, K.P. [Dept. of Anesthesiology, Children's Hospital, Boston, MA (United States)

    1999-11-01

    Background. Sedation for diagnostic imaging and interventional radiologic procedures in pediatrics has greatly increased over the past decade. With appropriate patient selection and monitoring, serious adverse effects are infrequent, but failure to sedate and paradoxical reactions do occur. Objective. The purpose of this study was to determine, among patients undergoing sedation for radiologic procedures, the incidence of sedation failure and paradoxical reaction to pentobarbital and to identify potentially correctable causes. Materials and methods. Records of 1665 patients who were sedated in the radiology department from 1 November 1997 to 1 July 1998 were reviewed. Patients failing sedation or experiencing paradoxical reaction were compared with respect to sex, age group, diagnosis, scan type, time of day, NPO status, use of IV contrast and type of sedation agent using the Fisher exact test, Pearson chi-square, analysis of variance (ANOVA), the Student t-test, and logistic regression. Results. Data analysis revealed a sedation failure rate of 1 % and paradoxical reaction rate of 1.2 %. Stepwise multiple logistic regression revealed that the only significant independent multivariate predictor of failure was the need for the administration of a combination of pentobarbital, fentanyl, and midazolam IV. Conclusion. The low rate of sedation failure and paradoxical reactions to pentobarbital was near optimal and probably cannot be improved with the currently available sedatives. (orig.)

  17. Improved enteral tolerance following step procedure: systematic literature review and meta-analysis.

    Science.gov (United States)

    Fernandes, Melissa A; Usatin, Danielle; Allen, Isabel E; Rhee, Sue; Vu, Lan

    2016-10-01

    Surgical management of children with short bowel syndrome (SBS) changed with the introduction of the serial transverse enteroplasty procedure (STEP). We conducted a systematic review and meta-analysis using MEDLINE and SCOPUS to determine if children with SBS had improved enteral tolerance following STEP. Studies were included if information about a child's pre- and post-STEP enteral tolerance was provided. A random effects meta-analysis provided a summary estimate of the proportion of children with enteral tolerance increase following STEP. From 766 abstracts, seven case series involving 86 children were included. Mean percent tolerance of enteral nutrition improved from 35.1 to 69.5. Sixteen children had no enteral improvement following STEP. A summary estimate showed that 87 % (95 % CI 77-95 %) of children who underwent STEP had an increase in enteral tolerance. Compilation of the literature supports the belief that SBS subjects' enteral tolerance improves following STEP. Enteral nutritional tolerance is a measure of efficacy of STEP and should be presented as a primary or secondary outcome. By standardizing data collection on children undergoing STEP procedure, better determination of nutritional benefit from STEP can be ascertained.
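
    The pooling step can be sketched with the standard DerSimonian-Laird random-effects estimator on the logit scale; the study counts below are invented placeholders, not the seven included series.

    import numpy as np

    events = np.array([10, 8, 15, 12, 9, 11, 5])    # children improving, per study
    n      = np.array([12, 10, 18, 14, 10, 13, 9])  # children per study

    # Logit transform with a 0.5 continuity correction; within-study variances.
    p = (events + 0.5) / (n + 1.0)
    y = np.log(p / (1.0 - p))
    v = 1.0 / (events + 0.5) + 1.0 / (n - events + 0.5)

    w = 1.0 / v
    y_fixed = np.sum(w * y) / np.sum(w)
    Q = np.sum(w * (y - y_fixed) ** 2)              # Cochran's Q heterogeneity
    k = len(y)
    tau2 = max(0.0, (Q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))

    w_re = 1.0 / (v + tau2)                         # random-effects weights
    y_re = np.sum(w_re * y) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    pooled = 1.0 / (1.0 + np.exp(-y_re))            # back-transform to a proportion
    ci = 1.0 / (1.0 + np.exp(-(y_re + np.array([-1.96, 1.96]) * se)))
    print(f"pooled proportion: {pooled:.2f}, 95% CI: {ci.round(2)}")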

  18. Simulation and analysis of data for enhancing low cycle fatigue test procedures

    Energy Technology Data Exchange (ETDEWEB)

    Sarajaervi, U.; Cronvall, O. [VTT Technical Research Centre of Finland (Finland)

    2006-04-15

    The simulation and analysis of data for enhancing low cycle fatigue test procedures is discussed in this report. The analysed materials are an austenitic stainless piping steel and an austenitic weld material. This project continues the work performed in 2003 and 2004. The fatigue test data treatment application developed within the project in 2004 for the preparation of fatigue data has been developed further, and more fatigue test data have been analysed with the application than in 2004. In addition, numerical fatigue simulations were performed with the FEM code ABAQUS. With the fatigue test data treatment application one can, for example, calculate certain relevant characteristic values cycle by cycle (e.g., the elastic range) and form the sets of cyclic parameter values needed as part of ABAQUS analysis input files. The hardening properties of the metals were modelled with both isotropic and kinematic hardening models. The further development of the application included trimming of the analysed data, and consequently of the resulting hardening parameters. The need for trimming arose from the fact that the analysed fatigue test data present some scatter, caused by the limited accuracy of the test equipment and the sampling rate. The hardening parameters obtained from the application were used in the subsequent ABAQUS analyses, and the fatigue test data were then compared with the ABAQUS simulation results. After finding a procedure to trim the result data to obtain smooth cyclic hardening curves, hardening and softening could be reproduced in the ABAQUS analyses with reasonable accuracy. The modelling of fatigue-induced crack initiation and growth was not considered in this study. On the other hand, a considerable part of the fatigue life of nuclear power plant (NPP) piping components is spent in the phase preceding the initiation and growth of cracks. (au)

  19. Disruptive Event Biosphere Dose Conversion Factor Analysis

    Energy Technology Data Exchange (ETDEWEB)

    M. Wasiolek

    2000-12-28

    The purpose of this report was to document the process leading to, and the results of, development of radionuclide-, exposure scenario-, and ash thickness-specific Biosphere Dose Conversion Factors (BDCFs) for the postulated postclosure extrusive igneous event (volcanic eruption) at Yucca Mountain. BDCF calculations were done for seventeen radionuclides. The selection of radionuclides included those that may be significant dose contributors during the compliance period of up to 10,000 years, as well as radionuclides of importance for up to 1 million years postclosure. The approach documented in this report takes into account human exposure during three different phases at the time of, and after, the volcanic eruption. Calculations of disruptive event BDCFs used the GENII-S computer code in a series of probabilistic realizations to propagate the uncertainties of input parameters into the output. The pathway analysis included consideration of different exposure pathways' contributions to the BDCFs. BDCFs for volcanic eruption, when combined with the concentration of radioactivity deposited by the eruption on the soil surface, allow calculation of potential radiation doses to the receptor of interest. Calculation of radioactivity deposition is outside the scope of this report, and so is the transport of contaminated ash from the volcano to the location of the receptor. The integration of the biosphere modeling results (BDCFs) with the outcomes of the other component models is accomplished in the Total System Performance Assessment (TSPA), in which doses are calculated to the receptor of interest from radionuclides postulated to be released to the environment from the potential repository at Yucca Mountain.

  20. Nominal Performance Biosphere Dose Conversion Factor Analysis

    Energy Technology Data Exchange (ETDEWEB)

    M. Wasiolek

    2000-12-21

    The purpose of this report was to document the process leading to development of the Biosphere Dose Conversion Factors (BDCFs) for the postclosure nominal performance of the potential repository at Yucca Mountain. BDCF calculations concerned twenty-four radionuclides. This selection included sixteen radionuclides that may be significant nominal performance dose contributors during the compliance period of up to 10,000 years, five additional radionuclides of importance for up to 1 million years postclosure, and three relatively short-lived radionuclides important for the human intrusion scenario. Consideration of radionuclide buildup in soil caused by previous irrigation with contaminated groundwater was taken into account in the BDCF development. The effect of climate evolution, from the current arid conditions to a wetter and cooler climate, on the BDCF values was evaluated. The analysis included consideration of different exposure pathways' contributions to the BDCFs. Calculations of nominal performance BDCFs used the GENII-S computer code in a series of probabilistic realizations to propagate the uncertainties of input parameters into the output. BDCFs for the nominal performance, when combined with the concentrations of radionuclides in groundwater, allow calculation of potential radiation doses to the receptor of interest. Calculated estimates of radionuclide concentration in groundwater result from the saturated zone modeling. The integration of the biosphere modeling results (BDCFs) with the outcomes of the other component models is accomplished in the Total System Performance Assessment (TSPA) to calculate doses to the receptor of interest from radionuclides postulated to be released to the environment from the potential repository at Yucca Mountain.

  1. Shifted factor analysis for the separation of evoked dependent MEG signals

    Energy Technology Data Exchange (ETDEWEB)

    Kohl, F; Wuebbeler, G; Baer, M; Elster, C [Physikalisch-Technische Bundesanstalt (PTB), Abbestrasse 2-12, 10587 Berlin (Germany); Kolossa, D; Orglmeister, R, E-mail: florian.kohl@ptb.d [Technische Universitaet Berlin, Strasse des 17. Juni 135, 10623 Berlin (Germany)

    2010-08-07

    Decomposition of evoked magnetoencephalography (MEG) data into their underlying neuronal signals is an important step in the interpretation of these measurements. Often, independent component analysis (ICA) is employed for this purpose. However, ICA can fail because, for evoked MEG data, the neuronal signals may not be statistically independent. We therefore consider an alternative approach based on the recently proposed shifted factor analysis model, which does not assume statistical independence of the neuronal signals. We suggest the application of this model in the time domain and present an estimation procedure based on a Taylor series expansion. We show, using synthetic evoked MEG data, that the proposed procedure can successfully separate evoked dependent neuronal signals where standard ICA fails. Latency estimation of neuronal signals is an inherent part of the proposed procedure, and we demonstrate that the resulting latency estimates are superior to those obtained by a maximum likelihood method.

  2. Investigation and analysis of clinical trial research nurse to perform standard operating procedures

    Institute of Scientific and Technical Information of China (English)

    Yan Lin; Yan-Yun Wu; Mei-Hua Wu; Xiu-Yu Yang; Ming Zhou

    2016-01-01

    Objective: The aim of this study was to investigate the situations and factors that cause nurses not to follow standard operating procedures (SOPs) during the clinical trial process. Methods: Five cases involving patients enrolled in a clinical trial were divided into two groups, pre-SOP training and post-SOP training, to compare and observe process problems and whether nurses followed SOPs in clinical trials. The causes of problems were analyzed and corrective measures were proposed. Results: Our results indicate significant improvement in compliance with SOPs after training. There were three occurrences of irregular behavior after training compared with 21 occurrences before training. Conclusions: The quality of clinical trials can be improved if nurses strictly follow SOPs.

  3. Factor analysis of serogroups botanica and aurisina of Leptospira biflexa.

    Science.gov (United States)

    Cinco, M

    1977-11-01

    Factor analysis was performed on serovars of the Botanica and Aurisina serogroups of Leptospira biflexa. The results show the arrangement of the main serovar-specific and serogroup-specific factors, as well as the antigens shared with serovars of heterologous serogroups.

  4. An Item Factor Analysis of the Mooney Problem Check List

    Science.gov (United States)

    Stewart, David W.; Deiker, Thomas

    1976-01-01

    Explores the factor structure of the Mooney Problem Check List (MPCL) at the junior and senior high school level by undertaking a large obverse factor analysis of item responses in three adolescent criterion groups. (Author/DEP)

  5. Full Information Item Factor Analysis of the FCI

    Science.gov (United States)

    Hagedorn, Eric

    2010-02-01

    Traditional factor analytical methods, such as principal factors or principal components analysis, are inappropriate techniques for analyzing dichotomously scored responses to standardized tests or concept inventories because they lead to artifactual factors often referred to as "difficulty factors." Full information item factor analysis (Bock, Gibbons and Muraki, 1988), based on Thurstone's multiple factor model and calculated using marginal maximum likelihood estimation, is an appropriate technique for such analyses. Force Concept Inventory (Hestenes, Wells and Swackhamer, 1992) data from 1582 university students completing an introductory physics course were analyzed using the full information item factor analysis software TESTFACT v. 4. Analyzing the statistical significance of successive factors added to the model, using chi-squared statistics, led to a six-factor model interpretable in terms of the conceptual dimensions of the FCI.

  6. Exploratory matrix factorization for PET image analysis.

    Science.gov (United States)

    Kodewitz, A; Keck, I R; Tomé, A M; Lang, E W

    2010-01-01

    Features are extracted from PET images employing exploratory matrix factorization techniques such as nonnegative matrix factorization (NMF). Appropriate features are fed into classifiers such as a support vector machine or a random forest tree classifier. Automatic feature extraction and classification are achieved with a high classification rate that is robust and reliable and can help in an early diagnosis of Alzheimer's disease.
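
    A minimal sketch of such a pipeline follows, with synthetic nonnegative data standing in for PET images and illustrative hyperparameters.

    import numpy as np
    from sklearn.decomposition import NMF
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    X = rng.random((200, 1024))            # 200 "images", 1024 voxels, nonnegative
    y = rng.integers(0, 2, size=200)       # stand-in diagnostic labels

    W = NMF(n_components=10, init="nndsvda", max_iter=500,
            random_state=0).fit_transform(X)      # per-image NMF feature vectors

    Xtr, Xte, ytr, yte = train_test_split(W, y, random_state=0)
    clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(Xtr, ytr)
    print(clf.score(Xte, yte))             # ~0.5 here, since the labels are random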

  7. Analysis of Factors Affecting the Quality of an E-commerce Website Using Factor Analysis

    Directory of Open Access Journals (Sweden)

    Saurabh Mishra

    2014-12-01

    Full Text Available The purpose of this study is to identify factors which affect the quality and effectiveness of an e-commerce website, and which in turn strongly affect customer satisfaction and ultimately customer retention and loyalty. This research paper examines a set of 23 variables and integrates them into 4 factors which affect the quality of a website. An online questionnaire survey was conducted to generate statistics regarding the preferences of e-commerce website users. The 23 variables taken from the customer survey are generalized into 4 major factors using exploratory factor analysis: content, navigation, services and interface design. The responses come mainly from students between 18 and 25 years of age and cover different B2C commercial websites. The identified variables are important with respect to the current competition in the market, as the services of an e-commerce website also play a major role in ensuring customer satisfaction. Further research in this domain can be done for websites’ versions for mobile devices.
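
    The variable-to-factor reduction can be sketched with a maximum-likelihood factor analysis and varimax rotation; the synthetic responses below stand in for the 23-item survey, and the factor count mirrors the study's four factors.

    import numpy as np
    from sklearn.decomposition import FactorAnalysis

    rng = np.random.default_rng(0)
    n_respondents, n_items, n_factors = 300, 23, 4
    latent = rng.normal(size=(n_respondents, n_factors))
    loadings = rng.normal(scale=0.7, size=(n_factors, n_items))
    X = latent @ loadings + rng.normal(scale=0.5, size=(n_respondents, n_items))

    fa = FactorAnalysis(n_components=n_factors, rotation="varimax").fit(X)
    # Items loading strongly on each rotated factor suggest its interpretation
    # (e.g., content, navigation, services, interface design in the study).
    for f, row in enumerate(fa.components_):
        print(f"factor {f}: items {np.argsort(-np.abs(row))[:5]}")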

  8. Development of a Quality Assurance Procedure for Dose Volume Histogram Analysis

    Science.gov (United States)

    Davenport, David A.

    The role of the dose-volume histogram (DVH) is rapidly expanding in radiation oncology treatment planning. DVHs are already relied upon to differentiate between two similar plans and evaluate organ-at-risk dosage. Their role will become even more important as progress continues towards implementing biologically based treatment planning systems. Therefore it is imperative that the accuracy of DVHs is evaluated and reappraised after any major software or hardware upgrades, affecting a treatment planning system (TPS). The purpose of this work is to create and implement a comprehensive quality assurance procedure evaluating dose volume histograms to insure their accuracy while satisfying American College of Radiology guidelines. Virtual phantoms of known volumes were created in Pinnacle TPS and exposed to different beam arrangements. Variables including grid size and slice thickness were varied and their effects were analyzed. The resulting DVHs were evaluated by comparison to the commissioned percent depth dose values using a custom Excel spreadsheet. After determining the uncertainty of the DVH based on these variables, multiple second check calculations were performed using MIM Maestro and Matlab software packages. The uncertainties of the DVHs were shown to be less than +/- 3%. The average uncertainty was shown to be less than +/- 1%. The second check procedures resulted in mean percent differences less than 1% which confirms the accuracy of DVH calculation in Pinnacle and the effectiveness of the quality assurance template. The importance of knowing the limits of accuracy of the DVHs, which are routinely used to assess the quality of clinical treatment plans, cannot be overestimated. The developed comprehensive QA procedure evaluating the accuracy of the DVH statistical analysis will become a part of our clinical arsenal for periodic tests of the treatment planning system. It will also be performed at the time of commissioning and after any major software
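
    The quantity under QA can be computed independently of the TPS in a few lines: a cumulative DVH is simply the fraction of structure volume receiving at least each dose level. The dose grid and structure mask below are synthetic placeholders.

    import numpy as np

    rng = np.random.default_rng(0)
    dose = rng.normal(60.0, 3.0, size=(50, 50, 50)).clip(min=0)  # dose grid [Gy]
    mask = np.zeros_like(dose, dtype=bool)
    mask[20:30, 20:30, 20:30] = True                             # structure voxels

    d = np.sort(dose[mask])
    bins = np.linspace(0.0, d.max(), 200)
    # Cumulative DVH: fraction of structure volume receiving >= each dose level.
    volume_frac = 1.0 - np.searchsorted(d, bins) / d.size
    d95 = bins[np.searchsorted(-volume_frac, -0.95)]             # rough D95 estimate
    print(f"D95 ~ {d95:.1f} Gy")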

  9. Exploring Technostress: Results of a Large Sample Factor Analysis

    OpenAIRE

    Steponas Jonušauskas; Agota Giedre Raisiene

    2016-01-01

    With reference to the results of a large sample factor analysis, the article aims to propose a framework for examining technostress in a population. The survey and principal component analysis of a sample consisting of 1013 individuals who use ICT in their everyday work were implemented in the research. 13 factors combine 68 questions and explain 59.13 per cent of the dispersion of the answers. Based on the factor analysis, the questionnaire was reframed and prepared to reasonably analyze the respondents’ an...

  10. Identification of noise in linear data sets by factor analysis

    Energy Technology Data Exchange (ETDEWEB)

    Roscoe, B.A.; Hopke, Ph.K. (Illinois Univ., Urbana (USA))

    1982-01-01

    A technique which has the ability to identify bad data points after the data have been generated is classical factor analysis. The ability of classical factor analysis to identify two different types of data errors makes it ideally suited for scanning large data sets. Since the results yielded by factor analysis indicate correlations between parameters, one must know something about the nature of the data set and the analytical techniques used to obtain it in order to confidently isolate errors.

  11. Analysis and Design Procedure of LVLP Sub-bandgap Reference - Development and Results

    Directory of Open Access Journals (Sweden)

    T. Urban

    2011-04-01

    Full Text Available This work presents a thorough analysis and design of a low-voltage low-power voltage reference circuit with a sub-bandgap output voltage. The outcome of the analysis and the resulting design rules are intended to be general and suitable for similar topologies with only minor modifications. The general analysis is followed by the selection of a specific topology. The given topology is analyzed for particular parameters which are standard industrial circuit specifications. These parameters are expressed mathematically, some are simplified, and equivalent circuits are used. The analysis and proposed design procedure focus mainly on the versatility of the IP block. The circuit is suited to low-voltage low-power design, drawing less than 10 μA of supply current at a 1.3 V supply voltage. For testing purposes, a complete transistor-level design was created and verified over a wide range of supply voltages (1.3 to 3.3 V) and temperatures (-45 to 95 °C), all in a concrete 0.35 μm IC design process using Mentor Graphics® and Cadence® software.

  12. Effect Factors of Liquid Scintillation Analysis

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    Over the past decades, the liquid scintillation analysis (LSA) technique has remained one of the most popular experimental tools used for the quantitative analysis of radionuclides, especially low-energy β

  13. Kriging analysis of geochemical data obtained by sequential extraction procedure (BCR)

    Science.gov (United States)

    Fajkovic, Hana; Pitarević Svedružić, Lovorka; Prohić, Esad; Rončević, Sanda; Nemet, Ivan

    2015-04-01

    Field examination and laboratory analysis were performed to establish whether the nonsanitary landfill Bastijunski brig has a negative influence on Vransko Lake, situated only 1500 m away. Vransko Lake is Croatia's largest natural lake, and it is part of a Nature Park and ornithological reserve, which indicates its high biodiversity. It is therefore necessary to understand the environmental processes and the complex sediment/water interface. Lake sediments are considered a good "sink" and are often the final recipients of anthropogenic and natural pollutants, through adsorption onto the organic or clay fraction of the sediments. Geochemical investigations were carried out on more than 50 lake sediment cores situated in different parts of the lake. Speciation of heavy metals by the modified BCR sequential extraction procedure, with the addition of the first step of the sequential extraction procedure of Tessier and analysis of the residual fraction by aqua regia, was used to determine the amounts of selected elements (Al, Cd, Cr, Co, Cu, Fe, Mn, Ni, Pb, Zn) in different fractions. With such an approach it is possible to determine which elements will be extracted from sediment/soil under different environmental conditions, which can be a valuable tool for interpreting the mobile, potentially bioavailable fraction of the elements that presents a threat to biota in the case of contaminant concentration magnification. All sediment and soil samples were analyzed by inductively coupled plasma atomic emission spectrometry. More accurate interpretation of the data is an advantage of the BCR sequential extraction procedure, while the large number of point-type data could be considered a drawback. Due to the high amount of data, graphical presentation is advisable, and an interpolation tool is the first choice for point-type data, as it makes predictions for a defined area based on the measurements. Distribution maps of the analysed elements were obtained by kriging as a geostatistical method and
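
    The kriging step can be sketched with the pykrige package (an assumption here; the coordinates and concentrations are invented placeholders): ordinary kriging of one element's measured concentrations onto a regular grid, returning both the predictions and the kriging variance.

    import numpy as np
    from pykrige.ok import OrdinaryKriging

    rng = np.random.default_rng(0)
    x = rng.uniform(0.0, 1500.0, 50)                     # core easting [m]
    y = rng.uniform(0.0, 3000.0, 50)                     # core northing [m]
    conc = rng.lognormal(mean=3.0, sigma=0.4, size=50)   # e.g. Pb in the mobile fraction

    ok = OrdinaryKriging(x, y, conc, variogram_model="spherical")
    gridx = np.arange(0.0, 1500.0, 50.0)
    gridy = np.arange(0.0, 3000.0, 50.0)
    z, ss = ok.execute("grid", gridx, gridy)   # prediction map and kriging variance
    print(z.shape, ss.shape)                   # (len(gridy), len(gridx)) each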

  14. A Bloch-based procedure for dispersion analysis of lattices with periodic time-varying properties

    Science.gov (United States)

    Vila, Javier; Pal, Raj Kumar; Ruzzene, Massimo; Trainiti, Giuseppe

    2017-10-01

    We present a procedure for the systematic estimation of the dispersion properties of linear discrete systems with periodic time-varying coefficients. The approach relies on the analysis of a single unit cell, making use of Bloch theorem along with the application of a harmonic balance methodology over an imposed solution ansatz. The solution of the resulting eigenvalue problem is followed by a procedure that selects the eigen-solutions corresponding to the ansatz, which is a plane wave defined by a frequency-wavenumber pair. Examples on spring-mass superlattices demonstrate the effectiveness of the method at predicting the dispersion behavior of linear elastic media. The matrix formulation of the problem suggests the broad applicability of the proposed technique. Furthermore, it is shown how dispersion can inform about the dynamic behavior of time-modulated finite lattices. The technique can be extended to multiple areas of physics, such as acoustic, elastic and electromagnetic systems, where periodic time-varying material properties may be used to obtain non-reciprocal wave propagation.
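
    For the time-invariant special case, the single-unit-cell Bloch analysis the authors describe reduces, for a monatomic spring-mass chain, to a scalar eigenproblem per wavenumber; a minimal Python sketch (unit stiffness, mass and lattice constant are assumed values) is:

        import numpy as np

        K, m, a = 1.0, 1.0, 1.0  # assumed stiffness, mass, lattice constant
        wavenumbers = np.linspace(-np.pi / a, np.pi / a, 201)  # first Brillouin zone

        for k in wavenumbers[::50]:
            # Bloch-reduced dynamic stiffness of one cell: D(k) = 2K(1 - cos(ka)),
            # so omega(k) = sqrt(D/m) = 2*sqrt(K/m)*|sin(ka/2)|
            D = 2.0 * K * (1.0 - np.cos(k * a))
            print(f"k = {k:+.3f}, omega = {np.sqrt(D / m):.3f}")

    The time-modulated case treated in the paper replaces this scalar relation with the eigenvalue problem produced by the harmonic balance expansion of the imposed plane-wave ansatz.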

  15. A Multi-Criteria Decision Making Procedure for the Analysis of an Energy System

    Institute of Scientific and Technical Information of China (English)

    Ming-Shan Zhu; Bu-Xuan Wang; et al.

    1992-01-01

    In the course of improving and/or designing an energy system, neither purely economic criteria, although they are the overriding criteria, nor purely energy-based criteria, although they are the emphasized criteria, can separately handle real-world situations in a satisfactory manner. Economic effectiveness and energy efficiency must be considered simultaneously to address the conflicting and non-commensurable characteristics of these multiple criteria. An iterative and interactive approach to formulating and solving non-linear multi-criteria decision making problems for the analysis of an energy system is proposed. It allows the decision maker (DM) to learn from the available information and dynamically change his mind. Criterion functions can be treated as objective functions, as constraints, or as something in between by the DM. After a series of iterations and interactive procedures, a preferred solution can be selected from the non-inferior set, considering thermodynamic and economic criteria simultaneously. A simple example of the design of a heat exchanger is used to illustrate the procedure.

  16. An open source software project for obstetrical procedure scheduling and occupancy analysis.

    Science.gov (United States)

    Isken, Mark W; Ward, Timothy J; Littig, Steven J

    2011-03-01

    Increases in the rate of births via cesarean section and induced labor have led to challenging scheduling and capacity planning problems for hospital inpatient obstetrical units. We present occupancy and patient scheduling models to help address these challenges. These patient flow models can be used to explore the relationship between procedure scheduling practices and the resulting occupancy on inpatient obstetrical units such as labor and delivery and postpartum. The models capture numerous important characteristics of inpatient obstetrical patient flow such as time of day and day of week dependent arrivals and length of stay, multiple patient types and clinical interventions, and multiple patient care units with inter-unit patient transfers. We have used these models in several projects at different hospitals involving design of procedure scheduling templates and analysis of inpatient obstetrical capacity. In the development of these models, we made heavy use of open source software tools and have released the entire project as a free and open source model and software toolkit.
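
    As a toy illustration of the occupancy side of such patient flow models (not the authors' open source toolkit, whose arrival and length-of-stay structure is far richer), one can sample time-of-day dependent arrivals and lognormal lengths of stay and count the hourly census; all rates below are invented:

        import numpy as np

        rng = np.random.default_rng(7)
        days = 60
        hourly_rate = np.array([0.2] * 6 + [1.5] * 10 + [0.5] * 8)  # arrivals/hour

        events = []
        for d in range(days):
            for h in range(24):
                for _ in range(rng.poisson(hourly_rate[h])):
                    start = d * 24 + h
                    los = rng.lognormal(mean=np.log(36.0), sigma=0.4)  # hours
                    events.append((start, start + los))

        grid = np.arange(days * 24)
        census = np.array([sum(s <= t < e for s, e in events) for t in grid])
        print(f"mean census: {census.mean():.1f}, "
              f"95th percentile: {np.percentile(census, 95):.0f}")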

  17. System Requirements Analysis for a Computer-based Procedure in a Research Reactor Facility

    Energy Technology Data Exchange (ETDEWEB)

    Park, Jaek Wan; Jang, Gwi Sook; Seo, Sang Moon; Shin, Sung Ki [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2014-10-15

    Operation support tools are under consideration for application to research reactors. In particular, as part of a full digitalization of the main control room, the application of a computer-based procedure (CBP) system has been required as a part of the man-machine interface system, because it has an impact on the operating staffing and human errors of a research reactor. A well-made CBP can address many of the routine problems related to human error in the use of conventional, hard-copy operating procedures, address the staffing issues of a research reactor, and reduce human errors by minimizing the operator's routine tasks. A CBP for a research reactor has not been proposed yet. Moreover, CBPs developed for nuclear power plants have powerful and varied technical functions to cover complicated plant operation situations, many of which may not be required for a research reactor. Thus, it is not reasonable to apply such a CBP to a research reactor directly, and customizing it is not cost-effective. Therefore, a compact CBP should be developed for a research reactor. To establish the computer-based procedure system requirements for a research reactor, this paper addresses international standards and previous practices in nuclear plants, and introduces the high-level requirements derived from the system requirements analysis activity as the first stage of system implementation.

  18. Analysis of selection procedures to determine priority areas for payment for water ecosystem services programs

    Directory of Open Access Journals (Sweden)

    Ana Feital Gjorup

    2016-03-01

    The approach of ecosystem services has shown promise for the evaluation of interactions between ecosystems and society, integrating environmental and socioeconomic concepts that require interdisciplinary knowledge. However, its usefulness in decision making is limited due to information gaps. This study was therefore developed in order to contribute to the application of principles of ecosystem services in decision-making for water resources management. It aims to identify procedures and methodologies used in decision-making to select priority areas to be included in projects or compensation programs for environmental services. To do so, we searched the technical and scientific literature describing methods and experiences used to select priority areas. Key steps in the process of selecting priority areas were identified; a survey was then conducted of the procedures adopted for each key step in the selected literature; and, finally, the information collected was analyzed and classified. Considering the study's sample, we noted that the selection of priority areas was based on the direct use of predetermined criteria; the use of indicators and spatial analyses is still scarce. We must highlight, however, that most of the analyzed documents did not aim to describe the process of selecting priority areas in detail, which may have resulted in some omissions. Although these conditions may limit the analysis in this study, the results presented here allow us to identify the main objectives, actions and criteria used to select priority areas for programs or compensation projects for environmental services.

  19. Model correction factor method for system analysis

    DEFF Research Database (Denmark)

    Ditlevsen, Ove Dalager; Johannesen, Johannes M.

    2000-01-01

    ...several locally most central points exist without there being a simple geometric definition of the corresponding failure modes, such as is the case for collapse mechanisms in rigid plastic hinge models for frame structures. Taking as the simplified idealized model a model of similarity with the elaborate model... but with clearly defined failure modes, the MCFM can be started from each idealized single-mode limit state in turn to identify a locally most central point on the elaborate limit state surface. Typically this procedure leads to a smaller number of locally most central failure points on the elaborate limit state... surface than exist in the idealized model...

  20. Robotic right hemicolectomy: Analysis of 108 consecutive procedures and multidimensional assessment of the learning curve.

    Science.gov (United States)

    Parisi, Amilcare; Scrucca, Luca; Desiderio, Jacopo; Gemini, Alessandro; Guarino, Salvatore; Ricci, Francesco; Cirocchi, Roberto; Palazzini, Giorgio; D'Andrea, Vito; Minelli, Liliana; Trastulli, Stefano

    2017-03-01

    Surgeons tend to view the robotic right colectomy (RRC) as an ideal beginning procedure to gain proficiency in robotic general and colorectal surgery. Nevertheless, oncological RRC, especially if performed with intracorporeal ileocolic anastomosis confectioning, cannot be considered a technically easy procedure. The aim of this study was to assess the learning curve of RRC performed for oncological purposes and to evaluate its safety and efficacy by investigating the perioperative and pathology outcomes in the different learning phases. Data on a consecutive series of 108 patients undergoing RRC with intracorporeal anastomosis between June 2011 and September 2015 at our institution were prospectively collected to evaluate surgical and short-term oncological outcomes. CUSUM (Cumulative Sum) and Risk-Adjusted (RA) CUSUM analyses were performed to provide a multidimensional assessment of the learning curve for the RRC surgical procedure. Intraoperative, postoperative and pathological outcomes were compared among the learning curve phases. Based on the CUSUM and RA-CUSUM analyses, the learning curve for RRC could be divided into 3 different phases: phase 1, the initial learning period (1st-44th case); phase 2, the consolidation period (45th-90th case); and phase 3, the mastery period (91st-108th case). Operative time, conversion to open surgery rate and the number of harvested lymph nodes improved significantly through the three learning phases. The learning curve for oncological RRC with intracorporeal anastomosis is thus composed of 3 phases. Our data indicate that the performance of RRC is safe from an oncological point of view in all three phases of the learning curve. However, the technical skills necessary to significantly reduce operative time and the conversion to open surgery rate, and to significantly improve the number of harvested lymph nodes, were achieved after 44 procedures. These data suggest that it might be prudent to start the RRC learning curve
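
    The CUSUM idea itself is simple to sketch: accumulate the deviation of each consecutive case's operative time from a reference value and look for changes in slope. The Python example below uses synthetic times whose phase sizes mirror the 44/46/18 split reported above; the reference time is an assumption:

        import numpy as np

        rng = np.random.default_rng(0)
        times = np.concatenate([rng.normal(260, 30, 44),   # learning phase
                                rng.normal(220, 25, 46),   # consolidation
                                rng.normal(200, 20, 18)])  # mastery
        target = 225.0  # assumed reference operative time (minutes)

        cusum = np.cumsum(times - target)
        print(f"CUSUM peaks at case {np.argmax(cusum) + 1}")

    A rising CUSUM marks cases slower than the reference; the peak gives a crude estimate of the first breakpoint. Risk-adjusted CUSUM replaces the fixed reference with a per-case expectation from a risk model.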

  1. Distal wound complications following pedal bypass: analysis of risk factors.

    Science.gov (United States)

    Robison, J G; Ross, J P; Brothers, T E; Elliott, B M

    1995-01-01

    Wound complications of the pedal incision continue to compromise successful limb salvage following aggressive revascularization. Significant distal wound disruption occurred in 14 of 142 (9.8%) patients undergoing pedal bypass with autogenous vein for limb salvage between 1986 and 1993. One hundred forty-two pedal bypass procedures were performed for rest pain in 66 patients and tissue necrosis in 76. Among the 86 men and 56 women, 76% were diabetic and 73% were black. All but eight patients had a history of diabetes and/or tobacco use. Eight wounds were successfully managed with maintenance of patent grafts from 5 to 57 months. Exposure of a patent graft precipitated amputation in three patients, as did graft occlusion in an additional patient. One graft was salvaged by revision to the peroneal artery and one was covered by a local bipedicled flap. Multiple regression analysis identified three factors associated with wound complications at the pedal incision site: diabetes mellitus (p = 0.03), age > 70 years (p = 0.03), and rest pain (p = 0.05). Ancillary techniques ("pie-crusting") to reduce skin tension resulted in no distal wound problems among 15 patients considered to be at greatest risk for wound breakdown. Attention to the technique of distal graft tunneling, a wound closure that reduces tension, and control of swelling by avoiding dependency and using gentle elastic compression assume crucial importance in minimizing pedal wound complications following pedal bypass.

  2. A Numerical Procedure for Model Identifiability Analysis Applied to Enzyme Kinetics

    DEFF Research Database (Denmark)

    Van Daele, Timothy; Van Hoey, Stijn; Gernaey, Krist;

    2015-01-01

    The proper calibration of models describing enzyme kinetics can be quite challenging. In the literature, different procedures are available to calibrate these enzymatic models in an efficient way. However, in most cases the model structure is already decided on prior to the actual calibration... exercise, thereby bypassing the challenging task of model structure determination and identification. Parameter identification problems can thus lead to ill-calibrated models with low predictive power and large model uncertainty. Every calibration exercise should therefore be preceded by a proper model... identifiability analysis in the sense of Walter and Pronzato (1997), which can be easily set up for any type of model. In this paper the proposed approach is applied to the forward reaction rate of the enzyme kinetics proposed by Shin and Kim (1998). Structural identifiability analysis showed that no local structural model problems were occurring...

  3. A neutron activation analysis procedure for the determination of uranium, thorium and potassium in geologic samples

    Science.gov (United States)

    Aruscavage, P. J.; Millard, H.T.

    1972-01-01

    A neutron activation analysis procedure was developed for the determination of uranium, thorium and potassium in basic and ultrabasic rocks. The three elements are determined in the same 0.5-g sample following a 30-min irradiation in a thermal neutron flux of 2×10^12 n·cm^-2·sec^-1. Following radiochemical separation, the nuclides 239U (T = 23.5 m), 233Th (T = 22.2 m) and 42K (T = 12.36 h) are measured by β-counting. A computer program is used to resolve the decay curves, which are complex owing to contamination and the growth of daughter activities. The method was used to determine uranium, thorium and potassium in the U.S. Geological Survey standard rocks DTS-1, PCC-1 and BCR-1. For 0.5-g samples the limits of detection for uranium, thorium and potassium are 0.7, 1.0 and 10 ppb, respectively. © 1972 Akadémiai Kiadó.
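
    The decay-curve resolution step lends itself to a short sketch. If the half-lives of the contributing activities are known, the initial activities follow from linear least squares on exponential basis functions; the two-component example below (a 23.5-min 239U-like activity plus a longer-lived 42K-like contaminant) uses synthetic counts and only illustrates the idea, not the program used in the study:

        import numpy as np

        half_lives = np.array([23.5, 12.36 * 60.0])       # minutes
        lam = np.log(2.0) / half_lives
        t = np.linspace(0.0, 120.0, 40)                   # counting times (min)

        rng = np.random.default_rng(6)
        a_true = np.array([500.0, 50.0])                  # initial count rates
        basis = np.exp(-np.outer(t, lam))                 # design matrix
        counts = basis @ a_true + rng.normal(0.0, 5.0, t.size)

        a_est, *_ = np.linalg.lstsq(basis, counts, rcond=None)
        print(np.round(a_est, 1))  # recovered initial activities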

  4. Continuous intraoperative temperature measurement and surgical site infection risk: analysis of anesthesia information system data in 1008 colorectal procedures.

    Science.gov (United States)

    Melton, Genevieve B; Vogel, Jon D; Swenson, Brian R; Remzi, Feza H; Rothenberger, David A; Wick, Elizabeth C

    2013-10-01

    To investigate the association between intraoperative temperature and surgical site infection (SSI) in colorectal surgery using anesthesia information system data. Continuously measured intraoperative anesthesia information system temperature data for adult abdominal colorectal surgery procedures at a large tertiary center over 1 year were linked to 30-day American College of Surgeons National Surgical Quality Improvement Program SSI outcomes. Univariable and multivariable analyses relating SSI to descriptive temperature statistics, absolute and relative temperature threshold times, and other clinically relevant variables were performed. Overall, 1008 patients (48% female, median age: 53 years) underwent major colorectal procedures (7% emergent, 72% open, 173 ± 95 minutes mean procedure time) with a median intraoperative temperature of 36.0°C, active rewarming in 92%, and 1-hour presurgical antibiotic administration in 91%. Thirty-day overall and organ/space infection rates were 17.4% (175) and 8.5% (86). Maximum, minimum, ending, and median temperatures were similar for those with or without SSI (36.6°C vs 36.5°C, 34.9°C vs 35.0°C, 36.4°C vs 36.2°C, and 36.1°C vs 36.0°C, P = not significant), and percent minutes using incremental cutoffs failed to correlate SSI with temperature. Absolute minutes above higher temperature cutoffs correlated with SSI because of longer procedure times. On multivariable analysis, the factors associated with SSI were preoperative diabetes [odds ratio: 1.81 (1.07-3.07), P = 0.022] and blood loss of more than 500 mL [odds ratio: 1.61 (1.01-2.58), P = 0.047]. Although active rewarming remains an accepted and valid process measure, highly granular anesthesia information system temperature data did not demonstrate a correlation between temperature measures and SSI. SSI prevention efforts should focus on more efficacious interventions as opposed to currently mandated, publicly reported normothermia measures.
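
    The multivariable step reported above is an ordinary logistic regression; a hedged Python sketch with simulated data (the variables and coefficients are placeholders, chosen so that temperature carries no effect, echoing the finding) is:

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(1)
        n = 1008
        X = np.column_stack([
            rng.integers(0, 2, n),        # preoperative diabetes (0/1)
            rng.integers(0, 2, n),        # blood loss > 500 mL (0/1)
            rng.normal(36.0, 0.4, n),     # median intraoperative temperature
        ])
        logit_p = -2.0 + 0.6 * X[:, 0] + 0.5 * X[:, 1]  # no temperature term
        y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit_p))).astype(float)

        model = sm.Logit(y, sm.add_constant(X)).fit(disp=False)
        print(np.exp(model.params[1:]))  # odds ratios for the predictors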

  5. Comparison of Nuss Procedure and Modified Ravitch Procedure via Meta-Analysis

    Institute of Scientific and Technical Information of China (English)

    黄维佳; 覃家锦; 陈小三

    2011-01-01

    [Objective] To compare the efficacy of the Nuss procedure with that of the modified Ravitch procedure in the treatment of pectus excavatum by using meta-analysis. [Methods] Clinical studies reported between 2002 and 2008 comparing the outcomes of Nuss procedures and Ravitch procedures in patients with pectus excavatum were collected and analyzed using RevMan 4.2. [Results] In total, 8 reports involving 889 pectus excavatum patients were enrolled in this study (427 in the Nuss group, 462 in the Ravitch group). The meta-analysis indicated a significant advantage for the Nuss group in operative time, blood loss and length of hospital stay (P < 0.05), while the Ravitch group had an advantage in postoperative pain duration and complications (P < 0.05). Long-term satisfaction was not significantly different between the two groups (P = 0.92). [Conclusion] The Nuss procedure offers a clear minimally invasive advantage in the treatment of pectus excavatum and is worth promoting.
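
    The pooling behind such a meta-analysis can be sketched with inverse-variance (fixed-effect) weighting of per-study mean differences; the study-level numbers below are placeholders, not the reviewed data:

        import numpy as np

        # (mean_nuss, sd_nuss, n_nuss, mean_ravitch, sd_ravitch, n_ravitch)
        studies = np.array([
            (75, 20, 50, 110, 25, 55),
            (80, 18, 60, 105, 30, 58),
            (70, 22, 45, 115, 28, 50),
        ], dtype=float)

        m1, s1, n1, m2, s2, n2 = studies.T
        md = m1 - m2                          # mean difference per study
        var = s1**2 / n1 + s2**2 / n2         # variance of each difference
        w = 1.0 / var                         # inverse-variance weights
        pooled = np.sum(w * md) / np.sum(w)
        se = np.sqrt(1.0 / np.sum(w))
        print(f"pooled MD = {pooled:.1f} min, 95% CI = "
              f"({pooled - 1.96 * se:.1f}, {pooled + 1.96 * se:.1f})")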

  6. Analysis of routine and novel sperm selection methods for the treatment of infertile patients undergoing ICSI procedure

    Directory of Open Access Journals (Sweden)

    Leila Azadi

    2014-06-01

    Background: The successful outcome of Assisted Reproductive Techniques (ART) depends on many factors, such as sperm preparation methods, embryo quality and factors that may affect implantation. In Intra-cytoplasmic Sperm Injection (ICSI), sperm selection is mainly based on sperm motility and morphology; however, studies have revealed that these parameters cannot guarantee genomic health. Recently, researchers have focused on new sperm selection methods based on cellular and molecular properties. Therefore, the aim of this review article was to introduce the routine and novel sperm selection methods, the advantages and disadvantages of these methods, and their clinical analysis. Methods: Papers related to routine and novel sperm selection methods, based on function tests and clinical outcomes, were retrieved from PubMed, Entrez and other ISI-related databases. Results: Novel sperm selection methods that are based on the selection of a single sperm (like IMSI) or on the ability of sperm to bind to the zona are time-consuming and costly. In addition, assessment of DNA fragmentation is difficult for the selected sperm. However, methods that select a population of spermatozoa, like Zeta, are less time-consuming and suitable for assessment of sperm chromatin integrity. Conclusion: In clinical applications, simultaneous use of traditional and novel approaches may improve ICSI outcome. However, further studies are needed to select an appropriate sperm selection procedure.

  7. Advanced human-system interface design review guideline. Evaluation procedures and guidelines for human factors engineering reviews

    Energy Technology Data Exchange (ETDEWEB)

    O'Hara, J.M.; Brown, W.S. [Brookhaven National Lab., Upton, NY (United States); Baker, C.C.; Welch, D.L.; Granda, T.M.; Vingelis, P.J. [Carlow International Inc., Falls Church, VA (United States)

    1994-07-01

    Advanced control rooms will use advanced human-system interface (HSI) technologies that may have significant implications for plant safety in that they will affect the operator's overall role in the system, the method of information presentation, and the ways in which operators interact with the system. The U.S. Nuclear Regulatory Commission (NRC) reviews the HSI aspects of control rooms to ensure that they are designed to good human factors engineering principles and that operator performance and reliability are appropriately supported to protect public health and safety. The principal guidance available to the NRC, however, was developed more than ten years ago, well before these technological changes. Accordingly, the human factors guidance needs to be updated to serve as the basis for NRC review of these advanced designs. The purpose of this project was to develop a general approach to advanced HSI review and the human factors guidelines to support NRC safety reviews of advanced systems. This two-volume report provides the results of the project. Volume 1 describes the development of the Advanced HSI Design Review Guideline (DRG), including (1) its theoretical and technical foundation, (2) a general model for the review of advanced HSIs, (3) guideline development in both hard-copy and computer-based versions, and (4) the tests and evaluations performed to develop and validate the DRG. Volume 1 also includes a discussion of the gaps in available guidance and a methodology for addressing them. Volume 2 provides the guidelines to be used for advanced HSI review and the procedures for their use.

  8. Xenogeneic collagen matrix for periodontal plastic surgery procedures: a systematic review and meta-analysis.

    Science.gov (United States)

    Atieh, M A; Alsabeeha, N; Tawse-Smith, A; Payne, A G T

    2016-08-01

    Several clinical trials describe the effectiveness of xenogeneic collagen matrix (XCM) as an alternative to surgical mucogingival procedures for the treatment of marginal tissue recession and augmentation of insufficient zones of keratinized tissue (KT). The aim of this systematic review and meta-analysis was to evaluate the clinical and patient-centred outcomes of XCM compared with other mucogingival procedures. Applying the guidelines of the Preferred Reporting Items for Systematic Reviews and Meta-Analyses statement, randomized controlled trials were searched for in electronic databases, complemented by hand searching. The risk of bias was assessed using the Cochrane Collaboration's Risk of Bias tool, and data were analysed using statistical software. A total of 645 studies were identified, of which six trials were included, with 487 mucogingival defects in 170 participants. The overall meta-analysis showed that connective tissue graft (CTG) in conjunction with the coronally advanced flap (CAF) had a significantly higher percentage of complete/mean root coverage and mean recession reduction than XCM. Insufficient evidence was found to determine any significant differences in width of KT between XCM and CTG. XCM had significantly higher mean root coverage, recession reduction and gain in KT compared with CAF alone. No significant differences in patients' aesthetic satisfaction were found between XCM and CTG, except for postoperative morbidity, which favoured XCM. Operating time was significantly reduced with the use of XCM compared with CTG, but not compared with CAF alone. There is no evidence to demonstrate the effectiveness of XCM in achieving greater root coverage, recession reduction and gain in KT compared with CTG plus CAF. Superior short-term results in treating root coverage compared with CAF alone are possible. There is limited evidence that XCM may improve aesthetic satisfaction, reduce postoperative morbidity and shorten the operating time. Further long

  9. Simultaneous Spectrophotometric Determination of Four Components including Acetaminophen by Target Factor Analysis

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    UV spectrophotometric Target Factor Analysis (TFA) was used for the simultaneous determination of four components (acetaminophen, guaifenesin, caffeine, chlorphenamine maleate) in cough syrup. The TFA computer program was written in VC++. The difficulty posed by the overlapping absorption spectra of the four compounds was overcome by this procedure. The experimental results show that the average recoveries of the components are all in the range 98.9%-106.8%, and each component is determined satisfactorily without any pre-separation.
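
    The regression core of such a multicomponent determination is easy to sketch. With known pure-component spectra S (wavelengths x components), a mixture spectrum a yields concentrations c by least squares (the pseudoinverse solution); full target factor analysis additionally tests candidate spectra against the principal-component space of the measured data, a step omitted here. The spectra below are synthetic:

        import numpy as np

        rng = np.random.default_rng(2)
        wavelengths, n_components = 120, 4  # e.g. the four syrup components
        S = np.abs(rng.normal(size=(wavelengths, n_components)))  # pure spectra
        c_true = np.array([0.8, 0.3, 0.5, 0.1])
        a = S @ c_true + rng.normal(scale=0.01, size=wavelengths)  # mixture

        c_est, *_ = np.linalg.lstsq(S, a, rcond=None)
        print(np.round(c_est, 3))  # recovered concentrations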

  10. A single extraction and HPLC procedure for simultaneous analysis of phytosterols, tocopherols and lutein in soybeans.

    Science.gov (United States)

    Slavin, Margaret; Yu, Liangli Lucy

    2012-12-15

    A saponification/extraction procedure and a high performance liquid chromatography (HPLC) analysis method were developed and validated for the simultaneous analysis of phytosterols, tocopherols and lutein (a carotenoid) in soybeans. Separation was achieved on a phenyl column with a ternary, isocratic solvent system of acetonitrile, methanol and water (48:22.5:29.5, v/v/v). Evaporative light scattering detection (ELSD) was used to quantify β-sitosterol, stigmasterol, campesterol, and α-, δ- and γ-tocopherols, while lutein was quantified with visible light absorption at 450 nm. Peak identification was verified by retention times and spikes with external standards. Standard curves were constructed (R(2)>0.99) to allow for sample quantification. The recovery of the saponification and extraction was demonstrated via analysis of spiked samples. Also, the accuracy of the results for four soybeans obtained using the described saponification and HPLC method was validated against existing methods. This method offers a more efficient alternative to individual methods for quantifying lutein, tocopherols and sterols in soybeans.

  11. Time series analysis of dental care procedures in Brazilian public services, 1994-2007

    National Research Council Canada - National Science Library

    Roger Keller Celeste; Jacqueline Furtado Vital; Washington Leite Junger; Michael Eduardo Reichenheim

    2011-01-01

      The objectives of this study were to describe the time series of monthly rates of five dental care procedures in the Brazilian public health system and to assess changes in trends of dental procedures from 1994 to 2007...

  12. High procedural fairness heightens the effect of outcome favorability on self-evaluations : An attributional analysis

    NARCIS (Netherlands)

    Brockner, J.; Heuer, L.; Magner, N.; Folger, R.; Umphress, E.; Bos, K. van den; Vermunt, Riël; Magner, M.; Siegel, P.

    2003-01-01

    Previous research has shown that outcome favorability and procedural fairness often interact to influence employees' work attitudes and behaviors. Moreover, the form of the interaction effect depends upon the dependent variable. Relative to when procedural fairness is low, high procedural fairness:

  13. An analysis of marketing authorisation applications via the mutual recognition and decentralised procedures in Europe

    NARCIS (Netherlands)

    Ebbers, Hans C; Langedijk, Joris; Bouvy, Jacoline C; Hoekman, Jarno; Boon, Wouter P C; de Jong, Jean Philippe; De Bruin, Marie L

    2015-01-01

    PURPOSE: The aim of this study is to provide a comprehensive overview of the outcomes of marketing authorisation applications via the mutual recognition and decentralised procedures (MRP/DCP) and assess determinants of licensing failure during CMDh referral procedures. METHODS: All MRP/DCP procedure

  14. Cost of bariatric surgery and factors associated with increased cost: an analysis of national inpatient sample.

    Science.gov (United States)

    Khorgami, Zhamak; Aminian, Ali; Shoar, Saeed; Andalib, Amin; Saber, Alan A; Schauer, Philip R; Brethauer, Stacy A; Sclabas, Guido M

    2017-08-01

    In the current healthcare environment, bariatric surgery centers need to be cost-effective while maintaining quality. The aim of this study was to evaluate the national cost of bariatric surgery and to identify the factors associated with a higher cost. We performed a retrospective analysis of the 2012-2013 Healthcare Cost and Utilization Project - Nationwide Inpatient Sample (HCUP-NIS). We included all patients with a diagnosis of morbid obesity (ICD9 278.01) and a Diagnosis Related Group code related to procedures for obesity who underwent Roux-en-Y gastric bypass (RYGB), sleeve gastrectomy (SG), or adjustable gastric banding (AGB) as their primary procedure. We converted "hospital charges" to "cost" using hospital-specific cost-to-charge ratios, and adjusted for inflation using the annual consumer price index. Increased cost was defined as the top 20th percentile of expenditure, and its associated factors were analyzed using multivariate logistic regression. A total of 45,219 patients (20,966 RYGBs, 22,380 SGs, and 1,873 AGBs) were included. The median (interquartile range) calculated costs for RYGB, SG, and AGB were $12,543 ($9,970-$15,857), $10,531 ($8,248-$13,527), and $9,219 ($7,545-$12,106), respectively (P<.001). Robotic-assisted procedures had the highest impact on cost (odds ratio 3.6, 95% confidence interval 3.2-4). The hospital cost of RYGB and SG increased linearly with the length of hospital stay and almost doubled after 7 days. Furthermore, multivariate analysis showed that certain co-morbidities and concurrent procedures were associated with increased cost. Factors contributing to the cost variation of bariatric procedures include co-morbidities, the robotic platform, the complexity of surgery, and hospital length of stay. Copyright © 2017 American Society for Bariatric Surgery. Published by Elsevier Inc. All rights reserved.

  15. FACTOR ANALYSIS OF THE ELKINS HYPNOTIZABILITY SCALE

    Science.gov (United States)

    Elkins, Gary; Johnson, Aimee K.; Johnson, Alisa J.; Sliwinski, Jim

    2015-01-01

    Assessment of hypnotizability can provide important information for hypnosis research and practice. The Elkins Hypnotizability Scale (EHS) consists of 12 items and was developed to provide a time-efficient measure for use in both clinical and laboratory settings. The EHS has been shown to be a reliable measure with support for convergent validity with the Stanford Hypnotic Susceptibility Scale, Form C (r = .821, p < .001). The current study examined the factor structure of the EHS, which was administered to 252 adults (51.3% male; 48.7% female). Average time of administration was 25.8 minutes. Four factors selected on the basis of the best theoretical fit accounted for 63.37% of the variance. The results of this study provide an initial factor structure for the EHS. PMID:25978085

  16. Long-Term Restoration of Anterior Shoulder Stability: A Retrospective Analysis of Arthroscopic Bankart Repair Versus Open Latarjet Procedure.

    Science.gov (United States)

    Zimmermann, Stefan M; Scheyerer, Max J; Farshad, Mazda; Catanzaro, Sabrina; Rahm, Stefan; Gerber, Christian

    2016-12-07

    Various operative techniques are used for treating recurrent anterior shoulder instability, and good mid-term results have been reported. The purpose of this study was to compare shoulder stability after treatment with the 2 commonly performed procedures, the arthroscopic Bankart soft-tissue repair and the open coracoid transfer according to Latarjet. A comparative, retrospective case-cohort analysis of 360 patients (364 shoulders) who had primary repair for recurrent anterior shoulder instability between 1998 and 2007 was performed. The minimum duration of follow-up was 6 years. Reoperations, overt recurrent instability (defined as recurrent dislocation or subluxation), apprehension, the subjective shoulder value (SSV), sports participation, and overall satisfaction were recorded. An open Latarjet procedure was performed in 93 shoulders, and an arthroscopic Bankart repair was done in 271 shoulders. Instability or apprehension persisted or recurred after 11% (10) of the 93 Latarjet procedures and after 41.7% (113) of the 271 arthroscopic Bankart procedures. Overt instability recurred after 3% of the Latarjet procedures and after 28.4% (77) of the Bankart procedures. In the Latarjet group, 3.2% of the patients were not satisfied with their result, compared with 13.2% in the Bankart group (p = 0.007). Kaplan-Meier analysis of survivorship, with apprehension as an endpoint, documented the durability of the Latarjet procedure and the decreasing effectiveness of the arthroscopic Bankart repair over time. Twenty percent of the first recurrences after arthroscopic Bankart repair occurred no earlier than 91 months postoperatively, as opposed to the rare recurrences after osseous reconstruction, which occurred in the early postoperative period, with only rare late failures. In this retrospective cohort study, the arthroscopic Bankart procedure was inferior to the open Latarjet procedure for repair of recurrent anterior shoulder dislocation. The difference between the 2 procedures with respect to the quality of outcomes

  17. Characterization of physical properties of tissue factor-containing microvesicles and a comparison of ultracentrifuge-based recovery procedures.

    Science.gov (United States)

    Ettelaie, Camille; Collier, Mary E W; Maraveyas, Anthony; Ettelaie, Rammile

    2014-01-01

    Microvesicles were isolated from the conditioned media of 3 cell lines (MDA-MB-231, AsPC-1 and A375) by ultracentrifugation at a range of relative centrifugal forces, and the tissue factor (TF) protein and activity, microvesicle number, size distribution and relative density were compared. Also, by expressing TF-tGFP in cells and isolating the microvesicles, the relative density of TF-containing microvesicles was established. Nanoparticle tracking analysis (NTA) indicated that the larger-diameter microvesicles (>200 nm) were primarily sedimented at 100,000g and possessed TF-dependent thrombin and factor Xa generation potential, while in the absence of factor VII, all microvesicles possessed some thrombin generation capacity. Immuno-precipitation of TF-containing microvesicles followed by NTA also indicated the range of these microvesicles to be 200-400 nm. Analysis of the microvesicles by gradient density centrifugation showed that the lower-density microvesicles were mainly present in the samples recovered at 100,000g and were associated with TF antigen and activity. Analysis of these fractions by NTA confirmed that they were principally composed of the larger-diameter microvesicles. Similar analysis of microvesicles from healthy or patient plasma supported the results obtained from conditioned media, indicating that TF activity was mainly associated with lower-density microvesicles. Furthermore, centrifugation of healthy plasma supplemented with TF-tGFP-containing microvesicles resulted in 67% retrieval of the fluorescent microvesicles at 100,000g, but only 26% could be recovered at 20,000g. Pre-centrifugation of conditioned media or plasma at 10,000g improved the speed and yield of recovered TF-containing microvesicles by subsequent centrifugation at either 20,000g or 100,000g. In conclusion, TF appears to be associated with low-density (1.03-1.08 g/ml), larger-diameter (200-350 nm) microvesicles.

  18. ANALYSIS OF EXTERNAL FACTORS AFFECTING THE PRICING

    Directory of Open Access Journals (Sweden)

    Irina A. Kiseleva

    2013-01-01

    The external factors influencing the formation of tariffs for commercial services are considered in the article. The external environment is known to be very diverse and changeable. Currently, pricing has become one of the key processes of the strategic development of a company. Pricing in the service sector, in turn, is highly susceptible to changes in the external environment. Its components directly or indirectly affect the services market, changing its established economic processes. As a rule, firms providing services cannot influence changes in external factors. However, the service market is very flexible, which enables businesses to reshape their pricing strategy and adapt it to the new environment.

  19. Function Allocation in Complex Socio-Technical Systems: Procedure usage in nuclear power and the Context Analysis Method for Identifying Design Solutions (CAMIDS) Model

    Science.gov (United States)

    Schmitt, Kara Anne

    This research aims to show that strict adherence to procedures and rigid compliance with process in the US nuclear industry may not prevent incidents or increase safety. According to the Institute of Nuclear Power Operations, the nuclear power industry has seen a recent rise in events, and this research claims that a contributing factor to this rise is organizational and cultural, and based on people's overreliance on procedures and policy. Understanding the proper balance of function allocation, automation and human decision-making is imperative to creating a nuclear power plant that is safe, efficient, and reliable. This research claims that new generations of operators are less engaged in active thinking because they have been instructed to follow procedures to a fault. According to operators, they were once expected to know the plant and its interrelations, but organizationally more importance is now placed on following procedure and policy. Literature reviews were performed, experts were questioned, and a model for context analysis was developed. The Context Analysis Method for Identifying Design Solutions (CAMIDS) Model was created, verified and validated through both peer review and application in real-world scenarios in active nuclear power plant simulators. These experiments supported the claim that strict adherence and rigid compliance to procedures may not increase safety, by studying the industry's propensity for following incorrect procedures and the cases in which this directly affects the safety or security of the plant. The findings of this research indicate that the younger generations of operators rely heavily on procedures, and the organizational pressure of required compliance with procedures may lead to incidents within the plant, because operators feel pressured into following the rules and policy above performing the correct actions in a timely manner. The findings support computer-based procedures, efficient alarm systems, and skill-of-the-craft matrices. The solution to

  20. Factor Analysis for Spectral Reconnaissance and Situational Understanding

    Science.gov (United States)

    2016-07-11

    Final Report: Factor Analysis for Spectral Reconnaissance and Situational Understanding. The Army has a critical need for enhancing situational understanding for dismounted soldiers and rapidly deployed tactical... ...based NP-hard design problems, by associating them with corresponding estimation problems.

  1. A Factor Analysis of the BSRI and the PAQ.

    Science.gov (United States)

    Edwards, Teresa A.; And Others

    Factor analysis of the Bem Sex Role Inventory (BSRI) and the Personality Attributes Questionnaire (PAQ) was undertaken to study the independence of the masculine and feminine scales within each instrument. Both instruments were administered to undergraduate education majors. Analysis of primary first and second order factors of the BSRI indicated…

  2. Exploratory Factor Analysis of African Self-Consciousness Scale Scores

    Science.gov (United States)

    Bhagwat, Ranjit; Kelly, Shalonda; Lambert, Michael C.

    2012-01-01

    This study replicates and extends prior studies of the dimensionality, convergent, and external validity of African Self-Consciousness Scale scores with appropriate exploratory factor analysis methods and a large gender balanced sample (N = 348). Viable one- and two-factor solutions were cross-validated. Both first factors overlapped significantly…

  3. Multigroup Confirmatory Factor Analysis: Locating the Invariant Referent Sets

    Science.gov (United States)

    French, Brian F.; Finch, W. Holmes

    2008-01-01

    Multigroup confirmatory factor analysis (MCFA) is a popular method for the examination of measurement invariance and specifically, factor invariance. Recent research has begun to focus on using MCFA to detect invariance for test items. MCFA requires certain parameters (e.g., factor loadings) to be constrained for model identification, which are…

  4. Factor Analysis of People Rather than Variables: Q and Other Two-Mode Factor Analytic Models.

    Science.gov (United States)

    Frederick, Brigitte N.

    Factor analysis attempts to study how different objects group together to form factors with the purposes of: (1) reducing the number of factorable entities (e.g., variables) with which the researcher needs to deal; (2) searching data for qualitative and quantitative differences; and (3) testing hypotheses (R. Gorsuch, 1983). While most factor…

  5. Chiral analysis of baryon form factors

    Energy Technology Data Exchange (ETDEWEB)

    Gail, T.A.

    2007-11-08

    This work presents an extensive theoretical investigation of the structure of the nucleon within the standard model of elementary particle physics. In particular, the long-range contributions to a number of form factors parametrizing the interactions of the nucleon with an electromagnetic probe are calculated. The theoretical framework for those calculations is chiral perturbation theory, the exact low-energy limit of Quantum Chromodynamics, which describes such long-range contributions in terms of a pion cloud. In this theory, a nonrelativistic leading one-loop order calculation of the form factors parametrizing the vector transition of a nucleon to its lowest-lying resonance, the Δ, a covariant calculation of the isovector and isoscalar vector form factors of the nucleon at next-to-leading one-loop order, and a covariant calculation of the isoscalar and isovector generalized vector form factors of the nucleon at leading one-loop order are performed. In order to perform consistent loop calculations in the covariant formulation of chiral perturbation theory, an appropriate renormalization scheme is defined in this work. All theoretical predictions are compared to phenomenology and results from lattice QCD simulations. These comparisons allow for a determination of the low-energy constants of the theory. Furthermore, the possibility of chiral extrapolation, i.e. the extrapolation of lattice data from simulations at large pion masses down to the small physical pion mass, is studied in detail. Statistical as well as systematic uncertainties are estimated for all results throughout this work. (orig.)

  6. Classification of ECG signals using LDA with factor analysis method as feature reduction technique.

    Science.gov (United States)

    Kaur, Manpreet; Arora, A S

    2012-11-01

    The analysis of the ECG signal, especially the QRS complex as the most characteristic wave in the ECG, is a widely accepted approach to study and classify cardiac dysfunctions. In this paper, wavelet coefficients calculated for the QRS complex are first taken as features. Next, factor analysis procedures without rotation and with orthogonal rotation (varimax, equimax and quartimax) are used for feature reduction. The procedure uses the 'Principal Component Method' to estimate component loadings. Further, classification has been done with an LDA classifier. The MIT-BIH arrhythmia database is used, and five types of beats (normal, PVC, paced, LBBB and RBBB) are considered for analysis. Accuracy, sensitivity and positive predictivity are the performance parameters used for comparing the feature reduction techniques. Results demonstrate that the equimax rotation method yields a maximum average accuracy of 99.056% for unknown data sets among the methods used.
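
    The pipeline described above (features, factor-analytic reduction with an orthogonal rotation, then LDA) can be sketched with scikit-learn, whose FactorAnalysis accepts rotation="varimax" in recent versions; the features here are random stand-ins for QRS wavelet coefficients, so the printed accuracy is chance level:

        import numpy as np
        from sklearn.decomposition import FactorAnalysis
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(3)
        n_beats, n_coeffs, n_classes = 500, 32, 5
        X = rng.normal(size=(n_beats, n_coeffs))  # stand-in wavelet features
        y = rng.integers(0, n_classes, n_beats)   # beat labels (N, PVC, ...)

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
        fa = FactorAnalysis(n_components=8, rotation="varimax").fit(X_tr)
        lda = LinearDiscriminantAnalysis().fit(fa.transform(X_tr), y_tr)
        print(f"held-out accuracy: {lda.score(fa.transform(X_te), y_te):.3f}")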

  7. Analytic standard errors for exploratory process factor analysis.

    Science.gov (United States)

    Zhang, Guangjian; Browne, Michael W; Ong, Anthony D; Chow, Sy Miin

    2014-07-01

    Exploratory process factor analysis (EPFA) is a data-driven latent variable model for multivariate time series. This article presents analytic standard errors for EPFA. Unlike standard errors for exploratory factor analysis with independent data, the analytic standard errors for EPFA take into account the time dependency in time series data. In addition, factor rotation is treated as the imposition of equality constraints on model parameters. Properties of the analytic standard errors are demonstrated using empirical and simulated data.

  8. A replication of a factor analysis of motivations for trapping

    Science.gov (United States)

    Schroeder, Susan; Fulton, David C.

    2015-01-01

    Using a 2013 sample of Minnesota trappers, we employed confirmatory factor analysis to replicate an exploratory factor analysis of trapping motivations conducted by Daigle, Muth, Zwick, and Glass (1998). We employed the same 25 items used by Daigle et al. and tested the same five-factor structure using a recent sample of Minnesota trappers. We also compared motivations in our sample to those reported by Daigle et al.

  9. Development of the thermal behavior analysis code DIRAD and the fuel design procedure for LMFBR

    Science.gov (United States)

    Nakae, N.; Tanaka, K.; Nakajima, H.; Matsumoto, M.

    1992-06-01

    It is very important to increase the fuel linear heat rating to improve the economy of an LMFBR without any degradation in safety. A reduction of the design margin is helpful in achieving high-power operation, and the development of a fuel design code and a design procedure is effective in reducing the design margin. The thermal behavior analysis code DIRAD has been developed with respect to fuel restructuring and gap conductance models. These models have been calibrated and revised using irradiation data for fresh fuel. The code is therefore found to be applicable to the thermal analysis of fresh fuel. The uncertainties in the fuel irradiation condition and fuel fabrication tolerance, together with the uncertainty of the code prediction, have major contributions to the design margin. In the current fuel design, the first two uncertainties contribute independently to the temperature increment. Another method, which can rationally explain the effect of the uncertainties on the temperature increment, is adopted here; the design margin may then be rationally reduced.

  10. An iterative statistical tolerance analysis procedure to deal with linearized behavior models

    Institute of Scientific and Technical Information of China (English)

    Antoine DUMAS; Jean-Yves DANTAN; Nicolas GAYTON; Thomas BLES; Robin LOEBL

    2015-01-01

    Tolerance analysis consists of analyzing the impact of variations on the mechanism behavior due to the manufacturing process. The goal is to predict the quality level of the mechanism at the design stage. The technique involves computing probabilities of failure of the mechanism in a mass production process. The various analysis methods have to consider the component variations as random variables and the worst configuration of gaps for over-constrained systems. This treatment varies as a function of the type of mechanism behavior and is realized by an optimization scheme combined with a Monte Carlo simulation. To simplify the optimization step, it is necessary to linearize the mechanism behavior into several parts. This study aims at analyzing the impact of the linearization strategy on the estimation of the probability of failure; a highly over-constrained mechanism with two pins and five cotters is used as an illustration. The purpose is to strike a balance among the model error caused by the linearization, the computing time, and the accuracy of the results. In addition, an iterative procedure is proposed for the assembly requirement to provide accurate results without running the entire Monte Carlo simulation.
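
    The Monte Carlo core of such a tolerance analysis is compact; the sketch below samples component dimensions, evaluates a linear stack-up condition and estimates the probability of failure. The gap function and tolerances are illustrative assumptions, not the pin-and-cotter mechanism of the study:

        import numpy as np

        rng = np.random.default_rng(4)
        n = 1_000_000
        # gap = housing - part_a - part_b; failure when gap < 0
        housing = rng.normal(30.00, 0.020, n)
        part_a = rng.normal(14.98, 0.015, n)
        part_b = rng.normal(14.97, 0.015, n)
        gap = housing - part_a - part_b

        print(f"estimated probability of failure: {np.mean(gap < 0.0):.2e}")

    For an over-constrained mechanism, the scalar gap expression would be replaced by an optimization over the gap configuration at each sample, which is exactly the step the linearization is meant to cheapen.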

  11. Assessment of alternate procedures for the seismic analysis of multiply supported piping systems

    Energy Technology Data Exchange (ETDEWEB)

    Subudhi, M.; Bezler, P.

    1985-06-01

    When response spectrum methods are used in the seismic analysis of piping systems, the response due to inertial action (the dynamic response) and the response due to the time-varying differential motions of the support points (the pseudo-static response) must both be determined. In this study, the adequacy and the degree of conservatism associated with the uniform response spectrum method, the center-of-mass response spectrum method and fourteen variants of the independent response spectrum method for computing the dynamic response, and five different methods for computing the pseudo-static response, were evaluated. For this purpose a sample of six piping systems, two of which were subjected to thirty-three earthquakes, was studied. For each system and seismic excitation, a multiple independent support excitation time history analysis was developed and used to provide a best estimate of the true response and to form the basis for comparison. A combination procedure to calculate the total responses is considered as well. Results are presented and compared to the corresponding responses evaluated using the current uniform response spectrum method and the center-of-mass response spectrum approach. Based on the results, recommendations concerning the use of the methods were developed.
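
    One standard combination step of the kind compared above can be sketched directly: modal dynamic responses combined by the square root of the sum of squares (SRSS), then combined with the pseudo-static part. The numbers are placeholders; the study's point is precisely to test many variants of such rules against time-history results:

        import numpy as np

        modal = np.array([12.0, 7.5, 3.1])     # per-mode spectral responses
        dynamic = np.sqrt(np.sum(modal**2))    # SRSS over modes
        pseudo_static = 5.0                    # support-motion response

        total_srss = np.sqrt(dynamic**2 + pseudo_static**2)
        total_abs = dynamic + pseudo_static    # more conservative combination
        print(f"SRSS: {total_srss:.2f}, absolute sum: {total_abs:.2f}")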

  12. [Ocra Method: development of a new procedure for analysis of multiple tasks subject to infrequent rotation].

    Science.gov (United States)

    Occhipinti, E; Colombini, Daniela; Occhipinti, M

    2008-01-01

    In the OCRA methods (OCRA index and OCRA checklist), when computing the final indices (OCRA index or checklist score) in the case of more than one repetitive task, a "traditional" procedure was already proposed, the results of which could be defined as a "time-weighted average". This approach appears to be appropriate when considering rotations among tasks that are performed very frequently, for instance almost once every hour (or for shorter periods). However, when rotation among repetitive tasks is less frequent (i.e. once every 1 1/2 or more hours), the "time-weighted average" approach could result in an underestimation of the exposure level (as it practically flattens peaks of high exposure). For those scenarios an alternative approach based on "the most stressful task as minimum" might be more realistic. This latter approach has already been included in the NIOSH approach for multiple sequential lifting tasks, and, given the recent availability in the OCRA method of more detailed duration multipliers (practically a different DuM for each one-hour step of duration of the repetitive task), it is now possible to define a procedure to compute the complex OCRA Multitask Index (cOCRA) and the complex Checklist Score (cCHESCO) for the analysis of two or more repetitive tasks when rotations are infrequent (rotations every 1 1/2 hours or more). The result of this approach will be at least equal to the index of the most stressful task considered for its individual daily duration and at most equal to the index of the most stressful task when it is (only theoretically) considered as lasting for the overall daily duration of all the examined repetitive tasks. The procedure is based on the following formula: complex OCRA Multitask Index = OCRA1(DuM1) + (ΔOCRA1 × K), where 1, 2, 3, ..., N = repetitive tasks ordered by OCRA index values (1 = highest; N = lowest) computed considering their respective real duration multipliers (DuMi); OCRA1 = OCRA index of
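
    Transcribing the formula as code makes the bounds stated above easy to check; the caller supplies OCRA1 at the task's own duration multiplier, the increment ΔOCRA1 up to the full-shift duration, and the weighting coefficient K defined by the method (the example values are invented):

        def complex_ocra_multitask_index(ocra1_dum1: float,
                                         delta_ocra1: float,
                                         k: float) -> float:
            # cOCRA = OCRA1(DuM1) + (Delta OCRA1 x K), per the formula above
            return ocra1_dum1 + delta_ocra1 * k

        # Most stressful task: 3.5 at its own duration, 5.0 over the whole
        # shift; K between 0 and 1 interpolates between these two bounds.
        print(complex_ocra_multitask_index(3.5, 5.0 - 3.5, 0.6))  # -> 4.4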

  13. Obsessions and compulsions in the lab: A meta-analysis of procedures to induce symptoms of obsessive-compulsive disorder.

    Science.gov (United States)

    De Putter, Laura M S; Van Yper, Lotte; Koster, Ernst H W

    2017-03-01

    Efficacious procedures for inducing symptoms of obsessive-compulsive disorder (OCD) are necessary in order to test central tenets of theories on OCD. However, the efficacy of the current range of induction procedures remains unclear. Therefore, this meta-analysis set out to examine the efficacy of induction procedures in participants with and without OCD symptoms. Moreover, we explored whether the efficacy varied across different moderators (i.e., induction categories, symptom dimensions of OCD, modalities of presentation, and level of individual tailoring). In total we included 4900 participants across 90 studies. The analyses showed that there was no difference between studies using subclinical and clinical participants, confirming the utility of analogue samples. Induction procedures evoked more symptoms in (sub)clinical OCD than in healthy participants, which was most evident in the contamination symptom dimension of OCD. Analysis within (sub)clinical OCD showed a large effect size of induction procedures, especially for the threat and responsibility categories and when stimuli were tailored to individuals. Analysis within healthy participants showed a medium effect size of induction procedures. The magnitude of the effect in healthy individuals was stronger for mental contamination, thought-action fusion and threat inductions. Copyright © 2017 Elsevier Ltd. All rights reserved.

  14. Analysis of Interaction Factors Between Two Piles

    Institute of Scientific and Technical Information of China (English)

    CAO Ming; CHEN Long-zhu

    2008-01-01

    A rigorous analytical method is presented for calculating the interaction factor between two identical piles subjected to vertical loads. Following the technique proposed by Muki and Sternberg, the problem is decomposed into an extended soil mass and two fictitious piles characterized, respectively, by the Young's modulus of the soil and by that of the difference between the pile and the soil. The unknown axial forces along the fictitious piles are determined by solving a Fredholm integral equation of the second kind, which imposes the compatibility condition that the axial strains of the fictitious piles are equal to those corresponding to the centroidal axes of the extended soil. The real pile forces and displacements can subsequently be calculated from the determined fictitious pile forces, and finally the desired pile interaction factors may be obtained. Results confirm the validity of the proposed approach and portray the influence of the governing parameters on the pile interaction.
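
    The computational heart of the method, a Fredholm integral equation of the second kind, can be sketched generically with a Nystrom (quadrature) discretization; the kernel and right-hand side below are placeholders, not the pile-soil kernel:

        import numpy as np

        def solve_fredholm2(kernel, g, lam=1.0, n=200):
            # Solve f(x) - lam * int_0^1 k(x, t) f(t) dt = g(x)
            t = (np.arange(n) + 0.5) / n          # midpoint quadrature nodes
            w = np.full(n, 1.0 / n)               # quadrature weights
            A = np.eye(n) - lam * kernel(t[:, None], t[None, :]) * w
            return t, np.linalg.solve(A, g(t))

        t, f = solve_fredholm2(lambda x, s: np.exp(-np.abs(x - s)),
                               lambda x: np.ones_like(x), lam=0.5)
        print(f[:5])  # discretized solution at the first few nodes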

  15. Analysis of factors affecting fattening of chickens

    OpenAIRE

    OBERMAJEROVÁ, Barbora

    2013-01-01

    Poultry meat belongs to the basic assortment of human nutrition. The meat of intensively fattened poultry is a source of easily digestible proteins, lipids, mineral substances and vitamins. The aim of this bachelor's thesis was to produce a literature review focused on the intensity of growth, carcass yield, and the quality and composition of broiler chicken meat. It then describes the internal and external factors that affect them, i.e. genetic foundation, hybrid combination, s...

  16. Human factors evaluation of teletherapy: Training and organizational analysis. Volume 4

    Energy Technology Data Exchange (ETDEWEB)

    Henriksen, K.; Kaye, R.D.; Jones, R. [Hughes Training, Inc., Falls Church, VA (United States); Morisseau, D.S.; Serig, D.I. [Nuclear Regulatory Commission, Washington, DC (United States). Div. of Systems Technology

    1995-07-01

    A series of human factors evaluations were undertaken to better understand the contributing factors to human error in the teletherapy environment. Teletherapy is a multidisciplinary methodology for treating cancerous tissue through selective exposure to an external beam of ionizing radiation. A team of human factors specialists, assisted by a panel of radiation oncologists, medical physicists, and radiation therapists, conducted site visits to radiation oncology departments at community hospitals, university centers, and free-standing clinics. A function and task analysis was initially performed to guide subsequent evaluations in the areas of system-user interfaces, procedures, training and qualifications, and organizational policies and practices. The present work focuses solely on training and qualifications of personnel (e.g., training received before and during employment), and the potential impact of organizational factors on the performance of teletherapy. Organizational factors include such topics as adequacy of staffing, performance evaluations, commonly occurring errors, implementation of quality assurance programs, and organizational climate.

  17. Investigating product development strategy in beverage industry using factor analysis

    Directory of Open Access Journals (Sweden)

    Naser Azad

    2013-03-01

    Selecting a product development strategy that is associated with the company's current service or product innovation, based on customers' needs and a changing environment, plays an important role in increasing demand, market share, sales and profits. Therefore, it is important to extract effective variables associated with product development to improve the performance measurement of firms. This paper investigates important factors influencing product development strategies using factor analysis. The proposed model investigates 36 factors and, using factor analysis, we extract the six most influential factors: information sharing, intelligence information, exposure strategy, differentiation, research and development strategy and market survey. The first strategy, partnership, includes the sub-factors of product development partnership, partnership with foreign firms, customers' perception of competitors' products, customer involvement in product development, inter-agency coordination, customer-oriented approach to innovation and transmission of product development change, where inter-agency coordination has been considered the most important factor. Internal strengths are the most influential factors impacting the second strategy, intelligence information. The third factor, introducing strategy, includes four sub-criteria, of which consumer buying behavior is the most influential. Differentiation is the next important factor, with five components, where knowledge and expertise in product innovation is the most important one. In the research and development strategy, with four sub-criteria, reducing the product development cycle is the most influential factor. Finally, market survey strategy is the last important factor, with three components, where finding new markets plays the most important role.

  18. Housing Price Forecastability: A Factor Analysis

    DEFF Research Database (Denmark)

    Bork, Lasse; Møller, Stig Vinther

    2016-01-01

    We examine U.S. housing price forecastability using principal component analysis (PCA), partial least squares (PLS), and sparse PLS (SPLS). We incorporate information from a large panel of 128 economic time series and show that macroeconomic fundamentals have strong predictive power for future...

  19. Housing price forecastability: A factor analysis

    DEFF Research Database (Denmark)

    Møller, Stig Vinther; Bork, Lasse

    2017-01-01

    We examine U.S. housing price forecastability using principal component analysis (PCA), partial least squares (PLS), and sparse PLS (SPLS). We incorporate information from a large panel of 128 economic time series and show that macroeconomic fundamentals have strong predictive power for future...

  20. Evaluation of some procedures relevant to the determination of trace elemental components in biological materials by destructive neutron activation analysis

    Energy Technology Data Exchange (ETDEWEB)

    Berry, D.L.

    1979-01-01

    The development of a simplified procedure for the analysis of biological materials by destructive neutron activation analysis (DNAA) is described. The sample manipulations preceding gamma ray assay were investigated as five specific stages of processing: (1) pre-irradiation treatment; (2) sample irradiation; (3) removal of the organic matrix; (4) removal of interfering radioactivities; and (5) concentration and separation of analyte activities. Each stage was evaluated with respect to susceptibility to sample contamination, loss of trace elemental components, and compatibility with other operations in the overall DNAA procedures. A complete DNAA procedure was proposed and evaluated for the analysis of standard bovine liver and blood samples. The DNAA system was effective for the determination of As, Cu, Fe, Hg, Mo, Rb, Sb, Se, and Zn without yield determinations and with a minimum turn-around time of approximately 3 days.

  1. Signs and symptoms of acute mania: a factor analysis

    Directory of Open Access Journals (Sweden)

    de Silva Varuni A

    2011-08-01

    Abstract Background The major diagnostic classifications consider mania as a uni-dimensional illness. Factor analytic studies of acute mania are fewer compared to schizophrenia and depression. Evidence from factor analysis suggests more categories or subtypes than are included in the classification systems. Studies have found that these factors can predict differences in treatment response and prognosis. Methods The sample included 131 patients consecutively admitted to an acute psychiatry unit over a period of one year, of whom 76 (58%) were male. The mean age was 44.05 years (SD = 15.6). Patients met International Classification of Diseases-10 (ICD-10) clinical diagnostic criteria for a manic episode. Patients with a diagnosis of mixed bipolar affective disorder were excluded. Participants were evaluated using the Young Mania Rating Scale (YMRS). Exploratory factor analysis (principal component analysis) was carried out and factors with an eigenvalue > 1 were retained. The significance level for interpretation of factor loadings was 0.40. The unrotated component matrix identified five factors. Oblique rotation was then carried out to identify three factors which were clinically meaningful. Results Unrotated principal component analysis extracted five factors, which explained 65.36% of the total variance. Oblique rotation extracted three factors. Factor 1, corresponding to 'irritable mania', had significant loadings of irritability, increased motor activity/energy and disruptive aggressive behaviour. Factor 2, corresponding to 'elated mania', had significant loadings of elevated mood, language abnormalities/thought disorder, increased sexual interest and poor insight. Factor 3, corresponding to 'psychotic mania', had significant loadings of abnormalities in thought content, appearance, poor sleep and speech abnormalities. Conclusions Our findings identified three clinically meaningful factors corresponding to 'irritable mania', 'elated mania' and 'psychotic mania'.
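
    The retention rule used above (keep factors with eigenvalue > 1, the Kaiser criterion) is easy to reproduce. A minimal sketch follows, with `X` a hypothetical patients-by-items score matrix rather than the study's YMRS data; oblique rotation (e.g., oblimin or promax) of the retained loadings is available in libraries such as factor_analyzer.

    ```python
    # Kaiser criterion sketch: eigenvalues of the item correlation matrix.
    import numpy as np

    def kaiser_retained(X):
        R = np.corrcoef(X, rowvar=False)            # item correlation matrix
        eigvals = np.sort(np.linalg.eigvalsh(R))[::-1]
        k = int(np.sum(eigvals > 1.0))              # factors with eigenvalue > 1
        explained = eigvals[:k].sum() / eigvals.sum()
        return k, explained

    X = np.random.default_rng(0).normal(size=(131, 11))   # placeholder data only
    k, var = kaiser_retained(X)
    print(f"{k} factors retained, explaining {var:.1%} of variance")
    ```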

  2. Análisis del fracaso empresarial por sectores: factores diferenciadores = Cross-industry analysis of business failure: differential factors

    Directory of Open Access Journals (Sweden)

    María Jesús Mures Quintana

    2012-03-01

    This paper focuses on a cross-industry analysis of business failure, in order to identify the explanatory and predictive factors of this event that differ across three of the main industries in every economy: manufacturing, building and service. For each one of these industries, the same procedure is followed. First, a principal components analysis is applied in order to identify the explanatory factors of business failure in the three industries. Next, these factors are considered as independent variables in a discriminant analysis, applied to predict the firms' failure, using not only financial information expressed by ratios, but also other non-financial variables related to the firms, as well as external information that reflects the macroeconomic conditions under which they develop their activity.

  4. Increasing spelling achievement: an analysis of treatment procedures utilizing an alternating treatments design.

    OpenAIRE

    Ollendick, T. H.; Matson, J L; Esveldt-Dawson, K; Shapiro, E S

    1980-01-01

    Two studies which examine the effectiveness of spelling remediation procedures are reported. In both studies, an alternating treatment design was employed. In the first study, positive practice overcorrection plus positive reinforcement was compared to positive practice alone and a no-remediation control condition. In the second study, positive practice plus positive reinforcement was compared to a traditional corrective procedure plus positive reinforcement and a traditional procedure when u...

  5. Factors Affecting Unemployment: A Cross Country Analysis

    Directory of Open Access Journals (Sweden)

    Aurangzeb

    2013-01-01

    This paper investigates macroeconomic determinants of unemployment for India, China and Pakistan for the period 1980 to 2009. The investigation was conducted through cointegration, Granger causality and regression analysis. The variables selected for the study are unemployment, inflation, gross domestic product, exchange rate and the population growth rate. The results of the regression analysis showed a significant impact of all the variables for all three countries. Pakistan's GDP showed a positive relation with the unemployment rate, the reasons being the poverty level and underutilization of foreign investment. The Granger causality results showed that bidirectional causality does not exist between any of the variables for all three countries. The cointegration results showed that long-term relationships do exist among the variables for all the models. It is recommended that the distribution of income be improved in Pakistan in order for growth to have a positive impact on the employment rate.
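
    The battery of tests named in the abstract is standard time-series fare; a minimal sketch with statsmodels on synthetic series follows (the actual country data are not in the record).

    ```python
    # Engle-Granger cointegration and Granger-causality sketch on synthetic data.
    import numpy as np
    from statsmodels.tsa.stattools import coint, grangercausalitytests

    rng = np.random.default_rng(1)
    gdp = np.cumsum(rng.normal(size=30))                 # integrated (trending) series
    unemp = 0.5 * gdp + rng.normal(scale=0.5, size=30)   # shares gdp's stochastic trend

    t_stat, p_value, _ = coint(unemp, gdp)               # Engle-Granger test
    print(f"cointegration p-value: {p_value:.3f}")

    # Does the second column (gdp) Granger-cause the first (unemp)?
    results = grangercausalitytests(np.column_stack([unemp, gdp]), maxlag=2)
    ```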

  6. ANALYSIS OF RISK FACTORS ECTOPIC PREGNANCY

    Directory of Open Access Journals (Sweden)

    Budi Santoso

    2017-04-01

    Introduction: Ectopic pregnancy is a pregnancy with extrauterine implantation. This situation is a gynecologic emergency that contributes to maternal mortality. Therefore, early recognition, based on identification of the risk factors for ectopic pregnancy, is needed. Methods: The design was descriptive observational. The samples were pregnant women who had ectopic pregnancy at the Maternity Room, Emergency Unit, Dr. Soetomo Hospital, Surabaya, from 1 July 2008 to 1 July 2010. The sampling technique was total sampling using medical records. Result: Patients with ectopic pregnancy numbered 99 individuals out of 2090 pregnant women who sought treatment in Dr. Soetomo Hospital; however, only 29 patients had traceable risk factors. Discussion: Most ectopic pregnancies were in the age group of 26-30 years, comprising 32 patients (32.32%), followed by 25 patients in the age group of 31-35 years (25.25%), 18 patients in the age group of 21-25 years (18.18%), 17 patients in the age group of 36-40 years (17.17%), 4 patients in the age group of 41 years and over (4.04%), and, the fewest, 3 patients (3.03%) in the age group of 16-20 years. A total of 12 patients with ectopic pregnancy (41.38%) had a history of abortion, and 6 patients (20.69%) each were in the groups who used family planning and who used family planning combined with a history of abortion. There were 2 patients (6.90%) who had both a history of surgery and a history of abortion. The incidence rate of ectopic pregnancy was 4.73%, mostly in the second gravidity (34.34%), whereas nulliparous women had the highest prevalence at 39.39%. Acquired risk factors were: history of operations, 10.34%; use of family planning, 20.69%; history of abortion, 41.38%; history of abortion and operation, 6.90%; and family planning combined with history of abortion, 20.69%.

  7. THE ANALYSIS OF CHANGES AND INFLUENCING FACTORS OF EARLY POSTTHORACOTOMY PULMONARY FUNCTION

    Institute of Scientific and Technical Information of China (English)

    崔玉尚; 张志庸; 徐协群

    2003-01-01

    Objective. To investigate the changes and influencing factors of early postoperative pulmonary function after thoracotomy. Methods. Pre- and early postoperative pulmonary function was studied in 64 consecutive cases with optimal thoracotomy. Pain assessment was done before the pulmonary function test, and the chief complaints of patients were recorded after the procedure. The changing curves of pulmonary function were drawn, and the differences associated with groups, surgical styles, pain assessment, epidural analgesia, chief complaint and preoperative conditions were analyzed. Results. Pulmonary function was severely lowered to about 40% of the baseline on the first day, and it recovered to about 60% of the baseline on the eighth day. There was a greater gradient on the recovery curve on the 3rd and 4th days. Epidural analgesia was able to improve pain relief and pulmonary function to some degree. Single-factor analysis showed that postoperative pain, postoperative day and surgical style were the significant influencing factors for early postoperative pulmonary function. By multiple-factor analysis, preoperative pulmonary function, age and postoperative pain were the main factors, while surgical style had only a weak effect. Conclusions. Early postoperative pulmonary function is severely impaired by thoracotomy and recovers gradually with time. Improvement of preoperative pulmonary function, reducing surgical procedure injuries, especially injury to the respiratory muscle system, and sufficient postoperative pain relief are the most important means to reduce pulmonary function impairment and consequently reduce postoperative pulmonary complications.

  8. Single molecule FRET data analysis procedures for FRET efficiency determination: probing the conformations of nucleic acid structures.

    Science.gov (United States)

    Krüger, Asger Christian; Birkedal, Victoria

    2013-11-01

    Single molecule FRET microscopy is an attractive technique for studying structural dynamics and conformational diversity of nucleic acid structures. Some of its strengths are that it can follow structural changes on a fast time scale and identify conformation distributions arising from dynamic or static population heterogeneity. Here, we give a description of the experiment and data analysis procedures of this method and detail what parameters are needed for FRET efficiency calculation. Using single molecule FRET data obtained on G-quadruplex DNA structures that exhibit large conformation diversity, we illustrate that the shape of the FRET distribution changes depending on what parameters are included in the data analysis procedure.
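
    The record does not spell out the efficiency formula; a commonly used form, with illustrative correction parameters (donor-leakage alpha and detection-correction gamma, names assumed here, not the authors' notation), is sketched below. As the abstract notes, which corrections enter this calculation changes the shape of the resulting FRET histogram.

    ```python
    # Per-molecule apparent FRET efficiency from donor/acceptor intensities.
    import numpy as np

    def fret_efficiency(I_donor, I_acceptor, alpha=0.05, gamma=1.0):
        I_a = I_acceptor - alpha * I_donor       # subtract donor leakage
        return I_a / (I_a + gamma * I_donor)     # corrected proximity ratio

    rng = np.random.default_rng(2)
    I_d = rng.normal(300, 30, size=1000)         # synthetic donor counts
    I_a = rng.normal(700, 60, size=1000)         # synthetic acceptor counts
    E = fret_efficiency(I_d, I_a)
    hist, edges = np.histogram(E, bins=50, range=(0.0, 1.0))   # FRET distribution
    ```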

  9. A simplified calculation procedure for mass isotopomer distribution analysis (MIDA) based on multiple linear regression.

    Science.gov (United States)

    Fernández-Fernández, Mario; Rodríguez-González, Pablo; García Alonso, J Ignacio

    2016-10-01

    We have developed a novel, rapid and easy calculation procedure for Mass Isotopomer Distribution Analysis based on multiple linear regression, which allows the simultaneous calculation of the precursor pool enrichment and the fraction of newly synthesized labelled proteins (fractional synthesis) using linear algebra. To test this approach, we used the peptide RGGGLK as a model tryptic peptide containing three subunits of glycine. We selected glycine labelled with two 13C atoms (13C2-glycine) as the labelled amino acid to demonstrate that spectral overlap is not a problem in the proposed methodology. The developed methodology was tested first in vitro by changing the precursor pool enrichment from 10 to 40% of 13C2-glycine. Secondly, a simulated in vivo synthesis of proteins was designed by combining the natural abundance RGGGLK peptide and 10 or 20% 13C2-glycine at 1:1, 1:3 and 3:1 ratios. Precursor pool enrichments and fractional synthesis values were calculated with satisfactory precision and accuracy using a simple spreadsheet. This novel approach can provide a relatively rapid and easy means to measure protein turnover based on stable isotope tracers. Copyright © 2016 John Wiley & Sons, Ltd.
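
    A stripped-down version of the linear-algebra idea can be sketched for the case where the precursor enrichment p is treated as known (the paper's method estimates p and the fractional synthesis simultaneously; this sketch recovers only the latter).

    ```python
    # Regress a measured mass-isotopomer pattern on two basis patterns: unlabelled
    # peptide and peptide newly made from a glycine pool with 13C2 enrichment p.
    # For three glycines the labelled pattern is Binomial(3, p); each label is +2 Da.
    # Natural-abundance fine structure is ignored for brevity.
    import numpy as np
    from scipy.stats import binom

    def labelled_pattern(p, n_gly=3):
        return binom.pmf(np.arange(n_gly + 1), n_gly, p)   # M+0, M+2, M+4, M+6

    p = 0.20                                   # assumed-known precursor enrichment
    unlabelled = np.array([1.0, 0.0, 0.0, 0.0])
    basis = np.column_stack([unlabelled, labelled_pattern(p)])

    f_true = 0.5                               # simulate a 1:1 old/new mixture
    measured = (1 - f_true) * unlabelled + f_true * labelled_pattern(p)
    coef, *_ = np.linalg.lstsq(basis, measured, rcond=None)
    print(f"fractional synthesis ~ {coef[1]:.2f}")   # recovers 0.50
    ```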

  10. Error Analysis Of Students Working About Word Problem Of Linear Program With NEA Procedure

    Science.gov (United States)

    Santoso, D. A.; Farid, A.; Ulum, B.

    2017-06-01

    Evaluation and assessment are an important part of learning. In the evaluation of learning, written tests are still commonly used. However, the tests are usually not followed up by further evaluation; the process stops at the grading stage and does not evaluate the processes and errors of the students. If a student shows a pattern of errors or process errors, the actions taken can be focused on the fault and on why it happened. The NEA procedure provides a way for educators to evaluate student progress more comprehensively. In this study, students' mistakes in working on word problems about linear programming have been analyzed. As a result, the mistakes students most often make occur in the modeling phase (transformation) and in process skills, with overall percentage distributions of 20% and 15%, respectively. According to the observations, these errors occur most commonly due to students' lack of precision in modeling and hasty calculation. Through error analysis with students on this matter, it is expected that educators can determine or use the right way to address these errors in the next lesson.

  11. Assessing short summaries with human judgments procedure and latent semantic analysis in narrative and expository texts.

    Science.gov (United States)

    León, José A; Olmos, Ricardo; Escudero, Inmaculada; Cañas, José J; Salmerón, Lalo

    2006-11-01

    In the present study, we tested a computer-based procedure for assessing very concise summaries (50 words long) of two types of text (narrative and expository) using latent semantic analysis (LSA) in comparison with the judgments of four human experts. LSA was used to estimate semantic similarity using six different methods: four holistic (summary-text, summary-summaries, summary-expert summaries, and pregraded-ungraded summary) and two componential (summary-sentence text and summary-main sentence text). A total of 390 Spanish middle and high school students (14-16 years old) and six experts read a narrative or expository text and later summarized it. The results support the viability of developing a computerized assessment tool using human judgments and LSA, although the correlation between human judgments and LSA was higher in the narrative text than in the expository, and LSA correlated more with human content ratings than with human coherence ratings. Finally, the holistic methods were found to be more reliable than the componential methods analyzed in this study.
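
    The holistic summary-text comparison reduces to cosine similarity in a latent semantic space; a minimal scikit-learn sketch follows, in which the corpus strings and the dimensionality are placeholders.

    ```python
    # LSA sketch: TF-IDF, truncated SVD, then cosine similarity summary vs. text.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.decomposition import TruncatedSVD
    from sklearn.metrics.pairwise import cosine_similarity

    corpus = ["full source text goes here", "student summary goes here",
              "expert summary goes here"]
    tfidf = TfidfVectorizer().fit_transform(corpus)
    lsa = TruncatedSVD(n_components=2).fit_transform(tfidf)   # latent space

    score = cosine_similarity(lsa[1:2], lsa[0:1])[0, 0]       # summary vs. source
    ```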

  12. Multiresidue Analysis of Pesticides in Soil by Liquid-Solid Extraction Procedure

    Directory of Open Access Journals (Sweden)

    Rada Đurović

    2012-01-01

    A multiresidue method for the simultaneous determination of four pesticides (diazinon, acetochlor, aldrin and carbofuran) belonging to different pesticide groups, extracted from soil samples, is described. The method presented is based on liquid-solid extraction (LSE): the pesticides were extracted with a methanol-acetone mixture, purified on a florisil column and eluted with an ethyl acetate-acetone mixture. The main parameters affecting the LSE procedure, such as the choice of purification sorbent as well as the elution solvent and its volume, were investigated in detail and optimized. Validation of the proposed method was also performed. Gas chromatography-mass spectrometry (GC-MS) was used for detection and quantification of the pesticides studied. Relative standard deviation (RSD) and recovery values for multiple analyses of soil samples fortified with 30 μg/kg of each pesticide were below 8% and above 89%, respectively. Limits of detection (LOD) for all the compounds studied were less than 4 μg/kg.
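
    For reference, the two validation statistics quoted (recovery and RSD) come straight from replicate measurements of fortified soil; a sketch with invented replicate values follows.

    ```python
    # Recovery (%) and relative standard deviation (RSD, %) from replicates.
    import numpy as np

    spike_level = 30.0                                  # fortification, μg/kg
    replicates = np.array([27.5, 28.1, 26.9, 27.8])     # measured values, μg/kg

    recovery = replicates.mean() / spike_level * 100    # about 92% here
    rsd = replicates.std(ddof=1) / replicates.mean() * 100
    print(f"recovery {recovery:.1f}%, RSD {rsd:.1f}%")
    ```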

  13. A targeted change-detection procedure by combining change vector analysis and post-classification approach

    Science.gov (United States)

    Ye, Su; Chen, Dongmei; Yu, Jie

    2016-04-01

    In remote sensing, conventional supervised change-detection methods usually require effective training data for multiple change types. This paper introduces a more flexible and efficient procedure that seeks to identify only the changes that users are interested in, hereafter referred to as "targeted change detection". Based on a one-class classifier, Support Vector Domain Description (SVDD), a novel algorithm named "Three-layer SVDD Fusion" (TLSF) is developed specially for targeted change detection. The proposed algorithm combines one-class classification generated from change vector maps, as well as before- and after-change images, in order to obtain a more reliable detection result. In addition, this paper introduces a detailed workflow for implementing the algorithm. This workflow has been applied to two case studies with different practical monitoring objectives: urban expansion and forest fire assessment. The experimental results of these two case studies show that the overall accuracy of the proposed algorithm is superior (Kappa statistics are 86.3% and 87.8% for Cases 1 and 2, respectively) compared to applying SVDD to change vector analysis and post-classification comparison.
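
    The SVDD classifier at the heart of TLSF is closely related to the one-class SVM; as a stand-in sketch (scikit-learn ships no SVDD, so OneClassSVM with an RBF kernel is used here, on synthetic change vectors):

    ```python
    # One-class sketch of the targeted-change step: train only on examples of the
    # change class, then flag scene pixels that fall inside the learned boundary.
    import numpy as np
    from sklearn.svm import OneClassSVM

    rng = np.random.default_rng(3)
    target_change = rng.normal(loc=1.0, scale=0.2, size=(200, 4))   # training samples
    scene_pixels = rng.normal(loc=0.0, scale=0.5, size=(10000, 4))  # all change vectors

    clf = OneClassSVM(kernel="rbf", nu=0.1, gamma="scale").fit(target_change)
    mask = clf.predict(scene_pixels) == 1      # +1 marks the targeted change
    ```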

  14. Reliable and Efficient Procedure for Steady-State Analysis of Nonautonomous and Autonomous Systems

    Directory of Open Access Journals (Sweden)

    J. Dobes

    2012-04-01

    The majority of contemporary design tools still do not contain steady-state algorithms, especially for autonomous systems. This is mainly caused by insufficient accuracy of the algorithms for numerical integration, but also by the unreliability of the steady-state algorithms themselves. Therefore, in the paper, a very stable and efficient procedure for the numerical integration of nonlinear differential-algebraic systems is defined first. Afterwards, two improved methods are defined for finding the steady state, which use this integration algorithm in their iteration loops. The first is based on the idea of extrapolation, and the second utilizes nonstandard time-domain sensitivity analysis. The two steady-state algorithms are compared by analyses of a rectifier and a class-C amplifier, and the extrapolation algorithm is primarily selected as the more reliable alternative. Finally, the method based on extrapolation, naturally cooperating with the algorithm for solving the differential-algebraic systems, is thoroughly tested on various electronic circuits: Van der Pol and Colpitts oscillators, a fragment of a large bipolar logic circuit, feedback and distributed microwave oscillators, and a power amplifier. The results confirm that the extrapolation method is faster than classical plain numerical integration, especially for larger circuits with complicated transients.
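
    The extrapolation idea can be illustrated on a toy non-autonomous circuit: states sampled at successive period boundaries converge geometrically to the periodic steady state, so Aitken's delta-squared step jumps close to the fixed point. The component values and drive below are illustrative, not from the paper.

    ```python
    # Periodic steady state of a square-wave-driven RC stage via Aitken extrapolation.
    import numpy as np
    from scipy.integrate import solve_ivp

    T = 1e-3                                             # drive period, s
    def circuit(t, v, rc=3e-4):
        return [(np.sign(np.sin(2 * np.pi * t / T)) - v[0]) / rc]

    def advance_one_period(v0):
        sol = solve_ivp(circuit, (0.0, T), [v0], rtol=1e-9, atol=1e-12)
        return sol.y[0, -1]                              # state at the period boundary

    x0 = 0.0
    x1 = advance_one_period(x0)
    x2 = advance_one_period(x1)
    x_star = x2 - (x2 - x1)**2 / ((x2 - x1) - (x1 - x0))   # Aitken delta-squared
    ```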

  15. Modular approach to customise sample preparation procedures for viral metagenomics: a reproducible protocol for virome analysis.

    Science.gov (United States)

    Conceição-Neto, Nádia; Zeller, Mark; Lefrère, Hanne; De Bruyn, Pieter; Beller, Leen; Deboutte, Ward; Yinda, Claude Kwe; Lavigne, Rob; Maes, Piet; Van Ranst, Marc; Heylen, Elisabeth; Matthijnssens, Jelle

    2015-11-12

    A major limitation to better understanding the role of the human gut virome in health and disease is the lack of validated methods that allow high-throughput virome analysis. To overcome this, we evaluated the quantitative effect of homogenisation, centrifugation, filtration, chloroform treatment and random amplification on a mock virome (containing nine highly diverse viruses) and a bacterial mock community (containing four faecal bacterial species) using quantitative PCR and next-generation sequencing. This resulted in an optimised protocol that was able to recover all viruses present in the mock virome and strongly altered the ratio of viral versus bacterial and 16S rRNA genetic material in favour of viruses (from 43.2% to 96.7% viral reads and from 47.6% to 0.19% bacterial reads). Furthermore, our study indicated that most of the currently used virome protocols, using small filter pores and/or stringent centrifugation conditions, may have largely overlooked large viruses present in viromes. We propose NetoVIR (Novel enrichment technique of VIRomes), which allows a fast, reproducible and high-throughput sample preparation for viral metagenomics studies, introducing minimal bias. This procedure is optimised mainly for faecal samples, but with appropriate concentration steps it can also be used for other sample types with lower initial viral loads.

  16. A Component Analysis of Toilet-Training Procedures Recommended for Young Children

    Science.gov (United States)

    Greer, Brian D.; Neidert, Pamela L.; Dozier, Claudia L.

    2016-01-01

    We evaluated the combined and sequential effects of 3 toilet-training procedures recommended for use with young children: (a) underwear, (b) a dense sit schedule, and (c) differential reinforcement. A total of 20 children participated. Classroom teachers implemented a toilet-training package consisting of all 3 procedures with 6 children. Of the 6…

  17. A New Variable Weighting and Selection Procedure for K-Means Cluster Analysis

    Science.gov (United States)

    Steinley, Douglas; Brusco, Michael J.

    2008-01-01

    A variance-to-range ratio variable weighting procedure is proposed. We show how this weighting method is theoretically grounded in the inherent variability found in data exhibiting cluster structure. In addition, a variable selection procedure is proposed to operate in conjunction with the variable weighting technique. The performances of these…

  18. Local stress analysis on semiconductor devices by combined experimental-numerical procedure

    NARCIS (Netherlands)

    Kregting, R.; Gielen, A.W.J.; Driel, W. van; Alkemade, P.; Miro, H.; Kamminga, J.-D.

    2011-01-01

    Intrinsic stresses in bondpads may lead to early failure of IC's. In order to determine the intrinsic stresses in semiconductor structures, a new procedure is set up. This procedure is a combined experimental/numerical approach which consists of the following steps: First, a conductive gold layer (2

  19. Is the Latarjet procedure risky? Analysis of complications and learning curve.

    Science.gov (United States)

    Dauzère, Florence; Faraud, Amélie; Lebon, Julie; Faruch, Marie; Mansat, Pierre; Bonnevialle, Nicolas

    2016-02-01

    The purpose of this study was to analyse the learning curve and complication rate of the open Latarjet procedure. The first 68 Latarjet procedures performed by a single surgeon for chronic anterior shoulder instability were reviewed retrospectively. The standard open surgical technique was followed faithfully during each procedure. Post-operative complications were taken from patient medical records. Post-operative evaluation consisted of clinical and radiological assessments. The rate of early complications of the open Latarjet procedure remains low. A surgeon's experience significantly affects the surgery duration and the occurrence of early complications. The main radiological complication is partial lysis of the bone block. After a short learning curve, the clinical outcomes of the Latarjet procedure appear to be satisfactory and reproducible. Level of evidence: IV.

  20. An integrated multi-scale risk analysis procedure for pluvial flooding

    Science.gov (United States)

    Tader, Andreas; Mergili, Martin; Jäger, Stefan; Glade, Thomas; Neuhold, Clemens; Stiefelmeyer, Heinz

    2016-04-01

    Mitigation of or adaptation to the negative impacts of natural processes on society requires a better understanding of the spatio-temporal distribution not only of the processes themselves, but also of the elements at risk. Information on their values, exposures and vulnerabilities towards the expected impact magnitudes/intensities of the relevant processes is needed. GIS-supported methods are particularly useful for integrated spatio-temporal analyses of natural processes and their potential consequences. Pluvial floods are of particular concern in many parts of Austria. The overall aim of the present study is to calculate the hazards emanating from pluvial floods, to determine the exposure of given elements at risk, to determine their vulnerabilities towards given pluvial flood hazards and to analyze potential consequences in terms of monetary losses. The whole approach builds on data available on a national scale. We introduce an integrated, multi-scale risk analysis procedure with regard to pluvial flooding. Focusing on the risk to buildings, we firstly exemplify this procedure with a well-documented event in the city of Graz (Austria), in order to highlight the associated potentials and limitations. Secondly, we attempt to predict the possible consequences of pluvial flooding triggered by rainfall events with recurrence intervals of 30, 100 and 300 years. (i) We compute spatially distributed inundation depths using the software FloodArea. Infiltration capacity and surface roughness are estimated from the land cover units given by the official cadastre. Various assumptions are tested with regard to the inflow to the urban sewer system. (ii) Based on the inundation depths and the official building register, we employ a set of rules and functions to deduce the exposure, vulnerability and risk for each building. A risk indicator for each building, expressed as the expected damage associated with a given event, is derived by combining the building value and
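
    Step (ii) above is, at its core, a per-building expected-damage calculation; a schematic sketch follows, with an invented depth-damage curve and building values standing in for the Austrian functions and registers.

    ```python
    # Expected loss per building = damage fraction(depth) x building value.
    import numpy as np

    def damage_fraction(depth_m):
        return np.clip(depth_m / 2.0, 0.0, 1.0)    # toy curve: total loss at 2 m

    depths = np.array([0.0, 0.3, 1.2])             # inundation depth per building, m
    values = np.array([250e3, 400e3, 180e3])       # building values, EUR (illustrative)
    expected_loss = damage_fraction(depths) * values
    ```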

  1. Analysis of Socio-Economic Factors Influencing Farmers' Adoption ...

    African Journals Online (AJOL)

    Analysis of Socio-Economic Factors Influencing Farmers' Adoption of Rice ... Farming experience, household size, farm size and extension contact ... gender, market availability, education, extension contact, labour availability and farm size.

  2. Exploring Technostress: Results of a Large Sample Factor Analysis

    Directory of Open Access Journals (Sweden)

    Steponas Jonušauskas

    2016-06-01

    With reference to the results of a large-sample factor analysis, the article aims to propose a framework for examining technostress in a population. A survey and principal component analysis of a sample of 1013 individuals who use ICT in their everyday work were implemented in the research. Thirteen factors combine 68 questions and explain 59.13 per cent of the dispersion of the answers. Based on the factor analysis, the questionnaire was reframed and prepared to reasonably analyze the respondents' answers, revealing technostress causes and consequences as well as technostress prevalence in the population in a statistically validated pattern. The key elements of technostress based on the factor analysis can serve for the construction of technostress measurement scales in further research.

  3. Risk Factor Analysis for Oral Precancer among Slum Dwellers in ...

    African Journals Online (AJOL)

    Keywords: Oral cancer, Risk factor analysis, Slum dwellers.

  4. Meta analysis of risk factors for colorectal cancer

    Institute of Scientific and Technical Information of China (English)

    Kun Chen; Jiong-Liang Qiu; Yang Zhang; Yu-Wan Zhao

    2003-01-01

    AIM: To study the risk factors for colorectal cancer in China. METHODS: A meta-analysis of the risk factors of colorectal cancer was conducted for 14 case-control studies, reviewing 14 reports published within 13 years which included 5034 cases and 5205 controls. DerSimonian and Laird random-effects models were used to process the results. RESULTS: Meta-analysis of the 14 studies demonstrated that proper physical activities and dietary fibers were protective factors (pooled OR < 0.8), while fecal mucohemorrhage, chronic diarrhea and polyposis were highly associated with colorectal cancer (all pooled OR > 4). The stratified results showed that different OR values of some factors were due to geographic factors or different sources. CONCLUSION: Risks of colorectal cancer are significantly associated with histories of intestinal diseases or related symptoms, a high-lipid diet, emotional trauma and a family history of cancer. Suitable physical activities and dietary fibers are protective factors.
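
    The DerSimonian-Laird pooling named in the abstract is compact enough to sketch; the per-study odds ratios and confidence bounds below are invented for illustration.

    ```python
    # DerSimonian-Laird random-effects pooling of odds ratios.
    import numpy as np

    def dersimonian_laird(or_, lo, hi):
        y = np.log(or_)                                   # per-study log odds ratios
        se = (np.log(hi) - np.log(lo)) / (2 * 1.96)       # SE from 95% CI width
        w = 1.0 / se**2                                   # fixed-effect weights
        ybar = np.sum(w * y) / w.sum()
        q = np.sum(w * (y - ybar)**2)                     # Cochran's Q
        c = w.sum() - np.sum(w**2) / w.sum()
        tau2 = max(0.0, (q - (len(y) - 1)) / c)           # between-study variance
        w_re = 1.0 / (se**2 + tau2)                       # random-effects weights
        return np.exp(np.sum(w_re * y) / w_re.sum())      # pooled OR

    pooled = dersimonian_laird(np.array([0.7, 0.8, 0.6]),
                               np.array([0.5, 0.6, 0.4]),
                               np.array([0.9, 1.1, 0.9]))
    ```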

  5. Sensitivity Analysis to Select the Most Influential Risk Factors in a Logistic Regression Model

    Directory of Open Access Journals (Sweden)

    Jassim N. Hussain

    2008-01-01

    The traditional variable selection methods for survival data depend on iteration procedures, and control of this process assumes tuning parameters that are problematic and time consuming, especially if the models are complex and have a large number of risk factors. In this paper, we propose a new method based on the global sensitivity analysis (GSA) to select the most influential risk factors. This contributes to simplification of the logistic regression model by excluding the irrelevant risk factors, thus eliminating the need to fit and evaluate a large number of models. Data from medical trials are suggested as a way to test the efficiency and capability of this method and as a way to simplify the model. This leads to construction of an appropriate model. The proposed method ranks the risk factors according to their importance.

  6. The Pain Behaviour Checklist: factor analysis and validation.

    Science.gov (United States)

    Anciano, D

    1986-11-01

    A factor analysis was performed on Philips & Hunter's (1981) Pain Behaviour Checklist for headache sufferers. Three intuitively meaningful factors emerged. All were similarly associated with overall intensity; pain severity does not determine type of pain behaviour. Differences in pain behaviour emerged between migraine and tension headache groups.

  7. Determining Dimensionality of Exercise Readiness Using Exploratory Factor Analysis

    Directory of Open Access Journals (Sweden)

    Kelley Strohacker, Rebecca A. Zakrajsek

    2016-06-01

    Assessment of "exercise readiness" is a central component of the flexible non-linear periodization (FNLP) method of organizing training workloads, but the underlying factor structure of this construct has not been empirically determined. The purpose of this study was to assess the construct dimensionality of exercise readiness using exploratory factor analysis, the results of which serve as initial steps in developing a brief measure of exercise readiness. Participants consisted of students recruited from undergraduate Kinesiology courses at a racially diverse, southern University. Independent, anonymous online survey data were collected across three stages: (1) generation of the item pool (n = 290), (2) assessment of face validity and refinement of the item pool (n = 168), and (3) exploratory factor analysis (n = 684). A principal axis factor analysis was conducted with 41 items using oblique rotation (promax). Four statistically significant factors, as determined through parallel analysis, explained 61.5% of the variance in exercise readiness. Factor 1 contained items that represented vitality (e.g., lively, revived). Factor 2 items related to physical fatigue (e.g., tired, drained). Factors 3 and 4 were descriptive of discomfort (e.g., pain, sick) and health (i.e., healthy, fit), respectively. This inductive approach indicates that exercise readiness is comprised of four dimensions: vitality, physical fatigue, discomfort, and health. This finding supports readiness assessment techniques currently recommended for practitioners according to the FNLP model. These results serve as a theoretical foundation upon which to further develop and refine a brief survey instrument to measure exercise readiness.
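
    Parallel analysis, the retention rule used above, compares observed eigenvalues with those of random data of the same shape. A self-contained sketch follows, run on synthetic data with a planted four-factor structure rather than the study's 684 x 41 item matrix.

    ```python
    # Horn's parallel analysis: keep factors whose eigenvalues beat random data.
    import numpy as np

    def parallel_analysis(X, n_sims=100, seed=0):
        rng = np.random.default_rng(seed)
        n, p = X.shape
        obs = np.sort(np.linalg.eigvalsh(np.corrcoef(X, rowvar=False)))[::-1]
        rand = np.zeros((n_sims, p))
        for i in range(n_sims):
            R = np.corrcoef(rng.normal(size=(n, p)), rowvar=False)
            rand[i] = np.sort(np.linalg.eigvalsh(R))[::-1]
        return int(np.sum(obs > rand.mean(axis=0)))

    rng = np.random.default_rng(1)
    scores = rng.normal(size=(684, 4))                   # four latent factors
    X = scores @ rng.normal(scale=0.8, size=(4, 41)) + rng.normal(size=(684, 41))
    print(parallel_analysis(X), "factors retained")      # recovers ~4
    ```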

  8. 48 CFR 2115.404-71 - Profit analysis factors.

    Science.gov (United States)

    2010-10-01

    ... account in assigning a plus weight. (5) Cost control. This factor is based on the Contractor's previously... TYPES CONTRACTING BY NEGOTIATION Contract Pricing 2115.404-71 Profit analysis factors. (a) The OPM... receive a plus weight, and poor performance or failure to comply with contract terms and conditions a zero...

  9. Analysis on Family Factor in Construction of New Socialist Countryside

    Institute of Scientific and Technical Information of China (English)

    2012-01-01

    This paper analyzes the family factor in the construction of the new socialist countryside. It is believed that the family plays both a positive role and a negative role in new socialist countryside construction. Based on this analysis, it puts forward corresponding countermeasures, including bringing into play the role of the family in promoting production and carrying forward the excellent elements of family culture.

  10. Exploratory Tobit factor analysis for multivariate censored data

    NARCIS (Netherlands)

    Kamakura, WA; Wedel, M

    2001-01-01

    We propose Multivariate Tobit models with a factor structure on the covariance matrix. Such models are particularly useful in the exploratory analysis of multivariate censored data and the identification of latent variables from behavioral data. The factor structure provides a parsimonious

  11. Connectivism in Postsecondary Online Courses: An Exploratory Factor Analysis

    Science.gov (United States)

    Hogg, Nanette; Lomicky, Carol S.

    2012-01-01

    This study explores 465 postsecondary students' experiences in online classes through the lens of connectivism. Downes' 4 properties of connectivism (diversity, autonomy, interactivity, and openness) were used as the study design. An exploratory factor analysis was performed. This study found a 4-factor solution. Subjects indicated that autonomy…

  12. Confirmatory Factor Analysis of the Career Factors Inventory on a Community College Sample

    Science.gov (United States)

    Simon, Merril A.; Tovar, Esau

    2004-01-01

    A confirmatory factor analysis was conducted using AMOS 4.0 to validate the 21-item Career Factors Inventory on a community college student sample. The multidimensional inventory assesses types and levels of career indecision antecedents. The sample consisted of 512 ethnically diverse freshmen students; 46% were men and 54% were women.…

  13. An analysis of marketing authorisation applications via the mutual recognition and decentralised procedures in Europe.

    Science.gov (United States)

    Ebbers, Hans C; Langedijk, Joris; Bouvy, Jacoline C; Hoekman, Jarno; Boon, Wouter P C; de Jong, Jean Philippe; De Bruin, Marie L

    2015-10-01

    The aim of this study is to provide a comprehensive overview of the outcomes of marketing authorisation applications via the mutual recognition and decentralised procedures (MRP/DCP) and assess determinants of licensing failure during CMDh referral procedures. All MRP/DCP procedures to the Co-ordination group for Mutual recognition and Decentralised procedures-human (CMDh) during the period from January 2006 to December 2013 were analysed. Reasons for starting referral procedures were scored. In addition, a survey under pharmaceutical companies was performed to estimate the frequency of licensing failure prior to CMDh referrals. During the study period, 10392 MRP/DCP procedures were finalized. Three hundred seventy-seven (3.6%) resulted in a referral procedure, of which 70 (19%) resulted in licensing failure, defined as refusal or withdrawal of the application. The frequency of CMDh referrals decreased from 14.5% in 2006 to 1.6% in 2013. Of all referrals, 272 (72%) were resolved through consensus within the CMDh, the remaining 105 (28%) were resolved at the level of the CHMP. Most referrals were started because of objections raised about the clinical development program. Study design issues and objections about the demonstration of equivalence were most likely to result in licensing failure. An estimated 11% of all MRP/DCP procedures resulted in licensing failure prior to CMDh referral. Whereas the absolute number of MRP/DCP procedures resulting in a referral has reduced substantially over the past years, no specific time trend could be observed regarding the frequency of referrals resulting in licensing failure. Increased knowledge at the level of companies and regulators has reduced the frequency of late-stage failure of marketing applications via the MRP/DCP.

  14. Enteral Access Procedures: An 18-Year Analysis of Changing Patterns of Utilization in the Medicare Population.

    Science.gov (United States)

    Wan, Wenshuai; Hawkins, C Matthew; Hemingway, Jennifer; Hughes, Danny; Duszak, Richard

    2017-01-01

    To evaluate national trends in enteral access and maintenance procedures for Medicare beneficiaries with regard to utilization rates, specialty group roles, and sites of service. Using Medicare Physician Supplier Procedure Summary Master Files for the period 1994-2012, claims for gastrostomy and gastrojejunostomy access and maintenance procedures were identified. Longitudinal utilization rates were calculated using annual enrollment data. Procedure volumes by site of service and medical specialty were analyzed. Between 1994 and 2012, de novo enteral access procedure utilization decreased from 61.6 to 42.3 per 10,000 Medicare Part B beneficiaries (-31%). Gastroenterologists and surgeons performed > 80% of procedures (unchanged over study period) with 97% in the hospital setting. Over time, relative use of an endoscopic approach (62% in 1994; 82% in 2012) increased as percutaneous (21% to 12%) and open surgical (17% to 5%) procedures declined. Existing enteral access maintenance services increased 29% (from 20.1 to 25.9 per 10,000 beneficiaries). Radiologists (from 13% to 31%) surpassed gastroenterologists (from 36% to 21%) as dominant providers of maintenance procedures. Emergency physicians (from 8% to 23%) and nonphysician providers (from 0% to 6%) have seen rapid growth as maintenance services providers as these services have transitioned increasingly to the emergency department setting (from 18% to 32%). Among Medicare beneficiaries, de novo enteral access procedures have declined in the last 2 decades as existing access maintenance services have increased. The latter are increasingly performed by radiologists, emergency physicians, and nonphysician providers. Copyright © 2016 SIR. Published by Elsevier Inc. All rights reserved.

  15. A retrospective analysis of patients referred for implant placement to a specialty clinic: indications, surgical procedures, and early failures.

    Science.gov (United States)

    Bornstein, Michael M; Halbritter, Sandro; Harnisch, Hendrik; Weber, Hans-Peter; Buser, Daniel

    2008-01-01

    This retrospective study analyzed the pool of patients referred for treatment with dental implants over a 3-year period in a referral specialty clinic. All patients receiving dental implants between 2002 and 2004 in the Department of Oral Surgery and Stomatology, University of Bern, were included in this retrospective study. Patients were analyzed according to age, gender, indications for implant therapy, location of implants, and type and length of implants placed. A cumulative logistic regression analysis was performed to identify and analyze potential risk factors for complications or failures. A total of 1,206 patients received 1,817 dental implants. The group comprised 573 men and 633 women with a mean age of 55.2 years. Almost 60% of patients were age 50 or older. The most frequent indication for implant therapy was single-tooth replacement in the maxilla (522 implants or 28.7%). A total of 726 implants (40%) were inserted in the esthetically demanding region of the anterior maxilla. For 939 implants (51.7%), additional bone-augmentation procedures were required. Of these, ridge augmentation with guided bone regeneration was performed more frequently than sinus grafting. Thirteen complications leading to early failures were recorded, resulting in an early failure rate of 0.7%. The regression analysis failed to identify statistically significant failure etiologies for the variables assessed. From this study it can be concluded that patients referred to a specialty clinic for implant placement were more likely to be partially edentulous and over 50 years old. Single-tooth replacement was the most frequent indication (> 50%). Similarly, additional bone augmentation was indicated in more than 50% of cases. Adhering to strict patient selection criteria and a standardized surgical protocol, an early failure rate of 0.7% was experienced in this study population.

  16. Random analysis of bearing capacity of square footing using the LAS procedure

    Directory of Open Access Journals (Sweden)

    Kawa Marek

    2016-09-01

    In the present paper, a three-dimensional problem of the bearing capacity of a square footing on a random soil medium is analyzed. The random fields of the strength parameters c and φ are generated using the LAS procedure (Local Average Subdivision; Fenton and Vanmarcke, 1990). The procedure used was re-implemented by the authors in the Mathematica environment in order to combine it with a commercial program. Since the procedure is still being tested, the random field has been assumed to be one-dimensional: the strength properties of the soil are random in the vertical direction only.
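
    The LAS scheme itself is involved; as a stand-in, the end product, a vertically correlated lognormal strength profile, can be generated by Cholesky factorization of an exponential covariance. All parameter values below are illustrative, not from the paper.

    ```python
    # One-dimensional correlated lognormal field (Cholesky stand-in, not LAS).
    import numpy as np

    def lognormal_field(n=64, dz=0.25, theta=2.0, mean=20.0, cov=0.3, seed=0):
        z = np.arange(n) * dz                              # depths, m
        C = np.exp(-2.0 * np.abs(z[:, None] - z[None, :]) / theta)  # exp. correlation
        L = np.linalg.cholesky(C + 1e-10 * np.eye(n))
        s = np.sqrt(np.log(1.0 + cov**2))                  # underlying normal params
        m = np.log(mean) - 0.5 * s**2
        g = L @ np.random.default_rng(seed).normal(size=n)
        return np.exp(m + s * g)                           # e.g. cohesion c with depth

    c_profile = lognormal_field()
    ```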

  17. Evaluation of washing procedures for pollution analysis of Ailanthus altissima leaves

    Energy Technology Data Exchange (ETDEWEB)

    Porter, J.R.

    1986-01-01

    A study of nine different washing procedures using Alconox, HCl and Na2 EDTA for use on Ailanthus altissima leaves in particulate pollutant analyses was conducted. Leaf mineral analyses of washed and unwashed samples were carried out for Ca, Mg, K, Na, Fe, Zn, Cu and Mn by atomic absorption spectrometry, for Cl by a specific ion electrode and for Ti by a spectrophotometric procedure. The data showed that a procedure consisting of washing by hand with 1% Alconox, followed by 0.01M Na2 EDTA, was most effective in removing surface Fe, Cu, Zn and Ti and led to little change in leaf K or Cl.

  18. 78 FR 37463 - Expedited Approval of Alternative Test Procedures for the Analysis of Contaminants Under the Safe...

    Science.gov (United States)

    2013-06-21

    Correction, in rule document 2013..., to the table of approved alternative test procedures for the analysis of contaminants under the Safe Drinking Water Act (analysis and sampling procedures). Entries include Total Coliform by Membrane Filter Technique (9222 A, B, C) and ONPG-MUG Test (9223 B), and Bromate by Two-Dimensional Ion Chromatography (302.0) and by Ion Chromatography-Electrospray Ionization Tandem Mass Spectrometry (IC-ESI-MS/MS).

  19. ANALYSIS OF TRACE-LEVEL ORGANIC COMBUSTION PROCESS EMISSIONS USING NOVEL MULTIDIMENSIONAL GAS CHROMATOGRAPHY-MASS SPECTROMETRY PROCEDURES

    Science.gov (United States)

    The paper discusses the analysis of trace-level organic combustion process emissions using novel multidimensional gas chromatography-mass spectrometry (MDGC-MS) procedures. It outlines the application of the technique through the analyses of various incinerator effluent and produ...

  20. Procedure for Tooth Contact Analysis of a Face Gear Meshing With a Spur Gear Using Finite Element Analysis

    Science.gov (United States)

    Bibel, George; Lewicki, David G. (Technical Monitor)

    2002-01-01

    A procedure was developed to perform tooth contact analysis between a face gear meshing with a spur pinion using finite element analysis. The face gear surface points from a previous analysis were used to create a connected tooth solid model without gaps or overlaps. The face gear surface points were used to create a five-tooth face gear Patran model (with rim) using Patran PCL commands. These commands were saved in a series of session files suitable for Patran input. A four-tooth spur gear that meshes with the face gear was designed and constructed with Patran PCL commands. These commands were also saved in session files suitable for Patran input. The orientation of the spur gear required for meshing with the face gear was determined. The required rotations and translations are described and built into the session file for the spur gear. The Abaqus commands for three-dimensional meshing were determined and verified for a simplified model containing one spur tooth and one face gear tooth. The boundary conditions, loads, and weak spring constraints were determined to make the simplified model work. The load steps and load increments to establish contact and obtain a realistic load were determined for the simplified two-tooth model. Contact patterns give some insight into the required mesh density. Building the two gears in two different local coordinate systems and rotating the local coordinate systems was verified as an easy way to roll the gearset through mesh. Due to limitations of swap space and disk space and the time constraints of the summer period, the larger model was not completed.

  1. An exploration of diabetic foot screening procedures data by a multiple correspondence analysis

    Science.gov (United States)

    Rovan, Jože

    2017-01-01

    Abstract Aims Gangrene and amputation are among the most feared complications of diabetes mellitus. Early detection of patients at high risk of foot ulceration can prevent foot complications. Regular foot screening (medical history, foot examination and classification into risk groups) was introduced at the out-patient diabetes clinic in Ljubljana in November 1996. We aimed to explore the relationships between the observed variables and to check the appropriateness of the risk status classification and of the post-screening decisions. Methods The data of 11,594 patients, obtained over 18 years, were analysed by multiple correspondence analysis (MCA). Most of the observed variables were categorical. Results The majority of the screened population was free of foot complications. We demonstrated an increasing frequency and severity of foot problems with increasing age, as well as associations between the loss of protective sensation and a history of foot ulceration, foot deformity and callus formation, and between a history of foot ulcer or amputation and acute foot ulceration. A new finding was that the foot deformity points were located closer to female than to male gender, indicating the possible role of fashionable high-heel footwear. The appropriateness of therapeutic decisions was confirmed: the points representing absent foot pulses and referral to a vascular specialist were close together, as were the points representing foot deformity and special footwear prescription, or callus formation and referral to a pedicurist. Conclusions MCA was applied to the data on foot pathology in the population attending the out-patient diabetes clinic. The method proved to be a useful statistical tool for analysing the data of screening procedures. PMID:28289465

  2. Comparison of three protein extraction procedures from toxic and non-toxic dinoflagellates for proteomics analysis.

    Science.gov (United States)

    Jiang, Xi-Wen; Wang, Jing; Chan, Leo Lai; Lam, Paul Kwan Sing; Gu, Ji-Dong

    2015-08-01

    Three methods for the extraction and preparation of high-quality proteins from both toxic and non-toxic dinoflagellates for proteomics analysis, the Trizol, Lysis and Tris methods, were compared via the subsequent protein separation profiles using 2-D differential gel electrophoresis (2-D DIGE), Coomassie Blue and silver staining. These methods showed suitability for proteins with different pIs and molecular weights. The Tris method was better for isolating proteins of low molecular weight and low pI, whereas both the Lysis and Trizol methods were better for purifying proteins of high molecular weight and high pI. The Trizol method showed good results with Alexandrium and Gymnodinium species, and the gel background was much clearer than with the other two methods. At the same time, only the Lysis method caused breakdown of the target proteins. On the other hand, the Trizol method obtained a higher concentration of ribulose-1,5-bisphosphate carboxylase/oxygenase proteins by Western blotting, while the Tris method was the best for preparing peridinin-chlorophyll-protein complexes and the T1 protein. DIGE performed better than Coomassie Blue and silver staining, apart from some limitations, such as the high cost of the dyes, a relatively short shelf life and the requirement for extensive and specialized image-capturing equipment. Some proteins related to PST synthesis in dinoflagellates are hydrophobic, of high molecular weight or membrane-bound, and the Trizol method performed better than the Tris method for these proteins. The Trizol method and 2-D DIGE were an effective combination for proteomics investigations of dinoflagellates. This procedure allows reliable and highly efficient recovery of proteins from dinoflagellates for a better understanding of their occurrence and toxin production, and of the underlying physiological and biochemical information.

  3. From Laser Scanning to Finite Element Analysis of Complex Buildings by Using a Semi-Automatic Procedure.

    Science.gov (United States)

    Castellazzi, Giovanni; D'Altri, Antonio Maria; Bitelli, Gabriele; Selvaggi, Ilenia; Lambertini, Alessandro

    2015-07-28

    In this paper, a new semi-automatic procedure to transform three-dimensional point clouds of complex objects into three-dimensional finite element models is presented and validated. The procedure conceives of the point cloud as a stacking of point sections. The complexity of the clouds is arbitrary, since the procedure is designed for terrestrial laser scanner surveys applied to buildings with irregular geometry, such as historical buildings. The procedure aims at solving the problems connected with generating finite element models of these complex structures by constructing a finely discretized geometry in a reduced amount of time, ready to be used in structural analysis. If the starting clouds represent the inner and outer surfaces of the structure, the resulting finite element model will accurately capture the whole three-dimensional structure, producing a complex solid made of voxel elements. A comparison analysis with a CAD-based model is carried out on a historical building damaged by a seismic event. The results indicate that the proposed procedure is effective and obtains comparable models in a shorter time, with an increased level of automation.
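
    The core of the stacking/voxelization step can be sketched as binning the cloud into a regular grid and keeping the occupied cells as candidate voxel elements; the cloud below is random and the cell size arbitrary.

    ```python
    # Occupancy voxelization of a point cloud.
    import numpy as np

    def voxelize(points, voxel_size=0.5):
        idx = np.floor((points - points.min(axis=0)) / voxel_size).astype(int)
        return np.unique(idx, axis=0)          # one row per occupied voxel

    cloud = np.random.default_rng(4).uniform(0.0, 10.0, size=(100000, 3))
    voxels = voxelize(cloud)
    print(len(voxels), "voxel elements")
    ```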

  4. Arthroscopic Latarjet procedure: is optimal positioning of the bone block and screws possible? A prospective computed tomography scan analysis.

    Science.gov (United States)

    Kany, Jean; Flamand, Olivier; Grimberg, Jean; Guinand, Régis; Croutzet, Pierre; Amaravathi, Rajkumar; Sekaran, Padmanaban

    2016-01-01

    We hypothesized that the arthroscopic Latarjet procedure could be performed with accurate bone block positioning and screw fixation, with a rate of complications similar to that of the open Latarjet procedure. In this prospective study, 105 shoulders (104 patients) underwent the arthroscopic Latarjet procedure performed by the same senior surgeon. The day after surgery, an independent surgeon examiner performed a multiplanar bidimensional computed tomography scan analysis. We also evaluated our learning curve by comparing 2 chronologic periods (30 procedures performed in each period), separated by an interval during which 45 procedures were performed. Of the 105 shoulders included in the study, 95 (90.5%) (94 patients) were evaluated. The coracoid graft was accurately positioned relative to the equator of the glenoid surface in 87 of 95 shoulders (91.5%). Accurate bone-block positioning on the axial view with "circle" evaluation was obtained for 77 of 95 shoulders (81%). The graft was placed in a lateralized position in 7 of 95 shoulders (7.3%) and in a medialized position in 11 shoulders (11.6%). The mean screw angulation with the glenoid surface was 21°. One patient had transient axillary nerve palsy. Of the initial 104 patients, 3 (2.8%) underwent revision. The analysis of our results indicated that the screw-glenoid surface angle significantly predicted the accuracy of bone-block positioning (P = .001). Our learning curve estimates showed that, compared with our initial period, the average surgical time decreased, and the risk of lateralization showed a statistically significant decrease during the last period (P = .006). This study showed that accurate positioning of the bone block onto the anterior aspect of the glenoid is possible, safe, and reproducible with the arthroscopic Latarjet procedure, without additional complications compared with open surgery. Copyright © 2016 Journal of Shoulder and Elbow Surgery Board of Trustees. Published by Elsevier Inc.

  5. Health economic analysis of laparoscopic lavage versus Hartmann's procedure for diverticulitis in the randomized DILALA trial

    DEFF Research Database (Denmark)

    Gehrman, J; Angenete, E; Björholt, I

    2016-01-01

    BACKGROUND: Open surgery with resection and colostomy (Hartmann's procedure) has been the standard treatment for perforated diverticulitis with purulent peritonitis. In recent years laparoscopic lavage has emerged as an alternative, with potential benefits for patients with purulent peritonitis...

  6. Health economic analysis of laparoscopic lavage versus Hartmann's procedure for diverticulitis in the randomized DILALA trial

    DEFF Research Database (Denmark)

    Gehrman, J.; Angenete, E; Björholt, I.

    2016-01-01

    Background: Open surgery with resection and colostomy (Hartmann's procedure) has been the standard treatment for perforated diverticulitis with purulent peritonitis. In recent years laparoscopic lavage has emerged as an alternative, with potential benefits for patients with purulent peritonitis......, Hinchey grade III. The aim of this study was to compare laparoscopic lavage and Hartmann's procedure with health economic evaluation within the framework of the DILALA (DIverticulitis – LAparoscopic LAvage versus resection (Hartmann's procedure) for acute diverticulitis with peritonitis) trial. Methods......), from inclusion in the trial throughout the patient's expected life. Results: The study included 43 patients who underwent laparoscopic lavage and 40 who had Hartmann's procedure in Denmark and Sweden during 2010–2014. In base-case A, the difference in mean cost per patient between laparoscopic lavage...

  7. Conditioning Analysis of Incomplete Cholesky Factorizations with Orthogonal Dropping

    Energy Technology Data Exchange (ETDEWEB)

    Napov, Artem [Free Univ. of Brussels (Belgium)]

    2013-08-01

    An analysis is presented of preconditioners based on incomplete Cholesky factorization in which the neglected (dropped) components are orthogonal to the approximations being kept. A general estimate for the condition number of the preconditioned system is given that depends only on the accuracy of the individual approximations. The estimate improves further if, for instance, only the newly computed rows of the factor are modified during each approximation step; in this latter case it is also shown to be sharp. The analysis is illustrated with some existing factorizations in the context of discretized elliptic partial differential equations.
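
    As a concrete illustration of the setting (not the paper's analysis), the sketch below builds the simplest incomplete Cholesky factor, IC(0), which drops every update falling outside the sparsity pattern of A, for a discretized 2-D Laplacian, and compares condition numbers before and after preconditioning.

    ```python
    import numpy as np

    def laplacian_2d(m):
        """Five-point stencil on an m x m grid (dense, for illustration)."""
        T = 2.0 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)
        return np.kron(np.eye(m), T) + np.kron(T, np.eye(m))

    def ic0(A):
        """Incomplete Cholesky: restrict all updates to entries that are
        already nonzero in tril(A) -- the IC(0) drop rule."""
        L = np.tril(A).copy()
        n = L.shape[0]
        for k in range(n):
            L[k, k] = np.sqrt(L[k, k])
            col = L[k + 1:, k]
            col[col != 0.0] /= L[k, k]
            for j in range(k + 1, n):
                if L[j, k] != 0.0:
                    mask = L[j:, j] != 0.0          # stay inside the pattern
                    L[j:, j][mask] -= L[j:, k][mask] * L[j, k]
        return L

    A = laplacian_2d(10)                            # 100 x 100 SPD matrix
    L = ic0(A)
    Li = np.linalg.inv(L)
    print("cond(A)           =", round(np.linalg.cond(A), 1))
    print("cond(L^-1 A L^-T) =", round(np.linalg.cond(Li @ A @ Li.T), 1))
    ```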

  8. Factor analysis of 27Al MAS NMR spectra for identifying nanocrystalline phases in amorphous geopolymers.

    Science.gov (United States)

    Urbanova, Martina; Kobera, Libor; Brus, Jiri

    2013-11-01

    Nanostructured materials offer enhanced physicochemical properties because of their large interfacial area. Typically, geopolymers with specifically synthesized nanosized zeolites are a promising material for the sorption of pollutants. The structural characterization of these aluminosilicates, however, continues to be a challenge. To circumvent complications resulting from the amorphous character of the aluminosilicate matrix and from the low concentrations of nanosized crystallites, we have proposed a procedure based on factor analysis of (27)Al MAS NMR spectra. The capability of the proposed method was tested on geopolymers that exhibited various tendencies to crystallize: (i) completely amorphous systems, (ii) X-ray amorphous systems with nanocrystalline phases, and (iii) highly crystalline systems. Although the recorded (27)Al MAS NMR spectra did not show visible differences between the amorphous systems (i) and the geopolymers with the nanocrystalline phase (ii), the applied factor analysis unambiguously distinguished these materials. The samples were separated into well-defined clusters, and the systems with an evolving crystalline phase were identified even before any crystalline fraction was detected by X-ray powder diffraction. The reliability of the proposed procedure was verified by comparison with (29)Si MAS NMR spectra. Factor analysis of (27)Al MAS NMR spectra thus has the ability to reveal spectroscopic features corresponding to nanocrystalline phases. Because the measurement time of (27)Al MAS NMR spectra is significantly shorter than that of (29)Si MAS NMR data, the proposed procedure is particularly suitable for the analysis of large sets of specifically synthesized geopolymers in which the formation of limited fractions of nanocrystalline phases is desired. Copyright © 2013 John Wiley & Sons, Ltd.
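
    The underlying chemometric step can be illustrated with synthetic data. The sketch below is not the authors' code, and the peak positions and widths are invented: it decomposes a matrix of simulated (27)Al spectra into principal components and shows that the factor scores separate purely amorphous samples from those carrying a weak narrow line from a nanocrystalline phase.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(1)
    ppm = np.linspace(-20, 80, 400)

    def gauss(center, width):
        return np.exp(-0.5 * ((ppm - center) / width) ** 2)

    amorphous = gauss(58, 8)                           # broad resonance only
    nanocryst = gauss(58, 8) + 0.15 * gauss(61, 1.5)   # plus a narrow line

    spectra = np.vstack([base + 0.01 * rng.standard_normal(ppm.size)
                         for base in [amorphous] * 10 + [nanocryst] * 10])

    scores = PCA(n_components=2).fit_transform(spectra)
    print("mean PC1 score, amorphous      :", scores[:10, 0].mean().round(3))
    print("mean PC1 score, nanocrystalline:", scores[10:, 0].mean().round(3))
    ```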

  9. Risk Factors Analysis on Traumatic Brain Injury Prognosis

    Institute of Scientific and Technical Information of China (English)

    Xiao-dong Qu; Resha Shrestha; Mao-de Wang

    2011-01-01

    Objective: To investigate the independent risk factors for traumatic brain injury (TBI) prognosis. Methods: A retrospective analysis was performed on 885 hospitalized TBI patients from January 1, 2003 to January 1, 2010 in the First Affiliated Hospital of Medical College of Xi'an Jiaotong University. Single-factor and logistic regression analyses were conducted to evaluate the association of different variables with TBI outcome. Results: The single-factor analysis revealed significant associations between several variables and TBI outcome, including age (P=0.044 for the age group 40-60, P<0.001 for the age group ≥60), complications (P<0.001), cerebrospinal fluid leakage (P<0.001), Glasgow Coma Scale (GCS) score (P<0.001), pupillary light reflex (P<0.001), shock (P<0.001), associated extra-cranial lesions (P=0.01), subdural hematoma (P<0.001), cerebral contusion (P<0.001), diffuse axonal injury (P<0.001), and subarachnoid hemorrhage (P<0.001), suggesting the influence of these factors on the prognosis of TBI. Furthermore, logistic regression analysis identified age, GCS score, pupillary light reflex, subdural hematoma, and subarachnoid hemorrhage as independent risk factors for TBI prognosis. Conclusion: Age, GCS score, pupillary light reflex, subdural hematoma, and subarachnoid hemorrhage may be risk factors influencing the prognosis of TBI. Paying attention to these factors might improve the outcome of TBI in clinical treatment.
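
    The two-step strategy, univariate screening followed by multivariate logistic regression, can be sketched as follows with synthetic data; the variables and coefficients below are illustrative, not the study's.

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(2)
    n = 885                                   # same cohort size as the study
    age60 = rng.integers(0, 2, n)             # age >= 60 (yes/no)
    gcs = rng.integers(3, 16, n)              # Glasgow Coma Scale score
    pupil = rng.integers(0, 2, n)             # absent pupillary light reflex

    # Simulate poor outcome from an assumed logistic model.
    lin = -1.0 + 0.8 * age60 - 0.25 * gcs + 1.1 * pupil
    y = (rng.random(n) < 1.0 / (1.0 + np.exp(-lin))).astype(float)

    X = sm.add_constant(np.column_stack([age60, gcs, pupil]))
    fit = sm.Logit(y, X).fit(disp=0)
    print("odds ratios:", np.exp(fit.params[1:]).round(2))
    print("p-values   :", fit.pvalues[1:].round(4))
    ```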

  10. Analysis on total factor productivity of Chinese provincial economy

    Institute of Scientific and Technical Information of China (English)

    GUO Qingwang; ZHAO Zhiyun; JIA Junxue

    2006-01-01

    This paper applies the nonparametric DEA-Malmquist index approach to estimate total factor productivity growth, efficiency change and the rate of technological progress in China's provincial economy from 1979 to 2003. The evolution of the distribution dynamics of relative labor productivity, relative total factor productivity, relative efficiency and relative technological progress is analyzed using kernel density estimation for the period from 1979 to 2003 in 29 provinces of China. Our analysis indicates that disparities in provincial economic growth are large and have been increasing, owing to the relatively large and increasing disparities in total factor productivity growth, especially in the rate of technological progress.
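
    The kernel-density step used to track distribution dynamics can be sketched as follows; the province-level numbers are fabricated for illustration only.

    ```python
    import numpy as np
    from scipy.stats import gaussian_kde

    rng = np.random.default_rng(3)
    tfp_1979 = rng.normal(1.0, 0.15, 29)    # 29 provinces, narrow spread
    tfp_2003 = rng.normal(1.0, 0.40, 29)    # wider spread: rising disparity

    near_mean = np.linspace(0.8, 1.2, 200)
    for year, sample in (("1979", tfp_1979), ("2003", tfp_2003)):
        density = gaussian_kde(sample)
        mass = np.trapz(density(near_mean), near_mean)
        print(year, "probability mass within 20% of the mean:", round(mass, 3))
    ```

    A shrinking probability mass near the mean is one simple signal of growing cross-province disparity.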

  11. Procedure-specific Risks of Thrombosis and Bleeding in Urological Non-cancer Surgery: Systematic Review and Meta-analysis.

    Science.gov (United States)

    Tikkinen, Kari A O; Craigie, Samantha; Agarwal, Arnav; Siemieniuk, Reed A C; Cartwright, Rufus; Violette, Philippe D; Novara, Giacomo; Naspro, Richard; Agbassi, Chika; Ali, Bassel; Imam, Maha; Ismaila, Nofisat; Kam, Denise; Gould, Michael K; Sandset, Per Morten; Guyatt, Gordon H

    2017-03-09

    Pharmacological thromboprophylaxis involves a trade-off between a reduction in venous thromboembolism (VTE) and increased bleeding. No guidance specific for procedure and patient factors exists in urology. To inform estimates of absolute risk of symptomatic VTE and bleeding requiring reoperation in urological non-cancer surgery. We searched for contemporary observational studies and estimated the risk of symptomatic VTE or bleeding requiring reoperation in the 4 wk after urological surgery. We used the GRADE approach to assess the quality of the evidence. The 37 eligible studies reported on 11 urological non-cancer procedures. The duration of prophylaxis varied widely both within and between procedures; for example, the median was 12.3 d (interquartile range [IQR] 3.1-55) for open recipient nephrectomy (kidney transplantation) studies and 1 d (IQR 0-1.3) for percutaneous nephrolithotomy, open prolapse surgery, and reconstructive pelvic surgery studies. Studies of open recipient nephrectomy reported the highest risks of VTE and bleeding (1.8-7.4% depending on patient characteristics and 2.4% for bleeding). The risk of VTE was low for 8/11 procedures (0.2-0.7% for patients with low/medium risk; 0.8-1.4% for high risk) and the risk of bleeding was low for 6/7 procedures (≤0.5%; no bleeding estimates for 4 procedures). The quality of the evidence supporting these estimates was low or very low. Although inferences are limited owing to low-quality evidence, our results suggest that extended prophylaxis is warranted for some procedures (eg, kidney transplantation procedures in high-risk patients) but not others (transurethral resection of the prostate and reconstructive female pelvic surgery in low-risk patients). The best evidence suggests that the benefits of blood-thinning drugs to prevent clots after surgery outweigh the risks of bleeding in some procedures (such as kidney transplantation procedures in patients at high risk of clots) but not others (such as prostate

  12. Analysis of related risk factors for pancreatic fistula after pancreaticoduodenectomy

    Institute of Scientific and Technical Information of China (English)

    Qi-Song Yu; He-Chao Huang; Feng Ding; Xin-Bo Wang

    2016-01-01

    Objective: To explore the risk factors for pancreatic fistula after pancreaticoduodenectomy, in order to provide a theoretical basis for effectively preventing its occurrence. Methods: A total of 100 patients who were admitted to our hospital from January 2012 to January 2015 and underwent pancreaticoduodenectomy were included in the study. Candidate risk factors for developing pancreatic fistula were collected for single-factor and logistic multi-factor analysis. Results: Among the included patients, 16 developed pancreatic fistula, an overall occurrence rate of 16% (16/100). The single-factor analysis showed that upper abdominal operation history, preoperative bilirubin, pancreatic texture, pancreatic duct diameter, intraoperative amount of bleeding, postoperative hemoglobin, and postoperative application of somatostatin were risk factors for developing pancreatic fistula (P<0.05). The multi-factor analysis showed that upper abdominal operation history, soft pancreatic texture, small pancreatic duct diameter, and low postoperative hemoglobin were independent risk factors for developing pancreatic fistula (OR=4.162, 6.104, 5.613, 4.034; P<0.05). Conclusions: The occurrence of pancreatic fistula after pancreaticoduodenectomy is closely associated with upper abdominal operation history, soft pancreatic texture, small pancreatic duct diameter, and low postoperative hemoglobin; effective measures should therefore be taken to reduce the occurrence of pancreatic fistula according to each patient's own condition.

  13. Environmental Performance in Countries Worldwide: Determinant Factors and Multivariate Analysis

    Directory of Open Access Journals (Sweden)

    Isabel Gallego-Alvarez

    2014-11-01

    The aim of this study is to analyze the environmental performance of countries and the variables that can influence it. We also performed a multivariate analysis using the HJ-biplot, an exploratory method that looks for hidden patterns in the data, obtained from the usual singular value decomposition (SVD) of the data matrix, to contextualize the countries, grouped by geographical area, and the variables relating to the environmental indicators included in the environmental performance index. The sample comprises 149 countries from different geographic areas. The findings of the empirical analysis emphasize that socioeconomic factors, such as economic wealth and education, as well as institutional factors represented by the style of public administration, in particular control of corruption, are determinant factors of environmental performance in the countries analyzed. In contrast, no effect on environmental performance was found for factors relating to the internal characteristics of a country or for political factors.
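
    A rough sketch of HJ-biplot coordinates, assuming the common formulation in which both row and column markers are scaled by the singular values of the standardized data matrix; the data below are synthetic, not the study's indicators.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    X = rng.normal(size=(149, 6))             # 149 countries x 6 indicators
    Z = (X - X.mean(axis=0)) / X.std(axis=0)  # column-standardized data

    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    row_markers = U[:, :2] * s[:2]            # countries in the first plane
    col_markers = Vt[:2].T * s[:2]            # environmental variables
    print(row_markers.shape, col_markers.shape)   # (149, 2) (6, 2)
    ```

    Plotting both marker sets in the same plane is what lets countries be read against the variables that drive their separation.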

  14. A Highly Sensitive Multicommuted Flow Analysis Procedure for Photometric Determination of Molybdenum in Plant Materials without a Solvent Extraction Step

    OpenAIRE

    Felisberto G. Santos; Boaventura F. Reis

    2017-01-01

    A highly sensitive analytical procedure for photometric determination of molybdenum in plant materials was developed and validated. This procedure is based on the reaction of Mo(V) with thiocyanate ions (SCN−) in acidic medium to form a compound that can be monitored at 474 nm and was implemented employing a multicommuted flow analysis setup. Photometric detection was performed using an LED-based photometer coupled to a flow cell with a long optical path length (200 mm) to achieve high sensit...

  15. Determining Dimensionality of Exercise Readiness Using Exploratory Factor Analysis.

    Science.gov (United States)

    Strohacker, Kelley; Zakrajsek, Rebecca A

    2016-06-01

    Assessment of "exercise readiness" is a central component of the flexible non-linear periodization (FNLP) method of organizing training workloads, but the underlying factor structure of this construct has not been empirically determined. The purpose of this study was to assess the construct dimensionality of exercise readiness using exploratory factor analysis, the results of which serve as initial steps in developing a brief measure of exercise readiness. Participants consisted of students recruited from undergraduate kinesiology courses at a racially diverse southern university. Independent, anonymous online survey data were collected across three stages: 1) generation of the item pool (n = 290), 2) assessment of face validity and refinement of the item pool (n = 168), and 3) exploratory factor analysis (n = 684). A principal axis factor analysis was conducted with 41 items using oblique rotation (promax). Four statistically significant factors, as determined through parallel analysis, explained 61.5% of the variance in exercise readiness. Factor 1 contained items representing vitality (e.g., lively, revived). Factor 2 items related to physical fatigue (e.g., tired, drained). Factors 3 and 4 were descriptive of discomfort (e.g., pain, sick) and health (e.g., healthy, fit), respectively. This inductive approach indicates that exercise readiness comprises four dimensions: vitality, physical fatigue, discomfort, and health. This finding supports the readiness assessment techniques currently recommended for practitioners according to the FNLP model. These results serve as a theoretical foundation upon which to further develop and refine a brief survey instrument to measure exercise readiness. Key points: Assessment of exercise readiness is a key component in implementing an exercise program based on flexible nonlinear periodization, but the dimensionality of this concept has not been empirically determined. Based on a series of surveys and a robust exploratory factor analysis
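
    Parallel analysis, the retention rule the abstract reports, keeps factors whose observed eigenvalues exceed those obtained from random data of the same dimensions. Below is a minimal sketch mirroring the study's dimensions (684 respondents, 41 items); the data themselves are simulated.

    ```python
    import numpy as np

    def parallel_analysis(data, n_sims=100, quantile=95, seed=0):
        rng = np.random.default_rng(seed)
        n, p = data.shape
        obs = np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))[::-1]
        rand = np.empty((n_sims, p))
        for s in range(n_sims):
            r = rng.standard_normal((n, p))
            rand[s] = np.linalg.eigvalsh(np.corrcoef(r, rowvar=False))[::-1]
        threshold = np.percentile(rand, quantile, axis=0)
        keep = obs > threshold
        return int(np.argmax(~keep)) if not keep.all() else p  # leading run

    rng = np.random.default_rng(5)
    latent = rng.standard_normal((684, 4))           # four planted factors
    loadings = rng.uniform(0.4, 0.8, (4, 41))
    items = latent @ loadings + rng.standard_normal((684, 41))
    print("factors retained:", parallel_analysis(items))
    ```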

  16. The Latarjet Procedure at the National Football League Scouting Combine: An Imaging and Performance Analysis.

    Science.gov (United States)

    LeBus, George F; Chahla, Jorge; Sanchez, George; Akamefula, Ramesses; Moatshe, Gilbert; Phocas, Alexandra; Price, Mark D; Whalen, James M; LaPrade, Robert F; Provencher, Matthew T

    2017-09-01

    The Latarjet procedure is commonly performed in the setting of glenoid bone loss for the treatment of recurrent anterior shoulder instability; however, little is known regarding the outcomes of this procedure in elite American football players. The purposes were to (1) determine the prevalence, clinical features, and imaging findings of elite college football athletes who presented to the National Football League (NFL) Combine with a previous Latarjet procedure and (2) describe these athletes' performance in the NFL in terms of draft status and initial playing time. Case series; Level of evidence, 4. After review of all football players who participated in the NFL Combine from 2009 to 2016, any player with a previous Latarjet procedure was included in this study. Medical records, position on the field, and draft position were recorded for each player. In addition, imaging studies were reviewed to determine fixation type, hardware complications, and status of the bone block. For those players who were ultimately drafted, performance was assessed based on games played and started, total snaps, and percentage of eligible snaps in which the player participated during his rookie season. Overall, 13 of 2617 players (0.5%) had undergone a Latarjet procedure. Radiographically, 8 of 13 (61%) showed 2-screw fixation, while 5 of 13 (39%) had 1 screw. Of the 13 players, 6 (46%) demonstrated hardware complications. All players had evidence of degenerative changes on plain radiographs, with 10 (77%) graded as mild, 1 (8%) as moderate, and 2 (15%) as severe according to the Samilson-Prieto classification. Six of the 13 (46%) players went undrafted, while the remaining 7 (54%) were drafted; however, no player participated in more than half of the plays for which he was eligible during his rookie season. Only a small percentage of players at the NFL Combine (0.5%) had undergone a Latarjet procedure. High rates of postoperative complications and radiographically confirmed degenerative change were observed. Athletes who had undergone a Latarjet

  17. ANALYSIS OF RISK FACTORS IN 3901 PATIENTS WITH STROKE

    Institute of Scientific and Technical Information of China (English)

    Xin-Feng Liu; Guy van Melle; Julien Bogousslavsky

    2005-01-01

    Objective: To estimate the frequency of various risk factors for overall stroke and to identify risk factors for cerebral infarction (CI) versus intracerebral hemorrhage (ICH) in a large hospital-based stroke registry. Methods: Data from a total of 3901 patients, consisting of 3525 patients with CI and 376 patients with ICH, were prospectively coded and entered into a computerized data bank. Results: Hypertension and smoking were the most prominent factors affecting overall stroke, followed by mild internal carotid artery stenosis (<50%), hypercholesterolemia, transient ischemic attacks (TIAs), diabetes mellitus, and cardiac ischemia. Univariate analysis showed that the factors in males significantly associated with CI versus ICH were old age, a family history of stroke, and intermittent claudication, whereas in females the factors were oral contraception and migraine. By multivariate analysis, in all patients, the factors significantly associated with CI as opposed to ICH were smoking, hypercholesterolemia, migraine, TIAs, atrial fibrillation, structural heart disease, and arterial disease. Hypertension was the only significant factor associated with ICH versus CI. Conclusions: The risk factors for ischemic and hemorrhagic stroke are not exactly the same. Cardiac and arterial disease are the most powerful factors associated with CI rather than ICH.

  18. Analysis of effect factors-based stochastic network planning model

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    Looking at all the indeterminate factors as a whole and regarding activity durations as independent random variables, traditional stochastic network planning models ignore the inevitable relationships and dependence among activity durations when more than one activity may be affected by the same indeterminate factors. On the basis of an analysis of the indeterminate factors affecting durations, the effect factors-based stochastic network planning (EFBSNP) model is proposed, which emphasizes the effects on the project period not only of logistic and organizational relationships, but also of the dependence among activity durations caused by shared indeterminate factors. Through indeterminate factor analysis, the model quantitatively extracts and describes the indeterminate effect factors, and then accounts for their effect on the schedule using the Monte Carlo simulation technique. The method is flexible enough to deal with effect factors and is consistent with practice. Software has been developed in Visual Studio .NET to simplify the model-based calculation. Finally, a case study is included to demonstrate the applicability of the proposed model, which shows some advantages over existing models in comparison.
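
    The model's central point, that a shared indeterminate factor couples activity durations and changes the simulated project duration, can be shown in a toy Monte Carlo run (all numbers invented, not the paper's case study):

    ```python
    import numpy as np

    rng = np.random.default_rng(6)
    n = 100_000
    weather = rng.normal(0, 1, n)             # shared effect factor

    # Both parallel activities are lengthened by the same bad weather.
    a = 10 + 2 * weather + rng.normal(0, 1, n)
    b = 12 + 2 * weather + rng.normal(0, 1, n)
    dependent = np.maximum(a, b)              # project ends with the slower one

    # Independence assumption: shuffle one duration to break the coupling.
    independent = np.maximum(a, rng.permutation(b))

    print("P(project > 16 days), dependent  :", (dependent > 16).mean())
    print("P(project > 16 days), independent:", (independent > 16).mean())
    ```

    The dependent case has a noticeably fatter tail, which is exactly the risk that independence-based models understate.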

  19. Musculoskeletal diseases of spine and risk factors during Dentistry: Multilevel ergonomic analysis

    Directory of Open Access Journals (Sweden)

    Charilaos Koutis

    2011-07-01

    The prevalence of occupational musculoskeletal diseases (MSDs) among dentists is estimated to be high, despite ergonomic interventions in this sector. The aims of the present study were (a) to evaluate spine MSDs in dentists and (b) to assess the risk factors related to dental practice. Material and Method: The sample consisted of 16 dentists (n = 16). The participants were divided into two groups based on MSDs of the spine. A multilevel ergonomic analysis was conducted in both groups, evaluating individual, physical and occupational risk factors during nine dental procedures. Data were analyzed using direct methods (video, observation, amended OWAS postural analysis), an indirect method (questionnaire) and quantitative methods of ergonomic analysis (computerized mediball postural stabilizer cushion). Results: The most frequent spine MSDs among the dentists in the present study were localized in the low back (66.7%) and neck (8.3%). Based on the OWAS analysis of 2348 working postures, statistically significant correlations were found between dentists' MSDs and factors concerning both the dentists themselves (weakness of the stabilizer muscles of the spine, awkward positions during working time, fatigue; p < 0.05) and the nature of dental work (specific dental procedures, the position of patients, tools and dentists during working time, certain areas of the mouth, working hours, lack of breaks, etc.; p < 0.05). Conclusions: Low back pain and neck pain are the most frequent MSDs of the dentists' spine. They are related to individual and occupational factors that could be prevented using proper ergonomic interventions.

  1. [Diagnosis and treatment value of colposcopy and loop electrosurgical excision procedure in microinvasive cervical cancer: analysis of 135 cases].

    Science.gov (United States)

    Xiao, F Y; Wang, Q; Zheng, R L; Chen, M; Su, T T; Sui, L

    2016-03-01

    To explore the sensitivity and specificity of colposcopy-directed biopsy (CDB) and the value of the loop electrosurgical excision procedure (LEEP) for the diagnosis and treatment of microinvasive cervical cancer (MCC). One hundred and thirty-five patients with MCC were diagnosed by LEEP in the Obstetrics and Gynecology Hospital, Fudan University from April 2008 to November 2010, and were retrospectively analyzed with respect to CDB diagnoses and treatment following LEEP. According to the patient's desire for preservation of fertility and cone margin status, strategies following LEEP included follow-up, second LEEP, hysterectomy, modified radical hysterectomy and radical hysterectomy. Single and multiple factors related to residual lesions after LEEP were analyzed with the Pearson chi-square test and a logistic regression model, respectively. CDB diagnosed MCC with a sensitivity of 4.4% (6/135), specificity of 100.0% (4 680/4 680), and false negative rate of 95.6% (129/135). Among the 135 patients, 29 did not receive further treatment in our hospital and lost contact. One hundred and six patients had secondary treatment or follow-up in our hospital: 4 were closely followed up, and 102 received further treatment, comprising 6 cases of second LEEP (3 of whom received extrafascial hysterectomy after repeat LEEP), 59 hysterectomies, 14 modified radical hysterectomies and 26 radical hysterectomies. For factors related to residual lesions after LEEP, single-factor analysis showed that the rates of residual lesions in patients aged 27-39, 40-49 and 50-65 years were respectively 19.0% (11/58), 15.4% (10/65) and 5/12 (χ(2)=4.505, P=0.105). Residual lesions occurred in 24.7% (23/93) of patients with positive LEEP margins, more than the 7.1% (3/42) of patients with negative LEEP margins (χ(2)=5.756, P=0.016). The rates of residual lesions in patients with positive endocervical, ectocervical and deep stromal margins were respectively 29

  2. Contemporary analysis of the intraoperative and perioperative complications of neurosurgical procedures performed in the sitting position.

    Science.gov (United States)

    Himes, Benjamin T; Mallory, Grant W; Abcejo, Arnoley S; Pasternak, Jeffrey; Atkinson, John L D; Meyer, Fredric B; Marsh, W Richard; Link, Michael J; Clarke, Michelle J; Perkins, William; Van Gompel, Jamie J

    2017-07-01

    OBJECTIVE Historically, performing neurosurgery with the patient in the sitting position offered advantages such as improved visualization and gravity-assisted retraction. However, this position fell out of favor at many centers due to the perceived risk of venous air embolism (VAE) and other position-related complications. Some neurosurgical centers continue to perform sitting-position cases in select patients, often using modern monitoring techniques that may improve procedural safety. Therefore, this paper reports the risks associated with neurosurgical procedures performed in the sitting position in a modern series. METHODS The authors reviewed the anesthesia records for instances of clinically significant VAE and other complications for all neurosurgical procedures performed in the sitting position between January 1, 2000, and October 8, 2013. In addition, a prospectively maintained morbidity and mortality log of these procedures was reviewed for instances of subdural or intracerebral hemorrhage, tension pneumocephalus, and quadriplegia. Both overall and specific complication rates were calculated in relation to the specific type of procedure. RESULTS In a series of 1792 procedures, the overall complication rate related to the sitting position was 1.45%, which included clinically significant VAE, tension pneumocephalus, and subdural hemorrhage. The rate of any detected VAE was 4.7%, but the rate of VAE requiring clinical intervention was 1.06%. The risk of clinically significant VAE was highest in patients undergoing suboccipital craniotomy/craniectomy with a rate of 2.7% and an odds ratio (OR) of 2.8 relative to deep brain stimulator cases (95% confidence interval [CI] 1.2-70, p = 0.04). Sitting cervical spine cases had a comparatively lower complication rate of 0.7% and an OR of 0.28 as compared with all cranial procedures (95% CI 0.12-0.67, p < 0.01). Sitting cervical cases were further subdivided into extradural and intradural procedures. The rate of

  3. Procedural learning is impaired in dyslexia: Evidence from a meta-analysis of serial reaction time studies

    Science.gov (United States)

    Lum, Jarrad A.G.; Ullman, Michael T.; Conti-Ramsden, Gina

    2013-01-01

    A number of studies have investigated procedural learning in dyslexia using serial reaction time (SRT) tasks. Overall, the results have been mixed, with evidence of both impaired and intact learning reported. We undertook a systematic search of studies that examined procedural learning using SRT tasks, and synthesized the data using meta-analysis. A total of 14 studies were identified, representing data from 314 individuals with dyslexia and 317 typically developing control participants. The results indicate that, on average, individuals with dyslexia have worse procedural learning abilities than controls, as indexed by sequence learning on the SRT task. The average weighted standardized mean difference (the effect size) was found to be 0.449 (CI95: .204, .693) and was statistically significant, consistent with impaired procedural learning in dyslexia. PMID:23920029
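
    Random-effects pooling of this kind is commonly done with the DerSimonian-Laird estimator. Below is a minimal sketch with made-up per-study effect sizes and variances (not the data behind the 0.449 reported above):

    ```python
    import numpy as np

    d = np.array([0.62, 0.31, 0.55, 0.18, 0.74, 0.41])  # per-study SMDs
    v = np.array([0.05, 0.04, 0.06, 0.03, 0.08, 0.05])  # sampling variances

    w = 1.0 / v                                         # fixed-effect weights
    mu_fe = np.sum(w * d) / w.sum()
    Q = np.sum(w * (d - mu_fe) ** 2)                    # Cochran's Q
    tau2 = max(0.0, (Q - (len(d) - 1)) /
               (w.sum() - np.sum(w ** 2) / w.sum()))    # between-study variance

    w_re = 1.0 / (v + tau2)                             # random-effects weights
    mu = np.sum(w_re * d) / w_re.sum()
    se = np.sqrt(1.0 / w_re.sum())
    print(f"pooled SMD = {mu:.3f}, "
          f"95% CI = ({mu - 1.96 * se:.3f}, {mu + 1.96 * se:.3f})")
    ```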

  4. Using CV-GLUE procedure in analysis of wetland model predictive uncertainty.

    Science.gov (United States)

    Huang, Chun-Wei; Lin, Yu-Pin; Chiang, Li-Chi; Wang, Yung-Chieh

    2014-07-01

    This study develops a procedure that is related to Generalized Likelihood Uncertainty Estimation (GLUE), called the CV-GLUE procedure, for assessing the predictive uncertainty that is associated with different model structures with varying degrees of complexity. The proposed procedure comprises model calibration, validation, and predictive uncertainty estimation in terms of a characteristic coefficient of variation (characteristic CV). The procedure first performed two-stage Monte-Carlo simulations to ensure predictive accuracy by obtaining behavior parameter sets, and then the estimation of CV-values of the model outcomes, which represent the predictive uncertainties for a model structure of interest with its associated behavior parameter sets. Three commonly used wetland models (the first-order K-C model, the plug flow with dispersion model, and the Wetland Water Quality Model; WWQM) were compared based on data that were collected from a free water surface constructed wetland with paddy cultivation in Taipei, Taiwan. The results show that the first-order K-C model, which is simpler than the other two models, has greater predictive uncertainty. This finding shows that predictive uncertainty does not necessarily increase with the complexity of the model structure because in this case, the more simplistic representation (first-order K-C model) of reality results in a higher uncertainty in the prediction made by the model. The CV-GLUE procedure is suggested to be a useful tool not only for designing constructed wetlands but also for other aspects of environmental management.
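
    A schematic of the CV-GLUE idea, with a toy first-order decay model standing in for the wetland models (all values assumed): Monte Carlo sampling, retention of "behavioural" parameter sets, and a characteristic CV summarizing the spread of their predictions.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    t = np.linspace(0, 10, 25)
    obs = 100 * np.exp(-0.35 * t) + rng.normal(0, 3, t.size)  # synthetic data

    k_samples = rng.uniform(0.05, 1.0, 5000)            # Monte Carlo stage
    preds = 100 * np.exp(-np.outer(k_samples, t))
    rmse = np.sqrt(((preds - obs) ** 2).mean(axis=1))

    # Acceptance criterion: best 5% by RMSE (a stand-in for a formal
    # likelihood threshold).
    behavioural = preds[rmse < np.percentile(rmse, 5)]
    cv = behavioural.std(axis=0) / behavioural.mean(axis=0)
    print("characteristic CV:", round(cv.mean(), 4))
    ```

    Comparing this characteristic CV across model structures of different complexity is the comparison the study performs for the three wetland models.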

  5. A modified release analysis procedure using advanced froth flotation mechanisms: Technical report, March 1, 1996-May 31, 1996

    Energy Technology Data Exchange (ETDEWEB)

    Honaker, R.Q.; Mohanty, M.K. [Southern Illinois Univ., Department of Mining Engineering, Carbondale, IL (United States)]

    1997-04-01

    Recent studies indicate that the optimum separation performances achieved by multiple stage cleaning using various column flotation technologies and single stage cleaning using a Packed-Flotation Column are superior to the performance achieved by the traditional release procedure, especially in terms of pyritic sulfur rejection. This superior performance is believed to be the result of the advanced flotation mechanisms provided by column flotation technologies. Thus, the objective of this study is to develop a suitable process utilizing the advanced froth flotation mechanisms to characterize the true flotation response of a coal sample. Work in this reporting period concentrated on developing a modified coal flotation characterization procedure, termed the Advanced Flotation Washability (AFW) technique. The new apparatus used for this procedure is essentially a batch-operated packed-column device equipped with a controlled wash water system. Several experiments were conducted using the AFW technique on a relatively high sulfur, -100 mesh Illinois No. 5 run-of-mine coal sample collected from a local coal preparation plant. Similar coal characterization experiments were also conducted using the traditional release and tree analysis procedures. The best performance curve generated using the AFW technique was found to be superior to the optimum curve produced by the traditional procedures. For example, at a combustible recovery of 80%, a 19% improvement in the reduction of the pyritic sulfur content was achieved by the AFW method while the ash reduction was also enhanced by 4%. Several tests are ongoing to solidify the AFW procedure and verify the above finding by conducting ANOVA analyses to evaluate the repeatability of the AFW method and the statistical significance of the difference in the performance achieved from the traditional and modified coal characterization procedures.

  6. Identification of noise in linear data sets by factor analysis

    Energy Technology Data Exchange (ETDEWEB)

    Roscoe, B.A.; Hopke, P.K.

    1981-01-01

    The approach to classical factor analysis described in this paper, i.e., performing the analysis for varying numbers of factors without prior assumptions about the number of factors, prevents one from obtaining erroneous results due to inherent computer-code assumptions. Identification of a factor containing most of the variance of one variable with little variance of other variables pinpoints a possible difficulty in the data if the singularity has no obvious physical significance. Examination of the factor scores will determine whether the problem is isolated to a few samples or spread over all the samples. Having this information, one may then go back to the raw data and take the appropriate corrective action. Classical factor analysis has the ability to identify several types of errors in data after they have been generated. It is therefore ideally suited for scanning large data sets. The ease of the identification technique makes it a beneficial tool to use before the reduction and analysis of large data sets and should, in the long run, save time and effort.

  7. Development and validation of a novel data analysis procedure for spherical nanoindentation

    Science.gov (United States)

    Pathak, Siddhartha

    This dissertation presents a novel approach for converting the raw load-displacement data measured in spherical nanoindentation into much more meaningful indentation stress-strain curves. This new method entails a novel definition of the indentation strain and a new procedure for establishing the effective zero point in the raw dataset, both with and without the use of the continuous stiffness measurement (CSM) data. The concepts presented here have been validated by simulations and experiments on isotropic metallic samples of aluminum and tungsten. It is demonstrated that these new indentation stress-strain curves accurately capture the loading and unloading elastic moduli, the indentation yield points, and the post-yield characteristics of the tested samples. Subsequently this approach has been applied to a wide range of material systems including metals, carbon nanotubes (CNTs), ceramics and bone. In metals, these data analysis techniques have been highly successful in explaining several of the surface preparation artifacts typically encountered during nanoindentation measurements. The approach has also been extended to anisotropic polycrystalline samples, where a judicious combination of orientation imaging microscopy (OIM) and nanoindentation was used to estimate, for the first time, the changes in slip resistance in deformed grains of Fe-3%Si steel. Similar studies on dense CNT brushes, with ~10 times higher density than CNT brushes produced by other methods, demonstrate the higher modulus (~17-20 GPa) and orders-of-magnitude higher resistance to buckling of these brushes compared with vapor-phase-deposited CNT brushes or carbon walls, showing their promise for energy-absorbing coatings. Even for a complex hierarchical material system like bone, these techniques have elucidated trends in the elastic and yield behavior at the lamellar level in the femora (thigh bones) of different inbred mouse strains. Thus bone with a higher mineral-to-matrix ratio
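
    A hedged sketch of the conversion, using definitions commonly cited for this family of methods rather than necessarily the dissertation's exact ones: indentation stress P/(pi a^2), indentation strain 4h/(3 pi a), and contact radius a = sqrt(R h) in the elastic Hertzian regime. The data are synthetic and all numbers assumed.

    ```python
    import numpy as np

    E_star = 90e9                 # assumed effective modulus, Pa
    R = 10e-6                     # assumed indenter radius, m
    h = np.linspace(1e-9, 50e-9, 100)                 # displacement, m
    P = (4.0 / 3.0) * E_star * np.sqrt(R) * h ** 1.5  # Hertzian load

    a = np.sqrt(R * h)                                # elastic contact radius
    stress = P / (np.pi * a ** 2)                     # indentation stress
    strain = 4.0 * h / (3.0 * np.pi * a)              # indentation strain

    slope = np.polyfit(strain, stress, 1)[0]
    print(f"recovered modulus: {slope / 1e9:.1f} GPa "
          f"(input {E_star / 1e9:.0f} GPa)")
    ```

    With these definitions the elastic portion of the stress-strain curve is a straight line through the origin whose slope recovers the effective modulus, which is the consistency check such analyses rely on.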

  8. The amplitude reduction factor and the cumulant expansion method: crucial factors in the structural analysis of alkoxide precursors in solution.

    Science.gov (United States)

    Bauer, Matthias; Bertagnolli, Helmut

    2007-12-13

    The transition-metal alkoxide yttrium 2-methoxyethoxide Y(OEtOMe)(3) in solution is studied as a model system of the large class of alkoxide precursors used in the sol-gel process by means of EXAFS spectroscopy. The discussion is focused on the amplitude reduction factor S0^2 and the cumulant expansion method. If asymmetry is present in the radial distribution function, the determination of the correct structural model can only be achieved by balancing multiple Gaussian shell fits against a single-shell fit with a third cumulant C3. A method to identify the best model, based on statistical parameters of the EXAFS fit, is proposed and checked with two well-known reference compounds, Y(5)O(O(i)Pr)(13) and Y(acac)(3).3H(2)O, and applied to the structurally unknown solution of Y(OEtOMe)(3) in 2-methoxyethanol. The two references are also used to discuss the transferability of S0^2 values determined from reference compounds to unknown samples. A model-free procedure to identify the correct amplitude reduction factor S0^2 by making use of fits with different k-weighting schemes is critically investigated. This procedure, which does not require any crystallographic data, is used for the case of Y(OEtOMe)(3) in solution, where significant differences in the amplitude reduction factor of both the oxygen and yttrium shells in comparison to the reference Y(5)O(O(i)Pr)(13) were found. With such a detailed analysis of EXAFS data, a reliable characterization of Y(OEtOMe)(3) in 2-methoxyethanol by means of EXAFS spectroscopy is possible. The decameric structure unit found in solid Y(OEtOMe)(3) is not preserved; rather, a pentameric framework similar to that in Y(5)O(O(i)Pr)(13) is formed.
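
    For reference, S0^2 and the third cumulant C3 enter the standard single-scattering EXAFS equation as commonly written (the paper's notation may differ slightly): S0^2 scales the overall amplitude, and is therefore strongly correlated with the coordination number N, while C3 skews the effective distance distribution and appears as a k^3 phase correction.

    ```latex
    \chi(k) = \sum_j \frac{N_j \, S_0^2 \, F_j(k)}{k R_j^2}
              \, e^{-2k^2\sigma_j^2} \, e^{-2R_j/\lambda(k)}
              \, \sin\!\left( 2kR_j + \phi_j(k) - \tfrac{4}{3} C_{3,j} k^3 \right)
    ```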

  9. Use of dental services: an analysis of visits, procedures and providers, 1996.

    Science.gov (United States)

    Manski, Richard J; Moeller, John F

    2002-02-01

    While many studies have provided data on Americans' access to dental care, few have provided a detailed understanding of what specific treatments patients receive. This article provides detailed information about the types of dental services that Americans receive and the types of providers who render them. The authors provide national estimates for the U.S. civilian noninstitutionalized population in several socioeconomic and demographic categories regarding dental visits, procedures performed and the types of providers who performed them, using household data from the 1996 Medical Expenditure Panel Survey, or MEPS. Data show that while the combination of diagnostic and preventive services adds up to 65 percent of all dental procedures, the combination of periodontal and endodontic procedures represents only 3 percent. Additionally, while 81 percent of all dental visits were reported as visits to general dentists, approximately 7 percent and 5 percent of respondents who had had a dental visit reported having visited orthodontists or oral surgeons, respectively. MEPS data show the magnitude and nature of dental visits in aggregate and for each of several demographic and socioeconomic categories. This information establishes a nationally representative baseline for the U.S. population in terms of rates of utilization, number and types of procedures and variations in types of providers performing the procedures. These nationally representative estimates include data elements that describe specific dental visits, dental procedures and type of provider, and they offer details that are useful, important and not found elsewhere. By understanding these analyses, U.S. dentists will be better positioned to provide care and better meet the dental care needs of all Americans.

  10. Biological risk factors for suicidal behaviors: a meta-analysis.

    Science.gov (United States)

    Chang, B P; Franklin, J C; Ribeiro, J D; Fox, K R; Bentley, K H; Kleiman, E M; Nock, M K

    2016-09-13

    Prior studies have proposed a wide range of potential biological risk factors for future suicidal behaviors. Although strong evidence exists for biological correlates of suicidal behaviors, it remains unclear if these correlates are also risk factors for suicidal behaviors. We performed a meta-analysis to integrate the existing literature on biological risk factors for suicidal behaviors and to determine their statistical significance. We conducted a systematic search of PubMed, PsycInfo and Google Scholar for studies that used a biological factor to predict either suicide attempt or death by suicide. Inclusion criteria included studies with at least one longitudinal analysis using a biological factor to predict either of these outcomes in any population through 2015. From an initial screen of 2541 studies we identified 94 cases. Random effects models were used for both meta-analyses and meta-regression. The combined effect of biological factors produced statistically significant but relatively weak prediction of suicide attempts (weighted mean odds ratio (wOR)=1.41; CI: 1.09-1.81) and suicide death (wOR=1.28; CI: 1.13-1.45). After accounting for publication bias, prediction was nonsignificant for both suicide attempts and suicide death. Only two factors remained significant after accounting for publication bias: cytokines (wOR=2.87; CI: 1.40-5.93) and low levels of fish oil nutrients (wOR=1.09; CI: 1.01-1.19). Our meta-analysis revealed that currently known biological factors are weak predictors of future suicidal behaviors. This conclusion should be interpreted within the context of the limitations of the existing literature, including long follow-up intervals and a lack of tests of interactions with other risk factors. Future studies addressing these limitations may more effectively test for potential biological risk factors.

  11. The Latarjet Procedure at the National Football League Scouting Combine: An Imaging and Outcome Analysis

    Science.gov (United States)

    Provencher, Matthew T.; Lebus, George; Chahla, Jorge; Sanchez, George; Ferrari, Marcio Balbinotti; Moatshe, Gilbert

    2017-01-01

    Objectives: The Latarjet procedure is commonly implemented in the treatment of shoulder instability in the setting of glenoid bone loss, particularly in contact athletes such as American football players; however, little is known regarding the outcomes and failure rates of this procedure in collegiate football players prior to participation in the National Football League (NFL). The purposes of this study were to (1) determine the prevalence, clinical features, and imaging findings of National Collegiate Athletic Association (NCAA) football athletes who presented to the NFL Scouting Combine having undergone the Latarjet procedure and (2) evaluate the impact of this procedure, including imaging findings, on these athletes' performance as they entered the NFL. Methods: All football players at the NFL Combine from 2009 to 2016 were reviewed. Inclusion criteria were any player who had a documented Latarjet procedure in the past and participated in medical and performance testing at the NFL Combine. The medical records, imaging, games and position played, and draft position of each player who had undergone a Latarjet procedure were then analyzed. In addition, radiographic features of each player with a Latarjet procedure were evaluated, including type of fixation, hardware complications, position of the bone block, degenerative changes, and healing/union/bony resorption. NFL performance outcomes (draft position and number of games played and started within the first two years) were assessed. Results: Of the 2285 players who participated in the NFL Combine between 2009 and 2016, 13 athletes (0.6%) had undergone a Latarjet procedure. Six patients had two-screw fixation of the bone block while 7 had only one screw; broken hardware was seen in two patients (one with one screw and one with two screws) and bent screws in two patients (one with one screw and one with two screws). Screw prominence was observed in 1 patient. Eight of the 13 patients

  12. An analysis of marketing authorisation applications via the mutual recognition and decentralised procedures in Europe

    DEFF Research Database (Denmark)

    Ebbers, Hans C; Langedijk, Joris; Bouvy, Jacoline C;

    2015-01-01

    PURPOSE: The aim of this study is to provide a comprehensive overview of the outcomes of marketing authorisation applications via the mutual recognition and decentralised procedures (MRP/DCP) and assess determinants of licensing failure during CMDh referral procedures. METHODS: All MRP...... in a referral has reduced substantially over the past years, no specific time trend could be observed regarding the frequency of referrals resulting in licensing failure. Increased knowledge at the level of companies and regulators has reduced the frequency of late-stage failure of marketing applications via...

  13. Factors Surgical Team Members Perceive Influence Choices of Wearing or Not Wearing Personal Protective Equipment during Operative/Invasive Procedures

    Science.gov (United States)

    Cuming, Richard G.

    2009-01-01

    Exposure to certain bloodborne pathogens can prematurely end a person's life. Healthcare workers (HCWs), especially those who are members of surgical teams, are at increased risk of exposure to these pathogens. The proper use of personal protective equipment (PPE) during operative/invasive procedures reduces that risk. Despite this, some HCWs fail…

  14. Assessing the Impact of Faking on Binary Personality Measures: An IRT-Based Multiple-Group Factor Analytic Procedure

    Science.gov (United States)

    Ferrando, Pere J.; Anguiano-Carrasco, Cristina

    2009-01-01

    This article proposes a model-based multiple-group procedure for assessing the impact of faking on personality measures and the scores derived from these measures. The assessment is at the item level and the base model, which is intended for binary items, can be parameterized both as an Item Response Theory (IRT) model and as an Item…

  15. Influence of handling procedures and biological factors on the QIM evaluation of whole herring (Clupea harengus L.)

    DEFF Research Database (Denmark)

    Nielsen, Durita; Hyldig, Grethe

    2004-01-01

    QIM evaluations were performed on herring from ten seasonally and geographically distributed cruises and related to handling procedures and biological and chemical parameters. The results showed clear effects from onboard storage methods. The quality of iced herring was superior to the quality of...

  16. Optimized Standard Operating Procedures for the Analysis of Cerebrospinal Fluid Aβ42 and the Ratios of Aβ Isoforms Using Low Protein Binding Tubes

    Science.gov (United States)

    Vanderstichele, Hugo Marcel Johan; Janelidze, Shorena; Demeyer, Leentje; Coart, Els; Stoops, Erik; Herbst, Victor; Mauroo, Kimberley; Brix, Britta; Hansson, Oskar

    2016-01-01

    Background: Reduced cerebrospinal fluid (CSF) concentration of amyloid-β1-42 (Aβ1-42) reflects the presence of amyloidopathy in brains of subjects with Alzheimer’s disease (AD). Objective: To qualify the use of Aβ1-42/Aβ1-40 for improvement of standard operating procedures (SOP) for measurement of CSF Aβ with a focus on CSF collection, storage, and analysis. Methods: Euroimmun ELISAs for CSF Aβ isoforms were used to set up a SOP with respect to recipient properties (low binding, polypropylene), volume of tubes, freeze/thaw cycles, addition of detergents (Triton X-100, Tween-20) in collection or storage tubes or during CSF analysis. Data were analyzed with linear repeated measures and mixed effects models. Results: Optimization of CSF analysis included a pre-wash of recipients (e.g., tubes, 96-well plates) before sample analysis. Using the Aβ1-42/Aβ1-40 ratio, in contrast to Aβ1-42, eliminated effects of tube type, additional freeze/thaw cycles, or effect of CSF volumes for polypropylene storage tubes. ‘Low binding’ tubes reduced the loss of Aβ when aliquoting CSF or in function of additional freeze/thaw cycles. Addition of detergent in CSF collection tubes resulted in an almost complete absence of variation in function of collection procedures, but affected the concentration of Aβ isoforms in the immunoassay. Conclusion: The ratio of Aβ1-42/Aβ1-40 is a more robust biomarker than Aβ1-42 toward (pre-) analytical interfering factors. Further, ‘low binding’ recipients and addition of detergent in collection tubes are able to remove effects of SOP-related confounding factors. Integration of the Aβ1-42/Aβ1-40 ratio and ‘low-binding tubes’ into guidance criteria may speed up worldwide standardization of CSF biomarker analysis. PMID:27258423

  17. WHY DO SOME NATIONS SUCCEED AND OTHERS FAIL IN INTERNATIONAL COMPETITION? FACTOR ANALYSIS AND CLUSTER ANALYSIS AT EUROPEAN LEVEL

    Directory of Open Access Journals (Sweden)

    Popa Ion

    2015-07-01

    As stated by Michael Porter (1998: 57), 'this is perhaps the most frequently asked economic question of our times.' However, a widely accepted answer is still missing. The aim of this paper is not to provide the BIG answer to such a BIG question, but rather to provide a different perspective on competitiveness at the national level. In this respect, we followed a two-step procedure called "tandem analysis" (OECD, 2008): first we employed a factor analysis to reveal the underlying factors of the initial dataset, followed by a cluster analysis aimed at classifying the 35 countries according to the main characteristics of competitiveness resulting from the factor analysis. The findings revealed that clustering the 35 states on the first two factors, Smart Growth and Market Development, which recover almost 76% of the common variability of the twelve original variables, highlights four clusters and yields a series of useful insights into their characteristics.
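
    The two-step "tandem analysis" can be sketched directly: extract factors first, then cluster the observations on their factor scores. The stand-in data below only mirror the study's dimensions (35 countries, twelve indicators); the group structure is planted for illustration.

    ```python
    import numpy as np
    from sklearn.decomposition import FactorAnalysis
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(8)
    X = rng.normal(size=(35, 12))       # 35 countries, 12 indicators
    X[:18, :6] += 1.5                   # plant a block structure so clusters exist

    scores = FactorAnalysis(n_components=2, random_state=0).fit_transform(X)
    labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(scores)
    print(np.bincount(labels))          # cluster sizes
    ```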

  18. Major morbidity or mortality from office anesthetic procedures: a closed-claim analysis of 13 cases.

    Science.gov (United States)

    Jastak, J T; Peskin, R M

    1991-01-01

    A closed-claim analysis of anesthetic-related deaths and permanent injuries in the dental office setting was conducted in cooperation with a leading insurer of oral and maxillofacial surgeons and dental anesthesiologists. A total of 13 cases occurring between 1974 and 1989 was included. In each case, all available records, reports, depositions, and proceedings were reviewed. The following were determined for each case: preoperative physical status of the patient, anesthetic technique used (classified as either general anesthesia or conscious sedation), probable cause of the morbid event, avoidability of the occurrence, and contributing factors important to the outcome. The majority of patients were classified as American Society of Anesthesiologists (ASA) physical status II or III. Most patients had preexisting conditions, such as gross obesity, cardiac disease, epilepsy, and chronic obstructive pulmonary disease, that can significantly affect anesthesia care. Hypoxia arising from airway obstruction and/or respiratory depression was the most common cause of untoward events, and most of the adverse events were determined to be avoidable. The disproportionate number of patients in this sample who were at the extremes of age or had ASA classifications above I suggests that anesthesia risk may be significantly increased in patients who fall outside the healthy, young adult category typically treated in the oral surgical/dental outpatient setting.

  19. Radiation densitometry in tree-ring analysis: a review and procedure manual

    Energy Technology Data Exchange (ETDEWEB)

    Parker, M.L.; Taylor, F.G.; Doyle, T.W.; Foster, B.E.; Cooper, C.; West, D.C.

    1985-01-01

    An x-ray wood densitometry facility is being established by the Environmental Sciences Division, Oak Ridge National Laboratory (ORNL). The objective is to apply tree-ring data to determine whether or not there is a fertilizer effect on tree growth from increased atmospheric carbon dioxide since the beginning of the industrial era. Intra-ring width and density data, including ring mass, will be determined from tree-ring samples collected from sites located throughout the United States and Canada. This report is designed as a guide to assist ORNL scientists in building the x-ray densitometry system. The history and development of x-ray densitometry in tree-ring research are examined, and x-ray densitometry is compared with other techniques. Relevant wood and tree characteristics are described, as are the environmental and genetic factors affecting tree growth responses. Methods in x-ray densitometry are examined in detail, and the techniques used at four operating laboratories are described. Some ways in which dendrochronology has been applied in dating, wood-quality, and environmental studies are presented, and a number of tree-ring studies in Canada are described. An annotated bibliography of radiation densitometry in tree-ring analysis and related subjects is included.

  1. Systematic review and meta-analysis of enterocolitis after one-stage transanal pull-through procedure for Hirschsprung's disease.

    LENUS (Irish Health Repository)

    Ruttenstock, Elke

    2012-02-01

    PURPOSE: The transanal one-stage pull-through procedure (TERPT) has gained worldwide popularity over open and laparoscopic-assisted one-stage techniques in children with Hirschsprung's disease (HD). It offers the advantages of avoiding laparotomy, laparoscopy, scars, abdominal contamination, and adhesions. However, enterocolitis associated with Hirschsprung's disease (HAEC) remains a potentially life-threatening complication after the pull-through operation. The reported incidence of HAEC ranges from 4.6 to 54%. This meta-analysis was designed to evaluate the postoperative incidence of HAEC following the TERPT procedure. METHODS: A meta-analysis of cases of TERPT reported between 1998 and 2009 was performed. Detailed information was recorded regarding intraoperative details and postoperative complications, with particular emphasis on the incidence of HAEC. Diagnosis of HAEC in an HD patient was based on the clinical presentation of diarrhoea, abdominal distension, and fever. RESULTS: Of the 54 published articles worldwide, 27 articles, including 899 patients, were identified as reporting entirely the TERPT procedure. Postoperative HAEC occurred in 92 patients (10.2%). Recurrent episodes of HAEC were reported in 18 patients (2%). Conservative treatment of HAEC was successful in 75 patients (81.5%), whereas in 17 patients (18.5%) surgical treatment was needed. CONCLUSIONS: This systematic review reveals that TERPT is a safe and less-invasive procedure with a low incidence of postoperative HAEC.

  2. Evaluation of shoulder function in clavicular fracture patients after six surgical procedures based on a network meta-analysis.

    Science.gov (United States)

    Huang, Shou-Guo; Chen, Bo; Lv, Dong; Zhang, Yong; Nie, Feng-Feng; Li, Wei; Lv, Yao; Zhao, Huan-Li; Liu, Hong-Mei

    2017-01-01

    PURPOSE: Using a network meta-analysis approach, our study aims to develop a ranking of the six surgical procedures, that is, Plate, titanium elastic nail (TEN), tension band wire (TBW), hook plate (HP), reconstruction plate (RP) and Knowles pin, by comparing the post-surgery Constant shoulder scores in patients with clavicular fracture (CF). METHODS: A comprehensive search of electronic scientific literature databases was performed to retrieve publications investigating surgical procedures in CF under stringent eligibility criteria, and clinical experimental studies of high quality and relevance to our area of interest were selected for network meta-analysis. Statistical analyses were conducted using Stata 12.0. RESULTS: A total of 19 studies that met our inclusion criteria were eventually enrolled into our network meta-analysis, representing 1164 patients who had undergone surgical procedures for CF (TEN group = 240; Plate group = 164; TBW group = 180; RP group = 168; HP group = 245; Knowles pin group = 167). The network meta-analysis results revealed that RP significantly improved the Constant shoulder score in patients with CF when compared with TEN, and the post-operative Constant shoulder scores in patients with CF after Plate, TBW, HP, Knowles pin and TEN were similar, with no statistically significant differences. The treatment relative ranking of predictive probabilities of Constant shoulder scores after surgery revealed that the surface under the cumulative ranking curve (SUCRA) value is highest for RP. CONCLUSION: The current network meta-analysis suggests that RP may be the optimum surgical treatment among the six interventions for patients with CF, as it can improve the shoulder score of patients with CF. Implications for rehabilitation: RP improves shoulder joint function after the surgical procedure; RP achieves stability with minimal complications after surgery; RP may be the optimum surgical treatment for patients with CF.
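
    The SUCRA ranking mentioned in the results can be computed from the rank-probability matrix that a network meta-analysis produces. The sketch below is a hedged illustration with random placeholder probabilities, not the study's data; only the SUCRA formula itself (the mean of the cumulative rank probabilities over the first a-1 ranks) is standard.

```python
# SUCRA sketch: p[j, r] = probability that treatment j has rank r (rank 1 = best);
# SUCRA_j = sum of the first a-1 cumulative rank probabilities / (a - 1).
import numpy as np

treatments = ["Plate", "TEN", "TBW", "HP", "RP", "Knowles pin"]
rng = np.random.default_rng(1)
p = rng.dirichlet(np.ones(6), size=6)     # placeholder rank probabilities, rows sum to 1
cum = np.cumsum(p, axis=1)                # cumulative rank probabilities
sucra = cum[:, :-1].sum(axis=1) / (p.shape[1] - 1)
for name, s in sorted(zip(treatments, sucra), key=lambda t: -t[1]):
    print(f"{name}: SUCRA = {s:.2f}")     # higher SUCRA = better expected rank
```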

  3. Retinopathy risk factors in type II diabetic patients using factor analysis and discriminant analysis

    OpenAIRE

    Tazhibi, Mahdi; Sarrafzade, Sheida; Amini, Masoud

    2014-01-01

    Introduction: Diabetes is one of the most common chronic diseases in the world. The incidence and prevalence of diabetes are increasing in developing countries, as well as in Iran. Retinopathy is the most common chronic disorder in diabetic patients. Materials and Methods: In this study, we used the records of diabetic patients referred to the Endocrine and Metabolism Research Center of Isfahan University of Medical Sciences to determine diabetic retinopathy risk factors. We used factor...

  4. Multivariate Analysis of Risk Factors of Cerebral Infarction in 439 Patients Undergoing Thoracic Endovascular Aneurysm Repair.

    Science.gov (United States)

    Kanaoka, Yuji; Ohki, Takao; Maeda, Koji; Baba, Takeshi; Fujita, Tetsuji

    2016-04-01

    The aim of the study is to identify the potential risk factors of cerebral infarction associated with thoracic endovascular aneurysm repair (TEVAR). TEVAR was developed as a less invasive surgical alternative to conventional open repair for thoracic aortic aneurysm treatment. However, outcomes following TEVAR of aortic and distal arch aneurysms remain suboptimal. Cerebral infarction is a major concern during the perioperative period. We included 439 patients who underwent TEVAR of aortic aneurysms at a high-volume teaching hospital between July 2006 and June 2013. Univariate and multivariate logistic regression analyses were performed to identify perioperative cerebral infarction risk factors. Four patients (0.9%) died within 30 days of TEVAR; 17 (3.9%) developed cerebral infarction. In univariate analysis, history of ischemic heart disease and cerebral infarction and concomitant cerebrovascular disease were significantly associated with cerebral infarction. "Shaggy aorta" presence, left subclavian artery coverage, carotid artery debranching, and pull-through wire use were identified as independent risk factors of cerebral infarction. In multivariate analysis, history of ischemic heart disease (odds ratio [OR] 6.49, P = 0.046) and cerebral infarction (OR 43.74, P = 0.031), "shaggy aorta" (OR 30.32, P < 0.001), pull-through wire use during surgery (OR 7.196, P = 0.014), and intraoperative blood loss ≥800 mL (OR 24.31, P = 0.017) were found to be independent risk factors of cerebral infarction. This study identified patient- and procedure-related risk factors of cerebral infarction following TEVAR. These results indicate that patient outcomes could be improved through the identification and management of procedure-related risk factors.
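
    As a rough illustration of the modeling step, the sketch below fits a multivariate logistic regression and converts coefficients to adjusted odds ratios, the quantity reported above. The data, predictor names, and effect sizes are synthetic placeholders, not the study's records.

```python
# Univariate-screening-then-multivariate-logistic workflow, on synthetic data.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 439
df = pd.DataFrame({
    "ihd_history": rng.integers(0, 2, n),
    "shaggy_aorta": rng.integers(0, 2, n),
    "blood_loss_800ml": rng.integers(0, 2, n),
})
# synthetic outcome with built-in effects for two of the predictors
logit = -3.5 + 1.5 * df["ihd_history"] + 2.0 * df["shaggy_aorta"]
df["stroke"] = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X = sm.add_constant(df[["ihd_history", "shaggy_aorta", "blood_loss_800ml"]])
fit = sm.Logit(df["stroke"], X).fit(disp=0)
# exponentiated coefficients are the adjusted odds ratios (AORs)
print(pd.DataFrame({"OR": np.exp(fit.params), "p": fit.pvalues}).round(3))
```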

  5. Identification of noise in linear data sets by factor analysis

    Energy Technology Data Exchange (ETDEWEB)

    Roscoe, B.A.; Hopke, P.K.

    1982-01-01

    With the use of atomic and nuclear methods to analyze samples for a multitude of elements, very large data sets have been generated. Due to the ease of obtaining these results with computerized systems, the elemental data acquired are not always as thoroughly checked as they should be, leading to some, if not many, bad data points. It is advantageous to have some feeling for the trouble spots in a data set before it is used for further studies. A technique which has the ability to identify bad data points, after the data have been generated, is classical factor analysis. The ability of classical factor analysis to identify two different types of data errors makes it ideally suited for scanning large data sets. Since the results yielded by factor analysis indicate correlations between parameters, one must know something about the nature of the data set and the analytical techniques used to obtain it to confidently isolate errors.
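
    A minimal sketch of the screening idea follows: fit a low-rank factor model, reconstruct each observation from the common factors, and flag entries whose residuals the model cannot explain. Everything here (the data, the rank, the 4-sigma cutoff) is an illustrative assumption, not the authors' procedure in detail.

```python
# Flagging suspect values with a factor model: large residuals from the
# low-rank reconstruction mark entries inconsistent with the correlations.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(3)
X = rng.normal(size=(200, 10))
X[:, 1] = 0.9 * X[:, 0] + rng.normal(scale=0.3, size=200)  # correlated elements
X[17, 1] = 15.0                                            # one "bad" data point

fa = FactorAnalysis(n_components=3).fit(X)
X_hat = fa.transform(X) @ fa.components_ + fa.mean_  # posterior-mean reconstruction
resid = (X - X_hat) / X.std(axis=0)
bad = np.argwhere(np.abs(resid) > 4)   # entries the factor model cannot explain
print(bad)                             # typically flags the corrupted entry (row 17, col 1)
```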

  6. The Dependent Converging Instrument Approach Procedure: An Analysis of its Safety and Applicability

    Science.gov (United States)

    1992-11-01

    scenarios, this feature leads to a slightly more conservative measure of the predicted separation (i.e., smaller separation). 3.4.2 Determination of Headwind...for the DCIA procedure may facilitate the maintenance of the independence of one parallel approach, with dependent converging approaches to the other

  7. An Analysis of Public Art on University Campuses: Policies, Procedures, and Best Practices

    Science.gov (United States)

    Grenier, Michael Robert

    2009-01-01

    This study investigated the policies, procedures, and practices of public art programs on the campuses of research institutions with very high activity as defined by the Carnegie Classification. From this particular type of institution, 55 of the 96 public art administrators provided their opinions, attitudes, and behaviors as part of the "Public…

  8. Standard operating procedure: implementation, critical analysis, and validation in the Audiology Department at CESTEH/Fiocruz.

    Science.gov (United States)

    Freitas, Anelisse Vasco Mascarenhas de; Quixabeiro, Elinaldo Leite; Luz, Geórgia Rosangela Soares; Franco, Viviane Moreira; Santos, Viviane Fontes Dos

    2016-01-01

    To evaluate three standard operating procedures (SOPs) for the application of the brainstem auditory evoked potential (BAEP) test, implemented by the Audiology Department of the Center for Studies in Occupational Health and Human Ecology (CESTEH), through the application of a questionnaire, and to verify whether the SOPs are effective and whether they need improvement. The study was conducted in three phases: in the first phase, eight speech-language pathologists and seven physicians, with no experience in BAEP, were instructed to read and perform each SOP, after which all participants evaluated the SOPs by responding to a questionnaire; in the second phase, the questionnaires were analyzed and the three SOP texts were revised; in the third phase, nine speech-language pathologists and six physicians, also with no experience in BAEP, read and re-evaluated the revised SOPs through a questionnaire. In the first phase, difficulties in understanding the texts were found, raising doubts about the procedures; however, every participant was able to perform the procedure as a whole. In the third phase, after the revision, all participants were able to perform the procedures appropriately and continuously, without any doubts. The assessment of the SOPs by questionnaire showed the need to adapt the texts. After the texts were revised according to the suggestions of the health professionals, the SOPs were observed to assist in the execution of the task, which was conducted without any difficulties or doubts, and were regarded as effective, ensuring the quality of the service offered.

  9. Finite element procedures for coupled linear analysis of heat transfer, fluid and solid mechanics

    Science.gov (United States)

    Sutjahjo, Edhi; Chamis, Christos C.

    1993-01-01

    Coupled finite element formulations for fluid mechanics, heat transfer, and solid mechanics are derived from the conservation laws for energy, mass, and momentum. To model the physics of interactions among the participating disciplines, the linearized equations are coupled by combining domain and boundary coupling procedures. An iterative numerical solution strategy is presented to solve the equations, with the partitioning of temporal discretization implemented.

  10. Personality and coping traits: A joint factor analysis.

    Science.gov (United States)

    Ferguson, Eamonn

    2001-11-01

    OBJECTIVES: The main objective of this paper is to explore the structural similarities between Eysenck's model of personality and the dimensions of the dispositional COPE. Costa et al. {Costa P., Somerfield, M., & McCrae, R. (1996). Personality and coping: A reconceptualisation. In (pp. 44-61) Handbook of coping: Theory, research and applications. New York: Wiley} suggest that personality and coping behaviour are part of a continuum based on adaptation. If this is the case, there should be structural similarities between measures of personality and coping behaviour. This is tested using a joint factor analysis of personality and coping measures. DESIGN: Cross-sectional survey. METHODS: The EPQ-R and the dispositional COPE were administered to 154 participants, and the data were analysed using joint factor analysis and bivariate associations. RESULTS: The joint factor analysis indicated that these data were best explained by a four-factor model. One factor was primarily unrelated to personality. There was a COPE-neurotic-introvert factor (NI-COPE) containing coping behaviours such as denial, a COPE-extroversion (E-COPE) factor containing behaviours such as seeking social support and a COPE-psychoticism factor (P-COPE) containing behaviours such as alcohol use. This factor pattern, especially for NI- and E-COPE, was interpreted in terms of Gray's model of personality {Gray, J. A. (1987) The psychology of fear and stress. Cambridge: Cambridge University Press}. NI-, E-, and P-COPE were shown to be related, in a theoretically consistent manner, to perceived coping success and perceived coping functions. CONCLUSIONS: The results indicate that there are indeed conceptual links between models of personality and coping. It is argued that future research should focus on identifying coping 'trait complexes'. Implications for practice are discussed.

  11. Emotional experiences and motivating factors associated with fingerprint analysis.

    Science.gov (United States)

    Charlton, David; Fraser-Mackenzie, Peter A F; Dror, Itiel E

    2010-03-01

    In this study, we investigated the emotional and motivational factors involved in fingerprint analysis in day-to-day routine case work and in significant and harrowing criminal investigations. Thematic analysis was performed on interviews with 13 experienced fingerprint examiners from a variety of law enforcement agencies. The data revealed factors relating to job satisfaction and the use of skill. Individual satisfaction related to catching criminals was observed; this was most notable in solving high profile, serious, or long-running cases. There were positive emotional effects associated with matching fingerprints and apparent fear of making errors. Finally, we found evidence for a need of cognitive closure in fingerprint examiner decision-making.

  12. Unascertained Factor Method of Dynamic Characteristic Analysis for Antenna Structures

    Institute of Scientific and Technical Information of China (English)

    ZHU Zeng-qing; LIANG Zhen-tao; CHEN Jian-jun

    2008-01-01

    A dynamic characteristic analysis model of antenna structures is built in which the structural physical parameters and geometrical dimensions are all treated as unascertained variables, and a structural dynamic characteristic analysis method based on the unascertained factor method is given. The computational expression for the structural characteristics is developed from the mathematical expression of the unascertained factor and the arithmetic rules for unascertained rational numbers. An example is given in which the possible values and confidence degrees of the unascertained structural characteristics are obtained. The calculated results show that the method is feasible and effective.

  13. Using Factor Analysis to Identify Topic Preferences Within MBA Courses

    Directory of Open Access Journals (Sweden)

    Earl Chrysler

    2003-02-01

    Full Text Available This study demonstrates the role of a principal components factor analysis in conducting a gap analysis as to the desired characteristics of business alumni. Typically, gap analyses merely compare the emphases that should be given to areas of inquiry with perceptions of actual emphases. As a result, the focus is upon depth of coverage. A neglected area in need of investigation is the breadth of topic dimensions and their differences between the normative (should offer) and the descriptive (actually offer). The implications of factor structures, as well as traditional gap analyses, are developed and discussed in the context of outcomes assessment.

  14. Risk factors for baclofen pump infection in children: a multivariate analysis.

    Science.gov (United States)

    Spader, Heather S; Bollo, Robert J; Bowers, Christian A; Riva-Cambrin, Jay

    2016-06-01

    OBJECTIVE: Intrathecal baclofen infusion systems to manage severe spasticity and dystonia are associated with higher infection rates in children than in adults. Factors unique to this population, such as poor nutrition and physical limitations for pump placement, have been hypothesized as the reasons for this disparity. The authors assessed potential risk factors for infection in a multivariate analysis. METHODS: Patients who underwent implantation of a programmable pump and intrathecal catheter for baclofen infusion at a single center between January 1, 2000, and March 1, 2012, were identified in this retrospective cohort study. The primary end point was infection. Potential risk factors investigated included preoperative (i.e., demographics, body mass index [BMI], gastrostomy tube, tracheostomy, previous spinal fusion), intraoperative (i.e., surgeon, antibiotics, pump size, catheter location), and postoperative (i.e., wound dehiscence, CSF leak, and number of revisions) factors. Univariate analysis was performed, and a multivariate logistic regression model was created to identify independent risk factors for infection. RESULTS: A total of 254 patients were evaluated. The overall infection rate was 9.8%. Univariate analysis identified young age, shorter height, lower weight, dehiscence, CSF leak, and number of revisions within 6 months of pump placement as significantly associated with infection. Multivariate analysis identified young age, dehiscence, and number of revisions as independent risk factors for infection. CONCLUSIONS: Young age, wound dehiscence, and number of revisions were independent risk factors for infection in this pediatric cohort. A low BMI and the presence of either a gastrostomy or tracheostomy were not associated with infection and may not be contraindications for this procedure.

  15. Confirmatory Factor Analysis of the Procrastination Assessment Scale for Students

    Directory of Open Access Journals (Sweden)

    Ronald D. Yockey

    2015-10-01

    Full Text Available The relative fit of one- and two-factor models of the Procrastination Assessment Scale for Students (PASS) was investigated using confirmatory factor analysis on an ethnically diverse sample of 345 participants. The results indicated that although the two-factor model provided a better fit to the data than the one-factor model, neither model provided optimal fit. However, a two-factor model which accounted for the common item theme pairs used by Solomon and Rothblum in the creation of the scale provided good fit to the data. In addition, a significant difference by ethnicity was also found on the fear of failure subscale of the PASS, with Whites having significantly lower scores than Asian Americans or Latino/as. Implications of the results are discussed and recommendations made for future work with the scale.

  16. Analysis of Factors Influencing Farmers’ Identification of Entrepreneurial Opportunity

    Institute of Scientific and Technical Information of China (English)

    Jing; GAO; Fang; YANG

    2013-01-01

    Based on survey data on entrepreneurship among farmers in China, this article uses the multivariate moderated regression analysis method to analyze the factors influencing farmers' identification of entrepreneurial opportunity and the underlying mechanism. The results show that demographic characteristics are still an important factor influencing farmers' identification of entrepreneurial opportunity, but their influence is weaker than that of entrepreneurs' traits. The new trait theory is verified for farmers' entrepreneurial opportunity behavior; the entrepreneurship environment is becoming an important factor influencing entrepreneurial opportunity identification, and its moderating effect on entrepreneurs' social networks and previous experience is stronger than its moderating effect on entrepreneurs' psychological traits.

  17. A factor analysis to find critical success factors in retail brand

    Directory of Open Access Journals (Sweden)

    Naser Azad

    2013-03-01

    Full Text Available The present exploratory study aims to find the critical components of a retail brand among some retail stores. The study seeks to build a brand name at the retail level and to find the important factors affecting it. Customer behavior is largely influenced when the first retail customer experience is formed. These factors have direct impacts on customer experience and satisfaction in the retail industry. The proposed study performs an empirical investigation on two well-known retail stores located in the city of Tehran, Iran. Using a sample of 265 regular customers, the study applies factor analysis and extracts four main factors, including related brand, product benefits, customer welfare strategy and corporate profits, from the existing 31 factors in the literature.

  18. Ranking insurance firms using AHP and Factor Analysis

    Directory of Open Access Journals (Sweden)

    Mohammad Khodaei Valahzaghard

    2013-03-01

    Full Text Available The insurance industry constitutes a significant part of the economy, and it is important to learn more about the capabilities of the different firms active in this industry. In this paper, we present an empirical study to rank insurance firms using the analytical hierarchy process as well as factor analysis. The study considers four criteria: capital adequacy, quality of earnings, quality of cash flow and quality of firms' assets. The results of the implementation of factor analysis (FA) were verified using the Kaiser-Meyer-Olkin (KMO = 0.573) and Bartlett's Chi-Square (443.267, P-value = 0.000) tests. According to the results of FA, the first factor, capital adequacy, represents 21.557% of total variance; the second factor, quality of income, represents 20.958% of total variance. In addition, the third factor, quality of cash flow, represents 19.417% of total variance, and the last factor, quality of assets, represents 18.641% of total variance. The study has also used the analytical hierarchy process (AHP) to rank the insurance firms. The results of our survey indicate that capital adequacy (0.559) is the most important factor, followed by quality of income (0.235), quality of cash flow (0.144) and quality of assets (0.061). The results of AHP are consistent with the results of FA, which somewhat validates the overall study.
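
    For the AHP side, the sketch below shows the standard eigenvector computation of criterion weights plus Saaty's consistency check; the pairwise comparison matrix is a made-up illustration, not the paper's judgments.

```python
# AHP sketch: priority weights from the principal eigenvector of a pairwise
# comparison matrix, with Saaty's consistency ratio as a sanity check.
import numpy as np

A = np.array([
    [1,   3,   5,   7],      # capital adequacy vs. the other criteria
    [1/3, 1,   3,   5],      # quality of income
    [1/5, 1/3, 1,   3],      # quality of cash flow
    [1/7, 1/5, 1/3, 1],      # quality of assets
])
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                              # priority weights, sum to 1

n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)      # consistency index
ri = 0.90                                 # Saaty's random index for n = 4
print(w.round(3), "CR =", round(ci / ri, 3))  # CR < 0.1 is conventionally acceptable
```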

  19. Workplace Innovation: Exploratory and Confirmatory Factor Analysis for Construct Validation

    Directory of Open Access Journals (Sweden)

    Wipulanusat Warit

    2017-06-01

    Full Text Available Workplace innovation enables the development and improvement of products, processes and services, leading simultaneously to improvement in organisational performance. This study has the purpose of examining the factor structure of workplace innovation. Survey data, extracted from the 2014 APS employee census and comprising 3,125 engineering professionals in the Commonwealth of Australia's departments, were analysed using exploratory factor analysis (EFA) and confirmatory factor analysis (CFA). EFA returned a two-factor structure explaining 69.1% of the variance of the construct. CFA revealed that a two-factor structure was indicated as a validated model (GFI = 0.98, AGFI = 0.95, RMSEA = 0.08, RMR = 0.02, IFI = 0.98, NFI = 0.98, CFI = 0.98, and TLI = 0.96). Both factors showed good reliability of the scale (individual creativity: α = 0.83, CR = 0.86, and AVE = 0.62; team innovation: α = 0.82, CR = 0.88, and AVE = 0.61). These results confirm that the two factors extracted for characterising workplace innovation are individual creativity and team innovation.
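
    The reliability statistics quoted above (CR and AVE) come from standardized loadings via textbook formulas. The sketch below shows the arithmetic with placeholder loadings chosen to land near the reported values for one factor; the loadings themselves are assumptions.

```python
# Composite reliability (CR) and average variance extracted (AVE) from
# standardized factor loadings; placeholder loadings for one factor.
import numpy as np

loadings = np.array([0.78, 0.81, 0.76, 0.80])  # standardized loadings (placeholder)
errors = 1 - loadings**2                        # item error variances

ave = np.mean(loadings**2)                      # average variance extracted
cr = loadings.sum()**2 / (loadings.sum()**2 + errors.sum())  # composite reliability
print(f"AVE = {ave:.2f}, CR = {cr:.2f}")        # e.g. AVE = 0.62, CR = 0.87
```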

  20. Analysis of plug-in hybrid electric vehicle utility factors

    Science.gov (United States)

    Bradley, Thomas H.; Quinn, Casey W.

    Plug-in hybrid electric vehicles (PHEVs) are hybrid electric vehicles that can be fueled from both conventional liquid fuels and grid electricity. To represent the total contribution of both of these fuels to the operation, energy use, and environmental impacts of PHEVs, researchers have developed the concept of the utility factor. As standardized in documents such as SAE J1711 and SAE J2841, the utility factor represents the proportion of vehicle distance travelled that can be allocated to a vehicle test condition so as to represent the real-world driving habits of a vehicle fleet. These standards must be used with care so that the results are understood within the context of the assumptions implicit in the standardized utility factors. This study analyzes and derives alternatives to the standard utility factors from the 2001 National Highway Transportation Survey, so as to understand the sensitivity of PHEV performance to assumptions regarding charging frequency, vehicle characteristics, driver characteristics, and means of defining the utility factor. Through analysis of these alternative utility factors, this study identifies areas where analysis, design, and policy development for PHEVs can be improved by alternative utility factor calculations.
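
    The core arithmetic of a fleet utility factor is simple enough to state in code: for a charge-depleting range R and one full charge per day, UF(R) is the share of total fleet distance that falls within R of each day's start. The sketch below uses synthetic daily distances; the one-charge-per-day assumption mirrors the SAE J2841 fleet utility factor, but the distribution is illustrative only.

```python
# Fleet utility factor sketch: UF(R) = sum(min(d_i, R)) / sum(d_i) over
# daily driving distances d_i, assuming one full charge per day.
import numpy as np

rng = np.random.default_rng(4)
daily_miles = rng.lognormal(mean=3.0, sigma=0.9, size=10_000)  # placeholder survey days

def utility_factor(distances, r):
    # electric miles per day are capped at the charge-depleting range r
    return np.minimum(distances, r).sum() / distances.sum()

for r in (10, 20, 40, 60):
    print(f"R = {r} mi: UF = {utility_factor(daily_miles, r):.2f}")
```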

  1. Bone dynamic study. Evaluation for factor analysis of hip joint

    Energy Technology Data Exchange (ETDEWEB)

    Nakajima, Kotaro; Toyama, Hinako; Ishikawa, Nobuyoshi; Hatakeyama, Rokuro; Akisada, Masayoshi; Miyagawa, Shunpei

    1989-02-01

    Factor analysis was applied to dynamic studies of Tc-99m MDP for the evaluation of hip joint disorders. Fifteen patients were examined; eight were normal, six had osteoarthritis (including one with accompanying synovitis), and one had aseptic necrosis of the femoral head. In normals, according to the Tc-99m MDP kinetics, three factor images and time-activity curves were obtained, named the blood vessel, soft tissue, and bone factor images and curves. In the patients with osteoarthritis, increased accumulation in the hip joint was shown in the bone factor image only. But in the one patient who had osteoarthritis with synovitis, marked accumulation of Tc-99m MDP appeared not only on the bone factor image but also on the soft tissue image. Operation revealed thickened synovial tissue around the hip joint, caused by an inflammatory process. In follow-up studies of the patient with aseptic necrosis of the left femoral head, the excessive accumulations initially seen in the left hip joint on both the bone and soft tissue factor images decreased in response to the treatment of this lesion. In conclusion, factor analysis was useful for the differential diagnosis of hip joint disorders and for observation of their clinical course.

  2. SWOT Analysis of King Abdullah II School for Information Technology at University of Jordan According to Quality Assurance Procedures

    Directory of Open Access Journals (Sweden)

    Lubna Naser Eddeen

    2013-02-01

    Full Text Available Many books and research papers have defined and referred to the term SWOT analysis. SWOT analysis can be defined as a "strategic planning method used to evaluate the Strengths, Weaknesses, Opportunities, and Threats involved in a project or in a business venture". It is used to assess the internal and external environmental factors which affect the business. This paper analyzes the main SWOT factors at King Abdullah II School for Information Technology.

  3. Surface Area Analysis Using the Brunauer-Emmett-Teller (BET) Method: Standard Operating Procedure Series: SOP-C

    Science.gov (United States)

    2016-09-01

    Procedures link to the ERDC NanoGRID (Guidance for Risk Informed Deployment) framework for testing the exposure and hazard of nanotechnology...

  4. ISLAMIC LEADERSHIP AND MAQASID AL-SHARI’AH: REINVESTIGATING THE DIMENSIONS OF ISLAMIC LEADERSHIP INVENTORY (ILI) VIA CONTENT ANALYSIS PROCEDURES

    OpenAIRE

    Mahazan, A. M.; Nurhafizah, S.; Rozita, A.; H. Siti Aishah; Wan Mohd. Fazrul Azdi, W. R.; Mohd. Rumaizuddin, G.; Yuseri, A.; Mohd. Rosmizi, A. R.; H. Muhammad; Mohd. Azhar, I. R.; Abdullah, A.G.; Muhammad Yusuf, K.; Khairunneezam, M. N.

    2015-01-01

    The purpose of this research is to investigate separate themes of Islamic Leadership based on analyses conducted on selected literature of conventional and Islamic Leadership. The themes of Islamic Leadership were identified for the purpose of developing a specific inventory to measure Islamic Leadership or the Islamic Leadership Inventory (ILI). In identifying the themes of Islamic Leadership, this research applied qualitative content analysis procedures on four categories of literature. The...

  5. Risk Factors Leading to Free Flap Failure: Analysis From the National Surgical Quality Improvement Program Database.

    Science.gov (United States)

    Sanati-Mehrizy, Paymon; Massenburg, Benjamin B; Rozehnal, John M; Ingargiola, Michael J; Hernandez Rosa, Jonatan; Taub, Peter J

    2016-11-01

    The objective of this study was to identify risk factors for free flap failure among various anatomically based free flap subgroups. The 2005 to 2012 American College of Surgeons National Surgical Quality Improvement Program database was queried for patients undergoing microvascular free tissue transfer based on current procedural terminology codes. Univariate analysis was performed to identify any association between flap failure and the following factors: age, gender, race, body mass index (BMI), diabetes, smoking, alcohol use, hypertension, intraoperative transfusion, functional health status, American Society of Anesthesiologists class, operative time, and flap location. Factors significant in univariate analysis were entered into a multivariate logistic regression model; patients undergoing free flap reconstruction who met the inclusion criteria were analyzed. Multivariate logistic regression identified BMI (adjusted odds ratio [AOR] = 1.07, P = 0.004) and male gender (AOR = 2.16, P = 0.033) as independent risk factors for flap failure. Among the "breast flaps" subgroup, BMI (AOR = 1.075, P = 0.012) and smoking (AOR = 3.35, P = 0.02) were independent variables associated with flap failure. In "head and neck flaps," operative time (AOR = 1.003, P = 0.018) was an independent risk factor for flap failure. No independent risk factors were identified for the "extremity flaps" or "trunk flaps" subtypes. BMI, smoking, and operative time were identified as independent risk factors for free flap failure among all flaps or within flap subsets.

  6. Content Analysis of Vomit and Diarrhea Cleanup Procedures To Prevent Norovirus Infections in Retail and Food Service Operations.

    Science.gov (United States)

    Chao, Morgan G; Dubé, Anne-Julie; Leone, Cortney M; Moore, Christina M; Fraser, Angela M

    2016-11-01

    Human noroviruses are the leading cause of foodborne disease in the United States, sickening 19 to 21 million Americans each year. Vomit and diarrhea are both highly concentrated sources of norovirus particles. For this reason, establishing appropriate cleanup procedures for these two substances is critical. Food service establishments in states that have adopted the 2009 or 2013 U.S. Food and Drug Administration Food Code are required to have a program detailing specific cleanup procedures. The aim of our study was to determine the alignment of existing vomit and diarrhea cleanup procedures with the 11 elements recommended in Annex 3 of the 2011 Supplement to the 2009 Food Code and to determine their readability and clarity of presentation. In July 2015, we located vomit and diarrhea cleanup procedures by asking Norovirus Collaborative for Outreach, Research, and Education stakeholders for procedures used by their constituency groups and by conducting a Google Advanced Search of the World Wide Web. We performed content analysis to determine alignment with the recommendations in Annex 3. Readability and clarity of presentation were also assessed. A total of 38 artifacts were analyzed. The mean alignment score was 7.0 ± 1.7 of 11 points; the mean clarity score was 6.7 ± 2.5 of 17 points. Only nine artifacts were classified as high clarity, high alignment. Vomit and diarrhea cleanup procedures should align with Annex 3 in the Food Code and should, as well, be clearly presented; yet, none of the artifacts completely met both conditions. To reduce the spread of norovirus infections in food service establishments, editable guidelines are needed that are aligned with Annex 3 and are clearly written, into which authors could insert their facility-specific information.

  7. Analysis of spatio-temporal variability of C-factor derived from remote sensing data

    Science.gov (United States)

    Pechanec, Vilem; Benc, Antonin; Purkyt, Jan; Cudlin, Pavel

    2016-04-01

    In some risk areas, water erosion strongly influences agriculture and can threaten inhabitants. In our country, a combination of the USLE and RUSLE models has been used for water erosion assessment (Krása et al., 2013). The role of vegetation cover is characterized by the vegetation protection factor, the so-called C-factor. The value of the C-factor is given by the ratio of soil loss from a plot with arable crops to that from a standard plot kept as fallow and regularly tilled after any rain (Janeček et al., 2012). When the crop structure and rotation cannot be identified, determining the C-factor over large areas is a problem. In such cases the C-factor is determined only from the average crop representation. New technologies open possibilities for accelerating and refining the approach. The present-day approach to C-factor determination is based on the analysis of multispectral image data. The red and infrared bands are extracted, and these parts of the image are used to compute a series of vegetation indices (NDVI, TSAVI). The values acquired for the individual time sections (during the vegetation period) are averaged. At the same time, vegetation index values for a forest and a cleared area are determined, and regression coefficients are computed. The final calculation is done with regression equations expressing the relationship between NDVI values and the C-factor (De Jong, 1994; Van der Knijff, 1999; Karaburun, 2010). An up-to-date land use layer is used to determine erosion-threatened areas, based on the selection of individual landscape segments in erosion-susceptible land use categories. Using Landsat 7 data, the C-factor has been determined for the whole area of the Czech Republic in every month of the year 2014. In a model area within a small watershed, the C-factor has been determined by the conventional (tabular) procedure. Analysis was focused on: i) variability assessment of C-factor values while using the conventional
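
    One of the cited NDVI-to-C-factor relations is the exponential form attributed to Van der Knijff et al.; a hedged sketch is below. The coefficients a = 2 and b = 1 are the values commonly quoted in that literature and should be read as assumptions, not as the regression coefficients fitted in this study.

```python
# NDVI-to-C-factor sketch using the exponential form C = exp(-a*NDVI/(b-NDVI)),
# with the commonly quoted a = 2, b = 1 (assumed, not fitted here).
import numpy as np

def c_factor(ndvi, a=2.0, b=1.0):
    c = np.exp(-a * ndvi / (b - ndvi))
    return np.clip(c, 0.0, 1.0)           # C is bounded to [0, 1] by convention

ndvi = np.array([0.1, 0.3, 0.5, 0.7])     # e.g. monthly composites, averaged later
print(c_factor(ndvi))                      # denser vegetation -> smaller C
```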

  8. A Brief History of the Philosophical Foundations of Exploratory Factor Analysis.

    Science.gov (United States)

    Mulaik, Stanley A.

    1987-01-01

    Exploratory factor analysis derives its key ideas from many sources, including Aristotle, Francis Bacon, Descartes, Pearson and Yule, and Kant. The conclusions of exploratory factor analysis are never complete without subsequent confirmatory factor analysis. (Author/GDC)

  9. Analysis of plant gums and saccharide materials in paint samples: comparison of GC-MS analytical procedures and databases

    Directory of Open Access Journals (Sweden)

    Lluveras-Tenorio Anna

    2012-10-01

    Full Text Available Abstract Background: Saccharide materials have been used for centuries as binding media, to paint, write and illuminate manuscripts and to apply metallic leaf decorations. Although the technical literature often reports on the use of plant gums as binders, several other saccharide materials can actually be encountered in paint samples, not only as major binders, but also as additives. In the literature, there are a variety of analytical procedures that utilize GC-MS to characterize saccharide materials in paint samples; however, the chromatographic profiles are often extremely different, and it is impossible to compare them and reliably identify the paint binder. Results: This paper presents a comparison between two different analytical procedures based on GC-MS for the analysis of saccharide materials in works of art. The research presented here evaluates the influence of the analytical procedure used, and how it impacts the sugar profiles obtained from the analysis of paint samples that contain saccharide materials. The procedures have been developed, optimised and systematically used to characterise plant gums at the Getty Conservation Institute in Los Angeles, USA (GCI) and at the Department of Chemistry and Industrial Chemistry of the University of Pisa, Italy (DCCI). The main steps of the analytical procedures and their optimisation are discussed. Conclusions: The results presented highlight that the two methods give comparable sugar profiles, whether the samples analysed are simple raw materials, pigmented and unpigmented paint replicas, or paint samples collected from centuries-old polychrome art objects. A common database of sugar profiles of reference materials commonly found in paint samples was thus compiled. The database presents data also from those materials that only contain a minor saccharide fraction. This database highlights how many sources of saccharides can be found in a paint sample, representing an important step

  10. Constitutive modeling and finite element procedure development for stress analysis of prismatic high temperature gas cooled reactor graphite core components

    Energy Technology Data Exchange (ETDEWEB)

    Mohanty, Subhasish, E-mail: smohanty@anl.gov [Argonne National Laboratory, South Cass Avenue, Argonne, IL 60439 (United States); Majumdar, Saurindranath [Argonne National Laboratory, South Cass Avenue, Argonne, IL 60439 (United States); Srinivasan, Makuteswara [U.S. Nuclear Regulatory Commission, Washington, DC 20555 (United States)

    2013-07-15

    Highlights: • Finite element procedure developed for stress analysis of HTGR graphite component. • Realistic fluence profile and reflector brick shape considered for the simulation. • Also realistic H-451 grade material properties considered for simulation. • Typical outer reflector of a GT-MHR type reactor considered for numerical study. • Based on the simulation results replacement of graphite bricks can be scheduled. -- Abstract: High temperature gas cooled reactors, such as prismatic and pebble bed reactors, are increasingly becoming popular because of their inherent safety, high temperature process heat output, and high efficiency in nuclear power generation. In prismatic reactors, hexagonal graphite bricks are used as reflectors and fuel bricks. In the reactor environment, graphite bricks experience high temperature and neutron dose. This leads to dimensional changes (swelling and/or shrinkage) of these bricks. Irradiation dimensional changes may affect the structural integrity of the individual bricks as well as of the overall core. The present paper presents a generic procedure for stress analysis of prismatic core graphite components using a graphite reflector as an example. The procedure is demonstrated through the commercially available ABAQUS finite element software using the option of a user material subroutine (UMAT). This paper considers the General Atomics Gas Turbine-Modular Helium Reactor (GT-MHR) as a benchmark design to perform the time-integrated stress analysis of a typical reflector brick considering realistic geometry, flux distribution and realistic irradiation material properties of transversely isotropic H-451 grade graphite.

  11. Human factors evaluation of remote afterloading brachytherapy. Supporting analyses of human-system interfaces, procedures and practices, training and organizational practices and policies. Volume 3

    Energy Technology Data Exchange (ETDEWEB)

    Callan, J.R.; Kelly, R.T.; Quinn, M.L. [Pacific Science & Engineering Group, San Diego, CA (United States)] [and others]

    1995-07-01

    A human factors project on the use of nuclear by-product material to treat cancer using remotely operated afterloaders was undertaken by the Nuclear Regulatory Commission. The purpose of the project was to identify factors that contribute to human error in the system for remote afterloading brachytherapy (RAB). This report documents the findings from the second, third, fourth, and fifth phases of the project, which involved detailed analyses of four major aspects of the RAB system linked to human error: human-system interfaces; procedures and practices; training practices and policies; and organizational practices and policies, respectively. Findings based on these analyses provided factual and conceptual support for the final phase of this project, which identified factors leading to human error in RAB. The impact of those factors on RAB performance was then evaluated and prioritized in terms of safety significance, and alternative approaches for resolving safety significant problems were identified and evaluated.

  12. Efficient Procedure for Seismic Analysis of Soil-Structure Interaction System

    Institute of Scientific and Technical Information of China (English)

    LIU Jingbo; GU Yin; WANG Yan; LI Bin

    2006-01-01

    A simplified and efficient procedure, based on the viscous-spring artificial boundary and the modal superposition method, is developed to analyze the dynamic soil-structure interaction system in the time domain. The viscous-spring artificial boundary introduced in this procedure transforms the infinite soil-structure interaction system to an approximately finite system. A seismic wave input method is used to transform the wave scattering problem into the wave source problem. The modal superposition method is then applied to this approximate finite system. The results show that this method with only a few modes can significantly reduce the computational time with almost the same precision as the traditional direct integration method. Comparison of results from different loading times demonstrates that the advantages of this method are evident in computing with long loading time.

  13. A test procedure for energetic and performance analysis of cold appliances for the food industry

    Science.gov (United States)

    Armani, F.; Boscolo, A.

    2013-09-01

    In this article we present a novel approach for the characterization of cold appliances, and in particular of refrigerators based on the standard vapour compression cycle with a reciprocating on/off compressor. The test procedure is based on a virtual instrument that performs both stimulus generation and data acquisition on the device under test. Acquired data are processed to fit a semi-empirical model based on the energy balances between the thermal and electrical subsystems and the heat exchanged with the environment. This approach results in a simple method for calculating useful parameters of the refrigerator, such as energetic performance, cooling effect and limit values of thermal loads. The test procedure requires only a few temperatures and the electric power consumption to be monitored, resulting in a low impact on the refrigerator. Preliminary tests showed good estimation of the parameters and good prediction of the energy consumption and heat extraction capacity of the refrigerator under test.

  14. [The radicality of surgical resection in rectal cancer. Analysis of factors associated with incomplete mesorectal excision].

    Science.gov (United States)

    Ferko, A; Orhalmi, J; Nikolov, D H; Hovorková, E; Chobola, M; Vošmik, M; Cermáková, E

    2013-06-01

    Circumferential resection margin (pCRM) and the completeness of mesorectal excision (ME) are two independent prognostic factors significantly associated with the radicality of surgical treatment. A positive pCRM and incomplete mesorectal excision are associated with a significantly higher incidence of local recurrence and a worse patient prognosis. The aim of this article is to analyze the risk factors associated with incomplete mesorectal excision. Patients operated on at the Department of Surgery, University Hospital Hradec Kralove between January 2011 and February 2013 were included in the study. The patients' data were prospectively collected and entered in the Dg C20 registry. The following factors were analyzed: sex, age, BMI, cN, pT, clinical stage, the involved segment of the rectum, neoadjuvant therapy, circumferential tumour location, the type of surgical approach and the type of surgery. 168 patients were operated on during the above period. 9 (5.3%) palliative stomas and 159 (94.6%) resection procedures were performed in this group of 168 patients. 7 (4.4%) patients were excluded because the quality of excision was not assessed in them. 114 (75%) resections, including 5 intersphincteric resections, were performed in the group of the remaining 152 patients. 10 (7%) were Hartmann's procedures and 28 (18%) were amputation procedures. Of the 152 procedures, 69 (45%) were performed laparoscopically. A positive (y)pCRM was recorded in 26 (17%) patients, predominantly after abdominoperineal resection (APR) - 11 out of 27 (41%) - and Hartmann's operation - 6 out of 10 (60%). Incomplete ME was observed in 45 patients (30%), complete ME in 81 patients (53%) and partially complete ME in 26 patients (17%). Univariate analysis confirmed statistically significant factors associated with incomplete mesorectal excision: (y)pT (P = 0.00027), type of surgery (P = 0.00001) and tumour location (P = 0.00001). Multivariate analysis then confirmed two independent prognostic factors

  15. Analysis of the effectiveness of different hygiene procedures used in dental prostheses.

    Science.gov (United States)

    Rossato, Marisa Bagiotto; Unfer, Beatriz; May, Lilana Gressler; Braun, Katia Olmedo

    2011-01-01

    To compare the effectiveness of six denture hygiene procedures, used by patients to clean their dentures, in removing bacterial plaque. Fifteen students randomly divided into groups G1, G2, G3, G4, G5 and G6 used maxillary intraoral appliances for 24 h without cleaning them. Afterwards, the appliances were submitted to the following procedures: P1: washing under running water for 20 s; P2 and P3: cleaning with alkaline peroxide (Corega Tabs®) for 5 and 30 min, respectively; P4: brushing with water and liquid soap for 40 s; P5: alkaline hypochlorite for 10 min; P6: home-use chlorine solution (Q'boa® at 0.45% for 10 min), over a period of 6 consecutive weeks. The procedures followed a rotating scheme, so that all the appliances were submitted to all the hygiene methods studied. After the hygiene procedures, the appliances were stained, photographed and submitted to the weighing method. After ANOVA and Tukey's test, differences were observed: P5 = 0.73 ± 0.3 (b), P6 = 1.27 ± 0.4 (b,c), P4 = 1.92 ± 0.5 (b,c), P3 = 2.24 ± 1.0 (b,c), P2 = 7.53 ± 2.5 (c) and P1 = 26.86 ± 15.3 (a). From the results of the study, it could be concluded that the use of alkaline hypochlorite is the best way to remove bacterial plaque, followed by the home-use chlorine solution and brushing with water and liquid soap. Corega Tabs® must be used with 30 min of immersion to achieve a cleaning effectiveness similar to that of alkaline hypochlorite.

  16. A semi-automated image analysis procedure for in situ plankton imaging systems.

    Science.gov (United States)

    Bi, Hongsheng; Guo, Zhenhua; Benfield, Mark C; Fan, Chunlei; Ford, Michael; Shahrestani, Suzan; Sieracki, Jeffery M

    2015-01-01

    Plankton imaging systems are capable of providing fine-scale observations that enhance our understanding of key physical and biological processes. However, processing the large volumes of data collected by imaging systems remains a major obstacle for their employment, and existing approaches are designed either for images acquired under laboratory-controlled conditions or within clear waters. In the present study, we developed a semi-automated approach to analyze plankton taxa from images acquired by the ZOOplankton VISualization (ZOOVIS) system within turbid estuarine waters, in Chesapeake Bay. When compared to images from laboratory-controlled conditions or clear waters, images from highly turbid waters are often of relatively low quality and more variable, due to the large number of objects and the nonlinear illumination within each image. We first customized a segmentation procedure to locate objects within each image and extract them for classification. A maximally stable extremal regions algorithm was applied to segment large gelatinous zooplankton, and an adaptive threshold approach was developed to segment small organisms, such as copepods. Unlike in the existing approaches for images acquired from laboratory-controlled conditions or clear waters, the target objects are often the majority class, and the classification can be treated as a multi-class classification problem. We customized a two-level hierarchical classification procedure using support vector machines to classify the target objects (> 95%). First, histograms of oriented gradients feature descriptors were constructed for the segmented objects. In the first step all non-target and target objects were classified into different groups: arrow-like, copepod-like, and gelatinous zooplankton. Each object was then passed to a group-specific classifier to remove most non-target objects. After the objects were classified, an expert or non-expert manually removed the non-target objects that could not be removed automatically.
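
    A compressed sketch of the pipeline follows: adaptive thresholding to segment candidate objects (the small-organism branch described above), HOG descriptors, and an SVM classification stage. The synthetic image, parameter values, and toy labels are all placeholders; a real deployment would train the two-level classifier on labelled ZOOVIS patches.

```python
# Segment-describe-classify sketch: adaptive threshold -> HOG -> SVM.
import cv2
import numpy as np
from skimage.feature import hog
from sklearn.svm import SVC

# synthetic frame standing in for a plankton image: dark water, two bright objects
img = np.full((240, 320), 30, np.uint8)
cv2.circle(img, (80, 100), 12, 255, -1)                      # copepod-like blob
cv2.ellipse(img, (220, 150), (25, 8), 30, 0, 360, 200, -1)   # elongated object

# adaptive threshold copes with nonlinear illumination in turbid water
mask = cv2.adaptiveThreshold(img, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                             cv2.THRESH_BINARY, 51, -5)
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

feats = []
for c in contours:
    x, y, w, h = cv2.boundingRect(c)
    patch = cv2.resize(img[y:y + h, x:x + w], (64, 64))
    feats.append(hog(patch, orientations=9, pixels_per_cell=(8, 8),
                     cells_per_block=(2, 2)))

# stand-in for the first classification level (one toy label per region)
if len(feats) >= 2:
    clf = SVC(kernel="rbf").fit(feats, list(range(len(feats))))
    print(clf.predict(feats))
```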

  17. Heterogeneity of Capital Stocks in Japan: Classification by Factor Analysis

    Directory of Open Access Journals (Sweden)

    Konomi Tonogi

    2014-04-01

    Full Text Available This paper examines the heterogeneity of capital stocks using financial statement data of publicly listed Japanese firms. We conduct factor analysis on investment rates among various capital goods and estimate the factor loadings of each as its reactions to common factors like total factor productivity (TFP) shocks. Then we estimate the uniqueness of each investment rate, which is the percentage of its variance that is not explained by the common factors. If the estimated factor loadings are similar between some of the heterogeneous capital goods, it may well imply that the adjustment cost structure of these investments is also similar. Further, if some of the estimated values of uniqueness are small, it suggests that certain theoretical models may track the dynamics of the investment rates well. Our estimation results show that Building and Structure have similar factor loadings, as do Machinery & Equipment, Vehicles & Delivery Equipment, and Tools, Furniture, & Fixture. This suggests that we could remedy the curse of dimensionality by bundling together the investments that have similar factor loadings, and that identifying the functional structures of each group of capital goods can greatly improve the performance of empirical investment equations.
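
    The loading/uniqueness split described above can be reproduced in a few lines: fit a common-factor model to standardized investment rates and read off, for each asset type, the variance share left unexplained by the common factors. The data below are simulated around one common shock; the asset labels and loadings are placeholders, not the paper's estimates.

```python
# Uniqueness sketch: uniqueness = 1 - communality = 1 - sum of squared loadings,
# computed on standardized series so variances are 1. Simulated data only.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(5)
common = rng.normal(size=(500, 1))                    # one TFP-like common shock
assets = ["building", "machinery", "vehicles", "tools"]
load = np.array([0.9, 0.7, 0.65, 0.6])
X = common @ load[None, :] + rng.normal(scale=0.5, size=(500, 4))

Z = (X - X.mean(0)) / X.std(0)
fa = FactorAnalysis(n_components=1).fit(Z)
communality = (fa.components_**2).sum(axis=0)         # variance explained by the factor
uniqueness = 1 - communality
print(dict(zip(assets, uniqueness.round(2))))
```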

  18. Landslides geotechnical analysis. Qualitative assessment by valuation factors

    Science.gov (United States)

    Cuanalo Oscar, Sc D.; Oliva Aldo, Sc D.; Polanco Gabriel, M. E.

    2012-04-01

    In general, a landslide can cause a disaster when a number of factors combine: an extreme event related to a geological phenomenon, vulnerable elements exposed in a specific geographic area, and the probability of loss and damage, evaluated in terms of lives and economic assets over a certain period of time. This paper presents a qualitative evaluation of slope stability through Valuation Factors, obtained from the characterization of the conditioning and triggering factors that influence instability: for the former, morphology and topography, geology, soil mechanics, hydrogeology and vegetation; for the latter, rain, earthquakes, erosion and scour, and human activity. The stability analysis also depends on these factors and their ranges of influence, which greatly facilitate the selection of the construction processes best suited to improve the behavior of a slope or hillside. The Valuation Factors are a set of parameters for assessing the influence of the conditioning and triggering factors on the stability of slopes and hillsides. The characteristics of each factor must be properly categorized to capture its effect on behavior; one way to do this is by assigning a weighted value range indicating its effect on the stability of a slope. It is proposed to use Valuation Factors with weighted values between 0 and 1 (arbitrarily selected but guided by common sense and logic): the first corresponds to no or minimal effect on stability (no effect or very little influence) and the second to the greatest impact on it (a significant influence). Intermediate effects are evaluated with intermediate values.

  19. Aortic root performance after valve sparing procedure: a comparative finite element analysis.

    Science.gov (United States)

    Soncini, Monica; Votta, Emiliano; Zinicchino, Silvia; Burrone, Valeria; Mangini, Andrea; Lemma, Massimo; Antona, Carlo; Redaelli, Alberto

    2009-03-01

    David and Yacoub sparing techniques are the most common procedures adopted for the surgical correction of aortic root aneurysms. These surgical procedures entail the replacement of the sinuses of Valsalva with a synthetic graft, inside which the cusps are re-suspended. Root replacement by a synthetic graft may result in altered valve behaviour both in terms of coaptation and stress distribution, thus leading to the failure of the correction. A finite element approach was used to investigate this phenomenon; four 3D models of the aortic root were developed to simulate the root in physiological, pathological and post-operative conditions after the two different surgical procedures. The physiological 3D geometrical model was developed on the basis of anatomical data obtained from echocardiographic images; it was then modified to obtain the pathological and post-operative models. The effectiveness of both techniques was assessed by comparison with the first two simulated conditions, in terms of stresses acting on the root, leaflet coaptation and interaction between leaflets and the graft during valve opening. Results show that both sparing techniques are able to restore aortic valve coaptation and to reduce stresses induced by the initial root dilation. Nonetheless, both techniques lead to altered leaflet kinematics, with more evident alterations after David repair.

  20. Factor analysis of borehole logs for evaluating formation shaliness: a hydrogeophysical application for groundwater studies

    Science.gov (United States)

    Szabó, Norbert Péter; Dobróka, Mihály; Turai, Endre; Szűcs, Péter

    2014-05-01

    The calculation of groundwater reserves in shaly sand aquifers requires a reliable estimation of effective porosity and permeability; the amount of shaliness as a related quantity can be determined from well log analysis. The conventionally used linear model, connecting the natural gamma-ray index to shale content, often gives only a rough estimate of shale volume. A non-linear model is suggested, which is derived from the factor analysis of well-logging data. An earlier study of hydrocarbon wells revealed an empirical relationship between the factor scores and shale volume, independent of the well site. Borehole logs from three groundwater wells drilled in the northeastern Great Hungarian Plain are analyzed to derive depth logs of factor variables, which are then correlated with shale volumes given from the method of Larionov. Shale volume logs derived by the statistical procedure are in close agreement with those derived from Larionov's formula, which confirms the validity of the non-linear approximation. The statistical results are in good accordance with laboratory measurements made on core samples. Whereas conventional methods normally use a single well log as input, factor analysis processes all available logs to provide groundwater exploration with reliable estimations of shale volume.
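
    For contrast with the factor-score approach, the conventional route referred to above (a linear gamma-ray index, optionally corrected with Larionov's non-linear formula) looks like this in a hedged sketch; the log values and baselines are placeholders, and the pre-Tertiary Larionov coefficients used are one of several published variants.

```python
# Gamma-ray shale volume sketch: linear index IGR, then Larionov's correction.
import numpy as np

gr = np.array([45.0, 70.0, 95.0, 120.0])       # gamma-ray readings (API units), placeholder
gr_clean, gr_shale = 30.0, 130.0               # clean-sand and shale baselines, placeholder

igr = (gr - gr_clean) / (gr_shale - gr_clean)  # linear gamma-ray index
vsh_linear = igr                               # the "rough" linear estimate
vsh_larionov = 0.33 * (2.0**(2.0 * igr) - 1)   # Larionov, pre-Tertiary rocks
print(np.c_[vsh_linear.round(2), vsh_larionov.round(2)])
```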

  1. Complications associated with transobturator sling procedures: analysis of 233 consecutive cases with a 27 months follow-up

    Directory of Open Access Journals (Sweden)

    Dubuisson Jean-Bernard

    2009-09-01

    Full Text Available Abstract Background: The transobturator tape procedure (TOT) is an effective surgical treatment for female stress urinary incontinence. However, data concerning safety are rare, follow-up is often less than two years, and complications are probably underreported. The aim of this study was to describe early and late complications associated with TOT procedures and to identify risk factors for erosions. Methods: This was a 27-month follow-up of a cohort of 233 women who underwent TOT with three different types of slings (Aris®, Obtape®, TVT-O®). Follow-up information was available for 225 (96.6%) women. Results: There were few peroperative complications. Forty-eight women (21.3%) reported late complications, including de novo or worsening of preexisting urgencies (10.2%), perineal pain (2.2%), de novo dyspareunia (9%), and vaginal erosion (7.6%). The risk of erosion significantly differed between the three types of slings and was 4%, 17% and 0% for Aris®, Obtape® and TVT-O®, respectively (P = 0.001). The overall proportion of women satisfied by the procedure was 72.1%. The percentage of women satisfied was significantly lower among women who experienced erosion (29.4%) than among those who did not (78.4%) (RR 0.14, 95% CI 0.05-0.38, P …). Conclusion: Late postoperative complications are relatively frequent after TOT and can impair patients' satisfaction. Women should be informed of these potential complications preoperatively and require careful follow-up after the procedure. Choice of the safest sling material is crucial, as it is a risk factor for erosion.

  2. Exploratory factor analysis of the Brazilian OHIP for edentulous subjects.

    Science.gov (United States)

    Souza, R F; Leles, C R; Guyatt, G H; Pontes, C B; Della Vecchia, M P; Neves, F D

    2010-03-01

    The use of seven domains for the Oral Health Impact Profile (OHIP)-EDENT was not supported for its Brazilian version, making data interpretation in clinical settings difficult. Thus, the aim of this study was to assess patients' responses to the translated OHIP-EDENT in a group of edentulous subjects and to develop factor scales for application in future studies. Data from 103 conventional and implant-retained complete denture wearers (36 men, mean age 69.1 ± 10.3 years) were assessed using the Brazilian version of the OHIP-EDENT. Oral health-related quality of life domains were identified by factor analysis using principal component analysis as the extraction method, followed by varimax rotation. Factor analysis identified four factors that accounted for 63% of the total variance of the 19 items, named masticatory discomfort and disability (four items), psychological discomfort and disability (five items), social disability (five items) and oral pain and discomfort (five items). These four factors/domains of the Brazilian OHIP-EDENT version represent patient-important aspects of oral health-related quality of life.
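    For readers who want to reproduce this style of analysis, the sketch below runs a principal-component extraction with varimax rotation using the factor_analyzer package. The four-factor choice mirrors the abstract, but the input data here are random placeholders, not the study's responses.

```python
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer

# Placeholder responses: 103 subjects x 19 OHIP-EDENT items (random stand-in data).
rng = np.random.default_rng(0)
items = pd.DataFrame(rng.integers(0, 5, size=(103, 19)),
                     columns=[f"item_{i + 1}" for i in range(19)])

# Principal-component extraction followed by varimax rotation, as in the abstract.
fa = FactorAnalyzer(n_factors=4, method="principal", rotation="varimax")
fa.fit(items)

loadings = pd.DataFrame(fa.loadings_, index=items.columns)
print(loadings.round(2))
# Cumulative proportion of variance explained by the four factors.
print("cumulative variance:", fa.get_factor_variance()[2].round(2))
```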

  3. Incidental durotomy during spinal surgery: a multivariate analysis for risk factors.

    Science.gov (United States)

    Du, Jerry Y; Aichmair, Alexander; Kueper, Janina; Lam, Cyrena; Nguyen, Joseph T; Cammisa, Frank P; Lebl, Darren R

    2014-10-15

    Multivariate analysis. The purpose of this study was to investigate risk factors for incidental durotomy (ID) in modern spine surgery techniques. ID, a relatively common complication of spine surgery, has been associated with postoperative complications such as durocutaneous fistulas, pseudomeningoceles, and arachnoiditis. Revision surgery may be necessary if the dural tear is not recognized and repaired during the initial procedure. ID was prospectively documented in patients who underwent spine surgery at a single institution during a 2-year period (n = 4822). Patients with ID (n = 182) from lumbar or thoracolumbar cases were matched 1:1 to a control cohort without ID. Demographic, diagnostic, and surgical procedure data were retrospectively collected and analyzed. Multivariate analysis identified revision spine surgery (adjusted odds ratio [aOR]: 4.78, 95% confidence interval [CI]: 2.84-8.06, P<0.01), laminectomy (aOR: 3.82, 95% CI: 2.02-7.22, P<0.01), and older age (aOR: 1.03, 95% CI: 1.01-1.04, P<0.01) as independent risk factors for ID. Fusion (aOR: 0.59, 95% CI: 0.35-0.99, P=0.04), foraminectomy (aOR: 0.42, 95% CI: 0.25-0.69, P<0.01), and lateral approach (aOR: 0.29, 95% CI: 0.14-0.61, P<0.01) were independent protective factors. Prior spine surgery, laminectomy, and older age were significant independent risk factors for ID. The recently developed lateral approach to interbody fusion was identified as a significant protective factor for ID, along with fusion and foraminectomy. These findings may help guide future surgical decisions regarding ID and aid in the patient informed-consent process. Level of evidence: 3.
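    Adjusted odds ratios like these come from multivariable logistic regression. The sketch below shows the general pattern with statsmodels on fabricated data; the variable names and effect sizes are invented for illustration and do not reproduce the study's model.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Fabricated matched-cohort data: outcome is incidental durotomy (0/1).
rng = np.random.default_rng(1)
n = 364
df = pd.DataFrame({
    "revision_surgery": rng.integers(0, 2, n),
    "laminectomy": rng.integers(0, 2, n),
    "age": rng.normal(60, 12, n),
})
logit_p = -2.0 + 1.5 * df["revision_surgery"] + 1.3 * df["laminectomy"] + 0.03 * df["age"]
df["durotomy"] = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(int)

X = sm.add_constant(df[["revision_surgery", "laminectomy", "age"]])
res = sm.Logit(df["durotomy"], X).fit(disp=False)

# Exponentiated coefficients are adjusted odds ratios with 95% CIs.
odds_ratios = pd.concat([np.exp(res.params), np.exp(res.conf_int())], axis=1)
odds_ratios.columns = ["aOR", "2.5%", "97.5%"]
print(odds_ratios.round(2))
```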

  4. An integrated portfolio optimisation procedure based on data envelopment analysis, artificial bee colony algorithm and genetic programming

    Science.gov (United States)

    Hsu, Chih-Ming

    2014-12-01

    Portfolio optimisation is an important issue in the field of investment/financial decision-making and has received considerable attention from both researchers and practitioners. However, besides portfolio optimisation, a complete investment procedure should also include the selection of profitable investment targets and the determination of the optimal timing for buying/selling them. In this study, an integrated procedure using data envelopment analysis (DEA), artificial bee colony (ABC) and genetic programming (GP) is proposed to resolve a portfolio optimisation problem. The proposed procedure is evaluated through a case study on investing in stocks in the semiconductor sub-section of the Taiwan stock market over 4 years. The potential average 6-month return on investment of 9.31% from 1 November 2007 to 31 October 2011 indicates that the proposed procedure can be considered a feasible and effective tool for making outstanding investment plans, and thus making profits, in the Taiwan stock market. Moreover, it is a strategy that can help investors to make profits even when the overall stock market suffers a loss.

  5. Using Confirmatory Factor Analysis for Construct Validation: An Empirical Review

    Science.gov (United States)

    DiStefano, Christine; Hess, Brian

    2005-01-01

    This study investigated the psychological assessment literature to determine what applied researchers are using and reporting from confirmatory factor analysis (CFA) studies for evidence of construct validation. One hundred and one articles published in four major psychological assessment journals between 1990 and 2002 were systematically…

  6. Modular Open-Source Software for Item Factor Analysis

    Science.gov (United States)

    Pritikin, Joshua N.; Hunter, Micheal D.; Boker, Steven M.

    2015-01-01

    This article introduces an item factor analysis (IFA) module for "OpenMx," a free, open-source, and modular statistical modeling package that runs within the R programming environment on GNU/Linux, Mac OS X, and Microsoft Windows. The IFA module offers a novel model specification language that is well suited to programmatic generation…

  8. A Factor Analysis of Barriers to Effective Listening.

    Science.gov (United States)

    Golen, Steven

    1990-01-01

    Conducts a factor analysis to determine listening barriers perceived as most frequently affecting the listening effectiveness among business college students. Finds the presence of six listening barriers, with the barrier "listen primarily for details or facts" as the most frequently encountered barrier perceived by students. (MM)

  9. A Comparison of Imputation Methods for Bayesian Factor Analysis Models

    Science.gov (United States)

    Merkle, Edgar C.

    2011-01-01

    Imputation methods are popular for the handling of missing data in psychology. The methods generally consist of predicting missing data based on observed data, yielding a complete data set that is amenable to standard statistical analyses. In the context of Bayesian factor analysis, this article compares imputation under an unrestricted…

  10. CT-guided vertebroplasty: analysis of technical results, extraosseous cement leakages, and complications in 500 procedures

    Energy Technology Data Exchange (ETDEWEB)

    Pitton, Michael Bernhard; Herber, Sascha; Koch, Ulrike; Oberholzer, Katja; Dueber, Christoph [Johannes Gutenberg-University of Mainz, Department of Diagnostic and Interventional Radiology, Mainz (Germany); Drees, Philip [University Hospital, Johannes Gutenberg-University of Mainz, Department of Orthopedic Surgery, Mainz (Germany)

    2008-11-15

    The aim of this study was to analyze the technical results, the extraosseous cement leakages, and the complications in our first 500 vertebroplasty procedures. Patients with osteoporotic vertebral compression fractures or osteolytic lesions caused by malignant tumors were treated with CT-guided vertebroplasty. The technical results were documented with CT, and the extraosseous cement leakages and periinterventional clinical complications were analyzed, as well as secondary fractures during follow-up. Since 2002, 500 vertebroplasty procedures have been performed on 251 patients (82 male, 169 female, age 71.5 ± 9.8 years) suffering from osteoporotic compression fractures (n = 217) and/or malignant tumour infiltration (n = 34). The number of vertebrae treated per patient was 1.96 ± 1.29 (range 1-10); the numbers of interventions per patient and interventions per vertebra were 1.33 ± 0.75 (range 1-6) and 1.01 ± 0.10, respectively. The amount of PMMA cement was 4.5 ± 1.9 ml and decreased during the 5-year period of investigation. The procedure-related 30-day mortality was 0.4% (1 of 251 patients), due to pulmonary embolism in this case. The procedure-related morbidity was 2.8% (7/251), including one acute coronary syndrome beginning 12 h after the procedure and one missing patellar reflex in a patient with a cement leak near the neuroforamen because of osteolytic destruction of the respective pedicle. Additionally, one patient developed a medullary conus syndrome after a fall during the night after vertebroplasty, two patients reached an inadequate depth of conscious sedation, and two cases had additional fractures (one pedicle fracture, one rib fracture). The overall CT-based cement leak rate was 55.4% and included leakages predominantly into intervertebral disc spaces (25.2%), the epidural vein plexus (16.0%), through the posterior wall (2.6%), into the neuroforamen (1.6%), into paravertebral vessels (7.2%), and combinations of these and others. During follow

  11. The Effects of Overextraction on Factor and Component Analysis.

    Science.gov (United States)

    Fava, J L; Velicer, W F

    1992-07-01

    The effects of overextracting factors and components within and between the methods of maximum likelihood factor analysis (MLFA) and principal component analysis (PCA) were examined. Computer-simulated data sets were generated to represent a range of factor and component patterns. Saturation (a_ij = .8, .6, & .4), sample size (N = 75, 150, 225, 450), and variable-to-component (factor) ratio (p:m = 12:1, 6:1, & 4:1) were the conditions manipulated. In Study 1, scores based on the incorrect patterns were correlated with correct scores within each method after each overextraction. In Study 2, scores were correlated between the methods of PCA and MLFA after each overextraction. Overextraction had a negative effect, but scores based on strong component and factor patterns displayed robustness to the effects of overextraction. Low item saturation and low sample size resulted in degraded score reproduction. Degradation was strongest for patterns that combined low saturation and low sample size. Component and factor scores were highly correlated even at maximal levels of overextraction. Dissimilarity between score methods was the greatest in conditions that combined low saturation and low sample size. Some guidelines for researchers concerning the effects of overextraction are noted, as well as some cautions in the interpretation of results.

  12. Exploratory factor analysis of the Oral Health Impact Profile.

    Science.gov (United States)

    John, M T; Reissmann, D R; Feuerstahler, L; Waller, N; Baba, K; Larsson, P; Celebić, A; Szabo, G; Rener-Sitar, K

    2014-09-01

    Although oral health-related quality of life (OHRQoL) as measured by the Oral Health Impact Profile (OHIP) is thought to be multidimensional, the nature of these dimensions is not known. The aim of this report was to explore the dimensionality of the OHIP using the Dimensions of OHRQoL (DOQ) Project, an international study of general population subjects and prosthodontic patients. Using the project's Learning Sample (n = 5173), we conducted an exploratory factor analysis on the 46 OHIP items not specifically referring to dentures for 5146 subjects with sufficiently complete data. The first eigenvalue (27·0) of the polychoric correlation matrix was more than ten times larger than the second eigenvalue (2·6), suggesting the presence of a dominant, higher-order general factor. Follow-up analyses with Horn's parallel analysis revealed a viable second-order, four-factor solution. An oblique rotation of this solution revealed four highly correlated factors that we named Oral Function, Oro-facial Pain, Oro-facial Appearance and Psychosocial Impact. These four dimensions and the strong general factor are two viable hypotheses for the factor structure of the OHIP.
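    Horn's parallel analysis, used here to probe the factor structure, retains factors whose observed eigenvalues exceed those of random data of the same dimensions. The sketch below is a bare-bones version built on a Pearson correlation matrix; the study used polychoric correlations, which this simplification does not reproduce, and the input data are random placeholders.

```python
import numpy as np

def parallel_analysis(data, n_sims=100, quantile=0.95, seed=0):
    """Retain factors whose observed eigenvalues exceed the random-data quantile."""
    rng = np.random.default_rng(seed)
    n, p = data.shape
    observed = np.sort(np.linalg.eigvalsh(np.corrcoef(data, rowvar=False)))[::-1]
    random_eigs = np.empty((n_sims, p))
    for i in range(n_sims):
        sim = rng.standard_normal((n, p))
        random_eigs[i] = np.sort(np.linalg.eigvalsh(np.corrcoef(sim, rowvar=False)))[::-1]
    threshold = np.quantile(random_eigs, quantile, axis=0)
    return int(np.sum(observed > threshold)), observed, threshold

# Random placeholder data: 500 subjects, 20 items.
data = np.random.default_rng(1).standard_normal((500, 20))
k, obs, thr = parallel_analysis(data)
print("factors to retain:", k)
```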

  13. Evaluating Exploratory Factor Analysis: Which Initial-Extraction Techniques Provide the Best Factor Fidelity?

    Science.gov (United States)

    Buley, Jerry L.

    1995-01-01

    States that attacks by communication scholars have cast doubt on the validity of exploratory factor analysis (EFA). Tests EFA's ability to produce results that replicate known dimensions in a data set. Concludes that EFA should be viewed with cautious optimism and be evaluated according to the findings of this and similar studies. (PA)

  14. Factor analysis improves the selection of prescribing indicators

    DEFF Research Database (Denmark)

    Rasmussen, Hanne Marie Skyggedal; Søndergaard, Jens; Sokolowski, Ineta

    2006-01-01

    OBJECTIVE: To test a method for improving the selection of indicators of general practitioners' prescribing. METHODS: We conducted a prescription database study including all 180 general practices in the County of Funen, Denmark, approximately 472,000 inhabitants. Principal factor analysis was used...... indicators directly quantifying choice of coxibs, indicators measuring expenditure per Defined Daily Dose, and indicators taking risk aspects into account, (2) "Frequent NSAID prescribing", comprising indicators quantifying prevalence or amount of NSAID prescribing, and (3) "Diverse NSAID choice", comprising...... appropriate and inappropriate prescribing, as revealed by the correlation of the indicators in the first factor. CONCLUSION: Correlation and factor analysis is a feasible method that assists the selection of indicators and gives better insight into prescribing patterns....

  15. Identifying influential factors of business process performance using dependency analysis

    Science.gov (United States)

    Wetzstein, Branimir; Leitner, Philipp; Rosenberg, Florian; Dustdar, Schahram; Leymann, Frank

    2011-02-01

    We present a comprehensive framework for identifying influential factors of business process performance. In particular, our approach combines monitoring of process events and Quality of Service (QoS) measurements with dependency analysis to effectively identify influential factors. The framework uses data mining techniques to construct tree structures to represent dependencies of a key performance indicator (KPI) on process and QoS metrics. These dependency trees allow business analysts to determine how process KPIs depend on lower-level process metrics and QoS characteristics of the IT infrastructure. The structure of the dependencies enables a drill-down analysis of single factors of influence to gain a deeper knowledge why certain KPI targets are not met.
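    The dependency trees described here can be approximated with an off-the-shelf regression tree: fit the KPI on process and QoS metrics, then read the splits and feature importances as the most influential factors. The sketch below uses scikit-learn on fabricated monitoring data; the metric names are invented and the framework's actual algorithms are not reproduced.

```python
import numpy as np
import pandas as pd
from sklearn.tree import DecisionTreeRegressor, export_text

# Fabricated process-monitoring data: QoS metrics and a KPI (e.g., fulfilment time).
rng = np.random.default_rng(2)
n = 1000
metrics = pd.DataFrame({
    "service_response_ms": rng.exponential(200, n),
    "queue_length": rng.poisson(5, n),
    "retries": rng.integers(0, 4, n),
})
kpi = (0.01 * metrics["service_response_ms"]
       + 0.5 * metrics["queue_length"]
       + 2.0 * metrics["retries"]
       + rng.normal(0, 1, n))

# The fitted tree mirrors the "dependency tree" idea: splits expose the metrics
# that most strongly drive the KPI.
tree = DecisionTreeRegressor(max_depth=3, random_state=0).fit(metrics, kpi)
print(export_text(tree, feature_names=list(metrics.columns)))
print("influence ranking:", dict(zip(metrics.columns, tree.feature_importances_.round(2))))
```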

  16. A modified release analysis procedure using advanced froth flotation mechanisms. Technical report, September 1--November 30, 1995

    Energy Technology Data Exchange (ETDEWEB)

    Honaker, R.Q.; Mohanty, M.K. [Southern Illinois Univ., Carbondale, IL (United States). Dept. of Mining Engineering

    1995-12-31

    The objective of this study is to reinvestigate the release analysis procedure, which is traditionally conducted using a laboratory Denver cell, and to develop a modified process that can be used for all froth flotation technologies. Recent studies have found that the separation performance achieved by multiple stage cleaning and, in some cases, single stage cleaning using column flotation is superior to the performance achieved by the traditional release procedure. These findings are a result of the advanced flotation mechanisms provided by column flotation, which will be incorporated into a modified release analysis procedure developed in this study. A fundamental model of an open column has been developed which incorporates the effects of system hydrodynamics, froth drop-back, selective and non-selective detachment, operating parameters, feed solids content, and feed component flotation kinetics. Simulation results obtained during this reporting period indicate that the ultimate separation that can be achieved by a column flotation process can only be obtained in a single cleaning stage if the detachment mechanism in the froth phase is highly selective, which does not appear to occur in practice based on experimental results. Two to three cleaning stages were found to be required to obtain the ultimate performance if non-selective detachment or kinetic limiting conditions are assumed. This simulated finding agrees well with the experimental results obtained from the multiple stage cleaning of an Illinois No. 5 seam coal using the Packed-Column. Simulated results also indicate that the separation performance achieved by column flotation improves with increasing feed solids content after carrying-capacity limiting conditions are realized. These findings will be utilized in the next reporting period to modify the traditional release analysis procedure.

  17. Confidence ellipses: A variation based on parametric bootstrapping applicable on Multiple Factor Analysis results for rapid graphical evaluation

    DEFF Research Database (Denmark)

    Dehlholm, Christian; Brockhoff, Per B.; Bredie, Wender L. P.

    2012-01-01

    A new way of parametric bootstrapping allows similar construction of confidence ellipses applicable on all results from Multiple Factor Analysis obtained from the FactoMineR package in the statistical program R. With this procedure, a similar approach will be applied to Multiple Factor Analysis...... results regardless of the origin of data and the nature of the original variables. The approach is suitable for getting an overview of product confidence intervals and also applicable for data obtained from ‘one repetition’ evaluations. Furthermore, it is a convenient way to get an overview of variations...

  18. Developmental Coordination Disorder: Validation of a Qualitative Analysis Using Statistical Factor Analysis

    Directory of Open Access Journals (Sweden)

    Kathy Ahern

    2002-09-01

    Full Text Available This study investigates triangulation of the findings of a qualitative analysis by applying an exploratory factor analysis to themes identified in a phenomenological study. A questionnaire was developed from a phenomenological analysis of parents' experiences of parenting a child with Developmental Coordination Disorder (DCD). The questionnaire was administered to 114 parents of DCD children and data were analyzed using an exploratory factor analysis. The extracted factors provided support for the validity of the original qualitative analysis, and a commentary on the validity of the process is provided. The emerging description is of the compromises that were necessary to translate qualitative themes into statistical factors, and of the ways in which the statistical analysis suggests further qualitative study.

  19. Analysis of Recurrence Factor of Postoperative Papillary Thyroid Cancer

    Directory of Open Access Journals (Sweden)

    XING Lan-lan; CHEN Song; LI Ya-ming

    2014-02-01

    Full Text Available To investigate the factors that influence the recurrence of papillary thyroid cancer, 69 patients with papillary thyroid cancer treated from January 1, 2011 to March 30, 2013 were analyzed retrospectively. All met the inclusion criteria and had complete clinical data: 18 males and 51 females, average age 40.17 ± 12.97 years. Thyroid ultrasonography, thyroid function tests, and thyroglobulin and antibody measurements were performed on all patients, and thyroid function was checked three or more times while patients continuously took levothyroxine. Single-factor analysis was performed using SPSS 17.0 with respect to patients' gender, age, tumor size, type of operation, the degree of TSH inhibition under postoperative levothyroxine, and whether 131I thyroid remnant ablation was performed. Binary logistic regression analysis was used to study recurrence factors in the multivariate analysis. The ROC curve was drawn to determine the TSH threshold for evaluating tumor recurrence using the Youden index method. Univariate analysis showed no statistically significant association between papillary thyroid cancer recurrence and patients' age or surgical approach (P = 0.373, P = 0.226), but recurrence was related to patients' gender, tumor size, the postoperative degree of TSH suppression and removal of residual thyroid tissue (P = 0.031, P = 0.004, P = 0.00001, P = 0.00005). Males, large tumors, high postoperative TSH values and patients whose residual thyroid tissue was not removed after surgery had higher recurrence rates. Logistic regression analysis showed that tumor size, the postoperative degree of TSH suppression and removal of residual thyroid tissue were the influencing factors of tumor recurrence. The critical point of postoperative TSH suppression for evaluating tumor recurrence was determined to be 0.2235 mU/L using the Youden index method. Large tumors, high postoperative TSH values, and no removal of the residual thyroid tissue had more influence

  20. Existing Resources, Standards, and Procedures for Precise Monitoring and Analysis of Structural Deformations. Volume 2. Appendices

    Science.gov (United States)

    1992-09-01

    APPENDIX 5. Geometrical Analysis of Deformation Surveys. APPENDIX 6. Integrated Analysis of Deformation Surveys at Mactaquac. APPENDIX 7. Combination of … (the remainder of this record is garbled scan residue; surviving fragments cite a 1989 article, "Integrated analysis of deformation surveys at Mactaquac," International Water Power and Dam Construction, August, pp. 17-22, a 1971 work by Czaja, J., and an address at the University of New Brunswick, P.O. Box 4400, Fredericton, N.B., E3B 5A3, Canada).

  1. Economics definitions, methods, models, and analysis procedures for Homeland Security applications.

    Energy Technology Data Exchange (ETDEWEB)

    Ehlen, Mark Andrew; Loose, Verne William; Vargas, Vanessa N.; Smith, Braeton J.; Warren, Drake E.; Downes, Paula Sue; Eidson, Eric D.; Mackey, Greg Edward

    2010-01-01

    This report gives an overview of the types of economic methodologies and models used by Sandia economists in their consequence analysis work for the National Infrastructure Simulation & Analysis Center and other DHS programs. It describes the three primary resolutions at which analysis is conducted (microeconomic, mesoeconomic, and macroeconomic), the tools used at these three levels (from data analysis to internally developed and publicly available tools), and how they are used individually and in concert with each other and other infrastructure tools.

  2. Mineral sampling: sensitivity analysis and correction factors for Pierre Gy's equation; Muestreo de minerales: analisis de sensibilidad y factores de correccion para la ecuacion de Pierre Gy

    Energy Technology Data Exchange (ETDEWEB)

    Vallebuona, G.; Niedbalski, F.

    2005-07-01

    Pierre Gy's equation is widely used in ore sampling. This equation is based on four parameters: shape factor, size distribution factor, mineralogical factor and liberation factor. The usual practice is to consider fixed values for the shape and size distribution factors, but this practice does not represent several important ores well. The mineralogical factor considers only one species of interest plus the gangue, leaving out cases such as polymetallic ores where there is more than one species of interest. A sensitivity analysis of the factors in Gy's equation was performed, and a procedure to determine specific values for them was developed and is presented in this work. Mean ore characteristics associated with unreliable use of the current procedure were determined. Finally, for a case study, the effects of using each alternative were evaluated. (Author) 4 refs.
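    For context, a standard textbook form of Gy's formula expresses the relative variance of the fundamental sampling error in terms of the four factors named above, the top particle size, and the sample and lot masses. The sketch below implements that common form; the constant values are illustrative only and are not taken from the paper.

```python
def gy_fundamental_error(f, g, c, l, d_cm, m_sample_g, m_lot_g):
    """Relative variance of the fundamental sampling error (Gy's formula):
    sigma^2 = f * g * c * l * d^3 * (1/Ms - 1/ML)."""
    return f * g * c * l * d_cm**3 * (1.0 / m_sample_g - 1.0 / m_lot_g)

# Illustrative values: shape factor 0.5, size distribution factor 0.25,
# mineralogical factor 100 g/cm^3, liberation factor 0.8, top size 1 cm,
# 500 g sample taken from a 100 kg lot.
var = gy_fundamental_error(f=0.5, g=0.25, c=100.0, l=0.8,
                           d_cm=1.0, m_sample_g=500.0, m_lot_g=100_000.0)
print(f"relative standard deviation: {var ** 0.5:.2%}")
```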

  3. Risk factors for retained surgical items: a meta-analysis and proposed risk stratification system.

    Science.gov (United States)

    Moffatt-Bruce, Susan D; Cook, Charles H; Steinberg, Steven M; Stawicki, Stanislaw P

    2014-08-01

    Retained surgical items (RSI) are designated as completely preventable "never events". Despite numerous case reports, clinical series, and expert opinions, few studies provide quantitative insight into RSI risk factors and their relative contributions to the overall RSI risk profile. Existing case-control studies lack the ability to reliably detect clinically important differences within the long list of proposed risks. This meta-analysis examines the best available data for RSI risk factors, seeking to provide a clinically relevant risk stratification system. Nineteen candidate studies were considered for this meta-analysis. Three retrospective, case-control studies of RSI-related risk factors contained suitable group comparisons between patients with and without RSI, thus qualifying for further analysis. Comprehensive Meta-Analysis 2.0 (BioStat, Inc, Englewood, NJ) software was used to analyze the following "common factor" variables compiled from the above studies: body-mass index, emergency procedure, estimated operative blood loss >500 mL, incorrect surgical count, lack of surgical count, >1 subprocedure, >1 surgical team, nursing staff shift change, operation "afterhours" (i.e., between 5 PM and 7 AM), operative time, trainee presence, and unexpected intraoperative factors. We further stratified the resulting RSI risk factors into low, intermediate, and high risk. Despite the fact that only between three and six risk factors were associated with increased RSI risk across the three studies, our analysis of pooled data demonstrates that seven risk factors are significantly associated with increased RSI risk. Variables found to elevate the RSI risk include intraoperative blood loss >500 mL (odds ratio [OR] 1.6); duration of operation (OR 1.7); >1 subprocedure (OR 2.1); lack of surgical counts (OR 2.5); >1 surgical team (OR 3.0); unexpected intraoperative factors (OR 3.4); and incorrect surgical count (OR 6.1). Changes in nursing staff, emergency surgery, body

  4. Analysis of influencing factors associated with human papillomavirus 16, 18 persistent infection after the treatment for high-grade cervical intraepithelial neoplasia with loop electrosurgical excision procedure

    Institute of Scientific and Technical Information of China (English)

    孙晶雪; 李立

    2013-01-01

    Objective: To discuss the short-term efficacy of the loop electrosurgical excision procedure (LEEP) in clearing human papillomavirus (HPV) 16, 18 infection after the operation for high-grade cervical intraepithelial neoplasia (CIN) and to explore the influencing factors of HPV16, 18 persistent infection. Methods: A total of 167 cases of CIN II/III confirmed by colposcopy-directed cervical biopsy and treated in the department of gynecology of Liuzhou People's Hospital from January 2007 to June 2009 were enrolled. HPV16, 18 DNA was detected in all patients, and cervical LEEP was performed on all of them. A thinprep cytologic test (TCT), HPV16, 18 DNA detection and follow-up were applied at six months after the operation. Results: The HPV infection rate was 77.951% preoperatively and 7.87% at the six-month postoperative follow-up visit. Cervical biopsy was performed in 43 patients with persistently positive HPV DNA, abnormal TCT or a positive surgical margin. The rate of residual lesions was 9.3%. Conclusions: (1) HPV16, 18 infection in the female genital tract can be effectively cleared by LEEP. (2) Influencing factors of persistent HPV16, 18 infection after treatment of high-grade CIN by LEEP include age, age at first sexual activity, multiple deliveries and positive margin involvement. (3) Positive margin involvement is a risk factor for residual lesions of high-grade CIN after treatment by LEEP.

  5. Forensic analysis of Salvia divinorum using multivariate statistical procedures. Part II: association of adulterated samples to S. divinorum.

    Science.gov (United States)

    Willard, Melissa A Bodnar; McGuffin, Victoria L; Smith, Ruth Waddell

    2012-01-01

    Salvia divinorum is a plant material that is of forensic interest due to the hallucinogenic nature of the active ingredient, salvinorin A. In this study, S. divinorum was extracted and spiked onto four different plant materials (S. divinorum, Salvia officinalis, Cannabis sativa, and Nicotiana tabacum) to simulate an adulterated sample that might be encountered in a forensic laboratory. The adulterated samples were extracted and analyzed by gas chromatography-mass spectrometry, and the resulting total ion chromatograms were subjected to a series of pretreatment procedures that were used to minimize non-chemical sources of variance in the data set. The data were then analyzed using principal components analysis (PCA) to investigate association of the adulterated extracts to unadulterated S. divinorum. While association was possible based on visual assessment of the PCA scores plot, additional procedures including Euclidean distance measurement, hierarchical cluster analysis, Student's t tests, Wilcoxon rank-sum tests, and Pearson product moment correlation were also applied to the PCA scores to provide a statistical evaluation of the association observed. The advantages and limitations of each statistical procedure in a forensic context were compared and are presented herein.
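    A minimal version of this kind of association can be built with scikit-learn: project all chromatographic profiles into PCA space, then compare each questioned sample's scores to the unadulterated S. divinorum scores, for example by Euclidean distance. The sketch below uses random stand-in data rather than the study's chromatograms, and distance to a centroid is only one of the association measures the authors evaluated.

```python
import numpy as np
from sklearn.decomposition import PCA
from scipy.spatial.distance import euclidean

rng = np.random.default_rng(3)
# Stand-in "chromatograms": 10 unadulterated S. divinorum profiles and
# 4 questioned (adulterated) profiles derived from them with added noise.
reference = rng.normal(0.0, 1.0, size=(10, 200))
questioned = reference[:4] + rng.normal(0.0, 0.2, size=(4, 200))

pca = PCA(n_components=2).fit(reference)
centroid = pca.transform(reference).mean(axis=0)

# Distance of each questioned sample's PCA scores from the reference centroid.
for i, s in enumerate(pca.transform(questioned)):
    print(f"sample {i}: distance to S. divinorum centroid = {euclidean(s, centroid):.2f}")
```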

  6. Confirmatory factor analysis of the WAIS-IV/WMS-IV.

    Science.gov (United States)

    Holdnack, James A; Zhou, Xiaobin; Larrabee, Glenn J; Millis, Scott R; Salthouse, Timothy A

    2011-06-01

    The Wechsler Adult Intelligence Scale-fourth edition (WAIS-IV) and the Wechsler Memory Scale-fourth edition (WMS-IV) were co-developed to be used individually or as a combined battery of tests. The independent factor structure of each of the tests has been identified; however, the combined factor structure has yet to be determined. Confirmatory factor analysis was applied to the WAIS-IV/WMS-IV Adult battery (i.e., age 16-69 years) co-norming sample (n = 900) to test 13 measurement models. The results indicated that two models fit the data equally well. One model is a seven-factor solution without a hierarchical general ability factor: Verbal Comprehension, Perceptual Reasoning, Processing Speed, Auditory Working Memory, Visual Working Memory, Auditory Memory, and Visual Memory. The second model is a five-factor model composed of Verbal Comprehension, Perceptual Reasoning, Processing Speed, Working Memory, and Memory with a hierarchical general ability factor. Interpretative implications for each model are discussed.
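    Measurement models of this kind are typically written in lavaan-style syntax. The sketch below shows how a small two-factor confirmatory model could be specified with the semopy package in Python; the indicator names and data are placeholders, and this is not the authors' actual 13-model comparison.

```python
import numpy as np
import pandas as pd
from semopy import Model

# Placeholder data: 6 indicators for two correlated latent factors.
rng = np.random.default_rng(4)
n = 900
verbal = rng.normal(size=n)
memory = 0.5 * verbal + rng.normal(size=n)
data = pd.DataFrame({
    "vc1": verbal + rng.normal(scale=0.5, size=n),
    "vc2": verbal + rng.normal(scale=0.5, size=n),
    "vc3": verbal + rng.normal(scale=0.5, size=n),
    "am1": memory + rng.normal(scale=0.5, size=n),
    "am2": memory + rng.normal(scale=0.5, size=n),
    "am3": memory + rng.normal(scale=0.5, size=n),
})

# lavaan-style measurement model: =~ defines latent factors by their indicators.
desc = """
VerbalComprehension =~ vc1 + vc2 + vc3
AuditoryMemory =~ am1 + am2 + am3
"""
model = Model(desc)
model.fit(data)
print(model.inspect().head())
```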

  7. An exploratory analysis of personality factors contributed to suicide attempts

    Directory of Open Access Journals (Sweden)

    P. N. Suresh Kumar

    2013-01-01

    Full Text Available Background: People who attempt suicide have certain individual predispositions, part of which is contributed by personality traits. Aims: The present study was conducted to identify the psycho-sociodemographic and personality-related factors contributing to suicide attempts. Materials and Methods: 104 suicide attempters admitted to various departments and referred to the department of psychiatry of IQRAA Hospital formed the study sample. They were evaluated with a self-designed socio-demographic proforma, Eysenck's Personality Questionnaire Revised, the Albert Einstein College of Medicine Impulsivity Coping Scale, and the Past Feelings and Acts of Violence Scale. Statistical Analysis: The data were initially analyzed by percentage of frequencies. Associations between socio-demographic and selected psychological factors were analyzed using the t-test and Chi-square test. Intercorrelation among psychological factors was calculated by Pearson's correlation coefficient "r". Results and Conclusion: Factors such as young age, being married, nuclear family, feeling lonely and a burden to the family, inability to solve the problems of day-to-day life, presence of a psychiatric diagnosis, and personality traits such as neuroticism, impulsivity, and violence contributed to suicide attempts. A significant positive relationship between these factors was also identified. The findings of the present study call the attention of mental health professionals to identify these high-risk factors in susceptible individuals and to modify them to prevent suicide attempts.

  8. Etiology of Readmissions Following Orthopaedic Procedures and Medical Admissions. A Comparative Analysis.

    Science.gov (United States)

    Maslow, Jed; Hutzler, Lorraine; Slover, James; Bosco, Joseph

    2015-12-01

    The Federal Government, the largest payer of health care, considers readmission within 30 days of discharge an indicator of quality of care. Many studies have focused on causes for and strategies to reduce readmissions following medical admissions. However, few studies have focused on the differences between them. We believe that the causes for readmission following orthopaedic surgery are markedly different from those following medical admissions, and therefore the strategies developed to reduce medical readmissions will not be as effective in reducing readmissions after elective orthopaedic surgery. All unplanned 30-day readmissions following an index hospitalization for an elective orthopaedic procedure (primary and revision total joint arthroplasty and spine procedures) or for one of the three publicly reported medical conditions (acute myocardial infarction [AMI], heart failure [HF], and pneumonia, which accounted for 11% of readmissions) were identified at our institution from 2010 through 2012. A total of 268 orthopaedic patients and 390 medical patients were identified as having an unplanned 30-day readmission. We reviewed a prospectively collected database to determine the reason for readmission in each encounter. A total of 233 (86.9%) orthopaedic patients were readmitted for surgical complications, most commonly for a wound infection (56.0%) or wound complication (11.6%). Following an index admission for HF or AMI, the primary reason for readmission was a disease of the circulatory system (55.9% and 57.4%, respectively). Following an index admission for pneumonia, the primary reason for readmission was a disease of the respiratory system (34.5%). The causes of readmissions following orthopaedic surgery and medical admissions are different. Patients undergoing orthopaedic procedures are readmitted for surgical complications, frequently unrelated to aftercare, and medicine patients are readmitted for reasons related to the index diagnosis. Interventions designed to reduce orthopaedic readmissions must focus on

  9. Confirmatory factor analysis for the Eating Disorder Examination Questionnaire: Evidence supporting a three-factor model.

    Science.gov (United States)

    Barnes, Jennifer; Prescott, Tim; Muncer, Steven

    2012-12-01

    The purpose of this investigation was to compare the goodness-of-fit of a one-factor model with the four-factor model proposed by Fairburn (2008) and the three-factor model proposed by Peterson and colleagues (2007) for the Eating Disorder Examination Questionnaire (EDE-Q 6.0) (Fairburn and Beglin, 1994). Using a cross-sectional design, the EDE-Q was completed by 569 adults recruited from universities and eating disorder charities in the UK. Confirmatory factor analysis (CFA) was carried out for both the student and non-student groups. CFA indicated that Peterson et al.'s (2007) three-factor model was the best fit for both groups within the current data sample. Acceptable levels of internal reliability were observed and there was clear evidence for a hierarchical factor of eating disorder. The results of this study provide support for the three-factor model of the EDE-Q suggested by Peterson and colleagues (2007) in that this model was appropriate for both the student and non-student sample populations. Copyright © 2012 Elsevier Ltd. All rights reserved.

  10. Replication Analysis in Exploratory Factor Analysis: What It Is and Why It Makes Your Analysis Better

    Directory of Open Access Journals (Sweden)

    Jason W. Osborne

    2012-11-01

    Full Text Available Exploratory Factor Analysis (EFA) is a powerful and commonly-used tool for investigating the underlying variable structure of a psychometric instrument. However, there is much controversy in the social sciences with regard to the techniques used in EFA (Ford, MacCallum, & Tait, 1986; Henson & Roberts, 2006) and the reliability of the outcome. Simulations by Costello and Osborne (2005), for example, demonstrate how poorly some EFA analyses replicate, even with clear underlying factor structures and large samples. Thus, we argue that researchers should routinely examine the stability or volatility of their EFA solutions to gain more insight into the robustness of their solutions and insight into how to improve their instruments while still at the exploratory stage of development.

  11. Reproducibility evaluation of standard procedures for the proximate analysis of coals. [Between and within laboratories

    Energy Technology Data Exchange (ETDEWEB)

    Butina, I.V.; Gorelov, P.N.

    1979-01-01

    Four coal samples were analyzed in the laboratories at eight coking plants, for metrological processing to determine the within- and between-laboratory reproducibilities of the standard procedures for the basic coal property indices (ash, volatile matter and moisture content). The between-laboratory reproducibility is highest in the ash determination (S_b up to 0.17%) and lowest in the volatile matter determination (S_b up to 0.9%). The within-laboratory reproducibility is up to 0.3% on ash and moisture content and up to 0.5% on volatile matter.

  12. A semi-automated image analysis procedure for in situ plankton imaging systems.

    Directory of Open Access Journals (Sweden)

    Hongsheng Bi

    Full Text Available Plankton imaging systems are capable of providing fine-scale observations that enhance our understanding of key physical and biological processes. However, processing the large volumes of data collected by imaging systems remains a major obstacle for their employment, and existing approaches are designed either for images acquired under laboratory-controlled conditions or within clear waters. In the present study, we developed a semi-automated approach to analyze plankton taxa from images acquired by the ZOOplankton VISualization (ZOOVIS) system within turbid estuarine waters, in Chesapeake Bay. When compared to images under laboratory-controlled conditions or clear waters, images from highly turbid waters are often of relatively low quality and more variable, due to the large number of objects and nonlinear illumination within each image. We first customized a segmentation procedure to locate objects within each image and extract them for classification. A maximally stable extremal regions algorithm was applied to segment large gelatinous zooplankton, and an adaptive threshold approach was developed to segment small organisms, such as copepods. Unlike in existing approaches for images acquired under laboratory-controlled conditions or in clear waters, the target objects are often the majority class, and the classification can be treated as a multi-class classification problem. We customized a two-level hierarchical classification procedure using support vector machines to classify the target objects (95%). First, histograms of oriented gradients feature descriptors were constructed for the segmented objects. In the first step, all non-target and target objects were classified into different groups: arrow-like, copepod-like, and gelatinous zooplankton. Each object was passed to a group-specific classifier to remove most non-target objects. After the object was classified, an expert or non-expert then manually removed the non-target objects that

  13. Cost-effectiveness analysis of clinic-based chloral hydrate sedation versus general anaesthesia for paediatric ophthalmological procedures.

    Science.gov (United States)

    Burnett, Heather F; Lambley, Rosemary; West, Stephanie K; Ungar, Wendy J; Mireskandari, Kamiar

    2015-11-01

    The inability of some children to tolerate detailed eye examinations often necessitates general anaesthesia (GA). The objective was to assess the incremental cost-effectiveness of paediatric eye examinations carried out in an outpatient sedation unit compared with GA. An episode-of-care cost-effectiveness analysis was conducted from a societal perspective. Model inputs were based on a retrospective cross-over cohort of Canadian children aged […] 68 successful procedures per child. The result was robust to varying the cost assumptions. Cross-over designs offer a powerful way to assess the costs and effectiveness of two interventions because patients serve as their own control. This study demonstrated significant savings when ophthalmological exams were carried out in a hospital outpatient clinic, although with slightly fewer procedures completed. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/

  14. Finite element analysis of donning procedure of a prosthetic transfemoral socket.

    Science.gov (United States)

    Lacroix, Damien; Patiño, Juan Fernando Ramírez

    2011-12-01

    Lower limb amputation is a severe psychological and physical event for a patient. A prosthetic solution can be provided, but it should respond to the patient-specific need to accommodate geometrical and biomechanical specificities. A new approach to calculating the stress-strain state at the interaction between the socket and the stump of five transfemoral amputees is presented. In this study the socket donning procedure is modeled using an explicit finite element method based on the patient-specific geometry obtained from CT and laser scan data. Over the stumps, the mean maximum pressure is 4 kPa (SD 1.7) and the mean maximum shear stresses are 1.4 kPa (SD 0.6) and 0.6 kPa (SD 0.3) in the longitudinal and circumferential directions, respectively. The locations of the maximum values correspond to the pressure zones of the sockets. The stress-strain states obtained in this study can be considered more reliable than others, since they include the normal and tangential stresses associated with the socket donning procedure.

  15. Statistical procedures for the design and analysis of in vitro mutagenesis assays

    Energy Technology Data Exchange (ETDEWEB)

    Kaldor, J.

    1983-03-01

    In previous statistical treatments of a certain class of mutagenesis assays, stochastic models of mutation and cell growth have not been utilized. In this paper, we review the assumptions under which these models are derived, introduce some further assumptions, and propose ways to estimate and test hypotheses regarding the parameters of the models from assay data. It is shown via simulation and exact calculation that if the models are valid, the proposed statistical procedures provide very accurate Type I error rates for hypothesis tests, and coverage probabilities for confidence intervals. The cases of a linear dose response relationship for mutagenesis, and a comparison of a set of treated cell cultures with a set of control cultures are treated in detail. Approximate power functions for hypothesis tests of interest are then derived, and these are also shown to be satisfactorily close to the true power functions. The approximations are used to develop guidelines for planning aspects of a mutagenesis assay, including the number, spacing and range of dose levels employed. Examples of applications of the procedures are provided, and the paper concludes with a discussion of future statistical work which may be carried out in the area of mutagenesis assays. 38 references, 8 figures, 7 tables.

  16. The development of human factors technologies -The development of human behaviour analysis techniques-

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Jung Woon; Lee, Yong Heui; Park, Keun Ok; Chun, Se Woo; Suh, Sang Moon; Park, Jae Chang [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    1995-07-01

    In order to contribute to human error reduction through studies on human-machine interaction in nuclear power plants, this project has the objectives of developing SACOM (Simulation Analyzer with a Cognitive Operator Model) and techniques for human error analysis and application. This year, we studied the following. For SACOM development: (1) site investigation of operator tasks; (2) development of the operator task micro structure and revision of the micro structure; (3) development of knowledge representation software and a SACOM prototype; (4) development of performance assessment methodologies in task simulation and analysis of the effects of performance shaping factors. For human error analysis and application techniques: (1) classification of error shaping factors (ESFs) and development of software for ESF evaluation; (2) analysis of human error occurrences and revision of the analysis procedure; (3) an experiment for human error data collection using a compact nuclear simulator; (4) development of a prototype database system of the analyzed information on trip cases. 55 figs, 23 tabs, 33 refs. (Author).

  17. Analysis of the determinative factors for energy efficiency

    Directory of Open Access Journals (Sweden)

    Antonio Vanderley Herrero Sola

    2007-10-01

    Full Text Available The aim of this paper is to study energy efficiency in the productive sector. The main objective is to analyze the determinative factors of energy efficiency, identifying how external forces influence those factors as well as energy efficiency itself, in order to support future scenario planning in energy management. The result of this analysis, based on scientific works, case studies in universities, research in companies, studies by the Brazilian Federal Government and studies by specialists, shows that energy efficiency depends on: effective governmental actions for technological development; technological development itself; initiatives by universities to transfer technology to companies; and the relationship between individuals and small companies in creating a corporate structure.

  18. Performance analysis of parallel supernodal sparse LU factorization

    Energy Technology Data Exchange (ETDEWEB)

    Grigori, Laura; Li, Xiaoye S.

    2004-02-05

    We investigate performance characteristics for the LU factorization of large matrices with various sparsity patterns. We consider supernodal right-looking parallel factorization on a bi-dimensional grid of processors, making use of static pivoting. We develop a performance model and we validate it using the implementation in SuperLU-DIST, the real matrices and the IBM Power3 machine at NERSC. We use this model to obtain performance bounds on parallel computers, to perform scalability analysis and to identify performance bottlenecks. We also discuss the role of load balance and data distribution in this approach.

  19. Arabidopsis transcription factors: genome-wide comparative analysis among eukaryotes.

    Science.gov (United States)

    Riechmann, J L; Heard, J; Martin, G; Reuber, L; Jiang, C; Keddie, J; Adam, L; Pineda, O; Ratcliffe, O J; Samaha, R R; Creelman, R; Pilgrim, M; Broun, P; Zhang, J Z; Ghandehari, D; Sherman, B K; Yu, G

    2000-12-15

    The completion of the Arabidopsis thaliana genome sequence allows a comparative analysis of transcriptional regulators across the three eukaryotic kingdoms. Arabidopsis dedicates over 5% of its genome to code for more than 1500 transcription factors, about 45% of which are from families specific to plants. Arabidopsis transcription factors that belong to families common to all eukaryotes do not share significant similarity with those of the other kingdoms beyond the conserved DNA binding domains, many of which have been arranged in combinations specific to each lineage. The genome-wide comparison reveals the evolutionary generation of diversity in the regulation of transcription.

  20. Frequency Scale Factors for Some Double-Hybrid Density Functional Theory Procedures: Accurate Thermochemical Components for High-Level Composite Protocols.

    Science.gov (United States)

    Chan, Bun; Radom, Leo

    2016-08-09

    In the present study, we have obtained geometries and frequency scale factors for a number of double-hybrid density functional theory (DH-DFT) procedures. We have evaluated their performance for obtaining thermochemical quantities [zero-point vibrational energies (ZPVE) and thermal corrections for 298 K enthalpies (ΔH298) and 298 K entropies (S298)] to be used within high-level composite protocols (using the W2X procedure as a probe). We find that, in comparison with the previously prescribed protocol for optimization and frequency calculations (B3-LYP/cc-pVTZ+d), the use of contemporary DH-DFT methods such as DuT-D3 and DSD-type procedures leads to a slight overall improved performance compared with B3-LYP. A major strength of this approach, however, lies in the better robustness of the DH-DFT methods in that the largest deviations are notably smaller than those for B3-LYP. In general, the specific choices of the DH-DFT procedure and the associated basis set do not drastically change the performance. Nonetheless, we find that the DSD-PBE-P86/aug'-cc-pVTZ+d combination has a very slight edge over the others that we have examined, and we recommend its general use for geometry optimization and vibrational frequency calculations, in particular within high-level composite methods such as the higher-level members of the WnX series of protocols. The scale factors determined for DSD-PBE-P86/aug'-cc-pVTZ+d are 0.9830 (ZPVE), 0.9876 (ΔH298), and 0.9923 (S298).
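    Applying such scale factors is mechanical: multiply the raw harmonic frequencies, or the thermochemical component derived from them, by the recommended factor. The sketch below computes a scaled ZPVE from harmonic frequencies in cm⁻¹ using the 0.9830 factor quoted above; the frequencies themselves are made-up values resembling a small molecule.

```python
# Scaled zero-point vibrational energy from harmonic frequencies (a minimal sketch).
H = 6.62607015e-34        # Planck constant, J s
C = 2.99792458e10         # speed of light, cm/s
NA = 6.02214076e23        # Avogadro constant, 1/mol

ZPVE_SCALE = 0.9830       # DSD-PBE-P86/aug'-cc-pVTZ+d ZPVE factor (from the abstract)

def scaled_zpve_kj_mol(freqs_cm1):
    """ZPVE = 0.5 * h * c * N_A * sum(nu_i), scaled, converted to kJ/mol."""
    return 0.5 * H * C * NA * sum(freqs_cm1) * ZPVE_SCALE / 1000.0

# Made-up harmonic frequencies (cm^-1) resembling a water-like molecule.
print(f"{scaled_zpve_kj_mol([3657.0, 1595.0, 3756.0]):.1f} kJ/mol")
```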

  1. Statistical design-principal component analysis optimization of a multiple response procedure using cloud point extraction and simultaneous determination of metals by ICP OES.

    Science.gov (United States)

    Bezerra, Marcos A; Bruns, Roy E; Ferreira, Sergio L C

    2006-11-24

    A procedure has been developed for the simultaneous determination of trace amounts of Cd, Cr, Cu, Mn, Ni and Pb in saline oil-refinery effluents and digested vegetable samples using inductively coupled plasma optical emission spectrometry (ICP OES). The procedure is based on cloud point extraction (CPE) of these metals as 2-(bromo-2-pyridylazo)-5-diethyl-amino-phenol (Br-PADAP) complexes into a micellar phase of octylphenoxypolyethoxyethanol (Triton X-114). Optimization of the procedure was performed by response surface methodology (RSM) using a Doehlert design. Principal components (PC) were used to simplify the multiple response analysis. A response surface for the first PC score is useful in determining the optimum conditions for the Cd, Cr, Cu, Mn and Pb determinations, whereas the second PC is highly correlated with the Ni response. Improvement factors of 22, 36, 46, 25, 65 and 39, along with limits of detection (3σ_B) of 0.081, 0.79, 0.38, 0.83, 0.28 and 0.69 µg L⁻¹, and precision expressed as relative standard deviation (%R.S.D., n = 8, 20.0 µg L⁻¹) of 1.5, 2.2, 3.5, 2.6, 2.5 and 2.5 were achieved for Cd, Cr, Cu, Mn, Ni and Pb, respectively. The accuracy was evaluated by spike tests in oil-refinery effluent samples and analysis of a vegetable certified reference material (NIST 1571, orchard leaves). The results found were in agreement with the certified values.

  2. Seasonal performance of air conditioners - an analysis of the DOE test procedures: the thermostat and measurement errors. Report No. 2

    Energy Technology Data Exchange (ETDEWEB)

    Lamb, G.D.; Tree, D.R.

    1981-01-01

    Two aspects of the DOE test procedures are analyzed. First, the role of the thermostat in controlling the cycling of conditioning equipment is investigated. The test procedures call for a cycling scheme of 6 minutes on, 24 minutes off for Test D. To justify this cycling scheme as being representative of cycling in the field, it is assumed that the thermostat is the major factor in controlling the cycle rate. This assumption is examined by studying a closed-loop feedback model consisting of a thermostat, a heating/cooling plant and a conditioned space. Important parameters of this model are individually studied to determine their influence on the system. It is found that the switch differential and the anticipator gain are the major parameters in controlling the cycle rate. This confirms the thermostat's dominant role in the cycling of a system. The second aspect of the test procedures concerns transient errors or differences in the measurement of cyclic capacity. In particular, errors due to thermocouple response, thermocouple grid placement, dampers and nonuniform velocity and temperature distributions are considered. Problems in these four areas are mathematically modeled and the basic assumptions are stated. Results from these models help to clarify the problem areas and give an indication of the magnitude of the errors involved. It is found that major disagreement in measured capacity can arise in these four areas and can be mainly attributed to test set-up differences even though such differences are allowable in the test procedures. An understanding of such differences will aid in minimizing many problems in the measurement of cyclic capacity.

  3. An analysis of technical aspects of the arthroscopic Bankart procedure as performed in the United States.

    Science.gov (United States)

    Burks, Robert T; Presson, Angela P; Weng, Hsin-Yi

    2014-10-01

    The purpose of this study was to investigate intersurgeon variation in the technical aspects of performing an arthroscopic Bankart repair. A unique approach was used, drawing on experienced equipment representatives from 3 different arthroscopy companies. Experienced representatives were identified by DePuy Mitek, Smith & Nephew, and Arthrex and filled out questionnaires on how their surgeons performed arthroscopic Bankart procedures. This was done in a blinded fashion, with the identities of the specific surgeons and representatives unknown to us. A video on different aspects of the procedure was watched by each representative before filling out the questionnaire, to help standardize responses. Data were collected using REDCap (Research Electronic Data Capture). Observations were classified as infrequent when 0% to 30% of representatives reported them, sometimes when 31% to 70% reported them, and often when greater than 70% reported them. Seventy-six percent of representatives had 6 or more years of arthroscopic experience. Forty-three percent of representatives reported that their surgeons often use 3 portals for the procedure. Forty-four percent reported that viewing was performed exclusively from the posterior portal while the surgeon performed the repair. Seventy-three percent reported that the Hill-Sachs lesion was often observed, and 61% reported that the posterior labrum was often evaluated before the repair. Only 25% of representatives reported that the Bankart lesion was often extensively released and mobilized. Thirty-three percent reported 3 anchors as often being used. Seventy-five percent reported biocomposite anchors as often being used. Single-loaded anchors were reported as often being used by 47%. Eighty-one percent reported that sutures were placed in a simple fashion. Eighty-three percent reported the use of any posterior sutures or anchors for additional plication as infrequent. There is significant

  4. Factor analysis and predictive validity of microcomputer-based tests

    Science.gov (United States)

    Kennedy, R. S.; Baltzley, D. R.; Turnage, J. J.; Jones, M. B.

    1989-01-01

    Eleven tests were selected from two microcomputer-based performance test batteries because these tests had previously exhibited rapid stability (less than 10 min. of practice) and high retest reliability efficiencies (r greater than 0.707 for each 3 min. of testing). The battery was administered three times to each of 108 college students (48 men and 60 women) and a factor analysis was performed. Two of the three identified factors appear to be related to information processing ("encoding" and "throughput/decoding"), and the third was named an "output/speed" factor. The spatial, memory, and verbal tests loaded on the "encoding" factor and included Grammatical Reasoning, Pattern Comparison, Continuous Recall, and Matrix Rotation. The "throughput/decoding" tests included perceptual/numerical tests like Math Processing, Code Substitution, and Pattern Comparison. The "output/speed" factor was identified by Tapping and Reaction Time tests. The Wonderlic Personnel Test was group administered before the first and after the last administration of the performance tests. The multiple Rs in the total sample between the combined Wonderlic as a criterion and less than 5 min. of microcomputer testing on Grammatical Reasoning and Math Processing as predictors ranged between 0.41 and 0.52 over the three test administrations. Based on these results, the authors recommend a core battery which, if time permits, would consist of two tests from each factor. Such a battery is now known to permit stable, reliable, and efficient assessment.
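
    A sketch of a comparable three-factor extraction in Python, using scikit-learn's FactorAnalysis with varimax rotation; the score matrix below is a random placeholder, not the study data:

        # Sketch: extract three factors from a subjects-by-tests score matrix, in the
        # spirit of the battery analysis above. Data here are random placeholders.
        import numpy as np
        from sklearn.decomposition import FactorAnalysis

        rng = np.random.default_rng(0)
        scores = rng.normal(size=(108, 11))      # 108 students x 11 tests (placeholder)

        fa = FactorAnalysis(n_components=3, rotation="varimax")
        fa.fit(scores)
        loadings = fa.components_.T              # tests x factors
        for i, row in enumerate(loadings):
            print(f"test {i + 1:2d} loadings: " + "  ".join(f"{v:+.2f}" for v in row))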

  5. Human Factors in Financial Trading: An Analysis of Trading Incidents.

    Science.gov (United States)

    Leaver, Meghan; Reader, Tom W

    2016-09-01

    This study tests the reliability of a system (FINANS) to collect and analyze incident reports in the financial trading domain and is guided by a human factors taxonomy used to describe error in the trading domain. Research indicates the utility of applying human factors theory to understand error in finance, yet empirical research is lacking. We report on the development of the first system for capturing and analyzing human factors-related issues in operational trading incidents. In the first study, 20 incidents are analyzed by an expert user group against a referent standard to establish the reliability of FINANS. In the second study, 750 incidents are analyzed using distribution, mean, pathway, and associative analysis to describe the data. Kappa scores indicate that categories within FINANS can be reliably used to identify and extract data on human factors-related problems underlying trading incidents. Approximately 1% of trades (n = 750) led to an incident. Slip/lapse (61%), situation awareness (51%), and teamwork (40%) were found to be the most common problems underlying incidents. For the most serious incidents, problems in situation awareness and teamwork were most common. We show that (a) experts in the trading domain can reliably and accurately code human factors in incidents, (b) 1% of trades incur error, and (c) poor teamwork skills and situation awareness underpin the most critical incidents. This research provides data crucial for ameliorating risk within financial trading organizations, with implications for regulation and policy. © 2016, Human Factors and Ergonomics Society.
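
    Reliability of incident coding, as in the first study here, is typically quantified with Cohen's kappa; a minimal sketch with made-up category labels (not the FINANS taxonomy itself):

        # Sketch: inter-rater agreement (Cohen's kappa) for incident coding, as used to
        # establish the reliability of a taxonomy like FINANS. Labels below are made up.
        from sklearn.metrics import cohen_kappa_score

        rater_a = ["slip", "teamwork", "sa", "slip", "sa", "teamwork", "slip", "sa"]
        rater_b = ["slip", "teamwork", "sa", "slip", "teamwork", "teamwork", "slip", "sa"]
        print(f"kappa = {cohen_kappa_score(rater_a, rater_b):.2f}")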

  6. Analysis of Factors Affecting Testing in Object oriented systems

    Directory of Open Access Journals (Sweden)

    Mrs. Sujata Khatri,

    2011-03-01

    Full Text Available Software testing is an important software quality assurance activity for ensuring that the benefits of object-oriented programming are realized. Testing object-oriented systems is challenging because complexity has shifted from the functions and procedures of traditional procedural systems to the interconnections among components. Object-oriented development has presented numerous new challenges due to features such as encapsulation, inheritance, polymorphism and dynamic binding. Earlier the faults used to be in the software units, whereas the problem now lies primarily in the way in which we connect the software. Development is writing the code; testing is finding out whether or not the code runs the way you expect it to. A major challenge for software developers remains how to reduce cost while improving the quality of software testing. Software testing is difficult and expensive, and testing object-oriented systems is even more difficult. A tester often needs to spend significant time developing lengthy testing code to ensure that the system under test is reasonably well tested. Substantial research has been carried out in object-oriented analysis and design; however, relatively little attention has been paid to testing of object-oriented programs. This paper describes the various features of object-oriented programming and how they affect the testing of object-oriented systems.

  7. Systems Analysis in Designing Toilet Training Procedures for Developmentally Disabled Persons.

    Science.gov (United States)

    Brooking, Emerson D.; Anderson, Dana M.

    The use of systems analysis may help child developmental specialists improve the success rates of toilet training programs with developmentally disabled children. Such a systems analysis includes the sociocultural, family, and/or individual ecosystems of the individual. Two detailed case studies of mentally retarded elementary school age children…

  8. A SOLUTION PROCEDURE FOR LOWER BOUND LIMIT AND SHAKEDOWN ANALYSIS BY SGBEM

    Institute of Scientific and Technical Information of China (English)

    Zhang Xiaofeng; Liu Yinghua; Cen Zhangzhi

    2001-01-01

    The symmetric Galerkin boundary element method (SGBEM) instead of the finite element method is used to perform lower bound limit and shakedown analysis of structures. The self-equilibrium stress fields are constructed by a linear combination of several basic self-equilibrium stress fields with parameters to be determined. These basic self-equilibrium stress fields are expressed as elastic responses of the body to imposed permanent strains and obtained through elastic-plastic incremental analysis. The complex method is used to solve the nonlinear programming problem and determine the maximal load amplifier. Limit analysis is treated as a special case of shakedown analysis in which only proportional loading is considered. The numerical results show that SGBEM is efficient and accurate for solving limit and shakedown analysis problems.
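
    The lower-bound construction described above admits a compact statement; a sketch of the standard Melan-type formulation in LaTeX, with notation assumed here rather than taken from the paper (\sigma^{E} the elastic stress response to the load domain, \bar{\sigma}_k the basic self-equilibrium stress fields, \rho_k their combination parameters, F the yield condition, \alpha the load amplifier):

        \max_{\alpha,\,\rho_1,\dots,\rho_m} \alpha
        \quad \text{subject to} \quad
        F\!\Big(\alpha\,\sigma^{E}(x,t) + \sum_{k=1}^{m} \rho_k\,\bar{\sigma}_k(x)\Big) \le 0
        \quad \forall x,\ \forall t

    The complex method then searches the (\alpha, \rho_k) space for the maximal admissible \alpha; limit analysis is recovered as the special case of a single proportional load.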

  9. Factor analysis in optimization of formulation of high content uniformity tablets containing low dose active substance.

    Science.gov (United States)

    Lukášová, Ivana; Muselík, Jan; Franc, Aleš; Goněc, Roman; Mika, Filip; Vetchý, David

    2017-09-11

    Warfarin is an intensively discussed drug with a narrow therapeutic range. There have been cases of bleeding attributed to varying content or altered quality of the active substance. Factor analysis is useful for finding suitable technological parameters leading to high content uniformity of tablets containing a low amount of active substance. The composition of the tabletting blend and the technological procedure were set with respect to factor analysis of previously published results. The correctness of the set parameters was checked by manufacturing and evaluating tablets containing 1-10 mg of warfarin sodium. The robustness of the suggested technology was checked using a "worst case scenario" and statistical evaluation of European Pharmacopoeia (EP) content uniformity limits with respect to the Bergum division and the process capability index (Cpk). To evaluate the quality of the active substance and tablets, a dissolution method was developed (water; EP apparatus II; 25 rpm), allowing for statistical comparison of dissolution profiles. The obtained results prove the suitability of factor analysis to optimize the composition with respect to previously manufactured batches, and thus the use of meta-analysis under industrial conditions is feasible. Copyright © 2017 Elsevier B.V. All rights reserved.
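
    A sketch of the process capability index computation mentioned above; the assay values and the 85-115% specification limits are illustrative placeholders, not the study data:

        # Sketch: process capability index (Cpk) for tablet content uniformity.
        # Assay values and specification limits (85-115% of label claim) are illustrative.
        import statistics

        assays = [98.2, 101.5, 99.8, 100.4, 97.9, 102.1, 100.0, 99.1, 101.2, 98.7]
        lsl, usl = 85.0, 115.0
        mean, sd = statistics.mean(assays), statistics.stdev(assays)
        cpk = min(usl - mean, mean - lsl) / (3 * sd)
        print(f"mean = {mean:.1f}%, sd = {sd:.2f}%, Cpk = {cpk:.2f}")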

  10. Impact factors of fractal analysis of porous structure

    Institute of Scientific and Technical Information of China (English)

    2010-01-01

    Characterization of pore structure is one of the key problems in fabrication and application research on porous materials. However, the complexity of pore structure makes it difficult to characterize by Euclidean geometry and traditional experimental methods. Fractal theory has proved effective for characterizing complex pore structures. In this paper, the box dimension method based on fractal theory was applied to characterizing the pore structure of fibrous porous materials by analyzing scanning electron microscope (SEM) images of the materials. The influences of image resolution, threshold value, and image magnification on the fractal analysis were investigated. The results indicate that these factors greatly affect both the fractal analysis process and its results, so appropriate choices of magnification, threshold, and resolution are necessary for reliable fractal analysis.
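
    A minimal sketch of the box dimension (box-counting) method on a binary image, assuming the SEM micrograph has already been thresholded; the input here is a random placeholder:

        # Sketch: box-counting fractal dimension of a binary (thresholded) image,
        # as applied above to SEM images of porous materials. Input is a placeholder.
        import numpy as np

        def box_count_dimension(img: np.ndarray) -> float:
            """Estimate fractal dimension by regressing log N(s) on log(1/s)."""
            n = img.shape[0]
            sizes = [s for s in (2, 4, 8, 16, 32, 64) if s < n]
            counts = []
            for s in sizes:
                m = n - n % s
                blocks = img[:m, :m].reshape(m // s, s, m // s, s)
                counts.append(blocks.any(axis=(1, 3)).sum())  # boxes containing pores
            slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
            return slope

        rng = np.random.default_rng(1)
        binary = rng.random((256, 256)) < 0.3        # stand-in for a thresholded SEM image
        print(f"estimated box-counting dimension: {box_count_dimension(binary):.2f}")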

  11. Determining organisation-specific factors for developing health interventions in companies by a Delphi procedure: organisational mapping

    NARCIS (Netherlands)

    Scheppingen, A.R. van; Have, K.C.J.M. ten; Zwetsloot, G.I.J.M.; Kok, G.; Mechelen, W. van

    2015-01-01

    Companies, seen as social communities, are major health promotion contexts. However, health promotion in the work setting is often less successful than intended. An optimal adjustment to the organisational context is required. Knowledge of which organisation-specific factors are relevant to health promotion…

  12. Integrated Data Collection Analysis (IDCA) Program - Mixing Procedures and Materials Compatibility

    Energy Technology Data Exchange (ETDEWEB)

    Olinger, Becky D. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Sandstrom, Mary M. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Warner, Kirstin F. [Naval Surface Warfare Center (NSWC-IHD), Indian Head, MD (United States); Sorensen, Daniel N. [Naval Surface Warfare Center (NSWC-IHD), Indian Head, MD (United States); Remmers, Daniel L. [Naval Surface Warfare Center (NSWC-IHD), Indian Head, MD (United States); Moran, Jesse S. [Naval Surface Warfare Center (NSWC-IHD), Indian Head, MD (United States); Shelley, Timothy J. [Air Force Research Lab. (AFRL), Tyndall AFB, FL (United States); Whinnery, LeRoy L. [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Hsu, Peter C. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Whipple, Richard E. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Kashgarian, Michaele [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Reynolds, John G. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2011-01-14

    Three mixing procedures have been standardized for the IDCA proficiency test—solid-solid, solid-liquid, and liquid-liquid. Due to the variety of precursors used in formulating the materials for the test, these three mixing methods have been designed to address all combinations of materials. Hand mixing is recommended for quantities less than 10 grams and Jar Mill mixing is recommended for quantities over 10 grams. Consideration must also be given to the type of container used for the mixing due to the wide range of chemical reactivity of the precursors and mixtures. Eight web site sources from container and chemical manufacturers have been consulted. Compatible materials have been compiled as a resource for selecting containers made of materials stable to the mixtures. In addition, container materials used in practice by the participating laboratories are discussed. Consulting chemical compatibility tables is highly recommended for each operation by each individual engaged in testing the materials in this proficiency test.

  13. Exploratory factor analysis of the Dizziness Handicap Inventory (German version

    Directory of Open Access Journals (Sweden)

    de Bruin Eling D

    2010-03-01

    Full Text Available Abstract Background The Dizziness Handicap Inventory (DHI) is a validated, self-report questionnaire which is widely used as an outcome measure. Previous studies supported the multidimensionality of the DHI, but not the original subscale structure. The objectives of this survey were to explore the dimensions of the Dizziness Handicap Inventory - German version, and to investigate the associations of the retained factors with items assessing functional disability and the Hospital Anxiety and Depression Scale (HADS). Secondly, we aimed to explore the retained factors according to the International Classification of Functioning, Disability and Health (ICF). Methods Patients were recruited from a tertiary centre for vertigo, dizziness or balance disorders. They filled in two questionnaires: (1) the DHI assesses precipitating physical factors associated with dizziness/unsteadiness and the functional/emotional consequences of symptoms; (2) the HADS assesses non-somatic symptoms of anxiety and depression. In addition, patients answered the third question of the University of California Los Angeles Dizziness Questionnaire, which covers the impact of dizziness and unsteadiness on everyday activities. Principal component analysis (PCA) was performed to explore the dimensions of the DHI. Associations were estimated by Spearman correlation coefficients. Results One hundred ninety-four patients with dizziness or unsteadiness associated with a vestibular disorder, mean age (standard deviation) of 50.6 (13.6) years, participated. Based on eigenvalues greater than one and the scree plot, we analysed various factor solutions. The 3-factor solution appears reliable, is clinically relevant, and can partly be explained with the ICF. It explains 49.2% of the variance. Factor 1 comprises the effect of dizziness and unsteadiness on emotion and participation, factor 2 informs about specific activities or effort provoking dizziness and unsteadiness, and factor 3 focuses on self
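
    A sketch of the factor-retention step described in the Methods (eigenvalues of the item correlation matrix, Kaiser criterion, explained variance); the response matrix is a random placeholder, not DHI data:

        # Sketch: choosing the number of factors via eigenvalues of the item
        # correlation matrix (Kaiser criterion) before a rotated PCA.
        import numpy as np

        rng = np.random.default_rng(2)
        responses = rng.integers(0, 3, size=(194, 25)).astype(float)  # placeholder: 194 patients x 25 items

        corr = np.corrcoef(responses, rowvar=False)
        eigvals = np.sort(np.linalg.eigvalsh(corr))[::-1]
        n_factors = int((eigvals > 1.0).sum())
        explained = eigvals[:3].sum() / eigvals.sum()
        print(f"eigenvalues > 1: {n_factors}; first 3 components explain {explained:.1%}")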

  14. Physics faculty beliefs and values about the teaching and learning of problem solving. II. Procedures for measurement and analysis

    Directory of Open Access Journals (Sweden)

    Patricia Heller

    2007-12-01

    Full Text Available To identify and describe the basis upon which instructors make curricular and pedagogical decisions, we have developed an artifact-based interview and an analysis technique based on multilayered concept maps. The policy capturing technique used in the interview asks instructors to make judgments about concrete instructional artifacts similar to those they likely encounter in their teaching environment. The analysis procedure alternatively employs both an a priori systems view analysis and an emergent categorization to construct a multilayered concept map, which is a hierarchically arranged set of concept maps where child maps include more details than parent maps. Although our goal was to develop a model of physics faculty beliefs about the teaching and learning of problem solving in the context of an introductory calculus-based physics course, the techniques described here are applicable to a variety of situations in which instructors make decisions that influence teaching and learning.

  15. Physics faculty beliefs and values about the teaching and learning of problem solving. II. Procedures for measurement and analysis

    Science.gov (United States)

    Henderson, Charles; Yerushalmi, Edit; Kuo, Vince H.; Heller, Kenneth; Heller, Patricia

    2007-12-01

    To identify and describe the basis upon which instructors make curricular and pedagogical decisions, we have developed an artifact-based interview and an analysis technique based on multilayered concept maps. The policy capturing technique used in the interview asks instructors to make judgments about concrete instructional artifacts similar to those they likely encounter in their teaching environment. The analysis procedure alternatively employs both an a priori systems view analysis and an emergent categorization to construct a multilayered concept map, which is a hierarchically arranged set of concept maps where child maps include more details than parent maps. Although our goal was to develop a model of physics faculty beliefs about the teaching and learning of problem solving in the context of an introductory calculus-based physics course, the techniques described here are applicable to a variety of situations in which instructors make decisions that influence teaching and learning.

  16. An experimental analysis of some procedures to teach priming and reinforcement skills to preschool teachers.

    Science.gov (United States)

    Thomson, C L; Holmberg, M C; Baer, D M

    1978-01-01

    This Monograph reports the results of teaching preschool teachers to be successful at increasing desired behaviors of their children, thus becoming successful teachers. Five teacher-training techniques were examined experimentally under single-subject designs: written assignments, feedback from viewing graphs, on-the-spot feedback from a wireless radio (Bug-in-the-Ear), feedback from an observer, and self-counting. Those teaching procedures that included prompt and frequent information to the teacher about the behavior under study were the most effective techniques. Self-counting, in which the teacher tallied the number of times she emitted the behavior of either priming or reinforcing social or verbal behavior of a child (or children), and observer feedback, in which the observer reported to the teacher periodically during the hour the frequency of her behavior, were the most reliable teaching techniques. The other procedures, while less reliable than self-counting and observer feedback, were effective with some teachers. Maintenance of teacher behavior across settings was examined with a group of Head Start teachers, and maintenance of teacher behaviors across different child behaviors and different children was examined with three student teachers. The results indicated that teaching was more likely to maintain if it occurred in the teacher's home setting rather than at another site. In all cases, when generalization occurred across settings, time, or children, the frequency of the teacher's behavior was not as high as when the relevant behavior had been trained directly. Results supported the proposal that it is possible to define effective teacher behavior, not just characterize it, as it occurs in the classroom, and that effectiveness can be measured by defining and observing the child behaviors to which teacher behaviors are directed.

  17. Effective dose analysis of three-dimensional rotational angiography during catheter ablation procedures

    Science.gov (United States)

    Wielandts, J.-Y.; Smans, K.; Ector, J.; De Buck, S.; Heidbüchel, H.; Bosmans, H.

    2010-02-01

    There is increasing use of three-dimensional rotational angiography (3DRA) during cardiac ablation procedures. As compared with 2D angiography, a large series of images are acquired, creating the potential for high radiation doses. The aim of the present study was to quantify patient-specific effective doses. In this study, we developed a computer model to accurately calculate organ doses and the effective dose incurred during 3DRA image acquisition. The computer model simulates the exposure geometry and uses the actual exposure parameters, including the variation in tube voltage and current that is realized through the automatic exposure control (AEC). We performed 3DRA dose calculations in 42 patients referred for ablation on the Siemens Axiom Artis DynaCT system (Erlangen, Germany). Organ doses and effective dose were calculated separately for all projections in the course of the C-arm rotation. The influence of patient body mass index (BMI), dose-area product (DAP), collimation and dose per frame (DPF) rate setting on the calculated doses was also analysed. The effective dose was found to be 5.5 ± 1.4 mSv according to ICRP 60 and 6.6 ± 1.8 mSv according to ICRP 103. Effective dose showed an inversely proportional relationship to BMI, while DAP was nearly BMI independent. No simple conversion coefficient between DAP and effective dose could be derived. DPF reduction did not result in a proportional effective dose decrease. These paradoxical findings were explained by the settings of the AEC and the limitations of the x-ray tube. Collimation reduced the effective dose by more than 20%. Three-dimensional rotational angiography is associated with a definite but acceptable radiation dose that can be calculated for all patients separately. Their BMI is a predictor of the effective dose. The dose reduction achieved with collimation suggests that its use is imperative during the 3DRA procedure.

  18. Effective dose analysis of three-dimensional rotational angiography during catheter ablation procedures

    Energy Technology Data Exchange (ETDEWEB)

    Wielandts, J-Y; Ector, J; De Buck, S; Heidbuechel, H [Department of Electrophysiology-Cardiology, University Hospital Gasthuisberg, 49, Herestraat, 3000-Leuven (Belgium); Smans, K [Belgian Nuclear Research Centre (SCK-CEN), Radiation Protection, Dosimetry and Calibration, Boeretang, 2400-Mol (Belgium); Bosmans, H [Department of Radiology, University Hospital Gasthuisberg, 49, Herestraat, 3000-Leuven (Belgium)], E-mail: jean-yves.wielandts@uz.kuleuven.ac.be

    2010-02-07

    There is increasing use of three-dimensional rotational angiography (3DRA) during cardiac ablation procedures. As compared with 2D angiography, a large series of images are acquired, creating the potential for high radiation doses. The aim of the present study was to quantify patient-specific effective doses. In this study, we developed a computer model to accurately calculate organ doses and the effective dose incurred during 3DRA image acquisition. The computer model simulates the exposure geometry and uses the actual exposure parameters, including the variation in tube voltage and current that is realized through the automatic exposure control (AEC). We performed 3DRA dose calculations in 42 patients referred for ablation on the Siemens Axiom Artis DynaCT system (Erlangen, Germany). Organ doses and effective dose were calculated separately for all projections in the course of the C-arm rotation. The influence of patient body mass index (BMI), dose-area product (DAP), collimation and dose per frame (DPF) rate setting on the calculated doses was also analysed. The effective dose was found to be 5.5 ± 1.4 mSv according to ICRP 60 and 6.6 ± 1.8 mSv according to ICRP 103. Effective dose showed an inversely proportional relationship to BMI, while DAP was nearly BMI independent. No simple conversion coefficient between DAP and effective dose could be derived. DPF reduction did not result in a proportional effective dose decrease. These paradoxical findings were explained by the settings of the AEC and the limitations of the x-ray tube. Collimation reduced the effective dose by more than 20%. Three-dimensional rotational angiography is associated with a definite but acceptable radiation dose that can be calculated for all patients separately. Their BMI is a predictor of the effective dose. The dose reduction achieved with collimation suggests that its use is imperative during the 3DRA procedure.
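
    The organ-dose weighting underlying effective dose figures such as these is the ICRP tissue-weighted sum E = sum over tissues T of w_T * H_T; a sketch with placeholder organ doses (the weights listed are the ICRP 103 values for those tissues, but the doses are made up):

        # Sketch: effective dose as the ICRP tissue-weighted sum of organ doses,
        # E = sum_T w_T * H_T. Organ doses (mSv) below are placeholders.
        icrp103_w = {
            "lung": 0.12, "stomach": 0.12, "colon": 0.12, "red_bone_marrow": 0.12,
            "breast": 0.12, "remainder": 0.12, "gonads": 0.08,
            "bladder": 0.04, "oesophagus": 0.04, "liver": 0.04, "thyroid": 0.04,
            "bone_surface": 0.01, "brain": 0.01, "salivary_glands": 0.01, "skin": 0.01,
        }
        organ_dose_msv = {t: 6.0 for t in icrp103_w}     # placeholder uniform organ doses
        effective = sum(icrp103_w[t] * organ_dose_msv[t] for t in icrp103_w)
        print(f"effective dose = {effective:.1f} mSv")   # weights sum to 1.0, so 6.0 here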

  19. Tensor-Dictionary Learning with Deep Kruskal-Factor Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Stevens, Andrew J.; Pu, Yunchen; Sun, Yannan; Spell, Gregory; Carin, Lawrence

    2017-04-20

    We introduce new dictionary learning methods for tensor-variate data of any order. We represent each data item as a sum of Kruskal decomposed dictionary atoms within the framework of beta-process factor analysis (BPFA). Our model is nonparametric and can infer the tensor-rank of each dictionary atom. This Kruskal-Factor Analysis (KFA) is a natural generalization of BPFA. We also extend KFA to a deep convolutional setting and develop online learning methods. We test our approach on image processing and classification tasks achieving state of the art results for 2D & 3D inpainting and Caltech 101. The experiments also show that atom-rank impacts both overcompleteness and sparsity.
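
    For concreteness, a Kruskal (CP) structured atom of the kind KFA places a prior over can be built as a sum of rank-one outer products; a sketch with illustrative shapes and rank, not the authors' BPFA model itself:

        # Sketch: a Kruskal (CP) structured dictionary atom, i.e. a 3rd-order tensor
        # built as a sum of R rank-one outer products. Shapes and rank are illustrative.
        import numpy as np

        def kruskal_atom(A, B, C):
            """A: (I,R), B: (J,R), C: (K,R) -> tensor of shape (I,J,K), rank <= R."""
            return np.einsum("ir,jr,kr->ijk", A, B, C)

        rng = np.random.default_rng(3)
        I, J, K, R = 8, 8, 4, 3
        atom = kruskal_atom(rng.normal(size=(I, R)), rng.normal(size=(J, R)),
                            rng.normal(size=(K, R)))
        print(atom.shape)                        # (8, 8, 4)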

  20. Risk factors for progressive ischemic stroke A retrospective analysis

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    BACKGROUND: Progressive ischemic stroke has higher fatality and disability rates than common cerebral infarction, so it is very important to investigate the early predictive factors related to the occurrence of progressive ischemic stroke, the potential pathological mechanism, and the risk factors amenable to early intervention, in order to prevent progressive ischemic stroke and ameliorate its outcome. OBJECTIVE: To analyze the possible risk factors in patients with progressive ischemic stroke, so as to provide a reference for the prevention and treatment of progressive ischemic stroke. DESIGN: A retrospective analysis. SETTING: Department of Neurology, General Hospital of Beijing Coal Mining Group. PARTICIPANTS: A total of 280 patients with progressive ischemic stroke were selected from the Department of Neurology, General Hospital of Beijing Coal Mining Group from March 2002 to June 2006, including 192 males and 88 females, with a mean age of (62±7) years. All met the diagnostic standards for cerebral infarction set by the Fourth National Academic Meeting for Cerebrovascular Disease in 1995 and were confirmed by CT or MRI, were admitted within 24 hours after onset, and showed gradual or stepwise aggravation of the neurological deficit within 72 hours after onset, with aggravation defined as a decrease of more than 2 points in the neurological deficit score. Meanwhile, 200 inpatients with non-progressive ischemic stroke (135 males and 65 females) were selected as the control group. METHODS: After admission, a univariate analysis of variance was conducted for blood pressure, history of diabetes mellitus, fever, leukocytosis, levels of blood lipids, fibrinogen, blood glucose and plasma homocysteine, cerebral arterial stenosis, and CT signs of early infarction, and the significant factors were entered into a multivariate non-conditional logistic regression analysis. MAIN OUTCOME MEASURES
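
    A sketch of the multivariate logistic regression step described in the Methods, using statsmodels; the data frame below is a random placeholder, not the study data:

        # Sketch: multivariate logistic regression of progression (1 = progressive
        # stroke) on candidate risk factors, mirroring the analysis described above.
        import numpy as np
        import pandas as pd
        import statsmodels.api as sm

        rng = np.random.default_rng(4)
        n = 480
        df = pd.DataFrame({
            "progressive": rng.integers(0, 2, n),
            "glucose": rng.normal(7.0, 2.0, n),       # mmol/L, placeholder
            "fibrinogen": rng.normal(3.5, 0.8, n),    # g/L, placeholder
            "fever": rng.integers(0, 2, n),
            "stenosis": rng.integers(0, 2, n),
        })
        X = sm.add_constant(df[["glucose", "fibrinogen", "fever", "stenosis"]])
        fit = sm.Logit(df["progressive"], X).fit(disp=0)
        print(np.exp(fit.params))                     # odds ratios per factor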