WorldWideScience

Sample records for modeling procedure results

  1. Model of Procedure Usage – Results from a Qualitative Study to Inform Design of Computer-Based Procedures

    Energy Technology Data Exchange (ETDEWEB)

    Johanna H Oxstrand; Katya L Le Blanc

    2012-07-01

The nuclear industry is constantly trying to find ways to decrease the human error rate, especially the human errors associated with procedure use. As a step toward the goal of improving procedure use performance, researchers, together with the nuclear industry, have been looking at replacing the current paper-based procedures with computer-based procedure systems. The concept of computer-based procedures is not new by any means; however, most research has focused on procedures used in the main control room. The procedures reviewed in these efforts are mainly emergency operating procedures and normal operating procedures. Based on lessons learned from these previous efforts, we are now exploring a less studied application for computer-based procedures: field procedures, i.e. procedures used by nuclear equipment operators and maintenance technicians. The Idaho National Laboratory, the Institute for Energy Technology, and participants from the U.S. commercial nuclear industry are collaborating in an applied research effort with the objective of developing requirements and specifications for a computer-based procedure system to be used by field operators. The goal is to identify the types of human errors that can be mitigated by using computer-based procedures and how best to design the computer-based procedures to do this. The underlying philosophy of the research effort is “Stop – Start – Continue”, i.e. what features from the use of paper-based procedures should we not incorporate (Stop), what should we keep (Continue), and what new features or work processes should be added (Start). One step in identifying the Stop – Start – Continue elements was to conduct a baseline study in which affordances related to the current usage of paper-based procedures were identified. The purpose of the study was to develop a model of paper-based procedure use which will help to identify desirable features for computer-based procedure prototypes. Affordances such as note taking, markups

  2. A new procedure to build a model covariance matrix: first results

    Science.gov (United States)

    Barzaghi, R.; Marotta, A. M.; Splendore, R.; Borghi, A.

    2012-04-01

    In order to validate the results of geophysical models, a common procedure is to compare model predictions with observations by means of statistical tests. A limitation of this approach is the lack of a covariance matrix associated with the model results, which may frustrate the achievement of confident statistical significance. To overcome this limit, we have implemented a new procedure to build a model covariance matrix that allows a more reliable statistical analysis. This procedure has been developed in the framework of the thermo-mechanical model described in Splendore et al. (2010), which predicts the present-day crustal velocity field in the Tyrrhenian due to Africa-Eurasia convergence and to lateral rheological heterogeneities of the lithosphere. The modelled tectonic velocity field has been compared with the available surface velocity field based on GPS observations, determining the best-fit model and the degree of fitting through the use of a χ2 test. Once we had identified the key model parameters and defined their appropriate ranges of variability, we ran 100 different models for 100 sets of parameter values randomly drawn from the corresponding intervals, obtaining a stack of 100 velocity fields. We then calculated the variance and empirical covariance for the stack of results, also taking cross-correlations into account, obtaining a positive-definite matrix that represents the covariance matrix of the model. This empirical approach allows us to define a more robust statistical analysis than the classical approach. Reference: Splendore, Marotta, Barzaghi, Borghi and Cannizzaro, 2010. Block model versus thermomechanical model: new insights on the present-day regional deformation in the surroundings of the Calabrian Arc. In: Spalla, Marotta and Gosso (Eds) Advances in Interpretation of Geological Processes: Refinement of Multi-scale Data and Integration in Numerical Modelling. Geological Society, London, Special
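The stacking procedure described above (run the model for many randomly drawn parameter sets, then take the empirical covariance of the stacked outputs) can be sketched in a few lines. The model, parameter ranges, and field size below are invented placeholders, not the thermo-mechanical model of Splendore et al. (2010):

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical stand-in for the geophysical model: maps a parameter
# vector to a modelled "velocity field" sampled at 5 observation points.
def run_model(params):
    base = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
    return base * params[0] + params[1]

# Draw 100 parameter sets uniformly within assumed variability ranges.
n_runs = 100
params = np.column_stack([
    rng.uniform(0.8, 1.2, n_runs),   # e.g. a rheological scaling factor
    rng.uniform(-0.1, 0.1, n_runs),  # e.g. a boundary-velocity offset
])

# Stack the 100 resulting velocity fields: shape (100, 5).
stack = np.array([run_model(p) for p in params])

# Empirical covariance of the stacked results, cross-correlations included.
cov = np.cov(stack, rowvar=False)    # shape (5, 5)
```

The resulting matrix is symmetric and positive semi-definite by construction, and could then enter a χ2 comparison against the observed field.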

  3. Comparing uncertainty resulting from two-step and global regression procedures applied to microbial growth models.

    Science.gov (United States)

    Martino, K G; Marks, B P

    2007-12-01

    Two different microbial modeling procedures were compared and validated against independent data for Listeria monocytogenes growth. The most commonly used method is two consecutive regressions: growth parameters are estimated from a primary regression of microbial counts, and a secondary regression relates the growth parameters to experimental conditions. A global regression is an alternative method in which the primary and secondary models are combined, giving a direct relationship between experimental factors and microbial counts. The Gompertz equation was the primary model, and a response surface model was the secondary model. Independent data from meat and poultry products were used to validate the modeling procedures. The global regression yielded the lower standard errors of calibration: 0.95 log CFU/ml for aerobic and 1.21 log CFU/ml for anaerobic conditions. The two-step procedure yielded errors of 1.35 log CFU/ml for aerobic and 1.62 log CFU/ml for anaerobic conditions. For food products, the global regression was more robust than the two-step procedure in 65% of the cases studied. The robustness index for the global regression ranged from 0.27 (performed better than expected) to 2.60. For the two-step method, the robustness index ranged from 0.42 to 3.88. The predictions were overestimated (fail safe) in more than 50% of the cases using the global regression and in more than 70% of the cases using the two-step regression. Overall, the global regression performed better than the two-step procedure for this specific application.
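As a concrete illustration of the first stage of the two-step procedure, the sketch below fits a Gompertz primary model to simulated log counts by least squares. The parameterisation, time grid, and noise level are invented for the example, not the study's data; a coarse grid search over the nonlinear parameters keeps the sketch dependency-free:

```python
import numpy as np

# One common modified-Gompertz parameterisation (assumed, not necessarily
# the exact form used in the study): log N(t) = n0 + c*exp(-exp(b*(m - t)))
def gompertz(t, n0, c, b, m):
    return n0 + c * np.exp(-np.exp(b * (m - t)))

# Simulated "observed" log counts under known parameters plus noise.
rng = np.random.default_rng(0)
t = np.linspace(0, 48, 25)                       # hours
obs = gompertz(t, n0=2.0, c=6.0, b=0.25, m=12.0) + rng.normal(0, 0.1, t.size)

# Step 1 of the two-step procedure: estimate growth parameters by least
# squares. With b and m fixed, n0 and c enter linearly, so we grid-search
# (b, m) and solve the linear part exactly.
best, best_sse = None, np.inf
for b in np.linspace(0.1, 0.4, 16):
    for m in np.linspace(8.0, 16.0, 17):
        x = np.exp(-np.exp(b * (m - t)))
        A = np.column_stack([np.ones_like(t), x])
        coef, *_ = np.linalg.lstsq(A, obs, rcond=None)
        sse = np.sum((A @ coef - obs) ** 2)
        if sse < best_sse:
            best_sse, best = sse, (coef[0], coef[1], b, m)

n0_hat, c_hat, b_hat, m_hat = best
```

In the two-step approach these fitted parameters would then feed a secondary response-surface regression on the experimental conditions; in the global approach both layers are fitted at once.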

  5. TESTING THE ASSUMPTIONS AND INTERPRETING THE RESULTS OF THE RASCH MODEL USING LOG-LINEAR PROCEDURES IN SPSS

    NARCIS (Netherlands)

    TENVERGERT, E; GILLESPIE, M; KINGMA, J

    1993-01-01

    This paper shows how to use the log-linear subroutine of SPSS to fit the Rasch model. It also shows how to fit less restrictive models obtained by relaxing specific assumptions of the Rasch model. Conditional maximum likelihood estimation was achieved by including dummy variables for the total score

  6. Results of the Sauve-Kapandji procedure.

    Science.gov (United States)

    Low, C K; Chew, W Y C

    2002-03-01

    The Sauve-Kapandji procedure is used to treat distal radioulnar joint disorders. Sixteen patients with distal radioulnar joint (DRUJ) disease treated with the Sauve-Kapandji procedure between 1996 and 1998 were available for review at an average follow-up period of 32.8 months, ranging from 24 to 48 months. The patients were young; the average age at the time of the procedure was 33.6 years. There were eight cases of post-traumatic DRUJ arthritis, two cases of dislocation of the DRUJ with malunion of radial fractures, and six cases of rheumatoid patients with destruction of the DRUJ. The distal end of the ulnar shaft was stabilised with a sling created using the radial half-slip of the extensor carpi ulnaris (ECU) tendon. Functional results were evaluated with the Mayo wrist score. Fusion of the DRUJ was achieved in all cases by two months. Excellent results were achieved in eight cases, good in six, fair in one and poor in one. All except one case gained an increased range of forearm rotation. Complications included one case of closure of the pseudarthrosis, which required excision of the ulnar head to restore forearm rotation. The Sauve-Kapandji procedure is recommended in young patients with distal radioulnar joint disorders.

  7. Multifactor Screener in OPEN: Scoring Procedures & Results

    Science.gov (United States)

    Scoring procedures were developed to convert a respondent's screener responses to estimates of individual dietary intake for percentage energy from fat, grams of fiber, and servings of fruits and vegetables.

  8. 49 CFR 219.605 - Positive drug test results; procedures.

    Science.gov (United States)

    2010-10-01

    ... ADMINISTRATION, DEPARTMENT OF TRANSPORTATION; CONTROL OF ALCOHOL AND DRUG USE; Random Alcohol and Drug Testing Programs. § 219.605 Positive drug test results; procedures. (a) [Reserved] (b) Procedures for administrative ...

  10. Random effect modelling of patient-related risk factors in orthopaedic procedures: results from the Dutch nosocomial infection surveillance network 'PREZIES'.

    NARCIS (Netherlands)

    Muilwijk, J.; Walenkamp, G.H.; Voss, A.; Wille, J.C.; Hof, S. van den

    2006-01-01

    In the Dutch surveillance for surgical site infections (SSIs), data from 70277 orthopaedic procedures with 1895 SSIs were collected between 1996 and 2003. The aims of this study were: (1) to analyse the trends in SSIs associated with Gram-positive and Gram-negative bacteria; (2) to estimate patient-

  11. Procedural Personas for Player Decision Modeling and Procedural Content Generation

    DEFF Research Database (Denmark)

    Holmgård, Christoffer

    2016-01-01

    This thesis explores methods for creating low-complexity, easily interpretable, generative AI agents for use in game and simulation design. Based on insights from decision theory and behavioral economics, the thesis investigates how player decision making styles may be defined, operationalised, and measured in specific games. It further explores how simple utility functions, easily defined and changed by game designers, can be used to construct agents expressing a variety of decision making styles within a game, using a variety of contemporary AI approaches, naming the resulting agents "Procedural Personas." These methods for constructing procedural personas are then integrated with existing procedural content generation systems, acting as critics that shape the output of these systems, optimizing generated content for different personas and by extension, different kinds of players and their decision making styles.

  12. Early changes in experimental osteoarthritis using the Pond-Nuki dog model: technical procedure and initial results of in vivo MR imaging

    Energy Technology Data Exchange (ETDEWEB)

    Libicher, Martin [University of Heidelberg, Department of Radiology, Heidelberg (Germany); Ivancic, Mate; Hoffmann, Volker; Wenz, Wolfram [University of Heidelberg, Department of Orthopaedics, Heidelberg (Germany)

    2005-02-01

    The purpose of this study was to prove the feasibility of combining in vivo MR imaging with the Pond-Nuki animal model for the evaluation of osteoarthritis. In an experimental study, 24 beagle dogs underwent transection of the anterior cruciate ligament of the left leg (modified Pond-Nuki model). The dogs were randomly assigned to four groups and examined by MRI after 6, 12, 24 and 48 weeks. MR imaging of both knees was performed under general anesthesia, with the contralateral joint serving as control. In group 1 (6 weeks postoperatively), the first sign detected on MRI was subchondral bone marrow edema in the posteromedial tibia. After 12 weeks, erosion of the posteromedial tibial cartilage could be observed, followed by meniscus degeneration and osteophytosis after 24 and 48 weeks. The contralateral knee joint showed transient joint effusion, but no significant signs of internal derangement (P<0.001). By combining in vivo MR imaging with the Pond-Nuki model, it is possible to detect early signs of osteoarthritis. The first sign was posteromedial subchondral bone marrow edema in the tibia, followed by progressive cartilage degeneration and joint derangement. The in vivo model therefore seems suitable for longitudinal studies or for monitoring therapeutic effects in osteoarthritis. (orig.)

  13. Robust estimation procedure in panel data model

    Energy Technology Data Exchange (ETDEWEB)

    Shariff, Nurul Sima Mohamad [Faculty of Science of Technology, Universiti Sains Islam Malaysia (USIM), 71800, Nilai, Negeri Sembilan (Malaysia); Hamzah, Nor Aishah [Institute of Mathematical Sciences, Universiti Malaya, 50630, Kuala Lumpur (Malaysia)

    2014-06-19

    Panel data modeling has received great attention in recent econometric research. This is due to the availability of data sources and the interest in studying cross-sections of individuals observed over time. However, problems may arise in modeling the panel in the presence of cross-sectional dependence and outliers. Even though there are a few methods that take into consideration the presence of cross-sectional dependence in the panel, these methods may provide inconsistent parameter estimates and inferences when outliers occur in the panel. As such, an alternative method that is robust to outliers and cross-sectional dependence is introduced in this paper. The properties and construction of the confidence interval for the parameter estimates are also considered. The robustness of the procedure is investigated and comparisons are made to the existing method via simulation studies. Our results show that the robust approach is able to produce accurate and reliable parameter estimates under the conditions considered.
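The abstract does not specify the robust estimator, so the sketch below uses a standard textbook stand-in: a Huber M-estimator fitted by iteratively reweighted least squares (IRLS), compared against OLS on simulated data contaminated with outliers. All data and parameter values are invented:

```python
import numpy as np

def huber_irls(X, y, k=1.345, n_iter=50):
    """Huber M-estimator via iteratively reweighted least squares.
    A generic robust-regression sketch, not the paper's panel estimator."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]           # OLS start
    for _ in range(n_iter):
        r = y - X @ beta
        s = np.median(np.abs(r - np.median(r))) / 0.6745  # robust MAD scale
        u = r / (k * s)
        w = np.where(np.abs(u) <= 1.0, 1.0, 1.0 / np.abs(u))  # Huber weights
        sw = np.sqrt(w)
        beta = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)[0]
    return beta

# Simulated data: intercept 1.0, slope 2.0, with 10 gross outliers.
rng = np.random.default_rng(1)
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = X @ np.array([1.0, 2.0]) + rng.normal(0, 0.5, n)
y[:10] += 15.0

beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]
beta_rob = huber_irls(X, y)
```

The outliers pull the OLS intercept upward, while the downweighting in the robust fit keeps both coefficients near their true values.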

  14. Applying Modeling Tools to Ground System Procedures

    Science.gov (United States)

    Di Pasquale, Peter

    2012-01-01

    As part of a long-term effort to revitalize the Ground Systems (GS) Engineering Section practices, Systems Modeling Language (SysML) and Business Process Model and Notation (BPMN) have been used to model existing GS products and the procedures GS engineers use to produce them.

  15. Results of consecutive training procedures in pediatric cardiac surgery

    Directory of Open Access Journals (Sweden)

    Campbell David N

    2010-11-01

    This report from a single institution describes the results of consecutive pediatric heart operations done by trainees under the supervision of a senior surgeon. The 3.1% mortality seen in 1067 index operations is comparable across procedures and risk bands to risk-stratified results reported by the Society of Thoracic Surgeons. With appropriate mentorship, surgeons-in-training are able to achieve good results as first operators.

  16. Procedural Modeling for Digital Cultural Heritage

    Directory of Open Access Journals (Sweden)

    Müller Pascal

    2009-01-01

    The rapid development of computer graphics and imaging provides the modern archeologist with several tools to realistically model and visualize archeological sites in 3D. This, however, creates a tension between veridical and realistic modeling. Visually compelling models may lead people to falsely believe that there exists very precise knowledge about the past appearance of a site. In order to make the underlying uncertainty visible, it has been proposed to encode this uncertainty with different levels of transparency in the rendering, or of decoloration of the textures. We argue that procedural modeling technology based on shape grammars provides an interesting alternative to such measures, which tend to spoil the experience for the observer. Both its efficiency and compactness make procedural modeling a tool to produce multiple models, which together sample the space of possibilities. Variations between the different models express levels of uncertainty implicitly, while letting each individual model keep its realistic appearance. The underlying structural description makes the uncertainty explicit. Additionally, procedural modeling yields the flexibility to incorporate changes as knowledge of an archeological site is refined. Annotations explaining modeling decisions can be included. We demonstrate our procedural modeling implementation with several recent examples.
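A toy shape grammar illustrates the core idea: the same grammar, derived repeatedly with randomised rule choices, yields a family of models that implicitly samples the space of possible reconstructions. The symbols and production rules below are invented for the example:

```python
import random

# Non-terminals are refined by production rules until only terminal shapes
# remain; random rule choices make each derivation a distinct variant.
RULES = {
    "Building": [["Floor", "Roof"], ["Floor", "Floor", "Roof"]],
    "Floor":    [["Wall", "Window", "Wall"], ["Wall", "Door", "Wall"]],
    "Roof":     [["FlatRoof"], ["GableRoof"]],
}

def derive(symbol, rng):
    if symbol not in RULES:          # terminal shape: emit as-is
        return [symbol]
    production = rng.choice(RULES[symbol])
    out = []
    for s in production:
        out.extend(derive(s, rng))
    return out

rng = random.Random(7)
variants = [derive("Building", rng) for _ in range(3)]
```

Each variant is one sampled reconstruction; differences between variants convey uncertainty while every individual derivation stays a complete, "realistic" model.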

  17. Improvement and Validation of Weld Residual Stress Modelling Procedure

    Energy Technology Data Exchange (ETDEWEB)

    Zang, Weilin; Gunnars, Jens (Inspecta Technology AB, Stockholm (Sweden)); Dong, Pingsha; Hong, Jeong K. (Center for Welded Structures Research, Battelle, Columbus, OH (United States))

    2009-06-15

    The objective of this work is to identify and evaluate improvements to the residual stress modelling procedure currently used in Sweden. There is a growing demand to eliminate any unnecessary conservatism involved in residual stress assumptions. The study focused on the development and validation of an improved weld residual stress modelling procedure, taking advantage of recent advances in residual stress modelling and stress measurement techniques. The major changes applied in the new weld residual stress modelling procedure are: an improved procedure for heat source calibration based on the use of analytical solutions; use of an isotropic hardening model where mixed hardening data is not available; and use of an annealing model for improved simulation of strain relaxation in re-heated material. The new modelling procedure is demonstrated to capture the main characteristics of the through-thickness stress distributions by validation against experimental measurements. Three austenitic stainless steel butt-weld cases are analysed, covering a large range of pipe geometries. From the cases it is evident that there can be large differences between the residual stresses predicted using the new procedure and those from the earlier procedure or handbook recommendations. Previously recommended profiles could give misleading fracture assessment results. The stress profiles according to the new procedure agree well with the measured data. If data is available, a mixed hardening model should be used.

  18. A Procedural Model for Process Improvement Projects

    OpenAIRE

    Kreimeyer, Matthias; Daniilidis, Charampos; Lindemann, Udo

    2017-01-01

    Process improvement projects are of a complex nature. It is therefore necessary to use experience and knowledge gained in previous projects when executing a new project. Yet, there are few pragmatic planning aids, and transferring the institutional knowledge from one project to the next is difficult. This paper proposes a procedural model that extends common models for project planning to enable staff on a process improvement project to adequately plan their projects, enabling them to documen...

  19. A procedure for building product models

    DEFF Research Database (Denmark)

    Hvam, Lars; Riis, Jesper; Malis, Martin

    2001-01-01

    with product models. The next phase includes an analysis of the product assortment, and the set up of a so-called product master. Finally the product model is designed and implemented using object oriented modelling. The procedure is developed in order to ensure that the product models constructed are fit...... for the business processes they support, and properly structured and documented, in order to facilitate that the systems can be maintained continually and further developed. The research has been carried out at the Centre for Industrialisation of Engineering, Department of Manufacturing Engineering, Technical...

  20. New procedure for declaring accidents resulting in bodily injuries

    CERN Multimedia

    2014-01-01

    The HR Department would like to remind members of personnel that, according to Administrative Circular No. 14 (Rev. 3), entitled “Protection of members of the personnel against the financial consequences of illness, accident and incapacity for work”, accidents resulting in bodily injuries and presumed to be of an occupational nature should, under normal circumstances, be declared within 10 working days of the accident having occurred, accompanied by a medical certificate. In an effort to streamline procedures, occupational accident declarations should be made via EDH using the “declaration of occupational accident” electronic form. For the declaration of non-occupational accidents resulting in bodily injuries of members of the CERN Health Insurance Scheme (CHIS), a new paper form has been created that can be downloaded from the CHIS website and is also available from the UNIQA Helpdesk in the Main Building. If you encounter technical difficulties with these new ...

  1. Comparisons of Estimation Procedures for Nonlinear Multilevel Models

    Directory of Open Access Journals (Sweden)

    Ali Reza Fotouhi

    2003-05-01

    We introduce General Multilevel Models and discuss the estimation procedures that may be used to fit multilevel models. We apply the proposed procedures to three-level binary data generated in a simulation study. We compare the procedures by two criteria: bias and efficiency. We find that the estimates of the fixed effects and variance components are substantially and significantly biased using Longford's approximation and Goldstein's generalized least squares approaches, as implemented in the software packages VARCL and ML3. These estimates are not significantly biased and are very close to the true values when we use Markov chain Monte Carlo (MCMC) with Gibbs sampling or the nonparametric maximum likelihood (NPML) approach. The Gaussian quadrature (GQ) approach, even with a small number of mass points, results in consistent estimates but is computationally problematic. We conclude that the MCMC and NPML approaches are the recommended procedures for fitting multilevel models.

  2. Procedural Optimization Models for Multiobjective Flexible JSSP

    Directory of Open Access Journals (Sweden)

    Elena Simona NICOARA

    2013-01-01

    The most challenging issues related to manufacturing efficiency occur if the jobs to be scheduled are structurally different, if these jobs allow flexible routings on the equipment, and if multiple objectives are required. This framework, called Multi-objective Flexible Job Shop Scheduling Problems (MOFJSSP), applicable to many real processes, has been less reported in the literature than the JSSP framework, which has been extensively formalized, modeled and analyzed from many perspectives. MOFJSSP lies, as do many other NP-hard problems, in a tedious place where the vast optimization theory meets the real-world context. The paper discusses the optimization models best suited to MOFJSSP and analyzes in detail genetic algorithms and agent-based models as the most appropriate procedural models.

  3. Generic Graph Grammar: A Simple Grammar for Generic Procedural Modelling

    DEFF Research Database (Denmark)

    Christiansen, Asger Nyman; Bærentzen, Jakob Andreas

    2012-01-01

    Methods for procedural modelling tend to be designed either for organic objects, which are described well by skeletal structures, or for man-made objects, which are described well by surface primitives. Procedural methods which allow for modelling of both kinds of objects are few and usually ... in a directed cyclic graph. Furthermore, the basic productions are chosen such that Generic Graph Grammar seamlessly combines the capabilities of L-systems to imitate biological growth (to model trees, animals, etc.) and those of split grammars to design structured objects (chairs, houses, etc.). This results ...

  4. Model and Variable Selection Procedures for Semiparametric Time Series Regression

    Directory of Open Access Journals (Sweden)

    Risa Kato

    2009-01-01

    Semiparametric regression models are very useful for time series analysis. They facilitate the detection of features resulting from external interventions. The complexity of semiparametric models poses new challenges for issues of nonparametric and parametric inference and model selection that frequently arise in time series data analysis. In this paper, we propose penalized least squares estimators which can simultaneously select significant variables and estimate unknown parameters. An innovative class of variable selection procedures is proposed to select significant variables and basis functions in a semiparametric model. The asymptotic normality of the resulting estimators is established. Information criteria for model selection are also proposed. We illustrate the effectiveness of the proposed procedures with numerical simulations.
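The idea of penalized least squares performing estimation and variable selection simultaneously can be sketched with a lasso-type estimator solved by coordinate descent: the L1 penalty shrinks insignificant coefficients exactly to zero. This is a generic stand-in, not the paper's exact penalty or basis-function setup, and all data are simulated:

```python
import numpy as np

def lasso_cd(X, y, lam, n_iter=200):
    """L1-penalized least squares (lasso) via coordinate descent."""
    n, p = X.shape
    beta = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(p):
            # Partial residual: remove every effect except coordinate j.
            r = y - X @ beta + X[:, j] * beta[j]
            rho = X[:, j] @ r
            # Soft-thresholding update zeroes out weak coordinates.
            beta[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
    return beta

# Simulated data: only coefficients 0 and 3 are truly nonzero.
rng = np.random.default_rng(3)
n, p = 120, 8
X = rng.normal(size=(n, p))
true_beta = np.array([3.0, 0, 0, -2.0, 0, 0, 0, 0])
y = X @ true_beta + rng.normal(0, 0.5, n)

beta_hat = lasso_cd(X, y, lam=30.0)
selected = np.flatnonzero(np.abs(beta_hat) > 1e-8)
```

The fitted coefficients are shrunk slightly toward zero (the usual lasso bias), but the support of the true model is recovered.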

  5. Vector operations for modelling data-conversion procedures

    Energy Technology Data Exchange (ETDEWEB)

    Rivkin, M.N.

    1992-03-01

    This article presents a set of vector operations that permit effective modelling of operations from extended relational algebra for implementations of variable-construction procedures in data-conversion processors. Vector operations are classified, and test results are given for the ARIUS UP and other popular database management systems for PCs. 10 refs., 5 figs.
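The idea of backing relational-algebra operations with vector operations can be sketched with boolean masks and membership tests over column arrays. The relation and field names below are invented, and this is not the article's actual operation set:

```python
import numpy as np

# A tiny "relation" stored column-wise as vectors (invented fields).
ids    = np.array([1, 2, 3, 4, 5])
salary = np.array([300, 150, 500, 220, 410])
dept   = np.array([10, 20, 10, 30, 20])

# Selection sigma_{salary > 200}(Emp) realised as a vectorised boolean mask.
mask = salary > 200
sel_ids = ids[mask]

# Semi-join with departments {10, 20} realised as a membership test,
# composed with the selection by elementwise AND.
semi = np.isin(dept, [10, 20])
join_ids = ids[mask & semi]
```

Each relational operator becomes a single whole-column vector operation, which is the source of the efficiency such processors exploit.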

  6. Results of measurement procedure of innovation maturity of business

    Directory of Open Access Journals (Sweden)

    O. A. Kozlova

    2010-09-01

    In this article, basic approaches to assessing the innovation maturity of a business are described. The measurement procedure is illustrated using organizations of the Perm region as an example. Preliminary assessments of the potential for adopting planned innovations help optimize the selection of an innovation strategy for an organization and increase business effectiveness at the microeconomic and regional levels.

  7. Ceramic inlays and onlays: clinical procedures for predictable results.

    Science.gov (United States)

    Meyer, Alfredo; Cardoso, Luiz Clovis; Araujo, Elito; Baratieri, Luiz Narciso

    2003-01-01

    The use of ceramics as restorative materials has increased substantially in the past two decades. This trend can be attributed to the greater interest of patients and dentists in this esthetic and long-lasting material, and to the ability to effectively bond metal-free ceramic restorations to tooth structure using acid-etch techniques and adhesive cements. The purpose of this article is to review the pertinent literature on ceramic systems, direct internal buildup materials, and adhesive cements. Current clinical procedures for the planning, preparation, impression, and bonding of ceramic inlays and onlays are also briefly reviewed. A representative clinical case is presented, illustrating the technique. When posterior teeth are weakened owing to the need for wide cavity preparations, the success of direct resin-based composites is compromised. In these clinical situations, ceramic inlays/onlays can be used to achieve esthetic, durable, and biologically compatible posterior restorations.

  8. Microplastics in Baltic bottom sediments: Quantification procedures and first results.

    Science.gov (United States)

    Zobkov, M; Esiukova, E

    2017-01-30

    Microplastics in the marine environment are known as a global ecological problem, but there are still no standardized analysis procedures for their quantification. The first breakthrough in this direction was the NOAA Laboratory Methods for quantifying synthetic particles in water and sediments, but fiber counts have been found to be underestimated with this approach. We propose modifications to these methods that allow microplastics in bottom sediments, including small fibers, to be analyzed. Addition of an internal standard to sediment samples and occasional blank runs are advised for analysis quality control. The microplastics extraction efficiency using the proposed modifications is 92±7%. The distribution of microplastics in bottom sediments of the Russian part of the Baltic Sea is presented. Microplastic particles were found in all of the samples, with an average concentration of 34±10 items/kg DW, of the same order of magnitude as reported in neighbouring studies.

  9. A Survey on Procedural Modelling for Virtual Worlds

    NARCIS (Netherlands)

    Smelik, R.M.; Tutenel, T.; Bidarra, R.; Benes, B.

    2014-01-01

    Procedural modelling deals with (semi-)automatic content generation by means of a program or procedure. Among other advantages, its data compression and the potential to generate a large variety of detailed content with reduced human intervention, have made procedural modelling attractive for creati

  10. Markov chain decision model for urinary incontinence procedures.

    Science.gov (United States)

    Kumar, Sameer; Ghildayal, Nidhi; Ghildayal, Neha

    2017-03-13

    Purpose Urinary incontinence (UI) is a common chronic health condition, a problem particularly among elderly women, that negatively impacts quality of life. However, UI is usually viewed as a likely result of old age, and as such is generally not evaluated or even managed appropriately. Many treatments are available to manage incontinence, such as bladder training, and numerous surgical procedures, such as the Burch colposuspension and the Sling procedure, which have high success rates. The purpose of this paper is to analyze which of these popular surgical procedures for UI is more effective. Design/methodology/approach This research employs randomized, prospective studies to obtain robust cost and utility data used in the Markov chain decision model for examining which of these surgical interventions is more effective in treating women with stress UI, based on two measures: number of quality-adjusted life years (QALY) and cost per QALY. Treeage Pro Healthcare software was employed in the Markov decision analysis. Findings Results showed that the Sling procedure is a more effective surgical intervention than the Burch. However, if a utility greater than a certain value, at which both procedures are equally effective, is assigned to persistent incontinence, the Burch procedure is more effective than the Sling procedure. Originality/value This paper demonstrates the efficacy of a Markov chain decision modeling approach to the comparative effectiveness analysis of available treatments for patients with UI, an important public health issue that is widely prevalent among elderly women in developed and developing countries. This research also improves upon other analyses using a Markov chain decision modeling process to analyze various strategies for treating UI.
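A minimal Markov cohort model of the kind described can be sketched as follows. All states, transition probabilities, utilities, and costs are illustrative placeholders invented for the example, not the paper's trial-derived inputs:

```python
# Three-state annual-cycle Markov cohort model (all numbers hypothetical).
STATES = ["continent", "incontinent", "dead"]

def run_cohort(p_success, proc_cost, cycles=20):
    """Return (QALYs, cost) per patient over the model horizon."""
    trans = {   # yearly transition probabilities (hypothetical)
        "continent":   {"continent": 0.97, "incontinent": 0.02, "dead": 0.01},
        "incontinent": {"incontinent": 0.99, "dead": 0.01},
        "dead":        {"dead": 1.0},
    }
    utility = {"continent": 0.95, "incontinent": 0.75, "dead": 0.0}
    annual_cost = {"continent": 50.0, "incontinent": 400.0, "dead": 0.0}

    # Cohort distribution immediately after surgery.
    dist = {"continent": p_success, "incontinent": 1.0 - p_success, "dead": 0.0}
    qalys, cost = 0.0, proc_cost
    for _ in range(cycles):
        qalys += sum(dist[s] * utility[s] for s in STATES)
        cost += sum(dist[s] * annual_cost[s] for s in STATES)
        # One Markov transition of the whole cohort.
        dist = {t: sum(dist[s] * trans[s].get(t, 0.0) for s in STATES)
                for t in STATES}
    return qalys, cost

# Hypothetical comparison: higher success rate vs. lower procedure cost.
sling_qaly, sling_cost = run_cohort(p_success=0.85, proc_cost=7000.0)
burch_qaly, burch_cost = run_cohort(p_success=0.80, proc_cost=8000.0)
```

With these placeholder numbers the sling arm accrues more QALYs at lower total cost; in a real analysis the inputs would come from the randomized studies and the cost per QALY would be compared across arms.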

  11. Resampling procedures to validate dendro-auxometric regression models

    Directory of Open Access Journals (Sweden)

    2009-03-01

    Regression analysis is widely used in several sectors of forest research. The validation of a dendro-auxometric model is a basic step in building the model itself: the more a model resists attempts to demonstrate its groundlessness, the more its reliability increases. In recent decades many new resampling techniques, which exploit the computational speed of modern computers, have been formulated. Here we show the results obtained by applying a bootstrap resampling procedure as a validation tool.
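
    A case-resampling bootstrap of the kind record 11 applies to dendro-auxometric regressions might look like this in outline. The allometric data, the log-log model form, and the resample count are all illustrative assumptions, not the paper's actual setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic dendro-auxometric data (illustrative): stem volume as an
# allometric power function of diameter at breast height (dbh).
dbh = rng.uniform(10, 50, 80)                            # cm
volume = 0.05 * dbh ** 1.8 * np.exp(rng.normal(0, 0.1, 80))

# Log-log linearization: log(v) = log(a) + b * log(dbh)
X = np.column_stack([np.ones_like(dbh), np.log(dbh)])
y = np.log(volume)

def fit(X, y):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

beta_hat = fit(X, y)

# Case-resampling bootstrap: refit the model on resampled (x, y) pairs
# and use the spread of the estimates to judge the fitted exponent.
B = 2000
boot = np.empty((B, 2))
for b in range(B):
    idx = rng.integers(0, len(y), len(y))
    boot[b] = fit(X[idx], y[idx])

ci_low, ci_high = np.percentile(boot, [2.5, 97.5], axis=0)
```

    A narrow bootstrap interval around the fitted exponent is evidence the model "resists" the resampling attack; a wide or unstable one flags an unreliable fit.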

  12. Cognition and procedure representational requirements for predictive human performance models

    Science.gov (United States)

    Corker, K.

    1992-01-01

    Models and modeling environments for human performance are becoming significant contributors to early system design and analysis procedures. Issues of levels of automation, physical environment, informational environment, and manning requirements are being addressed by such man/machine analysis systems. The research reported here investigates the close interaction between models of human cognition and models that describe procedural performance. We describe a methodology for the decomposition of aircrew procedures that supports interaction with models of cognition on the basis of observed procedures; that serves to identify cockpit/avionics information sources and crew information requirements; and that provides the structure to support methods for function allocation among crew and aiding systems. Our approach is to develop an object-oriented, modular, executable software representation of the aircrew, the aircraft, and the procedures necessary to satisfy flight-phase goals. We then encode, in a time-based language, taxonomies of the conceptual, relational, and procedural constraints among the cockpit avionics and control system and the aircrew. We have designed and implemented a goals/procedures hierarchic representation sufficient to describe procedural flow in the cockpit. We then execute the procedural representation in simulation software and calculate the values of the flight instruments, aircraft state variables and crew resources using the constraints available from the relationship taxonomies. The system provides a flexible, extensible, manipulable and executable representation of aircrew and procedures that is generally applicable to crew/procedure task analysis. The representation supports developed methods of intent inference, and is extensible to include issues of information requirements and functional allocation. We are attempting to link the procedural representation to models of cognitive functions to establish several intent inference methods

  13. Some procedures for displaying results from three-way methods

    NARCIS (Netherlands)

    Kiers, Henk A.L.

    2000-01-01

    Three-way Tucker analysis and CANDECOMP/PARAFAC are popular methods for the analysis of three-way data (data pertaining to three sets of entities). To interpret the results from these methods, one can, in addition to inspecting the component matrices and the core array, inspect visual representation

  14. Inference-based procedural modeling of solids

    KAUST Repository

    Biggers, Keith

    2011-11-01

    As virtual environments become larger and more complex, there is an increasing need for more automated construction algorithms to support the development process. We present an approach for modeling solids by combining prior examples with a simple sketch. Our algorithm uses an inference-based approach to incrementally fit patches together in a consistent fashion to define the boundary of an object. This algorithm samples and extracts surface patches from input models, and develops a Petri net structure that describes the relationship between patches along an imposed parameterization. Then, given a new parameterized line or curve, we use the Petri net to logically fit patches together in a manner consistent with the input model. This allows us to easily construct objects of varying sizes and configurations using arbitrary articulation, repetition, and interchanging of parts. The result of our process is a solid model representation of the constructed object that can be integrated into a simulation-based environment. © 2011 Elsevier Ltd. All rights reserved.

  15. Procedures, Resources and Selected Results of the Deep Ecliptic Survey

    Science.gov (United States)

    Buie, M. W.; Millis, R. L.; Wasserman, L. H.; Elliot, J. L.; Kern, S. D.; Clancy, K. B.; Chiang, E. I.; Jordan, A. B.; Meech, K. J.; Wagner, R. M.; Trilling, D. E.

    2003-06-01

    The Deep Ecliptic Survey is a project whose goal is to survey a large area of the near-ecliptic region to a faint limiting magnitude (R ~ 24) in search of objects in the outer solar system. We are collecting a large homogeneous data sample from the Kitt Peak Mayall 4-m and Cerro Tololo Blanco 4-m telescopes with the Mosaic prime-focus CCD cameras. Our goal is to collect a sample of 500 objects with good orbits to further our understanding of the dynamical structure of the outer solar system. This survey has been in progress since 1998 and is responsible for 272 designated discoveries as of March 2003. We summarize our techniques, highlight recent results, and describe publicly available resources.

  16. Proceduralization, transfer of training and retention of knowledge as a result of output practice

    Directory of Open Access Journals (Sweden)

    Mehrnoosh Fakharzadeh

    2013-11-01

    This study investigated the effect of output practice on the proceduralization, transfer and retention of knowledge of English modals, adopting Anderson's ACT-R model of skill acquisition. A pretest, posttest and delayed-posttest design was used in which procedural knowledge of the production skill was operationalized through the groups' performance on a Dual Task Timed Completion Test, and transfer of training was measured through a Dual Task Timed Grammaticality Judgment Test. Two intact classes of intermediate EFL learners were randomly assigned to treatment and control groups. The output group (n = 27) received explicit grammar instruction, a combination of three output tasks including dictogloss, individual text reconstruction, and corrected-cloze translation, and feedback. The control group (n = 25) was just exposed to the identical texts through listening and reading tasks followed by some questions irrelevant to the target structure. Results showed that on the posttest, three days after the last treatment session, the output group outperformed the control group on both measures of procedural knowledge and transfer of knowledge. As for retention, forty days later, the output group's performance was still significantly better than on the pretest. This group also retained its advantage on both the procedural-knowledge and transfer-of-knowledge delayed posttests. The results may help language teachers design more effective activities for learners within institutional constraints.

  17. Modeling and prediction of surgical procedure times

    NARCIS (Netherlands)

    P.S. Stepaniak (Pieter); C. Heij (Christiaan); G. de Vries (Guus)

    2009-01-01

    Accurate prediction of medical operation times is of crucial importance for cost-efficient operating room planning in hospitals. This paper investigates the possible dependence of procedure times on surgeon factors like age, experience, gender, and team composition. The effect of these f

  18. A procedure for Building Product Models

    DEFF Research Database (Denmark)

    Hvam, Lars

    1999-01-01

    , easily adaptable concepts and methods from data modeling (object oriented analysis) and domain modeling (product modeling). The concepts are general and can be used for modeling all types of specifications in the different phases in the product life cycle. The modeling techniques presented have been...

  19. Modeling a radiotherapy clinical procedure: total body irradiation.

    Science.gov (United States)

    Esteban, Ernesto P; García, Camille; De La Rosa, Verónica

    2010-09-01

    Leukemia, non-Hodgkin's lymphoma, and neuroblastoma patients prior to bone marrow transplants may be subject to a clinical radiotherapy procedure called total body irradiation (TBI). To mimic a TBI procedure, we modified the Jones model of bone marrow radiation cell kinetics by adding mutant and cancerous cell compartments. The modified Jones model is mathematically described by a set of n + 4 differential equations, where n is the number of mutations before a normal cell becomes a cancerous cell. Assuming a standard TBI radiotherapy treatment with a total dose of 1320 cGy fractionated over four days, two cases were considered. In the first, repopulation and sub-lethal repair in the different cell populations were not taken into account (model I). In this case, the proposed modified Jones model could be solved in a closed form. In the second, repopulation and sub-lethal repair were considered, and thus, we found that the modified Jones model could only be solved numerically (model II). After a numerical and graphical analysis, we concluded that the expected results of TBI treatment can be mimicked using model I. Model II can also be used, provided the cancer repopulation factor is less than the normal cell repopulation factor. However, model I has fewer free parameters compared to model II. In either case, our results are in agreement that the standard dose fractionated over four days, with two irradiations each day, provides the needed conditioning treatment prior to bone marrow transplant. Partial support for this research was supplied by the NIH-RISE program, the LSAMP-Puerto Rico program, and the University of Puerto Rico-Humacao.

  20. Procedures for Geometric Data Reduction in Solid Log Modelling

    Science.gov (United States)

    Luis G. Occeña; Wenzhen Chen; Daniel L. Schmoldt

    1995-01-01

    One of the difficulties in solid log modelling is working with huge data sets, such as those that come from computed axial tomographic imaging. Algorithmic procedures are described in this paper that have successfully reduced data without sacrificing modelling integrity.

  1. Procedure to Determine Coefficients for the Sandia Array Performance Model (SAPM)

    Energy Technology Data Exchange (ETDEWEB)

    King, Bruce Hardison; Hansen, Clifford; Riley, Daniel; Robinson, Charles David; Pratt, Larry

    2016-06-01

    The Sandia Array Performance Model (SAPM), a semi-empirical model for predicting PV system power, has been in use for more than a decade. While several studies have presented comparisons of measurements and analysis results among laboratories, detailed procedures for determining model coefficients have not yet been published. Independent test laboratories must develop in-house procedures to determine SAPM coefficients, which contributes to uncertainty in the resulting models. Here we present a standard procedure for calibrating the SAPM using outdoor electrical and meteorological measurements. Analysis procedures are illustrated with data measured outdoors for a 36-cell silicon photovoltaic module.
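
    A hedged sketch of the kind of coefficient extraction record 1 describes: fitting Isc0 and a temperature coefficient for a simplified SAPM-style short-circuit-current equation from synthetic "outdoor" data by least squares. The full SAPM calibration involves many more coefficients plus spectral and angle-of-incidence corrections; the data, the simplified equation, and the true values used to synthesize the measurements are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative outdoor measurements: effective irradiance (suns),
# cell temperature (deg C), and measured short-circuit current (A).
Ee = rng.uniform(0.4, 1.1, 200)
Tc = rng.uniform(15, 65, 200)
ISC0, ALPHA = 9.2, 0.0005          # "true" values used to synthesize data
isc = ISC0 * Ee * (1 + ALPHA * (Tc - 25)) + rng.normal(0, 0.01, 200)

# Simplified SAPM-style form: Isc = Isc0 * Ee * (1 + alpha * (Tc - 25)).
# Dividing by Ee linearizes it, so both coefficients fall out of one fit.
yn = isc / Ee
X = np.column_stack([np.ones_like(Tc), Tc - 25.0])
(beta0, beta1), *_ = np.linalg.lstsq(X, yn, rcond=None)
isc0_hat, alpha_hat = beta0, beta1 / beta0
```

    With enough well-spread outdoor points the recovered coefficients land close to the values used to generate the data, which is the essential idea behind a standardized outdoor calibration procedure.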

  2. Using GOMS models and hypertext to create representations of medical procedures for online display

    Science.gov (United States)

    Gugerty, Leo; Halgren, Shannon; Gosbee, John; Rudisill, Marianne

    1991-01-01

    This study investigated two methods to improve organization and presentation of computer-based medical procedures. A literature review suggested that the GOMS (goals, operators, methods, and selecton rules) model can assist in rigorous task analysis, which can then help generate initial design ideas for the human-computer interface. GOMS model are hierarchical in nature, so this study also investigated the effect of hierarchical, hypertext interfaces. We used a 2 x 2 between subjects design, including the following independent variables: procedure organization - GOMS model based vs. medical-textbook based; navigation type - hierarchical vs. linear (booklike). After naive subjects studies the online procedures, measures were taken of their memory for the content and the organization of the procedures. This design was repeated for two medical procedures. For one procedure, subjects who studied GOMS-based and hierarchical procedures remembered more about the procedures than other subjects. The results for the other procedure were less clear. However, data for both procedures showed a 'GOMSification effect'. That is, when asked to do a free recall of a procedure, subjects who had studies a textbook procedure often recalled key information in a location inconsistent with the procedure they actually studied, but consistent with the GOMS-based procedure.

  3. Procedures and results of the measurements on large area photomultipliers for the NEMO project

    Science.gov (United States)

    Aiello, S.; Leonora, E.; Aloisio, A.; Ameli, F.; Amore, I.; Anghinolfi, M.; Anzalone, A.; Barbarino, G.; Barbarito, E.; Battaglieri, M.; Bazzotti, M.; Bellotti, R.; Bersani, A.; Beverini, N.; Biagi, S.; Bonori, M.; Bouhdaef, B.; Cacopardo, G.; Calı, C.; Capone, A.; Caponetto, L.; Carminati, G.; Cassano, B.; Ceres, A.; Chiarusi, T.; Circella, M.; Cocimano, R.; Coniglione, R.; Cordelli, M.; Costa, M.; D'Amico, A.; DeBonis, G.; DeRosa, G.; DeRuvo, G.; DeVita, R.; Distefano, C.; Flaminio, V.; Fratini, K.; Gabrielli, A.; Galeotti, S.; Gandolfi, E.; Giacomelli, G.; Giorgi, F.; Giovanetti, G.; Grimaldi, A.; Grmek, A.; Habel, R.; Imbesi, M.; Lonardo, A.; LoPresti, D.; Lucarelli, F.; Margiotta, A.; Marinelli, A.; Martini, A.; Masullo, R.; Maugeri, F.; Migneco, E.; Minutoli, S.; Mongelli, M.; Morganti, M.; Musico, P.; Musumeci, M.; Orlando, A.; Osipenko, M.; Papaleo, R.; Pappalardo, V.; Piattelli, P.; Piombo, D.; Raffaelli, F.; Raia, G.; Randazzo, N.; Reito, S.; Ricco, G.; Riccobene, G.; Ripani, M.; Rovelli, A.; Ruppi, M.; Russo, G. V.; Russo, S.; Sapienza, P.; Sedita, M.; Shirokov, E.; Simeone, F.; Sciliberto, D.; Sipala, V.; Sollima, C.; Spurio, M.; Stefani, F.; Taiuti, M.; Terreni, G.; Trasatti, L.; Urso, S.; Vecchi, M.; Vicini, P.; Wischnewski, R.

    2010-03-01

    The selection of the photomultiplier plays a crucial role in the R&D activity related to a large-scale underwater neutrino telescope. This paper illustrates the main procedures and facilities used to characterize the performance of 72 large-area photomultipliers, Hamamatsu model R7081 sel. The voltage needed to achieve a gain of 5×10^7, the dark count rate, and the single-photoelectron time and charge properties of the overall response were measured with a suitably attenuated 410 nm pulsed laser. A dedicated study of the spurious pulses was also performed. The results prove that the photomultipliers comply with the general requirements imposed by the project.

  4. Power mos devices: structures and modelling procedures

    Energy Technology Data Exchange (ETDEWEB)

    Rossel, P.; Charitat, G.; Tranduc, H.; Morancho, F.; Moncoqut

    1997-05-01

    In this survey, the historical evolution of power MOS transistor structures is presented and currently used devices are described. General considerations on current and voltage capabilities are discussed and configurations of popular structures are given. A synthesis of the different modelling approaches proposed in the last three years is then presented, including analytical solutions for basic electrical parameters such as threshold voltage, on-resistance, saturation and quasi-saturation effects, temperature influence and voltage handling capability. The numerical solution of basic semiconductor device equations is then briefly reviewed, along with some typical problems that can be solved this way. A compact circuit modelling method is finally explained, with emphasis on modelling dynamic behavior.

  5. A procedure for Applying a Maturity Model to Process Improvement

    Directory of Open Access Journals (Sweden)

    Elizabeth Pérez Mergarejo

    2014-09-01

    A maturity model is an evolutionary roadmap for implementing the vital practices from one or more domains of organizational processes. Maturity models are little used in the Latin-American context. This paper presents a procedure for applying the Process and Enterprise Maturity Model developed by Michael Hammer [1]. The procedure is divided into three steps: Preparation, Evaluation and Improvement plan. Hammer's maturity model, together with the proposed procedure, can be used by organizations to improve their processes, involving both managers and employees.

  6. Multi-block and path modelling procedures

    DEFF Research Database (Denmark)

    Høskuldsson, Agnar

    2008-01-01

    The author has developed a unified theory of path and multi-block modelling of data. The data blocks are arranged in a directional path. Each data block can lead to one or more data blocks. It is assumed that a collection of input data blocks is given. Each of them is supposed to describe one...

  7. Transport Simulation Model Calibration with Two-Step Cluster Analysis Procedure

    Directory of Open Access Journals (Sweden)

    Zenina Nadezda

    2015-12-01

    The calibration results of a transport simulation model depend on the selected parameters and their values. The aim of the present paper is to calibrate a transport simulation model with a two-step cluster analysis procedure to improve the reliability of the simulation model results. Two global parameters have been considered: headway and simulation step. Normal, uniform and exponential models have been considered for headway generation. Applying the two-step cluster analysis procedure to the calibration has reduced the time needed to select the simulation step and headway generation model values.
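
    The two-step clustering idea in record 7, a fast pre-clustering pass followed by merging of pre-clusters into final groups, can be sketched as follows. The leader-style pre-clustering, the distance threshold, and the synthetic (headway, simulation-step) calibration data are all assumptions for illustration, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative calibration runs: (headway mean [s], simulation step [s])
# drawn from three hypothetical parameter regimes.
data = np.vstack([
    rng.normal([2.0, 0.1], 0.05, (50, 2)),
    rng.normal([3.0, 0.6], 0.05, (50, 2)),
    rng.normal([4.0, 1.2], 0.05, (50, 2)),
])

def precluster(X, threshold=0.3):
    """Step 1: one sequential 'leader' pass forming compact pre-clusters."""
    centers, counts = [], []
    for x in X:
        if centers:
            d = np.linalg.norm(np.array(centers) - x, axis=1)
            j = int(np.argmin(d))
            if d[j] < threshold:
                counts[j] += 1
                centers[j] += (x - centers[j]) / counts[j]  # running mean
                continue
        centers.append(x.copy())
        counts.append(1)
    return np.array(centers), np.array(counts)

def merge(centers, counts, k):
    """Step 2: greedily merge the closest pre-clusters down to k groups."""
    centers, counts = list(centers), list(counts)
    while len(centers) > k:
        best, pair = np.inf, None
        for i in range(len(centers)):
            for j in range(i + 1, len(centers)):
                d = np.linalg.norm(centers[i] - centers[j])
                if d < best:
                    best, pair = d, (i, j)
        i, j = pair
        w = counts[i] + counts[j]
        centers[i] = (counts[i] * centers[i] + counts[j] * centers[j]) / w
        counts[i] = w
        del centers[j], counts[j]
    return np.array(centers)

centers, counts = precluster(data)
final = merge(centers, counts, k=3)
```

    The single pass over the data is what makes the first step cheap; the expensive pairwise merging then runs only on the handful of pre-cluster centroids, not on every simulation run.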

  8. Design Transformations for Rule-based Procedural Modeling

    KAUST Repository

    Lienhard, Stefan

    2017-05-24

    We introduce design transformations for rule-based procedural models, e.g., for buildings and plants. Given two or more procedural designs, each specified by a grammar, a design transformation combines elements of the existing designs to generate new designs. We introduce two technical components to enable design transformations. First, we extend the concept of discrete rule switching to rule merging, leading to a very large shape space for combining procedural models. Second, we propose an algorithm to jointly derive two or more grammars, called grammar co-derivation. We demonstrate two applications of our work: we show that our framework leads to a larger variety of models than previous work, and we show fine-grained transformation sequences between two procedural models.

  9. A Precision-weighted Rank-ordering Procedure for the Combination of Voice Coder Evaluation Results

    NARCIS (Netherlands)

    Tardelli, J.D.; Wijngaarden, S.J. van; Hassanein, H.; Collura, J.S.

    2002-01-01

    A precision-weighted rank-ordering procedure was developed for the combination of voice coder subjective evaluation results to provide a final measure of performance for the reference and candidate coders in the NATO 1200/2400 bps coder selection project. The combination procedure is based on calcul

  10. Predictive models of procedural human supervisory control behavior

    Science.gov (United States)

    Boussemart, Yves

    Human supervisory control systems are characterized by the computer-mediated nature of the interactions between one or more operators and a given task. Nuclear power plants, air traffic management and unmanned vehicle operations are examples of such systems. In this context, the role of the operators is typically highly proceduralized due to the time- and mission-critical nature of the tasks. Therefore, the ability to continuously monitor operator behavior so as to detect and predict anomalous situations is a critical safeguard for proper system operation. In particular, such models can help support the decision-making process of a supervisor of a team of operators by providing alerts when likely anomalous behaviors are detected. By exploiting the operator behavioral patterns that are typically reinforced through standard operating procedures, this thesis proposes a methodology that uses statistical learning techniques to detect and predict anomalous operator conditions. More specifically, the proposed methodology relies on hidden Markov models (HMMs) and hidden semi-Markov models (HSMMs) to generate predictive models of unmanned vehicle system operators. Through the exploration of the resulting HMMs in two distinct single-operator scenarios, the methodology presented in this thesis is validated and shown to provide models capable of reliably predicting operator behavior. In addition, the use of HSMMs on the same data scenarios provides the temporal component of the predictions missing from the HMMs. The final step of this work is to examine how the proposed methodology scales to more complex scenarios involving teams of operators. Adopting a holistic team modeling approach, both HMMs and HSMMs are learned from two team-based data sets. The results show that the HSMMs can provide valuable timing information in the single-operator case, whereas HMMs tend to be more robust to increased team complexity. In addition, this thesis discusses the
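
    The anomaly-detection use of HMMs in record 10 rests on scoring observation sequences against a learned model. Below is a minimal scaled forward-algorithm sketch with invented parameters; a real application would learn the transition matrix A, emission matrix B, and initial distribution pi from operator data (e.g. by Baum-Welch), and the observation symbols standing in for operator actions are purely hypothetical.

```python
import numpy as np

# Illustrative 2-state HMM of operator modes: 0 = nominal, 1 = anomalous.
# All parameters are made up for this sketch, not learned from real data.
A  = np.array([[0.95, 0.05],
               [0.30, 0.70]])          # state transition probabilities
B  = np.array([[0.70, 0.25, 0.05],    # P(observation | nominal)
               [0.10, 0.30, 0.60]])   # P(observation | anomalous)
pi = np.array([0.9, 0.1])

def log_likelihood(obs):
    """Scaled forward algorithm: log P(observation sequence | model)."""
    alpha = pi * B[:, obs[0]]
    ll = np.log(alpha.sum())
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        ll += np.log(alpha.sum())
        alpha /= alpha.sum()
    return ll

nominal = [0, 0, 1, 0, 0, 1, 0]   # action stream the model expects
unusual = [2, 2, 2, 2, 2, 2, 2]   # stream the model finds surprising

# Lower per-step log-likelihood flags behavior that deviates from the
# procedurally reinforced patterns the model was trained on.
score_nominal = log_likelihood(nominal) / len(nominal)
score_unusual = log_likelihood(unusual) / len(unusual)
```

    An online monitor would compute this score over a sliding window and raise an alert to the supervisor when it drops below a calibrated threshold; the HSMM variant additionally models how long each state persists.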

  11. Effect of different pretreatment procedures on the particle size distribution results

    Science.gov (United States)

    Zacháry, Dóra; Szabó, Judit; Jakab, Gergely; Pál, Tamás; Kiss, Klaudia; Kovács, József; Szalai, Zoltán

    2017-04-01

    For soil particle size distribution (PSD) analysis, laser diffraction is a widely used and fast method. Although the resulting data serve as input for further measurements and modeling applications, only a few papers highlight the effect of the pretreatment techniques on the PSD results. According to the different standards there are distinct sample preparation procedures for laser diffraction, which involve chemical (using sodium hexametaphosphate, anhydrous calcium carbonate or hydrogen peroxide) and physical (using ultrasound) dispersion techniques to remove the cementing agents and break down the aggregates. To measure the effect of sample preparation on PSD results, 8 soil layer samples from typical Hungarian soil horizons were studied. We applied the most commonly used international (Buurman et al. 1996; Van Reeuwijk 2002; Burt 2004) and national (Hungarian Standard, MSZ-08 0205-78 1978) sample preparation procedures with and without modifications; therefore 8 distinct sample preparation series were carried out for each soil. The role of soil organic matter (SOM) content in aggregate formation was analyzed on a humic Arenosol sample. To measure the effect of carbonate as a cementing agent, Loess and calcaric Arenosol samples were analysed. The effect of small particles as a binder material was investigated using Luvisol Clay and Red Clay samples. Solonetz, Chernozem, and 'Erubáz' (a shallow soil type influenced by high SOM content and the volcanic parent rock) samples were used to represent the interaction effects of at least two binders. The pretreated soils were analyzed with the Horiba Partica LA-950 Analyser. The applied refractive indexes differ from each other (real part: 1.55-1.60; imaginary part: 0.10-0.50) according to the soil type and the pretreatment procedure. The dispersion medium was distilled water in all cases. Hierarchical cluster analysis was applied to classify the measured PSD results. Results show that sample preparation is very important

  12. Improvement of procedures for evaluating photochemical models. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Tesche, T.W.; Lurmann, F.R.; Roth, P.M.; Georgopoulos, P.; Seinfeld, J.H.

    1990-08-01

    The study establishes a set of procedures that should be used by all groups evaluating the performance of a photochemical model application. A set of ten numerical measures is recommended for evaluating a photochemical model's accuracy in predicting ozone concentrations. Nine graphical methods and six investigative simulations are also recommended to give additional insight into model performance. Standards are presented that each modeling study should try to meet. To complement the operational model evaluation procedures, several diagnostic procedures are suggested. The sensitivity of the model to uncertainties in hydrocarbon emission rates and speciation, and other parameters, should be assessed. Uncertainty bounds of key input variables and parameters can be propagated through the model to provide estimated uncertainties in the ozone predictions. Comparisons between measurements and predictions of species other than ozone will help ensure that the model is predicting the right ozone for the right reasons. Plotting concentration residuals (differences) against a variety of variables may give insight into the reasons for poor model performance. Mass flux and balance calculations can identify the relative importance of emissions and transport. The study also identifies testing a model's response to emission changes as the most important research need. Another important area is testing the emissions inventory.
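
    Two numerical measures of the kind record 12 recommends for ozone performance, mean normalized bias and mean normalized gross error, can be computed as below. The paired values and the 60 ppb observation cutoff are illustrative assumptions; the report's full recommended set contains ten such measures.

```python
import numpy as np

# Paired hourly ozone values in ppb (illustrative numbers only).
observed  = np.array([62.0, 85.0, 110.0, 95.0, 70.0, 120.0])
predicted = np.array([55.0, 90.0, 100.0, 99.0, 66.0, 131.0])

# Normalized residuals, typically computed only on observations above
# a cutoff so low-concentration hours don't dominate the statistics.
mask = observed >= 60.0
resid = (predicted[mask] - observed[mask]) / observed[mask]

mnb  = resid.mean()            # signed: over/under-prediction tendency
mnge = np.abs(resid).mean()    # unsigned: overall predictive accuracy
```

    The signed and unsigned measures answer different questions: a near-zero bias with a large gross error means compensating over- and under-predictions, which is exactly the "right ozone for the wrong reasons" failure mode the diagnostic procedures are meant to catch.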

  13. Landing Procedure in Model Ditching Tests of Bf 109

    Science.gov (United States)

    Sottorf, W.

    1949-01-01

    The purpose of the model tests is to clarify the motions in the alighting on water of a landplane. After a discussion of the model laws, the test method and test procedure are described. The deceleration-time diagrams of the landing of a model of the Bf 109 show a high deceleration peak of greater than 20 g, which can be lowered to 4 to 6 g by radiator cowling and brake skid.

  14. The HSG procedure for modelling integrated urban wastewater systems.

    Science.gov (United States)

    Muschalla, D; Schütze, M; Schroeder, K; Bach, M; Blumensaat, F; Gruber, G; Klepiszewski, K; Pabst, M; Pressl, A; Schindler, N; Solvi, A-M; Wiese, J

    2009-01-01

    Whilst the importance of integrated modelling of urban wastewater systems is ever increasing, there is still no concise procedure regarding how to carry out such modelling studies. After briefly discussing some earlier approaches, the guideline for integrated modelling developed by the Central European Simulation Research Group (HSG - Hochschulgruppe) is presented. This contribution suggests a six-step standardised procedure to integrated modelling. This commences with an analysis of the system and definition of objectives and criteria, covers selection of modelling approaches, analysis of data availability, calibration and validation and also includes the steps of scenario analysis and reporting. Recent research findings as well as experience gained from several application projects from Central Europe have been integrated in this guideline.

  15. Using CV-GLUE procedure in analysis of wetland model predictive uncertainty.

    Science.gov (United States)

    Huang, Chun-Wei; Lin, Yu-Pin; Chiang, Li-Chi; Wang, Yung-Chieh

    2014-07-01

    This study develops a procedure related to Generalized Likelihood Uncertainty Estimation (GLUE), called the CV-GLUE procedure, for assessing the predictive uncertainty associated with different model structures of varying degrees of complexity. The proposed procedure comprises model calibration, validation, and predictive uncertainty estimation in terms of a characteristic coefficient of variation (characteristic CV). The procedure first performs two-stage Monte Carlo simulations to obtain behavioral parameter sets that ensure predictive accuracy, and then estimates the CV-values of the model outcomes, which represent the predictive uncertainties for a model structure of interest with its associated behavioral parameter sets. Three commonly used wetland models (the first-order K-C model, the plug flow with dispersion model, and the Wetland Water Quality Model, WWQM) were compared based on data collected from a free-water-surface constructed wetland with paddy cultivation in Taipei, Taiwan. The results show that the first-order K-C model, which is simpler than the other two models, has greater predictive uncertainty. This finding shows that predictive uncertainty does not necessarily increase with the complexity of the model structure, because in this case the more simplistic representation (first-order K-C model) of reality results in a higher uncertainty in the model's predictions. The CV-GLUE procedure is suggested as a useful tool not only for designing constructed wetlands but also for other aspects of environmental management.
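
    The GLUE core of record 15, Monte Carlo sampling, retention of "behavioral" parameter sets, then a characteristic CV of the retained predictions, can be sketched with a toy first-order decay model. The model form, inflow concentration, tolerance, and parameter range below are all invented assumptions, not the study's wetland models or data:

```python
import numpy as np

rng = np.random.default_rng(1)

c_in, observed_c_out = 10.0, 4.0   # mg/L, illustrative values

def first_order_model(k, hlr=0.1):
    """Toy first-order outlet concentration: C_out = C_in * exp(-k / hlr)."""
    return c_in * np.exp(-k / hlr)

# Stage 1: Monte Carlo sample the rate constant and keep only the
# 'behavioral' parameter sets whose prediction error is tolerable.
k_samples = rng.uniform(0.01, 0.3, 5000)
preds = first_order_model(k_samples)
behavioral = preds[np.abs(preds - observed_c_out) < 1.0]

# Stage 2: characteristic CV of the behavioral predictions, used as
# the predictive-uncertainty summary for this model structure.
cv = behavioral.std() / behavioral.mean()
```

    Repeating this for each candidate model structure and comparing the CVs is the essence of the comparison in the abstract: a simpler structure can end up with a larger CV, i.e. greater predictive uncertainty.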

  16. The ligation of the intersphincteric fistula tract procedure for anal fistula: a mixed bag of results.

    Science.gov (United States)

    Sirany, Anne-Marie E; Nygaard, Rachel M; Morken, Jeffrey J

    2015-06-01

    true efficacy of the procedure is unknown because of the number of technical variations and the pooled results reported in the literature.

  17. Procedural Skills Education – Colonoscopy as a Model

    Directory of Open Access Journals (Sweden)

    Maitreyi Raman

    2008-01-01

    Traditionally, surgical and procedural apprenticeship has been an assumed activity of students, without a formal educational context. With increasing barriers to patient and operating room access, such as a shorter work week for residents and operating room and endoscopy time at a premium, alternative strategies for maximizing procedural skill development are being considered. Recently, the traditional surgical apprenticeship model has been challenged, with greater emphasis on the need for surgical and procedural skills training to be more transparent and for alternatives to patient-based training to be considered. Colonoscopy performance is a complex psychomotor skill requiring practitioners to integrate multiple sensory inputs, and it involves higher cortical centres for optimal performance. Colonoscopy skills involve mastery in the cognitive, technical and process domains. In the present review, we propose a model for teaching colonoscopy to the novice trainee based on educational theory.

  18. Computer–Based Procedures for Nuclear Power Plant Field Workers: Preliminary Results from Two Evaluation Studies

    Energy Technology Data Exchange (ETDEWEB)

    Katya L Le Blanc; Johanna H Oxstrand

    2013-10-01

    The Idaho National Laboratory and participants from the U.S. nuclear industry are collaborating on a research effort aimed at augmenting the existing guidance on computer-based procedure (CBP) design with specific guidance on how to design CBP user interfaces such that they support procedure execution in ways that exceed the capabilities of paper-based procedures (PBPs) without introducing new errors. Researchers are employing an iterative process in which the human factors issues and interface design principles related to CBP usage are systematically addressed and evaluated in realistic settings. This paper describes the process of developing a CBP prototype and the two studies conducted to evaluate the prototype. The results indicate that CBPs may improve performance by reducing errors, but may increase the time it takes to complete procedural tasks.

  19. Effect of adipose tissue processing procedures in culture result: a study preliminary

    Directory of Open Access Journals (Sweden)

    Jeanne A. Pawitan

    2011-02-01

    Background: There are various methods of processing adipose tissue before culture, depending on the adipose tissue samples. The aim of this study was to compare several modifications of culturing and sub-culturing procedures for adipose tissue to fit the conditions in our laboratory. Method: This descriptive study was done in the Immunology and Endocrinology Integrated Laboratory, University of Indonesia, from October 2009 to April 2010. Three adipose tissue processing procedures, various seeding densities and two subculture methods were compared in terms of cell yield and time needed. In the first procedure, collagenase-1 digestion was done in 30 minutes, with cell seeding of 24,000 and 36,000 per flask; in the second procedure, collagenase-1 digestion was done in 60 minutes, with cell seeding of 24,000, 48,000, and 72,000 per flask; and in the third procedure, the adipose tissue remnants from the first procedure were digested again for another 45 minutes, with cell seeding of 74,000 and 148,000 per flask. The subculture methods differed in the presence or absence of a washing step. Results: Procedure 1 yielded the lowest number of cells; after culture, the cells grew very slowly, and the culture was contaminated before harvest of the primary culture. Procedures 2 and 3 succeeded in yielding primary cultures. Some of the cultures were contaminated, so further subculture was not applicable, and only one tissue processing procedure (procedure 2: 60-minute collagenase-1 digestion without lysis buffer, cell seeding of 48,000 and 72,000) could complete the three subcultures. Though some of the procedures could not be completed, a final conclusion could be drawn. Conclusion: In this preliminary study, 60-minute collagenase-1 digestion with intermittent shaking every 5 minutes and cell seeding of around 50,000 or more, followed by a subculture method without a washing step, gave the best result. (Med J Indones 2011; 20:15-9) Keywords: collagenase-1, primary culture, subculture

  20. A MODEL SELECTION PROCEDURE IN MIXTURE-PROCESS EXPERIMENTS FOR INDUSTRIAL PROCESS OPTIMIZATION

    Directory of Open Access Journals (Sweden)

    Márcio Nascimento de Souza Leão

    2015-08-01

    Full Text Available We present a model selection procedure for use in Mixture and Mixture-Process Experiments. Certain combinations of restrictions on the proportions of the mixture components can result in a very constrained experimental region. This results in collinearity among the covariates of the model, which can make it difficult to fit the model using the traditional method based on the significance of the coefficients. For this reason, a model selection methodology based on information criteria will be proposed for process optimization. Two examples are presented to illustrate this model selection procedure.
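The information-criterion selection strategy described in this abstract can be sketched in a few lines: fit each candidate model, then prefer the one with the smaller AIC rather than relying on coefficient significance. The data and the two candidate models below are invented for illustration; they are not from the study.

```python
import math

def fit_simple(xs, ys):
    """Least-squares fit y = a + b*x; returns the residual sum of squares."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    b = sxy / sxx
    a = my - b * mx
    return sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))

def aic(rss, n, k):
    """Gaussian AIC up to an additive constant: n*ln(RSS/n) + 2k,
    where k is the number of fitted parameters."""
    return n * math.log(rss / n) + 2 * k

xs = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6]
ys = [1.02, 1.21, 1.38, 1.64, 1.79, 2.01]   # nearly linear toy response
n = len(xs)

rss_null = sum((y - sum(ys) / n) ** 2 for y in ys)  # intercept-only model
rss_lin = fit_simple(xs, ys)                        # intercept + slope

# the model with the smaller AIC is preferred
print(aic(rss_null, n, 1) > aic(rss_lin, n, 2))  # prints True
```

The same comparison extends to any nested or non-nested set of candidate models, which is what makes it usable when collinearity undermines coefficient t-tests.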

  1. Modified uterine allotransplantation and immunosuppression procedure in the sheep model.

    Directory of Open Access Journals (Sweden)

    Li Wei

    Full Text Available OBJECTIVE: To develop an orthotopic, allogeneic, uterine transplantation technique and an effective immunosuppressive protocol in the sheep model. METHODS: In this pilot study, 10 sexually mature ewes were subjected to laparotomy and total abdominal hysterectomy with oophorectomy to procure uterus allografts. The cold ischemic time was 60 min. End-to-end vascular anastomosis was performed using continuous, non-interlocking sutures. Complete tissue reperfusion was achieved in all animals within 30 s after the vascular re-anastomosis, without any evidence of arterial or venous thrombosis. The immunosuppressive protocol consisted of tacrolimus, mycophenolate mofetil and methylprednisolone tablets. Graft viability was assessed by transrectal ultrasonography and second-look laparotomy at 2 and 4 weeks, respectively. RESULTS: Viable uterine tissue and vascular patency were observed on transrectal ultrasonography and second-look laparotomy. Histological analysis of the graft tissue (performed in one ewe revealed normal tissue architecture with a very subtle inflammatory reaction but no edema or stasis. CONCLUSION: We have developed a modified procedure that allowed us to successfully perform orthotopic, allogeneic, uterine transplantation in sheep, whose uterine and vascular anatomy (apart from the bicornuate uterus is similar to the human anatomy, making the ovine model excellent for human uterine transplant research.

  2. Results of open surgery after endoscopic basket impaction during the ERCP procedure

    Institute of Scientific and Technical Information of China (English)

    Sezgin Yilmaz; Ogun Ersen; Taner Ozkececi; Kadir S Turel; Serdar Kokulu; Emre Kacar; Murat Akici; Murat Cilekar; Ozgur Kavak; Yuksel Arikan

    2015-01-01

AIM: To report the results of open surgery for patients with basket impaction during the endoscopic retrograde cholangiopancreatography (ERCP) procedure. METHODS: Basket impaction of either a classical Dormia basket or a mechanical lithotripter basket with an entrapped stone occurred in six patients. These patients were immediately operated on for removal of the stone(s) and the impacted basket. The postoperative course, length of hospital stay, diameter of the stone, complications and the surgical procedure were reported retrospectively. RESULTS: Six patients (M/F, 0/6) were operated on due to an impacted basket during the ERCP procedure. The mean age of the patients was 64.33 ± 14.41 years. In all cases the surgery was performed immediately after the failed ERCP procedure through a right subcostal incision. The baskets containing the stone were removed through a longitudinal choledochotomy. The choledochotomy incisions were closed by primary closure in four patients and T-tube placement in two patients. All patients also underwent cholecystectomy since they had cholelithiasis. In patients with T-tube placement, the tube was removed on the 13th day after a normal T-tube cholangiogram. The patients remained stable in the postoperative period and were discharged without any complication at a median of 7 d. CONCLUSION: Open surgical procedures can be applied in selected patients with basket impaction during the ERCP procedure.

  3. [Clinical results after Sauve-Kapandji procedure in relation to diagnosis].

    Science.gov (United States)

    Daecke, W; Martini, A-K; Schneider, S; Streich, N A

    2004-11-01

    We present the results of a retrospective study on 56 patients who underwent the Sauve-Kapandji procedure for chronic disorders of the distal radioulnar joint (DRUJ). Outcome was assessed with special regard to the diagnosis. The average follow-up was 5.9 years (1-12 years). Patients were assessed for pain, range of motion of wrist and forearm, and radiological features. The DASH score and Mayo wrist score were used. The diagnosis had an influence on the outcome. Patients with primary arthrosis of the DRUJ demonstrated better results than patients with traumatic disorders. Patients with growth deficiency-related complaint of the DRUJ showed slightly inferior results after the Sauve-Kapandji procedure compared to all patients. Patients were free of pain or had pain only during heavy labor in 81% of cases; 95% of the patients rated the outcome as excellent or improved, but only 50% were free of symptoms on the operated side during heavy manual labor. Symptoms of ulnar impingement were found in 11%. Improvement in range of motion of wrist and forearm was significant. The postoperative DASH score was 24.2+/-22.5 and the Mayo wrist score was 76.1+/-17.6. Our results confirm the Sauve-Kapandji procedure to be a reliable salvage procedure resulting in high patient satisfaction and reliable improvement in range of motion. However, decreased grip strength on the affected side must be accepted to some extent. The diagnosis of a DRUJ disorder influences the outcome.

  4. A baseline-free procedure for transformation models under interval censorship.

    Science.gov (United States)

    Gu, Ming Gao; Sun, Liuquan; Zuo, Guoxin

    2005-12-01

An important property of the Cox regression model is that the estimation of regression parameters using the partial likelihood procedure does not depend on its baseline survival function. We call such a procedure baseline-free. Using marginal likelihood, we show that a baseline-free procedure can be derived for a class of general transformation models under the interval censoring framework. The baseline-free procedure results in a simplified and stable computation algorithm for some complicated and important semiparametric models, such as frailty models and heteroscedastic hazard/rank regression models, where the estimation procedures so far available involve estimation of the infinite-dimensional baseline function. A detailed computational algorithm using Markov chain Monte Carlo stochastic approximation is presented. The proposed procedure is demonstrated through extensive simulation studies, showing the validity of asymptotic consistency and normality. We also illustrate the procedure with a real data set from a study of breast cancer. A heuristic argument showing that the score function is a mean-zero martingale is provided.
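The "baseline-free" property of the Cox partial likelihood mentioned in this abstract can be made concrete: the partial log-likelihood below is computed entirely from covariates and risk sets, and no baseline hazard term appears anywhere. The toy data are invented for illustration and have no ties or the interval censoring the paper addresses.

```python
import math

def cox_partial_loglik(beta, times, events, x):
    """Cox partial log-likelihood for a single covariate.
    Note that no baseline hazard appears -- the 'baseline-free' property."""
    ll = 0.0
    for i, (t_i, d_i) in enumerate(zip(times, events)):
        if not d_i:   # censored observations contribute only through risk sets
            continue
        risk = [j for j, t_j in enumerate(times) if t_j >= t_i]
        ll += beta * x[i] - math.log(sum(math.exp(beta * x[j]) for j in risk))
    return ll

# toy data: event times, event indicator (1 = observed, 0 = censored), covariate
times  = [2.0, 3.5, 4.0, 5.1, 6.3]
events = [1,   1,   0,   1,   1]
x      = [1.0, 0.0, 1.0, 0.0, 1.0]

# crude grid search for the maximizing beta (a real fit would use Newton's method)
best_ll, best_beta = max(
    (cox_partial_loglik(b / 10, times, events, x), b / 10) for b in range(-30, 31)
)
print(best_beta)
```

At beta = 0 each event simply contributes minus the log of its risk-set size, which makes the function easy to sanity-check by hand.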

  5. Evaluating procedural modelling for 3D models of informal settlements in urban design activities

    Directory of Open Access Journals (Sweden)

    Victoria Rautenbach

    2015-11-01

    Full Text Available Three-dimensional (3D modelling and visualisation is one of the fastest growing application fields in geographic information science. 3D city models are being researched extensively for a variety of purposes and in various domains, including urban design, disaster management, education and computer gaming. These models typically depict urban business districts (downtown or suburban residential areas. Despite informal settlements being a prevailing feature of many cities in developing countries, 3D models of informal settlements are virtually non-existent. 3D models of informal settlements could be useful in various ways, e.g. to gather information about the current environment in the informal settlements, to design upgrades, to communicate these and to educate inhabitants about environmental challenges. In this article, we described the development of a 3D model of the Slovo Park informal settlement in the City of Johannesburg Metropolitan Municipality, South Africa. Instead of using time-consuming traditional manual methods, we followed the procedural modelling technique. Visualisation characteristics of 3D models of informal settlements were described and the importance of each characteristic in urban design activities for informal settlement upgrades was assessed. Next, the visualisation characteristics of the Slovo Park model were evaluated. The results of the evaluation showed that the 3D model produced by the procedural modelling technique is suitable for urban design activities in informal settlements. The visualisation characteristics and their assessment are also useful as guidelines for developing 3D models of informal settlements. In future, we plan to empirically test the use of such 3D models in urban design projects in informal settlements.

  6. The Bootstrap and Multiple Comparisons Procedures as Remedy on Doubts about Correctness of ANOVA Results

    Directory of Open Access Journals (Sweden)

    Izabela CHMIEL

    2012-03-01

Full Text Available Aim: To determine and analyse an alternative methodology for the analysis of a set of Likert responses measured on a common attitudinal scale when the primary focus of interest is on the relative importance of items in the set, with primary application to health-related quality of life (HRQOL) measures. HRQOL questionnaires usually generate data that manifest evident departures from the fundamental assumptions of the Analysis of Variance (ANOVA) approach, not only because of their discrete, bounded and skewed distributions, but also due to significant correlation between mean scores and their variances. Material and Methods: A questionnaire survey with the SF-36 was conducted among 142 convalescents after acute pancreatitis. The estimated HRQOL scores were compared using multiple comparisons procedures under a Bonferroni-like adjustment, and using bootstrap procedures. Results: In the data set studied, with the SF-36 outcome, the use of the multiple comparisons and bootstrap procedures for analysing HRQOL data provides results quite similar to the conventional ANOVA and Rasch methods suggested within the frameworks of Classical Test Theory and Item Response Theory. Conclusions: These results suggest that the multiple comparisons and bootstrap procedures are both valid methods for analysing HRQOL outcome data, particularly in case of doubts about the appropriateness of the standard methods. Moreover, from a practical point of view, the multiple comparisons and bootstrap procedures seem to be much easier for non-statisticians aiming to practise evidence-based health care to interpret.
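A percentile bootstrap of the kind this abstract refers to can be sketched with the standard library alone: resample each group with replacement, recompute the difference in means, and read the confidence interval off the sorted resampled differences. The scores below are invented Likert-style values on a 0-100 scale, not SF-36 data from the study.

```python
import random

def bootstrap_ci(a, b, reps=5000, alpha=0.05, seed=1):
    """Percentile bootstrap CI for the difference in mean scores between two
    groups of bounded, possibly skewed HRQOL-like responses."""
    rng = random.Random(seed)
    diffs = []
    for _ in range(reps):
        ra = [rng.choice(a) for _ in a]   # resample group A with replacement
        rb = [rng.choice(b) for _ in b]   # resample group B with replacement
        diffs.append(sum(ra) / len(ra) - sum(rb) / len(rb))
    diffs.sort()
    lo = diffs[int((alpha / 2) * reps)]
    hi = diffs[int((1 - alpha / 2) * reps) - 1]
    return lo, hi

# invented scores for two groups (0-100 scale, as in SF-36 domains)
g1 = [55, 60, 70, 65, 80, 75, 50, 60, 90, 85]
g2 = [40, 45, 55, 50, 60, 35, 45, 50, 65, 55]

lo, hi = bootstrap_ci(g1, g2)
print(lo > 0)   # interval excludes 0 -> the groups differ; prints True
```

For several pairwise comparisons, the same interval can be computed at a Bonferroni-adjusted level (alpha divided by the number of comparisons), which is the adjustment the abstract mentions.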

  7. Displacement-Based Seismic Design Procedure for Framed Buildings with Dissipative Braces Part II: Numerical Results

    Science.gov (United States)

    Mazza, Fabio; Vulcano, Alfonso

    2008-07-01

For a widespread application of dissipative braces to protect framed buildings against seismic loads, practical and reliable design procedures are needed. In this paper a design procedure based on the Direct Displacement-Based Design approach is adopted, assuming the elastic lateral storey stiffness of the damped braces proportional to that of the unbraced frame. To check the effectiveness of the design procedure, presented in an associated paper, a six-storey reinforced concrete plane frame, representative of a medium-rise symmetric framed building, is considered as the primary test structure; this structure, designed for a medium-risk region, is supposed to be retrofitted as in a high-risk region by insertion of diagonal braces equipped with hysteretic dampers. A numerical investigation is carried out to study the nonlinear static and dynamic responses of the primary and the damped braced test structures, using the step-by-step procedures described in the associated paper mentioned above; the behaviour of frame members and hysteretic dampers is idealized by bilinear models. Real and artificial accelerograms, matching the EC8 response spectrum for a medium soil class, are considered for the dynamic analyses.

  8. Detection of cow milk in donkey milk by chemometric procedures on triacylglycerol stereospecific analysis results.

    Science.gov (United States)

    Cossignani, Lina; Blasi, Francesca; Bosi, Ancilla; D'Arco, Gilda; Maurelli, Silvia; Simonetti, Maria Stella; Damiani, Pietro

    2011-08-01

    Stereospecific analysis is an important tool for the characterization of lipid fraction of food matrices, and also of milk samples. The results of a chemical-enzymatic-chromatographic analytical method were elaborated by chemometric procedures such as linear discriminant analysis (LDA) and artificial neural network (ANN). According to the total composition and intrapositional fatty acid distribution in the triacylglycerol (TAG) backbone, the obtained results were able to characterize pure milk samples and milk mixtures with 1, 3, 5% cow milk added to donkey milk. The resulting score was very satisfactory. Totally correct classified samples were obtained when the TAG stereospecific results of all the considered milk mixtures (donkey-cow) were elaborated by LDA and ANN chemometric procedures.

  9. Results and comparison of seven accelerated cycling test procedures for the photovoltaic application

    Science.gov (United States)

    Potteau, E.; Desmettre, D.; Mattera, F.; Bach, O.; Martin, J.-L.; Malbranche, P.

    Choosing the right battery for a given photovoltaic (PV) system is a key question, because it strongly influences the cost and reliability of the system. This paper presents results of seven test procedures experienced at the GENEC battery test facility. A set of four complementary tests is selected, covering the various types of photovoltaic systems. Moreover, the analysis of these results gives an estimation of the ageing rate for the different types of batteries used in photovoltaic systems.

  10. Complications after Surgical Procedures in Patients with Cardiac Implantable Electronic Devices: Results of a Prospective Registry

    Science.gov (United States)

    da Silva, Katia Regina; Albertini, Caio Marcos de Moraes; Crevelari, Elizabeth Sartori; de Carvalho, Eduardo Infante Januzzi; Fiorelli, Alfredo Inácio; Martinelli Filho, Martino; Costa, Roberto

    2016-01-01

Background: Complications after surgical procedures in patients with cardiac implantable electronic devices (CIED) are an emerging problem due to an increasing number of such procedures and aging of the population, which consequently increases the frequency of comorbidities. Objective: To identify the rates of postoperative complications, mortality, and hospital readmissions, and evaluate the risk factors for the occurrence of these events. Methods: Prospective and unicentric study that included all individuals undergoing CIED surgical procedures from February to August 2011. The patients were distributed by type of procedure into the following groups: initial implantations (cohort 1), generator exchange (cohort 2), and lead-related procedures (cohort 3). The outcomes were evaluated by an independent committee. Univariate and multivariate analyses assessed the risk factors, and the Kaplan-Meier method was used for survival analysis. Results: A total of 713 patients were included in the study and distributed as follows: 333 in cohort 1, 304 in cohort 2, and 76 in cohort 3. Postoperative complications were detected in 7.5%, 1.6%, and 11.8% of the patients in cohorts 1, 2, and 3, respectively (p = 0.014). During a 6-month follow-up, there were 58 (8.1%) deaths and 75 (10.5%) hospital readmissions. Predictors of hospital readmission included the use of implantable cardioverter-defibrillators (odds ratio [OR] = 4.2), functional class III-IV (OR = 1.8), and warfarin administration (OR = 1.9). Predictors of mortality included age over 80 years (OR = 2.4), ventricular dysfunction (OR = 2.2), functional class III-IV (OR = 3.3), and warfarin administration (OR = 2.3). Conclusions: Postoperative complications, hospital readmissions, and deaths occurred frequently and were strongly related to the type of procedure performed, type of CIED, and severity of the patient's underlying heart disease. PMID:27579544
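The odds ratios quoted in this abstract come from a multivariate analysis; as a minimal illustration of the quantity itself, an odds ratio can be computed from a 2x2 table of event counts. The counts below are hypothetical and are not the registry's data.

```python
def odds_ratio(exposed_event, exposed_no, unexposed_event, unexposed_no):
    """Odds ratio from a 2x2 table: odds of the event among the exposed
    divided by the odds among the unexposed."""
    return (exposed_event / exposed_no) / (unexposed_event / unexposed_no)

# hypothetical counts: readmissions among ICD recipients vs. other patients
# (20 of 100 ICD patients readmitted vs. 15 of 295 others)
print(round(odds_ratio(20, 80, 15, 280), 1))  # prints 4.7
```

An OR above 1 means the event is more likely among the exposed group; a multivariate model such as logistic regression produces the same quantity adjusted for the other covariates.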

  11. Recreation of architectural structures using procedural modeling based on volumes

    Directory of Open Access Journals (Sweden)

    Santiago Barroso Juan

    2013-11-01

Full Text Available While the procedural modeling of buildings and other architectural structures has evolved very significantly in recent years, there is a noticeable absence of high-level tools that allow a designer, an artist or a historian to create important buildings or architectonic structures of a particular city. In this paper we present a tool for creating buildings in a simple and clear way, following rules that use the buildings' own language and creation methodology, and hiding from the user the algorithmic details of the creation of the model.

  12. Procedural Modeling for Rapid-Prototyping of Multiple Building Phases

    Science.gov (United States)

    Saldana, M.; Johanson, C.

    2013-02-01

    RomeLab is a multidisciplinary working group at UCLA that uses the city of Rome as a laboratory for the exploration of research approaches and dissemination practices centered on the intersection of space and time in antiquity. In this paper we present a multiplatform workflow for the rapid-prototyping of historical cityscapes through the use of geographic information systems, procedural modeling, and interactive game development. Our workflow begins by aggregating archaeological data in a GIS database. Next, 3D building models are generated from the ArcMap shapefiles in Esri CityEngine using procedural modeling techniques. A GIS-based terrain model is also adjusted in CityEngine to fit the building elevations. Finally, the terrain and city models are combined in Unity, a game engine which we used to produce web-based interactive environments which are linked to the GIS data using keyhole markup language (KML). The goal of our workflow is to demonstrate that knowledge generated within a first-person virtual world experience can inform the evaluation of data derived from textual and archaeological sources, and vice versa.
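The final linkage step in the workflow above attaches GIS records to the interactive environment via KML. A minimal Placemark generator using only the standard library is sketched below; the element layout and the example placemark are illustrative assumptions, not RomeLab's actual schema or data.

```python
import xml.etree.ElementTree as ET

def placemark_kml(name, lon, lat, description=""):
    """Build a minimal KML document containing one Placemark -- the kind of
    record used to link a game-engine object back to a GIS database row."""
    kml = ET.Element("kml", xmlns="http://www.opengis.net/kml/2.2")
    doc = ET.SubElement(kml, "Document")
    pm = ET.SubElement(doc, "Placemark")
    ET.SubElement(pm, "name").text = name
    ET.SubElement(pm, "description").text = description
    point = ET.SubElement(pm, "Point")
    # KML coordinates are lon,lat,altitude
    ET.SubElement(point, "coordinates").text = f"{lon},{lat},0"
    return ET.tostring(kml, encoding="unicode")

# hypothetical placemark for one reconstructed building
print("<Placemark>" in placemark_kml("Temple of Saturn", 12.4837, 41.8925))
```

In practice the description element (or extended data) would carry the GIS attribute values so that clicking an object in the game engine can surface the underlying archaeological record.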

  13. Using Video Modeling with Voiceover Instruction Plus Feedback to Train Staff to Implement Direct Teaching Procedures.

    Science.gov (United States)

    Giannakakos, Antonia R; Vladescu, Jason C; Kisamore, April N; Reeve, Sharon A

    2016-06-01

Direct teaching procedures are often an important part of early intensive behavioral intervention for consumers with autism spectrum disorder. In the present study, a video model with voiceover (VMVO) instruction plus feedback was evaluated to train three staff trainees to implement a most-to-least (MTL) direct teaching procedure. Probes for generalization were conducted with untrained direct teaching procedures (i.e., least-to-most, prompt delay) and with an actual consumer. The results indicated that VMVO plus feedback was effective in training the staff trainees to implement the MTL procedure. Although additional feedback was required for the staff trainees to show mastery of the untrained direct teaching procedures and with an actual consumer, moderate to high levels of generalization were observed.

  14. a Procedural Solution to Model Roman Masonry Structures

    Science.gov (United States)

    Cappellini, V.; Saleri, R.; Stefani, C.; Nony, N.; De Luca, L.

    2013-07-01

The paper describes a new approach based on the development of a procedural modelling methodology for archaeological data representation. This is a custom-designed solution based on the recognition of the rules belonging to the construction methods used in Roman times. We have conceived a tool for 3D reconstruction of masonry structures starting from photogrammetric surveying. Our protocol considers different steps. Firstly, we focused on the classification of opus based on the basic interconnections that can lead to a descriptive system used for their unequivocal identification and design. Secondly, we chose an automatic, accurate, flexible and open-source photogrammetric pipeline named Pastis Apero Micmac (PAM), developed by IGN (Paris). We employed it to generate ortho-images from non-oriented images, using a user-friendly interface implemented by CNRS Marseille (France). Thirdly, the masonry elements are created in a parametric and interactive way, and finally they are adapted to the photogrammetric data. The presented application, currently under construction, is developed with an open-source programming language called Processing, useful for visual, animated or static, 2D or 3D, interactive creations. Using this computer language, a Java environment has been developed. Therefore, even if procedural modelling reveals an accuracy level inferior to the one obtained by manual modelling (brick by brick), this method can be useful when taking into account the static evaluation of buildings (requiring quantitative aspects) and metric measures for restoration purposes.

  15. A limited assessment of the ASEP human reliability analysis procedure using simulator examination results

    Energy Technology Data Exchange (ETDEWEB)

    Gore, B.R.; Dukelow, J.S. Jr.; Mitts, T.M.; Nicholson, W.L. [Pacific Northwest Lab., Richland, WA (United States)

    1995-10-01

This report presents a limited assessment of the conservatism of the Accident Sequence Evaluation Program (ASEP) human reliability analysis (HRA) procedure described in NUREG/CR-4772. In particular, the ASEP post-accident, post-diagnosis, nominal HRA procedure is assessed within the context of an individual's performance of critical tasks on the simulator portion of requalification examinations administered to nuclear power plant operators. An assessment of the degree to which operator performance during simulator examinations is an accurate reflection of operator performance during actual accident conditions was outside the scope of work for this project; therefore, no direct inference can be made from this report about such performance. The data for this study are derived from simulator examination reports from the NRC requalification examination cycle. A total of 4071 critical tasks were identified, of which 45 had been failed. The ASEP procedure was used to estimate human error probability (HEP) values for critical tasks, and the HEP results were compared with the failure rates observed in the examinations. The ASEP procedure was applied by PNL operator license examiners who supplemented the limited information in the examination reports with expert judgment based upon their extensive simulator examination experience. ASEP analyses were performed for a sample of 162 critical tasks selected randomly from the 4071, and the results were used to characterize the entire population. ASEP analyses were also performed for all of the 45 failed critical tasks. Two tests were performed to assess the bias of the ASEP HEPs compared with the data from the requalification examinations. The first compared the average of the ASEP HEP values with the fraction of the population actually failed, and it found a statistically significant factor-of-two bias on the average.
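The first bias test described in this abstract reduces to comparing the mean estimated HEP with the observed failure fraction (45 of 4071 tasks). A minimal sketch follows; the task counts are from the abstract, but the sample of HEP values is invented purely for illustration.

```python
def bias_factor(estimated_heps, n_tasks, n_failed):
    """Ratio of the mean estimated human error probability to the observed
    failure fraction; a value above 1 indicates conservative (pessimistic)
    estimates on average."""
    observed = n_failed / n_tasks
    return (sum(estimated_heps) / len(estimated_heps)) / observed

# 4071 critical tasks with 45 failures, as reported in the study;
# the HEP values below are an invented illustrative sample
sample_heps = [0.02, 0.03, 0.01, 0.05, 0.02, 0.015, 0.04, 0.01]
print(round(bias_factor(sample_heps, 4071, 45), 1))  # prints 2.2
```

A full assessment would also account for sampling uncertainty (e.g., a binomial interval around the observed fraction) before declaring the bias statistically significant.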

  16. NASA Glenn Icing Research Tunnel: 2012 Cloud Calibration Procedure and Results

    Science.gov (United States)

    VanZante, Judith Foss; Ide, Robert F.; Steen, Laura E.

    2012-01-01

In 2011, NASA Glenn's Icing Research Tunnel underwent a major modification to its refrigeration plant and heat exchanger. This paper presents the results of the subsequent full cloud calibration. Details of the calibration procedure and results are presented herein. The steps include developing a nozzle transfer map, establishing a uniform cloud, conducting a drop-sizing calibration and finally a liquid water content calibration. The goal of the calibration is to develop a uniform cloud and to build a transfer map from the inputs of air speed, spray bar atomizing air pressure and water pressure to the outputs of median volumetric droplet diameter and liquid water content.

  17. Long-term results of the Sauvé-Kapandji procedure in the rheumatoid wrist.

    Science.gov (United States)

    Papp, Miklós; Papp, Levente; Lenkei, Balázs; Károlyi, Zoltán

    2013-12-01

    This retrospective long-term study evaluates the clinical and radiological results of the Sauvé-Kapandji procedure in rheumatoid wrists. Fourteen patients with rheumatoid arthritis who had undergone a Sauvé-Kapandji procedure were examined 10 to 16.5 years after surgery. Range of motion and grip strength were measured. The patients' complaints related with instability of the ulnar stump, the residual pain in the wrist, and the function of the operated hand were assessed. The review also included a radiological examination. Pain was found to have decreased and the gripping strength of the hand to have increased in all the patients. The range of wrist rotation was significantly improved. On radiographs, there were no signs of increased ulnar translation of the carpus. We noted no instance of subluxation or dislocation of the ulnar stump. In this long-term evaluation, the Sauvé-Kapandji procedure was found to provide long-term improvement of the function of the wrist-hand complex, by eliminating the distal radio-ulnar joint which is a major source of pain in the rheumatoid wrist.

  18. The effectiveness of cognitive-behavioral interventions in reduction of distress resulting from dentistry procedures

    Directory of Open Access Journals (Sweden)

    Abolghasemi A

    2007-06-01

Full Text Available Background and Aim: Dental anxiety is a common problem in pediatric dentistry and results in behaviors like fear and anger that can negatively affect dental treatments. Exposure to various dental treatments and distressful experiences are reasons for anxiety during dental treatments. The aim of this study was to evaluate the effect of cognitive-behavioral interventions in reducing stress during dental procedures in children. Materials and Methods: In this clinical trial, 42 boys and girls undergoing dental treatments were selected from dental clinics in Tehran. Patients were assigned to cognitive-behavioral intervention, placebo and control conditions. The fear scale, anger facial scale, pain facial scale and the physiologic measure of pulse rate were evaluated. One-way ANOVA and the Tukey test were used to analyze the results, and p<0.05 was the level of significance. Results: Results showed significant differences between the cognitive-behavioral intervention, placebo and control groups regarding fear, anger, pain and pulse rate. Comparison tests revealed that cognitive-behavioral interventions were more effective in reducing fear, anger, pain and pulse rate compared to the placebo or control. Conclusion: According to the results of this study, cognitive-behavioral interventions can be used to reduce the distress of children undergoing dental procedures.

  19. Immediate results of aortic valve reconstruction by using autologous pericardium (Ozaki procedure

    Directory of Open Access Journals (Sweden)

    Е. В. Россейкин

    2016-11-01

Full Text Available Aim. The study was designed to compare the immediate echocardiographic characteristics of aortic valve reconstruction using autologous pericardium by the method proposed in 2007 by Shigeyuki Ozaki with those of aortic valve replacement by means of the frame-mounted biological prostheses Medtronic HANCOCK®II T505 CINCH® II and Carpentier-Edwards PERIMOUNT. Methods. Over the period from January 2014 to February 2016, 76 patients underwent aortic valve replacement by means of the frame-mounted biological prostheses Medtronic HANCOCK®II T505 CINCH® II (n=41) and Carpentier-Edwards PERIMOUNT (n=35) at our hospital, and 20 patients underwent the Ozaki procedure. These three groups of patients were assigned to the study. Demographic and preoperative indicators of patients from all three groups were homogeneous (p>0.05). The evaluation of the replaced aortic valves was carried out by echocardiography. Results. Echocardiography was performed before the procedure and in the early postoperative period. Statistical analysis using ANOVA showed significantly lower values of the aortic valve pressure gradient (p<0.001) and a larger effective orifice area and indexed effective orifice area of the valve (p<0.001) in the Ozaki group. Conclusion. According to echocardiography data, in the immediate postoperative period the Ozaki procedure is associated with lower mean and peak pressure gradients on the aortic valve and a larger effective orifice area and indexed effective orifice area of the valve, as compared with the frame-mounted biological aortic prostheses Medtronic HANCOCK®II T505 CINCH® II and Carpentier-Edwards PERIMOUNT. Received 27 May 2016. Accepted 24 June 2016. Funding: The study had no sponsorship. Conflict of interest: The authors declare no conflict of interest.

  20. Immediate results of aortic valve reconstruction by using autologous pericardium (Ozaki procedure

    Directory of Open Access Journals (Sweden)

    Е. В. Россейкин

    2016-08-01

Full Text Available Aim: The study was designed to compare the immediate echocardiographic characteristics of aortic valve reconstruction using autologous pericardium by the method proposed in 2007 by Shigeyuki Ozaki with those of aortic valve replacement by means of the frame-mounted biological prostheses Medtronic HANCOCK®II T505 CINCH® II and Carpentier-Edwards PERIMOUNT. Methods: Over the period from January 2014 to February 2016, 76 patients underwent aortic valve replacement by means of the frame-mounted biological prostheses Medtronic HANCOCK®II T505 CINCH® II (n=41) and Carpentier-Edwards PERIMOUNT (n=35) at our hospital, and 20 patients underwent the Ozaki procedure. These three groups of patients were assigned to the study. Demographic and preoperative indicators of patients from all three groups were homogeneous (p>0.05). The evaluation of the replaced aortic valves was carried out by echocardiography. Results: Echocardiography was performed before the procedure and in the early postoperative period. Statistical analysis using ANOVA showed significantly lower values of the aortic valve pressure gradient (p<0.001) and a larger effective orifice area and indexed effective orifice area of the valve (p<0.001) in the Ozaki group. Conclusion: According to echocardiography data, in the immediate postoperative period the Ozaki procedure is associated with lower mean and peak pressure gradients on the aortic valve and a larger effective orifice area and indexed effective orifice area of the valve, as compared with the frame-mounted biological aortic prostheses Medtronic HANCOCK®II T505 CINCH® II and Carpentier-Edwards PERIMOUNT. Funding: The study had no sponsorship. Conflict of interest: The authors declare no conflict of interest.

  1. DoD Simulations: Improved Assessment Procedures Would Increase the Credibility of Results.

    Science.gov (United States)

    1987-12-01

antiaircraft gun models. In 1971, during the gun air defense effectiveness study, a simulation model for the VULCAN was built and validated with field... "Results from CARMONETTE/TRASANA Simulation Model: TRASANA Executive Summary." Draft, White Sands Missile Range, New Mexico, 1985. CONFIDENTIAL. U.S.

  2. [Extraarticular Subtalar Arthrodesis with the Grice Procedure in Children with Cerebral Palsy: Mid-Term Results].

    Science.gov (United States)

    Němejcová, E; Schejbalová, A; Trč, T; Havlas, V

    2016-01-01

    PURPOSE OF THE STUDY The aim of the study was to evaluate, on the basis of radiographic findings and AOFAS scores, the results of the Grice extra-articular subtalar arthrodesis for treatment of planovalgus foot deformity in cerebral palsy patients. MATERIAL AND METHODS A total of 38 patients (50 feet) with cerebral palsy indicated for the Grice procedure for planovalgus foot deformity between 2006 and 2010 were assessed. The group comprised 18 girls and 20 boys, of whom 10 had spastic quadriparesis (four undergoing bilateral surgery), three had triparesis, four had hemiparesis and 21 had diparesis (treated on both sides in eight). The average age at surgery was 12 years (range, 7 years and 2 months to 17 years and 8 months). All patients were evaluated based on the AOFAS scoring system and radiographic findings before and after surgery. RESULTS The average follow-up was 4.5 years. The average AOFAS score increased from 54.9 points pre-operatively to 76.6 points post-operatively. The pre- and post-operative average values for the talocalcaneal angle were 49.8° and 25°, respectively; for the calcaneal inclination angle they were 8.6° and 13.4°, respectively. DISCUSSION The Grice procedure has long been considered a primary surgical treatment for planovalgus foot deformity in patients with cerebral palsy. Recently, calcaneal osteotomy has been used more frequently, but with no evidence of provably better results. CONCLUSIONS The mid-term results of the Grice extra-articular arthrodesis in our group of cerebral palsy children were very good in terms of both radiographic and AOFAS score evaluation; the latter includes objective assessment as well as the patient's subjective evaluation. Key words: Grice procedure, extra-articular subtalar arthrodesis, cerebral palsy, planovalgus foot deformity.

  3. CT-guided vertebroplasty: analysis of technical results, extraosseous cement leakages, and complications in 500 procedures

    Energy Technology Data Exchange (ETDEWEB)

    Pitton, Michael Bernhard; Herber, Sascha; Koch, Ulrike; Oberholzer, Katja; Dueber, Christoph [Johannes Gutenberg-University of Mainz, Department of Diagnostic and Interventional Radiology, Mainz (Germany); Drees, Philip [University Hospital, Johannes Gutenberg-University of Mainz, Department of Orthopedic Surgery, Mainz (Germany)

    2008-11-15

    The aim of this study was to analyze the technical results, the extraosseous cement leakages, and the complications in our first 500 vertebroplasty procedures. Patients with osteoporotic vertebral compression fractures or osteolytic lesions caused by malignant tumors were treated with CT-guided vertebroplasty. The technical results were documented with CT, and the extraosseous cement leakages and periinterventional clinical complications were analyzed, as well as secondary fractures during follow-up. Since 2002, 500 vertebroplasty procedures have been performed on 251 patients (82 male, 169 female, age 71.5 ± 9.8 years) suffering from osteoporotic compression fractures (n = 217) and/or malignant tumour infiltration (n = 34). The number of vertebrae treated per patient was 1.96 ± 1.29 (range 1-10); the numbers of interventions per patient and interventions per vertebra were 1.33 ± 0.75 (range 1-6) and 1.01 ± 0.10, respectively. The amount of PMMA cement was 4.5 ± 1.9 ml and decreased during the 5-year period of investigation. The procedure-related 30-day mortality was 0.4% (1 of 251 patients), due to pulmonary embolism in this case. The procedure-related morbidity was 2.8% (7/251), including one acute coronary syndrome beginning 12 h after the procedure and one missing patellar reflex in a patient with a cement leak near the neuroforamen because of osteolytic destruction of the respective pedicle. Additionally, one patient developed a medullary conus syndrome after a fall during the night after vertebroplasty, two patients reached an inadequate depth of conscious sedation, and two cases had additional fractures (one pedicle fracture, one rib fracture). The overall CT-based cement leak rate was 55.4% and included leakages predominantly into intervertebral disc spaces (25.2%), epidural vein plexus (16.0%), through the posterior wall (2.6%), into the neuroforamen (1.6%), into paravertebral vessels (7.2%), and combinations of these and others. 
During follow

  4. Lumping procedure for a kinetic model of catalytic naphtha reforming

    Directory of Open Access Journals (Sweden)

    H. M. Arani

    2009-12-01

    A lumping procedure is developed for obtaining kinetic and thermodynamic parameters of catalytic naphtha reforming. All kinetic and deactivation parameters are estimated from industrial data and thermodynamic parameters are calculated from derived mathematical expressions. The proposed model contains 17 lumps that include the C6 to C8+ hydrocarbon range and 15 reaction pathways. Hougen-Watson Langmuir-Hinshelwood type reaction rate expressions are used for kinetic simulation of catalytic reactions. The kinetic parameters are benchmarked with several sets of plant data and estimated by the SQP optimization method. After calculation of deactivation and kinetic parameters, plant data are compared with model predictions and only minor deviations between experimental and calculated data are generally observed.
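    Parameter estimation by SQP, as described in this record, can be sketched with SciPy's SLSQP routine. The single-lump first-order rate model and the "plant" data below are hypothetical stand-ins for the paper's 17-lump network; only the fitting step is illustrated.

    ```python
    # Hypothetical sketch: fitting a first-order rate constant for one lump
    # to made-up plant data with SciPy's SQP implementation (SLSQP).
    import math
    from scipy.optimize import minimize

    t_data = [0.0, 1.0, 2.0, 4.0, 8.0]       # residence times (arbitrary units)
    c_data = [1.00, 0.61, 0.37, 0.14, 0.02]  # "measured" lump concentrations

    def sse(params):
        """Sum of squared errors between model c(t) = exp(-k t) and data."""
        (k,) = params
        return sum((c - math.exp(-k * t)) ** 2 for t, c in zip(t_data, c_data))

    res = minimize(sse, x0=[0.1], method="SLSQP", bounds=[(0.0, 10.0)])
    k_hat = res.x[0]
    print(f"estimated rate constant k = {k_hat:.3f}")
    ```

    The data above were generated from k = 0.5, so the optimizer should recover a value close to that; in the full model the parameter vector covers all 15 pathways plus deactivation terms.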

  5. Procedures and Methods of Digital Modeling in Representation Didactics

    Science.gov (United States)

    La Mantia, M.

    2011-09-01

    In the Bachelor degree course in Engineering/Architecture at the University "La Sapienza" of Rome, the courses in Design and Survey not only address the learning of methods of representation and the application of descriptive geometry and surveying, so as to expand the student's vision and spatial conception, but also pay particular attention to the use of information technology for the preparation of design and survey drawings, achieving their goals through an educational path of "learning techniques, procedures and methods of modeling architectural structures." The fields of application involved two different educational areas, design analysis and survey, covering the process from the acquisition of metric data (by design or survey) to the development of a three-dimensional virtual model.

  6. Different results on tetrachoric correlations in Mplus and Stata--Stata announces modified procedure.

    Science.gov (United States)

    Günther, Agnes; Höfler, Michael

    2006-01-01

    To identify the structure of mental disorders in large-scale epidemiological data sets, investigators frequently use tetrachoric correlations as a first step for subsequent application of latent class and factor analytic methods. It has been possible to do this with Stata since 2005, whereas the corresponding Mplus routine has been on the market for some years. Using an identical data set we observed considerable differences between the results of the packages. This paper illustrates the differences with several examples from the Early Developmental Stages of Psychopathology Study data set, which consists of 3021 subjects, with diagnostic information assessed by the CIDI. Results reveal that tetrachoric correlations resulting from Mplus were often considerably smaller than those computed with Stata. The results were dramatically different, especially where there were few observations per cell or even empty cells. These findings were put to Mplus and Stata, whose responses clarified the discrepancies by describing the different mathematical assumptions and procedures used. Stata announced that it intended to launch a modified procedure.
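    The quantity at issue is the tetrachoric correlation: the correlation of an assumed underlying bivariate normal that is consistent with an observed 2x2 table. A minimal maximum-likelihood sketch is below; the table counts are hypothetical, and differences between packages of the kind the record describes arise precisely in how this likelihood is handled for sparse cells.

    ```python
    # Maximum-likelihood tetrachoric correlation for a hypothetical 2x2 table,
    # assuming an underlying bivariate normal (the model both Mplus and Stata use).
    import numpy as np
    from scipy.stats import norm, multivariate_normal
    from scipy.optimize import minimize_scalar

    # rows = item A (0/1), columns = item B (0/1); counts are invented
    table = np.array([[40.0, 10.0],
                      [15.0, 35.0]])
    n = table.sum()
    # thresholds so that P(X > tau_a) = P(A=1) and P(Y > tau_b) = P(B=1)
    tau_a = norm.ppf(1.0 - table[1].sum() / n)
    tau_b = norm.ppf(1.0 - table[:, 1].sum() / n)

    def neg_loglik(rho):
        biv = multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]])
        p11 = biv.cdf([-tau_a, -tau_b])       # P(X > tau_a, Y > tau_b)
        p10 = (1.0 - norm.cdf(tau_a)) - p11   # A=1, B=0
        p01 = (1.0 - norm.cdf(tau_b)) - p11   # A=0, B=1
        p00 = 1.0 - p11 - p10 - p01
        probs = np.clip([p00, p01, p10, p11], 1e-12, 1.0)
        counts = [table[0, 0], table[0, 1], table[1, 0], table[1, 1]]
        return -sum(c * np.log(p) for c, p in zip(counts, probs))

    res = minimize_scalar(neg_loglik, bounds=(-0.99, 0.99), method="bounded")
    rho_hat = res.x
    print(f"tetrachoric rho = {rho_hat:.3f}")
    ```

    For this table the estimate lands well above the ordinary phi coefficient (about 0.5 here), illustrating why the choice of estimation procedure matters.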

  7. Learning curve estimation in medical devices and procedures: hierarchical modeling.

    Science.gov (United States)

    Govindarajulu, Usha S; Stillo, Marco; Goldfarb, David; Matheny, Michael E; Resnic, Frederic S

    2017-07-30

    In the use of medical device procedures, learning effects have been shown to be a critical component of medical device safety surveillance. To support the estimation of these effects, we evaluated multiple methods for modeling these rates within a complex simulated dataset representing patients treated by physicians clustered within institutions. We employed unique modeling for the learning curves to incorporate the learning hierarchy between institutions and physicians and then modeled them within established methods that work with hierarchical data, such as generalized estimating equations (GEE) and generalized linear mixed effect models. We found that both methods performed well, but that the GEE may have some advantages over the generalized linear mixed effect models for ease of modeling and a substantially lower rate of model convergence failures. We then focused more on using GEE and performed a separate simulation to vary the shape of the learning curve as well as employed various smoothing methods for the plots. We concluded that while both hierarchical methods can be used with our mathematical modeling of the learning curve, the GEE tended to perform better across multiple simulated scenarios in order to accurately model the learning effect as a function of physician and hospital hierarchical data in the use of a novel medical device. We found that the choice of shape used to produce the 'learning-free' dataset would be dataset specific, while the choices of smoothing method were negligibly different from one another. This was an important application to understand how best to fit this unique learning curve function for hierarchical physician and hospital data. Copyright © 2017 John Wiley & Sons, Ltd.
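    The learning-curve shapes varied in such simulations can be illustrated with a simple, non-hierarchical sketch: a negative-exponential risk curve fitted to simulated per-case complication rates for a single physician. The curve form, data, and parameters below are hypothetical, not the authors' model.

    ```python
    # Sketch (not the authors' GEE/GLMM models): fit a negative-exponential
    # learning curve, risk(i) = asymptote + drop * exp(-i / scale), to
    # simulated complication rates over a physician's case sequence.
    import numpy as np
    from scipy.optimize import curve_fit

    rng = np.random.default_rng(0)
    cases = np.arange(1, 201)                       # case sequence number
    true_risk = 0.05 + 0.20 * np.exp(-cases / 30.0)  # simulated "truth"
    events = rng.binomial(50, true_risk)             # 50 patients per case bin
    rates = events / 50.0

    def curve(i, asymptote, drop, scale):
        return asymptote + drop * np.exp(-i / scale)

    params, _ = curve_fit(curve, cases, rates, p0=[0.05, 0.2, 20.0])
    asymptote, drop, scale = params
    print(f"asymptote={asymptote:.3f}, drop={drop:.3f}, scale={scale:.1f}")
    ```

    In the hierarchical setting of the paper, such a curve would be embedded in a GEE or mixed model so physicians within an institution share correlated learning effects.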

  8. A Multi-objective Procedure for Efficient Regression Modeling

    CERN Document Server

    Sinha, Ankur; Kuosmanen, Timo

    2012-01-01

    Variable selection is recognized as one of the most critical steps in statistical modeling. The problems encountered in engineering and social sciences are commonly characterized by over-abundance of explanatory variables, non-linearities and unknown interdependencies between the regressors. An added difficulty is that the analysts may have little or no prior knowledge on the relative importance of the variables. To provide a robust method for model selection, this paper introduces a technique called the Multi-objective Genetic Algorithm for Variable Selection (MOGA-VS) which provides the user with an efficient set of regression models for a given data-set. The algorithm treats the regression problem as a two-objective task, where the purpose is to prefer models that have fewer regression coefficients and better goodness of fit. In MOGA-VS, the model selection procedure is implemented in two steps. First, we generate the frontier of all efficient or non-dominated regression m...
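    The "efficient set" idea behind MOGA-VS, keeping every model that is not dominated on the two objectives (number of coefficients, lack of fit), can be sketched in a few lines of plain Python. The candidate (subset size, SSE) pairs below are hypothetical, and the genetic-algorithm search itself is omitted.

    ```python
    # Sketch of the two-objective filtering step: a model is kept only if no
    # other model is at least as good on both objectives and strictly better
    # on one (both objectives are minimized).
    def pareto_front(models):
        """Return the non-dominated (size, sse) pairs, sorted by size."""
        front = []
        for m in models:
            dominated = any(
                other != m
                and other[0] <= m[0] and other[1] <= m[1]
                and (other[0] < m[0] or other[1] < m[1])
                for other in models
            )
            if not dominated:
                front.append(m)
        return sorted(front)

    # hypothetical candidates: (number of coefficients, sum of squared errors)
    candidates = [(1, 90.0), (2, 60.0), (2, 75.0), (3, 40.0),
                  (4, 42.0), (4, 38.0), (5, 37.5), (6, 37.6)]
    print(pareto_front(candidates))
    # -> [(1, 90.0), (2, 60.0), (3, 40.0), (4, 38.0), (5, 37.5)]
    ```

    The analyst then chooses among the frontier models rather than receiving a single "best" model, which is the trade-off view the paper advocates.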

  9. Predictive market segmentation model: An application of logistic regression model and CHAID procedure

    Directory of Open Access Journals (Sweden)

    Soldić-Aleksić Jasna

    2009-01-01

    Market segmentation is one of the key concepts of modern marketing. The main goal of market segmentation is to create groups (segments) of customers that have similar characteristics, needs, wishes and/or similar behavior regarding the purchase of a concrete product/service. Companies can create a specific marketing plan for each of these segments and therefore gain short- or long-term competitive advantage on the market. Depending on the concrete marketing goal, different segmentation schemes and techniques may be applied. This paper presents a predictive market segmentation model based on the application of a logistic regression model and CHAID analysis. The logistic regression model was used for the purpose of variable selection (from an initial pool of eleven variables) to identify those that are statistically significant for explaining the dependent variable. The selected variables were afterwards included in the CHAID procedure that generated the predictive market segmentation model. The model results are presented on a concrete empirical example in the following form: summary model results, CHAID tree, Gain chart, Index chart, risk and classification tables.
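    A core step of CHAID is choosing, at each node, the categorical predictor whose cross-tabulation with the target is most significant under a chi-square test. A minimal sketch of that split-selection step follows; the predictor names and purchase counts are invented for illustration.

    ```python
    # Sketch of CHAID's split selection: pick the predictor whose contingency
    # table against the target has the smallest chi-square p-value.
    from scipy.stats import chi2_contingency

    crosstabs = {
        # predictor -> contingency table; rows = categories, cols = (no, yes)
        "region": [[50, 50], [48, 52], [52, 48]],
        "income": [[80, 20], [50, 50], [20, 80]],
        "gender": [[55, 45], [45, 55]],
    }

    p_values = {name: chi2_contingency(tab)[1] for name, tab in crosstabs.items()}
    best = min(p_values, key=p_values.get)
    print(best, p_values[best])
    ```

    Here "income" wins by a wide margin, so the tree would split on it first; real CHAID additionally merges non-significantly different categories before comparing predictors.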

  10. A National project for in vivo dosimetry procedures in radiotherapy: First results

    Science.gov (United States)

    Piermattei, Angelo; Greco, Francesca; Azario, Luigi; Porcelli, Andrea; Cilla, Savino; Zucca, Sergio; Russo, Aniello; di Castro, Elisabetta; Russo, Mariateresa; Caivano, Rocchina; Fusco, Vincenzo; Morganti, Alessio; Fidanzio, Andrea

    2012-03-01

    The paper reports the results of a National project financed by the Istituto Nazionale di Fisica Nucleare (INFN) for the development of in vivo dosimetric procedures in radiotherapy. In particular, a generalized procedure for the in vivo reconstruction of the isocenter dose, Diso, has been developed for 3D conformal radiotherapy treatments with open and wedged X-ray beams supplied by linacs of different manufacturers and equipped with aSi Electronic Portal Imaging Devices (EPIDs). In this way, the commissioning procedure is greatly simplified and applicable to Elekta, Siemens and Varian linacs. The method reported here is based on measurements in solid-water phantoms of different thicknesses, w, irradiated by square field sizes, L. Generalized mid-plane doses, D0, and transit signals by EPIDs, st0, obtained for 19 open and 38 wedged beams of 8 different linacs, were determined taking into account X-ray beam and EPID calibrations. Generalized ratios F0 = st0/D0 for open and wedged beams were fitted by surface equations and used by dedicated software for the Diso reconstruction. Moreover, for each beam the software supplied a set of transit signal profiles crossing the beam central axis to test the beam irradiation reproducibility. The tolerance level for the comparison between Diso and the dose computed by the TPS, Diso,TPS, was estimated at 5%. The generalized in vivo dosimetry procedure was adopted by 3 centers that used different linacs. The results of 480 tests showed that, in the absence of errors, the comparison between Diso and Diso,TPS was well within the tolerance level. The presence of errors was detected in 10% of the tests; these were essentially due to incorrect set-up, the presence of an attenuator in the beam, and patient morphological changes. Moreover, the dedicated software used the information of the Record and Verify system of the centres and consequently the extra time needed to obtain, for each beam, the Diso reconstruction after the dose delivery

  11. Procedural learning as a measure of functional impairment in a mouse model of ischemic stroke.

    Science.gov (United States)

    Linden, Jérôme; Van de Beeck, Lise; Plumier, Jean-Christophe; Ferrara, André

    2016-07-01

    Basal ganglia stroke is often associated with functional deficits in patients, including difficulties in learning and executing new motor skills (procedural learning). To measure procedural learning in a murine model of stroke (30 min right MCAO), we submitted C57Bl/6J mice to various sensorimotor tests, then to an operant procedure (Serial Order Learning) specifically assessing the ability to learn a simple motor sequence. Results showed that MCAO affected the performance in some of the sensorimotor tests (accelerated rotating rod and amphetamine rotation test) and the way animals learned a motor sequence. The latter finding seems to be caused by difficulties regarding the chunking of operant actions into a coherent motor sequence; the appeal of food rewards and the ability to press levers appeared unaffected by MCAO. We conclude that assessment of motor learning in rodent models of stroke might improve the translational value of such models.

  12. Indications, Results and Mortality of Pulmonary Artery Banding Procedure: a Brief Review and Five-year Experiences

    Directory of Open Access Journals (Sweden)

    Hamid Hoseinikhah

    2016-05-01

    Background Pulmonary artery banding (PAB) is a palliative surgical technique used by congenital heart surgeons as a staged approach to operative correction of congenital heart defects. Materials and Methods We report 5-year experiences, from January 2011 to January 2016, at the Imam Reza Hospital center (a tertiary referral hospital in Mashhad city, North East of Iran); 50 patients with congenital heart disease with a left-to-right shunt who underwent the pulmonary artery banding procedure were studied. Results The age of the patients (n=50) was 1 to 9 months (mean 4.6 ± 1.3). In this study, the most common condition requiring the PAB procedure was ventricular septal defect (VSD), with twenty-eight patients (56%). The mean extubation time (hours) was 10.4 ± 0.8 and the mean hospital stay (days) was 13.3 ± 2.4. Conclusion Although the number of pulmonary artery banding palliation surgeries has decreased, in a selected group of congenital heart disease patients this palliation, used to reduce overcirculation of the pulmonary system, can be applied successfully with acceptable results and low mortality. We suggest pulmonary artery banding palliative surgery in these selected patients.

  13. Congenital penile curvature: long-term results of operative treatment using the plication procedure

    Institute of Scientific and Technical Information of China (English)

    S.-S. Lee; E. Meng; F.-P. Chuang; C.-Y. Yen; S.-Y. Chang; D.-S. Yu; G.-H. Sun

    2004-01-01

    Aim: To determine the long-term outcome, effectiveness and patient satisfaction of congenital penile curvature correction by plication of the tunica albuginea. Methods: From January 1992 to January 2002, 106 young patients underwent surgical correction of congenital penile curvature by corporeal plication. Indications for operation were difficult or impossible vaginal penetration and cosmetic problems. The technique of corporeal plication consists of placing longitudinal plication sutures of 2-zero braided polyester on the convex side of the curvature until the curvature is corrected when erection is artificially induced. Results of this procedure were obtained by retrospective chart reviews and questionnaires via mail. Long-term follow-up ranged from 11 to 132 (mean 69.3) months and data were available for 68 patients. Results: Penile straightening was excellent in 62 patients (91%) and good, with less than 15 degrees of residual curvature, in 6 patients (9%). Sixty-seven patients reported no change in erectile rigidity or maintenance postoperatively, while 1 described early detumescence. Shortening of the penis without functional problems was noted by 26 patients (38%). Thirty-five patients (51%) reported feeling palpable indurations (suture knots) on the penis. Temporary numbness of the glans penis was described in 3 patients. Overall, 60 patients were very satisfied, 6 satisfied, 2 unsatisfied. Conclusion: Corporeal plication is an effective and durable procedure with a high rate of patient satisfaction. (Asian J Androl 2004 Sep; 6: 273-276)

  14. An innovative 3-D numerical modelling procedure for simulating repository-scale excavations in rock - SAFETI

    Energy Technology Data Exchange (ETDEWEB)

    Young, R. P.; Collins, D.; Hazzard, J.; Heath, A. [Department of Earth Sciences, Liverpool University, 4 Brownlow street, UK-0 L69 3GP Liverpool (United Kingdom); Pettitt, W.; Baker, C. [Applied Seismology Consultants LTD, 10 Belmont, Shropshire, UK-S41 ITE Shrewsbury (United Kingdom); Billaux, D.; Cundall, P.; Potyondy, D.; Dedecker, F. [Itasca Consultants S.A., Centre Scientifique A. Moiroux, 64, chemin des Mouilles, F69130 Ecully (France); Svemar, C. [Svensk Karnbranslemantering AB, SKB, Aspo Hard Rock Laboratory, PL 300, S-57295 Figeholm (Sweden); Lebon, P. [ANDRA, Parc de la Croix Blanche, 7, rue Jean Monnet, F-92298 Chatenay-Malabry (France)

    2004-07-01

    This paper presents current results from work performed within the European Commission project SAFETI. The main objective of SAFETI is to develop and test an innovative 3D numerical modelling procedure that will enable the 3-D simulation of nuclear waste repositories in rock. The modelling code is called AC/DC (Adaptive Continuum/ Dis-Continuum) and is partially based on Itasca Consulting Group's Particle Flow Code (PFC). Results are presented from the laboratory validation study where algorithms and procedures have been developed and tested to allow accurate 'Models for Rock' to be produced. Preliminary results are also presented on the use of AC/DC with parallel processors and adaptive logic. During the final year of the project a detailed model of the Prototype Repository Experiment at SKB's Hard Rock Laboratory will be produced using up to 128 processors on the parallel super computing facility at Liverpool University. (authors)

  15. 49 CFR 219.611 - Test result indicating prohibited alcohol concentration; procedures.

    Science.gov (United States)

    2010-10-01

    Section 219.611 (Transportation: Other Regulations Relating to Transportation): Test result indicating prohibited alcohol concentration; procedures. Procedures for administrative handling by the railroad in the event an employee's confirmation test indicates an alcohol concentration of .04 or greater are set forth in § 219.104.

  16. Developing Physiologic Models for Emergency Medical Procedures Under Microgravity

    Science.gov (United States)

    Parker, Nigel; O'Quinn, Veronica

    2012-01-01

    Several technological enhancements have been made to METI's commercial Emergency Care Simulator (ECS) with regard to how microgravity affects human physiology. The ECS uses both a software-only lung simulation and an integrated mannequin lung that uses a physical lung bag for creating chest excursions, together with a digital simulation of lung mechanics and gas exchange. METI's patient simulators incorporate models of human physiology that simulate lung and chest wall mechanics, as well as pulmonary gas exchange. Microgravity affects how O2 and CO2 are exchanged in the lungs. Procedures were also developed to take into account the Glasgow Coma Scale for determining levels of consciousness, by varying the ECS eye-blinking function to partially indicate the level of consciousness of the patient. In addition, the ECS was modified to provide various levels of pulses, from weak and thready to hyper-dynamic, to assist in assessing patient conditions from the femoral, carotid, brachial, and pedal pulse locations.

  17. [Long-term results of two temporalis muscle transfer procedures in correction of paralytic lagophthalmos].

    Science.gov (United States)

    Qian, Jiange; Yan, Liangbin; Zhang, Guocheng

    2004-11-01

    To compare the long-term results and possible complications of a modified temporalis muscle transfer (TMT) with the Johnson's procedure in correction of paralytic lagophthalmos. From September 1997 to March 2000, paralytic lagophthalmos due to leprosy in 92 patients was corrected with TMT. The 89 cases (127 eyes, including 51 unilateral and 38 bilateral) followed up 3 years after operation were analyzed. There were 69 males and 20 females with ages ranging from 18 to 65 years (52 years on average). The duration of lagophthalmos was 1-22 years with an average of 8.2 years. Thirty-six eyes were complicated with lower eyelid ectropion. Sixty-five eyes were corrected with the Johnson's procedure (Johnson's TMT group), 62 with the modified TMT procedure (modified TMT group). The modifications were as follows: (1) omitting the fascial strip in the lower eyelid to avoid postoperative ectropion; (2) fixing the fascial strip of the upper eyelid to the middle or inner margin of the tarsal plate depending on the degree of the lagophthalmos to avoid possible ptosis of the upper eyelid. In the Johnson's TMT group, the mean lid gap on light closure was reduced to 3.1 mm postoperatively from 7.7 mm preoperatively; and the mean lid gap on tight closure was reduced to 0.5 mm postoperatively from 6.1 mm preoperatively. The symptoms of redness (73.7%) and tearing (63.7%) disappeared or were improved postoperatively. However, ectropion and ptosis occurred in 24 eyes and 9 eyes respectively. The overall excellent and good rate was 58.5%. In the modified TMT group, the mean lid gap on light closure was reduced to 3.3 mm postoperatively from 7.5 mm preoperatively; and the mean lid gap on tight closure was reduced to 0.6 mm postoperatively from 6.3 mm preoperatively. The symptoms of redness (90.9%) and tearing (71.0%) disappeared or were improved postoperatively, and no ectropion or ptosis was found except one ectropion. The overall excellent and good rate was 87.1%, which was significantly

  18. The results of a third Gamma Knife procedure for recurrent trigeminal neuralgia.

    Science.gov (United States)

    Tempel, Zachary J; Chivukula, Srinivas; Monaco, Edward A; Bowden, Greg; Kano, Hideyuki; Niranjan, Ajay; Chang, Edward F; Sneed, Penny K; Kaufmann, Anthony M; Sheehan, Jason; Mathieu, David; Lunsford, L Dade

    2015-01-01

    Gamma Knife radiosurgery (GKRS) is the least invasive treatment option for medically refractory, intractable trigeminal neuralgia (TN) and is especially valuable for treating elderly, infirm patients or those on anticoagulation therapy. The authors reviewed pain outcomes and complications in TN patients who required 3 radiosurgical procedures for recurrent or persistent pain. A retrospective review of all patients who underwent 3 GKRS procedures for TN at 4 participating centers of the North American Gamma Knife Consortium from 1995 to 2012 was performed. The Barrow Neurological Institute (BNI) pain score was used to evaluate pain outcomes. Seventeen patients were identified; 7 were male and 10 were female. The mean age at the time of last GKRS was 79.6 years (range 51.2-95.6 years). The TN was Type I in 16 patients and Type II in 1 patient. No patient suffered from multiple sclerosis. Eight patients (47.1%) reported initial complete pain relief (BNI Score I) following their third GKRS and 8 others (47.1%) experienced at least partial relief (BNI Scores II-IIIb). The average time to initial response was 2.9 months following the third GKRS. Although 3 patients (17.6%) developed new facial sensory dysfunction following primary GKRS and 2 patients (11.8%) experienced new or worsening sensory disturbance following the second GKRS, no patient sustained additional sensory disturbances after the third procedure. At a mean follow-up of 22.9 months following the third GKRS, 6 patients (35.3%) reported continued Score I complete pain relief, while 7 others (41.2%) reported pain improvement (BNI Scores II-IIIb). Four patients (23.5%) suffered recurrent TN following the third procedure at a mean interval of 19.1 months. A third GKRS resulted in pain reduction with a low risk of additional complications in most patients with medically refractory and recurrent, intractable TN. 
In patients unsuitable for other microsurgical or percutaneous strategies, especially those receiving

  19. Experiences with a procedure for modeling product knowledge

    DEFF Research Database (Denmark)

    Hansen, Benjamin Loer; Hvam, Lars

    2002-01-01

    This paper presents experiences with a procedure for building configurators. The procedure has been used in an American company producing custom-made precision air conditioning equipment. The paper describes experiences with the use of the procedure and with the project in general.

  20. Endoscopic video-assisted breast surgery: procedures and short-term results.

    Science.gov (United States)

    Yamashita, Koji; Shimizu, Kazuo

    2006-08-01

    We devised a new endoscopic operation for breast diseases. We report the aesthetic and treatment results of this procedure. A 2.5-cm axillary skin incision was made for a single approaching port, and a working space was created by retraction. Under video assistance, we resected the mammary gland partially or totally, and in the case of malignant diseases we also performed a sentinel lymph node biopsy and dissected axillary lymph nodes (levels I and II). From December 2001 through April 2005, we performed endoscopic video-assisted breast surgery (VABS) in 100 patients with breast diseases. The diseases were benign in 18 patients and malignant in 82 patients. Of the malignant diseases, 80 underwent breast-conserving surgery and 2 underwent skin-sparing mastectomy. There was no significant difference in operation time, blood loss, or blood examinations related with the acute phase reaction between VABS and conventional breast-conserving procedures. All surgical margins were negative on examination of permanent histological preparations. The wounds healed without noticeable scarring. The original shapes of the breast were preserved. All patients expressed their great satisfaction with VABS. VABS can be considered as a surgical option and can provide aesthetic advantages for patients with breast disease.

  1. [Diagnosis and treatment of varicose veins: part 2: therapeutic procedures and results].

    Science.gov (United States)

    Nüllen, H; Noppeney, T

    2010-12-01

    This is the second of two articles on the diagnosis and treatment of varicose veins. Primary varicosis is a congenital degenerative disease of the peripheral venous system of the lower extremities. Treatment is carried out according to an individualized concept which takes the incurability and progression of the disease into consideration. Conservative treatment with compression bandages is an option for all forms of varicosis and the accompanying complications. Veins can be specifically ablated by sclerotherapy of varices. In addition to high ligation and stripping mini-phlebectomy and subfascial endoscopic perforator surgery (SEPS) can also be performed. The indications in cases of SEPS should be extremely limited because of possible severe complications. Radiofrequency ablation (RFO) and endovenous laser therapy (ELT) are also available as endovenous therapy options. Information in the literature on recurrence rates of the various procedures is extremely variable and the reasons for recurrent varicosis are the subject of controversy. The data relating to the results of RFO and ELT are relatively good and both procedures show a significant improvement in quality of life and the venous clinical severity score (VCSS).

  2. Congenital penile curvature: long-term results of operative treatment using the plication procedure.

    Science.gov (United States)

    Lee, S-S; Meng, E; Chuang, F-P; Yen, C-Y; Chang, S-Y; Yu, D-S; Sun, G-H

    2004-09-01

    To determine the long-term outcome, effectiveness and patient satisfaction of congenital penile curvature correction by plication of the tunica albuginea. From January 1992 to January 2002, 106 young patients underwent surgical correction of congenital penile curvature by corporeal plication. Indications for operation were difficult or impossible vaginal penetration and cosmetic problems. The technique of corporeal plication consists of placing longitudinal plication sutures of 2-zero braided polyester on the convex side of the curvature until the curvature is corrected when erection is artificially induced. Results of this procedure were obtained by retrospective chart reviews and questionnaires via mail. Long-term follow-up ranged from 11 to 132 (mean 69.3) months and data were available for 68 patients. Penile straightening was excellent in 62 patients (91%) and good, with less than 15 degrees of residual curvature, in 6 patients (9%). Sixty-seven patients reported no change in erectile rigidity or maintenance postoperatively, while 1 described early detumescence. Shortening of the penis without functional problems was noted by 26 patients (38%). Thirty-five patients (51%) reported feeling palpable indurations (suture knots) on the penis. Temporary numbness of the glans penis was described in 3 patients. Overall, 60 patients were very satisfied, 6 satisfied, 2 unsatisfied. Corporeal plication is an effective and durable procedure with a high rate of patient satisfaction.

  3. Interpreting Results from the Multinomial Logit Model

    DEFF Research Database (Denmark)

    Wulff, Jesper

    2015-01-01

    This article provides guidelines and illustrates practical steps necessary for an analysis of results from the multinomial logit model (MLM). The MLM is a popular model in the strategy literature because it allows researchers to examine strategic choices with multiple outcomes. However, there see...
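The core mechanics of the MLM can be sketched briefly: choice probabilities come from a softmax over outcome-specific linear indices, with one category normalized as the base. A minimal illustration (the coefficients below are invented for illustration, not taken from the article):

```python
import numpy as np

def mnl_probabilities(x, betas):
    """Predicted choice probabilities for a multinomial logit model.

    x     : 1-D array of covariates (including an intercept term).
    betas : (J, p) array of coefficients, one row per outcome; the
            first row is the base category, normalized to zero.
    """
    utilities = betas @ x                        # linear index per outcome
    exp_u = np.exp(utilities - utilities.max())  # subtract max for stability
    return exp_u / exp_u.sum()

# Illustrative coefficients for three strategic choices, two covariates.
betas = np.array([[0.0, 0.0],    # base category (normalized to zero)
                  [0.5, -0.2],
                  [-0.3, 0.4]])
probs = mnl_probabilities(np.array([1.0, 2.0]), betas)
```

Interpreting the fitted coefficients directly is hazardous precisely because each probability depends on all outcome equations at once, which is why post-estimation quantities such as predicted probabilities are usually reported instead.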

  4. Robotic right colectomy: A worthwhile procedure? Results of a meta-analysis of trials comparing robotic versus laparoscopic right colectomy

    Directory of Open Access Journals (Sweden)

    Niccolò Petrucciani

    2015-01-01

    Full Text Available Background: Robotic right colectomy (RRC) is a complex procedure, offered to selected patients at institutions highly experienced with the procedure. It is still not clear whether this approach is worthwhile in enhancing patient recovery and reducing post-operative complications, compared with laparoscopic right colectomy (LRC). The literature is still fragmented and no meta-analyses have been conducted to compare the two procedures. This work aims at reducing this gap in the literature, in order to draw some preliminary conclusions on the differences and similarities between RRC and LRC, focusing on short-term outcomes. Materials and Methods: A systematic literature review was conducted to identify studies comparing RRC and LRC, and a meta-analysis was performed using a random-effects model. Peri-operative outcomes (e.g., morbidity, mortality, anastomotic leakage rates, blood loss, operative time) constituted the study end points. Results: Six studies, including 168 patients undergoing RRC and 348 patients undergoing LRC, were considered suitable. The patients in the two groups were similar with respect to sex, body mass index, presence of malignant disease, and previous abdominal surgery, but different with respect to age and American Society of Anesthesiologists score. There were no statistically significant differences between RRC and LRC regarding estimated blood loss, rate of conversion to open surgery, number of retrieved lymph nodes, development of anastomotic leakage and other complications, overall morbidity, rates of reoperation, overall mortality, or hospital stay. RRC resulted in significantly longer operative time. Conclusions: The RRC procedure is feasible, safe, and effective in selected patients. However, operative times are longer compared with LRC, and no advantages in peri-operative and post-operative outcomes are demonstrated with the use of the robotic surgical system.
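The random-effects pooling mentioned above is commonly carried out with the DerSimonian-Laird estimator: a fixed-effect estimate yields the heterogeneity statistic Q, from which the between-study variance tau² is estimated and used to re-weight the studies. A minimal sketch with invented effect sizes (not the data from this meta-analysis):

```python
def dersimonian_laird(effects, variances):
    """Random-effects pooled estimate (DerSimonian-Laird method).

    effects   : per-study effect sizes (e.g. log odds ratios).
    variances : per-study sampling variances.
    Returns (pooled_effect, tau2), where tau2 estimates the
    between-study variance.
    """
    k = len(effects)
    w = [1.0 / v for v in variances]
    sw = sum(w)
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sw
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))
    c = sw - sum(wi ** 2 for wi in w) / sw
    tau2 = max(0.0, (q - (k - 1)) / c)
    w_star = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * e for wi, e in zip(w_star, effects)) / sum(w_star)
    return pooled, tau2

# Three hypothetical studies reporting the same effect: tau2 should be 0
# and the pooled estimate should equal the common effect.
pooled, tau2 = dersimonian_laird([0.2, 0.2, 0.2], [0.04, 0.09, 0.06])
```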

  5. Recurrent Anterior Shoulder Instability With Combined Bone Loss: Treatment and Results With the Modified Latarjet Procedure.

    Science.gov (United States)

    Yang, Justin S; Mazzocca, Augustus D; Cote, Mark P; Edgar, Cory M; Arciero, Robert A

    2016-04-01

    The rate of recurrent anterior glenohumeral dislocation in the setting of an engaging Hill-Sachs lesion is high. The Latarjet procedure has been well described for restoring glenohumeral stability in patients with >25% glenoid bone loss. However, the treatment for patients with combined humeral head and mild (Latarjet for patients with combined humeral and glenoid defects and compares the results for patients with ≤25% glenoid bone loss versus patients with >25% glenoid bone loss. The hypothesis was that the 2 groups would have equivalent subjective outcomes and recurrence rates. Cohort Study; Level of evidence, 3. The modified Latarjet procedure was performed in 40 patients with recurrent anterior shoulder instability, an engaging Hill-Sachs lesion on examination confirmed with arthroscopy, and ≤25% anterior glenoid bone loss (group A). A second group of 12 patients was identified to have >25% glenoid bone loss with an engaging Hill-Sachs lesion (group B). The mean follow-up time was 3.5 years. All patients were assessed for their risk of recurrence using the Instability Severity Index score and Beighton score and had preoperative 3-dimensional imaging to assess humeral and glenoid bone loss. Single Assessment Numeric Evaluation (SANE), Western Ontario Shoulder Instability Index (WOSI), recurrence rate, radiographs, range of motion, and dynamometer strength were used to assess outcomes. A multivariate analysis was performed. Glenoid bone loss averaged 15% in group A compared with 34% in group B. Both groups had comparable WOSI scores (356 vs 475; P = .311). In multivariate analysis, the number of previous surgeries and Beighton score were directly correlated with WOSI score in Latarjet patients. The SANE score was better in group A (86 vs 77; P = .02). Group B experienced more loss of external rotation (9.2° vs 15.8°; P = .0001) and weaker thumbs-down abduction and external rotation strength (P .999) were similar for both groups. The complication rate was 25% for both groups. The modified

  6. Optimisation need of dental radiodiagnostic procedures: results of effective dose evaluation from Rando phantom measurements

    Energy Technology Data Exchange (ETDEWEB)

    Borio, R.; Chiocchini, S.; Cicioni, R.; Degli Esposti, P.; Rongoni, A.; Sabatini, P.; Saetta, D.M.S. (Perugia Univ. (Italy). Health Physics Lab. Istituto Nazionale di Fisica Nucleare, Perugia (Italy)); Regi, L.; Caprino, G. (Perugia Univ. (Italy). Dept. of Radiology)

    1994-01-01

    Radiological examinations of different types are needed in dental practice both to make a correct diagnosis and to carry out an adequate therapy. Particularly in orthodontic practices, because of the youth of the majority of the patients, an assessment of the detriment to health (through the effective dose equivalent) caused by medical diagnostic exposure to ionising radiation is needed to make decisions about the optimisation of dental radiodiagnostic procedures. Experimental data from measurements on a Rando phantom were collected for the radiological examinations required for dental and for orthodontic practices (with and without protective apron and collar). The results show the effectiveness of the leaded apron and collar in all the examinations carried out, particularly in reducing thyroid dose. (author).
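Effective dose is the tissue-weighted sum of organ equivalent doses, E = Σ_T w_T H_T. A toy computation with hypothetical organ doses and illustrative tissue weighting factors (consult the current ICRP recommendations for authoritative values):

```python
def effective_dose(organ_doses, weights):
    """Effective dose as the tissue-weighted sum of equivalent doses:
    E = sum over tissues T of w_T * H_T.  Doses in mSv; `weights` holds
    ICRP-style tissue weighting factors (illustrative values below)."""
    return sum(weights[t] * h for t, h in organ_doses.items())

# Hypothetical organ doses (mSv) from a dental exposure, with and
# without a protective collar; both doses and weights are illustrative.
weights = {"thyroid": 0.04, "bone_marrow": 0.12, "brain": 0.01}
no_collar = {"thyroid": 0.50, "bone_marrow": 0.02, "brain": 0.01}
with_collar = {"thyroid": 0.05, "bone_marrow": 0.02, "brain": 0.01}
e_no = effective_dose(no_collar, weights)
e_yes = effective_dose(with_collar, weights)
```

With these made-up numbers, the collar's large reduction in thyroid dose dominates the change in effective dose, mirroring the effect the abstract reports.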

  7. How to stretch and shrink vowel systems: results from a vowel normalization procedure.

    Science.gov (United States)

    Geng, Christian; Mooshammer, Christine

    2009-05-01

    One of the goals of phonetic investigations is to find strategies for vowel production independent of speaker-specific vocal-tract anatomies and individual biomechanical properties. In this study techniques for speaker normalization that are derived from Procrustes methods were applied to acoustic and articulatory data. More precisely, data consist of the first two formants and EMMA fleshpoint markers of stressed and unstressed vowels of German from seven speakers in the consonantal context /t/. Main results indicate that (a) for the articulatory data, the normalization can be related to anatomical properties (palate shapes), (b) the recovery of phonemic identity is of comparable quality for acoustic and articulatory data, (c) the procedure outperforms the Lobanov transform in the acoustic domain in terms of phoneme recovery, and (d) this advantage comes at the cost of partly also changing ellipse orientations, which is in accordance with the formulation of the algorithms.
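The Procrustes step at the heart of such normalization finds the best-fitting rotation (and scale) between two point configurations via a singular value decomposition. A minimal sketch on synthetic 2-D "fleshpoint" data (an illustration of the generic method, not the study's exact algorithm or data):

```python
import numpy as np

def orthogonal_procrustes(a, b):
    """Rotation R and scale s that best map point set `a` onto `b`
    in the least-squares sense, after centering both configurations."""
    a_c = a - a.mean(axis=0)
    b_c = b - b.mean(axis=0)
    u, sigma, vt = np.linalg.svd(b_c.T @ a_c)
    r = u @ vt
    s = sigma.sum() / (a_c ** 2).sum()
    return r, s

# Recover a known rotation from a synthetic marker configuration.
rng = np.random.default_rng(0)
theta = 0.3
rot = np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta),  np.cos(theta)]])
a = rng.standard_normal((10, 2))
b = a @ rot.T            # rotated copy of the same configuration
r, s = orthogonal_procrustes(a, b)
aligned = (a - a.mean(axis=0)) @ r.T * s + b.mean(axis=0)
```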

  8. A Review of Models and Procedures for Synthetic Validation for Entry-Level Army Jobs

    Science.gov (United States)

    1988-12-01

    ARI Research Note 88-107. A Review of Models and Procedures for Synthetic Validation for Entry-Level Army Jobs. Jennifer L. Crafts, Philip...of Models and Procedures for Synthetic Validation for Entry-Level Army Jobs. 12. PERSONAL AUTHOR(S): Crafts, Jennifer L., Szenas, Philip L., Chia, Wel...well as ability. Project A Validity Results: Campbell (1986) and McHenry, Hough, Toquam, Hanson, and Ashworth (1987) have conducted extensive

  9. 49 CFR 40.247 - What procedures does the BAT or STT follow after a screening test result?

    Science.gov (United States)

    2010-10-01

    49 CFR 40.247: What procedures does the BAT or STT follow after a screening test result? (a) If the test result is an alcohol concentration of less than 0.02, as the BAT or STT, you must do the following: (1) Sign and date...

  10. A Stepwise Time Series Regression Procedure for Water Demand Model Identification

    Science.gov (United States)

    Miaou, Shaw-Pin

    1990-09-01

    Annual time series water demand has traditionally been studied through multiple linear regression analysis. Four associated model specification problems have long been recognized: (1) the length of the available time series data is relatively short, (2) a large set of candidate explanatory or "input" variables needs to be considered, (3) input variables can be highly correlated with each other (multicollinearity problem), and (4) model error series are often highly autocorrelated or even nonstationary. A stepwise time series regression identification procedure is proposed to alleviate these problems. The proposed procedure adopts the sequential input variable selection concept of stepwise regression and the "three-step" time series model building strategy of Box and Jenkins. Autocorrelated model error is assumed to follow an autoregressive integrated moving average (ARIMA) process. The stepwise selection procedure begins with a univariate time series demand model with no input variables. Subsequently, input variables are selected and inserted into the equation one at a time until the last entered variable is found to be statistically insignificant. The order of insertion is determined by a statistical measure called between-variable partial correlation. This correlation measure is free from the contamination of serial autocorrelation. Three data sets from previous studies are employed to illustrate the proposed procedure. The results are then compared with those from their original studies.
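The forward-selection idea can be sketched as follows, using ordinary least-squares residualization and plain correlation in place of the paper's ARIMA error handling (synthetic data; a simplification of the published procedure, not a reimplementation of it):

```python
import numpy as np

def residualize(v, selected):
    """Residual of v after OLS projection on the selected columns."""
    if selected.shape[1] == 0:
        return v - v.mean()
    x = np.column_stack([np.ones(len(v)), selected])
    beta, *_ = np.linalg.lstsq(x, v, rcond=None)
    return v - x @ beta

def stepwise_select(y, candidates, threshold=0.3):
    """Greedy forward selection by partial correlation: at each step,
    add the candidate most correlated with y after both are adjusted
    for the variables already selected; stop when no candidate clears
    the threshold."""
    order, remaining = [], list(range(candidates.shape[1]))
    while remaining:
        sel = candidates[:, order]
        ry = residualize(y, sel)
        scores = {j: abs(np.corrcoef(residualize(candidates[:, j], sel), ry)[0, 1])
                  for j in remaining}
        best = max(scores, key=scores.get)
        if scores[best] < threshold:
            break
        order.append(best)
        remaining.remove(best)
    return order

# Synthetic demand series driven by inputs 1 and 2 but not 0.
rng = np.random.default_rng(1)
x = rng.standard_normal((60, 3))
y = 2.0 * x[:, 1] + 0.5 * x[:, 2] + 0.05 * rng.standard_normal(60)
order = stepwise_select(y, x)
```

On this synthetic series the strongest driver enters first and the weaker one second, which is exactly the insertion-order behavior the procedure relies on.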

  11. Renormalization procedure for random tensor networks and the canonical tensor model

    CERN Document Server

    Sasakura, Naoki

    2015-01-01

    We discuss a renormalization procedure for random tensor networks, and show that the corresponding renormalization-group flow is given by the Hamiltonian vector flow of the canonical tensor model, which is a discretized model of quantum gravity. The result is the generalization of the previous one concerning the relation between the Ising model on random networks and the canonical tensor model with N=2. We also prove a general theorem which relates discontinuity of the renormalization-group flow and the phase transitions of random tensor networks.

  12. Surgical procedures for a rat model of partial orthotopic liver transplantation with hepatic arterial reconstruction.

    Science.gov (United States)

    Nagai, Kazuyuki; Yagi, Shintaro; Uemoto, Shinji; Tolba, Rene H

    2013-03-07

    Orthotopic liver transplantation (OLT) in rats using a whole or partial graft is an indispensable experimental model for transplantation research, such as studies on graft preservation and ischemia-reperfusion injury, immunological responses, hemodynamics, and small-for-size syndrome. The rat OLT is among the most difficult animal models in experimental surgery and demands advanced microsurgical skills that take a long time to learn. Consequently, the use of this model has been limited. Since the reliability and reproducibility of results are key components of the experiments in which such complex animal models are used, it is essential for surgeons who are involved in rat OLT to be trained in well-standardized and sophisticated procedures for this model. While various techniques and modifications of OLT in rats have been reported since the first model was described by Lee et al. in 1973, the elimination of the hepatic arterial reconstruction and the introduction of the cuff anastomosis technique by Kamada et al. were a major advancement in this model, because they simplified the reconstruction procedures to a great degree. In the model by Kamada et al., the hepatic rearterialization was also eliminated. Since rats could survive without hepatic arterial flow after liver transplantation, there was considerable controversy over the value of hepatic arterialization. However, the physiological superiority of the arterialized model has been increasingly acknowledged, especially in terms of preserving the bile duct system and the liver integrity. In this article, we present detailed surgical procedures for a rat model of OLT with hepatic arterial reconstruction using a 50% partial graft after ex vivo liver resection. The reconstruction procedures for each vessel and the bile duct are performed by the following methods: a 7-0 polypropylene continuous suture for the supra- and infrahepatic vena cava; a cuff technique for the portal vein; and a stent technique for the

  13. Experimental testing procedures and dynamic model validation for vanadium redox flow battery storage system

    Science.gov (United States)

    Baccino, Francesco; Marinelli, Mattia; Nørgård, Per; Silvestro, Federico

    2014-05-01

    The paper aims at characterizing the electrochemical and thermal parameters of a 15 kW/320 kWh vanadium redox flow battery (VRB) installed in the SYSLAB test facility of the DTU Risø Campus and experimentally validating the proposed dynamic model realized in Matlab-Simulink. The adopted testing procedure consists of analyzing the voltage and current values during a power reference step-response and evaluating the relevant electrochemical parameters such as the internal resistance. The results of different tests are presented and used to define the electrical characteristics and the overall efficiency of the battery system. The test procedure has general validity and could also be used for other storage technologies. The storage model proposed and described is suitable for electrical studies and can represent a general model in terms of validity. Finally, the model simulation outputs are compared with experimental measurements during a discharge-charge sequence.
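The internal-resistance extraction from a power step response reduces to R = -ΔV/ΔI between the pre-step and post-step operating points. A toy calculation with hypothetical numbers (not the SYSLAB measurements):

```python
def internal_resistance(v_before, v_after, i_before, i_after):
    """Internal resistance from a current step response:
    R = -dV/dI (terminal voltage sags as discharge current rises)."""
    return -(v_after - v_before) / (i_after - i_before)

# Hypothetical step: the battery rests at 48.0 V, then a 30 A discharge
# pulls the terminal voltage down to 46.5 V.
r = internal_resistance(48.0, 46.5, 0.0, 30.0)
```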

  14. Results of clinical application of the modified maze procedure as concomitant surgery

    NARCIS (Netherlands)

    R.C. Bakker (Robbert); S. Akin (Sakir); D. Rizopoulos (Dimitris); C. Kik (Charles); J.J.M. Takkenberg (Hanneke); A.J.J.C. Bogers (Ad)

    2013-01-01

    Objectives: Atrial fibrillation is the most common cardiac arrhythmia and is associated with significant morbidity and mortality. The classic cut-and-sew maze procedure is successful in 85-95% of patients. However, the technical complexity has prompted modifications of the maze procedure.

  15. Hydraulic fracture model comparison study: Complete results

    Energy Technology Data Exchange (ETDEWEB)

    Warpinski, N.R. [Sandia National Labs., Albuquerque, NM (United States); Abou-Sayed, I.S. [Mobil Exploration and Production Services (United States); Moschovidis, Z. [Amoco Production Co. (US); Parker, C. [CONOCO (US)

    1993-02-01

    Large quantities of natural gas exist in low permeability reservoirs throughout the US. Characteristics of these reservoirs, however, make production difficult and often uneconomic, and stimulation is required. Because of the diversity of application, hydraulic fracture design models must be able to account for widely varying rock properties, reservoir properties, in situ stresses, fracturing fluids, and proppant loads. As a result, fracture simulation has emerged as a highly complex endeavor that must be able to describe many different physical processes. The objective of this study was to develop a comparative study of hydraulic-fracture simulators in order to provide stimulation engineers with the necessary information to make rational decisions on the type of models most suited for their needs. This report compares the fracture modeling results of twelve different simulators, some of them run in different modes, for eight separate design cases. Comparisons of length, width, height, net pressure, maximum width at the wellbore, average width at the wellbore, and average width in the fracture have been made, both for the final geometry and as a function of time. For the models in this study, differences in fracture length, height and width are often greater than a factor of two. In addition, several comparisons of the same model with different options show a large variability in model output depending upon the options chosen. Two comparisons were made of the same model run by different companies; in both cases the agreement was good. 41 refs., 54 figs., 83 tabs.

  16. Performance results of HESP physical model

    Science.gov (United States)

    Chanumolu, Anantha; Thirupathi, Sivarani; Jones, Damien; Giridhar, Sunetra; Grobler, Deon; Jakobsson, Robert

    2017-02-01

    As a continuation of the published work on a model-based calibration technique with HESP (Hanle Echelle Spectrograph) as a case study, in this paper we present the performance results of the technique. We also describe how the open parameters were chosen in the model for optimization, the glass data accuracy and the handling of discrepancies. It is observed through simulations that discrepancies in the glass data can be identified but not quantified; having accurate glass data is therefore important, and such data can be obtained from the glass manufacturers. The model's performance in various aspects is presented using the ThAr calibration frames from HESP during its pre-shipment tests. The accuracy of the model predictions, a comparison of its wavelength calibration with conventional empirical fitting, the behaviour of the open parameters in optimization, the model's ability to track instrumental drifts in the spectrum, and the performance of the double fibres are discussed. It is observed that the optimized model is able to predict to a high accuracy the drifts in the spectrum caused by environmental fluctuations. It is also observed that the pattern in the spectral drifts across the 2D spectrum, which varies from image to image, is predictable with the optimized model. We will also discuss the possible science cases where the model can contribute.

  17. Randomization in laboratory procedure is key to obtaining reproducible microarray results.

    Directory of Open Access Journals (Sweden)

    Hyuna Yang

    Full Text Available The quality of gene expression microarray data has improved dramatically since the first arrays were introduced in the late 1990s. However, the reproducibility of data generated at multiple laboratory sites remains a matter of concern, especially for scientists who are attempting to combine and analyze data from public repositories. We have carried out a study in which a common set of RNA samples was assayed five times in four different laboratories using Affymetrix GeneChip arrays. We observed dramatic differences in the results across laboratories and identified batch effects in array processing as one of the primary causes for these differences. When batch processing of samples is confounded with experimental factors of interest it is not possible to separate their effects, and lists of differentially expressed genes may include many artifacts. This study demonstrates the substantial impact of sample processing on microarray analysis results and underscores the need for randomization in the laboratory as a means to avoid confounding of biological factors with procedural effects.
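Randomizing the assignment of samples to processing batches, so that batch (a procedural factor) is not confounded with the biological groups of interest, is straightforward to implement. A minimal sketch with hypothetical sample labels:

```python
import random

def randomized_batches(samples, n_batches, seed=42):
    """Randomly assign samples to processing batches so that no
    experimental group is systematically tied to one batch."""
    rng = random.Random(seed)   # fixed seed makes the design reproducible
    shuffled = samples[:]
    rng.shuffle(shuffled)
    return [shuffled[i::n_batches] for i in range(n_batches)]

# Hypothetical design: treated and control samples split over 2 batches.
samples = [f"treated_{i}" for i in range(4)] + [f"control_{i}" for i in range(4)]
batches = randomized_batches(samples, 2)
```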

  18. Procedure for identifying models for the heat dynamics of buildings

    DEFF Research Database (Denmark)

    Bacher, Peder; Madsen, Henrik

    This report describes a new method for obtaining detailed information about the heat dynamics of a building using frequent readings of the heat consumption. Such a procedure is considered of utmost importance as a key procedure for using readings from smart meters, which are expected to be installed in almost all buildings in the coming years.

  19. Modeling Malaysia's Energy System: Some Preliminary Results

    Directory of Open Access Journals (Sweden)

    Ahmad M. Yusof

    2011-01-01

    Full Text Available Problem statement: The current dynamic and fragile world energy environment necessitates the development of a new energy model dedicated to analyzing Malaysia's energy scenarios. Approach: The model is a network flow model that traces the flow of energy carriers from their sources (import and mining) through conversion and transformation processes for the production of energy products to final destinations (energy demand sectors). The integration with the economic sectors is done exogenously by specifying the annual sectoral energy demand levels. The model in turn optimizes the energy variables for a specified objective function to meet those demands. Results: By minimizing the inter-temporal petroleum product imports for the crude oil system, the annual extraction level of Tapis blend is projected at 579600 barrels per day. The aggregate demand for petroleum products is projected to grow at 2.1% per year, while motor gasoline and diesel constitute 42 and 38% of the petroleum products demand mix, respectively, over the 5 year planning period. Petroleum product imports are expected to grow at 6.0% per year. Conclusion: The preliminary results indicate that the model performs as expected. Thus other types of energy carriers such as natural gas, coal and biomass will be added to the energy system for the overall development of the Malaysia energy model.
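For a single energy carrier, the network-flow allocation reduces to filling demand from the cheapest source first, subject to capacity. A toy sketch with invented numbers (an illustration of the allocation logic, not the model's actual data or solver):

```python
def cheapest_supply(demand, domestic_cost, domestic_cap, import_cost):
    """Meet an energy demand at minimum cost from two sources: capped
    domestic extraction and uncapped imports. A one-commodity toy
    version of the network-flow optimization described in the abstract."""
    if domestic_cost <= import_cost:
        domestic = min(demand, domestic_cap)   # fill from the cheap source
    else:
        domestic = 0.0
    imported = demand - domestic               # imports cover the rest
    cost = domestic * domestic_cost + imported * import_cost
    return domestic, imported, cost

# Hypothetical numbers: demand of 700 units/day, domestic crude capped
# at 580 units/day and cheaper than imports.
dom, imp, cost = cheapest_supply(700.0, 10.0, 580.0, 15.0)
```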

  20. Spatial Statistical Procedures to Validate Input Data in Energy Models

    Energy Technology Data Exchange (ETDEWEB)

    Johannesson, G.; Stewart, J.; Barr, C.; Brady Sabeff, L.; George, R.; Heimiller, D.; Milbrandt, A.

    2006-01-01

    Energy modeling and analysis often relies on data collected for other purposes such as census counts, atmospheric and air quality observations, economic trends, and other primarily non-energy related uses. Systematic collection of empirical data solely for regional, national, and global energy modeling has not been established as in the abovementioned fields. Empirical and modeled data relevant to energy modeling is reported and available at various spatial and temporal scales that might or might not be those needed and used by the energy modeling community. The incorrect representation of spatial and temporal components of these data sets can result in energy models producing misleading conclusions, especially in cases of newly evolving technologies with spatial and temporal operating characteristics different from the dominant fossil and nuclear technologies that powered the energy economy over the last two hundred years. Increased private and government research and development and public interest in alternative technologies that have a benign effect on the climate and the environment have spurred interest in wind, solar, hydrogen, and other alternative energy sources and energy carriers. Many of these technologies require much finer spatial and temporal detail to determine optimal engineering designs, resource availability, and market potential. This paper presents exploratory and modeling techniques in spatial statistics that can improve the usefulness of empirical and modeled data sets that do not initially meet the spatial and/or temporal requirements of energy models. In particular, we focus on (1) aggregation and disaggregation of spatial data, (2) predicting missing data, and (3) merging spatial data sets. In addition, we introduce relevant statistical software models commonly used in the field for various sizes and types of data sets.
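Of the three tasks, disaggregation is the most mechanical: a regional total is distributed over grid cells in proportion to an auxiliary weight such as population or land area. A minimal sketch with hypothetical numbers:

```python
def disaggregate(total, weights):
    """Distribute a regional total over grid cells in proportion to
    nonnegative cell weights (e.g. population or land area), a common
    step when energy data and model grids differ in resolution."""
    s = sum(weights)
    if s == 0:
        raise ValueError("weights must not all be zero")
    return [total * w / s for w in weights]

# Hypothetical: a county-level energy demand of 120 GWh spread over
# four grid cells weighted by population share.
cells = disaggregate(120.0, [1.0, 3.0, 2.0, 0.0])
```

A key property to verify in any such scheme is mass conservation: the cell values must sum back to the regional total.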

  2. Summary of FY15 results of benchmark modeling activities

    Energy Technology Data Exchange (ETDEWEB)

    Arguello, J. Guadalupe [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States)

    2015-08-01

    Sandia is a contributing partner in the third phase of a U.S.-German "Joint Project" entitled "Comparison of current constitutive models and simulation procedures on the basis of model calculations of the thermo-mechanical behavior and healing of rock salt." The first goal of the project is to check the ability of numerical modeling tools to correctly describe the relevant deformation phenomena in rock salt under various influences. Achieving this goal will lead to increased confidence in the results of numerical simulations related to the secure storage of radioactive wastes in rock salt, thereby enhancing the acceptance of the results. These results may ultimately be used to make various assertions regarding both the stability analysis of an underground repository in salt, during the operating phase, and the long-term integrity of the geological barrier against the release of harmful substances into the biosphere, in the post-operating phase.

  3. Asymptotic results and statistical procedures for time-changed Lévy processes sampled at hitting times

    CERN Document Server

    Rosenbaum, Mathieu

    2010-01-01

    We provide asymptotic results and develop high frequency statistical procedures for time-changed Lévy processes sampled at random instants. The sampling times are given by first hitting times of symmetric barriers whose distance with respect to the starting point is equal to ε. This setting can be seen as a first step towards a model for tick-by-tick financial data allowing for large jumps. For a wide class of Lévy processes, we introduce a renormalization depending on ε, under which the Lévy process converges in law to an α-stable process as ε goes to 0. The convergence is extended to moments of hitting times and overshoots. In particular, these results allow us to construct consistent estimators of the time change and of the Blumenthal-Getoor index of the underlying Lévy process. Convergence rates and a central limit theorem are established under additional assumptions.
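The sampling scheme is easy to simulate in the simplest case, standard Brownian motion (a Lévy process with index 2), for which the expected first hitting time of the symmetric barriers ±ε is ε². A toy Monte Carlo sketch of one sampling instant (an illustration of the setting only, not the paper's estimators):

```python
import random

def first_hitting_time(eps, dt=1e-4, seed=7):
    """Simulate a standard Brownian motion on a time grid and return
    the first grid time at which |W_t| reaches the barrier eps,
    together with the (slightly overshooting) terminal value."""
    rng = random.Random(seed)
    w, t = 0.0, 0.0
    while abs(w) < eps:
        w += rng.gauss(0.0, dt ** 0.5)  # Brownian increment ~ N(0, dt)
        t += dt
    return t, w

t_hit, w_hit = first_hitting_time(0.05)
```

Because the path is simulated on a grid, the recorded value overshoots the barrier slightly, which is precisely the overshoot quantity whose moments the paper controls.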

  4. A physiological production model for cacao : results of model simulations

    NARCIS (Netherlands)

    Zuidema, P.A.; Leffelaar, P.A.

    2002-01-01

    CASE2 is a physiological model for cocoa (Theobroma cacao L.) growth and yield. This report introduces the CAcao Simulation Engine for water-limited production in a non-technical way and presents simulation results obtained with the model.

  6. Computer-based procedure for field activities: Results from three evaluations at nuclear power plants

    Energy Technology Data Exchange (ETDEWEB)

    Oxstrand, Johanna [Idaho National Lab. (INL), Idaho Falls, ID (United States); bly, Aaron [Idaho National Lab. (INL), Idaho Falls, ID (United States); LeBlanc, Katya [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2014-09-01

    Nearly all activities that involve human interaction with the systems of a nuclear power plant are guided by procedures. The paper-based procedures (PBPs) currently used by industry have a demonstrated history of ensuring safety; however, improving procedure use could yield tremendous savings in increased efficiency and safety. One potential way to improve procedure-based activities is through the use of computer-based procedures (CBPs). Computer-based procedures provide the opportunity to incorporate context-driven job aids, such as drawings, photos, and just-in-time training, into the CBP system. One obvious advantage of this capability is reducing the time spent tracking down the applicable documentation. Additionally, human performance tools can be integrated in the CBP system in such a way that it helps the worker focus on the task rather than on the tools. Some tools can be completely incorporated into the CBP system, such as pre-job briefs, placekeeping, correct component verification, and peer checks. Other tools can be partly integrated in a fashion that reduces the time and labor required, such as concurrent and independent verification. Another benefit of CBPs compared to PBPs is dynamic procedure presentation. PBPs are static documents, which limits the degree to which the information presented can be tailored to the task and conditions when the procedure is executed. The CBP system could be configured to display only the relevant steps based on operating mode, plant status, and the task at hand. A dynamic presentation of the procedure (also known as context-sensitive procedures) will guide the user down the path of relevant steps based on the current conditions. This feature will reduce the user’s workload and inherently reduce the risk of incorrectly marking a step as not applicable, as well as the risk of incorrectly performing a step that should have been marked as not applicable. As part of the Department of Energy’s (DOE) Light Water Reactors Sustainability Program
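The context-sensitive presentation described above amounts to attaching applicability conditions to each step and filtering on the current plant state. A schematic sketch with hypothetical steps and modes (not an actual CBP system or its data model):

```python
# Each step declares the plant modes in which it applies; the CBP
# engine shows only the steps relevant to the current mode, so the
# user never has to mark the others "not applicable" by hand.
PROCEDURE = [
    {"id": 1, "text": "Verify pump A is running", "modes": {"startup", "normal"}},
    {"id": 2, "text": "Open valve V-101",         "modes": {"startup"}},
    {"id": 3, "text": "Log tank level",           "modes": {"startup", "normal", "shutdown"}},
]

def relevant_steps(procedure, current_mode):
    """Context-sensitive filtering: return only the steps applicable
    in the current operating mode."""
    return [step for step in procedure if current_mode in step["modes"]]

shown = relevant_steps(PROCEDURE, "normal")
```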

  7. Modelling rainfall erosion resulting from climate change

    Science.gov (United States)

    Kinnell, Peter

    2016-04-01

    It is well known that soil erosion leads to agricultural productivity decline and contributes to water quality decline. The current widely used models for determining soil erosion for management purposes in agriculture focus on long term (~20 years) average annual soil loss and are not well suited to determining variations that occur over short timespans and as a result of climate change. Soil loss resulting from rainfall erosion is directly dependent on the product of runoff and sediment concentration both of which are likely to be influenced by climate change. This presentation demonstrates the capacity of models like the USLE, USLE-M and WEPP to predict variations in runoff and erosion associated with rainfall events eroding bare fallow plots in the USA with a view to modelling rainfall erosion in areas subject to climate change.
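The USLE family referenced above estimates average annual soil loss as the product A = R * K * LS * C * P. A toy computation with hypothetical factor values (units depend on the factor conventions used; the numbers below are purely illustrative):

```python
def usle_soil_loss(r, k, ls, c, p):
    """Average annual soil loss A = R * K * LS * C * P (USLE form).
    r: rainfall erosivity, k: soil erodibility, ls: combined slope
    length and steepness factor, c: cover-management factor,
    p: support practice factor."""
    return r * k * ls * c * p

# Hypothetical factor values for a cultivated plot.
a = usle_soil_loss(r=120.0, k=0.3, ls=1.2, c=0.25, p=1.0)
```

The multiplicative form makes the climate-change sensitivity explicit: a change in rainfall erosivity R scales the predicted loss proportionally, holding the other factors fixed.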

  8. Simulation Modeling of Radio Direction Finding Results

    Directory of Open Access Journals (Sweden)

    K. Pelikan

    1994-12-01

    It is sometimes difficult to determine analytically the error probabilities of direction finding results when evaluating algorithms of practical interest. Probabilistic simulation models are described in this paper that can be used to study the error performance of new direction finding systems or of geographical modifications to existing configurations.

  9. Analysis and Design Procedure of LVLP Sub-bandgap Reference - Development and Results

    Directory of Open Access Journals (Sweden)

    T. Urban

    2011-04-01

    This work presents a thorough analysis and design of a low-voltage low-power voltage reference circuit with sub-bandgap output voltage. The outcome of the analysis and the resulting design rules are intended to be general and suitable for similar topologies with only minor modifications. The general analysis is followed by the selection of a specific topology. The chosen topology is analyzed for particular parameters which are standard industrial circuit specifications. These parameters are expressed mathematically; some are simplified, and equivalent circuits are used. The analysis and proposed design procedure focus mainly on the versatility of the IP block. The features of the circuit suit low-voltage low-power design, with less than 10 µA supply current draw at a 1.3 V supply voltage. For testing purposes a complex transistor-level design was created and verified over a wide range of supply voltages (1.3 to 3.3 V) and temperatures (-45 to 95 °C), all in a concrete 0.35 µm IC design process using Mentor Graphics® and Cadence® software.

  10. Using X-ray computed tomography to evaluate the initial saturation resulting from different saturation procedures

    DEFF Research Database (Denmark)

    Christensen, Britt Stenhøj Baun; Wildenschild, D; Jensen, K.H.

    2006-01-01

    …saturation. In this study three techniques often applied in the laboratory have been evaluated for a fine sand sample: (1) venting of the sample with carbon dioxide prior to saturation, (2) applying vacuum to the sample in the beginning of the saturation procedure, and finally (3) the use of degassed water… Five different combinations of the above-mentioned saturation procedures were applied to a disturbed silica sand sample. The sample was drained with pressurized nitrogen between each saturation and allowed to saturate for the same length of time for all the different procedures… The sample was scanned in 1 mm intervals over the height of the 3.5 cm tall sample, providing detailed information on the performance of the different procedures. Both gravimetric measurements and CT attenuation levels showed that venting the sample with carbon dioxide prior to saturation clearly improved initial saturation, whereas the use…

  11. A P-value model for theoretical power analysis and its applications in multiple testing procedures

    Directory of Open Access Journals (Sweden)

    Fengqing Zhang

    2016-10-01

    Abstract. Background: Power analysis is a critical aspect of the design of experiments to detect an effect of a given size. When multiple hypotheses are tested simultaneously, multiplicity adjustments to p-values should be taken into account in power analysis. There are a limited number of studies on power analysis in multiple testing procedures. For some methods the theoretical analysis is difficult and extensive numerical simulations are often needed, while other methods oversimplify the information under the alternative hypothesis. To this end, this paper aims to develop a new statistical model for power analysis in multiple testing procedures. Methods: We propose a step-function-based p-value model under the alternative hypothesis, which is simple enough to perform power analysis without simulations, yet not so simple that it loses the information from the alternative hypothesis. The first step is to transform distributions of different test statistics (e.g., t, chi-square or F) to distributions of corresponding p-values. We then use a step function to approximate each of the p-value distributions by matching the mean and variance. Lastly, the step-function-based p-value model can be used for theoretical power analysis. Results: The proposed model is applied to problems in multiple testing procedures. We first show how the most powerful critical constants can be chosen using the step-function-based p-value model. Our model is then applied to the field of multiple testing procedures to explain the assumption of monotonicity of the critical constants. Lastly, we apply our model to a behavioral weight loss and maintenance study to select the optimal critical constants. Conclusions: The proposed model is easy to implement and preserves the information from the alternative hypothesis.
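The first step described above, transforming the distribution of a test statistic under the alternative into a distribution of p-values and extracting the two moments that the step function is fit to, can be sketched by Monte Carlo. This is an illustrative reconstruction, not the authors' code; the one-sided z-test and the effect size delta = 2 are assumptions:

```python
import math
import random

random.seed(0)

def normal_sf(z):
    """Upper-tail probability of the standard normal distribution."""
    return 0.5 * math.erfc(z / math.sqrt(2))

# One-sided z-test: under the alternative H1 the statistic is N(delta, 1).
delta = 2.0
n_sims = 100_000
pvals = [normal_sf(random.gauss(delta, 1.0)) for _ in range(n_sims)]

# The step-function p-value model would be fit by matching these moments:
mean_p = sum(pvals) / n_sims
var_p = sum((p - mean_p) ** 2 for p in pvals) / n_sims

# Raw (unadjusted) power at alpha = 0.05, for reference:
power = sum(p <= 0.05 for p in pvals) / n_sims
```

For this alternative the mean p-value is roughly 0.08 and the power near 0.64; the point of the paper's model is that once the step function matches these moments, power calculations no longer require the simulation loop.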

  12. Long-term results of the Latarjet procedure for anterior instability of the shoulder.

    Science.gov (United States)

    Mizuno, Naoko; Denard, Patrick J; Raiss, Patric; Melis, Barbara; Walch, Gilles

    2014-11-01

    The Latarjet procedure is effective in managing anterior glenohumeral instability in the short term, but there is concern about postoperative arthritis. The purpose of this study was to evaluate the long-term functional outcome after the Latarjet procedure and to assess the prevalence of and risk factors for glenohumeral arthritis after this procedure. A retrospective review was conducted of 68 Latarjet procedures at a mean of 20 years postoperatively. The mean age at surgery was 29.4 years. Functional outcome was determined by the Rowe score, subjective shoulder value, and recurrence of instability. Preoperative arthritis and postoperative radiographs were reviewed to evaluate the development or progression of arthritis. The mean Rowe score increased from 37.9 preoperatively to 89.6 at final follow-up. The Latarjet procedure provides excellent long-term outcomes in the treatment of recurrent anterior glenohumeral instability. Twenty years after the Latarjet procedure, arthritis may develop or progress in 23.5% of cases, but the majority of arthritis is mild. Copyright © 2014 Journal of Shoulder and Elbow Surgery Board of Trustees. Published by Elsevier Inc. All rights reserved.

  13. Numerical study of RF exposure and the resulting temperature rise in the foetus during a magnetic resonance procedure

    Energy Technology Data Exchange (ETDEWEB)

    Hand, J W; Li, Y; Hajnal, J V [Imaging Sciences Department, Imperial College London (Hammersmith Campus), London W12 0NN (United Kingdom)], E-mail: j.hand@imperial.ac.uk

    2010-02-21

    Numerical simulations of specific absorption rate (SAR) and temperature changes in a 26-week pregnant woman model within typical birdcage body coils as used in 1.5 T and 3 T MRI scanners are described. Spatial distributions of SAR and the resulting spatial and temporal changes in temperature are determined using a finite difference time domain method and a finite difference bio-heat transfer solver that accounts for discrete vessels. Heat transfer from foetus to placenta via the umbilical vein and arteries as well as that across the foetal skin/amniotic fluid/uterine wall boundaries is modelled. Results suggest that for procedures compliant with IEC normal mode conditions (maternal whole-body averaged SARMWB ≤ 2 W kg-1, continuous or time-averaged over 6 min), whole foetal SAR, local foetal SAR10g and average foetal temperature are within international safety limits. For continuous RF exposure at SARMWB = 2 W kg-1 over periods of 7.5 min or longer, a maximum local foetal temperature >38 °C may occur. However, assessment of the risk posed by such maximum temperatures predicted in a static model is difficult because of frequent foetal movement. Results also confirm that when SARMWB = 2 W kg-1, some local SAR10g values in the mother's trunk and extremities exceed recommended limits.

  14. Numerical study of RF exposure and the resulting temperature rise in the foetus during a magnetic resonance procedure

    Science.gov (United States)

    Hand, J. W.; Li, Y.; Hajnal, J. V.

    2010-02-01

    Numerical simulations of specific absorption rate (SAR) and temperature changes in a 26-week pregnant woman model within typical birdcage body coils as used in 1.5 T and 3 T MRI scanners are described. Spatial distributions of SAR and the resulting spatial and temporal changes in temperature are determined using a finite difference time domain method and a finite difference bio-heat transfer solver that accounts for discrete vessels. Heat transfer from foetus to placenta via the umbilical vein and arteries as well as that across the foetal skin/amniotic fluid/uterine wall boundaries is modelled. Results suggest that for procedures compliant with IEC normal mode conditions (maternal whole-body averaged SARMWB ≤ 2 W kg-1, continuous or time-averaged over 6 min), whole foetal SAR, local foetal SAR10g and average foetal temperature are within international safety limits. For continuous RF exposure at SARMWB = 2 W kg-1 over periods of 7.5 min or longer, a maximum local foetal temperature >38 °C may occur. However, assessment of the risk posed by such maximum temperatures predicted in a static model is difficult because of frequent foetal movement. Results also confirm that when SARMWB = 2 W kg-1, some local SAR10g values in the mother's trunk and extremities exceed recommended limits.

  15. Glaucoma-inducing Procedure in an In Vivo Rat Model and Whole-mount Retina Preparation.

    Science.gov (United States)

    Gossman, Cynthia A; Linn, David M; Linn, Cindy

    2016-01-01

    Glaucoma is a disease of the central nervous system affecting retinal ganglion cells (RGCs). RGC axons making up the optic nerve carry visual input to the brain for visual perception. Damage to RGCs and their axons leads to vision loss and/or blindness. Although the specific cause of glaucoma is unknown, the primary risk factor for the disease is an elevated intraocular pressure. Glaucoma-inducing procedures in animal models are a valuable tool for researchers studying the mechanisms of RGC death. Such information can lead to the development of effective neuroprotective treatments that could aid in the prevention of vision loss. The protocol in this paper describes a method of inducing glaucoma-like conditions in an in vivo rat model in which 50 µl of 2 M hypertonic saline is injected into the episcleral venous plexus. Blanching of the vessels indicates successful injection. This procedure causes loss of RGCs to simulate glaucoma. One month following injection, animals are sacrificed and the eyes are removed. Next, the cornea, lens, and vitreous are removed to make an eyecup. The retina is then peeled from the back of the eye and pinned onto Sylgard dishes using cactus needles. At this point, neurons in the retina can be stained for analysis. Results from this lab show that approximately 25% of RGCs are lost within one month of the procedure when compared to internal controls. This procedure allows for quantitative analysis of retinal ganglion cell death in an in vivo rat glaucoma model.

  16. A Two-Dimensional Modeling Procedure to Estimate the Loss Equivalent Resistance Including the Saturation Effect

    Directory of Open Access Journals (Sweden)

    Rosa Ana Salas

    2013-11-01

    We propose a modeling procedure specifically designed for a ferrite inductor excited by a time-domain waveform. We estimate the loss resistance in the core (a parameter of the electrical model of the inductor) by means of a 2D Finite Element Method, which leads to significant computational advantages over a 3D model. The methodology is validated for an RM (rectangular modulus) ferrite core working in the linear and saturation regions. Excellent agreement is found between the experimental data and the computational results.

  17. Marginal production in the Gulf of Mexico - II. Model results

    Energy Technology Data Exchange (ETDEWEB)

    Kaiser, Mark J.; Yu, Yunke [Center for Energy Studies, Louisiana State University, Baton Rouge, LA 70803 (United States)

    2010-08-15

    In the second part of this two-part article on marginal production in the Gulf of Mexico, we estimate the number of committed assets in water depths of less than 1000 ft that are expected to be marginal over a 60-year time horizon. We compute the expected quantity and value of the production and gross revenue streams of the gulf's committed asset inventory circa January 2007 using a probabilistic model framework. Cumulative hydrocarbon production from the producing inventory is estimated to be 1056 MMbbl of oil and 13.3 Tcf of gas. Marginal production from the committed asset inventory is expected to contribute 4.1% of total oil production and 5.4% of gas production. A meta-evaluation procedure is adapted to present the results of sensitivity analysis. Model results are discussed along with a description of the model framework and the limitations of the analysis. (author)

  18. Statistical procedures for evaluating daily and monthly hydrologic model predictions

    Science.gov (United States)

    Coffey, M.E.; Workman, S.R.; Taraba, J.L.; Fogle, A.W.

    2004-01-01

    The overall study objective was to evaluate the applicability of different qualitative and quantitative methods for comparing daily and monthly SWAT computer model hydrologic streamflow predictions to observed data, and to recommend statistical methods for use in future model evaluations. Statistical methods were tested using daily streamflows and monthly equivalent runoff depths. The statistical techniques included linear regression, Nash-Sutcliffe efficiency, nonparametric tests, t-test, objective functions, autocorrelation, and cross-correlation. None of the methods specifically accounted for the non-normal distribution of, and dependence between, data points in the daily predicted and observed data. Of the tested methods, median objective functions, the sign test, autocorrelation, and cross-correlation were most applicable for the daily data. The robust coefficient of determination (CD*) and robust modeling efficiency (EF*) objective functions were the preferred methods for daily model results due to the ease of comparing these values with a fixed ideal reference value of one. Predicted and observed monthly totals were more normally distributed, and there was less dependence between individual monthly totals than was observed for the corresponding predicted and observed daily values. More statistical methods were available for comparing SWAT model-predicted and observed monthly totals. The 1995 monthly SWAT model predictions and observed data had a regression R² of 0.70, a Nash-Sutcliffe efficiency of 0.41, and the t-test failed to reject the equal-data-means hypothesis. The Nash-Sutcliffe coefficient and the R² coefficient were the preferred methods for monthly results due to the ability to compare these coefficients to a set ideal value of one.
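The Nash-Sutcliffe efficiency used above as a preferred monthly criterion compares the model's squared error against the variance of the observations, so 1.0 is a perfect fit and values at or below zero mean the model is no better than simply predicting the observed mean. A minimal sketch, with hypothetical runoff values rather than data from the study:

```python
def nash_sutcliffe(observed, predicted):
    """Nash-Sutcliffe efficiency: 1 - SSE / total sum of squares.

    Returns 1.0 for a perfect fit; <= 0 means the model performs
    no better than the mean of the observations.
    """
    mean_obs = sum(observed) / len(observed)
    sse = sum((o - p) ** 2 for o, p in zip(observed, predicted))
    ss_total = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - sse / ss_total

# Hypothetical monthly equivalent runoff depths (mm):
obs = [12.0, 30.0, 45.0, 22.0, 8.0]
pred = [10.0, 33.0, 40.0, 25.0, 9.0]
print(round(nash_sutcliffe(obs, pred), 3))  # prints 0.945
```

The fixed ideal value of one is what makes this coefficient (like the paper's CD* and EF* objective functions) convenient for comparing model runs.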

  19. A review of mechanisms and modelling procedures for landslide tsunamis

    Science.gov (United States)

    Løvholt, Finn; Harbitz, Carl B.; Glimsdal, Sylfest

    2017-04-01

    Landslides, including volcano flank collapses and volcanically induced flows, constitute the second-most important cause of tsunamis after earthquakes. Compared to earthquakes, landslides are more diverse with respect to how they generate tsunamis. Here, we give an overview of the main generation mechanisms for landslide tsunamis. In the presentation, a mix of results from analytical models, numerical models, laboratory experiments, and case studies is used to illustrate the diversity, but also to point out some common characteristics. Different numerical modelling techniques for the landslide evolution and for the tsunami generation and propagation, as well as the effect of frequency dispersion, are also briefly discussed. Basic tsunami generation mechanisms for different types of landslides, ranging from large submarine translational landslides, to impulsive submarine slumps, to violent subaerial landslides and volcano flank collapses, are reviewed. The importance of the landslide kinematics is given attention, including the interplay between landslide acceleration, the landslide velocity-to-depth ratio (Froude number) and dimensions. Using numerical simulations, we demonstrate how landslide deformation and retrogressive failure development influence tsunamigenesis. Generation mechanisms for subaerial landslides are reviewed by means of scaling relations from laboratory experiments and numerical modelling. Finally, it is demonstrated how the different degrees of complexity in landslide tsunamigenesis need to be reflected by increased sophistication in numerical models.

  20. Capturing phenotypic heterogeneity in MPS I: results of an international consensus procedure.

    Science.gov (United States)

    de Ru, Minke H; Teunissen, Quirine Ga; van der Lee, Johanna H; Beck, Michael; Bodamer, Olaf A; Clarke, Lorne A; Hollak, Carla E; Lin, Shuan-Pei; Rojas, Maria-Verónica Muñoz; Pastores, Gregory M; Raiman, Julian A; Scarpa, Maurizio; Treacy, Eileen P; Tylki-Szymanska, Anna; Wraith, J Edmond; Zeman, Jiri; Wijburg, Frits A

    2012-04-23

    Mucopolysaccharidosis type I (MPS I) is traditionally divided into three phenotypes: the severe Hurler (MPS I-H) phenotype, the intermediate Hurler-Scheie (MPS I-H/S) phenotype and the attenuated Scheie (MPS I-S) phenotype. However, there are no clear criteria for delineating the different phenotypes. Because decisions about optimal treatment (enzyme replacement therapy or hematopoietic stem cell transplantation) need to be made quickly and depend on the presumed phenotype, an assessment of phenotypic severity should be performed soon after diagnosis. Therefore, a numerical severity scale for classifying different MPS I phenotypes at diagnosis based on clinical signs and symptoms was developed. A consensus procedure based on a combined modified Delphi method and a nominal group technique was undertaken. It consisted of two written rounds and a face-to-face meeting. Sixteen MPS I experts participated in the process. The main goal was to identify the most important indicators of phenotypic severity and include these in a numerical severity scale. The correlation between the median subjective expert MPS I rating and the scores derived from this severity scale was used as an indicator of validity. Full consensus was reached on six key clinical items for assessing severity: age of onset of signs and symptoms, developmental delay, joint stiffness/arthropathy/contractures, kyphosis, cardiomyopathy and large head/frontal bossing. Due to the remarkably large variability in the expert MPS I assessments, however, a reliable numerical scale could not be constructed. Because of this variability, such a scale would always result in patients whose calculated severity score differed unacceptably from the median expert severity score, which was considered to be the 'gold standard'. Although consensus was reached on the six key items for assessing phenotypic severity in MPS I, expert opinion on phenotypic severity at diagnosis proved to be highly variable. This subjectivity

  1. [Results of decompressive-stabilizing procedures via unilateral approach in lumbar spinal stenosis].

    Science.gov (United States)

    Krut'ko, A V

    2012-01-01

    The aim of this study was to investigate the capabilities, advantages and limitations of bilateral decompression via a unilateral approach in decompressive-stabilizing procedures in patients with degenerative lumbar spine disease, and to develop the technology and its technical performance. The controlled study included 372 patients (age range 27-74 years), all operated on for clinical manifestations of lumbar spinal stenosis. The main group consisted of 44 patients who underwent bilateral decompression via a unilateral approach with stabilization of the involved segments. The control group included 328 patients who were operated on using the standard bilateral technique with stabilization. A total of 52 segments were treated in the first group and 351 in the second. In all patients with neurogenic intermittent claudication, symptoms relieved after decompressive-stabilizing surgery. Analysis of the duration of surgery (per segment) demonstrated that the less invasive technique requires as much time as the conventional one. However, mean intraoperative blood loss in the first group was half that in the second. No patient in the first group required hemotransfusion, while in the second group hemotransfusion was performed in 57 (17.4%) cases due to blood loss. In the early postoperative period the intensity of pain (according to VAS) gradually decreased in both groups. Mean hospital stay was 9.9 +/- 3.1 days in the main group and 14.7 +/- 4.7 days in the control group. Bilateral spinal canal decompression via a unilateral approach decreases surgical trauma, blood loss, complication rate and hospital stay. Postoperative results are comparable with the conventional technique.

  2. Patient radiation doses in the most common interventional cardiology procedures in Croatia: first results.

    Science.gov (United States)

    Brnić, Z; Krpan, T; Faj, D; Kubelka, D; Ramac, J Popić; Posedel, D; Steiner, R; Vidjak, V; Brnić, V; Visković, K; Baraban, V

    2010-02-01

    Apart from its benefits, interventional cardiology (IC) is known to generate high radiation doses to patients and the medical staff involved. The European Union Medical Exposures Directive 97/43/Euratom strongly recommends patient dosimetry in interventional radiology, including IC. IC patient radiation doses in four representative IC rooms in Croatia were investigated. Setting reference levels for these procedures is difficult due to the large differences in procedure complexity. Nevertheless, it is important that some guideline values are available as a benchmark to guide operators during these potentially high-dose procedures. Local and national diagnostic reference levels (DRLs) were proposed as guidance. A total of 138 diagnostic (coronary angiography, CA) and 151 therapeutic (PTCA, stenting) procedures were included. Patient irradiation was measured in terms of kerma-area product (KAP), fluoroscopy time (FT) and number of cine-frames (F). KAP was recorded using calibrated KAP-meters. DRLs of KAP, FT and F were calculated as third-quartile values rounded up to the integer. Skin doses were assessed on a selected sample of high-skin-dose procedures using radiochromic films, and peak skin doses (PSD) are presented. A relatively large range of doses in IC was detected. National DRLs were proposed as follows: 32 Gy cm(2), 6.6 min and 610 frames for CA and 72 Gy cm(2), 19 min and 1270 frames for PTCA. PSD exceeded 2 Gy in 8% of the selected patients. Measuring patient doses in radiological procedures is required by law, but rarely implemented in Croatia. The doses recorded in the study are acceptable when compared with the literature, but optimisation is possible. The preliminary DRL values proposed may be used as a guideline for local departments, and should be a basis for radiation reduction measures and quality assurance programmes in IC in Croatia.
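The convention described in the abstract, a DRL computed as the third-quartile value of the dose distribution rounded up to the integer, can be sketched as follows. The KAP readings are invented for illustration; only the quartile-and-round-up rule comes from the study:

```python
import math
import statistics

def diagnostic_reference_level(values):
    """DRL as the third quartile of the measured dose quantity,
    rounded up to the nearest integer (the study's convention)."""
    q3 = statistics.quantiles(values, n=4)[2]  # third quartile
    return math.ceil(q3)

# Hypothetical KAP readings (Gy cm^2) from coronary angiography runs:
kap = [12.4, 18.9, 25.1, 31.5, 22.0, 40.2, 15.7, 28.3]
print(diagnostic_reference_level(kap))  # prints 31
```

Using the third quartile rather than the mean means a DRL flags the highest-dose quarter of procedures for review without being dragged upward by a few extreme cases.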

  3. A visual graphic/haptic rendering model for hysteroscopic procedures.

    Science.gov (United States)

    Lim, Fabian; Brown, Ian; McColl, Ryan; Seligman, Cory; Alsaraira, Amer

    2006-03-01

    Hysteroscopy is a widely used option for evaluating and treating women with infertility. The procedure utilises an endoscope, inserted through the vagina and cervix, to examine the intra-uterine cavity via a monitor. The difficulty of hysteroscopy from the surgeon's perspective lies in the visual-spatial challenge of interpreting 3D anatomy on a 2D monitor, and in the associated psychomotor skill of overcoming the fulcrum effect. Despite the widespread use of this procedure, currently qualified hysteroscopy surgeons have not been trained in the fundamentals through an organised curriculum. The emergence of virtual reality as an educational tool for this procedure, and for other endoscopic procedures, has undoubtedly raised interest. The ultimate objective is the inclusion of virtual reality training as a mandatory component of gynaecologic endoscopy training. Part of this process involves the design of a simulator encompassing the technical difficulties and complications associated with the procedure. The proposed research examines fundamental hysteroscopy factors, current training and accreditation, and proposes a hysteroscopic simulator design suitable for education and training.

  4. A Comparison of Exposure Control Procedures in CATs Using the 3PL Model

    Science.gov (United States)

    Leroux, Audrey J.; Lopez, Myriam; Hembry, Ian; Dodd, Barbara G.

    2013-01-01

    This study compares the progressive-restricted standard error (PR-SE) exposure control procedure to three commonly used procedures in computerized adaptive testing, the randomesque, Sympson-Hetter (SH), and no exposure control methods. The performance of these four procedures is evaluated using the three-parameter logistic model under the…

  5. New perspective for third generation percutaneous vertebral augmentation procedures: Preliminary results at 12 months

    Directory of Open Access Journals (Sweden)

    Daniele Vanni

    2012-01-01

    Introduction: The prevalence of osteoporotic vertebral fractures (OVF) has increased in recent years. Compression fractures promote a progressive increase in spinal kyphosis, resulting in a weight shift and anterior column overload, with additional OVF risk (domino effect). The aim of this study is to evaluate the outcome of OVF treatment using Spine Jack®, a titanium device for third-generation percutaneous vertebral augmentation procedures (PVAPs). Materials and Methods: From February 2010, a prospective randomized study was performed examining 300 patients who underwent PVAP due to OVF type A1 according to the Magerl/AO spine classification. Patients enrolled in the study were divided into two groups, homogeneous with regard to age (65-85 years), sex, and general clinical findings. Group A included 150 patients who underwent PVAP using the Spine Jack® system; group B (control group) included 150 patients treated by conventional balloon kyphoplasty. Patients underwent a clinical (visual analogue scale and Oswestry disability index) and radiographic follow-up, with post-operative standing plain radiograms of the spine at 1, 6, and 12 months. The radiographic parameters taken into account were: post-operative anterior vertebral body height, pre-operative anterior vertebral body height, cephalic anterior vertebral body height, and caudal anterior vertebral body height. Results: Compared to the Spine Jack® group, the kyphoplasty group required a slightly longer operation time (an average of 40 min in group A vs. 45 min in group B, P < 0.05) and a greater amount of polymethylmethacrylate (4.0 mL in group A vs. 5.0 mL in group B, P < 0.05). The post-operative increase in vertebral body height was greater in the Spine Jack® group than in the kyphoplasty group (P < 0.05). Discussion: PVAPs are based on cement injection into the vertebral body. Vertebroplasty does not allow vertebral body height recovery. Balloon kyphoplasty allows a temporary height

  6. Capturing phenotypic heterogeneity in MPS I: results of an international consensus procedure

    Directory of Open Access Journals (Sweden)

    de Ru Minke H

    2012-04-01

    Abstract. Background: Mucopolysaccharidosis type I (MPS I) is traditionally divided into three phenotypes: the severe Hurler (MPS I-H) phenotype, the intermediate Hurler-Scheie (MPS I-H/S) phenotype and the attenuated Scheie (MPS I-S) phenotype. However, there are no clear criteria for delineating the different phenotypes. Because decisions about optimal treatment (enzyme replacement therapy or hematopoietic stem cell transplantation) need to be made quickly and depend on the presumed phenotype, an assessment of phenotypic severity should be performed soon after diagnosis. Therefore, a numerical severity scale for classifying different MPS I phenotypes at diagnosis based on clinical signs and symptoms was developed. Methods: A consensus procedure based on a combined modified Delphi method and a nominal group technique was undertaken. It consisted of two written rounds and a face-to-face meeting. Sixteen MPS I experts participated in the process. The main goal was to identify the most important indicators of phenotypic severity and include these in a numerical severity scale. The correlation between the median subjective expert MPS I rating and the scores derived from this severity scale was used as an indicator of validity. Results: Full consensus was reached on six key clinical items for assessing severity: age of onset of signs and symptoms, developmental delay, joint stiffness/arthropathy/contractures, kyphosis, cardiomyopathy and large head/frontal bossing. Due to the remarkably large variability in the expert MPS I assessments, however, a reliable numerical scale could not be constructed. Because of this variability, such a scale would always result in patients whose calculated severity score differed unacceptably from the median expert severity score, which was considered to be the 'gold standard'. Conclusions: Although consensus was reached on the six key items for assessing phenotypic severity in MPS I, expert opinion on phenotypic severity at

  7. Penalized variable selection procedure for Cox models with semiparametric relative risk

    CERN Document Server

    Du, Pang; Liang, Hua; 10.1214/09-AOS780

    2010-01-01

    We study the Cox models with semiparametric relative risk, which can be partially linear with one nonparametric component, or multiple additive or nonadditive nonparametric components. A penalized partial likelihood procedure is proposed to simultaneously estimate the parameters and select variables for both the parametric and the nonparametric parts. Two penalties are applied sequentially. The first penalty, governing the smoothness of the multivariate nonlinear covariate effect function, provides a smoothing spline ANOVA framework that is exploited to derive an empirical model selection tool for the nonparametric part. The second penalty, either the smoothly-clipped-absolute-deviation (SCAD) penalty or the adaptive LASSO penalty, achieves variable selection in the parametric part. We show that the resulting estimator of the parametric part possesses the oracle property, and that the estimator of the nonparametric part achieves the optimal rate of convergence. The proposed procedures are shown to work well i...

  8. A Survey of Procedural Methods for Terrain Modelling

    NARCIS (Netherlands)

    Smelik, R.M.; Kraker, J.K. de; Groenewegen, S.A.; Tutenel, T.; Bidarra, R.

    2009-01-01

    Procedural methods are a promising but underused alternative to manual content creation. Commonly heard drawbacks are the randomness of and the lack of control over the output and the absence of integrated solutions, although more recent publications increasingly address these issues. This paper sur

  9. Comparison of Estimation Procedures for Multilevel AR(1) Models

    Directory of Open Access Journals (Sweden)

    Tanja eKrone

    2016-04-01

    To estimate a time series model for multiple individuals, a multilevel model may be used. In this paper we compare two estimation methods for the autocorrelation in multilevel AR(1) models, namely Maximum Likelihood Estimation (MLE) and Bayesian Markov Chain Monte Carlo. Furthermore, we examine the difference between modeling fixed and random individual parameters. To this end, we perform a simulation study with a fully crossed design, in which we vary the length of the time series (10 or 25), the number of individuals per sample (10 or 25), the mean of the autocorrelation (-0.6 to 0.6 inclusive, in steps of 0.3) and the standard deviation of the autocorrelation (0.25 or 0.40). We found that the random estimators of the population autocorrelation show less bias and higher power compared to the fixed estimators. As expected, the random estimators profit strongly from a higher number of individuals, while this effect is small for the fixed estimators. The fixed estimators profit slightly more from a higher number of time points than the random estimators. When possible, random estimation is preferred to fixed estimation. The difference between MLE and Bayesian estimation is nearly negligible. The Bayesian estimation shows a smaller bias, but MLE shows a smaller variability (i.e., standard deviation of the parameter estimates). Finally, better results are found for a higher number of individuals and time points, and for a lower individual variability of the autocorrelation. The effect of the size of the autocorrelation differs between outcome measures.
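The random-effects data-generating process behind the simulation study, each individual's autocorrelation drawn from a normal distribution and then used to generate an AR(1) series, can be sketched as follows. Zero means and unit innovation variance are assumptions for illustration; the abstract does not state them:

```python
import random

random.seed(42)

def simulate_multilevel_ar1(n_individuals, n_timepoints,
                            mu_phi, sd_phi, sigma_e=1.0):
    """Simulate AR(1) series for several individuals whose
    autocorrelation phi_i ~ N(mu_phi, sd_phi), mirroring the
    random-effects setup of the simulation study."""
    series = []
    for _ in range(n_individuals):
        phi = random.gauss(mu_phi, sd_phi)       # individual autocorrelation
        y = [random.gauss(0.0, sigma_e)]         # initial observation
        for _ in range(n_timepoints - 1):
            y.append(phi * y[-1] + random.gauss(0.0, sigma_e))
        series.append(y)
    return series

# One cell of the fully crossed design: 25 individuals, 25 time points,
# mean autocorrelation 0.3 with between-person SD 0.25.
data = simulate_multilevel_ar1(25, 25, mu_phi=0.3, sd_phi=0.25)
```

Fixed estimation would fit a separate phi per individual, whereas random estimation pools them through the N(mu_phi, sd_phi) distribution, which is why the latter benefits so strongly from more individuals.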

  10. The Danish national passenger model - model specification and results

    DEFF Research Database (Denmark)

    Rich, Jeppe; Hansen, Christian Overgaard

    2016-01-01

    The paper describes the structure of the new Danish National Passenger model and provides on this basis a general discussion of large-scale model design, cost-damping and model validation. The paper aims at providing three main contributions to the existing literature. Firstly, at the general level, the paper provides a description of a large-scale forecast model with a discussion of the linkage between population synthesis, demand and assignment. Secondly, the paper gives specific attention to model specification and in particular choice of functional form and cost-damping. Specifically we suggest a family of logarithmic spline functions and illustrate how it is applied in the model. Thirdly and finally, we evaluate model sensitivity and performance by evaluating the distance distribution and elasticities. In the paper we present results where the spline-function is compared with more traditional...

  11. TWO-PROCEDURE OF MODEL RELIABILITY-BASED OPTIMIZATION FOR WATER DISTRIBUTION SYSTEMS

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    Recently, considerable emphasis has been placed on reliability-based optimization models for water distribution systems. But considerable computational effort is needed to determine the reliability-based optimal design of large networks, even of mid-sized networks. In this paper, a new methodology is presented for the reliability analysis of water distribution systems. This methodology consists of two procedures. In the first, the optimal design is constrained only by the pressure heads at demand nodes and is solved in GRG2. Because the reliability constraints are removed from the optimization problem, a large number of simulations need not be conducted, so the computing time is greatly decreased. The second procedure is a linear optimal search procedure, in which the optimal results obtained by GRG2 are adjusted to satisfy the reliability constraints. The results are a group of commercial pipe diameters for which the constraints on pressure heads and reliability at the nodes are satisfied. Therefore, the computational burden is significantly decreased, and the reliability-based optimization is of more practical use.
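The two-procedure idea (a continuous, head-constrained optimization first, then a linear search over commercial diameters adjusted for reliability) can be illustrated on a toy single-pipe network. The sketch below uses the Hazen-Williams head-loss formula and a placeholder reliability check; all names, pipe sizes, and the 10% margin criterion are our own assumptions, not the paper's GRG2 formulation.

```python
# Stage 1: continuous least-cost diameter that just satisfies the
# pressure-head constraint at the demand node (single-pipe toy network).
def continuous_diameter(q, length, c_hw, head_available):
    """Smallest diameter (m) whose Hazen-Williams head loss fits the head budget.
    h = 10.67 * q**1.852 * L / (C**1.852 * D**4.87), SI units."""
    return (10.67 * q**1.852 * length / (c_hw**1.852 * head_available)) ** (1 / 4.87)

# Stage 2: linear search over commercial sizes, moving up from the
# continuous optimum until a (here simplified) reliability check passes.
COMMERCIAL = [0.10, 0.15, 0.20, 0.25, 0.30, 0.40]  # m

def select_commercial(d_cont, reliability_ok):
    for d in COMMERCIAL:
        if d >= d_cont and reliability_ok(d):
            return d
    return COMMERCIAL[-1]

d_cont = continuous_diameter(q=0.05, length=1000.0, c_hw=130.0, head_available=15.0)
# Placeholder reliability criterion: demand a 10% diameter margin.
d_final = select_commercial(d_cont, lambda d: d >= 1.1 * d_cont)
```

The cheap continuous solve does the heavy lifting; the discrete adjustment afterward only walks up a short sorted list, which is why the two-stage split avoids the many reliability simulations a fully coupled optimization would need.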

  12. CMS standard model Higgs boson results

    Directory of Open Access Journals (Sweden)

    Garcia-Abia Pablo

    2013-11-01

    Full Text Available In July 2012 CMS announced the discovery of a new boson with properties resembling those of the long-sought Higgs boson. The analysis of the proton-proton collision data recorded by the CMS detector at the LHC, corresponding to integrated luminosities of 5.1 fb−1 at √s = 7 TeV and 19.6 fb−1 at √s = 8 TeV, confirm the Higgs-like nature of the new boson, with a signal strength associated with vector bosons and fermions consistent with the expectations for a standard model (SM) Higgs boson, and spin-parity clearly favouring the scalar nature of the new boson. In this note I review the updated results of the CMS experiment.

  13. Manual ventilation and open suction procedures contribute to negative pressures in a mechanical lung model

    Science.gov (United States)

    Nakstad, Espen Rostrup; Opdahl, Helge; Heyerdahl, Fridtjof; Borchsenius, Fredrik; Skjønsberg, Ole Henning

    2017-01-01

    Introduction Removal of pulmonary secretions in mechanically ventilated patients usually requires suction with closed catheter systems or flexible bronchoscopes. Manual ventilation is occasionally performed during such procedures if clinicians suspect inadequate ventilation. Suctioning can also be performed with the ventilator entirely disconnected from the endotracheal tube (ETT). The aim of this study was to investigate if these two procedures generate negative airway pressures, which may contribute to atelectasis. Methods The effects of device insertion and suctioning in ETTs were examined in a mechanical lung model with a pressure transducer inserted distal to ETTs of 9 mm, 8 mm and 7 mm internal diameter (ID). A 16 Fr bronchoscope and 12, 14 and 16 Fr suction catheters were used at two different vacuum levels during manual ventilation and with the ETTs disconnected. Results During manual ventilation with ETTs of 9 mm, 8 mm and 7 mm ID, and bronchoscopic suctioning at moderate suction level, peak pressure (PPEAK) dropped from 23, 22 and 24.5 cm H2O to 16, 16 and 15 cm H2O, respectively. Maximum suction reduced PPEAK to 20, 17 and 11 cm H2O, respectively, and the end-expiratory pressure fell from 5, 5.5 and 4.5 cm H2O to –2, –6 and –17 cm H2O. Suctioning through disconnected ETTs (open suction procedure) gave negative model airway pressures throughout the duration of the procedures. Conclusions Manual ventilation and open suction procedures induce negative end-expiratory pressure during endotracheal suctioning, which may have clinical implications in patients who need high PEEP (positive end-expiratory pressure). PMID:28725445

  14. A Long-Term Memory Competitive Process Model of a Common Procedural Error

    Science.gov (United States)

    2013-08-01

    A novel computational cognitive model explains human procedural error in terms of declarative memory processes. This is an early version of a process model intended to predict and explain multiple classes of procedural error a priori. We begin with postcompletion error (PCE), a type of systematic

  15. A Procedure for Building Product Models in Intelligent Agent-based Operations Management

    DEFF Research Database (Denmark)

    Hvam, Lars; Riis, Jesper; Malis, Martin;

    2003-01-01

    This article presents a procedure for building product models to support the specification processes dealing with sales, design of product variants and production preparation. The procedure includes, as the first phase, an analysis and redesign of the business processes that are to be supported by product models. The next phase includes an analysis of the product assortment, and the set up of a so-called product master. Finally the product model is designed and implemented by using object oriented modelling. The procedure is developed in order to ensure that the product models constructed are fit...

  16. Comparison of Real World Energy Consumption to Models and Department of Energy Test Procedures

    Energy Technology Data Exchange (ETDEWEB)

    Goetzler, William [Navigant Consulting, Inc., Burlington, MA (United States); Sutherland, Timothy [Navigant Consulting, Inc., Burlington, MA (United States); Kar, Rahul [Navigant Consulting, Inc., Burlington, MA (United States); Foley, Kevin [Navigant Consulting, Inc., Burlington, MA (United States)

    2011-09-01

    This study investigated the real-world energy performance of appliances and equipment as it compared with models and test procedures. The study looked to determine whether the U.S. Department of Energy and industry test procedures actually replicate real world conditions, whether performance degrades over time, and whether installation patterns and procedures differ from the ideal procedures. The study first identified and prioritized appliances to be evaluated. Then, the study determined whether real world energy consumption differed substantially from predictions and also assessed whether performance degrades over time. Finally, the study recommended test procedure modifications and areas for future research.

  17. Modelling Room Cooling Capacity with Fuzzy Logic Procedure

    African Journals Online (AJOL)

    Modelling with fuzzy logic is an approach to forming ... the way humans think and make judgments [10]. ... artificial intelligence and expert systems [17, 18] to ... from selected cases, human professional computation and the Model predictions.

  18. International system of units traceable results of Hg mass concentration at saturation in air from a newly developed measurement procedure.

    Science.gov (United States)

    Quétel, Christophe R; Zampella, Mariavittoria; Brown, Richard J C; Ent, Hugo; Horvat, Milena; Paredes, Eduardo; Tunc, Murat

    2014-08-05

    Data most commonly used at present to calibrate measurements of mercury vapor concentrations in air come from a relationship known as the "Dumarey equation". It uses a fitting relationship to experimental results obtained nearly 30 years ago. The way these results relate to the international system of units (SI) is not known. This has caused difficulties for the specification and enforcement of limit values for mercury concentrations in air and in emissions to air as part of national or international legislation. Furthermore, there is a significant discrepancy (around 7% at room temperature) between the Dumarey data and data calculated from results of mercury vapor pressure measurements in the presence of only liquid mercury. As an attempt to solve some of these problems, a new measurement procedure is described for SI traceable results of gaseous Hg concentrations at saturation in milliliter samples of air. The aim was to propose a scheme as immune as possible to analytical biases. It was based on isotope dilution (ID) in the liquid phase with the (202)Hg enriched certified reference material ERM-AE640 and measurements of the mercury isotope ratios in ID blends, subsequent to a cold vapor generation step, by inductively coupled plasma mass spectrometry. The process developed involved a combination of interconnected valves and syringes operated by computer controlled pumps and ensured continuity under closed circuit conditions from the air sampling stage onward. Quantitative trapping of the gaseous mercury in the liquid phase was achieved with 11.5 μM KMnO4 in 2% HNO3. Mass concentrations at saturation found from five measurements under room temperature conditions were significantly higher (5.8% on average) than data calculated from the Dumarey equation, but in agreement (-1.2% lower on average) with data based on mercury vapor pressure measurement results. Relative expanded combined uncertainties were estimated following a model-based approach. They ranged from 2

  19. First results from the RAO Variable Star Search Program: I. Background, Procedure, and Results from RAO Field 1

    CERN Document Server

    Williams, Michael D

    2011-01-01

    We describe an ongoing variable star search program and present the first reduced results of a search in a 19 square degree (4.4° × 4.4°) field centered on J2000 α = 22:03:24, δ = +18:54:32. The search was carried out with the Baker-Nunn Patrol Camera located at the Rothney Astrophysical Observatory in the foothills of the Canadian Rockies. A total of 26,271 stars were detected in the field, over a range of about 11-15 (instrumental) magnitudes. Our image processing made use of the IRAF version of the DAOPHOT aperture photometry routine and we used the ANOVA method to search for periodic variations in the light curves. We formally detected periodic variability in 35 stars, that we tentatively classify according to light curve characteristics: 6 EA (Algol), 5 EB (β Lyrae), 19 EW (W UMa), and 5 RR (RR Lyrae) stars. Eleven of the detected variable stars have been reported previously in the literature. The eclipsing binary light curves have been analyzed with a package of light cur...
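The ANOVA period-search step mentioned above amounts to a one-way analysis of variance across phase bins at each trial period: a coherent fold concentrates variance between bins, so the statistic peaks near the true period. The following Python on synthetic data is our own illustration of the idea (sampling pattern, amplitude, and bin count are assumptions, not the survey's pipeline).

```python
import numpy as np

def anova_statistic(times, mags, period, n_bins=8):
    """One-way ANOVA F-like statistic across phase bins: large values
    indicate coherent periodic structure at the trial period."""
    phases = (times / period) % 1.0
    bins = np.minimum((phases * n_bins).astype(int), n_bins - 1)
    grand = mags.mean()
    between, within, k = 0.0, 0.0, 0
    for b in range(n_bins):
        grp = mags[bins == b]
        if len(grp) < 2:
            continue
        k += 1
        between += len(grp) * (grp.mean() - grand) ** 2
        within += ((grp - grp.mean()) ** 2).sum()
    n = len(mags)
    return (between / (k - 1)) / (within / (n - k))

rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 30, 400))      # irregular sampling over 30 nights
true_p = 0.7
m = 12.0 + 0.3 * np.sin(2 * np.pi * t / true_p) + rng.normal(0, 0.05, 400)

trial = np.linspace(0.2, 2.0, 1800)       # trial period grid (days)
scores = np.array([anova_statistic(t, m, p) for p in trial])
best = trial[scores.argmax()]
```

In practice one also has to guard against harmonics and aliases (folding at twice the true period is still coherent, just weaker), which is why detections are usually vetted by inspecting the folded light curve.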

  20. Fusion of range camera and photogrammetry: a systematic procedure for improving 3-D models metric accuracy.

    Science.gov (United States)

    Guidi, G; Beraldin, J A; Ciofi, S; Atzeni, C

    2003-01-01

    The generation of three-dimensional (3-D) digital models produced by optical technologies in some cases involves metric errors. This happens when small high-resolution 3-D images are assembled together in order to model a large object. In some applications, as for example 3-D modeling of Cultural Heritage, the problem of metric accuracy is a major issue and no methods are currently available for enhancing it. The authors present a procedure by which the metric reliability of the 3-D model, obtained through iterative alignments of many range maps, can be guaranteed to a known acceptable level. The goal is the integration of the 3-D range camera system with a close range digital photogrammetry technique. The basic idea is to generate a global coordinate system determined by the digital photogrammetric procedure, measuring the spatial coordinates of optical targets placed around the object to be modeled. Such coordinates, set as reference points, allow the proper rigid motion of few key range maps, including a portion of the targets, in the global reference system defined by photogrammetry. The other 3-D images are normally aligned around these locked images with usual iterative algorithms. Experimental results on an anthropomorphic test object, comparing the conventional and the proposed alignment method, are finally reported.
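The key step of the procedure described above, the rigid motion that locks a range map onto the photogrammetric reference targets, is a least-squares rigid registration of corresponding 3-D points. A minimal sketch using the Kabsch (SVD-based) algorithm, with synthetic data standing in for the measured targets and assuming correspondences are already established (the paper's full pipeline, of course, involves much more):

```python
import numpy as np

def rigid_align(source, target):
    """Least-squares rigid motion (R, t) mapping source points onto target
    points (Kabsch algorithm), e.g. locking a range map onto reference
    targets measured by photogrammetry."""
    src_c = source - source.mean(axis=0)
    tgt_c = target - target.mean(axis=0)
    h = src_c.T @ tgt_c
    u, _, vt = np.linalg.svd(h)
    d = np.sign(np.linalg.det(vt.T @ u.T))   # guard against reflections
    r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    t = target.mean(axis=0) - r @ source.mean(axis=0)
    return r, t

# Synthetic check: rotate and translate a point cloud, then recover the motion.
rng = np.random.default_rng(1)
pts = rng.uniform(-1, 1, (20, 3))
theta = 0.3
r_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.5, -0.2, 1.0])
moved = pts @ r_true.T + t_true
r, t = rigid_align(pts, moved)
```

Anchoring a few key range maps to such a globally measured target frame bounds the error accumulation that purely pairwise iterative alignment suffers from, which is the point of the hybrid procedure.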

  1. Scapular position after the open Latarjet procedure: results of a computed tomography scan study.

    Science.gov (United States)

    Cerciello, Simone; Edwards, T Bradley; Cerciello, Giuliano; Walch, Gilles

    2015-02-01

    The aim of this study was to investigate, through a computed tomography (CT) scan analysis, the effects of the Latarjet procedure on scapular position in an axial plane. Twenty healthy young male subjects (mean age, 22 years; range, 18-27 years) were enrolled as a control group. Twenty young male patients (mean age, 23 years; range, 17-30 years) with recurrent anterior shoulder dislocation were enrolled as the study group. CT cuts at a proper level allowed the identification of an α angle, which defined the tilt of the scapula relative to the anterior-posterior axis. In the control population, the α angles on the right and left shoulders were 48° (44°-52°) and 48° (44°-54°), respectively. In the study group, the preoperative α angles at the affected and healthy shoulders were 49° (46°-52°) and 49° (44°-52°), respectively. At day 45, the corresponding angles were 45° (40°-50°) and 49° (46°-52°). At 6 months, the average α angle of the shoulder operated on was 52° (46°-58°). The α angle value was restored in 5 cases, increased in 9 cases (mean, 8°), and decreased in 6 cases (mean, 3°). A general symmetry of scapular position was observed during CT scan analysis. This balance was lost initially after the Latarjet procedure, with a decrease of the α angle and scapular protraction. Six months after surgery, a small trend toward scapular retraction was conversely observed; however, the data were not statistically significant. Copyright © 2015 Journal of Shoulder and Elbow Surgery Board of Trustees. Published by Elsevier Inc. All rights reserved.

  2. Complications after Surgical Procedures in Patients with Cardiac Implantable Electronic Devices: Results of a Prospective Registry.

    Science.gov (United States)

    Silva, Katia Regina da; Albertini, Caio Marcos de Moraes; Crevelari, Elizabeth Sartori; Carvalho, Eduardo Infante Januzzi de; Fiorelli, Alfredo Inácio; Martinelli, Martino; Costa, Roberto

    2016-09-01

    Complications after surgical procedures in patients with cardiac implantable electronic devices (CIED) are an emerging problem due to an increasing number of such procedures and aging of the population, which consequently increases the frequency of comorbidities. To identify the rates of postoperative complications, mortality, and hospital readmissions, and evaluate the risk factors for the occurrence of these events. Prospective and unicentric study that included all individuals undergoing CIED surgical procedures from February to August 2011. The patients were distributed by type of procedure into the following groups: initial implantations (cohort 1), generator exchange (cohort 2), and lead-related procedures (cohort 3). The outcomes were evaluated by an independent committee. Univariate and multivariate analyses assessed the risk factors, and the Kaplan-Meier method was used for survival analysis. A total of 713 patients were included in the study and distributed as follows: 333 in cohort 1, 304 in cohort 2, and 76 in cohort 3. Postoperative complications were detected in 7.5%, 1.6%, and 11.8% of the patients in cohorts 1, 2, and 3, respectively (p = 0.014). During a 6-month follow-up, there were 58 (8.1%) deaths and 75 (10.5%) hospital readmissions. Predictors of hospital readmission included the use of implantable cardioverter-defibrillators (odds ratio [OR] = 4.2), functional class III-IV (OR = 1.8), and warfarin administration (OR = 1.9). Predictors of mortality included age over 80 years (OR = 2.4), ventricular dysfunction (OR = 2.2), functional class III-IV (OR = 3.3), and warfarin administration (OR = 2.3). Postoperative complications, hospital readmissions, and deaths occurred frequently and were strongly related to the type of procedure performed, type of CIED, and severity of the patient's underlying heart disease.

  3. Revisiting Runoff Model Calibration: Airborne Snow Observatory Results Allow Improved Modeling Results

    Science.gov (United States)

    McGurk, B. J.; Painter, T. H.

    2014-12-01

    Deterministic snow accumulation and ablation simulation models are widely used by runoff managers throughout the world to predict runoff quantities and timing. Model fitting is typically based on matching modeled runoff volumes and timing with observed flow time series at a few points in the basin. In recent decades, sparse networks of point measurements of the mountain snowpacks have been available to compare with modeled snowpack, but the comparability of results from a snow sensor or course to model polygons of 5 to 50 sq. km is suspect. However, snowpack extent, depth, and derived snow water equivalent have been produced by the NASA/JPL Airborne Snow Observatory (ASO) mission for spring of 2013 and 2014 in the Tuolumne River basin above Hetch Hetchy Reservoir. These high-resolution snowpack data have exposed the weakness in a model calibration based on runoff alone. The U.S. Geological Survey's Precipitation Runoff Modeling System (PRMS) calibration that was based on 30 years of inflow to Hetch Hetchy produces reasonable inflow results, but modeled spatial snowpack location and water quantity diverged significantly from the weekly measurements made by ASO during the two ablation seasons. The reason is that the PRMS model has many flow paths, storages, and water transfer equations, and a calibrated outflow time series can be right for many wrong reasons. The addition of a detailed knowledge of snow extent and water content constrains the model so that it is a better representation of the actual watershed hydrology. The mechanics of recalibrating PRMS to the ASO measurements will be described, and comparisons in observed versus modeled flow for both a small subbasin and the entire Hetch Hetchy basin will be shown. The recalibrated model provided a better fit to the snowmelt recession, a key factor for water managers as they balance declining inflows with demand for power generation and ecosystem releases during the final months of snow melt runoff.
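The idea of constraining a runoff calibration with snow observations can be illustrated with a toy degree-day snow model: the calibration objective gains a SWE-misfit term weighted by `w_swe`, analogous to folding the ASO maps into the fit. Everything below (model form, parameter grid, weights, synthetic forcing) is our own sketch, not PRMS.

```python
import numpy as np

def degree_day_model(temps, precip, melt_factor):
    """Toy degree-day snow model: accumulate precip as snow below 0 C,
    melt at melt_factor mm per degree-day above 0 C."""
    swe, runoff, swe_series = 0.0, [], []
    for t, p in zip(temps, precip):
        if t <= 0:
            swe += p
            melt = 0.0
        else:
            melt = min(swe, melt_factor * t)
            swe -= melt
        runoff.append(melt + (p if t > 0 else 0.0))
        swe_series.append(swe)
    return np.array(runoff), np.array(swe_series)

def calibrate(temps, precip, obs_runoff, obs_swe, w_swe):
    """Grid-search the melt factor; w_swe > 0 adds the snow-observation
    misfit term, analogous to including ASO SWE maps in calibration."""
    best, best_err = None, np.inf
    for mf in np.linspace(1.0, 8.0, 71):
        r, s = degree_day_model(temps, precip, mf)
        err = np.mean((r - obs_runoff) ** 2) + w_swe * np.mean((s - obs_swe) ** 2)
        if err < best_err:
            best, best_err = mf, err
    return best

# Synthetic "observations" from a known melt factor of 4.0 mm/degree-day.
temps = np.array([-5.0] * 10 + [5.0] * 10)
precip = np.array([10.0] * 10 + [0.0] * 10)
obs_r, obs_s = degree_day_model(temps, precip, 4.0)
best = calibrate(temps, precip, obs_r, obs_s, w_swe=1.0)
```

In a real multi-parameter model like PRMS the runoff-only objective is nearly flat along trade-off directions between storages; the SWE term is what breaks that degeneracy, which this one-parameter toy can only hint at.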

  4. Physical parameters of IPHAS-selected classical Be stars. (I. Determination procedure and evaluation of the results.)

    CERN Document Server

    Gkouvelis, L; Zorec, J; Steeghs, D; Drew, J E; Raddi, R; Wright, N J; Drake, J J

    2016-01-01

    We present a semi-automatic procedure to obtain fundamental physical parameters and distances of classical Be (CBe) stars, based on the Barbier-Chalonge-Divan (BCD) spectrophotometric system. Our aim is to apply this procedure to a large sample of CBe stars detected by the IPHAS photometric survey, to determine their fundamental physical parameters and to explore their suitability as galactic structure tracers. In this paper we describe the methodology used and the validation of the procedure by comparing our results with those obtained from different independent astrophysical techniques for subsamples of stars in common with other studies. We also present a test case study of the galactic structure in the direction of the Perseus Galactic Arm, in order to compare our results with others recently obtained with different techniques and the same sample of stars. We did not find any significant clustering of stars at the expected positions of the Perseus and Outer Galactic Arms, in agreement with previous studie...

  5. Computer-Based Procedures for Field Workers in Nuclear Power Plants: Development of a Model of Procedure Usage and Identification of Requirements

    Energy Technology Data Exchange (ETDEWEB)

    Katya Le Blanc; Johanna Oxstrand

    2012-04-01

    The nuclear industry is constantly trying to find ways to decrease the human error rate, especially the human errors associated with procedure use. As a step toward the goal of improving procedure use performance, researchers, together with the nuclear industry, have been looking at replacing the current paper-based procedures with computer-based procedure systems. The concept of computer-based procedures is not new by any means; however most research has focused on procedures used in the main control room. Procedures reviewed in these efforts are mainly emergency operating procedures and normal operating procedures. Based on lessons learned from these previous efforts we are now exploring a more unknown application for computer-based procedures - field procedures, i.e. procedures used by nuclear equipment operators and maintenance technicians. The Idaho National Laboratory and participants from the U.S. commercial nuclear industry are collaborating in an applied research effort with the objective of developing requirements and specifications for a computer-based procedure system to be used by field workers. The goal is to identify the types of human errors that can be mitigated by using computer-based procedures and how to best design the computer-based procedures to do so. This paper describes the development of a Model of Procedure Use and the qualitative study on which the model is based. The study was conducted in collaboration with four nuclear utilities and five research institutes. During the qualitative study and the model development, requirements for computer-based procedures were identified.

  6. Modeling Malaysia's Energy System: Some Preliminary Results

    OpenAIRE

    Ahmad M. Yusof

    2011-01-01

    Problem statement: The current dynamic and fragile world energy environment necessitates the development of a new energy model that solely caters to analyzing Malaysia's energy scenarios. Approach: The model is a network flow model that traces the flow of energy carriers from their sources (import and mining) through some conversion and transformation processes for the production of energy products to final destinations (energy demand sectors). The integration to the economic sectors is done exogene...

  7. EFFECT OF LOCATION AND BONE GRAFT REMODELING ON RESULTS OF BRISTOW-LATARJET PROCEDURE

    Directory of Open Access Journals (Sweden)

    D. A. Malanin

    2016-01-01

    Full Text Available Introduction. The Bristow-Latarjet operation has proved to be one of the most effective and predictable surgical treatments. Despite its widespread use, there are various complications associated with improper placement of the bone block and with disturbed remodeling. Objective: to obtain new data on the effect of the location and remodeling of the bone graft block on the functional outcome and stability of the shoulder joint in patients with recurrent anterior instability after the Bristow-Latarjet operation. Material and methods. The material for the study was the analysis of the treatment results of 64 patients with posttraumatic recurrent anterior shoulder dislocation who underwent the Bristow-Latarjet operation. Postoperatively, the position and the degree of remodeling of the bone block were assessed by computed tomography on sagittal and axial slices and through 3D modeling. To evaluate the functional outcome, the Western Ontario Shoulder Instability Index (WOSI) and the Rowe scale were used. Results. In the axial plane, 89% of the bone blocks were at the level of the articular surface (congruent or flattened); 9% and 2% of the grafts were placed too medially or too laterally, respectively. On sagittal CT images, 28% of the bone blocks were located in the middle third of the articular surface of the scapula, 60% in the lower third, and 12% in the upper third. Analysis of the dependence of treatment results on graft positioning showed that patients with excellent and good summary scores on the WOSI and Rowe scales had a correct location of the bone block in the middle or lower third of the articular process of the scapula. It can be assumed that an excessively lateralized or medialized bone block position in the axial plane has a more profound effect on the outcome than cranial displacement in the sagittal plane. Bony union of the graft was found by CT in 74% of cases and soft-tissue union in 26%; graft resorption of degree 0-1 was revealed in 84% of cases and of degree 2-3 in 26% of cases. In the last periods

  8. Engineering Glass Passivation Layers -Model Results

    Energy Technology Data Exchange (ETDEWEB)

    Skorski, Daniel C.; Ryan, Joseph V.; Strachan, Denis M.; Lepry, William C.

    2011-08-08

    The immobilization of radioactive waste into glass waste forms is a baseline process of nuclear waste management not only in the United States, but worldwide. The rate of radionuclide release from these glasses is a critical measure of the quality of the waste form. Over long-term tests and using extrapolations of ancient analogues, it has been shown that well designed glasses exhibit a dissolution rate that quickly decreases to a slow residual rate for the lifetime of the glass. The mechanistic cause of this decreased corrosion rate is a subject of debate, with one of the major theories suggesting that the decrease is caused by the formation of corrosion products in such a manner as to present a diffusion barrier on the surface of the glass. Although there is much evidence of this type of mechanism, there has been no attempt to engineer the effect to maximize the passivating qualities of the corrosion products. This study represents the first attempt to engineer the creation of passivating phases on the surface of glasses. Our approach utilizes interactions between the dissolving glass and elements from the disposal environment to create impermeable capping layers. By drawing from other corrosion studies in areas where passivation layers have been successfully engineered to protect the bulk material, we present here a report on mineral phases that are likely to have a morphological tendency to encrust the surface of the glass. Our modeling has focused on using the AFCI glass system in a carbonate, sulfate, and phosphate rich environment. We evaluate the minerals predicted to form to determine the likelihood of the formation of a protective layer on the surface of the glass. We have also modeled individual ions in solutions vs. pH and the addition of aluminum and silicon. These results allow us to understand the pH and ion concentration dependence of mineral formation. We have determined that iron minerals are likely to form a complete incrustation layer and we plan

  9. Procedure for assessing the performance of a rockfall fragmentation model

    Science.gov (United States)

    Matas, Gerard; Lantada, Nieves; Corominas, Jordi; Gili, Josep Antoni; Ruiz-Carulla, Roger; Prades, Albert

    2017-04-01

    A rockfall is a mass instability process frequently observed in road cuts, open pit mines and quarries, steep slopes and cliffs. The detached rock mass often becomes fragmented when it impacts the slope surface. Considering the fragmentation of the rockfall mass is critical for calculating block trajectories and impact energies, and thus for assessing the potential to cause damage and designing adequate preventive structures. We present here the performance of the RockGIS model, a GIS-based tool that stochastically simulates the fragmentation of rockfalls, based on a lumped-mass approach. In RockGIS, fragmentation initiates with the disaggregation of the detached rock mass through the pre-existing discontinuities just before the impact with the ground. An energy threshold is defined in order to determine whether the impacting blocks break or not. The distribution of the initial mass between a set of newly generated rock fragments is carried out stochastically following a power law. The trajectories of the new rock fragments are distributed within a cone. The model requires the calibration of both the runout of the resultant blocks and the spatial distribution of the volumes of fragments generated by breakage during their propagation. As this is a coupled process controlled by several parameters, a set of performance criteria to be met by the simulation has been defined. The criteria include: the position of the centre of gravity of the whole block distribution, the histogram of block runout, the extent and boundaries of the young debris cover over the slope surface, the lateral dispersion of trajectories, the total number of blocks generated after fragmentation, the volume distribution of the generated fragments, the number of blocks and volume passages past a reference line, and the maximum runout distance. Since the number of parameters to fit increases significantly when considering fragmentation, the
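The stochastic mass partition following a power law can be sketched with inverse-transform sampling from a truncated power law, conserving the total detached mass. The exponent, minimum fragment size, and function names below are our own assumptions for illustration, not RockGIS parameters.

```python
import numpy as np

rng = np.random.default_rng(7)

def fragment_mass(total_mass, exponent=2.0, m_min=1.0):
    """Split a detached mass into fragments whose sizes follow a truncated
    power law p(m) ~ m**-exponent on [m_min, remaining], drawing fragments
    until the available mass is used up (total mass is conserved)."""
    fragments = []
    remaining = total_mass
    while remaining > m_min:
        # Inverse-transform sample from p(m) ~ m**-exponent on [m_min, remaining].
        u = rng.uniform()
        a = 1.0 - exponent
        m = (u * remaining**a + (1 - u) * m_min**a) ** (1 / a)
        m = min(m, remaining)
        fragments.append(m)
        remaining -= m
    if remaining > 0:
        fragments.append(remaining)  # leftover below the cutoff
    return fragments

frags = fragment_mass(1000.0)
```

Because each draw is truncated to the mass still available, the fragment sizes sum exactly to the detached mass; the heavy tail of the exponent-2 law still produces the occasional large surviving block among many small fragments.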

  10. Laparoscopic hysteropexy: the initial results of a uterine suspension procedure for uterovaginal prolapse.

    Science.gov (United States)

    Price, Natalia; Slack, A; Jackson, S R

    2010-01-01

    The aim of this study was to evaluate the outcome of laparoscopic hysteropexy, a surgical technique for the management of uterine prolapse, involving suspension of the uterus from the sacral promontory using bifurcated polypropylene mesh. The investigation was designed as a prospective observational study (clinical audit). The study was undertaken at a tertiary referral urogynaecology unit in the UK. The participants comprised 51 consecutive women with uterovaginal prolapse, who chose laparoscopic hysteropexy as one of the available surgical options. The hysteropexy was conducted laparoscopically in all cases. A bifurcated polypropylene mesh was used to suspend the uterus from the sacral promontory. The two arms of the mesh were introduced through bilateral windows created in the broad ligaments, and were sutured to the anterior cervix; the mesh was then fixed to the anterior longitudinal ligament over the sacral promontory, to elevate the uterus. Cure of the uterine prolapse was evaluated subjectively using the International Consultation on Incontinence Questionnaire for vaginal symptoms (ICIQ-VS), and objectively by vaginal examination using the Baden-Walker halfway system and the pelvic organ prolapse quantification (POP-Q) scale. Operative and postoperative complications were also assessed. The mean age of the 51 women was 52.5 years (range 19-71 years). All were sexually active, and at least three of them expressed a strong desire to have children in the future. All were available for follow-up in clinic at 10 weeks, and 38 have completed the questionnaires. In 50 out of 51 women the procedure was successful, with no objective evidence of uterine prolapse on examination at follow-up; there was one failure. Significant subjective improvements in prolapse symptoms, sexual wellbeing and related quality of life were observed, as detected by substantial reductions in the respective questionnaire scores. Laparoscopic hysteropexy is both a feasible and an effective

  11. GENERATION OF MULTI-LOD 3D CITY MODELS IN CITYGML WITH THE PROCEDURAL MODELLING ENGINE RANDOM3DCITY

    Directory of Open Access Journals (Sweden)

    F. Biljecki

    2016-09-01

    The production and dissemination of semantic 3D city models is rapidly increasing, benefiting a growing number of use cases. However, their availability in multiple LODs and in the CityGML format is still problematic in practice. This hinders applications and experiments where multi-LOD datasets are required as input, for instance, to determine the performance of different LODs in a spatial analysis. An alternative approach to obtain 3D city models is to generate them with procedural modelling, which is – as we discuss in this paper – well suited as a method to source multi-LOD datasets useful for a number of applications. However, procedural modelling has not yet been employed for this purpose. Therefore, we have developed RANDOM3DCITY, an experimental procedural modelling engine for generating synthetic datasets of buildings and other urban features. The engine is designed to produce models in CityGML and does so in multiple LODs. Besides the generation of multiple geometric LODs, we implement the realisation of multiple levels of spatiosemantic coherence, geometric reference variants, and indoor representations. As a result of their permutations, each building can be generated in 392 different CityGML representations, an unprecedented number of modelling variants of the same feature. The datasets produced by RANDOM3DCITY are suited for several applications, as we show in this paper with documented uses. The developed engine is available under an open-source licence on GitHub at http://github.com/tudelft3d/Random3Dcity.

  12. Generation of Multi-Lod 3d City Models in Citygml with the Procedural Modelling Engine RANDOM3DCITY

    Science.gov (United States)

    Biljecki, F.; Ledoux, H.; Stoter, J.

    2016-09-01

    The production and dissemination of semantic 3D city models is rapidly increasing, benefiting a growing number of use cases. However, their availability in multiple LODs and in the CityGML format is still problematic in practice. This hinders applications and experiments where multi-LOD datasets are required as input, for instance, to determine the performance of different LODs in a spatial analysis. An alternative approach to obtain 3D city models is to generate them with procedural modelling, which is - as we discuss in this paper - well suited as a method to source multi-LOD datasets useful for a number of applications. However, procedural modelling has not yet been employed for this purpose. Therefore, we have developed RANDOM3DCITY, an experimental procedural modelling engine for generating synthetic datasets of buildings and other urban features. The engine is designed to produce models in CityGML and does so in multiple LODs. Besides the generation of multiple geometric LODs, we implement the realisation of multiple levels of spatiosemantic coherence, geometric reference variants, and indoor representations. As a result of their permutations, each building can be generated in 392 different CityGML representations, an unprecedented number of modelling variants of the same feature. The datasets produced by RANDOM3DCITY are suited for several applications, as we show in this paper with documented uses. The developed engine is available under an open-source licence on GitHub at http://github.com/tudelft3d/Random3Dcity.

  13. Adjunctive 830 nm light-emitting diode therapy can improve the results following aesthetic procedures

    Science.gov (United States)

    Kim, Won-Serk; Ohshiro, Toshio; Trelles, Mario A; Vasily, David B

    2015-01-01

    Background: Aggressive, or even minimally aggressive, aesthetic interventions are almost inevitably followed by such events as discomfort, erythema, edema and hematoma formation, which can lengthen patient downtime and represent a major problem to the surgeon. Recently, low level light therapy with light-emitting diodes (LED-LLLT) at 830 nm has attracted attention in wound healing indications for its anti-inflammatory effects and control of erythema, edema and bruising. Rationale: The wavelength of 830 nm offers deep penetration into living biological tissue, including bone. A new generation of 830 nm LEDs, based on those developed in the NASA Space Medicine Laboratory, has enabled the construction of planar array-based LED-LLLT systems with clinically useful irradiances. Irradiation with 830 nm energy has been shown in vitro and in vivo to increase the action potential of epidermal and dermal cells significantly. The response of the inflammatory stage cells is enhanced both in terms of function and trophic factor release, and fibroblasts demonstrate superior collagenesis and elastinogenesis. Conclusions: A growing body of clinical evidence is showing that applying 830 nm LED-LLLT as soon as possible post-procedure, both invasive and noninvasive, successfully hastens the resolution of sequelae associated with patient downtime in addition to significantly speeding up frank wound healing. This article reviews that evidence, and attempts to show that 830 nm LED-LLLT delivers swift resolution of postoperative sequelae, minimizes downtime and enhances patient satisfaction. PMID:26877592

  14. Arthroscopic procedures and therapeutic results of anatomical reconstruction of the coracoclavicular ligaments for acromioclavicular joint dislocation.

    Science.gov (United States)

    Takase, K; Yamamoto, K

    2016-09-01

    Surgical treatment is recommended for type 5 acromioclavicular joint dislocation in Rockwood's classification. We believe that anatomic repair of the coracoclavicular ligaments best restores the function of the acromioclavicular joint. We attempted to correctly reconstruct the anatomy of the coracoclavicular ligaments under arthroscopy, and describe the minimally invasive arthroscopic procedure. There were 22 patients; mean age at surgery, 38.1 years. Mean time to surgery was 13.2 days. Mean follow-up was 3 years 2 months. The palmaris longus tendon was excised from the ipsilateral side to replace the conoid ligament, while artificial ligament was used for reconstructing the trapezoid ligament. Both ligament reconstructions were performed arthroscopically. No temporary fixation of the acromioclavicular joint was performed. On postoperative radiographic evaluation, 4 patients showed subluxation and 2 showed dislocation of the acromioclavicular joint; the other 16 patients had maintained reduction at the final consultation. MR images 1 year after surgery clearly revealed the reconstructed ligaments in 19 patients. Only 1 patient showed osteoarthritis of the acromioclavicular joint. Although it requires resection of the ipsilateral palmaris longus for grafting, we believe that anatomic reconstruction of both coracoclavicular ligaments best restores the function of the acromioclavicular joint. Level of evidence: 4. Copyright © 2016 Elsevier Masson SAS. All rights reserved.

  15. Automated control procedures and first results from the temporary seismic monitoring of the 2012 Emilia sequence

    Directory of Open Access Journals (Sweden)

    Simone Marzorati

    2012-10-01

    After moderate to strong earthquakes in Italy or in the surrounding areas, the Istituto Nazionale di Geofisica e Vulcanologia (INGV; National Institute for Geophysics and Volcanology) activates a temporary seismic network infrastructure. This is devoted to integration with the Italian National Seismic Network (RSN) [Delladio 2011] in the epicentral area, thus improving the localization of the aftershock distribution after a mainshock. This infrastructure is composed of a stand-alone, locally recording part (Re.Mo.) [Moretti et al. 2010] and a real-time telemetered part (Re.Mo.Tel.) [Abruzzese et al. 2011a, 2011b] that can stream data to the acquisition centers in Rome and Grottaminarda. After the May 20, 2012, Ml 5.9 earthquake in the Emilia region (northern Italy), the temporary network was deployed in the epicentral area; in particular, 10 telemetered and 12 stand-alone stations were installed [Moretti et al. 2012, this volume]. Using the dedicated connection between the acquisition center in Rome and the Ancona acquisition sub-center [Cattaneo et al. 2011], the signals of the real-time telemetered stations were also acquired in this sub-center. These were used for preliminary quality control, adopting the standard procedures in use here (see the next paragraph, and Monachesi et al. [2011]). The main purpose of the present study is a first report on this quality check, which should be taken into account for the correct use of these data. […]

  16. Intermediate-term results of Medtronic freestyle valve for right ventricular outflow tract reconstruction in the Ross procedure.

    Science.gov (United States)

    Bilal, Mehmet S; Aydemir, Numan A; Cine, Nihat; Turan, Tamer; Yildiz, Yahya; Yalcin, Yalim; Celebi, Ahmet

    2006-09-01

    The Ross procedure has become the first choice for aortic valve replacement in children and young adults at many institutions. Since 1997, a lack of availability of homograft valves in Turkey has prompted the use of alternative substitutes for right ventricular outflow tract (RVOT) reconstruction during the Ross procedure. Before April 2005, among 20 patients (age range: 14 months to 45 years) at the present authors' institution, the Ross procedure was performed in 14 and a Ross-Konno procedure in six. Sixteen patients underwent RVOT repair using alternative methods for homograft valve replacement. Fourteen patients received a Medtronic Freestyle valve and one patient a Medtronic Contegra bovine jugular vein conduit. An autologous RVOT repair was used in one patient. Ten of the Medtronic Freestyle valve patients were aged Medtronic Freestyle valve echocardiographic evaluations were conducted shortly after surgery and during follow up. There was no early mortality. One patient died from pneumonia after six months, and another (asymptomatic) patient died suddenly at 34 months after surgery. Before hospital discharge the mean peak pressure gradient across the Freestyle valve was 12.1 +/- 11.0 mmHg, and this increased to 24.1 +/- 20.0 mmHg after a mean follow up of 51.2 +/- 6.9 months (range: 6 to 101 months) (p Medtronic Freestyle valve, the present results show that the valve can be used with intermediate-term success in the Ross procedure - and even in children as an alternative - if homograft valves are not available.

  17. [The emphases and basic procedures of genetic counseling in psychotherapeutic model].

    Science.gov (United States)

    Zhang, Yuan-Zhi; Zhong, Nanbert

    2006-11-01

    The emphases and basic procedures of genetic counseling in the psychotherapeutic model differ from those in older models. In the psychotherapeutic model, genetic counseling focuses not only on counselees' genetic disorders and birth defects, but also on their psychological problems. "Client-centered therapy", as termed by Carl Rogers, plays an important role in the genetic counseling process. The basic procedure of the psychotherapeutic model of genetic counseling consists of seven steps: initial contact, introduction, agendas, inquiry into family history, presenting information, closing the session, and follow-up.

  18. A Bidirectional Coupling Procedure Applied to Multiscale Respiratory Modeling.

    Science.gov (United States)

    Kuprat, A P; Kabilan, S; Carson, J P; Corley, R A; Einstein, D R

    2013-07-01

    In this study, we present a novel multiscale computational framework for efficiently linking multiple lower-dimensional models describing the distal lung mechanics to imaging-based 3D computational fluid dynamics (CFD) models of the upper pulmonary airways in order to incorporate physiologically appropriate outlet boundary conditions. The framework is an extension of the Modified Newton's Method with nonlinear Krylov accelerator developed by Carlson and Miller [1, 2, 3]. Our extensions include the retention of subspace information over multiple timesteps, and a special correction at the end of a timestep that allows for corrections to be accepted with verified low residual with as little as a single residual evaluation per timestep on average. In the case of a single residual evaluation per timestep, the method has zero additional computational cost compared to uncoupled or unidirectionally coupled simulations. We expect these enhancements to be generally applicable to other multiscale coupling applications where timestepping occurs. In addition we have developed a "pressure-drop" residual which allows for stable coupling of flows between a 3D incompressible CFD application and another (lower-dimensional) fluid system. We expect this residual to also be useful for coupling non-respiratory incompressible fluid applications, such as multiscale simulations involving blood flow. The lower-dimensional models that are considered in this study are sets of simple ordinary differential equations (ODEs) representing the compliant mechanics of symmetric human pulmonary airway trees. To validate the method, we compare the predictions of hybrid CFD-ODE models against an ODE-only model of pulmonary airflow in an idealized geometry. Subsequently, we couple multiple sets of ODEs describing the distal lung to an imaging-based human lung geometry. 
Boundary conditions in these models consist of atmospheric pressure at the mouth and intrapleural pressure applied to the multiple sets
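    The coupling step described above (iterate on an interface quantity until a residual between the two model sides vanishes at each timestep) can be illustrated on a deliberately tiny stand-in problem. Everything below, including the one-compartment distal model, the parameter values, and the scalar Newton solve, is a hypothetical sketch, not the authors' framework:

```python
# Hypothetical one-compartment stand-in: an "upper airway" resistance model
# (playing the role of the 3D CFD side) coupled to a compliant "distal" ODE
# model. All parameter values are invented for illustration.
R, C = 2.0, 0.1            # airway resistance, distal compliance
P_mouth, P_pl = 0.0, -5.0  # boundary pressures: mouth, intrapleural
dt, V = 0.01, 0.4          # timestep, current distal volume

def residual(P_out):
    """Coupling residual: pressure mismatch at the 3D/ODE interface."""
    Q = (P_mouth - P_out) / R        # flow predicted by the "3D" side
    V_new = V + dt * Q               # implicit step of the distal ODE
    return P_out - (V_new / C + P_pl)

# Newton iteration with a finite-difference derivative
P = 0.0
for _ in range(50):
    r = residual(P)
    if abs(r) < 1e-12:
        break
    h = 1e-6
    P -= r / ((residual(P + h) - r) / h)

print(P)
```

    On this linear toy problem the Newton iteration converges essentially in one step; the paper's nonlinear Krylov acceleration targets the much harder case where each residual evaluation is an expensive 3D CFD solve.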

  19. A bidirectional coupling procedure applied to multiscale respiratory modeling

    Science.gov (United States)

    Kuprat, A. P.; Kabilan, S.; Carson, J. P.; Corley, R. A.; Einstein, D. R.

    2013-07-01

    In this study, we present a novel multiscale computational framework for efficiently linking multiple lower-dimensional models describing the distal lung mechanics to imaging-based 3D computational fluid dynamics (CFDs) models of the upper pulmonary airways in order to incorporate physiologically appropriate outlet boundary conditions. The framework is an extension of the modified Newton's method with nonlinear Krylov accelerator developed by Carlson and Miller [1], Miller [2] and Scott and Fenves [3]. Our extensions include the retention of subspace information over multiple timesteps, and a special correction at the end of a timestep that allows for corrections to be accepted with verified low residual with as little as a single residual evaluation per timestep on average. In the case of a single residual evaluation per timestep, the method has zero additional computational cost compared to uncoupled or unidirectionally coupled simulations. We expect these enhancements to be generally applicable to other multiscale coupling applications where timestepping occurs. In addition we have developed a "pressure-drop" residual which allows for stable coupling of flows between a 3D incompressible CFD application and another (lower-dimensional) fluid system. We expect this residual to also be useful for coupling non-respiratory incompressible fluid applications, such as multiscale simulations involving blood flow. The lower-dimensional models that are considered in this study are sets of simple ordinary differential equations (ODEs) representing the compliant mechanics of symmetric human pulmonary airway trees. To validate the method, we compare the predictions of hybrid CFD-ODE models against an ODE-only model of pulmonary airflow in an idealized geometry. Subsequently, we couple multiple sets of ODEs describing the distal lung to an imaging-based human lung geometry. Boundary conditions in these models consist of atmospheric pressure at the mouth and intrapleural

  20. Quantitative magnetospheric models: results and perspectives.

    Science.gov (United States)

    Kuznetsova, M.; Hesse, M.; Gombosi, T.; Csem Team

    Global magnetospheric models are an indispensable tool that allows multi-point measurements to be put into a global context. Significant progress has been achieved in global MHD modeling of magnetosphere structure and dynamics. Medium-resolution simulations confirm the general topological picture suggested by Dungey. State-of-the-art global models with adaptive grids allow simulations with a highly resolved magnetopause and magnetotail current sheet. Advanced high-resolution models are capable of reproducing transient phenomena, such as FTEs associated with the formation of flux ropes or plasma bubbles embedded in the magnetopause, and demonstrate the generation of vortices at the magnetospheric flanks. On the other hand, there is still controversy about the global state of the magnetosphere predicted by MHD models, to the point of questioning the length of the magnetotail and the location of the reconnection sites within it. For example, for steady southward IMF driving conditions, resistive MHD simulations produce a steady configuration with an almost stationary near-Earth neutral line, while there is plenty of observational evidence of a periodic loading-unloading cycle during long periods of southward IMF. Successes and challenges in the global modeling of magnetospheric dynamics will be addressed. One of the major challenges is to quantify the interaction between large-scale global magnetospheric dynamics and microphysical processes in diffusion regions near reconnection sites. Possible solutions to these controversies will be discussed.

  1. A modified procedure for velopharyngeal sphincteroplasty in primary cleft palate repair and secondary velopharyngeal incompetence treatment and its preliminary results.

    Science.gov (United States)

    Cheng, Ningxin; Zhao, Min; Qi, Kemin; Deng, Hui; Fang, Zeng; Song, Ruyao

    2006-01-01

    During cleft repair, velopharyngeal sphincter reconstruction is still a challenge for plastic surgeons. To improve the surgical treatment of cleft palate and secondary velopharyngeal incompetence (VPI), a carefully designed modified procedure for primary palatoplasty and secondary VPI is presented. Fifty-six patients (48 for primary cleft palate repair and eight for secondary VPI of previously repaired clefts) underwent this procedure from 1988 to 2001. The modified procedure combines the tunnelled palatopharyngeus myomucosal flap, for dynamic circular reconstruction of the pharyngeal element of the velopharyngeal sphincter, with double-reversing Z-plasty and repositioning of the levator veli palatini muscles in the velar element of the sphincter. Satisfactory velopharyngeal competence (complete or marginal velopharyngeal closure) was achieved in 23 of 25 patients with primary cleft palate repair examined by nasendoscopy; nasality, speech articulation and intelligibility were also assessed in 25 patients after primary cleft palate repair, with 92% satisfactory results (normal speech or speech with mild VPI) in a single-word test and 88% in continuous speech evaluation. Based on our experience, we believe that this modified procedure is a reasonable choice for primary cleft repair and secondary VPI treatment because it accords with the normal physiology and anatomy of the velopharyngeal sphincter, can lengthen the soft palate, decrease the enlarged velopharynx, augment the posterior pharyngeal wall, and enhance the relationship between the muscles of the velopharyngeal sphincter, resulting in a dynamic neo-sphincter in palatopharyngoplasty. Further study of the procedure is needed. The theoretical basis, operative highlights, velopharyngeal function, and advantages and disadvantages of the modified procedure are discussed.

  2. SAFETY AND EFFECTIVENESS OF SINGLE ANASTOMOSIS DUODENAL SWITCH PROCEDURE: PRELIMINARY RESULT FROM A SINGLE INSTITUTION.

    Science.gov (United States)

    Nelson, Lars; Moon, Rena C; Teixeira, Andre F; Galvão, Manoel; Ramos, Almino; Jawad, Muhammad A

    Single anastomosis duodeno-ileal bypass with sleeve gastrectomy (SADI-S) was introduced into bariatric surgery by Sanchez-Pernaute et al. as an advancement of the biliopancreatic diversion with duodenal switch. To evaluate the SADI-S procedure with regard to weight loss, comorbidity resolution, and complication rate in the super obese population. A retrospective chart review was performed on the initial 72 patients who underwent laparoscopic or robot-assisted laparoscopic SADI-S between December 17th, 2013 and July 29th, 2015. A total of 48 female and 21 male patients were included, with a mean age of 42.4±10.0 years (range, 22-67). The mean body mass index (BMI) at the time of procedure was 58.4±8.3 kg/m2 (range, 42.3-91.8). Mean length of hospital stay was 4.3±2.6 days (range, 3-24). The thirty-day readmission rate was 4.3% (n=3), due to tachycardia (n=1), deep venous thrombosis (n=1), and viral gastroenteritis (n=1). The thirty-day reoperation rate was 5.8% (n=4), for perforation of the small bowel (n=1), leakage (n=1), duodenal stump leakage (n=1), and diagnostic laparoscopy (n=1). Percentage of excess weight loss (%EWL) was 28.5±8.8% (range, 13.3-45.0) at three months (n=28), 41.7±11.1% (range, 19.6-69.6) at six months (n=50), and 61.6±12.0% (range, 40.1-91.2) at 12 months (n=23) after the procedure. A total of 18 patients (26.1%) presented with type II diabetes mellitus at the time of surgery. Of these patients, 9 (50.0%) had their diabetes resolved, and six (33.3%) had it improved by 6-12 months after SADI-S. SADI-S is a feasible operation with promising weight loss and diabetes resolution in the super-obese population.

  3. Optimal control of CPR procedure using hemodynamic circulation model

    Science.gov (United States)

    Lenhart, Suzanne M.; Protopopescu, Vladimir A.; Jung, Eunok

    2007-12-25

    A method for determining a chest pressure profile for cardiopulmonary resuscitation (CPR) includes the steps of representing a hemodynamic circulation model based on a plurality of difference equations for a patient, applying an optimal control (OC) algorithm to the circulation model, and determining a chest pressure profile. The chest pressure profile defines a timing pattern of externally applied pressure to a chest of the patient to maximize blood flow through the patient. A CPR device includes a chest compressor, a controller communicably connected to the chest compressor, and a computer communicably connected to the controller. The computer determines the chest pressure profile by applying an OC algorithm to a hemodynamic circulation model based on the plurality of difference equations.
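    As a rough illustration of the idea (a difference-equation circulation model driven by an external chest-pressure profile, with the profile chosen to maximize blood flow), here is a hypothetical toy model with a brute-force search standing in for the optimal-control algorithm; none of the equations or parameter values come from the patent:

```python
# Toy difference-equation circulation model (hypothetical; the patented
# model and optimal-control algorithm are more elaborate). External chest
# pressure squeezes a thoracic compartment; a one-way valve passes blood
# forward to a peripheral compartment, which drains back slowly.
def mean_forward_flow(profile, dt=0.01, steps=2000):
    Vt, Vp, C = 1.0, 1.0, 0.5          # compartment volumes, compliance
    R_fwd, R_back = 1.0, 5.0           # valve and return resistances
    total = 0.0
    for k in range(steps):
        p = profile(k * dt)            # externally applied chest pressure
        Pt, Pp = Vt / C + p, Vp / C
        q_fwd = max(Pt - Pp, 0.0) / R_fwd        # valve: forward flow only
        q_back = max(Pp - Vt / C, 0.0) / R_back  # slow venous return
        Vt += dt * (q_back - q_fwd)
        Vp += dt * (q_fwd - q_back)
        total += q_fwd * dt
    return total / (steps * dt)

# crude stand-in for the optimal-control step: grid search over the duty
# cycle of a square-wave compression profile with a 0.5 s period
duties = [0.1 * k for k in range(1, 10)]
flows = [mean_forward_flow(lambda t, d=d: 3.0 if (t % 0.5) < 0.5 * d else 0.0)
         for d in duties]
best = duties[max(range(len(duties)), key=lambda i: flows[i])]
print(best, max(flows))
```

    A real optimal-control formulation would optimize the full pressure trajectory (e.g. via Pontryagin's principle or dynamic programming) rather than a one-parameter family of square waves.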

  4. A Review of Different Estimation Procedures in the Rasch Model. Research Report 87-6.

    Science.gov (United States)

    Engelen, R. J. H.

    A short review of the different estimation procedures that have been used in association with the Rasch model is provided. These procedures include joint, conditional, and marginal maximum likelihood methods; Bayesian methods; minimum chi-square methods; and paired comparison estimation. A comparison of the marginal maximum likelihood estimation…
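    For concreteness, here is a minimal sketch of one of the procedures reviewed, joint maximum likelihood estimation for the dichotomous Rasch model, using alternating Newton steps on synthetic data (the data and update scheme are illustrative assumptions, not taken from the report):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic responses from a Rasch model: P(X_ij = 1) = sigmoid(theta_i - b_j)
n_persons, n_items = 300, 6
theta_true = rng.normal(0, 1, n_persons)
b_true = np.linspace(-1.0, 1.0, n_items)
p_true = 1 / (1 + np.exp(-(theta_true[:, None] - b_true[None, :])))
X = (rng.random((n_persons, n_items)) < p_true).astype(float)

# Joint maximum likelihood: alternate one-dimensional Newton steps for
# person abilities (theta) and item difficulties (b).
theta = np.zeros(n_persons)
b = np.zeros(n_items)
for _ in range(100):
    p = 1 / (1 + np.exp(-(theta[:, None] - b[None, :])))
    theta += (X - p).sum(axis=1) / np.maximum((p * (1 - p)).sum(axis=1), 1e-6)
    theta = np.clip(theta, -6, 6)   # perfect/zero scores have no finite MLE
    p = 1 / (1 + np.exp(-(theta[:, None] - b[None, :])))
    b -= (X - p).sum(axis=0) / np.maximum((p * (1 - p)).sum(axis=0), 1e-6)
    b -= b.mean()                   # identification: mean item difficulty 0
print(np.round(b, 2))
```

    Conditional and marginal maximum likelihood differ precisely in how they remove the person parameters (conditioning on raw scores, or integrating over an assumed ability distribution) instead of estimating them jointly as above.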

  5. A Connectionist Model of Stimulus Class Formation with a Yes/No Procedure and Compound Stimuli

    Science.gov (United States)

    Tovar, Angel E.; Chavez, Alvaro Torres

    2012-01-01

    We analyzed stimulus class formation in a human study and in a connectionist model (CM) with a yes/no procedure, using compound stimuli. In the human study, the participants were six female undergraduate students; the CM was a feed-forward back-propagation network. Two 3-member stimulus classes were trained with a similar procedure in both the…
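    A minimal sketch of such a setup, assuming a feed-forward back-propagation network, one-hot compound inputs, and two 3-member classes (all details below are illustrative; the paper's architecture and training regime may differ):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: stimuli 0-2 form class A, stimuli 3-5 form class B.
# Each trial presents a compound (sample, comparison) pair as concatenated
# one-hot vectors; the target is "yes" (1) iff both members share a class.
n_stim, n_hid = 6, 12
pairs = [(i, j) for i in range(n_stim) for j in range(n_stim)]
X = np.zeros((len(pairs), 2 * n_stim))
y = np.array([float(i // 3 == j // 3) for i, j in pairs])
for k, (i, j) in enumerate(pairs):
    X[k, i] = X[k, n_stim + j] = 1.0

W1 = rng.normal(0, 0.5, (2 * n_stim, n_hid)); b1 = np.zeros(n_hid)
W2 = rng.normal(0, 0.5, n_hid); b2 = 0.0
sig = lambda z: 1 / (1 + np.exp(-z))

for _ in range(10000):                     # full-batch back-propagation
    h = sig(X @ W1 + b1)                   # hidden layer
    out = sig(h @ W2 + b2)                 # yes/no output unit
    d_out = (out - y) / len(pairs)         # cross-entropy output gradient
    d_h = np.outer(d_out, W2) * h * (1 - h)
    W2 -= h.T @ d_out;  b2 -= d_out.sum()
    W1 -= X.T @ d_h;    b1 -= d_h.sum(axis=0)

acc = float(np.mean((out > 0.5) == (y > 0.5)))
print(acc)
```

    The hidden layer is essential here: class membership of each compound member must be recoded before the match/mismatch decision, which a single-layer network over one-hot inputs cannot express.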

  6. TSCALE: A New Multidimensional Scaling Procedure Based on Tversky's Contrast Model.

    Science.gov (United States)

    DeSarbo, Wayne S.; And Others

    1992-01-01

    TSCALE, a multidimensional scaling procedure based on the contrast model of A. Tversky for asymmetric three-way, two-mode proximity data, is presented. TSCALE conceptualizes a latent dimensional structure to describe the judgmental stimuli. A Monte Carlo analysis and two consumer psychology applications illustrate the procedure. (SLD)

  7. Communication and Procedural Models of the E-Commerce Systems

    OpenAIRE

    Suchánek, Petr

    2009-01-01

    E-commerce systems have become a standard interface between sellers (or suppliers) and customers. One basic condition for an e-commerce system to be efficient is the correct definition and description of all internal and external processes, all of which are targeted at customers' needs and requirements. The optimal and most exact way to find the best solution for an e-commerce system and its process structure within a company is modeling and simulation. In this article, the author shows basic model...

  8. Communication and Procedural Models of the E-commerce Systems

    OpenAIRE

    Suchánek, Petr

    2009-01-01

    E-commerce systems have become a standard interface between sellers (or suppliers) and customers. One basic condition for an e-commerce system to be efficient is the correct definition and description of all internal and external processes, all of which are targeted at customers' needs and requirements. The optimal and most exact way to find the best solution for an e-commerce system and its process structure within a company is modeling and simulation. In this article, the author shows basic model...

  9. Comparison of In-Flight Measured and Computed Aeroelastic Damping: Modal Identification Procedures and Modeling Approaches

    Directory of Open Access Journals (Sweden)

    Roberto da Cunha Follador

    2016-04-01

    The Operational Modal Analysis technique is a methodology often applied for the identification of dynamic systems when the input signal is unknown. The applied methodology is based on a technique to estimate the Frequency Response Functions and extract the modal parameters using only the structural dynamic response data, without assuming knowledge of the excitation forces. Such an approach is an adequate way of measuring the aircraft aeroelastic response due to random input, like atmospheric turbulence. The in-flight structural response was measured by accelerometers distributed along the aircraft wings, fuselage and empennages. The Enhanced Frequency Domain Decomposition technique was chosen to identify the airframe dynamic parameters. This technique is based on the hypothesis that the system is randomly excited with a broadband spectrum of almost constant power spectral density. The system identification procedure is based on the Singular Value Decomposition of the power spectral densities of the system output signals, estimated by the usual Fast Fourier Transform method. This procedure has been applied to different flight conditions to evaluate the modal parameters and the aeroelastic stability trends of the airframe under investigation. The experimental results obtained by this methodology were compared with the predicted results supplied by aeroelastic numerical models in order to check the consistency of the proposed output-only methodology. The objective of this paper is to compare in-flight measured aeroelastic damping against the corresponding parameters computed from numerical aeroelastic models. Different aerodynamic modeling approaches are investigated, such as the use of source panel body models, cruciform and flat plate projection. As a result of this investigation, the better aeroelastic modeling and Operational Modal Analysis techniques are expected to be chosen for inclusion in a standard aeroelastic
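    The core computation described (estimate the output power spectral density matrix with the FFT, then take its singular value decomposition frequency line by frequency line) can be sketched on synthetic two-channel data; the signal model, sampling rate, and modal parameters below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
fs, f0, zeta = 256.0, 32.0, 0.02   # sampling rate, modal freq, damping
n = 2**14

# modal coordinate: white noise through a resonant 2nd-order AR filter
w0 = 2 * np.pi * f0 / fs
a1 = 2 * np.exp(-zeta * w0) * np.cos(w0 * np.sqrt(1 - zeta**2))
a2 = -np.exp(-2 * zeta * w0)
q = np.zeros(n); e = rng.normal(0, 1, n)
for k in range(2, n):
    q[k] = a1 * q[k-1] + a2 * q[k-2] + e[k]

phi = np.array([1.0, -0.6])                 # assumed mode shape
Y = np.outer(q, phi) + 0.1 * rng.normal(0, 1, (n, 2))

# Welch-style cross-PSD matrix, then an SVD per frequency line (the core
# of Frequency Domain Decomposition)
seg = 1024; n_seg = n // seg
win = np.hanning(seg)
G = np.zeros((seg // 2, 2, 2), dtype=complex)
for s in range(n_seg):
    F = np.fft.rfft(Y[s*seg:(s+1)*seg] * win[:, None], axis=0)[:seg // 2]
    G += np.einsum('fi,fj->fij', F, F.conj()) / n_seg

s1 = np.array([np.linalg.svd(Gf, compute_uv=False)[0] for Gf in G])
freqs = np.arange(seg // 2) * fs / seg
f_peak = freqs[np.argmax(s1)]
print(f_peak)
```

    The peak of the first singular value recovers the natural frequency; the "enhanced" variant additionally isolates the SDOF bell around each peak and back-transforms it to estimate damping.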

  10. Literature Evidence on Live Animal Versus Synthetic Models for Training and Assessing Trauma Resuscitation Procedures.

    Science.gov (United States)

    Hart, Danielle; McNeil, Mary Ann; Hegarty, Cullen; Rush, Robert; Chipman, Jeffery; Clinton, Joseph; Reihsen, Troy; Sweet, Robert

    2016-01-01

    There are many models currently used for teaching and assessing performance of trauma-related airway, breathing, and hemorrhage procedures. Although many programs use live animal (live tissue [LT]) models, there is a congressional effort to transition to the use of nonanimal-based methods (i.e., simulators, cadavers) for military trainees. We examined the existing literature and compared the efficacy, acceptability, and validity of available models with a focus on comparing LT models with synthetic systems. Literature and Internet searches were conducted to examine current models for seven core trauma procedures. We identified 185 simulator systems. Evidence on acceptability and validity of models was sparse. We found only one underpowered study comparing the performance of learners after training on LT versus simulator models for tube thoracostomy and cricothyrotomy. There is insufficient data-driven evidence to distinguish superior validity of LT or any other model for training or assessment of critical trauma procedures.

  11. A Computational Model of the Temporal Dynamics of Plasticity in Procedural Learning: Sensitivity to Feedback Timing

    Directory of Open Access Journals (Sweden)

    Vivian V. Valentin

    2014-07-01

    The evidence is now good that different memory systems mediate the learning of different types of category structures. In particular, declarative memory dominates rule-based (RB) category learning and procedural memory dominates information-integration (II) category learning. For example, several studies have reported that feedback timing is critical for II category learning, but not for RB category learning – results that have broad support within the memory systems literature. Specifically, II category learning has been shown to be best with feedback delays of 500 ms compared with delays of 0 and 1000 ms, and highly impaired with delays of 2.5 seconds or longer. In contrast, RB learning is unaffected by any feedback delay up to 10 seconds. We propose a neurobiologically detailed theory of procedural learning that is sensitive to different feedback delays. The theory assumes that procedural learning is mediated by plasticity at cortical-striatal synapses that are modified by dopamine-mediated reinforcement learning. The model captures the time course of the biochemical events in the striatum that cause synaptic plasticity, and thereby accounts for the empirical effects of various feedback delays on II category learning.
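    The proposed sensitivity to feedback timing can be caricatured with a single eligibility trace that rises and then decays, so that a dopamine signal arriving near the trace's peak (around 500 ms here) produces the largest weight change. The alpha-function trace below is a common modeling assumption, not necessarily the paper's exact biochemical kinetics:

```python
import numpy as np

# Alpha-function eligibility trace, normalized to peak at 1 when t = tau
# (tau = 0.5 s is an assumption chosen to match the reported optimum).
tau = 0.5
def trace(t):
    return (t / tau) * np.exp(1.0 - t / tau)

# weight change is proportional to the trace amplitude at the moment the
# delayed dopamine (reward prediction error) signal arrives
for delay in [0.0, 0.5, 1.0, 2.5]:
    print(f"feedback delay {delay:.1f} s -> relative plasticity {trace(delay):.3f}")
```

    This reproduces the qualitative ordering in the abstract: plasticity (and hence II learning) is maximal near 500 ms, weaker at 0 and 1000 ms, and nearly absent by 2.5 s.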

  12. Results of Habitat Evaluation Procedures at Rocky Mountain Arsenal NWR: Spatial and Tabular Data

    Data.gov (United States)

    US Fish and Wildlife Service, Department of the Interior — This document contains a basic map depicting the average of 5-7 transects/sections and 5 species, as well as a table detailing the preliminary results of habitat...

  13. Distinguishing between population bottleneck and population subdivision by a Bayesian model choice procedure.

    Science.gov (United States)

    Peter, Benjamin M; Wegmann, Daniel; Excoffier, Laurent

    2010-11-01

    Although most natural populations are genetically subdivided, they are often analysed as if they were panmictic units. In particular, signals of past demographic size changes are often inferred from genetic data by assuming that the analysed sample is drawn from a population without any internal subdivision. However, it has been shown that a bottleneck signal can result from the presence of some recent immigrants in a population. It thus appears important to contrast these two alternative scenarios in a model choice procedure to prevent wrong conclusions from being drawn. We use here an Approximate Bayesian Computation (ABC) approach to infer whether observed patterns of genetic diversity in a given sample are more compatible with it being drawn from a panmictic population having gone through some size change, or from one or several demes belonging to a recent finite island model. Simulations show that we can correctly identify samples drawn from a subdivided population in up to 95% of the cases for a wide range of parameters. We apply our model choice procedure to the case of the chimpanzee (Pan troglodytes) and find conclusive evidence that Western and Eastern chimpanzee samples are drawn from a spatially subdivided population. © 2010 Blackwell Publishing Ltd.
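    The ABC model choice recipe (simulate from each model under its prior, keep the simulations whose summary statistics fall closest to the observed ones, and read the posterior model probability off the accepted model labels) can be sketched on a toy problem far simpler than the coalescent setting of the paper; the two "models" and the variance summary below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy rejection-ABC model choice between a one-pool ("panmictic") and a
# two-deme ("subdivided") data-generating model, summarized by variance.
def simulate(model):
    if model == 0:                     # panmictic: a single pool
        x = rng.normal(0.0, 1.0, 50)
    else:                              # subdivided: two shifted demes
        x = np.concatenate([rng.normal(-1.5, 1.0, 25),
                            rng.normal(+1.5, 1.0, 25)])
    return x.var()                     # summary statistic

s_obs = 3.2                            # pretend observed summary

n_sim = 20000
models = rng.integers(0, 2, n_sim)     # uniform prior over the two models
summaries = np.array([simulate(m) for m in models])
accepted = models[np.argsort(np.abs(summaries - s_obs))[:200]]  # closest 1%
p_subdivided = accepted.mean()         # posterior P(subdivided | s_obs)
print(p_subdivided)
```

    Because the observed summary sits near the subdivided model's expected variance (1 + 1.5^2 = 3.25) and far from the panmictic model's (about 1), the accepted simulations are dominated by model 1.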

  14. WEMo (Wave Exposure Model): Formulation, Procedures and Validation

    OpenAIRE

    Malhotra, Amit; Fonseca, Mark S.

    2007-01-01

    This report describes the working of the National Centers for Coastal Ocean Service (NCCOS) Wave Exposure Model (WEMo), which is capable of predicting the exposure of a site in estuarine and closed waters to local wind-generated waves. WEMo works in two different modes: the Representative Wave Energy (RWE) mode calculates the exposure using physical parameters like wave energy and wave height, while the Relative Exposure Index (REI) mode empirically calculates exposure as a unitless index. Detailed working of th...

  15. Modeling clicks beyond the first result page

    NARCIS (Netherlands)

    Chuklin, A.; Serdyukov, P.; de Rijke, M.

    2013-01-01

    Most modern web search engines yield a list of documents of a fixed length (usually 10) in response to a user query. The next ten search results are usually available in one click. These documents either replace the current result page or are appended to the end. Hence, in order to examine more

  17. Minkowski Momentum Resulting from a Vacuum-Medium Mapping Procedure, and a Brief Review of Minkowski Momentum Experiments

    CERN Document Server

    Brevik, Iver

    2016-01-01

    A discussion is given on the interpretation and physical importance of the Minkowski momentum in macroscopic electrodynamics (essential for the Abraham-Minkowski problem). We focus on the following two facets: (1) Adopting a simple dielectric model where the refractive index $n$ is constant, we demonstrate by means of a mapping procedure how the electromagnetic field in a medium can be mapped into a corresponding field in vacuum. This mapping was presented many years ago [I. Brevik and B. Lautrup, Mat. Fys. Medd. Dan. Vid. Selsk 38(1), 1 (1970)], but is apparently not well known. A characteristic property of this procedure is that it shows how naturally the Minkowski energy-momentum tensor fits into the canonical formalism. Especially the spacelike character of the electromagnetic total four-momentum for a radiation field (implying negative electromagnetic energy in some inertial frames), so strikingly demonstrated in the Cherenkov effect, is worth attention. (2) Our second objective is to give a critical...

  18. Loop electrosurgical excision procedure: an effective, inexpensive, and durable teaching model.

    Science.gov (United States)

    Connor, R Shae; Dizon, A Mitch; Kimball, Kristopher J

    2014-12-01

    The effectiveness of simulation training for enhancing operative skills is well established. Here we describe the construction of a simple, low-cost model for teaching the loop electrosurgical excision procedure. Composed of common materials such as polyvinyl chloride pipe and sausages, the simulation model, shown in the accompanying figure, can be easily reproduced by other training programs. In addition, we present an instructional video that utilizes this model to review loop electrosurgical excision procedure techniques, highlighting important steps in the procedure and briefly addressing challenging situations and common mistakes as well as strategies to prevent them. The video and model can be used in conjunction with a simulation skills laboratory to teach the procedure to students, residents, and new practitioners.

  19. Engineering model development and test results

    Science.gov (United States)

    Wellman, John A.

    1993-08-01

    The correctability of the primary mirror spherical error in the Wide Field/Planetary Camera (WF/PC) is sensitive to the precise alignment of the incoming aberrated beam onto the corrective elements. Articulating fold mirrors that provide +/- 1 milliradian of tilt in 2 axes are required to allow for alignment corrections in orbit as part of the fix for the Hubble Space Telescope. An engineering study was made by Itek Optical Systems and the Jet Propulsion Laboratory (JPL) to investigate replacement of fixed fold mirrors within the existing WF/PC optical bench with articulating mirrors. The study contract developed the baseline requirements, established the suitability of lead magnesium niobate (PMN) actuators and evaluated several tilt mechanism concepts. Two engineering model articulating mirrors were produced to demonstrate the function of the tilt mechanism to provide +/- 1 milliradian of tilt, packaging within the space constraints and manufacturing techniques including the machining of the invar tilt mechanism and lightweight glass mirrors. The success of the engineering models led to the follow-on design and fabrication of 3 flight mirrors that have been incorporated into the WF/PC to be placed into the Hubble Space Telescope as part of the servicing mission scheduled for late 1993.

  20. Testing of Subgrid-Scale Stress Models by Using Results from Direct Numerical Simulations

    Institute of Scientific and Technical Information of China (English)

    Hongrui GONG

    1998-01-01

    The most commonly used dynamic subgrid models, Germano's model and the dynamic kinetic energy model, and their base models, the Smagorinsky model and the kinetic energy model, were tested using results from direct numerical simulations of various turbulent flows. In Germano's dynamic model, the model coefficient was treated as a constant within the test filter. This treatment is conceptually inconsistent. An iteration procedure was proposed to calculate the model coefficient, and an improved correlation coefficient was found.
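A heavily simplified 1D scalar analogue can sketch what such an iteration might look like. This is an assumption-laden toy, not the paper's formulation: instead of pulling the coefficient C out of the test filter as a constant, C(x) is kept inside the filtered term and updated by fixed-point iteration on a Germano-identity-like relation.

```python
import numpy as np

def box_filter(f, w=3):
    # periodic top-hat test filter of width w
    n = len(f)
    kernel = np.ones(w) / w
    return np.array([kernel @ f[np.arange(i - w // 2, i - w // 2 + w) % n]
                     for i in range(n)])

def dynamic_coefficient(u, dx, alpha=2.0, n_iter=50):
    """Fixed-point iteration for a spatially varying dynamic coefficient C(x).

    1D scalar analogue of the Germano identity:
        L(x) = C(x) * a(x) - filt(C * m)(x)
    with a the test-level term and m the grid-level term. The conventional
    treatment pulls C out of the filter (constant over the test filter);
    here C stays inside the filtered term and is updated iteratively.
    """
    dudx = np.gradient(u, dx)
    ut = box_filter(u)
    dutdx = np.gradient(ut, dx)
    L = box_filter(u * u) - ut * ut
    a = (alpha * dx) ** 2 * np.abs(dutdx) * dutdx
    m = dx ** 2 * np.abs(dudx) * dudx
    a = np.where(np.abs(a) < 1e-8, 1e-8, a)  # guard the division (toy shortcut)
    C = np.full_like(u, 0.1)
    for _ in range(n_iter):
        C_new = (L + box_filter(C * m)) / a
        C_new = np.clip(C_new, -100.0, 100.0)  # bound C where a is ill-conditioned
        if np.max(np.abs(C_new - C)) < 1e-10:
            C = C_new
            break
        C = C_new
    return C

x = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)
u = np.sin(x) + 0.3 * np.sin(3.0 * x)
C = dynamic_coefficient(u, x[1] - x[0])
```

A real implementation works with the full tensor stresses and filtered strain rates from DNS fields; the sketch only shows the structural difference between a constant-C least-squares solve and an iterated, spatially varying C.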

  1. RESULTS OF SHOULDER STABILIZATION BY A MODIFIED BRISTOW-LATARJET PROCEDURE WITH ARTHROSCOPY

    Directory of Open Access Journals (Sweden)

    R. V. Gladkov

    2014-01-01

    Full Text Available The authors describe a minimally invasive technique for arthroscopically assisted Bristow-Latarjet non-free bone autoplasty in patients with bone loss of more than 25% of the anterior-posterior diameter of the glenoid, poor capsule quality, or deep Hill-Sachs defects. The analysis covers the early results of treatment in 19 patients and the midterm results in 13 soldiers operated on in 2011-2014. Features of the proposed technique are a shortened surgical approach and reduced damage to the subscapularis muscle. In addition, arthroscopic support allows precise positioning of the graft relative to the articular surface of the scapula while simultaneously repairing damaged anatomy (SLAP lesions, rotator cuff tendons and the posterior labrum), restoring shoulder ligament tension, and isolating the bone graft from the joint cavity, contributing to better articulation of the humeral head and reducing the risk of nonunion and resorption.

  2. New Procedure to Develop Lumped Kinetic Models for Heavy Fuel Oil Combustion

    KAUST Repository

    Han, Yunqing

    2016-09-20

    A new procedure to develop accurate lumped kinetic models for complex fuels is proposed, and applied to experimental data for heavy fuel oil measured by thermogravimetry. The new procedure is based on pseudocomponents representing different reaction stages, which are determined by a systematic optimization process to ensure the separation of the different reaction stages with the highest accuracy. The procedure was implemented and the model prediction compared against that from a conventional method, yielding significantly improved agreement with the experimental data. © 2016 American Chemical Society.
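A minimal sketch of the pseudocomponent idea, under stated assumptions: independent first-order Arrhenius pseudo-components, simple Euler integration of the conversion, and a crude random search standing in for the paper's systematic optimization. All parameter values and function names below are illustrative, not fitted to real heavy-fuel-oil data.

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)

def simulate_tg(T, beta, params):
    """Normalized residual mass for independent first-order pseudo-components.

    params: (log10 A, E [J/mol], mass fraction c) per pseudo-component.
    dalpha/dT = (A / beta) * exp(-E / (R T)) * (1 - alpha), Euler-integrated.
    """
    mass = np.ones_like(T)
    for logA, E, c in params:
        A = 10.0 ** logA
        alpha = np.zeros_like(T)
        for i in range(1, len(T)):
            rate = (A / beta) * np.exp(-E / (R * T[i])) * (1.0 - alpha[i - 1])
            alpha[i] = min(1.0, alpha[i - 1] + rate * (T[i] - T[i - 1]))
        mass -= c * alpha
    return mass

def fit_random_search(T, beta, data, n_comp=2, n_trials=500, seed=0):
    # crude random search standing in for the systematic optimization step
    rng = np.random.default_rng(seed)
    best, best_err = None, np.inf
    for _ in range(n_trials):
        params = [(rng.uniform(5.0, 15.0), rng.uniform(8e4, 2.2e5),
                   rng.uniform(0.1, 0.5)) for _ in range(n_comp)]
        err = float(np.mean((simulate_tg(T, beta, params) - data) ** 2))
        if err < best_err:
            best, best_err = params, err
    return best, best_err

T = np.linspace(400.0, 900.0, 200)   # temperature ramp, K
beta = 10.0 / 60.0                   # heating rate: 10 K/min in K/s
true_params = [(9.0, 1.2e5, 0.35), (11.0, 1.8e5, 0.30)]
data = simulate_tg(T, beta, true_params)       # synthetic "measured" TG curve
fit, err = fit_random_search(T, beta, data)
```

The paper's contribution is choosing the pseudo-component boundaries by optimization; the sketch only shows how a lumped model assembles a TG curve and how a fit could be scored.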

  3. Microplasticity of MMC. Experimental results and modelling

    Energy Technology Data Exchange (ETDEWEB)

    Maire, E. (Groupe d' Etude de Metallurgie Physique et de Physique des Materiaux, INSA, 69 Villeurbanne (France)); Lormand, G. (Groupe d' Etude de Metallurgie Physique et de Physique des Materiaux, INSA, 69 Villeurbanne (France)); Gobin, P.F. (Groupe d' Etude de Metallurgie Physique et de Physique des Materiaux, INSA, 69 Villeurbanne (France)); Fougeres, R. (Groupe d' Etude de Metallurgie Physique et de Physique des Materiaux, INSA, 69 Villeurbanne (France))

    1993-11-01

    The microplastic behavior of several MMCs is investigated by means of tension and compression tests. This behavior is asymmetric: the proportional limit is higher in tension than in compression, but the work hardening rate is higher in compression. These differences are analysed in terms of the maximum of the Tresca shear stress at the interface (proportional limit) and of the emission of dislocation loops during cooling (work hardening rate). On the other hand, a model is proposed to calculate the value of the yield stress, describing the composite as a material composed of three phases: inclusion, unaffected matrix, and matrix surrounding the inclusion having a gradient in the density of the thermally induced dislocations. (orig.).

  4. Modelling dental implant extraction by pullout and torque procedures.

    Science.gov (United States)

    Rittel, D; Dorogoy, A; Shemtov-Yona, K

    2017-07-01

    Dental implant extraction, achieved either by applying torque or pullout force, is used to estimate the bone-implant interfacial strength. A detailed description of the mechanical and physical aspects of the extraction process is still missing from the literature. This paper presents 3D nonlinear dynamic finite element simulations of a commercial implant extraction process from the mandible bone. Emphasis is put on the typical load-displacement and torque-angle relationships for various types of cortical and trabecular bone strengths. The simulations also study the influence of the osseointegration level on those relationships. This is done by simulating implant extraction right after insertion, when interfacial frictional contact exists between the implant and bone, and long after insertion, assuming that the implant is fully bonded to the bone. The model does not include a separate representation of the interfacial layer, for which available data are limited. The obtained relationships show that the higher the strength of the trabecular bone, the higher the peak extraction force, while for application of torque it is the cortical bone that might dictate the peak torque value. Information on the relative strength contrast of the cortical and trabecular components, as well as the progressive nature of the damage evolution, can be revealed from the obtained relations. It is shown that full osseointegration might multiply the peak and average load values by a factor of 3-12, although the calculated work of extraction varies only by a factor of 1.5. From a quantitative point of view, it is suggested that, as an alternative to reporting peak load or torque values, an average value derived from the extraction work be used to better characterize the bone-implant interfacial strength. Copyright © 2017 Elsevier Ltd. All rights reserved.

  5. NASA Glenn Icing Research Tunnel: 2014 Cloud Calibration Procedure and Results

    Science.gov (United States)

    Van Zante, Judith F.; Ide, Robert F.; Steen, Laura E.; Acosta, Waldo J.

    2014-01-01

    The results of the December 2013 to February 2014 Icing Research Tunnel full icing cloud calibration are presented. The calibration steps included establishing a uniform cloud and conducting drop size and liquid water content calibrations. The goal of the calibration was to develop a uniform cloud, and to generate a transfer function from the inputs of air speed, spray bar atomizing air pressure and water pressure to the outputs of median volumetric drop diameter and liquid water content. This was done for both 14 CFR Parts 25 and 29, Appendix C ('typical' icing) and soon-to-be released Appendix O (supercooled large drop) conditions.

  6. Long-term results of intraoperative 5-FU in glaucoma filtering procedures

    Directory of Open Access Journals (Sweden)

    Hashemian MN

    2000-11-01

    Full Text Available This prospective study evaluated the long-term results of intraoperative 5-FU in glaucoma patients undergoing trabeculectomy. 14 patients categorized as high risk or medium risk underwent trabeculectomy with 5-FU and were followed for a mean period of 32 months. Patients were evaluated for visual acuity, cup-disc ratio and intraocular pressure (IOP); the number of medications was also taken into consideration. 78% (11) of patients achieved controlled IOP (< 21 mmHg) with or without medication. There was a statistically significant reduction of IOP and number of medications after the operation. There were no significant complications observed during the follow-up period.

  7. Reconstructive microsurgery of the fallopian tube with the carbon dioxide laser - procedures and preliminary results.

    Science.gov (United States)

    Bellina, J H

    1981-01-01

    In 1974 the carbon dioxide laser was adapted to the operating microscope for reconstructive pelvic surgery. A protocol was designed to test the efficacy of this surgical modality and a new study was begun. Complete documentation of laparoscopic findings, surgical technique, and pre- and post-operative hysterosalpingograms are kept on videotape. One hundred cases will be entered into this study. Patency and pregnancy failures will be compared with patency and pregnancy successes to determine, if possible, the reasons for failure. In this paper detailed descriptions of the surgical techniques employed in the first 61 cases are presented. Preliminary results in terms of patency and pregnancy experience to date are reported. Eighty-two couples have been evaluated at the Reproductive Biology Unit. Sixty-one cases have undergone fertility enhancement laser microsurgery and/or interim medical management prior to surgery for infertility. Bilateral patency, or patency of the only existing fallopian tube, was demonstrated post-treatment in 93% (57) of the cases. Eliminating those cases who are not at risk of pregnancy due to elective contraception or medical prohibition during Danocrine therapy (28), conception has occurred in 10 of 33 patients. This represents approximately one of every three patients at risk. Considering the limited exposure to pregnancy, these results are very encouraging.

  8. NASA Glenn Icing Research Tunnel: 2014 and 2015 Cloud Calibration Procedures and Results

    Science.gov (United States)

    Steen, Laura E.; Ide, Robert F.; Van Zante, Judith F.; Acosta, Waldo J.

    2015-01-01

    This report summarizes the current status of the NASA Glenn Research Center (GRC) Icing Research Tunnel cloud calibration: specifically, the cloud uniformity, liquid water content, and drop-size calibration results from both the January-February 2014 full cloud calibration and the January 2015 interim cloud calibration. Some aspects of the cloud have remained the same as what was reported for the 2014 full calibration, including the cloud uniformity from the Standard nozzles, the drop-size equations for Standard and Mod1 nozzles, and the liquid water content for large-drop conditions. Overall, the tests performed in January 2015 showed good repeatability to 2014, but there is new information to report as well. There have been minor updates to the Mod1 cloud uniformity on the north side of the test section. Also, successful testing with the OAP-230Y has allowed the IRT to re-expand its operating envelopes for large-drop conditions to a maximum median volumetric diameter of 270 microns. Lastly, improvements to the collection-efficiency correction for the SEA multi-wire have resulted in new calibration equations for Standard- and Mod1-nozzle liquid water content.

  9. Early Results of the “Clamp and Sew” Fontan Procedure Without the Use of Circulatory Support

    Science.gov (United States)

    Shinkawa, Takeshi; Anagnostopoulos, Petros V.; Johnson, Natalie C.; Presnell, Laura; Watanabe, Naruhito; Sapru, Anil; Azakie, Anthony

    2017-01-01

    Background A modification of the Fontan operation was recently applied, which includes anastomoses of the extra-cardiac conduit to the right pulmonary artery and inferior vena cava using simple clamping with no additional circulatory support, venous shunting, pulmonary artery preparation, or prior maintenance of azygos vein patency. The objective of this study is to assess the outcomes of this novel off-pump “clamp and sew” Fontan procedure. Methods This is a retrospective review of all patients having a Fontan procedure between January 2009 and October 2010 at a single institution. Results Twelve patients had a Fontan procedure with the use of cardiopulmonary bypass (CPB group), and 12 had an off-pump Fontan procedure (off-pump group). Preoperative demographic and hemodynamic data were similar except for higher mean pulmonary artery pressure in the CPB group (12.2 ± 1.6 mm Hg versus 9.9 ± 2.4 mm Hg; p = 0.02). No patients in the off-pump group required conversion to CPB. The mean inferior vena cava clamp time in the off-pump group patients was 10 ± 3 minutes. There were no early or midterm deaths. No patients exhibited postoperative hepatic or renal dysfunction. Postoperative maximal serum creatinine and aspartate transaminase were significantly lower in the off-pump group compared with the CPB group (0.59 ± 0.12 versus 0.77 ± 0.22 mg/dL; p = 0.03 and 35.5 ± 8.3 versus 53.1 ± 19.0 U/L; p = 0.02, respectively). At median follow-up of 13 months (range, 1 to 20 months), all but 1 patient in the CPB group are in New York Heart Association class I with unobstructed Fontan circulation. Conclusions The clamp and sew technique for completion of an extracardiac conduit Fontan procedure appears safe and feasible for selected patients. PMID:21524454

  10. Spirometry in healthy subjects: do technical details of the test procedure affect the results?

    Directory of Open Access Journals (Sweden)

    Luciana Sipoli

    Full Text Available Spirometry should follow strict quality criteria. The American Thoracic Society (ATS) recommends the use of a noseclip; however, there are controversies about its need. The ATS also indicates that tests should be done in the sitting position, but there are no recommendations about the position of the upper and lower limbs, nor about who should hold the mouthpiece while performing the maneuvers: the evaluated subject or the evaluator. To compare noseclip use or not, different upper and lower limb positions, and who holds the mouthpiece, verifying whether these technical details affect spirometric results in healthy adults. One hundred and three healthy individuals (41 men; age: 47 [33-58] years; normal lung function: FEV₁/FVC = 83±5, FEV₁ = 94 [88-104]% predicted, FVC = 92 [84-102]% predicted) underwent a protocol consisting of four spirometric comparative analyses in the sitting position: 1) maximum voluntary ventilation (MVV) with vs without noseclip; 2) FVC performed with vs without upper limb support; 3) FVC performed with lower limbs crossed vs lower limbs in a neutral position; 4) FVC, slow vital capacity, and MVV comparing the evaluated subject holding the mouthpiece vs the evaluator holding it. Different spirometric variables presented statistically significant differences (p<0.05) in the four comparisons; however, none of them showed any variation larger than those considered acceptable according to the ATS reproducibility criteria. There was no relevant variation in spirometric results when analyzing technical details such as noseclip use during MVV, upper and lower limb positions, and who holds the mouthpiece when performing the tests in healthy adults.

  11. A Novel Camera Calibration Algorithm as Part of an HCI System: Experimental Procedure and Results

    Directory of Open Access Journals (Sweden)

    Sauer Kristal

    2006-02-01

    Full Text Available Camera calibration is an initial step employed in many computer vision applications for the estimation of camera parameters. Along with images of an arbitrary scene, these parameters allow for inference of the scene's metric information. This is a primary reason for camera calibration's significance to computer vision. In this paper, we present a novel approach to solving the camera calibration problem. The method was developed as part of a Human Computer Interaction (HCI) system for the NASA Virtual GloveBox (VGX) Project. Our algorithm is based on the geometric properties of perspective projections and provides a closed-form solution for the camera parameters. Its accuracy is evaluated in the context of the NASA VGX, and the results indicate that our algorithm achieves accuracy similar to other calibration methods that are characterized by greater complexity and computational cost. Because of its reliability and wide variety of potential applications, we are confident that our calibration algorithm will be of interest to many.
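The paper's own algorithm is not reproduced here; as an illustration of a closed-form calibration in the same spirit, the classical Direct Linear Transform (DLT) recovers a 3x4 projection matrix from 3D-2D point correspondences via an SVD null-space solution. The synthetic camera below is an assumption for demonstration only.

```python
import numpy as np

def project(P, X):
    """Apply a 3x4 projection matrix to Nx3 world points -> Nx2 pixels."""
    Xh = np.hstack([X, np.ones((len(X), 1))])
    x = (P @ Xh.T).T
    return x[:, :2] / x[:, 2:3]

def dlt_calibrate(X, x):
    """Closed-form DLT: recover P (up to scale) from >= 6 correspondences."""
    A = []
    for (Xw, Yw, Zw), (u, v) in zip(X, x):
        A.append([Xw, Yw, Zw, 1, 0, 0, 0, 0, -u * Xw, -u * Yw, -u * Zw, -u])
        A.append([0, 0, 0, 0, Xw, Yw, Zw, 1, -v * Xw, -v * Yw, -v * Zw, -v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    return Vt[-1].reshape(3, 4)  # null-space vector = flattened P

# synthetic camera: intrinsics K, identity rotation, translation t
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
Rt = np.hstack([np.eye(3), np.array([[0.1], [-0.2], [3.0]])])
P_true = K @ Rt

rng = np.random.default_rng(1)
X = rng.uniform(-1.0, 1.0, size=(10, 3))  # world points in front of the camera
x = project(P_true, X)

P_est = dlt_calibrate(X, x)
err = np.max(np.abs(project(P_est, X) - x))  # reprojection error in pixels
```

With exact correspondences the recovered matrix reprojects the points essentially perfectly; real pipelines add normalization and nonlinear refinement on top of this closed-form core.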

  12. Congenital pseudoarthrosis of the tibia – results of treatment by free fibular transfer and associated procedures – preliminary study.

    Science.gov (United States)

    Iamaguchi, Raquel B; Fucs, Patricia M M B; Carlos da Costa, Antonio; Chakkour, Ivan; Gomes, Mogar D

    2011-09-01

    We evaluated 16 children with congenital pseudoarthrosis of the tibia treated with a contralateral fibular graft, with the aim of reporting the difficulties and clinical results in the affected limb after consolidation. Sixty-three percent of the children had characteristics of neurofibromatosis. Consolidation was achieved after the main surgery in 37% of patients, and in the remainder after multiple procedures. Consolidation time was longer for male patients. Refracture was observed in six patients and recurrence of the anterior bowing in six; four of these patients were submitted to correction. Four patients presented femoral overgrowth. The average shortening of the affected leg was 3.6 cm. The proposed procedure leads to a long treatment course with many reoperations for correction of possible complications.

  13. New Inference Procedures for Semiparametric Varying-Coefficient Partially Linear Cox Models

    Directory of Open Access Journals (Sweden)

    Yunbei Ma

    2014-01-01

    Full Text Available In biomedical research, one major objective is to identify risk factors and study their risk impacts, as this identification can help clinicians both make proper decisions and increase the efficiency of treatments and resource allocation. A two-step penalization-based procedure is proposed to select linear regression coefficients for the linear components and to identify significant nonparametric varying-coefficient functions for semiparametric varying-coefficient partially linear Cox models. It is shown that the resulting penalized estimators of the linear regression coefficients are asymptotically normal and have oracle properties, and the resulting estimators of the varying-coefficient functions have optimal convergence rates. A simulation study and an empirical example are presented for illustration.

  14. Procedures for adjusting regional regression models of urban-runoff quality using local data

    Science.gov (United States)

    Hoos, A.B.; Sisolak, J.K.

    1993-01-01

    Statistical operations termed model-adjustment procedures (MAPs) can be used to incorporate local data into existing regression models to improve the prediction of urban-runoff quality. Each MAP is a form of regression analysis in which the local data base is used as a calibration data set. Regression coefficients are determined from the local data base, and the resulting 'adjusted' regression models can then be used to predict storm-runoff quality at unmonitored sites. The response variable in the regression analyses is the observed load or mean concentration of a constituent in storm runoff for a single storm. The set of explanatory variables used in the regression analyses is different for each MAP, but always includes the predicted value of load or mean concentration from a regional regression model. The four MAPs examined in this study were: single-factor regression against the regional model prediction, P (termed MAP-1F-P), regression against P (termed MAP-R-P), regression against P and additional local variables (termed MAP-R-P+nV), and a weighted combination of P and a local-regression prediction (termed MAP-W). The procedures were tested by means of split-sample analysis, using data from three cities included in the Nationwide Urban Runoff Program: Denver, Colorado; Bellevue, Washington; and Knoxville, Tennessee. The MAP that provided the greatest predictive accuracy for the verification data set differed among the three test data bases and among model types (MAP-W for Denver and Knoxville, MAP-1F-P and MAP-R-P for Bellevue load models, and MAP-R-P+nV for Bellevue concentration models) and, in many cases, was not clearly indicated by the values of standard error of estimate for the calibration data set. A scheme to guide MAP selection, based on exploratory data analysis of the calibration data set, is presented and tested. The MAPs were tested for sensitivity to the size of a calibration data set.
As expected, predictive accuracy of all MAPs for
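One of the simpler MAPs, regression against the regional prediction (MAP-R-P), can be sketched as a log-log calibration regression. The data, coefficients, and function names below are illustrative assumptions, not values from the study; the sketch only shows the calibrate-locally-then-predict-at-unmonitored-sites pattern.

```python
import numpy as np

def fit_map_r_p(regional_pred, observed):
    """MAP-R-P sketch: regress observed local storm loads on regional-model
    predictions (both log-transformed, as is common for runoff-quality data)."""
    slope, intercept = np.polyfit(np.log(regional_pred), np.log(observed), 1)
    return intercept, slope

def adjusted_prediction(intercept, slope, regional_pred):
    # apply the locally adjusted model at an unmonitored site
    return np.exp(intercept + slope * np.log(regional_pred))

rng = np.random.default_rng(42)
P = rng.lognormal(1.0, 0.5, 30)                     # regional-model predictions
obs = 1.4 * P ** 0.9 * rng.lognormal(0.0, 0.1, 30)  # local data: regional model is biased

b0, b1 = fit_map_r_p(P, obs)
local_estimate = adjusted_prediction(b0, b1, 5.0)   # adjusted load at a new site
```

MAP-1F-P would fit only a multiplicative factor (slope fixed at 1), and MAP-R-P+nV would add further local explanatory variables to the same regression.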

  15. A computational model to investigate assumptions in the headturn preference procedure

    Directory of Open Access Journals (Sweden)

    Christina eBergmann

    2013-10-01

    Full Text Available In this paper we use a computational model to investigate four assumptions that are tacitly present in interpreting the results of studies on infants' speech processing abilities using the Headturn Preference Procedure (HPP): (1) behavioural differences originate in different processing; (2) processing involves some form of recognition; (3) words are segmented from connected speech; and (4) differences between infants should not affect overall results. In addition, we investigate the impact of two potentially important aspects of the design and execution of the experiments: (a) the specific voices used in the two parts of HPP experiments (familiarisation and test) and (b) the experimenter's criterion for what is a sufficient headturn angle. The model is designed to maximise cognitive plausibility. It takes real speech as input, and it contains a module that converts the output of internal speech processing and recognition into headturns that can yield real-time listening preference measurements. Internal processing is based on distributed episodic representations in combination with a matching procedure based on the assumption that complex episodes can be decomposed as positive weighted sums of simpler constituents. Model simulations show that the first two assumptions hold under two different definitions of recognition. However, explicit segmentation is not necessary to simulate the behaviours observed in infant studies. Differences in attention span between infants can affect the outcomes of an experiment. The same holds for the experimenter's decision criterion. The speakers used in experiments affect outcomes in complex ways that require further investigation. The paper ends with recommendations for future studies using the HPP.

  16. SITE-94. Discrete-feature modelling of the Aespoe site: 4. Source data and detailed analysis procedures

    Energy Technology Data Exchange (ETDEWEB)

    Geier, J.E. [Golder Associates AB, Uppsala (Sweden)

    1996-12-01

    Specific procedures and source data are described for the construction and application of discrete-feature hydrological models for the vicinity of Aespoe. Documentation is given for all major phases of the work, including: Statistical analyses to develop and validate discrete-fracture network models, Preliminary evaluation, construction, and calibration of the site-scale model based on the SITE-94 structural model of Aespoe, Simulation of multiple realizations of the integrated model, and variations, to predict groundwater flow, and Evaluation of near-field and far-field parameters for performance assessment calculations. Procedures are documented in terms of the computer batch files and executable scripts that were used to perform the main steps in these analyses, to provide for traceability of results that are used in the SITE-94 performance assessment calculations. 43 refs.

  17. Evaluation of the results and complications of the Latarjet procedure for recurrent anterior dislocation of the shoulder

    Directory of Open Access Journals (Sweden)

    Luciana Andrade da Silva

    2015-12-01

    Full Text Available ABSTRACT OBJECTIVE: To evaluate the results and complications of the Latarjet procedure in patients with anterior recurrent dislocation of the shoulder. METHODS: Fifty-one patients (52 shoulders) with anterior recurrent dislocation, surgically treated by the Latarjet procedure, were analyzed retrospectively. The average follow-up time was 22 months (range 12-66 months); the age range was 15-59 years, with a mean of 31; regarding sex, 42 (82.4%) patients were male and nine (17.6%) were female. The dominant side was affected in 29 (55.8%) shoulders. Regarding etiology, 48 (92.3%) reported trauma and four (7.6%) had the first episode after a convulsion. RESULTS: The average elevation, lateral rotation and medial rotation of the operated shoulder were, respectively, 146° (60-80°), 59° (0-85°) and T8 (T5 to gluteus), with a statistically significant decrease in range of motion in all planes compared with the other side. The Rowe and UCLA scores were 90.6 and 31.4, respectively, in the postoperative period. Eleven shoulders (21.2%) had poor results: signs of instability (13.4%), non-union (11.5%) and early loosening of the synthesis material (1.9%). There was a correlation between poor results and convulsive patients (p = 0.026). CONCLUSION: We conclude that the Latarjet procedure for correction of anterior recurrent dislocation leads to good and excellent results in 82.7% of cases. Complications are related to errors in technique.

  18. Late Results of Cox Maze III Procedure in Patients with Atrial Fibrillation Associated with Structural Heart Disease.

    Science.gov (United States)

    Gomes, Gustavo Gir; Gali, Wagner Luis; Sarabanda, Alvaro Valentim Lima; Cunha, Claudio Ribeiro da; Kessler, Iruena Moraes; Atik, Fernando Antibas

    2017-07-01

    The Cox-Maze III procedure is one of the surgical techniques used in the surgical treatment of atrial fibrillation (AF). To determine the late results of the Cox-Maze III procedure in terms of maintenance of sinus rhythm, and mortality and stroke rates. Between January 2006 and January 2013, 93 patients were submitted to the cut-and-sew Cox-Maze III procedure in combination with structural heart disease repair. Heart rhythm was determined by 24-hour Holter monitoring. Procedural success rates were determined by longitudinal methods and recurrence predictors by multivariate Cox regression models. Thirteen patients who were discharged alive were excluded due to loss of follow-up. The remaining 80 patients were aged 49.9 ± 12 years and 47 (58.7%) of them were female. Involvement of the mitral valve and rheumatic heart disease were found in 67 (83.7%) and 63 (78.7%) patients, respectively. Seventy patients (87.5%) had persistent or long-standing persistent AF. Mean follow-up with Holter monitoring was 27.5 months. There were no hospital deaths. Sinus rhythm maintenance rates were 88%, 85.1% and 80.6% at 6, 24 and 36 months, respectively. Predictors of late recurrence of AF were female gender (HR 3.52; 95% CI 1.21-10.25; p = 0.02), coronary artery disease (HR 4.73; 95% CI 1.37-16.36; p = 0.01) and greater left atrium diameter (HR 1.05; 95% CI 1.01-1.09; p = 0.02). Actuarial survival was 98.5% at 12, 24 and 48 months, and actuarial freedom from stroke was 100%, 100% and 97.5% in the same time frames. The Cox-Maze III procedure, in our experience, is efficacious for sinus rhythm maintenance, with very low late mortality and stroke rates.

  19. PROcedures for TESTing and measuring wind turbine components. Results for yaw and pitch system and drive train

    Energy Technology Data Exchange (ETDEWEB)

    Holierhoek, J.G.; Savenije, F.J.; Engels, W.P.; Van de Pieterman, R.P. [Unit Wind Energy, Energy research Centre of the Netherlands, 1755 ZG Petten (Netherlands); Lekou, D.J. [Wind Energy Section, Centre for Renewable Energy Sources and Saving (Greece); Hecquet, T. [SWE, Universitaet Stuttgart, Stuttgart (Germany); Soeker, H. [DEWI, Wilhelmshaven (Germany); Ehlers, B. [Suzlon Energy GmbH, Suzlon Energy GmbH, Rostock (Germany); Ristow, M.; Kochmann, M. [Load Assumptions, Germanischer Lloyd Industrial Services GmbH, Hamburg (Germany); Smolders, K.; Peeters, J. [R and D technology, Hansen Transmissions International, Lommel (Belgium)

    2012-07-16

    PROcedures for TESTing (PROTEST) and measuring wind energy systems was a pre-normative project that ran from 2008 to 2010 with the aim of improving the reliability of the mechanical components of wind turbines. At the start of the project it was concluded that the procedures concerning these components needed further improvement. Within the PROTEST project, complementary procedures were developed to improve the specification of the design loads at the interfaces where the mechanical components (the pitch and yaw systems, as well as the drive train) are attached to the wind turbine. This is required because, when optimizing wind turbine operation and improving reliability, design attention should be given not only to safety-related components but also to the other components that affect the overall behaviour of the wind turbine as a system. The project resulted in a proposal for new design load cases, specifically for the drive train; a description of the loads to be defined at the interfaces of each mechanical system; and a method for setting up and using prototype measurements to validate or improve the load calculations for the mechanical components. Following this method would improve the reliability of wind turbines, although more experience is needed to use the method efficiently. Examples are given for the analysis of the drive train, pitch system and yaw system.

  20. Requirements for Computer-Based Procedures for Nuclear Power Plant Field Operators: Results from a Qualitative Study

    Energy Technology Data Exchange (ETDEWEB)

    Katya Le Blanc; Johanna Oxstrand

    2012-05-01

    Although computer-based procedures (CBPs) have been investigated as a way to enhance operator performance on procedural tasks in the nuclear industry for almost thirty years, they are not currently widely deployed at United States utilities. One of the barriers to the wide-scale deployment of CBPs is the lack of operational experience with CBPs that could serve as a sound basis for justifying their use at nuclear utilities. Utilities are hesitant to adopt CBPs because of concern over the potential costs of implementation and over regulatory approval. Regulators require a sound technical basis for the use of any procedure at the utilities; without operating experience to support the use of CBPs, it is difficult to establish such a technical basis. In an effort to begin the process of developing a technical basis for CBPs, researchers at Idaho National Laboratory are partnering with industry to explore CBPs, with the objective of defining requirements for CBPs and developing an industry-wide vision and path forward for their use. This paper describes the results from a qualitative study aimed at defining requirements for CBPs to be used by field operators and maintenance technicians.

  1. Impact of data quality and quantity and the calibration procedure on crop growth model calibration

    Science.gov (United States)

    Seidel, Sabine J.; Werisch, Stefan

    2014-05-01

    , biomass partitioning, LAI, plant height, rooting depth and duration of growing period, as well as (3) an automated calibration using the AMALGAM optimization algorithm and Pareto front analysis based on the data listed in (2). Three different calibration strategies were applied for the estimation of the parameters of the soil hydraulic property functions: (1) using pedotransfer functions based on soil texture data derived from soil sampling, (2) using a laboratory evaporation method for the determination of pF-curves and unsaturated hydraulic conductivity (HYPROP), and (3) inverse estimation by multiobjective optimization and Pareto front analysis using the AMALGAM algorithm based on time series of soil moisture at three soil depths. The results show that simulations of yield and soil water dynamics can be simultaneously improved as the quantity of data used for calibration increases (from strategy 1 to 3). The study quantifies the impacts of different model calibration procedures and data inputs on the modeling results. Even though parameter estimation using a multiobjective optimization algorithm is computationally demanding, it enhances the accuracy of model predictions and thus the overall reliability of the modeling results. To estimate climate change impacts based on crop growth modeling, we suggest a proper model calibration based on the simultaneous estimation of soil hydraulic parameters and crop phenology, growth and yield-related parameters using comprehensive experimental data.
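The Pareto front analysis used in the calibration above keeps only parameter sets that are not dominated in any objective. A minimal sketch of non-dominated filtering follows; the two objectives and candidate values are invented for illustration (e.g. a yield error and a soil-moisture error, both to be minimized), not the study's actual calibration runs.

```python
# Non-dominated (Pareto) filtering for multiobjective model calibration.
# Both objectives are assumed to be minimized; candidates are hypothetical.

def dominates(a, b):
    """a dominates b if a is <= b in every objective and < in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Keep the points not dominated by any other point."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

# (yield error, soil-moisture error) for five hypothetical parameter sets
candidates = [(0.9, 0.20), (0.5, 0.60), (0.7, 0.30), (0.75, 0.35), (0.8, 0.25)]
print(pareto_front(candidates))   # (0.75, 0.35) is dominated by (0.7, 0.30)
```

Trading off the remaining non-dominated candidates (rather than collapsing them into one score) is what lets a calibration improve yield and soil-moisture simulations simultaneously.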

  2. Procedural results and 30-day clinical events analysis following Edwards transcatheter aortic valve implantation in 48 consecutive patients: initial experience

    Institute of Scientific and Technical Information of China (English)

    ZHAO Quan-ming; Therese Lognone; Calin Ivascau; Remi Sabatier; Vincent Roule; Ziad Dahdouh; Massimo Massetti; Gilles Grollier

    2012-01-01

    Background Transcatheter aortic valve implantation (TAVI) is a rapidly evolving strategy for the therapy of aortic stenosis. We present the procedural results and analyze the causes of 30-day mortality and clinical events in patients who underwent TAVI with Edwards prosthetic valves at the University Hospital of Caen, France. Methods Patients with severe aortic stenosis who were at high surgical risk or inoperable were considered candidates for TAVI. Forty-eight patients undergoing TAVI from July 2010 to September 2011 were enrolled in this registry. Edwards prosthetic valves were used exclusively in this clinical trial. Results Overall, 48 patients underwent TAVI, 28 of whom received TAVI by the trans-femoral (TF) approach and 20 by the trans-apical (TA) approach. The aortic valve area (AVA) was (0.70±0.23) cm2, left ventricular ejection fraction (LVEF) was (57.4±17.6)%, Log EuroSCORE was (19.2±15.8)%, and mean gradient was (47.0±16.6) mmHg. There were no significant differences between the TF and TA groups in any of these baseline parameters. The device success rate was 95.8%, and the procedural success rate was 93.7% overall. Procedural mortality was 6.7% (3/48): two deaths in the TA group (10%) and one death in the TF group (3.6%). Forty-six Edwards valves were implanted: 10 Edwards Sapien and 36 Edwards XT. Procedure-related complications included cardiac tamponade in 2 cases (4.2%), acute myocardial infarction (AMI) in 1 case (2.1%), permanent pacemaker implantation in 1 case (2.1%), life-threatening and major bleeding in 3 cases, access-site-related major complication in 1 case, AKI stage 3 in 3 cases (6.3%), and minor stroke in 1 case (2.1%). The thirty-day survival rate was 89.6%. There were 5 deaths in total (10.4%): 4 in the TA group (20%) and 1 in the TF group (3.6%). Conclusion The procedural success rate and 30-day mortality were acceptable in these high-risk patients treated with Edwards prosthetic valves in the first 48 TAVI procedures.

  3. Formulating "Principles of Procedure" for the Foreign Language Classroom: A Framework for Process Model Language Curricula

    Science.gov (United States)

    Villacañas de Castro, Luis S.

    2016-01-01

    This article aims to apply Stenhouse's process model of curriculum to foreign language (FL) education, a model characterized by the enactment of "principles of procedure" specific to the discipline to which the school subject belongs. Rather than replace or dissolve current approaches to FL teaching and curriculum…

  4. Clinical results of combined palliative procedures for cyanotic congenital heart defects with intractable hypoplasia of pulmonary arteries

    Institute of Scientific and Technical Information of China (English)

    FAN Xiang-ming; ZHU Yao-bin; SU Jun-wu; ZHANG Jing; LI Zhi-qiang; XU Yao-qiang; LI Xiao-feng

    2013-01-01

    Background Congenital heart defects with intractable hypoplasia of the pulmonary arteries, without confluence or with confluent stenosis, are unsuitable for surgical correction or regular palliative procedures. We report our experience with combined palliative procedures for congenital heart defects with intractable hypoplasia of the pulmonary arteries. Methods From 2001 to 2012, a total of 41 patients with cyanotic congenital heart defects and intractable hypoplasia of the pulmonary arteries underwent surgical procedures. Among them, 31 patients had pulmonary atresia with ventricular septal defect (VSD) and the other 10 had complicated congenital heart defects with pulmonary stenosis. Different palliative procedures were performed according to the morphology of the right and left pulmonary arteries in each patient. If the pulmonary artery was well developed, a Glenn procedure was performed. A modified Blalock-Taussig shunt or modified Waterston shunt was performed if the pulmonary arteries were hypoplastic. If the pulmonary arteries were severely hypoplastic, a Melbourne shunt was performed. Systemic-pulmonary artery shunts were performed bilaterally in 25 cases. A systemic-pulmonary shunt was performed on one side and a Glenn procedure contralaterally in 16 cases. Major aortopulmonary collateral arteries were unifocalized in six cases, ligated in two cases and interventionally embolized in two cases. There was one early death due to cardiac arrest; hospital mortality was 2.4%. Results Five patients suffered postoperative low cardiac output syndrome, three had perfusion of the lungs, and two had pulmonary infections. Systemic-pulmonary shunts were repeated after the original operation in three cases due to occlusion of the conduits. The mean follow-up time was 25 months. The pre- and post-operation left pulmonary indices were (8.13±3.68) vs. (14.9±6.21) mm2/m2. The pre- and post-operation right pulmonary indices were (12.7±8.13) vs. (17.7±7

  5. Modeling pollen time series using seasonal-trend decomposition procedure based on LOESS smoothing.

    Science.gov (United States)

    Rojo, Jesús; Rivero, Rosario; Romero-Morte, Jorge; Fernández-González, Federico; Pérez-Badia, Rosa

    2017-02-01

    Analysis of airborne pollen concentrations provides valuable information on plant phenology and is thus a useful tool in agriculture (for predicting harvests in crops such as the olive and for deciding when to apply phytosanitary treatments) as well as in medicine and the environmental sciences. Variations in airborne pollen concentrations, moreover, are indicators of changing plant life cycles. By modeling pollen time series, we can not only identify the variables influencing pollen levels but also predict future pollen concentrations. In this study, airborne pollen time series were modeled using the seasonal-trend decomposition procedure based on LOcally wEighted Scatterplot Smoothing (LOESS), known as STL. The data series (daily Poaceae pollen concentrations over the period 2006-2014) was broken up into seasonal and residual (stochastic) components. The seasonal component was compared with data on Poaceae flowering phenology obtained by field sampling. Residuals were fitted to a model generated from daily temperature and rainfall values, and daily pollen concentrations, using partial least squares regression (PLSR). This method was then applied to predict daily pollen concentrations for 2014 (independent validation data) using results for the seasonal component of the time series and estimates of the residual component for the period 2006-2013. The correlation between predicted and observed values was r = 0.79 for the pre-peak period (i.e., the period prior to the peak pollen concentration) and r = 0.63 for the post-peak period. Separate analysis of each component of the pollen data series enables the sources of variability to be identified more accurately than analysis of the original non-decomposed series, and this procedure has therefore proved to be a suitable technique for analyzing the main environmental factors influencing airborne pollen concentrations.
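The core idea of STL, splitting a series into trend, seasonal and residual parts that sum back to the original, can be sketched with a classical additive decomposition (moving-average trend plus per-position seasonal means). This is a simplified stand-in for LOESS-based STL, run on a synthetic weekly-cycle series rather than the Poaceae data.

```python
# Classical additive seasonal-trend decomposition: a simplified stand-in
# for LOESS-based STL, applied to synthetic data (not the pollen series).
import numpy as np

def decompose(series, period):
    """Trend by centered moving average, seasonal component as the mean
    deviation per position in the cycle, residual as what remains."""
    n = len(series)
    kernel = np.ones(period) / period
    trend = np.convolve(series, kernel, mode="same")      # smooth trend
    detrended = series - trend
    seasonal = np.array([detrended[i::period].mean() for i in range(period)])
    seasonal = np.tile(seasonal, n // period + 1)[:n]
    seasonal -= seasonal.mean()                           # center at zero
    resid = series - trend - seasonal                     # stochastic part
    return trend, seasonal, resid

period = 7                                   # weekly cycle for the toy series
t = np.arange(28 * period)
rng = np.random.default_rng(0)
series = 0.02 * t + 3 * np.sin(2 * np.pi * t / period) + rng.normal(0, 0.3, t.size)
trend, seasonal, resid = decompose(series, period)
assert np.allclose(trend + seasonal + resid, series)     # additive identity
```

As in the study, the residual component is the natural input for a regression against meteorological drivers, since the deterministic seasonal signal has already been removed.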

  8. Identification procedures for the charge-controlled nonlinear noise model of microwave electron devices

    Science.gov (United States)

    Filicori, Fabio; Traverso, Pier Andrea; Florian, Corrado; Borgarino, Mattia

    2004-05-01

    The basic features of the recently proposed Charge-Controlled Non-linear Noise (CCNN) model for the prediction of low-to-high-frequency noise up-conversion in electron devices under large-signal RF operation are summarized. It is shown that the different noise generation phenomena within the device can be described by four equivalent noise sources, which are connected at the ports of a "noiseless" device model and are non-linearly controlled by the time-varying instantaneous values of the intrinsic device voltages. For the empirical identification of the voltage-controlled equivalent noise sources, different possible characterization procedures are discussed, based not only on conventional low-frequency noise data but also on different types of noise measurements carried out under large-signal RF operating conditions. As an example of application, the measurement-based identification of the CCNN model for a GaInP heterojunction bipolar microwave transistor is presented. Preliminary validation results show that the proposed model can describe with adequate accuracy not only the low-frequency noise of the HBT, but also its phase-noise performance in a prototype VCO implemented using the same monolithic GaAs technology.

  9. Role of Computational Modelling in Planning and Executing Interventional Procedures for Congenital Heart Disease.

    Science.gov (United States)

    Slesnick, Timothy C

    2017-09-01

    Increasingly, computational modelling and numerical simulations are used to help plan complex surgical and interventional cardiovascular procedures in children and young adults with congenital heart disease. From its origins more than 30 years ago, surgical planning with analysis of flow hemodynamics and energy loss/efficiency has helped design and implement many modifications to existing techniques. On the basis of patient-specific medical imaging, surgical planning allows accurate model production that can then be manipulated in a virtual surgical environment, with the proposed solutions finally tested with advanced computational fluid dynamics to evaluate the results. Applications include a broad range of congenital heart disease, including patients with single-ventricle anatomy undergoing staged palliation, those with arch obstruction, with double outlet right ventricle, or with tetralogy of Fallot. In the present work, we focus on clinical applications of this exciting field. We describe the framework for these techniques, including brief descriptions of the engineering principles applied and the interaction between "benchtop" data with medical decision-making. We highlight some early insights learned from pioneers over the past few decades, including refinements in Fontan baffle geometries and configurations. Finally, we offer a glimpse into exciting advances that are presently being explored, including use of modelling for transcatheter interventions. In this era of personalized medicine, computational modelling and surgical planning allows patient-specific tailoring of interventions to optimize clinical outcomes. Copyright © 2017 Canadian Cardiovascular Society. Published by Elsevier Inc. All rights reserved.

  10. Theory and procedures for finding a correct kinetic model for the bacteriorhodopsin photocycle.

    Science.gov (United States)

    Hendler, R W; Shrager, R; Bose, S

    2001-04-26

    In this paper, we present the implementation and results of a new methodology based on linear algebra. The theory behind these methods is covered in detail in the Supporting Information, available electronically (Shrager and Hendler). In brief, the methods presented search through all possible forward sequential submodels in order to find candidates that can be used to construct a complete model for the BR photocycle. The methodology is limited to forward sequential models; if no such models are compatible with the experimental data, none will be found. The procedures apply objective tests and filters to eliminate possibilities that cannot be correct, thus cutting the total number of candidate sequences to be considered. In the current application, which uses six exponentials, the total number of sequences was cut from 1950 to 49. The remaining sequences were further screened using known experimental criteria. The approach led to a solution consisting of a pair of sequences: one with five exponentials, BR* → L(f) → M(f) → N → O → BR, and one with three exponentials, BR* → L(s) → M(s) → BR. The deduced complete kinetic model for the BR photocycle is thus either a single photocycle branched at the L intermediate or a pair of two parallel photocycles. Reasons for preferring the parallel photocycles are presented. Synthetic data constructed on the basis of the parallel photocycles were indistinguishable from the experimental data in a number of analytical tests that were applied.
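A forward sequential scheme such as BR* → L → M → ... is a chain of first-order steps governed by dC/dt = K·C, whose solution is the multi-exponential mixture these methods fit. A sketch of solving such a chain by eigendecomposition of the rate matrix follows; the three-state scheme and rate constants are invented for illustration, not the BR photocycle's actual values.

```python
# Solve a toy forward sequential kinetic scheme A -> B -> C by
# eigendecomposition of the rate matrix (hypothetical rate constants).
import numpy as np

k1, k2 = 2.0, 1.0
K = np.array([[-k1, 0.0, 0.0],
              [ k1, -k2, 0.0],
              [0.0,  k2, 0.0]])          # dC/dt = K @ C; columns sum to zero

vals, vecs = np.linalg.eig(K)            # eigenvalues are the exponential rates

def concentrations(t, c0=np.array([1.0, 0.0, 0.0])):
    """C(t) = V diag(exp(lambda t)) V^-1 C0: matrix exponential via eig."""
    return (vecs @ np.diag(np.exp(vals * t)) @ np.linalg.inv(vecs) @ c0).real

c = concentrations(0.5)
assert np.isclose(c.sum(), 1.0)          # mass conservation
assert np.isclose(c[0], np.exp(-k1 * 0.5))   # first species decays as exp(-k1 t)
```

The number of distinct nonzero eigenvalues equals the number of exponentials observable in the data, which is why a six-exponential fit constrains which sequential submodels remain candidates.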

  11. A Novel Hyperbolization Procedure for The Two-Phase Six-Equation Flow Model

    Energy Technology Data Exchange (ETDEWEB)

    Samet Y. Kadioglu; Robert Nourgaliev; Nam Dinh

    2011-10-01

    We introduce a novel approach for the hyperbolization of the well-known two-phase six-equation flow model. The six-equation model has been frequently used in many two-phase flow applications, such as bubbly fluid flows in nuclear reactors. One major drawback of this model is that it can be arbitrarily non-hyperbolic, resulting in difficulties such as numerical instability. Non-hyperbolic behavior is associated with complex eigenvalues of the characteristic matrix of the system. Complex eigenvalues are often due to certain flow parameter choices, such as the definition of the interfacial pressure terms. In our method, we prevent the characteristic matrix from acquiring complex eigenvalues by fine-tuning the interfacial pressure terms with an iterative procedure. In this way, the characteristic matrix possesses all real eigenvalues, meaning that the characteristic wave speeds are all real and the overall two-phase flow model therefore becomes hyperbolic. The main advantage of this is that one can apply less diffusive, highly accurate, high-resolution numerical schemes that often rely on explicit calculations of real eigenvalues. We note that existing non-hyperbolic models are discretized mainly with low-order, highly dissipative numerical techniques in order to avoid stability issues.
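The idea of iterating on an interfacial-pressure-like parameter until the characteristic matrix has only real eigenvalues can be illustrated on a toy 2×2 system. The matrix and the parameter sweep below are invented for illustration; they are not the six-equation model's actual characteristic matrix.

```python
# Hyperbolicity check via eigenvalues of a toy characteristic matrix.
import numpy as np

def is_hyperbolic(A, tol=1e-12):
    """A quasi-linear system is hyperbolic when its characteristic
    matrix has only real eigenvalues (real wave speeds)."""
    return bool(np.all(np.abs(np.linalg.eigvals(A).imag) < tol))

def characteristic_matrix(p):
    # Toy 2x2 matrix with eigenvalues +/- sqrt(p): complex for p < 0,
    # real for p >= 0. Here p mimics an interfacial pressure correction.
    return np.array([[0.0, 1.0],
                     [p,   0.0]])

# Iteratively adjust the correction until the system becomes hyperbolic.
p = -1.0
while not is_hyperbolic(characteristic_matrix(p)):
    p += 0.25
print(p)   # first tested value for which all eigenvalues are real
```

In the actual method the tuned quantity is the interfacial pressure term of the six-equation model, but the stopping criterion is the same: all eigenvalues of the characteristic matrix must be real.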

  12. SilMush: A procedure for modeling of the geochemical evolution of silicic magmas and granitic rocks

    Science.gov (United States)

    Hertogen, Jan; Mareels, Joyce

    2016-07-01

    A boundary layer crystallization modeling program is presented that specifically addresses the chemical fractionation in silicic magma systems and the solidification of plutonic bodies. The model is a Langmuir (1989) type approach and does not invoke crystal settling in high-viscosity silicic melts. The primary aim is to model a granitic rock as a congealed crystal-liquid mush, and to integrate major element and trace element modeling. The procedure allows for some exploratory investigation of the exsolution of H2O-fluids and of the fluid/melt partitioning of trace elements. The procedure is implemented as a collection of subroutines for the MS Excel spreadsheet environment and is coded in the Visual Basic for Applications (VBA) language. To increase the flexibility of the modeling, the procedure is based on discrete numeric process simulation rather than on solution of continuous differential equations. The program is applied to a study of the geochemical variation within and among three granitic units (Senones, Natzwiller, Kagenfels) from the Variscan Northern Vosges Massif, France. The three units cover the compositional range from monzogranite, over syenogranite to alkali-feldspar granite. An extensive set of new major element and trace element data is presented. Special attention is paid to the essential role of accessory minerals in the fractionation of the Rare Earth Elements. The crystallization model is able to reproduce the essential major and trace element variation trends in the data sets of the three separate granitic plutons. The Kagenfels alkali-feldspar leucogranite couples very limited variation in major element composition to a considerable and complex variation of trace elements. The modeling results can serve as a guide for the reconstruction of the emplacement sequence of petrographically distinct units. Although the modeling procedure essentially deals with geochemical fractionation within a single pluton, the modeling results bring up a

  13. Experimental modeling of the chemical remanent magnetization and Thellier procedure on titanomagnetite-bearing basalts

    Science.gov (United States)

    Gribov, S. K.; Dolotov, A. V.; Shcherbakov, V. P.

    2017-03-01

    The results of experimental studies on the creation of chemical and partial thermal remanent magnetizations (or their combination), imparted at the initial stage of the laboratory oxidation of the primary magmatic titanomagnetites (Tmts) contained in the rock, are presented. To create chemical remanent magnetization, samples of recently erupted Kamchatka basalts were subjected to 200-h annealing in air in the temperature interval from 400 to 500°C in a magnetic field on the order of the Earth's magnetic field. After creation of this magnetization, laboratory modeling of the Thellier-Coe and Wilson-Burakov paleointensity determination procedures was conducted on these samples. It is shown that when the primary magnetization is chemical, created at the initial stage of oxidation, the paleointensity determined by these techniques is underestimated by 15-20% relative to its true value.

  14. A model for analysing factors which may influence quality management procedures in higher education

    Directory of Open Access Journals (Sweden)

    Cătălin MAICAN

    2015-12-01

    In all universities, the Office for Quality Assurance defines the procedure for assessing the performance of the teaching staff, with a view to establishing students' perception of the teachers' activity in terms of the quality of the teaching process, of the relationship with the students, and of the assistance provided for learning. The present paper aims to create a combined evaluation model based on Data Mining statistical methods: starting from the findings of the evaluations teachers gave students, and using cluster analysis and discriminant analysis, we identified the subjects which produced significant differences between students' grades; these subjects were subsequently evaluated by the students. The results of these analyses allowed the formulation of measures for enhancing the quality of the evaluation process.

  15. Precipitation projections under GCMs perspective and Turkish Water Foundation (TWF) statistical downscaling model procedures

    Science.gov (United States)

    Dabanlı, İsmail; Şen, Zekai

    2017-02-01

    The statistical climate downscaling model of the Turkish Water Foundation (TWF) is further developed and applied to a set of monthly precipitation records. The model is structured in two phases: spatial (regional) and temporal downscaling of global circulation model (GCM) scenarios. The TWF model uses the regional dependence function (RDF) for the spatial structure and a Markov whitening process (MWP) for the temporal characteristics of the records to set projections. The impact of climate change on monthly precipitation is studied by downscaling the Intergovernmental Panel on Climate Change Special Report on Emission Scenarios (IPCC-SRES) A2 and B2 emission scenarios from the Max Planck Institute (EH40PYC) and the Hadley Centre (HadCM3). The main purposes are to explain the TWF statistical climate downscaling model procedures and to present the validation tests, which rate the model as "very good" at all stations except one (Suhut) in the Akarcay basin, in the west-central part of Turkey. Even though the validation score is slightly lower at the Suhut station, the results are still "satisfactory." It is therefore possible to say that the TWF model has reasonably acceptable skill for highly accurate estimation with respect to the standard deviation ratio (SDR), Nash-Sutcliffe efficiency (NSE), and percent bias (PBIAS) criteria. Based on the validated model, precipitation predictions are generated from 2011 to 2100 using a 30-year reference observation period (1981-2010). The precipitation arithmetic average and standard deviation have less than 5% error for the EH40PYC and HadCM3 SRES (A2 and B2) scenarios.
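The validation criteria named above (SDR, NSE, PBIAS) have standard formulas; a sketch follows. Note that the sign convention for PBIAS varies between references, and SDR is taken here simply as the ratio of simulated to observed standard deviation.

```python
# Standard hydrologic validation metrics: a sketch of the criteria named
# in the abstract (SDR, NSE, PBIAS). PBIAS sign conventions differ by source.
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit; 0 means the model is
    no better than predicting the observed mean."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def pbias(obs, sim):
    """Percent bias; 0 is ideal."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 100.0 * np.sum(sim - obs) / np.sum(obs)

def sdr(obs, sim):
    """Ratio of simulated to observed standard deviation; 1 is ideal."""
    return float(np.std(sim) / np.std(obs))

obs = [10.0, 20.0, 30.0, 40.0]          # toy monthly precipitation values
assert nse(obs, obs) == 1.0             # a perfect simulation scores 1
assert pbias(obs, obs) == 0.0
assert sdr(obs, obs) == 1.0
```

Rating categories such as "very good" or "satisfactory" are typically assigned by thresholding these metrics, which is how a downscaling model's skill can be summarized per station.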

  16. Long-term results of single-procedure catheter ablation for atrial fibrillation in pre- and post-menopausal women

    Institute of Scientific and Technical Information of China (English)

    Tao LIN; Chang-Sheng MA; Jian-Zeng DONG; Xing DU; Rong BAI; Ying-Wei CHEN; Rong-Hui YU; De-Yong LONG; Ri-Bo TANG; Cai-Hua SANG; Song-Nan LI

    2014-01-01

    Objectives To address whether menopause affects the outcome of catheter ablation (CA) for atrial fibrillation (AF) by comparing the safety and long-term outcome of a single procedure in pre- and post-menopausal women. Methods A total of 743 female patients who underwent a single CA procedure for drug-refractory AF were retrospectively analyzed. The differences in clinical presentation and outcomes of CA for AF between the pre-menopausal women (PreM group, 94 patients, 12.7%) and the post-menopausal women (PostM group, 649 patients, 87.3%) were assessed. Results The patients in the PreM group were younger (P<0.001) and less likely to have hypertension (P<0.001) and diabetes (P=0.005) than those in the PostM group. The two groups were similar with regard to the proportion of concomitant mitral valve regurgitation and coronary artery disease, left atrium dimensions, and left ventricular ejection fraction. The overall rate of complications related to AF ablation was similar in both groups (P=0.385). After 43 (16-108) months of follow-up, the success rate of ablation was 54.3% in the PreM group and 54.2% in the PostM group (P=0.842). The overall freedom from atrial tachyarrhythmia recurrence was similar in both groups. Menopause was not found to be an independent predictor of the recurrence of atrial tachyarrhythmia. Conclusions The long-term outcomes of single-procedure CA for AF are similar in pre- and post-menopausal women. The results indicate that CA of AF appears to be as safe and effective in pre-menopausal women as in post-menopausal women.

  17. The Chain-Link Fence Model: A Framework for Creating Security Procedures

    OpenAIRE

    Houghton, Robert F.

    2013-01-01

    A long-standing problem in information technology security is how to help reduce the security footprint. Many specific proposals exist to address specific problems in information technology security. Most information technology solutions need to be repeatable throughout the course of an information systems lifecycle. The Chain-Link Fence Model is a new model for creating and implementing information technology procedures. This model was validated by two different methods: the first being int...

  18. A network society communicative model for optimizing the Refugee Status Determination (RSD) procedures

    Directory of Open Access Journals (Sweden)

    Andrea Pacheco Pacífico

    2013-01-01

    This article recommends a new way to improve Refugee Status Determination (RSD) procedures by proposing a network society communicative model based on active involvement and dialogue among all implementing partners. This model, named after proposals from Castells, Habermas, Apel, Chimni, and Betts, would be mediated by the United Nations High Commissioner for Refugees (UNHCR), whose role would be modeled after the practice of the International Committee of the Red Cross (ICRC).

  19. MODELING IN MAPLE AS A MEANS OF INVESTIGATING FUNDAMENTAL CONCEPTS AND PROCEDURES IN LINEAR ALGEBRA

    Directory of Open Access Journals (Sweden)

    Vasil Kushnir

    2016-05-01

    The article is devoted to binary technology and the "technology of fundamental training." Binary training refers to the simultaneous teaching of mathematics and computer science, for example differential equations and Maple, or linear algebra and Maple; no separate traditional Maple course is taught. The use of Maple technology in teaching mathematics relies on such fundamental computer-science concepts as algorithm, program, linear program, loop, branching, relational operators, etc. For this reason, only a certain system of command operators in Maple is considered, namely those necessary for studying the fundamental concepts of linear algebra and differential equations in the Maple environment. The name "technology of fundamental training" reflects the study of fundamental mathematical concepts and of the procedures that express the properties of these concepts in the Maple environment. This article deals with the study of complex fundamental concepts of linear algebra (the determinant of a matrix and the algorithm of its calculation, the characteristic polynomial and the eigenvalues of a matrix, the canonical form of the characteristic matrix, the eigenvectors of a matrix, the elementary divisors of the characteristic matrix, etc.), which are discussed only briefly in the corresponding courses, and sometimes not at all, although they are important for linear systems of differential equations, asymptotic methods for solving differential equations, and systems of linear equations. Moreover, the complex and voluminous procedures for finding these linear algebra objects that are embedded in Maple can be performed with a simple command operator. An especially important issue is reducing a matrix to canonical form: matrix functions are effectively reduced to functions of a diagonal matrix or of a matrix in Jordan canonical form. These matrices are used to raise a square matrix to a power, to extract the roots of the n
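
Two of the fundamental concepts discussed above, the characteristic polynomial and the eigenvalues of a matrix, can be illustrated outside Maple as well. A minimal sketch (in Python rather than Maple, for a hypothetical 2x2 matrix):

```python
# Characteristic polynomial and eigenvalues of a 2x2 matrix, done by hand:
# det(M - lambda*I) = lambda^2 - tr(M)*lambda + det(M).
from math import sqrt

def char_poly_2x2(m):
    """Coefficients (1, -trace, det) of lambda^2 - tr(m)*lambda + det(m)."""
    (a, b), (c, d) = m
    return 1, -(a + d), a * d - b * c

def eigenvalues_2x2(m):
    _, p, q = char_poly_2x2(m)   # lambda^2 + p*lambda + q = 0
    disc = p * p - 4 * q
    if disc < 0:
        raise ValueError("complex eigenvalues; not handled in this sketch")
    r = sqrt(disc)
    return (-p - r) / 2, (-p + r) / 2

M = [[4, 1],
     [2, 3]]
print(char_poly_2x2(M))    # (1, -7, 10), i.e. lambda^2 - 7*lambda + 10
print(eigenvalues_2x2(M))  # roots 2.0 and 5.0
```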

  20. Off-site toxic consequence assessment: a simplified modeling procedure and case study.

    Science.gov (United States)

    Guarnaccia, Joe; Hoppe, Tom

    2008-11-15

    An assessment of off-site exposure from spills/releases of toxic chemicals can be conducted by compiling site-specific operational, geographic, demographic, and meteorological data and by using screening-level public-domain modeling tools (e.g., RMP Comp, ALOHA and DEGADIS). In general, the analysis is confined to the following: event-based simulations (allowing the use of known, constant atmospheric conditions), known receptor distances (on the order of miles or less), short time scales for the distances considered (on the order of tens of minutes or less), gently sloping rough terrain, dense and neutrally buoyant gas dispersion, known chemical inventory and infrastructure (used to define the source term), and a known toxic endpoint (which defines significance). While screening-level models are relatively simple to use, care must be taken to ensure that the results are meaningful. This approach allows one to assess risk from a catastrophic release (e.g., via terrorism) or from plausible release scenarios (related to standard operating procedures and industry standards). In addition, given the receptor distance and toxic endpoint, the model can be used to predict the critical spill volume that would create significant off-site risk. This information can then be used to assess site storage and operation parameters and to determine the most economical and effective risk reduction measures to be applied.
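
For the neutrally buoyant case, the simplest of the screening estimates described above is the standard Gaussian plume equation. A minimal sketch with hypothetical numbers (the dispersion coefficients sigma_y and sigma_z are assumed inputs here; real tools such as ALOHA derive them from atmospheric stability class and downwind distance):

```python
# Ground-level centerline concentration for a continuous, neutrally
# buoyant, ground-level release (Gaussian plume with ground reflection).
from math import pi

def plume_centerline_conc(Q, u, sigma_y, sigma_z):
    """Concentration in g/m^3. Q: emission rate (g/s), u: wind speed (m/s),
    sigma_y/sigma_z: lateral/vertical dispersion coefficients (m)."""
    return Q / (pi * u * sigma_y * sigma_z)

# Hypothetical release: 100 g/s, 3 m/s wind, sigma_y = 30 m, sigma_z = 15 m.
c = plume_centerline_conc(100.0, 3.0, 30.0, 15.0)
print(c)  # compare this value against the chemical's toxic endpoint
```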

  1. FRP-RC Beam in Shear: Mechanical Model and Assessment Procedure for Pseudo-Ductile Behavior

    Directory of Open Access Journals (Sweden)

    Floriana Petrone

    2014-07-01

    This work deals with the development of a mechanics-based shear model for reinforced concrete (RC) elements strengthened in shear with fiber-reinforced polymer (FRP) and a design/assessment procedure capable of predicting the failure sequence of the resisting elements: the yielding of existing transverse steel ties and the debonding of FRP sheets/strips, while checking the corresponding compressive stress in the concrete. The research aims at the definition of an accurate capacity equation, consistent with the requirement of pseudo-ductile shear behavior of structural elements, that is, transverse steel ties yield before FRP debonding and concrete crushing. For the purpose of validating the proposed model, an extended parametric study and a comparison against experimental results have been conducted: it is shown that the commonly accepted rule of taking the shear capacity of RC members strengthened in shear with FRP as the sum of the maximum contributions of both FRP and stirrups can lead to an unsafe overestimation of the shear capacity. This issue has been pointed out by some authors when comparing experimental shear capacity values with theoretical ones, but without a convincing explanation. In this sense, the proposed model also represents a valid instrument for better understanding the mechanical behavior of FRP-RC beams in shear and for calculating their actual shear capacity.

  2. A computer touch screen system and training procedure for use with primate infants: Results from pigtail monkeys (Macaca nemestrina).

    Science.gov (United States)

    Mandell, Dorothy J; Sackett, Gene P

    2008-03-01

    Computerized cognitive and perceptual testing has resulted in many advances towards understanding adult brain-behavior relations across a variety of abilities and species. However, there has been little migration of this technology to the assessment of very young primate subjects. We describe a training procedure and software that was developed to teach infant monkeys to interact with a touch screen computer. Eighteen infant pigtail macaques began training at 90 postnatal days and five began at 180 postnatal days. All animals were trained to reliably touch a stimulus presented on a computer screen and no significant differences were found between the two age groups. The results demonstrate the feasibility of using computers to assess cognitive and perceptual abilities early in development.

  3. Analyzing longitudinal data with the linear mixed models procedure in SPSS.

    Science.gov (United States)

    West, Brady T

    2009-09-01

    Many applied researchers analyzing longitudinal data share a common misconception: that specialized statistical software is necessary to fit hierarchical linear models (also known as linear mixed models [LMMs], or multilevel models) to longitudinal data sets. Although several specialized statistical software programs of high quality are available that allow researchers to fit these models to longitudinal data sets (e.g., HLM), rapid advances in general purpose statistical software packages have recently enabled analysts to fit these same models when using preferred packages that also enable other more common analyses. One of these general purpose statistical packages is SPSS, which includes a very flexible and powerful procedure for fitting LMMs to longitudinal data sets with continuous outcomes. This article aims to present readers with a practical discussion of how to analyze longitudinal data using the LMMs procedure in the SPSS statistical software package.

  4. RESULTS OF INTERBANK EXCHANGE RATES FORECASTING USING STATE SPACE MODEL

    Directory of Open Access Journals (Sweden)

    Muhammad Kashif

    2008-07-01

    This study evaluates the performance of three alternative models for forecasting the daily interbank exchange rate of the U.S. dollar measured in Pak rupees. Simple ARIMA models and more complex models, such as GARCH-type models and a state space model, are discussed and compared. Four different measures are used to evaluate the forecasting accuracy. The main result is that the state space model provides the best performance among all the models.
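
The abstract does not list its four accuracy measures; as a sketch, here are four commonly used ones (MAE, RMSE, MAPE, and Theil's U against a naive "no change" forecast), applied to hypothetical exchange-rate data:

```python
# Common forecast-accuracy measures on hypothetical daily PKR/USD rates.
from math import sqrt

def mae(actual, forecast):
    return sum(abs(a - f) for a, f in zip(actual, forecast)) / len(actual)

def rmse(actual, forecast):
    return sqrt(sum((a - f) ** 2 for a, f in zip(actual, forecast)) / len(actual))

def mape(actual, forecast):
    return 100 * sum(abs((a - f) / a) for a, f in zip(actual, forecast)) / len(actual)

def theils_u(actual, forecast):
    # RMSE of the model divided by RMSE of the naive "no change" forecast.
    naive = actual[:-1]
    return rmse(actual[1:], forecast[1:]) / rmse(actual[1:], naive)

actual = [60.10, 60.25, 60.18, 60.40, 60.55]    # hypothetical rates
forecast = [60.05, 60.20, 60.22, 60.35, 60.50]  # hypothetical forecasts
print(mae(actual, forecast), rmse(actual, forecast), mape(actual, forecast))
print(theils_u(actual, forecast))  # below 1 means better than naive
```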

  5. An incremental procedure model for e-learning projects at universities

    Directory of Open Access Journals (Sweden)

    Pahlke, Friedrich

    2006-11-01

    E-learning projects at universities are produced under different conditions than in industry. The main characteristic of many university projects is that they are realized essentially as a solo effort. In contrast, in private industry the different interdisciplinary skills that are necessary for the development of e-learning are typically supplied by a multimedia agency. A specific procedure tailored to use at universities is therefore required to facilitate mastering the amount and complexity of the tasks. In this paper an incremental procedure model is presented, which describes the proceeding in every phase of the project. It allows a high degree of flexibility and emphasizes the didactical concept rather than the technical implementation. In the second part, we illustrate the practical use of the theoretical procedure model based on the project "Online training in Genetic Epidemiology".

  6. Proposed Core Competencies and Empirical Validation Procedure in Competency Modeling: Confirmation and Classification.

    Science.gov (United States)

    Baczyńska, Anna K; Rowiński, Tomasz; Cybis, Natalia

    2016-01-01

    Competency models provide insight into key skills which are common to many positions in an organization. Moreover, there is a range of competencies that is used by many companies. Researchers have developed core competency terminology to underline their cross-organizational value. The article presents a theoretical model of core competencies consisting of two main higher-order competencies called performance and entrepreneurship. Each of them consists of three elements: the performance competency includes cooperation, organization of work and goal orientation, while entrepreneurship includes innovativeness, calculated risk-taking and pro-activeness. However, there is a lack of empirical validation of competency concepts in organizations, and this would seem crucial for obtaining reliable results from organizational research. We propose a two-step empirical validation procedure: (1) confirmatory factor analysis, and (2) classification of employees. The sample consisted of 636 respondents (M = 44.5; SD = 15.1). Participants were administered a questionnaire developed for the study purpose. The reliability, measured by Cronbach's alpha, ranged from 0.60 to 0.83 for the six scales. Next, we tested the model using confirmatory factor analysis. The two separate, single models of performance and entrepreneurial orientations fit the data quite well, while a complex model based on the two single concepts needs further research. In the classification of employees based on the two higher-order competencies we obtained four main groups of employees. Their profiles relate to those found in the literature, including so-called niche finders and top performers. Some proposals for organizations are discussed.
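
The reliability statistic reported above, Cronbach's alpha, has a simple closed form: alpha = k/(k-1) * (1 - sum of item variances / variance of totals). A minimal sketch with hypothetical Likert-scale responses:

```python
# Cronbach's alpha for one scale, from hypothetical item scores.
def variance(xs):
    """Sample variance (n - 1 denominator)."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def cronbach_alpha(items):
    """items: one list of scores per item, same respondents in each list."""
    k = len(items)
    item_vars = sum(variance(item) for item in items)
    totals = [sum(scores) for scores in zip(*items)]  # per-respondent total
    return (k / (k - 1)) * (1 - item_vars / variance(totals))

# Three items, five respondents (hypothetical 1-5 Likert scores):
items = [
    [4, 3, 5, 2, 4],
    [4, 2, 5, 3, 4],
    [5, 3, 4, 2, 5],
]
print(round(cronbach_alpha(items), 2))
```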

  7. Minkowski momentum resulting from a vacuum-medium mapping procedure, and a brief review of Minkowski momentum experiments

    Science.gov (United States)

    Brevik, Iver

    2017-02-01

    A discussion is given on the interpretation and physical importance of the Minkowski momentum in macroscopic electrodynamics (essential for the Abraham-Minkowski problem). We focus on the following two facets: (1) Adopting a simple dielectric model where the refractive index n is constant, we demonstrate by means of a mapping procedure how the electromagnetic field in a medium can be mapped into a corresponding field in vacuum. This mapping was presented many years ago (Brevik and Lautrup, 1970), but is apparently not well known. A characteristic property of this procedure is that it shows how naturally the Minkowski energy-momentum tensor fits into the canonical formalism. Especially the spacelike character of the electromagnetic total four-momentum for a radiation field (implying negative electromagnetic energy in some inertial frames), so strikingly demonstrated in the Cherenkov effect, is worth attention. (2) Our second objective is to give a critical analysis of some recent experiments on electromagnetic momentum. Care must here be taken in the interpretations: it is easy to be misled and conclude that an experiment is important for the energy-momentum problem, while what is demonstrated experimentally is merely the action of the Abraham-Minkowski force acting in surface layers or inhomogeneous regions. The Abraham-Minkowski force is common for the two energy-momentum tensors and carries no information about field momentum. As a final item, we propose an experiment that might show the existence of the Abraham force at high frequencies. This would eventually be a welcome optical analogue to the classic low-frequency 1975 Lahoz-Walker experiment.
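
For reference, the two momentum assignments at the heart of the Abraham-Minkowski problem can be written down explicitly; these are standard textbook expressions, not taken from the abstract itself:

```latex
% For a light pulse of energy U in a non-dispersive medium of refractive
% index n, the Minkowski and Abraham momenta are
p_{\mathrm{M}} = \frac{nU}{c}, \qquad p_{\mathrm{A}} = \frac{U}{nc},
% and the Abraham force density, which vanishes for stationary fields, is
\mathbf{f}_{\mathrm{A}} = \frac{n^{2}-1}{c^{2}}\,
  \frac{\partial}{\partial t}\left(\mathbf{E}\times\mathbf{H}\right).
```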

  8. Numerical identification procedure between a micro-Cauchy model and a macro-second gradient model for planar pantographic structures

    Science.gov (United States)

    Giorgio, Ivan

    2016-08-01

    In order to design the microstructure of metamaterials showing high toughness in extension (a property to be shared with muscles), it has been recently proposed (Dell'Isola et al. in Z Angew Math Phys 66(6):3473-3498, 2015) to consider pantographic structures. It is possible to model such structures at a suitably small length scale (resolving in detail the interconnecting pivots/cylinders) using a standard Cauchy first gradient theory. However, the computational costs of this modelling choice do not allow for the study of more complex mechanical systems including, for instance, many pantographic substructures. The microscopic model considered here is a quadratic isotropic Saint-Venant first gradient continuum including geometric nonlinearities and characterized by two Lamé parameters. The introduced macroscopic two-dimensional model for pantographic sheets is characterized by a deformation energy quadratic both in the first and second gradient of placement. However, as underlined in Dell'Isola et al. (Proc R Soc Lond A 472(2185):20150790, 2016), the second gradient stiffness needs to depend on the first gradient of placement if configurations with large deformations and large displacements are to be described. The numerical identification procedure presented in this paper consists in fitting the macro-constitutive parameters using several numerical simulations performed with the micro-model. The parameters obtained by the best-fit identification in a few deformation problems fit very well in many others as well, showing that the proposed reduced model is suitable for obtaining an effective model at a significantly lower computational effort. The presented numerical evidence suggests that a rigorous mathematical homogenization result most likely holds.

  9. A new experimental procedure for incorporation of model contaminants in polymer hosts

    NARCIS (Netherlands)

    Papaspyrides, C.D.; Voultzatis, Y.; Pavlidou, S.; Tsenoglou, C.; Dole, P.; Feigenbaum, A.; Paseiro, P.; Pastorelli, S.; Cruz Garcia, C. de la; Hankemeier, T.; Aucejo, S.

    2005-01-01

    A new experimental procedure for incorporation of model contaminants in polymers was developed as part of a general scheme for testing the efficiency of functional barriers in food packaging. The aim was to progressively pollute polymers in a controlled fashion up to a high level in the range of 100


  11. Studying Differential Item Functioning via Latent Variable Modeling: A Note on a Multiple-Testing Procedure

    Science.gov (United States)

    Raykov, Tenko; Marcoulides, George A.; Lee, Chun-Lung; Chang, Chi

    2013-01-01

    This note is concerned with a latent variable modeling approach for the study of differential item functioning in a multigroup setting. A multiple-testing procedure that can be used to evaluate group differences in response probabilities on individual items is discussed. The method is readily employed when the aim is also to locate possible…

  12. User Acceptance of YouTube for Procedural Learning: An Extension of the Technology Acceptance Model

    Science.gov (United States)

    Lee, Doo Young; Lehto, Mark R.

    2013-01-01

    The present study was framed using the Technology Acceptance Model (TAM) to identify determinants affecting behavioral intention to use YouTube. Most importantly, this research emphasizes the motives for using YouTube, which is notable given its extrinsic task goal of being used for procedural learning tasks. Our conceptual framework included two…

  13. Extremity and eye lens doses in interventional radiology and cardiology procedures: first results of the ORAMED project.

    Science.gov (United States)

    Domienik, J; Brodecki, M; Carinou, E; Donadille, L; Jankowski, J; Koukorava, C; Krim, S; Nikodemova, D; Ruiz-Lopez, N; Sans-Mercé, M; Struelens, L; Vanhavere, F

    2011-03-01

    The main objective of WP1 of the ORAMED (Optimization of RAdiation protection for MEDical staff) project is to obtain a set of standardised data on extremity and eye lens doses for staff in interventional radiology (IR) and cardiology (IC) and to optimise staff protection. A coordinated measurement program in different hospitals in Europe will help towards this direction. This study aims at analysing the first results of the measurement campaign performed in IR and IC procedures in 34 European hospitals. The highest doses were found for pacemakers, renal angioplasties and embolisations. Left finger and wrist seem to receive the highest extremity doses, while the highest eye lens doses are measured during embolisations. Finally, it was concluded that it is difficult to find a general correlation between kerma area product and extremity or eye lens doses.

  14. A Numerical Procedure for Model Identifiability Analysis Applied to Enzyme Kinetics

    DEFF Research Database (Denmark)

    Van Daele, Timothy; Van Hoey, Stijn; Gernaey, Krist

    2015-01-01

    The proper calibration of models describing enzyme kinetics can be quite challenging. In the literature, different procedures are available to calibrate these enzymatic models in an efficient way. However, in most cases the model structure is already decided on prior to the actual calibration exercise, thereby bypassing the challenging task of model structure determination and identification. Parameter identification problems can thus lead to ill-calibrated models with low predictive power and large model uncertainty. Every calibration exercise should therefore be preceded by a proper model … and Pronzato (1997) and which can be easily set up for any type of model. In this paper the proposed approach is applied to the forward reaction rate of the enzyme kinetics proposed by Shin and Kim (1998). Structural identifiability analysis showed that no local structural model problems were occurring…

  15. Vertex Color Parameters in Procedural Animation of 3D Musaceae Vegetation Models

    Directory of Open Access Journals (Sweden)

    I Gede Ngurah Arya Indrayasa

    2017-02-01

    The use of vegetation in the film, video game, simulation, and architectural visualization industries is an important factor in producing more lifelike natural scenery. This study aims to determine the influence of vertex color on the wind effect in procedural 3D animation of Musaceae vegetation models, and to identify the vertex color parameters that produce realistic 3D animation of such models. The final goal is to establish whether changing the vertex color parameters can affect the shape of the procedural 3D animation of Musaceae vegetation, and how vertex color influences the wind effect in that animation. Based on observation and comparison of five vertex color samples, the results show that changing the vertex color parameters does affect the shape of the procedural 3D animation, and Sample No. 5 provides the vertex color parameters that produce realistic 3D animation of Musaceae vegetation models. Keywords: 3D, procedural animation, vegetation

  16. Combining Decision Diagrams and SAT Procedures for Efficient Symbolic Model Checking

    DEFF Research Database (Denmark)

    Williams, Poul Frederick; Biere, Armin; Clarke, Edmund M.

    2000-01-01

    …in the specification of a 16 bit multiplier. As opposed to Bounded Model Checking (BMC), our method is complete in practice. Our technique is based on a quantification procedure that allows us to eliminate quantifiers in Quantified Boolean Formulas (QBF). The basic step of this procedure is the up-one operation for BEDs. In addition we list a number of important optimizations to reduce the number of basic steps. In particular the optimization rule of quantification-by-substitution turned out to be very useful: exists x . (g /\ (x <-> f)) = g[f/x]. The rule is used (1) during fixed point iterations, (2) for deciding
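
The quantification-by-substitution rule can be checked by brute force over a truth table: the only value of x satisfying x <-> f is x = f, so the existential collapses to g with f substituted for x. A minimal sketch, with hypothetical boolean functions g and f (f must not depend on the quantified variable x):

```python
# Brute-force verification of: exists x . (g /\ (x <-> f)) == g[f/x]
from itertools import product

f = lambda x, y, z: y or z           # hypothetical; independent of x
g = lambda x, y, z: x and (y or not z)  # hypothetical

def exists_x(y, z):
    """exists x . (g /\ (x <-> f)) for the given free variables."""
    return any(g(x, y, z) and (x == f(x, y, z)) for x in (False, True))

def g_subst_f(y, z):
    """g[f/x]: substitute f for x inside g."""
    return g(f(False, y, z), y, z)

assert all(exists_x(y, z) == g_subst_f(y, z)
           for y, z in product((False, True), repeat=2))
print("rule verified for all assignments")
```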

  17. Detection Procedure for a Single Additive Outlier and Innovational Outlier in a Bilinear Model

    Directory of Open Access Journals (Sweden)

    Azami Zaharim

    2007-01-01

    A single outlier detection procedure for data generated from BL(1,1,1,1) models is developed. It is carried out in three stages. Firstly, the measures of impact of an IO and an AO, denoted by ω_IO and ω_AO respectively, are derived based on the least squares method. Secondly, test statistics and test criteria are defined for classifying an observation as an outlier of its respective type. Finally, a general single outlier detection procedure is presented to distinguish a particular type of outlier at a time point t.
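
As a generic illustration of the final stage described above (not the paper's BL(1,1,1,1) derivation), one can compute an outlier statistic at each time point and flag the maximum against a critical value. Here the model is simplified to an AR(1) with known coefficient, the measure of impact is the residual, and a robust (median/MAD) scale is used so the outlier does not inflate its own yardstick:

```python
# Single additive-outlier detection sketch on an AR(1) series.
from statistics import median

def detect_single_outlier(y, phi, critical=3.5):
    """Return (time index, statistic) of the largest outlier, or None."""
    residuals = [y[t] - phi * y[t - 1] for t in range(1, len(y))]
    med = median(residuals)
    mad = median(abs(r - med) for r in residuals) * 1.4826  # robust scale
    stats = [(abs(r - med) / mad, t + 1) for t, r in enumerate(residuals)]
    best_stat, best_t = max(stats)
    return (best_t, best_stat) if best_stat > critical else None

# Hypothetical series from y_t = 0.5*y_{t-1} + e_t with a spike at t = 6:
y = [0.0, 0.2, 0.3, 0.1, 0.2, 0.1, 5.0, 2.4, 1.3, 0.7]
print(detect_single_outlier(y, phi=0.5))
```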

  18. A Long-Term Memory Competitive Process Model of a Common Procedural Error. Part II: Working Memory Load and Capacity

    Science.gov (United States)

    2013-07-01

    Report by Franklin P. Tamborello, II (2013). Related publication: Tamborello, F. P., & Trafton, J. G. (2013). A long-term competitive process model of a common procedural error. In Proceedings of the 35th

  19. Longitudinal data analyses using linear mixed models in SPSS: concepts, procedures and illustrations.

    Science.gov (United States)

    Shek, Daniel T L; Ma, Cecilia M S

    2011-01-05

    Although different methods are available for the analyses of longitudinal data, analyses based on generalized linear models (GLM) are criticized as violating the assumption of independence of observations. Alternatively, linear mixed models (LMM) are commonly used to understand changes in human behavior over time. In this paper, the basic concepts surrounding LMM (or hierarchical linear models) are outlined. Although SPSS is a statistical analysis package commonly used by researchers, documentation on LMM procedures in SPSS is not thorough or user friendly. In view of this limitation, the related procedures for performing analyses based on LMM in SPSS are described. To demonstrate the application of LMM analyses in SPSS, findings based on six waves of data collected in the Project P.A.T.H.S. (Positive Adolescent Training through Holistic Social Programmes) in Hong Kong are presented.

  20. A step-by-step procedure for pH model construction in aquatic systems

    Directory of Open Access Journals (Sweden)

    A. F. Hofmann

    2007-10-01

    We present, by means of a simple example, a comprehensive step-by-step procedure to consistently derive a pH model of aquatic systems. As pH modeling is inherently complex, we make every step of the model generation process explicit, thus ensuring conceptual, mathematical, and chemical correctness. Summed quantities, such as total inorganic carbon and total alkalinity, and the influences of modeled processes on them are consistently derived. The model is subsequently reformulated until numerically and computationally simple dynamical solutions, like a variation of the operator splitting approach (OSA) and the direct substitution approach (DSA), are obtained. As several solution methods are pointed out, connections between previous pH modelling approaches are established. The final reformulation of the system according to the DSA allows for quantification of the influences of kinetic processes on the rate of change of proton concentration in models containing multiple biogeochemical processes. These influences are calculated including the effect of re-equilibration of the system due to a set of acid-base reactions in local equilibrium. This possibility of quantifying influences of modeled processes on the pH makes the end-product of the described model generation procedure a powerful tool for understanding the internal pH dynamics of aquatic systems.
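
A minimal instance of the kind of equilibrium problem such a pH model must solve: given total dissolved inorganic carbon (DIC) and total alkalinity (TA), find [H+] from the carbonate-system speciation. The sketch below uses bisection and rounded, assumed freshwater equilibrium constants (not values from the paper):

```python
# Solve the carbonate system for pH by bisection on [H+].
from math import log10

K1, K2, Kw = 10 ** -6.3, 10 ** -10.3, 10 ** -14.0  # assumed constants

def alkalinity(h, dic):
    # TA = [HCO3-] + 2[CO3--] + [OH-] - [H+], with the carbonate species
    # written as fractions of DIC at proton concentration h (mol/L).
    denom = h * h + K1 * h + K1 * K2
    hco3 = dic * K1 * h / denom
    co3 = dic * K1 * K2 / denom
    return hco3 + 2 * co3 + Kw / h - h

def solve_ph(dic, ta, lo=1e-12, hi=1e-2):
    # Alkalinity decreases monotonically with h, so bisection is safe.
    for _ in range(100):
        mid = (lo + hi) / 2
        if alkalinity(mid, dic) > ta:
            lo = mid   # root lies at larger h
        else:
            hi = mid
    return -log10((lo + hi) / 2)

print(solve_ph(dic=2.0e-3, ta=2.2e-3))  # pH of a hypothetical water sample
```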

  1. An iterative statistical tolerance analysis procedure to deal with linearized behavior models

    Institute of Scientific and Technical Information of China (English)

    Antoine DUMAS; Jean-Yves DANTAN; Nicolas GAYTON; Thomas BLES; Robin LOEBL

    2015-01-01

    Tolerance analysis consists of analyzing the impact on mechanism behavior of the variations due to the manufacturing process. The goal is to predict the mechanism's quality level at the design stage. The technique involves computing probabilities of failure of the mechanism in a mass production process. The various analysis methods have to consider the components' variations as random variables and the worst configuration of gaps for over-constrained systems. This treatment varies as a function of the type of mechanism behavior and is realized by an optimization scheme combined with a Monte Carlo simulation. To simplify the optimization step, it is necessary to linearize the mechanism behavior into several parts. This study aims at analyzing the impact of the linearization strategy on the estimation of the probability of failure; a highly over-constrained mechanism with two pins and five cotters is used as an illustration. The purpose is to strike a balance among the model error caused by the linearization, the computing time, and the accuracy of the results. In addition, an iterative procedure is proposed for the assembly requirement to provide accurate results without running the entire Monte Carlo simulation.
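
The Monte Carlo step described above can be sketched in a toy form (this is not the paper's two-pin/five-cotter mechanism): component deviations are drawn as random variables, a linearized assembly gap is computed, and the probability of failure is the fraction of samples with a negative gap:

```python
# Toy Monte Carlo tolerance analysis with a linearized behavior model.
import random

random.seed(42)

def assembly_gap(dims):
    # Hypothetical linearized model: nominal clearance minus the
    # stack-up of three component deviations (mm).
    nominal_clearance = 0.30
    return nominal_clearance - (dims[0] + dims[1] + dims[2])

def probability_of_failure(n_samples=100_000):
    failures = 0
    for _ in range(n_samples):
        dims = [random.gauss(0.0, 0.05) for _ in range(3)]  # deviations
        if assembly_gap(dims) < 0:
            failures += 1
    return failures / n_samples

print(probability_of_failure())  # expected to be small but non-zero
```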

  2. Evaluation of relevant information for optimal reflector modeling through data assimilation procedures

    Directory of Open Access Journals (Sweden)

    Argaud Jean-Philippe

    2015-01-01

    The goal of this study is to determine how much information is needed to obtain a relevant parameter optimisation by data assimilation for physical models in neutronic diffusion calculations, and to determine which information best reaches the optimum accuracy at the cheapest cost. To evaluate the quality of the optimisation, we study the covariance matrix that represents the accuracy of the optimised parameter. This matrix is a classical output of the data assimilation procedure, and it is the main information about the accuracy and sensitivity of the optimal parameter determination. From these studies, we present some results collected from the neutronic simulation of nuclear power plants. On the basis of the configurations studied, it has been shown that with data assimilation we can determine a global strategy to optimise the quality of the result with respect to the amount of information provided. The consequence is a cost reduction in terms of measurement and/or computing time with respect to the basic approach.
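
The quantity studied above, the posterior (analysis) covariance of an optimised parameter, has a simple scalar form that shows how it shrinks as observations are added. A sketch with hypothetical numbers (standard least-squares data assimilation formula, not the paper's neutronic model):

```python
# Scalar analysis variance: a = (1/b + sum_i h_i^2 / r_i)^-1, where b is
# the background (prior) variance, r_i the observation error variances,
# and h_i the linear observation operator entries.
def analysis_variance(b, r_list, h_list):
    precision = 1.0 / b + sum(h * h / r for h, r in zip(h_list, r_list))
    return 1.0 / precision

b = 1.0  # hypothetical prior variance of the parameter
one_obs = analysis_variance(b, [0.5], [1.0])
five_obs = analysis_variance(b, [0.5] * 5, [1.0] * 5)
print(one_obs, five_obs)  # more information -> smaller posterior variance
```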

  3. Approximation of skewed interfaces with tensor-based model reduction procedures: Application to the reduced basis hierarchical model reduction approach

    Science.gov (United States)

    Ohlberger, Mario; Smetana, Kathrin

    2016-09-01

    In this article we introduce a procedure which allows one to recover the potentially very good approximation properties of tensor-based model reduction procedures for the solution of partial differential equations in the presence of interfaces or strong gradients in the solution which are skewed with respect to the coordinate axes. The two key ideas are the location of the interface, either by solving a lower-dimensional partial differential equation or by using data functions, and the subsequent removal of the interface from the solution by choosing the determined interface as the lifting function of the Dirichlet boundary conditions. We demonstrate in numerical experiments for linear elliptic equations and the reduced basis-hierarchical model reduction approach that the proposed procedure locates the interface well and yields a significantly improved convergence behavior, even in the case when we only consider an approximation of the interface.

  4. Validation Tests of Open-Source Procedures for Digital Camera Calibration and 3D Image-Based Modelling

    Science.gov (United States)

    Toschi, I.; Rivola, R.; Bertacchini, E.; Castagnetti, C.; Dubbini, M.; Capra, A.

    2013-07-01

    Among the many open-source software solutions recently developed for the extraction of point clouds from a set of un-oriented images, the photogrammetric tools Apero and MicMac (IGN, Institut Géographique National) aim to distinguish themselves by focusing on the accuracy and the metric content of the final result. This paper firstly aims at assessing the accuracy of the simplified and automated calibration procedure offered by the IGN tools. Results obtained with this procedure were compared with those achieved with a test-range calibration approach using a pre-surveyed laboratory test-field. Both direct and a-posteriori validation tests turned out successfully showing the stability and the metric accuracy of the process, even when low textured or reflective surfaces are present in the 3D scene. Afterwards, the possibility of achieving accurate 3D models from the subsequently extracted dense point clouds is also evaluated. Three different types of sculptural elements were chosen as test-objects and "ground-truth" data were acquired with triangulation laser scanners. 3D models derived from point clouds oriented with a simplified relative procedure show a suitable metric accuracy: all comparisons delivered a standard deviation of millimeter-level. The use of Ground Control Points in the orientation phase did not improve significantly the accuracy of the final 3D model, when a small figure-like corbel was used as test-object.

  5. Estimation of the uncertainty associated with the results based on the validation of chromatographic analysis procedures: application to the determination of chlorides by high performance liquid chromatography and of fatty acids by high resolution gas chromatography.

    Science.gov (United States)

    Quintela, M; Báguena, J; Gotor, G; Blanco, M J; Broto, F

    2012-02-03

    This article presents a model to calculate the uncertainty associated with an analytical result based on the validation of the analysis procedure. This calculation model is proposed as an alternative to the commonly used bottom-up and top-down methods. The proposal is very advantageous, as the validation of procedures and the estimation of measurement uncertainty are part of the technical requirements for obtaining ISO 17025:2005 accreditation. The model has been applied to the determination of chloride in lixiviates by liquid chromatography and to the determination of palmitic acid and stearic acid in magnesium stearate samples by gas chromatography.
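The general idea of deriving an uncertainty from validation data can be sketched as follows; the function name, the precision/trueness split, and the combination rule are simplified assumptions for illustration, not the authors' exact calculation model:

```python
import math

def expanded_uncertainty(replicates, reference_value, k=2.0):
    """Combine a precision term (replicate standard deviation) and a
    trueness term (bias against a reference) into an expanded
    uncertainty with coverage factor k. Hypothetical simplified model."""
    n = len(replicates)
    mean = sum(replicates) / n
    # Precision component: standard deviation of replicate results
    s = math.sqrt(sum((x - mean) ** 2 for x in replicates) / (n - 1))
    # Trueness (bias) component relative to a reference value
    bias = mean - reference_value
    u_c = math.sqrt(s ** 2 + bias ** 2)    # combined standard uncertainty
    return k * u_c

# Hypothetical chloride replicates (mg/L) against a 50.0 mg/L reference
U = expanded_uncertainty([49.8, 50.4, 50.1, 49.9, 50.2], 50.0)
```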

  6. A double-step truncation procedure for large-scale shell-model calculations

    CERN Document Server

    Coraggio, L; Itaco, N

    2016-01-01

    We present a procedure that helps to reduce the computational complexity of large-scale shell-model calculations by preserving, as much as possible, the role of the rejected degrees of freedom in an effective approach. Our truncation is driven first by an analysis of the effective single-particle energies of the original large-scale shell-model Hamiltonian, so as to locate the degrees of freedom relevant to describing a class of isotopes or isotones, namely the single-particle orbitals that will constitute a new truncated model space. The second step is to perform a unitary transformation of the original Hamiltonian from its model space into the truncated one. This transformation generates a new shell-model Hamiltonian, defined in a smaller model space, that effectively retains the role of the excluded single-particle orbitals. As an application of this procedure, we have chosen a realistic shell-model Hamiltonian defined in a large model space, set up by seven and five proton and neutron single-particle orb...
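The algebra of the second step, rotating a Hamiltonian with a unitary transformation and truncating it to a smaller model space, can be illustrated with a toy Hermitian matrix; this sketch uses the eigenvector basis purely so the effect of truncation is easy to verify, whereas the actual procedure constructs its transformation from effective single-particle energies:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy Hermitian "shell-model" Hamiltonian in a 6-dimensional model space
A = rng.normal(size=(6, 6))
H = (A + A.T) / 2

# A unitary transformation; here simply the eigenvector basis of H
# (np.linalg.eigh returns eigenvalues in ascending order)
eigvals, U = np.linalg.eigh(H)

H_rot = U.T @ H @ U        # similarity transform into the new basis
H_eff = H_rot[:3, :3]      # truncate to a 3-dimensional model space

# In this idealized basis the truncated Hamiltonian reproduces the
# three lowest eigenvalues of the full one exactly
```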

  7. Light Water Reactor Sustainability Program: Computer-based procedure for field activities: results from three evaluations at nuclear power plants

    Energy Technology Data Exchange (ETDEWEB)

    Oxstrand, Johanna [Idaho National Laboratory; Bly, Aaron [Idaho National Laboratory; LeBlanc, Katya [Idaho National Laboratory

    2014-09-01

    Nearly all activities that involve human interaction with the systems of a nuclear power plant are guided by procedures. The paper-based procedures (PBPs) currently used by industry have a demonstrated history of ensuring safety; however, improving procedure use could yield tremendous savings in increased efficiency and safety. One potential way to improve procedure-based activities is through the use of computer-based procedures (CBPs). Computer-based procedures provide the opportunity to incorporate context-driven job aids, such as drawings, photos, just-in-time training, etc., into the CBP system. One obvious advantage of this capability is reducing the time spent tracking down the applicable documentation. Additionally, human performance tools can be integrated into the CBP system in such a way that they help the worker focus on the task rather than the tools. Some tools can be completely incorporated into the CBP system, such as pre-job briefs, placekeeping, correct component verification, and peer checks. Other tools can be partly integrated in a fashion that reduces the time and labor required, such as concurrent and independent verification. Another benefit of CBPs compared to PBPs is dynamic procedure presentation. PBPs are static documents, which limits the degree to which the information presented can be tailored to the task and conditions when the procedure is executed. The CBP system could be configured to display only the relevant steps based on operating mode, plant status, and the task at hand. A dynamic presentation of the procedure (also known as context-sensitive procedures) will guide the user down the path of relevant steps based on the current conditions. This feature will reduce the user’s workload and inherently reduce the risk of incorrectly marking a step as not applicable and the risk of incorrectly performing a step that should be marked as not applicable. As part of the Department of Energy’s (DOE) Light Water Reactors Sustainability Program

  8. Measuring the Effects of Customs and Administrative Procedures on Trade: Gravity Model for South-Eastern Europe

    Directory of Open Access Journals (Sweden)

    Katerina Toševska-Trpčevska

    2014-04-01

    Full Text Available This paper measures the effects of certain customs and administrative procedures on trade between the countries of South-Eastern Europe in the period 2008-2012. Following OECD methodology, we employ the augmented gravity model. The empirical results suggest that the number of days spent at the border and the costs paid in both importer and exporter countries had a significant negative influence on the volume of trade in the period 2008-2012. In addition, the model underlines that sharing the same border and being part of the former Yugoslav market are important determinants of trade in the region.

  9. Procedure to Determine Thermal Characteristics and Groundwater Influence in Heterogeneous Subsoil by an Enhanced Thermal Response Test and Numerical Modeling

    Science.gov (United States)

    Aranzabal, Nordin; Martos, Julio; Montero, Álvaro; Monreal, Llúcia; Soret, Jesús; Torres, José; García-Olcina, Raimundo

    2016-04-01

    Ground thermal conductivity and borehole thermal resistance are indispensable parameters for the optimal design of subsoil thermal processes and energy storage characterization. The standard method to determine these parameters is the Thermal Response Test (TRT), whose results are evaluated by models that consider the ground to be homogeneous and isotropic. This method yields an effective ground thermal conductivity that represents an average of the thermal conductivity along the different layers crossed by the perforation. In order to obtain a ground thermal conductivity profile as a function of depth, two additional key ingredients are required: first, a new significant data set, namely a temperature profile along the borehole; and second, a new analysis procedure to extract ground heterogeneity from the recorded data. This research work presents the results of an analysis procedure, complementing the standard TRT analysis, that allows the thermal conductivity profile to be estimated from a temperature profile measured along the borehole during a TRT. In the analysis procedure, a 3D Finite Element Model (FEM) is used to fit simulation results to experimental data through a set of iterative simulations. This methodology is applied to a data set obtained during a TRT with 1 kW heat power injection in a 30 m deep Borehole Heat Exchanger (BHE) facility. A highly conductive layer has been detected and located at 25 m depth. In addition, a novel automated device to obtain temperature profiles along geothermal pipes, with or without fluid flow, is presented. This sensor system is intended to improve the standard TRT, and it allows the collection of depth-dependent thermal characteristics of the subsoil geological structure. Currently, studies are being conducted in double U-pipe borehole installations in order to improve the previously introduced analysis procedure. A numerical model simulation that takes advective effects into account is intended to estimate the underground water velocity.
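The standard TRT analysis that this procedure complements reads an effective conductivity off the late-time slope of temperature versus ln(t) under the infinite-line-source approximation. A minimal sketch on synthetic data (function name and numbers are illustrative assumptions, not the enhanced FEM-based analysis of the paper):

```python
import math

def ils_conductivity(times_s, temps_c, q_per_m):
    """Estimate effective ground thermal conductivity from the late-time
    slope of mean fluid temperature versus ln(t): T(t) ~ k*ln(t) + c,
    with lambda = q / (4*pi*k), per the infinite-line-source model."""
    x = [math.log(t) for t in times_s]
    n = len(x)
    mx, my = sum(x) / n, sum(temps_c) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, temps_c))
             / sum((xi - mx) ** 2 for xi in x))
    return q_per_m / (4 * math.pi * slope)

# Synthetic late-time data generated with lambda = 2.5 W/(m*K), q = 33.3 W/m
lam_true, q = 2.5, 33.3
ts = [3600 * h for h in range(10, 72)]                  # hours 10..71
Ts = [12.0 + q / (4 * math.pi * lam_true) * math.log(t) for t in ts]
lam_est = ils_conductivity(ts, Ts, q)                   # recovers ~2.5
```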

  10. A Dissipative Model for Hydrogen Storage: Existence and Regularity Results

    CERN Document Server

    Chiodaroli, Elisabetta

    2010-01-01

    We prove global existence of a solution to an initial and boundary value problem for a highly nonlinear PDE system. The problem arises from a thermomechanical dissipative model describing hydrogen storage by use of metal hydrides. In order to treat the model from an analytical point of view, we formulate it as a phase transition phenomenon thanks to the introduction of a suitable phase variable. Continuum mechanics laws lead to an evolutionary problem involving three state variables: the temperature, the phase parameter, and the pressure. The problem thus consists of three coupled partial differential equations combined with initial and boundary conditions. Existence and regularity of the solutions are investigated here by means of a time discretization, a priori estimates, and passage to the limit, joined with compactness and monotonicity arguments.

  11. A performance weighting procedure for GCMs based on explicit probabilistic models and accounting for observation uncertainty

    Science.gov (United States)

    Renard, Benjamin; Vidal, Jean-Philippe

    2016-04-01

    In recent years, the climate modeling community has put a lot of effort into releasing the outputs of multimodel experiments for use by the wider scientific community. In such experiments, several structurally distinct GCMs are run using the same observed forcings (for the historical period) or the same projected forcings (for the future period). In addition, several members are produced for a single given model structure, by running each GCM with slightly different initial conditions. This multiplicity of GCM outputs offers many opportunities in terms of uncertainty quantification or GCM comparisons. In this presentation, we propose a new procedure to weight GCMs according to their ability to reproduce the observed climate. Such weights can be used to combine the outputs of several models in a way that rewards good-performing models and discards poorly-performing ones. The proposed procedure has the following main properties: 1. It is based on explicit probabilistic models describing the time series produced by the GCMs and the corresponding historical observations, 2. It can use several members whenever available, 3. It accounts for the uncertainty in observations, 4. It assigns a weight to each GCM (all weights summing up to one), 5. It can also assign a weight to the "H0 hypothesis" that all GCMs in the multimodel ensemble are not compatible with observations. The application of the weighting procedure is illustrated with several case studies including synthetic experiments, simple cases where the target GCM output is a simple univariate variable and more realistic cases where the target GCM output is a multivariate and/or a spatial variable. These case studies illustrate the generality of the procedure which can be applied in a wide range of situations, as long as the analyst is prepared to make an explicit probabilistic assumption on the target variable. Moreover, these case studies highlight several interesting properties of the weighting procedure. In
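Property 4, turning per-model performance measures into weights that sum to one (optionally including an "H0" alternative), can be sketched as follows; the log-sum-exp normalization of log-likelihoods is a generic assumption for illustration, not necessarily the authors' exact scheme:

```python
import math

def gcm_weights(log_likelihoods, log_lik_h0=None):
    """Normalize per-GCM log-likelihoods (and optionally an H0
    alternative) into weights summing to one, via a numerically
    stable log-sum-exp."""
    logs = list(log_likelihoods)
    if log_lik_h0 is not None:
        logs.append(log_lik_h0)
    m = max(logs)                          # shift for numerical stability
    exps = [math.exp(v - m) for v in logs]
    total = sum(exps)
    return [e / total for e in exps]

# Three GCMs plus the H0 hypothesis; the last weight is the H0 weight
w = gcm_weights([-120.3, -118.9, -125.0], log_lik_h0=-130.0)
```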

  12. A Robbins-Monro procedure for a class of models of deformation

    CERN Document Server

    Fraysse, Philippe

    2012-01-01

    The paper deals with the statistical analysis of several data sets associated with shape invariant models with different translation, height and scaling parameters. We propose to estimate these parameters together with the common shape function. Our approach extends the recent work of Bercu and Fraysse to multivariate shape invariant models. We propose a very efficient Robbins-Monro procedure for the estimation of the translation parameters and we use these estimates in order to evaluate scale parameters. The main pattern is estimated by a weighted Nadaraya-Watson estimator. We provide almost sure convergence and asymptotic normality for all estimators. Finally, we illustrate the convergence of our estimation procedure on simulated data as well as on real ECG data.
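A generic Robbins-Monro iteration of the kind underlying such estimation procedures can be sketched as follows (a toy root-finding illustration, not the shape-invariant-model setting of the paper):

```python
import random

random.seed(42)

def robbins_monro(noisy_fn, theta0, n_iter=5000):
    """Robbins-Monro stochastic approximation:
    theta_n = theta_{n-1} - gamma_n * H(theta_{n-1}, X_n),
    with steps gamma_n = 1/n, so that sum(gamma_n) diverges while
    sum(gamma_n^2) converges, as the classical conditions require."""
    theta = theta0
    for n in range(1, n_iter + 1):
        theta -= (1.0 / n) * noisy_fn(theta)
    return theta

# Toy illustration: find the root of theta -> E[theta - T] with
# T ~ N(2, 1), i.e. estimate the mean 2.0 from noisy observations
estimate = robbins_monro(lambda th: th - (2.0 + random.gauss(0.0, 1.0)), 0.0)
```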

  13. A fast and systematic procedure to develop dynamic models of bioprocesses: application to microalgae cultures

    Directory of Open Access Journals (Sweden)

    J. Mailier

    2010-09-01

    Full Text Available The purpose of this paper is to report on the development of a procedure for inferring black-box, yet biologically interpretable, dynamic models of bioprocesses based on sets of measurements of a few external components (biomass, substrates, and products of interest). The procedure has three main steps: (a) the determination of the number of macroscopic biological reactions linking the measured components; (b) the estimation of a first reaction scheme, which has interesting mathematical properties but might lack a biological interpretation; and (c) the "projection" (or transformation) of this reaction scheme onto a biologically consistent scheme. The advantage of the method is that it allows the fast prototyping of models for the culture of microorganisms that are not well documented. The good performance of the third step of the method is demonstrated by application to an example of microalgal culture.
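Step (a), determining the number of macroscopic reactions, is commonly read off as the numerical rank of the measured data matrix. A hedged numpy sketch on synthetic data (the stoichiometry, rates, noise level, and rank threshold are all illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic time series for 4 measured external components whose dynamics
# are driven by 2 macroscopic reactions: data X = K @ r, with a
# stoichiometric matrix K and time-varying reaction rates r
K = np.array([[1.0, 0.0],
              [-2.0, 1.0],
              [0.5, -1.0],
              [0.0, 2.0]])
r = rng.random((2, 50))                        # 2 reaction rates, 50 samples
X = K @ r + 1e-6 * rng.normal(size=(4, 50))    # near-noiseless measurements

# The number of macroscopic reactions is the numerical rank of the data
# matrix, read off from its singular-value spectrum
s = np.linalg.svd(X, compute_uv=False)
n_reactions = int(np.sum(s > 1e-3 * s[0]))     # -> 2
```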

  14. A lifesaving model: teaching advanced procedures on shelter animals in a tertiary care facility.

    Science.gov (United States)

    Spindel, Miranda E; MacPhail, Catriona M; Hackett, Timothy B; Egger, Erick L; Palmer, Ross H; Mama, Khursheed R; Lee, David E; Wilkerson, Nicole; Lappin, Michael R

    2008-01-01

    It is estimated that there are over 5 million homeless animals in the United States. While the veterinary profession continues to evolve in advanced specialty disciplines, animal shelters in every community lack resources for basic care. Concurrently, veterinary students, interns, and residents have less opportunity for practical primary and secondary veterinary care experiences in tertiary-care institutions that focus on specialty training. The two main goals of this project were (1) to provide practical medical and animal-welfare experiences to veterinary students, interns, and residents, under faculty supervision, and (2) to care for animals with medical problems beyond a typical shelter's technical capabilities and budget. Over a two-year period, 22 animals from one humane society were treated at Colorado State University Veterinary Medical Center. Initial funding for medical expenses was provided by PetSmart Charities. All 22 animals were successfully treated and subsequently adopted. The results suggest that collaboration between a tertiary-care facility and a humane shelter can be used successfully to teach advanced procedures and to save homeless animals. The project demonstrated that linking a veterinary teaching hospital's resources to a humane shelter's needs did not financially affect either institution. It is hoped that such a program might be used as a model and be perpetuated in other communities.

  15. Some results regarding the comparison of the Earth's atmospheric models

    Directory of Open Access Journals (Sweden)

    Šegan S.

    2005-01-01

    Full Text Available In this paper we examine air densities derived from our realization of aeronomic atmosphere models based on accelerometer measurements from satellites in a low Earth's orbit (LEO. Using the adapted algorithms we derive comparison parameters. The first results concerning the adjustment of the aeronomic models to the total-density model are given.

  16. Due process model of procedural justice in performance appraisal: promotion versus termination scenarios.

    Science.gov (United States)

    Kataoka, Heloneida C; Cole, Nina D; Flint, Douglas A

    2006-12-01

    In a laboratory study, 318 student participants (148 male, 169 female, and one who did not report sex; M age = 25.0, SD = 6.0) in introductory organizational behavior classes responded to scenarios in which a performance appraisal resulted in either employee promotion or termination. Each scenario had varying levels of three procedural justice criteria for performance appraisal. For both promotion and termination outcomes, analysis showed that, as the number of criteria increased, perceptions of procedural fairness increased. A comparison between the two outcomes showed that perceptions of fairness were significantly stronger for the promotion outcome than for termination.

  17. Geostatistical Procedures for Developing Three-Dimensional Aquifer Models from Drillers' Logs

    Science.gov (United States)

    Bohling, G.; Helm, C.

    2013-12-01

    The Hydrostratigraphic Drilling Record Assessment (HyDRA) project is developing procedures for employing the vast but highly qualitative hydrostratigraphic information contained in drillers' logs in the development of quantitative three-dimensional (3D) depictions of subsurface properties for use in flow and transport models to support groundwater management practices. One of the project's objectives is to develop protocols for 3D interpolation of lithological data from drillers' logs, properly accounting for the categorical nature of these data. This poster describes the geostatistical procedures developed to accomplish this objective. Using a translation table currently containing over 62,000 unique sediment descriptions encountered during the transcription of over 15,000 logs in the Kansas High Plains aquifer, the sediment descriptions are translated into 71 standardized terms, which are then mapped into a small number of categories associated with different representative property (e.g., hydraulic conductivity [K]) values. Each log is partitioned into regular intervals and the proportion of each K category within each interval is computed. To properly account for their compositional nature, a logratio transform is applied to the proportions. The transformed values are then kriged to the 3D model grid and backtransformed to determine the proportion of each category within each model cell. Various summary measures can then be computed from the proportions, including a proportion-weighted average K and an entropy measure representing the degree of mixing of categories within each cell. We also describe a related cross-validation procedure for assessing log quality.
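The logratio transform and backtransform of category proportions can be sketched with the additive logratio (alr); the zero-replacement eps is a common workaround for compositional zeros and an assumption here, not necessarily the project's exact choice:

```python
import math

def alr(props, eps=1e-6):
    """Additive logratio transform of a composition (proportions summing
    to one): zeros are replaced by a small eps and the composition is
    renormalized, then ratios are taken against the last category."""
    p = [max(x, eps) for x in props]
    s = sum(p)
    p = [x / s for x in p]
    return [math.log(x / p[-1]) for x in p[:-1]]

def alr_inv(y):
    """Inverse additive logratio: back to proportions summing to one."""
    exps = [math.exp(v) for v in y] + [1.0]
    t = sum(exps)
    return [e / t for e in exps]

# Round-trip the K-category proportions of one model cell; kriging would
# operate on the transformed values before backtransforming to the grid
p = [0.6, 0.3, 0.1]
q = alr_inv(alr(p))
```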

  18. Detecting temporal trends in species assemblages with bootstrapping procedures and hierarchical models

    Science.gov (United States)

    Gotelli, Nicholas J.; Dorazio, Robert M.; Ellison, Aaron M.; Grossman, Gary D.

    2010-01-01

    Quantifying patterns of temporal trends in species assemblages is an important analytical challenge in community ecology. We describe methods of analysis that can be applied to a matrix of counts of individuals that is organized by species (rows) and time-ordered sampling periods (columns). We first developed a bootstrapping procedure to test the null hypothesis of random sampling from a stationary species abundance distribution with temporally varying sampling probabilities. This procedure can be modified to account for undetected species. We next developed a hierarchical model to estimate species-specific trends in abundance while accounting for species-specific probabilities of detection. We analysed two long-term datasets on stream fishes and grassland insects to demonstrate these methods. For both assemblages, the bootstrap test indicated that temporal trends in abundance were more heterogeneous than expected under the null model. We used the hierarchical model to estimate trends in abundance and identified sets of species in each assemblage that were steadily increasing, decreasing or remaining constant in abundance over more than a decade of standardized annual surveys. Our methods of analysis are broadly applicable to other ecological datasets, and they represent an advance over most existing procedures, which do not incorporate effects of incomplete sampling and imperfect detection.
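The flavor of the bootstrapping procedure, comparing an observed heterogeneity statistic against resamples drawn from a stationary abundance distribution with per-period sampling totals fixed, can be sketched as follows (the statistic and resampling scheme are simplified assumptions, not the authors' exact test, and detection effects are ignored):

```python
import random

random.seed(1)

def bootstrap_trend_test(counts, n_boot=500):
    """Bootstrap test of whether heterogeneity of temporal trends among
    species exceeds what random sampling from a stationary species-
    abundance distribution would produce. counts: matrix [species][period]."""
    n_sp = len(counts)
    totals = [sum(col) for col in zip(*counts)]        # per-period totals
    grand = sum(totals)
    probs = [sum(row) / grand for row in counts]       # stationary proportions

    def heterogeneity(mat):
        # Variance across species of the change in relative abundance
        # between the first and last sampling periods
        t0 = sum(r[0] for r in mat)
        t1 = sum(r[-1] for r in mat)
        deltas = [r[-1] / t1 - r[0] / t0 for r in mat]
        m = sum(deltas) / n_sp
        return sum((d - m) ** 2 for d in deltas) / n_sp

    obs = heterogeneity(counts)
    exceed = 0
    for _ in range(n_boot):
        sim = [[0] * len(totals) for _ in range(n_sp)]
        for j, tot in enumerate(totals):               # multinomial per period
            for i in random.choices(range(n_sp), weights=probs, k=tot):
                sim[i][j] += 1
        if heterogeneity(sim) >= obs:
            exceed += 1
    return (exceed + 1) / (n_boot + 1)                 # bootstrap p-value

# Three species, three periods: species 1 declines while species 2 increases
p_value = bootstrap_trend_test([[30, 25, 5], [5, 10, 30], [20, 20, 20]])
```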

  19. The Effect of Bathymetric Filtering on Nearshore Process Model Results

    Science.gov (United States)

    2009-01-01

    Plant, Nathaniel G.; Edwards, Kacey L.; Kaihatu, James M.; Veeramony, Jayaram; Hsu, Yuan-Huang L.; Holland, K. Todd. Nearshore process models are capable of predicting ... assimilation efforts that require this information. Published by Elsevier B.V.

  20. Quantitative modeling of the accuracy in registering preoperative patient-specific anatomic models into left atrial cardiac ablation procedures

    Energy Technology Data Exchange (ETDEWEB)

    Rettmann, Maryam E., E-mail: rettmann.maryam@mayo.edu; Holmes, David R.; Camp, Jon J.; Cameron, Bruce M.; Robb, Richard A. [Biomedical Imaging Resource, Mayo Clinic College of Medicine, Rochester, Minnesota 55905 (United States); Kwartowitz, David M. [Department of Bioengineering, Clemson University, Clemson, South Carolina 29634 (United States); Gunawan, Mia [Department of Biochemistry and Molecular and Cellular Biology, Georgetown University, Washington D.C. 20057 (United States); Johnson, Susan B.; Packer, Douglas L. [Division of Cardiovascular Diseases, Mayo Clinic, Rochester, Minnesota 55905 (United States); Dalegrave, Charles [Clinical Cardiac Electrophysiology, Cardiology Division Hospital Sao Paulo, Federal University of Sao Paulo, 04024-002 Brazil (Brazil); Kolasa, Mark W. [David Grant Medical Center, Fairfield, California 94535 (United States)

    2014-02-15

    Purpose: In cardiac ablation therapy, accurate anatomic guidance is necessary to create effective tissue lesions for elimination of left atrial fibrillation. While fluoroscopy, ultrasound, and electroanatomic maps are important guidance tools, they lack information regarding detailed patient anatomy which can be obtained from high resolution imaging techniques. For this reason, there has been significant effort in incorporating detailed, patient-specific models generated from preoperative imaging datasets into the procedure. Both clinical and animal studies have investigated registration and targeting accuracy when using preoperative models; however, the effect of various error sources on registration accuracy has not been quantitatively evaluated. Methods: Data from phantom, canine, and patient studies are used to model and evaluate registration accuracy. In the phantom studies, data are collected using a magnetically tracked catheter on a static phantom model. Monte Carlo simulation studies were run to evaluate both baseline errors as well as the effect of different sources of error that would be present in a dynamic in vivo setting. Error is simulated by varying the variance parameters on the landmark fiducial, physical target, and surface point locations in the phantom simulation studies. In vivo validation studies were undertaken in six canines in which metal clips were placed in the left atrium to serve as ground truth points. A small clinical evaluation was completed in three patients. Landmark-based and combined landmark and surface-based registration algorithms were evaluated in all studies. In the phantom and canine studies, both target registration error and point-to-surface error are used to assess accuracy. In the patient studies, no ground truth is available and registration accuracy is quantified using point-to-surface error only. Results: The phantom simulation studies demonstrated that combined landmark and surface-based registration improved

  1. Light Water Reactor Sustainability Program: Computer-Based Procedures for Field Activities: Results from Three Evaluations at Nuclear Power Plants

    Energy Technology Data Exchange (ETDEWEB)

    Oxstrand, Johanna [Idaho National Lab. (INL), Idaho Falls, ID (United States); Le Blanc, Katya [Idaho National Lab. (INL), Idaho Falls, ID (United States); Bly, Aaron [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2014-09-01

    The Computer-Based Procedure (CBP) research effort is a part of the Light-Water Reactor Sustainability (LWRS) Program, a research and development (R&D) program sponsored by the Department of Energy (DOE) and performed in close collaboration with industry R&D programs; it provides the technical foundations for licensing and managing the long-term, safe, and economical operation of current nuclear power plants. One of the primary missions of the LWRS program is to help the U.S. nuclear industry adopt new technologies and engineering solutions that facilitate the continued safe operation of the plants and extension of the current operating licenses. One area that could yield tremendous savings in increased efficiency and safety is in improving procedure use. Nearly all activities in the nuclear power industry are guided by procedures, which today are printed and executed on paper. This paper-based procedure process has proven to ensure safety; however, there are improvements to be gained. Due to its inherent dynamic nature, a CBP provides the opportunity to incorporate context-driven job aids, such as drawings, photos, and just-in-time training. Compared to the static state of paper-based procedures (PBPs), the presentation of information in CBPs can be much more flexible and tailored to the task, actual plant condition, and operation mode. The dynamic presentation of the procedure will guide the user down the path of relevant steps, thus minimizing time spent by the field worker evaluating plant conditions and decisions related to the applicability of each step. This dynamic presentation of the procedure also minimizes the risk of conducting steps out of order and/or incorrectly assessing the applicability of steps.

  2. Revision Arthroscopic Repair Versus Latarjet Procedure in Patients With Recurrent Instability After Initial Repair Attempt: A Cost-Effectiveness Model.

    Science.gov (United States)

    Makhni, Eric C; Lamba, Nayan; Swart, Eric; Steinhaus, Michael E; Ahmad, Christopher S; Romeo, Anthony A; Verma, Nikhil N

    2016-09-01

    To compare the cost-effectiveness of arthroscopic revision instability repair and the Latarjet procedure in treating patients with recurrent instability after initial arthroscopic instability repair. An expected-value decision analysis of revision arthroscopic instability repair compared with the Latarjet procedure for recurrent instability after a failed repair attempt was modeled. Inputs regarding procedure cost, clinical outcomes, and health utilities were derived from the literature. Compared with revision arthroscopic repair, Latarjet was less expensive ($13,672 v $15,287) with improved clinical outcomes (43.78 v 36.76 quality-adjusted life-years). Both arthroscopic repair and Latarjet were cost-effective compared with nonoperative treatment (incremental cost-effectiveness ratios of 3,082 and 1,141, respectively). Results from sensitivity analyses indicate that under scenarios of high rates of postoperative stability, along with improved clinical outcome scores, revision arthroscopic repair becomes increasingly cost-effective. The Latarjet procedure for failed instability repair is a cost-effective treatment option, with lower costs and improved clinical outcomes compared with revision arthroscopic instability repair. However, surgeons must still incorporate clinical judgment into treatment algorithm formation. Level IV, expected value decision analysis. Copyright © 2016. Published by Elsevier Inc.
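A quick arithmetic check on the figures quoted above: when one strategy is both cheaper and more effective, it dominates the comparator and no incremental cost-effectiveness ratio is meaningful (the helper function is an illustration, not the paper's model):

```python
def icer(cost_new, effect_new, cost_ref, effect_ref):
    """Incremental cost-effectiveness ratio in $ per QALY gained.
    Returns None when the new strategy dominates the reference
    (no more costly and at least as effective)."""
    d_cost = cost_new - cost_ref
    d_effect = effect_new - effect_ref
    if d_cost <= 0 and d_effect >= 0:
        return None  # dominant strategy: cheaper and better
    return d_cost / d_effect

# Figures quoted in the abstract: Latarjet ($13,672; 43.78 QALYs) versus
# revision arthroscopic repair ($15,287; 36.76 QALYs)
result = icer(13672, 43.78, 15287, 36.76)   # None: Latarjet dominates
```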

  3. Peri- and intraarticular analgesia (PIA) superior to other procedures in TKA – review of literature and results of own RCT

    Science.gov (United States)

    Beckmann, J.; Stathellis, A.; Fitz, W.; Köck, F.; Bauer, G.; Gebauer, M.; Schnurr, C.

    2016-01-01

    Background: TKA is a well-established surgery worldwide, and the superiority of periarticular infiltrations in the perioperative setup of TKA is scientifically well founded nowadays. However, nerve blocks are still widely used as a perioperative standard, offering possible analgesia beyond surgery, but at the cost of sensorimotor deficiency with a resulting risk of falls, as well as increased pain after removal of the nerve catheters. Another standard procedure is intubation, which means immediate pain after patients awaken, with a resulting need for sufficient oral, intravenous, intramuscular, or subcutaneous analgesia. We present a current review of the literature and our own RCT comparing infiltrations with nerve blocks which, to our best knowledge, is the first RCT combined with an additional continuous part. Methods: Literature review (PubMed). Own RCT: 50 TKA patients were randomized and prospectively included. 25 patients received nerve blocks, with a single sciatic block and a femoral block with an additional perineural catheter. 25 patients received periarticular infiltrations together with an intraarticular catheter (PIA). All catheters stayed in place for 4 days. Both groups received a laryngeal mask. Postoperative mobilisation, surgeon, and type of prosthesis were the same for all patients. The following were evaluated pre- and postoperatively (first, third and sixth hour, first until sixth day): VAS, additional analgesics/opioids, KSS score, knee function, and the ability to raise the straightened leg. Complications such as infection, falls, DVT, etc. were recorded.
    Results: Periarticular infiltrations are superior to other options in the perioperative setup of TKA, as is clearly shown by the literature (several studies including reviews) as well as by the results of our own RCT: pain occurred in both groups; however, VAS, additional analgesics/opioids, KSS score, and the ability to raise the straightened leg were significantly better following PIA (p<0.01), with comparable knee function

  4. Steel Containment Vessel Model Test: Results and Evaluation

    Energy Technology Data Exchange (ETDEWEB)

    Costello, J.F.; Hashimote, T.; Hessheimer, M.F.; Luk, V.K.

    1999-03-01

    A high pressure test of the steel containment vessel (SCV) model was conducted on December 11-12, 1996 at Sandia National Laboratories, Albuquerque, NM, USA. The test model is a mixed-scaled model (1:10 in geometry and 1:4 in shell thickness) of an improved Mark II boiling water reactor (BWR) containment. A concentric steel contact structure (CS), installed over the SCV model and separated at a nominally uniform distance from it, provided a simplified representation of a reactor shield building in the actual plant. The SCV model and contact structure were instrumented with strain gages and displacement transducers to record the deformation behavior of the SCV model during the high pressure test. This paper summarizes the conduct and the results of the high pressure test and discusses the posttest metallurgical evaluation results on specimens removed from the SCV model.

  5. Building Energy Simulation Test for Existing Homes (BESTEST-EX): Instructions for Implementing the Test Procedure, Calibration Test Reference Results, and Example Acceptance-Range Criteria

    Energy Technology Data Exchange (ETDEWEB)

    Judkoff, R.; Polly, B.; Bianchi, M.; Neymark, J.; Kennedy, M.

    2011-08-01

    This publication summarizes the Building Energy Simulation Test for Existing Homes (BESTEST-EX): instructions for implementing the test procedure, calibration test reference results, and example acceptance-range criteria.

  6. A Form—Correcting System of Chinese Characters Using a Model of Correcting Procedures of Calligraphists

    Institute of Scientific and Technical Information of China (English)

    曾建超; Hidehiko Sanada; et al.

    1995-01-01

    A support system for the form-correction of Chinese characters is developed based upon the generation model SAM, and its feasibility is evaluated. SAM is excellent as a model for generating Chinese characters, but it is difficult to determine appropriate parameters because calligraphic knowledge is needed. Noticing that the calligraphic knowledge of calligraphists is embodied in their corrective actions, we adopt a strategy of acquiring calligraphic knowledge by monitoring, recording, and analyzing the corrective actions of calligraphists, and try to realize an environment under which calligraphists can easily make corrections to character forms and which can record their corrective actions without interfering with them. In this paper, we first construct a model of the correcting procedures of calligraphists, composed of typical correcting procedures acquired by extensively observing their corrective actions and interviewing them, and develop a form-correcting system for brush-written Chinese characters using this model. Secondly, through actual correcting experiments, we demonstrate that parameters within SAM can be easily corrected at the level of character patterns with our system, and show that it is effective and easy for calligraphists to use, by evaluating the effectiveness of the correcting model, the sufficiency of its functions, and its execution speed.

  7. Results from a new Cocks-Ashby style porosity model

    Science.gov (United States)

    Barton, Nathan

    2017-01-01

    A new porosity evolution model is described, along with preliminary results. The formulation makes use of a Cocks-Ashby style treatment of porosity kinetics that includes rate dependent flow in the mechanics of porosity growth. The porosity model is implemented in a framework that allows for a variety of strength models to be used for the matrix material, including ones with significant changes in rate sensitivity as a function of strain rate. Results of the effect of changing strain rate sensitivity on porosity evolution are shown. The overall constitutive model update involves the coupled solution of a system of nonlinear equations.
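
The coupled solution of nonlinear equations mentioned at the end of the abstract is typically handled with a local Newton iteration at each material point. A generic two-equation sketch follows; the paper's actual residual equations are not given in the abstract, so the system solved below is purely illustrative:

```python
def newton_solve(f, jac, x0, tol=1e-10, max_iter=50):
    """Newton iteration for a coupled pair of nonlinear equations, the kind
    of local solve a constitutive update performs each time step.
    Generic sketch; not the paper's actual residual system."""
    x, y = x0
    for _ in range(max_iter):
        fx, fy = f(x, y)
        if abs(fx) < tol and abs(fy) < tol:
            break
        a, b, c, d = jac(x, y)            # 2x2 Jacobian entries
        det = a * d - b * c
        x -= ( d * fx - b * fy) / det     # (x, y) <- (x, y) - J^{-1} f
        y -= (-c * fx + a * fy) / det
    return x, y

# Illustrative coupled system: x + y = 3 and x*y = 2 (roots (1,2) and (2,1)).
sol = newton_solve(lambda x, y: (x + y - 3, x * y - 2),
                   lambda x, y: (1, 1, y, x),
                   (0.5, 2.5))
```

In a real constitutive update the unknowns would be quantities such as the porosity increment and an effective plastic strain measure, with the Jacobian assembled from the strength and porosity-kinetics models.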

  8. [DESCRIPTION AND PRESENTATION OF THE RESULTS OF ELECTROENCEPHALOGRAM PROCESSING USING AN INFORMATION MODEL].

    Science.gov (United States)

    Myznikov, I L; Nabokov, N L; Rogovanov, D Yu; Khankevich, Yu R

    2016-01-01

The paper proposes to apply the informational modeling of correlation matrices developed by I. L. Myznikov in the early 1990s to neurophysiological investigations, such as electroencephalogram recording and analysis and the coherence description of signals from electrodes on the head surface. The authors demonstrate information models built using data from studies of inert gas inhalation by healthy human subjects. In the opinion of the authors, information models provide an opportunity to describe physiological processes with a high level of generalization. The procedure for presenting EEG results holds great promise for broad application.

  9. Results of the Marine Ice Sheet Model Intercomparison Project, MISMIP

    Directory of Open Access Journals (Sweden)

    F. Pattyn

    2012-05-01

Full Text Available Predictions of marine ice-sheet behaviour require models that are able to robustly simulate grounding line migration. We present results of an intercomparison exercise for marine ice-sheet models. Verification is effected by comparison with approximate analytical solutions for flux across the grounding line using simplified geometrical configurations (no lateral variations, no effects of lateral buttressing). Unique steady state grounding line positions exist for ice sheets on a downward sloping bed, while hysteresis occurs across an overdeepened bed, and stable steady state grounding line positions only occur on the downward-sloping sections. Models based on the shallow ice approximation, which does not resolve extensional stresses, do not reproduce the approximate analytical results unless appropriate parameterizations for ice flux are imposed at the grounding line. For extensional-stress resolving "shelfy stream" models, differences between model results were mainly due to the choice of spatial discretization. Moving grid methods were found to be the most accurate at capturing grounding line evolution, since they track the grounding line explicitly. Adaptive mesh refinement can further improve accuracy, including in fixed grid models that generally perform poorly at coarse resolution. Fixed grid models, with nested grid representations of the grounding line, are able to generate accurate steady state positions, but can be inaccurate over transients. Only one full-Stokes model was included in the intercomparison, and consequently the accuracy of shelfy stream models as approximations of full-Stokes models remains to be determined in detail, especially during transients.

  10. Using Discrete Event Simulation to Model the Economic Value of Shorter Procedure Times on EP Lab Efficiency in the VALUE PVI Study.

    Science.gov (United States)

    Kowalski, Marcin; DeVille, J Brian; Svinarich, J Thomas; Dan, Dan; Wickliffe, Andrew; Kantipudi, Charan; Foell, Jason D; Filardo, Giovanni; Holbrook, Reece; Baker, James; Baydoun, Hassan; Jenkins, Mark; Chang-Sing, Peter

    2016-05-01

    The VALUE PVI study demonstrated that atrial fibrillation (AF) ablation procedures and electrophysiology laboratory (EP lab) occupancy times were reduced for the cryoballoon compared with focal radiofrequency (RF) ablation. However, the economic impact associated with the cryoballoon procedure for hospitals has not been determined. Assess the economic value associated with shorter AF ablation procedure times based on VALUE PVI data. A model was formulated from data from the VALUE PVI study. This model used a discrete event simulation to translate procedural efficiencies into metrics utilized by hospital administrators. A 1000-day period was simulated to determine the accrued impact of procedure time on an institution's EP lab when considering staff and hospital resources. The simulation demonstrated that procedures performed with the cryoballoon catheter resulted in several efficiencies, including: (1) a reduction of 36.2% in days with overtime (422 days RF vs 60 days cryoballoon); (2) 92.7% less cumulative overtime hours (370 hours RF vs 27 hours cryoballoon); and (3) an increase of 46.7% in days with time for an additional EP lab usage (186 days RF vs 653 days cryoballoon). Importantly, the added EP lab utilization could not support the time required for an additional AF ablation procedure. The discrete event simulation of the VALUE PVI data demonstrates the potential positive economic value of AF ablation procedures using the cryoballoon. These benefits include more days where overtime is avoided, fewer cumulative overtime hours, and more days with time left for additional usage of EP lab resources.
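
The study's discrete event simulation is not published as code, but the accounting it describes, translating a per-case procedure-time distribution into overtime days and cumulative overtime hours over a 1000-day horizon, can be sketched minimally. All parameter values below (case counts, occupancy times, shift length) are illustrative assumptions, not VALUE PVI data:

```python
import random

def simulate_lab_days(n_days, cases_per_day, mean_minutes, sd_minutes,
                      shift_minutes=480, seed=0):
    """Toy discrete-event-style simulation: draw per-case EP lab occupancy
    times, accumulate them each day, and count days with overtime plus the
    cumulative overtime hours. Hypothetical distributions throughout."""
    rng = random.Random(seed)
    overtime_days, overtime_hours = 0, 0.0
    for _ in range(n_days):
        day_total = sum(max(0.0, rng.gauss(mean_minutes, sd_minutes))
                        for _ in range(cases_per_day))
        if day_total > shift_minutes:
            overtime_days += 1
            overtime_hours += (day_total - shift_minutes) / 60.0
    return overtime_days, overtime_hours

# A shorter mean lab occupancy time per case translates directly into fewer
# overtime days and hours over the simulated horizon.
rf_days, rf_hours = simulate_lab_days(1000, 2, 250, 40)
cryo_days, cryo_hours = simulate_lab_days(1000, 2, 200, 40)
```

A production model would add case scheduling, staff constraints, and room turnover, but the overtime metrics reported in the abstract fall out of exactly this kind of accumulation.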

  11. Establishing Reliable Cognitive Change in Children with Epilepsy: The Procedures and Results for a Sample with Epilepsy

    Science.gov (United States)

    van Iterson, Loretta; Augustijn, Paul B.; de Jong, Peter F.; van der Leij, Aryan

    2013-01-01

    The goal of this study was to investigate reliable cognitive change in epilepsy by developing computational procedures to determine reliable change index scores (RCIs) for the Dutch Wechsler Intelligence Scales for Children. First, RCIs were calculated based on stability coefficients from a reference sample. Then, these RCIs were applied to a…
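
The abstract does not spell out the computation, but the textbook Jacobson-Truax form of a reliable change index, built from a test's standard deviation and its stability (test-retest) coefficient, can be sketched as follows. The scale values are illustrative, and the paper's exact procedure may differ:

```python
import math

def reliable_change_index(score1, score2, sd, stability_r):
    """Jacobson-Truax style RCI: the retest difference divided by the
    standard error of the difference, where the standard error of
    measurement comes from the test SD and the stability coefficient."""
    se_measurement = sd * math.sqrt(1.0 - stability_r)
    se_difference = math.sqrt(2.0) * se_measurement
    return (score2 - score1) / se_difference

# Illustrative values: an IQ-style scale (SD = 15) with stability r = 0.90.
rci = reliable_change_index(100, 110, sd=15, stability_r=0.90)
print(round(rci, 2))  # prints 1.49; |RCI| > 1.96 suggests change beyond measurement error
```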

  12. Establishing reliable cognitive change in children with epilepsy: The procedures and results for a sample with epilepsy

    NARCIS (Netherlands)

    van Iterson, L.; Augustijn, P.B.; de Jong, P.F.; van der Leij, A.

    2013-01-01

The goal of this study was to investigate reliable cognitive change in epilepsy by developing computational procedures to determine reliable change index scores (RCIs) for the Dutch Wechsler Intelligence Scales for Children. First, RCIs were calculated based on stability coefficients from a reference…

  13. Establishing Reliable Cognitive Change in Children with Epilepsy: The Procedures and Results for a Sample with Epilepsy

    Science.gov (United States)

    van Iterson, Loretta; Augustijn, Paul B.; de Jong, Peter F.; van der Leij, Aryan

    2013-01-01

    The goal of this study was to investigate reliable cognitive change in epilepsy by developing computational procedures to determine reliable change index scores (RCIs) for the Dutch Wechsler Intelligence Scales for Children. First, RCIs were calculated based on stability coefficients from a reference sample. Then, these RCIs were applied to a…

  14. Common handling procedures conducted in preclinical safety studies result in minimal hepatic gene expression changes in Sprague-Dawley rats.

    Directory of Open Access Journals (Sweden)

    Yudong D He

Full Text Available Gene expression profiling is a tool to gain mechanistic understanding of adverse effects in response to compound exposure. However, little is known about how the common handling procedures of experimental animals during a preclinical study alter baseline gene expression. We report gene expression changes in the livers of female Sprague-Dawley rats following common handling procedures. Baseline gene expression changes identified in this study provide insight on how these changes may affect interpretation of gene expression profiles following compound exposure. Rats were divided into three groups. One group was not subjected to handling procedures and served as controls for both handled groups. Animals in the other two groups were weighed, subjected to restraint in Broome restrainers, and administered water via oral gavage daily for 1 or 4 days with tail vein blood collections at 1, 2, 4, and 8 hours postdose on days 1 and 4. Significantly altered genes were identified in livers of animals following 1 or 4 days of handling when compared to the unhandled animals. Gene changes in animals handled for 4 days were similar to those handled for 1 day, suggesting a lack of habituation. The altered genes were primarily immune-function-related genes. These findings, along with a correlating increase in corticosterone levels, suggest that common handling procedures may cause a minor immune system perturbation.

  15. Updated Results for the Wake Vortex Inverse Model

    Science.gov (United States)

    Robins, Robert E.; Lai, David Y.; Delisi, Donald P.; Mellman, George R.

    2008-01-01

    NorthWest Research Associates (NWRA) has developed an Inverse Model for inverting aircraft wake vortex data. The objective of the inverse modeling is to obtain estimates of the vortex circulation decay and crosswind vertical profiles, using time history measurements of the lateral and vertical position of aircraft vortices. The Inverse Model performs iterative forward model runs using estimates of vortex parameters, vertical crosswind profiles, and vortex circulation as a function of wake age. Iterations are performed until a user-defined criterion is satisfied. Outputs from an Inverse Model run are the best estimates of the time history of the vortex circulation derived from the observed data, the vertical crosswind profile, and several vortex parameters. The forward model, named SHRAPA, used in this inverse modeling is a modified version of the Shear-APA model, and it is described in Section 2 of this document. Details of the Inverse Model are presented in Section 3. The Inverse Model was applied to lidar-observed vortex data at three airports: FAA acquired data from San Francisco International Airport (SFO) and Denver International Airport (DEN), and NASA acquired data from Memphis International Airport (MEM). The results are compared with observed data. This Inverse Model validation is documented in Section 4. A summary is given in Section 5. A user's guide for the inverse wake vortex model is presented in a separate NorthWest Research Associates technical report (Lai and Delisi, 2007a).
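
The run-compare-update loop the Inverse Model performs can be illustrated with a toy forward model and a brute-force parameter search. Here an exponential circulation decay stands in for SHRAPA (whose internals are documented separately), and all names and values are hypothetical:

```python
import math

def forward_model(gamma0, decay, times):
    """Toy forward model: exponentially decaying vortex circulation.
    A stand-in for SHRAPA; not the actual Shear-APA physics."""
    return [gamma0 * math.exp(-decay * t) for t in times]

def invert(observations, times, gamma0, decay_grid):
    """Run the forward model for each candidate decay rate and keep the one
    minimizing the sum of squared residuals -- the same iterate-until-
    criterion loop the Inverse Model abstract describes."""
    return min(decay_grid,
               key=lambda d: sum((p - o) ** 2
                                 for p, o in zip(forward_model(gamma0, d, times),
                                                 observations)))

times = list(range(0, 60, 5))              # wake age in seconds (illustrative)
obs = forward_model(450.0, 0.05, times)    # synthetic "lidar" circulation data
estimate = invert(obs, times, 450.0, [i / 1000 for i in range(101)])
```

The real Inverse Model additionally estimates the vertical crosswind profile and vortex parameters, and iterates with a user-defined convergence criterion rather than a fixed grid.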

  16. Non-linear spacecraft component parameters identification based on experimental results and finite element modelling

    Science.gov (United States)

    Vismara, S. O.; Ricci, S.; Bellini, M.; Trittoni, L.

    2016-06-01

    The objective of the present paper is to describe a procedure to identify and model the non-linear behaviour of structural elements. The procedure herein applied can be divided into two main steps: the system identification and the finite element model updating. The application of the restoring force surface method as a strategy to characterize and identify localized non-linearities has been investigated. This method, which works in the time domain, has been chosen because it has `built-in' characterization capabilities, it allows a direct non-parametric identification of non-linear single-degree-of-freedom systems and it can easily deal with sine-sweep excitations. Two different application examples are reported. At first, a numerical test case has been carried out to investigate the modelling techniques in the case of non-linear behaviour based on the presence of a free-play in the model. The second example concerns the flap of the Intermediate eXperimental Vehicle that successfully completed its 100-min mission on 11 February 2015. The flap was developed under the responsibility of Thales Alenia Space Italia, the prime contractor, which provided the experimental data needed to accomplish the investigation. The procedure here presented has been applied to the results of modal testing performed on the article. Once the non-linear parameters were identified, they were used to update the finite element model in order to prove its capability of predicting the flap behaviour for different load levels.

  17. SModelS: A Tool for Making Systematic Use of Simplified Models Results

    Science.gov (United States)

    Waltenberger, Wolfgang; SModelS Group.

    2016-10-01

We present an automated software tool "SModelS" to systematically confront theories Beyond the Standard Model (BSM) with experimental data. The tool consists of a general procedure to decompose such BSM theories into their Simplified Models Spectra (SMS). In addition, SModelS features a database containing the majority of the published SMS results of CMS and ATLAS. These results consist of the 95% confidence level upper limits on signal production cross sections. The two components together allow us to quickly confront any BSM model with LHC results. As a show-case example we will briefly discuss an application of our procedure to a specific supersymmetric model. It is one of our ongoing efforts to extend the framework to include also efficiency maps produced either by the experimental collaborations, by efforts performed within the phenomenological groups, or possibly also by ourselves. While the current implementation can handle null results only, it is our ultimate goal to build the Next Standard Model in a bottom-up fashion from both negative and positive results of several experiments. The implementation is open source, written in python, and available from http://smodels.hephy.at.
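
The core comparison this workflow automates, confronting a predicted signal cross section with the published 95% CL upper limit for the matching simplified-model topology, reduces to an exclusion ratio. A toy sketch with a hypothetical topology name and made-up numbers (this is not the SModelS API):

```python
def r_value(predicted_xsec_fb, upper_limit_fb):
    """Exclusion ratio r = predicted cross section / 95% CL upper limit.
    r > 1 means the simplified-model topology is excluded by that result."""
    return predicted_xsec_fb / upper_limit_fb

# Hypothetical database entry (topology -> upper limit in fb) and a
# hypothetical BSM point's predicted cross section for that topology.
database = {"T2tt_650_50": 3.2}
prediction = {"T2tt_650_50": 4.8}

for topo, ul in database.items():
    r = r_value(prediction[topo], ul)
    print(topo, "excluded" if r > 1.0 else "allowed", round(r, 2))
```

The actual tool first decomposes the full BSM spectrum into SMS topologies and sums the contributions before forming this ratio against its experimental-results database.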

  18. Generalised Chou-Yang model and recent results

    Energy Technology Data Exchange (ETDEWEB)

    Fazal-e-Aleem [International Centre for Theoretical Physics, Trieste (Italy); Rashid, H. [Punjab Univ., Lahore (Pakistan). Centre for High Energy Physics

    1996-12-31

It is shown that the most recent results of the E710 and UA4/2 collaborations for the total cross section and {rho}, together with earlier measurements, give good agreement with measurements of the differential cross section at 546 and 1800 GeV within the framework of the Generalised Chou-Yang model. These results are also compared with the predictions of other models. (author) 16 refs.

  19. Results of 45 arthroscopic Bankart procedures: Does the ISIS remain a reliable prognostic assessment after 5 years?

    Science.gov (United States)

    Boughebri, Omar; Maqdes, Ali; Moraiti, Constantina; Dib, Choukry; Leclère, Franck Marie; Valenti, Philippe

    2015-05-01

The Instability Severity Index Score (ISIS) includes preoperative clinical and radiological risk factors to select patients who can benefit from an arthroscopic Bankart procedure with a low rate of recurrence. Patients who underwent an arthroscopic Bankart procedure for anterior shoulder instability with an ISIS lower than or equal to four were assessed after a minimum 5-year follow-up. Forty-five shoulders were assessed at a mean of 79 months (range 60-118 months). Average age was 29.4 years (range 17-58 years) at the time of surgery. Postoperative function was assessed with the Walch-Duplay and Rowe scores for 26 patients; an adapted telephone interview was performed for the 19 remaining patients who could not be reassessed clinically. Failure was defined as the recurrence of an anterior dislocation or subluxation. Patients were asked whether they were finally very satisfied, satisfied or unhappy. The mean Walch-Duplay score at last follow-up was 84.3 (range 35-100). The final result was excellent in 14 patients (53.8 %), good in seven (26.9 %), poor in three (11.5 %) and bad in two (7.7 %). The mean Rowe score was 82.6 (range 35-100). Thirty-nine patients (86.7 %) were subjectively very satisfied or satisfied, and six (13.3 %) were unhappy. Four patients (8.9 %) had a recurrence of frank dislocation at a mean delay of 34 months (range 12-72 months). Three of them had a Hill-Sachs lesion preoperatively. Two patients had a preoperative ISIS of 4 points and two of 3 points. Selection based on the ISIS yields a low failure rate at an average follow-up of 5 years. Lowering the indication threshold to 3 points avoids combining two major risk factors for recurrence, each valued at 2 points. The existence of a Hill-Sachs lesion is a stronger indicator of the outcome of instability repair. Level IV, Retrospective Case Series, Treatment Study.

  20. Specific training for LESS surgery results from a prospective study in the animal model

    Directory of Open Access Journals (Sweden)

Giovanni Scala Marchini

    2016-02-01

Full Text Available ABSTRACT Objective: To prospectively evaluate the ability of post-graduate students enrolled in a laparoscopy program of the Institute for Teaching and Research to complete single-port total nephrectomies. Materials and Methods: Fifteen post-graduate students were enrolled in the study, which was performed using the SILS™ port system for single-port procedures. All participants were already proficient in total nephrectomies in animal models and performed a left followed by a right nephrectomy. Analyzed data comprised incision size, complications, and the time taken to complete each part of the procedure. Statistical significance was set at p<0.05. Results: All students successfully finished the procedure using the single-port system. A total of 30 nephrectomies were analyzed. Mean incision size was 3.61 cm, mean time to trocar insertion was 9.61 min, and mean time to dissect the renal hilum was 25.3 min. Mean time to dissect the kidney was 5.18 min and to complete the whole procedure was 39.4 min. Total renal hilum and operative times were 45.8% (p<0.001) and 38% (p=0.001) faster in the second procedure, respectively. Complications included 3 renal vein lesions, 2 kidney lacerations and 1 lesion of a lumbar artery. All were immediately identified and corrected laparoscopically through the single-port system, except for one renal vein lesion, which required the introduction of an auxiliary laparoscopic port. Conclusion: Laparoscopic single-port nephrectomy in the experimental animal model is a feasible but relatively difficult procedure for those with intermediate laparoscopic experience. Intraoperative complications can be successfully treated with the single-port system. Training helps reduce surgical time and improve outcomes.

  1. Life cycle Prognostic Model Development and Initial Application Results

    Energy Technology Data Exchange (ETDEWEB)

    Jeffries, Brien; Hines, Wesley; Nam, Alan; Sharp, Michael; Upadhyaya, Belle [The University of Tennessee, Knoxville (United States)

    2014-08-15

    In order to obtain more accurate Remaining Useful Life (RUL) estimates based on empirical modeling, a Lifecycle Prognostics algorithm was developed that integrates various prognostic models. These models can be categorized into three types based on the type of data they process. The application of multiple models takes advantage of the most useful information available as the system or component operates through its lifecycle. The Lifecycle Prognostics is applied to an impeller test bed, and the initial results serve as a proof of concept.

  2. The Role of 3-D Heart Models in Planning and Executing Interventional Procedures.

    Science.gov (United States)

    Grant, Elena K; Olivieri, Laura J

    2017-09-01

    Percutaneous interventions aimed at addressing congenital and structural heart disease are simultaneously becoming more common and more complex as time progresses. An increasing number of heart defects that had previously required open heart surgery can now be successfully addressed in the cardiac catheterization laboratory. Adequate preprocedural preparation for these novel, complex procedures is critical to ensure their success. Diagnostic data can be collected before the intervention and displayed in multiple formats during the procedure. Advanced cardiac imaging, including cardiac magnetic resonance and cardiac computed tomography form the basis of this preparatory information. Novel methods of displaying these images are becoming more widespread and more useful, including 3-D printed models, 3-D digital models displayed on a virtual or augmented reality system and 3-D digital models overlaid onto a fluoroscopy system. In this review we summarize these state-of-the-art technologies and how they are able to help interventional cardiologists push the boundaries of what is possible in the cardiac catheterization laboratory. Copyright © 2017 Canadian Cardiovascular Society. All rights reserved.

  3. [Results of Kapandji-Sauvé procedure with distal radio-ulnar fusion and segmental resection of the ulna].

    Science.gov (United States)

    Haferkamp, H; Heidemann, B; Gühne, O; Deventer, B

    2003-05-01

The Kapandji-Sauvé procedure was performed in 75 patients between 1990 and 2003. The most important indication was painful and restricted forearm rotation after fracture of the distal radius combined with dislocation or destruction of the distal radioulnar joint. Twenty-five patients were followed up using a modified Martini score. We found significant improvement of forearm rotation, reduced pain and good patient satisfaction at long-term follow-up ranging from three to 12 years.

  4. Multivariate Bias Correction Procedures for Improving Water Quality Predictions using Mechanistic Models

    Science.gov (United States)

    Libera, D.; Arumugam, S.

    2015-12-01

Water quality observations are usually not available on a continuous basis because of the expense and labor required, so calibrating and validating a mechanistic model is often difficult. Further, any model predictions inherently have bias (i.e., under/over-estimation) and require techniques that preserve the long-term mean monthly attributes. This study suggests and compares two multivariate bias-correction techniques to improve the performance of the SWAT model in predicting daily streamflow and TN loads across the Southeast based on split-sample validation. The first approach is a dimension-reduction technique, canonical correlation analysis, which regresses the observed multivariate attributes against the SWAT-simulated values. The second approach, importance weighting from signal processing, applies a weight based on the ratio of the observed and modelled densities to shift the model data's mean, variance, and cross-correlation towards the observed values. These procedures were applied to three watersheds chosen from the Water Quality Network in the Southeast Region, specifically watersheds with sufficiently large drainage areas and numbers of observed data points. The performance of these two approaches is also compared with independent estimates from the USGS LOADEST model. Uncertainties in the bias-corrected estimates due to limited water quality observations are also discussed.
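
Neither bias-correction scheme is given in closed form in the abstract, but the simplest member of this family, a linear rescaling that shifts the model output's mean and variance toward the observed values, can be sketched as follows (illustrative numbers; the paper's CCA and importance-weighting procedures are more general and also adjust cross-correlation):

```python
import statistics

def mean_variance_correct(model_values, obs_mean, obs_sd):
    """Linear bias correction: rescale model output so its mean and
    standard deviation match observed statistics. A simple stand-in for
    the multivariate schemes compared in the study."""
    m_mean = statistics.mean(model_values)
    m_sd = statistics.stdev(model_values)
    return [obs_mean + (x - m_mean) * (obs_sd / m_sd) for x in model_values]

raw = [2.0, 3.5, 5.0, 4.0, 6.5]   # hypothetical simulated TN loads
corrected = mean_variance_correct(raw, obs_mean=3.0, obs_sd=1.0)
```

By construction the corrected series reproduces the observed mean and standard deviation while preserving the model's temporal ordering.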

  5. Effects of modelling examples in complex procedural skills training: a randomised study.

    Science.gov (United States)

    Bjerrum, Anne Sofie; Hilberg, Ole; van Gog, Tamara; Charles, Peder; Eika, Berit

    2013-09-01

Learning complex procedural skills, such as bronchoscopy, through simulation training, imposes a high cognitive load on novices. Example-based learning has been shown to be an effective way to reduce cognitive load and enhance learning outcomes. Prior research has shown that modelling examples, in which a human model demonstrates the skill to a learner, were effective for learning basic surgical skills. However, principles derived from simple skills training do not necessarily generalise to more complex skills. Therefore, the present study examined the effectiveness of integrating modelling examples into simulation training for a more complex procedural skill - bronchoscopy. Moreover, this study extended previous simulation studies by using a physical demonstration rather than video-based modelling examples. Forty-eight medical students were randomised into a modelling group and a control group. They all practised on eight bronchoscopy simulation cases individually, followed by standardised feedback from an instructor. Additionally, the modelling group watched three modelling examples of the simulated bronchoscopy, performed by the instructor. These modelling examples were interspersed between cases. Assessments were carried out at pre-, post- and 3-week retention tests with simulator-measured performance metrics. The primary outcome measure was the percentage of segments entered/minute. Other measures were wall collisions, red-out, the percentage of segments entered and the time to completion. Group differences were examined using repeated measures analysis of variance (ANOVA). A clear learning curve was observed for both groups, but as hypothesised, the modelling group outperformed the control group on all parameters except the percentage of segments entered on the post-test and retained this superiority at the retention test. For the primary outcome measure, the percentage of segments entered/minute, the modelling group achieved a 46% higher score at the post

  6. A Comparison between Robust z and 0.3-Logit Difference Procedures in Assessing Stability of Linking Items for the Rasch Model

    Science.gov (United States)

    Huynh, Huynh; Rawls, Anita

    2011-01-01

    There are at least two procedures to assess item difficulty stability in the Rasch model: robust z procedure and "0.3 Logit Difference" procedure. The robust z procedure is a variation of the z statistic that reduces dependency on outliers. The "0.3 Logit Difference" procedure is based on experiences in Rasch linking for tests…
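
As commonly described in the Rasch linking literature, the two procedures reduce to simple flagging rules: a robust z that centers each linking item's difficulty shift at the median and scales it by 0.74 × IQR (a robust estimate of the standard deviation), and a fixed 0.3-logit threshold on the shift itself. A sketch under those assumptions, with illustrative data (implementations vary in cutoffs and details):

```python
import statistics

def robust_z(diffs):
    """Robust z for linking-item stability: each item's difficulty shift
    centered at the median and scaled by 0.74 * IQR, reducing the
    influence of outlying items on the scale estimate."""
    med = statistics.median(diffs)
    q1, _, q3 = statistics.quantiles(diffs, n=4)
    return [(d - med) / (0.74 * (q3 - q1)) for d in diffs]

# Hypothetical item difficulty shifts between administrations (logits).
diffs = [0.05, -0.10, 0.02, 0.45, -0.04, 0.08]
zs = robust_z(diffs)
flagged_z = [i for i, z in enumerate(zs) if abs(z) > 1.645]     # robust z rule
flagged_03 = [i for i, d in enumerate(diffs) if abs(d) > 0.3]   # 0.3-logit rule
```

For this toy data both rules flag the same item (the 0.45-logit shift), but on real linking sets the two procedures can disagree, which is the comparison the study investigates.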

  7. Nitrous oxide emissions from cropland: a procedure for calibrating the DayCent biogeochemical model using inverse modelling

    Science.gov (United States)

    Rafique, Rashad; Fienen, Michael N.; Parkin, Timothy B.; Anex, Robert P.

    2013-01-01

DayCent is a biogeochemical model of intermediate complexity widely used to simulate greenhouse gases (GHG), soil organic carbon and nutrients in crop, grassland, forest and savannah ecosystems. Although this model has been applied to a wide range of ecosystems, it is still typically parameterized through a traditional "trial and error" approach and has not been calibrated using statistical inverse modelling (i.e. algorithmic parameter estimation). The aim of this study is to establish and demonstrate a procedure for calibration of DayCent to improve estimation of GHG emissions. We coupled DayCent with the parameter estimation (PEST) software for inverse modelling. The PEST software can be used for calibration through regularized inversion as well as for model sensitivity and uncertainty analysis. The DayCent model was analysed and calibrated using N2O flux data collected over 2 years at the Iowa State University Agronomy and Agricultural Engineering Research Farms, Boone, IA. Crop year 2003 data were used for model calibration and 2004 data were used for validation. The optimization of DayCent model parameters using PEST significantly reduced model residuals relative to the default DayCent parameter values. Parameter estimation improved the model performance by reducing the sum of weighted squared residual differences between measured and modelled outputs by up to 67 %. For the calibration period, simulation with the default model parameter values underestimated mean daily N2O flux by 98 %. After parameter estimation, the model underestimated the mean daily fluxes by 35 %. During the validation period, the calibrated model reduced the sum of weighted squared residuals by 20 % relative to the default simulation. The sensitivity analysis provides important insights into the model structure, offering guidance for model improvement.

  8. Stress distribution on the thorax after the Nuss procedure for pectus excavatum results in different patterns between adult and child patients.

    Science.gov (United States)

    Nagasao, Tomohisa; Miyamoto, Junpei; Tamaki, Tamotsu; Ichihara, Kazuhiko; Jiang, Hua; Taguchi, Toshihiko; Yozu, Ryohei; Nakajima, Tatsuo

    2007-12-01

In the Nuss procedure, in which the deformed thorax is forcibly corrected by insertion of correction bars, considerable stresses occur on the patient's thorax. We performed the present study to elucidate how stress patterns on the thorax after this procedure differ between child and adult patients. Eighteen patients with pectus excavatum, constituting a child group (n = 10) and an adult group (n = 8), were included in the study. After a 3-dimensional computer-assisted design model was produced from computed tomographic data for each patient, simulation of the Nuss procedure was performed on the model. The stresses occurring on each thorax were then calculated using the finite element method and compared between the child and adult groups in terms of intensity on each rib and distribution patterns over the whole thorax. For all 12 ribs, significantly greater stress occurred in the adult group than in the child group. Whereas the stresses on the thorax showed concentrated patterns in the child group, widely distributed patterns were observed in the adult group. The stresses that occur on the thorax after the Nuss procedure thus differ between children and adults in both intensity and distribution. These differences should be taken into consideration in managing postoperative pain after the Nuss procedure.

  9. An Overview of Models of Speaking Performance and Its Implications for the Development of Procedural Framework for Diagnostic Speaking Tests

    Science.gov (United States)

    Zhao, Zhongbao

    2013-01-01

    This paper aims at developing a procedural framework for the development and validation of diagnostic speaking tests. The researcher reviews the current available models of speaking performance, analyzes the distinctive features and then points out the implications for the development of a procedural framework for diagnostic speaking tests. On…

  10. Modelling CO2 flow in naturally fractured geological media using MINC and multiple subregion upscaling procedure

    Science.gov (United States)

    Tatomir, Alexandru Bogdan A. C.; Flemisch, Bernd; Class, Holger; Helmig, Rainer; Sauter, Martin

    2017-04-01

Geological storage of CO2 represents one viable solution to reduce greenhouse gas emissions to the atmosphere. Potential leakage from CO2 storage can occur through networks of interconnected fractures. The geometrical complexity of these networks is often very high, involving fractures occurring at various scales and having hierarchical structures. Such multiphase flow systems are usually hard to solve with a discrete fracture modelling (DFM) approach. Therefore, continuum fracture models assuming average properties are usually preferred. The multiple interacting continua (MINC) model is an extension of the classic double porosity model (Warren and Root, 1963) which accounts for the non-linear behaviour of the matrix-fracture interactions. For CO2 storage applications, the transient representation of the inter-porosity two-phase flow plays an important role. This study tests the accuracy and computational efficiency of the MINC method complemented with the multiple sub-region (MSR) upscaling procedure against the DFM. The two-phase flow MINC simulator is implemented in the free open-source numerical toolbox DuMux (www.dumux.org). The MSR procedure (Gong et al., 2009) determines the inter-porosity terms by solving simplified local single-phase flow problems. The DFM is considered the reference solution. The numerical examples consider a quasi-1D reservoir with a quadratic fracture system, a five-spot radially symmetric reservoir, and a completely randomly generated fracture system. Keywords: MINC, upscaling, two-phase flow, fractured porous media, discrete fracture model, continuum fracture model

  11. Meteorological Uncertainty of atmospheric Dispersion model results (MUD)

    DEFF Research Database (Denmark)

    Havskov Sørensen, Jens; Amstrup, Bjarne; Feddersen, Henrik

    The MUD project addresses assessment of uncertainties of atmospheric dispersion model predictions, as well as optimum presentation to decision makers. Previously, it has not been possible to estimate such uncertainties quantitatively, but merely to calculate the 'most likely' dispersion scenario. However, recent developments in numerical weather prediction (NWP) include probabilistic forecasting techniques, which can be utilised also for atmospheric dispersion models. The ensemble statistical methods developed and applied to NWP models aim at describing the inherent uncertainties of the meteorological model results. These uncertainties stem from e.g. limits in meteorological observations used to initialise meteorological forecast series. By perturbing the initial state of an NWP model run in agreement with the available observational data, an ensemble of meteorological forecasts is produced…
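
    The ensemble idea can be sketched independently of any particular NWP or dispersion code: perturb an uncertain input within its observational uncertainty, run the model once per ensemble member, and report the spread of the prediction instead of a single 'most likely' value. The 1-D Gaussian puff below and all its parameters are illustrative assumptions, not the MUD project's models.

```python
# Ensemble-dispersion sketch: perturb the advecting wind speed within an
# assumed observational uncertainty and look at the spread of the predicted
# concentration at a receptor. Analytic 1-D Gaussian puff; values are invented.
import math
import random

def gaussian_puff(x, t, u, D=50.0, Q=1.0):
    """Analytic 1-D Gaussian puff concentration at position x, time t, wind u."""
    sigma2 = 2.0 * D * t
    return Q / math.sqrt(2.0 * math.pi * sigma2) * math.exp(-(x - u * t) ** 2 / (2.0 * sigma2))

def ensemble_forecast(n=200, u_mean=5.0, u_sd=0.5, x=5000.0, t=1000.0, seed=1):
    rng = random.Random(seed)
    members = [gaussian_puff(x, t, rng.gauss(u_mean, u_sd)) for _ in range(n)]
    mean = sum(members) / n
    spread = (sum((c - mean) ** 2 for c in members) / n) ** 0.5
    return mean, spread
```

    The ensemble mean at the receptor is lower than the single deterministic prediction, and the spread quantifies the forecast uncertainty that a single 'most likely' run cannot express.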

  12. Meteorological Uncertainty of atmospheric Dispersion model results (MUD)

    DEFF Research Database (Denmark)

    Havskov Sørensen, Jens; Amstrup, Bjarne; Feddersen, Henrik

    The MUD project addresses assessment of uncertainties of atmospheric dispersion model predictions, as well as possibilities for optimum presentation to decision makers. Previously, it has not been possible to estimate such uncertainties quantitatively, but merely to calculate the ‘most likely’ dispersion scenario. However, recent developments in numerical weather prediction (NWP) include probabilistic forecasting techniques, which can be utilised also for long-range atmospheric dispersion models. The ensemble statistical methods developed and applied to NWP models aim at describing the inherent uncertainties of the meteorological model results. These uncertainties stem from e.g. limits in meteorological observations used to initialise meteorological forecast series. By perturbing e.g. the initial state of an NWP model run in agreement with the available observational data, an ensemble…

  13. Mathematical Existence Results for the Doi-Edwards Polymer Model

    Science.gov (United States)

    Chupin, Laurent

    2017-01-01

    In this paper, we present some mathematical results on the Doi-Edwards model describing the dynamics of flexible polymers in melts and concentrated solutions. This model, developed in the late 1970s, has been used and extensively tested in the modeling and simulation of polymer flows. From a mathematical point of view, the Doi-Edwards model consists of a strong coupling between the Navier-Stokes equations and a highly nonlinear constitutive law. The aim of this article is to provide a rigorous proof of the well-posedness of the Doi-Edwards model, namely that it has a unique regular solution. We also prove, which is generally much more difficult for flows of viscoelastic type, that the solution is global in time in the two-dimensional case, without any restriction on the smallness of the data.

  14. Direct colposcopic vision used with the LLETZ procedure for optimal treatment of CIN: results of joint cohort studies.

    Science.gov (United States)

    Carcopino, Xavier; Mancini, Julien; Charpin, Colette; Grisot, Céline; Maycock, Joan Annette; Houvenaeghel, Gilles; Agostini, Aubert; Boubli, Léon; Prendiville, Walter

    2013-11-01

    To assess the value of direct colposcopic vision (DCV) for optimizing large loop excision of the transformation zone (LLETZ) for the treatment of cervical intraepithelial neoplasia (CIN). Data from 648 patients who underwent excisional procedures for CIN and were included in two previously published cohort studies were retrospectively reviewed. Women who had a LLETZ were included for analysis (n = 436). Margin status, surgical specimen dimensions and volume were analysed according to the use of colposcopy during the procedure. Compared to LLETZ guided by a previous colposcopy report only, and to LLETZ performed immediately after colposcopy, DCV allowed for a significantly higher rate of clear margins: 33 (52.4 %), 104 (68.0 %) and 142 (84.5 %), respectively (p < 0.001). It also allowed for a significantly higher probability of achieving both negative margins and a depth of specimen <10 mm: 10 (15.9 %) cases, 47 (30.7 %) cases and 125 (74.4 %) cases, respectively (p < 0.001). In multivariate analysis, when compared with the use of a previous colposcopy report or with colposcopy immediately before the LLETZ, DCV allowed for a significantly higher probability of negative margins (AOR: 4.61; 95 % CI: 2.37-8.99 and AOR: 2.55; 95 % CI: 1.47-4.41), combined negative margins and depth <75th percentile (AOR: 3.67; 95 % CI: 1.97-6.86 and AOR: 3.05; 95 % CI: 1.91-4.87), and combined negative margins and volume <75th percentile (AOR: 12.96; 95 % CI: 5.99-28.05 and AOR: 6.16; 95 % CI: 3.75-10.14), respectively. When used with the LLETZ procedure, DCV allows for optimal outcomes in terms of negative resection margins and minimized depth and volume of the excised specimen, and should therefore be recommended.
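
    The adjusted odds ratios above come from a multivariate model; as a quick sanity check, an unadjusted odds ratio for clear margins (DCV vs. LLETZ guided by a previous colposcopy report only) can be recovered from the reported counts. The denominators are inferred from the reported percentages (33/52.4 % → 63, 142/84.5 % → 168), which is an assumption of this sketch.

```python
# Unadjusted odds ratio with a 95% Wald confidence interval from 2x2 counts.
# Denominators 63 and 168 are inferred from the abstract's percentages.
import math

def odds_ratio(a, n1, b, n2):
    """OR for success counts a/n1 vs. b/n2, with a 95% Wald CI."""
    or_ = (a / (n1 - a)) / (b / (n2 - b))
    se = math.sqrt(1 / a + 1 / (n1 - a) + 1 / b + 1 / (n2 - b))
    lo = math.exp(math.log(or_) - 1.96 * se)
    hi = math.exp(math.log(or_) + 1.96 * se)
    return or_, (lo, hi)

# Clear margins: DCV (142 of 168) vs. previous-report-only LLETZ (33 of 63).
or_dcv, ci = odds_ratio(142, 168, 33, 63)
```

    The unadjusted estimate lands close to the reported adjusted odds ratio of 4.61, which is reassuring but not a substitute for the paper's multivariate adjustment.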

  15. Modeling Results for the ITER Cryogenic Fore Pump

    Science.gov (United States)

    Zhang, Dongsheng

    The work presented here is the analysis and modeling of the ITER Cryogenic Fore Pump (CFP), also called the Cryogenic Viscous Compressor (CVC). Unlike common cryopumps, which are usually used to create and maintain vacuum, the cryogenic fore pump is designed for ITER to collect and compress hydrogen isotopes during the regeneration process of the torus cryopumps, and it works in the viscous flow regime. As a result, both adsorption boundary conditions and transport phenomena contribute unique features to the pump performance. In this report, the physical mechanisms of cryopumping are studied, especially the diffusion-adsorption process, and these are coupled with the standard equations of species, momentum and energy balance, as well as the equation of state. Numerical models are developed from these highly coupled non-linear conservation equations. Thermal and kinetic properties are treated as functions of temperature, pressure, and composition of the gas mixture. To solve such a set of equations, a novel numerical technique, identified as the Group-Member numerical technique, is proposed. This document presents three numerical models: a transient model, a steady-state model, and a hemisphere (or molecular flow) model. The first two models are developed based on analysis of the raw experimental data, while the third model is developed as a preliminary study. The modeling results are compared with available experimental data for verification. The models can be used for cryopump design, and can also inform related problems such as loss of vacuum in a cryomodule or cryogenic desublimation. The investigation builds connections between Mechanical Engineering and other disciplines, such as Chemical Engineering, Physics, and Chemistry.

  16. Comparison of NASCAP modelling results with lumped circuit analysis

    Science.gov (United States)

    Stang, D. B.; Purvis, C. K.

    1980-01-01

    Engineering design tools that can be used to predict the development of absolute and differential potentials by realistic spacecraft under geomagnetic substorm conditions are described. Two types of analyses are in use: (1) the NASCAP code, which computes quasistatic charging of geometrically complex objects with multiple surface materials in three dimensions; and (2) lumped-element equivalent circuit models that are used for analyses of particular spacecraft. The equivalent circuit models require very little computation time; however, they cannot account for effects, such as the formation of potential barriers, that are inherently multidimensional. Steady-state potentials of structure and insulation computed by NASCAP are compared with those resulting from the equivalent circuit model.
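
    The lumped-element approach can be reduced to a one-node sketch: an insulating surface coupled to structure ground through a capacitance and a leakage resistance, charged by a net environmental current. The circuit topology and every value below are illustrative assumptions, not taken from NASCAP or the referenced spacecraft models.

```python
# One-node lumped charging sketch: C dV/dt = I - V/R, integrated with
# explicit Euler. I, C, R, dt and the step count are invented for illustration.

def charge(I=1e-6, C=1e-9, R=1e9, dt=1e-4, steps=20000):
    """Surface potential after `steps` Euler steps of C dV/dt = I - V/R."""
    V = 0.0
    for _ in range(steps):
        V += dt * (I - V / R) / C
    return V

V_final = charge()
# Steady state would be V = I*R = 1000 V; after two time constants (RC = 1 s)
# the node has charged most of the way there.
```

    This captures why such models are cheap (one ODE per node) and why they miss multidimensional effects such as potential barriers, which require a field solution like NASCAP's.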

  17. The East model: recent results and new progresses

    CERN Document Server

    Faggionato, Alessandra; Roberto, Cyril; Toninelli, Cristina

    2012-01-01

    The East model is a particular one dimensional interacting particle system in which certain transitions are forbidden according to some constraints depending on the configuration of the system. As such it has received particular attention in the physics literature as a special case of a more general class of systems referred to as kinetically constrained models, which play a key role in explaining some features of the dynamics of glasses. In this paper we give an extensive overview of recent rigorous results concerning the equilibrium and non-equilibrium dynamics of the East model together with some new improvements.
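
    The dynamics described above can be sketched in a few lines of Monte Carlo under one common convention: a site may be resampled only when its left neighbour is vacant, and a frozen vacant boundary site facilitates the chain. Heat-bath updates then leave the product Bernoulli(p) measure invariant, because the constraint never depends on the spin being flipped. All parameters are illustrative assumptions.

```python
# Kinetically constrained Monte Carlo for the East model (one convention):
# site i in 1..N-1 may be resampled to Bernoulli(p) only if s[i-1] == 0;
# site 0 is a frozen vacant boundary. N, p, sweeps are illustrative.
import random

def east_model(N=200, p=0.7, sweeps=500, seed=42):
    rng = random.Random(seed)
    s = [0] * N                        # start from the maximally mobile state
    for _ in range(sweeps * N):
        i = rng.randrange(1, N)
        if s[i - 1] == 0:              # kinetic constraint: left neighbour vacant
            s[i] = 1 if rng.random() < p else 0
    return s

spins = east_model()
density = sum(spins[1:]) / (len(spins) - 1)
```

    After equilibration the empirical density approaches p, even though the constraint forbids most transitions at any instant; it is this constrained relaxation whose timescales the rigorous results characterise.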

  18. Constraining hybrid inflation models with WMAP three-year results

    CERN Document Server

    Cardoso, A

    2006-01-01

    We reconsider the original model of quadratic hybrid inflation in light of the WMAP three-year results and study the possibility of obtaining a spectral index of primordial density perturbations, $n_s$, smaller than one from this model. The original hybrid inflation model naturally predicts $n_s \geq 1$ in the false vacuum dominated regime, but it is also possible to have $n_s < 1$ when the quadratic term dominates. We therefore investigate whether there is also an intermediate regime compatible with the latest constraints, where the scalar field value during the last 50 e-folds of inflation is less than the Planck scale.

  19. Recent MEG Results and Predictive SO(10) Models

    CERN Document Server

    Fukuyama, Takeshi

    2011-01-01

    Recent MEG results of a search for the lepton flavor violating (LFV) muon decay, $\mu \to e \gamma$, show 3 events as the best value for the number of signals in the maximum likelihood fit. Although this result is still far from evidence or discovery from a statistical point of view, it might be a sign of new physics beyond the Standard Model. As is well known, supersymmetric (SUSY) models can generate a $\mu \to e \gamma$ decay rate within the search reach of the MEG experiment. A certain class of SUSY grand unified theory (GUT) models, such as the minimal SUSY SO(10) model (we call this class of models "predictive SO(10) models"), can unambiguously determine the fermion Yukawa coupling matrices, in particular the neutrino Dirac Yukawa matrix. Based on the universal boundary conditions for soft SUSY breaking parameters at the GUT scale, we calculate the rate of the $\mu \to e \gamma$ process by using the completely determined Dirac Yukawa matrix in two examples of predictive SO(10) models. If we ...

  20. A Tractable Model of the LTE Access Reservation Procedure for Machine-Type Communications

    DEFF Research Database (Denmark)

    Nielsen, Jimmy Jessen; Min Kim, Dong; Madueño, Germán Corrales;

    2015-01-01

    A canonical scenario in Machine-Type Communications (MTC) is one featuring a large number of devices, each of them with sporadic traffic. Hence, the number of served devices in a single LTE cell is not determined by the available aggregate rate, but rather by the limitations of the LTE access reservation protocol. Specifically, the limited number of contention preambles and the limited amount of uplink grants per random access response are crucial to consider when dimensioning LTE networks for MTC. We propose a low-complexity model that encompasses these two limitations and allows us to evaluate … on the preamble collisions. A comparison with the simulated LTE access reservation procedure that follows the 3GPP specifications confirms that our model provides an accurate estimation of the system outage event and the number of supported MTC devices.
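
    A back-of-the-envelope version of the two limitations mentioned above can be written directly: with M contending devices and N preambles, a device succeeds only if it picked a singleton preamble, and at most G uplink grants are issued per random access response. This is a simplified stand-in for the paper's model; N = 54 is a commonly cited number of contention preambles, and G is an invented example value.

```python
# Toy LTE access-reservation sketch: expected devices served in one access
# opportunity, limited by preamble collisions and by the grant budget G.
# This is not the paper's model; N and G are illustrative assumptions.

def expected_served(M, N=54, G=10):
    """Expected number of devices served out of M contenders."""
    # P(a given device picks a preamble chosen by nobody else) = (1 - 1/N)^(M-1)
    singletons = M * (1.0 - 1.0 / N) ** (M - 1)
    return min(singletons, G)       # uplink grants cap the successes
```

    The expression makes the paper's point visible: beyond a load peak, adding devices *reduces* the expected number served, so the cell is dimensioned by the access reservation protocol rather than by the aggregate data rate.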

  1. Standard Model physics results from ATLAS and CMS

    CERN Document Server

    Dordevic, Milos

    2015-01-01

    The most recent results of Standard Model physics studies in proton-proton collisions at 7 TeV and 8 TeV center-of-mass energy, based on data recorded by the ATLAS and CMS detectors during LHC Run I, are reviewed. This overview includes studies of vector boson production cross sections and properties, results on V+jets production with light and heavy flavours, the latest VBS and VBF results, measurements of diboson production with an emphasis on ATGC and QTGC searches, as well as results on inclusive jet cross sections with strong coupling constant measurements and PDF constraints. The outlined results are compared to the predictions of the Standard Model.

  2. A Procedure for Modeling Structural Component/Attachment Failure Using Transient Finite Element Analysis

    Science.gov (United States)

    Lovejoy, Andrew E.; Jegley, Dawn C. (Technical Monitor)

    2007-01-01

    Structures often comprise smaller substructures that are connected to each other or attached to the ground by a set of finite connections. Under static loading one or more of these connections may exceed allowable limits and be deemed to fail. Of particular interest is the structural response when a connection is severed (failed) while the structure is under static load. A transient failure analysis procedure was developed by which it is possible to examine the dynamic effects that result from introducing a discrete failure while a structure is under static load. The failure is introduced by replacing a connection load history by a time-dependent load set that removes the connection load at the time of failure. The subsequent transient response is examined to determine the importance of the dynamic effects by comparing the structural response with the appropriate allowables. Additionally, this procedure utilizes a standard finite element transient analysis that is readily available in most commercial software, permitting the study of dynamic failures without the need to purchase software specifically for this purpose. The procedure is developed and explained, demonstrated on a simple cantilever box example, and finally demonstrated on a real-world example, the American Airlines Flight 587 (AA587) vertical tail plane (VTP).
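
    The procedure described above can be illustrated on a single-degree-of-freedom analogue: the severed connection's reaction is replaced by an external load history that drops to zero at the failure time, and the equation of motion is integrated with the central-difference method available in any transient FE code. The mass, damping, stiffness, connection load and failure time below are illustrative assumptions, not values from the AA587 analysis.

```python
# SDOF sketch of the transient failure procedure: m x'' + c x' + k x = R(t),
# where R(t) is the connection load history, stepped to zero at t_fail.
# All parameter values are invented for illustration.

def transient_failure(m=1.0, c=0.4, k=100.0, R0=50.0, t_fail=1.0,
                      dt=1e-3, t_end=6.0):
    """Central-difference integration; returns (times, displacements)."""
    def load(t):
        return R0 if t < t_fail else 0.0     # connection load removed at t_fail

    x_prev = R0 / k                          # static equilibrium before failure
    x = x_prev                               # at rest initially
    ts, xs = [0.0], [x]
    t = 0.0
    while t < t_end:
        a = (load(t) - c * (x - x_prev) / dt - k * x) / m
        x_next = 2.0 * x - x_prev + dt * dt * a
        x_prev, x = x, x_next
        t += dt
        ts.append(t)
        xs.append(x)
    return ts, xs

ts, xs = transient_failure()
peak_excursion = min(xs)    # dynamic overshoot past the new static equilibrium
```

    The response sits at the static solution until the failure time, then overshoots the new equilibrium by nearly the full static offset (the classic dynamic amplification for a suddenly removed load) before damping out; comparing that overshoot against allowables is exactly the check the procedure automates.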

  3. Clinical use of a 15-W diode laser in small animal surgery: results in 30 varied procedures

    Science.gov (United States)

    Crowe, Dennis T.; Swalander, David; Hittenmiller, Donald; Newton, Jenifer

    1999-06-01

    The use of a 15-watt diode laser (CeramOptec) in 30 surgical procedures in dogs and cats was reviewed. Ease of use, operator safety, hemostasis control, wound healing, surgical time, complication rate, and pain control were observed and recorded. Procedures performed were partial pancreatectomy, nasal carcinoma ablation, medial meniscus channeling, perianal and anorectal mass removal (5), hemangioma and hemangiopericytoma removal from two legs, benign skin mass removal (7), liver lobectomy, partial prostatectomy, soft palate resection, partial arytenoidectomy, partial ablation of a thyroid carcinoma, photo-vaporization of the tumor bed following malignant tumor resection (4), neurosheath tumor removal from the tongue, tail sebaceous cyst resection, and malignant mammary tumor and mast cell tumor removal. The laser was found to be very simple and safe to use. Hemostasis was excellent in all but the liver and prostate surgeries. The laser was particularly effective in preventing hemorrhage during perianal, anal, and tongue mass removal. It is estimated that a time and blood loss savings of 50% over conventional surgery occurred with the use of the laser. All external wounds made by the laser appeared to heal faster and with less inflammation than those made with a conventional or electrosurgical scalpel.

  4. Relationship Marketing results: proposition of a cognitive mapping model

    Directory of Open Access Journals (Sweden)

    Iná Futino Barreto

    2015-12-01

    Objective – This research sought to develop a cognitive model that expresses how marketing professionals understand the relationship between the constructs that define relationship marketing (RM). It also tried to understand, using the obtained model, how objectives in this field are achieved. Design/methodology/approach – Through cognitive mapping, we traced 35 individual mental maps, highlighting how each respondent understands the interactions between RM elements. Based on the views of these individuals, we established an aggregate mental map. Theoretical foundation – The topic is based on a literature review that explores the RM concept and its main elements. Based on this review, we listed eleven main constructs. Findings – We established an aggregate mental map that represents the RM structural model. Model analysis identified that customer lifetime value (CLV) is understood as the final result of RM. We also observed that the impact of most of the RM elements on CLV is mediated by loyalty. Personalization and quality, on the other hand, proved to be process input elements, and are the ones that most strongly impact the others. Finally, we highlight that elements that punish customers are much less effective than elements that benefit them. Contributions – The model was able to incorporate core elements of RM that are absent from most formal models: CLV and customization. The analysis allowed us to understand the interactions between the RM elements and how the end result of RM (CLV) is formed. This understanding improves knowledge on the subject and helps guide, assess and correct actions.

  5. Are joint and soft tissue injections painful? Results of a national French cross-sectional study of procedural pain in rheumatological practice

    Directory of Open Access Journals (Sweden)

    Poncet Coralie

    2010-01-01

    Abstract Background Joint, spinal and soft tissue injections are commonly performed by rheumatologists in their daily practice. Contrary to other procedures, e.g. those performed in pediatric care, little is known about the frequency, the intensity and the management of procedural pain observed in osteo-articular injections in daily practice. Methods This observational, prospective, national study was carried out among a French national representative database of primary rheumatologists to evaluate the prevalence and intensity of pain caused by intra- and peri-articular injections, synovial fluid aspirations, soft tissue injections, and spinal injections. For each physician, data were collected over 1 month, for up to 40 consecutive patients (>18 years old) for whom a synovial fluid aspiration, an intra- or peri-articular injection or a spinal injection was carried out during consultations. Statistical analysis was carried out in order to compare patients who had suffered from pain whilst undergoing the procedure to those who had not. Explanatory analyses were conducted by stepwise logistic regression with the characteristics of the patients to explain the existence of pain. Results Data were analysed for 8446 patients (64% female, mean age 62 ± 14 years) recruited by 240 physicians. The predominant sites injected were the knee (45.5%) and spine (19.1%). Over 80% of patients experienced procedural pain, which was most common in the small joints (42%) and spine (32%). Pain was severe in 5.3% of patients, moderate in 26.6%, mild in 49.8%, and absent in 18.3%. Pain was significantly more intense in patients with severe pain linked to their underlying pathology and for procedures performed in small joints. Preventative or post-procedure analgesia was rarely given, only to 5.7% and 36.3% of patients, respectively. Preventative analgesia was more frequently prescribed in patients with more severe procedural pain. Conclusion Most patients undergoing intra- or peri…

  6. Intravascular ultrasound guidance of percutaneous coronary intervention in ostial chronic total occlusions: a description of the technique and procedural results.

    Science.gov (United States)

    Ryan, Nicola; Gonzalo, Nieves; Dingli, Philip; Cruz, Oscar Vedia; Jiménez-Quevedo, Pilar; Nombela-Franco, Luis; Nuñez-Gil, Ivan; Trigo, María Del; Salinas, Pablo; Macaya, Carlos; Fernandez-Ortiz, Antonio; Escaned, Javier

    2017-02-14

    Inability to cross the lesion with a guidewire is the most common reason for failure in percutaneous revascularization (PCI) of chronic total occlusions (CTOs). An ostial or stumpless CTO is an acknowledged challenge for CTO recanalization due to difficulty in successful wiring. IVUS imaging provides the opportunity to visualize the occluded vessel and to aid guidewire advancement. We review the value of this technique in a single-centre experience of CTO PCI. This series involves 22 patients who underwent CTO PCI using IVUS guidance for stumpless CTO wiring at our institution. CTO operators with extensive IVUS experience in non-CTO cases carried out all procedures. Procedural and outcome data were prospectively entered into the institutional database, and a retrospective analysis of clinical, angiographic and technical data was performed. 17 (77%) of the 22 procedures were successful. The mean age was 59.8 ± 11.5 years, and 90.9% of patients were male. The most commonly attempted lesions were located in the left anterior descending artery, 36.4% (Soon et al. in J Intervent Cardiol 20(5):359-366, 2007), and the circumflex artery (LCx), 31.8% (Mollet et al. in Am J Cardiol 95(2):240-243, 2005). The mean J-CTO score was 3.09 ± 0.75 (3.06 ± 0.68 and 3.17 ± 0.98 in the successful and failed groups, respectively; p = 0.35). The mean contrast volume was 378.7 ml ± 114.7 (389.9 ml ± 130.5 and 349.2 ml ± 52.2 in the successful and failed groups, respectively; p = 0.3). There were no deaths, coronary artery bypass grafting, or myocardial infarctions requiring intervention in this series. When the success rates were analyzed taking into account the date of adoption of this technique, the learning curve had no significant impact on CTO-PCI success. This series describes a good success rate in IVUS-guided stumpless wiring of CTOs in consecutive patients with this complex anatomical scenario.

  7. Adverse events related to gastrointestinal endoscopic procedures in pediatric patients under anesthesia care and a predictive risk model (AEGEP Study).

    Science.gov (United States)

    Ariza, F; Montilla-Coral, D; Franco, O; González, L F; Lozano, L C; Torres, A M; Jordán, J; Blanco, L F; Suárez, L; Cruz, G; Cepeda, M

    2014-01-01

    Multiple studies have analyzed perioperative factors related to adverse events (AEs) in children who require gastrointestinal endoscopic procedures (GEP) in settings where deep sedation is the preferred anesthetic technique over general anesthesia (GA), but not for the opposite case. We reviewed our institutional anesthesia database, seeking children less than 12 years old who underwent GEP over a 5-year period. Logistic regression was used to determine significant associations between preoperative conditions, characteristics of the procedure, airway management, anesthetic approaches and the presence of serious and non-serious AEs. GA was preferred over deep sedation [77.8% vs. 22.2% in 2178 GEP under anesthesia care (n=1742)]. We found 96 AEs reported in 77 patients, with hypoxemia (1.82%), bronchospasm (1.14%) and laryngospasm (0.91%) the most frequent. There were 2 cases of severe bradycardia related to laryngospasm/hypoxemia and a case of aspiration resulting in unplanned hospitalization, but there were no cases of intra- or postoperative death. The final predictive model for perioperative AEs included age <1 year, upper respiratory tract infection (URTI) <1 week prior to the procedure and low weight for age (LWA) as independent risk factors, and ventilation by facial mask as a protective factor against these events (p<0.05). AEs are infrequent, and severe AEs are rare, in a setting where GA is preferred over deep sedation. Ventilatory AEs are the most frequent and depend on biometric and comorbid conditions more than on the anesthetic drugs chosen. Age <1 year, history of URTI in the week prior to the procedure and LWA act as independent risk factors for AEs in these patients.

  8. Value of the distant future: Model-independent results

    Science.gov (United States)

    Katz, Yuri A.

    2017-01-01

    This paper shows that a model-independent account of correlations in an interest rate process or a log-consumption growth process leads to declining long-term tails of discount curves. Under the assumption of an exponentially decaying memory in fluctuations of risk-free real interest rates, I derive the analytical expression for an apt value of the long-run discount factor and provide a detailed comparison of the obtained result with the outcomes of the benchmark risk-free interest rate models. Utilizing the standard consumption-based model with an isoelastic power utility of the representative economic agent, I derive the non-Markovian generalization of the Ramsey discounting formula. The obtained analytical results, which allow simple calibration, may augment the rigorous cost-benefit and regulatory impact analysis of long-term environmental and infrastructure projects.
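
    The declining-tail effect can be demonstrated numerically without any of the paper's machinery: simulate a persistent (exponentially decaying memory) rate as an AR(1) process and compute the certainty-equivalent rate R(t) = -ln E[exp(-∫r)]/t at several horizons. Because E[exp(-X)] is dominated by low-rate paths (Jensen's inequality), R(t) falls with the horizon. The AR(1) parameters below are illustrative assumptions, not a calibration from the paper.

```python
# Monte Carlo sketch of declining certainty-equivalent discount rates under a
# persistent AR(1) short rate. r_bar, phi, sigma and the horizons are invented.
import math
import random

def certainty_equivalent_rates(horizons=(10, 50, 100), n_paths=4000,
                               r_bar=0.03, phi=0.98, sigma=0.004, seed=7):
    rng = random.Random(seed)
    T = max(horizons)
    discounts = {t: 0.0 for t in horizons}
    for _ in range(n_paths):
        r, cum = r_bar, 0.0
        for year in range(1, T + 1):
            cum += r                                   # integrate the rate path
            r = r_bar + phi * (r - r_bar) + sigma * rng.gauss(0.0, 1.0)
            if year in discounts:
                discounts[year] += math.exp(-cum)      # realized discount factor
    # R(t) = -ln(mean discount factor) / t
    return {t: -math.log(d / n_paths) / t for t, d in discounts.items()}

rates = certainty_equivalent_rates()
```

    With uncorrelated (memoryless) fluctuations the effect all but vanishes; it is the exponentially decaying memory that makes the variance of the cumulated rate grow fast enough to bend the long end of the discount curve down.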

  9. Developing a spatial-statistical model and map of historical malaria prevalence in Botswana using a staged variable selection procedure

    Directory of Open Access Journals (Sweden)

    Mabaso Musawenkosi LH

    2007-09-01

    Abstract Background Several malaria risk maps have been developed in recent years, many from the prevalence of infection data collated by the MARA (Mapping Malaria Risk in Africa) project, and using various environmental data sets as predictors. Variable selection is a major obstacle due to analytical problems caused by over-fitting, confounding and non-independence in the data. Testing and comparing every combination of explanatory variables in a Bayesian spatial framework remains unfeasible for most researchers. The aim of this study was to develop a malaria risk map using a systematic and practicable variable selection process for spatial analysis and mapping of historical malaria risk in Botswana. Results Of 50 potential explanatory variables from eight environmental data themes, 42 were significantly associated with malaria prevalence in univariate logistic regression and were ranked by the Akaike Information Criterion. Those correlated with higher-ranking relatives of the same environmental theme were temporarily excluded. The remaining 14 candidates were ranked by selection frequency after running automated step-wise selection procedures on 1000 bootstrap samples drawn from the data. A non-spatial multiple-variable model was developed through step-wise inclusion in order of selection frequency. Previously excluded variables were then re-evaluated for inclusion, using further step-wise bootstrap procedures, resulting in the exclusion of another variable. Finally a Bayesian geo-statistical model using Markov chain Monte Carlo simulation was fitted to the data, resulting in a final model of three predictor variables, namely summer rainfall, mean annual temperature and altitude. Each was independently and significantly associated with malaria prevalence after allowing for spatial correlation. This model was used to predict malaria prevalence at unobserved locations, producing a smooth risk map for the whole country. Conclusion We have…
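
    The bootstrap selection-frequency step can be illustrated with a toy stand-in: the paper reruns a step-wise procedure on 1000 bootstrap samples and counts how often each candidate is selected; the sketch below does the same counting on synthetic data but substitutes absolute-correlation screening for step-wise AIC selection, which is an assumption made purely to keep the example short.

```python
# Toy bootstrap selection-frequency sketch (correlation screening stands in
# for step-wise AIC). Data, threshold and sample sizes are invented.
import math
import random

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    sx = math.sqrt(sum((a - mx) ** 2 for a in xs))
    sy = math.sqrt(sum((b - my) ** 2 for b in ys))
    return sxy / (sx * sy)

def selection_frequencies(n=120, n_boot=200, threshold=0.3, seed=3):
    rng = random.Random(seed)
    x_signal = [rng.gauss(0, 1) for _ in range(n)]
    x_noise = [rng.gauss(0, 1) for _ in range(n)]
    y = [a + rng.gauss(0, 1) for a in x_signal]        # only x_signal matters
    counts = {"signal": 0, "noise": 0}
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]     # bootstrap resample
        yb = [y[i] for i in idx]
        if abs(pearson([x_signal[i] for i in idx], yb)) > threshold:
            counts["signal"] += 1
        if abs(pearson([x_noise[i] for i in idx], yb)) > threshold:
            counts["noise"] += 1
    return {k: v / n_boot for k, v in counts.items()}

freqs = selection_frequencies()
```

    A genuine predictor is selected in essentially every bootstrap sample while a noise variable is selected only sporadically; ranking candidates by that frequency is what makes the staged procedure robust to the instability of any single step-wise run.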

  10. Exact results for car accidents in a traffic model

    Science.gov (United States)

    Huang, Ding-wei

    1998-07-01

    Within the framework of a recent model for car accidents on single-lane highway traffic, we study analytically the probability of the occurrence of car accidents. Exact results are obtained. Various scaling behaviours are observed. The linear dependence of the occurrence of car accidents on density is understood as the dominance of a single velocity in the distribution.

  11. The improvement of pelvic floor muscle function in POP patients after the Prolift procedure: results from surface electromyography.

    Science.gov (United States)

    Wang, Lihua; Chen, Xinliang; Li, Xiaocui; Gong, Yao; Li, Huaifang; Tong, Xiaowen

    2013-10-01

    Patients with pelvic organ prolapse (POP) have lower pelvic floor muscle (PFM) function. We hypothesized that pelvic reconstructive surgery could improve PFM function and strength. This controlled, nonrandomized study recruited 37 POP patients in the Prolift group and 30 non-POP patients in the control group. Two urogynecologists performed the Prolift procedure. One experienced physiotherapist, who was blinded to the grouping, conducted the surface electromyography (SEMG) evaluation using an intravaginal probe. A patient was considered objectively cured if she had stage 0 or I according to the Pelvic Organ Prolapse Quantification (POP-Q) system at the 3rd month postoperatively. Two types of contractions, namely maximum voluntary contraction (MVC) and short, fast contractions (SFC) within 6 s, were performed at each SEMG measurement. The SEMG data were collected once in the control group on admission and twice in the Prolift group (on admission and at the 3rd month postoperatively). The t test, Mann-Whitney U test, and Wilcoxon test were used for statistical analysis. A total of 36 POP patients were cured by the Prolift procedure. At the 3-month follow-up, the voltage and duration of MVC as well as the number and voltage of SFC increased significantly in the Prolift group. These variables were lower in POP patients compared to women without POP. The restoration of pelvic anatomy may account for the improved PFM function, with increased electrical activity in POP patients verified by SEMG. Evaluation of PFM function may be used as a clinical tool in the overall assessment of pelvic reconstructive surgeries.

  12. Embryonic NOTES thoracic sympathectomy for palmar hyperhidrosis: results of a novel technique and comparison with the conventional VATS procedure.

    Science.gov (United States)

    Zhu, Li-Huan; Chen, Long; Yang, Shengsheng; Liu, Daoming; Zhang, Jixue; Cheng, Xianjin; Chen, Weisheng

    2013-11-01

    To avoid the disadvantages of chronic pain and chest wall paresthesia associated with video-assisted thoracic surgery (VATS) procedures, we developed a novel surgical technique for performing sympathectomy by embryonic natural orifice transumbilical endoscopic surgery (E-NOTES) with a flexible endoscope. In this study, we compared the outcomes of E-NOTES with conventional VATS thoracic sympathectomy for palmar hyperhidrosis. From January 2010 to April 2011, a total of 66 patients with severe palmar hyperhidrosis were treated with thoracic sympathectomy in our department. Thirty-four transumbilical thoracic sympathectomies were performed via a 5-mm umbilicus incision with an ultrathin gastroscope, and compared with 32 conventional needlescopic thoracic sympathectomies. Retrospective statistical analysis of a prospectively collected group of patients was performed. There was no significant difference with regard to gender, mean age, body mass index, and length of hospital stay between the two groups. The operative time for E-NOTES thoracic sympathectomy was longer than that of VATS thoracic sympathectomy (56.4 ± 10.8 vs. 40.3 ± 6.5 min, p …); patients undergoing … sympathectomy reported successful treatment of their palmar hyperhidrosis. Compensatory hyperhidrosis was noticed in 7 (20.1 %) and 6 (18.8 %) patients in the E-NOTES and VATS groups, respectively (p > 0.05). Postoperative pain and paresthesia were significantly reduced in the E-NOTES group at each time interval, and the aesthetic effect of the incision was superior in the E-NOTES group. Transumbilical-diaphragmatic thoracic sympathectomy is a safe and efficacious alternative to the conventional approach. This novel procedure can further reduce postoperative pain and chest wall paresthesia as well as afford maximum cosmetic benefit by hiding the surgical incision in the umbilicus.

  13. Nurse-administered propofol sedation for gastrointestinal endoscopic procedures: first Nordic results from implementation of a structured training program.

    Science.gov (United States)

    Slagelse, Charlotte; Vilmann, Peter; Hornslet, Pernille; Hammering, Anne; Mantoni, Teit

    2011-12-01

Proper training is essential to improve the safety of nurse-administered propofol sedation (NAPS). We report our experience with a structured NAPS training program. In 2007, a training program was introduced for endoscopists and endoscopy nurses in collaboration with the Department of Anaesthesiology. During a 2.5-year period, eight nurses were trained. Propofol was given as monotherapy. The training program for nurses consisted of a 6-week course including theoretical and practical training, whereas the training program for endoscopists consisted of 2.5 h of theory. Patients were selected based on strict criteria, including only patients in ASA (American Society of Anesthesiologists) groups I-III. A total of 2527 patients undergoing 2656 gastrointestinal endoscopic procedures were included. The patients were in ASA groups I, II and III in 34.7%, 56% and 9.3% of cases, respectively. The median dose of propofol was 300 mg. No mortality was noted. Of the 2527 patients, 119 developed short-lasting hypoxia (4.7%); 61 (2.4%) needed suction; 22 (0.9%) required bag-mask ventilation; and 8 (0.3%) procedures had to be discontinued. In 11 patients (0.4%), anesthetic assistance was called due to short-lasting desaturation. 34 patients (1.3%) experienced a change in blood pressure greater than 30%. NAPS provided by properly trained nurses according to the present protocol is safe and associated with only a minor risk (short-lasting hypoxia, 4.7%). National or international structured training programs are at present few or non-existent. The present training program has documented its value and is suggested as a basis for the current development of guidelines.

  14. Improving the Prediction of Total Surgical Procedure Time Using Linear Regression Modeling.

    Science.gov (United States)

    Edelman, Eric R; van Kuijk, Sander M J; Hamaekers, Ankie E W; de Korte, Marcel J M; van Merode, Godefridus G; Buhre, Wolfgang F F A

    2017-01-01

    For efficient utilization of operating rooms (ORs), accurate schedules of assigned block time and sequences of patient cases need to be made. The quality of these planning tools is dependent on the accurate prediction of total procedure time (TPT) per case. In this paper, we attempt to improve the accuracy of TPT predictions by using linear regression models based on estimated surgeon-controlled time (eSCT) and other variables relevant to TPT. We extracted data from a Dutch benchmarking database of all surgeries performed in six academic hospitals in The Netherlands from 2012 till 2016. The final dataset consisted of 79,983 records, describing 199,772 h of total OR time. Potential predictors of TPT that were included in the subsequent analysis were eSCT, patient age, type of operation, American Society of Anesthesiologists (ASA) physical status classification, and type of anesthesia used. First, we computed the predicted TPT based on a previously described fixed ratio model for each record, multiplying eSCT by 1.33. This number is based on the research performed by van Veen-Berkx et al., which showed that 33% of SCT is generally a good approximation of anesthesia-controlled time (ACT). We then systematically tested all possible linear regression models to predict TPT using eSCT in combination with the other available independent variables. In addition, all regression models were again tested without eSCT as a predictor to predict ACT separately (which leads to TPT by adding SCT). TPT was most accurately predicted using a linear regression model based on the independent variables eSCT, type of operation, ASA classification, and type of anesthesia. This model performed significantly better than the fixed ratio model and the method of predicting ACT separately. Making use of these more accurate predictions in planning and sequencing algorithms may enable an increase in utilization of ORs, leading to significant financial and productivity related benefits.
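
The two approaches compared above can be sketched on synthetic data. The numbers below (an ACT that grows with SCT and ASA class) are invented for illustration and are not the Dutch benchmarking data; the fixed ratio 1.33 is the one cited from van Veen-Berkx et al.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic illustration, NOT the Dutch benchmarking data: total procedure
# time (TPT) = surgeon-controlled time (SCT) + anesthesia-controlled time
# (ACT), where ACT here depends on both SCT and ASA class.
n = 1000
esct = rng.uniform(30, 240, n)            # estimated SCT, minutes
asa = rng.integers(1, 4, n)               # ASA physical status class 1-3
act = 0.25 * esct + 8.0 * asa + rng.normal(0.0, 5.0, n)
tpt = esct + act

# Fixed-ratio model: TPT ~ 1.33 * eSCT
pred_fixed = 1.33 * esct

# Ordinary least squares on eSCT and ASA class
X = np.column_stack([np.ones(n), esct, asa])
beta, *_ = np.linalg.lstsq(X, tpt, rcond=None)
pred_ols = X @ beta

def rmse(pred):
    return np.sqrt(np.mean((tpt - pred) ** 2))

print(f"fixed-ratio RMSE: {rmse(pred_fixed):.1f} min")
print(f"regression RMSE:  {rmse(pred_ols):.1f} min")
```

Whenever ACT depends on more than SCT alone, the regression recovers the extra structure and its error falls below that of the fixed ratio, mirroring the paper's finding.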

  15. Improving the Prediction of Total Surgical Procedure Time Using Linear Regression Modeling

    Directory of Open Access Journals (Sweden)

    Eric R. Edelman

    2017-06-01

Full Text Available For efficient utilization of operating rooms (ORs), accurate schedules of assigned block time and sequences of patient cases need to be made. The quality of these planning tools is dependent on the accurate prediction of total procedure time (TPT) per case. In this paper, we attempt to improve the accuracy of TPT predictions by using linear regression models based on estimated surgeon-controlled time (eSCT) and other variables relevant to TPT. We extracted data from a Dutch benchmarking database of all surgeries performed in six academic hospitals in The Netherlands from 2012 till 2016. The final dataset consisted of 79,983 records, describing 199,772 h of total OR time. Potential predictors of TPT that were included in the subsequent analysis were eSCT, patient age, type of operation, American Society of Anesthesiologists (ASA) physical status classification, and type of anesthesia used. First, we computed the predicted TPT based on a previously described fixed ratio model for each record, multiplying eSCT by 1.33. This number is based on the research performed by van Veen-Berkx et al., which showed that 33% of SCT is generally a good approximation of anesthesia-controlled time (ACT). We then systematically tested all possible linear regression models to predict TPT using eSCT in combination with the other available independent variables. In addition, all regression models were again tested without eSCT as a predictor to predict ACT separately (which leads to TPT by adding SCT). TPT was most accurately predicted using a linear regression model based on the independent variables eSCT, type of operation, ASA classification, and type of anesthesia. This model performed significantly better than the fixed ratio model and the method of predicting ACT separately. Making use of these more accurate predictions in planning and sequencing algorithms may enable an increase in utilization of ORs, leading to significant financial and productivity related benefits.

  16. The U.S. Agency-Level Bid Protest Mechanism: A Model for Bid Challenge Procedures in Developing Nations

    Science.gov (United States)

    2005-08-31

Table-of-contents excerpts: "V. Bid Protest Procedures in the UNCITRAL Model Procurement Law and World Bank Development Program"; "A. UNCITRAL Model Procurement Law". The report discusses government procurement and the assistance offered by the United Nations Commission on International Trade Law (UNCITRAL) in the form of its Model Law.

  17. Modeling Results For the ITER Cryogenic Fore Pump. Final Report

    Energy Technology Data Exchange (ETDEWEB)

    Pfotenhauer, John M. [University of Wisconsin, Madison, WI (United States); Zhang, Dongsheng [University of Wisconsin, Madison, WI (United States)

    2014-03-31

A numerical model characterizing the operation of a cryogenic fore-pump (CFP) for ITER has been developed at the University of Wisconsin – Madison during the period from March 15, 2011 through June 30, 2014. The purpose of the ITER-CFP is to separate hydrogen isotopes from helium gas, both making up the exhaust components from the ITER reactor. The model explicitly determines the amount of hydrogen that is captured by the supercritical-helium-cooled pump as a function of the inlet temperature of the supercritical helium, its flow rate, and the inlet conditions of the hydrogen gas flow. Furthermore, the model computes the location and amount of hydrogen captured in the pump as a function of time. Throughout the model’s development, and as a calibration check for its results, it has been extensively compared with the measurements of a CFP prototype tested at Oak Ridge National Lab. The results of the model demonstrate that the quantity of captured hydrogen is very sensitive to the inlet temperature of the helium coolant on the outside of the cryopump. Furthermore, the model can be utilized to refine those tests, and suggests methods that could be incorporated in the testing to enhance the usefulness of the measured data.

  18. Assessment of Galileo modal test results for mathematical model verification

    Science.gov (United States)

    Trubert, M.

    1984-01-01

The modal test program for the Galileo Spacecraft was completed at the Jet Propulsion Laboratory in the summer of 1983. The multiple sine dwell method was used for the baseline test. The Galileo Spacecraft is a rather complex 2433 kg structure made of a central core on which seven major appendages representing 30 percent of the total mass are attached, resulting in a high modal density structure. The test revealed a strong nonlinearity in several major modes. This nonlinearity, discovered in the course of the test, necessitated running additional tests at the unusually high response levels of up to about 21 g. The high levels of response were required to obtain a model verification valid at the level of loads for which the spacecraft was designed. Because of the high modal density and the nonlinearity, correlation between the dynamic mathematical model and the test results becomes a difficult task. Significant changes in the pre-test analytical model are necessary to establish confidence in the upgraded analytical model used for the final load verification. This verification, using a test verified model, is required by NASA to fly the Galileo Spacecraft on the Shuttle/Centaur launch vehicle in 1986.

  19. Modeling vertical loads in pools resulting from fluid injection. [BWR

    Energy Technology Data Exchange (ETDEWEB)

    Lai, W.; McCauley, E.W.

    1978-06-15

Table-top model experiments were performed to investigate pressure suppression pool dynamics effects due to a postulated loss-of-coolant accident (LOCA) for the Peach Bottom Mark I boiling water reactor containment system. The results guided subsequent conduct of experiments in the 1/5-scale facility and provided new insight into the vertical load function (VLF). Model experiments show an oscillatory VLF, with the download typically double-spiked and followed by a more gradual sinusoidal upload. The load function contains a high-frequency oscillation superimposed on a low-frequency one; evidence from measurements indicates that the oscillations are initiated by fluid dynamics phenomena.

  20. The Impact of R-Optimized Administration Modeling Procedures on Brazilian Normative Reference Values for Rorschach Scores.

    Science.gov (United States)

    Pianowski, Giselle; Meyer, Gregory J; Villemor-Amaral, Anna Elisa de

    2016-01-01

To generate normative reference data for the Rorschach Performance Assessment System (R-PAS), modeling procedures were developed to convert the distribution of responses (R) in protocols obtained using Comprehensive System (CS; Exner, 2003) administration guidelines to match the distribution of R in protocols obtained using R-Optimized Administration (Meyer, Viglione, Mihura, Erard, & Erdberg, 2011). This study replicates the R-PAS study, examining the impact of modeling R-Optimized Administration on Brazilian normative reference values by comparing a sample of 746 CS administered protocols to its counterpart sample of 343 records modeled to match R-Optimized Administration. The results were strongly consistent with the R-PAS findings, showing the modeled records had a slightly higher mean R and, secondarily, slightly higher means for Complexity and V-Comp, as well as smaller standard deviations for R, Complexity, and R8910%. We also observed 5 other small differences not observed in the R-PAS study. However, when comparing effect sizes for the differences in means and standard deviations observed in this study to the differences found in the R-PAS study, the results were virtually identical. These findings suggest that using R-Optimized Administration in Brazil might produce normative results that are similar to traditional CS norms for Brazil and similar to the international norms used in R-PAS.

  1. [Rapid and Dynamic Determination Models of Amino Acids and Catechins Concentrations during the Processing Procedures of Keemun Black Tea].

    Science.gov (United States)

    Ning, Jing-ming; Yan, Ling; Zhang, Zheng-zhu; Wei, Ling-dong; Li, Lu-qing; Fang, Jun-ting; Huang, Cai-wang

    2015-12-01

Tea is one of the most popular beverages in the world. Amino acids and catechins are important contributors to the taste and health functions of tea. Among the black teas of the world, Keemun black tea is famous for its specific fragrance, the "Keemun aroma". During the processing of Keemun black tea, the contents of amino acids and catechins change greatly, and the differences in these concentrations during processing vary significantly. However, a rapid and dynamic determination method applicable during processing did not exist up to now. In order to establish a rapid determination method for the contents of amino acids and catechins during the processing of Keemun black tea, materials of fresh leaves, withered leaves, twisted leaves, fermented leaves, and crude tea (after drying) were selected to acquire their near-infrared spectra, and their contents of amino acids and catechins were determined by chemical analysis. The original spectra were preprocessed by the standard normal variate transformation (SNVT) method, and models relating the near-infrared (NIR) spectra to the contents of amino acids and catechins were established using synergy interval partial least squares (Si-PLS). The correlation coefficient and the root-mean-square error of cross-validation were used as indexes for evaluating the models. The results showed that the optimal prediction model for amino acids by Si-PLS divided the spectra into 20 intervals, combined 4 subintervals, and used 9 principal component factors. The correlation coefficient and the root-mean-square error of the calibration set were 0.9558 and 1.768, respectively; the correlation coefficient and the root-mean-square error of the prediction set were 0.9495 and 2.16, respectively. The optimal prediction model for catechins by Si-PLS divided the spectra into 20 intervals, combined 3 subintervals, and used 10 principal component factors.

  2. Numerical Simulation Procedure for Modeling TGO Crack Propagation and TGO Growth in Thermal Barrier Coatings upon Thermal-Mechanical Cycling

    Directory of Open Access Journals (Sweden)

    Ding Jun

    2014-01-01

Full Text Available This paper reports a numerical simulation procedure to model crack propagation in the TGO layer and TGO growth near a surface groove in the metal substrate upon multiple thermal-mechanical cycles. The material property change method is employed to model TGO formation cycle by cycle, and the creep properties of the constituent materials are also incorporated. Two columns of repeated nodes are placed along the interface of the potential crack, and these nodes are bonded together as one node at each geometrical location. In terms of a critical crack opening displacement criterion, the onset of crack propagation in the TGO layer has been determined by finite element analyses in comparison with the case without a predefined crack. Then, according to the results of the previous analyses, the input values of the critical failure parameters for the subsequent analyses can be determined. The robust restart-analysis capabilities of ABAQUS help to implement the overall simulation of TGO crack propagation. The comparison of the final TGO deformation profile between the numerical results and experimental observation shows good agreement, indicating the correctness and effectiveness of the present procedure, which can guide the prediction of TGO failure in the future design and optimization of TBC systems.

  3. A general U-block model-based design procedure for nonlinear polynomial control systems

    Science.gov (United States)

    Zhu, Q. M.; Zhao, D. Y.; Zhang, Jianhua

    2016-10-01

The proposition of the U-model concept (in terms of 'providing concise and applicable solutions for complex problems') and a corresponding basic U-control design algorithm originated in the first author's PhD thesis. The term U-model appeared (not rigorously defined) for the first time in another journal paper by the first author, which established a framework for using linear polynomial control system design approaches to design nonlinear polynomial control systems (in brief, linear polynomial approaches → nonlinear polynomial plants). This paper represents the next milestone work - using linear state-space approaches to design nonlinear polynomial control systems (in brief, linear state-space approaches → nonlinear polynomial plants). The overall aim of the study is to establish a framework, defined as the U-block model, which provides a generic prototype for using linear state-space-based approaches to design control systems with smooth nonlinear plants/processes described by polynomial models. For analysing the feasibility and effectiveness, the sliding mode control design approach is selected as an exemplary case study. Numerical simulation studies provide a user-friendly step-by-step procedure for readers/users with interest in their ad hoc applications. In formality, this is the first paper to present the U-model-oriented control system design in a formal way and to study the associated properties and theorems. The previous publications, in the main, have been algorithm-based studies and simulation demonstrations. In some sense, this paper can be treated as a landmark for U-model-based research, moving from the intuitive/heuristic stage to rigorous/formal/comprehensive studies.

  4. A Robbins-Monro procedure for estimation in semiparametric regression models

    CERN Document Server

    Bercu, Bernard

    2011-01-01

This paper is devoted to the parametric estimation of a shift together with the nonparametric estimation of a regression function in a semiparametric regression model. We implement a Robbins-Monro procedure that is very efficient and easy to handle. On the one hand, we propose a stochastic algorithm similar to that of Robbins-Monro in order to estimate the shift parameter. A preliminary evaluation of the regression function is not necessary for estimating the shift parameter. On the other hand, we make use of a recursive Nadaraya-Watson estimator for the estimation of the regression function. This kernel estimator takes into account the previous estimation of the shift parameter. We establish the almost sure convergence of both the Robbins-Monro and Nadaraya-Watson estimators. The asymptotic normality of our estimates is also provided.
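
The two ingredients of the procedure, a Robbins-Monro stochastic approximation and a recursive Nadaraya-Watson estimator, can be sketched generically. The targets below (the median of a Gaussian, a sine regression function) are illustrative stand-ins, not the paper's semiparametric shift model.

```python
import numpy as np

rng = np.random.default_rng(2)

# (1) Robbins-Monro: find the root of h(t) = E[H(t, Z)] from noisy
# evaluations; here the median of Z ~ N(3, 1), i.e. the root of
# E[sign(Z - t)], with step sizes 1/n.
theta = 0.0
for n in range(1, 20001):
    z = rng.normal(3.0, 1.0)
    theta += (1.0 / n) * np.sign(z - theta)

# (2) Recursive Nadaraya-Watson estimate of m(x) = E[Y | X = x] on a
# fixed grid, updated one observation at a time (fixed bandwidth h).
grid = np.linspace(-2.0, 2.0, 41)
num = np.zeros_like(grid)
den = np.zeros_like(grid)
h = 0.2
for _ in range(20000):
    x = rng.uniform(-2.0, 2.0)
    y = np.sin(x) + rng.normal(0.0, 0.1)
    w = np.exp(-0.5 * ((grid - x) / h) ** 2)   # Gaussian kernel weights
    num += w * y
    den += w
m_hat = num / den

print(f"Robbins-Monro estimate of the median: {theta:.3f}")
print(f"max |m_hat - sin| on grid: {np.max(np.abs(m_hat - np.sin(grid))):.3f}")
```

Both updates are one-pass and recursive, which is the practical appeal of the paper's approach: neither estimator needs to revisit past observations.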

  5. Variational procedure for nuclear shell-model calculations and energy-variance extrapolation

    CERN Document Server

    Shimizu, Noritaka; Mizusaki, Takahiro; Honma, Michio; Tsunoda, Yusuke; Otsuka, Takaharu

    2012-01-01

    We discuss a variational calculation for nuclear shell-model calculations and propose a new procedure for the energy-variance extrapolation (EVE) method using a sequence of the approximated wave functions obtained by the variational calculation. The wave functions are described as linear combinations of the parity, angular-momentum projected Slater determinants, the energy of which is minimized by the conjugate gradient method obeying the variational principle. The EVE generally works well using the wave functions, but we found some difficult cases where the EVE gives a poor estimation. We discuss the origin of the poor estimation concerning shape coexistence. We found that the appropriate reordering of the Slater determinants allows us to overcome this difficulty and to reduce the uncertainty of the extrapolation.
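
The extrapolation step can be illustrated with synthetic numbers: given the energies of a sequence of approximate wave functions and their energy variances, one fits the energy against the variance and reads off the zero-variance intercept. The energies below are invented for illustration.

```python
import numpy as np

# Synthetic illustration of energy-variance extrapolation: a sequence of
# variational energies E_k approaches the (hypothetical) exact energy as
# the energy variance dE2_k shrinks, E_k ~ E_exact + a*dE2 + b*dE2^2.
E_exact = -135.86                             # invented "exact" energy
dE2 = np.array([2.0, 1.2, 0.7, 0.35, 0.15])   # energy variances
E = E_exact + 0.9 * dE2 + 0.02 * dE2 ** 2     # synthetic energies

# Fit E against the variance and extrapolate to zero variance
coeffs = np.polyfit(dE2, E, 2)
E_extrapolated = np.polyval(coeffs, 0.0)
print(f"extrapolated energy: {E_extrapolated:.4f}")
```

The poor-estimation cases discussed in the abstract correspond to sequences whose E-versus-variance points do not fall on a single smooth curve (e.g. in shape coexistence); reordering the Slater determinants restores a smooth sequence that this kind of fit can extrapolate reliably.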

  6. Major risk-stratification models fail to predict outcomes in patients with multivessel coronary artery disease undergoing simultaneous hybrid procedure

    Institute of Scientific and Technical Information of China (English)

    WANG Hao-ran; ZHENG Zhe; XIONG Hui; XU Bo; LI Li-huan; GAO Run-lin; HU Sheng-shou

    2013-01-01

Background: The hybrid procedure for coronary heart disease combines minimally invasive coronary artery bypass grafting (CABG) and percutaneous coronary intervention (PCI) and is an alternative revascularization treatment. We sought to assess the predictive value of four risk-stratification models for risk assessment of major adverse cardiac and cerebrovascular events (MACCE) in patients with multivessel disease undergoing hybrid coronary revascularization. Methods: The data of 120 patients were retrospectively collected and the SYNTAX score, EuroSCORE, SinoSCORE and the Global Risk Classification (GRC) calculated for each patient. The outcomes of interest were 2.7-year incidences of MACCE, including death, myocardial infarction, stroke, and any-vessel revascularization. Results: During a mean 2.7-year follow-up, actuarial survival was 99.17%, and no myocardial infarctions occurred. The discriminatory power (area under the curve (AUC)) of the SYNTAX score, EuroSCORE, SinoSCORE and GRC for 2.7-year MACCE was 0.60 (95% confidence interval 0.42-0.77), 0.65 (0.47-0.82), 0.57 (0.39-0.75) and 0.65 (0.46-0.83), respectively. The calibration characteristics of the SYNTAX score, EuroSCORE, SinoSCORE and GRC were 3.92 (P=0.86), 5.39 (P=0.37), 13.81 (P=0.32) and 0.02 (P=0.89), respectively. Conclusions: In patients with multivessel disease undergoing a hybrid procedure, the SYNTAX score, EuroSCORE, SinoSCORE and GRC were inaccurate in predicting MACCE. Modified risk-stratification models with improved predictive value for hybrid procedures are needed.
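
The discriminatory power reported in this record is the area under the ROC curve, which equals the Mann-Whitney probability that a randomly chosen patient with an event receives a higher risk score than a randomly chosen patient without one. A minimal sketch on synthetic scores:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic risk scores: 100 patients with MACCE, 400 without. The AUC is
# the probability that a random event patient outscores a random
# non-event patient (Mann-Whitney statistic); 0.5 = no discrimination.
events = rng.normal(0.65, 0.15, 100)
nonevents = rng.normal(0.35, 0.15, 400)

diff = events[:, None] - nonevents[None, :]
auc = np.mean(diff > 0) + 0.5 * np.mean(diff == 0)
print(f"AUC = {auc:.3f}")
```

By this yardstick the AUCs of 0.57-0.65 reported above sit only modestly above chance, which is what the abstract means by "inaccurate in predicting MACCE".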

  7. Are the results of questionnaires measuring non-cognitive characteristics during the selection procedure for medical school application biased by social desirability?

    Science.gov (United States)

    Obst, Katrin U.; Brüheim, Linda; Westermann, Jürgen; Katalinic, Alexander; Kötter, Thomas

    2016-01-01

    Introduction: A stronger consideration of non-cognitive characteristics in Medical School application procedures is desirable. Psychometric tests could be used as an economic supplement to face-to-face interviews which are frequently conducted during university internal procedures for Medical School applications (AdH, Auswahlverfahren der Hochschulen). This study investigates whether the results of psychometric questionnaires measuring non-cognitive characteristics such as personality traits, empathy, and resilience towards stress are vulnerable to distortions of social desirability when used in the context of selection procedures at Medical Schools. Methods: This study took place during the AdH of Lübeck University in August 2015. The following questionnaires have been included: NEO-FFI, SPF, and AVEM. In a 2x1 between-subject experiment we compared the answers from an alleged application condition and a control condition. In the alleged application condition we told applicants that these questionnaires were part of the application procedure. In the control condition applicants were informed about the study prior to completing the questionnaires. Results: All included questionnaires showed differences which can be regarded as social-desirability effects. These differences did not affect the entire scales but, rather, single subscales. Conclusion: These results challenge the informative value of these questionnaires when used for Medical School application procedures. Future studies may investigate the extent to which the differences influence the actual selection of applicants and what implications can be drawn from them for the use of psychometric questionnaires as part of study-place allocation procedures at Medical Schools. PMID:27990471

  8. Initial CGE Model Results Summary Exogenous and Endogenous Variables Tests

    Energy Technology Data Exchange (ETDEWEB)

    Edwards, Brian Keith [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Boero, Riccardo [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Rivera, Michael Kelly [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-08-07

The following discussion presents initial results of tests of the most recent version of the National Infrastructure Simulation and Analysis Center Dynamic Computable General Equilibrium (CGE) model developed by Los Alamos National Laboratory (LANL). The intent is to test and assess the model’s behavioral properties. The tests evaluated whether the predicted impacts are reasonable from a qualitative perspective, that is, whether a predicted change, be it an increase or decrease in other model variables, is consistent with prior economic intuition and expectations. One purpose of this effort is to determine whether model changes are needed in order to improve its behavior qualitatively and quantitatively.

  9. [Long-term results after a modified Sauvé-Kapandji procedure. Report on 105 post-traumatic cases].

    Science.gov (United States)

    Zimmermann, R; Gabl, M; Angermann, P; Lutz, M; Pechlaner, S

    2003-07-01

A modified Sauvé-Kapandji procedure was performed on 105 patients for a painfully limited range of motion and arthritis of the distal radioulnar joint following distal fracture of the radius (n=81), fracture of the radius and ulna in the distal one-third (n=18), or fracture of the forearm shaft (n=6). After an average of 8 years, all patients were followed up clinically (motion, strength, pain) and radiographically (union of the arthrodesis, carpal translation, radioulnar distance). Rotation of the forearm was improved by 53%. Strength was 70% of that of the contralateral side. Pain was reduced in 97% of the patients. In all cases the arthrodesis had fused completely. An ulnar drift of the carpus was observed in 5% of the patients, and 74% of the patients showed radiological signs of approximation of the proximal ulnar stump to the radius. This reduction of the radioulnar distance amounted to less than 3 mm in 65% of the patients and lay between 3 and 5 mm in 29% of the patients. In none of the cases was direct contact between the ulna and the radius encountered.

  10. Comparison of procedures for immediate reconstruction of large osseous defects resulting from removal of a single tooth to prepare for insertion of an endosseous implant after healing

    NARCIS (Netherlands)

    Raghoebar, G. M.; Slater, J. J. H.; den Hartog, L.; Meijer, H. J. A.; Vissink, A.

    2009-01-01

This study evaluated the treatment outcome of immediate reconstruction of 45 large osseous defects resulting from removal of a single tooth with a 1:2 mixture of Bio-Oss® and autologous tuberosity bone, and three different procedures for soft tissue closing (Bio-Gide® membrane, connective tissue

  11. Simulating lightning into the RAMS model: implementation and preliminary results

    Directory of Open Access Journals (Sweden)

    S. Federico

    2014-05-01

Full Text Available This paper shows the results of a tailored version of a previously published methodology, designed to simulate lightning activity, implemented into the Regional Atmospheric Modeling System (RAMS). The method gives the flash density at the resolution of the RAMS grid scale, allowing for a detailed analysis of the evolution of simulated lightning activity. The system is applied in detail to two case studies that occurred over the Lazio Region, in Central Italy. Simulations are compared with the lightning activity detected by the LINET network. The cases refer to two thunderstorms of different intensity. Results show that the model predicts both cases reasonably well and that the lightning activity is well reproduced, especially for the most intense case. However, there are errors in the timing and positioning of the convection, whose magnitude depends on the case study, and these are mirrored in timing and positioning errors of the lightning distribution. To assess the performance of the methodology objectively, standard scores are presented for four additional case studies. The scores show the ability of the methodology to simulate the daily lightning activity for different spatial scales and for two different minimum thresholds of flash number density. The performance decreases at finer spatial scales and for higher thresholds. The comparison of simulated and observed lightning activity is an immediate and powerful tool to assess the model's ability to reproduce the intensity and the evolution of the convection. This shows the importance of using computationally efficient lightning schemes, such as the one described in this paper, in forecast models.

  12. Modeling air quality over China: Results from the Panda project

    Science.gov (United States)

    Katinka Petersen, Anna; Bouarar, Idir; Brasseur, Guy; Granier, Claire; Xie, Ying; Wang, Lili; Wang, Xuemei

    2015-04-01

China faces severe air pollution problems related to rapid economic development in the past decade and increasing demand for energy. Air quality monitoring stations often report high levels of particulate matter and ozone all over the country. Given its long-term health impacts, air pollution has become a pressing problem not only in China but also in other Asian countries. The PANDA project is the result of cooperation between scientists from Europe and China who joined their efforts toward a better understanding of the processes controlling air pollution in China, improved methods for monitoring air quality, and the elaboration of indicators in support of European and Chinese policies. A modeling system for air pollution is being set up within the PANDA project; it includes advanced global (MACC, EMEP) and regional (WRF-Chem, EMEP) meteorological and chemical models to analyze and monitor air quality in China. The poster describes the accomplishments obtained within the first year of the project. Model simulations for January and July 2010 are evaluated with satellite measurements (SCIAMACHY NO2 and MOPITT CO) and in-situ data (O3, CO, NOx, PM10 and PM2.5) observed at several surface stations in China. Using the WRF-Chem model, we investigate the sensitivity of the model performance to emissions (MACCity, HTAPv2), horizontal resolution (60 km, 20 km), and the choice of initial and boundary conditions.

  13. Solution Procedure for Transport Modeling in Effluent Recharge Based on Operator-Splitting Techniques

    Directory of Open Access Journals (Sweden)

    Shutang Zhu

    2008-01-01

Full Text Available The coupling of groundwater movement and reactive transport during groundwater recharge with wastewater leads to a complicated mathematical model involving terms that describe convection-dispersion, adsorption/desorption and/or biodegradation, and so forth. Such a coupled model has been found very difficult to solve either analytically or numerically. The present study adopts operator-splitting techniques to decompose the coupled model into two submodels with different intrinsic characteristics. By applying an upwind finite difference scheme to the finite volume integral of the convection flux term, an implicit solution procedure is derived to solve the convection-dominant equation. The dispersion term is discretized in a standard central-difference scheme, while the dispersion-dominant equation is solved using either the preconditioned Jacobi conjugate gradient (PJCG) method or the Thomas method based on a local one-dimensional scheme. The solution method proposed in this study was applied successfully to the demonstration project of groundwater recharge with secondary effluent at the Gaobeidian sewage treatment plant (STP).
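
The splitting strategy described in this record can be sketched in one dimension: each time step first advances the convection part with an implicit upwind scheme and then the dispersion part with an implicit central-difference scheme. The grid, velocity, and dispersion coefficient below are arbitrary illustrative values, and dense solves stand in for the Thomas/PJCG solvers.

```python
import numpy as np

# 1-D convection-dispersion c_t + v c_x = D c_xx, solved by Lie splitting:
# an implicit upwind step for convection, then an implicit central step
# for dispersion. Parameters are illustrative only.
nx = 101
x = np.linspace(0.0, 1.0, nx)
dx = x[1] - x[0]
v, D, dt, steps = 1.0, 1e-3, 0.002, 200

c = np.zeros(nx)
c[0] = 1.0                                   # constant-concentration inlet

I = np.eye(nx)
A_conv = I + dt * v / dx * (I - np.eye(nx, k=-1))        # implicit upwind
Lap = np.eye(nx, k=-1) - 2.0 * I + np.eye(nx, k=1)
A_disp = I - dt * D / dx ** 2 * Lap                      # implicit central

for A in (A_conv, A_disp):                   # pin both boundary values
    A[0, :], A[-1, :] = 0.0, 0.0
    A[0, 0], A[-1, -1] = 1.0, 1.0

for _ in range(steps):
    c = np.linalg.solve(A_conv, c)           # convection sub-step
    c = np.linalg.solve(A_disp, c)           # dispersion sub-step

front = x[np.argmin(np.abs(c - 0.5))]        # front should sit near v*t = 0.4
print(f"c = 0.5 front at x ~ {front:.2f}")
```

Both sub-steps are tridiagonal systems, which is why the paper can use a Thomas solver or PJCG instead of the dense solves used here for brevity.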

  14. Prediction of melting temperatures in fluorescence in situ hybridization (FISH) procedures using thermodynamic models.

    Science.gov (United States)

    Fontenete, Sílvia; Guimarães, Nuno; Wengel, Jesper; Azevedo, Nuno Filipe

    2016-01-01

    The thermodynamics and kinetics of DNA hybridization, i.e. the process of self-assembly of one, two or more complementary nucleic acid strands, have been studied for many years. The appearance of the nearest-neighbor model led to several theoretical and experimental papers on DNA thermodynamics that provide reasonably accurate thermodynamic information on nucleic acid duplexes and allow estimation of the melting temperature. Because there are no thermodynamic models specifically developed to predict the hybridization temperature of a probe used in a fluorescence in situ hybridization (FISH) procedure, the melting temperature is used as a reference, together with corrections for certain compounds that are used during FISH. However, the quantitative relation between melting and experimental FISH temperatures is poorly described. In this review, various models used to predict the melting temperature for rRNA targets, for DNA oligonucleotides and for nucleic acid mimics (chemically modified oligonucleotides) will be addressed in detail, together with a critical assessment of how this information should be used in FISH.
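
    In practice, nearest-neighbor melting-temperature predictions reduce to the two-state formula Tm = ΔH°/(ΔS° + R ln(CT/x)), where ΔH° and ΔS° are sums of tabulated dinucleotide contributions, CT is the total strand concentration, and x = 4 for non-self-complementary duplexes. A minimal sketch of that formula, using hypothetical duplex totals rather than a real nearest-neighbor table:

```python
import math

R_CAL = 1.987  # gas constant in cal/(mol*K)

def melting_temperature(dH, dS, ct, self_complementary=False):
    """Two-state melting temperature in kelvin.

    dH: duplex enthalpy (cal/mol); dS: duplex entropy (cal/(mol*K));
    ct: total strand concentration (M).  In a nearest-neighbor model dH
    and dS would be sums of tabulated dinucleotide terms; here they are
    supplied directly as hypothetical totals.
    """
    x = 1.0 if self_complementary else 4.0
    return dH / (dS + R_CAL * math.log(ct / x))

# Hypothetical totals for a short probe (not from a real NN table):
tm_kelvin = melting_temperature(dH=-80000.0, dS=-220.0, ct=1e-6)
tm_celsius = tm_kelvin - 273.15
```

    FISH protocols then apply empirical corrections on top of such estimates (e.g., for formamide and salt); as the review stresses, the quantitative relation between this melting temperature and the best experimental FISH temperature is still poorly characterized.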

  15. Exact results for the one dimensional asymmetric exclusion model

    Science.gov (United States)

    Derrida, B.; Evans, M. R.; Hakim, V.; Pasquier, V.

    1993-11-01

    The asymmetric exclusion model describes a system of particles hopping in a preferred direction with hard core repulsion. These particles can be thought of as charged particles in a field, as steps of an interface, as cars in a queue. Several exact results concerning the steady state of this system have been obtained recently. The solution consists of representing the weights of the configurations in the steady state as products of non-commuting matrices.

  16. Exact results for the one dimensional asymmetric exclusion model

    Energy Technology Data Exchange (ETDEWEB)

    Derrida, B.; Evans, M.R.; Pasquier, V. [CEA Centre d'Etudes de Saclay, 91 - Gif-sur-Yvette (France). Service de Physique Theorique]; Hakim, V. [Ecole Normale Superieure, 75 - Paris (France)]

    1993-12-31

    The asymmetric exclusion model describes a system of particles hopping in a preferred direction with hard core repulsion. These particles can be thought of as charged particles in a field, as steps of an interface, as cars in a queue. Several exact results concerning the steady state of this system have been obtained recently. The solution consists of representing the weights of the configurations in the steady state as products of non-commuting matrices. (author).
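
    Concretely, the steady-state weight of a configuration (τ1, ..., τN) of the open-boundary TASEP is ⟨W|X1 ··· XN|V⟩, with Xi = D for an occupied site and Xi = E for an empty one, where the matrices satisfy DE = D + E, ⟨W|E = (1/α)⟨W| and D|V⟩ = (1/β)|V⟩. The hedged sketch below evaluates these weights purely from the algebra, by normal-ordering words with the DE = D + E rule:

```python
from itertools import product

def weight(word, alpha=1.0, beta=1.0):
    """Evaluate <W| word |V> using DE = D + E, <W|E = <W|/alpha, D|V> = |V>/beta.

    Reduce until the word is normal-ordered (all E's before all D's);
    such a word E^a D^b has value (1/alpha)^a * (1/beta)^b.
    """
    for i in range(len(word) - 1):
        if word[i] == 'D' and word[i + 1] == 'E':
            left = word[:i] + ('D',) + word[i + 2:]
            right = word[:i] + ('E',) + word[i + 2:]
            return weight(left, alpha, beta) + weight(right, alpha, beta)
    return (1.0 / alpha) ** word.count('E') * (1.0 / beta) ** word.count('D')

def partition_function(n, alpha=1.0, beta=1.0):
    # Z_N = sum of weights over all 2^N configurations ('D' = occupied site).
    return sum(weight(w, alpha, beta) for w in product('DE', repeat=n))
```

    For α = β = 1 the normalizations come out as Z1 = 2, Z2 = 5, Z3 = 14, ..., the Catalan numbers C(N+1); the brute-force reduction grows exponentially with N, so this is illustrative only.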

  17. APPLYING LOGISTIC REGRESSION MODEL TO THE EXAMINATION RESULTS DATA

    Directory of Open Access Journals (Sweden)

    Goutam Saha

    2011-01-01

    Full Text Available The binary logistic regression model is used to analyze the school examination results (scores) of 1002 students. The analysis is performed on the basis of the independent variables viz. gender, medium of instruction, type of schools, category of schools, board of examinations and location of schools, where scores or marks are assumed to be dependent variables. The odds ratio analysis compares the scores obtained in two examinations viz. matriculation and higher secondary.
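
    In miniature, such an analysis fits a binary logistic model and reports exp(coefficient) as the odds ratio for a predictor. The data below are hypothetical (one binary predictor standing in for, say, medium of instruction), and the plain gradient-descent fit is a stand-in for standard maximum-likelihood routines:

```python
import math

def fit_logistic(xs, ys, lr=0.5, iters=5000):
    """Fit y ~ sigmoid(b0 + b1*x) by gradient descent (toy, one predictor)."""
    b0 = b1 = 0.0
    n = len(xs)
    for _ in range(iters):
        g0 = g1 = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))
            g0 += (p - y) / n           # gradient of mean cross-entropy
            g1 += (p - y) * x / n
        b0 -= lr * g0
        b1 -= lr * g1
    return b0, b1

# Hypothetical data: x = 1 for one group, 0 for the other; y = pass/fail.
# Group x=1: 8 pass / 2 fail; group x=0: 4 pass / 6 fail.
xs = [1] * 10 + [0] * 10
ys = [1] * 8 + [0] * 2 + [1] * 4 + [0] * 6
b0, b1 = fit_logistic(xs, ys)
odds_ratio = math.exp(b1)
```

    Here the sample odds ratio is (8/2)/(4/6) = 6, and the fitted exp(b1) converges to the same value.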

  18. Analytical results for a three-phase traffic model.

    Science.gov (United States)

    Huang, Ding-wei

    2003-10-01

    We study analytically a cellular automaton model, which is able to present three different traffic phases on a homogeneous highway. The characteristics displayed in the fundamental diagram can be well discerned by analyzing the evolution of density configurations. Analytical expressions for the traffic flow and shock speed are obtained. The synchronized flow in the intermediate-density region is the result of an aggressive driving scheme and is determined mainly by the stochastic noise.
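
    For comparison, the simplest deterministic one-lane cellular automaton with vmax = 1 (elementary rule 184, a standard baseline rather than the three-phase model studied in the paper) has the exactly known fundamental diagram q(ρ) = min(ρ, 1 − ρ), and a direct ring simulation reproduces it:

```python
def step(road):
    """One parallel update of the deterministic v_max = 1 CA (rule 184).

    A car hops forward iff the next cell is currently empty.  Returns the
    new road and the number of cars that moved this step.
    """
    n = len(road)
    new = [0] * n
    moved = 0
    for i in range(n):
        if road[i]:
            if road[(i + 1) % n]:
                new[i] = 1            # blocked: stay put
            else:
                new[(i + 1) % n] = 1  # hop forward
                moved += 1
    return new, moved

def flow(density, n=100, warmup=200, measure=200):
    """Time-averaged flow q (moves per cell per step) on a ring."""
    ncars = int(round(density * n))
    road = [1 if i < ncars else 0 for i in range(n)]  # compact initial jam
    for _ in range(warmup):
        road, _ = step(road)
    total = 0
    for _ in range(measure):
        road, moved = step(road)
        total += moved
    return total / (measure * n)
```

    The kink at ρ = 1/2 separates free flow from the jammed branch; stochastic noise and more elaborate driving rules, as in the paper's model, deform this diagram and produce the intermediate synchronized-flow regime.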

  19. Soft tissue cell adhesion to titanium abutments after different cleaning procedures: preliminary results of a randomized clinical trial.

    Science.gov (United States)

    Canullo, L; Penarrocha-Oltra, D; Marchionni, S; Bagán, L; Peñarrocha-Diago, M-A; Micarelli, C

    2014-03-01

    A randomized controlled trial was performed to assess soft tissue cell adhesion to implant titanium abutments subjected to different cleaning procedures and to test whether plasma cleaning can enhance cell adhesion at an early healing time. Eighteen patients with osseointegrated and submerged implants were included. Before re-opening, 18 abutments were divided into 3 groups corresponding to different clinical conditions with different cleaning processes: no treatment (G1), laboratory customization and cleaning by steam (G2), and cleaning by plasma of argon (G3). Abutments were removed after 1 week, and scanning electron microscopy was used to analyze cell adhesion to the abutment surface quantitatively (percentage of area occupied by cells) and qualitatively (aspect of adhered cells and presence of contaminants). Mean percentages of area occupied by cells were 17.6 ± 22.7%, 16.5 ± 12.9% and 46.3 ± 27.9% for G1, G2 and G3, respectively. Differences were statistically significant between G1 and G3 (p=0.030), close to significance between G2 and G3 (p=0.056), and non-significant between G1 and G2 (p=0.530). The proportion of samples presenting adhered cells was homogeneous among the 3 groups (p-value = 1.000). In all cases cells presented a flattened aspect; in 2 cases cells were less efficiently adhered, and in 1 case cells presented filopodia. Three cases showed contamination with coccobacteria. Within the limits of the present study, plasma of argon may enhance cell adhesion to titanium abutments, even at the early stage of soft tissue healing. Further studies with larger samples are necessary to confirm these findings.

  20. Outcome of the Sauvé-Kapandji procedure for distal radioulnar joint disorder with rheumatoid arthritis or osteoarthritis: Results of one-year follow-up.

    Science.gov (United States)

    Ikeda, Mikinori; Kawabata, Akira; Suzuki, Keisuke; Toyama, Masahiko; Egi, Takeshi

    2017-08-24

    We performed the Sauvé-Kapandji (SK) procedure for treating disorders of the distal radioulnar joint (DRUJ) in patients with rheumatoid arthritis (RA) or osteoarthritis (OA). This study aimed to compare and clarify the results of the SK procedure between RA and OA patients. We report the one-year follow-up results of patients who underwent the SK procedure to correct DRUJ disorders caused by RA or OA. The study included 22 wrists of 19 patients with RA and 10 wrists of nine patients with OA. Pain, grip strength and range of motion of the wrist were examined clinically. To evaluate the stability of the carpus and the ulnar stump, and to assess bone union, parameters were measured using radiographs. The shortened Disabilities of the Arm, Shoulder and Hand questionnaire (QuickDASH) was used for functional evaluation. Wrist pain was reduced in all cases, and bone union was achieved in all wrists. The QuickDASH score improved significantly in both RA and OA patients. In patients with RA, the range of motion increased significantly with regard to supination but decreased significantly with regard to palmar flexion. Carpal alignment and ulnar stump stability were well maintained at one-year follow-up. The SK procedure for treating DRUJ disorders showed good results clinically and radiographically, irrespective of RA or OA.

  1. Challenges in validating model results for first year ice

    Science.gov (United States)

    Melsom, Arne; Eastwood, Steinar; Xie, Jiping; Aaboe, Signe; Bertino, Laurent

    2017-04-01

    In order to assess the quality of model results for the distribution of first-year ice, a comparison with a product based on observations from satellite-borne instruments has been performed. Such a comparison is not straightforward due to the contrasting algorithms that are used in the model product and the remote sensing product. The implementation of the validation is discussed in light of the differences between this set of products, and validation results are presented. The model product is the daily updated 10-day forecast from the Arctic Monitoring and Forecasting Centre in CMEMS. The forecasts are produced with the assimilative ocean prediction system TOPAZ. Presently, observations of sea ice concentration and sea ice drift are introduced in the assimilation step, but data for sea ice thickness and ice age (or roughness) are not included. The model computes the age of the ice by recording and updating the time passed after ice formation as sea ice grows and deteriorates as it is advected inside the model domain. Ice that is younger than 365 days is classified as first-year ice. The fraction of first-year ice is recorded as a tracer in each grid cell. The Ocean and Sea Ice Thematic Assembly Centre in CMEMS redistributes a daily product from the EUMETSAT OSI SAF of gridded sea ice conditions which includes "ice type", a representation of the separation between regions covered by first-year ice and those covered by multi-year ice. The ice type is parameterized based on data for the gradient ratio GR(19,37) from SSMIS observations, and from the ASCAT backscatter parameter. This product also includes information on ambiguity in the processing of the remote sensing data and on the product's confidence level, both of which have a strong seasonal dependency.
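
    The age bookkeeping described above can be caricatured per grid cell: the tracer advances while ice persists, resets when ice forms anew, and anything younger than 365 days counts as first-year ice. A deliberately simplified sketch (the real system advects the tracer and works with fractional ice concentrations):

```python
def update_age(age_days, had_ice, has_ice):
    """Advance a per-cell ice-age tracer by one day.

    age_days: current age in days (None if the cell is ice free).
    had_ice / has_ice: ice presence before and after this step.
    """
    if not has_ice:
        return None          # ice melted away, or cell stays open water
    if not had_ice:
        return 0             # new ice formed this step: age restarts
    return age_days + 1      # existing ice gets one day older

def is_first_year(age_days):
    """First-year ice is ice younger than 365 days."""
    return age_days is not None and age_days < 365
```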

  2. Results of Satellite Brightness Modeling Using Kriging Optimized Interpolation

    Science.gov (United States)

    Weeden, C.; Hejduk, M.

    At the 2005 AMOS conference, Kriging Optimized Interpolation (KOI) was presented as a tool to model satellite brightness as a function of phase angle and solar declination angle (J.M. Okada and M.D. Hejduk). Since November 2005, this method has been used to support the tasking algorithm for all optical sensors in the Space Surveillance Network (SSN). The satellite brightness maps generated by the KOI program are compared to each sensor's ability to detect an object as a function of the brightness of the background sky and angular rate of the object. This will determine if the sensor can technically detect an object based on an explicit calculation of the object's probability of detection. In addition, recent upgrades at Ground-Based Electro Optical Deep Space Surveillance (GEODSS) sites have increased the amount and quality of brightness data collected and therefore available for analysis. This in turn has provided enough data to study the modeling process in more detail in order to obtain the most accurate brightness prediction of satellites. Analysis of two years of brightness data gathered from optical sensors and modeled via KOI solutions are outlined in this paper. By comparison, geo-stationary objects (GEO) were tracked less than non-GEO objects but had higher density tracking in phase angle due to artifacts of scheduling. A statistically-significant fit to a deterministic model was possible less than half the time in both GEO and non-GEO tracks, showing that a stochastic model must often be used alone to produce brightness results, but such results are nonetheless serviceable. Within the Kriging solution, the exponential variogram model was the most frequently employed in both GEO and non-GEO tracks, indicating that monotonic brightness variation with both phase and solar declination angle is common and testifying to the suitability of applying regionalized variable theory to this particular problem. Finally, the average nugget value, or
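
    The core of a KOI-style brightness map is ordinary kriging. Below is a minimal 1D sketch with an exponential covariance model and hypothetical sill/range values; the operational tool works in two dimensions (phase angle, solar declination angle) and fits the variogram to the tracking data:

```python
import math

def solve(A, b):
    """Gauss-Jordan elimination with partial pivoting (small dense systems)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c and M[r][c]:
                f = M[r][c] / M[c][c]
                for k in range(c, n + 1):
                    M[r][k] -= f * M[c][k]
    return [M[i][n] / M[i][i] for i in range(n)]

def krige(xs, ys, x0, sill=1.0, rng=2.0):
    """Ordinary kriging prediction at x0 with C(h) = sill * exp(-h / rng)."""
    n = len(xs)

    def cov(p, q):
        return sill * math.exp(-abs(p - q) / rng)

    # Ordinary kriging system:  [C  1][w ]   [c0]
    #                           [1' 0][mu] = [1 ]
    A = [[cov(xs[i], xs[j]) for j in range(n)] + [1.0] for i in range(n)]
    A.append([1.0] * n + [0.0])
    b = [cov(xs[i], x0) for i in range(n)] + [1.0]
    w = solve(A, b)[:n]
    return sum(w[i] * ys[i] for i in range(n))
```

    A characteristic property exercised here is exactness: with no nugget term, the kriging predictor returns the observed value at a data location.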

  3. Titan Chemistry: Results From A Global Climate Model

    Science.gov (United States)

    Wilson, Eric; West, R. A.; Friedson, A. J.; Oyafuso, F.

    2008-09-01

    We present results from a 3-dimensional global climate model of Titan's atmosphere and surface. This model, a modified version of NCAR's CAM-3 (Community Atmosphere Model), has been optimized for analysis of Titan's lower atmosphere and surface. With the inclusion of forcing from Saturn's gravitational tides, interaction from the surface, transfer of longwave and shortwave radiation, and parameterization of haze properties, constrained by Cassini observations, a dynamical field is generated, which serves to advect 14 long-lived species. The concentrations of these chemical tracers are also affected by 82 chemical reactions and the photolysis of 21 species, based on the Wilson and Atreya (2004) model, that provide sources and sinks for the advected species along with 23 additional non-advected radicals. In addition, the chemical contribution to haze conversion is parameterized along with the microphysical processes that serve to distribute haze opacity throughout the atmosphere. References Wilson, E.H. and S.K. Atreya, J. Geophys. Res., 109, E06002, 2004.

  4. Why Does a Kronecker Model Result in Misleading Capacity Estimates?

    CERN Document Server

    Raghavan, Vasanthan; Sayeed, Akbar M

    2008-01-01

    Many recent works that study the performance of multi-input multi-output (MIMO) systems in practice assume a Kronecker model where the variances of the channel entries, upon decomposition on to the transmit and the receive eigen-bases, admit a separable form. Measurement campaigns, however, show that the Kronecker model results in poor estimates for capacity. Motivated by these observations, a channel model that does not impose a separable structure has been recently proposed and shown to fit the capacity of measured channels better. In this work, we show that this recently proposed modeling framework can be viewed as a natural consequence of channel decomposition on to its canonical coordinates, the transmit and/or the receive eigen-bases. Using tools from random matrix theory, we then establish the theoretical basis behind the Kronecker mismatch at the low- and the high-SNR extremes: 1) Sparsity of the dominant statistical degrees of freedom (DoF) in the true channel at the low-SNR extreme, and 2) Non-regul...
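
    In the notation commonly used for this problem (a sketch of the standard setup, not equations taken from this paper), the channel is decomposed onto its transmit and receive eigenbases, and the Kronecker assumption amounts to separability of the resulting variance profile:

```latex
% Decomposition onto canonical coordinates (receive/transmit eigenbases):
H = U_r \, H_{\mathrm{ind}} \, U_t^{H}, \qquad
V_{ij} \triangleq \mathbb{E}\!\left[ \lvert (H_{\mathrm{ind}})_{ij} \rvert^{2} \right].
% Kronecker model: V is forced to be separable (an outer product),
V_{ij} = a_i \, b_j
\quad\Longleftrightarrow\quad
R = \mathbb{E}\!\left[ \operatorname{vec}(H)\operatorname{vec}(H)^{H} \right]
  = R_t \otimes R_r
% (up to vectorization convention); measured channels generally exhibit a
% non-separable V, which is what drives the capacity mismatch.
```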

  5. New DNS and modeling results for turbulent pipe flow

    Science.gov (United States)

    Johansson, Arne; El Khoury, George; Grundestam, Olof; Schlatter, Philipp; Brethouwer, Geert; Linne Flow Centre Team

    2013-11-01

    The near-wall region of turbulent pipe and channel flows (as well as zero-pressure-gradient boundary layers) has been shown to exhibit a very high degree of similarity in terms of all statistical moments and many other features, while the mean velocity profile in the two cases exhibits significant differences in the outer region. The wake part of the profile, i.e. the deviation from the log-law, in the outer region is of substantially larger amplitude in pipe flow as compared to channel flow (although weaker than in boundary layer flow). This intriguing feature has been well known but has no simple explanation. Model predictions typically give identical results for the two flows. We have analyzed a new set of DNS for pipe and channel flows (El Khoury et al. 2013, Flow, Turbulence and Combustion) for friction Reynolds numbers up to 1000 and made comparative calculations with differential Reynolds stress models (DRSM). We have strong indications that the key factor behind the difference in mean velocity in the outer region can be coupled to differences in the turbulent diffusion in this region. This is also supported by DRSM results, where interesting differences are seen depending on the sophistication of modeling the turbulent diffusion coefficient.

  6. Some Results on Optimal Dividend Problem in Two Risk Models

    Directory of Open Access Journals (Sweden)

    Shuaiqi Zhang

    2010-12-01

    Full Text Available The compound Poisson risk model and the compound Poisson risk model perturbed by diffusion are considered in the presence of a dividend barrier with solvency constraints. This work extends the known result of [1], where the optimal dividend policy is found to be of barrier type for a jump-diffusion model with exponentially distributed jumps. In this paper, it turns out that there can be two different solutions depending on the model's parameters. Furthermore, an interesting result is obtained: the proportional transaction cost has no effect on the dividend barrier. The objective of the corporation is to maximize the cumulative expected discounted dividend payout with solvency constraints before the time of ruin. It is well known that under some reasonable assumptions the optimal dividend strategy is a barrier strategy, i.e., there is a level b_1 (respectively b_2) such that whenever the surplus goes above b_1 (b_2), the excess is paid out as dividends. However, the optimal level b_1 (b_2) may be unacceptably low from a solvency point of view. Therefore, constraints should be imposed on an insurance company, such as paying out dividends only once the surplus has reached a level b^1_c > b_1 (b^2_c > b_2). We show that in this case a barrier strategy at b^1_c (b^2_c) is optimal.

  7. Modeling results for the ITER cryogenic fore pump

    Science.gov (United States)

    Zhang, D. S.; Miller, F. K.; Pfotenhauer, J. M.

    2014-01-01

    The cryogenic fore pump (CFP) is designed for ITER to collect and compress hydrogen isotopes during the regeneration process of torus cryopumps. Different from common cryopumps, the ITER-CFP works in the viscous flow regime. As a result, both adsorption boundary conditions and transport phenomena contribute unique features to the pump performance. In this report, the physical mechanisms of cryopumping are studied, especially the diffusion-adsorption process, and these are coupled with standard equations of species, momentum and energy balance, as well as the equation of state. Numerical models are developed that comprise these highly coupled non-linear conservation equations, with thermal and kinetic properties treated as functions of temperature, pressure, and composition. To solve such a set of equations, a novel numerical technique, identified as the Group-Member numerical technique, is proposed. A 1D numerical model is presented here. The results include a comparison with experimental data for pure hydrogen flow and a prediction for hydrogen flow with trace helium. An advanced 2D model and a detailed explanation of the Group-Member technique are to be presented in subsequent papers.

  8. Generic Procedure for Coupling the PHREEQC Geochemical Modeling Framework with Flow and Solute Transport Simulators

    Science.gov (United States)

    Wissmeier, L. C.; Barry, D. A.

    2009-12-01

    Computer simulations of water availability and quality play an important role in state-of-the-art water resources management. However, many of the most utilized software programs focus either on physical flow and transport phenomena (e.g., MODFLOW, MT3DMS, FEFLOW, HYDRUS) or on geochemical reactions (e.g., MINTEQ, PHREEQC, CHESS, ORCHESTRA). In recent years, several couplings between both genres of programs have evolved in order to consider interactions between flow and biogeochemical reactivity (e.g., HP1, PHWAT). Software coupling procedures can be categorized as ‘close couplings’, where programs pass information via the memory stack at runtime, and ‘remote couplings’, where the information is exchanged at each time step via input/output files. The former generally involves modifications of software codes and therefore expert programming skills are required. We present a generic recipe for remotely coupling the PHREEQC geochemical modeling framework and flow and solute transport (FST) simulators. The iterative scheme relies on operator splitting with continuous re-initialization of PHREEQC and the FST of choice at each time step. Since PHREEQC calculates the geochemistry of aqueous solutions in contact with soil minerals, the procedure is primarily designed for couplings to FSTs for liquid-phase flow in natural environments. It requires access to initial conditions and numerical parameters, such as the time and space discretization, in the FST's input text file, and control of the FST via commands to the operating system (batch on Windows; bash/shell on Unix/Linux). The coupling procedure is based on PHREEQC's capability to save the state of a simulation with all solid, liquid and gaseous species as a PHREEQC input file by making use of the dump file option in the TRANSPORT keyword. The output from one reaction calculation step is therefore reused as input for the following reaction step where changes in element amounts due to advection
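
    Stripped of program-specific details, the remote coupling is a non-iterative operator-splitting loop: advance flow/transport, hand the state to the chemistry code, re-initialize, repeat. The sketch below abstracts both codes as plain callables with toy physics (1D plug flow plus first-order decay); in a real coupling each call would instead write an input file, launch the external program via the operating system, and parse its output or dump file:

```python
import math

def couple(conc, transport_step, reaction_step, nsteps, dt):
    """Non-iterative operator splitting: transport, then reaction, per step.

    conc: list of per-cell concentrations.  transport_step and
    reaction_step stand in for the FST simulator and the geochemical
    code; in a remote coupling each would be an external program driven
    through its input/output files rather than a Python function.
    """
    for _ in range(nsteps):
        conc = transport_step(conc, dt)
        conc = [reaction_step(c, dt) for c in conc]
    return conc

# Toy stand-ins: plug flow down a 1D column plus first-order decay.
def advect(conc, dt):
    return [1.0] + conc[:-1]          # inflow concentration fixed at 1.0

def decay(c, dt, k=0.1):
    return c * math.exp(-k * dt)      # exact one-step first-order decay
```

    After 10 steps the column carries an exponentially decaying profile, cell i holding water that has reacted for i + 1 steps.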

  9. Modern meta-heuristics based on nonlinear physics processes: A review of models and design procedures

    Science.gov (United States)

    Salcedo-Sanz, S.

    2016-10-01

    Meta-heuristic algorithms are problem-solving methods which try to find good-enough solutions to very hard optimization problems, at a reasonable computation time, where classical approaches fail or cannot even be applied. Many existing meta-heuristic approaches are nature-inspired techniques, which work by simulating or modeling different natural processes in a computer. Historically, many of the most successful meta-heuristic approaches have had a biological inspiration, such as evolutionary computation or swarm intelligence paradigms, but in the last few years new approaches based on modeling nonlinear physics processes have been proposed and applied with success. Nonlinear physics processes, modeled as optimization algorithms, are able to produce completely new search procedures with, in many cases, extremely effective exploration capabilities that can outperform existing optimization approaches. In this paper we review the most important optimization algorithms based on nonlinear physics, how they have been constructed from specific modeling of real phenomena, and also their novelty in terms of comparison with alternative existing algorithms for optimization. We first review important concepts on optimization problems, search spaces and problems' difficulty. Then, the usefulness of heuristic and meta-heuristic approaches for facing hard optimization problems is introduced, and some of the main existing classical versions of these algorithms are reviewed. The mathematical framework of different nonlinear physics processes is then introduced as a preparatory step to review in detail the most important meta-heuristics based on them. A discussion on the novelty of these approaches, their main computational implementation and design issues, and the evaluation of a novel meta-heuristic based on Strange Attractors mutation will be carried out to complete the review of these techniques.
We also describe some of the most important application areas, in

  10. A procedure for the change point problem in parametric models based on phi-divergence test-statistics

    CERN Document Server

    Batsidis, Apostolos; Pardo, Leandro; Zografos, Konstantinos

    2011-01-01

    This paper studies the change point problem for a general parametric, univariate or multivariate family of distributions. An information-theoretic procedure is developed which is based on general divergence measures for testing the hypothesis of the existence of a change. To compare the accuracy of the new test statistic, a simulation study is performed for the special case of a univariate discrete model. Finally, the procedure proposed in this paper is illustrated through a classical change-point example.
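
    For a concrete special case, take Bernoulli observations: a divergence-based scan then reduces to maximizing a likelihood-ratio statistic over candidate split points (for the Kullback-Leibler divergence the two coincide; the paper's phi-divergence family generalizes this choice). A sketch with synthetic data:

```python
import math

def bern_ll(ones, n):
    """Maximized Bernoulli log-likelihood of a segment with `ones` successes."""
    if ones == 0 or ones == n:
        return 0.0
    p = ones / n
    return ones * math.log(p) + (n - ones) * math.log(1 - p)

def change_point(xs, min_seg=5):
    """Return (k, stat): the split maximizing the likelihood-ratio statistic."""
    n = len(xs)
    total = sum(xs)
    best_k, best = None, -1.0
    for k in range(min_seg, n - min_seg + 1):
        left = sum(xs[:k])
        stat = 2 * (bern_ll(left, k) + bern_ll(total - left, n - k)
                    - bern_ll(total, n))
        if stat > best:
            best_k, best = k, stat
    return best_k, best
```

    In practice the maximized statistic is compared against a reference distribution to decide whether a change is significant; the sketch only locates the best candidate split.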

  11. SR-Site groundwater flow modelling methodology, setup and results

    Energy Technology Data Exchange (ETDEWEB)

    Selroos, Jan-Olof (Swedish Nuclear Fuel and Waste Management Co., Stockholm (Sweden)); Follin, Sven (SF GeoLogic AB, Taeby (Sweden))

    2010-12-15

    As a part of the license application for a final repository for spent nuclear fuel at Forsmark, the Swedish Nuclear Fuel and Waste Management Company (SKB) has undertaken three groundwater flow modelling studies. These are performed within the SR-Site project and represent time periods with different climate conditions. The simulations carried out contribute to the overall evaluation of the repository design and long-term radiological safety. Three time periods are addressed; the Excavation and operational phases, the Initial period of temperate climate after closure, and the Remaining part of the reference glacial cycle. The present report is a synthesis of the background reports describing the modelling methodology, setup, and results. It is the primary reference for the conclusions drawn in a SR-Site specific context concerning groundwater flow during the three climate periods. These conclusions are not necessarily provided explicitly in the background reports, but are based on the results provided in these reports. The main results and comparisons presented in the present report are summarised in the SR-Site Main report.

  12. Geochemical controls on shale groundwaters: Results of reaction path modeling

    Energy Technology Data Exchange (ETDEWEB)

    Von Damm, K.L.; VandenBrook, A.J.

    1989-03-01

    The EQ3NR/EQ6 geochemical modeling code was used to simulate the reaction of several shale mineralogies with different groundwater compositions in order to elucidate changes that may occur in both the groundwater compositions, and rock mineralogies and compositions under conditions which may be encountered in a high-level radioactive waste repository. Shales with primarily illitic or smectitic compositions were the focus of this study. The reactions were run at the ambient temperatures of the groundwaters and to temperatures as high as 250°C, the approximate temperature maximum expected in a repository. All modeling assumed that equilibrium was achieved and treated the rock and water assemblage as a closed system. Graphite was used as a proxy mineral for organic matter in the shales. The results show that the presence of even a very small amount of reducing mineral has a large influence on the redox state of the groundwaters, and that either pyrite or graphite provides essentially the same results, with slight differences in dissolved C, Fe and S concentrations. The thermodynamic data base is inadequate at the present time to fully evaluate the speciation of dissolved carbon, due to the paucity of thermodynamic data for organic compounds. In the illitic cases the groundwaters resulting from interaction at elevated temperatures are acid, while the smectitic cases remain alkaline, although the final equilibrium mineral assemblages are quite similar. 10 refs., 8 figs., 15 tabs.

  13. Multi-Model Combination techniques for Hydrological Forecasting: Application to Distributed Model Intercomparison Project Results

    Energy Technology Data Exchange (ETDEWEB)

    Ajami, N K; Duan, Q; Gao, X; Sorooshian, S

    2005-04-11

    This paper examines several multi-model combination techniques: the Simple Multi-model Average (SMA), the Multi-Model Super Ensemble (MMSE), Modified Multi-Model Super Ensemble (M3SE) and the Weighted Average Method (WAM). These model combination techniques were evaluated using the results from the Distributed Model Intercomparison Project (DMIP), an international project sponsored by the National Weather Service (NWS) Office of Hydrologic Development (OHD). All of the multi-model combination results were obtained using uncalibrated DMIP model outputs and were compared against the best uncalibrated as well as the best calibrated individual model results. The purpose of this study is to understand how different combination techniques affect the skill levels of the multi-model predictions. This study revealed that the multi-model predictions obtained from uncalibrated single model predictions are generally better than any single member model predictions, even the best calibrated single model predictions. Furthermore, more sophisticated multi-model combination techniques that incorporated bias correction steps work better than simple multi-model average predictions or multi-model predictions without bias correction.
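
    The two simplest schemes above can be sketched directly: SMA is a pointwise mean of the member forecasts, and a bias-correction step (reduced here to removing each member's mean bias over a training period, a minimal stand-in for the corrections used in WAM/M3SE) is what typically lifts the combination above the raw average. Data and member models below are synthetic:

```python
def mse(pred, obs):
    return sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs)

def simple_average(models):
    """SMA: pointwise mean of the member predictions."""
    return [sum(vals) / len(vals) for vals in zip(*models)]

def bias_corrected_average(models, obs_train):
    """Remove each member's mean bias (estimated on a training period)
    before averaging - a minimal caricature of a bias-correction step."""
    corrected = []
    for m in models:
        bias = sum(p - o for p, o in zip(m, obs_train)) / len(obs_train)
        corrected.append([p - bias for p in m])
    return simple_average(corrected)

# Synthetic example: one additively and one multiplicatively biased member.
obs = [float(i) for i in range(1, 11)]
model_a = [o + 1.0 for o in obs]     # constant positive bias
model_b = [1.1 * o for o in obs]     # multiplicative bias

sma = simple_average([model_a, model_b])
m3 = bias_corrected_average([model_a, model_b], obs)
```

    On this toy data the raw average beats the worst member but not the best one, while the bias-corrected average beats both, mirroring the paper's finding that bias correction is what makes combination schemes pay off.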

  14. VNIR spectral modeling of Mars analogue rocks: first results

    Science.gov (United States)

    Pompilio, L.; Roush, T.; Pedrazzi, G.; Sgavetti, M.

    Knowledge regarding the surface composition of Mars and other bodies of the inner solar system is fundamental to understanding their origin, evolution, and internal structures. Technological improvements of remote sensors and the associated implications for planetary studies have encouraged increased laboratory and field spectroscopy research to model the spectral behavior of terrestrial analogues of planetary surfaces. This approach has proven useful during Martian surface and orbital missions, and in petrologic studies of Martian SNC meteorites. Thermal emission data have been used to suggest two lithologies occurring on the Martian surface: basalt with abundant plagioclase and clinopyroxene, and andesite dominated by plagioclase and volcanic glass [1,2]. Weathered basalt has been suggested as an alternative to the andesite interpretation [3,4]. Orbital VNIR spectral imaging data also suggest the crust is dominantly basaltic, chiefly feldspar and pyroxene [5,6]. A few outcrops of ancient crust have higher concentrations of olivine and low-Ca pyroxene, and have been interpreted as cumulates [6]. Based upon these orbital observations, future lander/rover missions can be expected to encounter particulate soils, rocks, and rock outcrops. Approaches to qualitative and quantitative analysis of remotely acquired spectra have been successfully used to infer the presence and abundance of minerals and to discover compositionally associated spectral trends [7-9]. Both empirical [10] and mathematical [e.g. 11-13] methods have been applied, typically with full compositional knowledge, chiefly to particulate samples, and as a result they cannot be considered objective techniques for predicting compositional information, especially for understanding the spectral behavior of rocks. Extending the compositional modeling efforts to include more rocks and developing objective criteria for the modeling are the next required steps. This is the focus of the present investigation.
We present results of

  15. ITER CS Model Coil and CS Insert Test Results

    Energy Technology Data Exchange (ETDEWEB)

    Martovetsky, N; Michael, P; Minervina, J; Radovinsky, A; Takayasu, M; Thome, R; Ando, T; Isono, T; Kato, T; Nakajima, H; Nishijima, G; Nunoya, Y; Sugimoto, M; Takahashi, Y; Tsuji, H; Bessette, D; Okuno, K; Ricci, M

    2000-09-07

    The Inner and Outer modules of the Central Solenoid Model Coil (CSMC) were built by US and Japanese home teams in collaboration with European and Russian teams to demonstrate the feasibility of a superconducting Central Solenoid for ITER and other large tokamak reactors. The CSMC mass is about 120 t, OD is about 3.6 m and the stored energy is 640 MJ at 46 kA and peak field of 13 T. Testing of the CSMC and the CS Insert took place at Japan Atomic Energy Research Institute (JAERI) from mid March until mid August 2000. This paper presents the main results of the tests performed.

  16. Results of the benchmark for blade structural models, part A

    DEFF Research Database (Denmark)

    Lekou, D.J.; Chortis, D.; Belen Fariñas, A.;

    2013-01-01

    A benchmark on structural design methods for blades was performed within the InnWind.Eu project under WP2 “Lightweight Rotor” Task 2.2 “Lightweight structural design”. The benchmark is based on the reference wind turbine and the reference blade provided by DTU [1]. “Structural Concept developers/modelers” of WP2 were provided with the necessary input for a comparison numerical simulation run, upon definition of the reference blade. The present document describes the results of the comparison simulation runs that were performed by the partners involved.

  17. Model independent analysis of dark energy I: Supernova fitting result

    CERN Document Server

    Gong, Y

    2004-01-01

    The nature of dark energy is a mystery. This paper uses supernova data to explore the properties of dark energy with several model-independent methods. We first Taylor expand the scale factor $a(t)$ and find the deceleration parameter $q_0<0$; this result invokes only the Robertson-Walker metric. We then discuss several different parameterizations used in the literature. We find that the present dark energy equation of state $w_{\rm DE0}$ is less than $-1$ at the $1\sigma$ level for almost all parameterizations considered. We also find that the transition redshift from the deceleration phase to the acceleration phase is $z_{\rm T}\sim 0.3$.
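The kinematic step described above, assuming only the Robertson-Walker metric, corresponds to the standard low-redshift Taylor expansion (a textbook form, not necessarily the exact parameterization used in the paper):

```latex
a(t) = a_0\left[1 + H_0 (t - t_0) - \tfrac{1}{2} q_0 H_0^2 (t - t_0)^2 + \cdots\right],
\qquad
q_0 \equiv -\left.\frac{\ddot{a}\,a}{\dot{a}^2}\right|_{t_0},
\qquad
d_L(z) = \frac{c}{H_0}\left[z + \tfrac{1}{2}\,(1 - q_0)\,z^2 + \mathcal{O}(z^3)\right].
```

Fitting low-redshift supernova distance moduli to the expansion of $d_L(z)$ therefore constrains $q_0$ directly, with $q_0 < 0$ indicating accelerated expansion.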

  18. Preliminary results of steel containment vessel model test

    Energy Technology Data Exchange (ETDEWEB)

    Luk, V.K.; Hessheimer, M.F. [Sandia National Labs., Albuquerque, NM (United States); Matsumoto, T.; Komine, K.; Arai, S. [Nuclear Power Engineering Corp., Tokyo (Japan); Costello, J.F. [Nuclear Regulatory Commission, Washington, DC (United States)

    1998-04-01

    A high pressure test of a mixed-scaled model (1:10 in geometry and 1:4 in shell thickness) of a steel containment vessel (SCV), representing an improved boiling water reactor (BWR) Mark II containment, was conducted on December 11--12, 1996 at Sandia National Laboratories. This paper describes the preliminary results of the high pressure test. In addition, the preliminary post-test measurement data and the preliminary comparison of test data with pretest analysis predictions are also presented.

  19. Procedure and comparative analysis of results of silicon tracker modules testing for D0 (FNAL) collider experiment

    CERN Document Server

    Ermolov, P F; Karmanov, D E; Leflat, A; Merkin, M M; Shabalina, E K

    2002-01-01

    The silicon microstrip tracker consists of three main parts: the central cylindrical section, the internal disks, and the face disks. All parts of the tracker have a modular structure; each module contains one or several silicon detectors and a flexible printed circuit with an integrated read-out system. The methodology for testing the functional efficiency, reliability, and defect rate of the D0 tracker parts is described. The results of the disk-module testing are compared with the parameters of the disk detectors measured before assembly. The comparative analysis makes it possible to optimize the mass testing of the detectors and to work out criteria for evaluating detector quality.

  20. Analytical solution to the 1D Lemaitre's isotropic damage model and plane stress projected implicit integration procedure

    DEFF Research Database (Denmark)

    Andriollo, Tito; Thorborg, Jesper; Hattel, Jesper Henri

    2016-01-01

    …obtaining an integral relationship between total strain and effective stress. By means of the generalized binomial theorem, an expression in terms of infinite series is subsequently derived. The solution is found to simplify considerably existing techniques for material parameter identification based on optimization, as all issues associated with classical numerical solution procedures of the constitutive equations are eliminated. In addition, an implicit implementation of the plane-stress projected version of Lemaitre's model is discussed, showing that the resulting algebraic system can be reduced to a single non-linear equation. The accuracy of the proposed integration scheme is then verified by means of the presented 1D analytical solution. Finally, a closed-form expression for the consistent tangent modulus taking damage evolution into account is given, and its impact on the convergence rate…

  1. Multi-Model Combination Techniques for Hydrological Forecasting: Application to Distributed Model Intercomparison Project Results

    Energy Technology Data Exchange (ETDEWEB)

    Ajami, N; Duan, Q; Gao, X; Sorooshian, S

    2006-05-08

    This paper examines several multi-model combination techniques: the Simple Multimodel Average (SMA), the Multi-Model Super Ensemble (MMSE), Modified Multi-Model Super Ensemble (M3SE) and the Weighted Average Method (WAM). These model combination techniques were evaluated using the results from the Distributed Model Intercomparison Project (DMIP), an international project sponsored by the National Weather Service (NWS) Office of Hydrologic Development (OHD). All of the multi-model combination results were obtained using uncalibrated DMIP model outputs and were compared against the best uncalibrated as well as the best calibrated individual model results. The purpose of this study is to understand how different combination techniques affect the skill levels of the multi-model predictions. This study revealed that the multi-model predictions obtained from uncalibrated single model predictions are generally better than any single member model predictions, even the best calibrated single model predictions. Furthermore, more sophisticated multi-model combination techniques that incorporated bias correction steps work better than simple multi-model average predictions or multi-model predictions without bias correction.
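The two simplest of the combination schemes named above can be sketched directly. In this sketch the weighted-average weights are assumed to come from a least-squares fit of member forecasts to observations over a training period, which is one common choice rather than necessarily the DMIP implementation; all data are synthetic:

```python
import numpy as np

def simple_multimodel_average(forecasts):
    """SMA: unweighted mean across ensemble members.
    forecasts: array of shape (n_models, n_times)."""
    return forecasts.mean(axis=0)

def weighted_average(forecasts, observations):
    """WAM sketch: least-squares weights fitted against observations,
    then applied to the member forecasts."""
    w, *_ = np.linalg.lstsq(forecasts.T, observations, rcond=None)
    return w @ forecasts

# Three hypothetical streamflow forecasts with systematic biases
obs = np.array([10.0, 12.0, 14.0, 13.0])
fc = np.vstack([obs * 0.8, obs * 1.1, obs + 1.0])

sma = simple_multimodel_average(fc)
wam = weighted_average(fc, obs)
```

Because the weighted scheme can compensate for member biases during training, it illustrates why the study finds bias-corrected combinations outperforming the simple average.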

  2. A computer touch screen system and training procedure for use with primate infants: Results from pigtail monkeys (Macaca nemestrina)

    NARCIS (Netherlands)

    Mandell, D.J.; Sackett, G.P.

    2008-01-01

    Computerized cognitive and perceptual testing has resulted in many advances towards understanding adult brain-behavior relations across a variety of abilities and species. However, there has been little migration of this technology to the assessment of very young primate subjects. We describe a trai

  5. Impact Flash Physics: Modeling and Comparisons With Experimental Results

    Science.gov (United States)

    Rainey, E.; Stickle, A. M.; Ernst, C. M.; Schultz, P. H.; Mehta, N. L.; Brown, R. C.; Swaminathan, P. K.; Michaelis, C. H.; Erlandson, R. E.

    2015-12-01

    horizontal. High-speed radiometer measurements were made of the time-dependent impact flash at wavelengths of 350-1100 nm. We will present comparisons between these measurements and the output of APL's model. The results of this validation allow us to determine basic relationships between observed optical signatures and impact conditions.

  6. Subsea Permafrost Climate Modeling - Challenges and First Results

    Science.gov (United States)

    Rodehacke, C. B.; Stendel, M.; Marchenko, S. S.; Christensen, J. H.; Romanovsky, V. E.; Nicolsky, D.

    2015-12-01

    Recent observations indicate that the East Siberian Arctic Shelf (ESAS) releases methane, which stems from shallow hydrate seabed reservoirs. The total amount of carbon within the ESAS is so large that release of only a small fraction, for example via taliks (columns of unfrozen sediment within the permafrost), could distinctly impact the global climate. It is therefore crucial to simulate the future fate of the ESAS subsea permafrost under changing atmospheric and oceanic conditions. However, only very few attempts to address the vulnerability of subsea permafrost have been made; instead, most studies have focused on the evolution of permafrost since the Late Pleistocene ocean transgression, approximately 14,000 years ago. In contrast to land permafrost modeling, any attempt to model the future fate of subsea permafrost needs to consider several additional factors, in particular the dependence of the freezing temperature on water depth and salt content, and the differences in ground heat flux depending on the seabed properties. The amount of unfrozen water in the sediment also needs to be taken into account. Using a system of coupled ocean, atmosphere, and permafrost models will allow us to capture the complexity of the different parts of the system and to evaluate the relative importance of different processes. Here we present the first results of a novel approach by means of dedicated permafrost model simulations, driven by conditions of the Laptev Sea region in East Siberia. By exploiting the ensemble approach, we show how uncertainties in boundary conditions and applied forcing scenarios control the future fate of the subsea permafrost.

  7. DARK STARS: IMPROVED MODELS AND FIRST PULSATION RESULTS

    Energy Technology Data Exchange (ETDEWEB)

    Rindler-Daller, T.; Freese, K. [Department of Physics and Michigan Center for Theoretical Physics, University of Michigan, Ann Arbor, MI 48109 (United States); Montgomery, M. H.; Winget, D. E. [Department of Astronomy, McDonald Observatory and Texas Cosmology Center, University of Texas, Austin, TX 78712 (United States); Paxton, B. [Kavli Institute for Theoretical Physics, University of California, Santa Barbara, CA 93106 (United States)

    2015-02-01

    We use the stellar evolution code MESA to study dark stars (DSs). DSs, which are powered by dark matter (DM) self-annihilation rather than by nuclear fusion, may be the first stars to form in the universe. We compute stellar models for accreting DSs with masses up to 10^6 M_☉. The heating due to DM annihilation is self-consistently included, assuming extended adiabatic contraction of DM within the minihalos in which DSs form. We find remarkably good overall agreement with previous models, which assumed polytropic interiors. There are some differences in the details, with positive implications for observability. We found that, in the mass range of 10^4-10^5 M_☉, our DSs are hotter by a factor of 1.5 than those in Freese et al., are smaller in radius by a factor of 0.6, denser by a factor of three to four, and more luminous by a factor of two. Our models also confirm previous results, according to which supermassive DSs are very well approximated by (n = 3)-polytropes. We also perform a first study of DS pulsations. Our DS models have pulsation modes with timescales ranging from less than a day to more than two years in their rest frames, at z ∼ 15, depending on DM particle mass and overtone number. Such pulsations may someday be used to identify bright, cool objects uniquely as DSs; if properly calibrated, they might, in principle, also supply novel standard candles for cosmological studies.
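The (n = 3)-polytrope approximation mentioned above refers to solutions of the Lane-Emden equation, the standard dimensionless form of hydrostatic equilibrium for a polytropic equation of state:

```latex
\frac{1}{\xi^2}\frac{d}{d\xi}\!\left(\xi^2 \frac{d\theta}{d\xi}\right) = -\theta^{\,n},
\qquad \theta(0) = 1,\quad \theta'(0) = 0,
\qquad \rho(\xi) = \rho_c\,\theta^{\,n}(\xi),
```

where $n = 3$ corresponds to a radiation-pressure-dominated interior, which is why it serves as a good approximation for supermassive dark stars.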

  8. Field Test Evaluation of Conservation Retrofits of Low-Income, Single-Family Buildings in Wisconsin: Blower-Door-Directed Infiltration Reduction Procedure, Field Test Implementation and Results

    Energy Technology Data Exchange (ETDEWEB)

    Gettings, M.B.

    2001-05-21

    A blower-door-directed infiltration retrofit procedure was field tested on 18 homes in south central Wisconsin. The procedure, developed by the Wisconsin Energy Conservation Corporation, includes recommended retrofit techniques as well as criteria for estimating the amount of cost-effective work to be performed on a house. A recommended expenditure level and a target air leakage reduction, in air changes per hour at 50 Pascal (ACH50), are determined from the measured initial leakage rate. The procedure produced an average 16% reduction in air leakage rate. For the 7 houses recommended for retrofit, 89% of the targeted reductions were accomplished with 76% of the recommended expenditures. The average cost of retrofits per house was reduced by a factor of four compared with previous programs. The average payback period for recommended retrofits was 4.4 years, based on predicted energy savings computed from achieved air leakage reductions. Although exceptions occurred, the procedure's 8 ACH50 minimum initial leakage rate for advising retrofits to be performed appeared to be a good choice, based on cost-effective air leakage reduction. Houses with initial rates of 7 ACH50 or below consistently required substantially higher costs to achieve significant air leakage reductions. No statistically significant average annual energy savings was detected as a result of the infiltration retrofits. Average measured savings were -27 therms per year, indicating an increase in energy use, with a 90% confidence interval of 36 therms. Measured savings for individual houses varied widely in both positive and negative directions, indicating that factors not considered affected the results. Large individual confidence intervals indicate a need to increase the accuracy of such measurements as well as to understand the factors which may cause such disparity. Recommendations for the procedure include more extensive training of retrofit crews, checks for minimum air exchange rates to ensure air

  9. Convergence results for a coarsening model using global linearization

    CERN Document Server

    Gallay, T; Gallay, Th.

    2002-01-01

    We study a coarsening model describing the dynamics of interfaces in the one-dimensional Allen-Cahn equation. Given a partition of the real line into intervals of length greater than one, the model consists in constantly eliminating the shortest interval of the partition by merging it with its two neighbors. We show that the mean-field equation for the time-dependent distribution of interval lengths can be explicitly solved using a global linearization transformation. This allows us to derive rigorous results on the long-time asymptotics of the solutions. If the average length of the intervals is finite, we prove that all distributions approach a uniquely determined self-similar solution. We also obtain global stability results for the family of self-similar profiles which correspond to distributions with infinite expectation.
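The elimination rule described above, merging the shortest interval with its two neighbors, can be sketched as a toy simulation (the edge handling for boundary intervals is an assumption made for illustration; the paper works on a partition of the whole real line):

```python
def coarsen(lengths, steps):
    """Toy coarsening dynamics: at each step, the shortest interval
    is merged with its neighbors into a single interval.
    Total length is conserved by construction."""
    xs = list(lengths)
    for _ in range(steps):
        if len(xs) < 3:
            break
        i = min(range(len(xs)), key=xs.__getitem__)  # shortest interval
        lo, hi = max(i - 1, 0), min(i + 1, len(xs) - 1)
        xs[lo:hi + 1] = [sum(xs[lo:hi + 1])]  # merge the triple
    return xs

print(coarsen([2.0, 1.5, 3.0, 4.0], 1))  # -> [6.5, 4.0]
```

Iterating this rule on a large random partition and histogramming the interval lengths is a quick empirical check of the self-similar distribution the paper derives analytically.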

  10. Compressible Turbulent Channel Flows: DNS Results and Modeling

    Science.gov (United States)

    Huang, P. G.; Coleman, G. N.; Bradshaw, P.; Rai, Man Mohan (Technical Monitor)

    1994-01-01

    The present paper addresses some topical issues in modeling compressible turbulent shear flows. The work is based on direct numerical simulation of two supersonic fully developed channel flows between very cold isothermal walls. Detailed decomposition and analysis of terms appearing in the momentum and energy equations are presented. The simulation results are used to provide insights into differences between conventional time- and Favre-averaging of the mean-flow and turbulent quantities. Study of the turbulence energy budget for the two cases shows that the compressibility effects due to turbulent density and pressure fluctuations are insignificant. In particular, the dilatational dissipation and the mean product of the pressure and dilatation fluctuations are very small, contrary to the results of simulations for sheared homogeneous compressible turbulence and to recent proposals for models for general compressible turbulent flows. This provides a possible explanation of why the Van Driest density-weighted transformation is so successful in correlating compressible boundary layer data. Finally, it is found that the DNS data do not support the strong Reynolds analogy. A more general representation of the analogy is analysed and shown to match the DNS data very well.
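The distinction between the two averages discussed above is the density weighting in the Favre (mass-weighted) average, in the standard notation:

```latex
\tilde{f} \equiv \frac{\overline{\rho f}}{\bar{\rho}},
\qquad
f = \bar{f} + f' \quad\text{(Reynolds decomposition)},
\qquad
f = \tilde{f} + f'' \quad\text{with}\quad \overline{\rho f''} = 0,
```

where the overbar denotes the conventional time average. Favre averaging absorbs density fluctuations into the mean, which is why the compressible mean-flow equations retain nearly their incompressible form under this decomposition.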

  11. Validation tests of open-source procedures for digital camera calibration and 3D image-based modelling

    OpenAIRE

    I. Toschi; Rivola, R.; Bertacchini, E; Castagnetti, C.; M. Dubbini; Capra, A.

    2013-01-01

    Among the many open-source software solutions recently developed for the extraction of point clouds from a set of un-oriented images, the photogrammetric tools Apero and MicMac (IGN, Institut Géographique National) aim to distinguish themselves by focusing on the accuracy and the metric content of the final result. This paper firstly aims at assessing the accuracy of the simplified and automated calibration procedure offered by the IGN tools. Results obtained with this procedure were...

  12. Numerical Results of 3-D Modeling of Moon Accumulation

    Science.gov (United States)

    Khachay, Yurie; Anfilogov, Vsevolod; Antipin, Alexandr

    2014-05-01

    Until recently, the preferred model of the Moon's origin has been the mega-impact model, in which the formation of the Earth and its satellite is the consequence of the Earth's collision with a body of Mercury's mass. However, all dynamical models of the Earth's accumulation, and estimates based on the Pb-Pb system, lead to the conclusion that the duration of planetary accumulation was about 1 billion years, while isotopic results from the W-Hf system testify to a very early (5-10 million years) separation of the geochemical reservoirs of the core and mantle. In [1,2] it is shown that the energy dissipated by the decay of short-lived radioactive elements, first of all Al26, is sufficient to heat even small bodies, about (50-100) km in size, up to the iron melting temperature, so that a principally new differentiation mechanism can be realized. The inner, predominantly iron parts of the melted pre-planetary bodies can merge, while the cold silicate fragments return to the supply zone, shifting the composition of the Moon-forming material further toward silicates. Only after the Earth's gravitational radius has increased can the growing region of the future Earth's core also retain the silicate envelope fragments [3]. For understanding the further evolution of the Earth-Moon system it is important to trace the origin and evolution of the heterogeneities that arise during the accumulation stage. In this paper we model the changes of temperature, pressure, and matter flow velocity in a block of a 3D spherical body with a growing radius. The boundary problem is solved by the finite-difference method for the system of equations describing the accumulation process: the Safronov equation, the impulse balance equation, the Navier-Stokes equations, and the equations for above-lithostatic pressure and heat conductivity in velocity-pressure variables using the Boussinesq approximation. The numerical algorithm of the problem solution in velocity

  13. [Kapandji-Sauvé procedure for chronic disorders of the distal radioulnar joint with special regard to the long-term results].

    Science.gov (United States)

    Daecke, W; Martini, A K; Streich, N A

    2003-05-01

    We present the preliminary results of a retrospective study of 56 patients who underwent the Kapandji-Sauvé procedure for chronic disorders of the distal radioulnar joint (DRUJ). Outcome was assessed with special regard to the long-term results. The average follow-up was 5.9 years (1 to 12 years); 15 of the 56 operations were performed before 1996. Most procedures were performed because of secondary arthrosis or chronic dislocation of the DRUJ after distal radius fracture. Patients were assessed for pain, range of motion of the wrist and forearm, and radiological features; the DASH score and the Mayo wrist score were used. Pain was improved in 94% of the patients, but only 53% were free of symptoms during heavy manual labour involving the operated side. In four cases symptoms of ulnar impingement were found. The improvement in range of motion of the wrist and forearm was significant. The postoperative DASH score was 22.6 +/- 20.0 and the Mayo wrist score 79.5 +/- 14.6. One non-union of the DRUJ with consecutive fracture of the fixation screw, and one case of algodystrophy, were found as postoperative complications. The only long-term complication was an incipient humeroradial arthrosis ten years after the operation. The results demonstrate high patient satisfaction and reliable improvement in range of motion, and confirm the Kapandji-Sauvé procedure to be a reliable salvage procedure for arthrosis or chronic dislocation of the DRUJ, even at long-term follow-up.

  14. An inverse modeling procedure to determine particle growth and nucleation rates from measured aerosol size distributions

    Directory of Open Access Journals (Sweden)

    B. Verheggen

    2006-01-01

    Classical nucleation theory is unable to explain the ubiquity of nucleation events observed in the atmosphere, which shows the need for an empirical determination of the nucleation rate. Here we present a novel inverse modeling procedure to determine particle nucleation and growth rates based on consecutive measurements of the aerosol size distribution. The particle growth rate is determined by regression analysis of the measured change in the aerosol size distribution over time, taking into account the effects of processes such as coagulation, deposition, and/or dilution. This allows the growth rate to be determined with a higher time resolution than can be deduced from inspecting contour plots ('banana plots'). Knowing the growth rate as a function of time enables the evaluation of the time of nucleation of measured particles of a certain size. The nucleation rate is then obtained by integrating the particle losses from the time of measurement back to the time of nucleation. The regression analysis can also be used to determine or verify the optimum value of other parameters of interest, such as the wall loss or coagulation rate constants. As an example, the method is applied to smog chamber measurements. This procedure offers a powerful interpretive tool to study empirical aerosol population dynamics in general, and nucleation and growth in particular.
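A deliberately reduced sketch of the growth-rate and back-extrapolation steps: here the regression is simply a least-squares fit of a modal diameter against time with hypothetical chamber data, whereas the actual procedure regresses the full size-distribution change and corrects for coagulation, deposition, and dilution:

```python
import numpy as np

# Hypothetical modal diameters (nm) observed at successive times (h)
t = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
d_mode = np.array([5.0, 9.9, 15.1, 20.0, 25.1])

# Growth rate (nm/h) from least-squares regression of diameter on time
GR, d_intercept = np.polyfit(t, d_mode, 1)

def nucleation_time(d_measured, t_measured, GR, d_nuc=1.0):
    """Back-extrapolate: the time at which a particle now measured at
    d_measured was nucleated at size d_nuc, assuming constant GR."""
    return t_measured - (d_measured - d_nuc) / GR
```

Once the nucleation time of each measured size is known, integrating the loss processes backwards over that interval yields the nucleation rate, as described in the abstract.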

  15. A comparison of the parameter estimating procedures for the Michaelis-Menten model.

    Science.gov (United States)

    Tseng, S J; Hsu, J P

    1990-08-23

    The performance of four parameter-estimating procedures for the adjustable parameters of the Michaelis-Menten model, the maximum initial rate Vmax and the Michaelis-Menten constant Km, is compared: the Lineweaver & Burk transformation (L-B), the Eadie & Hofstee transformation (E-H), the Eisenthal & Cornish-Bowden transformation (ECB), and the Hsu & Tseng random search (H-T). Analysis of the simulated data reveals the following: (i) Vmax can be estimated more precisely than Km. (ii) The sum of square errors, from smallest to largest, follows the sequence H-T, E-H, ECB, L-B. (iii) Considering the sum of square errors, relative error, and computing time, the overall performance follows the sequence H-T, L-B, E-H, ECB, from best to worst. (iv) The performance of E-H and ECB is on the same level. (v) L-B and E-H are appropriate for precisely measured data; H-T should be adopted for data whose error levels are high. (vi) Increasing the number of data points has a positive effect on the performance of H-T, and a negative effect on the performance of L-B, E-H, and ECB.
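The L-B procedure compared above linearizes the model via the double-reciprocal transform, 1/v = (Km/Vmax)(1/S) + 1/Vmax. A minimal sketch with synthetic, noise-free data (the parameter values are illustrative assumptions):

```python
import numpy as np

def lineweaver_burk_fit(S, v):
    """Estimate Vmax and Km by linear regression on the
    double-reciprocal transform: 1/v = (Km/Vmax)*(1/S) + 1/Vmax."""
    slope, intercept = np.polyfit(1.0 / S, 1.0 / v, 1)
    Vmax = 1.0 / intercept
    Km = slope * Vmax
    return Vmax, Km

# Noise-free synthetic Michaelis-Menten data
Vmax_true, Km_true = 10.0, 2.0
S = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0])
v = Vmax_true * S / (Km_true + S)

Vmax_est, Km_est = lineweaver_burk_fit(S, v)
print(Vmax_est, Km_est)  # -> 10.0, 2.0 (exact for noise-free data)
```

On noisy data the reciprocal transform weights low-rate points very heavily, which is consistent with the study's finding that a direct search such as H-T outperforms L-B when error levels are high.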

  16. Comparison of aerial survey procedures for estimating polar bear density: Results of pilot studies in northern Alaska

    Science.gov (United States)

    McDonald, Lyman L.; Garner, Gerald W.; Garner, Gerald W.; Amstrup, Steven C.; Laake, Jeffrey L.; Manly, Bryan F.J.; McDonald, Lyman L.; Robertson, Donna G.

    1999-01-01

    The U.S. Marine Mammal Protection Act (MMPA) and the International Agreement on the Conservation of Polar Bears mandate that the boundaries and sizes of polar bear (Ursus maritimus) populations be known so they can be managed at optimum sustainable levels. However, data to estimate polar bear numbers for the Chukchi/Bering Sea and Beaufort Sea populations in Alaska are limited. We evaluated aerial line transect methodology for assessing the size of these Alaskan polar bear populations during pilot studies in spring 1987 and summer 1994. In April and May 1987 we flew 12,239 km of transect lines in the northern Bering, Chukchi, and western Beaufort seas. In June 1994 we flew 6,244 km of transect lines in a primary survey unit using a helicopter, and 5,701 km of transect lines in a secondary survey unit using a fixed-wing aircraft in the Beaufort Sea. We examined visibility bias in aerial transect surveys, double counts by independent observers, single-season mark-resight methods, the suitability of using polar bear sign to stratify the study area, and adaptive sampling methods. Fifteen polar bear groups were observed during the 1987 study. The probability of detecting bears decreased with increasing perpendicular distance from the transect line, and the probability of detecting polar bear groups likely increased with increasing group size. We estimated population density in high density areas to be 446 km2/bear. In 1994, 15 polar bear groups were observed by independent front- and rear-seat observers on transect lines in the primary survey unit. Density estimates ranged from 284 km2/bear to 197 km2/bear depending on the model selected. Low polar bear numbers scattered over large areas of polar ice in 1987 indicated that spring is a poor time to conduct aerial surveys. Based on the 1994 survey we determined that ship-based helicopter or land-based fixed-wing aerial surveys conducted at the ice edge in late summer-early fall may produce robust density estimates for polar bear
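The line-transect estimates quoted follow the conventional distance-sampling form, in which the decline of detection probability with perpendicular distance is captured by a fitted detection function; in the standard notation:

```latex
\hat{D} = \frac{n\,\hat{f}(0)}{2L},
```

where $n$ is the number of groups detected, $L$ the total transect length, and $\hat{f}(0)$ the fitted probability density of perpendicular detection distances evaluated at zero; multiplying by the mean group size converts group density to animal density. The dependence of detection probability on group size noted in the abstract is handled by modeling $\hat{f}(0)$ with group size as a covariate.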

  17. Results of infrapopliteal endovascular procedures performed in diabetic patients with critical limb ischemia and tissue loss from the perspective of an angiosome-oriented revascularization strategy.

    Science.gov (United States)

    Acín, Francisco; Varela, César; López de Maturana, Ignacio; de Haro, Joaquín; Bleda, Silvia; Rodriguez-Padilla, Javier

    2014-01-01

    Our aim was to describe our experience with infrapopliteal endovascular procedures performed in diabetic patients with ischemic ulcers and critical limb ischemia (CLI). A retrospective study of 101 procedures was performed. Our cohort was divided into groups according to the number of tibial vessels attempted and the number of patent tibial vessels to the foot achieved. An angiosome-based anatomical classification of the ulcers was used to describe the local perfusion obtained after revascularization. Ischemic ulcer healing and limb salvage rates were measured. Ischemic ulcer healing at 12 months and limb salvage at 24 months were similar between a single revascularization attempt and multiple revascularization attempts. The group in which no patent tibial vessel to the foot was obtained presented lower healing and limb salvage rates. No differences were observed between obtaining a single patent tibial vessel and obtaining more than one. Indirect revascularization of the ulcer through arterial-arterial connections provided results similar to those obtained after direct revascularization via its specific angiosome tibial artery. Our results suggest that, in CLI diabetic patients with ischemic ulcers who undergo infrapopliteal endovascular procedures, better results are expected if at least one patent vessel is obtained and flow is restored to the local ischemic area of the foot.

  18. Comparison of blade-strike modeling results with empirical data

    Energy Technology Data Exchange (ETDEWEB)

    Ploskey, Gene R. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Carlson, Thomas J. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2004-03-01

    This study is the initial stage of further investigation into the dynamics of injury to fish during passage through a turbine runner. As part of the study, Pacific Northwest National Laboratory (PNNL) estimated the probability of blade strike, and associated injury, as a function of fish length and turbine operating geometry at two adjacent turbines in Powerhouse 1 of Bonneville Dam. Units 5 and 6 had identical intakes, stay vanes, wicket gates, and draft tubes, but Unit 6 had a new runner and curved discharge ring to minimize gaps between the runner hub and blades and between the blade tips and discharge ring. We used a mathematical model to predict blade strike associated with two Kaplan turbines and compared results with empirical data from biological tests conducted in 1999 and 2000. Blade-strike models take into consideration the geometry of the turbine blades and discharges as well as fish length, orientation, and distribution along the runner. The first phase of this study included a sensitivity analysis to consider the effects of difference in geometry and operations between families of turbines on the strike probability response surface. The analysis revealed that the orientation of fish relative to the leading edge of a runner blade and the location that fish pass along the blade between the hub and blade tip are critical uncertainties in blade-strike models. Over a range of discharges, the average prediction of injury from blade strike was two to five times higher than average empirical estimates of visible injury from shear and mechanical devices. Empirical estimates of mortality may be better metrics for comparison to predicted injury rates than other injury measures for fish passing at mid-blade and blade-tip locations.

  19. Position-sensitive transition edge sensor modeling and results

    Energy Technology Data Exchange (ETDEWEB)

    Hammock, Christina E-mail: chammock@milkyway.gsfc.nasa.gov; Figueroa-Feliciano, Enectali; Apodaca, Emmanuel; Bandler, Simon; Boyce, Kevin; Chervenak, Jay; Finkbeiner, Fred; Kelley, Richard; Lindeman, Mark; Porter, Scott; Saab, Tarek; Stahle, Caroline

    2004-03-11

    We report the latest design and experimental results for a Position-Sensitive Transition-Edge Sensor (PoST). The PoST is motivated by the desire to achieve a larger field-of-view without increasing the number of readout channels. A PoST consists of a one-dimensional array of X-ray absorbers connected on each end to a Transition Edge Sensor (TES). Position differentiation is achieved through a comparison of pulses between the two TESs and X-ray energy is inferred from a sum of the two signals. Optimizing such a device involves studying the available parameter space which includes device properties such as heat capacity and thermal conductivity as well as TES read-out circuitry parameters. We present results for different regimes of operation and the effects on energy resolution, throughput, and position differentiation. Results and implications from a non-linear model developed to study the saturation effects unique to PoSTs are also presented.

  20. Dark Stars: Improved Models and First Pulsation Results

    CERN Document Server

    Rindler-Daller, Tanja; Freese, Katherine; Winget, Donald E; Paxton, Bill

    2014-01-01

    (Abridged) We use the stellar evolution code MESA to study dark stars. Dark stars (DSs), which are powered by dark matter (DM) self-annihilation rather than by nuclear fusion, may be the first stars to form in the Universe. We compute stellar models for accreting DSs with masses up to 10^6 M_sun. While previous calculations were limited to polytropic interiors, our current calculations use MESA, a modern stellar evolution code to solve the equations of stellar structure. The heating due to DM annihilation is self-consistently included, assuming extended adiabatic contraction of DM within the minihalos in which DSs form. We find remarkably good overall agreement with the basic results of previous models. There are some differences, however, in the details, with positive implications for observability of DSs. We found that, in the mass range of 10^4 - 10^5 M_sun, using MESA, our DSs are hotter by a factor of 1.5 than those in Freese et al.(2010), are smaller in radius by a factor of 0.6, denser by a factor of 3...

  1. MODELING RESULTS FROM CESIUM ION EXCHANGE PROCESSING WITH SPHERICAL RESINS

    Energy Technology Data Exchange (ETDEWEB)

    Nash, C.; Hang, T.; Aleman, S.

    2011-01-03

    Ion exchange modeling was conducted at the Savannah River National Laboratory to compare the performance of two organic resins in support of Small Column Ion Exchange (SCIX). In-tank ion exchange (IX) columns are being considered for cesium removal at Hanford and the Savannah River Site (SRS). The spherical forms of resorcinol formaldehyde ion exchange resin (sRF) as well as a hypothetical spherical SuperLig® 644 (SL644) are evaluated for decontamination of dissolved saltcake wastes (supernates). Both SuperLig® and resorcinol formaldehyde resin beds can exhibit hydraulic problems in their granular (nonspherical) forms. SRS waste is generally lower in potassium and organic components than Hanford waste. Using VERSE-LC Version 7.8 along with the cesium Freundlich/Langmuir isotherms to simulate the waste decontamination in ion exchange columns, spherical SL644 was found to reduce column cycling by 50% for high-potassium supernates, but sRF performed equally well for the lowest-potassium feeds. Reduced cycling reduces nitric acid usage (resin elution) and sodium addition (resin regeneration), thereby significantly reducing life-cycle operational costs. These findings motivate the development of a spherical form of SL644. This work demonstrates the versatility of ion exchange modeling to study the effects of resin characteristics on processing cycles, rates, and cold chemical consumption. The value of a resin with increased selectivity for cesium over potassium can be assessed for further development.
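    The isotherm families named above take the following generic forms (a minimal sketch; the abstract does not give the fitted Freundlich/Langmuir hybrid used in VERSE-LC, so `q_max`, `k`, `k_f` and `n` are placeholder parameters):

```python
def langmuir(c, q_max, k):
    # Langmuir isotherm: resin loading saturates at q_max as the
    # liquid-phase concentration c grows
    return q_max * k * c / (1.0 + k * c)

def freundlich(c, k_f, n):
    # Freundlich isotherm: empirical power law, no saturation plateau
    return k_f * c ** n
```

    Either form maps a supernate cesium concentration to an equilibrium resin loading, which is what the column simulation integrates along the bed.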

  2. Dipole model test with one superconducting coil; results analysed

    CERN Document Server

    Durante, M; Ferracin, P; Fessia, P; Gauthier, R; Giloux, C; Guinchard, M; Kircher, F; Manil, P; Milanese, A; Millot, J-F; Muñoz Garcia, J-E; Oberli, L; Perez, J-C; Pietrowicz, S; Rifflet, J-M; de Rijk, G; Rondeaux, F; Todesco, E; Viret, P; Ziemianski, D

    2013-01-01

    This report is the deliverable report 7.3.1, “Dipole model test with one superconducting coil; results analysed”. The report has four parts: “Design report for the dipole magnet”, “Dipole magnet structure tested in LN2”, “Nb3Sn strand procured for one dipole magnet” and “One test double pancake copper coil made”. The four report parts show that, although the magnet construction will only be completed by the end of 2014, all elements are present for a successful completion. Given the importance of the project for the future of the participants and the significant investments they have made, there is full commitment to finishing the project.

  3. Dipole model test with one superconducting coil: results analysed

    CERN Document Server

    Bajas, H; Benda, V; Berriaud, C; Bajko, M; Bottura, L; Caspi, S; Charrondiere, M; Clément, S; Datskov, V; Devaux, M; Durante, M; Fazilleau, P; Ferracin, P; Fessia, P; Gauthier, R; Giloux, C; Guinchard, M; Kircher, F; Manil, P; Milanese, A; Millot, J-F; Muñoz Garcia, J-E; Oberli, L; Perez, J-C; Pietrowicz, S; Rifflet, J-M; de Rijk, G; Rondeaux, F; Todesco, E; Viret, P; Ziemianski, D

    2013-01-01

    This report is the deliverable report 7.3.1, “Dipole model test with one superconducting coil; results analysed”. The report has four parts: “Design report for the dipole magnet”, “Dipole magnet structure tested in LN2”, “Nb3Sn strand procured for one dipole magnet” and “One test double pancake copper coil made”. The four report parts show that, although the magnet construction will only be completed by the end of 2014, all elements are present for a successful completion. Given the importance of the project for the future of the participants and the significant investments they have made, there is full commitment to finishing the project.

  4. Communicable diseases prioritized for surveillance and epidemiological research: results of a standardized prioritization procedure in Germany, 2011.

    Directory of Open Access Journals (Sweden)

    Yanina Balabanova

    INTRODUCTION: To establish strategic priorities for the German national public health institute (RKI) and guide the institute's mid-term strategic decisions, we prioritized infectious pathogens in accordance with their importance for national surveillance and epidemiological research. METHODS: We used the Delphi process with internal (RKI) and external experts and a metric-consensus approach to score pathogens according to ten three-tiered criteria. Additional experts were invited to weight each criterion, leading to the calculation of a median weight by which each score was multiplied. We ranked the pathogens according to the total weighted score and divided them into four priority groups. RESULTS: 127 pathogens were scored. Eighty-six experts participated in the weighting; "Case fatality rate" was rated as the most important criterion. Twenty-six pathogens were ranked in the highest priority group; among those were pathogens of internationally recognised importance (e.g., Human Immunodeficiency Virus, Mycobacterium tuberculosis, Influenza virus, Hepatitis C virus, Neisseria meningitidis), pathogens frequently causing large outbreaks (e.g., Campylobacter spp.), and nosocomial pathogens associated with antimicrobial resistance. Other pathogens in the highest priority group included Helicobacter pylori, Respiratory Syncytial Virus, Varicella zoster virus and Hantavirus. DISCUSSION: While several pathogens from the highest priority group already have a high profile in national and international health policy documents, high scores for other pathogens (e.g., Helicobacter pylori, Respiratory Syncytial Virus or Hantavirus) indicate a possibly under-recognised importance within the current German public health framework. A process to strengthen the respective surveillance systems and research has been started. The prioritization methodology has worked well; its modular structure makes it potentially useful for other settings.
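    The scoring arithmetic described (three-tiered criterion scores, each multiplied by the median of the expert-assigned weights, then summed) can be sketched as follows; the criterion names and numbers are invented for illustration:

```python
from statistics import median

def weighted_total(criterion_scores, expert_weights):
    """criterion_scores: criterion -> tier score (e.g. 1-3);
    expert_weights: criterion -> list of weights given by the experts."""
    # each score is multiplied by the median expert weight for its
    # criterion, and pathogens are ranked by the summed result
    return sum(score * median(expert_weights[criterion])
               for criterion, score in criterion_scores.items())
```

    Ranking pathogens by this total and cutting the ranked list into quartiles reproduces the four priority groups described above.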

  5. Modeling soil bulk density through a complete data scanning procedure: Heuristic alternatives

    Science.gov (United States)

    Shiri, Jalal; Keshavarzi, Ali; Kisi, Ozgur; Karimi, Sepideh; Iturraran-Viveros, Ursula

    2017-06-01

    Soil bulk density (BD) is a very important factor in land drainage and reclamation, irrigation scheduling (for estimating the soil volumetric water content), and assessing soil carbon and nutrient stocks, as well as determining the pollutant mass balance in soils. Numerous pedotransfer functions have been suggested so far to relate soil BD values to soil parameters (e.g. soil separates, carbon content, etc.). The present paper aims at simulating soil BD from easily measured soil variables through heuristic gene expression programming (GEP), neural networks (NN), random forest (RF), support vector machine (SVM), and boosted regression trees (BT) techniques. The statistical Gamma test was utilized to identify the soil parameters most influential on BD. The applied models were assessed through k-fold testing, in which all the available data patterns are involved in both the training and testing stages, providing an accurate assessment of the models' accuracy. Some existing pedotransfer functions were also applied and compared with the heuristic models. The obtained results revealed that the heuristic GEP model outperformed the other applied models, both globally and per test stage. Moreover, the performance accuracy of the applied heuristic models was much better than that of the applied pedotransfer functions. Using k-fold testing provides a more detailed judgment of the models.
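    The k-fold testing scheme, in which every data pattern appears in both a training and a testing stage, can be sketched as follows (a hypothetical helper, not the authors' code):

```python
def k_fold_splits(n_samples, k):
    # assign sample indices to k folds round-robin; each fold serves once
    # as the test set while the remaining folds form the training set, so
    # every pattern is used for both training and testing
    folds = [list(range(i, n_samples, k)) for i in range(k)]
    for i, test in enumerate(folds):
        train = [idx for j, fold in enumerate(folds) if j != i for idx in fold]
        yield train, test
```

    Averaging a model's error over all k test folds is what gives the "globally and per test stage" comparison reported in the abstract.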

  6. Further Results on Dynamic Additive Hazard Rate Model

    Directory of Open Access Journals (Sweden)

    Zhengcheng Zhang

    2014-01-01

    In the past, proportional and additive hazard rate models have been investigated in the literature. Nanda and Das (2011) introduced and studied the dynamic proportional (reversed) hazard rate model. In this paper we study the dynamic additive hazard rate model and investigate its aging properties for different aging classes. The closure of the model under some stochastic orders has also been investigated. Some examples are given to illustrate different aging properties and stochastic comparisons of the model.

  7. Establishing the ferret as a gyrencephalic animal model of traumatic brain injury: Optimization of controlled cortical impact procedures.

    Science.gov (United States)

    Schwerin, Susan C; Hutchinson, Elizabeth B; Radomski, Kryslaine L; Ngalula, Kapinga P; Pierpaoli, Carlo M; Juliano, Sharon L

    2017-06-15

    Although rodent TBI studies provide valuable information regarding the effects of injury and recovery, an animal model with neuroanatomical characteristics closer to humans may provide a more meaningful basis for clinical translation. The ferret has a high white/gray matter ratio, gyrencephalic neocortex, and ventral hippocampal location. Furthermore, ferrets are amenable to behavioral training, have a body size compatible with pre-clinical MRI, and are cost-effective. We optimized the surgical procedure for controlled cortical impact (CCI) using 9 adult male ferrets. We used subject-specific brain/skull morphometric data from anatomical MRIs to overcome across-subject variability for lesion placement. We also reflected the temporalis muscle, closed the craniotomy, and used antibiotics. We then gathered MRI, behavioral, and immunohistochemical data from 6 additional animals using the optimized surgical protocol: 1 control, 3 mild, and 1 severely injured animals (surviving one week) and 1 moderately injured animal surviving sixteen weeks. The optimized surgical protocol resulted in consistent injury placement. Astrocytic reactivity increased with injury severity showing progressively greater numbers of astrocytes within the white matter. The density and morphological changes of microglia amplified with injury severity or time after injury. Motor and cognitive impairments scaled with injury severity. The optimized surgical methods differ from those used in the rodent, and are integral to success using a ferret model. We optimized ferret CCI surgery for consistent injury placement. The ferret is an excellent animal model to investigate pathophysiological and behavioral changes associated with TBI. Published by Elsevier B.V.

  8. Impact of Patient and Procedure Mix on Finances of Perinatal Centres - Theoretical Models for Economic Strategies in Perinatal Centres.

    Science.gov (United States)

    Hildebrandt, T; Kraml, F; Wagner, S; Hack, C C; Thiel, F C; Kehl, S; Winkler, M; Frobenius, W; Faschingbauer, F; Beckmann, M W; Lux, M P

    2013-08-01

    Introduction: In Germany, cost and revenue structures of hospitals with defined treatment priorities are currently being discussed to identify uneconomic services. This discussion has also affected perinatal centres (PNCs) and represents a new economic challenge for PNCs. In addition to optimising the time spent in hospital, the hospital management needs to define the "best" patient mix based on costs and revenues. Method: Different theoretical models were proposed based on the cost and revenue structures of the University Perinatal Centre for Franconia (UPF). Multi-step marginal costing was then used to show the impact on operating profits of changes in services and bed occupancy rates. The current contribution margin accounting used by the UPF served as the basis for the calculations. The models demonstrated the impact of changes in services on costs and revenues of a level 1 PNC. Results: Contribution margin analysis was used to calculate profitable and unprofitable DRGs based on average inpatient cost per day. Nineteen theoretical models were created. The current direct costing used by the UPF and a theoretical model with a 100 % bed occupancy rate were used as reference models. Significantly higher operating profits could be achieved by doubling the number of profitable DRGs and halving the number of less profitable DRGs. Operating profits could be increased even more by changing the rates of profitable DRGs per bed occupancy. The exclusive specialisation on pathological and high-risk pregnancies resulted in operating losses. All models which increased the numbers of caesarean sections or focused exclusively on c-sections resulted in operating losses. Conclusion: These theoretical models offer a basis for economic planning. They illustrate the enormous impact potential changes can have on the operating profits of PNCs. Level 1 PNCs require high bed occupancy rates and a profitable patient mix to cover the extremely high costs incurred due to the services
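    The multi-step marginal costing behind these models reduces, per DRG, to a contribution margin of (revenue − variable cost) × case volume; the sketch below is a simplification with invented figures, showing why doubling profitable DRGs moves the operating profit:

```python
def operating_profit(drgs, fixed_costs):
    """drgs: list of (revenue_per_case, variable_cost_per_case, cases)."""
    # each DRG contributes (revenue - variable cost) per case; the summed
    # contribution margins must cover fixed costs before any profit remains
    contribution = sum((rev - var) * n for rev, var, n in drgs)
    return contribution - fixed_costs
```

    With one DRG contributing 1000 per case over 100 cases against 50,000 of fixed costs, profit is 50,000; doubling the case volume of that DRG triples the profit, which is the lever the nineteen models explore.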

  9. Multiscale stochastic finite element method on random field modeling of geotechnical problems- a fast computing procedure

    Institute of Scientific and Technical Information of China (English)

    Xi F. XU

    2015-01-01

    The Green-function-based multiscale stochastic finite element method (MSFEM) has been formulated based on the stochastic variational principle. In this study a fast computing procedure based on the MSFEM is developed to solve random field geotechnical problems with a typical coefficient of variation less than 1. A unique advantage of the procedure is that computation is performed only at the locations of interest, thereby saving substantial computation. The numerical example on soil settlement shows that the procedure achieves significant computing efficiency compared with the Monte Carlo method.

  10. Multiscale modelling as a tool to prescribe realistic boundary conditions for the study of surgical procedures.

    Science.gov (United States)

    Laganà, K; Dubini, G; Migliavacca, F; Pietrabissa, R; Pennati, G; Veneziani, A; Quarteroni, A

    2002-01-01

    This work was motivated by the problems of analysing detailed 3D models of vascular districts with complex anatomy. It suggests an approach to prescribing realistic boundary conditions in order to obtain information on local as well as global haemodynamics. A method was developed which simultaneously solves the Navier-Stokes equations for local information and a non-linear system of ordinary differential equations for global information. This is based on the principle that an anatomically detailed 3D model of a cardiovascular district can be built using the finite element method. In turn, the finite element method requires a specific boundary condition set. The approach outlined in this work is to include the system of ordinary differential equations in the boundary condition set. This multiscale approach was first applied to two controls: (i) a 3D model of a straight tube in a simple hydraulic network and (ii) a 3D model of a straight coronary vessel in a lumped-parameter model of the cardiovascular system. The results obtained are very close to the solutions available for the pipe geometry. This paper also presents preliminary results from the application of the methodology to a particular haemodynamic problem: namely, the fluid dynamics of a systemic-to-pulmonary shunt in paediatric cardiac surgery.
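    The coupling idea — the 3D Navier-Stokes model exchanges flow and pressure with a lumped-parameter ODE network at its boundary — can be illustrated with a two-element Windkessel step (a generic sketch, not the authors' specific network):

```python
def windkessel_step(p, q_in, resistance, compliance, dt):
    # two-element Windkessel: C * dp/dt = q_in - p/R, forward Euler step.
    # In a multiscale scheme, the 3D model supplies the interface flow q_in
    # and receives the updated pressure p back as its boundary condition.
    return p + dt * (q_in - p / resistance) / compliance
```

    Iterating this step drives the interface pressure toward the steady value q_in × R, which is the global information the lumped model feeds back to the local 3D solve.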

  11. Mouse Model of Neurological Complications Resulting from Encephalitic Alphavirus Infection

    Science.gov (United States)

    Ronca, Shannon E.; Smith, Jeanon; Koma, Takaaki; Miller, Magda M.; Yun, Nadezhda; Dineley, Kelly T.; Paessler, Slobodan

    2017-01-01

    The long-term neurological complications, termed sequelae, that can result from viral encephalitis are not well understood. In human survivors, alphavirus encephalitis can cause severe neurobehavioral changes and, in the most extreme cases, a schizophrenia-like syndrome. In the present study, we aimed to adapt an animal model of alphavirus infection survival to study the development of these long-term neurological complications. Upon low-dose infection of wild-type C57BL/6 mice, asymptomatic and symptomatic groups were established and compared to mock-infected mice to measure general health and baseline neurological function, including the acoustic startle response and prepulse inhibition paradigm. Prepulse inhibition is a robust operational measure of sensorimotor gating, a fundamental form of information processing. Deficits in prepulse inhibition manifest as the inability to filter out extraneous sensory stimuli. Sensory gating is disrupted in schizophrenia and other mental disorders, as well as neurodegenerative diseases. Symptomatic mice developed deficits in prepulse inhibition that lasted through 6 months post infection; these deficits were absent in the asymptomatic and mock-infected groups. Accompanying the prepulse inhibition deficits, symptomatic animals exhibited thalamus damage as visualized with H&E staining, as well as increased GFAP expression in the posterior complex of the thalamus and the dentate gyrus of the hippocampus. These histological changes and increased GFAP expression were absent in the asymptomatic and mock-infected animals, indicating that glial scarring could have contributed to the prepulse inhibition phenotype observed in the symptomatic animals. This model provides a tool to test mechanisms of and treatments for the neurological sequelae of viral encephalitis and begins to delineate potential explanations for the development of such sequelae post infection.

  12. A Duality Result for the Generalized Erlang Risk Model

    Directory of Open Access Journals (Sweden)

    Lanpeng Ji

    2014-11-01

    In this article, we consider the generalized Erlang risk model and its dual model. By using a conditional measure-preserving correspondence between the two models, we derive an identity for two interesting conditional probabilities. Applications to the discounted joint density of the surplus prior to ruin and the deficit at ruin are also discussed.

  13. Pick-Up and Delivery Problem: Models and Single Vehicle Exact Procedures,

    Science.gov (United States)

    1985-06-01

    … appropriate Lagrangian ascent procedures (for example, see Bazaraa and Goode). A branch-and-bound algorithm for the single-vehicle problem is presented; during the branch-and-bound phase, a subgradient optimization procedure, suggested by Bazaraa and Goode for the TSP, was utilized to improve the value of F.

  14. Towards an American Model of Criminal Process: The Reform of the Polish Code of Criminal Procedure

    Directory of Open Access Journals (Sweden)

    Roclawska Monika

    2014-06-01

    In September 2013, the Polish Parliament passed an amendment to the Code of Criminal Procedure. The legislators decided to expand the number of adversarial elements present in current Polish criminal proceedings. When these changes come into effect (July 1, 2015), Polish criminal procedure will be similar to American regulations, in which the judge's role is to be an impartial arbitrator, not an investigator.

  15. On the evaluation of box model results: the case of BOXURB model.

    Science.gov (United States)

    Paschalidou, A K; Kassomenos, P A

    2009-08-01

    In the present paper, the BOXURB model results for the Greater Area of Athens, obtained by applying the model on an hourly basis over the 10-year period 1995-2004, are evaluated in both time and space against observed pollutant concentration time series from 17 monitoring stations. The evaluation is performed on a total, monthly, daily and hourly scale. The analysis also includes an evaluation of the model performance with regard to the meteorological parameters. Finally, the model is evaluated as an air quality forecasting and urban planning tool. Given the simplicity of the model and the complexity of the area's topography, the model results are found to be in good agreement with the measured pollutant concentrations, especially at the heavy-traffic stations. Therefore, the model can be used for regulatory purposes by authorities for time-efficient, simple and reliable estimation of air pollution levels within city boundaries.
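    Evaluating model output against station time series typically reduces to paired statistics like those below; the abstract does not list its exact metrics, so mean bias and RMSE are shown as a representative sketch:

```python
from math import sqrt

def mean_bias(predicted, observed):
    # positive bias means the model over-predicts concentrations on average
    return sum(p - o for p, o in zip(predicted, observed)) / len(observed)

def rmse(predicted, observed):
    # root-mean-square error penalizes large hourly mismatches
    return sqrt(sum((p - o) ** 2 for p, o in zip(predicted, observed)) / len(observed))
```

    Computing these per station and per aggregation window (total, monthly, daily, hourly) mirrors the multi-scale evaluation described above.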

  16. A Comparison of Item Parameter Standard Error Estimation Procedures for Unidimensional and Multidimensional Item Response Theory Modeling

    Science.gov (United States)

    Paek, Insu; Cai, Li

    2014-01-01

    The present study was motivated by the recognition that standard errors (SEs) of item response theory (IRT) model parameters are often of immediate interest to practitioners and that there is currently a lack of comparative research on different SE (or error variance-covariance matrix) estimation procedures. The present study investigated item…

  18. Reliability assessment of a manual-based procedure towards learning curve modeling and fmea analysis

    Directory of Open Access Journals (Sweden)

    Gustavo Rech

    2013-03-01

    Separation procedures in drug distribution centers (DCs) are manual activities prone to failures such as shipping exchanged, expired or broken drugs to the customer. Two interventions seem promising for improving the reliability of the separation procedure: (i) selection and allocation of appropriate operators to the procedure, and (ii) analysis of potential failure modes incurred by the selected operators. This article integrates Learning Curves (LC) and FMEA (Failure Mode and Effect Analysis) with the aim of reducing the occurrence of failures in the manual separation of a drug DC. The LC parameters enable generating an index that identifies the operators recommended to perform the procedures. The FMEA is then applied to the separation procedure carried out by the selected operators in order to identify failure modes. It also deploys the traditional FMEA severity index into two sub-indexes related to financial issues and damage to the company's image in order to characterize the severity of failures. When applied to a drug DC, the proposed method significantly reduced the frequency and severity of failures in the separation procedure.
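    A minimal sketch of the modified FMEA scoring: the classical risk priority number is severity × occurrence × detection, with severity here split into financial and image sub-indexes. Combining the sub-indexes by taking the maximum is an assumption of this sketch, not something the abstract states:

```python
def risk_priority_number(occurrence, detection, severity_financial, severity_image):
    # deploy severity into two sub-indexes and keep the worse of the two
    # (assumed combination rule), then form the usual S * O * D product
    severity = max(severity_financial, severity_image)
    return severity * occurrence * detection
```

    Failure modes are then addressed in descending order of this number, as in a conventional FMEA.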

  19. Final model independent result of DAMA/LIBRA-phase1

    Energy Technology Data Exchange (ETDEWEB)

    Bernabei, R.; D'Angelo, S.; Di Marco, A. [Università di Roma "Tor Vergata", Dipartimento di Fisica, Rome (Italy); INFN, sez. Roma "Tor Vergata", Rome (Italy); Belli, P. [INFN, sez. Roma "Tor Vergata", Rome (Italy); Cappella, F.; D'Angelo, A.; Prosperi, D. [Università di Roma "La Sapienza", Dipartimento di Fisica, Rome (Italy); INFN, sez. Roma, Rome (Italy); Caracciolo, V.; Castellano, S.; Cerulli, R. [INFN, Laboratori Nazionali del Gran Sasso, Assergi (Italy); Dai, C.J.; He, H.L.; Kuang, H.H.; Ma, X.H.; Sheng, X.D.; Wang, R.G. [Chinese Academy, IHEP, Beijing (China); Incicchitti, A. [INFN, sez. Roma, Rome (Italy); Montecchia, F. [INFN, sez. Roma "Tor Vergata", Rome (Italy); Università di Roma "Tor Vergata", Dipartimento di Ingegneria Civile e Ingegneria Informatica, Rome (Italy); Ye, Z.P. [Chinese Academy, IHEP, Beijing (China); University of Jing Gangshan, Jiangxi (China)

    2013-12-15

    The results obtained with the total exposure of 1.04 ton × yr collected by DAMA/LIBRA-phase1 deep underground at the Gran Sasso National Laboratory (LNGS) of the I.N.F.N. during 7 annual cycles (i.e. adding a further 0.17 ton × yr exposure) are presented. The DAMA/LIBRA-phase1 data give evidence for the presence of Dark Matter (DM) particles in the galactic halo, on the basis of the exploited model-independent DM annual modulation signature, by using a highly radio-pure NaI(Tl) target, at 7.5σ C.L. Including also the first-generation DAMA/NaI experiment (cumulative exposure 1.33 ton × yr, corresponding to 14 annual cycles), the C.L. is 9.3σ and the modulation amplitude of the single-hit events in the (2-6) keV energy interval is (0.0112±0.0012) cpd/kg/keV; the measured phase is (144±7) days and the measured period is (0.998±0.002) yr, values well in agreement with those expected for DM particles. No systematic or side reaction able to mimic the exploited DM signature has been found or suggested by anyone over more than a decade. (orig.)
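    The annual modulation signature quoted above corresponds to a single-hit rate S(t) = S0 + Sm cos(2π(t − t0)/T); the sketch below plugs in the measured amplitude, phase, and period, while the unmodulated rate S0 is left as a free parameter:

```python
from math import cos, pi

def single_hit_rate(t_days, s0, sm=0.0112, t0=144.0, period_days=0.998 * 365.25):
    # expected rate (cpd/kg/keV) in the (2-6) keV interval; the cosine
    # peaks at t = t0 (about June 2nd), as expected for DM particles
    return s0 + sm * cos(2.0 * pi * (t_days - t0) / period_days)
```

    The rate is maximal at the measured phase t0 and minimal half a period later, which is the yearly pattern the experiment tests for.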

  20. Infrared thermography for CFRP inspection: computational model and experimental results

    Science.gov (United States)

    Fernandes, Henrique C.; Zhang, Hai; Morioka, Karen; Ibarra-Castanedo, Clemente; López, Fernando; Maldague, Xavier P. V.; Tarpani, José R.

    2016-05-01

    Infrared Thermography (IRT) is a well-known Non-destructive Testing (NDT) technique. In the last decades it has been widely applied in several fields, including the inspection of composite materials (CM), especially fiber-reinforced polymer-matrix ones. It is therefore important to develop and improve efficient NDT techniques to inspect and assess the quality of CM parts in order to guarantee airworthiness and, at the same time, reduce the costs of airline companies. In this paper, active IRT is used to inspect a carbon fiber-reinforced polymer (CFRP) laminate with artificial inserts (built-in sample) placed on different layers prior to manufacture. Two optical active IRT techniques are used. The first is pulsed thermography (PT), the most widely utilized IRT technique. The second is line-scan thermography (LST): a dynamic technique which can be employed for the inspection of materials by heating a component line-by-line while acquiring a series of thermograms with an infrared camera. It is especially suitable for the inspection of large parts as well as complex-shaped parts. A computational model developed in COMSOL Multiphysics® was used to simulate the inspections. Sequences obtained from PT and LST were processed using principal component thermography (PCT) for comparison. Results showed that it is possible to detect inserts of different sizes at different depths using both the PT and LST IRT techniques.
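    Principal component thermography, used here to process the PT and LST sequences, can be sketched with an SVD over the flattened thermogram stack (a generic implementation; the exact preprocessing used in the paper is not specified):

```python
import numpy as np

def pct_components(sequence):
    """sequence: (n_frames, height, width) thermogram stack."""
    nt, h, w = sequence.shape
    a = sequence.reshape(nt, h * w).astype(float)
    a -= a.mean(axis=0)  # remove the per-pixel mean over time
    u, s, vt = np.linalg.svd(a, full_matrices=False)
    # rows of vt are empirical orthogonal functions (spatial patterns);
    # the first few typically enhance defect contrast over raw frames
    return vt.reshape(-1, h, w)
```

    Inspecting the leading spatial components instead of individual frames is what makes subsurface inserts stand out.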

  1. Spin-1 Ising model on tetrahedron recursive lattices: Exact results

    Science.gov (United States)

    Jurčišinová, E.; Jurčišin, M.

    2016-11-01

    We investigate the ferromagnetic spin-1 Ising model on the tetrahedron recursive lattices. An exact solution of the model is found in the framework of which it is shown that the critical temperatures of the second order phase transitions of the model are driven by a single equation simultaneously on all such lattices. It is also shown that this general equation for the critical temperatures is equivalent to the corresponding polynomial equation for the model on the tetrahedron recursive lattice with arbitrary given value of the coordination number. The explicit form of these polynomial equations is shown for the lattices with the coordination numbers z = 6, 9, and 12. In addition, it is shown that the thermodynamic properties of all possible physical phases of the model are also completely driven by the corresponding single equations simultaneously on all tetrahedron recursive lattices. In this respect, the spontaneous magnetization, the free energy, the entropy, and the specific heat of the model are studied in detail.

  2. Droplet Reaction and Evaporation of Agents Model (DREAM). Glass model results; Sand model plans

    NARCIS (Netherlands)

    Hin, A.R.T.

    2006-01-01

    The Agent Fate Program is generating an extensive set of quality agent fate data which is being used to develop highly accurate secondary evaporation predictive models. Models are being developed that cover a wide range of traditional chemical warfare agents deposited onto surfaces routinely found o

  3. The Next Step in Deployment of Computer Based Procedures For Field Workers: Insights And Results From Field Evaluations at Nuclear Power Plants

    Energy Technology Data Exchange (ETDEWEB)

    Oxstrand, Johanna; Le Blanc, Katya L.; Bly, Aaron

    2015-02-01

    The paper-based procedures currently used for nearly all activities in the commercial nuclear power industry have a long history of ensuring safe operation of the plants. However, there is potential to greatly increase efficiency and safety by improving how the human operator interacts with the procedures. One way to achieve these improvements is through the use of computer-based procedures (CBPs). A CBP system offers a wide variety of improvements, such as context-driven job aids, integrated human performance tools (e.g., placekeeping, correct component verification, etc.), and dynamic step presentation. The latter means that the CBP system displays only the relevant steps, based on operating mode, plant status, and the task at hand. A dynamic presentation of the procedure (also known as context-sensitive procedures) will guide the operator down the path of relevant steps based on the current conditions. This feature will reduce the operator's workload and inherently reduce the risk of incorrectly marking a step as not applicable, as well as the risk of incorrectly performing a step that should have been marked as not applicable. The research team at the Idaho National Laboratory has developed a prototype CBP system for field workers, which has been evaluated from a human factors and usability perspective in four laboratory studies. Based on the results from each study, revisions were made to the CBP system. However, a crucial step in gaining the end users' (e.g., auxiliary operators, maintenance technicians, etc.) acceptance is to put the system in their hands and let them use it as part of their everyday work activities. In the spring of 2014 the first field evaluation of the INL CBP system was conducted at a nuclear power plant. Auxiliary operators conduct a functional test of one of three backup air compressors each week. During the field evaluation activity, one auxiliary operator conducted the test with the paper-based procedure while a second auxiliary operator
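    A context-sensitive step presentation can be reduced to filtering on the current plant context; the step representation below is a hypothetical illustration, not the INL prototype's data model:

```python
def applicable_steps(steps, plant_mode):
    # present only the steps relevant to the current operating mode, so
    # the operator never has to mark a step "not applicable" by hand
    return [step for step in steps if plant_mode in step["modes"]]
```

    Extending the filter with plant status and task identifiers would give the full "operating mode, plant status, and task at hand" behavior described above.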

  4. Adapting a Markov Monte Carlo simulation model for forecasting the number of Coronary Artery Revascularisation Procedures in an era of rapidly changing technology and policy

    Directory of Open Access Journals (Sweden)

    Knuiman Matthew

    2008-06-01

    Full Text Available Abstract Background Treatments for coronary heart disease (CHD have evolved rapidly over the last 15 years with considerable change in the number and effectiveness of both medical and surgical treatments. This period has seen the rapid development and uptake of statin drugs and coronary artery revascularization procedures (CARPs that include Coronary Artery Bypass Graft procedures (CABGs and Percutaneous Coronary Interventions (PCIs. It is difficult in an era of such rapid change to accurately forecast requirements for treatment services such as CARPs. In a previous paper we have described and outlined the use of a Markov Monte Carlo simulation model for analyzing and predicting the requirements for CARPs for the population of Western Australia (Mannan et al, 2007. In this paper, we expand on the use of this model for forecasting CARPs in Western Australia with a focus on the lack of adequate performance of the (standard model for forecasting CARPs in a period during the mid 1990s when there were considerable changes to CARP technology and implementation policy and an exploration and demonstration of how the standard model may be adapted to achieve better performance. Methods Selected key CARP event model probabilities are modified based on information relating to changes in the effectiveness of CARPs from clinical trial evidence and an awareness of trends in policy and practice of CARPs. These modified model probabilities and the ones obtained by standard methods are used as inputs in our Markov simulation model. Results The projected numbers of CARPs in the population of Western Australia over 1995–99 only improve marginally when modifications to model probabilities are made to incorporate an increase in effectiveness of PCI procedures. 
However, the projected numbers improve substantially when, in addition, further modifications are incorporated that relate to the increased probability of a PCI procedure and the reduced probability of a CABG
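
    The Markov Monte Carlo approach described in this abstract can be illustrated with a toy cohort simulation. This is a minimal sketch only: the states, yearly transition probabilities, and the counting of procedures below are invented for illustration and are not those of Mannan et al. (2007).

```python
import random

# Hypothetical yearly transition probabilities for one simulated person
# (illustrative values only, not the fitted probabilities of the paper).
P = {
    "well": {"well": 0.97, "chd": 0.03},
    "chd":  {"chd": 0.85, "pci": 0.08, "cabg": 0.04, "dead": 0.03},
    "pci":  {"chd": 0.90, "pci": 0.06, "cabg": 0.02, "dead": 0.02},
    "cabg": {"chd": 0.95, "pci": 0.02, "cabg": 0.01, "dead": 0.02},
    "dead": {"dead": 1.0},
}

def step(state, rng):
    """Draw the next state from the current row of transition probabilities."""
    r, cum = rng.random(), 0.0
    for nxt, p in P[state].items():
        cum += p
        if r < cum:
            return nxt
    return state

def simulate(n_people=10_000, years=5, seed=1):
    """Count CARP events (entries into the PCI or CABG states) per year."""
    rng = random.Random(seed)
    carps = [0] * years
    for _ in range(n_people):
        state = "well"
        for y in range(years):
            state = step(state, rng)
            if state in ("pci", "cabg"):   # a procedure happened this year
                carps[y] += 1
    return carps

print(simulate())
```

    Adapting the model in the spirit of the abstract amounts to editing the rows of `P` over time to reflect changes in procedure effectiveness and policy.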

  5. Exsanguination of a home hemodialysis patient as a result of misconnected blood-lines during the wash back procedure: A case report

    Directory of Open Access Journals (Sweden)

    Allcock Kerryanne

    2012-05-01

    Full Text Available Abstract Background Home hemodialysis is common in New Zealand and associated with lower cost, improved survival and better patient experience. We present the case of a fully trained home hemodialysis patient who exsanguinated at home as a result of an incorrect wash back procedure. Case presentation The case involves a 67 year old male with a history of well controlled hypertension and impaired glucose tolerance. He commenced on peritoneal dialysis in 2006 following the development of end stage kidney failure secondary to focal segmental glomerulosclerosis. He transferred to hemodialysis due to peritoneal membrane failure in 2010, and successfully trained for home hemodialysis over a 20 week period. Following one month of uncomplicated dialysis at home, he was found deceased on his machine at home in the midst of dialysis. His death occurred during the wash back procedure performed using the “open circuit” method, and resulted from misconnection of the saline bag to the venous end of the extracorporeal blood circuit instead of the arterial end. This led to approximately 2.3L of his blood being pumped into the saline bag resulting in hypovolaemic shock and death from exsanguination. Conclusions Despite successful training, critical procedural errors can still be made by patients on home hemodialysis. In this case, the error involved misconnection of the saline bag for wash back. This case should prompt providers of home hemodialysis to review their training protocols and manuals. Manufacturers of dialysis machinery should be encouraged to design machines specifically for home hemodialysis, and consider distinguishing the arterial and venous ends of the extracorporeal blood circuit with colour coding or incompatible connectivity, to prevent occurrences such as these in the future.

  6. Effect of geometry of rice kernels on drying modeling results

    Science.gov (United States)

    Geometry of rice grain is commonly represented by sphere, spheroid or ellipsoid shapes in the drying models. Models using simpler shapes are easy to solve mathematically, however, deviation from the true grain shape might lead to large errors in predictions of drying characteristics such as, moistur...

  7. Urban traffic noise assessment by combining measurement and model results

    NARCIS (Netherlands)

    Eerden, F.J.M. van der; Graafland, F.; Wessels, P.W.; Basten, T.G.H.

    2013-01-01

    A model based monitoring system is applied on a local scale in an urban area to obtain a better understanding of the traffic noise situation. The system consists of a scalable sensor network and an engineering model. A better understanding is needed to take appropriate and cost efficient measures,

  8. Periodic Integration: Further Results on Model Selection and Forecasting

    NARCIS (Netherlands)

    Ph.H.B.F. Franses (Philip Hans); R. Paap (Richard)

    1996-01-01

    This paper considers model selection and forecasting issues in two closely related models for nonstationary periodic autoregressive time series [PAR]. Periodically integrated seasonal time series [PIAR] need a periodic differencing filter to remove the stochastic trend. On the other

  9. A Generic Procedure for the Assessment of the Effect of Concrete Admixtures on the Sorption of Radionuclides on Cement: Concept and Selected Results

    Energy Technology Data Exchange (ETDEWEB)

    Glaus, M.A.; Laube, A.; Van Loon, L.R.

    2004-03-01

    A screening procedure is proposed for the assessment of the effect of concrete admixtures on the sorption of radionuclides by cement. The procedure is both broad and generic, and can thus be used as input for the assessment of concrete admixtures which might be used in the future. The experimental feasibility and significance of the screening procedure are tested using selected concrete admixtures: i.e. sulfonated naphthalene-formaldehyde condensates, lignosulfonates, and a plasticiser used at PSI for waste conditioning. The effect of these on the sorption properties of Ni(II), Eu(III) and Th(IV) in cement is investigated using crushed Hardened Cement Paste (HCP), as well as cement pastes prepared in the presence of these admixtures. Strongly adverse effects on the sorption of the radionuclides tested are observed only in isolated cases, and under extreme conditions: i.e. at high ratios of concrete admixtures to HCP, and at low ratios of HCP to cement pore water. Under realistic conditions, both radionuclide sorption and the sorption of isosaccharinic acid (a strong complexant produced in cement-conditioned wastes containing cellulose) remain unaffected by the presence of concrete admixtures, which can be explained by their sorption onto the HCP. The pore-water concentrations of the concrete admixtures tested are thereby reduced to levels at which the formation of radionuclide complexes is no longer of importance. Further, the Langmuir sorption model, proposed for the sorption of concrete admixtures on HCP, suggests that the HCP surface does not become saturated, at least for those concrete admixtures tested. (author)
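
    The Langmuir sorption model mentioned at the end of the abstract has the standard closed form q = q_max·K·c/(1 + K·c). A minimal sketch, with purely illustrative parameter values (not fitted to the PSI data), shows the quasi-linear regime far below surface saturation, which corresponds to the abstract's observation that the HCP surface does not saturate:

```python
def langmuir(c, q_max, K):
    """Langmuir isotherm: sorbed amount q as a function of the
    equilibrium solution concentration c."""
    return q_max * K * c / (1.0 + K * c)

# Illustrative parameters (assumed, not taken from the paper):
q_max = 0.10   # mol sorbed per kg HCP at full surface coverage
K = 500.0      # L/mol, affinity constant

# Far below saturation (K*c << 1) the isotherm is nearly linear, i.e. a
# constant-Kd regime; only near saturation does uptake level off at q_max.
for c in (1e-5, 1e-4, 1e-3):
    print(c, langmuir(c, q_max, K))
```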

  10. Results from modeling and simulation of chemical downstream etch systems

    Energy Technology Data Exchange (ETDEWEB)

    Meeks, E.; Vosen, S.R.; Shon, J.W.; Larson, R.S.; Fox, C.A.; Buchenauer

    1996-05-01

    This report summarizes modeling work performed at Sandia in support of Chemical Downstream Etch (CDE) benchmark and tool development programs under a Cooperative Research and Development Agreement (CRADA) with SEMATECH. The Chemical Downstream Etch (CDE) Modeling Project supports SEMATECH Joint Development Projects (JDPs) with Matrix Integrated Systems, Applied Materials, and Astex Corporation in the development of new CDE reactors for wafer cleaning and stripping processes. These dry-etch reactors replace wet-etch steps in microelectronics fabrication, enabling compatibility with other process steps and reducing the use of hazardous chemicals. Models were developed at Sandia to simulate the gas flow, chemistry and transport in CDE reactors. These models address the essential components of the CDE system: a microwave source, a transport tube, a showerhead/gas inlet, and a downstream etch chamber. The models have been used in tandem to determine the evolution of reactive species throughout the system, and to make recommendations for process and tool optimization. A significant part of this task has been in the assembly of a reasonable set of chemical rate constants and species data necessary for successful use of the models. Often the kinetic parameters were uncertain or unknown. For this reason, a significant effort was placed on model validation to obtain industry confidence in the model predictions. Data for model validation were obtained from the Sandia Molecular Beam Mass Spectrometry (MBMS) experiments, from the literature, from the CDE Benchmark Project (also part of the Sandia/SEMATECH CRADA), and from the JDP partners. The validated models were used to evaluate process behavior as a function of microwave-source operating parameters, transport-tube geometry, system pressure, and downstream chamber geometry. In addition, quantitative correlations were developed between CDE tool performance and operation set points.

  11. Heating stents with radio frequency energy to prevent tumor ingrowth: modeling and experimental results

    Science.gov (United States)

    Ryan, Thomas P.; Lawes, Kate; Goldberg, S. Nahum

    1998-04-01

    Stents are often inserted into internal orifices to treat blockage due to tumor ingrowth. Stents are favored due to their minimally invasive nature, possible avoidance of a surgical procedure, and their ability to palliate surgically non-resectable disease. Because of rapid tumor growth, however, a means of treatment to prevent overgrowth through the stent and resultant blockage is required. To further this goal, experiments were performed in which a stent was placed in tissue and heated with radiofrequency (RF) energy to coagulate a cylinder of tissue, thereby eradicating viable tissue in the proximity of the stent. Temperatures were measured at the central stent surface and edges over time during a 5-10 minute heating in phantom and in fresh tissue. In addition, a finite element model was used to simulate the electric field and temperature distribution. Blood flow was also introduced in the model when evaluating RF application to stents, to determine the effectiveness of the energy applications. Changes in perfusion and tissue electrical conductivity as a function of temperature were applied as the tissue was heated to 100°C. Results from the electric field model will be shown, as well as the thermal distribution over time from the simulations. Lastly, results from the damage integral will be discussed.
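
    The coupled heating-and-perfusion behaviour described above can be sketched with a 1-D explicit finite-difference analogue of a bioheat equation. This is an illustrative toy, not the authors' finite element model: the diffusivity and lumped perfusion rate are assumed typical values, and the stent surface is idealized as a fixed 100°C boundary.

```python
# 1-D explicit finite-difference sketch of tissue heating next to an
# RF-heated stent surface (illustrative; parameter values are assumed).
alpha = 1.4e-7      # tissue thermal diffusivity, m^2/s (typical soft tissue)
w = 0.004           # lumped perfusion cooling rate, 1/s (assumed)
dx, dt = 1e-3, 0.5  # grid spacing (m) and time step (s); dt < dx^2/(2*alpha)
n, steps = 30, 600  # 3 cm of tissue, 5 minutes of heating

T = [37.0] * n      # tissue starts at body temperature
for _ in range(steps):
    T[0] = 100.0                      # stent surface held at 100 C
    new = T[:]
    for i in range(1, n - 1):
        cond = alpha * (T[i-1] - 2*T[i] + T[i+1]) / dx**2   # conduction
        perf = -w * (T[i] - 37.0)     # blood perfusion pulls T back to 37 C
        new[i] = T[i] + dt * (cond + perf)
    T = new

# Temperature falls off with distance from the stent surface:
print([round(t, 1) for t in T[:6]])
```

    The lesion radius in such a sketch would follow from a damage-integral threshold applied to the time-temperature history of each node.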

  12. Wave-current interactions: model development and preliminary results

    Science.gov (United States)

    Mayet, Clement; Lyard, Florent; Ardhuin, Fabrice

    2013-04-01

    The coastal area concentrates many uses that require integrated management, based on diagnostic and predictive tools, to understand and anticipate the future of pollution from land or sea and to learn more about natural hazards at sea or activity on the coast. Realistic modelling of coastal hydrodynamics needs to take into account various interacting processes, including tides, surges, and sea state (Wolf [2008]). These processes act at different spatial scales. Unstructured-grid models have shown the ability to satisfy these needs, given that a good mesh resolution criterion is used. We worked on adding a sea state forcing in a hydrodynamic circulation model. The sea state model is the unstructured version of WAVEWATCH III (Tolman [2008]) (the version developed at IFREMER, Brest (Ardhuin et al. [2010])), and the hydrodynamic model is the 2D barotropic module of the unstructured-grid finite element model T-UGOm (Le Bars et al. [2010]). We chose to use the radiation stress approach (Longuet-Higgins and Stewart [1964]) to represent the effect of surface waves (wind waves and swell) in the barotropic model, as previously done by Mastenbroek et al. [1993] and others. We present here some validation of the model against academic cases: a 2D plane beach (Haas and Warner [2009]) and a simple bathymetric step with an analytic solution for waves (Ardhuin et al. [2008]). In a second part we present a realistic application in the Ushant Sea during an extreme event. References Ardhuin, F., N. Rascle, and K. Belibassakis, Explicit wave-averaged primitive equations using a generalized Lagrangian mean, Ocean Modelling, 20 (1), 35-60, doi:10.1016/j.ocemod.2007.07.001, 2008. Ardhuin, F., et al., Semiempirical Dissipation Source Functions for Ocean Waves. Part I: Definition, Calibration, and Validation, J. Phys. Oceanogr., 40 (9), 1917-1941, doi:10.1175/2010JPO4324.1, 2010. Haas, K. A., and J. C. Warner, Comparing a quasi-3D to a full 3D nearshore circulation model: SHORECIRC and

  13. Acucise™ endopyelotomy in a porcine model: procedure standardization and analysis of safety and immediate efficacy

    Directory of Open Access Journals (Sweden)

    Andreoni Cássio

    2004-01-01

    Full Text Available PURPOSE: The study here presented was done to test the technical reliability and immediate efficacy of the Acucise device using a standardized technique. MATERIALS AND METHODS: 56 Acucise procedures were performed in pigs by a single surgeon who used a standardized technique: insert a 5F angiographic catheter bilaterally up to the midureter, perform a retrograde pyelogram, advance an Amplatz super-stiff guidewire up to the level of the renal pelvis, remove the angiographic catheters, advance the Acucise catheter balloon to the ureteropelvic junction (UPJ) level, remove the super-stiff guidewire, aspirate the contrast medium in the renal pelvis and replace it with distilled water, activate the Acucise at 75 watts of pure cutting current, keep the balloon fully inflated for 10 minutes, perform a retrograde ureteropyelogram to document extravasation, remove the Acucise catheter, pass a ureteral stent, and remove the guidewire. RESULTS: In no case did the Acucise device malfunction. The electrocautery activation time was 2.2 seconds (ranging from 2 to 4 seconds). The extravasation of contrast medium, visible by fluoroscopy, occurred in 53 of the 56 cases (94.6%). In no case was there any evidence of intraoperative hemorrhage. CONCLUSIONS: This study revealed that performing Acucise endopyelotomy routinely in a standardized manner could largely preclude intraoperative device malfunction and eliminate complications while achieving a successful incision in the UPJ. With the guidelines that were used in this study, we believe that Acucise endopyelotomy can be completed successfully and safely in the majority of selected patients with UPJ obstruction.

  14. Optimization of extraction procedures for ecotoxicity analyses: Use of TNT contaminated soil as a model

    Energy Technology Data Exchange (ETDEWEB)

    Sunahara, G.I.; Renoux, A.Y.; Dodard, S.; Paquet, L.; Hawari, J. [BRI, Montreal, Quebec (Canada); Ampleman, G.; Lavigne, J.; Thiboutot, S. [DREV, Courcelette, Quebec (Canada)

    1995-12-31

    The environmental impact of energetic substances (TNT, RDX, GAP, NC) in soil is being examined using ecotoxicity bioassays. An extraction method was characterized to optimize bioassay assessment of TNT toxicity in different soil types. Using the Microtox{trademark} (Photobacterium phosphoreum) assay and non-extracted samples, TNT was most acutely toxic (IC{sub 50} = 1--9 ppm), followed by RDX and GAP; NC did not show obvious toxicity (probably due to solubility limitations). TNT (in 0.25% DMSO) yielded an IC{sub 50} of 0.98 {+-} 0.10 (SD) ppm. The 96h-EC{sub 50} (Selenastrum capricornutum growth inhibition) of TNT (1.1 ppm) was higher than that of GAP and RDX; NC was not apparently toxic (probably due to solubility limitations). Soil samples (sand or a silt-sand mix) were spiked with either 2,000 or 20,000 mg TNT/kg soil, and were adjusted to 20% moisture. Samples were later mixed with acetonitrile, sonicated, and then treated with CaCl{sub 2} before filtration, HPLC and ecotoxicity analyses. Results indicated that: the recovery of TNT from soil (97.51% {+-} 2.78) was independent of the type of soil or moisture content; CaCl{sub 2} interfered with TNT toxicity; and acetonitrile extracts could not be used directly for algal testing. When TNT extracts were diluted to fixed concentrations, similar TNT-induced ecotoxicities were generally observed, suggesting that, apart from the expected effects of TNT concentrations in the soil, the effects of soil texture and moisture were minimal. The extraction procedure permits HPLC analyses as well as ecotoxicity testing and minimizes secondary soil matrix effects. Studies will be conducted to examine the toxic effects of other energetic substances present in soil using this approach.

  15. Procedure for the systematic orientation of digitised cranial models. Design and validation.

    Science.gov (United States)

    Bailo, M; Baena, S; Marín, J J; Arredondo, J M; Auría, J M; Sánchez, B; Tardío, E; Falcón, L

    2015-12-01

    Comparison of bony pieces requires that they are oriented systematically to ensure that homologous regions are compared. Few orientation methods are highly accurate; this is particularly true for methods applied to three-dimensional models obtained by surface scanning, a technique whose special features make it a powerful tool in forensic contexts. The aim of this study was to develop and evaluate a systematic, assisted orientation method for aligning three-dimensional cranial models relative to the Frankfurt Plane, which would produce accurate orientations independent of operator and anthropological expertise. The study sample comprised four crania of known age and sex. All the crania were scanned and reconstructed using an Eva Artec™ portable 3D surface scanner, and subsequently the positions of certain characteristic landmarks were determined by three different operators using the Rhinoceros 3D surface modelling software. Intra-observer analysis showed a tendency for orientation to be more accurate when using the assisted method than when using conventional manual orientation. Inter-observer analysis showed that experienced evaluators achieve results at least as accurate, if not more accurate, with the assisted method as with manual orientation, while inexperienced evaluators achieved more accurate orientation using the assisted method. The method tested is an innovative system capable of providing very precise, systematic and automated spatial orientations of virtual cranial models relative to standardised anatomical planes, independent of the operator and operator experience.

  16. Making the procedure manual come alive: A prototype relational database and dynamic website model for the management of nursing information.

    Science.gov (United States)

    Peace, Jane; Brennan, Patricia Flatley

    2006-01-01

    The nursing procedural manual is an essential resource for clinical practice, yet ensuring its currency and availability at the point of care remains an unresolved information management challenge for nurses. While standard HTML-based web pages offer a significant advantage over paper compilations, employing emerging computer science tools offers even greater promise. This paper reports on the creation of a prototypical dynamic web-based nursing procedure manual driven by a relational database. We created a relational database in MySQL to manage, store, and link the procedure information, and developed PHP files to guide content retrieval, content management, and display on demand in browser-viewable format. This database-driven dynamic website model is an important innovation to meet the challenge of content management and dissemination of nursing information.
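
    The database-driven design described above can be illustrated with a small relational schema. The sketch below uses Python's built-in sqlite3 in place of the paper's MySQL/PHP stack, and the table and column names are invented for illustration; the point is that pages are rendered from rows on demand, so edits to a procedure propagate immediately.

```python
import sqlite3

# Minimal relational sketch of a procedure manual (schema names invented).
db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE procedure_doc (
    id      INTEGER PRIMARY KEY,
    title   TEXT NOT NULL,
    updated TEXT NOT NULL
);
CREATE TABLE procedure_step (
    id           INTEGER PRIMARY KEY,
    procedure_id INTEGER NOT NULL REFERENCES procedure_doc(id),
    step_no      INTEGER NOT NULL,
    instruction  TEXT NOT NULL
);
""")
db.execute("INSERT INTO procedure_doc VALUES (1, 'Wound dressing change', '2006-01-01')")
db.executemany(
    "INSERT INTO procedure_step (procedure_id, step_no, instruction) VALUES (?, ?, ?)",
    [(1, 1, "Perform hand hygiene."),
     (1, 2, "Remove old dressing."),
     (1, 3, "Apply new sterile dressing.")],
)

# Content is retrieved on demand, so every rendered page reflects current rows:
rows = db.execute(
    "SELECT step_no, instruction FROM procedure_step "
    "WHERE procedure_id = ? ORDER BY step_no", (1,)
).fetchall()
for no, text in rows:
    print(no, text)
```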

  17. Box photosynthesis modeling results for WRF/CMAQ LSM

    Data.gov (United States)

    U.S. Environmental Protection Agency — Box Photosynthesis model simulations for latent heat and ozone at 6 different FLUXNET sites. This dataset is associated with the following publication: Ran, L., J....

  18. Review of the dWind Model Conceptual Results

    Energy Technology Data Exchange (ETDEWEB)

    Baring-Gould, Ian; Gleason, Michael; Preus, Robert; Sigrin, Ben

    2015-09-16

    This presentation provides an overview of the dWind model, including its purpose, background, and current status. Baring-Gould presented this material as part of the September 2015 WINDExchange webinar.

  19. Predicting the Best Fit: A Comparison of Response Surface Models for Midazolam and Alfentanil Sedation in Procedures With Varying Stimulation.

    Science.gov (United States)

    Liou, Jing-Yang; Ting, Chien-Kun; Mandell, M Susan; Chang, Kuang-Yi; Teng, Wei-Nung; Huang, Yu-Yin; Tsou, Mei-Yung

    2016-08-01

    Selecting an effective dose of sedative drugs in combined upper and lower gastrointestinal endoscopy is complicated by varying degrees of pain stimulation. We tested the ability of 5 response surface models to predict depth of sedation after administration of midazolam and alfentanil in this complex setting. The procedure was divided into 3 phases: esophagogastroduodenoscopy (EGD), colonoscopy, and the time interval between the 2 (intersession). The depth of sedation in 33 adult patients was monitored with the Observer's Assessment of Alertness/Sedation (OAA/S) score. A total of 218 combinations of midazolam and alfentanil effect-site concentrations derived from pharmacokinetic models were used to test the 5 response surface models in each of the 3 phases of endoscopy. Model fit was evaluated with the objective function value, the corrected Akaike Information Criterion (AICc), and Spearman ranked correlation. A model was arbitrarily defined as accurate if the predicted probability is fit. The reduced Greco model had the lowest objective function value and AICc and thus the best fit. This model was reliable, with acceptable predictive ability based on adequate clinical correlation. We suggest that this model has practical clinical value for patients undergoing procedures with varying degrees of stimulation.

  20. Some Econometric Results for the Blanchard-Watson Bubble Model

    DEFF Research Database (Denmark)

    Johansen, Soren; Lange, Theis

    The purpose of the present paper is to analyse a simple bubble model suggested by Blanchard and Watson. The model is defined by y(t) = s(t)θy(t-1) + e(t), t=1,…,n, where s(t) is an i.i.d. binary variable with p = P(s(t)=1), independent of e(t), which is i.i.d. with mean zero and finite variance. We take θ > 1 so...
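
    The model as defined in the abstract is straightforward to simulate. The sketch below (with assumed parameter values θ = 1.5 and p = 0.8) generates a sample path and computes the OLS autoregressive coefficient of y(t) on y(t-1) that the paper studies; the paper's result is that this estimator converges to θp.

```python
import random

# Simulation sketch of the Blanchard-Watson bubble model
# y(t) = s(t)*theta*y(t-1) + e(t), with theta > 1 and P(s(t)=1) = p.
def simulate(theta=1.5, p=0.8, n=10_000, seed=0):
    rng = random.Random(seed)
    y, ys = 0.0, []
    for _ in range(n):
        s = 1 if rng.random() < p else 0   # bubble survives (1) or bursts (0)
        y = s * theta * y + rng.gauss(0.0, 1.0)
        ys.append(y)
    return ys

def ar1_coefficient(ys):
    """OLS slope of y(t) on y(t-1), no intercept."""
    num = sum(ys[t] * ys[t-1] for t in range(1, len(ys)))
    den = sum(ys[t-1] ** 2 for t in range(1, len(ys)))
    return num / den

ys = simulate()
print(ar1_coefficient(ys))
```

    Note that with theta = 1.5 and p = 0.8 the variance recursion has p·θ² = 1.8 > 1, i.e. the infinite-variance regime that the paper is concerned with.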

  1. A Suggested Stress Analysis Procedure For Nozzle To Head Shell Element Model – A Case Study

    Directory of Open Access Journals (Sweden)

    Sanket S. Chaudhari

    2012-08-01

    Full Text Available Stress analysis of pressure vessels has always been a serious and critical task. The paper follows a standard procedure for pressure vessel analysis and validation based on previous papers. It also identifies the most critical part and how it affects the entire structure. Relevant ASME norms (ASME, 2004, ASME Boiler and Pressure Vessel Code, Section VIII, Division 2, American Society of Mechanical Engineers, New York) are cited to explain the analysis procedure. The WRC (Welding Research Council) methodology is explained to validate the finite element analysis work.

  2. Impact of the volume of gaseous phase in closed reactors on ANC results and modelling

    Science.gov (United States)

    Drapeau, Clémentine; Delolme, Cécile; Lassabatere, Laurent; Blanc, Denise

    2016-04-01

    The understanding of the geochemical behavior of polluted solid materials is often challenging and requires large investments of time and money. Nevertheless, given the increasing amounts of polluted solid materials and the related risks for the environment, it is increasingly crucial to understand the leaching of major and trace metal elements from these matrices. In the design of methods to quantify pollutant solubilization, the combination of experimental procedures with modeling approaches has recently gained attention. Among the usual methods, some rely on the association of ANC experiments and geochemical modeling. An ANC (Acid Neutralization Capacity) experiment consists of adding known quantities of acid or base to a mixture of water and contaminated solid material at a given liquid/solid ratio in closed reactors. The reactors are agitated for 48 h, and then pH, conductivity, redox potential, carbon, and solubilized major and heavy metal elements are quantified. However, in most cases, the matrix and water do not fill the total volume of the reactors, leaving some space for air (a gaseous phase). Despite this fact, no clear indication is given in standard procedures about the effect of this gaseous phase. Even worse, the gaseous phase is never accounted for when exploiting or modeling ANC data. The gaseous phase may exchange CO2 with the solution, which may, in turn, impact both pH and element release. This study lies within the more general framework of using geochemical modeling to predict ANC results, from pure phases to real phase assemblages. In this study, we focus on the effect of the gaseous phase on ANC experiments on different mineral phases through geochemical modeling. To do so, we use the PHREEQC code to model the evolution of pH and element release (including major and heavy metal elements) when several matrices are put in contact with acid or base. We model the following scenarios for the gaseous phase: no gas, contact with the atmosphere (open system
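
    A minimal illustration of why the gaseous phase matters: equilibration with CO2 shifts the pH of the aqueous phase, which is exactly the quantity ANC experiments track. The sketch below solves the carbonate charge balance for pure water open to atmospheric CO2, using textbook equilibrium constants at 25°C; it is a toy, not a substitute for a PHREEQC calculation.

```python
import math

# Equilibrium constants at 25 C (textbook values) and atmospheric pCO2.
KH, K1, K2, KW = 3.4e-2, 4.45e-7, 4.69e-11, 1.0e-14
P_CO2 = 4.0e-4  # atm, approximate modern atmosphere

def ph_open():
    """pH of pure water open to the atmosphere: solve the charge balance
    [H+] = [HCO3-] + 2[CO3--] + [OH-] by bisection in log space."""
    co2 = KH * P_CO2                 # dissolved CO2 fixed by the gas phase
    def f(h):
        hco3 = K1 * co2 / h
        co3 = K2 * hco3 / h
        return h - (hco3 + 2 * co3 + KW / h)   # monotone increasing in h
    lo, hi = 1e-12, 1e-2
    for _ in range(200):
        mid = math.sqrt(lo * hi)
        if f(mid) > 0:
            hi = mid
        else:
            lo = mid
    return -math.log10(math.sqrt(lo * hi))

print("CO2-free water (no gas exchange): pH 7.00")
print("open to atmospheric CO2:          pH %.2f" % ph_open())
```

    The roughly 1.4-unit pH shift between the two scenarios is the kind of effect an unaccounted-for headspace can superimpose on an ANC titration.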

  3. Validating management simulation models and implications for communicating results to stakeholders

    NARCIS (Netherlands)

    Pastoors, M.A.; Poos, J.J.; Kraak, S.B.M.; Machiels, M.A.M.

    2007-01-01

    Simulations of management plans generally aim to demonstrate the robustness of the plans to assumptions about population dynamics and fleet dynamics. Such modelling is characterized by specification of an operating model (OM) representing the underlying truth and a management procedure that mimics t

  4. Interpretation of the results of statistical measurements. [search for basic probability model

    Science.gov (United States)

    Olshevskiy, V. V.

    1973-01-01

    For random processes, the calculated probability characteristic and the measured statistical estimate are used in a quality functional, which defines the difference between the two functions. Based on the assumption that the statistical measurement procedure is organized so that the parameters of a selected model are optimized, it is shown that the interpretation of experimental research is a search for a basic probability model.

  5. The Animal Model Determines the Results of Aeromonas Virulence Factors

    Science.gov (United States)

    Romero, Alejandro; Saraceni, Paolo R.; Merino, Susana; Figueras, Antonio; Tomás, Juan M.; Novoa, Beatriz

    2016-01-01

    The selection of an experimental animal model is of great importance in the study of bacterial virulence factors. Here, a bath infection of zebrafish larvae is proposed as an alternative model to study the virulence factors of Aeromonas hydrophila. Intraperitoneal infections in mice and trout were compared with bath infections in zebrafish larvae using specific mutants. The great advantage of this model is that bath immersion mimics the natural route of infection, and injury to the tail also provides a natural portal of entry for the bacteria. The implication of T3SS in the virulence of A. hydrophila was analyzed using the AH-1::aopB mutant. This mutant was less virulent than the wild-type strain when inoculated into zebrafish larvae, as described in other vertebrates. However, the zebrafish model exhibited slight differences in mortality kinetics only observed using invertebrate models. Infections using the mutant AH-1ΔvapA lacking the gene coding for the surface S-layer suggested that this protein was not totally necessary to the bacteria once it was inside the host, but it contributed to the inflammatory response. Only when healthy zebrafish larvae were infected did the mutant produce less mortality than the wild-type. Variations between models were evidenced using the AH-1ΔrmlB, which lacks the O-antigen lipopolysaccharide (LPS), and the AH-1ΔwahD, which lacks the O-antigen LPS and part of the LPS outer-core. Both mutants showed decreased mortality in all of the animal models, but the differences between them were only observed in injured zebrafish larvae, suggesting that residues from the LPS outer core must be important for virulence. The greatest differences were observed using the AH-1ΔFlaB-J (lacking polar flagella and unable to swim) and the AH-1::motX (non-motile but producing flagella). They were as pathogenic as the wild-type strain when injected into mice and trout, but no mortalities were registered in zebrafish larvae. This study demonstrates

  6. Results of the 2013 UT modeling benchmark obtained with models implemented in CIVA

    Energy Technology Data Exchange (ETDEWEB)

    Toullelan, Gwénaël; Raillon, Raphaële; Chatillon, Sylvain [CEA, LIST, 91191Gif-sur-Yvette (France); Lonne, Sébastien [EXTENDE, Le Bergson, 15 Avenue Emile Baudot, 91300 MASSY (France)

    2014-02-18

    The 2013 Ultrasonic Testing (UT) modeling benchmark concerns direct echoes from side drilled holes (SDH), flat bottom holes (FBH), and corner echoes from backwall-breaking artificial notches inspected with a matrix phased array probe. This communication presents the results obtained with the models implemented in the CIVA software: the pencil model is used to compute the field radiated by the probe, the Kirchhoff approximation is applied to predict the response of FBHs and notches, and the SOV (Separation Of Variables) model is used for the SDH responses. The comparisons between simulated and experimental results are presented and discussed.

  7. Impact of postdischarge surveillance on surgical site infection rates for several surgical procedures: results from the nosocomial surveillance network in The Netherlands.

    NARCIS (Netherlands)

    Manniën, Judith; Wille, Jan C; Snoeren, Ruud L M M; Hof, Susan van den

    2006-01-01

    OBJECTIVE: To compare the number of surgical site infections (SSIs) registered after hospital discharge with respect to various surgical procedures and to identify the procedures for which postdischarge surveillance (PDS) is most important. DESIGN: Prospective SSI surveillance with voluntary PDS.

  8. Preliminary results of a three-dimensional radiative transfer model

    Energy Technology Data Exchange (ETDEWEB)

    O'Hirok, W. [Univ. of California, Santa Barbara, CA (United States)]

    1995-09-01

    Clouds act as the primary modulator of the Earth's radiation at the top of the atmosphere, within the atmospheric column, and at the Earth's surface. They interact with both shortwave and longwave radiation, but it is primarily in the case of shortwave where most of the uncertainty lies because of the difficulties in treating scattered solar radiation. To understand cloud-radiative interactions, radiative transfer models portray clouds as plane-parallel homogeneous entities to ease the computational physics. Unfortunately, clouds are far from being homogeneous, and large differences between measurement and theory point to a stronger need to understand and model cloud macrophysical properties. In an attempt to better comprehend the role of cloud morphology on the 3-dimensional radiation field, a Monte Carlo model has been developed. This model can simulate broadband shortwave radiation fluxes while incorporating all of the major atmospheric constituents. The model is used to investigate the cloud absorption anomaly where cloud absorption measurements exceed theoretical estimates and to examine the efficacy of ERBE measurements and cloud field experiments. 3 figs.
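
    The Monte Carlo approach referred to above can be illustrated with a classic 1-D photon-transport toy: photons random-walk through a homogeneous plane-parallel slab until they are transmitted, reflected, or absorbed. The optical depth and single-scattering albedo below are illustrative, and the sketch omits everything that makes the 3-D broadband model interesting (cloud morphology, spectral integration, realistic phase functions).

```python
import math
import random

# Minimal 1-D Monte Carlo photon transport through a homogeneous slab.
def transport(n_photons=50_000, tau_star=2.0, omega0=0.9, seed=42):
    """Return (transmittance, reflectance, absorptance) fractions for a slab
    of optical depth tau_star with single-scattering albedo omega0."""
    rng = random.Random(seed)
    transmitted = reflected = absorbed = 0
    for _ in range(n_photons):
        tau, mu = 0.0, 1.0            # optical-depth position, direction cosine
        while True:
            tau += mu * -math.log(rng.random())   # free path to next event
            if tau < 0.0:
                reflected += 1; break             # escaped back out the top
            if tau > tau_star:
                transmitted += 1; break           # escaped out the bottom
            if rng.random() > omega0:
                absorbed += 1; break              # absorption event
            mu = 2.0 * rng.random() - 1.0         # isotropic rescattering
    n = n_photons
    return transmitted / n, reflected / n, absorbed / n

print(transport())
```

    A 3-D version replaces the scalar optical-depth coordinate with ray tracing through an inhomogeneous extinction field, which is where cloud morphology enters.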

  9. Implementation procedure for the generalized moving Preisach model based on a first order reversal curve diagram

    Institute of Scientific and Technical Information of China (English)

    HAN Yong; ZHU Jie

    2009-01-01

    First order reversal curves (FORCs) of nanocomposite Nd2Fe14B/Fe3B magnetic materials were measured to obtain a FORC diagram, which characterizes reversible magnetization, irreversible magnetization, and magnetic interactions in a hysteresis system. Then, the generalized moving Preisach model (GMPM) was implemented based on the FORC diagram. Reversible and irreversible magnetizations shown in the FORCs and the FORC diagram were used as input to the GMPM. A coupling interaction between reversible and irreversible magnetizations was added when calculating the reversible magnetization. Meanwhile, the interaction of irreversible magnetic moments was approximately represented by a mean-field interaction. The result shows that the simulated main curves mostly coincide with the experimental curves.
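
    The classical Preisach construction underlying the GMPM can be sketched with a discrete triangular grid of square-loop hysterons with switching thresholds β < α. The sketch below implements only the non-moving, irreversible core; the paper's generalized moving model adds reversible magnetization and a mean-field interaction term on top.

```python
# Minimal classical Preisach sketch: each hysteron switches up at field
# alpha, down at field beta, and keeps its state in between.
def make_hysterons(n=40):
    """Triangular grid of hysterons with thresholds beta < alpha in [-1, 1]."""
    grid = []
    for i in range(n):
        for j in range(i + 1):
            alpha = (2 * (i + 1) - n) / n   # up-switching field
            beta = (2 * j - n) / n          # down-switching field, beta < alpha
            grid.append([alpha, beta, -1])  # state starts negatively saturated
    return grid

def apply_field(grid, h):
    """Update every hysteron for applied field h; return mean magnetization."""
    for hys in grid:
        if h >= hys[0]:
            hys[2] = +1
        elif h <= hys[1]:
            hys[2] = -1
        # between the thresholds the state is kept: this is the hysteresis
    return sum(hys[2] for hys in grid) / len(grid)

grid = make_hysterons()
fields = [x / 10 for x in range(-10, 11)]
up = [apply_field(grid, h) for h in fields]              # ascending branch
down = [apply_field(grid, h) for h in reversed(fields)]  # descending branch
print(up[10], down[10])  # different magnetization at h = 0 on the two branches
```

    Measuring FORCs amounts to repeating such sweeps from a family of reversal fields; the mixed second derivative of the resulting surface recovers the hysteron density on the (alpha, beta) plane.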

  10. Iterative Procedures for Exact Maximum Likelihood Estimation in the First-Order Gaussian Moving Average Model

    Science.gov (United States)

    1990-11-01

    The findings contained in this report are those of the author(s) and should not be construed as an official Department of the Army position, policy, or... "Marquardt methods" to perform linear and nonlinear estimations. One idea in this area by Box and Jenkins (1976) was the "backcasting" procedure to evaluate

  11. Reply: New results justify open discussion of alternative models

    Science.gov (United States)

    Newman, Andrew; Stein, Seth; Weber, John; Engeln, Joseph; Mao, Aitlin; Dixon, Timothy

    A millennium ago, Jewish sages wrote that “the rivalry of scholars increases wisdom.” In contrast, Schweig et al. (Eos, this issue) demand that “great caution” be exercised in discussing alternatives to their model of high seismic hazard in the New Madrid seismic zone (NMSZ). We find this view surprising; we have no objection to their and their coworkers' extensive efforts promoting their model in a wide variety of public media, but see no reason not to explore a lower-hazard alternative based on both new data and reanalysis of data previously used to justify their model. In our view, the very purpose of collecting new data and reassessing existing data is to promote spirited testing and improvement of existing hypotheses. For New Madrid, such open reexamination seems scientifically appropriate, given the challenge of understanding intraplate earthquakes, and socially desirable because of the public policy implications.

  12. Some Econometric Results for the Blanchard-Watson Bubble Model

    DEFF Research Database (Denmark)

    Johansen, Soren; Lange, Theis

    The purpose of the present paper is to analyse a simple bubble model suggested by Blanchard and Watson. The model is defined by y(t) = s(t)θy(t-1) + e(t), t = 1,…,n, where s(t) is an i.i.d. binary variable with p = P(s(t)=1), independent of e(t), which is i.i.d. with mean zero and finite variance. We take θ > 1 so…… is whether a bubble model with infinite variance can create the long swings, or persistence, which are observed in many macro variables. We say that a variable is persistent if its autoregressive coefficient ρ(n) of y(t) on y(t-1) is close to one. We show that the estimator of ρ(n) converges to θp…
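The convergence of the autoregressive coefficient toward θp can be illustrated with a short simulation. This is a sketch only: the parameter values θ = 1.1 and p = 0.5 are illustrative choices satisfying θ²p < 1 (finite variance), and e(t) is taken to be standard normal, which the abstract does not specify.

```python
import random

def simulate_bubble(theta=1.1, p=0.5, n=200000, seed=7):
    """Simulate y(t) = s(t)*theta*y(t-1) + e(t) with s(t) ~ Bernoulli(p),
    e(t) ~ N(0, 1), and return the OLS autoregressive coefficient
    rho = sum(y_t * y_{t-1}) / sum(y_{t-1}^2), which should center on theta*p."""
    rng = random.Random(seed)
    y_prev, sxy, sxx = 0.0, 0.0, 0.0
    for _ in range(n):
        s = 1.0 if rng.random() < p else 0.0
        y = s * theta * y_prev + rng.gauss(0.0, 1.0)
        sxy += y * y_prev
        sxx += y_prev * y_prev
        y_prev = y
    return sxy / sxx

rho = simulate_bubble()   # close to theta * p = 0.55 for these parameters
```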

  13. Transmission resonance Raman spectroscopy: experimental results versus theoretical model calculations.

    Science.gov (United States)

    Gonzálvez, Alicia G; González Ureña, Ángel

    2012-10-01

    A laser spectroscopic technique is described that combines transmission and resonance-enhanced Raman inelastic scattering with low laser power excitation. From the theoretical point of view, a model for the Raman signal dependence on the sample thickness is also presented. Essentially, the model considers the sample to be homogeneous and describes the underlying physics using only three parameters: the Raman cross-section, the laser-radiation attenuation cross-section, and the Raman signal attenuation cross-section. The model was applied successfully to describe the sample-size dependence of the Raman signal in both β-carotene standards and carrot roots. The present technique could be useful for direct, fast, and nondestructive investigations in food quality control and in analytical or physiological studies of animal and human tissues.

  14. Results on a Binding Neuron Model and Their Implications for Modified Hourglass Model for Neuronal Network

    Directory of Open Access Journals (Sweden)

    Viswanathan Arunachalam

    2013-01-01

    The classical models of a single neuron, like the Hodgkin-Huxley point neuron or the leaky integrate-and-fire neuron, assume the influence of postsynaptic potentials to last until the neuron fires. Vidybida (2008), in a refreshing departure, has proposed models for binding neurons in which the trace of an input is remembered only for a finite fixed period of time, after which it is forgotten. Binding neurons conform to the behaviour of real neurons and are applicable in constructing fast recurrent networks for computer modeling. This paper develops explicitly several useful results for a binding neuron, such as the firing time distribution and other statistical characteristics. We also discuss the applicability of the developed results in constructing a modified hourglass network model in which interconnected neurons receive excitatory as well as inhibitory inputs. Limited simulation results of the hourglass network are presented.
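The binding-neuron mechanism described above (input traces remembered only for a finite window) can be sketched with a small event-driven simulation. All parameter names and values here are illustrative assumptions, not taken from the paper: Poisson inputs at `rate`, a memory window `tau`, and a firing threshold of `n0` coexisting traces, with the memory cleared after each spike.

```python
import random

def binding_neuron_isi(rate=10.0, tau=0.1, n0=3, n_spikes=2000, seed=3):
    """Mean inter-spike interval of a binding neuron driven by Poisson input.

    Each input impulse is stored for `tau` seconds; the neuron fires when
    `n0` impulses coexist in memory, then clears all stored traces.
    """
    rng = random.Random(seed)
    t, last_fire = 0.0, 0.0
    stored, isis = [], []
    while len(isis) < n_spikes:
        t += rng.expovariate(rate)                    # next Poisson input impulse
        stored = [a for a in stored if t - a < tau]   # forget traces older than tau
        stored.append(t)
        if len(stored) >= n0:                         # threshold reached: fire
            isis.append(t - last_fire)
            last_fire = t
            stored = []                               # memory reset after firing
    return sum(isis) / len(isis)
```

With an effectively infinite memory window the neuron fires on every `n0`-th input, so the mean interval collapses to `n0 / rate`; finite `tau` lengthens it, since traces are lost before the threshold is met.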

  15. First results obtained in France with the latest model of the Fresenius cell separator: AS 104.

    Science.gov (United States)

    Coffe, C; Couteret, Y; Devillers, M; Fest, T; Hervé, P; Kieffer, Y; Lamy, B; Masse, M; Morel, P; Pouthier-Stein, F

    1993-01-01

    In Besançon, we carried out 40 plateletphereses with the latest model of the Fresenius cell separator AS 104 to check this new system against the new generation of cell separators, according to the following criteria: less than 2 × 10^6 leukocytes (before filtration) and more than 5 × 10^11 platelets. The results show that platelet concentrates contained 5.04 ± 0.88 × 10^11 platelets in a total volume of 435 ± 113 mL. The mean platelet recovery was 40.95 ± 4.86% (from 31.7 to 51.6). The leukocyte content was 2.28 ± 5.48 × 10^6 and the red blood cell contamination was 3.48 ± 2.38 × 10^8. The quality of the platelets was very satisfactory. There was no problem with donor biocompatibility or procedure safety, few adverse donor reactions (0.6%) and good therapeutic efficiency of platelet concentrates.

  16. Some vaccination strategies for the SEIR epidemic model. Preliminary results

    CERN Document Server

    De la Sen, M; Alonso-Quesada, S

    2011-01-01

    This paper presents a vaccination-based control strategy for a SEIR (susceptible plus infected plus infectious plus removed populations) disease propagation model. The model takes the total population into account as a brake on illness transmission, since its increase makes contacts between susceptible and infected individuals more difficult. The control objective is the asymptotic tracking of the removed-by-immunity population to the total population, while simultaneously driving the remaining population (i.e., susceptible plus infected plus infectious) asymptotically to zero.
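The SEIR compartment structure referenced in this abstract can be sketched with a forward-Euler integration of the standard equations. This is a generic SEIR sketch under assumed parameter values (`beta`, `sigma`, `gamma` and the initial conditions are illustrative), not the controlled model of the paper.

```python
def seir_step(s, e, i, r, beta, sigma, gamma, n, dt):
    """One explicit-Euler step of the standard SEIR equations."""
    ds = -beta * s * i / n          # susceptibles become exposed
    de = beta * s * i / n - sigma * e   # exposed progress to infectious
    di = sigma * e - gamma * i      # infectious are removed
    dr = gamma * i                  # removed (recovered/immune)
    return s + dt * ds, e + dt * de, i + dt * di, r + dt * dr

def run_seir(days=160, dt=0.1, n=1000.0):
    """Integrate an epidemic starting from a single infectious individual."""
    s, e, i, r = n - 1.0, 0.0, 1.0, 0.0
    beta, sigma, gamma = 0.5, 1 / 5.2, 1 / 7.0   # assumed rates (per day)
    for _ in range(int(days / dt)):
        s, e, i, r = seir_step(s, e, i, r, beta, sigma, gamma, n, dt)
    return s, e, i, r
```

Because the four derivatives sum to zero, the total population is conserved at every step, which is a useful invariant to assert when testing such models.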

  17. Results from Development of Model Specifications for Multifamily Energy Retrofits

    Energy Technology Data Exchange (ETDEWEB)

    Brozyna, K.

    2012-08-01

    Specifications, modeled after CSI MasterFormat, provide the trade contractors and builders with requirements and recommendations on specific building materials, components, and industry practices that comply with the expectations and intent of the requirements within the various funding programs associated with a project. The goal is to create a greater level of consistency in the execution of energy efficiency retrofit measures across the multiple regions in which a developer may work. IBACOS and Mercy Housing developed sample model specifications based on a common building construction type that Mercy Housing encounters.

  19. Determination of a Differential Item Functioning Procedure Using the Hierarchical Generalized Linear Model

    Directory of Open Access Journals (Sweden)

    Tülin Acar

    2012-01-01

    The aim of this research is to compare the results of differential item functioning (DIF) detection using the hierarchical generalized linear model (HGLM) technique with the results of DIF detection using the logistic regression (LR) and item response theory–likelihood ratio (IRT-LR) techniques on test items. First, it is determined whether students encounter DIF, according to socioeconomic status (SES), in the Turkish, Social Sciences, and Science subtest items of the Secondary School Institutions Examination, using the HGLM, LR, and IRT-LR techniques. When inspecting the agreement among the techniques in identifying items with DIF, a significant correlation was found between the results of the IRT-LR and LR techniques in all subtests; only in the Science subtest was the correlation between the HGLM and IRT-LR techniques significant. DIF analyses can be performed on test items with other DIF techniques not included in the scope of this research, and the results obtained with different techniques and sample sizes can be compared.

  20. Compliance with a time-out procedure intended to prevent wrong surgery in hospitals : results of a national patient safety programme in the Netherlands

    NARCIS (Netherlands)

    van Schoten, Steffie M; Kop, Veerle; de Blok, Carolien; Spreeuwenberg, Peter; Groenewegen, Peter P; Wagner, Cordula

    2014-01-01

    OBJECTIVE: To prevent wrong surgery, the WHO 'Safe Surgery Checklist' was introduced in 2008. The checklist comprises a time-out procedure (TOP): the final step before the start of the surgical procedure where the patient, surgical procedure and side/site are reviewed by the surgical team. The aim

  1. FEM modeling and histological analyses on thermal damage induced in facial skin resurfacing procedure with different CO2 laser pulse duration

    Science.gov (United States)

    Rossi, Francesca; Zingoni, Tiziano; Di Cicco, Emiliano; Manetti, Leonardo; Pini, Roberto; Fortuna, Damiano

    2011-07-01

    Laser light is nowadays routinely used in aesthetic treatments of facial skin, such as laser rejuvenation and scar removal. The induced thermal damage may be varied by setting different laser parameters in order to obtain a particular aesthetic result. In this work, a theoretical study of the thermal damage induced in deep tissue is proposed, considering different laser pulse durations. The study is based on the Finite Element Method (FEM): a bidimensional model of the facial skin is depicted in axial symmetry, considering the different skin structures and their different optical and thermal parameters; the conversion of laser light into thermal energy is modeled by the bio-heat equation. The light source is a CO2 laser with different pulse durations. The model enabled the study of the thermal damage induced in the skin by calculating the Arrhenius integral. The post-processing results enabled studying the temperature dynamics induced in the facial skin in space and time, examining possible cumulative effects of subsequent laser pulses, and optimizing the procedure for applications in dermatological surgery. The calculated data were then validated in an experimental measurement session performed in a sheep animal model. Histological analyses were performed on the treated tissues, evidencing the spatial distribution and extent of the thermal damage in the collagenous tissue. Modeling and experimental results were in good agreement, and they were used to design a new optimized laser-based skin resurfacing procedure.
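The Arrhenius integral mentioned above has the standard form Ω(t) = ∫ A·exp(−Ea/(R·T(τ))) dτ over the thermal history; tissue is conventionally considered damaged when Ω reaches 1. A minimal discretized version is sketched below. The frequency factor `A` and activation energy `Ea` shown are commonly cited Henriques-type values for skin, an assumption here; the abstract does not give the paper's actual coefficients.

```python
import math

def arrhenius_damage(temps_K, dt, A=3.1e98, Ea=6.28e5, R=8.314):
    """Discretized Arrhenius damage integral over a sampled thermal history.

    temps_K : sequence of tissue temperatures in kelvin, sampled every dt seconds
    dt      : time step in seconds
    A, Ea   : frequency factor (1/s) and activation energy (J/mol); assumed values
    Returns the dimensionless damage Omega (>= 1 indicates irreversible damage).
    """
    return sum(A * math.exp(-Ea / (R * T)) * dt for T in temps_K)
```

At body temperature the integrand is vanishingly small, while near 100 °C the damage accumulates almost instantly, which reflects the strong temperature sensitivity that makes pulse duration such an important laser parameter.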

  2. Some Results On The Modelling Of TSS Manufacturing Lines

    Directory of Open Access Journals (Sweden)

    Viorel MÎNZU

    2000-12-01

    This paper deals with the modelling of a particular class of manufacturing lines, governed by a decentralised control strategy so that they balance themselves. Such lines are known as “bucket brigades” and also as “TSS lines”, after their first implementation at Toyota in the 1970s. A first study of their behaviour was based upon modelling them as stochastic dynamic systems, which emphasised, in the frame of the so-called “Normative Model”, a sufficient condition for self-balancing, that is, for autonomous functioning at a steady production rate (stationary behaviour). Under some particular conditions, a simulation analysis of TSS lines could be made on non-linear block diagrams, showing that the state trajectories are piecewise continuous in between occurrences of certain discrete events, which determine their discontinuity. TSS lines may therefore be modelled as hybrid dynamic systems, more specifically with autonomous switching and autonomous impulses (jumps). A stability analysis of such manufacturing lines is enabled by modelling them as hybrid dynamic systems with discontinuous motions.

  3. 3D model-based catheter tracking for motion compensation in EP procedures

    Science.gov (United States)

    Brost, Alexander; Liao, Rui; Hornegger, Joachim; Strobel, Norbert

    2010-02-01

    Atrial fibrillation is the most common sustained heart arrhythmia and a leading cause of stroke. Its treatment by radio-frequency catheter ablation, performed using fluoroscopic image guidance, is gaining increasing importance. Two-dimensional fluoroscopic navigation can take advantage of overlay images derived from pre-operative 3-D data to add anatomical details otherwise not visible under X-ray. Unfortunately, respiratory motion may impair the utility of these static overlay images for catheter navigation. We developed an approach for image-based 3-D motion compensation as a solution to this problem. A bi-plane C-arm system is used to take X-ray images of a special circumferential mapping catheter from two directions. In the first step of the method, a 3-D model of the device is reconstructed. Three-dimensional respiratory motion at the site of ablation is then estimated by tracking the reconstructed catheter model in 3-D. This step involves bi-plane fluoroscopy and 2-D/3-D registration. Phantom data and clinical data were used to assess our model-based catheter tracking method. Experiments involving a moving heart phantom yielded an average 2-D tracking error of 1.4 mm and an average 3-D tracking error of 1.1 mm. Our evaluation of clinical data sets comprised 469 bi-plane fluoroscopy frames (938 monoplane fluoroscopy frames). We observed an average 2-D tracking error of 1.0 ± 0.4 mm and an average 3-D tracking error of 0.8 ± 0.5 mm. These results demonstrate that model-based motion compensation based on 2-D/3-D registration is both feasible and accurate.

  4. [The value of the Kapandji-Sauvé procedure with considering clinical results and measurement of bone density. A clinical study].

    Science.gov (United States)

    Wüstner-Hofmann, M C; Schober, F; Hofmann, A K

    2003-05-01

    Between 1989 and 1995, 33 patients were treated with a Kapandji-Sauvé procedure for malunited fracture of the distal radius and instabilities of the distal radioulnar joint. Thirty patients were followed up, with a mean follow-up time of 91 months. Fourteen patients underwent a measurement of bone density of the distal forearm. Twenty-eight patients showed good ossification of the distal radioulnar arthrodesis. Forearm rotation improved by 17.3%. Mean grip strength was 72% of that of the contralateral hand. Evaluation by the Cooney score yielded very good results in 10%, good in 65%, fair in 22%, and poor in 3%. The measurement of bone density of the distal radius showed an increase of rotation and flexure firmness. The cortical density remained constant. In the subcortical bone of the distal radius, we found a decrease of trabecular density in the radial part.

  5. Regionalization of climate model results for the North Sea

    Energy Technology Data Exchange (ETDEWEB)

    Kauker, F. [Alfred-Wegener-Institut fuer Polar- und Meeresforschung, Bremerhaven (Germany); Storch, H. von [GKSS-Forschungszentrum Geesthacht GmbH (Germany). Inst. fuer Hydrophysik

    2000-07-01

    A dynamical downscaling for the North Sea is presented. The numerical model used for the study is the coupled ice-ocean model OPYC. In a hindcast of the years 1979 to 1993, it was forced with atmospheric forcing from the ECMWF reanalysis. The model's capability in simulating the observed mean state and variability in the North Sea is demonstrated by the hindcast. Two time scale ranges, from weekly to seasonal and the longer-than-seasonal time scales, are investigated. Shorter time scales, relevant for storm surges, are not captured by the model formulation. The main modes of variability of sea level, sea-surface circulation, sea-surface temperature, and sea-surface salinity are described, and connections to atmospheric phenomena, like the NAO, are discussed. T106 "time-slice" simulations with a "2 x CO2" horizon are used to estimate the effects of a changing climate on the North Sea shelf sea. The "2 x CO2" changes in the surface forcing are accompanied by changes in the lateral oceanic boundary conditions taken from a global coupled climate model. For "2 x CO2", the time mean sea level increases by up to 25 cm in the German Bight in the winter, where 15 cm are due to the surface forcing and 10 cm to thermal expansion. This change is compared to the "natural" variability as simulated in the ECMWF integration and found not to lie outside the range spanned by it. The variability of sea level on the weekly-to-seasonal time scales is significantly reduced in the scenario integration. The variability on the longer-than-seasonal time scales in the control and scenario runs is much smaller than in the ECMWF integration. This is traced back to the use of "time-slice" experiments. Discriminating between locally forced changes and changes induced at the lateral oceanic boundaries of the model in the circulation and

  6. Vaccination strategies for SEIR models using feedback linearization. Preliminary results

    CERN Document Server

    De la Sen, M; Alonso-Quesada, S

    2011-01-01

    A linearization-based feedback-control strategy for a SEIR epidemic model is discussed. The vaccination objective is the asymptotic tracking of the removed-by-immunity population to the total population, while simultaneously driving the remaining population (i.e., susceptible plus infected plus infectious) asymptotically to zero. The disease control policy is designed based on a feedback linearization technique, which provides a general method to generate families of vaccination policies with a sound technical background.

  7. Recent results in the NJL model with heavy quarks

    CERN Document Server

    Feldmann, T

    1996-01-01

    We investigate the interplay of chiral and heavy quark symmetries by using the NJL quark model. Heavy quarks with finite masses m(Q) as well as the limit m(Q) to infinity are studied. We found large corrections to the heavy mass scaling law for the pseudoscalar decay constant. The influence of external momenta on the shape parameters of the Isgur-Wise form factor is discussed.

  8. Impact of Cell-Free Fetal DNA Screening on Patients’ Choice of Invasive Procedures after a Positive California Prenatal Screen Result

    Directory of Open Access Journals (Sweden)

    Forum T. Shah

    2014-07-01

    Until recently, maternal serum analyte levels paired with sonographic fetal nuchal translucency measurement constituted the most accurate prenatal screen available for Trisomies 18 and 21 (91% and 94% detection, with false positive rates of 0.31% and 4.5%, respectively). Women with positive California Prenatal Screening Program (CPSP) results have the option of diagnostic testing to determine definitively whether the fetus has a chromosomal abnormality. Cell-free fetal (cff-)DNA screening for Trisomies 13, 18, and 21 was first offered in 2012, allowing women with positive screens to choose additional screening before diagnostic testing. Cff-DNA sensitivity rates are as high as 99.9% and 99.1%, with false positive rates of 0.4% and 0.1%, for Trisomies 18 and 21, respectively. A retrospective chart review was performed in 2012 on 500 CPSP referrals at the University of California, San Diego Thornton Hospital. Data were collected before and after the introduction of cff-DNA. There was a significant increase in the number of participants who chose to pursue additional testing and a decrease in the number of invasive procedures performed after cff-DNA screening became available. We conclude that as fetal aneuploidy screening improves, the number of invasive procedures will continue to decrease.

  9. Blade element momentum modeling of inflow with shear in comparison with advanced model results

    DEFF Research Database (Denmark)

    Aagaard Madsen, Helge; Riziotis, V.; Zahle, Frederik

    2012-01-01

    There seems to be a significant uncertainty in aerodynamic and aeroelastic simulations on megawatt turbines operating in inflow with considerable shear, in particular with the engineering blade element momentum (BEM) model commonly implemented in the aeroelastic design codes used by industry. … Computations with advanced vortex and computational fluid dynamics models are used to provide improved insight into the complex flow phenomena and rotor aerodynamics caused by the sheared inflow. One consistent result from the advanced models is the variation of induced velocity as a function of azimuth when … a higher power than in uniform flow. On the basis of the consistent azimuthal induction variations seen in the advanced model results, three different BEM implementation methods are discussed and tested in the same aeroelastic code. A full local BEM implementation on an elemental stream tube in both…

  10. The physical model of a terraced plot: first results

    Science.gov (United States)

    Perlotto, Chiara; D'Agostino, Vincenzo; Buzzanca, Giacomo

    2017-04-01

    Terrace building expanded in the 19th century because of increased demographic pressure and the need to crop additional areas on steeper slopes. Terraces are also important in regulating the hydrological behavior of the hillslope. Few studies are available in the literature on rainfall-runoff processes and flood risk mitigation in terraced areas. Bench terraces, by reducing the terrain slope and the length of the overland flow, control the runoff flow velocity, facilitating drainage and thus reducing soil erosion. The study of the hydrologic-hydraulic function of terraced slopes is essential in order to evaluate their possible contribution to flood-risk mitigation while preserving the landscape value. This research aims to better characterize the hydrological response times of a hillslope plot bounded by a dry-stone wall, considering both the overland flow and the groundwater. A physical model, at quasi-real scale, has been built to reproduce the behavior of a 3% outward-sloped terrace under bare-soil conditions. The model consists of a steel box (1 m wide, 3.3 m long, 2 m high) containing the hillslope terrain. The terrain is equipped with two piezometers, 9 TDR sensors measuring the volumetric water content, a surface spillway at the head releasing the steady discharge under test, and a scale at the wall base to measure the outflowing discharge. The experiments deal with different initial moisture conditions (non-saturated and saturated) and discharges of 19.5, 12.0 and 5.0 l/min. Each experiment has been replicated, for a total of 12 tests. The volumetric water content analysis from the 9 TDR sensors provided a quite satisfactory representation of the soil moisture during the runs. Then, different lag times at the outlet since the inflow initiation were measured both for runoff and groundwater. Moreover, the time of depletion and the piezometer

  11. CONCEPTUAL MODEL AND PROCEDURES TO ASSIMILATE PRODUCTION TECHNOLOGIES OF BIOENERGETICS OF RESIDUAL BIOMASS

    Directory of Open Access Journals (Sweden)

    David Muto Lubota

    2016-07-01

    The present work presents a conceptual model for a process of technology assimilation aimed at creating bioenergy production capacity, with the objective of securing energy from the recycling of urban solid waste (RSU) in the municipality of Cabinda, Angola. The conceptual model is novel in that it considers South-South collaboration, and it is supported by a general procedure for technology assimilation that includes, in one of its steps, a specific procedure for securing the supply chain. This procedure additionally, and in a novel way, determines the initial investment capacities, attending both to the demand for final products and to the availability of raw materials, in view of uncertainty about future changes. Finally, conclusions are drawn, with projections for future work.

  12. Modelling nitrogen and phosphorus cycles and dissolved oxygen in the Zhujiang Estuary Ⅱ. Model results

    Institute of Scientific and Technical Information of China (English)

    Guan Weibing; Wong Lai-Ah; Xu Dongfeng

    2001-01-01

    In the present study, an ecosystem-based water quality model was applied to the Pearl River (Zhujiang) Estuary. The model results successfully represent the distribution trends of nutrients and dissolved oxygen in both the horizontal and vertical planes during the flood season, showing that the model takes into consideration the key dynamical, chemical, and biological processes at work in the Zhujiang Estuary. Further analysis illustrates that nitrogen is plentiful, while phosphorus and light limit the phytoplankton biomass in the Zhujiang Estuary during the flood season.

  13. Roux-en-Y fistulo-jejunostomy as a salvage procedure in patients with post-sleeve gastrectomy fistula: mid-term results.

    Science.gov (United States)

    Chouillard, Elie; Younan, Antoine; Alkandari, Mubarak; Daher, Ronald; Dejonghe, Bernard; Alsabah, Salman; Biagini, Jean

    2016-10-01

    Sleeve gastrectomy (SG) is currently the most commonly performed bariatric procedure in France. It achieves both adequate excess weight loss and significant reduction in comorbidities. However, fistula is still the most common complication after SG, occurring in more than 3% of cases, even in specialized centers (Gagner and Buchwald in Surg Obes Relat Dis 10:713-723, doi:10.1016/j.soard.2014.01.016, 2014). Its management is long, challenging, and not standardized. We have already reported the short-term results of Roux-en-Y fistulo-jejunostomy (RYFJ) as a salvage procedure in patients with post-SG fistula (Chouillard et al. in Surg Endosc 28:1954-1960, doi:10.1007/s00464-014-3424-y, 2014). In this study, we analyzed the mid-term results of the RYFJ, emphasizing its endoscopic, radiologic, and safety outcomes. Between January 2007 and December 2013, we treated 75 patients with post-SG fistula, mainly referred from other centers. Immediate management principles included computerized tomography (CT) scan-guided drainage of collections or surgical peritoneal lavage, nutritional support, and endoscopic stenting. Ultimately, this approach achieved fistula control in nearly two-thirds of the patients. In the remaining third, RYFJ was proposed, eventually leading to fistula control in all cases. The mid-term results (i.e., more than 1 year after surgery) were assessed using anamnesis, clinical evaluation, biology tests, upper digestive tract endoscopy, and contrast-enhanced CT scan with upper digestive series. Thirty patients (22 women and 8 men) had RYFJ for post-SG fistula. Mean age was 40 years (range 22-59). Procedures were performed laparoscopically in all but 3 cases (90%). Three patients (10%) were lost to follow-up. Mean follow-up period was 22 months (18-90). Mean body mass index (BMI) was 27.4 kg/m^2 (22-41). Endoscopic and radiologic assessment revealed no persistent fistula and no residual collections. Despite the lack of long-term follow-up, RYFJ could be

  14. Effect of carryover and presampling procedures on the results of real-time PCR used for diagnosis of bovine intramammary infections with Streptococcus agalactiae at routine milk recordings.

    Science.gov (United States)

    Mahmmod, Yasser S; Mweu, Marshal M; Nielsen, Søren S; Katholm, Jørgen; Klaas, Ilka C

    2014-03-01

    The use of PCR tests as diagnostics for intramammary infections (IMI), based on composite milk samples collected in a non-sterile manner at milk recordings, is increasing. Carryover of sample material between cows and non-aseptic PCR sampling may be responsible for misclassification of IMI with Streptococcus agalactiae (S. agalactiae) in dairy herds with conventional milking parlours. Misclassification may result in unnecessary costs for treatment and culling. The objectives of this study were to (1) determine the effect of carryover on PCR-positivity for S. agalactiae at different PCR cycle threshold (Ct) cut-offs by estimating the between-cow correlation while accounting for the milking order, and (2) evaluate the effect of aseptic presampling procedures (PSP) on PCR-positivity at the different Ct-value cut-offs. The study was conducted in four herds with conventional milking parlours at routine milk recordings. Following the farmers' routine pre-milking preparation, 411 of 794 cows were randomly selected for the PSP treatment. These procedures included removing the first streams of milk and 70% alcohol teat disinfection. Composite milk samples were then collected from all cows and tested using PCR. Data on milking order were used to estimate the correlation between consecutively milked cows in each milking unit. Factors associated with PCR-positivity for S. agalactiae were analyzed using generalized estimating equations, assuming a binomially distributed outcome with a logit link function. Presampling procedures were significant only at cut-off 37. A first-order autoregressive structure best described the correlation between consecutively milked cows. The correlation was 13%, 11%, and 9% at cut-offs <40, 37, and 34, respectively. PSP did not reduce the odds of cows being PCR-positive for S. agalactiae. In conclusion, carryover and non-aseptic sampling affected the PCR results and should therefore be considered when samples from routine milk

  15. Applicability and generalisability of published results of randomised controlled trials and non-randomised studies evaluating four orthopaedic procedures: methodological systematic review.

    Science.gov (United States)

    Pibouleau, Leslie; Boutron, Isabelle; Reeves, Barnaby C; Nizard, Rémy; Ravaud, Philippe

    2009-11-17

    To compare the reporting of essential applicability data from randomised controlled trials and non-randomised studies evaluating four new orthopaedic surgical procedures. Medline and the Cochrane central register of controlled trials. All articles of comparative studies assessing total hip or knee arthroplasty carried out by a minimally invasive approach or computer assisted navigation system. Items judged to be essential for interpreting the applicability of findings about such procedures were identified by a survey of a sample of orthopaedic surgeons (77 of 512 completed the survey). Reports were evaluated for data describing these "essential" items and the number of centres and surgeons involved in the trials. When data on the number of centres and surgeons were not reported, the corresponding author of the selected trials was contacted. Results: 84 articles were identified (38 randomised controlled trials, 46 non-randomised studies). The median percentage (interquartile range) of essential items reported for non-randomised studies compared with randomised controlled trials was 38% (25-63%) versus 44% (38-45%) for items about patients, 71% (43-86%) versus 71% (57-86%) for items considered essential for all interventions, and 38% (25-50%) versus 50% (25-50%) for items about the context of care. More than 80% of both study types were single centre studies, with one or two participating surgeons. The reporting of data related to the applicability of results was poor in published articles of both non-randomised studies and randomised controlled trials and did not differ by study design. The applicability of results from the trials and studies was similar in terms of number of centres and surgeons involved and the reproducibility of the intervention.

  16. Exact results in modeling planetary atmospheres-III

    Energy Technology Data Exchange (ETDEWEB)

    Pelkowski, J. [Institut fuer Atmosphaere und Umwelt, J.W. Goethe Universitaet Frankfurt, Campus Riedberg, Altenhoferallee 1, D-60438 Frankfurt a.M. (Germany)], E-mail: Pelkowski@meteor.uni-frankfurt.de; Chevallier, L. [Observatoire de Paris-Meudon, Laboratoire LUTH, 5 Place Jules Janssen, 92195 Meudon cedex (France); Rutily, B. [Universite de Lyon, F-69003 Lyon (France); Universite Lyon 1, Observatoire de Lyon, 9 avenue Charles Andre, F-69230 Saint-Genis-Laval (France); CNRS, UMR 5574, Centre de Recherche Astrophysique de Lyon (France); Ecole Normale Superieure de Lyon, F-69007 Lyon (France); Titaud, O. [Centro de Modelamiento Matematico, UMI 2807 CNRS-UChile, Blanco Encalada 2120 - 7 Piso, Casilla 170 - Correo 3, Santiago (Chile)

    2008-01-15

    We apply the semi-gray model of our previous paper to the particular case of the Earth's atmosphere, in order to illustrate quantitatively the inverse problem associated with the direct problem we dealt with before. From given climatological values of the atmosphere's spherical albedo and transmittance for visible radiation, the single-scattering albedo and the optical thickness in the visible are inferred, while the infrared optical thickness is deduced for given global average surface temperature. Eventually, temperature distributions in terms of the infrared optical depth will be shown for a terrestrial atmosphere assumed to be semi-gray and, locally, in radiative and thermodynamic equilibrium.

  17. Exact results in modeling planetary atmospheres-I. Gray atmospheres

    Energy Technology Data Exchange (ETDEWEB)

    Chevallier, L. [Observatoire de Paris-Meudon, Laboratoire LUTH, 5 Place Jules Janssen, 92195 Meudon cedex (France)]. E-mail: loic.chevallier@obspm.fr; Pelkowski, J. [Institut fuer Meteorologie und Geophysik, J.W. Goethe Universitaet Frankfurt, Robert Mayer Strasse 1, D-60325 Frankfurt (Germany); Rutily, B. [Universite de Lyon, Lyon, F-69000 (France) and Universite Lyon 1, Villeurbanne, F-69622 (France) and Centre de Recherche Astronomique de Lyon, Observatoire de Lyon, 9 avenue Charles Andre, Saint-Genis Laval cedex, F-69561 (France) and CNRS, UMR 5574; Ecole Normale Superieure de Lyon, Lyon (France)

    2007-04-15

An exact model is proposed for a gray, isotropically scattering planetary atmosphere in radiative equilibrium. The slab is illuminated on one side by a collimated beam and is bounded on the other side by an emitting and partially reflecting ground. We provide expressions for the incident and reflected fluxes on both boundary surfaces, as well as the temperature of the ground and the temperature distribution in the atmosphere, assuming the latter to be in local thermodynamic equilibrium. Tables and curves of the temperature distribution are included for various values of the optical thickness. Finally, semi-infinite atmospheres illuminated from the outside or by sources at infinity are dealt with.
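The exact temperature distribution of the paper has a well-known approximate counterpart, the Eddington gray-atmosphere solution T⁴(τ) = (3/4)·T_eff⁴·(τ + 2/3). A minimal sketch of that textbook relation (the approximation, not the exact model above):

```python
def gray_temperature(tau, t_eff):
    """Eddington-approximation temperature (K) at optical depth tau for a
    gray atmosphere in radiative equilibrium: T^4 = (3/4) T_eff^4 (tau + 2/3)."""
    return (0.75 * t_eff**4 * (tau + 2.0 / 3.0)) ** 0.25

# At tau = 2/3 the local temperature equals the effective temperature;
# it is cooler above that level and warmer below it.
t_unit_depth = gray_temperature(1.0, 255.0)
```

Exact solutions such as the one tabulated in the paper differ from this approximation only modestly at small optical depths, which is why the Eddington form is a useful sanity check.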

  18. Delta-tilde interpretation of standard linear mixed model results

    DEFF Research Database (Denmark)

    Brockhoff, Per Bruun; Amorim, Isabel de Sousa; Kuznetsova, Alexandra

    2016-01-01

    effects relative to the residual error and to choose the proper effect size measure. For multi-attribute bar plots of F-statistics this amounts, in balanced settings, to a simple transformation of the bar heights to get them transformed into depicting what can be seen as approximately the average pairwise...... for factors with differences in number of levels. For mixed models, where in general the relevant error terms for the fixed effects are not the pure residual error, it is suggested to base the d-prime-like interpretation on the residual error. The methods are illustrated on a multifactorial sensory profile...... inherently challenging effect size measure estimates in ANOVA settings....

  19. Robust solution procedure for the discrete energy-averaged model on the calculation of 3D hysteretic magnetization and magnetostriction of iron–gallium alloys

    Energy Technology Data Exchange (ETDEWEB)

    Tari, H., E-mail: tari.1@osu.edu; Scheidler, J.J., E-mail: scheidler.8@osu.edu; Dapino, M.J., E-mail: dapino.1@osu.edu

    2015-06-15

    A reformulation of the Discrete Energy-Averaged model for the calculation of 3D hysteretic magnetization and magnetostriction of iron-gallium (Galfenol) alloys is presented in this paper. An analytical solution procedure based on an eigenvalue decomposition is developed. This procedure avoids the singularities present in the existing approximate solution by offering multiple local minimum energy directions for each easy crystallographic direction. This improved robustness is crucial for use in finite element codes. Analytical simplifications of the 3D model to 2D and 1D applications are also presented. In particular, the 1D model requires calculation for only one easy direction, while all six easy directions must be considered for general applications. Compared to the approximate solution procedure, it is shown that the resulting robustness comes at no expense for 1D applications, but requires almost twice the computational effort for 3D applications. To find model parameters, we employ the average of the hysteretic data, rather than anhysteretic curves, which would require additional measurements. An efficient optimization routine is developed that retains the dimensionality of the prior art. The routine decouples the parameters into exclusive sets, some of which are found directly through a fast preprocessing step to improve accuracy and computational efficiency. The effectiveness of the model is verified by comparison with existing measurement data. - Highlights: • The discrete energy-averaged model for Galfenol is reformulated. • An analytical solution for 3D magnetostriction and magnetization is developed from eigenvalue decomposition. • Improved robustness is achieved. • An efficient optimization routine is developed to identify parameters from averaged hysteresis curves. • The effectiveness of the model is demonstrated against experimental data.
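The eigenvalue-decomposition step can be illustrated with a hedged toy example (the matrix K below is a hypothetical quadratic energy coefficient, not the Galfenol model's actual anisotropy energy): for an energy of the form E(m) = mᵀKm with |m| = 1, the stationary directions are eigenvectors of K, and the minimum-energy direction belongs to the smallest eigenvalue.

```python
import numpy as np

def min_energy_direction(K):
    """Unit direction m minimizing the quadratic form m^T K m: the
    eigenvector of the smallest eigenvalue (numpy.linalg.eigh returns
    eigenvalues of a symmetric matrix in ascending order)."""
    _, vectors = np.linalg.eigh(K)
    return vectors[:, 0]

# Hypothetical anisotropy-energy matrix; the minimum-energy direction
# lies along the axis with the smallest diagonal entry (here the y axis).
K = np.diag([3.0, 1.0, 2.0])
m = min_energy_direction(K)
```

Solving via `eigh` avoids the singularities a fixed-point approximation can hit, which is the robustness property the abstract emphasizes.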

  20. Modelling cohesive laws in finite element simulations via an adapted contact procedure in ABAQUS

    DEFF Research Database (Denmark)

    Feih, S.

    2004-01-01

    is not straightforward, and most existing publications consider theoretical and therefore simpler softening shapes. Two possible methods of bridging law approximation areexplained and compared in this report. The bridging laws were implemented in a numerical user subroutine in the finite element code ABAQUS. The main......The influence of different fibre sizings on the strength and fracture toughness of composites was studied by investigating the characteristics of fibre cross-over bridging in DCB specimens loaded with pure bending moments. These tests result in bridginglaws, which are obtained by simultaneous...... measurements of the crack growth resistance and the end opening of the notch. The advantage of this method is that these bridging laws represent material laws independent of the specimen geometry. However, theadaption of the experimentally determined shape to a numerically valid model shape...

  1. A two-way nesting procedure for an ocean model with application to the Norwegian Sea

    Energy Technology Data Exchange (ETDEWEB)

    Heggelund, Yngve; Berntsen, Jarle

    2000-11-01

Two-way nesting for a σ-coordinate ocean model is implemented. The test case is a travelling low-pressure system along the west coast of Norway. Different methods for interaction between the coarse grid and the fine grid have been investigated. It is found that both a Dirichlet-type and an FRS-type boundary condition for the fine grid give reasonable results for this test case. The FRS-type boundary condition gives a smoother transition between the coarse and fine grids, but more noise in the interior of the fine grid. With no feedback from the fine grid to the coarse grid, phase differences between the solutions on the two grids cause unphysical vortices at the interface between the grids. (author)
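A flow relaxation scheme (FRS) boundary condition blends the coarse-grid solution into the fine grid over a relaxation zone a few cells wide. A minimal 1D sketch, with the zone width and the linear relaxation weights chosen as illustrative assumptions (real implementations often use smoother weight profiles):

```python
import numpy as np

def frs_blend(phi_fine, phi_coarse, width=7):
    """Blend a coarse-grid solution into a 1D fine-grid field over a
    relaxation zone at each boundary (flow relaxation scheme): the weight
    alpha is 1 at the boundary (pure coarse solution) and ramps linearly
    to 0 in the interior (pure fine-grid solution)."""
    alpha = np.zeros(phi_fine.size)
    ramp = 1.0 - np.arange(width) / width       # 1, 6/7, ..., 1/7
    alpha[:width] = ramp
    alpha[-width:] = ramp[::-1]
    return (1.0 - alpha) * phi_fine + alpha * phi_coarse

# Boundary values follow the coarse grid; interior values stay purely fine-grid.
blended = frs_blend(np.zeros(20), np.ones(20))
```

This gradual blending is what gives the FRS condition its smoother grid transition relative to a Dirichlet condition, at the cost of a wider zone influenced by the coarse solution.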

  2. Transcranial Magnetic Stimulation: An Automated Procedure to Obtain Coil-specific Models for Field Calculations

    DEFF Research Database (Denmark)

    Madsen, Kristoffer Hougaard; Ewald, Lars; Siebner, Hartwig R.

    2015-01-01

    Background: Field calculations for transcranial magnetic stimulation (TMS) are increasingly implemented online in neuronavigation systems and in more realistic offline approaches based on finite-element methods. They are often based on simplified and/or non-validated models of the magnetic vector...... potential of the TMS coils. Objective: To develop an approach to reconstruct the magnetic vector potential based on automated measurements. Methods: We implemented a setup that simultaneously measures the three components of the magnetic field with high spatial resolution. This is complemented by a novel...... approach to determine the magnetic vector potential via volume integration of the measured field. Results: The integration approach reproduces the vector potential with very good accuracy. The vector potential distribution of a standard figure-of-eight shaped coil determined with our setup corresponds well...

  3. A surgical rat model of sleeve gastrectomy with staple technique: long-term weight loss results.

    Science.gov (United States)

    Patrikakos, Panagiotis; Toutouzas, Konstantinos G; Perrea, Despoina; Menenakos, Evangelos; Pantopoulou, Alkistis; Thomopoulos, Theodore; Papadopoulos, Stefanos; Bramis, John I

    2009-11-01

Sleeve gastrectomy (SG) is one of the surgical procedures applied for treating morbid obesity consisting of removing the gastric fundus and transforming the stomach into a narrow gastric tube. The aim of this experimental study is to create a functional model of SG and to present the long-term weight loss results. Twenty adult Wistar rats were fed with high fat diet for 12 weeks before being divided randomly into two groups of ten rats each. One group underwent SG performed with the use of staples, and the other group underwent a sham operation (control group). The animals' weight was evaluated weekly for 15 weeks after the operation. All animals survived throughout the experiment. After the operation both groups started to lose weight with maximum weight loss on the seventh postoperative day (POD) for the sham-operated group and on the 15th POD for the SG group. Thereafter, both groups started to regain weight but with different rates. By the fourth postoperative week (POW), the average weight of the sham group did not differ statistically significantly compared to the preoperative weight, while after the eighth POW, rats' average weight was statistically significantly increased compared to the preoperative value. On the other hand, average weight of the SG group was lower postoperatively until the end of the study compared to the preoperative average weight. We have created a surgical rat model of SG, enabling the further study of biochemical and hormonal parameters.

  4. Using the PLUM procedure of SPSS to fit unequal variance and generalized signal detection models.

    Science.gov (United States)

    DeCarlo, Lawrence T

    2003-02-01

The recent addition of a procedure in SPSS for the analysis of ordinal regression models offers a simple means for researchers to fit the unequal variance normal signal detection model and other extended signal detection models. The present article shows how to implement the analysis and how to interpret the SPSS output. Examples of fitting the unequal variance normal model and other generalized signal detection models are given. The approach offers a convenient means for applying signal detection theory to a variety of research.
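For readers without SPSS, the same unequal-variance model can be fitted by direct maximum likelihood. The sketch below is not the PLUM procedure itself; it assumes rating counts ordered from "sure noise" to "sure signal", a unit-variance noise distribution, and illustrative starting values:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def fit_uv_sdt(noise_counts, signal_counts):
    """Maximum-likelihood fit of the unequal-variance normal SDT model to
    confidence-rating counts. The noise distribution is N(0, 1); returns
    the mean separation d and the signal standard deviation s."""
    K = len(noise_counts)

    def nll(p):
        d, log_s = p[0], p[1]
        c = np.sort(p[2:])                          # K-1 ordered criteria
        cum_n = norm.cdf(c)                         # P(rating <= k | noise)
        cum_s = norm.cdf((c - d) / np.exp(log_s))   # P(rating <= k | signal)
        pn = np.diff(np.concatenate(([0.0], cum_n, [1.0])))
        ps = np.diff(np.concatenate(([0.0], cum_s, [1.0])))
        return -(np.dot(noise_counts, np.log(pn + 1e-12))
                 + np.dot(signal_counts, np.log(ps + 1e-12)))

    x0 = np.concatenate(([1.0, 0.0], np.linspace(-1.0, 1.0, K - 1)))
    res = minimize(nll, x0, method="Nelder-Mead",
                   options={"maxiter": 20000, "maxfev": 20000,
                            "xatol": 1e-9, "fatol": 1e-9})
    return res.x[0], np.exp(res.x[1])

# Synthetic check: expected counts for d = 1.5, s = 1.3 and three criteria.
crit = np.array([-0.5, 0.5, 1.5])
pn = np.diff(np.concatenate(([0.0], norm.cdf(crit), [1.0])))
ps = np.diff(np.concatenate(([0.0], norm.cdf((crit - 1.5) / 1.3), [1.0])))
d_hat, s_hat = fit_uv_sdt(1e4 * pn, 1e4 * ps)
```

The log-scale parameterization of s keeps the estimate positive, mirroring how ordinal regression with a scale term (the PLUM route) handles the unequal variances.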

  5. Large Deviation Results for Generalized Compound Negative Binomial Risk Models

    Institute of Scientific and Technical Information of China (English)

    Fan-chao Kong; Chen Shen

    2009-01-01

In this paper we extend and improve some large deviation results for random sums of random variables. Let {Xn; n≥1} be a sequence of non-negative, independent and identically distributed random variables with common heavy-tailed distribution function F and finite mean μ∈R+, let {N(n); n≥0} be a sequence of negative binomial distributed random variables with a parameter p∈(0,1), and let {M(n); n≥0} be a Poisson process with intensity λ>0. Suppose {N(n); n≥0}, {Xn; n≥1} and {M(n); n≥0} are mutually independent; under these assumptions we obtain large deviation results for the generalized compound negative binomial risk model. These results can be applied to certain problems in insurance and finance.
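The flavour of such results can be checked numerically: for subexponential (heavy-tailed) claims, the tail of the compound sum is governed by the single largest claim, so P(S > x) ≈ E[N]·P(X₁ > x) for large x. A hedged simulation sketch with illustrative parameters (not the paper's exact model, which also involves the Poisson process M):

```python
import numpy as np

rng = np.random.default_rng(0)

def compound_nb_sample(size, r=5, p=0.4, alpha=2.5):
    """Samples of S = X_1 + ... + X_N with N ~ NegBin(r, p) and i.i.d.
    standard Pareto(alpha) claims (heavy-tailed, finite mean since
    alpha > 1). numpy's pareto() draws Lomax variates, so 1 is added
    per claim to shift support to [1, inf)."""
    counts = rng.negative_binomial(r, p, size=size)
    return np.array([rng.pareto(alpha, k).sum() + k for k in counts])

samples = compound_nb_sample(20000)          # E[S] = E[N] E[X] = 7.5 * 5/3 = 12.5
x = 30.0
tail_mc = (samples > x).mean()               # empirical P(S > x)
tail_single_jump = (5 * 0.6 / 0.4) * x**-2.5 # E[N] * P(X_1 > x) asymptotic
```

At moderate x the Monte Carlo tail exceeds the single-big-jump asymptotic; the two only converge deep in the tail, which is exactly the regime large deviation theorems describe.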

  6. Theoretical Modeling of ISO Results on Planetary Nebula NGC 7027

    Science.gov (United States)

    Yan, M.; Federman, S. R.; Dalgarno, A.; Bjorkman, J. E.

    1999-04-01

We present a thermal and chemical model of the neutral envelope of planetary nebula NGC 7027. In our model, the neutral envelope is composed of a thin dense shell of constant density and an outer stellar wind region with the usual inverse-square law density profile. The thermal and chemical structure is calculated with the assumption that the incident radiation field on the inner surface equals 0.5×10⁵ times Draine's fit to the average interstellar far-ultraviolet field. The rate coefficient for H2 formation on grains is assumed to be 1/5 the usual value to take into account the lower dust-gas mass ratio in the neutral envelope of NGC 7027. The calculated temperature in the dense shell decreases from 3000 to under 200 K. Once the temperature drops to 200 K, we assume that it remains at 200 K until the outer edge of the dense shell is reached, so that the observed intensities of CO J=16-15, 15-14, and 14-13 lines can be reproduced. The 200 K temperature can be interpreted as the average temperature of the shocked gas just behind the forward shock front in the framework of the interacting stellar wind theory. We calculate the intensities of the molecular far-infrared rotational lines by using a revised version of the escape probability formalism. The theoretical intensities for rotational lines of CO (from J=29-28 to J=14-13), CH+, OH, and CH are shown to be in good agreement with ISO observations. The H2 rovibrational line intensities are also calculated and are in agreement with available observations.

  7. Combining forming results via weld models to powerful numerical assemblies

    NARCIS (Netherlands)

    Kose, K.; Rietman, Bert

    2004-01-01

    Forming simulations generally give satisfying results with respect to thinning, stresses, changed material properties and, with a proper springback calculation, the geometric form. The joining of parts by means of welding yields an extra change of the material properties and the residual stresses.

  9. MATLAB-implemented estimation procedure for model-based assessment of hepatic insulin degradation from standard intravenous glucose tolerance test data.

    Science.gov (United States)

    Di Nardo, Francesco; Mengoni, Michele; Morettini, Micaela

    2013-05-01

The present study provides a novel MATLAB-based parameter estimation procedure for individual assessment of the hepatic insulin degradation (HID) process from standard frequently-sampled intravenous glucose tolerance test (FSIGTT) data. Direct access to the source code, offered by MATLAB, enabled us to design an optimization procedure based on the alternating use of Gauss-Newton's and Levenberg-Marquardt's algorithms, which assures the full convergence of the process and the containment of computational time. Reliability was tested by direct comparison, in eighteen non-diabetic subjects, with the well-known kinetic analysis software package SAAM II, and by application on different data. Agreement between MATLAB and SAAM II was warranted by intraclass correlation coefficients ≥0.73, no significant differences between corresponding mean parameter estimates and predictions of HID rate, and consistent residual analysis. Moreover, the MATLAB optimization procedure resulted in a significant 51% reduction of CV% for the parameter worst estimated by SAAM II while keeping the CV% of all model parameters acceptable. The MATLAB-based procedure was therefore suggested as a suitable tool for the individual assessment of the HID process.
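The alternating Gauss-Newton/Levenberg-Marquardt idea can be approximated with an off-the-shelf Levenberg-Marquardt solver. The sketch below fits a hypothetical mono-exponential decay (a stand-in, not the paper's HID model) with scipy.optimize.least_squares:

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical mono-exponential kinetic model y = A * exp(-k * t), a
# stand-in for the compartmental models fitted to FSIGTT data.
t = np.linspace(0.0, 10.0, 40)
y_obs = 5.0 * np.exp(-0.7 * t)            # noise-free synthetic "data"

def residuals(p):
    A, k = p
    return A * np.exp(-k * t) - y_obs

# Levenberg-Marquardt (MINPACK's lmdif) from a rough initial guess.
fit = least_squares(residuals, x0=[4.0, 0.5], method="lm")
A_hat, k_hat = fit.x
```

Levenberg-Marquardt interpolates between Gauss-Newton steps (fast near the optimum) and gradient-descent-like steps (robust far from it), which is why alternating or damped schemes of this kind converge reliably on kinetic models.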

  10. Ionospheric Poynting Flux and Joule Heating Modeling Challenge: Latest Results and New Models.

    Science.gov (United States)

    Shim, J. S.; Rastaetter, L.; Kuznetsova, M. M.; Knipp, D. J.; Zheng, Y.; Cosgrove, R. B.; Newell, P. T.; Weimer, D. R.; Fuller-Rowell, T. J.; Wang, W.

    2014-12-01

We present the latest results from the ionospheric Poynting flux and Joule heating modeling challenge, along with updates at the Community Coordinated Modeling Center (CCMC). With the addition of satellite tracking and display features in the CCMC's online analysis tool, we are now able to obtain Poynting flux and Joule heating values from a wide variety of ionospheric models. In addition to Poynting fluxes derived from electric and magnetic field measurements made by the Defense Meteorological Satellite Program (DMSP) satellites for a recent modeling challenge, we can now use a Poynting flux model derived from FAST satellite observations for comparison. Poynting fluxes are also correlated with Ovation Prime maps of precipitation patterns during the same time periods to assess how "typical" the events in the challenge are.
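The quantity being compared is the DC Poynting flux S = (E × δB)/μ₀ computed from the measured electric field and the magnetic perturbation. A minimal sketch with illustrative field values (not actual DMSP or FAST data):

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability (H/m)

def poynting_flux(E, dB):
    """Poynting flux S = (E x dB) / mu0 (W/m^2) from the measured electric
    field E (V/m) and magnetic perturbation dB (T); rows are samples."""
    return np.cross(E, dB) / MU0

# Illustrative values: a 10 mV/m electric field perpendicular to a 100 nT
# magnetic perturbation carries about 0.8 mW/m^2 of electromagnetic flux.
E = np.array([[10e-3, 0.0, 0.0]])
dB = np.array([[0.0, 100e-9, 0.0]])
S = poynting_flux(E, dB)
```

In satellite work the field-aligned (downward) component of S is the one interpreted as energy input to the ionosphere, which under steady conditions is dissipated as Joule heating.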

  11. Modeling Framework and Results to Inform Charging Infrastructure Investments

    Energy Technology Data Exchange (ETDEWEB)

    Melaina, Marc W [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Wood, Eric W [National Renewable Energy Laboratory (NREL), Golden, CO (United States)

    2017-09-01

The plug-in electric vehicle (PEV) market is experiencing rapid growth, with dozens of battery electric (BEV) and plug-in hybrid electric (PHEV) models already available and billions of dollars being invested by automotive manufacturers in the PEV space. Electric range is increasing thanks to larger and more advanced batteries, and significant infrastructure investments are being made to enable higher power fast charging. Costs are falling and PEVs are becoming more competitive with conventional vehicles. Moreover, new technologies such as connectivity and automation hold the promise of enhancing the value proposition of PEVs. This presentation outlines a suite of projects funded by the U.S. Department of Energy's Vehicle Technology Office to conduct assessments of the economic value and charging infrastructure requirements of the evolving PEV market. Individual assessments include national evaluations of PEV economic value (assuming 73M PEVs on the road in 2035), national analysis of charging infrastructure requirements (with community and corridor level resolution), and case studies of PEV ownership in Columbus, OH and Massachusetts.

  12. Rocket injector anomalies study. Volume 1: Description of the mathematical model and solution procedure

    Science.gov (United States)

    Przekwas, A. J.; Singhal, A. K.; Tam, L. T.

    1984-01-01

The capability of simulating three dimensional two phase reactive flows with combustion in liquid-fuelled rocket engines is demonstrated. This was accomplished by modifying an existing three dimensional computer program (REFLAN3D) with an Eulerian-Lagrangian approach to simulate two phase spray flow, evaporation and combustion. The modified code is referred to as REFLAN3D-SPRAY. The mathematical formulation of the fluid flow, heat transfer, combustion, and two phase flow interaction, as well as the numerical solution procedure, boundary conditions and their treatment, is described.

  13. Establishing and validating luminescence dating procedures for archaeological remains in the geochronology laboratory of the University of A Coruña: first results

    Directory of Open Access Journals (Sweden)

    Sanjurjo Sánchez, Jorge

    2008-12-01

    Thermoluminescence (TL) dating of archaeological material is a technique about 50 years old, but it is not much used in Spain. Recent methodological advances have improved the accuracy and precision of the method. The Geochronology Unit of the University Geological Institute "Isidro Parga Pondal", University of A Coruña, has recently set up a luminescence laboratory. In order to test analytical dating procedures, medieval tiles from an archaeological site next to the Hercules Tower (A Coruña) have been dated. The stratigraphical column had previously been dated by 14C, so good chronological control is available. Samples were analysed using two different analytical procedures: a classical multi-aliquot approach (AD-TL) and a recent single-aliquot procedure (SAR-TL). Our results show that both methods yield comparable paleodoses, with SAR-TL giving the smaller error.

    Thermoluminescence (TL) dating of archaeological material is a technique about 50 years old, although not widely used in Spain. Recent methodological advances have increased the accuracy and precision of the method. The Geochronology Unit of the University Institute of Geology "Isidro Parga Pondal" of the University of A Coruña now operates a luminescence laboratory. To fine-tune the analytical dating procedures, medieval tiles were dated from an excavation near the Hercules Tower (A Coruña), where the stratigraphic column had been dated by means of 14C. The samples were subjected to two procedures from the literature: a classical one (AD-TL) and a recently developed one (SAR-TL). The results show agreement between both methods, and even considerable improvements obtained with SAR-TL.
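Whatever the aliquot protocol, the luminescence age follows from the paleodose (equivalent dose) divided by the environmental dose rate. A minimal sketch with illustrative values (not the site's actual measurements):

```python
def tl_age(paleodose_gy, dose_rate_gy_per_ka):
    """Luminescence age in ka: equivalent dose (paleodose, Gy) divided by
    the environmental dose rate (Gy/ka)."""
    return paleodose_gy / dose_rate_gy_per_ka

# Illustrative values only: a 3.2 Gy paleodose at 4.0 Gy/ka gives 0.8 ka,
# i.e. roughly 800 years - a broadly medieval age.
age_ka = tl_age(3.2, 4.0)
```

Protocols such as AD-TL and SAR-TL differ in how the paleodose is estimated from the aliquots; the age equation itself is common to both.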

  14. Standard Model Higgs results from ATLAS and CMS experiments

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00221190; The ATLAS collaboration

    2016-01-01

The properties of the Higgs boson were measured with the ATLAS and CMS experiments at the LHC at centre-of-mass energies of 7 TeV and 8 TeV. The combined data samples of the ATLAS and CMS experiments were used for the measurements of the Higgs boson mass and couplings. Furthermore, the CP and spin analyses, done separately with the CMS and ATLAS experiments, are described. Moreover, first results on the Higgs boson cross section at the centre-of-mass energy of 13 TeV in the channels H->ZZ->4 leptons and H->gamma gamma with the ATLAS detector are presented.

  15. Revisiting the destination ranking procedure in development of an Intervening Opportunities Model for public transit trip distribution

    Science.gov (United States)

    Nazem, Mohsen; Trépanier, Martin; Morency, Catherine

    2015-01-01

An Enhanced Intervening Opportunities Model (EIOM) is developed for Public Transit (PT). This is a supply-dependent distribution model, singly constrained on trip production, for work trips during morning peak hours (6:00 a.m.-9:00 a.m.) within the Island of Montreal, Canada. Different data sets, including the 2008 Origin-Destination (OD) survey of the Greater Montreal Area, the 2006 Census of Canada, GTFS network data, along with the geographical data of the study area, are used. EIOM is a nonlinear model composed of socio-demographics, PT supply data and work location attributes. An enhanced destination ranking procedure is used to calculate the number of spatially cumulative opportunities, the basic variable of EIOM. For comparison, a Basic Intervening Opportunities Model (BIOM) is developed by using the basic destination ranking procedure. The main difference between EIOM and BIOM is in the destination ranking procedure: EIOM considers the maximization of a utility function composed of PT Level Of Service and number of opportunities at the destination, along with the OD trip duration, whereas BIOM is based on a destination ranking derived only from OD trip durations. Analysis confirmed that EIOM is more accurate than BIOM. This study presents a new tool for PT analysts, planners and policy makers to study the potential changes in PT trip patterns due to changes in socio-demographic characteristics, PT supply, and other factors. It also opens new opportunities for the development of more accurate PT demand models with new emergent data such as smart card validations.
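A basic intervening-opportunities allocation (the BIOM-style ranking by travel time alone) can be sketched as follows; the decay parameter L and the renormalization to match each origin's production are illustrative assumptions, not the paper's calibrated values:

```python
import numpy as np

def intervening_opportunities(productions, opportunities, times, L=1e-4):
    """Singly constrained intervening-opportunities trip distribution.
    From each origin, destinations are ranked by travel time (the basic
    ranking); with V_j the cumulative opportunities through rank j,
    T_ij = O_i * [exp(-L*V_{j-1}) - exp(-L*V_j)], renormalized so each
    origin's trips sum to its production."""
    T = np.zeros_like(times, dtype=float)
    for i, O_i in enumerate(productions):
        order = np.argsort(times[i])
        V = np.cumsum(opportunities[order])
        prob = np.exp(-L * np.concatenate(([0.0], V[:-1]))) - np.exp(-L * V)
        T[i, order] = O_i * prob / prob.sum()
    return T

# Illustrative data: one origin, two equal-sized destinations, the
# nearer of which should capture more trips.
T = intervening_opportunities(np.array([100.0]),
                              np.array([5000.0, 5000.0]),
                              np.array([[10.0, 20.0]]))
```

EIOM's enhancement replaces `np.argsort(times[i])` with a ranking by a utility combining PT level of service, destination opportunities, and trip duration; the allocation formula itself is unchanged.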

  16. The Physical Models and Statistical Procedures Used in the RACER Monte Carlo Code

    Energy Technology Data Exchange (ETDEWEB)

    Sutton, T.M.; Brown, F.B.; Bischoff, F.G.; MacMillan, D.B.; Ellis, C.L.; Ward, J.T.; Ballinger, C.T.; Kelly, D.J.; Schindler, L.

    1999-07-01

MCV has the capability of performing iterated-source (criticality), multiplied-fixed-source, and fixed-source calculations. MCV uses a highly detailed continuous-energy (as opposed to multigroup) representation of neutron histories and cross section data. The spatial modeling is fully three-dimensional (3-D), and any geometrical region that can be described by quadric surfaces may be represented. The primary results are region-wise reaction rates, neutron production rates, slowing-down-densities, fluxes, leakages, and when appropriate the eigenvalue or multiplication factor. Region-wise nuclidic reaction rates are also computed, which may then be used by other modules in the system to determine time-dependent nuclide inventories so that RACER can perform depletion calculations. Furthermore, derived quantities such as ratios and sums of primary quantities and/or other derived quantities may also be calculated. MCV performs statistical analyses on output quantities, computing estimates of the 95% confidence intervals as well as indicators as to the reliability of these estimates. The remainder of this chapter provides an overview of the MCV algorithm. The following three chapters describe the MCV mathematical, physical, and statistical treatments in more detail. Specifically, Chapter 2 discusses topics related to tracking the histories including: geometry modeling, how histories are moved through the geometry, and variance reduction techniques related to the tracking process. Chapter 3 describes the nuclear data and physical models employed by MCV. Chapter 4 discusses the tallies, statistical analyses, and edits. Chapter 5 provides some guidance as to how to run the code, and Chapter 6 is a list of the code input options.
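The statistical analysis a Monte Carlo code performs on its tallies amounts to batch statistics: the sample mean of independent batch estimates and a ~95% confidence half-width from their standard error. A generic sketch (not RACER's actual implementation):

```python
import numpy as np

def tally_ci(batch_means, z=1.96):
    """Sample mean and ~95% confidence half-width of a Monte Carlo tally
    estimated from independent, identically distributed batch means."""
    m = np.mean(batch_means)
    half_width = z * np.std(batch_means, ddof=1) / np.sqrt(len(batch_means))
    return m, half_width

# Illustrative: 50 batch estimates of an eigenvalue scattered around 1.0.
rng = np.random.default_rng(1)
batches = rng.normal(1.0, 0.02, size=50)
mean, hw = tally_ci(batches)
```

Reliability indicators of the kind mentioned in the abstract typically check whether the batch means really are independent and normally distributed, since correlated batches make the computed interval too narrow.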

  17. Stress Resultant Based Elasto-Viscoplastic Thick Shell Model

    Directory of Open Access Journals (Sweden)

    Pawel Woelke

    2012-01-01

    The current paper presents enhancements introduced to the elasto-viscoplastic shell formulation, which serves as a theoretical base for the finite element code EPSA (Elasto-Plastic Shell Analysis) [1-3]. The shell equations used in EPSA are modified to account for transverse shear deformation, which is important in the analysis of thick plates and shells, as well as composite laminates. Transverse shear forces calculated from transverse shear strains are introduced into a rate-dependent yield function, which is similar to Iliushin's yield surface expressed in terms of stress resultants and stress couples [12]. The hardening rule defined by Bieniek and Funaro [4], which allows for representation of the Bauschinger effect on a moment-curvature plane, was previously adopted in EPSA and is used here in the same form. Viscoplastic strain rates are calculated, taking into account the transverse shears. Only non-layered shells are considered in this work.

  18. High-energy radiation damage in zirconia: modeling results

    Energy Technology Data Exchange (ETDEWEB)

    Zarkadoula, Eva; Devanathan, Ram; Weber, William J.; Seaton, Michael; Todorov, Ilian; Nordlund, Kai; Dove, Martin T.; Trachenko, Kostya

    2014-02-28

    Zirconia has been viewed as a material of exceptional resistance to amorphization by radiation damage, and was consequently proposed as a candidate to immobilize nuclear waste and serve as a nuclear fuel matrix. Here, we perform molecular dynamics simulations of radiation damage in zirconia in the range of 0.1-0.5 MeV energies with the account of electronic energy losses. We find that the lack of amorphizability co-exists with a large number of point defects and their clusters. These, importantly, are largely disjoint from each other and therefore represent a dilute damage that does not result in the loss of long-range structural coherence and amorphization. We document the nature of these defects in detail, including their sizes, distribution and morphology, and discuss practical implications of using zirconia in intense radiation environments.

  19. High-energy radiation damage in zirconia: modeling results

    Energy Technology Data Exchange (ETDEWEB)

    Zarkadoula, Evangelia [Queen Mary, University of London; Devanathan, Ram [Pacific Northwest National Laboratory (PNNL); Weber, William J [ORNL; Seaton, M [Daresbury Laboratory, UK; Todorov, I T [Daresbury Laboratory, UK; Nordlund, Kai [University of Helsinki; Dove, Martin T [Queen Mary, University of London; Trachenko, Kostya [Queen Mary, University of London

    2014-01-01

Zirconia is viewed as a material of exceptional resistance to amorphization by radiation damage, and consequently proposed as a candidate to immobilize nuclear waste and serve as an inert nuclear fuel matrix. Here, we perform molecular dynamics simulations of radiation damage in zirconia in the range of 0.1-0.5 MeV energies with account of electronic energy losses. We find that the lack of amorphizability co-exists with a large number of point defects and their clusters. These, importantly, are largely isolated from each other and therefore represent a dilute damage that does not result in the loss of long-range structural coherence and amorphization. We document the nature of these defects in detail, including their sizes, distribution and morphology, and discuss practical implications of using zirconia in intense radiation environments.

  20. High-energy radiation damage in zirconia: Modeling results

    Energy Technology Data Exchange (ETDEWEB)

    Zarkadoula, E., E-mail: zarkadoulae@ornl.gov [School of Physics and Astronomy, Queen Mary University of London, Mile End Road, London E1 4NS (United Kingdom); SEPnet, Queen Mary University of London, Mile End Road, London E1 4NS (United Kingdom); Materials Science and Technology Division, Oak Ridge National Laboratory, Oak Ridge, Tennessee 37831 (United States); Devanathan, R. [Nuclear Sciences Division, Pacific Northwest National Laboratory, Richland, Washington 99352 (United States); Weber, W. J. [Materials Science and Technology Division, Oak Ridge National Laboratory, Oak Ridge, Tennessee 37831 (United States); Department of Materials Science and Engineering, University of Tennessee, Knoxville, Tennessee 37996 (United States); Seaton, M. A.; Todorov, I. T. [STFC Daresbury Laboratory, Scientific Computing Department, Keckwick Lane, Daresbury, Warrington, Cheshire WA4 4AD (United Kingdom); Nordlund, K. [University of Helsinki, P.O. Box 43, FIN-00014 Helsinki (Finland); Dove, M. T. [School of Physics and Astronomy, Queen Mary University of London, Mile End Road, London E1 4NS (United Kingdom); Trachenko, K. [School of Physics and Astronomy, Queen Mary University of London, Mile End Road, London E1 4NS (United Kingdom); SEPnet, Queen Mary University of London, Mile End Road, London E1 4NS (United Kingdom)

    2014-02-28

    Zirconia is viewed as a material of exceptional resistance to amorphization by radiation damage, and consequently proposed as a candidate to immobilize nuclear waste and serve as an inert nuclear fuel matrix. Here, we perform molecular dynamics simulations of radiation damage in zirconia in the range of 0.1–0.5 MeV energies with account of electronic energy losses. We find that the lack of amorphizability co-exists with a large number of point defects and their clusters. These, importantly, are largely isolated from each other and therefore represent a dilute damage that does not result in the loss of long-range structural coherence and amorphization. We document the nature of these defects in detail, including their sizes, distribution, and morphology, and discuss practical implications of using zirconia in intense radiation environments.