WorldWideScience

Sample records for model updating procedures

  1. Obtaining manufactured geometries of deep-drawn components through a model updating procedure using geometric shape parameters

    Science.gov (United States)

    Balla, Vamsi Krishna; Coox, Laurens; Deckers, Elke; Pluymers, Bert; Desmet, Wim; Marudachalam, Kannan

    2018-01-01

    The vibration response of a component or system can be predicted using the finite element method after ensuring that the numerical model represents the realistic behaviour of the actual system under study. One way to build high-fidelity finite element models is through a model updating procedure. In this work, a novel model updating method for deep-drawn components is demonstrated. Since the component is manufactured with a high draw ratio, significant deviations in both profile and thickness distributions occur in the manufacturing process. Conventional model updating, involving Young's modulus, density and damping ratios, does not lead to a satisfactory match between simulated and experimental results. Hence a new model updating process is proposed in which geometric shape variables are incorporated by morphing the finite element model. This morphing imitates the changes that occur during the deep-drawing process. An optimization procedure that uses the Global Response Surface Method (GRSM) algorithm to maximize the diagonal terms of the Modal Assurance Criterion (MAC) matrix is presented. This optimization results in a more accurate finite element model. An advantage of the proposed methodology is that the CAD surface of the updated finite element model can be readily obtained after optimization. Because this CAD model represents the manufactured part more accurately, simulations performed with the updated geometry yield more reliable results.
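The MAC-based objective described above can be sketched in a few lines; the following NumPy illustration is a minimal sketch (the function names and the use of the mean diagonal as the scalar objective are assumptions for illustration, not the paper's exact formulation):

```python
import numpy as np

def mac_matrix(phi_a: np.ndarray, phi_e: np.ndarray) -> np.ndarray:
    """MAC between analytical (n x ma) and experimental (n x me) mode-shape sets.

    MAC(i, j) = |phi_a_i . phi_e_j|^2 / ((phi_a_i . phi_a_i)(phi_e_j . phi_e_j))
    A value of 1.0 means the two shapes are perfectly correlated.
    """
    num = np.abs(phi_a.T @ phi_e) ** 2
    den = np.outer(np.sum(phi_a**2, axis=0), np.sum(phi_e**2, axis=0))
    return num / den

def mac_objective(phi_a: np.ndarray, phi_e: np.ndarray) -> float:
    # Maximizing the diagonal terms drives each paired analytical/experimental
    # mode toward perfect correlation.
    return float(np.mean(np.diag(mac_matrix(phi_a, phi_e))))
```

An updating loop (GRSM in the paper; any optimizer in principle) would then perturb the geometric shape parameters, re-run the FE model to obtain `phi_a`, and maximize `mac_objective` against the measured `phi_e`.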

  2. Detection of Earthquake-Induced Damage in a Framed Structure Using a Finite Element Model Updating Procedure

    Science.gov (United States)

    Kim, Seung-Nam; Park, Taewon; Lee, Sang-Hyun

    2014-01-01

    Damage of a 5-story framed structure was identified from two types of measured data, which are frequency response functions (FRF) and natural frequencies, using a finite element (FE) model updating procedure. In this study, a procedure to determine the appropriate weightings for different groups of observations was proposed. In addition, a modified frame element which included rotational springs was used to construct the FE model for updating to represent concentrated damage at the member ends (a formulation for plastic hinges in framed structures subjected to strong earthquakes). The results of the model updating and subsequent damage detection when the rotational springs (RS model) were used were compared with those obtained using the conventional frame elements (FS model). Comparisons indicated that the RS model gave more accurate results than the FS model. That is, the errors in the natural frequencies of the updated models were smaller, and the identified damage showed clearer distinctions between damaged and undamaged members and was more consistent with observed damage. PMID:24574888

  3. Detection of Earthquake-Induced Damage in a Framed Structure Using a Finite Element Model Updating Procedure

    Directory of Open Access Journals (Sweden)

    Eunjong Yu

    2014-01-01

    Full Text Available Damage of a 5-story framed structure was identified from two types of measured data, which are frequency response functions (FRF) and natural frequencies, using a finite element (FE) model updating procedure. In this study, a procedure to determine the appropriate weightings for different groups of observations was proposed. In addition, a modified frame element which included rotational springs was used to construct the FE model for updating to represent concentrated damage at the member ends (a formulation for plastic hinges in framed structures subjected to strong earthquakes). The results of the model updating and subsequent damage detection when the rotational springs (RS model) were used were compared with those obtained using the conventional frame elements (FS model). Comparisons indicated that the RS model gave more accurate results than the FS model. That is, the errors in the natural frequencies of the updated models were smaller, and the identified damage showed clearer distinctions between damaged and undamaged members and was more consistent with observed damage.

  4. Detection of earthquake-induced damage in a framed structure using a finite element model updating procedure.

    Science.gov (United States)

    Yu, Eunjong; Kim, Seung-Nam; Park, Taewon; Lee, Sang-Hyun

    2014-01-01

    Damage of a 5-story framed structure was identified from two types of measured data, which are frequency response functions (FRF) and natural frequencies, using a finite element (FE) model updating procedure. In this study, a procedure to determine the appropriate weightings for different groups of observations was proposed. In addition, a modified frame element which included rotational springs was used to construct the FE model for updating to represent concentrated damage at the member ends (a formulation for plastic hinges in framed structures subjected to strong earthquakes). The results of the model updating and subsequent damage detection when the rotational springs (RS model) were used were compared with those obtained using the conventional frame elements (FS model). Comparisons indicated that the RS model gave more accurate results than the FS model. That is, the errors in the natural frequencies of the updated models were smaller, and the identified damage showed clearer distinctions between damaged and undamaged members and was more consistent with observed damage.

  5. Analogous Mechanisms of Selection and Updating in Declarative and Procedural Working Memory: Experiments and a Computational Model

    Science.gov (United States)

    Oberauer, Klaus; Souza, Alessandra S.; Druey, Michel D.; Gade, Miriam

    2013-01-01

    The article investigates the mechanisms of selecting and updating representations in declarative and procedural working memory (WM). Declarative WM holds the objects of thought available, whereas procedural WM holds representations of what to do with these objects. Both systems consist of three embedded components: activated long-term memory, a…

  6. Rapid Map Updating Procedures Using Orthophotos

    Science.gov (United States)

    Alrajhi, M.

    2009-04-01

    The General Directorate of Surveying and Mapping (GDSM) of the Ministry for Municipal and Rural Affairs (MOMRA) of the Kingdom of Saudi Arabia has the mandate for large-scale mapping of 220 Saudi Arabian cities. During the last 30 years all of these cities have been mapped in 3D at least once using stereo photogrammetric procedures. The output of these maps is in digital vector files with more than 300 types of features coded. Mapping at the required scales of 1:10,000 for the urban and suburban areas and at 1:1,000 for the urban areas proper has been a lengthy and costly process, which did not lend itself to regular updating procedures. For this reason the major cities, where most of the development took place, have been newly mapped at about 10-year intervals. To record the changes of urban landscapes more rapidly, orthophoto mapping has recently been introduced. Rather than waiting about 5 years for the line mapping of a large city after the inception of a mapping project, orthophotos can be produced a few months after a new aerial flight is made. While new, but slow, stereo mapping in 3D provides accurate results in conformity with the usual urban mapping specifications, the geocoded superposition of outdated maps with the more recent orthophotos provides a very useful means of monitoring urban change. At the same time the use of orthophotos opens up a new possibility for urban map updating by on-screen digitizing in 2D. This can at least be done for the most relevant features, such as buildings, walls, roads and vegetation. As this is a faster method than 3D stereo plotting, a lesser geometric accuracy is to be expected for the on-screen digitization. There is a need to investigate and compare the two methods with respect to accuracy and speed of operation, as a basis for deciding whether to continue with new 3D stereo mapping every 10 years or to introduce rapid map updating in 2D via on-screen digitization every 3 to 5 years.
This presentation is about …

  7. Laboratory procedures update on Hirschsprung disease.

    Science.gov (United States)

    Takawira, Catherine; D'Agostini, Stephanie; Shenouda, Suzan; Persad, Rabin; Sergi, Consolato

    2015-05-01

    The detection of ganglion cells in rectal biopsies of infants or toddlers with severe constipation is routinely performed by pediatric pathologists in many institutions. Hirschsprung disease (HD) is defined by the lack of ganglion cells (aganglionosis). The early recognition and the prompt implementation of surgical procedures obviously protect infants affected with HD from potential life-threatening conditions, including enterocolitis and debilitating constipation. Image-based and non-image-based clinical techniques and some laboratory tests have been reevaluated along the years, but often fragmentarily. Immunohistochemical markers have been increasingly used in pathology laboratories to detect ganglion cells and nerve fibers. Recently, calretinin, a vitamin D-dependent calcium-binding protein with expression in ganglion cells and nerves, has been described as an adjunctive or primary diagnostic test in HD. The aim of the present study was to systematically summarize and update laboratory procedures targeting ganglion cells in rectal biopsies. Procedures and tests have been reviewed and values of specificity and sensitivity have been calculated according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. Contrast enema has the lowest sensitivity and specificity of all of the 3-index investigations under the lens: contrast enema, anorectal manometry, and biopsy with histology. The latter procedure seems to have the highest sensitivity and specificity. Acetylcholinesterase staining on fresh-frozen material has been found to have slightly higher rates of sensitivity and specificity when compared with hematoxylin and eosin only. Calretinin staining may be supportive for the diagnosis, although some cases with false-positivity may be of some concern. Hematoxylin and eosin with or without acetylcholinesterase remains the criterion standard according to our PRISMA-based data. 
In our opinion, the number of false-positive results …

  8. PSA Update Procedures, an Ultimate Need for Living PSA

    International Nuclear Information System (INIS)

    Hegedus, D.

    1998-01-01

    Nuclear facilities, by their complex nature, change with time. These changes can be physical (plant modifications, etc.), operational (enhanced procedures, etc.) and organizational. In addition, there are also changes in our understanding of the plant, due to operational experience, data collection, technology enhancements, etc. Therefore, it is imperative that the PSA model be frequently updated or modified to reflect these changes. Over the last ten years, there has been a remarkable growth in the use of Probabilistic Safety Assessments (PSAs). The most rapidly growing area of PSA applications is their use to support operational decision-making. Many of these applications are characterized by the potential not only for improving the safety level but also for providing guidance on the optimal use of resources and reducing regulatory burden. To enable a wider use of the PSA model as a tool for safety activities it is essential to maintain the model in a controlled state. Moreover, to fulfill the requirements for a 'Living PSA', the PSA model has to be constantly updated and/or monitored to reflect the current plant configuration. It should be noted that the PSA model should not only represent the plant design but should also represent the operational and emergency procedures. To keep the PSA model up-to-date, several issues should be clearly defined, including: - responsibility should be divided among the PSA group, - procedures for implementing changes should be established, and - QA requirements/program should be established to assure documentation and reporting. (author)

  9. Empirical testing of forecast update procedure forseasonal products

    DEFF Research Database (Denmark)

    Wong, Chee Yew; Johansen, John

    2008-01-01

    Updating of forecasts is essential for successful collaborative forecasting, especially for seasonal products. This paper discusses the results of a theoretical simulation and an empirical test of a proposed time-series forecast updating procedure. It involves a two-stage longitudinal case study of a toy supply chain. The theoretical simulation involves historical weekly consumer demand data for 122 toy products. The empirical test is then carried out in real-time with 291 toy products. The results show that the proposed forecast updating procedure: 1) reduced forecast errors of the annual…
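The abstract does not give the updating formulas, but the idea of revising an annual forecast from in-season observations can be illustrated with a deliberately simple ratio rule (a hedged sketch, not the paper's actual procedure):

```python
def update_annual_forecast(initial_annual: float,
                           sold_to_date: float,
                           expected_to_date: float) -> float:
    """Scale the annual forecast by the observed-vs-expected sales ratio so far.

    A toy re-forecasting rule: if early-season demand runs 20% above plan,
    the annual total is revised upward by the same factor.
    """
    if expected_to_date <= 0:
        raise ValueError("expected_to_date must be positive")
    return initial_annual * (sold_to_date / expected_to_date)
```

A real seasonal procedure would dampen this ratio and combine it with the historical demand profile, but the core mechanism — folding actual sales back into the remaining-season forecast — is the same.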

  10. Model validation: Correlation for updating

    Indian Academy of Sciences (India)

    Department of Mechanical Engineering, Imperial College of Science, ... If we are unable to obtain a satisfactory degree of correlation between the initial theoretical model and the test data, then it is extremely unlikely that any form of model updating (correcting the model to match the test data) will succeed. Thus, a successful ...

  11. Update on Nonsurgical Lung Volume Reduction Procedures

    Directory of Open Access Journals (Sweden)

    J. Alberto Neder

    2016-01-01

    Full Text Available There has been a surge of interest in endoscopic lung volume reduction (ELVR) strategies for advanced COPD. Valve implants, coil implants, biological LVR (BioLVR), bronchial thermal vapour ablation, and airway stents are used to induce lung deflation with the ultimate goal of improving respiratory mechanics and chronic dyspnea. Patients presenting with severe air trapping (e.g., inspiratory capacity/total lung capacity (IC/TLC) ≤ 25% predicted) and thoracic hyperinflation (TLC > 150% predicted) have the greatest potential to derive benefit from ELVR procedures. Pre-LVRS or ELVR assessment should ideally include cardiological evaluation, high-resolution CT scan, ventilation and perfusion scintigraphy, full pulmonary function tests, and cardiopulmonary exercise testing. ELVR procedures are currently available in selected Canadian research centers as part of ethically approved clinical trials. If a decision is made to offer an ELVR procedure, one-way valves are the first option in the presence of complete lobar exclusion and no significant collateral ventilation. When the fissure is not complete, when collateral ventilation is evident in heterogeneous emphysema, or when emphysema is homogeneous, coil implants or BioLVR (in that order) are the next logical alternatives.

  12. The update of the accounting procedures in Agricultural Cooperatives.

    Directory of Open Access Journals (Sweden)

    Rafael Enrique Viña Echevarría

    2014-06-01

    Full Text Available As part of the implementation of internal control in agricultural cooperatives under the standards established by the General Controller of the Republic, and the harmonization of accounting procedures with Cuban Accounting Standards, there is a need to update the accounting procedure manuals that guide and regulate flows, timing and the basis of registration under current legislation; this is the purpose of the discussion in this investigation. The results focus on the organizational dynamics of cooperatives, serving the agricultural cooperative sector and its relation to internal control and accounting management, guided by the economic and social policy guidelines of the Party and the Revolution, as well as on updating the procedure manuals. The study also revealed limitations in the application of internal control and accounting procedures under the regulations currently in force in Cuba, expressing the need to continue their development.

  13. Updating Small Generator Interconnection Procedures for New Market Conditions

    Energy Technology Data Exchange (ETDEWEB)

    Coddington, M.; Fox, K.; Stanfield, S.; Varnado, L.; Culley, T.; Sheehan, M.

    2012-12-01

    Federal and state regulators are faced with the challenge of keeping interconnection procedures updated against a backdrop of evolving technology, new codes and standards, and considerably transformed market conditions. This report is intended to educate policymakers and stakeholders on beneficial reforms that will keep interconnection processes efficient and cost-effective while maintaining a safe and reliable power system.

  14. Standard Review Plan Update and Development Program. Implementing Procedures Document

    Energy Technology Data Exchange (ETDEWEB)

    1992-05-01

    This implementing procedures document (IPD) was prepared for use in implementing tasks under the standard review plan update and development program (SRP-UDP). The IPD provides comprehensive guidance and detailed procedures for SRP-UDP tasks. The IPD is mandatory for contractors performing work for the SRP-UDP. It is guidance for the staff. At the completion of the SRP-UDP, the IPD will be revised (to remove the UDP aspects) and will replace NRR Office Letter No. 800 as long-term maintenance procedures.

  15. Partial updating of clinical practice guidelines often makes more sense than full updating: a systematic review on methods and the development of an updating procedure.

    Science.gov (United States)

    Becker, Monika; Neugebauer, Edmund A M; Eikermann, Michaela

    2014-01-01

    To conduct a systematic review of the methods used to determine when and how to update clinical practice guidelines (CPGs) and develop a procedure for updating CPGs. We searched MEDLINE, Embase, and the Cochrane Methodology Register for methodological publications on updating CPGs. Guideline development manuals were obtained from the Web sites of guideline-developing organizations. Using the information obtained from these records, a procedure for updating CPGs was developed. A total of 5,116 journal articles were screened, and seven articles met the criteria for inclusion. Forty-seven manuals were included; of these, eight included details about the methods used to update the guidelines. Most of the included publications focused on assessing whether the CPGs needed updating and not on how to update them. The developed procedure includes a systematic monitoring system and a scheduled process for updating the CPGs, which includes guidance on how to determine the type and scope of an update. Partial updating often makes more sense than updating the whole CPG because topics and recommendations differ in terms of the need for updating. Guideline developers should implement a systematic updating procedure that includes an ongoing monitoring system that is appropriate for the nature of the guideline topics and the capabilities of the developers. Copyright © 2014 Elsevier Inc. All rights reserved.

  16. Updated Arkansas Global Rice Model

    OpenAIRE

    Wailes, Eric J.; Chavez, Eddie C.

    2010-01-01

    The Arkansas Global Rice Model is based on a multi-country statistical simulation and econometric framework. The model consists of six subregions: the U.S., South Asia, North Asia and the Middle East, the Americas, Africa, and Europe. Each region comprises several countries, and each country model has a supply sector, a demand sector, and trade, stocks and price linkage equations. All equations used in this model were estimated using econometric procedures or identities. Est...

  17. An updated bleeding model to predict the risk of post-procedure bleeding among patients undergoing percutaneous coronary intervention: a report using an expanded bleeding definition from the National Cardiovascular Data Registry CathPCI Registry.

    Science.gov (United States)

    Rao, Sunil V; McCoy, Lisa A; Spertus, John A; Krone, Ronald J; Singh, Mandeep; Fitzgerald, Susan; Peterson, Eric D

    2013-09-01

    This study sought to develop a model that predicts bleeding complications using an expanded bleeding definition among patients undergoing percutaneous coronary intervention (PCI) in contemporary clinical practice. New knowledge about the importance of periprocedural bleeding combined with techniques to mitigate its occurrence and the inclusion of new data in the updated CathPCI Registry data collection forms encouraged us to develop a new bleeding definition and risk model to improve the monitoring and safety of PCI. Detailed clinical data from 1,043,759 PCI procedures at 1,142 centers from February 2008 through April 2011 participating in the CathPCI Registry were used to identify factors associated with major bleeding complications occurring within 72 h post-PCI. Risk models (full and simplified risk scores) were developed in 80% of the cohort and validated in the remaining 20%. Model discrimination and calibration were assessed in the overall population and among the following pre-specified patient subgroups: females, those older than 70 years of age, those with diabetes mellitus, those with ST-segment elevation myocardial infarction, and those who did not undergo in-hospital coronary artery bypass grafting. Using the updated definition, the rate of bleeding was 5.8%. The full model included 31 variables, and the risk score had 10. The full model had similar discriminatory value across pre-specified subgroups and was well calibrated across the PCI risk spectrum. The updated bleeding definition identifies important post-PCI bleeding events. Risk models that use this expanded definition provide accurate estimates of post-PCI bleeding risk, thereby better informing clinical decision making and facilitating risk-adjusted provider feedback to support quality improvement. Copyright © 2013 American College of Cardiology Foundation. Published by Elsevier Inc. All rights reserved.
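The relationship between a simplified integer risk score (the 10-variable version mentioned above) and a predicted bleeding probability is typically a logistic mapping. The sketch below is illustrative only: the intercept and slope are hypothetical placeholders, not the CathPCI model's published coefficients.

```python
import math

def bleeding_probability(score: int,
                         intercept: float = -4.6,
                         slope: float = 0.35) -> float:
    """Map an integer risk score to a predicted probability via the logistic function.

    The intercept/slope here are invented for illustration; a real score is
    calibrated so predicted rates match the observed event rate (~5.8% overall).
    """
    z = intercept + slope * score
    return 1.0 / (1.0 + math.exp(-z))
```

The point of such a score is that clinicians can sum a handful of integer points at the bedside, while the logistic calibration preserves a meaningful absolute risk estimate.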

  18. Model Updating Nonlinear System Identification Toolbox Project

    Data.gov (United States)

    National Aeronautics and Space Administration — ZONA Technology (ZONA) proposes to develop an enhanced model updating nonlinear system identification (MUNSID) methodology that utilizes flight data with...

  19. 2017 Updates: Earth Gravitational Model 2020

    Science.gov (United States)

    Barnes, D. E.; Holmes, S. A.; Ingalls, S.; Beale, J.; Presicci, M. R.; Minter, C.

    2017-12-01

    The National Geospatial-Intelligence Agency [NGA], in conjunction with its U.S. and international partners, has begun preliminary work on its next Earth Gravitational Model, to replace EGM2008. The new `Earth Gravitational Model 2020' [EGM2020] has an expected public release date of 2020, and will retain the same harmonic basis and resolution as EGM2008. As such, EGM2020 will be essentially an ellipsoidal harmonic model up to degree (n) and order (m) 2159, but will be released as a spherical harmonic model to degree 2190 and order 2159. EGM2020 will benefit from new data sources and procedures. Updated satellite gravity information from the GOCE and GRACE missions will better support the lower harmonics globally. Multiple new acquisitions (terrestrial, airborne and shipborne) of gravimetric data over specific geographical areas (Antarctica, Greenland, …) will provide improved global coverage and resolution over land, as well as for coastal and some ocean areas. Ongoing accumulation of satellite altimetry data, as well as improvements in the treatment of these data, will better define the marine gravity field, most notably in polar and near-coastal regions. NGA and partners are evaluating different approaches for optimally combining the new GOCE/GRACE satellite gravity models with the terrestrial data. These include the latest methods employing a full covariance adjustment. NGA is also working to systematically assess the quality of its entire gravimetry database, toward correcting biases and other egregious errors. Public release number 15-564

  20. Model Updating Nonlinear System Identification Toolbox Project

    Data.gov (United States)

    National Aeronautics and Space Administration — ZONA Technology proposes to develop an enhanced model updating nonlinear system identification (MUNSID) methodology by adopting the flight data with state-of-the-art...

  1. NODA for EPA's Updated Ozone Transport Modeling

    Science.gov (United States)

    Find EPA's NODA for the Updated Ozone Transport Modeling Data for the 2008 Ozone National Ambient Air Quality Standard (NAAQS), along with the Extension of the Public Comment Period on CSAPR for the 2008 NAAQS.

  2. Finite element model updating in structural dynamics using design sensitivity and optimisation

    OpenAIRE

    Calvi, Adriano

    1998-01-01

    Model updating is an important issue in engineering. In fact a well-correlated model provides for accurate evaluation of the structure loads and responses. The main objectives of the study were to exploit available optimisation programs to create an error localisation and updating procedure of finite element models that minimises the "error" between experimental and analytical modal data, addressing in particular the updating of large scale finite element models with se...

  3. Model parameter updating using Bayesian networks

    Energy Technology Data Exchange (ETDEWEB)

    Treml, C. A. (Christine A.); Ross, Timothy J.

    2004-01-01

    This paper outlines a model parameter updating technique for a new method of model validation using a modified model reference adaptive control (MRAC) framework with Bayesian Networks (BNs). The model parameter updating within this method is generic in the sense that the model/simulation to be validated is treated as a black box. It must have updateable parameters to which its outputs are sensitive, and those outputs must have metrics that can be compared to that of the model reference, i.e., experimental data. Furthermore, no assumptions are made about the statistics of the model parameter uncertainty, only upper and lower bounds need to be specified. This method is designed for situations where a model is not intended to predict a complete point-by-point time domain description of the item/system behavior; rather, there are specific points, features, or events of interest that need to be predicted. These specific points are compared to the model reference derived from actual experimental data. The logic for updating the model parameters to match the model reference is formed via a BN. The nodes of this BN consist of updateable model input parameters and the specific output values or features of interest. Each time the model is executed, the input/output pairs are used to adapt the conditional probabilities of the BN. Each iteration further refines the inferred model parameters to produce the desired model output. After parameter updating is complete and model inputs are inferred, reliabilities for the model output are supplied. Finally, this method is applied to a simulation of a resonance control cooling system for a prototype coupled cavity linac. The results are compared to experimental data.
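The counting-style adaptation of conditional probabilities from input/output pairs described above can be sketched as follows. This is a single-table simplification of the paper's Bayesian network (class and method names are illustrative, not from the paper):

```python
from collections import defaultdict

class ConditionalTable:
    """P(output | input) estimated from observed input/output pairs."""

    def __init__(self):
        # counts[input][output] -> number of times that pair was seen
        self.counts = defaultdict(lambda: defaultdict(int))

    def observe(self, x, y):
        """Record one model execution: input x produced output y."""
        self.counts[x][y] += 1

    def prob(self, y, given):
        """Relative-frequency estimate of P(y | given); 0.0 if never seen."""
        row = self.counts[given]
        total = sum(row.values())
        return row[y] / total if total else 0.0
```

Each simulation run calls `observe`, so the table's conditional probabilities are progressively refined toward the input values most likely to reproduce the experimental reference — the same adapt-and-infer loop the abstract describes for the full network.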

  4. A Provenance Tracking Model for Data Updates

    Directory of Open Access Journals (Sweden)

    Gabriel Ciobanu

    2012-08-01

    Full Text Available For data-centric systems, provenance tracking is particularly important when the system is open and decentralised, such as the Web of Linked Data. In this paper, a concise but expressive calculus which models data updates is presented. The calculus is used to provide an operational semantics for a system where data and updates interact concurrently. The operational semantics of the calculus also tracks the provenance of data with respect to updates. This provides a new formal semantics extending provenance diagrams which takes into account the execution of processes in a concurrent setting. Moreover, a sound and complete model for the calculus based on ideals of series-parallel DAGs is provided. The notion of provenance introduced can be used as a subjective indicator of the quality of data in concurrent interacting systems.
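The paper formalises provenance with a process calculus and series-parallel DAG models; a much-reduced key-value sketch (names invented here) shows the core bookkeeping — recording which updates touched each datum, in order:

```python
class ProvenanceStore:
    """Key-value store that records, per key, the ordered updates applied to it."""

    def __init__(self):
        self.data = {}
        self.history = {}  # key -> [update_id, ...] in application order

    def apply_update(self, key, value, update_id):
        """Apply an update and append its id to the key's provenance trail."""
        self.data[key] = value
        self.history.setdefault(key, []).append(update_id)

    def provenance(self, key):
        """Return the list of update ids that produced the current value."""
        return list(self.history.get(key, []))
```

The calculus in the paper goes further: concurrent updates yield a partial order (a DAG) rather than this sketch's total order, which is what makes the series-parallel model necessary.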

  5. 28 CFR 16.34 - Procedure to obtain change, correction or updating of identification records.

    Science.gov (United States)

    2010-07-01

    ... PRODUCTION OR DISCLOSURE OF MATERIAL OR INFORMATION Production of FBI Identification Records in Response to Written Requests by Subjects Thereof § 16.34 Procedure to obtain change, correction or updating of...

  6. Probability-Based Damage Detection of Structures Using Model Updating with Enhanced Ideal Gas Molecular Movement Algorithm

    OpenAIRE

    M. R. Ghasemi; R. Ghiasi; H. Varaee

    2017-01-01

    Model updating methods have received increasing attention for damage detection in structures based on measured modal parameters. Therefore, a probability-based damage detection (PBDD) procedure built on model updating is presented in this paper, in which a one-stage model-based damage identification technique based on the dynamic features of a structure is investigated. The presented framework uses a finite element updating method with a Monte Carlo simulation that ...

  7. Adjustment or updating of models

    Indian Academy of Sciences (India)

    While the model is defined in terms of these spatial parameters, .... (mode shapes defined at the n DOFs of a typical modal test in place of the complete N DOFs .... In these expressions, N = the number of degrees of freedom in the model, while N1 and N2 are the numbers of mass and stiffness elements to be corrected ...

  8. Update on procedure-related risks for prenatal diagnosis techniques

    DEFF Research Database (Denmark)

    Tabor, Ann; Alfirevic, Zarko

    2010-01-01

    Introduction: As a consequence of the introduction of effective screening methods, the number of invasive prenatal diagnostic procedures is steadily declining. The aim of this review is to summarize the risks related to these procedures. Material and Methods: Review of the literature. Results: Data from randomised controlled trials as well as from systematic reviews and a large national registry study are consistent with a procedure-related miscarriage rate of 0.5-1.0% for amniocentesis as well as for chorionic villus sampling (CVS). In single-center studies performance may be remarkably good due … invasive procedures calls for quality assurance and monitoring of operators' performance....

  9. OSPREY Model Development Status Update

    Energy Technology Data Exchange (ETDEWEB)

    Veronica J Rutledge

    2014-04-01

    During the processing of used nuclear fuel, volatile radionuclides will be discharged to the atmosphere if no recovery processes are in place to limit their release. The volatile radionuclides of concern are 3H, 14C, 85Kr, and 129I. Methods are being developed, via adsorption and absorption unit operations, to capture these radionuclides. It is necessary to model these unit operations to aid in the evaluation of technologies and in the future development of an advanced used nuclear fuel processing plant. A collaboration between Fuel Cycle Research and Development Offgas Sigma Team member INL and a NEUP grant including ORNL, Syracuse University, and Georgia Institute of Technology has been formed to develop off-gas models and support off-gas research. Georgia Institute of Technology is developing a fundamental-level model to describe the equilibrium and kinetics of the adsorption process, which is to be integrated with OSPREY. This report discusses the progress made on expanding OSPREY to multiple components and on the integration of macroscale and microscale models. Also included in this report is a brief OSPREY user guide.

  10. Model validation: Correlation for updating

    Indian Academy of Sciences (India)

    of refining the theoretical model which will be used for the design optimisation process. There are many different names given to the tasks involved in this refinement. .... slightly from the ideal line but in a systematic rather than a random fashion as this situation suggests that there is a specific characteristic responsible for the ...

  11. Adjustment or updating of models

    Indian Academy of Sciences (India)

    Department of Mechanical Engineering, Imperial College of Science, .... It is first necessary to decide upon the level of accuracy, or correctness which is sought from the adjustment of the initial model, and this will be heavily influenced by the eventual application of the ..... reviewing the degree of success attained.

  12. On-line Bayesian model updating for structural health monitoring

    Science.gov (United States)

    Rocchetta, Roberto; Broggi, Matteo; Huchet, Quentin; Patelli, Edoardo

    2018-03-01

    Fatigue-induced cracking is a dangerous failure mechanism affecting mechanical components subjected to alternating load cycles. System health monitoring should be adopted to identify cracks that can jeopardise the structure. Real-time damage detection may fail to identify cracks due to different sources of uncertainty that have been poorly assessed or even fully neglected. In this paper, a novel efficient and robust procedure is used for the detection of crack locations and lengths in mechanical components. A Bayesian model updating framework is employed, which allows relevant sources of uncertainty to be accounted for. The idea underpinning the approach is to identify the most probable crack consistent with the experimental measurements. To tackle the computational cost of the Bayesian approach, an emulator is adopted to replace the computationally costly Finite Element model. To improve the overall robustness of the procedure, different numerical likelihoods, measurement noises and imprecisions in the values of the model parameters are analysed and their effects quantified. The accuracy of the stochastic updating and the efficiency of the numerical procedure are discussed. An experimental aluminium frame and a numerical model of a typical car suspension arm are used to demonstrate the applicability of the approach.
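As a rough illustration of the Bayesian idea, the following sketch replaces the Finite Element model with an invented polynomial surrogate and identifies a single crack length with a Metropolis sampler. The surrogate function, prior bounds and noise level are all assumptions for illustration, not the paper's model.

```python
import math
import random

# Toy surrogate: first natural frequency (Hz) of a cracked component as a
# smooth function of crack length a (mm). Invented; it stands in for the
# emulator that replaces the costly Finite Element model.
def surrogate_frequency(a):
    return 120.0 * (1.0 - 0.002 * a - 0.0001 * a * a)

def log_likelihood(a, measured, sigma):
    return -0.5 * ((measured - surrogate_frequency(a)) / sigma) ** 2

def metropolis(measured, sigma=0.5, n_steps=5000, seed=1):
    rng = random.Random(seed)
    a = 10.0                        # starting crack length guess (mm)
    ll = log_likelihood(a, measured, sigma)
    samples = []
    for _ in range(n_steps):
        prop = a + rng.gauss(0.0, 1.0)
        if 0.0 <= prop <= 50.0:     # uniform prior on [0, 50] mm
            ll_prop = log_likelihood(prop, measured, sigma)
            if math.log(rng.random()) < ll_prop - ll:
                a, ll = prop, ll_prop
        samples.append(a)
    return samples

measured = surrogate_frequency(25.0)        # noise-free "measurement"
post = metropolis(measured)
est = sum(post[1000:]) / len(post[1000:])   # posterior mean after burn-in
```

Because the surrogate is monotone in the crack length, the posterior concentrates near the value that reproduces the measurement.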

  13. Update on procedure-related risks for prenatal diagnosis techniques

    DEFF Research Database (Denmark)

    Tabor, Ann; Alfirevic, Zarko

    2010-01-01

    Introduction: As a consequence of the introduction of effective screening methods, the number of invasive prenatal diagnostic procedures is steadily declining. The aim of this review is to summarize the risks related to these procedures. Material and Methods: Review of the literature. Results: Data...... from randomised controlled trials as well as from systematic reviews and a large national registry study are consistent with a procedure-related miscarriage rate of 0.5-1.0% for amniocentesis as well as for chorionic villus sampling (CVS). In single-center studies performance may be remarkably good due...... not be performed before 15 + 0 weeks' gestation. CVS on the other hand should not be performed before 10 weeks' gestation due to a possible increase in risk of limb reduction defects. Discussion: Experienced operators have a higher success rate and a lower complication rate. The decreasing number of prenatal...

  14. A review on model updating of joint structure for dynamic analysis purpose

    Directory of Open Access Journals (Sweden)

    Zahari S.N.

    2016-01-01

    Full Text Available Structural joints provide connections between structural elements (beams, plates, etc.) in order to construct a whole assembled structure. There are many types of structural joints, such as bolted joints, riveted joints and welded joints. Joint structures contribute significantly to the stiffness and dynamic behaviour of structures; hence the main objectives of this paper are to review methods of model updating for joint structures and to discuss guidelines for performing model updating for dynamic analysis purposes. This review first outlines some of the existing finite element modelling work on joint structures. Experimental modal analysis is then carried out to obtain modal parameters (natural frequencies and mode shapes) with which to validate the model and reduce the discrepancy between the experimental results and their simulated counterparts. Model updating is then performed to minimize the differences between the two sets of results. There are two methods of model updating: the direct method and the iterative method. Sensitivity analysis is employed, using SOL200 in NASTRAN, to select suitable updating parameters and avoid ill-conditioning problems. It is best to consider both geometrical and material properties in the updating procedure rather than choosing only a number of geometrical properties alone. The iterative method was chosen as the preferred model updating procedure because the physical meaning of the updated parameters is preserved, although it requires more computational effort than the direct method.
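The iterative method mentioned above can be illustrated on a one-degree-of-freedom surrogate. The sketch below is an assumed Gauss-Newton loop on a single stiffness parameter, not the NASTRAN SOL200 sensitivity procedure itself:

```python
import math

# One-DOF surrogate: adjust stiffness k until the analytical natural
# frequency matches a "measured" one. All numbers are illustrative.
def natural_freq(k, m=2.0):
    return math.sqrt(k / m) / (2.0 * math.pi)

def update_stiffness(f_measured, k=1.0e6, m=2.0, tol=1e-9, max_iter=50):
    for _ in range(max_iter):
        residual = f_measured - natural_freq(k, m)
        if abs(residual) < tol:
            break
        sensitivity = natural_freq(k, m) / (2.0 * k)  # df/dk for sqrt(k/m)/(2*pi)
        k += residual / sensitivity                   # Gauss-Newton step
    return k

k_true = 1.8e6
k_updated = update_stiffness(natural_freq(k_true))
```

In the scalar case the sensitivity-weighted step converges quadratically; the multi-parameter case replaces the division by a pseudo-inverse of the sensitivity matrix.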

  15. Dynamic model updating based on strain mode shape and natural frequency using hybrid pattern search technique

    Science.gov (United States)

    Guo, Ning; Yang, Zhichun; Wang, Le; Ouyang, Yan; Zhang, Xinping

    2018-05-01

    Aiming to provide a precise dynamic structural finite element (FE) model for dynamic strength evaluation in addition to dynamic analysis, a dynamic FE model updating method is presented that corrects the uncertain parameters of the FE model of a structure using strain mode shapes and natural frequencies. The strain mode shape, which is sensitive to local changes in the structure, is used instead of the displacement mode shape to enhance model updating. A coordinate strain modal assurance criterion is developed to evaluate the correlation level, at each coordinate, between the experimental and the analytical strain mode shapes. Moreover, the natural frequencies, which provide global information about the structure, are used to guarantee the accuracy of the modal properties of the global model. The weighted sum of the natural frequency residual and the coordinate strain modal assurance criterion residual is then used as the objective function in the proposed dynamic FE model updating procedure. A hybrid genetic/pattern-search optimization algorithm is adopted to perform the updating. A numerical simulation and a model updating experiment for a clamped-clamped beam were performed to validate the feasibility and effectiveness of the method. The results show that the proposed method updates the uncertain parameters with good robustness, and that the updated dynamic FE model of the beam structure, which correctly predicts both the natural frequencies and the local dynamic strains, is reliable for subsequent dynamic analysis and dynamic strength evaluation.
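The weighted objective described above can be sketched with the ordinary modal assurance criterion standing in for the coordinate strain MAC; the weights and data below are placeholders, not the paper's values:

```python
# Weighted objective: relative frequency residuals plus (1 - MAC) terms
# over paired analytical (a) and experimental (x) mode shapes.
def mac(phi_a, phi_x):
    num = sum(a * x for a, x in zip(phi_a, phi_x)) ** 2
    den = sum(a * a for a in phi_a) * sum(x * x for x in phi_x)
    return num / den

def objective(freqs_a, freqs_x, modes_a, modes_x, w_f=1.0, w_m=1.0):
    f_res = sum(((fa - fx) / fx) ** 2 for fa, fx in zip(freqs_a, freqs_x))
    m_res = sum(1.0 - mac(pa, px) for pa, px in zip(modes_a, modes_x))
    return w_f * f_res + w_m * m_res

# perfectly correlated model (mode shapes may differ only by scale):
j0 = objective([10.0, 25.0], [10.0, 25.0], [[1.0, 2.0]], [[2.0, 4.0]])
```

An optimizer (in the paper, a hybrid genetic/pattern-search algorithm) then drives this objective toward zero by varying the uncertain model parameters.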

  16. 2011 Updated Arkansas Global Rice Model

    OpenAIRE

    Wailes, Eric J.; Chavez, Eddie C.

    2011-01-01

    The Arkansas Global Rice Model is based on a multi-country statistical simulation and econometric framework. The model is disaggregated into five world regions: Africa, the Americas, Asia, Europe, and Oceania. Each region includes country models, each of which has a supply sector, a demand sector, and trade, stocks and price linkage equations. All equations used in this model are estimated using econometric procedures or are identities. Estimates are based upon a set of explanatory variables including exogen...

  17. 78 FR 58897 - Pipeline Safety: Administrative Procedures; Updates and Technical Corrections

    Science.gov (United States)

    2013-09-25

    ... from the committees given the potential impact on administrative enforcement processes. After.../API commented that, to ensure due process and basic fairness in both the administrative process and..., 199-25] RIN 2137-AE92 Pipeline Safety: Administrative Procedures; Updates and Technical Corrections...

  18. Novel procedure for characterizing nonlinear systems with memory: 2017 update

    Science.gov (United States)

    Nuttall, Albert H.; Katz, Richard A.; Hughes, Derke R.; Koch, Robert M.

    2017-05-01

    The present article discusses novel improvements in nonlinear signal processing made by the prime algorithm developer, Dr. Albert H. Nuttall, and co-authors, a consortium of research scientists from the Naval Undersea Warfare Center Division, Newport, RI. The algorithm, called the Nuttall-Wiener-Volterra or 'NWV' algorithm, is named for its principal contributors [1], [2], [3]. The NWV algorithm significantly reduces the computational workload for characterizing nonlinear systems with memory. Following this formulation, two measurement waveforms are required in order to characterize a specified nonlinear system under consideration: (1) an excitation input waveform, x(t) (the transmitted signal); and (2) a response output waveform, z(t) (the received signal). Given these two measurement waveforms for a given propagation channel, a 'kernel' or 'channel response', h = [h0, h1, h2, h3], between the two measurement points is computed via a least squares approach that optimizes modeled kernel values by performing a best fit between the measured response z(t) and a modeled response y(t). New techniques significantly diminish the exponential growth of the number of computed kernel coefficients at second and third order and alleviate the Curse of Dimensionality (COD) in order to realize practical nonlinear solutions of scientific and engineering interest.
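The least-squares best fit at the heart of the kernel estimation can be illustrated with a first-order (linear, memory-3) toy. The real NWV algorithm extends this fitting idea to second- and third-order Volterra kernels; the data and kernel below are invented:

```python
import random

# Recover a short linear memory kernel h from input x(t) and response z(t)
# by solving the normal equations (X^T X) h = X^T z over lagged inputs.
def solve(A, b):
    # Gauss-Jordan elimination with partial pivoting
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c:
                f = M[r][c] / M[c][c]
                M[r] = [vr - f * vc for vr, vc in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

def fit_kernel(x, z, memory=3):
    rows = range(memory - 1, len(x))
    X = [[x[t - i] for i in range(memory)] for t in rows]
    A = [[sum(row[i] * row[j] for row in X) for j in range(memory)]
         for i in range(memory)]
    b = [sum(X[r][i] * z[t] for r, t in enumerate(rows))
         for i in range(memory)]
    return solve(A, b)

rng = random.Random(0)
h_true = [0.5, -0.3, 0.1]
x = [rng.uniform(-1.0, 1.0) for _ in range(500)]
z = [0.0, 0.0] + [sum(h_true[i] * x[t - i] for i in range(3))
                  for t in range(2, 500)]
h_est = fit_kernel(x, z)
```

With noise-free linear data the fit recovers the kernel exactly; the COD problem the article addresses arises when the feature set is extended to products of lagged inputs at second and third order.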

  19. Resource Tracking Model Updates and Trade Studies

    Science.gov (United States)

    Chambliss, Joe; Stambaugh, Imelda; Moore, Michael

    2016-01-01

    The Resource Tracking Model has been updated to capture system manager and project manager inputs. Both the Trick/General Use Nodal Network Solver Resource Tracking Model (RTM) simulator and the RTM mass balance spreadsheet have been revised to address inputs from system managers and to refine the way mass balance is illustrated. The revisions to the RTM included the addition of a Plasma Pyrolysis Assembly (PPA) to recover hydrogen from Sabatier Reactor methane, which was vented in the prior version of the RTM. The effect of the PPA on the overall balance of resources in an exploration vehicle is illustrated in the increased recycle of vehicle oxygen. Case studies have been run to show the relative effect of performance changes on vehicle resources.

  20. Update on GOCART Model Development and Applications

    Science.gov (United States)

    Kim, Dongchul

    2013-01-01

    Recent results from the GOCART and GMI models are reported. They include: updated emission inventories for anthropogenic and volcanic sources, a satellite-derived vegetation index for seasonal variations of dust emission, MODIS-derived smoke AOT for assessing uncertainties in biomass-burning emissions, long-range transport of aerosol across the Pacific Ocean, and model studies of the multi-decadal trend of regional and global aerosol distributions from 1980 to 2010, volcanic aerosols, and nitrate aerosols. The document was presented at the 2013 AEROCENTER Annual Meeting held at the GSFC Visitors Center, May 31, 2013. The organizers of the meeting are posting the talks to the public AeroCenter website after the meeting.

  1. SAM Photovoltaic Model Technical Reference 2016 Update

    Energy Technology Data Exchange (ETDEWEB)

    Gilman, Paul [National Renewable Energy Laboratory (NREL), Golden, CO (United States); DiOrio, Nicholas A [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Freeman, Janine M [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Janzou, Steven [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Dobos, Aron [No longer NREL employee]; Ryberg, David [No longer NREL employee]

    2018-03-19

    This manual describes the photovoltaic performance model in the System Advisor Model (SAM) software, Version 2016.3.14 Revision 4 (SSC Version 160). It is an update to the 2015 edition of the manual, which describes the photovoltaic model in SAM 2015.1.30 (SSC 41). This new edition includes corrections of errors in the 2015 edition and descriptions of new features introduced in SAM 2016.3.14, including: a 3D shade calculator; a battery storage model; DC power optimizer loss inputs; a snow loss model; a plane-of-array irradiance input option from the weather file; support for sub-hourly simulations; self-shading that works with all four subarrays and uses the same algorithm for fixed arrays and one-axis tracking; a linear self-shading algorithm for thin-film modules; and loss percentages that replace derate factors. The photovoltaic performance model is one of the modules in the SAM Simulation Core (SSC), which is part of both SAM and the SAM SDK. SAM is a user-friendly desktop application for the analysis of renewable energy projects. The SAM SDK (Software Development Kit) is for developers writing their own renewable energy analysis software based on SSC. This manual is written for users of both SAM and the SAM SDK who want to learn more about the details of SAM's photovoltaic model.

  2. Updating parameters of the chicken processing line model

    DEFF Research Database (Denmark)

    Kurowicka, Dorota; Nauta, Maarten; Jozwiak, Katarzyna

    2010-01-01

    A mathematical model of chicken processing that quantitatively describes the transmission of Campylobacter on chicken carcasses from slaughter to chicken meat product has been developed in Nauta et al. (2005). This model was quantified with expert judgment. Recent availability of data allows...... updating parameters of the model to better describe processes observed in slaughterhouses. We propose Bayesian updating as a suitable technique to update expert judgment with microbiological data. Berrang and Dickens’s data are used to demonstrate performance of this method in updating parameters...... of the chicken processing line model....
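Bayesian updating of an expert-elicited parameter can be illustrated with a conjugate Beta-binomial toy. The prior and counts below are invented for illustration; this is not the chicken-processing model's actual likelihood or the Berrang and Dickens data:

```python
# A Beta(alpha, beta) prior on a transfer probability (from expert
# judgment) is updated with binomial count data from the processing line.
def beta_update(alpha, beta, positives, trials):
    return alpha + positives, beta + (trials - positives)

prior_alpha, prior_beta = 2.0, 8.0          # prior mean 2/(2+8) = 0.2
post_alpha, post_beta = beta_update(prior_alpha, prior_beta,
                                    positives=30, trials=100)
post_mean = post_alpha / (post_alpha + post_beta)
```

The posterior mean moves from the expert's 0.2 toward the observed rate of 0.3, with the shift weighted by the relative amounts of prior and observed information.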

  3. Model Updating Nonlinear System Identification Toolbox, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — ZONA Technology (ZONA) proposes to develop an enhanced model updating nonlinear system identification (MUNSID) methodology that utilizes flight data with...

  4. MARMOT update for oxide fuel modeling

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Yongfeng [Idaho National Lab. (INL), Idaho Falls, ID (United States); Schwen, Daniel [Idaho National Lab. (INL), Idaho Falls, ID (United States); Chakraborty, Pritam [Idaho National Lab. (INL), Idaho Falls, ID (United States); Jiang, Chao [Idaho National Lab. (INL), Idaho Falls, ID (United States); Aagesen, Larry [Idaho National Lab. (INL), Idaho Falls, ID (United States); Ahmed, Karim [Idaho National Lab. (INL), Idaho Falls, ID (United States); Jiang, Wen [Idaho National Lab. (INL), Idaho Falls, ID (United States); Biner, Bulent [Idaho National Lab. (INL), Idaho Falls, ID (United States); Bai, Xianming [Virginia Polytechnic Inst. and State Univ. (Virginia Tech), Blacksburg, VA (United States); Tonks, Michael [Pennsylvania State Univ., University Park, PA (United States); Millett, Paul [Univ. of Arkansas, Fayetteville, AR (United States)

    2016-09-01

    This report summarizes the lower-length-scale research and development progress in FY16 at Idaho National Laboratory in developing mechanistic materials models for oxide fuels, in parallel with the development of the MARMOT code, which will be summarized in a separate report. This effort is a critical component of the microstructure-based fuel performance modeling approach supported by the Fuels Product Line in the Nuclear Energy Advanced Modeling and Simulation (NEAMS) program. The progress can be classified into three categories: 1) development of materials models to be used in engineering-scale fuel performance modeling regarding the effect of lattice defects on thermal conductivity, 2) development of modeling capabilities for mesoscale fuel behaviors including stage-3 gas release, grain growth, high burn-up structure, fracture and creep, and 3) improved understanding of the underlying materials science gained by calculating the anisotropic grain boundary energies in UO$_2$ and obtaining thermodynamic data for solid fission products. Many of these topics are still under active development; they are described in the report in appropriate detail. For some topics, separate reports are generated in parallel, as stated in the text. The accomplishments have led to a better understanding of fuel behavior and enhance the capability of the MOOSE-BISON-MARMOT toolkit.

  5. Updating of a dynamic finite element model from the Hualien scale model reactor building

    International Nuclear Information System (INIS)

    Billet, L.; Moine, P.; Lebailly, P.

    1996-08-01

    The forces occurring at the soil-structure interface of a building generally have a large influence on the way the building reacts to an earthquake. One can be tempted to characterise these forces more accurately by updating a model of the structure. However, this procedure requires an updating method suitable for dissipative models, since significant damping can be observed at the soil-structure interface of buildings. Such a method is presented here. It is based on the minimization of a mechanical energy built from the difference between eigendata calculated by the model and eigendata obtained from experimental tests on the real structure. An experimental validation of this method is then proposed on a model of the HUALIEN scale-model reactor building. This scale model, built on the HUALIEN site in TAIWAN, is devoted to the study of soil-structure interaction. The updating concerned the soil impedances, modelled by a layer of springs and viscous dampers attached to the building foundation. Good agreement was found between the eigenmodes and dynamic responses calculated by the updated model and the corresponding experimental data. (authors). 12 refs., 3 figs., 4 tabs

  6. Experimental Studies on Finite Element Model Updating for a Heated Beam-Like Structure

    Directory of Open Access Journals (Sweden)

    Kaipeng Sun

    2015-01-01

    Full Text Available An experimental study was made of the identification procedure for time-varying modal parameters and the finite element model updating technique for a beam-like thermal structure in both steady and unsteady high-temperature environments. An improved time-varying autoregressive method was first proposed to extract the instantaneous natural frequencies of the structure in the unsteady high-temperature environment. Based on the identified modal parameters, a finite element model of the structure was then updated using a Kriging meta-model and an optimization-based finite element model updating method. The temperature-dependent parameters to be updated were expressed as low-order polynomials of the temperature increase, and the finite element model updating problem was solved by updating several coefficients of the polynomials. The experimental results demonstrated the effectiveness of the time-varying modal parameter identification method and showed that the instantaneous natural frequencies of the updated model tracked the trends of the measured values with high accuracy.
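The autoregressive idea behind the identification can be illustrated with a fixed-window AR(2) fit, a strong simplification of the improved time-varying autoregressive method used in the study; the sampled signal below is synthetic:

```python
import math

# Fit y[t] = a1*y[t-1] + a2*y[t-2] by least squares; the complex pole pair
# of the AR(2) model gives the oscillation frequency of the sampled data.
def ar2_frequency(y, dt):
    s11 = sum(v * v for v in y[1:-1])
    s22 = sum(v * v for v in y[:-2])
    s12 = sum(a * b for a, b in zip(y[1:-1], y[:-2]))
    b1 = sum(a * b for a, b in zip(y[2:], y[1:-1]))
    b2 = sum(a * b for a, b in zip(y[2:], y[:-2]))
    det = s11 * s22 - s12 * s12
    a1 = (b1 * s22 - b2 * s12) / det
    a2 = (s11 * b2 - s12 * b1) / det
    # poles of 1 - a1*z^-1 - a2*z^-2: angle of the pole = omega * dt
    re, im2 = a1 / 2.0, -a2 - a1 * a1 / 4.0
    return math.atan2(math.sqrt(im2), re) / (2.0 * math.pi * dt)

dt = 0.001
y = [math.sin(2.0 * math.pi * 40.0 * k * dt) for k in range(200)]
f_est = ar2_frequency(y, dt)
```

Sliding this fit over short windows, with time-varying coefficients, yields the instantaneous natural frequencies tracked in the paper.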

  7. A NEW PROCEDURE FOR FORESTRY DATABASE UPDATING WITH GIS AND REMOTE SENSING

    Directory of Open Access Journals (Sweden)

    Luis M. T. de Carvalho

    2003-07-01

    Full Text Available The aim of this study was to develop an automated, simple and flexible procedure for updating a raster-based forestry database. Four modules compose the procedure: (1) location of changed sites, (2) quantification of changed area, (3) identification of the new land cover, and (4) database updating. Firstly, a difference image is decomposed with wavelet transforms in order to extract changed sites. Secondly, segmentation is performed on the difference image. Thirdly, each changed pixel or each segmented region is assigned to the land cover class with the highest probability of membership. Then, the output is used to update the GIS layer where changes took place. This procedure was less sensitive to geometric and radiometric misregistration, and less dependent on ground truth, when compared with post-classification comparison and direct multidate classification.
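The four modules can be mimicked on toy rasters, with simple thresholding standing in for the wavelet-based change detection; the data, threshold and class names are invented:

```python
# Nested lists stand in for GIS raster layers.
def update_forest_db(before, after, labels, new_class, threshold=10):
    changed = [[abs(a - b) > threshold for a, b in zip(ra, rb)]
               for ra, rb in zip(after, before)]                # module 1
    area = sum(sum(row) for row in changed)                     # module 2
    updated = [[new_class if c else lab for c, lab in zip(rc, rl)]
               for rc, rl in zip(changed, labels)]              # modules 3-4
    return updated, area

before = [[100, 100], [100, 100]]
after = [[100, 30], [100, 100]]     # one pixel cleared
labels = [["forest", "forest"], ["forest", "forest"]]
updated, area = update_forest_db(before, after, labels, "bare")
```

In the actual procedure, module 3 assigns each changed region the land cover class with the highest membership probability rather than a single fixed class.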

  8. [Fourth update of the guidelines on determination of irreversible brain death. Procedural course and amendments].

    Science.gov (United States)

    Tonn, J-C

    2016-02-01

    In 2015 the fourth update of the directive for the determination of the definitive, irreversible loss of complete function of the cerebrum, cerebellum and brainstem was passed and came into force. This was preceded by several hearings of all professional societies and associations involved, as well as a 2-year advisory process by an interdisciplinary working party. The directive is intended for the determination of irreversible brain death in the field of intensive care medicine and is independent of individual decisions about organ donation. Not only an update based on scientific data but also a clarification of the individual procedures and a clear definition of the medical qualifications required were worked out. Furthermore, the technical procedures of computed tomography (CT) angiography and duplex sonography were adopted for the diagnosis of cerebral circulatory arrest. The new directive, including comprehensive explanatory notes, was approved by the German Federal Ministry of Health and published by the German Medical Council (Bundesärztekammer).

  9. General Separations Area (GSA) Groundwater Flow Model Update: Hydrostratigraphic Data

    Energy Technology Data Exchange (ETDEWEB)

    Bagwell, L. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Bennett, P. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Flach, G. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)

    2017-02-21

    This document describes the assembly, selection, and interpretation of hydrostratigraphic data for input to an updated groundwater flow model for the General Separations Area (GSA; Figure 1) at the Department of Energy’s (DOE) Savannah River Site (SRS). This report is one of several discrete but interrelated tasks that support development of an updated groundwater model (Bagwell and Flach, 2016).

  10. Numerical model updating technique for structures using firefly algorithm

    Science.gov (United States)

    Sai Kubair, K.; Mohan, S. C.

    2018-03-01

    Numerical model updating is a technique for bringing numerical models of structures in civil, mechanical, automotive, marine and aerospace engineering into agreement with measurements. The basic concept behind this technique is to update the numerical model so that it closely matches experimental data obtained from real or prototype test structures. The present work involves the development of a numerical model using MATLAB as a computational tool, with mathematical equations that define the experimental model. The firefly algorithm is used as the optimization tool in this study. In the updating process, a response parameter of the structure has to be chosen, which helps to correlate the numerical model with the experimental results. The variables for the updating can be either material or geometrical properties of the model, or both. In this study, to verify the proposed technique, a cantilever beam is analyzed for its tip deflection and a space frame is analyzed for its natural frequencies. Both models are updated with their respective response values obtained from experimental results. The numerical results after updating show that a close correspondence can be achieved between the experimental and numerical models.
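A one-parameter version of such a firefly update can be sketched on the textbook cantilever tip-deflection formula d = P*L**3 / (3*E*I); the constants, search bounds and algorithm settings below are illustrative assumptions, not the paper's setup:

```python
import math
import random

P, L, I = 1000.0, 2.0, 8.0e-6       # load (N), length (m), inertia (m^4)

def tip_deflection(E):
    return P * L ** 3 / (3.0 * E * I)

def brightness(E, d_meas):
    return -abs(tip_deflection(E) - d_meas)   # brighter = smaller error

def firefly_search(d_meas, n=15, iters=80, seed=3):
    rng = random.Random(seed)
    pop = [rng.uniform(1e10, 4e11) for _ in range(n)]   # candidate E values
    beta0, gamma, alpha = 1.0, 1e-22, 2e10
    for _ in range(iters):
        bright = [brightness(E, d_meas) for E in pop]
        for i in range(n):
            for j in range(n):
                if bright[j] > bright[i]:   # move i toward brighter j
                    beta = beta0 * math.exp(-gamma * (pop[i] - pop[j]) ** 2)
                    pop[i] += (beta * (pop[j] - pop[i])
                               + alpha * (rng.random() - 0.5))
                    pop[i] = max(pop[i], 1e9)   # keep E physical
        alpha *= 0.95                   # cool the random walk
    return max(pop, key=lambda E: brightness(E, d_meas))

E_true = 2.1e11
E_est = firefly_search(tip_deflection(E_true))
```

The attractiveness term pulls dimmer fireflies toward brighter ones, while the cooled random term keeps exploring the neighbourhood of the best candidate.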

  11. Reservoir management under geological uncertainty using fast model update

    NARCIS (Netherlands)

    Hanea, R.; Evensen, G.; Hustoft, L.; Ek, T.; Chitu, A.; Wilschut, F.

    2015-01-01

    Statoil is implementing "Fast Model Update (FMU)," an integrated and automated workflow for reservoir modeling and characterization. FMU connects all steps and disciplines from seismic depth conversion to prediction and reservoir management taking into account relevant reservoir uncertainty. FMU

  12. 76 FR 35886 - Notice Updating Procedural Schedule for Licensing; FirstLight Hydro Generating Company, City of...

    Science.gov (United States)

    2011-06-20

    ... DEPARTMENT OF ENERGY Federal Energy Regulatory Commission [Project No. 2662-012; Project No. 12968-001] Notice Updating Procedural Schedule for Licensing; FirstLight Hydro Generating Company, City of... Hydroelectric Project No. 2662 and Scotland Hydroelectric Project No. 12968 has been updated. Subsequent...

  13. A comparison of updating algorithms for large $N$ reduced models

    CERN Document Server

    Pérez, Margarita García; Keegan, Liam; Okawa, Masanori; Ramos, Alberto

    2015-01-01

    We investigate Monte Carlo updating algorithms for simulating $SU(N)$ Yang-Mills fields on a single-site lattice, such as for the Twisted Eguchi-Kawai model (TEK). We show that performing only over-relaxation (OR) updates of the gauge links is a valid simulation algorithm for the Fabricius and Haan formulation of this model, and that this decorrelates observables faster than using heat-bath updates. We consider two different methods of implementing the OR update: updating the whole $SU(N)$ matrix at once, or iterating through $SU(2)$ subgroups of the $SU(N)$ matrix. We find the same critical exponent in both cases, and only a slight difference between the two.

  14. Updates to the Demographic and Spatial Allocation Models to ...

    Science.gov (United States)

    EPA announced the availability of the draft report, Updates to the Demographic and Spatial Allocation Models to Produce Integrated Climate and Land Use Scenarios (ICLUS), for a 30-day public comment period. The ICLUS version 2 (v2) modeling tool furthered land-change modeling by providing nationwide housing development scenarios up to 2100. ICLUS v2 includes updated population and land use data sets and addresses limitations identified in ICLUS v1 in both the migration and spatial allocation models. The companion user guide describes the development of ICLUS v2 and the updates that were made to the original data sets and to the demographic and spatial allocation models. The GIS tool enables users to run SERGoM with the population projections developed for the ICLUS project and allows users to modify the spatial allocation of housing density across the landscape.

  15. Effectiveness of radio waves application in modern general dental procedures: An update.

    Science.gov (United States)

    Qureshi, Arslan; Kellesarian, Sergio Varela; Pikos, Michael A; Javed, Fawad; Romanos, Georgios E

    2017-01-01

    The purpose of the present study was to review indexed literature and provide an update on the effectiveness of high-frequency radio waves (HRW) application in modern general dentistry procedures. Indexed databases were searched to identify articles that assessed the efficacy of radio waves in dental procedures. Radiosurgery is a refined form of electrosurgery that uses waves of electrons at a radiofrequency ranging between 2 and 4 MHz. Radio waves have also been reported to cause much less thermal damage to peripheral tissues compared with electrosurgery or carbon dioxide laser-assisted surgery. Formation of reparative dentin in direct pulp capping procedures is also significantly higher when HRW are used to achieve hemostasis in teeth with minimally exposed dental pulps compared with traditional techniques for achieving hemostasis. A few case reports have reported that radiosurgery is useful for procedures such as gingivectomy and gingivoplasty, stage-two surgery for implant exposure, operculectomy, oral biopsy, and frenectomy. Radiosurgery is a relatively modern therapeutic methodology for the treatment of trigeminal neuralgia; however, its long-term efficacy is unclear. Radio waves can also be used for periodontal procedures, such as gingivectomies, coronal flap advancement, harvesting palatal grafts for periodontal soft tissue grafting, and crown lengthening. Although there are a limited number of studies in indexed literature regarding the efficacy of radio waves in modern dentistry, the available evidence shows that use of radio waves is a modernization in clinical dentistry that might be a contemporary substitute for traditional clinical dental procedures.

  16. Model Updating Nonlinear System Identification Toolbox, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — ZONA Technology proposes to develop an enhanced model updating nonlinear system identification (MUNSID) methodology by adopting the flight data with state-of-the-art...

  17. Aqua/Aura Updated Inclination Adjust Maneuver Performance Prediction Model

    Science.gov (United States)

    Boone, Spencer

    2017-01-01

    This presentation will discuss the updated Inclination Adjust Maneuver (IAM) performance prediction model that was developed for Aqua and Aura following the 2017 IAM series. This updated model uses statistical regression methods to identify potential long-term trends in maneuver parameters, yielding improved predictions when re-planning past maneuvers. The presentation has been reviewed and approved by Eric Moyer, ESMO Deputy Project Manager.

  18. Typical NRC inspection procedures for model plant

    International Nuclear Information System (INIS)

    Blaylock, J.

    1984-01-01

    A summary of NRC inspection procedures for a model LEU fuel fabrication plant is presented. Procedures and methods for combining inventory data, seals, measurement techniques, and statistical analysis are emphasized.

  19. Preconditioner Updates Applied to CFD Model Problems

    Czech Academy of Sciences Publication Activity Database

    Birken, P.; Duintjer Tebbens, Jurjen; Meister, A.; Tůma, Miroslav

    2008-01-01

    Roč. 58, č. 11 (2008), s. 1628-1641 ISSN 0168-9274 R&D Projects: GA AV ČR 1ET400300415; GA AV ČR KJB100300703 Institutional research plan: CEZ:AV0Z10300504 Keywords : finite volume methods * update preconditioning * Krylov subspace methods * Euler equations * conservation laws Subject RIV: BA - General Mathematics Impact factor: 0.952, year: 2008

  20. Finite element model updating of natural fibre reinforced composite structure in structural dynamics

    Directory of Open Access Journals (Sweden)

    Sani M.S.M.

    2016-01-01

    Full Text Available Model updating is a process of adjusting certain parameters of a finite element (FE) model in order to reduce the discrepancy between the analytical predictions of the FE model and experimental results. Finite element model updating is considered an important field of study, as practical applications of the finite element method often show discrepancies from test results. The aim of this research is to perform a model updating procedure on a composite structure and to improve the presumed geometrical and material properties of the tested composite structure in the finite element prediction. The composite structure concerned in this study is a plate of kenaf-fibre-reinforced epoxy. The modal properties (natural frequencies, mode shapes and damping ratios) of the kenaf fibre structure are determined using both experimental modal analysis (EMA) and finite element analysis (FEA). In EMA, modal testing is carried out using an impact hammer test, while normal mode analysis in FEA is carried out using MSC. Nastran/Patran software. Correlation of the data is carried out before optimizing the data from FEA. Several parameters are considered and selected for the model updating procedure.

  1. Online updating procedures for a real-time hydrological forecasting system

    International Nuclear Information System (INIS)

    Kahl, B; Nachtnebel, H P

    2008-01-01

    Rainfall-runoff models can explain major parts of the natural runoff pattern but never simulate the observed hydrograph exactly. The errors stem from various sources of uncertainty embedded in the model forecasting system: measurement errors, the selected time period for calibration and validation, parametric uncertainty and model imprecision. In online forecasting systems, forecasted input data are used, which generates an additional major uncertainty for the hydrological forecasting system. Techniques for partially compensating these uncertainties are investigated in the present study in a medium-sized catchment in the Austrian part of the Danube basin. The catchment area is about 1000 km². The forecasting system consists of a semi-distributed continuous rainfall-runoff model that uses quantitative precipitation and temperature forecasts. To provide adequate system states at the beginning of the forecasting period, continuous simulation is required, especially in winter. In this study two online updating methods are used and combined to enhance the runoff forecasts. The first method updates the system states at the beginning of the forecasting period by changing the precipitation input. The second method is an autoregressive error model, which is used to eliminate systematic errors in the model output. In combination the two methods work well together, as each is more effective in different runoff situations.
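
    The second method, an autoregressive error model on the model output, can be sketched as follows. The AR(1) form, the synthetic residual series and the discharge values are illustrative assumptions, not the study's actual configuration:

```python
import numpy as np

def ar1_error_correction(residuals, forecast, horizon):
    """Autoregressive error model for correcting systematic forecast errors.

    residuals: past observed-minus-simulated errors (most recent last).
    forecast:  raw model forecast for the next `horizon` steps.
    Estimates an AR(1) coefficient from the residual series and
    propagates the last known error forward to correct the forecast.
    """
    r = np.asarray(residuals, dtype=float)
    # Least-squares AR(1) coefficient: e_t ~ a * e_{t-1}
    a = np.dot(r[1:], r[:-1]) / np.dot(r[:-1], r[:-1])
    correction = np.array([r[-1] * a ** (k + 1) for k in range(horizon)])
    return np.asarray(forecast, dtype=float) + correction

# A persistent positive bias of about 2 m3/s decays geometrically
# into the corrected forecast.
resid = [2.1, 1.9, 2.0, 2.05, 1.95]
corrected = ar1_error_correction(resid, forecast=[10.0, 11.0, 12.0], horizon=3)
```

    As the AR coefficient is below one, the correction fades with lead time, so the updated forecast relaxes back to the raw model output for long horizons.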

  2. Reservoir structural model updating using the Ensemble Kalman Filter

    Energy Technology Data Exchange (ETDEWEB)

    Seiler, Alexandra

    2010-09-15

    In reservoir characterization, a large emphasis is placed on risk management and uncertainty assessment, and the dangers of basing decisions on a single base-case reservoir model are widely recognized. In recent years, statistical methods for assisted history matching have gained popularity for providing integrated models with quantified uncertainty, conditioned on all available data. Structural modeling is the first step in a reservoir modeling work flow and consists in defining the geometrical framework of the reservoir, based on the information from seismic surveys and well data. Large uncertainties are typically associated with the processing and interpretation of seismic data. However, the structural model is often fixed to a single interpretation in history-matching work flows due to the complexity of updating the structural model and the related reservoir grid. This thesis presents a method that accounts for the uncertainties in the structural model and continuously updates the model and the related uncertainties by assimilation of production data using the Ensemble Kalman Filter (EnKF). We consider uncertainties in the depth of the reservoir horizons and in the fault geometry, and assimilate production data such as oil production rate, gas-oil ratio and water-cut. In the EnKF model-updating work flow, an ensemble of reservoir models, expressing the model uncertainty explicitly, is created. We present a parameterization that allows different realizations of the structural model to be generated, accounting for the uncertainties in faults and horizons, and that maintains consistency throughout the reservoir characterization project, from the structural model to the prediction of production profiles. The uncertainty in the depth of the horizons is parameterized as simulated depth surfaces, the fault position as a displacement vector and the fault throw as a throw-scaling factor. In the EnKF, the model parameters and state variables are updated sequentially in
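
    A minimal sketch of a stochastic EnKF analysis step of the kind described above, with a linear observation operator. The dimensions, parameter values and observation error below are illustrative assumptions, not the thesis's actual reservoir setup:

```python
import numpy as np

def enkf_update(ensemble, H, obs, obs_err_std, rng):
    """One stochastic EnKF analysis step.

    ensemble: (n_state, n_ens) matrix of parameter/state realisations
              (e.g. horizon depths, fault displacement, throw factor).
    H:        (n_obs, n_state) linear observation operator.
    obs:      (n_obs,) observed production data (e.g. rates, water-cut).
    """
    n_ens = ensemble.shape[1]
    A = ensemble - ensemble.mean(axis=1, keepdims=True)   # ensemble anomalies
    P = A @ A.T / (n_ens - 1)                             # sample covariance
    R = np.eye(len(obs)) * obs_err_std**2
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)          # Kalman gain
    # Perturb observations so the analysis ensemble keeps a consistent spread.
    D = obs[:, None] + rng.normal(0.0, obs_err_std, (len(obs), n_ens))
    return ensemble + K @ (D - H @ ensemble)

rng = np.random.default_rng(0)
ens = rng.normal(5.0, 1.0, (2, 50))          # 2 parameters, 50 members
H = np.array([[1.0, 0.0]])                   # observe the first parameter only
updated = enkf_update(ens, H, np.array([6.0]), 0.2, rng)
```

    After the update, the ensemble mean of the observed parameter shifts toward the observation and its spread contracts, while unobserved parameters move only through their sample correlations.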

  3. Finite element model updating of a small steel frame using neural networks

    International Nuclear Information System (INIS)

    Zapico, J L; González, M P; Alonso, R; González-Buelga, A

    2008-01-01

    This paper presents an experimental and analytical dynamic study of a small-scale steel frame. The experimental model was physically built and dynamically tested on a shaking table in a series of different configurations obtained from the original one by changing the mass and by causing structural damage. Finite element modelling and parameterization with physical meaning is iteratively tried for the original undamaged configuration. The finite element model is updated through a neural network, the natural frequencies of the model being the net input. The updating process is made more accurate and robust by using a regressive procedure, which constitutes an original contribution of this work. A novel simplified analytical model has been developed to evaluate the reduction of bending stiffness of the elements due to damage. The experimental results of the rest of the configurations have been used to validate both the updated finite element model and the analytical one. The statistical properties of the identified modal data are evaluated. From these, the statistical properties and a confidence interval for the estimated model parameters are obtained by using the Latin Hypercube sampling technique. The results obtained are successful: the updated model accurately reproduces the low modes identified experimentally for all configurations, and the statistical study of the transmission of errors yields a narrow confidence interval for all the identified parameters

  4. Numerical modelling of mine workings: annual update 1999/2000.

    CSIR Research Space (South Africa)

    Lightfoot, N

    1999-09-01

    ...chapters of the guidebook. In order to download the guidebook, a visitor needs a password, which will be issued upon receipt of a nominal charge. Updated Edition of Numerical Modelling of Mine Workings. Enabling Output 1: Updates to the current... of rock mass ratings. 4.3.3.2 Quadratic model: the figure describing the quadratic backfill material model has been corrected. Chapter 5, Solution Methods; 5.2, Analytical Methods; and 5.3, Computational Methods: use of the words slot, crack and slit...

  5. Real Time Updating in Distributed Urban Rainfall Runoff Modelling

    DEFF Research Database (Denmark)

    Borup, Morten; Madsen, Henrik

    are equipped with basins and automated structures that allow for a large degree of control of the systems, but in order to do this optimally it is required to know what is happening throughout the system. For this task models are needed, due to the large scale and complex nature of the systems. The physically...... that are being updated from system measurements was studied. The results showed that the fact alone that it takes time for rainfall data to travel the distance between gauges and catchments has such a big negative effect on the forecast skill of updated models, that it can justify the choice of even very...... when it was used to update the water level in multiple upstream basins. This method is, however, not capable of utilising the spatial correlations in the errors to correct larger parts of the models. To accommodate this a method was developed for correcting the slow changing inflows to urban drainage...

  6. Circumplex model of marital and family systems: VI. Theoretical update.

    Science.gov (United States)

    Olson, D H; Russell, C S; Sprenkle, D H

    1983-03-01

    This paper updates the theoretical work on the Circumplex Model and provides revised and new hypotheses. Similarities and contrasts to the Beavers Systems Model are made along with comments regarding Beavers and Voeller's critique. FACES II, a newly revised assessment tool, provides both "perceived" and "ideal" family assessment that is useful empirically and clinically.

  7. Procedural Personas for Player Decision Modeling and Procedural Content Generation

    DEFF Research Database (Denmark)

    Holmgård, Christoffer

    2016-01-01

    How can player models and artificially intelligent (AI) agents be useful in early-stage iterative game and simulation design? One answer may be as ways of generating synthetic play-test data, before a game or level has ever seen a player, or when the sampled amount of play test data is very low. ... This thesis explores methods for creating low-complexity, easily interpretable, generative AI agents for use in game and simulation design. Based on insights from decision theory and behavioral economics, the thesis investigates how player decision making styles may be defined, operationalised, and measured. ... These methods for constructing procedural personas are then integrated with existing procedural content generation systems, acting as critics that shape the output of these systems, optimizing generated content for different personas and by extension, different kinds of players and their decision making styles.

  8. A model of procedural and distributive fairness

    NARCIS (Netherlands)

    Krawczyk, M.W.

    2007-01-01

    This paper presents a new model aimed at predicting behavior in games involving a randomized allocation procedure. It is designed to capture the relative importance and interaction between procedural justice (defined crudely in terms of the share of one's expected outcome in the sum of all

  9. Crushed-salt constitutive model update

    Energy Technology Data Exchange (ETDEWEB)

    Callahan, G.D.; Loken, M.C.; Mellegard, K.D. [RE/SPEC Inc., Rapid City, SD (United States); Hansen, F.D. [Sandia National Labs., Albuquerque, NM (United States)

    1998-01-01

    Modifications to the constitutive model used to describe the deformation of crushed salt are presented in this report. Two mechanisms--dislocation creep and grain boundary diffusional pressure solutioning--defined previously but used separately are combined to form the basis for the constitutive model governing the deformation of crushed salt. The constitutive model is generalized to represent three-dimensional states of stress. New creep consolidation tests are combined with an existing database that includes hydrostatic consolidation and shear consolidation tests conducted on Waste Isolation Pilot Plant and southeastern New Mexico salt to determine material parameters for the constitutive model. Nonlinear least-squares model fitting to data from the shear consolidation tests and a combination of the shear and hydrostatic consolidation tests produced two sets of material parameter values for the model. The change in material parameter values from test group to test group indicates the empirical nature of the model but demonstrates improvement over earlier work with the previous models. Key improvements are the ability to capture lateral strain reversal and better resolve parameter values. To demonstrate the predictive capability of the model, each parameter value set was used to predict each of the tests in the database. Based on the fitting statistics and the ability of the model to predict the test data, the model appears to capture the creep consolidation behavior of crushed salt quite well.
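
    The nonlinear least-squares fitting step can be illustrated with a toy consolidation law. The exponential form, parameter names and synthetic data below are assumptions for illustration only, not the report's actual constitutive equations: since the time constant enters nonlinearly, it is scanned over a grid while the amplitude, which enters linearly, is solved in closed form:

```python
import numpy as np

def fit_consolidation(t, strain, tau_grid):
    """Least-squares fit of an illustrative creep-consolidation law
    strain(t) = e_inf * (1 - exp(-t / tau)).

    tau is swept over a grid; for each candidate, the best-fitting
    e_inf has a closed-form normal-equation solution.
    """
    best = (np.inf, None, None)
    for tau in tau_grid:
        basis = 1.0 - np.exp(-t / tau)
        e_inf = np.dot(basis, strain) / np.dot(basis, basis)  # closed form
        sse = np.sum((strain - e_inf * basis) ** 2)           # fit statistic
        if sse < best[0]:
            best = (sse, e_inf, tau)
    return best[1], best[2]

t = np.linspace(0.1, 10.0, 40)
observed = 0.25 * (1.0 - np.exp(-t / 2.0))   # synthetic noise-free test data
e_inf, tau = fit_consolidation(t, observed, np.linspace(0.5, 5.0, 451))
```

    The same fit statistic (sum of squared residuals) is what distinguishes the two parameter-value sets discussed in the abstract; here the noise-free data are recovered exactly.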

  10. Crushed-salt constitutive model update

    International Nuclear Information System (INIS)

    Callahan, G.D.; Loken, M.C.; Mellegard, K.D.; Hansen, F.D.

    1998-01-01

    Modifications to the constitutive model used to describe the deformation of crushed salt are presented in this report. Two mechanisms--dislocation creep and grain boundary diffusional pressure solutioning--defined previously but used separately are combined to form the basis for the constitutive model governing the deformation of crushed salt. The constitutive model is generalized to represent three-dimensional states of stress. New creep consolidation tests are combined with an existing database that includes hydrostatic consolidation and shear consolidation tests conducted on Waste Isolation Pilot Plant and southeastern New Mexico salt to determine material parameters for the constitutive model. Nonlinear least-squares model fitting to data from the shear consolidation tests and a combination of the shear and hydrostatic consolidation tests produced two sets of material parameter values for the model. The change in material parameter values from test group to test group indicates the empirical nature of the model but demonstrates improvement over earlier work with the previous models. Key improvements are the ability to capture lateral strain reversal and better resolve parameter values. To demonstrate the predictive capability of the model, each parameter value set was used to predict each of the tests in the database. Based on the fitting statistics and the ability of the model to predict the test data, the model appears to capture the creep consolidation behavior of crushed salt quite well

  11. Construction and Updating of Event Models in Auditory Event Processing

    Science.gov (United States)

    Huff, Markus; Maurer, Annika E.; Brich, Irina; Pagenkopf, Anne; Wickelmaier, Florian; Papenmeier, Frank

    2018-01-01

    Humans segment the continuous stream of sensory information into distinct events at points of change. Between 2 events, humans perceive an event boundary. Present theories propose changes in the sensory information to trigger updating processes of the present event model. Increased encoding effort finally leads to a memory benefit at event…

  12. A Kriging Model Based Finite Element Model Updating Method for Damage Detection

    Directory of Open Access Journals (Sweden)

    Xiuming Yang

    2017-10-01

    Model updating is an effective means of damage identification, and surrogate modeling has attracted considerable attention for saving computational cost in finite element (FE) model updating, especially for large-scale structures. In this context, a surrogate model of frequency is normally constructed for damage identification, while the frequency response function (FRF) is rarely used, as it usually changes dramatically with the updating parameters. This paper presents a new surrogate-model-based model updating method taking advantage of the measured FRFs. The Frequency Domain Assurance Criterion (FDAC) is used to build the objective function, whose nonlinear response surface is constructed by the Kriging model. Then, the efficient global optimization (EGO) algorithm is introduced to obtain the model updating results. The proposed method has good accuracy and robustness, which have been verified by a numerical simulation of a cantilever and experimental test data of a laboratory three-story structure.
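
    A minimal Gaussian-process (Kriging-style) surrogate of an objective surface, in the spirit of the method described. The RBF kernel, length scale and one-dimensional "damage parameter" are illustrative assumptions; the paper's FDAC objective and EGO search are not reproduced here:

```python
import numpy as np

def kriging_predict(X_train, y_train, X_test, length=1.0, noise=1e-8):
    """Minimal kriging-style (Gaussian process) surrogate with an RBF kernel.

    Interpolates the training responses so that expensive FE runs can be
    replaced by cheap surrogate evaluations inside an optimizer.
    """
    def k(a, b):
        d = a[:, None, :] - b[None, :, :]
        return np.exp(-0.5 * np.sum(d**2, axis=-1) / length**2)

    K = k(X_train, X_train) + noise * np.eye(len(X_train))  # jitter for stability
    alpha = np.linalg.solve(K, y_train)
    return k(X_test, X_train) @ alpha

# Surrogate of a 1-D "objective" over a hypothetical damage parameter.
X = np.linspace(0.0, 1.0, 9)[:, None]
y = (X[:, 0] - 0.6) ** 2                       # true (cheaply known) response
Xq = np.array([[0.6], [0.05]])
pred = kriging_predict(X, y, Xq, length=0.3)
```

    In an EGO-style loop one would add an uncertainty estimate to this predictor and sample new FE evaluations where expected improvement is largest.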

  13. Updating the debate on model complexity

    Science.gov (United States)

    Simmons, Craig T.; Hunt, Randall J.

    2012-01-01

    As scientists who are trying to understand a complex natural world that cannot be fully characterized in the field, how can we best inform the society in which we live? This founding context was addressed in a special session, “Complexity in Modeling: How Much is Too Much?” convened at the 2011 Geological Society of America Annual Meeting. The session had a variety of thought-provoking presentations—ranging from philosophy to cost-benefit analyses—and provided some areas of broad agreement that were not evident in discussions of the topic in 1998 (Hunt and Zheng, 1999). The session began with a short introduction during which model complexity was framed borrowing from an economic concept, the Law of Diminishing Returns, and an example of enjoyment derived by eating ice cream. Initially, there is increasing satisfaction gained from eating more ice cream, to a point where the gain in satisfaction starts to decrease, ending at a point when the eater sees no value in eating more ice cream. A traditional view of model complexity is similar—understanding gained from modeling can actually decrease if models become unnecessarily complex. However, oversimplified models—those that omit important aspects of the problem needed to make a good prediction—can also limit and confound our understanding. Thus, the goal of all modeling is to find the “sweet spot” of model sophistication—regardless of whether complexity was added sequentially to an overly simple model or collapsed from an initial highly parameterized framework that uses mathematics and statistics to attain an optimum (e.g., Hunt et al., 2007). Thus, holistic parsimony is attained, incorporating “as simple as possible,” as well as the equally important corollary “but no simpler.”

  14. A last updating evolution model for online social networks

    Science.gov (United States)

    Bu, Zhan; Xia, Zhengyou; Wang, Jiandong; Zhang, Chengcui

    2013-05-01

    As information technology has advanced, people are turning to electronic media more frequently for communication, and social relationships are increasingly found on online channels. However, there is very limited knowledge about the actual evolution of the online social networks. In this paper, we propose and study a novel evolution network model with the new concept of “last updating time”, which exists in many real-life online social networks. The last updating evolution network model can maintain the robustness of scale-free networks and can improve the network resilience against intentional attacks. What is more, we also found that it has the “small-world effect”, which is the inherent property of most social networks. Simulation experiments based on this model show that the results and the real-life data are consistent, which means that our model is valid.

  15. Procedural Modeling for Digital Cultural Heritage

    Directory of Open Access Journals (Sweden)

    Müller Pascal

    2009-01-01

    The rapid development of computer graphics and imaging provides the modern archeologist with several tools to realistically model and visualize archeological sites in 3D. This, however, creates a tension between veridical and realistic modeling. Visually compelling models may lead people to falsely believe that there exists very precise knowledge about the past appearance of a site. In order to make the underlying uncertainty visible, it has been proposed to encode this uncertainty with different levels of transparency in the rendering, or by decoloration of the textures. We argue that procedural modeling technology based on shape grammars provides an interesting alternative to such measures, as they tend to spoil the experience for the observer. Both its efficiency and compactness make procedural modeling a tool to produce multiple models, which together sample the space of possibilities. Variations between the different models express levels of uncertainty implicitly, while letting each individual model keep its realistic appearance. The underlying, structural description makes the uncertainty explicit. Additionally, procedural modeling also yields the flexibility to incorporate changes as knowledge of an archeological site gets refined. Annotations explaining modeling decisions can be included. We demonstrate our procedural modeling implementation with several recent examples.
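
    The idea of sampling the space of possibilities with a shape grammar can be sketched with a toy string-rewriting grammar. The rules and symbols below are invented purely for illustration and bear no relation to any specific reconstruction or to the authors' implementation:

```python
import random

# A toy shape grammar in the spirit of procedural building modelling:
# rules expand a "Facade" axiom into floors and window/door elements.
# Rule choices are randomised, so repeated derivations sample the space
# of plausible variants while each variant stays internally consistent.
RULES = {
    "Facade": [["Floor", "Floor", "Floor"], ["Floor", "Floor"]],
    "Floor":  [["Window", "Window", "Window"], ["Window", "Door", "Window"]],
}

def derive(symbol, rng):
    """Recursively expand a symbol; symbols without rules are terminals."""
    if symbol not in RULES:
        return [symbol]
    production = rng.choice(RULES[symbol])
    out = []
    for s in production:
        out.extend(derive(s, rng))
    return out

rng = random.Random(42)
variant = derive("Facade", rng)   # one sampled facade layout
```

    Running the derivation many times yields an ensemble of models whose variation expresses the uncertainty implicitly, as the abstract argues.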

  16. Construction and updating of event models in auditory event processing.

    Science.gov (United States)

    Huff, Markus; Maurer, Annika E; Brich, Irina; Pagenkopf, Anne; Wickelmaier, Florian; Papenmeier, Frank

    2018-02-01

    Humans segment the continuous stream of sensory information into distinct events at points of change. Between 2 events, humans perceive an event boundary. Present theories propose changes in the sensory information to trigger updating processes of the present event model. Increased encoding effort finally leads to a memory benefit at event boundaries. Evidence from reading time studies (increased reading times with increasing amount of change) suggest that updating of event models is incremental. We present results from 5 experiments that studied event processing (including memory formation processes and reading times) using an audio drama as well as a transcript thereof as stimulus material. Experiments 1a and 1b replicated the event boundary advantage effect for memory. In contrast to recent evidence from studies using visual stimulus material, Experiments 2a and 2b found no support for incremental updating with normally sighted and blind participants for recognition memory. In Experiment 3, we replicated Experiment 2a using a written transcript of the audio drama as stimulus material, allowing us to disentangle encoding and retrieval processes. Our results indicate incremental updating processes at encoding (as measured with reading times). At the same time, we again found recognition performance to be unaffected by the amount of change. We discuss these findings in light of current event cognition theories. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  17. A procedure for building product models

    DEFF Research Database (Denmark)

    Hvam, Lars; Riis, Jesper; Malis, Martin

    2001-01-01

    This article presents a procedure for building product models to support the specification processes dealing with sales, design of product variants and production preparation. The procedure includes, as the first phase, an analysis and redesign of the business processes, which are to be supported...... with product models. The next phase includes an analysis of the product assortment, and the set up of a so-called product master. Finally the product model is designed and implemented using object oriented modelling. The procedure is developed in order to ensure that the product models constructed are fit...... for the business processes they support, and properly structured and documented, in order to facilitate that the systems can be maintained continually and further developed. The research has been carried out at the Centre for Industrialisation of Engineering, Department of Manufacturing Engineering, Technical...

  18. Updating river basin models with radar altimetry

    DEFF Research Database (Denmark)

    Michailovsky, Claire Irene B.

    Hydrological models are widely used by water managers as a decision support tool for both real-time and long-term applications. Some examples of real-time management issues are the optimal management of reservoir releases, flood forecasting or water allocation in drought conditions. Long term....... Many types of RS are now routinely used to set up and drive river basin models. One of the key hydrological state variables is river discharge. It is typically the output of interest for water allocation applications and is also widely used as a source of calibration data as it presents the integrated...... response of a catchment to meteorological forcing. While river discharge cannot be directly measured from space, radar altimetry (RA) can measure water level variations in rivers at the locations where the satellite ground track and river network intersect called virtual stations or VS. In this PhD study...

  19. An updated digital model of plate boundaries

    Science.gov (United States)

    Bird, Peter

    2003-03-01

    A global set of present plate boundaries on the Earth is presented in digital form. Most come from sources in the literature. A few boundaries are newly interpreted from topography, volcanism, and/or seismicity, taking into account relative plate velocities from magnetic anomalies, moment tensor solutions, and/or geodesy. In addition to the 14 large plates whose motion was described by the NUVEL-1A poles (Africa, Antarctica, Arabia, Australia, Caribbean, Cocos, Eurasia, India, Juan de Fuca, Nazca, North America, Pacific, Philippine Sea, South America), model PB2002 includes 38 small plates (Okhotsk, Amur, Yangtze, Okinawa, Sunda, Burma, Molucca Sea, Banda Sea, Timor, Birds Head, Maoke, Caroline, Mariana, North Bismarck, Manus, South Bismarck, Solomon Sea, Woodlark, New Hebrides, Conway Reef, Balmoral Reef, Futuna, Niuafo'ou, Tonga, Kermadec, Rivera, Galapagos, Easter, Juan Fernandez, Panama, North Andes, Altiplano, Shetland, Scotia, Sandwich, Aegean Sea, Anatolia, Somalia), for a total of 52 plates. No attempt is made to divide the Alps-Persia-Tibet mountain belt, the Philippine Islands, the Peruvian Andes, the Sierras Pampeanas, or the California-Nevada zone of dextral transtension into plates; instead, they are designated as "orogens" in which this plate model is not expected to be accurate. The cumulative-number/area distribution for this model follows a power law for plates with areas between 0.002 and 1 steradian. Departure from this scaling at the small-plate end suggests that future work is very likely to define more very small plates within the orogens. The model is presented in four digital files: a set of plate boundary segments; a set of plate outlines; a set of outlines of the orogens; and a table of characteristics of each digitization step along plate boundaries, including estimated relative velocity vector and classification into one of 7 types (continental convergence zone, continental transform fault, continental rift, oceanic spreading ridge

  20. Finite element model updating of a prestressed concrete box girder bridge using subproblem approximation

    Science.gov (United States)

    Chen, G. W.; Omenzetter, P.

    2016-04-01

    This paper presents the implementation of an updating procedure for the finite element model (FEM) of a prestressed concrete continuous box-girder highway off-ramp bridge. Ambient vibration testing was conducted to excite the bridge, assisted by linear chirp sweepings induced by two small electrodynamic shakers deployed to enhance the excitation levels, since the bridge was closed to traffic. The data-driven stochastic subspace identification method was executed to recover the modal properties from measurement data. An initial FEM was developed and correlation between the experimental modal results and their analytical counterparts was studied. Modelling of the pier and abutment bearings was carefully adjusted to reflect the real operational conditions of the bridge. The subproblem approximation method was subsequently utilized to automatically update the FEM. For this purpose, the influences of bearing stiffness, and mass density and Young's modulus of materials were examined as uncertain parameters using sensitivity analysis. The updating objective function was defined as a summation of squared relative errors of the natural frequencies between the FEM and experimentation. All the identified modes were used as the target responses with the purpose of imposing more constraints on the optimization process and decreasing the number of potentially feasible combinations for parameter changes. The updated FEM of the bridge was able to produce sufficient improvements in natural frequencies in most modes of interest, and can serve for a more precise dynamic response prediction or future investigation of the bridge health.
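
    The updating objective described, a summation of squared relative natural-frequency errors, can be written down directly. The frequency values in the example are hypothetical, not the bridge's identified modes:

```python
import numpy as np

def frequency_objective(f_exp, f_fem):
    """Updating objective: sum of squared relative natural-frequency
    errors between identified (experimental) and FE frequencies.
    Using relative errors keeps low and high modes comparably weighted.
    """
    f_exp = np.asarray(f_exp, dtype=float)
    f_fem = np.asarray(f_fem, dtype=float)
    return np.sum(((f_fem - f_exp) / f_exp) ** 2)

# Hypothetical identified vs FE frequencies (Hz) for the first three modes.
obj = frequency_objective([2.1, 3.4, 5.8], [2.0, 3.5, 6.0])
```

    An optimizer (here, the subproblem approximation method) then adjusts the uncertain parameters to drive this value toward zero.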

  1. An updated geospatial liquefaction model for global application

    Science.gov (United States)

    Zhu, Jing; Baise, Laurie G.; Thompson, Eric M.

    2017-01-01

    We present an updated geospatial approach to estimation of earthquake-induced liquefaction from globally available geospatial proxies. Our previous iteration of the geospatial liquefaction model was based on mapped liquefaction surface effects from four earthquakes in Christchurch, New Zealand, and Kobe, Japan, paired with geospatial explanatory variables including slope-derived VS30, compound topographic index, and magnitude-adjusted peak ground acceleration from ShakeMap. The updated geospatial liquefaction model presented herein improves the performance and the generality of the model. The updates include (1) expanding the liquefaction database to 27 earthquake events across 6 countries, (2) addressing the sampling of nonliquefaction for incomplete liquefaction inventories, (3) testing interaction effects between explanatory variables, and (4) overall improving model performance. While we test 14 geospatial proxies for soil density and soil saturation, the most promising geospatial parameters are slope-derived VS30, modeled water table depth, distance to coast, distance to river, distance to closest water body, and precipitation. We found that peak ground velocity (PGV) performs better than peak ground acceleration (PGA) as the shaking intensity parameter. We present two models which offer improved performance over prior models. We evaluate model performance using the area under the Receiver Operating Characteristic (ROC) curve (AUC) and the Brier score. The best-performing model in a coastal setting uses distance to coast but is problematic for regions away from the coast. The second-best model, using PGV, VS30, water table depth, distance to closest water body, and precipitation, performs better in noncoastal regions and thus is the model we recommend for global implementation.
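
    The two evaluation metrics named above can be computed from first principles. The site labels and predicted liquefaction probabilities below are hypothetical, not the paper's data:

```python
import numpy as np

def auc(labels, scores):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) identity."""
    labels = np.asarray(labels)
    scores = np.asarray(scores)
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    n_pos = labels.sum()
    n_neg = len(labels) - n_pos
    return (ranks[labels == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

def brier(labels, probs):
    """Brier score: mean squared error of the predicted probabilities."""
    return np.mean((np.asarray(probs) - np.asarray(labels)) ** 2)

# Hypothetical liquefaction outcomes (1 = liquefied) at six sites.
y = np.array([1, 0, 1, 0, 1, 0])
p = np.array([0.9, 0.2, 0.7, 0.4, 0.6, 0.1])
a, b = auc(y, p), brier(y, p)
```

    AUC rewards correct ranking of liquefied over non-liquefied sites, while the Brier score additionally penalizes poorly calibrated probabilities, which is why the paper reports both.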

  2. Olkiluoto surface hydrological modelling: Update 2012 including salt transport modelling

    International Nuclear Information System (INIS)

    Karvonen, T.

    2013-11-01

    Posiva Oy is responsible for implementing a final disposal program for spent nuclear fuel of its owners Teollisuuden Voima Oyj and Fortum Power and Heat Oy. The spent nuclear fuel is planned to be disposed at a depth of about 400-450 meters in the crystalline bedrock at the Olkiluoto site. Leakages located at or close to the spent fuel repository may give rise to the upconing of deep highly saline groundwater, and this is a concern with regard to the performance of the tunnel backfill material after the closure of the tunnels. Therefore a salt transport sub-model was added to the Olkiluoto surface hydrological model (SHYD). The other improvements include an update of the particle tracking algorithm and the possibility to estimate the influence of open drillholes in a case where overpressure in inflatable packers decreases, causing a hydraulic short-circuit between hydrogeological zones HZ19 and HZ20 along the drillhole. Four new hydrogeological zones HZ056, HZ146, BFZ100 and HZ039 were added to the model. In addition, zones HZ20A and HZ20B intersect with each other in the new structure model, which influences salinity upconing caused by leakages in shafts. The aim of modelling the long-term influence of ONKALO, shafts and repository tunnels is to provide computational results that can be used to suggest limits for allowed leakages. The model input data included all the existing leakages into ONKALO (35-38 l/min) and shafts in the present day conditions. The influence of shafts was computed using eight different values for total shaft leakage: 5, 11, 20, 30, 40, 50, 60 and 70 l/min. The selection of the leakage criteria for shafts was influenced by the fact that upconing of saline water increases TDS-values close to the repository areas although HZ20B does not intersect any deposition tunnels. The total limit for all leakages was suggested to be 120 l/min. The limit for HZ20 zones was proposed to be 40 l/min: about 5 l/min from the present day leakages to the access tunnel, 25 l/min from

  3. Recent Updates to the System Advisor Model (SAM)

    Energy Technology Data Exchange (ETDEWEB)

    DiOrio, Nicholas A [National Renewable Energy Laboratory (NREL), Golden, CO (United States)

    2018-02-14

    The System Advisor Model (SAM) is a mature suite of techno-economic models for many renewable energy technologies that can be downloaded for free as a desktop application or software development kit. SAM is used for system-level modeling, including generating performance pro... Among the updates covered here is the release of the code as an open source project on GitHub. Other additions that will be covered include the ability to download data directly into SAM from the National Solar Radiation Database (NSRDB) and updates to a user-interface macro that assists with PV system sizing. A brief update on SAM's battery model and its integration with the detailed photovoltaic model will also be discussed. Finally, an outline of planned work for the next year will be presented, including the addition of a bifacial model, support for multiple MPPT inputs for detailed inverter modeling, and the addition of a model for inverter thermal behavior.

  4. The Potosi Reservoir Model 2013c, Property Modeling Update

    Energy Technology Data Exchange (ETDEWEB)

    Adushita, Yasmin; Smith, Valerie; Leetaru, Hannes

    2014-09-30

    property modeling workflows and layering. This model was retained as the base case. In the preceding Task [1], the Potosi reservoir model was updated to take into account the new data from Verification Well #2 (VW2), which was drilled in 2012. The porosity and permeability modeling was revised to take into account the log data from the new well. Revisions of the 2010 modeling assumptions were also made to the relative permeability, capillary pressures, formation water salinity, and the maximum allowable well bottomhole pressure. Dynamic simulations were run using the injection target of 3.5 million tons per annum (3.2 MTPA) for 30 years. This dynamic model was named Potosi Dynamic Model 2013b. In this Task, a new property modeling workflow was applied, in which seismic inversion data guided the porosity mapping and geobody extraction. The static reservoir model was fully guided by PorosityCube interpretations and derivations coupled with petrophysical logs from three wells. The two main assumptions are that porosity features in the PorosityCube that correlate with lost-circulation zones represent vugular zones, and that these vugular zones are laterally continuous. Extrapolation was done carefully to populate the vugular facies and their corresponding properties outside the seismic footprint up to the boundary of the 30 by 30 mi (48 by 48 km) model. Dynamic simulations were again run using the injection target of 3.5 million tons per annum (3.2 MTPA) for 30 years. This new dynamic model was named Potosi Dynamic Model 2013c. Reservoir simulation with the latest model gives a cumulative injection of 43 million tons (39 MT) in 30 years with a single well, which corresponds to about 40% of the injection target. The injection rate is approx. 3.2 MTPA in the first six months, while the well is injecting into the surrounding vugs, and declines rapidly to 1.8 million tons per annum (1.6 MTPA) in year 3, once the surrounding vugs are full and the CO2 starts to reach the matrix. 
After, the injection

  5. On Realism of Architectural Procedural Models

    Czech Academy of Sciences Publication Activity Database

    Beneš, J.; Kelly, T.; Děchtěrenko, Filip; Křivánek, J.; Müller, P.

    2017-01-01

    Roč. 36, č. 2 (2017), s. 225-234 ISSN 0167-7055 Grant - others:AV ČR(CZ) StrategieAV21/14 Program:StrategieAV Institutional support: RVO:68081740 Keywords : realism * procedural modeling * architecture Subject RIV: IN - Informatics, Computer Science Impact factor: 1.611, year: 2016

  7. Examining the influence of working memory on updating mental models.

    Science.gov (United States)

    Valadao, Derick F; Anderson, Britt; Danckert, James

    2015-01-01

    The ability to accurately build and update mental representations of our environment depends on our ability to integrate information over a variety of time scales and detect changes in the regularity of events. As such, the cognitive mechanisms that support model building and updating are likely to interact with those involved in working memory (WM). To examine this, we performed three experiments that manipulated WM demands concurrently with the need to attend to regularities in other stimulus properties (i.e., location and shape). That is, participants completed a prediction task while simultaneously performing an n-back WM task with either no load or a moderate load. The distribution of target locations (Experiment 1) or shapes (Experiments 2 and 3) included some level of probabilistic regularity, which, unbeknown to participants, changed abruptly within each block. Moderate WM load hampered the ability to benefit from target regularities and to adapt to changes in those regularities (i.e., the prediction task). This was most pronounced when both prediction and WM requirements shared the same target feature. Our results show that representational updating depends on free WM resources in a domain-specific fashion.

  8. CLPX-Model: Rapid Update Cycle 40km (RUC-40) Model Output Reduced Data, Version 1

    Data.gov (United States)

    National Aeronautics and Space Administration — The Rapid Update Cycle, version 2 at 40km (RUC-2, known to the Cold Land Processes community as RUC40) model is a Mesoscale Analysis and Prediction System (MAPS)...

  9. Finite element model updating of a tied-arch bridge using Douglas-Reid method and Rosenbrock optimization algorithm

    Directory of Open Access Journals (Sweden)

    Tobia Zordan

    2014-08-01

    Full Text Available Condition assessment of bridges has become increasingly important. In order to accurately simulate the real bridge, the finite element (FE) model updating method is often applied. This paper presents the calibration of the FE model of a reinforced concrete tied-arch bridge using the Douglas-Reid method in combination with the Rosenbrock optimization algorithm. Based on original drawings and a topographic survey, an FE model of the investigated bridge is created. Eight global modes of vibration of the bridge are identified by ambient vibration tests and the frequency domain decomposition technique. Then, eight structural parameters are selected for the FE model updating procedure through sensitivity analysis. Finally, the optimal structural parameters are identified using the Rosenbrock optimization algorithm. Results show that although the identified parameters lead to a perfect agreement between approximate and measured natural frequencies, they may not be the optimal variables that minimize the differences between numerical and experimental modal data. However, a satisfactory agreement between them is still obtained. Hence, FE model updating based on the Douglas-Reid method and the Rosenbrock optimization algorithm can be used as an alternative to other, more complex updating procedures.
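    The core of such a frequency-based updating loop can be sketched with a derivative-free search that minimizes the discrepancy between measured and model-predicted natural frequencies. Everything below is a hypothetical illustration, not the bridge model from the paper: the two-parameter frequency surrogate, the "measured" frequencies, and the simple coordinate search standing in for the Rosenbrock algorithm.

```python
import numpy as np

# Synthetic "measured" natural frequencies (Hz), standing in for ambient-vibration results.
f_measured = np.array([1.85, 2.90, 4.10])

def model_frequencies(params):
    """Toy surrogate for the FE solver: frequencies scale with sqrt(stiffness/mass)."""
    k, m = params
    base = np.array([1.0, 1.6, 2.3])
    return base * np.sqrt(k / m)

def objective(params):
    """Sum of squared relative frequency errors, the usual updating residual."""
    f_model = model_frequencies(params)
    return np.sum(((f_model - f_measured) / f_measured) ** 2)

def coordinate_search(x0, step=0.1, shrink=0.5, tol=1e-8, max_iter=200):
    """Derivative-free search (a simple stand-in for Rosenbrock's rotating-direction method)."""
    x = np.asarray(x0, dtype=float)
    fx = objective(x)
    for _ in range(max_iter):
        improved = False
        for i in range(len(x)):
            for sign in (+1.0, -1.0):
                trial = x.copy()
                trial[i] += sign * step
                ft = objective(trial)
                if ft < fx:
                    x, fx, improved = trial, ft, True
        if not improved:
            step *= shrink          # refine the pattern when no move helps
            if step < tol:
                break
    return x, fx

params_opt, residual = coordinate_search([1.0, 1.0])
```

The search settles on a stiffness/mass ratio whose square root matches the best common frequency scaling, leaving only the small residual caused by the three measured modes not sharing one exact scale factor.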

  10. A non-linear neural network technique for updating of river flow forecasts

    Directory of Open Access Journals (Sweden)

    A. Y. Shamseldin

    2001-01-01

    Full Text Available A non-linear Auto-Regressive Exogenous-input model (NARXM) river flow forecasting output-updating procedure is presented. This updating procedure is based on the structure of a multi-layer neural network. The NARXM-neural network updating procedure is tested using the daily discharge forecasts of the soil moisture accounting and routing (SMAR) conceptual model operating on five catchments having different climatic conditions. The performance of the NARXM-neural network updating procedure is compared with that of the linear Auto-Regressive Exogenous-input (ARXM) model updating procedure, the latter being a generalisation of the widely used Auto-Regressive (AR) model forecast error updating procedure. The results of the comparison indicate that the NARXM procedure performs better than the ARXM procedure. Keywords: Auto-Regressive Exogenous-input model, neural network, output-updating procedure, soil moisture accounting and routing (SMAR) model
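    The output-updating idea, predicting the next observed discharge from the simulated forecast (exogenous input) and the previous observation (autoregressive input) through a small nonlinear network, can be sketched as follows. The synthetic model bias, the random-feature network solved in closed form (a lightweight stand-in for the backpropagation-trained network of the paper), and all numbers are illustrative assumptions, not the SMAR/NARXM setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic daily discharge: the substantive model systematically underestimates
# the observed flow (a hypothetical bias, used only to exercise the updater).
q_sim = rng.uniform(1.0, 10.0, size=500)
q_obs = 1.2 * q_sim + 0.05 * rng.normal(size=500)

# NARXM-style inputs: latest simulated forecast (exogenous part) and previous
# observed discharge (autoregressive part), scaled to keep tanh units active.
X = np.column_stack([q_sim[1:], q_obs[:-1]]) / 10.0
y = q_obs[1:]

# Hidden layer with fixed random weights; output layer solved by least squares.
W1 = rng.normal(scale=0.5, size=(2, 8))
b1 = rng.normal(scale=0.1, size=8)
H = np.tanh(X @ W1 + b1)
H = np.column_stack([H, np.ones(len(H))])      # bias column
beta, *_ = np.linalg.lstsq(H, y, rcond=None)

q_updated = H @ beta                            # corrected (updated) forecasts
rmse_raw = np.sqrt(np.mean((q_sim[1:] - y) ** 2))
rmse_updated = np.sqrt(np.mean((q_updated - y) ** 2))
```

On this synthetic record the updater removes most of the systematic bias, so the updated RMSE falls well below that of the raw simulated discharge.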

  11. Updated observational constraints on quintessence dark energy models

    Science.gov (United States)

    Durrive, Jean-Baptiste; Ooba, Junpei; Ichiki, Kiyotomo; Sugiyama, Naoshi

    2018-02-01

    The recent GW170817 measurement favors the simplest dark energy models, such as a single scalar field. Quintessence models can be classified in two classes, freezing and thawing, depending on whether the equation of state decreases towards -1 or departs from it. In this paper, we put observational constraints on the parameters governing the equations of state of tracking freezing, scaling freezing, and thawing models using updated data, from the Planck 2015 release, joint light-curve analysis, and baryonic acoustic oscillations. Because of the current tensions on the value of the Hubble parameter H0, unlike previous authors, we let this parameter vary, which modifies significantly the results. Finally, we also derive constraints on neutrino masses in each of these scenarios.

  12. Finite element model validation of bridge based on structural health monitoring—Part I: Response surface-based finite element model updating

    Directory of Open Access Journals (Sweden)

    Zhouhong Zong

    2015-08-01

    Full Text Available In engineering practice, merging statistical analysis into structural evaluation and assessment is a growing trend. As a combination of mathematical and statistical techniques, response surface (RS) methodology has been successfully applied to design optimization, response prediction and model validation. With the aid of RS methodology, these two serial papers present a finite element (FE) model updating and validation method for bridge structures based on structural health monitoring. The key issues in implementing such a model updating are discussed in this paper, such as design of experiments, parameter screening, construction of high-order polynomial response surface models, optimization methods and precision inspection of the RS model. The proposed procedure is illustrated by a prestressed concrete continuous rigid-frame bridge monitored under operational conditions. The results from the updated FE model have been compared with those obtained from the online health monitoring system. The real application to a full-size bridge has demonstrated that the FE model updating process is efficient and convenient. The updated FE model can reasonably reflect the actual condition of Xiabaishi Bridge in the design space of the parameters and can be further applied to FE model validation and damage identification.
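    A minimal sketch of the response-surface idea: run a designed set of "FE experiments", fit a second-order polynomial to the response, and use the cheap surface in place of the solver inside the updating loop. The two-parameter frequency function below is a hypothetical stand-in for a real FE model, and the grid is a simple stand-in for a formal design of experiments.

```python
import numpy as np

def fe_frequency(e_ratio, k_spring):
    """Stand-in for a costly FE run: first natural frequency vs. two parameters."""
    return 2.0 * np.sqrt(e_ratio) + 0.3 * k_spring - 0.05 * e_ratio * k_spring

# Design of experiments: a full-factorial grid over the two parameters.
e_vals = np.linspace(0.8, 1.2, 5)
k_vals = np.linspace(0.5, 1.5, 5)
E, K = np.meshgrid(e_vals, k_vals)
x1, x2 = E.ravel(), K.ravel()
f = fe_frequency(x1, x2)

# Second-order polynomial response surface fitted by least squares.
A = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
coef, *_ = np.linalg.lstsq(A, f, rcond=None)

def rs_predict(e_ratio, k_spring):
    """Cheap surrogate that replaces the FE solver during updating."""
    terms = np.array([1.0, e_ratio, k_spring, e_ratio**2, k_spring**2,
                      e_ratio * k_spring])
    return terms @ coef

# Precision check of the RS model at an off-design point.
check = rs_predict(1.05, 1.1)
truth = fe_frequency(1.05, 1.1)
```

Because the surrogate is a closed-form polynomial, the subsequent parameter optimization costs essentially nothing compared with repeated FE solves.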

  13. Finite element model updating of concrete structures based on imprecise probability

    Science.gov (United States)

    Biswal, S.; Ramaswamy, A.

    2017-09-01

    Imprecise-probability-based methods are developed in this study for parameter estimation in finite element model updating for concrete structures, when the measurements are imprecisely defined. Bayesian analysis using the Metropolis-Hastings algorithm for parameter estimation is generalized to incorporate the imprecision present in the prior distribution, in the likelihood function, and in the measured responses. Three different cases are considered: (i) imprecision is present in the prior distribution and in the measurements only, (ii) imprecision is present in the parameters of the finite element model and in the measurements only, and (iii) imprecision is present in the prior distribution, in the parameters of the finite element model, and in the measurements. Procedures are also developed for integrating the imprecision in the parameters of the finite element model within the finite element software Abaqus. The proposed methods are then verified against reinforced concrete beams and prestressed concrete beams tested in our laboratory as part of this study.
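    The Metropolis-Hastings step at the heart of such a Bayesian updating scheme can be sketched on a single stiffness parameter. The toy frequency surrogate, prior bounds, and noise level below are assumptions for illustration only; the imprecise-probability generalization of the paper is not shown.

```python
import numpy as np

rng = np.random.default_rng(2)

theta_true = 30.0e9                 # hypothetical concrete Young's modulus (Pa)

def model_freq(theta):
    """Toy FE surrogate: first natural frequency, 4 Hz at the true modulus."""
    return 4.0 * np.sqrt(theta / 30.0e9)

f_meas = model_freq(theta_true) + rng.normal(0.0, 0.05)   # noisy "measurement"
sigma = 0.05                                               # measurement std (Hz)

def log_post(theta):
    """Flat prior on a broad physical interval + Gaussian likelihood."""
    if not (10e9 < theta < 60e9):
        return -np.inf
    return -0.5 * ((model_freq(theta) - f_meas) / sigma) ** 2

# Random-walk Metropolis-Hastings.
theta = 20e9
lp = log_post(theta)
samples = []
for _ in range(20000):
    prop = theta + rng.normal(0.0, 1e9)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:   # accept/reject
        theta, lp = prop, lp_prop
    samples.append(theta)

posterior = np.array(samples[5000:])           # discard burn-in
theta_hat = posterior.mean()
```

The retained samples approximate the posterior of the modulus; in the imprecise setting, the same sampler would be run over sets of priors and likelihoods to obtain bounds rather than a single posterior.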

  14. Soft sensor modelling by time difference, recursive partial least squares and adaptive model updating

    International Nuclear Information System (INIS)

    Fu, Y; Xu, O; Yang, W; Zhou, L; Wang, J

    2017-01-01

    To investigate time-variant and nonlinear characteristics in industrial processes, a soft sensor modelling method based on time difference, moving-window recursive partial least squares (PLS) and adaptive model updating is proposed. In this method, time-difference values of the input and output variables are used as training samples to construct the model, which can reduce the effect of the nonlinear characteristics on modelling accuracy while retaining the advantages of the recursive PLS algorithm. To reduce the high updating frequency of the model, a confidence value is introduced, which can be updated adaptively according to the results of the model performance assessment. Once the confidence value is updated, the model can be updated. The proposed method has been used to predict the 4-carboxy-benz-aldehyde (CBA) content in the purified terephthalic acid (PTA) oxidation reaction process. The results show that the proposed soft sensor modelling method can reduce computation effectively, improve prediction accuracy by making use of process information and reflect the process characteristics accurately. (paper)
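    The time-difference idea, modelling differences of inputs and outputs so that slow drift cancels, can be sketched with a one-component PLS fit. This is a minimal stand-in for the moving-window recursive PLS of the paper; the drifting process below is synthetic and all numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 300
t = np.arange(n)
drift = 0.01 * t                        # slow time-varying offset (e.g. catalyst decay)
x = rng.normal(size=(n, 2))
y = 2.0 * x[:, 0] - 1.0 * x[:, 1] + drift + 0.02 * rng.normal(size=n)

# Time-difference transform: model first differences, so the slow drift cancels.
dX = np.diff(x, axis=0)
dy = np.diff(y)

# One-component PLS (NIPALS-style): weight vector from input-output covariance.
w = dX.T @ dy
w /= np.linalg.norm(w)
score = dX @ w
b = w * (score @ dy) / (score @ score)  # regression coefficients in input space

# Predict y at time k from y at k-1 plus the predicted difference.
y_pred = y[:-1] + dX @ b
rmse = np.sqrt(np.mean((y_pred - y[1:]) ** 2))
```

Because the true coefficient vector lies along the dominant covariance direction, a single latent component already captures the relationship, and the differenced model is unaffected by the drift that would bias a model trained on raw values.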

  15. Hydrogeological structure model of the Olkiluoto Site. Update in 2010

    International Nuclear Information System (INIS)

    Vaittinen, T.; Ahokas, H.; Nummela, J.; Paulamaeki, S.

    2011-09-01

    As part of the programme for the final disposal of spent nuclear fuel, a hydrogeological structure model containing the hydraulically significant zones on Olkiluoto Island has been compiled. The structure model describes the deterministic site scale zones that dominate the groundwater flow. The main objective of the study is to provide the geometry and the hydrogeological properties related to the groundwater flow for the zones and the sparsely fractured bedrock to be used in the numerical modelling of groundwater flow and geochemical transport and thereby in the safety assessment. Also, these zones should be taken into account in the repository layout and in the construction of the disposal facility, and they have a long-term impact on the evolution of the site and the safety of the disposal repository. The previous hydrogeological model was compiled in 2008 and this updated version is based on data available at the end of May 2010. The updating was based on new hydrogeological observations and a systematic approach covering all drillholes to assess measured fracture transmissivities typical of the site-scale hydrogeological zones. New data consisted of head observations and interpreted pressure and flow responses caused by field activities. Essential background data for the modelling included the ductile deformation model and the site scale brittle deformation zones modelled in the geological model version 2.0. The GSM combines both geological and geophysical investigation data on the site. As a result of the modelling campaign, hydrogeological zones HZ001, HZ008, HZ19A, HZ19B, HZ19C, HZ20A, HZ20B, HZ21, HZ21B, HZ039, HZ099, OL-BFZ100, and HZ146 were included in the structure model. Compared with the previous model, zone HZ004 was replaced with zone HZ146 and zone HZ039 was introduced for the first time. Alternative zone HZ21B was included in the basic model. 
For the modelled zones, both the zone intersections, describing the fractures with dominating groundwater

  16. Operational model updating of spinning finite element models for HAWT blades

    Science.gov (United States)

    Velazquez, Antonio; Swartz, R. Andrew; Loh, Kenneth J.; Zhao, Yingjun; La Saponara, Valeria; Kamisky, Robert J.; van Dam, Cornelis P.

    2014-04-01

    Structural health monitoring (SHM) relies on collection and interrogation of operational data from the monitored structure. To make these data meaningful, a means of understanding how damage-sensitive data features relate to the physical condition of the structure is required. Model-driven SHM applications achieve this goal through model updating. This study proposes a novel approach for updating aero-elastic turbine blade vibration models for operational horizontal-axis wind turbines (HAWTs). The proposed approach updates estimates of modal properties for spinning HAWT blades intended for use in SHM and load estimation of these structures. Spinning structures present additional challenges for model updating due to spinning effects, the dependence of modal properties on rotational velocity, and gyroscopic effects that lead to complex mode shapes. A cyclo-stationary stochastic-based eigensystem realization algorithm (ERA) is applied to operational turbine data to identify data-driven modal properties, including frequencies and mode shapes. Model-driven modal properties are derived through modal condensation of spinning finite element models with variable physical parameters. Complex modes are converted into equivalent real modes through a reduction transformation. Model updating is achieved through an adaptive simulated annealing search, via the Modal Assurance Criterion (MAC) with complex-conjugate modes, to find the physical parameters that best match the experimentally derived data.
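    The MAC comparison used inside such an updating search extends naturally to complex modes by taking the magnitude of the conjugate inner product. A minimal sketch, with hypothetical 4-DOF mode shapes (not the blade model of the paper):

```python
import numpy as np

def mac(phi_a, phi_b):
    """Modal Assurance Criterion between two (possibly complex) mode shapes."""
    # np.vdot conjugates its first argument, which is exactly what MAC needs.
    num = np.abs(np.vdot(phi_a, phi_b)) ** 2
    return num / (np.vdot(phi_a, phi_a).real * np.vdot(phi_b, phi_b).real)

# Hypothetical identified (complex) and model (real) shapes for a 4-DOF blade.
phi_exp = np.array([1.0 + 0.05j, 0.8 - 0.02j, 0.4 + 0.01j, 0.1])
phi_fem = np.array([1.0, 0.78, 0.42, 0.12])
phi_other = np.array([1.0, -0.5, 0.2, -0.9])

mac_match = mac(phi_exp, phi_fem)       # near 1: shapes correlate
mac_mismatch = mac(phi_exp, phi_other)  # near 0: different mode
```

An updating search then maximizes the diagonal of the MAC matrix between identified and model modes while also matching frequencies.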

  17. An Updated Gas/grain Sulfur Network for Astrochemical Models

    Science.gov (United States)

    Laas, Jacob; Caselli, Paola

    2017-06-01

    Sulfur is a chemical element that enjoys one of the highest cosmic abundances. However, it has traditionally played a relatively minor role in the field of astrochemistry, being drowned out by other chemistries after it depletes from the gas phase during the transition from a diffuse cloud to a dense one. A wealth of laboratory studies have provided clues to its rich chemistry in the condensed phase, and most recently, a report by a team behind the Rosetta spacecraft has significantly helped to unveil its rich cometary chemistry. We have set forth to use this information to greatly update/extend the sulfur reactions within the OSU gas/grain astrochemical network in a systematic way, to provide more realistic chemical models of sulfur for a variety of interstellar environments. We present here some results and implications of these models.

  18. Procedural Content Graphs for Urban Modeling

    Directory of Open Access Journals (Sweden)

    Pedro Brandão Silva

    2015-01-01

    Full Text Available Massive procedural content creation, for example, for virtual urban environments, is a difficult, yet important challenge. While shape grammars are a popular example of effectiveness in architectural modeling, they have clear limitations regarding readability, manageability, and expressive power when addressing a variety of complex structural designs. Moreover, shape grammars aim at geometry specification and do not facilitate integration with other types of content, such as textures or light sources, which could otherwise accompany the generation process. We present procedural content graphs, a graph-based solution for procedural generation that addresses all these issues in a visual, flexible, and more expressive manner. Besides integrating handling of diverse types of content, this approach introduces collective entity manipulation as lists, seamlessly providing features such as advanced filtering, grouping, merging, ordering, and aggregation, essentially unavailable in shape grammars. In this way, separate entities can easily be merged or analyzed together in order to perform a variety of context-based decisions and operations. The advantages of this approach are illustrated via examples of tasks that are either very cumbersome or simply impossible to express with previous grammar approaches.

  19. On Using Selection Procedures with Binomial Models.

    Science.gov (United States)

    1983-10-01

    eds.), Shinko Tsusho Co. Ltd., Tokyo, Japan, pp. 501-533. Gupta, S. S. and Sobel, M. (1960). Selecting a subset containing the best of several...

  20. Updates to Model Algorithms & Inputs for the Biogenic ...

    Science.gov (United States)

    We have developed new canopy emission algorithms and land use data for BEIS. Simulations with BEIS v3.4 and these updates in CMAQ v5.0.2 are compared to the Model of Emissions of Gases and Aerosols from Nature (MEGAN) and evaluated against observations. This has resulted in improved model evaluations of modeled isoprene, NOx, and O3. The National Exposure Research Laboratory (NERL) Atmospheric Modeling and Analysis Division (AMAD) conducts research in support of the EPA's mission to protect human health and the environment. The AMAD research program is engaged in developing and evaluating predictive atmospheric models on all spatial and temporal scales, for forecasting air quality and for assessing changes in air quality and air pollutant exposures as affected by changes in ecosystem management and regulatory decisions. AMAD is responsible for providing a sound scientific and technical basis for regulatory policies based on air quality models to improve ambient air quality. The models developed by AMAD are being used by EPA, NOAA, and the air pollution community in understanding and forecasting not only the magnitude of the air pollution problem, but also in developing emission control policies and regulations for air quality improvements.

  1. State updating of a distributed hydrological model with Ensemble Kalman Filtering: effects of updating frequency and observation network density on forecast accuracy

    Directory of Open Access Journals (Sweden)

    O. Rakovec

    2012-09-01

    Full Text Available This paper presents a study on the optimal setup for discharge assimilation within a spatially distributed hydrological model. The Ensemble Kalman filter (EnKF) is employed to update the grid-based distributed states of such an hourly spatially distributed version of the HBV-96 model. By using a physically based model for the routing, the time delay and attenuation are modelled more realistically. The discharge and states at a given time step are assumed to be dependent on the previous time step only (Markov property).

    Synthetic and real world experiments are carried out for the Upper Ourthe (1600 km2), a relatively quickly responding catchment in the Belgian Ardennes. We assess the impact on the forecasted discharge of (1) various sets of the spatially distributed discharge gauges and (2) the filtering frequency. The results show that the hydrological forecast at the catchment outlet is improved by assimilating interior gauges. This augmentation of the observation vector improves the forecast more than increasing the updating frequency. In terms of the model states, the EnKF procedure is found to mainly change the pdfs of the two routing model storages, even when the uncertainty in the discharge simulations is smaller than the defined observation uncertainty.
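    The EnKF analysis step described above, updating distributed states from a single outlet discharge observation via ensemble cross-covariances, can be sketched as follows. The ensemble size, state dimension, observation operator, and all numbers are illustrative assumptions, not the HBV-96 configuration of the paper.

```python
import numpy as np

rng = np.random.default_rng(4)

n_ens, n_state = 50, 10                        # ensemble size, gridded storage states
H = np.zeros((1, n_state))
H[0, -1] = 1.0                                 # observe discharge at the outlet state only

# Forecast ensemble: perturbations around a background state profile.
background = np.linspace(5.0, 20.0, n_state)
ensemble = background[:, None] + rng.normal(scale=2.0, size=(n_state, n_ens))

obs = 22.0                                     # observed outlet discharge
obs_sigma = 1.0                                # observation standard deviation

# EnKF analysis step with perturbed observations.
X_mean = ensemble.mean(axis=1, keepdims=True)
A = ensemble - X_mean                          # state anomalies
P_HT = A @ (H @ A).T / (n_ens - 1)             # cross-covariance state vs. observation
S = (H @ A) @ (H @ A).T / (n_ens - 1) + obs_sigma ** 2
K = P_HT / S                                   # Kalman gain (scalar observation)
obs_pert = obs + rng.normal(scale=obs_sigma, size=n_ens)
innovations = obs_pert - (H @ ensemble).ravel()
analysis = ensemble + K @ innovations[None, :]

outlet_before = (H @ ensemble).mean()
outlet_after = (H @ analysis).mean()
```

Because the gain is built from ensemble cross-covariances, states that never appear in the observation operator (here, the interior storages) are still corrected in proportion to how they co-vary with the observed discharge.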

  2. Robust estimation procedure in panel data model

    Energy Technology Data Exchange (ETDEWEB)

    Shariff, Nurul Sima Mohamad [Faculty of Science of Technology, Universiti Sains Islam Malaysia (USIM), 71800, Nilai, Negeri Sembilan (Malaysia); Hamzah, Nor Aishah [Institute of Mathematical Sciences, Universiti Malaya, 50630, Kuala Lumpur (Malaysia)

    2014-06-19

    Panel data modeling has received great attention in econometric research recently. This is due to the availability of data sources and the interest in studying cross sections of individuals observed over time. However, problems may arise in modeling the panel in the presence of cross-sectional dependence and outliers. Even though there are a few methods that take into consideration the presence of cross-sectional dependence in the panel, these methods may provide inconsistent parameter estimates and inferences when outliers occur in the panel. As such, an alternative method that is robust to outliers and cross-sectional dependence is introduced in this paper. The properties and construction of the confidence interval for the parameter estimates are also considered. The robustness of the procedure is investigated and comparisons are made to the existing method via simulation studies. Our results show that the robust approach is able to produce accurate and reliable parameter estimates under the conditions considered.
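    A common way to obtain outlier-robust estimates in such settings is Huber M-estimation solved by iteratively reweighted least squares. The sketch below uses a simple contaminated regression, not the panel estimator of the paper, to show the mechanics; the data, tuning constant, and contamination level are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)

# Contaminated data: y = 1.5 * x + noise, with a few gross errors.
n = 200
x = rng.normal(size=n)
y = 1.5 * x + 0.1 * rng.normal(size=n)
y[:10] += 8.0                                  # contaminate 5% of observations

X = np.column_stack([np.ones(n), x])           # intercept + slope design

def huber_irls(X, y, c=1.345, n_iter=50):
    """Huber M-estimation via iteratively reweighted least squares."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    for _ in range(n_iter):
        r = y - X @ beta
        # Robust scale from the median absolute deviation.
        scale = np.median(np.abs(r - np.median(r))) / 0.6745 + 1e-12
        u = np.abs(r / scale)
        w = np.where(u <= c, 1.0, c / u)       # Huber weights: downweight outliers
        Xw = X * w[:, None]
        beta = np.linalg.solve(X.T @ Xw, Xw.T @ y)
    return beta

beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]
beta_rob = huber_irls(X, y)
```

Ordinary least squares absorbs the gross errors into a biased intercept, while the reweighted estimate stays close to the uncontaminated relationship.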

  3. A Stress Update Algorithm for Constitutive Models of Glassy Polymers

    Science.gov (United States)

    Danielsson, Mats

    2013-06-01

    A semi-implicit stress update algorithm is developed for the elastic-viscoplastic behavior of glassy polymers. The case of near rate-insensitivity is addressed, and the stress update algorithm is designed to handle this case robustly. A consistent tangent stiffness matrix is derived based on a full linearization of the internal virtual work. The stress update algorithm and (a slightly modified) tangent stiffness matrix are implemented in a commercial finite element program. The stress update algorithm is tested on a large boundary value problem for illustrative purposes.
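    The backward-Euler stress update with a Newton solve can be sketched in one dimension for a power-law viscoplastic flow rule. The material constants and the flow rule below are hypothetical stand-ins for the glassy-polymer constitutive model of the paper; a large rate exponent mimics the near rate-insensitive case the algorithm is designed to handle.

```python
# Material constants (hypothetical glassy-polymer-like values).
E = 2000.0        # Young's modulus (MPa)
s = 100.0         # flow strength scale (MPa)
edot0 = 1.0e-3    # reference plastic strain rate (1/s)
m = 20.0          # rate-sensitivity exponent (large = nearly rate-insensitive)

def stress_update(sigma_n, deps, dt):
    """Backward-Euler stress update solved with Newton iteration (1-D)."""
    sigma_tr = sigma_n + E * deps                  # elastic trial stress
    c = E * dt * edot0 / s**m
    sigma = sigma_tr
    for _ in range(50):
        g = sigma - sigma_tr + c * sigma**m        # residual of the update equation
        dg = 1.0 + c * m * sigma**(m - 1.0)        # consistent tangent (scalar case)
        step = g / dg
        sigma -= step                              # Newton correction
        if abs(step) < 1e-10:
            break
    return sigma

# Constant strain rate of 0.1/s: stress rises elastically, then saturates
# at the flow stress where plastic flow absorbs the imposed rate.
dt, deps = 0.01, 0.001
sigma = 0.0
for _ in range(400):
    sigma = stress_update(sigma, deps, dt)

flow_stress = s * (deps / dt / edot0) ** (1.0 / m)   # analytical steady state
```

Because the residual is convex and increasing in the stress, Newton iteration started from the elastic trial state converges monotonically, which is what keeps the update robust even for very steep (nearly rate-insensitive) flow rules.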

  4. Robot Visual Tracking via Incremental Self-Updating of Appearance Model

    Directory of Open Access Journals (Sweden)

    Danpei Zhao

    2013-09-01

    Full Text Available This paper proposes a target tracking method called Incremental Self-Updating Visual Tracking for robot platforms. Our tracker treats the tracking problem as a binary classification between the target and the background. Greyscale, HOG and LBP features are used in this work to represent the target and are integrated into a particle filter framework. To track the target over long time sequences, the tracker has to update its model to follow the most recent target appearance. In order to deal with the problems of wasted computation and the lack of a model-updating strategy in traditional methods, an intelligent and effective online self-updating strategy is devised to choose the optimal update opportunity. The decision to update the appearance model is based on the change in discriminative capability between the current frame and the previously updated frame. By adjusting the update step adaptively, severe waste of calculation time on needless updates can be avoided while keeping the model stable. Moreover, the appearance model is kept from serious drift when the target undergoes temporary occlusion. The experimental results show that the proposed tracker achieves robust and efficient performance in several challenging benchmark video sequences with various complex environmental changes in posture, scale, illumination and occlusion.

  5. Slab2 - Updated Subduction Zone Geometries and Modeling Tools

    Science.gov (United States)

    Moore, G.; Hayes, G. P.; Portner, D. E.; Furtney, M.; Flamme, H. E.; Hearne, M. G.

    2017-12-01

    The U.S. Geological Survey database of global subduction zone geometries (Slab1.0) is a highly utilized dataset that has been applied to a wide range of geophysical problems. In 2017, these models were improved and expanded upon as part of the Slab2 modeling effort. With a new data-driven approach that can be applied to a broader range of tectonic settings and geophysical data sets, we have generated a model set that will serve as a more comprehensive, reliable, and reproducible resource for three-dimensional slab geometries at all of the world's convergent margins. The newly developed framework of Slab2 is guided by: (1) a large integrated dataset, consisting of a variety of geophysical sources (e.g., earthquake hypocenters, moment tensors, active-source seismic survey images of the shallow slab, tomography models, receiver functions, bathymetry, trench ages, and sediment thickness information); (2) a dynamic filtering scheme aimed at constraining incorporated seismicity to only slab related events; (3) a 3-D data interpolation approach which captures both high resolution shallow geometries and instances of slab rollback and overlap at depth; and (4) an algorithm which incorporates uncertainties of contributing datasets to identify the most probable surface depth over the extent of each subduction zone. Further layers will also be added to the base geometry dataset, such as historic moment release, earthquake tectonic provenance, and interface coupling. Along with access to several queryable data formats, all components have been wrapped into an open source library in Python, such that suites of updated models can be released as further data becomes available. This presentation will discuss the extent of Slab2 development, as well as the current availability of the model and modeling tools.

  6. Venus Global Reference Atmospheric Model Status and Planned Updates

    Science.gov (United States)

    Justh, H. L.; Cianciolo, A. M. Dwyer

    2017-01-01

    The Venus Global Reference Atmospheric Model (Venus-GRAM) was originally developed in 2004 under funding from NASA's In Space Propulsion (ISP) Aerocapture Project to support mission studies at the planet. Many proposals, including NASA New Frontiers and Discovery, as well as other studies have used Venus-GRAM to design missions and assess system robustness. After Venus-GRAM's release in 2005, several missions to Venus have generated a wealth of additional atmospheric data, yet few model updates have been made to Venus-GRAM. This paper serves to address three areas: (1) to present the current status of Venus-GRAM, (2) to identify new sources of data and other upgrades that need to be incorporated to maintain Venus-GRAM credibility and (3) to identify additional Venus-GRAM options and features that could be included to increase its capability. This effort will depend on understanding the needs of the user community, obtaining new modeling data and establishing a dedicated funding source to support continual upgrades. This paper is intended to initiate discussion that can result in an upgraded and validated Venus-GRAM being available to future studies and NASA proposals.

  7. Prediction error, ketamine and psychosis: An updated model.

    Science.gov (United States)

    Corlett, Philip R; Honey, Garry D; Fletcher, Paul C

    2016-11-01

    In 2007, we proposed an explanation of delusion formation as aberrant prediction error-driven associative learning. Further, we argued that the NMDA receptor antagonist ketamine provided a good model for this process. Subsequently, we validated the model in patients with psychosis, relating aberrant prediction error signals to delusion severity. During the ensuing period, we have developed these ideas, drawing on the simple principle that brains build a model of the world and refine it by minimising prediction errors, as well as using it to guide perceptual inferences. While previously we focused on the prediction error signal per se, an updated view takes into account its precision, as well as the precision of prior expectations. With this expanded perspective, we see several possible routes to psychotic symptoms - which may explain the heterogeneity of psychotic illness, as well as the fact that other drugs, with different pharmacological actions, can produce psychotomimetic effects. In this article, we review the basic principles of this model and highlight specific ways in which prediction errors can be perturbed, in particular considering the reliability and uncertainty of predictions. The expanded model explains hallucinations as perturbations of the uncertainty mediated balance between expectation and prediction error. Here, expectations dominate and create perceptions by suppressing or ignoring actual inputs. Negative symptoms may arise due to poor reliability of predictions in service of action. By mapping from biology to belief and perception, the account proffers new explanations of psychosis. However, challenges remain. We attempt to address some of these concerns and suggest future directions, incorporating other symptoms into the model, building towards better understanding of psychosis. © The Author(s) 2016.

  8. State updating of a distributed hydrological model with Ensemble Kalman Filtering: Effects of updating frequency and observation network density on forecast accuracy

    Science.gov (United States)

    Rakovec, O.; Weerts, A.; Hazenberg, P.; Torfs, P.; Uijlenhoet, R.

    2012-12-01

    This paper presents a study on the optimal setup for discharge assimilation within a spatially distributed hydrological model (Rakovec et al., 2012a). The Ensemble Kalman filter (EnKF) is employed to update the grid-based distributed states of such an hourly spatially distributed version of the HBV-96 model. By using a physically based model for the routing, the time delay and attenuation are modelled more realistically. The discharge and states at a given time step are assumed to be dependent on the previous time step only (Markov property). Synthetic and real world experiments are carried out for the Upper Ourthe (1600 km2), a relatively quickly responding catchment in the Belgian Ardennes. The uncertain precipitation model forcings were obtained using a time-dependent multivariate spatial conditional simulation method (Rakovec et al., 2012b), which is further made conditional on preceding simulations. We assess the impact on the forecasted discharge of (1) various sets of the spatially distributed discharge gauges and (2) the filtering frequency. The results show that the hydrological forecast at the catchment outlet is improved by assimilating interior gauges. This augmentation of the observation vector improves the forecast more than increasing the updating frequency. In terms of the model states, the EnKF procedure is found to mainly change the pdfs of the two routing model storages, even when the uncertainty in the discharge simulations is smaller than the defined observation uncertainty. Rakovec, O., Weerts, A. H., Hazenberg, P., Torfs, P. J. J. F., and Uijlenhoet, R.: State updating of a distributed hydrological model with Ensemble Kalman Filtering: effects of updating frequency and observation network density on forecast accuracy, Hydrol. Earth Syst. Sci. Discuss., 9, 3961-3999, doi:10.5194/hessd-9-3961-2012, 2012a. Rakovec, O., Hazenberg, P., Torfs, P. J. J. F., Weerts, A. H., and Uijlenhoet, R.: Generating spatial precipitation ensembles: impact of
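The EnKF analysis step described in this record can be sketched as follows; a minimal perturbed-observations implementation in which all dimensions, noise levels and the linear observation operator are illustrative assumptions, not values from the study:

```python
import numpy as np

def enkf_update(X, y, H, R, rng):
    """One EnKF analysis step with perturbed observations.

    X : (n_state, n_ens) forecast ensemble of model states
    y : (n_obs,) observation vector (e.g. gauged discharges)
    H : (n_obs, n_state) linear observation operator
    R : (n_obs, n_obs) observation-error covariance
    """
    n_ens = X.shape[1]
    A = X - X.mean(axis=1, keepdims=True)          # ensemble anomalies
    P = A @ A.T / (n_ens - 1)                      # sample state covariance
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # Kalman gain
    # one independently perturbed copy of the observations per member
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, n_ens).T
    return X + K @ (Y - H @ X)
```

In this framing, raising the updating frequency means calling the analysis step more often, while densifying the observation network enlarges `y` and `H` instead.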

  9. The Course Challenge Procedure: A Fast but Not Furious Way to Update University Curriculums

    Science.gov (United States)

    Fornssler, Cathie M.

    2009-01-01

    Universities want to encourage faculty to keep curricula up-to-date and innovative, yet faculty dread the prospect of arguing about course and program changes with college and university curriculum committees--which are overworked and overwhelmed with detail. The Course Challenge Procedure (CCP) at the University of Saskatchewan is a collegial yet…

  10. Updated Conceptual Model for the 300 Area Uranium Groundwater Plume

    Energy Technology Data Exchange (ETDEWEB)

    Zachara, John M.; Freshley, Mark D.; Last, George V.; Peterson, Robert E.; Bjornstad, Bruce N.

    2012-11-01

    The 300 Area uranium groundwater plume in the 300-FF-5 Operable Unit is residual from past discharge of nuclear fuel fabrication wastes to a number of liquid (and solid) disposal sites. The source zones in the disposal sites were remediated by excavation and backfilled to grade, but sorbed uranium remains in deeper, unexcavated vadose zone sediments. In spite of source term removal, the groundwater plume has shown remarkable persistence, with concentrations exceeding the drinking water standard over an area of approximately 1 km2. The plume resides within a coupled vadose zone, groundwater, river zone system of immense complexity and scale. Interactions between geologic structure, the hydrologic system driven by the Columbia River, groundwater-river exchange points, and the geochemistry of uranium contribute to persistence of the plume. The U.S. Department of Energy (DOE) recently completed a Remedial Investigation/Feasibility Study (RI/FS) to document characterization of the 300 Area uranium plume and plan for beginning to implement proposed remedial actions. As part of the RI/FS document, a conceptual model was developed that integrates knowledge of the hydrogeologic and geochemical properties of the 300 Area and controlling processes to yield an understanding of how the system behaves and the variables that control it. Recent results from the Hanford Integrated Field Research Challenge site and the Subsurface Biogeochemistry Scientific Focus Area Project funded by the DOE Office of Science were used to update the conceptual model and provide an assessment of key factors controlling plume persistence.

  11. Acute complications after laparoscopic bariatric procedures: update for the general surgeon.

    Science.gov (United States)

    Campanile, Fabio Cesare; Boru, Cristian E; Rizzello, Mario; Puzziello, Alessandro; Copaescu, Catalin; Cavallaro, Giuseppe; Silecchia, Gianfranco

    2013-06-01

The development and widespread use of laparoscopic bariatric surgery means that emergency room physicians and general surgeons increasingly face acute or chronic surgical complications of bariatric surgery. The most common surgical emergencies after bariatric surgery are examined based on an extensive review of the bariatric surgery literature and on the authors' personal experience in four high-volume bariatric surgery centers. An orderly, stepwise approach to the bariatric patient with an emergency condition is advisable. Resuscitation should follow the same protocol adopted for non-bariatric patients. Consultation with the bariatric surgeon should be obtained early, and referral to the bariatric center should be considered whenever possible. Identifying the surgical procedure the patient underwent will orient the diagnosis of the acute condition. Procedure-specific complications should always be taken into consideration in the differential diagnosis. Acute slippage is the most frequent complication that needs emergency treatment after laparoscopic gastric banding. Sleeve gastrectomy and gastric bypass may present with life-threatening suture leaks or suture line bleeding. Gastric greater curvature plication (an investigational restrictive procedure) can present early complications related to prolonged postoperative vomiting. Both gastric bypass and bilio-pancreatic diversion may cause anastomotic marginal ulcer, bleeding, or, rarely, perforation and severe stenosis, while small bowel obstruction due to internal hernia represents a surgical emergency; obstruction may also be caused by trocar site hernia, intussusception, adhesions, strictures, kinking, or blood clots. Rapid weight loss after bariatric surgery can cause cholecystitis or choledocholithiasis, which are difficult to treat after bypass procedures. The general surgeon should be informed about modern bariatric procedures, their potential acute complications, and emergency management.

  12. Updating Allergy and/or Hypersensitivity Diagnostic Procedures in the WHO ICD-11 Revision.

    Science.gov (United States)

    Tanno, Luciana Kase; Calderon, Moises A; Li, James; Casale, Thomas; Demoly, Pascal

    2016-01-01

    The classification of allergy and/or hypersensitivity conditions for the World Health Organization (WHO) International Classification of Diseases (ICD)-11 provides the appropriate corresponding codes for allergic diseases, assuming that the final diagnosis is correct. This classification should be linked to in vitro and in vivo diagnostic procedures. Considering the impact for our specialty, we decided to review the codification of these procedures into the ICD aiming to have a baseline and to suggest changes and/or submit new proposals. For that, we prepared a list of the relevant allergy and/or hypersensitivity diagnostic procedures that health care professionals are dealing with on a daily basis. This was based on the main current guidelines and selected all possible and relevant corresponding terms from the ICD-10 (2015 version) and the ICD-11 β phase foundation (June 2015 version). More than 90% of very specific and important diagnostic procedures currently used by the allergists' community on a daily basis are missing. We observed that some concepts usually used by the allergist community on a daily basis are not fully recognized by other specialties. The whole scheme and the correspondence in the ICD-10 (2015 version) and ICD-11 foundation (June 2015 version) provided us a big picture of the missing or imprecise terms and how they are scattered in the current ICD-11 framework, allowing us to submit new proposals to increase the visibility of the allergy and/or hypersensitivity conditions and diagnostic procedures. Copyright © 2016 American Academy of Allergy, Asthma & Immunology. All rights reserved.

  13. Summary of Expansions, Updates, and Results in GREET 2017 Suite of Models

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Michael [Argonne National Lab. (ANL), Argonne, IL (United States); Elgowainy, Amgad [Argonne National Lab. (ANL), Argonne, IL (United States); Han, Jeongwoo [Argonne National Lab. (ANL), Argonne, IL (United States); Benavides, Pahola Thathiana [Argonne National Lab. (ANL), Argonne, IL (United States); Burnham, Andrew [Argonne National Lab. (ANL), Argonne, IL (United States); Cai, Hao [Argonne National Lab. (ANL), Argonne, IL (United States); Canter, Christina [Argonne National Lab. (ANL), Argonne, IL (United States); Chen, Rui [Argonne National Lab. (ANL), Argonne, IL (United States); Dai, Qiang [Argonne National Lab. (ANL), Argonne, IL (United States); Kelly, Jarod [Argonne National Lab. (ANL), Argonne, IL (United States); Lee, Dong-Yeon [Argonne National Lab. (ANL), Argonne, IL (United States); Lee, Uisung [Argonne National Lab. (ANL), Argonne, IL (United States); Li, Qianfeng [Argonne National Lab. (ANL), Argonne, IL (United States); Lu, Zifeng [Argonne National Lab. (ANL), Argonne, IL (United States); Qin, Zhangcai [Argonne National Lab. (ANL), Argonne, IL (United States); Sun, Pingping [Argonne National Lab. (ANL), Argonne, IL (United States); Supekar, Sarang D. [Argonne National Lab. (ANL), Argonne, IL (United States)

    2017-11-01

    This report provides a technical summary of the expansions and updates to the 2017 release of Argonne National Laboratory’s Greenhouse Gases, Regulated Emissions, and Energy Use in Transportation (GREET®) model, including references and links to key technical documents related to these expansions and updates. The GREET 2017 release includes an updated version of the GREET1 (the fuel-cycle GREET model) and GREET2 (the vehicle-cycle GREET model), both in the Microsoft Excel platform and in the GREET.net modeling platform. Figure 1 shows the structure of the GREET Excel modeling platform. The .net platform integrates all GREET modules together seamlessly.

  14. A revised model of Jupiter's inner electron belts: Updating the Divine radiation model

    Science.gov (United States)

    Garrett, Henry B.; Levin, Steven M.; Bolton, Scott J.; Evans, Robin W.; Bhattacharya, Bidushi

    2005-02-01

In 1983, Divine presented a comprehensive model of the Jovian charged particle environment that has long served as a reference for missions to Jupiter. However, in situ observations by Galileo and synchrotron observations from Earth indicate the need to update the model in the inner radiation zone. Specifically, the inner-zone electron model was reviewed and revised against >1 MeV data. Further modifications incorporating observations from the Galileo and Cassini spacecraft will be reported in the future.

  15. "Updates to Model Algorithms & Inputs for the Biogenic Emissions Inventory System (BEIS) Model"

    Science.gov (United States)

We have developed new canopy emission algorithms and land use data for BEIS. Simulations with BEIS v3.4 incorporating these updates in CMAQ v5.0.2 are compared to the Model of Emissions of Gases and Aerosols from Nature (MEGAN) and evaluated against observatio...

  16. Evidence-based Update of Pediatric Dental Restorative Procedures: Preventive Strategies.

    Science.gov (United States)

    Tinanoff, N; Coll, J A; Dhar, V; Maas, W R; Chhibber, S; Zokaei, L

    2015-01-01

There have been significant advances in the understanding of preventive restorative procedures: the advantages and disadvantages of restorative procedures; the evidence for conservative techniques for deep carious lesions; the effectiveness of pit and fissure sealants; and the evidence for resin infiltration techniques. The intent of this review is to help practitioners use evidence to make decisions regarding preventive restorative dentistry in children and young adolescents. This evidence-based review appraises the literature, primarily between the years 1995-2013, on preventive restorative strategies. The evidence was graded as strong evidence, evidence in favor, or expert opinion by consensus of the authors. The preventive strategy for dental caries includes individualized assessment of disease progression and management with appropriate preventive and restorative therapy. There is strong evidence that restoration of teeth with incomplete caries excavation results in fewer signs and symptoms of pulpal disease than complete excavation. There is strong evidence that sealants should be placed on pit and fissure surfaces judged to be at risk for dental caries, and on surfaces that already exhibit incipient, non-cavitated carious lesions. There is evidence in favor of resin infiltration to improve the clinical appearance of white spot lesions. Substantial evidence exists in the literature regarding the value of preventive dental restorative procedures.

  17. Machine learning in updating predictive models of planning and scheduling transportation projects

    Science.gov (United States)

    1997-01-01

A method combining machine learning and regression analysis to automatically and intelligently update predictive models used in the Kansas Department of Transportation's (KDOT's) internal management system is presented. The predictive models used...

  18. Highly efficient model updating for structural condition assessment of large-scale bridges.

    Science.gov (United States)

    2015-02-01

For efficiently updating models of large-scale structures, the response surface (RS) method based on radial basis functions (RBFs) is proposed to model the input-output relationship of structures. The key issues for applying the proposed method a...
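The RBF response surface named in this record can be sketched as an interpolant fitted to sampled input-output pairs of the structural model; a minimal Gaussian-kernel version, where the kernel width and sample layout are illustrative assumptions:

```python
import numpy as np

def rbf_fit(X, f, eps=1.0):
    """Solve for the weights of a Gaussian RBF interpolant through (X, f)."""
    r2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # pairwise sq. distances
    return np.linalg.solve(np.exp(-eps * r2), f)

def rbf_eval(Xc, w, Xq, eps=1.0):
    """Evaluate the surrogate with centres Xc and weights w at query points Xq."""
    r2 = ((Xq[:, None, :] - Xc[None, :, :]) ** 2).sum(-1)
    return np.exp(-eps * r2) @ w
```

Once fitted, the cheap surrogate replaces the full structural model inside the updating loop, which is the source of the efficiency gain.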

  19. A Survey on Procedural Modelling for Virtual Worlds

    NARCIS (Netherlands)

    Smelik, R.M.; Tutenel, T.; Bidarra, R.; Benes, B.

    2014-01-01

    Procedural modelling deals with (semi-)automatic content generation by means of a program or procedure. Among other advantages, its data compression and the potential to generate a large variety of detailed content with reduced human intervention, have made procedural modelling attractive for

  20. Finite-element-model updating using computational intelligence techniques applications to structural dynamics

    CERN Document Server

    Marwala, Tshilidzi

    2010-01-01

    Finite element models (FEMs) are widely used to understand the dynamic behaviour of various systems. FEM updating allows FEMs to be tuned better to reflect measured data and may be conducted using two different statistical frameworks: the maximum likelihood approach and Bayesian approaches. Finite Element Model Updating Using Computational Intelligence Techniques applies both strategies to the field of structural mechanics, an area vital for aerospace, civil and mechanical engineering. Vibration data is used for the updating process. Following an introduction a number of computational intelligence techniques to facilitate the updating process are proposed; they include: • multi-layer perceptron neural networks for real-time FEM updating; • particle swarm and genetic-algorithm-based optimization methods to accommodate the demands of global versus local optimization models; • simulated annealing to put the methodologies into a sound statistical basis; and • response surface methods and expectation m...

  1. Evaluation of two updating methods for dissipative models on a real structure

    International Nuclear Information System (INIS)

    Moine, P.; Billet, L.

    1996-01-01

Finite element models are widely used to predict the dynamic behaviour of structures. Frequently, the model does not represent the structure with all the expected accuracy, i.e. the measurements realised on the structure differ from the data predicted by the model. It is therefore necessary to update the model. Although many modeling errors come from an inadequate representation of the damping phenomena, most model updating techniques have so far been restricted to conservative models only. In this paper, we present two updating methods for dissipative models using eigenmode shapes and eigenvalues as behavioural information from the structure. The first method - the modal output error method - compares the experimental eigenvectors and eigenvalues directly to the model eigenvectors and eigenvalues, whereas the second method - the error in constitutive relation method - uses an energy error derived from the equilibrium relation. The error function, in both cases, is minimized by a conjugate gradient algorithm and the gradient is calculated analytically. These two methods behave differently, which can be evidenced by updating a real structure consisting of a piece of pipe mounted on two viscoelastic suspensions. The updating of the model validates an updating strategy consisting in performing a preliminary update with the error in constitutive relation method (a fast-converging but difficult-to-control method) and then pursuing the updating with the modal output error method (a slowly converging but reliable and easy-to-control method). Moreover, the problems encountered during the updating process and their corresponding solutions are given. (authors)

  2. [Update on laparoscopic electrosurgical devices and their use in complex urologic procedures].

    Science.gov (United States)

    Boukheir, G; Aoun, F; Albisinni, S; Roumeguère, T

    2017-04-01

Laparoscopy is the standard of care for many urologic procedures and is now witnessing technological advancements. Hemostasis is highly important in laparoscopy since bleeding can rapidly alter the operative conditions. The objective of this review is to present the different electrosurgical techniques, their history and their applications in urology. A literature review was performed using the following terms: "laparoscopic electrosurgery" and/or "nephrectomy" and/or "prostatectomy". Two hundred and forty articles were found through PubMed. After reviewing the title and the content of these articles, 18 were eligible for this review. The different electrosurgical techniques and their technological evolution are exposed, as are the physical properties of each system. The advantages and limitations of each system are also reviewed and analyzed. Bipolar electrosurgery with thermofusion and ultrasound technology can achieve good results in terms of nerve sparing in radical laparoscopic prostatectomy. Both can be used in partial nephrectomy; however, they can compromise the surgical resection margins. Hybrid systems seem to have an important role in urological laparoscopic procedures despite the scarce number of available studies. Copyright © 2017 Elsevier Masson SAS. All rights reserved.

  3. Synthetic Modifications In the Frequency Domain for Finite Element Model Update and Damage Detection

    Science.gov (United States)

    2017-09-01

Thesis by Ryun J. C. Konze, September 2017: Synthetic Modifications in the Frequency Domain for Finite Element Model Update and Damage Detection.

  4. Finite element model updating of the UCF grid benchmark using measured frequency response functions

    Science.gov (United States)

    Sipple, Jesse D.; Sanayei, Masoud

    2014-05-01

    A frequency response function based finite element model updating method is presented and used to perform parameter estimation of the University of Central Florida Grid Benchmark Structure. The proposed method is used to calibrate the initial finite element model using measured frequency response functions from the undamaged, intact structure. Stiffness properties, mass properties, and boundary conditions of the initial model were estimated and updated. Model updating was then performed using measured frequency response functions from the damaged structure to detect physical structural change. Grouping and ungrouping were utilized to determine the exact location and magnitude of the damage. The fixity in rotation of two boundary condition nodes was accurately and successfully estimated. The usefulness of the proposed method for finite element model updating is shown by being able to detect, locate, and quantify change in structural properties.
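In the same spirit as the FRF-based updating described above, the idea can be sketched with a single-parameter example: a stiffness is estimated by minimising the residual between a "measured" and a model FRF. The 1-DOF system, damping value and grid search are illustrative assumptions; the paper estimates many parameters on a real structure.

```python
import numpy as np

def frf(k, m, omegas, c=0.5):
    """Receptance FRF of a viscously damped 1-DOF oscillator."""
    return 1.0 / (k - m * omegas ** 2 + 1j * c * omegas)

m, k_true = 2.0, 800.0
omegas = np.linspace(1.0, 40.0, 200)
H_meas = frf(k_true, m, omegas)            # stands in for measured FRFs

# updating: choose the stiffness that minimises the FRF residual
k_grid = np.linspace(500.0, 1100.0, 601)
errors = [np.sum(np.abs(frf(k, m, omegas) - H_meas) ** 2) for k in k_grid]
k_updated = k_grid[int(np.argmin(errors))]
```

Detecting damage then amounts to re-running the estimation with FRFs from the damaged state and inspecting which parameters have shifted.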

  5. Updated free span design procedure DNV RP-F105 Ormen Lange experiences

    Energy Technology Data Exchange (ETDEWEB)

    Fyrileiv, Olav; Moerk, Kim; Chezhian, Muthu [Det Norsk Veritas (Norway)

    2005-07-01

The Ormen Lange gas field is located within a prehistoric slide area with water depths varying from 250 to 1100 m. Due to the slide area, the seabed is very uneven, including steep slopes and seabed obstacles up to 50 meters tall. The major technical challenges with respect to pipeline design in this area are: extreme seabed topography combined with inhomogeneous soil conditions; uncertainties related to current velocities and distribution; a high number of spans, including some very long spans; deep waters and therefore difficult and costly seabed preparation/span intervention; and flowlines with a large potential to buckle laterally in combination with free spans. In order to minimise span intervention costs, a major testing campaign and research programme was conducted in the Ormen Lange project to arrive at a design procedure in compliance with the DNV-RP-F105 (DNV, 2002) design philosophy. The improvements in terms of reduced seabed intervention and rock dumping costs are on the order of several hundred MNOK. The lessons learned and the improved knowledge will also be of great value for other projects dealing with similar free span problems. (author)

  6. Lazy Updating of hubs can enable more realistic models by speeding up stochastic simulations

    Science.gov (United States)

    Ehlert, Kurt; Loewe, Laurence

    2014-11-01

    To respect the nature of discrete parts in a system, stochastic simulation algorithms (SSAs) must update for each action (i) all part counts and (ii) each action's probability of occurring next and its timing. This makes it expensive to simulate biological networks with well-connected "hubs" such as ATP that affect many actions. Temperature and volume also affect many actions and may be changed significantly in small steps by the network itself during fever and cell growth, respectively. Such trends matter for evolutionary questions, as cell volume determines doubling times and fever may affect survival, both key traits for biological evolution. Yet simulations often ignore such trends and assume constant environments to avoid many costly probability updates. Such computational convenience precludes analyses of important aspects of evolution. Here we present "Lazy Updating," an add-on for SSAs designed to reduce the cost of simulating hubs. When a hub changes, Lazy Updating postpones all probability updates for reactions depending on this hub, until a threshold is crossed. Speedup is substantial if most computing time is spent on such updates. We implemented Lazy Updating for the Sorting Direct Method and it is easily integrated into other SSAs such as Gillespie's Direct Method or the Next Reaction Method. Testing on several toy models and a cellular metabolism model showed >10× faster simulations for its use-cases—with a small loss of accuracy. Thus we see Lazy Updating as a valuable tool for some special but important simulation problems that are difficult to address efficiently otherwise.
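A toy illustration of the Lazy Updating idea described above: in a direct-method SSA for a two-reaction network, the propensity depending on a "hub" species is cached and only refreshed once the hub count has drifted past a relative threshold. The network, rates and threshold are invented for the example; the published add-on targets the Sorting Direct Method.

```python
import random

def ssa_lazy(x, k1, k2, t_end, hub_tol=0.05, seed=0):
    """Direct-method SSA for A -> B (propensity k1*A) and B -> A (k2*B).
    A is treated as a hub: its propensity is cached and recomputed lazily,
    only when A has drifted more than hub_tol relative to the count seen
    at the last recomputation."""
    rng = random.Random(seed)
    t = 0.0
    a_cached, a_ref = k1 * x["A"], x["A"]
    while t < t_end and (x["A"] or x["B"]):
        if a_ref == 0 or abs(x["A"] - a_ref) / a_ref > hub_tol:
            a_cached, a_ref = k1 * x["A"], x["A"]   # threshold crossed: refresh
        props = (a_cached, k2 * x["B"])
        a0 = props[0] + props[1]
        if a0 <= 0:
            break
        t += rng.expovariate(a0)                    # time to next reaction
        if rng.random() * a0 < props[0] and x["A"] > 0:
            x["A"] -= 1; x["B"] += 1                # A -> B
        elif x["B"] > 0:
            x["B"] -= 1; x["A"] += 1                # B -> A
    return x
```

The cached propensity is deliberately allowed to go slightly stale between refreshes, which is exactly the speed-for-accuracy trade the abstract describes.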

  7. Updated anatomy of the buccal space and its implications for plastic, reconstructive and aesthetic procedures.

    Science.gov (United States)

    Schenck, Thilo L; Koban, Konstantin C; Schlattau, Alexander; Frank, Konstantin; Sclafani, Anthony P; Giunta, Riccardo E; Roth, Malcolm Z; Gaggl, Alexander; Gotkin, Robert H; Cotofana, Sebastian

    2018-02-01

The buccal space is an integral deep facial space which is involved in a variety of intra- and extra-oral pathologies and provides a good location for the harvest of the facial artery. The age-related anatomy of this space was investigated and compared to previous reports. We conducted anatomic dissections in 102 fresh frozen human cephalic specimens (45 males, 57 females; age range 50-100 years) and performed additional computed tomographic, magnetic resonance and 3-D surface volumetric imaging studies to visualize the boundaries and the contents of the buccal space after injection of contrast enhancing material. The mean vertical extent of contrast agent injected into the buccal space was 25.2 ± 4.3 mm and did not significantly differ between individuals of different age (p = 0.77) or gender (p = 0.13). The maximal injected volume was 10.02 cc [range: 3.09-10.02] without significant influence of age (p = 0.13) or gender (p = 0.81). The change in surface volume was 3.64 ± 1.04 cc, resulting in a mean surface-volume-coefficient of 0.87 ± 0.12, without being significantly influenced by age (p = 0.53) or gender (p = 0.78). The facial artery was consistently identified within the buccal space, whereas the facial vein was found to course within its posterior boundary. The buccal space did not undergo age-related changes in volume or size, which highlights that this space is a reliable and predictable landmark for various plastic, reconstructive and aesthetic procedures. Copyright © 2017 British Association of Plastic, Reconstructive and Aesthetic Surgeons. Published by Elsevier Ltd. All rights reserved.

  8. Damage severity assessment in wind turbine blade laboratory model through fuzzy finite element model updating

    Science.gov (United States)

    Turnbull, Heather; Omenzetter, Piotr

    2017-04-01

The recent shift towards the development of clean, sustainable energy sources has provided a new challenge in terms of structural safety and reliability: with aging, manufacturing defects, harsh environmental and operational conditions, and extreme events such as lightning strikes, wind turbines can become damaged, resulting in production losses and environmental degradation. To monitor the current structural state of the turbine, structural health monitoring (SHM) techniques would be beneficial. Physics-based SHM, in the form of calibration of a finite element model (FEM) by inverse techniques, is adopted in this research. Fuzzy finite element model updating (FFEMU) techniques for damage severity assessment of a small-scale wind turbine blade are discussed and implemented. The main advantage is the ability of FFEMU to account in a simple way for uncertainty within the model updating problem. Uncertainty quantification techniques, such as fuzzy sets, enable a convenient mathematical representation of the various uncertainties. Experimental frequencies obtained from modal analysis of a small-scale wind turbine blade were described by fuzzy numbers to model measurement uncertainty. During this investigation, damage severity estimation was investigated through the addition of small masses of varying magnitude to the trailing edge of the structure. This structural modification, intended to be in lieu of damage, enabled non-destructive experimental simulation of structural change. A numerical model was constructed with multiple variable additional masses simulated on the blade's trailing edge and used as updating parameters. Objective functions for updating were constructed and minimized using both the particle swarm optimization algorithm and the firefly algorithm. FFEMU was able to obtain a prediction of the baseline material properties of the blade whilst also successfully predicting, with sufficient accuracy, a larger magnitude of structural alteration and its location.
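One of the two optimisers named in this record, global-best particle swarm optimisation, can be sketched as follows. The one-parameter added-mass objective below is a hypothetical stand-in for the paper's modal objective functions, and every constant is invented for the example:

```python
import numpy as np

def pso_minimise(f, lo, hi, n_particles=30, n_iter=200, seed=1):
    """Basic global-best particle swarm optimisation over the box [lo, hi]."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, (n_particles, len(lo)))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([f(p) for p in x])
    g = pbest[pbest_f.argmin()].copy()
    w, c1, c2 = 0.7, 1.5, 1.5                      # inertia + cognitive/social pulls
    for _ in range(n_iter):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        fx = np.array([f(p) for p in x])
        improved = fx < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], fx[improved]
        g = pbest[pbest_f.argmin()].copy()
    return g, pbest_f.min()

# hypothetical updating objective: find the added mass that reproduces
# a "measured" natural frequency omega = sqrt(k0 / (m0 + m_add))
k0, m0, omega_meas = 1000.0, 2.0, 18.0
def objective(theta):
    return (np.sqrt(k0 / (m0 + theta[0])) - omega_meas) ** 2

best, best_f = pso_minimise(objective, np.array([0.0]), np.array([3.0]))
```

In the paper's setting the decision vector would hold the candidate mass (or damage) parameters and the objective would compare fuzzy-bounded experimental frequencies with FEM predictions.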

  9. Predicting Individual Physiological Responses During Marksmanship Field Training Using an Updated SCENARIO-J Model

    National Research Council Canada - National Science Library

    Yokota, Miyo

    2004-01-01

    ...)) for individual variation and a metabolic rate (M) correction during downhill movements. This study evaluated the updated version of the model incorporating these new features, using a dataset collected during U.S. Marine Corps (USMC...

  10. Predictability of locomotion: Effects on updating of spatial situation models during narrative comprehension

    NARCIS (Netherlands)

    Dutke, S.; Rinck, M.

    2006-01-01

    We investigated how the updating of spatial situation models during narrative comprehension depends on the interaction of cognitive abilities and text characteristics. Participants with low verbal and visuospatial abilities and participants with high abilities read narratives in which the

  11. Finite element model updating using bayesian framework and modal properties

    CSIR Research Space (South Africa)

    Marwala, T

    2005-01-01

Full Text Available. In this Note, Markov chain Monte Carlo (MCMC) simulation is used to sample the probability of the updating parameters in light of the measured modal properties. This probability is known as the posterior probability. The Metropolis algorithm (see Ref. 6...
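The Metropolis step referred to above can be sketched as follows; the single-stiffness posterior is a hypothetical stand-in for the Note's updating parameters, with an invented measured frequency and noise level:

```python
import math, random

def metropolis(logpost, theta0, n_steps, step, seed=0):
    """Random-walk Metropolis sampling of a 1-D posterior."""
    rng = random.Random(seed)
    theta, lp = theta0, logpost(theta0)
    chain = []
    for _ in range(n_steps):
        prop = theta + rng.gauss(0.0, step)
        lp_prop = logpost(prop)
        # accept with probability min(1, posterior ratio)
        if rng.random() < math.exp(min(0.0, lp_prop - lp)):
            theta, lp = prop, lp_prop
        chain.append(theta)
    return chain

# hypothetical posterior over a stiffness k: the model frequency is
# sqrt(k) (unit mass) and the "measured" frequency is 10 +/- 0.5
def logpost(k):
    return -math.inf if k <= 0 else -((math.sqrt(k) - 10.0) ** 2) / (2 * 0.5 ** 2)

chain = metropolis(logpost, 90.0, 8000, 8.0)
```

The histogram of the chain (after discarding burn-in) approximates the posterior over the updating parameter, which is the quantity the Note samples for the measured modal properties.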

  12. Automatic Multi-Scale Calibration Procedure for Nested Hydrological-Hydrogeological Regional Models

    Science.gov (United States)

    Labarthe, B.; Abasq, L.; Flipo, N.; de Fouquet, C. D.

    2014-12-01

    Modelling and understanding large hydrosystems is a complex task that depends on both regional and local processes. A nested interface concept has been implemented in the hydrosystem modelling platform for a large alluvial plain model (300 km2), part of an 11000 km2 multi-layer aquifer system included in the Seine basin (65000 km2, France). The platform couples hydrological and hydrogeological processes through four spatially distributed modules (mass balance, unsaturated zone, river and groundwater). An automatic multi-scale calibration procedure is proposed. Using different data sets, from the regional scale (117 gauging stations and 183 piezometers over the 65000 km2) to the intermediate scale (a dense past piezometric snapshot), it permits the calibration and homogenization of model parameters across scales. The stepwise procedure starts with the optimisation of the water mass balance parameters at the regional scale, using a conceptual 7-parameter bucket model coupled with the inverse modelling tool PEST. The multi-objective function is derived from river discharges and their decomposition by hydrograph separation. The separation is performed at each gauging station using an automatic procedure based on a Chapman filter. The model is then run at the regional scale to provide recharge estimates and regional fluxes to the local groundwater model. Another inversion method is then used to determine the local hydrodynamic parameters: an initial kriged transmissivity field is successively updated until the simulated hydraulic head distribution matches a reference distribution obtained by kriging. Finally, the local parameters are upscaled to the regional model by a renormalisation procedure. This multi-scale automatic calibration procedure enhances the representation of both local and regional processes. Indeed, it permits a better description of local heterogeneities and of the associated processes, which are transposed into the regional model, improving the overall performance.
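
    The hydrograph-separation step above can be sketched as a one-parameter recursive digital filter of the Chapman type. The coefficients follow the common Chapman (1991) form; the recession constant and the discharge series below are illustrative assumptions, not values from the study.

```python
# Baseflow separation with a Chapman-type one-parameter recursive filter.
# `a` is the recession constant (assumed value); q is total streamflow.

def chapman_filter(q, a=0.925):
    """Split total streamflow q into (quickflow, baseflow) components."""
    qf = [0.0]  # quickflow (direct runoff), zero-initialised
    for k in range(1, len(q)):
        f = ((3 * a - 1) / (3 - a)) * qf[-1] \
            + (2 / (3 - a)) * (q[k] - a * q[k - 1])
        qf.append(min(max(f, 0.0), q[k]))  # constrain 0 <= quickflow <= q
    baseflow = [qk - fk for qk, fk in zip(q, qf)]
    return qf, baseflow

# Example: a small flood wave over a slowly varying baseflow (invented data)
q = [1.0, 1.2, 3.5, 6.0, 4.0, 2.5, 1.8, 1.4, 1.2, 1.1]
qf, qb = chapman_filter(q)
```

    Applied at each gauging station, the separated components supply the discharge-based terms of the multi-objective function used in the PEST inversion.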

  13. A procedure for Building Product Models

    DEFF Research Database (Denmark)

    Hvam, Lars

    1999-01-01

    The application of product modeling in manufacturing companies raises the important question of how to model product knowledge in a comprehensible and efficient way. An important challenge is to qualify engineers to model and specify IT-systems (product models) to support their specification activities. A basic assumption is that engineers have to take the responsibility for building the product models to be used in their domain. To do that, they must be able to carry out the modeling task on their own, without any need for support from computer science experts. This paper presents a set of simple, easily adaptable concepts and methods from data modeling (object oriented analysis) and domain modeling (product modeling). The concepts are general and can be used for modeling all types of specifications in the different phases of the product life cycle. The modeling techniques presented have been...

  14. Using radar altimetry to update a routing model of the Zambezi River Basin

    DEFF Research Database (Denmark)

    Michailovsky, Claire Irene B.; Bauer-Gottwein, Peter

    2012-01-01

    ... is needed for hydrological applications. To overcome these limitations, altimetry river levels can be combined with hydrological modeling in a data-assimilation framework. This study focuses on the updating of a river routing model of the Zambezi using river levels from radar altimetry. A hydrological model of the basin was built to simulate the land phase of the water cycle and produce inflows to a Muskingum routing model. River altimetry from the ENVISAT mission was then used to update the storages in the reaches of the Muskingum model using the Extended Kalman Filter. The method showed improvements in modeled...

  15. Advanced Test Reactor Core Modeling Update Project Annual Report for Fiscal Year 2011

    International Nuclear Information System (INIS)

    Nigg, David W.; Steuhm, Devin A.

    2011-01-01

    Furthermore, a capability for rigorous sensitivity analysis and uncertainty quantification based on the TSUNAMI system is being implemented, and initial computational results have been obtained. This capability will have many applications in 2011 and beyond as a tool for understanding the margins of uncertainty in the new models, as well as for validation experiment design and interpretation. Finally, we note that although full implementation of the new computational models and protocols will extend over a period of 3-4 years as noted above, interim applications in the much nearer term have already been demonstrated. In particular, these demonstrations included an analysis that was useful for understanding the cause of some issues in December 2009 that were triggered by a larger than acceptable discrepancy between the measured excess core reactivity and a calculated value that was based on the legacy computational methods. As the Modeling Update project proceeds, we anticipate further such interim, informal applications in parallel with formal qualification of the system under the applicable INL Quality Assurance procedures and standards.

  16. Advanced Test Reactor Core Modeling Update Project Annual Report for Fiscal Year 2011

    Energy Technology Data Exchange (ETDEWEB)

    David W. Nigg; Devin A. Steuhm

    2011-09-01

    A capability for rigorous sensitivity analysis and uncertainty quantification based on the TSUNAMI system is being implemented, and initial computational results have been obtained. This capability will have many applications in 2011 and beyond as a tool for understanding the margins of uncertainty in the new models, as well as for validation experiment design and interpretation. Finally, we note that although full implementation of the new computational models and protocols will extend over a period of 3-4 years as noted above, interim applications in the much nearer term have already been demonstrated. In particular, these demonstrations included an analysis that was useful for understanding the cause of some issues in December 2009 that were triggered by a larger than acceptable discrepancy between the measured excess core reactivity and a calculated value that was based on the legacy computational methods. As the Modeling Update project proceeds, we anticipate further such interim, informal applications in parallel with formal qualification of the system under the applicable INL Quality Assurance procedures and standards.

  17. Improvement and Validation of Weld Residual Stress Modelling Procedure

    International Nuclear Information System (INIS)

    Zang, Weilin; Gunnars, Jens; Dong, Pingsha; Hong, Jeong K.

    2009-06-01

    The objective of this work is to identify and evaluate improvements to the residual stress modelling procedure currently used in Sweden. There is a growing demand to eliminate any unnecessary conservatism involved in residual stress assumptions. The study focused on the development and validation of an improved weld residual stress modelling procedure, taking advantage of recent advances in residual stress modelling and stress measurement techniques. The major changes applied in the new weld residual stress modelling procedure are: - An improved procedure for heat source calibration based on the use of analytical solutions. - Use of an isotropic hardening model where mixed hardening data is not available. - Use of an annealing model for improved simulation of strain relaxation in re-heated material. The new modelling procedure is demonstrated to capture the main characteristics of the through-thickness stress distributions by validation against experimental measurements. Three austenitic stainless steel butt-weld cases are analysed, covering a large range of pipe geometries. From these cases it is evident that there can be large differences between the residual stresses predicted using the new procedure and those from the earlier procedure or handbook recommendations. Previously recommended profiles could give misleading fracture assessment results. The stress profiles according to the new procedure agree well with the measured data. If data is available, a mixed hardening model should be used.

  18. Procedural City Layout Generation Based on Urban Land Use Models

    NARCIS (Netherlands)

    Groenewegen, S.A.; Smelik, R.M.; Kraker, J.K. de; Bidarra, R.

    2009-01-01

    Training and simulation applications in virtual worlds require significant amounts of urban environments. Procedural generation is an efficient way to create such models. Existing approaches for procedural modelling of cities aim at facilitating the work of urban planners and artists, but either

  19. Generic Graph Grammar: A Simple Grammar for Generic Procedural Modelling

    DEFF Research Database (Denmark)

    Christiansen, Asger Nyman; Bærentzen, Jakob Andreas

    2012-01-01

    Methods for procedural modelling tend to be designed either for organic objects, which are described well by skeletal structures, or for man-made objects, which are described well by surface primitives. Procedural methods, which allow for modelling of both kinds of objects, are few and usually of...

  20. A new multi-objective approach to finite element model updating

    Science.gov (United States)

    Jin, Seung-Seop; Cho, Soojin; Jung, Hyung-Jo; Lee, Jong-Jae; Yun, Chung-Bang

    2014-05-01

    The single objective function (SOF) has been employed for the optimization process in conventional finite element (FE) model updating. The SOF balances the residuals of multiple properties (e.g., modal properties) using weighting factors, but the weighting factors are hard to determine before the run of model updating. Therefore, a trial-and-error strategy is taken to find the most preferred model among alternative updated models resulting from varying weighting factors. In this study, a new approach to FE model updating using a multi-objective function (MOF) is proposed to obtain the most preferred model in a single run of updating without trial-and-error. For the optimization using the MOF, the non-dominated sorting genetic algorithm-II (NSGA-II) is employed to find the Pareto optimal front. The bend angle, related to the trade-off relationship of the objective functions, is used to select the most preferred model among the solutions on the Pareto optimal front. To validate the proposed approach, a highway bridge is selected as a test-bed and the modal properties of the bridge are obtained from an ambient vibration test. The initial FE model of the bridge is built using SAP2000. The model is updated using the identified modal properties by the SOF approach with varying weighting factors and by the proposed MOF approach. The most preferred model is selected using the bend angle of the Pareto optimal front and compared with the results from the SOF approach. The comparison shows that the proposed MOF approach is superior to the SOF approach with varying weighting factors: it yields smaller objective function values, better estimates of the updated parameters, and less computational time.
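
    The two ingredients of the selection step can be sketched for a two-objective case: extract the non-dominated (Pareto) set from a pool of candidate updated models, then pick the "knee" solution. The bend-angle computation below is a generic knee-point heuristic, not necessarily the paper's exact formulation, and the candidate objective values are invented.

```python
# Non-dominated sorting (minimisation) and knee selection by bend angle.
import math

def pareto_front(points):
    """Return the non-dominated points, sorted by the first objective."""
    return sorted(p for p in points
                  if not any(q[0] <= p[0] and q[1] <= p[1] and q != p
                             for q in points))

def knee_by_bend_angle(front):
    """Interior point where the front bends most sharply (smallest angle)."""
    best, best_angle = front[0], math.pi
    for i in range(1, len(front) - 1):
        a, b, c = front[i - 1], front[i], front[i + 1]
        v1 = (a[0] - b[0], a[1] - b[1])
        v2 = (c[0] - b[0], c[1] - b[1])
        cos = (v1[0] * v2[0] + v1[1] * v2[1]) / (math.hypot(*v1) * math.hypot(*v2))
        angle = math.acos(max(-1.0, min(1.0, cos)))  # clamp for round-off
        if angle < best_angle:
            best, best_angle = b, angle
    return best

# Invented (frequency residual, MAC residual) pairs for candidate models
candidates = [(1.0, 5.0), (2.0, 2.0), (2.1, 4.0), (4.0, 1.8), (5.0, 1.7), (3.0, 3.5)]
front = pareto_front(candidates)
knee = knee_by_bend_angle(front)   # the "most preferred" trade-off model
```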

  1. Procedures for Geometric Data Reduction in Solid Log Modelling

    Science.gov (United States)

    Luis G. Occeña; Wenzhen Chen; Daniel L. Schmoldt

    1995-01-01

    One of the difficulties in solid log modelling is working with huge data sets, such as those that come from computed axial tomographic imaging. Algorithmic procedures are described in this paper that have successfully reduced data without sacrificing modelling integrity.

  2. The updated geodetic mean dynamic topography model – DTU15MDT

    DEFF Research Database (Denmark)

    Knudsen, Per; Andersen, Ole Baltazar; Maximenko, Nikolai

    An update to the global mean dynamic topography model DTU13MDT is presented. For DTU15MDT the newer gravity model EIGEN-6C4 has been combined with the DTU15MSS mean sea surface model to construct this global mean dynamic topography model. The EIGEN-6C4 is derived using the full series of GOCE data...

  3. Experimental liver fibrosis research: update on animal models, legal issues and translational aspects

    Science.gov (United States)

    2013-01-01

    Liver fibrosis is defined as excessive extracellular matrix deposition and is based on complex interactions between matrix-producing hepatic stellate cells and an abundance of liver-resident and infiltrating cells. Investigation of these processes requires in vitro and in vivo experimental work in animals. However, the use of animals in translational research will be increasingly challenged, at least in countries of the European Union, because of the adoption of new animal welfare rules in 2013. These rules will create an urgent need for optimized standard operating procedures regarding animal experimentation and improved international communication in the liver fibrosis community. This review gives an update on current animal models, techniques and underlying pathomechanisms with the aim of fostering a critical discussion of the limitations and potential of up-to-date animal experimentation. We discuss potential complications in experimental liver fibrosis and provide examples of how the findings of studies in which these models are used can be translated to human disease and therapy. In this review, we want to motivate the international community to design more standardized animal models which might help to address the legally requested replacement, refinement and reduction of animals in fibrosis research. PMID:24274743

  4. Modelling precipitation extremes in the Czech Republic: update of intensity–duration–frequency curves

    Directory of Open Access Journals (Sweden)

    Michal Fusek

    2016-11-01

    Precipitation records from six stations of the Czech Hydrometeorological Institute were subjected to statistical analysis with the objectives of updating the intensity–duration–frequency (IDF) curves by applying extreme value distributions, and of comparing the updated curves against those produced by an empirical procedure in 1958. Another objective was to investigate differences between the two sets of curves that could be explained by factors such as different measuring instruments, measuring station altitudes and data analysis methods. It has been shown that the differences between the two sets of IDF curves are significantly influenced by the chosen method of data analysis.
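
    One building block of such an update can be sketched: fitting an extreme value distribution to annual maximum intensities and reading off return levels. The Gumbel (EV Type I) fit below, by the method of moments, is one common choice among extreme value distributions; the station data are invented.

```python
# Gumbel (EV-I) fit of annual maximum rainfall intensities and T-year
# return levels, giving a single point on an updated IDF curve.
import math
import statistics

def gumbel_fit(annual_maxima):
    """Method-of-moments estimates of Gumbel scale (beta) and location (mu)."""
    beta = math.sqrt(6) * statistics.stdev(annual_maxima) / math.pi
    mu = statistics.mean(annual_maxima) - 0.5772 * beta  # Euler-Mascheroni constant
    return mu, beta

def return_level(mu, beta, T):
    """Intensity exceeded on average once every T years."""
    return mu - beta * math.log(-math.log(1.0 - 1.0 / T))

# Invented 60-minute annual maxima [mm/h] for one station
maxima = [22.1, 30.4, 18.7, 41.2, 27.9, 33.0, 25.6, 38.8, 21.3, 29.5]
mu, beta = gumbel_fit(maxima)
x100 = return_level(mu, beta, 100)   # estimated 100-year intensity
```

    Repeating the fit per duration and per return period yields the grid of points through which the IDF curves are drawn.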

  5. Seismic source characterization for the 2014 update of the U.S. National Seismic Hazard Model

    Science.gov (United States)

    Moschetti, Morgan P.; Powers, Peter; Petersen, Mark D.; Boyd, Oliver; Chen, Rui; Field, Edward H.; Frankel, Arthur; Haller, Kathleen; Harmsen, Stephen; Mueller, Charles S.; Wheeler, Russell; Zeng, Yuehua

    2015-01-01

    We present the updated seismic source characterization (SSC) for the 2014 update of the National Seismic Hazard Model (NSHM) for the conterminous United States. Construction of the seismic source models employs the methodology that was developed for the 1996 NSHM but includes new and updated data, data types, source models, and source parameters that reflect the current state of knowledge of earthquake occurrence and state of practice for seismic hazard analyses. We review the SSC parameterization and describe the methods used to estimate earthquake rates, magnitudes, locations, and geometries for all seismic source models, with an emphasis on new source model components. We highlight the effects that two new model components—incorporation of slip rates from combined geodetic-geologic inversions and the incorporation of adaptively smoothed seismicity models—have on probabilistic ground motions, because these sources span multiple regions of the conterminous United States and provide important additional epistemic uncertainty for the 2014 NSHM.

  6. Proposed reporting model update creates dialogue between FASB and not-for-profits.

    Science.gov (United States)

    Mosrie, Norman C

    2016-04-01

    Seeing a need to refresh the current guidelines, the Financial Accounting Standards Board (FASB) proposed an update to the financial accounting and reporting model for not-for-profit entities. In response to solicited feedback, the board is now revisiting its proposed update and has set forth a plan to finalize its new guidelines. The FASB continues to solicit and respond to feedback as the process progresses.

  7. Power mos devices: structures and modelling procedures

    Energy Technology Data Exchange (ETDEWEB)

    Rossel, P.; Charitat, G.; Tranduc, H.; Morancho, F.; Moncoqut

    1997-05-01

    In this survey, the historical evolution of power MOS transistor structures is presented and currently used devices are described. General considerations on current and voltage capabilities are discussed and configurations of popular structures are given. A synthesis of the different modelling approaches proposed over the last three years is then presented, including analytical solutions for basic electrical parameters such as threshold voltage, on-resistance, saturation and quasi-saturation effects, temperature influence and voltage handling capability. The numerical solution of basic semiconductor device equations is then briefly reviewed, along with some typical problems which can be solved this way. A compact circuit modelling method is finally explained, with emphasis on dynamic behavior modelling.
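
    As a much-simplified stand-in for the analytical models surveyed (which also treat quasi-saturation and temperature effects), the textbook square-law description links two of the basic parameters mentioned, threshold voltage and on-resistance; all numeric values below are assumptions.

```python
# Textbook square-law MOSFET I-V model: cut-off, triode and saturation.

def drain_current(vgs, vds, vth=3.0, k=0.5):
    """Drain current [A]; vth is threshold voltage [V], k is gain [A/V^2]."""
    if vgs <= vth:
        return 0.0                                # cut-off
    vov = vgs - vth                               # overdrive voltage
    if vds < vov:
        return k * (vov * vds - vds ** 2 / 2.0)   # triode (resistive) region
    return 0.5 * k * vov ** 2                     # saturation region

def on_resistance(vgs, vth=3.0, k=0.5, dv=1e-6):
    """Rds(on) near vds = 0 by numerical differentiation, ~1/(k*(vgs-vth))."""
    return dv / drain_current(vgs, dv, vth, k)

i_sat = drain_current(10.0, 20.0)   # 0.5 * 0.5 * 7^2 = 12.25 A
r_on = on_resistance(10.0)          # ~0.286 ohm
```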

  8. Procedural meta-models for architectural design praxis

    Directory of Open Access Journals (Sweden)

    Gian Luca Brunetti

    2013-05-01

    This article discusses a procedure for the exploration of options in preliminary design. The procedure is based on the application of morphing procedures, which are typical of animation software, to building parametric analyses. It relies on partially overlapping sequences of evaluations targeted on dynamic ad-hoc test-models and is aimed at the creation of data fields representing the performance consequences of competing design scenarios. This representation is necessarily multidimensional and is based on parallel coordinates plots. The implementation of a specific test procedure of the kind described above is also discussed. The procedure has been supported by the use of contemporary analytical and representational systems and tools, namely ESP-r, Radiance, Ggobi, and an extensible tool for the dynamic morphing of models through user-specified criteria, named OPTS, by the author.
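
    The core idea, morphing one parametric test-model into another and recording performance along the way, can be sketched as linear interpolation of a design parameter vector. The parameters and the performance function below are invented stand-ins for an actual ESP-r or Radiance evaluation.

```python
# Morphing between two design scenarios by interpolating parameter vectors,
# producing a (parameters, performance) data field suitable for a
# parallel-coordinates plot.

def morph(params_a, params_b, steps):
    """Yield parameter vectors blending design A into design B."""
    for s in range(steps):
        t = s / (steps - 1)
        yield [a + t * (b - a) for a, b in zip(params_a, params_b)]

def performance(p):
    """Placeholder metric standing in for a building simulation result."""
    glazing_ratio, room_depth = p
    return room_depth - 2.0 * glazing_ratio

design_a = [0.2, 4.0]   # glazing ratio, room depth [m] (invented)
design_b = [0.6, 7.0]
field = [(p, performance(p)) for p in morph(design_a, design_b, 5)]
```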

  9. Conflicts of Interest in Clinical Guidelines: Update of U.S. Preventive Services Task Force Policies and Procedures.

    Science.gov (United States)

    Ngo-Metzger, Quyen; Moyer, Virginia; Grossman, David; Ebell, Mark; Woo, Meghan; Miller, Therese; Brummer, Tana; Chowdhury, Joya; Kato, Elisabeth; Siu, Albert; Phillips, William; Davidson, Karina; Phipps, Maureen; Bibbins-Domingo, Kirsten

    2018-01-01

    The U.S. Preventive Services Task Force (USPSTF) provides independent, objective, and scientifically rigorous recommendations for clinical preventive services. A primary concern is to avoid even the appearance of members having special interests that might influence their ability to judge evidence and formulate unbiased recommendations. The conflicts of interest policy for the USPSTF is described, as is the formal process by which best practices were incorporated to update the policy. The USPSTF performed a literature review, conducted key informant interviews, and reviewed the conflicts of interest policies of ten similar organizations. Important findings included transparency and public accessibility; full disclosure of financial relationships; disclosure of non-financial relationships (which create the potential for bias and compromise a member's objective judgment); disclosure of family members' conflicts of interest; and establishment of appropriate reporting periods. Controversies in best practices include the threshold for financial disclosures, ease of access to conflicts of interest policies and declarations, vague definitions of non-financial biases, and requests for family members' conflicts of interest (particularly those that are non-financial in nature). The USPSTF conflicts of interest policy includes disclosures for immediate family members, a clear definition of non-financial conflicts of interest, a long look-back period, and application of the policy to prospective members. Conflict of interest disclosures are solicited from all members every 4 months, formally reviewed, adjudicated, and made publicly available. The USPSTF conflicts of interest policy is publicly available as part of the USPSTF Procedure Manual. A continuous improvement process can be applied to conflicts of interest policies to enhance public trust in members of panels, such as the USPSTF, that produce clinical guidelines and recommendations. Copyright © 2018 American Journal of Preventive Medicine.

  10. Updating and prospective validation of a prognostic model for high sickness absence

    NARCIS (Netherlands)

    Roelen, C.A.M.; Heymans, M.W.; Twisk, J.W.R.; van Rhenen, W.; Pallesen, S.; Bjorvatn, B.; Moen, B.E.; Mageroy, N.

    2015-01-01

    Objectives To further develop and validate a Dutch prognostic model for high sickness absence (SA). Methods Three-wave longitudinal cohort study of 2,059 Norwegian nurses. The Dutch prognostic model was used to predict high SA among Norwegian nurses at wave 2. Subsequently, the model was updated by

  11. Dynamic finite element model updating of prestressed concrete continuous box-girder bridge

    Science.gov (United States)

    Lin, Xiankun; Zhang, Lingmi; Guo, Qintao; Zhang, Yufeng

    2009-09-01

    The dynamic finite element model (FEM) of a prestressed concrete continuous box-girder bridge, called the Tongyang Canal Bridge, is built and updated based on the results of ambient vibration testing (AVT) using a real-coded accelerating genetic algorithm (RAGA). The objective functions are defined based on natural frequency and modal assurance criterion (MAC) metrics to evaluate the updated FEM. Two objective functions are defined to fully account for the relative errors and standard deviations of the natural frequencies and MAC between the AVT results and the updated FEM predictions. The dynamically updated FEM of the bridge can better represent its structural dynamics and serve as a baseline in long-term health monitoring, condition assessment and damage identification over the service life of the bridge.
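
    The MAC metric entering the objective functions has a standard definition: the squared, normalised projection of one mode shape onto another, equal to 1 for perfectly correlated shapes and 0 for orthogonal ones. The mode shape vectors below are illustrative, not data from the Tongyang Canal Bridge.

```python
# Modal assurance criterion between two mode shape vectors:
#   MAC = |phi_a . phi_b|^2 / ((phi_a . phi_a) * (phi_b . phi_b))

def mac(phi_a, phi_b):
    dot = sum(a * b for a, b in zip(phi_a, phi_b))
    return dot ** 2 / (sum(a * a for a in phi_a) * sum(b * b for b in phi_b))

measured = [0.31, 0.58, 0.80, 1.00, 0.79]    # AVT-identified mode shape
predicted = [0.30, 0.60, 0.81, 0.98, 0.80]   # FE-predicted mode shape

value = mac(measured, predicted)                     # close to 1: good match
scaled = mac(measured, [2 * x for x in predicted])   # MAC is scale-invariant
```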

  12. Inference-based procedural modeling of solids

    KAUST Repository

    Biggers, Keith

    2011-11-01

    As virtual environments become larger and more complex, there is an increasing need for more automated construction algorithms to support the development process. We present an approach for modeling solids by combining prior examples with a simple sketch. Our algorithm uses an inference-based approach to incrementally fit patches together in a consistent fashion to define the boundary of an object. This algorithm samples and extracts surface patches from input models, and develops a Petri net structure that describes the relationship between patches along an imposed parameterization. Then, given a new parameterized line or curve, we use the Petri net to logically fit patches together in a manner consistent with the input model. This allows us to easily construct objects of varying sizes and configurations using arbitrary articulation, repetition, and interchanging of parts. The result of our process is a solid model representation of the constructed object that can be integrated into a simulation-based environment. © 2011 Elsevier Ltd. All rights reserved.

  13. On the general procedure for modelling complex ecological systems

    International Nuclear Information System (INIS)

    He Shanyu.

    1987-12-01

    In this paper, the principle of a general procedure for modelling complex ecological systems, the Adaptive Superposition Procedure (ASP), is briefly stated. The result of applying ASP in a national project for ecological regionalization is also described. (author). 3 refs

  14. A skin abscess model for teaching incision and drainage procedures.

    Science.gov (United States)

    Fitch, Michael T; Manthey, David E; McGinnis, Henderson D; Nicks, Bret A; Pariyadath, Manoj

    2008-07-03

    Skin and soft tissue infections are increasingly prevalent clinical problems, and it is important for health care practitioners to be well trained in how to treat skin abscesses. A realistic model of abscess incision and drainage will allow trainees to learn and practice this basic physician procedure. We developed a realistic model of skin abscess formation to demonstrate the technique of incision and drainage for educational purposes. The creation of this model is described in detail in this report. This model has been successfully used to develop and disseminate a multimedia video production for teaching this medical procedure. Clinical faculty and resident physicians find this model to be a realistic method for demonstrating abscess incision and drainage. This manuscript provides a detailed description of our model of abscess incision and drainage for medical education. Clinical educators can incorporate this model into skills labs or demonstrations for teaching this basic procedure.

  15. Validation of the PESTLA model: Definitions, objectives and procedure

    NARCIS (Netherlands)

    Boekhold AE; van den Bosch H; Boesten JJTI; Leistra M; Swartjes FA; van der Linden AMA

    1993-01-01

    The simulation model PESTLA was developed to produce estimates of accumulation and leaching of pesticides in soil to facilitate classification of pesticides in the Dutch registration procedure. Before PESTLA can be used for quantitative assessment of expected pesticide concentrations in

  16. Design Transformations for Rule-based Procedural Modeling

    KAUST Repository

    Lienhard, Stefan

    2017-05-24

    We introduce design transformations for rule-based procedural models, e.g., for buildings and plants. Given two or more procedural designs, each specified by a grammar, a design transformation combines elements of the existing designs to generate new designs. We introduce two technical components to enable design transformations. First, we extend the concept of discrete rule switching to rule merging, leading to a very large shape space for combining procedural models. Second, we propose an algorithm to jointly derive two or more grammars, called grammar co-derivation. We demonstrate two applications of our work: we show that our framework leads to a larger variety of models than previous work, and we show fine-grained transformation sequences between two procedural models.

  17. Modeling the Effects of Updating the Influenza Vaccine on the Efficacy of Repeated Vaccination.

    Energy Technology Data Exchange (ETDEWEB)

    D. Smith; A. Lapedes; et al.

    2000-11-01

    The accumulated wisdom is to update the vaccine strain to the expected epidemic strain only when there is at least a 4-fold difference [measured by the hemagglutination inhibition (HI) assay] between the current vaccine strain and the expected epidemic strain. In this study we investigate the effect, on repeat vaccinees, of updating the vaccine when there is a less than 4-fold difference. Methods: Using a computer model of the immune response to repeated vaccination, we simulated updating the vaccine on a 2-fold difference and compared this to not updating the vaccine, in each case predicting the vaccine efficacy in first-time and repeat vaccinees for a variety of possible epidemic strains. Results: Updating the vaccine strain on a 2-fold difference resulted in increased vaccine efficacy in repeat vaccinees compared to leaving the vaccine unchanged. Conclusions: These results suggest that updating the vaccine strain on a 2-fold difference between the existing vaccine strain and the expected epidemic strain will increase vaccine efficacy in repeat vaccinees compared to leaving the vaccine unchanged.

  18. Adapting to change: The role of the right hemisphere in mental model building and updating.

    Science.gov (United States)

    Filipowicz, Alex; Anderson, Britt; Danckert, James

    2016-09-01

    We recently proposed that the right hemisphere plays a crucial role in the processes underlying mental model building and updating. Here, we review the evidence we and others have garnered to support this novel account of right hemisphere function. We begin by presenting evidence from patient work that suggests a critical role for the right hemisphere in the ability to learn from the statistics in the environment (model building) and adapt to environmental change (model updating). We then provide a review of neuroimaging research that highlights a network of brain regions involved in mental model updating. Next, we outline specific roles for particular regions within the network such that the anterior insula is purported to maintain the current model of the environment, the medial prefrontal cortex determines when to explore new or alternative models, and the inferior parietal lobule represents salient and surprising information with respect to the current model. We conclude by proposing some future directions that address some of the outstanding questions in the field of mental model building and updating. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  19. Update of the ITER MELCOR model for the validation of the Cryostat design

    Energy Technology Data Exchange (ETDEWEB)

    Martínez, M.; Labarta, C.; Terrón, S.; Izquierdo, J.; Perlado, J.M.

    2015-07-01

    Some transients can compromise the vacuum in the cryostat of ITER and cause significant loads. A MELCOR model has been updated in order to assess these loads. Transients have been run with this model and their results will be used in the mechanical assessment of the cryostat. (Author)

  20. Shape Synthesis from Sketches via Procedural Models and Convolutional Networks.

    Science.gov (United States)

    Huang, Haibin; Kalogerakis, Evangelos; Yumer, Ersin; Mech, Radomir

    2017-08-01

    Procedural modeling techniques can produce high quality visual content through complex rule sets. However, controlling the outputs of these techniques for design purposes is often notoriously difficult for users due to the large number of parameters involved in these rule sets and also their non-linear relationship to the resulting content. To circumvent this problem, we present a sketch-based approach to procedural modeling. Given an approximate and abstract hand-drawn 2D sketch provided by a user, our algorithm automatically computes a set of procedural model parameters, which in turn yield multiple, detailed output shapes that resemble the user's input sketch. The user can then select an output shape, or further modify the sketch to explore alternative ones. At the heart of our approach is a deep Convolutional Neural Network (CNN) that is trained to map sketches to procedural model parameters. The network is trained by large amounts of automatically generated synthetic line drawings. By using an intuitive medium, i.e., freehand sketching as input, users are set free from manually adjusting procedural model parameters, yet they are still able to create high quality content. We demonstrate the accuracy and efficacy of our method in a variety of procedural modeling scenarios including design of man-made and organic shapes.
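
    The overall pipeline shape, an image of a sketch passing through convolution and pooling to a vector of procedural model parameters, can be illustrated with a deliberately tiny, untrained stand-in; a real system would use a deep CNN trained on large sets of synthetic line drawings, as the paper describes. The filter, readout weights and input sketch here are all invented.

```python
# Untrained, minimal stand-in for a sketch-to-parameters regressor:
# one 3x3 convolution, one 2x2 max-pool, one linear readout.

def conv2d(img, kern):
    n, k = len(img), len(kern)
    return [[sum(img[i + a][j + b] * kern[a][b]
                 for a in range(k) for b in range(k))
             for j in range(n - k + 1)]
            for i in range(n - k + 1)]

def maxpool2(img):
    return [[max(img[i][j], img[i][j + 1], img[i + 1][j], img[i + 1][j + 1])
             for j in range(0, len(img[0]) - 1, 2)]
            for i in range(0, len(img) - 1, 2)]

def predict_params(sketch, readout):
    feat = maxpool2(conv2d(sketch, [[0, 1, 0], [1, -4, 1], [0, 1, 0]]))
    flat = [v for row in feat for v in row]          # 3x3 map -> 9 features
    return [sum(w * v for w, v in zip(wrow, flat)) for wrow in readout]

sketch = [[1 if i == j else 0 for j in range(8)] for i in range(8)]  # diagonal stroke
readout = [[0.1] * 9, [-0.05] * 9]   # maps 9 features to 2 parameters (untrained)
params = predict_params(sketch, readout)
```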

  1. SYNTHESIS OF INFORMATION MODEL FOR ALTERNATIVE FUNCTIONAL DIAGNOSTICS PROCEDURE

    OpenAIRE

    P. F. Shchapov; R. P. Miguschenko

    2014-01-01

    Probabilistic approaches from information theory and the information theory of measurement were considered, which allow the expected amount of information in measuring conversions and in the encoding of random measurement signals to be calculated and analyzed. A probabilistic model of diagnostic information transformation and of the diagnostic procedure was developed. Conditions for obtaining the maximum amount of diagnostic information were found.

  2. FE Model Updating on an In-Service Self-Anchored Suspension Bridge with Extra-Width Using Hybrid Method

    Directory of Open Access Journals (Sweden)

    Zhiyuan Xia

    2017-02-01

    Nowadays, many more extra-wide bridges are needed for vehicle throughput. In order to obtain a precise finite element (FE) model of such complex bridge structures, a practical hybrid updating method integrating Gaussian mutation particle swarm optimization (GMPSO), a Kriging meta-model and Latin hypercube sampling (LHS) was proposed. After demonstrating the efficiency and accuracy of the hybrid method on the model updating of a damaged simply supported beam, the proposed method was applied to the model updating of an extra-wide self-anchored suspension bridge, which proved necessary in view of the results of the ambient vibration test. The results of the bridge model updating showed relatively high agreement in both mode frequencies and mode shapes between the updated model and the experimental structure. The successful model updating of this bridge fills a gap in the model updating of complex self-anchored suspension bridges. Moreover, the updating process can inform model updating for other complex bridge structures.
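
    The LHS ingredient of the hybrid method can be sketched: stratified sampling of the updating-parameter space to generate the design of experiments on which the Kriging meta-model is trained. The parameter names and ranges below are illustrative assumptions.

```python
# Latin hypercube sampling: each parameter range is split into n strata,
# each stratum is sampled exactly once, and the columns are shuffled.
import random

def latin_hypercube(n, bounds, seed=0):
    """Return n samples; bounds is a list of (low, high) per parameter."""
    rng = random.Random(seed)
    cols = []
    for low, high in bounds:
        strata = [(i + rng.random()) / n for i in range(n)]  # one draw per stratum
        rng.shuffle(strata)
        cols.append([low + u * (high - low) for u in strata])
    return list(zip(*cols))  # n tuples, one value per parameter

# e.g. concrete elastic modulus [GPa] and density [kg/m^3] as updating parameters
samples = latin_hypercube(10, [(30.0, 40.0), (2300.0, 2600.0)])
```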

  3. Finite element modelling and updating of friction stir welding (FSW joint for vibration analysis

    Directory of Open Access Journals (Sweden)

    Zahari Siti Norazila

    2017-01-01

    Friction stir welding (FSW) of aluminium alloys is widely used in automotive and aerospace applications due to its advanced and lightweight properties. The behaviour of FSW joints plays a significant role in the dynamic characteristics of the structure; because of their complexities and uncertainties, the representation of an accurate finite element model of these joints has become a research issue. In this paper, various finite element (FE) modelling techniques for predicting the dynamic properties of sheet metal joined by friction stir welding are presented. Nine sets of flat plates of different aluminium alloy series, AA7075 and AA6061, joined by FSW are used; the nine sets of specimens were fabricated using various welding parameters. In order to find the most optimal set of FSW plates, a finite element model using an equivalence technique was developed and validated against experimental modal analysis (EMA) of the nine specimen sets and finite element analysis (FEA). Three types of modelling were engaged in this study: rigid body element Type 2 (RBE2), bar element (CBAR) and spot weld element connector (CWELD). The CBAR element was chosen to represent the weld model for FSW joints due to its accurate prediction of mode shapes, and because it contains an updating parameter for weld modelling, in contrast to the other weld models. Model updating was performed to improve the correlation between EMA and FEA; before proceeding to updating, a sensitivity analysis was carried out to select the most sensitive updating parameters. After model updating, the total error in the natural frequencies of the CBAR model improved significantly. Therefore, the CBAR element was selected as the most reliable element in FE to represent the FSW weld joint.

  4. SWEET-Cat update and FASMA. A new minimization procedure for stellar parameters using high-quality spectra

    Science.gov (United States)

    Andreasen, D. T.; Sousa, S. G.; Tsantaki, M.; Teixeira, G. D. C.; Mortier, A.; Santos, N. C.; Suárez-Andrés, L.; Delgado-Mena, E.; Ferreira, A. C. S.

    2017-04-01

    Context. Given the importance of the star-planet relation to our understanding of the planet formation process, the precise determination of stellar parameters for the ever increasing number of discovered extrasolar planets is of great relevance. Furthermore, precise stellar parameters are needed to fully characterize the planet properties. It is thus important to continue the efforts to determine, in the most uniform way possible, the parameters for stars with planets as new discoveries are announced. Aims: In this paper we present new precise atmospheric parameters for a sample of 50 stars with planets. The results are presented in the catalogue SWEET-Cat. Methods: Stellar atmospheric parameters and masses for the 50 stars were derived assuming local thermodynamic equilibrium and using high-resolution and high signal-to-noise spectra. The methodology is based on the measurement of equivalent widths with ARES2 for a list of iron lines. The line abundances were derived using MOOG. We then used the curve of growth analysis to determine the parameters. We implemented a new minimization procedure which significantly improves the computational time. Results: The stellar parameters for the 50 stars are presented and compared with previously determined literature values. For SWEET-Cat, we compile values for the effective temperature, surface gravity, metallicity, and stellar mass for almost all the planet host stars listed in the Extrasolar Planets Encyclopaedia. These data will be updated on a continuous basis. The data can be used for statistical studies of the star-planet correlation, and for the derivation of consistent properties for known planets. Based on observations collected at the La Silla Observatory, ESO (Chile), with FEROS/2.2 m (run 2014B/020), with UVES/VLT at the Cerro Paranal Observatory (runs ID 092.C-0695, 093.C-0219, 094.C-0367, 095.C-0324, and 096.C-0092), and with FIES/NOT at Roque de los Muchachos (Spain; runs ID 14AF14 and 53

  5. Fusion strategies for selecting multiple tuning parameters for multivariate calibration and other penalty based processes: A model updating application for pharmaceutical analysis

    International Nuclear Information System (INIS)

    Tencate, Alister J.; Kalivas, John H.; White, Alexander J.

    2016-01-01

    New multivariate calibration methods and other processes are being developed that require selection of multiple tuning-parameter (penalty) values to form the final model. With one or more tuning parameters, a single measure of model quality is not sufficient for selecting the final tuning-parameter values, and optimizing several model quality measures simultaneously is challenging. Thus, three fusion ranking methods are investigated for simultaneous assessment of multiple measures of model quality when selecting tuning-parameter values. One is a supervised learning fusion rule named sum of ranking differences (SRD). The other two are non-supervised learning processes based on the sum and median operations. The effect of the number of models evaluated on the three fusion rules is also assessed using three procedures. One procedure uses all models from all possible combinations of the tuning parameters. To reduce the number of models evaluated, an iterative process (only applicable to SRD) is applied, and thresholding a model quality measure before applying the fusion rules is also used. A near-infrared pharmaceutical data set requiring model updating is used to evaluate the three fusion rules. In this case, calibration under the primary conditions is for the active pharmaceutical ingredient (API) of tablets produced in a laboratory, and the secondary conditions for calibration updating are for tablets produced in the full batch setting. Two model updating processes requiring selection of two unique tuning-parameter values are studied: one based on Tikhonov regularization (TR) and the other a variation of partial least squares (PLS). The three fusion methods are shown to provide equivalent and acceptable results, allowing automatic selection of the tuning-parameter values. The best tuning-parameter values are selected when the model quality measures used with the fusion rules are computed for the small secondary sample set used to form the updated models.
In this model updating situation, evaluation of
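The non-supervised sum- and median-rank fusion rules can be illustrated generically. The candidate models and the three quality measures below are invented for the sketch; the only assumption carried over from the abstract is that several measures are ranked and then fused:

```python
import numpy as np

# Rows: candidate models (tuning-parameter combinations).
# Columns: model quality measures, all oriented so that lower = better
# (e.g. a prediction error, a model-complexity measure, 1 - R^2).
quality = np.array([
    [0.20, 3.0, 0.10],   # model 0
    [0.15, 5.0, 0.12],   # model 1
    [0.30, 1.0, 0.40],   # model 2
    [0.18, 2.0, 0.11],   # model 3
])

# Rank the models within each measure (0 = best for that measure).
ranks = quality.argsort(axis=0).argsort(axis=0)

sum_fused = ranks.sum(axis=1)             # sum fusion rule
median_fused = np.median(ranks, axis=1)   # median fusion rule

best_by_sum = int(sum_fused.argmin())     # model with the best overall rank
best_by_median = int(median_fused.argmin())
```

A model that is merely mediocre on every measure can still win under the sum rule, while the median rule is less sensitive to one badly behaved measure; comparing the two is part of what the paper evaluates.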

  6. Self-shielding models of MICROX-2 code: Review and updates

    International Nuclear Information System (INIS)

    Hou, J.; Choi, H.; Ivanov, K.N.

    2014-01-01

    Highlights: • The MICROX-2 code has been improved to expand its application to advanced reactors. • New fine-group cross section libraries based on ENDF/B-VII have been generated. • Resonance self-shielding and spatial self-shielding models have been improved. • The improvements were assessed by a series of benchmark calculations against MCNPX. - Abstract: MICROX-2 is a transport theory code that solves the neutron slowing-down and thermalization equations of a two-region lattice cell. The MICROX-2 code has been updated to expand its application to advanced reactor concepts and fuel cycle simulations, including generation of new fine-group cross section libraries based on ENDF/B-VII. In continuation of previous work, the MICROX-2 methods are reviewed and updated in this study, focusing on the resonance self-shielding and spatial self-shielding models used for neutron spectrum calculations. The improved self-shielding methods were assessed by a series of benchmark calculations against the Monte Carlo code, using homogeneous and heterogeneous pin cell models. The results show that the implementation of the updated self-shielding models is correct and that the accuracy of the physics calculations is improved. Compared to the existing models, the updates reduced the prediction error of the infinite multiplication factor by ∼0.1% and ∼0.2% for the homogeneous and heterogeneous pin cell models, respectively, considered in this study

  7. Using an Instructional Design Model to Teach Medical Procedures.

    Science.gov (United States)

    Cheung, Lawrence

    Educators are often tasked with developing courses and curricula that teach learners how to perform medical procedures. This instruction must provide an optimal, uniform learning experience for all learners. If not well designed, this instruction risks being unstructured, informal, variable amongst learners, or incomplete. This article shows how an instructional design model can help craft courses and curricula to optimize instruction in performing medical procedures. Educators can use this as a guide to developing their own course instruction.

  8. Procedures for parameter estimates of computational models for localized failure

    NARCIS (Netherlands)

    Iacono, C.

    2007-01-01

    In the last years, many computational models have been developed for tensile fracture in concrete. However, their reliability is related to the correct estimate of the model parameters, not all directly measurable during laboratory tests. Hence, the development of inverse procedures is needed, that

  9. The 2014 update to the National Seismic Hazard Model in California

    Science.gov (United States)

    Powers, Peter; Field, Edward H.

    2015-01-01

    The 2014 update to the U. S. Geological Survey National Seismic Hazard Model in California introduces a new earthquake rate model and new ground motion models (GMMs) that give rise to numerous changes to seismic hazard throughout the state. The updated earthquake rate model is the third version of the Uniform California Earthquake Rupture Forecast (UCERF3), wherein the rates of all ruptures are determined via a self-consistent inverse methodology. This approach accommodates multifault ruptures and reduces the overprediction of moderate earthquake rates exhibited by the previous model (UCERF2). UCERF3 introduces new faults, changes to slip or moment rates on existing faults, and adaptively smoothed gridded seismicity source models, all of which contribute to significant changes in hazard. New GMMs increase ground motion near large strike-slip faults and reduce hazard over dip-slip faults. The addition of very large strike-slip ruptures and decreased reverse fault rupture rates in UCERF3 further enhances these effects.

  10. The Updated BaSTI Stellar Evolution Models and Isochrones. I. Solar-scaled Calculations

    DEFF Research Database (Denmark)

    Hidalgo, Sebastian L.; Pietrinferni, Adriano; Cassisi, Santi

    2018-01-01

    We present an updated release of the BaSTI (a Bag of Stellar Tracks and Isochrones) stellar model and isochrone library for a solar-scaled heavy element distribution. The main input physics that have been changed from the previous BaSTI release include the solar metal mixture, electron conduction...

  11. Towards an integrated workflow for structural reservoir model updating and history matching

    NARCIS (Netherlands)

    Leeuwenburgh, O.; Peters, E.; Wilschut, F.

    2011-01-01

    A history matching workflow, as typically used for updating of petrophysical reservoir model properties, is modified to include structural parameters including the top reservoir and several fault properties: position, slope, throw and transmissibility. A simple 2D synthetic oil reservoir produced by

  12. Evaluation of Lower East Fork Poplar Creek Mercury Sources - Model Update

    Energy Technology Data Exchange (ETDEWEB)

    Ketelle, Richard [East Tennessee Technology Park (ETTP), Oak Ridge, TN (United States); Brandt, Craig C. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Peterson, Mark J. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Bevelhimer, Mark S. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Watson, David B. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Brooks, Scott C. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Mayes, Melanie [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); DeRolph, Christopher R. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Dickson, Johnbull O. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Olsen, Todd A. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2017-08-01

    The purpose of this report is to assess new data that has become available and provide an update to the evaluations and modeling presented in the Oak Ridge National Laboratory (ORNL) Technical Manuscript Evaluation of lower East Fork Poplar Creek (LEFPC) Mercury Sources (Watson et al., 2016). Primary sources of field and laboratory data for this update include multiple US Department of Energy (DOE) programs including Environmental Management (EM; e.g., Biological Monitoring and Abatement Program, Mercury Remediation Technology Development [TD], and Applied Field Research Initiative), Office of Science (Mercury Science Focus Areas [SFA] project), and the Y-12 National Security Complex (Y-12) Compliance Department.

  13. Updating known distribution models for forecasting climate change impact on endangered species.

    Science.gov (United States)

    Muñoz, Antonio-Román; Márquez, Ana Luz; Real, Raimundo

    2013-01-01

    To plan endangered species conservation and to design adequate management programmes, it is necessary to predict species' distributional response to climate change, especially under the current situation of rapid change. However, these predictions are customarily made by relating the distribution of the species de novo to climatic conditions, with no regard for previously available knowledge about the factors affecting the species' distribution. We propose to take advantage of known species distribution models, updating them with the variables yielded by climatic models before projecting them into the future. To exemplify our proposal, the availability of suitable habitat across Spain for the endangered Bonelli's Eagle (Aquila fasciata) was modelled by updating a pre-existing model based on current climate and topography to a combination of different general circulation models and Special Report on Emissions Scenarios. Our results suggest that the main threat for this endangered species is not climate change, since all forecasting models show that its distribution will be maintained and increased in mainland Spain throughout the 21st century. We stress the importance of linking conservation biology with distribution modelling by updating existing models, which are frequently available for endangered species and consider all the known factors conditioning the species' distribution, instead of building new models based on climate-change variables only.

  14. Model checking as an aid to procedure design

    International Nuclear Information System (INIS)

    Zhang, Wenhu

    2001-01-01

    The OECD Halden Reactor Project has been actively working on computer assisted operating procedures for many years. The objective of the research has been to provide computerised assistance for procedure design, verification and validation, implementation and maintenance. For the verification purpose, the application of formal methods has been considered in several reports. The recent formal verification activity conducted at the Halden Project is based on using model checking to the verification of procedures. This report presents verification approaches based on different model checking techniques and tools for the formalization and verification of operating procedures. Possible problems and relative merits of the different approaches are discussed. A case study of one of the approaches is presented to show the practical application of formal verification. Application of formal verification in the traditional procedure design process can reduce the human resources involved in reviews and simulations, and hence reduce the cost of verification and validation. A discussion of the integration of the formal verification with the traditional procedure design process is given at the end of this report. (Author)

  15. Near-Source Modeling Updates: Building Downwash & Near-Road

    Science.gov (United States)

    The presentation describes recent research efforts in near-source model development focusing on building downwash and near-road barriers. The building downwash section summarizes a recent wind tunnel study, ongoing computational fluid dynamics simulations and efforts to improve ...

  16. Model Updating and Uncertainty Management for Aircraft Prognostic Systems Project

    Data.gov (United States)

    National Aeronautics and Space Administration — This proposal addresses the integration of physics-based damage propagation models with diagnostic measures of current state of health in a mathematically rigorous...

  17. Model and Variable Selection Procedures for Semiparametric Time Series Regression

    Directory of Open Access Journals (Sweden)

    Risa Kato

    2009-01-01

    Full Text Available Semiparametric regression models are very useful for time series analysis. They facilitate the detection of features resulting from external interventions. The complexity of semiparametric models poses new challenges for issues of nonparametric and parametric inference and model selection that frequently arise from time series data analysis. In this paper, we propose penalized least squares estimators which can simultaneously select significant variables and estimate unknown parameters. An innovative class of variable selection procedure is proposed to select significant variables and basis functions in a semiparametric model. The asymptotic normality of the resulting estimators is established. Information criteria for model selection are also proposed. We illustrate the effectiveness of the proposed procedures with numerical simulations.
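A penalized least squares estimator of the kind described, whose penalty sets the coefficients of insignificant variables exactly to zero, can be sketched with an L1 penalty solved by cyclic coordinate descent. This is a generic illustration, not the authors' estimator (which selects variables and basis functions in a semiparametric model with its own penalty form); the simulated data are invented:

```python
import numpy as np

def penalized_ls(X, y, lam, n_iter=200):
    """Minimise 0.5*||y - X b||^2 + lam*||b||_1 by cyclic coordinate
    descent; the soft-threshold update zeroes weak coefficients exactly,
    so estimation and variable selection happen simultaneously."""
    n, p = X.shape
    beta = np.zeros(p)
    col_ss = (X ** 2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(p):
            # Partial residual with feature j removed from the fit.
            r = y - X @ beta + X[:, j] * beta[j]
            rho = X[:, j] @ r
            beta[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_ss[j]
    return beta

rng = np.random.default_rng(1)
X = rng.standard_normal((100, 4))
# Only the first two predictors are truly in the model.
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + 0.1 * rng.standard_normal(100)

beta = penalized_ls(X, y, lam=60.0)   # noise predictors are driven to zero
```

Raising `lam` shrinks all coefficients and zeroes more of them, which is exactly the simultaneous selection-and-estimation behaviour the abstract describes; information criteria then choose among the resulting fits.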

  18. Disentangling density-dependent dynamics using full annual cycle models and Bayesian model weight updating

    Science.gov (United States)

    Robinson, Orin J.; McGowan, Conor P.; Devers, Patrick K.

    2017-01-01

    Density dependence regulates populations of many species across all taxonomic groups. Understanding density dependence is vital for predicting the effects of climate, habitat loss and/or management actions on wild populations. Migratory species likely experience seasonal changes in the relative influence of density dependence on population processes such as survival and recruitment throughout the annual cycle. These effects must be accounted for when characterizing migratory populations via population models. To evaluate effects of density on seasonal survival and recruitment of a migratory species, we used an existing full annual cycle model framework for American black ducks Anas rubripes, and tested different density effects (including no effects) on survival and recruitment. We then used a Bayesian model weight updating routine to determine which population model best fit observed breeding population survey data between 1990 and 2014. The models that best fit the survey data suggested that survival and recruitment were affected by density dependence and that density effects were stronger on adult survival during the breeding season than during the non-breeding season. Analysis also suggests that regulation of survival and recruitment by density varied over time. Our results showed that different characterizations of density regulations changed every 8–12 years (three times in the 25-year period) for our population. Synthesis and applications. Using a full annual cycle modelling framework and model weighting routine will be helpful in evaluating density dependence for migratory species in both the short and long term. We used this method to disentangle the seasonal effects of density on the continental American black duck population which will allow managers to better evaluate the effects of habitat loss and potential habitat management actions throughout the annual cycle. The method here may allow researchers to hone in on the proper form and/or strength of
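The Bayesian model weight updating routine can be sketched generically: each candidate population model's weight is multiplied by the likelihood of the newest survey observation under that model's prediction, then renormalised. All numbers below (model predictions, observations, observation error) are invented for illustration:

```python
import math

def update_weights(weights, predictions, observed, sd):
    """One Bayesian weight update: posterior weight is proportional to
    prior weight times the normal likelihood of the observed survey value
    under each candidate model's prediction."""
    likes = [math.exp(-0.5 * ((observed - p) / sd) ** 2) for p in predictions]
    posterior = [w * l for w, l in zip(weights, likes)]
    total = sum(posterior)
    return [p / total for p in posterior]

# Three hypothetical density-dependence models with equal prior weight.
weights = [1 / 3, 1 / 3, 1 / 3]

# Each survey year: (per-model predicted breeding population, observed value).
surveys = [([520.0, 610.0, 700.0], 600.0),
           ([530.0, 615.0, 690.0], 620.0)]

for predictions, observed in surveys:
    weights = update_weights(weights, predictions, observed, sd=50.0)

best_model = weights.index(max(weights))   # the model tracking the surveys best
```

Run over a 25-year survey record, the trajectory of these weights is what reveals shifts in which density-dependence structure best explains the population, as the abstract reports.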

  19. A sow replacement model using Bayesian updating in a three-level hierarchic Markov process. II. Optimization model

    DEFF Research Database (Denmark)

    Kristensen, Anders Ringgaard; Søllested, Thomas Algot

    2004-01-01

    Recent methodological improvements in replacement models comprising multi-level hierarchic Markov processes and Bayesian updating have hardly been implemented in any replacement model, and the aim of this study is to present a sow replacement model that really uses these methodological improvements. The biological model of the replacement model is described in a previous paper and in this paper the optimization model is described. The model is developed as a prototype for use under practical conditions. The application of the model is demonstrated using data from two commercial Danish sow herds. It is concluded that the Bayesian updating technique and the hierarchical structure decrease the size of the state space dramatically. Since parameter estimates vary considerably among herds it is concluded that decision support concerning sow replacement only makes sense with parameters...

  20. Procedural Skills Education – Colonoscopy as a Model

    Directory of Open Access Journals (Sweden)

    Maitreyi Raman

    2008-01-01

    Full Text Available Traditionally, surgical and procedural apprenticeship has been an assumed activity of students, without a formal educational context. With increasing barriers to patient and operating room access, such as shorter work weeks for residents and operating room and endoscopy time at a premium, alternate strategies for maximizing procedural skill development are being considered. Recently, the traditional surgical apprenticeship model has been challenged, with greater emphasis on the need for surgical and procedural skills training to be more transparent and for alternatives to patient-based training to be considered. Colonoscopy performance is a complex psychomotor skill requiring practitioners to integrate multiple sensory inputs, and it involves higher cortical centres for optimal performance. Colonoscopy skills involve mastery in the cognitive, technical and process domains. In the present review, we propose a model for teaching colonoscopy to the novice trainee based on educational theory.

  1. Status Update: Modeling Energy Balance in NIF Hohlraums

    Energy Technology Data Exchange (ETDEWEB)

    Jones, O. S. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2015-07-22

    We have developed a standardized methodology to model hohlraum drive in NIF experiments. We compare simulation results to experiments by 1) comparing hohlraum x-ray fluxes and 2) comparing capsule metrics, such as bang times. Long-pulse, high gas-fill hohlraums require a 20-28% reduction in simulated drive and inclusion of ~15% backscatter to match experiment through (1) and (2). Short-pulse, low-fill or near-vacuum hohlraums require a 10% reduction in simulated drive to match experiment through (2); no reduction through (1). Ongoing work focuses on physical model modifications to improve these matches.

  2. Cancer Survivorship, Models, and Care Plans: A Status Update.

    Science.gov (United States)

    Powel, Lorrie L; Seibert, Stephen M

    2017-03-01

    This article provides a synopsis of the status of cancer survivorship in the United States. It highlights the challenges of survivorship care as the number of cancer survivors has steadily grown over the 40 years since the signing of the National Cancer Act in 1971. Also included is an overview of various models of survivorship care plans (SCPs), facilitators and barriers to SCP use, their impact on patient outcomes, and implications for clinical practice and research. This article provides a broad overview of the cancer survivorship, including models of care and survivorship care plans. Copyright © 2016 Elsevier Inc. All rights reserved.

  3. A procedure for automatic updating of total cross section libraries of the Mercure IV code for nuclear safeguard applications

    International Nuclear Information System (INIS)

    Vicini, C.; Amici, S.

    1991-01-01

    The increasing utilization of Monte Carlo codes for the simulation of the measurement techniques used in the field of Nuclear Safeguards, together with the high performance required (error < 1%), calls for the implementation of libraries with updated nuclear data. MERCURE IV is a computer code especially developed for the simulation of non-destructive measurement techniques. In addition to an analysis of the MERCURE IV code features, this work presents an algorithm developed for generating the library of total gamma cross sections used by the code

  4. Uncertainty quantification of voice signal production mechanical model and experimental updating

    OpenAIRE

    Cataldo, Edson; Soize, Christian; Sampaio, Rubens

    2013-01-01

    International audience; The aim of this paper is to analyze the uncertainty quantification in a voice production mechanical model and to update the probability density function corresponding to the tension parameter using the Bayes method and experimental data. Three parameters are considered uncertain in the voice production mechanical model used: the tension parameter, the neutral glottal area and the subglottal pressure. The tension parameter of the vocal folds is mainly responsible for the c...

  5. iTree-Hydro: Snow hydrology update for the urban forest hydrology model

    Science.gov (United States)

    Yang Yang; Theodore A. Endreny; David J. Nowak

    2011-01-01

    This article presents snow hydrology updates made to iTree-Hydro, previously called the Urban Forest Effects—Hydrology model. iTree-Hydro Version 1 was a warm climate model developed by the USDA Forest Service to provide a process-based planning tool with robust water quantity and quality predictions given data limitations common to most urban areas. Cold climate...

  6. Update on Parametric Cost Models for Space Telescopes

    Science.gov (United States)

    Stahl. H. Philip; Henrichs, Todd; Luedtke, Alexander; West, Miranda

    2011-01-01

    Since the June 2010 Astronomy Conference, an independent review of our cost data base discovered some inaccuracies and inconsistencies which can modify our previously reported results. This paper will review changes to the data base, our confidence in those changes and their effect on various parametric cost models

  7. Updates to Blast Injury Criteria Models for Nuclear Casualty Estimation

    Science.gov (United States)

    2015-12-01

    based Casualty Assessment (ORCA) software package contains models which track penetrating fragments and determine the likelihood of injury caused by the...pedestrian and bicycle accidents," The Institute of Traffic Accident Investigators. Proceedings of the 5th International Conference: 17th and 18th

  8. An updated summary of MATHEW/ADPIC model evaluation studies

    Energy Technology Data Exchange (ETDEWEB)

    Foster, K.T.; Dickerson, M.H.

    1990-05-01

    This paper summarizes the major model evaluation studies conducted for the MATHEW/ADPIC atmospheric transport and diffusion models used by the US Department of Energy's Atmospheric Release Advisory Capability. These studies have taken place over the last 15 years and involve field tracer releases influenced by a variety of meteorological and topographical conditions. Neutrally buoyant tracers released both as surface and elevated point sources, as well as material dispersed by explosive, thermally buoyant release mechanisms have been studied. Results from these studies show that the MATHEW/ADPIC models estimate the tracer air concentrations to within a factor of two of the measured values 20% to 50% of the time, and within a factor of five of the measurements 35% to 85% of the time depending on the complexity of the meteorology and terrain, and the release height of the tracer. Comparisons of model estimates to peak downwind deposition and air concentration measurements from explosive releases are shown to be generally within a factor of two to three. 24 refs., 14 figs., 3 tabs.
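The factor-of-two and factor-of-five agreement statistics quoted above are straightforward to compute for any paired set of predictions and measurements; the concentration values below are invented for illustration:

```python
import numpy as np

def fraction_within_factor(predicted, measured, factor):
    """Fraction of model predictions falling within a multiplicative
    factor of the measurements, the skill score used in tracer studies."""
    ratio = np.asarray(predicted, dtype=float) / np.asarray(measured, dtype=float)
    return float(np.mean((ratio >= 1.0 / factor) & (ratio <= factor)))

measured = np.array([1.0, 2.0, 5.0, 10.0, 0.5])     # tracer air concentrations
predicted = np.array([1.8, 1.1, 30.0, 9.0, 0.12])   # model estimates

fac2 = fraction_within_factor(predicted, measured, 2.0)
fac5 = fraction_within_factor(predicted, measured, 5.0)
```

Reporting both factors, as the paper does, separates gross misses from the routine scatter expected of transport-and-diffusion models in complex terrain.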

  9. An updated summary of MATHEW/ADPIC model evaluation studies

    International Nuclear Information System (INIS)

    Foster, K.T.; Dickerson, M.H.

    1990-05-01

    This paper summarizes the major model evaluation studies conducted for the MATHEW/ADPIC atmospheric transport and diffusion models used by the US Department of Energy's Atmospheric Release Advisory Capability. These studies have taken place over the last 15 years and involve field tracer releases influenced by a variety of meteorological and topographical conditions. Neutrally buoyant tracers released both as surface and elevated point sources, as well as material dispersed by explosive, thermally buoyant release mechanisms have been studied. Results from these studies show that the MATHEW/ADPIC models estimate the tracer air concentrations to within a factor of two of the measured values 20% to 50% of the time, and within a factor of five of the measurements 35% to 85% of the time depending on the complexity of the meteorology and terrain, and the release height of the tracer. Comparisons of model estimates to peak downwind deposition and air concentration measurements from explosive releases are shown to be generally within a factor of two to three. 24 refs., 14 figs., 3 tabs

  10. General equilibrium basic needs policy model, (updating part).

    OpenAIRE

    Kouwenaar A

    1985-01-01

    ILO pub-WEP pub-PREALC pub. Working paper, econometric model for the assessment of structural change affecting development planning for basic needs satisfaction in Ecuador - considers population growth, family size (households), labour force participation, labour supply, wages, income distribution, profit rates, capital ownership, etc.; examines nutrition, education and health as factors influencing productivity. Diagram, graph, references, statistical tables.

  11. Bacteriophages: update on application as models for viruses in water

    African Journals Online (AJOL)

    In view of these features, phages are particularly useful as models to assess the behaviour and survival of enteric viruses in the environment, and as surrogates to assess the resistance of human viruses to water treatment and disinfection processes. Since there is no direct correlation between numbers of phages and ...

  12. Recent Updates to the GEOS-5 Linear Model

    Science.gov (United States)

    Holdaway, Dan; Kim, Jong G.; Errico, Ron; Gelaro, Ronald; Mahajan, Rahul

    2014-01-01

    The Global Modeling and Assimilation Office (GMAO) is close to having a working 4DVAR system and has developed a linearized version of GEOS-5. This talk outlines a series of improvements made to the linearized dynamics, physics and trajectory. Of particular interest is the development of linearized cloud microphysics, which provides the framework for 'all-sky' data assimilation.

  13. Dental caries: an updated medical model of risk assessment.

    Science.gov (United States)

    Kutsch, V Kim

    2014-04-01

    Dental caries is a transmissible, complex biofilm disease that creates prolonged periods of low pH in the mouth, resulting in a net mineral loss from the teeth. Historically, the disease model for dental caries consisted of mutans streptococci and Lactobacillus species, and the dental profession focused on restoring the lesions/damage from the disease by using a surgical model. The current recommendation is to implement a risk-assessment-based medical model called CAMBRA (caries management by risk assessment) to diagnose and treat dental caries. Unfortunately, many of the suggestions of CAMBRA have been overly complicated and confusing for clinicians. The risk of caries, however, is usually related to just a few common factors, and these factors result in common patterns of disease. This article examines the biofilm model of dental caries, identifies the common disease patterns, and discusses their targeted therapeutic strategies to make CAMBRA more easily adaptable for the privately practicing professional. Copyright © 2014 Editorial Council for the Journal of Prosthetic Dentistry. Published by Elsevier Inc. All rights reserved.

  14. Development of Prototype Driver Models for Highway Design: Research Update

    Science.gov (United States)

    1999-06-01

    One of the high-priority research areas of the Federal Highway Administration (FHWA) is the development of the Interactive Highway Safety Design Model (IHSDM). The goal of the IHSDM research program is to develop a systematic approach that will allow...

  15. Real-time reservoir geological model updating using the hybrid EnKF and geostatistical technique

    Energy Technology Data Exchange (ETDEWEB)

    Li, H.; Chen, S.; Yang, D. [Regina Univ., SK (Canada). Petroleum Technology Research Centre

    2008-07-01

    Reservoir simulation plays an important role in modern reservoir management. Multiple geological models are needed in order to analyze the uncertainty of a given reservoir development scenario. Ideally, dynamic data should be incorporated into a reservoir geological model. This can be done by using history matching and tuning the model to match the past performance of reservoir history. This study proposed an assisted history matching technique to accelerate and improve the matching process. The Ensemble Kalman Filter (EnKF) technique, which is an efficient assisted history matching method, was integrated with a conditional geostatistical simulation technique to dynamically update reservoir geological models. The updated models were constrained to dynamic data, such as reservoir pressure and fluid saturations, and approaches geologically realistic at each time step by using the EnKF technique. The new technique was successfully applied in a heterogeneous synthetic reservoir. The uncertainty of the reservoir characterization was significantly reduced. More accurate forecasts were obtained from the updated models. 3 refs., 2 figs.
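The EnKF analysis step at the heart of such assisted history matching can be sketched for a generic state vector. The toy one-dimensional "reservoir pressure" example below is purely illustrative; in practice the state would hold gridded permeabilities, pressures and saturations, with the geostatistical conditioning applied afterwards:

```python
import numpy as np

def enkf_update(X, y, H, R, rng):
    """Stochastic EnKF analysis step.
    X: (n_state, n_ens) prior ensemble; y: observation vector;
    H: linear observation operator; R: observation error covariance."""
    n_ens = X.shape[1]
    Y = H @ X                                   # predicted observations
    A = X - X.mean(axis=1, keepdims=True)       # state anomalies
    B = Y - Y.mean(axis=1, keepdims=True)       # observation anomalies
    Pxy = A @ B.T / (n_ens - 1)                 # state-obs cross covariance
    Pyy = B @ B.T / (n_ens - 1) + R             # innovation covariance
    K = np.linalg.solve(Pyy, Pxy.T).T           # Kalman gain (Pyy symmetric)
    y_pert = y[:, None] + rng.multivariate_normal(
        np.zeros(len(y)), R, size=n_ens).T      # perturbed observations
    return X + K @ (y_pert - Y)

rng = np.random.default_rng(7)
prior = rng.normal(30.0, 2.0, size=(1, 500))    # prior "pressure" ensemble
H = np.array([[1.0]])                           # pressure observed directly
R = np.array([[1.0]])
obs = np.array([33.0])                          # measured reservoir pressure

posterior = enkf_update(prior, obs, H, R, rng)  # mean pulled toward obs,
                                                # spread (uncertainty) reduced
```

Each assimilation of new production data repeats this step, which is how the updated models stay constrained to dynamic data while the ensemble spread tracks the remaining uncertainty.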

  16. Communication and Procedural Models of the E-Commerce Systems

    Directory of Open Access Journals (Sweden)

    Petr SUCHÁNEK

    2009-06-01

    Full Text Available E-commerce systems have become a standard interface between sellers (or suppliers) and customers. A basic condition for an e-commerce system to be efficient is the correct definition and description of all internal and external processes, all of which are targeted at customers' needs and requirements. The optimal and most exact way to find the best structure of an e-commerce system and its processes within a company is modelling and simulation. In this article the author shows a basic model of communication between customers and sellers, connected with customer feedback, and procedural models of e-commerce systems in terms of e-shops. The procedural model was made with the aid of the definition of SOA.

  17. Using radar altimetry to update a large-scale hydrological model of the Brahmaputra river basin

    DEFF Research Database (Denmark)

    Finsen, F.; Milzow, Christian; Smith, R.

    2014-01-01

    of the Brahmaputra is excellent (17 high-quality virtual stations from ERS-2, 6 from Topex and 10 from Envisat are available for the Brahmaputra). In this study, altimetry data are used to update a large-scale Budyko-type hydrological model of the Brahmaputra river basin in real time. Altimetry measurements improved model performance considerably: the Nash-Sutcliffe model efficiency increased from 0.77 to 0.83. Real-time river basin modelling using radar altimetry has the potential to improve the predictive capability of large-scale hydrological models elsewhere on the planet.
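
    The Nash-Sutcliffe efficiency quoted above (0.77 improving to 0.83) is a standard skill score for hydrological models: 1 is a perfect fit, 0 means the model predicts no better than the observed mean. A minimal implementation, with made-up discharge values:

```python
import numpy as np

def nash_sutcliffe(observed, simulated):
    """Nash-Sutcliffe model efficiency (NSE)."""
    observed = np.asarray(observed, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    return 1.0 - np.sum((observed - simulated) ** 2) / np.sum((observed - observed.mean()) ** 2)

obs = np.array([100., 250., 300., 180., 120.])   # e.g. discharge in m^3/s (invented)
sim = np.array([110., 240., 280., 200., 130.])
nse = nash_sutcliffe(obs, sim)   # close to 1 for a good simulation
```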

  18. Improved Approximation of Interactive Dynamic Influence Diagrams Using Discriminative Model Updates

    DEFF Research Database (Denmark)

    Doshi, Prashant; Zeng, Yifeng

    2009-01-01

    Interactive dynamic influence diagrams (I-DIDs) are graphical models for sequential decision making in uncertain settings shared by other agents. Algorithms for solving I-DIDs face the challenge of an exponentially growing space of candidate models ascribed to other agents over time. We formalize the concept of a minimal model set, which facilitates qualitative comparisons between different approximation techniques. We then present a new approximation technique that minimizes the space of candidate models by discriminating between model updates. We empirically demonstrate that our approach improves...

  19. Updating sea spray aerosol emissions in the Community Multiscale Air Quality (CMAQ) model version 5.0.2

    Data.gov (United States)

    U.S. Environmental Protection Agency — The uploaded data consists of the BRACE Na aerosol observations paired with CMAQ model output, the updated model's parameterization of sea salt aerosol emission size...

  20. A verification procedure for MSC/NASTRAN Finite Element Models

    Science.gov (United States)

    Stockwell, Alan E.

    1995-01-01

    Finite Element Models (FEMs) are used in the design and analysis of aircraft to mathematically describe the airframe structure for such diverse tasks as flutter analysis and actively controlled landing gear design. FEMs are used to model the entire airplane as well as airframe components. The purpose of this document is to describe recommended methods for verifying the quality of the FEMs and to specify a step-by-step procedure for implementing the methods.
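
    The abstract does not list the individual checks, but typical FEM verification steps include confirming stiffness-matrix symmetry and verifying that rigid-body motion produces no elastic forces (a strain-free motion check). A toy sketch on a free-free two-spring chain, standing in for an airframe component model (all values invented):

```python
import numpy as np

# Global stiffness matrix for a 3-node chain of two identical springs (k = 1000 N/m),
# unconstrained ("free-free"), as a stand-in for an airframe-component FEM.
k = 1000.0
K = np.array([[  k,   -k,  0.],
              [ -k,  2*k, -k],
              [ 0.,   -k,  k]])

# Check 1: the stiffness matrix must be symmetric.
assert np.allclose(K, K.T)

# Check 2: a rigid-body translation must produce zero elastic forces.
rigid = np.ones(3)
assert np.allclose(K @ rigid, 0.0)

# Check 3: a free-free 1D model should have exactly one zero-energy (rigid-body) mode.
eigvals = np.linalg.eigvalsh(K)
n_rigid_modes = int(np.sum(np.abs(eigvals) < 1e-6 * k))
```

    An unexpected count of zero-energy modes is a classic symptom of unconnected nodes or missing constraints in a model.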

  1. Updates to the Demographic and Spatial Allocation Models to Produce Integrated Climate and Land Use Scenarios (ICLUS) (Final Report, Version 2)

    Science.gov (United States)

    EPA announced the availability of the final report, Updates to the Demographic and Spatial Allocation Models to Produce Integrated Climate and Land Use Scenarios (ICLUS) (Version 2). This update furthered land change modeling by providing nationwide housing developmen...

  2. A sow replacement model using Bayesian updating in a three-level hierarchic Markov process. II. Optimization model

    DEFF Research Database (Denmark)

    Kristensen, Anders Ringgaard; Søllested, Thomas Algot

    2004-01-01

    Recent methodological improvements in replacement models comprising multi-level hierarchical Markov processes and Bayesian updating have hardly been implemented in any replacement model, and the aim of this study is to present a sow replacement model that really uses these methodological improvements. The biological model of the replacement model is described in a previous paper and in this paper the optimization model is described. The model is developed as a prototype for use under practical conditions. The application of the model is demonstrated using data from two commercial Danish sow...

  3. Cycle life versus depth of discharge update on modeling studies

    Science.gov (United States)

    Thaller, Lawrence H.

    1994-02-01

    The topics are presented in viewgraph form and cycle life vs. depth of discharge data for the following are presented: data as of three years ago; Air Force/Crane-Fuhr-Smithrick; Ken Fuhr's Data; Air Force/Crane Data; Eagle-Picher Data; Steve Schiffer's Data; John Smithrick's Data; temperature effects; and E-P, Yardney, and Hughes 26% Data. Other topics covered include the following: LeRC cycling tests of Yardney Space Station Cells; general statements; general observations; two different models of cycle life vs. depth of discharge; and other degradation modes.

  4. Update 1996: Blood collection and handling procedures for assessment of plasminogen activators and inhibitors (Leiden Fibrinolysis Workshop)

    NARCIS (Netherlands)

    Kluft, C.; Meijer, P.

    1996-01-01

    Procedures of blood collection and handling can be different for the various variables in fibrinolysis. Some of them may require adaptation to the progress in assay methodology and in biochemical and physiological knowledge. During the Leiden Fibrinolysis Workshop 6 in 1996, the procedures described

  5. An innovative model for teaching and learning clinical procedures.

    Science.gov (United States)

    Kneebone, Roger; Kidd, Jane; Nestel, Debra; Asvall, Suzanne; Paraskeva, Paraskevas; Darzi, Ara

    2002-07-01

    Performing a clinical procedure requires the integration of technical clinical skills with effective communication skills. However, these skills are often taught separately. Objective: to explore the feasibility and benefits of a new conceptual model for integrated skills teaching. Design: a qualitative observation and interview-based study of undergraduate medical students. Medical students performed technical and communication skills in realistic clinical scenarios (urinary catheterization and wound closure), using latex models connected to simulated patients (SPs). Procedures were observed, video-recorded and assessed by tutors from an adjoining room. Students received immediate feedback from tutors and SPs, before engaging in a process of individual feedback through private review of their videotapes. Group interviews explored the responses of students, SPs and tutors. Data were analysed using standard qualitative techniques. Fifty-one undergraduate students were recruited from the Faculty of Medicine, Imperial College, London. The scenarios provided a realistic simulation of two common clinical situations and proved feasible in terms of time, facilities and resources within this institution. Students found the opportunity to integrate communication and technical skills valuable, challenging and an appropriate learning experience. Immediate feedback was especially highly valued. Some students found difficulty integrating technical and communication skills, but benefited from conducting two procedures in the same session. The integrated model was feasible and was perceived to be valuable. Benefits include the opportunity to integrate, within a safe environment, skills which are often taught separately. Promoting reflective practice may enable the successful transfer of these integrated skills to other procedures.

  6. Continuous updating of a coupled reservoir-seismic model using an ensemble Kalman filter technique

    Energy Technology Data Exchange (ETDEWEB)

    Skjervheim, Jan-Arild

    2007-07-01

    This work presents the development of a method based on the ensemble Kalman filter (EnKF) for continuous reservoir model updating with respect to the combination of production data, 3D seismic data and time-lapse seismic data. The reservoir-seismic model system consists of a commercial reservoir simulator coupled to existing rock physics and seismic modelling software. The EnKF provides an ideal setting for real-time updating and prediction in reservoir simulation models, and has been applied to synthetic models and real field cases from the North Sea. In the EnKF method, static parameters such as porosity and permeability, and dynamic variables, such as fluid saturations and pressure, are updated in the reservoir model each time data become available. In addition, we have updated a lithology parameter (clay ratio) which is linked to the rock physics model, and the fracture density in a synthetic fractured reservoir. In the EnKF experiments we have assimilated various types of production and seismic data. Gas oil ratio (GOR), water cut (WCT) and bottom-hole pressure (BHP) are used in the data assimilation. Furthermore, inverted seismic data, such as Poisson's ratio and acoustic impedance, and seismic waveform data have been assimilated. In reservoir applications seismic data may introduce a large amount of data into the assimilation schemes, and the computational time becomes expensive. In this project efficient EnKF schemes are used to handle such large datasets, where challenging aspects such as the inversion of a large covariance matrix and potential loss of rank are considered. Time-lapse seismic data may be difficult to assimilate since they are time-difference data, i.e. data which are related to the model variables at two or more time instances. Here we have presented a general sequential Bayesian formulation which incorporates time-difference data, and we show that the posterior distribution includes both a filter and a smoother solution. Further, we show

  7. Box models for the evolution of atmospheric oxygen: an update.

    Science.gov (United States)

    Kasting, J F

    1991-01-01

    A simple 3-box model of the atmosphere/ocean system is used to describe the various stages in the evolution of atmospheric oxygen. In Stage I, which probably lasted until redbeds began to form about 2.0 Ga ago, the Earth's surface environment was generally devoid of free O2, except possibly in localized regions of high productivity in the surface ocean. In Stage II, which may have lasted for less than 150 Ma, the atmosphere and surface ocean were oxidizing, while the deep ocean remained anoxic. In Stage III, which commenced with the disappearance of banded iron formations around 1.85 Ga ago and has lasted until the present, all three surface reservoirs contained appreciable amounts of free O2. Recent and not-so-recent controversies regarding the abundance of oxygen in the Archean atmosphere are identified and discussed. The rate of O2 increase during the Middle and Late Proterozoic is identified as another outstanding question.
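
    A three-box exchange model of the general kind described (atmosphere, surface ocean, deep ocean) can be integrated with a simple forward-Euler loop. The sketch below is purely illustrative: the reservoirs exchange O2 by first-order fluxes, with a constant net source to the atmosphere and a consumption sink in the deep ocean. All rate constants and units are invented and are not Kasting's values.

```python
def run_box_model(n_steps=50000, dt=1.0):
    """Forward-Euler integration of a generic 3-box O2 exchange model."""
    atm, surf, deep = 1.0, 0.1, 0.0   # O2 inventories (arbitrary units)
    source = 0.002                    # net O2 source to the atmosphere
    k_as, k_sd = 0.01, 0.005          # atmosphere<->surface and surface<->deep exchange rates
    k_sink = 0.02                     # deep-ocean O2 consumption rate
    for _ in range(n_steps):
        f_as = k_as * (atm - surf)    # flux: atmosphere -> surface ocean
        f_sd = k_sd * (surf - deep)   # flux: surface ocean -> deep ocean
        atm += dt * (source - f_as)
        surf += dt * (f_as - f_sd)
        deep += dt * (f_sd - k_sink * deep)
    return atm, surf, deep

atm, surf, deep = run_box_model()
```

    At steady state the source balances the deep-ocean sink, giving an O2 gradient from atmosphere down to deep ocean; qualitatively this mirrors the Stage II situation of an oxidizing atmosphere and surface ocean over an O2-poor deep ocean.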

  8. Off-Highway Gasoline Consumption Estimation Models Used in the Federal Highway Administration Attribution Process: 2008 Updates

    Energy Technology Data Exchange (ETDEWEB)

    Hwang, Ho-Ling [ORNL; Davis, Stacy Cagle [ORNL

    2009-12-01

    This report is designed to document the analysis process and estimation models currently used by the Federal Highway Administration (FHWA) to estimate off-highway gasoline consumption and public-sector fuel consumption. An overview of the entire FHWA attribution process is provided along with specifics related to the latest update (2008) of the Off-Highway Gasoline Use Model and the Public Use of Gasoline Model. The Off-Highway Gasoline Use Model is made up of five individual modules, one for each of the off-highway categories: agricultural, industrial and commercial, construction, aviation, and marine. This 2008 update of the off-highway models was the second major update (the first was conducted during 2002-2003) after they were originally developed in the mid-1990s. The agricultural model methodology, specifically, underwent a significant revision because of changes in data availability since 2003. Some revision to the model was necessary due to the removal of certain data elements used in the original estimation method. The revised agricultural model also made use of some newly available information published by the data source agency in recent years. The other model methodologies were not drastically changed, though many data elements were updated to improve the accuracy of these models. Note that components in the Public Use of Gasoline Model were not updated in 2008. A major challenge in updating the estimation methods applied by the public-use model is that they would have to rely on significant new data collection efforts. In addition, due to resource limitations, several components of the models (both off-highway and public-use models) that utilized regression modeling approaches were not recalibrated under the 2008 study. An investigation of the Environmental Protection Agency's NONROAD2005 model was also carried out under the 2008 model update. Results generated from the NONROAD2005 model were analyzed, examined, and compared, to the extent that

  9. RELAP-7 Software Verification and Validation Plan - Requirements Traceability Matrix (RTM) Part 2: Code Assessment Strategy, Procedure, and RTM Update

    Energy Technology Data Exchange (ETDEWEB)

    Yoo, Jun Soo [Idaho National Lab. (INL), Idaho Falls, ID (United States); Choi, Yong Joon [Idaho National Lab. (INL), Idaho Falls, ID (United States); Smith, Curtis Lee [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2016-09-01

    This document addresses two subjects involved with the RELAP-7 Software Verification and Validation Plan (SVVP): (i) the principles and plan to assure the independence of RELAP-7 assessment throughout the code development process, and (ii) the work performed to establish the RELAP-7 assessment plan, i.e., the assessment strategy, literature review, and identification of RELAP-7 requirements. Then, the Requirements Traceability Matrices (RTMs) proposed in the previous document (INL-EXT-15-36684) are updated. These RTMs provide an efficient way to evaluate the RELAP-7 development status as well as the maturity of RELAP-7 assessment through the development process.

  10. Future-year ozone prediction for the United States using updated models and inputs.

    Science.gov (United States)

    Collet, Susan; Kidokoro, Toru; Karamchandani, Prakash; Shah, Tejas; Jung, Jaegun

    2017-08-01

    The relationship between emission reductions and changes in ozone can be studied using photochemical grid models. These models are updated with new information as it becomes available. The primary objective of this study was to update the previous Collet et al. studies by using the most up-to-date (at the time the study was done) emission modeling tools, inventories, and meteorology available to conduct ozone source attribution and sensitivity studies. Results show that future-year (2030) design values for 8-hr ozone concentrations were lower than base-year (2011) values. The ozone source attribution results for selected cities showed that boundary conditions were the dominant contributors to ozone concentrations at the western U.S. locations, and were important for many of the eastern U.S. locations. Point sources were generally more important in the eastern United States than in the western United States. The contributions of on-road mobile emissions were less than 5 ppb at a majority of the cities selected for analysis. The higher-order decoupled direct method (HDDM) results showed that in most of the locations selected for analysis, NOx emission reductions were more effective than VOC emission reductions in reducing ozone levels. The source attribution results from this study provide useful information on the important source categories and provide some initial guidance on future emission reduction strategies.

  11. Nonlinear model updating applied to the IMAC XXXII Round Robin benchmark system

    Science.gov (United States)

    Kurt, Mehmet; Moore, Keegan J.; Eriten, Melih; McFarland, D. Michael; Bergman, Lawrence A.; Vakakis, Alexander F.

    2017-05-01

    We consider the application of a new nonlinear model updating strategy to a computational benchmark system. The approach relies on analyzing system response time series in the frequency-energy domain by constructing both Hamiltonian and forced and damped frequency-energy plots (FEPs). The system parameters are then characterized and updated by matching the backbone branches of the FEPs with the frequency-energy wavelet transforms of experimental and/or computational time series. The main advantage of this method is that no nonlinearity model is assumed a priori, and the system model is updated solely based on simulation and/or experimental measured time series. By matching the frequency-energy plots of the benchmark system and its reduced-order model, we show that we are able to retrieve the global strongly nonlinear dynamics in the frequency and energy ranges of interest, identify bifurcations, characterize local nonlinearities, and accurately reconstruct time series. We apply the proposed methodology to a benchmark problem, which was posed to the system identification community prior to the IMAC XXXII (2014) and XXXIII (2015) Conferences as a "Round Robin Exercise on Nonlinear System Identification". We show that we are able to identify the parameters of the non-linear element in the problem with a priori knowledge about its position.

  12. Updating Linear Schedules with Lowest Cost: a Linear Programming Model

    Science.gov (United States)

    Biruk, Sławomir; Jaśkowski, Piotr; Czarnigowska, Agata

    2017-10-01

    Many civil engineering projects involve sets of tasks repeated in a predefined sequence in a number of work areas along a particular route. A useful graphical representation of schedules of such projects is the time-distance diagram, which clearly shows what process is conducted at a particular point in time and in a particular location. With repetitive tasks, the quality of project performance is conditioned by the ability of the planner to optimize workflow by synchronizing the works and resources, which usually means that resources are planned to be continuously utilized. However, construction processes are prone to risks, and a fully synchronized schedule may expire if a disturbance (bad weather, machine failure etc.) affects even one task. In such cases, works need to be rescheduled, and another optimal schedule should be built for the changed circumstances. This typically means that, to meet the fixed completion date, durations of operations have to be reduced. A number of measures are possible to achieve such reduction: working overtime, employing more resources or relocating resources from less to more critical tasks, but they all come at a considerable cost and affect the whole project. The paper investigates the problem of selecting the measures that reduce durations of tasks of a linear project so that the cost of these measures is kept to the minimum, and proposes an algorithm that could be applied to find optimal solutions as the need to reschedule arises. Considering that civil engineering projects, such as road building, usually involve fewer process types than building construction projects, the complexity of the scheduling problems is lower, and precise optimization algorithms can be applied. Therefore, the authors put forward a linear programming model of the problem and illustrate its principle of operation with an example.
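
    In the simplest case, a single critical chain of tasks with linear crashing costs, the linear program described above reduces to shortening the cheapest-to-crash tasks first until the deadline is met. The sketch below implements that special case; the paper's full LP is more general, and all task data here are invented.

```python
# Tasks on one critical chain: (normal_duration, min_duration, crash_cost_per_day).
# With a single chain and linear costs, the LP optimum coincides with the greedy
# rule "shorten the cheapest tasks first".
def crash_schedule(tasks, deadline):
    durations = [t[0] for t in tasks]
    total_cost = 0.0
    # Visit tasks in order of increasing crash cost per day
    for i in sorted(range(len(tasks)), key=lambda i: tasks[i][2]):
        excess = sum(durations) - deadline
        if excess <= 0:
            break
        cut = min(excess, durations[i] - tasks[i][1])  # cannot go below min duration
        durations[i] -= cut
        total_cost += cut * tasks[i][2]
    if sum(durations) > deadline:
        raise ValueError("deadline infeasible even with maximum crashing")
    return durations, total_cost

tasks = [(10, 7, 300.0),   # earthworks: 10 days, crashable to 7, 300/day
         (8, 6, 500.0),    # paving
         (6, 5, 200.0)]    # road marking
durations, cost = crash_schedule(tasks, deadline=20)
```

    Here 4 days must be cut: 1 day from the cheapest task (marking) and 3 from the next cheapest (earthworks), leaving the most expensive task (paving) untouched.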

  13. Resampling procedures to validate dendro-auxometric regression models

    Directory of Open Access Journals (Sweden)

    2009-03-01

    Full Text Available Regression analysis is widely used in several sectors of forest research. The validation of a dendro-auxometric model is a basic step in the building of the model itself: the more a model resists attempts to demonstrate its groundlessness, the more its reliability increases. In recent decades many new theories that exploit the computing power of modern machines have been formulated. Here we show the results obtained by the application of a bootstrap resampling procedure as a validation tool.
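
    Bootstrap validation of a regression model amounts to refitting the model on resampled data and checking how stable its coefficients are. A minimal sketch: the dendro-auxometric relation (tree height vs. diameter at breast height) and all numbers are simulated for illustration, not the study's data.

```python
import numpy as np

def bootstrap_slope_ci(x, y, n_boot=1000, seed=0):
    """Refit y = a + b*x on bootstrap resamples; return a 95% CI for the slope b."""
    rng = np.random.default_rng(seed)
    n = len(x)
    slopes = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, n, size=n)          # sample pairs with replacement
        b, a = np.polyfit(x[idx], y[idx], 1)      # slope, intercept
        slopes[i] = b
    return np.percentile(slopes, [2.5, 97.5])

# Simulated stand: height roughly linear in diameter, with noise
rng = np.random.default_rng(42)
dbh = rng.uniform(10, 60, size=80)                        # diameter at breast height (cm)
height = 1.3 + 0.45 * dbh + rng.normal(0, 2, size=80)     # tree height (m)
lo, hi = bootstrap_slope_ci(dbh, height)
```

    A narrow interval that covers the fitted slope across resamples suggests the model is not an artifact of particular observations; a wide or unstable interval is the "demonstration of groundlessness" the abstract refers to.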

  14. Atmospheric release model for the E-area low-level waste facility: Updates and modifications

    Energy Technology Data Exchange (ETDEWEB)

    None, None

    2017-11-16

    The atmospheric release model (ARM) utilizes GoldSim® Monte Carlo simulation software (GTG, 2017) to evaluate the flux of gaseous radionuclides as they volatilize from E-Area disposal facility waste zones, diffuse into the air-filled soil pores surrounding the waste, and emanate at the land surface. This report documents the updates and modifications to the ARM for the next planned E-Area PA considering recommendations from the 2015 PA strategic planning team outlined by Butcher and Phifer.

  15. Measuring online learning systems success: applying the updated DeLone and McLean model.

    Science.gov (United States)

    Lin, Hsiu-Fen

    2007-12-01

    Based on a survey of 232 undergraduate students, this study used the updated DeLone and McLean information systems success model to examine the determinants for successful use of online learning systems (OLS). The results provided an expanded understanding of the factors that measure OLS success. The results also showed that system quality, information quality, and service quality had a significant effect on actual OLS use through user satisfaction and behavioral intention to use OLS.

  16. Atmospheric release model for the E-area low-level waste facility: Updates and modifications

    International Nuclear Information System (INIS)

    None, None

    2017-01-01

    The atmospheric release model (ARM) utilizes GoldSim® Monte Carlo simulation software (GTG, 2017) to evaluate the flux of gaseous radionuclides as they volatilize from E-Area disposal facility waste zones, diffuse into the air-filled soil pores surrounding the waste, and emanate at the land surface. This report documents the updates and modifications to the ARM for the next planned E-Area PA considering recommendations from the 2015 PA strategic planning team outlined by Butcher and Phifer.

  17. User's guide to the MESOI diffusion model and to the utility programs UPDATE and LOGRVU

    Energy Technology Data Exchange (ETDEWEB)

    Athey, G.F.; Allwine, K.J.; Ramsdell, J.V.

    1981-11-01

    MESOI is an interactive, Lagrangian puff trajectory diffusion model. The model is documented separately (Ramsdell and Athey, 1981); this report is intended to provide MESOI users with the information needed to successfully conduct model simulations. The user is also provided with guidance in the use of the data file maintenance and review programs, UPDATE and LOGRVU. Complete examples are given for the operation of all three programs and an appendix documents UPDATE and LOGRVU.
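
    The summary does not give MESOI's equations, but Lagrangian puff models of this kind evaluate a Gaussian puff concentration formula at receptor points as each puff is advected by the wind. A generic textbook form with ground reflection, not MESOI's actual code, can be sketched as:

```python
import math

def puff_concentration(q, x, y, z, xc, yc, sigma_y, sigma_z):
    """Concentration from one Gaussian puff of mass q centred at (xc, yc) at the
    surface, evaluated at receptor (x, y, z); the factor 2 reflects the plume
    off the ground. Generic textbook form for illustration only."""
    norm = q / ((2 * math.pi) ** 1.5 * sigma_y ** 2 * sigma_z)
    horizontal = math.exp(-((x - xc) ** 2 + (y - yc) ** 2) / (2 * sigma_y ** 2))
    vertical = 2 * math.exp(-z ** 2 / (2 * sigma_z ** 2))
    return norm * horizontal * vertical

# One puff: ground-level concentration at the puff centre vs. 100 m off-axis
c_centre = puff_concentration(q=1.0, x=500, y=0, z=0, xc=500, yc=0, sigma_y=50, sigma_z=20)
c_offaxis = puff_concentration(q=1.0, x=500, y=100, z=0, xc=500, yc=0, sigma_y=50, sigma_z=20)
```

    A trajectory model sums such contributions over all active puffs, growing sigma_y and sigma_z with travel time according to the stability class.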

  18. Enhancing photogrammetric 3d city models with procedural modeling techniques for urban planning support

    International Nuclear Information System (INIS)

    Schubiger-Banz, S; Arisona, S M; Zhong, C

    2014-01-01

    This paper presents a workflow to increase the level of detail of reality-based 3D urban models. It combines the established workflows from photogrammetry and procedural modeling in order to exploit the distinct advantages of both approaches. The combination has advantages over purely automatic acquisition in terms of visual quality, accuracy and model semantics. Compared to manual modeling, procedural techniques can be much more time-effective while maintaining the qualitative properties of the modeled environment. In addition, our method includes processes for procedurally adding features such as road and rail networks. The resulting models meet the increasing needs in urban environments for planning, inventory, and analysis

  19. Research of Cadastral Data Modelling and Database Updating Based on Spatio-temporal Process

    Directory of Open Access Journals (Sweden)

    ZHANG Feng

    2016-02-01

    Full Text Available The core of modern cadastre management is to renew the cadastral database and keep it current, topologically consistent and complete. This paper analyses the changes, and the linkage between them, of various cadastral objects in the update process. Combining object-oriented modelling techniques with the expression of spatio-temporal objects' evolution, the paper proposes a cadastral data updating model based on the spatio-temporal process that matches the way people conceive of change. Change rules based on the spatio-temporal topological relations of evolving cadastral spatio-temporal objects are drafted, and furthermore cascade updating and history back-tracing of cadastral features, land use and buildings are realized. This model is implemented in the cadastral management system ReGIS. Cascade changes are triggered by the direct driving force or by perceived external events. The system records the evolution process of spatio-temporal objects to facilitate the reconstruction of history, change tracking, and the analysis and forecasting of future changes.

  20. Dynamic updating atlas for heart segmentation with a nonlinear field-based model.

    Science.gov (United States)

    Cai, Ken; Yang, Rongqian; Yue, Hongwei; Li, Lihua; Ou, Shanxing; Liu, Feng

    2017-09-01

    Segmentation of cardiac computed tomography (CT) images is an effective method for assessing the dynamic function of the heart and lungs. In the atlas-based heart segmentation approach, the quality of segmentation usually relies upon atlas images, and the selection of those reference images is a key step. The optimal goal in this selection process is to have the reference images as close to the target image as possible. This study proposes an atlas dynamic update algorithm using a scheme of nonlinear deformation field. The proposed method is based on the features among double-source CT (DSCT) slices. The extraction of these features will form a base to construct an average model and the created reference atlas image is updated during the registration process. A nonlinear field-based model was used to effectively implement a 4D cardiac segmentation. The proposed segmentation framework was validated with 14 4D cardiac CT sequences. The algorithm achieved an acceptable accuracy (1.0-2.8 mm). Our proposed method that combines a nonlinear field-based model and dynamic updating atlas strategies can provide an effective and accurate way for whole heart segmentation. The success of the proposed method largely relies on the effective use of the prior knowledge of the atlas and the similarity explored among the to-be-segmented DSCT sequences. Copyright © 2016 John Wiley & Sons, Ltd.

  1. A model to determine payments associated with radiology procedures.

    Science.gov (United States)

    Mabotuwana, Thusitha; Hall, Christopher S; Thomas, Shiby; Wald, Christoph

    2017-12-01

    Across the United States, there is a growing number of patients in Accountable Care Organizations and under risk contracts with commercial insurance, due to the proliferation of new value-based payment models and care delivery reform efforts. In this context, the business model of radiology within a hospital or health system is shifting from a primary profit-center to a cost-center with a goal of cost savings. Radiology departments increasingly need to understand how the transactional nature of the business relates to financial rewards. The main challenge with current reporting systems is that the information is presented only at an aggregated level, and often not broken down further, for instance, by type of exam. As such, the primary objective of this research is to provide better visibility into payments associated with individual radiology procedures in order to better calibrate the expense/capital structure of the imaging enterprise to the actual revenue or value added to the organization it belongs to. We propose a methodology that can be used to determine technical payments at a procedure level. We use a proportion-based model to allocate payments to individual radiology procedures based on total charges (which also include non-radiology charges). Using a production dataset containing 424,250 radiology exams, we calculated the overall average technical charge for radiology to be $873.08 per procedure and the corresponding average payment to be $326.43 (ranging from $48.27 for XR to $2750.11 for PET/CT), resulting in an average payment percentage of 37.39% across all exams. We describe how charges associated with a procedure can be used to approximate technical payments at a more granular level with a focus on radiology. The methodology is generalizable to approximate payments for other services as well. Understanding payments associated with each procedure can be useful during strategic practice planning. Charge-to-total-charge ratio can be used to
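
    The proportion-based model can be sketched directly: each procedure receives a share of the payment equal to its charge's share of the total (all-department) charges. The function name and all amounts below are invented for illustration, not taken from the study's dataset.

```python
# Allocate a lump-sum payment to individual procedures in proportion to charges.
def allocate_payment(total_payment, total_charges, procedure_charges):
    """procedure_charges: {procedure_code: charge}; returns {procedure_code: payment}."""
    return {code: total_payment * charge / total_charges
            for code, charge in procedure_charges.items()}

# A visit with $10,000 in total charges and $3,739 paid by the insurer,
# of which two radiology exams account for $1,500 of the charges.
payments = allocate_payment(
    total_payment=3739.0,
    total_charges=10000.0,
    procedure_charges={"CT chest": 1000.0, "XR wrist": 500.0},
)
```

    Each exam thus inherits the visit-level payment percentage (here 37.39%), which is exactly the charge-to-total-charge ratio idea described above.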

  2. Modal testing and finite element model updating of laser spot welds

    Energy Technology Data Exchange (ETDEWEB)

    Husain, N Abu; Khodaparast, H Haddad; Snaylam, A; James, S; Sharp, M; Dearden, G; Ouyang, H, E-mail: h.ouyang@liverpool.ac.u [Department of Engineering, Harrison Hughes Building, University of Liverpool, L69 3GH (United Kingdom)

    2009-08-01

    Spot welds are used extensively in automotive engineering. One of the latest manufacturing techniques for producing spot welds is laser welding. Finite element (FE) modelling of laser welds for dynamic analysis is a research issue because of the complexity and uncertainty of the welds and of the structures thus formed. In this work, an FE model of the welds is developed by employing the CWELD element in NASTRAN, and its feasibility for representing laser spot welds is investigated. The FE model is updated based on the measured modal data of hat-plate structures, with the updating cast as a structural minimisation problem through the application of NASTRAN codes.

  3. Spiral model of procedural cycle of educational process management

    OpenAIRE

    Bezrukov Valery I.; Lukashina Elena V.

    2016-01-01

    The article analyzes the nature and characteristics of the spiral model of the procedural cycle of educational process management. The authors identify patterns between the development of information and communication technologies and the transformation of the education management process, give the characteristics of the concepts of "information literacy" and "media education", and consider the design function, determining its potential in changing the traditional educational paradigm to the new, information...

  4. Model of Procedure Usage – Results from a Qualitative Study to Inform Design of Computer-Based Procedures

    Energy Technology Data Exchange (ETDEWEB)

    Johanna H Oxstrand; Katya L Le Blanc

    2012-07-01

    The nuclear industry is constantly trying to find ways to decrease the human error rate, especially for human errors associated with procedure use. As a step toward the goal of improving procedure use performance, researchers, together with the nuclear industry, have been looking at replacing the current paper-based procedures with computer-based procedure systems. The concept of computer-based procedures is not new by any means; however, most research has focused on procedures used in the main control room. Procedures reviewed in these efforts are mainly emergency operating procedures and normal operating procedures. Based on lessons learned from these previous efforts we are now exploring a more unknown application for computer-based procedures: field procedures, i.e. procedures used by nuclear equipment operators and maintenance technicians. The Idaho National Laboratory, the Institute for Energy Technology, and participants from the U.S. commercial nuclear industry are collaborating in an applied research effort with the objective of developing requirements and specifications for a computer-based procedure system to be used by field operators. The goal is to identify the types of human errors that can be mitigated by using computer-based procedures and how to best design the computer-based procedures to do this. The underlying philosophy in the research effort is "Stop – Start – Continue", i.e. what features from the use of paper-based procedures should we not incorporate (Stop), what should we keep (Continue), and what new features or work processes should be added (Start). One step in identifying the Stop – Start – Continue was to conduct a baseline study where affordances related to the current usage of paper-based procedures were identified. The purpose of the study was to develop a model of paper-based procedure use which will help to identify desirable features for computer-based procedure prototypes. Affordances such as note taking, markups

  5. Automatically updating predictive modeling workflows support decision-making in drug design.

    Science.gov (United States)

    Muegge, Ingo; Bentzien, Jörg; Mukherjee, Prasenjit; Hughes, Robert O

    2016-09-01

    Using predictive models for early decision-making in drug discovery has become standard practice. We suggest that model building needs to be automated, with minimum input and low technical maintenance requirements. Models perform best when tailored to answering specific compound-optimization questions. If qualitative answers are required, 2-bin classification models are preferred. Integrating predictive modeling results with structural information stimulates better decision making. For in silico models supporting rapid structure-activity relationship cycles, performance deteriorates within weeks. Frequent automated updates of predictive models ensure the best predictions. Consensus between multiple modeling approaches increases prediction confidence. Combining qualified and nonqualified data makes optimal use of all available information. Dose predictions provide a holistic alternative to multiple individual property predictions for reaching complex decisions.
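    The 2-bin classification and consensus ideas above can be sketched in a few lines. The helper below is purely illustrative (not from the paper): it averages per-model probabilities and reports inter-model agreement as a crude confidence measure.

```python
def consensus_2bin(prob_lists, threshold=0.5):
    """Combine per-model probabilities by simple averaging (consensus) and
    emit a 2-bin call plus a confidence derived from inter-model agreement.

    prob_lists: one list of probabilities per model, aligned by compound.
    Returns, per compound, ("active"/"inactive", fraction of models agreeing
    with the consensus call).
    """
    calls = []
    for probs in zip(*prob_lists):           # iterate compound-wise
        mean_p = sum(probs) / len(probs)
        call = mean_p >= threshold
        agree = sum((p >= threshold) == call for p in probs)
        calls.append(("active" if call else "inactive", agree / len(probs)))
    return calls

# Three hypothetical models scoring two compounds:
print(consensus_2bin([[0.9, 0.2], [0.8, 0.1], [0.7, 0.3]]))
```

    A low agreement fraction flags compounds where the modeling approaches disagree and prediction confidence is weak.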

  6. Update on Small Modular Reactors Dynamic System Modeling Tool: Web Application

    Energy Technology Data Exchange (ETDEWEB)

    Hale, Richard Edward [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Cetiner, Sacit M. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Fugate, David L. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Batteh, John J [Modelon Corporation (Sweden); Tiller, Michael M. [Xogeny Corporation (United States)

    2015-01-01

    Previous reports focused on the development of component and system models as well as end-to-end system models using Modelica and Dymola for two advanced reactor architectures: (1) Advanced Liquid Metal Reactor and (2) fluoride high-temperature reactor (FHR). The focus of this report is the release of the first beta version of the web-based application for model use and collaboration, as well as an update on the FHR model. The web-based application allows novice users to configure end-to-end system models from preconfigured choices to investigate the instrumentation and controls implications of these designs and allows for the collaborative development of individual component models that can be benchmarked against test systems for potential inclusion in the model library. A description of this application is provided along with examples of its use and a listing and discussion of all the models that currently exist in the library.

  7. Drawing-Based Procedural Modeling of Chinese Architectures.

    Science.gov (United States)

    Fei Hou; Yue Qi; Hong Qin

    2012-01-01

    This paper presents a novel modeling framework to build 3D models of Chinese architectures from elevation drawings. Our algorithm integrates the capability of automatic drawing recognition with powerful procedural modeling to extract production rules from an elevation drawing. First, in contrast to previous symbol-based floor plan recognition, and based on the novel concept of repetitive pattern trees, small horizontal repetitive regions of the elevation drawing are clustered in a bottom-up manner to form architectural components with maximum repetition, which collectively serve as building blocks for 3D model generation. Second, to discover the global architectural structure and its components' interdependencies, the components are structured into a shape tree in a top-down subdivision manner and recognized hierarchically at each level of the shape tree based on Markov Random Fields (MRFs). Third, shape grammar rules are derived to construct a 3D semantic model and its possible variations with the help of a 3D component repository. The salient contribution lies in the novel integration of procedural modeling with elevation drawings, with a unique application to Chinese architectures.

  8. The TVT-obturator surgical procedure for the treatment of female stress urinary incontinence: a clinical update.

    Science.gov (United States)

    Waltregny, David; de Leval, Jean

    2009-03-01

    Six years ago, the inside-out transobturator tape (TVT-O) procedure was developed for the surgical treatment of female stress urinary incontinence (SUI), with the aim of minimizing the risk of urethra and bladder injuries and ensuring minimal tissue dissection. Initial feasibility and efficacy studies suggested that the TVT-O procedure is associated with high SUI cure rates and low morbidity in the short term. A recent analysis of medium-term results indicated that the TVT-O procedure is effective, with cure rates maintained after a minimum 3-year follow-up that compare favorably with those reported for TVT. No late complications were observed. As of July 2008, more than 35 clinical papers, including ten randomized trials and two national registries, have been published on the outcome of TVT-O surgery. Results from these studies have confirmed that the TVT-O procedure is safe and as effective as the TVT procedure, at least in the short/medium term.

  9. Updating and prospective validation of a prognostic model for high sickness absence.

    Science.gov (United States)

    Roelen, C A M; Heymans, M W; Twisk, J W R; van Rhenen, W; Pallesen, S; Bjorvatn, B; Moen, B E; Magerøy, N

    2015-01-01

    To further develop and validate a Dutch prognostic model for high sickness absence (SA). Three-wave longitudinal cohort study of 2,059 Norwegian nurses. The Dutch prognostic model was used to predict high SA among Norwegian nurses at wave 2. Subsequently, the model was updated by adding person-related (age, gender, marital status, children at home, and coping strategies), health-related (BMI, physical activity, smoking, and caffeine and alcohol intake), and work-related (job satisfaction, job demands, decision latitude, social support at work, and both work-to-family and family-to-work spillover) variables. The updated model was then prospectively validated for predictions at wave 3. 1,557 (77%) nurses had complete data at wave 2 and 1,342 (65%) at wave 3. The risk of high SA was underestimated by the Dutch model, but discrimination between high-risk and low-risk nurses was fair after re-calibration to the Norwegian data. Gender, marital status, BMI, physical activity, smoking, alcohol intake, job satisfaction, job demands, decision latitude, support at the workplace, and work-to-family spillover were identified as potential predictors of high SA. However, these predictors did not improve the model's discriminative ability, which remained fair at wave 3. The prognostic model correctly identifies 73% of Norwegian nurses at risk of high SA, although additional predictors are needed before the model can be used to screen working populations for risk of high SA.
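    Re-calibrating an existing risk model to a new population, as done here for the Dutch model on Norwegian data, is commonly a logistic regression of the observed outcome on the old model's linear predictor (an intercept correcting systematic under/over-estimation, a slope rescaling the score). A minimal sketch with made-up toy data, not the authors' exact procedure:

```python
import math

def predict(a, b, x):
    """Recalibrated risk: sigmoid of a + b * (old linear predictor x)."""
    return 1.0 / (1.0 + math.exp(-(a + b * x)))

def recalibrate(lp, y, iters=20000, eta=0.1):
    """Fit y ~ sigmoid(a + b*lp) by gradient descent ('logistic
    recalibration'). a fixes calibration-in-the-large, b rescales lp."""
    a, b = 0.0, 1.0
    n = len(lp)
    for _ in range(iters):
        ga = gb = 0.0
        for x, t in zip(lp, y):
            p = predict(a, b, x)
            ga += (p - t) / n          # gradient w.r.t. intercept
            gb += (p - t) * x / n      # gradient w.r.t. slope
        a -= eta * ga
        b -= eta * gb
    return a, b

# Toy, non-separable data: old linear predictors and observed outcomes.
lp = [-2.0, -1.5, -1.0, 0.0, 0.5, 1.0, 1.5, 2.0]
y  = [0,    0,    0,    1,   1,   0,   1,   1]
a, b = recalibrate(lp, y)
```

    At the optimum the mean predicted risk matches the observed event rate, which is exactly the "calibration-in-the-large" correction the abstract describes.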

  10. Updating representation of land surface-atmosphere feedbacks in airborne campaign modeling analysis

    Science.gov (United States)

    Huang, M.; Carmichael, G. R.; Crawford, J. H.; Chan, S.; Xu, X.; Fisher, J. A.

    2017-12-01

    An updated modeling system to support airborne field campaigns is being built at NASA Ames Pleiades, with a focus on adjusting the representation of land surface-atmosphere feedbacks. The main updates, drawing on previous experience with ARCTAS-CARB and CalNex in the western US studying air pollution inflows, include: 1) migrating the land surface model coupled to WRF (Weather Research and Forecasting) from Noah to improved/more complex models, especially Noah-MP and Rapid Update Cycle; 2) enabling WRF land initialization with suitably spun-up land model output; 3) incorporating satellite land cover, vegetation dynamics, and soil moisture data (i.e., assimilating Soil Moisture Active Passive data using the ensemble Kalman filter approach) into WRF. Examples are given comparing the model fields with available aircraft observations from spring-summer 2016 field campaigns that took place on the eastern sides of continents (KORUS-AQ in South Korea and ACT-America in the eastern US), the air pollution export regions. Under fair-weather and stormy conditions, air pollution vertical distributions and column amounts, as well as the impact from the land surface, are compared. These comparisons help identify challenges and opportunities for LEO/GEO satellite remote sensing and modeling of air quality in the northern hemisphere. Finally, we briefly show applications of this system to simulating Australian conditions, which will explore the needs for further development of the observing system in the southern hemisphere and inform the Clean Air and Urban Landscapes (https://www.nespurban.edu.au) modelers.

  11. Recreation of architectural structures using procedural modeling based on volumes

    Directory of Open Access Journals (Sweden)

    Santiago Barroso Juan

    2013-11-01

    Full Text Available While the procedural modeling of buildings and other architectural structures has evolved very significantly in recent years, there is a noticeable absence of high-level tools that would allow a designer, an artist or a historian to recreate important buildings or architectural structures of a particular city. In this paper we present a tool for creating buildings in a simple and clear way, following rules that use the language and methodology of building construction itself, while hiding from the user the algorithmic details of the creation of the model.

  12. Procedural Modeling for Rapid-Prototyping of Multiple Building Phases

    Science.gov (United States)

    Saldana, M.; Johanson, C.

    2013-02-01

    RomeLab is a multidisciplinary working group at UCLA that uses the city of Rome as a laboratory for the exploration of research approaches and dissemination practices centered on the intersection of space and time in antiquity. In this paper we present a multiplatform workflow for the rapid-prototyping of historical cityscapes through the use of geographic information systems, procedural modeling, and interactive game development. Our workflow begins by aggregating archaeological data in a GIS database. Next, 3D building models are generated from the ArcMap shapefiles in Esri CityEngine using procedural modeling techniques. A GIS-based terrain model is also adjusted in CityEngine to fit the building elevations. Finally, the terrain and city models are combined in Unity, a game engine which we used to produce web-based interactive environments which are linked to the GIS data using keyhole markup language (KML). The goal of our workflow is to demonstrate that knowledge generated within a first-person virtual world experience can inform the evaluation of data derived from textual and archaeological sources, and vice versa.

  13. 76 FR 39221 - HUD Debt Collection: Revisions and Update to the Procedures for the Collection of Claims

    Science.gov (United States)

    2011-07-05

    ... Other Actions, would include the procedures that apply when HUD seeks satisfaction of debts owed to HUD... of the salary of a Federal Government employee. The revisions proposed by this rule primarily apply... CFR 285.7.) Section 17.83(d) is revised to reference the employee's right to propose a repayment...

  14. An update on radiation absorbed dose to patients from diagnostic nuclear medicine procedures in Tehran: A study on four academic centers

    Science.gov (United States)

    Motazedian, Motahareh; Tabeie, F; Vatankhah, P; Shafiei, B; Amoui, M; Atefi, M; Ansari, M; Asli, I Neshandar

    2016-01-01

    Purpose: The use of radiopharmaceuticals for diagnostic nuclear medicine procedures is one of the main sources of radiation exposure. We performed this study in view of the rapid growth of nuclear medicine in Iran and the lack of updated statistics. Materials and Methods: The data were obtained from all active nuclear medicine centers affiliated with Shahid Beheshti University of Medical Sciences during 2009 and 2010. Results: The most frequently performed procedures were bone (30.16%), cardiac (28.96%), renal (17.97%), and thyroid (7.93%) scans. There was a significant decrease in the number of thyroid scintigraphies with 131I and of 99mTc-sulfur colloid liver/spleen scans, and a tremendous increase in the frequencies of cardiac and bone scintigraphies, compared to one decade ago. Conclusion: Compared to previous studies, there were striking changes in the trends of diagnostic nuclear medicine procedures in Tehran. This field is still evolving in the country, and the trend will change further with the introduction of positron emission tomography scanners in the future. PMID:27095860

  15. Receiver Operating Characteristic Curve-Based Prediction Model for Periodontal Disease Updated With the Calibrated Community Periodontal Index.

    Science.gov (United States)

    Su, Chiu-Wen; Yen, Amy Ming-Fang; Lai, Hongmin; Chen, Hsiu-Hsi; Chen, Sam Li-Sheng

    2017-12-01

    The accuracy of a prediction model for periodontal disease using the community periodontal index (CPI) has been assessed using the area under the receiver operating characteristic (AUROC) curve. How the uncalibrated CPI, as measured by general dentists trained by periodontists in a large epidemiologic study, affects the performance of a prediction model has not yet been researched. A two-stage design was conducted, first proposing a validation study to calibrate the CPI between a senior periodontal specialist and the trained general dentists who measured CPIs in the main study of a nationwide survey. A Bayesian hierarchical logistic regression model was applied to estimate the non-updated and updated clinical weights used for building up risk scores. How the calibrated CPI affected the performance of the updated prediction model was quantified by comparing AUROC curves between the original and updated models. Estimates regarding calibration of the CPI obtained from the validation study were 66% and 85% for sensitivity and specificity, respectively. After updating, the clinical weights of each predictor were inflated, and the risk score for the highest risk category was elevated from 434 to 630. This update improved the AUROC performance from 62.6% (95% confidence interval [CI]: 61.7% to 63.6%) for the non-updated model to 68.9% (95% CI: 68.0% to 69.6%) for the updated one, a statistically significant difference. The improvement of the updated prediction model was thus demonstrated for periodontal disease as measured by the calibrated CPI derived from a large epidemiologic survey.
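    The AUROC used here to compare the non-updated and updated models has a useful probabilistic reading: it equals the probability that a randomly chosen case outranks a randomly chosen non-case (the Mann-Whitney interpretation), which gives a direct way to compute it:

```python
def auroc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney statistic:
    the fraction of (positive, negative) pairs ranked correctly,
    with ties counted as 1/2."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

print(auroc([0.1, 0.4, 0.35, 0.8], [0, 0, 1, 1]))  # → 0.75
```

    A value of 0.5 is chance-level discrimination; the paper's move from ~0.63 to ~0.69 means the updated risk score ranks diseased above non-diseased subjects correctly in about 69% of such pairs.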

  16. An Updated Site Scale Saturated Zone Ground Water Transport Model for Yucca Mountain

    Science.gov (United States)

    Kelkar, S.; Ding, M.; Chu, S.; Robinson, B.; Arnold, B.; Meijer, A.

    2007-12-01

    The Yucca Mountain site scale saturated zone transport model has been revised to incorporate the updated flow model based on a hydrogeologic framework model using the latest lithology data, increased grid resolution that better resolves the geology within the model domain, updated sorption coefficient (Kd) distributions for radionuclides of interest, and updated retardation factor distributions. The resulting numerical transport model is used for performance assessment predictions of radionuclide transport and to guide future data collection and modeling activities. The transport model results are validated by comparing the model transport pathways with those derived from geochemical data, and by comparing the transit times from the repository footprint to the compliance boundary at the accessible environment with those derived from 14C-based age estimates. The transport model includes the processes of advection, dispersion, fracture flow, matrix diffusion in fractured volcanic formations, sorption, and colloid-facilitated transport. The transport of sorbing radionuclides in the aqueous phase is modeled as a linear, equilibrium process using the Kd model. The colloid-facilitated transport of radionuclides is modeled using two approaches: colloids with irreversibly embedded radionuclides undergo reversible filtration only, while the migration of radionuclides that reversibly sorb to colloids is modeled with modified values for the sorption coefficients and matrix diffusion coefficients. The base case results predict a transport time of 810 years for the breakthrough of half of the mass of a nonreactive radionuclide originating at a point within the footprint of the repository to the compliance boundary of the accessible environment at a distance of ~18 km downstream. The transport time is quite sensitive to the specific discharge through the model, varying from 31 to 52,840 years for specific discharge multiplier values ranging from 0.1 to 8.9. Other
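    Under the linear, equilibrium Kd model mentioned above, sorption enters the transport equations through a retardation factor R = 1 + (ρ_b/θ)·Kd, with ρ_b the bulk density and θ the porosity; a nonreactive species (Kd = 0) has R = 1. A minimal sketch with illustrative parameter values (not site data):

```python
def retardation_factor(kd, bulk_density, porosity):
    """Linear-equilibrium sorption: R = 1 + (rho_b / theta) * Kd.
    kd in mL/g (= cm^3/g), bulk_density in g/cm^3, porosity dimensionless.
    A sorbing solute travels R times slower than the pore water."""
    return 1.0 + (bulk_density / porosity) * kd

# Illustrative values only: a nonreactive tracer vs. a sorbing species.
print(retardation_factor(0.0, 1.6, 0.2))  # → 1.0
print(retardation_factor(5.0, 1.6, 0.2))  # → 41.0
```

    This is why the sensitivity to specific discharge quoted above dominates nonreactive breakthrough times, while sorbing radionuclides are delayed further by their Kd-dependent R.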

  17. A sow replacement model using Bayesian updating in a three-level hierarchic Markov process. I. Biological model

    DEFF Research Database (Denmark)

    Kristensen, Anders Ringgaard; Søllested, Thomas Algot

    2004-01-01

    Several replacement models have been presented in the literature. In other application areas, such as dairy cow replacement, various methodological improvements like hierarchical Markov processes and Bayesian updating have been implemented, but not in sow models. Furthermore, there are methodological improvements like multi-level hierarchical Markov processes with decisions on multiple time scales, efficient methods for parameter estimation at herd level, and standard software that have hardly been implemented at all in any replacement model. The aim of this study is to present a sow replacement model...
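    As a generic illustration of the Bayesian updating idea used in such models (a textbook conjugate-normal sketch, not the paper's sow-specific formulation):

```python
def bayes_update_normal(prior_mean, prior_var, obs, obs_var):
    """Conjugate Bayesian update of a normal mean with known variances.
    The posterior mean is a precision-weighted blend of the prior and the
    new observation; the posterior variance always shrinks."""
    w = prior_var / (prior_var + obs_var)          # weight on the observation
    post_mean = prior_mean + w * (obs - prior_mean)
    post_var = prior_var * obs_var / (prior_var + obs_var)
    return post_mean, post_var

# Observing 2.0 (obs variance 1.0) against a N(0, 1) prior:
print(bayes_update_normal(0.0, 1.0, 2.0, 1.0))  # → (1.0, 0.5)
```

    Applied sequentially as records accumulate, updates like this refine per-animal productivity estimates, which a hierarchical Markov decision process can then use to decide on replacement.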

  18. A Review of the Updated Pharmacophore for the Alpha 5 GABA(A) Benzodiazepine Receptor Model

    Directory of Open Access Journals (Sweden)

    Terry Clayton

    2015-01-01

    Full Text Available An updated model of the GABA(A) benzodiazepine receptor pharmacophore of the α5-BzR/GABA(A) subtype has been constructed, prompted by the synthesis of subtype-selective ligands in light of recent developments in ligand synthesis, behavioral studies, and molecular modeling studies of the binding site itself. A number of BzR/GABA(A) α5-subtype-selective compounds were synthesized, notably the α5-subtype-selective inverse agonist PWZ-029 (1), which is active in enhancing cognition in both rodents and primates. In addition, a chiral positive allosteric modulator (PAM), SH-053-2′F-R-CH3 (2), has been shown to reverse the deleterious effects in the MAM model of schizophrenia as well as alleviate constriction in airway smooth muscle. Presented here is an updated model of the pharmacophore for α5β2γ2 Bz/GABA(A) receptors, including a rendering of PWZ-029 docked within the α5 binding pocket showing specific interactions of the molecule with the receptor. Differences in the included volume as compared to α1β2γ2, α2β2γ2, and α3β2γ2 are illustrated for clarity. These new models enhance the ability to understand structural characteristics of ligands that act as agonists, antagonists, or inverse agonists at the Bz binding site of GABA(A) receptors.

  19. Updating Finite Element Model of a Wind Turbine Blade Section Using Experimental Modal Analysis Results

    Directory of Open Access Journals (Sweden)

    Marcin Luczak

    2014-01-01

    Full Text Available This paper presents selected results and aspects of multidisciplinary and interdisciplinary research oriented toward the experimental and numerical study of the structural dynamics of a bend-twist-coupled full-scale section of a wind turbine blade structure. The main goal of the research is to validate the finite element model of the modified wind turbine blade section mounted in a flexible support structure against the experimental results. Bend-twist coupling was implemented by adding angled unidirectional layers on the suction and pressure sides of the blade. Dynamic tests and simulations were performed on a section of a full-scale wind turbine blade provided by Vestas Wind Systems A/S. The numerical results are compared to the experimental measurements, and the discrepancies are assessed by natural frequency differences and the modal assurance criterion. Based on a sensitivity analysis, a set of model parameters was selected for the model updating process. Design of experiments and the response surface method were applied to find the values of the model parameters yielding results closest to the experimental ones. The updated finite element model produces results more consistent with the measurement outcomes.
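    The modal assurance criterion used above to compare measured and simulated mode shapes is simply a normalized squared inner product between two mode-shape vectors; a minimal sketch:

```python
def mac(phi_a, phi_b):
    """Modal Assurance Criterion between two mode-shape vectors:
    MAC = |phi_a^T phi_b|^2 / ((phi_a^T phi_a)(phi_b^T phi_b)).
    1.0 = identical shapes (up to scaling), 0.0 = orthogonal shapes."""
    dot = sum(a * b for a, b in zip(phi_a, phi_b))
    return dot * dot / (sum(a * a for a in phi_a) *
                        sum(b * b for b in phi_b))

print(mac([1.0, 2.0, 3.0], [2.0, 4.0, 6.0]))  # → 1.0 (same shape, scaled)
print(mac([1.0, 0.0], [0.0, 1.0]))            # → 0.0 (orthogonal)
```

    Because MAC is scale-invariant, it pairs naturally with natural frequency differences: the frequencies check the magnitudes, MAC checks the shapes.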

  20. Experimental Update of the Overtopping Model Used for the Wave Dragon Wave Energy Converter

    Directory of Open Access Journals (Sweden)

    Erik Friis-Madsen

    2013-04-01

    Full Text Available An overtopping model specifically suited to the Wave Dragon is needed in order to improve the reliability of its performance estimates. The model must be comprehensive of all relevant physical processes that affect overtopping and flexible enough to adapt to any local conditions and device configuration. An experimental investigation was carried out to update an existing formulation suited for 2D draft-limited, low-crested structures, in order to include the effects on the overtopping flow of the wave steepness, the 3D geometry of the Wave Dragon, the wing reflectors, the device motions and the non-rigid connection between platform and reflectors. The study was carried out in four phases, each specifically targeted at quantifying one of these effects through a sensitivity analysis and at modeling it through custom-made parameters. These parameters depend on features of the wave or the device configuration, all of which can be measured in real time. Instead of using new fitting coefficients, this approach allows a broader applicability of the model beyond the Wave Dragon case, to any overtopping WEC or structure within the range of tested conditions. The reliability of overtopping predictions for the Wave Dragon increased, as the updated model offers improved accuracy and precision with respect to the former version.

  1. Stepwise calibration procedure for regional coupled hydrological-hydrogeological models

    Science.gov (United States)

    Labarthe, Baptiste; Abasq, Lena; de Fouquet, Chantal; Flipo, Nicolas

    2014-05-01

    Stream-aquifer interaction is a complex process depending on regional and local processes. Indeed, the groundwater component of the hydrosystem and large-scale heterogeneities control the regional flows towards the alluvial plains and the rivers. Second, the local distribution of streambed permeabilities controls the dynamics of stream-aquifer water fluxes within the alluvial plain, and therefore the near-river piezometric head distribution. In order to better understand water circulation and pollutant transport in watersheds, these multi-dimensional processes have to be integrated into a modelling platform. Thus, the nested-interfaces concept in continental hydrosystem modelling (where regional fluxes, simulated by large-scale models, are imposed at local stream-aquifer interfaces) was presented in Flipo et al. (2014). This concept has been implemented in the EauDyssée modelling platform for a large alluvial plain model (900 km2) that is part of an 11,000 km2 multi-layer aquifer system located in the Seine basin (France). The hydrosystem modelling platform is composed of four spatially distributed modules (Surface, Sub-surface, River and Groundwater), corresponding to four components of the terrestrial water cycle. Considering the large number of parameters to be inferred simultaneously, the calibration process of coupled models is highly computationally demanding and therefore hardly applicable to a real case study of 10,000 km2. In order to improve the efficiency of the calibration process, a stepwise calibration procedure is proposed. The stepwise methodology involves determining optimal parameters for all components of the coupled model, so as to provide near-optimal prior information for the global calibration. It starts with the calibration of the surface component parameters. The surface parameters are optimised based on the comparison between simulated and observed discharges (or filtered discharges) at various locations.
Once the surface parameters

  2. Resolving structural errors in a spatially distributed hydrologic model using ensemble Kalman filter state updates

    Directory of Open Access Journals (Sweden)

    J. H. Spaaks

    2013-09-01

    Full Text Available In hydrological modeling, model structures are developed in an iterative cycle as more and different types of measurements become available and our understanding of the hillslope or watershed improves. However, with increasing complexity of the model, it becomes more and more difficult to detect which parts of the model are deficient, or which processes should also be incorporated into the model during the next development step. In this study, we first compare two methods (the Shuffled Complex Evolution Metropolis algorithm (SCEM-UA and the Simultaneous parameter Optimization and Data Assimilation algorithm (SODA to calibrate a purposely deficient 3-D hillslope-scale model to error-free, artificially generated measurements. We use a multi-objective approach based on distributed pressure head at the soil–bedrock interface and hillslope-scale discharge and water balance. For these idealized circumstances, SODA's usefulness as a diagnostic methodology is demonstrated by its ability to identify the timing and location of processes that are missing in the model. We show that SODA's state updates provide information that could readily be incorporated into an improved model structure, and that this type of information cannot be gained from parameter estimation methods such as SCEM-UA. We then expand on the SODA result by performing yet another calibration, in which we investigate whether SODA's state updating patterns are still capable of providing insight into model structure deficiencies when there are fewer measurements, which are moreover subject to measurement noise. We conclude that SODA can help guide the discussion between experimentalists and modelers by providing accurate and detailed information on how to improve spatially distributed hydrologic models.
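    A single EnKF analysis step for a directly observed scalar state, the mechanism behind the SODA state updates discussed above, can be sketched as follows (simplified: the observation perturbations used in stochastic EnKF variants to maintain ensemble spread are omitted here):

```python
def enkf_update(states, y, r):
    """One ensemble Kalman filter update for a scalar state observed
    directly. The gain K = P/(P + R) uses the ensemble variance P as the
    forecast error and R as the observation-error variance; each ensemble
    member is nudged toward the observation y by the gain."""
    n = len(states)
    m = sum(states) / n
    p = sum((s - m) ** 2 for s in states) / (n - 1)  # ensemble variance
    k = p / (p + r)                                  # Kalman gain
    return [s + k * (y - s) for s in states]

print(enkf_update([1.0, 2.0, 3.0], 10.0, 1.0))  # → [5.5, 6.0, 6.5]
```

    The diagnostic value exploited by SODA lies in these increments: persistent, structured nudges at particular times and locations indicate where the model structure, not just its parameters, is deficient.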

  3. Image processing of full-field strain data and its use in model updating

    Energy Technology Data Exchange (ETDEWEB)

    Wang, W; Mottershead, J E [Centre for Engineering Dynamics, School of Engineering, University of Liverpool, UK, L69 3GH (United Kingdom); Sebastian, C M; Patterson, E A, E-mail: wangweizhuo@gmail.com [Composite Vehicle Research Center, Michigan State University, Lansing, MI (United States)

    2011-07-19

    Finite element model updating is an inverse problem based on measured structural outputs, typically natural frequencies. Full-field responses such as static stress/strain patterns and vibration mode shapes contain valuable information for model updating but within large volumes of highly-redundant data. Pattern recognition and image processing provide feasible techniques to extract effective and efficient information, often known as shape features, from this data. For instance, the Zernike polynomials having the properties of orthogonality and rotational invariance are powerful decomposition kernels for a shape defined within a unit circle. In this paper, full field strain patterns for a specimen, in the form of a square plate with a circular hole, under a tensile load are considered. Effective shape features can be constructed by a set of modified Zernike polynomials. The modification includes the application of a weighting function to the Zernike polynomials so that high strain magnitudes around the hole are well represented. The Gram-Schmidt process is then used to ensure orthogonality for the obtained decomposition kernels over the domain of the specimen. The difference between full-field strain patterns measured by digital image correlation (DIC) and reconstructed using 15 shape features (Zernike moment descriptors, ZMDs) at different steps in the elasto-plastic deformation of the specimen is found to be very small. It is significant that only a very small number of shape features are necessary and sufficient to represent the full-field data. Model updating of nonlinear elasto-plastic material properties is carried out by adjusting the parameters of a FE model until the FE strain pattern converges upon the measured strains as determined using ZMDs.
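    The Gram-Schmidt step mentioned above, used to re-orthogonalize the weighted Zernike kernels over the specimen domain, is in its classical discrete form:

```python
def gram_schmidt(vectors):
    """Classical Gram-Schmidt: turn a list of vectors into an orthonormal
    basis by subtracting each vector's projections onto earlier basis
    vectors, then normalizing. (Near-)dependent vectors are dropped."""
    basis = []
    for v in vectors:
        w = list(v)
        for b in basis:
            proj = sum(x * y for x, y in zip(w, b))
            w = [x - proj * y for x, y in zip(w, b)]
        norm = sum(x * x for x in w) ** 0.5
        if norm > 1e-12:
            basis.append([x / norm for x in w])
    return basis
```

    (In the paper, the inner product is taken over the specimen's domain with the strain weighting applied; the sketch above uses the plain Euclidean inner product on sampled vectors.)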

  4. a Procedural Solution to Model Roman Masonry Structures

    Science.gov (United States)

    Cappellini, V.; Saleri, R.; Stefani, C.; Nony, N.; De Luca, L.

    2013-07-01

    The paper describes a new approach based on the development of a procedural modelling methodology for archaeological data representation. This is a custom-designed solution based on the recognition of the rules belonging to the construction methods used in Roman times. We have conceived a tool for the 3D reconstruction of masonry structures starting from photogrammetric surveying. Our protocol considers different steps. First, we focused on the classification of opus based on the basic interconnections that can lead to a descriptive system used for their unequivocal identification and design. Second, we chose an automatic, accurate, flexible and open-source photogrammetric pipeline named Pastis Apero Micmac (PAM), developed by IGN (Paris). We employed it to generate ortho-images from non-oriented images, using a user-friendly interface implemented by CNRS Marseille (France). Third, the masonry elements are created in a parametric and interactive way, and finally they are adapted to the photogrammetric data. The presented application, currently under construction, is developed with the open-source programming language Processing, which is suited to visual, animated or static, 2D or 3D, interactive creations. Using this language, a Java environment has been developed. Therefore, even if procedural modelling yields an accuracy level inferior to that obtained by manual (brick-by-brick) modelling, this method can be useful for the static evaluation of buildings (which requires quantitative data) and for metric measures for restoration purposes.

  5. Structural updates of alignment of protein domains and consequences on evolutionary models of domain superfamilies.

    Science.gov (United States)

    Mutt, Eshita; Rani, Sudha Sane; Sowdhamini, Ramanathan

    2013-11-15

    The influx of newly determined crystal structures into primary structural databases is increasing at a rapid pace. This leads to updates of the primary and dependent secondary databases, which makes large-scale analysis of structures even more challenging. Hence, it becomes essential to compare and appreciate the replacement of data and the inclusion of new data between two updates. PASS2 is a database that retains structure-based sequence alignments of protein domain superfamilies and relies on the SCOP database for its hierarchy and the definition of superfamily members. Since accurate alignments of distantly related proteins are useful evolutionary models for depicting variations within protein superfamilies, this study aims to trace the changes in data between PASS2 updates. In this study, differences in superfamily compositions, family constituents and length variations between different versions of PASS2 have been tracked. Studying length variations in protein domains, which are introduced by indels (insertions/deletions), is important because these indels act as evolutionary signatures, introducing variations in substrate specificity and domain interactions and sometimes even regulating protein stability. With the objective of classifying the nature and source of variations in the superfamilies during transitions (between the different versions of PASS2), an increasing length-rigidity of the superfamilies in the recent version is observed. In order to study such length-variant superfamilies in detail, an improved classification approach is also presented, which divides the superfamilies into distinct groups based on their extent of length variation. An objective study in terms of transitions between database updates, detailed investigation of the new/old members and examination of their structural alignments is non-trivial and will help researchers in designing experiments on specific superfamilies, in various modelling studies, in linking

  6. Qualitative mechanism models and the rationalization of procedures

    Science.gov (United States)

    Farley, Arthur M.

    1989-01-01

    A qualitative, cluster-based approach to the representation of hydraulic systems is described and its potential for generating and explaining procedures is demonstrated. Many ideas are formalized and implemented as part of an interactive, computer-based system. The system allows for designing, displaying, and reasoning about hydraulic systems. The interactive system has an interface consisting of three windows: a design/control window, a cluster window, and a diagnosis/plan window. A qualitative mechanism model for the ORS (Orbital Refueling System) is presented to coordinate with ongoing research on this system being conducted at NASA Ames Research Center.

  7. Some safe and sensible shortcuts for efficiently upscaled updates of existing elevation models.

    Science.gov (United States)

    Knudsen, Thomas; Aasbjerg Nielsen, Allan

    2013-04-01

    The Danish national elevation model, DK-DEM, was introduced in 2009 and is based on LiDAR data collected in the time frame 2005-2007. Hence, DK-DEM is aging, and it is time to consider how to integrate new data with the current model in a way that improves the representation of new landscape features, while still preserving the overall (very high) quality of the model. In LiDAR terms, 2005 is equivalent to some time between the palaeolithic and the neolithic. So evidently, when (and if) an update project is launched, we may expect some notable improvements due to the technical and scientific developments of the last half decade. To estimate the magnitude of these potential improvements, and to devise efficient and effective ways of integrating the new and old data, we are currently carrying out a number of case studies based on comparisons between the current terrain model (with a ground sample distance, GSD, of 1.6 m) and a number of new high-resolution point clouds (10-70 points/m2). Not knowing anything about the terms of a potential update project, we consider multiple scenarios, ranging from business as usual (a new model with the same GSD, but improved precision) to aggressive upscaling (a new model with 4 times better GSD, i.e. a 16-fold increase in the amount of data). Especially in the latter case, speeding up the gridding process is important. Luckily, recent results from one of our case studies reveal that for very high resolution data in smooth terrain (which is the common case in Denmark), using the local mean (LM) as grid value estimator is only negligibly worse than using the theoretically "best" estimator, i.e. ordinary kriging (OK) with rigorous modelling of the semivariogram. The bias in a leave-one-out cross validation differs at the micrometer level, while the RMSE differs at the 0.1 mm level. This is fortunate, since an LM estimator can be implemented in plain stream mode, letting the points from the unstructured point cloud (i.e. 
no TIN generation) stream
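The stream-mode local-mean gridding described above can be sketched in a few lines: each point updates a running sum and count for its grid cell, so no TIN and no point storage are needed. The cell indexing and data layout here are illustrative assumptions.

```python
from collections import defaultdict

def stream_local_mean(points, gsd):
    """Grid an unstructured point cloud with a local-mean estimator.

    Points are consumed one at a time (stream mode, no TIN), accumulating
    a running sum and count per grid cell; the cell value is the mean
    elevation of the points falling in it. `gsd` is the ground sample
    distance of the output grid.
    """
    acc = defaultdict(lambda: [0.0, 0])  # cell -> [sum of z, count]
    for x, y, z in points:
        cell = (int(x // gsd), int(y // gsd))
        acc[cell][0] += z
        acc[cell][1] += 1
    return {cell: s / n for cell, (s, n) in acc.items()}
```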

  8. Contact-based model for strategy updating and evolution of cooperation

    Science.gov (United States)

    Zhang, Jianlei; Chen, Zengqiang

    2016-06-01

    Establishing a workable model for the strategy decision process of players is not easy, and the related strategy updating rules have sparked heated debate. Models for evolutionary games have traditionally assumed that players imitate their successful partners by comparing respective payoffs, raising the question of what happens if the game information is not easily available. Focusing on this yet-unsolved case, the motivation behind the work presented here is to establish a novel model for the updating of states in a spatial population that bypasses the payoffs required in previous models and instead considers players' contact patterns. It is handy and understandable to employ switching probabilities for determining the microscopic dynamics of strategy evolution. Our results illuminate the conditions under which the steady coexistence of competing strategies is possible. These findings reveal that the evolutionary fate of the coexisting strategies can be calculated analytically, and provide novel hints for the resolution of cooperative dilemmas in a competitive context. We hope that our results have disclosed new explanations about the survival and coexistence of competing strategies in structured populations.
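A minimal, payoff-free update rule of the kind the abstract describes can be sketched as follows; the ring topology, the uniform switching probability `p_switch`, and the update schedule are all illustrative assumptions, not the model of the cited paper.

```python
import random

def contact_update(strategies, p_switch, steps, rng):
    """Payoff-free strategy updating driven by contact patterns.

    At each step a randomly chosen player adopts the strategy of a random
    ring neighbour with probability `p_switch`. Switching probabilities,
    not payoff comparisons, drive the microscopic dynamics.
    """
    s = list(strategies)
    n = len(s)
    for _ in range(steps):
        i = rng.randrange(n)
        j = (i + rng.choice((-1, 1))) % n  # a contact of player i
        if rng.random() < p_switch:
            s[i] = s[j]
    return s
```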

  9. Updating a B. anthracis Risk Model with Field Data from a Bioterrorism Incident.

    Science.gov (United States)

    Hong, Tao; Gurian, Patrick L

    2015-06-02

    In this study, a Bayesian framework was applied to update a model of pathogen fate and transport in the indoor environment. Distributions for model parameters (e.g., release quantity of B. anthracis spores, risk of illness, spore settling velocity, resuspension rate, sample recovery efficiency, etc.) were updated by comparing model predictions with measurements of B. anthracis spores made after one of the 2001 anthrax letter attacks. The updating process, which was implemented using Markov chain Monte Carlo (MCMC) methods, significantly reduced the uncertainties of inputs with uninformed prior estimates: total quantity of spores released, the amount of spores exiting the room, and risk to occupants. In contrast, uncertainties were not greatly reduced for inputs for which informed prior data were available: deposition rates, resuspension rates, and sample recovery efficiencies. This suggests that prior estimates of these quantities that were obtained from a review of the technical literature are consistent with the observed behavior of spores in an actual attack. Posterior estimates of mortality risk for people in the room when the spores were released are on the order of 0.01 to 0.1, which supports the decision to administer prophylactic antibiotics. Multivariate sensitivity analyses were conducted to assess how effective different measurements were at reducing uncertainty in the estimated risk for the prior scenario. This analysis revealed that if the size distribution of the released particulates is known, then environmental sampling can be limited to accurately characterizing floor concentrations; otherwise, samples from multiple locations, as well as particulate and building air circulation parameters, need to be measured.
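The MCMC updating step can be illustrated with a toy Metropolis sampler for a single release-quantity parameter; the forward model, Gaussian log-concentration likelihood, prior bounds and all numbers are invented for the sketch and are not the values of the cited study.

```python
import math
import random

def metropolis_release(obs_conc, model, prior_lo, prior_hi, sigma, n, rng):
    """Metropolis sampler updating a release quantity Q (spores).

    `model(Q)` predicts the observed floor concentration for a release of
    size Q; the likelihood is Gaussian in log-concentration with standard
    deviation `sigma`, and the prior is uniform on [prior_lo, prior_hi].
    """
    def log_post(q):
        if not (prior_lo <= q <= prior_hi):
            return -math.inf
        r = math.log(obs_conc) - math.log(model(q))
        return -0.5 * (r / sigma) ** 2

    q, samples = 0.5 * (prior_lo + prior_hi), []
    for _ in range(n):
        prop = q + rng.gauss(0, 0.05 * (prior_hi - prior_lo))
        if math.log(rng.random()) < log_post(prop) - log_post(q):
            q = prop  # accept the proposed release quantity
        samples.append(q)
    return samples
```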

  10. Blood vessel modeling for interactive simulation of interventional neuroradiology procedures.

    Science.gov (United States)

    Kerrien, E; Yureidini, A; Dequidt, J; Duriez, C; Anxionnat, R; Cotin, S

    2017-01-01

    Endovascular interventions can benefit from interactive simulation in their training phase, but also during pre-operative and intra-operative phases if simulation scenarios are based on patient data. A key feature in this context is the ability to extract, from patient images, models of blood vessels that impede neither the realism nor the performance of simulation. This paper addresses both the segmentation and reconstruction of the vasculature from 3D Rotational Angiography data, adapted to simulation: an original tracking algorithm is proposed to segment the vessel tree while filtering points extracted at the vessel surface in the vicinity of each point on the centerline; then an automatic procedure is described to reconstruct each local unstructured point set as a skeleton-based implicit surface (blobby model). The output of successively applying both algorithms is a new model of vasculature as a tree of local implicit models. The segmentation algorithm is compared with the Multiple Hypothesis Testing (MHT) algorithm (Friman et al., 2010) on patient data, showing its greater ability to track blood vessels. The reconstruction algorithm is evaluated on both synthetic and patient data and demonstrates its ability to fit points with subvoxel precision. Various tests are also reported where our model is used to simulate catheter navigation in interventional neuroradiology. Excellent realism and much lower computational costs are reported when compared to triangular mesh surface models.

  11. Lumping procedure for a kinetic model of catalytic naphtha reforming

    Directory of Open Access Journals (Sweden)

    H. M. Arani

    2009-12-01

    Full Text Available A lumping procedure is developed for obtaining kinetic and thermodynamic parameters of catalytic naphtha reforming. All kinetic and deactivation parameters are estimated from industrial data and thermodynamic parameters are calculated from derived mathematical expressions. The proposed model contains 17 lumps that include the C6 to C8+ hydrocarbon range and 15 reaction pathways. Hougen-Watson Langmuir-Hinshelwood type reaction rate expressions are used for kinetic simulation of catalytic reactions. The kinetic parameters are benchmarked with several sets of plant data and estimated by the SQP optimization method. After calculation of deactivation and kinetic parameters, plant data are compared with model predictions and only minor deviations between experimental and calculated data are generally observed.
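A Hougen-Watson Langmuir-Hinshelwood rate expression of the general form used in such kinetic models can be sketched as follows; the reaction A <=> B + H2 and all parameter names are illustrative placeholders, not one of the 15 actual reaction pathways of the cited model.

```python
def lhhw_rate(k, K_eq, K_ads, p):
    """Generic Hougen-Watson (Langmuir-Hinshelwood) rate expression.

    r = k * (p_A - p_B * p_H2 / K_eq) / (1 + sum_i K_ads[i] * p[i]):
    a reversible surface reaction A <=> B + H2, with the denominator
    modeling inhibition by adsorbed species.
    """
    driving = p["A"] - p["B"] * p["H2"] / K_eq      # thermodynamic driving force
    inhibition = 1.0 + sum(K_ads[s] * p[s] for s in K_ads)
    return k * driving / inhibition
```

At chemical equilibrium the driving-force term vanishes and the rate is zero, which is the built-in thermodynamic consistency of this family of expressions.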

  12. Procedures and Methods of Digital Modeling in Representation Didactics

    Science.gov (United States)

    La Mantia, M.

    2011-09-01

    At the Bachelor degree course in Engineering/Architecture of the University "La Sapienza" of Rome, the courses of Design and Survey, in addition to covering the methods of representation and the application of descriptive geometry and survey so as to expand the student's vision and spatial conception, pay particular attention to the use of information technology for the preparation of design and survey drawings, achieving their goals through an educational path of "learning techniques, procedures and methods of modeling architectural structures." The fields of application involved two different educational areas, design and survey, both ranging from the acquisition of metric data (design or survey) to the development of a three-dimensional virtual model.

  13. Finite element model updating using the shadow hybrid Monte Carlo technique

    Science.gov (United States)

    Boulkaibet, I.; Mthembu, L.; Marwala, T.; Friswell, M. I.; Adhikari, S.

    2015-02-01

    Recent research in the field of finite element model (FEM) updating advocates the adoption of Bayesian analysis techniques to deal with the uncertainties associated with these models. However, Bayesian formulations require the evaluation of the posterior distribution function, which may not be available in analytical form. This is the case in FEM updating. In such cases, sampling methods can provide good approximations of the posterior distribution when implemented in the Bayesian context. Markov Chain Monte Carlo (MCMC) algorithms are the most popular sampling tools used to sample probability distributions. However, the efficiency of these algorithms is affected by the complexity of the system (the size of the parameter space). Hybrid Monte Carlo (HMC) offers a very important MCMC approach to dealing with higher-dimensional complex problems. HMC uses molecular dynamics (MD) steps as the global Monte Carlo (MC) moves to reach areas of high probability, where the gradient of the log-density of the posterior acts as a guide during the search process. However, the acceptance rate of HMC is sensitive to the system size as well as to the time step used to evaluate the MD trajectory. To overcome this limitation we propose the use of the Shadow Hybrid Monte Carlo (SHMC) algorithm, a modified version of HMC designed to improve sampling for large system sizes and time steps. This is done by sampling from a modified Hamiltonian function instead of the normal Hamiltonian function. In this paper, the efficiency and accuracy of the SHMC method are tested on the updating of two real structures, an unsymmetrical H-shaped beam structure and a GARTEUR SM-AG19 structure, and compared to the application of the HMC algorithm on the same structures.
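The plain HMC baseline that SHMC modifies can be sketched as follows (1-D for brevity): leapfrog molecular-dynamics steps guided by the gradient of the log-density propose global moves, and a Metropolis test on the Hamiltonian corrects the time-step error. This is a generic sketch, not the authors' implementation.

```python
import math
import random

def hmc_sample(log_p, grad_log_p, x0, eps, n_leap, n_samples, rng):
    """Minimal 1-D Hybrid (Hamiltonian) Monte Carlo sampler."""
    x, samples = x0, []
    for _ in range(n_samples):
        p = rng.gauss(0, 1)                          # fresh momentum
        x_new, p_new = x, p
        p_new += 0.5 * eps * grad_log_p(x_new)       # leapfrog: half kick
        for _ in range(n_leap):
            x_new += eps * p_new                     # drift
            p_new += eps * grad_log_p(x_new)         # full kick
        p_new -= 0.5 * eps * grad_log_p(x_new)       # undo the extra half kick
        h_old = -log_p(x) + 0.5 * p * p              # Hamiltonian before
        h_new = -log_p(x_new) + 0.5 * p_new * p_new  # Hamiltonian after
        if math.log(rng.random()) < h_old - h_new:   # Metropolis test
            x = x_new
        samples.append(x)
    return samples
```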

  14. Inhibition, Updating Working Memory, and Shifting Predict Reading Disability Symptoms in a Hybrid Model: Project KIDS.

    Science.gov (United States)

    Daucourt, Mia C; Schatschneider, Christopher; Connor, Carol M; Al Otaiba, Stephanie; Hart, Sara A

    2018-01-01

    Recent achievement research suggests that executive function (EF), a set of regulatory processes that control both thought and action necessary for goal-directed behavior, is related to typical and atypical reading performance. This project examines the relation of EF, as measured by its components, Inhibition, Updating Working Memory, and Shifting, with a hybrid model of reading disability (RD). Our sample included 420 children who participated in a broader intervention project when they were in KG-third grade (age M = 6.63 years, SD = 1.04 years, range = 4.79-10.40 years). At the time their EF was assessed, using the parent-report Behavior Rating Inventory of Executive Function (BRIEF), they had a mean age of 13.21 years (SD = 1.54 years; range = 10.47-16.63 years). The hybrid model of RD was operationalized as a composite consisting of four symptoms, and set so that any child could have any one, any two, any three, any four, or none of the symptoms included in the hybrid model. The four symptoms include low word reading achievement, unexpected low word reading achievement, poorer reading comprehension compared to listening comprehension, and dual-discrepancy response-to-intervention, requiring both low achievement and low growth in word reading. The results of our multilevel ordinal logistic regression analyses showed a significant relation between all three components of EF (Inhibition, Updating Working Memory, and Shifting) and the hybrid model of RD, and that the strength of EF's predictive power for RD classification was highest when RD was modeled as having at least one or more symptoms. Importantly, the chances of being classified as having RD increased as EF performance worsened and decreased as EF performance improved. The question of whether any one EF component would emerge as a superior predictor was also examined, and results showed that Inhibition, Updating Working Memory, and Shifting were equally valuable as predictors of the hybrid model of RD

  15. Inhibition, Updating Working Memory, and Shifting Predict Reading Disability Symptoms in a Hybrid Model: Project KIDS

    Directory of Open Access Journals (Sweden)

    Mia C. Daucourt

    2018-03-01

    Full Text Available Recent achievement research suggests that executive function (EF, a set of regulatory processes that control both thought and action necessary for goal-directed behavior, is related to typical and atypical reading performance. This project examines the relation of EF, as measured by its components, Inhibition, Updating Working Memory, and Shifting, with a hybrid model of reading disability (RD. Our sample included 420 children who participated in a broader intervention project when they were in KG-third grade (age M = 6.63 years, SD = 1.04 years, range = 4.79–10.40 years. At the time their EF was assessed, using a parent-report Behavior Rating Inventory of Executive Function (BRIEF, they had a mean age of 13.21 years (SD = 1.54 years; range = 10.47–16.63 years. The hybrid model of RD was operationalized as a composite consisting of four symptoms, and set so that any child could have any one, any two, any three, any four, or none of the symptoms included in the hybrid model. The four symptoms include low word reading achievement, unexpected low word reading achievement, poorer reading comprehension compared to listening comprehension, and dual-discrepancy response-to-intervention, requiring both low achievement and low growth in word reading. The results of our multilevel ordinal logistic regression analyses showed a significant relation between all three components of EF (Inhibition, Updating Working Memory, and Shifting and the hybrid model of RD, and that the strength of EF’s predictive power for RD classification was the highest when RD was modeled as having at least one or more symptoms. Importantly, the chances of being classified as having RD increased as EF performance worsened and decreased as EF performance improved. The question of whether any one EF component would emerge as a superior predictor was also examined and results showed that Inhibition, Updating Working Memory, and Shifting were equally valuable as predictors of the

  16. Dose rates in nuclear medicine and the effectiveness of lead aprons: updating the department's knowledge on old and new procedures.

    Science.gov (United States)

    Young, Andy M

    2013-03-01

    Answers to common nuclear medicine radiation safety questions often involve the consideration of dose rates from injected patients and the inverse square law. For staff, lead aprons are available as an option, although they are not routinely used and their effectiveness varies depending on the isotope. New tests and procedures have been introduced at this hospital, including PET and Y microsphere implantation, which have required a review and investigation of their potential impact on staff doses. To answer these questions and to account for the recently introduced technologies and procedures, a study was conducted to measure and demonstrate the level of effectiveness of the department's lead aprons and to simulate patient dose rate measurements and estimations by obtaining measurements from water phantoms filled with the relevant isotopes. A calibrated survey meter was used to measure dose rates at varying distances from water phantoms filled with Tc, Ga, I, F and Y. Thermoluminescence dosimeters attached to an anthropomorphic phantom wearing a lead apron were used to assess the effectiveness of the lead aprons available within the department. An uncollimated detector from a gamma camera was used to observe the changes to the energy spectrum in the presence of the lead apron. The results from the dose rate measurements demonstrated an overestimation by the inverse square law at close distances. This overestimation can be in excess of four times the measurements made within this study. The use of a lead apron was shown to reduce doses by varying degrees depending on the isotope used. A 64.5% dose reduction was observed when shielding against Tc, with diminishing effectiveness against the remaining isotopes. The results for Y suggest that using a lead apron could result in dose escalation at shallow depths. A table of conversion factors, independent of the isotope, was generated for the estimation of dose rates from injected patients at various distances. An isotope
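The reported overestimation by the inverse square law at close distances can be illustrated by comparing a point source with the same activity spread over a disk (a crude stand-in for a phantom or injected patient); the geometry and numbers are illustrative assumptions, not the study's measurements.

```python
import math

def point_dose_rate(gamma, activity, r):
    """Point-source estimate: plain inverse square law."""
    return gamma * activity / r**2

def disk_dose_rate(gamma, activity, radius, r, rings=1000):
    """Dose rate on the axis of a uniformly loaded disk source.

    The activity is spread over a disk of the given radius; summing ring
    by ring shows why the point-source inverse square law overestimates
    dose rates when the distance r is comparable to the source size.
    """
    per_area = activity / (math.pi * radius**2)
    total, dr = 0.0, radius / rings
    for i in range(rings):
        a = (i + 0.5) * dr                    # ring radius
        area = 2 * math.pi * a * dr
        total += gamma * per_area * area / (a**2 + r**2)
    return total
```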

  17. Application of a Bayesian algorithm for the Statistical Energy model updating of a railway coach

    DEFF Research Database (Denmark)

    Sadri, Mehran; Brunskog, Jonas; Younesian, Davood

    2016-01-01

    The classical statistical energy analysis (SEA) theory is a common approach for vibroacoustic analysis of coupled complex structures, being efficient to predict high-frequency noise and vibration of engineering systems. There are, however, some limitations in applying the conventional SEA. To evaluate the performance of the proposed strategy, the SEA model updating of a railway passenger coach is carried out. First, a sensitivity analysis is carried out to select the most sensitive parameters of the SEA model. For the selected parameters of the model, prior probability density functions are then taken...
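The SEA power balance underlying such a model can be illustrated for two coupled subsystems; the loss factors and input powers below are invented for the sketch, and modal-density factors are folded into the coupling terms for brevity.

```python
def sea_energies(omega, eta, eta_c, p_in):
    """Steady-state energies of a two-subsystem SEA model.

    Power balance at angular frequency omega:
        P1 = omega * ((eta1 + eta12) * E1 - eta21 * E2)
        P2 = omega * ((eta2 + eta21) * E2 - eta12 * E1)
    solved in closed form with Cramer's rule.
    """
    eta1, eta2 = eta          # damping loss factors
    eta12, eta21 = eta_c      # coupling loss factors
    a, b = eta1 + eta12, -eta21
    c, d = -eta12, eta2 + eta21
    det = a * d - b * c
    p1, p2 = (p / omega for p in p_in)
    return ((p1 * d - b * p2) / det, (a * p2 - c * p1) / det)
```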

  18. Experimental Update of the Overtopping Model Used for the Wave Dragon Wave Energy Converter

    DEFF Research Database (Denmark)

    Parmeggiani, Stefano; Kofoed, Jens Peter; Friis-Madsen, Erik

    2013-01-01

    An overtopping model specifically suited for Wave Dragon is needed in order to improve the reliability of its performance estimates. The model shall be comprehensive of all relevant physical processes that affect overtopping and flexible enough to adapt to any local conditions and device configuration. An experimental investigation is carried out to update an existing formulation suited for 2D draft-limited, low-crested structures, in order to include the effects on the overtopping flow of the wave steepness, the 3D geometry of Wave Dragon, the wing reflectors, the device motions and the non-rigid connection between platform and reflectors. The study is carried out in four phases, each of them specifically targeted at quantifying one of these effects through a sensitivity analysis and at modeling it through custom-made parameters, depending on features of the wave or the device configuration...

  19. Updated procedures for using drill cores and cuttings at the Lithologic Core Storage Library, Idaho National Laboratory, Idaho

    Science.gov (United States)

    Hodges, Mary K.V.; Davis, Linda C.; Bartholomay, Roy C.

    2018-01-30

    In 1990, the U.S. Geological Survey, in cooperation with the U.S. Department of Energy Idaho Operations Office, established the Lithologic Core Storage Library at the Idaho National Laboratory (INL). The facility was established to consolidate, catalog, and permanently store nonradioactive drill cores and cuttings from subsurface investigations conducted at the INL, and to provide a location for researchers to examine, sample, and test these materials. The facility is open by appointment to researchers for examination, sampling, and testing of cores and cuttings. This report describes the facility and cores and cuttings stored at the facility. Descriptions of cores and cuttings include the corehole names, corehole locations, and depth intervals available. Most cores and cuttings stored at the facility were drilled at or near the INL, on the eastern Snake River Plain; however, two cores drilled on the western Snake River Plain are stored for comparative studies. Basalt, rhyolite, sedimentary interbeds, and surficial sediments compose most cores and cuttings, most of which are continuous from land surface to their total depth. The deepest continuously drilled core stored at the facility was drilled to 5,000 feet below land surface. This report describes procedures and researchers' responsibilities for access to the facility and for examination, sampling, and return of materials.

  20. Summary of Expansions, Updates, and Results in GREET® 2016 Suite of Models

    Energy Technology Data Exchange (ETDEWEB)

    None, None

    2016-10-01

    This report documents the technical content of the expansions and updates in Argonne National Laboratory’s GREET® 2016 release and provides references and links to key documents related to these expansions and updates.

  1. Feature Classification for Robust Shape-Based Collaborative Tracking and Model Updating

    Directory of Open Access Journals (Sweden)

    C. S. Regazzoni

    2008-09-01

    Full Text Available A new collaborative tracking approach is introduced which takes advantage of classified features. The core of this tracker is a single tracker that is able to detect occlusions and classify the features contributing to localizing the object. Features are classified into four classes: good, suspicious, malicious, and neutral. Good features are estimated to be parts of the object with a high degree of confidence. Suspicious ones have a lower, yet still significant, degree of confidence of being part of the object. Malicious features are estimated to be generated by clutter, while neutral features lack sufficient confidence to be assigned to the tracked object. When there is no occlusion, the single tracker acts alone, and the feature classification module helps it to overcome distracters such as still objects or light clutter in the scene. When the bounding boxes of two or more tracked moving objects are close enough, the collaborative tracker is activated; it exploits the classified features to localize each object precisely and to update the object shape models more precisely by reassigning the classified features to the objects. The experimental results show successful tracking compared with a collaborative tracker that does not use classified features. Moreover, more precisely updated object shape models are shown.
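The four-class feature labelling can be sketched as threshold tests on a per-feature object confidence and a clutter score; the score definitions and thresholds are illustrative assumptions, not the classifier of the cited tracker.

```python
def classify_feature(conf, clutter_score, good_t=0.8, susp_t=0.5, mal_t=0.7):
    """Assign a tracked feature to one of four classes.

    `conf` is the estimated probability that the feature belongs to the
    object; `clutter_score` is the evidence that it was generated by
    clutter. All thresholds are illustrative defaults.
    """
    if clutter_score > mal_t:
        return "malicious"   # likely generated by clutter
    if conf >= good_t:
        return "good"        # part of the object, high confidence
    if conf >= susp_t:
        return "suspicious"  # lower but still significant confidence
    return "neutral"         # not enough confidence to assign
```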

  2. Updated Life-Cycle Assessment of Aluminum Production and Semi-fabrication for the GREET Model

    Energy Technology Data Exchange (ETDEWEB)

    Dai, Qiang [Argonne National Lab. (ANL), Argonne, IL (United States); Kelly, Jarod C. [Argonne National Lab. (ANL), Argonne, IL (United States); Burnham, Andrew [Argonne National Lab. (ANL), Argonne, IL (United States); Elgowainy, Amgad [Argonne National Lab. (ANL), Argonne, IL (United States)

    2015-09-01

    This report serves as an update to the life-cycle analysis (LCA) of aluminum production, based on the most recent data representing the state of the art of the industry in North America. The 2013 Aluminum Association (AA) LCA report on the environmental footprint of semi-finished aluminum products in North America provides the basis for the update (The Aluminum Association, 2013). The scope of this study covers primary aluminum production and secondary aluminum production, as well as aluminum semi-fabrication processes including hot rolling, cold rolling, extrusion and shape casting. This report focuses on energy consumption, material inputs and criteria air pollutant emissions for each process from the cradle to the gate of aluminum, which starts with bauxite extraction and ends with the manufacturing of semi-fabricated aluminum products. The life-cycle inventory (LCI) tables compiled are to be incorporated into the vehicle-cycle model of Argonne National Laboratory's Greenhouse Gases, Regulated Emissions, and Energy Use in Transportation (GREET) Model for the release of its 2015 version.

  3. A sow replacement model using Bayesian updating in a three-level hierarchic Markov process. I. Biological model

    DEFF Research Database (Denmark)

    Kristensen, Anders Ringgaard; Søllested, Thomas Algot

    2004-01-01

    Several replacement models have been presented in the literature. In other application areas, like dairy cow replacement, various methodological improvements like hierarchical Markov processes and Bayesian updating have been implemented, but not in sow models. Furthermore, there are methodological improvements like multi-level hierarchical Markov processes with decisions on multiple time scales, efficient methods for parameter estimation at herd level and standard software that have hardly been implemented at all in any replacement model. The aim of this study is to present a sow replacement model that really uses all these methodological improvements. In this paper, the biological model describing the performance and feed intake of sows is presented. In particular, estimation of herd-specific parameters is emphasized. The optimization model is described in a subsequent paper...

  4. State Token Petri Net modeling method for formal verification of computerized procedure including operator's interruptions of procedure execution flow

    International Nuclear Information System (INIS)

    Kim, Yun Goo; Seong, Poong Hyun

    2012-01-01

    The Computerized Procedure System (CPS) is one of the primary operating support systems in the digital Main Control Room. The CPS displays the procedure on the computer screen in the form of a flow chart, and displays plant operating information along with procedure instructions. It also supports operator decision making by providing a system decision. A procedure flow should be correct and reliable, as an error would lead to operator misjudgement and inadequate control. In this paper we present a modeling method for the CPS that enables formal verification based on Petri nets. The proposed State Token Petri Nets (STPN) also support modeling of a procedure flow that has various interruptions by the operator, according to the plant condition. STPN modeling is compared with Coloured Petri nets when applied to an Emergency Operating Computerized Procedure. A program for converting a Computerized Procedure (CP) to STPN has also been developed. The formal verification and validation methods of CP with STPN increase the safety of a nuclear power plant and provide the digital quality assurance means that are needed as the role and function of the CPS increase.
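The token-firing kernel underlying any Petri-net-based verification can be sketched as follows; this is a plain place/transition net, not the full STPN formalism with operator interruptions.

```python
def fire(transitions, marking, name):
    """Fire one transition of a place/transition Petri net.

    `transitions` maps a transition name to (inputs, outputs); a
    transition is enabled when every input place holds at least one
    token. Firing consumes one token per input place and produces one
    per output place, returning the new marking.
    """
    inputs, outputs = transitions[name]
    if any(marking.get(p, 0) < 1 for p in inputs):
        raise ValueError(f"transition {name!r} is not enabled")
    m = dict(marking)
    for p in inputs:
        m[p] -= 1
    for p in outputs:
        m[p] = m.get(p, 0) + 1
    return m
```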

  5. GENERATING ALTERNATIVE PROPOSALS FOR THE LOUVRE USING PROCEDURAL MODELING

    Directory of Open Access Journals (Sweden)

    E. Calogero

    2012-09-01

    Full Text Available This paper presents the process of reconstructing two facade designs for the East wing of the Louvre using procedural modeling. The first proposal reconstructed is Louis Le Vau's 1662 scheme and the second is the 1668 design of the "petit conseil" that still stands today. The initial results presented show how such reconstructions may aid general and expert understanding of the two designs. It is claimed that by formalizing the facade description into a shape grammar in CityEngine, a systematized approach to a stylistic analysis is possible. It is also asserted that such an analysis is still best understood in the historical context of what is known about the contemporary design intentions of the building creators and commissioners.

  6. Developing Physiologic Models for Emergency Medical Procedures Under Microgravity

    Science.gov (United States)

    Parker, Nigel; O'Quinn, Veronica

    2012-01-01

    Several technological enhancements have been made to METI's commercial Emergency Care Simulator (ECS) with regard to how microgravity affects human physiology. The ECS uses both a software-only lung simulation and an integrated mannequin lung that uses a physical lung bag for creating chest excursions, together with a digital simulation of lung mechanics and gas exchange. METI's patient simulators incorporate models of human physiology that simulate lung and chest wall mechanics, as well as pulmonary gas exchange. Microgravity affects how O2 and CO2 are exchanged in the lungs. Procedures were also developed to take into account the Glasgow Coma Scale for determining levels of consciousness, by varying the ECS eye-blinking function to partially indicate the level of consciousness of the patient. In addition, the ECS was modified to provide various levels of pulses, from weak and thready to hyper-dynamic, to assist in assessing patient conditions from the femoral, carotid, brachial, and pedal pulse locations.

  7. Generating Alternative Proposals for the Louvre Using Procedural Modeling

    Science.gov (United States)

    Calogero, E.; Arnold, D.

    2011-09-01

    This paper presents the process of reconstructing two facade designs for the East wing of the Louvre using procedural modeling. The first proposal reconstructed is Louis Le Vau's 1662 scheme and the second is the 1668 design of the "petit conseil" that still stands today. The initial results presented show how such reconstructions may aid general and expert understanding of the two designs. It is claimed that by formalizing the facade description into a shape grammar in CityEngine, a systematized approach to a stylistic analysis is possible. It is also asserted that such an analysis is still best understood in the historical context of what is known about the contemporary design intentions of the building creators and commissioners.

  8. Effects of lateral boundary condition resolution and update frequency on regional climate model predictions

    Science.gov (United States)

    Pankatz, Klaus; Kerkweg, Astrid

    2015-04-01

    The work presented is part of the joint project "DecReg" ("Regional decadal predictability"), which is in turn part of the project "MiKlip" ("Decadal predictions"), an effort funded by the German Federal Ministry of Education and Research to improve decadal predictions on a global and regional scale. In MiKlip, one big question is whether regional climate modeling shows "added value", i.e. whether regional climate models (RCMs) produce better results than the driving models. The scope of this study, however, is to look more closely at setup-specific details of regional climate modeling. As regional models simulate only a small domain, they must inherit information about the state of the atmosphere at their lateral boundaries from external data sets. There are many unresolved questions concerning the setup of lateral boundary conditions (LBCs). External data sets come from global models or from global reanalyses. A temporal resolution of six hours is common for this kind of data, mainly because storage space is a limiting factor, especially for climate simulations. Theoretically, however, the coupling frequency could be as high as the time step of the driving model, and it is unclear whether a more frequent update of the LBCs has a significant effect on the climate in the domain of the RCM. The first study examines how the RCM reacts to a higher update frequency. It is based on a 30-year time-slice experiment with three update frequencies of the LBCs, namely six hours, one hour and six minutes. The evaluation of means, standard deviations and statistics of the climate in the regional domain shows only small deviations, some statistically significant though, in 2 m temperature, sea level pressure and precipitation. The second part of the first study assesses parameters linked to cyclone activity, which is affected by the LBC update frequency. Differences in track density and strength are found when comparing the simulations.

  9. A procedure for determining parameters of a simplified ligament model.

    Science.gov (United States)

    Barrett, Jeff M; Callaghan, Jack P

    2018-01-03

    A previous mathematical model of ligament force generation treated ligament behavior as that of a population of collagen fibres arranged in parallel. When damage was ignored in this model, an expression was obtained for ligament force in terms of the deflection x, effective stiffness k, mean collagen slack length μ, and standard deviation of slack lengths σ. We present a simple three-step method for determining the three model parameters (k, μ, and σ) from force-deflection data: (1) determine the equation of the line in the linear region of this curve; its slope is k and its x-intercept is -μ; (2) interpolate the force-deflection data at x = -μ to obtain F0; (3) calculate σ with the equation σ = √(2π)·F0/k. Results from this method were in good agreement with those obtained from a least-squares procedure on experimental data, all falling within 6%. Parameters obtained using the proposed method therefore provide a systematic way of reporting ligament parameters, or an initial guess for nonlinear least-squares. Copyright © 2017 Elsevier Ltd. All rights reserved.
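The three-step procedure can be sketched as follows, assuming (as in the underlying fibre-population model) normally distributed slack lengths, so that the force on the actual curve at the linear asymptote's x-intercept satisfies σ = √(2π)·F0/k. The synthetic data and the 70th-percentile cutoff for the "linear region" are illustrative assumptions, not values from the paper.

```python
import numpy as np
from math import erf, sqrt, pi

def ligament_force(x, k, mu, sigma):
    # F(x) = k * E[(x - l)^+] with slack lengths l ~ N(-mu, sigma^2):
    # F = k*sigma*(z*Phi(z) + phi(z)), where z = (x + mu)/sigma.
    z = (x + mu) / sigma
    pdf = np.exp(-0.5 * z**2) / np.sqrt(2 * np.pi)
    cdf = 0.5 * (1 + np.array([erf(v / sqrt(2)) for v in z]))
    return k * sigma * (z * cdf + pdf)

def fit_three_step(x, F):
    # Step 1: straight line through the linear region; slope k, x-intercept -mu.
    lin = x > np.percentile(x, 70)      # assume the top 30% of deflections is linear
    k_hat, b = np.polyfit(x[lin], F[lin], 1)
    mu_hat = b / k_hat                  # x-intercept -b/k equals -mu
    # Step 2: interpolate the measured curve at x = -mu to obtain F0.
    F0 = np.interp(-mu_hat, x, F)
    # Step 3: sigma = sqrt(2*pi) * F0 / k.
    sigma_hat = sqrt(2 * pi) * F0 / k_hat
    return k_hat, mu_hat, sigma_hat

# Synthetic noiseless force-deflection curve (units arbitrary)
x = np.linspace(-4.0, 6.0, 200)
F = ligament_force(x, k=50.0, mu=2.0, sigma=0.8)
k_hat, mu_hat, sigma_hat = fit_three_step(x, F)
print(k_hat, mu_hat, sigma_hat)
```

On noiseless synthetic data the three steps recover k, μ and σ essentially exactly, which is the behaviour the abstract reports (agreement within 6% on experimental data).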

  10. Experiences with a procedure for modeling product knowledge

    DEFF Research Database (Denmark)

    Hansen, Benjamin Loer; Hvam, Lars

    2002-01-01

    This paper presents experiences with a procedure for building configurators. The procedure has been used in an American company producing custom-made precision air conditioning equipment. The paper describes experiences with the use of the procedure and experiences with the project in general....

  11. Update of the hydrogeologic model of the Cerro Prieto field based on recent well data

    Energy Technology Data Exchange (ETDEWEB)

    Halfman, S.E.; Manon, A.; Lippmann, M.J.

    1986-01-01

    The hydrogeologic model of the Cerro Prieto geothermal field in Baja California, Mexico has been updated and modified on the basis of geologic and reservoir engineering data from 21 newly completed wells. Previously, only two reservoirs had been discovered: the shallow α reservoir and the deeper β reservoir. Recently, three deep wells drilled east of the main wellfield penetrated a third geothermal reservoir (called the γ reservoir) below the sandstones corresponding to the β reservoir in the main part of the field. The new well data delimit the β reservoir, confirm the important role of Fault H in controlling the flow of geothermal fluids, and enable us to refine the hydrogeologic model of the field.

  12. Updating the CHAOS series of field models using Swarm data and resulting candidate models for IGRF-12

    DEFF Research Database (Denmark)

    Finlay, Chris; Olsen, Nils; Tøffner-Clausen, Lars

    Ten months of data from ESA's Swarm mission, together with recent ground observatory monthly means, are used to update the CHAOS series of geomagnetic field models with a focus on time-changes of the core field. As for previous CHAOS field models quiet-time, night-side, data selection criteria … th order spline representation with knot points spaced at 0.5 year intervals. The resulting field model is able to consistently fit data from six independent low Earth orbit satellites: Oersted, CHAMP, SAC-C and the three Swarm satellites. As an example, we present comparisons of the excellent model … fit obtained to both the Swarm data and the CHAMP data. The new model also provides a good description of observatory secular variation, capturing rapid field evolution events during the past decade. Maps of the core surface field and its secular variation can already be extracted in the Swarm-era. We …

  13. Role of sensory information in updating internal models of the effector during arm tracking.

    Science.gov (United States)

    Vercher, Jean-Louis; Sarès, Frédéric; Blouin, Jean; Bourdin, Christophe; Gauthier, Gabriel

    2003-01-01

    This chapter is divided into three main parts. Firstly, on the basis of the literature, we will briefly discuss how the recent introduction of the concept of internal models by Daniel Wolpert and Mitsuo Kawato contributes to a better understanding of what motor learning and motor adaptation are. Then, we will present a model of eye-hand co-ordination during self-moved target tracking, which we used as a way to specifically address these topics. Finally, we will show some evidence about the use of proprioceptive information for updating the internal models, in the context of eye-hand co-ordination. Motor and afferent information appears to contribute to the parametric adjustment (adaptation) between arm motor command and visual information about arm motion. The study reported here was aimed at assessing the contribution of arm proprioception in building (learning) and updating (adaptation) these representations. The subjects (including a deafferented subject) had to make back and forth movements with their forearm in the horizontal plane, over a learned amplitude and at constant frequency, and to track an arm-driven target with their eyes. The dynamical conditions of arm movement were altered (unexpectedly or systematically) during the movement by changing the mechanical properties of the manipulandum. The results showed a significant change of the latency and the gain of the smooth pursuit system, before and after the perturbation for the control subjects, but not for the deafferented subject. Moreover, in control subjects, vibration of the arm muscles prevented adaptation to the mechanical perturbation. These results suggest that in a self-moved target tracking task, the arm motor system shares with the smooth pursuit system an internal representation of the arm's dynamical properties, and that arm proprioception is necessary to build this internal model. As suggested by Ghez et al. (1990) (Cold Spring Harbor Symp. Quant. Biol., 55: 837-847), proprioception

  14. Enhancing paramedics procedural skills using a cadaveric model.

    Science.gov (United States)

    Lim, David; Bartlett, Stephen; Horrocks, Peter; Grant-Wakefield, Courtenay; Kelly, Jodie; Tippett, Vivienne

    2014-07-08

    Paramedic education has evolved in recent times from vocational post-employment training to tertiary pre-employment education supplemented by clinical placement. Simulation is advocated as a means of transferring learned skills to clinical practice. Sole reliance on simulation learning using mannequin-based models may not be sufficient to prepare students for variance in human anatomy. In 2012, we trialled the use of fresh-frozen human cadavers to supplement undergraduate paramedic procedural skill training. The purpose of this study is to evaluate whether cadaveric training is an effective adjunct to mannequin simulation and clinical placement. A multi-method approach was adopted. The first step involved a Delphi methodology to formulate and validate the evaluation instrument. The instrument comprised knowledge-based MCQs, Likert scales for self-evaluation of procedural skills and behaviours, and open-answer items. The second step involved a pre-post evaluation of the 2013 cadaveric training. One hundred and fourteen students attended the workshop and 96 evaluations were included in the analysis, representing a return rate of 84%. There was statistically significant improvement in anatomical knowledge after the workshop. Students' self-rated confidence in performing procedural skills on real patients improved significantly after the workshop: inserting laryngeal mask (MD 0.667), oropharyngeal (MD 0.198) and nasopharyngeal (MD 0.600) airways, performing bag-valve-mask ventilation (MD 0.379), double (MD 0.344) and triple (MD 0.326) airway manoeuvres, performing 12-lead electrocardiography (MD 0.729), using a laryngoscope (MD 0.726), using Magill® forceps to remove a foreign body (MD 0.632), attempting thoracocentesis (MD 1.240), and applying a traction splint (MD 0.865). The students commented that the workshop provided context to their theoretical knowledge and that they gained an appreciation of the differences in normal tissue variation. Following completion of the workshop, students were more aware

  15. Application of a procedure oriented crew model to modelling nuclear plant operation

    International Nuclear Information System (INIS)

    Baron, S.

    1986-01-01

    PROCRU (PROCEDURE-ORIENTED CREW MODEL) is a model developed to analyze flight crew procedures in a commercial ILS approach-to-landing. The model builds on earlier, validated control-theoretic models for human estimation and control behavior, but incorporates features appropriate to analyzing supervisory control in multi-task environments. In this paper, the basic ideas underlying the PROCRU model, and the generalization of these ideas to provide a supervisory control model of wider applicability, are discussed. The potential application of this supervisory control model to nuclear power plant operations is considered. The range of problems that can be addressed, the kinds of data that will be needed, and the nature of the results that might be expected from such an application are indicated.

  16. Sensitivity of Hydrologic Response to Climate Model Debiasing Procedures

    Science.gov (United States)

    Channell, K.; Gronewold, A.; Rood, R. B.; Xiao, C.; Lofgren, B. M.; Hunter, T.

    2017-12-01

    Climate change is already having a profound impact on the global hydrologic cycle. In the Laurentian Great Lakes, changes in long-term evaporation and precipitation can lead to rapid water level fluctuations in the lakes, as evidenced by unprecedented change in water levels seen in the last two decades. These fluctuations often have an adverse impact on the region's human, environmental, and economic well-being, making accurate long-term water level projections invaluable to regional water resources management planning. Here we use hydrological components from a downscaled climate model (GFDL-CM3/WRF), to obtain future water supplies for the Great Lakes. We then apply a suite of bias correction procedures before propagating these water supplies through a routing model to produce lake water levels. Results using conventional bias correction methods suggest that water levels will decline by several feet in the coming century. However, methods that reflect the seasonal water cycle and explicitly debias individual hydrological components (overlake precipitation, overlake evaporation, runoff) imply that future water levels may be closer to their historical average. This discrepancy between debiased results indicates that water level forecasts are highly influenced by the bias correction method, a source of sensitivity that is commonly overlooked. Debiasing, however, does not remedy misrepresentation of the underlying physical processes in the climate model that produce these biases and contribute uncertainty to the hydrological projections. This uncertainty coupled with the differences in water level forecasts from varying bias correction methods are important for water management and long term planning in the Great Lakes region.
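The abstract does not name the specific bias correction methods compared; as a hedged illustration, one widely used debiasing technique for a single hydrological component is empirical quantile mapping. The sketch below uses entirely synthetic "runoff" data and is not the study's code.

```python
import numpy as np

def quantile_map(model_hist, obs_hist, model_fut, n_q=101):
    """Empirical quantile mapping: look up each future model value's quantile
    in the historical model distribution and return the observed value at
    that quantile."""
    q = np.linspace(0.0, 1.0, n_q)
    mq = np.quantile(model_hist, q)        # historical model quantiles
    oq = np.quantile(obs_hist, q)          # historical observed quantiles
    return np.interp(model_fut, mq, oq)    # values outside the range are clipped

rng = np.random.default_rng(0)
obs = rng.gamma(2.0, 40.0, 5000)               # "observed" runoff (mm), synthetic
mod = rng.gamma(2.0, 40.0, 5000) * 1.3 + 10.0  # biased model counterpart
fut = rng.gamma(2.0, 40.0, 5000) * 1.3 + 10.0  # biased future projection
corrected = quantile_map(mod, obs, fut)
print(np.mean(obs), np.mean(fut), np.mean(corrected))
```

Applying such a mapping separately to overlake precipitation, overlake evaporation and runoff, and seasonally rather than to the annual series, corresponds to the component-wise, seasonal debiasing the abstract contrasts with conventional methods.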

  17. Improving prediction models with new markers: a comparison of updating strategies

    Directory of Open Access Journals (Sweden)

    D. Nieboer

    2016-09-01

    Full Text Available Abstract Background New markers hold the promise of improving risk prediction for individual patients. We aimed to compare the performance of different strategies for extending a previously developed prediction model with a new marker. Methods Our motivating example was the extension of a risk calculator for prostate cancer with a new marker that was available in a relatively small dataset. Performance of the strategies was also investigated in simulations. Development, marker and test sets with different sample sizes originating from the same underlying population were generated. A prediction model was fitted using logistic regression in the development set, extended using the marker set and validated in the test set. Extension strategies considered were re-estimating individual regression coefficients, updating of predictions using conditional likelihood ratios (LR), and imputation of marker values in the development set followed by fitting a model in the combined development and marker sets. Sample sizes considered for the development and marker sets were 500 and 100, 500 and 500, and 100 and 500 patients. Discriminative ability of the extended models was quantified using the concordance statistic (c-statistic), and calibration was quantified using the calibration slope. Results All strategies led to extended models with increased discrimination (c-statistic increase from 0.75 to 0.80 in test sets). Strategies estimating a large number of parameters (re-estimation of all coefficients and updating using conditional LR) led to overfitting (calibration slope below 1). Parsimonious methods, limiting the number of coefficients to be re-estimated or applying shrinkage after model revision, limited the amount of overfitting. Combining the development and marker sets using imputation of missing marker values led to consistently well-performing models in all scenarios. Similar results were observed in the motivating example. Conclusion When the
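One parsimonious extension strategy of the kind described, keeping the original model's linear predictor fixed as an offset and estimating only the new marker's coefficient, can be sketched as follows. This is an illustrative reconstruction on synthetic data, not the study's code; `fit_logistic` is a hypothetical helper.

```python
import numpy as np

def fit_logistic(X, y, offset=None, n_iter=25):
    """Logistic regression via Newton-Raphson, with an optional fixed offset
    added to the linear predictor (illustrative helper)."""
    beta = np.zeros(X.shape[1])
    off = np.zeros(len(y)) if offset is None else offset
    for _ in range(n_iter):
        mu = 1.0 / (1.0 + np.exp(-(off + X @ beta)))
        w = mu * (1.0 - mu)
        beta += np.linalg.solve((X * w[:, None]).T @ X, X.T @ (y - mu))
    return beta

rng = np.random.default_rng(1)
n = 2000
x1 = rng.normal(size=n)                     # existing predictor
m = rng.normal(size=n)                      # new marker
p_true = 1.0 / (1.0 + np.exp(-(-0.5 + 1.0 * x1 + 0.8 * m)))
y = rng.binomial(1, p_true)

# "Development" model: intercept + x1 only (marker not yet available).
Xdev = np.column_stack([np.ones(n), x1])
b_dev = fit_logistic(Xdev, y)

# Parsimonious extension: freeze the old linear predictor as an offset and
# re-estimate only an intercept correction plus the marker coefficient.
lp = Xdev @ b_dev
Xm = np.column_stack([np.ones(n), m])
b_ext = fit_logistic(Xm, y, offset=lp)
print(b_dev, b_ext)
```

Because only two parameters are estimated in the extension step, this strategy is far less prone to the overfitting (calibration slope below 1) that the abstract reports for full re-estimation.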

  18. Worst case prediction of additives migration from polystyrene for food safety purposes: a model update.

    Science.gov (United States)

    Martínez-López, Brais; Gontard, Nathalie; Peyron, Stéphane

    2018-03-01

    A reliable prediction of migration levels of plastic additives into food requires a robust estimation of diffusivity. Predictive modelling of diffusivity as recommended by the EU Commission is carried out using a semi-empirical equation that relies on two polymer-dependent parameters. These parameters were determined for the polymers most used by the packaging industry (LLDPE, HDPE, PP, PET, PS, HIPS) from the diffusivity data available at the time. In the specific case of general-purpose polystyrene, the diffusivity data published since then show that use of the equation with the original parameters results in systematic underestimation of diffusivity. The goal of this study was therefore to propose an update of the aforementioned parameters for PS on the basis of up-to-date diffusivity data, so that the equation can be used for a reasoned overestimation of diffusivity.
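The semi-empirical equation referred to is commonly written in the Piringer form, D = 10^4 · exp(A_P − 0.1351·M^(2/3) + 0.003·M − 10454/T) cm²/s with A_P = A_P′ − τ/T, where A_P′ and τ are the two polymer-dependent parameters. The sketch below uses an illustrative A_P′ value, not the updated PS parameters proposed by the study.

```python
from math import exp

def piringer_diffusivity(M, T, Ap_prime, tau=0.0):
    """Worst-case diffusion coefficient (cm^2/s) from the Piringer-type
    semi-empirical equation used in EU migration modelling:
        D = 1e4 * exp(Ap - 0.1351*M**(2/3) + 0.003*M - 10454/T)
    with Ap = Ap_prime - tau/T.
    M: migrant molar mass (g/mol); T: temperature (K).
    Ap_prime, tau: the two polymer-dependent parameters the abstract
    refers to; the value used below is illustrative only."""
    Ap = Ap_prime - tau / T
    return 1e4 * exp(Ap - 0.1351 * M ** (2.0 / 3.0) + 0.003 * M - 10454.0 / T)

# Example: a 400 g/mol additive at 40 degC with an assumed Ap' = 0.0
D = piringer_diffusivity(400.0, 313.15, Ap_prime=0.0)
print(D)
```

Raising A_P′ for a given polymer, which is the direction of the update the abstract argues for in PS, uniformly raises the predicted D and thus restores the intended "reasoned overestimation" of migration.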

  19. [Social determinants of health and disability: updating the model for determination].

    Science.gov (United States)

    Tamayo, Mauro; Besoaín, Álvaro; Rebolledo, Jaime

    Social determinants of health (SDH) are the conditions in which people live. These conditions impact their lives, health status and level of social inclusion. In line with the conceptual and comprehensive progression of disability, it is important to update the SDH model, given the broad implications for implementing health interventions in society. This proposal supports incorporating disability into the model as a structural determinant, as it would lead to the same social inclusion/exclusion of people described for other structural SDH. This proposal encourages giving importance to designing and implementing public policies to improve societal conditions and contribute to social equity. This will be an act of reparation, justice and fulfilment of the Convention on the Rights of Persons with Disabilities. Copyright © 2017 SESPAS. Published by Elsevier España, S.L.U. All rights reserved.

  20. Towards a neural basis of music perception -- A review and updated model

    Directory of Open Access Journals (Sweden)

    Stefan eKoelsch

    2011-06-01

    Full Text Available Music perception involves acoustic analysis, auditory memory, auditory scene analysis, processing of interval relations, of musical syntax and semantics, and activation of (pre)motor representations of actions. Moreover, music perception potentially elicits emotions, thus giving rise to the modulation of emotional effector systems such as the subjective feeling system, the autonomic nervous system, the hormonal, and the immune system. Building on a previous article (Koelsch & Siebel, 2005), this review presents an updated model of music perception and its neural correlates. The article describes processes involved in music perception, and reports EEG and fMRI studies that inform about the time course of these processes, as well as about where in the brain these processes might be located.

  1. The Cornell Net Carbohydrate and Protein System: Updates to the model and evaluation of version 6.5.

    Science.gov (United States)

    Van Amburgh, M E; Collao-Saenz, E A; Higgs, R J; Ross, D A; Recktenwald, E B; Raffrenato, E; Chase, L E; Overton, T R; Mills, J K; Foskolos, A

    2015-09-01

    New laboratory and animal sampling methods and data have been generated over the last 10 yr that have the potential to improve the predictions of energy, protein, and AA supply and requirements in the Cornell Net Carbohydrate and Protein System (CNCPS). The objectives of this study were to describe updates to the CNCPS and evaluate model performance against both literature and on-farm data. The changes to the feed library were significant and are reported in a separate manuscript. Degradation rates of protein and carbohydrate fractions were adjusted according to new fractionation schemes, and corresponding changes to equations used to calculate rumen outflows and postrumen digestion are presented. In response to the feed-library changes and an increased supply of essential AA resulting from updated AA contents, a combined efficiency of use was adopted in place of separate calculations for maintenance and lactation to better represent the biology of the cow. Four different data sets were developed to evaluate Lys and Met requirements, rumen N balance, and milk yield predictions. In total, 99 peer-reviewed studies with 389 treatments and 15 regional farms with 50 different diets were included. The broken-line model with plateau was used to identify the concentrations of Lys and Met that maximize milk protein yield and content. Results suggested concentrations of 7.00 and 2.60% of metabolizable protein (MP) for Lys and Met, respectively, for maximal protein yield, and 6.77 and 2.85% of MP for Lys and Met, respectively, for maximal protein content. Updated AA concentrations were numerically higher for Lys and 11 to 18% higher for Met compared with CNCPS v6.0; this is attributed to the increased content of Met and Lys in feeds that were previously incorrectly analyzed and described. The predictions of postruminal flows of N and milk yield were evaluated using the correlation coefficient from the BLUP (R²BLUP) procedure or model predictions (R²MDP) and the
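The broken-line model with plateau used to identify the Lys and Met concentrations that maximize milk protein response can be sketched with a simple grid-search least-squares fit. The data below are synthetic (breakpoint placed at 7.0% of MP to echo the reported Lys result), not values from the CNCPS evaluation.

```python
import numpy as np

def fit_broken_line(x, y, n_grid=200):
    """Fit y = b0 + b1*min(x, xb): linear increase up to a breakpoint xb,
    then a plateau.  Grid search over xb with OLS at each candidate."""
    best = None
    for xb in np.linspace(x.min(), x.max(), n_grid):
        A = np.column_stack([np.ones_like(x), np.minimum(x, xb)])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        sse = np.sum((y - A @ coef) ** 2)
        if best is None or sse < best[0]:
            best = (sse, xb, coef)
    _, xb, (b0, b1) = best
    return b0, b1, xb

rng = np.random.default_rng(2)
lys = rng.uniform(5.0, 8.5, 120)           # hypothetical Lys, % of MP
# protein yield (g/d) plateauing at a 7.0% breakpoint, plus noise
yield_g = 900.0 + 40.0 * np.minimum(lys, 7.0) + rng.normal(0.0, 5.0, 120)
b0, b1, xb = fit_broken_line(lys, yield_g)
print(b0, b1, xb)
```

The estimated breakpoint `xb` is the concentration beyond which no further response is detected, which is how the requirement concentrations quoted in the abstract are defined.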

  2. Metal-rich, Metal-poor: Updated Stellar Population Models for Old Stellar Systems

    Science.gov (United States)

    Conroy, Charlie; Villaume, Alexa; van Dokkum, Pieter G.; Lind, Karin

    2018-02-01

    We present updated stellar population models appropriate for old ages (>1 Gyr) and covering a wide range in metallicities (‑1.5 ≲ [Fe/H] ≲ 0.3). These models predict the full spectral variation associated with individual element abundance variation as a function of metallicity and age. The models span the optical–NIR wavelength range (0.37–2.4 μm), include a range of initial mass functions, and contain the flexibility to vary 18 individual elements including C, N, O, Mg, Si, Ca, Ti, and Fe. To test the fidelity of the models, we fit them to integrated light optical spectra of 41 Galactic globular clusters (GCs). The value of testing models against GCs is that their ages, metallicities, and detailed abundance patterns have been derived from the Hertzsprung–Russell diagram in combination with high-resolution spectroscopy of individual stars. We determine stellar population parameters from fits to all wavelengths simultaneously (“full spectrum fitting”), and demonstrate explicitly with mock tests that this approach produces smaller uncertainties at fixed signal-to-noise ratio than fitting a standard set of 14 line indices. Comparison of our integrated-light results to literature values reveals good agreement in metallicity, [Fe/H]. When restricting to GCs without prominent blue horizontal branch populations, we also find good agreement with literature values for ages, [Mg/Fe], [Si/Fe], and [Ti/Fe].

  3. A Stepwise Fitting Procedure for automated fitting of Ecopath with Ecosim models

    Science.gov (United States)

    Scott, Erin; Serpetti, Natalia; Steenbeek, Jeroen; Heymans, Johanna Jacomina

    The Stepwise Fitting Procedure automates testing of alternative hypotheses used for fitting Ecopath with Ecosim (EwE) models to observation reference data (Mackinson et al. 2009). The calibration of EwE model predictions to observed data is important for evaluating any model that will be used for ecosystem-based management. Thus far, the model fitting procedure in EwE has been carried out manually: a repetitive task involving setting up >1000 specific individual searches to find the statistically 'best fit' model. The novel fitting procedure automates this manual procedure, producing accurate results and letting the modeller concentrate on investigating the 'best fit' model for ecological accuracy.
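The core idea, automatically scoring a series of candidate models of increasing complexity with an information criterion and keeping the best, can be sketched generically. This stand-in uses polynomial degree as a proxy for the number of estimated EwE parameters (vulnerabilities, anomaly spline points) and is not the actual plug-in code.

```python
import numpy as np

def aic(sse, n, k):
    # AIC for least-squares fits: n*log(sse/n) + 2*k estimated parameters.
    return n * np.log(sse / n) + 2 * k

def stepwise_fit(t, y, max_k=6):
    """Try candidate models of increasing complexity, score each with AIC,
    and return the lowest-scoring one."""
    n = len(y)
    best = None
    for k in range(1, max_k + 1):
        coef = np.polyfit(t, y, k)
        sse = np.sum((y - np.polyval(coef, t)) ** 2)
        score = aic(sse, n, k + 1)     # k slope terms + intercept
        if best is None or score < best[0]:
            best = (score, k, coef)
    return best

rng = np.random.default_rng(3)
t = np.linspace(0.0, 1.0, 60)
y = 2.0 + 3.0 * t - 4.0 * t**2 + rng.normal(0.0, 0.1, 60)  # quadratic "biomass" series
score, k, coef = stepwise_fit(t, y)
print(k, score)
```

Automating the loop removes the manual repetition the abstract describes, while the information criterion guards against simply rewarding the model with the most parameters.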

  4. Procedure for identifying models for the heat dynamics of buildings

    DEFF Research Database (Denmark)

    Bacher, Peder; Madsen, Henrik

    This report describes a new method for obtaining detailed information about the heat dynamics of a building using frequent readings of the heat consumption. Such a procedure is considered to be of utmost importance as a key procedure for using readings from smart meters, which is expected...

  5. Automated detection of healthcare associated infections: external validation and updating of a model for surveillance of drain-related meningitis.

    Directory of Open Access Journals (Sweden)

    Maaike S M van Mourik

    Full Text Available OBJECTIVE: Automated surveillance of healthcare-associated infections can improve the efficiency and reliability of surveillance. The aim was to validate and update a previously developed multivariable prediction model for the detection of drain-related meningitis (DRM). DESIGN: Retrospective cohort study using traditional surveillance by infection control professionals as the reference standard. PATIENTS: Patients receiving an external cerebrospinal fluid drain, either ventricular (EVD) or lumbar (ELD), in a tertiary medical care center. Children, patients with simultaneous drains, <1 day of follow-up or pre-existing meningitis were excluded, leaving 105 patients in the validation set (2010-2011) and 653 in the updating set (2004-2011). METHODS: For validation, the original model was applied. Discrimination, classification and calibration were assessed. For updating, data from all available years were used to optimally re-estimate coefficients and determine whether extension with new predictors was necessary. The updated model was validated and adjusted for optimism (overfitting) using bootstrapping techniques. RESULTS: In model validation, the rate of DRM was 17.4/1000 days at risk. All cases were detected by the model. The area under the ROC curve was 0.951. The positive predictive value was 58.8% (95% CI 40.7-75.4) and calibration was good. The revised model also includes Gram stain results. The area under the ROC curve after correction for optimism was 0.963 (95% CI 0.953-0.974). Group-level prediction was adequate. CONCLUSIONS: The previously developed multivariable prediction model maintains discriminatory power and calibration in an independent patient population. The updated model incorporates all available data and performs well, also after elaborate adjustment for optimism.
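The bootstrap adjustment for optimism mentioned in the abstract is commonly done Harrell-style: refit the model on each bootstrap sample, measure how much better it looks on its own sample than on the original data, and subtract the average of that gap from the apparent c-statistic. A minimal sketch with synthetic data (not the study's meningitis model; `fit_logistic` and `auc` are illustrative helpers):

```python
import numpy as np

def fit_logistic(X, y, n_iter=25):
    # Plain Newton-Raphson logistic regression (illustrative helper).
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        mu = 1.0 / (1.0 + np.exp(-(X @ beta)))
        w = mu * (1.0 - mu)
        beta += np.linalg.solve((X * w[:, None]).T @ X, X.T @ (y - mu))
    return beta

def auc(score, y):
    # Concordance statistic (c-statistic) via the Mann-Whitney rank identity.
    r = np.argsort(np.argsort(score)) + 1
    n1 = int(y.sum())
    n0 = len(y) - n1
    return (r[y == 1].sum() - n1 * (n1 + 1) / 2) / (n0 * n1)

rng = np.random.default_rng(4)
n = 300
X = np.column_stack([np.ones(n), rng.normal(size=(n, 5))])
beta_true = np.array([-0.5, 0.8, 0.5, 0.0, 0.0, 0.0])
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-(X @ beta_true))))

b = fit_logistic(X, y)
app = auc(X @ b, y)                # apparent (optimistic) c-statistic
optimism = []
for _ in range(100):               # bootstrap estimate of optimism
    i = rng.integers(0, n, n)
    bb = fit_logistic(X[i], y[i])
    # performance on the bootstrap sample minus performance on the original
    optimism.append(auc(X[i] @ bb, y[i]) - auc(X @ bb, y))
corrected = app - float(np.mean(optimism))
print(app, corrected)
```

The optimism-corrected value plays the role of the 0.963 figure in the abstract: an estimate of how the updated model would discriminate in new patients rather than in its own training data.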

  6. Update of the Polar SWIFT model for polar stratospheric ozone loss (Polar SWIFT version 2)

    Science.gov (United States)

    Wohltmann, Ingo; Lehmann, Ralph; Rex, Markus

    2017-07-01

    The Polar SWIFT model is a fast scheme for calculating the chemistry of stratospheric ozone depletion in polar winter. It is intended for use in global climate models (GCMs) and Earth system models (ESMs) to enable the simulation of mutual interactions between the ozone layer and climate. To date, climate models often use prescribed ozone fields, since a full stratospheric chemistry scheme is computationally very expensive. Polar SWIFT is based on a set of coupled differential equations, which simulate the polar vortex-averaged mixing ratios of the key species involved in polar ozone depletion on a given vertical level. These species are O3, chemically active chlorine (ClOx), HCl, ClONO2 and HNO3. The only external input parameters that drive the model are the fraction of the polar vortex in sunlight and the fraction of the polar vortex below the temperatures necessary for the formation of polar stratospheric clouds. Here, we present an update of the Polar SWIFT model introducing several improvements over the original model formulation. In particular, the model is now trained on vortex-averaged reaction rates of the ATLAS Chemistry and Transport Model, which enables a detailed look at individual processes and an independent validation of the different parameterizations contained in the differential equations. The training of the original Polar SWIFT model was based on fitting complete model runs to satellite observations and did not allow for this. A revised formulation of the system of differential equations is developed, which closely fits vortex-averaged reaction rates from ATLAS that represent the main chemical processes influencing ozone. In addition, a parameterization for the HNO3 change by denitrification is included. The rates of change of the concentrations of the chemical species of the Polar SWIFT model are purely chemical rates of change in the new version, whereas in the original Polar SWIFT model, they included a transport effect caused by the
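The structure of such a scheme, a small set of coupled, vortex-averaged rate equations driven only by the sunlit fraction and the PSC-temperature fraction of the vortex, can be illustrated with a toy box model. The species subset, rate constants and forcing profiles below are invented for illustration and are not the fitted Polar SWIFT coefficients:

```python
import numpy as np

def polar_step(state, dt, f_sun, f_psc, k):
    """One Euler step of a toy Polar-SWIFT-like box model.
    state = [O3, ClOx, HCl] vortex-mean mixing ratios (ppb, illustrative)."""
    o3, clox, hcl = state
    activation = k['act'] * f_psc * hcl              # HCl -> ClOx on PSCs
    deactivation = k['deact'] * (1.0 - f_psc) * clox # ClOx -> HCl
    loss = k['loss'] * f_sun * clox * o3             # sunlight-driven O3 loss
    return np.array([
        o3 + dt * (-loss),
        clox + dt * (activation - deactivation),
        hcl + dt * (deactivation - activation),
    ])

k = {'act': 0.2, 'deact': 0.05, 'loss': 1e-4}        # invented rate constants
state = np.array([3000.0, 50.0, 1500.0])             # initial O3, ClOx, HCl (ppb)
for day in range(120):                               # midwinter into spring
    f_psc = max(0.0, 0.8 - day / 100.0)              # PSC fraction shrinks
    f_sun = min(1.0, day / 120.0)                    # sunlit fraction grows
    state = polar_step(state, 1.0, f_sun, f_psc, k)
print(state)
```

Even this toy version reproduces the qualitative sequence the real scheme parameterizes: chlorine activation on PSCs in darkness, sunlight-driven ozone loss in spring, and chlorine deactivation back to the reservoir once PSCs disappear, all at a tiny fraction of the cost of a full chemistry scheme.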

  7. The Updated BaSTI Stellar Evolution Models and Isochrones. I. Solar-scaled Calculations

    Science.gov (United States)

    Hidalgo, Sebastian L.; Pietrinferni, Adriano; Cassisi, Santi; Salaris, Maurizio; Mucciarelli, Alessio; Savino, Alessandro; Aparicio, Antonio; Silva Aguirre, Victor; Verma, Kuldeep

    2018-04-01

    We present an updated release of the BaSTI (a Bag of Stellar Tracks and Isochrones) stellar model and isochrone library for a solar-scaled heavy element distribution. The main input physics that have been changed from the previous BaSTI release include the solar metal mixture, electron conduction opacities, a few nuclear reaction rates, bolometric corrections, and the treatment of the overshooting efficiency for shrinking convective cores. The new model calculations cover a mass range between 0.1 and 15 M⊙, 22 initial chemical compositions between [Fe/H] = ‑3.20 and +0.45, with helium to metal enrichment ratio dY/dZ = 1.31. The isochrones cover an age range between 20 Myr and 14.5 Gyr, consistently take into account the pre-main-sequence phase, and have been translated to a large number of popular photometric systems. Asteroseismic properties of the theoretical models have also been calculated. We compare our isochrones with results from independent databases and with several sets of observations to test the accuracy of the calculations. All stellar evolution tracks, asteroseismic properties, and isochrones are made available through a dedicated web site.

  8. An updated conceptual model of Delta Smelt biology: Our evolving understanding of an estuarine fish

    Science.gov (United States)

    Baxter, Randy; Brown, Larry R.; Castillo, Gonzalo; Conrad, Louise; Culberson, Steven D.; Dekar, Matthew P.; Dekar, Melissa; Feyrer, Frederick; Hunt, Thaddeus; Jones, Kristopher; Kirsch, Joseph; Mueller-Solger, Anke; Nobriga, Matthew; Slater, Steven B.; Sommer, Ted; Souza, Kelly; Erickson, Gregg; Fong, Stephanie; Gehrts, Karen; Grimaldo, Lenny; Herbold, Bruce

    2015-01-01

    The main purpose of this report is to provide an up-to-date assessment and conceptual model of factors affecting Delta Smelt (Hypomesus transpacificus) throughout its primarily annual life cycle and to demonstrate how this conceptual model can be used for scientific and management purposes. The Delta Smelt is a small estuarine fish that only occurs in the San Francisco Estuary. Once abundant, it is now rare and has been protected under the federal and California Endangered Species Acts since 1993. The Delta Smelt listing was related to a step decline in the early 1980s; however, population abundance decreased even further with the onset of the “pelagic organism decline” (POD) around 2002. A substantial, albeit short-lived, increase in abundance of all life stages in 2011 showed that the Delta Smelt population can still rebound when conditions are favorable for spawning, growth, and survival. In this report, we update previous conceptual models for Delta Smelt to reflect new data and information since the release of the last synthesis report about the POD by the Interagency Ecological Program for the San Francisco Estuary (IEP) in 2010. Specific objectives include:

  9. A finite element model updating technique for adjustment of parameters near boundaries

    Science.gov (United States)

    Gwinn, Allen Fort, Jr.

    Even though there have been many advances in research related to methods of updating finite element models based on measured normal mode vibration characteristics, there is yet to be a widely accepted method that works reliably with a wide range of problems. This dissertation focuses on the specific class of problems having to do with changes in stiffness near the clamped boundary of plate structures. This class of problems is especially important as it relates to the performance of turbine engine blades, where a change in stiffness at the base of the blade can be indicative of structural damage. The method that is presented herein is a new technique for resolving the differences between the physical structure and the finite element model. It is a semi-iterative technique that incorporates a "physical expansion" of the measured eigenvectors along with appropriate scaling of these expanded eigenvectors into an iterative loop that uses the Engel's model modification method to then calculate adjusted stiffness parameters for the finite element model. Three example problems are presented that use eigenvalues and mass normalized eigenvectors that have been calculated from experimentally obtained accelerometer readings. The test articles that were used were all thin plates with one edge fully clamped. They each had a cantilevered length of 8.5 inches and a width of 4 inches. The three plates differed from one another in thickness from 0.100 inches to 0.188 inches. These dimensions were selected in order to approximate a gas turbine engine blade. The semi-iterative modification technique is shown to do an excellent job of calculating the necessary adjustments to the finite element model so that the analytically determined eigenvalues and eigenvectors for the adjusted model match the corresponding values from the experimental data with good agreement. Furthermore, the semi-iterative method is quite robust. 
For the examples presented here, the method consistently converged
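
As a flavour of the kind of loop described above, here is a minimal, hypothetical sensitivity-based stiffness update for a two-DOF spring-mass chain. This is a generic Gauss-Newton sketch, not Engel's model modification method, and the "measured" eigenvalues are synthetic stand-ins for test data.

```python
import numpy as np

# Hypothetical 2-DOF fixed-free spring-mass chain with unit masses and
# spring stiffnesses k[0] (ground spring) and k[1] (coupling spring).
def eigvals(k):
    K = np.array([[k[0] + k[1], -k[1]],
                  [-k[1],        k[1]]])
    return np.sort(np.linalg.eigvalsh(K))

def update_stiffness(k0, lam_meas, n_iter=50, h=1e-6):
    k = np.asarray(k0, dtype=float).copy()
    for _ in range(n_iter):
        r = lam_meas - eigvals(k)                  # eigenvalue residual
        # finite-difference sensitivity matrix d(lambda)/d(k)
        S = np.column_stack([(eigvals(k + h * e) - eigvals(k)) / h
                             for e in np.eye(len(k))])
        k += np.linalg.lstsq(S, r, rcond=None)[0]  # Gauss-Newton step
    return k

lam_meas = eigvals(np.array([120.0, 80.0]))    # stand-in for measured data
k_est = update_stiffness([100.0, 100.0], lam_meas)
print(np.abs(eigvals(k_est) - lam_meas).max())  # residual ~0 at convergence
```

A real application would also expand and scale the measured eigenvectors and damp the update against noise; the sketch keeps only the bare eigenvalue-matching mechanics.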

  10. Do lateral boundary condition update frequency and the resolution of the boundary data affect the regional model COSMO-CLM? A sensitivity study.

    Science.gov (United States)

    Pankatz, K.; Kerkweg, A.

    2014-12-01

The work presented is part of the joint project "DecReg" ("Regional decadal predictability"), which is in turn part of the project "MiKlip" ("Decadal predictions"), an effort funded by the German Federal Ministry of Education and Research to improve decadal predictions on a global and regional scale. In regional climate modeling it is common to update the lateral boundary conditions (LBC) of the regional model every six hours, mainly because reference data sets like ERA are only available every six hours. Additionally, for offline coupling procedures it would be too costly to store LBC data at higher temporal resolution for climate simulations. Theoretically, however, the coupling frequency could be as high as the time step of the driving model, and it is unclear whether a more frequent update of the LBC has a significant effect on the climate in the domain of the regional climate model (RCM). This study uses the RCM COSMO-CLM/MESSy (Kerkweg and Jöckel, 2012) to couple COSMO-CLM offline to the GCM ECHAM5. The first study examines a 30-year time slice experiment for three update frequencies of the LBC, namely six hours, one hour and six minutes. The evaluation of means, standard deviations and statistics of the climate in the regional domain shows only small, though in some cases statistically significant, deviations in 2 m temperature, sea level pressure and precipitation. This study also assesses parameters linked to cyclone activity, which is affected by the LBC update frequency: differences in track density and strength are found when comparing the simulations. The second study examines the quality of decadal hindcasts of the decade 2001-2010 when the horizontal resolution of the driving model (T42, T63, T85 or T106), from which the LBC are calculated, is altered. Two sets of simulations are evaluated.
For the first set of simulations, the GCM simulations are performed at different resolutions using the same boundary conditions for GHGs and SSTs, thus

  11. FEM Updating of the Heritage Court Building Structure

    DEFF Research Database (Denmark)

    Ventura, C. E.; Brincker, Rune; Dascotte, E.

    2001-01-01

This paper describes results of a model updating study conducted on a 15-storey reinforced concrete shear core building. The output-only modal identification results obtained from ambient vibration measurements of the building were used to update a finite element model of the structure. The starting model of the structure was developed from the information provided in the design documentation of the building. Different parameters of the model were then modified using an automated procedure to improve the correlation between measured and calculated modal parameters. Careful attention was placed on the selection of the parameters to be modified by the updating software in order to ensure that the necessary changes to the model were realistic and physically realisable and meaningful. The paper highlights the model updating process and provides an assessment of the usefulness of using...
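
The correlation between measured and calculated modal parameters referred to above is typically quantified with the Modal Assurance Criterion (MAC). A minimal sketch for real-valued mode shapes, with synthetic data:

```python
import numpy as np

# MAC(i, j) = |phi_test_i . phi_fem_j|^2 / (|phi_test_i|^2 |phi_fem_j|^2),
# where mode shapes are the columns of each matrix (real-valued assumed).
def mac(Phi_test, Phi_fem):
    num = np.abs(Phi_test.T @ Phi_fem) ** 2
    den = np.outer(np.sum(Phi_test**2, axis=0),
                   np.sum(Phi_fem**2, axis=0))
    return num / den

Phi = np.array([[1.0, 1.0],
                [2.0, -1.0]])          # two synthetic mode shapes
print(np.round(mac(Phi, Phi), 3))      # identical sets give 1.0 on the diagonal
```

Updating procedures then maximize the diagonal MAC terms (a value of 1 means perfectly correlated mode shapes) while matching the measured frequencies.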

  12. Modelling of groundwater flow and solute transport in Olkiluoto. Update 2008

    International Nuclear Information System (INIS)

    Loefman, J.; Pitkaenen, P.; Meszaros, F.; Keto, V.; Ahokas, H.

    2009-10-01

Posiva Oy is preparing for the final disposal of spent nuclear fuel in the crystalline bedrock in Finland. Olkiluoto in Eurajoki has been selected as the primary site for the repository, subject to further detailed characterisation, which is currently focused on the construction of an underground rock characterisation and research facility (the ONKALO). An essential part of the site investigation programme is analysis of the deep groundwater flow by means of numerical flow modelling. This study is the latest update concerning the site-scale flow modelling and is based on all the hydrogeological data gathered from field investigations by the end of 2007. The work is divided into two separate modelling tasks: 1) characterization of the baseline groundwater flow conditions before excavation of the ONKALO, and 2) a prediction/outcome (P/O) study of the potential hydrogeological disturbances due to the ONKALO. The flow model was calibrated by using all the available data that was appropriate for the applied, deterministic, equivalent porous medium (EPM) / dual-porosity (DP) approach. In the baseline modelling, calibration of the flow model focused on improving the agreement between the calculated results and the undisturbed observations. The calibration resulted in a satisfactory agreement with the measured pumping test responses, a very good overall agreement with the observed pressures in the deep drill holes and a fairly good agreement with the observed salinity. Some discrepancies still remained in a few single drill hole sections, because the fresh water infiltration in the model tends to dilute the groundwater too much at shallow depths. In the P/O calculations the flow model was further calibrated by using the monitoring data on the ONKALO disturbances. Having significantly more information on the inflows to the tunnel (compared with the previous study) enabled better calibration of the model, which allowed it to capture very well the observed inflow, the

  13. Fena Valley Reservoir watershed and water-balance model updates and expansion of watershed modeling to southern Guam

    Science.gov (United States)

    Rosa, Sarah N.; Hay, Lauren E.

    2017-12-01

    In 2014, the U.S. Geological Survey, in cooperation with the U.S. Department of Defense’s Strategic Environmental Research and Development Program, initiated a project to evaluate the potential impacts of projected climate-change on Department of Defense installations that rely on Guam’s water resources. A major task of that project was to develop a watershed model of southern Guam and a water-balance model for the Fena Valley Reservoir. The southern Guam watershed model provides a physically based tool to estimate surface-water availability in southern Guam. The U.S. Geological Survey’s Precipitation Runoff Modeling System, PRMS-IV, was used to construct the watershed model. The PRMS-IV code simulates different parts of the hydrologic cycle based on a set of user-defined modules. The southern Guam watershed model was constructed by updating a watershed model for the Fena Valley watersheds, and expanding the modeled area to include all of southern Guam. The Fena Valley watershed model was combined with a previously developed, but recently updated and recalibrated Fena Valley Reservoir water-balance model.Two important surface-water resources for the U.S. Navy and the citizens of Guam were modeled in this study; the extended model now includes the Ugum River watershed and improves upon the previous model of the Fena Valley watersheds. Surface water from the Ugum River watershed is diverted and treated for drinking water, and the Fena Valley watersheds feed the largest surface-water reservoir on Guam. The southern Guam watershed model performed “very good,” according to the criteria of Moriasi and others (2007), in the Ugum River watershed above Talofofo Falls with monthly Nash-Sutcliffe efficiency statistic values of 0.97 for the calibration period and 0.93 for the verification period (a value of 1.0 represents perfect model fit). In the Fena Valley watershed, monthly simulated streamflow volumes from the watershed model compared reasonably well with the

  14. Geometric subspace updates with applications to online adaptive nonlinear model reduction

    DEFF Research Database (Denmark)

    Zimmermann, Ralf; Peherstorfer, Benjamin; Willcox, Karen

    2017-01-01

    In many scientific applications, including model reduction and image processing, subspaces are used as ansatz spaces for the low-dimensional approximation and reconstruction of the state vectors of interest. We introduce a procedure for adapting an existing subspace based on information from...... Estimation (GROUSE). We establish for GROUSE a closed-form expression for the residual function along the geodesic descent direction. Specific applications of subspace adaptation are discussed in the context of image processing and model reduction of nonlinear partial differential equation systems....
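
One GROUSE step can be sketched as follows; the fixed step-size rule used here is a simplification, and the paper's closed-form residual expression along the geodesic is not reproduced.

```python
import numpy as np

# One GROUSE update (Grassmannian rank-one subspace update) from a partially
# observed vector v: fit weights on the observed entries, form the residual,
# and move the basis U along the corresponding geodesic. eta is a tuning knob.
def grouse_step(U, v, omega, eta=0.1):
    # U: n x k orthonormal basis; omega: indices of observed entries of v
    w, *_ = np.linalg.lstsq(U[omega], v[omega], rcond=None)
    p = U @ w                            # prediction of the full vector
    r = np.zeros_like(v)
    r[omega] = v[omega] - p[omega]       # residual on observed entries only
    rn, pn, wn = np.linalg.norm(r), np.linalg.norm(p), np.linalg.norm(w)
    if rn < 1e-12 or wn < 1e-12:
        return U
    theta = eta * rn * pn                # simplified step along the geodesic
    step = (np.cos(theta) - 1.0) * p / pn + np.sin(theta) * r / rn
    return U + np.outer(step, w / wn)

rng = np.random.default_rng(0)
U = np.linalg.qr(rng.standard_normal((20, 3)))[0]  # random 3-dim basis
v = rng.standard_normal(20)
omega = np.arange(0, 20, 2)                        # half the entries observed
U1 = grouse_step(U, v, omega)
print(np.allclose(U1.T @ U1, np.eye(3)))           # True
```

Because the residual is orthogonal to the current subspace, the geodesic step leaves the basis exactly orthonormal, which is what makes such updates suitable for online adaptation of reduced bases.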

  15. Avoiding drift related to linear analysis update with Lagrangian coordinate models

    Science.gov (United States)

    Wang, Yiguo; Counillon, Francois; Bertino, Laurent

    2015-04-01

When applying data assimilation to Lagrangian coordinate models, it is profitable to correct the model grid (position, volume). In an isopycnal ocean coordinate model, this information is carried by the layer thickness, which can be massless but must remain positive (a truncated Gaussian distribution). A linear Gaussian analysis does not ensure positivity for such a variable. Several methods have been proposed to handle this issue - e.g. post-processing, anamorphosis or resampling - but none ensures conservation of the mean, which is imperative in climate applications. Here, a framework is introduced to test a new method, which proceeds as follows. First, layers for which the analysis yields negative values are iteratively grouped with neighbouring layers, resulting in a probability density function with a larger mean and a smaller standard deviation, which prevents the appearance of negative values. Second, the analysis increments of the grouped layer are uniformly distributed, which prevents massless layers from becoming filled and vice versa. The new method is shown to be fully conservative with e.g. OI or 3DVAR, but a small drift remains with ensemble-based methods (e.g. EnKF, DEnKF, ...) during the update of the ensemble anomaly. However, the resulting drift with the latter is small (an order of magnitude smaller than with post-processing) and the increase in computational cost moderate. The new method is demonstrated with a realistic application in the Norwegian Climate Prediction Model (NorCPM), which provides climate predictions by assimilating sea surface temperature with the Ensemble Kalman Filter in a fully coupled Earth System model (NorESM) with an isopycnal ocean model (MICOM). Over the 25-year analysis period, the new method does not impair the predictive skill of the system, corrects the artificial steric drift introduced by data assimilation, and provides estimates in good agreement with IPCC AR5.
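
A toy reading of the grouping-and-redistribution step, for illustration only: the merge criterion and the one-sided (downward) merging below are assumptions, not the NorCPM implementation.

```python
import numpy as np

# Layers whose update would go negative are merged with the neighbouring group
# below until every member stays positive after the group increment is spread
# uniformly; the total increment (hence the mean) is conserved exactly.
def conservative_update(h, dh):
    h = np.asarray(h, dtype=float)
    dh = np.asarray(dh, dtype=float)
    groups = [[i] for i in range(len(h))]
    i = 0
    while i < len(groups):
        idx = groups[i]
        inc = dh[idx].sum() / len(idx)            # uniform within the group
        if (h[idx] + inc).min() < 0 and i + 1 < len(groups):
            groups[i] = idx + groups.pop(i + 1)   # merge with neighbour below
        else:
            i += 1
    h_new = h.copy()
    for idx in groups:
        h_new[idx] = h[idx] + dh[idx].sum() / len(idx)
    return h_new

h = np.array([5.0, 0.5, 3.0, 4.0])     # layer thicknesses
dh = np.array([1.0, -1.0, -0.2, 0.5])  # raw increments: layer 1 would go negative
h_new = conservative_update(h, dh)
print(h_new, h_new.sum() - (h + dh).sum())  # totals conserved (difference ~0)
```

All thicknesses stay non-negative while the summed increment matches the raw analysis exactly, which is the conservation property the abstract emphasises.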

  16. Basic Technology and Clinical Applications of the Updated Model of Laser Speckle Flowgraphy to Ocular Diseases

    Directory of Open Access Journals (Sweden)

    Tetsuya Sugiyama

    2014-08-01

Laser speckle flowgraphy (LSFG) allows for quantitative estimation of blood flow in the optic nerve head (ONH), choroid and retina, utilizing the laser speckle phenomenon. The basic technology and clinical applications of LSFG-NAVI, the updated model of LSFG, are summarized in this review. For developing a commercial version of LSFG, the special area sensor was replaced by an ordinary charge-coupled device camera. In LSFG-NAVI, the mean blur rate (MBR) has been introduced as a new parameter. Compared to the original LSFG model, LSFG-NAVI demonstrates a better spatial resolution of the blood flow map of the human ocular fundus. The observation area is 24 times larger than in the original system. The analysis software can separately calculate MBRs in the blood vessels and tissues (capillaries) of an entire ONH, and the measurements have good reproducibility. The absolute values of MBR in the ONH have been shown to correlate linearly with capillary blood flow. Analysis of the MBR pulse waveform provides parameters including skew, blowout score, blowout time, rising and falling rates, flow acceleration index, acceleration time index, and resistivity index for comparing different eyes. Recently, there has been an increasing number of reports on the clinical applications of LSFG-NAVI to ocular diseases, including glaucoma and retinal and choroidal diseases.

  17. Advanced Test Reactor Core Modeling Update Project Annual Report for Fiscal Year 2010

    Energy Technology Data Exchange (ETDEWEB)

    Rahmat Aryaeinejad; Douglas S. Crawford; Mark D. DeHart; George W. Griffith; D. Scott Lucas; Joseph W. Nielsen; David W. Nigg; James R. Parry; Jorge Navarro

    2010-09-01

    Legacy computational reactor physics software tools and protocols currently used for support of Advanced Test Reactor (ATR) core fuel management and safety assurance and, to some extent, experiment management are obsolete, inconsistent with the state of modern nuclear engineering practice, and are becoming increasingly difficult to properly verify and validate (V&V). Furthermore, the legacy staff knowledge required for application of these tools and protocols from the 1960s and 1970s is rapidly being lost due to staff turnover and retirements. In 2009 the Idaho National Laboratory (INL) initiated a focused effort to address this situation through the introduction of modern high-fidelity computational software and protocols, with appropriate V&V, within the next 3-4 years via the ATR Core Modeling and Simulation and V&V Update (or “Core Modeling Update”) Project. This aggressive computational and experimental campaign will have a broad strategic impact on the operation of the ATR, both in terms of improved computational efficiency and accuracy for support of ongoing DOE programs as well as in terms of national and international recognition of the ATR National Scientific User Facility (NSUF).

  18. A Numerical Comparison of Three Procedures Used in Failure Model Discrimination

    Directory of Open Access Journals (Sweden)

    Samir Kamel Ashour

    2014-05-01

Three different selection procedures, namely the RML, S and F procedures, are reviewed with application to exponential, Weibull, Pareto, and finite range models. Some inaccurate results were discovered in the article of Pandy et al. (1991); these are illustrated and corrected. A simulation study is developed to numerically compare the three procedures by obtaining the probability of correct selection.
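
The role of such a simulation study can be sketched as follows. Note that the pair below is Weibull versus lognormal (non-nested), not one of the paper's pairs: exponential is nested inside Weibull, so a raw maximized-likelihood comparison would be degenerate there.

```python
import numpy as np
from scipy import stats

# Draw Weibull samples, fit both candidate models by maximum likelihood,
# select the one with the larger maximized log-likelihood, and estimate the
# probability of correct selection as the fraction of correct picks.
def correct_selection_rate(n=50, reps=200, seed=1):
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(reps):
        x = 2.0 * rng.weibull(2.0, size=n)              # true model: Weibull
        c, _, s = stats.weibull_min.fit(x, floc=0)
        ll_weib = stats.weibull_min.logpdf(x, c, 0, s).sum()
        sg, _, sc = stats.lognorm.fit(x, floc=0)
        ll_logn = stats.lognorm.logpdf(x, sg, 0, sc).sum()
        hits += ll_weib > ll_logn                        # correct pick?
    return hits / reps

print(correct_selection_rate())   # fraction of replications picking Weibull
```

Varying the sample size n in such an experiment shows how the probability of correct selection approaches 1 as more data become available.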

  19. A Comparison of Exposure Control Procedures in CATs Using the 3PL Model

    Science.gov (United States)

    Leroux, Audrey J.; Lopez, Myriam; Hembry, Ian; Dodd, Barbara G.

    2013-01-01

    This study compares the progressive-restricted standard error (PR-SE) exposure control procedure to three commonly used procedures in computerized adaptive testing, the randomesque, Sympson-Hetter (SH), and no exposure control methods. The performance of these four procedures is evaluated using the three-parameter logistic model under the…
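
For context, the 3PL response function and the simplest of the compared controls, the randomesque method (draw the next item at random from the k most informative), can be sketched with synthetic item parameters:

```python
import numpy as np

# Three-parameter logistic (3PL) model: a = discrimination, b = difficulty,
# c = pseudo-guessing. Real-valued arrays give a whole item bank at once.
def p3pl(theta, a, b, c):
    return c + (1 - c) / (1 + np.exp(-a * (theta - b)))

# 3PL item information at ability theta.
def info3pl(theta, a, b, c):
    P = p3pl(theta, a, b, c)
    return a**2 * ((P - c)**2 / (1 - c)**2) * ((1 - P) / P)

rng = np.random.default_rng(0)
a = rng.uniform(0.5, 2.0, 100)           # synthetic 100-item bank
b = rng.normal(0.0, 1.0, 100)
c = rng.uniform(0.1, 0.25, 100)

theta = 0.5                              # current ability estimate
info = info3pl(theta, a, b, c)
top5 = np.argsort(info)[-5:]             # five most informative items
next_item = rng.choice(top5)             # randomesque pick caps item exposure
print(next_item in top5)                 # True
```

Maximum-information selection alone would always administer the single best item; drawing from the top k spreads exposure at a small cost in measurement precision, which is the trade-off the compared procedures manage in different ways.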

  20. Benchmarking Exercises To Validate The Updated ELLWF GoldSim Slit Trench Model

    International Nuclear Information System (INIS)

    Taylor, G. A.; Hiergesell, R. A.

    2013-01-01

The Savannah River National Laboratory (SRNL) results of the 2008 Performance Assessment (PA) (WSRC, 2008) sensitivity/uncertainty analyses conducted for the trenches located in the E-Area Low-Level Waste Facility (ELLWF) were subject to review by the United States Department of Energy (U.S. DOE) Low-Level Waste Disposal Facility Federal Review Group (LFRG) (LFRG, 2008). LFRG comments were generally approving of the use of probabilistic modeling in GoldSim to support the quantitative sensitivity analysis. A recommendation was made, however, that the probabilistic models be revised and updated to bolster their defensibility. SRS committed to addressing those comments and, in response, contracted with Neptune and Company to rewrite the three GoldSim models. The initial portion of this work, development of Slit Trench (ST), Engineered Trench (ET) and Components-in-Grout (CIG) trench GoldSim models, has been completed. The work described in this report utilizes these revised models to test and evaluate the results against the 2008 PORFLOW model results. This was accomplished by first performing a rigorous code-to-code comparison of the PORFLOW and GoldSim codes and then performing a deterministic comparison of the two-dimensional (2D) unsaturated zone and three-dimensional (3D) saturated zone PORFLOW Slit Trench models against results from the one-dimensional (1D) GoldSim Slit Trench model. The results of the code-to-code comparison indicate that, when the mechanisms of radioactive decay, partitioning of contaminants between solid and fluid, implementation of specific boundary conditions and the imposition of solubility controls were all tested using identical flow fields, GoldSim and PORFLOW produce nearly identical results. It is also noted that GoldSim has an advantage over PORFLOW in that it simulates all radionuclides simultaneously - thus avoiding a potential problem as demonstrated in the Case Study (see Section 2.6). Hence, it was concluded that the follow

  1. Benchmarking Exercises To Validate The Updated ELLWF GoldSim Slit Trench Model

    Energy Technology Data Exchange (ETDEWEB)

    Taylor, G. A.; Hiergesell, R. A.

    2013-11-12

The Savannah River National Laboratory (SRNL) results of the 2008 Performance Assessment (PA) (WSRC, 2008) sensitivity/uncertainty analyses conducted for the trenches located in the E-Area Low-Level Waste Facility (ELLWF) were subject to review by the United States Department of Energy (U.S. DOE) Low-Level Waste Disposal Facility Federal Review Group (LFRG) (LFRG, 2008). LFRG comments were generally approving of the use of probabilistic modeling in GoldSim to support the quantitative sensitivity analysis. A recommendation was made, however, that the probabilistic models be revised and updated to bolster their defensibility. SRS committed to addressing those comments and, in response, contracted with Neptune and Company to rewrite the three GoldSim models. The initial portion of this work, development of Slit Trench (ST), Engineered Trench (ET) and Components-in-Grout (CIG) trench GoldSim models, has been completed. The work described in this report utilizes these revised models to test and evaluate the results against the 2008 PORFLOW model results. This was accomplished by first performing a rigorous code-to-code comparison of the PORFLOW and GoldSim codes and then performing a deterministic comparison of the two-dimensional (2D) unsaturated zone and three-dimensional (3D) saturated zone PORFLOW Slit Trench models against results from the one-dimensional (1D) GoldSim Slit Trench model. The results of the code-to-code comparison indicate that, when the mechanisms of radioactive decay, partitioning of contaminants between solid and fluid, implementation of specific boundary conditions and the imposition of solubility controls were all tested using identical flow fields, GoldSim and PORFLOW produce nearly identical results. It is also noted that GoldSim has an advantage over PORFLOW in that it simulates all radionuclides simultaneously - thus avoiding a potential problem as demonstrated in the Case Study (see Section 2.6). Hence, it was concluded that the follow

  2. Procedure to Determine Coefficients for the Sandia Array Performance Model (SAPM)

    Energy Technology Data Exchange (ETDEWEB)

    King, Bruce Hardison; Hansen, Clifford; Riley, Daniel; Robinson, Charles David; Pratt, Larry

    2016-06-01

    The Sandia Array Performance Model (SAPM), a semi-empirical model for predicting PV system power, has been in use for more than a decade. While several studies have presented comparisons of measurements and analysis results among laboratories, detailed procedures for determining model coefficients have not yet been published. Independent test laboratories must develop in-house procedures to determine SAPM coefficients, which contributes to uncertainty in the resulting models. Here we present a standard procedure for calibrating the SAPM using outdoor electrical and meteorological measurements. Analysis procedures are illustrated with data measured outdoors for a 36-cell silicon photovoltaic module.
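
As an illustration of the regression idea behind such a calibration (not the published procedure, and with synthetic rather than measured data), one SAPM-style parameter, the short-circuit-current temperature coefficient, can be recovered by linear regression:

```python
import numpy as np

# Synthetic outdoor data: Isc responds linearly to effective irradiance Ee and
# to cell temperature Tc through a coefficient alpha (per deg C, 25 C reference).
rng = np.random.default_rng(0)
Tc = rng.uniform(20, 60, 200)                  # cell temperature, deg C
Ee = rng.uniform(0.9, 1.1, 200)                # effective irradiance, suns
Isc0, alpha_true = 8.0, 0.0005                 # reference Isc (A) and true coeff
Isc = Isc0 * Ee * (1 + alpha_true * (Tc - 25)) + rng.normal(0, 0.01, 200)

# Normalize out irradiance, then regress against (Tc - 25): the slope
# estimates alpha and the intercept should be ~1.
y = Isc / (Isc0 * Ee)
slope, intercept = np.polyfit(Tc - 25, y, 1)
print(slope, intercept)
```

The full SAPM calibration determines many more coefficients (voltage temperature coefficients, the Ee polynomial terms, angle-of-incidence response), but each step reduces to a regression of this general shape on filtered outdoor data.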

  3. COMPARISON OF VIRTUAL FIELDS METHOD, PARALLEL NETWORK MATERIAL MODEL AND FINITE ELEMENT UPDATING FOR MATERIAL PARAMETER DETERMINATION

    Directory of Open Access Journals (Sweden)

    Florian Dirisamer

    2016-12-01

Extracting material parameters from test specimens is costly and time-consuming, especially for viscoelastic material models, where the parameters depend on time (frequency), temperature and environmental conditions. Therefore, three different methods for extracting these parameters were tested: first, digital image correlation combined with the virtual fields method; second, a parallel network material model; and third, finite element updating. These three methods are presented and the results are compared in terms of accuracy and experimental effort.

  4. Modelling African aerosol using updated fossil fuel and biofuel emission inventories for 2005 and 2030

    Science.gov (United States)

    Liousse, C.; Penner, J. E.; Assamoi, E.; Xu, L.; Criqui, P.; Mima, S.; Guillaume, B.; Rosset, R.

    2010-12-01

A regional fossil fuel and biofuel emission inventory for particulates has been developed for Africa at a resolution of 0.25° x 0.25° for the year 2005. The original database of Junker and Liousse (2008) was used after modification for updated regional fuel consumption and emission factors. Consumption data were corrected after direct inquiries conducted in Africa, and the inventory includes a new emitter category (two-wheel vehicles, including “zemidjans”) and a new activity sector (power plants), as neither was considered in the previous emission inventory. Emission factors were measured during the 2005 AMMA campaign (Assamoi and Liousse, 2010) and in combustion chamber experiments. Two prospective inventories for 2030 are derived based on this new regional inventory and two energy consumption forecasts by the Prospective Outlook on Long-term Energy Systems (POLES) model (Criqui, 2001). The first is a reference scenario, where no emission controls beyond those achieved in 2003 are taken into account, and the second is a "clean" scenario where possible and planned policies for emission control are assumed to be effective. BC and OCp emission budgets for these new inventories will be discussed and compared to the previous global dataset. These new inventories, along with the most recent open biomass burning inventory (Liousse et al., 2010), have been tested in the ORISAM-TM5 global chemistry-climate model with a focus over Africa at a 1° x 1° resolution. Global simulations for BC and primary OC for the years 2005 and 2030 are carried out, and the modelled particulate concentrations for 2005 are compared to available measurements in Africa. Finally, BC and OC radiative properties (aerosol optical depths and single scattering albedo) are calculated and the direct radiative forcing is estimated using an off-line model (Wang and Penner, 2009). Results of sensitivity tests driven with different emission scenarios will be presented.
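
The bottom-up arithmetic behind such an inventory is fuel consumption times an emission factor, summed over fuels and sectors on each grid cell. A toy sketch with invented numbers (not the AMMA-derived factors):

```python
import numpy as np

# Per-fuel black carbon emission factors in g BC per kg fuel (invented values).
fuels = ["diesel", "gasoline", "fuelwood"]
ef_bc = {"diesel": 1.0, "gasoline": 0.25, "fuelwood": 0.5}

# consumption[fuel] is a tiny 2 x 2 "grid" in kt fuel per year; a real
# inventory would hold one 0.25-degree field per fuel and activity sector.
consumption = {f: np.full((2, 2), c)
               for f, c in zip(fuels, [10.0, 20.0, 40.0])}

# kt fuel/yr x g/kg = t BC/yr per cell; sum over fuels gives the BC field.
bc = sum(consumption[f] * ef_bc[f] for f in fuels)
print(bc[0, 0], bc.sum())   # 35.0 t BC/yr per cell, 140.0 t BC/yr total
```

Scenario inventories (the 2030 reference and "clean" cases) change the consumption fields and emission factors, not the aggregation itself.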

  5. HOMCOS: an updated server to search and model complex 3D structures.

    Science.gov (United States)

    Kawabata, Takeshi

    2016-12-01

The HOMCOS server (http://homcos.pdbj.org) was updated for both searching and modeling the 3D complexes for all molecules in the PDB. As compared to the previous HOMCOS server, the current server targets all of the molecules in the PDB, including proteins, nucleic acids, small compounds and metal ions. Their binding relationships are stored in the database. Five services are available for users. For the services "Modeling a Homo Protein Multimer" and "Modeling a Hetero Protein Multimer", a user can input one or two proteins as the queries, while for the service "Protein-Compound Complex", a user can input one chemical compound and one protein. The server searches for similar molecules by BLAST and KCOMBU. Based on each similar complex found, a simple sequence-replaced model is quickly generated by replacing the residue names and numbers with those of the query protein. A target compound is flexibly superimposed onto the template compound using the program fkcombu. If monomeric 3D structures are input as the query, then template-based docking can be performed. For the service "Searching Contact Molecules for a Query Protein", a user inputs one protein sequence as the query, and the server then searches for its homologous proteins in the PDB and summarizes their contacting molecules as the predicted contacting molecules. The results are summarized in a "Summary Bars" or "Site Table" display. The latter shows the results as a one-site-one-row table, which is useful for annotating the effects of mutations. The service "Searching Contact Molecules for a Query Compound" is also available.

  6. An updated PREDICT breast cancer prognostication and treatment benefit prediction model with independent validation.

    Science.gov (United States)

    Candido Dos Reis, Francisco J; Wishart, Gordon C; Dicks, Ed M; Greenberg, David; Rashbass, Jem; Schmidt, Marjanka K; van den Broek, Alexandra J; Ellis, Ian O; Green, Andrew; Rakha, Emad; Maishman, Tom; Eccles, Diana M; Pharoah, Paul D P

    2017-05-22

PREDICT is a breast cancer prognostic and treatment benefit model implemented online. The overall fit of the model has been good in multiple independent case series, but PREDICT has been shown to underestimate breast cancer specific mortality in women diagnosed under the age of 40. Another limitation is the use of discrete categories for tumour size and node status, resulting in 'step' changes in risk estimates on moving between categories. We have refitted the PREDICT prognostic model using the original cohort of cases from East Anglia with updated survival times in order to take age at diagnosis into account and to smooth out the survival function for tumour size and node status. Multivariable Cox regression models were used to fit separate models for ER negative and ER positive disease. Continuous variables were fitted using fractional polynomials, and a smoothed baseline hazard was obtained by regressing the baseline cumulative hazard for each patient against time using fractional polynomials. The fit of the prognostic models was then tested in three independent data sets that had also been used to validate the original version of PREDICT. In the model fitting data, after adjusting for other prognostic variables, there is an increase in risk of breast cancer specific mortality in younger and older patients with ER positive disease, with a substantial increase in risk for women diagnosed before the age of 35. In ER negative disease the risk increases slightly with age. The association between breast cancer specific mortality and both tumour size and number of positive nodes was non-linear, with a more marked increase in risk with increasing size and increasing number of nodes in ER positive disease. The overall calibration and discrimination of the new version of PREDICT (v2) was good and comparable to that of the previous version in both model development and validation data sets.
However, the calibration of v2 improved over v1 in patients diagnosed under the age
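
The degree-1 fractional polynomial idea can be sketched outside the Cox setting: try each power in the standard FP set, fit by least squares, and keep the best-fitting power. The data below are synthetic, and the actual model selects powers within a Cox partial likelihood rather than by least squares.

```python
import numpy as np

# Standard degree-1 fractional polynomial powers; 0 denotes log(x).
POWERS = [-2, -1, -0.5, 0, 0.5, 1, 2, 3]

def fp_transform(x, p):
    return np.log(x) if p == 0 else x ** p

# Fit intercept + beta * x^p for each power, keep the smallest residual
# sum of squares: this smooths a continuous predictor without categories.
def best_fp(x, y):
    best = None
    for p in POWERS:
        X = np.column_stack([np.ones_like(x), fp_transform(x, p)])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        rss = ((y - X @ beta) ** 2).sum()
        if best is None or rss < best[1]:
            best = (p, rss, beta)
    return best

rng = np.random.default_rng(0)
x = rng.uniform(1, 50, 300)          # e.g. a tumour-size-like covariate, mm
y = 0.2 + 1.5 * np.sqrt(x) + rng.normal(0, 0.1, 300)
p, rss, beta = best_fp(x, y)
print(p)                             # selected power: 0.5 for sqrt-shaped data
```

Replacing size and node categories with such smooth transforms is what removes the 'step' changes in risk estimates that the abstract describes.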

  7. A Proposal for a Procedural Terrain Modelling Framework

    NARCIS (Netherlands)

    Smelik, R.M.; T. Tutenel, T.; Kraker, K.J. de; Bidarra, R.

    2008-01-01

    Manual game content creation is an increasingly laborious task; with each advance in graphics hardware, a higher level of fidelity and detail is achievable and, therefore, expected. Although numerous automatic (e.g. procedural) content generation algorithms and techniques have been developed over

  8. A declarative approach to procedural modeling of virtual worlds

    NARCIS (Netherlands)

    Smelik, R.M.; Tutenel, T.; Kraker, K.J.de; Bidarra, R.

    2011-01-01

    With the ever increasing costs of manual content creation for virtual worlds, the potential of creating it automatically becomes too attractive to ignore. However, for most designers, traditional procedural content generation methods are complex and unintuitive to use, hard to control, and generated

  9. A Survey of Procedural Methods for Terrain Modelling

    NARCIS (Netherlands)

    Smelik, R.M.; Kraker, J.K. de; Groenewegen, S.A.; Tutenel, T.; Bidarra, R.

    2009-01-01

    Procedural methods are a promising but underused alternative to manual content creation. Commonly heard drawbacks are the randomness of and the lack of control over the output and the absence of integrated solutions, although more recent publications increasingly address these issues. This paper

  10. Evaluating procedural modelling for 3D models of informal settlements in urban design activities

    Directory of Open Access Journals (Sweden)

    Victoria Rautenbach

    2015-11-01

Three-dimensional (3D) modelling and visualisation is one of the fastest growing application fields in geographic information science. 3D city models are being researched extensively for a variety of purposes and in various domains, including urban design, disaster management, education and computer gaming. These models typically depict urban business districts (downtown) or suburban residential areas. Despite informal settlements being a prevailing feature of many cities in developing countries, 3D models of informal settlements are virtually non-existent. 3D models of informal settlements could be useful in various ways, e.g. to gather information about the current environment in the informal settlements, to design upgrades, to communicate these and to educate inhabitants about environmental challenges. In this article, we describe the development of a 3D model of the Slovo Park informal settlement in the City of Johannesburg Metropolitan Municipality, South Africa. Instead of using time-consuming traditional manual methods, we followed the procedural modelling technique. Visualisation characteristics of 3D models of informal settlements were described and the importance of each characteristic in urban design activities for informal settlement upgrades was assessed. Next, the visualisation characteristics of the Slovo Park model were evaluated. The results of the evaluation showed that the 3D model produced by the procedural modelling technique is suitable for urban design activities in informal settlements. The visualisation characteristics and their assessment are also useful as guidelines for developing 3D models of informal settlements. In future, we plan to empirically test the use of such 3D models in urban design projects in informal settlements.

  11. An evolutionary cascade model for sauropod dinosaur gigantism--overview, update and tests.

    Directory of Open Access Journals (Sweden)

    P Martin Sander

Sauropod dinosaurs are a group of herbivorous dinosaurs which exceeded all other terrestrial vertebrates in mean and maximal body size. Sauropod dinosaurs were also the most successful and long-lived herbivorous tetrapod clade, but no abiological factors such as global environmental parameters conducive to their gigantism can be identified. These facts justify major efforts by evolutionary biologists and paleontologists to understand sauropods as living animals and to explain their evolutionary success and uniquely gigantic body size. Contributions to this research program have come from many fields and can be synthesized into a biological evolutionary cascade model of sauropod dinosaur gigantism (sauropod gigantism ECM). This review focuses on the sauropod gigantism ECM, providing an updated version based on the contributions to the PLoS ONE sauropod gigantism collection and on other very recent published evidence. The model consists of five separate evolutionary cascades ("Reproduction", "Feeding", "Head and neck", "Avian-style lung", and "Metabolism"). Each cascade starts with observed or inferred basal traits that either may be plesiomorphic or derived at the level of Sauropoda. Each trait confers hypothetical selective advantages which permit the evolution of the next trait. Feedback loops in the ECM consist of selective advantages originating from traits higher in the cascades but affecting lower traits. All cascades end in the trait "Very high body mass". Each cascade is linked to at least one other cascade. Important plesiomorphic traits of sauropod dinosaurs that entered the model were ovipary as well as no mastication of food. Important evolutionary innovations (derived traits) were an avian-style respiratory system and an elevated basal metabolic rate. Comparison with other tetrapod lineages identifies factors limiting body size.

  12. An evolutionary cascade model for sauropod dinosaur gigantism--overview, update and tests.

    Science.gov (United States)

    Sander, P Martin

    2013-01-01

    Sauropod dinosaurs are a group of herbivorous dinosaurs which exceeded all other terrestrial vertebrates in mean and maximal body size. Sauropod dinosaurs were also the most successful and long-lived herbivorous tetrapod clade, but no abiological factors such as global environmental parameters conducive to their gigantism can be identified. These facts justify major efforts by evolutionary biologists and paleontologists to understand sauropods as living animals and to explain their evolutionary success and uniquely gigantic body size. Contributions to this research program have come from many fields and can be synthesized into a biological evolutionary cascade model of sauropod dinosaur gigantism (sauropod gigantism ECM). This review focuses on the sauropod gigantism ECM, providing an updated version based on the contributions to the PLoS ONE sauropod gigantism collection and on other very recent published evidence. The model consists of five separate evolutionary cascades ("Reproduction", "Feeding", "Head and neck", "Avian-style lung", and "Metabolism"). Each cascade starts with observed or inferred basal traits that either may be plesiomorphic or derived at the level of Sauropoda. Each trait confers hypothetical selective advantages which permit the evolution of the next trait. Feedback loops in the ECM consist of selective advantages originating from traits higher in the cascades but affecting lower traits. All cascades end in the trait "Very high body mass". Each cascade is linked to at least one other cascade. Important plesiomorphic traits of sauropod dinosaurs that entered the model were ovipary as well as no mastication of food. Important evolutionary innovations (derived traits) were an avian-style respiratory system and an elevated basal metabolic rate. Comparison with other tetrapod lineages identifies factors limiting body size.

  13. An Evolutionary Cascade Model for Sauropod Dinosaur Gigantism - Overview, Update and Tests

    Science.gov (United States)

    Sander, P. Martin

    2013-01-01

    Sauropod dinosaurs are a group of herbivorous dinosaurs which exceeded all other terrestrial vertebrates in mean and maximal body size. Sauropod dinosaurs were also the most successful and long-lived herbivorous tetrapod clade, but no abiological factors such as global environmental parameters conducive to their gigantism can be identified. These facts justify major efforts by evolutionary biologists and paleontologists to understand sauropods as living animals and to explain their evolutionary success and uniquely gigantic body size. Contributions to this research program have come from many fields and can be synthesized into a biological evolutionary cascade model of sauropod dinosaur gigantism (sauropod gigantism ECM). This review focuses on the sauropod gigantism ECM, providing an updated version based on the contributions to the PLoS ONE sauropod gigantism collection and on other very recent published evidence. The model consists of five separate evolutionary cascades (“Reproduction”, “Feeding”, “Head and neck”, “Avian-style lung”, and “Metabolism”). Each cascade starts with observed or inferred basal traits that either may be plesiomorphic or derived at the level of Sauropoda. Each trait confers hypothetical selective advantages which permit the evolution of the next trait. Feedback loops in the ECM consist of selective advantages originating from traits higher in the cascades but affecting lower traits. All cascades end in the trait “Very high body mass”. Each cascade is linked to at least one other cascade. Important plesiomorphic traits of sauropod dinosaurs that entered the model were ovipary as well as no mastication of food. Important evolutionary innovations (derived traits) were an avian-style respiratory system and an elevated basal metabolic rate. Comparison with other tetrapod lineages identifies factors limiting body size. PMID:24205267

  14. Advanced Test Reactor Core Modeling Update Project Annual Report for Fiscal Year 2013

    Energy Technology Data Exchange (ETDEWEB)

    Nigg, David W. [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2013-09-01

    Legacy computational reactor physics software tools and protocols currently used for support of Advanced Test Reactor (ATR) core fuel management and safety assurance, and to some extent, experiment management, are inconsistent with the state of modern nuclear engineering practice, and are difficult, if not impossible, to verify and validate (V&V) according to modern standards. Furthermore, the legacy staff knowledge required for effective application of these tools and protocols from the 1960s and 1970s is rapidly being lost due to staff turnover and retirements. In late 2009, the Idaho National Laboratory (INL) initiated a focused effort, the ATR Core Modeling Update Project, to address this situation through the introduction of modern high-fidelity computational software and protocols. This aggressive computational and experimental campaign will have a broad strategic impact on the operation of the ATR, both in terms of improved computational efficiency and accuracy for support of ongoing DOE programs as well as in terms of national and international recognition of the ATR National Scientific User Facility (NSUF).

  15. Updated comparison of groundwater flow model results and isotopic data in the Leon Valley, Mexico

    Science.gov (United States)

    Hernandez-Garcia, G. D.

    2015-12-01

    The study area is located northwest of Mexico City, in the State of Guanajuato. The Leon Valley meets its water demand, estimated at 20.6 cubic meters per second, from groundwater. The constant growth of population and economic activity in the region, mainly in cities and automobile factories, has driven a steady rise in water needs. The associated extraction rate has produced an average water-level decline of approximately 1.0 m per year over the past two decades, which suggests that the present management of the groundwater should be reviewed. Because extraction can produce environmental impacts and this vital resource is under stress, studying its hydrogeological functioning is necessary to achieve scientifically grounded management of groundwater in the valley. This research was based on the analysis and integration of existing information together with field data generated by the authors. Building on updated interpretations of the geological structure of the area, the hydraulic parameters, and the delta-deuterium and delta-oxygen-18 composition, the research presents new results. This information was analyzed by applying a groundwater flow model with particle tracking: the model yields travel times and flow paths similar to those derived from the isotopic data.

  16. Life cycle reliability assessment of new products—A Bayesian model updating approach

    International Nuclear Information System (INIS)

    Peng, Weiwen; Huang, Hong-Zhong; Li, Yanfeng; Zuo, Ming J.; Xie, Min

    2013-01-01

    The rapidly increasing pace and continuously evolving reliability requirements of new products have made life cycle reliability assessment of new products an imperative yet difficult task. While much work has been done to separately estimate reliability of new products in specific stages, a gap exists in carrying out life cycle reliability assessment throughout all life cycle stages. We present a Bayesian model updating approach (BMUA) for life cycle reliability assessment of new products. Novel features of this approach are the development of Bayesian information toolkits by separately including “reliability improvement factor” and “information fusion factor”, which allow the integration of subjective information in a specific life cycle stage and the transition of integrated information between adjacent life cycle stages. They lead to the unique characteristics of the BMUA, in which information generated throughout the life cycle stages is integrated coherently. To illustrate the approach, an application to the life cycle reliability assessment of a newly developed Gantry Machining Center is shown.
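
The stage-to-stage updating idea described in this record can be illustrated with a conjugate Beta-Binomial sketch, in which a factor that down-weights the prior carried over from the previous stage plays the role of the paper's "information fusion factor". The stage names, test counts, and the 0.8 factor below are illustrative assumptions, not values from the paper.

```python
def update_stage(a, b, successes, failures, fusion_factor=1.0):
    """Beta-Binomial update of reliability evidence.

    fusion_factor < 1 down-weights the prior carried over from the
    previous life cycle stage (an assumed analogue of the paper's
    'information fusion factor')."""
    return a * fusion_factor + successes, b * fusion_factor + failures

# Vague Beta(1, 1) prior at the design stage; hypothetical test data.
a, b = 1.0, 1.0
for stage, (s, f) in {"design test": (18, 2),
                      "field trial": (45, 3),
                      "early service": (190, 6)}.items():
    a, b = update_stage(a, b, s, f, fusion_factor=0.8)
    print(f"{stage}: posterior mean reliability = {a / (a + b):.3f}")
```

Because the prior is discounted before each update, early-stage information influences, but does not dominate, the later-stage assessments.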

  17. Robust and efficient solution procedures for association models

    DEFF Research Database (Denmark)

    Michelsen, Michael Locht

    2006-01-01

    Equations of state that incorporate the Wertheim association expression are more difficult to apply than conventional pressure explicit equations, because the association term is implicit and requires solution for an internal set of composition variables. In this work, we analyze the convergence behavior of different solution methods and demonstrate how a simple and efficient, yet globally convergent, procedure for the solution of the equation of state can be formulated.
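
The implicit association term can be made concrete with the one-site symmetric special case of the Wertheim expression, where the internal variable X satisfies X = 1/(1 + rho*Delta*X). The successive-substitution loop below is a minimal sketch of such an inner solve (not Michelsen's actual procedure), and the value of rho*Delta is an arbitrary assumption; in this scalar case the result can be checked against the closed-form root of the quadratic.

```python
import math

def association_fraction(rho_delta, tol=1e-12, max_iter=500):
    """Successive substitution for X = 1/(1 + rho_delta * X), the
    one-site symmetric special case of the Wertheim association term."""
    x = 1.0
    for _ in range(max_iter):
        x_new = 1.0 / (1.0 + rho_delta * x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

rd = 50.0  # dimensionless rho*Delta (assumed value)
x_iter = association_fraction(rd)
# Closed-form root of rho_delta*X^2 + X - 1 = 0 for comparison.
x_exact = (-1.0 + math.sqrt(1.0 + 4.0 * rd)) / (2.0 * rd)
print(x_iter, x_exact)
```

For strongly associating conditions (large rho*Delta) the contraction factor approaches one, which is why robust, globally convergent inner solvers matter in practice.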

  18. Recent updates in the aerosol component of the C-IFS model run by ECMWF

    Science.gov (United States)

    Remy, Samuel; Boucher, Olivier; Hauglustaine, Didier; Kipling, Zak; Flemming, Johannes

    2017-04-01

    The Composition-Integrated Forecast System (C-IFS) is a global atmospheric composition forecasting tool, run by ECMWF within the framework of the Copernicus Atmospheric Monitoring Service (CAMS). The aerosol model of C-IFS is a simple bulk scheme that forecasts 5 species: dust, sea-salt, black carbon, organic matter and sulfate. Three bins represent the dust and sea-salt, for the super-coarse, coarse and fine mode of these species (Morcrette et al., 2009). This talk will present recent updates of the aerosol model, and also introduce forthcoming developments. It will also present the impact of these changes as measured by scores against AERONET Aerosol Optical Depth (AOD) and Airbase PM10 observations. The next cycle of C-IFS will include a mass fixer, because the semi-Lagrangian advection scheme used in C-IFS is not mass-conservative. C-IFS now offers the possibility to emit biomass-burning aerosols at an injection height that is provided by a new version of the Global Fire Assimilation System (GFAS). Secondary Organic Aerosols (SOA) production will be scaled on non-biomass burning CO fluxes. This approach makes it possible to represent the anthropogenic contribution to SOA production; it brought a notable improvement in the skill of the model, especially over Europe. Lastly, the emissions of SO2 are now provided by the MACCity inventory instead of an older version of the EDGAR dataset. The seasonal and yearly variability of SO2 emissions is better captured by the MACCity dataset. Upcoming developments of the aerosol model of C-IFS consist mainly in the implementation of a nitrate and ammonium module, with 2 bins (fine and coarse) for nitrate. Nitrate and ammonium sulfate particle formation from gaseous precursors is represented following Hauglustaine et al. (2014); formation of coarse nitrate over pre-existing sea-salt or dust particles is also represented. This extension of the forward model improved scores over heavily populated areas such as Europe, China and Eastern

  19. Advanced Test Reactor Core Modeling Update Project Annual Report for Fiscal Year 2012

    Energy Technology Data Exchange (ETDEWEB)

    David W. Nigg, Principal Investigator; Kevin A. Steuhm, Project Manager

    2012-09-01

    Legacy computational reactor physics software tools and protocols currently used for support of Advanced Test Reactor (ATR) core fuel management and safety assurance, and to some extent, experiment management, are inconsistent with the state of modern nuclear engineering practice, and are difficult, if not impossible, to properly verify and validate (V&V) according to modern standards. Furthermore, the legacy staff knowledge required for application of these tools and protocols from the 1960s and 1970s is rapidly being lost due to staff turnover and retirements. In late 2009, the Idaho National Laboratory (INL) initiated a focused effort, the ATR Core Modeling Update Project, to address this situation through the introduction of modern high-fidelity computational software and protocols. This aggressive computational and experimental campaign will have a broad strategic impact on the operation of the ATR, both in terms of improved computational efficiency and accuracy for support of ongoing DOE programs as well as in terms of national and international recognition of the ATR National Scientific User Facility (NSUF). The ATR Core Modeling Update Project, targeted for full implementation in phase with the next anticipated ATR Core Internals Changeout (CIC) in the 2014-2015 time frame, began during the last quarter of Fiscal Year 2009, and has just completed its third full year. Key accomplishments so far have encompassed both computational as well as experimental work. A new suite of stochastic and deterministic transport theory based reactor physics codes and their supporting nuclear data libraries (HELIOS, KENO6/SCALE, NEWT/SCALE, ATTILA, and an extended implementation of MCNP5) has been installed at the INL under various licensing arrangements. Corresponding models of the ATR and ATRC are now operational with all five codes, demonstrating the basic feasibility of the new code packages for their intended purpose. Of particular importance, a set of as-run core

  20. Parabolic Trough Collector Cost Update for the System Advisor Model (SAM)

    Energy Technology Data Exchange (ETDEWEB)

    Kurup, Parthiv [National Renewable Energy Lab. (NREL), Golden, CO (United States); Turchi, Craig S. [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2015-11-01

    This report updates the baseline cost for parabolic trough solar fields in the United States within NREL's System Advisor Model (SAM). SAM, available at no cost at https://sam.nrel.gov/, is a performance and financial model designed to facilitate decision making for people involved in the renewable energy industry. SAM is the primary tool used by NREL and the U.S. Department of Energy (DOE) for estimating the performance and cost of concentrating solar power (CSP) technologies and projects. The study performed a bottom-up build and cost estimate for two state-of-the-art parabolic trough designs -- the SkyTrough and the Ultimate Trough. The SkyTrough analysis estimated the potential installed cost for a solar field of 1500 SCAs as $170/m2 +/- $6/m2. The investigation found that SkyTrough installed costs were sensitive to factors such as raw aluminum alloy cost and production volume. For example, in the case of the SkyTrough, the installed cost would rise to nearly $210/m2 if the aluminum alloy cost was $1.70/lb instead of $1.03/lb. Accordingly, one must be aware of fluctuations in the relevant commodities markets to track system cost over time. The estimated installed cost for the Ultimate Trough was only slightly higher at $178/m2, which includes an assembly facility of $11.6 million amortized over the required production volume. Considering the size and overall cost of a 700 SCA Ultimate Trough solar field, two parallel production lines in a fully covered assembly facility, each with the specific torque box, module and mirror jigs, would be justified for a full CSP plant.
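
The quoted sensitivity to aluminum price ($170/m2 at $1.03/lb, rising to roughly $210/m2 at $1.70/lb for the SkyTrough) implies an approximately linear relationship that can be sketched as follows; the linear interpolation itself is our assumption for illustration, not part of NREL's cost model.

```python
# Back-of-envelope sensitivity implied by the SkyTrough figures above.
base_cost, base_al = 170.0, 1.03   # installed cost $/m2 at $/lb aluminum
high_cost, high_al = 210.0, 1.70
slope = (high_cost - base_cost) / (high_al - base_al)  # $/m2 per $/lb

def estimated_install_cost(al_price):
    """Linearly interpolated installed cost ($/m2) vs aluminum price ($/lb)."""
    return base_cost + slope * (al_price - base_al)

print(round(slope, 1))                         # sensitivity, $/m2 per $/lb
print(round(estimated_install_cost(1.30), 1))  # cost at an assumed $1.30/lb
```

The roughly $60/m2-per-$/lb slope explains why the report stresses tracking commodity markets when quoting system costs over time.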

  1. A Comparison of Exposure Control Procedures in CAT Systems Based on Different Measurement Models for Testlets

    Science.gov (United States)

    Boyd, Aimee M.; Dodd, Barbara; Fitzpatrick, Steven

    2013-01-01

    This study compared several exposure control procedures for CAT systems based on the three-parameter logistic testlet response theory model (Wang, Bradlow, & Wainer, 2002) and Masters' (1982) partial credit model when applied to a pool consisting entirely of testlets. The exposure control procedures studied were the modified within 0.10 logits…

  2. A testing procedure for wind turbine generators based on the power grid statistical model

    DEFF Research Database (Denmark)

    Farajzadehbibalan, Saber; Ramezani, Mohammad Hossein; Nielsen, Peter

    2017-01-01

    In this study, a comprehensive test procedure is developed to test wind turbine generators with a hardware-in-loop setup. The procedure employs the statistical model of the power grid considering the restrictions of the test facility and system dynamics. Given the model in the latent space, the j...

  3. Combining Multi-Source Remotely Sensed Data and a Process-Based Model for Forest Aboveground Biomass Updating.

    Science.gov (United States)

    Lu, Xiaoman; Zheng, Guang; Miller, Colton; Alvarado, Ernesto

    2017-09-08

    Monitoring and understanding the spatio-temporal variations of forest aboveground biomass (AGB) is a key basis to quantitatively assess the carbon sequestration capacity of a forest ecosystem. To map and update forest AGB in the Greater Khingan Mountains (GKM) of China, this work proposes a physical-based approach. Based on the baseline forest AGB from Landsat Enhanced Thematic Mapper Plus (ETM+) images in 2008, we dynamically updated the annual forest AGB from 2009 to 2012 by adding the annual AGB increment (ABI) obtained from the simulated daily and annual net primary productivity (NPP) using the Boreal Ecosystem Productivity Simulator (BEPS) model. The 2012 result was validated by both field- and aerial laser scanning (ALS)-based AGBs. The predicted forest AGB for 2012 estimated from the process-based model can explain 31% (n = 35) of the variation in the validation AGB, while the BEPS-based AGB tended to underestimate the AGB for dense forests and overestimate it for sparse forests. Generally, our results showed that the remotely sensed forest AGB estimates could serve as the initial carbon pool to parameterize the process-based model for NPP simulation, and the combination of the baseline forest AGB and BEPS model could effectively update the spatiotemporal distribution of forest AGB.
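
The updating rule described in this record, a baseline AGB map plus an annual biomass increment (ABI) derived from simulated NPP, can be sketched per pixel as below. The allocation fraction and pixel values are invented for illustration and are not BEPS parameters.

```python
def update_agb(agb_map, npp_map, allocation=0.45, years=1):
    """Add an annual aboveground biomass increment to a baseline map.

    agb_map: per-pixel AGB (Mg/ha); npp_map: per-pixel NPP (Mg C/ha/yr);
    allocation: assumed fraction of NPP allocated to aboveground biomass."""
    for _ in range(years):
        agb_map = [agb + allocation * npp
                   for agb, npp in zip(agb_map, npp_map)]
    return agb_map

baseline_2008 = [80.0, 120.0, 55.0]   # Mg/ha, three example pixels
annual_npp = [5.0, 6.5, 4.0]          # Mg C/ha/yr (assumed constant)
agb_2012 = update_agb(baseline_2008, annual_npp, years=4)
print([round(v, 1) for v in agb_2012])
```

A real update would vary NPP year by year and per pixel, but the accumulation structure is the same.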

  4. A RENORMALIZATION PROCEDURE FOR TENSOR MODELS AND SCALAR-TENSOR THEORIES OF GRAVITY

    OpenAIRE

    SASAKURA, NAOKI

    2010-01-01

    Tensor models are more-index generalizations of the so-called matrix models, and provide models of quantum gravity with the idea that spaces and general relativity are emergent phenomena. In this paper, a renormalization procedure for the tensor models whose dynamical variable is a totally symmetric real three-tensor is discussed. It is proven that configurations with certain Gaussian forms are the attractors of the three-tensor under the renormalization procedure. Since these Gaussian config...

  5. Updates on Modeling the Water Cycle with the NASA Ames Mars Global Climate Model

    Science.gov (United States)

    Kahre, M. A.; Haberle, R. M.; Hollingsworth, J. L.; Montmessin, F.; Brecht, A. S.; Urata, R.; Klassen, D. R.; Wolff, M. J.

    2017-01-01

    Global Circulation Models (GCMs) have made steady progress in simulating the current Mars water cycle. It is now widely recognized that clouds are a critical component that can significantly affect the nature of the simulated water cycle. Two processes in particular are key to implementing clouds in a GCM: the microphysical processes of formation and dissipation, and their radiative effects on heating/ cooling rates. Together, these processes alter the thermal structure, change the dynamics, and regulate inter-hemispheric transport. We have made considerable progress representing these processes in the NASA Ames GCM, particularly in the presence of radiatively active water ice clouds. We present the current state of our group's water cycle modeling efforts, show results from selected simulations, highlight some of the issues, and discuss avenues for further investigation.

  6. PROCRU: A model for analyzing crew procedures in approach to landing

    Science.gov (United States)

    Baron, S.; Muralidharan, R.; Lancraft, R.; Zacharias, G.

    1980-01-01

    A model for analyzing crew procedures in approach to landing is developed. The model employs the information processing structure used in the optimal control model and in recent models for monitoring and failure detection. Mechanisms are added to this basic structure to model crew decision making in this multi-task environment. Decisions are based on probability assessments and potential mission impact (or gain). Submodels for procedural activities are included. The model distinguishes among external visual, instrument visual, and auditory sources of information. The external visual scene perception models incorporate limitations in obtaining information. The auditory information channel contains a buffer to allow for storage in memory until that information can be processed.
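
The decision mechanism described above, attending to the task whose assessed probability times potential mission impact is largest, can be sketched as an expected-impact ranking; the task names and numbers below are invented for illustration.

```python
# Toy expected-impact task selection in the spirit of PROCRU's
# decision mechanism. All task names and values are hypothetical.
tasks = {
    # task: (assessed probability that the event needs action, impact/gain)
    "monitor glideslope": (0.30, 8.0),
    "check flap setting": (0.10, 9.0),
    "radio callout":      (0.90, 2.0),
}

def next_task(tasks):
    """Pick the task with the largest probability x impact product."""
    return max(tasks, key=lambda t: tasks[t][0] * tasks[t][1])

print(next_task(tasks))
```

Note that a likely-but-minor task can lose to an unlikely-but-consequential one, which is the point of weighting probability by mission impact.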

  7. Recent updates in the aerosol model of C-IFS and their impact on skill scores

    Science.gov (United States)

    Remy, Samuel; Boucher, Olivier; Hauglustaine, Didier

    2016-04-01

    The Composition-Integrated Forecast System (C-IFS) is a global atmospheric composition forecasting tool, run by ECMWF within the framework of the Copernicus Atmospheric Monitoring Services (CAMS). The aerosol model of C-IFS is a simple bulk scheme that forecasts 5 species: dust, sea-salt, black carbon, organic matter and sulfates. Three bins represent the dust and sea-salt, for the super-coarse, coarse and fine mode of these species (Morcrette et al., 2009). This talk will present recent updates of the aerosol model, and also introduce upcoming upgrades. It will also present evaluations of these changes as scores against AERONET observations. The next cycle of C-IFS will include a mass fixer, because the semi-Lagrangian advection scheme used in C-IFS is not mass-conservative. This modification has a negligible impact for most species except for black carbon and organic matter; it makes it possible to close the budgets between sources and sinks in the diagnostics. Dust emissions have been tuned to favor the emissions of large particles, which were under-represented. This brought an overall decrease of the burden of dust aerosol and improved scores especially close to source regions. The biomass-burning aerosol emissions are now emitted at an injection height that is provided by a new version of the Global Fire Assimilation System (GFAS). This brought a small increase in biomass burning aerosols, and a better representation of some large fire events. Lastly, SO2 emissions are now provided by the MACCity dataset instead of an older version of the EDGAR dataset. The seasonal and yearly variability of SO2 emissions is better captured by the MACCity dataset, the use of which brought significant improvements of the forecasts against observations. Upcoming upgrades of the aerosol model of C-IFS consist mainly in the overhaul of the representation of secondary aerosols. Secondary Organic Aerosols (SOA) production will be dynamically estimated by scaling them on CO fluxes.
This approach has been
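
The mass fixer mentioned in this record compensates for the fact that semi-Lagrangian advection does not conserve tracer mass. A minimal proportional fixer rescales the advected field so that its area-weighted global integral matches the pre-advection mass; this generic sketch is not the specific scheme implemented in C-IFS.

```python
def proportional_mass_fixer(field, weights, target_mass):
    """Rescale a tracer field so its weighted integral equals target_mass."""
    current = sum(f * w for f, w in zip(field, weights))
    scale = target_mass / current
    return [f * scale for f in field]

pre_advection = [1.0, 2.0, 3.0, 2.0]
weights = [0.25] * 4                    # equal grid-cell areas (assumed)
mass0 = sum(f * w for f, w in zip(pre_advection, weights))

advected = [1.1, 1.9, 3.2, 1.9]         # spurious mass gain after advection
fixed = proportional_mass_fixer(advected, weights, mass0)
print(round(sum(f * w for f, w in zip(fixed, weights)), 12))
```

A multiplicative fixer like this preserves the spatial pattern and positivity of the field while restoring the global budget; operational schemes typically add refinements such as per-species or sign-aware corrections.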

  8. A Stepwise Fitting Procedure for automated fitting of Ecopath with Ecosim models

    Directory of Open Access Journals (Sweden)

    Erin Scott

    2016-01-01

    Full Text Available The Stepwise Fitting Procedure automates testing of alternative hypotheses used for fitting Ecopath with Ecosim (EwE) models to observation reference data (Mackinson et al. 2009). The calibration of EwE model predictions to observed data is important to evaluate any model that will be used for ecosystem based management. Thus far, the model fitting procedure in EwE has been carried out manually: a repetitive task involving setting >1000 specific individual searches to find the statistically ‘best fit’ model. The novel fitting procedure automates the manual procedure, thereby producing accurate results and letting the modeller concentrate on investigating the ‘best fit’ model for ecological accuracy.
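
The automation described here amounts to looping over candidate hypotheses, fitting each, and keeping the statistically best model. A generic sketch using AIC as the selection criterion is shown below; the toy hypotheses, parameter counts, and sum-of-squared-error values are assumptions for illustration, and the real procedure drives EwE's own search machinery rather than a lookup.

```python
import math

def aic(sse, n_obs, k_params):
    """AIC for least-squares fits: n*ln(SSE/n) + 2k."""
    return n_obs * math.log(sse / n_obs) + 2 * k_params

def stepwise_fit(hypotheses, fit_fn, n_obs):
    """Fit each (name, n_free_params) hypothesis and keep the lowest AIC."""
    best = None
    for name, k in hypotheses:
        score = aic(fit_fn(name), n_obs, k)
        if best is None or score < best[1]:
            best = (name, score)
    return best

# Toy SSE results standing in for actual EwE search outcomes.
toy_sse = {"baseline": 40.0,
           "+vulnerabilities": 22.0,
           "+primary production anomaly": 25.0}
best = stepwise_fit([("baseline", 0),
                     ("+vulnerabilities", 3),
                     ("+primary production anomaly", 2)],
                    toy_sse.__getitem__, n_obs=30)
print(best[0])
```

Penalizing each freed parameter keeps the search from always preferring the most flexible hypothesis, which is exactly the trade-off the stepwise procedure automates.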

  9. Medical Updates Number 5 to the International Space Station Probability Risk Assessment (PRA) Model Using the Integrated Medical Model

    Science.gov (United States)

    Butler, Doug; Bauman, David; Johnson-Throop, Kathy

    2011-01-01

    The Integrated Medical Model (IMM) Project has been developing a probabilistic risk assessment tool, the IMM, to help evaluate in-flight crew health needs and impacts to the mission due to medical events. This package is a follow-up to a data package provided in June 2009. The IMM currently represents 83 medical conditions and associated ISS resources required to mitigate medical events. IMM end state forecasts relevant to the ISS PRA model include evacuation (EVAC) and loss of crew life (LOCL). The current version of the IMM provides the basis for the operational version of IMM expected in the January 2011 timeframe. The objectives of this data package are: 1. To provide a preliminary understanding of medical risk data used to update the ISS PRA Model. The IMM has had limited validation and an initial characterization of maturity has been completed using NASA STD 7009 Standard for Models and Simulation. The IMM has been internally validated by IMM personnel but has not been validated by an independent body external to the IMM Project. 2. To support a continued dialogue between the ISS PRA and IMM teams. To ensure accurate data interpretation, and that IMM output format and content meets the needs of the ISS Risk Management Office and ISS PRA Model, periodic discussions are anticipated between the risk teams. 3. To help assess the differences between the current ISS PRA and IMM medical risk forecasts of EVAC and LOCL. Follow-on activities are anticipated based on the differences between the current ISS PRA medical risk data and the latest medical risk data produced by IMM.

  10. Constitutional Justice Procedure in Lithuania: a Search for Optimal Model

    OpenAIRE

    Pūraitė-Andrikienė, Dovilė

    2017-01-01

    The dissertation systematically analyzes the preconditions for optimising the existing constitutional justice model, i.e. whether the current model meets the expectations of Lithuanian society and the legal community, corresponds to the capabilities of the legal system, and is in line with the tendencies of constitutional justice in European states, identifies the problematic aspects of the existing constitutional justice model and brings forward proposals regarding how the legal regulation c...

  11. modelling room cooling capacity with fuzzy logic procedure

    African Journals Online (AJOL)

    user

    for automatic and economical supplementary tools that will allow expertise input into design process [9]. Reasoning based on fuzzy models was however identified to provide an optional direction of handling the way humans think and make judgments [10]. This study developed and validated a model capable of estimating ...

  12. modelling room cooling capacity with fuzzy logic procedure

    African Journals Online (AJOL)

    The primary aim of this study is to develop a model for estimation of the cooling requirement of residential rooms. Fuzzy logic was employed to model four input variables (window area (m2), roof area (m2), external wall area (m2) and internal load (Watt). The algorithm of the inference engine applied sets of 81 linguistic ...
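
The fuzzy estimation idea in these two records can be illustrated with a one-input toy version using triangular memberships and a Sugeno-style weighted-average defuzzification; the membership breakpoints and rule outputs below are invented, and the actual study used four inputs and 81 rules.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def cooling_estimate(window_area):
    """Toy one-input fuzzy estimate of room cooling requirement (Watt)."""
    rules = [  # (membership over window area in m2, rule output in Watt)
        (tri(window_area, 0.0, 1.0, 3.0), 800.0),    # "small" window
        (tri(window_area, 1.0, 3.0, 5.0), 1500.0),   # "medium" window
        (tri(window_area, 3.0, 5.0, 7.0), 2400.0),   # "large" window
    ]
    num = sum(mu * out for mu, out in rules)
    den = sum(mu for mu, _ in rules)
    return num / den

print(round(cooling_estimate(2.0), 1))
```

With four inputs and three linguistic terms each, the same structure yields 3^4 = 81 rules, matching the rule count reported in the abstract.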

  13. Cassini Spacecraft Uncertainty Analysis Data and Methodology Review and Update/Volume 1: Updated Parameter Uncertainty Models for the Consequence Analysis

    Energy Technology Data Exchange (ETDEWEB)

    WHEELER, TIMOTHY A.; WYSS, GREGORY D.; HARPER, FREDERICK T.

    2000-11-01

    Uncertainty distributions for specific parameters of the Cassini General Purpose Heat Source Radioisotope Thermoelectric Generator (GPHS-RTG) Final Safety Analysis Report consequence risk analysis were revised and updated. The revisions and updates were done for all consequence parameters for which relevant information exists from the joint project on Probabilistic Accident Consequence Uncertainty Analysis by the United States Nuclear Regulatory Commission and the Commission of European Communities.

  14. Updated U.S. Geothermal Supply Characterization and Representation for Market Penetration Model Input

    Energy Technology Data Exchange (ETDEWEB)

    Augustine, C.

    2011-10-01

    The U.S. Department of Energy (DOE) Geothermal Technologies Program (GTP) tasked the National Renewable Energy Laboratory (NREL) with conducting the annual geothermal supply curve update. This report documents the approach taken to identify geothermal resources, determine the electrical producing potential of these resources, and estimate the levelized cost of electricity (LCOE), capital costs, and operating and maintenance costs from these geothermal resources at present and future timeframes under various GTP funding levels. Finally, this report discusses the resulting supply curve representation and how improvements can be made to future supply curve updates.

  15. Computer-Based Procedures for Field Workers in Nuclear Power Plants: Development of a Model of Procedure Usage and Identification of Requirements

    Energy Technology Data Exchange (ETDEWEB)

    Katya Le Blanc; Johanna Oxstrand

    2012-04-01

    The nuclear industry is constantly trying to find ways to decrease the human error rate, especially the human errors associated with procedure use. As a step toward the goal of improving procedure use performance, researchers, together with the nuclear industry, have been looking at replacing the current paper-based procedures with computer-based procedure systems. The concept of computer-based procedures is not new by any means; however, most research has focused on procedures used in the main control room. Procedures reviewed in these efforts are mainly emergency operating procedures and normal operating procedures. Based on lessons learned from these previous efforts we are now exploring a less studied application for computer-based procedures - field procedures, i.e. procedures used by nuclear equipment operators and maintenance technicians. The Idaho National Laboratory and participants from the U.S. commercial nuclear industry are collaborating in an applied research effort with the objective of developing requirements and specifications for a computer-based procedure system to be used by field workers. The goal is to identify the types of human errors that can be mitigated by using computer-based procedures and how to best design the computer-based procedures to do so. This paper describes the development of a Model of Procedure Use and the qualitative study on which the model is based. The study was conducted in collaboration with four nuclear utilities and five research institutes. During the qualitative study and the model development, requirements for computer-based procedures were identified.

  16. Procedures and models for estimating preconstruction costs of highway projects.

    Science.gov (United States)

    2012-07-01

    This study presents data driven and component based PE cost prediction models by utilizing critical factors retrieved from ten years of historical project data obtained from ODOT roadway division. The study used factor analysis of covariance and corr...

  17. A Bidirectional Coupling Procedure Applied to Multiscale Respiratory Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Kuprat, Andrew P.; Kabilan, Senthil; Carson, James P.; Corley, Richard A.; Einstein, Daniel R.

    2013-07-01

    In this study, we present a novel multiscale computational framework for efficiently linking multiple lower-dimensional models describing the distal lung mechanics to imaging-based 3D computational fluid dynamics (CFD) models of the upper pulmonary airways in order to incorporate physiologically appropriate outlet boundary conditions. The framework is an extension of the Modified Newton’s Method with nonlinear Krylov accelerator developed by Carlson and Miller [1, 2, 3]. Our extensions include the retention of subspace information over multiple timesteps, and a special correction at the end of a timestep that allows for corrections to be accepted with verified low residual with as little as a single residual evaluation per timestep on average. In the case of a single residual evaluation per timestep, the method has zero additional computational cost compared to uncoupled or unidirectionally coupled simulations. We expect these enhancements to be generally applicable to other multiscale coupling applications where timestepping occurs. In addition we have developed a “pressure-drop” residual which allows for stable coupling of flows between a 3D incompressible CFD application and another (lower-dimensional) fluid system. We expect this residual to also be useful for coupling non-respiratory incompressible fluid applications, such as multiscale simulations involving blood flow. The lower-dimensional models that are considered in this study are sets of simple ordinary differential equations (ODEs) representing the compliant mechanics of symmetric human pulmonary airway trees. To validate the method, we compare the predictions of hybrid CFD-ODE models against an ODE-only model of pulmonary airflow in an idealized geometry. Subsequently, we couple multiple sets of ODEs describing the distal lung to an imaging-based human lung geometry. Boundary conditions in these models consist of atmospheric pressure at the mouth and intrapleural pressure applied to the multiple
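
The coupling iteration in this record can be illustrated with a scalar stand-in: a "3D" outlet flow law and a "0D" distal pressure law coupled through an interface pressure, with a secant update playing the role of the accelerated Newton step (Anderson acceleration with a single stored vector reduces to the secant method). Both model functions and all numbers are hypothetical surrogates, not the paper's CFD or ODE models.

```python
def residual(p_interface):
    """Pressure mismatch at the coupling interface (hypothetical laws)."""
    q_3d = (100.0 - p_interface) / 2.0   # "CFD" outlet flow vs pressure
    p_0d = 10.0 + 0.5 * q_3d             # "distal lung" pressure vs flow
    return p_interface - p_0d            # zero at the coupled solution

# Secant iteration on the interface pressure.
p0, p1 = 20.0, 40.0
r0, r1 = residual(p0), residual(p1)
for _ in range(50):
    if abs(r1) < 1e-12:
        break
    p0, p1 = p1, p1 - r1 * (p1 - p0) / (r1 - r0)
    r0, r1 = r1, residual(p1)
print(round(p1, 6))
```

Because the surrogate laws are linear, the secant step lands on the coupled solution in one update; the paper's contribution is making comparably cheap steps (about one residual evaluation per timestep) work for the nonlinear 3D/ODE case.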

  18. A bidirectional coupling procedure applied to multiscale respiratory modeling

    Energy Technology Data Exchange (ETDEWEB)

    Kuprat, A.P., E-mail: andrew.kuprat@pnnl.gov [Fundamental and Computational Sciences Directorate, Pacific Northwest National Laboratory, Richland, WA (United States); Kabilan, S., E-mail: senthil.kabilan@pnnl.gov [Fundamental and Computational Sciences Directorate, Pacific Northwest National Laboratory, Richland, WA (United States); Carson, J.P., E-mail: james.carson@pnnl.gov [Fundamental and Computational Sciences Directorate, Pacific Northwest National Laboratory, Richland, WA (United States); Corley, R.A., E-mail: rick.corley@pnnl.gov [Fundamental and Computational Sciences Directorate, Pacific Northwest National Laboratory, Richland, WA (United States); Einstein, D.R., E-mail: daniel.einstein@pnnl.gov [Fundamental and Computational Sciences Directorate, Pacific Northwest National Laboratory, Richland, WA (United States)

    2013-07-01

    In this study, we present a novel multiscale computational framework for efficiently linking multiple lower-dimensional models describing the distal lung mechanics to imaging-based 3D computational fluid dynamics (CFD) models of the upper pulmonary airways in order to incorporate physiologically appropriate outlet boundary conditions. The framework is an extension of the modified Newton’s method with nonlinear Krylov accelerator developed by Carlson and Miller [1], Miller [2] and Scott and Fenves [3]. Our extensions include the retention of subspace information over multiple timesteps, and a special correction at the end of a timestep that allows for corrections to be accepted with verified low residual with as little as a single residual evaluation per timestep on average. In the case of a single residual evaluation per timestep, the method has zero additional computational cost compared to uncoupled or unidirectionally coupled simulations. We expect these enhancements to be generally applicable to other multiscale coupling applications where timestepping occurs. In addition we have developed a “pressure-drop” residual which allows for stable coupling of flows between a 3D incompressible CFD application and another (lower-dimensional) fluid system. We expect this residual to also be useful for coupling non-respiratory incompressible fluid applications, such as multiscale simulations involving blood flow. The lower-dimensional models that are considered in this study are sets of simple ordinary differential equations (ODEs) representing the compliant mechanics of symmetric human pulmonary airway trees. To validate the method, we compare the predictions of hybrid CFD–ODE models against an ODE-only model of pulmonary airflow in an idealized geometry. Subsequently, we couple multiple sets of ODEs describing the distal lung to an imaging-based human lung geometry. Boundary conditions in these models consist of atmospheric pressure at the mouth and intrapleural

  19. A bidirectional coupling procedure applied to multiscale respiratory modeling

    International Nuclear Information System (INIS)

    Kuprat, A.P.; Kabilan, S.; Carson, J.P.; Corley, R.A.; Einstein, D.R.

    2013-01-01

    In this study, we present a novel multiscale computational framework for efficiently linking multiple lower-dimensional models describing the distal lung mechanics to imaging-based 3D computational fluid dynamics (CFD) models of the upper pulmonary airways in order to incorporate physiologically appropriate outlet boundary conditions. The framework is an extension of the modified Newton’s method with nonlinear Krylov accelerator developed by Carlson and Miller [1], Miller [2] and Scott and Fenves [3]. Our extensions include the retention of subspace information over multiple timesteps, and a special correction at the end of a timestep that allows for corrections to be accepted with verified low residual with as little as a single residual evaluation per timestep on average. In the case of a single residual evaluation per timestep, the method has zero additional computational cost compared to uncoupled or unidirectionally coupled simulations. We expect these enhancements to be generally applicable to other multiscale coupling applications where timestepping occurs. In addition we have developed a “pressure-drop” residual which allows for stable coupling of flows between a 3D incompressible CFD application and another (lower-dimensional) fluid system. We expect this residual to also be useful for coupling non-respiratory incompressible fluid applications, such as multiscale simulations involving blood flow. The lower-dimensional models that are considered in this study are sets of simple ordinary differential equations (ODEs) representing the compliant mechanics of symmetric human pulmonary airway trees. To validate the method, we compare the predictions of hybrid CFD–ODE models against an ODE-only model of pulmonary airflow in an idealized geometry. Subsequently, we couple multiple sets of ODEs describing the distal lung to an imaging-based human lung geometry. Boundary conditions in these models consist of atmospheric pressure at the mouth and intrapleural

  20. Study on Finite Element Model Updating in Highway Bridge Static Loading Test Using Spatially-Distributed Optical Fiber Sensors.

    Science.gov (United States)

    Wu, Bitao; Lu, Huaxi; Chen, Bo; Gao, Zhicheng

    2017-07-19

    A finite element model updating method that combines dynamic and static long-gauge strain responses is proposed for highway bridge static loading tests. For this method, an objective function consisting of static long-gauge strains and the first-order modal macro-strain parameter (frequency) is established, wherein the local bending stiffness, density and boundary conditions of the structure are selected as the design variables. The relationship between the macro-strain and local element stiffness was studied first. It is revealed that the macro-strain is inversely proportional to the local stiffness covered by the long-gauge strain sensor. This corresponding relation is important for the modification of the local stiffness based on the macro-strain. The local and global parameters can be simultaneously updated. Then, a series of numerical simulations and experiments were conducted to verify the effectiveness of the proposed method. The results show that the static deformation, macro-strain and macro-strain modes can be predicted well by using the updated model.
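    A one-degree-of-freedom analogue of this combined static-modal objective can be sketched as follows. The mass, load and stiffness values are invented, and golden-section search stands in for whatever optimizer the authors used:

```python
import math

def objective(k, m, F, d_meas, f_meas, w=(1.0, 1.0)):
    """Combined static/modal residual for a 1-DOF analogue of the
    updating objective: static deflection d = F/k and natural frequency
    f = sqrt(k/m)/(2*pi) both depend on the stiffness k being updated."""
    d = F / k
    f = math.sqrt(k / m) / (2.0 * math.pi)
    return w[0] * ((d - d_meas) / d_meas) ** 2 + w[1] * ((f - f_meas) / f_meas) ** 2

# "Measurements" generated from an assumed true stiffness of 2.0e6 N/m
m_mass, F_load, k_true = 50.0, 1.0e3, 2.0e6
d_meas = F_load / k_true
f_meas = math.sqrt(k_true / m_mass) / (2.0 * math.pi)

# Golden-section search for the stiffness that minimizes the objective
lo, hi = 0.5e6, 5.0e6
phi = (math.sqrt(5.0) - 1.0) / 2.0
for _ in range(80):
    a = hi - phi * (hi - lo)
    b = lo + phi * (hi - lo)
    if objective(a, m_mass, F_load, d_meas, f_meas) < objective(b, m_mass, F_load, d_meas, f_meas):
        hi = b
    else:
        lo = a
k_updated = 0.5 * (lo + hi)
```

    Because both residual terms vanish only at the true stiffness, the search recovers it; in the paper the same idea plays out over many local stiffness, density and boundary-condition variables at once.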

  1. A simple but accurate procedure for solving the five-parameter model

    International Nuclear Information System (INIS)

    Mares, Oana; Paulescu, Marius; Badescu, Viorel

    2015-01-01

    Highlights: • A new procedure for extracting the parameters of the one-diode model is proposed. • Only the basic information listed in the datasheet of PV modules is required. • Results demonstrate a simple, robust and accurate procedure. - Abstract: The current–voltage characteristic of a photovoltaic module is typically evaluated by using a model based on the solar cell equivalent circuit. The complexity of the procedure applied for extracting the model parameters depends on the data available in the manufacturer’s datasheet. Since the datasheet is not detailed enough, simplified models have to be used in many cases. This paper proposes a new procedure for extracting the parameters of the one-diode model in standard test conditions, using only the basic data listed by all manufacturers in datasheets (short circuit current, open circuit voltage and maximum power point). The procedure is validated by using manufacturers’ data for six commercial crystalline silicon photovoltaic modules. Comparing the computed and measured current–voltage characteristics, the coefficient of determination is in the range 0.976–0.998. Thus, the proposed procedure represents a feasible tool for solving the five-parameter model applied to crystalline silicon photovoltaic modules. The procedure is described in detail, to guide potential users to derive similar models for other types of photovoltaic modules.
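    Once the five parameters are extracted, evaluating the current–voltage characteristic still requires solving the implicit single-diode equation at each voltage. A sketch using Newton's method is shown below; the parameter values are illustrative, not from any datasheet, and `a` lumps together the ideality factor, cell count and thermal voltage:

```python
import math

def diode_current(V, Iph, I0, a, Rs, Rsh, tol=1e-12, max_iter=100):
    """Solve the implicit single-diode (five-parameter) equation
        I = Iph - I0*(exp((V + I*Rs)/a) - 1) - (V + I*Rs)/Rsh
    for the current I at terminal voltage V using Newton's method."""
    I = Iph  # the photo-current is a convenient starting guess
    for _ in range(max_iter):
        e = math.exp((V + I * Rs) / a)
        g = Iph - I0 * (e - 1.0) - (V + I * Rs) / Rsh - I
        dg = -I0 * e * Rs / a - Rs / Rsh - 1.0
        step = g / dg
        I -= step
        if abs(step) < tol:
            break
    return I

# Illustrative (not datasheet) parameters for a ~36-cell module
params = dict(Iph=8.0, I0=1.0e-9, a=1.2, Rs=0.3, Rsh=300.0)
I_sc = diode_current(0.0, **params)   # close to the short-circuit current
I_op = diode_current(25.0, **params)  # a point on the knee of the I-V curve
```

    Sweeping `V` from zero to the open-circuit voltage reproduces the full characteristic, which is what the paper compares against measured curves.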

  2. [The emphases and basic procedures of genetic counseling in psychotherapeutic model].

    Science.gov (United States)

    Zhang, Yuan-Zhi; Zhong, Nanbert

    2006-11-01

    The emphases and basic procedures of genetic counseling in the psychotherapeutic model differ from those in older models. In the psychotherapeutic model, genetic counseling focuses not only on counselees' genetic disorders and birth defects, but also on their psychological problems. "Client-centered therapy", the approach termed by Carl Rogers, plays an important role in the genetic counseling process. The basic procedures of the psychotherapeutic model of genetic counseling include 7 steps: initial contact, introduction, agendas, inquiry of family history, presenting information, closing the session and follow-up.

  3. Computational model for dosimetric purposes in dental procedures

    International Nuclear Information System (INIS)

    Kawamoto, Renato H.; Campos, Tarcisio R.

    2013-01-01

    This study aims to develop a computational model of the oral region for dosimetric purposes, based on the computational tools SISCODES and MCNP-5, to predict deterministic effects and minimize stochastic effects caused by ionizing radiation in radiodiagnosis. Based on a set of digital information provided by computed tomography, a three-dimensional voxel model was created and its tissues represented. The model was exported to the MCNP code. In association with SISCODES, the Monte Carlo N-Particle Transport Code (MCNP-5) was used to simulate the statistical process of interaction of nuclear particles with human tissues. The study will serve as a source of data for dosimetric studies in the oral region, helping to predict deterministic effects and minimize the stochastic effects of ionizing radiation.

  4. Information matrix estimation procedures for cognitive diagnostic models.

    Science.gov (United States)

    Liu, Yanlou; Xin, Tao; Andersson, Björn; Tian, Wei

    2018-03-06

    Two new methods to estimate the asymptotic covariance matrix for marginal maximum likelihood estimation of cognitive diagnosis models (CDMs), the inverse of the observed information matrix and the sandwich-type estimator, are introduced. Unlike several previous covariance matrix estimators, the new methods take into account both the item and structural parameters. The relationships between the observed information matrix, the empirical cross-product information matrix, the sandwich-type covariance matrix and the two approaches proposed by de la Torre (2009, J. Educ. Behav. Stat., 34, 115) are discussed. Simulation results show that, for a correctly specified CDM and Q-matrix or with a slightly misspecified probability model, the observed information matrix and the sandwich-type covariance matrix exhibit good performance with respect to providing consistent standard errors of item parameter estimates. However, with substantial model misspecification only the sandwich-type covariance matrix exhibits robust performance. © 2018 The British Psychological Society.
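    For a toy one-parameter (Bernoulli) likelihood, the inverse observed information and the sandwich-type estimator can be written out directly. Under correct specification the two coincide, consistent with the abstract's finding that both perform well in that case; the data below are simulated:

```python
import numpy as np

# Simulated Bernoulli data; the MLE of p is the sample mean
rng = np.random.default_rng(0)
x = rng.binomial(1, 0.3, size=500).astype(float)
p_hat = x.mean()

# Per-observation score of the log-likelihood, evaluated at the MLE
scores = (x - p_hat) / (p_hat * (1.0 - p_hat))

# Observed information (negative Hessian of the log-likelihood)
obs_info = np.sum(x / p_hat**2 + (1.0 - x) / (1.0 - p_hat)**2)
# Empirical cross-product (outer-product-of-scores) information
cross_prod = np.sum(scores**2)

var_model = 1.0 / obs_info               # inverse observed information
var_sandwich = cross_prod / obs_info**2  # H^{-1} B H^{-1} in one dimension
```

    In a CDM the same construction applies with vectors and matrices of item and structural parameters; it is under misspecification that the "bread and meat" of the sandwich stop cancelling and the two estimators diverge.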

  5. Predictive market segmentation model: An application of logistic regression model and CHAID procedure

    Directory of Open Access Journals (Sweden)

    Soldić-Aleksić Jasna

    2009-01-01

    Market segmentation is one of the key concepts of modern marketing. The main goal of market segmentation is to create groups (segments) of customers that have similar characteristics, needs, wishes and/or similar behavior regarding the purchase of a concrete product/service. Companies can create a specific marketing plan for each of these segments and therefore gain short- or long-term competitive advantage on the market. Depending on the concrete marketing goal, different segmentation schemes and techniques may be applied. This paper presents a predictive market segmentation model based on the application of a logistic regression model and CHAID analysis. The logistic regression model was used for the selection of variables (from the initial pool of eleven variables) which are statistically significant for explaining the dependent variable. Selected variables were afterwards included in the CHAID procedure that generated the predictive market segmentation model. The model results are presented on a concrete empirical example in the following form: summary model results, CHAID tree, gain chart, index chart, risk and classification tables.
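    The CHAID stage of this two-step procedure rests on the Pearson chi-square test of independence for choosing splitting variables. A minimal sketch of that criterion, with invented toy data, might look like this:

```python
from collections import Counter

def chi_square(xs, ys):
    """Pearson chi-square statistic of independence between a
    categorical predictor and a categorical target -- the splitting
    criterion at the heart of CHAID."""
    n = len(xs)
    row, col = Counter(xs), Counter(ys)
    cells = Counter(zip(xs, ys))
    stat = 0.0
    for r in row:
        for c in col:
            expected = row[r] * col[c] / n
            stat += (cells.get((r, c), 0) - expected) ** 2 / expected
    return stat

# Invented toy data: 'region' is associated with purchase, 'color' is not
region = ['N'] * 6 + ['S'] * 6
color = ['r', 'b'] * 6
buy = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 1]

predictors = {'region': region, 'color': color}
best = max(predictors, key=lambda name: chi_square(predictors[name], buy))
```

    A full CHAID implementation also merges predictor categories and applies significance corrections before splitting recursively; this fragment only shows the statistic used to rank candidate splits.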

  6. An auto-calibration procedure for empirical solar radiation models

    NARCIS (Netherlands)

    Bojanowski, J.S.; Donatelli, Marcello; Skidmore, A.K.; Vrieling, A.

    2013-01-01

    Solar radiation data are an important input for estimating evapotranspiration and modelling crop growth. Direct measurement of solar radiation is now carried out in most European countries, but the network of measuring stations is too sparse for reliable interpolation of measured values. Instead of

  7. Constructing Self-Modeling Videos: Procedures and Technology

    Science.gov (United States)

    Collier-Meek, Melissa A.; Fallon, Lindsay M.; Johnson, Austin H.; Sanetti, Lisa M. H.; Delcampo, Marisa A.

    2012-01-01

    Although widely recommended, evidence-based interventions are not regularly utilized by school practitioners. Video self-modeling is an effective and efficient evidence-based intervention for a variety of student problem behaviors. However, like many other evidence-based interventions, it is not frequently used in schools. As video creation…

  8. TSCALE: A New Multidimensional Scaling Procedure Based on Tversky's Contrast Model.

    Science.gov (United States)

    DeSarbo, Wayne S.; And Others

    1992-01-01

    TSCALE, a multidimensional scaling procedure based on the contrast model of A. Tversky for asymmetric three-way, two-mode proximity data, is presented. TSCALE conceptualizes a latent dimensional structure to describe the judgmental stimuli. A Monte Carlo analysis and two consumer psychology applications illustrate the procedure. (SLD)

  9. Planned updates and refinements to the Central Valley hydrologic model with an emphasis on improving the simulation of land subsidence in the San Joaquin Valley

    Science.gov (United States)

    Faunt, Claudia C.; Hanson, Randall T.; Martin, Peter; Schmid, Wolfgang

    2011-01-01

    California's Central Valley has been one of the most productive agricultural regions in the world for more than 50 years. To better understand the groundwater availability in the valley, the U.S. Geological Survey (USGS) developed the Central Valley hydrologic model (CVHM). Because of recent water-level declines and renewed subsidence, the CVHM is being updated to better simulate the geohydrologic system. The CVHM updates and refinements can be grouped into two general categories: (1) model code changes and (2) data updates. The CVHM updates and refinements will require that the model be recalibrated. The updated CVHM will provide a detailed transient analysis of changes in groundwater availability and flow paths in relation to climatic variability, urbanization, stream flow, and changes in irrigated agricultural practices and crops. The updated CVHM is particularly focused on more accurately simulating the locations and magnitudes of land subsidence. The intent of the updated CVHM is to help scientists better understand the availability and sustainability of water resources and the interaction of groundwater levels with land subsidence.

  10. A single model procedure for estimating tank calibration equations

    International Nuclear Information System (INIS)

    Liebetrau, A.M.

    1997-10-01

    A fundamental component of any accountability system for nuclear materials is a tank calibration equation that relates the height of liquid in a tank to its volume. Tank volume calibration equations are typically determined from pairs of height and volume measurements taken in a series of calibration runs. After raw calibration data are standardized to a fixed set of reference conditions, the calibration equation is typically fit by dividing the data into several segments--corresponding to regions in the tank--and independently fitting the data for each segment. The estimates obtained for individual segments must then be combined to obtain an estimate of the entire calibration function. This process is tedious and time-consuming. Moreover, uncertainty estimates may be misleading because it is difficult to properly model run-to-run variability and between-segment correlation. In this paper, the authors describe a model whose parameters can be estimated simultaneously for all segments of the calibration data, thereby eliminating the need for segment-by-segment estimation. The essence of the proposed model is to define a suitable polynomial to fit to each segment and then extend its definition to the domain of the entire calibration function, so that it (the entire calibration function) can be expressed as the sum of these extended polynomials. The model provides defensible estimates of between-run variability and yields a proper treatment of between-segment correlations. A portable software package, called TANCS, has been developed to facilitate the acquisition, standardization, and analysis of tank calibration data. The TANCS package was used for the calculations in an example presented to illustrate the unified modeling approach described in this paper. With TANCS, a trial calibration function can be estimated and evaluated in a matter of minutes
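    The idea of extending each segment's polynomial over the whole height domain and estimating all segments in one pass can be sketched with a truncated-power (piecewise-linear) basis fitted by a single least-squares solve. The tank geometry and noise level below are invented and this is not the TANCS implementation:

```python
import numpy as np

def design_matrix(h, knots):
    """Truncated-power basis: intercept, a global slope, and one
    'extended' hinge term per segment boundary.  Summing these extended
    pieces yields a single calibration function whose parameters for
    all segments are estimated simultaneously."""
    h = np.asarray(h, dtype=float)
    cols = [np.ones_like(h), h]
    cols += [np.clip(h - k, 0.0, None) for k in knots]
    return np.stack(cols, axis=1)

# Synthetic height/volume data for a two-segment tank whose cross-section
# widens above h = 5 (all numbers invented for illustration)
rng = np.random.default_rng(1)
h = np.linspace(0.0, 10.0, 60)
true_vol = 2.0 * h + 1.5 * np.clip(h - 5.0, 0.0, None)
vol = true_vol + rng.normal(0.0, 0.01, h.size)

X = design_matrix(h, knots=[5.0])
coef, *_ = np.linalg.lstsq(X, vol, rcond=None)  # intercept, slope, hinge
```

    Because the hinge columns are zero below their knot, each fitted coefficient is a correction active only above its segment boundary, yet all coefficients, and hence all between-segment dependencies, come out of one joint fit rather than segment-by-segment estimation.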

  11. Non-invasive electrical and magnetic stimulation of the brain, spinal cord, roots and peripheral nerves: Basic principles and procedures for routine clinical and research application. An updated report from an I.F.C.N. Committee.

    Science.gov (United States)

    Rossini, P M; Burke, D; Chen, R; Cohen, L G; Daskalakis, Z; Di Iorio, R; Di Lazzaro, V; Ferreri, F; Fitzgerald, P B; George, M S; Hallett, M; Lefaucheur, J P; Langguth, B; Matsumoto, H; Miniussi, C; Nitsche, M A; Pascual-Leone, A; Paulus, W; Rossi, S; Rothwell, J C; Siebner, H R; Ugawa, Y; Walsh, V; Ziemann, U

    2015-06-01

    These guidelines provide an update of the previous IFCN report on "Non-invasive electrical and magnetic stimulation of the brain, spinal cord and roots: basic principles and procedures for routine clinical application" (Rossini et al., 1994). A new Committee, composed of international experts, some of whom were on the panel of the 1994 "Report", was selected to produce a current state-of-the-art review of non-invasive stimulation, both for clinical application and for research in neuroscience. Since 1994, the international scientific community has seen a rapid increase in the use of non-invasive brain stimulation for studying cognition, brain-behavior relationships and the pathophysiology of various neurologic and psychiatric disorders. New paradigms of stimulation and new techniques have been developed. Furthermore, a large number of studies and clinical trials have demonstrated potential therapeutic applications of non-invasive brain stimulation, especially for TMS. Recent guidelines can be found in the literature covering specific aspects of non-invasive brain stimulation, such as safety (Rossi et al., 2009), methodology (Groppa et al., 2012) and therapeutic applications (Lefaucheur et al., 2014). This updated review covers theoretical, physiological and practical aspects of non-invasive stimulation of the brain, spinal cord, nerve roots and peripheral nerves in the light of more recent knowledge, and includes some recent extensions and developments. Copyright © 2015 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.

  12. Development of procedures to stabilize chlorfenvinphos in model cattle dips

    Energy Technology Data Exchange (ETDEWEB)

    Caceres, Tanya [Pontificia Universidad Catolica del Ecuador, Quito (Ecuador). Dept. de Biologia; University of South Australia (Australia). Centre for Environmental Risk Assessment and Remediation (CERAR); E-mail: cactp001@students.unisa.edu.au; Pastor, Yolanda; Merino, Ramiro [Comision Ecuatoriana de Energia Atomica, Quito (Ecuador). Dept. de Ecotoxicologia

    2007-07-01

    The environmental fate and dissipation of the acaricide chlorfenvinphos was studied in water and sediment in model cattle dips with and without monthly recharge. Chlorfenvinphos concentration decreased with time in both model dips, and the monthly recharge at 10% of the initial concentration was insufficient to maintain a concentration effective for tick control. Volatilization was the principal factor influencing the dissipation of the pesticide. The sediment-bound residues increased with time. Mineralization of {sup 14}C-chlorfenvinphos due to microbial activity showed that {sup 14}CO{sub 2} production increased with time in biometer flasks with different amounts of sediment. 2,4-Dichloroacetophenone and 2,4-dichlorobenzaldehyde were identified as degradation products. Isomerization of chlorfenvinphos from the Z to the E isomer was influenced by sunlight and affected the efficiency of the pesticide, as the Z isomer is more active against ticks than the E isomer. (author)

  13. Comparison of Estimation Procedures for Multilevel AR(1) Models

    Directory of Open Access Journals (Sweden)

    Tanja Krone

    2016-04-01

    To estimate a time series model for multiple individuals, a multilevel model may be used. In this paper we compare two estimation methods for the autocorrelation in multilevel AR(1) models, namely maximum likelihood estimation (MLE) and Bayesian Markov chain Monte Carlo. Furthermore, we examine the difference between modeling fixed and random individual parameters. To this end, we perform a simulation study with a fully crossed design, in which we vary the length of the time series (10 or 25), the number of individuals per sample (10 or 25), the mean of the autocorrelation (-0.6 to 0.6 inclusive, in steps of 0.3) and the standard deviation of the autocorrelation (0.25 or 0.40). We found that the random estimators of the population autocorrelation show less bias and higher power, compared to the fixed estimators. As expected, the random estimators profit strongly from a higher number of individuals, while this effect is small for the fixed estimators. The fixed estimators profit slightly more from a higher number of time points than the random estimators. When possible, random estimation is preferred to fixed estimation. The difference between MLE and Bayesian estimation is nearly negligible. The Bayesian estimation shows a smaller bias, but MLE shows a smaller variability (i.e., standard deviation of the parameter estimates). Finally, better results are found for a higher number of individuals and time points, and for a lower individual variability of the autocorrelation. The effect of the size of the autocorrelation differs between outcome measures.
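    A minimal simulation of this multilevel AR(1) setting, paired with the "fixed" per-individual lag-1 least-squares estimator, might look like the sketch below. The sample sizes and seed are illustrative (and the series are longer than the study's 10-25 time points, for stability):

```python
import numpy as np

rng = np.random.default_rng(42)
n_individuals, n_time = 25, 200
mu_phi, sd_phi = 0.3, 0.1      # population mean / SD of the autocorrelation

phi_hat = []
for _ in range(n_individuals):
    # Draw this individual's autocorrelation from the population distribution
    phi = float(np.clip(rng.normal(mu_phi, sd_phi), -0.95, 0.95))
    y = np.zeros(n_time)
    for t in range(1, n_time):
        y[t] = phi * y[t - 1] + rng.normal()
    # "Fixed" per-individual estimate: lag-1 least squares
    phi_hat.append(float(np.dot(y[1:], y[:-1]) / np.dot(y[:-1], y[:-1])))

pop_estimate = float(np.mean(phi_hat))
```

    A random-effects (multilevel) estimator would instead pool information across individuals when estimating each phi, which is what gives it the lower bias and higher power the abstract reports at small sample sizes.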

  14. A baseline-free procedure for transformation models under interval censorship.

    Science.gov (United States)

    Gu, Ming Gao; Sun, Liuquan; Zuo, Guoxin

    2005-12-01

    An important property of the Cox regression model is that the estimation of regression parameters using the partial likelihood procedure does not depend on its baseline survival function. We call such a procedure baseline-free. Using marginal likelihood, we show that a baseline-free procedure can be derived for a class of general transformation models under the interval censoring framework. The baseline-free procedure results in a simplified and stable computation algorithm for some complicated and important semiparametric models, such as frailty models and heteroscedastic hazard/rank regression models, where the estimation procedures available so far involve estimation of the infinite-dimensional baseline function. A detailed computational algorithm using Markov chain Monte Carlo stochastic approximation is presented. The proposed procedure is demonstrated through extensive simulation studies, showing the validity of asymptotic consistency and normality. We also illustrate the procedure with a real data set from a study of breast cancer. A heuristic argument showing that the score function is a mean-zero martingale is provided.

  15. Land Boundary Conditions for the Goddard Earth Observing System Model Version 5 (GEOS-5) Climate Modeling System: Recent Updates and Data File Descriptions

    Science.gov (United States)

    Mahanama, Sarith P.; Koster, Randal D.; Walker, Gregory K.; Takacs, Lawrence L.; Reichle, Rolf H.; De Lannoy, Gabrielle; Liu, Qing; Zhao, Bin; Suarez, Max J.

    2015-01-01

    The Earth's land surface boundary conditions in the Goddard Earth Observing System version 5 (GEOS-5) modeling system were updated using recent high spatial and temporal resolution global data products. The updates include: (i) construction of a global 10-arcsec land-ocean-lakes-ice mask; (ii) incorporation of a 10-arcsec Globcover 2009 land cover dataset; (iii) implementation of Level 12 Pfafstetter hydrologic catchments; (iv) use of hybridized SRTM global topography data; (v) construction of the HWSDv1.21-STATSGO2 merged global 30-arcsec soil mineral and carbon data in conjunction with a highly refined soil classification system; (vi) production of diffuse visible and near-infrared 8-day MODIS albedo climatologies at 30-arcsec from the period 2001-2011; and (vii) production of the GEOLAND2 and MODIS merged 8-day LAI climatology at 30-arcsec for GEOS-5. The global data sets were preprocessed and used to construct global raster data files for the software (mkCatchParam) that computes parameters on catchment-tiles for various atmospheric grids. The updates also include a few bug fixes in mkCatchParam, as well as changes (improvements in algorithms, etc.) to mkCatchParam that allow it to produce tile-space parameters efficiently for high-resolution AGCM grids. The update process also includes the construction of data files describing the vegetation type fractions, soil background albedo, nitrogen deposition and mean annual 2 m air temperature to be used with the future Catchment CN model, and the global stream channel network to be used with the future global runoff routing model. This report provides detailed descriptions of the data production process and the data file format of each updated data set.

  16. Development of procedures to stabilize chlorfenvinphos in model cattle dips

    International Nuclear Information System (INIS)

    Pastor, Y.; Caceres, T.; Merino, R.; Villamar, P.; Castro, R.

    1997-01-01

    The environmental fate and dissipation of the chlorfenvinphos acaricide was studied in water and sediment in model cattle dips with recharge, without recharge and with added stabilizers. Chlorfenvinphos concentration decreased with time in all of them, and the monthly recharge at 10% of the initial concentration was insufficient to maintain a concentration that would be effective for tick control. However, the loss of pesticide was least in the model dip with added phosphate buffer as stabilizer. Volatilization was the principal factor that influenced the dissipation of the pesticide. The sediment-bound residues increased with time. Mineralization of ¹⁴C-chlorfenvinphos due to microbial activity showed that ¹⁴CO₂ production increased with time in biometer flasks with different amounts of sediment. 2,4-Dichloroacetophenone and 2,4-dichlorobenzaldehyde were identified as degradation products. Isomerization of chlorfenvinphos from the Z to the E isomer was influenced by sunlight. Chlorfenvinphos was stable in aqueous solution for 14 days at pH 4 to 9. Leaching tests demonstrated that the pesticide was not a potential pollutant of ground water. (author)

  17. Comments on the Updated Tetrapartite Pallium Model in the Mouse and Chick, Featuring a Homologous Claustro-Insular Complex.

    Science.gov (United States)

    Puelles, Luis

    2017-01-01

    This essay reviews step by step the conceptual changes of the updated tetrapartite pallium model from its tripartite and early tetrapartite antecedents. The crucial observations in mouse material are explained first in the context of assumptions, tentative interpretations, and literature data. Errors and the solutions offered to resolve them are made explicit. Next, attention is centered on the lateral pallium sector of the updated model, whose definition is novel in incorporating a claustro-insular complex distinct from both olfactory centers (ventral pallium) and the isocortex (dorsal pallium). The general validity of the model is postulated at least for tetrapods. Genoarchitectonic studies performed to check the presence of a claustro-insular field homolog in the avian brain are reviewed next. These studies have indeed revealed the existence of such a complex in the avian mesopallium (though stratified outside-in rather than inside-out as in mammals), and there are indications that the same pattern may be found in reptiles as well. Peculiar pallio-pallial tangential migratory phenomena are apparently shared as well between mice and chicks. The issue of whether the avian mesopallium has connections that are similar to the known connections of the mammalian claustro-insular complex is considered next. Accrued data are consistent with similar connections for the avian insula homolog, but they are judged to be insufficient to reach definitive conclusions about the avian claustrum. An aside discusses that conserved connections are not a necessary feature of field-homologous neural centers. Finally, the present scenario on the evolution of the pallium of sauropsids and mammals is briefly visited, as highlighted by the updated tetrapartite model and present results. © 2017 S. Karger AG, Basel.

  18. Updating the Cornell Net Carbohydrate and Protein System feed library and analyzing model sensitivity to feed inputs.

    Science.gov (United States)

    Higgs, R J; Chase, L E; Ross, D A; Van Amburgh, M E

    2015-09-01

    The Cornell Net Carbohydrate and Protein System (CNCPS) is a nutritional model that evaluates the environmental and nutritional resources available in an animal production system and enables the formulation of diets that closely match the predicted animal requirements. The model includes a library of approximately 800 different ingredients that provide the platform for describing the chemical composition of the diet to be formulated. Each feed in the feed library was evaluated against data from 2 commercial laboratories and updated when required to enable more precise predictions of dietary energy and protein supply. A multistep approach was developed to predict uncertain values using linear regression, matrix regression, and optimization. The approach provided an efficient and repeatable way of evaluating and refining the composition of a large number of different feeds against commercially generated data similar to that used by CNCPS users on a daily basis. The protein A fraction in the CNCPS, formerly classified as nonprotein nitrogen, was reclassified to ammonia for ease and availability of analysis and to provide a better prediction of the contribution of metabolizable protein from free AA and small peptides. Amino acid profiles were updated using contemporary data sets and now represent the profile of AA in the whole feed rather than the insoluble residue. Model sensitivity to variation in feed library inputs was investigated using Monte Carlo simulation. Results showed the prediction of metabolizable energy was most sensitive to variation in feed chemistry and fractionation, whereas predictions of metabolizable protein were most sensitive to variation in digestion rates. Regular laboratory analysis of samples taken on-farm remains the recommended approach to characterizing the chemical components of feeds in a ration. 
However, updates to the CNCPS feed library provide a database of ingredients that are consistent with current feed chemistry information and
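The Monte Carlo sensitivity analysis described above can be sketched as follows; the predictor function, input means and assay standard deviations here are hypothetical stand-ins for illustration, not the CNCPS supply equations or feed-library values:

```python
import random
import statistics

def predict_me(ndf, cp, starch):
    """Hypothetical, simplified metabolizable-energy predictor (Mcal/kg).
    Stands in for the full CNCPS energy-supply calculation."""
    return 3.2 - 0.015 * ndf + 0.01 * cp + 0.008 * starch

def sensitivity(n=10000, seed=42):
    """Perturb each feed-chemistry input with an assumed lab uncertainty
    and report the mean and standard deviation of the predicted output."""
    rng = random.Random(seed)
    outputs = []
    for _ in range(n):
        ndf = rng.gauss(40.0, 2.0)      # NDF, % DM, +/- assumed assay sd
        cp = rng.gauss(17.0, 0.8)       # crude protein, % DM
        starch = rng.gauss(25.0, 1.5)   # starch, % DM
        outputs.append(predict_me(ndf, cp, starch))
    return statistics.mean(outputs), statistics.stdev(outputs)

mean_me, sd_me = sensitivity()
```

The output standard deviation indicates how sensitive the prediction is to variation in the assumed feed chemistry inputs.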

  19. «Soft Power»: the Updated Theoretical Concept and Russian Assembly Model

    Directory of Open Access Journals (Sweden)

    Владимир Сергеевич Изотов

    2011-12-01

Full Text Available The article addresses critically important informational and ideological aspects of Russia's foreign policy. Its goal is to revise and specify the notion of "soft power" in the context of the rapidly changing space of global politics. In recent years Russia's international isolation, including in the informational and ideological sphere, has been increasing. The way to overcome this negative trend is to modernize foreign policy strategy by updating operational tools and ideological accents. It is becoming obvious that real foreign policy success in the global world system is achieved through the use of soft power. The author seeks to specify and conceptualize the phenomenon of Russia's soft power as a purposeful external ideology facing the urgent need of updating.

  20. SHINE Virtual Machine Model for In-flight Updates of Critical Mission Software

    Science.gov (United States)

    Plesea, Lucian

    2008-01-01

    This software is a new target for the Spacecraft Health Inference Engine (SHINE) knowledge base that compiles a knowledge base to a language called Tiny C - an interpreted version of C that can be embedded on flight processors. This new target allows portions of a running SHINE knowledge base to be updated on a "live" system without needing to halt and restart the containing SHINE application. This enhancement will directly provide this capability without the risk of software validation problems and can also enable complete integration of BEAM and SHINE into a single application. This innovation enables SHINE deployment in domains where autonomy is used during flight-critical applications that require updates. This capability eliminates the need for halting the application and performing potentially serious total system uploads before resuming the application with the loss of system integrity. This software enables additional applications at JPL (microsensors, embedded mission hardware) and increases the marketability of these applications outside of JPL.
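The live-update idea can be illustrated with a minimal sketch; the rule name, telemetry field and thresholds are hypothetical, and a Python dictionary of rule functions stands in for the compiled Tiny C knowledge base:

```python
# Sketch: updating one rule in a live rule base without restarting the
# host application. SHINE compiles rules to an embedded interpreted
# language (Tiny C); here a dict of callables plays that role.
rules = {
    "battery_low": lambda telemetry: telemetry["voltage"] < 24.0,
}

def evaluate(telemetry):
    """Run every currently installed rule against one telemetry frame."""
    return {name: rule(telemetry) for name, rule in rules.items()}

frame = {"voltage": 24.5}
before = evaluate(frame)  # evaluated with the old threshold

# "In-flight" update: swap the rule body in place; the evaluation loop
# keeps running and simply picks up the new version on the next cycle.
rules["battery_low"] = lambda telemetry: telemetry["voltage"] < 25.0
after = evaluate(frame)   # evaluated with the new threshold
```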

  1. New Procedure to Develop Lumped Kinetic Models for Heavy Fuel Oil Combustion

    KAUST Repository

    Han, Yunqing

    2016-09-20

A new procedure to develop accurate lumped kinetic models for complex fuels is proposed and applied to experimental data for heavy fuel oil measured by thermogravimetry. The new procedure is based on pseudocomponents representing different reaction stages, which are determined by a systematic optimization process to ensure that the different reaction stages are separated with the highest accuracy. The procedure was implemented and the model prediction compared against that from a conventional method, yielding significantly improved agreement with the experimental data. © 2016 American Chemical Society.
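A pseudocomponent model of this kind can be sketched as a sum of independent first-order Arrhenius reactions; the component fractions and kinetic constants below are illustrative assumptions, not fitted values from the paper:

```python
import math

def mass_fraction(t, temp_K, components):
    """Remaining mass fraction of a fuel modelled as independent
    first-order pseudocomponents at constant temperature.
    Each component is (initial fraction, A [1/s], Ea [J/mol])."""
    R = 8.314  # universal gas constant, J/(mol K)
    total = 0.0
    for frac, A, Ea in components:
        k = A * math.exp(-Ea / (R * temp_K))  # Arrhenius rate constant
        total += frac * math.exp(-k * t)      # first-order decay
    return total

# Two hypothetical pseudocomponents standing in for distinct
# reaction stages of a heavy fuel oil.
comps = [(0.6, 1e5, 9e4), (0.4, 1e8, 1.4e5)]
```

In the full procedure, the fractions and kinetic constants would be obtained by optimizing the fit of such a model to the thermogravimetric curve.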

  2. Procedure for Application of Software Reliability Growth Models to NPP PSA

    International Nuclear Information System (INIS)

    Son, Han Seong; Kang, Hyun Gook; Chang, Seung Cheol

    2009-01-01

As the use of software increases at nuclear power plants (NPPs), the necessity of including software reliability and/or safety in the NPP Probabilistic Safety Assessment (PSA) rises. This work proposes a procedure for applying software reliability growth models (RGMs), which are the most widely used means of quantifying software reliability, to NPP PSA. Through the proposed procedure, it can be determined whether a software reliability growth model can be applied to the NPP PSA before its real application. The procedure proposed in this work is expected to be very helpful for incorporating software into NPP PSA.
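As one concrete example of a reliability growth model, the Goel-Okumoto model (a common SRGM, not necessarily the one the authors apply) can be sketched as follows; the parameters a (total expected faults) and b (fault detection rate) are illustrative:

```python
import math

def expected_failures(t, a, b):
    """Goel-Okumoto mean value function: expected cumulative number of
    software failures observed by testing time t."""
    return a * (1.0 - math.exp(-b * t))

def reliability(x, t, a, b):
    """Probability of no failure in the interval (t, t+x], given that
    testing (and fault removal) has continued up to time t."""
    return math.exp(-(expected_failures(t + x, a, b) - expected_failures(t, a, b)))
```

As testing time t grows, fewer faults remain, so the predicted reliability over a fixed mission window x increases; that trend is what makes such models candidates for PSA input.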

  3. 78 FR 20148 - Reporting Procedure for Mathematical Models Selected To Predict Heated Effluent Dispersion in...

    Science.gov (United States)

    2013-04-03

    ... mathematical modeling methods used in predicting the dispersion of heated effluent in natural water bodies. The... COMMISSION Reporting Procedure for Mathematical Models Selected To Predict Heated Effluent Dispersion in... Mathematical Models Selected to Predict Heated Effluent Dispersion in Natural Water Bodies.'' The guide is...

  4. In the absence of physical practice, observation and imagery do not result in updating of internal models for aiming.

    Science.gov (United States)

    Ong, Nicole T; Larssen, Beverley C; Hodges, Nicola J

    2012-04-01

    The presence of after-effects in adaptation tasks implies that an existing internal model has been updated. Previously, we showed that although observers adapted to a visuomotor perturbation, they did not show after-effects. In this experiment, we tested 2 further observer groups and an actor group. Observers were now actively engaged in watching (encouraged through imagery and movement estimation), with one group physically practising for 25% of the trials (mixed). Participants estimated the hand movements that produced various cursor trajectories and/or their own hand movement from a preceding trial. These trials also allowed us to assess the development of explicit knowledge as a function of the three practice conditions. The pure observation group did not show after-effects, whereas the actor and mixed groups did. The pure observation group improved their ability to estimate hand movement of the video model. Although the actor and mixed groups improved in actual reaching accuracy, they did not improve in explicit estimation. The mixed group was more accurate in reaching during adaptation and showed larger after-effects than the actors. We suggest that observation encourages an explicit mode of learning, enabling performance benefits without corresponding changes to an internal model of the mapping between output and sensory input. However, some physical practice interspersed with observation can change the manner with which learning is achieved, encouraging implicit learning and the updating of an existing internal model.

  5. Accuracy of axial depth of cut in micromilling operations - Simplified procedure and uncertainty model

    DEFF Research Database (Denmark)

    Bissacco, Giuliano

    2005-01-01

    In order to maintain an optimum cutting speed, the reduction of mill diameters requires machine tools with high rotational speed capabilities. A solution to update existing machine tools is the use of high speed attached spindles. Major drawbacks of these attachments are the high thermal expansion...... and their rapid warming and cooling, which prevent the achievement of a steady state. Several other factors, independent on the tool-workpiece interaction, influence the machining accuracy. The cutting parameter most heavily affected is the axial depth of cut which is the most critical when using micro end mills......, due to the easy breakage particularly when milling on hard materials [1]. Typical values for the errors on the control of the axial depth of cut are in the order of 50 microns, while the aimed depth of cut can be as low as 5 microns. The author has developed a machining procedure for optimal control...

  6. Animal models of autism with a particular focus on the neural basis of changes in social behaviour: an update article.

    Science.gov (United States)

    Olexová, Lucia; Talarovičová, Alžbeta; Lewis-Evans, Ben; Borbélyová, Veronika; Kršková, Lucia

    2012-12-01

Research on autism has been gaining more and more attention. However, its aetiology is not entirely known, and several factors are thought to contribute to the development of this neurodevelopmental disorder. These potential contributing factors range from genetic heritability to environmental effects. A significant number of reviews have already been published on different aspects of autism research, including the use of animal models to help expand current knowledge about its aetiology. However, the diverse range of symptoms and possible causes of autism have resulted in an equally wide variety of animal models of autism. In this update article we focus only on animal models with neurobehavioural characteristics of the social deficit related to autism, and present an overview of animal models with alterations in brain regions, neurotransmitters, or hormones that are involved in a decrease in sociability. Copyright © 2012 Elsevier Ireland Ltd and the Japan Neuroscience Society. All rights reserved.

  7. A Procedure for Building Product Models in Intelligent Agent-based Operations Management

    DEFF Research Database (Denmark)

    Hvam, Lars; Riis, Jesper; Malis, Martin

    2003-01-01

    This article presents a procedure for building product models to support the specification processes dealing with sales, design of product variants and production preparation. The procedure includes, as the first phase, an analysis and redesign of the business processes that are to be supported...... by product models. The next phase includes an analysis of the product assortment, and the set up of a so-called product master. Finally the product model is designed and implemented by using object oriented modelling. The procedure is developed in order to ensure that the product models constructed are fit...... for the business processes they support, and properly structured and documented in order to facilitate the maintenance and further development of the systems. The research has been carried out at the Centre for Industrialisation of Engineering, Department of Manufacturing Engineering, Technical University...

  8. An analysis of leading, lagging, and coincident economic indicators in the United States and its relationship to the volume of plastic surgery procedures performed: an update for 2012.

    Science.gov (United States)

    Paik, Angie M; Hoppe, Ian C; Pastor, Craig J

    2013-09-01

    As physician compensation and reimbursement tightens throughout the United States, it is important for physicians to be aware of the influence that the economic environment has on the unique medical field of plastic and reconstructive surgery. This study will attempt to determine a relationship between the volume of different plastic surgical procedures and various economic indicators. Information from the American Society of Plastic Surgeons' annual reports on plastic surgery statistics available on the Internet (http://www.plasticsurgery.org/Media/Statistics.html) was collected from the years 2000 through 2011. Yearly economic indicators were collected from readily available Web sites. In terms of the total number of plastic surgery procedures performed, there was a significant positive relationship with GDP, GDP per capita, personal income, consumer price index (CPI) (all), and CPI (medical), and a significant negative relationship with the issuance of new home permits. There was a significant positive relationship with total cosmetic procedures and GDP, GDP per capita, personal income, CPI (all), and CPI (medical), and a significant negative relationship with the issuance of new home permits. There was a significant positive relationship between cosmetic surgical procedures and the issuance of new home permits and the average prime rate charged by banks. There was a significant positive relationship with cosmetic minimally invasive procedures and GDP, GDP per capita, personal income, CPI (all), and CPI (medical), and a significant negative relationship with the issuance of new home permits. There was a significant negative relationship between reconstructive procedures and GDP, GDP per capita, personal income, CPI (all), and CPI (medical). Cosmetic minimally invasive procedures involve less downtime, are generally less expensive than surgical options, and are widely available, making it easier for patients to decide on them quickly during good economic times

  9. An initialization procedure for assimilating geostationary satellite data into numerical weather prediction models

    Science.gov (United States)

    Gal-Chen, T.; Schmidt, B.; Uccellini, L. W.

    1985-01-01

An attempt was made to offset the limitations of GEO satellites in supplying timely initialization data for numerical weather prediction (NWP) models. The NWP model considered combined an isentropic representation of the free atmosphere with a sigma-coordinate model for the lower 200 mb. A flux form of the predictive equations described vertical transport interactions at the boundary of the two model domains, thereby accounting for the poor vertical temperature and wind field resolution of GEO satellite data. A variational analysis approach was employed to insert low-resolution satellite-sensed temperature data at varying rates. The model's vertical resolution was limited to that available from the satellite. Test simulations demonstrated that accuracy increases with the frequency of data updates, e.g., every 0.5-1 hr. The tests also showed that extensive cloud cover negates the capabilities of IR sensors and that microwave sensors will be needed for temperature estimation at the 500-1000 mb levels.
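The benefit of frequent data insertion can be illustrated with a generic Newtonian-relaxation (nudging) step; this is a simplified stand-in for intuition only, not the authors' variational analysis scheme:

```python
def nudge(model_T, obs_T, weight):
    """One relaxation step: blend observed (e.g. satellite-sensed)
    temperatures into the model state. 'weight' in [0, 1] grows with
    the frequency and confidence of the incoming data, so frequent
    updates pull the model state closer to the observations."""
    return [m + weight * (o - m) for m, o in zip(model_T, obs_T)]
```

For example, a model layer at 280 K nudged toward a 284 K observation with weight 0.5 moves to 282 K; repeating the step at each update cycle keeps the model from drifting between observation times.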

  10. An Updated Subsequent Injury Categorisation Model (SIC-2.0): Data-Driven Categorisation of Subsequent Injuries in Sport.

    Science.gov (United States)

    Toohey, Liam A; Drew, Michael K; Fortington, Lauren V; Finch, Caroline F; Cook, Jill L

    2018-03-03

Accounting for subsequent injuries is critical for sports injury epidemiology. The subsequent injury categorisation (SIC-1.0) model was developed to create a framework for the accurate categorisation of subsequent injuries, but its operationalisation has been challenging. The objective of this study was to update the subsequent injury categorisation model (SIC-1.0 to SIC-2.0) to improve its utility and application to sports injury datasets, and to test its applicability to a sports injury dataset. The SIC-1.0 model was expanded to include two levels of categorisation describing how previous injuries relate to subsequent events. A data-driven classification level was established containing eight discrete injury categories identifiable without clinical input. A sequential classification level, which sub-categorises the data-driven categories according to their level of clinical relatedness, contains 16 distinct subsequent injury types. Manual and automated SIC-2.0 categorisation were applied to a prospective injury dataset collected for elite rugby sevens players over a 2-year period, and absolute agreement between the two coding methods was assessed. An automated script for data-driven categorisation and a flowchart for manual coding were developed for the SIC-2.0 model. The SIC-2.0 model was applied to 246 injuries sustained by 55 players (median four injuries, range 1-12), 46 (83.6%) of whom experienced more than one injury. The majority of subsequent injuries (78.7%) were sustained at a different site and were of a different nature. Absolute agreement between the manual coding and the automated statistical script's category allocation was 100%. The updated SIC-2.0 model provides a simple flowchart and an automated electronic script to allow an accurate and efficient method of categorising subsequent injury data in sport.
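The data-driven level, which needs no clinical input, can be sketched as a comparison of injury site and nature between a previous and a subsequent injury; the category labels below are illustrative, not the official SIC-2.0 codes:

```python
def categorise(prev, new):
    """Data-driven subsequent-injury categorisation sketch: compare the
    subsequent injury to a previous one on recorded site and nature.
    The labels are illustrative stand-ins for the SIC-2.0 categories."""
    same_site = prev["site"] == new["site"]
    same_nature = prev["nature"] == new["nature"]
    if same_site and same_nature:
        return "same site, same nature"
    if same_site:
        return "same site, different nature"
    if same_nature:
        return "different site, same nature"
    return "different site, different nature"
```

Because the rule uses only recorded fields, it can be run as an automated script over a whole injury dataset, which is how the 100% agreement check against manual coding becomes feasible.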

  11. New format for storage of voxel phantom, and exposure computer model EGS4/MAX to EGSnrc/MASH update

    International Nuclear Information System (INIS)

    Leal Neto, Viriato; Vieira, Jose W.; Lima, Fernando R.A.; Lima, Lindeval F.

    2011-01-01

In order to estimate the dose absorbed by those subjected to ionizing radiation, it is necessary to perform simulations using an exposure computational model (ECM). Such models consist essentially of an anthropomorphic phantom and a Monte Carlo (MC) code. Coupling a voxel phantom with an MC code is a complex process and often amounts to solving a case-specific problem. This is partly due to the way the voxel phantom is stored on a computer: a substantial amount of disk space is usually required to store a static representation of the human body, and a significant amount of memory to read and process it during a simulation. This paper presents a new way to store the irradiated-geometry data (similar to the repeated-structures technique used in the geometry of the MCNP code), reducing by 52% the disk space required for storage compared to the previous format used by the Grupo de Dosimetria Numerica (GDN/CNPq). On the other hand, research in numerical dosimetry leads to constant improvement in the resolution of voxel phantoms, and thus to a new requirement, namely, the development of new dose estimates. Therefore, this work also updates the MAX (Male Adult voXel)/EGS4 ECM to the MASH (Male Adult meSH)/EGSnrc ECM and presents instances of dosimetric evaluations using the new ECM. Besides the update of the phantom and the MC code, the source algorithm has also been improved relative to previous publications. (author)
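The storage saving from exploiting repeated structure can be illustrated with simple run-length encoding of a flattened voxel phantom; this is a generic sketch of the idea, not the GDN storage format itself:

```python
def rle_encode(voxels):
    """Run-length encode a flattened voxel phantom: long runs of the
    same organ/tissue ID collapse to (ID, count) pairs, which is why
    repeated-structure storage shrinks anatomical voxel data."""
    runs = []
    for v in voxels:
        if runs and runs[-1][0] == v:
            runs[-1][1] += 1          # extend the current run
        else:
            runs.append([v, 1])       # start a new run
    return runs

def rle_decode(runs):
    """Reconstruct the flat voxel list for use during simulation."""
    out = []
    for v, n in runs:
        out.extend([v] * n)
    return out
```

A phantom dominated by large homogeneous regions (air, soft tissue) compresses well, while decoding remains a cheap linear pass.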

  12. Bayesian updating of reliability of civil infrastructure facilities based on condition-state data and fault-tree model

    International Nuclear Information System (INIS)

    Ching Jianye; Leu, S.-S.

    2009-01-01

This paper considers a difficult but practical circumstance of civil infrastructure management: deterioration/failure data for the infrastructure system are absent, and only condition-state data for its components are available. The goal is to develop a framework for estimating time-varying reliabilities of civil infrastructure facilities under such a circumstance. A novel method of analyzing time-varying condition-state data that report only the operational/non-operational status of the components is proposed to update the reliabilities of civil infrastructure facilities. The proposed method assumes that degradation arrivals can be modeled as a Poisson process with unknown time-varying arrival rate and damage impact, and that the target system can be represented as a fault-tree model. To accommodate large uncertainties, a Bayesian algorithm is proposed, and the reliability of the infrastructure system can be quickly updated based on the condition-state data. Use of the new method is demonstrated with a real-world example of a hydraulic spillway gate system.
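The Poisson arrival-rate updating at the core of such a method can be sketched with a conjugate Gamma prior; the fault-tree propagation step is omitted here, and the prior values in the comments are illustrative:

```python
def update_rate(alpha, beta, events, exposure):
    """Conjugate Bayesian update for a Poisson degradation-arrival rate:
    a Gamma(alpha, beta) prior combined with 'events' observed arrivals
    over 'exposure' time units yields a
    Gamma(alpha + events, beta + exposure) posterior."""
    return alpha + events, beta + exposure

def posterior_mean(alpha, beta):
    """Point estimate of the arrival rate from a Gamma(alpha, beta)."""
    return alpha / beta

# Example: prior Gamma(2, 10), then 3 degradation signals in 20 periods.
a_post, b_post = update_rate(2.0, 10.0, 3, 20.0)
```

In the full method, the updated component rates would then be propagated through the fault-tree model to obtain the system-level reliability.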

  13. New format for storage of voxel phantom, and exposure computer model EGS4/MAX to EGSnrc/MASH update

    Energy Technology Data Exchange (ETDEWEB)

    Leal Neto, Viriato [Departamento de Energia Nuclear (DEN). Universidade Federal de Pernambuco (UFPE), Recife, PE (Brazil); Instituto Federal de Educacao, Ciencia e Tecnologia de Pernambuco. Recife, PE (Brazil); Vieira, Jose W. [Escola Politecnica de Pernambuco. UPE, Recife, PE (Brazil); Lima, Fernando R.A., E-mail: falima@cnen.gov.br [Centro Regional de Ciencias Nucleares (CRCN/NE-CNEN-PE), Recife, PE (Brazil); Lima, Lindeval F., E-mail: lindeval@dmat.ufrr.br [Departamento de Matematica. Universidade Federal de Roraima (UFRR), Boa Vista, RR (Brazil)

    2011-07-01

In order to estimate the dose absorbed by those subjected to ionizing radiation, it is necessary to perform simulations using an exposure computational model (ECM). Such models consist essentially of an anthropomorphic phantom and a Monte Carlo (MC) code. Coupling a voxel phantom with an MC code is a complex process and often amounts to solving a case-specific problem. This is partly due to the way the voxel phantom is stored on a computer: a substantial amount of disk space is usually required to store a static representation of the human body, and a significant amount of memory to read and process it during a simulation. This paper presents a new way to store the irradiated-geometry data (similar to the repeated-structures technique used in the geometry of the MCNP code), reducing by 52% the disk space required for storage compared to the previous format used by the Grupo de Dosimetria Numerica (GDN/CNPq). On the other hand, research in numerical dosimetry leads to constant improvement in the resolution of voxel phantoms, and thus to a new requirement, namely, the development of new dose estimates. Therefore, this work also updates the MAX (Male Adult voXel)/EGS4 ECM to the MASH (Male Adult meSH)/EGSnrc ECM and presents instances of dosimetric evaluations using the new ECM. Besides the update of the phantom and the MC code, the source algorithm has also been improved relative to previous publications. (author)

  14. BPS-ICF model, a tool to measure biopsychosocial functioning and disability within ICF concepts: theory and practice updated.

    Science.gov (United States)

    Talo, Seija A; Rytökoski, Ulla M

    2016-03-01

The transformation of the International Classification of Impairments, Disabilities and Handicaps into the International Classification of Functioning, Disability and Health (ICF) meant a great deal to those who need to communicate in terms of the functioning concept in their daily work. With the ICF's commonly understood language, decades of uncertainty about which concepts and terms describe functioning and disability seemed to be dispelled. Operationalizing the ICF to measure the level of functioning along with the new nomenclature, however, has not been as unambiguous: transforming linguistic terms into quantified functioning seems to require another type of theorizing. Despite the challenging task, numerous projects were formulated during the past decades to apply the ICF for measurement purposes. This article updates one of them, the so-called biopsychosocial-ICF model, which uses all ICF categories but classifies them into more components than the ICF for measurement purposes. The model suggests that both disabilities and functional resources should be described by collecting and organizing functional measurement data in a multidisciplinary, biopsychosocial data matrix.

  15. Procedural guide for modelling and analyzing the flight characteristics of a helicopter design using Flightlab

    OpenAIRE

    McVaney, Gary P.

    1993-01-01

    Approved for public release; distribution is unlimited. This thesis presents one method for modelling and analyzing a helicopter design using Flightlab. Flightlab is a computer program that provides for engineering design, analysis and simulation of aircraft using non-linear dynamic modeling techniques. The procedure to model a single main rotor helicopter is outlined using the sample helicopter design in the book 'Helicopter Performance, Stability, and Control' by Ray Prouty. The analysis...

  16. The Chain-Link Fence Model: A Framework for Creating Security Procedures

    OpenAIRE

    Houghton, Robert F.

    2013-01-01

A long-standing problem in information technology security is how to help reduce the security footprint. Many specific proposals exist to address specific problems in information technology security. Most information technology solutions need to be repeatable throughout the course of an information system's lifecycle. The Chain-Link Fence Model is a new model for creating and implementing information technology procedures. This model was validated by two different methods: the first being int...

  17. A Markov Chain Model for evaluating the effectiveness of randomized surveillance procedures

    Energy Technology Data Exchange (ETDEWEB)

    Edmunds, T.A.

    1994-01-01

    A Markov Chain Model has been developed to evaluate the effectiveness of randomized surveillance procedures. The model is applicable for surveillance systems that monitor a collection of assets by randomly selecting and inspecting the assets. The model provides an estimate of the detection probability as a function of the amount of time that an adversary would require to steal or sabotage the asset. An interactive computer code has been written to perform the necessary computations.
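For the simplest case of uniform random selection with no memory between rounds, the detection probability reduces to a two-state Markov chain with an absorbing "detected" state; the independence-across-rounds assumption is a simplification of the full model:

```python
def detection_probability(n_assets, n_inspected, periods):
    """Probability that a compromised asset is inspected at least once
    within 'periods' surveillance rounds, when each round inspects a
    uniformly random subset of 'n_inspected' out of 'n_assets' assets.
    Two-state Markov chain: 'undetected' -> 'detected' (absorbing),
    with per-round transition probability n_inspected / n_assets."""
    p_miss = 1.0 - n_inspected / n_assets
    return 1.0 - p_miss ** periods
```

Evaluating this as a function of the number of rounds gives the detection probability versus the time an adversary needs to complete the theft or sabotage, which is the quantity the model estimates.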

  18. A network society communicative model for optimizing the Refugee Status Determination (RSD) procedures

    Directory of Open Access Journals (Sweden)

    Andrea Pacheco Pacífico

    2013-01-01

Full Text Available This article recommends a new way to improve Refugee Status Determination (RSD) procedures by proposing a network-society communicative model based on active involvement and dialogue among all implementing partners. This model, named after proposals from Castells, Habermas, Apel, Chimni, and Betts, would be mediated by the United Nations High Commissioner for Refugees (UNHCR), whose role would be modeled after the practice of the International Committee of the Red Cross (ICRC).

  19. Finite element model updating of multi-span steel-arch-steel-girder bridges based on ambient vibrations

    Science.gov (United States)

    Hou, Tsung-Chin; Gao, Wei-Yuan; Chang, Chia-Sheng; Zhu, Guan-Rong; Su, Yu-Min

    2017-04-01

The three-span steel-arch-steel-girder Jiaxian Bridge was newly constructed in 2010 to replace the former one destroyed by Typhoon Sinlaku (2008, Taiwan). It was designed and built to continue the domestic service requirement, as well as to improve the tourism business of the Kaohsiung city government, Taiwan. This study aimed at establishing a baseline model of Jiaxian Bridge for hazardous-scenario simulation such as typhoons, floods and earthquakes. The necessity of these precautions is attributed to the inherent vulnerability of the site: near-fault and river-crossing conditions. The uncalibrated baseline bridge model was built with structural finite elements in accordance with the blueprints. Ambient vibration measurements were performed repeatedly to acquire the elastic dynamic characteristics of the bridge structure. Two frequency-domain system identification algorithms were employed to extract the measured operational modal parameters. Modal shapes, frequencies, and modal assurance criteria (MAC) were configured as the fitting targets so as to calibrate/update the structural parameters of the baseline model. It has been recognized that different types of structural parameters contribute to the fitting targets in distinguishable ways, as this study similarly found. For steel-arch-steel-girder bridges such as this case in particular, the joint rigidity of the steel components was found to be dominant, while material properties and section geometries were relatively minor. The updated model is capable of providing more rational elastic responses of the bridge superstructure under normal service conditions as well as hazardous scenarios, and can be used to manage the health condition of the bridge structure.
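The MAC fitting target used in such model updating can be computed directly from two mode-shape vectors (one measured, one simulated):

```python
def mac(phi_a, phi_b):
    """Modal Assurance Criterion between two mode-shape vectors:
    MAC = |phi_a . phi_b|^2 / ((phi_a . phi_a) * (phi_b . phi_b)).
    1.0 means the shapes are perfectly correlated, 0.0 orthogonal."""
    dot = lambda x, y: sum(a * b for a, b in zip(x, y))
    return dot(phi_a, phi_b) ** 2 / (dot(phi_a, phi_a) * dot(phi_b, phi_b))
```

Maximizing the diagonal terms of the MAC matrix between measured and simulated mode shapes, alongside matching frequencies, is what drives the calibration of the structural parameters.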

  20. Circular Updates

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Circular Updates are periodic sequentially numbered instructions to debriefing staff and observers informing them of changes or additions to scientific and specimen...

  1. Implementation of a combined association-linkage model for quantitative traits in linear mixed model procedures of statistical packages

    NARCIS (Netherlands)

    Beem, A. Leo; Boomsma, Dorret I.

    2006-01-01

A transmission disequilibrium test for quantitative traits which combines association and linkage analyses is currently available in several dedicated software packages. We describe how to implement such models in the linear mixed model procedures that are available in widely used statistical packages.

  2. Updates to the Demographic and Spatial Allocation Models to Produce Integrated Climate and Land Use Scenarios (ICLUS) (Version 2) (External Review Draft)

    Science.gov (United States)

    EPA announced the availability of the draft report, Updates to the Demographic and Spatial Allocation Models to Produce Integrated Climate and Land Use Scenarios (ICLUS) for a 30-day public comment period. The ICLUS version 2 (v2) modeling tool furthered land change mod...

  3. Improving Semantic Updating Method on 3d City Models Using Hybrid Semantic-Geometric 3d Segmentation Technique

    Science.gov (United States)

    Sharkawi, K.-H.; Abdul-Rahman, A.

    2013-09-01

to LoD4. The accuracy and structural complexity of the 3D objects increase with the LoD level, where LoD0 is the simplest (2.5D; Digital Terrain Model (DTM) + building or roof print) while LoD4 is the most complex (architectural details with interior structures). Semantic information is one of the main components of CityGML and 3D city models, and provides important information for any analysis. However, more often than not, semantic information is not available for a 3D city model due to an unstandardized modelling process. One example is where a building is generated as one object (without specific feature layers such as Roof, Ground floor, Level 1, Level 2, Block A, Block B, etc.). This research attempts to develop a method to improve the semantic data updating process by segmenting the 3D building into simpler parts, which makes it easier for users to select and update the semantic information. The methodology is implemented for 3D buildings in LoD2, where the buildings are generated without architectural details but with distinct roof structures. This paper also introduces a hybrid semantic-geometric 3D segmentation method that deals with hierarchical segmentation of a 3D building based on its semantic value and surface characteristics, fitted by one of the predefined primitives. For future work, the segmentation method will be implemented as part of a change detection module that can detect any changes on the 3D buildings, store and retrieve semantic information of the changed structure, automatically update the 3D models and visualize the results in a user-friendly graphical user interface (GUI).

  4. Analyzing longitudinal data with the linear mixed models procedure in SPSS.

    Science.gov (United States)

    West, Brady T

    2009-09-01

    Many applied researchers analyzing longitudinal data share a common misconception: that specialized statistical software is necessary to fit hierarchical linear models (also known as linear mixed models [LMMs], or multilevel models) to longitudinal data sets. Although several specialized statistical software programs of high quality are available that allow researchers to fit these models to longitudinal data sets (e.g., HLM), rapid advances in general purpose statistical software packages have recently enabled analysts to fit these same models when using preferred packages that also enable other more common analyses. One of these general purpose statistical packages is SPSS, which includes a very flexible and powerful procedure for fitting LMMs to longitudinal data sets with continuous outcomes. This article aims to present readers with a practical discussion of how to analyze longitudinal data using the LMMs procedure in the SPSS statistical software package.

  5. Validation of Short-Term Noise Assessment Procedures: FY16 Summary of Procedures, Progress, and Preliminary Results

    Science.gov (United States)

    summary of the work performed in fiscal year 2016 for the Environment, Safety and Occupational Health Short-Term Noise Assessment Procedure Demonstration...Validation project. This report describes the procedure used to generate the noise models output dataset, and then it compares that dataset to the...algorithm is continuing in FY17. Updates to the noise assessment tools are identified. Throughout this document, procedures used for calculations and analysis are included.

  6. Procedural 3d Modelling for Traditional Settlements. The Case Study of Central Zagori

    Science.gov (United States)

    Kitsakis, D.; Tsiliakou, E.; Labropoulos, T.; Dimopoulou, E.

    2017-02-01

Over the last decades, 3D modelling has been a fast-growing field in Geographic Information Science, extensively applied in various domains including the reconstruction and visualization of cultural heritage, especially monuments and traditional settlements. Technological advances in computer graphics allow for the modelling of complex 3D objects with high precision and accuracy. Procedural modelling is an effective tool and a relatively novel method based on the concept of algorithmic modelling. It is utilized for the generation of accurate 3D models and composite facade textures from sets of rules called Computer Generated Architecture (CGA) grammars, which define the objects' detailed geometry, rather than by altering or editing the model manually. In this paper, procedural modelling tools have been exploited to generate the 3D model of a traditional settlement in the region of Central Zagori in Greece. The detailed geometries of the 3D models derived from the application of shape grammars to selected footprints, and the process resulted in a final 3D model optimally describing the built environment of Central Zagori in three Levels of Detail (LoD). The final 3D scene was exported and published as a 3D web scene that can be viewed with the 3D CityEngine viewer, offering a walkthrough of the whole model, as in virtual reality or game environments. This research work addresses issues regarding texture precision, LoD for 3D objects and interactive visualization within one 3D scene, as well as the effectiveness of large-scale modelling, along with the benefits and drawbacks of procedural modelling techniques in the field of cultural heritage, and more specifically in the 3D modelling of traditional settlements.

  7. An incremental procedure model for e-learning projects at universities

    Directory of Open Access Journals (Sweden)

    Pahlke, Friedrich

    2006-11-01

Full Text Available E-learning projects at universities are produced under different conditions than in industry. The main characteristic of many university projects is that they are realized essentially as a solo effort. In contrast, in private industry the different, interdisciplinary skills necessary for the development of e-learning are typically supplied by a multimedia agency. A specific procedure tailored to use at universities is therefore required to facilitate mastering the number and complexity of the tasks. In this paper an incremental procedure model is presented, which describes the proceeding in every phase of the project. It allows a high degree of flexibility and emphasizes the didactic concept rather than the technical implementation. In the second part, we illustrate the practical use of the theoretical procedure model based on the project "Online training in Genetic Epidemiology".

  8. Using genetic algorithm and TOPSIS for Xinanjiang model calibration with a single procedure

    Science.gov (United States)

    Cheng, Chun-Tian; Zhao, Ming-Yan; Chau, K. W.; Wu, Xin-Yu

    2006-01-01

Genetic Algorithm (GA) is globally oriented in searching and thus useful in optimizing multiobjective problems, especially where the objective functions are ill-defined. Conceptual rainfall-runoff models, which aim to predict streamflow from knowledge of precipitation over a catchment, have become a basic tool for flood forecasting. The parameter calibration of a conceptual model usually involves multiple criteria for judging performance against observed data. However, it is often difficult to derive all objective functions for the parameter calibration problem of a conceptual model. Thus, a new method for the multiple-criteria parameter calibration problem, which combines GA with TOPSIS (technique for order preference by similarity to ideal solution) for the Xinanjiang model, is presented. This study is an immediate further development of the authors' previous research (Cheng, C.T., Ou, C.P., Chau, K.W., 2002. Combining a fuzzy optimal model with a genetic algorithm to solve multi-objective rainfall-runoff model calibration. Journal of Hydrology, 268, 72-86), whose obvious disadvantages are that it splits the whole procedure into two parts and makes it difficult to grasp the overall best behaviors of the model during the calibration procedure. The current method integrates the two parts of Xinanjiang rainfall-runoff model calibration, simplifying the procedures of model calibration and validation and readily revealing the intrinsic behavior of the observed data as a whole. Comparison with the two-step procedure shows that the current methodology gives similar results to the previous method and is also feasible and robust, but simpler and easier to apply in practice.
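For reference, the TOPSIS ranking step (closeness to an ideal solution) can be sketched as follows. The decision matrix and weights are made-up placeholders, not values from the Xinanjiang study.

```python
# TOPSIS sketch: rank alternatives by closeness to the ideal solution.
# Decision matrix, weights, and criteria below are illustrative only.
import math

def topsis(matrix, weights, benefit):
    """matrix rows = alternatives, cols = criteria;
    benefit[j] is True if larger is better for criterion j."""
    m, n = len(matrix), len(matrix[0])
    # Vector-normalize each criterion column, then apply weights.
    norms = [math.sqrt(sum(matrix[i][j] ** 2 for i in range(m))) for j in range(n)]
    v = [[weights[j] * matrix[i][j] / norms[j] for j in range(n)] for i in range(m)]
    # Ideal and anti-ideal values per criterion.
    ideal = [max(col) if benefit[j] else min(col) for j, col in enumerate(zip(*v))]
    worst = [min(col) if benefit[j] else max(col) for j, col in enumerate(zip(*v))]
    scores = []
    for row in v:
        d_pos = math.sqrt(sum((x - a) ** 2 for x, a in zip(row, ideal)))
        d_neg = math.sqrt(sum((x - b) ** 2 for x, b in zip(row, worst)))
        # Closeness coefficient: near ideal, far from anti-ideal is better.
        scores.append(d_neg / (d_pos + d_neg))
    return scores

# Three candidate calibrations scored on two error criteria (smaller is
# better) and one efficiency criterion (larger is better).
scores = topsis([[0.12, 0.30, 0.85],
                 [0.10, 0.35, 0.88],
                 [0.20, 0.25, 0.80]],
                weights=[1 / 3] * 3,
                benefit=[False, False, True])
best = scores.index(max(scores))
```

In a GA+TOPSIS calibration, a score like this would serve as the fitness value ranking candidate parameter sets in each generation.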

  9. A comparative review between the updated models of Brazilian, United Kingdom and American eye banks and lamellar transplants

    Directory of Open Access Journals (Sweden)

    Gustavo Victor

    2014-12-01

Full Text Available Corneal transplantation (CT) is the most commonly performed type of transplant in the world, and eye banks are the organizations that capture, evaluate, preserve, store and distribute ocular tissues. With the evolution of surgical techniques and equipment for CT, eye banks have had to evolve to keep up with these requirements. This evolution ranges from tissue-capture techniques, donation campaigns and patient education (e.g. internet-based), through the use of current equipment to supply tissues adequate for the most current surgical techniques, to the integration of a country's eye banks with real-time management of ocular-tissue stocks, and the adequacy of the laws that govern the entire process. This review aims to compare the updated models of the Brazilian, United Kingdom and American eye banks, and to examine the trend towards lamellar transplants in these three countries.

  10. New calculation of derived limits for the 1960 radiation protection guides reflecting updated models for dosimetry and biological transport

    International Nuclear Information System (INIS)

    Eckerman, K.F.; Watson, S.B.; Nelson, C.B.; Nelson, D.R.; Richardson, A.C.B.; Sullivan, R.E.

    1984-12-01

    This report presents revised values for the radioactivity concentration guides (RCGs), based on the 1960 primary radiation protection guides (RPGs) for occupational exposure (FRC 1960) and for underground uranium miners (EPA 1971a) using the updated dosimetric models developed to prepare ICRP Publication 30. Unlike the derived quantities presented in Publication 30, which are based on limitation of the weighted sum of doses to all irradiated tissues, these RCGs are based on the ''critical organ'' approach of the 1960 guidance, which was a single limit for the most critically irradiated organ or tissue. This report provides revised guides for the 1960 Federal guidance which are consistent with current dosimetric relationships. 2 figs., 4 tabs

  11. Exploring into the New Model Procedure in Translation: Wafting as a Case in Point

    Science.gov (United States)

    Akbari, Alireza

    2013-01-01

Choosing the nearest equivalence is of great concern for translators nowadays. While it seems much commitment has been given to this issue, there is still room for more attention to innovative methods in Translation Studies in this direction. The idea of the wafting procedure in the Intermediacy model of translation comes from a book by Alireza Akbari…

  12. User Acceptance of YouTube for Procedural Learning: An Extension of the Technology Acceptance Model

    Science.gov (United States)

    Lee, Doo Young; Lehto, Mark R.

    2013-01-01

    The present study was framed using the Technology Acceptance Model (TAM) to identify determinants affecting behavioral intention to use YouTube. Most importantly, this research emphasizes the motives for using YouTube, which is notable given its extrinsic task goal of being used for procedural learning tasks. Our conceptual framework included two…

  13. A computational model to investigate assumptions in the headturn preference procedure

    NARCIS (Netherlands)

    Bergmann, C.; Bosch, L.F.M. ten; Fikkert, J.P.M.; Boves, L.W.J.

    2013-01-01

    In this paper we use a computational model to investigate four assumptions that are tacitly present in interpreting the results of studies on infants' speech processing abilities using the Headturn Preference Procedure (HPP): (1) behavioral differences originate in different processing; (2)

  14. The Psychology Department Model Advisement Procedure: A Comprehensive, Systematic Approach to Career Development Advisement

    Science.gov (United States)

    Howell-Carter, Marya; Nieman-Gonder, Jennifer; Pellegrino, Jennifer; Catapano, Brittani; Hutzel, Kimberly

    2016-01-01

    The MAP (Model Advisement Procedure) is a comprehensive, systematic approach to developmental student advisement. The MAP was implemented to improve advisement consistency, improve student preparation for internships/senior projects, increase career exploration, reduce career uncertainty, and, ultimately, improve student satisfaction with the…

  15. Conceptualizing and Testing Random Indirect Effects and Moderated Mediation in Multilevel Models: New Procedures and Recommendations

    Science.gov (United States)

    Bauer, Daniel J.; Preacher, Kristopher J.; Gil, Karen M.

    2006-01-01

    The authors propose new procedures for evaluating direct, indirect, and total effects in multilevel models when all relevant variables are measured at Level 1 and all effects are random. Formulas are provided for the mean and variance of the indirect and total effects and for the sampling variances of the average indirect and total effects.…
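One of the quantities in question, the mean of a random indirect effect, equals mu_a * mu_b + cov(a, b) when the Level-1 slopes a_j and b_j vary jointly across clusters. A quick Monte Carlo sketch (illustrative parameter values, not the authors' data) checks this:

```python
# Monte Carlo check: when slopes a_j and b_j are random across clusters,
# E[a_j * b_j] = mu_a * mu_b + cov(a, b), not just mu_a * mu_b.
# All parameter values below are illustrative.
import math, random

random.seed(1)
mu_a, mu_b = 0.5, 0.4
sd_a, sd_b, rho = 0.2, 0.3, 0.6
cov_ab = rho * sd_a * sd_b

draws = []
for _ in range(200_000):
    z1, z2 = random.gauss(0, 1), random.gauss(0, 1)
    a = mu_a + sd_a * z1                                  # random slope a_j
    b = mu_b + sd_b * (rho * z1 + math.sqrt(1 - rho ** 2) * z2)  # correlated b_j
    draws.append(a * b)

mc_mean = sum(draws) / len(draws)
analytic = mu_a * mu_b + cov_ab   # 0.5*0.4 + 0.036 = 0.236
print(round(mc_mean, 3), analytic)
```

Ignoring the covariance term (here 0.036 against a product of 0.20) would bias the estimated indirect effect noticeably, which is why the random-effects formulas matter.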

  16. A new experimental procedure for incorporation of model contaminants in polymer hosts

    NARCIS (Netherlands)

    Papaspyrides, C.D.; Voultzatis, Y.; Pavlidou, S.; Tsenoglou, C.; Dole, P.; Feigenbaum, A.; Paseiro, P.; Pastorelli, S.; Cruz Garcia, C. de la; Hankemeier, T.; Aucejo, S.

    2005-01-01

    A new experimental procedure for incorporation of model contaminants in polymers was developed as part of a general scheme for testing the efficiency of functional barriers in food packaging. The aim was to progressively pollute polymers in a controlled fashion up to a high level in the range of

  17. Evaluation of alternative surface runoff accounting procedures using the SWAT model

    Science.gov (United States)

    For surface runoff estimation in the Soil and Water Assessment Tool (SWAT) model, the curve number (CN) procedure is commonly adopted to calculate surface runoff by utilizing antecedent soil moisture condition (SCSI) in field. In the recent version of SWAT (SWAT2005), an alternative approach is ava...
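For context, the standard SCS curve-number relation that underlies the CN procedure can be sketched as follows (catchment numbers are illustrative, and this is not SWAT code):

```python
# Standard SCS curve-number runoff relation (depths in millimetres):
#   S  = 25400/CN - 254      (potential maximum retention)
#   Ia = 0.2 * S             (initial abstraction, common default)
#   Q  = (P - Ia)^2 / (P - Ia + S)   for P > Ia, else 0

def scs_runoff(precip_mm, cn, ia_ratio=0.2):
    """Event runoff depth Q from precipitation P and curve number CN."""
    s = 25400.0 / cn - 254.0
    ia = ia_ratio * s
    if precip_mm <= ia:
        return 0.0
    return (precip_mm - ia) ** 2 / (precip_mm - ia + s)

# A 50 mm storm on a catchment with CN = 80:
q = scs_runoff(50.0, 80)
print(round(q, 1))  # → 13.8
```

The antecedent-moisture adjustment in SWAT effectively shifts CN (and hence S) between events, which is the part the alternative accounting procedures revise.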

  18. Test Standard Revision Update: JESD57, "Procedures for the Measurement of Single-Event Effects in Semiconductor Devices from Heavy-Ion Irradiation"

    Science.gov (United States)

    Lauenstein, Jean-Marie

    2015-01-01

    The JEDEC JESD57 test standard, Procedures for the Measurement of Single-Event Effects in Semiconductor Devices from Heavy-Ion Irradiation, is undergoing its first revision since 1996. In this talk, we place this test standard into context with other relevant radiation test standards to show its importance for single-event effect radiation testing for space applications. We show the range of industry, government, and end-user party involvement in the revision. Finally, we highlight some of the key changes being made and discuss the trade-space in which setting standards must be made to be both useful and broadly adopted.

  19. Calibrating and Updating the Global Forest Products Model (GFPM version 2014 with BPMPD)

    Science.gov (United States)

    Joseph Buongiorno; Shushuai Zhu

    2014-01-01

    The Global Forest Products Model (GFPM) is an economic model of global production, consumption, and trade of forest products. An earlier version of the model is described in Buongiorno et al. (2003). The GFPM 2014 has data and parameters to simulate changes of the forest sector from 2010 to 2030. Buongiorno and Zhu (2014) describe how to use the model for simulation....

  20. Calibrating and updating the Global Forest Products Model (GFPM version 2016 with BPMPD)

    Science.gov (United States)

    Joseph Buongiorno; Shushuai  Zhu

    2016-01-01

    The Global Forest Products Model (GFPM) is an economic model of global production, consumption, and trade of forest products. An earlier version of the model is described in Buongiorno et al. (2003). The GFPM 2016 has data and parameters to simulate changes of the forest sector from 2013 to 2030. Buongiorno and Zhu (2015) describe how to use the model for...

  1. Macrophyte and pH buffering updates to the Klamath River water-quality model upstream of Keno Dam, Oregon

    Science.gov (United States)

    Sullivan, Annett B.; Rounds, Stewart A.; Asbill-Case, Jessica R.; Deas, Michael L.

    2013-01-01

    A hydrodynamic, water temperature, and water-quality model of the Link River to Keno Dam reach of the upper Klamath River was updated to account for macrophytes and enhanced pH buffering from dissolved organic matter, ammonia, and orthophosphorus. Macrophytes had been observed in this reach by field personnel, so macrophyte field data were collected in summer and fall (June-October) 2011 to provide a dataset to guide the inclusion of macrophytes in the model. Three types of macrophytes were most common: pondweed (Potamogeton species), coontail (Ceratophyllum demersum), and common waterweed (Elodea canadensis). Pondweed was found throughout the Link River to Keno Dam reach in early summer with densities declining by mid-summer and fall. Coontail and common waterweed were more common in the lower reach near Keno Dam and were at highest density in summer. All species were most dense in shallow water (less than 2 meters deep) near shore. The highest estimated dry weight biomass for any sample during the study was 202 grams per square meter for coontail in August. Guided by field results, three macrophyte groups were incorporated into the CE-QUAL-W2 model for calendar years 2006-09. The CE-QUAL-W2 model code was adjusted to allow the user to initialize macrophyte populations spatially across the model grid. The default CE-QUAL-W2 model includes pH buffering by carbonates, but does not include pH buffering by organic matter, ammonia, or orthophosphorus. These three constituents, especially dissolved organic matter, are present in the upper Klamath River at concentrations that provide substantial pH buffering capacity. In this study, CE-QUAL-W2 was updated to include this enhanced buffering capacity in the simulation of pH. Acid dissociation constants for ammonium and phosphoric acid were taken from the literature. 
For dissolved organic matter, the number of organic acid groups and each group's acid dissociation constant (Ka) and site density (moles of sites per mole of
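A sketch of the acid-base bookkeeping this kind of buffering update involves: the fraction of total ammonia present as NH4+ follows directly from the acid dissociation constant, and the pair buffers pH most strongly near its pKa. The pKa below is the textbook value for 25 °C; this is not the report's CE-QUAL-W2 code.

```python
# Speciation of the ammonium/ammonia buffer pair as a function of pH.
# pKa ~ 9.25 at 25 °C (textbook value; illustrative, not from the report).

def nh4_fraction(ph, pka=9.25):
    """Fraction of total ammonia present as NH4+ at a given pH."""
    return 1.0 / (1.0 + 10.0 ** (ph - pka))

for ph in (7.0, 8.0, 9.25, 10.0):
    print(ph, round(nh4_fraction(ph), 3))
```

At pH near the pKa the two species are comparable, so adding or removing protons changes speciation rather than pH, which is exactly the buffering effect the updated model adds for ammonia, orthophosphorus, and organic-matter acid sites.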

  2. A P-value model for theoretical power analysis and its applications in multiple testing procedures

    Directory of Open Access Journals (Sweden)

    Fengqing Zhang

    2016-10-01

Full Text Available Abstract Background Power analysis is a critical aspect of the design of experiments to detect an effect of a given size. When multiple hypotheses are tested simultaneously, multiplicity adjustments to p-values should be taken into account in power analysis. There are a limited number of studies on power analysis in multiple testing procedures. For some methods, the theoretical analysis is difficult and extensive numerical simulations are often needed, while other methods oversimplify the information under the alternative hypothesis. To this end, this paper aims to develop a new statistical model for power analysis in multiple testing procedures. Methods We propose a step-function-based p-value model under the alternative hypothesis, which is simple enough to perform power analysis without simulations, but not too simple to lose the information from the alternative hypothesis. The first step is to transform distributions of different test statistics (e.g., t, chi-square or F) to distributions of corresponding p-values. We then use a step function to approximate each of the p-value distributions by matching the mean and variance. Lastly, the step-function-based p-value model can be used for theoretical power analysis. Results The proposed model is applied to problems in multiple testing procedures. We first show how the most powerful critical constants can be chosen using the step-function-based p-value model. Our model is then applied to the field of multiple testing procedures to explain the assumption of monotonicity of the critical constants. Lastly, we apply our model to a behavioral weight loss and maintenance study to select the optimal critical constants. Conclusions The proposed model is easy to implement and preserves the information from the alternative hypothesis.
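The first two steps (transforming a test statistic's distribution into a p-value distribution under the alternative, then approximating it with a step function) can be illustrated with a one-sided z-test. The histogram-style step fit below is a simple stand-in for the paper's moment-matching construction.

```python
# Illustration of a step-function model for the p-value distribution
# under the alternative: simulate one-sided z-test p-values with a true
# shift delta, then approximate their density by a piecewise-constant
# (step) function. The effect size and bin count are illustrative.
import math, random

def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

random.seed(7)
delta = 2.0  # true shift of the test statistic under the alternative
pvals = [1.0 - phi(random.gauss(delta, 1.0)) for _ in range(100_000)]

# Step-function approximation: constant density on each of k equal bins.
k = 10
heights = [0.0] * k
for p in pvals:
    heights[min(int(p * k), k - 1)] += k / len(pvals)

mean_p = sum(pvals) / len(pvals)
# Mean implied by the step density (bin midpoints weighted by mass).
step_mean = sum(h * (i + 0.5) / k / k for i, h in enumerate(heights))
print(round(mean_p, 3), round(step_mean, 3))
```

Under the alternative the p-value density piles up near zero (the first step is by far the tallest), and a few step heights already summarize it well enough for power calculations without further simulation.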

  3. PACIAE 2.0: An updated parton and hadron cascade model (program) for the relativistic nuclear collisions

    Science.gov (United States)

    Sa, Ben-Hao; Zhou, Dai-Mei; Yan, Yu-Liang; Li, Xiao-Mei; Feng, Sheng-Qin; Dong, Bao-Guo; Cai, Xu

    2012-02-01

We have updated the parton and hadron cascade model PACIAE for relativistic nuclear collisions, moving its base from JETSET 6.4 and PYTHIA 5.7 to PYTHIA 6.4, and renamed it PACIAE 2.0. The main physics concerning the stages of parton initiation, parton rescattering, hadronization, and hadron rescattering were discussed. The structures of the programs were briefly explained. In addition, some calculated examples were compared with the experimental data. It turns out that this model (program) works well. Program summary Program title: PACIAE version 2.0 Catalogue identifier: AEKI_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEKI_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 297 523 No. of bytes in distributed program, including test data, etc.: 2 051 274 Distribution format: tar.gz Programming language: FORTRAN 77 Computer: DELL Studio XPS and others with a FORTRAN 77 or GFORTRAN compiler Operating system: Unix/Linux RAM: 1 G words Word size: 64 bits Classification: 11.2 Nature of problem: The Monte Carlo simulation of the hadron transport (cascade) model is successful in studying final-state observables in relativistic nuclear collisions. However, the high-p_T suppression, the jet quenching (energy loss), and the eccentricity scaling of v_2, etc., observed in high-energy nuclear collisions, indicate the important effect of the initial partonic state on the final hadronic state. Therefore better parton and hadron transport (cascade) models for relativistic nuclear collisions are highly required. Solution method: The parton and hadron cascade model PACIAE is originally based on JETSET 7.4 and PYTHIA 5.7. The PYTHIA model has been updated to PYTHIA 6.4 with the additions of new physics, the improvements in existing physics, and the

  4. A Proposed Model for Selecting Measurement Procedures for the Assessment and Treatment of Problem Behavior.

    Science.gov (United States)

    LeBlanc, Linda A; Raetz, Paige B; Sellers, Tyra P; Carr, James E

    2016-03-01

Practicing behavior analysts frequently assess and treat problem behavior as part of their ongoing job responsibilities. Effective measurement of problem behavior is critical to success in these activities because some measures of problem behavior provide more accurate and complete information about the behavior than others. However, not every measurement procedure is appropriate for every problem behavior and therapeutic circumstance. We summarize the most commonly used measurement procedures, describe the contexts for which they are most appropriate, and propose a clinical decision-making model for selecting measurement procedures given certain features of the behavior and constraints of the therapeutic environment.

  5. Detection Procedure for a Single Additive Outlier and Innovational Outlier in a Bilinear Model

    Directory of Open Access Journals (Sweden)

    Azami Zaharim

    2007-01-01

Full Text Available A single-outlier detection procedure for data generated from BL(1,1,1,1) models is developed. It is carried out in three stages. Firstly, the measures of impact of an IO and an AO, denoted by ω_IO and ω_AO respectively, are derived based on the least squares method. Secondly, test statistics and test criteria are defined for classifying an observation as an outlier of its respective type. Finally, a general single-outlier detection procedure is presented to distinguish a particular type of outlier at a time point t.

  6. Utilizing the Updated Gamma-Ray Bursts and Type Ia Supernovae to Constrain the Cardassian Expansion Model and Dark Energy

    Directory of Open Access Journals (Sweden)

    Jun-Jie Wei

    2015-01-01

Full Text Available We update gamma-ray burst (GRB) luminosity relations among certain spectral and light-curve features with 139 GRBs. The distance moduli of 82 GRBs at z > 1.4 can be calibrated with the sample at z ≤ 1.4 by using the cubic spline interpolation method from the Union2.1 Type Ia supernovae (SNe Ia) set. We investigate the joint constraints on the Cardassian expansion model and dark energy with the 580 Union2.1 SNe Ia sample (z < 1.4) and the 82 calibrated GRBs' data (1.4 < z ≤ 8.2). For the Cardassian expansion model, the best fit is Ωm = 0.24 (+0.15/−0.15) and n = 0.16 (+0.30/−0.52) at 1σ, which is consistent with the ΛCDM cosmology (n = 0) in the 1σ confidence region. We also discuss two dark energy models in which the equation of state w(z) is parameterized as w(z) = w0 and w(z) = w0 + w1 z/(1+z), respectively. Based on our analysis, we see that our universe at higher redshift, up to z = 8.2, is consistent with the concordance model within the 1σ confidence level.

  7. Updated and integrated modelling of the 1995 - 2008 Mise-a-la-masse survey data in Olkiluoto

    International Nuclear Information System (INIS)

    Ahokas, T.; Paananen, M.

    2010-01-01

Posiva Oy prepares for disposal of spent nuclear fuel into bedrock, focusing on Olkiluoto, Eurajoki. This is in accordance with the Decision-in-Principle of the State Council in 2000, ratified by the Parliament in 2001. The ONKALO underground characterization premises have been under construction since 2004. Posiva Oy aims to submit the construction licence application in 2012. To support the compilation of the safety case and the design and construction of the repository and ONKALO, an integrated Olkiluoto site description including geological, rock mechanics, hydrogeological and hydrogeochemical models will be compiled. Mise-a-la-masse (MAM) surveys have been carried out in the Olkiluoto area since 1995 to follow electric conductors from drillhole to drillhole, from drillhole to the ground surface, and also between the ONKALO access tunnel and drillholes or the ground surface. The data, and some visualisation of the data, have been presented as part of the reporting of the 1995-2008 surveys. The work presented in this paper includes modelling of all the measured data and combining single conductors modelled from different surveys into conductive zones. The results from this work will be used in updating the geological and hydrogeological models of the Olkiluoto site area. Several electrically conductive zones were modelled from the examined data; many of them coincide with known brittle deformation zones, but indications of many so far unknown zones were also detected. During the modelling, the Comsol Multiphysics software for calculating theoretical potential-field anomalies of different models was tested. The test calculations showed that this software is useful in confirming the modelling results, especially in complicated cases. (orig.)

  8. An updated MILES stellar library and stellar population models (Research Note)

    NARCIS (Netherlands)

    Falcon-Barroso, J.; Sanchez-Blazquez, P.; Vazdekis, A.; Ricciardelli, E.; Cardiel, N.; Cenarro, A. J.; Gorgas, J.; Peletier, R. F.

    Aims: We present a number of improvements to the MILES library and stellar population models. We correct some small errors in the radial velocities of the stars, measure the spectral resolution of the library and models more accurately, and give a better absolute flux calibration of the models.

  9. An Updated Geophysical Model for AMSR-E and SSMIS Brightness Temperature Simulations over Oceans

    Directory of Open Access Journals (Sweden)

    Elizaveta Zabolotskikh

    2014-03-01

Full Text Available In this study, we considered the geophysical model for microwave brightness temperature (BT) simulation for the atmosphere-ocean system under non-precipitating conditions. The model is presented as a combination of atmospheric absorption and ocean emission models. We validated this model for two satellite instruments: the Advanced Microwave Scanning Radiometer for the Earth Observing System (AMSR-E) onboard the Aqua satellite, and the Special Sensor Microwave Imager/Sounder (SSMIS) onboard the F16 satellite of the Defense Meteorological Satellite Program (DMSP) series. We compared simulated BT values with satellite BT measurements for different combinations of various water vapor and oxygen absorption models and wind-induced ocean emission models. A dataset of clear-sky atmospheric and oceanic parameters, collocated in time and space with the satellite measurements, was used for the comparison. We found the best model combination, providing the least root mean square error between calculations and measurements. A single combination of models ensured the best results for all considered radiometric channels. We also obtained adjustments to the simulated BT values, as averaged differences between the model simulations and satellite measurements. These adjustments can be used in any research based on modeling data to remove model/calibration inconsistencies. We demonstrated an application of the model by developing a new algorithm for sea surface wind speed retrieval from AMSR-E data.

  10. ETM documentation update – including modelling conventions and manual for software tools

    DEFF Research Database (Denmark)

    Grohnheit, Poul Erik

    , it summarises the work done during 2013, and it also contains presentations for promotion of fusion as a future element in the electricity generation mix and presentations for the modelling community concerning model development and model documentation – in particular for TIAM collaboration workshops....

  11. Resolving structural errors in a spatially distributed hydrologic model using ensemble Kalman filter state updates

    NARCIS (Netherlands)

    Spaaks, J.H.; Bouten, W.

    2013-01-01

    In hydrological modeling, model structures are developed in an iterative cycle as more and different types of measurements become available and our understanding of the hillslope or watershed improves. However, with increasing complexity of the model, it becomes more and more difficult to detect

  12. Calibration procedure for a potato crop growth model using information from across Europe

    DEFF Research Database (Denmark)

    Heidmann, Tove; Tofteng, Charlotte; Abrahamsen, Per

    2008-01-01

    In the FertOrgaNic EU project, 3 years of field experiments with drip irrigation and fertigation were carried out at six different sites across Europe, involving seven different varieties of potato. The Daisy model, which simulates plant growth together with water and nitrogen dynamics, was used...... to simulate the field experiments. An initial potato parameterisation was generated from an independent dataset and was used for site-specific calibrations. At those sites where the same variety was used for all 3 years, the calibration of the initial potato model was based on the first 2 years using the last...... for adaptation of the Daisy model to new potato varieties or for the improvement of the existing parameter set. The procedure is then, as a starting point, to focus the calibration process on the recommended list of parameters to change. We demonstrate this approach by showing the procedure for recalibrating...

  13. Longitudinal data analyses using linear mixed models in SPSS: concepts, procedures and illustrations.

    Science.gov (United States)

    Shek, Daniel T L; Ma, Cecilia M S

    2011-01-05

    Although different methods are available for the analyses of longitudinal data, analyses based on generalized linear models (GLM) are criticized as violating the assumption of independence of observations. Alternatively, linear mixed models (LMM) are commonly used to understand changes in human behavior over time. In this paper, the basic concepts surrounding LMM (or hierarchical linear models) are outlined. Although SPSS is a statistical analyses package commonly used by researchers, documentation on LMM procedures in SPSS is not thorough or user friendly. With reference to this limitation, the related procedures for performing analyses based on LMM in SPSS are described. To demonstrate the application of LMM analyses in SPSS, findings based on six waves of data collected in the Project P.A.T.H.S. (Positive Adolescent Training through Holistic Social Programmes) in Hong Kong are presented.

  14. A Procedure for Modeling Photovoltaic Arrays under Any Configuration and Shading Conditions

    Directory of Open Access Journals (Sweden)

    Daniel Gonzalez Montoya

    2018-03-01

Full Text Available Photovoltaic (PV) arrays can be connected following regular or irregular connection patterns to form regular configurations (e.g., series-parallel, total cross-tied, bridge-linked, etc.) or irregular configurations, respectively. Several reported works propose models for a single configuration; hence, evaluating arrays with different configurations is a considerably time-consuming task. Moreover, if the PV array adopts an irregular configuration, the classical models cannot be used for its analysis. This paper proposes a modeling procedure for PV arrays connected in any configuration and operating under uniform or partial shading conditions. The procedure divides the array into smaller arrays, named sub-arrays, which can be independently solved. The modeling procedure selects the mesh-current solution or the node-voltage solution depending on the topology of each sub-array. Therefore, the proposed approach analyzes the PV array using the least number of nonlinear equations. The proposed solution is validated through simulation and experimental results, which demonstrate the proposed model's capacity to reproduce the electrical behavior of PV arrays connected in any configuration.
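At the heart of any such procedure is the implicit single-diode equation of each module, which must be solved numerically at every operating point. A minimal sketch, with illustrative parameters and plain bisection standing in for whatever solver the paper's method uses:

```python
# Solving one module's implicit single-diode equation
#   I = Iph - I0 * (exp((V + I*Rs) / (n*Vt)) - 1)
# for the current I at a given voltage V, by bisection on the monotone
# residual. All parameter values are illustrative, not from the paper.
import math

def module_current(v, iph=8.0, i0=1e-9, rs=0.2, n_vt=1.8):
    def residual(i):
        return iph - i0 * (math.exp((v + i * rs) / n_vt) - 1.0) - i
    lo, hi = 0.0, iph + 1.0        # current is bracketed in [0, Iph + 1]
    for _ in range(80):            # residual decreases monotonically in i
        mid = 0.5 * (lo + hi)
        if residual(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

i_sc = module_current(0.0)         # short-circuit current, close to Iph
i_mid = module_current(35.0)       # current drops as voltage rises
print(round(i_sc, 2), round(i_mid, 2))
```

A full array model couples many such equations through the chosen mesh-current or node-voltage formulation; solving each sub-array independently keeps the nonlinear system small.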

  15. Visual perception of procedural textures: identifying perceptual dimensions and predicting generation models.

    Directory of Open Access Journals (Sweden)

    Jun Liu

Full Text Available Procedural models are widely used in computer graphics for generating realistic, natural-looking textures. However, these mathematical models are not perceptually meaningful, whereas the users, such as artists and designers, would prefer to make descriptions using intuitive and perceptual characteristics like "repetitive," "directional," "structured," and so on. To bridge this gap, we investigated the perceptual dimensions of textures generated by a collection of procedural models. Two psychophysical experiments were conducted: free-grouping and rating. We applied Hierarchical Cluster Analysis (HCA) and Singular Value Decomposition (SVD) to discover the perceptual features used by the observers in grouping similar textures. The results suggested that existing dimensions in the literature cannot accommodate random textures. We therefore utilized isometric feature mapping (Isomap) to establish a three-dimensional perceptual texture space which better explains the features used by humans in texture similarity judgment. Finally, we proposed computational models to map perceptual features to the perceptual texture space, which can suggest a procedural model to produce textures according to user-defined perceptual scales.

  16. CANFIS: A non-linear regression procedure to produce statistical air-quality forecast models

    Energy Technology Data Exchange (ETDEWEB)

    Burrows, W.R.; Montpetit, J. [Environment Canada, Downsview, Ontario (Canada). Meteorological Research Branch; Pudykiewicz, J. [Environment Canada, Dorval, Quebec (Canada)

    1997-12-31

Statistical models for forecasts of environmental variables can provide a good trade-off between significance and precision in return for substantial savings in computer execution time. Recent non-linear regression techniques give significantly increased accuracy compared to traditional linear regression methods. Two are Classification and Regression Trees (CART) and the Neuro-Fuzzy Inference System (NFIS). Both can model predictand distributions, including the tails, with much better accuracy than linear regression. Given a learning data set of matched predictands and predictors, CART regression produces a non-linear, tree-based, piecewise-continuous model of the predictand data. Its variance-minimizing procedure optimizes the task of predictor selection, often greatly reducing initial data dimensionality. NFIS reduces dimensionality by a procedure known as subtractive clustering, but it does not of itself eliminate predictors. Overlapping coverage in predictor space is enhanced by NFIS with a Gaussian membership function for each cluster component. Coefficients for a continuous response model based on the fuzzified cluster centers are obtained by a least-squares estimation procedure. CANFIS is a two-stage data-modeling technique that combines the strength of CART in optimizing the process of selecting predictors from a large pool of potential predictors with the modeling strength of NFIS. A CANFIS model requires negligible computer time to run. CANFIS models for ground-level O3, particulates, and other pollutants will be produced for each of about 100 Canadian sites. The air-quality models will run twice daily, using a small number of predictors isolated from a large pool of upstream and local Lagrangian potential predictors.
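CART's variance-minimizing predictor selection can be sketched as a single-split regression stump: try every threshold of every predictor and keep the split that most reduces the summed squared error. The toy data below are invented for illustration; this is not the CANFIS code.

```python
# One CART-style split: choose the (predictor, threshold) pair that
# minimizes within-node squared error. Toy data, illustrative only.

def sse(ys):
    """Sum of squared errors about the mean (node 'variance' cost)."""
    if not ys:
        return 0.0
    m = sum(ys) / len(ys)
    return sum((y - m) ** 2 for y in ys)

def best_split(xs, y):
    """xs: list of predictor columns; returns (predictor_index, threshold)."""
    base, best = sse(y), None
    for j, col in enumerate(xs):
        for t in sorted(set(col))[:-1]:          # candidate thresholds
            left = [yi for xi, yi in zip(col, y) if xi <= t]
            right = [yi for xi, yi in zip(col, y) if xi > t]
            cost = sse(left) + sse(right)
            if cost < base:
                base, best = cost, (j, t)
    return best

# Predictor 0 is informative, predictor 1 is pure noise.
x0 = [1, 2, 3, 10, 11, 12]
x1 = [5, 1, 4, 2, 6, 3]
y = [1.0, 1.1, 0.9, 5.0, 5.2, 4.9]
print(best_split([x0, x1], y))  # → (0, 3)
```

Grown recursively, such splits concentrate on the informative predictors and ignore the noise column, which is the dimensionality-reduction behavior the abstract attributes to CART.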

  17. An updated 18S rRNA phylogeny of tunicates based on mixture and secondary structure models

    Directory of Open Access Journals (Sweden)

    Shenkar Noa

    2009-08-01

Full Text Available Abstract Background Tunicates have been recently revealed to be the closest living relatives of vertebrates. Yet, with more than 2500 described species, details of their evolutionary history are still obscure. From a molecular point of view, tunicate phylogenetic relationships have been mostly studied based on analyses of 18S rRNA sequences, which indicate several major clades at odds with the traditional class-level arrangements. Nonetheless, substantial uncertainty remains about the phylogenetic relationships and taxonomic status of key groups such as the Aplousobranchia, Appendicularia, and Thaliacea. Results Thirty new complete 18S rRNA sequences were acquired from previously unsampled tunicate species, with special focus on groups presenting high evolutionary rate. The updated 18S rRNA dataset has been aligned with respect to the constraint on homology imposed by the rRNA secondary structure. A probabilistic framework of phylogenetic reconstruction was adopted to accommodate the particular evolutionary dynamics of this ribosomal marker. Detailed Bayesian analyses were conducted under the non-parametric CAT mixture model accounting for site-specific heterogeneity of the evolutionary process, and under RNA-specific doublet models accommodating the occurrence of compensatory substitutions in stem regions. Our results support the division of tunicates into three major clades: (1) Phlebobranchia + Thaliacea + Aplousobranchia, (2) Appendicularia, and (3) Stolidobranchia, but the position of Appendicularia could not be firmly resolved. Our study additionally reveals that most Aplousobranchia evolve at extremely high rates involving changes in secondary structure of their 18S rRNA, with the exception of the family Clavelinidae, which appears to be slowly evolving. This extreme rate heterogeneity precluded resolving with certainty the exact phylogenetic placement of Aplousobranchia. Finally, the best fitting secondary-structure and CAT-mixture models

  18. 2017 publication guidelines for structural modelling of small-angle scattering data from biomolecules in solution: an update.

    Science.gov (United States)

    Trewhella, Jill; Duff, Anthony P; Durand, Dominique; Gabel, Frank; Guss, J Mitchell; Hendrickson, Wayne A; Hura, Greg L; Jacques, David A; Kirby, Nigel M; Kwan, Ann H; Pérez, Javier; Pollack, Lois; Ryan, Timothy M; Sali, Andrej; Schneidman-Duhovny, Dina; Schwede, Torsten; Svergun, Dmitri I; Sugiyama, Masaaki; Tainer, John A; Vachette, Patrice; Westbrook, John; Whitten, Andrew E

    2017-09-01

    In 2012, preliminary guidelines were published addressing sample quality, data acquisition and reduction, presentation of scattering data and validation, and modelling for biomolecular small-angle scattering (SAS) experiments. Biomolecular SAS has since continued to grow and authors have increasingly adopted the preliminary guidelines. In parallel, integrative/hybrid determination of biomolecular structures is a rapidly growing field that is expanding the scope of structural biology. For SAS to contribute maximally to this field, it is essential to ensure open access to the information required for evaluation of the quality of SAS samples and data, as well as the validity of SAS-based structural models. To this end, the preliminary guidelines for data presentation in a publication are reviewed and updated, and the deposition of data and associated models in a public archive is recommended. These guidelines and recommendations have been prepared in consultation with the members of the International Union of Crystallography (IUCr) Small-Angle Scattering and Journals Commissions, the Worldwide Protein Data Bank (wwPDB) Small-Angle Scattering Validation Task Force and additional experts in the field.

  19. A Two-Dimensional Modeling Procedure to Estimate the Loss Equivalent Resistance Including the Saturation Effect

    Directory of Open Access Journals (Sweden)

    Rosa Ana Salas

    2013-11-01

Full Text Available We propose a modeling procedure specifically designed for a ferrite inductor excited by a waveform in time domain. We estimate the loss resistance in the core (a parameter of the electrical model of the inductor) by means of a Finite Element Method in 2D, which leads to significant computational advantages over the 3D model. The methodology is validated for an RM (rectangular modulus) ferrite core working in the linear and the saturation regions. Excellent agreement is found between the experimental data and the computational results.

  20. Visualizing the Impact of Art: An Update and Comparison of Current Psychological Models of Art Experience

    Science.gov (United States)

    Pelowski, Matthew; Markey, Patrick S.; Lauring, Jon O.; Leder, Helmut

    2016-01-01

    The last decade has witnessed a renaissance of empirical and psychological approaches to art study, especially regarding cognitive models of art processing experience. This new emphasis on modeling has often become the basis for our theoretical understanding of human interaction with art. Models also often define areas of focus and hypotheses for new empirical research, and are increasingly important for connecting psychological theory to discussions of the brain. However, models are often made by different researchers, with quite different emphases or visual styles. Inputs and psychological outcomes may be differently considered, or can be under-reported with regards to key functional components. Thus, we may lose the major theoretical improvements and ability for comparison that can be had with models. To begin addressing this, this paper presents a theoretical assessment, comparison, and new articulation of a selection of key contemporary cognitive or information-processing-based approaches detailing the mechanisms underlying the viewing of art. We review six major models in contemporary psychological aesthetics. We in turn present redesigns of these models using a unified visual form, in some cases making additions or creating new models where none had previously existed. We also frame these approaches in respect to their targeted outputs (e.g., emotion, appraisal, physiological reaction) and their strengths within a more general framework of early, intermediate, and later processing stages. This is used as a basis for general comparison and discussion of implications and future directions for modeling, and for theoretically understanding our engagement with visual art. PMID:27199697

  1. Semantic Modeling of Administrative Procedures from a Spanish Regional Public Administration

    Directory of Open Access Journals (Sweden)

    Francisco José Hidalgo López

    2018-02-01

Full Text Available Over the past few years, Public Administrations have been providing systems for the electronic processing of procedures and files to ensure compliance with regulations and provide public services to citizens. Although each administration provides similar services to its citizens, these systems usually differ from the internal information management point of view, since they usually come from different products and manufacturers. The common framework that regulations demand, and that Public Administrations must respect when processing electronic files, provides a unique opportunity for the development of intelligent agents in the field of administrative processes. However, for this development to be truly effective and applicable to the public sector, it is necessary to have a common representation model for these administrative processes. Although a lot of work has already been done in the development of public information reuse initiatives and common vocabularies standardization, this has not been carried out at the processes level. In this paper, we propose a semantic representation model of both process models and processes for Public Administrations: the administrative procedures and files. The goal is to improve public administration open data initiatives and help to develop their sustainability policies, such as improving decision-making procedures and administrative management sustainability. As a case study, we modelled public administrative processes and files in collaboration with a Regional Public Administration in Spain, the Principality of Asturias, which enabled access to its information systems, helping the evaluation of our approach.

  2. Guidelines for developing and updating Bayesian belief networks applied to ecological modeling and conservation.

    Science.gov (United States)

    B.G. Marcot; J.D. Steventon; G.D. Sutherland; R.K. McCann

    2006-01-01

    We provide practical guidelines for developing, testing, and revising Bayesian belief networks (BBNs). Primary steps in this process include creating influence diagrams of the hypothesized "causal web" of key factors affecting a species or ecological outcome of interest; developing a first, alpha-level BBN model from the influence diagram; revising the model...
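As a toy illustration of the kind of "causal web" these guidelines start from, the sketch below encodes a two-parent Bayesian belief network for a species-occupancy outcome and computes marginals by enumeration; every node, state, and probability is invented for illustration:

```python
import itertools

# Hypothetical two-factor causal web: habitat quality and climate jointly
# drive a species-occupancy outcome (all CPT numbers are invented)
p_habitat = {"good": 0.6, "poor": 0.4}
p_climate = {"wet": 0.7, "dry": 0.3}
p_occupied = {  # P(occupied | habitat, climate)
    ("good", "wet"): 0.9, ("good", "dry"): 0.5,
    ("poor", "wet"): 0.4, ("poor", "dry"): 0.1,
}

def prob_occupied(evidence=None):
    """P(occupied | evidence), by enumeration over unobserved parents."""
    evidence = evidence or {}
    num = den = 0.0
    for h, c in itertools.product(p_habitat, p_climate):
        # Skip parent configurations inconsistent with the evidence
        if evidence.get("habitat", h) != h or evidence.get("climate", c) != c:
            continue
        w = p_habitat[h] * p_climate[c]
        num += w * p_occupied[(h, c)]
        den += w
    return num / den

print(round(prob_occupied(), 3))                     # prior marginal
print(round(prob_occupied({"habitat": "good"}), 3))  # updated by evidence
```

Entering evidence at one parent node and watching the occupancy marginal move is exactly the kind of behavior an alpha-level BBN is tested against before expert revision.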

  3. Update on Small Modular Reactors Dynamics System Modeling Tool -- Molten Salt Cooled Architecture

    Energy Technology Data Exchange (ETDEWEB)

    Hale, Richard Edward [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Cetiner, Sacit M. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Fugate, David L. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Qualls, A L. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Borum, Robert C. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Chaleff, Ethan S. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Rogerson, Doug W. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Batteh, John J. [Modelon Corporation (Sweden); Tiller, Michael M. [Xogeny Corporation, Canton, MI (United States)

    2014-08-01

    The Small Modular Reactor (SMR) Dynamic System Modeling Tool project is in the third year of development. The project is designed to support collaborative modeling and study of various advanced SMR (non-light water cooled) concepts, including the use of multiple coupled reactors at a single site. The objective of the project is to provide a common simulation environment and baseline modeling resources to facilitate rapid development of dynamic advanced reactor SMR models, ensure consistency among research products within the Instrumentation, Controls, and Human-Machine Interface (ICHMI) technical area, and leverage cross-cutting capabilities while minimizing duplication of effort. The combined simulation environment and suite of models are identified as the Modular Dynamic SIMulation (MoDSIM) tool. The critical elements of this effort include (1) defining a standardized, common simulation environment that can be applied throughout the program, (2) developing a library of baseline component modules that can be assembled into full plant models using existing geometry and thermal-hydraulic data, (3) defining modeling conventions for interconnecting component models, and (4) establishing user interfaces and support tools to facilitate simulation development (i.e., configuration and parameterization), execution, and results display and capture.

  4. An updated prediction model of the global risk of cardiovascular disease in HIV-positive persons

    DEFF Research Database (Denmark)

    Friis-Møller, Nina; Nielsen, Lene Ryom; Smith, Colette

    2016-01-01

    ,663 HIV-positive persons from 20 countries in Europe and Australia, who were free of CVD at entry into the Data-collection on Adverse Effects of Anti-HIV Drugs (D:A:D) study. Cox regression models (full and reduced) were developed that predict the risk of a global CVD endpoint. The predictive performance...... of the D:A:D models were compared with a recent CVD prediction model from the Framingham study, which was assessed recalibrated to the D:A:D dataset. A total of 1010 CVD events occurred during 186,364.5 person-years. The full D:A:D CVD prediction model included age, gender, systolic blood pressure, smoking...... significantly predicted risk more accurately than the recalibrated Framingham model (Harrell's c-statistic of 0.791, 0.783 and 0.766 for the D:A:D full, D:A:D reduced, and Framingham models respectively; p models also more accurately predicted five-year CVD-risk for key prognostic subgroups...
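The record above ranks models by Harrell's c-statistic. A minimal sketch of that concordance measure, on invented survival data, with tied event times ignored for brevity:

```python
def harrells_c(time, event, risk):
    """Concordance: among comparable pairs, the fraction where the subject
    with the earlier observed event also has the higher predicted risk."""
    conc = ties = n = 0
    m = len(time)
    for i in range(m):
        if not event[i]:
            continue  # only subjects with an observed event anchor a pair
        for j in range(m):
            if time[j] > time[i]:  # j outlived i's event time
                n += 1
                if risk[i] > risk[j]:
                    conc += 1
                elif risk[i] == risk[j]:
                    ties += 1
    return (conc + 0.5 * ties) / n

# Hypothetical follow-up times (years), event flags, and model risk scores
time = [2.0, 5.0, 1.0, 8.0, 3.0]
event = [1, 0, 1, 0, 1]
risk = [0.7, 0.3, 0.9, 0.1, 0.5]
print(harrells_c(time, event, risk))
```

A c-statistic of 0.5 is chance-level discrimination and 1.0 is perfect ordering, which is why differences such as 0.791 vs. 0.766 in the record indicate the D:A:D model discriminates better than the recalibrated Framingham model.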

  5. A Perspective On Procedural Modeling Based On Structural Analysis = Una perspectiva sobre modelado procedural basado en analisis estructurales

    OpenAIRE

    Fita, Josep Lluís; Besuievsky, Gonzalo; Patow, Gustavo

    2017-01-01

With the steady growth of computational capabilities, structural analysis has become a key tool with which art historians, curators, and other specialists evaluate the study and conservation of historic buildings. Meanwhile, the flourishing field of procedural modeling has delivered interesting advances for the reconstruction of inaccessible buildings and urban structures. However, ...

  6. Net Metering and Interconnection Procedures-- Incorporating Best Practices

    Energy Technology Data Exchange (ETDEWEB)

    Jason Keyes, Kevin Fox, Joseph Wiedman, Staff at North Carolina Solar Center

    2009-04-01

    State utility commissions and utilities themselves are actively developing and revising their procedures for the interconnection and net metering of distributed generation. However, the procedures most often used by regulators and utilities as models have not been updated in the past three years, in which time most of the distributed solar facilities in the United States have been installed. In that period, the Interstate Renewable Energy Council (IREC) has been a participant in more than thirty state utility commission rulemakings regarding interconnection and net metering of distributed generation. With the knowledge gained from this experience, IREC has updated its model procedures to incorporate current best practices. This paper presents the most significant changes made to IREC’s model interconnection and net metering procedures.

  7. Update for nurse anesthetists. The Starling resistor: a model for explaining and treating obstructive sleep apnea.

    Science.gov (United States)

    Stalford, Catherine B

    2004-04-01

    Recent epidemiological research places the incidence of obstructive sleep apnea as high as 16% in the general population. Serious postoperative respiratory complications and death have been reported in this population. Anesthetic drugs contribute to these complications secondary to acute and residual influences on the complex orchestration of airway muscles and reflexes involved in airway patency. The Starling resistor model is a theoretical model that has application in explaining upper airway dynamics and the treatment and management of obstructive sleep apnea. The model postulates the oropharynx as a collapsible tube. The oropharynx remains open or partially or completely closed as a result of pressure upstream at the nose and mouth, pressure downstream at the trachea and below, or tissue pressure surrounding the oropharynx. This AANA Journal course provides an overview of the Starling resistor model, its application to obstructive sleep apnea, and preoperative and postoperative anesthetic considerations.
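The three pressure regimes the Starling resistor model postulates can be sketched as a small function; the linear conductance and the specific pressures below are illustrative assumptions, not clinical values:

```python
def starling_flow(p_up, p_down, p_tissue, conductance=1.0):
    """Flow through a collapsible tube (Starling resistor), by regime:
    - patent tube:      flow driven by p_up - p_down
    - flow limitation:  tube partially collapses; downstream pressure is
                        decoupled and p_up - p_tissue drives flow
    - collapsed tube:   upstream pressure cannot hold the tube open
    (the linear conductance is an illustrative simplification)"""
    if p_tissue >= p_up:
        return 0.0                              # collapsed oropharynx: apnea
    if p_tissue > p_down:
        return conductance * (p_up - p_tissue)  # partial collapse: hypopnea
    return conductance * (p_up - p_down)        # patent airway

# CPAP-like intervention: raising upstream pressure reopens the airway
assert starling_flow(p_up=0.0, p_down=-5.0, p_tissue=2.0) == 0.0
assert starling_flow(p_up=8.0, p_down=-5.0, p_tissue=2.0) > 0.0
```

The last two lines mimic the clinical logic in the course: with tissue pressure exceeding nasal pressure the tube closes, and adding positive upstream pressure restores flow.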

  8. Model Updating and Uncertainty Management for Aircraft Prognostic Systems, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — This proposal addresses the integration of physics-based damage propagation models with diagnostic measures of current state of health in a mathematically rigorous...

  9. Experimental Validation of Model Updating and Damage Detection via Eigenvalue Sensitivity Methods with Artificial Boundary Conditions

    Science.gov (United States)

    2017-09-01

and its FEM can be related to the loss in flexural rigidity, as it is usually assumed that mass modeling is correct. This indicator allows...shaker added an unacceptably high mass percentage to the test article, which could potentially corrupt results. As can be seen in Figure 14, the shaker

  10. Baseline groundwater model update for p-area groundwater operable unit, NBN

    Energy Technology Data Exchange (ETDEWEB)

    Ross, J. [Savannah River Site (SRS), Aiken, SC (United States); Amidon, M. [Savannah River Site (SRS), Aiken, SC (United States)

    2015-09-01

    This report documents the development of a numerical groundwater flow and transport model of the hydrogeologic system of the P-Area Reactor Groundwater Operable Unit at the Savannah River Site (SRS) (Figure 1-1). The P-Area model provides a tool to aid in understanding the hydrologic and geochemical processes that control the development and migration of the current tritium, tetrachloroethene (PCE), and trichloroethene (TCE) plumes in this region.

  11. The Stochastic Quasi-chemical Model for Bacterial Growth: Variational Bayesian Parameter Update

    Science.gov (United States)

    Tsilifis, Panagiotis; Browning, William J.; Wood, Thomas E.; Newton, Paul K.; Ghanem, Roger G.

    2018-02-01

    We develop Bayesian methodologies for constructing and estimating a stochastic quasi-chemical model (QCM) for bacterial growth. The deterministic QCM, described as a nonlinear system of ODEs, is treated as a dynamical system with random parameters, and a variational approach is used to approximate their probability distributions and explore the propagation of uncertainty through the model. The approach consists of approximating the parameters' posterior distribution by a probability measure chosen from a parametric family, through minimization of their Kullback-Leibler divergence.
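The variational step described above, choosing q from a parametric family by minimizing the KL divergence to the posterior, can be sketched for a one-dimensional case; the Gamma "posterior", the Gaussian family, and the grid quadrature are all stand-ins for the paper's actual QCM setup:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import gamma, norm

# Hypothetical target posterior for a positive growth-rate parameter
target = gamma(a=5.0, scale=1.0)

x = np.linspace(0.05, 20.0, 4000)  # quadrature grid
dx = x[1] - x[0]
p = target.pdf(x)

def kl_q_to_p(params):
    """KL(q || p) for a Gaussian q(mu, sigma), by grid quadrature."""
    mu, log_sigma = params
    q = norm(loc=mu, scale=np.exp(log_sigma)).pdf(x)
    mask = q > 1e-12  # avoid log(0) in the negligible tails
    return np.sum(q[mask] * (np.log(q[mask]) - np.log(p[mask]))) * dx

res = minimize(kl_q_to_p, x0=[3.0, 0.0], method="Nelder-Mead")
mu_opt, sigma_opt = res.x[0], np.exp(res.x[1])
print(mu_opt, sigma_opt)
```

The optimized Gaussian lands near the bulk of the Gamma density (mean 5, mode 4), showing how a tractable parametric family can stand in for an intractable posterior when propagating parameter uncertainty through a dynamical model.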

  12. Updating the NASA LEO Orbital Debris Environment Model with Recent Radar and Optical Observations and in Situ Measurements

    Science.gov (United States)

    Liou, J.-C.; Anz-Meador, P.; Matney, M. J.; Kessler, D. J.; Theall, J.; Johnson, N. L.

    2000-01-01

    The Low Earth Orbit (LEO, between 200 and 2000 km altitudes) debris environment has been constantly measured by NASA Johnson Space Center's Liquid Mirror Telescope (LMT) since 1996 (Africano et al. 1999, NASA JSC-28826) and by Haystack and Haystack Auxiliary radars at MIT Lincoln Laboratory since 1990 (Settecerri et al. 1999, NASA JSC-28744). Debris particles as small as 3 mm can be detected by the radars and as small as 3 cm can be measured by LMT. Objects about 10 cm in diameter and greater are tracked and catalogued by the US Space Surveillance Network. Much smaller (down to several micrometers) natural and debris particle populations can be estimated based on in situ measurements, such as Long Duration Exposure Facility, and based on analyses of returned surfaces, such as Hubble Space Telescope solar arrays, European Retrievable Carrier, and Space Shuttles. To increase our understanding of the current LEO debris environment, the Orbital Debris Program Office at NASA JSC has initiated an effort to improve and update the ORDEM96 model (Kessler et al. 1996, NASA TM-104825) utilizing the recently available data. This paper gives an overview of the new NASA orbital debris engineering model, ORDEM2000.

  13. Prediction of Placental Barrier Permeability: A Model Based on Partial Least Squares Variable Selection Procedure

    Directory of Open Access Journals (Sweden)

    Yong-Hong Zhang

    2015-05-01

Full Text Available Assessing the human placental barrier permeability of drugs is very important to guarantee drug safety during pregnancy. The quantitative structure–activity relationship (QSAR) method was used as an effective assessing tool for the placental transfer study of drugs, while in vitro human placental perfusion is the most widely used method. In this study, the partial least squares (PLS) variable selection and modeling procedure was used to pick out optimal descriptors from a pool of 620 descriptors of 65 compounds and to simultaneously develop a QSAR model between the descriptors and the placental barrier permeability expressed by the clearance indices (CI). The model was subjected to internal validation by cross-validation and y-randomization and to external validation by predicting CI values of 19 compounds. It was shown that the model developed is robust and has a good predictive potential (r2 = 0.9064, RMSE = 0.09, q2 = 0.7323, rp2 = 0.7656, RMSP = 0.14. The mechanistic interpretation of the final model was given by the high variable importance in projection values of descriptors. Using PLS procedure, we can rapidly and effectively select optimal descriptors and thus construct a model with good stability and predictability. This analysis can provide an effective tool for the high-throughput screening of the placental barrier permeability of drugs.

  14. A Numerical Procedure for Model Identifiability Analysis Applied to Enzyme Kinetics

    DEFF Research Database (Denmark)

Van Daele, Timothy; Van Hoey, Stijn; Gernaey, Krist

    2015-01-01

The proper calibration of models describing enzyme kinetics can be quite challenging. In the literature, different procedures are available to calibrate these enzymatic models in an efficient way. However, in most cases the model structure is already decided on prior to the actual calibration...... and Pronzato (1997) and which can be easily set up for any type of model. In this paper the proposed approach is applied to the forward reaction rate of the enzyme kinetics proposed by Shin and Kim (1998). Structural identifiability analysis showed that no local structural model problems were occurring......) identifiability problems. By using the presented approach it is possible to detect potential identifiability problems and avoid pointless calibration (and experimental!) effort....

  15. Improving the Distributed Hydrological Model Performance in Upper Huai River Basin: Using Streamflow Observations to Update the Basin States via the Ensemble Kalman Filter

    Directory of Open Access Journals (Sweden)

    Yongwei Liu

    2016-01-01

    Full Text Available This study investigates the capability of improving the distributed hydrological model performance by assimilating the streamflow observations. Incorrectly estimated model states will lead to discrepancies between the observed and estimated streamflow. Consequently, streamflow observations can be used to update the model states, and the improved model states will eventually benefit the streamflow predictions. This study tests this concept in upper Huai River basin. We assimilate the streamflow observations sequentially into the Soil and Water Assessment Tool (SWAT using the ensemble Kalman filter (EnKF to update the model states. Both synthetic experiments and real data application are used to demonstrate the benefit of this data assimilation scheme. The experiment shows that assimilating the streamflow observations at interior sites significantly improves the streamflow predictions for the whole basin. Assimilating the catchment outlet streamflow improves the streamflow predictions near the catchment outlet. In real data case, the estimated streamflow at the catchment outlet is significantly improved by assimilating the in situ streamflow measurements at interior gauges. Assimilating the in situ catchment outlet streamflow also improves the streamflow prediction of one interior location on the main reach. This may demonstrate that updating model states using streamflow observations can constrain the flux estimates in distributed hydrological modeling.
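The core EnKF analysis step that such a scheme applies at each assimilation time can be sketched for a scalar state; the ensemble size, observation error, and identity observation operator are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(42)

# Forecast ensemble of a single model state (e.g., a soil-water storage)
n_ens = 100
x_f = rng.normal(loc=5.0, scale=1.5, size=n_ens)

obs, obs_err = 10.0, 0.5  # streamflow-like observation (H = identity here)

# Perturbed-observation EnKF analysis step
y = obs + rng.normal(scale=obs_err, size=n_ens)  # perturbed observations
P = np.var(x_f, ddof=1)                          # forecast error variance
K = P / (P + obs_err**2)                         # Kalman gain, 0 < K < 1
x_a = x_f + K * (y - x_f)                        # analysis (updated) ensemble

print(x_f.mean(), x_a.mean())
```

Because the gain weights the observation by the relative confidence in forecast and measurement, the analysis ensemble mean is pulled toward the observation, which is exactly the mechanism by which interior streamflow gauges constrain the distributed model states in the study.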

  16. An updated fracture-flow model for total-system performance assessment of Yucca Mountain

    Energy Technology Data Exchange (ETDEWEB)

    Gauthier, J.H. [Spectra Research Inst., Albuquerque, NM (United States)

    1994-07-01

Improvements have been made to the fracture-flow model being used in the total-system performance assessment of a potential high-level radioactive waste repository at Yucca Mountain, Nevada. The "weeps model" now includes (1) weeps of varied sizes, (2) flow-pattern fluctuations caused by climate change, and (3) flow-pattern perturbations caused by repository heat generation. Comparison with the original weeps model indicates that allowing weeps of varied sizes substantially reduces the number of weeps and the number of containers contacted by weeps. However, flow-pattern perturbations caused by either climate change or repository heat generation greatly increases the number of containers contacted by weeps. In preliminary total-system calculations, using a phenomenological container-failure and radionuclide-release model, the weeps model predicts that radionuclide releases from a high-level radioactive waste repository at Yucca Mountain will be below the EPA standard specified in 40 CFR 191, but that the maximum radiation dose to an individual could be significant. Specific data from the site are required to determine the validity of the weep-flow mechanism and to better determine the parameters to which the dose calculation is sensitive.

  17. An updated fracture-flow model for total-system performance assessment of Yucca Mountain

    International Nuclear Information System (INIS)

    Gauthier, J.H.

    1994-01-01

Improvements have been made to the fracture-flow model being used in the total-system performance assessment of a potential high-level radioactive waste repository at Yucca Mountain, Nevada. The "weeps model" now includes (1) weeps of varied sizes, (2) flow-pattern fluctuations caused by climate change, and (3) flow-pattern perturbations caused by repository heat generation. Comparison with the original weeps model indicates that allowing weeps of varied sizes substantially reduces the number of weeps and the number of containers contacted by weeps. However, flow-pattern perturbations caused by either climate change or repository heat generation greatly increases the number of containers contacted by weeps. In preliminary total-system calculations, using a phenomenological container-failure and radionuclide-release model, the weeps model predicts that radionuclide releases from a high-level radioactive waste repository at Yucca Mountain will be below the EPA standard specified in 40 CFR 191, but that the maximum radiation dose to an individual could be significant. Specific data from the site are required to determine the validity of the weep-flow mechanism and to better determine the parameters to which the dose calculation is sensitive

  18. Animal models for glucocorticoid-induced postmenopausal osteoporosis: An updated review.

    Science.gov (United States)

    Zhang, Zhida; Ren, Hui; Shen, Gengyang; Qiu, Ting; Liang, De; Yang, Zhidong; Yao, Zhensong; Tang, Jingjing; Jiang, Xiaobing; Wei, Qiushi

    2016-12-01

Glucocorticoid-induced postmenopausal osteoporosis (GI-PMOP) is a severe form of osteoporosis carrying a high risk of major osteoporotic fractures. Its severity calls for more extensive basic study, in which suitable animal models are indispensable. However, no review has yet introduced these models systematically. Based on recent studies of GI-PMOP, this brief review introduces GI-PMOP animal models in terms of their establishment and the evaluation of bone mass, and discusses their molecular mechanisms. Rat, rabbit and sheep, each with their respective merits, were chosen. Both direct and indirect evaluation of bone mass help to understand bone metabolism under different interventions. The crucial signaling pathways, miRNAs, osteogenic- or adipogenic-related factors and estrogen level may be the predominant contributors to the development of glucocorticoid-induced postmenopausal osteoporosis. Copyright © 2016 Elsevier Masson SAS. All rights reserved.

  19. An update on single field models of inflation in light of WMAP7

    International Nuclear Information System (INIS)

    Alabidi, Laila; Huston, Ian

    2010-01-01

In this paper we summarise the status of single field models of inflation in light of the WMAP 7 data release. We find little has changed since the 5 year release, and results are consistent with previous findings. The increase in the upper bound on the running of the spectral index impacts on the status of the production of Primordial Black Holes from single field models. The lower bound on f_NL^equi is reduced and thus the bounds on the theoretical parameters of (UV) DBI single brane models are weakened. In the case of multiple coincident branes the bounds are also weakened and the two, three or four brane cases will produce a tensor signal that could possibly be observed in the future.

  20. Output-only identification of civil structures using nonlinear finite element model updating

    Science.gov (United States)

    Ebrahimian, Hamed; Astroza, Rodrigo; Conte, Joel P.

    2015-03-01

This paper presents a novel approach for output-only nonlinear system identification of structures using data recorded during earthquake events. In this approach, state-of-the-art nonlinear structural FE modeling and analysis techniques are combined with a Bayesian inference method to estimate (i) time-invariant parameters governing the nonlinear hysteretic material constitutive models used in the FE model of the structure, and (ii) the time history of the earthquake ground motion. To validate the performance of the proposed framework, the simulated responses of a bridge pier to an earthquake ground motion are polluted with artificial output measurement noise and used to jointly estimate the unknown material parameters and the time history of the earthquake ground motion. This proof-of-concept example illustrates the successful performance of the proposed approach even in the presence of high measurement noise.

  1. The linear nonthreshold (LNT) model as used in radiation protection: an NCRP update.

    Science.gov (United States)

    Boice, John D

    2017-10-01

    The linear nonthreshold (LNT) model has been used in radiation protection for over 40 years and has been hotly debated. It relies heavily on human epidemiology, with support from radiobiology. The scientific underpinnings include NCRP Report No. 136 ('Evaluation of the Linear-Nonthreshold Dose-Response Model for Ionizing Radiation'), UNSCEAR 2000, ICRP Publication 99 (2004) and the National Academies BEIR VII Report (2006). NCRP Scientific Committee 1-25 is reviewing recent epidemiologic studies focusing on dose-response models, including threshold, and the relevance to radiation protection. Recent studies after the BEIR VII Report are being critically reviewed and include atomic-bomb survivors, Mayak workers, atomic veterans, populations on the Techa River, U.S. radiological technologists, the U.S. Million Person Study, international workers (INWORKS), Chernobyl cleanup workers, children given computerized tomography scans, and tuberculosis-fluoroscopy patients. Methodologic limitations, dose uncertainties and statistical approaches (and modeling assumptions) are being systematically evaluated. The review of studies continues and will be published as an NCRP commentary in 2017. Most studies reviewed to date are consistent with a straight-line dose response but there are a few exceptions. In the past, the scientific consensus process has worked in providing practical and prudent guidance. So pragmatic judgment is anticipated. The evaluations are ongoing and the extensive NCRP review process has just begun, so no decisions or recommendations are in stone. The march of science requires a constant assessment of emerging evidence to provide an optimum, though not necessarily perfect, approach to radiation protection. Alternatives to the LNT model may be forthcoming, e.g. an approach that couples the best epidemiology with biologically-based models of carcinogenesis, focusing on chronic (not acute) exposure circumstances. Currently for the practical purposes of

  2. Tornado missile simulation and design methodology. Volume 2: model verification and data base updates. Final report

    International Nuclear Information System (INIS)

    Twisdale, L.A.; Dunn, W.L.

    1981-08-01

    A probabilistic methodology has been developed to predict the probabilities of tornado-propelled missiles impacting and damaging nuclear power plant structures. Mathematical models of each event in the tornado missile hazard have been developed and sequenced to form an integrated, time-history simulation methodology. The models are data-based where feasible. The data include documented records of tornado occurrence, field observations of missile transport, results of wind tunnel experiments, and missile impact tests. Probabilistic Monte Carlo techniques are used to estimate the risk probabilities. The methodology has been encoded in the TORMIS computer code to facilitate numerical analysis and plant-specific tornado missile probability assessments.
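    The Monte Carlo estimation step described above can be illustrated with a toy sketch. Every distribution, transport rule, and threshold below is a hypothetical placeholder for illustration, not a TORMIS model or parameter:

```python
import random

def estimate_impact_probability(n_trials=100_000, seed=42):
    """Toy Monte Carlo estimate of the probability that a wind-borne
    missile both reaches a target and exceeds a damage threshold.
    All distributions and thresholds are illustrative placeholders."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_trials):
        wind_speed = rng.gauss(70.0, 15.0)          # m/s, hypothetical tornado wind
        range_m = 0.8 * wind_speed * rng.random()   # crude transport model
        impact_speed = wind_speed * rng.uniform(0.3, 0.9)
        # count a damaging hit if the missile travels far enough, fast enough
        if range_m > 40.0 and impact_speed > 35.0:
            hits += 1
    return hits / n_trials

p = estimate_impact_probability()
```

    In a real assessment each sampled event would be propagated through the full time-history simulation; the point here is only that the risk probability is the hit fraction over many sampled histories.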

  3. Earth Global Reference Atmospheric Model (GRAM) Overview and Updates: DOLWG Meeting

    Science.gov (United States)

    White, Patrick

    2017-01-01

    What is Earth-GRAM (Global Reference Atmospheric Model): Provides monthly mean and standard deviation for any point in atmosphere - Monthly, Geographic, and Altitude Variation; Earth-GRAM is a C++ software package - Currently distributed as Earth-GRAM 2016; Atmospheric variables included: pressure, density, temperature, horizontal and vertical winds, speed of sound, and atmospheric constituents; Used by engineering community because of ability to create dispersions in atmosphere at a rapid runtime - Often embedded in trajectory simulation software; Not a forecast model; Does not readily capture localized atmospheric effects.

  4. Implications of the Declarative/Procedural Model for Improving Second Language Learning: The Role of Memory Enhancement Techniques

    Science.gov (United States)

    Ullman, Michael T.; Lovelett, Jarrett T.

    2018-01-01

    The declarative/procedural (DP) model posits that the learning, storage, and use of language critically depend on two learning and memory systems in the brain: declarative memory and procedural memory. Thus, on the basis of independent research on the memory systems, the model can generate specific and often novel predictions for language. Till…

  5. An Update on the Conceptual-Production Systems Model of Apraxia: Evidence from Stroke

    Science.gov (United States)

    Stamenova, Vessela; Black, Sandra E.; Roy, Eric A.

    2012-01-01

    Limb apraxia is a neurological disorder characterized by an inability to pantomime and/or imitate gestures. It is more commonly observed after left hemisphere damage (LHD), but has also been reported after right hemisphere damage (RHD). The Conceptual-Production Systems model (Roy, 1996) suggests that three systems are involved in the control of…

  6. Improving prediction models with new markers: A comparison of updating strategies

    NARCIS (Netherlands)

    D. Nieboer (Daan); Y. Vergouwe (Yvonne); D. Ankerst (Donna); M.J. Roobol-Bouts (Monique); E.W. Steyerberg (Ewout)

    2016-01-01

    Background: New markers hold the promise of improving risk prediction for individual patients. We aimed to compare the performance of different strategies to extend a previously developed prediction model with a new marker. Methods: Our motivating example was the extension of a risk

  7. An updated fracture-flow model for total-system performance assessment of Yucca Mountain

    International Nuclear Information System (INIS)

    Gauthier, J.H.

    1994-01-01

    Improvements have been made to the fracture-flow model being used in the total-system performance assessment of a potential high-level radioactive waste repository at Yucca Mountain, Nevada. The "weeps model" now includes (1) weeps of varied sizes, (2) flow-pattern fluctuations caused by climate change, and (3) flow-pattern perturbations caused by repository heat generation. Comparison with the original weeps model indicates that allowing weeps of varied sizes substantially reduces the number of weeps and the number of containers contacted by weeps. However, flow-pattern perturbations caused by either climate change or repository heat generation greatly increase the number of containers contacted by weeps. In preliminary total-system calculations, using a phenomenological container-failure and radionuclide-release model, the weeps model predicts that radionuclide releases from a high-level radioactive waste repository at Yucca Mountain will be below the EPA standard specified in 40 CFR 191, but that the maximum radiation dose to an individual could be significant. Specific data from the site are required to determine the validity of the weep-flow mechanism and to better determine the parameters to which the dose calculation is sensitive.

  8. Real-time resource model updating in continuous mining environment utilizing online sensor data

    NARCIS (Netherlands)

    Yüksel, C.

    2017-01-01

    In mining, modelling of the deposit geology is the basis for many actions to be taken in the future, such as predictions of quality attributes, mineral resources and ore reserves, as well as mine design and long-term production planning. The essential knowledge about the raw material product is based

  9. Prediction model of RSV-hospitalization in late preterm infants: An update and validation study

    NARCIS (Netherlands)

    Korsten, K.; Blanken, M.O.; Nibbelke, E.E.; Moons, K.G.; Bont, L.; Liem, K.D.; et al.,

    2016-01-01

    BACKGROUND: New vaccines and RSV therapeutics have been developed in the past decade. With approval of these new pharmaceuticals on the horizon, new challenges lie ahead in selecting the appropriate target population. We aimed to improve a previously published prediction model for prediction of

  10. Recent updates on the Standard Model Higgs boson measurements from the ATLAS and CMS experiments

    CERN Document Server

    Wang, Song-Ming

    2017-01-01

    This report presents the latest results from the ATLAS and CMS experiments on the measurements of the Standard Model Higgs boson by using the proton-proton collisions produced by the Large Hadron Collider during the first two years of Run 2 data taking.

  11. Modeling the Flyby Anomalies with Dark Matter Scattering: Update with Additional Data and Further Predictions

    Science.gov (United States)

    Adler, Stephen L.

    2013-06-01

    We continue our exploration of whether the flyby anomalies can be explained by scattering of spacecraft nucleons from dark matter gravitationally bound to the Earth, with the addition of data from five new flybys to that from the original six. We continue to use our model in which inelastic and elastic scatterers populate shells generated by the precession of circular orbits with normals tilted with respect to the Earth's axis. With 11 data points and eight parameters in the model, a statistically meaningful fit is obtained with a chi-squared of 2.7. We give plots of the anomalous acceleration along the spacecraft trajectory, and the cumulative velocity change, for the five flybys which exhibit a significant nonzero anomaly. We also discuss implications of the fit for dark matter-nucleon cross-sections, give the prediction of our fit for the anomaly to be expected from the future Juno flyby, and give predictions of our fit for flyby orbit orientation changes. In addition, we give formulas for estimating the flyby temperature increase caused by dark matter inelastic scattering, and for the fraction of flyby nucleons undergoing such scatters. Finally, for circular satellite orbits, we give a table of predicted secular changes in orbit radius. These are much too large to be reasonable — comparing with data for COBE and GP-B supplied to us by Edward Wright (after the first version of this paper was posted), we find that our model predicts changes in orbit radius that are too large by many orders of magnitude. So the model studied here is ruled out. We conclude that further modeling of the flyby anomalies must simultaneously attempt to fit constraints coming from satellite orbits.

  12. Collaborative CAD Synchronization Based on a Symmetric and Consistent Modeling Procedure

    Directory of Open Access Journals (Sweden)

    Yiqi Wu

    2017-04-01

    Full Text Available One basic issue with collaborative computer aided design (Co-CAD is how to maintain valid and consistent modeling results across all design sites. Moreover, modeling history is important in parametric CAD modeling. Therefore, different from a typical co-editing approach, this paper proposes a novel method for Co-CAD synchronization, in which all Co-CAD sites maintain symmetric and consistent operating procedures. Consequently, the consistency of both modeling results and history can be achieved. In order to generate a valid, unique, and symmetric queue among collaborative sites, a set of correlated mechanisms is presented in this paper. Firstly, the causal relationship of operations is maintained. Secondly, the operation queue is reconstructed for partial concurrency operation, and the concurrent operation can be retrieved. Thirdly, a symmetric, concurrent operation control strategy is proposed to determine the order of operations and resolve possible conflicts. Compared with existing Co-CAD consistency methods, the proposed method is convenient and flexible in supporting collaborative design. The experiment performed based on the collaborative modeling procedure demonstrates the correctness and applicability of this work.

  13. Seismic response trends evaluation via long term monitoring and finite element model updating of an RC building including soil-structure interaction

    Science.gov (United States)

    Butt, F.; Omenzetter, P.

    2012-04-01

    This paper presents a study on the seismic response trends evaluation and finite element model updating of a reinforced concrete building monitored for a period of more than two years. The three storey reinforced concrete building is instrumented with five tri-axial accelerometers and a free-field tri-axial accelerometer. The time domain N4SID system identification technique was used to obtain the frequencies and damping ratios considering flexible base models taking into account the soil-structure-interaction (SSI) using 50 earthquakes. Trends of variation of seismic response were developed by correlating the peak response acceleration at the roof level with identified frequencies and damping ratios. A general trend of decreasing frequencies was observed with increased level of shaking. To simulate the behavior of the building, a three dimensional finite element model (FEM) was developed. To incorporate real in-situ conditions, soil underneath the foundation and around the building was modeled using spring elements and non-structural components (claddings and partitions) were also included. The developed FEM was then calibrated using a sensitivity based model updating technique taking into account soil flexibility and non-structural components as updating parameters. It was concluded from the investigation that knowledge of the variation of seismic response of buildings is necessary to better understand their behavior during earthquakes, and that soil and non-structural components contribute significantly to the seismic response of the building and should therefore be included in models to simulate its real behavior.
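    Sensitivity-based updating procedures like the one described typically score the agreement between measured and simulated mode shapes with the Modal Assurance Criterion (MAC), which the optimizer then drives toward unity. A minimal sketch with synthetic mode-shape vectors (not the building's data):

```python
import numpy as np

def mac(phi_a, phi_b):
    """Modal Assurance Criterion between two real mode-shape vectors:
    MAC = |phi_a . phi_b|^2 / ((phi_a . phi_a) * (phi_b . phi_b)).
    Equals 1 for identical (up to scale) shapes, 0 for orthogonal ones."""
    num = np.abs(phi_a @ phi_b) ** 2
    den = (phi_a @ phi_a) * (phi_b @ phi_b)
    return num / den

# MAC is invariant to mode-shape scaling, so identified and simulated
# shapes need not share a normalization
phi_meas = np.array([0.2, 0.5, 1.0])
assert np.isclose(mac(phi_meas, 3.0 * phi_meas), 1.0)
```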

  14. Worst case prediction of additives migration from polystyrene for food safety purposes: a model update

    DEFF Research Database (Denmark)

    Martinez Lopez, Brais; Gontard, Nathalie; Peyron, Stephane

    2018-01-01

    A reliable prediction of migration levels of plastic additives into food requires a robust estimation of diffusivity. Predictive modelling of diffusivity as recommended by the EU commission is carried out using a semi-empirical equation that relies on two polymer-dependent parameters. These parameters were determined for the polymers most used by the packaging industry (LLDPE, HDPE, PP, PET, PS, HIPS) from the diffusivity data available at that time. In the specific case of general purpose polystyrene, the diffusivity data published since then shows that the use of the equation with the original...
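    The semi-empirical equation recommended for migration modelling is usually quoted in the Piringer form; a sketch under that assumption, with placeholder polymer constants A'_P and tau rather than the updated polystyrene values the study derives:

```python
import math

def piringer_diffusivity(M_r, T, Ap_prime, tau):
    """Worst-case diffusion coefficient (cm^2/s) in the commonly cited
    Piringer form: D = 1e4 * exp(A_P - 0.1351*M_r**(2/3) + 0.003*M_r
    - 10454/T), with A_P = A'_P - tau/T.
    M_r: migrant molar mass (g/mol); T: temperature (K).
    Ap_prime and tau are the two polymer-dependent parameters; the
    values used below are placeholders, not regulatory constants."""
    A_P = Ap_prime - tau / T
    return 1e4 * math.exp(A_P - 0.1351 * M_r ** (2.0 / 3.0)
                          + 0.003 * M_r - 10454.0 / T)

# hypothetical additive of molar mass 400 g/mol at 40 degrees C
D = piringer_diffusivity(M_r=400.0, T=313.15, Ap_prime=0.0, tau=0.0)
```

    The exponential dependence on 1/T and on molar mass is what makes the two polymer parameters so influential, which is why re-estimating them from newer diffusivity data matters.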

  15. The STR/ort mouse model of spontaneous osteoarthritis - an update.

    Science.gov (United States)

    Staines, K A; Poulet, B; Wentworth, D N; Pitsillides, A A

    2017-06-01

    Osteoarthritis is a degenerative joint disease and a world-wide healthcare burden. Characterized by cartilage degradation, subchondral bone thickening and osteophyte formation, osteoarthritis inflicts much pain and suffering, for which there are currently no disease-modifying treatments available. Mouse models of osteoarthritis are proving critical in advancing our understanding of the underpinning molecular mechanisms. The STR/ort mouse is a well-recognized model which develops a natural form of osteoarthritis very similar to the human disease. In this Review we discuss the use of the STR/ort mouse in understanding this multifactorial disease with an emphasis on recent advances in its genetics and its bone, endochondral and immune phenotypes. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.

  16. Business models for renewable energy in the built environment. Updated version

    Energy Technology Data Exchange (ETDEWEB)

    Wuertenberger, L.; Menkveld, M.; Vethman, P.; Van Tilburg, X. [ECN Policy Studies, Amsterdam (Netherlands); Bleyl, J.W. [Energetic Solutions, Graz (Austria)

    2012-04-15

    The project RE-BIZZ aims to provide insight to policy makers and market actors in the way new and innovative business models (and/or policy measures) can stimulate the deployment of renewable energy technologies (RET) and energy efficiency (EE) measures in the built environment. The project is initiated and funded by the IEA Implementing Agreement for Renewable Energy Technology Deployment (IEA-RETD). It analysed ten business models in three categories (amongst others different types of Energy Service Companies (ESCOs), Developing properties certified with a 'green' building label, Building owners profiting from rent increases after EE measures, Property Assessed Clean Energy (PACE) financing, On-bill financing, and Leasing of RET equipment) including their organisational and financial structure, the existing market and policy context, and an analysis of Strengths, Weaknesses, Opportunities and Threats (SWOT). The study concludes with recommendations for policy makers and other market actors.

  17. An updated program-controlled analog processor, model AP-006, for semiconductor detector spectrometers

    International Nuclear Information System (INIS)

    Shkola, N.F.; Shevchenko, Yu.A.

    1989-01-01

    An analog processor, model AP-006, is reported. The processor is a development of a series of spectrometric units based on a shaper of the type 'DL dif +TVS+gated ideal integrator'. Structural and circuit design features are described. The results of testing the processor in a setup with a Si(Li) detecting unit over an input count-rate range of up to 5x10^5 cps are presented. Processor applications are illustrated. (orig.)

  18. LITHO1.0: An Updated Crust and Lithosphere Model of the Earth

    Science.gov (United States)

    2010-09-01


  19. Update on Multi-Variable Parametric Cost Models for Ground and Space Telescopes

    Science.gov (United States)

    Stahl, H. Philip; Henrichs, Todd; Luedtke, Alexander; West, Miranda

    2012-01-01

    Parametric cost models can be used by designers and project managers to perform relative cost comparisons between major architectural cost drivers and allow high-level design trades; enable cost-benefit analysis for technology development investment; and, provide a basis for estimating total project cost between related concepts. This paper reports on recent revisions and improvements to our ground telescope cost model and refinements of our understanding of space telescope cost models. One interesting observation is that while space telescopes are 50X to 100X more expensive than ground telescopes, their respective scaling relationships are similar. Another interesting speculation is that the role of technology development may be different between ground and space telescopes. For ground telescopes, the data indicates that technology development tends to reduce cost by approximately 50% every 20 years. But for space telescopes, there appears to be no such cost reduction because we do not tend to re-fly similar systems. Thus, instead of reducing cost, 20 years of technology development may be required to enable a doubling of space telescope capability. Other findings include: mass should not be used to estimate cost; spacecraft and science instrument costs account for approximately 50% of total mission cost; and, integration and testing accounts for only about 10% of total mission cost.
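    Scaling relationships of the kind compared here are commonly expressed as power laws, cost proportional to aperture raised to some exponent, fitted by least squares in log-log space. A sketch on made-up data (not the paper's dataset; the exponent 1.7 below is an arbitrary choice for the synthetic example):

```python
import numpy as np

# hypothetical (aperture_m, cost) pairs following cost ~ k * D**1.7,
# perturbed by a few percent of multiplicative noise
apertures = np.array([2.0, 4.0, 6.5, 8.0, 10.0])
costs = 3.0 * apertures ** 1.7 * np.array([1.05, 0.97, 1.02, 0.99, 1.01])

# fit log(cost) = log(k) + a * log(D) with a degree-1 polynomial;
# polyfit returns [slope, intercept]
a, log_k = np.polyfit(np.log(apertures), np.log(costs), 1)
k = np.exp(log_k)
```

    Comparing the fitted exponent `a` across ground and space datasets is one way to make the "similar scaling relationships" observation quantitative.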

  20. The value of information for woodland management: Updating a state–transition model

    Science.gov (United States)

    Morris, William K.; Runge, Michael C.; Vesk, Peter A.

    2017-01-01

    Value of information (VOI) analyses reveal the expected benefit of reducing uncertainty to a decision maker. Most ecological VOI analyses have focused on population models rarely addressing more complex community models. We performed a VOI analysis for a complex state–transition model of Box-Ironbark Forest and Woodland management. With three management alternatives (limited harvest/firewood removal (HF), ecological thinning (ET), and no management), managing the system optimally (for 150 yr) with the original information would, on average, increase the amount of forest in a desirable state from 19% to 35% (a 16-percentage point increase). Resolving all uncertainty would, on average, increase the final percentage to 42% (a 19-percentage point increase). However, only resolving the uncertainty for a single parameter was worth almost two-thirds the value of resolving all uncertainty. We found the VOI to depend on the number of management options, increasing as the management flexibility increased. Our analyses show it is more cost-effective to monitor low-density regrowth forest than other states and more cost-effective to experiment with the no-management alternative than the other management alternatives. Importantly, the most cost-effective strategies did not include either the most desired forest states or the least understood management strategy, ET. This implies that managers cannot just rely on intuition to tell them where the most VOI will lie, as critical uncertainties in a complex system are sometimes cryptic.
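    The expected benefit of resolving uncertainty used in analyses like this is, in decision-analytic terms, an expected value of perfect information: EVPI = E_s[max_a V(a, s)] - max_a E_s[V(a, s)]. A toy sketch with hypothetical payoffs (these numbers are illustrative, not the Box-Ironbark model's):

```python
import numpy as np

# rows: management actions; columns: equally likely states of the system
# entries: hypothetical % of forest ending in the desired state
payoff = np.array([
    [35.0, 15.0],   # harvest/firewood removal
    [20.0, 30.0],   # ecological thinning
    [19.0, 19.0],   # no management
])
p_state = np.array([0.5, 0.5])

# commit to one action before learning the true state
best_under_uncertainty = (payoff @ p_state).max()
# learn the state first, then pick the best action for it
best_with_perfect_info = payoff.max(axis=0) @ p_state
evpi = best_with_perfect_info - best_under_uncertainty
```

    Partial VOI for a single parameter is computed the same way, except only that parameter's uncertainty is resolved before choosing, which is how the study identifies which uncertainties are worth monitoring.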

  1. Composite Transport Model and Water and Solute Transport across Plant Roots: An Update

    Directory of Open Access Journals (Sweden)

    Yangmin X. Kim

    2018-02-01

    Full Text Available The present review examines recent experimental findings in root transport phenomena in terms of the composite transport model (CTM. It has been a well-accepted conceptual model to explain the complex water and solute flows across the root that has been related to the composite anatomical structure. There are three parallel pathways involved in the transport of water and solutes in roots – apoplast, symplast, and transcellular paths. The role of aquaporins (AQPs, which facilitate water flows through the transcellular path, and root apoplast is examined in terms of the CTM. The contribution of the plasma membrane bound AQPs for the overall water transport in the whole plant level was varying depending on the plant species, age of roots with varying developmental stages of apoplastic barriers, and driving forces (hydrostatic vs. osmotic. Many studies have demonstrated that the apoplastic barriers, such as Casparian bands in the primary anticlinal walls and suberin lamellae in the secondary cell walls, in the endo- and exodermis are not perfect barriers and unable to completely block the transport of water and some solute transport into the stele. Recent research on water and solute transport of roots with and without exodermis triggered the importance of the extension of conventional CTM adding resistances that arrange in series (epidermis, exodermis, mid-cortex, endodermis, and pericycle. The extension of the model may answer current questions about the applicability of CTM for composite water and solute transport of roots that contain complex anatomical structures with heterogeneous cell layers.

  2. A systematic procedure for the incorporation of common cause events into risk and reliability models

    International Nuclear Information System (INIS)

    Fleming, K.N.; Mosleh, A.; Deremer, R.K.

    1986-01-01

    Common cause events are an important class of dependent events with respect to their contribution to system unavailability and to plant risk. Unfortunately, these events have not been treated with any kind of consistency in applied risk studies over the past decade. Many probabilistic risk assessments (PRA) have not included these events at all, and those that have did not employ the kind of systematic procedures that are needed to achieve consistency, accuracy, and credibility in this area of PRA methodology. In this paper, the authors report on the progress recently made in the development of a systematic approach for incorporating common cause events into applied risk and reliability evaluations. This approach takes advantage of experience from recently completed PRAs and is the result of a project, sponsored by the Electric Power Research Institute (EPRI), in which procedures for dependent events analysis are being developed. Described in this paper is a general framework for system-level common cause failure (CCF) analysis and its application to a three-train auxiliary feedwater system. Within this general framework, three parametric CCF models are compared, including the basic parameter (BP), multiple Greek letter (MGL), and binomial failure rate (BFR) models. Pitfalls of not following the recommended procedure are discussed, and some old issues, such as the benefits of redundancy and diversity, are reexamined. (orig.)
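    Of the three parametric CCF models compared, the multiple Greek letter model maps its conditional parameters to basic-event probabilities; a sketch of the commonly used three-train (m = 3) form, with purely illustrative numerical values rather than the EPRI study's data:

```python
def mgl_three_train(q_total, beta, gamma):
    """Multiple Greek letter (MGL) parameterization for m = 3 redundant
    trains. beta: conditional probability that a component failure is
    shared with at least one other train; gamma: conditional probability
    that a shared failure involves all three trains.
    Returns (Q1, Q2, Q3): probabilities of basic events failing exactly
    1, 2, or 3 specific components, given total per-component
    failure probability q_total."""
    q1 = (1.0 - beta) * q_total
    q2 = 0.5 * beta * (1.0 - gamma) * q_total   # 1/(m-1) = 1/2 for m = 3
    q3 = beta * gamma * q_total
    return q1, q2, q3

# illustrative values only
q1, q2, q3 = mgl_three_train(q_total=1e-3, beta=0.1, gamma=0.3)
```

    A useful consistency check, and the link back to the basic parameter model, is that each component's total failure probability is recovered as Q1 + 2*Q2 + Q3.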

  3. The geothermal energy potential in Denmark - updating the database and new structural and thermal models

    Science.gov (United States)

    Nielsen, Lars Henrik; Sparre Andersen, Morten; Balling, Niels; Boldreel, Lars Ole; Fuchs, Sven; Leth Hjuler, Morten; Kristensen, Lars; Mathiesen, Anders; Olivarius, Mette; Weibel, Rikke

    2017-04-01

    Knowledge of structural, hydraulic and thermal conditions of the subsurface is fundamental for the planning and use of hydrothermal energy. In the framework of a project under the Danish Research program 'Sustainable Energy and Environment' funded by the 'Danish Agency for Science, Technology and Innovation', fundamental geological and geophysical information of importance for the utilization of geothermal energy in Denmark was compiled, analyzed and re-interpreted. A 3D geological model was constructed and used as structural basis for the development of a national subsurface temperature model. In that frame, all available reflection seismic data were interpreted, quality controlled and integrated to improve the regional structural understanding. The analyses and interpretation of available relevant data (i.e. old and new seismic profiles, core and well-log data, literature data) and a new time-depth conversion allowed a consistent correlation of seismic surfaces for whole Denmark and across tectonic features. On this basis, new topologically consistent depth and thickness maps for 16 geological units from the top pre-Zechstein to the surface were drawn. A new 3D structural geological model was developed with special emphasis on potential geothermal reservoirs. The interpretation of petrophysical data (core data and well-logs) allows to evaluate the hydraulic and thermal properties of potential geothermal reservoirs and to develop a parameterized numerical 3D conductive subsurface temperature model. Reservoir properties and quality were estimated by integrating petrography and diagenesis studies with porosity-permeability data. Detailed interpretation of the reservoir quality of the geological formations was made by estimating net reservoir sandstone thickness based on well-log analysis, determination of mineralogy including sediment provenance analysis, and burial history data. 
New local surface heat-flow values (range: 64-84 mW/m2) were determined for the Danish

  4. Procedures: Source Term Measurement Program

    International Nuclear Information System (INIS)

    Dyer, N.C.; Keller, J.H.; Nieschmidt, E.B.; Motes, B.J.

    1977-10-01

    The report contains procedures for the Source Term Measurement Project being performed by Idaho National Engineering Laboratory for the Nuclear Regulatory Commission. This work is being conducted for the Office of Nuclear Regulatory Research in support of requirements of the Effluent Treatment Systems Branch of the Office of Nuclear Reactor Regulation. This project is designed to obtain source term information at operating light water reactors to update the parameters used in NRC calculational models (GALE codes). Detailed procedures and methods used for collection and analysis of samples are presented. This provides a reference base to supplement a series of reports to be issued by the Source Term Measurements Project which will present data obtained from measurements in specific nuclear power stations. Reference to appropriate parts of these procedures will be made as required

  5. Updating flood maps efficiently using existing hydraulic models, very-high-accuracy elevation data, and a geographic information system; a pilot study on the Nisqually River, Washington

    Science.gov (United States)

    Jones, Joseph L.; Haluska, Tana L.; Kresch, David L.

    2001-01-01

    A method of updating flood inundation maps at a fraction of the expense of using traditional methods was piloted in Washington State as part of the U.S. Geological Survey Urban Geologic and Hydrologic Hazards Initiative. Large savings in expense may be achieved by building upon previous Flood Insurance Studies and automating the process of flood delineation with a Geographic Information System (GIS); increases in accuracy and detail result from the use of very-high-accuracy elevation data and automated delineation; and the resulting digital data sets contain valuable ancillary information such as flood depth, as well as greatly facilitating map storage and utility. The method consists of creating stage-discharge relations from the archived output of the existing hydraulic model, using these relations to create updated flood stages for recalculated flood discharges, and using a GIS to automate the map generation process. Many of the effective flood maps were created in the late 1970s and early 1980s, and suffer from a number of well recognized deficiencies such as out-of-date or inaccurate estimates of discharges for selected recurrence intervals, changes in basin characteristics, and relatively low quality elevation data used for flood delineation. FEMA estimates that 45 percent of effective maps are over 10 years old (FEMA, 1997). Consequently, Congress has mandated the updating and periodic review of existing maps, which have cost the Nation almost 3 billion (1997) dollars. The need to update maps and the cost of doing so were the primary motivations for piloting a more cost-effective and efficient updating method. New technologies such as Geographic Information Systems and LIDAR (Light Detection and Ranging) elevation mapping are key to improving the efficiency of flood map updating, but they also improve the accuracy, detail, and usefulness of the resulting digital flood maps. GISs produce digital maps without manual estimation of inundated areas between
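    The central updating step, evaluating an archived stage-discharge relation at a recalculated flood discharge, can be sketched as a simple interpolation. The cross-section values below are hypothetical, not from the Nisqually River model:

```python
import numpy as np

# archived hydraulic-model output at one cross-section (hypothetical):
# water-surface elevation computed for a range of modeled discharges
discharges_cms = np.array([100.0, 250.0, 500.0, 900.0])   # m^3/s
stages_m = np.array([10.2, 11.5, 13.1, 14.8])             # elevation, m

def updated_stage(new_discharge):
    """Interpolate the archived stage-discharge relation at a
    recalculated flood discharge (e.g. a revised 100-year flow)."""
    return float(np.interp(new_discharge, discharges_cms, stages_m))

stage_100yr = updated_stage(600.0)
```

    In the GIS step, the interpolated stage at each cross-section is compared against the high-accuracy elevation surface to delineate the inundated area automatically.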

  6. Updating the asymmetric osmium-catalyzed dihydroxylation (AD) mnemonic. Q2MM modeling and new kinetic measurements

    DEFF Research Database (Denmark)

    Fristrup, Peter; Tanner, David Ackland; Norrby, Per-Ola

    2003-01-01

    The mnemonic device for predicting stereoselectivities in the Sharpless asymmetric dihydroxylation (AD) reaction has been updated based on extensive computational studies. Kinetic measurements from competition reactions validate the new proposal. The interactions responsible for the high stereose...

  7. Quantification of seismic damage in steel beam-column connection using PVDF strain sensors and model-updating technique

    Science.gov (United States)

    Suzuki, Akiko; Kurata, Masahiro; Li, Xiaohua; Minegishi, Kaede; Tang, Zhenyun; Burton, Andrew

    2015-03-01

    This paper presents an experimental verification of a method of evaluating local damage in steel beam-column connections using modal vibratory characteristics under ambient vibrations. First, a unique testing method is proposed to provide a vibration-test environment which enables measurements of modal vibration characteristics of a steel beam-column connection as damage proceeds. In the testing method, a specimen of a structural component is installed in a resonance frame that supports a large fictitious mass, and the resonance frequency of the entire system is set as the natural frequency of a mid-rise steel building. The specimen is damaged quasi-statically, and resonance vibration tests are conducted with a modal shaker. The proposed method enables evaluation of realistic damage in structural components without constructing a large specimen of an entire structural system. The transition of the neutral axis and the reduction of the root mean square (RMS) of dynamic strain response are tracked in order to quantify damage in floor slabs and steel beams, respectively. Two specimens of steel beam-column connection with or without floor slab were tested to investigate sensitivity of the damage-related features to loss of floor composite action and fractures in steel beams. In the end, by updating numerical models of the specimens using the identified damage-related features, seismic capacities of damaged specimens were estimated.
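    Both damage features used here, the RMS reduction of dynamic strain and the shift of the neutral axis, reduce to short computations on gauge signals. A sketch with synthetic strains (amplitudes and section height are hypothetical; a linear strain profile over the section is assumed):

```python
import numpy as np

def rms(x):
    """Root mean square of a sampled signal."""
    return float(np.sqrt(np.mean(np.square(x))))

def neutral_axis_depth(eps_top, eps_bot, h):
    """Depth of the zero-strain fibre below the top gauge, assuming a
    linear strain profile over section height h (plane sections)."""
    return h * eps_top / (eps_top - eps_bot)

# synthetic 5 Hz dynamic strain before and after damage (dimensionless)
t = np.linspace(0.0, 1.0, 1000)
strain_intact = 100e-6 * np.sin(2 * np.pi * 5 * t)
strain_damaged = 70e-6 * np.sin(2 * np.pi * 5 * t)   # reduced response

# fractional RMS reduction used as the beam-damage feature
reduction = 1.0 - rms(strain_damaged) / rms(strain_intact)
```

    A drift of `neutral_axis_depth` toward the beam centroid over successive tests would indicate loss of floor composite action, while growth of `reduction` would indicate section loss in the steel beam.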

  8. A model treatment refusal procedure for defendants found incompetent to stand trial in the ninth circuit.

    Science.gov (United States)

    Epson, Martin; Rodol, Liban; Bloom, Joseph D

    2012-01-01

    Pretrial detainees have a constitutionally protected right to refuse medical treatment in most circumstances; however, individuals found incompetent to stand trial (IST) due to a mental disorder can be treated involuntarily by clinicians who adhere to careful medical and legal procedures. The process of involuntary treatment of IST pretrial detainees begins with categorization into particular legal and medical groups. These different categories affect the individual's access to treatment. In this article, we review the relevant case law for the jurisdiction of the Ninth Circuit and place the medical-legal debate regarding these procedures in the context of recent cases. To address the medical-legal disjunction, we propose and discuss a model for managing treatment refusal in pretrial detainees found IST.

  9. Time of emergence in regional precipitation changes: an updated assessment using the CMIP5 multi-model ensemble

    Science.gov (United States)

    Nguyen, Thuy-Huong; Min, Seung-Ki; Paik, Seungmok; Lee, Donghyun

    2018-01-01

    This study conducted an updated time of emergence (ToE) analysis of regional precipitation changes over land regions across the globe using multiple climate models from the Coupled Model Intercomparison Project phase 5 (CMIP5). ToEs were estimated for 14 selected hotspots over two seasons, April to September (AS) and October to March (OM), under three RCP scenarios representing low (RCP2.6), medium (RCP4.5), and high (RCP8.5) emissions. Results from the RCP8.5 scenario indicate that ToEs would occur before 2040 over seven hotspots, including three northern high-latitude regions (OM wetting), East Africa (OM wetting), South Asia (AS wetting), East Asia (AS wetting) and South Africa (AS drying). The Mediterranean (both OM and AS drying) is expected to experience ToEs in the mid-twenty-first century (2040-2080). To quantify the possible benefits of following a low-emission pathway, ToE differences were examined between the RCP2.6 scenario and the RCP4.5 and RCP8.5 scenarios. Significant ToE delays from 26 years to longer than 67 years were identified over East Africa (OM wetting), the Mediterranean (both AS and OM drying), South Asia (AS wetting), and South Africa (AS drying). Further, we investigated ToE differences between CMIP3-based and CMIP5-based models using the same number of models for the comparable scenario pairs (SRESA2 vs. RCP8.5, and SRESB1 vs. RCP4.5). Results were largely consistent between the two model groups, indicating the robustness of the ToE results. Considerable differences in ToEs (larger than 20 years) between the two model groups appeared over East Asia and South Asia (AS wetting) and South Africa (AS drying), which were found to be due to stronger signals in CMIP5 models. Our results provide useful information on the timing of emerging signals in regional and seasonal hydrological changes, with important implications for associated adaptation and mitigation plans.
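
    Published ToE criteria differ in detail, but the core idea can be sketched as follows: the ToE is the first year from which the signal stays permanently outside a noise envelope. The trend slope, noise level and the one-sigma criterion below are hypothetical, chosen only to make the mechanics concrete.

```python
import numpy as np

def time_of_emergence(years, signal, noise_std, k=1.0):
    """First year from which |signal| stays outside the +/- k*sigma noise
    envelope for good; returns None if the signal never emerges.
    (One common criterion; published ToE definitions vary.)"""
    emerged = np.abs(signal) > k * noise_std
    for i in range(len(years)):
        if emerged[i:].all():
            return years[i]
    return None

# Synthetic example: a linear wetting trend over fixed interannual noise.
years = np.arange(2006, 2101)
trend = 0.02 * (years - 2006)   # mm/day of precipitation change (assumed)
toe = time_of_emergence(years, trend, noise_std=0.5)
```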

  10. The 2003 Update of the ASPO Oil and Gas Depletion Model

    Energy Technology Data Exchange (ETDEWEB)

    Campbell, Colin; Sivertsson, Anders [Uppsala Univ. (Sweden). Hydrocarbon Depletion Study Group

    2003-07-01

    What we can term the ASPO Oil and Gas Depletion Model has developed over many years, based on an evolving knowledge of the resource base, culled from many sources, and evolving ideas about how to model depletion. It is certain that the estimates and forecasts are incorrect. The question is: by how much? The model recognises so-called Regular Oil, which excludes the following categories: oil from coal and shale; bitumen and synthetics derived therefrom; Extra Heavy Oil (<10 deg API); Heavy Oil (10-17 deg API); Deepwater Oil (>500 m); Polar Oil; and liquids from gas fields and gas plants. Regular Oil has provided most oil to date and will dominate all supply far into the future; its depletion therefore determines the date of peak. The evidence suggests that about 896 Gb (billion barrels) had been produced to end 2002; about 871 Gb remain to produce from known fields and about 113 Gb is expected to be produced from new fields. It is convenient to set a cut-off of, say, 2075 for such production, to avoid having to worry about the tail end that can drag on for a long time. A simple depletion model assumes that production declines at the current Depletion Rate (annual production as a percentage of future production) or at the Midpoint Rate in countries that have not yet reached Midpoint (namely half the total). The five main Middle East producers, which hold about half of what remains, are assumed to exercise a swing role, making up the difference between world demand and what the other countries can supply. The base-case scenario assumes that consumption will be on average flat until 2010 because of recession, and that the Middle East swing role will end then, as in practice those countries will no longer have the capacity to discharge it. Whether the Iraq war results in extending or shortening the swing role remains to be seen. Adding the contributions of the other categories of oil and gas liquids gives an overall peak in 2010. Gas depletes differently, being more influenced by
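
    The depletion-rate rule described above (each year's production a fixed percentage of remaining future production) can be sketched numerically. The 984 Gb starting figure simply reuses the abstract's remaining-oil totals (871 Gb known plus 113 Gb new fields); the 2.5% depletion rate is an assumed value for illustration, not ASPO's published figure.

```python
def project_decline(remaining, depletion_rate, years):
    """Each year's production is a fixed fraction (the depletion rate) of
    the remaining future production, so output declines geometrically."""
    out = []
    for _ in range(years):
        production = depletion_rate * remaining
        remaining -= production
        out.append(production)
    return out

# 984 Gb reuses the abstract's remaining total (871 known + 113 new);
# the 2.5% depletion rate is an assumed value, not ASPO's figure.
prod = project_decline(remaining=984.0, depletion_rate=0.025, years=3)
```

    Under this rule production falls by a factor of (1 - depletion rate) each year.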

  11. Cycle life versus depth of discharge update on modeling studies. [Nickel-Hydrogen Batteries

    Science.gov (United States)

    Thaller, Lawrence H.

    1994-01-01

    The topics are presented in viewgraph form, with cycle life vs. depth of discharge data given for the following: data as of three years ago; Air Force/Crane-Fuhr-Smithrick; Ken Fuhr's data; Air Force/Crane data; Eagle-Picher data; Steve Schiffer's data; John Smithrick's data; temperature effects; and E-P, Yardney, and Hughes 26% data. Other topics covered include the following: LeRC cycling tests of Yardney Space Station cells; general statements; general observations; two different models of cycle life vs. depth of discharge; and other degradation modes.

  12. Logistic regression modelling: procedures and pitfalls in developing and interpreting prediction models

    Directory of Open Access Journals (Sweden)

    Nataša Šarlija

    2017-01-01

    This study sheds light on the most common issues related to applying logistic regression in prediction models for company growth. The purpose of the paper is (1) to provide a detailed demonstration of the steps in developing a growth prediction model based on logistic regression analysis, (2) to discuss common pitfalls and methodological errors in developing such a model, and (3) to provide solutions and possible ways of overcoming these issues. Special attention is devoted to satisfying logistic regression assumptions, selecting and defining dependent and independent variables, using classification tables and ROC curves to report model strength, interpreting odds ratios as effect measures, and evaluating the performance of the prediction model. Development of a logistic regression model in this paper focuses on a prediction model of company growth. The analysis is based on predominantly financial data from a sample of 1471 small and medium-sized Croatian companies active between 2009 and 2014. The financial data are presented in the form of financial ratios divided into nine main groups depicting the following areas of business: liquidity, leverage, activity, profitability, research and development, investing and export. The growth prediction model indicates aspects of a business critical for achieving high growth. In that respect, the contribution of this paper is twofold: first, methodological, in terms of pointing out pitfalls and potential solutions in logistic regression modelling, and secondly, theoretical, in terms of identifying factors responsible for high growth of small and medium-sized companies.
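
    As a minimal illustration of the core modelling step discussed above, the sketch below fits a logistic regression by gradient ascent on synthetic data and converts the slope into an odds ratio. The single predictor and its coefficients are hypothetical, not the paper's financial ratios.

```python
import numpy as np

# Minimal logistic-regression fit by gradient ascent on synthetic data,
# illustrating the odds-ratio interpretation discussed above. The single
# predictor x and its coefficients are hypothetical, not the paper's data.
rng = np.random.default_rng(0)
x = rng.normal(size=500)
true_intercept, true_slope = -0.5, 1.2
p = 1 / (1 + np.exp(-(true_intercept + true_slope * x)))
y = (rng.random(500) < p).astype(float)   # binary outcome, e.g. "grew"

X = np.column_stack([np.ones_like(x), x])
beta = np.zeros(2)
for _ in range(5000):
    mu = 1 / (1 + np.exp(-X @ beta))
    beta += 0.01 * X.T @ (y - mu) / len(y)   # ascent on mean log-likelihood

odds_ratio = float(np.exp(beta[1]))   # change in odds per unit increase in x
```

    The odds ratio exp(beta) is the multiplicative change in the odds of the outcome per unit increase of the predictor, which is the effect measure the abstract warns is often misinterpreted.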

  13. The use of flow models for design of plant operating procedures

    International Nuclear Information System (INIS)

    Lind, M.

    1982-03-01

    The report describes a systematic approach to the design of operating procedures or sequence automatics for process plant control. It is shown how flow models representing the topology of mass and energy flows at different levels of function provide plant information that is important for the design problem considered. The modelling methodology leads to the definition of three categories of control tasks. Two tasks relate to the regulation and control of changes in levels and flows of mass and energy in a system within a defined mode of operation. The third type relates to the control actions necessary for the switching operations involved in changes of operating mode. These control tasks are identified for a given plant as part of the flow modelling activity. It is discussed how the flow model deals with the problem of assigning control task precedence in time, e.g. during start-up or shut-down operations. The method may be a basis for providing automated procedure support to the operator in unforeseen situations, or may serve as a tool for control design. (auth.)

  14. Programmed cell death 4 mechanism of action: The model to be updated?

    Science.gov (United States)

    Vikhreva, Polina N; Kalinichenko, Svetlana V; Korobko, Igor V

    2017-10-02

    Programmed cell death 4 (Pdcd4) is frequently suppressed in tumors of various origins, and its suppression correlates with tumor progression. Pdcd4 inhibits cap-dependent translation from mRNAs with highly structured 5'-regions through interaction with the eukaryotic translation initiation factor 4A (eIF4A) helicase and a target transcript. A decrease in Pdcd4 protein is believed to provide a relief of otherwise suppressed eIF4A-dependent translation of proteins facilitating tumor progression. However, it remains unknown whether lowered Pdcd4 levels in cells suffice to relieve translation inhibition through the appearance of Pdcd4-free, translation-competent eIF4A protein, or whether more complex and selective mechanisms are involved. Here we showed that eIF4A1, the eIF4A isoform involved in translation, is present in significant excess over Pdcd4 both in cancerous and in normal cells. This observation excludes a simple scenario in which cytoplasmic Pdcd4 efficiently exerts its translation-suppression function: owing to the excess of eIF4A, Pdcd4-free eIF4A would outnumber Pdcd4-bound, translation-incompetent eIF4A, leaving translation from Pdcd4 mRNA targets unaffected. This contradiction is resolved in the proposed model, which supposes initial complexing between Pdcd4 and its target mRNAs in the nucleus, with subsequent transport of translation-incompetent, Pdcd4-bound target mRNAs into the cytoplasm. Noteworthy, loss of nuclear Pdcd4 in cancer cells was reported to correlate with tumor progression, which supports the proposed model of Pdcd4 functioning.

  15. Nitrous oxide emissions from cropland: a procedure for calibrating the DayCent biogeochemical model using inverse modelling

    Science.gov (United States)

    Rafique, Rashad; Fienen, Michael N.; Parkin, Timothy B.; Anex, Robert P.

    2013-01-01

    DayCent is a biogeochemical model of intermediate complexity widely used to simulate greenhouse gases (GHG), soil organic carbon and nutrients in crop, grassland, forest and savannah ecosystems. Although this model has been applied to a wide range of ecosystems, it is still typically parameterized through a traditional “trial and error” approach and has not been calibrated using statistical inverse modelling (i.e. algorithmic parameter estimation). The aim of this study is to establish and demonstrate a procedure for calibration of DayCent to improve estimation of GHG emissions. We coupled DayCent with the parameter estimation (PEST) software for inverse modelling. The PEST software can be used for calibration through regularized inversion as well as for model sensitivity and uncertainty analysis. The DayCent model was analysed and calibrated using N2O flux data collected over 2 years at the Iowa State University Agronomy and Agricultural Engineering Research Farms, Boone, IA. Crop year 2003 data were used for model calibration and 2004 data were used for validation. The optimization of DayCent model parameters using PEST significantly reduced model residuals relative to the default DayCent parameter values. Parameter estimation improved the model performance by reducing the sum of weighted squared residual differences between measured and modelled outputs by up to 67 %. For the calibration period, simulation with the default model parameter values underestimated the mean daily N2O flux by 98 %. After parameter estimation, the model underestimated the mean daily fluxes by 35 %. During the validation period, the calibrated model reduced the sum of weighted squared residuals by 20 % relative to the default simulation. The sensitivity analysis performed provides important insights into the model structure, offering guidance for model improvement.
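
    The PEST objective function mentioned above, the sum of squared weighted residuals, is easy to state concretely. The flux values below are invented solely to illustrate how a percentage reduction of the objective is computed; they are not the Iowa data.

```python
import numpy as np

def weighted_ssr(observed, simulated, weights):
    """PEST-style objective: sum of squared weighted residuals."""
    r = weights * (np.asarray(observed) - np.asarray(simulated))
    return float(np.sum(r ** 2))

# Hypothetical daily N2O fluxes; numbers are invented to show how a
# percentage reduction of the objective is computed (not the Iowa data).
obs = [10.0, 12.0, 8.0, 15.0]
sim_default = [2.0, 3.0, 1.0, 4.0]        # default run underestimates
sim_calibrated = [8.0, 11.0, 6.0, 13.0]   # calibrated run
w = np.ones(4)

phi_default = weighted_ssr(obs, sim_default, w)
phi_calibrated = weighted_ssr(obs, sim_calibrated, w)
reduction_pct = 100.0 * (1.0 - phi_calibrated / phi_default)
```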

  16. A single-photon ECAT reconstruction procedure based on a PSF model

    International Nuclear Information System (INIS)

    Ying-Lie, O.

    1984-01-01

    Emission Computed Axial Tomography (ECAT) has been applied in nuclear medicine for the past few years. Owing to attenuation and scatter along the ray path, adequate correction methods are required. In this thesis, a correction method for attenuation, detector response and Compton scatter is proposed. The method developed is based on a PSF model, whose parameters were derived by fitting experimental and simulation data. Because of its flexibility, a Monte Carlo simulation method has been employed. Using the PSF models, it was found that the ECAT problem can be described by the added modified equation. Application of the reconstruction procedure to simulation data yields satisfactory results. The algorithm tends to amplify noise and distortion in the data, however. Therefore, the applicability of the method to patient studies remains to be seen. (Auth.)

  17. Dynamics of microparticles in vacuum breakdown: Cranberg’s scenario updated by numerical modeling

    Directory of Open Access Journals (Sweden)

    B. Seznec

    2017-07-01

    Microparticles (MP) and thermofield emission in vacuum are mainly caused by the roughness present at the surface of electrodes holding a high voltage. They can act as a trigger for breakdown, especially under high vacuum. This theoretical study discusses the interactions between one MP and the thermofield-emission electron current, as well as the consequences for the MP's transit. Starting from Cranberg's assumptions, new phenomena have been taken into account, such as MP charge variation due to the secondary electron emission induced by energetic electron bombardment. Hence, the present model can be solved only numerically. Four scenarios have been identified based on the results, depending on the electron emission current from the cathode roughness (tip) and the size of the MP released at the anode, namely (i) one way; (ii) back and forth; (iii) oscillation; and (iv) vaporization. A crash study of the MP on the cathode shows that the electron emission can decrease if the MP covers the thermoemissive tip, i.e., if the MP is larger than the tip size, a phenomenon often called "conditioning", helping to increase the voltage-holding capability in vacuum without breakdown.

  18. N-3 polyunsaturated fatty acids in animal models with neuroinflammation: An update.

    Science.gov (United States)

    Trépanier, Marc-Olivier; Hopperton, Kathryn E; Orr, Sarah K; Bazinet, Richard P

    2016-08-15

    Neuroinflammation is a characteristic of a multitude of neurological and psychiatric disorders. Modulating inflammatory pathways offers a potential therapeutic target in these disorders. Omega-3 polyunsaturated fatty acids have anti-inflammatory and pro-resolving properties in the periphery, however, their effect on neuroinflammation is less studied. This review summarizes 61 animal studies that tested the effect of omega-3 polyunsaturated fatty acids on neuroinflammatory outcomes in vivo in various models including stroke, spinal cord injury, aging, Alzheimer's disease, Parkinson's disease, lipopolysaccharide and IL-1β injections, diabetes, neuropathic pain, traumatic brain injury, depression, surgically induced cognitive decline, whole body irradiation, amyotrophic lateral sclerosis, N-methyl-D-aspartate-induced excitotoxicity and lupus. The evidence presented in this review suggests anti-neuroinflammatory properties of omega-3 polyunsaturated fatty acids, however, it is not clear by which mechanism omega-3 polyunsaturated fatty acids exert their effect. Future research should aim to isolate the effect of omega-3 polyunsaturated fatty acids on neuroinflammatory signaling in vivo and elucidate the mechanisms underlying these effects. Copyright © 2016. Published by Elsevier B.V.

  19. Quantifying Update Effects in Citizen-Oriented Software

    Directory of Open Access Journals (Sweden)

    Ion Ivan

    2009-02-01

    This paper defines citizen-oriented software and details technical issues regarding the update process in this kind of software. Different effects triggered by types of update are presented. A model for update cost estimation is built, including producer-side and consumer-side effects, and its applicability is analyzed on INVMAT, a large-scale matrix inversion software product. A model for update effects estimation is proposed, and ways of softening the effects of inaccurate updates are specified.
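
    A minimal reading of the update-cost decomposition described above might look like the sketch below; the split into one producer-side cost plus a per-consumer cost, and all numbers, are assumptions for illustration only.

```python
# Hypothetical reading of the producer/consumer split described above;
# the cost structure and all numbers are illustrative assumptions.
def update_cost(producer_cost, per_user_cost, n_users):
    """Total rollout cost: one-off producer-side cost (build, test,
    deploy) plus consumer-side cost summed over affected users."""
    return producer_cost + per_user_cost * n_users

total = update_cost(producer_cost=1200.0, per_user_cost=0.5, n_users=5000)
```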

  20. A stochastic estimation procedure for intermittently-observed semi-Markov multistate models with back transitions.

    Science.gov (United States)

    Aralis, Hilary; Brookmeyer, Ron

    2017-01-01

    Multistate models provide an important method for analyzing a wide range of life history processes including disease progression and patient recovery following medical intervention. Panel data consisting of the states occupied by an individual at a series of discrete time points are often used to estimate transition intensities of the underlying continuous-time process. When transition intensities depend on the time elapsed in the current state and back transitions between states are possible, this intermittent observation process presents difficulties in estimation due to intractability of the likelihood function. In this manuscript, we present an iterative stochastic expectation-maximization algorithm that relies on a simulation-based approximation to the likelihood function and implement this algorithm using rejection sampling. In a simulation study, we demonstrate the feasibility and performance of the proposed procedure. We then demonstrate application of the algorithm to a study of dementia, the Nun Study, consisting of intermittently-observed elderly subjects in one of four possible states corresponding to intact cognition, impaired cognition, dementia, and death. We show that the proposed stochastic expectation-maximization algorithm substantially reduces bias in model parameter estimates compared to an alternative approach used in the literature, minimal path estimation. We conclude that in estimating intermittently observed semi-Markov models, the proposed approach is a computationally feasible and accurate estimation procedure that leads to substantial improvements in back transition estimates.
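
    The rejection-sampling idea in the abstract, simulating latent continuous-time paths and keeping only those consistent with the panel observations, can be sketched for a toy two-state semi-Markov process with Weibull sojourn times. The scale and shape values below are hypothetical and unrelated to the Nun Study model.

```python
import random

# Toy rejection sampler for one panel interval of a two-state semi-Markov
# process (states 0 <-> 1) with Weibull sojourn times. The scale/shape
# values are hypothetical and unrelated to the Nun Study model.

def simulate_path(start, horizon, rng):
    """Simulate (jump_time, new_state) pairs up to `horizon`; also return
    the state occupied at `horizon`."""
    t, s, path = 0.0, start, []
    while True:
        t += rng.weibullvariate(2.0, 1.5)   # sojourn: scale 2.0, shape 1.5
        if t >= horizon:
            return path, s
        s = 1 - s
        path.append((t, s))

def rejection_sample(start, end_state, horizon, rng, max_tries=10000):
    """Simulate latent paths until the state at `horizon` matches the
    panel observation `end_state`; return the accepted path."""
    for _ in range(max_tries):
        path, final = simulate_path(start, horizon, rng)
        if final == end_state:
            return path
    raise RuntimeError("acceptance rate too low")

rng = random.Random(42)
path = rejection_sample(start=0, end_state=1, horizon=1.0, rng=rng)
```

    Inside a stochastic EM iteration, many such accepted paths would serve as the "complete data" from which the sojourn-time parameters are re-estimated.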

  1. Modeling urban building energy use: A review of modeling approaches and procedures

    Energy Technology Data Exchange (ETDEWEB)

    Li, Wenliang; Zhou, Yuyu; Cetin, Kristen; Eom, Jiyong; Wang, Yu; Chen, Gang; Zhang, Xuesong

    2017-12-01

    With rapid urbanization and economic development, the world has been experiencing an unprecedented increase in energy consumption and greenhouse gas (GHG) emissions. While reducing energy consumption and GHG emissions is a common interest shared by major developed and developing countries, actions to enable these global reductions are generally implemented at the city scale. This is because baseline information from individual cities plays an important role in identifying economical options for improving building energy efficiency and reducing GHG emissions. Numerous approaches have been proposed for modeling urban building energy use in the past decades. This paper aims to provide an up-to-date review of the broad categories of energy models for urban buildings and describes the basic workflow of physics-based, bottom-up models and their applications in simulating urban-scale building energy use. Because there are significant differences across models with varied potential for application, strengths and weaknesses of the reviewed models are also presented. This is followed by a discussion of challenging issues associated with model preparation and calibration.

  2. A Computational Model of the Temporal Dynamics of Plasticity in Procedural Learning: Sensitivity to Feedback Timing

    Directory of Open Access Journals (Sweden)

    Vivian V. Valentin

    2014-07-01

    The evidence is now good that different memory systems mediate the learning of different types of category structures. In particular, declarative memory dominates rule-based (RB) category learning and procedural memory dominates information-integration (II) category learning. For example, several studies have reported that feedback timing is critical for II category learning, but not for RB category learning, results that have broad support within the memory-systems literature. Specifically, II category learning has been shown to be best with feedback delays of 500 ms compared to delays of 0 and 1000 ms, and highly impaired with delays of 2.5 seconds or longer. In contrast, RB learning is unaffected by any feedback delay of up to 10 seconds. We propose a neurobiologically detailed theory of procedural learning that is sensitive to different feedback delays. The theory assumes that procedural learning is mediated by plasticity at cortical-striatal synapses that are modified by dopamine-mediated reinforcement learning. The model captures the time course of the biochemical events in the striatum that cause synaptic plasticity, and thereby accounts for the empirical effects of various feedback delays on II category learning.

  3. Modeling of the LA-ICPMS surface rastering procedure to optimize elemental imaging

    International Nuclear Information System (INIS)

    Elteren, J.T. van; Triglav, J.; Selih, V.S.; Zivin, M.

    2009-01-01

    The quality of elemental image maps generated by LA-ICPMS is a function of the instrumental settings (laser fluence, pulse rate, beam diameter, scanning speed, gas flow rate and acquisition time). Optimizing these settings is a matter of trial and error since quality criteria for elemental imaging (sensitivity, resolution, analysis time) are intricately linked. A theoretical model (and software) will be discussed with which it is possible to simply compute the image distortion introduced by the LA-ICPMS as a function of the instrumental settings and optimize the surface rastering procedure prior to the actual analysis to meet the required quality criteria. (author)
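
    The abstract does not give the model's equations; one common generic way such scan-direction blur is modelled is as a convolution of the true elemental profile with a single-shot washout response. The sketch below uses that assumption, with all parameter values hypothetical rather than taken from the paper.

```python
import numpy as np

# Sketch: scan-direction blur of an elemental image line modelled as
# convolution of the true profile with an exponential single-shot washout
# kernel. This is a common generic model; the paper's actual distortion
# model and all parameter values here are assumptions.
x = np.arange(200)                     # acquisition points along the scan
true_profile = ((x > 80) & (x < 120)).astype(float)   # a sharp feature
tau = 5.0                              # washout decay, in points (assumed)
kernel = np.exp(-np.arange(40) / tau)
kernel /= kernel.sum()                 # unit area: total signal preserved
blurred = np.convolve(true_profile, kernel)[: x.size]

width_true = float(np.sum(true_profile > 0.5))
width_blur = float(np.sum(blurred > 0.5))   # width at half maximum
```

    Comparing the true and blurred feature widths for candidate scan speeds and washout times is the kind of pre-analysis trade-off the paper's software automates.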

  4. Updated activated sludge model number 1 parameter values for improved prediction of nitrogen removal in activated sludge processes: validation at 13 full-scale plants.

    Science.gov (United States)

    Choubert, Jean-Marc; Stricker, Anne-Emmanuelle; Marquot, Aurélien; Racault, Yvan; Gillot, Sylvie; Héduit, Alain

    2009-01-01

    The Activated Sludge Model number 1 (ASM1) is the main model used in simulation projects focusing on nitrogen removal. Recent laboratory-scale studies have found that the default values given 20 years ago for the decay rate of nitrifiers and for the heterotrophic biomass yield in anoxic conditions were inadequate. To verify the relevance of the revised parameter values at full scale, a series of simulations were carried out with ASM1 using the original and updated set of parameters at 20 degrees C and 10 degrees C. The simulation results were compared with data collected at 13 full-scale nitrifying-denitrifying municipal treatment plants. This work shows that simulations using the original ASM1 default parameters tend to overpredict the nitrification rate and underpredict the denitrification rate. The updated set of parameters allows more realistic predictions over a wide range of operating conditions.

  5. Development of an updated PBPK model for trichloroethylene and metabolites in mice, and its application to discern the role of oxidative metabolism in TCE-induced hepatomegaly

    International Nuclear Information System (INIS)

    Evans, M.V.; Chiu, W.A.; Okino, M.S.; Caldwell, J.C.

    2009-01-01

    Trichloroethylene (TCE) is a lipophilic solvent rapidly absorbed and metabolized via oxidation and conjugation to a variety of metabolites that cause toxicity to several internal targets. Increases in liver weight (hepatomegaly) have been reported to occur quickly in rodents after TCE exposure, with liver tumor induction reported in mice after long-term exposure. An integrated dataset for gavage and inhalation TCE exposure and oral data for exposure to two of its oxidative metabolites (TCA and DCA) was used, in combination with an updated and more accurate physiologically-based pharmacokinetic (PBPK) model, to examine the question as to whether the presence of TCA in the liver is responsible for TCE-induced hepatomegaly in mice. The updated PBPK model was used to help discern the quantitative contribution of metabolites to this effect. The update of the model was based on a detailed evaluation of predictions from previously published models and additional preliminary analyses based on gas uptake inhalation data in mice. The parameters of the updated model were calibrated using Bayesian methods with an expanded pharmacokinetic database consisting of oral, inhalation, and iv studies of TCE administration as well as studies of TCE metabolites in mice. The dose-response relationships for hepatomegaly derived from the multi-study database showed that the proportionality of dose to response for TCE- and DCA-induced hepatomegaly is not observed for administered doses of TCA in the studied range. The updated PBPK model was used to make a quantitative comparison of internal dose of metabolized and administered TCA. While the internal dose of TCA predicted by modeling of TCE exposure (i.e., mg TCA/kg-d) showed a linear relationship with hepatomegaly, the slope of the relationship was much greater than that for directly administered TCA. Thus, the degree of hepatomegaly induced per unit of TCA produced through TCE oxidation is greater than that expected per unit of TCA

  6. Comparison of In-Flight Measured and Computed Aeroelastic Damping: Modal Identification Procedures and Modeling Approaches

    Directory of Open Access Journals (Sweden)

    Roberto da Cunha Follador

    2016-04-01

    The Operational Modal Analysis technique is a methodology very often applied for the identification of dynamic systems when the input signal is unknown. The applied methodology is based on a technique to estimate the Frequency Response Functions and extract the modal parameters using only the structural dynamic response data, without assuming knowledge of the excitation forces. Such an approach is an adequate way of measuring the aircraft aeroelastic response due to random input, like atmospheric turbulence. The in-flight structural response has been measured by accelerometers distributed along the aircraft wings, fuselage and empennages. The Enhanced Frequency Domain Decomposition technique was chosen to identify the airframe dynamic parameters. This technique is based on the hypothesis that the system is randomly excited with a broadband spectrum of almost constant power spectral density. The system identification procedure is based on the Singular Value Decomposition of the power spectral densities of the system output signals, estimated by the usual Fast Fourier Transform method. This procedure has been applied to different flight conditions to evaluate the modal parameters and the aeroelastic stability trends of the airframe under investigation. The experimental results obtained by this methodology were compared with the predicted results supplied by aeroelastic numerical models in order to check the consistency of the proposed output-only methodology. The objective of this paper is to compare in-flight measured aeroelastic damping against the corresponding parameters computed from numerical aeroelastic models. Different aerodynamic modeling approaches should be investigated, such as the use of source panel body models, cruciform and flat-plate projection. As a result of this investigation, it is expected that the more suitable aeroelastic modeling and Operational Modal Analysis techniques will be chosen for inclusion in a standard aeroelastic
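
    The Enhanced Frequency Domain Decomposition core step described above, a singular value decomposition of the output spectral density matrix at each frequency line, can be sketched on synthetic single-mode data. The natural frequency, damping and mode shape below are assumed values, not the flight-test data.

```python
import numpy as np

# EFDD core step (sketch): at each frequency line, take the SVD of the
# output cross-power spectral density (CPSD) matrix; the first singular
# value peaks at a resonance and its singular vector estimates the mode
# shape. Synthetic single-mode CPSD; fn, zeta and phi are assumed values.
freqs = np.linspace(0.0, 10.0, 201)
fn, zeta = 4.0, 0.02                      # natural frequency, damping
phi = np.array([1.0, 0.6, -0.8])          # mode shape at three sensors
H = 1.0 / ((fn**2 - freqs**2) + 2j * zeta * fn * freqs)   # SDOF FRF
first_sv = np.empty_like(freqs)
for i, h in enumerate(H):
    Gyy = np.abs(h) ** 2 * np.outer(phi, phi)   # rank-one CPSD matrix
    first_sv[i] = np.linalg.svd(Gyy, compute_uv=False)[0]

f_peak = float(freqs[np.argmax(first_sv)])  # identified natural frequency
```

    In the full method, the singular-value spectrum around the peak is transformed back to the time domain to estimate damping from the decay of the modal autocorrelation.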

  7. Generation of normative pediatric skull models for use in cranial vault remodeling procedures.

    Science.gov (United States)

    Saber, Nikoo R; Phillips, John; Looi, Thomas; Usmani, Zoha; Burge, Jonathan; Drake, James; Kim, Peter C W

    2012-03-01

    While the goal of craniofacial reconstruction surgery is to restore the cranial head shape as much towards normal as possible, for the individual patient, there is, in fact, no normal three-dimensional (3D) model to act as a guide. In this project, we generated a library of normative pediatric skulls from which a guiding template could be fabricated for a more standardized, objective and precise correction of craniosynostosis. Computed tomography data from 103 normal subjects aged 8-12 months were compiled and a 3D computational model of the skull was generated for each subject. The models were mathematically registered to a baseline model for each month of age within this range and then averaged, resulting in a single 3D point cloud. An external cranial surface was subsequently passed through the point cloud and its shape and size customized to fit the head circumference of individual patients. The resultant fabricated skull models provide a novel and applicable tool for a detailed, quantitative comparison between the normative and patient skulls for preoperative planning and practice for a variety of craniofacial procedures including vault remodeling. Additionally, it was possible to extract the suprafrontal orbit anatomy from the normative model and fabricate a bandeau template to guide intraoperative reshaping. Normative head shapes for pediatric patients have wide application for craniofacial surgery including planning, practice, standardized operative repair, and standardized measurement and reporting of outcomes.
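
    The register-then-average pipeline described above can be sketched with a Kabsch rigid alignment of corresponding point clouds. Real skull surfaces require registration without known point correspondences (and often non-rigid alignment), so this is only a toy version under the assumption that correspondences are given.

```python
import numpy as np

def register_to_baseline(points, baseline):
    """Rigidly align `points` to `baseline` (rotation + translation) via
    the Kabsch algorithm; row-to-row correspondence is assumed known."""
    p_c = points - points.mean(axis=0)
    b_c = baseline - baseline.mean(axis=0)
    U, _, Vt = np.linalg.svd(p_c.T @ b_c)
    d = np.sign(np.linalg.det(U @ Vt))          # guard against reflections
    R = U @ np.diag([1.0, 1.0, d]) @ Vt
    return p_c @ R + baseline.mean(axis=0)

# Toy "skulls": a baseline cloud plus a rotated and translated copy.
rng = np.random.default_rng(1)
baseline = rng.normal(size=(50, 3))
theta = 0.3
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
subject = baseline @ Rz.T + np.array([5.0, -2.0, 1.0])

aligned = register_to_baseline(subject, baseline)
mean_cloud = (baseline + aligned) / 2           # the averaging step
err = float(np.abs(aligned - baseline).max())
```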

  8. Fitting direct covariance structures by the MSTRUCT modeling language of the CALIS procedure.

    Science.gov (United States)

    Yung, Yiu-Fai; Browne, Michael W; Zhang, Wei

    2015-02-01

    This paper demonstrates the usefulness and flexibility of the general structural equation modelling (SEM) approach to fitting direct covariance patterns or structures (as opposed to fitting implied covariance structures from functional relationships among variables). In particular, the MSTRUCT modelling language (or syntax) of the CALIS procedure (SAS/STAT version 9.22 or later: SAS Institute, 2010) is used to illustrate the SEM approach. The MSTRUCT modelling language supports a direct covariance pattern specification of each covariance element. It also supports the input of additional independent and dependent parameters. Model tests, fit statistics, estimates, and their standard errors are then produced under the general SEM framework. By using numerical and computational examples, the following tests of basic covariance patterns are illustrated: sphericity, compound symmetry, and multiple-group covariance patterns. Specification and testing of two complex correlation structures, the circumplex pattern and the composite direct product models with or without composite errors and scales, are also illustrated by the MSTRUCT syntax. It is concluded that the SEM approach offers a general and flexible modelling of direct covariance and correlation patterns. In conjunction with the use of SAS macros, the MSTRUCT syntax provides an easy-to-use interface for specifying and fitting complex covariance and correlation structures, even when the number of variables or parameters becomes large. © 2014 The British Psychological Society.
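
    A direct covariance-pattern fit of the kind the MSTRUCT syntax expresses can be sketched outside SAS. The snippet below estimates a compound-symmetry structure from simulated data and forms the normal-theory likelihood-ratio statistic against the unstructured covariance; it omits the standard errors and fit indices that PROC CALIS reports, and all data are simulated.

```python
import numpy as np

# Sketch of a direct covariance-pattern fit: estimate a compound-symmetry
# (CS) structure (equal variances, equal covariances) from simulated data
# and form the normal-theory likelihood-ratio statistic against the
# unstructured covariance. Everything here is simulated.
rng = np.random.default_rng(7)
p, n = 4, 500
true_cov = 0.4 * np.ones((p, p)) + 0.6 * np.eye(p)   # var 1.0, cov 0.4
X = rng.multivariate_normal(np.zeros(p), true_cov, size=n)
S = np.cov(X, rowvar=False, bias=True)               # ML sample covariance

var_hat = S.diagonal().mean()                  # CS MLE of common variance
cov_hat = S[~np.eye(p, dtype=bool)].mean()     # CS MLE of common covariance
Sigma0 = cov_hat * np.ones((p, p)) + (var_hat - cov_hat) * np.eye(p)

# -2 log LR = n * (log|Sigma0| - log|S| + tr(S Sigma0^-1) - p),
# asymptotically chi-square with p(p+1)/2 - 2 degrees of freedom under CS.
lrt = n * (np.linalg.slogdet(Sigma0)[1] - np.linalg.slogdet(S)[1]
           + np.trace(S @ np.linalg.inv(Sigma0)) - p)
```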

  9. Update and extension of the Brazil SimSmoke model to estimate the health impact of cigarette smoking by pregnant women in Brazil

    OpenAIRE

    Szklo, André Salem; Yuan, Zhe; Levy, David

    2017-01-01

    Abstract: A previous application of the Brazil SimSmoke tobacco control policy simulation model was used to show the effect of policies implemented between 1989 and 2010 on smoking-attributable deaths (SADs). In this study, we updated and further validated the Brazil SimSmoke model to incorporate policies implemented since 2011 (e.g., a new tax structure with the purpose of increasing revenues/real prices). In addition, we extended the model to estimate smoking-attributable maternal and child...

  10. Updated model for radionuclide transport in the near-surface till at Forsmark - Implementation of decay chains and sensitivity analyses

    Energy Technology Data Exchange (ETDEWEB)

    Pique, Angels; Pekala, Marek; Molinero, Jorge; Duro, Lara; Trinchero, Paolo; Vries, Luis Manuel de [Amphos 21 Consulting S.L., Barcelona (Spain)]

    2013-02-15

    The Forsmark area has been proposed for potential siting of a deep underground (geological) repository for radioactive waste in Sweden. Safety assessment of the repository requires that radionuclide transport from the disposal depth to recipients at the surface be studied quantitatively. The near-surface Quaternary deposits at Forsmark are considered a pathway for potential discharge of radioactivity from the underground facility to the biosphere, and radionuclide transport in this system has therefore been investigated extensively in recent years. The most recent work of Pique and co-workers (reported in SKB report R-10-30) demonstrated that, in the case of a release of radioactivity, the near-surface sedimentary system at Forsmark would act as an important geochemical barrier, retarding the transport of reactive radionuclides through a combination of retention processes. In this report the conceptual model of radionuclide transport in the Quaternary till at Forsmark has been updated, taking into account recent revisions of the near-surface lithology. In addition, the impact of important conceptual assumptions made in the model has been evaluated through a series of deterministic and probabilistic (Monte Carlo) sensitivity calculations. The sensitivity study focused on the following effects: 1. Radioactive decay of {sup 135}Cs, {sup 59}Ni, {sup 230}Th and {sup 226}Ra and its effects on their transport. 2. Variability in key geochemical parameters, such as the composition of the deep groundwater, the availability of sorbing materials in the till, and mineral equilibria. 3. Variability in hydraulic parameters, such as the definition of hydraulic boundaries and the values of hydraulic conductivity, dispersivity and the deep groundwater inflow rate. 
The overarching conclusion from this study is that the current implementation of the model is robust (the model is largely insensitive to variations in the parameters within the studied ranges) and conservative (the Base Case calculations have a
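
The decay chains considered in the report (e.g. {sup 230}Th decaying to {sup 226}Ra) follow the standard Bateman equations. A minimal two-member sketch in Python is given below; the half-lives are rounded literature values used only for illustration, not parameters from the report:

```python
import numpy as np

# Rounded literature half-lives in years (illustrative only).
HALF_LIFE = {"Th-230": 7.54e4, "Ra-226": 1.60e3}

def bateman_two_member(n_parent0, lam_p, lam_d, t):
    """Bateman solution for a two-member chain parent -> daughter,
    starting from a pure parent inventory n_parent0 at t = 0."""
    n_p = n_parent0 * np.exp(-lam_p * t)
    n_d = n_parent0 * lam_p / (lam_d - lam_p) * (
        np.exp(-lam_p * t) - np.exp(-lam_d * t))
    return n_p, n_d

lam = {k: np.log(2) / v for k, v in HALF_LIFE.items()}  # decay constants
t = np.linspace(0.0, 1.0e4, 5)                          # years
n_th, n_ra = bateman_two_member(1.0, lam["Th-230"], lam["Ra-226"], t)
```

The parent inventory decays exponentially while the daughter first grows in and then follows the parent, which is the ingrowth behaviour a transport model must couple to advection and sorption.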

  11. A Tractable Model of the LTE Access Reservation Procedure for Machine-Type Communications

    DEFF Research Database (Denmark)

    Nielsen, Jimmy Jessen; Kim, Dong Min; Madueño, Germán Corrales

    2015-01-01

    A canonical scenario in Machine-Type Communications (MTC) is the one featuring a large number of devices, each of them with sporadic traffic. Hence, the number of served devices in a single LTE cell is not determined by the available aggregate rate, but rather by the limitations of the LTE access reservation protocol. Specifically, the limited number of contention preambles and the limited amount of uplink grants per random access response are crucial to consider when dimensioning LTE networks for MTC. We propose a low-complexity model that encompasses these two limitations and allows us to evaluate ... on the preamble collisions. A comparison with the simulated LTE access reservation procedure that follows the 3GPP specifications confirms that our model provides an accurate estimation of the system outage event and the number of supported MTC devices.
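
The scarcity of contention preambles drives the outage behaviour the model captures. As a back-of-the-envelope illustration (our simplification, not the paper's full model), each device picks one of M preambles uniformly at random, and an attempt can only succeed if no other device picks the same preamble:

```python
def expected_singleton_preambles(n_devices, n_preambles):
    """Expected number of preambles chosen by exactly one device when
    n_devices each pick uniformly at random among n_preambles.
    Preambles picked by two or more devices collide and fail."""
    m, n = n_preambles, n_devices
    return n * (1.0 - 1.0 / m) ** (n - 1)

# A commonly cited LTE configuration reserves 54 preambles for
# contention-based random access.
m = 54
light = expected_singleton_preambles(10, m)   # lightly loaded cell
heavy = expected_singleton_preambles(200, m)  # MTC-style overload
```

With 54 preambles, 10 contending devices almost all pick unique preambles, while 200 devices leave only a handful of collision-free picks, which is why the device count rather than the aggregate rate becomes the limiting factor.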

  12. Procedural Portfolio Planning in Plastic Surgery, Part 2: Collaboration Between Surgeons and Hospital Administrators to Develop a Funds Flow Model for Procedures Performed at an Academic Medical Center.

    Science.gov (United States)

    Hultman, Charles Scott

    2016-06-01

    Although plastic surgeons make important contributions to the clinical, educational, and research missions of academic medical centers (AMCs), determining the financial value of a plastic surgery service can be difficult, due to complex cost accounting systems. We analyzed the financial impact of plastic surgery on an AMC, by examining the contribution margins and operating income of surgical procedures. We collaborated with hospital administrators to implement 3 types of strategic changes: (1) growth of areas with high contribution margin, (2) curtailment of high-risk procedures with negative contribution margin, (3) improved efficiency of mission-critical services with high resource consumption. Outcome measures included: facility charges, hospital collections, contribution margin, operating margin, and operating room times. We also studied the top 50 Current Procedural Terminology codes (total case number × charge/case), ranking procedures for profitability, as determined by operating margin. During the 2-year study period, we had no turnover in faculty; did not pursue any formal marketing; did not change our surgical fees, billing system, or payer mix; and maintained our commitment to indigent care. After rebalancing our case mix, through procedural portfolio planning, average hospital operating income/procedure increased from $-79 to $+816. Volume and diversity of cases increased, with no change in payer mix. Although charges/case decreased, both contribution margin and operating margin increased, due to improved throughput and decreased operating room times. The 5 most profitable procedures for the hospital were hernia repair, mandibular osteotomy, hand skin graft, free fibula flap, and head and neck flap, whereas the 5 least profitable were latissimus breast reconstruction, craniosynostosis repair, free-flap breast reconstruction, trunk skin graft, and cutaneous free flap. Total operating income for the hospital, from plastic surgery procedures, increased
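
The two margin measures used above differ only in whether allocated overhead is subtracted. A toy sketch with entirely hypothetical figures (none taken from the study) shows the ranking step:

```python
# Hypothetical procedure-level figures (illustrative only): ranking
# procedures by operating margin, i.e. collections minus direct
# (variable) costs minus allocated overhead.
procedures = [
    # (name, collections, direct_cost, allocated_overhead)
    ("hernia repair",     9000,  4000, 2000),
    ("free fibula flap", 30000, 18000, 8000),
    ("trunk skin graft",  5000,  4500, 1500),
]

def margins(name, collections, direct_cost, overhead):
    contribution = collections - direct_cost  # covers fixed costs
    operating = contribution - overhead       # net profitability
    return name, contribution, operating

ranked = sorted((margins(*p) for p in procedures),
                key=lambda r: r[2], reverse=True)
```

A procedure can carry a positive contribution margin yet a negative operating margin once overhead is allocated, which is the distinction behind curtailing "high-risk procedures with negative contribution margin".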

  13. An updated animal model capturing both the cognitive and emotional features of post-traumatic stress disorder (PTSD).

    Science.gov (United States)

    Berardi, Andrea; Trezza, Viviana; Palmery, Maura; Trabace, Luigia; Cuomo, Vincenzo; Campolongo, Patrizia

    2014-01-01

    The newly released Diagnostic and Statistical Manual of Mental Disorders (DSM-5) defines post-traumatic stress disorder (PTSD) as a "trauma and stressor-related disorder". PTSD pathogenesis relies on paradoxical changes of emotional memory processing induced by the trauma exposure and associated with emotional dysfunction. Several animal models of PTSD have been validated and are currently used. Each one mimics a particular subset of the disorder with particular emphasis, mainly driven by the past classification of PTSD in the DSM-4, on the emotional features. In view of the recent update in the DSM-5, our aim was to develop, by using well-validated paradigms, a modified model of PTSD able to mimic at the same time both the cognitive and emotional features of the disease. We exposed male rats to either a piece of worn cat collar or to a series of inescapable footshocks paired with a PTSD risk factor, i.e., social isolation. Animals were subsequently re-exposed to the conditioned contexts at different time intervals in order to test memory retention for the stressors. In addition, footshock-exposed rats were tested in the elevated-plus-maze and social interaction tests. We found that rats exposed to a cat collar exhibited an acute fear response that did not lead to enduring memory retention. Conversely, footshock-exposed rats expressed a successful retention of the stressful experience at 1, 7, 14, 21 and 56 post-exposure days. Footshock-exposed rats displayed an anxious behavioral profile in the social interaction test and a significantly reduced locomotor activity in the elevated-plus-maze test. These dysfunctions were not observed when animals were socially housed, thus highlighting a social buffering effect in the development of the pathology. Our results underline the good validity of a footshock-based paradigm paired with social isolation as a PTSD animal model, able to mimic at the same time both some of the enduring cognitive and emotional facets of the

  14. Capturing both the cognitive and emotional features of post-traumatic stress disorder (PTSD) in rats: An updated animal model

    Directory of Open Access Journals (Sweden)

    Berardi, Andrea

    2014-04-01

    Full Text Available The newly released Diagnostic and Statistical Manual of Mental Disorders (DSM-5) defines post-traumatic stress disorder (PTSD) as a "trauma and stressor-related disorder". PTSD pathogenesis relies on paradoxical changes of emotional memory processing induced by the trauma exposure and associated with emotional dysfunction. Several animal models of PTSD have been validated and are currently used. Each one mimics a particular subset of the disorder with particular emphasis, mainly driven by the past classification of PTSD in the DSM-4, on the emotional features. In view of the recent update in the DSM-5, our aim was to develop, by using well-validated paradigms, a modified model of PTSD able to mimic at the same time both the cognitive and emotional features of the disease. We exposed male rats to either a piece of worn cat collar or to a series of inescapable footshocks paired with a PTSD risk factor, i.e., social isolation. Animals were subsequently re-exposed to the conditioned contexts at different time intervals in order to test memory retention for the stressors. In addition, footshock-exposed rats were tested in the elevated-plus-maze and social interaction tests. We found that rats exposed to a cat collar exhibited an acute fear response that did not lead to enduring memory retention. Conversely, footshock-exposed rats expressed a successful retention of the stressful experience at 1, 7, 14, 21 and 56 post-exposure days. Footshock-exposed rats displayed an anxious behavioral profile in the social interaction test and a significantly reduced locomotor activity in the elevated-plus-maze test. These dysfunctions were not observed when animals were socially housed, thus highlighting a social buffering effect in the development of the pathology. 
Our results underline the good validity of a footshock-based paradigm paired with social isolation as a PTSD animal model, able to mimic at the same time both some of the enduring cognitive and emotional facets

  16. Estimating multilevel logistic regression models when the number of clusters is low: a comparison of different statistical software procedures.

    Science.gov (United States)

    Austin, Peter C

    2010-04-22

    Multilevel logistic regression models are increasingly being used to analyze clustered data in medical, public health, epidemiological, and educational research. Procedures for estimating the parameters of such models are available in many statistical software packages. There is currently little evidence on the minimum number of clusters necessary to reliably fit multilevel regression models. We conducted a Monte Carlo study to compare the performance of different statistical software procedures for estimating multilevel logistic regression models when the number of clusters was low. We examined procedures available in BUGS, HLM, R, SAS, and Stata. We found that there were qualitative differences in the performance of different software procedures for estimating multilevel logistic models when the number of clusters was low. Among the likelihood-based procedures, estimation methods based on adaptive Gauss-Hermite approximations to the likelihood (glmer in R and xtlogit in Stata) or adaptive Gaussian quadrature (Proc NLMIXED in SAS) tended to have superior performance for estimating variance components when the number of clusters was small, compared to software procedures based on penalized quasi-likelihood. However, only Bayesian estimation with BUGS allowed for accurate estimation of variance components when there were fewer than 10 clusters. For all statistical software procedures, estimation of variance components tended to be poor when there were only five subjects per cluster, regardless of the number of clusters.
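
Why variance components are so fragile with few clusters can be seen from a crude moment estimator: with five clusters, the between-cluster variance rests on just five numbers. The sketch below uses made-up cluster counts and is only an illustration, not a substitute for the likelihood-based procedures compared in the study (glmer, xtlogit, PROC NLMIXED) or Bayesian estimation in BUGS:

```python
import math

# Toy clustered binary data: cluster id -> (successes, trials).
# The counts are invented for illustration.
clusters = {"A": (30, 50), "B": (12, 50), "C": (41, 50),
            "D": (25, 50), "E": (18, 50)}

def empirical_logit(k, n):
    """Empirical log-odds with a 0.5 correction so k = 0 or k = n
    does not produce log(0)."""
    return math.log((k + 0.5) / (n - k + 0.5))

logits = [empirical_logit(k, n) for k, n in clusters.values()]
mean = sum(logits) / len(logits)
# Crude moment estimate of the between-cluster variance on the logit
# scale: with only five clusters it rests on five numbers, which is
# why variance components are so poorly estimated at low cluster counts.
between_var = sum((x - mean) ** 2 for x in logits) / (len(logits) - 1)
```

The estimator also absorbs binomial sampling noise into the variance estimate, a bias that the adaptive-quadrature and Bayesian procedures examined in the study are designed to correct.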

  17. Updated global soil map for the Weather Research and Forecasting model and soil moisture initialization for the Noah land surface model

    Science.gov (United States)

    DY, C. Y.; Fung, J. C. H.

    2016-08-01

    A meteorological model requires accurate initial and boundary conditions to obtain realistic numerical weather predictions. The land surface controls the surface heat and moisture exchanges, which are determined by the physical properties of the soil and by soil state variables, and which subsequently exert an effect on boundary-layer meteorology. The initial and boundary conditions of soil moisture are currently obtained from the National Centers for Environmental Prediction FNL (Final) Operational Global Analysis data, which are collected operationally on a 1° by 1° grid every 6 h. Another input to the model is the soil map generated from the Food and Agriculture Organization of the United Nations - United Nations Educational, Scientific and Cultural Organization (FAO-UNESCO) soil database, which combines several soil surveys from around the world. Both the soil moisture from the FNL analysis data and the default soil map lack accuracy and feature coarse resolutions, particularly for certain areas of China. In this study, we update the global soil map with data from Beijing Normal University on a 1 km by 1 km grid and propose an alternative method of soil moisture initialization. Simulations with the Weather Research and Forecasting model show that spinning up the soil moisture improves near-surface temperature and relative humidity predictions for the different types of soil moisture initialization, and process analysis explains these improvements as well as the improvement in planetary boundary layer height.

  18. Mathematical Model and Calibration Procedure of a PSD Sensor Used in Local Positioning Systems.

    Science.gov (United States)

    Rodríguez-Navarro, David; Lázaro-Galilea, José Luis; Bravo-Muñoz, Ignacio; Gardel-Vicente, Alfredo; Domingo-Perez, Francisco; Tsirigotis, Georgios

    2016-09-15

    Here, we propose a mathematical model and a calibration procedure for a PSD (position sensitive device) sensor equipped with an optical system, to enable accurate measurement of the angle of arrival of one or more beams of light emitted by infrared (IR) transmitters located at distances of between 4 and 6 m. To achieve this objective, it was necessary to characterize the intrinsic parameters that model the system and obtain their values. This first approach was based on a pin-hole model, to which system nonlinearities were added, and this was used to model the points obtained with the nA currents provided by the PSD. In addition, we analyzed the main sources of error, including PSD sensor signal noise, gain factor imbalances and PSD sensor distortion. The results indicated that the proposed model and method provided satisfactory calibration and yielded precise parameter values, enabling accurate measurement of the angle of arrival with a low degree of error, as evidenced by the experimental results.
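
Before the nonlinear corrections the paper adds, the underlying measurement chain is the classic one: the PSD electrode currents give the light-spot position, and the pin-hole geometry converts that position into an angle of arrival. A minimal 1-D sketch with illustrative numbers (not the paper's calibrated parameters):

```python
import math

def psd_position(i1, i2, length_mm):
    """1-D PSD: spot position from the two electrode photocurrents.
    The ideal (pre-calibration) relation is
    x = (L / 2) * (I2 - I1) / (I1 + I2)."""
    return 0.5 * length_mm * (i2 - i1) / (i1 + i2)

def angle_of_arrival(x_mm, focal_mm):
    """Pin-hole model: a spot displaced x from the optical centre
    corresponds to an incidence angle atan(x / f)."""
    return math.degrees(math.atan2(x_mm, focal_mm))

# Illustrative example: a 10 mm PSD behind a 12 mm pin-hole,
# photocurrents in nA.
x = psd_position(i1=40.0, i2=60.0, length_mm=10.0)
theta = angle_of_arrival(x, focal_mm=12.0)
```

The paper's contribution lies in what this sketch omits: modelling the nonlinearities, gain imbalances and distortion of the real sensor so that the angle estimate stays accurate at 4-6 m range.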

  19. Updating the model TREMOD - Mobile Machinery (TREMOD-MM); Aktualisierung des Modells TREMOD - Mobile Machinery (TREMOD-MM)

    Energy Technology Data Exchange (ETDEWEB)

    Helms, Hinrich; Lambrecht, Udo; Knoerr, Wolfram [ifeu - Institut fuer Energie- und Umweltforschung Heidelberg gGmbH, Heidelberg (Germany)

    2010-05-15

    In the context of the project ''Development of a model for the computation of the air pollutant emissions and the fuel consumption of combustion engines in mobile devices and machines'', the Institute for Energy and Environmental Research (ifeu, Heidelberg, Germany) created the model TREMOD-MM (TREMOD Mobile Machinery). It enables a detailed computation of the emissions from mobile devices and machines in agriculture, construction, forestry and gardening, as well as in recreational and passenger shipping. Highly differentiated data on age structure, engine power, usage and emission behaviour are taken into account, making it possible to compute emissions for different scenarios at a high level of detail.
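
Bottom-up inventory models of this type typically multiply machine population, rated power, load factor, operating hours and an emission factor. A minimal sketch with invented parameter values (not TREMOD-MM data):

```python
def annual_emissions_tonnes(population, power_kw, load_factor,
                            hours, ef_g_per_kwh):
    """Generic bottom-up inventory formula E = N * P * LF * t * EF:
    N machines of rated power P [kW] at load factor LF, running
    t hours per year with emission factor EF [g/kWh]."""
    grams = population * power_kw * load_factor * hours * ef_g_per_kwh
    return grams / 1.0e6  # g -> tonnes

# Illustrative example: 10 000 tractors, 60 kW rated power, 50 % load,
# 400 operating hours per year, NOx emission factor of 8 g/kWh.
nox = annual_emissions_tonnes(10_000, 60.0, 0.5, 400, 8.0)
```

Differentiating each factor by machine category, age class and emission stage is what allows a model like TREMOD-MM to produce scenario-specific inventories.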

  20. A computational model to investigate assumptions in the headturn preference procedure

    Directory of Open Access Journals (Sweden)

    Bergmann, Christina

    2013-10-01

    Full Text Available In this paper we use a computational model to investigate four assumptions that are tacitly present in interpreting the results of studies on infants' speech processing abilities using the Headturn Preference Procedure (HPP): (1) behavioural differences originate in different processing; (2) processing involves some form of recognition; (3) words are segmented from connected speech; and (4) differences between infants should not affect overall results. In addition, we investigate the impact of two potentially important aspects in the design and execution of the experiments: (a) the specific voices used in the two parts of HPP experiments (familiarisation and test) and (b) the experimenter's criterion for what is a sufficient headturn angle. The model is designed to maximise cognitive plausibility. It takes real speech as input, and it contains a module that converts the output of internal speech processing and recognition into headturns that can yield real-time listening preference measurements. Internal processing is based on distributed episodic representations in combination with a matching procedure based on the assumption that complex episodes can be decomposed as positive weighted sums of simpler constituents. Model simulations show that the first two assumptions hold under two different definitions of recognition. However, explicit segmentation is not necessary to simulate the behaviours observed in infant studies. Differences in attention span between infants can affect the outcomes of an experiment. The same holds for the experimenter's decision criterion. The speakers used in experiments affect outcomes in complex ways that require further investigation. The paper ends with recommendations for future studies using the HPP.
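
The matching step described above — decomposing a complex episode as a positive weighted sum of simpler constituents — amounts to a nonnegative least-squares fit. A minimal Python sketch using multiplicative updates (our illustration, not the paper's implementation):

```python
import numpy as np

def nonneg_weights(constituents, episode, n_iter=500):
    """Find nonnegative weights w such that constituents @ w ~ episode,
    using simple multiplicative updates (valid here because all
    entries of the inputs are nonnegative)."""
    A = np.asarray(constituents, dtype=float)
    b = np.asarray(episode, dtype=float)
    w = np.full(A.shape[1], 0.5)       # positive starting point
    AtA, Atb = A.T @ A, A.T @ b
    for _ in range(n_iter):
        w *= Atb / (AtA @ w + 1e-12)   # keeps weights nonnegative
    return w

# Toy case: an 'episode' built as 2 x constituent0 + 1 x constituent1
# should be recovered with weights close to (2, 1).
A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [0.0, 1.0]])
b = A @ np.array([2.0, 1.0])
w = nonneg_weights(A, b)
```

Because the update rule only rescales, the weights stay nonnegative throughout, matching the "positive weighted sums" constraint of the model's matching procedure.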