Townsend, Molly T; Sarigul-Klijn, Nesrin
2016-01-01
Simplified material models are commonly used in computational simulation of biological soft tissue to approximate the complicated material response and to minimize computational resources. However, the simulation of complex loadings, such as long-duration tissue swelling, necessitates complex models that are not easy to formulate. This paper strives to offer a comprehensive procedure for the updated Lagrangian formulation of various non-linear material models for the finite element analysis of biological soft tissues, including definitions of the Cauchy stress and the spatial tangential stiffness. The relationships between water content, osmotic pressure, ionic concentration and the pore pressure stress of the tissue are discussed, along with the merits of these models and their applications.
Institute of Scientific and Technical Information of China (English)
Kailei Liu; Zhijia Li; Cheng Yao; Ji Chen; Ke Zhang; Muhammad Saifullah
2016-01-01
The Kalman filter (KF) updating method has been widely used as an efficient means to assimilate real-time hydrological variables, reducing forecast uncertainty and providing improved forecasts. However, the accuracy of the KF relies heavily on the estimates of the state transition matrix and is limited by errors inherited from the parameters and variables of the flood forecasting models. A new real-time updating approach (named KN2K) is proposed by coupling the k-nearest neighbor (KNN) procedure with the KF for flood forecasting models. The nonparametric KNN algorithm, which predicts the response of a system on the basis of the k most representative predictors, remains efficient when the descriptions of the input-output mapping are insufficient. In this study, the KNN procedure is used to provide more accurate estimates of the state transition matrix and thereby extend the applicability of the KF. The updating performance of KN2K is investigated in the middle reach of the Huai River based on a one-dimensional hydraulic model with lead times ranging from 2 to 12 h. The forecasts from KN2K are compared with the observations, the original forecasts and the KF-updated forecasts. The results indicate that the KN2K method, with a Nash-Sutcliffe efficiency larger than 0.85 in the 12-h-ahead forecasts, has a significant advantage in accuracy and robustness over the KF method. It is demonstrated that improved updating results can be obtained through the use of the KNN procedure. The tests show that the KN2K method can be used as an effective tool for real-time flood forecasting.
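The KN2K idea, a nearest-neighbour lookup supplying the state-transition estimate that the Kalman correction then uses, can be caricatured in a few lines. This is a hypothetical scalar sketch, not the authors' implementation; the history pairs, k, and noise variances are invented for illustration.

```python
def knn_transition(history, x, k=3):
    """Estimate a scalar state-transition coefficient from the k past
    states most similar to the current state x (the KNN step)."""
    nearest = sorted(history, key=lambda pair: abs(pair[0] - x))[:k]
    return sum(nxt / prev for prev, nxt in nearest) / k

def kf_update(x_pred, p_pred, z, r):
    """Scalar Kalman correction of a prediction x_pred (variance p_pred)
    against an observation z (noise variance r)."""
    gain = p_pred / (p_pred + r)
    return x_pred + gain * (z - x_pred), (1.0 - gain) * p_pred

# toy history of (previous flow, next flow) pairs
history = [(10.0, 11.0), (12.0, 13.2), (20.0, 21.8), (30.0, 33.0)]
a = knn_transition(history, 11.0, k=2)   # KNN-estimated transition
x_pred = a * 11.0                        # one-step forecast
x_upd, p_upd = kf_update(x_pred, 1.0, 12.0, 0.5)
```

In the paper the same two ingredients operate on vector states of a hydraulic model; here everything is reduced to one dimension to keep the structure visible.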
Rapid Map Updating Procedures Using Orthophotos
Alrajhi, M.
2009-04-01
The General Directorate of Surveying and Mapping (GDSM) of the Ministry for Municipal and Rural Affairs (MOMRA) of the Kingdom of Saudi Arabia has the mandate for large-scale mapping of 220 Saudi Arabian cities. During the last 30 years all of these cities have been mapped in 3D at least once using stereo photogrammetric procedures. The output of these maps is digital vector files with more than 300 types of coded features. Mapping at the required scales of 1:10,000 for the urban and suburban areas and at 1:1,000 for the urban areas proper has been a lengthy and costly process, which did not lend itself to regular updating procedures. For this reason the major cities, where most of the development took place, have been newly mapped at about 10-year intervals. To record the changes of urban landscapes more rapidly, orthophotomapping has recently been introduced. Rather than waiting about 5 years for the line mapping of a large city after the inception of a mapping project, orthophotos could be produced a few months after a new aerial flight was made. While new, but slow, stereomapping in 3D provides accurate results in conformity with the usual urban mapping specifications, the geocoded superposition of outdated maps on the more recent orthophotos provided very useful monitoring of urban changes. At the same time, the use of orthophotos opens up a new possibility for urban map updating by on-screen digitizing in 2D. This can at least be done for the most relevant features, such as buildings, walls, roads and vegetation. As this is a faster method than 3D stereo plotting, a lesser geometric accuracy is to be expected for the on-screen digitization. There is a need to investigate and compare the two methods with respect to accuracy and speed of operation as a basis for deciding whether to continue with new 3D stereomapping every 10 years or to introduce rapid map updating in 2D via on-screen digitization every 3 to 5 years. This presentation is about
Adjustment or updating of models
Indian Academy of Sciences (India)
D J Ewins
2000-06-01
In this paper, first a review of the terminology used in model adjustment or updating is presented. This is followed by an outline of the major updating algorithms currently available, together with a discussion of the advantages and disadvantages of each, and the current state of the art of this important part of optimum design technology.
Update on Nonsurgical Lung Volume Reduction Procedures
Directory of Open Access Journals (Sweden)
J. Alberto Neder
2016-01-01
There has been a surge of interest in endoscopic lung volume reduction (ELVR) strategies for advanced COPD. Valve implants, coil implants, biological LVR (BioLVR), bronchial thermal vapour ablation, and airway stents are used to induce lung deflation with the ultimate goal of improving respiratory mechanics and chronic dyspnea. Patients presenting with severe air trapping (e.g., inspiratory capacity/total lung capacity (IC/TLC) ratio ≤ 25% predicted) and thoracic hyperinflation (TLC > 150% predicted) have the greatest potential to derive benefit from ELVR procedures. Pre-LVRS or ELVR assessment should ideally include cardiological evaluation, high-resolution CT scan, ventilation and perfusion scintigraphy, full pulmonary function tests, and cardiopulmonary exercise testing. ELVR procedures are currently available in selected Canadian research centers as part of ethically approved clinical trials. If a decision is made to offer an ELVR procedure, one-way valves are the first option in the presence of complete lobar exclusion and no significant collateral ventilation. When the fissure is not complete, when collateral ventilation is evident in heterogeneous emphysema, or when emphysema is homogeneous, coil implants or BioLVR (in that order) are the next logical alternatives.
The update of the accounting procedures in Agricultural Cooperatives.
Directory of Open Access Journals (Sweden)
Rafael Enrique Viña Echevarría
2014-06-01
As part of the implementation of internal control in agricultural cooperatives according to the standards established by the General Controller of the Republic, and of the harmonization of accounting procedures with Cuban Accounting Standards, it is necessary to update the accounting procedure manuals that guide and regulate flows, times and registration bases under current legislation; this updating is the purpose of the discussion in this investigation. The results focus on the organizational dynamics of cooperatives, serving the agricultural cooperative sector and its relation to internal control and accounting management, guided by the economic and social policy guidelines of the Party and the Revolution, as well as on updating the procedure manuals. The study also revealed limitations in the application of internal control and accounting procedures under the regulations currently in force in Cuba, showing the need to continue their development.
Updating Small Generator Interconnection Procedures for New Market Conditions
Energy Technology Data Exchange (ETDEWEB)
Coddington, M.; Fox, K.; Stanfield, S.; Varnado, L.; Culley, T.; Sheehan, M.
2012-12-01
Federal and state regulators are faced with the challenge of keeping interconnection procedures updated against a backdrop of evolving technology, new codes and standards, and considerably transformed market conditions. This report is intended to educate policymakers and stakeholders on beneficial reforms that will keep interconnection processes efficient and cost-effective while maintaining a safe and reliable power system.
Empirical testing of forecast update procedure for seasonal products
DEFF Research Database (Denmark)
Wong, Chee Yew; Johansen, John
2008-01-01
of a toy supply chain. The theoretical simulation involves historical weekly consumer demand data for 122 toy products. The empirical test is then carried out in real-time with 291 toy products. The results show that the proposed forecast updating procedure: 1) reduced forecast errors of the annual...
Standard Review Plan Update and Development Program. Implementing Procedures Document
Energy Technology Data Exchange (ETDEWEB)
1992-05-01
This implementing procedures document (IPD) was prepared for use in implementing tasks under the standard review plan update and development program (SRP-UDP). The IPD provides comprehensive guidance and detailed procedures for SRP-UDP tasks. The IPD is mandatory for contractors performing work for the SRP-UDP. It is guidance for the staff. At the completion of the SRP-UDP, the IPD will be revised (to remove the UDP aspects) and will replace NRR Office Letter No. 800 as long-term maintenance procedures.
Model validation: Correlation for updating
Indian Academy of Sciences (India)
D J Ewins
2000-06-01
In this paper, a review is presented of the various methods which are available for the purpose of performing a systematic comparison and correlation between two sets of vibration data. In the present case, the application of interest is in conducting this correlation process as a prelude to model correlation or updating activity.
CTL Model Update for System Modifications
Ding, Yulin; Zhang, Yan; doi:10.1613/jair.2420
2011-01-01
Model checking is a promising technology, which has been applied for verification of many hardware and software systems. In this paper, we introduce the concept of model update towards the development of an automatic system modification tool that extends model checking functions. We define primitive update operations on the models of Computation Tree Logic (CTL) and formalize the principle of minimal change for CTL model update. These primitive update operations, together with the underlying minimal change principle, serve as the foundation for CTL model update. Essential semantic and computational characterizations are provided for our CTL model update approach. We then describe a formal algorithm that implements this approach. We also illustrate two case studies of CTL model updates for the well-known microwave oven example and the Andrew File System 1, from which we further propose a method to optimize the update results in complex system modifications.
Experimental model updating using frequency response functions
Hong, Yu; Liu, Xi; Dong, Xinjun; Wang, Yang; Pu, Qianhui
2016-04-01
In order to obtain a finite element (FE) model that can more accurately describe structural behaviors, experimental data measured from the actual structure can be used to update the FE model. The process is known as FE model updating. In this paper, a frequency response function (FRF)-based model updating approach is presented. The approach attempts to minimize the difference between analytical and experimental FRFs, while the experimental FRFs are calculated using simultaneously measured dynamic excitation and corresponding structural responses. In this study, the FRF-based model updating method is validated through laboratory experiments on a four-story shear-frame structure. To obtain the experimental FRFs, shake table tests and impact hammer tests are performed. The FRF-based model updating method is shown to successfully update the stiffness, mass and damping parameters of the four-story structure, so that the analytical and experimental FRFs match well with each other.
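The essence of FRF-based updating, minimizing the misfit between experimental and analytical FRFs over the updating parameters, can be sketched for a single-degree-of-freedom oscillator. The model, parameter values, and brute-force search below are illustrative assumptions, not the four-story implementation from the paper.

```python
def frf(k, m, c, w):
    """Receptance FRF H(w) = 1 / (k - m*w^2 + i*c*w) of a single-DOF
    oscillator: a minimal stand-in for an analytical FRF."""
    return 1.0 / complex(k - m * w ** 2, c * w)

m_true, c_true, k_true = 1.0, 0.4, 250.0
freqs = [0.5 * n for n in range(1, 60)]
h_exp = [frf(k_true, m_true, c_true, w) for w in freqs]  # "measured" FRFs

def frf_error(k):
    """Sum of squared FRF differences, the quantity to be minimized."""
    return sum(abs(frf(k, m_true, c_true, w) - h) ** 2
               for w, h in zip(freqs, h_exp))

# brute-force 1-D search over the stiffness updating parameter
k_updated = min((float(k) for k in range(100, 401)), key=frf_error)
```

A real updating run would search over many stiffness, mass and damping parameters with a gradient-based optimizer, but the objective function has exactly this shape.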
Model Updating Nonlinear System Identification Toolbox Project
National Aeronautics and Space Administration — ZONA Technology (ZONA) proposes to develop an enhanced model updating nonlinear system identification (MUNSID) methodology that utilizes flight data with...
Jang, Jinwoo; Smyth, Andrew W.
2017-01-01
The objective of structural model updating is to reduce inherent modeling errors in Finite Element (FE) models due to simplifications, idealized connections, and uncertainties in material properties. Updated FE models, which have fewer discrepancies with real structures, give more precise predictions of dynamic behaviors for future analyses. However, model updating becomes more difficult when applied to civil structures with a large number of structural components and complicated connections. In this paper, a full-scale FE model of a major long-span bridge has been updated for improved consistency with real measured data. Two methods are applied to improve the model updating process. The first method focuses on improving the agreement of the updated mode shapes with the measured data. A nonlinear inequality constraint equation is added to an optimization procedure, providing the capability to keep updated mode shapes in reasonable agreement with those observed. An interior point algorithm deals with nonlinearity in the objective function and constraints. The second method finds efficient updating parameters in a more systematic way. The selection of updating parameters in FE models is essential for a successful updating result because the parameters are directly related to the modal properties of dynamic systems. An in-depth sensitivity analysis is carried out in an effort to precisely understand the effects of physical parameters in the FE model on natural frequencies. Based on the sensitivity analysis, cluster analysis is conducted to find an efficient set of updating parameters.
Dynamic Model Updating Using Virtual Antiresonances
Directory of Open Access Journals (Sweden)
Walter D’Ambrogio
2004-01-01
This paper considers an extension of the model updating method that minimizes the antiresonance error in addition to the natural frequency error. By defining virtual antiresonances, this extension allows the use of previously identified modal data. Virtual antiresonances can be evaluated from a truncated modal expansion and do not correspond to any physical system. The method is applied to the finite element model updating of the GARTEUR benchmark, used within a European project on updating. Results are compared with those previously obtained by estimating actual antiresonances after computing low- and high-frequency residuals, and with results obtained by using the correlation (MAC) between identified and analytical mode shapes.
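The MAC correlation mentioned at the end of the abstract has a one-line definition: the normalized squared inner product of two mode shapes. The vectors below are hypothetical examples, not the GARTEUR data.

```python
def mac(phi_a, phi_x):
    """Modal Assurance Criterion between an analytical and an identified
    (real-valued) mode shape; 1 means perfectly correlated, 0 orthogonal."""
    num = sum(a * x for a, x in zip(phi_a, phi_x)) ** 2
    den = sum(a * a for a in phi_a) * sum(x * x for x in phi_x)
    return num / den

mode_fe   = [1.0, 0.80, 0.50, 0.20]   # analytical shape (hypothetical)
mode_test = [1.0, 0.82, 0.48, 0.21]   # identified shape (hypothetical)
correlation = mac(mode_fe, mode_test)
```

Values near 1 indicate that the identified and analytical shapes describe the same mode, which is what makes MAC usable as a correlation metric in updating.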
Model Updating Nonlinear System Identification Toolbox Project
National Aeronautics and Space Administration — ZONA Technology proposes to develop an enhanced model updating nonlinear system identification (MUNSID) methodology by adopting the flight data with state-of-the-art...
Stochastic model updating using distance discrimination analysis
Institute of Scientific and Technical Information of China (English)
Deng Zhongmin; Bi Sifeng; Sez Atamturktur
2014-01-01
This manuscript presents a stochastic model updating method, taking both uncertainties in models and variability in testing into account. The updated finite element (FE) models obtained through the proposed technique can aid in the analysis and design of structural systems. The authors developed a stochastic model updating method integrating distance discrimination analysis (DDA) and an advanced Monte Carlo (MC) technique to (1) enable more efficient MC by using a response surface model, (2) calibrate parameters with an iterative test-analysis correlation based upon DDA, and (3) utilize and compare different distance functions as correlation metrics. Using DDA, the influence of distance functions on model updating results is analyzed. The proposed stochastic method makes it possible to obtain a precise model updating outcome at acceptable calculation cost. The stochastic method is demonstrated on a helicopter case study updated using both Euclidean and Mahalanobis distance metrics. It is observed that the selected distance function influences the iterative calibration process and thus the calibration outcome, indicating that an integration of different metrics might yield improved results.
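The difference between the two correlation metrics compared in the paper is easy to see in code: the Mahalanobis distance weights each residual by its variability, so a small deviation in a low-scatter feature counts more. The feature values and variances below are invented, and a diagonal covariance is assumed for brevity.

```python
import math

def euclidean(x, mu):
    """Plain Euclidean distance between a sample and a reference point."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, mu)))

def mahalanobis_diag(x, mu, var):
    """Mahalanobis distance for a diagonal covariance: each squared
    deviation is scaled by the variance of its feature."""
    return math.sqrt(sum((a - b) ** 2 / v for a, b, v in zip(x, mu, var)))

features  = [5.2, 0.8]     # two test-analysis residuals (hypothetical)
centre    = [5.0, 1.0]
variances = [4.0, 0.01]    # feature 2 scatters far less than feature 1

d_e = euclidean(features, centre)
d_m = mahalanobis_diag(features, centre, variances)
```

Both deviations are 0.2 in absolute terms, yet the Mahalanobis distance is dominated by the second feature, which is exactly why the choice of metric can steer the iterative calibration differently.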
Institute of Scientific and Technical Information of China (English)
Yang Liu; DeJun Wang; Jun Ma; Yang Li
2014-01-01
To investigate the application of meta-models to finite element (FE) model updating of structures, the performance of two popular meta-models, i.e., the Kriging model and the response surface model (RSM), was compared in detail. Firstly, the two kinds of meta-models were introduced briefly. Secondly, some key issues in applying meta-models to FE model updating of structures were proposed and discussed, and advice was presented on selecting a reasonable meta-model for updating the FE model of structures. Finally, the procedure of FE model updating based on a meta-model was implemented by updating the FE model of a truss bridge model with the measured modal parameters. The results showed that the Kriging model was more suitable for FE model updating of complex structures.
Model updating in flexible-link multibody systems
Belotti, R.; Caneva, G.; Palomba, I.; Richiedei, D.; Trevisani, A.
2016-09-01
The dynamic response of flexible-link multibody systems (FLMSs) can be predicted through nonlinear models based on finite elements, to describe the coupling between rigid-body and elastic behaviour. Their accuracy should be as high as possible to synthesize controllers and observers. Model updating based on experimental measurements is hence necessary. By taking advantage of experimental modal analysis, this work proposes a model updating procedure for FLMSs and applies it experimentally to a planar robot. Indeed, several peculiarities of the model of an FLMS should be carefully tackled. On the one hand, nonlinear models of an FLMS should be linearized about static equilibrium configurations. On the other hand, the experimental mode shapes should be corrected to be consistent with the elastic displacements represented in the model, which are defined with respect to a fictitious moving reference (the equivalent rigid link system). Then, since rotational degrees of freedom are also represented in the model, interpolation of the experimental data should be performed to match the model displacement vector. Model updating has finally been cast as an optimization problem in the presence of bounds on the feasible values, also adopting methods to improve the numerical conditioning and to compute meaningful updated inertial and elastic parameters.
High-speed AMB machining spindle model updating and model validation
Wroblewski, Adam C.; Sawicki, Jerzy T.; Pesch, Alexander H.
2011-04-01
High-Speed Machining (HSM) spindles equipped with Active Magnetic Bearings (AMBs) have been envisioned to be capable of automated self-identification and self-optimization in efforts to accurately calculate parameters for stable high-speed machining operation. With this in mind, this work presents rotor model development accompanied by an automated model-updating methodology, followed by updated model validation. The model updating methodology is developed to address the dynamic inaccuracies of the nominal open-loop plant model when compared with experimental open-loop transfer function data obtained by the built-in AMB sensors. The nominal open-loop model is altered by utilizing an unconstrained optimization algorithm to adjust only parameters that are a result of engineering assumptions and simplifications, in this case Young's modulus of selected finite elements. Minimizing the error of both resonance and anti-resonance frequencies simultaneously (between model and experimental data) takes into account rotor natural frequencies and mode shape information. To verify the predictive ability of the updated rotor model, its performance is assessed at the tool location, which is independent of the experimental transfer function data used in the model updating procedures. Verification of the updated model is carried out with complementary temporal and spatial response comparisons, substantiating that the updating methodology is effective for derivation of open-loop models for predictive use.
Neuman systems model in Holland: an update.
Merks, André; Verberk, Frans; de Kuiper, Marlou; Lowry, Lois W
2012-10-01
The authors of this column, leading members of the International Neuman Systems Model Association, provide an update on the use of the Neuman systems model in Holland and document the various changes in The Netherlands that have influenced the use of the model in that country. The model's links to systems theory and stress theory are discussed, as well as a shift to greater emphasis on patient self-management. The model is also linked to healthcare quality improvement and interprofessional collaboration in Holland.
A Provenance Tracking Model for Data Updates
Directory of Open Access Journals (Sweden)
Gabriel Ciobanu
2012-08-01
For data-centric systems, provenance tracking is particularly important when the system is open and decentralised, such as the Web of Linked Data. In this paper, a concise but expressive calculus which models data updates is presented. The calculus is used to provide an operational semantics for a system where data and updates interact concurrently. The operational semantics of the calculus also tracks the provenance of data with respect to updates. This provides a new formal semantics extending provenance diagrams which takes into account the execution of processes in a concurrent setting. Moreover, a sound and complete model for the calculus based on ideals of series-parallel DAGs is provided. The notion of provenance introduced can be used as a subjective indicator of the quality of data in concurrent interacting systems.
An Updated AP2 Beamline TURTLE Model
Energy Technology Data Exchange (ETDEWEB)
Gormley, M.; O'Day, S.
1991-08-23
This note describes a TURTLE model of the AP2 beamline. This model was created by D. Johnson and improved by J. Hangst. The authors of this note have made additional improvements which reflect recent element and magnet setting changes. The magnet characteristics measurements and survey data compiled to update the model will be presented. A printout of the actual TURTLE deck may be found in appendix A.
Update on procedure-related risks for prenatal diagnosis techniques
DEFF Research Database (Denmark)
Tabor, Ann; Alfirevic, Zarko
2010-01-01
Introduction: As a consequence of the introduction of effective screening methods, the number of invasive prenatal diagnostic procedures is steadily declining. The aim of this review is to summarize the risks related to these procedures. Material and Methods: Review of the literature. Results: Data...... from randomised controlled trials as well as from systematic reviews and a large national registry study are consistent with a procedure-related miscarriage rate of 0.5-1.0% for amniocentesis as well as for chorionic villus sampling (CVS). In single-center studies performance may be remarkably good due...... not be performed before 15 + 0 weeks' gestation. CVS on the other hand should not be performed before 10 weeks' gestation due to a possible increase in risk of limb reduction defects. Discussion: Experienced operators have a higher success rate and a lower complication rate. The decreasing number of prenatal...
New Jersey Vocational Student Organizations Policies and Procedures Manual. Update
New Jersey Department of Education, 2005
2005-01-01
The purpose of this manual is to provide information regarding the policies and procedures required for daily operation of the state- and local-level activities and events of New Jersey's VSOs. There is variation by organization since each was developed independently with separate parent organizations, constitutions, bylaws, rules, regulations,…
OSPREY Model Development Status Update
Energy Technology Data Exchange (ETDEWEB)
Veronica J Rutledge
2014-04-01
During the processing of used nuclear fuel, volatile radionuclides will be discharged to the atmosphere if no recovery processes are in place to limit their release. The volatile radionuclides of concern are 3H, 14C, 85Kr, and 129I. Methods are being developed, via adsorption and absorption unit operations, to capture these radionuclides. It is necessary to model these unit operations to aid in the evaluation of technologies and in the future development of an advanced used nuclear fuel processing plant. A collaboration between Fuel Cycle Research and Development Offgas Sigma Team member INL and a NEUP grant including ORNL, Syracuse University, and Georgia Institute of Technology has been formed to develop off-gas models and support off-gas research. Georgia Institute of Technology is developing a fundamental-level model to describe the equilibrium and kinetics of the adsorption process, which is to be integrated with OSPREY. This report discusses the progress made on expanding OSPREY to multiple components and on the integration of macroscale and microscale level models. Also included in this report is a brief OSPREY user guide.
Energy Technology Data Exchange (ETDEWEB)
None, None
2011-06-30
The Miami Science Museum energy model has been used during DD to test the building's potential for energy savings as measured by ASHRAE 90.1-2007 Appendix G. This standard compares the designed building's yearly energy cost with that of a code-compliant building. The building is currently on track to show a 20% or better improvement over the ASHRAE 90.1-2007 Appendix G baseline; this performance would ensure minimum compliance with both LEED 2.2 and the current Florida Energy Code, both of which reference a less strict version of ASHRAE 90.1. In addition to being an exercise in energy code compliance, the energy model has been used as a design tool to show the relative performance benefit of individual energy conservation measures (ECMs). These ECMs are areas where the design team has improved upon code-minimum design paths to improve the energy performance of the building. By adding ECMs one at a time to a code-compliant baseline building, the current analysis identifies which ECMs are most effective in helping the building meet its energy performance goals.
Finite element modelling and updating of a lively footbridge: The complete process
Živanović, Stana; Pavic, Aleksandar; Reynolds, Paul
2007-03-01
The finite element (FE) model updating technology was originally developed in the aerospace and mechanical engineering disciplines to automatically update numerical models of structures to match their experimentally measured counterparts. The process of updating identifies the drawbacks in the FE modelling, and the updated FE model can be used to produce more reliable results in further dynamic analysis. In the last decade, the updating technology has been introduced into civil structural engineering, where it can serve as an advanced tool for obtaining reliable modal properties of large structures. The updating process has four key phases: initial FE modelling, modal testing, manual model tuning and automatic updating (conducted using specialist software). However, the published literature does not connect these phases well, although this is crucial when implementing the updating technology. This paper therefore aims to clarify the importance of this linking and to describe the complete model updating process as applicable in civil structural engineering. The complete process consisting of the four phases is outlined and brief theory is presented as appropriate. Then, the procedure is implemented on a lively steel box girder footbridge. It was found that even a very detailed initial FE model underestimated the natural frequencies of all seven experimentally identified modes of vibration, with the maximum error being almost 30%. Manual FE model tuning by trial and error found that flexible supports in the longitudinal direction should be introduced at the girder ends to improve correlation between the measured and FE-calculated modes. This significantly reduced the maximum frequency error to only 4%. It was demonstrated that only then could the FE model be automatically updated in a meaningful way. The automatic updating was successfully conducted by updating 22 uncertain structural parameters. Finally, a physical interpretation of all parameter changes is discussed. This
Update on procedure-related risks for prenatal diagnosis techniques
DEFF Research Database (Denmark)
Tabor, Ann; Alfirevic, Zarko
2010-01-01
from randomised controlled trials as well as from systematic reviews and a large national registry study are consistent with a procedure-related miscarriage rate of 0.5-1.0% for amniocentesis as well as for chorionic villus sampling (CVS). In single-center studies performance may be remarkably good due...... to very skilled operators, but these figures cannot be used for general counselling. Amniocentesis performed prior to 15 weeks had a significantly higher miscarriage rate than CVS and mid-trimester amniocentesis, and also increased the risk of talipes equinovarus. Amniocentesis should therefore...
Assessment of stochastically updated finite element models using reliability indicator
Hua, X. G.; Wen, Q.; Ni, Y. Q.; Chen, Z. Q.
2017-01-01
Finite element (FE) model updating techniques have been a viable approach to correcting an initial mathematical model based on test data. Validation of the updated FE models is usually conducted by comparing model predictions with independent test data that have not been used for model updating. This approach of model validation cannot be readily applied in the case of a stochastically updated FE model. Recognizing that structural reliability is a major decision factor throughout the lifecycle of a structure, this study investigates the use of structural reliability as a measure for assessing the quality of stochastically updated FE models. A recently developed perturbation method for stochastic FE model updating is first applied to attain the stochastically updated models by using the measured modal parameters with uncertainty. The reliability index and failure probability for predefined limit states are computed for the initial and the stochastically updated models, respectively, and are compared with those obtained from the 'true' model to assess the quality of the two models. Numerical simulation of a truss bridge is provided as an example. The simulated modal parameters involving different uncertainty magnitudes are used to update an initial model of the bridge. It is shown that the reliability index obtained from the updated model is much closer to the true reliability index than that obtained from the initial model in the case of small uncertainty magnitude; in the case of large uncertainty magnitude, the reliability index computed from the initial model rather than from the updated model is closer to the true value. The present study confirms the usefulness of measurement-calibrated FE models and also highlights the importance of uncertainty reduction in test data for reliable model updating and reliability evaluation.
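For a linear limit state g = R - S with independent normal resistance and load effect, the reliability index and failure probability used as assessment measures have closed forms. The numbers below are hypothetical, chosen only to exercise the formulas.

```python
import math

def reliability_index(mu_r, sig_r, mu_s, sig_s):
    """beta = (mu_R - mu_S) / sqrt(sig_R^2 + sig_S^2) for g = R - S."""
    return (mu_r - mu_s) / math.sqrt(sig_r ** 2 + sig_s ** 2)

def failure_probability(beta):
    """P(g < 0) = Phi(-beta), via the complementary error function."""
    return 0.5 * math.erfc(beta / math.sqrt(2.0))

beta = reliability_index(mu_r=30.0, sig_r=3.0, mu_s=20.0, sig_s=4.0)
pf = failure_probability(beta)
```

In the paper the limit states of the truss bridge are more involved and the statistics of R come from the stochastically updated model, but comparing beta values between the initial, updated and 'true' models follows the same logic.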
Perna, L.; Pezzopane, M.; Pietrella, M.; Zolesi, B.; Cander, L. R.
2017-09-01
The SIRM model proposed by Zolesi et al. (1993, 1996) is an ionospheric regional model for predicting the vertical-sounding characteristics that has been frequently used in developing ionospheric web prediction services (Zolesi and Cander, 2014). Recently the model and its outputs were implemented in the framework of two European projects: DIAS (DIgital upper Atmosphere Server; http://www.iono.noa.gr/DIAS/) (Belehaki et al., 2005, 2015) and ESPAS (Near-Earth Space Data Infrastructure for e-Science; http://www.espas-fp7.eu/) (Belehaki et al., 2016). In this paper an updated version of the SIRM model, called SIRMPol, is described and the corresponding outputs in terms of the F2-layer critical frequency (foF2) are compared with values recorded at the mid-latitude station of Rome (41.8°N, 12.5°E) for extremely high (year 1958) and low (years 2008 and 2009) solar activity. The main novelties introduced in the SIRMPol model are: (1) an extension of the Rome ionosonde input dataset that, besides data from 1957 to 1987, also includes data from 1988 to 2007; (2) the use of second-order polynomial regressions, instead of linear ones, to fit the relation of foF2 vs. the solar activity index R12; (3) the use of polynomial relations, instead of linear ones, to fit the relations of A0 vs. R12, An vs. R12 and Yn vs. R12, where A0, An and Yn are the coefficients of the Fourier analysis performed by the SIRM model to reproduce the values calculated using the relations in (2). The obtained results show that SIRMPol outputs are better than those of the SIRM model. As the SIRMPol model represents only a partial updating of the SIRM model, based on inputs from only Rome ionosonde data, it can be considered a particular case of a single-station model. Nevertheless, the development of the SIRMPol model allowed some useful guidelines to be derived for a future complete and more accurate updating of the SIRM model, from which both DIAS and ESPAS could benefit.
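The switch from linear to second-order regressions of foF2 against R12 (novelty (2) above) amounts to a small least-squares problem. The sketch below fits a quadratic by solving the 3x3 normal equations with no external libraries; the monthly median values are synthetic, not the Rome dataset.

```python
def quadratic_fit(xs, ys):
    """Least-squares fit y ~ a0 + a1*x + a2*x^2 by solving the 3x3
    normal equations with Gauss-Jordan elimination."""
    s = [sum(x ** k for x in xs) for k in range(5)]                 # moment sums
    t = [sum(y * x ** k for x, y in zip(xs, ys)) for k in range(3)]
    A = [[s[0], s[1], s[2], t[0]],
         [s[1], s[2], s[3], t[1]],
         [s[2], s[3], s[4], t[2]]]
    for i in range(3):
        p = A[i][i]
        A[i] = [v / p for v in A[i]]             # normalize pivot row
        for j in range(3):
            if j != i:                            # eliminate column i elsewhere
                f = A[j][i]
                A[j] = [vj - f * vi for vj, vi in zip(A[j], A[i])]
    return A[0][3], A[1][3], A[2][3]

# synthetic foF2 medians that grow nonlinearly with R12 (hypothetical values)
r12  = [0.0, 25.0, 50.0, 75.0, 100.0, 125.0, 150.0]
fof2 = [4.0 + 0.04 * r - 8e-5 * r * r for r in r12]
a0, a1, a2 = quadratic_fit(r12, fof2)
```

Because the synthetic data are exactly quadratic, the fit recovers the generating coefficients; with real monthly medians the quadratic would instead capture the saturation of foF2 at high solar activity that a linear fit misses.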
Novel procedure for characterizing nonlinear systems with memory: 2017 update
Nuttall, Albert H.; Katz, Richard A.; Hughes, Derke R.; Koch, Robert M.
2017-05-01
The present article discusses novel improvements in nonlinear signal processing made by the prime algorithm developer, Dr. Albert H. Nuttall, and co-authors, a consortium of research scientists from the Naval Undersea Warfare Center Division, Newport, RI. The algorithm, called the Nuttall-Wiener-Volterra or 'NWV' algorithm, is named for its principal contributors [1], [2], [3]. The NWV algorithm significantly reduces the computational workload for characterizing nonlinear systems with memory. Following this formulation, two measurement waveforms are required in order to characterize a specified nonlinear system under consideration: (1) an excitation input waveform, x(t) (the transmitted signal); and (2) a response output waveform, z(t) (the received signal). Given these two measurement waveforms for a given propagation channel, a 'kernel' or 'channel response', h = [h0, h1, h2, h3], between the two measurement points is computed via a least squares approach that optimizes modeled kernel values by performing a best fit between the measured response z(t) and a modeled response y(t). New techniques significantly diminish the exponential growth of the number of computed kernel coefficients at second and third order and alleviate the Curse of Dimensionality (COD) in order to realize practical nonlinear solutions of scientific and engineering interest.
Can model updating tell the truth?
Energy Technology Data Exchange (ETDEWEB)
Hemez, F.M.
1998-02-01
This paper discusses the extent to which updating methods may be able to correct finite element models in such a way that the test structure is better simulated. After unifying some of the most popular modal residues used as the basis for optimization algorithms, the relationship between modal residues and model correlation is investigated. This theoretical approach leads to an error estimator that may be implemented to provide an a priori upper bound on a model's predictive quality relative to test data. These estimates, however, assume that a full measurement set is available. Finally, an application example is presented that illustrates the effectiveness of the proposed estimator when fewer measurement points than degrees of freedom are available.
Update of the SPS Impedance Model
Salvant, B; Zannini, C; Arduini, G; Berrig, O; Caspers, F; Grudiev, A; Métral, E; Rumolo, G; Shaposhnikova, E; Zotter, B; Migliorati, M; Spataro, B
2010-01-01
The beam coupling impedance of the CERN SPS is expected to be one of the limitations to an intensity upgrade of the LHC complex. In order to be able to reduce the SPS impedance, its main contributors need to be identified. An impedance model for the SPS has been gathered from theoretical calculations, electromagnetic simulations and bench measurements of single SPS elements. The current model accounts for the longitudinal and transverse impedance of the kickers, the horizontal and vertical electrostatic beam position monitors, the RF cavities and the 6.7 km beam pipe. In order to assess the validity of this model, macroparticle simulations of a bunch interacting with this updated SPS impedance model are compared to measurements performed with the SPS beam.
DEFF Research Database (Denmark)
Hansen, Lisbeth S.; Borup, Morten; Møller, A.;
2011-01-01
… to eliminate some of the unavoidable discrepancies between model and reality. The latter can partly be achieved by using the commercial tool MOUSE UPDATE, which is capable of inserting measured water levels from the system into the distributed, physically based MOUSE model. This study evaluates and documents … the performance of the updating procedure for flow forecasting. Measured water levels in combination with rain gauge input are used as the basis for the evaluation. When compared to simulations without updating, the results show that it is possible to obtain an improvement in the 20 minute forecast of the water level …
Frequency response function-based model updating using Kriging model
Wang, J. T.; Wang, C. J.; Zhao, J. P.
2017-03-01
An acceleration frequency response function (FRF) based model updating method is presented in this paper, which introduces a Kriging model as a metamodel into the optimization process instead of iterating the finite element analysis directly. The Kriging model serves as a fast-running surrogate that reduces solution time and facilitates the application of intelligent algorithms in model updating. The training samples for the Kriging model are generated by design of experiments (DOE), with responses corresponding to the difference between the experimental acceleration FRFs and their finite element model (FEM) counterparts at selected frequency points. The boundary condition is taken into account, and a two-step DOE method is proposed to reduce the number of training samples: the first step selects the design variables from the boundary condition, and the selected variables are passed to the second step to generate the training samples. The optimized design variables are taken as the updated values used to calibrate the FEM, after which the analytical FRFs tend to coincide with the experimental FRFs. The proposed method is applied successfully to a honeycomb sandwich composite beam; after model updating, the analytical acceleration FRFs match the experimental data significantly better, especially when the damping ratios are adjusted.
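The surrogate workflow described above (DOE samples, a Kriging fit to the FRF-difference objective, then optimization on the cheap surrogate) can be sketched on a toy one-parameter problem. The 1-DOF "FEM", the frequency points, and the fixed correlation hyperparameter are all hypothetical stand-ins, and a hand-rolled Gaussian-correlation interpolator takes the place of a full Kriging implementation:

```python
import numpy as np

# Toy "FEM": a 1-DOF oscillator whose acceleration FRF magnitude at a few
# frequency points depends on a stiffness parameter k (the design variable).
freqs = np.array([4.0, 6.0, 14.0])           # rad/s, selected frequency points
m, c = 1.0, 0.4                              # mass and damping held fixed

def frf_mag(k, w=freqs):
    # |acceleration FRF| of m x'' + c x' + k x = f
    return np.abs(-w**2 / (k - m * w**2 + 1j * c * w))

k_true = 110.0
target = frf_mag(k_true)                     # plays the role of measured FRFs

def objective(k):
    return np.sum((frf_mag(k) - target) ** 2)

# DOE: sample the design variable, run the "FEM" once per sample, then fit
# an ordinary-Kriging-style surrogate (fixed Gaussian correlation).
k_doe = np.linspace(80.0, 140.0, 12)
y_doe = np.array([objective(k) for k in k_doe])

theta = 0.005                                # fixed correlation hyperparameter
def corr(a, b):
    return np.exp(-theta * (a[:, None] - b[None, :]) ** 2)

R = corr(k_doe, k_doe) + 1e-8 * np.eye(k_doe.size)   # small nugget for stability
alpha = np.linalg.solve(R, y_doe)

def surrogate(k):
    return corr(np.atleast_1d(np.asarray(k, float)), k_doe) @ alpha

# Optimize on the surrogate instead of re-running the FEM.
k_grid = np.linspace(80.0, 140.0, 2401)
k_updated = k_grid[np.argmin(surrogate(k_grid))]
```

The surrogate is evaluated thousands of times during the search while the "FEM" runs only twelve times, which is the cost saving the abstract attributes to the metamodel.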
A NEW PROCEDURE FOR FORESTRY DATABASE UPDATING WITH GIS AND REMOTE SENSING
Directory of Open Access Journals (Sweden)
Luis M. T. de Carvalho
2003-07-01
Full Text Available The aim of this study was to develop an automated, simple and flexibleprocedure for updating raster-based forestry database. Four modules compose the procedure:(1 location of changed sites, (2 quantification of changed area, (3 identification of the newland cover, and (4 database updating. Firstly, a difference image is decomposed with wavelettransforms in order to extract changed sites. Secondly, segmentation is performed on thedifference image. Thirdly, each changed pixel or each segmented region is assigned to theland cover class with the highest probability of membership. Then, the output is used toupdate the GIS layer where changes took place. This procedure was less sensitive togeometric and radiometric misregistration, and less dependent on ground truth, whencompared with post classification comparison and direct multidate classification.
Development of a cyber-physical experimental platform for real-time dynamic model updating
Song, Wei; Dyke, Shirley
2013-05-01
Model updating procedures are traditionally performed off-line. With the significant recent advances in embedded systems and the related real-time computing capabilities, online or real-time, model updating can be performed to inform decision making and controller actions. The applications for real-time model updating are mainly in the areas of (i) condition diagnosis and prognosis of engineering systems; and (ii) control systems that benefit from accurate modeling of the system plant. Herein, the development of a cyber-physical real-time model updating experimental platform, including real-time computing environment, model updating algorithm, hardware architecture and testbed, is described. Results from two challenging experimental implementations are presented to illustrate the performance of this cyber-physical platform in achieving the goal of updating nonlinear systems in real-time. The experiments consider typical nonlinear engineering systems that exhibit hysteresis. Among the available algorithms capable of identification of such complex nonlinearities, the unscented Kalman filter (UKF) is selected for these experiments as an effective method to update nonlinear dynamic system models under realistic conditions. The implementation of the platform is discussed for successful completion of these experiments, including required timing constraints and overall evaluation of the system.
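The UKF chosen in the study above rests on the unscented transform. A minimal, generic-textbook sketch (not the authors' platform code) shows the transform recovering the exact mean and variance of a quadratic function of a Gaussian, which a first-order linearization would miss:

```python
import numpy as np

# The unscented transform propagates a mean/covariance through a nonlinearity
# via deterministic sigma points, without linearization. Toy check: for
# x ~ N(0, 1) and y = x^2, the true mean is E[x^2] = 1 and the true variance
# is 2, both of which the transform recovers here.
def unscented_transform(mean, cov, f, kappa=2.0):
    n = mean.size
    S = np.linalg.cholesky((n + kappa) * cov)          # scaled matrix square root
    sigma = np.vstack([mean, mean + S.T, mean - S.T])  # 2n + 1 sigma points
    w = np.full(2 * n + 1, 1.0 / (2.0 * (n + kappa)))
    w[0] = kappa / (n + kappa)
    y = np.array([f(s) for s in sigma])
    y_mean = w @ y
    y_cov = (w[:, None] * (y - y_mean)).T @ (y - y_mean)
    return y_mean, y_cov

m, P = unscented_transform(np.zeros(1), np.eye(1), lambda s: s ** 2)
```

In the UKF, this transform replaces the Jacobian-based propagation step of the extended Kalman filter, which is what makes it attractive for hysteretic models whose gradients are awkward to form.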
2014 Update of the EULAR standardised operating procedures for EULAR-endorsed recommendations.
van der Heijde, Désirée; Aletaha, Daniel; Carmona, Loreto; Edwards, Christopher J; Kvien, Tore K; Kouloumas, Marios; Machado, Pedro; Oliver, Sue; de Wit, Maarten; Dougados, Maxime
2015-01-01
In this article, the European League Against Rheumatism (EULAR) standardised operating procedures for the elaboration, evaluation, dissemination and implementation of recommendations endorsed by the EULAR standing committees published in 2004 have been updated. The various steps from the application to implementation have been described in detail. Published by the BMJ Publishing Group Limited.
Resource Tracking Model Updates and Trade Studies
Chambliss, Joe; Stambaugh, Imelda; Moore, Michael
2016-01-01
The Resource Tracking Model has been updated to capture system manager and project manager inputs. Both the Trick/General Use Nodal Network Solver Resource Tracking Model (RTM) simulator and the RTM mass balance spreadsheet have been revised to address inputs from system managers and to refine the way mass balance is illustrated. The revisions to the RTM included the addition of a Plasma Pyrolysis Assembly (PPA) to recover hydrogen from Sabatier Reactor methane, which was vented in the prior version of the RTM. The effect of the PPA on the overall balance of resources in an exploration vehicle is illustrated in the increased recycle of vehicle oxygen. Case studies have been run to show the relative effect of performance changes on vehicle resources.
Experimental Studies on Finite Element Model Updating for a Heated Beam-Like Structure
Directory of Open Access Journals (Sweden)
Kaipeng Sun
2015-01-01
An experimental study was made for the identification procedure of time-varying modal parameters and the finite element model updating technique of a beam-like thermal structure in both steady and unsteady high temperature environments. An improved time-varying autoregressive method was proposed first to extract the instantaneous natural frequencies of the structure in the unsteady high temperature environment. Based on the identified modal parameters, then, a finite element model for the structure was updated by using Kriging meta-model and optimization-based finite-element model updating method. The temperature-dependent parameters to be updated were expressed as low-order polynomials of temperature increase, and the finite element model updating problem was solved by updating several coefficients of the polynomials. The experimental results demonstrated the effectiveness of the time-varying modal parameter identification method and showed that the instantaneous natural frequencies of the updated model well tracked the trends of the measured values with high accuracy.
Variance analysis for model updating with a finite element based subspace fitting approach
Gautier, Guillaume; Mevel, Laurent; Mencik, Jean-Mathieu; Serra, Roger; Döhler, Michael
2017-07-01
Recently, a subspace fitting approach has been proposed for vibration-based finite element model updating. The approach makes use of subspace-based system identification, where the extended observability matrix is estimated from vibration measurements. Finite element model updating is performed by correlating the model-based observability matrix with the estimated one, by using a single set of experimental data. Hence, the updated finite element model only reflects this single test case. However, estimates from vibration measurements are inherently exposed to uncertainty due to unknown excitation, measurement noise and finite data length. In this paper, a covariance estimation procedure for the updated model parameters is proposed, which propagates the data-related covariance to the updated model parameters by considering a first-order sensitivity analysis. In particular, this propagation is performed through each iteration step of the updating minimization problem, by taking into account the covariance between the updated parameters and the data-related quantities. Simulated vibration signals are used to demonstrate the accuracy and practicability of the derived expressions. Furthermore, an application is shown on experimental data of a beam.
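The covariance-propagation idea in the abstract above can be sketched for a plain least-squares estimator, where the first-order propagated covariance can be checked against Monte Carlo. This is a generic illustration of propagating data-related covariance through a sensitivity, not the paper's subspace-fitting iteration:

```python
import numpy as np

# First-order covariance propagation for a least-squares estimate:
# cov(theta) = S Sigma S^T, with S = (J^T J)^{-1} J^T the sensitivity of the
# estimate to the data. For a linear model this is exact, which makes it a
# convenient check of the propagation formula by Monte Carlo.
rng = np.random.default_rng(1)
J = rng.normal(size=(50, 2))           # Jacobian of the data w.r.t. parameters
theta_true = np.array([1.0, -0.5])
sigma = 0.1                            # data noise std (Sigma = sigma^2 I)

S = np.linalg.solve(J.T @ J, J.T)      # sensitivity of the estimate to the data
cov_pred = sigma**2 * S @ S.T          # propagated parameter covariance

# Monte Carlo reference: re-estimate from many noisy data realizations.
noise = rng.normal(0.0, sigma, size=(20000, 50))
D = J @ theta_true + noise             # each row is one noisy data vector
est = D @ S.T                          # least-squares estimates, shape (20000, 2)
cov_mc = np.cov(est.T)
```

For the nonlinear iterative updating problem of the paper, the same structure applies with J replaced by the iteration-dependent sensitivities, which is where the per-iteration bookkeeping the authors describe comes in.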
MARMOT update for oxide fuel modeling
Energy Technology Data Exchange (ETDEWEB)
Zhang, Yongfeng [Idaho National Lab. (INL), Idaho Falls, ID (United States); Schwen, Daniel [Idaho National Lab. (INL), Idaho Falls, ID (United States); Chakraborty, Pritam [Idaho National Lab. (INL), Idaho Falls, ID (United States); Jiang, Chao [Idaho National Lab. (INL), Idaho Falls, ID (United States); Aagesen, Larry [Idaho National Lab. (INL), Idaho Falls, ID (United States); Ahmed, Karim [Idaho National Lab. (INL), Idaho Falls, ID (United States); Jiang, Wen [Idaho National Lab. (INL), Idaho Falls, ID (United States); Biner, Bulent [Idaho National Lab. (INL), Idaho Falls, ID (United States); Bai, Xianming [Virginia Polytechnic Inst. and State Univ. (Virginia Tech), Blacksburg, VA (United States); Tonks, Michael [Pennsylvania State Univ., University Park, PA (United States); Millett, Paul [Univ. of Arkansas, Fayetteville, AR (United States)
2016-09-01
This report summarizes the lower-length-scale research and development progress in FY16 at Idaho National Laboratory in developing mechanistic materials models for oxide fuels, in parallel to the development of the MARMOT code, which will be summarized in a separate report. This effort is a critical component of the microstructure-based fuel performance modeling approach, supported by the Fuels Product Line in the Nuclear Energy Advanced Modeling and Simulation (NEAMS) program. The progress can be classified into three categories: 1) development of materials models to be used in engineering-scale fuel performance modeling regarding the effect of lattice defects on thermal conductivity, 2) development of modeling capabilities for mesoscale fuel behaviors including stage-3 gas release, grain growth, high burn-up structure, fracture and creep, and 3) improved understanding of materials science by calculating the anisotropic grain boundary energies in UO$_2$ and obtaining thermodynamic data for solid fission products. Many of these topics are still under active development; they are updated in this report with an appropriate amount of detail. For some topics, separate reports are generated in parallel, as stated in the text. The accomplishments have led to a better understanding of fuel behaviors and an enhanced capability of the MOOSE-BISON-MARMOT toolkit.
Updating river basin models with radar altimetry
DEFF Research Database (Denmark)
Michailovsky, Claire Irene B.
… response of a catchment to meteorological forcing. While river discharge cannot be directly measured from space, radar altimetry (RA) can measure water level variations in rivers at the locations where the satellite ground track and river network intersect, called virtual stations or VS. In this PhD study … been between 10 and 35 days for altimetry missions until now. The location of the VS is also not necessarily the point at which measurements are needed. On the other hand, one of the main strengths of the dataset is its availability in near-real time. These characteristics make radar altimetry ideally suited for use in data assimilation frameworks, which combine the information content from models and current observations to produce improved forecasts and reduce prediction uncertainty. The focus of the second and third papers of this thesis was therefore the use of radar altimetry as update data …
Operational modal analysis by updating autoregressive model
Vu, V. H.; Thomas, M.; Lakis, A. A.; Marcouiller, L.
2011-04-01
This paper presents improvements of a multivariable autoregressive (AR) model for applications in operational modal analysis, considering simultaneously the temporal response data of multi-channel measurements. The parameters are estimated by the least squares method via QR factorization. A new noise-rate-based factor, called the Noise rate Order Factor (NOF), is introduced for effective selection of the model order and estimation of the noise rate. For the selection of structural modes, an orderwise criterion called the Order Modal Assurance Criterion (OMAC) is used, based on the correlation of mode shapes computed from two successive orders. Specifically, the algorithm is updated with respect to model order starting from a small value, to produce a cost-effective computation. Furthermore, the confidence intervals of each natural frequency, damping ratio and mode shape are also computed and evaluated with respect to model order and noise rate. This method is thus very effective for identifying modal parameters in the case of ambient vibrations, as encountered in modern output-only modal analysis. Simulations and discussions on a steel plate structure are presented, and the experimental results show good agreement with the finite element analysis.
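The core steps, a least-squares AR fit via QR factorization followed by modal parameters extracted from the AR poles, can be sketched on a single-channel free decay. The 1-DOF values are hypothetical and a fixed AR order 2 is used rather than the orderwise sweep of the paper:

```python
import numpy as np

# Fit an AR(2) model to a sampled 1-DOF damped free decay, then recover the
# modal frequency and damping ratio from the AR poles.
dt = 0.01
fn, zeta = 5.0, 0.02                         # natural frequency (Hz), damping ratio
wn = 2 * np.pi * fn
wd = wn * np.sqrt(1 - zeta**2)
t = np.arange(0, 10, dt)
y = np.exp(-zeta * wn * t) * np.cos(wd * t)  # free decay response

p = 2                                        # AR order
# Regressor matrix: column i holds the lag-(i+1) samples.
Y = np.column_stack([y[p - 1 - i:len(y) - 1 - i] for i in range(p)])
b = y[p:]
Q, R = np.linalg.qr(Y)                       # QR factorization for least squares
a = np.linalg.solve(R, Q.T @ b)              # AR coefficients [a1, a2]

# Poles of z^2 - a1 z - a2 map to continuous-time modal parameters.
poles = np.roots(np.concatenate(([1.0], -a)))
s = np.log(poles[0]) / dt
f_id = abs(s.imag) / (2 * np.pi)             # identified frequency (Hz)
zeta_id = -s.real / abs(s)                   # identified damping ratio
```

Because a damped cosine satisfies an AR(2) recursion exactly, the fit recovers the modal parameters to machine precision here; with measurement noise, the order selection and confidence-interval machinery of the paper become necessary.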
Declarative XML Update Language Based on a Higher Data Model
Institute of Scientific and Technical Information of China (English)
Guo-Ren Wang; Xiao-Lin Zhang
2005-01-01
With the extensive use of XML in applications over the Web, how to update XML data is becoming an important issue, because the role of XML has expanded beyond traditional applications in which XML is used for information exchange and data representation over the Web. So far, several languages have been proposed for updating XML data, but they are all based on lower, so-called graph-based or tree-based data models. Update requests are thus expressed in a nonintuitive and unnatural way, and update statements are too complicated to comprehend. This paper presents a novel declarative XML update language which is an extension of the XML-RL query language. Compared with existing XML update languages, it has the following features. First, it is the only XML data manipulation language based on a higher data model. Second, it can express complex update requests at multiple levels in a hierarchy in a simple and flat way. Third, it directly supports updating complex objects, which other update languages do not. Lastly, most existing languages use rename to modify attribute and element names, which differs from the way values are updated. The proposed language modifies tag names, values, and objects in a unified way through the introduction of three kinds of logical binding variables: object variables, value variables, and name variables.
General Separations Area (GSA) Groundwater Flow Model Update: Hydrostratigraphic Data
Energy Technology Data Exchange (ETDEWEB)
Bagwell, L. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Bennett, P. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Flach, G. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)
2017-02-21
This document describes the assembly, selection, and interpretation of hydrostratigraphic data for input to an updated groundwater flow model for the General Separations Area (GSA; Figure 1) at the Department of Energy’s (DOE) Savannah River Site (SRS). This report is one of several discrete but interrelated tasks that support development of an updated groundwater model (Bagwell and Flach, 2016).
Update on microkinetic modeling of lean NOx trap chemistry.
Energy Technology Data Exchange (ETDEWEB)
Larson, Richard S.; Daw, C. Stuart (Oak Ridge National Laboratory, Oak Ridge, TN); Pihl, Josh A. (Oak Ridge National Laboratory, Oak Ridge, TN); Choi, Jae-Soon (Oak Ridge National Laboratory, Oak Ridge, TN); Chakravarthy, V, Kalyana (Oak Ridge National Laboratory, Oak Ridge, TN)
2010-04-01
Our previously developed microkinetic model for lean NOx trap (LNT) storage and regeneration has been updated to address some longstanding issues, in particular the formation of N2O during the regeneration phase at low temperatures. To this finalized mechanism has been added a relatively simple (12-step) scheme that accounts semi-quantitatively for the main features observed during sulfation and desulfation experiments, namely (a) the essentially complete trapping of SO2 at normal LNT operating temperatures, (b) the plug-like sulfation of both barium oxide (NOx storage) and cerium oxide (oxygen storage) sites, (c) the degradation of NOx storage behavior arising from sulfation, (d) the evolution of H2S and SO2 during high temperature desulfation (temperature programmed reduction) under H2, and (e) the complete restoration of NOx storage capacity achievable through the chosen desulfation procedure.
Effectiveness of radio waves application in modern general dental procedures: An update.
Qureshi, Arslan; Kellesarian, Sergio Varela; Pikos, Michael A; Javed, Fawad; Romanos, Georgios E
2017-01-01
The purpose of the present study was to review indexed literature and provide an update on the effectiveness of high-frequency radio waves (HRW) application in modern general dentistry procedures. Indexed databases were searched to identify articles that assessed the efficacy of radio waves in dental procedures. Radiosurgery is a refined form of electrosurgery that uses waves of electrons at a radiofrequency ranging between 2 and 4 MHz. Radio waves have also been reported to cause much less thermal damage to peripheral tissues compared with electrosurgery or carbon dioxide laser-assisted surgery. Formation of reparative dentin in direct pulp capping procedures is also significantly higher when HRW are used to achieve hemostasis in teeth with minimally exposed dental pulps compared with traditional techniques for achieving hemostasis. A few case reports have reported that radiosurgery is useful for procedures such as gingivectomy and gingivoplasty, stage-two surgery for implant exposure, operculectomy, oral biopsy, and frenectomy. Radiosurgery is a relatively modern therapeutic methodology for the treatment of trigeminal neuralgia; however, its long-term efficacy is unclear. Radio waves can also be used for periodontal procedures, such as gingivectomies, coronal flap advancement, harvesting palatal grafts for periodontal soft tissue grafting, and crown lengthening. Although there are a limited number of studies in indexed literature regarding the efficacy of radio waves in modern dentistry, the available evidence shows that use of radio waves is a modernization in clinical dentistry that might be a contemporary substitute for traditional clinical dental procedures.
Finite element model updating using bayesian framework and modal properties
CSIR Research Space (South Africa)
Marwala, T
2005-01-01
Finite element (FE) models are widely used to predict the dynamic characteristics of aerospace structures. These models often give results that differ from measured results and therefore need to be updated to match measured results. Some...
Reservoir management under geological uncertainty using fast model update
Hanea, R.; Evensen, G.; Hustoft, L.; Ek, T.; Chitu, A.; Wilschut, F.
2015-01-01
Statoil is implementing "Fast Model Update (FMU)," an integrated and automated workflow for reservoir modeling and characterization. FMU connects all steps and disciplines from seismic depth conversion to prediction and reservoir management taking into account relevant reservoir uncertainty. FMU del…
Update Legal Documents Using Hierarchical Ranking Models and Word Clustering
Pham, Minh Quang Nhat; Nguyen, Minh Le; Shimazu, Akira
2010-01-01
Our research addresses the task of updating legal documents when new information emerges. In this paper, we employ a hierarchical ranking model for the task of updating legal documents. Word clustering features are incorporated into the ranking models to exploit semantic relations between words. Experimental results on legal data built from the United States Code show that the hierarchical ranking model with word clustering outperforms baseline methods using the Vector Space Model, and word cluster-based ...
Directory of Open Access Journals (Sweden)
Lisbet Sneftrup Hansen
2014-07-01
There is a growing requirement to generate more precise model simulations and forecasts of flows in urban drainage systems in both offline and online situations. Data assimilation tools are hence needed to make it possible to include system measurements in distributed, physically-based urban drainage models and reduce a number of unavoidable discrepancies between the model and reality. The latter can be achieved partly by inserting measured water levels from the sewer system into the model. This article describes how deterministic updating of model states in this manner affects a simulation, and then evaluates and documents the performance of this particular updating procedure for flow forecasting. A hypothetical case study and synthetic observations are used to illustrate how the Update method works and affects downstream nodes. A real case study in a 544 ha urban catchment furthermore shows that it is possible to improve the 20-min forecast of water levels in an updated node and the three-hour forecast of flow through a downstream node, compared to simulations without updating. Deterministic water level updating produces better forecasts when implemented in large networks with slow flow dynamics and with measurements from upstream basins that contribute significantly to the flow at the forecast location.
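The effect of deterministic state insertion on a downstream forecast can be mimicked with a toy two-reservoir cascade. This is a hypothetical stand-in for the sewer network and the MOUSE model, not the study's setup; the inflow bias and rate constant are invented for illustration:

```python
import numpy as np

def step(s1, s2, inflow, k=0.2):
    # Two linear reservoirs in series: outflow proportional to storage.
    q1, q2 = k * s1, k * s2
    return s1 + inflow - q1, s2 + q1 - q2, q2

T, H = 100, 12
true_in, model_in = 1.0, 0.8        # the model under-estimates the inflow (bias)

# Run "reality" and the biased model up to forecast time T.
s1t = s2t = s1m = s2m = 0.0
for _ in range(T):
    s1t, s2t, _ = step(s1t, s2t, true_in)
    s1m, s2m, _ = step(s1m, s2m, model_in)

def run(s1, s2, inflow, n=H):
    q = []
    for _ in range(n):
        s1, s2, qo = step(s1, s2, inflow)
        q.append(qo)
    return np.array(q)

q_true = run(s1t, s2t, true_in)     # what actually happens downstream
q_no = run(s1m, s2m, model_in)      # forecast without updating
q_up = run(s1t, s2m, model_in)      # upstream state overwritten by the "measurement"

rmse_no = np.sqrt(np.mean((q_no - q_true) ** 2))
rmse_up = np.sqrt(np.mean((q_up - q_true) ** 2))
```

Overwriting the gauged upstream storage improves the downstream forecast even though the model's inflow bias remains, which matches the abstract's observation that updating helps most when gauged upstream basins contribute significantly to the forecast location.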
Model updating of nonlinear structures from measured FRFs
Canbaloğlu, Güvenç; Özgüven, H. Nevzat
2016-12-01
There are always certain discrepancies between the modal and response data of a structure obtained from its mathematical model and those measured experimentally. Therefore it is general practice to update the theoretical model using experimental measurements in order to obtain a more accurate model. Most of the model updating methods used in structural dynamics are for linear systems. However, in real-life applications most structures have nonlinearities, which prevent us from applying model updating techniques developed for linear structures, unless the structures operate in their linear range. Well-established frequency response function (FRF) based model updating methods could easily be extended to a nonlinear system if the FRFs of the underlying linear system (linear FRFs) could be measured experimentally. When frictional nonlinearity co-exists with other types of nonlinearities, it is not possible to obtain linear FRFs experimentally by using low-level forcing. In this study a method (named the Pseudo Receptance Difference (PRD) method) is presented to obtain the linear FRFs of a nonlinear structure having multiple nonlinearities, including friction. The PRD method calculates the linear FRFs of a nonlinear structure by using FRFs measured at various forcing levels, and simultaneously identifies all nonlinearities in the system. Then, any model updating method can be used to update the linear part of the mathematical model. In the present work, the PRD method is used to predict the linear FRFs from measured nonlinear FRFs, and the inverse eigensensitivity method is employed to update the linear finite element (FE) model of the nonlinear structure. The proposed method is validated with different case studies using a nonlinear lumped single-degree-of-freedom system, as well as a continuous system. Finally, a real nonlinear T-beam test structure is used to show the application and the accuracy of the proposed method. The accuracy of the updated nonlinear model of the …
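The principle of recovering an underlying linear FRF from FRFs measured at several forcing levels can be illustrated with a describing-function toy on a 1-DOF cubic-stiffness (Duffing) oscillator. This sketch illustrates the idea only, it is not the PRD method itself, and all parameter values are hypothetical:

```python
import numpy as np

# 1-DOF Duffing oscillator at a single frequency: the steady-state dynamic
# stiffness in the describing-function approximation is
#   D(X) = (k - m w^2) + i c w + 0.75 * alpha * X^2,
# so "measurements" at two force levels let us eliminate the amplitude-
# dependent term and recover the linear part (all values hypothetical).
m, c, k, alpha = 1.0, 2.0, 1e4, 5e9
w = 95.0
a = k - m * w**2                        # real part of the linear dynamic stiffness
d = c * w                               # imaginary part of the linear dynamic stiffness
b = 0.75 * alpha                        # describing-function cubic coefficient

def response_amp_sq(F):
    # Solve ((a + b u)^2 + d^2) u = F^2 for u = X^2 (squared amplitude).
    roots = np.roots([b**2, 2 * a * b, a**2 + d**2, -F**2])
    return min(r.real for r in roots if abs(r.imag) < 1e-8 and r.real > 0)

# "Measured" nonlinear dynamic stiffness at two force levels.
u1, u2 = response_amp_sq(1.0), response_amp_sq(5.0)
D1 = a + b * u1 + 1j * d
D2 = a + b * u2 + 1j * d

# Two levels -> eliminate the amplitude-dependent part, recover the linear part.
b_est = (D2.real - D1.real) / (u2 - u1)
D_lin = D1 - b_est * u1                 # equals a + i d
H_lin = 1.0 / D_lin                     # underlying linear receptance
```

With the linear receptance in hand, any linear updating scheme (such as the inverse eigensensitivity method used in the paper) can be applied to the linear part of the model.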
Structural Dynamics Model Updating with Positive Definiteness and No Spillover
Directory of Open Access Journals (Sweden)
Yongxin Yuan
2014-01-01
Model updating is a common method to improve the correlation between structural dynamics models and measured data. In conducting the updating, it is desirable to match only the measured spectral data without tampering with the other unmeasured and unknown eigeninformation in the original model (if so, the model is said to be updated with no spillover) and to maintain the positive definiteness of the coefficient matrices. In this paper, an efficient numerical method for updating mass and stiffness matrices simultaneously is presented. The method first updates the modal frequencies. Then, a method is presented to construct a transformation matrix, and this matrix is used to correct the analytical eigenvectors so that the updated model is compatible with the measured eigenvectors. The method preserves both no spillover and the symmetric positive definiteness of the mass and stiffness matrices, and it is computationally efficient, as neither iteration nor numerical optimization is required. The numerical example shows that the presented method is quite accurate and efficient.
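The no-spillover property itself can be demonstrated with a classic direct stiffness correction (this is a generic construction for illustration, not the paper's transformation-matrix method): with mass-normalized analytical modes, a rank-limited update shifts only the measured eigenvalues and provably leaves every unmeasured mode untouched.

```python
import numpy as np

# No-spillover stiffness correction: with mass-normalized modes Phi
# (Phi^T M Phi = I), the update dK = M Phi_m (Lm - La) Phi_m^T M moves the
# measured eigenvalues La -> Lm and leaves all other modes of (K, M) unchanged.
rng = np.random.default_rng(3)
n = 6
M = np.diag(rng.uniform(1.0, 2.0, n))                 # SPD mass matrix
A = rng.normal(size=(n, n))
K = A @ A.T + n * np.eye(n)                           # SPD stiffness matrix

# Generalized eigenproblem K phi = lambda M phi via the symmetric reduction.
Minv_half = np.diag(1.0 / np.sqrt(np.diag(M)))
lam, V = np.linalg.eigh(Minv_half @ K @ Minv_half)
Phi = Minv_half @ V                                   # mass-normalized modes

meas = [0, 1]                                         # the "measured" modes
Pm = Phi[:, meas]
lam_meas = lam[meas] * 1.1                            # target measured eigenvalues
dK = M @ Pm @ np.diag(lam_meas - lam[meas]) @ Pm.T @ M
K_upd = K + dK                                        # symmetric by construction

lam_upd = np.sort(np.linalg.eigvalsh(Minv_half @ K_upd @ Minv_half))
```

Because the measured eigenvalues are only scaled up and the correction is symmetric, positive definiteness is preserved here as well, which is the second requirement the abstract emphasizes.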
Finite element model updating of natural fibre reinforced composite structure in structural dynamics
Directory of Open Access Journals (Sweden)
Sani M.S.M.
2016-01-01
Model updating is a process of adjusting certain parameters of a finite element model in order to reduce the discrepancy between the analytical predictions of the finite element (FE) model and experimental results. Finite element model updating is considered an important field of study, as practical applications of the finite element method often show discrepancies relative to test results. The aim of this research is to perform a model updating procedure on a composite structure, as well as to improve the presumed geometrical and material properties of the tested composite structure in the finite element prediction. The composite structure concerned in this study is a plate of kenaf fiber reinforced with epoxy. Modal properties (natural frequencies, mode shapes, and damping ratios) of the kenaf fiber structure will be determined using both experimental modal analysis (EMA) and finite element analysis (FEA). In EMA, modal testing will be carried out using an impact hammer test, while normal mode analysis in FEA will be carried out using MSC Nastran/Patran software. Correlation of the data will be carried out before optimizing the data from FEA. Several parameters will be considered and selected for the model updating procedure.
Nonlinear structural finite element model updating and uncertainty quantification
Ebrahimian, Hamed; Astroza, Rodrigo; Conte, Joel P.
2015-04-01
This paper presents a framework for nonlinear finite element (FE) model updating, in which state-of-the-art nonlinear structural FE modeling and analysis techniques are combined with the maximum likelihood estimation method (MLE) to estimate time-invariant parameters governing the nonlinear hysteretic material constitutive models used in the FE model of the structure. The estimation uncertainties are evaluated based on the Cramer-Rao lower bound (CRLB) theorem. A proof-of-concept example, consisting of a cantilever steel column representing a bridge pier, is provided to verify the proposed nonlinear FE model updating framework.
Updated Delft Mass Transport model DMT-2: computation and validation
Hashemi Farahani, Hassan; Ditmar, Pavel; Inacio, Pedro; Klees, Roland; Guo, Jing; Guo, Xiang; Liu, Xianglin; Zhao, Qile; Didova, Olga; Ran, Jiangjun; Sun, Yu; Tangdamrongsub, Natthachet; Gunter, Brian; Riva, Ricardo; Steele-Dunne, Susan
2014-05-01
A number of research centers compute models of mass transport in the Earth's system using primarily K-Band Ranging (KBR) data from the Gravity Recovery And Climate Experiment (GRACE) satellite mission. These models typically consist of a time series of monthly solutions, each of which is defined in terms of a set of spherical harmonic coefficients up to degree 60-120. One such model, the Delft Mass Transport, release 2 (DMT-2), is computed at the Delft University of Technology (The Netherlands) in collaboration with Wuhan University. An updated variant of this model has been produced recently. A unique feature of the computational scheme designed to compute DMT-2 is the preparation of an accurate stochastic description of data noise in the frequency domain using an Auto-Regressive Moving-Average (ARMA) model, which is derived for each particular month. The benefits of such an approach are a proper frequency-dependent data weighting in the data inversion and an accurate variance-covariance matrix of noise in the estimated spherical harmonic coefficients. Furthermore, the data prior to the inversion are subject to an advanced high-pass filtering, which makes use of a spatially-dependent weighting scheme, so that noise is primarily estimated on the basis of data collected over areas with minor mass transport signals (e.g., oceans). On the one hand, this procedure efficiently suppresses noise caused by inaccuracies in satellite orbits and, on the other hand, preserves mass transport signals in the data. Finally, the unconstrained monthly solutions are filtered using a Wiener filter, which is based on estimates of the signal and noise variance-covariance matrices. In combination with a proper data weighting, this noticeably improves the spatial resolution of the monthly gravity models and the associated mass transport models. For instance, the computed solutions allow long-term negative trends to be clearly seen in sufficiently small regions notorious …
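The final Wiener-filtering step can be sketched in a diagonal toy setting. The signal and noise covariances and the degree-dependent decay below are hypothetical stand-ins, not DMT-2 values:

```python
import numpy as np

# Wiener filtering of noisy coefficients: given (assumed known) signal and
# noise covariances Ps and Pn, x_hat = Ps (Ps + Pn)^{-1} y is the
# minimum-variance estimate. Toy stand-in for filtering a monthly spherical
# harmonic solution, with diagonal covariances and synthetic data.
rng = np.random.default_rng(4)
n = 500
std_s = 1.0 / (1.0 + np.arange(n))               # signal power decays with "degree"
Ps = np.diag(std_s**2)
Pn = 1e-4 * np.eye(n)                            # white estimation noise

x = rng.normal(0.0, std_s)                       # true coefficients
y = x + rng.normal(0.0, 1e-2, n)                 # noisy monthly solution

W = Ps @ np.linalg.inv(Ps + Pn)                  # Wiener gain
x_hat = W @ y

mse_raw = np.mean((y - x) ** 2)
mse_wiener = np.mean((x_hat - x) ** 2)
```

High-degree coefficients, where noise dominates signal, are shrunk strongly while the low degrees pass almost unchanged, which is the mechanism behind the improved spatial resolution mentioned in the abstract.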
Coupled vibro-acoustic model updating using frequency response functions
Nehete, D. V.; Modak, S. V.; Gupta, K.
2016-03-01
Interior noise in cavities of motorized vehicles is of increasing significance due to the lightweight design of these structures. Accurate coupled vibro-acoustic FE models of such cavities are required so as to allow a reliable design and analysis. It is, however, experienced that the vibro-acoustic predictions using these models do not often correlate acceptably well with the experimental measurements and hence require model updating. Both the structural and the acoustic parameters addressing the stiffness as well as the damping modeling inaccuracies need to be considered simultaneously in the model updating framework in order to obtain an accurate estimate of these parameters. It is also noted that the acoustic absorption properties are generally frequency dependent. This makes the use of modal data based methods for updating vibro-acoustic FE models difficult. In view of this, the present paper proposes a method based on vibro-acoustic frequency response functions that allows updating of a coupled FE model by considering simultaneously the parameters associated with both the structural as well as the acoustic model of the cavity. The effectiveness of the proposed method is demonstrated through numerical studies on a 3D rectangular box cavity with a flexible plate. Updating parameters related to the material property, stiffness of joints between the plate and the rectangular cavity and the properties of absorbing surfaces of the acoustic cavity are considered. The robustness of the method under presence of noise is also studied.
Update to Core reporting practices in structural equation modeling.
Schreiber, James B
2016-07-21
This paper is a technical update to "Core Reporting Practices in Structural Equation Modeling."(1) As such, the content covered in this paper includes sample size, missing data, specification and identification of models, estimation method choices, fit and residual concerns, nested, alternative, and equivalent models, and unique issues within the SEM family of techniques.
Three Case Studies in Finite Element Model Updating
Directory of Open Access Journals (Sweden)
M. Imregun
1995-01-01
Full Text Available This article summarizes the basic formulation of two well-established finite element model (FEM updating techniques for improved dynamic analysis, namely the response function method (RFM and the inverse eigensensitivity method (IESM. Emphasis is placed on the similarities in their mathematical formulation, numerical treatment, and on the uniqueness of the resulting updated models. Three case studies that include welded L-plate specimens, a car exhaust system, and a highway bridge were examined in some detail and measured vibration data were used throughout the investigation. It was experimentally observed that significant dynamic behavior discrepancies existed between some of the nominally identical structures, a feature that makes the task of model updating even more difficult because no unequivocal reference data exist in this particular case. Although significant improvements were obtained in all cases where the updating of the FE model was possible, it was found that the success of the updated models depended very heavily on the parameters used, such as the selection and number of the frequency points for RFM, and the selection of modes and the balancing of the sensitivity matrix for IESM. Finally, the performance of the two methods was compared from general applicability, numerical stability, and computational effort standpoints.
Model update mechanism for mean-shift tracking
Institute of Scientific and Technical Information of China (English)
Peng Ningsong; Yang Jie; Liu Erqi
2005-01-01
In order to solve the model update problem in mean-shift based trackers, a novel mechanism is proposed. A Kalman filter is employed to update the object model by filtering the object kernel histogram using the previous model and the current candidate. A self-tuning method is used to adaptively adjust all the parameters of the filters based on an analysis of the filtering residuals. In addition, hypothesis testing serves as the criterion for determining whether to accept the filtering result. The tracker therefore has the ability to handle occlusion and to avoid over-updating. The experimental results show that our method can not only keep up with changes in object appearance and scale but is also robust to occlusion.
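The per-bin filtering idea can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: it assumes each histogram bin is tracked by an independent scalar Kalman filter with assumed process noise `q` and measurement noise `r`, and it omits the paper's self-tuning and hypothesis-testing steps.

```python
def kalman_update_histogram(model, candidate, p_var, q=1e-4, r=1e-2):
    """Blend the previous object model with the current candidate, bin by
    bin. Each histogram bin is treated as an independent scalar state with
    its own variance (p_var); q and r are assumed noise levels."""
    updated, new_var = [], []
    for m, c, p in zip(model, candidate, p_var):
        p_pred = p + q                   # predict: variance grows by process noise
        k = p_pred / (p_pred + r)        # Kalman gain for this bin
        updated.append(m + k * (c - m))  # correct the bin towards the candidate
        new_var.append((1.0 - k) * p_pred)
    s = sum(updated)                     # renormalise to keep a valid histogram
    return [u / s for u in updated], new_var
```

With a small gain the model drifts only slowly toward the candidate, which is exactly what limits over-updating during partial occlusion.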
Dynamic causal modelling of electrographic seizure activity using Bayesian belief updating.
Cooray, Gerald K; Sengupta, Biswa; Douglas, Pamela K; Friston, Karl
2016-01-15
Seizure activity in EEG recordings can persist for hours with seizure dynamics changing rapidly over time and space. To characterise the spatiotemporal evolution of seizure activity, large data sets often need to be analysed. Dynamic causal modelling (DCM) can be used to estimate the synaptic drivers of cortical dynamics during a seizure; however, the requisite (Bayesian) inversion procedure is computationally expensive. In this note, we describe a straightforward procedure, within the DCM framework, that provides efficient inversion of seizure activity measured with non-invasive and invasive physiological recordings; namely, EEG/ECoG. We describe the theoretical background behind a Bayesian belief updating scheme for DCM. The scheme is tested on simulated and empirical seizure activity (recorded both invasively and non-invasively) and compared with standard Bayesian inversion. We show that the Bayesian belief updating scheme provides similar estimates of time-varying synaptic parameters, compared to standard schemes, indicating no significant qualitative change in accuracy. The difference in variance explained was small (less than 5%). The updating method was substantially more efficient, taking approximately 5-10 min compared to approximately 1-2 h. Moreover, the setup of the model under the updating scheme allows for a clear specification of how neuronal variables fluctuate over separable timescales. This method now allows us to investigate the effect of fast (neuronal) activity on slow fluctuations in (synaptic) parameters, paving a way forward to understand how seizure activity is generated.
Institute of Scientific and Technical Information of China (English)
GUO Qintao; ZHANG Lingmi; TAO Zheng
2008-01-01
Thin-walled components are utilized to absorb the impact energy of a structure. However, the dynamic behavior of such thin-walled structures is highly non-linear, with material, geometry and boundary non-linearities. A model updating and validation procedure is proposed to build an accurate finite element model of a frame structure with a non-linear thin-walled component for dynamic analysis. Design of experiments (DOE) and principal component decomposition (PCD) approaches are applied to extract dynamic features from the nonlinear impact response for correlation of the impact test results with the FE model of the non-linear structure. A strain-rate-dependent non-linear model updating method is then developed to build an accurate FE model of the structure. Computer simulation and a real frame structure with a highly non-linear thin-walled component are employed to demonstrate the feasibility and effectiveness of the proposed approach.
Zhai, Xue; Fei, Cheng-Wei; Choy, Yat-Sze; Wang, Jian-Jun
2017-01-01
To improve the accuracy and efficiency of computational models for complex structures, a stochastic model updating (SMU) strategy is proposed that combines an improved response surface model (IRSM) with an advanced Monte Carlo (MC) method, based on experimental static tests, prior information and uncertainties. First, the IRSM and its mathematical model are developed with emphasis on the moving least-squares method, and the advanced MC simulation method is studied based on the Latin hypercube sampling method. The SMU procedure is then presented together with an experimental static test for a complex structure. SMU of a simply-supported beam and an aeroengine stator system (casings) was implemented to validate the proposed IRSM and advanced MC simulation method. The results show that (1) the SMU strategy holds high computational precision and efficiency for the SMU of complex structural systems; (2) the IRSM is demonstrated to be an effective model, since its SMU time is far less than that of the traditional response surface method, which is promising for improving the computational speed and accuracy of SMU; and (3) the advanced MC method markedly decreases the number of samples from finite element simulations and the elapsed time of SMU. The efforts of this paper provide a promising SMU strategy for complex structures and enrich the theory of model updating.
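Latin hypercube sampling, on which the advanced MC method above is based, can be sketched generically. This is a textbook sketch on the unit cube, not the authors' code; the function name and domain are illustrative assumptions.

```python
import random

def latin_hypercube(n_samples, n_dims, rng=random.Random(0)):
    """Latin hypercube sample on [0, 1)^n_dims: each dimension is divided
    into n_samples equal strata, and every stratum is sampled exactly once,
    which spreads points far more evenly than plain Monte Carlo."""
    samples = [[0.0] * n_dims for _ in range(n_samples)]
    for d in range(n_dims):
        strata = list(range(n_samples))
        rng.shuffle(strata)              # random assignment of strata to samples
        for i, s in enumerate(strata):
            samples[i][d] = (s + rng.random()) / n_samples  # uniform within stratum
    return samples
```

The stratification is why far fewer finite element evaluations are needed than with independent uniform draws.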
Finite Element Model Updating Using Response Surface Method
Marwala, Tshilidzi
2007-01-01
This paper proposes the response surface method for finite element model updating. The response surface method is implemented by approximating the finite element model surface response equation with a multi-layer perceptron. The updated parameters of the finite element model were calculated using a genetic algorithm by optimizing the surface response equation. The proposed method was compared to existing methods that use simulated annealing or a genetic algorithm together with a full finite element model for finite element model updating. The proposed method was tested on an unsymmetrical H-shaped structure. It was observed that the proposed method gave updated natural frequencies and mode shapes that were of the same order of accuracy as those given by simulated annealing and the genetic algorithm. Furthermore, it was observed that the response surface method achieved these results at a computational speed that was more than 2.5 times as fast as the genetic algorithm and a full finite element model and 24 ti...
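The overall loop, optimizing a cheap surrogate of the FE response with a genetic algorithm, can be sketched as below. This is a hedged illustration only: a toy quadratic function stands in for the multi-layer perceptron surrogate, and `tiny_ga` is a minimal real-coded GA, not the specific GA used in the paper.

```python
import random

def tiny_ga(objective, bounds, pop=30, gens=60, rng=random.Random(1)):
    """Minimal real-coded genetic algorithm: tournament selection, blend
    crossover, Gaussian mutation, with elitism. Returns the best individual."""
    lo, hi = zip(*bounds)
    P = [[rng.uniform(l, h) for l, h in bounds] for _ in range(pop)]
    best = min(P, key=objective)
    for _ in range(gens):
        Q = []
        while len(Q) < pop:
            a = min(rng.sample(P, 3), key=objective)   # tournament selection
            b = min(rng.sample(P, 3), key=objective)
            child = [0.5 * (x + y) for x, y in zip(a, b)]          # blend crossover
            child = [min(h, max(l, c + rng.gauss(0, 0.1 * (h - l))))
                     for c, l, h in zip(child, lo, hi)]            # mutate and clip
            Q.append(child)
        P = Q
        best = min(P + [best], key=objective)           # keep the elite
    return best

# Toy quadratic surrogate standing in for the trained MLP response surface;
# the "measured" optimum (0.3, -0.2) is an illustrative assumption.
surrogate = lambda p: (p[0] - 0.3) ** 2 + (p[1] + 0.2) ** 2
```

The key economy is that `objective` here is the surrogate, so no full FE solve is needed inside the GA loop.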
Application of firefly algorithm to the dynamic model updating problem
Shabbir, Faisal; Omenzetter, Piotr
2015-04-01
Model updating can be considered a branch of optimization problems in which calibration of the finite element (FE) model is undertaken by comparing the modal properties of the actual structure with those of the FE predictions. The attainment of a global solution in a multidimensional search space is a challenging problem. Nature-inspired algorithms have gained increasing attention in the past decade for solving such complex optimization problems. This study applies the novel Firefly Algorithm (FA), a global optimization search technique, to a dynamic model updating problem. To the authors' best knowledge, this is the first time FA has been applied to model updating. The working of FA is inspired by the flashing characteristics of fireflies. Each firefly represents a randomly generated solution which is assigned brightness according to the value of the objective function. The physical structure under consideration is a full-scale cable-stayed pedestrian bridge with a composite bridge deck. Data from dynamic testing of the bridge were used to correlate and update the initial model using FA. The algorithm aimed at minimizing the difference between the natural frequencies and mode shapes of the structure. The performance of the algorithm is analyzed in finding the optimal solution in a multidimensional search space. The paper concludes with an investigation of the efficacy of the algorithm in obtaining a reference finite element model which correctly represents the as-built original structure.
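The firefly update (each firefly moves toward every brighter one with distance-decaying attractiveness, plus a damped random walk) can be sketched as follows. The parameter values and the quadratic test function are illustrative assumptions, not the bridge-updating setup.

```python
import math, random

def firefly(objective, bounds, n=15, iters=40, beta0=1.0, gamma=0.1,
            alpha=0.2, rng=random.Random(2)):
    """Minimal firefly algorithm for minimisation; brightness is the
    (negated) objective, so lower cost means brighter."""
    X = [[rng.uniform(l, h) for l, h in bounds] for _ in range(n)]
    best = min(X, key=objective)
    for _ in range(iters):
        F = [objective(x) for x in X]
        for i in range(n):
            for j in range(n):
                if F[j] < F[i]:                       # firefly j is brighter
                    r2 = sum((a - b) ** 2 for a, b in zip(X[i], X[j]))
                    beta = beta0 * math.exp(-gamma * r2)   # attractiveness decays with distance
                    X[i] = [a + beta * (b - a) + alpha * (rng.random() - 0.5)
                            for a, b in zip(X[i], X[j])]
        alpha *= 0.97                                  # gradually damp the random walk
        best = min(X + [best], key=objective)          # remember the global best
    return best
```

In the model updating context, `objective` would measure the mismatch between measured and predicted natural frequencies and mode shapes.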
Real-Time Flood Forecasting System Using Channel Flow Routing Model with Updating by Particle Filter
Kudo, R.; Chikamori, H.; Nagai, A.
2008-12-01
A real-time flood forecasting system using a channel flow routing model was developed for runoff forecasting at gauged and ungauged points along river channels. The system is based on a flood runoff model composed of upstream part models, tributary part models and downstream part models. The upstream and tributary part models are lumped rainfall-runoff models, and the downstream part models consist of a lumped rainfall-runoff model for hillslopes adjacent to a river channel and a kinematic flow routing model for a river channel. The flow forecast of this model is updated by particle filtering of the downstream part model as well as by extended Kalman filtering of the upstream and tributary part models. Particle filtering is a simple and powerful updating algorithm for non-linear and non-Gaussian systems, so it can be easily applied to the downstream part model without complicated linearization. The presented flood runoff model has an advantage in simplicity of the updating procedure over grid-based distributed models because of its smaller number of state variables. This system was applied to the Gono-kawa River Basin in Japan, and the flood forecasting accuracy of the system with both particle filtering and extended Kalman filtering was compared with that of the system with only extended Kalman filtering. In this study, water gauging stations in the basin were divided into two types: reference stations and verification stations. Reference stations were regarded as ordinary water gauging stations, and observed data at these stations were used for calibration and updating of the model. Verification stations were considered ungauged or arbitrary points, and observed data at these stations were used not for calibration or updating but only for evaluation of forecasting accuracy. The result confirms that particle filtering of the downstream part model improves forecasting accuracy of runoff at
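A single bootstrap particle-filter update of the kind used for the downstream part model can be sketched generically. The transition and likelihood functions here are placeholders supplied by the caller, not the hydrological model itself; no linearization is needed, which is the advantage the abstract notes over the extended Kalman filter.

```python
import math, random

def pf_update(particles, observation, transition, likelihood,
              rng=random.Random(3)):
    """One bootstrap particle-filter step: propagate each particle through
    the (possibly non-linear) transition, weight by the observation
    likelihood, then resample in proportion to the weights."""
    predicted = [transition(p, rng) for p in particles]
    weights = [likelihood(observation, p) for p in predicted]
    total = sum(weights)
    cum, c = [], 0.0
    for w in weights:
        c += w / total
        cum.append(c)                     # normalised cumulative weights
    # stratified resampling: one uniform draw per equal-width stratum
    n, j, resampled = len(predicted), 0, []
    for i in range(n):
        u = (i + rng.random()) / n
        while j < n - 1 and cum[j] < u:
            j += 1
        resampled.append(predicted[j])
    return resampled
```

Because only sampling and weighting are required, any non-Gaussian rainfall-runoff state model can be plugged in as `transition`/`likelihood`.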
Applying Modeling Tools to Ground System Procedures
Di Pasquale, Peter
2012-01-01
As part of a long-term effort to revitalize the Ground Systems (GS) Engineering Section practices, Systems Modeling Language (SysML) and Business Process Model and Notation (BPMN) have been used to model existing GS products and the procedures GS engineers use to produce them.
Numerical modelling of mine workings: annual update 1999/2000.
CSIR Research Space (South Africa)
Lightfoot, N
1999-09-01
Full Text Available The SIMRAC project GAP629 has two aspects. Firstly, the production of an updated edition of the guidebook Numerical Modelling of Mine Workings. The original document was launched to the South African mining industry in April 1999. Secondly...
Procedural Modeling for Digital Cultural Heritage
Directory of Open Access Journals (Sweden)
Müller Pascal
2009-01-01
Full Text Available The rapid development of computer graphics and imaging provides the modern archeologist with several tools to realistically model and visualize archeological sites in 3D. This, however, creates a tension between veridical and realistic modeling. Visually compelling models may lead people to falsely believe that there exists very precise knowledge about the past appearance of a site. In order to make the underlying uncertainty visible, it has been proposed to encode this uncertainty with different levels of transparency in the rendering, or with decoloration of the textures. We argue that procedural modeling technology based on shape grammars provides an interesting alternative to such measures, as they tend to spoil the experience for the observer. Both its efficiency and compactness make procedural modeling a tool to produce multiple models, which together sample the space of possibilities. Variations between the different models express levels of uncertainty implicitly, while letting each individual model keep its realistic appearance. The underlying structural description makes the uncertainty explicit. Additionally, procedural modeling also yields the flexibility to incorporate changes as knowledge of an archeological site is refined. Annotations explaining modeling decisions can be included. We demonstrate our procedural modeling implementation with several recent examples.
Crushed-salt constitutive model update
Energy Technology Data Exchange (ETDEWEB)
Callahan, G.D.; Loken, M.C.; Mellegard, K.D. [RE/SPEC Inc., Rapid City, SD (United States); Hansen, F.D. [Sandia National Labs., Albuquerque, NM (United States)
1998-01-01
Modifications to the constitutive model used to describe the deformation of crushed salt are presented in this report. Two mechanisms--dislocation creep and grain boundary diffusional pressure solutioning--defined previously but used separately are combined to form the basis for the constitutive model governing the deformation of crushed salt. The constitutive model is generalized to represent three-dimensional states of stress. New creep consolidation tests are combined with an existing database that includes hydrostatic consolidation and shear consolidation tests conducted on Waste Isolation Pilot Plant and southeastern New Mexico salt to determine material parameters for the constitutive model. Nonlinear least-squares model fitting to data from the shear consolidation tests and a combination of the shear and hydrostatic consolidation tests produced two sets of material parameter values for the model. The change in material parameter values from test group to test group indicates the empirical nature of the model but demonstrates improvement over earlier work with the previous models. Key improvements are the ability to capture lateral strain reversal and better resolve parameter values. To demonstrate the predictive capability of the model, each parameter value set was used to predict each of the tests in the database. Based on the fitting statistics and the ability of the model to predict the test data, the model appears to capture the creep consolidation behavior of crushed salt quite well.
A Successive Selection Method for finite element model updating
Gou, Baiyong; Zhang, Weijie; Lu, Qiuhai; Wang, Bo
2016-03-01
Finite Element (FE) models can be updated effectively and efficiently by using the Response Surface Method (RSM). However, this often involves performance trade-offs such as high computational cost for better accuracy, or loss of efficiency when many design parameters are updated. This paper proposes a Successive Selection Method (SSM), which is based on the linear Response Surface (RS) function and orthogonal design. SSM rewrites the linear RS function into a number of linear equations to adjust the Design of Experiment (DOE) after every FE calculation. SSM aims to interpret the implicit information provided by the FE analysis, to locate the DOE points more quickly and accurately, and thereby to alleviate the computational burden. This paper introduces the SSM and its application, describes the solution steps of point selection for the DOE in detail, and analyzes SSM's high efficiency and accuracy in FE model updating. A numerical example of a simply supported beam and a practical example of a vehicle brake disc show that the SSM can provide higher speed and precision in FE model updating for engineering problems than the traditional RSM.
Neurodevelopmental model of schizophrenia: update 2012
National Research Council Canada - National Science Library
Rapoport, J L; Giedd, J N; Gogtay, N
2012-01-01
The neurodevelopmental model of schizophrenia, which posits that the illness is the end state of abnormal neurodevelopmental processes that started years before the illness onset, is widely accepted...
A last updating evolution model for online social networks
Bu, Zhan; Xia, Zhengyou; Wang, Jiandong; Zhang, Chengcui
2013-05-01
As information technology has advanced, people are turning to electronic media more frequently for communication, and social relationships are increasingly found on online channels. However, there is very limited knowledge about the actual evolution of online social networks. In this paper, we propose and study a novel evolution network model with the new concept of “last updating time”, which exists in many real-life online social networks. The last updating evolution network model can maintain the robustness of scale-free networks and can improve network resilience against intentional attacks. Moreover, we also found that it has the “small-world effect”, which is an inherent property of most social networks. Simulation experiments based on this model show that the results are consistent with real-life data, which indicates that our model is valid.
Updating the debate on model complexity
Simmons, Craig T.; Hunt, Randall J.
2012-01-01
As scientists who are trying to understand a complex natural world that cannot be fully characterized in the field, how can we best inform the society in which we live? This founding context was addressed in a special session, “Complexity in Modeling: How Much is Too Much?” convened at the 2011 Geological Society of America Annual Meeting. The session had a variety of thought-provoking presentations—ranging from philosophy to cost-benefit analyses—and provided some areas of broad agreement that were not evident in discussions of the topic in 1998 (Hunt and Zheng, 1999). The session began with a short introduction during which model complexity was framed borrowing from an economic concept, the Law of Diminishing Returns, and an example of enjoyment derived by eating ice cream. Initially, there is increasing satisfaction gained from eating more ice cream, to a point where the gain in satisfaction starts to decrease, ending at a point when the eater sees no value in eating more ice cream. A traditional view of model complexity is similar—understanding gained from modeling can actually decrease if models become unnecessarily complex. However, oversimplified models—those that omit important aspects of the problem needed to make a good prediction—can also limit and confound our understanding. Thus, the goal of all modeling is to find the “sweet spot” of model sophistication—regardless of whether complexity was added sequentially to an overly simple model or collapsed from an initial highly parameterized framework that uses mathematics and statistics to attain an optimum (e.g., Hunt et al., 2007). Thus, holistic parsimony is attained, incorporating “as simple as possible,” as well as the equally important corollary “but no simpler.”
Chen, G. W.; Omenzetter, P.
2016-04-01
This paper presents the implementation of an updating procedure for the finite element model (FEM) of a prestressed concrete continuous box-girder highway off-ramp bridge. Ambient vibration testing was conducted to excite the bridge, assisted by linear chirp sweepings induced by two small electrodynamic shakers deployed to enhance the excitation levels, since the bridge was closed to traffic. The data-driven stochastic subspace identification method was executed to recover the modal properties from measurement data. An initial FEM was developed and the correlation between the experimental modal results and their analytical counterparts was studied. Modelling of the pier and abutment bearings was carefully adjusted to reflect the real operational conditions of the bridge. The subproblem approximation method was subsequently utilized to automatically update the FEM. For this purpose, the influences of bearing stiffness, and mass density and Young's modulus of materials were examined as uncertain parameters using sensitivity analysis. The updating objective function was defined as a summation of squared relative errors of natural frequencies between the FEM and experiment. All the identified modes were used as target responses with the purpose of putting more constraints on the optimization process and decreasing the number of potentially feasible combinations of parameter changes. The updated FEM of the bridge was able to produce sufficient improvements in natural frequencies for most modes of interest, and can serve for more precise dynamic response prediction or future investigation of the bridge's health.
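The updating objective described above, a summation of squared relative natural-frequency errors over all identified modes, can be written directly (a minimal sketch; the function name is illustrative):

```python
def frequency_objective(f_model, f_measured):
    """Sum of squared relative errors between FEM-predicted and measured
    natural frequencies; all identified modes are included as targets."""
    return sum(((fm - fe) / fe) ** 2 for fm, fe in zip(f_model, f_measured))
```

Using relative rather than absolute errors keeps high- and low-frequency modes on a comparable footing in the optimization.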
An update of Leighton's solar dynamo model
Cameron, R H
2016-01-01
In 1969 Leighton developed a quasi-1D mathematical model of the solar dynamo, building upon the phenomenological scenario of Babcock (1961). Here we present a modification and extension of Leighton's model. Using the axisymmetric component of the magnetic field, we consider the radial field component at the solar surface and the radially integrated toroidal magnetic flux in the convection zone, both as functions of latitude. No assumptions are made with regard to the radial location of the toroidal flux. The model includes the effects of turbulent diffusion at the surface and in the convection zone, poleward meridional flow at the surface and an equatorward return flow affecting the toroidal flux, latitudinal differential rotation and the near-surface layer of radial rotational shear, downward convective pumping of magnetic flux in the shear layer, and flux emergence in the form of tilted bipolar magnetic regions. While the parameters relevant for the transport of the surface field are taken from observations,...
Richards, V. M.; Dai, W.
2014-01-01
A MATLAB toolbox for the efficient estimation of the threshold, slope, and lapse rate of the psychometric function is described. The toolbox enables the efficient implementation of the updated maximum-likelihood (UML) procedure. The toolbox uses an object-oriented architecture for organizing the experimental variables and computational algorithms, which provides experimenters with flexibility in experimental design and data management. Descriptions of the UML procedure and the UML Toolbox are provided, followed by toolbox use examples. Finally, guidelines and recommendations of parameter configurations are given. PMID:24671826
The Lunar Mapping and Modeling Project Update
Noble, S.; French, R.; Nall, M.; Muery, K.
2010-01-01
The Lunar Mapping and Modeling Project (LMMP) is managing the development of a suite of lunar mapping and modeling tools and data products that support lunar exploration activities, including the planning, design, development, test, and operations associated with crewed and/or robotic operations on the lunar surface. In addition, LMMP should prove to be a convenient and useful tool for scientific analysis and for education and public outreach (E/PO) activities. LMMP will utilize data predominately from the Lunar Reconnaissance Orbiter, but also historical and international lunar mission data (e.g. Lunar Prospector, Clementine, Apollo, Lunar Orbiter, Kaguya, and Chandrayaan-1) as available and appropriate. LMMP will provide such products as image mosaics, DEMs, hazard assessment maps, temperature maps, lighting maps and models, gravity models, and resource maps. We are working closely with the LRO team to prevent duplication of efforts and ensure the highest quality data products. A beta version of the LMMP software was released for limited distribution in December 2009, with the public release of version 1 expected in the Fall of 2010.
The International Reference Ionosphere: Model Update 2016
Bilitza, Dieter; Altadill, David; Reinisch, Bodo; Galkin, Ivan; Shubin, Valentin; Truhlik, Vladimir
2016-04-01
The International Reference Ionosphere (IRI) is recognized as the official standard for the ionosphere (COSPAR, URSI, ISO) and is widely used for a multitude of different applications as evidenced by the many papers in science and engineering journals that acknowledge the use of IRI (e.g., about 11% of all Radio Science papers each year). One of the shortcomings of the model has been the dependence of the F2 peak height modeling on the propagation factor M(3000)F2. With the 2016 version of IRI, two new models will be introduced for hmF2 that were developed directly based on hmF2 measurements by ionosondes [Altadill et al., 2013] and by COSMIC radio occultation [Shubin, 2015], respectively. In addition IRI-2016 will include an improved representation of the ionosphere during the very low solar activities that were reached during the last solar minimum in 2008/2009. This presentation will review these and other improvements that are being implemented with the 2016 version of the IRI model. We will also discuss recent IRI workshops and their findings and results. One of the most exciting new projects is the development of the Real-Time IRI [Galkin et al., 2012]. We will discuss the current status and plans for the future. Altadill, D., S. Magdaleno, J.M. Torta, E. Blanch (2013), Global empirical models of the density peak height and of the equivalent scale height for quiet conditions, Advances in Space Research 52, 1756-1769, doi:10.1016/j.asr.2012.11.018. Galkin, I.A., B.W. Reinisch, X. Huang, and D. Bilitza (2012), Assimilation of GIRO Data into a Real-Time IRI, Radio Science, 47, RS0L07, doi:10.1029/2011RS004952. Shubin V.N. (2015), Global median model of the F2-layer peak height based on ionospheric radio-occultation and ground-based Digisonde observations, Advances in Space Research 56, 916-928, doi:10.1016/j.asr.2015.05.029.
An update of Leighton's solar dynamo model
Cameron, R. H.; Schüssler, M.
2017-02-01
In 1969, Leighton developed a quasi-1D mathematical model of the solar dynamo, building upon the phenomenological scenario of Babcock published in 1961. Here we present a modification and extension of Leighton's model. Using the axisymmetric component (longitudinal average) of the magnetic field, we consider the radial field component at the solar surface and the radially integrated toroidal magnetic flux in the convection zone, both as functions of latitude. No assumptions are made with regard to the radial location of the toroidal flux. The model includes the effects of (i) turbulent diffusion at the surface and in the convection zone; (ii) poleward meridional flow at the surface and an equatorward return flow affecting the toroidal flux; (iii) latitudinal differential rotation and the near-surface layer of radial rotational shear; (iv) downward convective pumping of magnetic flux in the shear layer; and (v) flux emergence in the form of tilted bipolar magnetic regions treated as a source term for the radial surface field. While the parameters relevant for the transport of the surface field are taken from observations, the model condenses the unknown properties of magnetic field and flow in the convection zone into a few free parameters (turbulent diffusivity, effective return flow, amplitude of the source term, and a parameter describing the effective radial shear). Comparison with the results of 2D flux transport dynamo codes shows that the model captures the essential features of these simulations. We make use of the computational efficiency of the model to carry out an extended parameter study. We cover an extended domain of the 4D parameter space and identify the parameter ranges that provide solar-like solutions. Dipole parity is always preferred and solutions with periods around 22 yr and a correct phase difference between flux emergence in low latitudes and the strength of the polar fields are found for a return flow speed around 2 m s⁻¹, turbulent
Substructure System Identification for Finite Element Model Updating
Craig, Roy R., Jr.; Blades, Eric L.
1997-01-01
This report summarizes research conducted under a NASA grant on the topic 'Substructure System Identification for Finite Element Model Updating.' The research concerns ongoing development of the Substructure System Identification Algorithm (SSID Algorithm), a system identification algorithm that can be used to obtain mathematical models of substructures, like Space Shuttle payloads. In the present study, particular attention was given to the following topics: making the algorithm robust to noisy test data, extending the algorithm to accept experimental FRF data that covers a broad frequency bandwidth, and developing a test analytical model (TAM) for use in relating test data to reduced-order finite element models.
Update on Advection-Diffusion Purge Flow Model
Brieda, Lubos
2015-01-01
Gaseous purge is commonly used in sensitive spacecraft optical or electronic instruments to prevent infiltration of contaminants and/or water vapor. Typically, purge is sized using simplistic zero-dimensional models that do not take into account instrument geometry, surface effects, and the dependence of diffusive flux on the concentration gradient. For this reason, an axisymmetric computational fluid dynamics (CFD) simulation was recently developed to model contaminant infiltration and removal by purge. The solver uses a combined Navier-Stokes and Advection-Diffusion approach. In this talk, we report on updates in the model, namely inclusion of a particulate transport model.
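As a toy illustration of the combined advection-diffusion transport such a solver handles, here is a minimal explicit 1D sketch; the grid size, velocity, and diffusivity values are arbitrary placeholders, not parameters of the solver described above:

```python
import numpy as np

def advect_diffuse_step(c, u, D, dx, dt):
    """One explicit (FTCS) update of 1D advection-diffusion:
    dc/dt = -u dc/dx + D d2c/dx2, interior points only."""
    cn = c.copy()
    # upwind advection (assumes u >= 0) plus central diffusion
    cn[1:-1] = (c[1:-1]
                - u * dt / dx * (c[1:-1] - c[:-2])
                + D * dt / dx**2 * (c[2:] - 2.0 * c[1:-1] + c[:-2]))
    return cn

# purge-like setup: clean gas enters at x=0, a contaminant source sits
# at the far end; purge flow advects outward while contaminant diffuses in
c = np.zeros(50)
c[-1] = 1.0
for _ in range(200):
    c = advect_diffuse_step(c, u=0.1, D=0.01, dx=1.0, dt=1.0)
    c[0], c[-1] = 0.0, 1.0   # re-impose boundary values
```

With these (stable) parameters the steady profile decays away from the source, mimicking how a sufficient purge flow keeps contaminant out of the instrument interior.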
Maximum likelihood reconstruction for Ising models with asynchronous updates
Zeng, Hong-Li; Aurell, Erik; Hertz, John; Roudi, Yasser
2012-01-01
We describe how the couplings in a non-equilibrium Ising model can be inferred from observing the model history. Two cases of an asynchronous update scheme are considered: one in which we know both the spin history and the update times (times at which an attempt was made to flip a spin) and one in which we only know the spin history (i.e., the times at which spins were actually flipped). In both cases, maximizing the likelihood of the data leads to exact learning rules for the couplings in the model. For the first case, we show that one can average over all possible choices of update times to obtain a learning rule that depends only on spin correlations and not on the specific spin history. For the second case, the same rule can be derived within a further decoupling approximation. We study all methods numerically for fully asymmetric Sherrington-Kirkpatrick models, varying the data length, system size, temperature, and external field. Good convergence is observed in accordance with the theoretical expectatio...
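The asynchronous update scheme discussed above can be sketched with standard Glauber dynamics for a fully asymmetric Sherrington-Kirkpatrick model; this generates the kind of spin history and update times the learning rules operate on (system size, temperature, and field values below are illustrative only):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 20
# fully asymmetric SK couplings: J_ij drawn independently of J_ji
J = rng.normal(0.0, 1.0 / np.sqrt(N), size=(N, N))
np.fill_diagonal(J, 0.0)
h = 0.1      # external field
beta = 1.0   # inverse temperature

s = rng.choice([-1, 1], size=N)
history, update_times = [], []
for t in range(1000):
    i = rng.integers(N)                 # asynchronous: one attempt per step
    H_i = J[i] @ s + h                  # local field on spin i
    p_up = 1.0 / (1.0 + np.exp(-2.0 * beta * H_i))   # Glauber rule
    s[i] = 1 if rng.random() < p_up else -1
    update_times.append((t, i))         # the "update time" record
    history.append(s.copy())
```

In the first inference setting described above, both `history` and `update_times` are observed; in the second, only the flip times implicit in `history` are available.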
An updated geospatial liquefaction model for global application
Zhu, Jing; Baise, Laurie G.; Thompson, Eric
2017-01-01
We present an updated geospatial approach to estimation of earthquake-induced liquefaction from globally available geospatial proxies. Our previous iteration of the geospatial liquefaction model was based on mapped liquefaction surface effects from four earthquakes in Christchurch, New Zealand, and Kobe, Japan, paired with geospatial explanatory variables including slope-derived VS30, compound topographic index, and magnitude-adjusted peak ground acceleration from ShakeMap. The updated geospatial liquefaction model presented herein improves the performance and the generality of the model. The updates include (1) expanding the liquefaction database to 27 earthquake events across 6 countries, (2) addressing the sampling of nonliquefaction for incomplete liquefaction inventories, (3) testing interaction effects between explanatory variables, and (4) overall improving model performance. While we test 14 geospatial proxies for soil density and soil saturation, the most promising geospatial parameters are slope-derived VS30, modeled water table depth, distance to coast, distance to river, distance to closest water body, and precipitation. We found that peak ground velocity (PGV) performs better than peak ground acceleration (PGA) as the shaking intensity parameter. We present two models which offer improved performance over prior models. We evaluate model performance using the area under the Receiver Operating Characteristic (ROC) curve (AUC) and the Brier score. The best-performing model in a coastal setting uses distance to coast but is problematic for regions away from the coast. The second-best model, using PGV, VS30, water table depth, distance to closest water body, and precipitation, performs better in noncoastal regions and thus is the model we recommend for global implementation.
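Geospatial liquefaction models of this kind are typically logistic regressions over the proxies. The sketch below shows the functional form only; the coefficients and feature values are made-up placeholders, not the published model parameters:

```python
import math

def liquefaction_probability(coeffs, intercept, features):
    """Logistic (logit-linear) combination of geospatial proxies.
    Coefficient values are placeholders, not the published model."""
    z = intercept + sum(coeffs[k] * v for k, v in features.items())
    return 1.0 / (1.0 + math.exp(-z))

# hypothetical proxies: ln(PGV), ln(Vs30), water table depth, ln(precip)
coeffs = {"ln_pgv": 0.3, "ln_vs30": -1.5, "wtd_m": -0.02, "ln_precip": 0.5}
p = liquefaction_probability(coeffs, intercept=8.0,
                             features={"ln_pgv": 3.0, "ln_vs30": 6.0,
                                       "wtd_m": 5.0, "ln_precip": 6.5})
```

The signs follow the physical intuition in the abstract: stronger shaking (PGV) and wetter conditions raise the probability, while stiffer soil (VS30) and a deeper water table lower it.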
Updated scalar sector constraints in Higgs triplet model
Das, Dipankar
2016-01-01
We show that in the Higgs triplet model, after the Higgs discovery, the mixing angle in the CP-even sector can be strongly constrained from unitarity. We also discuss how large quantum effects in $h\to\gamma\gamma$ may arise in a SM-like scenario and how a certain part of the parameter space can be ruled out from the diphoton signal strength. Using $T$-parameter and diphoton signal strength measurements, we update the bounds on the nonstandard scalar masses.
An improved optimal elemental method for updating finite element models
Institute of Scientific and Technical Information of China (English)
Duan Zhongdong(段忠东); Spencer B.F.; Yan Guirong(闫桂荣); Ou Jinping(欧进萍)
2004-01-01
The optimal matrix method and optimal elemental method used to update finite element models may not provide accurate results. This situation occurs when the test modal model is incomplete, as is often the case in practice. An improved optimal elemental method is presented that defines a new objective function and, as a byproduct, circumvents the need for mass-normalized mode shapes, which are also not readily available in practice. To solve the group of nonlinear equations created by the improved optimal method, the Lagrange multiplier method and the Matlab function fmincon are employed. To deal with actual complex structures, the float-encoding genetic algorithm (FGA) is introduced to enhance the capability of the improved method. Two examples, a 7-degree-of-freedom (DOF) mass-spring system and a 53-DOF planar frame, are updated using the improved method. The example results demonstrate the advantages of the improved method over existing optimal methods, and show that the genetic algorithm is an effective way to update the models used for actual complex structures.
Procedural Personas for Player Decision Modeling and Procedural Content Generation
DEFF Research Database (Denmark)
Holmgård, Christoffer
2016-01-01
This thesis explores methods for creating low-complexity, easily interpretable, generative AI agents for use in game and simulation design. Based on insights from decision theory and behavioral economics, the thesis investigates how player decision making styles may be defined, operationalised, and measured in specific games. It further explores how simple utility functions, easily defined and changed by game designers, can be used to construct agents expressing a variety of decision making styles within a game, using a variety of contemporary AI approaches, naming the resulting agents "Procedural Personas." These methods for constructing procedural personas are then integrated with existing procedural content generation systems, acting as critics that shape the output of these systems, optimizing generated content for different personas and by extension, different kinds of players and their decision making styles.
A Procedural Model for Process Improvement Projects
Kreimeyer, Matthias;Daniilidis, Charampos;Lindemann, Udo
2017-01-01
Process improvement projects are of a complex nature. It is therefore necessary to use experience and knowledge gained in previous projects when executing a new project. Yet, there are few pragmatic planning aids, and transferring the institutional knowledge from one project to the next is difficult. This paper proposes a procedural model that extends common models for project planning to enable staff on a process improvement project to adequately plan their projects, enabling them to documen...
Directory of Open Access Journals (Sweden)
Lei Qin
2014-05-01
We propose a novel approach for tracking an arbitrary object in video sequences for visual surveillance. The first contribution of this work is an automatic feature extraction method that is able to extract compact discriminative features from a feature pool before computing the region covariance descriptor. As the feature extraction method is adaptive to a specific object of interest, we refer to the region covariance descriptor computed using the extracted features as the adaptive covariance descriptor. The second contribution is a weakly supervised method for updating the object appearance model during tracking. The method performs a mean-shift clustering procedure among the tracking result samples accumulated during a period of time and selects a group of reliable samples for updating the object appearance model. As such, the object appearance model is kept up to date and is prevented from contamination even in case of tracking mistakes. We conducted comparative experiments on real-world video sequences, which confirmed the effectiveness of the proposed approaches. The tracking system that integrates the adaptive covariance descriptor and the clustering-based model updating method accomplished stable object tracking on challenging video sequences.
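The region covariance descriptor at the core of such a tracker is standard: stack a d-dimensional feature vector per pixel (e.g. coordinates, intensity, gradients) and take the d x d covariance over the region. A minimal sketch with random placeholder features:

```python
import numpy as np

def region_covariance(region):
    """Covariance descriptor of an image region.
    `region` is (H, W, d): one d-dimensional feature vector per pixel;
    the descriptor is the d x d sample covariance of those vectors."""
    H, W, d = region.shape
    F = region.reshape(-1, d)          # flatten pixels to rows
    Z = F - F.mean(axis=0)             # center the features
    return Z.T @ Z / (F.shape[0] - 1)  # unbiased covariance estimate

rng = np.random.default_rng(1)
patch = rng.random((16, 16, 5))        # placeholder 5-feature patch
C = region_covariance(patch)
```

The descriptor is a symmetric positive semi-definite matrix, which is why covariance-based trackers compare regions with manifold metrics rather than plain Euclidean distance.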
A procedure for building product models
DEFF Research Database (Denmark)
Hvam, Lars; Riis, Jesper; Malis, Martin
2001-01-01
with product models. The next phase includes an analysis of the product assortment, and the set up of a so-called product master. Finally the product model is designed and implemented using object oriented modelling. The procedure is developed in order to ensure that the product models constructed are fit...... for the business processes they support, and properly structured and documented, in order to facilitate that the systems can be maintained continually and further developed. The research has been carried out at the Centre for Industrialisation of Engineering, Department of Manufacturing Engineering, Technical...
An updated subgrid orographic parameterization for global atmospheric forecast models
Choi, Hyun-Joo; Hong, Song-You
2015-12-01
A subgrid orographic parameterization (SOP) is updated by including the effects of orographic anisotropy and flow-blocking drag (FBD). The impact of the updated SOP on short-range forecasts is investigated using a global atmospheric forecast model applied to a heavy snowfall event over Korea on 4 January 2010. When the SOP is updated, the orographic drag in the lower troposphere noticeably increases owing to the additional FBD over mountainous regions. The enhanced drag directly weakens the excessive wind speed in the low troposphere and indirectly improves the temperature and mass fields over East Asia. In addition, the snowfall overestimation over Korea is improved by the reduced heat fluxes from the surface. The forecast improvements are robust regardless of the horizontal resolution of the model between T126 and T510. The parameterization is statistically evaluated based on the skill of the medium-range forecasts for February 2014. For the medium-range forecasts, the skill improvements of the wind speed and temperature in the low troposphere are observed globally and for East Asia while both positive and negative effects appear indirectly in the middle-upper troposphere. The statistical skill for the precipitation is mostly improved due to the improvements in the synoptic fields. The improvements are also found for seasonal simulation throughout the troposphere and stratosphere during boreal winter.
Two dimensional cellular automaton for evacuation modeling: hybrid shuffle update
Arita, Chikashi; Appert-Rolland, Cécile
2015-01-01
We consider a cellular automaton model with a static floor field for pedestrians evacuating a room. After identifying some properties of real pedestrian flows, we discuss various update schemes, and we introduce a new one, the hybrid shuffle update. The properties specific to pedestrians are incorporated in variables associated with particles, called phases, that represent their step cycles. The dynamics of the phases naturally gives rise to some friction and allows several features observed in experiments to be reproduced. We study in particular the crossover between a low- and a high-density regime that occurs when the density of pedestrians increases, the dependence of the outflow on the strength of the floor field, and the shape of the queue in front of the exit.
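Floor-field cellular automata typically convert the static field into move probabilities via an exponential weighting controlled by a coupling strength. A minimal sketch (the convention here treats S as a distance-to-exit field, so smaller values are preferred; the paper's convention may differ):

```python
import numpy as np

def move_probabilities(S_neighbors, kS):
    """Transition probabilities toward neighbouring cells given a static
    floor field S (distance to exit) and coupling strength kS:
    P_i proportional to exp(-kS * S_i)."""
    w = np.exp(-kS * np.asarray(S_neighbors, dtype=float))
    return w / w.sum()

# neighbour distances to the exit; larger kS -> more deterministic motion
p_weak = move_probabilities([2.0, 1.0, 2.0, 3.0], kS=0.5)
p_strong = move_probabilities([2.0, 1.0, 2.0, 3.0], kS=5.0)
```

As kS grows, the probability mass concentrates on the neighbour closest to the exit, which is the knob behind the outflow-versus-field-strength dependence studied above.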
An updating method for structural dynamics models with unknown excitations
Energy Technology Data Exchange (ETDEWEB)
Louf, F; Charbonnel, P E; Ladeveze, P [LMT-Cachan (ENS Cachan/CNRS/Paris 6 University) 61, avenue du Président Wilson, F-94235 Cachan Cedex (France); Gratien, C [Astrium (EADS space transportation) - Service TE 343 66, Route de Verneuil, 78133 Les Mureaux Cedex (France)], E-mail: charbonnel@lmt.ens-cachan.fr, E-mail: ladeveze@lmt.ens-cachan.fr, E-mail: louf@lmt.ens-cachan.fr, E-mail: christian.gratien@astrium.eads.net
2008-11-01
This paper presents an extension of the Constitutive Relation Error (CRE) updating method to complex industrial structures, such as space launchers, for which tests carried out in the functional context can provide significant amounts of information. Indeed, since several sources of excitation are involved simultaneously, a flight test can be viewed as a multiple test. However, there is a serious difficulty in that these sources of excitation are partially unknown. The CRE updating method enables one to obtain an estimate of these excitations. We present a first application of the method using a very simple finite element model of the Ariane V launcher along with measurements performed at the end of an atmospheric flight.
Li, Senhu; Sarment, David
2016-03-01
Image-guided procedures with intraoperative imaging updates have made a big impact on minimally invasive surgery. A compact, mobile CT imaging device combined with a commercially available image-guided navigation system is a legitimate and cost-efficient solution for a typical operating room setup. However, the process of manual fiducial-based registration between image and physical spaces (image-to-world) is troublesome for surgeons during the procedure, causing frequent interruptions, and is the main source of registration errors. In this study, we developed a novel method to eliminate the manual registration process. Instead of using a probe to manually localize the fiducials during the surgery, a tracking plate with known fiducial positions relative to the reference coordinates is designed and fabricated through a 3D printing technique. The workflow and feasibility of this method have been studied through a phantom experiment.
DEFF Research Database (Denmark)
Hansen, Lisbet Sneftrup; Borup, Morten; Moller, Arne
2014-01-01
, and then evaluates and documents the performance of this particular updating procedure for flow forecasting. A hypothetical case study and synthetic observations are used to illustrate how the Update method works and affects downstream nodes. A real case study in a 544 ha urban catchment furthermore shows...
Procedural Optimization Models for Multiobjective Flexible JSSP
Directory of Open Access Journals (Sweden)
Elena Simona NICOARA
2013-01-01
The most challenging issues related to manufacturing efficiency occur if the jobs to be scheduled are structurally different, if these jobs allow flexible routings on the equipment and if multiple objectives are required. This framework, called Multi-objective Flexible Job Shop Scheduling Problems (MOFJSSP), applicable to many real processes, has been less reported in the literature than the JSSP framework, which has been extensively formalized, modeled and analyzed from many perspectives. The MOFJSSP lies, like many other NP-hard problems, in a tedious place where the vast optimization theory meets the real-world context. The paper brings to discussion the optimization models best suited to the MOFJSSP and analyzes in detail genetic algorithms and agent-based models as the most appropriate procedural models.
Kaiser, Michael G; Eck, Jason C; Groff, Michael W; Watters, William C; Dailey, Andrew T; Resnick, Daniel K; Choudhri, Tanvir F; Sharan, Alok; Wang, Jeffrey C; Mummaneni, Praveen V; Dhall, Sanjay S; Ghogawala, Zoher
2014-07-01
Fusion procedures are an accepted and successful management strategy to alleviate pain and/or neurological symptoms associated with degenerative disease of the lumbar spine. In 2005, the first version of the "Guidelines for the performance of fusion procedures for degenerative disease of the lumbar spine" was published in the Journal of Neurosurgery: Spine. In an effort to incorporate evidence obtained since the original publication of these guidelines, an expert panel of neurosurgical and orthopedic spine specialists was convened in 2009. Topics reviewed were essentially identical to the original publication. Selected manuscripts from the first iteration of these guidelines as well as relevant publications between 2005 through 2011 were reviewed. Several modifications to the methodology of guideline development were adopted for the current update. In contrast to the 2005 guidelines, a 5-tiered level of evidence strategy was employed, primarily allowing a distinction between lower levels of evidence. The qualitative descriptors (standards/guidelines/options) used in the 2005 recommendations were abandoned and replaced with grades to reflect the strength of medical evidence supporting the recommendation. Recommendations that conflicted with the original publication, if present, were highlighted at the beginning of each chapter. As with the original guideline publication, the intent of this update is to provide a foundation from which an appropriate treatment strategy can be formulated.
Finite element model updating of concrete structures based on imprecise probability
Biswal, S.; Ramaswamy, A.
2017-09-01
Imprecise probability based methods are developed in this study for the parameter estimation, in finite element model updating for concrete structures, when the measurements are imprecisely defined. Bayesian analysis using Metropolis Hastings algorithm for parameter estimation is generalized to incorporate the imprecision present in the prior distribution, in the likelihood function, and in the measured responses. Three different cases are considered (i) imprecision is present in the prior distribution and in the measurements only, (ii) imprecision is present in the parameters of the finite element model and in the measurement only, and (iii) imprecision is present in the prior distribution, in the parameters of the finite element model, and in the measurements. Procedures are also developed for integrating the imprecision in the parameters of the finite element model, in the finite element software Abaqus. The proposed methods are then verified against reinforced concrete beams and prestressed concrete beams tested in our laboratory as part of this study.
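The Metropolis-Hastings core of such a Bayesian updating scheme can be sketched on a toy one-parameter problem; the posterior below is a stand-in (a Gaussian "measurement" of a stiffness ratio), not the beam models used in the study:

```python
import numpy as np

def metropolis_hastings(log_post, x0, n_steps, step, rng):
    """Random-walk Metropolis-Hastings over a scalar model parameter
    (e.g. an elastic-modulus scale factor in an FE model)."""
    x, lp = x0, log_post(x0)
    chain = []
    for _ in range(n_steps):
        xp = x + step * rng.normal()          # symmetric proposal
        lpp = log_post(xp)
        if np.log(rng.random()) < lpp - lp:   # accept with min(1, ratio)
            x, lp = xp, lpp
        chain.append(x)
    return np.array(chain)

# toy posterior: measurement says the stiffness ratio is near 1.1
rng = np.random.default_rng(2)
log_post = lambda x: -0.5 * ((x - 1.1) / 0.05) ** 2
chain = metropolis_hastings(log_post, x0=1.0, n_steps=5000, step=0.05, rng=rng)
```

The imprecise-probability extension described above would replace the single `log_post` with bounds derived from interval-valued priors and measurements; this sketch shows only the precise-probability sampler being generalized.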
Fu, Y.; Yang, W.; Xu, O.; Zhou, L.; Wang, J.
2017-04-01
To investigate time-variant and nonlinear characteristics in industrial processes, a soft sensor modelling method based on time difference, moving-window recursive partial least squares (PLS) and adaptive model updating is proposed. In this method, time difference values of input and output variables are used as training samples to construct the model, which can reduce the effects of the nonlinear characteristic on modelling accuracy and retain the advantages of the recursive PLS algorithm. To avoid an excessively high updating frequency of the model, a confidence value is introduced, which can be updated adaptively according to the results of the model performance assessment. Once the confidence value is updated, the model can be updated. The proposed method has been used to predict the 4-carboxybenzaldehyde (CBA) content in the purified terephthalic acid (PTA) oxidation reaction process. The results show that the proposed soft sensor modelling method can reduce computation effectively, improve prediction accuracy by making use of process information and reflect the process characteristics accurately.
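The time-difference preprocessing step is simple to illustrate: the regression is trained on differences of inputs and outputs rather than raw values, which suppresses slow drift before the PLS step. A sketch with synthetic drifting data:

```python
import numpy as np

def time_difference(X, y, lag=1):
    """Time-difference preprocessing for soft-sensor modelling: train on
    dX_t = X_t - X_{t-lag} and dy_t = y_t - y_{t-lag}, which removes
    slowly drifting offsets (time-variant behaviour) from the data."""
    return X[lag:] - X[:-lag], y[lag:] - y[:-lag]

X = np.cumsum(np.ones((10, 3)), axis=0)   # inputs drifting upward
y = np.arange(10.0)                        # output drifting in step
dX, dy = time_difference(X, y)             # differences are stationary
```

The downstream model (here, moving-window recursive PLS) then fits `dy` from `dX`, and predictions are re-integrated by adding the previous measured output.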
Rotating shaft model updating from modal data by a direct energy approach : a feasibility study
Energy Technology Data Exchange (ETDEWEB)
Audebert, S. [Electricite de France (EDF), 75 - Paris (France). Direction des Etudes et Recherches; Girard, A.; Chatelain, J. [Intespace - Division Etudes et Recherche, 31 - Toulouse (France)
1996-12-31
Investigations to improve rotating machinery monitoring tend more and more to use numerical models. The aim is to obtain multi-fluid-bearing rotor models which are able to correctly represent the machines' dynamic behaviour, in terms of either modal or forced response. The possibility of extending the direct energy method, initially developed for undamped structures, to rotating machinery is studied. It is based on the minimization of the kinetic and strain energy gap between experimental and analytic modal data. The preliminary determination of the eigenmodes of a multi-linear-bearing rotor system shows the complexity of the problem in comparison with undamped non-rotating structures: taking into account gyroscopic effects and bearing damping, as functions of rotor velocity, leads to eigenmodes with complex components; moreover, non-symmetric matrices, related to the stiffness and damping bearing contributions, induce distinct left- and right-hand-side eigenmodes (the left-hand-side eigenmodes correspond to the adjoint structure). Theoretically, the extension of the energy method is studied by considering first the intermediate case of an undamped non-gyroscopic structure, and second the general case of a rotating shaft: the data used for the updating procedure are eigenfrequencies and left- and right-hand-side mode shapes. Since left-hand-side mode shapes cannot be directly measured, they are replaced by analytic ones. The method is tested on a two-bearing rotor system with an added mass; simulated data are used, relative to a non-compatible structure, i.e. one which is not part of the set of possible modified analytic structures. The parameters to be corrected are the mass density, the Young's modulus, and the linearized stiffness and damping characteristics of the bearings. If the parameters are influential with regard to the modes to be updated, the updating method permits a significant improvement of the gap between analytic and experimental modes, even for modes not involved in the procedure. Modal damping appears to be more
Update on dexmedetomidine: use in nonintubated patients requiring sedation for surgical procedures
Directory of Open Access Journals (Sweden)
Mohanad Shukry
2010-03-01
Mohanad Shukry, Jeffrey A Miller, University of Oklahoma Health Sciences Center, Department of Anesthesiology, Children's Hospital of Oklahoma, Oklahoma City, OK, USA. Abstract: Dexmedetomidine was introduced two decades ago as a sedative and supplement to sedation in the intensive care unit for patients whose trachea was intubated. However, since that time dexmedetomidine has been commonly used as a sedative and hypnotic for patients undergoing procedures without the need for tracheal intubation. This review focuses on the application of dexmedetomidine as a sedative and/or total anesthetic in patients undergoing procedures without the need for tracheal intubation. Dexmedetomidine was used for sedation in monitored anesthesia care (MAC), airway procedures including fiberoptic bronchoscopy, dental procedures, ophthalmological procedures, head and neck procedures, neurosurgery, and vascular surgery. Additionally, dexmedetomidine was used for the sedation of pediatric patients undergoing different types of procedures such as cardiac catheterization and magnetic resonance imaging. The dexmedetomidine loading dose ranged from 0.5 to 5 μg kg-1, and the infusion dose ranged from 0.2 to 10 μg kg-1 h-1. Dexmedetomidine was administered in conjunction with local anesthesia and/or other sedatives. Ketamine was administered with dexmedetomidine and opposed its bradycardic effects. Dexmedetomidine may be useful in patients needing sedation without tracheal intubation. The literature suggests potential use of dexmedetomidine solely or as an adjunctive agent to other sedation agents. Dexmedetomidine was especially useful when spontaneous breathing was essential, such as in procedures on the airway, or when sudden awakening from sedation was required, such as for cooperative clinical examination during craniotomies. Keywords: dexmedetomidine, sedation, nonintubated patients
An Updated Gas/grain Sulfur Network for Astrochemical Models
Laas, Jacob; Caselli, Paola
2017-06-01
Sulfur is a chemical element that enjoys one of the highest cosmic abundances. However, it has traditionally played a relatively minor role in the field of astrochemistry, being drowned out by other chemistries after it depletes from the gas phase during the transition from a diffuse cloud to a dense one. A wealth of laboratory studies have provided clues to its rich chemistry in the condensed phase, and most recently, a report by a team behind the Rosetta spacecraft has significantly helped to unveil its rich cometary chemistry. We have set forth to use this information to greatly update/extend the sulfur reactions within the OSU gas/grain astrochemical network in a systematic way, to provide more realistic chemical models of sulfur for a variety of interstellar environments. We present here some results and implications of these models.
Robust estimation procedure in panel data model
Energy Technology Data Exchange (ETDEWEB)
Shariff, Nurul Sima Mohamad [Faculty of Science of Technology, Universiti Sains Islam Malaysia (USIM), 71800, Nilai, Negeri Sembilan (Malaysia); Hamzah, Nor Aishah [Institute of Mathematical Sciences, Universiti Malaya, 50630, Kuala Lumpur (Malaysia)
2014-06-19
Panel data modeling has received great attention in econometric research recently. This is due to the availability of data sources and the interest in studying cross sections of individuals observed over time. However, problems may arise in modeling the panel in the presence of cross-sectional dependence and outliers. Even though there are a few methods that take into consideration the presence of cross-sectional dependence in the panel, these methods may provide inconsistent parameter estimates and inferences when outliers occur in the panel. As such, an alternative method that is robust to outliers and cross-sectional dependence is introduced in this paper. The properties and construction of the confidence interval for the parameter estimates are also considered. The robustness of the procedure is investigated and comparisons are made to the existing method via simulation studies. Our results have shown that the robust approach is able to produce accurate and reliable parameter estimates under the conditions considered.
Updates to Model Algorithms & Inputs for the Biogenic ...
We have developed new canopy emission algorithms and land use data for BEIS. Simulations with BEIS v3.4 and these updates in CMAQ v5.0.2 are compared to the Model of Emissions of Gases and Aerosols from Nature (MEGAN) and evaluated against observations. This has resulted in improvements in model evaluations of modeled isoprene, NOx, and O3. The National Exposure Research Laboratory (NERL) Atmospheric Modeling and Analysis Division (AMAD) conducts research in support of the EPA's mission to protect human health and the environment. The AMAD research program is engaged in developing and evaluating predictive atmospheric models on all spatial and temporal scales for forecasting air quality and for assessing changes in air quality and air pollutant exposures, as affected by changes in ecosystem management and regulatory decisions. AMAD is responsible for providing a sound scientific and technical basis for regulatory policies based on air quality models to improve ambient air quality. The models developed by AMAD are being used by EPA, NOAA, and the air pollution community in understanding and forecasting not only the magnitude of the air pollution problem, but also in developing emission control policies and regulations for air quality improvements.
Update on plication procedures for Peyronie’s disease and other penile deformities
Mobley, Elizabeth M.; Fuchs, Molly E.; Myers, Jeremy B.
2012-01-01
Plication techniques are not a panacea for deformities associated with Peyronie's disease or congenital curvature. However, they do provide certain advantages, both theoretic and real, over competing procedures such as grafting. Depending on the technique, plication procedures have minimal risk of de novo erectile dysfunction, minimal risk of injury to the dorsal neurovascular bundle, and may be used for a variety of angulation deformities, including multiplanar curvature and severe degrees of curvature. A variety of incisions may be used, including the classic circumcision with degloving but also ventral raphe, dorsal penile inversion, and penoscrotal. These may be helpful in preventing postoperative morbidity and in sparing the prepuce if desired. Plication may also be combined with procedures such as penile prosthesis for correction of residual curvature. Lastly, despite their complications, plication techniques are very well tolerated, are relatively simple to perform and result in very high satisfaction rates. PMID:23205060
Simple brane-world inflationary models — An update
Okada, Nobuchika; Okada, Satomi
2016-05-01
In the light of the Planck 2015 results, we update simple inflationary models based on the quadratic, quartic, Higgs and Coleman-Weinberg potentials in the context of the Randall-Sundrum brane-world cosmology. The brane-world cosmological effect alters the inflationary predictions of the spectral index (ns) and the tensor-to-scalar ratio (r) from those obtained in standard cosmology. In particular, the tensor-to-scalar ratio is enhanced in the presence of the 5th dimension. In order to maintain consistency with the Planck 2015 results for the inflationary predictions in standard cosmology, we find a lower bound on the five-dimensional Planck mass (M5). On the other hand, inflationary predictions lying outside of the Planck allowed region can be pushed into the allowed region by the brane-world cosmological effect with a suitable choice of M5.
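For reference, the slow-roll quantities behind the quoted predictions, together with the Randall-Sundrum (RS-II) modified Friedmann equation that drives the brane-world enhancement, take the familiar textbook forms below; these are standard expressions in one common convention, not equations quoted from the paper:

```latex
\epsilon_V = \frac{M_P^2}{2}\left(\frac{V'}{V}\right)^{\!2}, \qquad
\eta_V = M_P^2\,\frac{V''}{V}, \qquad
n_s \simeq 1 - 6\epsilon_V + 2\eta_V, \qquad
r \simeq 16\,\epsilon_V,

H^2 = \frac{8\pi}{3 M_P^2}\, V \left(1 + \frac{V}{2\lambda}\right), \qquad
\lambda = \frac{3}{4\pi}\,\frac{M_5^6}{M_P^2}.
```

In the high-energy regime $V \gg \lambda$ the extra factor enhances the Hubble friction during inflation; lowering $M_5$ lowers the brane tension $\lambda$ and strengthens the effect, which is why the tensor-to-scalar ratio is enhanced and a lower bound on $M_5$ follows from the Planck constraints.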
Simple brane-world inflationary models: an update
Okada, Nobuchika
2015-01-01
In the light of the Planck 2015 results, we update simple inflationary models based on the quadratic, quartic, Higgs and Coleman-Weinberg potentials in the context of the Randall-Sundrum brane-world cosmology. The brane-world cosmological effect alters the inflationary predictions of the spectral index ($n_s$) and the tensor-to-scalar ratio ($r$) from those obtained in standard cosmology. In particular, the tensor-to-scalar ratio is enhanced in the presence of the 5th dimension. In order to maintain consistency with the Planck 2015 results for the inflationary predictions in standard cosmology, we find a lower bound on the five-dimensional Planck mass. On the other hand, inflationary predictions lying outside of the Planck allowed region can be pushed into the allowed region by the brane-world cosmological effect.
Finite element model updating of existing steel bridge based on structural health monitoring
Institute of Scientific and Technical Information of China (English)
HE Xu-hui; YU zhi-wu; CHEN Zheng-qing
2008-01-01
Based on the physical meaning of sensitivity, a new finite element (FE) model updating method was proposed. In this method, a three-dimensional FE model of the Nanjing Yangtze River Bridge (NYRB) was established with the ANSYS program and updated by modifying some design parameters. To further validate the updated FE model, the analytical stress-time history responses of main members induced by a moving train were compared with the measured ones. The results show that the relative error of maximum stress is 2.49% and the minimum relative coefficient of the analytical stress-time history responses is 0.793. The updated model shows good agreement between the calculated and the tested data, and provides a current baseline FE model for long-term health monitoring and condition assessment of the NYRB. At the same time, the stress-time history responses validate the model as feasible and practical for railway steel bridge model updating.
Implicit Value Updating Explains Transitive Inference Performance: The Betasort Model.
Directory of Open Access Journals (Sweden)
Greg Jensen
Transitive inference (the ability to infer that B > D given that B > C and C > D) is a widespread characteristic of serial learning, observed in dozens of species. Despite these robust behavioral effects, reinforcement learning models reliant on reward prediction error or associative strength routinely fail to perform these inferences. We propose an algorithm called betasort, inspired by cognitive processes, which performs transitive inference at low computational cost. This is accomplished by (1) representing stimulus positions along a unit span using beta distributions, (2) treating positive and negative feedback asymmetrically, and (3) updating the position of every stimulus during every trial, whether that stimulus was visible or not. Performance was compared for rhesus macaques, humans, and the betasort algorithm, as well as Q-learning, an established reward-prediction error (RPE) model. Of these, only Q-learning failed to respond above chance during critical test trials. Betasort's success (when compared to RPE models) and its computational efficiency (when compared to full Markov decision process implementations) suggest that the study of reinforcement learning in organisms will be best served by a feature-driven approach to comparing formal models.
Methods for the Update and Verification of Forest Surface Model
Rybansky, M.; Brenova, M.; Zerzan, P.; Simon, J.; Mikita, T.
2016-06-01
The digital terrain model (DTM) represents the bare ground earth's surface without any objects like vegetation and buildings. In contrast to a DTM, a digital surface model (DSM) represents the earth's surface including all objects on it. The DTM mostly does not change as frequently as the DSM. The most important changes of the DSM occur in forest areas due to vegetation growth. Using LIDAR technology, the canopy height model (CHM) is obtained by subtracting the DTM from the corresponding DSM. The DSM is calculated from the first pulse echo and the DTM from the last pulse echo data. The main problem with using DSM and CHM data is the currency of the airborne laser scanning. This paper describes a method of calculating changes in the CHM and DSM data using the relations between canopy height and tree age. To obtain a present basic reference data model of the canopy height, photogrammetric and trigonometric measurements of single trees were used. By comparing the heights of corresponding trees on aerial photographs of various ages, statistical sets of the tree growth rate were obtained. These statistical data and LIDAR data were compared with the growth curve of the spruce forest corresponding to a similar natural environment (soil quality, climate characteristics, geographic location, etc.) to obtain the updating characteristics.
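The CHM computation described above (subtracting the DTM from the DSM) can be sketched with toy raster arrays; the values and array names below are illustrative, not taken from the study's data:

```python
import numpy as np

# Toy rasters (values in metres). In practice the DSM comes from first-pulse
# and the DTM from last-pulse LIDAR returns; these numbers are made up.
dsm = np.array([[212.0, 215.5, 230.1],
                [211.8, 224.0, 228.3]])
dtm = np.array([[211.5, 212.0, 212.4],
                [211.6, 212.1, 212.5]])

# Canopy height model: per-cell difference, clipped at zero to suppress
# small negative values caused by interpolation noise.
chm = np.clip(dsm - dtm, 0.0, None)
print(chm.max())   # tallest canopy cell in this toy example
```

The same one-line subtraction scales to full raster tiles, which is why keeping the LIDAR acquisitions current matters more than the arithmetic itself.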
Update rules and interevent time distributions: slow ordering versus no ordering in the voter model.
Fernández-Gracia, J; Eguíluz, V M; San Miguel, M
2011-07-01
We introduce a general methodology of update rules accounting for arbitrary interevent time (IET) distributions in simulations of interacting agents. We consider in particular update rules that depend on the state of the agent, so that the update becomes part of the dynamical model. As an illustration we consider the voter model in fully connected, random, and scale-free networks with an activation probability inversely proportional to the time since the last action, where an action can be an update attempt (an exogenous update) or a change of state (an endogenous update). We find that in the thermodynamic limit, at variance with standard updates and the exogenous update, the system orders slowly for the endogenous update. The approach to the absorbing state is characterized by a power-law decay of the density of interfaces, observing that the mean time to reach the absorbing state might not be well defined. The IET distributions resulting from both update schemes show power-law tails.
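The simulation framework described here can be sketched with the standard (state-independent) update on a fully connected network; the uniform node selection below is exactly the spot where an age-dependent activation probability (the exogenous or endogenous rule) would plug in. All names and sizes are illustrative:

```python
import random

def voter_consensus(n=30, seed=1, max_steps=200_000):
    """Standard-update voter model on a fully connected network.

    Returns the number of elementary updates until the absorbing
    (consensus) state is reached. The paper's exogenous/endogenous rules
    would replace the uniform node selection with an activation
    probability inversely proportional to the time since the last action.
    """
    rng = random.Random(seed)
    state = [rng.randint(0, 1) for _ in range(n)]
    for step in range(max_steps):
        if len(set(state)) == 1:      # absorbing state: all agents agree
            return step
        i = rng.randrange(n)          # uniform activation (standard update)
        j = rng.randrange(n)          # random neighbour on the complete graph
        if i != j:
            state[i] = state[j]       # copy the neighbour's state
    return max_steps

steps = voter_consensus()
print(steps)
```

On a complete graph of this size the standard update reaches consensus quickly; the paper's point is that the endogenous rule dramatically slows this approach in the thermodynamic limit.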
Passive Remote Sensing of Oceanic Whitecaps: Updated Geophysical Model Function
Anguelova, M. D.; Bettenhausen, M. H.; Johnston, W.; Gaiser, P. W.
2016-12-01
Many air-sea interaction processes are quantified in terms of whitecap fraction W because oceanic whitecaps are the most visible and direct way of observing breaking of wind waves in the open ocean. Enhanced by breaking waves, surface fluxes of momentum, heat, and mass are critical for ocean-atmosphere coupling and thus affect the accuracy of models used to forecast weather, predict storm intensification, and study climate change. Whitecap fraction has been traditionally measured from photographs or video images collected from towers, ships, and aircraft. Satellite-based passive remote sensing of whitecap fraction is a recent development that allows long-term, consistent observations of whitecapping on a global scale. The method relies on changes of ocean surface emissivity at microwave frequencies (e.g., 6 to 37 GHz) due to the presence of sea foam on a rough sea surface. These changes at the ocean surface are observed from the satellite as brightness temperature TB. A year-long W database built with this algorithm has proven useful in analyzing and quantifying the variability of W, as well as estimating fluxes of CO2 and sea spray production. The algorithm to obtain W from satellite observations of TB was developed at the Naval Research Laboratory within the framework of the WindSat mission. The W(TB) algorithm estimates W by minimizing the differences between measured and modeled TB data. A geophysical model function (GMF) calculates TB at the top of the atmosphere as contributions from the atmosphere and the ocean surface. The ocean surface emissivity combines the emissivity of the rough sea surface and the emissivity of areas covered with foam. Wind speed and direction, sea surface temperature, water vapor, and cloud liquid water are inputs to the atmospheric, roughness and foam models comprising the GMF. The W(TB) algorithm has been recently updated to use new sources and products for the input variables. We present a new version of the W(TB) algorithm that uses updated
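The minimization at the heart of the W(TB) retrieval can be illustrated with a deliberately simplified one-channel forward model. The emissivities, temperature, and use of scipy below are made-up illustrations, not WindSat values or the NRL implementation:

```python
from scipy.optimize import minimize_scalar

SST = 290.0      # sea surface temperature, K (illustrative)
E_ROUGH = 0.40   # emissivity of the foam-free rough surface (illustrative)
E_FOAM = 0.90    # emissivity of foam-covered areas (illustrative)

def tb_model(w):
    # Toy GMF: surface emission as a foam-fraction-weighted mix of the
    # rough-surface and foam emissivities (atmosphere omitted).
    return SST * ((1.0 - w) * E_ROUGH + w * E_FOAM)

tb_measured = tb_model(0.02)   # synthetic "measurement" with W = 2%

# Retrieval: choose W that minimizes the measured-vs-modelled TB mismatch.
res = minimize_scalar(lambda w: (tb_measured - tb_model(w)) ** 2,
                      bounds=(0.0, 0.2), method="bounded")
print(round(res.x, 4))
```

The real algorithm does the same thing with a multi-channel GMF that also models the atmospheric contribution and the wind-driven roughness.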
An Update to the NASA Reference Solar Sail Thrust Model
Heaton, Andrew F.; Artusio-Glimpse, Alexandra B.
2015-01-01
An optical model of solar sail material originally derived at JPL in 1978 has since served as the de facto standard for NASA and other solar sail researchers. The optical model includes terms for specular and diffuse reflection, thermal emission, and non-Lambertian diffuse reflection. The standard coefficients for these terms are based on tests of 2.5-micrometer Kapton sail material coated with 100 nm of aluminum on the front side and chromium on the back side. The original derivation of these coefficients was documented in an internal JPL technical memorandum that is no longer available. Additionally, more recent optical testing has taken place, and different materials have been used or are under consideration by various researchers for solar sails. Here, where possible, we re-derive the optical coefficients from the 1978 model and update them to accommodate newer test results and sail material. The source of the commonly used value for the front side non-Lambertian coefficient is not clear, so we investigate that coefficient in detail. Although this research is primarily designed to support the upcoming NASA NEA Scout and Lunar Flashlight solar sail missions, the results are also of interest to the wider solar sail community.
Real time hybrid simulation with online model updating: An analysis of accuracy
Ou, Ge; Dyke, Shirley J.; Prakash, Arun
2017-02-01
In conventional hybrid simulation (HS) and real time hybrid simulation (RTHS) applications, the information exchanged between the experimental substructure and numerical substructure is typically restricted to the interface boundary conditions (force, displacement, acceleration, etc.). With additional demands being placed on RTHS and recent advances in recursive system identification techniques, an opportunity arises to improve the fidelity by extracting information from the experimental substructure. Online model updating algorithms enable the numerical model of components (herein named the target model) that are similar to the physical specimen to be modified accordingly. This manuscript demonstrates the power of integrating a model updating algorithm into RTHS (RTHSMU) and explores the possible challenges of this approach through a practical simulation. Two Bouc-Wen models with varying levels of complexity are used as target models to validate the concept and evaluate the performance of this approach. The constrained unscented Kalman filter (CUKF) is selected for use in the model updating algorithm. The accuracy of RTHSMU is evaluated through an estimation output error indicator, a model updating output error indicator, and a system identification error indicator. The results illustrate that, under applicable constraints, by integrating model updating into RTHS, the global response accuracy can be improved when the target model is unknown. A discussion on the sensitivity of model updating parameters to updating accuracy is also presented to provide guidance for potential users.
Prediction error, ketamine and psychosis: An updated model.
Corlett, Philip R; Honey, Garry D; Fletcher, Paul C
2016-11-01
In 2007, we proposed an explanation of delusion formation as aberrant prediction error-driven associative learning. Further, we argued that the NMDA receptor antagonist ketamine provided a good model for this process. Subsequently, we validated the model in patients with psychosis, relating aberrant prediction error signals to delusion severity. During the ensuing period, we have developed these ideas, drawing on the simple principle that brains build a model of the world and refine it by minimising prediction errors, as well as using it to guide perceptual inferences. While previously we focused on the prediction error signal per se, an updated view takes into account its precision, as well as the precision of prior expectations. With this expanded perspective, we see several possible routes to psychotic symptoms, which may explain the heterogeneity of psychotic illness, as well as the fact that other drugs, with different pharmacological actions, can produce psychotomimetic effects. In this article, we review the basic principles of this model and highlight specific ways in which prediction errors can be perturbed, in particular considering the reliability and uncertainty of predictions. The expanded model explains hallucinations as perturbations of the uncertainty-mediated balance between expectation and prediction error. Here, expectations dominate and create perceptions by suppressing or ignoring actual inputs. Negative symptoms may arise due to poor reliability of predictions in service of action. By mapping from biology to belief and perception, the account proffers new explanations of psychosis. However, challenges remain. We attempt to address some of these concerns and suggest future directions, incorporating other symptoms into the model, building towards a better understanding of psychosis. © The Author(s) 2016.
A Survey on Procedural Modelling for Virtual Worlds
Smelik, R.M.; Tutenel, T.; Bidarra, R.; Benes, B.
2014-01-01
Procedural modelling deals with (semi-)automatic content generation by means of a program or procedure. Among other advantages, its data compression and the potential to generate a large variety of detailed content with reduced human intervention, have made procedural modelling attractive for creati
Updated Conceptual Model for the 300 Area Uranium Groundwater Plume
Energy Technology Data Exchange (ETDEWEB)
Zachara, John M.; Freshley, Mark D.; Last, George V.; Peterson, Robert E.; Bjornstad, Bruce N.
2012-11-01
The 300 Area uranium groundwater plume in the 300-FF-5 Operable Unit is residual from past discharge of nuclear fuel fabrication wastes to a number of liquid (and solid) disposal sites. The source zones in the disposal sites were remediated by excavation and backfilled to grade, but sorbed uranium remains in deeper, unexcavated vadose zone sediments. In spite of source term removal, the groundwater plume has shown remarkable persistence, with concentrations exceeding the drinking water standard over an area of approximately 1 km2. The plume resides within a coupled vadose zone, groundwater, river zone system of immense complexity and scale. Interactions between geologic structure, the hydrologic system driven by the Columbia River, groundwater-river exchange points, and the geochemistry of uranium contribute to persistence of the plume. The U.S. Department of Energy (DOE) recently completed a Remedial Investigation/Feasibility Study (RI/FS) to document characterization of the 300 Area uranium plume and plan for beginning to implement proposed remedial actions. As part of the RI/FS document, a conceptual model was developed that integrates knowledge of the hydrogeologic and geochemical properties of the 300 Area and controlling processes to yield an understanding of how the system behaves and the variables that control it. Recent results from the Hanford Integrated Field Research Challenge site and the Subsurface Biogeochemistry Scientific Focus Area Project funded by the DOE Office of Science were used to update the conceptual model and provide an assessment of key factors controlling plume persistence.
Network inference using asynchronously updated kinetic Ising Model
Zeng, Hong-Li; Alava, Mikko; Mahmoudi, Hamed
2010-01-01
Network structures are reconstructed from dynamical data by naive mean-field (nMF) and Thouless-Anderson-Palmer (TAP) approximations, respectively. For the TAP approximation, we use two methods to reconstruct the network: (a) an iteration method; (b) casting the inference formula into a set of cubic equations and solving it directly. We investigate inference of the asymmetric Sherrington-Kirkpatrick (S-K) model using asynchronous updates. The solutions of the set of cubic equations depend on the temperature T in the S-K model, and a critical temperature Tc is found around 2.1. For T < Tc there is only one real root, while for T > Tc there are three real roots. The iteration method is convergent only if the cubic equations have three real solutions. The two methods give the same results when the iteration method is convergent. Compared to nMF, TAP is somewhat better at low temperatures, but approaches the same performance as the temperature increases. Both methods behave better for longer data lengths, and the improvement is more pronounced for TAP.
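The flavour of nMF reconstruction can be shown on a minimal equilibrium analogue (two spins, exact correlations), where the nMF coupling estimate is the negated off-diagonal of the inverse correlation matrix. This is a sketch of the general idea only, not the asynchronous kinetic scheme of the paper:

```python
import numpy as np

J_true = 0.3           # true coupling between the two spins (illustrative)
t = np.tanh(J_true)    # exact pair correlation <s1 s2> at zero field

# Exact correlation matrix of the two-spin system
C = np.array([[1.0, t],
              [t, 1.0]])

# nMF inference: J_ij is approximated by -(C^{-1})_ij for i != j
J_nmf = -np.linalg.inv(C)[0, 1]
print(round(J_nmf, 3))
```

The estimate comes out close to, but not exactly, the true coupling, which is the generic behaviour of mean-field inference; the TAP correction reduces this bias at stronger couplings.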
Foothills model forest grizzly bear study : project update
Energy Technology Data Exchange (ETDEWEB)
NONE
2002-01-01
This report updates a five-year study launched in 1999 to ensure the continued healthy existence of grizzly bears in west-central Alberta by integrating their needs into land management decisions. The objective was to gather better information and to develop computer-based maps and models regarding grizzly bear migration, habitat use and response to human activities. The study area covers 9,700 square km in west-central Alberta where 66 to 147 grizzly bears exist. During the first 3 field seasons, researchers captured and radio-collared 60 bears. Researchers at the University of Calgary used remote sensing tools and satellite images to develop grizzly bear habitat maps. Collaborators at the University of Washington used trained dogs to find bear scat, which was analyzed for DNA, stress levels and reproductive hormones. Resource Selection Function models are being developed by researchers at the University of Alberta to identify bear locations and to see how habitat is influenced by vegetation cover and oil, gas, forestry and mining activities. The health of the bears is being studied by researchers at the University of Saskatchewan and the Canadian Cooperative Wildlife Health Centre. The study has already advanced the scientific knowledge of grizzly bear behaviour. Preliminary results indicate that grizzlies continue to find mates, reproduce, gain weight and establish dens. These are all good indicators of a healthy population. Most bear deaths have been related to poaching. The study will continue for another two years. 1 fig.
Accommodation-Amplitudes following an Accommodative Lens Refilling Procedure — an in vivo Update
Nishi, Okihiro; Nishi, Yutaro; Chang, S.; Nishi, Kayo
2014-01-01
Purpose To investigate whether a newly developed lens refilling procedure can provide some accommodation in monkey eyes and to evaluate the difference in accommodation with different degrees of capsular bag refilling. Setting Jinshikai Medical Foundation, Nishi Eye Hospital, Osaka, Japan. Design Experimental monkey study. Methods Following a central 3–4 mm continuous curvilinear capsulorhexis, phacoemulsification was performed in the usual manner. A novel accommodative membrane intraocular lens for sealing the capsular opening was implanted into the capsular bag. Silicone polymers were injected beneath the intraocular lens into the capsular bag through the delivery hole. In three study groups, each with six monkey eyes, the lens capsule was refilled with 0.08 ml corresponding to 65% bag volume, 0.1 ml corresponding to 80% bag volume, and 0.125 ml of silicone polymers corresponding to 100% bag volume, respectively. To calculate the accommodation-amplitudes achieved, automated refractometry was performed before and 1 hour after topical 4% pilocarpine application, both before and four weeks after surgery. Results The refilling technique was successful in all monkeys without polymer leakage. Accommodation-amplitudes attained were 2.56 ± 0.74 diopters (D), 2.42 ± 1.00 D, and 2.71 ± 0.63 D, respectively, 4 weeks after surgery in the three study groups. Conclusions Using the technique, some accommodation could be obtained in the young monkey eyes. Leakage of the injectable silicone polymer and anterior capsular opacification, at least in the visual axis, could be avoided. The results suggest that this lens refilling procedure warrants further studies for a possible clinical application. PMID:24461501
Model updating of rotor systems by using Nonlinear least square optimization
Jha, A. K.; Dewangan, P.; Sarangi, M.
2016-07-01
Mathematical models of structures or machinery always differ from the existing physical system, because the ability of numerical predictions to capture the behavior of a physical system is limited by the assumptions used in the development of the mathematical model. Model updating is therefore necessary so that the updated model replicates the physical system. This work focuses on the model updating of rotor systems at various speeds as well as at different modes of vibration. Support bearing characteristics severely influence the dynamics of rotor systems like turbines, compressors, pumps, electrical machines, machine tool spindles, etc. Therefore, the bearing parameters (stiffness and damping) are considered to be the updating parameters. A finite element model of the rotor system is developed using Timoshenko beam elements. The unbalance response in the time domain and the frequency response function have been calculated by numerical techniques and compared with experimental data to update the FE model of the rotor system. An algorithm based on the unbalance response in the time domain is proposed for updating rotor systems at different running speeds. An attempt has also been made to define an unbalance response assurance criterion (URAC) to check the degree of correlation between the updated FE model and the physical model.
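The idea of updating bearing stiffness and damping from measured response can be sketched on a one-degree-of-freedom surrogate: generate a synthetic "measured" frequency response, then fit the two parameters by nonlinear least squares. The numbers and the use of scipy are illustrative, not the authors' implementation:

```python
import numpy as np
from scipy.optimize import least_squares

M = 10.0                                   # rotor mass, kg (illustrative)
omega = np.linspace(10.0, 300.0, 60)       # excitation frequencies, rad/s

def frf_mag(params):
    k, c = params
    # |H(i*w)| of a 1-DOF mass-spring-damper: 1 / |k - M w^2 + i w c|
    return 1.0 / np.abs(k - M * omega**2 + 1j * omega * c)

measured = frf_mag([2.0e5, 400.0])         # synthetic "measurement"

# Update k and c so the model FRF matches the measured FRF
# (relative residuals keep all frequency points equally weighted).
fit = least_squares(lambda p: (frf_mag(p) - measured) / measured,
                    x0=[1.0e5, 100.0], bounds=([1e3, 1.0], [1e7, 1e4]))
k_hat, c_hat = fit.x
print(round(k_hat), round(c_hat))
```

In the paper the same fitting idea is applied to a full Timoshenko-beam FE model, with bearing stiffness and damping as the updating parameters.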
"Updates to Model Algorithms & Inputs for the Biogenic Emissions Inventory System (BEIS) Model"
We have developed new canopy emission algorithms and land use data for BEIS. Simulations with BEIS v3.4 and these updates in CMAQ v5.0.2 are compared to the Model of Emissions of Gases and Aerosols from Nature (MEGAN) and evaluated against observatio...
Finite Element Model Updating Using Computational Intelligence Techniques
Marwala, Tshilidzi
2010-01-01
Finite element models (FEMs) are widely used to understand the dynamic behaviour of various systems. FEM updating allows FEMs to be tuned better to reflect measured data and may be conducted using two different statistical frameworks: the maximum likelihood approach and Bayesian approaches. Finite Element Model Updating Using Computational Intelligence Techniques applies both strategies to the field of structural mechanics, an area vital for aerospace, civil and mechanical engineering. Vibration data is used for the updating process. Following an introduction, a number of computational intelligence techniques to facilitate the updating process are proposed; they include: • multi-layer perceptron neural networks for real-time FEM updating; • particle swarm and genetic-algorithm-based optimization methods to accommodate the demands of global versus local optimization models; • simulated annealing to put the methodologies on a sound statistical basis; and • response surface methods and expectation m...
Updated free span design procedure DNV RP-F105 Ormen Lange experiences
Energy Technology Data Exchange (ETDEWEB)
Fyrileiv, Olav; Moerk, Kim; Chezhian, Muthu [Det Norsk Veritas (Norway)
2005-07-01
The Ormen Lange gas field is located within a prehistoric slide area with varying water depths from 250 to 1100 m. Due to the slide area, the seabed is very uneven, including steep slopes and seabed obstacles up to 50 meters tall. The major technical challenges with respect to pipeline design in this area are: extreme seabed topography combined with inhomogeneous soil conditions; uncertainties related to current velocities and distribution; a high number of spans, including some very long spans; deep waters and therefore difficult and costly seabed preparation/span intervention; flowlines with large potential to buckle laterally in combination with free spans. In order to minimise span intervention costs, a major testing campaign and research programme has been conducted in the Ormen Lange project to arrive at a design procedure in compliance with the DNV-RP-F105 (DNV, 2002) design philosophy. The improvements in terms of reduced seabed intervention and rock dumping costs are on the order of several hundred MNOK. The lessons learned and the improved knowledge will also be of great value for other projects dealing with similar free span problems. (author)
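A first-pass screening of the kind that RP-F105 formalizes compares a span's natural frequency with the vortex-shedding excitation, often expressed through the reduced velocity. A toy check with round illustrative numbers (not Ormen Lange design values):

```python
def shedding_frequency(u, d, strouhal=0.2):
    # Vortex-shedding frequency f_s = St * U / D
    return strouhal * u / d

def reduced_velocity(u, fn, d):
    # Reduced velocity V_R = U / (f_n * D); cross-flow VIV onset is
    # typically associated with V_R in roughly the 3-5 range.
    return u / (fn * d)

u = 0.8        # current speed, m/s (illustrative)
d = 0.5        # pipe outer diameter, m (illustrative)
fn = 1.2       # span natural frequency, Hz (illustrative)

print(round(shedding_frequency(u, d), 3), round(reduced_velocity(u, fn, d), 3))
```

The recommended practice layers fatigue screening, response models, and safety factors on top of this basic frequency comparison, which is where the project's testing campaign fed in.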
Cognition and procedure representational requirements for predictive human performance models
Corker, K.
1992-01-01
Models and modeling environments for human performance are becoming significant contributors to early system design and analysis procedures. Issues of levels of automation, physical environment, informational environment, and manning requirements are being addressed by such man/machine analysis systems. The research reported here investigates the close interaction between models of human cognition and models that describe procedural performance. We describe a methodology for the decomposition of aircrew procedures that supports interaction with models of cognition on the basis of procedures observed; that serves to identify cockpit/avionics information sources and crew information requirements; and that provides the structure to support methods for function allocation among crew and aiding systems. Our approach is to develop an object-oriented, modular, executable software representation of the aircrew, the aircraft, and the procedures necessary to satisfy flight-phase goals. We then encode, in a time-based language, taxonomies of the conceptual, relational, and procedural constraints among the cockpit avionics and control system and the aircrew. We have designed and implemented a goals/procedures hierarchic representation sufficient to describe procedural flow in the cockpit. We then execute the procedural representation in simulation software and calculate the values of the flight instruments, aircraft state variables and crew resources using the constraints available from the relationship taxonomies. The system provides a flexible, extensible, manipulable and executable representation of aircrew and procedures that is generally applicable to crew/procedure task analysis. The representation supports developed methods of intent inference, and is extensible to include issues of information requirements and functional allocation. We are attempting to link the procedural representation to models of cognitive functions to establish several intent inference methods.
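The goals/procedures hierarchy described here can be sketched as a tiny executable representation; the flight-phase names and the dictionary-based state are invented for illustration:

```python
def execute(node, state, trace):
    """Depth-first execution of a goals/procedures hierarchy.

    A node is either ('goal', name, [children]) or ('action', name, fn);
    actions mutate the shared state, standing in for crew/avionics effects.
    """
    kind, name = node[0], node[1]
    trace.append(name)
    if kind == "goal":
        for child in node[2]:
            execute(child, state, trace)
    else:
        node[2](state)

# Illustrative procedure tree for a descent flight phase.
tree = ("goal", "descend", [
    ("action", "set_target_altitude", lambda s: s.update(alt_target=3000)),
    ("goal", "configure", [
        ("action", "extend_flaps", lambda s: s.update(flaps=1)),
    ]),
])

state, trace = {}, []
execute(tree, state, trace)
print(trace, state)
```

A full implementation would attach timing, resource, and constraint information to each node, which is what lets the representation drive intent inference and function-allocation analysis.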
Update rules and interevent time distributions: Slow ordering vs. no ordering in the Voter Model
Fernández-Gracia, Juan; Miguel, M San
2011-01-01
We introduce a general methodology of update rules accounting for arbitrary interevent time distributions in simulations of interacting agents. In particular, we consider update rules that depend on the state of the agent, so that the update becomes part of the dynamical model. As an illustration we consider the voter model in fully connected, random, and scale-free networks with an update probability inversely proportional to the persistence, that is, the time since the last event. We find that in the thermodynamic limit, at variance with standard updates, the system orders slowly. The approach to the absorbing state is characterized by a power-law decay of the density of interfaces, observing that the mean time to reach the absorbing state might not be well defined.
Spread and Quote-Update Frequency of the Limit-Order Driven Sergei Maslov Model
Institute of Scientific and Technical Information of China (English)
QIU Tian; CHEN Guang
2007-01-01
We perform numerical simulations of the limit-order driven Sergei Maslov (SM) model and investigate the probability distribution and autocorrelation function of the bid-ask spread S and the quote-update frequency U. For the probability distribution, the model successfully reproduces the power law decay of the spread and the exponential decay of the quote-update frequency. For the autocorrelation function, both the spread and the quote-update frequency of the model decay by a power law, which is consistent with empirical studies. We obtain a power law exponent of 0.54 for the spread, which is in good agreement with the real financial market.
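A bare-bones sketch of a Maslov-style limit-order market is given below. This simplified variant tracks only the two sides of the book with heaps and is meant to show the mechanics of limit versus market orders, not to reproduce the paper's exponents; all parameters are illustrative:

```python
import heapq
import random

def simulate_maslov(steps=5000, seed=7):
    """Simplified Maslov-style limit-order market.

    Each arriving trader places either a limit order at a random offset
    from the last price or a market order that executes at the best quote.
    Crossing limit orders execute immediately, keeping the book uncrossed.
    """
    rng = random.Random(seed)
    bids, asks = [], []              # bid prices stored negated (max-heap)
    price, spreads = 100.0, []
    for _ in range(steps):
        delta = rng.uniform(0.01, 1.0)
        r = rng.random()
        if r < 0.25:                 # limit buy below the current price
            b = price - delta
            if asks and b >= asks[0]:
                price = heapq.heappop(asks)    # crossing order executes
            else:
                heapq.heappush(bids, -b)
        elif r < 0.5:                # limit sell above the current price
            a = price + delta
            if bids and a <= -bids[0]:
                price = -heapq.heappop(bids)
            else:
                heapq.heappush(asks, a)
        elif r < 0.75:               # market buy lifts the best ask
            if asks:
                price = heapq.heappop(asks)
        else:                        # market sell hits the best bid
            if bids:
                price = -heapq.heappop(bids)
        if bids and asks:
            spreads.append(asks[0] - (-bids[0]))   # bid-ask spread S
    return spreads

spreads = simulate_maslov()
print(round(sum(spreads) / len(spreads), 3))
```

Estimating the power-law tails the paper reports would require much longer runs and a careful fit of the spread and quote-update statistics.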
Research on the iterative method for model updating based on the frequency response function
Institute of Scientific and Technical Information of China (English)
Wei-Ming Li; Jia-Zhen Hong
2012-01-01
Model reduction techniques are usually employed in the model updating process. In this paper, a new model updating method named the cross-model cross-frequency response function (CMCF) method is proposed, and a new iterative method associating the model updating method with the model reduction technique is investigated. The new model updating method utilizes the frequency response function to avoid the modal analysis process and does not need to pair or scale the measured and the analytical frequency response functions, which greatly increases the number of equations and updating parameters. Based on the traditional iterative method, a correction term related to the errors resulting from the replacement of the reduction matrix of the experimental model with that of the finite element model is added in the new iterative method. Comparisons between the traditional iterative method and the proposed iterative method are shown by model updating examples of solar panels, and both iterative methods combine the CMCF method with the succession-level approximate reduction technique. The results show the effectiveness of the CMCF method and the proposed iterative method.
Execution model for limited ambiguity rules and its application to derived data update
Energy Technology Data Exchange (ETDEWEB)
Chen, I.M.A. [Lawrence Berkeley National Lab., CA (United States); Hull, R. [Univ. of Colorado, Boulder, CO (United States); McLeod, D. [Univ. of Southern California, Los Angeles, CA (United States)
1995-12-01
A novel execution model for rule application in active databases is developed and applied to the problem of updating derived data in a database represented using a semantic, object-based database model. The execution model is based on the use of "limited ambiguity rules" (LARs), which permit disjunction in rule actions. The execution model essentially performs a breadth-first exploration of alternative extensions of a user-requested update. Given an object-based database scheme, both integrity constraints and specifications of derived classes and attributes are compiled into a family of limited ambiguity rules. A theoretical analysis shows that the approach is sound: the execution model returns all valid "completions" of a user-requested update, or terminates with an appropriate error notification. The complexity of the approach in connection with derived data update is considered. 42 refs., 10 figs., 3 tabs.
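The breadth-first exploration of alternative rule extensions can be sketched generically. The derived-attribute example and all names below are invented for illustration; real LARs operate over an object-based schema rather than flat dictionaries:

```python
from collections import deque

def completions(state, rules, valid):
    """Breadth-first search over alternative extensions of an update.

    Each rule maps a state to a list of alternative successor states
    (a disjunctive action); an empty list means the rule does not fire.
    Quiescent states that satisfy the constraints are the completions.
    """
    frontier, seen, results = deque([state]), set(), []
    while frontier:
        s = frontier.popleft()
        key = tuple(sorted(s.items()))
        if key in seen:
            continue
        seen.add(key)
        successors = [alt for rule in rules for alt in rule(s)]
        if successors:
            frontier.extend(successors)
        elif valid(s):
            results.append(s)
    return results

# Derived attribute c = a + b; after a user edit of a, the rule offers
# two alternative repairs: recompute c, or adjust b to preserve c.
def repair(s):
    if s["c"] != s["a"] + s["b"]:
        return [{**s, "c": s["a"] + s["b"]},
                {**s, "b": s["c"] - s["a"]}]
    return []

valid = lambda s: s["c"] == s["a"] + s["b"]
print(completions({"a": 2, "b": 1, "c": 2}, [repair], valid))
```

Both quiescent states returned here are valid completions of the user's update, which mirrors the soundness property claimed for the execution model.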
An inner-outer nonlinear programming approach for constrained quadratic matrix model updating
Andretta, M.; Birgin, E. G.; Raydan, M.
2016-01-01
The Quadratic Finite Element Model Updating Problem (QFEMUP) concerns updating a symmetric second-order finite element model so that it remains symmetric and the updated model reproduces a given set of desired eigenvalues and eigenvectors, replacing the corresponding ones from the original model. Taking advantage of the special structure of the constraint set, it is first shown that the QFEMUP can be formulated as a suitable constrained nonlinear programming problem. Using this formulation, a method based on successive optimizations is then proposed and analyzed. To avoid spurious modes (eigenvectors) appearing in the frequency range of interest (eigenvalues) after the model has been updated, additional constraints based on a quadratic Rayleigh quotient are dynamically included in the constraint set. A distinct practical feature of the proposed method is that it can be implemented by computing only a few eigenvalues and eigenvectors of the associated quadratic matrix pencil.
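The eigenvalues of the quadratic pencil (λ²M + λC + K)x = 0 that such methods work with can be obtained by companion linearization. A minimal sketch with illustrative 1-DOF numbers (a real implementation would use a sparse solver to compute only the few eigenpairs needed):

```python
import numpy as np

def quadratic_eigenvalues(M, C, K):
    """Eigenvalues of the quadratic pencil (lam^2 M + lam C + K) x = 0
    via the standard first-companion linearization."""
    n = M.shape[0]
    Minv = np.linalg.inv(M)
    A = np.block([[np.zeros((n, n)), np.eye(n)],
                  [-Minv @ K,        -Minv @ C]])
    return np.linalg.eigvals(A)

M = np.array([[1.0]])
C = np.array([[0.0]])
K = np.array([[4.0]])
lam = quadratic_eigenvalues(M, C, K)   # undamped 1-DOF: lam = +/- 2i
print(np.sort_complex(lam))
```

The updating problem then amounts to modifying M, C, and K, subject to symmetry, so that selected eigenvalues of this pencil move to prescribed values.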
Modeling and prediction of surgical procedure times
P.S. Stepaniak (Pieter); C. Heij (Christiaan); G. de Vries (Guus)
2009-01-01
Accurate prediction of medical operation times is of crucial importance for cost-efficient operating room planning in hospitals. This paper investigates the possible dependence of procedure times on surgeon factors like age, experience, gender, and team composition. The effect of these f
A procedure for Building Product Models
DEFF Research Database (Denmark)
Hvam, Lars
1999-01-01
, easily adaptable concepts and methods from data modeling (object oriented analysis) and domain modeling (product modeling). The concepts are general and can be used for modeling all types of specifications in the different phases in the product life cycle. The modeling techniques presented have been...
Active Magnetic Bearing Rotor Model Updating Using Resonance and MAC Error
Directory of Open Access Journals (Sweden)
Yuanping Xu
2015-01-01
Modern control techniques can improve the performance and robustness of a rotor active magnetic bearing (AMB) system. Since those control methods usually rely on system models, it is important to obtain a precise analytical model of the rotor AMB. However, the interference fits and shrink effects of the rotor AMB introduce inaccuracy into the final system model. In this paper, an experiment-based model updating method is proposed to improve the accuracy of the finite element (FE) model used in a rotor AMB system. Modelling error is minimized by applying the Nelder-Mead simplex numerical optimization algorithm to properly adjust the FE model parameters. The resonance frequency errors and the modal assurance criterion (MAC) errors are minimized simultaneously to account for the rotor natural frequencies as well as for the mode shapes. Verification of the updated rotor model is performed by comparing the experimental and analytical frequency responses. The close agreement demonstrates the effectiveness of the proposed model updating methodology.
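A minimal sketch of this kind of updating loop: the Nelder-Mead simplex adjusts the stiffness parameters of a small model to minimize a combined resonance-frequency and MAC error. The 2-DOF chain model and the synthetic "measured" data are assumed for illustration:

```python
import numpy as np
from scipy.optimize import minimize

def modal(k):
    """Natural frequencies and mode shapes of a 2-DOF chain (unit masses)."""
    K = np.array([[k[0] + k[1], -k[1]], [-k[1], k[1]]])
    w2, V = np.linalg.eigh(K)
    return np.sqrt(w2), V

def mac(u, v):
    """Modal assurance criterion between two mode shape vectors."""
    return (u @ v) ** 2 / ((u @ u) * (v @ v))

# synthetic "measurement" from the true (unknown) stiffnesses
f_meas, V_meas = modal([2.0, 1.0])

def cost(k):
    """Combined relative frequency error and MAC error."""
    f, V = modal(k)
    freq_err = np.sum(((f - f_meas) / f_meas) ** 2)
    mac_err = sum(1.0 - mac(V[:, i], V_meas[:, i]) for i in range(2))
    return freq_err + mac_err

res = minimize(cost, x0=[1.5, 1.5], method="Nelder-Mead")
```

Because the MAC is invariant to eigenvector sign and scale, the objective stays well defined even though eigensolvers return arbitrarily signed shapes.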
Improvement and Validation of Weld Residual Stress Modelling Procedure
Energy Technology Data Exchange (ETDEWEB)
Zang, Weilin; Gunnars, Jens (Inspecta Technology AB, Stockholm (Sweden)); Dong, Pingsha; Hong, Jeong K. (Center for Welded Structures Research, Battelle, Columbus, OH (United States))
2009-06-15
The objective of this work is to identify and evaluate improvements for the residual stress modelling procedure currently used in Sweden. There is a growing demand to eliminate any unnecessary conservatism involved in residual stress assumptions. The study was focused on the development and validation of an improved weld residual stress modelling procedure, by taking advantage of the recent advances in residual stress modelling and stress measurement techniques. The major changes applied in the new weld residual stress modelling procedure are: - Improved procedure for heat source calibration based on use of analytical solutions. - Use of an isotropic hardening model where mixed hardening data is not available. - Use of an annealing model for improved simulation of strain relaxation in re-heated material. The new modelling procedure is demonstrated to capture the main characteristics of the through-thickness stress distributions by validation against experimental measurements. Three austenitic stainless steel butt-weld cases are analysed, covering a large range of pipe geometries. From the cases it is evident that there can be large differences between the residual stresses predicted using the new procedure, and the earlier procedure or handbook recommendations. Previously recommended profiles could give misleading fracture assessment results. The stress profiles according to the new procedure agree well with the measured data. If data is available, then a mixed hardening model should be used.
Procedures for Geometric Data Reduction in Solid Log Modelling
Luis G. Occeña; Wenzhen Chen; Daniel L. Schmoldt
1995-01-01
One of the difficulties in solid log modelling is working with huge data sets, such as those that come from computed axial tomographic imaging. Algorithmic procedures are described in this paper that have successfully reduced data without sacrificing modelling integrity.
Advanced Test Reactor Core Modeling Update Project Annual Report for Fiscal Year 2011
Energy Technology Data Exchange (ETDEWEB)
David W. Nigg; Devin A. Steuhm
2011-09-01
, a capability for rigorous sensitivity analysis and uncertainty quantification based on the TSUNAMI system is being implemented and initial computational results have been obtained. This capability will have many applications in 2011 and beyond as a tool for understanding the margins of uncertainty in the new models, as well as for validation experiment design and interpretation. Finally, we note that although full implementation of the new computational models and protocols will extend over a period of 3-4 years as noted above, interim applications in the much nearer term have already been demonstrated. In particular, these demonstrations included an analysis that was useful for understanding the cause of some issues in December 2009 that were triggered by a larger-than-acceptable discrepancy between the measured excess core reactivity and a calculated value that was based on the legacy computational methods. As the Modeling Update project proceeds, we anticipate further such interim, informal applications in parallel with formal qualification of the system under the applicable INL Quality Assurance procedures and standards.
Directory of Open Access Journals (Sweden)
Michal Fusek
2016-11-01
Precipitation records from six stations of the Czech Hydrometeorological Institute were subjected to statistical analysis with the objectives of updating the intensity–duration–frequency (IDF) curves, by applying extreme value distributions, and of comparing the updated curves against those produced by an empirical procedure in 1958. Another objective was to investigate differences between the two sets of curves, which could be explained by such factors as different measuring instruments, measuring station altitudes and data analysis methods. It has been shown that the differences between the two sets of IDF curves are significantly influenced by the chosen method of data analysis.
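An extreme-value update of this kind can be sketched by fitting a Gumbel distribution to annual intensity maxima and reading off return levels. The distribution choice, the synthetic data, and the return periods below are illustrative assumptions, not the station records used in the study:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# hypothetical annual maxima of 60-min rainfall intensity (mm/h)
annual_max = stats.gumbel_r.rvs(loc=20.0, scale=5.0, size=80,
                                random_state=rng)

# maximum-likelihood fit of the Gumbel (EV type I) distribution
loc, scale = stats.gumbel_r.fit(annual_max)

# return levels: intensity exceeded on average once every T years
for T in (2, 10, 50, 100):
    intensity = stats.gumbel_r.ppf(1 - 1 / T, loc, scale)
    print(f"{T:>4}-year 60-min intensity: {intensity:.1f} mm/h")
```

Repeating the fit per duration and plotting return level against duration for each return period yields one updated IDF curve per T.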
2013-01-01
Liver fibrosis is defined as excessive extracellular matrix deposition and is based on complex interactions between matrix-producing hepatic stellate cells and an abundance of liver-resident and infiltrating cells. Investigation of these processes requires in vitro and in vivo experimental work in animals. However, the use of animals in translational research will be increasingly challenged, at least in countries of the European Union, because of the adoption of new animal welfare rules in 2013. These rules will create an urgent need for optimized standard operating procedures regarding animal experimentation and improved international communication in the liver fibrosis community. This review gives an update on current animal models, techniques and underlying pathomechanisms with the aim of fostering a critical discussion of the limitations and potential of up-to-date animal experimentation. We discuss potential complications in experimental liver fibrosis and provide examples of how the findings of studies in which these models are used can be translated to human disease and therapy. In this review, we want to motivate the international community to design more standardized animal models which might help to address the legally requested replacement, refinement and reduction of animals in fibrosis research. PMID:24274743
Inherently irrational? A computational model of escalation of commitment as Bayesian Updating.
Gilroy, Shawn P; Hantula, Donald A
2016-06-01
Monte Carlo simulations were performed to analyze the degree to which two-, three- and four-step learning histories of losses and gains correlated with escalation and persistence in extended extinction (continuous loss) conditions. Simulated learning histories were randomly generated at varying lengths and compositions and warranted probabilities were determined using Bayesian Updating methods. Bayesian Updating predicted instances where particular learning sequences were more likely to engender escalation and persistence under extinction conditions. All simulations revealed greater rates of escalation and persistence in the presence of heterogeneous (e.g., both wins and losses) lag sequences, with substantially increased rates of escalation when lag sequences composed predominantly of losses were followed by wins. These methods were then applied to human investment choices in earlier experiments. The Bayesian Updating models corresponded with data obtained from these experiments. These findings suggest that Bayesian Updating can be utilized as a model for understanding how and when individual commitment may escalate and persist despite continued failures.
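The warranted-probability computation can be sketched with a conjugate Beta-Bernoulli model, one standard way to perform Bayesian updating over win/loss sequences; the uniform prior and the toy history are assumptions, not necessarily the exact scheme used in the simulations:

```python
def warranted_probability(history, a=1.0, b=1.0):
    """Sequentially update a Beta(a, b) prior over P(gain) with a
    win/loss history (1 = gain, 0 = loss); return the posterior mean
    after each observation."""
    means = []
    for outcome in history:
        a += outcome            # count of gains
        b += 1 - outcome        # count of losses
        means.append(a / (a + b))
    return means

# a loss-dominated history followed by a win keeps the warranted
# probability of future gains well above zero, favouring persistence
probs = warranted_probability([0, 0, 1])
```

Under this scheme a late win after repeated losses pulls the posterior mean back up, which is consistent with the reported escalation after loss-dominated sequences end in a gain.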
Liu, Yang; Li, Yan; Wang, Dejun; Zhang, Shaoyi
2014-01-01
Updating the structural model of complex structures is time-consuming due to the large size of the finite element model (FEM). Using conventional methods for these cases is computationally expensive or even impossible. A two-level method, which combined the Kriging predictor and the component mode synthesis (CMS) technique, was proposed to ensure the successful implementing of FEM updating of large-scale structures. In the first level, the CMS was applied to build a reasonable condensed FEM of complex structures. In the second level, the Kriging predictor that was deemed as a surrogate FEM in structural dynamics was generated based on the condensed FEM. Some key issues of the application of the metamodel (surrogate FEM) to FEM updating were also discussed. Finally, the effectiveness of the proposed method was demonstrated by updating the FEM of a real arch bridge with the measured modal parameters.
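A minimal interpolating Kriging predictor, the kind of surrogate used in the second level, can be written in a few lines of NumPy. The Gaussian kernel, its hyperparameter, and the one-parameter "condensed model" it emulates (first natural frequency as a function of stiffness) are illustrative assumptions:

```python
import numpy as np

def kriging_fit(X, y, theta=10.0, nugget=1e-8):
    """Simple Gaussian-kernel Kriging predictor (interpolating GP).

    X: (n, d) sample points; y: (n,) responses from the expensive model.
    Returns a callable surrogate that predicts at new points.
    """
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-theta * d2)
    Kmat = k(X, X) + nugget * np.eye(len(X))   # nugget for conditioning
    alpha = np.linalg.solve(Kmat, y)
    return lambda Xs: k(np.atleast_2d(Xs), X) @ alpha

# surrogate for the first natural frequency of a condensed 1-DOF model,
# sampled at a few stiffness values (the "expensive" modal solves)
X = np.linspace(0.5, 2.0, 12)[:, None]
y = np.sqrt(X[:, 0])            # frequency = sqrt(k/m) with m = 1
predict = kriging_fit(X, y)
```

Once trained, the surrogate replaces the condensed FEM inside the updating loop, so each objective evaluation costs a dot product instead of a modal solve.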
Modeling Conservative Updates in Multi-Hash Approximate Count Sketches
2012-01-01
Multi-hash-based count sketches are fast and memory efficient probabilistic data structures that are widely used in scalable online traffic monitoring applications. Their accuracy significantly improves with an optimization, called conservative update, which is especially effective when the aim is to discriminate a relatively small number of heavy hitters in a traffic stream consisting of an extremely large number of flows. Despite its widespread application, a thorough u...
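A generic count-min sketch with conservative update can be written compactly; this is a textbook version for illustration, not the specific implementation analyzed in the paper. On insertion, only the counters that would otherwise underestimate the key are raised:

```python
import random

class ConservativeCountMin:
    """Count-min sketch with the conservative-update optimization."""

    def __init__(self, width, depth, seed=0):
        rnd = random.Random(seed)
        self.width = width
        self.salts = [rnd.getrandbits(32) for _ in range(depth)]
        self.tables = [[0] * width for _ in range(depth)]

    def _cells(self, key):
        # one counter per row, selected by a salted hash
        return [(t, hash((salt, key)) % self.width)
                for t, salt in zip(self.tables, self.salts)]

    def add(self, key, count=1):
        cells = self._cells(key)
        est = min(t[i] for t, i in cells)       # current estimate
        for t, i in cells:
            t[i] = max(t[i], est + count)       # raise only low counters

    def query(self, key):
        return min(t[i] for t, i in self._cells(key))

# a heavy hitter among many small flows
cm = ConservativeCountMin(width=1024, depth=4)
cm.add("heavy", 100)
for i in range(50):
    cm.add(f"flow{i}")
```

Like the plain count-min sketch, the conservative variant never underestimates a count; because colliding small flows no longer inflate every row, heavy hitters are discriminated more sharply.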
Power MOS devices: structures and modelling procedures
Energy Technology Data Exchange (ETDEWEB)
Rossel, P.; Charitat, G.; Tranduc, H.; Morancho, F.; Moncoqut
1997-05-01
In this survey, the historical evolution of power MOS transistor structures is presented and currently used devices are described. General considerations on current and voltage capabilities are discussed and configurations of popular structures are given. A synthesis of different modelling approaches proposed over the last three years is then presented, including analytical solutions for basic electrical parameters such as threshold voltage, on-resistance, saturation and quasi-saturation effects, temperature influence and voltage handling capability. The numerical solution of basic semiconductor device equations is then briefly reviewed, along with some typical problems which can be solved this way. A compact circuit modelling method is finally explained, with emphasis on dynamic behavior modelling.
Proposed reporting model update creates dialogue between FASB and not-for-profits.
Mosrie, Norman C
2016-04-01
Seeing a need to refresh the current guidelines, the Financial Accounting Standards Board (FASB) proposed an update to the financial accounting and reporting model for not-for-profit entities. In a response to solicited feedback, the board is now revisiting its proposed update and has set forth a plan to finalize its new guidelines. The FASB continues to solicit and respond to feedback as the process progresses.
Towards cost-sensitive adaptation: when is it worth updating your predictive model?
Zliobaite, Indre; Budka, Marcin; Stahl, Frederic
2015-01-01
Our digital universe is rapidly expanding: more and more daily activities are digitally recorded, data arrives in streams, needs to be analyzed in real time and may evolve over time. In the last decade many adaptive learning algorithms and prediction systems, which can automatically update themselves with the new incoming data, have been developed. The majority of those algorithms focus on improving the predictive performance and assume that model update is always desired as soon as possib...
DEFF Research Database (Denmark)
2013-01-01
When an online runoff model is updated from system measurements, the requirements of the precipitation input change. Using rain gauge data as precipitation input there will be a displacement between the time when the rain hits the gauge and the time where the rain hits the actual catchment, due...... to the time it takes for the rain cell to travel from the rain gauge to the catchment. Since this time displacement is not present for system measurements the data assimilation scheme might already have updated the model to include the impact from the particular rain cell when the rain data is forced upon...... the model, which therefore will end up including the same rain twice in the model run. This paper compares forecast accuracy of updated models when using time displaced rain input to that of rain input with constant biases. This is done using a simple time-area model and historic rain series that are either...
Impact of time displaced precipitation estimates for on-line updated models
DEFF Research Database (Denmark)
Borup, Morten; Grum, Morten; Mikkelsen, Peter Steen
2012-01-01
catchment, due to the time it takes for the rain cell to travel from the rain gauge to the catchment. Since this time displacement is not present for system measurements the data assimilation scheme might already have updated the model to include the impact from the particular rain cell when the rain data......When an online runoff model is updated from system measurements the requirements to the precipitation estimates change. Using rain gauge data as precipitation input there will be a displacement between the time where the rain intensity hits the gauge and the time where the rain hits the actual...... is forced upon the model, which therefore will end up including the same rain twice in the model run. This paper compares forecast accuracy of updated models when using time displaced rain input to that of rain input with constant biases. This is done using a simple timearea model and historic rain series...
Seismic source characterization for the 2014 update of the U.S. National Seismic Hazard Model
Moschetti, Morgan P.; Powers, Peter; Petersen, Mark D.; Boyd, Oliver; Chen, Rui; Field, Edward H.; Frankel, Arthur; Haller, Kathleen; Harmsen, Stephen; Mueller, Charles S.; Wheeler, Russell; Zeng, Yuehua
2015-01-01
We present the updated seismic source characterization (SSC) for the 2014 update of the National Seismic Hazard Model (NSHM) for the conterminous United States. Construction of the seismic source models employs the methodology that was developed for the 1996 NSHM but includes new and updated data, data types, source models, and source parameters that reflect the current state of knowledge of earthquake occurrence and state of practice for seismic hazard analyses. We review the SSC parameterization and describe the methods used to estimate earthquake rates, magnitudes, locations, and geometries for all seismic source models, with an emphasis on new source model components. We highlight the effects that two new model components—incorporation of slip rates from combined geodetic-geologic inversions and the incorporation of adaptively smoothed seismicity models—have on probabilistic ground motions, because these sources span multiple regions of the conterminous United States and provide important additional epistemic uncertainty for the 2014 NSHM.
A procedure for Applying a Maturity Model to Process Improvement
Directory of Open Access Journals (Sweden)
Elizabeth Pérez Mergarejo
2014-09-01
A maturity model is an evolutionary roadmap for implementing the vital practices from one or more domains of organizational process. The use of maturity models is poor in the Latin-American context. This paper presents a procedure for applying the Process and Enterprise Maturity Model developed by Michael Hammer [1]. The procedure is divided into three steps: preparation, evaluation and improvement plan. Hammer's maturity model, together with the proposed procedure, can be used by organizations to improve their processes, involving managers and employees.
Hanson, D.; Waters, T. P.; Thompson, D. J.; Randall, R. B.; Ford, R. A. J.
2007-01-01
Finite element model updating traditionally makes use of both resonance and mode shape information. The mode shape information can also be obtained from anti-resonance frequencies, as has been suggested by a number of researchers in recent years. Anti-resonance frequencies have the advantage over mode shapes that they can be much more accurately identified from measured frequency response functions. Moreover, anti-resonance frequencies can, in principle, be estimated from output-only measurements on operating machinery. The motivation behind this paper is to explore whether the availability of anti-resonances from such output-only techniques would add genuinely new information to the model updating process, which is not already available from using only resonance frequencies. This investigation employs two-degree-of-freedom models of a rigid beam supported on two springs. It includes an assessment of the contribution made to the overall anti-resonance sensitivity by the mode shape components, and also considers model updating through Monte Carlo simulations, experimental verification of the simulation results, and application to a practical mechanical system, in this case a petrol generator set. Analytical expressions are derived for the sensitivity of anti-resonance frequencies to updating parameters such as the ratio of spring stiffnesses, the position of the centre of gravity, and the beam's radius of gyration. These anti-resonance sensitivities are written in terms of natural frequency and mode shape sensitivities so their relative contributions can be assessed. It is found that the contribution made by the mode shape sensitivity varies considerably depending on the value of the parameters, contributing no new information for significant combinations of parameter values. The Monte Carlo simulations compare the performance of the update achieved when using information from: the resonances only; the resonances and either anti-resonance; and the resonances and both anti-resonances.
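The interlacing behaviour that makes anti-resonances informative is easy to reproduce: for a driving-point FRF, the anti-resonances are the resonances of the structure with the driven DOF grounded, and they interlace the system resonances. The 2-DOF chain and its parameter values below are arbitrary illustrative choices (the paper's model is a rigid beam on two springs):

```python
import numpy as np

# two-DOF chain: ground -k1- m1 -k2- m2 (illustrative values)
m1, m2, k1, k2 = 1.0, 0.5, 400.0, 100.0
M = np.diag([m1, m2])
K = np.array([[k1 + k2, -k2], [-k2, k2]])

# resonances: generalized eigenvalues of (K, M)
w_res = np.sqrt(np.linalg.eigvals(np.linalg.solve(M, K)).real)

# anti-resonance of the driving-point FRF at DOF 1: resonance of the
# substructure with DOF 1 grounded (here a single DOF, sqrt(k2/m2))
w_anti = np.sqrt(K[1, 1] / M[1, 1])
```

The anti-resonance falls strictly between the two resonances, which is the interlacing property exploited when anti-resonances are used as updating targets alongside resonance frequencies.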
Yatracos, Yannis G.
2013-01-01
The inherent bias pathology of the maximum likelihood (ML) estimation method is confirmed for models with unknown parameters $\theta$ and $\psi$ when the MLE $\hat\psi$ is a function of the MLE $\hat\theta$. To reduce $\hat\psi$'s bias, the likelihood equation to be solved for $\psi$ is updated using the model for the data $Y$ in it. The model updated (MU) MLE, $\hat\psi_{MU}$, often reduces either totally or partially $\hat\psi$'s bias when estimating the shape parameter $\psi$. For the Pareto model $\hat...
Multi-block and path modelling procedures
DEFF Research Database (Denmark)
Høskuldsson, Agnar
2008-01-01
The author has developed a unified theory of path and multi-block modelling of data. The data blocks are arranged in a directional path. Each data block can lead to one or more data blocks. It is assumed that there is given a collection of input data blocks. Each of them is supposed to describe one...
Inference-based procedural modeling of solids
Biggers, Keith
2011-11-01
As virtual environments become larger and more complex, there is an increasing need for more automated construction algorithms to support the development process. We present an approach for modeling solids by combining prior examples with a simple sketch. Our algorithm uses an inference-based approach to incrementally fit patches together in a consistent fashion to define the boundary of an object. This algorithm samples and extracts surface patches from input models, and develops a Petri net structure that describes the relationship between patches along an imposed parameterization. Then, given a new parameterized line or curve, we use the Petri net to logically fit patches together in a manner consistent with the input model. This allows us to easily construct objects of varying sizes and configurations using arbitrary articulation, repetition, and interchanging of parts. The result of our process is a solid model representation of the constructed object that can be integrated into a simulation-based environment. © 2011 Elsevier Ltd. All rights reserved.
Ambient modal testing of a double-arch dam: the experimental campaign and model updating
García-Palacios, Jaime H.; Soria, José M.; Díaz, Iván M.; Tirado-Andrés, Francisco
2016-09-01
A finite element model updating of a double-curvature arch dam (La Tajera, Spain) is carried out here using the modal parameters obtained from an operational modal analysis. That is, the modal dampings, natural frequencies and mode shapes of the system have been identified using output-only identification techniques under environmental loads (wind, vehicles). A finite element model of the dam-reservoir-foundation system was initially created. A testing campaign was then carried out at the most significant test points using high-sensitivity accelerometers, wirelessly synchronized. Afterwards, the initial model was updated using a Monte Carlo based approach in order to match it to the recorded dynamic behaviour. The updated model may be used within a structural health monitoring system for damage detection or, for instance, for the analysis of the seismic response of the coupled arch dam-reservoir-foundation system.
Institute of Scientific and Technical Information of China (English)
LI Dian-qing; ZHANG Sheng-kun
2004-01-01
The classical probability theory cannot effectively quantify the parameter uncertainty in probability of detection. Furthermore, the conventional data analytic method and expert judgment method fail to handle the problem of model uncertainty updating with the information from nondestructive inspection. To overcome these disadvantages, a Bayesian approach was proposed to quantify the parameter uncertainty in probability of detection. Furthermore, the formulae of the multiplication factors to measure the statistical uncertainties in the probability of detection following the Weibull distribution were derived. A Bayesian updating method was applied to compute the posterior probabilities of model weights and the posterior probability density functions of distribution parameters of probability of detection. A total probability model method was proposed to analyze the problem of multi-layered model uncertainty updating. This method was then applied to the problem of multi-layered corrosion model uncertainty updating for ship structures. The results indicate that the proposed method is very effective in analyzing the problem of multi-layered model uncertainty updating.
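The model-weight update at the heart of this approach is a direct application of Bayes' rule and the total probability theorem; the two candidate probability-of-detection (POD) models and their likelihood values below are illustrative assumptions:

```python
import numpy as np

def update_model_weights(priors, likelihoods):
    """Posterior model weights via Bayes' rule / total probability:
    P(M_i | data) is proportional to P(data | M_i) * P(M_i),
    normalized so the posterior weights sum to one."""
    post = np.asarray(priors, float) * np.asarray(likelihoods, float)
    return post / post.sum()

# two candidate POD models with equal prior weight; a defect was
# detected, with likelihood 0.9 under model 1 and 0.6 under model 2
w = update_model_weights([0.5, 0.5], [0.9, 0.6])
```

The posterior weights can then combine the candidate models' predictions by total probability, which is the mechanism the multi-layered updating builds on.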
SHM-Based Probabilistic Fatigue Life Prediction for Bridges Based on FE Model Updating.
Lee, Young-Joo; Cho, Soojin
2016-03-02
Fatigue life prediction for a bridge should be based on the current condition of the bridge, and various sources of uncertainty, such as material properties, anticipated vehicle loads and environmental conditions, make the prediction very challenging. This paper presents a new approach for probabilistic fatigue life prediction for bridges using finite element (FE) model updating based on structural health monitoring (SHM) data. Recently, various types of SHM systems have been used to monitor and evaluate the long-term structural performance of bridges. For example, SHM data can be used to estimate the degradation of an in-service bridge, which makes it possible to update the initial FE model. The proposed method consists of three steps: (1) identifying the modal properties of a bridge, such as mode shapes and natural frequencies, based on the ambient vibration under passing vehicles; (2) updating the structural parameters of an initial FE model using the identified modal properties; and (3) predicting the probabilistic fatigue life using the updated FE model. The proposed method is demonstrated by application to a numerical model of a bridge, and the impact of FE model updating on the bridge fatigue life is discussed.
Barton, E.; Middleton, C.; Koo, K.; Crocker, L.; Brownjohn, J.
2011-07-01
This paper presents the results from collaboration between the National Physical Laboratory (NPL) and the University of Sheffield on an ongoing research project at NPL. A 50 year old reinforced concrete footbridge has been converted to a full scale structural health monitoring (SHM) demonstrator. The structure is monitored using a variety of techniques; however, interrelating results and converting data to knowledge are not possible without a reliable numerical model. During the first stage of the project, the work concentrated on static loading and an FE model of the undamaged bridge was created and updated under specified static loading and temperature conditions. This model was found to accurately represent the response under static loading and it was used to identify locations for sensor installation. The next stage involves the evaluation of repair/strengthening patches under both static and dynamic loading. Therefore, before deliberately introducing significant damage, the first set of dynamic tests was conducted and modal properties were estimated. The measured modal properties did not match the modal analysis from the statically updated FE model; it was clear that the existing model required updating. This paper introduces the results of the dynamic testing and model updating. It is shown that the structure exhibits large non-linear, amplitude-dependent characteristics. This creates a difficult updating process, but we attempt to produce the best linear representation of the structure. A sensitivity analysis is performed to determine the most sensitive locations for planned damage/repair scenarios and is used to decide whether additional sensors will be necessary.
Dynamic finite element model updating of prestressed concrete continuous box-girder bridge
Institute of Scientific and Technical Information of China (English)
Lin Xiankun; Zhang Lingmi; Guo Qintao; Zhang Yufeng
2009-01-01
The dynamic finite element model (FEM) of a prestressed concrete continuous box-girder bridge, called the Tongyang Canal Bridge, is built and updated based on the results of ambient vibration testing (AVT) using a real-coded accelerating genetic algorithm (RAGA). The objective functions are defined based on natural frequency and modal assurance criterion (MAC) metrics to evaluate the updated FEM. Two objective functions are defined to fully account for the relative errors and standard deviations of the natural frequencies and MAC between the AVT results and the updated FEM predictions. The dynamically updated FEM of the bridge can better represent its structural dynamics and serve as a baseline in long-term health monitoring, condition assessment and damage identification over the service life of the bridge.
Andreasen, D. T.; Sousa, S. G.; Tsantaki, M.; Teixeira, G. D. C.; Mortier, A.; Santos, N. C.; Suárez-Andrés, L.; Delgado-Mena, E.; Ferreira, A. C. S.
2017-04-01
Context. Given the importance of the star-planet relation to our understanding of the planet formation process, the precise determination of stellar parameters for the ever increasing number of discovered extrasolar planets is of great relevance. Furthermore, precise stellar parameters are needed to fully characterize the planet properties. It is thus important to continue the efforts to determine, in the most uniform way possible, the parameters for stars with planets as new discoveries are announced. Aims: In this paper we present new precise atmospheric parameters for a sample of 50 stars with planets. The results are presented in the catalogue SWEET-Cat. Methods: Stellar atmospheric parameters and masses for the 50 stars were derived assuming local thermodynamic equilibrium and using high-resolution and high signal-to-noise spectra. The methodology used is based on the measurement of equivalent widths with ARES2 for a list of iron lines. The line abundances were derived using MOOG. We then used curve-of-growth analysis to determine the parameters. We implemented a new minimization procedure which significantly improves the computational time. Results: The stellar parameters for the 50 stars are presented and compared with previously determined literature values. For SWEET-Cat, we compile values for the effective temperature, surface gravity, metallicity, and stellar mass for almost all the planet host stars listed in the Extrasolar Planets Encyclopaedia. These data will be updated on a continuous basis. The data can be used for statistical studies of the star-planet correlation, and for the derivation of consistent properties for known planets. Based on observations collected at the La Silla Observatory, ESO (Chile), with FEROS/2.2 m (run 2014B/020), with UVES/VLT at the Cerro Paranal Observatory (runs ID 092.C-0695, 093.C-0219, 094.C-0367, 095.C-0324, and 096.C-0092), and with FIES/NOT at Roque de los Muchachos (Spain; runs ID 14AF14 and 53
Bakir, Pelin Gundes; Reynders, Edwin; De Roeck, Guido
2007-08-01
The use of changes in dynamic system characteristics to detect damage has received considerable attention in recent years. Within this context, the FE model updating technique, which belongs to the class of inverse problems in classical mechanics, is used to detect, locate and quantify damage. In this study, a sensitivity-based finite element (FE) model updating scheme using a trust region algorithm is developed and implemented on a complex structure. A damage scenario is applied to the structure in which the stiffness values of the beam elements close to the beam-column joints are decreased by stiffness reduction factors. A worst-case, complex damage pattern is assumed such that the stiffnesses of adjacent elements are decreased by substantially different stiffness reduction factors. The objective of the model updating is to minimize the eigenfrequency and eigenmode residuals. The updating parameters of the structure are the stiffness reduction factors. The changes in these parameters are determined iteratively by solving a nonlinear constrained optimization problem. The FE model updating algorithm is also tested in the presence of two levels of noise in simulated measurements. In all three cases, the updated MAC values are above 99% and the relative eigenfrequency differences improve substantially after model updating. In the cases without noise and with moderate noise, detection, localization and quantification of damage are successfully accomplished. In the case with substantially noisy measurements, detection and localization of damage are successfully realized. Damage quantification is also promising in the presence of high noise, as the algorithm can still predict 18 out of 24 damage parameters relatively accurately in that case.
Design Transformations for Rule-based Procedural Modeling
Lienhard, Stefan
2017-05-24
We introduce design transformations for rule-based procedural models, e.g., for buildings and plants. Given two or more procedural designs, each specified by a grammar, a design transformation combines elements of the existing designs to generate new designs. We introduce two technical components to enable design transformations. First, we extend the concept of discrete rule switching to rule merging, leading to a very large shape space for combining procedural models. Second, we propose an algorithm to jointly derive two or more grammars, called grammar co-derivation. We demonstrate two applications of our work: we show that our framework leads to a larger variety of models than previous work, and we show fine-grained transformation sequences between two procedural models.
Lazy Updating of hubs can enable more realistic models by speeding up stochastic simulations
Energy Technology Data Exchange (ETDEWEB)
Ehlert, Kurt; Loewe, Laurence, E-mail: loewe@wisc.edu [Laboratory of Genetics, University of Wisconsin-Madison, Madison, Wisconsin 53706 (United States); Wisconsin Institute for Discovery, University of Wisconsin-Madison, Madison, Wisconsin 53715 (United States)
2014-11-28
To respect the nature of discrete parts in a system, stochastic simulation algorithms (SSAs) must update for each action (i) all part counts and (ii) each action's probability of occurring next and its timing. This makes it expensive to simulate biological networks with well-connected “hubs” such as ATP that affect many actions. Temperature and volume also affect many actions and may be changed significantly in small steps by the network itself during fever and cell growth, respectively. Such trends matter for evolutionary questions, as cell volume determines doubling times and fever may affect survival, both key traits for biological evolution. Yet simulations often ignore such trends and assume constant environments to avoid many costly probability updates. Such computational convenience precludes analyses of important aspects of evolution. Here we present “Lazy Updating,” an add-on for SSAs designed to reduce the cost of simulating hubs. When a hub changes, Lazy Updating postpones all probability updates for reactions depending on this hub, until a threshold is crossed. Speedup is substantial if most computing time is spent on such updates. We implemented Lazy Updating for the Sorting Direct Method and it is easily integrated into other SSAs such as Gillespie's Direct Method or the Next Reaction Method. Testing on several toy models and a cellular metabolism model showed >10× faster simulations for its use-cases—with a small loss of accuracy. Thus we see Lazy Updating as a valuable tool for some special but important simulation problems that are difficult to address efficiently otherwise.
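The Lazy Updating idea can be sketched with a minimal direct-method SSA in which one propensity depends on a hub species and is refreshed only after the hub drifts past a relative threshold. The network, rate constants and threshold below are invented for illustration; the paper applies this inside the Sorting Direct Method:

```python
import random

random.seed(1)

def ssa_lazy(t_end=5.0, thresh=0.05):
    H, X = 10000, 0                    # hub count and a dependent species
    c1, c2, c3 = 0.01, 0.002, 0.1      # rate constants (illustrative)
    H_snap = H                         # hub value the lazy propensity uses
    t = 0.0
    while t < t_end:
        a1 = c1 * H                    # H -> (hub consumption), exact
        a2 = c2 * H_snap               # lazy: H-dependent production of X
        a3 = c3 * X                    # X degradation
        a0 = a1 + a2 + a3
        if a0 == 0.0:
            break
        t += random.expovariate(a0)    # time to next reaction
        r = random.uniform(0.0, a0)    # pick which reaction fires
        if r < a1:
            H -= 1
        elif r < a1 + a2:
            X += 1
        else:
            X -= 1
        # Lazy Updating: refresh hub-dependent propensities only when the
        # hub has drifted past the threshold since the last refresh.
        if abs(H - H_snap) > thresh * H_snap:
            H_snap = H
    return H, X

H_end, X_end = ssa_lazy()
```

With many hub-dependent reactions the postponed refreshes are where the speedup comes from; here a single lazy propensity just shows the mechanism and the (small, threshold-controlled) accuracy loss.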
The hourly updated US High-Resolution Rapid Refresh (HRRR) storm-scale forecast model
Alexander, Curtis; Dowell, David; Benjamin, Stan; Weygandt, Stephen; Olson, Joseph; Kenyon, Jaymes; Grell, Georg; Smirnova, Tanya; Ladwig, Terra; Brown, John; James, Eric; Hu, Ming
2016-04-01
The 3-km convection-allowing High-Resolution Rapid Refresh (HRRR) is a US NOAA hourly updating weather forecast model that uses a specially configured version of the Advanced Research WRF (ARW) model and assimilates many novel and most conventional observation types on an hourly basis using Gridpoint Statistical Interpolation (GSI). Included in this assimilation is a procedure for initializing ongoing precipitation systems from observed radar reflectivity data (and proxy reflectivity from lightning and satellite data), a cloud analysis to initialize stable layer clouds from METAR and satellite observations, and special techniques to enhance retention of surface observation information. The HRRR is run hourly out to 15 forecast hours over a domain covering the entire conterminous United States using initial and boundary conditions from the hourly-cycled 13-km Rapid Refresh (RAP, using similar physics and data assimilation) covering North America and a significant part of the Northern Hemisphere. The HRRR is continually developed and refined at NOAA's Earth System Research Laboratory, and an initial version was implemented into the operational NOAA/NCEP production suite in September 2014. Ongoing experimental RAP and HRRR model development throughout 2014 and 2015 has culminated in a set of data assimilation and model enhancements that will be incorporated into the first simultaneous upgrade of both the operational RAP and HRRR that is scheduled for spring 2016 at NCEP. This presentation will discuss the operational RAP and HRRR changes contained in this upgrade. The RAP domain is being expanded to encompass the NAM domain and the forecast lengths of both the RAP and HRRR are being extended. RAP and HRRR assimilation enhancements have focused on (1) extending surface data assimilation to include mesonet observations and improved use of all surface observations through better background estimates of 2-m temperature and dewpoint including projection of 2-m temperature
Updated Peach Bottom Model for MELCOR 1.8.6: Description and Comparisons
Energy Technology Data Exchange (ETDEWEB)
Robb, Kevin R [ORNL
2014-09-01
A MELCOR 1.8.5 model of the Peach Bottom Unit 2 or 3 has been updated for MELCOR 1.8.6. Primarily, this update involved modification of the lower head modeling. Three additional updates were also performed. First, a finer nodalization of the containment wet well was employed. Second, the pressure differential used by the logic controlling the safety relief valve actuation was modified. Finally, an additional stochastic failure mechanism for the safety relief valves was added. Simulation results from models with and without the modifications were compared. All the analysis was performed by comparing key figures of merit from simulations of a long-term station blackout scenario. This report describes the model changes and the results of the comparisons.
Enhancement of ELDA Tracker Based on CNN Features and Adaptive Model Update.
Gao, Changxin; Shi, Huizhang; Yu, Jin-Gang; Sang, Nong
2016-04-15
Appearance representation and the observation model are the most important components in designing a robust visual tracking algorithm for video-based sensors. Additionally, the exemplar-based linear discriminant analysis (ELDA) model has shown good performance in object tracking. Based on that, we improve the ELDA tracking algorithm by deep convolutional neural network (CNN) features and adaptive model update. Deep CNN features have been successfully used in various computer vision tasks. Extracting CNN features on all of the candidate windows is time consuming. To address this problem, a two-step CNN feature extraction method is proposed by separately computing convolutional layers and fully-connected layers. Due to the strong discriminative ability of CNN features and the exemplar-based model, we update both object and background models to improve their adaptivity and to deal with the tradeoff between discriminative ability and adaptivity. An object updating method is proposed to select the "good" models (detectors), which are quite discriminative and uncorrelated to other selected models. Meanwhile, we build the background model as a Gaussian mixture model (GMM) to adapt to complex scenes, which is initialized offline and updated online. The proposed tracker is evaluated on a benchmark dataset of 50 video sequences with various challenges. It achieves the best overall performance among the compared state-of-the-art trackers, which demonstrates the effectiveness and robustness of our tracking algorithm.
Enhancement of ELDA Tracker Based on CNN Features and Adaptive Model Update
Directory of Open Access Journals (Sweden)
Changxin Gao
2016-04-01
Full Text Available Appearance representation and the observation model are the most important components in designing a robust visual tracking algorithm for video-based sensors. Additionally, the exemplar-based linear discriminant analysis (ELDA) model has shown good performance in object tracking. Based on that, we improve the ELDA tracking algorithm by deep convolutional neural network (CNN) features and adaptive model update. Deep CNN features have been successfully used in various computer vision tasks. Extracting CNN features on all of the candidate windows is time consuming. To address this problem, a two-step CNN feature extraction method is proposed by separately computing convolutional layers and fully-connected layers. Due to the strong discriminative ability of CNN features and the exemplar-based model, we update both object and background models to improve their adaptivity and to deal with the tradeoff between discriminative ability and adaptivity. An object updating method is proposed to select the “good” models (detectors), which are quite discriminative and uncorrelated to other selected models. Meanwhile, we build the background model as a Gaussian mixture model (GMM) to adapt to complex scenes, which is initialized offline and updated online. The proposed tracker is evaluated on a benchmark dataset of 50 video sequences with various challenges. It achieves the best overall performance among the compared state-of-the-art trackers, which demonstrates the effectiveness and robustness of our tracking algorithm.
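The online GMM background update described in the two records above can be sketched for a single 1-D pixel value. The constants (learning rate, 3-sigma match test, weakest-component replacement rule) are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

class OnlineGMM:
    """Toy 1-D per-pixel background model: K Gaussians whose weights,
    means and variances are updated online with a learning rate, in the
    spirit of the GMM background model described above."""

    def __init__(self, K=3, lr=0.05, var0=225.0):
        self.w = np.ones(K) / K               # component weights
        self.mu = np.linspace(0.0, 255.0, K)  # component means
        self.var = np.full(K, var0)           # component variances
        self.lr, self.var0 = lr, var0

    def update(self, x):
        d2 = (x - self.mu) ** 2
        match = d2 < 9.0 * self.var           # within 3 sigma of a mode
        self.w = (1.0 - self.lr) * self.w + self.lr * match
        if match.any():                       # adapt the closest match
            k = int(np.argmin(np.where(match, d2, np.inf)))
            self.mu[k] += self.lr * (x - self.mu[k])
            self.var[k] += self.lr * (d2[k] - self.var[k])
        else:                                 # replace weakest component
            k = int(np.argmin(self.w))
            self.mu[k], self.var[k], self.w[k] = x, self.var0, self.lr
        self.w /= self.w.sum()                # renormalize weights
```

Feeding a stream of pixel values drags the matched component's mean toward the dominant background value while its weight grows, which is the "initialized offline, updated online" behavior the abstract describes.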
Function-weighted frequency response function sensitivity method for analytical model updating
Lin, R. M.
2017-09-01
Since the frequency response function (FRF) sensitivity method was first proposed [26], it has become one of the most powerful and practical methods for analytical model updating. Nevertheless, the original formulation of the FRF sensitivity method suffers the limitation that the initial analytical model to be updated should be reasonably close to the final updated model to be sought, due to the first-order approximation implicit in most sensitivity-based methods. Convergence to the correct model is not guaranteed when large modelling errors exist, and blind application often leads to optimal solutions which are not those truly sought. This paper examines the important numerical characteristics of the original FRF sensitivity method, including frequency data selection, numerical balance and convergence performance. To further improve the applicability of the method to cases of large modelling errors, a novel function-weighted sensitivity method is developed. The new method shows much superior convergence performance, even in the presence of large modelling errors. Extensive numerical case studies based on a mass-spring system and a GARTEUR structure have been conducted and very encouraging results have been achieved. The effect of measurement noise has been examined, and the method works reasonably well in the presence of measurement uncertainties. The new method removes the restriction that the modelling error magnitude be of second order in Euclidean norm compared with that of the system matrices, thereby making it a truly general method applicable to most practical model updating problems.
Comparisons of Estimation Procedures for Nonlinear Multilevel Models
Directory of Open Access Journals (Sweden)
Ali Reza Fotouhi
2003-05-01
Full Text Available We introduce General Multilevel Models and discuss the estimation procedures that may be used to fit multilevel models. We apply the proposed procedures to three-level binary data generated in a simulation study. We compare the procedures on two criteria, bias and efficiency. We find that the estimates of the fixed effects and variance components are substantially and significantly biased using Longford's approximation and Goldstein's generalized least squares approaches as implemented in the software packages VARCL and ML3. These estimates are not significantly biased and are very close to the real values when we use Markov chain Monte Carlo (MCMC) using Gibbs sampling or the nonparametric maximum likelihood (NPML) approach. The Gaussian quadrature (GQ) approach, even with a small number of mass points, results in consistent estimates but is computationally problematic. We conclude that the MCMC and the NPML approaches are the recommended procedures for fitting multilevel models.
MODELING THE EFFECTS OF UPDATING THE INFLUENZA VACCINE ON THE EFFICACY OF REPEATED VACCINATION.
Energy Technology Data Exchange (ETDEWEB)
D. SMITH; A. LAPEDES; ET AL
2000-11-01
The accumulated wisdom is to update the vaccine strain to the expected epidemic strain only when there is at least a 4-fold difference [measured by the hemagglutination inhibition (HI) assay] between the current vaccine strain and the expected epidemic strain. In this study we investigate the effect, on repeat vaccinees, of updating the vaccine when there is a less than 4-fold difference. Methods: Using a computer model of the immune response to repeated vaccination, we simulated updating the vaccine on a 2-fold difference and compared this to not updating the vaccine, in each case predicting the vaccine efficacy in first-time and repeat vaccinees for a variety of possible epidemic strains. Results: Updating the vaccine strain on a 2-fold difference resulted in increased vaccine efficacy in repeat vaccinees compared to leaving the vaccine unchanged. Conclusions: These results suggest that updating the vaccine strain on a 2-fold difference between the existing vaccine strain and the expected epidemic strain will increase vaccine efficacy in repeat vaccinees compared to leaving the vaccine unchanged.
Active control and parameter updating techniques for nonlinear thermal network models
Papalexandris, M. V.; Milman, M. H.
The present article reports on active control and parameter updating techniques for thermal models based on the network approach. Emphasis is placed on applications where radiation plays a dominant role. Examples of such applications are the thermal design and modeling of spacecraft and space-based science instruments. Active thermal control of a system aims to approximate a desired temperature distribution or to minimize a suitably defined temperature-dependent functional. Similarly, parameter updating aims to update the values of certain parameters of the thermal model so that the output approximates a distribution obtained through direct measurements. Both problems are formulated as nonlinear least-squares optimization problems. The proposed strategies for their solution are explained in detail and their efficiency is demonstrated through numerical tests. Finally, certain theoretical results pertaining to the characterization of solutions of the problems of interest are also presented.
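The nonlinear least-squares formulation can be illustrated on a one-node network whose heat balance combines a linear conduction term and a T^4 radiation term. The conductance and emissivity-area values below are invented; real applications involve many nodes and transient measurements:

```python
import numpy as np
from scipy.optimize import least_squares

SIGMA = 5.670e-8                       # Stefan-Boltzmann constant
T_ENV = 290.0                          # assumed sink temperature (K)

def power_balance(params, T):
    """Heat leaving a single node: linear conduction to the sink plus
    grey-body radiation. params = (conductance G, emissivity*area)."""
    G, epsA = params
    return G * (T - T_ENV) + epsA * SIGMA * (T**4 - T_ENV**4)

# Synthetic "measurements": node temperatures at several heater powers
true = np.array([0.8, 0.02])           # G in W/K, eps*A in m^2
T_meas = np.array([300.0, 320.0, 340.0, 360.0])
Q_meas = power_balance(true, T_meas)

# Parameter updating: fit G and eps*A so the model matches measurements
fit = least_squares(lambda p: power_balance(p, T_meas) - Q_meas,
                    x0=[0.5, 0.05], bounds=([0.0, 0.0], [10.0, 1.0]))
```

Because conduction grows linearly and radiation quartically with temperature, the two parameters are separately identifiable from a handful of operating points.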
Adaptive update using visual models for lifting-based motion-compensated temporal filtering
Li, Song; Xiong, H. K.; Wu, Feng; Chen, Hong
2005-03-01
Motion compensated temporal filtering is a useful framework for fully scalable video compression schemes. However, when the assumed motion models cannot represent the real motion perfectly, both the temporal high- and low-frequency sub-bands may contain artificial edges, which can lead to decreased coding efficiency, and ghost artifacts appear in the reconstructed video sequence at lower bit rates or in the case of temporal scaling. We propose a new technique that is based on utilizing visual models to mitigate ghosting artifacts in the temporal low-frequency sub-bands. Specifically, we propose content-adaptive update schemes where visual models are used to determine image-dependent upper bounds on the information to be updated. Experimental results show that the proposed algorithm can significantly improve the subjective visual quality of the low-pass temporal frames and, at the same time, coding performance can match or exceed that of the classical update steps.
Dynamic test and finite element model updating of bridge structures based on ambient vibration
Institute of Scientific and Technical Information of China (English)
2008-01-01
The dynamic characteristics of bridge structures are the basis of structural dynamic response and seismic analysis, and are also an important target of health condition monitoring. In this paper, a three-dimensional finite-element model is first established for a highway bridge over a railroad on No. 312 National Highway. Based on design drawings, the dynamic characteristics of the bridge are studied using finite element analysis and ambient vibration measurements. Thus, a set of data is selected based on sensitivity analysis and optimization theory, and the finite element model of the bridge is updated. The numerical and experimental results show that the updating method is simple and effective, that the updated finite element model can better reflect the dynamic characteristics of the bridge, and that it can be used to predict the dynamic response under complex external forces. It is also helpful for further damage identification and health condition monitoring.
A hierarchical updating method for finite element model of airbag buffer system under landing impact
Institute of Scientific and Technical Information of China (English)
He Huan; Chen Zhe; He Cheng; Ni Lei; Chen Guoping
2015-01-01
In this paper, we propose an impact finite element (FE) model for an airbag landing buffer system. First, an impact FE model has been formulated for a typical airbag landing buffer system. We use the independence of the structure FE model from the full impact FE model to develop a hierarchical updating scheme for the recovery module FE model and the airbag system FE model. Second, we define impact responses at key points to compare the computational and experimental results to resolve the inconsistency between the experimental data sampling frequency and experimental triggering. To determine the typical characteristics of the impact dynamics response of the airbag landing buffer system, we present the impact response confidence factors (IRCFs) to evaluate how consistent the computational and experiment results are. An error function is defined between the experimental and computational results at key points of the impact response (KPIR) to serve as a modified objective function. A radial basis function (RBF) is introduced to construct updating variables for a surrogate model for updating the objective function, thereby converting the FE model updating problem to a soluble optimization problem. Finally, the developed method has been validated using an experimental and computational study on the impact dynamics of a classic airbag landing buffer system.
A hierarchical updating method for finite element model of airbag buffer system under landing impact
Directory of Open Access Journals (Sweden)
He Huan
2015-12-01
Full Text Available In this paper, we propose an impact finite element (FE) model for an airbag landing buffer system. First, an impact FE model has been formulated for a typical airbag landing buffer system. We use the independence of the structure FE model from the full impact FE model to develop a hierarchical updating scheme for the recovery module FE model and the airbag system FE model. Second, we define impact responses at key points to compare the computational and experimental results to resolve the inconsistency between the experimental data sampling frequency and experimental triggering. To determine the typical characteristics of the impact dynamics response of the airbag landing buffer system, we present the impact response confidence factors (IRCFs) to evaluate how consistent the computational and experimental results are. An error function is defined between the experimental and computational results at key points of the impact response (KPIR) to serve as a modified objective function. A radial basis function (RBF) is introduced to construct updating variables for a surrogate model for updating the objective function, thereby converting the FE model updating problem to a soluble optimization problem. Finally, the developed method has been validated using an experimental and computational study on the impact dynamics of a classic airbag landing buffer system.
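The surrogate step, replacing the expensive impact simulation with an RBF fit before optimizing, can be sketched as follows. The quadratic stand-in objective, sample count and kernel choice are illustrative assumptions:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator
from scipy.optimize import minimize

rng = np.random.default_rng(0)

def expensive_objective(x):
    """Stand-in for the error between measured and computed impact
    responses; the real objective would cost one FE impact run per call."""
    return (x[0] - 1.2) ** 2 + (x[1] + 0.5) ** 2

# 1) sample the updating variables and evaluate the true objective
X = rng.uniform(-2.0, 2.0, size=(40, 2))
y = np.array([expensive_objective(x) for x in X])

# 2) fit a radial basis function surrogate to those samples
surrogate = RBFInterpolator(X, y, kernel='thin_plate_spline')

# 3) optimize the cheap surrogate instead of the FE model
res = minimize(lambda x: surrogate(x[None, :])[0], x0=np.zeros(2))
```

The design choice is the usual surrogate trade-off: 40 true-model evaluations up front buy an objective that the optimizer can then query thousands of times for free.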
Energy Technology Data Exchange (ETDEWEB)
Tencate, Alister J. [Department of Chemistry, Idaho State University, Pocatello, ID 83209 (United States); Kalivas, John H., E-mail: kalijohn@isu.edu [Department of Chemistry, Idaho State University, Pocatello, ID 83209 (United States); White, Alexander J. [Department of Physics and Optical Engineering, Rose-Hulman Institute of Technology, Terre Haute, IN 47803 (United States)
2016-05-19
New multivariate calibration methods and other processes are being developed that require selection of multiple tuning parameter (penalty) values to form the final model. With one or more tuning parameters, using only one measure of model quality to select final tuning parameter values is not sufficient. Optimization of several model quality measures is challenging. Thus, three fusion ranking methods are investigated for simultaneous assessment of multiple measures of model quality for selecting tuning parameter values. One is a supervised learning fusion rule named sum of ranking differences (SRD). The other two are non-supervised learning processes based on the sum and median operations. The effect of the number of models evaluated on the three fusion rules is also evaluated using three procedures. One procedure uses all models from all possible combinations of the tuning parameters. To reduce the number of models evaluated, an iterative process (only applicable to SRD) is applied and thresholding a model quality measure before applying the fusion rules is also used. A near infrared pharmaceutical data set requiring model updating is used to evaluate the three fusion rules. In this case, calibration of the primary conditions is for the active pharmaceutical ingredient (API) of tablets produced in a laboratory. The secondary conditions for calibration updating are for tablets produced in the full batch setting. Two model updating processes requiring selection of two unique tuning parameter values are studied. One is based on Tikhonov regularization (TR) and the other is a variation of partial least squares (PLS). The three fusion methods are shown to provide equivalent and acceptable results allowing automatic selection of the tuning parameter values. Best tuning parameter values are selected when model quality measures used with the fusion rules are for the small secondary sample set used to form the updated models. In this model updating situation, evaluation of
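The two non-supervised fusion rules, summing or taking the median of per-measure ranks, can be sketched as follows. The score table is invented for illustration; the SRD rule itself ranks against a reference column and is not shown:

```python
import numpy as np

def fuse_ranks(scores, rule="sum"):
    """Rank candidate models on each quality measure (lower score =
    better) and fuse the per-measure rank columns.
    scores: (n_models, n_measures) array; returns (best index, fused)."""
    # double argsort turns each column of scores into ranks, 0 = best
    ranks = np.argsort(np.argsort(scores, axis=0), axis=0)
    fused = ranks.sum(axis=1) if rule == "sum" else np.median(ranks, axis=1)
    return int(np.argmin(fused)), fused

# Toy example: 4 tuning-parameter combinations scored on 3 quality
# measures (e.g. an RMSE, a bias measure, a model-vector norm),
# all lower-is-better.
scores = np.array([[0.9, 0.2, 3.0],
                   [0.3, 0.3, 1.0],
                   [0.5, 0.9, 2.0],
                   [0.7, 0.1, 4.0]])
best, fused = fuse_ranks(scores, rule="sum")
```

Rank fusion sidesteps the scale problem: the three measures live on incomparable units, but their ranks are directly summable, which is what makes automatic tuning-parameter selection possible.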
Finite element modelling and updating of friction stir welding (FSW) joint for vibration analysis
Directory of Open Access Journals (Sweden)
Zahari Siti Norazila
2017-01-01
Full Text Available Friction stir welding of aluminium alloys is widely used in automotive and aerospace applications due to its advanced and lightweight properties. The behaviour of FSW joints plays a significant role in the dynamic characteristics of the structure and, due to its complexities and uncertainties, the representation of an accurate finite element model of these joints has become a research issue. In this paper, various finite element (FE) modelling techniques for prediction of the dynamic properties of sheet metal joined by friction stir welding are presented. Firstly, nine sets of flat plates in different series of aluminium alloy, AA7075 and AA6061, joined by FSW are used. The nine sets of specimens were fabricated using various welding parameters. In order to find the optimum set of FSW plates, a finite element model using the equivalence technique was developed and validated using experimental modal analysis (EMA) on the nine sets of specimens and finite element analysis (FEA). Three types of modelling were employed in this study: rigid body element Type 2 (RBE2), bar element (CBAR) and spot weld element connector (CWELD). The CBAR element was chosen to represent the weld model for FSW joints due to its accurate prediction of mode shapes, and because it contains an updating parameter for weld modelling, compared to the other weld models. Model updating was performed to improve the correlation between EMA and FEA; before proceeding to updating, sensitivity analysis was done to select the most sensitive updating parameters. After performing model updating, the total error of the natural frequencies for the CBAR model is improved significantly. Therefore, the CBAR element was selected as the most reliable element in FE modelling to represent the FSW weld joint.
Tencate, Alister J; Kalivas, John H; White, Alexander J
2016-05-19
New multivariate calibration methods and other processes are being developed that require selection of multiple tuning parameter (penalty) values to form the final model. With one or more tuning parameters, using only one measure of model quality to select final tuning parameter values is not sufficient. Optimization of several model quality measures is challenging. Thus, three fusion ranking methods are investigated for simultaneous assessment of multiple measures of model quality for selecting tuning parameter values. One is a supervised learning fusion rule named sum of ranking differences (SRD). The other two are non-supervised learning processes based on the sum and median operations. The effect of the number of models evaluated on the three fusion rules is also evaluated using three procedures. One procedure uses all models from all possible combinations of the tuning parameters. To reduce the number of models evaluated, an iterative process (only applicable to SRD) is applied and thresholding a model quality measure before applying the fusion rules is also used. A near infrared pharmaceutical data set requiring model updating is used to evaluate the three fusion rules. In this case, calibration of the primary conditions is for the active pharmaceutical ingredient (API) of tablets produced in a laboratory. The secondary conditions for calibration updating are for tablets produced in the full batch setting. Two model updating processes requiring selection of two unique tuning parameter values are studied. One is based on Tikhonov regularization (TR) and the other is a variation of partial least squares (PLS). The three fusion methods are shown to provide equivalent and acceptable results allowing automatic selection of the tuning parameter values. Best tuning parameter values are selected when model quality measures used with the fusion rules are for the small secondary sample set used to form the updated models. In this model updating situation, evaluation of
A new Gibbs sampling based algorithm for Bayesian model updating with incomplete complex modal data
Cheung, Sai Hung; Bansal, Sahil
2017-08-01
Model updating using measured system dynamic response has a wide range of applications in system response evaluation and control, health monitoring, or reliability and risk assessment. In this paper, we are interested in model updating of a linear dynamic system with non-classical damping based on incomplete modal data including modal frequencies, damping ratios and partial complex mode shapes of some of the dominant modes. In the proposed algorithm, the identification model is based on a linear structural model where the mass and stiffness matrix are represented as a linear sum of contribution of the corresponding mass and stiffness matrices from the individual prescribed substructures, and the damping matrix is represented as a sum of individual substructures in the case of viscous damping, in terms of mass and stiffness matrices in the case of Rayleigh damping or a combination of the former. To quantify the uncertainties and plausibility of the model parameters, a Bayesian approach is developed. A new Gibbs-sampling based algorithm is proposed that allows for an efficient update of the probability distribution of the model parameters. In addition to the model parameters, the probability distribution of complete mode shapes is also updated. Convergence issues and numerical issues arising in the case of high-dimensionality of the problem are addressed and solutions to tackle these problems are proposed. The effectiveness and efficiency of the proposed method are illustrated by numerical examples with complex modes.
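The core Gibbs idea, drawing each block of unknowns from its full conditional given the current values of the others, can be illustrated on a bivariate normal target. The target and correlation are illustrative; the paper's conditionals are over structural model parameters and complete complex mode shapes:

```python
import numpy as np

rng = np.random.default_rng(42)

def gibbs_bivariate_normal(rho, n_samples=20000, burn=1000):
    """Gibbs sampler for a standard bivariate normal with correlation
    rho: alternately draw each variable from its conditional given the
    other, the same block-wise scheme used for parameters/mode shapes."""
    x = y = 0.0
    out = np.empty((n_samples, 2))
    s = np.sqrt(1.0 - rho**2)          # conditional standard deviation
    for i in range(n_samples):
        x = rng.normal(rho * y, s)     # draw x | y
        y = rng.normal(rho * x, s)     # draw y | x
        out[i] = x, y
    return out[burn:]                  # discard burn-in

samples = gibbs_bivariate_normal(rho=0.8)
```

Strong correlation between blocks slows mixing, which is one reason convergence issues arise in high dimensions; reparameterization or larger blocks are common remedies.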
Generic Graph Grammar: A Simple Grammar for Generic Procedural Modelling
DEFF Research Database (Denmark)
Christiansen, Asger Nyman; Bærentzen, Jakob Andreas
2012-01-01
in a directed cyclic graph. Furthermore, the basic productions are chosen such that Generic Graph Grammar seamlessly combines the capabilities of L-systems to imitate biological growth (to model trees, animals, etc.) and those of split grammars to design structured objects (chairs, houses, etc.). This results......Methods for procedural modelling tend to be designed either for organic objects, which are described well by skeletal structures, or for man-made objects, which are described well by surface primitives. Procedural methods, which allow for modelling of both kinds of objects, are few and usually...
Clustering of Parameter Sensitivities: Examples from a Helicopter Airframe Model Updating Exercise
Shahverdi, H.; C. Mares; W. Wang; J. E. Mottershead
2009-01-01
The need for high fidelity models in the aerospace industry has become ever more important as increasingly stringent requirements on noise and vibration levels, reliability, maintenance costs etc. come into effect. In this paper, the results of a finite element model updating exercise on a Westland Lynx XZ649 helicopter are presented. For large and complex structures, such as a helicopter airframe, the finite element model represents the main tool for obtaining accurate models which could pre...
A New Probability of Detection Model for Updating Crack Distribution of Offshore Structures
Institute of Scientific and Technical Information of China (English)
李典庆; 张圣坤; 唐文勇
2003-01-01
There is model uncertainty in the probability of detection when inspecting ship structures with nondestructive inspection techniques. Based on a comparison of several existing probability of detection (POD) models, a new probability of detection model is proposed for the updating of crack size distribution. Furthermore, the theoretical derivation shows that most existing probability of detection models are special cases of the new probability of detection model. The least squares method is adopted for determining the values of the parameters in the new POD model. This new model is also compared with other existing probability of detection models. The results indicate that the new probability of detection model can fit the inspection data better. This new probability of detection model is then applied to the analysis of crack size updating for offshore structures. The Bayesian updating method is used to analyze the effect of probability of detection models on the posterior distribution of crack size. The results show that different probability of detection models generate different posterior distributions of crack size for offshore structures.
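The Bayesian updating step can be sketched for a single inspection outcome: given a POD curve and a prior crack-size distribution, a no-detection result reweights the prior by 1 - POD(a). The POD curve and prior below are illustrative, not the paper's fitted model:

```python
import numpy as np

def pod(a, a50=1.0, beta=4.0):
    """Illustrative log-logistic POD curve: probability of detecting a
    crack of size a (mm); a50 is the size detected half the time."""
    return 1.0 / (1.0 + (a50 / np.maximum(a, 1e-12)) ** beta)

# Discretized exponential prior on crack size
a = np.linspace(0.01, 5.0, 500)
prior = np.exp(-a / 1.0)
prior /= prior.sum()

# Bayes update after an inspection that found NO crack:
# P(a | no detection) ∝ P(no detection | a) P(a) = (1 - POD(a)) P(a)
posterior = (1.0 - pod(a)) * prior
posterior /= posterior.sum()

mean_prior = float((a * prior).sum())
mean_post = float((a * posterior).sum())  # shifted toward smaller cracks
```

Because 1 - POD(a) decreases with crack size, a clean inspection shifts probability mass toward small cracks, and different POD curves produce measurably different posteriors, which is exactly the sensitivity the abstract reports.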
Wang, Zuo-Cai; Xin, Yu; Ren, Wei-Xin
2016-08-01
This paper proposes a new nonlinear joint model updating method for shear-type structures based on the instantaneous characteristics of the decomposed structural dynamic responses. To obtain an accurate representation of a nonlinear system's dynamics, the nonlinear joint is described as a nonlinear spring element with bilinear stiffness. The instantaneous frequencies and amplitudes of the decomposed mono-components are first extracted by the analytical mode decomposition (AMD) method. Then, an objective function based on the residuals of the instantaneous frequencies and amplitudes between the experimental structure and the nonlinear model is created for the nonlinear joint model updating. The optimal values of the nonlinear joint model parameters are obtained by minimizing the objective function using the simulated annealing global optimization method. To validate the effectiveness of the proposed method, a single-story shear-type structure subjected to earthquake and harmonic excitations is simulated as a numerical example. A beam structure with multiple local nonlinear elements subjected to earthquake excitation is then also simulated. The nonlinear beam structure is updated based on the global and local models using the proposed method. The results show that the proposed local nonlinear model updating method is more effective for structures with multiple local nonlinear elements. Finally, the proposed method is verified by a shake table test of a real high-voltage switch structure. The accuracy of the proposed method is quantified in both the numerical and experimental applications using defined error indices. Both the numerical and experimental results show that the proposed method can effectively update the nonlinear joint model.
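The core of the updating step, minimizing frequency residuals by simulated annealing, can be sketched as below. The bilinear-joint model is replaced here by a hypothetical two-parameter frequency map, and the targets, step sizes, and cooling schedule are all illustrative, not the paper's values.

```python
import math
import random

random.seed(0)

# "Measured" instantaneous frequencies of the decomposed mono-components
# (synthetic targets; in the paper these come from AMD of test data).
f_meas = [2.0, 3.5]

# Toy model: two joint stiffness parameters k1, k2 mapped to two
# frequencies (an illustrative relation, not the bilinear joint model).
def model_freqs(k1, k2):
    return [math.sqrt(k1), math.sqrt((k1 + k2) / 2.0)]

def objective(p):
    fm = model_freqs(*p)
    return sum((a - b) ** 2 for a, b in zip(fm, f_meas))

# Simulated annealing: random walk with temperature-controlled acceptance
# of worsening moves, cooling geometrically.
p = [1.0, 1.0]
best, best_cost = p[:], objective(p)
T = 1.0
for step in range(5000):
    cand = [max(0.1, x + random.gauss(0, 0.2)) for x in p]
    dc = objective(cand) - objective(p)
    if dc < 0 or random.random() < math.exp(-dc / T):
        p = cand
    if objective(p) < best_cost:
        best, best_cost = p[:], objective(p)
    T *= 0.999

print(best, best_cost)  # the exact minimizer here is k1 = 4, k2 = 20.5
```

The high initial temperature lets the search escape local minima in the residual surface; as T decays, the walk settles into the best basin found.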
Improvement of procedures for evaluating photochemical models. Final report
Energy Technology Data Exchange (ETDEWEB)
Tesche, T.W.; Lurmann, F.R.; Roth, P.M.; Georgopoulos, P.; Seinfeld, J.H.
1990-08-01
The study establishes a set of procedures that should be used by all groups evaluating the performance of a photochemical model application. A set of ten numerical measures is recommended for evaluating a photochemical model's accuracy in predicting ozone concentrations. Nine graphical methods and six investigative simulations are also recommended to give additional insight into model performance. Standards are presented that each modeling study should try to meet. To complement the operational model evaluation procedures, several diagnostic procedures are suggested. The sensitivity of the model to uncertainties in hydrocarbon emission rates and speciation, and in other parameters, should be assessed. Uncertainty bounds of key input variables and parameters can be propagated through the model to provide estimated uncertainties in the ozone predictions. Comparisons between measurements and predictions of species other than ozone will help ensure that the model is predicting the right ozone for the right reasons. Plotting concentration residuals (differences) against a variety of variables may give insight into the reasons for poor model performance. Mass flux and balance calculations can identify the relative importance of emissions and transport. The study also identifies testing a model's response to emission changes as the most important research need. Another important area is testing the emissions inventory.
Using radar altimetry to update a routing model of the Zambezi River Basin
DEFF Research Database (Denmark)
Michailovsky, Claire Irene B.; Bauer-Gottwein, Peter
2012-01-01
Satellite radar altimetry allows for the global monitoring of lake and river levels. However, the widespread use of altimetry for hydrological studies is limited by the coarse temporal and spatial resolution provided by current altimetric missions and by the fact that discharge rather than level is needed for hydrological applications. To overcome these limitations, altimetry river levels can be combined with hydrological modeling in a data-assimilation framework. This study focuses on the updating of a river routing model of the Zambezi using river levels from radar altimetry. A hydrological model of the basin was built to simulate the land phase of the water cycle and produce inflows to a Muskingum routing model. River altimetry from the ENVISAT mission was then used to update the storages in the reaches of the Muskingum model using the Extended Kalman Filter. The method showed improvements in modeled...
Landing Procedure in Model Ditching Tests of Bf 109
Sottorf, W.
1949-01-01
The purpose of the model tests is to clarify the motions of a land plane alighting on water. After a discussion of the model laws, the test method and test procedure are described. The deceleration-time diagrams of the landing of a model of the Bf 109 show a high deceleration peak of greater than 20g, which can be lowered to 4 to 6g by a radiator cowling and brake skid.
A pose-based structural dynamic model updating method for serial modular robots
Mohamed, Richard Phillip; Xi, Fengfeng (Jeff); Chen, Tianyan
2017-02-01
A new approach is presented for updating the structural dynamic component models of serial modular robots using experimental data from component tests such that the updated model of the entire robot assembly can provide accurate results in any pose. To accomplish this, a test-analysis component mode synthesis (CMS) model with fixed-free component boundaries is implemented to directly compare measured frequency response functions (FRFs) from vibration experiments of individual modules. The experimental boundary conditions are made to emulate module connection interfaces and can enable individual joint and link modules to be tested in arbitrary poses. By doing so, changes in the joint dynamics can be observed and more FRF data points can be obtained from experiments to be used in the updating process. Because this process yields an overdetermined system of equations, a direct search method with nonlinear constraints on the resonances and antiresonances is used to update the FRFs of the analytical component models. The effectiveness of the method is demonstrated with experimental case studies on an adjustable modular linkage system. Overall, the method can enable virtual testing of modular robot systems without the need to perform further testing on entire assemblies.
Progressive collapse analysis using updated models for alternate path analysis after a blast
Eskew, Edward; Jang, Shinae; Bertolaccini, Kelly
2016-04-01
Progressive collapse is of rising importance within the structural engineering community due to several recent cases. The alternate path method is a design technique to determine the ability of a structure to sustain the loss of a critical element, or elements, and still resist progressive collapse. However, the alternate path method only considers the removal of the critical elements. In the event of a blast, significant damage may occur to nearby members not included in the alternate path design scenarios. To achieve an accurate assessment of the current condition of the structure after a blast or other extreme event, it may be necessary to reduce the strength or remove additional elements beyond the critical members designated in the alternate path design method. In this paper, a rapid model updating technique utilizing vibration measurements is used to update the structural model to represent the real-time condition of the structure after a blast occurs. Based upon the updated model, damaged elements will either have their strength reduced, or will be removed from the simulation. The alternate path analysis will then be performed, but only utilizing the updated structural model instead of numerous scenarios. After the analysis, the simulated response from the analysis will be compared to failure conditions to determine the buildings post-event condition. This method has the ability to incorporate damage to noncritical members into the analysis. This paper will utilize numerical simulations based upon a unified facilities criteria (UFC) example structure subjected to an equivalent blast to validate the methodology.
Updating sea spray aerosol emissions in the Community Multiscale Air Quality (CMAQ) model
Sea spray aerosols (SSA) impact the particle mass concentration and gas-particle partitioning in coastal environments, with implications for human and ecosystem health. In this study, the Community Multiscale Air Quality (CMAQ) model is updated to enhance fine mode SSA emissions,...
Spatial coincidence modelling, automated database updating and data consistency in vector GIS.
Kufoniyi, O.
1995-01-01
This thesis presents formal approaches for automated database updating and consistency control in vector- structured spatial databases. To serve as a framework, a conceptual data model is formalized for the representation of geo-data from multiple map layers in which a map layer denotes a set of ter
Towards an integrated workflow for structural reservoir model updating and history matching
Leeuwenburgh, O.; Peters, E.; Wilschut, F.
2011-01-01
A history matching workflow, as typically used for updating of petrophysical reservoir model properties, is modified to include structural parameters including the top reservoir and several fault properties: position, slope, throw and transmissibility. A simple 2D synthetic oil reservoir produced by
Model and Variable Selection Procedures for Semiparametric Time Series Regression
Directory of Open Access Journals (Sweden)
Risa Kato
2009-01-01
Semiparametric regression models are very useful for time series analysis. They facilitate the detection of features resulting from external interventions. The complexity of semiparametric models poses new challenges for issues of nonparametric and parametric inference and model selection that frequently arise from time series data analysis. In this paper, we propose penalized least squares estimators which can simultaneously select significant variables and estimate unknown parameters. An innovative class of variable selection procedures is proposed to select significant variables and basis functions in the semiparametric model. The asymptotic normality of the resulting estimators is established. Information criteria for model selection are also proposed. We illustrate the effectiveness of the proposed procedures with numerical simulations.
The HSG procedure for modelling integrated urban wastewater systems.
Muschalla, D; Schütze, M; Schroeder, K; Bach, M; Blumensaat, F; Gruber, G; Klepiszewski, K; Pabst, M; Pressl, A; Schindler, N; Solvi, A-M; Wiese, J
2009-01-01
Whilst the importance of integrated modelling of urban wastewater systems is ever increasing, there is still no concise procedure regarding how to carry out such modelling studies. After briefly discussing some earlier approaches, the guideline for integrated modelling developed by the Central European Simulation Research Group (HSG - Hochschulgruppe) is presented. This contribution suggests a six-step standardised procedure to integrated modelling. This commences with an analysis of the system and definition of objectives and criteria, covers selection of modelling approaches, analysis of data availability, calibration and validation and also includes the steps of scenario analysis and reporting. Recent research findings as well as experience gained from several application projects from Central Europe have been integrated in this guideline.
A model updating method for hybrid composite/aluminum bolted joints using modal test data
Adel, Farhad; Shokrollahi, Saeed; Jamal-Omidi, Majid; Ahmadian, Hamid
2017-05-01
The aim of this paper is to present a simple and applicable model for predicting the dynamic behavior of bolted joints in hybrid aluminum/composite structures, and to update this model using modal test data. In this regard, building on earlier investigations of bolted joints in metallic structures that led to the joint affected region (JAR) concept published in Shokrollahi and Adel (2016), a doubly connective layer is established to simulate the bolted joint interfaces in hybrid structures. Using the proposed model, the natural frequencies of the hybrid bolted joint structure are computed and compared to the modal test results in order to evaluate and verify the new model's predictions. Because of differences between the results of the two approaches, the finite element (FE) model is updated using a genetic algorithm (GA) that minimizes the differences between the analytical model and the test results. This is done by identifying the parameters at the JAR, including the isotropic Young's modulus in the metallic substructure and that of the anisotropic composite substructure. The updated model simulates the experimental results more accurately than the initial model. Therefore, the proposed model can be used for modal analysis of hybrid joint interfaces in complex and large structures.
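A GA-based FE updating loop of this kind can be sketched on a toy problem. The JAR parameters are stood in for by two spring stiffnesses of a 2-DOF system, and the minimal genetic algorithm below (tournament selection, blend crossover, Gaussian mutation) is a generic sketch, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Natural frequencies of a 2-DOF spring-mass "test" structure with unit
# masses; the target k values are illustrative stand-ins for measured data.
def nat_freqs(k1, k2):
    K = np.array([[k1 + k2, -k2], [-k2, k2]], float)
    return np.sqrt(np.sort(np.linalg.eigvalsh(K)))

target = nat_freqs(100.0, 50.0)

def fitness(ind):
    return -np.sum((nat_freqs(*ind) - target) ** 2)

# Minimal genetic algorithm: tournament selection, blend crossover,
# Gaussian mutation, fixed population size.
pop = rng.uniform(10.0, 200.0, size=(40, 2))
for gen in range(100):
    new = []
    for _ in range(len(pop)):
        i, j = rng.integers(0, len(pop), 2)
        a = pop[i] if fitness(pop[i]) > fitness(pop[j]) else pop[j]
        i, j = rng.integers(0, len(pop), 2)
        b = pop[i] if fitness(pop[i]) > fitness(pop[j]) else pop[j]
        w = rng.random()
        child = w * a + (1 - w) * b + rng.normal(0, 1.0, 2)  # crossover + mutation
        new.append(np.clip(child, 1.0, 300.0))
    pop = np.array(new)

best = max(pop, key=fitness)
print(best)  # should approach k1 = 100, k2 = 50
```

The fitness is the negative squared frequency residual, mirroring the paper's objective of minimizing the gap between analytical and measured natural frequencies.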
Updating Known Distribution Models for Forecasting Climate Change Impact on Endangered Species
Muñoz, Antonio-Román; Márquez, Ana Luz; Real, Raimundo
2013-01-01
To plan endangered species conservation and to design adequate management programmes, it is necessary to predict their distributional response to climate change, especially under the current situation of rapid change. However, these predictions are customarily made by relating the distribution of the species de novo to climatic conditions, with no regard for previously available knowledge about the factors affecting the species distribution. We propose to take advantage of known species distribution models, updating them with the variables yielded by climatic models before projecting them into the future. To exemplify our proposal, the availability of suitable habitat across Spain for the endangered Bonelli's Eagle (Aquila fasciata) was modelled by updating a pre-existing model, based on current climate and topography, with a combination of different general circulation models and Special Report on Emissions Scenarios. Our results suggest that the main threat to this endangered species is not climate change, since all forecasting models show that its distribution will be maintained and will increase in mainland Spain throughout the 21st century. We stress the importance of linking conservation biology with distribution modelling by updating existing models, frequently available for endangered species, that consider all the known factors conditioning the species' distribution, instead of building new models based on climate change variables only. PMID:23840330
Astroza, Rodrigo; Ebrahimian, Hamed; Conte, Joel P.
2015-03-01
This paper describes a novel framework that combines advanced mechanics-based nonlinear (hysteretic) finite element (FE) models and stochastic filtering techniques to estimate unknown time-invariant parameters of the nonlinear inelastic material models used in the FE model. Using input-output data recorded during earthquake events, the proposed framework updates the nonlinear FE model of the structure. The updated FE model can be directly used for damage identification and, further, for damage prognosis. To update the unknown time-invariant parameters of the FE model, two alternative stochastic filtering methods are used: the extended Kalman filter (EKF) and the unscented Kalman filter (UKF). A three-dimensional, 5-story, 2-by-1 bay reinforced concrete (RC) frame is used to verify the proposed framework. The RC frame is modeled using fiber-section displacement-based beam-column elements with distributed plasticity and is subjected to the ground motion recorded at the Sylmar station during the 1994 Northridge earthquake. The results indicate that the proposed framework accurately estimates the unknown material parameters of the nonlinear FE model. The UKF outperforms the EKF when the relative root-mean-square errors of the recorded responses are compared. In addition, the results suggest that the convergence of the modeling parameter estimates is smoother and faster when the UKF is utilized.
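Joint state-parameter estimation with a Kalman-type filter can be illustrated on a 1-DOF system. The sketch below uses an extended Kalman filter (the simpler of the paper's two filters) on an augmented state that includes the unknown stiffness; the oscillator, noise levels, and tuning matrices are all hypothetical, not the paper's RC-frame model.

```python
import numpy as np

rng = np.random.default_rng(0)
dt, m, c, k_true = 0.01, 1.0, 0.4, 5.0

# Simulate "recorded" noisy displacements of a damped 1-DOF oscillator
# in free vibration (a stand-in for measured seismic response data).
x, v = 1.0, 0.0
meas = []
for _ in range(3000):
    x, v = x + dt * v, v + dt * (-(k_true / m) * x - c * v)
    meas.append(x + rng.normal(0, 0.01))

# EKF on the augmented state z = [x, v, k]: the unknown stiffness k is
# estimated jointly with the response from the displacement measurements.
z = np.array([1.0, 0.0, 3.0])           # deliberately wrong initial k
P = np.diag([0.01, 0.01, 4.0])
Q = np.diag([1e-8, 1e-8, 1e-6])
R = 1e-4
H = np.array([[1.0, 0.0, 0.0]])
for y in meas:
    xk, vk, kk = z
    z = np.array([xk + dt * vk, vk + dt * (-(kk / m) * xk - c * vk), kk])
    F = np.array([[1.0, dt, 0.0],                       # Jacobian of the
                  [-dt * kk / m, 1.0 - dt * c, -dt * xk / m],  # state map
                  [0.0, 0.0, 1.0]])
    P = F @ P @ F.T + Q
    S = H @ P @ H.T + R
    K = P @ H.T / S
    z = z + (K * (y - z[0])).ravel()
    P = (np.eye(3) - K @ H) @ P

print(z[2])  # estimated stiffness, should approach k_true = 5.0
```

A UKF would replace the Jacobian F with sigma-point propagation, which is what gives it an edge for strongly nonlinear (hysteretic) models.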
Updating parameters of the chicken processing line model
DEFF Research Database (Denmark)
Kurowicka, Dorota; Nauta, Maarten; Jozwiak, Katarzyna
2010-01-01
A mathematical model of chicken processing that quantitatively describes the transmission of Campylobacter on chicken carcasses from slaughter to chicken meat product has been developed in Nauta et al. (2005). This model was quantified with expert judgment. Recent availability of data allows...... of the chicken processing line model....
Assessing Model Selection Uncertainty Using a Bootstrap Approach: An Update
Lubke, Gitta H.; Campbell, Ian; McArtor, Dan; Miller, Patrick; Luningham, Justin; van den Berg, Stéphanie Martine
2017-01-01
Model comparisons in the behavioral sciences often aim at selecting the model that best describes the structure in the population. Model selection is usually based on fit indexes such as Akaike’s information criterion (AIC) or Bayesian information criterion (BIC), and inference is done based on the
Updated Results for the Wake Vortex Inverse Model
Robins, Robert E.; Lai, David Y.; Delisi, Donald P.; Mellman, George R.
2008-01-01
NorthWest Research Associates (NWRA) has developed an Inverse Model for inverting aircraft wake vortex data. The objective of the inverse modeling is to obtain estimates of the vortex circulation decay and crosswind vertical profiles, using time history measurements of the lateral and vertical position of aircraft vortices. The Inverse Model performs iterative forward model runs using estimates of vortex parameters, vertical crosswind profiles, and vortex circulation as a function of wake age. Iterations are performed until a user-defined criterion is satisfied. Outputs from an Inverse Model run are the best estimates of the time history of the vortex circulation derived from the observed data, the vertical crosswind profile, and several vortex parameters. The forward model, named SHRAPA, used in this inverse modeling is a modified version of the Shear-APA model, and it is described in Section 2 of this document. Details of the Inverse Model are presented in Section 3. The Inverse Model was applied to lidar-observed vortex data at three airports: FAA acquired data from San Francisco International Airport (SFO) and Denver International Airport (DEN), and NASA acquired data from Memphis International Airport (MEM). The results are compared with observed data. This Inverse Model validation is documented in Section 4. A summary is given in Section 5. A user's guide for the inverse wake vortex model is presented in a separate NorthWest Research Associates technical report (Lai and Delisi, 2007a).
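The "iterative forward model runs until a user-defined criterion is satisfied" idea can be sketched with a toy forward model. The linear drift/descent model, the parameter names (u for crosswind, gamma for circulation), and the shrinking-step search below are illustrative stand-ins for SHRAPA and the actual inversion scheme.

```python
import numpy as np

# Synthetic "observed" vortex track: lateral drift at crosswind u, descent
# at rate w = Gamma / (2*pi*b0). All values are illustrative stand-ins.
t = np.linspace(0.0, 60.0, 31)
u_true, gamma_true, b0 = 2.0, 400.0, 30.0
y_obs = u_true * t
z_obs = 500.0 - gamma_true / (2 * np.pi * b0) * t

def forward(u, gamma):
    return u * t, 500.0 - gamma / (2 * np.pi * b0) * t

def misfit(u, gamma):
    y, z = forward(u, gamma)
    return np.sum((y - y_obs) ** 2) + np.sum((z - z_obs) ** 2)

# Iterative inversion: perturb each parameter, keep changes that reduce
# the misfit, shrink the steps, and stop when the criterion is met.
u, gamma = 1.0, 200.0
step = {"u": 0.5, "gamma": 50.0}
for it in range(200):
    if misfit(u, gamma) < 1e-6:
        break
    for du in (step["u"], -step["u"]):
        if misfit(u + du, gamma) < misfit(u, gamma):
            u += du
    for dg in (step["gamma"], -step["gamma"]):
        if misfit(u, gamma + dg) < misfit(u, gamma):
            gamma += dg
    step = {k: v * 0.9 for k, v in step.items()}

print(u, gamma)  # should approach u_true = 2.0, gamma_true = 400.0
```

Each pass is a cheap forward run compared against the lidar-style track, which is the essence of the inverse-model iteration described above.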
DEFF Research Database (Denmark)
Luczak, Marcin; Manzato, Simone; Peeters, Bart;
2014-01-01
The aim is to validate the finite element model of the modified wind turbine blade section mounted in the flexible support structure against the experimental results. Bend-twist coupling was implemented by adding angled unidirectional layers on the suction and pressure side of the blade. Dynamic tests and simulations were performed on a section of a full-scale wind turbine blade provided by Vestas Wind Systems A/S. The numerical results are compared to the experimental measurements and the discrepancies are assessed by natural frequency difference and the modal assurance criterion. Based on sensitivity analysis, a set of model parameters was selected for the model updating process. Design of experiments and the response surface method were implemented to find values of the model parameters yielding results closest to the experimental ones. The updated finite element model produces results more consistent with the measurements.
Procedural Skills Education – Colonoscopy as a Model
Directory of Open Access Journals (Sweden)
Maitreyi Raman
2008-01-01
Traditionally, surgical and procedural apprenticeship has been an assumed activity of students, without a formal educational context. With increasing barriers to patient and operating room access, such as shorter work-week hours for residents and operating room and endoscopy time at a premium, alternate strategies for maximizing procedural skill development are being considered. Recently, the traditional surgical apprenticeship model has been challenged, with greater emphasis on the need for surgical and procedural skills training to be more transparent and for alternatives to patient-based training to be considered. Colonoscopy performance is a complex psychomotor skill requiring practitioners to integrate multiple sensory inputs, and it involves higher cortical centres for optimal performance. Colonoscopy skills involve mastery in the cognitive, technical and process domains. In the present review, we propose a model for teaching colonoscopy to the novice trainee based on educational theory.
van Wessem, J.M.; Reijmer, C.H.; Lenaerts, J.T.M.; van de Berg, W.J.; van den Broeke, M.R.; van Meijgaard, E.
2014-01-01
In this study the effects of changes in the physics package of the regional atmospheric climate model RACMO2 on the modelled surface energy balance, near-surface temperature and wind speed of Antarctica are presented. The physics package update primarily consists of an improved turbulent and radiative...
Vector operations for modelling data-conversion procedures
Energy Technology Data Exchange (ETDEWEB)
Rivkin, M.N.
1992-03-01
This article presents a set of vector operations that permit effective modelling of operations from extended relational algebra for implementations of variable-construction procedures in data-conversion processors. Vector operations are classified, and test results are given for the ARIUS UP and other popular database management systems for PCs. 10 refs., 5 figs.
Energy Technology Data Exchange (ETDEWEB)
Yoo, Jun Soo [Idaho National Lab. (INL), Idaho Falls, ID (United States); Choi, Yong Joon [Idaho National Lab. (INL), Idaho Falls, ID (United States); Smith, Curtis Lee [Idaho National Lab. (INL), Idaho Falls, ID (United States)
2016-09-01
This document addresses two subjects involved with the RELAP-7 Software Verification and Validation Plan (SVVP): (i) the principles and plan to assure the independence of RELAP-7 assessment through the code development process, and (ii) the work performed to establish the RELAP-7 assessment plan, i.e., the assessment strategy, literature review, and identification of RELAP-7 requirements. The Requirements Traceability Matrices (RTMs) proposed in the previous document (INL-EXT-15-36684) are then updated. These RTMs provide an efficient way to evaluate the RELAP-7 development status as well as the maturity of RELAP-7 assessment through the development process.
Model update and variability assessment for automotive crash simulations
Sun, J.; He, J.; Vlahopoulos, N.; Ast, P. van
2007-01-01
In order to develop confidence in numerical models which are used for automotive crash simulations, results are often compared with test data, and in some cases the numerical models are adjusted in order to improve the correlation. Comparisons between the time history of acceleration responses from
Varying facets of a model of competitive learning: the role of updates and memory
Bhat, Ajaz Ahmad
2011-01-01
The effects of memory and different updating paradigms in a game-theoretic model of competitive learning, comprising two distinct agent types, are analysed. For nearly all the updating schemes, the phase diagram of the model consists of a disordered phase separating two ordered phases at coexistence: the critical exponents of these transitions belong to the generalised universality class of the voter model. Also, as appropriate for a model of competing strategies, we examine the situation when the two types have different characteristics, i.e. their parameters are chosen to be away from coexistence. We find linear response behaviour in the expected regimes but, more interestingly, are able to probe the effect of memory. This suggests that even the less successful agent types can win over the more successful ones, provided they have better retentive powers.
An update on land-ice modeling in the CESM
Energy Technology Data Exchange (ETDEWEB)
Lipscomb, William H [Los Alamos National Laboratory
2011-01-18
Mass loss from land ice, including the Greenland and Antarctic ice sheets as well as smaller glaciers and ice caps, is making a large and growing contribution to global sea-level rise. Land ice is only beginning to be incorporated in climate models. The goal of the Land Ice Working Group (LIWG) is to develop improved land-ice models and incorporate them in CESM, in order to provide useful, physically based sea-level predictions. LIWG efforts to date have led to the inclusion of a dynamic ice-sheet model (the Glimmer Community Ice Sheet Model, or Glimmer-CISM) in the Community Earth System Model (CESM), which was released in June 2010. CESM also includes a new surface-mass-balance scheme for ice sheets in the Community Land Model. Initial modeling efforts are focused on the Greenland ice sheet, and preliminary results are promising. In particular, the simulated surface mass balance for Greenland is in good agreement with observations and regional model results. The current model, however, has significant limitations: the land-ice coupling is one-way, we are using a serial version of Glimmer-CISM with the shallow-ice approximation, and there is no ice-ocean coupling. During the next year we plan to implement two-way coupling (including ice-ocean coupling with a dynamic Antarctic ice sheet) with a parallel, higher-order version of Glimmer-CISM. We will also add parameterizations of smaller glaciers and ice caps. With these model improvements, CESM will be able to simulate all the major contributors to 21st century global sea-level rise. Results of the first round of simulations should be available in time to be included in the Fifth Assessment Report (AR5) of the Intergovernmental Panel on Climate Change.
Directory of Open Access Journals (Sweden)
Hong-jun BAO
2011-03-01
A real-time channel flood forecast model was developed to simulate channel flow in plain rivers based on dynamic wave theory. To account for channel shape differences along the channel, a roughness updating technique was developed using the Kalman filter method to update Manning's roughness coefficient at each time step of the calculation process. Channel shapes were simplified as rectangles, triangles, and parabolas, and relationships between hydraulic radius and water depth were developed for plain rivers. Based on the relationship between the Froude number and the inertia terms of the momentum equation in the Saint-Venant equations, the relationship between Manning's roughness coefficient and water depth was obtained. Using the channel of the Huaihe River from the Wangjiaba to Lutaizi stations as a case study to test the performance and rationality of the present flood routing model, the developed model was compared with the original hydraulic model. The results show that the stage hydrographs calculated by the developed flood routing model with the updated Manning's roughness coefficient agree well with the observed stage hydrographs, and the model performs better than the original hydraulic model.
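The per-time-step roughness update can be sketched as a scalar extended Kalman filter. The fixed-geometry Manning relation, the drifting "true" roughness, and all noise levels below are hypothetical; the paper updates n inside a full Saint-Venant routing model rather than this toy rating.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy Manning relation for a channel with fixed area, hydraulic radius and
# slope (illustrative values): Q = (1/n) * A * R**(2/3) * sqrt(S).
A, R_h, S = 200.0, 2.5, 2e-4
def discharge(n):
    return (1.0 / n) * A * R_h ** (2.0 / 3.0) * np.sqrt(S)

# The "true" roughness drifts during the flood; discharge observations
# are corrupted by measurement noise.
n_true = 0.030 + 0.005 * np.sin(np.linspace(0, 3, 120))
q_obs = discharge(n_true) + rng.normal(0, 5.0, n_true.size)

# Scalar extended Kalman filter: random-walk model for n, nonlinear
# measurement linearized at the current estimate (dQ/dn = -Q/n).
n_est, P, Q_proc, R_meas = 0.025, 1e-4, 1e-7, 25.0
track = []
for y in q_obs:
    P += Q_proc                      # time update (random walk in n)
    H = -discharge(n_est) / n_est    # measurement Jacobian
    K = P * H / (H * P * H + R_meas)
    n_est += K * (y - discharge(n_est))
    P *= (1.0 - K * H)
    track.append(n_est)

print(track[-1], n_true[-1])  # the updated n follows the drifting truth
```

Re-estimating n every step is what lets the routing model absorb stage-dependent roughness changes instead of treating n as a fixed calibration constant.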
Markov chain decision model for urinary incontinence procedures.
Kumar, Sameer; Ghildayal, Nidhi; Ghildayal, Neha
2017-03-13
Purpose: Urinary incontinence (UI) is a common chronic health condition, particularly among elderly women, that negatively impacts quality of life. However, UI is usually viewed as a likely result of old age, and as such is generally not evaluated or managed appropriately. Many treatments are available to manage incontinence, such as bladder training, and numerous surgical procedures, such as the Burch colposuspension and the Sling, which have high success rates. The purpose of this paper is to analyze which of these popular surgical procedures for UI is more effective. Design/methodology/approach: This research employs randomized, prospective studies to obtain robust cost and utility data used in a Markov chain decision model for examining which of these surgical interventions is more effective in treating women with stress UI, based on two measures: number of quality-adjusted life years (QALY) and cost per QALY. TreeAge Pro Healthcare software was employed for the Markov decision analysis. Findings: Results showed the Sling procedure is a more effective surgical intervention than the Burch. However, if a utility greater than the value at which both procedures are equally effective is assigned to persistent incontinence, the Burch procedure is more effective than the Sling procedure. Originality/value: This paper demonstrates the efficacy of a Markov chain decision modeling approach to the comparative effectiveness analysis of available treatments for patients with UI, an important public health issue that is widely prevalent among elderly women in developed and developing countries. This research also improves upon other analyses using a Markov chain decision modeling process to analyze various strategies for treating UI.
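A Markov chain decision model of this kind can be sketched as a simple cohort simulation. The health states, transition probabilities, costs, and utilities below are illustrative stand-ins, not the study's data; the comparison merely shows how QALYs and costs accumulate under two surgical strategies.

```python
import numpy as np

# Three health states: 0 = continent, 1 = persistent UI, 2 = dead.
# All transition probabilities, costs, and utilities are ILLUSTRATIVE
# stand-ins, not the values used in the cited study.
def run_markov(p_success, surgery_cost, cycles=20, u_incont=0.7):
    P = np.array([[0.97, 0.02, 0.01],    # continent -> ...
                  [0.05, 0.94, 0.01],    # incontinent -> ...
                  [0.00, 0.00, 1.00]])   # dead is absorbing
    state = np.array([p_success, 1.0 - p_success, 0.0])  # post-surgery
    utilities = np.array([1.0, u_incont, 0.0])
    annual_cost = np.array([100.0, 800.0, 0.0])          # follow-up costs
    qaly, cost = 0.0, surgery_cost
    for _ in range(cycles):
        qaly += state @ utilities    # utility accrued this cycle
        cost += state @ annual_cost
        state = state @ P            # advance the cohort one cycle
    return qaly, cost

sling = run_markov(p_success=0.85, surgery_cost=12000.0)
burch = run_markov(p_success=0.80, surgery_cost=13000.0)
print(sling, burch)  # (QALYs, cost) per strategy
```

Under these hypothetical inputs the Sling strategy yields more QALYs at lower cost; assigning a higher utility to persistent incontinence narrows the QALY gap, which is the sensitivity the paper highlights.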
Interval model updating using perturbation method and Radial Basis Function neural networks
Deng, Zhongmin; Guo, Zhaopu; Zhang, Xinjie
2017-02-01
In recent years, stochastic model updating techniques have been applied to the quantification of uncertainties inherently existing in real-world engineering structures. However, in engineering practice, probability density functions of structural parameters are often unavailable due to insufficient information about the structural system. In this circumstance, interval analysis shows a significant advantage in handling uncertain problems, since only the upper and lower bounds of inputs and outputs need to be defined. To this end, a new method for interval identification of structural parameters is proposed using the first-order perturbation method and Radial Basis Function (RBF) neural networks. In the perturbation method, each uncertain variable is expressed as a perturbation around the midpoint of its interval, and these perturbation terms are used in a two-step deterministic updating procedure. Interval model updating equations are then developed on the basis of the perturbation technique. The two-step method is used for updating the mean values of the structural parameters and subsequently estimating the interval radii. Experimental and numerical case studies are given to illustrate and verify the proposed method in the interval identification of structural parameters.
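The first-order interval propagation underlying such methods can be sketched as follows; the toy frequency function and interval bounds are assumptions for illustration:

```python
import numpy as np

def interval_propagate(f, x_mid, x_rad, eps=1e-6):
    """First-order perturbation of an interval through f: returns (mid, rad).

    x_mid : midpoints of the input intervals
    x_rad : interval radii (half-widths)
    The output radius is sum_i |df/dx_i| * r_i, valid for small radii.
    """
    x_mid = np.asarray(x_mid, float)
    y_mid = f(x_mid)
    grad = np.array([(f(x_mid + eps * np.eye(len(x_mid))[i]) - y_mid) / eps
                     for i in range(len(x_mid))])   # forward-difference gradient
    y_rad = np.abs(grad) @ np.asarray(x_rad, float)
    return y_mid, y_rad

# Toy "natural frequency" model: f = sqrt(k/m) for stiffness k and mass m.
f = lambda x: np.sqrt(x[0] / x[1])
mid, rad = interval_propagate(f, x_mid=[100.0, 1.0], x_rad=[5.0, 0.02])
print(round(mid, 3), round(rad, 3))
```

The inverse problem in the paper runs this logic the other way: midpoints of the parameters are identified first, then the radii are estimated from the measured output intervals.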
An Updating Method for Structural Dynamics Models with Uncertainties
Directory of Open Access Journals (Sweden)
B. Faverjon
2008-01-01
Full Text Available One challenge in the numerical simulation of industrial structures is model validation based on experimental data. Among the indirect, or parametric, methods available, one is based on the “mechanical” concept of the constitutive relation error estimator, introduced to quantify the quality of finite element analyses. In the case of uncertain measurements obtained from a family of quasi-identical structures, parameters need to be modeled randomly. In this paper, we consider the case of a damped structure modeled with stochastic variables. Polynomial chaos expansion and reduced bases are used to solve the stochastic problems involved in the calculation of the error.
iTree-Hydro: Snow hydrology update for the urban forest hydrology model
Yang Yang; Theodore A. Endreny; David J. Nowak
2011-01-01
This article presents snow hydrology updates made to iTree-Hydro, previously called the Urban Forest Effects-Hydrology model. iTree-Hydro Version 1 was a warm climate model developed by the USDA Forest Service to provide a process-based planning tool with robust water quantity and quality predictions given data limitations common to most urban areas. Cold climate...
PACIAE 2.0: An Updated Parton and Hadron Cascade Model (Program) for Relativistic Nuclear Collisions
Institute of Scientific and Technical Information of China (English)
SA; Ben-hao; ZHOU; Dai-mei; YAN; Yu-liang; LI; Xiao-mei; FENG; Sheng-qing; DONG; Bao-guo; CAI; Xu
2012-01-01
We have updated the parton and hadron cascade model PACIAE, which is based on JETSET 6.4 and PYTHIA 5.7, for relativistic nuclear collisions; the updated version is referred to as PACIAE 2.0. The main physics concerning the stages of parton initiation, parton rescattering, hadronization, and hadron rescattering was discussed. The structures of the programs were briefly explained. In addition, some calculated examples were compared with the experimental data. It turns out that this model (program) works well.
Dental caries: an updated medical model of risk assessment.
Kutsch, V Kim
2014-04-01
Dental caries is a transmissible, complex biofilm disease that creates prolonged periods of low pH in the mouth, resulting in a net mineral loss from the teeth. Historically, the disease model for dental caries consisted of mutans streptococci and Lactobacillus species, and the dental profession focused on restoring the lesions/damage from the disease by using a surgical model. The current recommendation is to implement a risk-assessment-based medical model called CAMBRA (caries management by risk assessment) to diagnose and treat dental caries. Unfortunately, many of the suggestions of CAMBRA have been overly complicated and confusing for clinicians. The risk of caries, however, is usually related to just a few common factors, and these factors result in common patterns of disease. This article examines the biofilm model of dental caries, identifies the common disease patterns, and discusses their targeted therapeutic strategies to make CAMBRA more easily adaptable for the privately practicing professional.
Model Updating and Uncertainty Management for Aircraft Prognostic Systems Project
National Aeronautics and Space Administration — This proposal addresses the integration of physics-based damage propagation models with diagnostic measures of current state of health in a mathematically rigorous...
U.S. Environmental Protection Agency — The uploaded data consists of the BRACE Na aerosol observations paired with CMAQ model output, the updated model's parameterization of sea salt aerosol emission size...
Robinson, Orin J.; McGowan, Conor; Devers, Patrick K.
2017-01-01
Density dependence regulates populations of many species across all taxonomic groups. Understanding density dependence is vital for predicting the effects of climate, habitat loss and/or management actions on wild populations. Migratory species likely experience seasonal changes in the relative influence of density dependence on population processes such as survival and recruitment throughout the annual cycle. These effects must be accounted for when characterizing migratory populations via population models. To evaluate effects of density on seasonal survival and recruitment of a migratory species, we used an existing full annual cycle model framework for American black ducks Anas rubripes, and tested different density effects (including no effects) on survival and recruitment. We then used a Bayesian model weight updating routine to determine which population model best fit observed breeding population survey data between 1990 and 2014. The models that best fit the survey data suggested that survival and recruitment were affected by density dependence and that density effects were stronger on adult survival during the breeding season than during the non-breeding season. Analysis also suggests that regulation of survival and recruitment by density varied over time. Our results showed that different characterizations of density regulation changed every 8–12 years (three times in the 25-year period) for our population. Synthesis and applications. Using a full annual cycle modelling framework and model weighting routine will be helpful in evaluating density dependence for migratory species in both the short and long term. We used this method to disentangle the seasonal effects of density on the continental American black duck population, which will allow managers to better evaluate the effects of habitat loss and potential habitat management actions throughout the annual cycle. The method here may allow researchers to hone in on the proper form and/or strength of
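A Bayesian model-weight updating routine of the kind described amounts to repeatedly multiplying prior weights by each model's likelihood of the new survey data and renormalizing; the likelihood values below are invented for illustration:

```python
import numpy as np

def update_weights(weights, likelihoods):
    """Bayesian update of model weights given each model's likelihood of new data."""
    w = np.asarray(weights, float) * np.asarray(likelihoods, float)
    return w / w.sum()   # renormalize so the weights stay a probability vector

# Three hypothetical density-dependence models; likelihoods for two survey years.
w = np.array([1/3, 1/3, 1/3])
for lik in ([0.2, 0.5, 0.3], [0.1, 0.6, 0.3]):
    w = update_weights(w, lik)
print(np.round(w, 3))
```

Tracking which model carries the largest weight through time is what lets the analysis detect that the best characterization of density regulation changes every several years.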
Borup, Morten; Grum, Morten; Mikkelsen, Peter Steen
2013-01-01
When an online runoff model is updated from system measurements, the requirements of the precipitation input change. Using rain gauge data as precipitation input there will be a displacement between the time when the rain hits the gauge and the time where the rain hits the actual catchment, due to the time it takes for the rain cell to travel from the rain gauge to the catchment. Since this time displacement is not present for system measurements the data assimilation scheme might already have updated the model to include the impact from the particular rain cell when the rain data is forced upon the model, which therefore will end up including the same rain twice in the model run. This paper compares forecast accuracy of updated models when using time displaced rain input to that of rain input with constant biases. This is done using a simple time-area model and historic rain series that are either displaced in time or affected with a bias. The results show that for a 10 minute forecast, time displacements of 5 and 10 minutes compare to biases of 60 and 100%, respectively, independent of the catchments time of concentration.
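The comparison described can be sketched with a minimal time-area (unit-hydrograph) model; the rain series, ordinates, and error metric below are illustrative, not the historic data used in the paper:

```python
import numpy as np

def time_area_runoff(rain, uh):
    """Route a rain series through a simple time-area (unit hydrograph) model."""
    return np.convolve(rain, uh)[:len(rain)]

rain = np.array([0, 0, 5, 10, 5, 0, 0, 0, 0, 0], float)   # mm per 5-min step
uh = np.array([0.2, 0.3, 0.3, 0.2])                       # time-area ordinates

q_true = time_area_runoff(rain, uh)
q_shift = time_area_runoff(np.roll(rain, 1), uh)   # rain displaced one time step
q_bias = time_area_runoff(rain * 1.6, uh)          # 60% rain volume bias

rmse = lambda a, b: float(np.sqrt(np.mean((a - b) ** 2)))
print(round(rmse(q_true, q_shift), 2), round(rmse(q_true, q_bias), 2))
```

Comparing the two error series for a fixed forecast horizon is the experiment that lets the paper equate a given time displacement with an equivalent constant bias.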
Clustering of Parameter Sensitivities: Examples from a Helicopter Airframe Model Updating Exercise
Directory of Open Access Journals (Sweden)
H. Shahverdi
2009-01-01
Full Text Available The need for high fidelity models in the aerospace industry has become ever more important as increasingly stringent requirements on noise and vibration levels, reliability, maintenance costs etc. come into effect. In this paper, the results of a finite element model updating exercise on a Westland Lynx XZ649 helicopter are presented. For large and complex structures, such as a helicopter airframe, the finite element model represents the main tool for obtaining accurate models which could predict the sensitivities of responses to structural changes and optimisation of the vibration levels. In this study, the eigenvalue sensitivities with respect to Young's modulus and mass density are used in a detailed parameterisation of the structure. A new methodology is developed using an unsupervised learning technique based on similarity clustering of the columns of the sensitivity matrix. An assessment of model updating strategies is given and comparative results for the correction of vibration modes are discussed in detail. The role of the clustering technique in updating large-scale models is emphasised.
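The core idea of grouping similar columns of the sensitivity matrix can be sketched with a small cosine-similarity k-means; the toy sensitivity matrix is made up, and k-means is a stand-in here for whatever similarity-clustering variant the paper's unsupervised technique actually uses:

```python
import numpy as np

def cluster_columns(S, k, iters=50, seed=0):
    """Group the columns of a sensitivity matrix S into k clusters of similar
    direction (cosine similarity), so redundant updating parameters can be merged."""
    rng = np.random.default_rng(seed)
    X = S / np.linalg.norm(S, axis=0)            # unit-normalize each column
    C = X[:, rng.choice(X.shape[1], k, replace=False)].copy()
    for _ in range(iters):
        labels = np.argmax(C.T @ X, axis=0)      # nearest centroid by cosine
        for j in range(k):
            if np.any(labels == j):
                c = X[:, labels == j].mean(axis=1)
                C[:, j] = c / np.linalg.norm(c)  # renormalized centroid
    return labels

# Toy eigenvalue sensitivities: 6 parameters forming two obvious families.
S = np.array([[1.0, 0.9, 1.2, 0.0, 0.1, 0.05],
              [0.0, 0.1, 0.1, 1.0, 0.9, 1.1]])
print(cluster_columns(S, 2))
```

Each cluster then contributes one effective updating parameter, which is what keeps the parameterisation tractable for a large airframe model.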
Finite element model updating for large span spatial steel structure considering uncertainties
Institute of Scientific and Technical Information of China (English)
TENG Jun; ZHU Yan-huang; ZHOU Feng; LI Hui; OU Jin-ping
2010-01-01
In order to establish the baseline finite element model for structural health monitoring, a new method of model updating was proposed after analyzing the uncertainties of the measured data and the error of the finite element model. In the new method, the finite element model was replaced by a multi-output support vector regression machine (MSVR). The interval variables of the measured frequency were sampled by the Latin hypercube sampling method. The samples of frequency were regarded as the inputs of the trained MSVR. The outputs of the MSVR were the target values of the design parameters. The steel structure of the National Aquatic Center for the Beijing Olympic Games was introduced as a case for finite element model updating. The results show that the proposed method avoids complicated computations, and both the estimated values and the associated uncertainties of the structural parameters can be obtained. The static and dynamic characteristics of the updated finite element model are in good agreement with the measured data.
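The sampling-plus-surrogate inversion can be sketched as follows, with a plain least-squares map standing in for the trained MSVR; all structural numbers are made up:

```python
import numpy as np

def latin_hypercube(n, bounds, seed=0):
    """Latin hypercube sample of n points within per-dimension (lo, hi) bounds."""
    rng = np.random.default_rng(seed)
    u = np.column_stack([(rng.permutation(n) + rng.random(n)) / n for _ in bounds])
    lo, hi = np.array(bounds, float).T
    return lo + u * (hi - lo)

# Hypothetical inverse surrogate: a least-squares map from measured frequencies
# to design parameters, trained on synthetic (parameter, frequency) pairs.
rng = np.random.default_rng(1)
params = rng.uniform(0.8, 1.2, (200, 2))                     # e.g. stiffness factors
freqs = np.column_stack([params @ [2.0, 1.0], params @ [0.5, 3.0]])
W, *_ = np.linalg.lstsq(np.c_[freqs, np.ones(200)], params, rcond=None)

# Sample the measured-frequency intervals and map each sample to parameters.
f_samples = latin_hypercube(500, bounds=[(2.9, 3.1), (3.4, 3.6)])
p_samples = np.c_[f_samples, np.ones(500)] @ W
print(p_samples.mean(axis=0).round(3))
```

The spread of `p_samples` plays the role of the parameter uncertainty estimate; the real method replaces the linear map with an MSVR trained on finite element runs.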
Keiner, Doerthe; Gaab, Michael R; Backhaus, Vanessa; Piek, Juergen; Oertel, Joachim
2010-12-01
Water jet dissection represents a promising technique for precise brain tissue dissection with preservation of blood vessels. In the past, the water jet dissector has been used for various pathologies. A detailed report of the surgical technique is lacking. The authors present their results after 208 procedures with a special focus on surgical technique, intraoperative suitability, advantages, and disadvantages. Between March 1997 and April 2009, 208 patients with various intracranial neurosurgical pathologies were operated on with the water jet dissector. Handling of the device and its usefulness and extent of application were assessed. The pressures encountered, potential risks, and complications were documented. The patients were followed 1 to 24 months postoperatively. A detailed presentation of the surgical technique is given. Differences and limitations of the water jet dissection device in the various pathologies were evaluated. The water jet dissector was intensively used in 127 procedures (61.1%), intermittently used in 56 procedures (26.9%), and scarcely used in 25 procedures (12%). The device was considered to be very helpful in 166 procedures (79.8%) and helpful to some extent in 33 procedures (15.9%). In 8 (3.8%) procedures, it was not helpful, and in 1 procedure (0.5%), the usefulness was not documented by the surgeon. The water jet dissector can be applied easily and very safely. Precise tissue dissection with preservation of blood vessels and no greater risk of complications are possible. However, the clinical consequences of the described qualities need to be demonstrated in a randomized clinical trial.
Status Update: Modeling Energy Balance in NIF Hohlraums
Energy Technology Data Exchange (ETDEWEB)
Jones, O. S. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)
2015-07-22
We have developed a standardized methodology to model hohlraum drive in NIF experiments. We compare simulation results to experiments by 1) comparing hohlraum xray fluxes and 2) comparing capsule metrics, such as bang times. Long-pulse, high gas-fill hohlraums require a 20-28% reduction in simulated drive and inclusion of ~15% backscatter to match experiment through (1) and (2). Short-pulse, low fill or near-vacuum hohlraums require a 10% reduction in simulated drive to match experiment through (2); no reduction through (1). Ongoing work focuses on physical model modifications to improve these matches.
Energy Technology Data Exchange (ETDEWEB)
McCright, R D
1998-06-30
This Engineered Materials Characterization Report (EMCR), Volume 3, discusses in considerable detail the work of the past 18 months on testing the candidate materials proposed for the waste-package (WP) container and on modeling the performance of those materials in the Yucca Mountain (YM) repository setting. This report was prepared as an update of information and serves as one of the supporting documents to the Viability Assessment (VA) of the Yucca Mountain Project. Previous versions of the EMCR have provided a history and background of container-materials selection and evaluation (Volume 1), a compilation of physical and mechanical properties for the WP design effort (Volume 2), and corrosion-test data and performance-modeling activities (Volume 3). Because the information in Volumes 1 and 2 is still largely current, those volumes are not being revised. As new information becomes available in the testing and modeling efforts, Volume 3 is periodically updated to include that information.
Using radar altimetry to update a large-scale hydrological model of the Brahmaputra river basin
DEFF Research Database (Denmark)
Finsen, F.; Milzow, Christian; Smith, R.
2014-01-01
Measurements of river and lake water levels from space-borne radar altimeters (past missions include ERS, Envisat, Jason, Topex) are useful for calibration and validation of large-scale hydrological models in poorly gauged river basins. Altimetry data availability over the downstream reaches...... of the Brahmaputra is excellent (17 high-quality virtual stations from ERS-2, 6 from Topex and 10 from Envisat are available for the Brahmaputra). In this study, altimetry data are used to update a large-scale Budyko-type hydrological model of the Brahmaputra river basin in real time. Altimetry measurements...... are converted to discharge using rating curves of simulated discharge versus observed altimetry. This approach makes it possible to use altimetry data from river cross sections where both in-situ rating curves and accurate river cross section geometry are not available. Model updating based on radar altimetry...
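Converting altimetry water levels to discharge via a rating curve of simulated discharge versus observed altimetry can be sketched as a power-law fit; the stage-discharge pairs below are synthetic:

```python
import numpy as np

def fit_rating_curve(stage, discharge, h0=0.0):
    """Fit a power-law rating curve Q = a * (h - h0)**b by log-log least squares."""
    b, log_a = np.polyfit(np.log(stage - h0), np.log(discharge), 1)
    return np.exp(log_a), b

# Synthetic calibration pairs: simulated discharge vs observed altimetry stage.
h = np.array([2.0, 3.0, 4.0, 5.0, 6.0])        # water level (m)
q = 50.0 * h**1.6                               # "true" curve generating the data
a, b = fit_rating_curve(h, q)

# Convert a new altimetry water level to a discharge for model updating.
print(round(a * 4.5**b, 1))
```

Because the curve relates *simulated* discharge to *observed* stage, it needs neither an in-situ rating curve nor surveyed cross-section geometry, which is the point made in the abstract.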
An updated natural history model of cervical cancer: derivation of model parameters.
Campos, Nicole G; Burger, Emily A; Sy, Stephen; Sharma, Monisha; Schiffman, Mark; Rodriguez, Ana Cecilia; Hildesheim, Allan; Herrero, Rolando; Kim, Jane J
2014-09-01
Mathematical models of cervical cancer have been widely used to evaluate the comparative effectiveness and cost-effectiveness of preventive strategies. Major advances in the understanding of cervical carcinogenesis motivate the creation of a new disease paradigm in such models. To keep pace with the most recent evidence, we updated a previously developed microsimulation model of human papillomavirus (HPV) infection and cervical cancer to reflect 1) a shift towards health states based on HPV rather than poorly reproducible histological diagnoses and 2) HPV clearance and progression to precancer as a function of infection duration and genotype, as derived from the control arm of the Costa Rica Vaccine Trial (2004-2010). The model was calibrated leveraging empirical data from the New Mexico Surveillance, Epidemiology, and End Results Registry (1980-1999) and a state-of-the-art cervical cancer screening registry in New Mexico (2007-2009). The calibrated model had good correspondence with data on genotype- and age-specific HPV prevalence, genotype frequency in precancer and cancer, and age-specific cancer incidence. We present this model in response to a call for new natural history models of cervical cancer intended for decision analysis and economic evaluation at a time when global cervical cancer prevention policy continues to evolve and evidence of the long-term health effects of cervical interventions remains critical. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health 2014. This work is written by (a) US Government employee(s) and is in the public domain in the US.
An updated summary of MATHEW/ADPIC model evaluation studies
Energy Technology Data Exchange (ETDEWEB)
Foster, K.T.; Dickerson, M.H.
1990-05-01
This paper summarizes the major model evaluation studies conducted for the MATHEW/ADPIC atmospheric transport and diffusion models used by the US Department of Energy's Atmospheric Release Advisory Capability. These studies have taken place over the last 15 years and involve field tracer releases influenced by a variety of meteorological and topographical conditions. Neutrally buoyant tracers released both as surface and elevated point sources, as well as material dispersed by explosive, thermally buoyant release mechanisms have been studied. Results from these studies show that the MATHEW/ADPIC models estimate the tracer air concentrations to within a factor of two of the measured values 20% to 50% of the time, and within a factor of five of the measurements 35% to 85% of the time depending on the complexity of the meteorology and terrain, and the release height of the tracer. Comparisons of model estimates to peak downwind deposition and air concentration measurements from explosive releases are shown to be generally within a factor of two to three. 24 refs., 14 figs., 3 tabs.
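The factor-of-two and factor-of-five agreement statistics quoted above can be computed as follows; the observation and prediction arrays are hypothetical:

```python
import numpy as np

def within_factor(pred, obs, factor):
    """Fraction of predictions within a multiplicative factor of the observations."""
    pred, obs = np.asarray(pred, float), np.asarray(obs, float)
    ratio = pred / obs
    return float(np.mean((ratio >= 1.0 / factor) & (ratio <= factor)))

obs  = np.array([1.0, 2.0, 5.0, 10.0, 20.0])    # hypothetical measured values
pred = np.array([1.5, 5.0, 4.0, 3.0,  45.0])    # hypothetical model estimates
print(within_factor(pred, obs, 2), within_factor(pred, obs, 5))
```

This multiplicative criterion is standard for dispersion-model evaluation, where concentration errors are log-distributed rather than additive.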
General equilibrium basic needs policy model, (updating part).
Kouwenaar A
1985-01-01
ILO pub-WEP pub-PREALC pub. Working paper, econometric model for the assessment of structural change affecting development planning for basic needs satisfaction in Ecuador - considers population growth, family size (households), labour force participation, labour supply, wages, income distribution, profit rates, capital ownership, etc.; examines nutrition, education and health as factors influencing productivity. Diagram, graph, references, statistical tables.
Institute of Scientific and Technical Information of China (English)
Ping Wang; Chaohe Yang; Xuemin Tian; Dexian Huang
2014-01-01
The performance of data-driven models relies heavily on the amount and quality of training samples, so it might deteriorate significantly in the regions where samples are scarce. The objective of this paper is to develop an on-line SVR model updating strategy to track the change in the process characteristics efficiently with affordable computational burden. This is achieved by adding a new sample that violates the Karush-Kuhn-Tucker conditions of the existing SVR model and by deleting the old sample that has the maximum distance with respect to the newly added sample in feature space. The benefits offered by such an updating strategy are exploited to develop an adaptive model-based control scheme, where model updating and control task perform alternately. The effectiveness of the adaptive controller is demonstrated by simulation study on a continuous stirred tank reactor. The results reveal that the adaptive MPC scheme outperforms its non-adaptive counterpart for large-magnitude set point changes and variations in process parameters.
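The add/delete bookkeeping of this updating strategy can be sketched as follows; the epsilon-tube check stands in for a full KKT test, and retraining the SVR after each swap is omitted:

```python
import numpy as np

def rbf(x, z, gamma=1.0):
    """Gaussian (RBF) kernel between two sample vectors."""
    return np.exp(-gamma * np.sum((np.asarray(x) - np.asarray(z)) ** 2))

def update_sample_set(X, x_new, err_new, eps=0.1, gamma=1.0):
    """Fixed-size sample-set update for an online SVR (bookkeeping sketch only).

    A new sample is admitted when its prediction error exceeds the epsilon tube
    (a proxy for violating the KKT conditions); the stored sample with the
    largest feature-space distance to it, ||phi(x) - phi(z)||^2 = 2 - 2*k(x, z),
    is discarded so the training set size stays constant.
    """
    if abs(err_new) <= eps:
        return X                                       # KKT satisfied: no update
    d2 = [2.0 - 2.0 * rbf(x, x_new, gamma) for x in X]
    X = [x for i, x in enumerate(X) if i != int(np.argmax(d2))]
    return X + [x_new]

X = [[0.0], [0.5], [4.0]]
X = update_sample_set(X, [0.6], err_new=0.3)
print(X)
```

Keeping the deleted sample far from the added one in feature space preserves coverage of the operating region while still tracking the drifting process.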
Employing incomplete complex modes for model updating and damage detection of damped structures
Institute of Scientific and Technical Information of China (English)
LI HuaJun; LIU FuShun; HU Sau-Lon James
2008-01-01
In the study of finite element model updating or damage detection, most papers are devoted to undamped systems. Thus, their objective has been exclusively restricted to the correction of the mass and stiffness matrices. In contrast, this paper performs model updating and damage detection for damped structures. A theoretical contribution of this paper is to extend the cross-model cross-mode (CMCM) method to simultaneously update the mass, damping and stiffness matrices of a finite element model when only a few spatially incomplete, complex-valued modes are available. Numerical studies are conducted for a 30-DOF (degree-of-freedom) cantilever beam with multiple damaged elements, with the measured modes synthesized from finite element models. The numerical results reveal that applying the CMCM method, together with an iterative Guyan reduction scheme, can yield good damage detection in general. When the measured modes utilized in the CMCM method are corrupted with irregular errors, assessing damage at the location that possesses larger modal strain energy is less sensitive to the corrupted modes.
Resampling procedures to validate dendro-auxometric regression models
Directory of Open Access Journals (Sweden)
2009-03-01
Full Text Available Regression analysis is widely used in several sectors of forest research. The validation of a dendro-auxometric model is a basic step in the building of the model itself. The more a model withstands attempts to disprove it, the more its reliability increases. In recent decades many new resampling techniques, which exploit the computational speed of modern computers, have been formulated. Here we show the results obtained by the application of a bootstrap resampling procedure as a validation tool.
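A minimal bootstrap validation of a fitted regression, in the spirit described, might look like this; the dendro-auxometric data are simulated:

```python
import numpy as np

def bootstrap_slope_ci(x, y, n_boot=2000, seed=0):
    """Bootstrap percentile CI for the slope of a simple linear regression,
    as a resampling check that the fitted relation is not an artifact."""
    rng = np.random.default_rng(seed)
    n = len(x)
    slopes = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, n)              # resample pairs with replacement
        slopes[b] = np.polyfit(x[idx], y[idx], 1)[0]
    return np.percentile(slopes, [2.5, 97.5])

# Toy dendro-auxometric data: tree diameter (cm) vs height (m) with noise.
rng = np.random.default_rng(1)
d = rng.uniform(10, 50, 80)
h = 1.3 + 0.5 * d + rng.normal(0, 1.5, 80)
lo, hi = bootstrap_slope_ci(d, h)
print(round(lo, 3), round(hi, 3))
```

If the bootstrap interval is narrow and excludes zero, the fitted slope has survived one attempt at demonstrating its groundlessness, which is exactly the validation logic the abstract describes.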
Revisiting the Carrington Event: Updated modeling of atmospheric effects
Thomas, Brian C; Snyder, Brock R
2011-01-01
The terrestrial effects of major solar events such as the Carrington white-light flare and subsequent geomagnetic storm of August-September 1859 are of considerable interest, especially in light of recent predictions that such extreme events will be more likely over the coming decades. Here we present results of modeling the atmospheric effects, especially production of odd nitrogen compounds and subsequent depletion of ozone, by solar protons associated with the Carrington event. This study combines approaches from two previous studies of the atmospheric effect of this event. We investigate changes in NOy compounds as well as depletion of O3 using a two-dimensional atmospheric chemistry and dynamics model. Atmospheric ionization is computed using a range-energy relation with four different proxy proton spectra associated with more recent well-known solar proton events. We find that changes in atmospheric constituents are in reasonable agreement with previous studies, but effects of the four proxy spectra use...
Energy Technology Data Exchange (ETDEWEB)
Hwang, Ho-Ling [ORNL; Davis, Stacy Cagle [ORNL
2009-12-01
This report is designed to document the analysis process and estimation models currently used by the Federal Highway Administration (FHWA) to estimate the off-highway gasoline consumption and public sector fuel consumption. An overview of the entire FHWA attribution process is provided along with specifics related to the latest update (2008) on the Off-Highway Gasoline Use Model and the Public Use of Gasoline Model. The Off-Highway Gasoline Use Model is made up of five individual modules, one for each of the off-highway categories: agricultural, industrial and commercial, construction, aviation, and marine. This 2008 update of the off-highway models was the second major update (the first model update was conducted during 2002-2003) after they were originally developed in the mid-1990s. The agricultural model methodology, specifically, underwent a significant revision because of changes in data availability since 2003. Some revision to the model was necessary due to removal of certain data elements used in the original estimation method. The revised agricultural model also made use of some newly available information, published by the data source agency in recent years. The other model methodologies were not drastically changed, though many data elements were updated to improve the accuracy of these models. Note that components in the Public Use of Gasoline Model were not updated in 2008. A major challenge in updating estimation methods applied by the public-use model is that they would have to rely on significant new data collection efforts. In addition, due to resource limitations, several components of the models (both off-highway and public-use models) that utilized regression modeling approaches were not recalibrated under the 2008 study. An investigation of the Environmental Protection Agency's NONROAD2005 model was also carried out under the 2008 model update. Results generated from the NONROAD2005 model were analyzed, examined, and compared, to the extent that
Energy Technology Data Exchange (ETDEWEB)
Johanna H Oxstrand; Katya L Le Blanc
2012-07-01
The nuclear industry is constantly trying to find ways to decrease the human error rate, especially the human errors associated with procedure use. As a step toward the goal of improving procedure use performance, researchers, together with the nuclear industry, have been looking at replacing the current paper-based procedures with computer-based procedure systems. The concept of computer-based procedures is not new by any means; however most research has focused on procedures used in the main control room. Procedures reviewed in these efforts are mainly emergency operating procedures and normal operating procedures. Based on lessons learned for these previous efforts we are now exploring a more unknown application for computer based procedures - field procedures, i.e. procedures used by nuclear equipment operators and maintenance technicians. The Idaho National Laboratory, the Institute for Energy Technology, and participants from the U.S. commercial nuclear industry are collaborating in an applied research effort with the objective of developing requirements and specifications for a computer-based procedure system to be used by field operators. The goal is to identify the types of human errors that can be mitigated by using computer-based procedures and how to best design the computer-based procedures to do this. The underlying philosophy in the research effort is “Stop – Start – Continue”, i.e. what features from the use of paper-based procedures should we not incorporate (Stop), what should we keep (Continue), and what new features or work processes should be added (Start). One step in identifying the Stop – Start – Continue was to conduct a baseline study where affordances related to the current usage of paper-based procedures were identified. The purpose of the study was to develop a model of paper based procedure use which will help to identify desirable features for computer based procedure prototypes. Affordances such as note taking, markups
Impact of time displaced precipitation estimates for on-line updated models
DEFF Research Database (Denmark)
Borup, Morten; Grum, Morten; Mikkelsen, Peter Steen
2012-01-01
is forced upon the model, which therefore will end up including the same rain twice in the model run. This paper compares forecast accuracy of updated models when using time displaced rain input to that of rain input with constant biases. This is done using a simple timearea model and historic rain series...... that are either displaced in time or affected with a bias. The results show that for a 10 minute forecast, time displacements of 5 and 10 minutes compare to biases of 60% and 100%, respectively, independent of the catchments time of concentration....
UPDATING THE FREIGHT TRUCK STOCK ADJUSTMENT MODEL: 1997 VEHICLE INVENTORY AND USE SURVEY DATA
Energy Technology Data Exchange (ETDEWEB)
Davis, S.C.
2000-11-16
The Energy Information Administration's (EIA's) National Energy Modeling System (NEMS) Freight Truck Stock Adjustment Model (FTSAM) was created in 1995 relying heavily on input data from the 1992 Economic Census, Truck Inventory and Use Survey (TIUS). The FTSAM is part of the NEMS Transportation Sector Model, which provides baseline energy projections and analyzes the impacts of various technology scenarios on consumption, efficiency, and carbon emissions. The base data for the FTSAM can be updated every five years as new Economic Census information is released. Because of expertise in using the TIUS database, Oak Ridge National Laboratory (ORNL) was asked to assist the EIA when the new Economic Census data were available. ORNL provided the necessary base data from the 1997 Vehicle Inventory and Use Survey (VIUS) and other sources to update the FTSAM. The next Economic Census will be in the year 2002. When those data become available, the EIA will again want to update the FTSAM using the VIUS. This report, which details the methodology of estimating and extracting data from the 1997 VIUS Microdata File, should be used as a guide for generating the data from the next VIUS so that the new data will be as compatible as possible with the data in the model.
The Chandra X-Ray Observatory Radiation Environmental Model Update
Blackwell, William C.; Minow, Joseph I.; ODell, Stephen L.; Cameron, Robert A.; Virani, Shanil N.
2003-01-01
CRMFLX (Chandra Radiation Model of ion FLUX) is a radiation environment risk mitigation tool for use as a decision aid in planning the operation times for Chandra's Advanced CCD Imaging Spectrometer (ACIS) detector. The accurate prediction of the proton flux environment with energies of 100 - 200 keV is needed in order to protect the ACIS detector against proton degradation. Unfortunately, protons of this energy are abundant in the region of space where Chandra must operate. In addition, on-board particle detectors do not measure proton flux levels of the required energy range. CRMFLX is an engineering environment model developed to predict the proton flux in the solar wind, magnetosheath, and magnetosphere phenomenological regions of geospace. This paper describes the upgrades to the ion flux databases for the magnetosphere, magnetosheath, and solar wind regions. These data files were created by using Geotail and Polar spacecraft flux measurements only when the Advanced Composition Explorer (ACE) spacecraft's 0.14 MeV particle flux was below a threshold value. This new database allows for CRMFLX output to be correlated with both the geomagnetic activity level, as represented by the Kp index, as well as with solar proton events. Also, reported in this paper are results of analysis leading to a change in Chandra operations that successfully mitigates the false trigger rate for autonomous radiation events caused by relativistic electron flux contamination of proton channels.
Updates on measurements and modeling techniques for expendable countermeasures
Gignilliat, Robert; Tepfer, Kathleen; Wilson, Rebekah F.; Taczak, Thomas M.
2016-10-01
The potential threat of recently-advertised anti-ship missiles has instigated research at the United States (US) Naval Research Laboratory (NRL) into the improvement of measurement techniques for visual band countermeasures. The goal of measurements is the collection of radiometric imagery for use in the building and validation of digital models of expendable countermeasures. This paper will present an overview of measurement requirements unique to the visual band and differences between visual band and infrared (IR) band measurements. A review of the metrics used to characterize signatures in the visible band will be presented and contrasted to those commonly used in IR band measurements. For example, the visual band measurements require higher fidelity characterization of the background, including improved high-transmittance measurements and better characterization of solar conditions to correlate results more closely with changes in the environment. The range of relevant engagement angles has also been expanded to include higher altitude measurements of targets and countermeasures. In addition to the discussion of measurement techniques, a top-level qualitative summary of modeling approaches will be presented. No quantitative results or data will be presented.
Directory of Open Access Journals (Sweden)
Sarah E Murray
2014-03-01
This paper discusses three potential varieties of update: updates to the common ground, structuring updates, and updates that introduce discourse referents. These different types of update are used to model different aspects of natural language phenomena. Not-at-issue information directly updates the common ground. The illocutionary mood of a sentence structures the context. Other updates introduce discourse referents of various types, including propositional discourse referents for at-issue information. Distinguishing these types of update allows a unified treatment of a broad range of phenomena, including the grammatical evidentials found in Cheyenne (Algonquian), as well as English evidential parentheticals, appositives, and mood marking. An update semantics that can formalize all of these varieties of update is given, integrating the different kinds of semantic contributions into a single representation of meaning. http://dx.doi.org/10.3765/sp.7.2
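The distinction between direct common-ground update and update via a propositional discourse referent can be sketched in a toy possible-worlds model; the worlds, propositions, and the "acceptance" step below are simplified assumptions, not the paper's formal system:

```python
# Worlds are labeled 0..3; a proposition is the set of worlds where it holds.
worlds = {0, 1, 2, 3}

context = {
    "cg": set(worlds),   # common ground: worlds still live in the conversation
    "drefs": [],         # propositional discourse referents
}

def update_not_at_issue(ctx, p):
    # Not-at-issue content restricts the common ground directly.
    ctx["cg"] &= p

def introduce_dref(ctx, p):
    # At-issue content is first made available as a discourse referent ...
    ctx["drefs"].append(p)

def accept_last(ctx):
    # ... and only restricts the common ground once it is accepted.
    ctx["cg"] &= ctx["drefs"][-1]

introduce_dref(context, {0, 1, 2})        # at-issue: "it is raining"
update_not_at_issue(context, {1, 2, 3})   # not-at-issue evidential content
accept_last(context)
print(sorted(context["cg"]))  # → [1, 2]
```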
Box models for the evolution of atmospheric oxygen: an update
Kasting, J. F.
1991-01-01
A simple 3-box model of the atmosphere/ocean system is used to describe the various stages in the evolution of atmospheric oxygen. In Stage I, which probably lasted until redbeds began to form about 2.0 Ga ago, the Earth's surface environment was generally devoid of free O2, except possibly in localized regions of high productivity in the surface ocean. In Stage II, which may have lasted for less than 150 Ma, the atmosphere and surface ocean were oxidizing, while the deep ocean remained anoxic. In Stage III, which commenced with the disappearance of banded iron formations around 1.85 Ga ago and has lasted until the present, all three surface reservoirs contained appreciable amounts of free O2. Recent and not-so-recent controversies regarding the abundance of oxygen in the Archean atmosphere are identified and discussed. The rate of O2 increase during the Middle and Late Proterozoic is identified as another outstanding question.
An Updated GA Signaling 'Relief of Repression' Regulatory Model
Institute of Scientific and Technical Information of China (English)
Xiu-Hua Gao; Sen-Lin Xiao; Qin-Fang Yao; Yu-Juan Wang; Xiang-Dong Fu
2011-01-01
Gibberellic acid (GA) regulates many aspects of plant growth and development. The DELLA proteins act to restrain plant growth, and GA relieves this repression by promoting their degradation via the 26S proteasome pathway. The elucidation of the crystal structure of the soluble GA receptor GID1 protein represents an important breakthrough for understanding the way in which GA is perceived and how it induces the destabilization of the DELLA proteins. Recent advances have revealed that the DELLA proteins are involved in protein-protein interactions within various environmental and hormone signaling pathways. In this review, we highlight our current understanding of the 'relief of repression' model that aims to explain the role of GA and the function of the DELLA proteins, incorporating the many aspects of cross-talk shown to exist in the control of plant development and the response to stress.
Ding, Zhong-Jun; Jiang, Rui; Gao, Zi-You; Wang, Bing-Hong; Long, Jiancheng
2013-08-01
The effect of overpasses in the Biham-Middleton-Levine traffic flow model with random and parallel update rules has been studied. An overpass is a site that can be occupied simultaneously by an eastbound car and a northbound one. Under periodic boundary conditions, both self-organized and random patterns are observed in the free-flowing phase of the parallel update model, while only the random pattern is observed in the random update model. We have developed mean-field analysis for the moving phase of the random update model, which agrees with the simulation results well. An intermediate phase is observed in which some cars could pass through the jamming cluster due to the existence of free paths in the random update model. Two intermediate states are observed in the parallel update model, which have been ignored in previous studies. The intermediate phases in which the jamming skeleton is only oriented along the diagonal line in both models have been analyzed, with the analyses agreeing well with the simulation results. With the increase of overpass ratio, the jamming phase and the intermediate phases disappear in succession for both models. Under open boundary conditions, the system exhibits only two phases when the ratio of overpasses is below a threshold in the random update model. When the ratio of the overpass is close to 1, three phases could be observed, similar to the totally asymmetric simple exclusion process model. The dependence of the average velocity, the density, and the flow rate on the injection probability in the moving phase has also been obtained through mean-field analysis. The results of the parallel model under open boundary conditions are similar to that of the random update model.
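A minimal sketch of Biham-Middleton-Levine dynamics with overpass sites follows; it uses one sequential sweep per direction rather than the exact random or parallel rules of the paper, and the grid size, car densities, and overpass ratio are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 16
east = rng.random((N, N)) < 0.1                    # eastbound cars
north = (rng.random((N, N)) < 0.1) & ~east         # northbound cars
overpass = rng.random((N, N)) < 0.3                # sites usable by both directions

def free(east, north, overpass, i, j, for_east):
    # A target cell is free if empty, or an overpass not holding a same-direction car.
    if overpass[i, j]:
        return not (east[i, j] if for_east else north[i, j])
    return not east[i, j] and not north[i, j]

def step(east, north):
    moved = 0
    for i, j in np.argwhere(east):                 # eastbound sublattice moves first
        jn = (j + 1) % N
        if free(east, north, overpass, i, jn, True):
            east[i, j] = False; east[i, jn] = True; moved += 1
    for i, j in np.argwhere(north):                # then the northbound sublattice
        im = (i - 1) % N
        if free(east, north, overpass, im, j, False):
            north[i, j] = False; north[im, j] = True; moved += 1
    return moved

for _ in range(10):
    m = step(east, north)
print("cars moved in last sweep:", m)
```

Varying the overpass fraction in this sketch is one way to see qualitatively how jamming is relieved as overpasses are added.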
User's guide to the MESOI diffusion model and to the utility programs UPDATE and LOGRVU
Energy Technology Data Exchange (ETDEWEB)
Athey, G.F.; Allwine, K.J.; Ramsdell, J.V.
1981-11-01
MESOI is an interactive, Lagrangian puff trajectory diffusion model. The model is documented separately (Ramsdell and Athey, 1981); this report is intended to provide MESOI users with the information needed to successfully conduct model simulations. The user is also provided with guidance in the use of the data file maintenance and review programs, UPDATE and LOGRVU. Complete examples are given for the operation of all three programs, and an appendix documents UPDATE and LOGRVU.
Update on PHELIX Pulsed-Power Hydrodynamics Experiments and Modeling
Rousculp, Christopher; Reass, William; Oro, David; Griego, Jeffery; Turchi, Peter; Reinovsky, Robert; Devolder, Barbara
2013-10-01
The PHELIX pulsed-power driver is a 300 kJ, portable, transformer-coupled capacitor bank capable of delivering a 3-5 MA, 10 μs pulse into a low-inductance load. Here we describe further testing and hydrodynamics experiments. First, a 4 nH static inductive load has been constructed. This allows for repetitive high-voltage, high-current testing of the system. Results are used in the calibration of simple circuit models and numerical simulations across a range of bank charges (+/-20 < V0 < +/-40 kV). Furthermore, a dynamic liner-on-target load experiment has been conducted to explore the shock-launched transport of particulates (diam. ~ 1 μm) from a surface. The trajectories of the particulates are diagnosed with radiography. Results are compared to 2D hydro-code simulations. Finally, initial studies are underway to assess the feasibility of using the PHELIX driver as an electromagnetic launcher for planar shock-physics experiments. Work supported by United States-DOE under contract DE-AC52-06NA25396.
Research of Cadastral Data Modelling and Database Updating Based on Spatio-temporal Process
Directory of Open Access Journals (Sweden)
ZHANG Feng
2016-02-01
The core of modern cadastre management is to renew the cadastral database and maintain its currency, topological consistency and integrity. This paper analyzed the changes, and the linkages among them, of various cadastral objects in the update process. Combining object-oriented modeling techniques with the expression of spatio-temporal objects' evolution, the paper proposes a cadastral data updating model based on the spatio-temporal process. Change rules based on the spatio-temporal topological relations of evolving cadastral objects are drafted, and furthermore cascade updating and history back-tracing of cadastral features, land use and buildings are realized. This model is implemented in the cadastral management system ReGIS. Cascade changes are triggered by a direct driving force or by perceived external events. The system records the evolution process of spatio-temporal objects to facilitate the reconstruction of history, change tracking, and the analysis and forecasting of future changes.
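The ideas of versioned cadastral objects, cascade updating, and history back-tracing can be sketched minimally as follows; the class, event names, and dates are hypothetical, not the ReGIS data model:

```python
from datetime import date

class Parcel:
    """Cadastral object that keeps its full evolution history for back-tracing."""
    def __init__(self, pid, geometry):
        self.pid = pid
        self.history = [(date(2000, 1, 1), "created", geometry)]

    @property
    def geometry(self):
        return self.history[-1][2]

    def update(self, when, event, new_geometry, cascade=()):
        self.history.append((when, event, new_geometry))
        for linked in cascade:                      # cascade to dependent objects
            linked.update(when, f"cascade:{event}", linked.geometry)

    def as_of(self, when):
        # History back-trace: reconstruct the state at an earlier date.
        states = [h for h in self.history if h[0] <= when]
        return states[-1] if states else None

building = Parcel("B-7", "building footprint v1")
parcel = Parcel("P-12", "polygon v1")
parcel.update(date(2010, 6, 1), "split", "polygon v2", cascade=[building])

print(parcel.as_of(date(2005, 1, 1))[2])   # → polygon v1
print(len(building.history))               # → 2
```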
Grip, Niklas; Sabourova, Natalia; Tu, Yongming
2017-02-01
Sensitivity-based Finite Element Model Updating (FEMU) is one of the widely accepted techniques used for damage identification in structures. FEMU can be formulated as a numerical optimization problem and solved iteratively, automatically updating the unknown model parameters by minimizing the difference between measured and analytical structural properties. However, in the presence of noise in the measurements, the updating results are usually prone to errors. Mathematically, this is described as instability of the damage identification as an inverse problem. One way to resolve this problem is to use regularization. In this paper, we compare a well-established interpolation-based regularization method against methods based on minimizing the total variation of the unknown model parameters. These are new regularization methods for structural damage identification. We investigate how using Huber and pseudo-Huber functions in the definition of total variation affects important properties of the methods. For instance, for well-localized damages the results show a clear advantage of the total-variation-based regularization in terms of the identified location and severity of damage, compared with the interpolation-based solution. For a practical test of the proposed method we use a reinforced concrete plate. Measurements and analysis were performed first on an undamaged plate, and then repeated after applying four different degrees of damage.
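A toy version of Huber-smoothed total-variation regularization in model updating is sketched below; the forward model (element "response" proportional to the square root of stiffness), the damage pattern, and the crude grid-based coordinate descent are all illustrative assumptions, not the paper's FEMU formulation:

```python
import numpy as np

def huber(x, delta=0.05):
    a = np.abs(x)
    return np.where(a <= delta, 0.5 * a ** 2 / delta, a - 0.5 * delta)

def tv_huber(theta, delta=0.05):
    # Total variation of the stiffness parameters, smoothed with a Huber function.
    return huber(np.diff(theta), delta).sum()

def objective(theta, y_meas, lam=0.02):
    y_model = np.sqrt(theta)                    # assumed simple forward model
    return np.sum((y_model - y_meas) ** 2) + lam * tv_huber(theta)

theta_true = np.ones(20)
theta_true[8] = 0.6                             # one localized stiffness loss
rng = np.random.default_rng(2)
y_meas = np.sqrt(theta_true) + 0.01 * rng.standard_normal(20)

theta = np.ones(20)
grid = np.linspace(0.4, 1.0, 61)
for _ in range(5):                              # crude coordinate descent over a grid
    for i in range(20):
        theta[i] = min(grid, key=lambda g: objective(np.r_[theta[:i], [g], theta[i + 1:]], y_meas))
print("identified damaged element:", int(np.argmin(theta)))
```

The TV penalty favors piecewise-constant parameter fields, which is why it suits well-localized damage.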
Modeling a radiotherapy clinical procedure: total body irradiation.
Esteban, Ernesto P; García, Camille; De La Rosa, Verónica
2010-09-01
Leukemia, non-Hodgkin's lymphoma, and neuroblastoma patients prior to bone marrow transplants may be subject to a clinical radiotherapy procedure called total body irradiation (TBI). To mimic a TBI procedure, we modified the Jones model of bone marrow radiation cell kinetics by adding mutant and cancerous cell compartments. The modified Jones model is mathematically described by a set of n + 4 differential equations, where n is the number of mutations before a normal cell becomes a cancerous cell. Assuming a standard TBI radiotherapy treatment with a total dose of 1320 cGy fractionated over four days, two cases were considered. In the first, repopulation and sub-lethal repair in the different cell populations were not taken into account (model I). In this case, the proposed modified Jones model could be solved in a closed form. In the second, repopulation and sub-lethal repair were considered, and thus, we found that the modified Jones model could only be solved numerically (model II). After a numerical and graphical analysis, we concluded that the expected results of TBI treatment can be mimicked using model I. Model II can also be used, provided the cancer repopulation factor is less than the normal cell repopulation factor. However, model I has fewer free parameters compared to model II. In either case, our results are in agreement that the standard dose fractionated over four days, with two irradiations each day, provides the needed conditioning treatment prior to bone marrow transplant. Partial support for this research was supplied by the NIH-RISE program, the LSAMP-Puerto Rico program, and the University of Puerto Rico-Humacao.
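The compartment structure of model I (no repopulation or sub-lethal repair) can be sketched as a discrete per-fraction update; the linear-quadratic parameters, mutation probability, and initial cell counts below are assumed values for illustration, not those of the modified Jones model:

```python
import numpy as np

# Compartments: [normal, mutant_1, ..., mutant_n, cancerous]  (n = 2 here)
n = 2
cells = np.array([1e9, 1e4, 1e2, 1e6], dtype=float)

alpha, beta = 0.3, 0.03        # assumed LQ parameters (Gy^-1, Gy^-2)
mu = 1e-7                      # assumed per-fraction mutation probability
d = 1.65                       # dose per fraction: 1320 cGy over 8 fractions

def fraction(cells):
    s = np.exp(-alpha * d - beta * d * d)   # LQ survival; no repair/repopulation
    cells = cells * s
    moved = cells[:-1] * mu                 # mutation cascade: advance one compartment
    cells[:-1] -= moved
    cells[1:] += moved
    return cells

for _ in range(8):             # two fractions per day over four days
    cells = fraction(cells)
print(f"cancerous cells remaining: {cells[-1]:.3g}")
```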
Recreation of architectural structures using procedural modeling based on volumes
Directory of Open Access Journals (Sweden)
Santiago Barroso Juan
2013-11-01
While the procedural modeling of buildings and other architectural structures has evolved very significantly in recent years, there is a noticeable absence of high-level tools that allow a designer, an artist or a historian to create important buildings or architectonic structures of a particular city. In this paper we present a tool for creating buildings in a simple and clear way, following rules that use the language and methodology of the buildings' own creation, while hiding from the user the algorithmic details of the model's creation.
Procedural Modeling for Rapid-Prototyping of Multiple Building Phases
Saldana, M.; Johanson, C.
2013-02-01
RomeLab is a multidisciplinary working group at UCLA that uses the city of Rome as a laboratory for the exploration of research approaches and dissemination practices centered on the intersection of space and time in antiquity. In this paper we present a multiplatform workflow for the rapid-prototyping of historical cityscapes through the use of geographic information systems, procedural modeling, and interactive game development. Our workflow begins by aggregating archaeological data in a GIS database. Next, 3D building models are generated from the ArcMap shapefiles in Esri CityEngine using procedural modeling techniques. A GIS-based terrain model is also adjusted in CityEngine to fit the building elevations. Finally, the terrain and city models are combined in Unity, a game engine which we used to produce web-based interactive environments which are linked to the GIS data using keyhole markup language (KML). The goal of our workflow is to demonstrate that knowledge generated within a first-person virtual world experience can inform the evaluation of data derived from textual and archaeological sources, and vice versa.
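The extrude-and-split logic at the heart of procedural building generation (as done in CityEngine's shape grammars) can be sketched in a few rules; the dimensions, floor height, and tiling width below are hypothetical:

```python
# Minimal shape-grammar sketch: footprint -> extrude -> split into floors -> facade tiles.
def extrude(footprint_wh, height):
    return {"w": footprint_wh[0], "d": footprint_wh[1], "h": height}

def split_floors(mass, floor_h=3.0):
    n = int(mass["h"] // floor_h)
    return [{"w": mass["w"], "d": mass["d"], "h": floor_h, "level": i} for i in range(n)]

def tile_facade(floor, tile_w=1.5):
    return int(floor["w"] // tile_w)   # window tiles along the street facade

mass = extrude((12.0, 8.0), 19.0)      # hypothetical 12 m x 8 m footprint, 19 m tall
floors = split_floors(mass)
windows = sum(tile_facade(f) for f in floors)
print(len(floors), windows)            # → 6 48
```

Real CGA rules in CityEngine work the same way conceptually: each rule consumes a shape and emits refined sub-shapes until terminal geometry is produced.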
Davies, H. C.; Turner, R. E.
1977-01-01
A dynamical relaxation technique for updating prediction models is analyzed with the help of the linear and nonlinear barotropic primitive equations. It is assumed that a complete four-dimensional time history of some prescribed subset of the meteorological variables is known. The rate of adaptation of the flow variables toward the true state is determined for a linearized f-model, and for mid-latitude and equatorial beta-plane models. The results of the analysis are corroborated by numerical experiments with the nonlinear shallow-water equations.
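Dynamical relaxation (Newtonian nudging) adds a term proportional to the mismatch between the model state and the prescribed data, so the model is drawn toward the true state. The sketch below nudges a simple linear oscillator rather than the barotropic primitive equations; the dynamics, time step, and relaxation constant are illustrative:

```python
import numpy as np

def step(x, truth, dt=0.01, tau=0.5, nudge=True):
    # Oscillator dynamics plus Newtonian relaxation toward the "observed" truth.
    f = np.array([x[1], -x[0]])
    if nudge:
        f = f - (x - truth) / tau
    return x + dt * f

truth = np.array([1.0, 0.0])
x = np.array([3.0, -2.0])              # badly initialized model state
for _ in range(2000):
    # The "true" state follows the same free dynamics.
    truth = truth + 0.01 * np.array([truth[1], -truth[0]])
    x = step(x, truth)
err = float(np.linalg.norm(x - truth))
print(f"error after relaxation: {err:.3f}")
```

The rate of adaptation is governed by the relaxation time tau, analogous to the adaptation rates derived analytically in the paper.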
Testing the prognostic accuracy of the updated pediatric sepsis biomarker risk model.
Directory of Open Access Journals (Sweden)
Hector R Wong
BACKGROUND: We previously derived and validated a risk model to estimate mortality probability in children with septic shock (PERSEVERE; PEdiatRic SEpsis biomarkEr Risk modEl). PERSEVERE uses five biomarkers and age to estimate mortality probability. After the initial derivation and validation of PERSEVERE, we combined the derivation and validation cohorts (n = 355) and updated PERSEVERE. An important step in the development of updated risk models is to test their accuracy using an independent test cohort. OBJECTIVE: To test the prognostic accuracy of the updated version of PERSEVERE in an independent test cohort. METHODS: Study subjects were recruited from multiple pediatric intensive care units in the United States. Biomarkers were measured in 182 pediatric subjects with septic shock using serum samples obtained during the first 24 hours of presentation. The accuracy of the PERSEVERE 28-day mortality risk estimate was tested using diagnostic test statistics, and the net reclassification improvement (NRI) was used to test whether PERSEVERE adds information to a physiology-based scoring system. RESULTS: Mortality in the test cohort was 13.2%. Using a risk cut-off of 2.5%, the sensitivity of PERSEVERE for mortality was 83% (95% CI 62-95), specificity was 75% (68-82), positive predictive value was 34% (22-47), and negative predictive value was 97% (91-99). The area under the receiver operating characteristic curve was 0.81 (0.70-0.92). The false positive subjects had a greater degree of organ failure burden and longer intensive care unit length of stay, compared to the true negative subjects. When adding PERSEVERE to a physiology-based scoring system, the net reclassification improvement was 0.91 (0.47-1.35; p<0.001). CONCLUSIONS: The updated version of PERSEVERE estimates mortality probability reliably in a heterogeneous test cohort of children with septic shock and provides information over and above a physiology-based scoring system.
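The diagnostic test statistics reported above follow directly from a 2x2 table. The counts below are back-calculated approximately from the reported cohort size (n = 182), mortality (13.2%), and accuracy figures, so they are an illustration rather than the study's actual table:

```python
def diagnostics(tp, fp, fn, tn):
    # Standard 2x2 diagnostic test statistics.
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# Approximate counts implied by n = 182 with 24 deaths (13.2% mortality).
stats = diagnostics(tp=20, fp=39, fn=4, tn=119)
for k, v in stats.items():
    print(f"{k}: {v:.0%}")
```

These counts reproduce the reported 83% sensitivity, 75% specificity, 34% PPV and 97% NPV to rounding.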
Belcher, Wayne R.; Sweetkind, Donald S.; Faunt, Claudia C.; Pavelko, Michael T.; Hill, Mary C.
2017-01-19
Since the original publication of the Death Valley regional groundwater flow system (DVRFS) numerical model in 2004, more information on the regional groundwater flow system in the form of new data and interpretations has been compiled. Cooperators such as the Bureau of Land Management, National Park Service, U.S. Fish and Wildlife Service, the Department of Energy, and Nye County, Nevada, recognized a need to update the existing regional numerical model to maintain its viability as a groundwater management tool for regional stakeholders. The existing DVRFS numerical flow model was converted to MODFLOW-2005, updated with the latest available data, and recalibrated. Five main data sets were revised: (1) recharge from precipitation varying in time and space, (2) pumping data, (3) water-level observations, (4) an updated regional potentiometric map, and (5) a revision to the digital hydrogeologic framework model. The resulting DVRFS version 2.0 (v. 2.0) numerical flow model simulates groundwater flow conditions for the Death Valley region from 1913 to 2003 to correspond to the time frame for the most recently published (2008) water-use data. The DVRFS v. 2.0 model was calibrated by using the Tikhonov regularization functionality in the parameter estimation and predictive uncertainty software PEST. In order to assess the accuracy of the numerical flow model in simulating regional flow, the fit of simulated to target values (consisting of hydraulic heads and flows, including evapotranspiration and spring discharge, flow across the model boundary, and interbasin flow; the regional water budget; values of parameter estimates; and sensitivities) was evaluated. This evaluation showed that DVRFS v. 2.0 simulates conditions similar to DVRFS v. 1.0. Comparisons of the target values with simulated values also indicate that they match reasonably well, and in some cases (boundary flows and discharge) significantly better than in DVRFS v. 1.0.
The four-dimensional data assimilation (FDDA) technique in the Weather Research and Forecasting (WRF) meteorological model has recently undergone an important update from the original version. Previous evaluation results have demonstrated that the updated FDDA approach in WRF pr...
Automatically updating predictive modeling workflows support decision-making in drug design.
Muegge, Ingo; Bentzien, Jörg; Mukherjee, Prasenjit; Hughes, Robert O
2016-09-01
Using predictive models for early decision-making in drug discovery has become standard practice. We suggest that model building needs to be automated with minimum input and low technical maintenance requirements. Models perform best when tailored to answering specific compound optimization related questions. If qualitative answers are required, 2-bin classification models are preferred. Integrating predictive modeling results with structural information stimulates better decision making. For in silico models supporting rapid structure-activity relationship cycles the performance deteriorates within weeks. Frequent automated updates of predictive models ensure best predictions. Consensus between multiple modeling approaches increases the prediction confidence. Combining qualified and nonqualified data optimally uses all available information. Dose predictions provide a holistic alternative to multiple individual property predictions for reaching complex decisions.
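Two of the points above, 2-bin classification and consensus between multiple modeling approaches, can be sketched with synthetic data and deliberately simple one-feature threshold models; everything below (data, models, the agreement-based confidence) is a hypothetical illustration, not the authors' workflow:

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.standard_normal((200, 5))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)    # synthetic "active/inactive" label

def fit_threshold_model(X, y, feature):
    # One-feature 2-bin classifier: pick the cutoff that best separates training data.
    cuts = np.quantile(X[:, feature], np.linspace(0.1, 0.9, 17))
    best = max(cuts, key=lambda c: np.mean((X[:, feature] > c) == y))
    return lambda Z: (Z[:, feature] > best).astype(int)

models = [fit_threshold_model(X[:150], y[:150], f) for f in range(3)]
votes = np.stack([m(X[150:]) for m in models])
pred = (votes.mean(axis=0) >= 0.5).astype(int)       # consensus prediction
confidence = np.abs(votes.mean(axis=0) - 0.5) * 2    # agreement in [0, 1]
acc = float((pred == y[150:]).mean())
print("hold-out accuracy:", acc)
```

The agreement score is one simple way to quantify the "prediction confidence" that consensus modeling provides.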
Update on Small Modular Reactors Dynamic System Modeling Tool: Web Application
Energy Technology Data Exchange (ETDEWEB)
Hale, Richard Edward [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Cetiner, Sacit M. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Fugate, David L. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Batteh, John J [Modelon Corporation (Sweden); Tiller, Michael M. [Xogeny Corporation (United States)
2015-01-01
Previous reports focused on the development of component and system models as well as end-to-end system models using Modelica and Dymola for two advanced reactor architectures: (1) Advanced Liquid Metal Reactor and (2) fluoride high-temperature reactor (FHR). The focus of this report is the release of the first beta version of the web-based application for model use and collaboration, as well as an update on the FHR model. The web-based application allows novice users to configure end-to-end system models from preconfigured choices to investigate the instrumentation and controls implications of these designs and allows for the collaborative development of individual component models that can be benchmarked against test systems for potential inclusion in the model library. A description of this application is provided along with examples of its use and a listing and discussion of all the models that currently exist in the library.
Energy Technology Data Exchange (ETDEWEB)
Chen, Yun; Geng, Chao-Qiang [Department of Physics, National Tsing Hua University, Hsinchu, 300 Taiwan (China); Cao, Shuo; Huang, Yu-Mei; Zhu, Zong-Hong, E-mail: chenyun@bao.ac.cn, E-mail: geng@phys.nthu.edu.tw, E-mail: caoshuo@bnu.edu.cn, E-mail: huangymei@gmail.com, E-mail: zhuzh@bnu.edu.cn [Department of Astronomy, Beijing Normal University, Beijing 100875 (China)
2015-02-01
We constrain the scalar field dark energy model with an inverse power-law potential, i.e., V(φ) ∝ φ^(−α) (α > 0), from a set of recent cosmological observations by compiling an updated sample of Hubble parameter measurements including 30 independent data points. Our results show that the constraining power of the updated sample of H(z) data with the HST prior on H_0 is stronger than those of the SCP Union2 and Union2.1 compilations. A recent sample of strong gravitational lensing systems is also adopted to confine the model, even though the results are not significant. A joint analysis of the strong gravitational lensing data with the more restrictive updated Hubble parameter measurements and the Type Ia supernovae data from SCP Union2 indicates that the recent observations still cannot distinguish whether dark energy is a time-independent cosmological constant or a time-varying dynamical component.
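The mechanics of constraining an expansion history with H(z) data reduce to a chi-square comparison. The sketch below fits flat ΛCDM, standing in for the quintessence solver, to three illustrative H(z) points (real analyses use the full ~30-point sample and proper covariances):

```python
import numpy as np

# Three illustrative H(z) points (km/s/Mpc), not the paper's compilation.
z = np.array([0.17, 0.90, 1.75])
Hobs = np.array([83.0, 117.0, 202.0])
sigma = np.array([8.0, 23.0, 40.0])

def H_model(z, H0=70.0, Om=0.3):
    # Flat LambdaCDM expansion rate; a quintessence model would replace this
    # with the solution of the scalar field equation for V ~ phi^(-alpha).
    return H0 * np.sqrt(Om * (1 + z) ** 3 + (1 - Om))

def chi2(H0, Om):
    return float(np.sum(((H_model(z, H0, Om) - Hobs) / sigma) ** 2))

# Crude grid scan over the model parameters.
grid = [(H0, Om) for H0 in np.linspace(60, 80, 41) for Om in np.linspace(0.1, 0.5, 41)]
best = min(grid, key=lambda p: chi2(*p))
print("best-fit (H0, Om):", best, "chi2:", round(chi2(*best), 2))
```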
Propagation Modeling of Food Safety Crisis Information Update Based on the Multi-agent System
Directory of Open Access Journals (Sweden)
Meihong Wu
2015-08-01
This study proposes a new multi-agent system framework based on epistemic default complex adaptive theory and uses agent-based simulation, modeling the information-updating process, to study food safety crisis information dissemination. We then explore the interaction effects among agents in food safety crisis information dissemination in the current environment, revealing in particular how the government agent, food company agent and network media agent influence users' confidence in food safety. The information-updating process describes how to guide the normal spread of food safety crisis information in public opinion in the current environment and how to enhance average users' confidence in food quality and safety.
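A minimal agent-based sketch of the confidence-update idea follows; the agent types, credibility values, and the proportional update rule are hypothetical choices, not the paper's model:

```python
class User:
    def __init__(self):
        self.confidence = 0.5    # confidence in food safety, in [0, 1]

    def receive(self, credibility, reassuring):
        # Updated information shifts confidence toward the message,
        # in proportion to the source's credibility.
        target = 1.0 if reassuring else 0.0
        self.confidence += credibility * (target - self.confidence) * 0.3

users = [User() for _ in range(100)]
sources = [("government", 0.8, True),   # hypothetical credibilities and messages
           ("media", 0.6, False),
           ("company", 0.4, True)]

for name, cred, reassuring in sources:  # one update round per source agent
    for u in users:
        u.receive(cred, reassuring)

avg = sum(u.confidence for u in users) / len(users)
print(f"average user confidence: {avg:.3f}")  # → 0.567
```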
Lock, Alan; Wallschläger, Dirk; McMurdo, Colin; Tyler, Laura; Belzile, Nelson; Spiers, Graeme
2016-12-01
A sequential extraction procedure (SEP) for the speciation analysis of As(III) and As(V) in oxic and suboxic soils and sediments was validated using a natural lake sediment and three certified reference materials, as well as spike recoveries of As(III) and As(V). Many of the extraction steps have been previously validated, making the procedure useful for comparisons to similar previous SEP studies. The novel aspect of this research is the validation of the SEP's ability to maintain As(III) and As(V) species. The proposed five-step extraction procedure includes the extraction agents (NH4)2SO4, NH4H2PO4, H3PO4 + NH2OH·HCl, oxalate + ascorbic acid (heated), and HNO3 + HCl + HF, targeting operationally defined easily exchangeable, strongly sorbed, amorphous Fe oxide bound, crystalline Fe oxide bound, and residual As fractions, respectively. The third extraction step, H3PO4 + NH2OH·HCl, has not been previously validated for fraction selectivity. We present evidence for this extraction step to target As complexed with amorphous Fe oxides when used in the SEP proposed here. All solutions were analyzed by ICP-MS. The greatest concentrations of As were extracted from the amorphous Fe oxide fraction, and the dominant species was As(V). Lake sediment materials were found to have higher As(III) concentrations than the soil materials. Because different soils/sediments have different chemical characteristics, maintenance of As species during extractions must be validated for specific soil/sediment types using spiking experiments.
Su, Chiu-Wen; Ming-Fang Yen, Amy; Lai, Hongmin; Chen, Hsiu-Hsi; Chen, Sam Li-Sheng
2017-07-28
Background The accuracy of a prediction model for periodontal disease using the community periodontal index (CPI) has been assessed using the area under the receiver operating characteristic (AUROC) curve, but how the uncalibrated CPI, as measured by general dentists trained by periodontists in a large epidemiological study, affects the performance of a prediction model built from it has not yet been researched. Methods We conducted a two-stage design, first proposing a validation study to calibrate the CPI between a senior periodontal specialist and the trained general dentists who measured CPIs in the main study of a nationwide survey. A Bayesian hierarchical logistic regression model was applied to estimate the non-updated and updated clinical weights used for building up risk scores. How the calibrated CPI affected the performance of the updated prediction model was quantified by comparing the AUROC curves of the original and the updated model. Results The estimates regarding the calibration of CPI obtained from the validation study were 66% and 85% for sensitivity and specificity, respectively. After updating, the clinical weights of each predictor were inflated, and the risk score for the highest risk category was elevated from 434 to 630. This update improved the AUROC performance of the two corresponding prediction models from 62.6% (95% CI: 61.7%-63.6%) for the non-updated model to 68.9% (95% CI: 68.0%-69.6%) for the updated one, a statistically significant difference, for periodontal disease as measured by the calibrated CPI derived from a large epidemiological survey.
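The AUROC values being compared above can be computed as a rank statistic: the probability that a randomly chosen case outranks a randomly chosen non-case. The risk scores and labels below are hypothetical, chosen only to show how recalibrated (inflated) weights can raise the AUROC:

```python
def auroc(scores, labels):
    # Probability that a random positive outranks a random negative (ties count half).
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [1, 1, 1, 0, 0, 0, 0, 0]
original = [430, 300, 520, 300, 410, 260, 350, 120]   # hypothetical risk scores
updated  = [630, 480, 610, 310, 420, 270, 340, 150]   # after recalibrating weights

print(auroc(original, labels), auroc(updated, labels))
```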
Update of the Computing Models of the WLCG and the LHC Experiments
Bird, I; Carminati, F; Cattaneo, M; Clarke, P; Fisk, I; Girone, M; Harvey, J; Kersevan, B; Mato, P; Mount, R; Panzer-Steindel, B; CERN. Geneva. The LHC experiments Committee; LHCC
2014-01-01
In preparation for the data collection and analysis in LHC Run 2, the LHCC and Computing Scrutiny Group of the RRB requested a detailed review of the current computing models of the LHC experiments and a consolidated plan for the future computing needs. This document represents the status of the work of the WLCG collaboration and the four LHC experiments in updating the computing models to reflect the advances in understanding of the most effective ways to use the distributed computing and storage resources, based upon the experience gained during LHC Run 1.
a Procedural Solution to Model Roman Masonry Structures
Cappellini, V.; Saleri, R.; Stefani, C.; Nony, N.; De Luca, L.
2013-07-01
The paper will describe a new approach based on the development of a procedural modelling methodology for archaeological data representation. This is a custom-designed solution based on the recognition of the rules belonging to the construction methods used in Roman times. We have conceived a tool for 3D reconstruction of masonry structures starting from photogrammetric surveying. Our protocol considers different steps. Firstly, we have focused on the classification of opus based on the basic interconnections that can lead to a descriptive system used for their unequivocal identification and design. Secondly, we have chosen an automatic, accurate, flexible and open-source photogrammetric pipeline named Pastis Apero Micmac (PAM), developed by IGN (Paris). We have employed it to generate ortho-images from non-oriented images, using a user-friendly interface implemented by CNRS Marseille (France). Thirdly, the masonry elements are created in a parametric and interactive way, and finally they are adapted to the photogrammetric data. The presented application, currently under construction, is developed with an open-source programming language called Processing, useful for visual, animated or static, 2D or 3D, interactive creations. Using this computer language, a Java environment has been developed. Therefore, even if the procedural modelling reveals an accuracy level inferior to the one obtained by manual modelling (brick by brick), this method can be useful when taking into account the static evaluation of buildings (requiring quantitative aspects) and metric measures for restoration purposes.
Neuroadaptation in nicotine addiction: update on the sensitization-homeostasis model.
DiFranza, Joseph R; Huang, Wei; King, Jean
2012-10-17
The role of neuronal plasticity in supporting the addictive state has generated much research and some conceptual theories. One such theory, the sensitization-homeostasis (SH) model, postulates that nicotine suppresses craving circuits, and this triggers the development of homeostatic adaptations that autonomously support craving. Based on clinical studies, the SH model predicts the existence of three distinct forms of neuroplasticity that are responsible for withdrawal, tolerance and the resolution of withdrawal. Over the past decade, many controversial aspects of the SH model have become well established by the literature, while some details have been disproven. Here we update the model based on new studies showing that nicotine dependence develops through a set sequence of symptoms in all smokers, and that the latency to withdrawal, the time it takes for withdrawal symptoms to appear during abstinence, is initially very long but shortens by several orders of magnitude over time. We conclude by outlining directions for future research based on the updated model, and commenting on how new experimental studies can gain from the framework put forth in the SH model.
Neuroadaptation in Nicotine Addiction: Update on the Sensitization-Homeostasis Model
Directory of Open Access Journals (Sweden)
Jean King
2012-10-01
Full Text Available The role of neuronal plasticity in supporting the addictive state has generated much research and some conceptual theories. One such theory, the sensitization-homeostasis (SH) model, postulates that nicotine suppresses craving circuits, and this triggers the development of homeostatic adaptations that autonomously support craving. Based on clinical studies, the SH model predicts the existence of three distinct forms of neuroplasticity that are responsible for withdrawal, tolerance and the resolution of withdrawal. Over the past decade, many controversial aspects of the SH model have become well established by the literature, while some details have been disproven. Here we update the model based on new studies showing that nicotine dependence develops through a set sequence of symptoms in all smokers, and that the latency to withdrawal, the time it takes for withdrawal symptoms to appear during abstinence, is initially very long but shortens by several orders of magnitude over time. We conclude by outlining directions for future research based on the updated model, and commenting on how new experimental studies can gain from the framework put forth in the SH model.
Current Developments in Dementia Risk Prediction Modelling: An Updated Systematic Review.
Directory of Open Access Journals (Sweden)
Eugene Y H Tang
Full Text Available Accurate identification of individuals at high risk of dementia influences clinical care, inclusion criteria for clinical trials and development of preventative strategies. Numerous models have been developed for predicting dementia. To evaluate these models we undertook a systematic review in 2010 and updated this in 2014 due to the increase in research published in this area. Here we include a critique of the variables selected for inclusion and an assessment of model prognostic performance.Our previous systematic review was updated with a search from January 2009 to March 2014 in electronic databases (MEDLINE, Embase, Scopus, Web of Science. Articles examining risk of dementia in non-demented individuals and including measures of sensitivity, specificity or the area under the curve (AUC or c-statistic were included.In total, 1,234 articles were identified from the search; 21 articles met inclusion criteria. New developments in dementia risk prediction include the testing of non-APOE genes, use of non-traditional dementia risk factors, incorporation of diet, physical function and ethnicity, and model development in specific subgroups of the population including individuals with diabetes and those with different educational levels. Four models have been externally validated. Three studies considered time or cost implications of computing the model.There is no one model that is recommended for dementia risk prediction in population-based settings. Further, it is unlikely that one model will fit all. Consideration of the optimal features of new models should focus on methodology (setting/sample, model development and testing in a replication cohort and the acceptability and cost of attaining the risk variables included in the prediction score. Further work is required to validate existing models or develop new ones in different populations as well as determine the ethical implications of dementia risk prediction, before applying the particular
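Since the review evaluates models by sensitivity, specificity and the AUC/c-statistic, it may help to recall how the c-statistic is computed from predicted risks and observed outcomes. A minimal sketch with hypothetical risk scores (not data from any reviewed model); the rank-based AUC coincides with the c-statistic for a binary outcome:

```python
def c_statistic(risks, outcomes):
    """Rank-based AUC/c-statistic: the probability that a randomly chosen
    case (outcome = 1) receives a higher predicted risk than a randomly
    chosen non-case (outcome = 0); ties count one half."""
    pairs = concordant = 0.0
    for r1, o1 in zip(risks, outcomes):
        for r2, o2 in zip(risks, outcomes):
            if o1 == 1 and o2 == 0:
                pairs += 1
                if r1 > r2:
                    concordant += 1
                elif r1 == r2:
                    concordant += 0.5
    return concordant / pairs

# hypothetical predicted dementia risks and observed outcomes
risks = [0.9, 0.8, 0.3, 0.2, 0.6]
outcomes = [1, 1, 0, 0, 1]
print(c_statistic(risks, outcomes))  # perfectly separated cases -> 1.0
```

A c-statistic of 0.5 corresponds to a model no better than chance, which is why external validation on a replication cohort matters before deployment.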
Directory of Open Access Journals (Sweden)
Marcin Luczak
2014-01-01
Full Text Available This paper presents selected results and aspects of multidisciplinary and interdisciplinary research oriented toward the experimental and numerical study of the structural dynamics of a bend-twist coupled full-scale section of a wind turbine blade structure. The main goal of the conducted research is to validate the finite element model of the modified wind turbine blade section mounted in the flexible support structure against the experimental results. Bend-twist coupling was implemented by adding angled unidirectional layers on the suction and pressure sides of the blade. Dynamic tests and simulations were performed on a section of a full-scale wind turbine blade provided by Vestas Wind Systems A/S. The numerical results are compared to the experimental measurements and the discrepancies are assessed by natural frequency difference and the modal assurance criterion. Based on a sensitivity analysis, a set of model parameters was selected for the model updating process. Design of experiments and the response surface method were implemented to find values of model parameters yielding results closest to the experimental ones. The updated finite element model produces results more consistent with the measurement outcomes.
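The modal assurance criterion used to assess such discrepancies can be sketched directly: it is the normalized squared scalar product of two mode-shape vectors, equal to 1 for perfectly correlated shapes and 0 for orthogonal ones. The mode shapes below are hypothetical, not the blade-section data:

```python
import numpy as np

def mac(phi_a, phi_e):
    """Modal Assurance Criterion between an analytical mode shape phi_a
    and an experimental mode shape phi_e; returns a value in [0, 1]."""
    numerator = np.abs(phi_a @ phi_e) ** 2
    return numerator / ((phi_a @ phi_a) * (phi_e @ phi_e))

phi_fe = np.array([1.0, 0.80, 0.30])    # hypothetical FE mode shape
phi_test = np.array([1.0, 0.78, 0.33])  # hypothetical measured mode shape
print(mac(phi_fe, phi_test))  # close to 1 for well-correlated modes
```

In model updating, MAC values are typically collected in a matrix over all mode pairs; off-diagonal values near zero confirm that measured and simulated modes have been paired correctly.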
A Review of the Updated Pharmacophore for the Alpha 5 GABA(A) Benzodiazepine Receptor Model
Directory of Open Access Journals (Sweden)
Terry Clayton
2015-01-01
Full Text Available An updated model of the GABA(A) benzodiazepine receptor pharmacophore of the α5-BzR/GABA(A) subtype has been constructed, prompted by the synthesis of subtype-selective ligands in light of recent developments in ligand synthesis, behavioral studies, and molecular modeling studies of the binding site itself. A number of BzR/GABA(A) α5 subtype selective compounds were synthesized, notably the α5-subtype selective inverse agonist PWZ-029 (1), which is active in enhancing cognition in both rodents and primates. In addition, a chiral positive allosteric modulator (PAM), SH-053-2′F-R-CH3 (2), has been shown to reverse the deleterious effects in the MAM model of schizophrenia as well as alleviate constriction in airway smooth muscle. Presented here is an updated model of the pharmacophore for α5β2γ2 Bz/GABA(A) receptors, including a rendering of PWZ-029 docked within the α5-binding pocket showing specific interactions of the molecule with the receptor. Differences in the included volume as compared to α1β2γ2, α2β2γ2, and α3β2γ2 will be illustrated for clarity. These new models enhance the ability to understand structural characteristics of ligands which act as agonists, antagonists, or inverse agonists at the Bz BS of GABA(A) receptors.
Experimental Update of the Overtopping Model Used for the Wave Dragon Wave Energy Converter
Energy Technology Data Exchange (ETDEWEB)
Parmeggiani, Stefano [Wave Dragon Ltd., London (United Kingdom); Kofoed, Jens Peter [Aalborg Univ. (Denmark). Department of Civil Engineering; Friis-Madsen, Erik [Wave Dragon Ltd., London (United Kingdom)
2013-04-15
An overtopping model specifically suited to Wave Dragon is needed in order to improve the reliability of its performance estimates. The model should account for all relevant physical processes that affect overtopping and be flexible enough to adapt to any local conditions and device configuration. An experimental investigation is carried out to update an existing formulation suited for 2D draft-limited, low-crested structures, in order to include the effects on the overtopping flow of the wave steepness, the 3D geometry of Wave Dragon, the wing reflectors, the device motions and the non-rigid connection between platform and reflectors. The study is carried out in four phases, each of them specifically targeted at quantifying one of these effects through a sensitivity analysis and at modeling it through custom-made parameters. These parameters depend on features of the wave or the device configuration, all of which can be measured in real time. Instead of using new fitting coefficients, this approach allows a broader applicability of the model beyond the Wave Dragon case, to any overtopping WEC or structure within the range of tested conditions. The reliability of overtopping predictions for Wave Dragon is increased, as the updated model offers improved accuracy and precision with respect to the former version.
Experimental Update of the Overtopping Model Used for the Wave Dragon Wave Energy Converter
Directory of Open Access Journals (Sweden)
Erik Friis-Madsen
2013-04-01
Full Text Available An overtopping model specifically suited to Wave Dragon is needed in order to improve the reliability of its performance estimates. The model should account for all relevant physical processes that affect overtopping and be flexible enough to adapt to any local conditions and device configuration. An experimental investigation is carried out to update an existing formulation suited for 2D draft-limited, low-crested structures, in order to include the effects on the overtopping flow of the wave steepness, the 3D geometry of Wave Dragon, the wing reflectors, the device motions and the non-rigid connection between platform and reflectors. The study is carried out in four phases, each of them specifically targeted at quantifying one of these effects through a sensitivity analysis and at modeling it through custom-made parameters. These parameters depend on features of the wave or the device configuration, all of which can be measured in real time. Instead of using new fitting coefficients, this approach allows a broader applicability of the model beyond the Wave Dragon case, to any overtopping WEC or structure within the range of tested conditions. The reliability of overtopping predictions for Wave Dragon is increased, as the updated model offers improved accuracy and precision with respect to the former version.
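The structure of such an overtopping model can be illustrated with a generic exponential overtopping formula for low-crested structures, extended by multiplicative correction factors of the kind introduced in the four phases. All coefficient values below are hypothetical illustrations, not the fitted Wave Dragon parameters:

```python
import math

G = 9.81  # gravitational acceleration [m/s^2]

def overtopping_discharge(h_m0, r_c, a=0.2, b=2.6, corrections=()):
    """Generic overtopping model for a low-crested structure:
    q = a * exp(-b * Rc/Hm0) * sqrt(g * Hm0^3), scaled by multiplicative
    correction factors (e.g. for steepness, 3D geometry, reflectors,
    motions). Coefficients a and b here are hypothetical."""
    q = a * math.exp(-b * r_c / h_m0) * math.sqrt(G * h_m0 ** 3)
    for factor in corrections:
        q *= factor
    return q  # mean discharge per metre of crest [m^3/s/m]

# a lower crest freeboard Rc yields a larger overtopping discharge
print(overtopping_discharge(h_m0=3.0, r_c=1.0) >
      overtopping_discharge(h_m0=3.0, r_c=2.0))  # True
```

Keeping the corrections as separate multiplicative factors, each tied to a measurable quantity, is what lets the model generalize beyond one device instead of burying the effects in refitted coefficients.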
DEFF Research Database (Denmark)
Finlay, Chris; Olsen, Nils; Tøffner-Clausen, Lars
th order spline representation with knot points spaced at 0.5 year intervals. The resulting field model is able to consistently fit data from six independent low Earth orbit satellites: Oersted, CHAMP, SAC-C and the three Swarm satellites. As an example, we present comparisons of the excellent model......Ten months of data from ESA's Swarm mission, together with recent ground observatory monthly means, are used to update the CHAOS series of geomagnetic field models with a focus on time-changes of the core field. As for previous CHAOS field models quiet-time, night-side, data selection criteria...
Lumping procedure for a kinetic model of catalytic naphtha reforming
Directory of Open Access Journals (Sweden)
H. M. Arani
2009-12-01
Full Text Available A lumping procedure is developed for obtaining kinetic and thermodynamic parameters of catalytic naphtha reforming. All kinetic and deactivation parameters are estimated from industrial data and thermodynamic parameters are calculated from derived mathematical expressions. The proposed model contains 17 lumps that include the C6 to C8+ hydrocarbon range and 15 reaction pathways. Hougen-Watson Langmuir-Hinshelwood type reaction rate expressions are used for kinetic simulation of catalytic reactions. The kinetic parameters are benchmarked with several sets of plant data and estimated by the SQP optimization method. After calculation of deactivation and kinetic parameters, plant data are compared with model predictions and only minor deviations between experimental and calculated data are generally observed.
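The SQP-based estimation of kinetic parameters from plant data can be sketched on a single hypothetical first-order lump (the paper's model has 17 lumps and 15 pathways); `scipy.optimize.minimize` with the SLSQP method stands in for the SQP optimizer, and the "plant" data are synthetic:

```python
import numpy as np
from scipy.optimize import minimize

def concentration(k, t):
    """First-order lump A -> B with C_A0 = 1: C_A(t) = exp(-k t)."""
    return np.exp(-k * t)

t_data = np.array([0.0, 0.5, 1.0, 2.0, 4.0])
c_data = concentration(0.7, t_data)  # synthetic "plant" measurements

def sse(params):
    """Sum of squared deviations between model and plant data."""
    return np.sum((concentration(params[0], t_data) - c_data) ** 2)

# SQP-type estimation of the rate constant, constrained to be positive
res = minimize(sse, x0=[0.1], method="SLSQP", bounds=[(1e-6, None)])
print(res.x[0])  # estimate of k, close to the true value 0.7
```

In the full lumped model the decision vector holds all rate and deactivation parameters, and the same bounded least-squares objective is evaluated against several sets of plant data simultaneously.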
Procedures and Methods of Digital Modeling in Representation Didactics
La Mantia, M.
2011-09-01
At the Bachelor degree course in Engineering/Architecture of the University "La Sapienza" of Rome, the courses of Design and Survey cover not only the learning of methods of representation and the application of descriptive geometry and survey, in order to expand the student's vision and spatial conception, but also pay particular attention to the use of information technology for the preparation of design and survey drawings, achieving their goals through an educational path of "learning techniques, procedures and methods of modeling architectural structures". The fields of application involved two different educational areas, analysis and survey, both ranging from the acquisition of the metric data (design or survey) to the development of the three-dimensional virtual model.
Ebrahimian, Hamed; Astroza, Rodrigo; Conte, Joel P.; de Callafon, Raymond A.
2017-02-01
This paper presents a framework for structural health monitoring (SHM) and damage identification of civil structures. This framework integrates advanced mechanics-based nonlinear finite element (FE) modeling and analysis techniques with a batch Bayesian estimation approach to estimate time-invariant model parameters used in the FE model of the structure of interest. The framework uses input excitation and dynamic response of the structure and updates a nonlinear FE model of the structure to minimize the discrepancies between predicted and measured response time histories. The updated FE model can then be interrogated to detect, localize, classify, and quantify the state of damage and predict the remaining useful life of the structure. As opposed to recursive estimation methods, in the batch Bayesian estimation approach, the entire time history of the input excitation and output response of the structure are used as a batch of data to estimate the FE model parameters through a number of iterations. In the case of non-informative prior, the batch Bayesian method leads to an extended maximum likelihood (ML) estimation method to estimate jointly time-invariant model parameters and the measurement noise amplitude. The extended ML estimation problem is solved efficiently using a gradient-based interior-point optimization algorithm. Gradient-based optimization algorithms require the FE response sensitivities with respect to the model parameters to be identified. The FE response sensitivities are computed accurately and efficiently using the direct differentiation method (DDM). The estimation uncertainties are evaluated based on the Cramer-Rao lower bound (CRLB) theorem by computing the exact Fisher Information matrix using the FE response sensitivities with respect to the model parameters. The accuracy of the proposed uncertainty quantification approach is verified using a sampling approach based on the unscented transformation. Two validation studies, based on realistic
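The extended ML estimation of model parameters jointly with the measurement noise amplitude can be illustrated on a toy response model: under Gaussian noise, the negative log-likelihood is minimized over both the parameters and (log) noise level with a gradient-based optimizer. The response function and data below are hypothetical stand-ins for the FE model and measured time histories:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
t = np.linspace(0.0, 5.0, 200)

def response(theta, t):
    """Toy stand-in for the FE-predicted response time history."""
    return theta[0] * np.sin(theta[1] * t)

theta_true, sigma_true = np.array([2.0, 1.3]), 0.1
y = response(theta_true, t) + rng.normal(0.0, sigma_true, t.size)

def neg_log_likelihood(p):
    """Joint NLL over model parameters and log-noise (keeps sigma > 0)."""
    theta, log_sigma = p[:2], p[2]
    r = y - response(theta, t)
    return t.size * log_sigma + 0.5 * np.sum(r ** 2) * np.exp(-2 * log_sigma)

res = minimize(neg_log_likelihood, x0=[1.8, 1.25, -1.0], method="L-BFGS-B")
theta_hat, sigma_hat = res.x[:2], np.exp(res.x[2])
print(theta_hat, sigma_hat)  # estimates near (2.0, 1.3) and 0.1
```

The framework in the paper replaces the finite-difference gradients used here with exact FE response sensitivities from the direct differentiation method, which is what makes the interior-point optimization and the Fisher-information-based uncertainty bounds tractable.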
Updated Hungarian Gravity Field Solution Based on Fifth Generation GOCE Gravity Field Models
Toth, Gyula; Foldvary, Lorant
2015-03-01
With the completion of ESA's GOCE satellite mission, fifth generation gravity field models are available from the GOCE High-level Processing Facility. Our contribution is an updated gravity field solution for Hungary using the latest DIR R05 GOCE gravity field model. The solution methodology is least squares gravity field parameter estimation using Spherical Radial Base Functions (SRBF). Regional datasets include deflections of the vertical (DOV), gravity anomalies and quasigeoid heights by GPS/levelling. The GOCE DIR R05 model has been combined with the EGM2008 model and has been evaluated in comparison with the EGM2008 and EIGEN-6C3stat models to assess the performance of our regional gravity field solution.
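The least-squares estimation of SRBF coefficients can be sketched in a one-dimensional toy setting: observations are modeled as a linear combination of Gaussian radial basis functions and the coefficients are solved for with ordinary least squares. The centers, width, and synthetic signal below are hypothetical, not the Hungarian datasets:

```python
import numpy as np

rng = np.random.default_rng(2)
centers = np.linspace(0.0, 1.0, 8)  # hypothetical radial-basis node positions
width = 0.15

def design_matrix(x):
    """Gaussian radial basis functions evaluated at observation points x."""
    return np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2 * width ** 2))

x_obs = rng.uniform(0.0, 1.0, 60)
field = np.sin(2 * np.pi * x_obs)   # synthetic stand-in for a gravity signal
A = design_matrix(x_obs)
coeffs, *_ = np.linalg.lstsq(A, field, rcond=None)

x_new = np.array([0.25, 0.75])
print(design_matrix(x_new) @ coeffs)  # approximates the signal at new points
```

In the real solution the basis functions live on a sphere and the observation equations mix heterogeneous data types (DOV, gravity anomalies, GPS/levelling heights), but the normal-equation structure is the same.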
Learning curve estimation in medical devices and procedures: hierarchical modeling.
Govindarajulu, Usha S; Stillo, Marco; Goldfarb, David; Matheny, Michael E; Resnic, Frederic S
2017-07-30
In the use of medical device procedures, learning effects have been shown to be a critical component of medical device safety surveillance. To support the estimation of these effects, we evaluated multiple methods for modeling these rates within a complex simulated dataset representing patients treated by physicians clustered within institutions. We employed unique modeling for the learning curves to incorporate the learning hierarchy between institutions and physicians and then modeled them within established methods that work with hierarchical data, such as generalized estimating equations (GEE) and generalized linear mixed effect models. We found that both methods performed well, but that the GEE may have some advantages over the generalized linear mixed effect models for ease of modeling and a substantially lower rate of model convergence failures. We then focused on using GEE and performed a separate simulation to vary the shape of the learning curve as well as employed various smoothing methods for the plots. We concluded that while both hierarchical methods can be used with our mathematical modeling of the learning curve, the GEE tended to perform better across multiple simulated scenarios in order to accurately model the learning effect as a function of physician and hospital hierarchical data in the use of a novel medical device. We found that the choice of shape used to produce the 'learning-free' dataset would be dataset specific, while the choice of smoothing method was negligibly different from one another. This was an important application to understand how best to fit this unique learning curve function for hierarchical physician and hospital data. Copyright © 2017 John Wiley & Sons, Ltd.
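The learning-curve shape itself is often taken as a monotone function of cumulative case volume, for example an exponential decay of adverse-event risk toward an asymptote; the hierarchical GEE or mixed-model machinery then sits on top of such a curve. A standalone sketch of fitting one such curve (a hypothetical functional form and synthetic data, not the authors' specification):

```python
import numpy as np
from scipy.optimize import curve_fit

def learning_curve(n, p_inf, delta, rate):
    """Adverse-event probability after n prior cases: starts at
    p_inf + delta and decays toward the asymptote p_inf
    (functional form and parameters are hypothetical)."""
    return p_inf + delta * np.exp(-rate * n)

n_cases = np.arange(0, 100)
true_risk = learning_curve(n_cases, 0.02, 0.08, 0.05)
rng = np.random.default_rng(1)
observed = true_risk + rng.normal(0.0, 0.003, n_cases.size)

params, _ = curve_fit(learning_curve, n_cases, observed, p0=[0.05, 0.05, 0.1])
print(params)  # estimates of (p_inf, delta, rate)
```

In the hierarchical setting, `n` would be each physician's own case count and the parameters would carry physician- and institution-level terms, estimated via GEE working correlations rather than independent least squares.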
Predictive models of procedural human supervisory control behavior
Boussemart, Yves
Human supervisory control systems are characterized by the computer-mediated nature of the interactions between one or more operators and a given task. Nuclear power plants, air traffic management and unmanned vehicle operations are examples of such systems. In this context, the role of the operators is typically highly proceduralized due to the time- and mission-critical nature of the tasks. Therefore, the ability to continuously monitor operator behavior so as to detect and predict anomalous situations is a critical safeguard for proper system operation. In particular, such models can help support the decision-making process of a supervisor of a team of operators by providing alerts when likely anomalous behaviors are detected. By exploiting the operator behavioral patterns which are typically reinforced through standard operating procedures, this thesis proposes a methodology that uses statistical learning techniques in order to detect and predict anomalous operator conditions. More specifically, the proposed methodology relies on hidden Markov models (HMMs) and hidden semi-Markov models (HSMMs) to generate predictive models of unmanned vehicle systems operators. Through the exploration of the resulting HMMs in two distinct single-operator scenarios, the methodology presented in this thesis is validated and shown to provide models capable of reliably predicting operator behavior. In addition, the use of HSMMs on the same data scenarios provides the temporal component of the predictions missing from the HMMs. The final step of this work is to examine how the proposed methodology scales to more complex scenarios involving teams of operators. Adopting a holistic team modeling approach, both HMMs and HSMMs are learned based on two team-based data sets. The results show that the HSMMs can provide valuable timing information in the single-operator case, whereas HMMs tend to be more robust to increased team complexity. In addition, this thesis discusses the
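HMM-based prediction of this kind rests on the forward algorithm, which yields the likelihood of an observed action sequence and a filtered distribution over hidden operator states. A minimal sketch with hypothetical transition and emission matrices (not parameters learned from the thesis data):

```python
import numpy as np

def forward(pi, A, B, obs):
    """HMM forward algorithm: returns the filtered hidden-state
    distribution after the last observation and the sequence likelihood."""
    alpha = pi * B[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
    likelihood = alpha.sum()
    return alpha / likelihood, likelihood

pi = np.array([0.9, 0.1])                          # nominal vs anomalous state
A = np.array([[0.95, 0.05], [0.20, 0.80]])         # state transition matrix
B = np.array([[0.7, 0.2, 0.1], [0.1, 0.2, 0.7]])   # action emission matrix
state_dist, lik = forward(pi, A, B, obs=[0, 0, 2, 2])
print(state_dist)  # posterior mass shifts toward the anomalous state
```

An alerting system would threshold the filtered probability of the anomalous state (or the sequence likelihood under the nominal model); HSMMs refine this by replacing the implicit geometric state durations with explicit duration distributions.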
Contact-based model for strategy updating and evolution of cooperation
Zhang, Jianlei; Chen, Zengqiang
2016-06-01
Establishing a workable model of the strategy decision process of players is not easy, and the related strategy updating rules have sparked heated debate. Models for evolutionary games have traditionally assumed that players imitate their successful partners by the comparison of respective payoffs, raising the question of what happens if the game information is not easily available. Focusing on this yet-unsolved case, the motivation behind the work presented here is to establish a novel model for the updating of states in a spatial population, by dispensing with the payoffs required in previous models and instead considering the players' contact patterns. It is handy and understandable to employ switching probabilities for determining the microscopic dynamics of strategy evolution. Our results illuminate the conditions under which the steady coexistence of competing strategies is possible. These findings reveal that the evolutionary fate of the coexisting strategies can be calculated analytically, and provide novel hints for the resolution of cooperative dilemmas in a competitive context. We hope that our results disclose new explanations for the survival and coexistence of competing strategies in structured populations.
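A payoff-free, contact-based update of the kind described can be sketched as a stochastic process in which each player switches strategy with a probability that depends only on the strategies held by its contacts, never on payoffs. The specific switching rule and parameters below are a hypothetical illustration:

```python
import random

def step(strategies, neighbors, beta=0.8):
    """One synchronous update of a binary-strategy population: player i
    switches to the opposite strategy with probability
    beta * (fraction of i's contacts holding that strategy)."""
    new = list(strategies)
    for i, nbrs in enumerate(neighbors):
        other = 1 - strategies[i]
        frac = sum(strategies[j] == other for j in nbrs) / len(nbrs)
        if random.random() < beta * frac:
            new[i] = other
    return new

# hypothetical ring of 6 players, alternating defectors (0) / cooperators (1)
neighbors = [((i - 1) % 6, (i + 1) % 6) for i in range(6)]
random.seed(0)
s = [0, 1, 0, 1, 0, 1]
for _ in range(20):
    s = step(s, neighbors)
print(s)  # one realization of the contact-driven dynamics
```

Note that a homogeneous population is absorbing under this rule (no contact holds the opposite strategy, so the switching probability is zero), which is the kind of property the analytical treatment of coexistence has to account for.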
A Multi-objective Procedure for Efficient Regression Modeling
Sinha, Ankur; Kuosmanen, Timo
2012-01-01
Variable selection is recognized as one of the most critical steps in statistical modeling. The problems encountered in engineering and social sciences are commonly characterized by an over-abundance of explanatory variables, non-linearities and unknown interdependencies between the regressors. An added difficulty is that the analysts may have little or no prior knowledge of the relative importance of the variables. To provide a robust method for model selection, this paper introduces a technique called the Multi-objective Genetic Algorithm for Variable Selection (MOGA-VS) which provides the user with an efficient set of regression models for a given data-set. The algorithm treats the regression problem as a two-objective task, where the purpose is to prefer those models which have fewer regression coefficients and better goodness of fit. In MOGA-VS, the model selection procedure is implemented in two steps. First, we generate the frontier of all efficient or non-dominated regression m...
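The first step, extracting the frontier of efficient (non-dominated) models over the two objectives, can be sketched directly; each candidate model is summarized by its number of coefficients and its lack of fit, both to be minimized. The candidate values below are hypothetical:

```python
def pareto_front(models):
    """Return the models not dominated in (n_coefficients, lack_of_fit):
    a model dominates another if it is no worse in both objectives and
    differs in at least one (candidates assumed distinct)."""
    front = []
    for m in models:
        dominated = any(
            o[0] <= m[0] and o[1] <= m[1] and o != m for o in models
        )
        if not dominated:
            front.append(m)
    return front

# hypothetical candidates: (number of regression coefficients, SSE)
candidates = [(2, 9.0), (3, 4.0), (4, 4.5), (5, 3.9), (6, 3.9)]
print(pareto_front(candidates))  # [(2, 9.0), (3, 4.0), (5, 3.9)]
```

Presenting the whole frontier, rather than a single "best" model, is what lets the analyst trade parsimony against fit after the fact.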
Institute of Scientific and Technical Information of China (English)
LIN Xiankun; LI Yanjun; LI Haolin
2014-01-01
Linear motors generate high heat and cause significant deformation in high-speed direct feed drive mechanisms. It is therefore relevant to estimate their deformation behavior in order to improve their application in precision machine tools. This paper describes a method to estimate thermal deformation based on updated finite element (FE) model methods. Firstly, an FE model is established for a linear motor drive test rig that includes the correlation between temperature rise and the resulting deformation. The relationship between the input and output variables of the FE model is identified with a modified multivariate input/output least squares support vector regression machine. Additionally, the temperature rise and displacements at some critical points on the mechanism are obtained experimentally by a system of thermocouples and an interferometer. The FE model is updated through intelligent comparison between the experimentally measured values and the results from the regression machine. The experiments for testing thermal behavior, along with the updated FE model simulations, are conducted on the test rig under reciprocating cycle drive conditions. The results show that the intelligently updated FE model can be used to analyze the temperature variation distribution of the mechanism and to estimate its thermal behavior. The accuracy of the thermal behavior estimation with the optimally updated method can be more than double that of the initial theoretical FE model. This paper provides a simulation method that is effective for estimating the thermal behavior of the direct feed drive mechanism with high accuracy.
The updated geodetic mean dynamic topography model – DTU15MDT
DEFF Research Database (Denmark)
Knudsen, Per; Andersen, Ole Baltazar; Maximenko, Nikolai
An update to the global mean dynamic topography model DTU13MDT is presented. For DTU15MDT the newer gravity model EIGEN-6C4 has been combined with the DTU15MSS mean sea surface model to construct this global mean dynamic topography model. The EIGEN-6C4 is derived using the full series of GOCE data...... re-tracked CRYOSAT-2 altimetry also, hence, increasing its resolution. Also, some issues in the Polar regions have been solved. Finally, the filtering was re-evaluated by adjusting the quasi-gaussian filter width to optimize the fit to drifter velocities. Subsequently, geostrophic surface currents...... were derived from the DTU15MDT. The results show that geostrophic surface currents associated with the mean circulation have been further improved and that currents having speeds down to below 4 cm/s have been recovered....
Modeling of Explorative Procedures for Remote Object Identification
1991-09-01
music; or as the action of detection and exploration of the world by actively searching and looking for updated information, which is defined as... world, can produce hallucinations or fantasies which dominate the person. The overall activity of the stimuli, cells, organs and systems is what
Developing Physiologic Models for Emergency Medical Procedures Under Microgravity
Parker, Nigel; O'Quinn, Veronica
2012-01-01
Several technological enhancements have been made to METI's commercial Emergency Care Simulator (ECS) with regard to how microgravity affects human physiology. The ECS uses both a software-only lung simulation and an integrated mannequin lung that uses a physical lung bag for creating chest excursions, together with a digital simulation of lung mechanics and gas exchange. METI's patient simulators incorporate models of human physiology that simulate lung and chest wall mechanics, as well as pulmonary gas exchange. Microgravity affects how O2 and CO2 are exchanged in the lungs. Procedures were also developed to take into account the Glasgow Coma Scale for determining levels of consciousness, by varying the ECS eye-blinking function to partially indicate the patient's level of consciousness. In addition, the ECS was modified to provide various levels of pulses, from weak and thready to hyper-dynamic, to assist in assessing patient conditions at the femoral, carotid, brachial, and pedal pulse locations.
An All-Time-Domain Moving Object Data Model, Location Updating Strategy, and Position Estimation
National Research Council Canada - National Science Library
Wu, Qunyong; Huang, Junyi; Luo, Jianping; Yang, Jianjun
2015-01-01
.... Secondly, we proposed a new dynamic threshold location updating strategy. The location updating threshold was given dynamically in accordance with the velocity, accuracy, and azimuth positioning information from the GPS...
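A dynamic updating threshold of this kind can be sketched as a function of the reported speed and positioning accuracy: a faster-moving or less accurately localized object warrants a larger movement threshold before an update is triggered. The specific weighting below is hypothetical:

```python
def update_threshold(speed_mps, accuracy_m, interval_s=5.0, k=0.5):
    """Distance threshold [m] a moving object must exceed before
    reporting a new position: grows with the expected travel per
    reporting interval and with the GPS accuracy radius
    (the weighting k and interval are hypothetical)."""
    return max(accuracy_m, k * speed_mps * interval_s + accuracy_m)

def needs_update(distance_moved_m, speed_mps, accuracy_m):
    """True when the observed displacement exceeds the dynamic threshold."""
    return distance_moved_m > update_threshold(speed_mps, accuracy_m)

print(needs_update(40.0, speed_mps=10.0, accuracy_m=5.0))  # True
print(needs_update(10.0, speed_mps=10.0, accuracy_m=5.0))  # False
```

Tying the threshold to velocity and accuracy keeps slow or stationary objects from flooding the database with updates while still capturing fast movers promptly.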
Summary of Expansions, Updates, and Results in GREET® 2016 Suite of Models
Energy Technology Data Exchange (ETDEWEB)
None, None
2016-10-01
This report documents the technical content of the expansions and updates in Argonne National Laboratory’s GREET® 2016 release and provides references and links to key documents related to these expansions and updates.
Experiences with a procedure for modeling product knowledge
DEFF Research Database (Denmark)
Hansen, Benjamin Loer; Hvam, Lars
2002-01-01
This paper presents experiences with a procedure for building configurators. The procedure has been used in an American company producing custom-made precision air conditioning equipment. The paper describes experiences with the use of the procedure and experiences with the project in general....
THE SCHEME FOR THE DATABASE BUILDING AND UPDATING OF 1:10 000 DIGITAL ELEVATION MODELS
Institute of Scientific and Technical Information of China (English)
None
2000-01-01
The National Bureau of Surveying and Mapping of China has planned to speed up the development of spatial data infrastructure (SDI) in the coming few years. This SDI consists of four types of digital products, i.e., digital orthophotos, digital elevation models, digital line graphs and digital raster graphs. For the DEM, a scheme for the database building and updating of 1:10 000 digital elevation models has been proposed and some experimental tests have also been accomplished. This paper describes the theoretical (and/or technical) background and reports some of the experimental results to support the scheme. Various aspects of the scheme such as accuracy, data sources, data sampling, spatial resolution, terrain modeling, data organization, etc. are discussed.
Experimental Update of the Overtopping Model Used for the Wave Dragon Wave Energy Converter
DEFF Research Database (Denmark)
Parmeggiani, Stefano; Kofoed, Jens Peter; Friis-Madsen, Erik
2013-01-01
An overtopping model specifically suited for Wave Dragon is needed in order to improve the reliability of its performance estimates. The model shall be comprehensive of all relevant physical processes that affect overtopping and flexible to adapt to any local conditions and device configuration....... An experimental investigation is carried out to update an existing formulation suited for 2D draft-limited, low-crested structures, in order to include the effects on the overtopping flow of the wave steepness, the 3D geometry of Wave Dragon, the wing reflectors, the device motions and the non-rigid connection...... of which can be measured in real-time. Instead of using new fitting coefficients, this approach allows a broader applicability of the model beyond the Wave Dragon case, to any overtopping WEC or structure within the range of tested conditions. Predictions reliability of overtopping over Wave Dragon...
Updated Life-Cycle Assessment of Aluminum Production and Semi-fabrication for the GREET Model
Energy Technology Data Exchange (ETDEWEB)
Dai, Qiang [Argonne National Lab. (ANL), Argonne, IL (United States); Kelly, Jarod C. [Argonne National Lab. (ANL), Argonne, IL (United States); Burnham, Andrew [Argonne National Lab. (ANL), Argonne, IL (United States); Elgowainy, Amgad [Argonne National Lab. (ANL), Argonne, IL (United States)
2015-09-01
This report serves as an update to the life-cycle analysis (LCA) of aluminum production based on the most recent data representing the state of the art of the industry in North America. The 2013 Aluminum Association (AA) LCA report on the environmental footprint of semi-finished aluminum products in North America provides the basis for the update (The Aluminum Association, 2013). The scope of this study covers primary aluminum production, secondary aluminum production, as well as aluminum semi-fabrication processes including hot rolling, cold rolling, extrusion and shape casting. This report focuses on energy consumption, material inputs and criteria air pollutant emissions for each process from cradle to gate of aluminum, starting with bauxite extraction and ending with the manufacturing of semi-fabricated aluminum products. The compiled life-cycle inventory (LCI) tables are to be incorporated into the vehicle cycle model of Argonne National Laboratory's Greenhouse Gases, Regulated Emissions, and Energy Use in Transportation (GREET) Model for the release of its 2015 version.
Update of structural models at SFR nuclear waste repository, Forsmark, Sweden
Energy Technology Data Exchange (ETDEWEB)
Axelsson, C.L.; Maersk Hansen, L. [Golder Associates AB (Sweden)
1997-12-01
The final repository for low and medium-level waste, SFR, is located below the Baltic Sea, off Forsmark. A number of geo-scientific investigations have been performed and used to design a conceptual model of the fracture system, which was used in hydraulic modeling for a performance assessment study of the SFR facility in 1987. An updated study was reported in 1993. No formal revision of the original conceptual model of the fracture system around SFR has so far been made. During review, uncertainties in the model of the fracture system were found. The previous local structure model is reviewed and an alternative model is presented together with evidence for the new interpretation. The model is based on a review of geophysical data, geological mapping, core logs, hydraulic testing, water inflow, etc. The fact that two different models can result from the same data represents an interpretation uncertainty which cannot be resolved without more data and basic interpretations of such data. Further refinement of the structure model could only be motivated if the two different models discussed here were to lead to significantly different consequences. 20 refs, 24 figs, 16 tabs
Vergara-Jaque, Ariela; Fenollar-Ferrer, Cristina; Kaufmann, Desirée; Forrest, Lucy R
2015-01-01
Secondary active transporters are critical for neurotransmitter clearance and recycling during synaptic transmission and uptake of nutrients. These proteins mediate the movement of solutes against their concentration gradients, by using the energy released in the movement of ions down pre-existing concentration gradients. To achieve this, transporters conform to the so-called alternating-access hypothesis, whereby the protein adopts at least two conformations in which the substrate binding sites are exposed to one or other side of the membrane, but not both simultaneously. Structures of a bacterial homolog of neuronal glutamate transporters, GltPh, in several different conformational states have revealed that the protein structure is asymmetric in the outward- and inward-open states, and that the conformational change connecting them involves an elevator-like movement of a substrate binding domain across the membrane. The structural asymmetry is created by inverted-topology repeats, i.e., structural repeats with similar overall folds whose transmembrane topologies are related to each other by two-fold pseudo-symmetry around an axis parallel to the membrane plane. Inverted repeats have been found in around three-quarters of secondary transporter folds. Moreover, the (a)symmetry of these systems has been successfully used as a bioinformatic tool, called "repeat-swap modeling", to predict structural models of a transporter in one conformation using the known structure of the transporter in the complementary conformation as a template. Here, we describe an updated repeat-swap homology modeling protocol, and calibrate the accuracy of the method using GltPh, for which both inward- and outward-facing conformations are known. We then apply this repeat-swap homology modeling procedure to a concentrative nucleoside transporter, VcCNT, which has a three-dimensional arrangement related to that of GltPh. The repeat-swapped model of VcCNT predicts that nucleoside transport also
An Updated Analytical Structural Pounding Force Model Based on Viscoelasticity of Materials
Directory of Open Access Journals (Sweden)
Qichao Xue
2016-01-01
Based on a summary of existing pounding force analytical models, an updated pounding force analysis method is proposed by introducing a viscoelastic constitutive model and a contact mechanics method. The traditional Kelvin viscoelastic pounding force model can be expanded to a 3-parameter linear viscoelastic model by separating the classic pounding model parameters into geometry parameters and viscoelastic material parameters. Two existing pounding examples, steel-to-steel and concrete-to-concrete, are recalculated using the proposed method. The calculation results are then compared with those of other pounding force models. The results show reasonable accuracy for the proposed model: the relative normalized errors of the steel-to-steel and concrete-to-concrete experiments are 19.8% and 12.5%, respectively. Furthermore, a steel-to-polymer pounding example is calculated, and the application of the proposed method to vibration control analysis for a pounding tuned mass damper (TMD) is simulated. However, due to insufficient experimental details, the proposed model can only give a rough trend for both the single pounding process and the vibration control process. Despite this promising outlook, the study in this paper is only a first step toward pounding force calculation; the model performance still needs more careful assessment, especially in the presence of inelastic response.
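The classic Kelvin spring-dashpot model that this abstract takes as its starting point can be sketched in a few lines. The formula (contact force = stiffness times penetration plus damping times penetration rate, zero out of contact) is the standard Kelvin impact model; the parameter values below are illustrative assumptions, not values from the study.

```python
# Hedged sketch of the classic Kelvin (spring-dashpot) pounding force model
# that the paper extends. Parameter values are illustrative, not from the study.

def kelvin_pounding_force(delta, delta_dot, k, c):
    """Contact force for penetration delta (m) and penetration rate (m/s).
    Returns 0 when the bodies are not in contact (delta <= 0)."""
    if delta <= 0.0:
        return 0.0
    return k * delta + c * delta_dot

# Assumed values roughly in the range of a steel-to-steel pounding event
k = 2.5e9   # contact stiffness, N/m (assumed)
c = 1.0e6   # damping coefficient, N*s/m (assumed)
print(kelvin_pounding_force(1e-4, 0.5, k, c))   # in-contact force, N
print(kelvin_pounding_force(-1e-4, 0.5, k, c))  # bodies separated -> 0.0
```

The paper's extension replaces the lumped k and c with quantities derived separately from geometry and viscoelastic material parameters, but the contact/no-contact structure stays the same.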
DEFF Research Database (Denmark)
Kristensen, Anders Ringgaard; Søllested, Thomas Algot
2004-01-01
Several replacement models have been presented in the literature. In other application areas, such as dairy cow replacement, various methodological improvements like hierarchical Markov processes and Bayesian updating have been implemented, but not in sow models. Furthermore, there are methodological improvements like multi-level hierarchical Markov processes with decisions on multiple time scales, efficient methods for parameter estimation at herd level and standard software that have hardly been implemented at all in any replacement model. The aim of this study is to present a sow replacement model that really uses all these methodological improvements. In this paper, the biological model describing the performance and feed intake of sows is presented. In particular, estimation of herd-specific parameters is emphasized. The optimization model is described in a subsequent paper.
Uncertainty quantification of voice signal production mechanical model and experimental updating
Cataldo, E.; Soize, C.; Sampaio, R.
2013-11-01
The aim of this paper is to analyze the uncertainty quantification in a voice production mechanical model and to update the probability density function corresponding to the tension parameter using the Bayes method and experimental data. Three parameters are considered uncertain in the voice production mechanical model used: the tension parameter, the neutral glottal area and the subglottal pressure. The tension parameter of the vocal folds is mainly responsible for the change of the fundamental frequency of a voice signal, generated by a mechanical/mathematical model for producing voiced sounds. The three uncertain parameters are modeled by random variables. The probability density function related to the tension parameter is considered uniform and the probability density functions related to the neutral glottal area and the subglottal pressure are constructed using the Maximum Entropy Principle. The output of the stochastic computational model is the random voice signal, and the Monte Carlo method is used to solve the stochastic equations, allowing realizations of the random voice signals to be generated. For each realization of the random voice signal, the corresponding realization of the random fundamental frequency is calculated, and the prior pdf of this random fundamental frequency is then estimated. Experimental data are available for the fundamental frequency, and the posterior probability density function of the random tension parameter is then estimated using the Bayes method. In addition, an application is performed considering a case with a pathology in the vocal folds. The strategy developed here is important mainly for two reasons. The first is the possibility of updating the probability density function of a parameter, the tension parameter of the vocal folds, which cannot be measured directly; the second is the construction of the likelihood function. In general, it is predefined using a known pdf. Here, it is
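The Bayes step described above can be sketched with a simple grid approximation: a uniform prior on the tension-like parameter is multiplied by a likelihood built from an observed fundamental frequency. The forward map f0(q) and all numbers below are illustrative assumptions, not the paper's vocal-fold model.

```python
import math

# Hedged sketch of a grid-based Bayes update for an unobservable parameter q
# (a stand-in for the tension parameter). The forward model f0(q) is assumed.

def posterior(q_grid, f_obs, sigma, forward):
    """Uniform prior on q_grid, Gaussian likelihood on the observed frequency."""
    prior = [1.0 / len(q_grid)] * len(q_grid)
    like = [math.exp(-0.5 * ((forward(q) - f_obs) / sigma) ** 2) for q in q_grid]
    unnorm = [p * l for p, l in zip(prior, like)]
    z = sum(unnorm)
    return [u / z for u in unnorm]  # normalized posterior pmf

forward = lambda q: 100.0 * math.sqrt(q)       # assumed monotone map q -> f0 (Hz)
q_grid = [0.5 + 0.01 * i for i in range(101)]  # q in [0.5, 1.5]
post = posterior(q_grid, f_obs=110.0, sigma=5.0, forward=forward)
q_map = q_grid[post.index(max(post))]          # posterior mode
print(round(q_map, 2))
```

In the paper the likelihood is built from Monte Carlo realizations of the stochastic voice model rather than a closed-form Gaussian, but the prior-times-likelihood structure is the same.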
Modified uterine allotransplantation and immunosuppression procedure in the sheep model.
Directory of Open Access Journals (Sweden)
Li Wei
OBJECTIVE: To develop an orthotopic, allogeneic, uterine transplantation technique and an effective immunosuppressive protocol in the sheep model. METHODS: In this pilot study, 10 sexually mature ewes were subjected to laparotomy and total abdominal hysterectomy with oophorectomy to procure uterus allografts. The cold ischemic time was 60 min. End-to-end vascular anastomosis was performed using continuous, non-interlocking sutures. Complete tissue reperfusion was achieved in all animals within 30 s after the vascular re-anastomosis, without any evidence of arterial or venous thrombosis. The immunosuppressive protocol consisted of tacrolimus, mycophenolate mofetil and methylprednisolone tablets. Graft viability was assessed by transrectal ultrasonography and second-look laparotomy at 2 and 4 weeks, respectively. RESULTS: Viable uterine tissue and vascular patency were observed on transrectal ultrasonography and second-look laparotomy. Histological analysis of the graft tissue (performed in one ewe) revealed normal tissue architecture with a very subtle inflammatory reaction but no edema or stasis. CONCLUSION: We have developed a modified procedure that allowed us to successfully perform orthotopic, allogeneic, uterine transplantation in sheep, whose uterine and vascular anatomy (apart from the bicornuate uterus) is similar to the human anatomy, making the ovine model excellent for human uterine transplant research.
A Traffic Information Estimation Model Using Periodic Location Update Events from Cellular Network
Lin, Bon-Yeh; Chen, Chi-Hua; Lo, Chi-Chun
In recent years, considerable interest has arisen in building Intelligent Transportation Systems (ITS), which focus on efficiently managing the road network. One of the important purposes of ITS is to improve the usability of transportation resources so as to extend vehicle durability and reduce fuel consumption and transportation times. Before this goal can be achieved, it is vital to obtain correct, real-time traffic information, so that traffic information services can be provided in a timely and effective manner. Using Mobile Stations (MS) as probes to track vehicle movement is a low-cost and immediate solution for obtaining real-time traffic information. In this paper, we propose a model to analyze the relation between the number of Periodic Location Update (PLU) events and traffic density. Finally, numerical analysis shows that this model is feasible for estimating traffic density.
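The kind of relation the paper analyzes can be illustrated with a back-of-the-envelope inversion from PLU counts to density. The formula and every parameter below are illustrative assumptions for a simple counting model, not the paper's actual estimator.

```python
# Hedged sketch: inferring vehicle density from Periodic Location Update (PLU)
# event counts, assuming each phone-equipped vehicle produces one PLU event
# every t_plu_s seconds while it stays in the observed cell. All numbers are
# invented for illustration.

def estimate_density(n_plu, t_plu_s, window_s, penetration, seg_km):
    """Estimated vehicles per km on a road segment covered by one cell."""
    expected_events_per_vehicle = window_s / t_plu_s  # PLU events per vehicle
    vehicles = n_plu / (penetration * expected_events_per_vehicle)
    return vehicles / seg_km

# 540 PLU events in one hour, one update per phone per hour, 60% of vehicles
# carrying an active phone, on a 3 km segment:
print(estimate_density(540, 3600.0, 3600.0, 0.6, 3.0))  # vehicles per km
```

A real estimator must also handle vehicles that enter or leave the cell mid-window and the stochastic timing of updates, which is where the paper's analysis comes in.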
Institute of Scientific and Technical Information of China (English)
Nan Liang; Pu-Xun Wua; Zong-Hong Zhu
2011-01-01
We constrain the Cardassian expansion models from the latest observations, including the updated gamma-ray bursts (GRBs), which are calibrated using a cosmology-independent method from the Union2 compilation of type Ia supernovae (SNe Ia). By combining the GRB data with the joint observations from the Union2 SNe Ia set, along with the results from the Cosmic Microwave Background radiation observation from the seven-year Wilkinson Microwave Anisotropy Probe and the baryonic acoustic oscillation observation galaxy sample from the spectroscopic Sloan Digital Sky Survey Data Release, we find significant constraints on the model parameters: ΩM0 = 0.282+0.015-0.014, n = 0.03+0.05-0.05 for the original Cardassian model; and n = -0.16+0.25-3.26, β = -0.76+0.34-0.58 for the modified polytropic Cardassian model, which are consistent with the ΛCDM model in a 1-σ confidence region. From the reconstruction of the deceleration parameter q(z) in Cardassian models, we obtain the transition redshift zT = 0.73 ± 0.04 for the original Cardassian model and zT = 0.68 ± 0.04 for the modified polytropic Cardassian model.
Updating the Finite Element Model of the Aerostructures Test Wing using Ground Vibration Test Data
Lung, Shun-fat; Pak, Chan-gi
2009-01-01
Improved and/or accelerated decision making is a crucial step during flutter certification processes. Unfortunately, most finite element structural dynamics models have uncertainties associated with model validity. Tuning the finite element model using measured data to minimize the model uncertainties is a challenging task in the area of structural dynamics. The model tuning process requires not only satisfactory correlations between analytical and experimental results, but also the retention of the mass and stiffness properties of the structures. Minimizing the difference between analytical and experimental results is a type of optimization problem. By utilizing the multidisciplinary design, analysis, and optimization (MDAO) tool to optimize the objective function and constraints, the mass properties, the natural frequencies, and the mode shapes can be matched to the target data while retaining mass matrix orthogonality. This approach has been applied to minimize the model uncertainties for the structural dynamics model of the Aerostructures Test Wing (ATW), which was designed and tested at the National Aeronautics and Space Administration (NASA) Dryden Flight Research Center (DFRC) (Edwards, California). This study has shown that natural frequencies and corresponding mode shapes from the updated finite element model have excellent agreement with corresponding measured data.
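The optimization idea behind such model tuning can be reduced to a toy case: adjust a single global stiffness scale so that model natural frequencies match measured ones in a least-squares sense. The one-parameter model, frequencies, and closed-form solution below are illustrative assumptions, far simpler than the multi-parameter MDAO problem in the study.

```python
import math

# Hedged sketch of FE model updating as least-squares optimization. For a
# single stiffness scale alpha, frequencies scale as sqrt(alpha), so
# minimizing sum((sqrt(alpha)*f_model_i - f_meas_i)^2) has a closed form.

def tune_stiffness_scale(f_model, f_meas):
    """Return the optimal alpha for f_i(alpha) = sqrt(alpha) * f_model_i."""
    s = sum(fm * fo for fm, fo in zip(f_model, f_meas)) \
        / sum(fm * fm for fm in f_model)
    return s * s  # alpha = s^2, since frequency scales with sqrt(stiffness)

f_model = [4.0, 25.0, 70.0]   # Hz, untuned FE model (assumed values)
f_meas  = [4.2, 26.2, 73.5]   # Hz, ground vibration test (assumed values)
alpha = tune_stiffness_scale(f_model, f_meas)
f_updated = [math.sqrt(alpha) * f for f in f_model]
print(round(alpha, 3))        # alpha > 1: the model was too flexible
```

Real updating problems tune many independent stiffness and mass parameters under mode-shape correlation and orthogonality constraints, which is why a general-purpose MDAO tool is used rather than a closed form.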
Institute of Scientific and Technical Information of China (English)
LEE Hyeon-deok; SON Myeong-jo; OH Min-jae; LEE Hyung-woo; KIM Tae-wan
2012-01-01
In early 2000, large domestic shipyards introduced shipbuilding 3D computer-aided design (CAD) to the hull production design process to define manufacturing and assembly information. The production design process accounts for most of the man-hours (M/H) of the entire design process and is closely connected to yard production because designs must take into account the production schedule of the shipyard, the current state of the dock needed to mount the ship's block, and supply information. Therefore, many shipyards are investigating the complete automation of the production design process to reduce the M/H for designers. However, these problems are still currently unresolved, and a clear direction is needed for research on the automatic design base of manufacturing rules, batches reflecting changed building specifications, batch updates of boundary information for hull members, and management of the hull model change history to automate the production design process. In this study, a process was developed to aid production design engineers in designing a new ship's hull block model from that of a similar ship previously built, based on AVEVA Marine. An automation system that uses the similar ship's hull block model is proposed to reduce M/H and human errors by the production design engineer. First, scheme files holding important information were constructed in a database to automatically update hull block model modifications. Second, for batch updates, the database's table, including building specifications, and the referential integrity of a relational database were compared. In particular, this study focused on reflecting the frequent modification of building specifications and regeneration of boundary information of the adjacent panel due to changes in a specific panel. Third, the rollback function is proposed, in which the database (DB) is used to return to the previously designed panels.
Baart, A.M.; Atsma, F.; McSweeney, E.N.; Moons, K.G.; Vergouwe, Y.; Kort, W.L. de
2014-01-01
BACKGROUND: Recently, sex-specific prediction models for low hemoglobin (Hb) deferral have been developed in Dutch whole blood donors. In the present study, we validated and updated the models in a cohort of Irish whole blood donors. STUDY DESIGN AND METHODS: Prospectively collected data from 45,031
Directory of Open Access Journals (Sweden)
J. M. van Wessem
2013-07-01
The physics package of the polar version of the regional atmospheric climate model RACMO2 has been updated from RACMO2.1 to RACMO2.3. The update includes, among other changes, a parameterization for cloud ice super-saturation, an improved turbulent and radiative flux scheme and a changed cloud scheme. In this study the effects of these changes on the modelled near-surface climate of Antarctica are presented. Significant biases remain, but overall RACMO2.3 better represents the near-surface climate in terms of the modelled surface energy balance, based on a comparison with > 750 months of data from nine automatic weather stations located in East Antarctica. Especially the representation of the sensible heat flux and net longwave radiative flux has improved, with a decrease in biases of up to 40%. These improvements are mainly caused by the inclusion of ice super-saturation, which has led to more moisture being transported onto the continent, resulting in more and optically thicker clouds and more downward longwave radiation. As a result, modelled surface temperatures have increased and the bias, when compared to 10 m snow temperatures from 64 ice core observations, has decreased from −2.3 K to −1.3 K. The weaker surface temperature inversion consequently improves the representation of the sensible heat flux, whereas wind speed remains unchanged.
Bayesian updating in a fault tree model for shipwreck risk assessment.
Landquist, H; Rosén, L; Lindhe, A; Norberg, T; Hassellöv, I-M
2017-03-14
Shipwrecks containing oil and other hazardous substances have been deteriorating on the seabeds of the world for many years and are threatening to pollute the marine environment. The status of the wrecks and the potential volume of harmful substances present in the wrecks are affected by a multitude of uncertainties. Each shipwreck poses a unique threat, the nature of which is determined by the structural status of the wreck and possible damage resulting from hazardous activities that could potentially cause a discharge. Decision support is required to ensure the efficiency of the prioritisation process and the allocation of resources required to carry out risk mitigation measures. Whilst risk assessments can provide the requisite decision support, comprehensive methods that take into account key uncertainties related to shipwrecks are limited. The aim of this paper was to develop a method for estimating the probability of discharge of hazardous substances from shipwrecks. The method is based on Bayesian updating of generic information on the hazards posed by different activities in the surroundings of the wreck, with information on site-specific and wreck-specific conditions in a fault tree model. Bayesian updating is performed using Monte Carlo simulations for estimating the probability of a discharge of hazardous substances and formal handling of intrinsic uncertainties. An example application involving two wrecks located off the Swedish coast is presented. Results show the estimated probability of opening, discharge and volume of the discharge for the two wrecks and illustrate the capability of the model to provide decision support. Together with consequence estimations of a discharge of hazardous substances, the suggested model enables comprehensive and probabilistic risk assessments of shipwrecks to be made.
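The core computation described above, Bayesian updating propagated through a fault tree by Monte Carlo, can be sketched with a minimal two-event tree. The Beta priors, the hypothetical inspection data, and the tree structure below are invented for illustration; the paper's tree and distributions are far more detailed.

```python
import random

# Hedged sketch: a two-event fault tree (hull opening AND release given
# opening) with Beta-distributed probabilities. The generic opening prior is
# Bayes-updated with hypothetical wreck-specific inspection data, then both
# uncertain probabilities are propagated by Monte Carlo. All numbers invented.

random.seed(1)

# Generic prior for annual opening probability: Beta(2, 38), mean 0.05 (assumed)
a0, b0 = 2.0, 38.0
# Hypothetical wreck-specific evidence: 1 breach found in 20 inspected sections
# -> conjugate Beta-binomial update
a1, b1 = a0 + 1, b0 + 19

n = 100_000
draws = []
for _ in range(n):
    p_open = random.betavariate(a1, b1)       # posterior opening probability
    p_release = random.betavariate(3.0, 7.0)  # release given opening (assumed)
    draws.append(p_open * p_release)          # AND gate: both must occur

p_discharge = sum(draws) / n
print(round(p_discharge, 4))  # roughly 0.05 * 0.3 under these assumptions
```

The Monte Carlo output is a full distribution over the discharge probability, not just a point estimate, which is what makes the formal uncertainty handling in the paper possible.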
Procedure for identifying models for the heat dynamics of buildings
DEFF Research Database (Denmark)
Bacher, Peder; Madsen, Henrik
This report describes a new method for obtaining detailed information about the heat dynamics of a building using frequent readings of the heat consumption. Such a procedure is considered to be of utmost importance as a key procedure for using readings from smart meters, which are expected to be installed in almost all buildings in the coming years.
Towards a neural basis of music perception -- A review and updated model
Directory of Open Access Journals (Sweden)
Stefan eKoelsch
2011-06-01
Music perception involves acoustic analysis, auditory memory, auditory scene analysis, processing of interval relations, of musical syntax and semantics, and activation of (pre)motor representations of actions. Moreover, music perception potentially elicits emotions, thus giving rise to the modulation of emotional effector systems such as the subjective feeling system, the autonomic nervous system, the hormonal, and the immune system. Building on a previous article (Koelsch & Siebel, 2005), this review presents an updated model of music perception and its neural correlates. The article describes processes involved in music perception, and reports EEG and fMRI studies that inform about the time course of these processes, as well as about where in the brain these processes might be located.
[Social determinants of health and disability: updating the model for determination].
Tamayo, Mauro; Besoaín, Álvaro; Rebolledo, Jame
2017-03-05
Social determinants of health (SDH) are the conditions in which people live. These conditions impact their lives, health status and level of social inclusion. In line with the conceptual and comprehensive progression of disability, it is important to update the SDH model because of its broad implications for implementing health interventions in society. This proposal supports incorporating disability into the model as a structural determinant, as it leads to the same social inclusion/exclusion of people described for other structural SDH. This proposal encourages giving importance to designing and implementing public policies to improve societal conditions and contribute to social equity. This would be an act of reparation, justice and fulfilment of the Convention on the Rights of Persons with Disabilities.
Toward a Neural Basis of Music Perception – A Review and Updated Model
Koelsch, Stefan
2011-01-01
Music perception involves acoustic analysis, auditory memory, auditory scene analysis, processing of interval relations, of musical syntax and semantics, and activation of (pre)motor representations of actions. Moreover, music perception potentially elicits emotions, thus giving rise to the modulation of emotional effector systems such as the subjective feeling system, the autonomic nervous system, the hormonal, and the immune system. Building on a previous article (Koelsch and Siebel, 2005), this review presents an updated model of music perception and its neural correlates. The article describes processes involved in music perception, and reports EEG and fMRI studies that inform about the time course of these processes, as well as about where in the brain these processes might be located. PMID:21713060
Schaewe, Timothy J.; Fan, Xiaoyao; Ji, Songbai; Hartov, Alex; Hiemenz Holton, Leslie; Roberts, David W.; Paulsen, Keith D.; Simon, David A.
2013-03-01
Dartmouth and Medtronic have established an academic-industrial partnership to develop, validate, and evaluate a multimodality neurosurgical image-guidance platform for brain tumor resection surgery that is capable of updating the spatial relationships between preoperative images and the current surgical field. Previous studies have shown that brain shift compensation through a modeling framework using intraoperative ultrasound and/or visible light stereovision to update preoperative MRI appears to result in improved accuracy in navigation. However, image updates have thus far only been produced retrospective to surgery in large part because of gaps in the software integration and information flow between the co-registration and tracking, image acquisition and processing, and image warping tasks which are required during a case. This paper reports the first demonstration of integration of a deformation-based image updating process for brain shift modeling with an industry-standard image guided surgery platform. Specifically, we have completed the first and most critical data transfer operation to transmit volumetric image data generated by the Dartmouth brain shift modeling process to the Medtronic StealthStation® system. StealthStation® comparison views, which allow the surgeon to verify the correspondence of the received updated image volume relative to the preoperative MRI, are presented, along with other displays of image data such as the intraoperative 3D ultrasound used to update the model. These views and data represent the first time that externally acquired and manipulated image data has been imported into the StealthStation® system through the StealthLink® portal and visualized on the StealthStation® display.
FEM Updating of the Heritage Court Building Structure
DEFF Research Database (Denmark)
Ventura, C. E.; Brincker, Rune; Dascotte, E.
2001-01-01
This paper describes results of a model updating study conducted on a 15-storey reinforced concrete shear core building. The output-only modal identification results obtained from ambient vibration measurements of the building were used to update a finite element model of the structure. The starting model of the structure was developed from the information provided in the design documentation of the building. Different parameters of the model were then modified using an automated procedure to improve the correlation between measured and calculated modal parameters. Careful attention was paid to the selection of the parameters to be modified by the updating software in order to ensure that the necessary changes to the model were realistic and physically realisable and meaningful. The paper highlights the model updating process and provides an assessment of the usefulness of using...
Experimental test of spatial updating models for monkey eye-head gaze shifts.
Directory of Open Access Journals (Sweden)
Tom J Van Grootel
How the brain maintains an accurate and stable representation of visual target locations despite the occurrence of saccadic gaze shifts is a classical problem in oculomotor research. Here we test and dissociate the predictions of different conceptual models for head-unrestrained gaze-localization behavior of macaque monkeys. We adopted the double-step paradigm with rapid eye-head gaze shifts to measure localization accuracy in response to flashed visual stimuli in darkness. We presented the second target flash either before (static) or during (dynamic) the first gaze displacement. In the dynamic case the brief visual flash induced a small retinal streak of up to about 20 deg at an unpredictable moment and retinal location during the eye-head gaze shift, which provides serious challenges for the gaze-control system. However, for both stimulus conditions, monkeys localized the flashed targets with accurate gaze shifts, which rules out several models of visuomotor control. First, these findings exclude the possibility that gaze-shift programming relies on retinal inputs only. Instead, they support the notion that accurate eye-head motor feedback updates the gaze-saccade coordinates. Second, in dynamic trials the visuomotor system cannot rely on the coordinates of the planned first eye-head saccade either, which rules out remapping on the basis of a predictive corollary gaze-displacement signal. Finally, because gaze-related head movements were also goal-directed, requiring continuous access to eye-in-head position, we propose that our results best support a dynamic feedback scheme for spatial updating in which visuomotor control incorporates accurate signals about instantaneous eye and head positions rather than relative eye and head displacements.
On-line updating of a distributed flow routing model - River Vistula case study
Karamuz, Emilia; Romanowicz, Renata; Napiorkowski, Jaroslaw
2015-04-01
This paper presents an application of methods of on-line updating in the River Vistula flow forecasting system. All flow-routing codes make simplifying assumptions and consider only a reduced set of the processes known to occur during a flood. Hence, all models are subject to a degree of structural error that is typically compensated for by calibration of the friction parameters. Calibrated parameter values are not, therefore, physically realistic, as in estimating them we also make allowance for a number of distinctly non-physical effects, such as model structural error and any energy losses or flow processes which occur at sub-grid scales. Calibrated model parameters are therefore area-effective, scale-dependent values which are not drawn from the same underlying statistical distribution as the equivalent at-a-point parameter of the same name. The aim of this paper is the derivation of real-time updated, on-line flow forecasts at certain strategic locations along the river, over a specified time horizon into the future, based on information on the behaviour of the flood wave upstream and available on-line measurements at a site. Depending on the length of the river reach and the slope of the river bed, a realistic forecast lead time, obtained in this manner, may range from hours to days. The information upstream can include observations of river levels and/or rainfall measurements. The proposed forecasting system will integrate distributed modelling, acting as a spatial interpolator with lumped parameter Stochastic Transfer Function models. Daily stage data from gauging stations are typically available at sites 10-60 km apart and test only the average routing performance of hydraulic models and not their ability to produce spatial predictions. Application of a distributed flow routing model makes it possible to interpolate forecasts both in time and space. This work was partly supported by the project "Stochastic flood forecasting system (The River Vistula reach
Slab2 - Providing updated subduction zone geometries and modeling tools to the community
Hayes, G. P.; Hearne, M. G.; Portner, D. E.; Borjas, C.; Moore, G.; Flamme, H.
2015-12-01
The U.S. Geological Survey database of global subduction zone geometries (Slab1.0) combines a variety of geophysical data sets (earthquake hypocenters, moment tensors, active source seismic survey images of the shallow subduction zone, bathymetry, trench locations, and sediment thickness information) to image the shape of subducting slabs in three dimensions, at approximately 85% of the world's convergent margins. The database is used extensively for a variety of purposes, from earthquake source imaging, to magnetotelluric modeling. Gaps in Slab1.0 exist where input data are sparse and/or where slabs are geometrically complex (and difficult to image with an automated approach). Slab1.0 also does not include information on the uncertainty in the modeled geometrical parameters, or the input data used to image them, and provides no means to reproduce the models it described. Currently underway, Slab2 will update and replace Slab1.0 by: (1) extending modeled slab geometries to all global subduction zones; (2) incorporating regional data sets that may describe slab geometry in finer detail than do previously used teleseismic data; (3) providing information on the uncertainties in each modeled slab surface; (4) modifying our modeling approach to a fully-three dimensional data interpolation, rather than following the 2-D to 3-D steps of Slab1.0; (5) migrating the slab modeling code base to a more universally distributable language, Python; and (6) providing the code base and input data we use to create our models, such that the community can both reproduce the slab geometries, and add their own data sets to ours to further improve upon those models in the future. In this presentation we describe our vision for Slab2, and the first results of this modeling process.
Using GOMS models and hypertext to create representations of medical procedures for online display
Gugerty, Leo; Halgren, Shannon; Gosbee, John; Rudisill, Marianne
1991-01-01
This study investigated two methods to improve the organization and presentation of computer-based medical procedures. A literature review suggested that the GOMS (goals, operators, methods, and selection rules) model can assist in rigorous task analysis, which can then help generate initial design ideas for the human-computer interface. GOMS models are hierarchical in nature, so this study also investigated the effect of hierarchical, hypertext interfaces. We used a 2 x 2 between-subjects design with the following independent variables: procedure organization - GOMS-model based vs. medical-textbook based; navigation type - hierarchical vs. linear (booklike). After naive subjects studied the online procedures, measures were taken of their memory for the content and the organization of the procedures. This design was repeated for two medical procedures. For one procedure, subjects who studied GOMS-based and hierarchical procedures remembered more about the procedures than other subjects. The results for the other procedure were less clear. However, data for both procedures showed a 'GOMSification effect': when asked to do a free recall of a procedure, subjects who had studied a textbook procedure often recalled key information in a location inconsistent with the procedure they actually studied, but consistent with the GOMS-based procedure.
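A GOMS-style task analysis is essentially a goal hierarchy, which maps naturally onto both the hierarchical hypertext display and the linear "booklike" display compared in the study. The sketch below shows that mapping; the medical procedure content is invented for illustration and is not from the study.

```python
# Hedged sketch: a GOMS-style goal hierarchy as nested (goal, subgoals) tuples,
# and a depth-first traversal that yields the linear "booklike" reading order.
# The procedure content is invented for illustration.

goms = ("Treat laceration", [
    ("Prepare site", [
        ("Clean wound", []),
        ("Apply anesthetic", []),
    ]),
    ("Close wound", [
        ("Suture", []),
    ]),
])

def flatten(node, depth=0):
    """Yield (indent level, goal name) pairs in depth-first reading order."""
    name, subgoals = node
    yield depth, name
    for sub in subgoals:
        yield from flatten(sub, depth + 1)

outline = list(flatten(goms))
for depth, name in outline:
    print("  " * depth + name)
```

The hierarchical interface exposes the tree structure directly (e.g. as expandable nodes), while the linear interface presents exactly the flattened traversal, which is the contrast the experiment manipulates.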
Directory of Open Access Journals (Sweden)
B. Gantt
2015-05-01
Full Text Available Sea spray aerosols (SSA) impact the particle mass concentration and gas-particle partitioning in coastal environments, with implications for human and ecosystem health. Despite their importance, the emission magnitude of SSA remains highly uncertain, with global estimates varying by nearly two orders of magnitude. In this study, the Community Multiscale Air Quality (CMAQ) model was updated to enhance fine mode SSA emissions, include a sea surface temperature (SST) dependency, and reduce coastally-enhanced emissions. Predictions from the updated CMAQ model and those of the previous release version, CMAQv5.0.2, were evaluated using several regional and national observational datasets in the continental US. The updated emissions generally reduced model underestimates of sodium, chloride, and nitrate surface concentrations for an inland site of the Bay Regional Atmospheric Chemistry Experiment (BRACE) near Tampa, Florida. Adding the SST dependency to the SSA emission parameterization led to increased sodium concentrations in the southeast US and decreased concentrations along parts of the Pacific coast and northeastern US. The influence of sodium on the gas-particle partitioning of nitrate resulted in higher nitrate particle concentrations in many coastal urban areas due to increased condensation of nitric acid in the updated simulations, potentially affecting the predicted nitrogen deposition in sensitive ecosystems. Application of the updated SSA emissions to the California Research at the Nexus of Air Quality and Climate Change (CalNex) study period resulted in modest improvement in the predicted surface concentration of sodium and nitrate at several central and southern California coastal sites. This SSA emission update enabled a more realistic simulation of the atmospheric chemistry in environments where marine air mixes with urban pollution.
Institute of Scientific and Technical Information of China (English)
Yuan-Ko HUANG; Lien-Fa LIN
2014-01-01
Spatio-temporal databases aim at appropriately managing moving objects so as to support various types of queries. While much research has been conducted on developing query processing techniques, less effort has been made to address the issue of when and how to update location information of moving objects. Previous work shifts the workload of processing updates to each object, which usually has limited CPU and battery capacity. This results in a tremendous processing overhead for each moving object. In this paper, we focus on designing efficient update strategies for two important types of moving objects, free-moving objects (FMOs) and network-constrained objects (NCOs), which are classified based on object movement models. For FMOs, we develop a novel update strategy, namely the FMO update strategy (FMOUS), to explicitly indicate a time point at which the object needs to update location information. As each object knows in advance when to update (meaning that it does not have to continuously check), the processing overhead can be greatly reduced. In addition, the FMO update procedure (FMOUP) is designed to efficiently process the updates issued from moving objects. Similarly, for NCOs, we propose the NCO update strategy (NCOUS) and the NCO update procedure (NCOUP) to inform each object when and how to update location information. Extensive experiments are conducted to demonstrate the effectiveness and efficiency of the proposed update strategies.
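The deadline-based idea behind FMOUS, telling each object in advance the earliest future time at which its position uncertainty could exceed a deviation threshold, can be sketched as follows. The function names and the linear uncertainty-growth assumption are illustrative, not taken from the paper:

```python
def next_update_time(now, speed_error_bound, deviation_threshold):
    """Earliest future time at which the deviation between the server's
    predicted position and the object's true position can exceed the
    threshold, assuming the deviation grows at most linearly in time at
    rate `speed_error_bound` (distance units per time unit)."""
    if speed_error_bound <= 0:
        return float("inf")  # prediction exact: no update ever needed
    return now + deviation_threshold / speed_error_bound


def needs_update(now, last_update, speed_error_bound, deviation_threshold):
    """Client-side check: has the precomputed update deadline been reached?"""
    return now >= next_update_time(last_update, speed_error_bound,
                                   deviation_threshold)
```

With the deadline known, an object can sleep until `next_update_time` instead of continuously comparing its true and predicted positions, which is where the processing-overhead saving comes from.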
Update of the Polar SWIFT model for polar stratospheric ozone loss (Polar SWIFT version 2)
Wohltmann, Ingo; Lehmann, Ralph; Rex, Markus
2017-07-01
The Polar SWIFT model is a fast scheme for calculating the chemistry of stratospheric ozone depletion in polar winter. It is intended for use in global climate models (GCMs) and Earth system models (ESMs) to enable the simulation of mutual interactions between the ozone layer and climate. To date, climate models often use prescribed ozone fields, since a full stratospheric chemistry scheme is computationally very expensive. Polar SWIFT is based on a set of coupled differential equations, which simulate the polar vortex-averaged mixing ratios of the key species involved in polar ozone depletion on a given vertical level. These species are O3, chemically active chlorine (ClOx), HCl, ClONO2 and HNO3. The only external input parameters that drive the model are the fraction of the polar vortex in sunlight and the fraction of the polar vortex below the temperatures necessary for the formation of polar stratospheric clouds. Here, we present an update of the Polar SWIFT model introducing several improvements over the original model formulation. In particular, the model is now trained on vortex-averaged reaction rates of the ATLAS Chemistry and Transport Model, which enables a detailed look at individual processes and an independent validation of the different parameterizations contained in the differential equations. The training of the original Polar SWIFT model was based on fitting complete model runs to satellite observations and did not allow for this. A revised formulation of the system of differential equations is developed, which closely fits vortex-averaged reaction rates from ATLAS that represent the main chemical processes influencing ozone. In addition, a parameterization for the HNO3 change by denitrification is included. The rates of change of the concentrations of the chemical species of the Polar SWIFT model are purely chemical rates of change in the new version, whereas in the original Polar SWIFT model, they included a transport effect caused by the
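A vortex-averaged scheme of this kind reduces to a small system of coupled differential equations driven by the two external inputs (sunlit fraction and PSC fraction). The toy two-species analogue below illustrates the structure only; the rate constants and forcing values are invented, whereas the real Polar SWIFT coefficients are fitted to ATLAS reaction rates:

```python
def step(o3, clox, f_sun, f_psc, dt, k_act=0.5, k_loss=0.8, k_deact=0.1):
    """One forward-Euler step of a toy vortex-averaged ozone-loss system.
    f_sun: fraction of the polar vortex in sunlight (0..1)
    f_psc: fraction of the vortex cold enough for PSC formation (0..1)
    """
    d_clox = k_act * f_psc * (1.0 - clox) - k_deact * clox  # activation on PSCs
    d_o3 = -k_loss * f_sun * clox * o3                      # sunlit catalytic loss
    o3_new = max(o3 + dt * d_o3, 0.0)
    clox_new = min(max(clox + dt * d_clox, 0.0), 1.0)
    return o3_new, clox_new


def run(days=30, dt=1.0):
    """Integrate the toy system over a polar winter with constant forcing."""
    o3, clox = 3.0, 0.0  # ppmv ozone; activated-chlorine fraction
    for _ in range(days):
        o3, clox = step(o3, clox, f_sun=0.3, f_psc=0.6, dt=dt)
    return o3, clox
```

The real scheme replaces these hand-picked constants with parameterizations trained on vortex-averaged ATLAS reaction rates, one vertical level at a time.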
PACIAE 2.1: An Updated Issue of Parton and Hadron Cascade Model PACIAE 2.0
Institute of Scientific and Technical Information of China (English)
SA Ben-hao; ZHOU Dai-mei; YAN Yu-liang; DONG Bao-guo; CAI Xu
2013-01-01
We have updated the parton and hadron cascade model PACIAE 2.0 to the new issue, PACIAE 2.1. The PACIAE model is based on PYTHIA. In the PYTHIA model, once the generated particle or parton transverse momentum pT is randomly sampled, the px and py components are originally put on the circle with radius pT randomly. Now, it is put
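The sampling step described above, placing (px, py) on a circle of radius pT, amounts to drawing a uniform azimuthal angle. A minimal sketch (not the PACIAE code itself):

```python
import math
import random


def sample_px_py(pt, rng=random):
    """Distribute a sampled transverse momentum pT over its x and y
    components by drawing the azimuthal angle phi uniformly on [0, 2*pi),
    so that px**2 + py**2 == pT**2 by construction."""
    phi = rng.uniform(0.0, 2.0 * math.pi)
    return pt * math.cos(phi), pt * math.sin(phi)
```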
A visual graphic/haptic rendering model for hysteroscopic procedures.
Lim, Fabian; Brown, Ian; McColl, Ryan; Seligman, Cory; Alsaraira, Amer
2006-03-01
Hysteroscopy is an extensively popular option for evaluating and treating women with infertility. The procedure utilises an endoscope, inserted through the vagina and cervix, to examine the intra-uterine cavity via a monitor. The difficulty of hysteroscopy from the surgeon's perspective is the visuospatial task of interpreting 3D anatomy on a 2D monitor, and the associated psychomotor skill of overcoming the fulcrum effect. Despite the widespread use of this procedure, currently qualified hysteroscopy surgeons have not been trained in the fundamentals through an organised curriculum. The emergence of virtual reality as an educational tool for this and other endoscopic procedures has therefore raised considerable interest. The ultimate objective is the inclusion of virtual reality training as a mandatory component of gynaecologic endoscopy training. Part of this process involves the design of a simulator encompassing the technical difficulties and complications associated with the procedure. The proposed research examines fundamental hysteroscopy factors and current training and accreditation, and proposes a hysteroscopic simulator design that is suitable for education and training.
A Comparison of Exposure Control Procedures in CATs Using the 3PL Model
Leroux, Audrey J.; Lopez, Myriam; Hembry, Ian; Dodd, Barbara G.
2013-01-01
This study compares the progressive-restricted standard error (PR-SE) exposure control procedure to three commonly used procedures in computerized adaptive testing, the randomesque, Sympson-Hetter (SH), and no exposure control methods. The performance of these four procedures is evaluated using the three-parameter logistic model under the…
Institute of Scientific and Technical Information of China (English)
LI Hong-wei; YANG He; SUN Zhi-chao
2006-01-01
Computational stability and efficiency are the key problems for numerical modeling of crystal plasticity, and they evidently limit its development and application in finite element (FE) simulation. Since implicit iterative algorithms are inefficient and have difficulty determining initial values, an explicit incremental-update algorithm for the elasto-viscoplastic constitutive relation was developed in the intermediate frame using the second Piola-Kirchhoff (P-K) stress and Green strain. The increments of stress and slip resistance were solved by a calculation loop of linear equation sets. The reorientation of the crystal as well as the elastic strain can be obtained from a polar decomposition of the elastic deformation gradient. The user material subroutine VUMAT was developed to combine the crystal elasto-viscoplastic constitutive model with ABAQUS/Explicit. Numerical studies were performed on a cubic upset model with OFHC material (FCC crystal). The comparison of the numerical results with those obtained by the implicit iterative algorithm and with experiments demonstrates that the explicit algorithm is reliable. Furthermore, the effects of material anisotropy, the rate sensitivity coefficient (RSC) and loading speeds on the deformation were studied. The numerical studies indicate that the explicit algorithm is suitable and efficient for large deformation analyses where anisotropy due to texture is important.
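The flavor of such an explicit incremental-update scheme can be shown with a one-dimensional elasto-viscoplastic analogue: a power-law slip rate, forward increments of stress and slip resistance, and no implicit iteration. All material constants below are illustrative placeholders, not the paper's OFHC parameters:

```python
def explicit_update(steps=2000, dt=5e-5, eps_rate=1.0,
                    E=100.0e3, gdot0=1.0e-3, n=20.0, g0=100.0, h=50.0):
    """Explicit incremental update of a 1-D elasto-viscoplastic model:
    stress tau and slip resistance g are advanced by forward increments,
    avoiding implicit iteration and the initial-value problem it brings."""
    tau, g = 0.0, g0  # resolved stress and slip resistance (MPa)
    for _ in range(steps):
        sign = 1.0 if tau >= 0.0 else -1.0
        gamma_dot = gdot0 * (abs(tau) / g) ** n * sign  # power-law slip rate
        tau += dt * E * (eps_rate - gamma_dot)          # elastic rate minus plastic flow
        g += dt * h * abs(gamma_dot)                    # strain hardening
    return tau, g
```

With these numbers the stress rises elastically, saturates near the rate-dependent flow stress, and the slip resistance hardens slowly; the explicit step must simply be kept small enough for stability, mirroring the trade-off discussed in the abstract.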
An updated analytic model for the attenuation by the intergalactic medium
Inoue, Akio K; Iwata, Ikuru
2014-01-01
We present an updated version of the so-called Madau model for the attenuation by intergalactic neutral hydrogen of the radiation from distant objects. First, we derive a distribution function of the intergalactic absorbers from the latest observational statistics of the Ly$\\alpha$ forest, Lyman limit systems, and damped Ly$\\alpha$ systems. The distribution function excellently reproduces the observed redshift evolutions of the Ly$\\alpha$ depression and of the mean free path of the Lyman continuum simultaneously. Then, we derive a set of analytic functions which describe the mean intergalactic attenuation curve for objects at $z>0.5$. For some redshifts, our new model predicts attenuation magnitudes through the usual broad-band filters that differ by 0.5--1 mag from the original Madau model. Such a difference would cause a photometric redshift uncertainty of 0.2, in particular at $z\\simeq3$--4. Finally, we find a more than 0.5 mag overestimation of the Lyman continuum attenuation i...
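One building block of such a model, the mean Ly-alpha forest transmission as a function of source redshift, can be illustrated with a Madau-style effective optical depth. The power-law coefficients below are illustrative placeholders, not the paper's fitted values:

```python
import math


def lya_transmission(z_source, a=0.0036, gamma=3.55):
    """Mean Ly-alpha forest transmission just blueward of Ly-alpha at the
    source, using an effective optical depth tau_eff(z) = a * (1+z)**gamma
    (a Madau-style power law; the coefficients here are illustrative)."""
    tau_eff = a * (1.0 + z_source) ** gamma
    return math.exp(-tau_eff)
```

Transmission falls steeply with redshift, which is the "Ly-alpha depression" whose evolution the absorber distribution function must reproduce.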
Procedure to Determine Coefficients for the Sandia Array Performance Model (SAPM)
Energy Technology Data Exchange (ETDEWEB)
King, Bruce Hardison; Hansen, Clifford; Riley, Daniel; Robinson, Charles David; Pratt, Larry
2016-06-01
The Sandia Array Performance Model (SAPM), a semi-empirical model for predicting PV system power, has been in use for more than a decade. While several studies have presented comparisons of measurements and analysis results among laboratories, detailed procedures for determining model coefficients have not yet been published. Independent test laboratories must develop in-house procedures to determine SAPM coefficients, which contributes to uncertainty in the resulting models. Here we present a standard procedure for calibrating the SAPM using outdoor electrical and meteorological measurements. Analysis procedures are illustrated with data measured outdoors for a 36-cell silicon photovoltaic module.
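One core step of such a calibration, regressing the maximum-power point against measured irradiance and cell temperature, can be sketched with an ordinary least-squares fit. This is a simplified stand-in for the full SAPM coefficient procedure; the model form and variable names here are ours:

```python
def fit_pmp_model(E, Tc, Pmp, E0=1000.0, T0=25.0):
    """Least-squares estimate of Pmp0 and gamma in the simplified model
        Pmp = Pmp0 * (E/E0) * (1 + gamma * (Tc - T0))
    from outdoor measurements of irradiance E (W/m^2), cell temperature
    Tc (degC) and maximum power Pmp (W)."""
    n = len(Pmp)
    # Linearize: Pmp / (E/E0) = Pmp0 + Pmp0*gamma*(Tc - T0)  ->  y = b0 + b1*x
    xs = [t - T0 for t in Tc]
    ys = [p / (e / E0) for p, e in zip(Pmp, E)]
    mx = sum(xs) / n
    my = sum(ys) / n
    b1 = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
          / sum((x - mx) ** 2 for x in xs))
    b0 = my - b1 * mx
    return b0, b1 / b0  # Pmp0 (W), gamma (1/degC)
```

The published procedure determines many more coefficients (per-cell-string voltages, spectral and angle-of-incidence terms); the regression structure, however, is the same idea repeated per coefficient group.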
A Survey of Procedural Methods for Terrain Modelling
Smelik, R.M.; Kraker, J.K. de; Groenewegen, S.A.; Tutenel, T.; Bidarra, R.
2009-01-01
Procedural methods are a promising but underused alternative to manual content creation. Commonly heard drawbacks are the randomness of and the lack of control over the output and the absence of integrated solutions, although more recent publications increasingly address these issues. This paper sur
Energy Technology Data Exchange (ETDEWEB)
Manrique, A. [Centre Hospitalier Universitaire, 76 - Rouen (France); Marie, P.Y. [Centre Hospitalier Universitaire Nancy-Brabois, Medecine Nucleaire, 54 - Vandoeuvre-les-Nancy (France); Maunoury, Ch.; Acar, Ph. [Centre Hospitalier Universitaire Necker Enfants Malades Biophysique et Medecine Nucleaire, 75 - Paris (France); Agostini, D. [Centre Hospitalier Universitaire, Service de Medecine Nucleaire, 14 - Caen (France)
2002-12-01
The updated guidelines for nuclear cardiology procedures are reviewed in this article. We present the minimum technical conditions for stress testing practice, the recommendations for the different ischemia activation tests, and the choice of the stress test. (N.C.)
Advanced Test Reactor Core Modeling Update Project Annual Report for Fiscal Year 2010
Energy Technology Data Exchange (ETDEWEB)
Rahmat Aryaeinejad; Douglas S. Crawford; Mark D. DeHart; George W. Griffith; D. Scott Lucas; Joseph W. Nielsen; David W. Nigg; James R. Parry; Jorge Navarro
2010-09-01
Legacy computational reactor physics software tools and protocols currently used for support of Advanced Test Reactor (ATR) core fuel management and safety assurance and, to some extent, experiment management are obsolete, inconsistent with the state of modern nuclear engineering practice, and are becoming increasingly difficult to properly verify and validate (V&V). Furthermore, the legacy staff knowledge required for application of these tools and protocols from the 1960s and 1970s is rapidly being lost due to staff turnover and retirements. In 2009 the Idaho National Laboratory (INL) initiated a focused effort to address this situation through the introduction of modern high-fidelity computational software and protocols, with appropriate V&V, within the next 3-4 years via the ATR Core Modeling and Simulation and V&V Update (or “Core Modeling Update”) Project. This aggressive computational and experimental campaign will have a broad strategic impact on the operation of the ATR, both in terms of improved computational efficiency and accuracy for support of ongoing DOE programs as well as in terms of national and international recognition of the ATR National Scientific User Facility (NSUF).
Badhwar-O'Neill 2011 Galactic Cosmic Ray Model Update and Future Improvements
O'Neill, Pat M.; Kim, Myung-Hee Y.
2014-01-01
The Badhwar-O'Neill Galactic Cosmic Ray (GCR) Model, based on actual GCR measurements, is used by deep space mission planners for the certification of micro-electronic systems and the analysis of radiation health risks to astronauts in space missions. The BO GCR Model provides GCR flux in deep space (outside the earth's magnetosphere) for any given time from 1645 to present. The energy spectrum from 50 MeV/n-20 GeV/n is provided for ions from hydrogen to uranium. This work describes the most recent version of the BO GCR model (BO'11). BO'11 determines the GCR flux at a given time by applying an empirical time delay function to past sunspot activity. We describe the GCR measurement data used in the BO'11 update - modern data from BESS, PAMELA, CAPRICE, and ACE are emphasized over the older balloon data used for the previous BO model (BO'10). We look at the GCR flux for the last 24 solar minima and show how much greater the flux was for the cycle 24 minimum in 2010. The BO'11 Model uses the traditional, steady-state Fokker-Planck differential equation to account for particle transport in the heliosphere due to diffusion, convection, and adiabatic deceleration. It assumes a radially symmetrical diffusion coefficient derived from magnetic disturbances caused by sunspots carried onward by a constant solar wind. A more complex differential equation is now being tested to account for particle transport in the heliosphere in the next generation BO model. This new model is time-dependent (no longer a steady-state model). In the new model, the dynamics and anti-symmetrical features of the actual heliosphere are accounted for, so empirical time delay functions will no longer be required. The new model will be capable of simulating the more subtle features of modulation - such as the dependence of modulation on the Sun's polarity and on gradient and curvature drift. This improvement is expected to significantly improve the fidelity of the BO GCR model. Preliminary results of its
Evaluating procedural modelling for 3D models of informal settlements in urban design activities
Directory of Open Access Journals (Sweden)
Victoria Rautenbach
2015-11-01
Full Text Available Three-dimensional (3D) modelling and visualisation is one of the fastest growing application fields in geographic information science. 3D city models are being researched extensively for a variety of purposes and in various domains, including urban design, disaster management, education and computer gaming. These models typically depict urban business districts (downtown) or suburban residential areas. Despite informal settlements being a prevailing feature of many cities in developing countries, 3D models of informal settlements are virtually non-existent. 3D models of informal settlements could be useful in various ways, e.g. to gather information about the current environment in the informal settlements, to design upgrades, to communicate these and to educate inhabitants about environmental challenges. In this article, we described the development of a 3D model of the Slovo Park informal settlement in the City of Johannesburg Metropolitan Municipality, South Africa. Instead of using time-consuming traditional manual methods, we followed the procedural modelling technique. Visualisation characteristics of 3D models of informal settlements were described and the importance of each characteristic in urban design activities for informal settlement upgrades was assessed. Next, the visualisation characteristics of the Slovo Park model were evaluated. The results of the evaluation showed that the 3D model produced by the procedural modelling technique is suitable for urban design activities in informal settlements. The visualisation characteristics and their assessment are also useful as guidelines for developing 3D models of informal settlements. In future, we plan to empirically test the use of such 3D models in urban design projects in informal settlements.
Update of an Object Oriented Track Reconstruction Model for LHC Experiments
Institute of Scientific and Technical Information of China (English)
David Candilin; Sijin Qian; et al.
2001-01-01
In this update report on an Object Oriented (OO) track reconstruction model, which was presented at CHEP'97, CHEP'98, and CHEP'2000, we describe the new developments since the beginning of year 2000. The OO model for the Kalman filtering method has been designed for high energy physics experiments at high luminosity hadron colliders. It was coded in the C++ programming language, originally for the CMS experiment at the future Large Hadron Collider (LHC) at CERN, and has since been successfully implemented into three different OO computing environments (including the level-2 trigger and offline software systems) of ATLAS (another major experiment at the LHC). For the level-2 trigger software environment, we selectively present some of the latest performance results (e.g. the B-physics event selection for the ATLAS level-2 trigger, the robustness study results, etc.). For the offline environment, we present a new 3-D space point package which provides the essential offline input. A major development after CHEP'2000 is the implementation of the OO model into the new OO software framework "Athena" of the ATLAS experiment. The new modularization of this OO package enables the model to be more flexible and more easily implemented into different software environments. It also provides the potential to handle more complicated realistic situations (e.g. to include the calibration correction and the alignment correction, etc.). Some general interface issues (e.g. the design of the common track class) of the algorithms to different framework environments have been investigated using this OO package.
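The Kalman filtering method at the heart of the model alternates prediction of the track state to the next detector plane with an update from the measured hit. A toy one-dimensional straight-track version, not the CMS/ATLAS code, looks like this:

```python
import numpy as np


def kalman_track_fit(z_planes, hits, sigma_meas):
    """Straight-line track fit with a Kalman filter.
    State x = [position, slope]; each plane measures position only."""
    x = np.array([hits[0], 0.0])        # crude initial state from first hit
    P = np.diag([sigma_meas**2, 1.0])   # loose initial covariance
    H = np.array([[1.0, 0.0]])          # measurement matrix (position only)
    R = np.array([[sigma_meas**2]])     # measurement noise
    z_prev = z_planes[0]
    for z, m in zip(z_planes[1:], hits[1:]):
        dz = z - z_prev
        F = np.array([[1.0, dz], [0.0, 1.0]])  # propagate state along z
        x = F @ x                              # prediction step
        P = F @ P @ F.T
        S = H @ P @ H.T + R                    # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
        x = x + K @ (np.array([m]) - H @ x)    # measurement update
        P = (np.eye(2) - K @ H) @ P
        z_prev = z
    return x  # filtered [position, slope] at the last plane
```

Real track reconstruction adds multiple scattering noise, magnetic-field propagation and hit selection on top of this predict/update skeleton.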
Benchmarking Exercises To Validate The Updated ELLWF GoldSim Slit Trench Model
Energy Technology Data Exchange (ETDEWEB)
Taylor, G. A.; Hiergesell, R. A.
2013-11-12
The Savannah River National Laboratory (SRNL) results of the 2008 Performance Assessment (PA) (WSRC, 2008) sensitivity/uncertainty analyses conducted for the trenches located in the E-Area Low-Level Waste Facility (ELLWF) were subject to review by the United States Department of Energy (U.S. DOE) Low-Level Waste Disposal Facility Federal Review Group (LFRG) (LFRG, 2008). LFRG comments were generally approving of the use of probabilistic modeling in GoldSim to support the quantitative sensitivity analysis. A recommendation was made, however, that the probabilistic models be revised and updated to bolster their defensibility. SRS committed to addressing those comments and, in response, contracted with Neptune and Company to rewrite the three GoldSim models. The initial portion of this work, development of the Slit Trench (ST), Engineered Trench (ET) and Components-in-Grout (CIG) trench GoldSim models, has been completed. The work described in this report utilizes these revised models to test and evaluate the results against the 2008 PORFLOW model results. This was accomplished by first performing a rigorous code-to-code comparison of the PORFLOW and GoldSim codes and then performing a deterministic comparison of the two-dimensional (2D) unsaturated zone and three-dimensional (3D) saturated zone PORFLOW Slit Trench models against results from the one-dimensional (1D) GoldSim Slit Trench model. The results of the code-to-code comparison indicate that when the mechanisms of radioactive decay, partitioning of contaminants between solid and fluid, implementation of specific boundary conditions and the imposition of solubility controls were all tested using identical flow fields, GoldSim and PORFLOW produced nearly identical results. It is also noted that GoldSim has an advantage over PORFLOW in that it simulates all radionuclides simultaneously, thus avoiding a potential problem as demonstrated in the Case Study (see Section 2.6). Hence, it was concluded that the follow
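The spirit of a code-to-code comparison, running two independent implementations on an identical problem and checking agreement, can be illustrated with radioactive decay, one of the mechanisms tested. Both functions below are generic sketches, unrelated to the GoldSim or PORFLOW code bases:

```python
import math


def decay_analytic(n0, half_life, t):
    """Closed-form radioactive decay: N(t) = N0 * exp(-lambda * t)."""
    lam = math.log(2.0) / half_life
    return n0 * math.exp(-lam * t)


def decay_numeric(n0, half_life, t, steps=100000):
    """Independent forward-Euler integration of dN/dt = -lambda * N,
    standing in for a second code implementing the same mechanism."""
    lam = math.log(2.0) / half_life
    dt = t / steps
    n = n0
    for _ in range(steps):
        n -= lam * n * dt
    return n
```

Agreement within a stated tolerance on identical inputs is the acceptance criterion; disagreement localizes which mechanism's implementation needs scrutiny.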
A Long-Term Memory Competitive Process Model of a Common Procedural Error
2013-08-01
A novel computational cognitive model explains human procedural error in terms of declarative memory processes. This is an early version of a process model intended to predict and explain multiple classes of procedural error a priori. We begin with postcompletion error (PCE), a type of systematic
Transport Simulation Model Calibration with Two-Step Cluster Analysis Procedure
Directory of Open Access Journals (Sweden)
Zenina Nadezda
2015-12-01
Full Text Available The calibration results of a transport simulation model depend on the selected parameters and their values. The aim of the present paper is to calibrate a transport simulation model with a two-step cluster analysis procedure in order to improve the reliability of the simulation results. Two global parameters have been considered: headway and simulation step. Normal, uniform and exponential models have been considered for headway generation. Applying the two-step cluster analysis procedure to the calibration has reduced the time needed to select the simulation step and the headway generation model values.
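As an illustration of clustering calibration runs, the sketch below uses a minimal pure-Python k-means as a stand-in for the two-step cluster procedure (which the paper applies via a statistical package). Grouping candidate runs by their error scores lets one representative parameter combination be examined per cluster instead of inspecting every run:

```python
def kmeans(points, k, iters=50):
    """Minimal 1-D k-means: partition calibration-run scores into k
    clusters and return the cluster centers and member lists."""
    centers = points[:k]  # naive initialization from the first k points
    groups = [[] for _ in range(k)]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda c: (p - centers[c]) ** 2)
            groups[i].append(p)
        centers = [sum(g) / len(g) if g else centers[i]
                   for i, g in enumerate(groups)]
    return centers, groups
```

The actual two-step procedure additionally pre-clusters the data and chooses k automatically via an information criterion; the grouping idea is the same.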
A Procedure for Building Product Models in Intelligent Agent-based Operations Management
DEFF Research Database (Denmark)
Hvam, Lars; Riis, Jesper; Malis, Martin;
2003-01-01
This article presents a procedure for building product models to support the specification processes dealing with sales, design of product variants and production preparation. The procedure includes, as the first phase, an analysis and redesign of the business processes that are to be supported by product models. The next phase includes an analysis of the product assortment, and the set-up of a so-called product master. Finally, the product model is designed and implemented by using object-oriented modelling. The procedure is developed in order to ensure that the product models constructed are fit...
Advanced Test Reactor Core Modeling Update Project Annual Report for Fiscal Year 2013
Energy Technology Data Exchange (ETDEWEB)
David W. Nigg
2013-09-01
Legacy computational reactor physics software tools and protocols currently used for support of Advanced Test Reactor (ATR) core fuel management and safety assurance, and to some extent, experiment management, are inconsistent with the state of modern nuclear engineering practice, and are difficult, if not impossible, to verify and validate (V&V) according to modern standards. Furthermore, the legacy staff knowledge required for effective application of these tools and protocols from the 1960s and 1970s is rapidly being lost due to staff turnover and retirements. In late 2009, the Idaho National Laboratory (INL) initiated a focused effort, the ATR Core Modeling Update Project, to address this situation through the introduction of modern high-fidelity computational software and protocols. This aggressive computational and experimental campaign will have a broad strategic impact on the operation of the ATR, both in terms of improved computational efficiency and accuracy for support of ongoing DOE programs as well as in terms of national and international recognition of the ATR National Scientific User Facility (NSUF).
An evolutionary cascade model for sauropod dinosaur gigantism--overview, update and tests.
Sander, P Martin
2013-01-01
Sauropod dinosaurs are a group of herbivorous dinosaurs which exceeded all other terrestrial vertebrates in mean and maximal body size. Sauropod dinosaurs were also the most successful and long-lived herbivorous tetrapod clade, but no abiological factors, such as global environmental parameters conducive to their gigantism, can be identified. These facts justify major efforts by evolutionary biologists and paleontologists to understand sauropods as living animals and to explain their evolutionary success and uniquely gigantic body size. Contributions to this research program have come from many fields and can be synthesized into a biological evolutionary cascade model of sauropod dinosaur gigantism (sauropod gigantism ECM). This review focuses on the sauropod gigantism ECM, providing an updated version based on the contributions to the PLoS ONE sauropod gigantism collection and on other very recent published evidence. The model consists of five separate evolutionary cascades ("Reproduction", "Feeding", "Head and neck", "Avian-style lung", and "Metabolism"). Each cascade starts with observed or inferred basal traits that may be either plesiomorphic or derived at the level of Sauropoda. Each trait confers hypothetical selective advantages which permit the evolution of the next trait. Feedback loops in the ECM consist of selective advantages originating from traits higher in the cascades but affecting lower traits. All cascades end in the trait "Very high body mass". Each cascade is linked to at least one other cascade. Important plesiomorphic traits of sauropod dinosaurs that entered the model were oviparity as well as the absence of mastication. Important evolutionary innovations (derived traits) were an avian-style respiratory system and an elevated basal metabolic rate. Comparison with other tetrapod lineages identifies factors limiting body size.
An evolutionary cascade model for sauropod dinosaur gigantism--overview, update and tests.
Directory of Open Access Journals (Sweden)
P Martin Sander
Full Text Available Sauropod dinosaurs are a group of herbivorous dinosaurs which exceeded all other terrestrial vertebrates in mean and maximal body size. Sauropod dinosaurs were also the most successful and long-lived herbivorous tetrapod clade, but no abiological factors, such as global environmental parameters conducive to their gigantism, can be identified. These facts justify major efforts by evolutionary biologists and paleontologists to understand sauropods as living animals and to explain their evolutionary success and uniquely gigantic body size. Contributions to this research program have come from many fields and can be synthesized into a biological evolutionary cascade model of sauropod dinosaur gigantism (sauropod gigantism ECM). This review focuses on the sauropod gigantism ECM, providing an updated version based on the contributions to the PLoS ONE sauropod gigantism collection and on other very recent published evidence. The model consists of five separate evolutionary cascades ("Reproduction", "Feeding", "Head and neck", "Avian-style lung", and "Metabolism"). Each cascade starts with observed or inferred basal traits that may be either plesiomorphic or derived at the level of Sauropoda. Each trait confers hypothetical selective advantages which permit the evolution of the next trait. Feedback loops in the ECM consist of selective advantages originating from traits higher in the cascades but affecting lower traits. All cascades end in the trait "Very high body mass". Each cascade is linked to at least one other cascade. Important plesiomorphic traits of sauropod dinosaurs that entered the model were oviparity as well as the absence of mastication. Important evolutionary innovations (derived traits) were an avian-style respiratory system and an elevated basal metabolic rate. Comparison with other tetrapod lineages identifies factors limiting body size.
Comparison of Real World Energy Consumption to Models and Department of Energy Test Procedures
Energy Technology Data Exchange (ETDEWEB)
Goetzler, William [Navigant Consulting, Inc., Burlington, MA (United States); Sutherland, Timothy [Navigant Consulting, Inc., Burlington, MA (United States); Kar, Rahul [Navigant Consulting, Inc., Burlington, MA (United States); Foley, Kevin [Navigant Consulting, Inc., Burlington, MA (United States)
2011-09-01
This study investigated the real-world energy performance of appliances and equipment as compared with models and test procedures. The study sought to determine whether the U.S. Department of Energy and industry test procedures actually replicate real-world conditions, whether performance degrades over time, and whether installation patterns and procedures differ from the ideal. The study first identified and prioritized appliances to be evaluated. It then determined whether real-world energy consumption differed substantially from predictions and assessed whether performance degrades over time. Finally, the study recommended test procedure modifications and areas for future research.
Modelling Room Cooling Capacity with Fuzzy Logic Procedure
African Journals Online (AJOL)
Modelling with fuzzy logic is an approach to forming ... the way humans think and make judgments [10]. ... artificial intelligence and expert systems [17, 18] to .... from selected cases, human professional computation and the Model predictions.
Recent updates in the aerosol component of the C-IFS model run by ECMWF
Remy, Samuel; Boucher, Olivier; Hauglustaine, Didier; Kipling, Zak; Flemming, Johannes
2017-04-01
The Composition-Integrated Forecast System (C-IFS) is a global atmospheric composition forecasting tool, run by ECMWF within the framework of the Copernicus Atmosphere Monitoring Service (CAMS). The aerosol model of C-IFS is a simple bulk scheme that forecasts 5 species: dust, sea salt, black carbon, organic matter and sulfate. Three bins each represent dust and sea salt, for the super-coarse, coarse and fine modes of these species (Morcrette et al., 2009). This talk will present recent updates of the aerosol model and introduce forthcoming developments. It will also present the impact of these changes as scores measured against AERONET Aerosol Optical Depth (AOD) and Airbase PM10 observations. The next cycle of C-IFS will include a mass fixer, because the semi-Lagrangian advection scheme used in C-IFS is not mass-conservative. C-IFS now offers the possibility to emit biomass-burning aerosols at an injection height that is provided by a new version of the Global Fire Assimilation System (GFAS). Secondary Organic Aerosol (SOA) production will be scaled on non-biomass-burning CO fluxes. This approach makes it possible to represent the anthropogenic contribution to SOA production; it brought a notable improvement in the skill of the model, especially over Europe. Lastly, the emissions of SO2 are now provided by the MACCity inventory instead of an older version of the EDGAR dataset. The seasonal and yearly variability of SO2 emissions is better captured by the MACCity dataset. Upcoming developments of the aerosol model of C-IFS consist mainly in the implementation of a nitrate and ammonium module, with 2 bins (fine and coarse) for nitrate. Nitrate and ammonium sulfate particle formation from gaseous precursors is represented following Hauglustaine et al. (2014); formation of coarse nitrate over pre-existing sea-salt or dust particles is also represented. This extension of the forward model improved scores over heavily populated areas such as Europe, China and Eastern
Advanced Test Reactor Core Modeling Update Project Annual Report for Fiscal Year 2012
Energy Technology Data Exchange (ETDEWEB)
David W. Nigg, Principal Investigator; Kevin A. Steuhm, Project Manager
2012-09-01
Legacy computational reactor physics software tools and protocols currently used for support of Advanced Test Reactor (ATR) core fuel management and safety assurance, and to some extent, experiment management, are inconsistent with the state of modern nuclear engineering practice, and are difficult, if not impossible, to properly verify and validate (V&V) according to modern standards. Furthermore, the legacy staff knowledge required for application of these tools and protocols from the 1960s and 1970s is rapidly being lost due to staff turnover and retirements. In late 2009, the Idaho National Laboratory (INL) initiated a focused effort, the ATR Core Modeling Update Project, to address this situation through the introduction of modern high-fidelity computational software and protocols. This aggressive computational and experimental campaign will have a broad strategic impact on the operation of the ATR, both in terms of improved computational efficiency and accuracy for support of ongoing DOE programs and in terms of national and international recognition of the ATR National Scientific User Facility (NSUF). The ATR Core Modeling Update Project, targeted for full implementation in phase with the next anticipated ATR Core Internals Changeout (CIC) in the 2014-2015 time frame, began during the last quarter of Fiscal Year 2009, and has just completed its third full year. Key accomplishments so far have encompassed both computational and experimental work. A new suite of stochastic and deterministic transport theory based reactor physics codes and their supporting nuclear data libraries (HELIOS, KENO6/SCALE, NEWT/SCALE, ATTILA, and an extended implementation of MCNP5) has been installed at the INL under various licensing arrangements. Corresponding models of the ATR and ATRC are now operational with all five codes, demonstrating the basic feasibility of the new code packages for their intended purpose. Of particular importance, a set of as-run core
Hageman, Philip L.; Desborough, George A.; Lamothe, Paul J.; Theodorakos, Peter M.
2000-01-01
This report supersedes, revises, and updates information and data previously released in Open-File Report 98-624 (Montour and others, 1998). Data for this report were derived from leaching of mine-waste composite samples using a modification of E.P.A. Method 1312, Synthetic Precipitation Leaching Procedure (SPLP). In 1997, members of the U.S. Geological Survey Mine Waste Characterization Project collected four mine-waste composite samples from mining districts near Silverton, Colorado (MAY and YUK), and near Leadville, Colorado (VEN and SUN). This report presents analytical results from these sites.
Institute of Scientific and Technical Information of China (English)
Chunying Zhang; Sun Chen; Fang Wu; Kai Song
2015-01-01
To overcome the large time delay in measuring the hardness of mixed rubber, rheological parameters were used to predict the hardness. A novel Q-based model updating strategy was proposed as a universal platform to track time-varying properties. Using a few selected support samples to update the model, the strategy can dramatically reduce the storage cost and overcome the adverse influence of low signal-to-noise-ratio samples. Moreover, it can be applied to any statistical process monitoring system without drastic changes, which makes it practical for industrial use. As examples, the Q-based strategy was integrated with three popular algorithms (partial least squares (PLS), recursive PLS (RPLS), and kernel PLS (KPLS)) to form the novel regression algorithms QPLS, QRPLS and QKPLS, respectively. Applications to predicting mixed-rubber hardness at a large-scale tire plant in east China confirm the theoretical considerations.
Indian Academy of Sciences (India)
J C Fu; M H Hsu; Y Duann
2016-02-01
Flood is the worst weather-related hazard in Taiwan because of its steep terrain and frequent storms. Tropical storms often result in disastrous flash floods. Providing reliable forecasts of water stages in rivers is indispensable for proper emergency response during floods. A river hydraulic model based on dynamic wave theory, solved with an implicit finite-difference method, is developed with river-roughness updating for flash flood forecasting. An artificial neural network (ANN) is employed to update the roughness of rivers in accordance with the observed river stages at each time step of the flood routing process. Several typhoon events at the Tamsui River are used to evaluate the accuracy of the flood forecasts. The results show that the adaptively updated n-values of roughness provide the hydraulic model with a better flow state for subsequent forecasting at significant locations and along longitudinal river profiles.
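The roughness-updating step described above can be illustrated with a minimal sketch. Here a simple proportional feedback rule stands in for the paper's ANN updater, and all function names, gains, bounds, and stage values are hypothetical:

```python
def update_roughness(n, stage_sim, stage_obs, gain=0.02, n_min=0.02, n_max=0.08):
    """One roughness-update step: if the simulated stage is too low,
    increase Manning's n (more friction raises the stage), and vice
    versa.  A proportional rule stands in for the paper's ANN."""
    n_new = n + gain * (stage_obs - stage_sim)
    return min(max(n_new, n_min), n_max)

# Hypothetical sequence of (simulated, observed) stages in meters,
# one pair per time step of the flood routing process.
n = 0.035
for stage_sim, stage_obs in [(2.1, 2.4), (2.3, 2.4), (2.45, 2.4)]:
    n = update_roughness(n, stage_sim, stage_obs)
```

The bounds keep the adapted n-value inside a physically plausible range for natural channels.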
Parabolic Trough Collector Cost Update for the System Advisor Model (SAM)
Energy Technology Data Exchange (ETDEWEB)
Kurup, Parthiv [National Renewable Energy Lab. (NREL), Golden, CO (United States); Turchi, Craig S. [National Renewable Energy Lab. (NREL), Golden, CO (United States)
2015-11-01
This report updates the baseline cost for parabolic trough solar fields in the United States within NREL's System Advisor Model (SAM). SAM, available at no cost at https://sam.nrel.gov/, is a performance and financial model designed to facilitate decision making for people involved in the renewable energy industry. SAM is the primary tool used by NREL and the U.S. Department of Energy (DOE) for estimating the performance and cost of concentrating solar power (CSP) technologies and projects. The study performed a bottom-up build and cost estimate for two state-of-the-art parabolic trough designs -- the SkyTrough and the Ultimate Trough. The SkyTrough analysis estimated the potential installed cost for a solar field of 1500 SCAs as $170/m² ± $6/m². The investigation found that SkyTrough installed costs were sensitive to factors such as raw aluminum alloy cost and production volume. For example, in the case of the SkyTrough, the installed cost would rise to nearly $210/m² if the aluminum alloy cost were $1.70/lb instead of $1.03/lb. Accordingly, one must be aware of fluctuations in the relevant commodities markets to track system cost over time. The estimated installed cost for the Ultimate Trough was only slightly higher at $178/m², which includes an assembly facility of $11.6 million amortized over the required production volume. Considering the size and overall cost of a 700 SCA Ultimate Trough solar field, two parallel production lines in a fully covered assembly facility, each with the specific torque box, module and mirror jigs, would be justified for a full CSP plant.
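The quoted aluminum-price sensitivity implies an approximate aluminum intensity for the SkyTrough field. A back-of-envelope sketch, assuming the entire cost difference between the two quoted points is driven by the alloy price (an assumption, not a figure from the report):

```python
base_cost = 170.0        # $/m² at an alloy price of $1.03/lb (report value)
high_cost = 210.0        # $/m² at $1.70/lb ("nearly $210" in the report)
p_low, p_high = 1.03, 1.70

# Implied aluminum intensity if the whole cost delta is alloy price.
lb_per_m2 = (high_cost - base_cost) / (p_high - p_low)

def installed_cost(alloy_price):
    """Linear cost model anchored at the two quoted points."""
    return base_cost + lb_per_m2 * (alloy_price - p_low)
```

This yields roughly 60 lb of aluminum per m² of aperture, a rough upper bound since other costs also vary with production volume.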
Energy Technology Data Exchange (ETDEWEB)
Katya Le Blanc; Johanna Oxstrand
2012-04-01
The nuclear industry is constantly trying to decrease the human error rate, especially for errors associated with procedure use. As a step toward improving procedure use performance, researchers, together with the nuclear industry, have been looking at replacing the current paper-based procedures with computer-based procedure systems. The concept of computer-based procedures is not new; however, most research has focused on procedures used in the main control room, mainly emergency operating procedures and normal operating procedures. Based on lessons learned from these previous efforts, we are now exploring a less-studied application for computer-based procedures: field procedures, i.e., procedures used by nuclear equipment operators and maintenance technicians. The Idaho National Laboratory and participants from the U.S. commercial nuclear industry are collaborating in an applied research effort with the objective of developing requirements and specifications for a computer-based procedure system to be used by field workers. The goal is to identify the types of human errors that can be mitigated by computer-based procedures and how best to design such procedures to do so. This paper describes the development of a Model of Procedure Use and the qualitative study on which the model is based. The study was conducted in collaboration with four nuclear utilities and five research institutes. During the qualitative study and the model development, requirements for computer-based procedures were identified.
Cholewa, Jason; Guimarães-Ferreira, Lucas; da Silva Teixeira, Tamiris; Naimo, Marshall Alan; Zhi, Xia; de Sá, Rafaele Bis Dal Ponte; Lodetti, Alice; Cardozo, Mayara Quadros; Zanchi, Nelo Eidy
2014-09-01
Human muscle hypertrophy brought about by voluntary exercise under laboratory conditions is the most common way to study resistance exercise training, especially because of its reliability, stimulus control and easy application to resistance training sessions at fitness centers. However, because of the complexity of the blood factors and organs involved, invasive data are difficult to obtain in human exercise training studies, owing to the integration of several organs, including adipose tissue, liver, brain and skeletal muscle. In contrast, studying skeletal muscle remodeling in animal models is easier, as the organs can be readily obtained after euthanasia; however, not all models of resistance training in animals display a robust capacity to hypertrophy the desired muscle. Moreover, some models of resistance training rely on voluntary effort, which complicates the interpretation of results, since voluntary capacity is theoretically impossible to measure in rodents. With this in mind, we review the modalities used to simulate resistance training in animals in order to present to investigators the benefits and risks of the different animal models capable of provoking skeletal muscle hypertrophy. Our second objective is to help investigators analyze and select the experimental resistance training model that best fits the research question and desired endpoints.
Spatial Statistical Procedures to Validate Input Data in Energy Models
Energy Technology Data Exchange (ETDEWEB)
Johannesson, G.; Stewart, J.; Barr, C.; Brady Sabeff, L.; George, R.; Heimiller, D.; Milbrandt, A.
2006-01-01
Energy modeling and analysis often relies on data collected for other purposes such as census counts, atmospheric and air quality observations, economic trends, and other primarily non-energy related uses. Systematic collection of empirical data solely for regional, national, and global energy modeling has not been established as in the abovementioned fields. Empirical and modeled data relevant to energy modeling is reported and available at various spatial and temporal scales that might or might not be those needed and used by the energy modeling community. The incorrect representation of spatial and temporal components of these data sets can result in energy models producing misleading conclusions, especially in cases of newly evolving technologies with spatial and temporal operating characteristics different from the dominant fossil and nuclear technologies that powered the energy economy over the last two hundred years. Increased private and government research and development and public interest in alternative technologies that have a benign effect on the climate and the environment have spurred interest in wind, solar, hydrogen, and other alternative energy sources and energy carriers. Many of these technologies require much finer spatial and temporal detail to determine optimal engineering designs, resource availability, and market potential. This paper presents exploratory and modeling techniques in spatial statistics that can improve the usefulness of empirical and modeled data sets that do not initially meet the spatial and/or temporal requirements of energy models. In particular, we focus on (1) aggregation and disaggregation of spatial data, (2) predicting missing data, and (3) merging spatial data sets. In addition, we introduce relevant statistical software models commonly used in the field for various sizes and types of data sets.
Rossini, P M; Burke, D; Chen, R; Cohen, L G; Daskalakis, Z; Di Iorio, R; Di Lazzaro, V; Ferreri, F; Fitzgerald, P B; George, M S; Hallett, M; Lefaucheur, J P; Langguth, B; Matsumoto, H; Miniussi, C; Nitsche, M A; Pascual-Leone, A; Paulus, W; Rossi, S; Rothwell, J C; Siebner, H R; Ugawa, Y; Walsh, V; Ziemann, U
2015-06-01
These guidelines provide an update of the previous IFCN report on "Non-invasive electrical and magnetic stimulation of the brain, spinal cord and roots: basic principles and procedures for routine clinical application" (Rossini et al., 1994). A new committee, composed of international experts, some of whom were on the panel of the 1994 report, was selected to produce a current state-of-the-art review of non-invasive stimulation, both for clinical application and for research in neuroscience. Since 1994, the international scientific community has seen a rapid increase in the use of non-invasive brain stimulation to study cognition, brain-behavior relationships and the pathophysiology of various neurologic and psychiatric disorders. New paradigms of stimulation and new techniques have been developed. Furthermore, a large number of studies and clinical trials have demonstrated potential therapeutic applications of non-invasive brain stimulation, especially for TMS. Recent guidelines can be found in the literature covering specific aspects of non-invasive brain stimulation, such as safety (Rossi et al., 2009), methodology (Groppa et al., 2012) and therapeutic applications (Lefaucheur et al., 2014). This updated review covers theoretical, physiological and practical aspects of non-invasive stimulation of the brain, spinal cord, nerve roots and peripheral nerves in the light of more recent knowledge, and includes some recent extensions and developments.
Recent updates in the aerosol model of C-IFS and their impact on skill scores
Remy, Samuel; Boucher, Olivier; Hauglustaine, Didier
2016-04-01
The Composition-Integrated Forecast System (C-IFS) is a global atmospheric composition forecasting tool, run by ECMWF within the framework of the Copernicus Atmospheric Monitoring Service (CAMS). The aerosol model of C-IFS is a simple bulk scheme that forecasts 5 species: dust, sea-salt, black carbon, organic matter and sulfates. Three bins represent the dust and sea-salt, for the super-coarse, coarse and fine modes of these species (Morcrette et al., 2009). This talk will present recent updates of the aerosol model and introduce forthcoming upgrades. It will also present evaluations of these changes as scores against AERONET observations. The next cycle of C-IFS will include a mass fixer, because the semi-Lagrangian advection scheme used in C-IFS is not mass-conservative. This modification has a negligible impact for most species except black carbon and organic matter; it makes it possible to close the budgets between sources and sinks in the diagnostics. Dust emissions have been tuned to favor the emission of large particles, which were under-represented. This brought an overall decrease of the burden of dust aerosol and improved scores, especially close to source regions. Biomass-burning aerosols are now emitted at an injection height provided by a new version of the Global Fire Assimilation System (GFAS). This brought a small increase in biomass-burning aerosols and a better representation of some large fire events. Lastly, SO2 emissions are now provided by the MACCity dataset instead of an older version of the EDGAR dataset. The seasonal and yearly variability of SO2 emissions is better captured by the MACCity dataset, the use of which brought significant improvements of the forecasts against observations. Upcoming upgrades of the aerosol model of C-IFS consist mainly in an overhaul of the representation of secondary aerosols. Secondary Organic Aerosol (SOA) production will be dynamically estimated by scaling it on CO fluxes. This approach has been
Using CV-GLUE procedure in analysis of wetland model predictive uncertainty.
Huang, Chun-Wei; Lin, Yu-Pin; Chiang, Li-Chi; Wang, Yung-Chieh
2014-07-01
This study develops a procedure related to Generalized Likelihood Uncertainty Estimation (GLUE), called the CV-GLUE procedure, for assessing the predictive uncertainty associated with different model structures of varying complexity. The proposed procedure comprises model calibration, validation, and predictive uncertainty estimation in terms of a characteristic coefficient of variation (characteristic CV). The procedure first performs two-stage Monte Carlo simulations to obtain behavioral parameter sets that ensure predictive accuracy, and then estimates the CV values of the model outcomes, which represent the predictive uncertainties for a model structure of interest with its associated behavioral parameter sets. Three commonly used wetland models (the first-order K-C model, the plug flow with dispersion model, and the Wetland Water Quality Model, WWQM) were compared based on data collected from a free-water-surface constructed wetland with paddy cultivation in Taipei, Taiwan. The results show that the first-order K-C model, which is simpler than the other two models, has greater predictive uncertainty. This finding shows that predictive uncertainty does not necessarily increase with the complexity of the model structure, because in this case the simpler representation of reality (the first-order K-C model) results in higher uncertainty in the model's predictions. The CV-GLUE procedure is suggested as a useful tool not only for designing constructed wetlands but also for other aspects of environmental management.
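The two-stage procedure can be sketched as follows. The decay model, data, behavioral threshold, and likelihood measure (Nash-Sutcliffe efficiency here) are all hypothetical stand-ins for the wetland models and criteria used in the study:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical first-order decay model: outlet = inlet * exp(-k).
def model(k, inflow):
    return inflow * np.exp(-k)

inflow   = np.array([10.0, 8.0, 9.0, 7.5])   # hypothetical inlet values
observed = np.array([8.2, 6.5, 7.1, 5.9])    # hypothetical outlet observations

def nse(sim, obs):
    """Nash-Sutcliffe efficiency used as the likelihood measure."""
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

# Stage 1: Monte Carlo sampling; keep "behavioral" parameter sets whose
# likelihood measure exceeds a chosen threshold.
samples = rng.uniform(0.05, 1.0, size=5000)
behavioral = [k for k in samples if nse(model(k, inflow), observed) > 0.5]

# Stage 2: characteristic CV of the predictions across behavioral sets.
preds = np.array([model(k, inflow) for k in behavioral])
cv_per_step = preds.std(axis=0) / preds.mean(axis=0)
characteristic_cv = float(cv_per_step.mean())
```

A larger characteristic CV signals a wider spread of behavioral predictions, i.e. greater predictive uncertainty for that model structure.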
[The emphases and basic procedures of genetic counseling in psychotherapeutic model].
Zhang, Yuan-Zhi; Zhong, Nanbert
2006-11-01
The emphases and basic procedures of genetic counseling in the psychotherapeutic model differ from those in older models. In the psychotherapeutic model, genetic counseling focuses not only on counselees' genetic disorders and birth defects, but also on their psychological problems. "Client-centered therapy", as termed by Carl Rogers, plays an important role in the genetic counseling process. The basic procedure of the psychotherapeutic model of genetic counseling includes 7 steps: initial contact, introduction, agendas, inquiry of family history, presenting information, closing the session and follow-up.
A Bidirectional Coupling Procedure Applied to Multiscale Respiratory Modeling.
Kuprat, A P; Kabilan, S; Carson, J P; Corley, R A; Einstein, D R
2013-07-01
In this study, we present a novel multiscale computational framework for efficiently linking multiple lower-dimensional models describing the distal lung mechanics to imaging-based 3D computational fluid dynamics (CFD) models of the upper pulmonary airways in order to incorporate physiologically appropriate outlet boundary conditions. The framework is an extension of the Modified Newton's Method with nonlinear Krylov accelerator developed by Carlson and Miller [1, 2, 3]. Our extensions include the retention of subspace information over multiple timesteps, and a special correction at the end of a timestep that allows for corrections to be accepted with verified low residual with as little as a single residual evaluation per timestep on average. In the case of a single residual evaluation per timestep, the method has zero additional computational cost compared to uncoupled or unidirectionally coupled simulations. We expect these enhancements to be generally applicable to other multiscale coupling applications where timestepping occurs. In addition we have developed a "pressure-drop" residual which allows for stable coupling of flows between a 3D incompressible CFD application and another (lower-dimensional) fluid system. We expect this residual to also be useful for coupling non-respiratory incompressible fluid applications, such as multiscale simulations involving blood flow. The lower-dimensional models that are considered in this study are sets of simple ordinary differential equations (ODEs) representing the compliant mechanics of symmetric human pulmonary airway trees. To validate the method, we compare the predictions of hybrid CFD-ODE models against an ODE-only model of pulmonary airflow in an idealized geometry. Subsequently, we couple multiple sets of ODEs describing the distal lung to an imaging-based human lung geometry. Boundary conditions in these models consist of atmospheric pressure at the mouth and intrapleural pressure applied to the multiple sets
A bidirectional coupling procedure applied to multiscale respiratory modeling
Kuprat, A. P.; Kabilan, S.; Carson, J. P.; Corley, R. A.; Einstein, D. R.
2013-07-01
In this study, we present a novel multiscale computational framework for efficiently linking multiple lower-dimensional models describing the distal lung mechanics to imaging-based 3D computational fluid dynamics (CFDs) models of the upper pulmonary airways in order to incorporate physiologically appropriate outlet boundary conditions. The framework is an extension of the modified Newton's method with nonlinear Krylov accelerator developed by Carlson and Miller [1], Miller [2] and Scott and Fenves [3]. Our extensions include the retention of subspace information over multiple timesteps, and a special correction at the end of a timestep that allows for corrections to be accepted with verified low residual with as little as a single residual evaluation per timestep on average. In the case of a single residual evaluation per timestep, the method has zero additional computational cost compared to uncoupled or unidirectionally coupled simulations. We expect these enhancements to be generally applicable to other multiscale coupling applications where timestepping occurs. In addition we have developed a "pressure-drop" residual which allows for stable coupling of flows between a 3D incompressible CFD application and another (lower-dimensional) fluid system. We expect this residual to also be useful for coupling non-respiratory incompressible fluid applications, such as multiscale simulations involving blood flow. The lower-dimensional models that are considered in this study are sets of simple ordinary differential equations (ODEs) representing the compliant mechanics of symmetric human pulmonary airway trees. To validate the method, we compare the predictions of hybrid CFD-ODE models against an ODE-only model of pulmonary airflow in an idealized geometry. Subsequently, we couple multiple sets of ODEs describing the distal lung to an imaging-based human lung geometry. Boundary conditions in these models consist of atmospheric pressure at the mouth and intrapleural
Energy Technology Data Exchange (ETDEWEB)
WHEELER, TIMOTHY A.; WYSS, GREGORY D.; HARPER, FREDERICK T.
2000-11-01
Uncertainty distributions for specific parameters of the Cassini General Purpose Heat Source Radioisotope Thermoelectric Generator (GPHS-RTG) Final Safety Analysis Report consequence risk analysis were revised and updated. The revisions and updates were done for all consequence parameters for which relevant information exists from the joint project on Probabilistic Accident Consequence Uncertainty Analysis by the United States Nuclear Regulatory Commission and the Commission of European Communities.
A review of mechanisms and modelling procedures for landslide tsunamis
Løvholt, Finn; Harbitz, Carl B.; Glimsdal, Sylfest
2017-04-01
Landslides, including volcano flank collapses and volcanically induced flows, constitute the second-most important cause of tsunamis after earthquakes. Compared to earthquakes, landslides are more diverse with respect to how they generate tsunamis. Here, we give an overview of the main generation mechanisms for landslide tsunamis. In the presentation, a mix of results from analytical models, numerical models, laboratory experiments, and case studies is used to illustrate this diversity, but also to point out some common characteristics. Different numerical modelling techniques for the landslide evolution and for tsunami generation and propagation, as well as the effect of frequency dispersion, are also briefly discussed. Basic tsunami generation mechanisms for different types of landslides, ranging from large submarine translational landslides to impulsive submarine slumps and violent subaerial landslides and volcano flank collapses, are reviewed. The importance of the landslide kinematics is given attention, including the interplay between landslide acceleration, landslide velocity-to-depth ratio (Froude number) and dimensions. Using numerical simulations, we demonstrate how landslide deformation and retrogressive failure development influence tsunamigenesis. Generation mechanisms for subaerial landslides are reviewed by means of scaling relations from laboratory experiments and numerical modelling. Finally, it is demonstrated how the different degrees of complexity in landslide tsunamigenesis need to be reflected by increased sophistication in numerical models.
Energy Technology Data Exchange (ETDEWEB)
Augustine, C.
2011-10-01
The U.S. Department of Energy (DOE) Geothermal Technologies Program (GTP) tasked the National Renewable Energy Laboratory (NREL) with conducting the annual geothermal supply curve update. This report documents the approach taken to identify geothermal resources, determine the electrical producing potential of these resources, and estimate the levelized cost of electricity (LCOE), capital costs, and operating and maintenance costs from these geothermal resources at present and future timeframes under various GTP funding levels. Finally, this report discusses the resulting supply curve representation and how improvements can be made to future supply curve updates.
Updates on Modeling the Water Cycle with the NASA Ames Mars Global Climate Model
Kahre, M. A.; Haberle, R. M.; Hollingsworth, J. L.; Montmessin, F.; Brecht, A. S.; Urata, R.; Klassen, D. R.; Wolff, M. J.
2017-01-01
Global Circulation Models (GCMs) have made steady progress in simulating the current Mars water cycle. It is now widely recognized that clouds are a critical component that can significantly affect the nature of the simulated water cycle. Two processes in particular are key to implementing clouds in a GCM: the microphysical processes of formation and dissipation, and their radiative effects on heating/cooling rates. Together, these processes alter the thermal structure, change the dynamics, and regulate inter-hemispheric transport. We have made considerable progress representing these processes in the NASA Ames GCM, particularly in the presence of radiatively active water ice clouds. We present the current state of our group's water cycle modeling efforts, show results from selected simulations, highlight some of the issues, and discuss avenues for further investigation.
Pasyanos, M. E.; Masters, G.; Laske, G.; Ma, Z.
2012-12-01
Models such as CRUST2.0 (Bassin et al., 2000) have proven very useful to many seismic studies on regional, continental, and global scales. We have developed an updated, higher resolution model called LITHO1.0 that extends deeper to include the lithospheric lid, and includes mantle anisotropy, potentially making it more useful for a wider variety of applications. The model is evolving away from the crustal types strongly used in CRUST5.1 (Mooney et al., 1998) to a more data-driven model. This is accomplished by performing a targeted grid search with multiple data inputs. We seek to find the most plausible model which is able to fit multiple constraints, including updated sediment and crustal thickness models, upper mantle velocities derived from travel times, and surface wave dispersion. The latter comes from a new, very large, global surface wave dataset built using a new, efficient measurement technique that employs cluster analysis (Ma et al., 2012), and includes the group and phase velocities of both Love and Rayleigh waves. We will discuss datasets and methodology, highlight significant features of the model, and provide detailed information on the availability of the model in various formats.
Optimal control of CPR procedure using hemodynamic circulation model
Lenhart, Suzanne M.; Protopopescu, Vladimir A.; Jung, Eunok
2007-12-25
A method for determining a chest pressure profile for cardiopulmonary resuscitation (CPR) includes the steps of representing a hemodynamic circulation model based on a plurality of difference equations for a patient, applying an optimal control (OC) algorithm to the circulation model, and determining a chest pressure profile. The chest pressure profile defines a timing pattern of externally applied pressure to a chest of the patient to maximize blood flow through the patient. A CPR device includes a chest compressor, a controller communicably connected to the chest compressor, and a computer communicably connected to the controller. The computer determines the chest pressure profile by applying an OC algorithm to a hemodynamic circulation model based on the plurality of difference equations.
Statistical procedures for evaluating daily and monthly hydrologic model predictions
Coffey, M.E.; Workman, S.R.; Taraba, J.L.; Fogle, A.W.
2004-01-01
The overall study objective was to evaluate the applicability of different qualitative and quantitative methods for comparing daily and monthly SWAT computer model hydrologic streamflow predictions to observed data, and to recommend statistical methods for use in future model evaluations. Statistical methods were tested using daily streamflows and monthly equivalent runoff depths. The statistical techniques included linear regression, Nash-Sutcliffe efficiency, nonparametric tests, t-test, objective functions, autocorrelation, and cross-correlation. None of the methods specifically applied to the non-normal distribution and dependence between data points for the daily predicted and observed data. Of the tested methods, median objective functions, sign test, autocorrelation, and cross-correlation were most applicable for the daily data. The robust coefficient of determination (CD*) and robust modeling efficiency (EF*) objective functions were the preferred methods for daily model results due to the ease of comparing these values with a fixed ideal reference value of one. Predicted and observed monthly totals were more normally distributed, and there was less dependence between individual monthly totals than was observed for the corresponding predicted and observed daily values. More statistical methods were available for comparing SWAT model-predicted and observed monthly totals. The 1995 monthly SWAT model predictions and observed data had a regression R² of 0.70, a Nash-Sutcliffe efficiency of 0.41, and the t-test failed to reject the equal data means hypothesis. The Nash-Sutcliffe coefficient and the R² coefficient were the preferred methods for monthly results due to the ability to compare these coefficients to a set ideal value of one.
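The two preferred monthly statistics are straightforward to compute directly; a minimal sketch follows, with hypothetical monthly runoff depths standing in for the SWAT and observed series:

```python
import numpy as np

def nash_sutcliffe(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit; values below 0
    mean the observed mean predicts better than the model."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def r_squared(obs, sim):
    """Coefficient of determination from the obs-sim correlation."""
    r = np.corrcoef(np.asarray(obs, float), np.asarray(sim, float))[0, 1]
    return r ** 2

# Hypothetical monthly runoff depths (mm).
obs = [30.0, 45.0, 28.0, 60.0, 52.0, 33.0]
sim = [27.0, 50.0, 31.0, 55.0, 49.0, 38.0]
```

Both statistics share the fixed ideal value of one noted above, which is what makes them convenient for side-by-side model comparison.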
A Review of Different Estimation Procedures in the Rasch Model. Research Report 87-6.
Engelen, R. J. H.
A short review of the different estimation procedures that have been used in association with the Rasch model is provided. These procedures include joint, conditional, and marginal maximum likelihood methods; Bayesian methods; minimum chi-square methods; and paired comparison estimation. A comparison of the marginal maximum likelihood estimation…
A Connectionist Model of Stimulus Class Formation with a Yes/No Procedure and Compound Stimuli
Tovar, Angel E.; Chavez, Alvaro Torres
2012-01-01
We analyzed stimulus class formation in a human study and in a connectionist model (CM) with a yes/no procedure, using compound stimuli. In the human study, the participants were six female undergraduate students; the CM was a feed-forward back-propagation network. Two 3-member stimulus classes were trained with a similar procedure in both the…
TSCALE: A New Multidimensional Scaling Procedure Based on Tversky's Contrast Model.
DeSarbo, Wayne S.; And Others
1992-01-01
TSCALE, a multidimensional scaling procedure based on the contrast model of A. Tversky for asymmetric three-way, two-mode proximity data, is presented. TSCALE conceptualizes a latent dimensional structure to describe the judgmental stimuli. A Monte Carlo analysis and two consumer psychology applications illustrate the procedure. (SLD)
Communication and Procedural Models of the E-Commerce Systems
Suchánek, Petr
2009-01-01
E-commerce systems have become a standard interface between sellers (or suppliers) and customers. One of the basic conditions for an e-commerce system to be efficient is a correct definition and description of all internal and external processes. Everything is targeted at the customers' needs and requirements. Modeling and simulation are the optimal and most exact way to find the best solution for an e-commerce system and its process structure in companies. In this article the author presents a basic model...
Qiu, Lei; Yuan, Shenfang; Chang, Fu-Kuo; Bao, Qiao; Mei, Hanfei
2014-12-01
Structural health monitoring technology for aerospace structures has gradually turned from fundamental research to practical implementations. However, real aerospace structures work under time-varying conditions that introduce uncertainties to signal features that are extracted from sensor signals, giving rise to difficulty in reliably evaluating the damage. This paper proposes an online updating Gaussian Mixture Model (GMM)-based damage evaluation method to improve damage evaluation reliability under time-varying conditions. In this method, Lamb-wave-signal variation indexes and principle component analysis (PCA) are adopted to obtain the signal features. A baseline GMM is constructed on the signal features acquired under time-varying conditions when the structure is in a healthy state. By adopting the online updating mechanism based on a moving feature sample set and inner probability structural reconstruction, the probability structures of the GMM can be updated over time with new monitoring signal features to track the damage progress online continuously under time-varying conditions. This method can be implemented without any physical model of damage or structure. A real aircraft wing spar, which is an important load-bearing structure of an aircraft, is adopted to validate the proposed method. The validation results show that the method is effective for edge crack growth monitoring of the wing spar bolts holes under the time-varying changes in the tightness degree of the bolts.
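The abstract's online updating mechanism maintains a moving sample set of healthy-state signal features and refits the probability model as new features arrive. As a much-simplified stand-in for the paper's Gaussian Mixture Model, the sketch below uses a single Gaussian over a sliding window and scores a new feature by its squared standardized distance; the class name and the damage-index choice are illustrative assumptions, not the paper's formulation:

```python
from collections import deque

class MovingGaussianBaseline:
    """Sketch of a moving-feature-set baseline: keep the latest `window`
    healthy-state features, refit mean and variance from them, and score a
    new feature by its squared standardized (Mahalanobis-like) distance."""
    def __init__(self, window=50):
        self.samples = deque(maxlen=window)  # old features drop out automatically

    def update(self, feature):
        self.samples.append(feature)         # track time-varying conditions

    def damage_index(self, feature):
        n = len(self.samples)
        mean = sum(self.samples) / n
        var = sum((x - mean) ** 2 for x in self.samples) / n
        return (feature - mean) ** 2 / var   # large value suggests damage
```

A full GMM version would refit several components (e.g. via expectation-maximization) over the same moving window, which is what lets the baseline track benign time-varying conditions while damage shows up as a persistent outlier.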
Hart, Danielle; McNeil, Mary Ann; Hegarty, Cullen; Rush, Robert; Chipman, Jeffery; Clinton, Joseph; Reihsen, Troy; Sweet, Robert
2016-01-01
There are many models currently used for teaching and assessing performance of trauma-related airway, breathing, and hemorrhage procedures. Although many programs use live animal (live tissue [LT]) models, there is a congressional effort to transition to the use of nonanimal-based methods (i.e., simulators, cadavers) for military trainees. We examined the existing literature and compared the efficacy, acceptability, and validity of available models with a focus on comparing LT models with synthetic systems. Literature and Internet searches were conducted to examine current models for seven core trauma procedures. We identified 185 simulator systems. Evidence on acceptability and validity of models was sparse. We found only one underpowered study comparing the performance of learners after training on LT versus simulator models for tube thoracostomy and cricothyrotomy. There is insufficient data-driven evidence to distinguish superior validity of LT or any other model for training or assessment of critical trauma procedures.
Reliability analysis and updating of deteriorating systems with subset simulation
DEFF Research Database (Denmark)
Schneider, Ronald; Thöns, Sebastian; Straub, Daniel
2017-01-01
Bayesian updating of the system deterioration model. The updated system reliability is then obtained through coupling the updated deterioration model with a probabilistic structural model. The underlying high-dimensional structural reliability problems are solved using subset simulation, which...
WEMo (Wave Exposure Model): Formulation, Procedures and Validation
Malhotra, Amit; Mark S. Fonseca
2007-01-01
This report describes the working of National Centers for Coastal Ocean Service (NCCOS) Wave Exposure Model (WEMo) capable of predicting the exposure of a site in estuarine and closed water to local wind generated waves. WEMo works in two different modes: the Representative Wave Energy (RWE) mode calculates the exposure using physical parameters like wave energy and wave height, while the Relative Exposure Index (REI) empirically calculates exposure as a unitless index. Detailed working of th...
Comparison of Estimation Procedures for Multilevel AR(1) Models
Directory of Open Access Journals (Sweden)
Tanja Krone
2016-04-01
To estimate a time series model for multiple individuals, a multilevel model may be used. In this paper we compare two estimation methods for the autocorrelation in multilevel AR(1) models, namely Maximum Likelihood Estimation (MLE) and Bayesian Markov Chain Monte Carlo. Furthermore, we examine the difference between modeling fixed and random individual parameters. To this end, we perform a simulation study with a fully crossed design, in which we vary the length of the time series (10 or 25), the number of individuals per sample (10 or 25), the mean of the autocorrelation (-0.6 to 0.6 inclusive, in steps of 0.3) and the standard deviation of the autocorrelation (0.25 or 0.40). We found that the random estimators of the population autocorrelation show less bias and higher power, compared to the fixed estimators. As expected, the random estimators profit strongly from a higher number of individuals, while this effect is small for the fixed estimators. The fixed estimators profit slightly more from a higher number of time points than the random estimators. When possible, random estimation is preferred to fixed estimation. The difference between MLE and Bayesian estimation is nearly negligible. The Bayesian estimation shows a smaller bias, but MLE shows a smaller variability (i.e., standard deviation of the parameter estimates). Finally, better results are found for a higher number of individuals and time points, and for a lower individual variability of the autocorrelation. The effect of the size of the autocorrelation differs between outcome measures.
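The multilevel MLE and Bayesian MCMC estimators compared in the paper are too involved for a short sketch, but the single-series building block they estimate, the AR(1) coefficient phi in x[t] = phi * x[t-1] + noise, has a simple conditional least-squares estimator (equivalent to conditional MLE under Gaussian noise):

```python
def fit_ar1(series):
    """Conditional least-squares estimate of phi for a zero-mean AR(1)
    process x[t] = phi * x[t-1] + e[t]: regress x[t] on x[t-1] with no
    intercept."""
    num = sum(series[t] * series[t - 1] for t in range(1, len(series)))
    den = sum(series[t - 1] ** 2 for t in range(1, len(series)))
    return num / den
```

A multilevel model then pools such per-individual estimates, either as fixed parameters or as random draws from a population distribution of phi, which is exactly the fixed-versus-random contrast the simulation study examines.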
A baseline-free procedure for transformation models under interval censorship.
Gu, Ming Gao; Sun, Liuquan; Zuo, Guoxin
2005-12-01
An important property of the Cox regression model is that the estimation of regression parameters using the partial likelihood procedure does not depend on its baseline survival function. We call such a procedure baseline-free. Using marginal likelihood, we show that a baseline-free procedure can be derived for a class of general transformation models under the interval censoring framework. The baseline-free procedure results in a simplified and stable computation algorithm for some complicated and important semiparametric models, such as frailty models and heteroscedastic hazard/rank regression models, where the estimation procedures so far available involve estimation of the infinite-dimensional baseline function. A detailed computational algorithm using Markov Chain Monte Carlo stochastic approximation is presented. The proposed procedure is demonstrated through extensive simulation studies, showing the validity of asymptotic consistency and normality. We also illustrate the procedure with a real data set from a study of breast cancer. A heuristic argument showing that the score function is a mean zero martingale is provided.
Abu Husain, Nurulakmar; Haddad Khodaparast, Hamed; Ouyang, Huajiang
2012-10-01
Parameterisation in stochastic problems is a major issue in real applications. In addition, complexity of test structures (for example, those assembled through laser spot welds) is another challenge. The objective of this paper is two-fold: (1) stochastic uncertainty in two sets of different structures (i.e., simple flat plates, and more complicated formed structures) is investigated to observe how updating can be adequately performed using the perturbation method, and (2) stochastic uncertainty in a set of welded structures is studied by using two parameter weighting matrix approaches. Different combinations of parameters are explored in the first part; it is found that geometrical features alone cannot converge the predicted outputs to the measured counterparts, hence material properties must be included in the updating process. In the second part, statistical properties of experimental data are considered and updating parameters are treated as random variables. Two weighting approaches are compared; results from one of the approaches are in very good agreement with the experimental data and excellent correlation between the predicted and measured covariances of the outputs is achieved. It is concluded that proper selection of parameters in solving stochastic updating problems is crucial. Furthermore, appropriate weighting must be used in order to obtain excellent convergence between the predicted mean natural frequencies and their measured data.
Directory of Open Access Journals (Sweden)
Soldić-Aleksić Jasna
2009-01-01
Market segmentation is one of the key concepts of modern marketing. Its main goal is to create groups (segments) of customers that have similar characteristics, needs, wishes and/or similar behavior regarding the purchase of a concrete product/service. Companies can create a specific marketing plan for each of these segments and thereby gain a short- or long-term competitive advantage on the market. Depending on the concrete marketing goal, different segmentation schemes and techniques may be applied. This paper presents a predictive market segmentation model based on the application of a logistic regression model and CHAID analysis. The logistic regression model was used to select, from the initial pool of eleven variables, those that are statistically significant for explaining the dependent variable. The selected variables were afterwards included in the CHAID procedure that generated the predictive market segmentation model. The model results are presented on a concrete empirical example in the following form: summary model results, CHAID tree, Gain chart, Index chart, risk and classification tables.
Loop electrosurgical excision procedure: an effective, inexpensive, and durable teaching model.
Connor, R Shae; Dizon, A Mitch; Kimball, Kristopher J
2014-12-01
The effectiveness of simulation training for enhancing operative skills is well established. Here we describe the construction of a simple, low-cost model for teaching the loop electrosurgical excision procedure. Composed of common materials such as polyvinyl chloride pipe and sausages, the simulation model, shown in the accompanying figure, can be easily reproduced by other training programs. In addition, we also present an instructional video that utilizes this model to review loop electrosurgical excision procedure techniques, highlighting important steps in the procedure and briefly addressing challenging situations and common mistakes as well as strategies to prevent them. The video and model can be used in conjunction with a simulation skills laboratory to teach the procedure to students, residents, and new practitioners.
Normal response function method for mass and stiffness matrix updating using complex FRFs
Pradhan, S.; Modak, S. V.
2012-10-01
Quite often a structural dynamic finite element model is required to be updated so as to accurately predict dynamic characteristics like natural frequencies and mode shapes. Since in many situations undamped natural frequencies and mode shapes need to be predicted, it has generally been the practice in these situations to seek updating of only the mass and stiffness matrices so as to obtain a reliable prediction model. Updating using frequency response functions (FRFs) has been one of the widely used approaches, including for updating of mass and stiffness matrices. However, the problem with FRF-based methods for updating mass and stiffness matrices is that these methods are based on complex FRFs. Using complex FRFs to update mass and stiffness matrices is not theoretically correct, as complex FRFs are affected not only by these two matrices but also by the damping matrix. Therefore, in situations where updating of only the mass and stiffness matrices using FRFs is required, a complex-FRF-based updating formulation is not fully justified and would lead to inaccurate updated models. This paper addresses this difficulty and proposes an improved FRF-based finite element model updating procedure using the concept of normal FRFs. The proposed method is a modified version of the existing response function method that is based on complex FRFs. The effectiveness of the proposed method is validated through a numerical study of a simple but representative beam structure. The effects of coordinate incompleteness and of measurement noise on the robustness of the method are investigated. The results of updating obtained by the improved method are compared with the existing response function method. The performance of the two approaches is compared for cases of light, medium and heavily damped structures. It is found that the proposed improved method is effective in updating of mass and stiffness matrices in all the cases of complete and incomplete data and
A MODEL SELECTION PROCEDURE IN MIXTURE-PROCESS EXPERIMENTS FOR INDUSTRIAL PROCESS OPTIMIZATION
Directory of Open Access Journals (Sweden)
Márcio Nascimento de Souza Leão
2015-08-01
We present a model selection procedure for use in Mixture and Mixture-Process Experiments. Certain combinations of restrictions on the proportions of the mixture components can result in a very constrained experimental region. This results in collinearity among the covariates of the model, which can make it difficult to fit the model using the traditional method based on the significance of the coefficients. For this reason, a model selection methodology based on information criteria will be proposed for process optimization. Two examples are presented to illustrate this model selection procedure.
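Information-criterion selection of the kind the abstract proposes trades goodness of fit against model size. For least-squares fits with Gaussian errors, the Akaike Information Criterion has a simple closed form; the function below is a generic sketch (the paper does not specify which criterion or constants it uses):

```python
import math

def aic(n, ss_res, k):
    """Gaussian AIC (up to an additive constant) for a least-squares fit:
    n observations, residual sum of squares ss_res, k estimated parameters.
    Lower is better; the 2*k term penalizes model complexity."""
    return n * math.log(ss_res / n) + 2 * k

def select_model(candidates):
    """candidates: dict name -> (n, ss_res, k); return the name with lowest AIC."""
    return min(candidates, key=lambda name: aic(*candidates[name]))
```

Because the criterion never tests individual coefficients, it remains usable when collinearity in a constrained mixture region makes coefficient significance tests unreliable, which is the abstract's motivation.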
Low-dose kaolin-induced feline hydrocephalus and feline ventriculostomy: an updated model
Lollis, S. Scott; Hoopes, P. Jack; Kane, Susan; Paulsen, Keith; Weaver, John; Roberts, David W.
2013-01-01
Object Intracisternal injection of kaolin is a well-described model of feline hydrocephalus. Its principal disadvantage is a high rate of procedure-related morbidity and mortality. The authors describe a series of modifications to a commonly used protocol, intended to ameliorate animal welfare concerns without compromising the degree of ventricular enlargement. Methods In 11 adult cats, hydrocephalus was induced by injection of kaolin into the cisterna magna. Kaolin doses were reduced to 10 mg, compared with historical doses of ~ 200 mg, and high-dose dexamethasone was used to reduce the severity of meningeal irritation. A control cohort of 6 additional animals received injections of isotonic saline into the cisterna magna. Results The mean ventricular volume increased from a baseline of 0.183 ± 0.068 ml to 1.43 ± 0.184 ml. Two animals were killed prior to completion of the study. Of the remaining animals, all were ambulatory by postinjection Day 1, and all had resumed normal oral intake by postinjection Day 3. Two animals required subcutaneous fluid supplementation. Ventriculostomy using anatomical landmarks was performed to ascertain intraventricular pressure. The mean intraventricular pressure after hydrocephalus was 15 cm H2O above the ear (range 11–20 cm H2O). Conclusions Reduction in kaolin dosage and the postoperative administration of high-dose corticosteroid therapy appear to reduce morbidity and mortality rates compared with historical experiences. Hydrocephalus is radiographically evident as soon as 3 days after injection, but it does not substantially interfere with feeding and basic self-care. To the extent that animal welfare concerns may have limited the use of this model in recent years, the procedures described in the present study may offer some guidance for its future use. PMID:19834994
Wu, Jie; Yan, Quan-sheng; Li, Jian; Hu, Min-yi
2016-04-01
In bridge construction, geometry control is critical to ensure that the final constructed bridge has a shape consistent with the design. A common method is to predict the deflections of the bridge during each construction phase through the associated finite element models, so that the cambers of the bridge during the different construction phases can be determined beforehand. These finite element models are mostly based on the design drawings and nominal material properties. However, the errors of these bridge models can be large due to significant uncertainties in the actual properties of the materials used in construction. Therefore, the predicted cambers may not be accurate enough to ensure agreement of the bridge geometry with the design, especially for long-span bridges. In this paper, an improved geometry control method is described, which incorporates finite element (FE) model updating during the construction process based on measured bridge deflections. A method based on the Kriging model and Latin hypercube sampling is proposed to perform the FE model updating due to its simplicity and efficiency. The proposed method has been applied to a long-span continuous girder concrete bridge during its construction. Results show that the method is effective in reducing construction error and ensuring the accuracy of the geometry of the final constructed bridge.
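The Kriging surrogate in this approach is trained on FE runs at parameter points chosen by Latin hypercube sampling, which guarantees that every parameter is probed across its whole range even with few samples. The design step itself is short to sketch (the Kriging fit is omitted):

```python
import random

def latin_hypercube(n, dims, seed=0):
    """n points in the unit hypercube [0, 1)^dims: each dimension is split
    into n equal strata and every stratum is hit exactly once, which is the
    defining LHS property."""
    rng = random.Random(seed)
    samples = [[0.0] * dims for _ in range(n)]
    for d in range(dims):
        strata = list(range(n))
        rng.shuffle(strata)           # randomize which point lands in which stratum
        for i in range(n):
            samples[i][d] = (strata[i] + rng.random()) / n  # jitter inside the stratum
    return samples
```

Each point would then be rescaled to the physical ranges of the uncertain material parameters before running the FE model at it.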
DEFF Research Database (Denmark)
Bissacco, Giuliano
2005-01-01
In order to maintain an optimum cutting speed, the reduction of mill diameters requires machine tools with high rotational speed capabilities. A solution to update existing machine tools is the use of high speed attached spindles. Major drawbacks of these attachments are the high thermal expansion and their rapid warming and cooling, which prevent the achievement of a steady state. Several other factors, independent of the tool-workpiece interaction, influence the machining accuracy. The cutting parameter most heavily affected is the axial depth of cut, which is the most critical when using micro end mills, due to the easy breakage particularly when milling on hard materials [1]. Typical values for the errors on the control of the axial depth of cut are in the order of 50 microns, while the aimed depth of cut can be as low as 5 microns. The author has developed a machining procedure for optimal control...
A Computational Approach for Model Update of an LS-DYNA Energy Absorbing Cell
Horta, Lucas G.; Jackson, Karen E.; Kellas, Sotiris
2008-01-01
NASA and its contractors are working on structural concepts for absorbing impact energy of aerospace vehicles. Recently, concepts in the form of multi-cell honeycomb-like structures designed to crush under load have been investigated for both space and aeronautics applications. Efforts to understand these concepts are progressing from tests of individual cells to tests of systems with hundreds of cells. Because of fabrication irregularities, geometry irregularities, and material properties uncertainties, the problem of reconciling analytical models, in particular LS-DYNA models, with experimental data is a challenge. A first look at the correlation results between single cell load/deflection data with LS-DYNA predictions showed problems which prompted additional work in this area. This paper describes a computational approach that uses analysis of variance, deterministic sampling techniques, response surface modeling, and genetic optimization to reconcile test with analysis results. Analysis of variance provides a screening technique for selection of critical parameters used when reconciling test with analysis. In this study, complete ignorance of the parameter distribution is assumed and, therefore, the value of any parameter within the range that is computed using the optimization procedure is considered to be equally likely. Mean values from tests are matched against LS-DYNA solutions by minimizing the square error using a genetic optimization. The paper presents the computational methodology along with results obtained using this approach.
New Procedure to Develop Lumped Kinetic Models for Heavy Fuel Oil Combustion
Han, Yunqing
2016-09-20
A new procedure to develop accurate lumped kinetic models for complex fuels is proposed and applied to experimental data for heavy fuel oil measured by thermogravimetry. The new procedure is based on pseudocomponents representing different reaction stages, which are determined by a systematic optimization process that ensures the separation of the different reaction stages with the highest accuracy. The procedure was implemented and the model prediction compared against that from a conventional method, yielding significantly improved agreement with the experimental data. © 2016 American Chemical Society.
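A lumped model of this kind typically represents the remaining sample mass as a weighted sum of pseudocomponents, each decaying through its own reaction stage. As a minimal isothermal illustration (the paper's pseudocomponents, rates, and optimization are not reproduced here; first-order kinetics and the rate values are assumptions), the model evaluates as:

```python
import math

def mass_fraction(t, components):
    """Remaining mass fraction of a lumped kinetic model at time t:
    each pseudocomponent is a (weight, rate) pair decaying as a
    first-order reaction, m(t) = sum_i w_i * exp(-k_i * t)."""
    return sum(w * math.exp(-k * t) for w, k in components)
```

Fitting such a model to a thermogravimetric curve then amounts to optimizing the weights and rates (and, non-isothermally, Arrhenius parameters) so that `mass_fraction` tracks the measured mass-loss data, with the pseudocomponent boundaries chosen to separate the reaction stages.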
Modelling dental implant extraction by pullout and torque procedures.
Rittel, D; Dorogoy, A; Shemtov-Yona, K
2017-07-01
Dental implant extraction, achieved either by applying torque or pullout force, is used to estimate the bone-implant interfacial strength. A detailed description of the mechanical and physical aspects of the extraction process is still missing from the literature. This paper presents 3D nonlinear dynamic finite element simulations of a commercial implant extraction process from the mandible bone. Emphasis is put on the typical load-displacement and torque-angle relationships for various types of cortical and trabecular bone strengths. The simulations also study the influence of the osseointegration level on those relationships. This is done by simulating implant extraction right after insertion, when interfacial frictional contact exists between the implant and bone, and long after insertion, assuming that the implant is fully bonded to the bone. The model does not include a separate representation of the interfacial layer, for which available data is limited. The obtained relationships show that the higher the strength of the trabecular bone, the higher the peak extraction force, while for application of torque it is the cortical bone which might dictate the peak torque value. Information on the relative strength contrast of the cortical and trabecular components, as well as the progressive nature of the damage evolution, can be revealed from the obtained relations. It is shown that full osseointegration might multiply the peak and average load values by a factor of 3-12, although the calculated work of extraction varies only by a factor of 1.5. From a quantitative point of view, it is suggested that, as an alternative to reporting peak load or torque values, an average value derived from the extraction work be used to better characterize the bone-implant interfacial strength. Copyright © 2017 Elsevier Ltd. All rights reserved.
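The average value the authors suggest reporting follows directly from the load-displacement curve: integrate to get the work of extraction, then divide by the total displacement. A straightforward trapezoidal sketch (function name is ours):

```python
def extraction_work_and_average_force(displacements, forces):
    """Trapezoidal integration of a load-displacement curve: returns the
    work of extraction W and the average force W / d_total, the
    interface-strength measure suggested in the abstract."""
    work = 0.0
    for i in range(1, len(displacements)):
        dx = displacements[i] - displacements[i - 1]
        work += 0.5 * (forces[i] + forces[i - 1]) * dx
    total = displacements[-1] - displacements[0]
    return work, work / total
```

Unlike the peak force, this average integrates the whole progressive damage process, which is why it varies far less (a factor of 1.5 versus 3-12 in the abstract) across osseointegration levels.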
Procedure for assessing the performance of a rockfall fragmentation model
Matas, Gerard; Lantada, Nieves; Corominas, Jordi; Gili, Josep Antoni; Ruiz-Carulla, Roger; Prades, Albert
2017-04-01
Rockfall is a mass instability process frequently observed in road cuts, open pit mines and quarries, steep slopes and cliffs. It is commonly observed that the detached rock mass becomes fragmented when it impacts the slope surface. The consideration of the fragmentation of the rockfall mass is critical for the calculation of block trajectories and their impact energies, to further assess their potential to cause damage and design adequate preventive structures. We present here the performance of the RockGIS model. It is a GIS-based tool that simulates the fragmentation of rockfalls stochastically, based on a lumped mass approach. In RockGIS, the fragmentation initiates with the disaggregation of the detached rock mass through the pre-existing discontinuities just before the impact with the ground. An energy threshold is defined in order to determine whether the impacting blocks break or not. The distribution of the initial mass among a set of newly generated rock fragments is carried out stochastically following a power law. The trajectories of the new rock fragments are distributed within a cone. The model requires the calibration of both the runout of the resultant blocks and the spatial distribution of the volumes of fragments generated by breakage during their propagation. As this is a coupled process controlled by several parameters, a set of performance criteria to be met by the simulation has been defined. The criteria include: position of the centre of gravity of the whole block distribution, histogram of the runout of the blocks, extent and boundaries of the young debris cover over the slope surface, lateral dispersion of trajectories, total number of blocks generated after fragmentation, volume distribution of the generated fragments, the number of block and volume passages past a reference line, and the maximum runout distance. Since the number of parameters to fit increases significantly when considering fragmentation, the
Discovering Plate Boundaries Update: Builds Content Knowledge and Models Inquiry-based Learning
Sawyer, D. S.; Pringle, M. S.; Henning, A. T.
2009-12-01
Discovering Plate Boundaries (DPB) is a jigsaw-structured classroom exercise in which students explore the fundamental datasets from which plate boundary processes were discovered. The exercise has been widely used in the past ten years as a classroom activity for students in fifth grade through high school, and for Earth Science major and general education courses in college. Perhaps more importantly, the exercise has been used extensively for professional development of in-service and pre-service K-12 science teachers, where it simultaneously builds content knowledge in plate boundary processes (including natural hazards), models an effective data-rich, inquiry-based pedagogy, and provides a set of lesson plans and materials which teachers can port directly into their own classroom (see Pringle, et al, this session for a specific example). DPB is based on 4 “specialty” data maps, 1) earthquake locations, 2) modern volcanic activity, 3) seafloor age, and 4) topography and bathymetry, plus a fifth map of (undifferentiated) plate boundary locations. The jigsaw is structured so that students are first split into one of the four “specialties,” then re-arranged into groups with each of the four specialties to describe the boundaries of a particular plate. We have taken the original DPB materials, used the latest digital data sets to update all the basic maps, and expanded the opportunities for further student and teacher learning. The earthquake maps now cover the recent period including the deadly Banda Aceh event. The topography/bathymetry map now has global coverage and uses ice-free elevations, which can, for example, extend to further inquiry about mantle viscosity and loading processes (why are significant portions of the bedrock surface of Greenland and Antarctica below sea level?). The volcanic activity map now differentiates volcano type and primary volcanic lithology, allowing a more elaborate understanding of volcanism at different plate boundaries
Mahanama, Sarith P.; Koster, Randal D.; Walker, Gregory K.; Takacs, Lawrence L.; Reichle, Rolf H.; De Lannoy, Gabrielle; Liu, Qing; Zhao, Bin; Suarez, Max J.
2015-01-01
The Earth's land surface boundary conditions in the Goddard Earth Observing System version 5 (GEOS-5) modeling system were updated using recent high spatial and temporal resolution global data products. The updates include: (i) construction of a global 10-arcsec land-ocean-lakes-ice mask; (ii) incorporation of a 10-arcsec Globcover 2009 land cover dataset; (iii) implementation of Level 12 Pfafstetter hydrologic catchments; (iv) use of hybridized SRTM global topography data; (v) construction of the HWSDv1.21-STATSGO2 merged global 30-arcsec soil mineral and carbon data in conjunction with a highly refined soil classification system; (vi) production of diffuse visible and near-infrared 8-day MODIS albedo climatologies at 30-arcsec from the period 2001-2011; and (vii) production of the GEOLAND2 and MODIS merged 8-day LAI climatology at 30-arcsec for GEOS-5. The global data sets were preprocessed and used to construct global raster data files for the software (mkCatchParam) that computes parameters on catchment-tiles for various atmospheric grids. The updates also include a few bug fixes in mkCatchParam, as well as changes (improvements in algorithms, etc.) to mkCatchParam that allow it to produce tile-space parameters efficiently for high resolution AGCM grids. The update process also includes the construction of data files describing the vegetation type fractions, soil background albedo, nitrogen deposition and mean annual 2 m air temperature to be used with the future Catchment CN model, and the global stream channel network to be used with the future global runoff routing model. This report provides detailed descriptions of the data production process and data file format of each updated data set.
Lu, Xiaoman; Zheng, Guang; Miller, Colton; Alvarado, Ernesto
2017-09-08
Monitoring and understanding the spatio-temporal variations of forest aboveground biomass (AGB) is a key basis to quantitatively assess the carbon sequestration capacity of a forest ecosystem. To map and update forest AGB in the Greater Khingan Mountains (GKM) of China, this work proposes a physical-based approach. Based on the baseline forest AGB from Landsat Enhanced Thematic Mapper Plus (ETM+) images in 2008, we dynamically updated the annual forest AGB from 2009 to 2012 by adding the annual AGB increment (ABI) obtained from the simulated daily and annual net primary productivity (NPP) using the Boreal Ecosystem Productivity Simulator (BEPS) model. The 2012 result was validated by both field- and aerial laser scanning (ALS)-based AGBs. The predicted forest AGB for 2012 estimated from the process-based model can explain 31% (n = 35, p < 0.05, RMSE = 2.20 kg/m²) and 85% (n = 100, p < 0.01, RMSE = 1.71 kg/m²) of variation in field- and ALS-based forest AGBs, respectively. However, due to the saturation of optical remote sensing-based spectral signals and contribution of understory vegetation, the BEPS-based AGB tended to underestimate/overestimate the AGB for dense/sparse forests. Generally, our results showed that the remotely sensed forest AGB estimates could serve as the initial carbon pool to parameterize the process-based model for NPP simulation, and the combination of the baseline forest AGB and BEPS model could effectively update the spatiotemporal distribution of forest AGB.
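The update scheme in this abstract is additive: each year's aboveground biomass is the previous year's value plus the annual biomass increment derived from simulated NPP. A minimal sketch of that bookkeeping follows; the `allocation` fraction (the share of NPP that becomes aboveground biomass increment) is an illustrative placeholder, not a BEPS parameter:

```python
def update_agb(baseline_agb, annual_npp, allocation=0.45):
    """Roll a baseline aboveground biomass (e.g. kg/m^2, from the 2008
    Landsat ETM+ estimate) forward year by year: AGB[t] = AGB[t-1] + ABI[t],
    where ABI is taken here as a fixed fraction of simulated annual NPP."""
    series = [baseline_agb]
    for npp in annual_npp:
        series.append(series[-1] + allocation * npp)
    return series
```

In the paper the increment comes from BEPS-simulated daily and annual NPP rather than a fixed fraction, but the rolling update structure is the same: the remotely sensed baseline seeds the carbon pool, and the process model carries it forward.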
Wenmackers, Sylvia; Douven, Igor
2014-01-01
We present a model for studying communities of epistemically interacting agents who update their belief states by averaging (in a specified way) the belief states of other agents in the community. The agents in our model have a rich belief state, involving multiple independent issues which are interrelated in such a way that they form a theory of the world. Our main goal is to calculate the probability for an agent to end up in an inconsistent belief state due to updating (in the given way). To that end, an analytical expression is given and evaluated numerically, both exactly and using statistical sampling. It is shown that, under the assumptions of our model, an agent always has a probability of less than 2% of ending up in an inconsistent belief state. Moreover, this probability can be made arbitrarily small by increasing the number of independent issues the agents have to judge or by increasing the group size. A real-world situation to which this model applies is a group of experts participating in a Delp...
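As a rough illustration of this kind of averaging model, the sketch below estimates an inconsistency rate by Monte Carlo. It encodes a theory as a set of accepted possible worlds and reads "averaging" as a strict-majority vote per world, with inconsistency meaning the empty theory; the paper's exact updating rule and parameters may differ, so treat every name and number here as an assumption:

```python
import random

def majority_update(theories, n_worlds):
    """An agent's updated theory accepts a world iff a strict majority of
    the community's theories accepts it (a toy reading of 'averaging')."""
    half = len(theories) / 2
    return frozenset(w for w in range(n_worlds)
                     if sum(w in t for t in theories) > half)

def estimate_inconsistency_rate(n_agents=5, n_issues=3, trials=2000, seed=0):
    """Monte Carlo estimate of how often majority-averaging of random
    consistent theories yields the inconsistent (empty) theory."""
    rng = random.Random(seed)
    n_worlds = 2 ** n_issues        # each independent issue doubles the worlds
    inconsistent = 0
    for _ in range(trials):
        theories = []
        for _ in range(n_agents):
            t = frozenset(w for w in range(n_worlds) if rng.random() < 0.5)
            # force consistency: a theory must accept at least one world
            theories.append(t or frozenset({rng.randrange(n_worlds)}))
        if not majority_update(theories, n_worlds):
            inconsistent += 1
    return inconsistent / trials

rate = estimate_inconsistency_rate()
```

Increasing `n_issues` or `n_agents` drives the estimated rate down, mirroring the abstract's claim that the inconsistency probability can be made arbitrarily small.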
Giannakakos, Antonia R; Vladescu, Jason C; Kisamore, April N; Reeve, Sharon A
2016-06-01
Direct teaching procedures are often an important part of early intensive behavioral intervention for consumers with autism spectrum disorder. In the present study, a video model with voiceover (VMVO) instruction plus feedback was evaluated as a way to train three staff trainees to implement a most-to-least (MTL) direct teaching procedure. Probes for generalization were conducted with untrained direct teaching procedures (i.e., least-to-most, prompt delay) and with an actual consumer. The results indicated that VMVO plus feedback was effective in training the staff trainees to implement the MTL procedure. Although additional feedback was required for the staff trainees to show mastery of the untrained direct teaching procedures and in work with an actual consumer, moderate to high levels of generalization were observed.
Olexová, Lucia; Talarovičová, Alžbeta; Lewis-Evans, Ben; Borbélyová, Veronika; Kršková, Lucia
2012-12-01
Research on autism has been gaining more and more attention. However, its aetiology is not entirely known, and several factors are thought to contribute to the development of this neurodevelopmental disorder. These potential contributing factors range from genetic heritability to environmental effects. A significant number of reviews have already been published on different aspects of autism research, as well as on the use of animal models to help expand current knowledge of its aetiology. However, the diverse range of symptoms and possible causes of autism have resulted in an equally wide variety of animal models of autism. In this update article we focus only on the animal models with neurobehavioural characteristics of social deficit related to autism, and present an overview of the animal models with alterations in brain regions, neurotransmitters, or hormones that are involved in a decrease in sociability.
Villacañas de Castro, Luis S.
2016-01-01
This article aims to apply Stenhouse's process model of curriculum to foreign language (FL) education, a model which is characterized by enacting "principles of procedure" which are specific to the discipline which the school subject belongs to. Rather than to replace or dissolve current approaches to FL teaching and curriculum…
Automatically generating procedure code and database maintenance scripts
Energy Technology Data Exchange (ETDEWEB)
Hatley, J.W. [Sandia National Labs., Albuquerque, NM (United States). Information Technologies and Methodologies Dept.
1994-10-01
Over the past couple of years the Information Technology Department at Sandia Laboratories has developed software to automatically generate database/4GL procedure code and database maintenance scripts based on database table information. With this software, developers simply enter table and referential integrity information, and the software generates code and scripts as required. The generated procedure code includes simple insert/delete/update procedures, transaction logging procedures, and referential integrity procedures. The generated database maintenance scripts include scripts to modify structures, update remote databases, create views, and create indexes. Additionally, the software can generate EPSI representations of Binder diagrams for the tables. This paper discusses the software and its use in real-world applications. The automated generation of procedure code and maintenance scripts allows developers to concentrate on the development of user interface code. The technique involves generating database/4GL procedure code and maintenance scripts automatically from the database table information. The procedure code provides standard insert/update/delete interfaces for upper-level code and enforces the data constraints defined in the information model; the generated scripts cover both database maintenance and migration. This has resulted in fully updated database applications, with complete rules enforcement and database maintenance scripts, within days of a database modification.
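The metadata-driven approach described above can be illustrated with a small generator. The sketch below is hypothetical (the Sandia tool targeted a specific database/4GL environment whose syntax is not given in the abstract); it simply shows how standard insert/update/delete statements fall out of table and key metadata:

```python
def generate_crud_sql(table, columns, key):
    """Generate simple INSERT/UPDATE/DELETE statements from table metadata,
    in the spirit of the metadata-driven generator described above.
    Table/column names here are invented examples."""
    cols = ", ".join(columns)
    params = ", ".join(":" + c for c in columns)
    sets = ", ".join(f"{c} = :{c}" for c in columns if c != key)
    return {
        "insert": f"INSERT INTO {table} ({cols}) VALUES ({params});",
        "update": f"UPDATE {table} SET {sets} WHERE {key} = :{key};",
        "delete": f"DELETE FROM {table} WHERE {key} = :{key};",
    }

sql = generate_crud_sql("employee", ["emp_id", "name", "dept"], key="emp_id")
print(sql["update"])
# UPDATE employee SET name = :name, dept = :dept WHERE emp_id = :emp_id;
```

The real system additionally emitted transaction-logging and referential-integrity procedures from the same metadata; those follow the same template-expansion pattern.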
Escudero, Miguel; Hooper, Dan; Witte, Samuel J.
2017-02-01
Utilizing an exhaustive set of simplified models, we revisit dark matter scenarios potentially capable of generating the observed Galactic Center gamma-ray excess, updating constraints from the LUX and PandaX-II experiments, as well as from the LHC and other colliders. We identify a variety of pseudoscalar mediated models that remain consistent with all constraints. In contrast, dark matter candidates which annihilate through a spin-1 mediator are ruled out by direct detection constraints unless the mass of the mediator is near an annihilation resonance, or the mediator has a purely vector coupling to the dark matter and a purely axial coupling to Standard Model fermions. All scenarios in which the dark matter annihilates through t-channel processes are now ruled out by a combination of the constraints from LUX/PandaX-II and the LHC.
Energy Technology Data Exchange (ETDEWEB)
Quinn, John J. [Argonne National Lab. (ANL), Argonne, IL (United States); Greer, Christopher B. [Argonne National Lab. (ANL), Argonne, IL (United States); Carr, Adrianne E. [Argonne National Lab. (ANL), Argonne, IL (United States)
2014-10-01
The purpose of this study is to update a one-dimensional analytical groundwater flow model to examine the influence of potential groundwater withdrawal in support of utility-scale solar energy development at the Afton Solar Energy Zone (SEZ) as a part of the Bureau of Land Management’s (BLM’s) Solar Energy Program. This report describes the modeling for assessing the drawdown associated with SEZ groundwater pumping rates for a 20-year duration considering three categories of water demand (high, medium, and low) based on technology-specific considerations. The 2012 modeling effort published in the Final Programmatic Environmental Impact Statement for Solar Energy Development in Six Southwestern States (Solar PEIS; BLM and DOE 2012) has been refined based on additional information described below in an expanded hydrogeologic discussion.
Procedural learning as a measure of functional impairment in a mouse model of ischemic stroke.
Linden, Jérôme; Van de Beeck, Lise; Plumier, Jean-Christophe; Ferrara, André
2016-07-01
Basal ganglia stroke is often associated with functional deficits in patients, including difficulties in learning and executing new motor skills (procedural learning). To measure procedural learning in a murine model of stroke (30 min right MCAO), we submitted C57Bl/6J mice to various sensorimotor tests, then to an operant procedure (Serial Order Learning) specifically assessing the ability to learn a simple motor sequence. Results showed that MCAO affected the performance in some of the sensorimotor tests (accelerated rotating rod and amphetamine rotation test) and the way animals learned a motor sequence. The latter finding seems to be caused by difficulties regarding the chunking of operant actions into a coherent motor sequence; the appeal of food rewards and the ability to press levers appeared unaffected by MCAO. We conclude that assessment of motor learning in rodent models of stroke might improve the translational value of such models.
Tu, Hong Anh T.; Rozenbaum, Mark H.; de Boer, Pieter T.; Noort, Albert C.; Postma, Maarten J.
2013-01-01
Background: To update a cost-effectiveness analysis of rotavirus vaccination in the Netherlands previously published in 2011. Methods: The rotavirus burden of disease and the indirect protection of older children and young adults (herd protection) were updated. Results: When updated data was used,
National Aeronautics and Space Administration — Updates to Website: (Please add new items at the top of this description with the date of the website change) May 9, 2012: Uploaded experimental data in matlab...
National Oceanic and Atmospheric Administration, Department of Commerce — Circular Updates are periodic sequentially numbered instructions to debriefing staff and observers informing them of changes or additions to scientific and specimen...
Heagerty, Denise
2008-01-01
An update on recent security issues and vulnerabilities affecting Windows, Linux and Mac platforms. This talk is based on contributions and input from a range of colleagues both within and outside CERN. It covers clients, servers and control systems.
The Chain-Link Fence Model: A Framework for Creating Security Procedures
Houghton, Robert F.
2013-01-01
A long-standing problem in information technology security is how to help reduce the security footprint. Many specific proposals exist to address specific problems in information technology security. Most information technology solutions need to be repeatable throughout the course of an information system's lifecycle. The Chain-Link Fence Model is a new model for creating and implementing information technology procedures. This model was validated by two different methods: the first being int...
Directory of Open Access Journals (Sweden)
Andrea Pacheco Pacífico
2013-01-01
This article recommends a new way to improve Refugee Status Determination (RSD) procedures by proposing a network-society communicative model based on active involvement and dialogue among all implementing partners. This model, named after proposals from Castells, Habermas, Apel, Chimni, and Betts, would be mediated by the United Nations High Commissioner for Refugees (UNHCR), whose role would be modeled after the practice of the International Committee of the Red Cross (ICRC).
Unthank, Michael D.
2013-01-01
The Ohio River alluvial aquifer near Carrollton, Ky., is an important water resource for the cities of Carrollton and Ghent, as well as for several industries in the area. The groundwater of the aquifer is the primary source of drinking water in the region and a highly valued natural resource that attracts various water-dependent industries because of its quantity and quality. This report evaluates the performance of a numerical model of the groundwater-flow system in the Ohio River alluvial aquifer near Carrollton, Ky., published by the U.S. Geological Survey in 1999. The original model simulated conditions in November 1995 and was updated to simulate groundwater conditions estimated for September 2010. The files from the calibrated steady-state model of November 1995 conditions were imported into MODFLOW-2005 to update the model to conditions in September 2010. The model input files modified as part of this update were the well and recharge files. The design of the updated model and other input files are the same as the original model. The ability of the updated model to match hydrologic conditions for September 2010 was evaluated by comparing water levels measured in wells to those computed by the model. Water-level measurements were available for 48 wells in September 2010. Overall, the updated model underestimated the water levels at 36 of the 48 measured wells. The average difference between measured water levels and model-computed water levels was 3.4 feet and the maximum difference was 10.9 feet. The root-mean-square error of the simulation was 4.45 for all 48 measured water levels. The updated steady-state model could be improved by introducing more accurate and site-specific estimates of selected field parameters, refined model geometry, and additional numerical methods. Collection of field data to better estimate hydraulic parameters, together with continued review of available data and information from area well operators, could provide the model with
Analyzing longitudinal data with the linear mixed models procedure in SPSS.
West, Brady T
2009-09-01
Many applied researchers analyzing longitudinal data share a common misconception: that specialized statistical software is necessary to fit hierarchical linear models (also known as linear mixed models [LMMs], or multilevel models) to longitudinal data sets. Although several specialized statistical software programs of high quality are available that allow researchers to fit these models to longitudinal data sets (e.g., HLM), rapid advances in general purpose statistical software packages have recently enabled analysts to fit these same models when using preferred packages that also enable other more common analyses. One of these general purpose statistical packages is SPSS, which includes a very flexible and powerful procedure for fitting LMMs to longitudinal data sets with continuous outcomes. This article aims to present readers with a practical discussion of how to analyze longitudinal data using the LMMs procedure in the SPSS statistical software package.
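The article's point is about SPSS's MIXED procedure, but the core idea, separating subject-level random effects from a common fixed effect across repeated measures, can be sketched in a few lines of Python. The example below is an assumption-laden stand-in, not SPSS syntax: it simulates a toy longitudinal data set and recovers the time slope with a within-subject (demeaning) estimator, which removes each subject's random intercept exactly:

```python
import numpy as np

# Toy longitudinal data set: 30 subjects measured at 4 waves, with a
# subject-specific random intercept and a common time slope of 0.5.
rng = np.random.default_rng(0)
n_subjects, n_waves = 30, 4
subject = np.repeat(np.arange(n_subjects), n_waves)
time = np.tile(np.arange(n_waves), n_subjects).astype(float)
y = (2.0 + 0.5 * time
     + rng.normal(0, 1, n_subjects)[subject]   # random intercept per subject
     + rng.normal(0, 0.3, subject.size))       # residual noise

def within_slope(y, x, groups):
    """Within-subject estimator: demeaning each subject's data removes the
    random intercept, leaving a simple OLS fit for the common slope."""
    yd = y - np.array([y[groups == g].mean() for g in groups])
    xd = x - np.array([x[groups == g].mean() for g in groups])
    return float(xd @ yd / (xd @ xd))

slope = within_slope(y, time, subject)
```

A full LMM additionally estimates the variance components and allows random slopes; this sketch only illustrates why ignoring the grouping structure (plain OLS on pooled data) is the misconception the article addresses.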
A Stepwise Time Series Regression Procedure for Water Demand Model Identification
Miaou, Shaw-Pin
1990-09-01
Annual time series water demand has traditionally been studied through multiple linear regression analysis. Four associated model specification problems have long been recognized: (1) the length of the available time series data is relatively short, (2) a large set of candidate explanatory or "input" variables needs to be considered, (3) input variables can be highly correlated with each other (the multicollinearity problem), and (4) model error series are often highly autocorrelated or even nonstationary. A stepwise time series regression identification procedure is proposed to alleviate these problems. The proposed procedure adopts the sequential input variable selection concept of stepwise regression and the "three-step" time series model building strategy of Box and Jenkins. Autocorrelated model error is assumed to follow an autoregressive integrated moving average (ARIMA) process. The stepwise selection procedure begins with a univariate time series demand model with no input variables. Subsequently, input variables are selected and inserted into the equation one at a time until the last entered variable is found to be statistically insignificant. The order of insertion is determined by a statistical measure called the between-variable partial correlation, which is free from the contamination of serial autocorrelation. Three data sets from previous studies are employed to illustrate the proposed procedure. The results are then compared with those from the original studies.
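A simplified, assumption-laden sketch of the selection loop described above: candidates are inserted one at a time in order of absolute partial correlation with the demand series (after regressing out the variables already entered), stopping when the best remaining candidate falls below a threshold. The ARIMA error modeling and formal significance testing of the actual procedure are replaced here by a plain correlation cutoff:

```python
import numpy as np

def partial_corr(y, x, controls):
    """Correlation between y and x after regressing both on the controls."""
    def resid(v):
        beta, *_ = np.linalg.lstsq(controls, v, rcond=None)
        return v - controls @ beta
    ry, rx = resid(y), resid(x)
    return float(ry @ rx / np.sqrt((ry @ ry) * (rx @ rx)))

def stepwise_select(y, candidates, threshold=0.3):
    """Greedy insertion by largest absolute partial correlation; `threshold`
    is a stand-in for the significance test in the published procedure."""
    selected, remaining = [], dict(candidates)
    X = np.ones((y.size, 1))              # start with intercept only
    while remaining:
        scores = {name: partial_corr(y, x, X) for name, x in remaining.items()}
        best = max(scores, key=lambda k: abs(scores[k]))
        if abs(scores[best]) < threshold:
            break
        selected.append(best)
        X = np.column_stack([X, remaining.pop(best)])
    return selected

# Synthetic demo: y truly depends on x1 and x2, while x3 is pure noise.
rng = np.random.default_rng(1)
n = 200
x1, x2, x3 = rng.normal(size=n), rng.normal(size=n), rng.normal(size=n)
y = 2.0 * x1 + 1.0 * x2 + rng.normal(0, 0.5, n)
sel = stepwise_select(y, {"x1": x1, "x2": x2, "x3": x3})
```

On this synthetic data the loop enters `x1` first (largest correlation), then `x2`, and stops before the noise variable `x3`.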
Talo, Seija A; Rytökoski, Ulla M
2016-03-01
The transformation of the International Classification of Impairments, Disabilities and Handicaps into the International Classification of Functioning, Disability and Health (ICF) meant a great deal to those needing to communicate in terms of the functioning concept in their daily work. With ICF's commonly understood language, decades of uncertainty about which concepts and terms describe functioning and disability seemed to be dispelled. In contrast, operationalizing the ICF to measure the level of functioning along with the new nomenclature has not been as unambiguous. Transforming linguistic terms into quantified functioning seems to require another type of theorizing. Despite these challenges, numerous projects were formulated during the past decades to apply the ICF for measurement purposes. This article updates one of them, the so-called biopsychosocial-ICF model, which uses all ICF categories but classifies them into more components than the ICF does for measurement purposes. The model suggests that both disabilities and functional resources should be described by collecting and organizing functional measurement data in a multidisciplinary, biopsychosocial data matrix.
Lauenstein, Jean-Marie
2015-01-01
The JEDEC JESD57 test standard, Procedures for the Measurement of Single-Event Effects in Semiconductor Devices from Heavy-Ion Irradiation, is undergoing its first revision since 1996. In this talk, we place this test standard in context with other relevant radiation test standards to show its importance for single-event effect radiation testing for space applications. We show the range of industry, government, and end-user involvement in the revision. Finally, we highlight some of the key changes being made and discuss the trade-space within which standards must be set to be both useful and broadly adopted.
Nagai, Kazuyuki; Yagi, Shintaro; Uemoto, Shinji; Tolba, Rene H
2013-03-07
Orthotopic liver transplantation (OLT) in rats using a whole or partial graft is an indispensable experimental model for transplantation research, such as studies on graft preservation and ischemia-reperfusion injury, immunological responses, hemodynamics, and small-for-size syndrome. The rat OLT is among the most difficult animal models in experimental surgery and demands advanced microsurgical skills that take a long time to learn. Consequently, the use of this model has been limited. Since the reliability and reproducibility of results are key components of the experiments in which such complex animal models are used, it is essential for surgeons who are involved in rat OLT to be trained in well-standardized and sophisticated procedures for this model. While various techniques and modifications of OLT in rats have been reported since the first model was described by Lee et al. in 1973, the elimination of the hepatic arterial reconstruction and the introduction of the cuff anastomosis technique by Kamada et al. were a major advancement in this model, because they simplified the reconstruction procedures to a great degree. In the model by Kamada et al., the hepatic rearterialization was also eliminated. Since rats could survive without hepatic arterial flow after liver transplantation, there was considerable controversy over the value of hepatic arterialization. However, the physiological superiority of the arterialized model has been increasingly acknowledged, especially in terms of preserving the bile duct system and the liver integrity. In this article, we present detailed surgical procedures for a rat model of OLT with hepatic arterial reconstruction using a 50% partial graft after ex vivo liver resection. The reconstruction procedures for each vessel and the bile duct are performed by the following methods: a 7-0 polypropylene continuous suture for the supra- and infrahepatic vena cava; a cuff technique for the portal vein; and a stent technique for the
Hou, Tsung-Chin; Gao, Wei-Yuan; Chang, Chia-Sheng; Zhu, Guan-Rong; Su, Yu-Min
2017-04-01
The three-span steel-arch-steel-girder Jiaxian Bridge was newly constructed in 2010 to replace the former bridge, which had been destroyed by Typhoon Sinlaku (2008, Taiwan). It was designed and built to continue domestic service as well as to support the tourism efforts of the Kaohsiung city government, Taiwan. This study aimed at establishing a baseline model of the Jiaxian Bridge for hazardous scenario simulation such as typhoons, floods, and earthquakes. These precautions are necessary because of the inherent vulnerability of the site, which is near a fault and crosses a river. The uncalibrated baseline bridge model was built with structural finite elements in accordance with the blueprints. Ambient vibration measurements were performed repeatedly to acquire the elastic dynamic characteristics of the bridge structure. Two frequency-domain system identification algorithms were employed to extract the measured operational modal parameters. Modal shapes, frequencies, and modal assurance criteria (MAC) were configured as the fitting targets so as to calibrate/update the structural parameters of the baseline model. It has been recognized, and this study similarly found, that different types of structural parameters contribute to the fitting targets to distinguishably different degrees. For steel-arch-steel-girder bridges, in particular this case, the joint rigidity of the steel components was found to be dominant, while material properties and section geometries were relatively minor. The updated model is capable of providing more rational elastic responses of the bridge superstructure under normal service conditions as well as hazardous scenarios, and can be used to manage the health condition of the bridge structure.
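The modal assurance criterion used as a fitting target above has a standard closed form, MAC = |φ_aᵀφ_e|² / ((φ_aᵀφ_a)(φ_eᵀφ_e)), which for real-valued mode shapes can be computed directly. The mode-shape vectors below are invented for illustration, not measurements from the bridge:

```python
import numpy as np

def mac(phi_a, phi_e):
    """Modal Assurance Criterion between an analytical and an experimental
    (real-valued) mode shape: 1.0 means perfectly correlated shapes,
    0.0 means uncorrelated. Complex shapes would need a conjugate transpose."""
    num = np.abs(phi_a @ phi_e) ** 2
    return float(num / ((phi_a @ phi_a) * (phi_e @ phi_e)))

mode_fe = np.array([1.0, 2.0, 3.0])     # hypothetical FE-model mode shape
mode_test = np.array([1.1, 1.9, 3.2])   # hypothetical measured mode shape
print(round(mac(mode_fe, mode_test), 3))
# 0.997
```

In model updating, structural parameters (here, the joint rigidities) are tuned until MAC values near 1 and small frequency errors are achieved simultaneously.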
An incremental procedure model for e-learning projects at universities
Directory of Open Access Journals (Sweden)
Pahlke, Friedrich
2006-11-01
E-learning projects at universities are produced under different conditions than in industry. The main characteristic of many university projects is that they are realized virtually single-handedly. In contrast, in private industry the different interdisciplinary skills that are necessary for the development of e-learning are typically supplied by a multimedia agency. A specific procedure tailored for use at universities is therefore required to help master the amount and complexity of the tasks. In this paper an incremental procedure model is presented which describes the proceeding in every phase of the project. It allows a high degree of flexibility and emphasizes the didactical concept rather than the technical implementation. In the second part, we illustrate the practical use of the theoretical procedure model based on the project "Online training in Genetic Epidemiology".
Zika virus: An update on epidemiology, pathology, molecular biology, and animal model.
Ramos da Silva, Suzane; Gao, Shou-Jiang
2016-08-01
Zika virus (ZIKV) was first described in 1947 and became a health emergency in 2016, when its association with fetal microcephaly cases was confirmed by the Centers for Disease Control and Prevention (CDC) in the United States. To date, ZIKV infection has been documented in 66 countries. ZIKV is recognized as a neurotropic virus, and numerous disease manifestations involving multiple neurological disorders have been described, mainly in countries exposed to ZIKV after the 2007 outbreak in the Federated States of Micronesia. The most dramatic consequence of ZIKV infection documented is the abrupt increase in fetal microcephaly cases in Brazil. Here, we present an update of the published research progress in the past few months. J. Med. Virol. 88:1291-1296, 2016. © 2016 Wiley Periodicals, Inc.
DEFF Research Database (Denmark)
Kristensen, Anders Ringgaard; Søllested, Thomas Algot
2004-01-01
herds. It is concluded that the Bayesian updating technique and the hierarchical structure decrease the size of the state space dramatically. Since parameter estimates vary considerably among herds it is concluded that decision support concerning sow replacement only makes sense with parameters...... estimated at herd level. It is argued that the multi-level formulation and the standard software comprise a flexible tool and a shortcut to working prototypes...
Martino, K G; Marks, B P
2007-12-01
Two different microbial modeling procedures were compared and validated against independent data for Listeria monocytogenes growth. The most commonly used method consists of two consecutive regressions: growth parameters are estimated from a primary regression of microbial counts, and a secondary regression relates the growth parameters to experimental conditions. A global regression is an alternative method in which the primary and secondary models are combined, giving a direct relationship between experimental factors and microbial counts. The Gompertz equation was the primary model, and a response surface model was the secondary model. Independent data from meat and poultry products were used to validate the modeling procedures. The global regression yielded the lower standard errors of calibration, 0.95 log CFU/ml for aerobic and 1.21 log CFU/ml for anaerobic conditions. The two-step procedure yielded errors of 1.35 log CFU/ml for aerobic and 1.62 log CFU/ml for anaerobic conditions. For food products, the global regression was more robust than the two-step procedure in 65% of the cases studied. The robustness index for the global regression ranged from 0.27 (performed better than expected) to 2.60. For the two-step method, the robustness index ranged from 0.42 to 3.88. The predictions were overestimated (fail-safe) in more than 50% of the cases using the global regression and in more than 70% of the cases using the two-step regression. Overall, the global regression performed better than the two-step procedure for this specific application.
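Structurally, the two approaches differ in where the secondary model enters. A hedged sketch (using Zwietering's reparameterized Gompertz form and an invented linear secondary model, not the study's fitted response surface): in the two-step method the primary model is fitted per experiment and its parameters are regressed on conditions afterwards, while in the global method the secondary model is substituted into the primary one and everything is fitted in a single regression:

```python
import numpy as np

def gompertz(t, y0, ymax, mu, lam):
    """Primary model: log microbial count over time (Zwietering's
    reparameterized Gompertz form; mu = max growth rate, lam = lag time)."""
    A = ymax - y0
    return y0 + A * np.exp(-np.exp(mu * np.e / A * (lam - t) + 1))

def mu_secondary(temp, b0, b1):
    """Toy secondary model: growth rate as a linear function of temperature
    (the study used a response surface; this is only illustrative)."""
    return b0 + b1 * temp

def gompertz_global(t, temp, y0, ymax, lam, b0, b1):
    """Global model: the secondary model is substituted into the primary one,
    so counts are regressed on (t, temp) in a single step."""
    return gompertz(t, y0, ymax, mu_secondary(temp, b0, b1), lam)
```

Fitting `gompertz_global` directly to all counts pools the data across conditions, which is one reason the global regression showed lower calibration errors than the sequential two-step fits.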
A new procedure to build a model covariance matrix: first results
Barzaghi, R.; Marotta, A. M.; Splendore, R.; Borghi, A.
2012-04-01
In order to validate the results of geophysical models, a common procedure is to compare model predictions with observations by means of statistical tests. A limit of this approach is the lack of a covariance matrix associated with the model results, which may frustrate the achievement of a confident statistical significance of the results. To overcome this limit, we have implemented a new procedure to build a model covariance matrix that allows a more reliable statistical analysis. This procedure has been developed in the frame of the thermo-mechanical model described in Splendore et al. (2010), which predicts the present-day crustal velocity field in the Tyrrhenian due to Africa-Eurasia convergence and to lateral rheological heterogeneities of the lithosphere. The modelled tectonic velocity field has been compared to the available surface velocity field based on GPS observations, determining the best-fit model and the degree of fitting through the use of a χ2 test. Once we had identified the key model parameters and defined their appropriate ranges of variability, we ran 100 different models for 100 sets of parameter values randomly extracted within the corresponding intervals, obtaining a stack of 100 velocity fields. We then calculated the variance and empirical covariance for the stack of results, taking into account also cross-correlation, obtaining a positive-definite matrix that represents the covariance matrix of the model. This empirical approach allows us to define a more robust statistical analysis with respect to the classic approach. Reference: Splendore, Marotta, Barzaghi, Borghi and Cannizzaro, 2010. Block model versus thermomechanical model: new insights on the present-day regional deformation in the surroundings of the Calabrian Arc. In: Spalla, Marotta and Gosso (Eds), Advances in Interpretation of Geological Processes: Refinement of Multi-scale Data and Integration in Numerical Modelling. Geological Society, London, Special
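The ensemble recipe above, perturbing parameters, stacking the resulting fields, and taking the empirical covariance, can be sketched generically. The toy "model" and parameter ranges below are placeholders, not the thermo-mechanical model of the paper:

```python
import numpy as np

rng = np.random.default_rng(42)

def toy_model(params, n_sites=6):
    """Stand-in for the geophysical model: maps a parameter vector to a
    predicted velocity at each observation site (purely illustrative)."""
    x = np.linspace(0.0, 1.0, n_sites)
    return params[0] * x + params[1] * np.sin(3 * x)

# 100 model runs, one per random parameter draw within assumed ranges.
runs = np.array([toy_model(rng.uniform([1.0, 0.2], [3.0, 0.8]))
                 for _ in range(100)])

model_mean = runs.mean(axis=0)
model_cov = np.cov(runs, rowvar=False)   # empirical model covariance matrix

# The covariance then weights a chi-square misfit against observations;
# pinv guards against rank deficiency of the small toy ensemble.
obs = toy_model(np.array([2.0, 0.5]))    # pretend observations
r = obs - model_mean
chi2 = float(r @ np.linalg.pinv(model_cov) @ r)
```

With a covariance estimated this way, the χ2 test weights well-constrained components of the prediction more heavily than components that vary strongly across the parameter ensemble.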
Sharkawi, K.-H.; Abdul-Rahman, A.
2013-09-01
to LoD4. The accuracy and structural complexity of the 3D objects increase with the LoD level, where LoD0 is the simplest LoD (2.5D; Digital Terrain Model (DTM) + building or roof print) while LoD4 is the most complex LoD (architectural details with interior structures). Semantic information is one of the main components in CityGML and 3D city models, and provides important information for any analysis. However, more often than not, the semantic information is not available for the 3D city model due to the unstandardized modelling process. One example is where a building is normally generated as one object (without specific feature layers such as Roof, Ground floor, Level 1, Level 2, Block A, Block B, etc.). This research attempts to develop a method to improve the semantic data updating process by segmenting the 3D building into simpler parts, which makes it easier for users to select and update the semantic information. The methodology is implemented for 3D buildings in LoD2, where the buildings are generated without architectural details but with distinct roof structures. This paper also introduces a hybrid semantic-geometric 3D segmentation method that deals with hierarchical segmentation of a 3D building based on its semantic value and surface characteristics, fitted by one of the predefined primitives. For future work, the segmentation method will be implemented as part of a change detection module that can detect any changes on the 3D buildings, store and retrieve semantic information of the changed structure, automatically update the 3D models, and visualize the results in a user-friendly graphical user interface (GUI).
A new experimental procedure for incorporation of model contaminants in polymer hosts
Papaspyrides, C.D.; Voultzatis, Y.; Pavlidou, S.; Tsenoglou, C.; Dole, P.; Feigenbaum, A.; Paseiro, P.; Pastorelli, S.; Cruz Garcia, C. de la; Hankemeier, T.; Aucejo, S.
2005-01-01
A new experimental procedure for incorporation of model contaminants in polymers was developed as part of a general scheme for testing the efficiency of functional barriers in food packaging. The aim was to progressively pollute polymers in a controlled fashion up to a high level in the range of 100
Raykov, Tenko; Marcoulides, George A.; Lee, Chun-Lung; Chang, Chi
2013-01-01
This note is concerned with a latent variable modeling approach for the study of differential item functioning in a multigroup setting. A multiple-testing procedure that can be used to evaluate group differences in response probabilities on individual items is discussed. The method is readily employed when the aim is also to locate possible…
User Acceptance of YouTube for Procedural Learning: An Extension of the Technology Acceptance Model
Lee, Doo Young; Lehto, Mark R.
2013-01-01
The present study was framed using the Technology Acceptance Model (TAM) to identify determinants affecting behavioral intention to use YouTube. Most importantly, this research emphasizes the motives for using YouTube, which is notable given its extrinsic task goal of being used for procedural learning tasks. Our conceptual framework included two…
A Numerical Procedure for Model Identifiability Analysis Applied to Enzyme Kinetics
DEFF Research Database (Denmark)
Van Daele, Timothy; Van Hoey, Stijn; Gernaey, Krist
2015-01-01
The proper calibration of models describing enzyme kinetics can be quite challenging. In the literature, different procedures are available to calibrate these enzymatic models in an efficient way. However, in most cases the model structure is already decided on prior to the actual calibration exercise, thereby bypassing the challenging task of model structure determination and identification. Parameter identification problems can thus lead to ill-calibrated models with low predictive power and large model uncertainty. Every calibration exercise should therefore be preceded by a proper model… and Pronzato (1997) and which can be easily set up for any type of model. In this paper the proposed approach is applied to the forward reaction rate of the enzyme kinetics proposed by Shin and Kim (1998). Structural identifiability analysis showed that no local structural model problems were occurring…
Progress on Updating the 1961-1990 National Solar Radiation Database: Preprint
Energy Technology Data Exchange (ETDEWEB)
Renne, D.; Wilcox, S.; Marion, B.; George, R.; Myers, D.; Stoffel, T.; Perez, R.; Stackhouse, P. Jr.
2003-04-01
The 1961-1990 National Solar Radiation Data Base (NSRDB) provides a 30-year climate summary and solar characterization of 239 locations throughout the United States. Over the past several years, the National Renewable Energy Laboratory (NREL) has received numerous inquiries from a range of constituents as to whether an update of the database to include the 1990s will be developed. However, there are formidable challenges to creating an update of the serially complete station-specific database for the 1971-2000 period. During the 1990s, the National Weather Service changed its observational procedures from a human-based to an automated system, resulting in the loss of important input variables to the model used to complete the 1961-1990 NSRDB. As a result, alternative techniques are required for an update that covers the 1990s. This paper examines several alternative approaches for creating this update and describes preliminary NREL plans for implementing the update.
Baccino, Francesco; Marinelli, Mattia; Nørgård, Per; Silvestro, Federico
2014-05-01
The paper aims at characterizing the electrochemical and thermal parameters of a 15 kW/320 kWh vanadium redox flow battery (VRB) installed in the SYSLAB test facility of the DTU Risø Campus and experimentally validating the proposed dynamic model realized in Matlab-Simulink. The adopted testing procedure consists of analyzing the voltage and current values during a power reference step-response and evaluating the relevant electrochemical parameters such as the internal resistance. The results of different tests are presented and used to define the electrical characteristics and the overall efficiency of the battery system. The test procedure has general validity and could also be used for other storage technologies. The storage model proposed and described is suitable for electrical studies and can represent a general model in terms of validity. Finally, the model simulation outputs are compared with experimental measurements during a discharge-charge sequence.
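The step-response characterization described above boils down to reading the internal resistance off the instantaneous voltage jump that accompanies a current step. A minimal sketch of that calculation (the function name and all numbers are illustrative, not from the SYSLAB tests):

```python
# Illustrative sketch: estimating a battery's internal resistance from a
# current step response. Values are invented, not VRB measurements.

def internal_resistance(v_before, v_after, i_before, i_after):
    """Estimate R = -dV/dI from the instantaneous voltage jump across a
    current step (discharge current taken as positive)."""
    di = i_after - i_before
    if di == 0:
        raise ValueError("current must change across the step")
    return -(v_after - v_before) / di

# Example: terminal voltage drops from 52.0 V to 50.8 V when the
# discharge current steps from 0 A to 60 A.
r = internal_resistance(52.0, 50.8, 0.0, 60.0)
print(f"R_int = {r * 1000:.1f} mOhm")  # -> R_int = 20.0 mOhm
```

In a real test procedure the "before/after" voltages would be taken from samples immediately bracketing the step, so that slower electrochemical dynamics do not contaminate the ohmic estimate.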
Energy Technology Data Exchange (ETDEWEB)
Young, R. P.; Collins, D.; Hazzard, J.; Heath, A. [Department of Earth Sciences, Liverpool University, 4 Brownlow street, UK-0 L69 3GP Liverpool (United Kingdom); Pettitt, W.; Baker, C. [Applied Seismology Consultants LTD, 10 Belmont, Shropshire, UK-S41 ITE Shrewsbury (United Kingdom); Billaux, D.; Cundall, P.; Potyondy, D.; Dedecker, F. [Itasca Consultants S.A., Centre Scientifique A. Moiroux, 64, chemin des Mouilles, F69130 Ecully (France); Svemar, C. [Svensk Karnbranslemantering AB, SKB, Aspo Hard Rock Laboratory, PL 300, S-57295 Figeholm (Sweden); Lebon, P. [ANDRA, Parc de la Croix Blanche, 7, rue Jean Monnet, F-92298 Chatenay-Malabry (France)
2004-07-01
This paper presents current results from work performed within the European Commission project SAFETI. The main objective of SAFETI is to develop and test an innovative 3D numerical modelling procedure that will enable the 3-D simulation of nuclear waste repositories in rock. The modelling code is called AC/DC (Adaptive Continuum/ Dis-Continuum) and is partially based on Itasca Consulting Group's Particle Flow Code (PFC). Results are presented from the laboratory validation study where algorithms and procedures have been developed and tested to allow accurate 'Models for Rock' to be produced. Preliminary results are also presented on the use of AC/DC with parallel processors and adaptive logic. During the final year of the project a detailed model of the Prototype Repository Experiment at SKB's Hard Rock Laboratory will be produced using up to 128 processors on the parallel super computing facility at Liverpool University. (authors)
Vertex Color Parameters in Procedural 3D Animation of Musaceae Vegetation Models
Directory of Open Access Journals (Sweden)
I Gede Ngurah Arya Indrayasa
2017-02-01
Full Text Available The use of vegetation in the film, video game, simulation, and architectural visualization industries is an important factor in producing more lifelike natural scenery. This study aims to determine the influence of vertex color on the wind effect in procedural 3D animation of Musaceae vegetation models, and to identify the vertex color parameters that produce realistic 3D animation of Musaceae vegetation models. The final result is an examination of whether changes in vertex color parameters can affect the form of the procedural 3D animation of Musaceae vegetation, as well as the influence of vertex color on the wind effect in that animation. Based on observation and comparison of five tested vertex color samples, the results show that changes in vertex color parameters do affect the form of the procedural 3D animation of Musaceae vegetation, and it is concluded that Sample No. 5 provides the appropriate vertex color parameters for producing realistic 3D animation of Musaceae vegetation models. Keywords: 3D, Procedural Animation, Vegetation
Dhall, Sanjay S; Choudhri, Tanvir F; Eck, Jason C; Groff, Michael W; Ghogawala, Zoher; Watters, William C; Dailey, Andrew T; Resnick, Daniel K; Sharan, Alok; Mummaneni, Praveen V; Wang, Jeffrey C; Kaiser, Michael G
2014-07-01
In an effort to diminish pain or progressive instability, due to either the pathological process or as a result of surgical decompression, one of the primary goals of a fusion procedure is to achieve a solid arthrodesis. Assuming that pain and disability result from lost mechanical integrity of the spine, the objective of a fusion across an unstable segment is to eliminate pathological motion and improve clinical outcome. However, conclusive evidence of this correlation, between successful fusion and clinical outcome, remains elusive, and thus the necessity of documenting successful arthrodesis through radiographic analysis remains debatable. Although a definitive cause and effect relationship has not been demonstrated, there is moderate evidence that demonstrates a positive association between radiographic presence of fusion and improved clinical outcome. Due to this growing body of literature, it is recommended that strategies intended to enhance the potential for radiographic fusion are considered when performing a lumbar arthrodesis for degenerative spine disease.
A Review of Models and Procedures for Synthetic Validation for Entry-Level Army Jobs
1988-12-01
ARI Research Note 88-107. A Review of Models and Procedures for Synthetic Validation for Entry-Level Army Jobs. Crafts, Jennifer L.; Szenas, Philip L.; Chia, Wel… …as well as ability. Project A Validity Results: Campbell (1986) and McHenry, Hough, Toquam, Hanson, and Ashworth (1987) have conducted extensive
A P-value model for theoretical power analysis and its applications in multiple testing procedures
Directory of Open Access Journals (Sweden)
Fengqing Zhang
2016-10-01
Full Text Available Abstract Background Power analysis is a critical aspect of the design of experiments to detect an effect of a given size. When multiple hypotheses are tested simultaneously, multiplicity adjustments to p-values should be taken into account in power analysis. There are a limited number of studies on power analysis in multiple testing procedures. For some methods, the theoretical analysis is difficult and extensive numerical simulations are often needed, while other methods oversimplify the information under the alternative hypothesis. To this end, this paper aims to develop a new statistical model for power analysis in multiple testing procedures. Methods We propose a step-function-based p-value model under the alternative hypothesis, which is simple enough to perform power analysis without simulations, but not so simple as to lose the information from the alternative hypothesis. The first step is to transform distributions of different test statistics (e.g., t, chi-square or F) to distributions of corresponding p-values. We then use a step function to approximate each of the p-value distributions by matching the mean and variance. Lastly, the step-function-based p-value model can be used for theoretical power analysis. Results The proposed model is applied to problems in multiple testing procedures. We first show how the most powerful critical constants can be chosen using the step-function-based p-value model. Our model is then applied to the field of multiple testing procedures to explain the assumption of monotonicity of the critical constants. Lastly, we apply our model to a behavioral weight loss and maintenance study to select the optimal critical constants. Conclusions The proposed model is easy to implement and preserves the information from the alternative hypothesis.
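The transform-then-approximate idea in the Methods section can be illustrated with a toy one-sided z-test: simulate the p-value distribution under the alternative and approximate its density by a coarse step function. The paper matches mean and variance analytically; this sketch simply bins simulated p-values, and the effect size, level, and sample count are all illustrative:

```python
# Toy sketch of a step-function p-value model (illustrative only).
# Under H1 with effect delta, the one-sided z-test p-value is
# p = 1 - Phi(Z) with Z ~ N(delta, 1).
import math, random

def phi(x):
    """Standard normal CDF via math.erf."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def simulate_pvalues(delta, n=200_000, seed=1):
    """One-sided z-test p-values p = 1 - Phi(Z), Z ~ N(delta, 1)."""
    rng = random.Random(seed)
    return [1.0 - phi(rng.gauss(delta, 1.0)) for _ in range(n)]

delta, alpha = 2.8, 0.05
pvals = simulate_pvalues(delta)

# Two-piece step approximation of the p-value density on [0, alpha) and
# [alpha, 1]; the power at level alpha is the mass of the first step.
mass_low = sum(p < alpha for p in pvals) / len(pvals)
power_step = (mass_low / alpha) * alpha        # step height times width

# Exact power for comparison: 1 - Phi(z_{1-alpha} - delta), z_0.95 ~= 1.6449
power_exact = 1.0 - phi(1.6449 - delta)
print(round(power_step, 3), round(power_exact, 3))
```

With finer bins the step function tracks the whole p-value CDF, which is what makes theoretical power analysis possible without re-simulating each multiple-testing procedure.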
Renormalization procedure for random tensor networks and the canonical tensor model
Sasakura, Naoki
2015-01-01
We discuss a renormalization procedure for random tensor networks, and show that the corresponding renormalization-group flow is given by the Hamiltonian vector flow of the canonical tensor model, which is a discretized model of quantum gravity. The result is the generalization of the previous one concerning the relation between the Ising model on random networks and the canonical tensor model with N=2. We also prove a general theorem which relates discontinuity of the renormalization-group flow and the phase transitions of random tensor networks.
Energy Technology Data Exchange (ETDEWEB)
Billet, L.; Moine, P. [Electricite de France (EDF), Direction des Etudes at Recherches, 1, Avenue du GENERAL-DE-GAULLE, BP 408, 92141 Clamart Cedex (France); Aubry, D. [Ecole Centrale de Paris, LMSSM, 92295 Chatenay Malabry Cedex (France)
1997-12-31
In this paper the feasibility of extending two updating methods to rotating machinery models is considered; the particularity of rotating machinery models is that they use non-symmetric stiffness and damping matrices. It is shown that the two methods described here, the inverse eigensensitivity method and the error in constitutive relation method, can be adapted to such models given some modifications. As far as the inverse sensitivity method is concerned, an error function based on the differences between right-hand calculated and measured eigenmode shapes and between calculated and measured eigenvalues is used. Concerning the error in constitutive relation method, the equation which defines the error has to be modified because the stiffness matrix is not positive definite. The advantage of this modification is that, in some cases, it is possible to focus the updating process on some specific model parameters. Both methods were validated on a simple test model consisting of a two-bearing and disc rotor system. (author). 12 refs.
Combining Decision Diagrams and SAT Procedures for Efficient Symbolic Model Checking
DEFF Research Database (Denmark)
Williams, Poul Frederick; Biere, Armin; Clarke, Edmund M.
2000-01-01
…in the specification of a 16-bit multiplier. As opposed to Bounded Model Checking (BMC) our method is complete in practice. Our technique is based on a quantification procedure that allows us to eliminate quantifiers in Quantified Boolean Formulas (QBF). The basic step of this procedure is the up-one operation… for BEDs. In addition we list a number of important optimizations to reduce the number of basic steps. In particular the optimization rule of quantification-by-substitution turned out to be very useful: exists x : (g /\ (x <-> f)) = g[f/x]. The rule is used (1) during fixed point iterations, (2) for deciding…
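The quantification-by-substitution rule quoted above can be checked by brute force over truth tables. This small sketch uses an arbitrary example g and an f that does not depend on x (both invented for illustration, not taken from the BED implementation):

```python
# Truth-table check of: exists x . (g /\ (x <-> f)) == g[f/x],
# where f is independent of x. Example formulas are illustrative.
from itertools import product

def g(x, a, b):
    """An arbitrary example formula g(x, a, b)."""
    return (x and a) or (not x and b)

def f(a, b):
    """The function substituted for x; it must not depend on x."""
    return a != b

ok = True
for a, b in product([False, True], repeat=2):
    # LHS: exists x . (g /\ (x <-> f))
    lhs = any(g(x, a, b) and (x == f(a, b)) for x in (False, True))
    # RHS: g[f/x], i.e. g with f substituted for x
    rhs = g(f(a, b), a, b)
    ok = ok and (lhs == rhs)
print(ok)  # -> True
```

The intuition is that the conjunct (x <-> f) forces the witness for x to equal f, so existential quantification collapses to substitution, which is exactly why the rule is cheap to apply during fixed-point iterations.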
Detection Procedure for a Single Additive Outlier and Innovational Outlier in a Bilinear Model
Directory of Open Access Journals (Sweden)
Azami Zaharim
2007-01-01
Full Text Available A single outlier detection procedure for data generated from BL(1,1,1,1) models is developed. It is carried out in three stages. Firstly, the measures of impact of an IO and an AO, denoted by ω_IO and ω_AO respectively, are derived based on the least squares method. Secondly, test statistics and test criteria are defined for classifying an observation as an outlier of its respective type. Finally, a general single outlier detection procedure is presented to distinguish a particular type of outlier at a time point t.
Net Metering and Interconnection Procedures-- Incorporating Best Practices
Energy Technology Data Exchange (ETDEWEB)
Jason Keyes, Kevin Fox, Joseph Wiedman, Staff at North Carolina Solar Center
2009-04-01
State utility commissions and utilities themselves are actively developing and revising their procedures for the interconnection and net metering of distributed generation. However, the procedures most often used by regulators and utilities as models have not been updated in the past three years, in which time most of the distributed solar facilities in the United States have been installed. In that period, the Interstate Renewable Energy Council (IREC) has been a participant in more than thirty state utility commission rulemakings regarding interconnection and net metering of distributed generation. With the knowledge gained from this experience, IREC has updated its model procedures to incorporate current best practices. This paper presents the most significant changes made to IREC's model interconnection and net metering procedures.
Glaucoma-inducing Procedure in an In Vivo Rat Model and Whole-mount Retina Preparation.
Gossman, Cynthia A; Linn, David M; Linn, Cindy
2016-01-01
Glaucoma is a disease of the central nervous system affecting retinal ganglion cells (RGCs). RGC axons making up the optic nerve carry visual input to the brain for visual perception. Damage to RGCs and their axons leads to vision loss and/or blindness. Although the specific cause of glaucoma is unknown, the primary risk factor for the disease is an elevated intraocular pressure. Glaucoma-inducing procedures in animal models are a valuable tool to researchers studying the mechanism of RGC death. Such information can lead to the development of effective neuroprotective treatments that could aid in the prevention of vision loss. The protocol in this paper describes a method of inducing glaucoma-like conditions in an in vivo rat model where 50 µl of 2 M hypertonic saline is injected into the episcleral venous plexus. Blanching of the vessels indicates successful injection. This procedure causes loss of RGCs to simulate glaucoma. One month following injection, animals are sacrificed and eyes are removed. Next, the cornea, lens, and vitreous are removed to make an eyecup. The retina is then peeled from the back of the eye and pinned onto Sylgard dishes using cactus needles. At this point, neurons in the retina can be stained for analysis. Results from this lab show that approximately 25% of RGCs are lost within one month of the procedure when compared to internal controls. This procedure allows for quantitative analysis of retinal ganglion cell death in an in vivo rat glaucoma model.
Directory of Open Access Journals (Sweden)
Gustavo Victor
2014-12-01
Full Text Available Corneal transplantation (CT) is the most commonly performed type of transplant in the world, and Eye Banks are the organizations that capture, evaluate, preserve, store, and distribute ocular tissues. With the evolution of surgical techniques and equipment for CT, Eye Banks have had to evolve to keep up with these requirements. This evolution ranges from tissue capture techniques, donation campaigns, and patient education (e.g. internet-based), through the use of current equipment to supply tissues adequate for the most current surgical techniques, to the integration of the Eye Banks of a given country, real-time management of ocular tissue stocks, and the adequacy of the laws that govern the entire process. This review aims to compare the updated models of Brazilian, United Kingdom, and American Eye Banks, and to examine the trend towards lamellar transplants in these three countries.
Schubert, Siegfried
2011-01-01
The Global Modeling and Assimilation Office at NASA's Goddard Space Flight Center is developing a number of experimental prediction and analysis products suitable for research and applications. The prediction products include a large suite of subseasonal and seasonal hindcasts and forecasts (as a contribution to the US National MME), a suite of decadal (10-year) hindcasts (as a contribution to the IPCC decadal prediction project), and a series of large ensemble and high resolution simulations of selected extreme events, including the 2010 Russian and 2011 US heat waves. The analysis products include an experimental atlas of climate (in particular drought) and weather extremes. This talk will provide an update on those activities, and discuss recent efforts by WCRP to leverage off these and similar efforts at other institutions throughout the world to develop an experimental global drought early warning system.
2013-07-01
A Long-Term Memory Competitive Process Model of a Common Procedural Error, Part II: Working Memory Load and Capacity. Franklin P. Tamborello, II… Tamborello, F. P., & Trafton, J. G. (2013). A long-term competitive process model of a common procedural error. In Proceedings of the 35th…
2011-03-24
DSCC Annual Tire Conference, CATL Update, March 24, 2011. UNCLASSIFIED: Dist A. Approved for public release. 2010 report sent: NATC CATL 1922 20K Treadwear, Bridgestone 2 LT235/85R16 (L/R E), GMC 2500 (2WD); awaiting new control tire.
Sullivan, Annett B.; Rounds, Stewart A.; Asbill-Case, Jessica R.; Deas, Michael L.
2013-01-01
A hydrodynamic, water temperature, and water-quality model of the Link River to Keno Dam reach of the upper Klamath River was updated to account for macrophytes and enhanced pH buffering from dissolved organic matter, ammonia, and orthophosphorus. Macrophytes had been observed in this reach by field personnel, so macrophyte field data were collected in summer and fall (June-October) 2011 to provide a dataset to guide the inclusion of macrophytes in the model. Three types of macrophytes were most common: pondweed (Potamogeton species), coontail (Ceratophyllum demersum), and common waterweed (Elodea canadensis). Pondweed was found throughout the Link River to Keno Dam reach in early summer with densities declining by mid-summer and fall. Coontail and common waterweed were more common in the lower reach near Keno Dam and were at highest density in summer. All species were most dense in shallow water (less than 2 meters deep) near shore. The highest estimated dry weight biomass for any sample during the study was 202 grams per square meter for coontail in August. Guided by field results, three macrophyte groups were incorporated into the CE-QUAL-W2 model for calendar years 2006-09. The CE-QUAL-W2 model code was adjusted to allow the user to initialize macrophyte populations spatially across the model grid. The default CE-QUAL-W2 model includes pH buffering by carbonates, but does not include pH buffering by organic matter, ammonia, or orthophosphorus. These three constituents, especially dissolved organic matter, are present in the upper Klamath River at concentrations that provide substantial pH buffering capacity. In this study, CE-QUAL-W2 was updated to include this enhanced buffering capacity in the simulation of pH. Acid dissociation constants for ammonium and phosphoric acid were taken from the literature. For dissolved organic matter, the number of organic acid groups and each group's acid dissociation constant (Ka) and site density (moles of sites per mole of
Shek, Daniel T L; Ma, Cecilia M S
2011-01-05
Although different methods are available for the analyses of longitudinal data, analyses based on generalized linear models (GLM) are criticized as violating the assumption of independence of observations. Alternatively, linear mixed models (LMM) are commonly used to understand changes in human behavior over time. In this paper, the basic concepts surrounding LMM (or hierarchical linear models) are outlined. Although SPSS is a statistical analysis package commonly used by researchers, documentation on LMM procedures in SPSS is not thorough or user friendly. With reference to this limitation, the related procedures for performing analyses based on LMM in SPSS are described. To demonstrate the application of LMM analyses in SPSS, findings based on six waves of data collected in the Project P.A.T.H.S. (Positive Adolescent Training through Holistic Social Programmes) in Hong Kong are presented.
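As a minimal, library-free illustration of the random-intercept model underlying such LMM analyses: repeated measures per subject are modeled as a fixed mean plus a random subject intercept plus noise. SPSS's MIXED procedure estimates this by ML/REML; the sketch below instead uses the classical balanced one-way ANOVA moment estimators, and every number in it is invented:

```python
# Stdlib sketch of a random-intercept (two-level) model:
#   y_ij = mu + u_i + e_ij,  u_i ~ N(0, sd_subj^2), e_ij ~ N(0, sd_eps^2).
# Variance components are estimated with one-way ANOVA moment estimators
# (valid for this balanced design); all parameters are illustrative.
import random

def simulate(n_subj=200, n_obs=6, mu=10.0, sd_subj=2.0, sd_eps=1.0, seed=3):
    rng = random.Random(seed)
    data = []
    for _ in range(n_subj):
        u = rng.gauss(0.0, sd_subj)            # random subject intercept
        data.append([mu + u + rng.gauss(0.0, sd_eps) for _ in range(n_obs)])
    return data

data = simulate()
n, k = len(data), len(data[0])
grand = sum(sum(row) for row in data) / (n * k)
msb = k * sum((sum(r) / k - grand) ** 2 for r in data) / (n - 1)   # between
msw = sum((x - sum(r) / k) ** 2 for r in data for x in r) / (n * (k - 1))
var_subj = (msb - msw) / k         # estimated random-intercept variance
icc = var_subj / (var_subj + msw)  # intraclass correlation
print(round(var_subj, 2), round(icc, 2))
```

The intraclass correlation is the quantity that GLM-style analyses ignore when they treat the six waves per adolescent as independent observations.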
Ghogawala, Zoher; Whitmore, Robert G; Watters, William C; Sharan, Alok; Mummaneni, Praveen V; Dailey, Andrew T; Choudhri, Tanvir F; Eck, Jason C; Groff, Michael W; Wang, Jeffrey C; Resnick, Daniel K; Dhall, Sanjay S; Kaiser, Michael G
2014-07-01
A comprehensive economic analysis generally involves the calculation of indirect and direct health costs from a societal perspective as opposed to simply reporting costs from a hospital or payer perspective. Hospital charges for a surgical procedure must be converted to cost data when performing a cost-effectiveness analysis. Once cost data has been calculated, quality-adjusted life year data from a surgical treatment are calculated by using a preference-based health-related quality-of-life instrument such as the EQ-5D. A recent cost-utility analysis from a single study has demonstrated the long-term (over an 8-year time period) benefits of circumferential fusions over stand-alone posterolateral fusions. In addition, economic analysis from a single study has found that lumbar fusion for selected patients with low-back pain can be recommended from an economic perspective. Recent economic analysis, from a single study, finds that femoral ring allograft might be more cost-effective compared with a specific titanium cage when performing an anterior lumbar interbody fusion plus posterolateral fusion.
Mummaneni, Praveen V; Dhall, Sanjay S; Eck, Jason C; Groff, Michael W; Ghogawala, Zoher; Watters, William C; Dailey, Andrew T; Resnick, Daniel K; Choudhri, Tanvir F; Sharan, Alok; Wang, Jeffrey C; Kaiser, Michael G
2014-07-01
Interbody fusion techniques have been promoted as an adjunct to lumbar fusion procedures in an effort to enhance fusion rates and potentially improve clinical outcome. The medical evidence continues to suggest that interbody techniques are associated with higher fusion rates compared with posterolateral lumbar fusion (PLF) in patients with degenerative spondylolisthesis who demonstrate preoperative instability. There is no conclusive evidence demonstrating improved clinical or radiographic outcomes based on the different interbody fusion techniques. The addition of a PLF when posterior or anterior interbody lumbar fusion is performed remains an option, although due to increased cost and complications, it is not recommended. No substantial clinical benefit has been demonstrated when a PLF is included with an interbody fusion. For lumbar degenerative disc disease without instability, there is moderate evidence that the standalone anterior lumbar interbody fusion (ALIF) has better clinical outcomes than the ALIF plus instrumented, open PLF. With regard to type of interbody spacer used, frozen allograft is associated with lower pseudarthrosis rates compared with freeze-dried allograft; however, this was not associated with a difference in clinical outcome.
Resnick, Daniel K; Watters, William C; Sharan, Alok; Mummaneni, Praveen V; Dailey, Andrew T; Wang, Jeffrey C; Choudhri, Tanvir F; Eck, Jason; Ghogawala, Zoher; Groff, Michael W; Dhall, Sanjay S; Kaiser, Michael G
2014-07-01
Patients presenting with stenosis associated with a spondylolisthesis will often describe signs and symptoms consistent with neurogenic claudication, radiculopathy, and/or low-back pain. The primary objective of surgery, when deemed appropriate, is to decompress the neural elements. As a result of the decompression, the inherent instability associated with the spondylolisthesis may progress and lead to further misalignment that results in pain or recurrence of neurological complaints. Under these circumstances, lumbar fusion is considered appropriate to stabilize the spine and prevent delayed deterioration. Since publication of the original guidelines there have been a significant number of studies published that continue to support the utility of lumbar fusion for patients presenting with stenosis and spondylolisthesis. Several recently published trials, including the Spine Patient Outcomes Research Trial, are among the largest prospective randomized investigations of this issue. Despite limitations of study design or execution, these trials have consistently demonstrated superior outcomes when patients undergo surgery, with the majority undergoing some type of lumbar fusion procedure. There is insufficient evidence, however, to recommend a standard approach to achieve a solid arthrodesis. When formulating the most appropriate surgical strategy, it is recommended that an individualized approach be adopted, one that takes into consideration the patient's unique anatomical constraints and desires, as well as surgeon's experience.
A step-by-step procedure for pH model construction in aquatic systems
Directory of Open Access Journals (Sweden)
A. F. Hofmann
2007-10-01
Full Text Available We present, by means of a simple example, a comprehensive step-by-step procedure to consistently derive a pH model of aquatic systems. As pH modeling is inherently complex, we make every step of the model generation process explicit, thus ensuring conceptual, mathematical, and chemical correctness. Summed quantities, such as total inorganic carbon and total alkalinity, and the influences of modeled processes on them are consistently derived. The model is subsequently reformulated until numerically and computationally simple dynamical solutions, like a variation of the operator splitting approach (OSA) and the direct substitution approach (DSA), are obtained. As several solution methods are pointed out, connections between previous pH modelling approaches are established. The final reformulation of the system according to the DSA allows for quantification of the influences of kinetic processes on the rate of change of proton concentration in models containing multiple biogeochemical processes. These influences are calculated including the effect of re-equilibration of the system due to a set of acid-base reactions in local equilibrium. This possibility of quantifying influences of modeled processes on the pH makes the end-product of the described model generation procedure a powerful tool for understanding the internal pH dynamics of aquatic systems.
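A minimal sketch of the direct-substitution flavor of pH modelling for a carbonate-only system: given total dissolved inorganic carbon (DIC) and total alkalinity (TA, carbonate and water contributions only), solve the alkalinity expression for [H+]. The equilibrium constants and input concentrations below are rough illustrative values, not taken from the paper:

```python
# Carbonate-system pH by bisection on [H+] (illustrative constants).
import math

# Rough stoichiometric constants, mol/kg, ~25 degC seawater (illustrative):
K1, K2, Kw = 1.4e-6, 1.1e-9, 6.0e-14

def alkalinity(h, dic):
    """TA implied by [H+] and DIC: [HCO3-] + 2[CO3--] + [OH-] - [H+]."""
    denom = h * h + K1 * h + K1 * K2
    hco3 = dic * K1 * h / denom
    co3 = dic * K1 * K2 / denom
    return hco3 + 2.0 * co3 + Kw / h - h

def solve_ph(dic, ta):
    """Bisect for [H+]; TA is monotonically decreasing in [H+].
    Geometric midpoint, since [H+] spans many decades."""
    lo, hi = 1e-12, 1e-2
    for _ in range(200):
        mid = math.sqrt(lo * hi)
        if alkalinity(mid, dic) > ta:
            lo = mid          # pH too high -> raise [H+]
        else:
            hi = mid
    return -math.log10(mid)

print(round(solve_ph(dic=2.0e-3, ta=2.3e-3), 2))  # ~8.2
```

In a full DSA model this root-solve is embedded in the time loop, so the proton concentration re-equilibrates against the local-equilibrium acid-base reactions at every step.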
Ohlberger, Mario; Smetana, Kathrin
2016-09-01
In this article we introduce a procedure that allows one to recover the potentially very good approximation properties of tensor-based model reduction procedures for the solution of partial differential equations in the presence of interfaces or strong gradients in the solution which are skewed with respect to the coordinate axes. The two key ideas are the location of the interface either by solving a lower-dimensional partial differential equation or by using data functions, and the subsequent removal of the interface from the solution by choosing the determined interface as the lifting function of the Dirichlet boundary conditions. We demonstrate in numerical experiments for linear elliptic equations and the reduced basis-hierarchical model reduction approach that the proposed procedure locates the interface well and yields a significantly improved convergence behavior even in the case when we only consider an approximation of the interface.
Calibrating and Updating the Global Forest Products Model (GFPM version 2014 with BPMPD)
Joseph Buongiorno; Shushuai Zhu
2014-01-01
The Global Forest Products Model (GFPM) is an economic model of global production, consumption, and trade of forest products. An earlier version of the model is described in Buongiorno et al. (2003). The GFPM 2014 has data and parameters to simulate changes of the forest sector from 2010 to 2030. Buongiorno and Zhu (2014) describe how to use the model for simulation....
Calibrating and updating the Global Forest Products Model (GFPM version 2016 with BPMPD)
Joseph Buongiorno; Shushuai Zhu
2016-01-01
The Global Forest Products Model (GFPM) is an economic model of global production, consumption, and trade of forest products. An earlier version of the model is described in Buongiorno et al. (2003). The GFPM 2016 has data and parameters to simulate changes of the forest sector from 2013 to 2030. Buongiorno and Zhu (2015) describe how to use the model for...
A double-step truncation procedure for large-scale shell-model calculations
Coraggio, L; Itaco, N
2016-01-01
We present a procedure that is helpful to reduce the computational complexity of large-scale shell-model calculations, by preserving as much as possible the role of the rejected degrees of freedom in an effective approach. Our truncation is driven first by the analysis of the effective single-particle energies of the original large-scale shell-model Hamiltonian, so as to locate the relevant degrees of freedom to describe a class of isotopes or isotones, namely the single-particle orbitals that will constitute a new truncated model space. The second step is to perform a unitary transformation of the original Hamiltonian from its model space into the truncated one. This transformation generates a new shell-model Hamiltonian, defined in a smaller model space, that retains effectively the role of the excluded single-particle orbitals. As an application of this procedure, we have chosen a realistic shell-model Hamiltonian defined in a large model space, set up by seven and five proton and neutron single-particle orb…
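The transformation step can be illustrated in a few lines of linear algebra. This sketch builds an effective Hamiltonian in a smaller space from the lowest eigenvectors of a random symmetric matrix, so that its spectrum reproduces the lowest eigenvalues of the full problem; it is a drastic simplification of the realistic shell-model procedure (which selects orbitals via effective single-particle energies), and all dimensions here are invented:

```python
# Illustrative numpy sketch: map a "large" Hamiltonian onto a smaller
# model space while retaining the effect of the excluded states.
import numpy as np

rng = np.random.default_rng(0)
n, m = 8, 3                      # full and truncated model-space dimensions
A = rng.standard_normal((n, n))
H = (A + A.T) / 2.0              # a random real symmetric "Hamiltonian"

evals, evecs = np.linalg.eigh(H) # ascending eigenvalues
U = evecs[:, :m]                 # lowest-m eigenvectors: an n x m isometry
H_eff = U.T @ H @ U              # effective Hamiltonian in the small space

# The truncated operator reproduces the lowest m eigenvalues exactly:
np.testing.assert_allclose(np.linalg.eigvalsh(H_eff), evals[:m], atol=1e-10)
print("effective spectrum matches lowest", m, "eigenvalues")
```

In the paper's setting the transformation is chosen so the truncated space is spanned by physical single-particle orbitals rather than exact eigenvectors, which is what makes the resulting Hamiltonian usable for a whole class of isotopes rather than one diagonalization.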
Update of the Polar SWIFT model for polar stratospheric ozone loss (Polar SWIFT version 2
Directory of Open Access Journals (Sweden)
I. Wohltmann
2017-07-01
Full Text Available The Polar SWIFT model is a fast scheme for calculating the chemistry of stratospheric ozone depletion in polar winter. It is intended for use in global climate models (GCMs) and Earth system models (ESMs) to enable the simulation of mutual interactions between the ozone layer and climate. To date, climate models often use prescribed ozone fields, since a full stratospheric chemistry scheme is computationally very expensive. Polar SWIFT is based on a set of coupled differential equations, which simulate the polar vortex-averaged mixing ratios of the key species involved in polar ozone depletion on a given vertical level. These species are O3, chemically active chlorine (ClOx), HCl, ClONO2 and HNO3. The only external input parameters that drive the model are the fraction of the polar vortex in sunlight and the fraction of the polar vortex below the temperatures necessary for the formation of polar stratospheric clouds. Here, we present an update of the Polar SWIFT model introducing several improvements over the original model formulation. In particular, the model is now trained on vortex-averaged reaction rates of the ATLAS Chemistry and Transport Model, which enables a detailed look at individual processes and an independent validation of the different parameterizations contained in the differential equations. The training of the original Polar SWIFT model was based on fitting complete model runs to satellite observations and did not allow for this. A revised formulation of the system of differential equations is developed, which closely fits vortex-averaged reaction rates from ATLAS that represent the main chemical processes influencing ozone. In addition, a parameterization for the HNO3 change by denitrification is included. The rates of change of the concentrations of the chemical species of the Polar SWIFT model are purely chemical rates of change in the new version, whereas in the original Polar SWIFT model, they included a transport effect
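The structure of such a vortex-averaged scheme, a handful of coupled ODEs driven only by the sunlit and PSC fractions, can be caricatured as follows. Every species, rate constant, and forcing below is an invented placeholder, not the fitted Polar SWIFT parameterization:

```python
# Toy Euler integration of two vortex-averaged "species": normalized ozone
# and activated chlorine, driven by sunlit and PSC fractions. All rates
# and forcings are invented placeholders for illustration only.
def step(o3, clox, sunlit_frac, psc_frac, dt):
    activation = 0.05 * psc_frac * (1.0 - clox)   # reservoir -> ClOx on PSCs
    loss = 0.10 * sunlit_frac * clox * o3         # sunlight-driven O3 loss
    return o3 - loss * dt, min(1.0, clox + activation * dt)

o3, clox = 1.0, 0.0                               # normalized mixing ratios
for day in range(90):
    sunlit = max(0.0, min(1.0, day / 60.0))       # sunlight returning
    psc = 1.0 if day < 45 else 0.0                # PSC season ends
    o3, clox = step(o3, clox, sunlit, psc, dt=1.0)
print(round(o3, 2), round(clox, 2))
```

The point of the caricature is the division of labor: a dark PSC phase activates chlorine, and the returning sunlight then drives ozone loss, with the two external fractions as the only forcings, just as in the abstract above.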
Directory of Open Access Journals (Sweden)
Rosa Ana Salas
2013-11-01
We propose a modeling procedure specifically designed for a ferrite inductor excited by a waveform in the time domain. We estimate the loss resistance in the core (a parameter of the electrical model of the inductor) by means of a 2D Finite Element Method, which leads to significant computational advantages over the 3D model. The methodology is validated for an RM (rectangular modulus) ferrite core working in the linear and saturation regions. Excellent agreement is found between the experimental data and the computational results.
Hughes, Lauren; Mackay, Don; Powell, David E; Kim, Jaeshin
2012-04-01
The EQuilibrium Criterion (EQC) model developed and published in 1996 has been widely used for screening level evaluations of the multimedia, fugacity-based environmental fate of organic chemicals for educational, industrial, and regulatory purposes. Advances in the science of chemical partitioning and reactivity and the need for more rigorous regulatory evaluations have resulted in a need to update the model. The New EQC model is described which includes an improved treatment of input partitioning and reactivity data, temperature dependence and an easier sensitivity and uncertainty analysis but uses the same multi-level approach, equations and environmental parameters as in the original version. A narrative output is also produced. The New EQC model, which uses a Microsoft Excel platform, is described and applied in detail to decamethylcyclopentasiloxane (D5; CAS No. 541-02-6). The implications of these results for the more detailed exposure and risk assessment of D5 are discussed. The need for rigorous evaluation and documentation of the input parameters is outlined.
Nestler, Steffen; Egloff, Boris; Küfner, Albrecht C P; Back, Mitja D
2012-10-01
The present article integrates research on the accurate inference of personality traits with process models of hindsight bias (the tendency to exaggerate in hindsight what one had said in foresight). Specifically, the article suggests a new model that integrates assumptions of the lens model on accurate personality judgments and accounts that view hindsight effects as a by-product of knowledge updating. We suggest 3 processes that have the potential to explain the occurrence of hindsight effects in personality judgments: (a) changes in an individual's cue perceptions, (b) changes in the utilization of more valid cues, and (c) changes in the consistency with which cue knowledge is applied. In 2 studies (N1 = 91, N2 = 93), participants were presented with target pictures and were asked to judge each target's levels of the Big Five. Thereafter, they received feedback and had to recall their original judgments. Results show that there were clear hindsight effects for all 5 personality dimensions. Importantly, we found evidence that both the utilization of more valid cues and changes in cue perceptions--but not changes in the consistency with which cue knowledge is applied--account for the hindsight effects. Implications of these results for models explaining hindsight effects, the inference of personality judgments, and the accuracy of these inferences are discussed.
Institute of Scientific and Technical Information of China (English)
ZHANG Ziyang; XIE Shousheng; HU Jinhai; MIAO Zhuoguang; WANG Lei
2012-01-01
To improve the assembly of the high-pressure spool, this article presents an assembly variation identification method achieved by response surface method (RSM)-based model updating using IV-optimal designs. The method involves screening out non-relevant assembly parameters using IV-optimal designs; the preload of the joints is chosen as the input feature and modal frequency is the only response feature. Emphasis is placed on the construction of response surface models including the interactions between the bolted joints, by which the non-linear relationship between the assembly variation caused by changes of preload and the output frequency variation is established. By an optimal search over the selected variables in the model, assembly variation can be identified. The proposed method is verified with a case study of laboratory bolted disks and gives sufficient accuracy in variation identification. It has been observed that first-order response surface models considering the interactions between the bolted joints based on the IV-optimal criterion are adequate for assembly purposes.
Improved meteorology from an updated WRF/CMAQ modeling system with MODIS vegetation and albedo
Realistic vegetation characteristics and phenology from the Moderate Resolution Imaging Spectroradiometer (MODIS) products improve the simulation for the meteorology and air quality modeling system WRF/CMAQ (Weather Research and Forecasting model and Community Multiscale Air Qual...
Renard, Benjamin; Vidal, Jean-Philippe
2016-04-01
In recent years, the climate modeling community has put a lot of effort into releasing the outputs of multimodel experiments for use by the wider scientific community. In such experiments, several structurally distinct GCMs are run using the same observed forcings (for the historical period) or the same projected forcings (for the future period). In addition, several members are produced for a single given model structure, by running each GCM with slightly different initial conditions. This multiplicity of GCM outputs offers many opportunities in terms of uncertainty quantification or GCM comparisons. In this presentation, we propose a new procedure to weight GCMs according to their ability to reproduce the observed climate. Such weights can be used to combine the outputs of several models in a way that rewards good-performing models and discards poorly-performing ones. The proposed procedure has the following main properties: 1. It is based on explicit probabilistic models describing the time series produced by the GCMs and the corresponding historical observations, 2. It can use several members whenever available, 3. It accounts for the uncertainty in observations, 4. It assigns a weight to each GCM (all weights summing up to one), 5. It can also assign a weight to the "H0 hypothesis" that all GCMs in the multimodel ensemble are not compatible with observations. The application of the weighting procedure is illustrated with several case studies including synthetic experiments, simple cases where the target GCM output is a simple univariate variable and more realistic cases where the target GCM output is a multivariate and/or a spatial variable. These case studies illustrate the generality of the procedure which can be applied in a wide range of situations, as long as the analyst is prepared to make an explicit probabilistic assumption on the target variable. Moreover, these case studies highlight several interesting properties of the weighting procedure. In
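The core of property 4 above, that each GCM receives a weight and all weights sum to one, can be sketched as a normalization of per-model fit scores. This is a minimal illustration, not the authors' full Bayesian procedure; the log-likelihood values are hypothetical stand-ins for a probabilistic model's assessment of each GCM against observations.

```python
import math

def gcm_weights(log_likelihoods):
    """Turn per-GCM log-likelihoods (model fit to the observed climate)
    into normalized weights that sum to one."""
    # Subtract the maximum before exponentiating, for numerical stability.
    m = max(log_likelihoods)
    raw = [math.exp(ll - m) for ll in log_likelihoods]
    total = sum(raw)
    return [r / total for r in raw]

# Three hypothetical GCMs; the better-fitting model receives the larger weight.
w = gcm_weights([-10.2, -11.5, -14.0])
print(w)
```

In the full procedure the likelihoods would come from explicit probabilistic models of the GCM time series and observations, and an extra weight could be reserved for the "H0 hypothesis" that no GCM is compatible with the observations.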
Penalized variable selection procedure for Cox models with semiparametric relative risk
Du, Pang; Liang, Hua; 10.1214/09-AOS780
2010-01-01
We study Cox models with semiparametric relative risk, which can be partially linear with one nonparametric component, or have multiple additive or nonadditive nonparametric components. A penalized partial likelihood procedure is proposed to simultaneously estimate the parameters and select variables for both the parametric and the nonparametric parts. Two penalties are applied sequentially. The first penalty, governing the smoothness of the multivariate nonlinear covariate effect function, provides a smoothing spline ANOVA framework that is exploited to derive an empirical model selection tool for the nonparametric part. The second penalty, either the smoothly-clipped-absolute-deviation (SCAD) penalty or the adaptive LASSO penalty, achieves variable selection in the parametric part. We show that the resulting estimator of the parametric part possesses the oracle property, and that the estimator of the nonparametric part achieves the optimal rate of convergence. The proposed procedures are shown to work well i...
A Robbins-Monro procedure for a class of models of deformation
Fraysse, Philippe
2012-01-01
The paper deals with the statistical analysis of several data sets associated with shape invariant models with different translation, height and scaling parameters. We propose to estimate these parameters together with the common shape function. Our approach extends the recent work of Bercu and Fraysse to multivariate shape invariant models. We propose a very efficient Robbins-Monro procedure for the estimation of the translation parameters and we use these estimates in order to evaluate scale parameters. The main pattern is estimated by a weighted Nadaraya-Watson estimator. We provide almost sure convergence and asymptotic normality for all estimators. Finally, we illustrate the convergence of our estimation procedure on simulated data as well as on real ECG data.
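The Robbins-Monro scheme at the heart of this estimation procedure can be sketched in its generic form: iterate on the parameter with a diminishing step size applied to a noisy observation of the quantity to be driven to zero. The toy target below (estimating a mean) is an illustrative assumption, not the paper's shape-invariant-model setting.

```python
import random

def robbins_monro(noisy_grad, theta0, n_iter=5000, c=1.0):
    """Generic Robbins-Monro stochastic approximation:
    theta_{n+1} = theta_n - gamma_n * noisy_grad(theta_n),
    with step sizes gamma_n = c / n satisfying the usual conditions
    (sum gamma_n = infinity, sum gamma_n^2 < infinity)."""
    theta = theta0
    for n in range(1, n_iter + 1):
        theta -= (c / n) * noisy_grad(theta)
    return theta

# Toy problem: find the root of E[theta - X] = theta - mu, i.e. estimate
# the mean mu of noisy observations X ~ N(mu, 1).
random.seed(0)
mu = 3.0
est = robbins_monro(lambda t: t - (mu + random.gauss(0, 1)), theta0=0.0)
print(est)
```

With c = 1 this recursion reduces exactly to the running sample mean, which makes the almost sure convergence easy to see in this special case.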
Directory of Open Access Journals (Sweden)
J. Mailier
2010-09-01
The purpose of this paper is to report on the development of a procedure for inferring black-box, yet biologically interpretable, dynamic models of bioprocesses based on sets of measurements of a few external components (biomass, substrates, and products of interest). The procedure has three main steps: (a) the determination of the number of macroscopic biological reactions linking the measured components; (b) the estimation of a first reaction scheme, which has interesting mathematical properties but might lack a biological interpretation; and (c) the "projection" (or transformation) of this reaction scheme onto a biologically consistent scheme. The advantage of the method is that it allows the fast prototyping of models for the culture of microorganisms that are not well documented. The good performance of the third step of the method is demonstrated by application to an example of microalgal culture.
Single-cluster-update Monte Carlo method for the random anisotropy model
Rößler, U. K.
1999-06-01
A Wolff-type cluster Monte Carlo algorithm for random magnetic models is presented. The algorithm is demonstrated to reduce significantly the critical slowing down for planar random anisotropy models with weak anisotropy strength. Dynamic exponents z of the cluster algorithm are estimated for models with a ratio of anisotropy to exchange constant D/J = 1.0 on cubic lattices in three dimensions. For these models, critical exponents are derived from a finite-size scaling analysis.
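The Wolff single-cluster update that this algorithm builds on can be sketched for the simplest case, the ferromagnetic Ising model: grow a cluster of aligned spins by adding each aligned neighbor with bond probability p = 1 - exp(-2*beta*J), then flip the whole cluster. This is the textbook Ising version, not the random-anisotropy variant of the paper; lattice size and temperature below are arbitrary.

```python
import math
import random

def wolff_update(spins, L, beta):
    """One Wolff single-cluster update on an L x L Ising lattice with
    periodic boundaries (J = 1). Returns the size of the flipped cluster."""
    p_add = 1.0 - math.exp(-2.0 * beta)
    seed = (random.randrange(L), random.randrange(L))
    s0 = spins[seed]
    cluster = {seed}
    frontier = [seed]
    while frontier:
        x, y = frontier.pop()
        # Visit the four nearest neighbors (periodic wrap-around).
        for site in (((x + 1) % L, y), ((x - 1) % L, y),
                     (x, (y + 1) % L), (x, (y - 1) % L)):
            if site not in cluster and spins[site] == s0 \
                    and random.random() < p_add:
                cluster.add(site)
                frontier.append(site)
    for site in cluster:
        spins[site] = -s0          # flip the entire cluster at once
    return len(cluster)

random.seed(1)
L = 8
spins = {(x, y): 1 for x in range(L) for y in range(L)}
size = wolff_update(spins, L, beta=0.6)
print(size)
```

Because entire correlated clusters are flipped in one move, successive configurations decorrelate far faster near criticality than under single-spin Metropolis updates, which is the slowing-down reduction the abstract quantifies via the dynamic exponent z.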
An updated MILES stellar library and stellar population models (Research Note)
Falcon-Barroso, J.; Sanchez-Blazquez, P.; Vazdekis, A.; Ricciardelli, E.; Cardiel, N.; Cenarro, A. J.; Gorgas, J.; Peletier, R. F.
Aims: We present a number of improvements to the MILES library and stellar population models. We correct some small errors in the radial velocities of the stars, measure the spectral resolution of the library and models more accurately, and give a better absolute flux calibration of the models.
An Updated Geophysical Model for AMSR-E and SSMIS Brightness Temperature Simulations over Oceans
Directory of Open Access Journals (Sweden)
Elizaveta Zabolotskikh
2014-03-01
In this study, we considered the geophysical model for microwave brightness temperature (BT) simulation for the Atmosphere-Ocean System under non-precipitating conditions. The model is presented as a combination of atmospheric absorption and ocean emission models. We validated this model for two satellite instruments—for Advanced Microwave Sounding Radiometer-Earth Observing System (AMSR-E) onboard the Aqua satellite and for Special Sensor Microwave Imager/Sounder (SSMIS) onboard the F16 satellite of the Defense Meteorological Satellite Program (DMSP) series. We compared simulated BT values with satellite BT measurements for different combinations of various water vapor and oxygen absorption models and wind-induced ocean emission models. A dataset of clear-sky atmospheric and oceanic parameters, collocated in time and space with satellite measurements, was used for the comparison. We found the best model combination, providing the least root mean square error between calculations and measurements. A single combination of models ensured the best results for all considered radiometric channels. We also obtained the adjustments to simulated BT values, as averaged differences between the model simulations and satellite measurements. These adjustments can be used in any research based on modeling data for removing model/calibration inconsistencies. We demonstrated the application of the model by means of the development of the new algorithm for sea surface wind speed retrieval from AMSR-E data.
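The model-selection step described above, scoring every pairing of an absorption model with an emission model against collocated measurements and keeping the pair with least RMSE, can be sketched as an exhaustive search. The model names and brightness-temperature values below are hypothetical placeholders, not the combinations or data evaluated in the study.

```python
import math
from itertools import product

def rmse(sim, obs):
    """Root mean square error between simulated and observed values."""
    return math.sqrt(sum((s - o) ** 2 for s, o in zip(sim, obs)) / len(sim))

def best_combination(simulated, absorption_models, emission_models, obs):
    """Score every (absorption, emission) model pair against collocated
    measurements and return (error, absorption, emission) with least RMSE."""
    best = None
    for a, e in product(absorption_models, emission_models):
        err = rmse(simulated[(a, e)], obs)
        if best is None or err < best[0]:
            best = (err, a, e)
    return best

# Hypothetical BT measurements (K) and simulations per model combination.
obs = [150.0, 180.0, 210.0]
sims = {
    ("absorp_A", "emis_1"): [151.0, 181.0, 209.0],
    ("absorp_A", "emis_2"): [155.0, 175.0, 220.0],
    ("absorp_B", "emis_1"): [140.0, 190.0, 200.0],
    ("absorp_B", "emis_2"): [150.5, 180.5, 210.5],
}
err, a, e = best_combination(sims, ["absorp_A", "absorp_B"],
                             ["emis_1", "emis_2"], obs)
print(a, e, err)
```

The per-channel mean differences between the winning simulation and the measurements would then serve as the calibration adjustments the abstract describes.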
ETM documentation update – including modelling conventions and manual for software tools
DEFF Research Database (Denmark)
Grohnheit, Poul Erik
, it summarises the work done during 2013, and it also contains presentations for promotion of fusion as a future element in the electricity generation mix and presentations for the modelling community concerning model development and model documentation – in particular for TIAM collaboration workshops....
A New Cluster Updating for 2-D SU(2) × SU(2) Chiral Model
Zhang, Jianbo; Ji, Daren
1993-09-01
We propose a variant version of Wolff's cluster algorithm, which may be extended to SU(N) × SU(N) chiral model, and test it in 2-dimensional SU(2) × SU(2) chiral model. The results show that the new method can efficiently reduce the critical slowing down in SU(2) × SU(2) chiral model.
An updated 18S rRNA phylogeny of tunicates based on mixture and secondary structure models
Directory of Open Access Journals (Sweden)
Shenkar Noa
2009-08-01
Background: Tunicates have been recently revealed to be the closest living relatives of vertebrates. Yet, with more than 2500 described species, details of their evolutionary history are still obscure. From a molecular point of view, tunicate phylogenetic relationships have been mostly studied based on analyses of 18S rRNA sequences, which indicate several major clades at odds with the traditional class-level arrangements. Nonetheless, substantial uncertainty remains about the phylogenetic relationships and taxonomic status of key groups such as the Aplousobranchia, Appendicularia, and Thaliacea. Results: Thirty new complete 18S rRNA sequences were acquired from previously unsampled tunicate species, with special focus on groups presenting high evolutionary rate. The updated 18S rRNA dataset has been aligned with respect to the constraint on homology imposed by the rRNA secondary structure. A probabilistic framework of phylogenetic reconstruction was adopted to accommodate the particular evolutionary dynamics of this ribosomal marker. Detailed Bayesian analyses were conducted under the non-parametric CAT mixture model accounting for site-specific heterogeneity of the evolutionary process, and under RNA-specific doublet models accommodating the occurrence of compensatory substitutions in stem regions. Our results support the division of tunicates into three major clades: (1) Phlebobranchia + Thaliacea + Aplousobranchia, (2) Appendicularia, and (3) Stolidobranchia, but the position of Appendicularia could not be firmly resolved. Our study additionally reveals that most Aplousobranchia evolve at extremely high rates involving changes in secondary structure of their 18S rRNA, with the exception of the family Clavelinidae, which appears to be slowly evolving. This extreme rate heterogeneity precluded resolving with certainty the exact phylogenetic placement of Aplousobranchia. Finally, the best fitting secondary-structure and CAT-mixture models
Nakstad, Espen Rostrup; Opdahl, Helge; Heyerdahl, Fridtjof; Borchsenius, Fredrik; Skjønsberg, Ole Henning
2017-01-01
Introduction Removal of pulmonary secretions in mechanically ventilated patients usually requires suction with closed catheter systems or flexible bronchoscopes. Manual ventilation is occasionally performed during such procedures if clinicians suspect inadequate ventilation. Suctioning can also be performed with the ventilator entirely disconnected from the endotracheal tube (ETT). The aim of this study was to investigate if these two procedures generate negative airway pressures, which may contribute to atelectasis. Methods The effects of device insertion and suctioning in ETTs were examined in a mechanical lung model with a pressure transducer inserted distal to ETTs of 9 mm, 8 mm and 7 mm internal diameter (ID). A 16 Fr bronchoscope and 12, 14 and 16 Fr suction catheters were used at two different vacuum levels during manual ventilation and with the ETTs disconnected. Results During manual ventilation with ETTs of 9 mm, 8 mm and 7 mm ID, and bronchoscopic suctioning at moderate suction level, peak pressure (PPEAK) dropped from 23, 22 and 24.5 cm H2O to 16, 16 and 15 cm H2O, respectively. Maximum suction reduced PPEAK to 20, 17 and 11 cm H2O, respectively, and the end-expiratory pressure fell from 5, 5.5 and 4.5 cm H2O to –2, –6 and –17 cm H2O. Suctioning through disconnected ETTs (open suction procedure) gave negative model airway pressures throughout the duration of the procedures. Conclusions Manual ventilation and open suction procedures induce negative end-expiratory pressure during endotracheal suctioning, which may have clinical implications in patients who need high PEEP (positive end-expiratory pressure). PMID:28725445
An empirical approach to update multivariate regression models intended for routine industrial use
Energy Technology Data Exchange (ETDEWEB)
Garcia-Mencia, M.V.; Andrade, J.M.; Lopez-Mahia, P.; Prada, D. [University of La Coruna, La Coruna (Spain). Dept. of Analytical Chemistry
2000-11-01
Many problems currently tackled by analysts are highly complex and, accordingly, multivariate regression models need to be developed. Two intertwined topics are important when such models are to be applied within industrial routines: (1) Does the model account for the 'natural' variance of the production samples? (2) Is the model stable over time? This paper focuses on the second topic and presents an empirical approach in which predictive models developed using Mid-FTIR with PLS and PCR held their utility for about nine months when used to predict the octane number of platforming naphthas in a petrochemical refinery. 41 refs., 10 figs., 1 tab.
Geostatistical Procedures for Developing Three-Dimensional Aquifer Models from Drillers' Logs
Bohling, G.; Helm, C.
2013-12-01
The Hydrostratigraphic Drilling Record Assessment (HyDRA) project is developing procedures for employing the vast but highly qualitative hydrostratigraphic information contained in drillers' logs in the development of quantitative three-dimensional (3D) depictions of subsurface properties for use in flow and transport models to support groundwater management practices. One of the project's objectives is to develop protocols for 3D interpolation of lithological data from drillers' logs, properly accounting for the categorical nature of these data. This poster describes the geostatistical procedures developed to accomplish this objective. Using a translation table currently containing over 62,000 unique sediment descriptions encountered during the transcription of over 15,000 logs in the Kansas High Plains aquifer, the sediment descriptions are translated into 71 standardized terms, which are then mapped into a small number of categories associated with different representative property (e.g., hydraulic conductivity [K]) values. Each log is partitioned into regular intervals and the proportion of each K category within each interval is computed. To properly account for their compositional nature, a logratio transform is applied to the proportions. The transformed values are then kriged to the 3D model grid and backtransformed to determine the proportion of each category within each model cell. Various summary measures can then be computed from the proportions, including a proportion-weighted average K and an entropy measure representing the degree of mixing of categories within each cell. We also describe a related cross-validation procedure for assessing log quality.
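The logratio treatment of the category proportions can be sketched with the additive logratio (alr) transform and its inverse: interval proportions are mapped to unconstrained logratios (which can be kriged like ordinary variables), and the kriged values are back-transformed to proportions that again sum to one. This is a generic compositional-data sketch under the assumption of an alr-style transform; the small `eps` guard for zero proportions is an illustrative detail, not necessarily the project's exact treatment.

```python
import math

def alr(proportions, eps=1e-6):
    """Additive logratio transform of a composition (proportions summing
    to 1), using the last category as the reference; eps guards zeros."""
    p = [max(x, eps) for x in proportions]
    ref = p[-1]
    return [math.log(x / ref) for x in p[:-1]]

def alr_inverse(z):
    """Back-transform (kriged) logratios to proportions summing to one."""
    expz = [math.exp(v) for v in z] + [1.0]   # reference category -> 1
    total = sum(expz)
    return [v / total for v in expz]

# Proportions of three hypothetical K categories within one log interval.
p = [0.6, 0.3, 0.1]
z = alr(p)                 # unconstrained values suitable for kriging
back = alr_inverse(z)      # recovered composition
print(back)
```

Kriging the transformed values rather than the raw proportions avoids predictions that fall outside [0, 1] or fail to sum to one within a model cell.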
Gotelli, Nicholas J.; Dorazio, Robert M.; Ellison, Aaron M.; Grossman, Gary D.
2010-01-01
Quantifying patterns of temporal trends in species assemblages is an important analytical challenge in community ecology. We describe methods of analysis that can be applied to a matrix of counts of individuals that is organized by species (rows) and time-ordered sampling periods (columns). We first developed a bootstrapping procedure to test the null hypothesis of random sampling from a stationary species abundance distribution with temporally varying sampling probabilities. This procedure can be modified to account for undetected species. We next developed a hierarchical model to estimate species-specific trends in abundance while accounting for species-specific probabilities of detection. We analysed two long-term datasets on stream fishes and grassland insects to demonstrate these methods. For both assemblages, the bootstrap test indicated that temporal trends in abundance were more heterogeneous than expected under the null model. We used the hierarchical model to estimate trends in abundance and identified sets of species in each assemblage that were steadily increasing, decreasing or remaining constant in abundance over more than a decade of standardized annual surveys. Our methods of analysis are broadly applicable to other ecological datasets, and they represent an advance over most existing procedures, which do not incorporate effects of incomplete sampling and imperfect detection.
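The null model behind the bootstrap test, random sampling from one stationary species-abundance distribution with temporally varying sampling probabilities, can be sketched by resampling each period's individuals from the pooled assemblage while preserving the observed per-period totals. The counts matrix below is a hypothetical toy example, and this basic version ignores the undetected-species and imperfect-detection extensions the abstract mentions.

```python
import random

def bootstrap_null(counts, n_boot=200, seed=0):
    """Simulate a species x periods count matrix under the null model:
    each period's individuals are drawn at random from the pooled
    (stationary) species-abundance distribution, keeping the observed
    per-period totals as the varying sampling effort."""
    rng = random.Random(seed)
    n_species = len(counts)
    pooled = [sum(row) for row in counts]               # per-species totals
    individuals = [s for s, tot in enumerate(pooled) for _ in range(tot)]
    period_totals = [sum(col) for col in zip(*counts)]  # per-period totals
    sims = []
    for _ in range(n_boot):
        sim = [[0] * len(period_totals) for _ in range(n_species)]
        for t, total in enumerate(period_totals):
            for s in (rng.choice(individuals) for _ in range(total)):
                sim[s][t] += 1
        sims.append(sim)
    return sims

counts = [[10, 2, 0], [1, 8, 3], [0, 2, 9]]   # hypothetical species x periods
sims = bootstrap_null(counts, n_boot=10)
print(len(sims))
```

A test statistic measuring among-period heterogeneity, computed on the observed matrix and on each simulated matrix, then yields a bootstrap p-value for the null hypothesis of no real temporal trend.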
TWO-PROCEDURE OF MODEL RELIABILITY-BASED OPTIMIZATION FOR WATER DISTRIBUTION SYSTEMS
Institute of Scientific and Technical Information of China (English)
[No author listed]
2000-01-01
Recently, considerable emphasis has been placed on reliability-based optimization models for water distribution systems. However, considerable computational effort is needed to determine the reliability-based optimal design of large networks, or even of mid-sized networks. In this paper, a new methodology is presented for the reliability analysis of water distribution systems. This methodology consists of two procedures. In the first, the optimal design is constrained only by the pressure heads at demand nodes and is carried out in GRG2. Because the reliability constraints are removed from the optimization problem, a large number of simulations need not be conducted, so the computing time is greatly decreased. The second procedure is a linear optimal search, in which the optimal results obtained by GRG2 are adjusted to satisfy the reliability constraints. The results are a group of commercial pipe diameters for which the constraints on pressure heads and reliability at nodes are satisfied. Therefore, the computational burden is significantly decreased, and the reliability-based optimization is of more practical use.
Directory of Open Access Journals (Sweden)
Matthew Pelowski
2016-04-01
The last decade has witnessed a renaissance of empirical and psychological approaches to art study, especially regarding cognitive models of art processing experience. This new emphasis on modeling has often become the basis for our theoretical understanding of human interaction with art. Models also often define areas of focus and hypotheses for new empirical research, and are increasingly important for connecting psychological theory to discussions of the brain. However, models are often made by different researchers, with quite different emphases or visual styles. Inputs and psychological outcomes may be differently considered, or can be under-reported with regards to key functional components. Thus, we may lose the major theoretical improvements and ability for comparison that can be had with models. To begin addressing this, this paper presents a theoretical assessment, comparison, and new articulation of a selection of key contemporary cognitive or information-processing-based approaches detailing the mechanisms underlying the viewing of art. We review six major models in contemporary psychological aesthetics. We in turn present redesigns of these models using a unified visual form, in some cases making additions or creating new models where none had previously existed. We also frame these approaches in respect to their targeted outputs (e.g., emotion, appraisal, physiological reaction) and their strengths within a more general framework of early, intermediate and later processing stages. This is used as a basis for general comparison and discussion of implications and future directions for modeling, and for theoretically understanding our engagement with visual art.
An Updated Coupled Model for Land-Atmosphere Interaction. Part Ⅰ: Simulations of Physical Processes
Institute of Scientific and Technical Information of China (English)
ZENG Hongling; WANG Zaizhi; JI Jinjun; WU Guoxiong
2008-01-01
A new two-way land-atmosphere interaction model (R42_AVIM) is realized by coupling the spectral atmospheric model (SAMIL_R42L9) developed at the State Key Laboratory of Numerical Modeling for Atmospheric Sciences and Geophysical Fluid Dynamics, Institute of Atmospheric Physics, Chinese Academy of Sciences (LASG/IAP/CAS) with the land surface model, Atmosphere-Vegetation-Interaction-Model (AVIM). In this coupled model, the physical and biological components of AVIM are both included. The climate base state and land surface physical fluxes simulated by R42_AVIM are analyzed and compared with the results of R42_SSIB [which couples SAMIL_R42L9 with the Simplified Simple Biosphere (SSIB) model]. The results show the performance of the new model is closer to the observations. It can basically guarantee that the land surface energy budget is balanced, and can simulate June-July-August (JJA) and December-January-February (DJF) land surface air temperature, sensible heat flux, latent heat flux, precipitation, sea level pressure and other variables reasonably well. Compared with R42_SSIB, there are obvious improvements in the JJA simulations of surface air temperature and surface fluxes. Thus, this land-atmosphere coupled model will offer a good experiment platform for land-atmosphere interaction research.
An updated prediction model of the global risk of cardiovascular disease in HIV-positive persons
DEFF Research Database (Denmark)
Friis-Møller, Nina; Nielsen, Lene Ryom; Smith, Colette;
2016-01-01
status, family history of CVD, diabetes, total cholesterol, high-density lipoprotein, CD4 lymphocyte count, cumulative exposure to protease- and nucleoside reverse transcriptase-inhibitors, and current use of abacavir. A reduced model omitted antiretroviral therapies. The D:A:D models statistically...
The updated geodetic mean dynamic topography model – DTU15MDT
DEFF Research Database (Denmark)
Knudsen, Per; Andersen, Ole Baltazar; Maximenko, Nikolai
and provides a better resolution. The better resolution fixes a few problems related to geoid signals in the former model DTU13MDT. Slicing in the GOCO05S gravity model up to harmonic degree 150 has solved some issues related to striations. Compared to the DTU13MSS, the DTU15MSS has been derived by including...
Update on Small Modular Reactors Dynamics System Modeling Tool -- Molten Salt Cooled Architecture
Energy Technology Data Exchange (ETDEWEB)
Hale, Richard Edward [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Cetiner, Sacit M. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Fugate, David L. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Qualls, A L. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Borum, Robert C. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Chaleff, Ethan S. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Rogerson, Doug W. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Batteh, John J. [Modelon Corporation (Sweden); Tiller, Michael M. [Xogeny Corporation, Canton, MI (United States)
2014-08-01
The Small Modular Reactor (SMR) Dynamic System Modeling Tool project is in the third year of development. The project is designed to support collaborative modeling and study of various advanced SMR (non-light water cooled) concepts, including the use of multiple coupled reactors at a single site. The objective of the project is to provide a common simulation environment and baseline modeling resources to facilitate rapid development of dynamic advanced reactor SMR models, ensure consistency among research products within the Instrumentation, Controls, and Human-Machine Interface (ICHMI) technical area, and leverage cross-cutting capabilities while minimizing duplication of effort. The combined simulation environment and suite of models are identified as the Modular Dynamic SIMulation (MoDSIM) tool. The critical elements of this effort include (1) defining a standardized, common simulation environment that can be applied throughout the program, (2) developing a library of baseline component modules that can be assembled into full plant models using existing geometry and thermal-hydraulic data, (3) defining modeling conventions for interconnecting component models, and (4) establishing user interfaces and support tools to facilitate simulation development (i.e., configuration and parameterization), execution, and results display and capture.
Institute of Scientific and Technical Information of China (English)
ZENG Jianchao; Hidehiko Sanada; et al.
1995-01-01
A support system for form-correction of Chinese characters is developed based upon the generation model SAM, and its feasibility is evaluated. SAM is an excellent model for generating Chinese characters, but it is difficult to determine appropriate parameters because calligraphic knowledge is required. Noticing that the calligraphic knowledge of calligraphists is embodied in their corrective actions, we adopt a strategy of acquiring calligraphic knowledge by monitoring, recording and analyzing the corrective actions of calligraphists, and try to realize an environment under which calligraphists can easily make corrections to character forms and which can record their corrective actions without interfering with them. In this paper, we first construct a model of the correcting procedures of calligraphists, composed of typical correcting procedures acquired by extensively observing their corrective actions and interviewing them, and develop a form-correcting system for brush-written Chinese characters using this model. Secondly, through actual correcting experiments, we demonstrate that parameters within SAM can be easily corrected at the level of character patterns by our system, and show that it is effective and easy for calligraphists to use, by evaluating the effectiveness of the correcting model, the sufficiency of its functions, and the execution speed.
Guidi, G; Beraldin, J A; Ciofi, S; Atzeni, C
2003-01-01
The generation of three-dimensional (3-D) digital models produced by optical technologies in some cases involves metric errors. This happens when small high-resolution 3-D images are assembled together in order to model a large object. In some applications, as for example 3-D modeling of Cultural Heritage, the problem of metric accuracy is a major issue and no methods are currently available for enhancing it. The authors present a procedure by which the metric reliability of the 3-D model, obtained through iterative alignments of many range maps, can be guaranteed to a known acceptable level. The goal is the integration of the 3-D range camera system with a close range digital photogrammetry technique. The basic idea is to generate a global coordinate system determined by the digital photogrammetric procedure, measuring the spatial coordinates of optical targets placed around the object to be modeled. Such coordinates, set as reference points, allow the proper rigid motion of few key range maps, including a portion of the targets, in the global reference system defined by photogrammetry. The other 3-D images are normally aligned around these locked images with usual iterative algorithms. Experimental results on an anthropomorphic test object, comparing the conventional and the proposed alignment method, are finally reported.
Steil, Garry M; Hipszer, Brian; Reifman, Jaques
2010-05-01
One year after its initial meeting, the Glycemia Modeling Working Group reconvened during the 2009 Diabetes Technology Meeting in San Francisco, CA. The discussion, involving 39 scientists, again focused on the need for individual investigators to have access to the clinical data required to develop and refine models of glucose metabolism, the need to understand the differences among the distinct models and control algorithms, and the significance of day-to-day subject variability. The key conclusion was that model-based comparisons of different control algorithms, or the models themselves, are limited by the inability to access individual model-patient parameters. It was widely agreed that these parameters, as opposed to the average parameters that are typically reported, are necessary to perform such comparisons. However, the prevailing view was that, if investigators were to make the parameters available, it would limit their ability (and that of their institution) to benefit from the invested work in developing their models. A general agreement was reached regarding the importance of each model having an insulin pharmacokinetic/pharmacodynamic profile that is not different from profiles reported in the literature (88% of the respondents agreed that the model should have similar curves or be analyzed separately) and the importance of capturing intraday variance in insulin sensitivity (91% of the respondents indicated that this could result in changes in fasting glucose of ≥15%, with 52% of the respondents believing that the variability could effect changes of ≥30%). Seventy-six percent of the participants indicated that high-fat meals were thought to effect changes in other model parameters in addition to gastric emptying. There was also widespread consensus as to how a closed-loop controller should respond to day-to-day changes in model parameters (with 76% of the participants indicating that fasting glucose should be within 15% of target, with 30% of the
Ciuchini, Marco; Mishima, Satoshi; Pierini, Maurizio; Reina, Laura; Silvestrini, Luca
2014-01-01
We present updated global fits of the Standard Model and beyond to electroweak precision data, taking into account recent progress in theoretical calculations and experimental measurements. From the fits, we derive model-independent constraints on new physics by introducing oblique and epsilon parameters, and modified $Zb\bar{b}$ and $HVV$ couplings. Furthermore, we also perform fits of the scale factors of the Higgs-boson couplings to observed signal strengths of the Higgs boson.
Update: Advancement of Contact Dynamics Modeling for Human Spaceflight Simulation Applications
Brain, Thomas A.; Kovel, Erik B.; MacLean, John R.; Quiocho, Leslie J.
2017-01-01
Pong is a new software tool developed at the NASA Johnson Space Center that advances interference-based geometric contact dynamics based on 3D graphics models. The Pong software consists of three parts: a set of scripts to extract geometric data from 3D graphics models, a contact dynamics engine that provides collision detection and force calculations based on the extracted geometric data, and a set of scripts for visualizing the dynamics response with the 3D graphics models. The contact dynamics engine can be linked with an external multibody dynamics engine to provide an integrated multibody contact dynamics simulation. This paper provides a detailed overview of Pong including the overall approach and modeling capabilities, which encompasses force generation from contact primitives and friction to computational performance. Two specific Pong-based examples of International Space Station applications are discussed, and the related verification and validation using this new tool are also addressed.
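The force-generation step from contact primitives can be illustrated with the simplest case, two spheres and a penalty spring-damper along the contact normal. This is a generic sketch of primitive-based contact forces; the gains and function names are hypothetical and do not come from Pong itself:

```python
import math

def sphere_contact_force(c1, r1, c2, r2, k=5.0e4, c_damp=50.0, v_rel_n=0.0):
    """Penalty-based normal contact force on sphere 1 from sphere 2.

    If the spheres interpenetrate, push them apart with a spring-damper
    along the line of centers; otherwise return zero force. k, c_damp,
    and v_rel_n (normal relative velocity) are illustrative tuning
    quantities, not values used by any real contact engine."""
    d = math.dist(c1, c2)
    depth = (r1 + r2) - d              # positive when interpenetrating
    if depth <= 0.0 or d == 0.0:
        return (0.0, 0.0, 0.0)         # no contact (or coincident centers)
    n = tuple((a - b) / d for a, b in zip(c1, c2))  # unit normal, 2 -> 1
    mag = k * depth - c_damp * v_rel_n              # spring minus damping
    mag = max(mag, 0.0)                             # contact cannot pull
    return tuple(mag * ni for ni in n)
```

In an integrated simulation, forces like this are evaluated by the contact engine each step and handed to the external multibody dynamics engine as applied loads.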
A collective opinion formation model under Bayesian updating and confirmation bias
Nishi, Ryosuke
2013-01-01
We propose a collective opinion formation model with a so-called confirmation bias. The confirmation bias is a psychological effect with which, in the context of opinion formation, an individual in favor of an opinion is prone to misperceive new incoming information as supporting the current belief of the individual. Our model modifies a Bayesian decision-making model for single individuals (Rabin and Schrag, Q. J. Econ. 114, 37 (1999)) to the case of a well-mixed population of interacting individuals in the absence of the external input. We numerically simulate the model to show that all the agents eventually agree on one of the two opinions only when the confirmation bias is weak. Otherwise, the stochastic population dynamics ends up creating a disagreement configuration (also called polarization), particularly for large system sizes. A strong confirmation bias allows various final disagreement configurations with different fractions of the individuals in favor of the opposite opinions.
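A single-agent version of the underlying updating rule can be sketched directly; the parameter values below are illustrative, and the paper's model further couples many such agents in a well-mixed population:

```python
import math
import random

def biased_log_odds(p=0.7, q=0.4, n_signals=500, seed=1):
    """Single-agent Bayesian updating with confirmation bias, in the
    spirit of Rabin and Schrag (1999). Parameter values are illustrative.

    The true state is 'A'; each signal matches it with probability p.
    With probability q, a signal contradicting the agent's currently
    favored opinion is misperceived as confirming it, and the agent
    then updates as if every perceived signal were accurate."""
    rng = random.Random(seed)
    step = math.log(p / (1.0 - p))   # log-likelihood ratio of one signal
    log_odds = 0.0                   # log P(A)/P(B); 0 = no preference
    for _ in range(n_signals):
        signal = 'A' if rng.random() < p else 'B'
        favored = 'A' if log_odds >= 0 else 'B'
        if signal != favored and rng.random() < q:
            signal = favored         # biased misperception
        log_odds += step if signal == 'A' else -step
    return log_odds
```

Once the agent favors 'B', the perceived fraction of 'B' signals is (1-p) + p*q, which exceeds 1/2 for sufficiently strong bias (e.g. p=0.7, q=0.4 gives 0.58), so an early unlucky run can lock in the wrong opinion: the single-agent analogue of the polarization seen in the population model.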
Stalford, Catherine B
2004-04-01
Recent epidemiological research places the incidence of obstructive sleep apnea as high as 16% in the general population. Serious postoperative respiratory complications and death have been reported in this population. Anesthetic drugs contribute to these complications secondary to acute and residual influences on the complex orchestration of airway muscles and reflexes involved in airway patency. The Starling resistor model is a theoretical model that has application in explaining upper airway dynamics and the treatment and management of obstructive sleep apnea. The model postulates the oropharynx as a collapsible tube. The oropharynx remains open or partially or completely closed as a result of pressure upstream at the nose and mouth, pressure downstream at the trachea and below, or tissue pressure surrounding the oropharynx. This AANA Journal course provides an overview of the Starling resistor model, its application to obstructive sleep apnea, and preoperative and postoperative anesthetic considerations.
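The three pressure regimes of the Starling resistor can be written down directly. This is an illustrative sketch of the textbook collapsible-tube model with a hypothetical linear resistance r; it is not drawn from the AANA Journal course itself:

```python
def starling_flow(p_up, p_down, p_tissue, r=1.0):
    """Flow through a collapsible tube (Starling resistor) as an
    idealized oropharynx. Pressure units and the resistance r are
    illustrative. Three regimes:
      - tissue pressure at or above upstream pressure: the tube
        collapses completely and flow stops;
      - tissue pressure between downstream and upstream pressure:
        flow limitation; flow depends on upstream minus tissue
        pressure and is independent of downstream pressure;
      - tissue pressure at or below downstream pressure: the tube
        stays open and flow is driven by the full pressure drop."""
    if p_tissue >= p_up:
        return 0.0
    if p_tissue > p_down:
        return (p_up - p_tissue) / r
    return (p_up - p_down) / r
```

The middle regime is the clinically interesting one: lowering downstream (tracheal) pressure by increased inspiratory effort does not increase flow, which is why obstruction can persist despite greater effort.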
Update on single-screw expander geometry model integrated into an open-source simulation tool
Ziviani, D.; Bell, I. H.; De Paepe, M.; van den Broek, M.
2015-08-01
In this paper, a mechanistic steady-state model of a single-screw expander is described with emphasis on the geometric description. Insights into the calculation of the main parameters and the definition of the groove profile are provided. Additionally, the adopted chamber model is discussed. The model has been implemented by means of the open-source software PDSim (Positive Displacement SIMulation), written in the Python language, and the solution algorithm is described. The single-screw expander model is validated with a set of steady-state measurement points collected from an 11 kWe organic Rankine cycle test-rig with SES36 and R245fa as working fluids. The overall performance and behavior of the expander are also further analyzed.
Quantifying Update Effects in Citizen-Oriented Software
Directory of Open Access Journals (Sweden)
Ion Ivan
2009-02-01
Defining citizen-oriented software. Detailing technical issues regarding the update process in this kind of software. Presenting different effects triggered by types of update. Building a model for update cost estimation, including producer-side and consumer-side effects. Analyzing model applicability on INVMAT – large-scale matrix inversion software. Proposing a model for update effects estimation. Specifying ways of softening the effects of inaccurate updates.
Baseline groundwater model update for P-Area Groundwater Operable Unit, NBN
Energy Technology Data Exchange (ETDEWEB)
Ross, J. [Savannah River Site (SRS), Aiken, SC (United States); Amidon, M. [Savannah River Site (SRS), Aiken, SC (United States)
2015-09-01
This report documents the development of a numerical groundwater flow and transport model of the hydrogeologic system of the P-Area Reactor Groundwater Operable Unit at the Savannah River Site (SRS) (Figure 1-1). The P-Area model provides a tool to aid in understanding the hydrologic and geochemical processes that control the development and migration of the current tritium, tetrachloroethene (PCE), and trichloroethene (TCE) plumes in this region.
DeLorme, D.; Lea, K.; Hagen, S. C.
2016-12-01
As coastal Louisiana evolves morphologically, ecologically, and through engineering advancements, there is a crucial need to continually adjust real-time forecasting and coastal restoration planning models. This presentation discusses planning, conducting, and evaluating stakeholder workshops to support such an endeavor. The workshops are part of an ongoing Louisiana Sea Grant-sponsored project. The project involves updating an ADCIRC (Advanced Circulation) mesh representation of topography including levees and other flood control structures by applying previously-collected elevation data and new data acquired during the project. The workshops are designed to educate, solicit input, and ensure incorporation of topographic features into the framework is accomplished in the best interest of stakeholders. During this project's first year, three one-day workshops directed to levee managers and other local officials were convened at agricultural extension facilities in Hammond, Houma, and Lake Charles, Louisiana. The objectives were to provide a forum for participants to learn about the ADCIRC framework, understand the importance of accurate elevations for a robust surge model, discuss and identify additional data sources, and become familiar with the CERA (Coastal Emergency Risks Assessment) visualization tool. The workshop structure consisted of several scientific presentations with question/answer time (ADCIRC simulation inputs and outputs; ADCIRC framework elevation component; description and examples of topographic features such as levees, roadways, railroads, etc. currently utilized in the mesh; ADCIRC model validation demonstration through historic event simulations; CERA demonstration), a breakout activity for participant groups to identify and discuss raised features not currently in the mesh and document them on provided worksheets, and a closing session for debriefing and discussion of future model improvements. Evaluation involved developing and analyzing a
Updates to the dust-agglomerate collision model and implications for planetesimal formation
Blum, Jürgen; Brisset, Julie; Bukhari, Mohtashim; Kothe, Stefan; Landeck, Alexander; Schräpler, Rainer; Weidling, René
2016-10-01
Since the publication of our first dust-agglomerate collision model in 2010, several new laboratory experiments have been performed, which have led to a refinement of the model. Substantial improvement of the model has been achieved in the low-velocity regime (where we investigated the abrasion in bouncing collisions), in the high-velocity regime (where we have studied the fragmentation behavior of colliding dust aggregates), in the erosion regime (in which we extended the experiments to impacts of small projectile agglomerates into large target agglomerates), and in the very-low velocity collision regime (where we studied further sticking collisions). We also have applied the new dust-agglomerate collision model to the solar nebula conditions and can constrain the potential growth of planetesimals by mass transfer to a very small parameter space, which makes this growth path very unlikely. Experimental examples, an outline of the new collision model, and applications to dust agglomerate growth in the solar nebula will be presented.
An updated fracture-flow model for total-system performance assessment of Yucca Mountain
Energy Technology Data Exchange (ETDEWEB)
Gauthier, J.H. [SPECTRA Research Institute, Albuquerque, NM (United States)
1994-12-31
Improvements have been made to the fracture-flow model being used in the total-system performance assessment of a potential high-level radioactive waste repository at Yucca Mountain, Nevada. The "weeps model" now includes (1) weeps of varied sizes, (2) flow-pattern fluctuations caused by climate change, and (3) flow-pattern perturbations caused by repository heat generation. Comparison with the original weeps model indicates that allowing weeps of varied sizes substantially reduces the number of weeps and the number of containers contacted by weeps. However, flow-pattern perturbations caused by either climate change or repository heat generation greatly increase the number of containers contacted by weeps. In preliminary total-system calculations, using a phenomenological container-failure and radionuclide-release model, the weeps model predicts that radionuclide releases from a high-level radioactive waste repository at Yucca Mountain will be below the EPA standard specified in 40 CFR 191, but that the maximum radiation dose to an individual could be significant. Specific data from the site are required to determine the validity of the weep-flow mechanism and to better determine the parameters to which the dose calculation is sensitive.
GENERATION OF MULTI-LOD 3D CITY MODELS IN CITYGML WITH THE PROCEDURAL MODELLING ENGINE RANDOM3DCITY
Directory of Open Access Journals (Sweden)
F. Biljecki
2016-09-01
The production and dissemination of semantic 3D city models is rapidly increasing benefiting a growing number of use cases. However, their availability in multiple LODs and in the CityGML format is still problematic in practice. This hinders applications and experiments where multi-LOD datasets are required as input, for instance, to determine the performance of different LODs in a spatial analysis. An alternative approach to obtain 3D city models is to generate them with procedural modelling, which is – as we discuss in this paper – well suited as a method to source multi-LOD datasets useful for a number of applications. However, procedural modelling has not yet been employed for this purpose. Therefore, we have developed RANDOM3DCITY, an experimental procedural modelling engine for generating synthetic datasets of buildings and other urban features. The engine is designed to produce models in CityGML and does so in multiple LODs. Besides the generation of multiple geometric LODs, we implement the realisation of multiple levels of spatiosemantic coherence, geometric reference variants, and indoor representations. As a result of their permutations, each building can be generated in 392 different CityGML representations, an unprecedented number of modelling variants of the same feature. The datasets produced by RANDOM3DCITY are suited for several applications, as we show in this paper with documented uses. The developed engine is available under an open-source licence at Github at http://github.com/tudelft3d/Random3Dcity.
Generation of Multi-Lod 3d City Models in Citygml with the Procedural Modelling Engine RANDOM3DCITY
Biljecki, F.; Ledoux, H.; Stoter, J.
2016-09-01
The production and dissemination of semantic 3D city models is rapidly increasing benefiting a growing number of use cases. However, their availability in multiple LODs and in the CityGML format is still problematic in practice. This hinders applications and experiments where multi-LOD datasets are required as input, for instance, to determine the performance of different LODs in a spatial analysis. An alternative approach to obtain 3D city models is to generate them with procedural modelling, which is - as we discuss in this paper - well suited as a method to source multi-LOD datasets useful for a number of applications. However, procedural modelling has not yet been employed for this purpose. Therefore, we have developed RANDOM3DCITY, an experimental procedural modelling engine for generating synthetic datasets of buildings and other urban features. The engine is designed to produce models in CityGML and does so in multiple LODs. Besides the generation of multiple geometric LODs, we implement the realisation of multiple levels of spatiosemantic coherence, geometric reference variants, and indoor representations. As a result of their permutations, each building can be generated in 392 different CityGML representations, an unprecedented number of modelling variants of the same feature. The datasets produced by RANDOM3DCITY are suited for several applications, as we show in this paper with documented uses. The developed engine is available under an open-source licence at Github at http://github.com/tudelft3d/Random3Dcity.
Animal models for glucocorticoid-induced postmenopausal osteoporosis: An updated review.
Zhang, Zhida; Ren, Hui; Shen, Gengyang; Qiu, Ting; Liang, De; Yang, Zhidong; Yao, Zhensong; Tang, Jingjing; Jiang, Xiaobing; Wei, Qiushi
2016-12-01
Glucocorticoid-induced postmenopausal osteoporosis (GI-PMOP) is a severe form of osteoporosis with a high risk of major osteoporotic fractures. This severity calls for more extensive and deeper basic study, in which suitable animal models are indispensable. However, no review is available that introduces these models systematically. Based on recent GI-PMOP studies, this brief review introduces the GI-PMOP animal model in terms of its establishment and the evaluation of bone mass, and discusses its molecular mechanism. Rats, rabbits, and sheep, each with respective merits, have been chosen. Both direct and indirect evaluation of bone mass helps in understanding bone metabolism under different interventions. Crucial signaling pathways, miRNAs, osteogenic- or adipogenic-related factors, and estrogen level may be the predominant contributors to the development of glucocorticoid-induced postmenopausal osteoporosis.
Output-only identification of civil structures using nonlinear finite element model updating
Ebrahimian, Hamed; Astroza, Rodrigo; Conte, Joel P.
2015-03-01
This paper presents a novel approach for output-only nonlinear system identification of structures using data recorded during earthquake events. In this approach, state-of-the-art nonlinear structural FE modeling and analysis techniques are combined with a Bayesian inference method to estimate (i) time-invariant parameters governing the nonlinear hysteretic material constitutive models used in the FE model of the structure, and (ii) the time history of the earthquake ground motion. To validate the performance of the proposed framework, the simulated responses of a bridge pier to an earthquake ground motion are polluted with artificial output measurement noise and used to jointly estimate the unknown material parameters and the time history of the earthquake ground motion. This proof-of-concept example illustrates the successful performance of the proposed approach even in the presence of high measurement noise.
Energy Technology Data Exchange (ETDEWEB)
Considine, D.B.; Douglass, A.R.; Jackman, C.H. [Applied Research Corp., Landover, MD (United States); NASA, Goddard Space Flight Center, Greenbelt, MD (United States)
1995-02-01
The Goddard Space Flight Center (GSFC) two-dimensional model of stratospheric photochemistry and dynamics has been used to calculate the O3 response to stratospheric aircraft (high-speed civil transport (HSCT)) emissions. The sensitivity of the model O3 response was examined for systematic variations of five parameters and two reaction rates over a wide range, expanding on calculations by various modeling groups for the NASA High Speed Research Program and the World Meteorological Organization. In all, 448 model runs were required to test the effects of variations in the latitude, altitude, and magnitude of the aircraft emissions perturbation, the background chlorine levels, the background sulfate aerosol surface area densities, and the rates of two key reactions. No deviation from previous conclusions concerning the response of O3 to HSCTs was found in this more exhaustive exploration of parameter space. Maximum O3 depletions occur for high-altitude, low-latitude HSCT perturbations. Small increases in global total O3 can occur for low-altitude, high-latitude injections. Decreasing aerosol surface area densities and background chlorine levels increases the sensitivity of model O3 to the HSCT perturbations. The location of the aircraft emissions is the most important determinant of the model response. Response to the location of the HSCT emissions is not changed qualitatively by changes in background chlorine and aerosol loading. The response is also not very sensitive to changes in the rates of the reactions NO + HO2 yields NO2 + OH and HO2 + O3 yields OH + 2O2 over the limits of their respective uncertainties. Finally, levels of lower stratospheric HOx generally decrease when the HSCT perturbation is included, even though there are large increases in H2O due to the perturbation.
Gragne, A. S.; Sharma, A.; Mehrotra, R.; Alfredsen, K. T.
2012-12-01
The accuracy of reservoir inflow forecasts is instrumental for maximizing the value of water resources and significantly influences the operation of hydropower reservoirs. Improving hourly reservoir inflow forecasts over a 24-hour lead time is considered with the day-ahead (Elspot) market of the Nordic power exchange in perspective. The procedure presented comprises an error model added on top of an unalterable constant-parameter conceptual model, and a sequential data assimilation routine. The structure of the error model was investigated using freely available software for detecting mathematical relationships in a given dataset (EUREQA) and kept at minimum complexity for computational reasons. As new streamflow data become available, the extra information manifested in the discrepancies between measurements and conceptual model outputs is extracted and assimilated into the forecasting system recursively using a sequential Monte Carlo technique. Besides improving forecast skill significantly, the probabilistic inflow forecasts provided by the present approach contain suitable information for reducing uncertainty in decision-making processes related to hydropower system operation. The potential of the current procedure for improving the accuracy of inflow forecasts at lead times up to 24 hours, and its reliability in different seasons of the year, will be illustrated and discussed thoroughly.
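The recursive assimilation step can be sketched as a bootstrap particle filter acting on an additive forecast-error state. The sketch below assumes a simple AR(1) error model for concreteness; the actual error structure in the study was identified separately with EUREQA, and all names and parameter values here are illustrative:

```python
import math
import random

def smc_update(particles, obs, model_pred, phi=0.8, q=0.5, r=0.5, rng=None):
    """One bootstrap sequential-Monte-Carlo step for an assumed AR(1)
    additive forecast-error model:
        err_t = phi * err_{t-1} + w_t,   obs_t = model_pred_t + err_t + v_t,
    with w_t ~ N(0, q^2) and v_t ~ N(0, r^2). The AR(1) form and all
    parameters are illustrative, not the error model of the paper."""
    rng = rng or random.Random(0)
    # Propagate each error particle through the AR(1) dynamics
    prop = [phi * e + rng.gauss(0.0, q) for e in particles]
    # Weight by the Gaussian likelihood of the observed discrepancy
    w = [math.exp(-0.5 * ((obs - model_pred - e) / r) ** 2) for e in prop]
    total = sum(w) or 1.0
    w = [x / total for x in w]
    # Multinomial resampling keeps particles consistent with the data
    particles = rng.choices(prop, weights=w, k=len(prop))
    # Corrected forecast: conceptual-model output plus mean estimated error
    corrected = model_pred + sum(particles) / len(particles)
    return particles, corrected
```

Because the conceptual model's parameters are left untouched, all learning happens in the error state: when observations drift away from the raw model output, the particle cloud tracks the discrepancy and the corrected forecast follows, while the particle spread provides the probabilistic forecast band.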
An Update on the Conceptual-Production Systems Model of Apraxia: Evidence from Stroke
Stamenova, Vessela; Black, Sandra E.; Roy, Eric A.
2012-01-01
Limb apraxia is a neurological disorder characterized by an inability to pantomime and/or imitate gestures. It is more commonly observed after left hemisphere damage (LHD), but has also been reported after right hemisphere damage (RHD). The Conceptual-Production Systems model (Roy, 1996) suggests that three systems are involved in the control of…